\begin{document}
\begin{abstract}
We construct a compact manifold with a closed $\mathrm{G}_2$ structure not admitting any torsion-free $\mathrm{G}_2$ structure, which is non-formal and has first Betti number $b_1=1$. We develop a method of resolution for orbifolds that arise as a quotient $M/\mathbb{Z}_2$ with $M$ a closed $\mathrm{G}_2$ manifold, under the assumption that the singular locus carries a nowhere-vanishing closed $1$-form.
\end{abstract}
\maketitle

\section{Introduction}\label{sec:intro}
A $\mathrm{G}_2$ structure on a $7$-dimensional manifold $M$ is a reduction of the structure group of its frame bundle to the exceptional Lie group $\mathrm{G}_2$. Such a structure determines an orientation, a metric $g$ and a non-degenerate $3$-form $\varphi$; these define a cross product $\times$ on $TM$ by means of the expression
$$ \varphi(X,Y,Z)=g(X \times Y,Z). $$
The group $\mathrm{G}_2$ appears on Berger's list \cite{BE} of possible holonomy groups of simply connected, irreducible and non-symmetric Riemannian manifolds. Non-complete metrics with holonomy $\mathrm{G}_2$ were given by Bryant in \cite{Br87} and complete metrics were obtained by Bryant and Salamon in \cite{BrS89}. The first compact examples were constructed in 1996 by Joyce in \cite{J1} and \cite{J2}. More compact manifolds with holonomy $\mathrm{G}_2$ were constructed later by Kovalev \cite{Kovalev}, Kovalev and Lee \cite{KovalevLee}, Corti, Haskins, Nordstr\"om and Pacini \cite{CHNP}, and recently by Joyce and Karigiannis \cite{JK}.

The torsion of a $\mathrm{G}_2$ structure $(M,\varphi,g)$ is defined as $\nabla \varphi$, the covariant derivative of $\varphi$. Fern\'andez and Gray \cite{FG} classified $\mathrm{G}_2$ structures into $16$ different types according to equations involving the torsion of the structure. In this paper we focus on two of them, namely {\em torsion-free} and {\em closed} $\mathrm{G}_2$ structures. A $\mathrm{G}_2$ structure is called torsion-free if the holonomy of $g$ is contained in $\mathrm{G}_2$, that is, $\nabla \varphi=0$ or equivalently $d\varphi=0$ and $d\star \varphi=0$, where $\star$ denotes the Hodge star. A $\mathrm{G}_2$ structure is said to be closed if it satisfies $d\varphi=0$; these are also called {\em calibrated}. Metrics defined by such types of $\mathrm{G}_2$ structures have interesting properties: torsion-free $\mathrm{G}_2$ manifolds are Ricci-flat, whereas closed $\mathrm{G}_2$ manifolds have nonpositive scalar curvature, and both scalar-flatness and the Einstein condition are equivalent to the structure being torsion-free (see \cite{Br06} and \cite{CI}).

This paper contributes to understanding topological properties of compact manifolds with a closed $\mathrm{G}_2$ structure that cannot be endowed with a torsion-free $\mathrm{G}_2$ structure. The first examples of these were provided by Fern\'andez in \cite{F} and \cite{F-2}; the example in \cite{F} is a nilmanifold and the examples in \cite{F-2} are solvmanifolds. Nilmanifolds and solvmanifolds arise as compact quotients of Lie groups by lattices; these Lie groups are nilpotent in the first case and solvable in the second. In both examples the $\mathrm{G}_2$ structure is induced by a closed left-invariant $\mathrm{G}_2$ form on the Lie group. The solvmanifolds in \cite{F-2} have $b_1=3$.
In \cite{CF} the authors classify nilpotent Lie algebras that admit a closed $\mathrm{G}_2$ structure; this list provides more examples of compact manifolds with $b_1 \geq 2$ endowed with a closed $\mathrm{G}_2$ structure but not admitting torsion-free $\mathrm{G}_2$ structures. In \cite{VM} the author develops a method for constructing $7$-dimensional solvable Lie groups endowed with a closed $\mathrm{G}_2$ structure and, as an application, provides an example with $b_1=1$. Recently, in \cite{FFKM} the authors construct another example with $b_1 = 1$. Their starting point is a nilmanifold $M$ with $b_1=3$ that admits a closed $\mathrm{G}_2$ structure and an involution that preserves it. The quotient $X=M/\mathbb{Z}_2$ is an orbifold with $b_1=1$ whose isotropy locus consists of $16$ disjoint tori. They then resolve the singularities to obtain a smooth manifold.

Given this geography of such manifolds, this paper provides an example of a compact manifold carrying a closed $\mathrm{G}_2$ structure whose topological properties differ from those of the examples mentioned above, as we discuss later. Our construction consists of resolving an orbifold; for that purpose we first develop a resolution method, summarized in the following result:

\begin{theorem} \label{theo:resol-exist}
Let $(M,\varphi,g)$ be a closed $\mathrm{G}_2$ structure on a compact manifold. Suppose that $\j \colon M \to M$ is an involution such that $\j^*\varphi=\varphi$ and consider the orbifold $X=M/\j$. Let $L=\operatorname{Fix}(\j)$ be the singular locus of $X$ and suppose that there is a nowhere-vanishing closed $1$-form $\theta \in \Omega^1(L)$. Then, there exist a compact manifold endowed with a closed $\mathrm{G}_2$ structure $(\widetilde X, \widetilde \varphi, \widetilde g)$ and a map $\rho \colon \widetilde X \to X$ such that:
\begin{enumerate}
\item The map $\rho \colon \widetilde X- \rho^{-1}(L) \to X-L$ is a diffeomorphism.
\item There exists a small neighbourhood $U$ of $L$ such that $\rho^*(\varphi)=\widetilde \varphi$ on $\widetilde X- \rho^{-1}(U)$.
\end{enumerate}
\end{theorem}

The fixed point locus $L$ is an oriented $3$-dimensional manifold (see Lemma \ref{lem:3-dim}); the existence of a nowhere-vanishing closed $\theta \in \Omega^1(L)$ is equivalent to the fact that each connected component of $L$ is a mapping torus of an orientation-preserving diffeomorphism of an oriented surface. In our example, the singular locus is formed by $16$ disjoint nilmanifolds whose universal covering is the Heisenberg group.

The resolution method follows the ideas of Joyce and Karigiannis in \cite{JK}, where they develop a method to resolve $\mathbb{Z}_2$ singularities induced by the action of an involution on manifolds endowed with a torsion-free $\mathrm{G}_2$ structure in the case that the singular locus $L$ has a nowhere-vanishing harmonic $1$-form. The local model of the singularity being $\mathbb{R}^3 \times (\mathbb{C}^2/\{\pm 1 \})$, the resolution is constructed by replacing a tubular neighbourhood of the singular locus with a bundle over $L$ with fibre the Eguchi-Hanson space. They then construct a $1$-parameter family of closed $\mathrm{G}_2$ structures on the resolution; these have small torsion when the value of the parameter is small. Finally, they apply a theorem of Joyce \cite[Th.
11.6.1]{Joyce2} which states that if one can find a closed $\mathrm{G}_2$ structure $\varphi$ on a compact $7$-manifold $M$ whose torsion is sufficiently small in a certain sense, then there exists a torsion-free $\mathrm{G}_2$ structure which is close to $\varphi$ and determines the same de Rham cohomology class. This method provides a torsion-free $\mathrm{G}_2$ structure on the resolution; if its fundamental group is finite then its holonomy is $\mathrm{G}_2$. The main difficulty of their construction lies in the fact that two of the three pieces that they glue, namely an annulus around the singular set of the orbifold and a germ of the resolution, do not come naturally equipped with a torsion-free $\mathrm{G}_2$ structure. However, there is a canonical way to define a $\mathrm{G}_2$ structure on them and to obtain a closed $\mathrm{G}_2$ structure by making a small perturbation. The torsion of this structure is still too large, so they need to make additional corrections. We shall follow the same ideas to perform the resolution; our method is simpler because we avoid these technical difficulties.

In this paper we are interested in the interplay between closed $\mathrm{G}_2$ manifolds with small first Betti number and the condition of being formal. Formal manifolds are those whose rational cohomology algebra is described by their rational model. This is a notion of rational homotopy theory and has been successfully applied in several geometric situations. The Thurston-Weinstein problem is a remarkable example in the context of symplectic geometry; it consists of constructing symplectic manifolds with no K\"ahler structure. Deligne, Griffiths, Morgan and Sullivan proved in \cite{DGMS} that compact K\"ahler manifolds are formal; thus, non-formal symplectic manifolds are solutions of this problem. Formality is less understood in the case of exceptional holonomy; in particular, the problem of deciding whether or not manifolds with holonomy $\mathrm{G}_2$ and $\mathrm{Spin}(7)$ are formal is still open. There are some partial results for holonomy $\mathrm{G}_2$ manifolds; in \cite{CN} the authors proved that compact non-formal manifolds with holonomy $\mathrm{G}_2$ have second Betti number $b_2 \geq 4$. In addition, in \cite{CKT} the authors proved that compact manifolds with holonomy $\mathrm{G}_2$ are \textit{almost formal}; this condition implies that triple Massey products $\langle \xi_1,\xi_2,\xi_3 \rangle$ are trivial except perhaps when $\xi_1$, $\xi_2$ and $\xi_3$ all have degree $2$. Non-trivial Massey products are obstructions to formality, but there are examples of non-formal compact $7$-manifolds that only have trivial triple Massey products (see \cite{CN}). However, the presence of a geometric structure makes the situation different; for instance, in \cite{MT} the authors prove that simply-connected $7$-dimensional Sasakian manifolds are formal if and only if their triple Massey products are trivial.

Formal examples of closed $\mathrm{G}_2$ manifolds that do not admit any torsion-free $\mathrm{G}_2$ structure are the solvmanifolds provided in \cite{F-2} and \cite{VM}, and the compact manifold with $b_1=1$ provided in \cite{FFKM}. Non-formal examples are the nilmanifolds obtained in \cite{CF}; these have $b_1\geq 2$. In this paper we prove:

\begin{theorem}
There exists a compact non-formal closed $\mathrm{G}_2$ manifold with $b_1=1$ that cannot be endowed with a torsion-free $\mathrm{G}_2$ structure.
\end{theorem}

The manifold $\widetilde X$ that we construct is the resolution of a closed $\mathrm{G}_2$ orbifold $X$, obtained as the quotient of a nilmanifold $M$ by the action of the group $\mathbb{Z}_2$. The orbifold has $b_1=1$ and a non-trivial Massey product coming from $M$. The resolution process does not change the first Betti number; in addition, the non-trivial Massey product on $X$ lifts to a non-trivial Massey product on $\widetilde X$.

This paper is organized as follows. In section \ref{sec:pre} we review some necessary preliminaries on orbifolds, $\mathrm{G}_2$ structures and formality. Section \ref{sec:resolu} is devoted to the proof of Theorem \ref{theo:resol-exist}, and in section \ref{sec:topo} we characterise the cohomology ring of the resolution. With these tools at hand, we finally construct in section \ref{sec:const} the non-formal compact closed $\mathrm{G}_2$ manifold with $b_1=1$.

\noindent\textbf{Acknowledgements.} I am grateful to my thesis advisors Giovanni Bazzoni and Vicente Mu\~noz for suggesting this problem to me and for useful conversations. I acknowledge financial support from an FPU Grant (FPU16/03475).

\section{Preliminaries} \label{sec:pre}
\subsection{Orbifolds}
We first introduce some aspects of orbifolds, which can be found in \cite{CFM} and \cite{MR}.

\begin{definition}\label{def:orbifold}
An $n$-dimensional orbifold is a Hausdorff and second countable space $X$ endowed with an atlas $\{(U_{\alpha},V_\alpha, \psi_{\alpha},\Gamma_{\alpha})\}$, where $\{V_\alpha\}$ is an open cover of $X$, $U_\alpha \subset\mathbb{R}^n$, $\Gamma_{\alpha} < \operatorname{Diff}(U_\alpha)$ is a finite group acting by diffeomorphisms, and $\psi_{\alpha}\colon U_{\alpha} \to V_{\alpha} \subset X$ is a $\Gamma_{\alpha}$-invariant map which induces a homeomorphism $U_{\alpha}/\Gamma_\alpha \cong V_{\alpha}$. There is a compatibility condition for the charts on intersections. For each point $x \in V_{\alpha} \cap V_{\beta}$ there is some $V_\delta \subset V_{\alpha} \cap V_{\beta}$ with $x \in V_\delta$ so that there are group monomorphisms $\rho_{\delta \alpha}\colon \Gamma_\delta \hookrightarrow \Gamma_\alpha$, $\rho_{\delta \beta}\colon \Gamma_\delta \hookrightarrow \Gamma_\beta$, and open differentiable embeddings $\imath_{\delta \alpha}\colon U_\delta \to U_\alpha$, $\imath_{\delta \beta}\colon U_\delta \to U_\beta$, which satisfy $\imath_{\delta \alpha}(\gamma(x))=\rho_{\delta \alpha}(\gamma)(\imath_{\delta \alpha}(x))$ and $\imath_{\delta \beta}(\gamma(x)) = \rho_{\delta \beta}(\gamma)(\imath_{\delta \beta}(x))$, for all $\gamma\in \Gamma_\delta$.
\end{definition}

We can refine the atlas of an orbifold $X$ in order to obtain better properties; given a point $x \in X$, there is a chart $(U,V,\psi,\Gamma)$ with $U \subset \mathbb{R}^n$, $U/\Gamma \cong V$, such that the preimage $\psi^{-1}(\{x\})= \{u\}$ and $\gamma(u)=u$ for all $\gamma \in \Gamma$. We call $\Gamma$ the \emph{isotropy group} at $x$, and we denote it by $\Gamma_x$. This group is well defined up to conjugation by a diffeomorphism of a small open set of $\mathbb{R}^n$. The singular locus of $X$ is the set $ S=\{x\in X \mbox{ s.t. } \Gamma_x\neq \{1\} \}, $ and of course $X-S$ is a smooth manifold.

We now describe the de Rham complex of an $n$-dimensional orbifold $X$.
First of all, a $k$-form $\eta$ on $X$ consists of a collection of differential $k$-forms $\{\eta_\alpha \}$ such that:
\begin{enumerate}
\item $\eta_\alpha \in \Omega^k(U_\alpha)$ is $\Gamma_\alpha$-invariant,
\item If $V_\delta \subset V_\alpha $ and $\imath_{\delta \alpha} \colon U_\delta \to U_\alpha$ is the associated embedding, then $\imath_{\delta \alpha}^*(\eta_\alpha)= \eta_\delta$.
\end{enumerate}
The space of orbifold $k$-forms on $X$ is denoted by $\Omega^k(X)$. In addition, it is obvious that the wedge product of orbifold forms and the exterior differential $d$ on $X$ are well defined. Therefore $(\Omega^*(X),d)$ is a differential graded algebra that we call the de Rham complex of $X$. Its cohomology coincides with the cohomology of the space $X$ with real coefficients, $H^*(X)$ (see \cite[Proposition 2.13]{CFM}).

In this paper the orbifold involved is the orbit space of a smooth manifold $M$ under the action of $\mathbb{Z}_2= \{ \mathrm{Id}, \j \}$, where $\j$ is an involution. The singular locus of $X=M/\mathbb{Z}_2$ is $\operatorname{Fix}(\j)$. In addition, let us denote by $\Omega^k(M)^{\mathbb{Z}_2}$ the space of $\mathbb{Z}_2$-invariant $k$-forms. Then
$$ \Omega^k(X)=\Omega^k(M)^{\mathbb{Z}_2}, $$
and both the wedge product and exterior derivative preserve the $\mathbb{Z}_2$-invariance. An averaging argument ensures that $H^k(X)=H^k(M)^{\mathbb{Z}_2}$.

\subsection{$\mathrm{G}_2$ structures}
We now focus on $\mathrm{G}_2$ structures on manifolds and orbifolds. Basic references are \cite{Br06}, \cite{FG}, \cite{HL}, \cite{Joyce2} and \cite{Salamon}. Let us identify $\mathbb{R}^7$ with the imaginary part of the octonions $\mathbb{O}$. The multiplicative structure on $\mathbb{O}$ endows $\mathbb{R}^7$ with a cross product $\times$, which defines a $3$-form $\varphi_0(u,v,w)= \langle u\times v, w \rangle $, where $\langle \cdot, \cdot \rangle$ denotes the scalar product on $\mathbb{R}^7$. In coordinates,
\begin{equation}\label{eqn:std}
\varphi_0= v^{127} + v^{347} + v^{567} + v^{135} - v^{236} - v^{146} - v^{245},
\end{equation}
where $(v^1,\dots,v^7)$ is the standard basis of $(\mathbb{R}^7)^*$ and $v^{ijk}$ stands for $v^i \wedge v^j \wedge v^k$. The stabilizer of $\varphi_0$ under the action of $\mathrm{GL}(7,\mathbb{R})$ on $\Lambda^3(\mathbb{R}^7)^*$ is the group $\mathrm{G}_2$, a simply connected $14$-dimensional Lie group which is contained in $\mathrm{SO}(7)$.

\begin{definition}
Let $V$ be a real vector space of dimension $7$. A $3$-form $\varphi \in \Lambda^3 V^*$ is a $\mathrm{G}_2$ form on $V$ if there is a linear isomorphism $u\colon V\to \mathbb{R}^7$ such that $u^*(\varphi_0)= \varphi$, where $\varphi_0$ is given by equation (\ref{eqn:std}).
\end{definition}

A $\mathrm{G}_2$ structure $\varphi$ determines an orientation because $\mathrm{G}_2 \subset \mathrm{SO}(7)$; the choice of a volume form $\mathrm{vol}$ on $V$ compatible with the orientation determines a unique metric $g_{\mathrm{vol}}$ with associated unit-length volume form $\mathrm{vol}$ by the formula
$$ i(x)\varphi \wedge i(y) \varphi \wedge \varphi = 6\, g_{\mathrm{vol}}(x,y)\,\mathrm{vol}, $$
which ensures that the metric $u^*(g_0)$ is determined by the volume form $u^*(\mathrm{vol}_{\mathbb{R}^7})$. Note that the metric $u^*(g_0)$ does not depend on the isomorphism $u$ with $u^*(\varphi_0)=\varphi$.
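For instance, a direct computation with the standard structure (\ref{eqn:std}) illustrates this formula: if $e_1$ denotes the vector dual to $v^1$, then
$$ i(e_1)\varphi_0 = v^{27} + v^{35} - v^{46}, \qquad i(e_1)\varphi_0 \wedge i(e_1)\varphi_0 \wedge \varphi_0 = 6\, v^{1}\wedge \dots \wedge v^{7}, $$
so that $g_{\mathrm{vol}}(e_1,e_1)=1$ for $\mathrm{vol}=v^{1}\wedge \dots \wedge v^{7}$, in agreement with the standard metric $g_0$.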
We say that $g=u^*(g_0)$ is the metric associated to $\varphi$. Of course, a $\mathrm{G}_2$ form $\varphi$ induces a cross product $\times$ on $V$ by the formula $\varphi(u,v,w)=g(u\times v,w)$. The orbit of $\varphi_0$ under the action of $\mathrm{GL}(7,\mathbb{R})$ is an open subset of $\Lambda^3 (\mathbb{R}^7)^*$, thus the space of $\mathrm{G}_2$ forms on $\mathbb{R}^7$ is an open set.

\begin{definition}
Let $M$ be a $7$-dimensional manifold. A $\mathrm{G}_2$ form on $M$ is a $3$-form $\varphi \in \Omega^3(M)$ such that for every $p \in M$ the $3$-form $\varphi_p$ is a $\mathrm{G}_2$ form. Let $X$ be a $7$-dimensional orbifold with atlas $\{(U_\alpha,V_\alpha,\psi_\alpha,\Gamma_\alpha) \}$. A $\mathrm{G}_2$ form on $X$ is a differential $3$-form $\varphi \in \Omega^3(X)$ such that $\varphi_\alpha$ is a $\mathrm{G}_2$ form on $U_\alpha$.
\end{definition}

Let $\varphi$ be a $\mathrm{G}_2$ form on a manifold $M$ or an orbifold $X$. In both cases, $\varphi$ determines a metric $g$ and a cross product $\times$. In this case we say that $(M,\varphi,g)$ or $(X,\varphi,g)$ is a $\mathrm{G}_2$ structure. In addition, $\mathrm{G}_2$ manifolds are of course oriented. We state a well-known fact about $\mathrm{G}_2$ structures (see for instance \cite[Chapter 10, Section 3]{Joyce2}).

\begin{lemma}\label{universal}
There exists a universal constant $m$ such that if $(M,\varphi,g)$ is a $\mathrm{G}_2$ structure and $\| \phi - \varphi \|_{C^0,g}< m$ then $\phi$ is a $\mathrm{G}_2$ form.
\end{lemma}

\begin{proof}
Let $(\mathbb{R}^7, \varphi_0,g_0)$ be the standard $\mathrm{G}_2$ structure. Since the space of $\mathrm{G}_2$ forms on $\mathbb{R}^7$ is open in $\Lambda^3 (\mathbb{R}^7)^*$, there exists a constant $m>0$ such that if a $3$-form $\phi_0$ satisfies $\| \phi_0 - \varphi_0 \|_{g_0} <m$, then $\phi_0$ is a $\mathrm{G}_2$ form. We now check that $m$ is the claimed universal constant. Let $(M,\varphi,g)$ be a $\mathrm{G}_2$ manifold and let $\phi$ be such that $ \| \phi_p - \varphi_p \|_{g_p} < m$ for every $p\in M$. In order to check that $\phi_p$ is a $\mathrm{G}_2$ form, let $A \colon (T_pM, \varphi_p,g_p) \to (\mathbb{R}^7, \varphi_0,g_0)$ be an isomorphism of $\mathrm{G}_2$ vector spaces; then
$$ \| (A^{-1})^* \phi_p - \varphi_0 \|_{g_0} = \| \phi_p - \varphi_p \|_{g_p} < m, $$
and therefore $(A^{-1})^* \phi_p$ is a $\mathrm{G}_2$ form. Since $A$ is an isomorphism, $\phi_p$ is also a $\mathrm{G}_2$ form.
\end{proof}

In \cite{FG} Fern\'andez and Gray classified $\mathrm{G}_2$ structures $(M,\varphi, g)$ into $16$ types according to $\nabla \varphi$, where $\nabla$ denotes the Levi-Civita connection associated to $g$. The motivation for this classification is the holonomy principle, stating that the holonomy of $g$ is contained in $\mathrm{G}_2$ if and only if $\nabla \varphi=0$. In \cite{FG} they also prove that $\nabla \varphi=0$ if and only if $d\varphi=0$ and $d(\star \varphi)=0$, where $\star$ denotes the Hodge star. In this paper we are interested in closed and torsion-free $\mathrm{G}_2$ structures on manifolds and orbifolds:

\begin{definition}
Let $(M,\varphi,g)$ or $(X,\varphi,g)$ be a $\mathrm{G}_2$ structure on a manifold or an orbifold. We say that the $\mathrm{G}_2$ structure is closed if $d\varphi=0$. If in addition $d(\star \varphi)=0$ we say that the $\mathrm{G}_2$ structure is torsion-free.
\end{definition}

\begin{definition}
Let $(X,\varphi)$ be a closed $\mathrm{G}_2$ structure on a $7$-dimensional orbifold.
A closed $\mathrm{G}_2$ resolution of $(X, \varphi)$ consists of a smooth manifold endowed with a closed $\mathrm{G}_2$ structure $(\widetilde X, \phi)$ and a map $\rho \colon \widetilde X \to X$ such that:
\begin{enumerate}
\item Let $S\subset X$ be the singular locus and $E=\rho^{-1}(S)$. Then, $\rho|_{\widetilde X- E} \colon \widetilde X - E \to X- S$ is a diffeomorphism,
\item Outside a neighbourhood of $E$, $\rho^*(\varphi)= \phi$.
\end{enumerate}
The subset $E$ is called the exceptional locus.
\end{definition}

\subsubsection{$\mathrm{G}_2$ involutions}
\begin{definition}
Let $(M,\varphi)$ be a $\mathrm{G}_2$ manifold. We say that $\j \colon M \to M$ is a $\mathrm{G}_2$ involution if $\j^*(\varphi)=\varphi$, $\j^2=\mathrm{Id}$, and $\j \neq \mathrm{Id}$.
\end{definition}

In this paper we shall focus on orbifolds that are obtained as a quotient of a closed $\mathrm{G}_2$ manifold $(M,\varphi)$ by the action of a $\mathrm{G}_2$ involution $\j$; that is, $X=M/\j$. The next result states that the fixed locus $L$ of $\j$ is a $3$-dimensional submanifold.

\begin{lemma} \label{lem:3-dim}
The submanifold $L$ is $3$-dimensional and oriented by $\varphi|_L$. In addition, $\varphi|_L$ is the oriented unit-length volume form determined by the metric $g|_L$.
\end{lemma}

\begin{proof}
The result is deduced from the fact that if $(\mathbb{R}^7,\varphi_0, \langle \cdot, \cdot \rangle)$ is the standard $\mathrm{G}_2$ structure on $\mathbb{R}^7$ and $\mathfrak{j} \in \mathrm{G}_2$ is an involution, $\mathfrak{j} \neq \mathrm{Id}$, then $\mathfrak{j}$ is diagonalizable with eigenvalues $\pm 1$ and $\dim(V_{1})=3$, $\dim(V_{-1})=4$, where $V_{\pm 1}$ denotes the eigenspace associated to the eigenvalue $\pm 1$. In addition, $\varphi_0(v_1,v_2,v_3)=\pm 1$ if $(v_1,v_2,v_3)$ is an orthonormal basis of $V_1$.

We now prove this statement. First, $\mathfrak{j}$ is diagonalizable with eigenvalues $\pm 1$ because $\mathfrak{j}^2=\mathrm{Id}$, $\mathfrak{j} \neq \mathrm{Id}$ and $\mathfrak{j} \in \mathrm{SO}(7)$. Let us take a unit-length vector $v_1 \in V_{1}$; the vector space $W=\langle v_1 \rangle^\perp$ is preserved by $\mathfrak{j}$ because $\mathfrak{j}\in \mathrm{SO}(7)$, and carries in addition an $\mathrm{SU}(3)$ structure determined by $\omega= i(v_1)\varphi_0$, $\mathrm{Re}(\Omega)=\varphi_0|_W$ (see \cite{SW}). Of course, the $\mathrm{SU}(3)$ structure is preserved by $\mathfrak{j}$. Viewed as a complex map, $\mathfrak{j} \colon W \to W$ has three complex eigenvalues $\lambda_1, \lambda_2, \lambda_3$ that satisfy $\lambda_j^2=1$ and $\lambda_1 \lambda_2 \lambda_3=1$ because $\mathfrak{j}^2=\mathrm{Id}$ and $\mathfrak{j}$ preserves the $\mathrm{SU}(3)$ structure. Since $\mathfrak{j} \neq \mathrm{Id}$, we obtain that $\lambda_1=1$ and $\lambda_2=\lambda_3=-1$ up to a permutation of the indices; this proves that $\dim(V_{1})=3$ and $\dim(V_{-1})=4$. Now observe that $\mathfrak{j} (u \times v)= \mathfrak{j}(u) \times \mathfrak{j}(v)$, where $\times$ is the cross product on $\mathbb{R}^7$ that determines $\varphi_0$. Thus, let $(v_1,v_2,v_3)$ be an orthonormal basis of $V_1$; then $v_1 \times v_2 \in V_1$, so necessarily $v_1 \times v_2= \pm v_3$ and $\varphi_0(v_1,v_2,v_3)=\pm 1$.
\end{proof}

\begin{remark}
If $d\varphi=0$, Lemma \ref{lem:3-dim} states that $L$ is a calibrated submanifold of $M$ in the sense of \cite{HL}.
\end{remark}

\subsubsection{$\mathrm{SU}(2)$ structures}
Let us identify $\mathbb{R}^4$ with $\mathbb{H}$ and identify $\mathrm{SU}(2)$ with $\mathrm{Sp}(1)$ as usual. The multiplication by $i$, $j$ and $k$ on the quaternions yields $\mathrm{Sp}(1)$-equivariant endomorphisms $I$, $J$ and $K$ that determine invariant $2$-forms by the contraction of these endomorphisms with the scalar product on $\mathbb{R}^4$. In coordinates, these are:
\begin{equation}\label{eqn:su2}
\begin{aligned}
\omega_1^0 = w^{12} + w^{34}, \qquad \omega_2^0= w^{13} - w^{24}, \qquad \omega_3^0 = w^{14} + w^{23},
\end{aligned}
\end{equation}
where $(w^1,w^2,w^3,w^4)$ denotes the standard basis of $(\mathbb{R}^4)^*$ and $w^{ij}$ stands for $w^i \wedge w^j$.

\begin{definition}
Let $W$ be a real vector space of dimension $4$. An $\mathrm{SU}(2)$ structure on $W$ is determined by $2$-forms $(\omega_1,\omega_2,\omega_3)$ such that there is a linear isomorphism $u\colon W \to \mathbb{R}^4$ with $u^*(\omega_j^0)= \omega_j$, where the forms $\omega_j^0$ are given by equation (\ref{eqn:su2}).
\end{definition}

An $\mathrm{SU}(2)$ structure on a vector space $W$ determines a $\mathrm{G}_2$ structure on $W\oplus \mathbb{R}^3 $. To check this we can suppose that $(W,\omega_1,\omega_2,\omega_3)=(\mathbb{R}^4,\omega_1^0,\omega_2^0, \omega_3^0)$. Denote by $(v^5,v^6,v^7)$ the standard basis of $(\mathbb{R}^3)^*$; then
\begin{equation}\label{eqn:su2-g2}
\varphi_0= v^{567} + \omega_1^0 \wedge v^7 + \omega_2^0 \wedge v^5 - \omega_3^0 \wedge v^6.
\end{equation}
In addition, if we fix on $\mathbb{R}^3$ the orientation determined by $v^{567}$, then $W$ is oriented by $\frac{1}{2}(\omega_1^0)^2$.

\begin{definition}
Let $N$ be a $4$-dimensional manifold. An $\mathrm{SU}(2)$ structure on $N$ consists of $2$-forms $(\omega_1,\omega_2,\omega_3) \in \Omega^2(N)$ that determine an $\mathrm{SU}(2)$ structure on $T_pN$ for every $p\in N $. In addition, if $d\omega_1=d\omega_2=d\omega_3=0$ we say that $(\omega_1,\omega_2,\omega_3)$ is a hyperK\"ahler structure. Let $Y$ be a $4$-dimensional orbifold with atlas $\{(U_\alpha,V_\alpha,\psi_\alpha,\Gamma_\alpha) \}$. An $\mathrm{SU}(2)$ structure on $Y$ consists of $2$-forms $(\omega_1,\omega_2,\omega_3) \in \Omega^2(Y)$ such that $(\omega_1^\alpha,\omega_2^\alpha,\omega_3^\alpha)$ is an $\mathrm{SU}(2)$ structure on $U_\alpha$ for every $\alpha$. In addition, if $d\omega_1=d\omega_2=d\omega_3=0$ we say that $(\omega_1,\omega_2,\omega_3)$ is a hyperK\"ahler structure.
\end{definition}

In view of Lemma \ref{lem:3-dim}, the local model of $X$ around $L$ is $( \mathbb{C}^2/\mathbb{Z}_2) \times \mathbb{R}^3$, with $\mathbb{Z}_2= \{ \pm\mathrm{Id} \}$. The standard $\mathrm{G}_2$ form induces the orbifold hyperK\"ahler $\mathrm{SU}(2)$ structure $(\omega_1^0, \omega_2^0, \omega_3^0)$ on $\mathbb{C}^2/\mathbb{Z}_2$. We now detail the hyperK\"ahler resolution of $Y=\mathbb{C}^2/\mathbb{Z}_2$; this will be useful in order to construct the resolution of $X$ in section \ref{sec:resolu}. The holomorphic resolution of $Y$ is $N=\widetilde{\mathbb{C}}^2/\mathbb{Z}_2$, where $\widetilde{\mathbb{C}}^2$ is the blow-up of $\mathbb{C}^2$ at $0$. That is,
$$ \widetilde{\mathbb{C}}^2= \{ (z_1,z_2,\ell)\in \mathbb{C}^2 \times \mathbb{CP}^1 \mbox{ s.t.
} (z_1,z_2)\in \ell\}, $$
and the action of $-\mathrm{Id}$ lifts to $(z_1,z_2,\ell) \longmapsto (-z_1,-z_2,\ell)$. We call $E=\{0 \}\times \mathbb{CP}^1 \subset N$ the exceptional divisor. Note that there is a well-defined projection $\sigma_0 \colon N \to \mathbb{CP}^1$. Let us consider the radial function $r_0 \colon Y \to [0,\infty)$ induced from $\mathbb{C}^2$; one can check in coordinates that $r_0^2$ is not smooth on $N$, but $r_0^4$ is. Consider the blow-up map $\chi_0 \colon N \to Y$. Then, one can check that $\chi_0^*(\omega_2^0)$ and $\chi_0^*(\omega_3^0)$ are non-degenerate smooth forms on $N$; this holds because $\omega_2^0 + i \omega_3^0 = dz_1 \wedge dz_2$ and the pullback of a holomorphic form under a holomorphic resolution is holomorphic. A computation in coordinates shows that $\chi_0^*(\omega_1^0)$ has a pole on $\mathbb{CP}^1$. Let $a>0$ and define $\mathsf{f}_a(x)= \mathbf{g}_a(x) + 2a\log(x)$, where $ \mathbf{g}_a(x)=(x^4 + a^2)^{1/2}- a\log((x^4 + a^2)^{1/2}+a)$. Consider on $N-E$:
$$ \widehat \omega_1^a= -\frac{1}{4} dId\mathsf{f}_a(r_0). $$
One can check that $(\widehat \omega_1^a,\chi_0^*(\omega_2^0), \chi_0^*(\omega_3^0))$ is a hyperK\"ahler structure on $N-E$; it can be extended as a hyperK\"ahler structure on $N$ because
$$ -\frac{1}{4} dId (\log(r_0^2))= \sigma_0^*(\omega_{\mathbb{CP}^1}), $$
where $\omega_{\mathbb{CP}^1}$ stands for the Fubini-Study form of $\mathbb{CP}^1$.

\subsection{Formality}
In this section we review some definitions and results about formal manifolds and formal orbifolds; basic references are \cite{DGMS}, \cite{FOT}, and \cite{OT}. We work with commutative differential graded algebras (in the sequel CDGAs); these consist of pairs $(A,d)$ where $A$ is a commutative graded algebra $A=\oplus_{i\geq 0}{A^i}$ over $\mathbb{R}$, and $d \colon A^* \to A^{*+1}$ is a differential, which is a graded derivation satisfying $d^2=0$. If $a \in A$ is a homogeneous element, we denote its degree by $|a|$, and $\bar{a}=(-1)^{|a|}a$. The cohomology algebra of a CDGA $(A,d)$ is denoted by $H^*(A,d)$; it is also a CDGA, with zero differential. If $a\in A$ is a closed element we denote its cohomology class by $[a]$. The CDGA $(A,d)$ is said to be connected if $H^0(A,d)=\mathbb{R}$. In our context, the main examples of CDGAs are the de Rham complex of a manifold or an orbifold. In section \ref{sec:const} we also make use of the Chevalley-Eilenberg CDGA of a Lie group $G$, which consists of the algebra $\Lambda^* \mathfrak{g}^*$ with the differential determined on $1$-forms by $ d\alpha (x,y) = -\alpha[x,y] $ and extended to $\Lambda^* \mathfrak{g}^*$ as a graded derivation.

\begin{definition}
A CDGA $(A,d)$ is said to be minimal if:
\begin{enumerate}
\item $A$ is free as an algebra, that is, $A$ is the free algebra $\Lambda V$ over a graded vector space $V =\oplus_i V^i$.
\item There is a collection of generators $\{ a_i \}_{i }$ indexed by a well ordered set, such that $|a_i| \leq |a_j|$ if $i<j$ and each $da_j$ is expressed in terms of the previous $a_i$ with $i<j$.
\end{enumerate}
\end{definition}

Morphisms between CDGAs are required to preserve the degree and to commute with the differential; a morphism of CDGAs $\kappa \colon (B,d) \to (A,d)$ is said to be a quasi-isomorphism if it induces an isomorphism on cohomology $\kappa \colon H^*(B,d) \to H^*(A,d)$.
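As a simple illustration (with the sign of the differential fixed by the convention $d\alpha(x,y)=-\alpha[x,y]$ above), consider the $3$-dimensional Heisenberg Lie algebra, spanned by $x_1,x_2,x_3$ with only non-zero bracket $[x_1,x_2]=x_3$; this is the Lie algebra of the Heisenberg group, which appears later as the universal cover of the components of the singular locus of our example. Its Chevalley-Eilenberg CDGA is
$$ \bigl(\Lambda(e^1,e^2,e^3),\, d\bigr), \qquad de^1=de^2=0, \qquad de^3=-e^1\wedge e^2, $$
where $(e^1,e^2,e^3)$ is the dual basis; it is connected and minimal in the sense of the definition above.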
\begin{definition}
A CDGA $(B,d)$ is a model of the CDGA $(A,d)$ if there exists a quasi-isomorphism $ \kappa \colon (B, d) \to (A,d). $ If $(B,d)$ is minimal we say that $(B,d)$ is a minimal model of $(A,d)$.
\end{definition}

Minimal models of connected CDGAs exist and are unique up to isomorphism of CDGAs. So we define the minimal model of a connected manifold or a connected orbifold as the minimal model of its associated de Rham complex.

\begin{definition}
A minimal algebra $(\Lambda V , d)$ is formal if there exists a quasi-isomorphism
$$ (\Lambda V, d ) \to (H^*( \Lambda V, d), 0). $$
A manifold or an orbifold is formal if its minimal model is formal.
\end{definition}

We now recall the definition of triple Massey products; these are objects that detect non-formality of manifolds. Let $(A,d)$ be a CDGA and let $\xi_1,\xi_2,\xi_3$ be cohomology classes such that $\xi_1\xi_2=0$ and $\xi_2\xi_3=0$. Under these assumptions we can define the triple Massey product $\langle \xi_1,\xi_2,\xi_3 \rangle$ of these cohomology classes. In order to provide its definition we first introduce the concept of a defining system for $ \langle \xi_1,\xi_2, \xi_3 \rangle $.

\begin{definition}
A defining system for $ \langle \xi_1,\xi_2, \xi_3 \rangle $ is an element $(a_1,a_2,a_3,a_{12},a_{23})$ such that:
\begin{enumerate}
\item $[a_i]=\xi_i$ for $1 \leq i \leq 3$,
\item $da_{12}= \bar{a}_1a_2$, and $da_{23}= \bar{a}_2a_3$.
\end{enumerate}
\end{definition}

One can check that $ \bar{a}_1a_{23} + \bar{a}_{12} a_3 $ is a closed element of degree $|a_1| + |a_2| + |a_3| -1$. The triple Massey product $\langle \xi_1, \xi_2,\xi_3 \rangle$ is the set formed by the cohomology classes that defining systems determine, that is:
$$ \{[ \bar{a}_1a_{23} + \bar{a}_{12} a_3] \mbox{ s.t. } (a_1,a_2,a_3,a_{12},a_{23}) \mbox{ runs over all defining systems}\}. $$
If $0 \in \langle \xi_1, \xi_2, \xi_3 \rangle$ we say that the triple Massey product is trivial.

\begin{theorem}
Let $(\Lambda V,d)$ be a formal minimal algebra. Let $\xi_1,\xi_2,\xi_3$ be cohomology classes such that the triple Massey product $\langle \xi_1,\xi_2,\xi_3\rangle$ is defined. Then $\langle \xi_1,\xi_2,\xi_3 \rangle$ is trivial.
\end{theorem}

As a consequence, we obtain:

\begin{corollary}\label{cor:massey-form}
Let $(\Lambda V,d)$ be the minimal model of $(A,d)$. Let $\xi_1,\xi_2,\xi_3 \in H^*(A,d)$ be such that the triple Massey product $\langle \xi_1,\xi_2,\xi_3\rangle$ is defined. If $ \langle \xi_1, \xi_2, \xi_3 \rangle$ is not trivial then $(\Lambda V,d)$ is not formal.
\end{corollary}

\begin{proof}
Suppose that $(\Lambda V, d)$ is formal and let $ \kappa \colon (\Lambda V,d)\to (A, d) $ be a quasi-isomorphism. Let us take cohomology classes $\xi_1',\xi_2',\xi_3' \in H^*(\Lambda V,d)$ with $\kappa(\xi_j')=\xi_j$; then the Massey product $\langle \xi_1',\xi_2', \xi_3' \rangle$ is well-defined and there is a defining system $(a_1,a_2,a_3,a_{12},a_{23})$ such that
$$ \bar{a}_1 a_{23} + \bar{a}_{12} a_3= d\alpha.
$$
But of course $0=\kappa[\bar{a}_1 a_{23} + \bar{a}_{12} a_3] \in \langle \xi_1, \xi_2, \xi_3 \rangle$, yielding a contradiction.
\end{proof}

We finally outline some aspects of finite group actions on minimal models. Let $M$ be a compact manifold and let $\kappa \colon (\Lambda V, d) \to (\Omega(M),d)$ be the minimal model. Let $\Gamma$ be a finite subgroup of $\operatorname{Diff}(M)$ acting on the left; the pullback of forms defines a right action of $\Gamma$ on $(\Omega(M),d)$. Lifting theorems for CDGAs ensure the existence of a morphism $\overline{\gamma} \colon \Lambda V \to \Lambda V$ that lifts up to homotopy the pullback by each $\gamma \in \Gamma$; that is, $\kappa \circ \overline{\gamma} \sim \gamma^* \circ \kappa$; in particular, $[\kappa (\overline{\gamma}(a))]=[\gamma^* \kappa (a)]$ if $da=0$. This implies that $\overline{\mathrm{Id}} \sim \mathrm{Id}$ and that $\overline{\gamma \gamma'}\sim \overline{\gamma} \, \overline{\gamma}'$; therefore these liftings provide a homotopy action on $\Lambda V$. These liftings can be modified making use of group cohomology techniques (see \cite[Theorem 2]{O}) in order to endow $\Lambda V$ with a right action of $\Gamma$.

\begin{theorem} \label{thm:action-dga}
Let $M$ be a compact connected manifold and let $\Gamma$ be a finite subgroup of $\operatorname{Diff}(M)$ acting on the left. There is a right action of $\Gamma$ on the minimal model $\kappa \colon (\Lambda V, d) \to (\Omega(M),d)$ by morphisms of CDGAs such that $[\kappa(a \gamma)]=[\gamma^* \kappa(a)]$ for every closed element $a \in \Lambda V$ and every $\gamma \in \Gamma$.
\end{theorem}

If there is a right action of a finite group $\Gamma$ on a CDGA $(A,d)$ one can consider the CDGA of $\Gamma$-invariant elements $(A^\Gamma,d)$. An averaging argument shows that $H^*(A,d)^\Gamma=H^*(A^\Gamma,d)$. In addition, if $\Gamma$ also acts on $(B,d)$ on the right by morphisms and $\i \colon (A,d) \to (B,d)$ is a morphism such that $[\i (a \gamma)]= [(\i a) \gamma]$ for every closed $a \in A$ and $\gamma \in \Gamma$, one can define:
$$ \underline{\i} \colon (A^\Gamma ,d)\to (B^\Gamma,d), \qquad \underline{\i} a= |\Gamma |^{-1}\sum_{\gamma \in \Gamma}{\i(a) \gamma}, $$
where $|\Gamma|$ denotes the cardinality of $\Gamma$. This map satisfies $[\underline{\i} (a)]=[\i(a)]$ for closed elements $a \in A^\Gamma$. In particular, if $\i$ is a quasi-isomorphism so is $\underline{\i}$.

\begin{lemma}\label{lem:formal-quotient}
Let $\Gamma$ be a finite group acting on a compact connected manifold $M$ by diffeomorphisms. If $M$ is formal then $M/\Gamma$ is also formal.
\end{lemma}

\begin{proof}
First of all, the fact that $(\Omega (M/\Gamma),d)= (\Omega(M)^\Gamma,d)$ and our previous argument ensure that $H^*(M/\Gamma)=H^*(M)^\Gamma$. Let $\kappa \colon (\Lambda V, d) \to (\Omega(M),d)$ be the minimal model of $M$ as constructed in Theorem \ref{thm:action-dga}. The CDGA $((\Lambda V)^\Gamma, d)$ is a model for $(\Omega(M/\Gamma),d)$ because of the quasi-isomorphism $\underline{\kappa} \colon ((\Lambda V)^\Gamma, d) \to (\Omega(M)^\Gamma,d)$ defined as above. Consider $(\Lambda W,d)$ the minimal model of $(\Omega(M/\Gamma),d)$ and let $\psi \colon (\Lambda W, d) \to ((\Lambda V)^\Gamma, d) $ be a quasi-isomorphism.
Since $M$ is formal, one can consider a quasi-isomorphism $\i \colon (\Lambda V, d) \to (H^*(\Lambda V,d),0)$ and define $\underline{\i} \colon ((\Lambda V)^\Gamma, d) \to (H^*(\Lambda V,d)^\Gamma,0)=(H^*(\Lambda W, d),0)$, which is also a quasi-isomorphism. Then we can construct a quasi-isomorphism:
$$ \underline{\i} \circ \psi \colon (\Lambda W, d) \to (H^*(\Lambda W, d),0). $$
Therefore, $M/\Gamma$ is formal.
\end{proof}

\section{Resolution process} \label{sec:resolu}
Let $(M,\varphi,g)$ be a closed $\mathrm{G}_2$ structure on a compact manifold $M$, let $\j \colon M \to M$ be a $\mathrm{G}_2$ involution and let $X=M/\j$. The singular locus of the closed $\mathrm{G}_2$ orbifold $(X, \varphi, g)$ is the set $L=\operatorname{Fix}(\j)$, a $3$-dimensional oriented manifold according to Lemma \ref{lem:3-dim}. This section is devoted to constructing a resolution $\rho \colon \widetilde{X} \to X$ under the extra assumption that $L$ has a nowhere-vanishing closed $1$-form $\theta \in \Omega^1(L)$.

This hypothesis yields a topological characterisation of $L$ that we now outline. Let us denote by $L_1,\dots, L_r$ the connected components of $L$; according to Tischler's Theorem \cite{T}, each $L_i$ is a fibre bundle over $S^1$ with fibre a connected surface $\Sigma_i$; that is, $L_i$ is the mapping torus of a diffeomorphism $\psi_i \in \operatorname{Diff}(\Sigma_i)$:
$$ L_i = \Sigma_i \times [0,1]/ (x,0) \sim (\psi_i(x),1). $$
Let us denote by $\mathrm{q}_i \colon \Sigma_i \times [0,1] \to L_i$ the quotient map and by $\mathrm{b}_i \colon L_i \to S^1$ the bundle map; then we can suppose that $\theta|_{L_i}= \mathrm{b}_i^*(\theta_0)$, where $\theta_0$ denotes the angular form on $S^1$. In addition, taking into account that $L_i$ is oriented and that $H^3(L_i)\cong \{ [\alpha]\in H^2(\Sigma_i) \mbox{ s.t. } \psi_i^*[\alpha]=[\alpha] \}$, we obtain that $\Sigma_i$ is oriented and $\psi_i^*= \mathrm{Id}$ on $H^2(\Sigma_i)$.

The resolution process consists of replacing a neighbourhood of $L$ with a closed $\mathrm{G}_2$ manifold. The local model of the singularity is $\mathbb{R}^3 \times Y$, where $Y=\mathbb{C}^2/\mathbb{Z}_2$ as we discussed in section \ref{sec:pre}. The closed $\mathrm{G}_2$ manifold that we introduce is the blow-up of $\nu/\j$ at the zero section, where $\nu$ denotes the normal bundle of $L$ in $M$. Its local model is $\mathbb{R}^3 \times N$, where $N=\widetilde{\mathbb{C}}^2/\mathbb{Z}_2$. This requires the choice of a complex structure on $\nu/\j$, which is determined by a choice of a unit-length vector field $V$ on $L$ by means of the expression $I(X)=V \times X$, where $\times$ is the cross product associated to $\varphi$. Such a vector field exists because $L$ is parallelizable, but we shall choose $V= \| \theta \|^{-1} \theta^\sharp$ in order to guarantee that the $\mathrm{G}_2$ form that we later define on the resolution is closed.

Before constructing a $\mathrm{G}_2$ form on the resolution we study the $O(1)$ term of $\exp^*(\varphi)$ by splitting $T\nu$ into a horizontal and a vertical bundle with the aid of a connection. This allows us to obtain a formula for the $O(1)$ term that resembles the standard $\mathrm{G}_2$ structure on $\mathbb{R}^3 \times Y$. Its pullback under the blow-up map has a pole at the zero section; a non-singular $\mathrm{G}_2$ structure is defined on the resolution following the ideas we introduced in subsection \ref{subsec:local} for resolving the local model.
This form is not closed in general, so we need to consider a closed approximation of it. In addition, the resolution process requires the introduction of a $1$-parameter family of closed forms; small values of the parameter guarantee that these are non-degenerate and close to $\exp^*(\varphi)$ on an annulus around $L$ after a diffeomorphism. As Remark \ref{rem:size-exc} states, the size of the exceptional divisor decreases as the parameter tends to $0$.

This section is organized as follows: in subsection \ref{subsec:split} we introduce some notations concerning the normal bundle $\nu$ of $L$, and in subsection \ref{subsec:taylor} we study the second order Taylor approximation $\phi_2$ of the pullback of $\varphi$; this is an auxiliary construction. In subsection \ref{subsec:local} we obtain local formulas for the $O(1)$-terms and introduce the parameter $t$; these tools allow us to perform the resolution in subsection \ref{subsec:resol}.

\subsection{Splitting of the normal bundle} \label{subsec:split}
We now introduce some notations that we need for the resolution process. Let $\pi \colon \nu \to L$ be the normal bundle of $L$. We consider $R>0$ such that the neighbourhood of the zero section $Z$, $\nu_R=\{ v_p \in \nu_p \mbox{ s.t. } \|v_p\|<R \}$, is diffeomorphic to a neighbourhood $U$ of $L$ in $M$ via the exponential map. On $\nu_R$ we consider $\phi=(\exp)^*\varphi$, which is a closed $\mathrm{G}_2$ form on $\nu_R$. In addition, the induced involution on $\nu$ is $d\j(v_p)=-v_p$; we shall also denote it by $\j$. It shall be useful to denote the dilations by $F_t \colon \nu \to \nu$, $F_t(v_p)=tv_p$. We also define the vector field $\mathcal{R}$ over $\nu$, $\mathcal{R}(v_p)=\frac{d}{dt}\Big|_{t=0} e^tv_p$.

A connection $\nabla$ on $\nu$ induces a splitting $T\nu= V\oplus H$ where $V=\ker(d\pi)\cong \pi^*\nu$ and $d\pi_{v_p} \colon H_{v_p} \to T_pL$ is an isomorphism; since $TM|_L= \nu \oplus TL$, the connection induces an isomorphism $\mathcal{T} \colon T\nu \to \pi^*(TM|_L)$. The choice of $\nabla$ is made in subsection \ref{subsec:resol}. Note that any tensor $\mathrm{T}$ on $TM|_{L}$ defines a tensor on $\pi^*(TM|_L)$ because $\pi^*(TM|_L)_{v_p}= T_pM|_{L}$. Using this we define on $\nu$:
\begin{enumerate}
\item A metric $g_1= \mathcal{T}^*(g|_{L})$; that is, $g_1$ makes $(H_{v_p},g_1)$ and $(T_pL,g)$ isometric, $H_{v_p}$ is perpendicular to $V_{v_p}$, and $V_{v_p}$ is isometric to $\nu_p$.
\item A $\mathrm{G}_2$ structure $\phi_1= \mathcal{T}^*(\varphi|_L)$ with associated metric $g_1$.
\end{enumerate}
Of course, $\mathcal{T}$ is an isometry. These tensors are constant in the fibres in the following sense: under the identification $\widehat{\mathcal{T}}_{v_p}=\mathcal{T}^{-1}_{0_p} \circ \mathcal{T}_{v_p} \colon T_{v_p}\nu \to T_{0_p}\nu$ it holds that $\widehat{\mathcal{T}}_{v_p}^*(g_1)=g_1$ and $\widehat{\mathcal{T}}_{v_p}^*(\phi_1)=\phi_1$. Note also that these values coincide with $\exp^*g|_{Z}$ and $\phi|_Z$ respectively, since $(d\exp)|_{Z}=\mathrm{Id}$; thus these tensors are independent of $\nabla$ only on $Z$.

We shall also denote $W^k_{i,j}= \Lambda^i V^* \otimes \Lambda^j H^*$, where we understand $V^*= \mathrm{Ann}(H)$ and $H^*= \mathrm{Ann}(V)$. There are $g_1$-orthogonal splittings $\Lambda^k T^*\nu = \oplus_{i+j=k} W^{k}_{i,j}$ and, given $\alpha \in \Lambda^k T^*\nu$, we denote by $[\alpha]_{i,j}$ the projection of $\alpha$ to $W^k_{i,j}$.
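For instance, for $k=3$ this splitting reads
$$ \Lambda^3 T^*\nu = W^3_{3,0}\oplus W^3_{2,1}\oplus W^3_{1,2}\oplus W^3_{0,3}; $$
in the local coordinates of Remark \ref{remark-connection} below, a form such as $\eta_1\wedge \eta_2 \wedge dx_3$ lies in $W^3_{2,1}$, while $dx_1\wedge dx_2 \wedge dx_3$ lies in $W^3_{0,3}$.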
Observe also that one can restrict each $\beta \in \Lambda^k V^*$ to the fibre $\nu_p$, and the restriction $\mathrm{r}_k\colon \Lambda^k V^* \to \Lambda^k T^*\nu$, $\mathrm{r}_k(\beta)_{v_p}= \beta_{v_p}|_{\nu_p}$, is an isomorphism because $T_{v_p}\nu_p=V_{v_p}$. We now state some technical observations concerning vertical forms; the proofs are computations in local coordinates that we include for completeness.

\begin{remark}\label{remark-connection}
Note that $H^* = \pi^* (T^*L)$ does not depend on the connection but $V^*$ does. More precisely, in local coordinates $(x_1,x_2,x_3,y_1,y_2,y_3,y_4)\in U \times \mathbb{R}^4$ the horizontal distribution at $(x,y)$ is generated by:
$$ \partial_{ x_i}- \sum_{j=1}^{4} A_i^j(x,y) \partial_{y_j}, $$
where $A_i^j(x,y)=\sum_{k=1}^4 {A_{i,k}^j(x)y_k}$ for some differentiable functions $A_{i,k}^j$. Then $V^*$ is generated by:
$$ \eta_j= dy_j + \sum_{i=1}^3{A_i^j(x,y) dx_i}. $$
Note also that since $A_i^j(x,ty)=tA_i^j(x,y)$ we get that $F_t^*(\eta_i)=t\eta_i$.
\end{remark}

\begin{lemma} \label{lem:homog-tensors}
The following identities hold:
\begin{enumerate}
\item $F_t^*(\phi_1)= [\phi_1]_{0,3} + t^2[\phi_1]_{2,1}$,
\item $F_t^*(g_1)= g_1|_{H\otimes H} + t^2 g_1|_{V\otimes V}$.
\end{enumerate}
\end{lemma}

\begin{proof}
We prove the first equality, the second being similar. Note that $\phi_1|_Z$ is a $\mathrm{G}_2$ structure whose induced metric makes $V$ perpendicular to $H$ and $H|_{Z}=TZ$; thus, taking into account formula (\ref{eqn:su2-g2}), we can write in local coordinates:
$$ \phi_1|_Z = f(p)dx_1\wedge dx_2 \wedge dx_3 + \sum_{i=1}^3{\sum_{j<k}{f_{ijk}(p)dx_i\wedge dy_j \wedge dy_k }}. $$
Thus, $\phi_1 = [\phi_1]_{0,3} + [\phi_1]_{2,1}$, where $([\phi_1]_{0,3})_{v_p}= f(p)dx_1\wedge dx_2 \wedge dx_3$ and $([\phi_1]_{2,1})_{v_p}= \sum_{i=1}^3{\sum_{j<k}{f_{ijk}(p)dx_i\wedge (\eta_j)_{v_p}\wedge (\eta_k)_{v_p} }}$. Therefore, $F_t^*([\phi_1]_{0,3})= [\phi_1]_{0,3}$ and, according to the previous remark, $F_t^*[\phi_1]_{2,1}=t^2[\phi_1]_{2,1}$.
\end{proof}

\begin{lemma} \label{zero-section}
Let $\mu \in V^*$ be a form such that $\mu=0$ on $T\nu|_Z$. Then, $[d\mu]_{1,1}=0$ and $[d\mu]_{0,2}=0$ on $T\nu|_Z$.
\end{lemma}

\begin{proof}
In local coordinates, $\mu= \sum_{i=1}^{4}{f_i(x,y)\eta_i}$ with $f_i(x,0)=0$, as $\mu=0$ on $T\nu|_Z$. Then,
\begin{align*}
d\mu=& \sum_{i=1}^4 \sum_{j=1}^3 {\frac{\partial f_i}{\partial x_j }(x,y) dx_j \wedge \eta_i} \\
&+ \sum_{i=1}^4 \sum_{j=1}^4 {\frac{\partial f_i}{\partial y_j }(x,y) dy_j \wedge \eta_i} + \sum_{i=1}^4 f_i(x,y)d\eta_i .
\end{align*}
Since $f_i(x,0)=0$ and $\eta_i|_{T\nu|_Z}= dy_i$, the following equalities hold on $T\nu|_Z$:
\begin{align*}
[d\mu]_{2,0} (x,0)=& \sum_{i=1}^4 \sum_{j=1}^4 {\frac{\partial f_i}{\partial y_j }(x,0) dy_j \wedge dy_i}, \\
[d\mu]_{1,1} (x,0)=& \sum_{i=1}^4 \sum_{j=1}^3{\frac{\partial f_i}{\partial x_j} (x,0) dx_j \wedge \eta_i}= 0, \\
[d\mu]_{0,2} (x,0)=&0.
\end{align*}
\end{proof}

\begin{lemma} \label{lem:bounded norm}
Consider coordinates $(x,y)=(x_1,x_2,x_3,y_1,y_2,y_3,y_4)\in B \times \mathbb{R}^4$ of $\nu$, with $B\subset \mathbb{R}^3$ a closed ball. Let $\eta_j$ be the projection of $dy_j$ to $V^*$ as in Remark \ref{remark-connection}. Then, $ \|(\eta_i)_{(x,0)}\|_{g_1}= \|(\eta_i)_{(x,y)}\|_{g_1} $ and $ \|(dx_i)_{(x,0)}\|_{g_1}= \|(dx_i)_{(x,y)}\|_{g_1} $. There exist $C_1>0$, $C_2>0$ such that $ \|[d\eta_i]_{0,2}\|_{g_1} \leq C_1 r $ and $ \|[d\eta_i]_{1,1}\|_{g_1} \leq C_2$ on $\nu$.
\end{lemma}

\begin{proof}
The first two equalities are clear taking into account that $\mathcal{T}^*(\eta_j)=\eta_j$, $\mathcal{T}^*(dx_j)=dx_j$ and that $\mathcal{T}$ is a $g_1$-isometry. For the third and fourth inequalities we first compute $d\eta_j$:
$$ d\eta_j= \sum_{k=1}^4{ \sum_{i,l=1}^3{y_k \frac{\partial A_{i,k}^j(x)}{\partial x_l}dx_l \wedge dx_i}} + \sum_{k=1}^4 \sum_{i=1}^3 A_{i,k}^j(x) dy_k \wedge dx_i. $$
This implies that:
\begin{align*}
[d\eta_j]_{0,2}=& \sum_{k=1}^4{ \sum_{i,l=1}^3{y_k \frac{\partial A_{i,k}^j(x)}{\partial x_l}dx_l \wedge dx_i}} - \sum_{k,n=1}^4 \sum_{i,m=1}^3 A_{i,k}^j(x) A_{m,n}^k(x) y_n dx_m \wedge dx_i, \\
[d\eta_j]_{1,1}= & \sum_{k=1}^4 \sum_{i=1}^3 A_{i,k}^j(x) \eta_k \wedge dx_i.
\end{align*}
The functions $|A_{i,k}^j|$ are bounded on $B$, and the $g_1$-norms of the terms $\eta_m \wedge dx_j$ and $dx_j \wedge dx_k$ are constant on the fibres, as explained before. Taking into account that $L$ is compact, the choice of the constants $C_1$ and $C_2$ becomes clear.
\end{proof}

\subsection{Taylor series} \label{subsec:taylor}
We now introduce the Taylor series of $\phi$ and interpolate it with its second order approximation. This is an auxiliary tool for our resolution process. Consider the dilation over the fibres $F_t \colon\nu \to \nu$, and define the Taylor series of $F_t^*\phi$ and $F_t^*g$ near $t=0$ (note that $F_0^*(\phi)$ and $F_0^*(g)$ are defined on $\nu$). That is,
$$ F_t^*(\phi) \sim \sum_{k=0}^{\infty}{t^{2k}\phi^{2k}}, \quad F_t^*g \sim \sum_{k=0}^\infty {t^{2k}g^{2k}}. $$
Note that we only wrote even terms because both $\phi$ and $g$ are $\j$-invariant and $\j=F_{-1}$. In addition, since $F_{ts}=F_t\circ F_s$ we have that $F_s^*(\phi^{2k})=s^{2k}\phi^{2k}$ and $F_s^*(g^{2k})=s^{2k}g^{2k}$. For $i+j=3$ and $p+q=2$ we define $\phi^{2k}_{i,j}= [\phi^{2k}]_{i,j}$ and $g^{2k}_{p,q}= g^{2k}|_{V^p \otimes H^q}$; here $V^p$ denotes the tensor product of $V$ with itself $p$ times. We have the following properties:
\begin{enumerate}
\item $\|\phi^{2k}_{i,j} \|_{g_1} = O(r^{2k-i})$, where $r$ is measured with respect to the metric on $\nu$. To check it, let $\|v_p\|_{g_1}=1$; taking into account Lemma \ref{lem:homog-tensors} and the fact that $F_t \colon (\nu,g_1|_{H\otimes H} +t^2g_1|_{V\otimes V}) \to (\nu,g_1)$ is an isometry, we get:
\begin{align*}
\| (\phi^{2k}_{i,j})_{r v_p} \|_{g_1}=& \| r^{2k} F_{r^{-1}}^* (\phi^{2k}_{i,j})_{rv_p} \|_{g_1} = r^{2k}\|(\phi^{2k}_{i,j})_{v_p}\|_{g_1|_{H\otimes H} +r^{2}g_1|_{V\otimes V}} \\
=& \, r^{2k-i}\|(\phi^{2k}_{i,j})_{v_p}\|_{g_1}.
\end{align*}
\item The previous statement ensures that $\phi^{2k}_{i,j}=0$ if $i>2k$.
\item If $k\geq 1$, $\phi^{2k}$ is exact. Since $\phi^{2k}$ is homogeneous of order $2k$, we have that $\mathcal{L}_{\mathcal{R}}(\phi^{2k})= 2k \phi^{2k}$, where $\mathcal{R}(v_p)=\frac{d}{dt}\Big|_{t=0} (e^t v_p)$ is defined as above. In addition, since $\phi$ is closed we have that $d\phi^{2k}=0$ for every $k$. Thus, $2k\phi^{2k}=d(i(\mathcal{R})\phi^{2k})$.
\end{enumerate}

Taking these properties into account we construct a $\mathrm{G}_2$ form $\phi_{3,\varepsilon}$ that interpolates $\phi$ with the approximation $\phi_2= \phi^0 + \phi^2$. The parameter $\varepsilon>0$ indicates that the interpolation occurs on $r\leq \varepsilon$ and is done in such a way that $\phi_{3,\varepsilon}|_{r \leq \frac{\varepsilon}{2}}= \phi_2$. Of course, this is possible because the difference between $\phi$ and $\phi_2$ is small near the zero section.

\begin{proposition} \label{prop:interpolation-order-2}
The form $\phi_2=\phi^0 + \phi^2$ is closed and $\phi = \phi_2 + O(r)$. There exists $\varepsilon_0>0$ such that for each $\varepsilon < \varepsilon_0$ there exists a $\j$-invariant $\mathrm{G}_2$ form $\phi_{3,\varepsilon}$ such that $\phi_{3,\varepsilon}=\phi_2$ if $r\leq \frac{\varepsilon}{2}$ and $\phi_{3,\varepsilon}=\phi$ if $r\geq \varepsilon$.
\end{proposition}

\begin{proof}
The first part is a consequence of the previous properties; the zero order terms are $\phi^0=\phi^0_{0,3}$ and $\phi^{2}_{2,1}$, thus $\phi=\phi_2 + O(r)$. In addition, $\phi_2$ is closed because each $\phi^{2k}$ is. Since $\phi|_Z=\phi_2|_Z$, the Poincar\'e Lemma for submanifolds ensures that $\phi=\phi_2 + d\xi$ for some $\j$-invariant $2$-form $\xi$; more precisely, $ \xi_{v_x}= \int_{0}^1{i(\mathcal{R}_{\tau v_x})(\phi - \phi_2)d\tau}. $ In addition, $\|\xi\|_{g_1}=O(r^2)$ because $\|\phi - \phi_2\|_{g_1}=O(r)$. Let $\varpi$ be a smooth function such that $\varpi=1$ if $x\leq \frac{1}{2}$ and $\varpi=0$ if $x\geq 1$, and define $\varpi_\varepsilon(x)=\varpi(\frac{x}{\varepsilon})$. Then $|\varpi_\varepsilon'| \leq \frac{C}{\varepsilon}$, so that
$$ \phi_{3,\varepsilon}= \phi - d(\varpi_\varepsilon(r) \xi) $$
is a $\mathrm{G}_2$ form on $r\leq \varepsilon$ if $\varepsilon$ is small enough, because it is $O(\varepsilon)$-near $\phi$. The form $\phi_{3,\varepsilon}$ interpolates $\phi_2$ with $\phi$ over the stated domains and it is $\j$-invariant because both $\phi$ and $\varpi_\varepsilon(r) \xi$ are.
\end{proof}

\subsection{Local formulas} \label{subsec:local}
The purpose of this subsection is to make some additional preparations; we first provide a local formula for $\phi_1$ that will be useful in order to construct the $\mathrm{G}_2$ form of the resolution. Later we change $\phi_2$ by $O(r)$ terms so that we control its local formula, and we introduce the parameter $t$; these preparations are essential to construct a closed $\mathrm{G}_2$ form on the resolution.

\subsubsection{Formula for $\phi_1$}
We first write $\phi_1$ and $g_1$ in terms of the components of the Taylor series of $g$ and $\phi$.
This is an easy consequence of the homogeneous behaviour of the tensors involved:

\begin{lemma} \label{lem:homog}
The following equalities hold:
\begin{enumerate}
\item $\phi_1 = \phi^0 + \phi^2_{2,1} $,
\item $ g_1 = g_{0,2}^0 + g_{2,0}^2 $.
\end{enumerate}
\end{lemma}

\begin{proof}
We prove the first equality, the second being similar. Using the fact that $\phi^0=\phi^0_{0,3}$ and $\phi^2_{2,1}$ are homogeneous, one can check that these are constant on the fibres. We do it for $\phi^2_{2,1}$; write in local coordinates $(x,y)$:
$$ \phi^2_{2,1}=\sum_{i=1}^3{\sum_{j<k}f_{ijk}(x,y)dx_i\wedge (\eta_j)_{(x,y)}\wedge (\eta_k)_{(x,y)} }. $$
Taking into account that $F_t^*\phi^2_{2,1}=t^2\phi^2_{2,1}$ and $F_t^*\eta_i=t\eta_i$, we get $f_{ijk}(x,ty)=f_{ijk}(x,y)$. Therefore, $f_{ijk}(x,y)=f_{ijk}(x,0)$. Since $\phi_1|_{TM|_Z}= \phi|_{TM|_Z}= (\phi^0 + \phi^2_{2,1})|_{TM|_Z}$, we obtain that $[\phi_1]_{0,3}|_{T\nu|_Z}=\phi^0|_{T\nu|_Z}$ and $[\phi_1]_{2,1}|_{T\nu|_Z}=\phi^2_{2,1}|_{T\nu|_Z}$. But these forms are constant on the fibres of the bundle $T\nu \to \nu$, so the previous equalities hold on $T\nu$.
\end{proof}

We now obtain a local formula for $\phi_1$. For that purpose, let us define $e_1=\|\theta\|^{-1} \theta$ and consider an oriented orthonormal coframe $(e_1,e_2,e_3)$ of $T^*L$ on a neighbourhood $U\subset L$. Define also the $\mathrm{SU}(2)$ structure $(\omega_1^L,\omega_2^L,\omega_3^L)$ on $\nu$ by means of the equality:
$$ \varphi|_{L}= e_1\wedge e_2 \wedge e_3 + e_1\wedge \omega_1^L + e_2 \wedge \omega_2^L - e_3 \wedge \omega_3^L. $$
More precisely, the complex structure is determined by $\omega_1^L=i(e_1^\sharp)\varphi|_{\nu}$, that is, $I(X)=e_1^\sharp \times X$, where $\times$ denotes the cross product associated to $\varphi|_L$. The complex volume form is $\omega_2^L + i\omega_3^L$; note that a counterclockwise rotation of angle $\sigma$ in the plane $(e_2,e_3)$ changes $\omega_2^L + i\omega_3^L$ by the complex phase $e^{i\sigma}$. Using $\mathcal{T}$ we obtain:
$$ \phi_1= \pi^*e_1\wedge \pi^*e_2 \wedge \pi^*e_3 + \pi^*e_1\wedge \omega_1 + \pi^* e_2 \wedge \omega_2 - \pi^* e_3 \wedge \omega_3, $$
where the forms $\omega_j \in \Lambda^2 V^*$ are $\j$-invariant and satisfy $\omega_j|_{Z}= \exp^*(\omega_j^L)$. For fixed $p \in L$, $(\omega_1 |_{\nu_p}, \omega_2 |_{\nu_p}, \omega_3 |_{\nu_p})$ determines an $\mathrm{SU}(2)$ structure on the $4$-manifold $\nu_p$ because the restriction $\mathrm{r}_2$ is an isomorphism. The associated metric on $T\nu_p$ is $g_1|_{\nu_p}$ and the complex structure is the one induced by $I$ on $\nu$ under the canonical isomorphism. Therefore, $\omega_1 |_{\nu_p} = -\frac{1}{4}d_{\nu_p}( I[dr^2]_{\nu_p})$. In addition, since the complex volume form is $dz_1 \wedge dz_2 = \frac{1}{2}d(z_1dz_2 -z_2dz_1)$, there is a $\j$-invariant $1$-form $\mu \in V^*$ such that $d_{\nu_p}(\mu|_{\nu_p})= (\omega_2 + i \omega_3)|_{\nu_p}$ and $\mu|_{T\nu|_{Z}}=0$. We decompose it as $\mu = \mu_2 + i \mu_3$. Since the restriction to the fibre $\mathrm{r}_2$ is a monomorphism, we obtain
$$ \omega_1= -\frac{1}{4}[d[Idr^2]_{1,0}]_{2,0}, \qquad \omega_2 + i\omega_3 = [d\mu]_{2,0}; $$
here we also denote by $I$ the complex structure on $V^*$ determined by the complex structure $I(X)=e_1^\sharp \times X$ on $V=\pi^*(\nu)$; this depends on the splitting.
Observe that the complex structure $I$ on $\nu$ verifies $\j \circ I = I \circ \j$ and thus, the complex structure on $V^*$ verifies $\j I\alpha= I \j\alpha$. In particular, $I\alpha$ is $\j$-invariant if $\alpha$ is. \muathfrak{su}bsubsection{Changing $\phi_2$ by $O(r)$ terms.} First of all define the $1$-parameter family $$ \phi_2^t = \phi^0 + t^2 \phi^2= F_t^*(\phi_2). $$ These forms are well-defined on $\nuu$ because $\phi^0$ and $\phi^2$ are homogeneous. We now change this $1$-parameter family by $O(r)$ terms so that we have an explicit local formula for it. Consider the exact $\j$-invariant form: $$ \beta = -\epsilonrac{1}{4} \phii^*\tauheta \wedge d( (\|\tauheta\|^{-1}\circ \phii) I[dr^2]_{1,0}) + d(\phii^*e_2\wedge \muu_2 - \phii^*e_3\wedge \muu_3 ) \in W_{2,1}\omegaplus W_{1,2}\omegaplus W_{0,3}, $$ and note that $\phi_1= \phii^*(e_1\wedge e_2 \wedge e_3) + [\beta]_{2,1}$. In addition, $\beta$ does not depend on the orthonormal oriented basis $(e_2,e_3)$ of $\lambdaangle \tauheta^*\rhoangle^\phierp$. We now introduce a $1$-parameter family of closed $\j$-invariant forms: $$ \widehat{\phi}_2^t = \phii^*(e_1\wedge e_2 \wedge e_3) + t^2[\beta]. $$ We claim that, for fixed $s>0$, there exists $t_s>0$ such that $\widehat \phi_2^t$ is a $\Gammatwo$ form on $\nu_{2s}$ if $t<t_s$. To check this we compare $\widehat \phi_{2}^t$ with $F_t^*\phi_1$ and use Lemma \rhoef{universal} to conclude. Denote $g_t=F_t^*(g_1)$ and observe that Lemma \rhoef{lem:homog} implies that $F_t^*\phi_1= \phi^0 + t^2 \phi^2_{2,1}$ and $g_t= t^2 g_{2,0} + g_{0,2}$, then: $$ \| F_t^*\phi_1 - \widehat \phi_2^t \|_{g_t} = t \| [\beta]_{1,2}\|_{g_1} + t^2\| [\beta]_{0,3} \|_{g_1}, $$ so one can bound $\| [\beta]_{1,2}\|_{g_1}$, $\| [\beta]_{0,3} \|_{g_1}$ on $\nu_{2s}$ and choose $t_s>0$ such that for each $t<t_s$, $t \| [\beta]_{1,2}\|_{g_1} + t^2\| [\beta]_{0,3} \|_{g_1} <m$ where $m$ is the universal constant obtained in Lemma \rhoef{universal}. We now construct a $\Gammatwo$ form $\widehat \phi_{3,s}^t$ that interpolates $\widehat \phi_2^t$ with $\phi_2^t$. The parameter $s>0$ indicates that the interpolation occurs on the disk $r\lambdaeq s$ and is done in such a way that $\widehat \phi_{3,s}^t|_{r \lambdaeq \epsilonrac{s}{2}}= \widehat \phi_2^t$. In subsection \rhoef{subsec:resol} we employ large values of the parameter. \betaegin{proposition} There is $\tauimesi \in W_{0,2}$ such that $\|\tauimesi\|_{g_1}=O(r^2)$ and $\phi^2 = \beta +d\tauimesi$. For fixed $s>0$ there exists $t_s'>0$ such that for each $t<t_s'$, there is a closed $\j$-invariant $\Gammatwo$ form $\widehat \phi_{3,s}^t$ on $\nu_{2s}$ that coincides with $\widehat \phi_2^t$ on $r\lambdaeq \epsilonrac{s}{2}$ and $\phi_2^t$ on $r\gammaeq s$. \varepsilonnd{proposition} \betaegin{proof} Write the second term of the Taylor series of $\phi$ as $\phi^2= \phi^2_{2,1}+ \phi^2_{1,2} + \phi^2_{0,3}$ and note that $\phi^2_{2,1}=[\beta]_{2,1}$. Being $\beta$ and $\phi^2$ closed, we obtain $ d (\phi^2_{1,2} + \phi^2_{0,3}) = d([\beta]_{1,2} + [\beta]_{0,3})$. Poincar\'e Lemma ensures that $\phi^2_{1,2} + \phi^2_{0,3} =[\beta]_{1,2} + [\beta]_{0,3} + d\tauimesi $ with $$ \tauimesi_{v_x} = \int_{0}^1{i(\muathcal{R}_{\tau v_x})(\phi^2_{1,2} + \phi^2_{0,3} -[\beta]_{1,2} - [\beta]_{0,3})d\tau} = \int_{0}^1 {i(\muathcal{R}_{\tau v_x})(\phi^2_{1,2} -[\beta]_{1,2})d\tau}. $$ Hence $\tauimesi \in W_{0,2}$. One can check that $\tauimesi$ is $\j$-invariant by taking into account that $\phi^2_{1,2} -[\beta]_{1,2}$ is $\j$-invariant and that $\muathcal{R}_{t\j(v_x)}=\j(\muathcal{R}_{tv_x})$.
In addition $\|\tauimesi\|_{g_1}=O(r^2)$ because $(\phi^2_{1,2} + \phi^2_{0,3})|_{Z}=0$ (these terms are $O(r)$ and $O(r^2)$ in the $g_1$-norm) and $([\beta]_{1,2} + [\beta]_{0,3})|_Z=0$ according to Lemma \rhoef{zero-section}. Let $\varpi$ be a smooth function such that $\varpi=1$ if $x\lambdaeq \epsilonrac{1}{2}$ and $\varpi=0$ if $x\gammaeq 1$, and let $\varpi_s(x)=\varpi(\epsilonrac{x}{s})$. The form $\widehat \phi_{3,s}^t = \phi^0 + t^2 \beta + t^2 d(\varpi_s(r) \tauimesi)$ is closed and $\j$-invariant; it coincides with $\widehat \phi_2^t$ on $r\lambdaeq \epsilonrac{s}{2}$ and with $\phi_2^t$ on $r\gammaeq s$. It is clear that $\widehat{\phi}_{3,s}^t$ is a $\Gammatwo$ form on the region $r\gammaeq s$ for $t<t_s$; we now check that it is also a $\Gammatwo$ form on $r\lambdaeq s$ for some choice of $t$. We are going to compare $\widehat{\phi}_{3,s}^t$ with $F_t^*\phi_1$ and use Lemma \rhoef{universal} to conclude the result. Since $\varpi_s \tauimesi \in W_{0,2}$ we have that $d(\varpi_s \tauimesi) \in W_{1,2}\omegaplus W_{0,3}$. As a consequence $\|t^2 d(\varpi_s(r) \tauimesi) \|_{g_t} \lambdaeq t \| d(\varpi_s(r) \tauimesi) \|_{g_1}= t(O(r^2s^{-1}) + O(r))$ so that: \betaegin{align*} \| \widehat \phi_{3,s}^t - F_t^*\phi_1\|_{g_t} =& \, t (\| [\beta]_{1,2}\|_{g_1} + t\| [\beta]_{0,3} \|_{g_1} + O(r^2s^{-1}) + O(r)) \\ \lambdaeq & \, t (\| [\beta]_{1,2}\|_{g_1} + \| [\beta]_{0,3} \|_{g_1} + O(r)). \varepsilonnd{align*} For the last inequality we used that $t<1$ and that $r\lambdaeq 2s$. Then $\widehat \phi_{3,s}^t$ is a $\Gammatwo$ form if the parameter $t<t_s$ verifies $$ t \, \muathrm{max}_{r\lambdaeq 2s}(\| [\beta]_{1,2}\|_{g_1} + \| [\beta]_{0,3} \|_{g_1} + O(r)) < m $$ where $m$ is the constant provided by Lemma \rhoef{universal}. \varepsilonnd{proof} \muathfrak{su}bsection{Resolution of $\nuu/\j$}\lambdaanglebel{subsec:resol} The resolution process is inspired by the hyperK\"ahler resolution $N= \widetilde{\muathbb{C}^2}/\muathbb{Z}_2$ of $Y= \muathbb{C}^2/\muathbb{Z}_2$ described in subsection \rhoef{subsec:resol}. Consider the blow-up map $\chi_0 \colon N \tauo Y$ and the hyperK\"ahler structure $(\widehat \omega_1^a, \chi_0^*(\omega_2^0), \chi_0^*(\omega_3^0))$ on $N$. Recall that $\widehat \omega_1^a$ denotes the extension of $ -\epsilonrac{1}{4} dId\muathsf{f}_a(r_0)$, where $r_0$ is the radial function on $\muathbb{C}^2$ and: $$ \muathsf{f}_a(x)= \muathbf{g}_a(x) + 2a\lambdaog(x), \phisiquad \muathbf{g}_a(x)=(x^4 + a^2)^{1/2}- a \lambdaog((x^4 + a^2)^{1/2} + a). $$ We now focus on the resolution of $\nu/\j$. For that purpose, consider the complex structure $I$ on $\nu$ determined by the $2$-form $i(e_1^\sigmaharp)\varphi|_\nu$ and define $P$ as the fiberwise blow-up of $\nuu/\j$ at $0$. That is $P= P_{\muathrm{U}(2)}(\nuu)\tauimes_{\muathrm{U}(2)} N$, where $P_{\muathrm{U}(2)}(\nuu)$ denotes the principal $\muathrm{U}(2)$-bundle associated to $\nu$. This construction yields projections $\chi \colon P \tauo \nuu/\j$ and $\muathrm{pr}= \betaar{\phii} \circ \chi$; where $\betaar{\phii}$ denotes the map that $\phii \colon \nuu \tauo L$ induces. We also define $Q=\chi^{-1}(0)$; this is a $\muathbb{CP}^1$ bundle over $L$ that can be expressed as $Q=P_{\muathrm{U}(2)}(\nu)\tauimes_{\muathrm{U}(2)} \muathbb{CP}^1$. Note that there is a projection $\sigma_0 \colon N \tauo \muathbb{CP}^1$ that induces a complex line bundle $\sigma \colon P \tauo Q$.
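For later use we record an elementary computation: differentiating the definitions of $\muathbf{g}_a$ and $\muathsf{f}_a$ above gives $$ \muathsf{f}_a'(x)= \epsilonrac{2(x^4+a^2)^{1/2}}{x}, \phisiquad \muathsf{f}_a'(x)-2x = \epsilonrac{2a^2}{x((x^4+a^2)^{1/2}+x^2)}, $$ so $\muathsf{f}_a$ differs from the flat potential $x^2$ by a term whose derivative is $O(x^{-3})$ for $x\gammaeq 1$ and $a$ bounded; this decay is what enters the estimates below.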
A $\j$-invariant tensor on $\nuu$ descends to $\nuu/\j$ and its pullback by $\chi$ is smooth over $P-Q$, but it may not be smooth on $P$. If the tensor preserves the complex structure $I$ on $P$ then the pullback is smooth on $P$ because $P= P_{U(2)}(\nuu)\tauimes_{\muathrm{U}(2)} N$. We choose $\nuabla$ such that $\nuabla I=0$, so that we can lift $\nuabla$ to $P$ and define $TP=V'\omegaplus H'$; this is compatible with the splitting $T\nuu=V\omegaplus H$. In addition, $\muu_2,\mu_3,\omegamega_1,\omegamega_2,\omegamega_3$ induce forms on $\nuu/ \j$ and $\chi^*(\muu_k)$, $\chi^*(\omegamega_k)$ are smooth for $k=\{2,3\}$. We shall also consider $\Lambda^k T^*P= \omegaplus_{i+j=k} \Lambda^i V' \omegatimes \Lambda^j H'$ and $[\alpha]=\muathfrak{su}m_{i,j}[\alpha]_{i,j}$. In order to define a $\Gammatwo$ structure on $P$ we need to find a resolution of $\omega_1$. For that purpose denote by $r$ the pullback of the radial function on $\nuu$ and define: $$ \widehat \omega_1 = -\epsilonrac{1}{4}d (|\tauheta|^{-1} I[d\muathsf{f}_{|\tauheta|}(r)]_{1,0}), $$ where $|\tauheta|= \| \tauheta \| \circ \muathrm{pr}$. Observe that $\muathbf{g}_{|\tauheta|}(r)$ is smooth on $P$ because $r^4$ is. In addition, $-\epsilonrac{1}{2} dI[d(\lambdaog(r^2))]_{1,0}= \sigma^*(F_Q)$ on $P-Q$, where $F_Q$ is the curvature of the line bundle $\sigma \colon P \tauo Q$. Fiberwise it coincides with the Fubini-Study form on $\muathbb{CP}^1$. Note also that $\muathrm{pr}^* \tauheta \wedge [\widehat{\omega}_1]_{2,0} = -\epsilonrac{1}{4} e_1 \wedge [d(I[d\muathsf{f}_{|\tauheta|}]_{1,0})]_{2,0}$. We now define a $\Gammatwo$ form $\Phi_1^t$ which is near $\chi^*(F_t^*\phi_1)$ on $r>1$, this is: $$ \Phi_{1}^t = \muathrm{pr}^*(e_1\wedge e_2 \wedge e_3) + t^2[\widehat \beta]_{2,1}, $$ where $$ \widehat \beta = \muathrm{pr}^* \tauheta \wedge \widehat\omega_1 + d(\muathrm{pr}^*e_2 \wedge \chi^*(\mu_2) - \muathrm{pr}^*e_3 \wedge \chi^*(\mu_3)). $$ Observe that $\beta$ does not depend on the orthonormal oriented basis $(e_2,e_3)$ of $\lambdaangle \tauheta^* \rhoangle^\phierp$. In addition, the metric induced by $\Phi_1^1$ on $TP$ has the form $h_1= h_{2,0} + h_{0,2}$ where $h_{2,0}$ and $h_{0,2}$ are metrics on $V'$ and $H'$ respectively. In addition, the metric that $\Phi_1^t$ induces is $h_t= t^2 h_{2,0} + h_{0,2}$. We define a family of closed forms: $$ \Phi_2^{t}=\muathrm{pr}^*( e_1\wedge e_2 \wedge e_3) + t^2 \widehat \beta. $$ Note that $\Phi_2^t$ is a $\Gammatwo$ structure on $\nuu_{2s}$ for some $t<t_s''$. This is ensured by Lemma \rhoef{universal} because: $$ \| \Phi_2^t - \Phi_1^t \|_{h_t} = t \|[\widehat \beta]_{1,2}\|_{h_1} + t^2\|[\widehat \beta]_{0,3}\|_{h_1}, $$ and one can bound $\|[\widehat \beta]_{1,2}\|_{h_1}$ and $\|[\widehat \beta]_{0,3}\|_{h_1}$ on $\chi^{-1}(\nu_{2s})$. The parameter $t$ is devoted to compensate errors introduced by $\|[\widehat \beta]_{1,2}\|_{h_1}$ and $\|[\widehat \beta]_{0,3}\|_{h_1}$ that mainly come from the terms $[F_Q]_{1,1}$ and $[F_Q]_{0,2}$, which are zero if and only if the curvature is vertical. Lemma \rhoef{lem:Q-trivial} states that the bundle $Q$ is trivial, $Q=L\tauimes \muathbb{CP}^1$. It might not happen that $P$ is the pullback of $N$ via the projection map $L \tauimes \muathbb{CP}^1 \tauo \muathbb{CP}^1$; in the case that it is, then $F_Q \in \Lambda^2 V'$. 
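In the previous estimate, and repeatedly in the proof below, we use the following scaling rule, which follows directly from $h_t= t^2 h_{2,0} + h_{0,2}$ (and, on $\nuu$, from $g_t= t^2 g_{2,0} + g_{0,2}$): if $[\alpha]=[\alpha]_{i,j}$, that is, if $\alpha$ has $i$ legs in $V'$ and $j$ legs in $H'$, then $$ \|\alpha\|_{h_t}= t^{-i} \|\alpha\|_{h_1}, \phisiquad \mubox{ so that } \phisiquad \|t^2 \alpha\|_{h_t}= t^{2-i} \|\alpha\|_{h_1}. $$ In particular, $t^2$ times a form of type $(2,1)$, $(1,2)$ or $(0,3)$ has $h_t$-norm equal to $1$, $t$ or $t^2$ times its $h_1$-norm, which explains the powers of $t$ appearing in these estimates.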
\betaegin{proposition} \lambdaanglebel{prop:inter-resol} There exists $s_0>1$ such that for each $s>s_0$ one can find $t_s'''$ such that for each $t<t_s'''$ there is a closed $\Gammatwo$ structure $\Phi_{3,s}^t$ such that $\Phi_{3,s}^t= \Phi_2^t$ on $r \lambdaeq \epsilonrac{s}{8}$ and $\Phi_{3,s}^{t}= \chi^*(\widehat \phi_{3,s}^t)$ on $r \gammaeq \epsilonrac{s}{4}$. \varepsilonnd{proposition} \betaegin{proof} On the annulus $\epsilonrac{s}{8} < r < \epsilonrac{s}{4}$ we have that: $$ \Phi_2^t - \chi^*(\widehat \phi_{3,s}^t)= \epsilonrac{1}{4} t^2 d(\muathrm{pr}^*e_1 \wedge (I[d(\muathsf{f}_{|\tauheta|}(r) - r^2 )]_{1,0})). $$ We now let $\varpi$ be a smooth function such that $\varpi=1$ if $x\lambdaeq \epsilonrac{1}{8}$ and $\varpi=0$ if $x\gammaeq \epsilonrac{1}{4}$ and $\varpi_s(x)=\varpi(\epsilonrac{x}{s})$; then $|\varpi_s'| \lambdaeq \epsilonrac{C}{s}$. Define $\betaar{\muathsf{f}}_{a}(x)=\muathsf{f}_a(x)-x^2 = \epsilonrac{a^2}{(x^4 + a^2)^{1/2} + x^2} - a \lambdaog((x^4 + a^2)^{1/2} + a) + 2 a\lambdaog(x)$ and $$ \tauimesi_s = \varpi_s \muathrm{pr}^*e_1 \wedge (Id[\betaar{\muathsf{f}}_{|\tauheta|}(r)]_{1,0}). $$ The form $d\tauimesi_s$ lies in $W_{2,1}\omegaplus W_{1,2}\omegaplus W_{0,3}$. In order to analyze the $h_1$-norm of each component first observe that $|\betaar{\muathsf{f}}_a'| =O(x^{-1})$ and $|\betaar{\muathsf{f}}_a''| =O(x^{-2})$ on $x>1$. In addition, consider a complex unitary parametrisation $(x,y)=(x_1,x_2,x_3,y_1,y_2,y_3,y_4)\in B \tauimes \muathbb{R}R^4$; that is, in these coordinates $I(x,y)=(x_1,x_2,x_3,-y_2,y_1,-y_4,y_3)$ and the vectors of the frame that the parametrisation determines have length one. Then the connection forms verify $I\varepsilonta_1=-\varepsilonta_2$, $I\varepsilonta_3=-\varepsilonta_4$; to check this one has to observe that the matrices $(A_{i,k}^j)_{k,j}$ defined in Remark \rhoef{remark-connection} are complex linear because $\nuabla I =0$, and then a straightforward computation of the pullback yields the claim. Taking these observations and Lemma \rhoef{lem:bounded norm} into account we obtain that on $r>1$: \betaegin{align*} \|[d\tauimesi_s]_{2,1}\|_{h_1} =& \| \varpi_s \muathrm{pr}^*e_1 \wedge [dId[\betaar{\muathsf{f}}_{|\tauheta|}(r)]_{1,0}]_{2,0} + [d\varpi_s]_{1,0}\wedge \muathrm{pr}^*e_1 \wedge Id[\betaar{\muathsf{f}}_{|\tauheta|}(r)]_{1,0}\|_{h_1} \\ =& O(r^{-2}) + O(r^{-1}s^{-1}) , \\ \|[d\tauimesi_s]_{1,2}\|_{h_1}=& \|\varpi_s \muathrm{pr}^*e_1 \wedge [dId[\betaar{\muathsf{f}}_{|\tauheta|}(r)]_{1,0}]_{1,1} + \varpi_s \muathrm{pr}^*(de_1) \wedge Id[\betaar{\muathsf{f}}_{|\tauheta|}(r)]_{1,0} \\ &+ [d\varpi_s]_{0,1}\wedge \muathrm{pr}^*e_1 \wedge Id[\betaar{\muathsf{f}}_{|\tauheta|}(r)]_{1,0}\|_{h_1} = O(r^{-1})+ O(s^{-1}), \\ \|[d\tauimesi_s]_{0,3}\|_{h_1}=& \|\varpi_s \muathrm{pr}^*e_1 \wedge [dId[\betaar{\muathsf{f}}_{|\tauheta|}(r)]_{1,0}]_{0,2}\|_{h_1}= O(1). \varepsilonnd{align*} We now prove that $\| [d\varpi_s]_{1,0}\wedge \muathrm{pr}^*e_1 \wedge Id[\betaar{\muathsf{f}}_{|\tauheta|}(r)]_{1,0} \|_{h_1}=O(r^{-1}s^{-1})$ and $\| [d\varpi_s]_{0,1}\wedge \muathrm{pr}^*e_1 \wedge Id[\betaar{\muathsf{f}}_{|\tauheta|}(r)]_{1,0} \|_{h_1} =O(s^{-1})$. We first trivialize $\nu$ using orthonormal complex coordinates $(x,y)$ and taking into account Lemma \rhoef{lem:bounded norm} we obtain $ \|Id[\betaar{\muathsf{f}}_{|\tauheta|}(r)]_{1,0}\|_{h_1} = \| \muathfrak{su}m_{j=1}^4 \betaar{\muathsf{f}}_{|\tauheta|}'(r) \epsilonrac{y_j}{r} \varepsilonta_j \|_{h_1} = O(r^{-1})$.
On the other hand, $\varpi_s(x,y)=\varpi_s(r)$ and thus \betaegin{align*} [d\varpi_s]_{1,0} =& \muathfrak{su}m_{i=1}^4{\varpi_s'(r) \epsilonrac{y_i}{r} \varepsilonta_i}, \\ [d\varpi_s]_{0,1} =& - \muathfrak{su}m_{i=1}^4\muathfrak{su}m_{j=1}^3{ \varpi_s'(r) \epsilonrac{y_i}{r} A_j^i (x,y)dx_j} . \varepsilonnd{align*} Taking into account that $A_j^i(x,y)=O(r)$ we obtain that $ \| [d\varpi_s]_{1,0} \|_{h_1} =O(s^{-1})$, $ \| [d\varpi_s]_{0,1}\|_{h_1}=O(r s^{-1})$. A multiplication yields the desired estimates. The remaining estimates are obtained by taking derivatives of $$ [Id\betaar{\muathsf{f}}_{|\tauheta|}]_{1,0}=\epsilonrac{\betaar{\muathsf{f}}_{|\tauheta|}'(r)}{r} (-y_1 \varepsilonta_2 + y_2 \varepsilonta_1 - y_3 \varepsilonta_4 + y_4 \varepsilonta_3), $$ and using Lemma \rhoef{lem:bounded norm}. Our estimates yield: $$ \| t^2 d\tauimesi_s \|_{h_t} = O(r^{-2}) + O(r^{-1}s^{-1}) + t(O(r^{-1})+ O(s^{-1}) ) + t^2O(1). $$ Take $s_0$ such that for each $0<t<1$ and $s>s_0$ it holds that $| O(r^{-2}) + O(r^{-1}s^{-1}) + t(O(r^{-1})+ O(s^{-1})) |< \epsilonrac{m}{4}$ on $\epsilonrac{s}{8} \lambdaeq r \lambdaeq \epsilonrac{s}{4}$. Let $s>s_0$ and take $t_s'''\lambdaeq \muathrm{min}(t_s',t_s'')$ such that for each $t<t_s'''$ we have $|t^2O(1)|< \epsilonrac{m}{2} $ and $ \|\Phi_2^t - \Phi_1^t \|_{h_t} < \epsilonrac{m}{2} $ on $\chi^{-1}(\nu_{2s})$; this is possible as we argued before. Define the closed form $$ \Phi_{3,s}^t = \Phi_2^t - \epsilonrac{t^2}{4} d\tauimesi_s, $$ which coincides with $\Phi_2^t$ if $r\lambdaeq \epsilonrac{s}{8}$ and with $\chi^*(\widehat \phi_{3,s}^t)$ if $r\gammaeq \epsilonrac{s}{4}$. On the neck $\epsilonrac{s}{8} \lambdaeq r \lambdaeq \epsilonrac{s}{4}$ we have that: $$ \| \Phi_{3,s}^t - \Phi_1^t\|_{h_t} \lambdaeq \| \Phi_{3,s}^t - \Phi_2^t \|_{h_t} + \| \Phi_2^t - \Phi_1^t \|_{h_t} < m.$$ The statement is therefore proved. \varepsilonnd{proof} The map $F_t \circ \chi $ allows us to glue an annulus around the zero section on $(\nuu/\j ,\phi_2)$ and an annulus around $Q$ on $(P, \Phi_2^t)$; this yields a resolution. \betaegin{theorem} \lambdaanglebel{theo:resol} There exists a closed $\Gammatwo$ resolution $\rhoho \colon \widetilde{X} \tauo X$. In addition, let us denote $D_s(Q)$ the $s$-disk of $P$ centered at $Q$; then $$ \widetilde X= \lambdaeft( X- \varepsilonxp(\nuu_{\varepsilon}/\j) \rhoight) \cup_{\varepsilonxp \circ F_t \circ \chi} D_s(Q) $$ for some $\varepsilon>0$, $t>0$ and $s>0$. \varepsilonnd{theorem} \betaegin{proof} Let $\varepsilon_0<R$ and $s_0>0$ be the values provided by Propositions \rhoef{prop:interpolation-order-2} and \rhoef{prop:inter-resol}. Fix $s>s_0$ and choose $t<t_s'''$ with $st = \epsilonrac{\varepsilon}{4}$ for some $\varepsilon<\varepsilon_0$. The map $F_t \circ \chi$ identifies $s\lambdaeq r \lambdaeq 2s$ on $P$ with $\epsilonrac{\varepsilon}{4} \lambdaeq r \lambdaeq \epsilonrac{\varepsilon}{2}$ on $\nu/\j$. Consider the $\Gammatwo$ forms $\Phi_{3,s}^t$ on $\chi^{-1}(\nu_{2s}/\j)$ and $\phi_{3,\varepsilon}$ on $\nu_{2\varepsilon}/\j$; on the annulus $s \lambdaeq r \lambdaeq 2s$ of $\chi^{-1}(\nu_{2s}/\j)$ we have that $\Phi_{3,s}^t = \chi^*(\widehat \phi_{3,s}^t)= \chi^*(\phi_2^t)$ and on $\epsilonrac{\varepsilon}{4} \lambdaeq r \lambdaeq \epsilonrac{\varepsilon}{2}$ on $\nu/\j$ we have that $\phi_{3,\varepsilon}=\phi_2$. Being $(F_t\circ \chi)^* \phi_2 = \chi^*(\phi_2^t)$, the $\Gammatwo$ structure is well defined on the resolution. \varepsilonnd{proof} \betaegin{remark} \lambdaanglebel{rem:size-exc} The radius of the disc $r \lambdaeq 2s$ with respect to the metric $h_t$ is $2st$.
Fixed $s_0>0$ the map $F_t \circ \chi$ identifies $0<r \lambdaeq 2s$ on $P$ with $0<r \lambdaeq 2st$ on $\nu$; therefore if we choose $t\tauo 0$ then the size of the exceptional divisor decreases. \varepsilonnd{remark} \sigmaection{Topology of the resolution} \lambdaanglebel{sec:topo} This section is devoted to understanding the cohomology algebra of the resolution; we shall make use of real coefficients and denote by $H^*(M)$ the algebra $H^*(M,\muathbb{R}R)$. We start by describing $H^*(\widetilde X)$ in terms of $H^*(X)$ and $H^*(L)$ and we then compute the induced product on it. The fibre bundle $\nuu$ is topologically trivial; this follows from the fact that every $3$ manifold is parallelizable. For a proof see \cite[Remark 2.14]{JK}. However, it might not be trivial as a complex bundle as we shall deduce from the computation of its total Chern class. Let us suppose for a moment that $L$ is connected; then $L$ is the mapping torus of diffeomorphism $\phisi \colon \Sigma \tauo \Sigma$, where $\Sigma$ is an orientable surface of genus $g$. In section \rhoef{sec:resolu} we denoted by $\muathrm{q} \colon \Sigma \tauimes [0,1] \tauo L$ the quotient projection, and by $\muathrm{b} \colon L \tauo S^1$ the bundle projection. We also chose that $\tauheta= \muathrm{b}^*(\tauheta_0)$ with $\tauheta_0$ the angular form on $S^1$. In Proposition \rhoef{prop:chern-class} we compute the total Chern class of $\nu$ by observing first that $\nu$ admits a section and thus $\nu= \underline{\muathbb{C}}\omegaplus \kappaer{\tauheta}$; where $\underline{\muathbb{C}}$ denotes the trivial line bundle over $L$. Then we identify $\kappaer(\tauheta)$ with the tangent space of the fibres taking into account that $\tauheta= \muathrm{b}^*(\tauheta_0)$. A formula for $c(\nu)$ follows from these remarks. In order to state the result it shall be useful to note that $2$-forms on $\Sigma$ determine closed $2$-forms on $L$. More precisely, let us consider $\varpi \colon [0,1]\tauo \muathbb{R}R$ a bump function with $\varpi|_{[0,1/4]}=0$ and $\varpi|_{[3/4,1]}=1$. Let $\beta \in \Omega^2(\Sigma)$ and let $\alpha \in \Omega^1(\Sigma)$ such that $\phisi^*\beta= \beta + d\alpha$; note that this is possible because $\phisi^*= \muathrm{Id}$ on $H^2(\Sigma)$. Then $\betaar{\beta}= \beta + d(\varpi(t) \alpha) \in \Omega^2(\Sigma \tauimes [0,1])$ induces a $2$-form on $L$ via the push-forward. Of course, one can show that the cohomology class of $\omegaverline{\beta}$ does not depend on $\alpha$. In addition, from the Mayer-Vietoris long exact sequence we deduce that $[\muathrm{q}_*(\betaar{\beta})] \nueq 0$ if $[\beta] \nueq 0$. We denote by $\omega_\Sigma\in \Omega^2(L)$ a closed $2$-form induced by a volume form $\muathrm{vol}_\Sigma$ of $\Sigma$ that integrates $1$ on $\Sigma$. This class represents the Poincar\' e dual of a circle $C\muathfrak{su}bset L$ such that $\muathrm{q} ( \{p_0\} \tauimes [0,1] ) \muathfrak{su}bset C$ and $C-\muathrm{q} ( \{p_0\} \tauimes \{0\})$ is an embedded line on $\muathrm{q}(\Sigma \tauimes \{0\})$ if it is not empty. \betaegin{proposition} \lambdaanglebel{prop:chern-class} The total Chern class of $\nuu$ is $c(\nuu)= 1 + (2-2g)[\omega_\Sigma]$. \varepsilonnd{proposition} \betaegin{proof} Let $\tauimes$ be the cross product on $TM|_L$ determined by $\varphi$. Consider on $E=\kappaer(\tauheta)$ the complex structure $\muathrm{J} W = W \tauimes e_1^\sigmaharp$, where $e_1= \|\tauheta\|^{-1}\tauheta$. 
This is well-defined because $\tauimes$ defines a cross product on $T_pL$ and if $\tauheta(X)=0$, then $X \tauimes e_1^\sigmaharp \phierp e_1^\sigmaharp$. Recall also that the complex structure on $\nuu$ is: $I(v) = e_1^\sigmaharp \tauimes v$. We prove that there is an isomorphism of complex vector bundles: $$ \underline{\muathbb{C}} \omegaplus E \tauo \nuu. $$ A nowhere-vanishing section $s \colon L \tauo\nu$ exists because $\deltaim L=3<4 = \muathrm{rk}(\nu)$; we define the isomorphism $\underline{\muathbb{C}} \omegaplus E \tauo \nuu$, $$ (z_1 + iz_2,W) \lambdaongmapsto z_1 s + z_2 e_1^\sigmaharp \tauimes s + W \tauimes s . $$ In order to check that the isomorphism is complex linear one uses the equality \cite[Lemma 2.9]{SW}: $$ u \tauimes (v \tauimes w) + v \tauimes (u \tauimes w) = g(u,w)v + g(v,w)u - 2g(u, v)w, $$ where $g$ denotes the restriction to $\nu$ of the metric on $M$. In our case taking $u=e_1^\sigmaharp$, $v=s$ and $w=W$ we obtain that $e_1^\sigmaharp \tauimes(W \tauimes s)= ( W \tauimes e_1^\sigmaharp)\tauimes s$. From the isomorphism we get that $c(\nuu)=c(\underline{\muathbb{C}})c(E)=1+c_1(E)$. We now compute $c_1(E)$; note that $E$ is the vertical distribution $d\muathrm{q}(T\Sigma\tauimes[0,1])\muathfrak{su}bset TM$. First consider a compactly-supported $2$-form $\upsilon\in \Omega^2(T\Sigma)$ representing the Thom class of the bundle $T\Sigma \tauo \Sigma$ that integrates $1$ over the fibres. Being the diffeomorphism $d\phisi \colon T\Sigma \tauo T\Sigma$ volume-preserving we obtain that $(d\phisi)^*\upsilon$ is also a compactly-supported $2$-form that integrates $1$ over the fibres. Thus, $(d\phisi)^*\upsilon= \upsilon + d\alpha$ for some compactly-supported $\alpha \in \Omega^1(T\Sigma)$. In addition let $s_0 \colon \Sigma \tauo T\Sigma$ be the zero section; then $[s_0^*(\upsilon)]=(2-2g)[\muathrm{vol}_\Sigma]$. The push-forward $\muathrm{q}_*(\upsilon + d(\varpi \alpha)) \in \Omega^2(E)$ of course induces the Thom class of $E$. Being $s[p,t]=d\muathrm{q}_{(p,t)}(s_0(p,t))$ the zero section of $E$ we obtain: $$ c_1(E)=s^*[\muathrm{q}_*(\upsilon + d(\varpi \alpha))]=[\muathrm{q}_*(s_0^*\upsilon + d(\varpi s_0^* \alpha) ) ]=(2-2g)[\omega_\Sigma]. $$ To obtain the last equality we have taken into account that $s_0^*(d\varpi)=0$, $s_0^*((d\phisi)^* \upsilon)=s_0^*\upsilon + d(s_0^* \alpha)$ and $[s_0^*(\upsilon)]=(2-2g)[\muathrm{vol}_\Sigma]$. \varepsilonnd{proof} The projectivized bundle of $\nu$ coincides with $Q$ because $ \muathbb{P}(\nu)= P_{\muathrm{U}(2)}(\nu)\tauimes_{\muathrm{U}(2)} \muathbb{CP}^1 = Q. $ An obstruction-theoretic argument ensures that it is trivial: \betaegin{lemma} \lambdaanglebel{lem:Q-trivial} The bundle $Q \tauo L$ is trivial. \varepsilonnd{lemma} \betaegin{proof} First recall that the spaces $\Deltaiff(S^2)$ and $\muathrm{SO}(3)$ have the same homotopy type. Classifying $S^2$ bundles is therefore equivalent to classifying rank $3$ vector bundles. In our case, denoting by $E=\kappaer(\tauheta)$ as in the proof of Proposition \rhoef{prop:chern-class}, if $g_{\alpha \beta} \in \muathrm{SO}(2)$ are the transition functions of $E$, taking into account the diffeomorphism $\muathbb{CP}^1 \tauo S^2$ one can compute that the transition functions of $Q$ are $$ h_{\alpha \beta}(x)(v_1,v_2,v_3) = (g_{\alpha \beta}(x)(v_1,v_2),v_3). $$ Therefore, the associated rank $3$ vector bundle $V$ has transition functions $g_{\alpha \beta} \tauimes \muathrm{Id} \in \muathrm{SO}(3)$. This is trivial if and only if $Q$ is.
We now observe that $V$ is trivial if and only if its second Stiefel-Whitney class vanishes. For that purpose consider a CW-decomposition $$ L=\cup_{k=0}^3{L^k}. $$ Then $V|_{L^1}$ is trivial because $\muathrm{SO}(3)$ is connected. The trivialization extends to $L^2$ if the primary obstruction cocycle is exact; this coincides with the second Stiefel-Whitney class (see \cite[Proposition 3.21]{Hatcher}). If it vanishes, then the last obstruction cocycle lies in $H^3(L,\phii_2(\muathrm{SO}(3)))=0$ and therefore the trivialization extends to $L$. We now compute the second Stiefel-Whitney class of $V$. In view of the transition functions, $V=E\omegaplus \underline{\muathbb{R}R}$ and thus $w_2(V)=w_2(E)$. Being $E$ a complex vector bundle, we obtain $w_2(E)=c_1(E) \mubox{ (mod 2)}=(2-2g)[\omega_\Sigma] \mubox{ (mod 2)}=0$. \varepsilonnd{proof} Using Proposition \rhoef{prop:chern-class} we re-state a well-known fact. For that purpose consider the tautological bundle associated to $\nu$: $$ \omegaverline{P} = P_{\muathrm{U}(2)}(\nu)\tauimes_{\muathrm{U}(2)} \widetilde{\muathbb{C}}^2. $$ Denote frames in $P_{\muathrm{U}(2)}(\nu)$ by $F$. There is a well-defined $\muathbb{Z}_2$ action on $\omegaverline{P}$, determined by $[F,(z_1,z_2,\varepsilonll)] \lambdaongmapsto [F,(-z_1,-z_2,\varepsilonll)]$. The quotient $\omegaverline{P}/\muathbb{Z}_2 $ coincides with $P$. We denote by $\varrho \colon \omegaverline{P} \tauo P$ the projection. \betaegin{proposition}\lambdaanglebel{prop:cohomology-exceptional-divisor} Let $\muathrm{e}(\omegaverline{P})$ be the Euler class of the line bundle $\omegaverline{P} \tauo Q$. Denote by $H^*(L)[\muathbf{x}]$ the algebra of polynomials with coefficients in $H^*(L)$. The map $$ F \colon H^*(L)[\muathbf{x}]/\lambdaangle \muathbf{x}^2+ (2-2g) [\omega_\Sigma] \muathbf{x} \rhoangle \tauo H^{*}(Q),\phisiuad F(\beta)= \muathrm{pr}^*\beta, F(\muathbf{x})=\muathrm{e}(\omegaverline{P}), $$ is an isomorphism of algebras. \varepsilonnd{proposition} Recall that we denoted the projection by $\muathrm{pr} \colon P \tauo L$. Consider $\tau \in \Omega^2(\omegaverline{P})$ the Thom $2$-form of the line bundle $\omegaverline{P} \tauo Q $ and note that we can suppose that $\tau$ is $\muathbb{Z}_2$-invariant because the involution preserves the orientation on the fibres. From Proposition \rhoef{prop:cohomology-exceptional-divisor} we obtain: $$ [\tau \wedge \tau] = - (2-2g)[(\muathrm{pr} \circ \varrho)^*\omega_\Sigma \wedge \tau]. $$ We also denote by $\tau$ the pushforward $\varrho_*\tau \in \Omega^2(P)$; on $H^*(P)$ it also verifies that: $$ [\tau \wedge \tau ]= - (2-2g)[ \muathrm{pr}^*\omega_\Sigma \wedge \tau]. $$ Of course, we can extend $\tau$ to a $2$-form on $\widetilde{X}$ and it corresponds to the Poincar\'e dual of $Q$. We now compute the cohomology of $\widetilde{X}$; for this we do not assume that $L$ is connected and we denote by $ L_1 ,\deltaots, L_r $ its connected components. Each $L_i$ is the mapping torus of a diffeomorphism $\phisi_i \colon \Sigma_i \tauo \Sigma_i$, where $\Sigma_i$ is an orientable surface of genus $g_i$; we denote by $\omega_i$ the $2$-form $\omega_{\Sigma_i}$ as constructed before. We also denote $Q_i= Q|_{L_i}$, $P_i=P|_{L_i}$ and $\tau_i$ the Thom form of $Q_i \muathfrak{su}bset P_i$.
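Since $Q_i=L_i\tauimes S^2$ by Lemma \rhoef{lem:Q-trivial}, the K\"unneth formula gives $$ H^{k}(Q_i) \cong H^{k}(L_i) \omegaplus H^{k-2}(L_i), $$ where the second summand is generated by a class restricting to the fundamental class of the $S^2$ fibre; we use this identification repeatedly in what follows.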
\betaegin{proposition}\lambdaanglebel{prop:cohomology-short-sequence} There is a split exact sequence: $$ \tauimesymatrix{ 0 \alphar[r] & H^*(X) \alphar[r]^{\phii^*} & H^*(\widetilde X) \alphar[r] & \omegaplus_{i=1}^r H^*(L_i)\omegatimes \lambdaangle \muathbf{x}_i \rhoangle \alphar[r] & 0} $$ where $\muathbf{x}_i$ has degree two. \varepsilonnd{proposition} \betaegin{proof} The existence of such exact sequence is contained in the proof of \cite[Proposition 6.1]{JK}; we outline it. Consider the long exact sequence of pairs $(X, L)$ and $(\widetilde X, Q)$. There is a commutative diagram: \sigmamall{ $$ \tauimesymatrix{ H^{k}(X,L) \alphar[r] \alphar[d]^{\phii^*} & H^{k}(X) \alphar[r]^{e_L^*} \alphar[d]^{\phii^*} & \omegaplus_i H^{k}(L_i,\muathbb{R}R) \alphar[d]^{\phii^*} \alphar[r]^{D_1} & H^{k+1}(X, L ) \alphar[d]^{\phii^*} \\ H^{k}(\widetilde X,Q) \alphar[r] & H^{k}(\widetilde X) \alphar[r]^{e_Q^*} & \omegaplus_i H^{k}(Q_i) \alphar[r]^{D_2} & H^{k+1}(\widetilde X, Q ) } $$ } \nuormalsize{ Here we denoted the inclusions $e_L \colon L \tauo X$ and $e_Q \colon Q \tauo \widetilde{X}$. The first and fourth columns are isomorphisms; these correspond to the identity map. The third column is injective with cokernel $\omegaplus_i H^*(Q_i)/H^*(L_i)$; this is isomorphic to $\omegaplus_i H^{k-2}(L_i)\omegatimes \lambdaangle \muathbf{x}_i\rhoangle $, because $Q_i=L_i \tauimes S^2$. Thus we get a commutative diagram with exact columns:} \sigmamall{ $$ \tauimesymatrix{ &0 \alphar[d] &0 \alphar[d] & \\ H^{k}(X,L) \alphar[r] \alphar[d]^{\phii^*} & H^{k}(X) \alphar[r]^{e_L^*} \alphar[d]^{\phii^*} & \omegaplus_i H^{k}(L_i) \alphar[d]^{\phii^*} \alphar[r]^{D_1} & H^{k+1}(X, L ) \alphar[d]^{\phii^*} \\ H^{k}(\widetilde X,Q) \alphar[r] & H^{k}(\widetilde X) \alphar[r]^{e_Q^*} \alphar[d] & \omegaplus_i H^{k}(Q_i) \alphar[r]^{D_2} \alphar[d] & H^{k+1}(\widetilde X, Q ) \\ & \coker(\phii^*) \alphar[d] \alphar[r]^{\betaar{e}_Q} & \omegaplus_i H^{k-2}(L_i)\omegatimes \lambdaangle \muathbf{x}_i\rhoangle \alphar[d] & \\ &0 &0 & } $$ } \nuormalsize{ Of course, $\betaar{e}_Q$ is the action induced by $e_Q^*$ on the quotient. In addition, the fact that first and fourth columns are the identity implies that $\muathrm{Im}(e_L^*)=\muathrm{Im}(e_Q^*)$.} Snake Lemma ensures that there is a exact sequence: $$ 0 \tauo \kappaer(e_L^*) \tauo \kappaer(e_Q^*) \tauo \kappaer(\betaar{e}_Q) \tauo \coker(e_L^*) \tauo \coker(e_Q^*) \tauo \coker(\betaar{e}_Q) \tauo 0. $$ The maps are induced by $\phii^*$, except for the connecting map $\kappaer(\betaar{e}_Q) \tauo \coker(e_L)$. The map $\phii^* \colon \kappaer(e_L^*) \tauo \kappaer(e_Q^*)$ is an isomorphism because the first column is an isomorphism and the diagram is commutative. In addition, taking into account that the fourth column is an isomorphism and that the diagram is commutative one can also check that $\phii^*$ is an isomorphism between $\muathrm{Im}(D_1)$ and $\muathrm{Im}(D_2)$. Moreover: $$ \muathrm{Im}(D_1)= \omegaplus_i H^*(Q_i)/ \kappaer (D_1) = \omegaplus_i H^{*}(L_i)/ \muathrm{Im}(e_L^*) = \coker(e_L^*), $$ and the isomorphism is induced by the map that $\phii^*$ induces on the quotient. Simmilarly, $\coker(e_Q^*)$ is isomorphic to $\muathrm{Im}(D_2)$ via $\phii^*$. This means that $\kappaer(\betaar{e}_Q)=0=\coker(\betaar{e}_Q)$ so, $$ \coker (\phii^*)= \omegaplus_i H^{*-2}(L_i)\omegatimes \lambdaangle \muathbf{x}_i \rhoangle. $$ Consider $\tau_i$ the Poincar\' e dual of $Q_i \muathfrak{su}bset \widetilde{X}$ as constructed before. 
Then, $$ \beta \omegatimes \muathbf{x}_i \lambdaongmapsto \muathrm{pr}^*(\beta) \tau_i $$ is a splitting of the previous exact sequence. \varepsilonnd{proof} This result implies that there is an isomorphism of vector spaces between $H^*(\widetilde{X})$ and $H^*(X) \omegaplus \omegaplus_{i=1}^r H^*(L_i)\omegatimes \lambdaangle \muathbf{x}_i \rhoangle $. The algebra structure of $H^*(\widetilde{X})$ induces an algebra structure on $H^*(X) \omegaplus \omegaplus_{i=1}^r H^*(L_i)\omegatimes \lambdaangle \muathbf{x}_i \rhoangle $ that we compute in Proposition \rhoef{prop:cohom-alg}. This is necessary in order to decide whether the resolution $\widetilde{X}$ is formal or not, because formality condition involves products of cohomology classes. \betaegin{proposition} \lambdaanglebel{prop:cohom-alg} There is an isomorphism $$ H^*(\widetilde{X})= H^*(X) \betaigoplus \omegaplus_{i=1}^r H^*(L_i)\omegatimes \lambdaangle \muathbf{x}_i \rhoangle. $$ Let $\alpha, \beta \in H^*(X)$, $\gamma_i \in H^*(L_i)$, $\gamma_j' \in H^*(L_j)$ and let $e_i \colon L_i \tauo X$ be the inclusion. The wedge product on $ H^*(\widetilde{X})$ determines the following product on the left hand side: \betaegin{enumerate} \item $\alpha \beta = \alpha \wedge \betaeta$, \item $\alpha (\gamma_i\omegatimes \muathbf{x}_i) = (e_i^*(\alpha) \wedge \gamma_i) \omegatimes \muathbf{x}_i$, \item $(\gamma_i\omegatimes \muathbf{x}_i) (\gamma_j' \omegatimes \muathbf{x}_i)=0$ if $i\nueq j$, \item $(\gamma_i \omegatimes \muathbf{x}_i)( \gamma_i' \omegatimes \muathbf{x}_i) = -2(\gamma_i \wedge \gamma_i')PD[L_i] -(2-2g_i)( \omega_i \omegatimes \muathbf{x}_i) $. \varepsilonnd{enumerate} \varepsilonnd{proposition} \betaegin{proof} Let $s \colon \omegaplus_{i=1}^r H^*(L_i)\omegatimes \lambdaangle \muathbf{x}_i \rhoangle \tauo H^*(\widetilde{X})$ be the splitting map constructed in the proof of Proposition \rhoef{prop:cohomology-short-sequence}. Then, the isomorphism is determined by: $$ \mathrm{T} =(\rhoho^*,s) \colon H^*(X) \omegaplus \omegaplus_{i=1} H^*(L_i)\omegatimes \lambdaangle \muathbf{x}_i \rhoangle \tauo H^*(\widetilde{X}). $$ In order to obtain a formula for the product between forms $\varepsilonta$, $\varepsilonta'$ we have to compute $(\mathrm{T})^{-1} \lambdaeft( \mathrm{T} \varepsilonta \wedge \mathrm{T} \varepsilonta' \rhoight)$. All the statements are evident except for the last one. We only check $\muathbf{x}_i^2= -2PD[L_i] - (2-2g_i)(\omega_i\omegatimes \muathbf{x}_i)$, the announced formula is deduced from this and the fact that $H^*(\widetilde X)$ is an algebra. First of all, $\mathrm{T} \muathbf{x}_ i \wedge \mathrm{T} \muathbf{x}_i = [\tau_i \wedge \tau_i]$; we now compute $\mathrm{T}^{-1}[\tau_i \wedge \tau_i]$. On the one hand taking into account the equality $$ [ \tau_i \wedge \tau_i ] = - (2-2g_i)[\muathrm{pr}^*(\omega_i) \wedge \tau_i], $$ we obtain that the restriction of $\mathrm{T}^{-1}[\tau_i \wedge \tau_i]$ to $H^*(L_i)\omegatimes \lambdaangle \muathbf{x}_i \rhoangle$ is $ - (2-2g_i)(\omega_i \omegatimes \muathbf{x}_i) . $ On the other hand, note first that if $x\in L_i$ then $\tau_i|_{P_x}$ is the Thom form of $Q_x \muathfrak{su}bset P_x$ because $\tau_i$ is the Thom form of $Q_i \muathfrak{su}bset P_i$. Thus: $$ \int_{P_x}{\tauau_i \wedge \tauau_i}= [Q_x][Q_x]=-2. 
$$ The restriction of $\mathrm{T}^{-1} [\tau_i \wedge \tau_i]$ to $H^*(X)$ has compact support around $L_i$ and $$ \int_{\nu_x}{ \rhoho^*(\tauau_i \wedge \tauau_i)}= \int_{\nu_x -0}{ \rhoho^*(\tauau_i \wedge \tauau_i)}= \int_{P_x-Q_x }{ \tauau_i \wedge \tauau_i}= \int_{P_x }{ \tauau_i \wedge \tauau_i}=-2. $$ The restriction is thus equal to $-2PD[L_i]$. \varepsilonnd{proof} \sigmaection{Non-formal compact $\Gammatwo$ manifold with $b_1=1$} \lambdaanglebel{sec:const} Nilpotent Lie algebras that have a closed left-invariant $\Gammatwo$ structure are classified in \cite{CF}; from these one obtains nilmanifolds with an invariant closed $\Gammatwo$ structure. Of course, excluding the $7$-dimensional torus, these are non-formal and have $b_1 \gammaeq 2$. From a $\muathbb{Z}_2$ action on a nilmanifold, in \cite{FFKM} the authors construct a formal orbifold whose isotropy locus consists of $16$ disjoint $3$-tori; then they prove that its resolution is also formal. In this section we follow the same process to construct first a non-formal $\Gammatwo$ orbifold with $b_1=1$ from a nilmanifold; its isotropy locus consists of $16$ disjoint non-formal nilmanifolds. Later we prove that its resolution is also non-formal and does not admit any torsion-free $\Gammatwo$ structure. \muathfrak{su}bsection{Orbifold with $b_1=1$} Let us consider the Lie algebra $\muathfrak{g}$ with structure equations $$ (0,0,0,12,23,-13,-2 (16)+ 2 (25) + 2(26) - 2(34) ), $$ and let $(e_1,e_2,e_3,e_4,e_5,e_6,e_7)$ be the generators of $\muathfrak{g}$ that verify the structure equations, that is, $[e_1,e_2]=-e_4$, $[e_2,e_3]=-e_5$ and so on. Recall that the simply connected Lie group $G$ associated to $\muathfrak{g}$ is the vector space $\muathfrak{g}$ endowed with the product $*$ determined by the Baker-Campbell-Hausdorff formula. \betaegin{remark} The Lie algebra $\muathfrak{g}$ belongs to the $1$-parameter family of algebras $147E1$ listed in Gong's classification \cite{G}; we choose the parameter $\lambda=2$. The associated Lie group admits an invariant closed $\Gammatwo$ structure as proved in \cite{CF}. \varepsilonnd{remark} Define $u_1=e_1$, $u_2=e_2$, $u_3=e_3$, $u_4= \epsilonrac{1}{2}e_4$, $u_5= \epsilonrac{1}{2}e_5$, $u_6= \epsilonrac{1}{2}e_6$ and $u_7= \epsilonrac{1}{6}e_7$. \betaegin{proposition}\lambdaanglebel{prop:gh} If $x=\muathfrak{su}m_{k=1}^7{\lambda_k u_k}$ and $y= \muathfrak{su}m_{k=1}^7{\mu_k u_k}$ then \betaegin{align*} x*y= &( \lambda_1 + \mu_1)u_1 + (\lambda_2 + \mu_2)u_2 + (\lambda_3 + \mu_3)u_3 + (\lambda_4 + \mu_4 - ( \lambda_1\mu_2 - \lambda_2\mu_1))u_4 \\ &+ (\lambda_5 + \mu_5 - ( \lambda_2\mu_3 - \lambda_3\mu_2))u_5 + (\lambda_6 + \mu_6 + (\lambda_1\mu_3 - \lambda_3\mu_1))u_6 \\ &+ (\lambda_7 + \mu_7 + (\lambda_1-\mu_1 - \lambda_2 + \mu_2)(\lambda_1 \mu_3 - \lambda_3 \mu_1) - (\lambda_3- \mu_3)(\lambda_1 \mu_2 - \lambda_2 \mu_1 ))u_7 \\ &+ ( (\lambda_2 - \mu_2)(\lambda_2\mu_3-\lambda_3 \mu_2) + 3 (\lambda_1 \mu_6 - \lambda_6 \mu_1))u_7 \\ &+ (- 3 (\lambda_2 \mu_5 - \lambda_5 \mu_2) - 3( \lambda_2 \mu_6 - \lambda_6 \mu_2) + 3(\lambda_3 \mu_4 - \lambda_4\mu_3) )u_7. \varepsilonnd{align*} \varepsilonnd{proposition} \betaegin{proof} Since $\muathfrak{g}$ is $3$-step, the Baker-Campbell-Hausdorff formula yields: $$ x*y = x + y + \epsilonrac{1}{2} [x,y] + \epsilonrac{1}{12} \lambdaeft( [x,[x,y]] - [y,[x,y]] \rhoight).
$$ Taking into account that $u_7\in Z(\muathfrak{g})$ and that $ [u_i,[u_j,u_k]]= 0$ if $i\gammaeq 4$ or $j \gammaeq 4$ or $k \gammaeq 4$, it follows: \betaegin{align*} x*y =& \muathfrak{su}m_{k=1}^7{(\lambda_k+ \mu_k) u_k} + \epsilonrac{1}{2} \muathfrak{su}m_{1 \lambdaeq i<j \lambdaeq 7}(\lambda_i\mu_j - \lambda_j\mu_i)[u_i,u_j] \\ &+ \epsilonrac{1}{12} \muathfrak{su}m_{1 \lambdaeq k\lambdaeq 3} (\lambda_k-\mu_k) \muathfrak{su}m_{1 \lambdaeq i<j \lambdaeq 3} {(\lambda_i\mu_j - \lambda_j\mu_i)[u_k,[u_i,u_j]]}. \varepsilonnd{align*} The non-zero combinations $[u_i,u_j]$ and $[u_k,[u_i,u_j]]$ are: \betaegin{align*} [u_1,u_2]=& -2u_4, & [u_2,u_5]=& -6u_7, & [u_3,[u_1,u_2]]=& -12 u_7 \\ [u_1,u_3]=& 2 u_6, & [u_2,u_6]=& -6u_7, & [u_1,[u_1,u_3]]=& 12 u_7\\ [u_1,u_6]=& 6u_7, & [u_3,u_4]=&6u_7, & [u_2,[u_1,u_3]]=& -12 u_7 , \\ [u_2,u_3]=& -2u_5, & & & [u_2,[u_2,u_3]]=& 12u_7. \varepsilonnd{align*} The announced formula easily follows from this. \varepsilonnd{proof} Proposition \rhoef{prop:gh} ensures that $$ \Gamma= \Big\{ \muathfrak{su}m_{i=1}^7{n_i u_i}, \mubox{ s.t. } n_i \in \muathbb{Z} \Big\}, $$ is a discrete subgroup of $G$, which is of course co-compact. Indeed, a straightforward computation gives a fundamental domain for the left action of $\Gamma$ on $G$: \betaegin{proposition} \lambdaanglebel{prop:lattice} A fundamental domain for the left action of $\Gamma$ on $G$ is $$ \muathcal{D}=\Big\{ \muathfrak{su}m_{i=1}^7{t_i u_i}, \mubox{ s.t. } 0 \lambdaeq t_i \lambdaeq 1 \Big \}. $$ \varepsilonnd{proposition} According to \cite[Lemma 5]{CF}, the group $G$ admits an invariant closed $\Gammatwo$ structure determined by: $$ \varphi= v^{127} + v^{347} + v^{567} + v^{135} - v^{236} - v^{146} - v^{245}, $$ where: \betaegin{minipage}[t]{0.48\tauextwidth} \betaegin{itemize} \item $v^1=\sigmaqrt{3}(2e^1 + e^5 - e^2 + e^6)$; \item $v^2=3e^2 - e^5 + e^6$; \item $v^3=e^3 + 2e^4$; \item $v^4=\sigmaqrt{3}(e^3 + e^7)$; \varepsilonnd{itemize} \varepsilonnd{minipage} \betaegin{minipage}[t]{0.48\tauextwidth} \betaegin{itemize} \item $v^5=\sigmaqrt{2}(e^6 - e^5)$; \item $v^6=\sigmaqrt{6}(e^5 + e^6)$; \item $v^7=2\sigmaqrt{2}(e^4 - e^3)$. \varepsilonnd{itemize} \varepsilonnd{minipage} Consider $M=G/\Gamma$; points of $M$ will be denoted by $[x]$, for some $x \in G$. The nilmanifold $M$ inherits a closed $\Gammatwo$ structure that we also denote by $\varphi$. We now define an involution $\j$ on $M$ such that $\j^*\varphi= \varphi$. For that purpose it is sufficient to define an order $2$ automorphism $\j \colon G \tauo G$ with $\j^*\varphi=\varphi$ and $\j(\Gamma) = \Gamma$. The desired map is: $$ \j(e_k)=e_k, \phisiuad k \in\{3,4,7\}, \phisiquad \j(e_k)=-e_k, \phisiuad k\in \{1,2,5,6\}. $$ Looking at the structure constants of $G$ it becomes clear that $\j$ is an automorphism of $\muathfrak{g}$. The Baker-Campbell-Hausdorff formula ensures that $\j$ is a group homomorphism. In addition, it is clear that $\j(\Gamma)\muathfrak{su}bset \Gamma$. Finally, one can easily deduce that $\j^*(\varphi)= \varphi$. We define the orbifold $X=M/\j$, which has a closed $\Gammatwo$ structure determined by $\varphi$. We now study its singular locus: \betaegin{proposition} The isotropy locus has $16$ connected components; these are all diffeomorphic and their universal covering is the Heisenberg group. Let us define $H_0=\{ \lambda_3 u_3 + \lambda_4 u_4 + \lambda_7 u_7, \mubox { s.t.
} \lambda_j \in \muathbb{R}R \}$ and $\muathcal{E}=\{ \varepsilon_1 u_1 + \varepsilon_2 u_2 + \varepsilon_5 u_5 + \varepsilon_6 u_6, \mubox{ s.t. } \varepsilon_j \in \{0, \epsilonrac{1}{2}\} \}$. The $16$ connected components of the isotropy locus are: $$ H_\varepsilon=[L_{\varepsilon}H_0], \phisiuad \varepsilon \in \muathcal{E}, $$ where $L_\varepsilon$ denotes the left translation on $G$ by the element $\varepsilon \in \muathcal{E}$. \varepsilonnd{proposition} \betaegin{proof} It is clear that $H_0$ is a connected component of $\omegaperatorname{Fix}(\j)$ that contains $0$, which is the unit of $G$. Since $\j$ is a group homomorphism, we conclude that $H_0$ is a subgroup of $G$. It is thus sufficient to prove that the Lie algebra $\muathfrak{h}$ of $H_0$ is the Heisenberg algebra. This is of course true because $\muathfrak{h} = \lambdaangle e_3, e_4, e_7 \rhoangle$ with $[e_3,e_4]=2e_7$ and $[e_j,e_7]=0$ for $j\in \{3,4\}$. Let $\muathcal{K}= \{1,2,5,6\}$ and consider $x= \muathfrak{su}m_{k\in \muathcal{K}}{\lambda_k u_k} \in \muathcal{D}$, that is, $\lambda_k \in [0,1]$. We now check that if $\gamma*x = \j(x)$ for some $\gammaamma \in \Gammaamma $ then $[x] \in H_\varepsilon$ for some $\varepsilon \in \muathcal{E}$. Let us denote $\gammaamma = \muathfrak{su}m_{k=1}^{7}{n_k u_k}$; taking into account Proposition \rhoef{prop:gh} one obtains: \betaegin{align*} \gamma*x =& (n_1 + \lambda_1)u_1 + (n_2 + \lambda_2)u_2 + n_3u_3 + (n_4 - n_1 \lambda_2 + n_2 \lambda_1)u_4 \\ &+ (n_5 + \lambda_5 + n_3 \lambda_2)u_5 + (n_6 + \lambda_6 - n_3 \lambda_1 )u_6 + \lambda' u_7, \varepsilonnd{align*} for some $\lambda' \in \muathbb{R}R$. The equation $\j(x)= \gamma*x$ immediately yields $2\lambda_j =- n_j$ for $j\in\{1,2\}$ and $n_3=0$. Taking this into account, $n_4 - n_1 \lambda_2 + n_2 \lambda_1= n_4$, $n_5 + \lambda_5 + n_3 \lambda_2= n_5 + \lambda_5$, $n_6 + \lambda_6 - n_3 \lambda_1= n_6 + \lambda_6$ and thus $n_4=0$, $2\lambda_5=-n_5$ and $2\lambda_6=-n_6$. Thus, $x= -\epsilonrac{1}{2} \muathfrak{su}m_{k \in \muathcal{K}} n_k u_k$, and therefore $[x] \in H_\varepsilon$ for some $\varepsilon \in \muathcal{E}$. We now let $[y]$ be an isotropy point; one can write: $y= x_1*x_2$; with $x_1= \muathfrak{su}m_{k\in \muathcal{K}}{\lambda_k u_k}$ and $x_2=\muathfrak{su}m_{k\nuotin \muathcal{K}}\mu_k u_k \in H_0$. The choice becomes clear from the equality: \betaegin{align*} x_1*x_2=& \lambda_1 u_1 + \lambda_2 u_2 + \mu_3u_3 + \mu_4 u_4 + ( \lambda_5 - \lambda_2 \mu_3 )u_5 + ( \lambda_6 + \lambda_1 \mu_3 )u_6 \\ &+( \mu_7+ (\lambda_1 - \lambda_2)\lambda_1 \mu_3 + \lambda_2^2 \mu_3)u_7, \varepsilonnd{align*} that is of course deduced from Proposition \rhoef{prop:gh}. Using this decomposition we obtain the equality $\gamma*x_1* x_2= \j(y)= \j(x_1)* x_2$, which implies $\j(x_1)=\gamma*x_1$. Take $x_1' \in \muathcal{E}$ with $x_1= \gamma'* x_1'$, then $ [y]=[\gamma'* x_1'* x_2]=[x_1'* x_2] \in [L_{x_1'}H_0]. $ \varepsilonnd{proof} \muathfrak{su}bsection{Non-formality of the resolution} We start by computing the real cohomology algebra of the orbifold. Nomizu's theorem \cite{Nomizu} ensures that $(\Lambda^* \muathfrak{g}^*,d)$ is the minimal model of $M$. Taking into account that $H^*(X)=H^*(M)^{\muathbb{Z}_2}$ we obtain that $((\Lambda^* \muathfrak{g}^*)^{\muathbb{Z}_2}, d)$ is a model for $X$.
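For the computation below it is useful to note that $\j^*$ acts diagonally on $\muathfrak{g}^*$, fixing $e^3$, $e^4$, $e^7$ and changing the sign of $e^1$, $e^2$, $e^5$, $e^6$; hence $$ (\Lambda^1 \muathfrak{g}^*)^{\muathbb{Z}_2}= \lambdaangle e^3, e^4, e^7 \rhoangle, \phisiquad (\Lambda^2 \muathfrak{g}^*)^{\muathbb{Z}_2}= \lambdaangle e^{ij} \mubox{ with } i,j \in \{3,4,7\} \mubox{ or } i,j \in \{1,2,5,6\} \rhoangle. $$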
The cohomology of $X$ is: \betaegin{align*} H^1(X)=& \lambdaangle [e^3] \rhoangle, \\ H^2(X)=& \lambdaangle [e^{25}], [e^{15}- e^{26}], [e^{15}- e^{34}] \rhoangle, \\ H^3(X)=& \lambdaangle [e^{235}], [e^{135}], [e^{356}], [e^{124}], [e^{146}], [e^{245}], [e^{127} + 2e^{145}], \\ &\,\, [e^{125} + e^{167} - e^{257} - 2 e^{456} - e^{347}]\rhoangle . \varepsilonnd{align*} We now prove that $X$ is not formal. \betaegin{proposition} \lambdaanglebel{prop:X-nf} The triple Massey product $\lambdaangle [e^3],[e^{15}- e^{26}],[e^3] \rhoangle$ of $((\Lambda^* \muathfrak{g}^*)^{\muathbb{Z}_2},d)$ is not trivial. Therefore, $X$ is not formal. \varepsilonnd{proposition} \betaegin{proof} First of all, one can check that that space of exact $3$-forms of $((\Lambda \muathfrak{g})^{\muathbb{Z}_2},d)$ is: $$ B^3((\Lambda^* \muathfrak{g}^*)^{\muathbb{Z}_2},d)= \lambdaangle e^{123},e^{135}-e^{236}, -e^{136}+ e^{235} + e^{236}, e^{127}- 2 e^{146} + 2 e^{245} + 2 e^{246} \rhoangle. $$ and the space of closed $2$-forms is: $$ Z^2( (\Lambda^* \muathfrak{g}^*)^{\muathbb{Z}_2},d) = \lambdaangle e^{12},-e^{16}+e^{25}+e^{26}-e^{34}, e^{25}, e^{15}-e^{26}, e^{15}- e^{34} \rhoangle . $$ Let us take $\tauimesi_1= [e^3]= \tauimesi_3$, $\tauimesi_2= [e^{15} - e^{26}]$; the representatives of these cohomology classes are $\alpha_1=\alpha_3=e^3$ and $\alpha_2= e^{15} - e^{26} + dx$ for some $x \in (\muathfrak{g}^*)^{\muathbb{Z}_2} $; our previous computations ensure that the Massey product $\lambdaangle \tauimesi_1,\tauimesi_2,\tauimesi_3\rhoangle$ is well defined. More precisely, $\betaar{\alpha}_1 \wedge \alpha_2 = d(-e^{56} + e^3x + \beta_1)$ and $\betaar{\alpha}_2 \wedge \alpha_3 = d( e^{56}-e^3x+ \beta_2)$, where $\beta_1$ and $\beta_2$ are closed forms. Defining systems for $\lambdaangle \tauimesi_1, \tauimesi_2, \tauimesi_3 \rhoangle$ are $(e^3,e^{15} - e^{26} + dx, e^3, -e^{56} + e^3x + \beta_1, e^{56}-e^3x+ \beta_2 )$ and the triple Massey product is $$ \lambdaangle \tauimesi_1, \tauimesi_2, \tauimesi_3 \rhoangle=\{ [2e^{356} + e^{3}\beta] \mubox{ s.t. } d\beta =0 \}. $$ The zero cohomology class is not an element of this set due to our previous computations. Corollary \rhoef{cor:massey-form} ensures that $X$ is not formal. \varepsilonnd{proof} Let $\rhoho \colon \tauilde{X} \tauo X$ be the closed $\Gammatwo$ resolution constructed in Theorem \rhoef{theo:resol}. Lifting this triple Massey product to $\widetilde X$ we prove that $\widetilde X$ is not formal. \betaegin{proposition} \lambdaanglebel{prop:resol-nf} The resolution $\widetilde X$ is not formal. \varepsilonnd{proposition} \betaegin{proof} Let $(\Lambda V,d)$ be the minimal model of $\widetilde X$ with $V=\omegaplus_{i=1}^7 V^i$, and let $\kappa \colon \Lambda V \tauo \Omega(\widetilde X)$ be a quasi-isomorphism. From Proposition \rhoef{prop:cohom-alg} we deduce that $H^1(\widetilde X)= \lambdaangle \rhoho^*(e^3) \rhoangle$ and that: $$ H^2 (\widetilde X)= \lambdaangle \rhoho^*(e^{25}), \rhoho^*(e^{15}-e^{26}), \rhoho^*(e^{15}-e^{34}), \tau_1,\deltaots, \tau_{16} \rhoangle. $$ In addition, $\rhoho^*(e^3 \wedge (e^{15}-e^{26})) = d\rhoho^*(e^{56})$ and $\rhoho^*[e^{235}]$ and $\rhoho^*[e^{135}]$ are linearly independent on $H^3(\widetilde X, \muathbb{R}R)$. Then, according to Proposition \rhoef{prop:cohom-alg} one can choose: \betaegin{align*} V^1=& \lambdaangle a\rhoangle ,\\ V^2=& \lambdaangle b_1,b_2,b_3,y_1, \deltaots, y_{16}, n \rhoangle. 
\varepsilonnd{align*} with $da=0$, $db_j=dy_j=0$, $dn=a b_2$, and the map $\kappa$ is: \betaegin{align*} \kappa(a)=&\rhoho^*(e^3), & \kappa(b_2)=&\rhoho^*(e^{15}-e^{26}),& \kappa(n)=& \rhoho^*(e^{56}), \\ \kappa(b_1)=& \rhoho^*(e^{25}), & \kappa(b_3)=& \rhoho^*(e^{15}-e^{34}), & \kappa(y_j)=& \tau_j. \varepsilonnd{align*} We now define a Massey product. Let us take $\tauimesi_1= [a]= \tauimesi_3$, $\tauimesi_2= [b_2]$; representatives of these cohomology classes are $\alpha_1=\alpha_3=a$ and $\alpha_2= b_2$. Then $\betaar{\alpha}_1\wedge \alpha_2 = d(-n+ \beta_1 + \omega_1)$ and $\betaar{\alpha}_2 \wedge \alpha_3=d(n+ \beta_2 + \omega_2)$ with $\beta_1,\beta_2 \in \lambdaangle b_1,b_2,b_3 \rhoangle$ and $\omega_1,\omega_2 \in \lambdaangle y_{1}, \deltaots, y_{16} \rhoangle$. Therefore, defining systems of $\lambdaangle \tauimesi_1,\tauimesi_2,\tauimesi_3 \rhoangle$ are $(a,b_2,a,-n + \beta_1 + \omega_1, n+ \beta_2 + \omega_2)$ and the Massey product is the set $$ \{ [2an + a\beta + a\omega] \mubox{ s.t. } \beta \in \lambdaangle b_1,b_2,b_3 \rhoangle, \phisiuad \omega \in \lambdaangle y_{1}, \deltaots, y_{16} \rhoangle \} . $$ We now observe that $[2an + a\beta + a\omega]=0$ in $H^*(\Lambda V,d)$ if and only if $\omega=0$ and $[\kappa(2an + a\beta)]=0$. This is because $[\kappa(a \omega)]=[\rhoho^*(e^3) \wedge \kappa(\omega)]=0$ if and only if $\omega=0$, and if $[\omega]\nueq 0$, the elements $[\kappa(a \omega)]$ and $[\kappa (2an + a\beta)]$ are linearly independent. In addition, $\kappa(2an + a\beta)=\rhoho^*(2e^{356} + e^3 \wedge \beta')$, with $\beta' \in \lambdaangle e^{25}, e^{15}-e^{26}, e^{15}- e^{34} \rhoangle$. Taking into account Proposition \rhoef{prop:cohom-alg}, $[\kappa(2an + a\beta)]=0$ if and only if $[2e^{356} + e^3 \wedge \beta']=0$ on $X$. But $[2e^{356} + e^3 \wedge \beta']\nueq 0$ as shown in Proposition \rhoef{prop:X-nf}. \varepsilonnd{proof} There is another non-trivial triple Massey product that comes from the isotropy locus. In order to describe it we have to construct the subspace $V^3$ of our minimal model; it is a direct sum $V^3=C\omegaplus N$ such that $dC=0$ and there are no closed elements in $N$. To construct $C$ one takes a basis of the space $H^3(\widetilde{X}) / H^1(\widetilde X)H^2(\widetilde X)$; for instance: \betaegin{align*} \lambdaangle &\rhoho^*[e^{346}], \rhoho^*[e^{124}], \rhoho^*[e^{146}], \rhoho^*[e^{245}], \rhoho^*[e^{127} + 2e^{145}], \\ &\rhoho^*[e^{125} + e^{167} - e^{257} - 2 e^{456} - e^{347}]\rhoangle \omegaplus \lambdaangle \{ [e^4 \omegatimes \muathbf{x}_i ] \}_{i=1}^{16}\rhoangle. \varepsilonnd{align*} Let $C= \lambdaangle c_1, \deltaots, c_6, z_1, \deltaots, z_{16} \rhoangle$ with $dC=0$ and define $\kappa(c_1)=\rhoho^*(e^{346}),\kappa(c_2)=\rhoho^*(e^{124})$, $\deltaots$, $\kappa(c_6)=\rhoho^*(e^{125} + e^{167} - e^{257} - 2 e^{456} - e^{347})$ and $\kappa(z_i)=e^4\omegatimes \muathbf{x}_i$. With this notation, the triple Massey product coming from the singular locus $$ \lambdaangle [a],[z_j],-[a] \rhoangle $$ is not trivial. \betaegin{proposition} The fundamental group of $\widetilde X$ is $\phii_1(\widetilde X)=\muathbb{Z} \tauimes \muathbb{Z}_2 \tauimes \muathbb{Z}_6$. \varepsilonnd{proposition} \betaegin{proof} Let us denote by $\phii \colon M \tauo X$ the quotient projection. In order to compute $\phii_1(X)$ we first observe that $\phii_1(M)$ is isomorphic to $\Gammaamma$ due to the exact sequence $ 0 \tauo \phii_1(G) \tauo \phii_1(M) \tauo \Gammaamma \tauo 0.
$ Of course, each generator $u_i \in \Gammaamma$ is identified with the homotopy class $f_i$ determined by the image of the path from $0$ to $u_i$ under the quotient map $q \colon G \tauo M$. Denote by $[\cdot, \cdot]$ the commutator of two elements in $\phii_1(M)$; then the product structure on $\Gamma$ determines that the non-trivial commutators are: \betaegin{align*} [f_1,f_2]=& f_4^{-2}, & [f_2,f_3]=& f_5^{-2}, & [f_2,f_5]=& f_7^{-6}, &[f_3,f_4]=& f_7^{6}, \\ [f_1,f_3]=& f_6^2, & [f_1,f_6]=& f_7^{6}, & [f_2,f_6]=& f_7^{-6}. & & \varepsilonnd{align*} Taking into account \cite[Corollary 6.3]{Bre} the map $\phii_* \colon \phii_1(M) \tauo \phii_1(X)$ is surjective; we now analyze $\phii_*(f_j)$. First of all, under the projection $\phii$ the image of the loop $f_1$ is the same as the path from $0$ to $\epsilonrac{1}{2} u_1$ followed by the same path in the reverse direction; this is of course contractible and thus $\phii_*(f_1)=0$; in the same manner $\phii_*(f_2)=\phii_*(f_5)=\phii_*(f_6)=0$. Taking into account the commutator relations this implies that $\phii_*(f_4^2)=0$, $\phii_*(f_7^6)=0$ and that $\phii_*(f_3)$, $\phii_*(f_4)$, $\phii_*(f_7)$ commute. Thus, $\phii_1(X)=\muathbb{Z} \tauimes \muathbb{Z}_2 \tauimes \muathbb{Z}_6$. We now prove that the resolution process does not alter the fundamental group. For each $\varepsilon \in \muathcal{E}$ consider a small tubular neighbourhood $B^\varepsilon$ of $H_\varepsilon$ and suppose additionally that the $B^\varepsilon$ are pairwise disjoint. Take $D^\varepsilon \muathfrak{su}bset B^\varepsilon$ a smaller tubular neighbourhood of $H_\varepsilon$. Define $U$ to be a connected open set containing $\cup_\varepsilon B^\varepsilon$ that is homotopy equivalent to $\betaigvee_\varepsilon H_\varepsilon$, and $V=X-\cup_\varepsilon D^\varepsilon$. The Seifert-Van Kampen theorem states that $\phii_1(X)$ is the amalgamated product of $\phii_1(V)$ and $\phii_1(U)$ via $\phii_1(U \cap V)$. Define $\widetilde U=\rhoho^{-1}(U)$, $\widetilde V=\rhoho^{-1}(V)$; note that $\widetilde V$ and $V$ are diffeomorphic via $\rhoho$; in addition, $\rhoho_* \colon \phii_1(\widetilde U) \tauo \phii_1(U)$ is an isomorphism because $\widetilde U$ is homotopy equivalent to $\betaigvee_\varepsilon (H_\varepsilon\tauimes S^2)$. This observation and a further application of the Seifert-Van Kampen theorem ensure that $\phii_1(\widetilde{X})=\phii_1(X)$. \varepsilonnd{proof} \betaegin{proposition} \lambdaanglebel{prop: no-torsion-free} The manifold $\widetilde X$ does not admit torsion-free $\Gammatwo$ structures. \varepsilonnd{proposition} \betaegin{proof} Suppose that $\widetilde X$ admits a torsion-free $\Gammatwo$ structure. Since $g$ is Ricci-flat and $b_1=1$, \cite{Bo} ensures that there is a finite covering $N\tauimes S^1 \tauo \widetilde X$, with $N$ a compact simply connected $6$-dimensional manifold. Note that the covering is regular because $\phii_1(\widetilde X)$ is abelian; thus $(N \tauimes S^1) / H =\widetilde X$, where $H$ denotes the Deck group of the covering. The manifold $N$ is formal because it is simply-connected and $6$-dimensional (see \cite[Theorem 3.2]{FM}); therefore $N \tauimes S^1$ is formal (see \cite[Lemma 2.11]{FM}). Lemma \rhoef{lem:formal-quotient} allows us to conclude that $(N \tauimes S^1) / H =\widetilde X$ is formal, yielding a contradiction. \varepsilonnd{proof} \betaegin{remark} We can also prove Proposition \rhoef{prop: no-torsion-free} by making use of the topological obstruction to torsion-free $\Gammatwo$ structures obtained in \cite{CKT}.
Suppose that $\widetilde X$ has a torsion-free $\Gammatwo$ structure; then \cite[Theorem 4.10]{CKT} guarantees the existence of CDGAs $(A,d)$ and $(B,d)$ with the differential $d\colon B^k \tauo B^{k+1}$ being zero except for $k=3$, and quasi-isomorphisms: $$ \tauimesymatrix{ (\Omega(\widetilde X),d) & \alphar[l] (A,d) \alphar[r] & (B,d). } $$ This implies \cite[Corollary 4.13]{CKT} that non-zero triple Massey products $\lambdaangle \tauimesi_1, \tauimesi_2,\tauimesi_3 \rhoangle$ on $(\Omega(\widetilde X),d)$ verify that $|\tauimesi_1|+ |\tauimesi_2|=4$ and $|\tauimesi_2|+ |\tauimesi_3|=4$. Let $(A',d)$ be the minimal model of $(A,d)$; then one can obtain quasi-isomorphisms: $$ \tauimesymatrix{ (\Lambda V,d) & \alphar[l] (A',d) \alphar[r] & (B,d). } $$ The same conclusion holds for non-zero Massey products on $(\Lambda V, d)$. This contradicts the fact that there is a non-zero Massey product $\lambdaangle \tauimesi_1, \tauimesi_2,\tauimesi_3 \rhoangle$ on $(\Lambda V,d)$ with $|\tauimesi_1|=|\tauimesi_3|=1$ and $|\tauimesi_2|=2$, as obtained in the proof of Proposition \rhoef{prop:resol-nf}. Therefore $\widetilde X$ does not have a torsion-free $\Gammatwo$ structure. \varepsilonnd{remark} \betaegin{remark} There exists a finite covering $Y \tauo \widetilde X$ such that $\phii_1(Y)=\muathbb{Z}$ because $\phii_1(\widetilde X)=\muathbb{Z} \tauimes \muathbb{Z}_2 \tauimes \muathbb{Z}_6$. The manifold $Y$ is also non-formal as a consequence of Lemma \rhoef{lem:formal-quotient} and, of course, it has first Betti number $b_1=1$ and admits a closed $\Gammatwo$ structure. Arguing as in the proof of Proposition \rhoef{prop: no-torsion-free} one can conclude that $Y$ does not admit any torsion-free $\Gammatwo$ structure. \varepsilonnd{remark} {\sigmamall \betaegin{thebibliography}{33} \betaibitem{BFM} G.~Bazzoni, M.~Fern\'andez, V.~Mu\~noz, Non-formal co-symplectic manifolds, \tauextit{Trans. Amer. Math. Soc.} \tauextbf{367} (2015), 4459--4481. \betaibitem{BE} M.~Berger, Sur les groupes d'holonomie homog\`{e}nes de vari\'{e}t\'{e}s \`{a} connexion affine et des vari\'{e}t\'{e}s riemanniennes, \varepsilonmph{Bull. Soc. Math. France} \tauextbf{83} (1955), 279--330. \betaibitem{Bo} S.~Bochner, Vector fields and Ricci curvature, \varepsilonmph{Bull. Amer. Math. Soc.} \tauextbf{52} (1946), 776--797. \betaibitem{Bre} G.~Bredon, {\it Introduction to Compact Transformation Groups}, Academic Press, 1972. \betaibitem{Br87} R.~L.~Bryant, \varepsilonmph{Metrics with exceptional holonomy}, Ann. of Math. \tauextbf{126} (1987), 525--576. \betaibitem{BrS89} R.~L.~Bryant, S.~M.~Salamon, \varepsilonmph{On the construction of some complete metrics with exceptional holonomy}, Duke Math. J. \tauextbf{58} (1989), 829--850. \betaibitem{Br06} R.~L.~Bryant, Some remarks on $\Gammatwo$-structures, {\it Proceedings of G\"okova Geometry-Topology Conference 2005\/}, G\"okova Geometry/Topology Conference (GGT), G\"okova, (2006), pp. 75--109. \betaibitem{CFM} G.~Cavalcanti, M.~Fern\'andez, V.~Mu\~noz, Symplectic resolutions, Lefschetz property and formality, \tauextit{Adv. Math.} {\betaf 218} (2008), 576--599. \betaibitem{CKT} K.~F.~Chan, S.~Karigiannis, C.~C.~Tsang, The $\muathcal{L}_B$-cohomology on compact torsion-free $\Gammatwo$ manifolds and an application to `almost' formality, {\varepsilonm Ann. Global Anal. Geom.} {\betaf 55} (2019), 325--369. \betaibitem{CI} R.~Cleyton, S.~Ivanov, On the geometry of closed $\Gammatwo$-structures, {\varepsilonm Commun. Math. Phys.} {\betaf 270} (2007), 53--67.
\bibitem{CF} D.~Conti, M.~Fern\'andez, Nilmanifolds with a calibrated $G_2$ structure, \emph{Differ. Geom. Appl.} \textbf{29} (2011), 493--506.
\bibitem{CHNP} A.~Corti, M.~Haskins, J.~Nordstr\"om, T.~Pacini, $G_2$-manifolds and associative submanifolds via semi-Fano 3-folds, \emph{Duke Math. J.} \textbf{164} (10) (2015), 1971--2092.
\bibitem{CN} D.~Crowley, J.~Nordstr\"om, The rational homotopy type of $(n-1)$-connected manifolds of dimension up to $5n-3$, \emph{J. Topol.} \textbf{13} (2) (2020), 539--575.
\bibitem{DGMS} P.~Deligne, P.~Griffiths, J.~Morgan, D.~Sullivan, Real homotopy theory of K\"ahler manifolds, \emph{Invent. Math.} \textbf{29} (1975), 245--274.
\bibitem{FOT} Y.~F\'elix, J.~Oprea, D.~Tanr\'e, \emph{Algebraic Models in Geometry}, Oxford University Press, Oxford, UK, 2008.
\bibitem{F} M.~Fern\'andez, An example of a compact calibrated manifold associated with the exceptional Lie group $G_2$, \emph{J. Differ. Geom.} \textbf{26} (1987), 367--370.
\bibitem{F-2} M.~Fern\'andez, A family of compact solvable $G_2$-calibrated manifolds, \emph{T\^ohoku Math. J.} \textbf{39} (1987), 287--289.
\bibitem{FFKM} M.~Fern\'andez, A.~Fino, A.~Kovalev, V.~Mu\~noz, A compact $G_2$-calibrated manifold with first Betti number $b_1=1$, arXiv:1808.07144 [math.DG].
\bibitem{FG} M.~Fern\'andez, A.~Gray, Riemannian manifolds with structure group $G_2$, \emph{Ann. Mat. Pura Appl.} \textbf{132} (1982), 19--45.
\bibitem{FM} M.~Fern\'andez, V.~Mu\~noz, Formality of Donaldson submanifolds, \emph{Math. Z.} \textbf{250} (2005), 149--175.
\bibitem{G} M.-P.~Gong, Classification of nilpotent Lie algebras of dimension $7$ (over algebraically closed fields and $\mathbb{R}$), PhD thesis, University of Waterloo, Ontario, Canada, 1998.
\bibitem{Hatcher} A.~Hatcher, Vector bundles and K-theory, 2017.
\bibitem{HL} R.~Harvey, H.~B.~Lawson, Calibrated geometries, \emph{Acta Math.} \textbf{148} (3) (1982), 47--157.
\bibitem{J1} D.~D.~Joyce, Compact Riemannian 7-manifolds with holonomy $G_2$. I, \emph{J. Differ. Geom.} \textbf{43} (1996), 291--328.
\bibitem{J2} D.~D.~Joyce, Compact Riemannian 7-manifolds with holonomy $G_2$. II, \emph{J. Differ. Geom.} \textbf{43} (1996), 329--375.
\bibitem{Joyce2} D.~D.~Joyce, \emph{Compact Manifolds with Special Holonomy}, Oxford University Press, Oxford, 2000.
\bibitem{JK} D.~Joyce, S.~Karigiannis, A new construction of compact torsion-free $G_2$-manifolds by gluing families of Eguchi-Hanson spaces, arXiv:1707.09325 [math.DG].
\bibitem{Kovalev} A.~Kovalev, Twisted connected sums and special Riemannian holonomy, \emph{J. Reine Angew. Math.} \textbf{565} (2003), 125--160.
\bibitem{KovalevLee} A.~G.~Kovalev, N.~H.~Lee, $K3$ surfaces with non-symplectic involution and compact irreducible $G_2$-manifolds, \emph{Math. Proc. Camb. Phil. Soc.} \textbf{151} (2011), 193--218.
\bibitem{VM} V.~Manero, Compact solvmanifolds with calibrated and cocalibrated $G_2$-structures, \emph{Manuscripta Math.} (2019), https://doi.org/10.1007/s00229-019-01133-w.
\bibitem{MR} V.~Mu\~noz, J.~Rojo, Symplectic resolution of orbifolds with homogeneous isotropy, \emph{Geom. Dedicata} (2019).
\bibitem{MT} V.~Mu\~noz, A.~Tralle, Simply-connected K-contact and Sasakian manifolds of dimension $7$, \emph{Math. Z.} \textbf{281} (1) (2015), 457--470.
\bibitem{Nomizu} K.~Nomizu, On the cohomology of compact homogeneous spaces of nilpotent Lie groups, \emph{Ann. of Math.} \textbf{59} (1954), 531--538.
\bibitem{O} J.~Oprea, Lifting homotopy actions in rational homotopy, \emph{J. Pure Appl. Algebra} \textbf{32} (2) (1984), 177--190.
\bibitem{OT} J.~Oprea, A.~Tralle, \emph{Symplectic Manifolds with no K\"ahler Structure}, Springer-Verlag, Berlin Heidelberg, 1997.
\bibitem{SW} D.~Salamon, T.~Walpuski, Notes on the octonions, arXiv:1005.2820 [math.RA].
\bibitem{Salamon} S.~Salamon, \emph{Riemannian Geometry and Holonomy Groups}, Longman Scientific and Technical, Harlow, Essex, UK, 1989.
\bibitem{T} D.~Tischler, On fibering certain foliated manifolds over $S^1$, \emph{Topology} \textbf{9} (1970), 153--154.
\end{thebibliography}
}
\end{document}
\begin{document} \title{Quantum Correlations in Deutsch-Jozsa Algorithm via Deterministic Quantum Computation \\with One Qubit Model} \author[a1,a2]{Márcio M. Santos\corref{cor1}} \ead{[email protected]} \author[a3]{Eduardo I. Duzzioni} \ead{[email protected]} \cortext[cor1]{Corresponding author} \address[a1]{Instituto de Ciência, Engenharia e Tecnologia, Universidade Federal dos Vales do Jequitinhonha e Mucuri\\ Rua do Cruzeiro 01, Jardim São Paulo, 39803-371\\ Teófilo Otoni, Minas Gerais, Brazil\\ } \address[a2]{Instituto de Física, Universidade Federal de Uberlândia\\ Av. João Naves de Ávila 2121, Santa Mônica, 38400-902\\ Uberlândia, Minas Gerais, Brazil\\} \address[a3]{Departamento de Física, Universidade Federal de Santa Catarina\\ Campus Universitário Reitor João David Ferreira Lima, Trindade, 88040-900 \\ Florianópolis, Santa Catarina, Brazil\\}
\begin{abstract}
Quantum correlations have been pointed out as the most likely source of the speed-up in quantum computation. Here we analyze the presence of quantum correlations in the implementation of the Deutsch-Jozsa algorithm running in the DQC1 and DQCp models of quantum computing. For some balanced functions, the qubits in the DQC1 model are quantum correlated only in the intermediate steps of the algorithm, for a given decomposition into one- and two-qubit gates. In the DQCp model the final state is strongly quantum correlated for some balanced functions, and the pairwise entanglement between blocks scales with the system size. Although the Deutsch-Jozsa algorithm is efficiently implemented in both models of computation, the presence of quantum correlations is not a sufficient property for computational gain in this case, since the classical probabilistic algorithm performs better than the quantum ones. Measuring qubits other than the control one proved insufficient to make the algorithm deterministic.
\end{abstract}
\begin{keyword} Quantum computing \sep Quantum correlations \sep Mixed-state computing \end{keyword} \maketitle
\section{Introduction}
Quantum correlations have been pointed out as a resource for quantum computation. Entanglement is seen as a necessary resource for pure-state quantum computation to show an advantage over classical computing \cite{Jozsa,Nest2012}. However, such a resource does not seem to be as essential for quantum computation with mixed states. It was observed that the amount of entanglement present in the trace evaluation of a unitary matrix realized in the Deterministic Quantum Computation with one qubit (DQC1) model could not explain the resulting speed-up \cite{entan DQC1}. Other examples using the DQC1 model which present quantum advantage are Shor's factorization algorithm \cite{Parker}, the measurement of the average fidelity decay of a quantum map \cite{Poulin}, and the approximation of the Jones polynomial \cite{Shor2008}. This model of computation has already been implemented in an optical system \cite{Lanyon} and in Nuclear Magnetic Resonance \cite{Ryan,Marx}. The first quantum algorithm, introduced by D. Deutsch in 1985 \cite{Deutsch1985}, aimed to decide whether a function $f:\{0,1\}\rightarrow\{0,1\}$ is constant or balanced. Although the first version of this algorithm was probabilistic, improvements on it showed that it is possible to know the function class with certainty with just one measurement \cite{Mosca1997}, while in the classical case two evaluations of $f$ are necessary.
The generalization of the Deutsch algorithm to an input of $n$ qubits was made by D. Deutsch and R. Jozsa in 1992 \cite{DeutschJozsa}. In this case the function $f:\{0,1\}^{n}\rightarrow\{0,1\}$ is said to be constant if $f(j)=0$ or $f(j)=1$ for all $j$ $(j=0,...,2^{n}-1)$ and balanced if $f(j)=0$ for half of the $j$ values and $f(j)=1$ for the remaining $j$ values. Classically, it takes from $2$ to $2^{n-1}+1$ evaluations of $f$ to know the function class with certainty. In quantum computation the Deutsch-Jozsa algorithm gives the exact answer to the problem with just $n$ individual qubit measurements. In Ref. \cite{Mosca1997} the authors use $n+1$ qubits to solve this problem, while the Collins version of this algorithm uses only $n$ qubits \cite{DJ Collins}. The Collins version is represented by the circuit in Fig. \ref{fig:FigCollins}, where the unitary $U$ encodes the function $f$. After the $n$ first Hadamard gates the register is in an equal superposition of the $2^{n}$ states $\left|j\right\rangle $. The action of $U$ on $\left|j\right\rangle $ is $U\left|j\right\rangle =(-1)^{f(j)}\left|j\right\rangle $, which means $U$ is a matrix of the form \begin{equation} U=\sum_{j}(-1)^{f(j)}\left|j\right\rangle \left\langle j\right|.\label{eq:TU-1} \end{equation} Applying again the $n$ Hadamard gates, the readout of the algorithm can be made by projecting the final state on the state $\left|j=0\right\rangle $, giving the result \begin{equation} \frac{1}{2^{n}}\sum_{j=0}^{2^{n}-1}(-1)^{f(j)}=\left\{ \begin{array}{c} \pm1\quad\textrm{if \ensuremath{f}\:\ is constant,}\\ 0\quad\textrm{if \ensuremath{f}\:\ is balanced.} \end{array}\right.\label{eq:deutsch-1} \end{equation}
\begin{figure}
\caption{\label{fig:FigCollins}Circuit for the Collins version of the Deutsch-Jozsa algorithm, in which the unitary $U$ encodes the function $f$.}
\end{figure}
The structure of $U$ in Eq. (\ref{eq:TU-1}) allows the definition of the function class of $f$ by the evaluation of its normalized trace. An efficient way to evaluate the trace of a unitary matrix is given by the DQC1 model \cite{DQC1}. The DQC1 model is composed of $n+1$ qubits, where $n$ qubits are in the fully mixed state $I^{\otimes n}/2^{n}$ and only one qubit presents a degree of purity controlled by $\alpha\;(0<\alpha<1)$, as represented by the circuit in Fig. \ref{fig:DQC1-circuit}. The system initial state is $\rho_{I}=2^{-(n+1)}(I_{0}+\alpha Z_{0})\otimes I^{\otimes n}$, where the index $0$ refers to the semi-pure qubit (control qubit), $I$ is the identity matrix, and $Z$ is the Pauli matrix $\sigma_{Z}$.
\begin{figure}
\caption{\label{fig:DQC1-circuit}Circuit for the DQC1 model of quantum computation.}
\end{figure}
Just before the measurement the state of the system is \begin{equation} \rho=\frac{1}{2^{n+1}}\left(I_{0}\otimes I_{n}+\alpha X_{0}\otimes U_{n}\right),\label{eq:finalstate} \end{equation} where $X_{0}$ is the Pauli matrix $\sigma_{x}$. One way of implementing the Deutsch-Jozsa algorithm via the DQC1 model is choosing the unitary matrix $U_{n}$ as the $U$ defined in Eq. (\ref{eq:TU-1}) \cite{Arvind}. In this case, the state of the control qubit is characterized by $\left\langle \sigma_{x}\right\rangle =\alpha\frac{1}{2^{n}}\sum_{j=0}^{2^{n}-1}(-1)^{f(j)}$ and $\left\langle \sigma_{y}\right\rangle =\left\langle \sigma_{z}\right\rangle =0$.
Thus, if a $\sigma_{x}$ measurement is performed on this qubit, the result for its expected value is $\left\langle \sigma_{x}\right\rangle =0$ and variance $\triangle\sigma_{x}=1$ for a balanced function and $\left\langle \sigma_{x}\right\rangle =\pm\alpha$ with variance $\triangle\sigma_{x}=\sqrt{1-\alpha^{2}}$ for a constant function. The Deutsch-Jozsa algorithm was approached by different models of computation: probabilistic classical computation \cite{Arvind,preskill}, circuital quantum computation with pure states\cite{Mosca1997,DJ Collins}, ensemble quantum computation \cite{Arvind}, adiabatic quantum computation \cite{Das,Wei}, one-way quantum computation \cite{Chaves2011}, dissipative quantum computation \cite{Santos2012}, and blind quantum computation \cite{Barz2012}. In this work we study the presence of quantum correlations in the Deutsch-Jozsa algorithm performed in the DQC1 and DQCp models of quantum computation. In the next section we develop a circuit to perform the computation composed by a universal set of gates. Then, considering the DQC1 model we evaluate the presence of quantum correlations after the application of each quantum gate. To observe the generation of correlations with an initial pure state we evaluate, in section 3, the negativity generated by the realization of the same algorithm performed in the DQCp model. We then present a discussion followed by our conclusions. \section{Quantum Correlations in Deutsch-Jozsa Algorithm via DQC1 Model} Now we analyze the correlations present in the Deutsch-Jozsa algorithm implemented via DQC1 model. The final state of the computation (\ref{eq:finalstate}) can be written as \begin{align} \rho & =\sum_{j=0}^{2^{n}-1}(1/2^{n+1})\left[\left|0\right\rangle \left\langle 0\right|+\alpha(-1)^{f(j)}\left|0\right\rangle \left\langle 1\right|\right.\nonumber \\ & \quad\quad\left.+\alpha(-1)^{f(j)}\left|1\right\rangle \left\langle 0\right|+\left|1\right\rangle \left\langle 1\right|\right]\otimes\left|j\right\rangle \left\langle j\right|\nonumber \\ & =\sum_{j=0}^{2^{n}-1}(1/2^{n+1})(\left|a_{j}\right\rangle \left\langle a_{j}\right|+\left|b_{j}\right\rangle \left\langle b_{j}\right|)\otimes\left|j\right\rangle \left\langle j\right|,\label{eq:statef} \end{align} where $\left|a_{j}\right\rangle =cos\phi\left|0\right\rangle +(-1)^{f(j)}sin\phi\left|1\right\rangle $, $\left|b_{j}\right\rangle =sin\phi\left|0\right\rangle +(-1)^{f(j)}cos\phi\left|1\right\rangle $, and $sin(2\phi)=\alpha$ \cite{Datta Thesis}. Particularly for $\alpha=1$ the final state is \begin{equation} \rho=\sum_{j=0}^{2^{n}-1}(1/2^{n})\left|f(j)\right\rangle \left\langle f(j)\right|\otimes\left|j\right\rangle \left\langle j\right|,\label{eq:statef1} \end{equation} with $\left|f(j)\right\rangle =(\left|0\right\rangle +(-1)^{f(j)}\left|1\right\rangle )/\sqrt{2}$. It is easy to observe from Eqs. (\ref{eq:statef}) and (\ref{eq:statef1}) that the state $\rho$ is separable for any partition, since the $|j\rangle$ states describe the computational basis. 
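As a concrete illustration of Eqs. (\ref{eq:TU-1}) and (\ref{eq:finalstate}), the following Python/NumPy sketch (our own minimal illustration, not part of the original analysis; helper names such as \texttt{diagonal\_U} and \texttt{dqc1\_final\_state} are ours) builds the diagonal unitary $U$ for a given Boolean function $f$, forms the DQC1 state $\rho$, and checks that the control-qubit expectation value $\left\langle \sigma_{x}\right\rangle$ equals $\alpha\,2^{-n}\sum_{j}(-1)^{f(j)}$, i.e. $\pm\alpha$ for constant functions and $0$ for balanced ones.
\begin{verbatim}
import numpy as np

def diagonal_U(f_values):
    """U = sum_j (-1)^f(j) |j><j|, the diagonal oracle from the main text."""
    return np.diag([(-1.0) ** fj for fj in f_values]).astype(complex)

def dqc1_final_state(f_values, alpha):
    """rho = (I (x) I + alpha X (x) U) / 2^(n+1), the DQC1 state before measurement."""
    n = int(np.log2(len(f_values)))
    I = np.eye(2 ** n, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    return (np.kron(np.eye(2), I)
            + alpha * np.kron(X, diagonal_U(f_values))) / 2 ** (n + 1)

def control_sigma_x(rho, n):
    """Expectation value of sigma_x on the control qubit."""
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    return np.real(np.trace(rho @ np.kron(X, np.eye(2 ** n))))

n, alpha = 3, 1.0
f_constant = [0] * 2 ** n              # a constant function
f_balanced = [0, 1] * 2 ** (n - 1)     # one example of a balanced function
for f in (f_constant, f_balanced):
    rho = dqc1_final_state(f, alpha)
    target = alpha * sum((-1) ** fj for fj in f) / 2 ** n
    assert np.isclose(control_sigma_x(rho, n), target)
    print(control_sigma_x(rho, n))     # prints 1.0, then 0.0
\end{verbatim}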
We recall that any bipartite separable state can be cast into one of three categories: \emph{i)} classical-classical (CC) states, with the form $\rho=\sum_{i}p_{i}\left|i_{A}\right\rangle \left\langle i_{A}\right|\otimes\left|i_{B}\right\rangle \left\langle i_{B}\right|$, where $\left\{ \left|i_{A(B)}\right\rangle \right\} $ is an orthonormal set and $\{p_{i}\}$ is a probability distribution; \emph{ii)} classical-quantum (CQ) states, with the form $\rho=\sum_{i}p_{i}\left|i_{A}\right\rangle \left\langle i_{A}\right|\otimes\rho_{i}^{B}$, where $\left\{ \rho_{i}^{B}\right\} $ are quantum states; and \emph{iii)} fully quantum (QQ) states, with the form $\rho=\sum_{i}p_{i}\rho_{i}^{A}\otimes\rho_{i}^{B}$ \cite{Piani}. Rewriting the state in Eq. (\ref{eq:statef}) in the form \begin{equation} \sum_{\substack{i = +,-\\ 0\le j\le 2^n - 1}}p_{i,j}\left|i\right\rangle \left\langle i\right|\otimes\left|j\right\rangle \left\langle j\right|,\label{eq:statef1b} \end{equation} where $ \left|\pm\right\rangle = (\left|0\right\rangle \pm \left|1\right\rangle)/\sqrt{2}$ and $p_{\pm,j} = (1\pm\alpha (-1)^{f(j)})/2^{n+1}$, it becomes clear that the final state of the computation is a CC state. Therefore $\rho$ has no quantum correlations. Indeed, any quantum discord-like measure over any bipartition should confirm this statement \cite{Ollivier2001,Henderson2001,Modi2012,Rulli}. After performing $\sigma_{x}$ measurements on the control qubit, the best scenario to discriminate between constant and balanced functions occurs when $\alpha=1$. The Deutsch-Jozsa algorithm is implemented efficiently via the DQC1 model, since the expected value of $\sigma_{x}$ must be known with a given precision, which is independent of the number $n$ of mixed qubits. In Ref. 15 the authors show that this quantum algorithm (for $\alpha=1$) performs at best as well as the classical probabilistic algorithm. Therefore, the quantum and classical versions of the Deutsch-Jozsa algorithm discussed here have equivalent performance. Such a result is not obvious, because it is possible that quantum correlations are present in intermediate states of the computation even when the initial and final states do not have any. Although this is not the case here, quantum correlations may still be related to the speedup of quantum computation, since the quantum computer may evolve through states that require a smaller number of gates in the quantum solution \cite{Datta correlations middle}. To investigate the birth and death of quantum correlations in the implementation of the Deutsch-Jozsa algorithm, we use the procedure presented by S. Bullock and L. Markov to decompose a diagonal unitary operator into a sequence of one-qubit rotations and CNOTs \cite{Bullock}. Such a synthesis is general, so it can describe any unitary applied in the Deutsch-Jozsa algorithm for balanced or constant functions. The decomposition was done for the two and three mixed qubits cases; the latter is presented in Fig. \ref{fig:Synthesized-Deutsch-Jozsa-algori-1}. The presence or absence of quantum correlations after each quantum gate in the synthesized algorithm is determined by checking whether the system state can be written as a CC state or not. See the Appendix for details. For the two mixed qubits case, corresponding to four values of the index $j$ ($j=00,01,10,11$), no quantum correlations are found in any step of the algorithm. In our decomposition for three mixed qubits, quantum correlations are found between the second and the last but one CNOT gates for some balanced functions.
In this last case we found that there is no entanglement, as measured by the negativity (see the definition in Eq. (\ref{eq:neg})) \cite{negativity}, evaluated at all steps of the synthesized algorithm and for all kinds of functions, considering two different splits: $i)$ a splitting that separates the control qubit from all the others, and $ii)$ another one that puts the top two qubits in one partition and the bottom two in the other partition. We observe that the rotation angles $\theta_{j}$ present in the synthesis of the algorithm may take, among other values, $\pm\pi/4$ for some balanced functions. In these situations the operator $R_{j}$ becomes the $T$ (or $\pi/8$) gate, a unitary that lies outside the Clifford group. Although the Gottesman-Knill theorem \cite{GK} and Bryan Eastin's result that a concordant computation can be simulated using a classical computer \cite{Eastin} do not apply to these decompositions, the algorithm presented above can be efficiently simulated on a classical computer. For pure states quantum discord is equal to the entanglement entropy, i.e., it measures entanglement between two parties \cite{Henderson2001}. Correspondingly, Collins, Kim, and Holton arrived at a similar conclusion for the Deutsch-Jozsa algorithm implemented through the conventional pure-state quantum computation model \cite{DJ Collins}. They found that no entanglement is generated between two qubits, while for three or more qubits some balanced functions generate entanglement among them. Chaves and de Melo showed that there are functions for which it is possible to implement the Deutsch-Jozsa algorithm in the one-way quantum computation method with decoherence from a state that presents only classical correlations \cite{Chaves2011}. Arvind, Dorai, and Kumar implemented the Deutsch-Jozsa algorithm in an NMR experiment and observed the absence of entanglement for the one- and two-qubit cases and entanglement generation for some balanced functions in the three-qubit case \cite{Dorai}. By using pure state quantum computation, Kenigsberg, Mor, and Ratsaby found the maximal sub-problem that can be solved without entanglement \cite{Kenigsberg}.
\begin{center}
\begin{sidewaysfigure}
\textcolor{white}{\rule{1cm}{4cm}} \includegraphics[scale=0.5]{synthesis}\protect\caption{\label{fig:Synthesized-Deutsch-Jozsa-algori-1}Synthesized Deutsch-Jozsa algorithm implemented via the DQC1 model for three mixed qubits. Depending on the balanced function, the qubits can be quantum correlated along the steps highlighted on the top of the figure. Here, the operator $R_{j}\equiv R_{z}^{l}(\theta_{j})=e^{-i\theta_{j}/2}\left|0\right\rangle _{l}\left\langle 0\right|+e^{i\theta_{j}/2}\left|1\right\rangle _{l}\left\langle 1\right|$ rotates the state of the $l$-th qubit by an amount $\theta_{j}$ around the $z$ axis.
The angles of rotation are defined by $\theta_{0}=-\theta_{1}\equiv-\pi(f_{0}-f_{1}+f_{2}-f_{3}+f_{4}-f_{5}+f_{6}-f_{7})/8$, $\theta_{2}=-\theta_{3}\equiv\pi(f_{0}-f_{1}+f_{2}-f_{3}-f_{4}+f_{5}-f_{6}+f_{7})/8$, $\theta_{4}=-\theta_{7}\equiv-\pi(f_{0}-f_{1}-f_{2}+f_{3}-f_{4}+f_{5}+f_{6}-f_{7})/8$, $\theta_{5}=-\theta_{6}\equiv-\pi(f_{0}-f_{1}-f_{2}+f_{3}+f_{4}-f_{5}-f_{6}+f_{7})/8$, $\theta_{8}=-\theta_{9}\equiv-\pi(f_{0}+f_{1}-f_{2}-f_{3}+f_{4}+f_{5}-f_{6}-f_{7})/8$, $\theta_{10}=-\theta_{11}\equiv\pi(f_{0}+f_{1}-f_{2}-f_{3}-f_{4}-f_{5}+f_{6}+f_{7})/8$, $\theta_{12}=-\theta_{13}\equiv-\pi(f_{0}+f_{1}+f_{2}+f_{3}-f_{4}-f_{5}-f_{6}-f_{7})/8$, and $\theta_{14}=2\Phi\equiv\pi(f_{0}+f_{1}+f_{2}+f_{3}+f_{4}+f_{5}+f_{6}+f_{7})/8.$}
\end{sidewaysfigure}
\end{center}
\section{Quantum Correlations in Deutsch-Jozsa Algorithm via DQCp Model}
The basic idea of the deterministic quantum computation with pure states (DQCp model) is to reproduce in the answer qubit the expectation values of the DQC1 model for the control qubit \cite{DQC1}. The same results for the Deutsch-Jozsa algorithm in the DQC1 model presented above, that is, the expected values and variances of $\sigma_{x}$ for the control qubit, can be achieved if $\alpha=1$ and the $n$ mixed qubits in the DQC1 circuit are initialized in the state $\left|+\right\rangle ^{\otimes n}=\left[(\left|0\right\rangle +\left|1\right\rangle )/\sqrt{2}\right]^{\otimes n}$. This particular result, in turn, demonstrates that solving oracle problems on a quantum computer running the DQC1 model can be as powerful as on a quantum computer running the DQCp model \cite{DQC1}. An important difference between the DQC1 and DQCp models is that with pure initial states the circuit can generate significant amounts of entanglement among the qubits at the end of the computation, while with mixed states no quantum correlations can be generated. To verify this hypothesis, we quantify entanglement among different partitions through the negativity, a measure which has the advantage of being easily evaluated for a general bipartite mixed state \cite{negativity}. Its expression is given by \begin{equation} \mathcal{N}\left(\rho\right)=\frac{\left\Vert \rho^{T_{A}}\right\Vert _{1}-1}{2},\label{eq:neg} \end{equation} where $\left\Vert O\right\Vert _{1}=\textrm{tr}\sqrt{O^{\dagger}O}$ is the trace norm of the operator $O$ and the partial transposition on subsystem $A$ is denoted by $\rho^{T_{A}}$ (it could equally have been defined with partial transposition on subsystem $B$). The negativity ranges from zero to $(d-1)/2$, where $d$ is the dimension of the smaller of the two partitions $A$ and $B$. The algorithm was run 50 times with random balanced functions for a number of work qubits from 1 to 10, and the negativity was evaluated for two different splits: $i)$ a split that separates the $(n+1)/2$ top qubits and the $(n+1)/2$ bottom qubits for $n$ odd, and $ii)$ the $n/2$ top qubits and $n/2+1$ bottom qubits for $n$ even. The maximum value of the negativity achieved for each number of qubits is shown in Fig. \ref{fig:Negativity}. The resulting curve presents an overall increasing pattern, with sequences of approximately constant values, since the negativity is limited by the dimension of the smaller partition \cite{Datta Thesis}. In contrast with the results presented above for the DQC1 model, even for the case with $n=1$ the states of the system are quantum correlated, a behavior that persists as $n$ increases.
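For reference, a minimal sketch of how the negativity of Eq. (\ref{eq:neg}) can be evaluated numerically is given below (the helper names \texttt{partial\_transpose} and \texttt{negativity} are our own and do not come from any particular library); since $\rho^{T_{A}}$ is Hermitian, its trace norm is simply the sum of the absolute values of its eigenvalues.
\begin{verbatim}
import numpy as np

def partial_transpose(rho, dA, dB):
    """Partial transpose on subsystem A of a (dA*dB)x(dA*dB) density matrix."""
    r = rho.reshape(dA, dB, dA, dB)        # indices a, b, a', b'
    return r.transpose(2, 1, 0, 3).reshape(dA * dB, dA * dB)

def negativity(rho, dA, dB):
    """N(rho) = (||rho^{T_A}||_1 - 1) / 2 for a Hermitian density matrix."""
    evals = np.linalg.eigvalsh(partial_transpose(rho, dA, dB))
    return (np.sum(np.abs(evals)) - 1.0) / 2.0

# Two-qubit Bell state (|00>+|11>)/sqrt(2): maximally entangled, N = 1/2.
bell = np.zeros((4, 4), dtype=complex)
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
print(negativity(bell, 2, 2))              # 0.5

# A separable (diagonal) two-qubit state: N = 0.
sep = np.diag([0.25, 0.25, 0.25, 0.25]).astype(complex)
print(negativity(sep, 2, 2))               # 0.0
\end{verbatim}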
\begin{figure}
\caption{Negativity for the final state in the realization of the Deutsch-Jozsa algorithm via the DQCp model as a function of the number of qubits $n$. The values refer to the splitting separating the $(n+1)/2$ top qubits and the $(n+1)/2$ bottom qubits for $n$ odd, and the $n/2$ top qubits and $n/2+1$ bottom qubits for $n$ even. The sequences of approximately constant values are due to the limitation of the negativity by the dimension of the smaller partition. Here, $s$ is the number of qubits in the smaller partition.\label{fig:Negativity}}
\end{figure}
\section{Discussion}
The final states of the control qubit in the Deutsch-Jozsa algorithm implemented via the DQC1 and DQCp models for the possible function classes do not have orthogonal support, which means they cannot be distinguished with just one measurement, as in the conventional pure-state quantum computation \cite{Mosca1997,DJ Collins}. This makes the algorithm probabilistic. We tried to remove the probabilistic character of the solutions obtained through the models discussed above by using the DQC1$_{k}$ model of computation \cite{Morimae2014}. In the latter the quantum circuit is similar to the DQC1 circuit, but $k$ qubits are measured instead of only one. Nevertheless, the attempt was unsuccessful. The same procedure was repeated for the DQCp$_{k}$ model of computation, but again in vain. A possible reason for these results lies in the state of the control qubit for balanced functions, which is proportional to the identity operator, so it is impossible to distinguish it perfectly from any other state.
\section{Conclusions}
We reviewed the Deutsch-Jozsa algorithm implemented via the DQC1 model and also extended this idea to the DQCp model. In both models the initial state of the qubits is uncorrelated, both quantumly and classically. In the DQC1 model the final state of the algorithm does not possess any kind of quantum correlations. In contrast, in the DQCp model the qubits at the end of the algorithm are highly correlated for some balanced functions, and the bipartite entanglement between the blocks of qubits increases with the system size. The Deutsch-Jozsa algorithm is efficiently implemented in these different models of computation. Independently of the existence or absence of quantum correlations among the qubits in each step of the algorithm, the quantum solution shows no advantage over the one given by the classical probabilistic algorithm. Efforts to decide whether the function class is constant or balanced by performing a collective measurement on $k$ qubits in the DQC1$_{k}$ and DQCp$_{k}$ models of computation have also proven unsuccessful.
\section*{Acknowledgments}
The authors acknowledge the financial support from the Brazilian agencies CAPES, FAPEMIG, CNPq, and the Brazilian National Institute of Science and Technology for Quantum Information (INCT-IQ).
\section*{Appendix}
Here we show how to detect quantum correlations in the state of the system after every step of the synthesized algorithm for three mixed qubits. The procedure is the same for the two mixed qubits case, where no quantum-correlated states are found. This is done by verifying the form of the state, i.e., if it can be written as a CC state it does not possess any quantum correlation; otherwise the correlations have some quantum nature.
As shown in the main text, the synthesized algorithm is composed of: a Hadamard gate; one-qubit rotations around the $z$ axis, here indicated by $R_{k}^{i}$, where $k$ is an index associated to the rotation angle $\theta_{k}$ defined in Fig. \ref{fig:Synthesized-Deutsch-Jozsa-algori-1} and $i$ is the qubit index, beginning from $0$ for the semi-pure qubit and ranging from $1$ to $3$ for the mixed qubits; and controlled operations, here indicated by $CNOT_{m}^{n}$, where $m$ and $n$ are the indices of the control and target qubits, respectively. The initial state $\rho_{ini}=2^{-(n+1)}(I_{0}+\alpha Z_{0})\otimes I^{\otimes n}$ has no quantum correlations. Since $I_{0}$ can be written in any basis, including the eigenbasis of $Z_{0}$, this state can be put in the form $\rho_{ini}= \sum_i p_{i}\left|i\right\rangle \left\langle i\right|$, with $p_{i}\in\{2^{-(n+1)}(1+\alpha),2^{-(n+1)}(1-\alpha)\}$ representing a classical probability distribution. Now we present the states $\rho_{s}$ after each step $s$ of the synthesized algorithm (beginning from step $0$) and give a brief analysis of the quantumness of correlations for some initial and final steps. \thickmuskip=0mu 0) Hadamard gate on qubit $0$ and $R_{0}^{3}$: \[ \rho_{0}=2^{-4}(I_{0}+\alpha X_{0})\otimes I^{\otimes3}, \] where we used $n=3$ to indicate that we are studying the three mixed qubits case. This state can be put in the same form as the previous one and also represents a classical probability distribution. Therefore, it does not have quantum correlations. 1) $CNOT_{0}^{3}$: \[ \rho_{1}=2^{-4}(I^{\otimes4}+\alpha X_{0}I_{1}I_{2}X_{3}), \] where we dropped some of the $\otimes$ for simplicity. Again, the identities for qubits $0$ and $3$ in the first term can be written in the same basis as $X_{0}$ and $X_{3}$ in the second term, so the state can be written in a totally classical form. 2) $R_{1}^{3}$: \[ \rho_{2}=2^{-4}(I^{\otimes4}+\alpha X_{0}I_{1}I_{2}Q_{3}(\theta_{1})), \] where every matrix $Q_{k}(x)$ has the form $\left|0\right\rangle \left\langle 1\right|_{k}e^{-ix}+\left|1\right\rangle \left\langle 0\right|_{k}e^{ix}$. In this case, as the identity assumes the same form in any basis, $I_{0}$ and $X_{0}$ are diagonal in the eigenbasis of $X_{0}$. Similarly $I_{3}$ and $Q_{3}(\theta_{1})$ have a common eigenbasis, so $\rho_{2}$ assumes a classical form. 3) $CNOT_{1}^{3}$: \[ \rho_{3}=2^{-4}\left\{ I^{\otimes4}+\alpha X_{0}\left|0\right\rangle \left\langle 0\right|_{1}I_{2}Q_{3}(\theta_{1})+\alpha X_{0}\left|1\right\rangle \left\langle 1\right|_{1}I_{2}Q_{3}^{*}(\theta_{1})\right\} , \] where $Q_{3}^{*}(\theta_{1})=Q_{3}(-\theta_{1})$ is the (entrywise) complex conjugate of $Q_{3}(\theta_{1})$. The commutativity between $Q_{3}(\theta_{1})$ and $Q_{3}^{*}(\theta_{1})$ depends on the value of $\theta_{1}$. From the main text we have that, for balanced functions, $\theta_{1}$ may assume a value in the set $\left\{ 0,\pm\nicefrac{\pi}{4},\pm\nicefrac{\pi}{2}\right\} $. If $\theta_{1}=\pm\nicefrac{\pi}{4}$, $Q_{3}(\theta_{1})$ and $Q_{3}^{*}(\theta_{1})$ do not commute. Hence, the state $\rho_{3}$ is quantum correlated if $\theta_{1}=\pm\nicefrac{\pi}{4}$, and is a classical state if $\theta_{1}=0$ or $\theta_{1}=\pm\nicefrac{\pi}{2}$. Thus, the system possesses quantum correlations at this point of the synthesized algorithm only for balanced functions for which $\theta_{1}=\pm\nicefrac{\pi}{4}$.
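The commutation criterion used in step 3) can be checked numerically; the short sketch below is our own illustration (not part of the original appendix) and confirms that $Q(\theta_{1})$ and its complex conjugate $Q^{*}(\theta_{1})=Q(-\theta_{1})$ fail to commute exactly for $\theta_{1}=\pm\nicefrac{\pi}{4}$ among the allowed values, since $[Q(x),Q^{*}(x)]=-2i\sin(2x)\,\sigma_{z}$.
\begin{verbatim}
import numpy as np

def Q(x):
    """Q(x) = e^{-ix}|0><1| + e^{ix}|1><0|, as defined in the appendix."""
    return np.array([[0, np.exp(-1j * x)],
                     [np.exp(1j * x), 0]])

for theta in (0.0, np.pi / 4, -np.pi / 4, np.pi / 2):
    # entrywise complex conjugate: Q*(theta) = Q(-theta)
    comm = Q(theta) @ Q(theta).conj() - Q(theta).conj() @ Q(theta)
    print(theta, np.linalg.norm(comm))
# The Frobenius norm equals 2*sqrt(2)*|sin(2*theta)|: it vanishes for
# theta = 0 and +/- pi/2, and is non-zero for theta = +/- pi/4,
# which is precisely when rho_3 is quantum correlated.
\end{verbatim}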
4) $R_{2}^{3}$: \[ \rho_{4}=2^{-4}\left\{ I^{\otimes4}+\alpha X_{0}\left|0\right\rangle \left\langle 0\right|_{1}I_{2}Q_{3}(\theta_{2}+\theta_{1})+\alpha X_{0}\left|1\right\rangle \left\langle 1\right|_{1}I_{2}Q_{3}(\theta_{2}-\theta_{1})\right\} . \] As in the previous step, for some values of $\theta_{1}$ and $\theta_{2}$ (e.g. $\theta_{1}=\pm\nicefrac{\pi}{4}$ and $\theta_{2}=0$) this state cannot be put in a diagonal form, such that, for some balanced functions $\rho_{4}$ is quantum correlated. 5) $CNOT_{0}^{3}$: \[ \rho_{5}=2^{-4}\left\{ I^{\otimes4}+\left[\alpha\left|0\right\rangle \left\langle 1\right|_{0}\left[\left|0\right\rangle \left\langle 0\right|_{1}I_{2}P_{3}(\theta_{2}+\theta_{1})+\left|1\right\rangle \left\langle 1\right|_{1}I_{2}P_{3}(\theta_{2}-\theta_{1})\right]+H.c.\right]\right\} , \] where $H.c.$ means the Hermitian conjugate and $P_{k}(x)=\left|0\right\rangle \left\langle 0\right|_{k}e^{-ix}+\left|1\right\rangle \left\langle 1\right|_{k}e^{ix}$. Rearranging this expression we obtain $\rho_{5}=\left|0\right\rangle \left\langle 0\right|_{1}I_{2}[I_{0}I_{3} +\alpha\left|0\right\rangle \left\langle 1\right|_{0}P_{3}(\theta_{2}+\theta_{1})\\+\alpha\left|1\right\rangle \left\langle 0\right|_{0}P_{3}^{\dagger}(\theta_{2}+\theta_{1})]+\left|1\right\rangle \left\langle 1\right|_{1}I_{2}[I_{0}I_{3}+\alpha\left|0\right\rangle \left\langle 1\right|_{0}P_{3}(\theta_{2}-\theta_{1})+\alpha\left|1\right\rangle \left\langle 0\right|_{0}P_{3}^{\dagger}(\theta_{2}-\theta_{1})]$. In all terms of $\rho_{5}$ the states for qubits $1$ and $2$ are diagonal on the same basis (for each qubit state space), but the terms for qubits $0$ and $3$ are not. The commutation between the two terms on the right hand side of the previous expression is proportional to $\left(\left|0\right\rangle \left\langle 0\right|_{0}-\left|1\right\rangle \left\langle 1\right|_{0}\right)[P_{3}(\theta_{2}+\theta_{1})P_{3}^{\dagger}(\theta_{2}-\theta_{1})-P_{3}(\theta_{2}-\theta_{1})P_{3}^{\dagger}(\theta_{2}+\theta_{1})]$, which in turn is proportional to $\sin(2\theta_{1})$. Thus, if $\theta_{1}=\pm\nicefrac{\pi}{4}$ $\rho_{5}$ is quantum correlated at this point of the synthesized algorithm, otherwise it represents just a classical probability distribution. As the procedure to identify quantumness of correlations is the same for the remaining states, we will just write down these states and inform here, in advance, that all states up to $\rho_{28}$ can be quantum correlated for some balanced functions. We will show the analysis for the final states of the computation. 6) $R_{3}^{3}$: \[ \rho_{6}=\rho_{5}. \] 7) $CNOT_{0}^{3}$: \begingroup \setlength{\thickmuskip}{0mu} \setlength{\medmuskip}{0mu} \setlength{\thinmuskip}{0mu} \begin{eqnarray*} \rho_{7} & = & 2^{-4}\left\{ I^{\otimes4}+\left[\alpha\left|0\right\rangle \left\langle 1\right|_{0}\left[\left|0\right\rangle \left\langle 0\right|_{1}\left[\left|0\right\rangle \left\langle 0\right|_{2}P_{3}(\theta_{2}+\theta_{1})+\left|1\right\rangle \left\langle 1\right|_{2}P_{3}^{\dagger}(\theta_{2}+\theta_{1})\right]\right.\right.\right.\\ & & \left.\left.\left.+\left|1\right\rangle \left\langle 1\right|_{1}\left[\left|0\right\rangle \left\langle 0\right|_{2}P_{3}(\theta_{2}-\theta_{1})+\left|1\right\rangle \left\langle 1\right|_{2}P_{3}^{\dagger}(\theta_{2}-\theta_{1})\right]\right]+H.c.\right]\right\} . \end{eqnarray*} \endgroup 8) $R_{4}^{3}$: \[ \rho_{8}=\rho_{7}. 
\] 9) $CNOT_{1}^{3}$: \begingroup \setlength{\thickmuskip}{0mu} \setlength{\medmuskip}{0mu} \setlength{\thinmuskip}{0mu} \begin{eqnarray*} \rho_{9} & = & 2^{-4}\left\{ I^{\otimes4}+\left[\alpha\left|0\right\rangle \left\langle 1\right|_{0}\left[\left|0\right\rangle \left\langle 0\right|_{1}\left[\left|0\right\rangle \left\langle 0\right|_{2}P_{3}(\theta_{2}+\theta_{1})+\left|1\right\rangle \left\langle 1\right|_{2}P_{3}^{\dagger}(\theta_{2}+\theta_{1})\right]\right.\right.\right.\\ & & \left.\left.\left.+\left|1\right\rangle \left\langle 1\right|_{1}\left[\left|0\right\rangle \left\langle 0\right|_{2}P_{3}^{\dagger}(\theta_{2}-\theta_{1})+\left|1\right\rangle \left\langle 1\right|_{2}P_{3}(\theta_{2}-\theta_{1})\right]\right]+H.c.\right]\right\} . \end{eqnarray*} \endgroup 10) $R_{5}^{3}$: \[ \rho_{10}=\rho_{9}. \] 11) $CNOT_{0}^{3}$: \begingroup \setlength{\thickmuskip}{0mu} \setlength{\medmuskip}{0mu} \setlength{\thinmuskip}{0mu} \begin{eqnarray*} \rho_{11} & = & 2^{-4}\left\{ I^{\otimes4}+\left[\alpha\left|0\right\rangle \left\langle 1\right|_{0}\left[\left|0\right\rangle \left\langle 0\right|_{1}\left[\left|0\right\rangle \left\langle 0\right|_{2}Q_{3}(\theta_{2}+\theta_{1})+\left|1\right\rangle \left\langle 1\right|_{2}Q_{3}^{\dagger}(\theta_{2}+\theta_{1})\right]\right.\right.\right.\\ & & \left.\left.\left.+\left|1\right\rangle \left\langle 1\right|_{1}\left[\left|0\right\rangle \left\langle 0\right|_{2}Q_{3}^{\dagger}(\theta_{2}-\theta_{1})+\left|1\right\rangle \left\langle 1\right|_{2}Q_{3}(\theta_{2}-\theta_{1})\right]\right]+H.c.\right]\right\} . \end{eqnarray*} \endgroup 12) $R_{6}^{3}$: \begingroup \setlength{\thickmuskip}{0mu} \setlength{\medmuskip}{0mu} \setlength{\thinmuskip}{0mu} \begin{eqnarray*} \rho_{12} & = & 2^{-4}\left\{ I^{\otimes4} \right. \\ & & \left.+\left[\alpha\left|0\right\rangle \left\langle 1\right|_{0}\left[\left|0\right\rangle \left\langle 0\right|_{1}\left[\left|0\right\rangle \left\langle 0\right|_{2}Q_{3}(\theta_{6}+\theta_{2}+\theta_{1}) + \left|1\right\rangle \left\langle 1\right|_{2}Q_{3}(\theta_{6}-\theta_{2}-\theta_{1})\right]\right.\right.\right.\\ & & \left.\left.\left.+\left|1\right\rangle \left\langle 1\right|_{1}\left[\left|0\right\rangle \left\langle 0\right|_{2}Q_{3}(\theta_{6}-\theta_{2}+\theta_{1})+\left|1\right\rangle \left\langle 1\right|_{2}Q_{3}(\theta_{6}+\theta_{2}-\theta_{1})\right]\right]+H.c.\right]\right\}. \end{eqnarray*} \endgroup 13) $CNOT_{1}^{3}$: \begingroup \setlength{\thickmuskip}{0mu} \setlength{\medmuskip}{0mu} \setlength{\thinmuskip}{0mu} \begin{eqnarray*} \rho_{13} & = & 2^{-4}\left\{ I^{\otimes4} \right. \\ & & \left.+\left[\alpha\left|0\right\rangle \left\langle 1\right|_{0}\left[\left|0\right\rangle \left\langle 0\right|_{1}\left[\left|0\right\rangle \left\langle 0\right|_{2}Q_{3}(\theta_{6}+\theta_{2}+\theta_{1})+\left|1\right\rangle \left\langle 1\right|_{2}Q_{3}(\theta_{6}-\theta_{2}-\theta_{1})\right]\right.\right.\right.\\ & & \left.\left.\left.+\left|1\right\rangle \left\langle 1\right|_{1}\left[\left|0\right\rangle \left\langle 0\right|_{2}Q_{3}^{\dagger}(\theta_{6}-\theta_{2}+\theta_{1})+\left|1\right\rangle \left\langle 1\right|_{2}Q_{3}^{\dagger}(\theta_{6}+\theta_{2}-\theta_{1})\right]\right]+H.c.\right]\right\}. \end{eqnarray*} \endgroup 14) $R_{7}^{3}$: \begingroup \setlength{\thickmuskip}{0mu} \setlength{\medmuskip}{0mu} \setlength{\thinmuskip}{0mu} \begin{eqnarray*} \rho_{14} & = & 2^{-4}\left\{ I^{\otimes4} \right. 
\\ & & \left.+\left[\alpha\left|0\right\rangle \left\langle 1\right|_{0}\left[\left|0\right\rangle \left\langle 0\right|_{1}\left[\left|0\right\rangle \left\langle 0\right|_{2}Q_{3}(\theta_{7}+\theta_{6}+\theta_{2}+\theta_{1}) +\left|1\right\rangle \left\langle 1\right|_{2}Q_{3}(\theta_{7}+\theta_{6}-\theta_{2}-\theta_{1})\right]\right.\right.\right.\\ & & \left.\left.\left.+\left|1\right\rangle \left\langle 1\right|_{1}\left[\left|0\right\rangle \left\langle 0\right|_{2}Q_{3}(\theta_{7}-\theta_{6}+\theta_{2}-\theta_{1})+\left|1\right\rangle \left\langle 1\right|_{2}Q_{3}(\theta_{7}-\theta_{6}-\theta_{2}+\theta_{1})\right]\right]+H.c.\right]\right\}. \end{eqnarray*} \endgroup 15) $CNOT_{2}^{3}$: \begingroup \setlength{\thickmuskip}{0mu} \setlength{\medmuskip}{0mu} \setlength{\thinmuskip}{0mu} \begin{eqnarray*} \rho_{15} & = & 2^{-4}\left\{ I^{\otimes4} \right. \\ & & \left.+\left[\alpha\left|0\right\rangle \left\langle 1\right|_{0}\left[\left|0\right\rangle \left\langle 0\right|_{1}\left[\left|0\right\rangle \left\langle 0\right|_{2}Q_{3}(\theta_{7}+\theta_{6}+\theta_{2}+\theta_{1}) +\left|1\right\rangle \left\langle 1\right|_{2}Q_{3}^{\dagger}(\theta_{7}+\theta_{6}-\theta_{2}-\theta_{1})\right]\right.\right.\right.\\ & & \left.\left.\left.+\left|1\right\rangle \left\langle 1\right|_{1}\left[\left|0\right\rangle \left\langle 0\right|_{2}Q_{3}(\theta_{7}-\theta_{6}+\theta_{2}-\theta_{1})+\left|1\right\rangle \left\langle 1\right|_{2}Q_{3}^{\dagger}(\theta_{7}-\theta_{6}-\theta_{2}+\theta_{1})\right]\right]+H.c.\right]\right\}. \end{eqnarray*} \endgroup 16) $CNOT_{1}^{3}$: \begingroup \setlength{\thickmuskip}{0mu} \setlength{\medmuskip}{0mu} \setlength{\thinmuskip}{0mu} \begin{eqnarray*} \rho_{16} & = & 2^{-4}\left\{ I^{\otimes4} \right. \\ & & \left.+\left[\alpha\left|0\right\rangle \left\langle 1\right|_{0}\left[\left|0\right\rangle \left\langle 0\right|_{1}\left[\left|0\right\rangle \left\langle 0\right|_{2}Q_{3}(\theta_{7}+\theta_{6}+\theta_{2}+\theta_{1})+\left|1\right\rangle \left\langle 1\right|_{2}Q_{3}^{\dagger}(\theta_{7}+\theta_{6}-\theta_{2}-\theta_{1})\right]\right.\right.\right.\\ & & \left.\left.\left.+\left|1\right\rangle \left\langle 1\right|_{1}\left[\left|0\right\rangle \left\langle 0\right|_{2}Q_{3}^{\dagger}(\theta_{7}-\theta_{6}+\theta_{2}-\theta_{1})+\left|1\right\rangle \left\langle 1\right|_{2}Q_{3}(\theta_{7}-\theta_{6}-\theta_{2}+\theta_{1})\right]\right]+H.c.\right]\right\}. \end{eqnarray*} \endgroup 17) $CNOT_{2}^{3}$: \begingroup \setlength{\thickmuskip}{0mu} \setlength{\medmuskip}{0mu} \setlength{\thinmuskip}{0mu} \begin{eqnarray*} \rho_{17} & = & 2^{-4}\left\{ I^{\otimes4} \right. \\ & & \left.+\left[\alpha\left|0\right\rangle \left\langle 1\right|_{0}\left[\left|0\right\rangle \left\langle 0\right|_{1}\left[\left|0\right\rangle \left\langle 0\right|_{2}P_{3}(\theta_{7}+\theta_{6}+\theta_{2}+\theta_{1})+\left|1\right\rangle \left\langle 1\right|_{2}P_{3}^{\dagger}(\theta_{7}+\theta_{6}-\theta_{2}-\theta_{1})\right]\right.\right.\right.\\ & & \left.\left.\left.+\left|1\right\rangle \left\langle 1\right|_{1}\left[\left|0\right\rangle \left\langle 0\right|_{2}P_{3}^{\dagger}(\theta_{7}-\theta_{6}+\theta_{2}-\theta_{1})+\left|1\right\rangle \left\langle 1\right|_{2}P_{3}(\theta_{7}-\theta_{6}-\theta_{2}+\theta_{1})\right]\right]+H.c.\right]\right\} . 
\end{eqnarray*} \endgroup Let us define now $A_{3} \equiv P_{3}(\theta_{7}+\theta_{6}+\theta_{2}+\theta_{1})$, $B_{3} \equiv P_{3}^{\dagger}(\theta_{7}+\theta_{6}-\theta_{2}-\theta_{1})$, $C_{3} \equiv P_{3}^{\dagger}(\theta_{7}-\theta_{6}+\theta_{2}-\theta_{1})$ and $D_{3} \equiv P_{3}(\theta_{7}-\theta_{6}-\theta_{2}+\theta_{1})$. 18) $R_{8}^{2}$: \[ \rho_{18}=\rho_{17}. \] 19) $CNOT_{0}^{2}$: \begingroup \begin{center} \setlength{\thickmuskip}{0mu} \setlength{\medmuskip}{0mu} \setlength{\thinmuskip}{0mu} \begin{eqnarray*} \rho_{19}&=&2^{-4}\left\{ I^{\otimes4}+\left[\alpha\left|0\right\rangle \left\langle 1\right|_{0}\left[\left|0\right\rangle \left\langle 0\right|_{1}\left[\left|0\right\rangle \left\langle 1\right|_{2}A_{3}+\left|1\right\rangle \left\langle 0\right|_{2}B_{3}\right]\right.\right.\right.\\ & & \left.\left.\left. +\left|1\right\rangle \left\langle 1\right|_{1}\left[\left|0\right\rangle \left\langle 1\right|_{2}C_{3}+\left|1\right\rangle \left\langle 0\right|_{2}D_{3}\right]\right]+H.c.\right]\right\} . \end{eqnarray*} \end{center} \endgroup 20) $R_{9}^{2}$: \begingroup \setlength{\thickmuskip}{0mu} \setlength{\medmuskip}{0mu} \setlength{\thinmuskip}{0mu} \begin{eqnarray*} \rho_{20} & = & 2^{-4}\left\{ I^{\otimes4}+\left[\alpha\left|0\right\rangle \left\langle 1\right|_{0}\left[\left|0\right\rangle \left\langle 0\right|_{1}\left[e^{-i\theta_{9}}\left|0\right\rangle \left\langle 1\right|_{2}A_{3}+e^{i\theta_{9}}\left|1\right\rangle \left\langle 0\right|_{2}B_{3}\right]\right.\right.\right.\\ & & \left.\left.\left.+\left|1\right\rangle \left\langle 1\right|_{1}\left[e^{-i\theta_{9}}\left|0\right\rangle \left\langle 1\right|_{2}C_{3}+e^{i\theta_{9}}\left|1\right\rangle \left\langle 0\right|_{2}D_{3}\right]\right]+H.c.\right]\right\} . \end{eqnarray*} \endgroup 21) $CNOT_{1}^{2}$: \begingroup \setlength{\thickmuskip}{0mu} \setlength{\medmuskip}{0mu} \setlength{\thinmuskip}{0mu} \begin{eqnarray*} \rho_{21} & = & 2^{-4}\left\{ I^{\otimes4}+\left[\alpha\left|0\right\rangle \left\langle 1\right|_{0}\left[\left|0\right\rangle \left\langle 0\right|_{1}\left[e^{-i\theta_{9}}\left|0\right\rangle \left\langle 1\right|_{2}A_{3}+e^{i\theta_{9}}\left|1\right\rangle \left\langle 0\right|_{2}B_{3}\right]\right.\right.\right.\\ & & \left.\left.\left.+\left|1\right\rangle \left\langle 1\right|_{1}\left[e^{-i\theta_{9}}\left|1\right\rangle \left\langle 0\right|_{2}C_{3}+e^{i\theta_{9}}\left|0\right\rangle \left\langle 1\right|_{2}D_{3}\right]\right]+H.c.\right]\right\} . \end{eqnarray*} \endgroup 22) $R_{10}^{2}$: \begingroup \setlength{\thickmuskip}{0mu} \setlength{\medmuskip}{0mu} \setlength{\thinmuskip}{0mu} \begin{eqnarray*} \rho_{22} & = & 2^{-4}\left\{ I^{\otimes4}+\left[\alpha\left|0\right\rangle \left\langle 1\right|_{0}\left[\left|0\right\rangle \left\langle 0\right|_{1}\left[e^{-i\left(\theta_{10}+\theta_{9}\right)}\left|0\right\rangle \left\langle 1\right|_{2}A_{3}+e^{i\left(\theta_{10}+\theta_{9}\right)}\left|1\right\rangle \left\langle 0\right|_{2}B_{3}\right]\right.\right.\right.\\ & & \left.\left.\left.+\left|1\right\rangle \left\langle 1\right|_{1}\left[e^{i\left(\theta_{10}-\theta_{9}\right)}\left|1\right\rangle \left\langle 0\right|_{2}C_{3}+e^{-i\left(\theta_{10}-\theta_{9}\right)}\left|0\right\rangle \left\langle 1\right|_{2}D_{3}\right]\right]+H.c.\right]\right\} . 
\end{eqnarray*} \endgroup 23) $CNOT_{0}^{2}$: \begingroup \setlength{\thickmuskip}{0mu} \setlength{\medmuskip}{0mu} \setlength{\thinmuskip}{0mu} \begin{eqnarray*} \rho_{23} & = & 2^{-4}\left\{ I^{\otimes4}+\left[\alpha\left|0\right\rangle \left\langle 1\right|_{0}\left[\left|0\right\rangle \left\langle 0\right|_{1}\left[e^{-i\left(\theta_{10}+\theta_{9}\right)}\left|0\right\rangle \left\langle 0\right|_{2}A_{3}+e^{i\left(\theta_{10}+\theta_{9}\right)}\left|1\right\rangle \left\langle 1\right|_{2}B_{3}\right]\right.\right.\right.\\ & & \left.\left.\left.+\left|1\right\rangle \left\langle 1\right|_{1}\left[e^{i\left(\theta_{10}-\theta_{9}\right)}\left|1\right\rangle \left\langle 1\right|_{2}C_{3}+e^{-i\left(\theta_{10}-\theta_{9}\right)}\left|0\right\rangle \left\langle 0\right|_{2}D_{3}\right]\right]+H.c.\right]\right\} . \end{eqnarray*} \endgroup 24) $R_{11}^{2}$: \[ \rho_{24}=\rho_{23}. \] 25) $CNOT_{1}^{2}$: \begingroup \setlength{\thickmuskip}{0mu} \setlength{\medmuskip}{0mu} \setlength{\thinmuskip}{0mu} \begin{eqnarray*} \rho_{25} & = & 2^{-4}\left\{ I^{\otimes4}+\left[\alpha\left|0\right\rangle \left\langle 1\right|_{0}\left[\left|0\right\rangle \left\langle 0\right|_{1}\left[e^{-i\left(\theta_{10}+\theta_{9}\right)}\left|0\right\rangle \left\langle 0\right|_{2}A_{3}+e^{i\left(\theta_{10}+\theta_{9}\right)}\left|1\right\rangle \left\langle 1\right|_{2}B_{3}\right]\right.\right.\right.\\ & & \left.\left.\left.+\left|1\right\rangle \left\langle 1\right|_{1}\left[e^{i\left(\theta_{10}-\theta_{9}\right)}\left|0\right\rangle \left\langle 0\right|_{2}C_{3}+e^{-i\left(\theta_{10}-\theta_{9}\right)}\left|1\right\rangle \left\langle 1\right|_{2}D_{3}\right]\right]+H.c.\right]\right\} . \end{eqnarray*} \endgroup 26) $R_{12}^{1}$: \[ \rho_{26}=\rho_{25}. \] 27) $CNOT_{0}^{1}$: \begingroup \setlength{\thickmuskip}{0mu} \setlength{\medmuskip}{0mu} \setlength{\thinmuskip}{0mu} \begin{eqnarray*} \rho_{27} & = & 2^{-4}\left\{ I^{\otimes4}+\left[\alpha\left|0\right\rangle \left\langle 1\right|_{0}\left[\left|0\right\rangle \left\langle 1\right|_{1}\left[e^{-i\left(\theta_{10}+\theta_{9}\right)}\left|0\right\rangle \left\langle 0\right|_{2}A_{3}+e^{i\left(\theta_{10}+\theta_{9}\right)}\left|1\right\rangle \left\langle 1\right|_{2}B_{3}\right]\right.\right.\right.\\ & & \left.\left.\left.+\left|1\right\rangle \left\langle 0\right|_{1}\left[e^{i\left(\theta_{10}-\theta_{9}\right)}\left|0\right\rangle \left\langle 0\right|_{2}C_{3}+e^{-i\left(\theta_{10}-\theta_{9}\right)}\left|1\right\rangle \left\langle 1\right|_{2}D_{3}\right]\right]+H.c.\right]\right\} . \end{eqnarray*} \endgroup 28) $R_{13}^{1}$: \begingroup \setlength{\thickmuskip}{0mu} \setlength{\medmuskip}{0mu} \setlength{\thinmuskip}{0mu} \begin{eqnarray*} \rho_{28} & = & 2^{-4}\left\{ I^{\otimes4} \right. \\ & & \left.+\left[\alpha\left|0\right\rangle \left\langle 1\right|_{0}\left[e^{-i\theta_{13}}\left|0\right\rangle \left\langle 1\right|_{1}\left[e^{-i\left(\theta_{10}+\theta_{9}\right)}\left|0\right\rangle \left\langle 0\right|_{2}A_{3}+e^{i\left(\theta_{10}+\theta_{9}\right)}\left|1\right\rangle \left\langle 1\right|_{2}B_{3}\right]\right.\right.\right.\\ & & \left.\left.\left.+e^{i\theta_{13}}\left|1\right\rangle \left\langle 0\right|_{1}\left[e^{i\left(\theta_{10}-\theta_{9}\right)}\left|0\right\rangle \left\langle 0\right|_{2}C_{3}+e^{-i\left(\theta_{10}-\theta_{9}\right)}\left|1\right\rangle \left\langle 1\right|_{2}D_{3}\right]\right]+H.c.\right]\right\} . 
\end{eqnarray*} \endgroup This state is clearly diagonal on qubits $2$ and $3$, but it is not fully diagonal on qubits $0$ and $1$ for balanced functions. Thus, the correlations in state $\rho_{28}$ may present some quantum nature. 29) $CNOT_{0}^{1}$: \begingroup \setlength{\thickmuskip}{0mu} \setlength{\medmuskip}{0mu} \setlength{\thinmuskip}{0mu} \begin{eqnarray*} \rho_{29} & = & 2^{-4}\left\{ I^{\otimes4} \right. \\ & & \left.+\left[\alpha\left|0\right\rangle \left\langle 1\right|_{0}\left[e^{-i\theta_{13}}\left|0\right\rangle \left\langle 0\right|_{1}\left[e^{-i\left(\theta_{10}+\theta_{9}\right)}\left|0\right\rangle \left\langle 0\right|_{2}A_{3}+e^{i\left(\theta_{10}+\theta_{9}\right)}\left|1\right\rangle \left\langle 1\right|_{2}B_{3}\right]\right.\right.\right.\\ & & \left.\left.\left.+e^{i\theta_{13}}\left|1\right\rangle \left\langle 1\right|_{1}\left[e^{i\left(\theta_{10}-\theta_{9}\right)}\left|0\right\rangle \left\langle 0\right|_{2}C_{3}+e^{-i\left(\theta_{10}-\theta_{9}\right)}\left|1\right\rangle \left\langle 1\right|_{2}D_{3}\right]\right]+H.c.\right]\right\} . \end{eqnarray*} \endgroup The $CNOT_{0}^{1}$ turns the state diagonal on qubit $1$, besides being already diagonal on qubits $2$ and $3$. Now, the state of qubit $0$ in each term of the expression for $\rho_{29}$ admits the same eigenbasis, so the correlations in the total state are purely classical for any balanced or constant function. 30) $e^{i\Phi}R_{14}^{0}$: \begingroup \setlength{\thickmuskip}{0mu} \setlength{\medmuskip}{0mu} \setlength{\thinmuskip}{0mu} \begin{eqnarray*} \rho_{30} & = & 2^{-4}\left\{ I^{\otimes4} \right. \\ & & \left.+\left[\alpha e^{-i\theta_{14}}\left|0\right\rangle \left\langle 1\right|_{0}\left[e^{-i\theta_{13}}\left|0\right\rangle \left\langle 0\right|_{1}\left[e^{-i\left(\theta_{10}+\theta_{9}\right)}\left|0\right\rangle \left\langle 0\right|_{2}A_{3}+e^{i\left(\theta_{10}+\theta_{9}\right)}\left|1\right\rangle \left\langle 1\right|_{2}B_{3}\right]\right.\right.\right.\\ & & \left.\left.\left.+e^{i\theta_{13}}\left|1\right\rangle \left\langle 1\right|_{1}\left[e^{i\left(\theta_{10}-\theta_{9}\right)}\left|0\right\rangle \left\langle 0\right|_{2}C_{3}+e^{-i\left(\theta_{10}-\theta_{9}\right)}\left|1\right\rangle \left\langle 1\right|_{2}D_{3}\right]\right]+H.c.\right]\right\} . \end{eqnarray*} \endgroup The application of this last gate does not generate any quantum correlation in the final state of the system as is shown in the main text. \section*{References} \end{document}
\begin{document} \title{Tractability of Approximation \\ for Weighted Korobov Spaces \\ on Classical and Quantum Computers}
\begin{abstract}
We study the approximation problem (or problem of optimal recovery in the $L_2$-norm) for weighted Korobov spaces with smoothness parameter $\alpha$. The weights~$\gamma_j$ of the Korobov spaces moderate the behavior of periodic functions with respect to successive variables. The non-negative smoothness parameter $\alpha$ measures the decay of Fourier coefficients. For $\alpha=0$, the Korobov space is the $L_2$ space, whereas for positive $\alpha$, the Korobov space is a space of periodic functions with some smoothness and the approximation problem corresponds to a compact operator. The periodic functions are defined on $[0,1]^d$ and our main interest is when the dimension $d$ varies and may be large. We consider algorithms using two different classes of information. The first class $\Lambda^{\rm all}$ consists of arbitrary linear functionals. The second class $\Lambda^{\rm std}$ consists of only function values and this class is more realistic in practical computations. We want to know when the approximation problem is tractable. Tractability means that there exists an algorithm whose error is at most $\varepsilon$ and whose information cost is bounded by a polynomial in the dimension $d$ and in $\varepsilon^{-1}$. Strong tractability means that the bound does not depend on $d$ and is polynomial in $\varepsilon^{-1}$. In this paper we consider the worst case, randomized and quantum settings. In each setting, the concepts of error and cost are defined differently, and therefore tractability and strong tractability depend on the setting and on the class of information. In the worst case setting, we apply known results to prove that strong tractability and tractability in the class $\Lambda^{\rm all}$ are equivalent. This holds iff $\alpha>0$ and the sum-exponent $s_{\gamma}$ of the weights is finite, where $s_{\gamma}\,=\, \inf\big\{\,s>0\ :\ \sum_{j=1}^\infty\gamma_j^s\,<\,\infty\,\big\}$. In the worst case setting for the class $\Lambda^{\rm std}$ we must assume that $\alpha>1$ to guarantee that functionals from $\Lambda^{\rm std}$ are continuous. The notions of strong tractability and tractability are not equivalent. In particular, strong tractability holds iff $\alpha>1$ and $\sum_{j=1}^\infty\gamma_j<\infty$. In the randomized setting, it is known that randomization does not help over the worst case setting in the class $\Lambda^{\rm all}$. For the class $\Lambda^{\rm std}$, we prove that strong tractability and tractability are equivalent and this holds under the same assumption as for the class $\Lambda^{\rm all}$ in the worst case setting, that is, iff $\alpha>0$ and $s_{\gamma} < \infty$. In the quantum setting, we consider only upper bounds for the class $\Lambda^{\rm std}$ with $\alpha>1$. We prove that $s_{\gamma}<\infty$ implies strong tractability. Hence for $s_{\gamma}>1$, the randomized and quantum settings both break worst case intractability of approximation for the class $\Lambda^{\rm std}$. We indicate cost bounds on algorithms with error at most $\varepsilon$. Let ${\bf c}(d)$ denote the cost of computing $L(f)$ for $L\in \Lambda^{\rm all}$ or $L\in \Lambda^{\rm std}$, and let the cost of one arithmetic operation be taken as unity. The information cost bound in the worst case setting for the class $\Lambda^{\rm all}$ is of order ${\bf c}(d) \cdot \varepsilon^{-p}$ with $p$ being roughly equal to $2\max(s_\gamma,\alpha^{-1})$.
Then for the class $\Lambda^{\rm std}$ in the randomized setting, we obtain the total cost of order ${\bf c}(d)\,\varepsilon^{-p-2} + d\,\varepsilon^{-2p-2}$, which for small $\varepsilon$ is roughly $$ d\,\varepsilon^{-2p-2}. $$ In the quantum setting, we present a quantum algorithm with error at most $\varepsilon$ that uses about only $d + \log \varepsilon^{-1}$ qubits and whose total cost is of order $$ ({\bf c}(d) +d) \, \varepsilon^{-1-3p/2}. $$ The speedup of the quantum setting over the randomized setting is of order $$ \frac{d}{{\bf c}(d)+d}\,\left(\frac1{\varepsilon}\right)^{1+p/2}. $$ Hence, we have a polynomial speedup of order $\varepsilon^{-(1+p/2)}$. We stress that $p$ can be arbitrarily large, and in this case the speedup is huge.
\end{abstract}
\section{Introduction}
We study the approximation problem (or problem of optimal recovery in the $L_2$-norm) for periodic functions $f: [0,1]^d \to {\mathbb{C}}$ that belong to Korobov spaces. These are the most studied spaces of periodic functions. Usually, the unweighted case, in which all variables play the same role, is analyzed. As in \cite{HW,SWkor}, in this paper we analyze a more general case of weighted Korobov spaces, in which the successive variables may have diminishing importance. We consider the unit ball of weighted Korobov spaces $H_d$. Hence we assume that $\Vert f \Vert_d \le 1$, where the norm depends on a non-negative smoothness parameter $\alpha$ and a sequence $\gamma=\{\gamma_j\}$ of positive weights. For $\alpha=0$ we have $\|f\|_d=\|f\|_{L_2([0,1]^d)}$, and for $\alpha>0$ the norm is given by $$ \|f\|_d\,=\,\bigg(\sum_{h\in \mathbb{Z}^d}r_{\alpha}(\gamma,h)\,|\hat f(h)|^2\bigg)^{1/2}, $$ where $\mathbb{Z}^d\,=\,\{\,\dots,-1,0,1,\dots\,\}^d$, Fourier coefficients are denoted by $\hat f(h)$, and \begin{equation}\label{rdef} r_{\alpha}(\gamma,h)\,=\,\prod_{j=1}^d r_{\alpha}(\gamma_j,h_j)\qquad\mbox{with}\qquad r_{\alpha}(\gamma_j,h_j)\,=\,\left\{\begin{array}{rr} 1\ \ & \mbox{if\ } h_j=0,\\ \gamma_j^{-1}|h_j|^{\alpha} & \mbox{if\ } h_j\not=0.\end{array}\right. \end{equation} The smoothness parameter $\alpha$ measures the decay of the Fourier coefficients. It is known that the weighted Korobov space $H_d$ consists of functions that are $k_j$ times differentiable with respect to the $j$th variable if $k_j\le \alpha/2$. For $\alpha\ge 0$, the space $H_d$ is a Hilbert space, and for $\alpha>1$, it is a Hilbert space with a reproducing kernel. The weights $\gamma_j$ of Korobov spaces moderate the behavior of periodic functions with respect to successive variables. For $\|f\|_d\le 1$ and for small $\gamma_j$, we have large $r_{\alpha}(\gamma,h)$ with non-zero $h_j$ and therefore the corresponding Fourier coefficient $|\hat f(h)|$ must be small. In the limiting case when $\gamma_j$ approaches zero, all Fourier coefficients $\hat f(h)$ with non-zero $h_j$ must be zero, that is, the function $f$ does not depend on the $j$th variable. We consider algorithms using different classes of information. We study the two classes $\Lambda^{\rm all}$ and $\Lambda^{\rm std}$ of information. The first one $\Lambda^{\rm all}=H_d^*$ consists of all continuous linear functionals, whereas the second one $\Lambda^{\rm std}$, called the standard information, is more realistic in practical computations and consists only of function values, i.e., of $L_x(f)=f(x)\ \forall f\in H_d$ with $x\in [0,1]^d$. Such functionals are continuous only if $\alpha>1$.
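To make the definition (\ref{rdef}) concrete, here is a small Python sketch (our illustration only; the function names and the sample weights $\gamma_j=j^{-2}$ are assumptions, not taken from the paper) that evaluates $r_\alpha(\gamma,h)$ for a multi-index $h\in\mathbb{Z}^d$ and the corresponding weighted Korobov norm of a trigonometric polynomial with finitely many non-zero Fourier coefficients.
\begin{verbatim}
import numpy as np

def r_alpha(alpha, gamma, h):
    """r_alpha(gamma, h) = prod_j r_alpha(gamma_j, h_j):
    1 if h_j = 0, gamma_j^{-1} |h_j|^alpha otherwise."""
    out = 1.0
    for gj, hj in zip(gamma, h):
        out *= 1.0 if hj == 0 else abs(hj) ** alpha / gj
    return out

def korobov_norm(alpha, gamma, fhat):
    """||f||_d for a dictionary {h: hat f(h)} of Fourier coefficients."""
    return np.sqrt(sum(r_alpha(alpha, gamma, h) * abs(c) ** 2
                       for h, c in fhat.items()))

d, alpha = 3, 2.0
gamma = [j ** -2.0 for j in range(1, d + 1)]   # sample weights, s_gamma = 1/2
# f(x) = 1 + 0.3 e^{2 pi i x_1} + 0.1 e^{2 pi i (x_2 - 2 x_3)}
fhat = {(0, 0, 0): 1.0, (1, 0, 0): 0.3, (0, 1, -2): 0.1}
print(r_alpha(alpha, gamma, (0, 1, -2)))       # = |1|^2/gamma_2 * |2|^2/gamma_3
print(korobov_norm(alpha, gamma, fhat))
\end{verbatim}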
Our main interest is when the dimension $d$ varies and may be large. In particular, we want to know when the approximation problem is tractable. Tractability means that there exists an algorithm whose error is at most $\varepsilon$ and whose information cost (i.e., the number of information evaluations from $\lambdaall$ or $\lambdastd$) is bounded by a polynomial in the dimension $d$ and in $\varepsilon^{-1}$. Strong tractability means that the bound does not depend on $d$ and is polynomial in $\varepsilon^{-1}$. The exponent of strong tractability is defined roughly as the minimal non-negative $p$ for which the bound is of order $\varepsilon^{-p}$. We consider the worst case, randomized and quantum settings. Each setting has its own definition of error, information and total cost. In the worst case setting we consider only deterministic algorithms, whose error, information and total costs are defined by their worst performance. In the randomized setting we allow randomized algorithms, and their error and costs are defined on the average with respect to randomization for a worst function from the unit ball of $H_d$. In the quantum setting we allow quantum algorithms that run on a (hypothetical) quantum computer, with the corresponding definitions of error and costs. Clearly, the concepts of tractability and strong tractability depend on the setting and on the class of information. We are interested in checking how the setting and the class of information change conditions on tractability. The approximation problem corresponds to the embedding operator between the weighted Korobov space $H_d$ and the space $L_2([0,1]^d)$. This operator is compact iff $\alpha>0$. That is why for $\alpha=0$ we obtain negative results in all three settings and for the two classes of information. In Section~\ref{det} we study the worst case setting. It is enough to consider linear algorithms of the form $$ A_{n,d}(f)\,=\,\sum_{k=1}^na_kL_k(f). $$ Here, the $a_k$'s are some elements of $L_2([0,1]^d)$, and the $L_k$'s are some continuous linear functionals from $\lambdaall$ or $\lambdastd$. The functions $a_k$ do not depend on $f$; they form the fixed output basis of the algorithm. Necessary and sufficient conditions on tractability of approximation in the worst case setting easily follow from {\bf c}ite{HW,WWwei,WWpow}. With $$ s_{\gamma}=\inf\betaigg\{s>0:\, \sum_{j=1}^\infty\gamma_j^s\,<\,\infty\,\betaigg\}, $$ we have: \betaegin{enumerate} \item Let $\alpha\gammae0$. Strong tractability and tractability of approximation in the class $\lambdaall$ are equivalent, and this holds iff $\alpha > 0$ and the sum-exponent $s_{\gamma}$ is finite. If so, the exponent of strong tractability is $$ p^*(\lambdaall)\,=\,2\,\max\lambdaeft(s_\gamma,\alpha^{-1}\right)\ $$ \item Let $\alpha>1$. Strong tractability of approximation in the class $\lambdastd$ holds iff $$ \sum_{j=1}^\infty \gamma_j\,<\,\infty. $$ If so, then $p^*(\lambdaall)\lambdae 2$ and the exponent of strong tractability $p^*(\lambdastd)$ satisfies $$ p^*(\lambdastd)\,\in \,[p^*(\lambdaall),\,p^*(\lambdaall)+2]. $$ \item Let $\alpha>1$. Tractability of approximation in the class $\lambdastd$ holds iff $$ a\,:=\,\lambdaimsup_{d\to\infty}\frac{\sum_{j=1}^d\gamma_j}{\lambdan\,d}\,<\,\infty. $$ \varepsilonnd{enumerate} \vskip 1pc In particular, we see that for the classical unweighted Korobov space, in which $\gamma_j=1$ for all $j$, the approximation problem is intractable. 
To break intractability we must take weights $\gamma_j$ converging to zero with a polynomial rate, that is, $\gamma_j=O(j^{-k})$ for some positive $k$. Then $s_{\gamma}\lambdae 1/k$. In Section~\ref{mc} we study the randomized setting. We consider randomized algorithms of the form $$ A_{n,d}(f,\omega)\,=\,\varphi _{\omega}\lambdaeft(L_{1,\omega}(f),L_{2,\omega}(f), \deltaots,L_{n,\omega}(f)\right), $$ where $\omega$ is a random element that is distributed according to a probability measure $\varrho $, and $L_{k,\omega}\in \Lambda$ with $\varphi _{\omega}$ being a mapping from ${\mathbb{C}}^n$ into $L_2([0,1]^d)$. The randomized error of an algorithm $A_{n,d}$ is defined by taking the square root of the average value of $\|f-A_{n,d}(f,\omega)\|^2_{L_2([0,1]^d)}$ with respect to $\omega$ according to a probability measure $\varrho $, and then by taking the worst case with respect to $f$ from the unit ball of $H_d$. It is known, see {\bf c}ite{N92}, that randomization does not help over the worst case setting for the class $\lambdaall$. That is why, for the class $\lambdaall$, tractability and strong tractability in the randomized setting are equivalent to tractability and strong tractability in the worst case setting. For the class $\lambdastd$ we prove: \betaegin{enumerate} \item Strong tractability and tractability of approximation are equivalent, and this holds iff $\alpha>0$ and $s_\gamma\,<\,\infty$. In this case, the exponent of strong tractability is in the interval $[p^*(\lambdaall),\,p^*(\lambdaall)+2]$, where $p^*(\lambdaall)=2\max(s_\gamma,\alpha^{-1})$. \item For any $p>p^*(\lambdaall)$, we present an algorithm $A_{n,d}$ with $n$ of order $\varepsilonps^{-(p+2)}$ and randomized error at most $\varepsilonps$. Let ${\betaf c}(d)$ be the cost of computing one function value, and let the cost of performing one arithmetic operation be taken as unity. Then the total cost of the algorithm $A_{n,d}$ is of order $$ {\betaf c}(d)\,\lambdaeft(\frac1{\varepsilonps}\right)^{p+2}\,+\, d\,\lambdaeft(\frac1{\varepsilonps}\right)^{2p+2} \quad\forall\,d=1,2,\deltaots\,,\ \forall\,\varepsilon\in (0,1). $$ Hence, the only dependence on $d$ is through ${\betaf c}(d)$ and $d$. Clearly, if $d$ is fixed and $\varepsilon$ goes to zero then the second term dominates and the total cost of $A_{n,d}$ is of order $$ d\,\lambdaeft(\frac1{\varepsilonps}\right)^{2p+2}. $$ \varepsilonnd{enumerate} The essence of these results is that in the randomized setting there is no difference between tractability conditions when we use functionals from $\lambdaall$ or from $\lambdastd$. This is especially important when $s_{\gamma}>1$, since approximation is then intractable in the worst case setting for the class $\lambdastd$ independently of $\alpha$, and is strongly tractable in the randomized setting for the class $\lambdastd$. Hence for $s_{\gamma}>1$, randomization breaks intractability of approximation in the worst case setting for the class $\lambdastd$. In Section~\ref{qc} we study the quantum setting. We consider quantum algorithms that run on a (hypothetical) quantum computer. Our analysis in this section is based on the framework for quantum algorithms introduced in {\bf c}ite{He1} that is relevant for the approximate solution of problems of analysis. We only consider upper bounds for the class $\lambdastd$ and weighted Korobov spaces with $\alpha >1$ and $s_\gamma<\infty$.
We present a quantum algorithm with error at most $\varepsilon$ whose total cost is of order $$ \betaig( {\betaf c}(d) + d\betaig) \lambdaeft(\frac1{\varepsilonps}\right)^{1+3p/2} \quad\forall\,d=1,2,\deltaots\,,\ \forall\,\varepsilon\in (0,1) $$ with $p\alphapprox p^*(\lambdaall)$ being roughly the exponent of strong tractability in the worst case setting. The quantum algorithm uses about $d + \lambdaog \varepsilon^{-1}$ qubits. Hence, for moderate $d$ and even for large $\varepsilon^{-1}$, the number of qubits is quite modest. This is especially important, since the number of qubits will be a limiting resource for the foreseeable future. It is interesting to compare the results in the quantum setting with the results in the randomized setting for the class $\lambdastd$. The number of quantum queries is of order $\varepsilonps^{-1-3p^*(\lambdaall)/2}$ which is smaller than the corresponding number $\varepsilonps^{-2-p}$ of function values in the randomized setting only if $p^*(\lambdaall)= 2\max(s_\gamma,\alpha^{-1})<2$. This holds when $s_\gamma<1$, since $\alpha>1$ has been already assumed. However, the number of quantum combinatory operations is always {\it significantly} smaller than the corresponding number of combinatory operations in the randomized settings. If $d$ is fixed and $\varepsilon$ goes to zero then the total cost bound in the randomized setting is of order $d\varepsilon^{-2p-2}$ which is significantly larger than the total cost bound of order $({\betaf c}(d)+d)\varepsilon^{-1-3p/2}$ in the quantum setting. This means that the exponent of $\varepsilon^{-1}$ in the cost bound in the quantum setting is $1+p/2$ less than the exponent in the randomized setting. We do not know whether our upper bounds for the quantum computer can be improved. The speedup of the quantum setting over the randomized setting, defined as the ratio of the corresponding randomized and quantum costs, is of order $$ \frac{d}{{\betaf c}(d)+d}\,\lambdaeft(\frac1{\varepsilon}\right)^{1+p/2}. $$ Hence, we have a polynomial speedup of order $\varepsilon^{-(1+p/2)}$. If $p^*(\lambdaall)$ is close to zero, we may also take $p$ close to zero and then the speedup is roughly $\varepsilon^{-1}$. But $p^*(\lambdaall)$ can be arbitrarily large. This holds for large $s_\gamma$. In this case $p$ is also large and the speedup is huge. We finish our paper with two appendices. The first is about a general framework for quantum algorithms and the second contains a proof of the fact that weighted Korobov spaces are algebras. This fact is crucial for our upper bounds for quantum algorithms and hence for Theorem~\ref{th7}. \section{Approximation for Weighted Korobov Spaces} In this section we define approximation for periodic functions from the weighted Korobov space $H_d$. The space $H_d$ is a Hilbert space of complex-valued $L_2$ functions defined on $[0,1]^d$ that are periodic in each variable with period $1$. The inner product and norm of $H_d$ are defined as follows. We take a sequence $\gamma=\{\gamma_j\}$ of weights such that $$ 1\,\gammae\,\gamma_1\,\gammae\,\gamma_2\,\gammae\,{\bf c}dots\,>\,0. $$ Let $\alpha\gammae 0$. For $h=[h_1,h_2,\deltaots,h_d]\in \Z^d$ define $$ r_{\alpha}(\gamma,h)\,=\,\prod_{j=1}^dr_{\alpha}(\gamma_j,h_j)\qquad\mbox{with}\qquad r_{\alpha}(\gamma_j,h_j)\,=\,\lambdaeft\{\betaegin{array}{rr} 1\ \ & \mbox{if\ } h_j=0,\\ \gamma_j^{-s}|h_j|^{\alpha} & \mbox{if\ } h_j\not=0,\varepsilonnd{array}\right. $$ where $s=1$ for $\alpha>0$, and $s=0$ for $\alpha=0$. 
Note that $r_{\alpha}(\gamma,h)\gammae1$ for all $h\in \Z^d$, and the smallest $r_{\alpha}(\gamma,h)$ is achieved for $h=0$ and has the value $1$. The inner product in $H_d$ is given by $$ \left\langle f,g\right\rangle_d\,=\,\sum_{h\in \Z^d} r_{\alpha}(\gamma,h)\,\hat f(h)\,\overline{\hat g(h)}, $$ where $h=(h_1,\deltaots,h_d)$, and $\hat f(h)$ is the Fourier coefficient $$ \hat f(h)\,=\,\int_{[0,1]^d}\varepsilonxp\lambdaeft(-2\pi i\, h{\bf c}dot x\right)\,f(x)\,dx, $$ with $h{\bf c}dot x=h_1x_1+\deltaots+h_dx_d$. The inner product in $H_d$ can be also written as $$ \left\langle f,g\right\rangle_d\,=\,\hat f(0) \overline{\hat g(0)}\,+\, \sum_{h\in \Z^d, h\not=0}r_{\alpha}(\gamma,h)\,\hat f(h)\,\overline{\hat g(h)}, $$ thus the zeroth Fourier coefficient is unweighted. The norm in $H_d$ is $$ \|f\|_d\,=\,\betaigg(\sum_{h\in \Z^d}r_{\alpha}(\gamma,h)\,|\hat f(h)|^2\betaigg)^{1/2}. $$ Note that for $\alpha=0$ we have $r_{0}(\gamma,h)\varepsilonquiv 1$, and $$ \left\langle f,g\right\rangle_d\,=\,\sum_{h\in\Z^d}\hat f(h)\overline{\hat g(h)}\,=\, \int_{[0,1]^d}f(x)\overline{g(x)}\,dx. $$ Hence, in this case $H_d=L_2([0,1]^d)$ is the space of square integrable functions. Observe that for any $\alpha\gammae0$ we have $H_d\subset L_2([0,1]^d)$ and $$ \|f\|_{L_2([0,1]^d)}\,\lambdae\,\|f\|_d\qquad \forall\,f\in H_d. $$ \vskip 1pc For $\alpha>1$, the space $H_d$ is a reproducing kernel Hilbert space, see {\bf c}ite{A50,Wahba}. That is, there exists a function $K_d:[0,1]^d\times [0,1]^d\to {\mathbb{C}}$, called the reproducing kernel, such that $K_d({\bf c}dot,y)\in H_d$ for all $y\in [0,1]^d$, and $$ f(y)\,=\,\left\langle f,K_d({\bf c}dot,y)\right\rangle_d\qquad \forall\,f\in H_d,\ \forall\, y\in[0,1]^d. $$ The essence of the last formula is that the linear functional $L_y(f)=f(y)$ for $f\in H_d$ is continuous and its norm is $$ \|L_y\|\,=\,K_d^{1/2}(y,y)\qquad \forall\,y\in [0,1]^d. $$ It is known, see e.g. {\bf c}ite{SWkor}, that the reproducing kernel $K_d$ is \betaegin{equation}\lambdaabel{kernel} K_d(x,y)\,=\,\sum_{h\in \Z^d}\frac{\varepsilonxp\betaig(2\pi i h{\bf c}dot(x-y)\betaig)} {r_{\alpha}(\gamma,h)}. \varepsilonnd{equation} This can be rewritten as $$ K_d(x,y)\,=\,\,\prod_{j=1}^d \sum_{h=-\infty}^{\infty}\frac{\varepsilonxp\lambdaeft(2\pi i h (x_j-y_j)\right)} {r_{\alpha}(\gamma_j,h)}\,=\, \prod_{j=1}^d\lambdaeft(1+2\gamma_j\sum_{h=1}^\infty\frac{{\bf c}os\lambdaeft(2\pi h(x_j-y_j)\right)}{h^{\alpha}}\right). $$ Hence, $K_d(x,y)$ depends on $x-y$ and takes only real values. From this we have $$ K_d(y,y)\,=\, \prod_{j=1}^d\lambdaeft(1+2\gamma_j\zeta(\alpha)\right), $$ where $\zeta$ is the Riemann zeta function, $\zeta(\alpha)=\sum_{h=1}^\infty h^{-\alpha}$. Hence, $\alpha>1$ guarantees that $K_d(y,y)$ is well defined and that $\|L_y\|$ is finite. \vskip 1pc We return to the general case for $\alpha\gammae 0$. For $\gamma_j\varepsilonquiv1$, the space $H_d$ is the $L_2$ version of the (unweighted) Korobov space of periodic functions. For general weights $\gamma_j$, the space $H_{d}$ is called a {\varepsilonm weighted} Korobov space. We now explain the role of weights $\gamma_j$. Take $f\in H_d$ with $\|f\|_d\lambdae 1$. For small values of~$\gamma_j$ we must have small Fourier coefficients $\hat f (h)$ with $h_j\not=0$. Indeed, $\|f\|_d\lambdae 1$ implies that $r_{\alpha}(\gamma,h)|\hat f(h)|^2\lambdae1$, and for $h_j\not=0$ this implies that $|\hat f(h)|^2 \lambdae \gamma_j/|h_j|^{\alpha}\lambdae\gamma_j$, as claimed. 
Thus, small $\gamma_j$'s correspond to smoother functions in the unit ball of $H_d$ in the sense that the Fourier coefficients $\hat f(h)$ with $h_j\not=0$ must scale like $\gamma_j^{1/2}$ in order to keep $\|f\|_{d}\lambdae 1$. The spaces $H_d$ are related to each other when we vary $d$. Indeed, it is easy to check that for $d_1\lambdae d_2$ we have $$ H_{d_1}\,\subseteq\,H_{d_2}\quad\mbox{and}\quad \|f\|_{d_1}=\|f\|_{d_2}\qquad \forall\,f\in H_{d_1}. $$ That is, a function of $d_1$ variables from $H_{d_1}$, when treated as a function of $d_2$ variables with no dependence on the last $d_2-d_1$ variables, also belongs to $H_{d_2}$ with the same norm as in $H_{d_1}$. This means that we have an increasing sequence of spaces $H_1\subset H_2\subset{\bf c}dots\subset\,H_d$, and an increasing sequence of the unit balls of $H_d$, $B_1\subset B_2\subset{\bf c}dots\subset B_d$, and $H_{d_1}{\bf c}ap B_{d_2}=B_{d_1}$ for $d_1\lambdae d_2$. So far we assumed that all weights $\gamma_j$ are positive. We can also take zero weights as the limiting case of positive weights when we adopt the convention that $0/0=0$. Indeed, if one of the weights tends to zero, say $\gamma_d\to0$, then $r_{\alpha}(\gamma,h)$ goes to infinity for all $h$ with $h_d\not=0$. Thus to guarantee that $\|f\|_d$ remains finite we must have $\hat f(h)=0$ for all $h$ with $h_d \ne 0$. This means that $f$ does not depend on the $x_d$ coordinate. Similarly, if all the weights $\gamma_j$ are zero for $j\gammae k$ then a function $f$ from $H_{d}$ does not depend on the coordinates $x_k,x_{k+1},\deltaots, x_d$. \vskip 1pc We are ready to define {\it multivariate approximation} (simply called approximation) as the operator $\alphapp_d:\,H_d\to L_2([0,1]^d)$ given by $$ \alphapp_d f\,=\,f. $$ Hence, $\alphapp_d$ is the embedding from the Korobov space $H_d$ to the space $L_2([0,1]^d)$. It is easy to see that $\|\alphapp_d\|=1$; moreover $\alphapp_d$ is a compact embedding iff $\alpha>0$. Indeed, consider the operator $W_d:=\alphapp_d^*\,\alphapp_d:H_d\to H_d$, where $\alphapp^*_d:L_2([0,1]^d)\to H_d$ is the adjoint operator to $\alphapp_d$. Then for all $f,g\in H_d$ we have $$ \left\langle W_df,g\right\rangle_d\,=\,\left\langle \alphapp_d\,f,\alphapp_d\,g\right\rangle_{L_2([0,1]^d)}\,=\, \left\langle f,g\right\rangle_{L_2([0,1]^d)}. $$ {} From this we conclude that $$ W_df_h\,=\, r^{-1}_{\alpha}(\gamma,h)\,f_h\qquad \forall\,h\in\Z^d, $$ where $f_h(x)=\varepsilonxp\lambdaeft(2\pi i h{\bf c}dot x\right)/r_{\alpha}^{1/2}(\gamma,h)$. We have $\|f_h\|_d=1$ and ${\rm span}(f_h\,:\, h\in \Z^d)$ is dense in $L_2([0,1]^d)$. This yields that $W_d$ has the form \betaegin{equation}\lambdaabel{wd} (W_df) (x)\,=\,\sum_{h\in \Z^d}r^{-1}_{\alpha}(\gamma,h)\,\hat f(h) \,\varepsilonxp\lambdaeft(2\pi i h{\bf c}dot x\right)\qquad \forall\,f\in H_d, \varepsilonnd{equation} where for $\alpha\in [0,1]$ the convergence of the last series is understood in the $L_2$ sense. Thus, $H_d$ has an orthonormal basis consisting of eigenvectors of $W_d$, and $r^{-1}_{\alpha}(\gamma,h)$ is the eigenvalue of $W_d$ corresponding to $f_h$ for $h\in \Z^d$. Clearly, $$ \|\alphapp_df\|_{L_2([0,1]^d)}\,=\,\left\langle W_df,f\right\rangle_d^{1/2}\qquad \forall\,f\in H_d, $$ and therefore, since $W_d$ is self adjoint, $$ \|\alphapp_d\|\,=\,\|W_d\|^{1/2}\,=\,\lambdaeft(\max_{h\in \Z^d} r^{-1}_{\alpha}(\gamma,h)\right)^{1/2}\,=\,1. $$ For $\alpha=0$ we have $\alphapp_d=W_d$ and both are the identity operator on $L_2([0,1]^d)$, and therefore they are {\it not} compact. 
In contrast, for $\alpha>0$, the eigenvalues of $W_d$ go to zero as $|h|=|h_1|+|h_2|+{\bf c}dots+ |h_d|$ goes to infinity, and therefore the operator $W_d$ is compact and $\alphapp_d$ is a compact embedding. \section{Worst Case Setting} \lambdaabel{det} In this section we deal with tractability of approximation in the worst case setting. To recall the notion of tractability we proceed as follows. We approximate $\alphapp_d$ by algorithms\footnote{It is known that nonlinear algorithms as well as adaptive choice of $L_k$ do not help in decreasing the worst case error, see e.g., {\bf c}ite{TWW}.} of the form $$ A_{n,d}(f)\,=\,\sum_{k=1}^na_kL_k(f). $$ Here, the $a_k$'s are some elements of $L_2([0,1]^d)$, and the $L_k$'s are some continuous linear functionals defined on $H_d$. Observe that the functions $a_k$ do not depend on $f$, they form the fixed output basis of the algorithm, see {\bf c}ite{NW00}. For all the algorithms in this paper we use the optimal basis consisting of the eigenvectors of $W_d$. We assume that $L_k\in \Lambda$, and consider two classes of information $\Lambda$. The first class is $\Lambda=\lambdaall=H_d^*$ which consists of {\varepsilonm all} continuous linear functionals. That is, $L\in\lambdaall$ iff there exists $g\in H_d$ such that $L(f)=\left\langle f,g\right\rangle_d$ for all $f\in H_d$. The class $\lambdaall$ is well defined for all $\alpha\gammae 0$. The second class $\Lambda=\lambdastd$ is called standard information and is defined only for $\alpha>1$, $$ \Lambda\,=\,\lambdastd\,=\,\betaig\{L_x\,:\ x\in [0,1]^d\ \mbox{with}\ L_x(f)=f(x)\ \forall\,f\in H_d\,\betaig\}. $$ Hence, the class $\lambdastd$ consists of function evaluations. They are continuous linear functionals since $H_d$ is a reproducing kernel Hilbert space whenever $\alpha>1$. The worst case error of the algorithm $A_{n,d}$ is defined as $$ \varepsilonwor(A_{n,d})\,=\,\sup\betaig\{\,\|f-A_{n,d}(f)\|_{L_2([0,1]^d)}\, :\ f\in H_d,\ \|f\|_d\,\lambdae\,1\,\betaig\}\,=\, \betaigg\|\alphapp_d-\sum_{k=1}^na_kL_k({\bf c}dot)\betaigg\|. $$ Let ${\bf c}wor(\varepsilonps,H_d,\Lambda)$ be the minimal $n$ for which we can find an algorithm $A_{n,d}$, i.e., find elements $a_k\in L_2([0,1]^d)$ and functionals $L_k\in \Lambda$, with worst case error at most $\varepsilonps \|\alphapp_d\|$, that is, $$ {\bf c}wor(\varepsilonps,H_d,\Lambda)\,=\,\min\betaig\{\,n\,:\ \varepsilonxists \ A_{n,d}\ \ \mbox{such that}\ \ \varepsilonwor(A_{n,d})\lambdae \varepsilonps\,\|\alphapp_d\|\ \betaig\}. $$ Observe that in our case $\|\alphapp_d\|=1$ and this represents the initial error that we can achieve by the zero algorithm $A_{n,d}=0$ without sampling the function. Therefore $\varepsilonps\|\alphapp_d\|=\varepsilonps$ can be interpreted as reducing the initial error by a factor $\varepsilonps$. Obviously, it is only of interest to consider $\varepsilonps<1$. This minimal number ${\bf c}wor(\varepsilonps,H_d,\Lambda)$ of functional evaluations is closely related to the worst case complexity of the approximation problem, see e.g., {\bf c}ite{TWW}. This explains our choice of notation. We are ready to define tractability, see {\bf c}ite{W94}. We say that approximation is {\it tractable} in the class $\Lambda$ iff there exist nonnegative numbers $C$, $p$ and $q$ such that \betaegin{equation}\lambdaabel{trac} {\bf c}wor(\varepsilonps,H_d,\Lambda)\,\lambdae\,C\,\varepsilonps^{-p}\,d^{\,q}\qquad \forall\, \varepsilonps\in(0,1),\ \forall\,d \in \N. 
\varepsilonnd{equation} The essence of tractability is that the minimal number of functional evaluations is bounded by a polynomial in $\varepsilonps^{-1}$ and $d$. We say that approximation is {\it strongly tractable} in the class $\Lambda$ iff $q=0$ in (\ref{trac}). Hence, strong tractability means that the minimal number of functional evaluations has a bound independent of $d$ and polynomially dependent on $\varepsilonps^{-1}$. The infimum of $p$ in (\ref{trac}) is called the {\it exponent} of strong tractability and denoted by $p^*=p^*(\Lambda)$. That is, for any positive $\deltaelta$ there exists a positive $C_{\deltaelta}$ such that $$ {\bf c}wor(\varepsilonps,H_d,\Lambda)\,\lambdae\,C_{\deltaelta}\,\varepsilonps^{-(p^*+\deltaelta)} \qquad \forall\, \varepsilonps\in(0,1),\ \forall\,d \in \N $$ and $p^*$ is the smallest number with this property. Necessary and sufficient conditions on tractability of approximation in the worst case setting easily follow from {\bf c}ite{HW,WWwei,WWpow}. In order to present them we need to recall the notion of the sum-exponent $s_\gamma$ of the sequence $\gamma$, see {\bf c}ite{WWwei}, which is defined as \betaegin{equation}\lambdaabel{sum-exponent} s_{\gamma}\,=\, \inf\betaigg\{\,s>0\ :\ \sum_{j=1}^\infty\gamma_j^s\,<\,\infty\,\betaigg\}, \varepsilonnd{equation} with the convention that the infimum of the empty set is taken as infinity. Hence, for the unweighted case, $\gamma_j\varepsilonquiv 1$, we have $s_\gamma=\infty$. For $\gamma_j=\Theta(j^{-\kappa})$ with $\kappa>0$, we have $s_\gamma=1/\kappa$. On the other hand, if $s_{\gamma}$ is finite then for any positive $\deltaelta$ there exists a positive $M_{\deltaelta}$ such that $k\,\gamma_k^{s_{\gamma}+\deltaelta}\,\lambdae\,\sum_{j=1}^\infty\gamma_j^{s_{\gamma}+\deltaelta}\,\lambdae\, M_{\deltaelta}$. Hence, $\gamma_k=O(k^{-1/(s_{\gamma}+\deltaelta)})$. This shows that $s_{\gamma}$ is finite iff $\gamma_j$ goes to zero polynomially fast in $j^{-1}$, and the reciprocal of $s_{\gamma}$ roughly measures the rate of this convergence. We begin with the class $\lambdaall$. Complexity and optimal algorithms are well known in this case, see e.g., {\bf c}ite{TWW}. Let us define \betaegin{equation}\lambdaabel{seth} R(\varepsilonps,d)\,=\,\betaig\{\,h\in \Z^d\,:\, r^{-1}_{\alpha}(\gamma,h)\,>\,\varepsilonps^{2}\,\betaig\} \varepsilonnd{equation} as the set of indices $h$ for which the eigenvalues of $W_d$, see (\ref{wd}), are greater than $\varepsilonps^{2}$. Then the complexity ${\bf c}wor(\varepsilonps,H_d,\lambdaall)$ is equal to the cardinality of the set $R(\varepsilonps,d)$, \betaegin{equation}\lambdaabel{7plus} {\bf c}wor(\varepsilonps,H_d,\lambdaall)\,=\,\betaig| R(\varepsilonps,d)\betaig|, \varepsilonnd{equation} and the algorithm \betaegin{equation}\lambdaabel{algop} A_{n,d}(f)(x)\,=\,\sum_{h\in R(\varepsilonps,d)}\hat f(h)\,\varepsilonxp\lambdaeft(2\pi i h{\bf c}dot x\right) \varepsilonnd{equation} with $n=\betaig| R(\varepsilonps,d)\betaig|$ is optimal and has worst case error at most $\varepsilonps$. This simply means that the truncation of the Fourier series to terms corresponding to the largest eigenvalues of $W_d$ is the best approximation of the function $f$. For $\alpha=0$ all eigenvalues of $W_d$ have the value $1$. Thus for $\varepsilonps< 1$ we have infinitely many eigenvalues greater than $\varepsilonps^2$ even for~$d=1$. Therefore the cardinality of the set $R(\varepsilonps,1)$ and the complexity are infinite, which means that approximation is not even solvable, much less tractable.
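\vskip 1pc For illustration (a sketch of ours, not used in any proof), the index set $R(\varepsilon,d)$ from (\ref{seth}) can be enumerated coordinate by coordinate: each factor $r_{\alpha}(\gamma_j,h_j)$ is at least one, so partial products may be pruned as soon as they reach $\varepsilon^{-2}$. The cardinality of the enumerated set is the complexity (\ref{7plus}), and the optimal algorithm (\ref{algop}) keeps exactly the Fourier coefficients indexed by this set. \betaegin{verbatim}
def index_set(alpha, gamma, eps):
    """Enumerate R(eps, d) = { h in Z^d : r_alpha(gamma, h) < eps^(-2) }.

    Assumes alpha > 0 and 1 >= gamma_1 >= gamma_2 >= ... > 0.
    """
    bound = eps ** -2
    partial = [((), 1.0)]        # (prefix of the multi-index, weight so far)
    for gamma_j in gamma:
        hmax = int((gamma_j * bound) ** (1.0 / alpha))
        new = []
        for h, r in partial:
            for hj in range(-hmax, hmax + 1):
                rj = 1.0 if hj == 0 else abs(hj) ** alpha / gamma_j
                if r * rj < bound:
                    new.append((h + (hj,), r * rj))
        partial = new
    return [h for h, _ in partial]

# |R(eps, d)| = cwor(eps, H_d, Lambda^all) for d = 3, alpha = 2, gamma_j = j^(-2):
print(len(index_set(2.0, [1.0, 0.25, 1.0 / 9.0], 0.1)))
\varepsilonnd{verbatim} \vskip 1pc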
For $\alpha>0$ and $d=1$, we obtain $$ {\bf c}wor(\varepsilonps,H_1,\lambdaall)\,\alphapprox\,2\,\gamma_1^{1/\alpha}\,\varepsilonps^{-2/\alpha}. $$ It is proven in {\bf c}ite{WWwei} that strong tractability and tractability are equivalent, and this holds iff $s_{\gamma}$ is finite. Furthermore, the exponent of strong tractability is $p^*(\lambdaall)=2\max\lambdaeft(s_{\gamma},\alpha^{-1}\right)$. We stress that the exponent of strong tractability is determined by the weight sequence $\gamma$ if $s_{\gamma}>\alpha^{-1}$. On the other hand, if $s_{\gamma}\lambdae\alpha^{-1}$ then $p^*(\lambdaall)=2\alpha^{-1}$, and this exponent appears in the complexity even when $d=1$. For such weights, i.e., $s_{\gamma}\lambdae\alpha^{-1}$, multivariate approximation in any number of variables $d$ requires roughly the same number of functional evaluations as for $d=1$. We now turn to the class $\lambdastd$ and assume that $\alpha>1$. Formally, tractability of approximation in the class $\lambdastd$ has not been studied; however, it is easy to analyze this problem based on the existing results. First, observe that approximation is not easier than {\it multivariate integration} (or simply integration) defined as $$ {\rm INT}_d(f)\,=\,\int_{[0,1]^d}f(x)\,dx\,=\,\hat f(0)\qquad \forall\,f\in H_d. $$ Indeed, $\|{\rm INT}_d\|=1$, and for any algorithm $A_{n,d}(f) =\sum_{k=1}^na_kf(x_k)$ for some $a_k\in L_2([0,1]^d)$ and some $x_k\in [0,1]^d$, we have $$ \|\alphapp_df-A_{n,d}(f)\|^2_{L_2([0,1]^d)}\,=\,\sum_{h\in\Z^d}\betaig|\hat f(h)- \widehat{A_{n,d}(f)}(h)\betaig|^2\,\gammae\,\betaigg|\hat f(0)-\sum_{k=1}^nb_kf(x_k)\betaigg|^2, $$ with $b_k=\int_{[0,1]^d}a_k(x)\,dx$. Hence, it is not easier to approximate $\alphapp_d$ than ${\rm INT}_d$, and necessary conditions on tractability of integration are also necessary conditions on tractability for approximation. It is known, see {\bf c}ite{HW}, that integration is strongly tractable iff $\sum_{j=1}^\infty\gamma_j<\infty$, and is tractable iff $a:=\lambdaimsup_{d\to\infty}\sum_{j=1}^d\gamma_j/\lambdan\,d\,<\,\infty$. Hence, the same conditions are also necessary for tractability of approximation. Due to {\bf c}ite{WWpow}, it turns out that these conditions are also sufficient for tractability of approximation. More precisely, if $\sum_{j=1}^\infty\gamma_j <\infty$, then approximation is strongly tractable and its exponent $p^*(\lambdastd)\in[p^*(\lambdaall),p^*(\lambdaall)+2]$, see Corollary 2 (i) of {\bf c}ite{WWpow}. Clearly, in this case $p^*(\lambdaall)\lambdae 2$. Assume that $a\in (0,\infty)$. Then there exists a positive $M$ such that $$ d\,\gamma_d/\lambdan\,d\,\lambdae\,\sum_{j=1}^d\gamma_j/\lambdan\,d\,<\,M $$ for all $d$. Hence, $\gamma_j=O(j^{-1}\lambdan\,j)$, and clearly $s_{\gamma}=1$. Once more, by Corollary 2 (i) of {\bf c}ite{WWpow}, we know that for any positive $\deltaelta$ there exists a positive number $C_{\deltaelta}$ such that the worst case complexity of approximation is bounded by $C_{\deltaelta}\,\varepsilonps^{-(2+\deltaelta)}d^{4\zeta(\alpha)\,a+\deltaelta}$. This proves tractability of approximation. We summarize this analysis in the following theorem. \betaegin{thm} Consider approximation $\alphapp_d\,:\,H_d\,\to\,L_2([0,1]^d)$ in the worst case setting. \betaegin{enumerate} \item Let $\alpha\gammae0$. Strong tractability and tractability of approximation in the class $\lambdaall$ are equivalent, and this holds iff $s_\gamma\,<\,\infty$ and $\alpha>0$. 
In this case, the exponent of strong tractability is $$ p^*(\lambdaall)\,=\,2\,\max\lambdaeft(s_\gamma,\alpha^{-1}\right). $$ \item Let $\alpha>1$. Strong tractability of approximation in the class $\lambdastd$ holds iff $$ \sum_{j=1}^\infty \gamma_j\,<\,\infty. $$ When this holds, then $p^*(\lambdaall)\lambdae 2$ and the exponent of strong tractability $$ p^*(\lambdastd)\in [p^*(\lambdaall),p^*(\lambdaall)+2]. $$ \item Let $\alpha>1$. Tractability of approximation in the class $\lambdastd$ holds iff $$ a\,:=\,\lambdaimsup_{d\to\infty}\frac{\sum_{j=1}^d\gamma_j}{\lambdan\,d}\,<\,\infty. $$ When this holds, for any positive $\deltaelta$ there exists a positive $C_{\deltaelta}$ such that $$ {\bf c}wor(\varepsilonps,H_d,\lambdastd)\,\lambdae\,C_{\deltaelta}\, \varepsilonps^{-(2+\deltaelta)}\,d^{\,4\zeta(\alpha)a +\deltaelta}\qquad\forall\,d=1,2,\deltaots\,,\ \forall\,\varepsilon\in (0,1), $$ where $\zeta$ is the Riemann zeta function. \varepsilonnd{enumerate} \varepsilonnd{thm} \section{Randomized Setting} \lambdaabel{mc} In this section we deal with tractability of approximation in the randomized setting for the two classes $\lambdaall$ and $\lambdastd$. The randomized setting is precisely defined in {\bf c}ite{TWW}. Here we only mention that we consider randomized algorithms $$ A_{n,d}(f,\omega)\,=\,\varphi _{\omega}\betaigg(L_{1,\omega}(f),L_{2,\omega}(f), \deltaots,L_{n,\omega}(f)\betaigg), $$ where $\omega$ is a random element that is distributed according to a probability measure $\varrho $, and $L_{k,\omega}\in \Lambda$ with $\varphi _{\omega}$ being a mapping from ${\mathbb{C}}^n$ into $L_2([0,1]^d)$. The essence of randomized algorithms is that the evaluations, as well as the way they are combined, may depend on a random element. The primary example of a randomized algorithm is the standard Monte Carlo for approximating multivariate integration which is of the form $$ A_{n,d}(f,\omega)\,=\,\frac1n\,\sum_{k=1}^nf(\omega_k), $$ where $\omega=[\omega_1,\omega_2,\deltaots,\omega_n]$ with independent and uniformly distributed $\omega_k$ over $[0,1]^d$, which requires $nd$ random numbers from $[0,1]$. In this case, $L_{k,\omega}(f)=f(\omega_k)$ are function values at random sample points, and $\varphi _{\omega}(y_1,y_2,\deltaots,y_n)= n^{-1}\sum_{k=1}^ny_k$ does not depend on $\omega$ and is a deterministic mapping. The randomized error of the algorithm $A_{n,d}$ is defined as $$ \varepsilonran(A_{n,d})\,=\,\sup\lambdaeft\{\, \E^{1/2}\lambdaeft(\|f-A_{n,d}(f,\omega) \|^2_{L_2([0,1]^d)}\right)\ :\ f\in H_d,\, \|f\|_d\,\lambdae\,1\,\right\}. $$ Hence, we first take the square root of the average value of the error $\|f-A_{n,d}(f,\omega)\|^2_{L_2([0,1]^d)}$ with respect to $\omega$ according to the probability measure $\varrho $, and then take the worst case with respect to $f$ from the unit ball of $H_d$. Let ${\bf c}ran(\varepsilonps,H_d,\Lambda)$ be the minimal $n$ for which we can find an algorithm $A_{n,d}$, i.e., a measure $\varrho $, functionals $L_{k,\omega}$ and a mapping $\varphi _{\omega}$, with randomized error at most $\varepsilonps$. That is, $$ {\bf c}ran(\varepsilonps,H_d,\Lambda)\,=\,\min\betaig\{\,n\,:\ \varepsilonxists \ A_{n,d}\ \ \mbox{such that}\ \ \varepsilonran(A_{n,d})\lambdae \varepsilonps\ \betaig\}. $$ Then tractability in the randomized setting is defined as in the paragraph containing (\ref{trac}), with the replacement of ${\bf c}wor(\varepsilonps,H_d,\Lambda)$ by ${\bf c}ran(\varepsilonps,H_d,\Lambda)$.
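\vskip 1pc The standard Monte Carlo algorithm recalled above is easily made concrete. The following Python sketch is purely illustrative (the test integrand is ours and has exact integral one); it uses $n$ function values at independent uniform sample points, whatever the dimension $d$. \betaegin{verbatim}
import math
import random

def monte_carlo_integral(f, d, n, rng=random.Random(0)):
    """Estimate INT_d(f), the integral of f over [0,1]^d, from n i.i.d. samples."""
    return sum(f([rng.random() for _ in range(d)]) for _ in range(n)) / n

# Illustrative periodic integrand with exact integral 1 over [0,1]^d:
f = lambda x: math.prod(1.0 + 0.1 * math.cos(2.0 * math.pi * t) for t in x)
print(monte_carlo_integral(f, d=10, n=10_000))
\varepsilonnd{verbatim} \vskip 1pc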
We are ready to discuss tractability in the randomized setting for the class $\lambdaall$. It is proven in {\bf c}ite{N92} that randomization does not really help for approximating linear operators over Hilbert spaces for the class $\lambdaall$ since $$ {\bf c}wor(2^{1/2}\varepsilonps,H_d,\lambdaall)\,\lambdae\, {\bf c}ran(\varepsilonps,H_d,\lambdaall)\,\lambdae\, {\bf c}wor(\varepsilonps,H_d,\lambdaall), $$ and these estimates hold for all $\varepsilonps\in (0,1)$ and for all $d \in \N$. This means that tractability in the randomized setting is equivalent to tractability in the worst case setting, and we can use the first part of Theorem 1 to characterize tractability also in the randomized setting. We now turn to the class $\lambdastd$. It is well known that randomization may significantly help for some problems. The best known example is the standard Monte Carlo for multivariate integration of $d$ variables, which requires at most $\varepsilonps^{-2}$ random function values if the $L_2$ norm of a function is at most one, independently of how large $d$ is. We now show that randomization also helps for approximation over Korobov spaces, and may even break intractability of approximation in the worst case setting. As we shall see, this will be achieved by a randomized algorithm using the standard Monte Carlo for approximating the Fourier coefficients corresponding to the largest eigenvalues of the operator $W_d$ defined by (\ref{wd}). To define such an algorithm we proceed as follows. We assume that $\alpha>1$ so that the class $\lambdastd$ is well defined. Without loss of generality we also assume that approximation is tractable in the class $\lambdaall$, which is equivalent to assuming that $s_\gamma<\infty$. We know from Section 2 that $R(\varepsilonps/2^{1/2},d)$ is the set of indices $h$ for which the eigenvalues of $W_d$ are greater than $\varepsilonps^{2}/2$, see (\ref{seth}). We also know that the cardinality of the set $R(\varepsilonps/2^{1/2},d)$ is exactly equal to ${\bf c}wor(\varepsilonps/2^{1/2},H_d,\lambdaall)$ and that for any positive $\deltaelta$ there exists a positive $C_{\deltaelta}$ such that $$ \betaig| R(\varepsilonps/2^{1/2},d)\betaig|\,=\,{\bf c}wor(\varepsilonps/2^{1/2},H_d,\lambdaall)\,\lambdae\, C_{\deltaelta}\,\varepsilonps^{-(p^*(\lambdaall)+\deltaelta)}\quad \quad\forall\,d=1,2,\deltaots\,,\ \forall\,\varepsilon\in (0,1), $$ with $p^*(\lambdaall)=2\max(s_\gamma,\alpha^{-1})$. We want to approximate $$ f(x)\,=\,\sum_{h\in\Z^d}\hat f(h)\varepsilonxp\lambdaeft(2\pi ih{\bf c}dot x\right) $$ for $f\in H_d$. The main idea of our algorithm is to approximate the Fourier coefficients $\hat f(h)$ for $h\in R(\varepsilonps/2^{1/2},d)$ by the standard Monte Carlo, whereas the Fourier coefficients $\hat f(h)$ for $h\notin R(\varepsilonps/2^{1/2},d)$ are approximated simply by zero. That is, the algorithm $A_{n,d}$ takes the form \betaegin{equation}\lambdaabel{our1} A_{n,d}(f,\omega)(x)\,=\,\sum_{h\in R(\varepsilonps/2^{1/2},d)}\lambdaeft(\frac1n \sum_{k=1}^nf(\omega_k)\varepsilonxp\lambdaeft(-2\pi i h{\bf c}dot\omega_k\right)\right)\, \varepsilonxp\lambdaeft(2\pi i h{\bf c}dot x\right), \varepsilonnd{equation} where, as for the standard Monte Carlo, $\omega=(\omega_1,\omega_2,\deltaots,\omega_n)$ with independent and uniformly distributed $\omega_k$ over $[0,1]^d$.
The last formula can be rewritten as \betaegin{equation}\lambdaabel{our2} A_{n,d}(f,\omega)(x)\,=\, \frac1n\sum_{k=1}^nf(\omega_k)\lambdaeft(\sum_{h\in R(\varepsilonps/2^{1/2},d)} \varepsilonxp\betaigg(-2\pi i h{\bf c}dot(x-\omega_k)\betaigg)\,\right). \varepsilonnd{equation} {} From (\ref{our2}) it is clear that the randomized algorithm $A_{n,d}$ uses $n$ random function values. We are ready to analyze the randomized error of the algorithm $A_{n,d}$. First of all observe that $$ \int_{[0,1]^d}\betaig|f(x)-A_{n,d}(f,\omega)\betaig|^2dx\,=\, \sum_{h\in R(\varepsilonps/2^{1/2},d)}\betaigg|\hat f(h)-\frac1n\sum_{k=1}^nf(\omega_k) e^{-2\pi i h{\bf c}dot\omega_k}\betaigg|^2\,+\,\sum_{h\notin R(\varepsilonps/2^{1/2},d)}| \hat f(h)|^2. $$ We now compute the average value of the last formula with respect to $\omega$. Using the well known formula for the Monte Carlo randomized error we obtain $$ \sum_{h\in R(\varepsilonps/2^{1/2},d)}\frac{{\rm INT}_d(|f|^2)-|\hat f(h)|^2}{n} \,+\,\sum_{h\notin R(\varepsilonps/2^{1/2},d)}|\hat f(h)|^2. $$ Since ${\rm INT}_d(|f|^2)=\sum_{h\in\Z^d}|\hat f(h)|^2\lambdae\|f\|_d^2$, and \betaegin{eqnarray*} \sum_{h\notin R(\varepsilonps/2^{1/2},d)}|\hat f(h)|^2\,&=&\, \sum_{h\notin R(\varepsilonps/2^{1/2},d)}r_{\alpha}(\gamma,h)|\hat f(h)|^2/r_{\alpha}(\gamma,h)\\ &\lambdae&\, \tfrac12{\varepsilonps^2}\,\sum_{h\notin R(\varepsilonps/2^{1/2},d)}r_{\alpha}(\gamma,h)|\hat f(h)|^2\,\lambdae\,\tfrac12{\varepsilonps^2}\,\|f\|^2_d, \varepsilonnd{eqnarray*} the error of $A_{n,d}$ satisfies $$ {\varepsilonran}(A_{n,d})^2\,\lambdae\,\frac{|R(\varepsilonps/2^{1/2},d)|}{n}\,+\,\frac{\varepsilonps^2}2. $$ Taking \betaegin{equation}\lambdaabel{ourn} n\,=\,\frac{2\,|R(\varepsilonps/2^{1/2},d)|}{\varepsilonps^2}\,=\,O\lambdaeft( \varepsilonps^{-(2+p^*(\lambdaall)+\deltaelta)}\right) \varepsilonnd{equation} we conclude that the error of $A_{n,d}$ is at most $\varepsilonps$. This is achieved for $n$ given by (\ref{ourn}), which does {\it not} depend on $d$, and which depends polynomially on $\varepsilonps^{-1}$ with an exponent that exceeds the exponent of strong tractability in the class $\lambdaall$, roughly speaking, by at most two. This means that approximation is strongly tractable in the class $\lambdastd$ under exactly the same conditions as in the class $\lambdaall$. We now discuss the total cost of the algorithm $A_{n,d}$. This algorithm requires $n$ function evaluations $f(\omega_k)$. Since $\omega_k$ is a vector with $d$ components, it seems reasonable to assume that the cost of one such function evaluation depends on $d$ and is, say, ${\betaf c}(d)$. Obviously, ${\betaf c}(d)$ should not be exponential in $d$ since for large $d$ we could not even compute one function value. On the other hand, ${\betaf c}(d)$ should be at least linear in $d$ since our functions may depend on all $d$ variables. Let us also assume that we can perform combinatory operations such as arithmetic operations over complex numbers, comparisons of real numbers, and evaluations of exponential functions. For simplicity assume that the cost of one combinatory operation is taken as unity. Hence, for given $h$ and $\omega_k$, we can compute the inner product $h{\bf c}dot \omega_k$ and then $\varepsilonxp(-2\pi i h{\bf c}dot\omega_k)$ in cost of order $d$. The implementation of the algorithm $A_{n,d}$ can be done as follows. We compute and output $$ y_h\,=\, \frac1n \sum_{k=1}^nf(\omega_k)\varepsilonxp\lambdaeft(-2\pi i h{\bf c}dot\omega_k\right) $$ for all $h\in R(\varepsilonps/2^{1/2},d)$. 
This is done in cost of order $$ n\,{\betaf c}(d)\,+\,n\,d\,|R(\varepsilonps/2^{1/2},d)|. $$ Knowing the coefficients $y_h$ we can compute the algorithm $A_{n,d}$ at any vector $x\in [0,1]^d$ as $$ A_{n,d}(f,\omega)(x)\,=\,\sum_{h\in R(\varepsilonps/2^{1/2},d)}y_h\, \varepsilonxp\lambdaeft(2\pi i h{\bf c}dot x\right) $$ with cost of order $d\,|R(\varepsilonps/2^{1/2},d)|$. Using the estimates on $|R(\varepsilonps/2^{1/2},d)|$ and $n$ given by (\ref{ourn}), we conclude that the total cost of the algorithm $A_{n,d}$ is of order $$ \lambdaeft(\frac1{\varepsilonps}\right)^{p+2}\, {\betaf c}(d)\,+\, \lambdaeft(\frac1{\varepsilonps}\right)^{2p+2}\,d $$ with $p=p^*(\lambdaall)+\deltaelta$. Hence, the only dependence on $d$ is through ${\betaf c}(d)$ and $d$. We stress the difference in the exponents of the number of function values and the number of combinatory operations used by the algorithm $A_{n,d}$. For a fixed $\varepsilonps$ and varying $d$, the first term of the cost will dominate the second term when ${\betaf c}(d)$ grows more than linearly in $d$. In this case the first exponent $p+2$ determines the total cost of the algorithm $A_{n,d}$. On the other hand, for a fixed $d$ and $\varepsilonps$ tending to zero, the opposite is true, and the second term dominates the first term of the cost, and the second exponent $2p+2$ determines the cost of $A_{n,d}$. We summarize this analysis in the following theorem. \betaegin{thm} Consider approximation $\alphapp_d\,:\,H_d\,\to\,L_2([0,1]^d)$ in the randomized setting. \betaegin{enumerate} \item Let $\alpha\gammae0$. Strong tractability and tractability of approximation in the class $\lambdaall$ are equivalent, and this holds iff $s_\gamma\,<\,\infty$ and $\alpha>0$. When this holds, the exponent of strong tractability is $$ p^*(\lambdaall)\,=\,2\,\max\lambdaeft(s_\gamma,\alpha^{-1}\right). $$ \item Let $\alpha>1$. Strong tractability and tractability of approximation in the class $\lambdastd$ are equivalent, and this holds under the same conditions as in the class $\lambdaall$, that is, iff $s_\gamma<\infty$. When this holds, the exponent of strong tractability $p^*(\lambdastd)\in [p^*(\lambdaall),p^*(\lambdaall)+2]$. \item The algorithm $A_{n,d}$ defined by (\ref{our1}) with $n$ given by (\ref{ourn}) of order roughly $\varepsilonps^{-(p^*(\lambdaall)+2)}$ approximates $\alphapp_d$ with randomized error at most $\varepsilonps$. For any positive $\deltaelta$ there exists a positive number $K_{\deltaelta}$ such that the total cost of the algorithm $A_{n,d}$ is bounded by $$ K_{\deltaelta}\lambdaeft( \lambdaeft(\frac1{\varepsilonps}\right)^{p+2}\, {\betaf c}(d)\,+\, \lambdaeft(\frac1{\varepsilonps}\right)^{2p+2}\,d\right) \ \quad\forall\,d=1,2,\deltaots\,,\ \forall\,\varepsilon\in (0,1), $$ with $p=p^*(\lambdaall)+\deltaelta$. \varepsilonnd{enumerate} \varepsilonnd{thm} \vskip 1pc We now comment on the assumption $\alpha>1$ that is present for the class $\lambdastd$. As we know from Section 3, this assumption is necessary to guarantee that function values are continuous linear functionals and it was essential when we dealt with the worst case setting. In the randomized setting, the situation is different since we are using random function values, and the randomized error depends only on function values in the average sense. This means that $f(x)$ does not have to be well defined everywhere, and continuity of the linear functional $L_x(f)=f(x)$ is irrelevant. 
Since for any $\alpha\gammae0$, the Korobov space $H_d$ is a subset of $L_2([0,1]^d)$, we can treat $f$ as an $L_2$ function. This means that in the randomized setting we can consider the class $\lambdastd$ for all $\alpha\gammae 0$. \vskip 1pc \noindent {\betaf Remark 1} This is true only if we allow the use of random numbers from $[0,1]$. If we only allow the use of random bits (coin tossing as a source of randomness) then again we need function values to be continuous linear functionals, which is guaranteed by the condition $\alpha > 1$, see {\bf c}ite{Nov95} for a formal definition of such ``restricted'' Monte Carlo algorithms. We add that it is easy to obtain random bits from a quantum computer while it is not possible to obtain random numbers from $[0,1]$. \vskip 1pc Observe that the algorithm $A_{n,d}$ is well defined for any $\alpha\gammae0$ since the standard Monte Carlo algorithm is well defined for functions from $L_2([0,1]^d)$. Furthermore, the randomized error analysis did not use the fact that $\alpha>1$, and is valid for all $\alpha>0$. For $\alpha=0$ the analysis breaks down since $n$ given by (\ref{ourn}) would then be infinite. Even if we treat functions in the $L_2$ sense, tractability requires that $s_{\gamma}$ be finite. Indeed, for $s_\gamma=\infty$ we must approximate exponentially\footnote{We follow a convention of complexity theory that if the function grows faster than polynomial then we say it is exponential.} many Fourier coefficients which, obviously, contradicts tractability. We summarize this comment in the following corollary. \betaegin{cor} Consider approximation $\alphapp_d\,:\,H_d\,\to\,L_2([0,1]^d)$ in the randomized setting with $\alpha\in [0,1]$ in the class $\lambdastd$. \betaegin{enumerate} \item Strong tractability and tractability of approximation are equivalent, and this holds iff $\alpha>0$ and $s_\gamma\,<\,\infty$. When this holds, the exponent of strong tractability is in the interval $[p,p+2]$, where $p=p^*(\lambdaall)=2\max(s_\gamma,\alpha^{-1})$. \item The algorithm $A_{n,d}$ defined by (\ref{our1}) with $n$ given by (\ref{ourn}) of order roughly $\varepsilonps^{-(p^*(\lambdaall)+2)}$ approximates $\alphapp_d$ with randomized error at most $\varepsilonps$. \varepsilonnd{enumerate} \varepsilonnd{cor} The essence of these results is that in the randomized setting there is no difference between tractability conditions when we use functionals from $\lambdaall$ and when we use random function values. This is especially important when $s_{\gamma}>1$, since approximation is then intractable in the worst case setting for the class $\lambdastd$ independently of $\alpha$. Thus we have the following corollary. \betaegin{cor} Let $s_\gamma>1$. For the class $\lambdastd$, randomization breaks intractability of approximation in the worst case setting. \varepsilonnd{cor} \section{Quantum Setting} \lambdaabel{qc} Our analysis in this section is based on the framework introduced in {\bf c}ite{He1} of quantum algorithms for the approximate solution of problems of analysis. We refer the reader to the surveys {\bf c}ite{EHI}, {\bf c}ite{S3}, and to the monographs {\bf c}ite{Gr}, {\bf c}ite{Nie}, and {\bf c}ite{P} for general reading on quantum computation. This approach is an extension of the framework of information-based complexity theory (see {\bf c}ite{TWW} and, more formally, {\bf c}ite{Nov95}) to quantum computation.
It also extends the binary black box model of quantum computation (see {\bf c}ite{BBC:98}) to situations where mappings on spaces of functions have to be computed. Some of the main notions of quantum algorithms can be found in Appendix 1. For more details and background discussion we refer to {\bf c}ite{He1}. \subsection{Quantum Summation of a Single Sequence} \lambdaabel{q3} We need results about the summation of finite sequences on a quantum computer. The summation problem is defined as follows. For $N\in\N$ and $1\lambdae p\lambdae\infty$, let $L_p^N$ denote the space of all functions $g:\{0,1,\deltaots,N-1\}\to \R$, equipped with the norm $$ \|g\|_{L_p^N}=\lambdaeft(\frac{1}{N}\sum_{j=0}^{N-1}|g(j)|^p \right)^{1/p} \ \ \mbox{if}\ p<\infty,\ \ \mbox{and} \ \ \|g\|_{L_\infty^N}=\max_{0\lambdae j\lambdae N-1} |g(j)|. $$ Define $S_N:L_p^N\to \R$ by $$ S_N(g) = \frac{1}{N}\sum_{j=0}^{N-1}g(j) $$ and let $$ F=\mathcal{B}_p^N:=\{g\in L_p^N \,|\,\ \|g\|_{L_p^N}\lambdae 1\}. $$ Observe that $S_N(\mathcal{B}_p^N)=[-1,1]$ for all $p$ and $N$. We wish to compute $A(g,\varepsilonps)$ which approximates $S_N(g)$ with error $\varepsilonps$ and with probability at least $\tfrac34$. That is, $A(g,\varepsilonps)$ is a random variable which is computed by a quantum algorithm such that the inequality $|S_N(g)-A(g,\varepsilonps)|\lambdae \varepsilonps$ holds with probability at least $\tfrac34$. The performance of a quantum algorithm can be summarized by the number of quantum queries, quantum operations and qubits. These notions are defined in Appendix 1. Here we only mention that the quantum algorithm obtains information on the function values $g(j)$ by using only quantum queries. The number of quantum operations is defined as the total number of bit operations performed by the quantum algorithm. The number of qubits is defined as $m$ if all quantum operations are performed in the Hilbert space of dimension $2^m$. It is important to seek algorithms that require as small a number of qubits as possible. We denote by $e_n^q(S_N,F)$ the minimal error (in the above sense, of probability $\gammae \tfrac34$) that can be achieved by a quantum algorithm using only $n$ queries. The query complexity is defined for $\varepsilon > 0$ by $$ {\bf c}qq (\varepsilon, S_N, F) = \min\{\,n\ |\ \, e^q_n(S_N,F) \lambdae \varepsilon\}. $$ The total (quantum) complexity ${\bf c}qua (\varepsilon, S_N, F)$ is defined as the minimal total cost of a quantum algorithm that solves the summation problem to within $\varepsilonps$. The total cost of a quantum algorithm is defined by counting the total number of quantum queries plus quantum operations used by the quantum algorithm. Let ${\betaf c}$ be the cost of one evaluation of $g(j)$. It is reasonable to assume that the cost of one quantum query is taken as ${\betaf c}+m$ since $g(j)$'s are computed and $m$ qubits are processed by a quantum query, see Appendix 1 for more details. The quantum summation is solved by the Grover search and amplitude estimation algorithm which can be found in {\bf c}ite{G2} and {\bf c}ite{BHM:00}. This algorithm enjoys almost minimal error and will be repetitively used for approximation as we shall see in Sections 5.2 and 5.3. Let us summarize the known results about the order of $e_n^q(S_N,\mathcal{B}_p^N)$ for $p=\infty$ and $p=2$. The case $p=\infty$ is due to {\bf c}ite{G2}, {\bf c}ite{BHM:00} (upper bounds) and {\bf c}ite{NW} (lower bounds). The results in the case $p=2$ are due to {\bf c}ite{He1}. 
Further results for arbitrary $1 \lambdae p \lambdae \infty$ can also be found in {\bf c}ite{He1} and {\bf c}ite{HN2}. In what follows, by ``log'' we mean the logarithm to the base 2. \betaegin{thm} \lambdaabel{summation} There are constants $c_j>0$ for $j\in \{ 1, \deltaots , 9 \}$ such that for all $n,N\in\N$ with $2< n\lambdae c_1 N$ we have $$ e_n^q(S_N,\mathcal{B}_\infty^N) \alphasymp n^{-1} $$ and $$ c_2 n^{-1}\lambdae e_n^q(S_N,\mathcal{B}_2^N)\lambdae c_3 n^{-1}\lambdaog^{3/2}n {\bf c}dot \lambdaog\lambdaog n . $$ For $\varepsilon\lambdae\varepsilon_0 < \tfrac12$, we have $$ {\bf c}qq (\varepsilon, S_N, \mathcal{B}_\infty^N ) \alphasymp \min (N, \varepsilon^{-1} ) $$ and $$ c_4 \min(N,\varepsilonps^{-1})\lambdae {\bf c}qq (\varepsilon, S_N, \mathcal{B}_2^N ) \lambdae c_5 \min (N, \varepsilon^{-1} \lambdaog^{3/2} \varepsilon^{-1} {\bf c}dot \lambdaog\lambdaog \varepsilon^{-1} ) . $$ For $N\gammae\varepsilonps^{-1}$, the algorithm for the upper bound uses about $\lambdaog N$ qubits and the total complexity is bounded by $$ c_6 \,{\betaf c}\,\varepsilonps^{-1}\, \lambdae\, {\bf c}qua (\varepsilon, S_N, \mathcal{B}_\infty^N )\,\lambdae\, c_7\,{\betaf c}\, \varepsilon^{-1} {\bf c}dot \lambdaog N $$ and $$ c_8 \,{\betaf c}\,\varepsilonps^{-1}\,\lambdae\, {\bf c}qua (\varepsilon, S_N, \mathcal{B}_2^N )\,\lambdae\, c_9\,{\betaf c}\, \varepsilon^{-1} \lambdaog^{3/2} \varepsilon^{-1} {\bf c}dot \lambdaog\lambdaog \varepsilon^{-1} {\bf c}dot \lambdaog N. $$ \varepsilonnd{thm} So far we required that the error is no larger than $\varepsilon$ with probability at least $\tfrac34$. To decrease the probability of failure from $\tfrac14$ to, say, $e^{-\varepsilonll/8}$ one can repeat the algorithm $\varepsilonll$ times and take the median as the final result. See Lemma 3 of {\bf c}ite{He1} for details. We also assumed so far that $\|g\|_{L_p^N}\lambdae 1$. If this bound is changed to, say, $\|g\|_{L_p^N}\lambdae M$ then it is enough to rescale the problem and replace $g(j)$ by $g(j)/M$. Then we multiply the computed result by $M$ and obtain the results as in the last theorem with $\varepsilonps$ replaced by $M \varepsilonps$. \subsection{The Idea of the Algorithm for Approximation} \lambdaabel{q2} The starting point of our quantum algorithm for approximation is a deterministic algorithm on a classical computer that is similar to the randomized algorithm given by (\ref{our1}), namely \betaegin{equation} \lambdaabel{start1} A_{N,d}(f)(x)\,=\,\sum_{h\in R(\varepsilonps/3,d)}\lambdaeft(\frac1N \sum_{j=1}^N f(x_j)\varepsilonxp\lambdaeft(-2\pi i h{\bf c}dot x_j\right)\right)\, \varepsilonxp\lambdaeft(2\pi i h{\bf c}dot x\right), \varepsilonnd{equation} where the $x_1, \deltaots , x_N$ come from a suitable deterministic rule, and $R({\bf c}dot,d)$ is defined by (\ref{seth}). The error analysis of $A_{N,d}$ will be based on three types of errors. The first error arises from replacing the infinite Fourier series by a finite series over the set $R(\varepsilonps/3,d)$; this error is at most $\varepsilon/3$. The second error is made since we replace the Fourier coefficients, which are integrals, by quadrature formulas. We will choose $N$ and the deterministic rule for computing $x_j$ in such a way that the combination of these two errors yields \betaegin{equation} \lambdaabel{start2} \Vert A_{N,d} (f) - f \Vert_{L_2([0,1]^d)} \lambdae \tfrac23\,\varepsilon \quad \forall\,f \in H_d, \ \Vert f \Vert_d \lambdae 1 .
\varepsilonnd{equation} This will be possible (see (\ref{bound}) below) if $N$ is, in general, exponentially large in $d$. This may look like a serious drawback, but the point is that we do {\it not} need to exactly compute the sums in (\ref{start1}). Instead, the sums \betaegin{equation} \lambdaabel{start3} \lambdaeft(\frac1N \sum_{j=1}^N f(x_j)\varepsilonxp\lambdaeft(-2\pi i h{\bf c}dot x_j\right)\right)_{h \in R(\varepsilon/3,d)} \varepsilonnd{equation} will be approximately computed by a quantum algorithm whose cost depends only logarithmically on $N$. We have to guarantee that this third (quantum) error is bounded by $\varepsilon/3$, with probability at least $\tfrac34$. As we shall see, $\lambdaog N$ will be at most linear in $d$ and polynomial in $\lambdaog \varepsilonps^{-1}$, which will allow us to have good bounds on the total cost of the quantum algorithm. \vskip 1pc \noindent {\betaf Remark 2} Observe that the $|R(\varepsilon/3,d)|$ sums given by (\ref{start3}) depend only on $N$ function values of $f$, whereas $h$ takes as many values as the cardinality of the set $R(\varepsilonps/3,d)$. Since each function value costs ${\betaf c}(d)$, and since ${\betaf c}(d)$ is usually much larger than the cost of one combinatory operation, it seems like a good idea to compute all sums in (\ref{start3}) simultaneously. We do not know how to do this efficiently on a quantum computer and therefore compute these sums sequentially. \vskip 1pc \subsection{Quantum Summation Applied to our Sequences} \lambdaabel{q33} As outlined in the previous subsection, for the approximation problem we need to compute $S_N(g_h)$ for several sequences $g_1,g_2,\deltaots,g_R$ each of length $N$ with $R=|R(\varepsilonps/3,d)|$. We assume that $g_h \in L_p^N$ for $p=2$ or $p=\infty$, and $\Vert g_h \Vert_p \lambdae M$. We now want to compute $A(g_h,\varepsilonps)$ on a quantum computer such that (with $\varepsilonps/3$ now replaced by $\varepsilonps$) \betaegin{equation} \lambdaabel{req} \sum_{h=1}^R |S_N(g_h) - A(g_h,\varepsilonps) |^2 \lambdae \varepsilon^2 \varepsilonnd{equation} with probability at least $\tfrac34$. In our case the sequences $g_h$ are the terms of (\ref{start3}) and we assume that we can compute $g_h(j)=f(x_j)\varepsilonxp\lambdaeft(-2\pi i h{\bf c}dot x_j\right)$. The cost ${\betaf c}$ of computing one function value $g_h(j)$ is now equal to ${\betaf c}(d)+2d+2$, since we can compute $g_h(j)$ using one evaluation of $f$ and $2d+2$ combinatory operations needed to compute the inner product $y=h{\bf c}dot x_j$ and $f(x_j)\varepsilonxp(-2\pi iy)$. The cost of one call of the oracle is roughly \betaegin{equation} \lambdaabel{sec1} \lambdaog N + {\betaf c}(d) +2d+2, \varepsilonnd{equation} since we need about $\lambdaog N$ qubits and the cost of computing $g_h$ is ${\betaf c}(d)+2d+2$. This summation problem can be solved by the Grover search or amplitude amplification algorithm mentioned in Section 5.1. To guarantee that the bound (\ref{req}) holds it is enough to compute an approximation for each component with error $\deltaelta=\varepsilon R^{-1/2}$. We will assume that \betaegin{equation}\lambdaabel{assumN} M\,\deltaelta^{-1}\,=\,\frac{M\,R^{1/2}}{\varepsilonps}\,\lambdae\, N. \varepsilonnd{equation} We can satisfy (\ref{assumN}) by computing each $S_N(g_h)$ independently for each $h$. We begin with the case $p=\infty$. 
To compute one sum with error $\delta$ with probability at least $1 - \varepsilonta$ we need roughly $\lambdaog \varepsilonta^{-1}$ repetitions of the algorithm and this requires about $(M/\delta) \lambdaog \varepsilonta^{-1}$ queries. We put $\varepsilonta R = \tfrac14$ to obtain an algorithm that computes each sum in such a way that (\ref{req}) holds. Hence we need roughly $\frac{M \sqrt{R}}{\varepsilon} \lambdaog R$ queries for each $g_h$. Together we need roughly \betaegin{equation} \lambdaabel{sec2} R {\bf c}dot \frac{M \sqrt{R}}{\varepsilon}{\bf c}dot \lambdaog R \ \ \mbox{queries.} \varepsilonnd{equation} The case $p=2$ is similar and we need roughly \betaegin{equation} \lambdaabel{sec3} R {\bf c}dot \frac{M \sqrt{R}}{\varepsilon}{\bf c}dot \lambdaog^{3/2} \frac{M \sqrt{R}}{\varepsilon}{\bf c}dot \lambdaog \lambdaog \frac{M \sqrt{R}}{\varepsilon}{\bf c}dot \lambdaog R \ \ \mbox{queries}. \varepsilonnd{equation} The total cost is of order \betaegin{eqnarray} \lambdaabel{sec4} \lambdaeft( \lambdaog N + {\betaf c}(d)+2d+2 \right) R \frac{M \sqrt{R}}{\varepsilon} \lambdaog R &&\ \ \mbox{for}\ p=\infty,\\ \lambdaeft( \lambdaog N + {\betaf c}(d)+2d+2 \right) R \frac{M \sqrt{R}}{\varepsilon} \lambdaog^{3/2} \frac{M \sqrt{R}}{\varepsilon}{\bf c}dot \lambdaog \lambdaog \frac{M \sqrt{R}}{\varepsilon}{\bf c}dot \lambdaog R &&\ \ \mbox{for}\ p=2. \varepsilonnd{eqnarray} \subsection{Results on Tractability} We only consider upper bounds for the class $\lambdastd$ and weighted Korobov spaces for $\alpha >1$ and $s_\gamma<\infty$. We combine the idea from Subsection \ref{q2} with the upper bounds from Subsection \ref{q33}. We need estimates for the numbers $N$, $M$, and $R$. We know from Section 2 that $R(\varepsilonps/3,d)$ is the set of indices $h$ for which the eigenvalues of $W_d$ are greater than $\varepsilonps^{2}/9$, see (\ref{seth}). We also know from (\ref{7plus}) that the cardinality of the set $R(\varepsilonps/3,d)$ is exactly equal to ${\bf c}wor(\varepsilonps/3,H_d,\lambdaall)$ and that for any positive $\varepsilonta$ there exists a positive $C_{\varepsilonta}$ such that $$ R=\betaig| R(\varepsilonps/3,d)\betaig|\,=\,{\bf c}wor(\varepsilonps/3,H_d,\lambdaall)\,\lambdae\, C_{\varepsilonta}\,\varepsilonps^{-(p^*(\lambdaall)+\varepsilonta)}\quad \forall\,d=1,2,\deltaots\,,\ \forall\,\varepsilon\in (0,1). $$ For $f \in H_d$ with $\Vert f \Vert_d \lambdae 1$ we know that $$ |f(y)|\, =\, |\left\langle f,K_d({\bf c}dot,y)\right\rangle|\,\lambdae\, K_d(y,y)^{1/2} \,=\, \prod_{j=1}^d\betaigg(1+2\gamma_j\zeta(\alpha)\betaigg)^{1/2} , $$ where $\zeta$ is the Riemann zeta function, and hence $$ |f(y)| \lambdae \varepsilonxp \betaigg( \zeta(\alpha ) \sum_{j=1}^d \gamma_j \betaigg). $$ This means that when $\sum_{j=1}^\infty \gamma_j < \infty$ we can apply the results from Section~\ref{q33} with $p=\infty$ and $M$ independent of $d$ and of order one. If $\sum_{j=1}^\infty \gamma_j = \infty$, which happens when $s_\gamma>1$ and could happen if $s_\gamma=1$, we use the quantum results for $p=2$ and need estimates not only for $N$ in (\ref{start3}) but also for $M$ that bounds the $L_2^N$-norms of the terms in (\ref{start3}).
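\vskip 1pc To see numerically that $M$ stays of order one when $\sum_{j=1}^\infty \gamma_j<\infty$, the following Python sketch (ours and purely illustrative; the truncation of $\zeta$ and the choice $\gamma_j=j^{-2}$ are arbitrary) evaluates $\prod_{j=1}^d(1+2\gamma_j\zeta(\alpha))^{1/2}$ together with the upper bound $e^{\zeta(\alpha)\sum_{j=1}^d\gamma_j}$ for increasing $d$. \betaegin{verbatim}
import math

def zeta(alpha, terms=10_000):
    """Truncated Riemann zeta function; assumes alpha > 1."""
    return sum(h ** -alpha for h in range(1, terms + 1))

def peak_bound(alpha, gamma):
    """prod_j (1 + 2*gamma_j*zeta(alpha))^(1/2), a bound on |f(y)| when ||f||_d <= 1."""
    z = zeta(alpha)
    return math.prod(1.0 + 2.0 * g * z for g in gamma) ** 0.5

for d in (1, 10, 100, 1000):
    gamma = [j ** -2.0 for j in range(1, d + 1)]   # summable weights
    print(d, peak_bound(2.0, gamma), math.exp(zeta(2.0) * sum(gamma)))
\varepsilonnd{verbatim} \vskip 1pc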
We know from Lemma 2 (ii) in {\bf c}ite{SWkor} that there are lattice rules $Q_{N,d}(f)=N^{-1}\sum_{j=1}^Nf(x_j)$ with prime $N$ and $x_j=\{j\,z/N\}$ for some non-zero integer $z\in[-N/2,N/2]^d$ and with $\{{\bf c}dot\}$ denoting the fractional part, for which \betaegin{equation} \lambdaabel{bound} \betaig| \, {\rm INT}_d(f) - Q_{N,d}(f) \, \betaig| \lambdae \frac{\prod_{j=1}^d (1+ 2\gamma_j)^{1/2} }{\sqrt N} {\bf c}dot \Vert f \Vert_d . \varepsilonnd{equation} As in Section~\ref{q2}, we have to guarantee an error $\delta=\varepsilonps R^{-1/2}=O(\varepsilon^{1+ (p^*(\lambdaall)+\varepsilonta)/2})$ for all integrands $x \mapsto f_h(x)=f(x) \varepsilonxp(-2\pi i h {\bf c}dot x)$ with $h \in R(\varepsilon/3,d)$. For these integrands $f_h$ we have \betaegin{eqnarray*} \|f_h\|_d^2\,&=&\,\sum_{j\in \Z^d}|\hat f(h+j)|^2r_{\alpha}(\gamma,j)= \sum_{j\in \Z^d}|\hat f(h+j)|^2r_{\alpha}(\gamma,h+j)\,\frac{r_{\alpha}(\gamma,j)} {r_{\alpha}(\gamma,h+j)}\\ &\lambdae&\,\betaigg(\sum_{j\in \Z^d}|\hat f(h+j)|^2r_{\alpha}(\gamma,h+j)\betaigg)\, \,\max_{j\in\Z^d}\frac{r_{\alpha}(\gamma,j)}{r_{\alpha}(\gamma,h+j)}\\ &=&\,\|f\|_d^2\,\max_{j\in\Z^d}\frac{r_{\alpha}(\gamma,j)}{r_{\alpha}(\gamma,h+j)}. \varepsilonnd{eqnarray*} We now show that \betaegin{equation}\lambdaabel{rrr} \frac{r_{\alpha}(\gamma,j)}{r_{\alpha}(\gamma,h+j)}\,\lambdae\, r_{\alpha}(\gamma,h)\,\prod_{m=1}^d\max(1,\gamma_m2^{\alpha})\qquad \forall\,j,h\in \Z^d. \varepsilonnd{equation} Indeed, since $r_{\alpha}$ is a product, it is enough to check (\ref{rrr}) for all components of $r_{\alpha}$. For the $m$th component it is easy to check that $$ \frac{r_{\alpha}(\gamma_m,j_m)}{r_{\alpha}(\gamma_m,h_m+j_m)}\,\lambdae\, \max(1,\gamma_m2^{\alpha})r_{\alpha}(\gamma_m,h_m), $$ from which (\ref{rrr}) follows. In our case $s_\gamma<\infty$ which implies that $\gamma_m$ tends to zero and therefore $\prod_{m=1}^\infty\max(1,\gamma_m2^{\alpha})$ is finite. Furthermore, for $h\in R(\varepsilonps/3,d)$ we have $r_\alpha(\gamma,h)\lambdae 9/\varepsilonps^2$. Hence, $\Vert f_h \Vert_d =O(1/\varepsilon)$ for all $h\in R(\varepsilonps/3,d)$. We replace $\gamma_j$ by $1$ in (\ref{bound}) and have $$ \betaig| \, {\rm INT}_d(f_h) - Q_{N,d}(f_h) \, \betaig| = O\lambdaeft(\frac{3^{d/2}}{\varepsilonps \sqrt{N}} \right) = O(\varepsilonps^{1+(p^*(\lambdaall)+\varepsilonta)/2}) $$ if we take $N$ at least of order $$ N \,\alphasymp\, 3^d\,\lambdaeft( \frac{1}{\varepsilon} \right)^{4+ p^*(\lambdaall)+\varepsilonta} $$ or $$ \lambdaog N \,\alphasymp\, d\, +\ \lambdaog \varepsilon^{-1}. $$ To bound $M$ we need to consider the $L_2^N$-norms of the terms $f_h(x_j)=g_h(j)$ in (\ref{start3}). Since the Korobov space $H_d$ is an algebra, see Appendix 2, we know that $|f_h|^2\in H_d$ and $$ \Vert \, |f_h|^2 \, \Vert_d \,\lambdae\, C(d) {\bf c}dot \Vert f_h \Vert_d^2\,=\, O\lambdaeft(C(d)\,\varepsilon^{-2}\right), $$ where $C(d)$ is given in Appendix 2. Applying the bound (\ref{bound}) to the function $|f_h|^2$, we obtain a bound, in the $L_2^N$-norm, of the sequence $z_h = (g_h(j))_{j=1, \deltaots , N} = (f_h(x_j))_{j=1, \deltaots , N}$. This is the number $M$ that we need in our estimates. We obtain $$ \Vert z_h \Vert_{L_2^N}^2\,\lambdae\, M^2\, =\, {\rm INT}_d(|f_h|^2) + O\lambdaeft(3^{d/2}\,C(d) \, \varepsilon^{-2} \,N^{-1/2}\right). $$ Obviously, $$ {\rm INT}_d(|f_h|^2)\,=\,\widehat{|f_h|^2}(0)\,=\, \sum_{j\in \Z^d}|\hat f(h+j)|^2\,\lambdae\, \|f\|_d^2\,\lambdae 1\quad \forall\,h \in \Z^d.
$$ To guarantee that $M$ does not depend on $d$ and is of order $1$, we take $N$ such that $$ \lambdaog N \,\alphasymp\, d + \lambdaog C(d) + \lambdaog \varepsilon^{-1}\, \alphasymp\, d+ \lambdaog \varepsilon^{-1}, $$ since $\lambdaog C(d)$ is of order $d$ due to Appendix 2. Putting these estimates together, we obtain estimates for the quantum algorithm. We use about $d + \lambdaog \varepsilon^{-1}$ qubits. The total cost of the algorithm is of order $$ ( {\betaf c}(d) + d) \lambdaeft(\frac1{\varepsilonps}\right)^{1+3(p^*(\lambdaall)+\varepsilonta)/2} . $$ Hence, the only dependence on $d$ is through ${\betaf c}(d)$ and $d$. We summarize this analysis in the following theorem. \betaegin{thm} \lambdaabel{th7} Consider approximation $\alphapp_d\,:\,H_d\,\to\,L_2([0,1]^d)$ in the quantum setting with $\alpha>1$ in the class $\lambdastd$. Assume that $s_\gamma<\infty$. Then we have strong tractability. The quantum algorithm solves the problem to within $\varepsilonps$ with probability at least $\tfrac34$ and uses about $d + \lambdaog \varepsilon^{-1}$ qubits. For any positive $\delta$ there exists a positive number $K_{\deltaelta}$ such that the total cost of the algorithm is bounded by $$ K_{\delta}\lambdaeft( ({\betaf c}(d)\,+\, d) \, \lambdaeft(\frac1{\varepsilonps}\right)^{1+3(p^*(\lambdaall)+\deltaelta)/2}\right) \ \quad\forall\,d=1,2,\deltaots\,,\ \forall\,\varepsilon\in (0,1). $$ \varepsilonnd{thm} \vskip 1pc It is interesting to compare the results in the quantum setting with the results in the worst case and randomized settings for the class $\lambdastd$. We ignore the small parameter $\delta$ in Theorems 1, 2, 4 and 6. Then if $s_\gamma>1$, the quantum setting (as well as the randomized setting) breaks intractability of approximation in the worst case setting (again for the class $\lambdastd$). The number of quantum queries and quantum combinatory operations is of order $\varepsilonps^{-1-3p^*(\lambdaall)/2}$, which is smaller than the corresponding number of function values in the randomized setting only if $p^*(\lambdaall)<2$. However, the number of quantum combinatory operations is always significantly smaller than the corresponding number of combinatory operations in the randomized settings. \section{Appendix 1: Quantum Algorithms} We present a framework for quantum algorithms, see {\bf c}ite{He1} for more details. Let $D$, $K$ be nonempty sets, and let $\mathcal{F}(D,K)$ denote the set of all functions from $D$ to $K$. Let $\K$, the scalar field, be either the field of real numbers $\R$ or the field of complex numbers ${\mathbb{C}}$, and let $G$ be a normed space with scalar field $\K$. Let $S:F\to G$ be a mapping, where $F \subset \mathcal{F}(D,K)$. We approximate $S(f)$ for $f\in F$ by means of quantum computations. Let $H_1$ be the two-dimensional complex Hilbert space ${\mathbb{C}}^2$, with its unit vector basis $\{e_0,e_1\}$, and let $$ H_m=H_1\otimes{\bf c}dots\otimes H_1 $$ be the $m$-fold tensor product of $H_1$, endowed with the tensor Hilbert space structure. It is convenient to let $$\Z[0,N) := \{0,\deltaots,N-1\}$$ for $N\in\N$ (as usual, $\N= \{1,2,\deltaots \}$ and $\N_0=\N{\bf c}up\{0\})$. Let $\mathcal{C}_m = \{\lambdag i\right\rangle:\, i\in\Z[0,2^m)\}$ be the canonical basis of $H_m$, where $\lambdag i \right\rangle$ stands for $e_{j_0}\otimes\deltaots\otimes e_{j_{m-1}}$, and $i=\sum_{k=0}^{m-1}j_k2^{m-1-k}$ is the binary expansion of $i$. Denote the set of unitary operators on $H_m$ by $\mathcal{U}(H_m)$. 
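The canonical basis just described is easy to realize explicitly; the following sketch (Python/NumPy, small $m$, purely illustrative) checks that $e_{j_0}\otimes\cdots\otimes e_{j_{m-1}}$ with $i=\sum_{k=0}^{m-1}j_k2^{m-1-k}$ coincides with the $i$-th standard basis vector of ${\mathbb{C}}^{2^m}$:

\begin{verbatim}
import numpy as np
from functools import reduce

m = 3
e = np.eye(2)                      # columns e_0, e_1: basis of H_1 = C^2

for i in range(2 ** m):
    bits = [(i >> (m - 1 - k)) & 1 for k in range(m)]      # binary expansion j_0 ... j_{m-1}
    v = reduce(np.kron, [e[:, j] for j in bits])            # e_{j_0} (x) ... (x) e_{j_{m-1}}
    assert np.array_equal(v, np.eye(2 ** m)[:, i])          # equals |i> in C^{2^m}
print("canonical basis check passed")
\end{verbatim}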
A quantum query on $F$ is given by a tuple \betaegin{equation} \lambdaabel{J1} Q=(m,m',m'',Z,\tau,\betaeta), \varepsilonnd{equation} where $m,m',m''\in \N, m'+m''\lambdae m, Z\subseteq \Z[0,2^{m'})$ is a nonempty subset, and $$\tau:Z\to D$$ $$\betaeta:K\to\Z[0,2^{m''})$$ are arbitrary mappings. Denote $m(Q):=m$, the number of qubits of $Q$. Given such a query $Q$, we define for each $f\in F$ the unitary operator $Q_f$ by setting for $\lambdag i\right\rangle\lambdag x\right\rangle\lambdag y\right\rangle\in \mathcal{C}_m =\mathcal{C}_{m'}\otimes\mathcal{C}_{m''}\otimes\mathcal{C}_{m-m'-m''}$: \betaegin{equation} \lambdaabel{AB1} Q_f\lambdag i\right\rangle\lambdag x\right\rangle\lambdag y\right\rangle= \lambdaeft\{\betaegin{array}{ll} \lambdag i\right\rangle\lambdag x\oplus\betaeta(f(\tau(i)))\right\rangle\lambdag y\right\rangle &\quad \mbox {if} \quad i\in Z,\\ \lambdag i\right\rangle\lambdag x\right\rangle\lambdag y\right\rangle & \quad\mbox{otherwise,} \varepsilonnd{array} \right. \varepsilonnd{equation} where $\oplus$ means addition modulo $2^{m''}$. Hence the query uses $m'$ bits to represent the index $i$ which is used to define the argument $\tau(i)$ at which the function is evaluated. We assume that the cost of one evaluation of $f$ is ${\betaf c}$. The value of $f(\tau(i))$ is then coded by the mapping $\betaeta$ using $m''$ bits. Usually, the mapping $\betaeta$ is chosen in a such a way that the $m''$ most significant bits of $\betaeta(f(\tau(i)))$ are stored. The number of bits that are processed is $m'+m''\lambdae m$, and usually $m'+m''$ is insignificantly less than $m$. That is why we define the cost of one query as $m+{\betaf c}$. A quantum algorithm on $F$ with no measurement is a tuple $A=(Q,(U_j)_{j=0}^n)$, where $Q$ is a quantum query on $F$, $n\in\N_0$ and $U_{j}\in \mathcal{U}(H_m)\,(j=0,\deltaots,n)$, with $m=m(Q)$. Given $f\in F$, we let $A_f\in \mathcal{U}(H_m)$ be defined as \betaegin{equation} \lambdaabel{B1a} A_f = U_n Q_f U_{n-1}\deltaots U_1 Q_f U_0. \varepsilonnd{equation} We denote by $n_q(A):=n$ the number of queries and by $m(A)=m=m(Q)$ the number of qubits of $A$. Let $(A_f(x,y))_{x,y\in\mathcal{C}_{m}}$ be the matrix of the transformation $A_f$ in the canonical basis $\mathcal{C}_{m}$, $A_f(x,y)=\left\langle x|A_f|y\right\rangle$. A quantum algorithm on $F$ with output in $G$ (or shortly, from $F$ to $G$) with $k$ measurements is a tuple $$ A=((A_\varepsilonll)_{\varepsilonll=0}^{k-1},(b_\varepsilonll)_{\varepsilonll=0}^{k-1},\varphi), $$ where $k\in\N,$ and $A_\varepsilonll\,(\varepsilonll=0,\deltaots,k-1)$ are quantum algorithms on $F$ with no measurements, $$ b_0\in\Z[0,2^{m_0}), $$ for $1\lambdae \varepsilonll \lambdae k-1,\,b_\varepsilonll$ is a function $$ b_\varepsilonll:\prod_{i=0}^{\varepsilonll-1}\Z[0,2^{m_i}) \to \Z[0,2^{m_\varepsilonll}), $$ where we denoted $m_\varepsilonll:=m(A_\varepsilonll)$, and $\varphi$ is a function $$ \varphi:\prod_{\varepsilonll=0}^{k-1}\Z[0,2^{m_\varepsilonll}) \to G $$ with values in $G$. The output of $A$ at input $f\in F$ will be a probability measure $A(f)$ on $G$, defined as follows: First put \betaegin{eqnarray} p_{A,f}(x_0,\deltaots, x_{k-1})&=& |A_{0,f}(x_0,b_0)|^2 |A_{1,f}(x_1,b_1(x_0))|^2\deltaots\nonumber\\ &&\deltaots |A_{k-1,f}(x_{k-1},b_{k-1}(x_0,\deltaots,x_{k-2}))|^2.\lambdaabel{M1} \varepsilonnd{eqnarray} Then define $A(f)$ by setting \betaegin{equation} \lambdaabel{M3} A(f)(C) =\sum_{\varphi (x_0,\deltaots,x_{k-1})\in C}p_{A,f}(x_0,\deltaots, x_{k-1}) \quad \forall \,C\subseteq G. 
\varepsilonnd{equation} We let $n_q(A):=\sum_{\varepsilonll=0}^{k-1} n_q(A_\varepsilonll)$ denote the number of queries used by $A$. For brevity we say $A$ is a quantum algorithm if $A$ is a quantum algorithm with $k$ measurements for $k\gammae 0$. Informally, such an algorithm $A$ starts with a fixed basis state $b_0$ and function $f$, and applies in an alternating way unitary transformations $U_{j}$ (not depending on $f$) and the operator $Q_f$ of a certain query. After a fixed number of steps the resulting state is measured, which gives a (random) basis state, say $\xi_0$. This state is memorized and then transformed (e.g.,\ by a classical computation, which is symbolized by $b_1$) into a new basis state $b_1(\xi_0)$. This is the starting state to which the next sequence of quantum operations is applied (with possibly another query and number of qubits). The resulting state is again measured, which gives the (random) basis state $\xi_1$. This state is memorized, $b_2(\xi_0,\xi_1)$ is computed (classically), and so on. After $k$ such cycles, we obtain $\xi_0,\deltaots,\xi_{k-1}$. Then finally an element $\varphi(\xi_0,\deltaots,\xi_{k-1})$ of $G$ is computed (e.g., \ again on a classical computer) from the results of all measurements. The probability measure $A(f)$ is its distribution. The error of $A$ is defined as follows: Let $0\lambdae\theta< 1$, $f\in F$, and let $\zeta$ be any random variable with distribution $A(f)$. Then put $ e(S,A,f,\theta)=\inf\lambdaeft\{\varepsilon\,\,|\,\,{\mathbf P}\{\|S(f)-\zeta\|> \varepsilon\}\lambdae\theta \right\}. $ Associated with this we introduce $$ e(S,A,F,\theta)=\sup_{f\in F} e(S,A,f,\theta), $$ $$ e(S,A,f)=e(S,A,f,\tfrac14), $$ and $$ e(S,A,F)=e(S,A,F,\tfrac14) = \sup_{f\in F} e(S,A,f). $$ Of course one could easily replace here $\tfrac14$ by another positive number $a < \tfrac12$. The $n$th minimal query error is defined for $n\in\N_0$ as $$ e_n^q(S,F)=\inf\{e(S,A,F)\,\,|\,\,A\,\, \mbox{is any quantum algorithm with}\,\, n_q(A)\lambdae n\}. $$ This is the minimal error which can be reached using at most $n$ queries. The quantum query complexity is defined for $\varepsilon > 0$ by $$ {\bf c}qq (\varepsilon, S, F) = \min\{n_q(A)\,\,|\,\, A\,\,\mbox{is any quantum algorithm with}\,\, e(S,A,F) \lambdae \varepsilon\}. $$ The quantities $e_n^q(S,F)$ and ${\bf c}qq (\varepsilon, S, F)$ are inverse to each other in the following sense: For all $n\in \N_0$ and $\varepsilon > 0$, $e_n^q(S,F)\lambdae \varepsilon$ if and only if ${\bf c}qq (\varepsilon_1, S, F) \lambdae n$ for all $\varepsilon_1 > \varepsilon$. Thus, determining the query complexity is equivalent to determining the $n$th minimal query error. The total (quantum) complexity ${\bf c}qua (\varepsilon, S, F)$ is defined similarly. Here we count the number of quantum gates that are used by the algorithm; if function values are needed then we put ${\betaf c}$ as the cost of one function evaluation. From a practical point of view, the number of available qubits in the near future will be severely limited. Hence it is a good idea to present algorithms that only use a small number of qubits. \section{Appendix 2: Korobov Spaces are Algebras} We show that the Korobov space $H_d$ is an algebra for $\alphalpha>1$.
More precisely, we prove that if $f,g\in H_{d}$ then $fg\in H_d$ and \betaegin{equation}\lambdaabel{appe1} \| f\,g\|_{d}\, \lambdae\, C(d)\,\| f\|_{d}\,\|g\|_{d}, \varepsilonnd{equation} with $$ C(d)\,=\,2^{\,d\,\max(1,\alphalpha/2)}\,\prod_{j=1}^d\betaigg(1+ 2\gamma_j\zeta(\alphalpha)\betaigg)^{1/2}. $$ For $f(x)=\sum_{j}\hat f(j)\varepsilonxp(2\pi i j{\bf c}dot x)$ and $g(x)=\sum_{k}\hat g(k)\varepsilonxp(2\pi i k{\bf c}dot x)$, with $j$ and $k$ varying through $\Z^d$, we have $$ f(x)g(x)\,=\,\sum_j\sum_k\hat f(j)\hat g(k)\varepsilonxp(2\pi i(j+k){\bf c}dot x)\,=\, \sum_h\lambdaeft(\sum_j\hat f(j)\hat g(h-j)\right)\varepsilonxp(2\pi i h{\bf c}dot x). $$ Hence, we need to estimate $$ \|fg\|_d^2\,=\,\sum_{h}\betaigg|\sum_j\hat f(j)\hat g(h-j)\, r^{1/2}_{\alphalpha}(\gamma,h)\betaigg|^2. $$ Observe that $$ r^{1/2}_{\alphalpha}(\gamma_m,h_m)\,\lambdae\,c\,\lambdaeft(r^{1/2}_{\alphalpha}(\gamma_m,k_m)+ r^{1/2}_{\alphalpha}(\gamma_m,h_m-k_m)\right)\qquad \forall\,k_m\in \Z, $$ with $c=2^{\max(0,(\alphalpha-2)/2)}$. This holds for $h_m=0$ since $c\gammae 1$ and $r_{\alpha}(\gamma_m,k_m)\gammae 1$, and is also true for $h_m\not=0$ and $k_m=0$. For other values of $h_m$ and $k_m$, the inequality is equivalent to $|h_m|^{\alphalpha/2}\lambdae c(|k_m|^{\alphalpha/2}+|h_m-k_m|^{\alphalpha/2})$ which holds with $c=1$ for $\alphalpha/2\lambdae 1$, and with $c=2^{(\alphalpha-2)/2}$ for $\alphalpha/2>1$ by the use of the standard argument. Applying this inequality $d$ times we get $$ r^{1/2}_{\alphalpha}(\gamma,h)\,\lambdae\,c^d\,\prod_{m=1}^d\lambdaeft( r^{1/2}_{\alphalpha}(\gamma_m,k_m)+ r^{1/2}_{\alphalpha}(\gamma_m,h_m-k_m)\right)\qquad \forall\,k\in \Z^d. $$ Let $D=\{1,2,\deltaots,d\}$ and let $u\subset D$. By $\overline{u}=D-u$ we denote the complement of $u$. Define $$ r_{\alphalpha}(\gamma,h_u)\,=\,\prod_{m\in u}r_{\alphalpha}(\gamma_m,h_m),\qquad r_{\alphalpha}(\gamma,h_{\overline{u}})\,=\,\prod_{m\in \overline{u}}r_{\alphalpha}(\gamma_m,h_m). $$ Then we can rewrite the last inequality as $$ r^{1/2}_{\alphalpha}(\gamma,h)\,\lambdae\,c^d\,\sum_{u\subset D} r^{1/2}_{\alphalpha}(\gamma,k_u)\,r^{1/2}_{\alphalpha}(\gamma,h_{\overline{u}}-k_{\overline{u}}) \qquad \forall\,k\in \Z^d. $$ For $u\subset D$, we define \betaegin{eqnarray*} F_u(x)\,&=&\,\sum_j|\hat f(j)|\,r^{1/2}_{\alphalpha}(\gamma,j_u)\, \varepsilonxp(2\pi i j{\bf c}dot x),\\ G_{\overline{u}}(x)\,&=&\,\sum_j|\hat g(j)|\,r^{1/2}_{\alphalpha}(\gamma,j_{\overline{u}})\, \varepsilonxp(2\pi i j{\bf c}dot x). \varepsilonnd{eqnarray*} Observe that $F_u$ and $G_{\overline{u}}$ are well defined functions in $L_2([0,1]^d)$ since $r_{\alphalpha}(\gamma,j_u) \lambdae r_{\alphalpha}(\gamma,j)$ for all $u$ and since $f$ and $g$ are from $H_d$. In terms of these functions we see that \betaegin{eqnarray*} \betaigg|\sum_j\hat f(j)\hat g(h-j)\,r^{1/2}_{\alphalpha}(\gamma,h)\betaigg|\,&\lambdae&\, \sum_j|\hat f(j)|\,|\hat g(h-j)|\,r^{1/2}_{\alphalpha}(\gamma,h)\\ &\lambdae&\,c^d\,\sum_{u\subset D} \sum_j|\hat f(j)|\,r^{1/2}_{\alphalpha}(\gamma,j_{u}) |\hat g(h-j)|\,r^{1/2}_{\alphalpha}(\gamma,h_{\overline{u}}-j_{\overline{u}})\\ &=&\,c^d\sum_{u\subset D}\sum_j\hat F_u(j)\,\hat G_{\overline{u}}(h-j). \varepsilonnd{eqnarray*} Therefore $$ \|f\,g\|_d^2\,\lambdae\,c^{2d}\,\sum_h\lambdaeft(\sum_{u\subset D} \sum_j\hat F_u(j)\,\hat G_{\overline{u}}(h-j)\right)^2. 
$$ Since the sum with respect to $u$ has $2^d$ terms, we estimate the square of the sum of these $2^d$ terms by the sum of the squared terms multiplied by $2^d$, and obtain $$ \|f\,g\|_d^2\,\lambdae\,2^dc^{2d}\,\sum_{u\subset D}a_u, $$ where $$ a_u\,=\, \sum_h\lambdaeft(\sum_j\hat F_u(j)\,\hat G_{\overline{u}}(h-j)\right)^2. $$ We now estimate $a_u$. Each $h$ and $j$ may be written as $h=(h_u,h_{\overline{u}})$ and $j=(j_u,j_{\overline{u}})$, and therefore \betaegin{eqnarray*} a_u&=&\sum_{h_u}\sum_{h_{\overline{u}}}\lambdaeft( \sum_{j_u}\sum_{j_{\overline{u}}}\hat F_u(j_u,j_{\overline{u}})\,\hat G_{\overline{u}}(h_u-j_u, h_{\overline{u}}-j_{\overline{u}})\right)^2\\ &=& \sum_{h_u}\sum_{h_{\overline{u}}}\lambdaeft( \sum_{j_u}\sum_{j_{\overline{u}}}\hat F_u(h_u-j_u,j_{\overline{u}})\,\hat G_{\overline{u}}(j_u, h_{\overline{u}}-j_{\overline{u}})\right)^2\\ &=& \sum_{h_u}\sum_{h_{\overline{u}}} \sum_{j_u}\sum_{j_{\overline{u}}} \sum_{k_u}\sum_{k_{\overline{u}}} \hat F_u(h_u-j_u,j_{\overline{u}}) \hat F_u(h_u-k_u,k_{\overline{u}}) \hat G_{\overline{u}}(j_u,h_{\overline{u}}-j_{\overline{u}}) \hat G_{\overline{u}}(k_u,h_{\overline{u}}-k_{\overline{u}}). \varepsilonnd{eqnarray*} Note that $$ \sum_{h_{\overline{u}}}\hat G_{\overline{u}}(j_u,h_{\overline{u}}-j_{\overline{u}}) \hat G_{\overline{u}}(k_u,h_{\overline{u}}-k_{\overline{u}})\,\lambdae\,G(j_u)G(k_u), $$ where $$ G(j_u)\,=\, \lambdaeft( \sum_{h_{\overline{u}}}\hat G_{\overline{u}}(j_u,h_{\overline{u}})^2\right)^{1/2}. $$ Similarly, $$ \sum_{h_{u}}\hat F_{u}(h_u-j_{u},j_{\overline{u}})\,\hat F_{u}(h_u-k_u,k_{\overline{u}}) \,\lambdae\,F(j_{\overline{u}})F(k_{\overline{u}}), $$ where $$ F(j_{\overline{u}})\,=\, \lambdaeft( \sum_{h_{u}}\hat F_{u}(h_u,j_{\overline{u}})^2\right)^{1/2}. $$ We obtain $$ a_u\,\lambdae\, \sum_{j_u}\sum_{j_{\overline{u}}} \sum_{k_u}\sum_{k_{\overline{u}}}F(j_{\overline{u}})F(k_{\overline{u}})G(j_u)G(k_u) \,=\,\lambdaeft(\sum_{j_{\overline{u}}}F(j_{\overline{u}})\right)^2 \lambdaeft(\sum_{k_u}G(k_{u})\right)^2. $$ Observe that \betaegin{eqnarray*} \lambdaeft(\sum_{j_{\overline{u}}}F(j_{\overline{u}})\right)^2\,&=&\, \lambdaeft(\sum_{j_{\overline{u}}}\lambdaeft(\sum_{j_u}\hat F_u(j_u,j_{\overline{u}})^2\right)^{1/2} r^{1/2}_{\alphalpha}(\gamma,j_{\overline{u}})r^{-1/2}_{\alphalpha}(\gamma,j_{\overline{u}})\right)^2\\ &\lambdae&\, \sum_{j_{\overline{u}}}\lambdaeft(\sum_{j_u}\hat F_u(j_u,j_{\overline{u}})^2 r_{\alphalpha}(\gamma,j_{\overline{u}}) \right)\lambdaeft(\sum_{j_{\overline{u}}}r^{-1}_{\alphalpha}(\gamma,j_{\overline{u}})\right)\\ &=&\, \lambdaeft(\sum_{j_{\overline{u}}}\sum_{j_u} |\hat f(j_u,j_{\overline{u}})|^2r_{\alphalpha}(\gamma,j_{u})r_{\alphalpha}(\gamma,j_{\overline{u}})\right) \lambdaeft(\sum_{j_{\overline{u}}}r^{-1}_{\alphalpha}(\gamma,j_{\overline{u}})\right)\\ &=&\,\lambdaeft(\sum_j|\hat f(j)|^2r_{\alphalpha}(\gamma,j)\right) \lambdaeft(\sum_{j_{\overline{u}}}r^{-1}_{\alphalpha}(\gamma,j_{\overline{u}})\right)\\ &=&\,\|f\|^2_d\, \sum_{j_{\overline{u}}}r^{-1}_{\alphalpha}(\gamma,j_{\overline{u}}). \varepsilonnd{eqnarray*} For the last sum we have $$ \sum_{j_{\overline{u}}}r^{-1}_{\alphalpha}(\gamma,j_{\overline{u}})\,=\, \prod_{m\in \overline{u}} \lambdaeft(1+\gamma_m\sum_{j\not=0}|j|^{-\alphalpha}\right)\,=\, \prod_{m\in \overline{u}} \lambdaeft(1+2\gamma_m\zeta(\alphalpha)\right). $$ Similarly, $$ \lambdaeft(\sum_{k_u}G(k_u)\right)^2\,\lambdae\,\|g\|_d^2\,\sum_{k_u} r^{-1}_{\alphalpha}(\gamma,k_u)\,=\,\|g\|^2_d\,\prod_{m\in u}\lambdaeft(1+2\gamma_m \zeta(\alphalpha)\right). 
$$ Putting all these estimates together we conclude that \betaegin{eqnarray*} \|f\,g\|^2_d\,&\lambdae&\,2^dc^{2d}\sum_{u\subset D}\|f\|^2_d\,\|g\|^2_d\, \prod_{m\in \overline{u}} \lambdaeft(1+2\gamma_m\zeta(\alphalpha)\right)\prod_{m\in u} \lambdaeft(1+2\gamma_m\zeta(\alphalpha)\right)\\ &=&\, 2^dc^{2d}\sum_{u\subset D}\|f\|^2_d\,\|g\|^2_d\prod_{m=1}^d \betaigg(1+2\gamma_m\zeta(\alphalpha)\betaigg)\\ &=&\,4^dc^{2d}\prod_{m=1}^d\betaigg(1+2\gamma_m\zeta(\alphalpha) \betaigg)\,\|f\|^2_d\,\|g\|^2_d, \varepsilonnd{eqnarray*} >From which (\ref{appe1}) easily follows. \vskip 1pc For the quantum setting, we need to consider the function $w(x)=f(x)\overline{f}(x)=|f(x)|^2$ for $f\in H_d$. Note that $\overline{f}$ also belongs to $H_d$ and $\|\overline{f}\|_d=\|f\|_d$, since $\hat{\overline{f}}(h)=\overline{\hat f(-h)}$ and $r_{\alphalpha}(\gamma,h)=r_{\alphalpha}(\gamma,-h)$ for all $h\in \Z^d$. Then (\ref{appe1}) guarantees that $w\in H_d$ and \betaegin{equation}\lambdaabel{appe2} \betaig\|\,|f|^2\,\betaig\|_d\,\lambdae\,C(d)\,\|f\|^2_d \qquad \forall\,f\in H_d. \varepsilonnd{equation} \vskip 2pc {\betaf Acknowledgments. } We are grateful to Stefan Heinrich, Anargyros Papageorgiou, Joseph F. Traub, Greg Wasilkowski, and Arthur Werschulz for valuable remarks. \vskip 2pc \betaegin{thebibliography}{99} \betaibitem{A50} N. Aronszajn (1950): Theory of reproducing kernels. Trans. Amer. Math. Soc. {\betaf 68}, 337--404. \betaibitem{BBC:98} R.~Beals, H.~Buhrman, R.~Cleve, and M.~Mosca (1998): Quantum lower bounds by polynomials. Proceedings of 39th IEEE FOCS, 352--361, see also http://arXiv.org/abs/quant-ph/9802049. \betaibitem{BHM:00} G.~Brassard, P.~H{\o}yer, M.~Mosca, and A.~Tapp (2000): Quantum amplitude amplification and estimation. Technical report, http://arXiv.org/abs/quant-ph/0005055. \betaibitem{EHI} A. Ekert, P. Hayden, and H. Inamori (2000): Basic concepts in quantum computation. See http://arXiv.org/abs/quant-ph/0011013. \betaibitem{G1} L. Grover (1996): A fast quantum mechanical algorithm for database search. Proc. 28 Annual ACM Symp. on the Theory of Computing, 212--219, ACM Press New York. See also http://arXiv.org/abs/quant-ph/9605043. \betaibitem{G2} L. Grover (1998): A framework for fast quantum mechanical algorithms. Proc. 30 Annual ACM Symp. on the Theory of Computing, 53--62, ACM Press New York. See also http://arXiv.org/abs/quant-ph/9711043. \betaibitem{Gr} J. Gruska (1999): Quantum Computing. McGraw-Hill, London. \betaibitem{He1} Heinrich, S. (2002): Quantum summation with an application to integration. J. Complexity {\betaf 18}. See also http://arXiv.org/abs/quant-ph/0105116. \betaibitem{He2} S.\ Heinrich (2001): Quantum integration in Sobolev classes. Preprint. See also http://arXiv.org/abs/quant-ph/0112153. \betaibitem{HN1} S. Heinrich and E. Novak (2002): Optimal summation and integration by deterministic, randomized, and quantum algorithms. In: Monte Carlo and Quasi-Monte Carlo Methods 2000. K.-T. Fang, F. J. Hickernell, H. Niederreiter (eds.), pp. 50--62, Springer. See also http://arXiv.org/abs/quant-ph/0105114. \betaibitem{HN2} S. Heinrich and E. Novak (2001): On a problem in quantum summation. Submitted to J. Complexity. See also http://arXiv.org/abs/quant-ph/0109038. \betaibitem{HW} F. J. Hickernell and H. Wo\'zniakowski (2001): Tractability of multivariate integration for periodic functions. J. Complexity {\betaf 17}, 660--682. \betaibitem{NW} A. Nayak and F. Wu (1999): The quantum query complexity of approximating the median and related statistics. 
STOC, May 1999, 384--393. See also http://arXiv.org/abs/quant-ph/9804066. \betaibitem{Nie} M. A. Nielsen and I. L. Chuang (2000): Quantum Computation and Quantum Information, Cambridge University Press. \betaibitem{N92} E. Novak (1992): Optimal linear randomized methods for linear operators in Hilbert spaces. J. Complexity {\betaf 8}, 22--36. \betaibitem{Nov95} E. Novak (1995): The real number model in numerical analysis. J. Complexity {\betaf 11}, 57--73. \betaibitem{Nov01} E. Novak (2001): Quantum complexity of integration. J. Complexity {\betaf 17}, 2--16. See also http://arXiv.org/abs/quant-ph/0008124. \betaibitem{NW00} E. Novak and H. Wo\'zniakowski (2000): Complexity of linear problems with a fixed output basis. J. Complexity {\betaf 16}, 333--362. \betaibitem{NW01} E. Novak and H. Wo\'zniakowski (2001): Intractability results for integration and discrepancy, J. Complexity {\betaf 17}, 388--441. \betaibitem{P} A. O. Pittenger (1999): Introduction to Quantum Computing Algorithms. Birk\-h\"auser, Boston. \betaibitem{S3} P. W. Shor (2000): Introduction to Quantum Algorithms. See http://arXiv.org/abs/quant-ph/quant-ph/0005003. \betaibitem{SW2} I. H. Sloan and H. Wo\'zniakowski (1998): When are quasi-Monte Carlo algorithms efficient for high dimensional integrals? J. Complexity {\betaf 14}, 1--33. \betaibitem{SWkor} I. H. Sloan and H. Wo\'zniakowski (2001): Tractability of multivariate integration for weighted Korobov classes. J. Complexity {\betaf 17}, 697--721. \betaibitem{TWW} J. F. Traub, G. W. Wasilkowski and H. Wo\'zniakowski (1988): Information-Based Complexity, Academic Press, New York. \betaibitem{TW} J. F. Traub and H. Wo\'zniakowski (2001): Path integration on a quantum computer, submitted for publication. See also http://arXiv.org/abs/quant-ph/0109113. \betaibitem{Wahba} G. Wahba (1990): Spline Models for Observational Data, SIAM-NSF Regional Conference Series in Appl. Math., SIAM, {\betaf 59}, Philadelphia. \betaibitem{WWwei} G. W. Wasilkowski and H. Wo\'zniakowski (1999): Weighted tensor product algorithms for linear multivariate problems. J. Complexity {\betaf 15}, 402--447. \betaibitem{WWpow} G. W. Wasilkowski and H. Wo\'zniakowski (2001): On the power of standard information for weighted approximation. Found. Comput. Math. {\betaf 1}, 417-434, 2001. \betaibitem{W94} H. Wo\'zniakowski (1994): Tractability and strong tractability of linear multivariate problems. J. Complexity {\betaf 10}, 96--128. \betaibitem{W} H. Wo\'zniakowski (1999): Efficiency of quasi-Monte Carlo algorithms for high dimensional integrals. In Monte Carlo and Quasi-Monte Carlo Methods 1998, eds. H. Niederreiter and J. Spanier, Springer Verlag, Berlin, 114--136. \varepsilonnd{thebibliography} \varepsilonnd{document}
\begin{document} \begin{center} {\Large A note on 3-colorable plane graphs without 5- and 7-cycles \footnote{\footnotesize Supported partially by NSFC 10371055}} \vskip 15pt {Baogang Xu\footnote{\footnotesize email: [email protected]}} {\small School of Mathematics and Computer Science, Nanjing Normal University, 122 Ninghai Road} {\small Nanjing, 210097, PR China} \end{center} \begin{abstract} In \cite{bgrs0}, Borodin {\em et al} figured out a gap of \cite{bxu1}, and gave a new proof with the similar technique. The purpose of this note is to fix the gap of \cite{bxu1} by slightly revising the definition of {\em special faces}, and adding a few lines of explanation in the proofs (new added text are all in black font). \begin{flushleft} {\em Key words and phrases: plane graph, cycle, coloring AMS 2000 Subject Classification: 05c15, 05c78} \end{flushleft} \end{abstract} \iffalse In 1976, Steinberg (\cite{a11} p.229) conjectured that every plane graph without $4$- and $5$-cycles is $3$-colorable. Erd\"os (\cite{a11} p.229) suggested the following relaxation in 1990: Is there an integer $k\geq 5$ such that every plane graph without $i$-cycles for $4\leq i\leq k$ is $3$-colorable? Abbott and Zhou $^{\cite{ahlabz}}$ showed that $k=11$ is acceptable. Sanders and Zhao $^{\cite{dpsyz}}$, and Borodin $^{\cite{a8}}$ independently, improved that to $k=9$. Xu $^{\cite{bxu}}$ improved $k=9$ to $k=8$ by showing that for every plane graph $G$ without cycles of length from 4 to 8, any 3-coloring of a face of degree at most 13 can be extended to $G$. Recently, \fi In \cite{bgrs}, Borodin {\it et al} proved that every plane graph $G$ without cycles of length from 4 to 7 is 3-colorable that provides a new upper bound to Steinberg's conjecture (see \cite{a11} p.229). In \cite{ovbar}, Borodin and Raspaud proved that every plane graph with neither 5-cycles nor triangles of distance less than four is 3-colorable, and they conjectured that every plane graph with neither 5-cycles nor adjacent triangles is 3-colorable, where the distance between triangles is the length of the shortest path between vertices of different triangles, and two triangles are said to be adjacent if they have an edge in common. In \cite{bxu2}, Xu improved Borodin and Raspaud's result by showing that every plane graph with neither 5-cycles nor triangles of distance less than three is 3-colorable. In this note, it is proved that every plane graph without 5- and 7-cycles and without adjacent triangles is 3-colorable. This improves the result of \cite{bgrs}, and offers a partial solution for Borodin and Raspaud's conjecture \cite{ovbar}. Let $G=(V, E, F)$ be a plane graph, where $V, E$ and $F$ denote the sets of vertices, edges and faces of $G$ respectively. The neighbor set and degree of a vertex $v$ are denoted by $N(v)$ and $d(v)$, respectively. Let $f$ be a face of $G$. We use $b(f), V(f)$ and $N(f)$ to denote the boundary of $f$, the set of vertices on $b(f)$, and the set of faces adjacent to $f$ respectively. The degree of $f$, denoted by $d(f)$, is the length of the facial walk of $f$. A $k$-vertex ($k$-face) is a vertex (face) of degree $k$. Let $C$ be a cycle of $G$. We use $int(C)$ and $ext(C)$ to denote the sets of vertices located inside and outside $C$, respectively. $C$ is called a {\it separating} cycle if both $int(C)\neq \emptyset$ and $ext(C)\neq \emptyset$, and is called a {\it facial cycle} otherwise. For convenience, we still use $C$ to denote the set of vertices of $C$. 
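As a computational aside, the configurations forbidden in this note (5-cycles, 7-cycles and adjacent triangles) can be tested by brute force on small examples; the following Python sketch (graphs given as plain adjacency sets, exponential-time enumeration, purely illustrative) is one way to do so:

\begin{verbatim}
from itertools import combinations

def has_k_cycle(adj, k):
    """Brute-force test for a simple cycle of length k (small graphs only)."""
    def extend(path):
        last = path[-1]
        if len(path) == k:
            return path[0] in adj[last]            # close the cycle
        found = False
        for w in adj[last]:
            if w not in path and w > path[0]:       # start vertex is the smallest on the cycle
                found = found or extend(path + [w])
        return found
    return any(extend([v]) for v in adj)

def has_adjacent_triangles(adj):
    """Two triangles sharing an edge, i.e. sharing exactly two vertices."""
    tris = [t for t in combinations(adj, 3)
            if t[1] in adj[t[0]] and t[2] in adj[t[0]] and t[2] in adj[t[1]]]
    return any(len(set(a) & set(b)) == 2 for a, b in combinations(tris, 2))

# Illustrative graph: a 4-cycle on vertices 0,1,2,3 (adjacency sets).
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
print(has_k_cycle(adj, 5), has_k_cycle(adj, 7), has_adjacent_triangles(adj))
\end{verbatim}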
{\bf Let $f$ be an 11-face bounded by a cycle $C=u_1u_2u_3\ldots u_{11}u_1$. A 4-cycle $u_1u_2u_3vu_1$ is called an {\em ear} of $f$ if $v\not\in C$. The graph $G_1$, obtained from $G$ by removing $u_2$ and all the vertices in $int(u_1u_2u_3vu_1)$, is called an {\em ear-reduction} of $G$ on $f$. Since $u_1vu_3\ldots u_{11}u_1$ is still an 11-cycle bounding a face, say $f_1$, in $G_1$, if $f_1$ has an ear, we may make an ear-reduction to $G_1$ on $f_1$ and get a new graph $G_2$ and an 11-face $f_2$ bounded by a cycle in $G_2$. Continue this procedure, we get a sequence of graphs $G, G_1, G_2, \ldots$, and a sequence of 11-faces $f, f_1, f_2, \ldots $, such that $f_i$ is an 11-face in $G_i$. Each of these 11-faces is called a {\em collapse} of $f$. An 11-face $f$ of $G$ is called a {\it special face} if the following hold: (1) $b(f)$ is a cycle; (2) $f$ is adjacent to a triangle sharing only one edge with $f$; and furthermore, for each collapse $f'$ of $f$ and its corresponding graph $G'$: (3) every vertex in $V(G')\setminus V(f')$ has at most two neighbors on $b(f')$; and (4) for every edge $uv$ of $G'\setminus V(f')$, $|N_{G'}(u)\cap V(f')|+|N_{G'}(v)\cap V(f')|\leq 3$.} A vertex in $G\setminus V(f)$ that violates (3) is called a {\it claw-center} of $b(f)$, and a pair of adjacent vertices in $G\setminus V(f)$ that violates (4) is called a {\it d-claw-center} of $b(f)$. A separating 11-cycle $C$ is called a {\it special cycle} if in $G\setminus ext(C)$, $C$ is the boundary of a special face. We use ${\cal G}$ to denote the set of plane graphs without 5- and 7-cycles and without adjacent triangles. Following is our main theorem. \renewcommand{0.8}{1} \begin{guess}\label{th-1} Let $G$ be a graph in ${\cal G}$ that contains cycles of length $4$ or $6$, $f$ an arbitrary face that is a special face, or a $3$-face, or a $9$-face with $b(f)$ being a cycle. Then, any $3$-coloring of $f$ can be extended to $G$. \end{guess} \renewcommand{0.8}{2} As a corollary of Theorem \ref{th-1}, every plane graph in ${\cal G}$ is 3-colorable. To see this, let $G$ be a plane graph in ${\cal G}$. By Gr\"{o}tzsch's theorem, we may assume that $G$ contains triangles. {\bf If $G$ contains neither 4-cycles nor 6-cycles, then by Theorem 1.2 of \cite{bgrs}, $G$ is 3-colorable}. Otherwise, for an arbitrary triangle $T$, any 3-coloring of $T$ can be extended to $int(T)$ and $ext(T)$, that yields a 3-coloring of $G$. \vskip 10pt \noindent {\bf Proof of Theorem \ref{th-1}.} Assume that $G$ is a counterexample to Theorem \ref{th-1} with minimum $\sigma(G)=|V(G)|+|E(G)|$. Without loss of generality, assume that the unbounded face $f_o$ is a special face, or a 3-face or a 9-face with $b(f)$ being a cycle, such that a 3-coloring $\phi$ of $f_o$ cannot be extended to $G$. Let $C=b(f_o)$ and let $p=|C|$. Then, every vertex not in $C$ has degree at least 3. By our choice of $G$, $f_o$ has no ears if $p=11$, and neither 4-cycle nor 6-cycle is adjacent to triangles. Since $G\setminus int(C')$ is still in ${\cal G}$ for any separating cycle $C'$ of $G$, {\bf either by the minimality of $G$ or by Theorem 1.2 of \cite{bgrs} (this will be used frequently but implicitly)}, \renewcommand{0.8}{1} \begin{lemma}\label{clm0} $G$ contains neither special cycles, nor separating $k$-cycles,$k=3,9$. \end{lemma} \begin{lemma}\label{2-connect} $G$ is $2$-connected. That is, the boundary of every face of $G$ is a cycle. 
\end{lemma}\renewcommand{0.8}{2} \indent Interested readers may find the proof of Lemma \ref{2-connect} in \cite{bgrs} (see that of Lemma 2.2). Let $C'$ be a cycle of $G$, and $u$ and $v$ two vertices on $C'$. We use $C'[u,v]$ to denote the path of $C'$ clockwisely from $u$ to $v$, and let $C'(u,v)=C'[u,v]\setminus \{u, v\}$. Unless specified particularly, we always write a cycle on its vertices sequence clockwisely. \renewcommand{0.8}{1} \begin{lemma}\label{chordless} $C$ is chordless.\end{lemma} \renewcommand{0.8}{2} {\bf Proof.} Assume to the contrary that $C$ has a chord $uv$. Let $S_1=V(C(u,v))$, $S_2=V(C(v,u))$, and assume that $|S_1|<|S_2|$. It is certain that $p=9$ or 11, and $|S_1|\leq 4$. Since $|S_1|=3$ provides $C[u,v]+uv$ is a 5-cycle, and $|S_1|=4$ provides $C[v,u]+uv$ is a $(p-4)$-cycle, we assume that $|S_1|=1$ or 2. If $|S_1|=1$, say $S_1=\{w\}$, then $uvwu$ bounds a 3-face by Lemma \ref{clm0}. Let $G'$ be the graph obtained from $G-w$ by inserting a new vertex into $uv$. Then, $G'\in {\cal G}$, $\sigma(G')=\sigma(G)-1$, {\bf and the unbounded face of $G'$ is a special face of $G'$ if $p=11$ since $f_o$ is one of $G$}. We can extend $\phi$ to a 3-coloring $\phi'$ of $G'$. This produces a contradiction because $\phi'$ and $\phi(w)$ yield a 3-coloring of $G$ that extends $\phi$. Assume $|S_1|=2$. Since $C[v,u]+uv$ is a $(p-2)$-cycle, and since $G$ has neither adjacent triangles nor 5-cycles, $p=11$ and there exists a 3-face sharing a unique edge with $f_o$ on $C[v,u]$. So, $C[v,u]+uv$ is a separating 9-cycle, a contradiction to Lemma \ref{clm0}. $ \rule{1mm}{3mm}$ \renewcommand{0.8}{1} \begin{lemma}\label{11-cycles} $N(u)\cap N(v)\cap int(C_1)=\emptyset$ for separating $11$-cycle $C_1$ and $uv\in E(C_1)$. \end{lemma} \renewcommand{0.8}{2} {\bf Proof.} Assume to the contrary that $x\in N(u)\cap N(v)\cap int(C_1)$. By Lemma \ref{clm0}, $xuvx$ bounds a 3-face. We will show that $C_1$ has neither claw-center nor d-claw-center. Then, $C_1$ is a special cycle that contradicts Lemma \ref{clm0}. {\bf Let $G'=G\setminus ext(C_1)$, and let $f'$ be the unbounded face of $G'$. For each collapse $f''$ of $f'$, $xuvx$ is always adjacent to $f''$, and a claw-center (resp. d-claw-center) of $C_1$ is also one of $b(f'')$. We may assume that each claw-center (resp. d-claw-center) of $C_1$ has three neighbors (resp. four neighbors) on $C_1$}. If $xw\in E(G)$ for some $w\in C_1\setminus\{u,v\}$, assume that $u,v$ and $w$ clockwisely lie on $C_1$, then $|V(C_1(v,w))|\geq 5$ and $|V(C_1(w,u))|\geq 5$ since $G\in {\cal G}$, and hence $|C_1|\geq 13$, a contradiction. If a vertex $y\in int(C_1)\setminus\{x\}$ has three neighbors $z_1, z_2$ and $z_3$ on $C_1$, then by simply counting the number of vertices in $C_1\setminus\{z_1, z_2, z_3\}$, $G$ must contain a 9-cycle $C_2$ with $x\in int(C_2)$, a contradiction to Lemma \ref{clm0} because $C_2$ is a separating 9-cycle. Assume that $\{a, b\}$ is a d-claw-center of $C_1$. Since $G$ has no adjacent triangles, $|(N(a)\cup N(b))\cap C_1|\geq 3$. If $(N(a)\cup N(b))\cap C_1$ has exactly three vertices, say $a_1, a_2$ and $a_3$ clockwisely on $C_1$, we may assume that $a_1\in N(a)\cap N(b)$, then $|V(C_1(a_1,a_2))|\geq 5$ and $|V(C_1(a_3,a_1))|\geq 5$ that provide $|C_1|\geq 13$. So, assume that $a$ has two neighbors $a_1, a_2\in C_1$, $b$ has two neighbors $b_1, b_2\in C_1\setminus\{a_1, a_2\}$, and assume these four vertices clockwisely lie on $C_1$. 
If $a_1a_2\in E(C_1)$, then $|V(C_1(a_2, b_1))|\geq 4$ and $|V(C_1(b_2,a_1))|\geq 4$ providing $|C_1|\geq 12$, a contradiction. So, we may assume that $a_1a_2\not\in E(C_1)$ and $b_1b_2\not\in E(C_1)$, i.e., $|V(C_1(a_1,a_2))|\geq 1$ and $|V(C_1(b_1,b_2))|\geq 1$. By symmetry, we assume $x\in int(C_1[a_1, b_1]\cup a_1abb_1)$. By simply counting the number of vertices in $C_1\setminus\{a_1,a_2,b_1,b_2\}$, we get $|C_1|>11$, a contradiction. $ \rule{1mm}{3mm}$ \renewcommand{0.8}{1} \begin{lemma}\label{length-2} For $u,v\in C$ and $x\not\in C$, if $xu, xv\in E(G)$, then $uv\in E(C)$. \end{lemma} \renewcommand{0.8}{2} {\bf Proof.} Assume to the contrary that $uv\not\in E(C)$. By Lemma~\ref{chordless}, $uv\not\in E(G)$. Let $|V(C[u,v])|=l<|V(C[v,u])|$. Then, $3\leq l\leq {p+1\over 2}\leq 6$. Since $C[u,v]\cup vxu$ is an $(l+1)$-cycle and $C[v,u]\cup uxv$ is a $(p-l+3)$-cycle, $l\not\in \{4, 6\}$, and $l\neq 5$ whenever $p=9$. If $l=5$ and $p=11$, $C[v,u]\cup uxv$ must bound a 9-face by Lemma \ref{clm0}, then $f_o$ has to be adjacent to a 3-face $f_1$ on $C[u,v]$, and hence $C[u,v]\cup vxu\cup b(f_1)$ yields a 7-cycle. So, $l=3$. Let $C[u,v]=uwv$. If $p=11$, then there exists a 3-face sharing a unique edge with $f_o$ on $C[v,u]$ that contradicts Lemma \ref{11-cycles} because $C[v,u]\cup vxu$ is a separating 11-cycle. Therefore, $p=9$ and $C[v,u]\cup vxu$ bounds a 9-face by Lemmas~\ref{clm0} and \ref{chordless}. Let $G'$ be the graph obtained from $G\setminus V(C(v,u))$ by inserting $5$ new vertices into $ux$. Then, $G'\in {\cal G}$, $\sigma(G')<\sigma(G)$, and the unbounded face of $G'$ has degree $9$. We can extend $\phi(u), \phi(w)$ and $\phi(v)$ to a 3-coloring $\phi'$ of $G'$ with $\phi'(u)\neq \phi'(x)$. But $\phi'$ and $\phi$ yield a 3-coloring of $G$ that extends $\phi$, a contradiction. $ \rule{1mm}{3mm}$ \renewcommand{0.8}{1} \begin{lemma}\label{4-face} $G$ contains neither $4$-cycles nor $6$-cycles. \end{lemma}\renewcommand{0.8}{2} {\bf Proof.} First assume to the contrary that $G$ contains a 4-cycle. Assume that $C_1$ is a separating 4-cycle. Let $\psi$ be an extension of $\phi$ on $G\setminus int(C_1)$, and let $G_1$ be the graph obtained from $G\setminus ext(C_1)$ by inserting five new vertices into an edge of $C_1$. If $p\neq 3$ then $|C\setminus C_1|\geq 6$ since $C$ is chordless, and hence $|ext(C_1)|\geq 6$. If $p=3$ then $|C\cap C_1|\leq 1$ and hence $E(C)\cap E(C_1)=\emptyset$, again $|ext(C_1)|\geq 6$ because every face incident with some edge on $C_1$ is a $4^+$-face. Therefore, $\sigma(G_1)<\sigma(G)$, and we can extend the restriction of $\psi$ on $C_1$ to $G_1$, and thus get a 3-coloring of $G$ that extends $\phi$. So, we assume that $G$ contains no separating 4-cycles. We proceed to show that one can identify a pair of diagonal vertices of a 4-cycle such that $\phi$ can be extended to a 3-coloring of the resulting graph $G'$. Since any 3-coloring of $G'$ offers a 3-coloring of $G$, this contradiction guarantees the nonexistence of 4-cycles in $G$. Let $f$ be an arbitrary 4-face of $G$ with $b(f)=uvwxu$. If $f\not\in N(f_o)$, $b(f)$ contains a pair of diagonal vertices that are not on $C$. By symmetry, we assume that $u, w\in b(f)\setminus C$ whenever $f\not\in N(f_o)$. Let $G_{u,w}$ be the graph obtained from $G$ by identifying $u$ and $w$, and let $r_{uw}$ be the new vertex obtained by identifying $u$ and $w$. It is clear that $G_{u,w}$ contains no adjacent triangles since no edge of $b(f)$ is contained in triangles. 
If $f\not\in N(f_o)$, it is certain that $\phi$ is still a proper coloring of $C$ in $G_{u,w}$. If $f\in N(f_o)$, we may assume that $u\in C$, then $w\not\in C$ and $N(w)\cap C\subset \{x,v\}$ by Lemmas \ref{chordless} and \ref{length-2}, and thus $\phi$ is also a proper coloring of $C$ in $G_{u,w}$ by letting $\phi(r_{u,w})=\phi(u)$. Since a cycle of length 5 or 7 in $G_{u,w}$ yields a 7-cycle or a separating 9-cycle in $G$, $G_{u,w}\in {\cal G}$. Now we need only to check that $f_o$ is still a special face in $G_{u,w}$ in case of $p=11$. Assume that $p=11$. {\bf We first consider the case that $N(f_o)$ has 4-faces. Choose $f$ to be a 4-face in $N(f_o)$. By symmetry, we assume that $ux\in E(C)$. Let $x_1x_2uxx_3$ be a segment on $C$. Since $f_o$ is adjacent to a 3-face and has no ears, we may suppose that $v\not\in C$ and $xx_3$ is not on 4-cycles. Assume that $N(w)\cap N(x_1)$ has a vertex, say $w'$. $w'\not\in C$ by Lemmas~\ref{chordless} and \ref{length-2}, and so $(C\cup x_1w'wx)\setminus\{u,x_2\}$ is an 11-cycle. Let $f'$ be a 3-face sharing a unique edge with $f_o$. Either $b(f')\cap \{x_1x_2, x_2u\}\neq \emptyset$ produces a 7-cycle, or $b(f')\cap (C\setminus \{u,x_2\})\neq \emptyset$ contradicts Lemma~\ref{11-cycles}. So, $N(w)\cap N(x_1)=\emptyset$, and $f_o$ has no ears in $G_{u,w}$}. If $C$ has a claw-center $z$, then $z$ has three neighbors on $C$. Let $y_1, y_2$ and $y_3$ be three neighbors of $z$ clockwisely on $C$ in $G_{u,w}$. Then $y_i=r_{uw}$ for an $i$. Assume $y_1=r_{uw}$. It is clear that $x\not\in \{y_2, y_3\}$, and $y_2y_3\in E(C)$ by Lemma \ref{length-2}. If $|V(C(x,y_2))|\leq 3$, then in $G$, $C(x,y_2)\cup xwzy_2\cup zy_3$ contains a cycle of length 5 or 7. If $|V(C(y_3,u))|\leq 3$, then in $G$, $C(y_3,u)\cup C_1\cup wzy_2\cup zy_3$ contains a cycle of length 5 or 7, or a separating 9-cycle. Therefore, $|V(C(x,y_2))|\geq 4$, $|V(C(y_3,u))|\geq 4$, and hence $p\geq 12$, a contradiction. Assume that $C$ has a d-claw-center $\{z_1,z_2\}$ in $G_{u,w}$. Since $C$ has no claw-center in $G_{u,w}$, $|N(z_i)\cap C|=2$, $i=1,2$. Let $N(z_1)\cap C=\{y_1,y_2\}$ and $N(z_2)\cap C=\{y_3,y_4\}$. Since $G$ contains no adjacent triangles, $\{y_1,y_2\}\cap \{y_3,y_4\}=\emptyset$ by Lemma \ref{length-2}. Since $f_o$ is a special face in $G$, we may assume that $y_2=r_{uw}$. Then, $y_3y_4\in E(C)$ by Lemma \ref{length-2}. Using the similar argument as used in the last paragraph, we get $p\geq 12$ by counting the number of vertices in $C(x,y_3), C(y_4,y_1)$ and $C(y_1,u)$, a contradiction. {\bf Suppose that $N(f_o)$ has no 4-faces. If $b(f)\cap C\neq\emptyset$, both $u$ and $w$ have no neighbor on $C\setminus\{v,x\}$ by Lemmas~\ref{chordless} and \ref{length-2}. If every 4-face shares no common vertex with $f_o$, we may suppose that $w$ has no neighbor on $C$. In either case, it is straightforward to check that $f_o$ has no ears in $G_{u,w}$}. $C$ has a claw-center $z$ provides $z=r_{u,w}$, and $C$ has a d-claw-center provides $r_{u,w}$ is in the d-claw-center. In either case, one may get a contradiction that $p\geq 12$ by almost the same arguments as above. Now, assume that $C'$ is a 6-cycle of $G$. Since $G$ contains no 4-cycles as just proved above, every face incident with some edge on $C'$ is a $6^+$-face. 
If $C'$ is a separating cycle, it is not difficult to verify that $|ext(C')|\geq 4$, then by letting $G''$ be the graph obtained from $G\setminus int(C')$ by inserting three vertices into an edge of $C'$, we can first extend $\phi$ to $G\setminus int(C')$, and then extend the restriction of $\phi$ on $C'$ to $G''$, and thus get an extension of $\phi$ on $G$. So, we assume that $G$ has no separating 6-cycles. Let $f'$ be an arbitrary 6-face. If $b(f')\cap C\neq \emptyset$, we choose $u_0$ to be a vertex in $b(f')\cap C$, and choose $u_1$ to be a vertex in $b(f')\setminus C$. If $b(f')\cap C=\emptyset$, since $G$ contains no $l$-cycle for $l=4,5$ or 7, there must be a vertex on $b(f')$ that has no neighbors on $C$; we choose such a vertex as $u_1$. Let $b(f')=u_0u_1\ldots u_5u_0$, and let $H$ be the graph obtained from $G$ by identifying $u_1$ and $u_5$, $u_2$ and $u_4$, respectively. Since $H$ contains no adjacent triangles, and any 5-cycle (7-cycle) of $H$ yields a 7-cycle (separating 9-cycle) in $G$, $H\in {\cal G}$. We will show that $\phi$ is still a coloring of $f_o$ in $H$. It is trivial if $b(f')\cap C=\emptyset$, since the operation from $G$ to $H$ is independent of $\phi$. Assume that $b(f')\cap C\neq \emptyset$. Then, $u_0\in C$ and $u_1\not\in C$ by our choice, and $u_2\not\in C$ and $N(u_1)\cap C=\{u_0\}$ by Lemma \ref{length-2}. If either $u_2$ has no neighbors on $C$, or $u_4\not\in C$, then we are done. Otherwise, assume that $u_4\in C$ and $u_2$ has a neighbor, say $z$, on $C$, and assume that $u_0, z$ and $u_4$ lie on $C$ clockwisely. Since $G$ contains no 5-cycles, $u_0u_4\not\in E(G)$, and hence $u_5\in C$ by Lemma \ref{length-2}. Since $G$ contains no 4-cycles and no separating 6-cycles, $|V(C(u_0, z))|\geq 4$, $|V(C(z,u_4))|\geq 4$, and hence $p\geq 12$, a contradiction. Finally, we will prove that $f_o$ is still a special face in $H$ in case of $p=11$. Then, a contradiction occurs again since $\phi$ can be extended to $H$ that offers an extension of $\phi$ to $G$; this will end the proof of Lemma \ref{4-face} and also the proof of our theorem. The proof technique is again, as used repeatedly, to derive a contradiction by counting the number of vertices on the segments divided by the vertices adjacent to some claw-center or d-claw-center of $C$. We leave the case that $b(f')\cap C\neq \emptyset$ to the readers, and proceed only with the case $b(f')\cap C=\emptyset$. {\bf Suppose that every 6-face has no common vertex with $f_o$. Since the above procedure holds for an arbitrary 6-face of $G$, and since $G$ has neither 4-cycles nor separating 6-cycles as just proved, it is straightforward to check that we can choose $f'$ to be a 6-face such that $f_o$ has no ears in $H$}. Assume that $p=11$ but $f_o$ is not a special face in $H$. Let $r_{1,5}$ and $r_{2,4}$ be the vertices obtained by identifying $u_1$ and $u_5$, and $u_2$ and $u_4$, respectively. Assume that $C$ has a claw-center $y$ with three neighbors $y_1,y_2$ and $y_3$, clockwisely on $C$ in $H$. By symmetry, we may assume that $y=r_{1,5}$, and assume that $y_1u_1\in E(G)$ and $y_2u_5, y_3u_5\in E(G)$. Then, $y_2y_3\in E(C)$ by Lemma \ref{length-2}. Since $G$ contains no adjacent triangles, contains no cycles of length 4,5 and 7, and contains no separating 9-cycles, $|V(C(y_1,y_2))|\geq 4$, $|V(C(y_3,y_1))|\geq 5$, and hence $p\geq 12$, a contradiction. Assume that $C$ has a d-claw-center $\{z_1,z_2\}$ in $H$. Then, each of $z_1$ and $z_2$ has two neighbors on $C$ and these four vertices are all distinct.
By symmetry, we may assume that $z_1=r_{1,5}$. $z_2$ may be $u_0$, $r_{2,4}$ or a vertex not on $C\cup C'$. In each case, the same argument as above ensures that $p\geq 12$. This contradiction completes the proof of Lemma \ref{4-face}. $ \rule{1mm}{3mm}$ \vskip 8pt Our proof is then completed because by the assumption in Theorem \ref{th-1}, $G$ contains either 4-cycles or 6-cycles. $ \rule{1mm}{3mm}$ \vskip 10pt \renewcommand{0.8}{0.8} \end{document}
\begin{document} \title{Algebraic Structure of the Minimum Error Discrimination Problem for Linearly Independent Density Matrices} \begin{abstract} The minimum error discrimination problem for ensembles of linearly independent pure states is known to have an interesting structure; for such a given ensemble the optimal POVM is given by the pretty good measurement of another ensemble which can be related to the former ensemble by a bijective mapping $\mathscr{R}$ on the ``space of ensembles''. In this paper we generalize this result to ensembles of general linearly independent states (not necessarily pure) and also give an analytic expression for the inverse of the map, i.e., for $\mathscr{R}^{-1}$. In the process of proving this we also simplify the necessary and sufficient conditions that a POVM needs to satisfy to maximize the probability of success for the MED of an LI ensemble of states. This simplification is then employed to arrive at rotationally invariant necessary and sufficient conditions of optimality. Using these rotationally invariant conditions it is established that every state of a LI mixed state ensemble can be resolved into a pure state decomposition so that the corresponding pure state ensemble (corresponding to the pure states of all the mixed states together) has as its optimal POVM a pure state decomposition of the optimal POVM of the mixed state ensemble. This gives the necessary and sufficient conditions for the PGM of a LI ensemble to be its optimal POVM; another generalization of the pure state case. Also, these rotationally invariant conditions suggest a technique to give the optimal POVM for an ensemble of LI states. This technique is polynomial in time and outperforms standard barrier-type interior point SDP in terms of computational complexity. \end{abstract} \section{Introduction} \label{intro} Minimum Error Discrimination (MED) is a state hypothesis testing problem in quantum state discrimination. The setting is as follows: Alice selects a state $\rho_i$ with probability $p_i>0$ from an ensemble of $m$ states $\widetilde{P}=\{ p_i >0, \rho_i \}_{i=1}^{m}$, and sends it to Bob, who is then tasked to find the index $i$ from the set $\{1,2,\cdots, m\}$ by performing a measurement on the state he receives. His measurement is a generalized POVM of $m$ elements $E = \{ E_i \}_{i=1}^{m}$, and his strategy for hypothesis testing is based on a one-to-one correspondence between the states $ \rho_i \in \widetilde{P}$ and POVM elements $E_i \in E$ such that he will declare having been given $\rho_j$ when his measurement yields the $j$-th outcome. Since the states $\rho_1, \; \rho_2, \; \cdots, \rho_m$ are not necessarily orthogonal they aren't perfectly distinguishable, i.e., there doesn't exist a measurement such that $Tr \left( \rho_i E_j \right) = \delta_{i,j} Tr \left( \rho_i E_i \right), \; \forall \; 1 \leq i,j \leq m$ unless $Tr \left( \rho_i \rho_j \right) = \delta_{i,j} Tr \left( \rho_i^2 \right), \; \forall \; 1 \leq i,j \leq m$. That $Tr \left( \rho_i E_j \right) \neq 0$ for some $i \neq j$ implies that there may arise a situation where Alice sends the state $\rho_i$ but Bob's measurement yields the $j$-th outcome, which leads him to conclude that Alice gave him $\rho_j$. This is an error. The average probability of such an error is given by: \begin{equation} \label{Pe} P_e = \sum_{\substack{i,j=1 \\ i\neq j}}^m p_i Tr( \rho_i \Pi_j) \end{equation} where $\{ \Pi_i \}_{i=1}^{m}$ represents an $m$ element POVM with $ \Pi_i \geq 0$ and $ \sum_{i=1}^{m} \Pi_i = \mathbb{1}$.
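As a concrete reading of equation \eqref{Pe}, it can be evaluated directly once an ensemble and a POVM are fixed; a minimal NumPy sketch (the two-state qubit ensemble and the projective measurement below are arbitrary illustrations, not taken from this paper):

\begin{verbatim}
import numpy as np

# Illustrative ensemble of m = 2 pure qubit states.
p = [0.5, 0.5]
psi = [np.array([1.0, 0.0]),
       np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)])]
rho = [np.outer(v, v.conj()) for v in psi]

# An arbitrary orthonormal (projective) two-element POVM.
theta = np.pi / 16
basis = [np.array([np.cos(theta), np.sin(theta)]),
         np.array([-np.sin(theta), np.cos(theta)])]
Pi = [np.outer(u, u.conj()) for u in basis]          # Pi_0 + Pi_1 = identity

# Average error probability, eq. (Pe): sum over i != j of p_i Tr(rho_i Pi_j).
P_e = sum(p[i] * np.trace(rho[i] @ Pi[j]).real
          for i in range(2) for j in range(2) if i != j)
print(P_e)
\end{verbatim}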
The average probability of success is given by: \begin{equation} \label{Ps} P_s = \sum_{i=1}^m p_i Tr( \rho_i \Pi_i) \end{equation} Both probabilities sum up to 1: \begin{equation} \label{Sum1} P_s + P_e =1 \end{equation} In MED we are given an ensemble and tasked with finding the maximum value that $P_s$, as defined in equation \eqref{Ps}, attains over the ``space'' of $m$ element POVMs\footnote{It is appropriate to call the set of $m$ element POVMs a space because it is a convex set, inherently implying that there is a notion of addition and scalar multiplication defined between any two points in the set. The restriction is that all linear combinations of elements must be convex combinations. Additionally, this space is compact.} and the points in the space of $m$ element POVMs where this maximum value is attained. \begin{equation} \label{Pmax} P_{s}^{\text{max}} = \text{Max} \{ P_s \; | \; \{ \Pi_i \}_{i=1}^{m}, \; \Pi_i \geq 0, \; \sum_{i} \Pi_i = \mathbb{1}\} = 1- P_{e}^{\text{min}} \end{equation} Despite the innocuous nature of the problem, there is only a fairly limited class of ensembles for which the problem has been solved analytically. This includes any ensemble with just two states, i.e., when $m=2$ \cite{Hel}, ensembles of any number of states where the states are equiprobable and lie on the orbit of a unitary \cite{Ban, Chou}, an ensemble of $3$ qubit states \cite{Ha}\footnote{In \cite{Ha} the general recipe to obtain the optimal POVM for an ensemble of any number of qubit states has been laid down.}, and all LI pure state ensembles for which the pretty good measurement (PGM) associated with the ensemble is also its optimal POVM \cite{Sasaki}. In \cite{Mas} it was shown that there exists a relation between an ensemble $\widetilde{P}$ and another ensemble $\widetilde{Q} = \{ q_i \geq 0, \sigma_i \}_{i=1}^{m}$, with the condition that $ supp \left( q_i \sigma_i \right) \subseteq supp \left( p_i \rho_i \right)$, $\forall \; 1 \leq i \leq m$, such that the optimal POVM for MED of $\widetilde{P}$ is given by the pretty good measurement (PGM) of $\widetilde{Q}$. In the case of linearly independent pure state ensembles (LIP), it is known that $\sigma_i = \rho_i, \; \forall \; 1 \leq i \leq m$, and it is also known that $\widetilde{Q}$ is given as a function of $\widetilde{P}$. This function is invertible and an analytic expression for the inverse of the function is known. This relation between a LI pure state ensemble and its optimal POVM is of significance in finding the optimal POVM \cite{Singal}. It is, hence, desirable to know if such a function exists for other classes of ensembles too. In \cite{Carlos} it was shown that such a function isn't definable for linearly dependent pure state ensembles. What about mixed states? From \cite{Yohina} we know that the optimal POVM for an ensemble of LI states is a projective measurement where the rank of the $i$-th projector equals the rank of the $i$-th state in the ensemble. As we will later show, this itself implies that $rank \left( p_i \rho_i \right) = rank \left( q_i \sigma_i \right)$, $\forall \; 1 \leq i \leq m$, and, since $ supp \left( q_i \sigma_i \right) \subseteq supp \left( p_i \rho_i \right)$, $\forall \; 1 \leq i \leq m$, this implies that $ supp \left( q_i \sigma_i \right) = supp \left( p_i \rho_i \right)$, $\forall \; 1 \leq i \leq m$. This gives us an indication that the aforementioned function may be definable in the general LI state case, i.e., when the states aren't necessarily pure.
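For reference, the pretty good measurement mentioned above is the square-root measurement built from the ensemble average $\bar{\rho}=\sum_i p_i\rho_i$ as $E_i=\bar{\rho}^{-1/2}p_i\rho_i\bar{\rho}^{-1/2}$; a minimal sketch of this standard construction (NumPy; the ensemble below is an arbitrary illustration, and a pseudo-inverse square root is used in case $\bar{\rho}$ is singular):

\begin{verbatim}
import numpy as np

def pgm(p, rho):
    """Pretty good (square-root) measurement: E_i = rb^{-1/2} (p_i rho_i) rb^{-1/2}."""
    rb = sum(pi * ri for pi, ri in zip(p, rho))                 # ensemble average state
    w, V = np.linalg.eigh(rb)
    inv_sqrt = V @ np.diag([1.0 / np.sqrt(x) if x > 1e-12 else 0.0 for x in w]) @ V.conj().T
    return [inv_sqrt @ (pi * ri) @ inv_sqrt for pi, ri in zip(p, rho)]

# Illustrative LI mixed-state ensemble on C^3 (supports span the whole space).
p = [0.3, 0.7]
rho = [np.diag([0.5, 0.5, 0.0]), np.diag([0.0, 0.0, 1.0])]
E = pgm(p, rho)
print(np.allclose(sum(E), np.eye(3)))    # completeness: the E_i form a POVM
\end{verbatim}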
In this paper we establish that such a function is definable and that it is an invertible function as well. Additionally, we give an analytic expression for the inverse function. In the process we also simplify the necessary and sufficient condition that a POVM has to satisfy to be the optimal POVM for an ensemble of linearly independent states. Also, the necessary and sufficient condition is brought to a rotationally invariant form. This form can be exploited to obtain the optimal POVM for the MED of any LI ensemble. These rotationally invariant conditions tell us that for each ensemble of LI states, there is a corresponding pure state decomposition such that the ensemble corresponding to this pure state decomposition has an optimal POVM which is itself a pure state decomposition of the optimal POVM for the mixed state ensemble. This fact is used to show when the pretty good measurement of an LI ensemble is its optimal measurement; this is also a generalization of the pure state case. Also, the rotationally invariant conditions suggest a recipe to obtain the optimal POVM for a LI ensemble of states. This technique is polynomial in time and simple to use. The paper is divided into various sections as follows: section \eqref{OPTC} gives the known optimality conditions for the MED of any general ensemble; section \eqref{Mixed} first introduces what is known so far about MED for LI state ensembles and then goes on to establish the main result of the paper, i.e., that every LI state ensemble can be mapped to another LI state ensemble through an invertible map, such that the PGM of the image of the ensemble under the map is the optimal POVM for the MED of the corresponding pre-image ensemble. Establishing the existence of such a map requires a simplification of the known optimality conditions in the case of LI ensembles, which we prove. In the same section we also obtain an analytic expression for the inverse of this map. In section \eqref{compareMEDP} we compare the problem of MED for general LI mixed ensembles with the problem of MED for LI pure state ensembles which are defined on the same Hilbert space $\mathcal{H}$. It is shown that every LI mixed state ensemble has a pure state decomposition whose optimal POVM is itself a pure state decomposition of the optimal POVM of the mixed state ensemble. Section \eqref{Solution} employs the results developed in section \eqref{Mixed} to give an efficient and simple numerical technique to obtain the optimal POVM for the MED of any LI ensemble. \section{The Optimum Conditions} \label{OPTC} Alice picks a state $\rho_i$ with probability $p_i$ from the ensemble $\widetilde{P} = \{ p_i, \rho_i \}_{i=1}^{m}$ and hands it to Bob for MED. The states $\rho_1, \rho_2, \cdots, \rho_m$ act on a Hilbert space $\mathcal{H}$ of dimension $n$ and $supp \left(p_1\rho_1 \right)$, $supp \left(p_2 \rho_2 \right)$, $\cdots$, $supp \left(p_m\rho_m \right)$ together span $\mathcal{H}$. Bob's task is the optimization problem given by equation \eqref{Pmax}. This optimization is over the space of $m$ element POVMs, i.e., the space given by $\left\{ \left\{ \Pi_i \right\}_{i=1}^{m}, \text{ where $\Pi_i \geq 0, \; \forall \; 1 \leq i \leq m, \; \sum_{i=1}^{m} \Pi_i = \mathbb{1}$} \right\}$, where $\mathbb{1}$ is the identity operator on $\mathcal{H}$.
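A direct numerical transcription of this constraint set (NumPy; the tolerance and the example measurement are arbitrary choices) is immediate:

\begin{verbatim}
import numpy as np

def is_povm(Pi, tol=1e-9):
    """Check Pi_i >= 0 for all i and sum_i Pi_i = identity."""
    n = Pi[0].shape[0]
    psd = all(np.min(np.linalg.eigvalsh((P + P.conj().T) / 2)) >= -tol for P in Pi)
    complete = np.allclose(sum(Pi), np.eye(n), atol=tol)
    return psd and complete

# Illustrative two-element projective POVM on C^2.
Pi = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
print(is_povm(Pi))
\end{verbatim}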
To every constrained optimization problem (called the primal problem) there is a dual problem, which provides a lower bound on the objective function being optimized in the primal problem if the primal problem is a constrained minimization, or an upper bound if the primal problem is a constrained maximization. Under certain conditions these bounds are tight, implying that one can obtain the solution for the primal problem from its dual. We then say that there is no duality gap between both problems \cite{Boyd}. For MED there is no duality gap and the dual problem can be solved to obtain the optimal POVM. This dual problem is given as follows \cite{Yuen}: \begin{equation} \label{dual} \text{Min} \; \text{Tr}(Z) \; \ni \; Z-p_i \rho_i \geq 0, \; \forall\; 1 \leq i \leq m. \end{equation} Also the optimal $m$-element POVM will satisfy the complementary slackness condition: \begin{equation} \label{cslack} (Z- p_i \rho_i)\Pi_i= \Pi_i(Z-p_i \rho_i)=0, \, \forall \, 1\leq i \leq m. \end{equation} Now summing over $i$ in equation \eqref{cslack} and using the fact that $ \sum_{i=1}^{m} \Pi_i = \mathbb{1}$ we get: \begin{equation} \label{Z} Z= \sum_{i=1}^{m} p_i \rho_i \Pi_i = \sum_{i=1}^{m} \Pi_i p_i \rho_i. \end{equation} Using equation \eqref{Z} in equation \eqref{cslack}, we get: \begin{eqnarray} & \Pi_j ( Z - p_i \rho_i) \Pi_i = \Pi_j ( Z - p_j \rho_j) \Pi_i, &\; \forall \; 1 \leq i,j \leq m \notag \\ \label{St} \Rightarrow & \Pi_j ( p_j \rho_j - p_i \rho_i ) \Pi_i =0, & \; \forall \; 1 \leq i,j \leq m \end{eqnarray} Equation \eqref{St} was derived separately by Holevo \cite{Hol}, without using the dual optimization problem \eqref{dual}. Equation \eqref{cslack} and equation \eqref{St} are equivalent to each other. These are necessary but not sufficient conditions. Of the set of $m$ element POVMs which satisfy equation \eqref{cslack} (or equivalently equation \eqref{St}) only a proper subset is optimal. This optimal POVM will satisfy the global maximum conditions given below: \begin{eqnarray} & Z \geq p_i \rho_i \notag &, \; \forall \; 1 \leq i \leq m,\\ \label{Glb} \Longrightarrow & \sum_{k=1}^{m} p_k \rho_k \Pi_k - p_i \rho_i \geq 0,& \; \forall \; 1 \leq i \leq m. \end{eqnarray} Thus the necessary and sufficient conditions for the $m$-element POVM(s) to maximize $P_s$ are given by equation \eqref{cslack} (or equivalently, equation \eqref{St}) and condition \eqref{Glb}. \section{Linearly Independent States} \label{Mixed} Let $\mathcal{H}$ be an $n$ dimensional Hilbert space. Consider a set of $m$ $(\leq n)$ LI states in $\mathcal{H}$, denoted by $P =\{ \rho_i \}_{i=1}^{m}$, where $\rho_i \in \mathcal{B}(\mathcal{H}), \; \rho_i \geq 0, \; Tr(\rho_i)=1,\; \forall \; 1 \leq i \leq m$. Let $r_i \equiv \text{rank}(\rho_i), \;\forall \; 1 \leq i \leq m$. Also let $\sum_{i=1}^m r_i = n$. This implies that $\mathcal{H}$ is fully spanned by the supports of $\rho_1, \rho_2, \cdots, \rho_m$ and that the supports of $\rho_1, \rho_2, \cdots, \rho_m$ are linearly independent. Let elements within $P$ be indexed in descending order of $r_i$, i.e., $r_{i}\geq r_{i+1}, \; \forall \; 1 \leq i \leq m-1$.
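Before specializing further, we note that the optimality conditions of section \ref{OPTC} are easy to test numerically for any candidate POVM; the sketch below (Python/NumPy, with a helper name of our own choosing) checks the complementary slackness condition \eqref{cslack} and the global condition \eqref{Glb}, and is useful for verifying the constructions that follow.
\begin{verbatim}
import numpy as np

def check_optimality(p, rho, Pi, tol=1e-9):
    """Test conditions (cslack) and (Glb) for a candidate m-element POVM {Pi_i}."""
    m = len(p)
    Z = sum(p[i] * rho[i] @ Pi[i] for i in range(m))
    # Complementary slackness: (Z - p_i rho_i) Pi_i = 0 for every i.
    slack = all(np.linalg.norm((Z - p[i] * rho[i]) @ Pi[i]) < tol
                for i in range(m))
    # Global condition: Z - p_i rho_i >= 0.  At a solution Z is hermitian,
    # so we symmetrize only to guard against round-off before diagonalizing.
    Zh = (Z + Z.conj().T) / 2
    glob = all(np.linalg.eigvalsh(Zh - p[i] * rho[i]).min() > -tol
               for i in range(m))
    return slack and glob
\end{verbatim}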
Consider $T \in \mathcal{B}(\mathcal{H})$ to be non-singular; construct an ensemble $\widetilde{P}'=\{ p_i', \rho_i'\}_{i=1}^{m}$ by a congruence transformation on elements of $P$ by $T$ in the following manner: \begin{subequations} \begin{equation} \label{cojens1} \rho'_i \equiv \dfrac{ T \rho_i T^{\dag} }{ Tr(T \rho_i T^{\dag})}, \end{equation} \begin{equation} \label{cojens2} p'_i \equiv \frac{Tr( T \rho_i T^{\dag}) }{ \sum_{\substack{j=1}}^{m} Tr( T \rho_j T^{\dag}) }. \end{equation} \end{subequations} Note that (i) $\widetilde{P}'=\{p'_i >0, \, \rho'_i \}_{i=1}^{m}$ is an ensemble of $m$ linearly independent states and (ii) rank$(\rho'_i)=r_i, \; \forall \; 1 \leq i \leq m$. Let's denote the transformations in equations \eqref{cojens1} and \eqref{cojens2} concisely by: $\widetilde{P}'= T P T^{\dag}$. Using this define the following set: \begin{equation} \label{ens1} \mathcal{E}(r_1,r_2,\cdots, r_m) \equiv \; \{ T P T^{\dag} \; | \; T \in \mathcal{B}(\mathcal{H}), \; det(T)\neq 0\} \end{equation} $\mathcal{E}(r_1,r_2,\cdots, r_m)$ is the set of LI ensembles where the $i$-th state has rank $r_i$. This is a $2n^2 - \sum_{i=1}^{m} r_{i}^{2} -1$ real parameter space. If $r_{k}=r_{k+1}=\cdots = r_{k+s-1}$, then a single ensemble can be represented by $s!$ elements in $\mathcal{E}(r_1,r_2,\cdots,r_m)$, all of which are equivalent to each other up to a permutation among the $k \text{-th}, (k+1)\text{-th}, \cdots, (k+s-1)\text{-th}$ states\footnote{Allowing for this multiplicity is \emph{just} a matter of convenience, i.e., one could adopt more criteria to do away with such multiplicities but that complicates the description of $\mathcal{E}(r_1,r_2,\cdots,r_m)$ and, for that purpose, such a description is avoided.}. Let us now list what is known so far about the optimal POVMs for MED of LI ensembles. For the case of pure state ensembles (LIP), i.e., when $r_i=1,$ $\forall \; i = 1,2,\cdots,m$\footnote{Note that in this case $m=n$.}, it is already well known that the optimal POVM is given by a unique rank-one projective measurement \cite{Ken}. There is a corresponding result for general LI ensembles and that was explicitly proved in \cite{Yohina}, although it could also be inferred from \cite{Mas}. Therein, it was shown that the optimal POVM for MED of a LI ensemble $\widetilde{P}$ of $m$ states with ranks $r_1, r_2, \cdots, r_m$ respectively, i.e., such that $\widetilde{P} \in \mathcal{E}(r_1,r_2,\cdots,r_m)$, is given by a POVM $\{ \Pi_i \}_{i=1}^{m}$ with the relation $rank(\Pi_i)=r_i, \; \forall \; 1 \leq i \leq m$. Note that the linear independence of the states $\rho_1, \rho_2, \cdots, \rho_m$ is contained in the relation $\sum_{i=1}^{m}r_i = \; dim\mathcal{H} \; (=n)$, and this relation, along with the aforementioned condition that $rank\left( \Pi_i \right)=r_i$, implies that $\{ \Pi_i \}_{i=1}^{m}$ \emph{has to be} a projective measurement, i.e., $\Pi_i \Pi_j = \delta_{ij} \Pi_i, \; \forall \; 1 \leq i,j \leq m$. The relation $rank\left( \Pi_i \right) = r_i$ also ensures that the optimal POVM is unique. To establish this consider a case where we know that two $m$-element POVMs are optimal for the MED of some LI ensemble in $\mathcal{E}(r_1,r_2,\cdots,r_m)$; let these optimal POVMs (which are projective measurements) be denoted by $\{ \Pi_i^{ \left( 1 \right)} \}_{i=1}^{m}$ and $\{ \Pi_i^{ \left( 2 \right)} \}_{i=1}^{m}$. The rank condition tells us that $rank \left( \Pi_i^{ \left( 1 \right) } \right)= rank \left( \Pi_i^{ \left( 2 \right) } \right) = r_i$, $\forall \; 1 \leq i \leq m$.
The only way that a convex combination of both POVMs of the form $\{ p \Pi_i^{ \left( 1 \right) } + (1-p) \Pi_i^{ \left( 2 \right) } \}_{i=1}^{m}$ (where $ 0 < p < 1$)\footnote{We need to ensure that the POVM, which is a convex combination, is also an $m$ element POVM. That is why convex combinations are only taken in this form.} also satisfies the rank condition (that $rank \left( p \Pi_i^{ \left( 1 \right) } + (1-p) \Pi_i^{ \left( 2 \right) } \right) = r_i$, $\forall \; 1 \leq i \leq m$) is if $ \Pi_i^{ \left( 1 \right) } = \Pi_i^{ \left( 2 \right) }$, $\forall \; 1 \leq i \leq m$. Another way of saying the same thing is that for $0 < p <1$, $\{ p \Pi_i^{\left( 1 \right) } + (1-p) \Pi_i^{ \left( 2 \right) } \}_{i=1}^{m}$ is a projective measurement iff $ \Pi_i^{ \left( 1 \right) } = \Pi_i^{ \left( 2 \right) }$, $\forall \; 1 \leq i \leq m$. This implies that for MED of any LI ensemble, the optimal POVM is unique. We now define a set, which we denote by $\mathcal{P}(r_1,r_2,\cdots,r_m)$. An element $\{ \Pi_i \}_{i=1}^{m} \in \mathcal{P}(r_1,r_2,\cdots,r_m)$ has the properties: (i) $\sum_{i=1}^{m} \Pi_i = \mathbb{1}$ (ii) $Rank(\Pi_i)=r_i,\; \forall \; 1 \leq i \leq m$ (iii) $\Pi_i \Pi_j = \delta_{ij} \Pi_i$. As noted before, (i) and (ii), along with the relation $\sum_{i=1}^{m} r_i = dim\mathcal{H}$, imply (iii). Thus $\mathcal{P}\left(r_1,r_2,\cdots,r_m \right)$ is a subset of the set of projective measurements on $\mathcal{H}$. $\mathcal{P}(r_1,r_2,\cdots,r_m)$ is an $n^2 - \sum_{i=1}^{m} r_i^{2}$ real parameter set. The uniqueness of the optimal POVM for MED of an ensemble of LI states implies that one can unambiguously define ``the optimal POVM map'' from $\mathcal{E}(r_1,r_2,\cdots,r_m)$ to $\mathcal{P}(r_1,r_2,\cdots,r_m)$. Let the optimal POVM map be denoted by $\mathscr{P}$. Then $\mathscr{P}: \mathcal{E}(r_1,r_2,\cdots,r_m) \longrightarrow \mathcal{P}(r_1,r_2,\cdots,r_m)$ is such that $\mathscr{P}(\widetilde{P})$ is the unique optimal POVM in $\mathcal{P}(r_1,r_2,\cdots,r_m)$ for the MED of any ensemble $\widetilde{P} \in \mathcal{E}(r_1,r_2,\cdots,r_m)$. In \cite{Mas} it was shown that the optimal POVM for MED of a LI ensemble $\widetilde{P} = \{p_i , \rho_i\}_{i=1}^{m} \in \mathcal{E}(r_1,r_2,\cdots,r_m)$, i.e., $\mathscr{P} \left( \widetilde{P} \right) $, is the PGM of another ensemble of states $\widetilde{Q} = \{q_i, \sigma_i \}_{i=1}^{m}$, where (1) $q_i \geq 0$, $\sum_{i=1}^{m}q_i=1$ and (2) $supp \left( \sigma_i \right) \subseteq supp \left( \rho_i \right)$, for all $ 1 \leq i \leq m$. If we denote $\mathscr{P} \left( \widetilde{P} \right) $ as $ \{ \Pi_i \}_{i=1}^m$, then $\Pi_i$ has the form\footnote{Note that $\left( \sum_{j=1}^{m} q_j \sigma_j \right)^{-\frac{1}{2}}$ is well defined because $\sum_{j=1}^{m} q_j \sigma_j > 0$. This is a consequence of the fact that the supports of $\rho_1, \rho_2, \cdots, \rho_m$ span $\mathcal{H}$.}: \begin{equation} \label{formPI} \Pi_i = \left( \sum_{j=1}^{m} q_j \sigma_j \right)^{-\frac{1}{2}} \; q_i \sigma_i \; \left( \sum_{k=1}^{m} q_k \sigma_k \right)^{-\frac{1}{2}}.
\end{equation} In the LIP case, i.e., when $r_i =1, \; \forall \; 1 \leq i \leq m$, we know the following: \begin{enumerate} \item{ $q_i > 0, \; \forall \; 1 \leq i \leq m$} \item{$supp(\rho_i) = supp(\sigma_i), \; \forall \; 1 \leq i \leq m$\footnote{Since in the LIP case the $\rho_i$ are all rank one, this means $\rho_i = \sigma_i , \; \forall \; 1 \leq i \leq m$}} \item {The correspondence $\widetilde{P} \rightarrow \widetilde{Q}$ is a map, and it is an invertible map. An analytic expression for the inverse map, i.e., the map $\widetilde{Q} \rightarrow \widetilde{P}$, was obtained in \cite{Bela, Mas, Carlos}.} \end{enumerate} We are motivated to ask whether these results can be extended to the case where $r_i \geq 1$. We already noted that $rank \left(\Pi_i \right)=rank \left( \rho_i \right)$, $ \forall \; 1 \leq i \leq m$. This implies that (1) $q_i >0$\footnote{Had $q_i=0$ for any $i=1,2,\cdots, m$, $ \Pi_i =0$ (see equation \eqref{formPI}). We know that this isn't true because $rank(\Pi_i)=r_i \neq 0$.} and (2) $supp \left( \sigma_i \right) $ $ =$ $ supp \left( \rho_i \right)$\footnote{Since $\sigma_i$ and $\Pi_i$ are related through a congruence transformation $\forall \; 1 \leq i \leq m$ (see equation \eqref{formPI}) it follows that $rank(\sigma_i) = rank \left( \Pi_i \right) = r_i$. Since $supp(\sigma_i)$ is a subspace of $supp(\rho_i)$ and since $rank(\rho_i)= r_i = rank(\sigma_i)$ it follows that $supp(\sigma_i) = supp(\rho_i)$, $\forall \; 1 \leq i \leq m$.}. In this paper we establish that (3) holds for general LI ensembles too, i.e., we first establish that the correspondence $\widetilde{P} \rightarrow \widetilde{Q}$ is a map, then we prove that this is an invertible map and we give an analytic expression for the inverse of this map. Later on we will use the existence of this map to derive a technique to obtain the optimal POVM for a LI ensemble, in the same way as done for LI pure state ensembles in \cite{Singal}. For this purpose define the PGM map from $\mathcal{E}(r_1,r_2,\cdots,r_m)$ to $\mathcal{P}(r_1,r_2,\cdots,r_m)$ such that $PGM \left( \widetilde{Q} \right)$ is the pretty good measurement associated with the ensemble $\widetilde{Q} = \{q_i, \sigma_i\}_{i=1}^{m}$; it is defined by: \begin{equation} \label{PGM} PGM \left( \widetilde{Q} \right) = \{ \left(\sum_{j=1}^{m} q_j \sigma_j \right)^{-\frac{1}{2}} q_i \sigma_i \left(\sum_{k=1}^{m} q_k \sigma_k \right)^{-\frac{1}{2}} \}_{i=1}^{m}. \end{equation} \subsection{The $\widetilde{P} \rightarrow \widetilde{Q}$ Correspondence:} \label{PQcorr} Suppose that $\mathscr{P} \left( \widetilde{P} \right) = \{\Pi_i \}_{i=1}^{m}$, where $\{\Pi_i \}_{i=1}^{m} \in \mathcal{P}(r_1,r_2,\cdots,r_m)$. Then $ \Pi_i \Pi_j = \delta_{ij} \, \Pi_i, \; \forall \; 1 \leq \, i,j \, \leq m$. Consider a spectral decomposition of each $\Pi_i$ into pure states: \begin{equation} \label{Pidecomposition} \Pi_i = \sum_{j=1}^{r_i} \ketbra{w_{ij}}{w_{ij}}, \end{equation} where $\braket{w_{i_1j_1}}{w_{i_2j_2}}=\delta_{i_1i_2}\delta_{j_1j_2}$ for $ 1 \leq i_1, i_2 \leq m$ and $1 \leq j_1 \leq r_{i_1}$, $1 \leq j_2 \leq r_{i_2}$. For each $\Pi_i$ there is a $U\left(r_i\right)$ degree of freedom in choosing this spectral decomposition. For now we assume that $\{ \ketbra{w_{ij}}{w_{ij}} \}_{j=1}^{r_i}$ is any spectral decomposition of $\Pi_i$ in equation \eqref{Pidecomposition}. Later on a specific choice of the set $\{ \ket{w_{ij}} \}_{i=1, j=1}^{i=m,j=r_i}$ will be made.
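A direct numerical transcription of equation \eqref{PGM} reads as follows (a Python/NumPy sketch; it assumes, as in the text, that $\sum_j q_j \sigma_j$ is non-singular, and the function names are ours). For an LI ensemble the returned operators should form the projective measurement with $rank(\Pi_i)=r_i$ described above, which is easily verified numerically.
\begin{verbatim}
import numpy as np

def inv_sqrt(S):
    """Inverse square root of a hermitian positive definite matrix."""
    w, v = np.linalg.eigh(S)
    return v @ np.diag(1.0 / np.sqrt(w)) @ v.conj().T

def pgm(q, sigma):
    """Pretty good measurement of the ensemble {q_i, sigma_i}, equation (PGM)."""
    S = sum(q[i] * sigma[i] for i in range(len(q)))
    T = inv_sqrt(S)
    return [T @ (q[i] * sigma[i]) @ T for i in range(len(q))]
\end{verbatim}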
Each of the unnormalized density matrices $p_i \rho_i$ can be decomposed into a sum of $r_i$ pure states in the following way: \begin{equation} \label{rhodecomposition} p_i \rho_i = \sum_{j_i=1}^{r_i} \ketbrat{\psi}{ij_i}{\psi}{ij_i}. \end{equation} Here the vectors $\tket{\psi}{ij_i}$ are unnormalized, and the set $\{ \tket{\psi}{ij_i} \}_{j_i=1}^{r_i}$ is LI. Again there is a $U\left( r_i \right)$ degree of freedom in the choice of decomposition of the unnormalized state $p_i \rho_i$ into the vectors $\tket{\psi}{ij_i}$. We assume that some choice of such a decomposition has been made in equation \eqref{rhodecomposition} without any particular bias. Let the Gram matrix corresponding to the set $\{ \tket{\psi}{ij_i} \; | \; 1 \leq i \leq m, \; 1 \leq j_i \leq r_i\}$ be denoted by $G$, whose matrix elements are given by the following equation: \begin{equation} \label{Gram2} G^{(l \; i)}_{k_l \; j_i}= \tbraket{\psi}{l k_l}{\psi}{i j_i} \end{equation} Some explanation on the indices is in order. All the $n \times n$ matrices that we deal with in this paper are divided into blocks of sizes $r_1, r_2, \cdots, r_m$. The matrix element of such an $n \times n$ matrix is given by two tiers of row indices and two tiers of column indices: the inter-block row (or column) index and the intra-block row (or column) index. The former are represented by the superscript $(l \; i)$, where $l$ represents the row block and $i$ represents the column block in the $n \times n$ matrix, whereas the latter are represented by subscripts $k_l \; j_i$, where $k_l$ represents the $k$-th row and $j_i$ the $j$-th column of the $(l \; i)$-th matrix block of the $n \times n$ matrix. This implies that $1 \leq k_l \leq r_l$ and $1 \leq j_i \leq r_i$. At times the subscripts $l$ in $k_l$ and $i$ in $j_i$ are omitted. In such situations it is clear which block the intrablock indices $k$ and $j$ are for. This notation, while it may at first seem cumbersome, will come in handy later. For each $i=1,2,\cdots, m$, the set $\{ \tket{\psi}{ij_i} \}_{j_i=1}^{r_i}$ is LI. Since $supp \left(p_1 \rho_1 \right),$ $supp \left(p_2 \rho_2 \right),$ $\cdots$, $supp \left(p_m \rho_m \right)$ are LI, the set $\bigcup_{i=1}^{m} \{ \tket{\psi}{ij_i} \}_{j_i=1}^{r_i} $ is LI as well. This implies that $G>0$. Corresponding to the set $\bigcup_{i=1}^{m} \{ \tket{\psi}{ij_i} \}_{j_i=1}^{r_i} $ there is another set of vectors given by: $\{ \tket{u}{ij_i} \}_{i=1, j_i = 1}^{i=m, j_i = r_i}$ with the property: \begin{equation} \label{psiandu} \tbraket{\psi}{i_1j_1}{u}{i_2j_2} = \delta_{i_1 i_2} \delta_{j_1 j_2}, \; \forall \; 1 \leq i_1,i_2 \leq m \text{ and } 1 \leq j_1 \leq r_{i_1}, \; 1 \leq j_2 \leq r_{i_2}. \end{equation} The vectors $\tket{u}{ij_i}$ can be expanded in the basis $\{ \tket{\psi}{ij} \}_{i=1,j_i=1}^{i=m,j_i=r_i}$ in the following way: \begin{equation} \label{u} \tket{u}{ij_i} = \sum_{l=1}^{m}\sum_{k_l=1}^{r_l} \left( G^{-1} \right)^{(l \; i)}_{k_l \; j_i} \tket{\psi}{lk_l}, \; \forall \; 1 \leq i \leq m, \; 1 \leq j_i \leq r_i. \end{equation} From equation \eqref{u} it can be seen that the set $\{ \tket{u}{ij_i} \}_{i=1, \; j_i=1}^{i=m, \; j_i=r_i}$ is a LI set of $n$ vectors. Hence it forms a basis for $\mathcal{H}$. This is also corroborated by the fact that the Gram matrix of the set $\{ \tket{u}{ij_i} \}_{i=1, \; j_i=1}^{i=m, \; j_i=r_i}$ is $G^{-1}$.
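The relations \eqref{Gram2}--\eqref{u} are conveniently handled numerically by stacking the vectors $\tket{\psi}{ij_i}$ as the columns of a single $n \times n$ matrix, one block of columns per state. The following sketch (Python/NumPy, with a randomly generated and purely illustrative LI set) verifies the biorthogonality relation \eqref{psiandu} and the fact that the Gram matrix of the dual set is $G^{-1}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 4                                   # n = dim H = sum_i r_i
Psi = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # columns |psi_{i j_i}>

G = Psi.conj().T @ Psi                  # Gram matrix, equation (Gram2)
U = Psi @ np.linalg.inv(G)              # dual vectors |u_{i j_i}>, equation (u)

print(np.allclose(Psi.conj().T @ U, np.eye(n)))       # biorthogonality (psiandu)
print(np.allclose(U.conj().T @ U, np.linalg.inv(G)))  # Gram matrix of duals = G^{-1}
\end{verbatim}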
Thus the orthonormal basis vectors $\{ \ket{ w_{ij_i} } \}_{i=1, \; j_i=1}^{i=m, \; j_i=r_i}$, given by equation \eqref{Pidecomposition}, can be expanded in terms of the $\tket{u}{ij_i}$ vectors: \begin{equation} \label{wexpandu} \ket{w_{ij_i}} = \sum_{l=1}^{m}\sum_{k_l=1}^{r_l} \left( G^\frac{1}{2} W \right)^{(l \; i)}_{k_l \; j_i} \tket{u}{lk_l}, \; \forall \; 1 \leq i \leq m, \; 1 \leq j_i \leq r_i \end{equation} where $W$ is an $n \times n$ unitary matrix. There is a one-to-one correspondence between the unitary matrix $W$ and the choice of spectral decomposition in equation \eqref{Pidecomposition}, i.e., fixing the spectral decomposition of the projectors $\Pi_i$ in equation \eqref{Pidecomposition} fixes the unitary $W$ uniquely. This becomes clearer upon substituting equation \eqref{wexpandu} in equation \eqref{Pidecomposition}, which gives: \begin{equation} \label{mixedP} \Pi_i = \sum_{l_1, l_2 = 1}^{m} \sum_{k_1=1}^{r_{l_1}} \sum_{k_2=1}^{r_{l_2}} \left( \sum_{j=1}^{r_i} \left( G^{\frac{1}{2}}W \right)^{(l_1 \; i)}_{k_1 \; j} \left( W^\dag G^{\frac{1}{2}} \right)^{(i \; l_2)}_{j \; k_2} \right) \ketbrat{u}{l_1 k_1}{u}{l_2 k_2}. \end{equation} Upon substituting the expression for $\Pi_i$ and $\Pi_j$ from equation \eqref{mixedP} into equation \eqref{St} we get the following: \begin{align} \label{St3} \sum_{\substack{ 1 \leq l_1, \; l_2 \leq m, \\ 1\leq k_1 \leq r_{l_1}, \\ 1\leq k_2 \leq r_{l_2} }} \xi^{(l_1 \; l_2)}_{k_1 \; k_2} \ketbrat{u}{l_1 k_1}{u}{l_2 k_2} \; = \; 0 \end{align} where $ \xi^{(l_1 l_2)}_{k_1 k_2}$ is given by: \begin{align} \label{xi} & \xi^{(l_1 \; l_2)}_{k_1 \; k_2} = \notag \\ &\sum_{s=1}^{r_i}\sum_{t=1}^{r_j} \left(G^{\frac{1}{2}}W\right)^{\left( l_1 \; i \right)}_{k_1 \; s} \left( \sum_{h=1}^{r_i} \left( W^\dag G^{\frac{1}{2}} \right)^{\left(i \; i \right)}_{s \; h} \left( G^{\frac{1}{2}}W \right)^{\left(i \; j \right)}_{h \; t} - \sum_{g=1}^{r_j} \left( W^\dag G^{\frac{1}{2}} \right)^{\left(i \; j \right)}_{s \; g} \left( G^{\frac{1}{2}}W \right)^{\left(j \; j \right)}_{g \; t} \right) \left( W^\dag G^{\frac{1}{2}}\right)^{\left(j \; l_2 \right)}_{t \; k_2} \end{align} Equation \eqref{St3} is the stationarity condition \eqref{St}, written for each pair $1 \leq i, j \leq m$. The expression for $\xi^{\left( l_1 \; l_2 \right)}_{k_1 \; k_2}$ in equation \eqref{xi} is rather complicated. It is desirable to make equation \eqref{St3} more transparent. With this aim in mind we partition the matrix $G^{\frac{1}{2}} W$ into the aforementioned blocks and introduce a notation for these blocks: \begin{enumerate} \item \begin{equation} \label{part1} G^{\frac{1}{2}} W = \begin{pmatrix} X^{(11)} & X^{(12)} & \cdots & X^{(1m)}\\ X^{(21)} & X^{(22)} & \cdots & X^{(2m)}\\ \vdots & \vdots & \ddots & \vdots\\ X^{(m1)} & X^{(m2)} & \cdots & X^{(mm)} \end{pmatrix} \end{equation} where $X^{(l_1l_2)}$ is the $\left( l_1 l_2 \right) $-th block of dimension $r_{l_1} \times r_{l_2}$ in $G^\frac{1}{2}W$. The matrix elements of $X^{(l_1 l_2)}$ are given by $ \left( X^{(l_1 l_2)} \right)_{k_1 \; k_2} = \left( G^{\frac{1}{2} } W \right)^{\left( l_1 \; l_2 \right)}_{k_1 \; k_2}, \; \forall \, 1 \leq l_1, l_2 \leq m,$ $ \forall \, 1 \leq k_1 \leq r_{l_1}, \; 1 \leq k_2 \leq r_{l_2}$. \item Define: \begin{align} \label{column} & C^{(i)} \equiv \begin{pmatrix} X^{(1i)}\\ X^{(2i)}\\ \vdots \\ X^{(mi)} \end{pmatrix}, \; 1 \leq i \leq m \end{align} Thus $C^{(i)}$ is the $i$-th block column of $G^{\frac{1}{2}}W$.
\item Similarly, let's partition $W^\dag G^{-\frac{1}{2}}$ into blocks: \begin{equation} \label{part2} W^{\dag}G^{-\frac{1}{2}} \, = \begin{pmatrix} Y^{(11)} & Y^{(12)} & \cdots & Y^{(1m)}\\ Y^{(21)} & Y^{(22)} & \cdots & Y^{(2m)}\\ \vdots & \vdots & \ddots & \vdots\\ Y^{(m1)} & Y^{(m2)} & \cdots & Y^{(mm)} \end{pmatrix} \end{equation} where $\left( Y^{(l_1 l_2)}\right)_{k_1 k_2} = \left( W^{\dag} G^{-\frac{1}{2}} \right)^{\left( l_1 \; l_2 \right)}_{k_1 \; k_2}, \; \forall \, 1 \leq l_1, l_2 \leq m, \; 1 \leq k_1 \leq r_{l_1}$ and $1 \leq k_2 \leq r_{l_2}$. \item Define: \begin{align} \label{row} & R^{(i)} \equiv \begin{pmatrix} Y^{(i1)} & Y^{(i2)} & \cdots & Y^{(im)} \end{pmatrix}, \; 1 \leq i \leq m \end{align} Thus $R^{(i)}$ is the $i$-th block-row of $W^{\dag} G^{-\frac{1}{2}}$. \end{enumerate} Substituting equations \eqref{part1} and \eqref{column} in equation \eqref{St3} we obtain condition \eqref{St} in a more transparent form: \begin{equation} \label{St5} C^{(i)} \left( {X^{(ii)}}^{\dag} X^{(ij)} - {X^{(ji)}}^{\dag} X^{(jj)} \right) {C^{(j)}}^{\dag}\, = \,0, \quad \forall \, 1 \leq i,j \leq m \end{equation} where $ {X^{(ji)}}^{\dag}$ is the $(ij)$-th block of $W^\dag G^{\frac{1}{2}}$. From the definitions in equations \eqref{column} and \eqref{row}, and the fact that $W^{\dag} G^{-\frac{1}{2}} \, G^{\frac{1}{2}} W = \mathbb{1}$, it follows that $R^{(i)} C^{(i)} = \mathbb{1}_{r_i}, \quad \forall \, 1 \leq i \leq m$ where $\mathbb{1}_{r_i}$ is the identity matrix of dimension $r_i$. Left and right multiplying the LHS and RHS of equation \eqref{St5} by $R^{(i)}$ and ${R^{(j)}}^\dag$ respectively gives: \begin{eqnarray} & R^{(i)} \, C^{(i)} \left( {X^{(ii)}}^{\dag} X^{(ij)} - {X^{(ji)}}^{\dag} X^{(jj)} \right) {C^{(j)}}^{\dag} \, {R^{(j)}}^{\dag} & = \,0 \notag \\ \label{St5'} \Longrightarrow & {X^{(ii)}}^{\dag} X^{(ij)} - {X^{(ji)}}^{\dag} X^{(jj)} & =0, \; \forall \; 1 \leq i,j \leq m. \end{eqnarray} Let $U_D$ be a block diagonal unitary matrix given in the following equation: \begin{equation} \label{UD} U_D = \begin{pmatrix} U^{(1)} & 0 & \cdots & 0 \\ 0 & U^{(2)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & U^{(m)} \end{pmatrix} \end{equation} where $U^{(i)}$ is an $r_i \times r_i$ unitary matrix for $i=1,2,\cdots,m$. We remarked earlier that there is a $U(r_i)$ degree of freedom in the choice of spectral decomposition of $\Pi_i$ in equation \eqref{Pidecomposition}. What that means is that $\Pi_i$ is invariant under the transformation: $\ket{w_{ij}} \rightarrow \ket{w_{ij}'} = \sum_{k=1}^{r_i} U^{(i)}_{k j} \ket{w_{ik}} =\sum_{l=1}^{m}\sum_{k=1}^{r_l} \left(U_D \right)^{\left( l \; i \right)}_{k \; j} \ket{w_{lk}} $, where $1 \leq i \leq m, 1 \leq j \leq r_i$. Expanding the vectors $\ket{w'_{ij}}$ in the basis $\{ \tket{u}{ij_i} \}_{i=1, j_i=1}^{i=m,j_i=r_i}$ gives: \begin{equation} \label{wij'} \ket{w_{ij}'} = \sum_{l=1}^{m}\sum_{k=1}^{r_l} \left( G^{\frac{1}{2}}WU_D \right)^{(l \; i)}_{k \; j} \tket{u}{lk}. \end{equation} It is readily seen that this will leave $\Pi_i$ invariant in equation \eqref{mixedP}. Here we make a specific choice of $U_D$, namely one for which the diagonal blocks of $G^{\frac{1}{2}}WU_D$ are positive semidefinite, i.e., $X^{(ii)}U^{(i)} \geq 0, \; \forall \, 1 \leq i \leq m $\footnote{ Given some arbitrary choice of spectral decomposition for $ \Pi_1, \Pi_2, \cdots, \Pi_m $ and the fixed unitary $W$ that corresponds to these spectral decompositions, choose $ U^{(1)},U^{(2)},\cdots,U^{(m)}$ such that $X^{(ii)}U^{(i)} \geq 0, \quad \forall \, 1 \leq i \leq m $.
It is always possible to find some $U^{(i)}$ such that $X^{(ii)}U^{(i)} \geq 0$ using the singular value decomposition of $X^{(ii)}$. Moreover, once the non-singularity of the $X^{(ii)}$ matrices has been established (proved in theorem \ref{Xii}), the unitaries $ U^{(1)},U^{(2)},\cdots,U^{(m)}$ are unique for given spectral decompositions of the $\Pi_i$'s (and the associated $W$).}. From here onwards we assume that $U_D$ is absorbed within $W$, i.e., $WU_D \rightarrow W$, $X^{(ij)} U^{(j)} \rightarrow X^{(ij)}$ and $\ket{w'_{i j_i}} \longrightarrow \ket{w_{i j_i}}$. This establishes that for any given decomposition of the unnormalized states $p_i \rho_i$ into pure unnormalized states $\tket{\psi}{i j_i}$, as in equation \eqref{rhodecomposition}, there is a unique unitary $W$ such that (1) the ONB $\{ \ket{w_{i j_i}} \}_{i=1, \; j_i=1}^{i=m, \; j_i=r_i}$, defined by equation \eqref{wexpandu}, corresponds to the optimal POVM in equation \eqref{Pidecomposition} and (2) the matrix $G^\frac{1}{2}W$, which occurs in equation \eqref{wexpandu}, has positive semi-definite diagonal blocks\footnote{As mentioned in the footnote above, it is only when we prove that the $X^{(ii)}$'s are non-singular that it will be clear that there exists a unique $U^{(i)}$ such that $X^{(ii)}U^{(i)}>0$. And only then will it be clear that $W \longrightarrow W U_D$ is unique. As it stands now, the non-singularity of the $X^{(ii)}$'s still remains to be proved.} (i.e., $X^{\left( i i \right)} \geq 0, \; \forall \; 1 \leq i \leq m$). This point should be kept in mind since it will be crucial later. Thus equation \eqref{St5'} becomes: \begin{equation} \label{St6} X^{(ii)} X^{(ij)} - {X^{(ji)}}^{\dag} X^{(jj)} =0, \; \forall \; 1 \leq i,j \leq m \end{equation} Define $D$ as the block diagonal matrix containing the diagonal blocks of $G^{\frac{1}{2}}W$: \begin{equation} \label{DX} D \equiv \begin{pmatrix} {X^{(11)}} & 0 & \cdots & 0 \\ 0 & {X^{(22)}} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & {X^{(mm)}} \end{pmatrix} \end{equation} Left multiplying $G^{\frac{1}{2}}W$ by $D$ gives: \begin{equation} \label{DXG} D G^{\frac{1}{2}}W = \begin{pmatrix} (X^{(11)})^2 & {X^{(11)}} X^{(12)} & \cdots & {X^{(11)}} X^{(1m)} \\ {X^{(22)}} X^{(21)} & (X^{(22)})^2 & \cdots & {X^{(22)}} X^{(2m)}\\ \vdots & \vdots & \ddots & \vdots \\ {X^{(mm)}} X^{(m1)} & {X^{(mm)}} X^{(m2)} & \cdots & (X^{(mm)})^2 \end{pmatrix} \end{equation} Equation \eqref{St6} tells us that $D G^{\frac{1}{2}}W$ is a hermitian matrix. From that we get: \begin{equation} \label{Ainv} \left( DG^\frac{1}{2}W \right)^2 = \left( DG^\frac{1}{2}W \right) \; \left( W^\dag G^\frac{1}{2} D \right) = DGD \end{equation} Thus condition \eqref{St} implies that one needs to find a block diagonal matrix, $D=$ $ Diag ($ $X^{(11)},$ $X^{(22)}$ $,\cdots,$ $X^{(mm)} ) $ $\geq0$ where $X^{(ii)}$ is an $r_i \times r_i$ positive semidefinite matrix, so that the diagonal blocks of \emph{a} hermitian square root of the matrix $DGD$ are given by $\left( X^{(11)} \right)^2, \left( X^{(22)} \right)^2, \cdots,\left( X^{(mm)} \right)^2$ respectively. Here $G$ corresponds to the Gram matrix of the vectors $\{ \tket{\psi}{ij_i} \; | \; 1 \leq i \leq m, \; 1 \leq j_i \leq r_i\}$ where $p_i\rho_i = \sum_{j_i=1}^{r_i} \ketbrat{\psi}{ij_i}{\psi}{ij_i}$, for all $i=1,2,\cdots,m$. This is a rotationally invariant condition: it involves the ensemble only through the Gram matrix $G$, which is unchanged under $\widetilde{P} \rightarrow U \widetilde{P} U^\dag$, so these optimality conditions enable us to get the optimal POVM for any ensemble of the form $U \widetilde{P} U^\dag$, where $U \in U(n)$.
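In matrix terms, checking this condition for a candidate block diagonal $D$ therefore reduces to a computation involving $G$ alone. The sketch below (Python/NumPy; block sizes $r_1,\ldots,r_m$ are passed explicitly and the helper name is ours) forms $DGD$, takes its positive square root for concreteness, and compares its diagonal blocks with $\left(X^{(ii)}\right)^2$; a vanishing residual means the condition above is met.
\begin{verbatim}
import numpy as np

def stationarity_residual(X_blocks, G, r):
    """Compare the diagonal blocks of the positive square root of D G D
    with (X^{(ii)})^2, where D = Diag(X^{(11)},...,X^{(mm)}), r = (r_1,...,r_m)."""
    n = G.shape[0]
    D = np.zeros((n, n), dtype=complex)
    off = np.concatenate(([0], np.cumsum(r)))
    for i, X in enumerate(X_blocks):
        D[off[i]:off[i + 1], off[i]:off[i + 1]] = X
    M = D @ G @ D
    w, v = np.linalg.eigh(M)
    root = v @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T  # positive root
    res = 0.0
    for i, X in enumerate(X_blocks):
        s = slice(off[i], off[i + 1])
        res += np.linalg.norm(root[s, s] - X @ X)
    return res
\end{verbatim}
Searching over positive definite blocks $X^{(ii)}$ for a zero of this residual is essentially the computational problem that section \ref{Solution} takes up.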
Condition \eqref{St} is only one of the necessary and sufficient conditions that the optimal POVM needs to satisfy. The other condition is given by condition \eqref{Glb}. We will prove that both conditions can be subsumed in the statement that $D G^\frac{1}{2} W >0$. We can already see that condition \eqref{St} is contained in the statement $DG^\frac{1}{2}W>0$ because positivity of a matrix subsumes hermiticity as well. But to establish the positivity we first need to prove that $D G^\frac{1}{2} W$ is non-singular, for which we only need to establish that $D$ is non-singular (since $G^\frac{1}{2} >0$ and $W$ is unitary, $G^\frac{1}{2} W$ is non-singular). To prove that $D$ is non-singular is equivalent to proving that the $X^{(ii)}$ are non-singular, i.e., $X^{(ii)}$ is of rank $r_i$ for all $ 1 \leq i \leq m$. \begin{theorem} \label{Xii} $X^{(ii)}$ is of rank $r_i$, $\forall \; 1 \leq i \leq m$. \end{theorem} \begin{proof} Using equations \eqref{rhodecomposition}, \eqref{mixedP} and \eqref{part1}, the operator $p_i^2 \rho_i \Pi_i \rho_i$ can be expanded in the following operator basis, $\{ \ketbrat{\psi}{ij}{\psi}{lk}\; | \; 1 \leq i, \; l \leq m; \; 1 \leq j \leq r_i, \; 1 \leq k \leq r_l \}$. This gives: \begin{align} \label{ka} p_i^2 \rho_i \Pi_i \rho_i = \sum_{\substack{j, \; k=1}}^{r_i} \left( { \left( X^{\left( ii \right) }\right)}^2 \right)_{jk} \ketbrat{\psi}{ij}{\psi}{ik}. \end{align} Now we know that $rank \left( \Pi_i \right) = r_i, \; \forall \; 1 \leq i \leq m$. So $rank \left( p_i^2 \rho_i \Pi_i \rho_i \right), rank \left( p_i \rho_i \Pi_i \right) ( = rank\left( p_i \Pi_i \rho_i \right) ) $ $\leq r_i, \; \forall \; 1 \leq i \leq m$. We first establish that $rank \left( p_i \rho_i \Pi_i \right) = rank \left( p_i \Pi_i \rho_i \right) = r_i$. Suppose not, i.e., let $rank \left( p_k \rho_k \Pi_k \right) < r_k$ for some $k$. This implies that $\exists \; \ket{v} \in supp \left( \Pi_k \right)- \{0\}$ $\ni \; p_k \rho_k \Pi_k \ket{v} =0$. But since $\Pi_j \ket{v} = 0$ when $j \neq k$ \footnote{ $\ket{v} \in supp{ \left( \Pi_k \right)}$ and $\Pi_i \Pi_j = \Pi_i \delta_{ij}, \; \forall \; 1 \leq i,j \leq m$ imply that $\Pi_j \ket{v} = \Pi_j \Pi_k \ket{v} = 0$ for $j \neq k$.}, we get that $Z \ket{v} = \sum_{i=1}^{m} p_i \rho_i \Pi_i \ket{v} = 0$ using equation \eqref{Z}. This in turn implies that $Z$ is singular. But the optimality condition \eqref{Glb} demands that $Z > 0$. Hence the assumption that $rank \left( p_i \rho_i \Pi_i \right) < r_i$ isn't true for any $1 \leq i \leq m$. This implies that $rank \left( p_i \rho_i \Pi_i \right) = r_i$, $\forall \; 1 \leq i \leq m$. That $rank \left( p_i \rho_i \Pi_i \right)$ $ = rank \left( p_i \Pi_i \rho_i \right)$ $ = rank \left( \Pi_i \right)$ $ = rank \left( \rho_i \right)$ $ = r_i$ implies that any non-zero vector belonging to $supp \left(\Pi_i \right)$ has a non-zero component in $supp \left( \rho_i \right)$ and vice versa for all $1 \leq i \leq m$. This tells us that $\rho_i \ket{v} \neq 0 \Rightarrow p_i^2 \rho_i \Pi_i \rho_i \ket{v} \neq 0, \; \forall \; \ket{v} \in \mathcal{H}$, i.e., $supp \left( \rho_i \right) \subseteq supp \left( p_i^2 \rho_i \Pi_i \rho_i \right)$. We already know that $supp \left( p_i^2 \rho_i \Pi_i \rho_i \right) \subseteq supp\left( \rho_i \right)$. This implies $supp \left( p_i^2 \rho_i \Pi_i \rho_i \right) = supp \left( \rho_i \right)$ which, in turn, implies that $rank \left( p_i^2 \rho_i \Pi_i \rho_i \right) = r_i$, $\forall \; 1 \leq i \leq m$.
Using equation \eqref{ka}, this implies that $\left(X^{(ii)}\right)^2$ is of rank $r_i$ and that implies that $X^{(ii)}$ is of rank $r_i$ for all $1 \leq i \leq m$. \end{proof} Theorem \ref{Xii} implies that $D>0$. And this in turn implies that $D G^\frac{1}{2} W$ is non-singular. We now want to show that the necessary and sufficient optimality conditions given by equation \eqref{cslack} (or equivalently, \eqref{St}) and the inequality \eqref{Glb} are equivalent to the statement that $D G^\frac{1}{2} W >0 $, where $DG^\frac{1}{2}W$ is the matrix occurring in equation \eqref{DXG}. To show that we first need to simplify the optimal POVM conditions for linearly independent states. Let us define a new set of vectors $\{ \tket{\chi}{ij_i} \; | \; 1 \leq i \leq m, \; 1 \leq j_i \leq r_i \}$. \begin{equation} \label{chi} \tket{\chi}{ij_i} = \sum_{\substack{k_i=1}}^{r_i} X^{(ii)}_{k_ij_i} \tket{\psi}{ik_i}, \; \forall \; 1 \leq i \leq m, \; 1 \leq j_i \leq r_i. \end{equation} Since $rank \left( X^{(ii)} \right) = r_i$, $\left\{ \tket{\chi}{ij} \right\}_{j=1}^{r_i}$ is a basis for $supp(p_i\rho_i)$, and $\{ \tket{\chi}{ij_i} \}_{i=1, j_i=1}^{i=m, j_i = r_i}$ is a basis for $\mathcal{H}$. Now the inner product of any two vectors from the set $\{ \tket{\chi}{i j_i} \}_{i=1, \; j_i = 1}^{i=m, \; j_i=r_i}$ is given by: \begin{equation} \label{innerchi} \tbraket{\chi}{i_1 j_1}{\chi}{i_2 j_2} = \left( DGD \right)^{ \left( i_1 \; i_2 \right) }_{j_1 \; j_2}, \; \forall \; 1 \leq i_1, i_2 \leq m, \; 1 \leq j_1 \leq r_{i_1}, \; 1 \leq j_2 \leq r_{i_2} \end{equation} This shows us that the Gram matrix of the set of vectors $\{ \tket{\chi}{i j_i} \}_{i=1, \; j_i = 1}^{i=m, \; j_i=r_i}$ is the matrix $DGD$. Using this basis we simplify the necessary and sufficient conditions for the optimal POVM for MED of linearly independent states. \begin{theorem} \label{necsufcond} In the problem of MED of a LI ensemble $\{p_i, \rho_i \}_{i=1}^{m}$ if a POVM, represented as $\{\Pi_i\}_{i=1}^{m}$, satisfies the following two conditions then it is the optimal POVM for MED of the said ensemble: \begin{enumerate} \item $\Pi_i \left( p_i \rho_i - p_j \rho_j \right) \Pi_j = 0, \; \forall \, 1 \leq i, \; j \leq m.$ This is equivalently expressed as: $ \left( Z - p_i \rho_i \right) \Pi_i = 0, \; \forall \; 1 \leq i \leq m$, \label{ZGREATER0} \item $Z > 0$, \end{enumerate} where $Z$ is defined as in \eqref{Z}. \end{theorem} \begin{proof} We need to prove that once we find $\{ \Pi_i \}_{i=1}^m$ which satisfies conditions 1. and 2., i.e., such that condition \eqref{cslack} (or equivalently equation \eqref{St}) holds and $Z>0$, then $ Z = \sum_{i=1}^{m} p_i \rho_i \Pi_i \geq p_i \rho_i, \; \forall \; 1 \leq i \leq m$, i.e., condition \eqref{Glb} holds. Suppose that 1. has been satisfied. This implies that we found a block diagonal matrix $D\geq0$ (given by equation \eqref{DX}) such that the block-diagonal of \emph{a} hermitian square root of $DGD$ (equation \eqref{DXG}) is $D^2$. The $i$-th block in this block-diagonal matrix $D$ is a positive semi-definite $r_i \times r_i$ matrix denoted by $X^{(ii)}$. Additionally, theorem \ref{Xii} tells us that the non-singularity of $Z$ implies that $D$ has to be non-singular, i.e., $Det(Z) \neq 0 \Rightarrow Det(D) \neq 0 $. This is equivalent to the statement that $X^{(ii)}$ is of rank $r_i$, i.e., $X^{(ii)}>0, \; \forall \; 1 \leq i \leq m $. Using the $X^{(ii)}$ define a new set of vectors as given in equation \eqref{chi}.
Let's expand $Z$ and $p_i\rho_i$ in the operator basis $ \{ \ketbrat{\chi}{i_1j_1}{\chi}{i_2j_2} \; | \; 1 \leq i_1, i_2 \leq m, \; 1 \leq j_1 \leq r_{i_1} \text{ and } 1 \leq j_2 \leq r_{i_2} \}$: \begin{align} \label{Zchi} Z & = \sum_{i_1, \; i_2 =1}^{m} \sum_{j_1=1}^{r_{i_1}}\sum_{j_2=1}^{r_{i_2}} \left( W^\dag G^{-\frac{1}{2}} D^{-1} \right)^{(i_1 \; i_2)}_{j_1 \; j_2}\ketbrat{\chi}{i_1j_1}{\chi}{i_2j_2}\\ p_i\rho_i & = \sum_{\substack{k,l=1}}^{r_i} ({X^{(ii)}}^{-2})_{kl} \ketbrat{\chi}{ik}{\chi}{il} \end{align} Thus $Z> 0 \Leftrightarrow W^\dag G^{-\frac{1}{2}}D^{-1} > 0 \Leftrightarrow DG^\frac{1}{2}W >0$. Thus proving that $Z>0 \Rightarrow Z \geq p_i \rho_i, \; \forall \; 1 \leq i \leq m$ is equivalent to proving that $ W^\dag G^{-\frac{1}{2}} D^{-1} >0$ $ \Rightarrow W^\dag G^{-\frac{1}{2}} D^{-1} \geq $ $ \left( X^{(ii)} \right)^{-2}$, $\forall \; 1 \leq i \leq m$. Since $W^\dag G^{-\frac{1}{2}} D^{-1}$ $=$ $\left( D G^\frac{1}{2} W \right)^{-1}$, our objective is to prove that $ \left( D G^\frac{1}{2} W \right)^{-1} >0$ (where $D G^\frac{1}{2} W$ is given by equation \eqref{DXG}) implies that \footnotesize: \begin{eqnarray} \label{invdepict1} & \begin{pmatrix} (X^{(11)})^2 & \cdots & {X^{(11)}} X^{(1i)} & \cdots & {X^{(11)}} X^{(1m)} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ {X^{(ii)}} X^{(i1)} & \cdots & (X^{(ii)})^2 & \cdots & {X^{(ii)}} X^{(im)}\\ \vdots & \ddots & \vdots & \ddots & \vdots \\ {X^{(mm)}} X^{(m1)} & \cdots & {X^{(mm)}} X^{(mi)} & \cdots & (X^{(mm)})^2 \end{pmatrix}^{-1} \geq \begin{pmatrix} 0 & \cdots & 0 & \cdots & 0\\ \vdots & \ddots & \vdots & \ddots & \vdots \\ 0 & \cdots & (X^{(ii)})^{-2} & \cdots & 0\\ \vdots & \ddots & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & \cdots & 0\\ \end{pmatrix}, \; \forall \; 1 \leq i \leq m & \notag \\ & \Bigg( \text{Permute: } \left\{ \begin{array} {l l l} k \rightarrow & m+k-(i-1), & \forall \; 1 \leq k \leq i-1 \notag \\ k \rightarrow & k-(i-1), & \forall \; i \leq k \leq m \notag \end{array} \right. \Bigg) \notag \\ & \notag \\ \Longleftrightarrow & \begin{pmatrix} (X^{(ii)})^2 & X^{(ii)} X^{\scriptscriptstyle(i \: i+1)} & \cdots & X^{(ii)} X^{\scriptscriptstyle(i \: i-1)}\\ X^{\scriptscriptstyle( i\!{\scriptscriptstyle +}\!1 \: i\!{\scriptscriptstyle +}\!1)} X^{\scriptscriptstyle(i+1 \: i)} & {X^{\scriptscriptstyle( i+1 \: i+1)}}^{2} &\cdots & X^{\scriptscriptstyle(i+1 \: i+1)} X^{\scriptscriptstyle(i+1 \: i-1)}\\ \vdots& \vdots & \ddots & \vdots \\ X^{\scriptscriptstyle(i-1 \: i-1)} X^{\scriptscriptstyle(i-1 \: i)}& X^{\scriptscriptstyle( i-1 \: i-1)} X^{\scriptscriptstyle(i-1 \: i+1)} & \cdots & (X^{\scriptscriptstyle(i-1 \: i-1)})^2 \end{pmatrix}^{-1} \! {\scriptstyle \geq } \begin{pmatrix} (X^{\scriptscriptstyle(ii)})^{\scriptscriptstyle{-2}}& 0 & \cdots & 0\\ 0 & 0 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix}, \; \forall \; i.
\end{eqnarray} \normalsize Define:\begin{align} \left( \begin{array}{c|c} A & \qquad B \qquad \qquad\\ \hline ~ & \qquad ~ \qquad \qquad\\ B^{\dag} & \qquad C \qquad \qquad\\ ~ & \qquad ~ \qquad \qquad \end{array} \right) \equiv \left( \begin{array}{c|ccc} (X^{(ii)})^2 & X^{(ii)} X^{(i \: i+1)} & \cdots & X^{(ii)} X^{(i \: i-1)}\\ \hline X^{( i+1 \: i+1)} X^{(i+1 \: i)} & {X^{( i+1 \: i+1)}}^{2} &\cdots & X^{(i+1 \: i+1)} X^{(i+1 \: i-1)}\\ \vdots & \vdots & \ddots & \vdots \\ X^{(i-1 \: i-1)} X^{(i-1 \: i)} & X^{( i-1 \: i-1)} X^{(i-1 \: i+1)} & \cdots & (X^{(i-1 \: i-1)})^2 \end{array} \right) \end{align} Hence our objective is to prove that: \begin{eqnarray} \label{invdepict2} \begin{pmatrix} A & B \\ B^{\dag} & C \end{pmatrix}^{-1} > 0 \Longrightarrow \begin{pmatrix} A & B \\ B^{\dag} & C \end{pmatrix}^{-1} \geq \begin{pmatrix} A^{-1} & 0 \\ 0 & 0 \end{pmatrix} \end{eqnarray} Given that $\bigl(\begin{smallmatrix} A&B\\ B^{\dag}&C \end{smallmatrix} \bigr) > 0$, its inverse is given by \cite{Boyd}: \begin{align} \begin{pmatrix} A & B \\ B^{\dag} & C \end{pmatrix} ^{-1} & = \begin{pmatrix} A^{-1} + Q S_AQ^{\dag} & -Q S_A \\ -S_A Q^{\dag} & S_A \end{pmatrix}\\ & = \begin{pmatrix} A^{-1} & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} Q S_AQ^{\dag} & -Q S_A \\ -S_A Q^{\dag} & S_A \end{pmatrix} \end{align} where $S_A \equiv (C-B^{\dag}A^{-1}B)^{-1} > 0$ is the inverse of the Schur complement of $A$ in $\bigl(\begin{smallmatrix} A&B\\ B^{\dag}&C \end{smallmatrix} \bigr)$ and $Q \equiv A^{-1}B$ \cite{Boyd}. Hence the inequality \eqref{invdepict2} amounts to proving the following: \begin{eqnarray} \label{invdepict3} \begin{pmatrix} Q S_AQ^{\dag} & -Q S_A \\ -S_A Q^{\dag} & S_A \end{pmatrix} \geq 0 \end{eqnarray} As shown in \cite{Boyd}, if $S_A >0 $, then: $\bigl(\begin{smallmatrix} Q S_AQ^{\dag} & -Q S_A \\ -S_A Q^{\dag} & S_A \end{smallmatrix} \bigr) \geq 0 \Leftrightarrow$ the Schur complement of $S_A$ in $\bigl(\begin{smallmatrix} Q S_AQ^{\dag} & -Q S_A \\ -S_A Q^{\dag} & S_A \end{smallmatrix} \bigr) \geq 0$. Now $\bigl(\begin{smallmatrix} A&B\\ B^{\dag}&C \end{smallmatrix} \bigr) > 0 \Longrightarrow S_A > 0$. The Schur complement of $S_A$ in $\bigl(\begin{smallmatrix} Q S_AQ^{\dag} & -Q S_A \\ -S_A Q^{\dag} & S_A \end{smallmatrix} \bigr)$ is equal to $0$. This implies that $\bigl(\begin{smallmatrix} Q S_AQ^{\dag} & -Q S_A \\ -S_A Q^{\dag} & S_A \end{smallmatrix} \bigr) \geq 0$. Hence the inequality \eqref{invdepict3} is true. This proves that condition 1. of the theorem (or equivalently condition \eqref{St}) together with $Z>0$ subsumes the condition given by \eqref{Glb}. This proves the theorem. \end{proof} Hence the necessary and sufficient conditions \eqref{St} (or equivalently equation \eqref{cslack}) and \eqref{Glb} are subsumed in the statement: $DG^{\frac{1}{2}}W >0$. Alternatively, the necessary and sufficient conditions can be put in the following corollary: \begin{corollary} \label{corollary1} The necessary and sufficient condition for an $m$-element POVM $\{ \Pi_i \}_{i=1}^{m}$ to optimally discriminate among an ensemble of $m$ linearly independent states $\{ p_i, \; \rho_i\}_{i=1}^{m}$ is that $\{ \Pi_i \}_{i=1}^{m}$ is a projective measurement and $\sum_{i=1}^{m} p_i \rho_i \Pi_i > 0$.
\end{corollary} We can re-express the necessary and sufficient conditions to obtain the optimal POVM for MED of the ensemble $\widetilde{P}$ as: \begin{itemize} \item[\textbf{A}:] \label{AA} One needs to find a block diagonal matrix, $D=$ $ Diag ($ $X^{(11)},$ $X^{(22)}$ $,\cdots,$ $X^{(mm)} ) $ $\geq0$ where $X^{(ii)}$ is an $r_i \times r_i$ positive definite matrix, so that the diagonal blocks of the positive square root of the matrix $DGD$ are given by $\left( X^{(11)} \right)^2, \left( X^{(22)} \right)^2, \cdots,\left( X^{(mm)} \right)^2$ respectively. Here $G$ corresponds to the Gram matrix of the vectors $\{ \tket{\psi}{ij_i} \; | \; 1 \leq i \leq m, \; 1 \leq j_i \leq r_i\}$ where $p_i\rho_i = \sum_{j_i=1}^{r_i} \ketbrat{\psi}{ij_i}{\psi}{ij_i}$, for all $i=1,2,\cdots,m$. \end{itemize} Condition \textbf{A} is a rotationally invariant form of expressing conditions \eqref{cslack} (or equivalently \eqref{St}) and \eqref{Glb}. We will now construct the ensemble $\widetilde{Q} = \{ q_i , \sigma_i \}_{i=1}^{m} \; \in \mathcal{E}(r_1,r_2,\cdots,r_m)$, such that $supp \left( q_i \sigma_i \right) = supp \left( p_i \rho_i \right), \; \forall \; 1 \leq i \leq m$, and for which the relation $PGM \left(\widetilde{Q} \right) = \{ \Pi_i \}_{i=1}^{m}$ holds true. Using equation \eqref{chi}, define the following: \begin{equation} \label{sigma} \sigma_i \equiv \frac{1}{\sum_{k_i=1}^{r_i} \tbraket{\chi}{i k_i}{\chi}{i k_i}} \sum_{j_i=1}^{r_i} \ketbrat{\chi}{i j_i}{\chi}{i j_i}, \; \forall \; 1 \leq i \leq m, \end{equation} \begin{equation} \label{qi} q_i \equiv \dfrac{\sum_{j_i=1}^{r_i} \tbraket{\chi}{i j_i}{\chi}{i j_i}}{\sum_{l=1}^{m} \sum_{k_l=1}^{r_l} \tbraket{\chi}{l k_l}{\chi}{l k_l}} , \; \forall \; 1 \leq i \leq m. \end{equation} By the very definition, $q_i > 0, \; \forall \; 1 \leq i \leq m$. And since the set $\{ \tket{\chi}{ij_i} \}_{j_i=1}^{r_i}$ spans $supp \left( p_i \rho_i \right)$, we have that $supp \left( q_i \sigma_i \right) = supp \left( p_i\rho_i \right), \; \forall \; 1 \leq i \leq m$. It remains to be shown that $\{ \Pi_i \}_{i=1}^{m}$ is the PGM of $\widetilde{Q}$. \begin{theorem} \label{PGMtheorem} $\{ \Pi_i \}_{i=1}^{m}$ is the PGM of $\widetilde{Q}$, i.e., $ \Pi_i = \left( \sum_{j=1}^{m} q_j \sigma_j \right)^{- \frac{1}{2} } q_i \sigma_i \left( \sum_{k=1}^{m} q_k \sigma_k \right)^{- \frac{1}{2} }, \; \forall \; 1 \leq i \leq m$. \end{theorem} \begin{proof} We introduce a set of vectors complementary to the set $\{ \tket{\chi}{i j_i} \}_{i=1, \; j_i=1}^{i = m, \; j_i = r_i}$ in the same way that the vectors $\{ \tket{u}{i j_i} \}_{i=1, \; j_i=1}^{i = m, \; j_i = r_i}$ are complementary to the set $\{ \tket{\psi}{i j_i} \}_{i=1, \; j_i=1}^{i = m, \; j_i = r_i}$, based on equation \eqref{psiandu}. \begin{equation} \label{chiandomega} \tbraket{\chi}{i_1j_1}{\omega}{i_2j_2} = \delta_{i_1 i_2}\delta_{j_1j_2}, \; \forall \; 1 \leq i_1, i_2 \leq m, \; 1 \leq j_1 \leq r_{i_1}, \; 1 \leq j_2 \leq r_{i_2}. \end{equation} Based on the definition of the vectors $\{ \tket{\chi}{i j_i} \}_{i=1, \; j_i=1}^{i = m, \; j_i = r_i}$ from equation \eqref{chi}: \begin{equation} \label{wu} \tket{\omega}{ij_i} \equiv \sum_{l =1}^{m} \sum_{k_l=1}^{r_l} \left({D}^{-1}\right)^{(l \; i)}_{k_l \; j_i}\tket{u}{l k_l} \end{equation} From the definition of $\tket{\omega}{ij_i}$ in equations \eqref{chiandomega} and \eqref{wu} it is easy to see that $\{ \tket{\omega}{ij_i} \}_{i=1, \; j_i=1}^{i = m, \; j_i = r_i}$ forms a linearly independent set.
We can expand $\ket{w_{ij}}$ from equation \eqref{Pidecomposition} in terms of the $\tket{\omega}{ij}$: \begin{equation} \label{mixedv3} \ket{w_{ij}} = \sum_{l=1}^{m}\sum_{k_l=1}^{r_l} \left( D G^{\frac{1}{2}}W \right)^{(l \; i)}_{k_l \; j}\tket{\omega}{l k_l}, \end{equation} and, similar to equation \eqref{mixedP} we get: \begin{equation} \label{Pomega} \Pi_i = \sum_{l_1, l_2 = 1}^{m} \sum_{k_1=1}^{r_{l_1}} \sum_{k_2=1}^{r_{l_2}} \left( \sum_{j=1}^{r_i} \left( D G^{\frac{1}{2}}W \right)^{(l_1 \; i)}_{k_1 \; j} \left( W^\dag G^{\frac{1}{2}} D \right)^{(i \; l_2)}_{j \; k_2} \right) \ketbrat{\omega}{l_1 k_1}{\omega}{l_2 k_2}. \end{equation} We will prove that $\left( \sum_{j=1}^{m} q_j \sigma_j \right)^{- \frac{1}{2} } q_i \sigma_i \left( \sum_{k=1}^{m} q_k \sigma_k \right)^{- \frac{1}{2} }$ is equal to the RHS of equation \eqref{Pomega}, $\forall \; 1 \leq i \leq m$. That will prove the theorem. By the definition of $\sigma_i$ in equation \eqref{sigma} we get that $ \sum_{i=1}^{m} q_i \sigma_i $ is given by: \begin{equation} \label{sumsigma} \sum_{i=1}^{m} q_i \sigma_i = \frac{1}{\sum_{s=1}^{m} \sum_{t_s=1}^{r_s} \tbraket{\chi}{s t_s}{\chi}{s t_s}} \sum_{l=1}^{m} \sum_{k_l=1}^{r_l} \ketbrat{\chi}{l k_l}{\chi}{l k_l} \end{equation} Using equation \eqref{sumsigma}, it can easily be verified that: \begin{equation} \label{sumsigmainv} \left( \sum_{i=1}^{m} q_i \sigma_i \right)^{-1} = \left( \sum_{s=1}^{m} \sum_{t_s=1}^{r_s} \tbraket{\chi}{s t_s}{\chi}{s t_s} \right) \sum_{l=1}^{m} \sum_{k_l=1}^{r_l} \ketbrat{\omega}{l k_l}{\omega}{l k_l} \end{equation} Bearing in mind that $DG^\frac{1}{2} W$ is the positive square root of the matrix $DGD$, and that $DGD$ is the Gram matrix of the set of vectors $\{ \tket{\chi}{i j_i} \}_{i=1, \; j_i=1}^{i = m, \; j_i = r_i}$, it can be easily verified that: \begin{equation} \label{sumsigmainvsq} \left( \sum_{i=1}^{m} q_i \sigma_i \right)^{-\frac{1}{2}} = \left( \sum_{s=1}^{m} \sum_{t_s=1}^{r_s} \tbraket{\chi}{s t_s}{\chi}{s t_s} \right)^\frac{1}{2} \sum_{l_1=1}^{m} \sum_{k_1=1}^{r_{l_1}} \sum_{l_2=1}^{m} \sum_{k_2=1}^{r_{l_2}} \ketbrat{\omega}{l_1 k_1}{\omega}{l_2 k_2} \left( D G^\frac{1}{2} W \right)^{ \left( l_1 \; l_2 \right) }_{k_1 \; k_2}. \end{equation} Using the expression for $\left( \sum_{i=1}^{m} q_i \sigma_i \right)^{-\frac{1}{2}}$ in equation \eqref{sumsigmainvsq}, the expression for $q_i \sigma_i$ in equations \eqref{sigma} and \eqref{qi} and after a bit of algebra we get the result that $\left( \sum_{j=1}^{m} q_j \sigma_j \right)^{- \frac{1}{2} } q_i \sigma_i \left( \sum_{k=1}^{m} q_k \sigma_k \right)^{- \frac{1}{2} }$ is equal to the RHS of equation \eqref{Pomega}, $\forall \; 1 \leq i \leq m$. This establishes that $\{ \Pi_i \}_{i=1}^{m} = PGM (\widetilde{Q})$. Hence proved. \end{proof} Thus we have shown that for every $\widetilde{P} = \{ p_i, \rho_i \}_{i=1}^{m} \in \mathcal{E}(r_1,r_2,\cdots,r_m)$ there exists an ensemble $\widetilde{Q} = \{ q_i, \sigma_i \}_{i=1}^{m} \in \mathcal{E}(r_1,r_2,\cdots,r_m)$ such that $supp \left( q_i \sigma_i \right) = supp \left( p_i \rho_i \right), \; \forall \; 1 \leq i \leq m$ and such that $\widetilde{Q}$'s PGM is $\{ \Pi_i \}_{i=1}^{m} = \mathscr{P}\left( \widetilde{P} \right)$. This establishes the $\widetilde{P} \longrightarrow \widetilde{Q}$ correspondence mentioned earlier. The next question that needs to be answered is whether there was any ambiguity in the way we arrived at the ensemble $\widetilde{Q}$ for a given $\widetilde{P}$.
The only ambiguity that we have allowed to remain is in the choice of the decomposition of the states $p_i \rho_i$ into the pure unnormalized states $\tket{\psi}{i j_i}$ in equation \eqref{rhodecomposition}. For a given choice of such a decomposition for all $i = 1, 2, \cdots, m$, we arrived at a unique $n \times n$ unitary $W$ such that the block diagonal matrix $D$, defined in equation \eqref{part1} and equation \eqref{DX}, is positive definite. And using the $X^{(ii)}$ matrices we arrived at the set of states $\tket{\chi}{i j_i}$ in equation \eqref{chi} from which the states $q_i \sigma_i$ were constructed using equations \eqref{sigma} and \eqref{qi}. It is now natural to ask if the final states $q_i \sigma_i$ depend on the choice of the decomposition of the $p_i \rho_i$'s used in equation \eqref{rhodecomposition}. Very briefly we take the reader through the sequence of steps that show that this isn't the case. Let $U'^{(i)}$ be an $r_i \times r_i$ unitary, for $i=1, 2, \cdots, m$. Arrange the $m$ unitary matrices $U'^{(1)}$, $U'^{(2)}$, $\cdots$, $U'^{(m)}$ as the diagonal blocks of an $n \times n$ unitary matrix which we call $U'_D$: \begin{equation} \label{U'D} U'_D = \begin{pmatrix} {U'}^{(1)} & 0 & \cdots & 0 \\ 0 & {U'}^{(2)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & {U'}^{(m)} \end{pmatrix}. \end{equation} Define the following: \begin{eqnarray} \label{psi'} \tket{\psi '}{i j_i} \equiv \sum_{l=1}^{m} \sum_{k_l =1 }^{r_l} \left( U'_D \right)^{\left( l \; i \right)}_{k_l \; j_i} \tket{\psi}{l k_l}, \; \forall \; 1 \leq i \leq m, 1 \leq j_i \leq r_i, \\ \label{u'} \tket{u '}{i j_i} \equiv \sum_{l=1}^{m} \sum_{k_l =1 }^{r_l} \left( U'_D \right)^{\left( l \; i \right)}_{k_l \; j_i} \tket{u}{l k_l}, \; \forall \; 1 \leq i \leq m, 1 \leq j_i \leq r_i. \end{eqnarray} Note that $p_i \rho_i = \sum_{j_i=1}^{r_i} \ketbrat{\psi '}{i j_i}{\psi '}{i j_i}$, $\forall \; 1 \leq i \leq m$, which implies that we now have an alternative decomposition of the states $p_i \rho_i$ into the pure states $\tket{\psi '}{i j_i}$. Also note that: \begin{equation} \label{psi'andu'} \tbraket{\psi'}{i_1j_1}{u'}{i_2j_2} = \delta_{i_1 i_2} \delta_{j_1 j_2}, \; \forall \; 1 \leq i_1,i_2 \leq m \text{ and } 1 \leq j_1 \leq r_{i_1}, \; 1 \leq j_2 \leq r_{i_2}, \end{equation} which is similar to equation \eqref{psiandu}. Equation \eqref{wexpandu} modifies to: \begin{equation} \label{wexpandu'} \ket{w_{ij_i}} = \sum_{l=1}^{m}\sum_{k_l=1}^{r_l} \left( {U'_D}^\dag G^\frac{1}{2} W \right)^{(l \; i)}_{k_l \; j_i} \tket{u'}{lk_l}, \; \forall \; 1 \leq i \leq m, \; 1 \leq j_i \leq r_i. \end{equation} Earlier on, we chose the $n \times n$ unitary $W$ in such a manner that the diagonal blocks of $G^\frac{1}{2}W$, i.e., the matrices $X^{(11)}$, $X^{(22)}$, $\cdots$, $X^{(mm)}$ are hermitian (and positive definite). The diagonal blocks now become ${U'^{(1)}}^\dag X^{(11)}$, ${U'^{(2)}}^\dag X^{(22)}$, $\cdots$, ${U'^{(m)}}^\dag X^{(mm)}$. Hence we now employ a different spectral decomposition of the projectors $\Pi_i$, with orthonormal vectors $\ket{w'_{ij_i}}$ given by: \begin{equation} \label{w'expandu'} \ket{w'_{ij_i}} = \sum_{l=1}^{m}\sum_{k_l=1}^{r_l} \left( {U'_D}^\dag G^\frac{1}{2} W U'_D \right)^{(l \; i)}_{k_l \; j_i} \tket{u'}{lk_l}, \; \forall \; 1 \leq i \leq m, \; 1 \leq j_i \leq r_i.
\end{equation} The diagonal blocks in this case are ${U'^{(1)}}^\dag X^{(11)} U'^{(1)}$, ${U'^{(2)}}^\dag X^{(22)} U'^{(2)}$, $\cdots$, ${U'^{(m)}}^\dag X^{(mm)} U'^{(m)}$, which are not only hermitian but positive definite (since $X^{(ii)} > 0, \; \forall \; 1 \leq i \leq m $). Just as in the case of equation \eqref{chi}, define: \begin{eqnarray} \label{chi'} & \tket{\chi'}{ij_i} & = \sum_{\substack{k=1}}^{r_i} \left( {U'^{(i)}}^\dag X^{(ii)} U'^{(i)} \right)_{kj} \tket{\psi '}{ik} \notag \\ & ~ & = \sum_{\substack{k=1}}^{r_i} \left( X^{(ii)} U'^{(i)} \right)_{kj} \tket{\psi }{ik}, \forall \; 1 \leq i \leq m, \; 1 \leq j_i \leq r_i. \end{eqnarray} Using equation \eqref{chi'} and equation \eqref{chi} it isn't difficult to show that: \begin{equation} \label{chichi'} \sum_{j=1}^{r_i} \ketbrat{\chi '}{i j}{\chi '}{i j} = \sum_{k=1}^{r_i} \ketbrat{\chi }{i k}{\chi }{i k}, \; \forall \; 1 \leq i \leq m. \end{equation} Using equations \eqref{sigma} and \eqref{qi} we then get that: \begin{equation} \label{sigma'} \sigma_i = \frac{1}{\sum_{k_i=1}^{r_i} \tbraket{\chi '}{i k_i}{\chi '}{i k_i}} \sum_{j_i=1}^{r_i} \ketbrat{\chi '}{i j_i}{\chi '}{i j_i}, \; \forall \; 1 \leq i \leq m. \end{equation} \begin{equation} \label{q'i} q'_i \equiv \dfrac{\sum_{j_i=1}^{r_i} \tbraket{\chi '}{i j_i}{\chi '}{i j_i}}{\sum_{l=1}^{m} \sum_{k_l=1}^{r_l} \tbraket{\chi '}{l k_l}{\chi '}{l k_l}} = q_i , \; \forall \; 1 \leq i \leq m. \end{equation} This establishes that the correspondence $\widetilde{P} \longrightarrow \widetilde{Q}$ is invariant over the choice of pure state decompositions of $p_i\rho_i$\footnote{Actually, this association is also invariant over the choice of spectral decomposition of $\Pi_i$ in equation \eqref{Pidecomposition}. Our choice of spectral decomposition was such that the $D$ matrix, defined in equation \eqref{DX}, is positive definite for the sake of the convenience this offers; this isn't necessary.}. Going through all the steps taken to construct the ensemble $\widetilde{Q}$ from the ensemble $\widetilde{P}$ and the optimal POVM $\mathscr{P}\left( \widetilde{P} \right) = \{ \Pi_i \}_{i=1}^{m}$, we can see that there is no degree of freedom on account of which the association of $\widetilde{P}$ to $\widetilde{Q}$ can be regarded as ambiguous. This tells us that the correspondence $\widetilde{P} \longrightarrow \widetilde{Q}$ is a map from $\mathcal{E}(r_1,r_2,\cdots,r_m)$ to itself. We denote this map by $\mathscr{R}$; thus we have $\mathscr{R} : \mathcal{E}(r_1,r_2,\cdots,r_m) \longrightarrow \mathcal{E}(r_1,r_2,\cdots,r_m)$, such that $\mathscr{R} \left( \widetilde{P} \right) = \widetilde{Q}$ and such that $\mathscr{P}\left(\widetilde{P} \right) = PGM\left(\mathscr{R}\left(\widetilde{P}\right)\right)$. \subsection{Invertibility of $\mathscr{R}$} \label{invertibilityofR} The existence of the map $\mathscr{R}$ was already demonstrated in \cite{Mas}. The reason we went through the elaborate process of re-demonstrating its existence is that this sequence of steps enables us to trivially establish that the map $\mathscr{R}$ is invertible. We first show that $\mathscr{R}$ is onto. \begin{theorem} \label{Ronto} The map $\mathscr{R}$ is onto. \end{theorem} \begin{proof} This means we have to prove that $\forall \; \widetilde{Q} \in \mathcal{E}(r_1,r_2,\cdots,r_m),$ $ \exists \; \text{some } \widetilde{P} \in \mathcal{E}(r_1,r_2,\cdots,r_m)$ $\ni \; \mathscr{R} \left( \widetilde{P} \right) = \widetilde{Q}$. Let $\widetilde{Q} = \{ q_i, \sigma_i \}_{i=1}^{m} \in \mathcal{E}(r_1,r_2,\cdots,r_m)$.
Thus $supp \left( q_1 \sigma_1 \right)$, $supp \left( q_2 \sigma_2 \right)$, $\cdots$, $supp \left( q_m \sigma_m \right)$ are LI subspaces of $\mathcal{H}$ of dimensions $r_1, r_2, \cdots, r_m$ respectively. Let the following be a resolution of the state $q_i \sigma_i$ into pure states: \begin{equation} \label{sigmaresolution} q_i \sigma_i = \sum_{j_i=1}^{r_i} \ketbrat{\zeta}{ij_i}{\zeta}{ij_i}, \; \forall \; 1 \leq i \leq m. \end{equation} There is a $U \left( r_i \right)$ degree of freedom in choosing such a resolution. The set $\{ \tket{\zeta}{ij_i} \}_{i=1, \; j_i = 1}^{i=m, \; j_i = r_i}$ is LI. Let's denote the Gram matrix corresponding to the set of states $\{ \tket{\zeta}{ij_i} \}_{i=1, \; j_i = 1}^{i=m, \; j_i = r_i}$ by $F$. The matrix elements of $F$ are given by: \begin{equation} \label{Fmatrixelement} F^{(i_1 \; i_2)}_{j_1 \; j_2} = \tbraket{\zeta}{i_1j_1}{\zeta}{i_2j_2}, \; \forall \; 1 \leq i_1, i_2 \leq m, \; 1 \leq j_1 \leq r_{i_1}, \; 1 \leq j_2 \leq r_{i_2}. \end{equation} $F^{\frac{1}{2}}$ is the positive definite square root of $F$. Partition $F^{\frac{1}{2}}$ in the following manner: \begin{equation} \label{Froot} F^{\frac{1}{2}}= \begin{pmatrix} H^{(11)} & H^{(12)} & \cdots & H^{(1m)} \\ H^{(21)} & H^{(22)} & \cdots & H^{(2m)} \\ \vdots & \vdots & \ddots & \vdots \\ H^{(m1)} & H^{(m2)} & \cdots & H^{(mm)} \end{pmatrix}, \end{equation} where $H^{(ij)}$ is the $\left(i, j \right)$-th block matrix in $F^{\frac{1}{2}}$ and is of dimension $r_i \times r_j$, $\forall \; 1 \leq i, j \leq m$. Note that $F^{\frac{1}{2}} > 0 $ implies that $H^{(ii)} > 0, \; \forall \; 1 \leq i \leq m$. Corresponding to the set $\{ \tket{\zeta}{ij_i} \}_{i=1, \; j_i = 1}^{i=m, \; j_i = r_i}$ $\exists$ another unique set $\{ \tket{z}{ij_i} \}_{i=1, \; j_i = 1}^{i=m, \; j_i = r_i}$ such that \begin{equation} \label{zetaz} \tbraket{\zeta}{i_1j_1}{z}{i_2j_2} = \delta_{i_1, i_2}\delta_{j_1,j_2}, \; \forall \; 1 \leq i_1, i_2 \leq m, \; 1 \leq j_1 \leq r_{i_1}, \; 1 \leq j_2 \leq r_{i_2}. \end{equation} The relation that the set $\{ \tket{z}{ij_i} \}_{i=1, j_i = 1}^{i=m, j_i = r_i}$ bears to $\{ \tket{\zeta}{ij_i} \}_{i=1, j_i = 1}^{i=m, j_i = r_i}$ is equivalent to that which $\{ \tket{u}{ij_i} \}_{i=1, j_i = 1}^{i=m, j_i = r_i}$ bears to $\{ \tket{\psi}{ij_i} \}_{i=1, j_i = 1}^{i=m, j_i = r_i}$ (see equation \eqref{psiandu}); or as $\{ \tket{\omega}{ij_i} \}_{i=1, j_i = 1}^{i=m, j_i = r_i}$ bears to $\{ \tket{\chi}{ij_i} \}_{i=1, j_i = 1}^{i=m, j_i = r_i}$ (see equation \eqref{chiandomega}). Let the PGM of $\{q_i, \; \sigma_i \}_{i=1}^{m}$ be denoted by $\{ \Omega_i \}_{i=1}^{m}$. Thus $\Omega_i \geq 0$ and $\sum_{i=1}^{m} \Omega_i \, = \, \mathbb{1}$. In the body of the proof of theorem \ref{PGMtheorem} we constructed the PGM for an ensemble of mixed states using the pure state decomposition of the corresponding mixed states. Following the same sequence of steps gives us the $\Omega_i$ projectors expanded in the $\{ \ketbrat{z}{l_1 k_1}{z}{l_2k_2} \; | \; 1 \leq l_1,l_2 \leq m, \; 1 \leq k_1 \leq r_{l_1}, \; 1 \leq k_2 \leq r_{l_2} \}$ operator basis: \begin{equation} \label{omegaexp} \Omega_i = \sum_{l_1, l_2 =1}^{m}\sum_{k_1=1}^{r_{l_1}}\sum_{k_2=1}^{r_{l_2}} \left( \sum_{j=1}^{r_i} \left( F^{\frac{1}{2}} \right)^{(l_1 \; i)}_{k_1 \; j} \left( F^{\frac{1}{2}} \right)^{(i \; l_2)}_{j \; k_2} \right) \ketbrat{z}{l_1k_1}{z}{l_2k_2}, \; \forall \; 1 \leq i \leq m.
\end{equation} The gram matrix of the set $\{ \tket{z}{ij_i} \}_{i=1, j_i = 1}^{i=m, j_i = r_i}$ is given by $F^{-1}$ and using this fact it is trivial to show that the operators $\Omega_i$, given in equation \eqref{omegaexp}, are indeed projectors. Thus we have the PGM of the ensemble $\widetilde{Q}$ with us. Now we construct the ensemble which we will denote by $\widetilde{P'} = \{p'_i, \rho'_i \}_{i=1}^{m}$. This ensemble will be such that $\mathscr{R} \left( \widetilde{P}' \right) = \widetilde{Q}.$ Define the following: \begin{eqnarray} & \tket{\phi}{ij} & \equiv \sum_{k=1}^{r_i} \left( \left( H^{(ii)} \right)^{-\frac{1}{2}} \right)_{kj} \tket{\zeta}{ik}, \; \forall \; 1 \leq i \leq m, \\ \label{p'_irho'_i} & p'_i \rho'_i & \equiv \frac{1}{\sum_{l=1}^{m} \sum_{k_l=1}^{r_l} \tbraket{\phi}{l k_l}{\phi}{l k_l}} \sum_{j_i=1}^{r_i} \ketbrat{\phi}{i j_i}{\phi}{i j_i}, \; \forall \; 1 \leq i \leq m. \end{eqnarray} Note that $supp \left(p_i' \rho_i' \right) = supp \left(p_i \rho_i \right), \; \forall \; 1 \leq i \leq m$. This also implies that $\widetilde{P}' \in \mathcal{E}(r_1,r_2,\cdots,r_m)$. Let's denote $c = \frac{1}{\sum_{l=1}^{m} \sum_{k_l=1}^{r_l} \tbraket{\phi}{l k_l}{\phi}{l k_l}}$. We insert equations \eqref{p'_irho'_i} and \eqref{omegaexp} into equation \eqref{Z} to obtain: \begin{eqnarray} & Z & = \sum_{\substack{i=1}}^{m} p'_i \rho'_i \Omega_i \notag \\ & ~ & = c \sum_{i_1,i_2=1}^{m}\sum_{j_1=1}^{r_{i_1}} \sum_{j_2=1}^{r_{i_2}} \left( F^{-\frac{1}{2}} \right)^{(i_1 \; i_2)}_{j_1 \; j_2}\ketbrat{\zeta}{i_1j_1}{\zeta}{i_2j_2} \; > \; 0 \end{eqnarray} $PGM \left( \widetilde{Q} \right) = \{ \Omega_i \}_{i=1}^{m}$ is a projective measurment and $Z = \sum_{i=1}^{m} p_i \rho_i \Omega_i >0$. By the corollary \eqref{corollary1}, $PGM \left( \widetilde{Q} \right) = \mathscr{P} \left( \widetilde{P'} \right)$. We still need to verify if $\mathscr{R}\left(\widetilde{P}'\right) = \widetilde{Q}$ or not. To this purpose we need to construct the ensemble $\widetilde{Q}'$ from $\widetilde{P}'$ in the same way as $\widetilde{Q}$ was constructed from $\widetilde{P}$ in section \eqref{PQcorr}. Let's start by defining: \begin{equation} \label{DA} D_A \equiv \begin{pmatrix} \left( H^{(11)} \right)^{-\frac{1}{2}} & 0 & \cdots & 0 \\ 0 & \left( H^{(22)} \right)^{-\frac{1}{2}} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \left( H^{(mm)} \right)^{-\frac{1}{2}} \end{pmatrix} \end{equation} From equation \eqref{p'_irho'_i} we see that the vectors $\{ \tket{\phi}{i j_i} \}_{j_i=1}^{r_i}$ form a resolution of the state $p'_i \rho'_i$. The set of vectors $\{ \tket{\phi}{i j_i} \}_{i=1,\;j_i=1}^{i=m, \; j_i=r_i}$ are LI, so the gram matrix associated with this set, which we denote by $G'$, must be positive definite. Indeed it is given by $G' = c D_A F D_A$ which is positive definite. The matrix equivalent of $G^\frac{1}{2} W$, given in equation \eqref{part1} , in this case is $\sqrt{c} D_A F^\frac{1}{2}$. Note that, upto unitary degree of freedom in the choice of the decomposition of the states $p'_i \rho'_i$ into pure unnormalized states $\tket{\phi}{i j_i}$, the matrix $\sqrt{c} D_A F^\frac{1}{2}$ can be uniquely associated with the ensemble $\{p'_i, \rho'_i \}_{i=1}^{m}, \; \forall \; 1 \leq i \leq m$. The diagonal blocks of $\sqrt{c} D_A F^\frac{1}{2}$ are $\sqrt{c} \left( H^{(11)} \right)^\frac{1}{2}, \; \sqrt{c} \left( H^{(22)} \right)^\frac{1}{2}, \; \cdots, \;\sqrt{c} \left( H^{(mm)} \right) ^\frac{1}{2}$. 
Hence, the role played by $D>0$, given in equation \eqref{DX}, here is $\sqrt{c} \left(D_A \right)^{-1}$. Thus the matrix equivalent of $D G^\frac{1}{2} W$, given in equation \eqref{DXG}, here is $c F^\frac{1}{2}$ which is positive definite, and whose block diagonals - $c H^{(11)}, \; c H^{(22)}, \cdots, c H^{(mm)}$, are squares of the block diagonals of the matrix $\sqrt{c} D_A F^\frac{1}{2}$. We can construct a new set of vectors $\{ \tket{\zeta'}{ij_i} \}_{i=1,j_i=1}^{i=m,j_i=r_i}$ from $\{ \tket{\phi}{ij_i} \}_{i=1,j_i=1}^{i=m,j_i=r_i}$ in the same way $\{ \tket{\chi}{ij_i} \}_{i=1,j_i=1}^{i=m,j_i=r_i}$ were constructed from $\{ \tket{\psi}{ij_i} \}_{i=1,j_i=1}^{i=m,j_i=r_i}$ in equation \eqref{chi}; the role of $X^{(ii)}$ being played by $\sqrt{c}\left(H^{(ii)}\right)^\frac{1}{2}$. But then we get that $\tket{\zeta'}{ij_i} = \tket{\zeta}{ij_i}, \; \forall \; 1 \leq i \leq m, \; 1 \leq j_i \leq r_i$. This tells us that $q'_i \sigma'_i = \sum_{j=1}^{r_i} \ketbrat{\zeta'}{ij_i}{\zeta'}{ij_i} $, $\forall \; 1 \leq i \leq m$. This shows us that $\mathscr{R}\left( \widetilde{P'} \right) = \widetilde{Q}$ is indeed true. Hence $\mathscr{R}$ is onto. \end{proof} We next prove that $\mathscr{R}$ is one-to-one. \begin{theorem} \label{Roneone} $\mathscr{R}$ is one-to-one. \end{theorem} \begin{proof} We need to prove that if $ \mathscr{R} \left( \widetilde{P} \right) = \mathscr{R} \left( \widetilde{P'} \right) $ then $\widetilde{P} = \widetilde{P'}$, $\forall \; \widetilde{P}, \widetilde{P'} \in \mathcal{E}(r_1,r_2,\cdots,r_m)$. Let's denote $\widetilde{Q} =\mathscr{R} \left( \widetilde{P} \right) =\{ q_i, \sigma_i \}_{i=1}^{m} $ and $\widetilde{Q'} =\mathscr{R} \left( \widetilde{P'} \right)=\{ q'_i, \sigma'_i \}_{i=1}^{m} $. Let $\widetilde{P} = \{ p_i, \rho_i \}_{i=1}^{m} $ and $\widetilde{P'} = \{ p'_i, \rho'_i \}_{i=1}^{m} $. Given that $\mathscr{R}\left( \widetilde{P} \right) = \widetilde{Q}$. This implies the following: for any pure state decomposition of the states $\{ q_i \sigma_i \}_{i=1}^{m}$, with a corresponding gram matrix $F$, there exists a corresponding pure state decomposition of the states $\{ p_i \rho_i \}_{i=1}^{m}$, with a corresponding gram matrix $G$, such that $G = c D_{A} F D_{A}$, where $D_A$ is as defined in equation \eqref{DA} and $F^\frac{1}{2}$ is as defined in equation \eqref{Froot} and $c$ being the normalization constant. Similarly, given that $\mathscr{R}\left( \widetilde{P'} \right) = \widetilde{Q'}$, any pure state decomposition of the states $\{ q'_i \sigma'_i \}_{i=1}^{m}$, with a corresponding gram matrix $F'$, there exists a corresponding pure state decomposition of the states $\{ p'_i \rho'_i \}_{i=1}^{m}$, with a corresponding gram matrix $G'$, such that $G' = c' {D'}_{A} F {D'}_{A}$, where all the primed quantities ${D'}_A$ and ${F'}^\frac{1}{2}$ are defined similar to unprimed quantities in the equations \eqref{DA} and \eqref{Froot} and $c'$ is the corresponding normalization constant. That $\widetilde{Q}_1 = \widetilde{Q}_2$ implies that for any choice of pure state decomposition of the primed and unprimed ensemble states, there exists a block-diagonal unitary $U_D$ of the form given in equation \eqref{UD}, such that the gram matrices $F$ and $F'$ can be related by the relation: $F' = {U_D}^\dag F {U_D}$. It also implies that ${F'}^\frac{1}{2} = {U_D}^\dag F^\frac{1}{2} {U_D}$, ${D'}_A = {U_D}^\dag D_A {U_D}$. Thus we get the relation that $G' = {U_D}^\dag G {U_D}$. 
Thus the corresponding pure state decompositions of $\widetilde{P}$ and $\widetilde{P'}$ are related through an equation similar to equation \eqref{psi'} which implies that $\widetilde{P} = \widetilde{P'}$. Hence we have proved that $\mathscr{R}\left(\widetilde{P}'\right)=\mathscr{R}\left(\widetilde{P}\right)$ $\Longleftrightarrow \widetilde{P}'=\widetilde{P}$. Hence $\mathscr{R}$ is one to one. \end{proof} The theorems \eqref{Roneone} and \eqref{Ronto} jointly establish that the map $\mathscr{R}$ is invertible. We summarize all that we have done in this section in the following: \textbf{Hence we have proved the existence of a bijective function $\mathbf{\mathscr{R}: \mathcal{E}(r_1,r_2,\cdots,r_m) \longrightarrow \mathcal{E}(r_1,r_2,\cdots,r_m)}$ such that the optimal POVM for the MED of any LI ensemble $\mathbf{\widetilde{P} \in \mathcal{E}(r_1,r_2,\cdots,r_m)}$, which is given by $\mathbf{\mathscr{P}\left(\widetilde{P}\right)}$, satisfies the following relation:} \begin{equation} \label{SOLFORM} \mathbf{\mathscr{P}\left( \widetilde{P} \right) = PGM \left( \mathscr{R} \left( \widetilde{P} \right) \right).} \end{equation} \textbf{The inverse map $\mathbf{\mathscr{R}^{-1}}$ has an analytic expression:} \begin{equation} \label{invR} \mathbf{\mathscr{R}^{-1}\left( \{ q_i, \sigma_i \}_{i=1}^{m} \right) = \{p_i, \rho_i \}_{i=1}^{m},} \end{equation} \textbf{where, if } \begin{itemize} \item $\mathbf{q_i \sigma_i = \dfrac{1}{\sum_{s=1}^{m}\sum_{t_s=1}^{r_s} \tbraket{\chi}{st_s}{\chi}{st_s}} \sum_{l=1}^{m}\sum_{k_l=1}^{r_l} \ketbrat{\chi}{l k_l}{\chi}{l k_l}}$ \item $\mathbf{p_i \rho_i = \dfrac{1}{\sum_{s=1}^{m}\sum_{t_s=1}^{r_s} \tbraket{\psi}{st_s}{\psi}{st_s}}\sum_{l=1}^{m}\sum_{k_l=1}^{r_l} \ketbrat{\psi}{l k_l}{\psi}{l k_l}}$ \end{itemize} \textbf{are pure state decompositions of the states in} $\mathbf{\widetilde{Q}}$ \textbf{and} $\mathbf{\widetilde{P}}$\textbf{ respectively, then} $\mathbf{\{ \tket{\chi}{i j_i} \}_{i=1, \; j_i =1}^{i=m, \; j_i = r_i}}$\textbf{ and }$\mathbf{\{ \tket{\psi}{i j_i} \}_{i=1, \; j_i =1}^{i=m, \; j_i = r_i}}$\textbf{ are related through the transformation:} \begin{equation} \label{psichi} \mathbf{\tket{\psi}{i j} = c \sum_{k=1}^{r_i} \left( \left(H^{(ii)}\right)^{-\frac{1}{2}} \right)_{k j} \tket{\chi}{ik}, \; \forall \; 1 \leq i \leq m, \; 1 \leq j \leq r_i,} \end{equation} \textbf{where $\mathbf{c = \dfrac{1}{\sqrt{\sum_{s=1}^{m}\sum_{t,t_1,t_2 = 1}^{r_s} \left( \left( H^{(ss)} \right)^{-\frac{1}{2}} \right)_{t t_1} \left( F \right)^{(s \; s)}_{t_1 \; t_2} \left( \left( H^{(ss)} \right)^{-\frac{1}{2}} \right)_{t_2 t}}}}$ and where $\mathbf{F}$ is the gram matrix of the set $\mathbf{\{ \tket{\chi}{i j_i} \}_{i=1, \; j_i =1}^{i=m, \; j_i = r_i}}$ and $\mathbf{H^{(ii)}}$'s are as defined in equation \eqref{Froot}.} \section{Comparing MED for Mixed LI ensembles and LI pure state ensembles} \label{compareMEDP} Minimum Error Discrimination is the task of extracting information about a state, by discarding some of the uncertainty of which state Alice sends Bob from the ensemble. Heuristically, one can expect that Bob is required to extract \emph{more} information while performing MED of an ensemble of $n$ LI pure states, which span $\mathcal{H}$, compared to an ensemble of $m$ ($m<n$) LI mixed states, where the supports of these $m$ states also span $\mathcal{H}$. This is because Bob requires to ``probe" the first ensemble ``deeper" compared to the second ensemble of states. 
This is better appreciated when comparing the MED of a mixed state ensemble and an ensemble of LI pure states which form pure state decompositions of the mixed states in the former. In this case it is a natural to ask if, generally, the optimal POVM for the LI pure state ensemble is a pure state decomposition of the optimal POVM for the mixed state ensemble, i.e., when a mixed state ensemble $\{ p_i, \rho_i \}_{i=1}^{m}$, with optimal POVM $\{ \ Pi_i \}_{i=1}^{m}$, and a pure state ensemble $\{ \lambda_{ij_i}, \ketbra{\psi'_{ij_i}}{\psi'_{ij_i}} \}_{i=1,j_i=1}^{i=m, j_i=r_i}$, with optimal POVM $\{ \ketbra{w'_{ij_i}}{w'_{ij_i}}\}_{i=1,j_i=1}^{i=m, j_i=r_i}$, are related by $p_i \rho_i = \sum_{j=1}^{r_i} \lambda_{ij} \ketbra{\psi'_{ij}}{\psi'{ij}}$ is it generally true that $\Pi_i = \sum_{j=1}^{r_i} \ketbra{w'_{ij}}{w'_{ij}}$, $\forall \; 1 \leq i \leq m$? In general, the answer is no. But we will now show that for every LI mixed state ensemble, one can find a corresponding pure state decomposition such that the optimal POVM for the MED of the ensemble of these LI pure states is a pure state decomposition of the optimal POVM for MED of the mixed state ensemble. Let equation \eqref{rhodecomposition} give a pure state decomposition of $p_i \rho_i, \; \forall \; 1 \leq i \leq m$. Then corresponding to the states $\{ \tket{\psi}{ij_i} \}_{i=1, j_i = 1}^{i=m, j_i = r_i}$, there exist a unique set of states $\{ \tket{u}{ij_i} \}_{i=1, j_i = 1}^{i=m, j_i = r_i}$, given by equation \eqref{u}, and a unique $n \times n$ unitary $W$, such that the projectors of the optimal POVM for the ensemble $\{p_i, \rho_i \}_{i=1}^{m}$ are given by equation \eqref{mixedP} and the matrix $DG^\frac{1}{2}W >0$, where $G^\frac{1}{2}$ is the positive definite square root of the gram matrix $G$ of the $\tket{\psi}{ij_i}$ vectors and $D$ is defined in equation \eqref{DX}. Using $D$ we construct a new set of vectors $\{ \tket{\chi}{ij_i} \}_{i=1, j_i = 1}^{i=m, j_i = r_i}$, as given by equation \eqref{chi} and from this set we constuct a new ensemble of states $\{q_i, \sigma_i \}_{i=1}^{m}$, using equations \eqref{sigma} and \eqref{qi}. It was verified that the optimal POVM $\{ \Pi_i \}_{i=1}^{m} $ is the PGM of the ensemble $\{q_i, \sigma_i \}_{i=1}^{m}$. We now make the $U \left( r_1 \right) \times U \left( r_2 \right) \times \cdots \times U \left( r_m \right)$ degree of freedom in choosing the pure state decomposition in equation \eqref{rhodecomposition} explicit. Thus, let $p_i \rho_i = \sum_{j=1}^{r_i} \ketbrat{\psi'}{ij}{\psi'}{ij}$ be a pure state decomposition of the LI states in the ensemble $p_i \rho_i, \; \forall \; 1 \leq i \leq m$, where $\tket{\psi'}{ij_i}$ and $\tket{\psi}{ij_i}$ are related by equation \eqref{psi'}, where $U'_D$ is a block diagonal unitary given by equation \eqref{U'D}. $U'_D$ is a variable for now; it's value will be fixed later. Corresponding to the primed vectors $\tket{\psi'}{ij_i}$, we have $\tket{u}{ij_i} \longrightarrow \tket{u'}{ij_i}$, as per equation \eqref{u'}, $W \longrightarrow W' = {U'_D}^\dag W U'_D$, $G \longrightarrow G'= {U'_D}^\dag G U'_D$ , $G^\frac{1}{2}W \longrightarrow {G'}^\frac{1}{2}W' = {U'_D}^\dag G^\frac{1}{2}W U'_D$ and $\ket{w_{ij_i}} \longrightarrow \ket{w'_{ij_i}} = \sum_{l=1}^{m}\sum_{k_l=1}^{r_l} \left( {G'}^\frac{1}{2} W' \right)^{(l \; i)}_{k_l \; j_i} \tket{u'}{l k_l}$ (equation \eqref{w'expandu'}). 
$G^\frac{1}{2}W \longrightarrow {U'_D}^\dag G^\frac{1}{2}W U'_D$ implies that $X^{ \left( i j \right)} \ longrightarrow {X'}^{ \left( i j \right)}= {{U'}^{ \left( i \right)}}^\dag X^{ \left( i j \right)} {U'}^{ \left( j \right)}$. In particular we can choose ${U'}^{ \left( i \right)}$ to be such that $ {X'}^{ \left( i i \right)}$ are diagonal, $\forall \; 1 \leq i \leq m$. This fixes the block diagonal unitary $U'_D$. Since $D \longrightarrow D' = {U'_D}^\dag D U'_D$, $D'$ is a diagonal matrix. This implies that $\tket{\chi}{ij_i}\longrightarrow \tket{\chi'}{ij_i} = \sum_{l=1}^{m}\sum_{k_l=1}^{r_l} \left(D'\right)^{(l \; i)}_{k_l \; j_i} \tket{\psi'}{l k_l}$ $=\left(D'\right)^{(i \; i)}_{j_i \; j_i} \tket{\psi'}{ij_i} $. As noted in the end of subsection \eqref{PQcorr}, the ensemble $\{q_i, \sigma_i \}_{i=1}^{m}$ remains invariant. Note that the diagonal of $D' {G'}^\frac{1}{2} W' = \sqrt{D' G' D'}$ is ${D'}^{2}$. Let $\tket{\psi'}{ij_i} = \sqrt{\lambda_{ij_i}}\ket{\psi_{ij_i}}$, where $\ket{\psi_{ij_i}}$ are normalized. According to \cite{Singal} to solve the MED of the LI pure state ensemble $\{\lambda_{ij_i}, \ketbra{\psi_{ij_i}}{\psi_{ij_i}} \}_{i=1, j_i = 1}^{i=m, j_i = r_i}$, we need to find an $n \times n$ positive definite diagonal matrix $D''$, such that the diagonal of the positive square root of the matrix $D'' G' D''$ is ${D''}^2$. Here $G'$ is the gram matrix corresponding to the ensemble $\{\lambda_{ij_i}, \ketbra{\psi_{ij_i}}{\psi_{ij_i}} \}_{i=1, j_i = 1}^{i=m, j_i = r_i}$. But we have already found the solution: $D'' = D'$. In this case the optimal POVM is then given by $\{ \ketbra{w'_{ij_i}}{w'_{ij_i}} \}_{i=1,j_i=1}^{i=m, j_i=r_i}$. And we know that $\Pi_i = \sum_{j=1}^{r_i} \ketbra{w'_{ij_i}}{w'_{ij_i}}$, $\forall \; 1 \leq i \leq m$. Also, just as shown in \cite{Singal}, $\{ \ketbra{w'_{ij_i}}{w'_{ij_i}} \}_{i=1,j_i=1}^{i=m, j_i=r_i}$ is the PGM of the ensemble $\{ \lambda'_{ij_i}, \ketbra{\psi_{ij_i}}{\psi_{ij_i}} \}_{i=1,j_i = 1}^{i=m,j_i=r_i}$, where $\lambda'_{ij_i} =\dfrac{ \left( \left( D' \right)^{(i \; i)}_{j_i j_i} \right)^2 \lambda_{ij_i} } { Tr\left(D' G' D'\right)}$. But just as noted above $\tket{\chi'}{ij_i} = \sqrt{ Tr \left( D'G'D' \right) } \sqrt{\lambda'_{ij_i}}\ket{\psi_{ij_i}}$, $\forall \; 1 \leq i \leq m, \; 1 \leq j_i \leq r_i$. \emph{Thus, $\{ \lambda'_{ij_i}, \ketbra{\psi_{ij_i}}{\psi_{ij_i}} \}_{i=1,j_i = 1}^{i=m,j_i=r_i}$, whose PGM is the optimal POVM for the ensemble $\{ \lambda_{ij_i}, \ketbra{\psi_{ij_i}}{\psi_{ij_i}} \}_{i=1,j_i = 1}^{i=m,j_i=r_i}$, is a pure state decomposition of the ensemble $\{q_i, \sigma_i \}_{i=1}^{m}$ $ ( =\mathscr{R} \left( \widetilde{P} \right))$, whose PGM is the optimal POVM for the ensemble $\{ p_i, \rho_i \}_{i=1}^{m}$, where $\{ \lambda_{ij_i}, \ketbra{\psi_{ij_i}}{\psi_{ ij_i}} \}_{i=1,j_i = 1}^{i=m,j_i=r_i}$ itself is a pure state decomposition of the ensemble $\{ p_i, \rho_i \}_{i=1}^{m}$}. The feature that ensures that there is a LI pure state decomposition of the mixed state ensemble, such that the optimal POVM of the LI pure state ensemble is a pure state decomposition of the optimal POVM of the LI mixed state ensemble, is the spectral decomposition of the matrices $X^{(ii)}$. This begs the question: for any LI mixed state ensemble, is such a LI pure state decomposition unique? The key feature that is required is that the ${X'}^{(ii)}$ matrices are diagonalized. 
Hence there are as many pure state decompositions of the mixed state ensemble with this property as there are spectral decompositions of the $D$ matrix. If $X^{(ii)}$ has $s_i$ distinct eigenvalues and the degeneracy of the $j_{i}$-th eigenvalue $( 1 \leq j_i \leq s_i)$ has a degeneracy of $k_{j_i}$\footnote{Needless to say, $\sum_{j_i}^{s_i} k_{j_i} = r_i$.}, then there is a $U(k_{1_1}) \times U(k_{2_1}) \times \cdots \times U(k_{s_1}) \times U(k_{1_2}) \times U(k_{2_2}) \times \cdots \times U(k_{s_2}) \times \cdots \times U(k_{1_ m}) \times U(k_{2_m}) \times \cdots \times U(k_{s_m})$ degree of freedom in choosing a pure state decomposition of the mixed state ensemble with this property. What about the converse? Consider an ensemble of pure states $\{ \lambda_i, \ketbra{\psi_i}{\psi_i} \}_{i=1}^{n}$ whose optimal POVM is $\{ \ketbra{v_i}{v_i} \}_{i=1}^{n}$. Partition the ensemble into disjoint subsets and sum over the elements in each subset and collect all such summations to form a new ensemble $\{p_i, \rho_i \}_{i=1}^{m}$, whose optimal POVM, let's say is given by $\{\Pi_i \}_{i=1}^{m}$. It is generally not the case that $\{ \ketbra{v_i}{v_i} \}_{i=1}^{n}$ is a pure state decomposition of elements in $\{\Pi_i \}_{i=1}^{m}$. So for which pure state ensembles is this true? Let's re-index the pure state ensemble: $i \longrightarrow (i, j_i)$, so that $p_i \rho_i = \sum_{j=1}^{r_i} \lambda_i \ketbra{\psi_{ij_i}}{\psi_{ij_i}}$. While performing the MED of the LI pure state ensemble, if the matrix $DG^\frac{1}{2}W$ is such that its block diagonals\footnote{i.e., the first $r_1 \times r_1$ diagonal block, the second $r_2 \times r_2$ block etc} are diagonal, then it is easy to see that the relation $\Pi_i = \sum_{j=1}^{r_i} \ketbra{v_{ij}}{v_{ij}}$ also holds true. Another question is if, given the problem of the MED of a LI mixed state ensemble, can one substitute the problem with the MED of a pure state decomposition such that the optimal POVM of the latter is a pure state decomposition of the former? The answer, unfortunately, is no. The reason being that to substitute the mixed state ensemble MED problem with the pure state ensemble MED problem one needs to first obtain the $n \times n$ unitary $W$ such that when $D G^\frac{1}{2} W$ is constructed (where $D$ is given by equation \eqref{DX}), it is positive definite. This is already equivalent to finding a solution for the MED of the mixed state ensemble. We know that the optimal POVM of a pure state LI ensemble is given by its own PGM iff the diagonal of the positive square root of the ensemble's gram matrix is a multiple of the identity. How does this condition change when we're given to perform the MED of a LI mixed state ensemble? In the following we prove that this occurs iff the diagonal blocks of $G^\frac{1}{2}$ are diagonalized and when the diagonal of $G^\frac{1}{2}$ is a multiple of the identity. \begin{theorem} \label{PGMOPT} For an ensemble $\widetilde{P} \in \mathcal{E}(r_1,r_2,\cdots,r_m)$ to satisfy $\mathscr{R}\left(\widetilde{P}\right) = \widetilde{P}$ it is necessary and sufficient that all eigenvalues of all the block diagonal matrices of $G^\frac{1}{2}$ are equal. \end{theorem} \begin{proof} \textbf{Necessary Part:} Let $\mathscr{R}\left(\widetilde{P}\right) = \widetilde{P}$. Let the pure state decomposition of $\widetilde{P}$ whose optimal POVM is a pure state decomposition of the optimal POVM for MED of $\widetilde{P}$ be $\{ \lambda_{ij_i}, \ketbra{\psi_{ij_i}}{\psi_{ij_i}} \}_{i=1,j_i = 1}^{i=m,j_i=r_i}$. 
Hence we have $p_i \rho_i = \sum_{j=1}^{r_i} \lambda_{ij} \ketbra{\psi_{ij}}{\psi_{ij}}$, $\forall \; 1 \leq i \leq m$. It was mentioned above that there exists a pure state decomposition of $\mathscr{R}\left( \widetilde{P} \right)$ of the form $\{ \lambda'_{ij_i}, \ketbra{\psi_{ij_i}}{\psi_{ij_i}} \}_{i=1,j_i = 1}^{i=m,j_i=r_i}$, who PGM is the optimal POVM of $\{ \lambda_{ij_i}, \ketbra{\psi_{ij_i}}{\psi_{ij_i}} \}_{i=1,j_i = 1}^{i=m,j_i=r_i}$. Since $\mathscr{R}\left(\widetilde{P}\right) = \widetilde{P}$ it follows that the $\sqrt{\lambda'_{ij}}\ket{\psi_{ij}}$ ($ = \tket{\psi'}{ij_i}$) vectors and the $\sqrt{\lambda_{ij}}\ket{\psi_{ij}}$ ($ = \tket{\psi}{ij_i}$) vectors are related by a block diagonal unitary transformation, given in equation \eqref{psi'}. But since the set $\{ \ket{\psi_{ij}} \}_{j=1}^{r_i}$ are linearly independent, it follows that $U'_D$ must be a diagonal matrix. This can only mean that both the ensembles $\{ \lambda'_{ij_i}, \ketbra{\psi_{ij_i}}{\psi_{ij_i}} \}_{i=1,j_i = 1}^{i=m,j_i=r_i}$ and $\{ \lambda_{ij_i}, \ketbra{\psi_{ij_i}}{\psi_{ij_i}} \}_{i=1,j_i = 1}^{i=m,j_i=r_i}$ are equal, as well. In the beginning of section \eqref{compareMEDP}, it was noted that $\sqrt{\lambda'_{ij_i}} \ket{\psi_{ij_i}}$ and $\sqrt{\lambda_{ij_i}} \ket{\psi_{ij_i}}$ are also related through $\lambda'_{ij_i} =\dfrac{ \left( \left( D' \right)^{(i \; i)}_{j_i j_i} \right)^2 \lambda_{ij_i} } { Tr\left(D' G' D'\right)}$, $\forall \; 1 \leq i \leq m, \; 1 \leq j_i \leq r_i$. But since $\lambda'_{ij_i} = \lambda_{ij}$ , $\forall \; 1 \leq i \leq m, \; 1 \leq j_i \leq r_i$, this implies that $D'$ is a multiple of the identity. This implies that $D' G' D' \mathcal{P}(r_1,r_2,\cdots,r_m)pto G'$ which implies that $ \sqrt{D' G' D'} = D' {G'}^\frac{1}{2} W' \mathcal{P}(r_1,r_2,\cdots,r_m)pto {G'}^\frac{1}{2}$. This implies that $W = \mathbb{1}_n$. $D'$ is the block diagonal matrix one gets by ``extracting'' the diagonal blocks of ${G'}^\frac{1}{2} W'$. Since $W' = \mathbb{1}_n$, $D'$ is the block diagonal matrix ``extracted'' from ${G'}^\frac{1}{2}$. Similarly, $D$ is the block diagonal matrix ``extracted'' from ${G}^\frac{1}{2}$. And since $D'$ is a multiple of identity and since $D'$ and $D$ are related by a unitary transformation, $D$ is also a multiple of the identity. This tells us that the diagonal blocks of $G^\frac{1}{2}$, i.e., the matrices $X^{(ii)}$, are positive definite diagonal matrices, with equal diagonals. Hence all eigenvalues of all the block diagonal matrices of $G^\frac{1}{2}$ are equal. \textbf{Sufficient Part:} If all eigenvalues of all the block diagonal matrices of $G^\frac{1}{2}$ are equal, then these diagonal blocks are diagonal matrices themselves. Let $D''$ be the matrix comprising of only the diagonal blocks of $G^\frac{1}{2}$. $D''$ is, thus, a multiple of the identity. Note that the the block-diagonal of $D'' {G'}^\frac{1}{2}$ is $ {D''}^2$. Hence we have found a block-diagonal positive definite matrix $D''$ such that the diagonal blocks of the positive square root of $D'' G D''$ is given by ${D''}^2$, which implies that we have solved the MED problem for the ensemble $\widetilde{P}$. Using $D''$, we can construct the vectors $\tket{\chi}{ij_i}$ from the vectors $\tket{\psi}{ij_i}$ using equation \eqref{chi} and then construct the states $q_i \sigma_i$ using equation \eqref{sigma} and equation \eqref{qi}. Since $D''$ is simply a multiple of the identity, it isn't difficult to see that $q_i \sigma_i = p_i \rho_i$, $\forall \; 1 \leq i \leq m$. 
This proves that $\mathscr{R}\left(R \right) = P$. Hence proved. \end{proof} \section{Solution For the MED problem} \label{Solution} The necessary and sufficient condition to solve the MED for a general LI ensemble as specified by \textbf{A} (on page \pageref{AA}) suggest a technique to solve the problem. In this section we give this technique without going into the theoretical details which justify the claim that it can be used effectively to obtain the optimal POVM for the MED of any ensemble $\widetilde{P} \in \mathcal{E}(r_1,r_2,\cdots,r_m)$. This is because this techhnique is a generalization of the technique given in \cite{Singal}, wherein all the relevant theoretical background has been developed for LI of pure state ensembles. The theoretical background for the mixed states ensemble case is a trivial generalization of that for the pure state ensemble case; it follows the same sequence of steps as that for the LI pure state ensemble case. In the following we explain what the technique is. We assume that we know the solution for the MED of some ensemble $\widetilde{P}_0=\{ p^{(0)}_i, \rho^{(0)}_i \}_{i=1}^{m} \in \mathcal{E}(r_1,r_2,\cdots,r_m)$ and want to obtain the solution for the MED of another ensemble $\widetilde{P}_1=\{ p^{(1)}_i, \rho^{(1)}_i \}_{i=1}^{m} \in \mathcal{E}(r_1,r_2,\cdots,r_m)$. Let $p^{(0)}_i \rho^{(0)}_{i} = \sum_{j=1}^{r_i} \ketbrat{\psi^{(0)}}{ij}{\psi^{(0)}}{ij}, \; \forall \; 1 \leq i \leq m,$ be a pure state decomposition for the ensemble $\widetilde{P}_0$. And let the gram matix corresponding to the set $\{ \tket{\psi^{(0)}}{ij_i} \}_{i=1,j_i =1 }^{m,r_i}$ be $G_0$. Similarly, let $p^{(1)}_i \rho^{(1)}_{i} = \sum_{j=1}^{r_i} \ketbrat{\psi^{(1)}}{ij}{\psi^{(1)}}{ij}, \; \forall \; 1 \leq i \leq m,$ be a pure state decomposition for the ensemble $\widetilde{P}_1$. And let the gram matix corresponding to the set $\{ \tket{\psi^{(1)}}{ij_i} \}_{i=1,j_i =1 }^{m,r_i}$ be $G_1$. Knowing the solution for the MED of $\widetilde{P}_0$ implies that we know a block diagonal matrix $D_0$, of the form as given by equation \eqref{DX}, such that the diagonal-block of positive square root of $D_0G_0D_0$ is $D_0^2$ (in accordance with the rotationally invariant necessary and sufficient conditions given by \textbf{A} on page \pageref{AA}). Let's rewrite equation \eqref{Ainv} in the following form: \begin{equation} \label{EOY} \left( D G^\frac{1}{2} W \right)^2 - DGD = 0 \end{equation} Let's define a linear function $G(t) \equiv (1-t) G_0 + t G_1$, where $t \in [0,1]$. So $G(0)=G_0$ and $G(1)=G_1$. Note that $G(t) > 0$ and $Tr \left( G(t) \right) =1, \; \forall \; 0 \leq t \leq 1$. Thus for any value of $t \in [0,1]$, $G(t)$ corresponds to the gram matrix of a pure state decomposition of some ensemble $\widetilde{P}_t \in \mathcal{E}(r_1,r_2,\cdots,r_m)$\footnote{Actually, $G(t)$, for each value of $t \in [0,1]$, corresponds to a family of unitarily equivalent ensembles, i.e., $G(t)$ corresponds to the set of ensembles $\{ U \widetilde{P}_t U^\dag \; | \; U \text{ varies over } U(n) \}$. The notation $U \widetilde{P}_t U^\dag$ is the same as has been used in equation \eqref{ens1}.}. Using equation \eqref{EOY} we drag the solution for $D$ from $t=0$ where the value is known to $t=1$ where the solution isn't known. This can be done in different ways. \subsection{Taylor Series Expansion and Analytic Continuation} \label{Taylor} A formal way of doing it is by using Taylor series expansion and analytic continuation. 
We start by assuming that the matrices $\left( DG^\frac{1}{2}W \right) (t), D(t)$ and $G(t)$ are analytic functions from $[0,1]$\footnote{$G(t)$ is the function mentioned above; it is linear in $t$ and hence is analytic in $t$. We will not provide for the proof of the analytic dependence of $\left( DG^\frac{1}{2}W \right) (t)$ or $ D(t)$ here since a detailed proof the same is provided in \cite{Singal} for the pure state ensemble case which can be trivially generalized to the mixed state case.}. $D(t)$ take the form \begin{equation} \label{DT} D(t) = \begin{pmatrix} {Z^{(11)(t)}} & 0 & \cdots & 0 \\ 0 & {Z^{(22)(t)}} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & {Z^{(mm)(t)}} \end{pmatrix}, \end{equation} and $\left( DG^\frac{1}{2} W \right) (t)$ takes the form \begin{equation} \label{DGWT} \left( DG^{\frac{1}{2}} W \right) (t) = \begin{pmatrix} \left( Z^{(11)} (t) \right)^2 & Z^{(12)}(t) & \cdots & Z^{(1m)}(t)\\ Z^{(21)}(t) & \left( Z^{(22)} (t) \right)^2 & \cdots & Z^{(2m)} (t)\\ \vdots & \vdots & \ddots & \vdots\\ Z^{(m1)}(t) & Z^{(m2)}(t) & \cdots & \left( Z^{(mm)}(t) \right)^2 \end{pmatrix}, \end{equation} where \begin{equation} \label{ZijT} Z^{(ij)}(t) = \begin{pmatrix} r^{(ij)}_{11}(t) + i c^{(ij)}_{11}(t) & r^{(ij)}_{12}(t) + i c^{(ij)}_{12}(t) & \cdots & r^{(ij)}_{1r_j}(t) + i c^{(ij)}_{1r_j}(t) \\ r^{(ij)}_{21}(t) + i c^{(ij)}_{21}(t) & r^{(ij)}_{22}(t) + i c^{(ij)}_{22}(t) & \cdots & r^{(ij)}_{2r_j}(t) + i c^{(ij)}_{2r_j}(t) \\ \vdots & \vdots & \ddots & \vdots \\ r^{(ij)}_{r_i 1}(t) + i c^{(ij)}_{r_i1}(t) & r^{(ij)}_{r_i2}(t) + i c^{(ij)}_{r_i2}(t) & \cdots & r^{(ij)}_{r_ir_j}(t) + i c^{(ij)}_{r_ir_j}(t) \\ \end{pmatrix}, \end{equation} i.e, $Z^{(ij)}(t)$ are $r_i \times r_j$ matrices. Also, the hermiticity of $\left( DG^\frac{1}{2} W \right) (t)$ requires that $c^{(ii)}_{jk}(t) = - c^{(ii)}_{kj}(t) , \; 1 \leq i \leq m, \; 1 \leq j, k \leq r_i$. With the constraints on $c^{(ii)}_{jk}(t)$ in place, $r^{(il)}_{j_ik_l}$ and $c^{(il)}_{j_ik_l}$ are $n^2$ (dependent) variables. In the following we show how to obtain the Taylor series expansion of these variables with respect to the independent variable $t$. Note that $Z^{(ii)}$ are equal to $X^{(ii)}$ and $Z^{(ij)}(t)$ are equal to $ \left( X^({ii}) \right)^{-1} X^{(ij)}$ forall $1 \leq i \neq j \leq m$, where $X^{(ij)}$ are defined in equation \eqref{part1}. Taking the total derivative of both sides of equation \eqref{EOY} with respect to $t$ and set $t= 0$, we get $n^2$ coupled linear equations which can be solved for the unknowns $\dfrac{d r^{(i l)}_{j_i k_l}}{dt} |_{t=0}$ and $\dfrac{d c^{(i l)}_{j_i k_l}}{dt} |_{t=0}$, $\forall \; 1 \leq i, l \leq m,$ $1 \leq j_i \leq r_i$ and $1 \leq k_l \leq r_l$. Again taking the second order total derivative of both sides of equation \eqref{EOY} with respect to $t$ and setting $t=0$, we get $n^2$ coupled linear equations which can be solved for the unknowns $\dfrac{d^2 r^{(i l)}_{j_i k_l}}{dt^2} |_{t=0}$ and $\dfrac{d^2 c^{(i l)}_{j_i k_l}}{dt^2} |_{t=0}$, $\forall \; 1 \leq i, l \leq m,$ $1 \leq j_i \leq r_i$ and $1 \leq k_l \leq r_l$. In this way we have obtain the $K$-th order derivatives of the $r^{il}_{j_ik_l}$ and $c^{il}_{j_ik_l}$ with respect to $t$ at $t=0$. Using these derivatives we can taylor expand about $r^{il}_{j_ik_l}(t)$ and $c^{il}_{j_ik_l}(t)$ about $t=0$. Our goal is to find a solution for the values of $r^{il}_{j_ik_l}(1)$ and $c^{il}_{j_ik_l}(1)$. 
It is reasonable to divide the interval $[0,1]$ into a certain number of intervales, say $L$ intervals, so that one taylor expands within every interval and then analytically continues from the starting point of each interval to reach $t=1$ finally. The following statements are made on the basis of results in \cite{Singal}: \begin{itemize} \item $L\equiv \lceil|| G(0) - G(1) || n^{2}\rceil$ gives a reasonable number of intervals for very low error margin. Also beyond a certain order to which Taylor series are expanded the error margin doesn't decrease appreciably; neither does the error margin increase appreciably as $n$ increases while the order to which Taylor series is expanded remained constant. \item For ensembles $\widetilde{P}_1 \in \mathcal{E}(r_1,r_2,\cdots,r_m)$, to which the gram matrix $G_1$ corresponds, one can find the starting point gram matrix $G_0$ close enough to $G_1$ such that $\lceil|| G(0) - G(1) || n^{2}\rceil=1$. This implies that the interval $[0,1]$ need not be divded into subintervals for the purpose of analytic continuation. For these cases the computational complexity of such process is $n^6$. In cases where one isn't able to obtain the starting point close enough, the computional complexity increases to $n^8$, as expected. This is because the number of intervals required to obtain the solution increases as $n^2$ with $n$. \end{itemize} \subsection{Newton-Raphson Method} \label{Newton} Another technique to obtain the solution for the the MED of optimal POVM for a LI ensemble is to use Newton's method based on equation \eqref{EOY}. As starting point, we substitute the solutions for the MED of $G_0$ viz., $D_0$ and $D_0G_0^\frac{1}{2} W_0$, whose values we know, in equation \eqref{EOY}, along with $G_1$. The aim is to change the values of $D$ and $DG^\frac{1}{2}W$ so that the LHS of the equation converges to $0$\footnote{Despite the fact that we have no formal proof that Newton-Raphson method will necessarily converge to the desired solution for equation \eqref{EOY}, over 100,000 examples for various values of $n$ and $r_1, r_2, \cdots, r_m$ have been sampled, for which the method works. An undesirable solution would require that the LHS of equation \eqref{EOY} does converge to $0$ but that $DG^\frac{1}{2}W$ isn't positive definite. Heuristically, we can expect $D$ and $D G^\frac{1}{2} W$ to converge to the desirable solution (i.e., the solution such that $D_1G_1^\frac{1}{2}W_1 >0$) because our starting point has that $D_0 G_0^\frac{1}{2} W_0>0$ and is, hence, likely to be ``closer'' to our starting point; the metric being given by the Hilbert-Schmidt norm.}. The sequence of steps are the same as laid out in \cite{Singal}. This method is much simpler to implement compared to the Taylor series example and has a computational complexity of $n^6$. \subsection{Barrier Type Interior Point Method (SDP)} \label{SDP} In \cite{Singal} we showed how the barrier-type interior point method has a computational complexity of $n^8$. We will summarize in brief how this barrier-type interior point method works. This is an iterative algorithm just like the Newton-Raphson method. In fact, the barrier type interior point method comprises of implementing the Newton-Raphson method to obtain the stationary point of the quantity being minimized which is the the quantity $Tr \left( Z \right) - \sum_{i=1}^{m}w_i^{(0)} Log \left( Det \left( Z - p_i \rho_i \right) \right)$. 
The weights $w_i^{(0)}$ have a very small value, so that the objective function varies by very little from the function $Tr \left( Z \right)$. The reason the term $\sum_{i=1}^{m}w_i^{(0)} Log \left( Det \left( Z - p_i \rho_i \right) \right)$ is added to the function $Tr \left( Z \right) $ is to ensure that if we start from a feasible point (a point where $Z^{(0)} - p_i \rho_i \geq 0, \; \forall \; 1 \leq i \leq m$), our second iterate $Z^{(1)}$ will necessarily remain in the feasible region. This happens because the term $\sum_{i=1}^{m}w_i^{(0)} Log \left( Det \left( Z - p_i \rho_i \right) \right)$ blows up to infinity if any of the operators $ Z - p_i \rho_i$ approaches the boundary of the positive convex set, i.e., if the eigenvalue(s) of any one of these operators approaches 0; the directional derivative would be such that the next iterate for $Z$ would remain in the feasible region. Computing the directional derivative involes computing an $n^2 \times n^2$ square matrix whose computational cost is $n^8$. Thus the computational cost of the barrier-type interior point method is $n^8$. \textbf{ The Taylor series method and Newton-Raphson method mentioned in sections \eqref{Taylor} and \eqref{Newton} have lower computational complexity and are simpler to implement thus giving an edge over the SDP method mentioned above. } \section{Conclusion} We look back over what has been done in this paper: first, the necessary and sufficient conditions for obtaining the optimal POVM for the MED for an ensembles of linearly independent states was simplified. Using the simplified conditions we proved that there exists a bijective function $\mathscr{R}$ which when acted upon any such ensemble gives another ensemble whose PGM is the optimal POVM of for the MED of the pre-image. We also obtained a closed form expression for $\mathscr{R}^{-1}$. This is a generalization of a similar result that was hitherto only proved for linearly independent pure state ensemble in \cite{Bela, Mas, Carlos}. The result also gives a rotationally invariant form of representing the necessary and sufficient conditions for the MED of an ensemble of LI states. This rotationally invariant form for the necessary and sufficient conditions of the optimal POVM is employed for two purposes: 1.) we use it to show that for every LI mixed state ensemble there exists a corresponding pure state decomposition so that the optimal POVM for the MED of the latter is a pure state decomposition for the MED of the former. This is then employed to show under what conditions the optimal POVM of a mixed state ensemble is given by its own PGM. 2.) We employ this rotationally invariant form of the necessary and sufficient conditions in a technique which gives us the optimal POVM for an ensemble. Our technique is compared to a standard SDP technique; that of a barrier-type interior point method. It is found that along with the advantage of our technique being simpler to implement, our technique has a lower computational complexity compared to the barrier-type IPM; our technique has a computational complexity of $n^6$ whereas the computational complexity of the latter SDP technique is $n^8$, which gives our technique an edge over the SDP technique. \end{document}
\begin{document} \begin{abstract} In the literature there are two different notions of lovely pairs of a theory T, according to whether T is simple or geometric. We introduce a notion of lovely pairs for an independence relation, which generalizes both the simple and the geometric case, and show how the main theorems for those two cases extend to our general notion. \end{abstract} \keywords{Lovely pair, dense pair, independence relation} \subjclass[2010]{ Primary 03C64; Secondary 12J15, 54E52. } \maketitle {\small \tableofcontents } \makeatletter \renewcommand\@makefnmark {\normalfont(\@textsuperscript{\normalfont\@thefnmark})} \renewcommand\@makefntext[1] {\noindent\makebox[1.8em][r]{\@makefnmark\ }#1} \makeatother \section{Introduction} Let $T$ be a complete first-order theory and $\mathfrak C$ be a monster model for~$T$. In the literature there are at least different notions of lovely pairs, according to whether $T$ is simple \cite{poizat83, BPV, vassiliev05} or geometric \cite{macintyre, vdd-dense, boxall}. Another class of lovely pairs, generalizing the geometric case, is given by dense pairs of theories with an existential matroid (see \cite{fornasiero-matroids} for the case when $T$ expands a field). The study of lovely pairs started with~\cite{robinson59}, where dense pairs of real closed fields and pairs of algebraically closed fields were studied, and has continued, under various names, until present day: \cite{macintyre} studied lovely pairs of geometric theories and \cite{vdd-dense} lovely pairs of o-minimal structure expanding a group (they called them ``dense pairs'', because when $T$ is an o-minimal theory expanding a group, a lovely pair for $T$ is a pair $A \prec B \models T$, with $A$ dense subset of~$B$); on the other hand, \cite{poizat83} studied beautiful pairs of stable structures, which were generalized to lovely pairs of simple structures in~\cite{BPV}. Many results and techniques are similar for all the above classes of theories. In this article we introduce a unified approach, via a general notion, the $\ind$-lovely pairs (Definition~\ref{def:lovely}), where $\ind$ is an independence relation on~$\mathfrak C$ in the sense of~\cite{adler}, and show that for suitable values of $\ind$ we get the various special cases we recorded above: \begin{enumerate} \item if $\mathfrak C$ is simple and $\ind[f]$ is Shelah's forking, then a $\ind[f]$-lovely pair is a lovely pair of a simple theory in the sense of~\cite{BPV}; \item if $\mathfrak C$ is geometric and $\ind[$\acl$]\,$ is the independence relation induced by~$\acl$, then a $\ind[$\acl$]$-lovely pair is a lovely pair of a geometric theory in the sense of \cite{boxall}; \item if $\mathfrak C$ has an existential matroid $\mat$ and expands a field and $\ind[$\mat$]$ is the independence relation induced by~$\mat$, then a $\ind[$\mat$]$-lovely pair is a dense pair in the sense of~\cite{fornasiero-matroids}. \end{enumerate} It may happen that $\mathfrak C$ has more than one independence relation: in this case, to different independence relations correspond different notions of lovely pairs. For instance, $\mathfrak C$~can be both stable and geometric, but $\ind[f] \neq \ind[$\mat$]$: in this case, a lovely pair of $T$ as a geometric theory will be different from a lovely pair of $T$ as a simple theory: see Example~\ref{ex:triple-love}. 
We generalize some of the main results for lovely pairs from \cite{vdd-dense, BPV, boxall} to $\ind$-lovely pairs: see \S\ref{subsec:existence}, \ref{subsec:uniqueness}, \ref{sec:model-complete}, \ref{sec:tuples}. Moreover, we show how lovely pairs inherit ``stability'' properties from~$T$: that is, if $T$ is stable, or simple, or NIP, then lovely pairs of $T$ have the same property, see \S\ref{sec:stability}. Since $\ind$-lovely pairs depend in an essential way on the independence relation~$\ind$, we need a more detailed study of independence relations, which we do in \S\ref{sec:preliminary}. Moreover, different independence relations give different kinds of lovely pairs: thus, it seems worthwhile to produce new independence relations; a technique to produce new independence relations is explained in \S\ref{sec:new-independence}. Finally, a $\ind$-lovely pair has at least one independence relation $\ind_P$ inherited from~$\mathfrak C$ (see \S\ref{sec:lovely-independence}): we hope that, among other things, $\ind_P$ will prove useful in studying the original theory~$T$. A notion of $\ind$-lovely pairs has been also proposed by I. Ben Yaacov, using a different notion of ``independence relation'' than the one employed here. \section[Preliminaries]{Preliminaries on independence relations} \label{sec:preliminary} Let $\mathfrak C$ be a monster model of some complete theory~$T$; ``\intro{small}'' will mean ``of cardinality smaller than the monstrosity of $\mathfrak C$''. Let $\ind$ be a symmetric independence relation on $\mathfrak C$, in the sense of \cite{adler}; so $\ind$ is a ternary relation on small subsets of $\mathfrak C$ satisfying, for every small $A, B, C, D\subseteq \mathfrak C$, the following conditions. \begin{description} \item[Invariance] for every $\sigma\in \aut(\mathfrak C)$, $A\ind_BC\mathbb{R}ightarrow \sigma(A)\ind_{\sigma(B)}\sigma(C)$. \item[Normality] $A \ind_C B \mathbb{R}ightarrow A C \ind_C B$. \item[Symmetry] $A\ind_BC\mathbb{R}ightarrow C\ind_BA$. \item[Left Transitivity] (which we will simply call \textbf{Transitivity}) assuming $B\subseteq C\subseteq D$, $A\ind_B D$ iff $A\ind_BC$ and $A\ind_CD$. \footnote{In Adler's terminology, this axiom also includes Monotonicity and Base Monotonicity.} \item[Finite character] $A\ind_BC$ iff $A_0\ind_BC$ for all finite $A_0\subseteq A$. \item[Extension] there is some $A^\prime\mathrm{eq}uiv_BA$ such that $A^\prime\ind_BC$. \item[Local character] there is some small $\kappa_0$ (depending only on~$\ind$), such that there is some $C_0\subseteq C$ with $\card{C_0} < \kappa_0 + \card{A}^+$ and $A\ind_{C_0} C$. \end{description} Following \cite{adler}, we say that $\ind$ satisfies \intro{strong finite character} if, whenever $A \notind_B C$, there exists a formula $\phi(\bar x, \bar y, \bar z)$, $\bar a \in A$, $\bar b \in B$, and $\bar c \in C$, such that $\mathfrak C \models \phi(\bar x, \bar y, \bar z)$, and, for every $\bar a' \in \mathfrak C$, if $\mathfrak C \models \phi(\bar a', \bar b, \bar c)$, then $\bar a' \notind_B C$. We are \emph{not} assuming that $\mathfrak C$ eliminates imaginaries. For non-small $A,B,C\subseteq\mathfrak C$, $A\ind_BC$ is defined to mean the following: \begin{sentence} For all small $A^\prime\subseteq A$, $B^\prime\subseteq B$, and small $C^\prime\subseteq C$, there is some small $B''$ such that $B' \subseteq B'' \subseteq B$, and $A^\prime\ind_{B''}C^\prime$. \end{sentence} For every $A \subset \mathfrak C$ and $c \in \mathfrak C$, we say that $c \in \mat(A)$ if $c \ind_A c$. 
By ``tuple'' we will mean a tuple of small length. \begin{remark}\label{rem:local-char} In the axioms of an independence relation, we can substitute the Local Character Axiom with the following one: \begin{sentence}[(LC')] There is some $\kappa_0$ such that, for every $\bar a$ finite tuple in $\mathfrak C$ and $B$ small subset of~$\mathfrak C$, there exists $B_0 \subset B$, such that $\card{B_0} < \kappa_0$ and $\bar a \ind_{B_0} B$. \end{sentence} Moreover, the $\kappa_0$ in (LC') and the $\kappa_0$ in the Local Character Axiom are the same. \end{remark} \begin{proof} Let $A$ and $B$ be small subsets of~$\mathfrak C$. For every $\bar a$ finite tuple in $A$, let $B_{\bar a} \subset B$, such that $\card{B_{\bar a}} < \kappa_0$ and $\bar a \ind_{B_{\bar a}} B$. Define $C := \bigcup \set{B_{\bar a}: \bar a \subset A \text{ finite}} \subseteq B$. Then, by Finite Character, $A \ind_{C} B$, and $\card{C} < \kappa_0 + \card{A}^+$. \end{proof} \begin{remark} Let $\bar a$ be a small tuple and $B$ and $C$ be small subsets of~$\mathfrak C$. Assume that $\tp(\bar a / BC)$ is finitely satisfied in~$C$. Then, $\bar a \ind_C B$. \end{remark} \begin{proof} The assumptions imply that $\tp(\bar a/BC)$ does not fork in the sense of Shelah's over~$C$. By \cite[Remark 1.20]{adler}, we are done. \end{proof} \begin{question} Is there a more direct proof of the above remark, that does not use Shelah's forking? \end{question} \begin{definition} $\kappa_0$ is the smallest regular cardinal $\kappa$, such that, for every finite tuple $\bar a$ and every small set $B$, there exists $B_0 \subseteq B$, with $\card{B_0} < \kappa$ and $\bar a \ind_{B_0} B$. \end{definition} \begin{remark} $\kappa_0 \leq \abs T^+$; hence, for every small sets $A$ and~$B$, there exists $B_0 \subseteq B$, such that $\card{B_0} \leq \card T + \card A$ and $A \ind_{B_0} B$. \end{remark} \begin{proof} Given a small tuple $\bar a$ and a small set $B$, let $C$ be a subset of $\bar a$ such that $\tp(\bar a/B C)$ is finitely satisfied in~$C$, and $\card{C} \leq \card{B} + \card T$ (\cite[Remark 2.4]{adler}). Hence, $\bar a \ind_C B$. \end{proof} \begin{definition} Let $A$ and $B$ be small subsets of $\mathfrak C$, with $A \subseteq B$. Let $p$ be a type over $A$ and $q$ be a type over~$B$. We say that $q$ is a \intro{nonforking extension} of~$p$, and write $q \sqsubseteq p$, if $q$ extends $p$ and $q \ind_A B$. We say that $q$ is a \intro{forking extension} of~$p$, and write $q \mathrel{\sqsubseteq_{\mkern-15mu{\not}\mkern15mu}} p$, if $q$ extends $p$ and $q \notind_A B$. \end{definition} In the next lemma we use the fact that $\kappa_0$ is regular. \begin{lemma}\label{lem:forking-chain} Let $A \subset \mathfrak C$ be small and $p \in S_n(A)$. There is no sequence $p = p_0 \mathrel{\sqsubseteq_{\mkern-15mu{\not}\mkern15mu}} p_1 \mathrel{\sqsubseteq_{\mkern-15mu{\not}\mkern15mu}} \dots \mathrel{\sqsubseteq_{\mkern-15mu{\not}\mkern15mu}} p_i \dots$, indexed by $\kappa_0$, of forking extensions of $p$. \end{lemma} \begin{proof} Assume, for contradiction, that such a sequence exists. For every $i < \kappa_0$, let $B_i$ be the domain of $p_i$. Let $B := \bigcup_{i < \kappa_0} B_i$, and $D \subset B$ such that $\card{D} < \kappa_0$ and $p \ind_D B$ ($D$ exists by definition of $\kappa_0$). Since $\kappa_0$ is regular, $D \subseteq B_i$ for some $i < \kappa_0$, and therefore $p \ind_{B_i} B$, and in particular $p \ind_{B_i} B_{i+1}$, absurd. 
\end{proof} \begin{lemma} All axioms for $\ind$, except extension, are valid also for large subsets of~$\mathfrak C$. The Local Character Axiom holds with the same $\kappa_0$. \end{lemma} \begin{proof} Let us prove Local Character for non-small sets. Let $A$ and $B$ be subsets of~$\mathfrak C$. Assume, for contradiction, that, for every $C \subset B$, if $\card{C} < \kappa_0$, then $A \notind C B$. First, we consider the case when $A = \bar a$ is a finite tuple. Define inductively a family of sets $(B_i: i < \kappa_0)$, such that: \begin{enumerate} \item each $B_i$ is a subset of $B$, of cardinality strictly less than $\kappa_0$; \item $(B_i: i < \kappa_0)$ is an increasing family of sets; \item $\bar a \notind_{\bigcup_{j < i} B_j} B_{i}$. \end{enumerate} In fact, choose $B_0$ a finite subsets of~$B$, such that $\bar a \notind_{\emptyset} B_0$; then, choose $B_i$ inductively, satisfying the given conditions. Finally, let $p_i := \tp(\bar a / B_i)$. Then, $(p_i: i < \kappa_0)$ is chain of forking extensions of length~$\kappa_0$, contradicting Lemma~\ref{lem:forking-chain}. If $A$ is infinite, proceed as in the proof of Remark~\ref{rem:local-char}. \end{proof} \begin{remark}\label{rem:localization} Let $P$ be a (possibly, large) subsets of~$\mathfrak C$. Let $\ind'$ the following relation on subsets of~$\mathfrak C$: $A \ind'_C B$ iff $A \ind_{C P} B$. Then, $\ind'$ satisfies Symmetry, Monotonicity, Base Monotonicity, Transitivity, Normality, Finite Character and Local Character (with the same constant $\kappa_0$). If moreover $P$ is $\aut(\mathfrak C)$-invariant (that is, $f(P) = P$ for every $f \in \aut(\mathfrak C)$), then $\ind'$ also satisfies Invariance. \end{remark} \begin{proof} Let us prove Local Character. Let $A$ and $B$ be small subsets of~$\mathfrak C$. Let $B_0 \subset B$ and $P_0 \subset P$, such that $\card{B_0 \cup P_0} < \kappa_0 + \card{A}^+$ and $A \ind_{B_0 P_0} B P$. Then, $A \ind_{B_0 P} B$, and therefore $A \ind'_{B_0} B$. \end{proof} \begin{example} Let $P \subset \mathfrak C$ be definable without parameters, and $\ind'$ be as in the above remark. It is not true in general that $\ind'$ is an independence relation: more precisely, it might not satisfy the Extension axiom. For instance, let $\pair{G, +}$ be a monster model of the theory of $\pair{\mathbb{Q}, +}$, and let $\mathfrak C$ be the 2-sorted structure $G \sqcup G$, with the group structure on the first sort, and the action by translation of the first sort on the second. Notice that $\mathfrak C$ is $\omega$-stable; let $\ind$ be Shelah's forking on~$\mathfrak C$, and $P$ be the first sort. Choose $a$ and $b$ in the second sort arbitrarily. Then, there is no $a' \equiv a$ such that $a' \ind' b$, because $\mathfrak C = \acl(Pb)$, but $\mathfrak C \neq \acl(P)$. \end{example} We will always consider models of $T$ as elementary substructures of $\mathfrak C$: therefore, given $A \subset M \models T$, we can talk about $\mat(A)$ (a subset of $\mathfrak C$, not of $M$!), and for $p \in S(M)$ we can define $p \ind_A M$. We say that $\ind$ is \intro{superior} if $\kappa_0 = \omega$. \begin{remark} $\ind$ is superior iff, for every finite set $A \subset \mathfrak C$ and every set $C \subseteq \mathfrak C$, there exists a finite subset $C_0 \subset C$ such that $A \ind_{C_0} C$. Moreover, $\ind$ is superior iff $\sqsubseteq$ is a well-founded partial ordering, and therefore one can define a corresponding rank $\U^{\ind}$ for types (as in \cite[\S5.1]{wagner} for supersimple theories). 
\end{remark} \begin{example}\label{ex:independence-strict} \begin{enumerate} \item If $\mathfrak C$ is simple, we can take $\ind$ equal to Shelah's forking $\ind[f]$. $\kappa_0 \leq \card{T}^+$, and $\ind[f]$ is superior iff $\mathfrak C$ is supersimple, and the rank induced by $\ind[f]$ is the Lascar rank~$\SU$. \item If $\mathfrak C$ is rosy, we can take $\ind$ equal to \th-forking $\ind[\th]$. $\ind[\th]$ is superior iff $\mathfrak C$ is superrosy. \item If $\mathfrak C$ is geometric, we can take $\ind$ equal to $\ind[$\acl$]$, the independence relation induced by the algebraic closure. $\ind$ is then superior of rank~1, and coincides with real \th-forking. In all three cases, $\mat$ is the algebraic closure (that is, $\ind$ is \intro{strict}). \end{enumerate} \end{example} \begin{example}\label{ex:independence-matroid} If $\mat$ is an existential matroid on $\mathfrak C$, we can take $\ind$ equal to the induced independence relation $\ind[$\mat$]$: see \cite{fornasiero-matroids} for details; $\ind[$\mat$]$ is superior, of rank~1. \end{example} \begin{remark} If $\mathfrak C$ is supersimple then $\ind$ is superior, and $\U^{\ind} \leq \SU$. \end{remark} \begin{proof} Let $a \subset \mathfrak C$ be a finite tuple and $B \subset \mathfrak C$. Let $B_0 \subset B$ be finite such that $a \ind[f]_{B_0} B$. Then, by \cite[Remark~1.20]{adler}, $a \ind_{B_0} B$. \end{proof} Let $\mathcal L$ be the language of $T$. Notice that the same structure $\mathfrak C$ could have more than one independence relation. The following example is taken from~\cite[Example~1.33]{adler}. \begin{example}\label{ex:2} Let $\mathcal L = \set{E(x,y)}$ and $\mathfrak C$ be the monster model where $E$ is an equivalence relation with infinitely many equivalence classes, all infinite. Then $\mathfrak C$ is $\omega$-stable, of Morley rank 2, and geometric. Hence, $\mathfrak C$ has the following 2 independence relation: $\ind[f]$ and $\ind[$\acl$]$. More explicitly: $A \ind[$\acl$]_B C$ iff $A \cap C \subseteq B$, and $A \ind[f]_B C$ if $\acleq(AB) \cap \acleq(B C) = \acleq(B)$. Both independence relations are superior, strict, and satisfy Strong Finite Character. However, these two independence relations are different, and the rank of $\mathfrak C$ is different according to the two different ranks: $\U^{\acl}(\mathfrak C) = 1$ (the rank induced by $\acl$), while $\SU(\mathfrak C) = 2$. $\mathfrak C$ also has a third independence relation, which we will now describe. For every $X \subset \mathfrak C$, let $\mat_E(X)$ be the set of elements which are equivalent to some element in~$X$. Define $A \ind[$E$]_C B$ if $\mat_E(AC) \cap \mat_E(BC) = \mat_E(C)$. $\ind[$E$]$ is superior and satisfy Strong Finite Character, but it is not strict. The rank of $\mathfrak C$ according to $\ind[$E$]$ is~$1$. \end{example} \begin{definition} Let $A$ and $B$ be small subsets of~$\mathfrak C$ and $\pi(\bar x)$ be a partial type with parameters from~$A B$. We say that $\pi$ does not fork over $B$ (and write $\pi \ind_B A$) if there exists a complete type $q(\bar x) \in S_n(A B)$ containing $\pi$ and such that $q \ind_B A$ (or, equivalently, if there exists $\bar c \in \mathfrak C^n$ realization of $\pi(\bar x)$ such that $\bar c \ind_B A$). \end{definition} The interesting cases are when $\pi$ is either a single formula or a complete type over $A B$. 
\begin{remark} The above definition does not depend on the choice of the set of parameters~$A$: that is, given some other subset $A'$ of $\mathfrak C$ such that the parameters of~$\pi$ are contained in $A' \cup B$, then there exists $\bar c \in \mathfrak C^n$ satisfying $\pi$ and such that $\bar c \ind_B A$ iff there exists $\bar c \in \mathfrak C^n$ satisfying $\pi$ and such that $\bar c \ind_B A'$. \end{remark} \begin{proof} Assume that $\pi \ind_B A$. Let $\bar c \in \mathfrak C^n$ such that $\bar c$ satisfies $\pi$ and $\bar c \ind_B A$. Let $\bar c' \in \mathfrak C^n$ such that $\bar c' \equiv_{A B} \bar c$ and $\bar c' \ind_{AB} A'$. Therefore, $\bar c'$ realizes $\pi$ and $\bar c' \ind_B A$, and thus $\bar c' \ind_ B A'$. \end{proof} \subsection{The closure operator cl} For this subsection, $A$, $A'$, $B$, and $C$ will always be subsets of $\mathfrak C$, and $a$, $a'$ be elements of~$\mathfrak C$. \begin{remark}[Invariance]\label{rem:mat-invariance} Assume that $a \in \mat(B)$ and $a' B' \mathrm{eq}uiv a B$; then $a' \in \mat(B')$. \end{remark} \begin{remark}\label{rem:mat-ind} If $a \in \mat(A)$ then $a \ind_A B$. \end{remark} \begin{proof} Suppose $a\in \mat(A)$. By definition, $a \ind_A a$. Moreover, $a \ind_{A a} B$. Thus, by transitivity, $a \ind_A B$. \end{proof} \begin{remark}[Monotonicity]\label{rem:mat-monotone} If $a \in \mat(B)$, then $a \in \mat(B C )$. \end{remark} \begin{proof} By Remark~\ref{rem:mat-ind}, $a \ind_B B C a$. Thus, $a \ind_{B C} a$. \end{proof} \begin{remark} If $A \subseteq \mat(B)$, then $A \ind_B A$. \end{remark} \begin{proof} W.l.o.g\mbox{.}\xspace, $A = \pair{a_1, \dotsc, a_n}$ is a finite tuple. By repeated application of Remarks~\ref{rem:mat-ind} and~\ref{rem:mat-monotone}, we have $B A \ind_B B a_1$, $B A \ind_{B a_1} B a_1 a_2$, \dots. Hence, by transitivity, $A \ind_B A$. \end{proof} \begin{remark} If $A \subseteq \mat(B)$, then $A \ind_B C$. \end{remark} \begin{proof} By the above remark, $A \ind_B A$. Moreover, $A \ind_{AB} C$, and therefore, by transitivity, $A \ind_B C$. \end{proof} \begin{remark}\label{rem:mat-idempotent} $A \ind_B C$ iff $A \ind_{\mat(B)} C$. Therefore, $\ind_B$ and $\U^{\ind}(\cdot / B)$ depend only on $\mat(B)$, and not on~$B$. \end{remark} \begin{proposition} $\mat$ is an invariant closure operator. If moreover $\ind$ is superior, then $\mat$ is finitary. \end{proposition} \begin{proof} The fact that $\mat$ is invariant is Remark~\ref{rem:mat-invariance}. To prove that $\mat$ is a closure operator, we have to show: \begin{description} \item[Extension] $A \subseteq \mat (A)$.\\ This is clear. \item[Monotonicity] $A \subseteq B$ implies $\mat(A) \subseteq \mat(B)$.\\ This follows from Remark~\ref{rem:mat-monotone}. \item[Idempotency] $\mat(\mat(A)) = \mat(A)$.\\ This follows from Remark~\ref{rem:mat-idempotent}. \end{description} Finally, we have to prove that if $\ind$ is superior, then $\mat$ is finitary. Let $a \in \mat(B)$, that is $a \ind_B a$. Since $\ind$ is superior, there exists $B_0 \subseteq B$ finite, such that $a \ind_{B_0} B$. \end{proof} \begin{conjecture} $\mat$ is always finitary. Moreover, $\mat$ is definable: that is, for every $A \subseteq \mathfrak C$ small, the set $\mat(A)$ is ord-definable over~$A$. \end{conjecture} \begin{remark} If $A \ind_B C$, then $\mat(A B) \cap \mat(B C) = \mat(B)$ \textup(but the converse is not true in general). 
\end{remark} \section{Lovely pairs} Let $P$ be a new unary predicate symbol, and $T^2$ be the theory of all possible expansions of $T$ to the language $\mathcal L^2 := \mathcal L \cup \set P$. We will use a superscript $1$ to denote model-theoretic notions for~$\mathcal L$, and a superscript $2$ to denote those notions for $\mathcal L^2$: for instance, we will write $a \mathrm{eq}uiv^1_C a'$ if the $\mathcal L$-type of $a$ and $a'$ over $C$ is the same, or $S^2_n(A)$ to denote the set of complete $\mathcal L^2$-types in $n$ variables over~$A$. Let $M_P = \pair{M, P(M)} \models T^2$. Given $A \subseteq M$, we will denote $P(A) := P(M) \cap A$. We will write $\monster_P := \pair{\mathfrak C, P}$ for a monster model of $T^2$. \begin{definition}\label{def:lovely} Fix some small cardinal $\kappa > \max(\kappa_0, \card{T})$; we will say that $A$ is \intro{very small} if it is of cardinality much smaller than~$\kappa$. We say that $M_P$ is a $\ind$-lovely pair for $T$ (or simply a lovely pair if $\ind$ and $T$ are clear from the context) if it satisfies the following conditions: \begin{enumerate} \item $M$ is a small model of~$T$. \item (Density property) Let $A \subset M$ be very small, and $q \in S^1_1(A)$ be a complete $\mathcal L$-1-type over $A$. Assume that $q \ind_{P(A)} A$. Then, $q$ is realized in $P(M)$. \item (Extension property) Let $A \subset M$ be very small and $q \in S^1_1(A)$. Then, there exists $b \in M$ realizing $q$ such that $b \ind_A P(M)$. \end{enumerate} \end{definition} The above definition is the natural extension of Definition~3.1 in~\cite{BPV} to the case when $\ind \neq \ind[f]$. If we need to specify the cardinal~$\kappa$, we talk about $\kappa$-lovely pairs. \begin{remark} If $M_P$ satisfies the Extension property, then $M$ is $\kappa$-saturated (as an $\mathcal L$-structure). \end{remark} \begin{definition} Let $X \subseteq M \models T$. The closure of $X$ in $M$ is \[ \mat^M(X) \coloneqq M \cap \mat(X). \] $X$ is \intro{closed} in $M$ if $\mat^M(X) = X$. \end{definition} \begin{remark}\label{rem:density-substructure} If $M_P$ satisfies the Density property, then \begin{enumerate} \item $P(M) \preceq M$; \item $P(M)$ is closed in $M$; \item $P(M)$ is $\kappa$-saturated (as an $\mathcal L$-structure). \end{enumerate} \end{remark} \begin{proof} Let $\bar p \in P(M)^n$ and $\phi(x, \bar y)$ be some $\mathcal L$-formula. Assume that there exists $a \in M$ such that $M \models \phi(a, \bar p)$. Let $q := \tp^1(c/\bar p)$. Since $P(\bar p) = \bar p$, we have $q \ind_{P(\bar p)} \bar p$, and therefore there exists $a' \in P(M)$ such that $M \models \phi(a', \bar p)$, proving that $P(M) \preceq M$. Assume that $a \ind_{P(M)} a$. Let $P_0 \subset P(M)$ be very small, such that $a \ind_{P_0} a$. Let $q := \tp^1(a/ P_0 a)$; notice that $q \ind_{P_0} P_0 a$; therefore, by the Density property, $q$ is realized in $P(M)$, that is $a \in P(M)$. Assume that $A \subseteq P(M)$ is a very small subset and let $q \in S^1_1(A)$. Since $P(A) = A$, we have $q \ind_{P(A)} A$, and therefore $q$ is realized in~$P(M)$, proving saturation. \end{proof} \begin{example} If $\ind$ is the trivial independence relation (that is, $A \ind_B C$ for every $A$, $B$, and $C$), then $M_P$ is a lovely pair iff $M \models T$ is sufficiently saturated and $P(M)= M$ (more precisely, the Extension Property always hold, and the Density Property holds iff $M = P(M)$). \end{example} \begin{example}\label{ex:triple-love} Let $T$ and $\mathfrak C$ be as in Example~\ref{ex:2}. 
The 3 choices for an independence relation $\ind$ on $\mathfrak C$ will correspond to 3 different kind of lovely pairs; in particular, the theory of lovely pairs for $T$ as a simple theory (which is equal to the theory of Beautiful Pairs for $T$ \cite{poizat83}) is different from the theory of lovely pairs for $T$ as a geometric theory. More explicitly, let $M_P \models T^2$, and assume that $M$ is sufficiently saturated. Then, $M_P$ is a model of the theory of $\ind[f]$-lovely pairs iff: \begin{enumerate} \item infinitely many equivalence classes are disjoint from~$P(M)$; \item infinitely many equivalence classes intersect~$P(M)$; \item for each equivalence class $C$, both $C \setminus P(M)$ and $C \cap P(M)$ are infinite. \end{enumerate} On the other hand, $M_P$ is a model of the theory of $\ind[$\acl$]$-lovely pairs iff: \begin{enumerate} \item all equivalence classes intersect~$P(M)$; \item for each equivalence class $C$, both $C \setminus P(M)$ and $C \cap P(M)$ are infinite. \end{enumerate} Finally, $M_P$ is a model of $\ind[$E$]$-lovely pairs iff: \begin{enumerate} \item infinitely many equivalence classes are disjoint from~$P(M)$; \item infinitely many equivalence classes are contained in~$P(M)$; \item every equivalence class is either disjoint or contained in~$P(M)$. \end{enumerate} \end{example} \subsection{Existence} \label{subsec:existence} \begin{lemma}\label{lem:independent-model} Let $A$, $B$, and $C$ be small subsets of~$\mathfrak C$. Assume that $A \ind_C B$. Then, there exists $N \prec \mathfrak C$ small model, such that $A C \subseteq N$ and $N \ind_C B$. In particular, for any $B$ and $C$ small subsets of~$\mathfrak C$, there exists $N \prec \mathfrak C$ small model, such that $C \subseteq N$ and $N \ind_C B$. \end{lemma} \begin{proof} Let $N' \prec N$ be a small model containing~$A$. Choose $N \equiv^1_{AC} N'$ such that $N \ind_C B$ ($N$ exists by the Extension Axiom). \end{proof} Modifying the above proof a little, we can see that, for any small cardinal~$\kappa$, in the above lemma we can additionally require that $N$ is $\kappa$-saturated, or that $\card N \leq \card A + \card T$, etc. Fix $\kappa > \kappa_0$ a small cardinal. \begin{lemma}\label{lem:lovely-existence} $\kappa$-lovely pairs exist. If $\pair{C, P(C)}$ is an $\mathcal L^2$-structure, where $P(C) \subseteq C \subset \mathfrak C$, $C$~is small, and $P(C)$ is closed in~$C$ \textup(i.e\mbox{.}\xspace, $\mat(P(C)) \cap C = P(C)$\textup), then there exists a $\kappa$-lovely pair $M_P$, such that $\pair{C, P(C)}$ is an $\mathcal L^2$-substructure of $M_P$, and $P(M) \ind_{P(C)} C$. \end{lemma} Therefore, given $A$ and $B$ small subsets of $\mathfrak C$, there exists a $\kappa$-lovely pair $M_P$ such that $P(M) \ind_A B$ and $A \subseteq P(M)$. \begin{proof} The proof of~\cite[Lemma~3.5]{BPV} works also in this situation, with some small modification. Here are some details. If $\pair{C, P(C)}$ is not given, define $C = P(C) = \emptyset$. 
Construct a chain $\Pa{\pair{M_i, P(M_i)}: i < \kappa^+}$ of small subsets of $\mathfrak C$, such that: \begin{enumerate}[(a)] \item $\pair{A, P(A)}$ is an $\mathcal L^2$-substructure of $\pair{M_0, P(M_0)}$, and $M_0 \ind_{P(A)} P(M_0)$; \item for every $i < \kappa^+$, any complete $\mathcal L$-type over $M_i$ \footnote{Here we can choose: either in one variable, or in finitely many variables, or in a small number of variables: see \S\ref{sec:tuples}.}, which does not fork over $P(M_i)$ is realized in $P(M_{i+1})$; \item $i < j < \kappa^+$ implies that $\pair{M_i, P(M_i)}$ is an $\mathcal L^2$-substructure of $\pair{M_j, P(M_j)}$, and $P(M_j) \ind_{P(M_i)} M_i$; \item for $i$ successor, $M_i$ is a ($\kappa + \card{P(M_i)})^+$-saturated model of~$T$. \end{enumerate} First, we construct $\pair{M_0, P(M_0)}$. Let $M_0 \prec \mathfrak C$ be a small model containing $A$, and define $P(M_0) := P(A)$. Notice that $A \ind_{P(A)} P(M_0)$. Given $\pair{M_i, P(M_i)}$, let $\Pa{p_j: j < \lambda}$ to be an enumeration of all $\mathcal L$-types over~$M_i$, \footnote{With the same meaning of ``type'' as in (b).} such that $p_j \ind_{P(M_i)} M_i$. Let \begin{itemize} \item $a_0$ be any realization of $p_0$ (in $\mathfrak C$); \item $a_1$ be a realization of $p_1$ such that $a_1 \ind_{M_i} a_0$; \item \dots \item for every $j < \lambda$, let $a_j$ be a realization of $p_j$ such that $a_j \ind_{M_i} \Pa{a_k: k < j}$. \end{itemize} Define $A := \Pa{a_j: j < \lambda}$. It is then easy to see that $A \ind_{P(M_i)} M_i$. Conclude the proof as in~\cite{BPV} (using Lemma~\ref{lem:independent-model} where necessary). \end{proof} \subsection[The back-and-forth argument]{Uniqueness: the back-and-forth argument} \label{subsec:uniqueness} In this subsection, $M_P$ will be a lovely pair. The following definition is from~\cite{BPV}. \begin{definition} \begin{enumerate} \item A set $A \subset M$ is \intro{P\nobreakdash-\hspace{0pt}\relax independent\xspace{}} if $A \ind_{P(A)} P(M)$ (and similarly for tuples). \item Given a possibly infinite tuple $\bar a$ from~$M$, $\ensuremath{\text{\rm P-$\tp$}}(\bar a)$, the P\nobreakdash-\hspace{0pt}\relax type\xspace of~$\bar a$, is the information which tell us which members of $\bar a$ are in~$P(M)$. \end{enumerate} \end{definition} \begin{remark} Let $A \subseteq M$ be P\nobreakdash-\hspace{0pt}\relax independent\xspace. Then $\mat(A) \cap P(M) = \mat^M \Pa{P(A)}$. \end{remark} \begin{proof} Let $c \in \mat(A) \cap P(M)$. Then $c\ind_AP(M)$, by Remark 1.2, and $A \ind_{P(A)} P(M)$. Therefore $c\ind_{P(A)}P(M)$. Therefore $c\ind_{P(A)}c$, that is $c \in \mat\Pa{P(A)}$. \end{proof} \begin{proposition} \label{prop:back-and-forth} Let $M_P$ and $M'_P$ be two $\ind$-lovely pairs for~$T$. Let $\bar a$ be a P\nobreakdash-\hspace{0pt}\relax independent\xspace very small tuple from~$M$, and $\bar a'$ be a P\nobreakdash-\hspace{0pt}\relax independent\xspace tuple of the same length from~$M'$. If $\bar a \equiv^1 \bar a'$ and $\ensuremath{\text{\rm P-$\tp$}}(\bar a) = \ensuremath{\text{\rm P-$\tp$}}(\bar a')$, then $\bar a \equiv^2 \bar a'$. \end{proposition} \begin{proof} Back-and-forth argument. Let \begin{multline*} \Gamma := \bigl\{f: \bar a \to \bar a': \bar a \subset M,\ \bar a' \subset M',\ \bar a \ \&\ \bar a' \text{ very small},\ f \text { bijection},\\ \bar a \ \&\ \bar a' \text{ P\nobreakdash-\hspace{0pt}\relax independent\xspace},\ \bar a \equiv^1 \bar a',\ \ensuremath{\text{\rm P-$\tp$}}(\bar a) = \ensuremath{\text{\rm P-$\tp$}}(\bar a')\bigr\}. 
\end{multline*} We want to prove that $\Gamma$ has the back-and-forth property. Let $f:\bar{a}\rightarrow \bar{a}^\prime$ be in $\Gamma$. Let $b\in M$. \Case 1 Suppose $b\in P(M)$. Then, since $\bar{a}$ is P\nobreakdash-\hspace{0pt}\relax independent\xspace, $b\ind_{P(\bar{a})}\bar{a}$. Let $b^\prime\in M$ be such that $\tp^1(\bar{a}b)=\tp^1(\bar{a}^\prime b^\prime)$. Then $b^\prime\ind_{P(\bar{a}^\prime)}\bar{a}^\prime$. By density, we may assume $b^\prime\in P(M)$. Then $\ensuremath{\text{\rm P-$\tp$}}(\bar{a}b)= \ensuremath{\text{\rm P-$\tp$}}(\bar{a}^\prime b^\prime)$. Since $b, b^\prime\in P(M)$, both $\bar{a}b$ and $\bar{a}^\prime b^\prime$ are P\nobreakdash-\hspace{0pt}\relax independent\xspace. So the partial isomorphism has been extended. \Case 2 Suppose $b\notin P(M)$. Let $\bar{f}$ be a very small tuple from $P(M)$ such that $\bar{a}b\bar{f}$ is P\nobreakdash-\hspace{0pt}\relax independent\xspace. By repeated use of case 1, we obtain a small tuple $\bar{f}^\prime$ from $P(M')$ such that $\tp^1(\bar{a}\bar{f})=\tp^1(\bar{a}^\prime\bar{f}^\prime)$. Let $b^\prime \in M'$ be such that $\tp^1(\bar{a}b\bar{f})=\tp^1(\bar{a}^\prime b^\prime\bar{f}^\prime)$. By extension, we may assume $b^\prime\ind_{\bar{a}^\prime\bar{f}^\prime}P(M')$. It follows that $\bar{a}^\prime b^\prime \bar{f}^\prime$ is P\nobreakdash-\hspace{0pt}\relax independent\xspace, and that $\mat(b' \bar a' \bar f') \cap \mat(\bar a' \bar f' P(M')) = \mat(\bar a' \bar f')$. Suppose $b^\prime\in P(M')$. Then, $b' \in \mat(\bar a' \bar f') \cap P(M')$. Since $\bar a' \bar f'$ is P\nobreakdash-\hspace{0pt}\relax independent\xspace, $b' \in \mat(\bar f' P(\bar a'))$. Therefore, $b \in \mat^M \Pa{\bar f P(\bar a)} \subseteq P(M)$, absurd. So $\ensuremath{\text{\rm P-$\tp$}}(\bar{a}b\bar{f}) = \ensuremath{\text{\rm P-$\tp$}}(\bar{a}^\prime b^\prime\bar{f}^\prime)$. So the partial isomorphism has been extended. \end{proof} \begin{definition} $T^d$ is the theory of $\ind$-lovely pairs (the ``$d$'' stands for ``dense'', a legacy from the o-minimal case). Given a property $\mathcal S$ of models of~$T^2$, we will say that ``$\mathcal S$ is first-order'' to mean that there is an $\mathcal L^2$-theory $T'$ expanding~$T^2$, such that every model of $T^2$ with the property $\mathcal S$ is a model of~$T'$, and every \emph{sufficiently saturated} model of $T'$ has the property~$S$. In particular, by ``$\ind$-loveliness is first-order'' (or simply ``loveliness is first-order'' when $\ind$ is clear from the context), we will mean that every sufficiently saturated model of $T^d$ is a lovely pair. \end{definition} If $T$ is pregeometric, then $\ind[$\acl$]$-loveliness is first-order iff $T$ is geometric (see \S~\ref{subsec:rank-one}). \cite{BPV} and \cite{vassiliev05} investigate the question when $\ind[f]$-loveliness is first-order for simple theories. We will not study this question for~$\ind$, except for a partial result in Corollary~\ref{cor:low} . \section[Near model completeness]{Near model completeness and other properties} \label{sec:model-complete} In this section we assume that loveliness is first order and that $\monster_P := \pair{\mathfrak C, P}$ is a monster model of~$T^d$. \subsection{Near model completeness} \begin{lemma}\label{lem:local-large} For every finite tuple $\bar{a}$ from $\mathfrak C$ there is some small $C\subseteq P$ such that $\bar{a}\ind_{C^\prime}P$ for all $C^\prime \mathrm{eq}uiv^1_{\bar{a}}C$ such that $C^\prime\subseteq P$. \end{lemma} \begin{proof} Suppose not. 
Then for each small $C_{\alpha}\subseteq P$ there is some small $C_{\alpha+1}\subseteq P$ such that $\bar{a}\notind_{C_\alpha}C_{\alpha+1}$ and $\tp^1(C_{\alpha}/\bar{a})$ is realised in $C_{\alpha+1}$. By compactness there is, for any $\kappa$ less than the monstrosity of $\mathfrak C$, an increasing sequence $(C_{\alpha})_{\alpha<\kappa}$ such that $\bar{a}\notind_{C_{\alpha}}C_{\beta}$ for all $\alpha<\beta<\kappa$, contradicting Lemma~\ref{lem:forking-chain}. \end{proof} \begin{definition} Given a tuple of variables~$\bar z$, we will write $P(\bar z)$ as a shorthand for $P(z_1) \wedge \dots \wedge P(z_n)$. A \intro{special} formula is a formula of the form $(\exists \bar z) \Pa{P(\bar z) \wedge \varphi(\bar x,\bar z)}$, where $\varphi$ is an $\mathcal L$-formula. \end{definition} \begin{proposition}[Near model completeness] \label{prop:model-complete} Every $\mathcal L^2$-formula without parameter is equivalent modulo $T^2$ to a Boolean combination of special formulae \textup(without parameters\textup). \end{proposition} \begin{proof} Let $\bar{a}$ and $\bar{b}$ be finite tuples from~$\mathfrak C$. Suppose they both satisfy exactly the same special formulae. It suffices to prove that $\tp^2(\bar{a}) = \tp^2(\bar{b})$. Let $C \subset P$ be as in Lemma~\ref{lem:local-large}. By assumption and compactness there is some $D\subseteq P$ such that $\tp^1(\bar{a}C) = \tp^1(\bar{b}D)$. Suppose $\bar{b}\notind_D P$. Then there is some small $E\subseteq P$ such that $\bar{a}\notind_D E$. By assumption and compactness there exist $C^\prime, E^\prime \subseteq P$ such that $\tp^1(\bar{a}C^\prime E^\prime) = \tp^1(\bar{b}DE)$. But then $C^\prime\models \tp^1(C/\bar{a})$ and so $\bar{a}\ind_{C^\prime}P$ and hence $\bar{a}\ind_{C^\prime}E^\prime$. This contradicts invariance of~$\ind$. So $\bar{b}\ind_D P$. So both $\bar{a}C$ and $\bar{b}D$ are P\nobreakdash-\hspace{0pt}\relax independent\xspace. It follows from our assumption that $\ensuremath{\text{\rm P-$\tp$}}(\bar{a}C) = \ensuremath{\text{\rm P-$\tp$}}(\bar{b}D)$ and we know $\tp^1(\bar{a}C) = \tp^1(\bar{b}D)$. Therefore $\tp^2(\bar{a}C) = \tp^2(\bar{b}D)$. \end{proof} \subsection{Definable subsets of P} \begin{lemma}[{\cite[4.1.7]{boxall}}] \label{lem:trace} Let $\bar b \in \mathfrak C^n$ be P\nobreakdash-\hspace{0pt}\relax independent\xspace. Given a set $Y \subseteq P^m$, t.f.a.e\mbox{.}\xspace: \begin{enumerate} \item $Y$ is $T^d$-definable over $\bar b$; \item there exists $Z \subseteq \mathfrak C^m$ that is $T$-definable over~$\bar b$, such that $Y = Z \cap P^m$. \end{enumerate} \end{lemma} \begin{proof} $(2 \mathbb{R}ightarrow 1)$ is obvious. Assume (1). Then, (2) follows from compactness and the fact that the $\mathcal L^2$-type over $\bar b$ elements from $P$ is determined by their $\mathcal L$-types. \end{proof} \subsection{Definable and algebraic closure} Let $\pair{M, P(M)}$ be a lovely pair. In the next proposition we will consider imaginary elements; to simplify the notation, we will use $\acl^1$ for the algebraic closure for imaginary elements in~$Meq$, and $\acl^2$ for the algebraic closure for imaginary elements in $\pair{M, P(M)}^\mathrm{eq}$, and similarly for~$\dcl$. \begin{proposition}[{\cite[4.1.8, 4.1.9]{boxall}}] \label{prop:acl} Let $\bar a \subset M$ be P\nobreakdash-\hspace{0pt}\relax independent\xspace. Then, $\acl^2(\bar a) \cap Meq = \acl^1(\bar a) \cap Meq$ and $\dcl^2(\bar a) \cap Meq = \dcl^1(\bar a) \cap Meq$. 
\end{proposition} \begin{proof} Clearly, $\dcl^2(\bar a) \cap Meq \subseteq \dcl^1(\bar a) \cap Meq$, and similarly for~$\acl$. We have to prove the opposite inclusions. W.l.o.g\mbox{.}\xspace, $\bar a$ is a very small tuple. Let us prove first the statement for~$\dcl$. So, let $e \in Meq \cap \dcl^2(\bar a)$. Let $\bar b$ be a P\nobreakdash-\hspace{0pt}\relax independent\xspace very small tuple in $M$ containing~$\bar a$, such that $e \in \dcl^1(\bar b)$. Denote $\bar b_0 \coloneqq P(\bar b)$ and $\bar b_1 \coloneqq \bar b \setminus P(\bar b)$; notice that $\bar b \ind_{\bar b_0} P(M)$. \begin{claim}\label{cl:dcl-1} Let $\bar b' \equiv^1_{\bar a} \bar b$ be such that $\bar b' \ind_{\bar a} \bar b$. Then, $\bar b' \equiv^1_e \bar b$. \end{claim} Notice that $\bar b'_0 \nsubseteq P(M)$ in general. Since $a$ is P\nobreakdash-\hspace{0pt}\relax independent\xspace, we have $a \ind_{P(\bar a)} \bar b_0$, and therefore $\bar b'_0 \ind_{P(\bar a)} \bar a$. Since moreover $\bar b'_0 \ind_{\bar a} \bar b$, by transitivity we have $\bar b'_0 \ind_{P(\bar a)} \bar b$, and thus $\bar b'_0 \ind_{P(\bar b)} \bar b$. Therefore, by the Density property, there exists $\bar b''_0 \subset P(M)$ such that $\bar b''_0 \equiv^1_{\bar b} \bar b'_0$. Let $\theta \in \aut^1(\mathfrak C / \bar b)$ be such that $\theta(\bar b'_0) = \bar b''_0$. Let $r(x) \coloneqq \tp^1(\bar b'_1 / \bar b \bar b'_0)$, and $q \coloneqq \theta(r) \in S^1(\bar b \bar b''_0)$. By the Extension property, there exists $\bar b''_1$ in~$M$, such that $\bar b''_1$ realizes~$q$ and \begin{equation}\label{eq:dcl} \bar b''_1 \ind_{\bar b \bar b''_0} P(M). \end{equation} Denote $\bar b'' \coloneqq \bar b''_0 \bar b''_1$; notice that $\bar b'' \equiv^1_{\bar b} \bar b'$. Since $\bar b \ind_{\bar a} \bar b'$, we have $\bar b \ind_{\bar a} \bar b''$, and therefore $\bar b_1'' \ind_{\bar b_0'' \bar a} \bar b \bar b''_0$. Therefore, by \mathrm{eq}ref{eq:dcl} and transitivity, $\bar b''_1 \ind_{\bar a \bar b''_0} P(M)$. Since moreover $\bar a$ is P\nobreakdash-\hspace{0pt}\relax independent\xspace, by transitivity again we have $\bar b'' \ind_{\bar b''_0} P(M)$. Thus, we proved that $\bar b''$ is P\nobreakdash-\hspace{0pt}\relax independent\xspace and has the same P\nobreakdash-\hspace{0pt}\relax type\xspace over $\bar a$ as~$\bar b$, and thus $\bar b'' \equiv^2_{e} \bar b$. Since moreover, by definition, $\bar b'' \equiv^1_{\bar b} \bar b'$, we have $\bar b'' \equiv^1_e \bar b'$, and the claim is proved. Since $e \in \dcl^1(\bar b)$, there exists a function $f$ which is $T$-definable without parameters, and such that $e = f(\bar b)$ \begin{claim}\label{cl:dcl-2} Assume that $\bar b'' \subset M$ and $\bar b'' \equiv_{\bar a} \bar b$. Then, $e = f(\bar b'')$. \end{claim} It follows immediately from Claim~\ref{cl:dcl-1}. Now, since loveliness is first-order, we can assume that $\pair{M, P(M)}$ is sufficiently saturated; therefore, Claim~\ref{cl:dcl-2} implies that $e \in \dcl^1(\bar a)$, and thus $\dcl^1(\bar a) \cap Meq = \dcl^2(\bar a) \cap Meq$. Assume now that $e \in \acl^2(\bar a) \cap Meq$. Let $X$ be the set of realizations of $\tp^2(e / \bar a)$. Notice that $X$ is a finite subset of~$Meq$, and therefore it is definable in~$Meq$; Let $e'$ be a canonical parameter for $X$ in the sense of~$Meq$. Since $e' \in \dcl^2(\bar a) \cap Meq$, by the first assertion we have $e' \in \dcl^1(\bar a)$. Since $e \in \acl^1(e')$, we $e \in \acl^1(\bar a)$. Therefore, $\acl^1(\bar a) \cap Meq = \acl^2(\bar a) \cap Meq$. 
\end{proof} \begin{proposition}[{\cite[6.1.3]{boxall}}] Let $\bar a \subset M$ be P\nobreakdash-\hspace{0pt}\relax independent\xspace. Let $f: P(M)^n \to Meq$ be $T^d$-definable with parameters~$\bar a$. Then, there exists $g: M^n \to Meq$ which is $T$-definable with parameters~$\bar a$, such that $f = g \upharpoonright P(M)^n$. \end{proposition} \begin{proof} Let $\pair{N, P(N)}$ be an elementary extension of $\pair{M, P(M)}$ and $c \in P(N)$. By Proposition~\ref{prop:acl}, $f(c) \in \dcl^1(\bar a)$, and therefore there exists $g_i: N^n \to N^\mathrm{eq}$ which is $T$-definable with parameters~$\bar a$, such that $f(c) = g_i(c)$. By compactness, finitely many $g_i$ will suffice. The conclusion follows from Lemma~\ref{lem:trace}. \end{proof} Notice that in the above proposition we were not able to prove the stronger result that $\param g \in \dcl^2(\param f)$, where $\param f$ is the canonical parameter of $f$ according to~$T^d$ (cf\mbox{.}\xspace \cite[6.1.3]{boxall}). Nor were we able to prove any form of elimination of imaginaries for $T^d$ (cf\mbox{.}\xspace \cite[theorems 1.2.4, 1.2.6 and 1.2.7]{boxall} and \cite[\S8.5]{fornasiero-matroids}). \section{Small and imaginary tuples }\label{sec:tuples} \subsection{Small tuples} We show how in Definition~\ref{def:lovely}, we can pass from $\mathcal L$-1-types to $\mathcal L$-types in very small number of variables. Let $M_P = \pair{M, P(M)}$ be a small model of $T^2$. \begin{lemma}\label{lem:multi-density} Assume that $M_P$ satisfies the Density Property \textup(for $\mathcal L$-1-types\textup). Then, $M_P$ satisfies the Density Property for $\mathcal L$-types of very small length. \end{lemma} \begin{proof} Let $A \subset M$ be very small and $\bar b$ be a tuple in $M$ of very small length, such that $\bar b \ind_{P(A)} A$. We must prove that there exists $\bar b' \subset P(M)$ such that $\bar b' \equiv^1_A \bar b$. Define \[ \mathcal F := \set{f \text{ partial 1-automorphism of } M/A:\ \Dom(f) \subseteq \bar b,\ \Imag(f) \subseteq P(M)}. \] Let $f \in \mathcal F$ be a maximal element (Zorn). I claim that $\Dom(f) = \bar b$ (this suffices to prove the conclusion). Assume not. Let $\bar d := \Dom(f)$ and $e \in \bar b \setminus \bar d$. Let $\bar d' := f(\bar d) \subset P$, $q := \tp^1(e/ A \bar d)$, and $q' := f(q)$. Notice that $e' \bar d' \ind_{P(A)} A$, and therefore $q' \ind_{P(A) \bar d'} A \bar d'$. Since $\bar d' \subseteq B$, the Density Property for $\mathcal L$-1-types implies that there exists $e' \in P(M)$ satisfying $q'$, and hence $\bar d' e'' \equiv^1_A \bar d e$, contradicting the maximality of~$f$. \end{proof} \begin{lemma} Assume that $M_P$ satisfies the Extension Property \textup(for $\mathcal L$-1-types\textup). Then, $M_P$ satisfies the Extension Property for $\mathcal L$-types of very small length. \end{lemma} \begin{proof} Let $A \subset M$ be very small and $\bar b$ be a tuple in $M$ of very small length. We must prove that there exists $\bar b' \subset M$ such that $\bar b' \equiv^1_A \bar b$ and $\bar b' \ind_A P$. Define \[ \mathcal F := \set{f \text{ partial 1-automorphism of } M/A:\ \Dom(f) \subseteq \bar b,\ \Imag(f) \ind_A P(M)}. \] Let $f \in \mathcal F$ be a maximal element (Zorn). I claim that $\Dom(f) = \bar b$ (this suffices to prove the conclusion). Assume not. Let $\bar d := \Dom(f)$ and $e \in \bar b \setminus \bar d$. Let $\bar d' := f(\bar d)$, $q := \tp^1(e/ A \bar d)$, and $q' := f(q)$. 
The Extension Property for $\mathcal L$-1-types implies that there exists $e' \in M$ satisfying $q'$ and such that $e' \ind_{A \bar d'} P(M)$. Besides, by assumption, $\bar d' \ind_A P(M)$; therefore $e' \bar d' \ind_A P$. Since moreover $e' \bar d' \equiv^1_A e \bar d$, we have a contradiction. \end{proof} \subsection{Imaginary tuples} Given an independence relation $\ind$ on $\mathfrak C$, we do not know if there always exists an independence relation $\ind[$\eq$]$ on $\mathfrak Ceq$ extending~$\ind$. However, as the next lemma shows, the independence relation $\ind[$\eq$]$, if it exists, it is unique. \footnote{Thanks to H.~Adler for pointing this out.} \begin{lemma} Let $\ind[$\eq$]$ be an independence relation on $\mathfrak Ceq$ extending~$\ind$. Let $A$, $B$, and $C$ be small subsets of~$\mathfrak Ceq$. Then, the following are equivalent: \begin{enumerate} \item $A \ind[$\eq$]_C B$; \item There exist $A_0$ and $B_0$ small subsets of~$\mathfrak C$, such that $A \subseteq \dcleq(A_0)$, $B \subseteq \dcleq(B_0)$, and for every $C_0$ small subset of $\mathfrak C$ with $C \subseteq \dcleq(C_0)$, there exists $A_0' \subset \mathfrak C$ such that $A_0' \equiv_{B_0 C} A_0$ and $A_0' \ind_{C_0} B_0$. \end{enumerate} \end{lemma} \begin{proof} Exercise: first reduce to the case when $A$ and $B$ are subsets of~$\mathfrak C$. \end{proof} Remember that $\ind$ is strict if $\mat = \acl$. \begin{warning*} It may happen that $\ind$ is strict, but $\ind[$\eq$]$ is not strict: consider for instance the independence relation $\ind[$\acl$]$ in Example~\ref{ex:2} (cf\mbox{.}\xspace \cite[\S6]{fornasiero-matroids}). \end{warning*} \begin{proviso*} For the remainder of this section, we assume that $\ind[$\eq$]$ is an independence relation on $\mathfrak C^{\mathrm{eq}}$ extending~$\ind$. Moreover, we also assume that $P$ is closed in~$\mathfrak C$. \end{proviso*} \begin{definition} Assume that $M_P = \pair{M, P(M)}$ is a model of $T^2$. Let $A \subseteq Meq$. \begin{enumerate} \item Define $P(A) := \mat^{\eq}(P) \cap A$. Notice that $P(A)$ is the same as before if $A$ is a set of \emph{real} elements. \item We say that $A$ is P\nobreakdash-\hspace{0pt}\relax independent\xspace if $A \ind[$\eq$]_{P(A)} P$. \item Given a possibly infinite tuple $\bar a$ from~$Meq$, $\ensuremath{\text{\rm P-$\tp$}}(\bar a)$, the P\nobreakdash-\hspace{0pt}\relax type\xspace of $\bar a$, is the information which tell us which members of $\bar a$ are in $P(Meq)$ (again, notice that $\ensuremath{\text{\rm P-$\tp$}}(\bar a)$ is the same as before if $\bar a$ is a tuple of real elements). \end{enumerate} \end{definition} The following two lemmas are the analogues of \cite[Remark~3.3]{BPV}. \begin{lemma}\label{lem:imaginary-density} If $M_P$ satisfies the Density Property \textup(for $\mathcal L$-1-types\textup), then it satisfies the Density Property for imaginary tuples: that is, if $\bar c$ and $\bar a$ are very small tuples in $Meq$ and $\bar c \ind[$\eq$]_{P(\bar a)} \bar a$, then there exists $\bar c' \in \mat^{\eq}(P)$, such that $\bar c' \equiv^1_{\bar a} \bar c$. \end{lemma} \begin{proof} Fix $\bar d$ a small tuple in $M$ such that $\bar c = [\bar d]_E$, for some $\emptyset$-definable equivalence relation~$E$. Let $\bar a_0 := P(\bar a)$ and $\bar a_1 := \bar a \setminus P$. Fix $\bar b_1$ small tuple in $M$ such that $\bar a_1 = [\bar b_1]_F$, for some $\emptyset$-definable equivalence relation~$F$. Let $\bar d'$ in $M$ such that $\bar d' \equiv^1_{\bar a_0 \bar c} \bar d$ and $\bar d' \ind[$\eq$]_{\bar a_0 \bar c} \bar a$. 
Since, by assumption, $\bar c \ind[$\eq$]_{\bar a_0} \bar a$, we have $\bar d' \ind[$\eq$]_{\bar a_0} \bar a$. Notice, moreover, that $[\bar d']_E = \bar c$. Let $\bar b'_1 \equiv^1_{\bar a} \bar b_1$ such that $\bar b_1' \ind[$\eq$]_{\bar a} \bar d'$; notice that $[\bar b'_1]_F = \bar a_1$. Moreover, by Transitivity, $\bar b'_1 \ind[$\eq$]_{\bar a_0} \bar d'$; therefore, $\bar a_0 \bar b'_1 \ind[$\eq$]_{P(\bar a_0 \bar b'_1)} \bar d'$. Let $\bar b_0$ be a small tuple in $P$ such that $\bar a_0 \in \acleq(\bar b_0)$; moreover, we can choose $\bar b_0$ that satisfies $\bar b_0 \ind[$\eq$]_{\bar a_0} \bar b_1' \bar d'$. Hence, $\bar b_1' \ind_{\bar b_0} \bar d'$: notice that all tuples are real. Hence, by Lemma~\ref{lem:multi-density}, there exists $\bar d''$ in $P$ such that $\bar d'' \equiv^1_{\bar b_0 \bar b'_1} \bar d'$. Define $\bar c'' := [\bar d'']_E$. Then, $\bar c'' \equiv^1_{\bar a} \bar c$ and $\bar c'' \in \mat^{\eq}(P)$. \end{proof} \begin{lemma}\label{lem:imaginary-extension} If $M_P$ satisfies the Extension Property \textup(for $\mathcal L$-1-types\textup), then it satisfies the Extension Property for imaginary tuples: that is, if $\bar c$ and $\bar a$ are very small tuples in $Meq$, then there exists $\bar c' \in M^{\mathrm{eq}}$, such that $\bar c' \equiv^1_{\bar a} \bar c$ and $\bar c' \ind[$\eq$]_{\bar a} P$. \end{lemma} \begin{proof} Fix $\bar b$ and $\bar d$ small tuples in $M$ such that $\bar a = [\bar b]_F$ and $\bar c = [\bar d]_E$, for some $\emptyset$-definable equivalence relations $E$ and~$F$. Let $\bar b' \equiv^1_{\bar a} \bar b$ such that $\bar b' \ind[$\eq$]_{\bar a} \bar d$. Notice that $\bar a = [\bar b']_F$ and that $\bar a \in \mat^{\eq}(\bar b')$ (because $\bar a \in \dcleq(\bar b')$). By Lemma~\ref{lem:multi-density}, there exists $\bar d'$ in $M$ such that $\bar d' \equiv^1_{\bar b'} \bar d$ and $\bar d' \ind_{\bar b'} P$. Notice that $\bar d' \equiv_{\bar b' \bar a} \bar d$; therefore, since $\bar d \ind_{\bar a} \bar b'$, we have $\bar d' \ind[$\eq$]_{\bar a} \bar b'$, hence, by transitivity, $\bar d' \ind_{\bar a} P$. Finally, let $\bar c' := [\bar d']_E$. Thus, $\bar c' \equiv^1_{\bar a} \bar c$ and $\bar c \ind[$\eq$]_{\bar a} P$. \end{proof} \begin{lemma}\label{lem:imaginary-type} Let $M_P$ and $M'_P$ be two $\ind$-lovely pairs for~$T$. Let $\bar a$ be a P\nobreakdash-\hspace{0pt}\relax independent\xspace small tuple from~$Meq$, and $\bar a'$ be a P\nobreakdash-\hspace{0pt}\relax independent\xspace tuple of the same length and the same sorts from~${M'}^{\mathrm{eq}}$. If $\bar a \equiv^1 \bar a'$ and $\ensuremath{\text{\rm P-$\tp$}}(\bar a) = \ensuremath{\text{\rm P-$\tp$}}(\bar a')$, then $\bar a \equiv^2 \bar a'$. \end{lemma} \begin{proof} Again, by a back-and-forth argument. Denote $P := P(M)$ and $P' := P(M')$. Let $\bar a_0 := P(\bar a)$ and $\bar a'_0 := P(\bar a')$. Let \begin{multline*} \Gamma := \bigl\{f: \bar a \to \bar a': \bar a \subset Meq,\ \bar a' \subset \monster_Peq,\ \bar a \ \&\ \bar a' \text{ very small},\ f \text { bijection},\\ \bar a \ \&\ \bar a' \text{ P\nobreakdash-\hspace{0pt}\relax independent\xspace{}},\ \bar a \equiv^1 \bar a',\ \ensuremath{\text{\rm P-$\tp$}}(\bar a) = \ensuremath{\text{\rm P-$\tp$}}(\bar a')\bigr\}. \end{multline*} We want to prove that $\Gamma$ has the back-and-forth property. So, let $f: \bar a \to \bar a'$ be in $\Gamma$, and $\bar c \subsetMeq \setminus \bar a$ be a small tuple; we want to find $g \in \Gamma$ such that $g$ extends $f$ and $\bar c$ is contained in the domain of~$g$. 
We can reduce ourselves to two cases. \Case 1 $\bar c \subset \mat^{\eq}(P)$. Let $q := \tp^1(\bar c/\bar a)$ and $q' := f(q)$. Since $\bar a$ is P\nobreakdash-\hspace{0pt}\relax independent\xspace, we have $q \ind[$\eq$]_{\bar a_0} \bar a$, and hence $q' \ind[$\eq$]_{\bar a'_0} \bar a'$. Therefore, by the Density property in Lemma~\ref{lem:imaginary-density}, there exists $\bar c' \subset \mat^{\eq}(P')$ satisfying $q'$; extend $f$ to $\bar a \bar c$ setting $f(\bar c) = \bar c'$. \Case 2 $\bar c \subset Meq \setminus \mat^{\eq}(P)$. Let $\bar p_0$ be a small subset of $P$ such that $\bar c \bar p_0 \bar a$ is P\nobreakdash-\hspace{0pt}\relax independent\xspace. By Case~1, w.l.o.g\mbox{.}\xspace $\bar p_0 \subseteq \bar a_0$, i.e\mbox{.}\xspace $\bar c \bar a$ is P\nobreakdash-\hspace{0pt}\relax independent\xspace, that is $\bar c \bar a \ind_{\bar a_0} P$. Let $q := \tp^1(\bar c/\bar a)$ and $q' := f(q)$. By Lemma~\ref{lem:imaginary-extension}, there exists $\bar c' \subset \monster_Peq$ satisfying $q'$ such that $\bar c'\ind[$\eq$]_{\bar a'} P'$. \begin{claim} $\bar c' \cap \mat^{\eq}(P') = \emptyset$. \end{claim} Assume, for contradiction, that $c_0 \in \bar c' \cap \mat^{\eq}(P')$. Since $\bar c' \ind[$\eq$]_{\bar a'} P'$, we have $c_0 \in \mat^{\eq}(\bar a') \cap \mat^{\eq}(P') = \mat(\bar a'_0)$ hence, $c_0 \in \mat(\bar a) \subseteq \mat^{\eq}(P)$, absurd. Thus, $\bar c \bar a$ and $\bar c' \bar a'$ have the same P\nobreakdash-\hspace{0pt}\relax type\xspace and the same $\mathcal L$-type. Moreover, by transitivity, $\bar c' \bar a' \ind[$\eq$]_{\bar a'_0} P'$, that is $\bar c' \bar a'$ is P\nobreakdash-\hspace{0pt}\relax independent\xspace. Thus, we can extend $f$ to $\bar c \bar a$ setting $f(\bar c) = \bar c'$. \end{proof} \section{Lowness and equivalent formulations of loveliness} \label{sec:low} Let $M_P = \pair{M, P(M)} \models T^2$. \begin{remark} The Density Property for $M_P$ is equivalent to: \begin{sentence}[(\S)] For every $A$ very small subset of~$M$ and $q \ind S^1_1(A)$, if $q \ind_{P(M)} A$, then $q$ is realized in~$P(M)$. \end{sentence} \end{remark} \begin{proof} Let $A \subset M$ be very small. The proof that the Density Property implies (\S) is as in~\cite[Remark~3.4]{BPV}: given $c \in \mathfrak C$ such that $c \ind_{P(M)} A$, let $P_0 \subset P(M)$ very small such that $P(M) \ind_{P_0} A$; by Transitivity, $c \ind_{P(A) P_0} A$, and therefore, by the Density Property, there exists $c' \in P(M)$ such that \mbox{$c' \equiv^1_{P_0 A} c$}. For the converse, assume that $\pair{M, P(M)}$ satisfies (\S), and let $q \in S^1_1(A)$ such that $q \ind_{P(A)} A$. Let $r \in S^1_1(A \cup P(M))$ be a non-forking extension of~$q$. Let $c \in \mathfrak C$ be a realization of~$r$. By transitivity, we have that $c \ind_{P(A)} A P(M)$, and therefore $c \ind_{P(M)} A$. Hence, by (\S), there exists $c' \in P(M)$ such that $c \mathrm{eq}uiv^1_A c$, and hence $c'$ is a realization of $q$ in $P(M)$. \footnote{Thanks to E.~Vassiliev for the proof.} \end{proof} \begin{lemma}\label{lem:density-saturated} Assume that $M_P$ is $\kappa$-saturated. Then, t.f.a.e\mbox{.}\xspace: \begin{enumerate} \item $M_P$ satisfies the Density property; \item for every $A \subset M$ very small, for every $\mathcal L$-formula $\phi(x)$ in 1 variable with parameters from~$A$, if $\phi$ does not fork over $P(A)$, then $\phi$ is realized in $P(M)$. 
\item for every $A \subset M$ very small, for every $\mathcal L$-formula $\phi(\bar x)$ in many variables with parameters from~$A$, if $\phi$ does not fork over $P(A)$, then $\phi$ is realized in $P(M)$. \end{enumerate} \end{lemma} \begin{proof} $(1 \mathbb{R}ightarrow 2)$. Let $q(x) \in S^1_1(A)$ extending $\phi(x)$ such that $q \ind_{P(A)} A$. Choose $c \in P(M)$ realizing $q(x)$. (Notice that we did not use the fact that $M_P$ is $\kappa$-saturated). $(2 \mathbb{R}ightarrow 1)$. Let $q \in S^1_1(A)$ be a complete $\mathcal L$-1-type over some very small set $A$, such that $q \ind_{P(A)} A$. Consider the following partial $\mathcal L^2$-1-type over~$A$: $\Phi(x) := q(x) \ \&\ x \in P$. By (2), $\Phi$ is consistent, and hence, by saturation, realized in $P(M)$. $(3 \mathbb{R}ightarrow 2)$ is trivial, and $(1 \mathbb{R}ightarrow 3)$ follows as in $(1 \mathbb{R}ightarrow 2)$ using Lemma~\ref{lem:multi-density}. \end{proof} \begin{definition} Let $\bar x$ and $\bar y$ be finite tuples of variables. Fix a formula $\phi(\bar x, \bar y)$. Let $\bar z$ be very small tuple, and define \[ \Sigma_{\phi, \bar z}(\bar y, \bar z) := \set{\pair{\bar b, \bar c}: \phi(\bar x, \bar b) \text{ forks over } \bar c}. \] We say that $\ind$ is \intro{low} if $\Sigma_\phi(\bar y, \bar z)$ is type-definable, for every formula $\phi$. \end{definition} \begin{remark} When $T$ is simple and $\ind[f]$ is Shelah's forking, then $\ind[f]$ is low iff $T$ is a low simple theory. Moreover, if $T$ is stable, then $T$ (and hence~$\ind[f]$) is low. See \cite{BPV} for definitions and proofs. \end{remark} \begin{corollary}[{\cite[4.1]{BPV}}]\label{cor:low} If $\ind$ is low iff the Density property is first order. If $\ind$ is low, the axiomatization for the Density property is: \begin{equation} \label{eq:density} (\forall \bar b)\ (\forall \bar c \in P)\ \Pa{\pair{\bar b, \bar c} \notin \Sigma_{\phi, \bar z}(\bar y, \bar z) \mathbb{R}ightarrow (\exists a \in P)\ \phi(a, \bar b, \bar c)}, \end{equation} where $\phi(x, \bar y)$ varies among all the $\mathcal L$-formulae, with $x$ a single variable, $\bar y$ and $\bar b$ are finite tuples of variables of the same length, and $\bar z$ and $\bar c$ are very small tuples of variables of the same length. \end{corollary} Notice that if $\ind$ is low, then \mathrm{eq}ref{eq:density} is indeed given by a set of axioms. \begin{proof} Assume that $\ind$ is low. Let $M_P$ be a $\kappa$-saturated model of $T^2$. We have to prove that $M_P$ satisfies the Density property iff \mathrm{eq}ref{eq:density} holds. Notice that \mathrm{eq}ref{eq:density} is equivalent to:\\ ``If $\phi(x, \bar b)$ does not fork over $\bar c$, with $\bar c$ very small tuple in $P(M)$, then $\phi(x, \bar b)$ is satisfied in~$P(M)$''.\\ By Lemma~\ref{lem:density-saturated}, this is equivalent to the Density property. Conversely, assume that the Density property is first order. Fix an $\mathcal L$-formula $\phi(\bar x, \bar y)$; we must show that $\Sigma := \Sigma_{\phi, \bar z}(\bar y, \bar z)$ is preserved under ultraproducts. Assume that, for every $i$ in some index set $I$, $\pair{\bar b_i, \bar c_i} \in \Sigma$, that is $\phi(\bar x, \bar b_i)$ forks over $\bar c_i$ (where $\length{\bar b_i} = \length{\bar y}$, and $\bar c_i$ are very small tuples all of the same length). By Lemma~\ref{lem:lovely-existence}, for every $i \in I$ there exists a lovely pair $\pair{N_i, P_i}$ such that $\bar c_i \in P_i$ and $\bar b_i \ind_{\bar c_i} P_i$. 
Let $\pair{N, P(N), \bar b, \bar c}$ be an ultraproduct of the $\pair{N_i, P_i, \bar b_i, \bar c_i}$, and let $\pair{M, P}$ be a $\kappa$-saturated elementary extension of $\pair{N, P(N)}$. \begin{claim} $\phi(\bar x, \bar b)$ is not realized in~$P$. \end{claim} In fact, for every $i \in I$, since $\phi(\bar x, \bar b_i)$ forks over $\bar c_i$, and $\pair{N_i, P_i}$ satisfies the Density property, $\phi(\bar x, \bar b_i)$ is not realized in~$P_i$; thus, $\phi(\bar x, \bar b)$ is not realized in $P(N)$, and hence not in~$P$. Moreover, since the Density property is first order, $\pair{M, P}$ satisfies it. Therefore, by Lemma~\ref{lem:density-saturated}, $\phi(\bar x, \bar b)$ forks over~$\bar c$. \end{proof} \subsection{The rank 1 case} \label{subsec:rank-one} In this subsection we will study more in details the case when $\ind$ is superior and $\U^{\ind}(\mathfrak C) = 1$. \begin{remark} Let $\mathfrak C$ be pregeometric (that is, $\acl$ has the Exchange property). Then, $\ind[$\acl$]\,$ is superior and $\U^{\acl}(\mathfrak C) = 1$. If $V \subseteq \mathfrak C$ is definable, then $V$ is infinite iff $\U^{\acl}(V) = 1$. \end{remark} \begin{proviso*} For the remainder of this subsection, $\ind$ is superior and $\U^{\ind}(\mathfrak C) = 1$. Moreover, we denote $\U \coloneqq \U^{\ind}$. Finally, $M$~is a small model and $M_P = \pair{M, P(M)} \models T^2$. \end{proviso*} \begin{lemma} If $M_P$ satisfies the Density Property, then \begin{enumerate} \item $P(M)$ is an elementary substructure of~$M$; \item $P(M)$ is $\kappa$-saturated; \item $P(M)$ is \intro{$\mat$-dense} in $M$: that is, for every $T$-definable subset $V$ of $M$, if $\U(V) = 1$, then $P(M) \cap V \neq \emptyset$. \end{enumerate} Conversely, if $M_P$ is $\kappa$-saturated and satisfies conditions (1), (2), and (3), then $M_P$ satisfies the Density Property. \end{lemma} \begin{proof} Assume that $M_P$ satisfies the Density property. (1) and (2) follow from Remark~\ref{rem:density-substructure}. For (3), let $V \subset M$ be $T$-definable with parameters~$\bar a$, such that $\U(V) = 1$. Notice that $V$ does not fork over any set, because $\U(M) = 1 = \U(V)$, and in particular $V$ does not fork over $P(\bar a)$. Let $q(x) \in S^1_1(\bar a)$ expanding $x \in V$ and such that $q \notind_{P(\bar a)} \bar a$. By the Density property, $q$~is realized in $P(M)$ and therefore $V \cap P(M)$ is nonempty. For the converse, assume that $M_P$ is $\kappa$-saturated and satisfies the conditions in the lemma. Let $A \subset M$ be very small and $q \in S^1_1(A)$ such that $q \ind_{P(A)} A$. If $\U(q) = 0$, then $\U\Pa{q \upharpoonright{P(A)}} = 0$, and hence, since $P(M)$ is closed in~$M$, all realization of $q$ are in~$P(M)$, and in particular $q$ is realized in~$P(M)$. Otherwise, $\U(q) = 1$. Thus, for every $V \coloneqq \phi(x, \bar a) \in q(x)$, $\U(V) = 1$, and therefore $V \cap P(M) \neq \emptyset$. Therefore, $q$~is finitely satisfiable in~$P(M)$, and hence, by saturation, $q$~is satisfiable in~$P(M)$. \end{proof} \begin{example} There exists a pregeometric structure $\mathfrak C$ such that $\ind[$\acl$]\,$ is not low. In fact, let $\mathfrak C$ be a monster model of $T := \Theory(\pair{\mathbb{Z}, <})$. We have $a \in \acl(b)$ iff $\abs{a - b}$ is finite, and $\acl(B) = \bigcup_{b \in B} \acl(b)$. We shall prove that, for every $M_P \models T^2$, if $P(M) \neq M$, $P(M)$ is algebraically closed in~$M$, and $M_P$ is $\omega$-saturated, then $P$ is not dense in $M$ (and therefore $M_P$ does not satisfy the Density Property). 
Let $a \in M \setminus P(M)$, and let $q(y)$ be the following partial $S^2_1$-type over $a$: \[ a < y \ \&\ y - a = \infty \ \&\ [a,y] \cap P = \emptyset. \] Notice that $q(y)$ if finitely satisfiable in $M$, and hence, by saturation, there exists $b \in M$ satisfying it. Thus, $[a, b]$ is a $T$-definable infinite set that does not intersect~$P(M)$. \end{example} \begin{definition} Let $f: \mathfrak C^m \leadsto \mathfrak C ^n$ be an \intro{application} (that is, a multi-valued partial function); we say that $f$ is a \intro{\bar zapplication{}} if $f$ is definable and $\U(f(\bar c)) \leq 0$ is finite for every $\bar c \in \mathfrak C^m$. \end{definition} \begin{definition} We say that ``$\U$ is definable'' if, for every formula $\phi(\bar x, \bar y)$ and every $n \in \mathbb{N}$, the set $\set{\bar b \in \ \mathfrak C^m: \U \Pa{\phi(\mathfrak C^n, \bar b)} = n}$ is definable, with the same parameters as~$\phi$. \end{definition} \begin{remark} $\U$ is definable iff, for every formula $\phi(x, y)$ without parameters (where $x$ and $y$ have length~$1$), the set $\set{b \in \ \mathfrak C: \U \Pa{\phi(\mathfrak C^n, b)} = 0}$ is definable without parameters. \end{remark} \begin{remark} If $\mathfrak C$ is pregeometric, then $\U^{\acl}$ is definable iff $\mathfrak C$ eliminates the quantifier~$\exists^\infty$. \end{remark} \begin{remark} If $\U$ is definable, then loveliness is first-order. The axioms of $T^d$ are: \begin{description} \item[Closure] $P(M)$ is closed in~$M$; \item[Density] for every $V$ $T$-definable subset of~$M$, if $\U(V) = 1$, then intersects~$P(M)$; \item[Extension] let $V$ be a $T$-definable subset of $M$ with $\U(V) = 1$ and $f: M^n \leadsto M$ be any $T$-definable \bar zapplication; then, $V \nsubseteq f(P(M)^n)$. \end{description} In particular, if $\mathfrak C$ is geometric and $\ind = \ind[$\acl$]$, then the axioms of $T^d$ are: \begin{description} \item[Closure] $P(M) \prec M$; \item[Density] for every $V$ $T$-definable subset of~$M$, if $V$ is infinite, then $V$ intersects~$P(M)$; \item[Extension] let $V$ be a $T$-definable infinite subset of $M$ with $\U(V) = 1$ and $\bar b$ be a finite tuple in~$M$; then, $V \nsubseteq \mat(\bar b P(M))$. \end{description} In both cases, if $\mathfrak C$ expands an integral domain, then the Extension axiom can be proved from the first two axioms (\cite[Theorem~8.3]{fornasiero-matroids}). \end{remark} Conversely, we have the following result. \begin{proposition} Assume that loveliness is first-order. Then, $\U$~is definable. \end{proposition} \begin{proof} Fix an $\mathcal L$-formula $\psi(x, y)$. Denote $V_c \coloneqq \phi(\mathfrak C, c)$, and define $Z \coloneqq \set{c \in \mathfrak C: \U(V_c) = 1}$. We have to prove that $Z$ is definable. This is equivalent to show that both $Z$ and its complement are preserved under ultraproducts. \begin{enumerate} \item Let $(b_i : i \in I)$ be a sequence such that $b_i \in Z$ for every $i \in I$. For every $i \in I$, choose $c_i \in \mathfrak C$ such that $c_i \in V_c \setminus \mat(\emptyset)$. By Lemma~\ref{lem:lovely-existence}, for every $i \in I$ there exists a lovely pair $\pair{N_i, P_i}$ such that $b_i \in P_i$ and $c_i \ind_{b_i} P_i$. Therefore, $c_i \notin P_i$. Let $\pair{N, P(N), b, c}$ be an ultraproduct of the $\pair{N_i, P_i, b_i, c_i}$ and let $\pair{M, P(M)}$ be a $\kappa$-saturated elementary extension of $\pair{N, P(N)}$. Thus, $c \notin P(M)$. If, for contradiction, $\U(V_b) = 0$, then $c \in \mat(b)$. 
However, $b \in P(M)$ and $P(M)$ is closed in~$M$, absurd.\\ Notice that for this half of the proof we only used the fact that the Density property is first-order. Notice moreover that we proved that $\mat$ is a definable matroid in the sense of \cite{fornasiero-matroids}. \item Let $(b_i : i \in I)$ be a sequence such that $b_i \notin Z$ for every $i \in I$. By Lemma~\ref{lem:lovely-existence}, for every $i \in I$ there exists a lovely pair $\pair{N_i, P(N_i)}$ such that $b_i \in P(N_i)$. Let $\pair{N, P(N), b}$ be the ultraproduct of the $\pair{N_i, P(N_i), b_i}$ with ultrafilter~$\mu$, and let $\pair{M, P(M)}$ be a $\kappa$-saturated elementary extension of $\pair{N, P(N)}$. By the first half of the proof, the set $\mat(b)$ is ord-definable. Hence, ``$x \in V_b$ and $x \notin \mat(b)$'' is a consistent partial type (in~$x$, with parameter~$b$). Thus, by the Extension property, there exists $d \in M$ such that $d \in V_b$, $d \notin \mat(b)$, and $d \ind_b P(M)$. Since $P(M)$ is closed in~$M$, we have $d \notin P(M)$. Thus, there exists $c \in N$ such that $c \in V_b \setminus P(N)$. Choose $(c_i: i \in I)$ such that $c = (c_i: i \in I) / \mu$, and such that $c_i \in V_{b_i} \setminus P(N_i)$ for every $i \in I$. However, since $\U(V_{b_i}) = 0$, we have $c_i \in \mat^{N_i}(b_i) \subseteq P(N_i)$, absurd. \qedhere \end{enumerate} \end{proof} See \cite{fornasiero-matroids} for more results on the case when $\U$ is definable, and \cite{boxall} for more on lovely pairs of geometric structures. \section{NIP, stability, etc. in lovely pairs} \label{sec:stability} For this section, we assume that $\monster_P = \pair{\mathfrak C, P}$ is a monster model of $T^2$. \subsection{Coheirs} Let $M \subseteq N \subset \monster$ be small subsets of~$\monster$, such that $\pair{M, P(M)} \preceq \pair{N, P(N)} \prec \monster_P$. Assume also that both $\pair{M, P(M)}$ and $\pair{N, P(N)}$ are sufficiently saturated (in particular, they are $\kappa_0$-saturated). \begin{remark} $M \ind_{P(M)} P$, and similarly for~$N$. \end{remark} \begin{proof} It is sufficient to prove that $\bar{m}\ind_{P(M)}P$ for every finite tuple $\bar{m}$ from~$M$. By local character there is some $C\subseteq P$ with $\card C < \kappa_0$ and such that $\bar{m}\ind_C P$. Let $C^\prime\models \tp^2(C/\bar{m})$ be such that $C^\prime\subseteq M$. Then $C^\prime\subseteq P(M)$ and $\bar{m}\ind_{C^\prime}P$. Therefore $\bar{m}\ind_{P(M)}P$. \end{proof} \begin{lemma}\label{lem:62} Let $a \in \monster^h$ and $q$ be a small tuple in~$P$. Assume that $a \ind_{M P} N$ and $a M \ind_{P(M) q} P$. Then, $a N \ind_{P(N) q} P$. \end{lemma} \begin{proof} \begin{align} a M \ind_{P(M) q} P \mathbb{R}a a \ind_{M q} P \mathbb{R}a & a \ind_{M q} M P.\\ a \ind_{M q} M P \ \&\ t a \ind_{M P} N P \mathbb{R}a & a \ind_{M q} N P.\\ a M \ind_{M q} N P \mathbb{R}a a M \ind_{N q} N P \mathbb{R}a & a N \ind_{N q} P.\\ \pair{N, P(N)} \prec \pair{\monster, P} \mathbb{R}a N \ind_{P(N)} P \mathbb{R}a & N q \ind_{P(N) q} P.\\ a N \ind_{N q} P \ \&\ t N q \ind_{P(N) q} P \mathbb{R}a & a N \ind_{P(N) q} P. \end{align} \end{proof} Remember that $T^d$ is the theory of lovely pairs. \begin{proviso*} For the remainder of this section, we assume that \textbf{``being lovely'' is a first order property} and that $\monster_P = \pair{\mathfrak C, P}$ is a monster model of~$T^d$. \end{proviso*} \subsection{NIP} \begin{theorem} If $T$ has NIP, then $T^d$ also has NIP. 
\end{theorem} \begin{proof}[Short proof] Apply Proposition~\ref{prop:model-complete} and \cite[Corollary~2.7]{CS10}. \end{proof} \begin{proof}[Long proof] Let $a\in \mathfrak C$. Suppose $\tp^2(a/N)$ is finitely satisfied in $M$. We show that there are no more than $2^{|M|}$ choices for $\tp^2(a/N)$. By local character there is some $C\subset N P$ such that $\card C \leq \kappa_0$ and $a\ind_C N P$. Therefore there is some $M^\prime$ such that $\pair{M,P(M)}\prec \pair{M^\prime,P(M^\prime)}\prec \pair{N,P(N)}$, $|M^\prime|=|M|$ and $a\ind_{M^\prime P}N P$. This $M^\prime$ depends on $a$. However we are assuming $\tp^2(a/N)$ is finitely realisable in $M$ which implies that $\tp^2(a/N)$ is invariant over $M$. Therefore any $M^{\prime\prime}\models \tp^2(M^\prime/M)$ such that $M^{\prime\prime}\subseteq N$ would work in place of $M^\prime$. There are no more than $2^{|M|}$ possibilities for $\tp^2(M^\prime/M)$. For each of these, fix some particular realisation. Then, when we choose $M^\prime$, we are actually selecting it from a list of at most $2^{|M|}$ things. First we select the correct $M^\prime$ from our list and assert that $a\ind_{M^\prime P}N P$. We now choose $\tp^2(a\bar{f}/M^\prime)$ extending $\tp^2(a/M^\prime)$ such that $aM^\prime\bar{f}$ is P\nobreakdash-\hspace{0pt}\relax independent\xspace. We know we can do this such that $\card{\bar{f}}\leq \card{M^\prime}= \card{M}$. This was also a choice of one thing from a list of no more than $2^{\card{M}}$ things. Let $\bar{f}^\prime$ be such that $\bar f \in P$, $a\bar{f}^\prime\models \tp^2(a\bar{f}/M^\prime)$, and $\tp^1(a\bar{f}^\prime/N)$ is finitely realisable in $M^\prime$. We specify $\tp^1(a\bar{f}^\prime/N)$ and we know that this too is a choice from a list of no more than $2^{\card{M}}$ things (since the original theory $T$ has NIP). We now have enough to completely determine $\tp^2(a\bar{f}^\prime/N)$. This is because the choice of $\tp^2(a\bar{f}^\prime/M^\prime)$ determines $\ensuremath{\text{\rm P-$\tp$}}(a\bar{f}^\prime)$ and, by Lemma~\ref{lem:62}, it also gives that $aN\bar{f}^\prime$ is P\nobreakdash-\hspace{0pt}\relax independent\xspace and then we only need $\tp^1(a\bar{f}^\prime/N)$ to determine $\tp^2(a\bar{f}^\prime/N)$. Overall, we made our choice from a list of $2^{|M|}\times 2^{|M|}\times 2^{|M|}=2^{|M|}$ things. \end{proof} \subsection{Stability, super-stability, \texorpdfstring{$\omega$}{omega}-stability} \begin{theorem} If $T$ is stable (resp. superstable, resp.\ totally transcendental), then $T^d$ also is. \end{theorem} \begin{proof} Assume that $T$ is stable. Remember that a theory $T$ is stable iff it is $\lambda$-stable for some cardinal $\lambda$, iff it is $\lambda$-stable for every $\lambda^{\card T} = \lambda$. Choose $\lambda$ a small cardinal such that $\lambda^{\kappa_0 + \card T} = \lambda$. Let $\pair{M, P(M)} \prec \monster_P$ be a model of~$T^d$ of cardinality~$\lambda$. Notice that $\card{S^1_{\kappa_0}(M)} = \lambda$. We must prove that $\card{S^2_1(M)} \leq \lambda$. Let $q \in S^2_1(M)$ and $c \in \mathfrak C$ satisfying~$q$. Let $\bar p \subset P$ such that $c \ind_{M \bar p} P$ and $\card{\bar p} < \kappa_0$. Since moreover $M \ind_{P(M)} P$, we have $c \ind_{P(M) \bar p} P$, and therefore $c M \bar p$ is P\nobreakdash-\hspace{0pt}\relax independent\xspace. Thus, $\tp^2(c M \bar p)$ is determined only by $\tp^1(c M \bar p)$ plus the P\nobreakdash-\hspace{0pt}\relax type\xspace of~$c$. 
Therefore, $\tp^2(c /M)$ is determined by $\tp^1(c /M)$ plus the P\nobreakdash-\hspace{0pt}\relax type\xspace of~$c$. Since $\card{\bar p} < \kappa_0$, we have $\card{S^2_1(M)} \leq \card{S^1_{\kappa_0}(M)} = \lambda$, and we are done. Assume now that $T$ is super-stable. Remember that a theory $T$ is super-stable iff there exists a cardinal $\mu$ such that $T$ it is $\lambda$-stable every cardinal $\lambda > \mu$, and that $T$ is totally transcendental iff we can take $\mu = \card T$. Moreover, since $T$ is super-stable, $\ind$ is superior, and therefore $\kappa_0 = \omega$. Let $\lambda \geq \mu$ and $\pair{M, P(M)} \prec \monster_P$ be a model of~$T^d$ of cardinality~$\lambda$. Let $\bar p \subset P$ such that $c \ind_{M \bar p} P$ and $\bar p$ is finite. As before, $c M \bar p$ is P\nobreakdash-\hspace{0pt}\relax independent\xspace, and thus $\tp^2(c /M)$ is determined by $\tp^1(c /M)$ plus the P\nobreakdash-\hspace{0pt}\relax type\xspace of~$c$. Since $\bar p$ is finite, we have $\card{S^2_1(M)} \leq \card{S^1_{< \omega}(M)} = \lambda$. Hence, $T^d$ is super-stable, and, if $T$ is totally transcendental, then $T^d$ is also totally transcendental. \end{proof} \begin{example} In general, if $T$ is geometric, $T^d$ is not geometric. For instance, if $\mathfrak C$ is an o-minimal structure expanding a field and $\ind = \ind[$\acl$]$, then $T^d$ is not geometric~\cite{fornasiero-matroids}. \end{example} \subsection{Simplicity and supersimplicity} \label{subsec:simple} By ``divides'' we will always mean ``divide in the sense of Shelah's'' (but we might have to specify in which structure). \begin{fact} \label{fact:simple-definition} The following are equivalent: \begin{enumerate} \item $\tp(\bar c / A \bar b)$ does not divide over $A$; \item for any indiscernible sequence $I = \pair{\bar b_i: i \in \mathbb{N}}$ with $\bar b_0 = \bar b$, let $p_i(x)$ be the copy of $\tp(\bar c/A \bar b)$ over $A \bar b_i$; then, there is a tuple $\bar c'$ realizing $\bigcup_i p_i(x)$; \item for any indiscernible sequence $I = \pair{\bar b_i: i \in \mathbb{N}}$, with $\bar b_0 = \bar b$, there is a tuple $\bar c'$ realizing $\tp(\bar c / A \bar b)$, such that $I$ is indiscernible over $A \bar c'$. \end{enumerate} \end{fact} \begin{proof} \cite[Remark~3.2(2) and Lemma~3.1]{casanovas07}. \end{proof} The following fact is well known: for a reference, see \cite[Exercise~29.1]{TZ}. \begin{fact}\label{fact:simple} Let $T$ be a simple theory. Let $\pair{M_i: i < \omega}$ be an indiscernible sequence over~$A$. Assume that $C \ind[f]_{A} M_0$. Let $p_0(y) \coloneqq \tp(C/M_0)$ and $p_i(y)$ be the copy of $p_0$ over~$M_i$. Then, there exists~$C'$, such that: \begin{enumerate} \item $C' \models \bigcup_i p_i(y)$; \item $C' \ind[f]_A \bigcup_i M_i$; \item $\pair{M_i: i < \omega}$ is an indiscernible sequence over~$A C'$. \end{enumerate} \end{fact} Remember that $A \ind[f]_B C$ implies $A \ind_B C$. \begin{remark}\label{rem:heir} Assume that $\pair{M, P(M)} \preceq \pair{\mathfrak C, P}$ (but not necessarily that $\pair{\mathfrak C, P} \models T^d$). Then, for every $\bar c \in P$, $\tp^2(\bar c/M)$ is finitely satisfiable in~$P(M)$. Therefore, $M \ind[f]_{P(M)} P$ (in the sense of both $T$ and~$T^d$), and $M \ind_{P(M)} P$. \end{remark} \begin{proposition}\label{prop:simple} If $T$ is simple, then $T^d$ is also simple. If $T$ is supersimple, then $T^d$ is also supersimple. \end{proposition} \begin{proof} The proof is almost identical to the one of~\cite[Proposition~6.2]{BPV}. 
We will use the notation $A \ind[f]_B C$ to mean that $A$ and $C$ do not fork over~$C$, in the sense of Shelah's, according to the theory~$T$ (and \emph{not} to the theory~$T^d$), while, when saying ``$\tp^2(\bar a/B)$ divides over~$C$'', we will imply ``according to~$T^d$''. Let $\pair{M, P(M)}$ be a small model of~$T^d$ and $\bar a$ be a finite tuple. We have to find $A \subseteq M$ such that $\card A \leq \card T$, and $\tp^2(\bar a/M)$ does not divide over~$A$ (by \cite[Lemma~6.1]{BPV}, this will prove that $T^d$ is simple). When $T$ is supersimple, we will see that $A$ could be chosen finite (and hence $T^d$ is supersimple). By Remark~\ref{rem:heir}, $M \ind[f]_{P(M)} P$. Hence, since $\ind[f]$ satisfies local character, there exist $C \subset P$ and $A \subset M$, both of cardinality at most~$\card T$, such that \begin{equation}\label{eq:simple-1} \bar a \ind[f]_{A C} M P. \end{equation} Moreover, since $C \subset P$, $C \ind[f]_{P(M)} M$, and hence, after maybe enlarging $A$, we can also assume that \begin{equation}\label{eq:simple-2} C \ind[f]_{P(A)} M. \end{equation} If moreover $T$ was supersimple, then $A$ and $C$ could be chosen finite. Let $\pair{M_i: i < \omega}$ be an $\mathcal L^2$-indiscernible sequence over~$A$, such that $M_0 = M$. \mathrm{eq}ref{eq:simple-2} implies that $C \ind[f]_A M$; therefore, we can apply Fact~\ref{fact:simple}. Let $p_0(y) \coloneqq \tp^1(C/M_0)$ and $p_i(y)$ be the copy of $p_0$ over~$M_i$. Then, there exists~$C'$ realizing the conclusions of Fact~\ref{fact:simple}. Notice that $C' \ind[f]_{P(A)} A$, thus, by transitivity, $C' \ind[f]_{P(A)} \bigcup_i M_i$, and hence $C' \ind_{P(A)} \bigcup_i M_i$. Since $\monster_P$ is lovely, there exists $C''$ in $P$ such that $C'' \equiv^1_{\bigcup_i M_i} C'$; thus, w.l.o.g\mbox{.}\xspace $C' \subset P$. Notice that $M \ind_{P(M)C} P$, thus $M C$ is P\nobreakdash-\hspace{0pt}\relax independent\xspace, and the same for $M C'$. Moreover, $M C$ and $M C'$ satisfy the same $\mathcal L$-type and the same P\nobreakdash-\hspace{0pt}\relax type\xspace: therefore, they have the same $\mathcal L^2$-type. Thus, by changing the sequence of $M_i$'s, we can assume that $C = C'$. Let $r(\bar x) \coloneqq \tp^1(\bar a / M C)$, and $r_i(\bar x)$ be the copy of $r(x)$ over $M_i$. By \mathrm{eq}ref{eq:simple-1}, $\bar a \ind[f]_{A C} M C$, moreover, $\pair{M_i: i < \omega}$ is $\mathcal L$-indiscernible over~$A C$. Thus, there exists $\bar a'$ realizing $\bigcup_i r_i(\bar x)$, such that \begin{equation}\label{eq:simple-3} \bar a' \ind[f]_{A C} \bigcup_i M_i C. \end{equation} By loveliness of $\monster_P$ again, we can assume that $\bar a' \ind_{\bigcup_i M_i C} P$ (note: here we have~$\ind$, not~$\ind[f]$). Thus, for each~$i$, by~\mathrm{eq}ref{eq:simple-3}, \begin{equation} \bar a' \ind_{M_i C} P. \end{equation} Since moreover $C \subset P$ and $M_i \ind_{P(M_i)} P$, we have $M_i C \ind_{P(M_i) C} P$, and therefore $\bar a' M_i C \ind_{P(M_i) C} P$. Thus, each $\bar a' M_i C$ is P\nobreakdash-\hspace{0pt}\relax independent\xspace. Moreover, they have all the same P\nobreakdash-\hspace{0pt}\relax type\xspace and the same $\mathcal L$-type. Thus, all the $\bar a' M_i C$ have the same $\mathcal L^2$-type. Moreover, by~\mathrm{eq}ref{eq:simple-1}, $\bar a \ind_{M C} P$, and thus $\bar a M C$ is P\nobreakdash-\hspace{0pt}\relax independent\xspace, and therefore $\bar a M C \mathrm{eq}uiv^2 \bar a' M_i C$. Therefore, by Fact~\ref{fact:simple-definition}, $\tp^2(\bar a/M)$ does not divide over~$C$, and we are done. 
\end{proof} \begin{question} Assume that $T$ is (super)rosy. Is $T^d$ also (super)rosy? \end{question} \section{Independence relation in lovely pairs} \label{sec:lovely-independence} In this section, we will assume that ``being lovely is first order'' and that $\monster_P = \pair{\mathfrak C, P}$ is a monster model of $T^d$. Let $\ind'$ be the following relation on subsets of $\monster_P$: $A \ind'_C B$ iff $A \ind_{C P} B$; we will write $\ind_P$ instead of $\ind'$. \begin{definition} $\ind$ satisfies (*) if:\\ For every $a \in \monster^k$, $b \in \monster^h$, for every $C$ tuple in $\monster$ (not necessarily of small length), if $a \notind_C b$, then there exists a finite subtuple $c$ of $C$ (of length~$l$) and an $\mathcal L$-formula $\phi(x, y, z)$, such that: \begin{enumerate} \item $\monster \models \phi(a, b, c)$; \item for every $a' \in \monster^k$, for every $c'$ subtuple of $C$ of length~$l$, if $\monster \models \phi(a', b, c')$, then $a' \notind_{C} b$. \end{enumerate} \end{definition} Notice that (*) implies Strong Finite Character. \begin{remark} The independence relations in examples~\ref{ex:independence-strict} and~\ref{ex:independence-matroid} satisfy (*). \end{remark} \begin{proof} Let us show that when $T$ is simple, then $\ind[f]$ satisfies (*). Assume that $a \notind_C b$. Let $p(y) \coloneqq \tp(b /C a)$ and $p_0 \coloneqq \tp (b /C)$. Thus, $p$~divides over~$C$. Hence, by \cite[Proposition~2.3.9 and Remark~2.3.5]{wagner}, there exist a formula~$\psi$, a cardinal~$\lambda$, a finite subtuple $c \subset C$, and a formula $\theta(y, a, c) \in p(y)$, such that $D_0(\theta(y, a, c)) < D_0(p_0)$, where $D_0(\cdot) \coloneqq D(\cdot, \psi,\lambda)$. Let $n \coloneqq D_0(p_0)$. By \cite[Remark~2.3.5]{wagner}, the set $\set{a', c': D_0(\theta(y, a', c')) < n}$ is ord-definable; hence, there exists a formula $\sigma(x, z)$ such that $\mathfrak C \models \sigma(a, c)$, and for every $c'$ and~$a'$, if $\sigma(a', c')$ holds, then $D_0(\theta(y, a', c')) < n$. Define $\phi(x, y, z) \coloneqq \theta(y, x, z) \ \&\ \sigma(x, z)$. Then, $\phi$ satisfies the conclusion of~(*). The other cases are similar: when $T$ is rosy, use the local \th-ranks instead of the rank $D$ (\cite[\S3]{onshuus06}); when $\mat$ is an existential matroid, to prove (*) for $\ind[$\mat$]$ use the associated global rank. \end{proof} \begin{proposition} $\ind_P$ is an independence relation on $\monster_P$; the constant $\kappa_0$ for the local character axiom is the same for $\ind_P$ and for~$\ind$. Besides, the closure operator $\mat_P$ induced by $\ind_P$ satisfies $\mat_P(X) = \mat(X P)$; in particular, $\mat_P(\emptyset) = P$. Moreover, if $\ind$ satisfies \textup(*\textup), then $\ind_P$ also satisfies \textup(*\textup). \end{proposition} In particular, one can consider $\ind_P$-lovely pairs. \begin{proof} Let us verify the various axioms of an independence relation. Invariance is clear from invariance of~$\ind$. Symmetry, Monotonicity, Base Monotonicity, Transitivity, Normality, Finite Character and Local Character (with the same constant $\kappa_0$) follow from Remark~\ref{rem:localization}. It remains to prove Extension; instead, we will prove the Existence Axiom (which, under the other axioms, is equivalent to Extension \cite[Exercise 1.5]{adler}). Let $A$, $B$ and $C$ be small subsets of $\monster$. Let $P_0 \subset P$ be a small subset, such that $P(A B C) \subseteq P_0$ and $A B C \ind_{P_0} P$ (here we use that Local Character holds also for large subsets of~$\mathfrak C$).
W.l.o.g\mbox{.}\xspace, we can assume that $P_0 \subset A \cap B \cap C$. Let $A' \equiv^1_{C} A$ be such that $A' \ind_{C} B$. Since $\monster_P$ is a lovely pair, there exists $A'' \equiv^1_{C B} A'$ such that $A'' \ind_{C B} P$. Notice that $A'' \ind_{C} B$. Hence, by some forking calculus, $A'' \ind_{C P} B$. Moreover, $A'' C \equiv^1 A C$, and $A C$ is P\nobreakdash-\hspace{0pt}\relax independent\xspace. We claim that $A'' C$ is also P\nobreakdash-\hspace{0pt}\relax independent\xspace: in fact, $C B \ind_{P_0} P$ and $A'' \ind_{B C} P$, and hence $A'' B C \ind_{P_0} P$. Since $P$ is closed, this also implies that $P(A'' B C) = P_0$, and thus, since $\monster_P$ is lovely, $A'' C \equiv^2 A C$, and we are done. Assume now that $\ind$ satisfies (*). Let $a$ and $b$ be finite tuples in~$\monster$, and $C$ be a (not necessarily small) subset of~$\monster$. Assume that $a \notind_{P C} b$. Then, by (*), there exists an $\mathcal L$-formula $\phi(x, y, z, w)$ and finite tuples $c$ in $C$ and $p$ in $P$, such that $\monster \models \phi(a, b, c, p)$ and, for every $a' \subset \monster$, $c' \subset C$ and $p' \subset P$, if $\monster \models \phi(a', b, c', p')$, then $a' \notind_{C P} b$. Let $\psi(x, y, z)$ be the $\mathcal L^2$-formula $(\exists w \in P)\, \phi(x, y, z, w)$. Then, $\psi$ witnesses the fact that $\ind_P$ satisfies (*). \end{proof} \begin{conjecture} There exists an independence relation $\ind[2]$ on~$\monster_P$, such that: \begin{enumerate} \item $\ind[2]$ coincides with $\ind$ on subsets of $P$; \item $\ind[2]_P = \ind_P$. \end{enumerate} \end{conjecture} \subsection{Rank} Assume that $\ind$ is superior (that is, $\kappa_0 = \omega$). We have seen that then $\ind_P$ is also superior. Let $\U \coloneqq \U^{\ind}$ be the rank on $\mathfrak C$ induced by $\ind$ and $\U^{\mathrm P}$ the rank on~$\monster_P$ induced by $\ind_P$. We now investigate the relationship between the two ranks. Given a partial $\mathcal L$-type $\pi(\bar x)$, we define $\U(\pi)$ as the supremum of the $\U(q)$, where $q(\bar x)$ varies among the complete $\mathcal L$-types extending $\pi(\bar x)$, and similarly for $\U^{\mathrm P}$ on partial $\mathcal L^2$-types. However, if $q$ is a complete $\mathcal L$-type, then $q$ is also a partial $\mathcal L^2$-type. Hence, we can compare $\U(q)$ and $\U^{\mathrm P}(q)$. \begin{lemma}[{\cite[8.31]{fornasiero-matroids}}] For every $B \subset \mathfrak C$ small and every $q \in S^1_n(B)$, $\U(q) = \U^{\mathrm P}(q)$. The same equality holds for partial $\mathcal L$-types. \end{lemma} \begin{proof} First, we will prove, by induction on $\alpha$, that, for every ordinal number $\alpha$, and every $\mathcal L$-type~$q'$, if $\U^{\mathrm P}(q') \geq \alpha$, then $\U(q') \geq \alpha$. If $\alpha$ is limit, the conclusion follows immediately from the inductive hypothesis. Assume that $\alpha = \beta + 1$, and that $q' = q$. Let $C \supset B$ be a small set and $\bar a \in \mathfrak C^n$, such that $\U^{\mathrm P}(\bar a / C) \geq \beta$ and $\bar a \notind_{PB} C$ ($C$ and $\bar a$ exist by definition of~$\U^{\mathrm P}$). By inductive hypothesis, $\U(\bar a / C) \geq \beta$. Let $P_0 \subset P$ be small (actually, finite), such that $\bar a \ind_{C P_0} P$; define $C' := C P_0$. Notice that $\bar a \notind_{P B} C'$ and $\bar a \ind_{C'} P$. \begin{claim} $\bar a \notind_B C'$. \end{claim} If not, then, since $B \subset C'$ and by transitivity, we would have $\bar a \ind_B C' P$, and therefore $\bar a \ind_{P B} C'$, contradiction.
Moreover, since $\mat^{\mathrm P}(C') = \mat^{\mathrm P}(C)$, we have $\U^{\mathrm P}(a / C') = \U^{\mathrm P}(a / C) \geq \beta$. Hence, by inductive hypothesis, $\U(a / C') \geq \beta$. Since $a \notind_B C'$, we have $\U(a / B) \geq \beta + 1 = \alpha$, and we are done. Therefore, $\U(q) \geq \U^{\mathrm P}(q)$. Second, we will prove by induction on~$\alpha$, that, for every ordinal number~$\alpha$, and every $\mathcal L$-type~$q'$, if $\U(q') \geq \alpha$, then $\U^{\mathrm P}(q') \geq \alpha$. If $\alpha$ is a limit ordinal, the conclusion is immediate from the inductive hypothesis. If $\alpha = \beta + 1$, let $C' \supset B$ be a small set and $r' \in S^1_n(C')$, such that $q \mathrel{\sqsubseteq_{\mkern-15mu{\not}\mkern15mu}} r'$ and $\U(r') \geq \beta$. By the Extension Property, there exists $C \equiv^1_B C'$ such that $C \ind_B P$; let $f \in \aut(\mathfrak C/B)$ be such that $f(C') = C$, and let $r := f(r')$. Then, $q \mathrel{\sqsubseteq_{\mkern-15mu{\not}\mkern15mu}} r$ and $\U(r) \geq \beta$. By inductive hypothesis, $\U^{\mathrm P}(r) \geq \beta$. Let $a \in \mathfrak C$ be any realization of~$r$. If $a \ind_{PB} C$, then, since $C \ind_B P$, we have $a \ind_B C$, contradicting the fact that $r$ is a forking extension of~$q$. Thus, $a \notind_{PB} C$, and hence $\U^{\mathrm P}(q) \geq \U^{\mathrm P}(a/B) > \U^{\mathrm P}(r) \geq \beta$, and we are done. Therefore, $\U^{\mathrm P}(q) \geq \U(q)$. For the case when $q$ is a partial $\mathcal L$-type, let $r \in S^1_n(B)$ be any complete type extending~$q$. Then, $\U(r) = \U^{\mathrm P}(r)$. Since this is true for any such~$r$, we have $\U(q) = \U^{\mathrm P}(q)$. \end{proof} \subsection{Approximating definable sets} We say that a rank $\U$ is \intro{continuous} if, for every ordinal~$\alpha$ and small set~$B$, the set $\set{q \in S_n(B) : \U(q) > \alpha}$ is closed in $S_n(B)$. \begin{proposition}[{\cite[8.36]{fornasiero-matroids}}] Assume that $\ind$ is superior and $\U^{\mathrm P}$ is continuous. Let $\bar b$ be a small P\nobreakdash-\hspace{0pt}\relax independent\xspace tuple in~$\mathfrak C$. Let $X \subseteq \mathfrak C^n$ be $T$-definable over~$\bar b$. Let $Y \subseteq X$ be $T^d$-definable over~$\bar b$. Then, there exists $Z \subseteq X$ $T$-definable over~$\bar b$, such that, for every $\bar c \in Z \mathbin \Delta Y$, $\U^{\mathrm P}(\bar c / \bar b) < \U^{\ind}(X)$. \end{proposition} \begin{proof} Let $\alpha \coloneqq \U^{\ind}(X)$. Let $W \coloneqq \set{q \in S^2_X(\bar b) : \U^{\mathrm P}(q) = \alpha}$. Since $\U^{\mathrm P}$ is continuous, $W$~is closed. Let $\rho \colon S^2_X(\bar b) \to S^1_X(\bar b)$ be the restriction map and $V \coloneqq \rho(W)$. Define $\tilde Y \coloneqq \rho\Pa{S^2_Y(\bar b) \cap W} \subseteq V$. By Transitivity and Proposition~\ref{prop:back-and-forth}, $\rho$~is a homeomorphism between $W$ and~$V$, and therefore $\tilde Y$ is clopen in~$V$, and $V$ is closed in $S^1_{X}(\bar b)$. By standard arguments, there exists $Z \subseteq \mathfrak C^n$ which is $T$-definable over $\bar b$ and such that $S^1_Z(\bar b) \cap V = \tilde Y$. Then, $Z$ satisfies the conclusion. \end{proof} \begin{comment} Remember the definition of Strong Finite Character from~\cite{adler}. \begin{remark} $\ind$ satisfies Strong Finite Character iff, for every $A$ and $B$ small subsets of~$\mathfrak C$, and every cardinal~$\alpha$, the set \[ \set{\bar c \in \mathfrak C^\alpha: \bar c \ind_A B} \] is closed in $S_{\alpha}(A B)$. \end{remark} We do not know any example of an independence relation that does not satisfy Strong Finite Character.
\begin{proviso*} For the remainder of this subsection, we assume that $\ind$ satisfies Strong Finite Character. \end{proviso*} \begin{definition} Let $\bar b$ be a small P\nobreakdash-\hspace{0pt}\relax independent\xspace tuple in~$\mathfrak C$, and $X$ be a subset of $\mathfrak C^n$ which is $T^d$-definable over~$\bar b$. We say that $X$ is $P$-negligible if, for every $\bar c \in X$, $\bar c \notind_{\bar b} P$. \end{definition} \begin{remark} Let $\bar b$ be P\nobreakdash-\hspace{0pt}\relax independent\xspace and $X$ be $T$-definable over $\bar b$ and $P$-negligible. Then, by the Extension property, $X$~is empty. \end{remark} \begin{remark} Let $\bar b$ be a small tuple in~$\mathfrak C$, and let \[ W := \set{q \in S^2_n(\bar b) : q \ind_{\bar b} P}. \] Then, $W$ is closed in $S^2_n(\bar b)$. \end{remark} \begin{proposition} Let $\bar b$ be a small P\nobreakdash-\hspace{0pt}\relax independent\xspace subset of~$\mathfrak C$. Let $Y \subseteq \mathfrak C^n$ be $T^d$-definable over~$\bar b$. Then, there exists $Z \subseteq \mathfrak C^n$ which is $T$-definable over~$\bar b$, and such that $Z \mathbin \Delta Y$ is $P$-negligible. \end{proposition} \begin{proof} Define $W \coloneqq \set{q \in S_n^2(\bar b): q \ind_{\bar b} P}$. Let $\rho \colon S^2_n(\bar b) \to S^1_n(\bar b)$ be the restriction map and $V \coloneqq \rho(W)$. Define $\tilde Y \coloneqq \rho\Pa{S^2_Y(\bar b) \cap W} \subseteq V$. By Transitivity and Proposition~\ref{prop:back-and-forth}, $\rho$~is a homeomorphism between $W$ and~$V$, and therefore $\tilde Y$ is clopen in~$V$, and $V$ is closed in $S^1_{n}(\bar b)$. By standard arguments, there exists $Z \subseteq \mathfrak C^n$ which is $T$-definable over $\bar b$ and such that $S^1_Z(\bar b) \cap V = \tilde Y$; thus, $Z \mathbin \Delta Y$ is $P$-negligible. \end{proof} The next lemma shows that the above proposition is a generalization of \cite[4.1.7]{boxall} and \cite[8.36]{fornasiero-matroids}. \begin{lemma} Assume that $\ind$ is superior. Let $\bar b$ be P\nobreakdash-\hspace{0pt}\relax independent\xspace, $X$~be a subset of $\mathfrak C^n$ which is $T$-definable over~$\bar b$, and~$Y$ be a subset of $X$ which is $T^d$-definable over~$\bar b$. If $Y$ is $P$-negligible, then, for every $\bar c \in Y$, $\U^{\mathrm P}(\bar c) < \U^{\ind}(X)$. In particular, if $\U^{\ind}(X) = 0$, then $Y$ is empty, and if instead $\U^{\ind}(X) = 1$, then $Y \subseteq \mat(\bar b P)$. \end{lemma} \begin{proof} Let $\bar c \in Y$. If, for contradiction, $\U^{\mathrm P}(\bar c / \bar b) = \U^{\ind}(X)$, then \[ \U^{\ind}(\bar c / \bar b P) = \U^{\ind}(X) \geq \U^{\ind}(\bar c / \bar b) \geq \U^{\ind}(\bar c / \bar b P), \] and therefore $\bar c \ind_{\bar b} P$, absurd. \end{proof} \end{comment} \section{Producing more independence relations} \label{sec:new-independence} For this section, we assume that loveliness is first-order and that $\monster_P = \pair{\mathfrak C, P}$ is a monster model of~$T^d$. Moreover, we assume that $\ind$ is superior, and we denote $\U \coloneqq \U^{\ind}$. \begin{remark} Let $B$ be a small subset of $\mathfrak C$ and $q$ be a complete type over~$B$. Assume that $\alpha \leq \U(q)$. Then, there exists a complete type $r$ extending~$q$, such that $\U(r) = \alpha$. \end{remark} \begin{proof} By induction on $\beta := \U(q)$. If $\beta = \alpha$, let $r := q$. Otherwise, $\beta > \alpha$. If, for every forking extension $r$ of~$q$, we had $\U(r) < \alpha$, then, by definition, $\U(q) \leq \alpha$, absurd. Hence, there exists a forking extension $r$ of~$q$ such that $\U(r) \geq \alpha$.
Thus, $\alpha \leq \U(r) < \beta$, and therefore, by inductive hypothesis, there exists an extension $s$ of~$r$, such that $\U(s) = \alpha$. \end{proof} Let $\theta$ be an ordinal such that $\theta = \omega^\delta$ for some~$\delta$. We will use the ``big $\bigo$'' and ``small $\smallo$'' notations: $\alpha = \smallo(\theta)$ (or $\theta \gg \alpha$) if $\alpha < \theta$, $\alpha = \beta + \smallo(\theta)$ if there exists $\varepsilon < \theta$ such that $\alpha = \beta + \varepsilon$, and $\alpha = \bigo(\theta)$ if there exists $n \in \mathbb{N}$ such that $\alpha \leq n \theta$. Notice that, since $\theta$ is a power of $\omega$, $\smallo(\theta) + \smallo(\theta) = \smallo(\theta)$. Define $\ind[\theta]$, the coarsening of $\ind$ at~$\theta$, in the following way: for every $\bar a$ finite tuple in $\mathfrak C$ and every $B$, $C$ small subsets of $\mathfrak C$, $\bar a \ind[\theta]_B C$ if $\U(\bar a / B) = \U(\bar a/B C) + \smallo(\theta)$. If $A$ is a small subset of $\mathfrak C$, define $A \ind[\theta]_B C$ if, for every $\bar a$ finite tuple in~$A$, $\bar a \ind[\theta]_B C$. We will use also the notation $(\ind)^\theta$ for~$\ind[\theta]$. Notice that $\ind[\theta]$ is trivial if $\theta$ is large enough. Assume that \begin{itemize} \item[(**)] For every $a \in \mathfrak C$, $\U(a / \emptyset) = \bigo(\theta)$. \end{itemize} Notice that Condition (**) is equivalent to: \begin{itemize}\item[] For every $\bar a$ finite tuple in $\mathfrak C$ and every $B \subset \mathfrak C$, $\U(\bar a / B) = \bigo(\theta)$. \end{itemize} However, it might happen that $\U(\mathfrak C) \gg \theta$; for instance, if $\theta = 1$, it can happen that $\U(a/\emptyset)$ is finite for every $a \in \mathfrak C$ and, for every $n \in \mathbb{N}$, there exists $a_n \in \mathfrak C$ such that $\U(a_n / \emptyset) > n$, and thus $\U(\mathfrak C) = \omega \gg 1$. \begin{proposition} $\ind[\theta]$ is a superior independence relation on~$\mathfrak C$, and $\ind$ refines~$\ind[\theta]$. \end{proposition} \begin{proof} The only non-trivial axiom is Left Transitivity, which is also the only place where we use Condition (**) (Right Transitivity instead is always true). Assume that $\bar c \ind[\theta]_B A$ and $\bar d \ind[\theta]_{B \bar c} A$, for some finite tuples $\bar c$ and $\bar d$, and some small sets $A$ and~$B$. We claim that $\bar c \bar d \ind[\theta]_{B} A$. In fact, by Lascar's inequalities, we have \begin{multline*} \U(\bar c \bar d / B) \leq \U(\bar d/ \bar c B ) \oplus \U(\bar c / B) \leq\\ \Pa{\U(\bar d / \bar c B A) + \smallo(\theta)} \oplus \Pa{\U(\bar c / B A) + \smallo(\theta)} = \U(\bar d / \bar c B A) \oplus \U(\bar c / B A) + \smallo(\theta). \end{multline*} By (**), we have that $\U(\bar d / \bar c B A) \oplus \U(\bar c / B A) = \U(\bar d / \bar c B A) + \U(\bar c / B A) + \smallo(\theta)$. Hence, again by Lascar's inequalities, $\U(\bar c \bar d / B) \leq \U(\bar d \bar c / BA) + \smallo(\theta)$. Symmetry then follows (see~\cite[Theorem~1.14]{adler}). \end{proof} Let $\mat^{\theta}$ be the closure operator induced by $\ind[\theta]$: thus, $a \in \mat^{\theta}(B)$ iff $\U(a/B) < \theta$. Let $\U^\theta$ be the rank induced by~$\ind[\theta]$. \begin{remark} Let $B$ be a small subset of~$\mathfrak C$. For every $q \in S_m(B)$, we have $\U(q) = n \theta + \smallo(\theta)$, for a unique $n \in \mathbb{N}$. Then, $\U^\theta(q) = n$. \end{remark} \begin{proof} First, we will prove, by induction on~$n$, that, if $\U(q) \geq n \theta$, then $\U^\theta(q) \geq n$.
If $n = 0$, there is nothing to prove. Assume that we have already proved the conclusion for~$n$, and let $q \in S_m(B)$ be such that $\U(q) \geq (n + 1)\theta$. Let $C \supset B$ be small and $r \in S_m(C)$ be an extension of $q$ such that $\U(r) = n \theta$. Thus, by inductive hypothesis, $\U^\theta(r) \geq n$, and, by definition, $r$ is a forking extension of $q$ in the sense of~$\ind[\theta]$. Therefore, $\U^\theta(q) \geq \U^\theta(r) + 1 \geq n + 1$. Conversely, we will now prove, by induction on~$n$, that, if $\U^\theta(q) \geq n$, then $\U(q) \geq n \theta$. Again, if $n = 0$, there is nothing to prove. Assume that we have already proved the conclusion for~$n$, and let $q \in S_m(B)$ be such that $\U^\theta(q) \geq n + 1$. Let $r$ be a $\ind[\theta]$-forking extension of $q$ such that $\U^\theta(r) \geq n$. By inductive hypothesis, $\U(r) \geq n \theta$. Moreover, by our choice of~$r$, $\U(q) \geq \U(r) + \theta \geq (n + 1) \theta$. \end{proof} Notice that there is at most one $\theta$ satisfying (**) and such that $\ind[\theta]$ is nontrivial, namely the minimum ordinal satisfying (**). \begin{example} Let $T = ACF_0$. Let $T^d$ be the theory of Beautiful Pairs for~$T$; that is, $\pair{M, P(M)} \models T^d$ if $M \models ACF_0$ and $P(M)$ is a proper algebraically closed subfield of~$M$. Then, $T^d$ is $\omega$-stable of $\U$-rank~$\omega$. Let $\monster_P$ be a monster model of~$T^d$. We have two natural independence relations on $\monster_P$: Shelah's forking $\ind[f]$ and $\ind_P$, where $\ind$ is Shelah's forking relation for models of~$T$. We claim that $(\ind[f])^\omega = \ind_P$. In fact, according to both $(\ind[f])^\omega$ and $\ind_P$, $\pair{M, P}$ has rank~1. However, since $\pair{M, P(M)}$ expands a field, there is at most one independence relation on it that has rank 1 \cite{fornasiero-matroids}. \end{example} \end{document}
\begin{document} \title{A tight negative example for MMS fair allocations} \author{Uriel Feige\thanks{Weizmann Institute, Israel. \texttt{[email protected]}}, Ariel Sapir\thanks{ \texttt{[email protected]}}, Laliv Tauber\thanks{Bar Ilan University, Israel. \texttt{[email protected]}}} \maketitle \begin{abstract} We consider the problem of allocating indivisible goods to agents with additive valuation functions. Kurokawa, Procaccia and Wang {[JACM, 2018]} present instances for which every allocation gives some agent less than her maximin share. We present such examples with larger gaps. For three agents and nine items, we design an instance in which at least one agent does not get more than a $\frac{39}{40}$ fraction of her maximin share. {Moreover, we show that there is no negative example in which the difference between the number of items and the number of agents is smaller than six, and that the gap (of $\frac{1}{40}$) of our example is worst possible among all instances with nine items.} For $n \ge 4$ agents, we show examples in which at least one agent does not get more than a $1 - \frac{1}{n^4}$ fraction of her maximin share. {In the instances designed by Kurokawa, Procaccia and Wang, the gap is exponentially small in $n$.} {Our proof techniques extend to allocation of chores (items of negative value), though the quantitative bounds for chores are different from those for goods. For three agents and nine chores, we design an instance in which the MMS gap is $\frac{1}{43}$.} \end{abstract} \section{Introduction} We consider allocation problems with $m$ items, $n$ agents, and nonnegative additive valuation functions. The {\em maximin share} (MMS) of an agent $i$ is the highest value $w_i$, such that if all agents have the same valuation function $v_i$ that $i$ has, there is an allocation in which every agent gets value at least $w_i$. An allocation is {\em maximin fair} if every agent gets a bundle that she values at least as much as her MMS. Hence if all agents have the same valuation function, a maximin fair allocation exists. Perhaps surprisingly, if agents have different additive valuation functions, then a maximin fair allocation need not exist. Kurokawa, Procaccia and Wang~\cite{KPW18} present negative examples showing that for every $n \ge 3$, there are instances for which in every allocation, at least one agent does not receive her MMS. The {\em gap} (namely, the fraction of MMS lost by some agent) is not stated explicitly in~\cite{KPW18}. However, one can derive explicit gaps from their examples by substituting values for certain parameters that are used in the examples. Doing so gives gaps that are exponentially small in $n$. Even for small $n$, the gaps shown by these examples are orders of magnitude smaller than the positive results that are known, where these positive results show that (for additive valuations) there always is an allocation that gives every agent at least a $\frac{3}{4} + \Omega(\frac{1}{n})$ fraction of her MMS~\cite{KPW18,BK20,GHSSY18,GT20}. We present negative examples with substantially larger gaps than those shown in~\cite{KPW18}. The motivation for designing such examples is that they are needed if one is to ever establish tight bounds on the fraction of the MMS that can be guaranteed to be given to agents. Though we are far from establishing tight bounds for general instances, our bounds are tight in special cases.
{In particular, when there are at most nine items and the additive valuation functions are integer valued, our results imply the following tight threshold phenomenon. If for every agent, {her valuation function is such that the sum of all item values} is at most~$119$, then a maximin fair allocation always exists. If the sum of item values is~$120$, then there are instances in which a maximin fair allocation does not exist, and then the gap is $\frac{1}{40}$. If the sum of item values is larger than~$120$, the gap cannot be larger than $\frac{1}{40}$.} \subsection{Our results} The term {\em negative example} will refer to an allocation instance with additive valuation functions in which there is no allocation that gives every agent her MMS. The term $Gap(n,m)$ refers to the largest possible value $\delta \ge 0$, such that there is an allocation instance with $m$ items and $n$ agents with additive valuations, such that in every allocation there is an agent that gets at most a $1 - \delta$ fraction of her MMS. In our work, we find negative examples with the smallest possible number of items. The number of items turns out to be nine. Among allocation instances with nine items, we find the allocation instance with the largest gap. Theorem~\ref{thm:largeGap} is based on this allocation instance. \begin{theorem} \label{thm:largeGap} There is an allocation instance with three agents and nine items for which in every allocation, at least one of the agents does not get more than a $\frac{39}{40}$ fraction of her MMS. In other words, $$Gap(n = 3 \; , \; m = 9) \ge \frac{1}{40}.$$ \end{theorem} The minimality of the number of items in Theorem~\ref{thm:largeGap} is implied by Theorem~\ref{thm:n+5}, together with Proposition~\ref{pro:MMSexists}, which implies that $n\ge 3$ in every negative example. \begin{theorem} \label{thm:n+5} For every $n$, every allocation instance with $n$ agents and $m \le n + 5$ items has an allocation in which every agent gets her MMS. In other words, $$Gap(n \ge 1 \; , \; m \le n + 5) = 0$$ \end{theorem} A weaker version of Theorem~\ref{thm:n+5} (with $m \le n + 3$) was previously proved in~\cite{BL15}. The maximality of the gap in Theorem~\ref{thm:largeGap} (when there are nine items) is implied by Theorem~\ref{thm:tightGap}. \begin{theorem} \label{thm:tightGap} Every allocation instance with three agents and nine items has an allocation in which every agent gets at least a $\frac{39}{40}$ fraction of her MMS. In other words, $$Gap(n = 3 \; , \; m = 9) \le \frac{1}{40}$$ \end{theorem} The proof of Theorem~\ref{thm:tightGap} is based on analysis that reduces the infinite space of possible negative examples into a finite number of classes. For each class, the negative example with largest possible gap within the class can be determined by solving a linear program. For every class we solved the respective linear program using a standard LP solver, and verified that there is no negative example (with 9 items) for which the gap is larger than $\frac{1}{40}$. Theorem~\ref{thm:largeGap} is concerned with three agents. We also provide negative examples for every number of agents $n \ge 4$. The gaps in these negative examples deteriorate at a rate that is polynomial in $\frac{1}{n}$. \begin{theorem} \label{thm:polynomialGap} For every $n \ge 4$, there is an allocation instance with $n$ agents and at most $3n + 3$ items for which in every allocation, at least one of the agents does not get more than a $1 - \frac{1}{n^4}$ fraction of her MMS.
In other words, $$Gap(n \ge 4 \; , \; m \le 3n+3) \ge \frac{1}{n^4}$$ \end{theorem} Our negative examples that prove Theorem~\ref{thm:polynomialGap} are inspired by, and contain ingredients from, the negative examples presented in~\cite{KPW18}. The new aspect in our constructions is the formulation of Lemma~\ref{lem:split}, the observation that this lemma suffices for the proofs to go through (a related but more demanding property was used in~\cite{KPW18}), and a design, based on modular arithmetic, that satisfies the Lemma. {The techniques of this paper extend from allocation of goods to allocation of chores (items of negative value, or equivalently, positive dis-utility). We find that results for chores are qualitatively similar to those for goods, though quantitative values of the gaps are different from those values for goods. Likewise, the proof techniques for the case of chores are similar to those shown in this paper for goods, though some of the details change. To simplify the presentation in this paper, all sections of the paper refer only to allocation of goods, except for Section~\ref{sec:chores} that refers only to allocation of chores. Section~\ref{sec:chores} is kept short, and presents only the adaptation of Theorem~\ref{thm:largeGap} to the case of chores. \begin{theorem} \label{thm:chores} There is an allocation instance with three agents and nine chores for which in every allocation, at least one of the agents does not get less than a $\frac{44}{43}$ fraction of her MMS (of dis-utility). In other words, the instance has an MMS gap of $\frac{1}{43}$. \end{theorem} We have verified that with eight chores, there always is an allocation giving every agent no more dis-utility than her MMS, and (using a computer assisted proof) that for nine chores, $\frac{1}{43}$ is the largest possible gap. However, we omit details of this verification from this manuscript.} \subsection{Related work} In this section we review related work that is most relevant to the current paper. In particular, we shall only review papers that concern the maximin share (there are numerous papers considering other fairness notions), and only in the context of nonnegative additive valuation functions (some of the works we cite consider also other classes of valuation functions). The maximin share was introduced by Budish~\cite{Budish11}. The fact that there are allocation instances with additive valuations in which no MMS allocation exists was shown in~\cite{KPW18}. That paper presents an instance with three agents and twelve items that has no MMS allocation. The gap in that instance as presented in that paper is around $10^{-6}$, though by optimizing parameters associated with the instance it is possible to increase the gap to the order of $10^{-3}$. The paper also shows that for every $n \ge 4$ there are instances with $3n+4$ items and no MMS allocation. The gaps in these instances are exponentially small in $n$, and this is inherent in the construction given in that paper. Work on proving the existence of allocations that give a large fraction of the MMS was initiated in~\cite{KPW18}. The largest fraction currently known is $\frac{3}{4} + \frac{1}{12n}$~\cite{GT20}. For the case of three agents, it was shown in~\cite{GM19} that there is an allocation that gives every agent at least an $\frac{8}{9}$ fraction of her MMS. Our Theorem~\ref{thm:largeGap} shows that one cannot guarantee more than a $\frac{39}{40}$ fraction in this case.
For the case of four agents, it was shown in~\cite{GHSSY18} that there is an allocation that gives every agent at least a $\frac{4}{5}$ fraction of her MMS. In~\cite{BL15} it was shown that an MMS allocation always exists if $m \le n+3$. We improve the bound to $m \le n + 5$, and show that this is best possible when $n=3$. In~\cite{AMNS17} it was shown that if all items have values in $\{0,1,2\}$ then an MMS allocation exists. Our negative example in Theorem~\ref{thm:largeGap} uses integer values as high as~26. \subsection{Preliminaries} An allocation instance has a set $M = \{e_1, \ldots, e_m\}$ of $m$ items and a set $\{1, \ldots, n\}$ of $n$ agents. The term {\em bundle} will always denote a set of items. Every agent $i$ has a valuation function $v_i: 2^M \rightarrow \mathbb{R}$ that assigns a value to every possible bundle of items. We assume throughout that valuation functions $v$ are normalized ($v(\emptyset) = 0$) and monotone ($v(S) \le v(T)$ for all $S \subset T \subseteq M$). An $n$-partition of $M$ is a partition of $M$ into $n$ disjoint bundles. $P_n(M)$ denotes the set of all $n$-partitions of $M$. An allocation $A = (A_1, \ldots, A_n)$ is an $n$-partition of $M$, with the interpretation that for every $1 \le i \le n$, agent $i$ receives bundle $A_i$. The utility that agent $i$ derives from this allocation is $v_i(A_i)$. \begin{definition} \label{def:MMS} Consider an allocation instance with a set $M = \{e_1, \ldots, e_m\}$ of $m$ items and a set $\{1, \ldots, n\}$ of $n$ agents. Then the maximin share of agent $i$, denoted by $MMS_i$, is the maximum over all $n$-partitions of $M$, of the minimum value under $v_i$ of a bundle in the $n$-partition. $$MMS_i = \max_{(B_1, \ldots, B_n) \in P_n(M)} \min_j [v_i(B_j)]$$ An $n$-partition that maximizes the above expression will be referred to as an $MMS_i$-partition. \end{definition} An allocation that gives every agent at least her MMS is referred to as an MMS allocation. A valuation function $v$ is additive if $v(S) = \sum_{e \in S} v(e)$. Though Definition~\ref{def:MMS} applies to arbitrary valuation functions, in this paper we shall only consider additive valuation functions. By convention, in all remaining parts of the paper, all valuation functions are additive, unless explicitly stated otherwise. We now review some known propositions concerning the MMS (with additive valuations). For completeness, we also sketch the proofs of these propositions, though we emphasize that all propositions in this section were known and are not original contributions of the current paper. \begin{proposition} \label{pro:MMSexists} Every allocation instance in which either all agents or all agents but one have the same valuation function has an MMS allocation. \end{proposition} \begin{proof} Let $v = v_1 = \ldots = v_{n-1}$ be the valuation function shared by all agents but one, and let $v_n$ be the valuation function of agent~$n$ who may have a different valuation function. Let $B_1, \ldots, B_n$ be an MMS partition with respect to $v$. For every agent $i$ with $1 \le i \le n-1$, every one of these bundles has value at least $MMS_i$. Allocate to agent~$n$ a bundle $B_j$ that maximizes $v_n(B_j)$, and allocate the remaining bundles to the other agents. Additivity of $v_n$ implies that $v_n(B_j) \ge \frac{1}{n}v_n(M) \ge MMS_n$, and hence every agent gets at least her MMS. \end{proof} The following three propositions concern reduction steps that allow us to replace an allocation instance by a simpler one.
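Before turning to these reduction steps, we note that for the small instances considered in this paper the quantities in Definition~\ref{def:MMS} can be computed directly by exhaustive search. The following Python sketch (an illustration only; the helper names are ours, and it enumerates all $n^m$ assignments, so it is practical only for small $m$) computes the maximin share of each agent by brute force and checks whether an MMS allocation exists.
\begin{verbatim}
from itertools import product

def mms(values, n):
    # Maximin share of one agent with additive item values `values' when
    # there are n agents: maximize, over all assignments of items to n
    # bundles, the value of the least valuable bundle.
    best = 0
    for assignment in product(range(n), repeat=len(values)):
        bundles = [0] * n
        for item, b in enumerate(assignment):
            bundles[b] += values[item]
        best = max(best, min(bundles))
    return best

def has_mms_allocation(valuations):
    # valuations[i][j] = value of item j for agent i (additive valuations).
    n, m = len(valuations), len(valuations[0])
    thresholds = [mms(v, n) for v in valuations]
    for assignment in product(range(n), repeat=m):
        received = [0] * n
        for item, agent in enumerate(assignment):
            received[agent] += valuations[agent][item]
        if all(received[i] >= thresholds[i] for i in range(n)):
            return True
    return False
\end{verbatim}
For instance, running \texttt{has\_mms\_allocation} on the nine-item instance of Section~\ref{sec:40} should confirm that no MMS allocation exists there, in accordance with Theorem~\ref{thm:largeGap}.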
An allocation instance with additive valuations and $m$ items $\{e_1, \ldots, e_m\}$ is {\em ordered} if for every agent $i$ and every two items $e_j$ and $e_k$ with $j < k$ we have that $v_i(e_j) \ge v_i(e_k)$. Given an unordered allocation instance with additive valuations and $m$ items, its {\em ordered version} is obtained by replacing the valuation function $v_i$ of each agent $i$ by a new additive valuation function $v'_i$ in which item values are non-increasing. That is, let $\sigma$ denote a permutation over $m$ items with respect to which the values of items are non-increasing under $v_i$. Then for every $1 \le j \le m$ we have that $v'_i(e_j) = v_i(e_{\sigma^{-1}(j)})$. The following proposition is due to~\cite{BL15}. \begin{proposition} \label{pro:BL} For every instance $I$ with additive valuations, every allocation $A'$ for its ordered version $I'$ can be transformed to an allocation $A$ for $I$, while ensuring that every agent derives at least as high utility from $A$ in $I$ as derived from $A'$ in $I'$. \end{proposition} \begin{proof} A choosing sequence is a sequence of names of agents (repetitions are allowed). The choosing sequence induces an allocation by the following procedure. Starting from round~1, in each round $r$, the agent whose name appears in the $r$th location in the choosing sequence receives the item of highest value for the agent (ties can be broken arbitrarily), among the yet unallocated items. The allocation $A'$ for $I'$ induces a choosing sequence, where for every $r$, the agent in location $r$ is the one to which $A'$ allocated the $r$th most valuable item in $I'$. Using this choosing sequence for the instance $I$, in every round $r$, the respective agent gets an item that she values at least as much as her $r$th most valuable item, which is the value of the item that she got under $A'$. \end{proof} Proposition~\ref{pro:BL} implies that when searching for a negative example with the maximum possible gap, it suffices to restrict attention to ordered instances. The following two propositions are helpful for arguments that are based on induction on $n$. As each such proposition concerns two instances, in the MMS notation we shall specify which instance we refer to. \begin{proposition} \label{pro:reduce1} Let $I$ be an arbitrary allocation instance with a set $M$ of items and $n$ agents. Let $I'$ be an allocation instance derived from $I$ by removing an arbitrary item $e$ from $M$, and removing one arbitrary agent. Then for each remaining agent $q$, $MMS_q(I') \ge MMS_q(I)$. \end{proposition} \begin{proof} Let $(B_1, \ldots, B_n)$ be an $MMS_q(I)$ partition. By renaming bundles, we may assume without loss of generality that $e \in B_n$. Then $(B_1, \ldots, B_{n-2}, (B_{n-1} \cup B_n) \setminus \{e\} )$ is an $(n-1)$-partition for $M \setminus \{e\}$ that certifies that $MMS_q(I') \ge MMS_q(I)$. \end{proof} \begin{proposition} \label{pro:reduce2} Let $I$ be an arbitrary allocation instance with a set $M$ of $m \ge 2$ items, and $n$ agents. Let $I'$ be an allocation instance derived from $I$ by removing two items $e_i$ and $e_j$ from $M$, and removing one arbitrary agent. Then for every remaining agent $q$, if {either the $MMS_q(I)$ partition has a bundle that contains both $e_i$ and $e_j$, or $v_q(e_i) + v_q(e_j) \le MMS_q(I)$,} then $MMS_q(I') \ge MMS_q(I)$. \end{proposition} \begin{proof} Let $(B_1, \ldots, B_n)$ be an $MMS_q(I)$ partition for $I$.
If both $e_i$ and $e_j$ belong to the same bundle, then the proof is the same as for Proposition~\ref{pro:reduce1}. If $e_i$ and $e_j$ are in different bundles, by renaming bundles, we may assume without loss of generality that $e_i \in B_{n-1}$ and $e_j \in B_n$. Then $(B_1, \ldots, B_{n-2}, (B_{n-1} \setminus \{e_i\}) \cup (B_n \setminus \{e_j\}) )$ is an $(n-1)$-partition for $M \setminus \{e_i, e_j\}$ that certifies that $MMS_q(I') \ge MMS_q(I)$. This is because $v_q\left((B_{n-1} \setminus \{e_i\}) \cup (B_n \setminus \{e_j\})\right) = v_q(B_{n-1}) + v_q(B_n) - (v_q(e_i) + v_q(e_j)) \ge 2MMS_q(I) - MMS_q(I) = MMS_q(I)$. \end{proof} \section{An MMS gap of $\frac{1}{40}$} \label{sec:40} In this section we prove Theorem~\ref{thm:largeGap}, showing an allocation instance for which in every allocation, at least one of the agents gets at most a $\frac{39}{40}$ fraction of her MMS. \begin{proof} To present the instance that proves Theorem~\ref{thm:largeGap}, we think of the nine items as arranged in a three by three matrix, with rows $r_1, r_2, r_3$ (starting from the top) and columns $c_1, c_2, c_3$ (starting from the left). $\left( \begin{array}{ccc} e_1 & e_2 & e_3 \\ e_4 & e_5 & e_6 \\ e_7 & e_8 & e_9 \\ \end{array} \right)$ There are three agents, referred to as $R$ (the {\em row} agent), $C$ (the {\em column} agent), and $U$ (the {\em unbalanced} agent). The MMS of every agent is~40. When depicting valuation functions, for each agent, we present the items in one of her MMS bundles in boldface. Every row in the valuation function of $R$ has value~40 and gives $R$ her MMS. Her valuation function is: $\left( \begin{array}{ccc} 1 & 16 & 23 \\ 26 & 4 & 10 \\ {\bf 12} & {\bf 19} & {\bf 9} \\ \end{array} \right)$ Every column in the valuation function of $C$ has value~40 and gives $C$ her MMS. Her valuation function is: $\left( \begin{array}{ccc} 1 & 16 & {\bf 22} \\ 26 & 4 & {\bf 9} \\ 13 & 20 & {\bf 9} \\ \end{array} \right)$ The bundles that give $U$ her MMS are $p = \{e_2, e_4\}$ (the {\em pair}, in boldface), $d = \{e_3, e_5, e_7\}$ (the {\em diagonal}), and $q = \{e_1, e_6, e_8, e_9\}$ (the {\em quadruple}). The valuation function of $U$ is: $\left( \begin{array}{ccc} 1 & {\bf 15} & 23 \\ {\bf 25} & 4 & 10 \\ 13 & 20 & 9 \\ \end{array} \right)$ It remains to show that no allocation gives every agent her MMS. An allocation is a partition into three bundles. As a sanity check, let us first consider the three partitions that each give one of the agents her MMS. For the partition $(r_1,r_2,r_3)$, both $C$ and $U$ want only $r_3$, and hence one of them does not get her MMS. For the partition $(c_1,c_2,c_3)$, both $R$ and $U$ want only $c_3$, and hence one of them does not get her MMS. For the partition $(p,d,q)$, both $R$ and $C$ want only $p$, and hence one of them does not get her MMS. To analyse all possible partitions in a systematic way, we consider a valuation function $M$ that values each item as the maximum value given to the item by the three agents. Hence $M$ is: $\left( \begin{array}{ccc} 1 & 16 & 23 \\ 26 & 4 & 10 \\ 13 & 20 & 9 \\ \end{array} \right)$ Every allocation that gives every agent her MMS partitions $M$ into three bundles, where the sum of values in each bundle is at least~40, but not more than~42 (as the sum of all values under $M$ is $40 \cdot 3 + 2 = 122$). If one of the bundles has two items, then this bundle must be $\{e_2, e_4\} = p$, whose value under $M$ is~42. Hence each of the two remaining bundles must have value~40 under $M$.
The unique way of partitioning the remaining items into two bundles of value~40 is to have the bundles $\{e_3, e_5, e_7\} = d$ and $\{e_1, e_6, e_8, e_9\} = q$. {(The only way of reaching a value~40 in a bundle that contains item $e_3$ of value 23 is to include the two items of values~4 and~13.)} But we already saw (in the sanity check) that the partition $(p, d, q)$ is not a valid solution. It follows that the partition must be into three bundles, each of size three. The bundle containing $e_9$ must have value between~40 and~42. There are only two such bundles of size three, namely $r_3$ and $c_3$. Each of them has value~42. If one of them is chosen, the remaining two bundles in the partition must then each be of value~40. For $e_4$, the only two bundles of value~40 are $r_2$ and $c_1$. Hence we get only two possible partitions, $(r_1,r_2,r_3)$ and $(c_1,c_2,c_3)$, and both were already excluded in our sanity check. \end{proof} \section{MMS gaps that are inverse polynomial in the number of agents} We present examples that apply for every $n \ge 4$. The initial design of our examples will include $5n - 7$ items, but for $n \ge 6$, this number will be reduced later. It will be convenient to think of the items as being arranged as selected entries in an $n$ by $n$ matrix, along the perimeter of the matrix, and along its main diagonal. We will construct two valuation functions, where a set $R$ of at least two agents have valuation function $V_R$, and a set $C$ of at least two agents have valuation function $V_C$ (here $R$ stands for {\em row} and $C$ stands for {\em column}, and $|R| + |C| = n$). We will start with a base matrix $B$, and then modify $B$ so as to obtain $V_R$ and $V_C$. In the base matrix $B$, the items have only seven different values, regardless of the value of $n$. We shall partition the items into groups of items of equal value, and give an informative name to each group. Rows are numbered from top down, and columns from left to right. We use the convention that the index $j$ specifies an arbitrary value in the range $2\le j \le n-1$. The value of items in each group, and the locations of the groups in $B$, are as follows. \begin{itemize} \item $B_{1j} = (n-2)n$. (Top row, excluding corners.) \item $B_{1n} = 1$. (Top-right corner.) \item $B_{j1} = (n-2)(n-1)$. (Left column, excluding corners.) \item $B_{jj} = (n-2)(n^2 - 4n + 2)$. (Main diagonal, excluding corners.) \item $B_{jn} = (n-2)(n-1) + 1$. (Right column, excluding corners.) \item $B_{n1} \cup B_{nj} = (n-2)^2 + 1$. (Bottom row, excluding bottom-right corner.) \item $B_{nn} = (n-2)(n-3)$. (Bottom-right corner.) \end{itemize} For $n=4$, this gives the following matrix. $\left( \begin{array}{cccc} 0 & 8 & 8 & 1 \\ 6 & 4 & 0 & 7 \\ 6 & 0 & 4 & 7 \\ 5 & 5 & 5 & 2 \\ \end{array} \right)$ Observe that all entries of $B$ are nonnegative. Moreover, all row sums and all column sums have the same value $t_B = n(n-2)^2 + 1$. A bundle of items will be called {\em good} if the sum of its values is $t_B$. Hence all rows and all columns are good, but there are also other bundles that are good. A partition of all items into $n$ bundles is {\em good} if every bundle in the partition is good. For example, a partitioning of the items into row bundles is good, and likewise, a partitioning into column bundles is good.
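As a quick sanity check on these formulas, the following Python sketch (ours, and an illustration only) builds the items of $B$ for a given $n$ and verifies that every row sum and every column sum equals $t_B = n(n-2)^2 + 1$.
\begin{verbatim}
def base_matrix(n):
    # Items of the base matrix B, stored as {(row, col): value};
    # rows and columns are 1-based, and j ranges over 2..n-1.
    B = {}
    for j in range(2, n):
        B[(1, j)] = (n - 2) * n                    # top row, no corners
        B[(j, 1)] = (n - 2) * (n - 1)              # left column, no corners
        B[(j, j)] = (n - 2) * (n * n - 4 * n + 2)  # main diagonal, no corners
        B[(j, n)] = (n - 2) * (n - 1) + 1          # right column, no corners
        B[(n, j)] = (n - 2) ** 2 + 1               # bottom row, no right corner
    B[(1, n)] = 1                                  # top-right corner
    B[(n, 1)] = (n - 2) ** 2 + 1                   # bottom-left corner
    B[(n, n)] = (n - 2) * (n - 3)                  # bottom-right corner
    return B

def row_and_column_sums_ok(n):
    B, t = base_matrix(n), n * (n - 2) ** 2 + 1
    rows_ok = all(sum(v for (r, c), v in B.items() if r == i) == t
                  for i in range(1, n + 1))
    cols_ok = all(sum(v for (r, c), v in B.items() if c == i) == t
                  for i in range(1, n + 1))
    return rows_ok and cols_ok

print(all(row_and_column_sums_ok(n) for n in range(4, 10)))  # expected: True
\end{verbatim}
The following lemma constrains the structure of good partitions of $B$.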
\begin{lemma} \label{lem:split} In every partitioning of the items of $B$ into $n$ {\em good} bundles, the structure of the good partition is such that at least one of the following three conditions holds: \begin{enumerate} \item The bottom row is split among the $n$ good bundles (one item in each bundle). \item The right column is split among the $n$ good bundles (one item in each bundle). \item At least one of the bundles contains at least one item from the bottom row and at least one item from the right column, but does not contain the item $B_{nn}$. \end{enumerate} \end{lemma} \begin{proof} Observe that $t_B = n(n-2)^2 + 1 \equiv 1$ modulo $n-2$. There are exactly $2n-2$ items that have value~1 modulo $n-2$ (the bottom row and the right column, excluding the bottom-right corner). We refer to these items as {\em special}. The remaining items have value~0 modulo $n-2$, and are not special. In every good partition, it must be the case that one good bundle has $n-1$ special items, and each other good bundle has one special item. Consider the good bundle with $n-1$ special items. If the $n-1$ special items are all in the bottom row (or all in the right column), then item $B_{nn}$ must be the remaining item in the bundle (that is the only way to reach $t_B$), and then the right column (or bottom row) must be split. If the $n-1$ special items include at least one from the bottom row and at least one from the right column, then we may assume that $B_{nn}$ is also in the bundle (as otherwise condition~3 of the Lemma holds). This accounts for $n$ items in the bundle. The sum of values of these $n$ items cannot possibly be equal to $t_B$. This can be verified by a case analysis. If $B_{1n}$ is among these items, then the only way to reach $t_B$ with $n-2$ additional special items is to add all items of $B_{jn}$ (as special items in the bottom row have strictly smaller value than items in $B_{jn}$), but then the bundle has no special items from the bottom row. Alternatively, if $B_{1n}$ is not among these items, then the only way to reach $t_B$ with $n-1$ special items is to add all special items of the bottom row (as special items in $B_{jn}$ have strictly larger value than special items in the bottom row), but then the bundle has no special items from the right column. Consequently, the sum of the values of these $n$ items needs to be strictly smaller than $t_B$. Their total value is minimized if they are $B_{1n} \cup B_{nj} \cup B_{nn}$, giving a value of $1 + (n-2)((n-2)^2 + 1) + (n-2)(n-3) = (n-1)(n-2)^2 + 1$. Hence a value of $(n-2)^2$ is missing in order to complete the sum of values to $t_B = n(n-2)^2 + 1$. For $n \ge 5$, none of the remaining items has such a small value, and hence such a good bundle cannot be formed at all. The only case that remains to be considered is $n=4$, because for $n=4$ the value of diagonal items $B_{jj}$ happens to satisfy $(n-2)(n^2 - 4n + 2) = (n-2)^2$. Recall the matrix for $n=4$ depicted above. The composition of values in a good bundle that has two special items from the bottom row, the special item $B_{14}$, the item $B_{44}$, and one diagonal item, is $(5,5,1,2,4)$. But then one of the two items of value~8 does not have a good bundle. (An item of value~8 needs an additional value of~9 to reach~17. However, of the items that remain, there is only one combination of items that gives value~9, namely $4+5$.) \end{proof} \begin{remark} \label{rem:KPW18} Our proof for Theorem~\ref{thm:polynomialGap} follows a pattern used in~\cite{KPW18}.
In their construction, the base matrix $B$ was required to have the property that it has only two good partitions: the row partition and the column partition. In contrast, we allow $B$ to have many more good partitions (as specified in Lemma~\ref{lem:split}), and show that even with this extra flexibility, the proof pattern of~\cite{KPW18} still works. Given this extra flexibility in the properties of $B$, we design such matrices (one for each value of $n$) with much smaller integer entries than the corresponding matrices designed in~\cite{KPW18}. \end{remark} Using the matrix $B$, we shall now create two matrices, one for $V_R$ and one for $V_C$. First, every entry of $B$ is multiplied by~$n$. Then, for $V_R$, subtract $1$ from the value of each special item in the bottom row, and add $n - 1$ to the value of the bottom-right corner. For $n=4$, the matrix for $V_R$ is: $\left( \begin{array}{cccc} 0 & 32 & 32 & 4 \\ 24 & 16 & 0 & 28 \\ 24 & 0 & 16 & 28 \\ 19 & 19 & 19 & 11 \\ \end{array} \right)$ The maximin share of every agent in $R$ is~68 (each row is a bundle). For general $n \ge 4$, this maximin share is $t_V = nt_B = n^2(n-2)^2 + n$. For $V_C$, subtract $1$ from the value of each special item in the right column, and add $n - 1$ to the value of the bottom-right corner. For $n=4$, the matrix for $V_C$ is: $\left( \begin{array}{cccc} 0 & 32 & 32 & 3 \\ 24 & 16 & 0 & 27 \\ 24 & 0 & 16 & 27 \\ 20 & 20 & 20 & 11 \\ \end{array} \right)$ Similarly to agents in $R$, the maximin share of every agent in $C$ is~68 (each column is a bundle). For general $n \ge 4$, this maximin share is $t_V = nt_B = n^2(n-2)^2 + n$. \begin{proposition} \label{pro:polynomialGap} If $|R| + |C| = n$ and $|R|, |C| \ge 2$, then in every allocation, at least one agent gets a bundle that she values at most $t_V - 1$. \end{proposition} \begin{proof} The allocation partitions the items into $n$ bundles. If at least one of the bundles has value less than $t_B$ in $B$, then the same bundle has value at most $n(t_B - 1) + (n - 1) = t_V - 1$ for the agent who receives it. Hence we may assume that every bundle has value $t_B$ in $B$. By Lemma~\ref{lem:split}, there are only three possibilities for this. \begin{enumerate} \item {\em The bottom row is split.} Then every agent in $R$ receives a bundle that contains a single item from the bottom row. As $|R| \ge 2$, for at least one row agent, this single item lost a value of~1 in the process of constructing $V_R$. Consequently, the value received by this agent is $nt_B - 1 = t_V - 1$. \item {\em The right column is split.} Then every agent in $C$ receives a bundle that contains a single item from the right column. As $|C| \ge 2$, for at least one column agent, this single item lost a value of~1 in the process of constructing $V_C$. Consequently, the value received by this agent is $nt_B - 1 = t_V - 1$. \item {\em At least one of the bundles contains at least one item from the bottom row and at least one item from the right column, but does not contain the item $B_{nn}$.} Such a bundle has value at most $t_V-1$ for every agent. \end{enumerate} \end{proof} We can now prove Theorem~\ref{thm:polynomialGap}. In fact, we state a somewhat stronger version of it in which the gap is improved from $\frac{1}{n^4}$ to a somewhat larger value. \begin{theorem} \label{thm:polynomialGap1} For given $N$, let $n = \lceil \frac{N + 4}{2} \rceil$.
Then for every $N \ge 4$, there is an allocation instance with $N$ agents and at most $N + 4n - 7$ items (which gives $3N+1$ when $N$ is even and $3N + 3$ when $N$ is odd) for which in every allocation, at least one of the agents does not get more than a $1 - \frac{1}{f(n)}$ fraction of her MMS. Here, the function $f(n)$ has value $f(n) = n^2(n-2)^2 +n$. In other words, $$Gap(N \ge 5 \; , \; m \le 3N+3) \ge \frac{1}{f(n)}$$ \end{theorem} \begin{proof} For $4 \le N \le 5$ we have that the corresponding value of $n = \lceil \frac{N + 4}{2} \rceil = N$, and hence the corresponding instances were described above. (Observe that $f(n)$ equals the corresponding value of $t_V$ in these instances.) For $N \ge 6$, we have that $n = \lceil \frac{N + 4}{2} \rceil < N$. In this case we construct an instance as above for the corresponding value of $n$ (with value $t_V = n^2(n-2)^2 +n$). We add to this instance $N-n$ agents so that the number of agents becomes $N$. Among the agents, we set $\lfloor \frac{N}{2} \rfloor$ agents to be row agents, and the remaining agents to be column agents. We also add to the instance $N-n$ auxiliary items, each of value $t_V$, and so the total number of items is $(5n - 7) + (N-n) = N + 4n - 7$. For each of the $N$ agents, the MMS is $t_V$ (by partitioning the set of items into the $N-n$ auxiliary items, and either the $n$ rows or the $n$ columns). $N-n$ agents get their MMS by getting an auxiliary item. However, among the $n$ agents that remain, at least two are row agents (because $|R| - (N - n) = \lfloor \frac{N}{2} \rfloor - N + \lceil \frac{N + 4}{2} \rceil = 2$) and at least two are column agents, and this suffices for Proposition~\ref{pro:polynomialGap} to apply. \end{proof} \section{An MMS allocation whenever $m \le n + 5$} In this section we prove Theorem~\ref{thm:n+5}, that if $m \le n+5$ there always is an MMS allocation. The proof makes use of the following two lemmas. \begin{lemma} \label{lem:reduce1} Let $I$ be an allocation instance with $n$ agents and $m$ items, and assume that for every instance with $n-1$ agents and $m-1$ items there is an MMS allocation. If there is an agent $i$ and item $e$ for which $v_i(e) \ge MMS_i(I)$, then $I$ has an MMS allocation. \end{lemma} \begin{proof} Remove item $e$ and agent $i$, resulting in an instance $I'$ with $n-1$ agents and $m-1$ items. By Proposition~\ref{pro:reduce1}, for every agent $j \not= i$ it holds that $MMS_j(I') \ge MMS_j(I)$. By the assumption of the lemma, there is an MMS allocation $A'$ for $I'$. Extend $A'$ to an allocation $A$ for $I$, by giving item $e$ to agent $i$. Allocation $A$ is an MMS allocation for $I$. \end{proof} \begin{lemma} \label{lem:reduce2} Let $I$ be an allocation instance with $n$ agents and $m$ items, and assume that for every instance with $n-1$ agents and $m-2$ items there is an MMS allocation. Suppose that there is an agent $q$ and a bundle $B$ containing two items such that $v_q(B) \ge MMS_q(I)$, and moreover, for every agent $j \not= q$, at least one of the following conditions holds: \begin{enumerate} \item $B$ is {\em small}: $v_j(B) \le MMS_j(I)$. \item $B$ is {\em directly dominated}: $B$ is equal to or contained in one of the bundles of the $MMS_j$ partition. \item $B$ is {\em indirectly dominated}: the $MMS_j$ partition contains a bundle $B'$ such that $v_j(B') \ge v_j(B)$ and $|B' \cap B| = 1$. \end{enumerate} Then $I$ has an MMS allocation. \end{lemma} \begin{proof} Remove bundle $B$ and agent $q$, resulting in an instance $I'$ with $n-1$ agents and $m-2$ items.
We claim that $MMS_j(I') \ge MMS_j(I)$ for every agent $j \not= q$. For agents for which either condition~1 or condition~2 holds, this follows by Proposition~\ref{pro:reduce2}. For an agent $j$ for which only condition~3 holds, let $B_1'$ denote the other bundle intersected by $B$. Replace the two bundles $B'$ and $B_1'$ in the $MMS_j$ partition by the two bundles $B$ and $B_1 = (B' \cup B'_1) \setminus B$. We have that $v_j(B) \ge MMS_j(I)$ (as condition~1 is assumed not to hold) and $v_j(B_1) \ge MMS_j(I)$. {(The last inequality can be verified as follows. Condition~3 holding implies that $v_j(B') \ge v_j(B)$. This together with $(B \cup B_1) = (B' \cup B'_1)$ implies that $v_j(B_1) \ge v_j(B'_1)$. The fact that $B'_1$ is a bundle in the original $MMS_j$ partition implies that $v_j(B'_1) \ge MMS_j$.)} Hence we get an $MMS_j$ partition in which $B$ is one of the bundles, and now we can apply condition~2 to conclude that $MMS_j(I') \ge MMS_j(I)$. By the assumption of the lemma, there is an MMS allocation $A'$ for $I'$. Extend $A'$ to an allocation $A$ for $I$, by giving bundle $B$ to agent $q$. Allocation $A$ is an MMS allocation for $I$. \end{proof} We now prove Theorem~\ref{thm:n+5}. \begin{proof} The proof is by induction on $n$. The theorem trivially holds for $n = 1$, and holds for $n = 2$ by Proposition~\ref{pro:MMSexists}. The case $n=2$ serves as the base case of the induction, and it remains to prove the theorem for $n\ge 3$. In all cases with $n\ge 3$ we assume without loss of generality: \begin{itemize} \item The theorem has already been proved for all $n' < n$ (the inductive hypothesis). \item $m = n+5$ (because if $m<n+5$, we may add $n + 5 - m$ auxiliary items that have~0 value to all agents). \item All bundles in the MMS partition of every agent are of size at least~2. \end{itemize} The third assumption can be made without loss of generality, as otherwise there is an agent $i$ and item $e$ for which $v_i(e) \ge MMS_i$, and then Lemma~\ref{lem:reduce1} allows us to reduce the instance to one in which the induction hypothesis already holds. {Observe that the third assumption implies (among other things) that it suffices to consider only $n \le 5$, because for $n \ge 6$ we have that $m = n + 5 < 2n$, and the third assumption cannot hold.} {Using these assumptions, the cases $n = 3$, $n=4$, and $n=5$ are proved in Lemma~\ref{lem:3}, Lemma~\ref{lem:4}, and Lemma~\ref{lem:5}, respectively.} \end{proof} \subsection{Three agents, eight items} \begin{lemma} \label{lem:3} Every allocation instance with $n=3$ agents and $m = n + 5$ items has an MMS allocation. \end{lemma} \begin{proof} By Proposition~\ref{pro:BL} we may assume that the instance is ordered (for every $1 \le i < j \le m$ and every agent $q$, $v_q(e_i) \ge v_q(e_j)$). Recall (see the proof of Theorem~\ref{thm:n+5}) that we may assume that the MMS partition of an agent contains only bundles of size at least~2. Consequently, for every agent $j$, her $MMS_j$ partition contains at least one bundle (call it $B_j$) that has exactly two items. If the three bundles $B_1$, $B_2$ and $B_3$ are disjoint, give each agent her respective bundle, and allocate the two remaining items arbitrarily. It remains to consider the case that at least two of these bundles intersect. W.l.o.g., let these bundles be $B_1$ and $B_2$. Suppose that $|B_1 \cap B_2| = 1$. Then as the instance is ordered, all agents agree that one of the two bundles, $B_1$ or $B_2$, is not more valuable than the other. W.l.o.g., let this bundle be $B_1$.
Likewise, if $B_1 = B_2$, then also in this case $B_1$ is not more valuable than $B_2$. There are two cases to consider: \begin{itemize} \item $v_3(B_1) \ge MMS_3$. In this case Lemma~\ref{lem:reduce2} applies with agent~3 serving as agent $q$, and with $B_1$ serving as $B$. Hence an MMS allocation exists. \item $v_3(B_1) < MMS_3$. In this case Lemma~\ref{lem:reduce2} applies with agent~1 serving as agent $q$, and with $B_1$ serving as $B$. Hence an MMS allocation exists. \end{itemize} \end{proof}
\subsection{Four agents, nine items}
\begin{lemma} \label{lem:4} Every allocation instance with $n=4$ agents and $m = n + 5$ items has an MMS allocation. \end{lemma}
\begin{proof} Consider an allocation instance $I$ with four agents and a set $M$ of at most nine items. Recall (see the proof of Theorem~\ref{thm:n+5}) that we may assume that the MMS partition of an agent contains only bundles of size at least~2. Consequently, for every agent $i$, her $MMS_i$ partition contains three bundles of size two, and one bundle of size three. Let $(B_{1,1}, B_{1,2}, B_{1,3}, B_{1,4})$ denote the $MMS_1$ partition of agent~1, with $|B_{1,1}| = |B_{1,2}| = |B_{1,3}| = 2$ and $|B_{1,4}| = 3$. Suppose that for some $k \le 3$, there is exactly one agent $i \ge 2$ for which $v_i(B_{1,k}) \ge MMS_i$. Then Lemma~\ref{lem:reduce2} applies with agent $i$ serving as agent $q$, and $B_{1,k}$ serving as bundle $B$. Hence an MMS allocation exists. Likewise, if for some $k \le 3$ there is no agent $i \ge 2$ for which $v_i(B_{1,k}) \ge MMS_i$, Lemma~\ref{lem:reduce2} applies with agent $1$ serving as agent $q$, and $B_{1,k}$ serving as bundle $B$. Hence also in this case an MMS allocation exists. It follows that we can assume that for each of the bundles $\{B_{1,1}, B_{1,2}, B_{1,3}\}$ there is at most one agent $2 \le i \le 4$ that values it less than her MMS. Consider now a bipartite graph $G$. Its left hand side contains four vertices, corresponding to the four agents $\{1,2,3,4\}$. Its right hand side has four vertices, corresponding to the four bundles $\{B_{1,1}, B_{1,2}, B_{1,3}, B_{1,4}\}$. For every $1 \le i,j \le 4$ there is an edge between agent $i$ and bundle $B_{1,j}$ if $v_i(B_{1,j}) \ge MMS_i$. Observe that a perfect matching in $G$ induces an MMS allocation, giving every agent her matched bundle. Hence it suffices to show that $G$ has a perfect matching. Each of the right hand side vertices $B_{1,k}$ for $1 \le k \le 3$ has degree at least~3 (as at most one agent values it less than her MMS), and $B_{1,4}$ has degree at least~1 (as agent~1 values it at no less than $MMS_1$). Hence for every $k \le 3$, every set of $k$ right hand side vertices has at least $k$ left hand side neighbors. Moreover, the set of all right hand side vertices has four left hand side neighbors, as for every agent $i$, at least one of the four bundles has value at least $\frac{1}{4}v_i(M) \ge MMS_i$. Hence by Hall's condition, $G$ has a perfect matching. \end{proof}
\subsection{Five agents, ten items}
\begin{lemma} \label{lem:5} Every allocation instance with $n=5$ agents and $m = n + 5$ items has an MMS allocation. \end{lemma}
\begin{proof} Let $I$ be an arbitrary allocation instance with $5$ agents and $10$ items. Recall (see the proof of Theorem~\ref{thm:n+5}) that we may assume that the MMS partition of an agent contains only bundles of size at least~2. As $m = 2n$, this implies that for every agent $i$, all bundles of her $MMS_i$ partition are of size two.
By Proposition~\ref{pro:BL} we may assume that the instance is ordered (for every $1 \le i < j \le m$ and every agent $q$, $v_q(e_i) \ge v_q(e_j)$). For every agent $i$, consider the bundle $B_i$ in her MMS partition that contains the item $e_1$. This gives five bundles (not necessarily all distinct). Among these bundles, consider the bundle $B$ in which the second item of the bundle has highest index (lowest value). Then for every agent $i$ we have that $v_i(B) \le v_i(B_i)$, because the instance is ordered. Let $q$ be an agent that has $B$ as a bundle in her MMS partition (if there is more than one such agent, pick one arbitrarily). Lemma~\ref{lem:reduce2} (using condition~2 or condition~3 in the lemma, for each agent $j \not= q$) implies that $I$ has an MMS allocation. \end{proof}
\section{Tightness of MMS ratio for nine items}
In this section we prove Theorem~\ref{thm:tightGap}, showing that every allocation instance with three agents and nine items has an allocation that gives each agent at least a $\frac{39}{40}$ fraction of her MMS. The proof has three steps. \begin{enumerate} \item A proof of Theorem~\ref{thm:structure}, showing that a negative example can have only one of two possible structures. \item Each structure induces linear constraints on the valuation functions of the agents. For each structure, we set up a linear program that finds a solution that satisfies all linear constraints implied by the corresponding structure, while maximizing the MMS gap in that solution. These LPs are under-constrained, and the optimal feasible solutions of these LPs turn out not to correspond to true negative examples. Hence we need to add additional constraints to the LPs, preventing the LPs from producing solutions that are not true negative examples. \item For each of the two structures, we partition all potential negative examples that have this structure into a finite number of classes, where each class offers some refinement of the structure. The classes need not be disjoint. The refined structure of a class gives rise to additional constraints to the LP. Thus we end up with a finite number of different LPs, one for each class. We then verify that none of these LPs generates a negative example with MMS gap larger than $\frac{1}{40}$ (this is done by having a computer program solve the corresponding LPs), and this proves Theorem~\ref{thm:tightGap}. \end{enumerate}
\subsection{Only two possible structures}
We first present the theoretical analysis that leads to Theorem~\ref{thm:structure}. Let $I$ be an instance with three agents $\{1,2,3\}$ and a set $M$ of nine items $\{e_1, \ldots, e_9\}$. By Proposition~\ref{pro:BL}, we assume without loss of generality that the instance is {\em ordered} (for every $i < j$, all agents agree that item $e_i$ has value at least as large as item $e_j$). We say that a bundle $B$ of items is {\em good} for agent $i$ if $v_i(B) \ge MMS_i$, and {\em bad} for $i$ otherwise. Every agent $i$ has a partition of the items into three bundles $B_{i1}, B_{i2}, B_{i3}$, such that each of these bundles is good for $i$. Fix for each agent such a partition. We may assume that every bundle is of size at least two (recall Lemma~\ref{lem:reduce1}).
\begin{proposition} \label{pro:containment} If a bundle in one partition contains a bundle from another partition (containment need not be strict), then $I$ has an MMS allocation. \end{proposition}
\begin{proof} W.l.o.g., assume that $B_{1,1} \subseteq B_{2,1}$. If $B_{1,1}$ is good for agent 3, then give $B_{1,1}$ to agent~3.
As $(B_{2,2} \cup B_{2,3}) \subseteq (B_{1,2} \cup B_{1,3})$, we have that $v_2(B_{1,2}) + v_2(B_{1,3}) \ge v_2(B_{2,2}) + v_2(B_{2,3}) \ge 2MMS_2$. Give agent~2 whichever of the bundles $B_{1,2}$ or $B_{1,3}$ has value at least $MMS_2$ for her, and give the remaining bundle to agent~1, resulting in an MMS allocation. If $B_{1,1}$ is bad for agent~3, give $B_{1,1}$ to agent~1, and give agent~3 whichever items are in $B_{2,1} \setminus B_{1,1}$, and in addition, whichever bundle she prefers over $B_{2,2}$ and $B_{2,3}$, thus giving her value at least $\frac{1}{2}v_3(M \setminus B_{1,1}) \ge \frac{1}{3}v_3(M) \ge MMS_3$. Give the remaining bundle to agent~2. Every agent gets at least her MMS. \end{proof}
We can conclude that the nine bundles are distinct.
\begin{proposition} \label{pro:symmetricDifference} If there are two bundles (necessarily, of different agents) $B$ and $B'$ such that $|B \setminus B'| = |B' \setminus B| = 1$, then $I$ has an MMS allocation. \end{proposition}
\begin{proof} W.l.o.g., assume that $|B_{1,1} \setminus B_{2,1}| = |B_{2,1} \setminus B_{1,1}| = 1$. As the instance is ordered and the two bundles share all but one item, it follows that all {agents} view one bundle, without loss of generality let it be $B_{2,1}$, as at least as valuable as the other. Observe that if $B_{1,1}$ is given to either agent~1 or agent~3, then agent~2 can partition the remaining items into two bundles, each of value at least $MMS_2(I)$ (in her bundle that contains the item $B_{1,1} \setminus B_{2,1}$, replace that item by the item $B_{2,1} \setminus B_{1,1}$). If $B_{1,1}$ is good for agent~3, then give $B_{1,1}$ to agent~3. In the instance that remains, each of the remaining agents has two disjoint bundles of value at least as high as her original MMS, and hence an MMS allocation exists. If $B_{1,1}$ is bad for agent~3, give $B_{1,1}$ to agent~1. Agent~2 can then partition the remaining items into two bundles as explained above. Agent~3 chooses among them the bundle that she prefers, thus getting at least $\frac{v_3(M)-v_3(B_{1,1})}{2} \ge MMS_3$. Agent~2 gets the remaining bundle, and hence at least her MMS. \end{proof}
\begin{corollary} \label{cor:size3} Suppose that for two agents $i$ and $j$ every bundle in their MMS partition is of size three. Then either for every bundle of $i$ and every bundle of $j$ the intersection has exactly one item, or $I$ has an MMS allocation. \end{corollary}
\begin{proof} If two bundles of different agents are identical, the corollary follows from Proposition~\ref{pro:containment}. If they have two items in their intersection, the corollary follows from Proposition~\ref{pro:symmetricDifference}. If they are disjoint, then another bundle of the second agent is either identical to the bundle of the first agent, or their intersection has two items. \end{proof}
\begin{proposition} \label{pro:size3} If every bundle is of size three, then $I$ has an MMS allocation. \end{proposition}
\begin{proof} Let $e$ be the item of smallest value according to the common order, and for every agent $i$, let $B_i$ be her bundle that contains $e$. Except for containing $e$, these three bundles are pairwise disjoint (recall Corollary~\ref{cor:size3}). Hence there are two items, call them $f_1$ and $f_2$, that are in none of these bundles, and each of them is worth at least as much as $e$ for every agent. Give $B_3$ to agent~3, give $(B_2 \setminus \{e\}) \cup \{f_2\}$ to agent~2, and give $(B_1 \setminus \{e\}) \cup \{f_1\}$ to agent~1.
\end{proof}
\begin{proposition} \label{pro:uniqueGood} Suppose that the following condition does not hold: among the three bundles of agent~1, there is a bundle that is good both for agent 2 and for agent 3, and the remaining two bundles are bad both for agent 2 and agent 3. Then $I$ has an MMS allocation. \end{proposition}
\begin{proof} Observe that for every $j \in \{2,3\}$, there is at least one bundle $B$ of agent~1 with $v_j(B) \ge \frac{1}{3}v_j(M) \ge MMS_j$. If the condition stated in the proposition does not hold, then without loss of generality, $B_{1,2}$ is good for agent~2, and $B_{1,3}$ is good for agent~3. For every $j \in \{1,2,3\}$, give bundle $B_{1,j}$ to agent $j$. \end{proof}
By symmetry, Proposition~\ref{pro:uniqueGood} applies also if we permute the names of agents. Permuting also the names of the bundles, we can thus assume the following: \begin{itemize} \item For every agent $i$, bundle $B_{i,1}$ is good for both other agents, and bundles $B_{i,2}$ and $B_{i,3}$ are bad for both other agents. \end{itemize} Observe that for every $i \not= j$, agent $i$ values bundle $B_{j,1}$ strictly more than $B_{i,1}$, because she values each of $B_{i,2}$ and $B_{i,3}$ (which are good for her) strictly more than each of $B_{j,2}$ and $B_{j,3}$ (which are bad for her), while the total value of all items is the same under both partitions. Likewise, agent $j$ values bundle $B_{i,1}$ strictly more than $B_{j,1}$.
\begin{proposition} \label{pro:intersectGodBad} If the good bundle in the partition of agent $i$ does not intersect a bad bundle in the partition of agent $j \not=i$, then $I$ has an MMS allocation. \end{proposition}
\begin{proof} Suppose that $B_{1,1}$ does not intersect $B_{2,2}$. Give $B_{1,1}$ to agent~3 (recall that $B_{1,1}$ is good for all agents), give $B_{2,2}$ to agent~2, and give the remaining items to agent~1. These items form a good bundle for agent~1, because we removed from the grand bundle one of her original bundles ($B_{1,1}$) and a bundle ($B_{2,2}$) that is bad for agent~1. \end{proof}
\begin{proposition} \label{pro:2good} If an agent~$i$ has a bundle $B$ of size two in her partition, then either this bundle is good for the other two agents, or $I$ has an MMS allocation. \end{proposition}
\begin{proof} {Suppose that for agent~$i$ bundle $B_{i,2}$ (which is bad for the other two agents) is of size two. Then Lemma~\ref{lem:reduce2} implies that an MMS allocation exists.} \end{proof}
Observe that the combination of Proposition~\ref{pro:uniqueGood} and Proposition~\ref{pro:2good} implies that an agent cannot have two bundles of size two in her partition.
\begin{corollary} \label{cor:2disjoint} If $|B_{1,1}| = 2$, then either the good bundles of the other agents are disjoint from $B_{1,1}$, or $I$ has an MMS allocation. \end{corollary}
\begin{proof} Suppose that $|B_{1,1}| = 2$ and that $B_{2,1}$ intersects $B_{1,1}$. Then w.l.o.g., $B_{2,2}$ does not intersect $B_{1,1}$, and Proposition~\ref{pro:intersectGodBad} applies. \end{proof}
\begin{proposition} \label{pro:22} If $|B_{1,1}| = |B_{2,1}| = 2$, then $I$ has an MMS allocation. \end{proposition}
\begin{proof} By Corollary~\ref{cor:2disjoint}, $B_{1,1}$ and $B_{2,1}$ and $B_{3,1}$ can be assumed to be disjoint from each other. Hence we can give each agent $i$ the bundle $B_{i,1}$. \end{proof}
Proposition~\ref{pro:22} implies among other things that every instance with three agents and up to eight items has an MMS allocation, a fact that was already proved in Lemma~\ref{lem:3}. We are now ready to state Theorem~\ref{thm:structure}. However, before doing it, we introduce some notation and terminology.
Recall that we may assume that instance $I$ is ordered. We still do so, but we no longer assume that the order is from item $e_1$ to item $e_9$. Instead, the order is given by some permutation $\pi$ that is left unspecified at this point, and our naming convention for items is based on arranging the items in a three by three matrix, naming the items according to their location in the matrix, as specified below. $\left( \begin{array}{ccc} e_1 & e_2 & e_3 \\ e_4 & e_5 & e_6 \\ e_7 & e_8 & e_9 \\ \end{array} \right)$ The rows of the matrix are referred to as $R_1, R_2, R_3$, starting from the top row, and the columns are referred to as $C_1, C_2, C_3$, starting from the left column. The two main diagonals of the matrix {are} $\{e_1, e_5, e_9\}$ and $\{e_3, e_5, e_7\}$.
\begin{theorem} \label{thm:structure} A negative example for three players and nine items must have the following structure (after appropriately renaming the items). For one agent $R$, the MMS partition is into the three rows ($R_1$, $R_2$ and $R_3$), for one agent $C$, the MMS partition is into the three columns ($C_1$, $C_2$ and $C_3$), and for one agent $U$, the MMS partition is into a bundle $P = \{e_2, e_4\}$ ($P$ stands for {\em pair}), a bundle $D$ that is one of the two main diagonals ($D$ stands for {\em diagonal}), and a bundle $Q$ with the remaining four items ($Q$ stands for {\em quadruple}). Bundle $P$ is good for all agents, whereas $D$ and $Q$ are bad for agents $R$ and $C$. The row and the column that do not intersect $P$ are good for all agents (these are $R_3$ and $C_3$), whereas the remaining rows and columns are good only for the agents that have them in their partition, and bad for the other agents. \end{theorem}
As there are two main diagonals, Theorem~\ref{thm:structure} offers two possible structures. We refer to them as the parallel diagonals structure (bundle $D$ runs in parallel to bundle $P$), and the crossing diagonals structure (bundle $D$ crosses bundle $P$). They are depicted in Figures~\ref{fig:9Elements3PartitionsB} and~\ref{fig:9Elements3Partitions}, respectively. Within each figure, for every item $e_i$, the entry $r_i$ ($c_i$, $u_i$, respectively) denotes its value to agent $R$ ($C$, $U$, respectively).
\begin{figure*} \caption{The parallel diagonals structure, with MMS partitions for players $R$, $C$ and $U$, respectively. The good bundles, marked by a $\star$ above the item, are $R_3 = \{e_7, e_8, e_9\}$, $C_3 = \{e_3, e_6, e_9\}$ and $P = \{e_2, e_4\}$.} \label{fig:9Elements3PartitionsB} \end{figure*}
\begin{figure*} \caption{The crossing diagonals structure, with MMS partitions for players $R$, $C$ and $U$, respectively. The good bundles, marked by a $\star$ above the item, are $R_3 = \{e_7, e_8, e_9\}$, $C_3 = \{e_3, e_6, e_9\}$ and $P = \{e_2, e_4\}$.} \label{fig:9Elements3Partitions} \end{figure*}
We now prove Theorem~\ref{thm:structure}.
\begin{proof} By Proposition~\ref{pro:size3}, for at least one agent her MMS partition contains a bundle of size two. Propositions~\ref{pro:2good} and~\ref{pro:22} imply that at most one agent has a bundle of size two in her MMS partition. It follows that there are exactly two agents ($R$ and $C$) for which the MMS partition is composed of bundles of size three, and one agent ($U$) whose partition has a bundle of size two. That partition cannot have two bundles of size two (see remark after Proposition~\ref{pro:2good}), and hence the other two bundles in that partition are of sizes three and four. By Corollary~\ref{cor:size3}, the intersection of every bundle of $R$ and every bundle of $C$ is of size~1.
Hence up to permuting the names of the items, the MMS partition of $R$ can be assumed to be the row bundles, and the MMS partition of $C$ can be assumed to be the column bundles. Each of {the agents} $R$ and $C$ has exactly one bundle that is good for all agents, and without loss of generality these bundles are $R_3$ and $C_3$. Agent $U$ has a bundle of size two. By Proposition~\ref{pro:2good}, this bundle is good for the other two agents. By Proposition~\ref{pro:intersectGodBad}, it intersects only bad bundles of other agents. By Proposition~\ref{pro:containment} it is not contained in a bundle of any other agent. Hence without loss of generality, this bundle is $\{e_2,e_4\}$, which we denoted by $P$. (The other alternative would have been items $\{e_1, e_5\}$, but it can be converted to $P$ by switching the order among the first two rows.) As noted above, agent $U$ has a bundle (call it $D$) with three items. It follows from Propositions~\ref{pro:containment} and~\ref{pro:symmetricDifference} that $D$ intersects every bundle of every other agent exactly once. As $D$ is also disjoint from $P$, this gives three options. Two of them are the main diagonals, giving the parallel diagonals and the crossing diagonals structures allowed by Theorem~\ref{thm:structure}. The remaining option is $\{e_1, e_6, e_8\}$. However, by renaming the items (permuting rows $R_1$ and $R_2$ and permuting columns $C_1$ and $C_2$) we see that this is the same structure as the parallel diagonals one. \end{proof}
\subsection{The LP approach}
Theorem~\ref{thm:structure} establishes that a negative example with three agents and nine items, if one exists, must have one of two possible structures, either the one that we referred to as {\em parallel diagonals}, or the one that we referred to as {\em crossing diagonals}. Indeed, the negative example presented in the proof of Theorem~\ref{thm:largeGap} has the parallel diagonals structure. In this section we explain how each of the structures can guide us in a systematic computer-assisted search for an actual negative example.
{Let us first introduce terminology that will be used in our proof. Among all negative examples, we wish to find the one with the largest MMS gap. We refer to such a negative example as a {\em max-gap instance}. Equivalently, a max-gap instance is a negative example with the smallest value of $b$ such that no allocation gives every agent more than a {$\frac{b-1}{b}$} fraction of her MMS. By scaling valuation functions of agents, this is equivalent to the following definition.
\begin{definition} \label{def:maxGap} A {\em max-gap instance} is an allocation instance with the smallest value of $b$, in which the MMS of every agent is $b$, but in every allocation, at least one agent gets a value of at most $b-1$. A {\em max-gap PD-instance} ({\em max-gap CD-instance}, respectively) is a max-gap instance among those instances that have the parallel diagonals structure (crossing diagonals, respectively). \end{definition}
By Theorem~\ref{thm:structure}, a max-gap instance is either a max-gap PD-instance or a max-gap CD-instance. We will explain in this section how to bound $b$ in max-gap PD-instances. The same principles apply equally well to bound $b$ in max-gap CD-instances.}
The following theorem strengthens Theorem~\ref{thm:structure} to account for the fact that now we are not only interested in a negative example, but rather in a negative example of a max-gap type.
In this theorem, we identify $R$, $C$ and $U$ from the parallel diagonals structure with agents~1,~2 and~3, respectively. $b$ denotes the value of the MMS.
\begin{theorem} \label{thm:max-gap} Every max-gap PD-instance has the parallel diagonals structure (by definition). Moreover, all properties below either necessarily hold, or can be assumed to hold without loss of generality. \begin{enumerate} \item For every agent $i$, for every bundle $B$ that is in her MMS partition according to the parallel diagonals structure, it holds that $v_i(B) = b$. \item For every allocation $A = (A_1, A_2, A_3)$, at least one of the three inequalities $v_i(A_i) \le b-1$ holds (where $1 \le i \le 3$). \item For every agent $i$ and every bundle $B$ that belongs to the MMS partition of a different agent and $B$ is bad for $i$, it holds that $v_i(B)\le b-1$. \item For every item $e$ and agent~$i$, it holds that $v_i(e) \ge 1$. \item {With only four possible exceptions, for every two items $e$ and $e'$ and every agent $i$, if the two items are not in the same bundle in the $MMS_i$ partition, then $|v_i(e) - v_i(e')| \ge 1$. There are two exceptions, $(e_6,e_9)$ and $(e_8,e_9)$, and two partial exceptions, the cases $v_R(e_8) \ge v_R(e_2)$ and $v_C(e_6) \ge v_C(e_4)$.} \item The instance is ordered. Namely, there is some permutation $\pi$ over the nine items such that for every two items $e$ and $e'$, if $e \ge_{\pi} e'$ (this notation denotes that $e$ precedes $e'$ according to $\pi$), then for every agent $i$ it holds that $v_i(e) \ge v_i(e')$. \end{enumerate} \end{theorem}
\begin{proof} We prove the properties of the theorem in the same order as they are stated in the theorem. For each property, the notation in the proof corresponds to the notation in the theorem. \begin{enumerate} \item The fact that $b$ is the MMS implies $v_i(B) \ge b$. We may further assume without loss of generality that $v_i(B) = b$, because after reducing the value of an item (while keeping the MMS value at least $b$), there still is no allocation that gives every agent a value larger than $b-1$. \item This is because in a max-gap PD-instance there is no allocation that gives every agent value strictly larger than $b-1$. \item Suppose for the sake of contradiction that $v_i(B) = b-1+\delta$ for $\delta > 0$ (and note that $\delta < 1$ by Proposition~\ref{pro:uniqueGood}). Let $e$ be an arbitrary item in $B$. Modify the valuation function $v_i$ to $v'_i$, where the only change is that $v'_i(e) = v_i(e) + 1 - \delta$. Hence now $v'_i(B) = b$, and Proposition~\ref{pro:uniqueGood} implies that an MMS allocation exists (with respect to $v'_i$). In this allocation, every agent other than $i$ gets at least her MMS, whereas agent $i$ gets at least $b - 1 + \delta > b-1$ (with respect to $v_i$). Hence the instance is not a max-gap instance. \item Suppose for the sake of contradiction that $v_i(e) = 1 - \delta$ for $\delta > 0$ (and note that $\delta \le 1$ as item values are non-negative). Let $e' \not= e$ be an arbitrary item such that both $e$ and $e'$ belong to the same bundle $B$ in the $MMS_i$ partition. Modify the valuation function $v_i$ to $v'_i$, where the only {changes are} that $v'_i(e') = v_i(e') + 1 - \delta$ {and $v'_i(e) = 0$}. Hence now $v'_i(B \setminus \{e\}) = b$. This allows us to move $e$ to a different bundle of the $MMS_i$ partition.
After this move, we {still have an $MMS_i$ partition with respect to $v'$, but the structure is} neither the parallel diagonals structure nor the crossing diagonals structure (as sizes of bundles no longer obey these structures). Hence an MMS allocation exists. In this allocation, every agent other than $i$ gets at least her MMS, whereas agent $i$ gets at least $b - 1 + \delta > b-1$ (with respect to $v_i$). Hence the instance is not a max-gap instance. \item Let $e$ and $e'$ be two items different than the pairs $(e_6,e_9)$ and $(e_8,e_9)$. Suppose for the sake of contradiction that $v_i(e) = v_i(e') + 1 - \delta$ for $0 < \delta \le 1$, where $e$ and $e'$ are not in the same bundle in the $MMS_i$ partition. Modify the valuation function $v_i$ to $v'_i$, where the only change is that the values of $e$ and $e'$ are swapped. Namely, $v'_i(e') = v_i(e)$ and $v'_i(e) = v_i(e')$. Now swapping $e$ and $e'$ in the bundles of the $MMS_i$ partition (recall that $e$ and $e'$ are in different bundles), we get a new $MMS_i$ partition, that we refer to as $MMS'_i$. After this swap, we no longer have the structure required by Theorem~\ref{thm:structure}. This can be shown via a case analysis. We sketch the case analysis for the case that $i=R$, and leave the cases $i=C$ and $i=U$ to the reader. If $i = R$, then $e$ and $e'$ are in different rows. If they are also in different columns, then after the modification, {there is a bundle in $MMS'_R$ and a bundle in $MMS_C$ that do not intersect each other}, contradicting Theorem~\ref{thm:structure}. If they are in the same column, and either $e$ or $e'$ belong to $D$, then after the modification, a bundle in $MMS'_R$ fails to intersect $D$. Hence neither $e$ nor $e'$ are in $D$. Conditioned on this and on being in the same column, there are four possibilities. If $e$ and $e'$ are in $C_1$, then they must be $e_1$ and $e_4$. After the modification, $P$ is contained in a single bundle of the $MMS'_R$ partition. If they are in $C_2$ and $e_2$ is at least as valuable as $e_8$, then the bundle $(e_7,e_2,e_9)$ is in the $MMS'_R$ partition, is good for all three agents, and intersects $P$ (which is not allowed, by Theorem~\ref{thm:structure}). If they are in $C_2$ and $e_8$ is at least as valuable as $e_2$, then this is the partial exception concerning $v_R(e_8) \ge v_R(e_2)$. If they are in $C_3$ then they are $e_6$ and $e_9$, and this is the exception concerning $(e_6,e_9)$. In each of the cases analyzed above, the structure that results after replacing $MMS_R$ by $MMS'_R$ is not consistent with Theorem~\ref{thm:structure}. Hence an MMS allocation exists. In this allocation, every agent other than $R$ gets at least her MMS, whereas agent $R$ gets at least $b - 1 + \delta > b-1$ (with respect to $v_R$). Hence the instance is not a max-gap instance. \item Being an ordered instance can be assumed without loss of generality, by Proposition~\ref{pro:BL}. \end{enumerate} \end{proof} Armed with Theorem~\ref{thm:max-gap}, we set up a linear program that searches for a max-gap PD-instance. For each agent $i$, there are nine variables specifying the values of $v_i(e_1), \ldots, v_i(e_9)$. For convenience, we rename these variables to be $\{r_1, \ldots, r_9\}$ for agent $R$, $\{c_1, \ldots, c_9\}$ for agent $C$, and $\{u_1, \ldots, u_9\}$ for agent $U$ (as in Figure~\ref{fig:9Elements3PartitionsB}). In addition, we have the variable $b$. Hence altogether there are 28 variables. The objective function of the LP is to minimize $b$. 
Item~4 of Theorem~\ref{thm:max-gap} gives 27 linear constraints, that we refer to as the {\em positivity constraints}. That is, for every $1 \le j \le 9$ we have the constraints: \begin{eqnarray*}\label{eqn:positivity} \begin{array}{lcl} r_j &\ge &1 \\ c_j &\ge &1 \\ u_j &\ge &1 \end{array} \end{eqnarray*} Item~1 of Theorem~\ref{thm:max-gap} gives 9 linear constraints, that we refer to as the {\em MMS constraints}. \begin{eqnarray*}\label{eqn:MMSconstraints} \begin{array}{lcl} r_1 + r_2 + r_3 &= &b \\ r_4+r_5+r_6 &= &b \\ r_7+r_8+r_9 &= &b \\ c_1 + c_4 + c_7 &= &b \\ c_2 + c_5+ c_8 &= &b \\ c_3+c_6+c_9 &= &b \\ u_2 + u_4 &= &b \\ u_3+u_5+u_7 &= &b \\ u_1+u_6+u_8+u_9 &= &b \end{array} \end{eqnarray*} Item~3 of Theorem~\ref{thm:max-gap} gives 12 linear constraints, that we refer to as the {\em bad bundles constraints}. \begin{eqnarray*}\label{eqn:badBundles} \begin{array}{lcl} c_1 + c_2 + c_3 &\le &b-1 \\ u_1 + u_2 + u_3 &\le &b-1 \\ c_4+c_5+c_6 &\le &b-1 \\ u_4+u_5+u_6 &\le &b-1 \\ r_1 + r_4 + r_7 &\le &b-1 \\ u_1 + u_4 + u_7 &\le &b-1 \\ r_2 + r_5+ r_8 &\le &b-1 \\ u_2 + u_5+ u_8 &\le &b-1 \\ r_3+r_5+r_7 &\le &b-1 \\ c_3+c_5+c_7 &\le &b-1 \\ r_1+r_6+r_8+r_9 &\le &b-1 \\ c_1+c_6+c_8+c_9 &\le &b-1 \end{array} \end{eqnarray*} Minimizing $b$ subject to the above $27 + 9 + 12 = 48$ constraints, the optimal solution (found by an LP solver) has value $b=13$. This certifies that in every max-gap PD-instance, every agent receives at least a $\frac{12}{13}$ fraction of her MMS. This is still far from our goal of $\frac{39}{40}$, that would match the bound in Theorem~\ref{thm:largeGap}. Indeed, the solution to the LP is not a max-gap PS-instance, and not even a negative example: it satisfies all constraints of the LP, but does have an MMS allocation. To make further progress, we need to generate additional valid constraints to the LP. To be useful, these constraints should not be implied by the existing constraints. We present two procedures for generating useful constraints. One procedure generates constraints that we refer to as {\em order constraints}. Recall item~6 of Theorem~\ref{thm:max-gap}, which imposes an order $\pi$ over the items. The following proposition uses it in order to derive additional constraints. \begin{proposition} \label{pro:orderConstraints} We may assume without loss of generality that $e_4 \ge_{\pi} e_2$, $e_3 \ge_{\pi} e_7$ and $e_8 \ge_{\pi} e_6$. \end{proposition} \begin{proof} By symmetry and item~6 of Theorem~\ref{thm:max-gap} (which imposes an order $\pi$ over the items), we may assume that $e_4 \ge_{\pi} e_2$. (If $e_2 \ge_{\pi} e_4$, then we may transpose all matrices and interchange the roles of $R$ and $C$, and get an equivalent negative instance that does satisfy $e_4 \ge_{\pi} e_2$.) By item~5 of Theorem~\ref{thm:max-gap}, this gives the following three constraints: $r_4 \ge r_2 + 1$, $c_4 \ge c_2 + 1$ and $u_4 \ge u_2$ (observe that $e_2$ and $e_4$ are in the same bundle in the $MMS_U$ partition). The fact that $r_1 + r_2 + r_3 = v_R(R_1) = b > b-1 \ge v_R(C_1) = r_1 + r_4 + r_7$ then implies that $r_3 \ge r_7 + 1$. As the instance can be assumed to be ordered, we have $e_3 \ge_{\pi} e_7$, which implies the two constraints $c_3 \ge c_7 + 1$ and $u_3 \ge u_7$ (also here, $e_3$ and $e_7$ are in the same bundle in the $MMS_U$ partition). In a similar manner, the fact that $v_C(C_2) = b > b-1 \ge v_C(R_2)$ implies that $e_8 \ge_{\pi} e_6$, and consequently that $r_8 \ge r_6 + 1$ and $u_8 \ge u_6$. 
\end{proof} By Proposition~\ref{pro:orderConstraints} and its proof, we add {nine} constraints to the LP. {(Two of the constraints, $r_3 \ge r_7 +1$ and $c_8 \ge c_6 + 1$, are not really needed, as they are implied by constraints already present in the LP.)} We refer to these constraints as {\em order constraints}. Observe that some of the power comes from the fact that an order constraint such as $e_4 \ge_{\pi} e_2$ does not simply translate to $r_4 \ge r_2$, but rather to the stronger $r_4 \ge r_2 + 1$. \begin{eqnarray*}\label{eqn:order} \begin{array}{lcl} r_4 &\ge &r_2 + 1\\ c_4 &\ge &c_2 + 1\\ u_4 &\ge &u_2 \\ r_3 &\ge &r_7 + 1 \\ c_3 &\ge &c_7 + 1 \\ u_3 &\ge &u_7 \\ r_8 &\ge &r_6 + 1\\ c_8 &\ge &c_6 + 1\\ u_8 &\ge &u_6 \end{array} \end{eqnarray*} With the above order constraints, the value of the LP increases to $b = 16$. There are plenty of other order constraints that can be inferred (see Appendix~\ref{app:implement}), but those that we could infer did not turn out to be useful. Another procedure generates constraints that we refer to as {\em allocation constraints}. The following proposition introduces three such constraints. \begin{proposition} \label{pro:allocationConstraints} The following three constraints are valid constraints. \begin{enumerate} \item $u_6 + u_7 + u_9 \le b-1$. \item $u_3 + u_9 \le b-1$. \item $r_3 + r_9 \le b-1$. \end{enumerate} \end{proposition} \begin{proof} Consider the allocation $(e_1,e_3,e_4), (e_2,e_5,e_8), (e_6,e_7,e_9)$ to agents $R$, $C$ and $U$ respectively. Observe that $v_R(e_1,e_3,e_4) = r_1 + r_3 + r_4 > r_1 + r_3 + r_2 = b$ and that $v_C(e_2,e_5,e_8) = c_2 + c_5 + c_8 = b$. Hence to be a max-gap PD-instance, $v_U(e_6,e_7,e_9) \le b-1$ must hold, implying the first constraint of the proposition. Consider the allocation $(e_4,e_5,e_6), (e_1,e_2,e_7,e_8), (e_3,e_9)$. Observe that $v_R(e_4,e_5,e_6) = r_4 + r_5 + r_6 = b$. We also have $v_C(e_1,e_2,e_7,e_8) = c_1 + c_2 + c_7 + c_8 > b$. This holds because $c_1 + c_7 > c_5 + c_6$ (because $v_c(C_1) = b > b-1 = v_C(R_2)$), and $c_2 + c_5 + c_8 = b$. Hence to be a max-gap PD-instance, $v_U(e_3,e_9) \le b-1$ must hold, implying the second constraint of the proposition. We now prove the third constraint of the proposition. Assume for the sake of contradiction that $r_3 + r_9 > b-1$. Having proved that $v_U(e_3,e_9) \le b-1$, we have that $u_1 + u_2 + u_4 + u_5 + u_6 + u_7 + u_8 > 2b$. Hence either $u_1 + u_4 + u_7 > b$ or $u_2 + u_5 + u_6 + u_8 > b$. Whichever inequality holds (pick the first one if they both hold), give the respective items to $U$ (who gets more than her MMS), items $(e_3,e_9)$ to $R$ (who gets more than $b-1$), and the remaining items to $C$ (who gets at least her MMS, as she gets a complete column). This allocation has a gap strictly smaller than~1, contradicting the assumption that the instance is a max-gap PD-instance. \end{proof} With the three allocation constraints of Proposition~\ref{pro:allocationConstraints}, the value of the LP increases to $b = 19$. We refer to the above LP {(containing $60$ constraints)} as the {\em root LP}. We are not aware of additional useful linear constraints that hold without loss of generality. {Hence at this point, we revert to mixed integer programming. \subsection{Using mixed integer programming} We use the root LP described above as the basis of a mixed integer program (MIP) that finds the negative example with largest MMS gap, among those that have the parallel diagonals structure. (A similar MIP handles the crossing diagonals structure.) 
This is done by introducing {\em selection variables} that take on only $0/1$ values. Using these selection variables, we add two sets of constraints, referred to as {\em selected order constraints} and {\em selected allocation constraints}. The goal of the selected order constraints is to make sure that the MIP selects a consistent ordering $\pi$ among the items. As we do not know which ordering to select, the selection variables allow the MIP to investigate all possible orderings (that are consistent with the order constraints that we already have), and choose among them the worst one. To illustrate how this is done, consider the two items $e_4$ and $e_9$. We wish to allow the MIP to investigate both the possibility that $e_4 \ge_{\pi} e_9$ and that $e_9 \ge_{\pi} e_4$. We introduce a selection variable $s_{49}$ with the integer (binary) constraint $s_{49} \in \{0,1\}$. We also select a sufficiently large constant, larger than the maximum possible difference in value between any two items. Choosing this constant as~40 suffices for our purpose. The MIP then contains the following six linear constraints (the $+1$ terms are from item~5 of Theorem~\ref{thm:max-gap}): \begin{itemize} \item $r_4 \ge r_9 + 1 - 40 + 40s_{49}$. \item $c_4 \ge c_9 + 1 - 40 + 40s_{49}$. \item $u_4 \ge u_9 + 1 - 40 + 40s_{49}$. \item $r_9 \ge r_4 + 1 - 40s_{49}$. \item $c_9 \ge c_4 + 1 - 40s_{49}$. \item $u_9 \ge u_4 + 1 - 40s_{49}$. \end{itemize} If $s_{49} = 1$, the above constraints are satisfied if and only if $e_4 \ge_{\pi} e_9$. If $s_{49} = 0$, they are satisfied if and only if $e_9 \ge_{\pi} e_4$. We do not need to add selected order constraints for every pair of items, because some order constraints are forced by other constraints (e.g., by Proposition~\ref{pro:orderConstraints}). In our MIP, we used selected order constraints for seven pairs of items.
The goal of the selected allocation constraints is to make sure that solutions of the MIP are supported only on max-gap instances. To illustrate how this is done, consider a candidate allocation $(e_1,e_2,e_3), (e_4,e_6,e_8), (e_5,e_7,e_9)$ (the bundles go to agents $R$, $C$ and $U$ in this order). We need to ensure that in this allocation some agent gets value at most $b-1$. Observe that in this allocation agent $R$ gets $R_1$ and hence a value of at least $b$. Thus the agent getting value at most $b-1$ is either $C$ or $U$, but we do not know which one of them. We introduce a selection variable $s_{c468u579}$ with the integer (binary) constraint $s_{c468u579} \in \{0,1\}$. We also select a sufficiently large constant, larger than the maximum possible value of a bundle of items. Choosing this constant as~100 suffices for our purpose. The MIP then contains the following two linear constraints. \begin{itemize} \item $c_4 + c_6 + c_8 \le b-1 + 100s_{c468u579}$. \item $u_5 +u_7 + u_9 \le b-1 + 100 - 100s_{c468u579}$. \end{itemize} If $s_{c468u579} = 1$, the above constraints are satisfied if and only if the bundle of $U$ is worth at most $b-1$. If $s_{c468u579} = 0$, they are satisfied if and only if the bundle of $C$ is worth at most $b-1$. As in the case of selected order constraints, we do not need to add selected allocation constraints for every possible allocation, as some allocation constraints are forced by other constraints (e.g., by Proposition~\ref{pro:allocationConstraints}). In our MIP, we used selected allocation constraints for forty allocations.
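{For concreteness, the following is a minimal illustrative sketch (it is not the code referred to below, and the variable names are ours) of how the root LP for the parallel diagonals structure, together with one selected order constraint and one selected allocation constraint, could be encoded in Python using the PuLP modelling library. Dropping the two selection variables and their constraints leaves exactly the $60$ constraints of the root LP, which should reproduce the value $b=19$ reported above; the remaining selected constraints would be added in the same pattern.
\begin{verbatim}
# Sketch of the root LP (parallel diagonals) plus one selection
# variable of each kind, using PuLP.  Not the authors' code.
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum, value

idx = range(1, 10)
r = {j: LpVariable(f"r{j}") for j in idx}   # value of e_j for agent R
c = {j: LpVariable(f"c{j}") for j in idx}   # value of e_j for agent C
u = {j: LpVariable(f"u{j}") for j in idx}   # value of e_j for agent U
b = LpVariable("b")

prob = LpProblem("max_gap_PD", LpMinimize)
prob += b  # objective: smallest b compatible with all constraints

# positivity constraints (item 4 of the max-gap theorem)
for j in idx:
    prob += r[j] >= 1
    prob += c[j] >= 1
    prob += u[j] >= 1

# MMS constraints: each bundle of an agent's own partition has value b
R1, R2, R3 = (1, 2, 3), (4, 5, 6), (7, 8, 9)
C1, C2, C3 = (1, 4, 7), (2, 5, 8), (3, 6, 9)
P, D, Q = (2, 4), (3, 5, 7), (1, 6, 8, 9)
for S in (R1, R2, R3):
    prob += lpSum(r[j] for j in S) == b
for S in (C1, C2, C3):
    prob += lpSum(c[j] for j in S) == b
for S in (P, D, Q):
    prob += lpSum(u[j] for j in S) == b

# bad-bundle constraints: bundles bad for an agent are worth at most b-1
for val, S in [(c, R1), (u, R1), (c, R2), (u, R2), (r, C1), (u, C1),
               (r, C2), (u, C2), (r, D), (c, D), (r, Q), (c, Q)]:
    prob += lpSum(val[j] for j in S) <= b - 1

# order constraints
for val in (r, c):
    prob += val[4] >= val[2] + 1
    prob += val[3] >= val[7] + 1
    prob += val[8] >= val[6] + 1
prob += u[4] >= u[2]
prob += u[3] >= u[7]
prob += u[8] >= u[6]

# allocation constraints
prob += u[6] + u[7] + u[9] <= b - 1
prob += u[3] + u[9] <= b - 1
prob += r[3] + r[9] <= b - 1

# one selected order constraint: decide whether e_4 or e_9 precedes
s49 = LpVariable("s49", cat=LpBinary)
for val in (r, c, u):
    prob += val[4] >= val[9] + 1 - 40 * (1 - s49)
    prob += val[9] >= val[4] + 1 - 40 * s49

# one selected allocation constraint for the allocation
# (e1,e2,e3) -> R, (e4,e6,e8) -> C, (e5,e7,e9) -> U
s = LpVariable("s_c468u579", cat=LpBinary)
prob += c[4] + c[6] + c[8] <= b - 1 + 100 * s
prob += u[5] + u[7] + u[9] <= b - 1 + 100 * (1 - s)

prob.solve()
print("optimal b:", value(b))
\end{verbatim}
The big-$M$ constants $40$ and $100$ play the role described above: setting the binary variable to $0$ or $1$ switches off one of the two constraints in each pair while keeping the model linear.}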
The code for the MIPs (one for the parallel diagonals structure, one for the crossing diagonals structure) that verify the correctness of Theorem~\ref{thm:tightGap} can be obtained from the authors upon request. Appendix~\ref{app:implement} provides some further explanations regarding the constraints used in these MIPs. \begin{remark} The example proving Theorem~\ref{thm:largeGap} was also found using the MIP approach described above. We extracted a negative example from a solution of value $b=40$, and then manually modified it so as to make it easier to generate a humanly verifiable proof for its correctness. (There are several negative examples with $b=40$, and the MIP solver does not necessarily produce the one that is easiest for humans to analyse.) \end{remark}} \section{Extension to chores} \label{sec:chores} {\em Chores} are items of negative value, or equivalently, positive {dis-utility}. In allocation problems involving only chores, the convention is that all items must be allocated. In analogy to Definition~\ref{def:MMS}, $MMS_i$ is the minimum over all $n$-partitions of $M$, of the maximum dis-utility under $v_i$ of a bundle in the $n$-partition. (Note that as dis-utility replaces value, maximum and minimum are interchanged in this definition, compared to Definition~\ref{def:MMS}.) It is known that for agents with additive dis-utility functions over chores, there are allocation instances in which in every allocation some agent gets a bundle of dis-utility higher than her MMS~\cite{ARSW17}, and that there always is an allocation giving every agent a bundle of dis-utility at most $\frac{11}{9}$ times her MMS~\cite{HL19}. {We now prove Theorem~\ref{thm:chores}, that there is an instance with three agents and nine chores that has an MMS gap of $\frac{1}{43}$.} \begin{proof} We present an example with an MMS gap of $\frac{1}{43}$, using notation as in Section~\ref{sec:40}. Every row in the dis-utility function of $R$ has value~43 and gives $R$ her MMS. Her dis-utility function is: $\left( \begin{array}{ccc} 6 & 15 & 22 \\ 26 & 10 & 7 \\ {\bf 12} & {\bf 19} & {\bf 12} \\ \end{array} \right)$ Every column in the dis-utility function of $C$ has value~43 and gives $C$ her MMS. Her dis-utility function is: $\left( \begin{array}{ccc} 6 & 15 & {\bf 23} \\ 26 & 10 & {\bf 8} \\ 11 & 18 & {\bf 12} \\ \end{array} \right)$ The bundles that give $U$ her MMS are $p = \{e_2, e_4\}$ (the {\em pair}, in boldface), $d = \{e_3, e_5, e_7\}$ (the {\em diagonal}), and $q = \{e_1, e_6, e_8, e_9\}$ (the {\em quadruple}). The dis-utility function of $U$ is: $\left( \begin{array}{ccc} 6 & {\bf 16} & 22 \\ {\bf 27} & 10 & 7 \\ 11 & 18 & 12 \\ \end{array} \right)$ In analogy to Section~\ref{sec:40}, to analyse all possible allocations in a systematic way, it is convenient to consider a dis-utility function $M$ in which the dis-utility of each chore as the minimum (rather than maximum, as we are dealing with chores) dis-utility given to the chore by the three agents. Hence $M$ is: $\left( \begin{array}{ccc} 6 & 15 & 22 \\ 26 & 10 & 7 \\ 11 & 18 & 12 \\ \end{array} \right)$ Adaptation of the analysis of Section~\ref{sec:40} shows that in every allocation, some agent gets chores of dis-utility at least~44, whereas the MMS is~43. Further details of the proof are omitted. \end{proof} \section{Discussion} {The open questions below refer to allocations of goods. 
Questions of a similar nature can be asked for chores, though the quantitative bounds in these questions would be different from those mentioned below.} Let $\delta_n$ denote the largest value such that for $n$ agents, there is an allocation instance with additive valuations for which no allocation gives every agent more than a $1 - \delta_n$ fraction of her MMS. We have that $\delta_1 = \delta_2 = 0$. As to $\delta_3$, the combination of our Theorem~\ref{thm:largeGap} and the results of~\cite{GM19} imply that $\frac{1}{40} \le \delta_3 \le \frac{1}{9}$. It would be interesting to determine the exact value of $\delta_3$, or at least to narrow the gap between its lower bound and upper bound. Computer assisted techniques, such as those used in the proof of Theorem~\ref{thm:tightGap}, may turn out useful for this purpose. For general $n \ge 3$, the combination of our Theorem~\ref{thm:polynomialGap} and the results of~\cite{GT20} imply that $\frac{1}{n^4} \le \delta_n \le \frac{1}{4} + \frac{1}{12n}$. We do not know whether $\delta_n$ tends to~0 as $n$ grows. Determining whether this is the case remains as an interesting open question. The known results do not exclude the possibility that $\delta_n$ tends to $\frac{1}{4}$ as $n$ grows, but we would be very surprised if this turns out to be true. \begin{appendix} \section{Some further details on the MIP} \label{app:implement} { The notation in this appendix is hopefully self explanatory. Proofs are only sketched. We note that for the parallel diagonal structure, many order constraints are implied by other constraints. This may explain why a relatively small number of selected order constraints suffice in order to enforce a consistent total order among all items. The following order constraints are stated here for convenience, with hints as to why they hold. They need not be added explicitly to the MIP, as they are implied by other constraints in the root LP. \begin{itemize} \item $e_2 \ge e_5 + 1$ ($P > R_2$). \item $e_2 \ge e_7 + 1$ ($P > C_1$). \item $e_3 \ge e_8 + 1$ ($C_3 > Q$). \item $e_4 \ge e_3 + 1$ ($P > R_1$). \item $e_7 \ge e_1 + 1$ ($R_3 > Q$). \item $e_7 \ge e_6 + 1$ ($R_3 > Q$). \item $e_9 \ge e_5 + 1$ ($R_3 > D$ and $e_3 \ge e_8 + 1$). \item $e_5 \ge e_1 + 1$ ($v_R(R_2) > v_R(C_1)$ and $e_7 > e_6$; $v_C(C_2) > v_C(R_1)$ and $e_3 > e_8$; $v_U(D) > v_U(C_1)$ and $e_4 > e_3$). \end{itemize} We now provide some details (proofs left to the reader) about the root LP for the crossing diagonals structure. Naturally, it has the MMS constraints and the bad bundles constraints that are associated with the CD structure (e.g., the constraint $u_1 + u_5 + u_9 = b$). Theorem~\ref{thm:max-gap} holds with the following change: the two exceptions are $(e_3, e_6)$ and $(e_7,e_8)$, instead of $(e_6,e_9)$ and $(e_8,e_9)$. Proposition~\ref{pro:orderConstraints} and~\ref{pro:allocationConstraints} hold without change. Consequently, we can have a root LP with 60 constraints (same number as in the root LP for the parallel diagonals structure), and then extend it to an MIP. For the crossing diagonals structure, the value of the MIP turns out to be~47, larger than the value of~40 obtained for the parallel diagonals structure. For the crossing diagonals structure we can strengthen the root LP, as additional useful order constraints can be inferred. The following order constraints hold (hints are given) but need not be added (as they are implied by other constraints). \begin{itemize} \item $e_2 > e_1, e_7$ ($P \ge C_1$). \item $e_2 > e_5, e_6$ ($P \ge R_2$). 
\item $e_4 > e_1, e_3$ ($P \ge R_1$). \item $e_4 > e_5, e_8$ ($P \ge C_2$). \item $e_9 > e_3, e_6$ ($R_3 \ge Q$). \item $e_9 > e_7, e_8$ ($C_3 \ge Q$). \end{itemize} We now derive useful order constraints. Suppose for the sake of contradiction that $e_9 \ge e_4$. Recall that $e_3 \ge e_7$. Then $v_C(C_1) = v_C(C_3)$ implies that $e_1 \ge e_6$, whereas $v_R(R_2) > v_R(D)$ implies that $e_6 > e_1$, a contradiction. Hence we have the following useful constraint: \begin{itemize} \item $e_4 \ge e_9 + 1$. \end{itemize} This implies additional order constraints (note that most of them can be enhanced by a $+1$ term), some of which are useful: \begin{itemize} \item $e_1 \ge e_6$ ($v_U(D) \ge v_U(R_2)$). \item $e_5 \ge e_7$ ($v_R(R_2) \ge v_R(C_1)$). \item $e_3 \ge e_5$ ($v_C(C_3) \ge v_C(R_2)$). \item $e_8 \ge e_1$ ($v_R(R_3) \ge v_R(C_1)$). \item $e_9 \ge e_2$ ($v_U(D) \ge v_U(C_2)$). \end{itemize} In general, there is a tradeoff in the effort involved in deriving constraints analytically, compared to having more binary selection variables in the MIP. We believe that we have reached a reasonable point along this tradeoff curve, so that neither the analytic proofs nor the code for the MIP are too complicated. } \end{appendix} \end{document}
\begin{document} \title{Two phase flows of compressible viscous fluids}
\author{Eduard Feireisl \thanks{The work of E.F. was supported by the Czech Science Foundation (GA\v CR), Grant Agreement 21--02411S.} \and Anton\' \i n Novotn\' y { \thanks{The work of A.N. was partially supported by the Eduard \v Cech visiting program at the Mathematical Institute of the Academy of Sciences of the Czech Republic. } } } \maketitle
\centerline{Institute of Mathematics of the Academy of Sciences of the Czech Republic;} \centerline{\v Zitn\' a 25, CZ-115 67 Praha 1, Czech Republic} \centerline{[email protected]} \centerline{and} \centerline{IMATH, EA 2134, Universit\'e de Toulon,} \centerline{BP 20132, 83957 La Garde, France} \centerline{[email protected]}
\begin{abstract} We introduce a new concept of \emph{dissipative varifold solution} to models of two phase compressible viscous fluids. In contrast with the existing approach based on the Young measure description, the new formulation is variational, combining the energy and momentum balances in a single inequality. We show the existence of dissipative varifold solutions for a large class of general viscous fluids with non-linear dependence of the viscous stress on the symmetric velocity gradient. \end{abstract}
{\bf Keywords:} Two phase flow, compressible fluid, varifold solution, non-Newtonian fluid
\centerline{\it Dedicated to Maurizio Grasselli on the occasion of his 60th birthday}
\section{Introduction} \label{p}
We consider a simple model of a two-phase flow, where the interface between the two fluids is described via a phase variable $\chi \in \{ 0 , 1 \}$ that coincides with the characteristic function of the domain occupied by one of the fluid components. Accordingly, the time evolution of $\chi$ is governed by the transport equation \begin{equation} \label{p0} \partial_t \chi + \vc{u} \cdot \nabla_x \chi = 0, \end{equation} where $\vc{u}$ is the fluid velocity. We denote \[ \begin{split} \mathcal{O}_1(t) &= \left\{ x \in \Omega \ \Big| \ \chi(t,x) = 1 \right\} - \mbox{the part of the physical space}\ \Omega\subset R^d \ \mbox{occupied by fluid 1},\\ \mathcal{O}_2(t) &= \left\{ x \in \Omega \ \Big| \ \chi(t,x) = 0 \right\} - \mbox{the part of the physical space}\ \Omega\subset R^d \ \mbox{occupied by fluid 2},\\ \Gamma(t) &= \Ov{\mathcal{O}}_1(t) \cap \Ov{\mathcal{O}}_2 (t) - \mbox{the interface.} \end{split} \] As the mass of the fluid is conserved, its density $\varrho$ satisfies the standard equation of continuity \begin{equation} \label{p1} \partial_t \varrho + {\rm div}_x (\varrho \vc{u} ) = 0. \end{equation} The material properties of the two fluid components are characterized by a general barotropic equation of state \begin{equation} \label{p2a} p_i = p_i(\varrho),\ i=1,2,\ p(\chi, \varrho) = \chi p_1 (\varrho) + (1 - \chi)p_2(\varrho). \end{equation} The viscous stress $\mathbb{S}$ is related to the symmetric velocity gradient $\mathbb{D}_x \vc{u}$ through an ``implicit'' rheological law \begin{equation} \label{p2} \mathbb{S}_i : \mathbb{D}_x \vc{u} = F_i (\mathbb{D}_x \vc{u}) + F^*_i (\mathbb{S}_i),\ i = 1,2 \end{equation} where the dissipative potentials $F_i$, $i=1,2$ are convex functions defined on the space $R^{d \times d}_{\rm sym}$ of symmetric $d$-dimensional real matrices, and where $F^*_i$ denotes the convex conjugate.
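For the reader's convenience, we recall the standard definition of the convex conjugate appearing in \eqref{p2}; this is textbook convex analysis and not specific to the present setting:
\[
F^*_i (\mathbb{S}) = \sup_{\mathbb{D} \in R^{d \times d}_{\rm sym}} \left\{ \mathbb{S} : \mathbb{D} - F_i (\mathbb{D}) \right\},
\qquad
\mathbb{S} : \mathbb{D} \leq F_i (\mathbb{D}) + F^*_i (\mathbb{S}) \ \mbox{for all}\ \mathbb{S}, \mathbb{D} \in R^{d \times d}_{\rm sym},
\]
with equality if and only if $\mathbb{S} \in \partial F_i (\mathbb{D})$. Thus \eqref{p2} selects exactly the pairs $(\mathbb{S}_i, \mathbb{D}_x \vc{u})$ saturating the Fenchel--Young inequality.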
More specifically, we suppose \begin{equation} \label{p4} F_i : R^{d \times d}_{\rm sym} \to [0, \infty] \ \mbox{is l.s.c. convex},\ F_i(0) = 0,\ i=1,2. \end{equation} Note that \eqref{p2} is equivalent to \[ \mathbb{S}_i \in \partial F_i (\mathbb{D}_x \vc{u}) \ \Leftrightarrow \ \mathbb{D}_x \vc{u} \in \partial F^*_i (\mathbb{S}_i). \] It is easy to check that the choice \[ F_i = \frac{\mu_i}{2} \left| \mathbb{D} - \frac{1}{d} {\rm tr} [\mathbb{D}] \, \mathbb{I} \right|^2 + \frac{\lambda_i}{2} \left|{\rm tr} [\mathbb{D}] \right|^2, \ i=1,2 \] gives rise to the standard Newtonian viscous stress used in the Navier--Stokes system. Similarly to \eqref{p2a}, we set \[ \mathbb{S} (\chi, \mathbb{D}_x \vc{u}) = \chi \mathbb{S}_1 (\mathbb{D}_x \vc{u}) + (1 - \chi) \mathbb{S}_2 (\mathbb{D}_x \vc{u}). \] We refer to Bul\' \i \v cek et al. \cite{BuGwMaSG} for more details about ``implicitly'' constituted fluids, where the viscous stress is related to the velocity gradient in a way similar to \eqref{p2}. A similar abstract approach has been used in \cite{AbbFeiNov}.
The balance of momentum is satisfied for each individual fluid. The velocity is continuous, while the Cauchy stress \[ \mathbb{T}_i = \mathbb{S}_i - p_i \mathbb{I},\ i=1,2 \] experiences a jump on the interface, specifically, \[ [\mathbb{T} \cdot \vc{n}] = \kappa H \vc{n}\ \mbox{on} \ \Gamma ,\ \kappa \geq 0, \] where $H \vc{n}$ is the mean curvature vector, and $\kappa$ the coefficient of surface tension. Consequently, it is convenient to formulate the momentum balance in the weak form: \begin{equation} \label{p1a} \begin{split} \left[ \intO{\varrho \vc{u} \cdot \boldsymbol{\varphi} } \right]_{t = 0}^{t = \tau} &= \int_0^\tau \intO{ \Big( \varrho \vc{u} \cdot \partial_t \boldsymbol{\varphi} + \varrho \vc{u} \otimes \vc{u}: \nabla_x \boldsymbol{\varphi} + p(\chi, \varrho) {\rm div}_x \boldsymbol{\varphi} \Big) } \,{\rm d} t \\ &- \int_0^\tau \intO{ \mathbb{S} (\chi , \nabla_x \vc{u}) : \nabla_x \boldsymbol{\varphi} } \,{\rm d} t + \kappa \int_0^\tau \int_{\Gamma(t)} H \vc{n} \cdot \boldsymbol{\varphi} \ {\rm d} S_x \,{\rm d} t \end{split} \end{equation} for any $0 \leq \tau \leq T$, $\boldsymbol{\varphi} \in C^1_c([0,T] \times \Omega; R^d)$, where we have omitted the effect of external volume forces for simplicity.
\subsection{Varifold solutions}
If the initial interface is a smooth surface (curve if $d=2$), the problem admits a local-in-time regular solution as long as the data are regular, see the survey of Denisova, Solonnikov \cite{DeSo} and the references cited therein. Global existence is not expected because of possible singularities and self-intersections of $\Gamma (t)$. Plotnikov \cite{Plot1, Plot3, Plot2} adapted the concept of \emph{varifold} to study the problem for large times in the context of incompressible non-Newtonian fluids for $d=2$. Later Abels \cite{AbelsI, AbelsII} introduced the class of \emph{measure-valued} varifold solutions to attack the problem for $d=2,3$ still in the incompressible setting but for a larger family of viscous stresses including the standard Newtonian fluids. He also showed the existence of weak varifold solutions in the absence of surface tension. Ambrose et al. \cite{ALNS} extended the existence proof to the case with surface tension for incompressible Newtonian fluids and $d=2$.
To the best of our knowledge, with the only exception of the work of Plotnikov \cite{Plot2} dealing with a simplified Stokes model, similar problems are completely open in the context of \emph{compressible} fluids. We introduce a new weak formulation of the problem based on the principles of the calculus of variations. The main idea is to rewrite the momentum equation with the associated total energy balance as a single inequality in terms of the dissipation potentials $F_1$, $F_2$. The resulting problem still uses the concept of varifold, however, the description of oscillations via a Young measure is no longer necessary. The new formulation is versatile and can be used for both compressible and incompressible fluids. Finally, we show that the problem admits global-in-time solutions as long as ${\rm div}_x \vc{u}$ is penalized in a way similar to \cite{FeLiMa}.
\section{Weak formulation}
To avoid problems with the physical boundary, we consider space-periodic boundary conditions. In other words, the spatial domain $\Omega$ is identified with a flat torus, \begin{equation} \label{p5a} \Omega = \mathbb{T}^d,\ d=2,3. \end{equation}
\subsection{Varifolds}
Following Abels \cite{AbelsI}, Plotnikov \cite{Plot1}, we introduce the concept of \emph{varifold} $V$, \[ V \in L^\infty_{\rm weak-(*)}(0,T; \mathcal{M}^+(\mathbb{T}^d \times S^{d-1})), \] which, after disintegration, takes the form \[ V = |V| \otimes V_x,\ |V| \in \mathcal{M}^+(\mathbb{T}^d),\ \{ V_x \}_{x \in \mathbb{T}^d} \subset \mathfrak{P}(S^{d-1}). \] The \emph{first variation of} $V$ reads \[ \left< \delta V(t); \boldsymbol{\varphi} \right> = \int_{\mathbb{T}^d \times S^{d-1}} (\mathbb{I} - z \otimes z) : \nabla_x \boldsymbol{\varphi}(x) \ {\rm d} V(t) \ \mbox{for any}\ \boldsymbol{\varphi} \in C^1(\mathbb{T}^d; R^d). \] Similarly to \cite{AbelsI}, we rewrite the momentum equation \eqref{p1a} as \begin{equation} \label{p1aa} \begin{split} \left[ \intTd{\varrho \vc{u} \cdot \boldsymbol{\varphi} } \right]_{t = 0}^{t = \tau} &= \int_0^\tau \intTd{ \Big( \varrho \vc{u} \cdot \partial_t \boldsymbol{\varphi} + \varrho \vc{u} \otimes \vc{u}: \nabla_x \boldsymbol{\varphi} + p(\chi, \varrho) {\rm div}_x \boldsymbol{\varphi} \Big) } \,{\rm d} t \\ &- \int_0^\tau \intTd{ \mathbb{S} (\chi , \mathbb{D}_x \vc{u}) : \nabla_x \boldsymbol{\varphi} } \,{\rm d} t + \kappa \int_0^\tau \int_{\mathbb{T}^d \times S^{d-1}} \left< \delta V; \boldsymbol{\varphi} \right> \ {\rm d} V \,{\rm d} t \end{split} \end{equation} for any $0 \leq \tau \leq T$, $\boldsymbol{\varphi} \in C^1([0,T] \times \mathbb{T}^d; R^d)$.
\subsection{Total energy balance}
Introducing the pressure potentials \begin{equation} \label{p5b} P_i(\varrho),\ P'_i(\varrho) \varrho - P_i(\varrho) = p_i(\varrho),\ i=1,2,\ P(\chi,\varrho) = \chi P_1(\varrho) + (1-\chi) P_2(\varrho), \end{equation} we deduce the \emph{total energy balance} \begin{equation} \label{p5c} \frac{{\rm d}}{\,{\rm d} t } \left[ \intTd{ \left( \frac{1}{2} \varrho|\vc{u}|^2 + P(\chi, \varrho) \right) } + \kappa \| \nabla_x \chi \|_{\mathcal{M}(\mathbb{T}^d; R^d)} \right] + \intTd{ \mathbb{S}(\chi, \mathbb{D}_x \vc{u}) : \mathbb{D}_x \vc{u} } = 0, \end{equation} cf. Abels \cite{AbelsI}.
In the weak formulation, the energy balance \eqref{p5c} is replaced by an inequality \begin{equation} \label{p5d} \left[ \intTd{ \left( \frac{1}{2} \varrho|\vc{u}|^2 + P(\chi, \varrho) \right) } + \kappa \| \nabla_x \chi \|_{\mathcal{M}(\mathbb{T}^d; R^d)} \right]_{t=0}^{t=\tau} + \int_0^\tau \intTd{ \mathbb{S}(\chi, \mathbb{D}_x \vc{u}) : \mathbb{D}_x \vc{u} } \,{\rm d} t \leq 0. \end{equation} Next, subtracting \eqref{p1aa} from \eqref{p5d} we obtain \begin{equation} \label{p5e} \begin{split} &\left[ \intTd{ \left( \frac{1}{2} \varrho|\vc{u}|^2 - \varrho \vc{u} \cdot \boldsymbol{\varphi} + P(\chi, \varrho) \right) } + \kappa \| \nabla_x \chi \|_{\mathcal{M}(\mathbb{T}^d; R^d)} \right]_{t=0}^{t=\tau}\\ &+ \int_0^\tau \intTd{ \Big( \varrho \vc{u} \cdot \partial_t \boldsymbol{\varphi} + \varrho \vc{u} \otimes \vc{u}: \nabla_x \boldsymbol{\varphi} + p(\chi, \varrho) {\rm div}_x \boldsymbol{\varphi} \Big) } \,{\rm d} t \\ & + \kappa \int_0^\tau \int_{\mathbb{T}^d \times S^{d-1}} \left< \delta V; \boldsymbol{\varphi} \right> \ {\rm d} V \,{\rm d} t \\ &+ \int_0^\tau \intTd{ \mathbb{S}(\chi, \mathbb{D}_x \vc{u}) : (\mathbb{D}_x \vc{u} - \mathbb{D}_x \boldsymbol{\varphi}) } \,{\rm d} t \leq 0. \end{split} \end{equation} Finally, in view of \eqref{p2}, \[ \begin{split} F(\chi, \mathbb{D}_x \boldsymbol{\varphi}) - F(\chi, \mathbb{D}_x \vc{u}) &\equiv \chi \Big( F_1(\mathbb{D}_x \boldsymbol{\varphi} ) - F_1 (\mathbb{D}_x \vc{u}) \Big) + (1 -\chi) \Big( F_2(\mathbb{D}_x \boldsymbol{\varphi} ) - F_2 (\mathbb{D}_x \vc{u}) \Big)\\ &\geq \mathbb{S}(\chi, \mathbb{D}_x \vc{u}) : (\mathbb{D}_x \boldsymbol{\varphi} - \mathbb{D}_x \vc{u}). \end{split} \] Consequently, we rewrite \eqref{p5e} in the final form \begin{equation} \label{p5f} \begin{split} \int_0^\tau &\intTd{ \Big( F(\chi, \mathbb{D}_x \boldsymbol{\varphi}) - F(\chi, \mathbb{D}_x \vc{u}) \Big) } \,{\rm d} t \\ &\geq \left[ \intTd{ \left( \frac{1}{2} \varrho|\vc{u}|^2 - \varrho \vc{u} \cdot \boldsymbol{\varphi} + P(\chi, \varrho) \right) } + \kappa \| \nabla_x \chi \|_{\mathcal{M}(\mathbb{T}^d; R^d)} \right]_{t=0}^{t=\tau}\\ &+ \int_0^\tau \intTd{ \Big( \varrho \vc{u} \cdot \partial_t \boldsymbol{\varphi} + \varrho \vc{u} \otimes \vc{u}: \nabla_x \boldsymbol{\varphi} + p(\chi, \varrho) {\rm div}_x \boldsymbol{\varphi} \Big) } \,{\rm d} t \\ & + \kappa \int_0^\tau \int_{\mathbb{T}^d \times S^{d-1}} \left< \delta V; \boldsymbol{\varphi} \right> \ {\rm d} V \,{\rm d} t \end{split} \end{equation} for any $0 \leq \tau \leq T$, $\boldsymbol{\varphi} \in C^1([0,T] \times \mathbb{T}^d; R^d)$.
\subsection{Dissipative varifold solutions}
We are ready to introduce the concept of \emph{dissipative varifold solution}.
\begin{Definition}[{\bf dissipative varifold solution}] \label{pD1} We say that $[\chi, \varrho, \vc{u}, V]$ is a \emph{dissipative varifold solution} of the problem \eqref{p0}, \eqref{p1}, \eqref{p1a}, \eqref{p5a} if the following holds: \begin{itemize} \item {\bf Regularity class.} \begin{equation} \label{ds1} \chi \in C([0,T]; L^1(\mathbb{T}^d)),\ \chi \in \{ 0,1 \} \ \mathscr{B}ox{for a.a.}\ t, x; \end{equation} \begin{equation} \label{ds2} \varrho \in C([0,T; L^1(\mathbb{T}^d)]),\ 0 \leq \underline{\varrho} \le \varrho(t,x) \leq \Ov{\varrho}\ \mathscr{B}ox{for a.a.}\ t, x; \end{equation} \begin{equation} \label{ds3} \vc{u} \in L^\alpha (0,T; W^{1,\alpha} (\mathbb{T}^d; R^d))\ \mathscr{B}ox{for some}\ \alpha > 1, \varrho \vc{u} \in C([0,T]; L^2(\mathbb{T}^d; R^d)), \end{equation} \begin{equation} \label{ds4} V \in L^\infty_{\rm weak-(*)}(0,T; \mathcal{M}^+ (\mathbb{T}^d \times S^{d-1})). \end{equation} \item {\bf Transport.} \begin{equation} \label{ds5} \left[ \intTd{ \chi \varphi } \right]_{t=0}^{t = \tau} = \int_0^\tau \intTd{ \Big( \chi \mathbb{P}artial_t \varphi + \chi \vc{u} \cdot \nabla_x \varphi + \chi {\rm div}_x \vc{u} \varphi \Big) } \,{\rm d} t \end{equation} for any $0 \leq \tau \leq T$, $\varphi \in C^1([0,T \times \mathbb{T}^d])$. \item {\bf Mass conservation.} \begin{equation} \label{ds6} \left[ \intTd{ \varrho \varphi } \right]_{t=0}^{t = \tau} = \int_0^\tau \intTd{ \Big( \varrho \mathbb{P}artial_t \varphi + \varrho \vc{u} \cdot \nabla_x \varphi \Big) } \,{\rm d} t \end{equation} for any $0 \leq \tau \leq T$, $\varphi \in C^1([0,T \times \mathbb{T}^d])$. \item{{\bf Momentum--energy balance}} \begin{equation} \label{ds7} \begin{split} \int_0^\tau &\intTd{ \Big( F(\chi, \mathbb{D}_x \boldsymbol{\varphi}) - F(\chi, \mathbb{D}_x \vc{u}) \Big) } \,{\rm d} t \\ &\geq \left[ \intTd{ \left( \frac{1}{2} \varrho|\vc{u}|^2 - \varrho \vc{u} \cdot \boldsymbol{\varphi} + P(\chi, \varrho) \right) } + \kappa \| \nabla_x \chi \|_{\mathcal{M}(\mathbb{T}^d; R^d)} \right]_{t=0}^{t=\tau}\\ &+ \int_0^\tau \intTd{ \Big( \varrho \vc{u} \cdot \mathbb{P}artial_t \boldsymbol{\varphi} + \varrho \vc{u} \otimes \vc{u}: \nabla_x \boldsymbol{\varphi} + p(\chi, \varrho) {\rm div}_x \boldsymbol{\varphi} \Big) } \,{\rm d} t \\ & + \kappa \int_0^\tau \int_{\mathbb{T}^d \times S^{d-1}} \left< \delta V; \boldsymbol{\varphi} \right> \ {\rm d} V \,{\rm d} t \end{split} \end{equation} for any $0 \leq \tau \leq T$, and any $\boldsymbol{\varphi} \in C^1([0,T] \times \mathbb{T}^d; R^d)$ satisfying \begin{equation} \label{ds7a} \int_0^T \intTd{ \left( F_1 (\mathbb{D}_x \boldsymbol{\varphi}) + F_2 (\mathbb{D}_x \boldsymbol{\varphi}) \right) } \,{\rm d} t < \infty. \end{equation} \item {\bf Varifold compatibility.} If, in addition, $\kappa > 0$, we require $\chi \in L^\infty_{{\rm weak-(*)}}(0,T; BV(\mathbb{T}^d))$, and \begin{equation} \label{ds8} \int_{\mathbb{T}^d \times S^{d-1}} \boldsymbol{\varphi} \cdot z \ {\rm d} V(\tau, \cdot) + \int_{\mathbb{T}^d} \boldsymbol{\varphi} \cdot {\rm d} \nabla_x \chi (\tau) = 0 \end{equation} for a.a. $0 \leq \tau \leq T$, and any $\boldsymbol{\varphi} \in C(\mathbb{T}^d)$. 
\end{itemize} \end{Definition} First observe that dissipative varifold solutions coincide with weak varifold solutions in the sense of Plotnikov \cite{Plot1} and Abels \cite{AbelsI}, if they satisfy the energy balance \begin{equation} \label{ds9} \left[ \intTd{ \left( \frac{1}{2} \varrho |\vc{u}|^2 + P(\chi, \varrho) \right) } + \kappa \| \nabla_x \chi \|_{\mathcal{M}(\mathbb{T}^d; R^d)} \right]_{t=0}^{t = \tau} + \int_0^\tau \intTd{ \mathbb{S} : \mathbb{D}_x \vc{u} } \,{\rm d} t = 0, \end{equation} where \begin{equation} \label{ds10} \mathbb{S}(t,x) \in \chi \mathbb{P}artial F_1((\chi , \mathbb{D}_x \vc{u}) (t,x)) + (1- \chi) \mathbb{P}artial F_2((\chi , \mathbb{D}_x \vc{u}) (t,x)) \ \mathscr{B}ox{for a.a.}\ t,x. \end{equation} Indeed subtracting \eqref{ds9} from \eqref{ds7} we obtain \[ \begin{split} \int_0^\tau &\intTd{ \Big( F(\chi, \mathbb{D}_x \boldsymbol{\varphi}) - F(\chi, \mathbb{D}_x \vc{u}) - \mathbb{S} : (\mathbb{D}_x \boldsymbol{\varphi} - \mathbb{D}_x \vc{u} \Big) } \,{\rm d} t \\ &\geq - \left[ \intTd{\varrho \vc{u} \cdot \boldsymbol{\varphi} } \right]_{t=0}^{t=\tau}\\ &+ \int_0^\tau \intTd{ \Big( \varrho \vc{u} \cdot \mathbb{P}artial_t \boldsymbol{\varphi} + \varrho \vc{u} \otimes \vc{u}: \nabla_x \boldsymbol{\varphi} + p(\chi, \varrho) {\rm div}_x \boldsymbol{\varphi} - \mathbb{S} : \mathbb{D}_x \boldsymbol{\varphi} \Big) } \,{\rm d} t \\ & + \kappa \int_0^\tau \int_{\mathbb{T}^d \times S^{d-1}} \left< \delta V; \boldsymbol{\varphi} \right> \ {\rm d} V \,{\rm d} t , \end{split} \] where, by virtue of \eqref{ds10}, \[ \begin{split} \int_0^\tau &\intTd{ \Big( F(\chi, \mathbb{D}_x \boldsymbol{\varphi}) - F(\chi, \mathbb{D}_x \vc{u}) - \mathbb{S} : (\mathbb{D}_x \boldsymbol{\varphi} - \mathbb{D}_x \vc{u} \Big) } \,{\rm d} t \\ &= -\int_0^\tau \intTd{ \Big( F(\chi, \mathbb{D}_x \boldsymbol{\varphi}) - F(\chi, \mathbb{D}_x \vc{u}) - \mathbb{S} : (\mathbb{D}_x \boldsymbol{\varphi} - \mathbb{D}_x \vc{u} \Big) } \,{\rm d} t \leq 0. \end{split} \] Consequently, we deduce the standard weak varifold formulation of the momentum equation \[ \begin{split} &\left[ \intTd{\varrho \vc{u} \cdot \boldsymbol{\varphi} } \right]_{t=0}^{t=\tau}\\ &= \int_0^\tau \intTd{ \Big( \varrho \vc{u} \cdot \mathbb{P}artial_t \boldsymbol{\varphi} + \varrho \vc{u} \otimes \vc{u}: \nabla_x \boldsymbol{\varphi} + p(\chi, \varrho) {\rm div}_x \boldsymbol{\varphi} - \mathbb{S} : \mathbb{D}_x \boldsymbol{\varphi} \Big) } \,{\rm d} t \\ & + \kappa \int_0^\tau \int_{\mathbb{T}^d \times S^{d-1}} \left< \delta V; \boldsymbol{\varphi} \right> \ {\rm d} V \,{\rm d} t \end{split} \] for any $0 \leq \tau \leq T$, and any $\boldsymbol{\varphi} \in C^1([0,T] \times \mathbb{T}^d; R^d))$. Note that Definition \ref{pD1} can be easily adapted to incompressible fluids just by dropping the pressure term and considering solenoidal test functions in \eqref{ds7}. \section{Hypotheses and main results} \label{H} Our goal is to handle the largest possible class of the pressure--density equations of state as well as the dissipative potentials. For technical reasons, however, certain singularity of $F_i$ will be required to keep the system out of the vacuum. 
Our principal technical hypotheses concerning the dissipative potentials read: \begin{equation} \label{H1} F_i : R^{d \times d}_{\rm sym} \to [0, \infty]\ \mbox{convex l.s.c.},\ F_i(0) = 0,\ 0 \in {\rm int} ( {\rm Dom}[F_1] \cap {\rm Dom}[F_2] ); \end{equation} \begin{equation} \label{H2} F_i (\mathbb{D}) \ageq \Big| \mathbb{D} - \frac{1}{d} {\rm tr}[\mathbb{D}]\, \mathbb{I} \Big|^\alpha - 1 \ \mbox{for some}\ \alpha > \frac{2d}{d +2}; \end{equation} \begin{equation} \label{H3} F_i (\mathbb{D}) = \infty \ \mbox{whenever}\ |{\rm tr}[\mathbb{D}]| > \Ov{d} \geq 0; \end{equation} $i = 1,2$. Hypothesis \eqref{H2} seems natural to guarantee integrability of the convective term by means of the energy bounds, while hypothesis \eqref{H3} yields a uniform bound \begin{equation} \label{H4} |{\rm div}_x \vc{u} | \leq \Ov{d} \ \mbox{a.a. in}\ (0,T) \times \mathbb{T}^d. \end{equation} We refer the reader to \cite{FeLiMa} for the modelling aspects of \eqref{H3}. In the context of incompressible fluids, the hypothesis \eqref{H4} becomes irrelevant, while \eqref{H1}, \eqref{H2} can be rewritten as \begin{equation} \label{H1a} F_i : R^{d \times d}_{\rm sym,0} \to [0, \infty]\ \mbox{convex l.s.c.},\ F_i(0) = 0,\ 0 \in {\rm int} ( {\rm Dom}[F_1] \cap {\rm Dom}[F_2] ); \end{equation} \begin{equation} \label{H2a} F_i (\mathbb{D}) \ageq | \mathbb{D} |^\alpha - 1 \ \mbox{for some}\ \alpha > \frac{2d}{d +2}, \end{equation} $i=1,2$. As $F_i$ may become infinite, the above class is broad enough to accommodate the rheology of \emph{thick fluids}, see Barnes \cite{Barn} and Rodrigues \cite{Rodr}. As the velocity $\vc{u}$ enjoys the Sobolev regularity \eqref{ds3} and \eqref{H4} holds, the theory of DiPerna--Lions \cite{DL} applies to \eqref{ds5}, \eqref{ds6}. In particular, for the initial data \begin{equation} \label{H5} 0 < \underline{\varrho} \leq \varrho_0 \leq \Ov{\varrho}, \end{equation} the equation of continuity \eqref{ds6} admits a unique renormalized solution $\varrho$ belonging to the class \[ \varrho \in C([0,T]; L^1(\mathbb{T}^d)),\ \underline{\varrho} \exp \left( - t \Ov{d} \right) \leq \varrho(t,x) \leq \Ov{\varrho} \exp \left( t \Ov{d} \right) \ \mbox{for any}\ t,x. \] By the same token, the phase variable $\chi$, \[ \chi \in C([0,T]; L^1(\mathbb{T}^d)), \] ranges in the set $\{0,1 \}$ provided the same is true for the initial data. In addition, $\chi$ and $\varrho$ satisfy the renormalized equation \begin{equation} \label{H6} \begin{split} &\left[ \intO{ b(\chi, \varrho) \varphi } \right]_{t = 0}^{t = \tau} \\ &= \int_0^\tau \intO{ \left[ b(\chi, \varrho) \partial_t \varphi + b(\chi, \varrho) \vc{u} \cdot \nabla_x \varphi + \left( b(\chi, \varrho) - \frac{ \partial b(\chi, \varrho)}{\partial \varrho} \varrho \right) {\rm div}_x \vc{u} \varphi \right] } \,{\rm d} t \end{split} \end{equation} for any $0 \leq \tau \leq T$, any $\varphi \in C^1([0,T] \times \mathbb{T}^d)$, and any $b \in C^1_{\rm loc}([0,1] \times (0, \infty))$. We are ready to state our main results. We start with a mixture of two compressible fluids without surface tension. \begin{Theorem} [{\bf Compressible fluids without surface tension}] \label{TH1} Let $d=2,3$. Suppose that $\kappa = 0$ and $p_i \in C^1(0,\infty)$ are increasing functions of $\varrho$ for $i=1,2$. Let $F_i$, $i=1,2$, satisfy \eqref{H1}--\eqref{H3} with $\alpha \geq \frac{11}{5}$ if $d=3$ and $\alpha \geq 2$ if $d=2$.
In addition, suppose there is a constant $k > 0$ such that \begin{equation} \label{H6a} \frac{1}{k} F_1 (\mathbb{D}) - k \leq F_2(\mathbb{D}) \leq k \Big( F_1 (\mathbb{D}) + 1 \Big) \ \mathscr{B}ox{for any}\ \mathbb{D} \in R^{d\times d}_{\rm sym}. \end{equation} Let the initial data satisfy \[ \chi(0, \cdot)= \chi_0 = 1_{\Omega_\varepsilonga}, \ \Omega_\varepsilonga_0 \subset \mathbb{T}^d \ \mathscr{B}ox{a Lipschitz domain}, \] \[ \varrho(0, \cdot )= \varrho_0,\ 0 < \underline{\varrho} \leq \varrho_0 \leq \Ov{\varrho},\ \varrho \vc{u} (0, \cdot) = \vc{m}_0 \in L^2(\mathbb{T}^d; R^d). \] Then the problem \eqref{p0}, \eqref{p1}, \eqref{p1a}, \eqref{p5a} admits a dissipative varifold solution $(\varrho, \vc{u}, \chi)$ in $(0,T) \times \mathbb{T}^d$ in the sense of Definition \ref{pD1}. \end{Theorem} The hypothesis \eqref{H6a} requires the dissipative potentials to share the growth of the same order for large $\mathbb{D}$. To handle the compressible case with surface tension, we restrict ourselves to the pressure--density rheological law pertinent to isothermal gases, namely \begin{equation} \label{gas} p_i(\varrho) = a_i \varrho, \ a_i > 0,\ i=1,2,\ p(\varrho, \chi) = \chi a_1 \varrho + (1 - \chi) a_2 \varrho. \end{equation} \begin{Theorem} [{\bf Compressible fluids with surface tension}] \label{TH3} Let $d=2,3$, and $\kappa \geq 0$. Suppose that the pressure is given by \eqref{gas}. Let $F_i$, $i=1,2$, satisfy \eqref{H1}--\eqref{H3}. Let the initial data satisfy \[ \chi(0, \cdot)= \chi_0 = 1_{\Omega_\varepsilonga}, \ \Omega_\varepsilonga_0 \subset \mathbb{T}^d \ \mathscr{B}ox{a Lipschitz domain}, \] \[ \varrho(0, \cdot )= \varrho_0,\ 0 < \underline{\varrho} \leq \varrho_0 \leq \Ov{\varrho},\ \varrho \vc{u} (0, \cdot) = \vc{m}_0 \in L^2(\mathbb{T}^d; R^d). \] Then the problem \eqref{p0}, \eqref{p1}, \eqref{p1a}, \eqref{p5a} admits a dissipative varifold solution $(\varrho, \vc{u}, \chi,V)$ in $(0,T) \times \mathbb{T}^d$ in the sense of Definition \ref{pD1}. \end{Theorem} Finally, we reformulate the result in terms of incompressible fluids. \begin{Theorem} [{\bf Incompressible fluids}] \label{TH2} Let $d=2,3$, and $\kappa \geq 0$. Suppose that $F_i$ satisfy \eqref{H1a}, \eqref{H2a}. Let the initial data satisfy \[ \chi(0, \cdot)= \chi_0 = 1_{\Omega_\varepsilonga}, \ \Omega_\varepsilonga_0 \subset \mathbb{T}^d \ \mathscr{B}ox{a Lipschitz domain}, \] \[ \varrho(0, \cdot )= \varrho_0,\ 0 < \underline{\varrho} \leq \varrho_0 \leq \Ov{\varrho},\ \varrho \vc{u} (0, \cdot) = \vc{m}_0 \in L^2(\mathbb{T}^d; R^d). \] Then the problem \eqref{p0}, \eqref{p1}, \eqref{p1a}, \eqref{p5a} admits a dissipative varifold solution $(\varrho, \vc{u}, \chi, V)$ in $(0,T) \times \mathbb{T}^d$ in the incompressible setting of Definition \ref{pD1}. Specifically, ${\rm div}_x \vc{u} = 0$ and \eqref{ds7} holds for any test function $\boldsymbol{\varphi}$ such that ${\rm div}_x \boldsymbol{\varphi} = 0$. \end{Theorem} The rest of the paper is devoted to the proof of the above results. \section{Basic approximation scheme} \label{b} To construct the varifold solution, we adapt the approximation scheme introduced in \cite{AbbFeiNov}. 
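Before describing the scheme, it may help to keep in mind a concrete pair of admissible potentials; the example below is purely illustrative and not taken from the references. For constants $\mu_1, \mu_2 > 0$, $\Ov{d} > 0$, and an exponent $\alpha$ as in \eqref{H2}, set
\[
F_i(\mathbb{D}) = \mu_i \Big| \mathbb{D} - \frac{1}{d} {\rm tr}[\mathbb{D}]\, \mathbb{I} \Big|^{\alpha} \ \mbox{if}\ |{\rm tr}[\mathbb{D}]| \leq \Ov{d}, \quad F_i(\mathbb{D}) = \infty \ \mbox{otherwise}, \quad i=1,2.
\]
Each $F_i$ is convex and lower semicontinuous, $F_i(0) = 0$, and $0$ belongs to the interior of ${\rm Dom}[F_1] \cap {\rm Dom}[F_2] = \{ |{\rm tr}[\mathbb{D}]| \leq \Ov{d} \}$, so \eqref{H1}--\eqref{H3} hold. Moreover, $F_1$ and $F_2$ differ only by the multiplicative constants $\mu_i$, so the comparability hypothesis \eqref{H6a} holds with $k = \max\{ \mu_1/\mu_2, \mu_2/\mu_1, 1 \}$.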
First, we replace the dissipative potentials $F_i$ by their Moreau--Yosida approximation, \[ F^\varepsilon_i (\mathbb{D}) = \inf_{\mathbb{M} \in R^{d \times d}_{\rm sym}} \left\{ \frac{1}{2 \varepsilon} | \mathbb{M} - \mathbb{D} |^2 + F_i(\mathbb{M})\right\},\ i=1,2, \ \varepsilon > 0. \] Next, we consider an orthogonal basis $\{ \vc{e}_n \}_{n=1}^\infty$ of the space $L^2(\mathbb{T}^d; R^d)$ consisting of, say, trigonometric polynomials. We look for an approximate velocity field belonging to the space \[ \vc{u} \in C([0,T]; X_N),\ X_N = {\rm span} \{ \vc{e}_n \}_{n=1}^N . \] Given $\vc{u} \in C([0,T]; X_N)$, the transport equation as well as the equation of continuity may be solved by the method of characteristics, \begin{equation} \label{b1} \chi(t, x) = \chi_0 (\vc{X}^{-1}(t,x)),\ \varrho(t,x) = \varrho_0 (\vc{X}^{-1}(t,x)) \exp \left( - \int_0^t {\rm div}_x \vc{u} (s, \vc{X}^{-1}(s, x )) {\rm d} s \right), \end{equation} where $\vc{X}$ is the associated Lagrangian flow \[ \frac{{\rm d}}{\,{\rm d} t } \vc{X}(t,x) = \vc{u} (t, \vc{X}(t,x)),\ \vc{X}(0,x) = x. \] At this stage, we approximate the initial data $\chi_0$ to be a characteristic function of a $C^1$-domain $\Omega_0$. Accordingly, if $\vc{u}$ is smooth with respect to the $x$-variable, the image of $\Omega_0$ under the flow remains $C^1$ at any time, \[ \chi_0 = 1_{\Omega_0} \ \mbox{of class}\ C^1 \ \Rightarrow \ \chi(t, \cdot) = 1_{\Omega_t},\ \Omega_t \ \mbox{of class}\ C^1. \] The velocity field will be identified via a Faedo--Galerkin approximation: \begin{equation} \label{b2} \begin{split} \left[ \intTd{ \varrho \vc{u} \cdot \boldsymbol{\varphi}} \right]_{t=0}^{t=\tau} &= \int_0^\tau \intTd{ \Big( \varrho \vc{u} \cdot \partial_t \boldsymbol{\varphi} + \varrho \vc{u} \otimes \vc{u} : \nabla_x \boldsymbol{\varphi} + p(\chi, \varrho) {\rm div}_x \boldsymbol{\varphi} \Big) } \,{\rm d} t \\ & - \int_0^\tau \intTd{ \partial F^\varepsilon (\chi, \mathbb{D}_x \vc{u}) : \mathbb{D}_x \boldsymbol{\varphi} } \,{\rm d} t + \kappa \int_0^\tau \int_{\partial \Omega_t} \left( \mathbb{I} - \vc{n} \otimes \vc{n} \right) : \nabla_x \boldsymbol{\varphi} \ {\rm d} S_x \,{\rm d} t \\ &- \delta \int_0^\tau \intTd{ \Delta_x^m \vc{u} \cdot \Delta_x^m \boldsymbol{\varphi} } \,{\rm d} t ,\ \varrho \vc{u}(0, \cdot) = \vc{m}_0,\ \varepsilon > 0,\ m > 2d, \end{split} \end{equation} for any $\boldsymbol{\varphi} \in C^1([0,T]; X_N)$, where we have set \[ \partial F^\varepsilon (\chi, \mathbb{D}_x \vc{u}) = \chi \partial F^\varepsilon_1 (\mathbb{D}_x \vc{u}) + (1 - \chi) \partial F^\varepsilon_2 (\mathbb{D}_x \vc{u}). \] The reader may consult Abels \cite{AbelsI}, where the specific form of the ``varifold term'' in \eqref{b2} is discussed. Identity \eqref{b2} also contains an elliptic regularization necessary for performing the passage from smooth to rough dissipation potentials. The approximation scheme depends on three parameters: $N$, $\varepsilon$, and $\delta$. These being fixed, the existence of approximate solutions can be shown by the standard fixed-point argument, see \cite[Section 3]{AbbFeiNov} and Abels \cite[Section 4]{AbelsI}. Our goal is to perform consecutively the limits $N \to \infty$, $\varepsilon \to 0$, and $\delta \to 0$.
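To fix ideas, consider the scalar model potential $F(s) = \frac{s^2}{2}$ for $|s| \leq \Ov{d}$ and $F(s) = \infty$ otherwise, a one-dimensional caricature of the potentials above; this computation is purely illustrative and is not used in the proofs. Its Moreau--Yosida approximation can be computed explicitly:
\[
F^\varepsilon(s) = \inf_{m \in R} \left\{ \frac{1}{2\varepsilon} |m - s|^2 + F(m) \right\} = \left\{ \begin{array}{ll} \dfrac{s^2}{2(1+\varepsilon)} & \mbox{if}\ |s| \leq (1+\varepsilon) \Ov{d}, \\[2mm] \dfrac{\Ov{d}^2}{2} + \dfrac{(|s| - \Ov{d})^2}{2 \varepsilon} & \mbox{if}\ |s| > (1+\varepsilon) \Ov{d}. \end{array} \right.
\]
In particular, $F^\varepsilon$ is convex and finite everywhere, its derivative is globally Lipschitz with constant of order $\varepsilon^{-1}$, and $F^\varepsilon \nearrow F$ pointwise as $\varepsilon \to 0$; these are exactly the properties of $F^\varepsilon_i$ used below when performing the limits $N \to \infty$ and $\varepsilon \to 0$.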
\subsection{Energy estimates} As all quantities are smooth at the basic approximation level, we may consider $\vc{u}$ as a test function in \eqref{b2} obtaining the energy balance \begin{equation} \label{b3} \begin{split} \frac{{\rm d}}{\,{\rm d} t } \left[ \intTd{ \left( \frac{1}{2} \varrho |\vc{u}|^2 + P(\chi,\varrho) \right) } + \kappa \| \nabla_x \chi \|_{\mathcal{M}^+(\mathbb{T}^d)} \right] \\ + \intTd{ \mathbb{P}artial F^\varepsilon (\chi, \mathbb{D}_x \vc{u}) : \mathbb{D}_x \vc{u} } + \delta \intTd{ |{\rm d}elta_x^m \vc{u} |^2 } = 0, \end{split} \end{equation} where we have used the identity (see Abels \cite[Section 2.4, formula (2.8)]{AbelsI}) \begin{equation} \label{b4} \frac{{\rm d}}{\,{\rm d} t } \int_{\mathbb{P}artial \Omega_\varepsilonga_t} \varphi \ {\rm d} S_x = \int_{\mathbb{P}artial \Omega_\varepsilonga_t} (\mathbb{I} - \vc{n} \otimes \vc{n}) : \nabla_x (\varphi \vc{u}) \ {\rm d} S_x + \int_{\mathbb{P}artial \Omega_\varepsilonga_t} \vc{n} \cdot \ \nabla_x \varphi \vc{n} \cdot \vc{u} \ {\rm d} S_x, \ \varphi \in C^1(\mathbb{T}^d), \end{equation} together with the renormalized version \eqref{H6} of the equations \eqref{ds5}, \eqref{ds6}, specifically \[ \left[ \intTd{ P(\chi, \varrho) } \right]_{t=0}^{t = \tau} + \int_0^\tau \intTd{p(\chi, \varrho) {\rm div}_x \vc{u} } \,{\rm d} t = 0 \] for any $0 \leq \tau \leq T$. \subsection{Limit $N \to \infty$} \label{N} The limit $N \to \infty$ in the family of Faedo--Galerkin approximations is straightforward. Let $(\chi_N, \varrho_N, \vc{u}_N)_{N \geq 1}$ be the corresponding family of solutions. Keeping $\varepsilon > 0$, $\delta > 0$ fixed we deduce from the energy balance and \eqref{b1} the following uniform bounds: \begin{equation} \label{b6} \begin{split} \varrho_N &\approx 1,\\ (\vc{u}_N)_{N \geq 1} \ &\mathscr{B}ox{bounded in}\ L^\infty (0,T; L^2(\mathbb{T}^d; R^d)) \cap L^2(0,T; W^{2m,2} (\mathbb{T}^d; R^d )), \\ (\chi_N)_{N \geq 1}\ &\mathscr{B}ox{bounded in}\ L^\infty((0,T) \times \mathbb{T}^d),\ (\kappa \chi_N)_{N \geq 1} \ \mathscr{B}ox{bounded in}\ L^\infty(0,T; BV(\mathbb{T}^d)). \end{split} \end{equation} First, we have \begin{equation} \label{b7} \vc{u}_N \to \vc{u} \ \mathscr{B}ox{weakly-(*) in} \ L^\infty(0,T; L^2(\mathbb{T}^d; R^d)) \ \mathscr{B}ox{and weakly in}\ L^2(0,T; W^{2m, 2} (\mathbb{T}^d; R^d)) \end{equation} passing to a suitable subsequence as the case may be. In particular, the velocity field remains regular at least in the spatial variable, and we get \begin{equation} \label{b8} \varrho_N \to \varrho \ \mathscr{B}ox{in} \ C^1([0,T] \times \mathbb{T}^d),\ \chi_N \to \chi \ \mathscr{B}ox{weakly-(*) in} \ L^\infty_{{\rm weak-(*)}} (0,T; \mathcal{M}(R^d)), \end{equation} and \[ \chi_N \to \chi \ \mathscr{B}ox{in}\ C([0,T]; L^1(\mathbb{T}^d)) , \] where the limit functions are given by formula \eqref{b1}. Denoting \[ \Pi_N : L^2(\mathbb{T}^d; R^d) \to X_N \] the orthogonal projection, we deduce from \eqref{b2} that \[ ( \mathbb{P}artial_t \Pi_N (\varrho_N \vc{u}_N) )_{N \geq 1} \ \mathscr{B}ox{bounded in}\ L^p(0,T; W^{-k,2}(\mathbb{T}^d; R^d) ) \ \mathscr{B}ox{for some}\ p > 1,\ k > 0. \] Consequently, by virtue of Aubin--Lions Lemma, \[ \int_0^T \intTd{ \varrho_N |\vc{u}_N|^2 } \,{\rm d} t \to \int_0^T \intTd{ \varrho |\vc{u}|^2} \,{\rm d} t , \] yielding \[ \vc{u}_N \to \vc{u} \ \mathscr{B}ox{in}\ L^2(0,T; L^2(\mathbb{T}^d; R^d)). \] Thus, finally, \[ \vc{u}_N \to \vc{u} \ \mathscr{B}ox{in}\ L^q(0,T; C^1(\mathbb{T}^d; R^d)) \ \mathscr{B}ox{for some}\ q > 2. 
\] As the regularization $\mathbb{P}artial F^\varepsilon_i$ are globally Lipschitz in $\mathbb{D}_x \vc{u}$, we are allowed to perform the limit in the momentum equation \eqref{b2} obtaining \begin{equation} \label{b9} \begin{split} \left[ \intTd{ \varrho \vc{u} \cdot \boldsymbol{\varphi}} \right]_{t=0}^{t=\tau} &= \int_0^\tau \intTd{ \Big( \varrho \vc{u} \cdot \mathbb{P}artial_t \boldsymbol{\varphi} + \varrho \vc{u} \otimes \vc{u} : \nabla_x \boldsymbol{\varphi} + p(\chi, \varrho) {\rm div}_x \boldsymbol{\varphi} \Big) } \,{\rm d} t \\ & - \int_0^\tau \intTd{ \mathbb{P}artial F^\varepsilon (\chi, \mathbb{D}_x \vc{u}) : \mathbb{D}_x \boldsymbol{\varphi} } \,{\rm d} t + \kappa \int_0^\tau \int_{\mathbb{P}artial \Omega_\varepsilonga_t} \left( \mathbb{I} - \vc{n} \otimes \vc{n} \right) : \nabla_x \boldsymbol{\varphi} \ {\rm d} S_x \,{\rm d} t \\ &- \delta \int_0^\tau \intTd{ {\rm d}elta_x^m \vc{u} \cdot {\rm d}elta_x^m \boldsymbol{\varphi} } \,{\rm d} t ,\ \varrho \vc{u}(0, \cdot) = \vc{m}_0,\ \varepsilon > 0,\ m > 2d, \end{split} \end{equation} for any $\boldsymbol{\varphi} \in C^1([0,T] \times \mathbb{T}^d; R^d)$. Finally, we pass to the limit in the energy balance \eqref{b3}: \begin{equation} \label{b10} \begin{split} \left[ \mathbb{P}si \intTd{ \left( \frac{1}{2} \varrho |\vc{u}|^2 + P(\chi,\varrho) \right) } + \kappa \| \nabla_x \chi \|_{\mathcal{M}^+(\mathbb{T}^d)} \right]_{t = 0}^{ t = \tau} \\ - \int_0^\tau \mathbb{P}artial_t \mathbb{P}si \left( \intTd{ \left( \frac{1}{2} \varrho |\vc{u}|^2 + P(\chi,\varrho) \right) } + \kappa \| \nabla_x \chi \|_{\mathcal{M}^+(\mathbb{T}^d)} \right) \,{\rm d} t \\ + \int_0^\tau \mathbb{P}si \intTd{ \mathbb{P}artial F^\varepsilon (\chi, \mathbb{D}_x \vc{u}) : \mathbb{D}_x \vc{u} } \,{\rm d} t + \delta \int_0^\tau \mathbb{P}si \intTd{ |{\rm d}elta_x^m \vc{u} |^2 } \,{\rm d} t \leq 0, \end{split} \end{equation} for a.a. $0 \leq \tau \leq T$ and any $\mathbb{P}si \in C^1[0,T]$, $\mathbb{P}si \geq 0$. \begin{Remark} \label{bR1} As a matter of fact, a more refined argument would yield equality in \eqref{b10}. This is, however, not needed for the remaining part of the proof. \end{Remark} \section{Compressible case without surface tension} \label{C} Being given a family of approximate solutions identified in Section \ref{N}, our goal is to perform the limits $\varepsilon \to 0$, $\delta \to 0$. Fixing $\delta > 0$ we denote $(\varrho_\varepsilon, \chi_\varepsilon, \vc{u}_\varepsilon)_{\varepsilon > 0}$ the associated approximate solution. \subsection{Limit $\varepsilon \to 0$} \label{SE} Since the energy balance \eqref{b10} holds, we may repeat step by step the arguments of Section \ref{N} to obtain \begin{equation} \label{C1} \begin{split} \varrho_\varepsilon \approx 1,\ \varrhoe &\to \varrho \ \mathscr{B}ox{in}\ C^1([0,T] \times \Omega_\varepsilonga),\\ \vc{u}_\varepsilon &\to \vc{u} \ \mathscr{B}ox{weakly-(*) in} \ L^\infty(0,T; L^2(\mathbb{T}^d; R^d)) \ \mathscr{B}ox{and weakly in}\ L^2(0,T; W^{2m, 2} (\mathbb{T}^d; R^d)), \\ \chi_\varepsilon &\to \chi \ \mathscr{B}ox{in}\ C([0,T]; L^1(\mathbb{T}^d)) \ \mathscr{B}ox{and weakly-(*) in}\ L^\infty(0,T; BV(\mathbb{T}^d)) \end{split} \end{equation} passing to a suitable subsequence as the case may be. Similarly to the preceding section, the characteristic curves are still well defined and the limit functions $\chi$, $\varrho$ are given by formula \eqref{b1}. 
Next, we take $\mathbb{P}si = 1$ in the energy inequality \eqref{b10} and subtract the resulting expression from the momentum balance obtaining \begin{equation} \label{C2} \begin{split} &\left[ \intTd{ \left( \frac{1}{2} \varrhoe |\vc{u}_\varepsilon|^2 - \varrhoe \vc{u}_\varepsilon \cdot \boldsymbol{\varphi} + P(\chi_\varepsilon,\varrhoe) \right) } + \kappa \| \nabla_x \chi_\varepsilon \|_{\mathcal{M}^+(\mathbb{T}^d)} \right]_{t = 0}^{ t = \tau} \\ &+ \int_0^\tau \intTd{ \Big( \varrhoe \vc{u}_\varepsilon \cdot \mathbb{P}artial_t \boldsymbol{\varphi} + \varrhoe \vc{u}_\varepsilon \otimes \vc{u}_\varepsilon : \nabla_x \boldsymbol{\varphi} + p(\chi_\varepsilon, \varrhoe) {\rm div}_x \boldsymbol{\varphi} \Big) } \,{\rm d} t \\ &+ \kappa \int_0^\tau \int_{\mathbb{P}artial \Omega_\varepsilonga_{\varepsilon,t}} \left( \mathbb{I} - \vc{n} \otimes \vc{n} \right) : \nabla_x \boldsymbol{\varphi} \ {\rm d} S_x \,{\rm d} t - \delta \int_0^\tau \intTd{ {\rm d}elta_x^m \vc{u}_\varepsilon \cdot {\rm d}elta_x^m \boldsymbol{\varphi} } \,{\rm d} t \\ &+ \int_0^\tau \intTd{ \mathbb{P}artial F^\varepsilon (\chi_\varepsilon , \mathbb{D}_x \vc{u}_\varepsilon) : (\mathbb{D}_x \vc{u}_\varepsilon - \mathbb{D}_x \boldsymbol{\varphi} ) } \,{\rm d} t + \delta \int_0^\tau \mathbb{P}si \intTd{ |{\rm d}elta_x^m \vc{u}_\varepsilon |^2 } \,{\rm d} t \leq 0, \end{split} \end{equation} for any $\boldsymbol{\varphi} \in C^1([0,T] \times \mathbb{T}^d; R^d)$, where \[ \intTd{ \mathbb{P}artial F^\varepsilon (\chi_\varepsilon , \mathbb{D}_x \vc{u}_\varepsilon) : (\mathbb{D}_x \vc{u}_\varepsilon - \mathbb{D}_x \boldsymbol{\varphi} ) } \geq \intTd{ \Big( F^\varepsilon (\chi_\varepsilon, \mathbb{D}_x \vc{u}_\varepsilon) - F^\varepsilon (\chi_\varepsilon, \mathbb{D}_x \boldsymbol{\varphi}) \Big) } \] Now, consider $\mathbb{P}hi \in C^1(\mathbb{T}^d; R^d)$ such that $|\mathbb{D}_x \mathbb{P}hi | < M$, and $\mathbb{P}si \in C^1_c(0,T)$, $0 \leq \mathbb{P}si \leq 1$. In view if hypothesis \eqref{H1}, \[ \mathbb{D}_x \mathbb{P}hi \in {\rm int} ({\rm Dom}[F_1] \cap {\rm Dom}[F_2]) \] as long as $M > 0$ is small enough. Using $\boldsymbol{\varphi}(t,x) = \mathbb{P}si(t) \mathbb{P}hi (x)$ as a test function in \eqref{C2} we obtain \[ \begin{split} &\int_0^T \mathbb{P}artial_t \mathbb{P}si \intTd{ \varrhoe \vc{u}_\varepsilon \cdot \mathbb{P}hi } \,{\rm d} t + \int_0^T \mathbb{P}si \intTd{ \Big( \varrhoe \vc{u}_\varepsilon \otimes \vc{u}_\varepsilon : \nabla_x \mathbb{P}hi + p(\chi_\varepsilon, \varrhoe) {\rm div}_x \mathbb{P}hi \Big) } \,{\rm d} t \\ &+ \kappa \int_0^T \mathbb{P}si \int_{\mathbb{P}artial \Omega_\varepsilonga_{\varepsilon,t}} \left( \mathbb{I} - \vc{n} \otimes \vc{n} \right) : \nabla_x \mathbb{P}hi \ {\rm d} S_x \,{\rm d} t - \delta \int_0^T \mathbb{P}si \intTd{ {\rm d}elta_x^m \vc{u}_\varepsilon \cdot {\rm d}elta_x^m \mathbb{P}hi } \,{\rm d} t \\ &\leq \intTd{ \left( \frac{1}{2} \varrho_0 |\vc{u}_0|^2 + P(\chi_0,\varrho_0) \right) } + \kappa \| \nabla_x \chi_0 \|_{\mathcal{M}^+(\mathbb{T}^d)} + \int_0^T \intTd{ F^\varepsilon (\chi_\varepsilon, \mathbb{P}si \mathbb{D}_x \mathbb{P}hi ) }, \end{split} \] where, furthermore, \[ \intTd{ F^\varepsilon (\chi_\varepsilon, \mathbb{P}si \mathbb{D}_x \mathbb{P}hi )} \leq \intTd{ F_1 (\mathbb{P}si \mathbb{D}_x \mathbb{P}hi) } + \intTd{ F_2 (\mathbb{P}si \mathbb{D}_x \mathbb{P}hi) } \leq C(M). 
\] Thus we may infer that \[ \begin{split} t &\in [0,T] \mapsto \intTd{(\varrhoe \vc{u}_\varepsilon) (t, \cdot) \cdot \mathbb{P}hi } \in BV[0,T] \\ & \mathscr{B}ox{whenever}\ \mathbb{P}hi \in C^1(\mathbb{T}^d; R^d), \end{split} \] with the norm bounded uniformly for $\varepsilon \to 0$. Consequently, similarly to Section \ref{N}, we conclude \[ \int_0^T \intTd{ \varrhoe |\vc{u}_\varepsilon|^2 } \,{\rm d} t \to \int_0^T \intTd{ \varrho |\vc{u}|^2 }, \] yielding, finally, \begin{equation} \label{C3} \vc{u}_\varepsilon \to \vc{u} \ \mathscr{B}ox{in}\ L^q(0,T; C^1(\mathbb{T}^d; R^d)) \ \mathscr{B}ox{for some}\ q > 2. \end{equation} The ultimate goal of this step is to perform the limit in \eqref{C2}. Summing up the previous observations, we get \begin{equation} \label{C5} \begin{split} \limsup_{\varepsilon \to 0} &\int_0^\tau \intTd{ F^\varepsilon (\chi_\varepsilon, \mathbb{D}_x \boldsymbol{\varphi}) } \,{\rm d} t - \liminf_{\varepsilon \to 0} \int_0^\tau \intTd{ F^\varepsilon (\chi_\varepsilon, \mathbb{D}_x \vc{u}_\varepsilon) } \,{\rm d} t \\ &\geq \left[ \intTd{ \left( \frac{1}{2} \varrho |\vc{u}|^2 - \varrho \vc{u} \cdot \boldsymbol{\varphi} + P(\chi,\varrho) \right) } + \kappa \| \nabla_x \chi \|_{\mathcal{M}^+(\mathbb{T}^d)} \right]_{t = 0}^{ t = \tau} \\ &+ \int_0^\tau \intTd{ \Big( \varrho \vc{u} \cdot \mathbb{P}artial_t \boldsymbol{\varphi} + \varrho \vc{u} \otimes \vc{u} : \nabla_x \boldsymbol{\varphi} + p(\chi, \varrho) {\rm div}_x \boldsymbol{\varphi} \Big) } \,{\rm d} t \\ &+ \kappa \int_0^\tau \int_{\mathbb{P}artial \Omega_\varepsilonga_{t}} \left( \mathbb{I} - \vc{n} \otimes \vc{n} \right) : \nabla_x \boldsymbol{\varphi} \ {\rm d} S_x \,{\rm d} t - \delta \int_0^\tau \intTd{ {\rm d}elta_x^m \vc{u} \cdot {\rm d}elta_x^m \boldsymbol{\varphi} } \,{\rm d} t \\ &+ \delta \int_0^\tau \mathbb{P}si \intTd{ |{\rm d}elta_x^m \vc{u} |^2 } \,{\rm d} t \end{split} \end{equation} for any $\boldsymbol{\varphi} \in C^1([0,T] \times \mathbb{T}^d; R^d)$. Consider a test function \begin{equation} \label{C4} \boldsymbol{\varphi} \in C^1([0,T] \times \mathbb{T}^d; R^d),\ \int_0^T \intTd{ \Big( F_1(\mathbb{D}_x \boldsymbol{\varphi}) + F_2 (\mathbb{D}_x \boldsymbol{\varphi}) \Big) } \,{\rm d} t < \infty, \end{equation} and write \[ \int_0^\tau \intTd{ F^\varepsilon (\chi_\varepsilon, \mathbb{D}_x \boldsymbol{\varphi}) } \,{\rm d} t = \int_0^\tau \intTd{ \chi_\varepsilon F^\varepsilon_1 (\mathbb{D}_x \boldsymbol{\varphi}) } \,{\rm d} t + \int_0^\tau \intTd{ (1 - \chi_\varepsilon) F^\varepsilon_2 (\mathbb{D}_x \boldsymbol{\varphi}) } \,{\rm d} t . \] We already know that \[ \chi_\varepsilon \to \chi \in \{ 0 ; 1 \}\ \mathscr{B}ox{a.a. in}\ (0,T) \times \mathbb{T}^d,\ F^\varepsilon_i (\mathbb{D}_x \boldsymbol{\varphi}) \nearrow F_i(\boldsymbol{\varphi}) \ \mathscr{B}ox{in}\ (0,T) \times \mathbb{T}^d,\ i=1,2. 
\] Consequently, in accordance with \eqref{C4}, \begin{equation} \label{C5a} \begin{split} \int_0^\tau &\intTd{ F^\varepsilon (\chi_\varepsilon, \mathbb{D}_x \boldsymbol{\varphi}) } \,{\rm d} t = \int_0^\tau \intTd{ \chi_\varepsilon F^\varepsilon_1 (\mathbb{D}_x \boldsymbol{\varphi}) } \,{\rm d} t + \int_0^\tau \intTd{ (1 - \chi_\varepsilon) F^\varepsilon_2 (\mathbb{D}_x \boldsymbol{\varphi}) } \,{\rm d} t \\ &\leq \int_0^\tau \intTd{ \chi_\varepsilon F_1 (\mathbb{D}_x \boldsymbol{\varphi}) } \,{\rm d} t + \int_0^\tau \intTd{ (1 - \chi_\varepsilon) F_2 (\mathbb{D}_x \boldsymbol{\varphi}) } \,{\rm d} t \\ &\to \int_0^\tau \intTd{ \chi F_1 (\mathbb{D}_x \boldsymbol{\varphi}) } \,{\rm d} t + \int_0^\tau \intTd{ (1 - \chi) F_2 (\mathbb{D}_x \boldsymbol{\varphi}) } \,{\rm d} t = \int_0^\tau \intTd{ F(\chi, \mathbb{D}_x \boldsymbol{\varphi}) } \,{\rm d} t . \end{split} \end{equation} On the other hand, \[ \begin{split} \int_0^\tau &\intTd{F^\varepsilon (\chi_\varepsilon, \mathbb{D}_x \vc{u}_\varepsilon) } \,{\rm d} t = \int_0^\tau \intTd{ \chi_\varepsilon F^{\varepsilon}_1 (\mathbb{D}_x \vc{u}_\varepsilon) } \,{\rm d} t + \int_0^\tau \intTd{ (1 - \chi_\varepsilon) F^{\varepsilon}_2 (\mathbb{D}_x \vc{u}_\varepsilon) } \,{\rm d} t \\ &\geq \int_0^\tau \intTd{ \chi_\varepsilon F^{\Ov{\varepsilon}}_1 (\mathbb{D}_x \vc{u}_\varepsilon) } \,{\rm d} t + \int_0^\tau \intTd{ (1 - \chi_\varepsilon) F^{\Ov{\varepsilon}}_2 (\mathbb{D}_x \vc{u}_\varepsilon) } \,{\rm d} t \\ &\to \int_0^\tau \intTd{ \chi F^{\Ov{\varepsilon}}_1 (\mathbb{D}_x \vc{u}) } \,{\rm d} t + \int_0^\tau \intTd{ (1 - \chi) F^{\Ov{\varepsilon}}_2 (\mathbb{D}_x \vc{u}) } \,{\rm d} t \ \mathscr{B}ox{as}\ \varepsilon \to, \ \Ov{\varepsilon} \ \mathscr{B}ox{fixed}. \end{split} \] Consequently, \[ \begin{split} \liminf_{\varepsilon \to 0} &\int_0^\tau \intTd{F^\varepsilon (\chi_\varepsilon, \mathbb{D}_x \vc{u}_\varepsilon) } \,{\rm d} t \\ & \geq \int_0^\tau \intTd{ \chi F^{\Ov{\varepsilon}}_1 (\mathbb{D}_x \vc{u}) } \,{\rm d} t + \int_0^\tau \intTd{ (1 - \chi) F^{\Ov{\varepsilon}}_2 (\mathbb{D}_x \vc{u}) } \,{\rm d} t \end{split} \] for any $\Ov{\varepsilon}$. Thus we may infer that \begin{equation} \label{C6} \begin{split} \liminf_{\varepsilon \to 0} &\int_0^\tau \intTd{F^\varepsilon (\chi_\varepsilon, \mathbb{D}_x \vc{u}_\varepsilon) } \,{\rm d} t \\ & \geq \int_0^\tau \intTd{ \chi F_1 (\mathbb{D}_x \vc{u}) } \,{\rm d} t + \int_0^\tau \intTd{ (1 - \chi) F_2 (\mathbb{D}_x \vc{u}) } \,{\rm d} t . 
\end{split} \end{equation} Combining \eqref{C5}, \eqref{C5a}, and \eqref{C6}, we obtain \begin{equation} \label{C8} \begin{split} \int_0^\tau &\intTd{ F (\chi, \mathbb{D}_x \boldsymbol{\varphi}) } \,{\rm d} t - \int_0^\tau \intTd{ F (\chi, \mathbb{D}_x \vc{u}) } \,{\rm d} t \\ &\geq \left[ \intTd{ \left( \frac{1}{2} \varrho |\vc{u}|^2 - \varrho \vc{u} \cdot \boldsymbol{\varphi} + P(\chi,\varrho) \right) } + \kappa \| \nabla_x \chi \|_{\mathcal{M}^+(\mathbb{T}^d)} \right]_{t = 0}^{ t = \tau} \\ &+ \int_0^\tau \intTd{ \Big( \varrho \vc{u} \cdot \mathbb{P}artial_t \boldsymbol{\varphi} + \varrho \vc{u} \otimes \vc{u} : \nabla_x \boldsymbol{\varphi} + p(\chi, \varrho) {\rm div}_x \boldsymbol{\varphi} \Big) } \,{\rm d} t \\ &+ \kappa \int_0^\tau \int_{\mathbb{P}artial \Omega_\varepsilonga_{t}} \left( \mathbb{I} - \vc{n} \otimes \vc{n} \right) : \nabla_x \boldsymbol{\varphi} \ {\rm d} S_x \,{\rm d} t - \delta \int_0^\tau \intTd{ {\rm d}elta_x^m \vc{u} \cdot {\rm d}elta_x^m \boldsymbol{\varphi} } \,{\rm d} t \\ &+ \delta \int_0^\tau \mathbb{P}si \intTd{ |{\rm d}elta_x^m \vc{u} |^2 } \,{\rm d} t \end{split} \end{equation} for any $\boldsymbol{\varphi} \in C^1([0,T] \times \mathbb{T}^d; R^d)$ satisfying \eqref{C4}. \subsection{Limit $\delta \to 0$} \label{SD} Let $(\chi_\delta, \varrho_\delta, \vc{u}_\delta )_{\delta > 0}$ be the family of approximate solutions obtained in the previous section. Our ultimate goal is to perform the limit $\delta \to 0$. Up to this moment, the proof has been identical for Theorems \ref{TH1}, \ref{TH3}, with the obvious modifications for Theorem \ref{TH2}. Now, we focus on the case of compressible fluids without surface tension. If $\kappa = 0$, the approximate solutions satisfy the following relations: \begin{equation} \label{C9} \left[ \intTd{ \chi_\delta \varphi } \right]_{t=0}^{t = \tau} = \int_0^\tau \intTd{ \Big( \chi_\delta \mathbb{P}artial_t \varphi + \chi_\delta \vc{u}_\delta \cdot \nabla_x \varphi + \chi_\delta {\rm div}_x \vc{u}_\delta \varphi \Big) } \,{\rm d} t \end{equation} for any $0 \leq \tau \leq T$, $\varphi \in C^1([0,T \times \mathbb{T}^d])$; \begin{equation} \label{C10} \left[ \intTd{ \varrho_\delta \varphi } \right]_{t=0}^{t = \tau} = \int_0^\tau \intTd{ \Big( \varrho_\delta \mathbb{P}artial_t \varphi + \varrho_\delta \vc{u}_\delta \cdot \nabla_x \varphi \Big) } \,{\rm d} t \end{equation} for any $0 \leq \tau \leq T$, $\varphi \in C^1([0,T] \times \mathbb{T}^d)$; \begin{equation} \label{C11} \begin{split} \int_0^\tau &\intTd{ F (\chi_\delta, \mathbb{D}_x \boldsymbol{\varphi}) } \,{\rm d} t - \int_0^\tau \intTd{ F (\chi_\delta, \mathbb{D}_x \vc{u}_\delta) } \,{\rm d} t \\ &\geq \left[ \intTd{ \left( \frac{1}{2} \varrho_\delta |\vc{u}_\delta|^2 - \varrho_\delta \vc{u}_\delta \cdot \boldsymbol{\varphi} + P(\chi_\delta,\varrho_\delta) \right) } \right]_{t = 0}^{ t = \tau} \\ &+ \int_0^\tau \intTd{ \Big( \varrho_\delta \vc{u}_\delta \cdot \mathbb{P}artial_t \boldsymbol{\varphi} + \varrho_\delta \vc{u}_\delta \otimes \vc{u}_\delta : \nabla_x \boldsymbol{\varphi} + p(\chi_\delta, \varrho_\delta) {\rm div}_x \boldsymbol{\varphi} \Big) } \,{\rm d} t \\ &- \delta \int_0^\tau \intTd{ {\rm d}elta_x^m \vc{u}_\delta \cdot {\rm d}elta_x^m \boldsymbol{\varphi} } \,{\rm d} t + \delta \int_0^\tau \intTd{ |{\rm d}elta_x^m \vc{u}_\delta |^2 } \,{\rm d} t \end{split} \end{equation} for any $\boldsymbol{\varphi} \in C^1([0,T] \times \mathbb{T}^d; R^d)$ satisfying \eqref{C4}. 
Setting $\boldsymbol{\varphi} \equiv 0$ in \eqref{C11}, we recover the energy estimates \[ \begin{split} {\rm ess} \sup_{t \in (0,T)} \left\| \varrho_\delta |\vc{u}_\delta|^2 \right\|_{L^1(\mathbb{T}^d)} &\aleq 1,\\ \int_0^T \intTd{ F(\chi_\delta, \mathbb{D}_x \vc{u}_\delta) } &\aleq 1,\\ \delta \int_0^T \intTd{ | \Delta_x^m \vc{u}_\delta |^2 } &\aleq 1. \end{split} \] Consequently, by virtue of hypothesis \eqref{H3}, \begin{equation} \label{C12a} | {\rm div}_x \vc{u}_\delta (t,x)| \leq \Ov{d} \ \mbox{for a.a.}\ (t,x). \end{equation} In addition, \[ \chi_\delta \in C([0,T]; L^1(\mathbb{T}^d)),\ \chi_\delta(t,x) \in \{ 0,1 \} \ \mbox{for a.a.} \ (t,x) \] and \[ \varrho_\delta \in C([0,T]; L^1(\mathbb{T}^d)),\ \underline{\varrho} \exp \left( - t \Ov{d} \right) \leq \varrho_\delta(t,x) \leq \Ov{\varrho} \exp \left( t \Ov{d} \right) \ \mbox{for a.a.}\ (t,x), \] represent the unique renormalized solutions of \eqref{C9}, \eqref{C10} in the sense of DiPerna and Lions \cite{DL}. Similarly to Section \ref{SE}, we show \[ \varrho_\delta \to \varrho \ \mbox{in}\ C_{\rm weak}([0,T]; L^q(\mathbb{T}^d)),\ 1 < q < \infty, \] \[ \varrho_\delta \vc{u}_\delta \to \varrho \vc{u} \ \mbox{in} \ L^p_{\rm weak}(0,T; L^2(\mathbb{T}^d; R^d)) \ \mbox{for any}\ 1 < p < \infty, \] and \begin{equation} \label{C12} \varrho_\delta \vc{u}_\delta \otimes \vc{u}_\delta \to \varrho \vc{u} \otimes \vc{u} \ \mbox{weakly in}\ L^q((0,T) \times \mathbb{T}^d; R^{d \times d}_{\rm sym}) \ \mbox{for some}\ q > 1. \end{equation} In particular, \[ \int_0^T \intTd{\varrho_\delta | \vc{u}_\delta - \vc{u} |^2 } \,{\rm d} t = \int_0^T \intTd{ \left( \varrho_\delta|\vc{u}_\delta|^2 - 2 \varrho_\delta \vc{u}_\delta \cdot \vc{u} + \varrho_\delta |\vc{u}|^2 \right) } \,{\rm d} t \to 0. \] As $\varrho_\delta$ is bounded below away from zero, the above relation yields \begin{equation} \label{C13} \vc{u}_\delta \to \vc{u}\ \mbox{in}\ L^2((0,T) \times \mathbb{T}^d; R^d). \end{equation} With the estimates \eqref{C12a}, \eqref{C13} at hand, we may apply \cite[Theorem II.5]{DL} to obtain \begin{equation} \label{C15} \chi_\delta \to \chi \ \mbox{in}\ C([0,T]; L^q(\mathbb{T}^d)) \ \mbox{for any}\ 1 \leq q < \infty, \end{equation} where $\chi$ is a renormalized solution of the transport equation \eqref{ds5}, $\chi(t,x) \in \{ 0, 1 \}$ for a.a. $(t,x)$. Moreover, it is easy to check that $\varrho$ is a renormalized solution of the equation of continuity \eqref{ds6}.
Finally, we use the fact that $\chi_\delta$, $\varrho_\delta$ satisfy the renormalized equation \eqref{H6} and rewrite \eqref{C11} in the form \begin{equation} \label{C16} \begin{split} \int_0^\tau &\intTd{ F (\chi_\delta, \mathbb{D}_x \boldsymbol{\varphi}) } \,{\rm d} t - \int_0^\tau \intTd{ F (\chi_\delta, \mathbb{D}_x \vc{u}_\delta) } \,{\rm d} t \\ &\geq \left[ \intTd{ \left( \frac{1}{2} \varrho_\delta |\vc{u}_\delta|^2 - \varrho_\delta \vc{u}_\delta \cdot \boldsymbol{\varphi} \right) } \right]_{t = 0}^{ t = \tau} \\ &+ \int_0^\tau \intTd{ \Big( \varrho_\delta \vc{u}_\delta \cdot \mathbb{P}artial_t \boldsymbol{\varphi} + \varrho_\delta \vc{u}_\delta \otimes \vc{u}_\delta : \nabla_x \boldsymbol{\varphi} + p(\chi_\delta, \varrho_\delta) ({\rm div}_x \boldsymbol{\varphi} - {\rm div}_x \vc{u}_\delta) \Big) } \,{\rm d} t \\ &- \delta \int_0^\tau \intTd{ {\rm d}elta_x^m \vc{u}_\delta \cdot {\rm d}elta_x^m \boldsymbol{\varphi} } \,{\rm d} t \\ &+ \delta \int_0^\tau \intTd{ |{\rm d}elta_x^m \vc{u}_\delta |^2 } \,{\rm d} t . \end{split} \end{equation} Repeating the arguments of Section \ref{SE}, we let $\delta \to 0$ in \eqref{C16} obtaining \begin{equation} \label{C17} \begin{split} \int_0^\tau &\intTd{ F (\chi, \mathbb{D}_x \boldsymbol{\varphi}) } \,{\rm d} t - \int_0^\tau \intTd{ F (\chi, \mathbb{D}_x \vc{u}) } \,{\rm d} t \\ &\geq \left[ \intTd{ \left( \frac{1}{2} \varrho |\vc{u}|^2 - \varrho \vc{u} \cdot \boldsymbol{\varphi} \right) } \right]_{t = 0}^{ t = \tau} \\ &+ \int_0^\tau \intTd{ \Big( \varrho \vc{u} \cdot \mathbb{P}artial_t \boldsymbol{\varphi} + \varrho \vc{u} \otimes \vc{u} : \nabla_x \boldsymbol{\varphi} + \Ov{p(\chi, \varrho)} {\rm div}_x \boldsymbol{\varphi} - \Ov{p(\chi, \varrho) {\rm div}_x \vc{u} } \Big) } \,{\rm d} t , \end{split} \end{equation} where \[ \begin{split} p(\chi_\delta, \varrho_\delta) &\to \Ov{p(\varrho, \chi)} \ \mathscr{B}ox{weakly-(*) in}\ L^\infty((0,T) \times \mathbb{T}^d), \\ p(\chi_\delta, \varrho_\delta) {\rm div}_x \vc{u}_\delta &\to \Ov{p(\varrho, \chi) {\rm div}_x \vc{u}_\delta} \ \mathscr{B}ox{weakly in} L^\alpha ((0,T) \times \Omega_\varepsilonga; R^d). \end{split} \] \subsubsection{Strong convergence of the density} \label{SCD} Our ultimate goal in the proof of Theorem \ref{TH1} is to ``remove'' the bars in \eqref{C17} which amounts to showing strong a.a. pointwise convergence \begin{equation} \label{C18} \varrho_\delta \to \varrho \ \mathscr{B}ox{a.a. in}\ (0,T) \times \mathbb{T}^d. \end{equation} To see \eqref{C18}, it is enough to justify the choice $\boldsymbol{\varphi} = \vc{u}$ as a test function in \eqref{C17}. Indeed, as shown in \cite{FeLiMa}, this would yield \[ \int_0^\tau \intTd{ \Ov{p(\chi, \varrho) {\rm div}_x \vc{u} } } \,{\rm d} t \geq \int_0^\tau \intTd{ \Ov{p(\chi, \varrho)} {\rm div}_x \vc{u} } \,{\rm d} t . \] On the other hand, from the renormalization, \begin{equation} \label{C19} \begin{split} \frac{{\rm d} }{\,{\rm d} t } & \intTd{ \Big[ \Ov{ P(\chi, \varrho ) } - P(\chi, \varrho) \Big] } \,{\rm d} t = \intTd{ p(\chi, \varrho) {\rm div}_x \vc{u} - \Ov{ p(\chi, \varrho) {\rm div}_x \vc{u} } } \,{\rm d} t \\ &\leq \intTd{ \Big( p(\chi, \varrho) - \Ov{p(\chi, \varrho)} \Big) {\rm div}_x \vc{u} } \,{\rm d} t . 
\end{split} \end{equation} As $P_i$, $i=1,2$, are strictly convex on the range of $\varrho$ (indeed, differentiating \eqref{p5b} yields $P_i''(\varrho) = p_i'(\varrho)/\varrho > 0$) and $\chi_\delta$ converges strongly, we get \[ \begin{split} &\intTd{ \Ov{ P(\chi, \varrho ) } - P(\chi, \varrho) } = \intTd{ \left( \chi \Big[ \Ov{P_1(\varrho)} - P_1(\varrho) \Big] + (1 - \chi) \Big[ \Ov{P_2(\varrho)} - P_2(\varrho) \Big] \right) } \\ &\ageq \limsup_{\delta \to 0} \| \varrho_\delta - \varrho \|^2_{L^2(\mathbb{T}^d)}. \end{split} \] By the same token, \[ \intTd{ \Big( p(\chi, \varrho) - \Ov{p(\chi, \varrho)} \Big) {\rm div}_x \vc{u} } \leq \| {\rm div}_x \vc{u} \|_{L^\infty(\mathbb{T}^d)} \limsup_{\delta \to 0} \| \varrho_\delta - \varrho \|^2_{L^2(\mathbb{T}^d)}. \] Thus we may apply Gronwall's lemma to \eqref{C19} to obtain, up to a suitable subsequence, \eqref{C18}. In order to make the preceding step rigorous, we have to justify the choice $\boldsymbol{\varphi} = \vc{u}$ in \eqref{C17}. Similarly to \cite{FeLiMa}, we consider the regularization in time via the Steklov averages. Specifically, let \[ \begin{split} \eta_h(t) &= \frac{1}{h} 1_{[-h,0]}(t),\ \eta_{-h}(t) = \frac{1}{h} 1_{[0,h]}(t),\ h > 0, \\ \xi_\varepsilon &\in C^\infty_c (0, \tau),\ 0 \leq \xi_\varepsilon \leq 1,\ \xi_\varepsilon (t) = 1 \ \mbox{if}\ \varepsilon \leq t \leq \tau - \varepsilon. \end{split} \] A suitable approximation of $\vc{u}$ reads \[ [\vc{u}]_{h, \varepsilon} = \xi_\varepsilon \eta_{-h} * \eta_h *( \xi_\varepsilon \vc{u}),\ \varepsilon > 0,\ h > 0, \] where $*$ stands for the convolution in the time variable $t$. The function $[\vc{u}]_{h, \varepsilon}$ can be used as a test function in \eqref{C17}. As shown in \cite[Section 4.3]{FeLiMa}, we have \[ \left[ \intTd{ \left( \frac{1}{2} \varrho |\vc{u}|^2 - \varrho \vc{u} \cdot [\vc{u}]_{h,\varepsilon} \right) } \right]_{t = 0}^{ t = \tau} + \int_0^\tau \intTd{ \Big( \varrho \vc{u} \cdot \partial_t [\vc{u}]_{h,\varepsilon} + \varrho \vc{u} \otimes \vc{u} : \nabla_x [\vc{u}]_{h,\varepsilon} \Big) } \,{\rm d} t \to 0 \] as $\varepsilon, h \to 0$. Note that it is only at this point that the hypothesis $\alpha \geq \frac{11}{5}$ is needed. Consequently, it is enough to show \begin{equation} \label{C20} \limsup_{\varepsilon, h \to 0} \int_0^\tau \intTd{ F(\chi,\mathbb{D}_x [\vc{u}]_{h,\varepsilon} ) } \,{\rm d} t \leq \int_0^\tau \intTd{ F(\chi, \mathbb{D}_x \vc{u} ) } \,{\rm d} t . \end{equation} To see \eqref{C20} we first use convexity of $F_i$ and the hypothesis $F_i(0) = 0$ to observe \[ F_i (\mathbb{D}_x [\vc{u}]_{h,\varepsilon} ) = F_i (\xi_\varepsilon \eta_{-h}* \eta_h * (\xi_\varepsilon \mathbb{D}_x \vc{u})) \leq \xi_\varepsilon F_i (\eta_{-h}* \eta_h * (\xi_\varepsilon \mathbb{D}_x \vc{u})),\ i = 1,2. \] Next, by Jensen's inequality, \[ F_i (\eta_{-h}* \eta_h * (\xi_\varepsilon \mathbb{D}_x \vc{u})) \leq \eta_{-h} * F_i (\eta_h * (\xi_\varepsilon \mathbb{D}_x \vc{u})) \leq \eta_{-h} * \eta_h * F_i (\xi_\varepsilon \mathbb{D}_x \vc{u}),\ i=1,2. \] Consequently, \[ F_i (\mathbb{D}_x [\vc{u}]_{h,\varepsilon} ) \leq \xi_\varepsilon \eta_{-h} * \eta_h * F_i (\xi_\varepsilon \mathbb{D}_x \vc{u}). \] By virtue of hypothesis \eqref{H6a}, \[ F_i (\xi_\varepsilon \mathbb{D}_x \vc{u}) \to F_i (\mathbb{D}_x \vc{u}) \ \mbox{in}\ L^1 ((0,\tau) \times \mathbb{T}^d),\ i=1,2,\ \mbox{as}\ \varepsilon \to 0; \] whence \eqref{C20} follows. Note that it is only at this moment that the hypothesis \eqref{H6a} is needed. We have proved Theorem \ref{TH1}.
\section{Fluids with surface tension} The above arguments can be easily adapted to handle the general case of compressible fluids with surface tension stated in Theorem \ref{TH3}. Indeed, as the pressure $p(\chi, \varrho)$ is now linear with respect to $\varrho$, the proof of strong convergence of the density performed in Section \ref{SCD} is no longer needed. Accordingly, we can drop the hypothesis $\alpha \geq \frac{11}{5}$ as well as \eqref{H6a}. In addition to the arguments of the preceding section, we have to perform the limit $\delta \to 0$ in the varifold term \[ \kappa \int_0^\tau \int_{\partial \Omega_{\delta, t}} \left( \mathbb{I} - \vc{n} \otimes \vc{n} \right) : \nabla_x \boldsymbol{\varphi} \ {\rm d} S_x \,{\rm d} t \] and to show the compatibility relation \eqref{ds8}. However, this can be done in the same way as in \cite{AbelsI}, \cite{Plot2}. This completes the proof of Theorem \ref{TH3}. Finally, it is easy to observe that the proof can be modified in a straightforward manner to deal with the incompressible case claimed in Theorem \ref{TH2}. \end{document}
\begin{document} \title{Analytical Framework for Quantum Alternating Operator Ans\"atze} \def\QUAIL{ \affiliation{Quantum Artificial Intelligence Lab. (QuAIL), NASA Ames Research Center, Moffett Field, CA 94035, USA}} \def\USRA{ \affiliation{USRA Research Institute for Advanced Computer Science (RIACS), Mountain View, CA 94043, USA}} \author{Stuart Hadfield} \QUAIL \USRA \author{Tad Hogg} \QUAIL \author{Eleanor G. Rieffel} \QUAIL \date{December 2022} \begin{abstract} We develop a framework for analyzing layered quantum algorithms such as quantum alternating operator ans\"atze. In the context of combinatorial optimization, our framework relates quantum cost gradient operators, derived from the cost and mixing Hamiltonians, to classical cost difference functions that reflect cost function neighborhood structure. By considering QAOA circuits from the Heisenberg picture, we derive exact general expressions for expectation values as series expansions in the algorithm parameters, cost gradient operators, and cost difference functions. This enables novel interpretability and insight into QAOA behavior in various parameter regimes. For single-level QAOA$_1$ we show the leading-order changes in the output probabilities and cost expectation value explicitly in terms of classical cost differences, for arbitrary cost functions. This demonstrates that, for sufficiently small positive parameters, probability flows from lower to higher cost states on average. By selecting signs of the parameters, we can control the direction of flow. We use these results to derive a classical random algorithm emulating QAOA$_1$ in the small-parameter regime, i.e., that produces bitstring samples with the same probabilities as QAOA$_1$ up to small error. For deeper QAOA$_p$ circuits we apply our framework to derive analogous and additional results in several settings. In particular we show QAOA always beats random guessing. We describe how our framework incorporates cost Hamiltonian locality for specific problem classes, including causal cone approaches, and applies to QAOA performance analysis with arbitrary parameters. We illuminate our results with a number of examples including applications to QUBO problems, MaxCut, and variants of MaxSAT. We illustrate the generalization of our framework to QAOA circuits using mixing unitaries beyond the transverse-field mixer through two examples of constrained optimization problems, Max Independent Set and Graph Coloring. We conclude by outlining some of the further applications we envision for the framework. \end{abstract} \maketitle \tableofcontents \section{Introduction} \label{sec:intro} Parameterized quantum circuits have gained much attention both as potential near-term algorithms and as new paradigms for quantum algorithm design more generally. 
Approaches based on Quantum Alternating Operator Ans\"atze (QAOA)~ \cite{hogg2000quantum,hogg2000quantumb, Farhi2014,Hadfield17_QApprox,hadfield2019quantum} have been extensively studied in recent years \cite{farhi2014quantum,farhi2016quantum,Wecker2016training,Shabani16,lin2016performance,jiang2017near,farhi2017quantum,verdon2017quantum,wang2018quantum,crooks2018performance,mcclean2018barren,hadfield2018thesis,pichler2018quantum,zhou2018quantum,lloyd2018quantum,brandao2018fixed,niu2019optimizing,bapat2018bang,guerreschi2019qaoa,marsh2019quantum,verdon2019cvqaoa,akshay2019reachability,morales2019universality,farhi2019quantum,wang2019xy,hastings2019classical,szegedy2019qaoa,bravyi2019obstacles,marshall2020characterizing,farhi2020quantum,farhi2020quantumb,shaydulin2020classical,ozaeta2020expectation,wurtz2020bounds,stollenwerk2020toward,streif2020quantum,streif2020training,marwaha2021local,barak2021classical,marwaha2021bounds,chou2021limitations,harrigan2021quantum,kremenetski2021quantum,brady2021optimal,brady2021behavior,wurtz2022counterdiabaticity}. These approaches alternate $p$ times between applications of a cost-function-based phase-separation operator and a probability-amplitude mixing operator, illustrated in \figref{fig:QAOAcircuit}. Nevertheless, relatively few rigorous performance guarantees are known in nontrivial settings, especially beyond a small constant number of circuit layers and the few specific problems studied thus far. Thus, the power and underlying mechanisms of such algorithms remain unclear, as do the problems and regimes where quantum advantage may be possible. Hence it is important to develop novel tools and approaches for analyzing and understanding such algorithms. \begin{figure} \caption{A Quantum Alternating Operator Ansatz circuit with~$p$ levels that alternate between application of the phase and mixing operators $U_P$ and $U_M$. The $\gamma_j,\beta_j$ are parameters for these operators and $\ket{s}$ is the initial state.} \label{fig:QAOAcircuit} \end{figure} This analysis includes evaluating QAOA behavior and performance for both individual problem instances and for classes of problems or instances, e.g., worst- or average-case performance for instances in a class. Evaluating average behavior can be particularly useful for large $p$ since that case involves many QAOA parameters. In this case, finding optimal parameters for each instance can be difficult. Instead, focusing on characteristic or average behavior can simplify parameter selection by identifying a single set of parameters that works well for typical instances of the class, as well as lead to new performance guarantees. To help address these issues, in this paper we develop theoretical tools in the form of an analytic framework, with notation, concepts, and results, to provide better understanding of the behavior of these algorithms, their strengths and weaknesses, and ultimately to design better quantum algorithms. The main underlying ideas of our framework are as follows, and apply to computing algorithm operators and expectation values of interest: \begin{itemize} \item Quantum circuit observable expectation values can be equivalently computed by acting on the observable by the circuit (by conjugation $H\rightarrow U^\dagger H U=H+\dots $, i.e., the Heisenberg picture), followed by taking the initial state expectation value.
\item For layered circuits we can recursively conjugate to obtain exact operator series in the algorithm parameters; for alternating ansatz terms of the same or similar form will reappear in the series. \item For QAOA, in particular with the originally proposed transverse-field mixer, the resulting terms, their action, and their initial state expectation values can be related to classical functions derived from the cost function that capture its structure. \end{itemize} A number of useful properties for analysis are shown, including that initial state expectation values of many of the resulting terms are identically zero. For specific problem classes we can directly incorporate problem and operator locality. Further, in particular parameter regimes many such terms may be close to zero, allowing for compact approximate expressions by neglecting such terms, particularly for higher-order behavior. The expressions we derive are useful for more efficient numeric explorations of QAOA, as well as direct analytical insights. Further, the framework can be used to simplify the generation of more complicated expressions with computer algebra systems, such as for exact expressions or higher-order approximations, enabling further exploration of QAOA behavior. Our framework can be applied to illuminate various aspects of QAOA. We illustrate the use of the framework in several general results and applications including: \begin{itemize} \item exact formulas for QAOA$_p$ probabilities and expectation values as power series in the algorithm angles (parameters). \item leading-order approximations for QAOA$_p$ expectation values and probabilities. \begin{itemize} \item for arbitrary (nonconstant) cost functions there exist angles such that QAOA$_p$ beats random guessing. \item better or more applicable approximations may be systematically obtained by including higher-order terms. \item simple classical algorithms emulate sampling from QAOA circuits in some of these regimes, with small error. \item identification of some general parameter regimes for which QAOA$_p$ performance analysis is classically tractable, particularly a regime of polynomially-small parameters, which hence precludes quantum advantage, as illustrated in \figref{fig:phaseDiagram}. \end{itemize} \item a generalized formalization unifying several previous approaches for deriving QAOA$_p$ performance bounds, encapsulating previous results for specific problems such as exact results for MaxCut; we demonstrate this approach by obtaining results for QAOA$_1$ on a variant of Max-$2$-SAT as a specific example. \item a number of analytical and numerical examples illustrating the application of our framework to QAOA with the transverse-field mixer. \item two examples showing how our framework and results extend to quantum alternating ans\"atze beyond the transverse-field mixer, involving problems with different domains and encodings, or hard feasibility constraints. \end{itemize} Though we focus on the original QAOA, the framework naturally generalizes to other cases with structured ans\"atze and so would be useful for other applications, such as realizations of the variational quantum eigensolver for quantum chemistry, though in this paper we only hint at these further applications \sh{through the examples of \secref{sec:generalizedCalculus} and} concluding discussion of \secref{sec:discussion}. 
\begin{figure} \caption{Schematic regimes of the QAOA$_1$ operator $U_M(\beta)U_P(\gamma)$ with parameters $\beta$ and $\gamma$; similar ideas apply for QAOA$_p$. The inner region indicates operators close to the Identity $I=U_M(0)U_P(0)$, corresponding to small parameters, where the QAOA probabilities and cost expectation values are characterized by the leading order terms of the exact series expressions we derive. In particular, sufficiently (i.e., inverse-polynomially) small parameters allow for efficient classical emulation generally (i.e., classical sampling up to small error). In contrast, for arbitrary parameters, efficient classical sampling from QAOA$_1$ circuits is impossible under standard beliefs in computational complexity theory~\cite{farhi2016quantum} \label{fig:phaseDiagram} \end{figure} We remark here on some particularly related prior work. Our results build off, generalize, and unify various previous analytical approaches and results for the performance of QAOA for specific problems or in particular settings. For example, \cite{wang2018quantum,hadfield2018thesis,lin2016performance,brandao2018fixed,bravyi2019obstacles,wurtz2020bounds,ozaeta2020expectation,marwaha2021local,marwaha2021bounds,chou2021limitations} obtain bounds to the cost expectation value for relatively few layers. Whereas most prior work concerns specific problems or problem classes, many of our results apply to arbitrary cost functions. For MaxCut, ~\cite[App. 22-24]{szegedy2019qaoa} proposes a related approach for algorithmically generating polynomials capturing the QAOA cost expectation value. Another recent paper, similar in spirit to our framework though employing different technical approaches, explores a number of related but distinct viewpoints on low-depth quantum optimization~\cite{mcclean2020low}, in particular relating the QAOA mixing operator to a graph Laplacian (of which the transverse-field mixer is a special case). Our work provides complementary insights on the connection between mixers and cost function structure. Several of recent works consider classical algorithms matching the performance of low-depth QAOA circuits~\cite{hastings2019classical,marwaha2021local,barak2021classical}. Classical algorithms for generally computing expectation values of quantum observables are considered in~\cite{bravyi2019classical}; in particular they show quantum circuits exponentially close to the identity can be efficiently sampled from. Our work extends this result, in the context of QAOA circuits, from exponentially small angles to polynomially small. \sh{We emphasize that many of our formulas and results apply to QAOA circuits of arbitrary depth. As an example we consider QAOA for Grover's unstructured search problem, and demonstrate in \secref{sec:smallAngleQAOAp} how small-angle analysis easily reproduces the result obtained rigorously in~\cite{jiang2017near} that QAOA with the transverse-field mixer can reproduce the famous quadratic speedup of Grover's algorithm.} The remainder of the Introduction gives an informal description of our framework and the results we obtained by applying it. Subsequent sections give full definitions, generalizations and precise statements of results. \subsection{Overview of analytical framework} \label{sec:frameworkOverview} Quantum algorithms based on quantum alternating operator ans\"atze utilize the noncommutativity of the cost and mixing operators in a fundamental way. 
From the Heisenberg perspective, a quantum circuit may be equivalently viewed as acting on quantum observables\footnote{Many important quantities for parameterized quantum circuits may be expressed as observable expectation values such as the expected cost and approximation ratio, or measurement probabilities.} by conjugation, rather than on a quantum state (by matrix multiplication); observable expectation values may be equivalently computed as the initial state expectation of the conjugated observable. For a circuit with $\ell$ layers, this corresponds to $\ell$ iterated conjugation operations. In some cases this formulation gives advantages in computing or approximating expectation values. For example, this idea was applied to derive an exact formula for the level-$1$ QAOA cost expectation value for MaxCut in \cite{wang2018quantum}. We extend these approaches to more general cost functions and settings using the correspondence between commutators and unitary conjugation (i.e., between Lie algebras and Lie groups) to derive exact expressions for relevant operators and expectation values as power series in the algorithm parameters with terms that reflect cost function structure. For QAOA we show how the resulting operators and expectation values reflect cost function changes over neighborhoods induced by the mixing operator. For simplicity, the primary example we consider in the paper is the transverse-field mixer Hamiltonian $B=\sum_j X_j$ and corresponding initial state $\ket{s}:=\ket{+}^{\otimes n}$ as originally proposed for the quantum approximate optimization algorithm~\cite{farhi2014quantum}, though our framework may be applied more generally. This mixer induces a neighborhood structure on classical bitstrings related to Hamming (i.e., bit-flip) distance on the Boolean cube. Different mixers induce different neighborhood structures, and our framework extends to these cases as well, as we illustrate with two examples in \secref{sec:generalizedCalculus}. At the heart of our approach is a fundamental correspondence $$\textrm{quantum operators}\,\,\,\, \longleftrightarrow \,\,\,\, \textrm{classical functions}$$ between derived operators on the space of quantum states, and classical functions that describe their behavior in terms of the structure of the cost function. The functions and operators appearing in our framework are related to discrete versions of functions and operators from vector calculus, particularly discrete difference operators. Given a mixer $B$ and cost Hamiltonian $C$, iteratively taking commutators generates a sequence of Hamiltonians (up to factors of $i$). The most fundamental such operator is their commutator $$ \nabla C := [B,C]=BC-CB,$$ which we call the \textit{cost gradient operator}, as motivated by its action on particular quantum states. (For convenience, we will sometimes refer to higher-order commutators of $B$ and $C$ as \textit{cost gradients} generically.) In particular, we show $\nabla C \ket{s}=\tfrac1{\sqrt{2^n}}\sum_{x\in\{0,1\}^n}dc(x)\ket{x}$, for the classical \textit{cost divergence function} $$ dc(x):=\sum_{j=1}^n \partial_j c(x),$$ where $c(x)$ is the cost function to be optimized, and $\partial_j c(x)$ gives the change in cost for each string $x\in\{0,1\}^n$ with respect to flipping its $j$th bit (i.e., $dc$ captures average cost function structure over single bit-flip neighborhoods). For example, $dc(x^*)\leq 0$ is a necessary condition for $x^*$ to maximize the cost function. 
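For concreteness, $\partial_j c$ and $dc$ are inexpensive to tabulate by direct enumeration on small instances. The following Python sketch does so for an illustrative MaxCut toy instance; the helper names and the instance are chosen only for illustration and are not part of our formal development.
\begin{verbatim}
from itertools import product

def flip(x, j):
    # return bitstring x (a tuple of 0/1 entries) with its j-th bit flipped
    return x[:j] + (1 - x[j],) + x[j + 1:]

def partial_diff(c, x, j):
    # j-th partial cost difference: c(x^(j)) - c(x)
    return c(flip(x, j)) - c(x)

def divergence(c, x, n):
    # cost divergence dc(x) = sum_j partial_j c(x)
    return sum(partial_diff(c, x, j) for j in range(n))

# illustrative toy instance: MaxCut on the triangle graph K_3
edges = [(0, 1), (1, 2), (0, 2)]
cost = lambda x: sum(x[i] ^ x[j] for (i, j) in edges)

n = 3
for x in product((0, 1), repeat=n):
    print(x, cost(x), divergence(cost, x, n))
# one can check that sum_x dc(x) = 0 and that dc(x*) <= 0 at every maximizer x*
\end{verbatim}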
We may then use properties of the functions to understand properties of the operators, and vice versa; for example we have in general $\tfrac1{2^n}\sum_x dc(x)=\bra{s}\nabla C\ket{s}=0$. The cost gradient $\nabla C$ corresponds to infinitesimal conjugation of $C$ by the QAOA mixing operator, i.e., $e^{i\beta B}Ce^{-i \beta B}=C+i\beta\nabla C+O(\beta^2)$ as $|\beta|\rightarrow 0$. Including additional terms leads to higher-order cost gradients (nested commutators, e.g., $\nabla^2 C:=[B,[B,C]]$, and so on) that reflect cost changes over neighborhoods of greater Hamming distance, at higher powers of the mixing angle. To include the QAOA phase operator we also require commutators with respect to $C$, $\nabla_C:=[C,\cdot]$, which act in relation to the underlying cost function $c(x)$. Hence, treating the layers of QAOA as iterated conjugations of an observable leads to series expressions which may be used to compute or approximate expectation values and other important quantities. As we discuss briefly in Sec.~\ref{sec:discussion}, our framework can be applied to recently proposed variants of QAOA \cite{bravyi2019obstacles,zhu2020adaptive}, and more generally to layered quantum algorithms including applications beyond combinatorial optimization; in such cases the resulting operators will reflect structure resulting from both the problem and choice of ansatz. \subsection{Overview of application to QAOA} The series expansions resulting from our framework are especially informative when the leading terms in the series dominate the behavior. This is the case for the \textit{small-angle regime}, i.e., when all angles in the QAOA parameter schedule are relatively small in magnitude. Hence, as a main application of our framework, we investigate the small-angle setting for QAOA circuits generally. We analyze single-layer QAOA$_1$ in detail in \secref{sec:QAOA1}, and generalize to $p\geq 1$ layers (denoted QAOA$_p$ throughout) in \secref{sec:QAOAp}. \begin{table}[h] \centering \begin{tabular}{ |c|c|c| } \hline Quantity & Initial value & Leading-order contribution \\ \hline $P_1(x)$ & $\tfrac1{2^n}$ & $-\tfrac2{2^n}\gamma \beta dc(x)$ \\ $\langle C \rangle_1$ & $ \tfrac1{2^n}\sum_x c(x)$ & $-\tfrac2{2^n}\gamma \beta \sum_x c(x) dc(x)$ \\ \hline $P_p(x)$ & $\tfrac1{2^n}$ & $-\tfrac2{2^n} \left(\sum_{1\leq i\leq j}^p \gamma_i \beta_j\right) dc(x)$ \\ $\langle C \rangle_p$ & $ \tfrac1{2^n}\sum_x c(x)$ & $-\tfrac2{2^n} \left(\sum_{1\leq i\leq j}^p \gamma_i \beta_j\right) \sum_x c(x)dc(x)$ \\ \hline \end{tabular} \caption{ Leading-order cost expectation and probabilities for QAOA$_p$ for cost function~$c(x)$, which dominate in particular when the QAOA angles are small, $|\gamma_j|,|\beta_j|\ll 1$. The initial probability of measuring $x\in\{0,1\}^n$ is $P_0(x)=\tfrac1{2^n}$ which corresponds to QAOA with all angles zero. The first two rows follow from \thmref{thm1:smallAngles}, and the following two rows follow from \thmsref{thm:allanglessmall}{thm:smallprecursed}. In each case the next contributing terms are order $4$ and higher in the QAOA angles; all other terms up to order $3$ are shown to be identically~$0$. We show the three additional terms that contribute up to fifth order explicitly in \thmref{thm:smallprecursed}. 
We emphasize that the expression $\sum_{1\leq i\leq j}^p \gamma_i \beta_j$ depends on both the algorithm parameter values and their ordering.} \label{tab:tab1smallangles} \end{table} For QAOA$_1$, we show that, to third order in the angles $\gamma,\beta$, the probability to measure each bitstring $x$ changes from its initial value by a single contribution proportional to the cost divergence $dc(x)$ and the product $\gamma\beta$. Hence, with respect to the $2^n$-dimensional space of bitstring probabilities, to leading order QAOA is reminiscent of a step of classical gradient descent, with learning rate (step size) proportional to $\gamma\beta$. The leading-order contribution to the cost expectation then follows as the expectation of $c(x)dc(x)$ taken uniformly over bitstrings $x$. Both leading-order expressions are shown in \tabref{tab:tab1smallangles}. These expressions are derived using the correspondence between cost gradients and classical functions. In particular, we give several general lemmas showing that the terms corresponding to other low-order combinations of $\gamma,\beta$ (i.e., in this case $\gamma,\beta,\gamma^2,\beta^2,\gamma^3,\beta^3,\gamma^2\beta,\gamma\beta^2)$ are identically zero, independent of the particular cost function, and that the same terms contribute for the case of QAOA$_p$ with $p\geq 1$ (with coefficients depending on all $2p$ angles). Higher-order contributing terms are shown to similarly relate to classical functions and are relatively straightforward to derive using our framework, so that increasingly accurate approximate formulas may be systematically generated. As a consequence, we show that the measurement outcomes of QAOA$_1$ are effectively classically emulatable in the regime of polynomially small angles. More precisely, we provide a simple classical randomized algorithm that produces bitstrings with probabilities matching the leading order behavior of QAOA$_1$, and bound the resulting error from the neglected terms for the case where $\gamma,\beta$ are bounded in magnitude by an inverse-polynomial in the problem size. (A similar argument applies to QAOA$_p$ (cf. \tabref{tab:tab1smallangles}), though we do not analyze the error in detail for the general $p$ case.) We use these results to show that QAOA always beats random guessing for any nonconstant cost function. Our general results point to the tradeoff between parameter size and number of QAOA levels for potential quantum advantage. Indeed, it is known that sampling from QAOA$_p$ circuits with unrestricted parameters, even for $p=1$, cannot be efficiently performed classically under widely believed complexity theoretic assumptions~\cite{farhi2016quantum}. Our results take steps towards a clearer demarcation of when quantum advantage may be possible (cf. \figref{fig:phaseDiagram}). Extending the analysis to QAOA with an arbitrary number of levels~$p$ shows that QAOA$_p$ gives the same leading-order behavior as QAOA$_1,$ with a suitable choice of effective angles for QAOA$_1$, as indicated by the quantities shown in \tabref{tab:tab1smallangles}. Thus, there is no possibility of quantum advantage for QAOA$_p$ for fixed $p$ when all angles are small. We similarly show how to obtain leading-order terms in particular cases where only some of the angles are small, which yield further generally applicable expressions.
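The leading-order quantities of \tabref{tab:tab1smallangles} can be evaluated and sampled from classically. A minimal Python sketch of this idea follows; the helper names and toy instance are illustrative, it relies on brute-force enumeration rather than the efficient procedure developed later, and it assumes $|\gamma|,|\beta|$ small enough that the truncated distribution is nonnegative.
\begin{verbatim}
import random
from itertools import product

def dc_table(cost, n):
    # tabulate the cost divergence dc(x) by direct enumeration (small n only)
    table = {}
    for x in product((0, 1), repeat=n):
        table[x] = sum(cost(x[:j] + (1 - x[j],) + x[j + 1:]) - cost(x)
                       for j in range(n))
    return table

def leading_order_p1(cost, n, gamma, beta):
    # P1(x) ~ 1/2^n - (2/2^n) * gamma * beta * dc(x)
    N = 2 ** n
    dc = dc_table(cost, n)
    return {x: 1.0 / N - 2.0 * gamma * beta * dc[x] / N for x in dc}

def sample(p1):
    xs, ws = zip(*p1.items())
    return random.choices(xs, weights=ws, k=1)[0]

# illustrative example: MaxCut on a triangle, small angles
edges = [(0, 1), (1, 2), (0, 2)]
cost = lambda x: sum(x[i] ^ x[j] for (i, j) in edges)
p1 = leading_order_p1(cost, n=3, gamma=0.05, beta=0.05)
print(sum(p1[x] * cost(x) for x in p1), sample(p1))  # leading-order <C>_1 and one sample
\end{verbatim}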
For QAOA$_1$ applied to quadratic unconstrained binary optimization (QUBO) problems, we show the case of small-mixing angle $\beta$ and unrestricted phase angle $\gamma$ can again be efficiently emulated classically, whereas this does not appear generally possible in the converse case of small phase angle but arbitrary mixing angle. The latter case can be understood in light of our framework from the fact that cost differences over neighborhoods of arbitrary Hamming distance contribute as the mixing angle grows in magnitude, which eventually become superpolynomial in size. Application of our framework is not restricted to small angles. Including higher order terms in our series expansions results in a sequence of increasingly accurate approximations valid over larger angle regimes. Alternatively, for specific problems and relatively small $p$ we can often derive compact exact expressions using our formalism by explicitly incorporating problem structure. We give high-level algorithms computing approximate or exact QAOA cost expectation values in this way in \secref{sec:classicalAlgGeneral2}. \sh{Our approach yields more tractable expressions than alternative approaches to obtaining general formulas; we contrast with one such \lq\lq sum-of-paths\rq\rq\ approach to QAOA in \appref{app:sumOfPaths}}. \subsection{Roadmap of paper} The Table of Contents gives the detailed structure of the paper. Here, we make some remarks on dependencies between the various sections. \secref{sec:calculus} gives a self-contained presentation of our framework, including general cost difference functions and cost gradient operators and their properties; see \tabref{tab:summary} below for a summary of important notation. As an immediate demonstration of our formalism we state our general leading-order results for QAOA in \secref{sec:smallAngleQAOA1initial}, and illustrate how the framework facilitates comparison between different quantum circuit ans\"atze by comparing leading-order QAOA$_1$ to a quantum quench in \secref{sec:quench}. In \secref{sec:probLocality}, we show how to incorporate into our framework, when applicable, additional problem or instance-wise locality considerations (for example, MaxCut restricted to bounded-degree graphs), providing a basis for more refined QAOA results incorporating locality in \secsref{sec:lightcones}{sec:lightconesp}. \secref{sec:QAOA1} provides a detailed application of our framework to QAOA$_1$. After deriving general results, we use the framework to obtain results for QAOA$_1$ for small mixing angle (but arbitrary phase) and vice versa (small phase, but arbitrary mixing angle). The section presents several examples; subsequent sections are independent of these examples. \secref{sec:QAOAp} applies the framework to the general case of QAOA$_p$. While a number of the results in \secref{sec:QAOAp} generalize those in \secref{sec:QAOA1}, this section can be read immediately after \secref{sec:calculus} by those readers who wish to focus on the $p > 1$ case. \secref{sec:generalizedCalculus} discusses how our framework applies to QAOA using different encodings and mixers other than the transverse-field mixer. In this case, the operators in our framework are defined in terms of the neighborhood structures induced by those alternative mixers, which we illustrate with two examples of constrained problems: Maximum Independent Set and Graph Coloring. 
This section builds on the results of \secref{sec:calculus}, but does not directly depend on those of \secref{sec:QAOA1} or \secref{sec:QAOAp}. Finally, we comment on several additional applications of our framework in \secref{sec:discussion}. Some additional technical results and proofs are deferred to the Appendices and may also be read independently. \section{Analytical framework for quantum optimization} \label{sec:calculus} Consider a classical cost function $c(x)$ that we seek to optimize over bitstrings $x\in\{0,1\}^n$, i.e., the $n$-dimensional Boolean hypercube. For example, $c(x)$ gives the number of cut edges in MaxCut, or the number of satisfied clauses in MaxSAT. Here for clarity we assume all bitstrings encode feasible candidate solutions; we consider examples which extend our methods to optimization over non-trivial feasible subspaces in \secref{sec:generalizedCalculus}. We may encode each bitstring $x\in\{0,1\}^n$ with the corresponding $n$-qubit \textit{computational basis} state $\ket{x}$. A computational basis (Pauli~$Z$) measurement of this quantum register at the end of an algorithm then returns each candidate solution~$x$ with some probability~$P(x)$. (Throughout the paper $n$ denotes the number of (qu)bits and $X_j,Y_j,Z_j$ denote the Pauli matrices $\sigma_X,\sigma_Y,\sigma_Z$ acting on the $j$th qubit, respectively.) The cost function is naturally represented as the \textit{cost Hamiltonian} \begin{equation} \label{eq:costHam} C = \sum_x c(x) \ket{x}\bra{x} \end{equation} which acts diagonally in the computational basis as $C\ket{x}=c(x)\ket{x}$ for each $x\in\{0,1\}^n$. In quantum algorithms such as quantum annealing or QAOA, the usual initial state is \begin{equation} \label{eq:initState} \ket{s}:=\ket{+}^{\otimes n}=\left(\tfrac{\ket{0}+\ket{1}}{\sqrt{2}} \right)^{\otimes n} = \frac1{\sqrt{2^n}} \sum_{x\in\{0,1\}^n} \ket{x} \end{equation} which gives a uniform probability distribution over bitstrings with respect to computational basis measurements (i.e., $P_0(x):=|\bra{x}\ket{s}|^2=\tfrac1{2^n}$, and $\langle c\rangle_0 = \tfrac1{2^n}\sum_x c(x)$), and the (transverse-field) \textit{mixing Hamiltonian} \begin{equation} \label{eq:mixingHam} B :=\sum_{j=1}^n X_j \end{equation} is utilized to mediate probability flow between states.\footnote{Though different initial states or mixing Hamiltonians have been proposed to some extent in the literature, for simplicity we use $\ket{s}$ to denote the equal superposition state \eqrefp{eq:initState} and $B$ to denote the transverse-field Hamiltonian \eqrefp{eq:mixingHam} throughout the paper. This paradigm is called X-QAOA in \cite{streif2020quantum}.} Here we briefly review the original QAOA~\cite{Farhi2014} of Farhi, Goldstone, and Gutmann, which is the main application we consider in the paper. A QAOA$_p$ circuit creates the parameterized quantum state (cf.~\figref{fig:QAOAcircuit}) \begin{equation} \ket{\boldsymbol{\gamma \beta}}_p = U_M(\beta_p)U_P(\gamma_p)U_M(\beta_{p-1})\dots U_P(\gamma_{1})\ket{s} = e^{-i\beta_p B}e^{-i\gamma_p C}e^{-i\beta_{p-1} B}\dots e^{-i\gamma_1 C} \ket{s} \end{equation} by applying the mixing and phase separation unitaries $U_M(\beta), U_P(\gamma)$ in alternation $p$ times each to the initial state $\ket{s}=\tfrac1{\sqrt{2^n}} \sum_x \ket{x}$. The original QAOA mixing unitary $U_M(\beta)=\exp(-i\beta B)$ is specified as time evolution under the transverse-field Hamiltonian of \eqref{eq:mixingHam}, and the phase operator $U_P(\gamma)=\exp(-i\gamma C)$ by time evolution under the cost Hamiltonian $C$.
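For small $n$ the QAOA$_p$ state can be formed exactly by elementary linear algebra, which is convenient for numerically checking the series expressions derived below. A minimal Python statevector sketch follows (illustrative names and toy instance; it uses the product form of the mixer, which is valid since the $X_j$ mutually commute).
\begin{verbatim}
import numpy as np

def qaoa_state(cost_diag, n, gammas, betas):
    # statevector simulation of |gamma,beta>_p for small n (numerical checks only)
    N = 2 ** n
    psi = np.full(N, 1 / np.sqrt(N), dtype=complex)   # |s>
    idx = np.arange(N)
    for g, b in zip(gammas, betas):
        psi = np.exp(-1j * g * cost_diag) * psi       # U_P(gamma): diagonal phase
        for j in range(n):   # U_M(beta) = prod_j (cos(b) I - i sin(b) X_j)
            psi = np.cos(b) * psi - 1j * np.sin(b) * psi[idx ^ (1 << j)]
    return psi

# illustrative example: MaxCut on the triangle graph; bit j of the index x encodes x_j
n, edges = 3, [(0, 1), (1, 2), (0, 2)]
c = np.array([sum(((x >> i) & 1) ^ ((x >> j) & 1) for i, j in edges)
              for x in range(2 ** n)], dtype=float)
psi = qaoa_state(c, n, gammas=[0.3], betas=[0.2])
P1 = np.abs(psi) ** 2                                 # measurement probabilities P_1(x)
print(float(P1 @ c))                                  # <C>_1
\end{verbatim}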
In optimization applications, each QAOA state preparation is typically followed by a measurement in the computational basis, which returns some $x\in\{0,1\}^n$ probabilistically. For QAOA$_p$ with fixed parameters we let $P_p(x)$ denote the probability of such a measurement returning the bitstring $x$, and $\langle C \rangle_p = \langle c \rangle_p :=\bra{\boldsymbol{\gamma \beta}}C\ket{\boldsymbol{\gamma \beta}}$ the expected value of the cost Hamiltonian (function). More generally, we use $ \langle A \rangle_p := \bra{\boldsymbol{\gamma \beta}}A\ket{\boldsymbol{\gamma \beta}}$ to denote the QAOA$_p$ expectation value of an operator~$A$. We refer generically to $\gamma_1,\beta_1,\dots,\beta_p$ and $p$ as QAOA \textit{parameters}, and will often use the term QAOA \textit{angles} when the QAOA \textit{level} $p$ is understood. The quantum gate depth for implementing a QAOA$_p$ circuit is related to but not the same as~$p$. Repeated preparation and measurement of a QAOA$_p$ state on a quantum computer produces a collection of candidate solution samples, with the overall best solution found returned. The samples may be used to estimate quantities such as $\langle C\rangle_p$ which in turn may be used to search for better phase and mixing angles, e.g., variationally. Alternatively, angles may be selected a priori or restricted to a specific domain or schedule. The performance of such algorithms arises from the action of the mixing operator, which for the choice of $B$ of \eqref{eq:mixingHam} relates to Hamming distances between bitstrings. Its action on quantum states can be readily interpreted in terms of classical bitstrings (with respect to the computational basis). Indeed, let $x^{(j)}$ denote the bitstring $x\in\{0,1\}^n$ with its $j$th bit flipped. Then from its action on basis states $X_j\ket{x}=\ket{x^{(j)}}$ we identify each Pauli operator $X_j$ as the $j$th \textit{bit-flip} operator. Hence the Hamiltonian~$B$ maps a given $\ket{x}$ to a superposition of strings differing from~$x$ by a single bit-flip \begin{equation} \label{eq:mixHam2} B\ket{x}=\sum_{j=1}^n\ket{x^{(j)}} =: \sum_{y\in nbd(x)}\ket{y}, \end{equation} where $nbd(x)\subset\{0,1\}^n$ denotes the Hamming distance-$1$ neighbors of $x$. Thus we see that the \textit{quantum} operator $B$ induces a \textit{classical neighborhood} structure on the Boolean cube. Similarly, iteratively applying \eqref{eq:mixHam2} one easily shows the action of $B^k$ relates to Hamming distance-$k$ neighborhoods. Indeed, for the QAOA mixing operator $U_M(\beta)=e^{-i\beta B}$, the identities $X^2=I$ and $[X_i,X_j]=0$ give $U_M(\beta) =\prod_{j=1}^n (I \cos(\beta) - i\sin(\beta)X_j)$, and so its matrix elements with respect to the computational basis, $x,y\in\{0,1\}^n$, are \begin{eqnarray} \label{eq:mixingmatrixelements} \bra{x}U_M(\beta)\ket{y} =\cos^n(\beta)(-i\tan(\beta))^{d(x,y)}=:u_{d(x,y)}, \end{eqnarray} which depend only on $\beta$ and the Hamming distance $d=d(x,y)$ between $x$ and $y$.\footnote{ The quantities $|u_d(\beta)|$, $d=1,\dots,n$ each have a single maximum at $\beta_d^* = \arctan \sqrt{d/(n-d)}$. This shows that state transitions of increasing Hamming weight are relatively favored as $|\beta|$ increases. This observation motivates a complementary sum-of-paths approach discussed in \sh{\appref{app:sumOfPaths}.
}} We use this mixing neighborhood structure to build a general calculus relating classical functions encoding how the cost function changes over each neighborhood to quantum operators capturing the action of corresponding QAOA circuits. Our results formalize a number of results in prior literature concerning more specific settings. Though we focus on the transverse-field mixer~\eqrefp{eq:mixingHam}, our results provide a guide for applying our framework to other mixers or initial states. We consider examples of more general mixing Hamiltonians and unitaries in \secref{sec:generalizedCalculus}, which naturally induce different classical neighborhoods in an analogous way. In each case the mixing operator neighborhoods are independent of the particular cost function and any neighborhood structure that it may carry. \subsection{Cost difference and cost divergence functions} \label{sec:costDiffs} The aggregate action of the QAOA phase and mixing operators fundamentally relates to the underlying (classical) operations of \textit{cost function evaluations} and \textit{bit flips}. To use this observation to characterize the behavior of QAOA circuits we define a sequence of classical \textit{cost difference} functions derived from the cost function.\footnote{Some of our definitions to follow are labeled in analogy with familiar notions from vector calculus as an aid to identifying their behavior. However, we note important differences in our discrete domain setting.} For an arbitrary cost function $c(x)$ we define for $j=1,\dots,n$ the \textit{$j$th (partial) cost difference} function~$\partial_j c$ as \begin{eqnarray} \label{eq:costdiffs} \partial_j c\,(x):= c(x^{(j)})-c(x). \end{eqnarray} Each $\partial_j c$ encodes how the cost function changes with respect to flipping the $j$th bit of its input.\footnote{A related but distinct definition sometimes called the \lq\lq$j$th derivative\rq\rq\ is $c(x|_{x_j=1})-c(x|_{x_j=0})$; see \cite{boros2002pseudo}.} Considering the local neighborhood of single bit flips about each bitstring $x$ we define the \textit{cost divergence} function as the total cost difference over each neighborhood \begin{equation} \label{eq:costdiv} dc(x) := \sum_{j=1}^n \partial_j c(x). \end{equation} The cost differences and divergence satisfy $\sum_x \partial_jc(x) = \sum_x dc(x) = 0$. These functions capture information about the cost function structure. We show an example in \figref{fig:fig1costdiv}. For a solution~$x^*$ that maximizes~$c(x)$, we have $\partial_j c(x^*)\leq 0$ for each $j\in [n]$, which implies $dc(x^*)\leq 0$. Thus $\forall j\in [n]\; \partial_j c(x^*)\leq 0$ gives a necessary condition for a string $x^*$ to give a global maximum. (Likewise, $\partial_j c(x^*)\geq 0$ and $dc(x^*)\geq 0$ for minima.) Hence we expect the cost divergence to be relatively large in magnitude and negative (positive) near local maxima (minima). On the other hand, for \lq\lq typical\rq\rq\ bitstrings $y$ with cost close to the mean~$\langle c \rangle_0=\tfrac1{2^n}\sum_xc(x)$, we expect nearly as many possible bit flips to increase the cost function as to decrease it, which suggests $dc(y)\simeq 0$ for such typical strings~$y$. \begin{figure} \caption{Example: Local cost differences $\partial_ic(x)$ for a $3$-variable Max-$2$-SAT instance.} \label{fig:fig1costdiv} \end{figure} \subsubsection{Higher-order and mixed cost differences} We likewise consider higher-order changes in the cost function due to multiple bit flips.
This follows in the usual way from considering each partial difference $\partial_j$ as an operator mapping classical functions to classical functions $\partial_j:c\rightarrow \partial_j c$ which may then be iterated. A useful property is the antisymmetry relation \begin{eqnarray} \label{eq:partialjCsym} \partial_j c(x^{(j)}) \,=\, -\partial_j c(x). \end{eqnarray} Applying this twice gives \begin{equation} \partial_j^2 c (x) = \partial_j (\partial_j c(x))=-2 \partial_j c(x) \end{equation} which we write as $\partial_j^2 c = -2\partial_j c$, and hence it follows that $\partial_j^k c =(-2)^{k-1} \partial_j c$ for $k \in \naturals$. For $j\neq k$, let $x^{(j,k)}$ denote $x$ with its $j$th and $k$th bits flipped, and so $\ket{x^{(j,k)}}=X_jX_k\ket{x}$. The second-order (mixed) cost difference function $ \partial_j \partial_k c$ is given by $$ \partial_j ( \partial_k c) (x) = c(x) - c(x^{(j)}) - c(x^{(k)})+ c(x^{(j,k)})$$ which is symmetric with respect to interchange of $j$ and $k$. Hence $\partial_j \partial_k c = \partial_k \partial_j c$, so mixed difference operations mutually commute. These functions generalize to third order cost differences $\partial_i \partial_j \partial_k c$ and higher orders in the natural way. We may likewise iterate the cost divergence operator $d:c\rightarrow dc$ to give \begin{equation} \label{eq:d2c} d^2c(x) := d(dc)(x) = -2 dc(x) + 2 \sum_{i<j}\partial_i \partial_j c(x) , \end{equation} which relates to the change in cost function over the Hamming distance $2$ neighborhood of each~$x$. More generally, the higher-order cost divergence $d^kc$ is defined recursively as $d^kc(x):=d(d(\dots d(dc))..)(x)$, which relates to the change in cost function over neighborhoods of $k$ bit flips or less. \subsection{Application: Leading-order behavior of QAOA} \label{sec:smallAngleQAOA1initial} Before developing the operator side of our framework, we state two results relating the leading-order behavior of QAOA, generally, to the cost difference functions. The details are given in \secsref{sec:QAOA1}{sec:QAOAp}. We first consider QAOA with a single level. By expressing the action of a QAOA$_1$ circuit as a series in the parameters $\gamma,\beta$, the leading-order contributions to the measurement probabilities and cost function expectation (beyond the initial contributions $P_0(x)=1/2^n$ and $\langle C\rangle_0=\langle c\rangle_0$ obtained from uniform random guessing) are expressible in terms of the cost divergence function. Hence, the cost divergence determines the behavior of QAOA$_1$ in the \lq\lq small-angle\rq\rq\ regime $|\gamma|,|\beta|\ll 1$; see \remref{rem:normInvariance} below, which also considers additional important quantities such as $\|C\|$. Moreover, we show that for sufficiently small parameters QAOA$_p$ behaves as an effective QAOA$_1$. \begin{theorem}[Leading-order QAOA$_1$]\label{thm1:smallAngles} Consider QAOA$_1$ with an arbitrary cost function~$c(x)$. Then to lowest order in $\gamma,\beta$ the probability $P_1(x)$ of measuring a bitstring~$x\in\{0,1\}^n$ is given by \begin{equation} \label{eq:px1} P_1(x) \,\simeq \,\, P_0(x) \,-\, \frac{2\gamma \beta}{2^n}dc(x), \end{equation} with expected value of the cost Hamiltonian (cost function) \begin{eqnarray} \label{eq:expecC1} \langle C \rangle_1 \,\simeq \, \langle C \rangle_0 \, -\, \frac{2 \gamma \beta}{2^n} \sum_x c(x) \,dc(x), \end{eqnarray} where the neglected terms in both equations are degree~4 or higher in $\gamma,\beta$.
Moreover, for any non-constant $c(x)$ we have \begin{eqnarray}\label{eq:expecC1b} \sum_x c(x) dc(x) < 0. \end{eqnarray} \end{theorem} The theorem gives novel insight into QAOA. We see that to leading order, QAOA$_1$ causes probability to flow into (or out of) each bitstring $x$ in proportion to its cost divergence $dc(x)$. In particular, for a maximization problem with $c(y)\geq 0$ for every $y$, when $0< \gamma,\beta \ll 1$ from \eqref{eq:expecC1} and \eqref{eq:expecC1b} we see that to leading order \textit{probability flows to increase the cost expectation value}. Similarly, selecting the product $\gamma \beta <0$ guarantees that probability flows to reduce the cost expectation value, to lowest order. As an example, for MaxCut, \tabref{tab:summary} below has $dc(x)=2m-4c(x)$ for a graph with $m$ edges, so \thmref{thm1:smallAngles} gives $$ \langle C \rangle_1 - \langle C \rangle_0 \simeq -4m\gamma\beta\langle C \rangle_0 + 8\gamma\beta\langle C^2 \rangle_0=2\gamma\beta m$$ to leading order for $|\gamma|,|\beta|\ll 1$, where notably no terms quadratic in $m$ contribute. In particular, when $\gamma\beta$ is positive, the change in cost function $\langle C \rangle_1 - \langle C \rangle_0 $ is strictly positive as, e.g., $\gamma,\beta\rightarrow 0^+$. Such information may help in initial selection and search for suitable algorithm parameters. We bound the error arising from the neglected higher-order terms in \eqsref{eq:px1}{eq:expecC1} in \secref{sec:QAOA1}, from which it follows that angles can always be selected such that QAOA beats random guessing in expectation; see \secref{sec:randomGuessing}. In terms of the solution probabilities, observe that, while not the same, the lowest order approximation \eqref{eq:px1} is reminiscent of a step of classical gradient descent for optimizing functions over continuous domains, in this case the $2^n$-dimensional probability vector, with \lq\lq learning rate\rq\rq\ proportional to the product $\gamma\beta$. Observe that, letting $\widetilde{P_1}(x)$ denote the right-hand side of \eqref{eq:px1}, the property $\sum_x dc(x)=0$ implies $\widetilde{P_1}(x)$ gives a normalized probability distribution (assuming large enough $n$ or sufficiently small $|\gamma|,|\beta|$ such that $\widetilde{P_1}(x)\geq 0$), which implies \eqref{eq:px1} may be interpreted classically. Indeed, using the theorem we show a simple classical randomized algorithm based on coin flipping in \secref{sec:smallQAOA1class} that reproduces the lowest-order QAOA probabilities, and hence samples from QAOA circuits to small error in a suitably defined \lq\lq small-angle\rq\rq\ regime. The proof of \thmref{thm1:smallAngles} is relatively straightforward using our framework and is given in \secref{sec:smallAngleQAOA1}. We emphasize that the proof shows the other possible terms up to third order (i.e., those proportional to $\gamma,\gamma^2,\gamma^3,\beta,\beta^2,\beta^3,\gamma^2\beta$ or $\gamma\beta^2$) to be identically zero in both~\eqsref{eq:px1}{eq:expecC1}, in general. We show the nonzero contributing terms up to fifth order explicitly in \secref{sec:QAOAp}. Furthermore, we show that for any cost Hamiltonian given as a linear combination of Pauli $Z$ operators $C=a_0I+\sum_ja_jZ_j +\sum_{i<j}a_{ij}Z_iZ_j+\dots$, the quantity $\tfrac1{2^n}\sum_x c(x) \,dc(x)$ appearing in \eqref{eq:expecC1} may be efficiently computed from the Hamiltonian coefficients~$a_\alpha$; see \lemref{lem:expecCDC} below.
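As a quick numerical illustration of \thmref{thm1:smallAngles}, the following Python sketch compares the exact QAOA$_1$ expectation value with the leading-order expression for an illustrative toy instance and several small angle choices; it reuses the \texttt{qaoa\_state} helper from the earlier sketch, and all names are illustrative.
\begin{verbatim}
import numpy as np

n, edges = 3, [(0, 1), (1, 2), (0, 2)]
c = np.array([sum(((x >> i) & 1) ^ ((x >> j) & 1) for i, j in edges)
              for x in range(2 ** n)], dtype=float)
dc = np.array([sum(c[x ^ (1 << j)] - c[x] for j in range(n))
               for x in range(2 ** n)])

c0 = c.mean()                      # <C>_0
cdc = (c * dc).mean()              # (1/2^n) sum_x c(x) dc(x), negative for nonconstant c

for gamma, beta in [(0.05, 0.05), (0.1, 0.05), (-0.05, 0.05)]:
    psi = qaoa_state(c, n, [gamma], [beta])   # helper from the sketch above
    exact = float(np.abs(psi) ** 2 @ c)
    approx = c0 - 2 * gamma * beta * cdc      # leading order
    print(gamma, beta, exact, approx)         # differences are 4th order in the angles
\end{verbatim}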
For $p>1$ layers we similarly derive leading order expansions for QAOA$_p$, with respect to all or some angles being treated as expansion parameters, in \secref{sec:smallAngleQAOAp}. As remarked, the leading order contribution to probabilities and cost expectation for QAOA$_p$ is shown to be of the same form as that of QAOA$_1$, but with a generalized coefficient that depends on all $2p$ phase and mixing angles. \begin{theorem}[3rd order QAOA$_p$] \label{thm:allanglessmall} For QAOA$_p$ we have cost expectation \begin{eqnarray} \label{eq:expecCp0a} \langle C \rangle_p \, &=& \langle C \rangle_{0} \,-\, \frac2{2^n} \left(\sum_{1\leq i\leq j}^p \gamma_i \beta_j\right) \sum_x c(x)dc(x)\, + \,\dots \end{eqnarray} where the terms not shown to the right are order four or higher in $\gamma_1,\beta_1,\dots, \gamma_p,\beta_p$. \end{theorem} \begin{table}[ht] \centering \begin{tabular}{ |c|c|c||c|} \hline Label & Symbol & Definition & Example: Value for MaxCut\\ \hline Cost function & $c(x)$ & $c:\{0,1\}^n \rightarrow {\mathbb R}$ & $\sum_{(ij)\in E } x_i \oplus x_j$\\ $j$th partial difference & $\partial_j c(x)$ & $c(x^{(j)})-c(x)$ & $|N_j|-2\sum_{i \in N(j)} x_i \oplus x_j$ \\ Cost divergence & $dc(x)$ & $\sum_{i=1}^n \partial_i c(x)$ & $2|E| - 4c(x)$ \\ $\ell$th cost divergence & $d^\ell c(x)$ & $\sum_{i=1}^n \partial_i d^{\ell-1} c(x)$ & $(-4)^\ell (c(x)-\tfrac{|E|}2)$ \\ \hline Mixing Hamiltonian & $B$ & transverse-field & $\sum_{i=1}^n X_i$\\ Cost Hamiltonian& $C$ & $C\ket{x}=c(x)\ket{x}$ & $\tfrac{|E|}{2}I-\tfrac12\sum_{(ij)\in E} Z_i Z_j$\\ Cost divergence Ham. & $DC$ & $DC \ket{x}= dc(x)\ket{x}$ & $2\sum_{(ij)\in E} Z_i Z_j$ \\ $\ell$th cost div. Ham. & $D^\ell C$ & $D^\ell C \ket{x}= d^\ell c(x)\ket{x}$ & $-\tfrac12 (-4)^\ell \sum_{(ij)\in E} Z_i Z_j$ \\ Cost gradient op. & $\nabla C$ & $[B ,C]$ & $ i \sum_{(ij)\in E}(Y_iZ_j + Z_iY_j)$ \\ Order 2 cost gradient & $\nabla^2 C$ & $[B,[B ,C]]$ & $4 \sum_{(ij)\in E}(Y_iY_j - Z_iZ_j)$ \\ $\ell$th-order gradient & $\nabla^\ell C$ & $[B,[B,\dots [B ,C]]..]$ & $\begin{cases} \,4^{\ell-1}\,\nabla C & \text{for $\ell$ odd}\\ \,4^{\ell-2}\,\nabla^2 C & \text{for $\ell$ even}\\ \end{cases}$\\ Gradient w.r.t. $A$ & $\nabla_A $ & $[A ,\cdot]$ & (various; see \secsref{sec:costGrads}{sec:QAOA1})\\ Mixed gradient & $\nabla_C \nabla C $ & $[C,[B ,C]]$ & $- \sum_{i=1}^n ( |N_i| X_i + \sum_{j,\ell \in N(i)} X_i Z_j Z_\ell)$ \\ \hline Initial state & $\ket{s}$ & $\ket{+}^{\otimes n}=\frac1{\sqrt{2^n}}\sum_x\ket{x}$& uniform distrib. of possible cuts \\ Initial expectation & $\langle \,\cdot\, \rangle_0$ & $\bra{s} \cdot \ket{s}$ & $\langle C \rangle_0 = \tfrac1{2^n}\sum_x c(x)=|E|/2$\\%\tfrac{|E|}2$ \\ \hline \end{tabular} \caption{ Summary of important notation, with the MaxCut problem, which seeks to partition the vertices of a graph so as to maximize the number of cut edges, as an example. The top of the table shows functions on bits and the middle shows related operators on qubits. The $n$-qubit computational basis states $\ket{x}$ represent bitstrings $x\in\{0,1\}^n$, and the bottom of the table shows the standard QAOA initial state we primarily consider in this paper, though other choices are possible. The operators $X_j,Y_j,Z_j$ are the single qubit Pauli matrices acting on the $j$th qubit. The symbols $\partial_j,d$ denote right-acting operators on functions, and likewise $D,\nabla$ denote superoperators on the space of $n$-qubit matrices; in both cases $k$th powers indicate operator iteration. 
The rightmost column shows their realization for MaxCut on a graph $G=(V,E)$ with $|V|=n$ nodes, where here the graph neighborhood of each vertex variable is denoted $N(j)=\{i:(ij)\in E\}\subset [n]$ with $|N(j)|=\deg(j)$ the degree of vertex~$j$. The formula $dc(x)=2|E|-4c(x)$ follows from \eqref{eq:DCb} and is particular to MaxCut, as is the formula given for $\nabla^\ell C$ which is a special case of \lemref{lem:quboGrads}. Similar but more complicated formulas can be derived for higher-order mixed gradients $\nabla_{A_\ell}^{a_\ell}\dots\nabla_{A_2}^{a_2}\nabla_{A_1}^{a_1}C$ and for applications to problems other than MaxCut. } \label{tab:summary} \end{table} \begin{rem} \thmsref{thm1:smallAngles}{thm:allanglessmall} apply more generally than stated. Viewed as expansions about the Identity operator $I$, i.e., the QAOA circuit with all parameters set to $0$, we immediately obtain corresponding results about any choice of parameters $\gamma'_j,\beta'_j$ such that the resulting QAOA circuit becomes equal or proportional to $I$. In such cases the resulting expansions are given in terms of the parameter differences $\gamma_j-\gamma'_j$ and $\beta_j-\beta'_j$. Indeed, QAOA circuits are often periodic: \eqref{eq:mixingmatrixelements} shows that the QAOA mixing operator $U_M(\beta)$ is proportional to the Identity when $\beta$ is an integer multiple of $\pi$, and hence $\langle C\rangle_p$ is (at least) $\pi$-periodic in each mixing angle $\beta_j$. The periodicity of the phase operator $U_P(\gamma)$ relates to the coefficients of the cost Hamiltonian, which are uniform for many problems (e.g. MaxCut) and hence lead to similar insights (cf. \eqref{eq:expecC1maxcut}). For simplicity we do not deal explicitly with periodicity of QAOA circuits in the remainder of the paper, though similar considerations apply to many of our results to follow. \end{rem} \subsubsection{Example: leading-order QAOA versus a simple quench}\label{sec:quench} Our framework facilitates comparison between different quantum circuit ans\"atze, in particular by allowing a term-by-term comparison of the resulting series expressions. To illustrate this we show that to leading order the QAOA cost expectation value is the same as that of a simple quantum quench algorithm, where for a fixed Hamiltonian $H=aC+bB$ the state $\ket{\tau}:=e^{-i\tau H}\ket{s}$ is prepared and measured in the computational basis. Such a quantum approach to classical optimization is considered, e.g., in~\cite{hastings2019duality,callison2020energetic}. \begin{prop} \label{prop:quench} Let $\tau > 0$, $\gamma,\beta\in{\mathbb R}$, and Hamiltonian $H$ be such that $\tau H=\sqrt{2}\beta B + \sqrt{2}\gamma C$. Then a quench of $H$ for time $\tau$ produces the same leading-order change \eqrefp{eq:expecC1} in $\langle C\rangle$ as QAOA$_1(\gamma,\beta)$, i.e., \begin{equation} \label{eq:quenchLeadingOrder} \langle C \rangle_\tau \,=\, \bra{\tau}C\ket{\tau} \,=\, \langle C \rangle_0 -\, \frac{2 \gamma \beta}{2^n} \sum_x c(x) \,dc(x)\dots \end{equation} where the terms to the right are order four or higher in $\gamma,\beta$. For QAOA$_p$, similarly, selecting instead $\gamma,\beta\in{\mathbb R}$ such that $\gamma\beta=\sum_{1\leq i\leq j}^p \gamma_i \beta_j$ gives $\langle C \rangle_\tau$ as in \eqref{eq:expecCp0a}. \end{prop} Hence, QAOA with sufficiently small parameters resembles a short-time quench. As we show, our framework allows for simple derivation of such formulas. Higher order contributing terms may be systematically derived with our framework. 
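The proposition above is likewise easy to check numerically for small instances. The following sketch (illustrative toy instance and names; the quench unitary is formed by exact diagonalization of $H$) compares the quench expectation value with the shared leading-order expression.
\begin{verbatim}
import numpy as np

n, edges = 3, [(0, 1), (1, 2), (0, 2)]
N = 2 ** n
c = np.array([sum(((x >> i) & 1) ^ ((x >> j) & 1) for i, j in edges)
              for x in range(N)], dtype=float)
dc = np.array([sum(c[x ^ (1 << j)] - c[x] for j in range(n)) for x in range(N)])

B = np.zeros((N, N))
for x in range(N):
    for j in range(n):
        B[x ^ (1 << j), x] = 1.0               # transverse-field mixer in the Z basis

gamma, beta = 0.05, 0.05
H = np.sqrt(2) * beta * B + np.sqrt(2) * gamma * np.diag(c)   # tau = 1 absorbed into H
w, V = np.linalg.eigh(H)
s = np.full(N, 1 / np.sqrt(N), dtype=complex)
psi = V @ (np.exp(-1j * w) * (V.conj().T @ s))                # exp(-i H)|s>

quench = float(np.real(np.vdot(psi, c * psi)))
leading = c.mean() - 2 * gamma * beta * (c * dc).mean()
print(quench, leading)                                        # agree up to 4th-order terms
\end{verbatim}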
\begin{rem} A simple quench conserves $H$ in expectation, $\langle H\rangle_{\tau} = \langle H \rangle_0$, so for $H=aC+bB$ linearity gives $ a(\langle C \rangle_\tau - \langle C \rangle_0 ) = -b\,(\langle B \rangle_\tau - \langle B \rangle_0)= bn -b\langle B \rangle_\tau$, and thus the change in $\langle C\rangle$ is determined from the change in $\langle B\rangle$. As $-n \leq \langle B \rangle \leq n$ for any quantum state $\ket{\tau}$, we have $|\langle C \rangle_\tau - \langle C \rangle_0| \leq 2|b/a|n$ for any possible quench time $\tau$. Therefore, when $b/a$ is constant, the above quench can shift the expectation value of $C$ by at most $ O(n)$, whereas $|C(x^*)-\langle C\rangle_0|$ may in general be much larger (e.g., $O(n^2)$ for MaxCut). This shows the limitations of a simple quench, i.e., the quench time or ratio $|b/a|$ must often grow with the problem size if large improvements in the expected value of the cost function are sought. Alternatively, to circumvent this requirement, the Hamiltonian $B$ can be replaced or augmented with additional $k$-local terms as proposed in~\cite{hastings2019duality}. QAOA does not in general obey a similarly simple conservation law. \end{rem} In the remainder of \secref{sec:calculus} we derive the operators used in our framework and show the connection to the cost difference functions, as well as a number of useful properties. For the convenience of the reader we both summarize our main notation in \tabref{tab:summary} and as a guiding example show its explicit realization for MaxCut, an NP-hard constraint satisfaction problem considered extensively for QAOA~\cite{Farhi2014,wang2018quantum,hadfield2018thesis,crooks2018performance,zhou2018quantum,guerreschi2019qaoa}. \subsection{Hamiltonians representing cost differences} \label{sec:costHam} For a cost function $c:\{0,1\}^n\rightarrow {\mathbb R}$ with corresponding $n$-qubit diagonal\footnote{Throughout the paper \textit{diagonal} operators are with respect to the computational basis.} Hamiltonian~$C$ given in \eqref{eq:costHam}, i.e., acting as $C\ket{x}=c(x)\ket{x}$ for each $x\in\{0,1\}^n$, we may uniquely express~$C$ as a multinomial in the Pauli $Z$ operator basis as \begin{equation} \label{eq:costHamZs} C=a_0 I + \sum_{j=1}^n a_j Z_j + \sum_{i<j} a_{ij}Z_iZ_j+\dots \end{equation} We say $C$ is $k$-local for $k$ the largest degree of a nonzero term in \eqref{eq:costHamZs}; we further discuss problem and Hamiltonian locality considerations in~\secref{sec:probLocality}. The coefficients $a_\alpha$ of $C$ are given by the Fourier expansion of the cost function $c(x)$; see \cite{hadfield2018representation} for details. In particular, the Identity coefficient~$a_0= \tfrac1{2^n}\sum_x c(x)$ is the cost expectation under the uniform probability distribution. For example, for constraint satisfaction problems with cost function given by a sum of clauses $c=\sum_j c_j$, with each clause acting on at most $k$ variables, $k=O(1)$, we may efficiently construct a $k$-local cost Hamiltonian of the form \eqref{eq:costHamZs} with a number of terms polynomial in $n$. For such cost Hamiltonians the corresponding QAOA phase operator $U_P(\gamma)=e^{-i\gamma C}$ is efficiently implementable in terms of single-qubit and CNOT gates~\cite{hadfield2018representation}. Motivating examples are the NP-hard optimization problems MaxCut and Max-$2$-SAT, studied for QAOA in \cite{Farhi2014,Wecker2016training,wang2018quantum,wilson2019optimizing}, which each have quadratic cost Hamiltonians.
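The Fourier coefficients $a_\alpha$ in \eqref{eq:costHamZs} can also be computed by brute force for small $n$, which is useful for checking the operator expressions developed below; for larger instances one would instead work clause by clause, cf.~\cite{hadfield2018representation}. A minimal Python sketch with illustrative names:
\begin{verbatim}
import numpy as np
from itertools import combinations

def pauli_z_coeffs(c, n):
    # brute-force Fourier coefficients a_alpha of C = sum_alpha a_alpha prod_{j in alpha} Z_j,
    # using Z_j|x> = (-1)^{x_j}|x>, so a_alpha = 2^{-n} sum_x c(x) prod_{j in alpha} (-1)^{x_j}
    N = 2 ** n
    coeffs = {}
    for k in range(n + 1):
        for alpha in combinations(range(n), k):
            sign = np.array([(-1) ** sum((x >> j) & 1 for j in alpha) for x in range(N)])
            coeffs[alpha] = float(np.dot(c, sign)) / N
    return coeffs

# illustrative example: MaxCut on the triangle graph
n, edges = 3, [(0, 1), (1, 2), (0, 2)]
c = np.array([sum(((x >> i) & 1) ^ ((x >> j) & 1) for i, j in edges)
              for x in range(2 ** n)], dtype=float)
a = pauli_z_coeffs(c, n)
print({alpha: v for alpha, v in a.items() if abs(v) > 1e-12})
# expected: a_() = 1.5 = |E|/2 and a_(i,j) = -0.5 on each edge
\end{verbatim}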
Using the results of \cite{hadfield2018representation} we may similarly lift the classical partial difference and cost divergence functions of \eqsref{eq:costdiffs}{eq:costdiv} to diagonal Hamiltonians~$\partial_j C$ and $DC=\sum_j \partial_j C$, respectively, which act on computational basis states $\ket{x}$ as $\partial_j C\ket{x} = \partial_j c (x) \ket{x}$ and $DC\ket{x} = dc(x)\ket{x}$. In particular, given a cost Hamiltonian $C$ explicitly as in \eqref{eq:costHamZs}, the operators $\partial_j C$ and $DC$ are easily computed from the useful relations \begin{equation} \label{eq:djC} \partial_j C = X_jCX_j - C , \end{equation} $j=1,\dots, n$, from which it trivially follows that \begin{equation} \label{eq:djCb} \bra{s}\partial_j C \ket{s} =0 \,\;\;\;\;\; \text{and} \;\;\;\;\;\, \bra{s} DC \ket{s} =0. \end{equation} Results such as \eqsref{eq:djC}{eq:djCb} apply generally to any cost Hamiltonian. Next observe that from the Pauli anti-commutation relation $\{X,Z\}:=XZ+ZX=0$, the action of conjugation by $X_j$, $C\mapsto X_jCX_j$, either flips the sign of or leaves alone each term of $C$ in \eqref{eq:costHamZs}. Hence if for each $j$ we define a partition of the terms of $C$ as $C=C^{\{j\}}+C^{\{\setminus j\}}$, where $C^{\{j\}}$ is defined as the partial sum of the terms of \eqref{eq:costHamZs} that contain a $Z_j$ factor, and $C^{\{\setminus j\}}$ the remainder, then \begin{equation} \label{eq:partialjC} \partial_j C = -2 C^{\{j\}}. \end{equation} For example, for MaxCut, $C^{\{j\}}$ contains a $ZZ$ term for each neighbor (adjacent vertex) of the $j$th vertex. Summing over $j$, the cost divergence Hamiltonian satisfies \begin{equation} \label{eq:DCa} DC = -2 \sum_{j=1}^n C^{\{j\}}. \end{equation} Each strictly $k$-local term in $C$ appears $k$ times in the sum above. We may alternatively partition a $k$-local Hamiltonian as $C=C_{(0)}+C_{(1)}+\dots+C_{(k)}$, where each $C_{(j)}$ contains strictly $j$-local terms. Then we also have \begin{equation} \label{eq:DCb} DC=-2(C_{(1)}+2C_{(2)}+\dots+kC_{(k)}). \end{equation} Thus we see that for general cost Hamiltonians $C$, we may represent $DC$ either in terms of the problem locality (i.e., the problem graph structure reflected in the $C^{\{j\}}$) as in \eqref{eq:DCa}, or operator locality of~$C$ as in \eqref{eq:DCb}. Both of these representations of $DC$ are useful in our analysis to follow. Furthermore, it is often useful to view $D:C\rightarrow DC$ as a superoperator on the space of diagonal Hamiltonians. Iterating this operation $D^\ell C := D(D(\dots (DC))..)$ gives \begin{equation} \label{eq:DlC} D^\ell C = (-2)^\ell \sum_{j=0}^k j^\ell C_{(j)} \end{equation} for $\ell = 0,1,2,...$, so we see $D^\ell C$ closely relates to $C$ (and moreover $d^\ell c$ relates to $c$ similarly). Hence if $C$ is strictly $k$-local, then $D^\ell C=(-2k)^\ell C$ is proportional to $C$, and so they represent the same classical function up to a constant~\cite{hadfield2018representation}. For example, the cost Hamiltonian for MaxCut is $2$-local, up to its Identity term; see \tabref{tab:summary}. \begin{rem} \label{rem:probProj} An important family of diagonal Hamiltonians is given by the projectors \begin{equation} \label{eq:Hx} H_y := \ket{y}\bra{y} \;\;\;\; \text{ for } \;\; y\in\{0,1\}^n, \end{equation} which as observables give the probability of measuring the string $y$ for a generic state $\ket{\psi}$ in expectation as $P_\psi(y)=|\bra{y}\ket{\psi}|^2=\bra{\psi}H_y\ket{\psi}$.
Importantly, $H_y$ represents the classical function $\delta_y(x)$ in the sense of \eqref{eq:costHam}, which is $1$ when its input $x$ equals $y$ and $0$ otherwise. \end{rem} \subsection{Cost gradient operators} \label{sec:costGrads} Non-diagonal operators generated by (iterated) commutators of~$B$ and~$C$ play a fundamental role in our approach to analyzing QAOA. Specifically, given a cost Hamiltonian~$C$, we identify the commutator of $B$ and~$C$ as the \textit{cost gradient operator} $\nabla C := [B,C]$, with $\nabla:=[B,\cdot]$.\footnote{Commutator operators such as $\nabla=[B,\cdot]$ are often called \textit{adjoint operators} and written $\rm{ad}_B:=\nabla$, sometimes with an additional $i=\sqrt{-1}$ factor $\rm{ad}_B:=i\nabla$ in the context of Lie algebras \cite{woit2017quantum}. These operators are \textit{inner derivations} on matrices, i.e., they satisfy the \textit{Leibniz property} $\nabla(C_1C_2) = (\nabla C_1)C_2 + C_1(\nabla C_2)$; see e.g. \cite[Sec. 5]{hatano2005finding}.} We refer to $\nabla C$ as a \lq\lq gradient\rq\rq\ from its action on computational basis states $\ket{x}$ and on the equal superposition state $\ket{s}=\frac{1}{\sqrt{2^n}}\sum_x \ket{x}$, which is easily expressed in terms of the cost difference functions and is reminiscent of vector calculus over the reals.\footnote{Our notion of gradient is distinct from others in the literature, e.g., the (vector calculus) gradient of the classical function $f(\vec{\theta}\,):=\bra{\psi} U^\dagger(\vec{\theta}) A U(\vec{\theta}) \ket{\psi}$ for a parameterized quantum circuit $U(\vec{\theta}) \ket{\psi}$ \cite{crooks2019gradients}.} Observe that $\nabla C$ is skew-adjoint and traceless; in general we can make $\nabla C$ into a Hamiltonian if we multiply by $\pm i$. First consider the commutators $\nabla_jC:=[X_j,C]=X_jC-CX_j$, $j=1,\dots,n$, so that $\nabla C =\sum_{j=1}^n \nabla_j C$. For each computational basis state $\ket{x}$ we have $$ \nabla_jC\ket{x}\, = (c(x)-c(x^{(j)}))\ket{x^{(j)}} = - \partial_j c(x)\ket{x^{(j)}}.$$ Comparing the action on basis states, we may relate the operators $\partial_jC$ and $\nabla_j C$ as $\nabla_j C = \partial_j C \,X_j = - X_j\, \partial_j C.$ Thus we have \begin{equation} \label{eq:actionGradC} \nabla C \ket{x} = \,-\sum_{j=1}^n \partial_j c(x)\ket{x^{(j)}}, \end{equation} and it follows \begin{eqnarray} \label{eq:DCsuperpos0} \nabla C \ket{s} \, =\, \frac{1}{\sqrt{2^n}} \sum_x dc(x)\ket{x}. \end{eqnarray} Hence $\bra{x}\nabla C\ket{x}=0$ for each $x\in\{0,1\}^n$, and $\bra{s}\nabla C\ket{s}=0$ from $\sum_x dc(x)=0$. The superoperator $\nabla:=[B,\cdot]$ mapping $C$ to $\nabla C$ cannot increase the locality of each Pauli term in $C$ of \eqref{eq:costHamZs}, as $B$ is a sum of $1$-local terms, and hence the resulting number of terms in the Pauli expansion of $\nabla C$ can increase at most by a multiplicative factor of $k$ for $k$-local $C$. For problems with cost Hamiltonians of bounded locality, e.g. QUBO problems, we can easily calculate $\partial_j C$ and $\nabla C$ explicitly, as demonstrated in \tabref{tab:summary}. \subsubsection{Higher-order gradients} To connect our framework to QAOA circuits, consider the QAOA mixing operator $U_M(\beta)=e^{-i\beta B}$. The superoperator $i\nabla$ is the infinitesimal generator of conjugation by $U_M$, i.e., \begin{equation} \label{eq:conj1} e^{i\beta B} C e^{-i\beta B} = C + \sum_{k=1}^\infty \frac{(i\beta)^k}{k!} \nabla^k C =:e^{i\beta \nabla}C.
\end{equation} We will elaborate on and use this relationship extensively in \secref{sec:QAOA1}. From this perspective our formalism naturally generalizes to include \textit{higher-order gradients} \begin{equation} \label{eq:higherOrderDerivs} \nabla^\ell C := [B, [B, [B, \dots, [B,[B,C]]]..]]. \end{equation} $\nabla^\ell C$ is skew-adjoint for $\ell$ odd and self-adjoint (i.e., a Hamiltonian) for $\ell$ even. Our notation evokes but differs from that of vector calculus. The symbol $\nabla^2$ denotes \lq\lq $\nabla \cdot \nabla$\rq\rq\ in the latter, i.e., denotes the Laplacian operator, whereas in our notation $\nabla^2 := \nabla \nabla$ and $\nabla^2 C = [B,[B,C]]$. Indeed, we may instead identify $\nabla^2 C$ with the \textit{Hessian} of $c(x)$ from its action on basis states \begin{equation} \label{eq:grad2Cx} \nabla^2 C \ket{x} = - 2 \sum_{j=1}^n \partial_jc(x )\ket{x} + 2\sum_{j<k} \partial_j \partial_k c (x) \ket{x^{(j,k)}}.\end{equation} Hence for each computational basis state $x$ we have $ \bra{x}\nabla^2 C \ket{x} = -2 dc(x).$ On the uniform superposition state, the symmetry $\partial_j \partial_k c(x^{(j,k)}) = \partial_j \partial_k c(x)$ gives \begin{equation} \label{eq:grad2Cs} \nabla^2 C \ket{s}= \frac{1}{\sqrt{2^n}} \sum_x d^2c(x) \ket{x}, \end{equation} where $d^2c(x)$ is defined in \eqref{eq:d2c}. This implies $\bra{s}\nabla^2 C\ket{s}=\tfrac1{2^n}\sum_x d^2c(x)$. More generally, for an operator $A$ which acts on $\ket{s}$ as $A\ket{s}=\frac{1}{\sqrt{2^n}}\sum_x a(x)\ket{x}$ for a real function $a(x)$, it follows \begin{equation} \label{eq:gradAs} \nabla A\,\ket{s} =\frac1{\sqrt{2^n}}\sum_x da(x)\ket{x}. \end{equation} This equation is useful for computing the action and expectation values of higher-order cost gradients, as we shall see in \secref{sec:expecVals}. In particular this relates the action of $\nabla^\ell C$ on $\ket{s}$ to higher order cost divergence functions $d^\ell c(x)$ as \begin{equation} \label{eq:gradCs} \nabla^\ell C \ket{s}= \frac{1}{\sqrt{2^n}} \sum_x d^\ell c(x) \ket{x}. \end{equation} In \lemref{lem:expecGrad} below we show $\langle\nabla^\ell C\rangle_0:=\bra{s}\nabla^\ell C\ket{s}=0$ for $\ell \in {\mathbb Z}_+$ and any cost Hamiltonian, which implies $\sum_x d^\ell c(x) =0$ for any cost function. Note $\langle\nabla^\ell C\rangle_\psi \neq 0$ in general. \begin{rem} From the discussion for $\nabla C$ above, if $C$ is $k$-local then so is $\nabla^\ell C$, and the number of Pauli terms in $\nabla^\ell C$ is at most $k^\ell \binom{n}{k} =O(k^\ell n^k)$. Hence, when $k=O(1)$, if $\ell=O(\log n)$ the number of terms remains $poly(n)$ and so we can represent and compute $\nabla^\ell C$ efficiently as a linear combination of Pauli terms. For arbitrary cost Hamiltonians with $poly(n)$ Pauli terms, the same argument applies for $\ell=O(1)$. \end{rem} \subsubsection{Directional and mixed gradients} To extend \eqref{eq:conj1} to include the QAOA phase operator, we require still more general notions of gradient operators. \textit{Directional cost gradients} are defined as commutators of the cost Hamiltonian taken with respect to a general $n$-qubit operator~$A$ as \begin{equation} \nabla_A C := [A,C], \end{equation} corresponding to the superoperator $\nabla_A := [A,\cdot]$. Our above definition satisfies $\nabla := \nabla_B$. Trivially, two compatible linear operators $A,G$ commute if and only if $\nabla_A G = \nabla_G A = 0$.
Further observe $\nabla_A$ is linear in~$A$ as $\nabla_{cA}=c\nabla_A$ for $c\in{\mathbb C}$ and $\nabla_{A+G}=\nabla_A+\nabla_{G}$. Gradients with respect to the cost Hamiltonian $\nabla_C = [C,\cdot]$ naturally arise in analysis of QAOA. Generally, higher-order (mixed) gradients of $C$ will likewise reflect the structure of the cost function over neighborhoods of increasing Hamming distance. Clearly, $\nabla_C C =0$ and $\nabla_C B = - \nabla C$. The operator $\nabla_C \nabla C := [C,[B,C]]$ is easily shown to act on basis states as \begin{equation} \label{eq:DCDCx} \nabla_C \nabla C \ket{x} = -\sum_{j=1} ^n \left(\partial_jc(x)\right)^2\ket{x^{(j)}}, \end{equation} and on the initial state $\ket{s}$ as \begin{eqnarray} \label{eq:DCDCs} \nabla_C \nabla C \ket{s} &=&\tfrac{1}{\sqrt{2^n}}\sum_x\left(-\sum_{j=1}^n \left(\partial_jc(x)\right)^2\right)\ket{x} =\tfrac{1}{\sqrt{2^n}}\sum_x \left(c(x) dc(x) - \sum_{j=1} ^n c(x^{(j)})\partial_j c(x) \right)\ket{x}, \nonumber \end{eqnarray} which implies $\langle \nabla_C \nabla C \rangle_0 =\frac2{2^n}\sum_x c(x) dc(x) \leq 0$ for all cost Hamiltonians $C$; see \lemref{lem:expecCDC} below. Similarly, for $\ell=1,2,3,\dots$ it follows from induction that \begin{equation} \label{eq:DlCDCx} \nabla_C^\ell \nabla C \ket{x} = - \sum_{j=1}^n (\partial_j c(x))^{\ell+1} \ket{x^{(j)}}, \end{equation} which using $\partial_j c (x^{(j)})=-\partial_j c(x)$ gives \begin{equation} \label{eq:DlCDCs} \nabla_C^\ell \nabla C \ket{s} = \frac{-1}{\sqrt{2^n}} \sum_x \sum_{j=1}^n (-\partial_j c(x))^{\ell+1} \ket{x}. \end{equation} Finally, from the Jacobi identity~\cite{woit2017quantum} for Lie algebras it follows that any triple of $n$-qubit operators $F,G,A$ satisfy \begin{equation} \label{eq:Jacobi} \nabla_F \nabla_G A + \nabla_A \nabla_F G + \nabla_G \nabla_A F= 0. \end{equation} Applying this property to $B$, $C$, and $\nabla C$ gives the useful identities \begin{equation} \label{eq:costJacobi} \nabla \nabla_C \nabla C = \nabla_C \nabla^2 C, \end{equation} which in particular implies $\nabla_C \nabla \nabla_C \nabla C = \nabla_C^2 \nabla^2 C$. Similarly, in general we have \begin{equation} \label{eq:crossOperator} \nabla_{\nabla C} A =-\nabla_A\nabla C =(\nabla \nabla_C - \nabla_C \nabla)A \end{equation} for any matrix $A$ acting on $n$-qubits.\footnote{ \eqref{eq:crossOperator} implies that \lq\lq gradients with respect to gradients\rq\rq\ can be written in terms of gradients with respect to $B$ and $C$, for example we have $\nabla_{\nabla C}\nabla^2 C = \nabla \nabla_C \nabla^2 C-\nabla_C \nabla^3 C$, $\nabla_{\nabla_C\nabla C}C=-\nabla^2_C \nabla C$ and $\nabla^2_{\nabla C}C=-\nabla_{\nabla_C}\nabla_C \nabla C =\nabla^2_C \nabla^2 C-\nabla \nabla^2_C \nabla C$.} Further such identities may be derived, e.g., by applying higher-order Jacobi identities \cite{alekseev2016higher}, or when considering particular problems. \subsubsection{General gradient operators} \label{sec:genGrads} General higher-order mixed cost gradient operators $\nabla_{A_\ell}^{b_\ell}\nabla_{A_{\ell-1}}^{b_{\ell-1}}\dots \nabla_{A_1}^{b_1} C$ follow in the natural way. We refer to $\sum_j b_j$ as the \textit{order} of a gradient operator, where for convenience we refer to all such operators as \textit{cost gradient operators}. We remark that nested commutators similarly appear in a number of quantum computing applications such as, e.g., analysis of Hamiltonian simulation with Trotter-Suzuki formulas~\cite{childs2019theory}. 
We give the following general characterization of general gradients with respect to $B$ and $C$ that will appear in our application to QAOA. \begin{lem} \label{lem:genGrad} For $C,C'$ diagonal Hamiltonians (representing classical cost functions as in \eqref{eq:costHam}), we define the general cost gradient operator \begin{equation} \label{eq:costGradOpGen} A=\nabla^{b_\ell}\nabla_C^{b_{\ell-1}}\dots\nabla^{b_3}\nabla_C^{b_2}\nabla^{b_1} C', \end{equation} for $(b_1,b_2,\dots,b_\ell)$ a tuple of positive integers. Then $A$ has the following properties: \begin{enumerate} \item $A$ is a real matrix with respect to the computational basis: $a_{xy}:= \bra{x}A\ket{y}\,\in {\mathbb R}$. Hence, there exists a real function $a(x)$ such that $A\ket{s} = \frac{1}{\sqrt{2^n}}\sum_x a(x) \ket{x}$. \item $A$ is traceless, which implies $\sum_x a_{xx}=0$, and $A$ is either self- or skew-adjoint: \begin{enumerate} \item If $\sum_j b_j$ is even then $A^\dagger = A$, which implies $a_{xy}=a_{yx}$ for each $x,y\in\{0,1\}^n$. \item Else if $\sum_j b_j$ is odd then $A^\dagger = - A$, which implies $a_{xx}=0$ (i.e., $A$ has no diagonal part), and $a_{yx}=-a_{xy}$ for each $x,y\in\{0,1\}^n$. \end{enumerate} \item Each tuple $(b_\ell,\dots,b_2,b_1)\in \{1,2,3,\dots\}^\ell$ identifies a cost gradient operator~\eqrefp{eq:costGradOpGen}. However this labeling is not canonical as it is many-to-one. \end{enumerate} \end{lem} \noindent Here we have considered distinct $C,C'$ for generality, as we often but not always have $C'=C$ in application of the lemma. \begin{proof} The first point follows from expanding each of the commutators in $A$ in turn to give a linear combination of ordered products of $B$, $C$, and $C'$. As each matrix multiplication consists of strictly real quantities, the resulting computational basis matrix elements $a_{xy}$ of $A$ are real and $A$ acts on $\ket{s}$ as $A\ket{s}=\tfrac1{\sqrt{2^n}}\sum_x (\sum_y a_{xy}) \ket{x}$. The second point follows from the definition of the commutator, and the cyclic property and linearity of the trace operation. Points 2.a and 2.b follow from observing that the commutator of a self-adjoint and a self- (skew-)adjoint operator is skew- (self-)adjoint, and applying the first point. For the third point, \eqref{eq:costJacobi} shows the mapping from tuples of positive integers to operators is not injective in general. \end{proof} Each $A=\nabla^{b_\ell}\nabla_C^{b_{\ell-1}}\dots\nabla_C^{b_2}\nabla^{b_1} C'$ may be uniquely decomposed into its diagonal and off-diagonal parts $A=A_{diag}+A_{non-diag}$ with respect to the computational basis, such that $\langle A\rangle_0=\langle A_{non-diag}\rangle_0$ and $\nabla_C A = \nabla_C A_{non-diag}$. In the skew-adjoint case $A_{diag}=0$. In the self-adjoint case, we have corresponding real functions $a(x)=a_{diag}(x)+a_{non-diag}(x)$, with $\sum_x a_{diag}(x)=0$. We will use these properties in the proofs of \appref{app:expecVals}.
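The operator identities above are straightforward to verify numerically by forming the relevant commutators as dense matrices on a small instance. The following Python sketch (illustrative toy instance and names) checks \eqref{eq:DCsuperpos0} together with the vanishing and nonvanishing initial state expectation values discussed next.
\begin{verbatim}
import numpy as np

n, edges = 3, [(0, 1), (1, 2), (0, 2)]
N = 2 ** n
c = np.array([sum(((x >> i) & 1) ^ ((x >> j) & 1) for i, j in edges)
              for x in range(N)], dtype=float)
dc = np.array([sum(c[x ^ (1 << j)] - c[x] for j in range(n)) for x in range(N)])

C = np.diag(c)
B = np.zeros((N, N))
for x in range(N):
    for j in range(n):
        B[x ^ (1 << j), x] = 1.0

commutator = lambda A, G: A @ G - G @ A
gradC = commutator(B, C)                     # cost gradient operator  nabla C = [B, C]
s = np.full(N, 1 / np.sqrt(N))

print(np.allclose(gradC @ s, dc / np.sqrt(N)))                        # nabla C |s>
print(np.isclose(s @ gradC @ s, 0.0))                                 # <nabla C>_0 = 0
print(np.isclose(s @ commutator(C, gradC) @ s, 2 * (c * dc).mean()))  # <nabla_C nabla C>_0
\end{verbatim}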
\subsection{Initial state expectation values} \label{sec:expecVals} We next show several results concerning the computation of initial state expectation values of cost gradient operators, which show their dependence on the cost difference functions; roughly speaking, \emph{powers of $\nabla_C$ correspond to powers of the cost function $c(x)$ and its coefficients, whereas powers of $\nabla$ correspond to cost differences over greater Hamming distances.} Looking forward to our application to QAOA, we will derive relevant expectation values as series expressions in the algorithm parameters and initial state expectation values of different cost gradients; these lemmas allow us identify a significant fraction of the resulting terms to be identically zero which greatly simplifies our results and proofs. Some additional results are given in \appref{app:expecVals}. Observe that the expectation value of any $n$-qubit matrix $A$ with respect to $\ket{s}=\frac{1}{\sqrt{2^n}}\sum_x\ket{x}$ relates to the trace of $A$ as \begin{equation} \label{eq:generalInitStateExpec} \langle A\rangle_0= \frac1{2^n} tr(A) + \frac1{2^n} \sum_{y\neq x} \bra{y}A\ket{x}, \end{equation} where we recall our notation $\langle A\rangle_0:=\bra{s}A\ket{s}$. When $A$ is diagonal $A=a_0I + \sum_j a_j Z_j + \sum_{i<j}a_{ij}Z_iZ_j+\dots$ this becomes $\langle A\rangle_0 = \frac1{2^n} tr(A)=a_0$. Hence, \lemref{lem:genGrad} implies a necessary condition for the initial state expectation value of a gradient operator \eqrefp{eq:costGradOpGen} to be nonzero is that its order (sum of the exponents $\sum_j b_j$) is even. \begin{lem} \label{lem:genGrads} Let $C'$ be diagonal. If $\sum_j b_j$ is odd then $\langle \nabla^{b_\ell}\nabla_C^{b_{\ell-1}}\dots\nabla_C^{b_2}\nabla^{b_1} C'\rangle_0=0$. \end{lem} \begin{proof} Follows from case 2.b of \lemref{lem:genGrad} using that the parity of $\sum_j b_j$ determines the adjointness of $\nabla^{b_\ell}\nabla_C^{b_{\ell-1}}\dots\nabla_C^{b_2}\nabla^{b_1} C'$, and applying \eqref{eq:generalInitStateExpec}. \end{proof} We next show two results concerning more specific cases. \begin{lem} \label{lem:expecGrad} For any $n$-qubit linear operator $A$ and $\ell\in{\mathbb Z}_+$ we have $\langle \nabla^\ell A\rangle_0 =0$. \end{lem} \begin{proof} For $\ell=1$ as $B\ket{s}=n\ket{s}$ we have $\bra{s}\nabla A \ket{s}=\bra{s}(BA-AB)\ket{s}= (n-n)\bra{s}A\ket{s}=0$. The general result follows inductively considering $\nabla^{\ell}A=B(\nabla^{\ell-1}A)-(\nabla^{\ell-1}A)B$. \end{proof} In particular, the lemma implies $\langle \nabla^\ell C\rangle_0 = 0$ for any cost Hamiltonian $C$ and $\ell\in{\mathbb Z}_+$. \sh{ \begin{rem} \label{rem:GenLems} The results of \lemssref{lem:genGrad}{lem:genGrads}{lem:expecGrad} extend to more general QAOA initial states and mixing Hamiltonians in a straightforward way. In particular, the result and proof of \lemref{lem:expecGrad} only requires that the initial state is some eigenvector of the mixing Hamiltonian. Similar considerations apply to a number of our results to follow. \end{rem} } \begin{lem} \label{lem:expecCDC} For a cost Hamiltonian $C=a_0I + \sum_j a_jZ_j + \sum_{i<j}a_{ij}Z_iZ_j +\dots$ we have \begin{eqnarray} \label{eq:expecDCis0} \langle\nabla_C \nabla C \rangle_0 &=& \frac{2}{2^n} \sum_x c(x)dc(x) = \frac{-1}{2^n}\sum_x \sum_j (\partial_j c(x))^2\\ &=& -4(\sum_j a_j^2 + \sum_{i<j} 2 a_{ij}^2+\sum_{i<j<\ell} 3 a_{ij\ell}^2 +\dots)\nonumber \end{eqnarray} which implies $\langle\nabla_C \nabla C \rangle_0 \leq 0$. 
Moreover, $\langle \nabla_C \nabla^2 C \rangle_0 = \langle \nabla^2_C \nabla C \rangle_0 = \langle \nabla \nabla_C\nabla C \rangle_0 =0$. \end{lem} \begin{proof} For $\langle\nabla_C \nabla C \rangle_0$, the first two equalities follow from \eqsref{eq:DCDCx}{eq:DCDCs} which imply $\nabla_C \nabla C = -\sum_j (\partial_j C)^2 X_j$, and hence $\langle\nabla_C \nabla C \rangle_0 \leq 0$ always. The third equality follows expanding as a Pauli sum $\nabla_C \nabla C = -4\sum_\sigma a_\sigma^2 \sum_{i\in \sigma}X_i + \dots$ where $\sigma$ denotes tuples of strictly increasing qubit indices as in the definition of $C$, and any Pauli terms not shown to the the right each contain at least one $Z$ factor and so from~\eqref{eq:generalInitStateExpec} have $0$ expectation with respect to $\ket{s}$. The final statement follows directly from \lemref{lem:genGrads}. \end{proof} Results for higher-order gradients may be similarly derived. In \appref{app:expecVals} we show the next (order $4$) nonzero cost gradient expectations to be \begin{equation} \label{eq:expecDcD3C} \langle \nabla_C \nabla^3 C \rangle_0 = \frac2{2^n}\sum_x c(x)d^3c(x), \end{equation} \begin{equation} \label{eq:expecDc3DC} \langle \nabla^3_C \nabla C \rangle_0 = \frac1{2^n}\sum_x \sum_{j=1}^n (\partial_j c(x))^4, \end{equation} \begin{equation} \label{eq:expecDc2D2C} \langle \nabla^2_C \nabla^2 C \rangle_0 =\langle \nabla_C \nabla \nabla_C \nabla C \rangle_0=\frac2{2^n}\sum_x \sum_{j<k} (\partial_{j,k}c(x))^2 \partial_j\partial_k c(x). \end{equation} The first equality of \eqref{eq:expecDc2D2C} follows from \eqref{eq:Jacobi}. Generalizing \eqsref{eq:expecDcD3C}{eq:expecDc3DC} to $\ell=1,3,5\dots$ gives $ \langle \nabla_C \nabla^\ell C \rangle_0 = \frac2{2^n}\sum_x c(x)d^\ell c(x)$ and $\langle \nabla^\ell_C \nabla C \rangle_0 = \frac1{2^n}\sum_x \sum_{j} (\partial_j c(x))^{\ell+1}$, respectively, or $0$ for $\ell$ even. Such quantities may be efficiently computed from the Pauli basis coefficients of the cost Hamiltonian~$C$; cf. \lemref{lem:expecCDC}. We show such expressions for QUBO problems in \lemref{lem:higherOrderExpectations} of \appref{app:expecVals}. In general these expressions reveal dependence on the cost function structure. For example, an important term for analyzing QAOA for MaxCut is $\langle \nabla^2_C \nabla^2 C \rangle_0$, which depends on the number of triangles in the problem graph and is zero in triangle-free cases; cf. \secref{sec:MaxCutQUBO}. \subsection{Problem and Hamiltonian locality considerations} \label{sec:probLocality} For the sake of generality and notational clarity, so far we have considered arbitrary cost functions $c(x)$. For specific problems the cost function typically has additional structure we may directly incorporate into our framework when computing various quantities of interest such as expectation values. Here we elaborate on the generalization of our framework to incorporate locality, which we further consider in the context of QAOA$_1$ in \secref{sec:lightcones} and QAOA$_p$ in \secref{sec:lightconesp}; see in particular \tabref{tab:locality}. Recalling the discussion of \secref{sec:costHam}, consider an objective function $c(x)=\sum_{j=1}^m c_j(x)$, with corresponding cost Hamiltonian $C=\sum_{j=1}^m C_j$.\footnote{Clearly such decompositions are not unique. The two most useful for our purposes are when each $C_j$ corresponds to a problem clause $c_j$, or when each $C_j$ corresponds to a single Pauli $Z$ term in $C$. 
For the latter case it's sometimes convenient to include factors or the identity in $C_j$ which does not affect gradients (commutators) of $C_j$ nor QAOA dynamics.} Assume each $c_j$ acts on a subset $N_j \subset [n]$ of the variables $x_1,\dots,x_n$ (i.e., flipping variable values in $[n]\setminus N_j$ does not change $c_j(x)$). Then each $C_j$ acts nontrivially only on the qubits indexed by $N_j$ and so is $|N_j|$-local. For example, $|N_j|=2$ for MaxCut, and $|N_j| = k$ for Max-$k$-SAT where each $c_j$ is a $k$-variable Boolean clause. We identify $N_j$ similarly for non-diagonal Pauli terms and Hamiltonians. To explicitly take advantage of Hamiltonian locality we will sometimes use $\cdot|_S$ to denote an operator restricted (in the obvious way) to a subset $S\subset [n]$ of the qubits or variables.\footnote{More precisely, for an operator expressed as a sum of Pauli terms $A=\sum_\alpha c_\alpha \sigma_\alpha$ and $S\subset [n]$, we define $A|_S$ to be the operator obtained by discarding any terms that act on qubits outside of $S$.} For example, $d|_{\{1,2\}}c:=\partial_1c+\partial_2c$ and $\nabla|_{\{1,2\}}C:=[X_1,C]+[X_2,C]$. Hence, for each $c_j$ and corresponding $C_j$, the cost divergence of $c_j$ reduces to a sum over $n_j$ bit flips $$ dc_j(x)=d|_{N_j}c_j(x)=\sum_{i\in N_j}\partial_ic_j(x), $$ and similarly $$DC_j =D|_{N_j}C_j = \sum_{i\in N_j} \partial_i C_j$$ When each $C_j$ consists of a single Pauli $Z$ term \eqsref{eq:DCa}{eq:DCb} imply $DC_j =-2|N_j| C_j$. Hence, considering the product structure of $\ket{s}=\ket{+}^{\otimes n}$, quantities of interest may often be evaluated by considering only a relevant subset of qubits or variables. In particular, the operator $\nabla C_j = \nabla|_{N_j}C_j$ acts on $\ket{s}$ as (cf. \eqref{eq:DCsuperpos0}) \begin{eqnarray} \label{eq:nablaCjs} \nabla C_j \ket{s} = \frac1{\sqrt{2^n}} \sum_x dc_j(x)\ket{x} = \left(\frac1{\sqrt{2^{|N_j|}}} \sum_{x\in\{0,1\}^{N_j}} dc_j(x)\ket{x}\right)\otimes \ket{+}^{\otimes n-|N_j|} \end{eqnarray} where $C_j\ket{x}=c_j(x)$, and $\{0,1\}^{N_j}$ denotes bitstrings over the variables in $N_j$. The term in parenthesis is guaranteed to be efficiently computable classically when $|N_j|$ is a constant or scales as $O(\log n)$. Hence it is relatively straightforward to incorporate locality where applicable into our framework. In particular, for any operator $A$ and $S\subset [n]$ we have $\langle\, A|_S\rangle_0\,= \bra{+}^{\otimes S}\,A|_S\,\ket{+}^{\otimes S}\bra{+}^{\otimes [n]\setminus S}\ket{+}^{\otimes [n]\setminus S}= \frac{1}{2^{|S|}}\bra{+}^{\otimes S}\,A|_S\,\ket{+}^{\otimes S}$. For example, $\langle \nabla^\ell C_j\rangle_0=0$ for $\ell \in \naturals$ from \lemref{lem:expecGrad}, and adapting the proof of \lemref{lem:expecCDC} gives \begin{equation} \langle \nabla_C\nabla C_j\rangle_0 = \frac2{2^n}\sum_x c(x)dc_j(x) = \frac2{2^{n_j}}\sum_{x\in \{0,1\}^{N_j}}c(x)dc_j(x), \end{equation} which becomes $\langle \nabla_C\nabla C_j\rangle_0=-4a_j^2$ when $C_j$ is a single Pauli term $C_j=a_jZ_{j_1}\dots Z_{j_{n_j}}$. By linearity this gives $\langle \nabla_C\nabla C\rangle_0=\sum_j \langle \nabla_C\nabla C_j\rangle_0$. Similar considerations will apply to higher order cost gradients as we further discuss in \secsref{sec:lightcones}{sec:lightconesp}. 
\section{Applications to QAOA$_1$} \label{sec:QAOA1} As employed, for instance, in the QAOA performance analysis of \cite{wang2018quantum,hadfield2018thesis} for specific problems, it is often advantageous to consider QAOA circuits from the Heisenberg perspective i.e., as acting by conjugation on an observable~$C$ rather than acting on a state directly. The Heisenberg picture naturally has a number of applications in quantum computing, e.g. \cite{gottesman1998heisenberg}. Indeed, for a QAOA$_p$ circuit $Q=U_M(\beta_p)\dots U_P(\gamma_1)$ we have $$ \langle C \rangle_p =\bra{s}\,(Q^\dagger C Q) \,\ket{s} ,$$ so computing the QAOA$_p$ expectation value of the cost function, for any $p$, is equivalent to computing the initial state expectation value of the time evolved (conjugated) observable $U^\dagger C U $. Here, we consider application of our framework from this perspective to QAOA$_1$. In~\secref{sec:QAOAp} we generalize our result to QAOA$_p$. First consider a general $n$-qubit Hamiltonian $H$ and corresponding unitary $U=e^{-iHt}$ for $t\in{\mathbb R}$. From the perspective of Lie groups and algebras~\cite{woit2017quantum,hall2013quantum}, the superoperator $it\nabla_H=it[H,\cdot]$ is the infinitesimal generator of conjugation by $U$, i.e., $e^{it\nabla_H}=U^\dagger \cdot U$. Explicitly, acting on an operator $A$ we have \begin{equation} \label{eq:infinitesimalConj} U^\dagger A U = A + it\nabla_H A + \frac{(it)^2}{2} \nabla^2_H A + \frac{(it)^3}{3!} \nabla^3_H A \, + \dots \end{equation} Thus we can express conjugation by an operator $U=e^{-iHt}$ in terms of gradients $\nabla_H$. Such superoperator expansions are also familiar from quantum information theory, e.g. the Liouvillian representation of unitary quantum channels~\cite{preskill1997lecture}. Iterating this formula, the QAOA$_1$ operator $Q=Q(\gamma,\beta)=U_M(\beta)U_P(\gamma)$ acts by conjugation as to map the cost Hamiltonian $C\rightarrow Q^\dagger C Q$ to a linear combination of cost gradient operators. We show several exact series expansions for QAOA$_1$ with arbitrary cost function (Hamiltonian). \begin{lem} \label{lem:QAOA1conj} Let $A$ be an $n$-qubit matrix. The QAOA operator $ Q= U_M(\beta) U_P(\gamma)$ acts by conjugation on $A$ as \begin{equation} \label{eq:costHamConj} Q^\dagger A Q = \sum_{\ell=0}^\infty \sum_{k=0}^\infty \frac{(i\beta)^k(i\gamma)^\ell}{\ell!\,k!} \, \nabla_C^\ell \nabla^k A. \end{equation} \end{lem} \begin{proof} Identifying $Q^\dagger A Q$ as conjugation of $A$ by $U_M(\beta)$, followed by conjugation by~$U_P(\gamma)$, the result then follows from two applications of \eqref{eq:infinitesimalConj}. \end{proof} Observe we may write \eqref{eq:costHamConj} succinctly as \begin{equation} \label{eq:costHamConj2} Q^\dagger A Q = e^{i\gamma \nabla_C} e^{i\beta \nabla} A, \end{equation} where the order-of-operations of the superoperators $e^{i\gamma \nabla_C}$ and $e^{i\beta \nabla}$ is implicitly understood. 
Hence all terms in $Q^\dagger C Q$ are of the form $\nabla_C^\ell \nabla^k C$, i.e., containing at most one alternation of $\nabla$ and $\nabla_C$.\footnote{The Baker–Campbell–Hausdorff formula~\cite{woit2017quantum} allows one to similarly express the effective Hamiltonian $H=H(\gamma,\beta)$ of each QAOA layer $Q=U_M(\beta)U_P(\gamma)=exp(-iH)$ as a series in commutators of $B$ and $C$, $$H= \beta B + \gamma C -i\tfrac{\gamma\beta}2\nabla C -\tfrac{\gamma\beta^2}{12}\nabla^2 C +\tfrac{\gamma^2\beta}{12}\nabla_C\nabla C +i\tfrac{\gamma^2\beta^2}{24}\nabla_C\nabla^2C+\dots,$$ but with more complicated expressions for higher-order terms as compared to \eqref{eq:costHamConj}.} In~\secref{sec:QAOAp} we apply this formula recursively to obtain analogous formulas for QAOA$_p$ consisting of terms containing at most $2p-1$ such alternations. \begin{rem} \label{rem:normInvariance} \eqsref{eq:costHamConj}{eq:costHamConj2} are invariant under the rescalings $(\gamma,C)\rightarrow (\gamma/a,aC)$ or $(\beta,B)\rightarrow (\beta/b,bB)$ for any $a,b>0$. Hence to apply these formulas perturbatively one must consider the quantities $\|\gamma C\|=|\gamma|\|C\|$ and $\|\beta B\|=|\beta|n$ as small parameters, not just $|\gamma|,|\beta|$. We consider such a small-angle regime in~\secref{sec:smallAngleQAOA1}. Note that rescaling $C$ without proportionately rescaling $c(x)$ preserves the solution structure but would violate the condition \eqref{eq:costHam}; we assume $c(x)$ uniquely determines~$C$ as in~\eqref{eq:costHam} throughout. \end{rem} We apply \lemref{lem:QAOA1conj} to derive exact expressions for the cost expectation and probabilities of QAOA$_1$ using $\langle C\rangle_1 = \langle Q^\dagger C Q\rangle_0$ and $P_1(x)= \langle Q^\dagger H_x Q\rangle_0$ for $H_x:=\ket{x}\bra{x}$ from~\remref{rem:probProj}. \begin{theorem} \label{thm:QAOA1} Let $C$ be a cost Hamiltonian. For QAOA$_1$ the cost expectation is \begin{eqnarray*} \langle C \rangle_1 &=& \langle C \rangle_0 \,-\,\gamma \beta \langle \nabla_C \nabla C \rangle_0 \,\,+\, \sum_{ \ell + k \geq 4,\ell + k \text{ even} }^\infty \frac{(i\gamma)^\ell(i\beta)^k}{\ell!\,k!} \langle\nabla_C^\ell \nabla^k C \rangle_0, \nonumber \end{eqnarray*} where $\langle \nabla_C \nabla C \rangle_0=\tfrac{2}{2^n}\sum_x c(x)dc(x)\leq 0$, and the probability of measuring $x\in\{0,1\}^n$ is \begin{eqnarray*} \label{eq:P1x} P_1(x) &=& P_0(x) +\sum_{ \ell + k \text{ even} }^\infty \frac{(i\gamma)^\ell(i\beta)^k}{\ell!\,k!} \langle\nabla_C^\ell \nabla^k H_x \rangle_0 \end{eqnarray*} where $H_x:=\ket{x}\bra{x}$, and both sums are over subsets of integers $\ell,k>0$. \end{theorem} \begin{proof} Plugging $A=C$ into \eqref{eq:costHamConj}, using $[C,C]=0$, and taking expectation values with respect to $\ket{s}$ gives the QAOA$_1$ cost expectation value \begin{equation} \label{eq:expeccostHamConj0} \langle C\rangle_1 \,=\, \bra{s}Q^\dagger C Q\ket{s}\,=\,\langle C\rangle_0 \, + \, \sum_{\ell=0}^\infty \sum_{k=1}^\infty \frac{(i\gamma)^\ell(i\beta)^k}{\ell!\,k!} \langle \nabla_C^\ell \nabla^k C \rangle_0. \end{equation} From \lemsref{lem:expecGrad}{lem:expecCDC}, the lowest order nonzero contribution $\langle C \rangle$ is the second-order term $-\gamma \beta \langle \nabla_C \nabla C\rangle_0= \tfrac{-2}{2^n}\gamma\beta\sum_x c(x)dc(x)$, and the remaining terms with $\ell+k$ odd are identically zero from \lemref{lem:genGrads}, which gives the first result. The probability result follows similarly. 
Applying \eqref{eq:costHamConj} to $H_y:=\ket{y}\bra{y}$ gives \begin{equation} P_1(y)= \langle Q^\dagger H_y Q\rangle_0=\sum_{\ell=0}^\infty \sum_{k=0}^\infty \frac{(i\gamma)^\ell(i\beta)^k}{\ell!\,k!} \langle \nabla_C^\ell \nabla^k H_y \rangle_0, \end{equation} to which using $[C,H_y]=0$ and again applying \lemref{lem:genGrads} gives the stated result. \end{proof} We generalize this result to QAOA$_p$ in \thmref{thm:smallprecursed} in \secref{sec:QAOAp}. We now use the leading order terms of \thmref{thm:QAOA1} to characterize QAOA$_1$ in small-parameter regimes. \subsection{Leading-order QAOA$_1$} \label{sec:smallAngleQAOA1} This section provides the proofs of the results stated in \secref{sec:smallAngleQAOA1initial}. \begin{proof}[Proof of \thmref{thm1:smallAngles}] The results follow applying \lemref{lem:expecCDC} to the leading order terms of \thmref{thm:QAOA1}, as $\langle\nabla_C\nabla C\rangle_0=\tfrac{2}{2^n}\sum_x c(x)dc(x)\leq 0$ and $\langle \nabla_C \nabla H_x \rangle_0 = \tfrac{2}{2^n} \sum_y c(y)d\delta_x(y)= \tfrac{2}{2^n} dc(x)$ for $H_x=\ket{x}\bra{x}$ of \remref{rem:probProj}. When $c(x)$ is nonconstant then its Hamiltonian representation $C=a_0 + \sum_j a_j Z_j + \sum_{i<j}a_{ij} Z_i Z_j+\dots$ has at least one nonzero Pauli coefficient $a_\alpha$, $\alpha\neq0$, and so \lemref{lem:expecCDC} implies $\langle\nabla_C\nabla C\rangle_0 < 0$. \end{proof} The proof of \thmref{thm:allanglessmall} is similar and given in \secref{sec:smallAngleQAOApClass}. Comparing to our results of \thmssref{thm1:smallAngles}{thm:allanglessmall}{thm:QAOA1} we see that by selecting $\tau H=\sqrt{2}\beta B + \sqrt{2}\gamma C$ for a simple quantum quench, the lowest-order contribution to $\langle C\rangle$ is the same as for for QAOA. \begin{proof}[Proof of \propref{prop:quench}] Observe that $\tau \nabla_H = \nabla_{\tau H}$ and hence $\tau \nabla_H C =\sqrt{2}\beta\nabla C$ and $\tau^2 \nabla^2_H C = 2\gamma\beta \nabla_C\nabla C +2 \beta^2\nabla^2 C$. Applying \eqref{eq:infinitesimalConj} to conjugation of $C$ by $e^{-iH\tau}$ gives \begin{eqnarray} e^{i\tau H}Ce^{-i\tau H} &=& C + i\tau\nabla_H C - \frac{\tau^2}2 \nabla^2_H C \,\pm \dots\\ &=& C + i\sqrt{2}\beta\nabla C - \gamma\beta \nabla_C \nabla C - \beta^2 \nabla^2 C + \dots\nonumber \end{eqnarray} Taking initial state expectation values of both sides and applying \lemref{lem:expecCDC} gives~\eqref{eq:quenchLeadingOrder}. The statement for QAOA$_p$ follows similarly. \end{proof} \subsubsection{Small-angle error bound} Here we derive rigorous error bounds to the leading-order approximations of \thmref{thm1:smallAngles}. Here, the spectral norm $\|C\|:=\|C\|_2$ is the maximal eigenvalue in absolute value. For a maximization problem with $c(x)\geq 0$, e.g., a constraint satisfaction problem, the norm gives the optimal cost $\|C\| =\max_x c(x)$. Here we write $C=a_0I+C_Z$ as in \eqref{eq:costHamZs}, with $C_Z$ traceless. \begin{cor} \label{cor:thm1errorboundeps} Let $\widetilde{P_1}(x)$, $\widetilde{\langle C\rangle_1}$ denote the probability and cost expectation estimates (\eqsref{eq:px1}{eq:expecC1}) of \thmref{thm1:smallAngles} for QAOA$_1$ with a $k$-local cost Hamiltonian $C=a_0I + C_Z$, and let $0<\e<1$. \begin{itemize} \item If $|\gamma|\leq \frac{\e^{1/4}}{2\min(\|C_Z\|,\|C\|)}$ and $|\beta|\leq\frac{\sqrt \e}{2 k}$ then \begin{equation} \label{eq:smallAnglesErrorBound} \left|\langle C \rangle_1 - \widetilde{\langle C\rangle}_1 \right| < \min(\|C_Z\|,\|C\|)\;\e. 
\end{equation} \item If additionally or instead $|\beta|<\frac25\frac{\sqrt \e}{n}$ then for each $x\in\{0,1\}^n$ we have \begin{equation} \label{eq:smallAnglesErrorBoundProb} \left| P_1(x) - \widetilde{P_1}(x) \right| < \frac{\e}{2^{n}}. \end{equation} \end{itemize} \end{cor} The proof is shown in \appref{app:normAndErrorBounds} through a sequence of general lemmas. For example, for MaxCut we have $\|C_Z\|=m/2\leq \max_x c(x)=\|C\|$. Typically $\|C_Z\|< \|C\|$, but this is not true for arbitrary $C$, which is the reason for the minimum function of \eqref{eq:smallAnglesErrorBound}. Futhermore, the difference in the ranges for~$\beta$ between the two cases of \corref{cor:thm1errorboundeps} arises because measurement probabilities $P_1(x)$ correspond to $n$-local observables. As for optimization applications we typically are interested in the expected approximation ratio $\langle C \rangle/ c_{opt} \geq \langle C \rangle/ \|C\|$ (for maximization) achieved by a QAOA circuit, here we have selected the ranges of $\gamma,\beta$ for \eqref{eq:smallAnglesErrorBound} such that the resulting error bound contains a factor $\|C\|$ (or $\|C_Z\|$). Hence $O(\|C\|\e)$ error in the estimate for $\langle C\rangle_1$ corresponds to an $O(\e)$ error estimate for the expected approximation ratio. A general bound for the error of the cost expectation estimate of \thmref{thm1:smallAngles} as a polynomial in $\gamma,\beta$ is given in \lemref{lem:thm1errorbound}, which may be used to alternatively select the ranges of $\gamma,\beta$ and resulting error bound. The proof of \lemref{lem:thm1errorbound} is shown using a recursive formula that may be extended to obtain similar bounds when including higher-order terms beyond the approximations of \thmref{thm1:smallAngles}, as indicated in \thmref{thm:QAOA1}. \subsubsection{Small-angle QAOA$_1$ behaves classically} \label{sec:smallQAOA1class} The above results imply that QAOA$_1$ can be classically emulated in the small-angle regime, in the following sense: we show a simple classical randomized algorithm for emulating QAOA$_1$ with sufficiently small $|\gamma|,|\beta|$ that reproduces the leading-order probabilities $\widetilde{P_1}(x)$ of sampling each bitstring~$x$ from \thmref{thm1:smallAngles}. For $|\gamma|,|\beta|$ polynomially small in the problem parameters such that the conditions of \corref{cor:thm1errorboundeps} are satisfied this implies the same error bounds \eqsref{eq:smallAnglesErrorBound}{eq:smallAnglesErrorBoundProb} apply for the classical algorithm. Specifically, assume the cost function $c(x)$ can be efficiently evaluated classically. Let $K$ be a bound such that $|\partial_jc (x)|\leq K$ for all $j,x$; for example, for MaxCut, we may set $K$ to be $2$ times the maximum vertex degree in the graph. For an arbitrary cost Hamiltonian we may always take $K=2\|C\|$ (or, $K=2\|C_Z\|$ for $C=a_0I+C_Z$). Consider the following simple classical randomized algorithm. \vskip 0.5pc \textbf{ Algorithm 1: sample according to leading-order QAOA$_1$} \begin{enumerate} \item \underline{Input}: Parameters $\gamma,\beta\in [-1,1]$ such that $|\gamma\beta|\leq \frac1{2nK}$. \item Select a bitstring $x_0\in\{0,1\}^n$ and an index $j\in [n]$ each uniformly at random. \item Compute $\partial_j c(x_0)$ and let $\delta (x_0,j) := 2n\gamma\beta \partial_j c(x_0)$. \item Using a weighted coin, flip the $j$th bit of $x_0$ with probability given by $\frac12 + \frac12 \delta (x_0,j)$. \item Return the resulting bitstring $x_1$. 
\end{enumerate} By the choice of algorithm parameters in Step 1 we have $-1\leq \delta (x,j)\leq 1$ which ensures the probability distribution resulting from Step 4 remains normalized. For sufficiently small $\gamma,\beta$ this algorithm emulates QAOA$_1$ up to small error. \begin{theorem} \label{thm:smallAnglesClassical} Consider a cost Hamiltonian $C=a_0I+C_Z$. For fixed $0<\e<1$, let $|\beta|\leq \frac25\frac{\sqrt \e}{n}$ and $|\gamma|\leq \e^{1/4}/(2\min(\|C_Z\|,\|C\|)$. Then there exists an efficient classical randomized algorithm producing bitstring $x$ with probability $P_{class}(x)$ which satisfies \begin{equation} \label{eq:thm2a} |P_{class}(x)-P_1(x)| \leq \frac{\epsilon}{2^{n}}. \end{equation} and expected value of the cost function $| \langle c \rangle_{class} - \langle C \rangle_1 | < \|C_Z\|\e$. \end{theorem} \noindent In this sense we say that QAOA$_1$ behaves classically in the small-angle regime. \begin{proof} Consider The conditions of the theorem imply $|\gamma\beta|\leq 1/(9n\min(\|C_Z\|,\|C\|))$ and hence $\gamma,\beta$ satisfy the first condition of Algorithm~$1$. The probability of this procedure returning a particular bitstring $x$ is \begin{eqnarray*} Pr(x_1=x) &=&\, Pr(x_0 = x)\sum_j Pr(j)Pr(coin=0) \,+\,\sum_j Pr(x_0 = x^{(j)})Pr(j) Pr(coin=1)\\ &=& \frac1{2^n}\frac1{n}\sum_j(\frac12 - \frac12\delta (x,j)) \,+\, \sum_j \frac1{n}\frac1{2^n}(\frac12 + \frac12\delta (x^{(j)},j))\\ &=& \frac1{2^n}( \frac1{n}\frac{n}{2} + \frac1{n}\frac{n}{2}) - \frac{1}{2^n}\frac1{n}\sum_j\frac12\delta (x,j)) \,+\, \frac1{2^n}\frac1{n}\sum_j \frac12\delta (x^{(j)},j)\\ &=& \frac1{2^n} -\frac1{2^n} \frac1{n}\sum_j\delta (x,j)\\ &=&Pr(x_0 = x) \,-\, \frac2{2^n} \gamma\beta \,dc(x) \end{eqnarray*} where we have used $\delta (x^{(j)},j)=-\delta (x,j)$. Observe that from $\sum_x dc(x) = 0$ the resulting distribution remains normalized. Hence the expected value of the cost function $c(x)$ is $$\langle c \rangle = \sum_x Pr(x_1=x)c(x) = \langle c(x) \rangle_0 -2\gamma \beta \langle c(x) dc(x)\rangle_0.$$ and so we see Algorithm 1 reproduces the leading-order probability and cost expectation estimates of \thmref{thm1:smallAngles}. The error bounds then follow directly from \corref{cor:thm1errorboundeps}. \end{proof} \thmref{thm:smallAnglesClassical} demonstrates that QAOA parameters must necessarily be not-too-small for potential quantum advantage. In practice, the parameter search space may be pruned in advance to avoid such regions. \subsection{Causal cones and locality considerations for QAOA$_1$} \label{sec:lightcones} Here we consider problem and Hamiltonian locality for QAOA$_1$ circuits, building off of \secref{sec:probLocality}. We extend these results to QAOA$_p$ in \secref{sec:lightconesp}. Our framework formalizes here similar observations made in the original QAOA paper \cite{farhi2014quantum} in the context of computing the cost expectation $\langle C\rangle_p$ and resulting bound to the approximation ratio, as well as subsequent work computing such quantities analytically or numerically for specific problems~\cite{wang2018quantum,hadfield2018thesis}, and approaches to computing or approximating quantum circuit observable expectation values more generally~\cite{evenbly2009algorithms,bravyi2019classical}. In particular we show how using locality in our framework allows for a more direct method of dealing with the action of the QAOA phase operator. 
This is especially advantageous in cases where each variable appears in relatively few clauses (e.g., MaxCut on bounded degree graphs, as considered for QAOA in \cite{Farhi2014,wang2018quantum}). Such cases allow for straightforward evaluation in our framework by providing succinct efficiently computable expressions of relevant quantities such as $\bra{\gamma}\nabla C\ket{\gamma}$ and $\bra{\gamma}\nabla^2 C\ket{\gamma}$ which appear for instance in both the leading-order and exact analysis of QAOA$_1$ of \secsref{sec:smallMixingAngle}{sec:MaxCutQUBO}. Consider a cost Hamiltonian $C=a_0I+\sum_{j=1}^m C_j$ such that each $C_j$ is a single Pauli term. As observed in \cite{Farhi2014}, in cases where the \textit{problem locality} is such that for the resulting cost Hamiltonian: i) each $C_j$ acts on at most $k$ qubits and ii) each qubit appears in at most $D$ terms, this information can be taken advantage of in computing each $\langle C_j\rangle$, especially when $k,D\ll n$, by a priori discarding operators that can be shown to not contribute. \footnote{On the other hand, while causal cone considerations are useful for computing quantities such as $\langle C\rangle=\sum_j\langle C_j\rangle$ in the case, e.g., of bounded-degree CSPs, for evaluating QAOA probabilities they may be less useful as $H_x=\ket{x}\bra{x}$ is $n$-local. This observation motivates the general result of \thmref{thm:QAOA1}.} This reduced number of necessary qubits and operators for computing expectation values is often called the (reverse) \textit{causal cone} or \textit{lightcone} in the literature \cite{evenbly2009algorithms,bravyi2018quantum,bravyi2019classical,shehab2019noise,streif2020training}, and relates to Lieb-Robinson bounds concerning the speed of propagation of information~\cite{hastings2008observations}. See in particular \cite[Sec. IV.B]{childs2019theory} for applications to simulating local observables. We use the nomenclature \textit{QAOA}$_p$-\textit{cones}, which we define below and in~\tabref{tab:locality}. \begin{table}[ht] \centering \begin{tabular}{ |c|c|c||c|} \hline Label & Symb. & Definition & Example: Value for MaxCut\\ \hline Hamiltonian term &$C_j$ & $a_jZ_{{j_1}}\dots Z_{j_{|N_j|}}$ & $C_{uv}=-\frac12Z_uZ_v$, \,$(uv)\in E$\\ Classical term & $c_j(x)$ & $a_j(-1)^{x_{j_1}\oplus x_{j_2} \oplus \dots \oplus x_{j_{|N_j|}}}$& $c_{uv}=-\tfrac12(-1)^{x_u\oplus x_v}$\\ Qubits in $C_j$ ($c_j$) & $N_j$ & $\{i:Z_i\in C_j\}\subset [n]$ & $N_{uv}=\{u,v\}$ \\ QAOA$_1$-cone of $C_j$ & $L_j$ & $ \cup_{i: N_i\cap N_j \neq \emptyset} N_i$ & $\{u,v\}\cup \{w: (uw)\in E \vee (vw)\in E\}$\\ QAOA$_p$-cone of $C_j$ & $L_{j,p}$ & $L_{j,p-1}\cup_{i: N_i\cap L_{j,p-1} \neq \emptyset} N_i$ & $L_{uv,p}=\{\ell:\, $dist$(\ell,N_{uv})\leq p\}$\\ \hline \end{tabular} \caption{Notation concerning locality for cost Hamiltonian $C=a_0I + \sum_{j}C_j$, with corresponding decomposition of the classical cost function $c(x)=a_0+\sum_jc_j(x)$ such that $C_j\ket{x}=c_j(x)\ket{x}$. The QAOA$_1$-cone of $C_j$ corresponds to the cost function neighborhood (with respect to the variables) of each $c_j$. 
For MaxCut, dist$(\ell,N_{uv})$ indicates the smaller of the edge distance in the graph of vertex $\ell$ from $u$, or $v$, and so the QAOA$_1$-cone of $C_{uv}$ is vertices (qubits) in the graph adjacent to $u$ or $v$.} \label{tab:locality} \end{table} Recall from \secref{sec:costHam} we may decompose $C$ with respect to $C^{\{i\}}$ the (sum of) terms in $C$ containing a $Z_i$ factor, $i=1,\dots,n$, which from \eqref{eq:partialjC} satisfy $\partial_i C =-2 C^{\{i\}}$ (i.e., each $C^{\{i\}}$ is diagonal and represents the function $-\tfrac12 \partial_ic(x)$). Consider the operator $\widetilde{B}:=U_P^\dagger B U_P$, which will appear in our analysis to follow. Using $B=\sum_{i=1}^n X_i$ and $XZ=-ZX$ we have \begin{equation} \label{eq:Bconj} \widetilde{B} = U_P^\dagger B U_P= \sum_{i=1}^n e^{2i\gamma C^{\{i\}}}X_i =\sum_{i=1}^n e^{-i\gamma \partial_i C} X_i =\sum_{i=1}^n X_i e^{i\gamma \partial_i C}, \end{equation} which implies $\widetilde{B}\ket{s}=\sum_{i=1}^n e^{-i\gamma \partial_i C}\ket{s}=\tfrac1{\sqrt{2^n}}\sum_x(\sum_{i=1}^n e^{-i\gamma \partial_i c(x)})\ket{x}$. As each Hamiltonian $\partial_iC$ only acts on qubits adjacent to $i$ with respect to $C=\sum_{j=1}^mC_j$, \eqref{eq:Bconj} reflects the QAOA$_1$-cone structure of the particular problem. Hence for each $C_j$, $j=1,\dots,m$, acting on qubits $N_j\subset[n]$, define the superset \begin{equation} L_{j}=N_j \cup \{\ell: \exists j'\neq j \; \text{ s.t. } \ell \in N_j \cap N_{j'}\}, \end{equation} which we refer to as the \textit{QAOA}$_1$-\textit{cone} of $C_j$. In \secref{sec:lightconesp} we generalize this to $L_{j,p}$ for QAOA$_p$ with $L_{j,1}=L_j$ and $L_{j,0} :=N_j$, as summarized in \tabref{tab:locality}. Turning to cost gradient operators, \eqref{eq:Bconj} implies the phase operator acts on $\nabla C$ as \begin{eqnarray} \label{eq:costGradConj} e^{i\gamma C}\,\nabla C\,e^{-i\gamma C}= [\widetilde{B},C] \,= \nabla_{\widetilde{B}} C = \sum_j \nabla_{\widetilde{B}} |_{L_j}C_j. \end{eqnarray} In particular terms in $C$ that do not act entirely within $L_j$ always commute through and cancel in each $e^{i\gamma C}\,\nabla C_j\,e^{-i\gamma C}$. Hence using $X\ket{s}=\ket{s}$ with \eqsref{eq:nablaCjs}{eq:Bconj} gives \begin{eqnarray} \label{eq:expecgammanablaC} \bra{\gamma}i\nabla C \ket{\gamma} &=& i\bra{s}\sum_{i=1}^n (e^{i\gamma \partial_i C} - e^{-i\gamma \partial_i C})C \ket{s} =-\frac{2}{2^n} \sum_x c(x) \sum_{i=1}^n \sin(\gamma \partial_i c(x)) \end{eqnarray} where for each $C_j = a_j Z_{j_1}\dots Z_{j_{|N_j|}}$ from \eqref{eq:costGradConj} we have \begin{eqnarray} \label{eq:expecgammanablaCj} \bra{\gamma}i\nabla C_j \ket{\gamma} &=& -\frac{2}{2^{|N_j|}} \sum_{x\in\{0,1\}^{L_j}} c_j(x) \sum_{i\in N_j} \sin(\gamma \partial_i c_j(x)), \end{eqnarray} and so reduces to a sum over at most $|L_j|$ variables, with $ \bra{\gamma}i\nabla C\ket{\gamma} =\sum_j\bra{\gamma}i\nabla C_j \ket{\gamma}$. Observe that $|L_j|$ gives an upper bound to the necessary number of variables to sum over, i.e., we can compute \eqref{eq:expecgammanablaCj} by summing over $2^{|L_j|}$ bitstrings. Alternatively as $|L_j|$ becomes large we may approximate quantities such as \eqsref{eq:expecgammanablaC}{eq:expecgammanablaCj} with, for example, Monte Carlo estimates. 
For $\nabla^2C$, a similar calculation, shown in \appref{app:smallMix}, yields \begin{eqnarray} \label{eq:expecgammanabla2C} \bra{\gamma}\nabla^2 C \ket{\gamma} = \frac{-8}{2^n} \sum_x c(x) \sum_{i<i'} \sin(\tfrac{\gamma}2 (\partial_i c(x)+\tfrac12\partial_i\partial_{i'}c(x))) \sin(\tfrac{\gamma}2 (\partial_{i'} c(x)+\tfrac12\partial_i\partial_{i'}c(x))), \end{eqnarray} which for each $C_j = a Z_{j_1}\dots Z_{j_{N_j}}$, $C=\sum_j C_j$, reduces to \begin{eqnarray*} \bra{\gamma}\nabla^2 C_j \ket{\gamma} = \frac{-8}{2^{|L_j|}} \sum_{x\in\{0,1\}^{|L_j|}} c_j(x) \sum_{i<i'} \sin(\tfrac{\gamma}2 (\partial_i c(x)+\tfrac12\partial_i\partial_{i'}c(x))) \sin(\tfrac{\gamma}2 (\partial_{i'} c(x)+\tfrac12\partial_i\partial_{i'}c(x))). \end{eqnarray*} We apply these results in \secref{sec:smallMixingAngle}. \subsection{QAOA$_1$ with small mixing angle and arbitrary phase angle} \label{sec:smallMixingAngle} In this section we generalize the leading-order QAOA$_1$ results of \secref{sec:smallAngleQAOA1} to the case of arbitrary phase angle $\gamma$, but $|\beta|$ taken as a small parameter. We show for this regime that QAOA$_1$ probabilities and expectation values remain easily expressible in terms of local cost differences, for arbitrary cost functions. Higher order terms may be similarly derived with our framework. (We show complementary results for the setting of small $|\gamma|$ but arbitrary $\beta$ in \secref{sec:smallphase} for the case of QUBO cost functions.) \begin{theorem} \label{thm:smallBetap1} The probability of measuring each string $x$ for QAOA$_1$ to first order in $\beta$ is \begin{equation} P_1(x) \simeq P_0(x) \,-\, \frac{2\beta}{2^n} \sum_{j=1}^n \sin({\gamma\, \partial_j c(x)}), \end{equation} and the first-order expectation value is then $\,\, \langle C \rangle_1 \, \,\simeq\, \langle C \rangle_0 \,-\, \frac{2\beta}{2^n} \sum_x c(x) \left(\sum_{j=1}^n \sin({\gamma\, \partial_j c(x)})\right). $ \end{theorem} Consider a string $x^*$ that maximizes $c(x)$. Then $\partial_j c(x^*) \leq 0$ for all $j$. Therefore, assuming $\gamma>0$ is small enough that each product $|\gamma \partial_j c(x^*)| < \pi $, then we see that the probability of such a state will increase, to lowest order in $\beta>0$. Thus we again see in this regime that to lowest order probability will flow as to increase $\langle C \rangle$. Similar arguments apply to minimization. \begin{proof} Expressing $\langle C \rangle_1$ as the expectation value of $U_M(\beta)^\dagger C U_M(\beta)$ as given in \eqref{eq:conj1}, taken with respect to $\ket{\gamma}:=U_P(\gamma)\ket{s}$, gives to low order in $\beta$ \begin{eqnarray} \label{eq:expecCsmallbeta} \langle C \rangle_1 &=& \langle C \rangle_0 + \beta \bra{\gamma} i\nabla C \ket{\gamma} - \tfrac\beta{2} \bra{\gamma} \nabla^2 C \ket{\gamma} + \dots \end{eqnarray} The leading order contribution then follows from \eqref{eq:expecgammanablaC}. which plugging into \eqref{eq:expecCsmallbeta} gives the result for $\langle C \rangle_1$. Similarly, repeating this derivation for the QAOA$_1$ expectation value of $H_x=\ket{x}\bra{x}$ shows the first-order correction to the probability of measuring each string $x$ is $- \frac{2\beta}{2^n} \sum_{j=1}^n \sin({\gamma \partial_j C(x)})$ which gives the result for $P_1(x)$. \end{proof} Applying the small argument approximation $\sin(x)\simeq x$ to \thmref{thm:smallBetap1} reproduces the results of \thmref{thm1:smallAngles}. 
Furthermore, extending \thmref{thm:smallBetap1} to include the second-order contribution in $\beta$ to $\langle C\rangle_1$ follows readily from \eqref{eq:expecgammanabla2C} and \eqref{eq:expecCsmallbeta}. From analysis similar to the case when both angles are small, and a simple modification to Algorithm 1 above it follows that QAOA$_1$ is classically emulatable for sufficiently small $|\beta|$ (up to small additive error), independent of the size of $|\gamma|$. \begin{cor} \label{cor:smallbeta} There exists a constant $b$ such that for QAOA$_1$ with $|\beta|\leq b/n$ there is a simple classical randomized algorithm such that for each $x$ $$ |P_{class}(x)-P_1(x)|= O(1/2^n).$$ \end{cor} \begin{proof} The classical algorithm is constructed by adjusting the quantities of Algorithm $1$ to match those of \thmref{thm:smallBetap1}, which yields identical leading order terms for both algorithms. It remains to bound the error (cf. the results and proofs of \appref{app:errorBounds}). Let $\ket{\gamma } := U_P(\gamma)\ket{s}$. Applying \eqref{eq:infinitesimalConj} to $H_x:=\ket{x}\bra{x}$ for $P_1(x)=\bra{\gamma\beta}H_x \ket{\gamma\beta}$ gives $$P_1(x)= P_0(x) + \beta \bra{\gamma } i\nabla H_x \ket{\gamma } + \sum_{j=2}^\infty \frac{(i\beta)^j}{j!} \bra{\gamma } \nabla^j H_x \ket{\gamma }$$ Observe that using $|\bra{x}\ket{\gamma}|=\frac1{\sqrt{2^n}}$ we have $ |\bra{\gamma}\nabla^\ell H_x\ket{\gamma}|\leq (2n)^\ell |\bra{\gamma} H_x\ket{\gamma}| = (2n)^\ell \frac1{2^n}$, so we bound the tail sum as \begin{eqnarray*} \left| \sum_{j=2}^\infty \frac{(i\beta)^j}{j !} \bra{\gamma}\nabla^\ell H_x \ket{\gamma}\right| &\leq& \sum_{j=2}^\infty \frac{(2n\beta)^j}{j!} \frac1{2^n} \leq \frac1{2^n}(e^{2n\beta} -2n\beta -1) \leq \frac2{2^n}n^2\beta^2 e^{2n\beta}. \end{eqnarray*} Thus if $|\beta|=O(1/n)$ this quantity is $O(\frac1{2^n})$ as desired. \end{proof} Generally, for QAOA with larger parameter values, the leading-order approximations become less accurate as higher-order terms become significant. For large enough parameters, contributing terms will involve cost function differences over increasingly large neighborhoods, and the number of contributing terms may become super-polynomial. Hence the direct mapping to classical randomized algorithms fails to generalize to arbitrary angles as the probability updates will no longer be efficiently computable in general. Indeed, the results of~\cite{farhi2016quantum} imply that, under commonly believed computational complexity conjectures, there cannot exist an efficient classical algorithms emulating QAOA$_p$ with arbitrary angles in general, even for QAOA$_1$; see \remref{rem:samplingComplexity} below. In the remainder of \secref{sec:QAOA1} we illustrate the application of our framework by considering several examples including the Hamming ramp toy problem, a simple quench protocol related to QAOA, and QAOA$_1$ for MaxCut and QUBO problems. \subsection{Example: QAOA$_1$ for the Hamming ramp} \label{sec:HammingRamp} Here we consider the Hamming weight ramp problem (studied e.g. for QAOA in~\cite{bapat2018bang}). We apply our framework to determine the leading order terms of the QAOA$_1$ cost expectation, and show it matches the exact solution. 
Consider the Hamming-weight ramp cost function $c(x)=\alpha|x|$ with $\alpha\in{\mathbb R}\setminus\{0\}$ which we may write \begin{equation} c(x)=\alpha|x|=\frac{\alpha n}2 +( \alpha |x|-\frac{ \alpha n}2) \, =:\frac{\alpha n}2 + c_z(x), \end{equation} with Hamiltonian \begin{equation} C=\frac{\alpha n}2 I - \frac{\alpha}2 \sum_j Z_j \, =:\frac{\alpha n}2I + C_Z, \end{equation} i.e., $C_Z := - \frac{\alpha}2 \sum_j Z_j$ represents the function $c_z(x)=\alpha |x|-\frac{ \alpha n}2$. \paragraph{Exact results:} For QAOA$_1$ a simple calculation shows \begin{eqnarray} \label{eq:rampOptimal} \langle C \rangle_1 = \frac{\alpha n}{2} + \frac{\alpha n}{2} \sin(\alpha \gamma) \sin(2\beta). \end{eqnarray} Thus QAOA$_1$ optimally solves this problem, with probability $1$, with angles $\gamma=\frac{\pi}{2\alpha},\beta=\pi/4$. Expanding \eqref{eq:rampOptimal} using $\sin(x)=x-\frac{x^3}{3!}+\dots$ gives \begin{eqnarray} \label{eq:rampSmallAngle} \langle C\rangle_1 &=& \frac{\alpha n}2 + \alpha^2 n\gamma\beta - \frac23 \alpha^2 n \gamma \beta^3 - \frac16 \alpha^4 n \gamma^3 \beta +\dots \end{eqnarray} where the terms not show are order $6$ or higher in $\alpha$, and ${\gamma,\beta}$ combined. \begin{figure} \caption{$\langle C\rangle_1$ vs $\epsilon$ with $\beta = -\epsilon (\pi/4)$ and $\gamma = \epsilon(\pi/2)$ linearly interpolating from zero to the angles giving the optimal solution with probability 1.} \caption{$\langle C\rangle_1$ vs $\beta$ and $\gamma$} \caption{Comparison of approximate versus exact approximations of $\langle C\rangle_1$ for QAOA$_1$ for the Hamming weight ramp problem $c(x)=|x|$. Here we consider the minimization version optimized by the all $0$ string. Panel (a) shows the deviation between the exact formula \eqref{eq:rampOptimal} \label{fig:ramp} \end{figure} \paragraph{Our framework:} \thmsref{thm1:smallAngles}{thm:smallprecursed} produce the same leading-order expression \eqref{eq:rampSmallAngle} for the Hamming ramp. For this cost function it is easily seen $d^\ell c(x) = (-2)^\ell c_z (x)$, and in particular $d c(x) = -2 c_z (x)=\alpha (n -2|x|)$ and $d^2 c(x) = 4 c_z (x)$. Thus we have $$ \langle \nabla_C \nabla C \rangle_0 = \frac2{2^n}\sum_x c(x) dc(x) =-4\langle C\; C_Z \rangle_0= -4 \sum_j (\frac{\alpha^2}4)= -\alpha^2 n$$ which from \thmref{thm:smallprecursed} gives the correct leading order contribution $\alpha^2 n \gamma \beta$. Next we have \begin{eqnarray*} \langle \nabla_C \nabla^3 C \rangle_0 = \frac2{2^n}\sum_x c(x) d^3c(x) = \frac8{2^n}\sum_x c(x) dc(x) =-4\alpha^2 n \end{eqnarray*} which gives the $\gamma \beta^3$ term coefficient $-\frac23\alpha^2n$. Next, from \lemref{lem:quboGrads} we have $\nabla^2 C=4C_Z$, it follows $\langle \nabla^2_C \nabla^2 C \rangle_0 = 0$ as anticipated from \eqref{eq:rampSmallAngle}. Finally, for the term $\langle \nabla^3_C \nabla C \rangle_0$, we have $\nabla_C \nabla C = \alpha^2 B$ and so \begin{eqnarray*} \langle \nabla^3_C \nabla C \rangle_0 &=& \langle \nabla^2_C \nabla_C \nabla C \rangle_0 =-\alpha^2 \langle \nabla_C \nabla C \rangle = -\alpha^4 \langle B\rangle_0 = -\alpha^4 n \end{eqnarray*} Using these expressions in \thmref{thm:smallprecursed} (which generalizes \thmref{thm1:smallAngles}) for QAOA$_1$ then gives the correct terms up to 4th order. \figref{fig:ramp} compares these approximations with the exact behavior. In~\cite{bapat2018bang}, variants of the ramp such as the Bush-of-implications and Ramp-with-spike are studied for QAOA, along with cost functions given by perturbations of the ramp. 
Our techniques similarly apply to these problems. More importantly, \thmsref{thm1:smallAngles}{thm:smallprecursed} may be applied to problems where the exact results are not generally obtainable. \subsection{Example: QAOA$_1$ for random Max-3-SAT} \label{sec:3SAT} When applying QAOA to instances drawn from a class of problems, it may be convenient to select a single set of parameters rather than optimizing parameters individually for each instance. For instance, this selection could be QAOA parameters that optimize the average cost for the class of problems. Our framework can address this situation by averaging the cost operators over the class of problems with a fixed choice of parameters. As an example, here we apply this idea to random Max-$3$-SAT. A SAT problem in conjunctive normal form consists of $m$ disjunctive clauses in $n$ Boolean literals (i.e., variables or their negations). For $k$-SAT, each clause involves $k$~distinct literals. The cost function associated with a Max-$k$-SAT instance is the number of clauses an assignment $x$ satisfies. For Max-3-SAT, the corresponding cost Hamiltonian as in \eqref{eq:costHamZs} and used in \lemref{lem:expecCDC} is easily obtained \cite{hadfield2018representation}: each clause contributes $7/8$ to $a_0$, and $\pm 1/8$ to each corresponding $a_i$, $a_{i,j}$ and $a_{i,j,k}$, with the signs depending on which variables are negated within each clause. Random $k$-SAT instances are constructed by randomly selecting $k$ distinct variables for each clause and negating each variable with probability one-half. Averaging the squares of the cost coefficients that appear in \lemref{lem:expecCDC} over random 3-SAT instances gives \begin{equation} \overline{ \langle \nabla_C \nabla C \rangle_0 } = -\frac{3m}{4} \end{equation} where here the overline denotes the average over random instances. Applying this in \thmref{thm:QAOA1} gives the averaged leading-order change in QAOA$_1$ expected cost for such random Max-3-SAT instances to be \begin{eqnarray} \overline{\langle C\rangle_1} \,=\, \langle C\rangle_0 + \frac34 m \gamma \beta + \dots \,=\, \frac78 m+ \frac34 m \gamma \beta + \dots \end{eqnarray} where $\langle C\rangle_0 = \tfrac78 m$ corresponds to the expected solution value obtained from uniform random guessing. Similar considerations apply to the higher-order terms of \thmref{thm:QAOA1}. An important consideration when using averages over a class of problems is how well they represent the characteristics of typical instances. The variance with respect to instances in the class is one measure of this. As an example, consider random Max-3-SAT with a fixed clause to variable ratio $m = \alpha n$ as $n$ increases. This scaling provides a high concentration of hard instances when $\alpha$ is close to the transition between mostly satisfiable and mostly unsatisfiable instances~\cite{kirkpatrick94}. In this case, the variance scales as \begin{equation} \mbox{var}( \langle \nabla_C \nabla C \rangle_0 ) \sim \frac{9 n \alpha^2}{128}. \end{equation} Thus, the relative deviation in $\langle \nabla_C \nabla C \rangle_0$ is proportional to $1/\sqrt{n}$. Such concentration indicates that the average leading-order change $\langle C\rangle_1$ is representative of the behavior of individual instances. Thus averaging over the class of problems gives a simple representative expression for how QAOA performs for this class of problems when the angles are small. Similar considerations apply to higher-order terms. 
For individual 3-SAT instances, the QAOA$_1$ cost expectation value can be efficiently evaluated, for example, by extending the prescription of \secref{sec:MaxCutQUBO} (cf. \secref{sec:classicalAlgGeneral2}). However, for other problem classes, or higher-order terms in the expansion, or QAOA$_p$ more generally, evaluating such quantities can be challenging. In such cases, averaging over a class of problems can be a simpler proxy for the behavior of typical instances than per-instance evaluation, and, for example, applied toward obtaining good algorithm parameters. \subsection{ Analysis of QAOA$_1$ for QUBO problems and MaxCut}\label{sec:MaxCutQUBO} Our framework is useful for \textit{exact} analysis of QAOA with \emph{arbitrary} angles, not just to leading-order contributions or in small-angle regimes. Such analysis is challenging in general and typically requires some degree of problem specialization. Here we show how our framework may be used to succinctly reproduce analytic performance results previously obtained for MaxCut~\cite{wang2018quantum}, and extend such analysis to the wider class of QUBO problems. For these problems we have the following. \begin{lem} \label{lem:quboGrads} For a QUBO cost Hamiltonian $C = a_0 I + \sum_{j}a_{j}Z_j + \sum_{i<j}a_{ij}Z_iZ_j =: a_0 I + C_{(1)} + C_{(2)}$ we have for $k\in{\mathbb Z}_+$ \begin{eqnarray} \nabla^{2k}C = 4^k C_{(1)} \, + \, 16^{k-1}\nabla^2 C_{(2)},\\ \nabla^{2k+1}C = 4^k \nabla C_{(1)} \, + \,16^{k} \nabla C_{(2)}. \end{eqnarray} \end{lem} We prove the lemma in \appref{app:klocal}. Similar results follow for higher degree cost Hamiltonians (towards analysis of QAOA for problems beyond QUBOs). For example, for a strictly $3$-local cost Hamiltonian $C=\sum_{i<j<k}a_{ijk}Z_iZ_jZ_k$ it can be shown that any $\nabla^\ell C$ for $\ell\geq 4$ can be expressed as a real linear combination of $C$, $\nabla C$, $\nabla^2 C$, and $\nabla^3 C$. We use \lemref{lem:quboGrads} to sum the series $U_M^\dagger CU_M= C+i\beta\nabla C -\tfrac12\beta^2\nabla^2C+\dots$. Consider an instance of MaxCut, i.e., a graph $G=(V,E)$ with $|V|=n$ and $|E|=m$, with cost Hamiltonian $ C=\frac{m}{2}I-\frac12\sum_{(ij)\in E}Z_iZ_j$, and the other related operators shown in \tabref{tab:summary}. From \lemref{lem:quboGrads}, for a strictly quadratic cost Hamiltonian $C_Z=\sum_{uv}c_{uv}Z_uZ_v$ we have $\nabla^{2k+1} C_Z = 16^k \nabla C_Z$ and $\nabla^{2k} C_Z = 16^{k-1} \nabla^2 C_Z$, where $\nabla C_Z$ and $\nabla^2 C_Z$ are easily derived as Pauli operator expansions given in \tabref{tab:summary}. Hence, it follows $U_M^\dagger(\beta) C_ZU_M(\beta)=C_Z-\tfrac14\sin(4\beta)i\nabla C_Z-\tfrac18 \sin^2(2\beta)\nabla^2 C_Z$, and so applying \thmref{thm:QAOA1} for QAOA$_1$ yields \begin{eqnarray} \label{eq:expecC1maxcut} \langle C\rangle_1 &=& \langle C\rangle_0 \,\,+\, \frac{\sin(4\beta)}{4}\, \bra{\gamma}i\nabla C \ket{\gamma} -\frac{\sin^2(2\beta)}{8} \, \bra{\gamma} \nabla^2 C\ket{\gamma}, \end{eqnarray} for MaxCut, where $\ket{\gamma}=U_P(\gamma)\ket{s}$ and the angles $\gamma,\beta$ are arbitrary. (We give a more detailed derivation of \eqref{eq:expecC1maxcut} in \appref{sec:QUBOs}.) The right-hand side expectation values depend on the structure of the graph $G$, as reflected in \eqsref{eq:expecgammanablaC}{eq:expecgammanabla2C}, and may be further reduced. Indeed, we previously computed the quantities of \eqref{eq:expecC1maxcut} for the case of MaxCut in \cite{wang2018quantum,hadfield2018thesis} in terms of the proble graph parameters; comparing to \cite[Thm. 
1]{wang2018quantum}, we have \begin{equation} \label{eq:maxcutnablaCexpec} \bra{\gamma}i\nabla C \ket{\gamma} = \sin(\gamma) \sum_{u\in V} d_u\, \cos^{d_u-1}(\gamma), \end{equation} where $d_u=\deg(u)$ is the graph degree of vertex $u$, and \begin{equation}\label{eq:maxcutnabla2Cexpec} \bra{\gamma}\nabla^2 C \ket{\gamma} = 2\sum_{(uv)\in E} \cos^{d_u+d_v-2f_{uv}-2}(\gamma)\, (1-\cos^{f_{uv}}(2\gamma)), \end{equation} where $f_{uv}$ gives the number of triangles ($3$-cycles) containing the edge $(uv)$ in $G$. In particular, if $G$ is triangle free then $\bra{\gamma}\nabla^2 C \ket{\gamma}=0$. Hence we may always efficiently compute $\langle C \rangle_1$ for a given instance of MaxCut. Moreover, \eqssref{eq:expecC1maxcut}{eq:maxcutnablaCexpec}{eq:maxcutnabla2Cexpec} lead to QAOA$_1$ performance bounds (i.e., the parameter-optimized expected approximation ratio achieved) for MaxCut for particular classes of graphs; see~\cite{wang2018quantum,hadfield2018thesis}. Likewise, for general QUBO cost Hamiltonians, applying \lemref{lem:quboGrads} yields \begin{eqnarray} \label{eq:QAOA1QUBO} \langle C \rangle_1 = \langle C \rangle_0 +\frac{\sin (2\beta)}{2} \bra{\gamma}i\nabla C_{(1)} \ket{\gamma} + \frac{\sin(4\beta)}{4} \bra{\gamma}i\nabla C_{(2)} \ket{\gamma} - \frac{\sin^2(2\beta)}{8} \bra{\gamma}\nabla^2 C_{(2)} \ket{\gamma} . \end{eqnarray} The right-hand side expectation values are independent of $\beta$ and may each be similarly computed as in the case of MaxCut as polynomials of~$\gamma$ that reflect the cost Hamiltonian coefficients~$a_\alpha$ and underlying adjacency graph, see e.g. \cite[App. E]{hadfield2018thesis}. Alternatively, they may be computed using \eqsref{eq:expecgammanablaC}{eq:expecgammanabla2C}. Practical computation of these quantities typically takes advantage of linearity of expectation $\langle C\rangle_p = \sum_j \langle C_j\rangle_p$ for cost Hamiltonians $C=\sum C_j$ where each $C_j$ is a single weighted Pauli term, as well as problem-locality considerations, as discussed in \secsref{sec:probLocality}{sec:lightcones}. Similar formulas for more general problems may be obtained with our framework. As an example we consider a variant of Max-2-SAT. \subsubsection{Example: Analysis of QAOA$_1$ for Balanced-Max-$2$-SAT} \label{sec:balancedMax2Sat} Here we apply our framework to QAOA$_1$ for the Balanced-Max-$2$-SAT problem, defined as instances of Max-$2$-SAT such that each variable appears negated or unnegated in an equal number of clauses. Assuming the Unique Games Conjecture in computational complexity theory, there is a sharp threshold $\theta\simeq 0.943$ such that it is NP-hard to hard to approximate this problem to $\theta$ or better whereas a $\theta-\epsilon$ approximation can be achieved any polynomial time \cite{khot2007optimal}[Thms. 3 and 4]; see also \cite{austrin2007balanced}.\footnote{A related variant of Max-$2$-SAT has been previously explored for quantum annealing~\cite{santra2014max}. In that case, random instances were constructed using equal probabilities of a variable appearing as negated or unnegated in each clause; cf. 
also \secref{sec:3SAT}.} Here for simplicity we consider problem instance with the additional assumption that each pair of variables $x_i,x_j$ appears in at most one clause (i.e., one of $x_i \vee x_j$, $\overline{x}_i \vee x_j$, $x_i \vee \overline{x}_j$, or $\overline{x}_i \vee \overline{x}_j$).\footnote{We may relax this assumption at the expense of a more complicated proof and presentation of \eqref{eq:expecC1balmax2sat} due to bookkeeping required, in which case $G$ generalizes to a multigraph.} We use $(-1)^{i\oplus j}$ to denote the parity of a given clause, which is $-1$ when only one of $x_j$, $x_j$ is negated. Then for a Balanced-Max-$2$-SAT instance with $m$ clauses and $n$ variables, the cost Hamiltonian takes a particularly simple form \begin{equation} \label{eq:costHamBalancedMaxSat} C = \frac34 I - \frac14 \sum_{(ij)\in E}(-1)^{i\oplus j} Z_iZ_j, \end{equation} where we have identified the graph $G=([n],E)$ with $|E|=m$ edges corresponding to the problem clauses. For computing $\langle C\rangle_1=\tfrac34{m}+\sum_{(ij)\in E}\langle C_{ij}\rangle_1$, the QAOA$_1$-cone of each $C_{ij}$ consists of terms corresponding to edges adjacent to $(ij)$ in $E$. A neighbor $k$ of both $i$ and $j$ defines a triangle in $G$, with parity defined to be the product of its edge parities. Let $F^{\pm}$ denote the number of triangles in $G$ with parity $\pm1$. \footnote{A balanced instance need not have balanced triangles. E.g, the single-triangle instance $x_1\vee \overline{x}_2+x_2\vee \overline{x}_3+x_3\vee \overline{x}_1$ has $(f^+,f^-)=(0,1)$ whereas the two-triangle instance $x_1\vee x_2+x_2\vee \overline{x}_3+x_3\vee \overline{x}_1 + \overline{x}_2 \vee \overline{x}_4 + \overline{x}_2 \vee \overline{x}_5 + x_4\vee x_5$ has $(f^+,f^-)=(2,0)$.} Similarly, for each edge define $f^+=f^+_{ij},f^-=f^-_{ij}$ with respect to the triangles containing $(ij)$. Applying our framework and results above \eqrefp{eq:QAOA1QUBO}, in \appref{sec:QUBOs} we show the exact QAOA$_1$ cost expectation is given as a function of the angles and problem instance by \begin{eqnarray} \label{eq:expecC1balmax2sat} \langle C\rangle_1&=&\langle C\rangle_0 \,+\ \frac{\sin4\beta\sin(\gamma/2)}8\sum_{(ij)\in E}(\cos^{d_i}(\gamma/2)+\cos^{d_j}(\gamma/2))\nonumber\\ &-&\frac{\sin^22\beta}{4}\sum_{(ij)\in E} \cos^{d+e-2f_{ij}^+-2f_{ij}^-}(\gamma/2)\,g(f_{ij}^+,f_{ij}^-; \gamma,\beta), \end{eqnarray} where $\langle C\rangle_0=\frac34m$, $d_i+1$ is the degree of vertex $i$ in $G$, and we have defined $$g(f^+,f^-;\gamma,\beta)=\sum_{\ell=1,3,5,\dots}^{f^++f^-} \cos^{2(f^++f^--\ell)}(\gamma/2)\sin^{2\ell}(\gamma/2) {\boldsymbol{i}}nom{f^+}{\ell} \prescript{}{2}{\mathbf{F}}_1(-f^-,-\ell;f^+-\ell+1;-1).$$ Here $\prescript{}{2}{\mathbf{F}}_1$ is the Gaussian (ordinary) hypergeometric function; see e.g.~\cite{slater1966generalized}. In particular, the function ${\boldsymbol{i}}nom{f^+}{\ell} \prescript{}{2}{\mathbf{F}}_1(-f^-,-\ell;f^+-\ell+1;-1)$ occurring in $g(\cdot)$ evaluates to $f^+-f^-$ for $\ell=1$, and to ${\boldsymbol{i}}nom{f^+}{3}-{\boldsymbol{i}}nom{f^-}3-\tfrac12 f^+f^-(f^+-f^-)$ for $\ell=3$. Hence for a given Balance Max-2-SAT instance $\langle C \rangle_1$ may be efficiently computed. In some cases we can obtain further simplified expressions. 
For example, in the case that all triangles in the instance are of the same parity, $\langle C \rangle_1$ is easily seen to reduce to the same formula for MaxCut given in \secref{sec:MaxCutQUBO} and previously obtained in~\cite{wang2018quantum,hadfield2018thesis} (up to constant factors and the shift $\gamma\rightarrow\gamma/2$ due to the affine mapping between the respective cost Hamiltonians). Indeed, along the way to proving \eqref{eq:expecC1balmax2sat} in \appref{sec:QUBOs}, we rederive \eqssref{eq:expecC1maxcut}{eq:maxcutnablaCexpec}{eq:maxcutnabla2Cexpec} using our framework. Results such as \eqref{eq:expecC1balmax2sat} may be used to bound the expected QAOA approximation ratio $\langle R\rangle_1 = \langle C\rangle_1 / c^* \geq \langle C\rangle_1 / m$, where $c^*$ is the optimal cost value. \subsection{QAOA$_1$ with small phase angle and arbitrary mixing angle} \label{sec:smallphase} We conclude \secref{sec:QAOA1} by considering QAOA$_1$ with small phase angle $\gamma$ and arbitrary mixing angle~$\beta$ (i.e., the converse case of \secref{sec:smallMixingAngle}). For simplicity, here we consider QUBO problems as in \secref{sec:MaxCutQUBO}; similar but more complicated formulas follow for more general cost Hamiltonians. We show how the leading order contribution to the cost expectation value reflects cost Hamiltonian structure (Fourier coefficients) of \eqref{eq:costHamZs}. \begin{theorem} \label{thm:smallgamma} Consider a QUBO cost Hamiltonian $C=a_0I+\sum_j a_j Z_j + \sum_{j<k} a_{jk}Z_jZ_k$. The QAOA$_1$ expectation value of $C$ to second order in $\gamma$ is \begin{eqnarray*} \label{eq:smallGammaApprox} \langle C \rangle_1 &\simeq& \langle C\rangle_0 + 2\gamma \left( \sin(2\beta)\sum_j a_j^2 + \sin(4\beta)\sum_{j<k} a_{jk}^2 \right) \,-\, 4\gamma^2 \sin^2(2\beta) \sum_{i<j}a_{ij}\left( a_ia_j +\sum_{k} a_{ik}a_{jk}\right) \end{eqnarray*} \end{theorem} \begin{proof} Recall $C_{(k)}$ denotes the Hamiltonian defined as the (sum of the) strictly $k$-local terms of~$C=a_0 I + C_{(1)} + C_{(2)}$. Expanding each expectation value in \eqref{eq:QAOA1QUBO} to second order in~$\gamma$ and applying \lemsref{lem:genGrads}{lem:expecGrad} gives \begin{eqnarray*} \bra{\gamma} i\nabla C_{(j)} \ket{\gamma}&=& \cancel{\langle i\nabla C_{(j)}\rangle_0} - \gamma \langle \nabla_C \nabla C_{(j)}\rangle_0 - \frac{\gamma^2}2 \cancel{i\langle \nabla^2_C \nabla C_{(j)}\rangle_0} + \dots, \end{eqnarray*} for $j=1,2$, where $\langle \nabla_C \nabla C_{(1)}\rangle_0 =-4\sum_{j}a_j^2$ and $\langle \nabla_C \nabla C_{(2)}\rangle_0 =-8\sum_{j<k}a_{jk}^2$ from \lemref{lem:expecCDC}, which together give the result in the first parenthesis. For the last term in \eqref{eq:QAOA1QUBO} we have \begin{eqnarray*} \bra{\gamma} \nabla^2 C_{(2)} \ket{\gamma}&=& \cancel{\langle \nabla^2 C_{(2)}\rangle_0} - \cancel{\gamma i \langle \nabla_C \nabla^2 C_{(2)}\rangle_0} - \frac{\gamma^2}2\langle \nabla^2_C \nabla^2 C_{(2)}\rangle_0 + \dots, \end{eqnarray*} with $\langle \nabla_C^2 \nabla^2 C_{(2)}\rangle_0 =64 \sum_{i<j}a_ia_ja_{ij}+64 \sum_{i<j}\sum_{k\in nbd(i,j)} a_{ki}a_{ij}a_{jk}$ which we show in \lemref{lem:higherOrderExpectations} of \appref{app:expecVals}. \end{proof} \begin{rem} \label{rem:samplingComplexity} The quantities of \thmref{thm:smallgamma} may be alternatively expressed in terms of expectation values of classical functions (as in \thmref{thm1:smallAngles}), see \lemref{lem:higherOrderExpectations}. 
However, in contrast to the case of QAOA$_1$ with small mixing angle considered in \thmref{thm:smallBetap1}, here we do not obtain a simple expression for the leading-order probability of measuring each given~$x$ (i.e., the result of \thmref{thm:smallgamma} is not of the form $\langle C\rangle_1 \simeq \sum_x c(x) P'(x)$ for some explicit probability distribution $P'(x)$). Hence, \thmref{thm:smallgamma} does not directly yield a simple classical algorithm approximately emulating QAOA$_1$ in this parameter regime in the same way as from \thmref{thm1:smallAngles}. Indeed, a general efficient classical procedure for sampling from QAOA$_1$ circuits for arbitrary parameters would imply the collapse of the polynomial hierarchy in computational complexity~\cite{farhi2016quantum}, and hence such procedures are believed impossible. Our results are consistent with this viewpoint and indicate that sampling remains hard even as the phase angle becomes small (for constant but sufficiently large mixing angle), but as expected becomes relatively easy when both angles, or only the mixing angle, are sufficiently small. Additional intuition may be gained from the sum-of-paths viewpoint of \sh{\appref{app:sumOfPaths}}. \end{rem} \section{Application to QAOA$_p$} \label{sec:QAOAp} The gradient operator framework extends to QAOA$_p$ with arbitrary level~$p$ and parameters~$\gamma_i,\beta_j$. In the Heisenberg picture, the $j$th QAOA level acts by conjugating the observable resulting from the preceding ($(j+1)$th) level, iterating over $j=p,p-1,\dots,1$. This approach yields formulas similar to but more general than~\lemref{lem:QAOA1conj}. \begin{lem} \label{lem:QAOAconjGen} The QAOA$_p$ operator $Q=Q_pQ_{p-1}\dots Q_2Q_1$, where $ Q_j= U_M(\beta_j) U_P(\gamma_j)$, acts by conjugation on the cost Hamiltonian $C$ as \begin{equation} \label{eq:costHamConjGen} Q^\dagger C Q = \sum_{\ell_{1}=0}^\infty \sum_{k_{1}=0}^\infty \cdots \sum_{\ell_{p-1}=0}^\infty \sum_{k_{p-1}=0}^\infty \sum_{\ell_p=0}^\infty \sum_{k_p=0}^\infty \frac{(i\gamma_1 \nabla_C)^{\ell_1} (i\beta_1\nabla)^{k_1} \dots (i\gamma_p \nabla_C)^{\ell_p} (i\beta_p\nabla)^{k_p} }{\ell_1!k_1!\ell_2!k_2!\dots\ell_p!k_p!} C . \end{equation} \end{lem} The QAOA$_p$ cost expectation then follows as $\langle C\rangle_p=\langle Q^\dagger C Q \rangle_0$. \begin{proof} Follows by recursively applying \eqref{eq:infinitesimalConj} for each $Q_j$ as in the proof of \lemref{lem:QAOA1conj}. \end{proof} \subsection{Effect of $p$th level of QAOA$_p$} \label{sec:smallAngleQAOAp1} We next consider the change in cost expectation between a level-($p-1$) QAOA circuit (with fixed parameters), and the level-$p$ circuit constructed from adding an additional QAOA level. This case concerns the last applied layer of the QAOA circuit, or, alternatively, the effect of an additional layer. The following theorem generalizes \thmref{thm:QAOA1}. \begin{theorem} \label{thm:costExpecHeis} For QAOA$_p$ the cost expectation satisfies \begin{equation} \langle C \rangle_p = \langle C \rangle_{p-1} + \, \sum_{\ell=0}^\infty \sum_{k=1}^\infty \frac{(i\gamma_p)^\ell(i\beta_p)^k}{\ell!\,k!} \,\langle\nabla_C^\ell \nabla^k C\rangle_{p-1}.
\end{equation} \end{theorem} \noindent Here, with respect to the fixed QAOA$_{p}$ state $\ket{\boldsymbol{\gamma}\boldsymbol{\beta}}_{p}$, $\langle \cdot \rangle_{p-1}$ indicates the expectation value with respect to the corresponding QAOA$_{p-1}$ state resulting from application of only the first $p-1$ QAOA stages (or, equivalently, $\langle \cdot \rangle_p$ with $\gamma_{p},\beta_{p}$ set to $0$). \begin{proof} The result follows by applying \lemref{lem:QAOAconjGen} to the right-hand side of $\bra{\boldsymbol{\gamma}\boldsymbol{\beta}}_{p}C \ket{\boldsymbol{\gamma}\boldsymbol{\beta}}_{p}= \bra{\boldsymbol{\gamma}\boldsymbol{\beta}}_{p-1}\left(Q_p^\dagger CQ_p \right) \ket{\boldsymbol{\gamma}\boldsymbol{\beta}}_{p-1}, $ where $Q_p:=U_M(\beta_p)U_P(\gamma_p)$. \end{proof} \subsubsection{Example: QAOA$_p$ with small phase and mixing angles at level $p$} When the parameters of the final $p$th level of QAOA$_p$ are small, we obtain a generalization of \thmref{thm1:smallAngles}. \begin{cor} [Small angles at level $p$]\label{cor:smallAnglesp} For QAOA$_{p}$ with $p$th level angles $\gamma_p,\beta_p$, to second order we have \begin{equation} \label{eq:thmexpecCpsmall} \langle C \rangle_{p} - \langle C \rangle_{p-1} \simeq \beta_p \langle i\nabla C \rangle_{p-1} - \frac{\beta_p^2}{2} \langle \nabla^2 C \rangle_{p-1} - \gamma_p \beta_p \langle \nabla_C \nabla C\rangle_{p-1}, \end{equation} where the neglected terms are cubic or higher in $\gamma_p,\beta_p$. \end{cor} \begin{proof} The result follows by collecting the terms of \thmref{thm:costExpecHeis} up to quadratic order. \end{proof} \subsubsection{Example: QAOA$_p$ with small mixing angle at level $p$} \label{sec:smallMixingAnglep} We also consider the case of QAOA$_p$ with small mixing angle $\beta_p$, but arbitrary phase angle $\gamma_p$. This case applies, for instance, in parameter schedules inspired by quantum annealing, where $\beta$ approaches zero at the end of the anneal. The following result generalizes \thmref{thm:smallBetap1}; the proof is similar and is given in \appref{app:smallMix}. \begin{cor} \label{cor:smallBeta} Consider a QAOA$_{p-1}$ state $\ket{\boldsymbol{\gamma \beta}}_{p-1} =\sum_{x}q_{x} \ket{x}$ to which another ($p$th) level of QAOA with angles $\beta:=\beta_p,\gamma:=\gamma_p$ is applied. The change in probability to first order in $\beta$ is \begin{eqnarray}\label{eq:probsmallbetap} P_{p}(x)-P_{p-1}(x) = -2\beta \sum_{j=1}^n r_{xj} \sin(\alpha_{xj}+\gamma \partial_j c(x)) +\dots \end{eqnarray} for each $x\in\{0,1\}^n$, where we have defined the real polar variables $r,\alpha$ as $r_{xj}=|q^\dagger_{x^{(j)}}q_{x}|$ and $ q^\dagger_{x^{(j)}}q_{x} = r_{xj}e^{-i\alpha_{xj}}$, which reflect the degree to which the coefficients $q_{x}$ are non-real. The expectation value then changes as \begin{equation} \label{eq:expecsmallbetap} \langle C \rangle_{p} - \langle C \rangle_{p-1} = -2\beta \sum_x c(x) \sum_{j=1}^n r_{xj} \sin(\alpha_{xj}+\gamma \partial_j c(x)) +\dots. \end{equation} \end{cor} \subsection{Leading-order QAOA$_p$}\label{sec:smallAngleQAOAp} When all QAOA angles are small or viewed as expansion parameters, \lemref{lem:QAOAconjGen} leads to the following generalization of \thmref{thm:QAOA1}.
\begin{theorem} \label{thm:smallprecursed} For QAOA$_p$, to fifth order in the angles $\gamma_1,\beta_1,\dots, \gamma_p,\beta_p$ we have \begin{eqnarray} \label{eq:expecCp0} \langle C \rangle_p \, \,=\, \, \langle C \rangle_{0} &-&\langle \nabla_C \nabla C \rangle_0 \sum_{1\leq i\leq j}^p \gamma_i \beta_j \nonumber\\ &+& \langle \nabla_C \nabla^3 C \rangle_0 \left( \sum_{i\leq j< k<l} \gamma_i \beta_j \beta_k \beta_l + \frac12 \sum_{i\leq j<k} \gamma_i (\beta^2_j \beta_k+\beta_j \beta^2_k) + \frac1{3!}\sum_{i\leq j} \gamma_i \beta_j^3 \right) \nonumber\\ &+& \langle \nabla_C^3 \nabla C \rangle_0 \left( \sum_{i<j< k\leq l} \gamma_i \gamma_j \gamma_k \beta_l + \frac12 \sum_{j<k \leq l} (\gamma^2_j \gamma_k+\gamma_j \gamma^2_k) \beta_l + \frac1{3!}\sum_{i\leq j} \gamma^3_i \beta_j \right) \nonumber\\ &+&\langle \nabla^2_C \nabla^2 C \rangle_0 \Bigg( \sum_{i<j\leq k< l} \gamma_i \gamma_j \beta_k \beta_l + \frac12 \sum_{i\leq k<l} \gamma^2_i \beta_k \beta_l + \frac12 \sum_{i<j\leq k} \gamma_i \gamma_j \beta_k^2 \nonumber\\ && \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \,\,+\,\, \frac14 \sum_{i\leq j} \gamma^2_i \beta_j^2 \,+\,\sum_{i\leq j < k \leq l} \gamma_i \beta_j \gamma_k \beta_l \Bigg) \;\;\;+\;\;\;\dots \end{eqnarray} where the terms not shown to the right are each \textbf{order six} or higher in the angles and the gradient operator expectations. \end{theorem} The gradient operator initial state expectations of \eqref{eq:expecCp0} are expressed in terms of cost difference functions in \eqsssref{eq:expecDCis0}{eq:expecDcD3C}{eq:expecDc3DC}{eq:expecDc2D2C}, and \lemref{lem:higherOrderExpectations} of \appref{app:expecVals}. These factors depend only on the cost function. Higher order contributions may be derived similarly. \sh{ \begin{rem} Observe that the $k$-th order coefficient of \eqref{eq:expecCp0} consists of sums over products of angles, with each product of degree $k$. At least one of these sums is indexed by $k$ distinct angles, and the others involve $k$ distinct angles or fewer. Sums with $k$ distinct angles contain a number of terms that scales as $O(p^k)$ and so these factors dominate when $p$ becomes large. This observation extends to more general initial states or mixing operators (recalling \remref{rem:GenLems}). \end{rem} } \begin{proof} We compute $\langle C\rangle_p= \langle Q^\dagger C Q\rangle_0$ by taking the expectation value of \eqref{eq:costHamConjGen} with respect to $\ket{s}$ and collecting the terms proportional to each cost gradient expectation value. From \lemref{lem:genGrads}, only terms for which $\ell_1+k_1+\dots+\ell_p+k_p$ is even can give nonzero contributions. Moreover, from \lemref{lem:expecGrad} any expectation value with a leftmost $\nabla$ or a rightmost $\nabla_C$ is zero. Applying these rules the only terms which can contribute up to fifth order are \begin{eqnarray*} \langle C \rangle_p \,=\, \langle C \rangle_{0} &+& a_0 \langle \nabla_C \nabla C \rangle_0 + a_1\langle \nabla_C \nabla^3 C \rangle_0 + a_2\langle \nabla^2_C \nabla^2 C \rangle_0\\&+& \, a_3\langle \nabla^3_C \nabla C \rangle_0 + a_4\langle \nabla_C \nabla \nabla_C\nabla C \rangle_0\,+\,\dots \end{eqnarray*} where each $a_i$ is a real homogeneous polynomial in the angles corresponding to the possible ways of generating the associated gradient operator in the sum \eqref{eq:costHamConjGen}. From \lemref{lem:expecCDC} the second order term is $\langle \nabla_C \nabla C \rangle_0 = \frac2{2^n}\sum_x c(x) dc(x)$.
The coefficient $a_0$ follows from each possible selection of the ordered pair $(i\gamma\nabla_C)(i\beta\nabla)C$ in \eqref{eq:costHamConjGen} and is easily seen to be $a_0=\sum_{1\leq i\leq j}^p \gamma_i \beta_j$. At fourth order, similarly, the coefficients $a_1,\dots,a_4$ are degree $4$ polynomials and easily calculated as sums corresponding to all possible ways of generating the associated gradient, when the order of each (super)operator product in \eqref{eq:costHamConjGen} is accounted for. In particular, $\langle\nabla_C \nabla \nabla_C \nabla C\rangle_0 = \langle \nabla^2_C \nabla^2 C \rangle_0$ from \eqref{eq:costJacobi}, which gives $a_4=\sum_{i\leq j < k \leq l} \gamma_i \beta_j \gamma_k \beta_l$ and corresponds to the final term of the last parenthesis. \end{proof} Repeating the above proof for the QAOA$_p$ probability $P_p(x)=\langle H_x\rangle_p =\langle \ket{x}\bra{x}\rangle_p$ as in the proof of \thmref{thm:QAOA1} produces the leading-order expression for~$P_p(x)$ given in \tabref{tab:tab1smallangles}, and similarly for higher order terms. Equation~\eqref{eq:expecCp0} shows that for any parameter values the cost function expectation value achieved by QAOA$_p$ can be expressed as a linear combination of the initial expectation values of $C$ and its gradients. These expectation values depend only on the structure of the cost function. The dependence on QAOA parameters occurs through the coefficients of each expectation value, which are polynomials of the algorithm parameters. These polynomials are homogeneous but not symmetric. For example, the leading-order coefficient polynomial $\sum_{1\leq i\leq j}^p \gamma_i \beta_j$ depends more strongly on early $\gamma$ values and late $\beta$ values, as opposed to late $\gamma$ values or early $\beta$ values. Hence \eqref{eq:expecCp0} and its higher order generalizations may help select parameters. Indeed, for a few-parameter schedule, such as $\gamma_j = \gamma_0 + aj$ and $\beta_j = \beta_0 + bj$, the coefficients of \eqref{eq:expecCp0} can be computed in terms of the reduced parameter set. \sh{In particular, for such linear schedules satisfying $|\gamma_j|,|\beta_j|\leq\Delta $ the leading-order coefficient grows with $p$ as $|\sum_{1\leq i\leq j}^p \gamma_i \beta_j|=O(\Delta p^2)$.} \sh{ As an aside we show how the leading-order term of \eqref{eq:expecCp0} yields a simple argument showing that QAOA can also obtain the quadratic quantum speedup for Grover's problem, reproducing results rigorously obtained in~\cite{jiang2017near}. For convenience consider the case of strings of $n$ variables with a single marked solution $x^*$, and corresponding cost Hamiltonian $C=\ket{x^*}\bra{x^*}$. Following~\cite{jiang2017near}, we apply QAOA with the standard initial state and mixer, and fixed parameters $\gamma_j=\gamma=\pi$ and $\beta_j=\beta=\pi/n$, which gives $\sum_{1\leq i\leq j}^p \gamma_i \beta_j=O(p^2/n)$. In this case the mixing angles become very small as $n$ becomes large, and hence all products of phase and mixing angles also become small. Hence, we can reasonably assume that the leading-order term of \eqref{eq:expecCp0} dominates for large $n$, and neglect the higher-order terms. Since $\langle C \rangle_0 = 1/2^n$ and $\langle \nabla_C \nabla C \rangle_0 = -2n/2^n$ for this problem, to leading order we have $\langle C\rangle_p \simeq 1/2^n + O(p^2/2^n)$, where the factors of $n$ have canceled.
Rearranging terms then implies that the success probability $P_p(x^*)=\langle C\rangle_p$ is $\Theta(1)$ when $p=\Theta(\sqrt{2^n})$, reproducing Grover's famous speedup for QAOA as obtained in~\cite{jiang2017near}. While this approach does not immediately reveal constant factors or other details, it is particularly useful for obtaining qualitative insights. Beyond Grover's problem, we will apply related ideas using the low-order terms of \thmref{thm:smallprecursed} to obtain further insights in future work. We conclude this subsection with two useful remarks.} \begin{rem} The included terms of the leading-order coefficient polynomial $\sum_{1\leq i\leq j}^p \gamma_i \beta_j$ of \eqref{eq:expecCp0} are also illuminated by the following general formulas. Observe that using $U^\dagger_MU_M=I$ we have \begin{eqnarray} U_M(\beta)U_P(\gamma)=e^{-i\gamma (U_M(\beta)CU^\dagger_M(\beta))}U_M(\beta). \end{eqnarray} Hence for QAOA$_p$ with $Q:=U_M(\beta_p)U_P(\gamma_p)\dots U_M(\beta_1)U_P(\gamma_1)$, we similarly have \begin{eqnarray} Q= e^{-i\gamma_p (U_M(\beta_p)CU^\dagger_M(\beta_p))} \, e^{-i\gamma_{p-1} (U_M(\beta_p+\beta_{p-1})CU^\dagger_M(\beta_p+\beta_{p-1}))} \dots e^{-i\gamma_1 (U_M(\overline{\beta})CU^\dagger_M(\overline{\beta}))}U_M(\overline{\beta}) \end{eqnarray} for $\overline{\beta}:=\sum_{j=1}^p\beta_j$, where we have used the property $U_M(\alpha)U_M(\beta)=U_M(\alpha+\beta)$. \end{rem} \begin{rem} To consider QAOA with arbitrary initial states, \lemref{lem:QAOAconjGen} may be similarly applied by taking expectation values of \eqref{eq:costHamConjGen} to obtain results analogous to those of \thmssref{thm1:smallAngles}{thm:allanglessmall}{thm:smallprecursed}. \sh{Similar results follow for more general QAOA mixing Hamiltonians under mild conditions. In particular, one only requires the initial state to be an eigenstate of the mixing Hamiltonian for the leading-order term of \thmref{thm:smallprecursed} to be provably of the same form (as indicated in \remref{rem:GenLems}).} \end{rem} \subsubsection{Leading-order QAOA$_p$ behaves like an effective QAOA$_1$} \label{sec:smallAngleQAOApClass} \sh{\lemref{lem:QAOAconjGen} and \thmref{thm:smallprecursed}} give perturbative expansions in the parameters $\gamma_1,\beta_1,\dots,\gamma_p,\beta_p$, with respect to conjugation by the Identity. These results suggest that when all QAOA angles are small in magnitude, only the relatively few nonzero low-order terms will effectively contribute. Hence when all parameters are sufficiently small, QAOA$_p$ can again be efficiently classically emulated by a similar argument to the $p=1$ case. \begin{proof}[Proof of \thmref{thm:allanglessmall}] The result follows from \thmref{thm:smallprecursed} and \eqref{eq:expecDCis0}. \end{proof} \begin{rem} When all angles are bounded such that $|\gamma_1|,|\beta_1|,\dots,|\beta_p|\leq L$, comparing \thmref{thm:allanglessmall} as $L\rightarrow 0^+$ to \thmref{thm1:smallAngles} shows that the cost expectation of QAOA$_p$ approaches that of an equivalent QAOA$_1$ with effective angles $\gamma',\beta'$ satisfying $$ \gamma'\beta' = \sum_{1\leq i\leq j}^p \gamma_i \beta_j .$$ \sh{Hence as $L$ becomes small the leading-order term of \thmsref{thm:allanglessmall}{thm:smallprecursed} becomes insensitive to the particular bounded parameter schedule the angles may be derived from.} \end{rem} Hence, when all angles are sufficiently small we may again classically emulate QAOA$_p$.
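To make the role of the leading-order coefficient concrete, the following minimal Python sketch (with a hypothetical bounded linear parameter schedule, chosen purely for illustration) evaluates $\sum_{1\leq i\leq j}^p \gamma_i \beta_j$, which by the preceding remark equals the product $\gamma'\beta'$ of the effective QAOA$_1$ angles in the small-angle limit.
\begin{verbatim}
# Minimal sketch: leading-order coefficient sum_{1 <= i <= j <= p} gamma_i * beta_j
# of Eq. (eq:expecCp0), evaluated for a hypothetical bounded linear schedule.
def leading_coeff(gammas, betas):
    p = len(gammas)
    return sum(gammas[i] * betas[j] for i in range(p) for j in range(i, p))

p, Delta = 20, 0.05
gammas = [Delta * (j + 1) / p for j in range(p)]   # phase angles ramp up
betas  = [Delta * (p - j) / p for j in range(p)]   # mixing angles ramp down
print(leading_coeff(gammas, betas))   # effective gamma' * beta'; grows quadratically in p
\end{verbatim}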
\begin{rem} Repeating the analysis of \secref{sec:smallQAOA1class} we see that for $p=O(1)$ there exists a sufficiently large polynomial $q(n)$ such that if $|\gamma_1|,|\beta_1|,\dots,|\beta_p|=1/q(n)$ then QAOA$_p$ can be efficiently classically sampled from with absolute error of each probability $O(1/2^n)$. \end{rem} \subsubsection{QAOA beats random guessing} \label{sec:randomGuessing} Here we show that QAOA always beats random guessing, in the sense that there always exist polynomially small angles (as opposed to exponentially small) such that $\langle C\rangle_p$ beats $\langle C\rangle_0$ by an additive amount that is at worst polynomially small. The QAOA$_1$ case implies the same for QAOA$_p$. Whereas it is known that QAOA beats random guessing for specific problems (cf. \secref{sec:MaxCutQUBO}), the following result holds generally. \begin{cor}[QAOA beats random guessing] \label{cor:qaoa1randomguessing} Let $c(x)$ be a cost function on $n$ bits that we seek to maximize or minimize, with corresponding Hamiltonian $C$. Assume $c$ is nonconstant and $|c(x)|$ is bounded by a polynomial in $n$. Then there exists a polynomial $q(n)>0$ and angles $\gamma_1,\beta_1,\dots$ that achieve $\langle C \rangle_p = \langle C \rangle_0 +\Omega(1/q(n))$ for maximization, or similarly, $\langle C \rangle_p = \langle C \rangle_0-\Omega(1/q(n))$ for minimization. \end{cor} \begin{proof} Observe that as QAOA$_p$ subsumes QAOA$_1$ (i.e., by setting $\gamma_2=\beta_2=\dots=\beta_p=0$), it suffices to show the claim for $p=1$. Recall from \lemref{lem:expecCDC} that $\langle\nabla_C\nabla C\rangle_0 < 0$ for nonconstant $c(x)$, such that the sign of the leading-order contribution in the approximation $\widetilde{\langle C\rangle}_1$ of \thmref{thm1:smallAngles} (cf. \thmref{thm:QAOA1}) is equal to the sign of $\gamma\beta$, and hence the sign of $\widetilde{\langle C\rangle}_1-\langle C\rangle_0$ can be controlled by selecting the quadrants of $\gamma,\beta$ appropriately (i.e., for maximization take $\gamma,\beta>0$ or $\gamma,\beta<0$). The claim then follows by observing that we can select sufficiently small $|\gamma|,|\beta|$, and $\epsilon$, such that the error bound of \corref{cor:thm1errorboundeps} together with the triangle inequality gives the result. Specifically, for simplicity assume we seek to maximize a cost function $c(x)>0$; the general case is similar. Let $a:=-\langle \nabla_C\nabla C \rangle_0$, which satisfies $0<a<4n\|C\|^2$ from \lemref{lem:expecCDC} and \lemsref{lem:normGrad}{lem:normCGrad} in \appref{app:normBounds}, and let $\epsilon=(\frac1{8}\frac{a}{n\|C\|^2})^{4}$, $\gamma=\frac{\epsilon^{1/4}}{2\|C\|}$ and $\beta=\frac{\sqrt{\epsilon}}{2n}$. Here $\epsilon$ is selected to satisfy $\|C\|\epsilon=\tfrac12\gamma\beta a$. These choices satisfy the conditions of \corref{cor:thm1errorboundeps} to give $|\langle C\rangle_1-\widetilde{\langle C\rangle}_1|<\|C\|\epsilon$. Hence $\langle C\rangle_1 > \widetilde{\langle C\rangle}_1 - \|C\|\epsilon$ and so the claim follows as \begin{eqnarray*} \langle C\rangle_1 -\langle C\rangle_0 &=& \langle C\rangle_1 -\widetilde{\langle C\rangle}_1+\widetilde{\langle C\rangle}_1-\langle C\rangle_0 \\ &>&- \|C\|\epsilon + \gamma\beta a \,=\, \tfrac12\gamma\beta a \\ &=&\Omega(1/\textrm{poly}(n)) \end{eqnarray*} where the assumption $\|C\|=O(\textrm{poly}(n))$ implies the inverse-polynomial lower bound.
\end{proof} \noindent Clearly, the quantities of \corref{cor:qaoa1randomguessing} may be tightened significantly when considering specific cost functions, or by considering higher-order terms and more involved error bounds. \subsection{Causal cones and locality considerations for QAOA$_p$} \label{sec:lightconesp} Here we build on the definitions and discussion of \secref{sec:lightcones}. Suppose again we are given a cost Hamiltonian $C=a_0I+\sum_jC_j$ such that each Pauli term $C_j=a_jZ_{j_1}\dots Z_{j_{N_j}}$ acts on at most $|N_j|\leq k$ qubits. Assume each qubit appears in at most $D$ terms; the causal cone perspective is especially useful when $D$ is bounded. We let $L_{j,\ell}\subset [n]$ denote the lightcone of $C_j$ corresponding to the $\ell=1,\dots,p$ levels of a QAOA$_p$ circuit applied in turn, with $L_{j,\ell-1}\subset L_{j,\ell}$, as defined in~\tabref{tab:locality}. Note that as the QAOA levels act in reverse order with respect to conjugation, each $L_{j,\ell}$ corresponds to QAOA angles indexed by $p-\ell+1,\dots,p-1,p$; cf. \eqref{eq:costHamConjGen}. For example, $L_{j,1}$ corresponds to the single-level QAOA conjugation of $C_j$ with angles $\gamma_p,\beta_p$. For computing QAOA$_p$ expectation values such as $\langle C\rangle_p = \sum_j \langle C_j\rangle_p$, each $\langle C_j\rangle_p$ may again be computed independently, taking advantage of the QAOA-cones of the $C_j$. Each $\langle C_j\rangle_p$ can be computed by restricting the initial state and resulting operators to at most $k((D-1)(k-1))^p$ qubits, which for particular values may be significantly fewer qubits than $n$. Moreover, we may apply this idea for each QAOA layer in turn. \begin{theorem} \label{thm:lightconep} Let $C=a_0I+\sum_j C_j$, $C_j=a_j Z_{j_1}\dots Z_{j_{N_j}}$, be a $k$-local cost Hamiltonian (i.e., $|N_j|\leq k$), such that each qubit appears in at most $D$ of the $C_j$. Then for QAOA$_p$ with operator $Q=Q_p\dots Q_2Q_1$, $Q_i=U_M(\beta_i)U_P(\gamma_i)$, we have \begin{eqnarray} \label{eq:expecCplightcones} \langle C\rangle_p &=& \sum_j\, \bra{+}^{\otimes L_{j,p}} \,Q^\dagger|_{L_{j,p}}\, C_j\,Q|_{L_{j,p}}\, \ket{+}^{\otimes L_{j,p}}\\ &=& \sum_j \,\bra{+}^{\otimes L_{j,p}} \,Q_1^\dagger|_{L_{j,p}}\,Q_2^\dagger|_{L_{j,p-1}}\dots Q_p^\dagger|_{L_{j,1}}\, C_j\,Q_p|_{L_{j,1}}\dots Q_2|_{L_{j,p-1}}\,Q_1|_{L_{j,p}}\, \ket{+}^{\otimes L_{j,p}},\nonumber \end{eqnarray} with $|L_{j,\ell}|\leq \min( k((D-1)(k-1))^\ell, \, n)$ for $\ell=1,\dots,p$. \end{theorem} The first equality of \eqref{eq:expecCplightcones} shows that each $\langle C_j\rangle_p = \langle Q^\dagger C_j Q\rangle_0$ can be equivalently computed by discarding from the QAOA$_p$ circuit any qubits or operators outside of $L_{j,p}$. The second equality shows that this idea may be applied layerwise, with successive (with respect to conjugation) QAOA layers depending on increasing numbers of qubits. Both properties are relatively straightforward to take advantage of in numerical computations. Similar results apply for computing QAOA expectation values of a general cost gradient operator as in \eqref{eq:costGradOpGen} (with respect to the resulting QAOA-cones of its Pauli terms). \begin{proof} Consider a fixed observable $C_j=aZ_{j_1}Z_{j_2}\dots Z_{j_k}$ for a QAOA$_p$ circuit. At each $\ell$th layer, operators outside of $L_{j,\ell}$ commute through and cancel with respect to conjugation.
Hence applying causality considerations to \lemref{lem:QAOAconjGen}, in particular the property that mixing operator conjugations cannot increase Pauli term locality, we have \begin{eqnarray*}\label{eq:costHamConjGen2} Q^\dagger C_j Q &=& \sum_{\ell_{1}=0}^\infty \sum_{k_{1}=0}^\infty \cdots \sum_{\ell_{p-1}=0}^\infty \sum_{k_{p-1}=0}^\infty \sum_{\ell_p=0}^\infty \sum_{k_p=0}^\infty \frac{(i\gamma_1 \nabla_C)^{\ell_1} (i\beta_1\nabla)^{k_1} \dots (i\gamma_p \nabla_C)^{\ell_p} (i\beta_p\nabla)^{k_p} }{\ell_1!k_1!\ell_2!k_2!\dots\ell_p!k_p!} C_j \\ &=& \sum_{\ell_{1}=0}^\infty \dots \sum_{k_p=0}^\infty \frac{(i\gamma_1 \nabla_C|_{L_{j,p}})^{\ell_1} (i\beta_1\nabla |_{L_{j,p-1}})^{k_1} \dots (i\gamma_p \nabla_C |_{L_{j,1}})^{\ell_p} (i\beta_p\nabla |_{N_{j}})^{k_p} }{\ell_1!k_1!\ell_2!k_2!\dots\ell_p!k_p!} C_j, \end{eqnarray*} where we recall $N_j=L_{j,0}$. Taking initial state expectations and summing over $j$ then gives a slightly tighter result than \eqref{eq:expecCplightcones}. Observing that we may enlarge each $L_{j,\ell}$ without affecting expectation values implies the two equalities of \eqref{eq:expecCplightcones}. Finally, the bound on $|L_{j,\ell}|$ follows similarly by observing that each conjugation by $U_P$ can increase the degree of a given Pauli term by a factor of at most $(D-1)(k-1)$, and that by definition $|L_{j,\ell}|\leq n$. \end{proof} \subsection{Algorithms computing or approximating $\langle C\rangle_p$} \label{sec:classicalAlgGeneral2} Here we present two general algorithms for computing QAOA cost expectation values $\langle C\rangle_p$. The first approach is exact, though it may scale exponentially in $p$; hence the second approach considers a family of approximations for $\langle C \rangle_p$ obtained by truncating the series expression of \thmref{thm:costExpecHeis} at a given order. The first approach encompasses many of the existing techniques in the literature. We emphasize that while such classical procedures may be useful, e.g., to help find good algorithm parameters, for optimization applications in general a quantum computer is required to efficiently obtain the corresponding solution samples. \vskip 1pc \textbf{Algorithm 2: compute $\langle C\rangle_p$ for QAOA$_p$} \begin{enumerate} \item \underline{Input}: Parameters $n,p\in\naturals$, angles $(\gamma_1,\beta_1,\dots,\gamma_p,\beta_p)\in {\mathbb R}^{2p}$, cost Hamiltonian $C=a_0 I+\sum_{j=1}^m C_j$ with $m=$poly$(n)$ and $C_j=a_jZ_{j_1}\dots Z_{j_\ell}$. \item For $j=1,\dots,m$ and $\ell=1,\dots,p$ compute lightcones $L_{j,\ell}\subset [n]$ (or upper bounds). \item For $j=1$ compute $Q^\dagger C_j Q$ as a Pauli sum by restricting the operators in each QAOA level to those in $L_{j,\ell}$ as in \thmref{thm:lightconep} and its proof. \item Discard all terms in the sum containing a Y or a Z factor, and set $\langle C_j\rangle_p$ as the sum of the remaining coefficients. \item Apply Steps 3 and 4 for each $j=1,\dots,m$. \item Return the overall sum $\langle C\rangle_p = a_0+\sum_j \langle C_j\rangle_p$. \end{enumerate} Algorithm 2 generalizes the proof given in \cite{wang2018quantum,hadfield2018thesis} of $\langle C \rangle_1$ for MaxCut \eqrefp{eq:expecC1maxcut}, and is used, for instance, in \cite{hadfield2018thesis} to show a similar result for the Directed-MaxCut problem variant.
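To illustrate Step 2 of Algorithm 2, the following minimal Python sketch computes nested QAOA-cones for a cost Hamiltonian specified (hypothetically) as a list of qubit-index supports, one per term. It uses the natural growth rule, consistent with the bound of \thmref{thm:lightconep}, that at each level the cone grows by the supports of all terms intersecting the current cone; the precise definition is that of \tabref{tab:locality}, and the function names here are purely illustrative.
\begin{verbatim}
# Minimal sketch: nested QAOA-cones L_{j,1} subset ... subset L_{j,p} for term j,
# where `terms` is a list of qubit-index supports (one entry per cost term C_j).
def qaoa_cones(terms, j, p):
    cone = set(terms[j])               # L_{j,0} = N_j, the support of C_j
    cones = []
    for _ in range(p):
        grown = set(cone)
        for support in terms:          # grow by all terms touching the current cone
            if cone & set(support):
                grown |= set(support)
        cone = grown
        cones.append(frozenset(cone))
    return cones                       # [L_{j,1}, ..., L_{j,p}]

# Example: MaxCut on a 5-cycle; cone of the edge term (0,1) after one level
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(qaoa_cones(edges, 0, 1))         # -> qubits {0, 1, 2, 4}
\end{verbatim}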
As observed in \cite{Farhi2014}, for $C=\sum_j C_j$ the quantities $\langle C_i \rangle_p$ and $\langle C_j \rangle_p$ are equal whenever the terms $C_i$ and $C_j$ have the same local neighborhood structure with respect to $p$ and the underlying cost function (more precisely, whenever there exists a permutation of qubits taking $C|_{L_{i,p}}$ to $C|_{L_{j,p}}$). This property significantly reduces the number of unique computations of $\langle C_j\rangle_p$ required in Steps 3 and 4 in order to compute $\langle C\rangle_p$, as a recent paper further elaborates~\cite{shaydulin2021exploiting}. For example, for MaxCut on the complete graph it suffices to compute the expectation value for a single edge. Further results concerning symmetry in QAOA are given in \cite{shaydulin2020classical}. Even for bounded-degree problems, the number of terms involved in computing each $\langle C_j\rangle_p$ in Algorithm 2 typically grows exponentially with $p$, which makes exact QAOA performance results difficult to obtain beyond quite small $p$ in general. \thmssref{thm:costExpecHeis}{thm:smallprecursed}{thm:lightconep} may be similarly used to obtain faster classical algorithms for approximating $\langle C\rangle_p$. Given a parameter~$\ell\in\naturals$, Algorithm~3 returns an approximation of $\langle C \rangle_p$ accurate up to order~$\ell$ in the QAOA angles, such that the accuracy of the returned approximation improves with increased $\ell$ and converges to the exact value returned by Algorithm~2. We describe how worst-case error bounds may be obtained for fixed truncation $\ell$ in \appref{app:errorBounds}, though our approximations often perform much better in practice than such bounds may indicate (cf. \figref{fig:ramp}). We emphasize that the condition $|\gamma_i|,|\beta_j|<1$ often appears in the literature, for instance in optimal parameter sets or restricted parameter schedules. \vskip 1pc \textbf{Algorithm 3: approximate $\langle C\rangle_p$ to order $\ell$ in the QAOA angles} \begin{enumerate} \item \underline{Input}: Parameters $p,\ell\in\naturals$, angles $(\gamma_1,\beta_1,\dots,\gamma_p,\beta_p)\in {\mathbb R}^{2p}$, cost Hamiltonian $C$. \item If $\ell$ is odd then $\ell \leftarrow \ell -1$. \item If $\ell=0$ or $p=0$ then return $\langle C \rangle_0$. \item Let $\mathcal{A}:=\{(a_1,b_1,\dots,a_p,b_p)\in \{0,1,\dots,\ell -1\}^{2p}: \sum_j(a_j+b_j)\in \{0,2,\dots,\ell \}\}.$ \item Let $G_\alpha =\nabla_C^{a_1}\nabla^{b_1}\dots \nabla_C^{a_p}\nabla^{b_p}C$ for each $\alpha \in \mathcal{A}$. \item For each $\alpha\in \mathcal{A}$, compute $\langle G_\alpha \rangle_0$ using the lemmas of \secref{sec:expecVals} and the coefficient polynomials $f_\alpha(\gamma_1,\dots,\beta_p)$ as in the proof of \thmref{thm:smallprecursed}. \item Return the estimate $\widetilde{\langle C\rangle_p} = \sum_{\alpha \in \mathcal{A}} f_\alpha(\gamma_1,\dots,\beta_p) \langle G_\alpha \rangle_0 $. \end{enumerate} Clearly, Algorithm $3$ may also take advantage of lightcone considerations as in Algorithm $2$ in computing the quantities $\langle G_\alpha \rangle_0=\sum_j\langle G_{\alpha,j} \rangle_0$ for $C=\sum_j C_j$. For implementation, the set $\mathcal{A}$ contains at most $\tfrac12 \ell^{2p}$ elements and so the operators $G_\alpha$ can be enumerated and efficiently computed as Pauli sums when $p=O(1)$ and $\ell=\log n$. If we further restrict to $\ell=O(1)$, then we can efficiently compute each $\langle G_\alpha \rangle_0$, and the polynomials $f_\alpha(\gamma_1,\dots,\beta_p)$ each have poly$(n)$ terms.
Hence, when $p,\ell=O(1)$ and the number of terms in $C$ is poly$(n)$, the algorithm can always be implemented efficiently. Algorithms 2 and 3 may be further combined to yield hybrid approaches to estimating $\langle C\rangle_p$, where some parameters are treated perturbatively and others exactly, though we do not enumerate these approaches here. We conclude \secref{sec:QAOAp} by noting that an alternative approach to deriving series expressions for QAOA quantities, based on sums over paths, is outlined in \appref{app:sumOfPaths}. \section{Generalized mixers, initial states, and applications} \label{sec:generalizedCalculus} We proposed designs for generalized QAOA mixing Hamiltonians and unitaries in~\cite{hadfield2019quantum,Hadfield17_QApprox} beyond the transverse-field Hamiltonian mixer originally proposed in~\cite{Farhi2014}. Such constructions are particularly appealing for optimization problems or encodings with hard constraints. Here we illustrate generalizations of our framework to quantum alternating operator ans\"atze constructions for Max-Independent-Set and Graph Coloring optimization problems. Similar ideas apply more generally, such as to the other optimization problems and corresponding constructions of~\cite{hadfield2019quantum}. \sh{We conclude the section with an outline of how the framework may be applied to electronic structure problems of quantum chemistry based on the QAOA construction of~\cite{kremenetski2021quantum}.} \sh{We provide some additional discussion of} applications beyond classical optimization in \secref{sec:discussion}. For the purpose of this section, consider a general mixing Hamiltonian $\widetilde{B}=\sum_j \widetilde{B}_j$ acting on $n$ qubits with each $[\widetilde{B}_j,C]\neq 0$ (and possibly $[\widetilde{B}_j,\widetilde{B}_k]\neq 0$ for some $k\neq j$). We consider here problems such that each $x\in\{0,1\}^n$ is either feasible, meaning it encodes a valid candidate problem solution, or else infeasible. We assume that the $\widetilde{B}_j$ each preserve feasibility, i.e., map the subspace of feasible computational basis states to itself. As the cost Hamiltonian~$C$ is diagonal, this immediately implies that the gradients of all orders $\nabla^{a_\ell}_{\widetilde{B}} \dots \nabla_{\widetilde{B}}^{a_3} \nabla^{a_2}_C \nabla^{a_1}_{\widetilde{B}} C$ also preserve the feasible subspace. We show through two examples the relationship between the resulting cost gradients and (generalized) classical cost difference functions. In particular, for Max-Independent-Set we consider the transverse-field mixer augmented with control, and for Graph Coloring problems we consider a type of XY mixer~\cite{hadfield2019quantum,wang2019xy}. We refer the reader to~\cite{hadfield2019quantum} for additional details and variations. \subsection{Maximum Independent Set} \label{sec:MaxIndepSet} Consider the NP-hard optimization problem Max-Independent-Set: given a graph~$G=(V,E)$ with $|V|=n$ vertices, we seek to find (the size of) the largest independent set of vertices. Different QAOA variants for this problem were considered in~\cite{farhi2014quantum,Hadfield17_QApprox,pichler2018quantum,hadfield2019quantum,farhi2020quantum}; see e.g.~\cite{ausiello2012complexity} for problem details, classical algorithms, and complexity. Proceeding as in \cite{hadfield2019quantum}, we encode subsets of $V$ with $n$ indicator variables $x_j$, which map to $n$-qubit computational basis states $\ket{x}$.
The feasible subspace is spanned by the subset of basis states $\ket{x}$, $x\in\{0,1\}^n$, that encode independent sets (which depend on the particular problem instance). Let $\widetilde{B}=\sum_{j=1}^n \widetilde{B}_j$, where each $\widetilde{B}_j$ is such that $e^{-i\beta \widetilde{B}_j}=\Lambda_{f_j}(e^{-i\beta X_j})$ is an $X_j$-rotation controlled on the condition that no neighbor of vertex $j$ belongs to the encoded set $x$, with control function $f_j=\wedge_{\ell \in nbd(j)} \overline{x}_\ell$, so that $f_j(x)=1$ implies that flipping the $j$th bit of $x$ cannot cause the independent set property to be violated. In \cite{hadfield2019quantum,hadfield2018representation} we show that $$\widetilde{B}_j = X_j \prod_{\ell \in nbd(j)} \overline{x}_\ell $$ suffices, where $\overline{x}_\ell=\frac12 I + \frac12Z_\ell$ represents the negation of the Boolean function returning the $\ell$th bit of~$x$, such that when its control function is false $e^{-i\beta\widetilde{B}_j}$ acts as the identity. Expanding $\widetilde{B}_j$ as a Pauli sum, the number of terms in this representation of $\widetilde{B}_j$ may be exponential in $n$, for example if $\deg(j)=\Theta(n)$. Nevertheless, each multi-controlled $X_j$-rotation $e^{-i\beta\widetilde{B}_j}$ can be efficiently implemented with basic quantum gates \cite{barenco1995elementary,nielsen2010quantum, hadfield2018thesis}. As products of $\widetilde{B}_j$ operators connect every feasible state $\ket{x}$ to the empty set state $\ket{00\dots0}$, it follows~\cite{hadfield2019quantum} that sufficiently many applications of $e^{-i\beta \widetilde{B}_j}$ for different~$j$ can connect any two feasible solution states $\ket{x}$ and $\ket{y}$. (Note that~\cite{hadfield2019quantum} considers a variety of possible mixing unitaries, constructed, for example, as $e^{-i\beta\widetilde{B}}$ or $\prod_j e^{-i\beta\widetilde{B}_j}$, which are inequivalent; we do not deal with this distinction here and focus instead on the classical calculus derived from the~$\widetilde{B}_j$ generally.) The mixing Hamiltonians $\widetilde{B}_j$ induce a neighborhood structure on the space of feasible solutions, in this case the unit Hamming distance strings that are also feasible. For each $j=1,\dots,n$ and independent set~$x$ we define the local differences $$ \widetilde{\partial}_j c (x) :=\left\{ \begin{array}{ll} \partial_j c(x) \qquad \qquad \text{ if } f_j(x) = 1\\ \qquad 0\qquad \qquad \text{ else,} \end{array} \right. $$ i.e., our usual notion of partial difference but restricted to independent sets (feasible strings) connected by single bit flips. Clearly, this structure may be applied to other cost functions over independent sets. For our case of maximizing cardinality, assuming the restriction to feasible states, the cost function $c(x)=|x|$ is the Hamming weight of each string, and maps to the Hamiltonian $C=\frac{n}2 I - \frac{1}{2}\sum_j Z_j$. Hence, observe that each $\widetilde{\partial}_j c (x)\in\{-1,0,1\}$. The cost divergence then becomes $$\widetilde{dc}(x)=\sum_j \widetilde{\partial}_j c(x),$$ which is equal to the number of independent sets obtainable from $x$ by flipping a bit from $0$ to $1$, minus the number obtained by flipping a $1$ to $0$. The cost gradient becomes $$\widetilde{\nabla}C=[\widetilde{B},C]=\sum_j\nabla_{\widetilde{B}_j}C,$$ where the operators $\nabla_{\widetilde{B}_j}$ act for each feasible $\ket{x}$ as $$\nabla_{\widetilde{B}_j}C\ket{x}= -\widetilde{\partial}_j c (x)\ket{x^{(j)}}.
$$ For example, consider the possible choice of initial state $\ket{00\dots 0}$ which encodes the empty set. As all single-vertex sets are also independent, $\widetilde{\nabla}C$ acts to give an equal superposition of them as $$\widetilde{\nabla}C\ket{00\dots 0}=-\ket{100\dots}-\ket{010\dots}-\dots-\ket{0\dots 01}.$$ Note that different choices of initial state will lead to different such relations. Mixed gradients follow similarly. For example, as the cost Hamiltonian is $1$-local, it follows that $\nabla_{\widetilde{B}_j}C= i Y_j \otimes (\prod_{\ell \in nbd(j)} \overline{x}_\ell ),$ and so we have $$ \nabla_C \widetilde{\nabla}C = -\widetilde{B}.$$ Further results concerning higher order gradients and initial state expectation values may be similarly derived as for the transverse-field mixer case. \subsection{Graph Coloring} \label{sec:graphColoring} Here we consider optimization problems over colorings of a graph. Given a graph $G=(V,E)$, $|V|=n$, and $k$ colors, we say a \textit{valid coloring} is an assignment of a single color to each vertex, and a \textit{proper coloring} is a valid assignment such that each edge connects vertices of different colors. Here we assume an arbitrary cost function $c(y)$ that we seek to optimize over valid colorings $y$. For example, the Max-$k$-Cut generalization of MaxCut is the maximization problem where $c(y)$ gives the number of edges whose endpoints have different colors. Several other NP-hard variants of optimization problems over valid or proper graph colorings are considered in~\cite{hadfield2019quantum}. Generally, depending on $k$ and the qubit encoding used, some bitstrings may represent invalid colorings. Following~\cite{hadfield2019quantum,wang2019xy} we consider encoding the coloring of each vertex using the Hamming weight-$1$ subspace of $k$ qubits, for $kn$ qubits total. (This encoding may be equivalently viewed as $n$ $k$-dits~\cite{hadfield2019quantum}, which we utilize below.) Valid vertex colorings then span a $k^n$-dimensional subspace of the $2^{kn}$-dimensional Hilbert space, with proper colorings corresponding to a subspace that depends on $G$. Hence we may write computational basis states encoding valid (or proper) colorings as $\ket{y}=\ket{y_1y_2\cdots y_n}$, $y_j \in [k]$. A natural mixer for this encoding is the \textit{ring $XY$-mixer} \cite{hadfield2019quantum,wang2019xy} derived from the Hamiltonian $ \widetilde{B}:=\sum_{j=1}^n \widetilde{B}_j$ with \begin{equation} \label{eq:BGC} \widetilde{B}_j:=\sum_{\ell=1}^k \frac{X_{j,\ell}X_{j,{\ell+1}}+Y_{j,\ell}Y_{j,{\ell+1}}}2, \end{equation} where $(j,\ell)$ labels the qubit encoding the $\ell$th color variable for the $j$th vertex and the label additions are modulo $k$. (Here the factor of $1/2$ is included for convenience.) We may identify left- and right-shift operators $L_j$,$R_j$ which act as $$L_j\ket{y}=\ket{y_1\dots y_{j-1}(y_j-1)y_{j+1}\dots y_n}\,=:\ket{y^{(j,-1)}},$$ with $L_j^\ell\ket{y}=L_jL_j\dots L_j\ket{y}=\ket{y^{(j,-\ell)}}$, and $$R_j\ket{y}=\ket{y_1\dots y_{j-1}(y_j+1)y_{j+1}\dots y_n}\,=:\ket{y^{(j,+1)}},$$ with $R_j^\ell\ket{y}=\ket{y^{(j,\ell)}}$. As in \cite[App. C]{hadfield2019quantum} we may write $$\widetilde{B}_j = L_j + R_j. $$ Hence in this case, from the structure of the mixing Hamiltonians, for a generic cost function $c(y)$ we define \textit{left and right $j$th partial differences} \begin{eqnarray*} \partial_{j,L}c(y)&:=&c(y^{(j,-1)})-c(y),\\ \partial_{j,R}c(y)&:=&c(y^{(j,1)})-c(y).
\end{eqnarray*} These functions relate to the action of the generalized partial gradients $$ \nabla_{\widetilde{B}_j}C\ket{y}=-\left(\partial_{j,L}c(y)L_j+\partial_{j,R}c(y)R_j\right)\ket{y}$$ and hence we have $$ \widetilde{\nabla}C=-\sum_j ( L_j\partial_{j,L}C+R_j\partial_{j,R}C) .$$ Thus in this case we see that the gradient corresponds to cost differences with respect to left and right shifts of each $k$-dit variable. The Hamiltonians $\widetilde{B}_j,\widetilde{B}$ together with the diagonal cost Hamiltonian~$C$ and their derived gradients have the important property of mapping valid colorings to valid colorings. This property is preserved if we replace the ring structure of the sum in \eqref{eq:BGC} with a fully connected one, i.e., \begin{equation} \label{eq:BGCb} \widetilde{B}_j=\sum_{\ell<m} \frac{X_{j,\ell}X_{j,m}+Y_{j,\ell}Y_{j,m}}2, \end{equation} in which case the above expressions generalize to shifts in $k$-many directions. This property is maintained if we instead consider \textit{sequential mixers} $U(\beta)=\prod U_i(\beta)$ built from \textit{partial mixers} $U_i(\beta)=e^{-i\beta\frac{X_{j,\ell}X_{j,m}+Y_{j,\ell}Y_{j,m}}2}$, in which case the order of the $U_i$ becomes important due to noncommutativity~\cite{hadfield2019quantum}, and this will be reflected when applying our framework. Finally, these QAOA constructions, and our framework, extend to problems where we seek to restrict to proper graph colorings by combining the XY mixers here with control, analogous to the controlled transverse-field mixers for Max-Independent-Set~\cite{hadfield2019quantum}. \sh{ \subsection{Electronic Structure} \label{sec:elecStruct} Here we outline how our framework may be applied to problems beyond combinatorial optimization, in particular the prototypical electronic structure problem of quantum chemistry. A recent work~\cite{kremenetski2021quantum} proposed and studied a generalization of QAOA suitable for this problem, and we follow this approach here. This application demonstrates the breadth of our framework and how it may be applied to diverse problems and ans\"atze. Indeed, a wide variety of different ans\"atze have been proposed for this problem, see for example \cite{kremenetski2021quantum,romero2018strategies}. Consider an instance of the electronic structure problem corresponding to an electronic Hamiltonian $C$ given in second quantized form, specified as a quartic polynomial of fermionic creation and annihilation operators with respect to a truncated single-particle basis, for which we seek the ground state and energy $E_0$; see \cite{kremenetski2021quantum,romero2018strategies} and references therein for the technical details. For such problems it is standard to assume that one has classically obtained the Hartree-Fock approximate ground state and energy~$E_{0,HF}$, and has suitably rotated the operators (orbitals) such that the Hartree-Fock ground state is encoded on the quantum computer by a single computational basis state $\ket{HF}$, which we take as the QAOA initial state. Identifying $C$ as the cost Hamiltonian, a natural choice of mixer is then derived from the Hartree-Fock approximation as $$\widetilde{B}=E_{0,HF} \ket{HF}\bra{HF} + \dots$$ where the terms not shown to the right similarly correspond to the excited states and energies of the Hartree-Fock approximation (i.e., are generated from the possible excitations of the Hartree orbitals).
With this choice, the resulting QAOA circuits retain the desirable property of reproducing adiabatic quantum annealing behavior in particular limits as the QAOA depth $p$ becomes large. However, we note that, in contrast to the usual QAOA operators, here the mixing Hamiltonian is diagonal with respect to the computational basis while the cost Hamiltonian is not. Also observe that this QAOA construction satisfies the properties of \remref{rem:GenLems} and so many of our results apply immediately in this setting. In particular, the main focus of~\cite{kremenetski2021quantum} is regimes where $p$ is relatively large, and phase and mixing angles are drawn from a fixed schedule such that their magnitudes are relatively small. Hence we jump straight to the generalization of \thmref{thm:smallprecursed}. For QAOA$_p$ it then follows that to leading order we have \begin{eqnarray*} \langle C \rangle_p &=& \langle C \rangle_{0} \,-\, \langle \nabla_C \nabla C \rangle_0 \sum_{1\leq i\leq j}^p \gamma_i \beta_j \,+\, \dots\\ &=& E_{0,HF} + 2(E_{0,HF}\braket{\phi}{\phi} - \bra{\phi}\widetilde{B}\ket{\phi}) \sum_{1\leq i\leq j}^p \gamma_i \beta_j + \dots, \end{eqnarray*} where the terms not shown to the right are again proportional to higher-order polynomials in the angles. The second line follows using $\ket{\phi}:=C\ket{HF}$, and using standard properties of Hartree-Fock approximations it follows that $E_{0,HF}\braket{\phi}{\phi} - \bra{\phi}\widetilde{B}\ket{\phi} \leq 0$. This last property shows that, for positive angles, to leading order the QAOA ground state energy estimate is guaranteed to improve upon the Hartree-Fock estimate. Further analysis may be applied in the spirit of our results obtained for combinatorial optimization. In particular, in future work we will leverage this generalization of the framework to help explain universal behavior empirically observed in QAOA performance (phase) diagrams across different problems and settings in~\cite{kremenetski2021quantum}. Finally, we remark that the underlying approach and analysis of our framework are relatively straightforward to extend to particular ans\"atze beyond QAOA, such as the UCC-SD (unitary coupled cluster singles and doubles) ansatz considered in~\cite{romero2018strategies}. } \section{Discussion and Future Directions} \label{sec:discussion} Our framework provides tools for, and its application provides new insights into, layered quantum algorithms. We focused on our framework's application to quantum alternating operator ans\"atze circuits. Specifically, we used it to obtain both new results and more succinct ways of expressing and deriving existing results for transverse-field QAOA$_p$ for general $p$, not just $p=1$. We also discussed two examples of our framework applied to QAOA with more complicated mixing operators. Our framework applies more broadly than just the cases discussed in this paper, providing a promising approach to obtain both additional results for QAOA and insights into other types of problems and quantum algorithms. For example, the framework can be applied to eigenvalue problems related to physical systems. For instance, the general conjugation formula \eqref{eq:infinitesimalConj} has been previously invoked \cite[Sec. I]{romero2018strategies} in the context of particular parameterized quantum circuits for the electronic structure problem of quantum chemistry.
\sh{Similar to but more general than the quantum alternating operator ansatz approach of~\cite{kremenetski2021quantum} considered in \secref{sec:elecStruct},} circuit ans\"atze for this problem are often built from fermionic creation and annihilation operators and simple reference (e.g., Hartree-Fock) initial states. These operators then have natural interpretations in terms of transitions between occupied orbitals, analogous to the operator-function correspondence of \secref{sec:frameworkOverview}, suggesting a route for future work to apply and extend our framework and results in detail to different ans\"atze for this setting. \sh{For QAOA and beyond, questions related to parameter setting may be amenable to the series approaches of our framework, in particular toward a better understanding of parameter concentration and landscape independence phenomena, as well as implications for algorithm performance limitations~\cite{brandao2018fixed,bravyi2019obstacles,marwaha2021bounds,chou2021limitations}. Moreover, our framework may be applied in} both ideal and realistic (noisy) settings. For example, in cases when noise largely flattens the cost expectation landscape \cite{marshall2020characterizing,wang2020noise}, our series expressions with only a few terms may be sufficient to capture key aspects of the behavior. In addition to further applications to QAOA, our framework can give insights into recently proposed variants such as adaptive QAOA \cite{zhu2020adaptive} and recursive QAOA (RQAOA) \cite{bravyi2019obstacles}. The framework and obtained results could also provide insight into quantum annealing (QA) and adiabatic quantum computing (AQC) given the close ties between parameters for QAOA and QA or AQC schedules \cite{brady2021optimal,brady2021behavior,zhou2018quantum,kremenetski2021quantum,wurtz2022counterdiabaticity}, including cases with advanced drivers \cite{hen2016quantum,hen2016driver,leipold2020constructing}. One such application of our formalism is designing more effective mixing operators and ans\"atze, and facilitating quantitative means for comparing them. Our framework suggests direct approaches to incorporating cost function information into mixing operators, such as, for example, using $i\nabla C$ as a mixing Hamiltonian. Indeed, the ansatz $U(\alpha)\ket{s}=e^{-i\alpha\, i\nabla C}\ket{s}$ has the same leading-order contribution to $\langle C\rangle$ as QAOA$_1$ for $\alpha=\gamma\beta$ when using the transverse-field mixer. Another possibility is to incorporate measurements or expectation values of cost gradient observables, such as $i\nabla C$ or $\nabla_C \nabla C$, directly into the algorithm or parameter search procedure. \thmref{thm:costExpecHeis} shows that the cost expectation of a QAOA$_p$ circuit can be computed or approximated in terms of expectation values of the cost Hamiltonian and its cost gradient operators taken with respect to the corresponding circuit with $p-1$ levels, which could be estimated classically or via a quantum computer. In this vein, a recent paper \cite{magann2021feedback} proposes an adaptive parameter setting strategy that involves repeatedly obtaining estimates of $\langle i\nabla C \rangle$. As a practical matter, evaluating series approximations derived from our framework is straightforward but can become tedious when one desires to include many terms. In these situations, computer algebra systems or numerical software can readily compute the required expressions.
Moreover, manipulating the resulting series can improve the accuracy for a fixed number of terms. For example, instead of truncating the series expansion at a finite number of terms, giving a polynomial approximation, often better numerical approximations arise from Pad\'e approximants, which are rational functions whose coefficients match the truncated Taylor series~\cite{orszag1978advanced} (cf.~\figref{fig:ramp}). These potential applications of our formalism highlight its general applicability to understanding algorithms based on layered quantum circuits, including identifying new operators and ans\"atze matched to the structure of specific problems. \section*{Acknowledgments} We thank the members of the Quantum Artificial Intelligence Lab (QuAIL) for valuable feedback and discussions. We are grateful for support from NASA Ames Research Center and the Defense Advanced Research Projects Agency (DARPA) via IAA8839 annex 114. S.H. and T.H. are supported by the NASA Academic Mission Services, Contract No. NNA16BD14C. \addcontentsline{toc}{section}{References} \bibliographystyle{ieeetr} \bibliography{bib} \newcommand{\changelocaltocdepth}[1]{ \addtocontents{toc}{\protect\setcounter{tocdepth}{#1}} \setcounter{tocdepth}{#1} } \changelocaltocdepth{1} \begin{appendix} \section{Sum-of-paths perspective for QAOA} \label{app:sumOfPaths} \sh{ Here we briefly show how QAOA probabilities (and similarly cost expectation values) can be expressed as sums over sequences of bitstrings (\lq\lq paths\rq\rq), each weighted by functions of the Hamming distances and cost function values along the path. This ``sum-of-paths'' perspective complements the main results of our framework and provides additional intuition, though the resulting expressions are no longer power series in the algorithm parameters. Indeed, several results of the paper were originally derived using a sum-of-paths approach, but with significantly more complicated proofs than those obtained with our QAOA framework; on the other hand, some tasks, such as working directly with QAOA state amplitudes, are more convenient in the sum-of-paths formulation. Consider the QAOA mixing operator matrix elements $u_{d(x,y)}(\beta)=\bra{x}U_M(\beta)\ket{y}=\cos^n(\beta)(-i\tan\beta)^{d(x,y)}$ given in \eqref{eq:mixingmatrixelements} for Hamming distance $d(x,y)$, which satisfy $u_d^\dagger = (-1)^d u_d$. For QAOA$_1$, using the computational basis decomposition $I= \sum_y\ket{y}\bra{y}$ the probability amplitude corresponding to each string $x\in\{0,1\}^n$ is \begin{equation} \label{eq:amp1sumpaths} \bra{x}\ket{\gamma \beta} = \sum_y\bra{x}U_M(\beta)\ket{y}\bra{y}U_P(\gamma)\ket{s}= \frac{1}{\sqrt{2^n}}\sum_y u_{d(x,y)}(\beta) e^{-i\gamma c(y)}, \end{equation} i.e., the amplitude $\bra{x}\ket{\gamma \beta}$ may be expressed as a sum of \lq\lq single-step paths\rq\rq\ (from each possible $y\in\{0,1\}^n$ to $x$), with each path contributing $u_{d(x,y)}(\beta) e^{-i\gamma c(y)}$ times its initial amplitude $\bra{y}\ket{s}=1/\sqrt{2^n}$.
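For a concrete illustration of \eqref{eq:amp1sumpaths} (a toy single-qubit instance of our own choosing, easily checked directly), take $c(0)=0$ and $c(1)=1$, i.e., $C=\tfrac12(I-Z)$. Then $u_0(\beta)=\cos\beta$, $u_1(\beta)=-i\sin\beta$, and the path sums give $$\bra{0}\ket{\gamma\beta}=\tfrac1{\sqrt2}\left(\cos\beta - i\sin\beta\, e^{-i\gamma}\right),\qquad \bra{1}\ket{\gamma\beta}=\tfrac1{\sqrt2}\left(\cos\beta\, e^{-i\gamma} - i\sin\beta\right),$$ so that $P_1(1)=\tfrac12\left(1+\sin(2\beta)\sin\gamma\right)$ and $\langle C\rangle_1=\tfrac12+\tfrac12\sin(2\beta)\sin\gamma$, whose small-angle expansion $\tfrac12+\gamma\beta+\dots$ recovers the expected leading-order behavior $\langle C\rangle_0 - \gamma\beta\,\langle\nabla_C\nabla C\rangle_0$, since $\langle\nabla_C\nabla C\rangle_0=-1$ for this instance.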
The probability $P_1(x)=|\bra{x}\ket{\gamma \beta}|^2$ of measuring $x$ is then \begin{eqnarray} \label{eq:prob1sumpaths} P_1(x)&=& \frac1{2^n} \sum_{y,z} u^\dagger_{d(x,z)}u_{d(x,y)} e^{-i\gamma (c(y)-c(z))}\nonumber\\ &=&\frac1{2^n} \sum_{y,z} \cos^{2n}\beta (\tan\beta)^{d(x,y)+d(x,z)}i^{d(x,z)-d(x,y)} e^{-i\gamma (c(y)-c(z))}\nonumber\\ &=&\frac{\cos^{2n}\beta}{2^n} \sum_{y,z} (\tan\beta)^{d(x,y)+d(x,z)} i^{d(x,z)-d(x,y)} \left( \cos(\gamma(c(y)-c(z))) - i \sin(\gamma(c(y)-c(z)))\right)\nonumber\\ &=&\frac{\cos^{2n}\beta}{2^n}\, {\boldsymbol{i}}gg( \sum_{\substack{y,z \\ d(x,z)-d(x,y)\textrm{ even}}} (\tan\beta)^{d(x,y)+d(x,z)} (-1)^{\frac{d(x,z)-d(x,y)}2} \cos(\gamma(c(y)-c(z))) \nonumber\\ & &\,\,\,+ \sum_{\substack{y,z \\ d(x,z)-d(x,y)\textrm{ odd}}} (\tan\beta)^{d(x,y)+d(x,z)} (-1)^{\frac{d(x,z)-d(x,y)-1}2} \sin(\gamma(c(y)-c(z))){\boldsymbol{i}}gg) \end{eqnarray} where the last equation follows because $\cos$ and $\sin$ are even and odd functions, respectively, and shows explicitly that all contributing terms are real valued, but with differing signs that may lead to term cancellation (i.e., paths may \lq\lq interfere\rq\rq). The signs depend on the algorithm parameters, the difference in cost function between $y$ and $z$, and their Hamming distances of from $x$. Thus we see that the probability $P_1(x)$ corresponds to a sum of two-step paths ($z$ to $x$ to $y$) with weights as shown in \eqref{eq:prob1sumpaths}. The cost expectation $\langle C\rangle_1=\sum_x c(x)P_1(x)$ then follows from additionally weighting all paths in $P_1(x)$ by $c(x)$, and then summing over all $x\in\{0,1\}$. Furthermore, Taylor expansions of \eqref{eq:prob1sumpaths} reproduce the leading-order contributions of \thmref{thm1:smallAngles}, though with considerably more effort than required for our proof. It is straightforward to extend the sum-of-paths perspective to QAOA$_p$ with $p>1$, at the expense of additional notation. In this case probabilities correspond to paths of length~$2p$. Let $u^{(j)}_{d(x,z)}:=u_{d(x,z)}(\gamma_j,\beta_j)$, and let $Q_j(x)$ denote the amplitude of computational basis state $\ket{x}$ after $j$ layers of QAOA have been applied. Clearly, $Q_0(x)= 1/\sqrt{2^n}$, and $Q_1(x)=\bra{x}\ket{\gamma\beta}$ is given in \eqref{eq:amp1sumpaths}. Generalizing \eqref{eq:amp1sumpaths} for a QAOA$_p$ state gives \begin{equation} Q_p(x) = \sum_z u^{(p)}_{d(x,z)} e^{-i\gamma_p c(z)}Q_{p-1}(z), \end{equation} which may be expanded recursively to give \begin{equation} \label{eq:ampgenp} Q_p(x) = \frac1{\sqrt{2^n}}\sum_{z_1,z_2,\dots,z_p} u^{(1)}_{d(z_1,z_2)} u^{(2)}_{d(z_2,z_3)} \dots u^{(p)}_{d(z_{p},x)} e^{-i (\gamma_1c(z_1) + \gamma_2c(z_2) + \dots \gamma_p c(z_p))}. \end{equation} Hence computational basis amplitudes after $p$ levels of QAOA are likewise given by paths consisting of $p$-tuples of intermediate bitstrings, weighted by phase factors corresponding to cost function evaluations and mixing matrix elements corresponding to Hamming weight differences along the path. Expressions for the probability and expected cost may then be obtained from $P_p(x)=|Q_p(x)|^2=Q_p^\dagger(x) Q_p(x)$ as above. While in principle this approach can also lead to results such as \thmref{thm:allanglessmall}, our proof provides a more succinct route. Nevertheless, the sum-of-paths perspective gives a different but complementary perspective into QAOA, and the general approach outlined here may be extended to a wider variety of ans\"atze, such as QAOA with generalized mixers, encodings, and initial states. 
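As a simple illustration, the following Python/NumPy sketch (using an arbitrary random cost function on three qubits, chosen purely for illustration) implements the recursion for $Q_p(x)$ above and checks it against direct simulation of the QAOA$_p$ state.
\begin{verbatim}
# Sum-of-paths recursion vs. direct simulation (assumes NumPy; random
# 3-qubit diagonal cost function used only for illustration).
import numpy as np

rng = np.random.default_rng(1)
n, p = 3, 2
N = 2**n
c = rng.integers(0, 4, size=N).astype(float)          # arbitrary cost values c(x)
gammas, betas = rng.uniform(0, 1, p), rng.uniform(0, 1, p)

popcount = np.vectorize(lambda v: bin(v).count("1"))
d = popcount(np.arange(N)[:, None] ^ np.arange(N)[None, :])   # Hamming distances d(x,y)

def u(beta):
    # mixing matrix elements u_{d(x,y)}(beta) = cos^n(beta) (-i tan(beta))^{d(x,y)}
    return np.cos(beta)**n * (-1j * np.tan(beta))**d

# recursion: Q_p(x) = sum_z u^{(p)}_{d(x,z)} exp(-i gamma_p c(z)) Q_{p-1}(z)
Q = np.full(N, 1 / np.sqrt(N), dtype=complex)
for j in range(p):
    Q = u(betas[j]) @ (np.exp(-1j * gammas[j] * c) * Q)

# direct simulation: U_M(beta) built as a tensor product of exp(-i beta X)
X = np.array([[0, 1], [1, 0]], dtype=complex)
def UM(beta):
    u1 = np.cos(beta) * np.eye(2) - 1j * np.sin(beta) * X
    out = u1
    for _ in range(n - 1):
        out = np.kron(out, u1)
    return out

psi = np.full(N, 1 / np.sqrt(N), dtype=complex)
for j in range(p):
    psi = UM(betas[j]) @ (np.exp(-1j * gammas[j] * c) * psi)

print("max amplitude difference:", np.max(np.abs(Q - psi)))   # ~1e-16
\end{verbatim}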
} \section{ Initial state expectation values} \label{app:expecVals} Here we extend the results of \secref{sec:expecVals}. For a linear operator $A$ on $n$ qubits we write its computational basis representation $A=\sum_{x,y}a_{xy}\ket{x}\bra{y}$ with matrix elements $ a_{xy}:= \bra{x}A\ket{y}\,\in {\mathbb C}$ for $x,y\in\{0,1\}^n$. The operator $A$ is traceless if and only if $\sum_x a_{xx} =0$. \begin{lem} \label{lem:expec1} Let $C$ be a cost Hamiltonian and $A$ a cost gradient as in (\ref{eq:costGradOpGen}) such that $[A,C]\neq 0$ and $A\ket{s} = \frac{1}{\sqrt{2^n}}\sum_x a(x) \ket{x}$ for a real function $a(x)$. Then there exists a real function $g(x)$ satisfying $\nabla_C A \ket{s} = \frac{1}{\sqrt{2^n}}\sum_x g(x) \ket{x}$, and: \begin{itemize} \item If $A$ is skew-adjoint $A^\dagger = -A$ then $\;\sum_x a(x) = 0$, \begin{equation*} \langle \nabla_C A \rangle_0 = \frac{2}{2^n}\sum_x c(x)a(x)= \frac{1}{2^n}\sum_x g(x), \;\;\;\;\; \text{ and } \;\;\;\;\; \langle \nabla^2_C A \rangle_0 =0. \end{equation*} \item Else if instead $A$ is self-adjoint $A^\dagger =A$ then $\; \langle \nabla_C A \rangle_0 = \sum_x g(x) =0,\;$ and \begin{eqnarray*} \langle \nabla^2_C A \rangle_0 &=& \frac1{2^n}\sum_{x\neq y} (c(x)-c(y))^2 a_{xy} \,=\, \frac{2}{2^n}\sum_x c(x)g(x)\\ &=&\frac{2}{2^n}\sum_x c^2(x) a(x) -\frac2{2^n}\sum_{x,y}c(x)c(y)a_{xy}. \end{eqnarray*} \end{itemize} \end{lem} The condition $[A,C]\neq 0$ is included because otherwise trivially $\nabla_C A = \nabla^2_C A=0$ and so the quantities of the lemma become identically zero. Recall a cost gradient $A$ may be uniquely decomposed into its diagonal and off-diagonal parts $A=A_{diag}+A'$ with respect to the computational basis, with $\nabla_C A = \nabla_C A'$. As the results of the self-adjoint case are invariant under replacing $A$ by $A+E$ for a traceless real diagonal matrix $E$ (such that $\nabla_C E=0$), we can replace $a(x)$ above by any function $a(x)+e(x)$ where $\sum_x e(x)=0$; hence we can often ignore any diagonal component of cost gradient operators in computing initial state expectation values. \begin{proof} From the definition of $A$ we have $a(x)=\sum_y a_{xy}$, and $a_{yx}=\pm a_{xy}$ when $A^\dagger = \pm A$, which implies $\sum_x a(x)= 0$ when $A^\dagger = - A$. Using $A=\sum_{x,y}a_{xy}\ket{x}\bra{y}$, the function $g$ is defined as $\tfrac1{\sqrt{2^n}}g(x):=\bra{x} \nabla_C A \ket{s}$ and satisfies $$\tfrac1{\sqrt{2^n}}g(x) =\bra{x} CA-AC\ket{s}=\tfrac1{\sqrt{2^n}}c(x)a(x)-\tfrac1{\sqrt{2^n}}\sum_y c(y)a_{xy}=\tfrac1{\sqrt{2^n}}\sum_y a_{xy}(c(x)-c(y)),$$ which implies $\sum_x g(x) = 0$ when $A^\dagger = A$. Indeed, from $\bra{s}A=\pm \tfrac{1}{2^n}\sum_x a(x)\bra{x}$ we have $$\langle g\rangle_0=\bra{s}\nabla_C A\ket{s} = \bra{s}CA\ket{s}- \bra{s}AC\ket{s} =\langle ca\rangle_0\mp \langle ca\rangle_0$$ when $A^\dagger = \pm A$, which shows the results for $\langle\nabla_C A\rangle_0$. Similarly, $\bra{s}\nabla_CA=\mp \tfrac{1}{2^n}\sum_x g(x)\bra{x}$ and so \begin{eqnarray*} \bra{s} \nabla^2_C A \ket{s}&=&\bra{s}C\,\nabla_CA\ket{s}- \bra{s}\nabla_CA\,C\ket{s} =\langle cg\rangle_0\pm \langle cg\rangle_0\\ &=&\bra{s}(C^2A-2CAC+AC^2)\ket{s}=\langle c^2a\rangle_0\pm \langle c^2a\rangle_0 - 2\langle CAC\rangle_0. 
\end{eqnarray*} Finally, we use $\langle c^2a\rangle_0=\tfrac1{2^n}\sum_x c^2(x)a(x)=\tfrac1{2^n}\sum_{x,y} c^2(x)a_{xy}$ and $\langle CAC\rangle_0=\tfrac1{2^n}\sum_{x,y}c(x)c(y)a_{xy}$ for the case $A=A^\dagger$ to give $$ \langle \nabla^2_C A\rangle_0 =\frac2{2^n}\sum_{x,y} (c^2(x)-c(x)c(y))a_{xy}=\frac1{2^n}\sum_{x,y} (c^2(x)+c^2(y)-2c(x)c(y))a_{xy}$$ which shows the terms in the sum with $x=y$ are zero and implies that last equation of the lemma. \end{proof} We use the lemma to prove Equations (\ref{eq:expecDcD3C}), (\ref{eq:expecDc3DC}), and (\ref{eq:expecDc2D2C}) concerning the nonzero initial state expectation values of cost gradient at fourth order. The technique of the proof may be used to similarly compute higher order expectation values. \begin{lem} \label{lem:higherOrderExpectations} For a cost Hamiltonian $C$ representing real function $c(x)$ we have: \begin{itemize} \item $\langle \nabla_C \nabla^3 C \rangle_0 = \frac2{2^n}\sum_x c(x)d^3c(x)$ \item $\langle \nabla^2_C \nabla^2 C \rangle_0 =\langle \nabla_C \nabla \nabla_C \nabla C \rangle_0 =\frac2{2^n}\sum_x \sum_{j<k} (\partial_{j,k}c(x))^2 \partial_j\partial_k c(x)$ \item $\langle \nabla^3_C \nabla C \rangle_0 = \frac2{2^n}\sum_x c(x)\sum_{j} (\partial_j c(x))^3 =- \frac1{2^n}\sum_x \sum_{j} (\partial_j c(x))^4$ \end{itemize} where $\partial_{j,k}c(x) := c(x^{(j,k)})-c(x)=\partial_j\partial_k c(x) + \partial_j c(x)+\partial_k c(x)$. In particular, for a QUBO cost Hamiltonian $C=a_0 +\sum_i a_i Z_i + \sum_{i<j}a_{ij}Z_iZ_j$ these quantities can be further expressed (defining $a_{ji}:=a_{ij}$) as: \begin{itemize} \item $\langle \nabla_C \nabla^3 C \rangle_0 = -16 \sum_ia_i^2 - 128 \sum_{i<j}a_{ij}^2$ \item $\langle \nabla^2_C \nabla^2 C \rangle_0 = 64 \sum_{i<j}a_ia_ja_{ij}+ 192 \sum_{i<j<k} a_{ij}a_{jk}a_{ik}$ \item $\langle \nabla^3_C \nabla C \rangle_0 = -16\sum_i a_i^4 - 32\sum_{i<j} a_{ij}^4 - 96 \sum_{i<j}a_{ij}^2 (a_i^2+a_j^2 + \frac12 \sum_{k\neq i,j}(a_{ik}^2+a_{jk}^2))$ \end{itemize} \end{lem} The second part of the lemma shows that we may always efficiently compute the quantities of the first part for cost functions such as QUBOs. Moreover, each expectation values reflects the topology of the underlying QUBO problem graph; for example, for MaxCut, $\langle \nabla^2_C \nabla^2 C \rangle_0$ is seen to be identically zero on triangle-free graphs. \begin{proof} Recall that any diagonal part of $A$ in Lemma~\ref{lem:expec1} can be ignored (such as, for example, in $\nabla^2 C$ for MaxCut from Table~\ref{tab:summary}) as it will not affect the initial state expectation values. The first result follows from the first case of Lemma~\ref{lem:expec1} with $A=\nabla^3 C=-A^\dagger$ which from (\ref{eq:gradCs}) satisfies $a(x)=d^3c(x)$ to give $\langle \nabla_C A \rangle_0 = \frac2{2^n}\sum_x c(x)d^3c(x)$. The second result follows applying the second case of Lemma~\ref{lem:expec1} to $A$ defined as the non-diagonal part of $\nabla^2 C$, which from (\ref{eq:grad2Cx}) has matrix elements $a_{xy}=2\partial_j\partial_k c(x)=2\partial_j\partial_k c(x^{(j,k)})$ when $x$ and $y=x^{(j,k)}$ differ by two bit flips $j,k$, and $a_{xy}=0$ otherwise, and so $$\langle \nabla^2_C \nabla^2 C \rangle_0 = \frac2{2^n}\sum_x \sum_{j<k} (c(x)-c(x^{(j,k)}))^2\partial_j\partial_k c(x),$$ where $c(x^{(j,k)})-c(x)=\partial_j\partial_k c(x) + \partial_j c(x)+\partial_k c(x)$ follow from the definitions of Sec.~\ref{sec:costDiffs}. 
Additionally, from the Jacobi identity and (\ref{eq:costJacobi}) we have $\langle \nabla_C \nabla \nabla_C \nabla C \rangle_0 =\langle \nabla_C^2\nabla^2 C\rangle_0$. For the third result, let $A=\nabla_C \nabla C=A^\dagger$, for which from (\ref{eq:DCDCx}) we have $a_{xy}=-(\partial_jc(x))^2$ when $y=x^{(j)}$ and $0$ otherwise, and hence the first equality of the second case of Lemma~\ref{lem:expec1} gives $\langle \nabla^2_C A \rangle_0=\tfrac1{2^n}\sum_x\sum_j (c(x)-c(x^{(j)}))^2(-(\partial_jc(x))^2)=-\tfrac1{2^n}\sum_x\sum_j (\partial_jc(x))^4$. This sum can be easily rearranged to become $\tfrac2{2^n}\sum_x c(x) \sum_j (\partial_jc(x))^3$, or alternatively this may be obtained from the second equality of the second case of Lemma~\ref{lem:expec1} by defining the function $g(x)=(\partial_j c(x))^3$ from (\ref{eq:DlCDCs}) that corresponds to $\nabla_C A$. Now further suppose we have a QUBO cost Hamiltonian~$C:=a_0I + C_1 + C_2$. We proceed by expanding each cost gradient as a linear combination of Pauli operators, and then computing the initial state expectation value using the observation that only terms containing strictly $I$ or $X$ factors can contribute (see \cite[Sec. YY]{hadfield2018thesis} for details). Note that this approach allows us to avoid dealing with exactly computing the coefficients of various terms which cannot contribute. Let $C_{2,Y}:=\sum_{i<j}a_{ij}Y_iY_j$ and here we write $a_{ji}=a_{ij}$, $a_{ii}:=0$, and $j\in nbd(i)$ if $a_{ij}\neq 0$. Applying Lemma~\ref{lem:quboGrads} and the Pauli matrix commutation relations ($[X,Y]=2iZ$ and cyclic permutations) we have $\nabla C = \nabla C_1 + \nabla C_2$, $\nabla^2 C = 4C_1 + 8C_2 - 8C_{2,Y}$, and $\nabla^3 C=4\nabla C_1 + 16\nabla C_2$. Hence $\nabla_C \nabla^3 C = -16\sum_i a_i^2 X_i -64\sum_{i<j}a_{ij}^2(X_i+X_j) +\dots$ where the terms not shown to the right each contain $Z$ factors (for example, terms proportional to $a_ia_{ij}X_iZ_j$), which implies $\langle \nabla_C \nabla^3 C \rangle_0=-16\sum_i a_i^2 -128\sum_{i<j}a_{ij}^2$ as stated. Similarly, $\nabla_C \nabla^2 C = -8 \nabla_C C_{2,Y}=16i\sum_{i<j}a_{ij}(a_iX_iY_j + a_jY_i X_j+\sum_{k\neq i,j} (a_{ik}Z_k X_iY_j+a_{jk}Y_iX_jZ_k))$, and so $\nabla_C^2 \nabla^2 C = 64 \sum_{i<j}a_{ij}a_ia_jX_iX_j+64 \sum_{i<j}\sum_{k} a_{ij}a_{jk}a_{ik}X_iX_j +\dots,$ with the terms to the right each containing $Y$ or $Z$ factors. As $a_{ii}=0$ the $a_{ij}a_{jk}a_{ik}$ terms are nonzero only for triangles in the graph formed from nonzero $a_{ij}$, which implies $\langle \nabla_C^2 \nabla^2 C\rangle_0 = 64 \sum_{i<j}a_{ij}a_ia_j+3\cdot 64 \sum_{i<j<k} a_{ij}a_{jk}a_{ik}$. Finally, a similar calculation gives the result for $\langle\nabla_C^3\nabla C\rangle_0$, as can readily be seen expanding the identity $\nabla_C \nabla C = \nabla_C^4 B$. \end{proof} \subsection{QAOA with small mixing angle}\label{app:smallMix} Here we show two proofs related to results in \secssref{sec:lightcones}{sec:smallMixingAngle}{sec:smallMixingAnglep}. \begin{proof}[Proof of \eqref{eq:expecgammanabla2C}] Recall the notation $\nabla = \sum_i \nabla_i$ with $\nabla_i = [X_i,\cdot]$. As for any $C$ $$\nabla^2 C=\sum_i\nabla_i^2 C + 2\sum_{i<j}\nabla_i\nabla_jC= -2DC +2\sum_{i<j}\nabla_i\nabla_jC$$ and $\langle DC\rangle_0 =0$ we have \begin{eqnarray} \bra{\gamma}\nabla^2 C \ket{\gamma} = 2\sum_{i<j} \langle \nabla_i\nabla_j C\rangle_0.
\end{eqnarray} Next observe $\partial_iC + \partial_kC + \partial_i\partial_kC= -2(C^{\{i\setminus k\}}+C^{\{k\setminus i\}})$, where $C^{\{j\setminus k\}}$ denotes the terms in $C$ that contain a $Z_j$ but not a $Z_k$, which gives $$U_P^\dagger X_iX_kU_P=e^{i\gamma(C^{\{i\setminus k\}}+C^{\{k\setminus i\}})}X_iX_k.$$ Hence \eqref{eq:expecgammanabla2C} follows by observing \begin{eqnarray*} \langle \nabla_i\nabla_j C\rangle_0 &=& \langle X_iX_j C-X_iCX_j-X_jCX_i+CX_iX_j\rangle_0\\ &=& \langle Ce^{-2i\gamma(C^{\{i\setminus j\}}+C^{\{j\setminus i\}})} + e^{2i\gamma(C^{\{i\setminus j\}}+C^{\{j\setminus i\}})} C\rangle_0\\ &-& \langle Ce^{-2i\gamma(C^{\{i\setminus j\}}-C^{\{j\setminus i\}})} + e^{2i\gamma(C^{\{i\setminus j\}}-C^{\{j\setminus i\}})} C\rangle_0\\ &=& 2\langle (\cos(\gamma(C^{\{i\setminus j\}}+C^{\{j\setminus i\}}))-\cos(\gamma(C^{\{i\setminus j\}}-C^{\{j\setminus i\}})))C\rangle_0\\ &=& -4\langle \sin(\gamma C^{\{i\setminus j\}})\sin(\gamma C^{\{j\setminus i\}})C\rangle_0\\ &=& -4\langle \sin(\gamma(\tfrac12\partial_iC +\tfrac14 \partial_i\partial_jC )) \sin(\gamma(\tfrac12\partial_jC +\tfrac14 \partial_i\partial_jC )) C\rangle_0 \end{eqnarray*} where we used $C^{\{i\setminus j\}}=-\tfrac12\partial_iC -\tfrac14\partial_i\partial_jC$ and $\cos(a)-\cos(b)=-2\sin\tfrac{a+b}2\sin\tfrac{a-b}{2}$. \end{proof} \begin{proof}[Proof of \corref{cor:smallBeta}] Consider a general QAOA$_p$ state which we write in shorthand as $\ket{p}:=\ket{\boldsymbol{\gamma \beta}}_p = \sum_x q_{x,p} \ket{x},$ where the coefficients $q_{x,p}$ may be complex. Proceeding as in the proof of Theorem~\ref{thm:smallBetap1} gives to first order in $\beta$ \begin{eqnarray} \langle C \rangle_{p+1} &\simeq& \bra{p}C\ket{p} + \beta \bra{p\gamma_{p+1}} i\nabla C \ket{p\gamma_{p+1}}=\langle C \rangle_p +\beta \bra{p} \nabla_{\widetilde{B}}C \ket{p} \end{eqnarray} where $\ket{p\gamma_{p+1}} := e^{-i\gamma_{p+1}C}\ket{p}$ and $\widetilde{B}=e^{i\gamma_{p+1} C}Be^{-i\gamma_{p+1} C}$. Observing that \begin{eqnarray} \nabla_{\widetilde{B}}C \ket{p} = -\sum_x \left( \sum_{j=1}^n q_{x^{(j)},p}\partial_j c(x) e^{i\gamma_{p+1} \partial_j c(x)}\right) \ket{x}, \end{eqnarray} we have $\langle C \rangle_{p+1} \simeq \langle C \rangle_{p} - i\beta \sum_x q^\dagger_{x,p} \left(\sum_{j=1}^n q_{x^{(j)},p}\partial_j c(x) e^{i\gamma_{p+1} \partial_j c(x)}\right)$. Next observe that \begin{eqnarray*} \sum_x \sum_{j=1}^n q^\dagger_{x,p} q_{x^{(j)},p}\partial_j c(x) e^{i\gamma_{p+1} \partial_j c(x)} &=& \sum_{j=1}^n \sum_x q^\dagger_{x,p} q_{x^{(j)},p} e^{i\gamma_{p+1} \partial_j c(x)} (c(x^{(j)})-c(x))\\ &=& \sum_{j=1}^n \sum_x c(x) (q^\dagger_{x^{(j)},p}q_{x,p} e^{-i\gamma_{p+1} \partial_j c(x)}-q^\dagger_{x,p} q_{x^{(j)},p} e^{i\gamma_{p+1} \partial_j c(x)})\\ &=& \sum_{j=1}^n \sum_x c(x) |q^\dagger_{x,p} q_{x^{(j)},p}| (e^{-i\alpha_{xjp}-i\gamma_{p+1} \partial_j c(x)} - e^{i\alpha_{xjp} + i\gamma_{p+1} \partial_j c(x)})\\ &=& \sum_{j=1}^n \sum_x c(x) r_{xjp} (-2i\sin(\alpha_{xjp}+\gamma_{p+1} \partial_j c(x)))\\ &=& -\sum_x c(x) \sum_{j=1}^n 2i r_{xjp} \sin(\alpha_{xjp}+\gamma_{p+1} \partial_j c(x)) \end{eqnarray*} where we have defined the real polar variables $r,\alpha$ as $r_{xjp}=|q^\dagger_{x^{(j)},p}q_{x,p}|$ and $ q^\dagger_{x^{(j)},p}q_{x,p} = r_{xjp}e^{-i\alpha_{xjp}}$, which reflect the degree to which the coefficients $q_{x,p}$ are non-real. Plugging in above implies (\ref{eq:expecsmallbetap}).
Further, rearranging as \begin{equation} \langle C \rangle_{p+1} \simeq \sum_x c(x) \left(|q_{x,p}|^2 -2\beta \sum_{j=1}^n r_{xjp} \sin(\alpha_{xjp}+\gamma_{p+1} \partial_j c(x)) \right) \end{equation} shows (\ref{eq:probsmallbetap}) gives the change in probability for each state $x$ to first order. \end{proof} \section{Norm and error bounds} \label{app:normAndErrorBounds} Here we show bounds to the norms of cost gradient operators, which we subsequently apply to bound the error of our leading-order formulas. The spectral norm $\|A\|:=\|A\|_2$ is the maximal eigenvalue in absolute value of $A$. When $C$ represents a function $c(x)\geq 0$ to be maximized, for example a constraint satisfaction problem such as MaxCut or MaxSAT, then $\|C\|=\max_x c(x)$. Hence computing $\|C\|$ exactly can be NP-hard, in which case upper bounds to $\|C\|$ (such as the number of clauses) typically suffice. For commutator norms, commuting components should not affect the norm estimates, in particular the Identity operator component of each operator. To reflect this in our norm and error estimates, here we define \begin{equation} \label{eq:normstar} \|A\|_* := \min_{b\in{\mathbb C}}\|A + bI\|, \end{equation} which in particular satisfies $\|A\|_*\leq \|A\|$, and similarly for the traceless part of $A$ \begin{equation}\label{eq:normstar2} \|A\|_*\leq \|A-\tfrac1{2^n}tr(A)I\|. \end{equation} As $\|aI\|_*=0$ for $a\in{\mathbb C}$, \eqref{eq:normstar} gives a seminorm. For example, for MaxCut we have $C=\tfrac{|E|}2I-\tfrac12\sum_{(ij)}Z_iZ_j=:\tfrac{|E|}2I+C_Z$ which satisfies $\|C_Z\|=\tfrac{|E|}2$ and $\|C\|_*=\tfrac{c^*}2=\frac{\|C\|}2$. When $c^*\neq |E|$ we have $\|C\|_* < \|C-\tfrac{|E|}2I\|$ which illustrates that the inequality in \eqref{eq:normstar2} can be strict. \subsection{Norm bounds} \label{app:normBounds} We first consider general gradients (commutators) $\nabla_G:=[G,\cdot]$. \begin{lem} \label{lem:normCGrad} For $n$-qubit operators $A,G$ and $\ell \in \naturals$ we have \begin{equation} \label{eq:lemnormCGrad} \|\nabla_G^\ell A\| \, \leq \, (2\|G\|_*)^\ell \|A\|_* \,\leq\, (2\|G\|)^\ell \|A\|. \end{equation} \noindent If we know a decomposition $A=A_{com}+A_{nc}$ such that $[G,A_{com}]=0$ then this improves to \begin{equation} \label{eq:lemnormCGrad2} \|\nabla_G^\ell A\| \,=\, \|\nabla_G^\ell A_{nc}\| \,\leq\, (2\|G\|_*)^\ell \|A_{nc}\|_*. \end{equation} \end{lem} \begin{proof} Let $G'=G+gI$ and $A'=A+aI$. As identity components always commute, trivially $\nabla^\ell_{G'} A'=\nabla^\ell_G A$ for all $g,a$. Selecting $g,a$ such that $\|G\|_*=\|G'\|$ and $\|A\|_*=\|A'\|$ and applying the triangle and Cauchy-Schwarz inequalities to $\nabla_{G'} A' = G'A'-A'G'$ gives $\|\nabla_G A\|\leq 2\|G'\|\|A'\|=2\|G\|_*\|A\|_*\leq 2\|G\|\|A\|$. \eqref{eq:lemnormCGrad} then follows by induction on $\nabla_G^{\ell-1}A=\nabla_{G'}^{\ell-1}A'$. \eqref{eq:lemnormCGrad2} follows by the same arguments observing $\nabla_G A = \nabla_G A_{nc}$. \end{proof} In particular, for the cost Hamiltonian $C=a_0I + C_Z$, \lemref{lem:normCGrad} implies \begin{equation} \|\nabla^\ell_C A\|\leq (2\|C_Z\|)^\ell \|A_{non-diag}\| \end{equation} for $A_{non-diag}$ the Pauli basis components of $A$ that contain an $X$ or $Y$ factor. For gradients $\nabla^\ell$ with respect to the transverse-field Hamiltonian $B$ we show a tighter bound applicable to $k$-local cost Hamiltonians. (Recall an operator is $k$-local if its Pauli expansion contains terms acting nontrivially on $k$ or fewer qubits.) 
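Before stating it, we note that the bound of \lemref{lem:normCGrad} is easy to check numerically; a minimal Python/NumPy sketch is given below (random Hermitian test matrices and all identifiers are illustrative choices; for a Hermitian matrix the seminorm \eqref{eq:normstar} equals half the spread of the spectrum, which is used here for convenience).
\begin{verbatim}
# Numerical sanity check (assumes NumPy) of the commutator bound
# ||nabla_G^l A|| <= (2 ||G||_*)^l ||A||_*  for random Hermitian G, A.
import numpy as np

rng = np.random.default_rng(0)
dim, ell = 8, 3

def random_hermitian(d):
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2

def spec_norm(M):
    return np.linalg.norm(M, 2)                 # largest singular value

def star_norm(H):
    # for Hermitian H, min_b ||H + b I|| = (lambda_max - lambda_min) / 2
    ev = np.linalg.eigvalsh(H)
    return (ev[-1] - ev[0]) / 2

G, A = random_hermitian(dim), random_hermitian(dim)
nabla = A
for _ in range(ell):
    nabla = G @ nabla - nabla @ G               # apply nabla_G = [G, .] repeatedly

lhs = spec_norm(nabla)
rhs = (2 * star_norm(G))**ell * star_norm(A)
print(lhs, "<=", rhs, ":", lhs <= rhs + 1e-9)
\end{verbatim}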
\begin{lem} \label{lem:normGrad} For a $k$-local operator $A$ and $\ell \in \naturals$ we have \begin{equation} \label{eq:lem:normGrad} \|\nabla^\ell A\|\,\leq\, (2k)^\ell \|A\|_*\,\leq\, (2k)^\ell \|A\|, \end{equation} and this bound is tight (i.e., there exists an $A$ such that $\|\nabla^\ell A\|= (2k)^\ell \|A\|_*=(2k)^\ell \|A\|$). \end{lem} \begin{proof} Let $A'=aI+A$ such that $\|A\|_*=\|A\|.$ In general $A'=\sum_j w_j \sigma^{(j)}$, where $w_j\in {\mathbb C}$ and each $\sigma^{(j)}$ gives a string of at most $k$ Pauli matrices. Consider the action of $\nabla=[B,\cdot]=\sum_i[X_i,\cdot]$ on a fixed $\sigma^{(j)}$; clearly, of the $n$ commutators $[X_i,\sigma^{(j)}]$ for $i=1,\dots,n$, at most $k$ can be nonzero. Moreover, each $[X_i,\cdot]$ either acts unitarily on a $\sigma^{(j)}$ (up to a factor of $2$) if $\sigma^{(j)}$ contains a $Y_i$ or $Z_i$ factor, or else annihilates it. Hence $\|[X_i,\sigma^{(j)}]\|\leq 2\|\sigma^{(j)}\|=2$. and so Cauchy-Schwarz gives $$\|\nabla A\|=\|\nabla A'\|=\| \sum_j w_j \sum_{j_{1}}^{j_{k}} [X_{j_i},\sigma^{(j)}]\| \leq 2k \|A\|_*\leq 2k\|A\|. $$ Repeating this argument for $\|\nabla(\nabla^{\ell-1}A')\|$ gives the claim by induction. Next, let $A=Z_1Z_2\dots Z_k$ for some $k\leq n$; a straightforward calculation shows $\|\nabla^\ell A\|= (2k)^\ell \|A\|_*=(2k)^\ell \|A\|=(2k)^\ell$ for each $\ell=1,2,\dots$ as desired. \end{proof} \lemsref{lem:normCGrad}{lem:normGrad} may be applied recursively to bound $\|A\|$ for $A$ an arbitrary cost gradient operator as in \lemref{lem:genGrad}. Norm bounds are useful for upper bounding expectation values, as for any Hamiltonian~$H$ and normalized quantum state $\ket{\psi}$ we have $\bra{\psi}H\ket{\psi}\leq \|H\|$. For example, for MaxCut, \lemref{lem:normGrad} gives $\langle (i\nabla)^\ell C \rangle_p \leq 4^\ell \frac{c^*}2$. \subsection{Error bounds} \label{app:errorBounds} We now show how to obtain error bounds for commutator expansions, which we use to below to prove \corref{cor:thm1errorboundeps} concerning the error of the estimates of \thmref{thm1:smallAngles}. The same approach may be applied to bounding the error when truncating our series expressions at higher order, or to deriving corresponding estimates for QAOA$_p$. Here we utilize an integral representation of operator conjugation that follows from the fundamental theorem of calculus (see e.g. \cite[Eq. 23]{childs2019theory}) \begin{equation}\label{eq:error1} e^{i\beta \nabla_H}A=A+i\int_0^\beta d\alpha\, e^{i\alpha \nabla_H}\nabla_H A, \end{equation} which may be recursively applied to give \begin{equation} \label{eq:error2} e^{i\beta \nabla_H}A=A +i\beta\nabla_HA -\int_0^\beta d\alpha \int_0^\alpha d\tau\, e^{i\tau \nabla_H}\nabla^2_H A, \end{equation} and similarly to pull out higher-order leading terms as desired. \begin{lem} \label{lem:smallmixC} For a $k$-local matrix $A$ acting on $n$ qubits we have \begin{equation*} \|U_M^\dagger(\beta) \,A\, U_M(\beta)-(A+i\beta \nabla A)\| \leq 2k^2 \beta^2 \|A\|_*. \end{equation*} \end{lem} \begin{proof} Applying \eqref{eq:error2} with $U_M(\beta)$ and using Cauchy-Schwarz gives \begin{eqnarray*} \|U_M^\dagger A U_M - A -i\beta\nabla A\|&=&\|\int_0^\beta d\alpha \int_0^\alpha d\tau e^{i\tau \nabla}\nabla^2 A\| \,\leq\, \left|\int_0^\beta d\alpha \int_0^\alpha d\tau\right|\cdot \|e^{i\tau \nabla}\nabla^2 A\|\\ &\leq& \frac{\beta^2}2\|\nabla^2 A\|\leq 2k^2\beta^2\|A\|_*\leq 2k^2\beta^2\|A\|. 
\end{eqnarray*} where we have used $\|e^{i\tau \nabla}\nabla^2 A\|=\|\nabla^2 A\|$ from unitarity, and \lemref{lem:normGrad}. \end{proof} A similar argument yields an $O((2k|\beta|)^{\ell+1}\|A\|_*)$ error bound for truncating the series for $U_M^\dagger A U_M$ at order $\ell$ (cf. \cite[Thm. 5]{childs2019theory}). We next bound the error of the quantities of Theorem~\ref{thm1:smallAngles} as stated in Cor.~\ref{cor:thm1errorboundeps}. \begin{lem} \label{lem:thm1errorbound} For QAOA$_1$ with $k$-local cost Hamiltonian $C$ the error in the estimate (\ref{eq:expecC1}) of \thmref{thm1:smallAngles} is bounded as \begin{equation} \label{eq:errorBoundC} \left| \langle C\rangle_1 - \,\left( \langle C \rangle_0 - \gamma \beta \langle \nabla_C \nabla C \rangle_0 \right) \right| \,\leq\, 4|\beta|\gamma^2 k\|C\|_*^3 + 4\beta^2|\gamma| k^2\|C\|_*^2 . \end{equation} \end{lem} \begin{proof} Applying \eqsref{eq:error1}{eq:error2} to QAOA$_1$ we have \begin{eqnarray*} e^{i\gamma \nabla_C} e^{i\beta \nabla}C &=& C +i\beta e^{i\gamma \nabla_C} \nabla C -\int_0^\beta d\alpha \int_0^\alpha d\tau e^{i\gamma \nabla_C} e^{i\tau \nabla}\nabla^2 C\\ &=& C + i\beta\nabla C - \gamma\beta \nabla_C\nabla C -\beta \int_0^\gamma d\alpha \int_0^\alpha d\tau e^{i\tau \nabla_C}\nabla_C^2 \nabla C\\ & &-\, \int_0^\beta d\alpha \int_0^\alpha d\tau \left(e^{i\tau \nabla}\nabla^2 C + i\int_0^\gamma da e^{ia\nabla_C} \nabla_C (e^{i\tau \nabla}\nabla^2 C)\right), \end{eqnarray*} so taking the initial state expectation value of both sides and using $\bra{s}\nabla^\ell C\ket{s}=0$ gives \begin{eqnarray*} |\langle C\rangle_1 - \langle C\rangle_0 +\gamma\beta \langle \nabla_C \nabla C\rangle_0| &\leq& |\beta| \| \int_0^\gamma d\alpha \int_0^\alpha d\tau e^{i\tau \nabla_C}\nabla_C^2 \nabla C\|+\frac{\beta^2}2\|\int_0^\gamma da e^{ia\nabla_C} \nabla_C (e^{i\tau \nabla}\nabla^2 C)\|\\ &\leq& |\beta| (\gamma^2/2) \|\nabla_C^2\nabla C\| + \frac{\beta^2\gamma}2 \|\nabla_C \nabla^2 C\|\\ &\leq& 4|\beta|\gamma^2 k\|C\|_*^3 + 4\beta^2|\gamma| k^2\|C\|_*^2, \end{eqnarray*} where we have used \lemsref{lem:normCGrad}{lem:normGrad}. Observing the left-hand side is invariant under shifts $C\leftarrow C +aI$ implies the tighter bound \eqref{eq:errorBoundC}. \end{proof} Naively apply the same approach to the error in the probability estimate of \thmref{thm1:smallAngles} leads to a bound proportional to $\|H_x\|=\|\ket{x}\bra{x}\|=1$. Here we apply a refined analysis to obtain a more useful bound that reflects the exponentially small initial probabilities in the computational basis, at the expense of an exponential factor in the bound. \begin{lem} \label{lem:thm1errorboundp} For QAOA$_1$ the error in the estimate (\ref{eq:px1}) of \thmref{thm1:smallAngles} satisfies \begin{equation} \label{eq:probBound} \left| P_1(x) - (P_0(x)- \frac{2\gamma \beta}{2^n}dc(x))\right| \leq \frac{2}{2^n}{\boldsymbol{i}}gg( n^2\beta^2 e^{2n|\beta|} + \frac43n|\beta||\gamma|^3\|C\|_*^3 \cosh(2|\gamma|\|C\|_*){\boldsymbol{i}}gg). 
\end{equation} \end{lem} \begin{proof} Turning to the probability, for $H_x=\ket{x}\bra{x}$ we have \begin{eqnarray*} e^{i\gamma \nabla_C} e^{i\beta \nabla}H_x &=& H_x + i\beta e^{i\gamma \nabla_C}\nabla H_x + e^{i\gamma \nabla_C}\underbrace{(-\frac{\beta^2}2\nabla^2 H_x -i\frac{\beta^3}{3!}\nabla^3 H_x + \dots)}_{R_1(\beta)}\\ &=& H_x + i\beta\nabla H_x -\gamma\beta\nabla_C\nabla H_x +i\beta \underbrace{(-\frac{\gamma^2}2\nabla^2_C\nabla H_x+\dots)}_{R_2(\gamma)} + e^{i\gamma \nabla_C}R_1(\beta) \end{eqnarray*} and so taking initial state expectations and using $\langle \nabla^\ell H_x \rangle_0=0$ from \lemref{lem:expecGrad} gives $$ P_1(x)-P_0(x) +\gamma\beta\langle \nabla_C \nabla H_x \rangle_0= \bra{\gamma} R_1(\beta) \ket{\gamma} +i\beta \langle R_2(\gamma)\rangle_0$$ for $\ket{\gamma}:=U_P(\gamma)\ket{s}$. For the first term on the right we have \begin{eqnarray*} |\bra{\gamma}R_1(\beta)\ket{\gamma}|&=& \left|\frac1{2^n}\sum_{yz}e^{-i\gamma(c(z)-c(y))} \bra{y}R_1(\beta)\ket{z}\right| \,\leq\, \frac1{2^n}\sum_{yz}| \bra{y}R_1(\beta)\ket{z}|\\ &\leq& \frac1{2^n}\sum_{\ell=2}^\infty \frac{|\beta|^\ell}{\ell!} \sum_{yz} | \bra{y}\nabla^\ell H_x \ket{z}| \,\leq\, \frac1{2^n}\sum_{\ell=2}^\infty \frac{|\beta|^\ell}{\ell!}(2n)^\ell \\ &\leq& \frac2{2^n}n^2\beta^2 e^{2n|\beta| } \end{eqnarray*} where we have used $\sum_{y,z}|\bra{y}\nabla^\ell H_x \ket{z}|\leq (2n)^\ell$, which follows by observing that $H_x$ has a single nonzero matrix element $\bra{x}H_x\ket{x}=1$ in the computational basis, and each application of $\nabla$ increases the number of nonzero matrix elements (equal to $\pm 1$) by a factor at most $2n$ (e.g., we have $\nabla H_x = \sum_{j=1}^n (\ket{x^{(j)}}\bra{x}-\ket{x}\bra{x^{(j)}})$). Next, for the term corresponding to $R_2(\gamma)=-\frac{\gamma^2}{2}\nabla^2_C \nabla H_x -i\frac{\gamma^3}{3!}\nabla^3_C \nabla H_x+\dots$, suppose we have shifted $C$ such that $\|C\|=\|C\|_*$, which leaves $R_2(\gamma)$ and other commutators invariant. Consider $G:=\nabla H_x$ which similarly satisfies $\sum_{y,z}|\bra{y}G\ket{z}|\leq 2n$. From Lem.~\ref{lem:genGrads}, when $\ell$ is even $\langle \nabla_C^\ell G \rangle_0=0$, and for odd $\ell$ from expanding the commutators we have \begin{eqnarray*} |\langle \nabla_C^\ell G \rangle_0| &=& \frac1{2^n} |\sum_{y,z}\bra{y}\nabla_C^\ell G\ket{z}| = \frac1{2^n} |\sum_{y,z}\bra{y}C^\ell G - \ell C^{\ell-1}GC+\dots \pm G C^\ell\ket{z}|\\ &\leq& \frac{2^\ell}{2^n} \|C\|_*^\ell \sum_{y,z} |\bra{y} G \ket{z}| \leq \frac{2^{\ell+1}}{2^n} \|C\|_*^\ell n, \end{eqnarray*} which implies \begin{eqnarray*} |\langle R_2(\gamma) \rangle_0 | &=& | -i\frac{\gamma^3}{3!}\langle \nabla_C^3 \nabla H_x\rangle_0+ i\frac{\gamma^5}{5!}\langle \nabla_C^5 \nabla H_x\rangle_0+\dots|\\ &\leq& \frac{|\gamma|^3}{3!} \frac{2^3}{2^n}2n\|C\|_*^3 +\frac{|\gamma|^5}{5!}\frac{2^5}{2^n}2n\|C\|_*^5+\dots\\ &\leq& \frac83\frac{n}{2^n}|\gamma|^3\|C\|_*^3 \cosh(2|\gamma| \|C\|_*) \,\,\leq\,\, \frac{3n}{2^n}|\gamma|^3\|C\|_*^3 e^{2 |\gamma| \|C\|_*}. \end{eqnarray*} Hence using $|P_1(x)-P_0(x) +\gamma\beta\langle \nabla_C \nabla H_x \rangle_0|\leq |\bra{\gamma} R_1 \ket{\gamma}| +|\beta \langle R_2\rangle_0|$ gives (\ref{eq:probBound}). \end{proof} Finally, we use the lemmas to bound the error of the estimates of \thmref{thm1:smallAngles}.
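Before giving the proof, we remark that the bound of \lemref{lem:thm1errorbound} is straightforward to evaluate numerically; the following sketch (NumPy and SciPy assumed; the $4$-vertex ring MaxCut instance, for which $k=2$ and $\|C\|_*=c^*/2=2$, and all identifiers are illustrative choices) compares both sides of \eqref{eq:errorBoundC}.
\begin{verbatim}
# Numerical illustration of |<C>_1 - (<C>_0 - gamma*beta*<nabla_C nabla C>_0)|
#   <= 4|beta| gamma^2 k ||C||_*^3 + 4 beta^2 |gamma| k^2 ||C||_*^2
# (assumes NumPy and SciPy; ring MaxCut instance chosen only for illustration).
import numpy as np
from scipy.linalg import expm

n, edges, k = 4, [(0, 1), (1, 2), (2, 3), (3, 0)], 2
Z = np.diag([1.0, -1.0])
X = np.array([[0, 1], [1, 0]], dtype=complex)

def op(single, qubits):
    M = np.ones((1, 1), dtype=complex)
    for q in range(n):
        M = np.kron(M, single if q in qubits else np.eye(2))
    return M

C = 0.5 * len(edges) * np.eye(2**n) - 0.5 * sum(op(Z, {i, j}) for i, j in edges)
B = sum(op(X, {q}) for q in range(n))
s = np.full(2**n, 1 / 2**(n / 2), dtype=complex)
normC_star = 2.0                                  # (c_max - c_min)/2 for this instance

gamma, beta = 0.2, 0.1
psi = expm(-1j * beta * B) @ (expm(-1j * gamma * C) @ s)
exact = np.real(np.vdot(psi, C @ psi))
comm = B @ C - C @ B                              # nabla C
lead = np.real(np.vdot(s, C @ s)) - gamma * beta * np.real(
    np.vdot(s, (C @ comm - comm @ C) @ s))        # <C>_0 - gamma*beta*<nabla_C nabla C>_0
bound = (4 * abs(beta) * gamma**2 * k * normC_star**3
         + 4 * beta**2 * abs(gamma) * k**2 * normC_star**2)
print(abs(exact - lead), "<=", bound)
\end{verbatim}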
\begin{proof}[Proof of \corref{cor:thm1errorboundeps}] From \lemref{lem:thm1errorbound} for QAOA$_1$ with $k$-local $C$ we have \begin{equation} \label{eq:errorBoundC2} \left| \langle C\rangle_1 - \,\left( \langle C \rangle_0 - \gamma \beta \langle \nabla_C \nabla C \rangle_0 \right) \right| \,\leq\, \|C\|_* ( 4|\beta|\gamma^2 k\|C\|_*^2 + 4\beta^2|\gamma| k^2\|C\|_*), \end{equation} with $\langle \nabla_C \nabla C\rangle_0 = 2\langle C\, DC \rangle_0$. Selecting $\gamma,\beta$ such that $|\beta| \leq \frac{\sqrt{\e}}{2k}$ and $|\gamma| \leq \frac{\e^{1/4}}{2\|C\|_*} $ gives $$4|\beta|\gamma^2 k\|C\|_*^2 + 4\beta^2|\gamma| k^2\|C\|_*\leq \frac{\e}2 + \frac{\e^{5/4}}2 \leq \e$$ which then implies \eqref{eq:smallAnglesErrorBound} observing that $\|C_Z\| \geq \|C\|_*$. Similarly, the probability bound follows from \lemref{lem:thm1errorboundp} applying the smaller mixing angle $|\beta|\leq \frac25 \frac{\sqrt{\epsilon}}{n}$ to \eqref{eq:probBound}, which gives an upper bound of $0.92\e/2^n$ and implies \eqref{eq:smallAnglesErrorBoundProb}. \end{proof} \section{Quadratic Hamiltonians} \label{app:klocal} For $k$-local Hamiltonians, higher-order gradients have particularly nice properties.\footnote{Recall an operator on qubits is $k$-local if all terms in its Pauli operator expansion act nontrivially on at most $k$ qubits, and strictly $k$-local if all non-Identity terms act on exactly $k$ qubits.} Here we prove \lemref{lem:quboGrads} for QUBOs by considering the linear and quadratic terms in turn. \begin{lem} For a linear ($1$-local) cost Hamiltonian $C= \sum_{j}c_{j}Z_j$, we have \begin{eqnarray} \nabla C = -2i C_Y \;\;\; \text{ and } \;\;\; \nabla^2 C = 4 C, \end{eqnarray} with $C_Y:=\sum_{j}c_{j}Y_j$, which for $\ell\in\naturals$ implies \begin{equation} \nabla^{2\ell} C = 4^\ell C\;\;\; \text{ and } \;\;\;\nabla^{2\ell+1} C = 4^\ell \nabla C. \end{equation} \end{lem} \begin{proof} The lemma follows trivially from the Pauli matrix commutation relations as $\nabla_j C = [X_j,C]=-2ic_jY_j$ and $\nabla^2_j C = 4c_jZ_j$. The second statement follows by induction. \end{proof} \begin{lem} \label{lem:quadraticGrads} For a strictly quadratic ($2$-local) cost Hamiltonian $C=\sum_{uv}c_{uv}Z_uZ_v$, \begin{eqnarray} \nabla^2 C = 8C - 8C_Y \;\;\; \text{ and } \;\;\; \nabla^3 C = 16 \nabla C, \end{eqnarray} with $C_Y:=\sum_{uv}c_{uv}Y_uY_v$, which for $\ell\in\naturals$ implies \begin{equation} \nabla^{2\ell} C = 16^{\ell-1} \nabla^2 C\;\;\; \text{ and } \;\;\;\nabla^{2\ell+1} C = 16^\ell \nabla C. \end{equation} \end{lem} \begin{proof} The first statement follows trivially as before using the cyclic Pauli relations $[X_j,Y_j]=2iZ_j$ and the linearity of $\nabla=[B,\cdot]$, and the second by induction. \end{proof} \lemref{lem:quboGrads} then follows immediately from combining the linear and quadratic cases. Analogous results follow for $k$-local Hamiltonians with $k>2$. \subsection{QAOA$_1$ for quadratic cost Hamiltonians} \label{sec:QUBOs} \lemref{lem:quboGrads} may be applied directly to QAOA for QUBOs such as MaxCut where $ C=\frac{m}{2}-\frac12\sum_{(ij)\in E}Z_iZ_j$. Recall the quantities of \tabref{tab:summary}.
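The operator identities above are also easy to verify numerically; the following Python/NumPy sketch (random quadratic coefficients on three qubits, chosen purely for illustration) checks $\nabla^2 C = 8C-8C_Y$ and $\nabla^3 C = 16\nabla C$ for a strictly quadratic cost Hamiltonian.
\begin{verbatim}
# Check of Lemma (quadraticGrads) identities for C = sum c_{uv} Z_u Z_v,
# with nabla = [B, .] and B = sum_i X_i (assumes NumPy; illustrative only).
import numpy as np

rng = np.random.default_rng(2)
n = 3
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def two_body(P, i, j):
    # P_i P_j on n qubits, for i < j
    return np.kron(np.kron(np.kron(np.eye(2**i), P), np.eye(2**(j - i - 1))),
                   np.kron(P, np.eye(2**(n - j - 1))))

pairs = [(0, 1), (0, 2), (1, 2)]
coeff = {p: rng.normal() for p in pairs}
C  = sum(coeff[p] * two_body(Z, *p) for p in pairs)
CY = sum(coeff[p] * two_body(Y, *p) for p in pairs)
B  = sum(np.kron(np.kron(np.eye(2**q), X), np.eye(2**(n - q - 1))) for q in range(n))

nab  = lambda A: B @ A - A @ B
nab1 = nab(C)
print(np.allclose(nab(nab1), 8 * C - 8 * CY))        # nabla^2 C = 8C - 8C_Y
print(np.allclose(nab(nab(nab1)), 16 * nab1))        # nabla^3 C = 16 nabla C
\end{verbatim}
The same construction extends directly to QUBOs with linear terms via \lemref{lem:quboGrads}.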
\begin{proof}[Proof of \eqref{eq:expecC1maxcut}] For MaxCut, applying Lem.~\ref{lem:quadraticGrads} to (\ref{eq:infinitesimalConj}) we have \begin{eqnarray} \label{eq:UMCQUBO} U_M^\dagger C U_M &=& \sum_{k=0}^\infty \tfrac{(i\beta)^k}{k!}\nabla^k C \nonumber\, =\, C + \sum_{k=0}^\infty \tfrac{(i\beta)^{2k+1}}{(2k+1)!}16^k\nabla C + \sum_{k=1}^\infty \tfrac{(i\beta)^{2k}}{(2k)!}16^{k-1}\nabla^2 C\nonumber\\ &=& C + \tfrac{i}{4}\nabla C \sum_{k=0}^\infty \tfrac{(-1)^k(4\beta)^{2k+1}}{(2k+1)!} +\tfrac{1}{16} \nabla^2 C \sum_{k=1}^\infty \tfrac{(-1)^k(4\beta)^{2k}}{(2k)!}\nonumber\\ &=& C + \tfrac{\sin(4\beta)}{4}\,i\nabla C +\tfrac{(\cos(4\beta)-1)}{16} \, \nabla^2 C. \end{eqnarray} Hence, using $\cos (2x)=1-2\sin^2(x)$, taking the expectation value with respect to $\ket{\gamma}:=U_P(\gamma)\ket{s}$ gives \eqref{eq:expecC1maxcut} via $\langle C\rangle_1=\bra{\gamma}U_M^\dagger C U_M\ket{\gamma}$ and $\bra{\gamma} C \ket{\gamma}=\bra{s}C\ket{s}=\langle C\rangle_0$. \end{proof} The general QUBO formula~\eqrefp{eq:QAOA1QUBO} for $\langle C\rangle_1$ is obtained similarly. Here we consider the explicit example of Balanced-Max-$2$-SAT as defined in \secref{sec:balancedMax2Sat}, which generalizes (and rederives along the way) previous results for QAOA$_1$ for MaxCut. We emphasize the calculations such as that of the following proof for more general problems may be most easily implemented numerically, in particular for $p>1$ (cf. Algorithm 2 in \secref{sec:classicalAlgGeneral2}). \begin{proof} [Proof of \eqref{eq:expecC1balmax2sat} (and \eqsref{eq:maxcutnablaCexpec}{eq:maxcutnabla2Cexpec}] As $C$ is strictly $2$-local (modulo the identity term), \eqref{eq:QAOA1QUBO} (cf. \eqref{eq:UMCQUBO}) gives $\langle C\rangle_1 =\bra{\gamma}U_M^\dagger C U_M\ket{\gamma}$ with $\ket{\gamma}=U_P\ket{s}$ and $$ U_M^\dagger C U_M = C +\frac{\sin4\beta}4 i\nabla C -\frac{\sin^22\beta}8 \nabla^2 C.$$ We follow but generalize the proof for MaxCut in \cite{hadfield2018thesis,wang2018quantum} (which is a simpler case with all signs the same of the $ZZ$ terms in the cost Hamiltonian). Recall that in the Pauli basis only strictly $I,X$ terms can contribute to QAOA initial state expectation values (i.e., if a Pauli string $A_j$ contains a $Y$ or $Z$ factor then $\bra{s}A_j\ket{s}=Tr(\ket{s}\bra{s}A_j)=0$). Observing that $i\nabla C= -\frac12 \sum_{(ij)\in E}(-1)^{i\oplus j} (Y_iZ_j+Z_iY_j)$ (cf. \tabref{tab:summary}) we have \begin{eqnarray} U_P^\dagger i\nabla C U_P&=&-\frac12 \sum_{(ij)\in E}(-1)^{i\oplus j} U_P^\dagger(Y_iZ_j+Z_iY_j)U_P\nonumber\\ &=&-\frac12 \sum_{(ij)\in E}(-1)^{i\oplus j} (e^{2i\gamma C^{\{i\}}}Y_iZ_j+e^{2i\gamma C^{\{j\}}}Z_iY_j), \end{eqnarray} where as the terms in $C$ mutually commute we may write each exponential as a Pauli sum using $e^{\pm i\gamma Z_iZ_j/2}=I\cos(\gamma/2)\pm i\sin(\gamma/2)Z_iZ_j$ and expanding the product. Clearly, the only resulting term that can contribute is proportional to $(-1)^{i\oplus j}Z_iZ_j*(-1)^{i\oplus j}Y_iZ_j=-iX_i$ (i.e., all other terms will contain at least one $Z$ factor). Hence, as each $C^{\{i\}}$ contains $d_i+1$ terms we have \begin{eqnarray} \bra{\gamma} i\nabla C \ket{\gamma} &=&\frac12 \sum_{(ij)\in E}\sin(\gamma/2 )(\cos^{d_i}(\gamma/2)+\cos^{d_j}(\gamma/2)). \end{eqnarray} (A nearly identical argument gives \eqref{eq:maxcutnablaCexpec} for MaxCut after summing over the edges.) 
Similarly, using $\nabla^2C=8C-8C_Y=-\frac14 \sum_{(ij)\in E}(-1)^{i\oplus j} (Z_iZ_j-Y_iY_j)$ gives \begin{eqnarray} U_P^\dagger \nabla^2 C U_P &=& 8C_Z+2 \sum_{(ij)\in E}(-1)^{i\oplus j} e^{2i\gamma (C^{\{i\}\setminus j}+C^{\{j\}\setminus i})}Y_iY_j \end{eqnarray} and so as $\langle C_Z \rangle_0 =0$ we have \begin{eqnarray} \label{eq:expoeq110} \bra{\gamma} \nabla^2 C \ket{\gamma} = 2 \sum_{(ij)\in E}(-1)^{i\oplus j}\langle e^{2i\gamma (C^{\{i\}\setminus j}+C^{\{j\}\setminus i})}Y_iY_j \rangle_0. \end{eqnarray} Consider a fixed edge $(ij)$ with $d:=deg(i)-1$ and $e:=deg(j)-1$ recall we define $f^+=f^+_{ij},f^-=f^-_{ij}$ as the positive and negative triangles containing $(ij)$ the the instance graph $E$. For convenience here we let $c=\cos(\gamma/2)$ and $s=\sin(\gamma/2)$. Writing the exponential in \eqref{eq:expoeq110} as a product of factors $e^{\pm i\gamma Z_uZ_v/2}=Ic\pm isZZ$ and expanding gives a linear combination of $2^{d+e}$ terms, which we view as a series in $s$. The lowest order (w.r.t $s$) term that can contribute \cite{hadfield2018thesis} corresponds to selecting a single triangle from among the factors and is $${\boldsymbol{i}}nom{f^+}{1} c^{d+e-2}s^2-{\boldsymbol{i}}nom{f^-}{1}c^{d+e-2}s^2=(f^+-f^-)c^{d+e-2}s^2,$$ i.e. negative triangles subtract, and there are $f^+,f^-$ ways to select a triangle of each type. (cf. the corresponding proof for MaxCut where all triangles are positive~\cite{hadfield2018thesis,wang2018quantum}.) As $f^+,f^-$ become larger, pairs of additional triangles can combine to $\pm I$ such that the next contributing terms are proportional to $c^{d+e-6}s^6$ and involve $3$ triangles, with each sign determined by the number of positive- versus negative-parity triangles involved to give proportionality factor \begin{eqnarray*} {\boldsymbol{i}}nom{f^+}{3}{\boldsymbol{i}}nom{f^-}{0}-{\boldsymbol{i}}nom{f^+}{2}{\boldsymbol{i}}nom{f^-}{1}+\dots-{\boldsymbol{i}}nom{f^+}{0}{\boldsymbol{i}}nom{f^-}{3} &=& \sum_{k=0}^3 {\boldsymbol{i}}nom{f^+}{3-k}{\boldsymbol{i}}nom{f^-}{k}(-1)^k \end{eqnarray*} (i.e. three negative-parity triangles give $-1$, and so on.) Generalizing this argument to higher order, factors with $\ell=1,3,5$ triangles combine to contribute $c^{d+e-2\ell}s^{2\ell}$ times the factor\footnote{Without the factor $(-1)^k$ we would have Vandermonde's identity $\sum_{k=0}^\ell{\boldsymbol{i}}nom{f^+}{\ell-k}{\boldsymbol{i}}nom{f^-}{k}={\boldsymbol{i}}nom{f}{\ell}$.} \begin{eqnarray} \label{eq:hypgepfactor} \sum_{k=0}^\ell {\boldsymbol{i}}nom{f^+}{\ell-k}{\boldsymbol{i}}nom{f^-}{k}(-1)^k =: {\boldsymbol{i}}nom{f^+}{\ell} \prescript{}{2}{\mathbf{F}}_1(-f^-,-\ell;f^+-\ell+1;-1) \end{eqnarray} where $\prescript{}{2}{\mathbf{F}}_1$ is the Gaussian (ordinary) hypergeometric function~\cite{slater1966generalized}. Note \eqref{eq:hypgepfactor} evaluates to $f^+-f^-$ for $\ell=1$, and to ${\boldsymbol{i}}nom{f^+}{3}-{\boldsymbol{i}}nom{f^-}3-\tfrac12 f^+f^-(f^+-f^-)$ for $\ell=3$. Hence we have \begin{eqnarray*} \langle e^{2i\gamma (C^{\{i\}\setminus j}+C^{\{j\}\setminus i})}c_{ij}Y_iY_j \rangle_0 &=& c^{d+e-2f}\sum_{\ell=1,3,5,\dots}^{f^++f^-} c^{2(f-\ell)}s^{2\ell} {\boldsymbol{i}}nom{f^+}{\ell} \prescript{}{2}{\mathbf{F}}_1(-f^-,-\ell;f^+-\ell+1;-1)\\ &=:& c^{2(f-\ell)}s^{2\ell}\, g(f^+,f^-;\gamma,\beta) \end{eqnarray*} where we have defined $g(f^+,f^-;\gamma,\beta)=\sum_{\ell=1,3,5,\dots}^f c^{2(f-\ell)}s^{2\ell} {\boldsymbol{i}}nom{f^+}{\ell} \prescript{}{2}{\mathbf{F}}_1(-f^-,-\ell;f^+-\ell+1;-1)$. 
It is easy to see this term is $0$ for all edges with $f^+=f^-$ or $f=0$, i.e., $g(a,a)=0$ for $a=0,1,2,\dots$. Moreover, the case of strictly positive or strictly negative triangles, $g(f,0)=-g(0,f)$, reproduces the analysis of MaxCut (up to constants) and is summable to give $$g(f,0)=\tfrac12(1-(c^2-s^2)^f)=\tfrac12(1-\cos^f\gamma)=-g(0,f),$$ which implies \eqref{eq:maxcutnabla2Cexpec} for MaxCut. Hence, we have $\langle e^{2i\gamma (C^{\{i\}\setminus j}+C^{\{j\}\setminus i})}c_{ij}Y_iY_j \rangle_0 = c^{d+e-2f^+-2f^-}g(f^+,f^-)$ and so $\bra{\gamma} \nabla^2C\ket{\gamma}=2 \sum_{ij}c^{d+e-2f^+-2f^-}g(f^+,f^-)$, which together with the result for $\bra{\gamma} i\nabla C\ket{\gamma}$ gives \eqref{eq:expecC1balmax2sat}. \end{proof} \end{appendix} \end{document}
\betagin{document} \title[Quantitative uniqueness ] {Quantitative uniqueness of solutions to second order elliptic equations with singular potentials in two dimensions} \author{ Blair Davey and Jiuyi Zhu} \address{ Department of Mathematics\\ The City College of New York\\ New York, NY 10031, USA\\ Email: [email protected] } \address{ Department of Mathematics\\ Louisiana State University\\ Baton Rouge, LA 70803, USA\\ Email: [email protected] } \thanks{\noindent{Davey is supported in part by the Simons Foundation Grant number 430198. Zhu is supported in part by \indent NSF grant DMS-1656845}} \date{} \subsetbjclass[2010]{35J15, 35J10, 35A02.} \keywords {Carleman estimates, unique continuation, singular lower order terms, vanishing order} \betagin{abstract} In this article, we study the vanishing order of solutions to second order elliptic equations with singular lower order terms in the plane. In particular, we derive lower bounds for solutions on arbitrarily small balls in terms of the Lebesgue norms of the lower order terms for all admissible exponents. Then we show that a scaling argument allows us to pass from these vanishing order estimates to estimates for the rate of decay of solutions at infinity. Our proofs rely on a new $L^p - L^q$ Carleman estimate for the Laplacian in $\mathbb{R}^2$. \end{abstract} \maketitle \section{Introduction} In this paper, we study the vanishing order of solutions to second order elliptic equations with singular lower order terms in $\ensuremath{\mathbb{R}}^2$. Vanishing order is a characterization of the rate at which a solution can go to zero and is therefore a quantitative description of the strong unique continuation property. Let $\Omega \subset \ensuremath{\mathbb{R}}^2$ be open and connected. Assume that $V : \Omega \to \ensuremath{\mathbb{C}}$ is an element of $L^t\pr{\Omega}$ for some $t > 1$, and that $W : \Omega \to \ensuremath{\mathbb{C}}^2$ belongs to $L^s\pr{\Omega}$ for some $s > 2$. Let $L = \Delta + W \cdot \nabla + V$ denote an elliptic operator. Suppose $u$ is a solution to $L u = 0$ in $\Omega$. Our vanishing order estimates take the following form: For all $r$ sufficiently small and $x_0 \in \Omega' \Subset \Omega$, \betagin{equation} \norm{u}_{L^\infty\pr{B_r\pr{x_0}}} \ge c r^{C \beta}, \lambdabel{vanishing} \end{equation} where $\beta$ depends on the elliptic operator $L$ or, more explicitly, on the Lebesgue norms of $W$ and $V$. The estimate \eqref{vanishing} implies that the vanishing order of $u$ at $x_0$ is less than $C\betata$. Vanishing order estimates have a deep connection with the eigenfunctions Riemannian manifolds. Let $\mathcal{M}$ be a smooth, compact Riemannian manifolds without boundary, and let $ \phi_\lambdambda$ be a classical eigenfunction, $$-\Delta_{g} \phi_\lambdambda=\lambdambda \phi_\lambdambda \quad \quad \mbox{in} \ \mathcal{M}.$$ Donnelly and Fefferman in \cite{DF88} proved that for any $x_0\in \mathcal{M}$ $$\norm{u}_{L^\infty\pr{B_r\pr{x_0}}} \ge c r^{C \sqrt{\lambdambda}}.$$ That is, the maximal vanishing order for eigenfunction $\phi_\lambdambda$ is $C\sqrt{\lambdambda}$ and this estimate is sharp. More recently, there has been an interest in understanding how the vanishing order of solutions to the Schr\"odinger equation \betagin{equation} -\Delta u=V(x)u \lambdabel{schro} \end{equation} depends on the potential function, $V$. Bourgain and Kenig \cite{BK05} showed that when $V \in L^\infty$ the vanishing order is less than $C(1+\|V\|_{L^\infty}^{2/3})$, i.e. 
the estimate \eqref{vanishing} holds with $\betata=(1+\|V\|_{L^\infty}^{2/3})$. The examples constructed by Meshkov in \cite{Mes92} indicate that Bourgain and Kenig's result is sharp when $V$ is complex-valued and the underlying space is $2$-dimensional. If $V \in W^{1, \infty}$, Kukavica in \cite{Kuk98} established that the upper bound for vanishing order is $C(1+\|V\|_{W^{1, \infty}})$. By different approaches, Bakri \cite{Bak12} and Zhu \cite{Zhu16} independently improved Kukavica's results and showed that the sharp vanishing order is less than $C(1+\sqrt{\|V\|_{W^{1, \infty}}})$. Since elliptic equations with singular lower order terms are known to satisfy the strong unique continuation property, attention has recently shifted to trying to understand the role of singular weights in vanishing order estimates. Kenig and Wang \cite{KW15} characterized the vanishing order of solutions to elliptic equations with drift, $\Delta u+W\cdot \nabla u = 0$, in the plane for real valued $W\in L^s(\mathbb R^2)$ with $s\geq 2$. Since we would like to consider possibly complex-valued $W$ and $V$, the techniques from \cite{KW15} are not applicable to the present paper where we investigate the vanishing order of solutions to $L u = 0$. In \cite{DZ17}, the authors developed a new $L^p\to L^q$ type quantitative Carleman estimates for a range of $p$ and $q$ values. This Carleman estimates allowed us to study the quantitative uniqueness of solutions to $Lu=0$ for $n\geq 3$. Motivated by the ideas in \cite{DZ17}, here we further develop the techniques to study solutions $Lu=0$ for $n=2$. Through the application of a scaling argument, the vanishing order estimate given in \eqref{vanishing} implies a quantitative unique continuation at infinity theorem. For solutions to $L u = 0$ in $\ensuremath{\mathbb{R}}^2$, the following result can be invoked from \eqref{vanishing}: For all $R$ sufficiently large, \betagin{equation} \inf_{\abs{x_0} = R} \norm{u}_{L^\infty\pr{B_1\pr{x_0}}} \ge c \exp\pr{- C R^\Pi \log R}, \end{equation} where $\Pi$ depends on $L$. We may interpret such estimates as bounds on the rate of decay of solutions at infinity. In \cite{BK05}, Bourgain and Kenig showed that if $u$ is a bounded, normalized, non-trivial solution to \eqref{schro} in $\ensuremath{\mathbb{R}}^n$, where $\norm{V}_{L^\infty\pr{\ensuremath{\mathbb{R}}^n}} \le 1$, then a vanishing order estimate implies that \betagin{align} \inf_{\abs{x_0} = R} \norm{u}_{L^\infty\pr{B_1\pr{x_0}}} \ge c \exp\pr{- C R^{4/3} \log R}.\lambdabel{quantEst} \end{align} This lower bound played an important role in their study of Anderson localization. Recall that $W \in L^s_{loc}\pr{\Omega}$ and $V \in L^t_{loc}\pr{\Omega}$ for some $s > 2$ and $t >1$. Suppose $u\in W^{1,2}_{loc}\pr{\Omega}$ is a non-trivial weak solution to \betagin{equation} \Delta u+ W(x)\cdot \nabla u+V(x) u=0. \lambdabel{goal} \end{equation} An application of H\"older's inequality implies that there exists a $p \in \pb{1, 2}$, depending on $s$ and $t$, such that $W(x)\cdot \nabla u+V(x) u \in L^p_{loc}\pr{\Omega}$. By regularity theory, it follows that $u \in W_{loc}^{2,p}\pr{\Omega}$ and therefore $u$ is a solution to \eqref{goal} almost everywhere in $\Omega$. Moreover, we have that $u \in L^\infty_{loc}\pr{\Omega}$. Therefore, the solution $u$ belongs to $L^\infty_{loc}\pr{\Omega} \cap W^{1,2}_{loc}\pr{\Omega} \cap W^{2,p}_{loc}\pr{\Omega}$ and $u$ satisfies equation \eqref{goal} almost everywhere in $\Omega$. 
We use the notation $B_{r}\pr{x_0} \subset \ensuremath{\mathbb{R}}^n$ to denote the ball of radius $r$ centered at $x_0$. When the center is understood from the context, we simply write $B_{r}$. Our first result, an order of vanishing estimate for equation \eqref{goal}, is as follows. \betagin{theorem} Assume that for some $s \in \pb{2, \infty}$ and $t \in \pb{1, \infty}$, $\norm{W}_{L^s\pr{B_{10}}} \le K$ and $\norm{V}_{L^t\pr{B_{10}}} \le M$. Let $u$ be a solution to \eqref{goal} in $B_{10}$. Assume that $u$ is bounded and normalized in the sense that \betagin{align} & \|u\|_{L^\infty(B_{6})}\leq \hat{C}, \lambdabel{bound} \\ & \|u\|_{L^\infty(B_{1})}\geq 1. \lambdabel{normal} \end{align} Then for any sufficiently small $\varepsilon > 0$, the vanishing order of $u$ in $B_{1}$ is less than $C\pr{1 + C_1 K^\kappa + C_2 M^\mu}$. That is, for any $x_0\in B_1$ and every $r$ sufficiently small, \betagin{align*} \|u\|_{L^\infty(B_{r}(x_0))} &\ge c r^{C\pr{1 + C_1 K^\kappa + C_2 M^\mu}}, \end{align*} where $\displaystyle \kappa = \left\{\betagin{array}{ll} \frac{2s}{s-2} & t > \frac{2s}{s+2} \\ \frac{t}{t-1-\varepsilon t} & 1 < t \le \frac{2s}{s+2} \end{array}\right.$, $\displaystyle \mu = \left\{\betagin{array}{ll} \frac{2s}{3s-2} & t \ge s \\ \frac{2st}{3st+2t-4s-\varepsilon(2st+4t-4s)} & \frac{2s}{s+2} < t < s \\ \frac{t}{t-1+\varepsilon\pr{t-2t\varepsilon}} & 1 < t \le \frac{2s}{s+2} \end{array}\right.$, \newline $c = c\pr{s, t, K, M, \hat C, \varepsilon}$, $C$ is universal, $C_1 = C_1\pr{s, t, \varepsilon}$, and $C_2 = C_2\pr{ s, t,\varepsilon}$. \lambdabel{thh} \end{theorem} Theorem \ref{thh} in combination with a scaling argument gives rise to the following unique continuation at infinity theorem. \betagin{theorem} Assume that for some $s \in \pb{2, \infty}$ and $t \in \pb{ 1, \infty}$, $\norm{W}_{L^s\pr{\ensuremath{\mathbb{R}}^2}} \le A_1$ and $\norm{V}_{L^t\pr{\ensuremath{\mathbb{R}}^2}} \le A_0$. Let $u$ be a solution to \eqref{goal} in $\ensuremath{\mathbb{R}}^2$. Assume that $\norm{u}_{L^\infty\pr{\ensuremath{\mathbb{R}}^2}} \le C_0$ and $\abs{u\pr{0}} \ge 1$. Then for every sufficiently small $\varepsilon > 0$ and all $R$ sufficiently large, \betagin{equation*} \inf_{\abs{x_0} = R} \norm{u}_{L^\infty\pr{B_1\pr{x_0}}} \ge \exp\pr{-C R^\Pi \log R}, \end{equation*} where \betagin{align*} \Pi = \left\{\betagin{array}{ll} 2 & t > \frac{2s}{s+2} \\ \frac{t(s-2)}{s(t-1-\varepsilon t)} & 1 < t \le \frac{2s}{s+2}, \end{array}\right. \end{align*} and $C = C\pr{s, t, A_1, A_0, C_0, \varepsilon}$. \lambdabel{UCVW} \end{theorem} If we consider elliptic equations with drift, then the order of vanishing estimates follow directly from Theorem \ref{thh}. In particular, if $V \equiv 0$, then the following statement is a consequence of Theorem \ref{thh} with $t = \infty$ and $M = 0$. \betagin{corollary} Assume that for some $s \in \pb{2, \infty}$, $\norm{W}_{L^s\pr{B_{10}}} \le K$. Let $u$ be a solution to $\Delta u + W \cdot \nabla u = 0$ in $B_{10}$. Assume that $u$ is bounded and normalized in the sense of \eqref{bound} and \eqref{normal}. Then the vanishing order of $u$ in $B_{1}$ is less than $C\pr{1 + C_1 K^\kappa}$. That is, for any $x_0\in B_1$ and every $r$ sufficiently small, \betagin{align*} \|u\|_{L^\infty(B_{r}(x_0))} &\ge c r^{C\pr{1 + C_1 K^\kappa}}, \end{align*} where $\displaystyle \kappa = \frac{2s}{s-2} $, $c = c\pr{s, K, \hat C}$, and $C_1 = C_1\pr{s}$. 
\lambdabel{thhW} \end{corollary} The corresponding unique continuation at infinity theorem follows from Theorem \ref{UCVW} in the same way. Note that this pair of corollaries is independent of $\varepsilon$. \betagin{corollary} Assume that for some $s \in \pb{2, \infty}$, $\norm{W}_{L^s\pr{\ensuremath{\mathbb{R}}^n}} \le A_1$. Let $u$ be a solution to $\Delta u + W \cdot \nabla u = 0$ in $\ensuremath{\mathbb{R}}^2$. Assume that $\norm{u}_{L^\infty\pr{\ensuremath{\mathbb{R}}^n}} \le C_0$ and $\abs{u\pr{0}} \ge 1$. Then for every $R$ sufficiently large, \betagin{equation*} \inf_{\abs{x_0} = R} \norm{u}_{L^\infty\pr{B_1\pr{x_0}}} \ge \exp\pr{-C R^2 \log R}, \end{equation*} where $C = C\pr{s, A_1, C_0}$. \lambdabel{UCW} \end{corollary} If we consider solutions to equations without a gradient potential, a slightly modified proof leads to a better order of vanishing in the setting where $t \in \pb{1,\infty}$. Although vanishing order estimates are not explicitly stated in \cite{KT16}, such results follow from the quantitative uniqueness theorems presented in that paper. The following theorem improves on the estimates implied from \cite{KT16} in two ways: we reduce the vanishing order and we extend the range of $t$ from $t > 2$ to all admissible exponents, $t > 1$. \betagin{theorem} Assume that $\norm{V}_{L^t\pr{B_{R_0}}} \le M$ for some $t \in \pb{1, \infty}$. Let $u$ be a solution to $\Delta u + V u = 0$ in $B_{10}$. Assume that $u$ is bounded and normalized in the sense of \eqref{bound} and \eqref{normal}. Then for any sufficiently small $\varepsilon > 0$, the vanishing order of $u$ in $B_{1}$ is less than $C\pr{1 + C_2M^\mu}$. That is, for any $x_0\in B_1$ and every $r$ sufficiently small, \betagin{align*} \|u\|_{L^\infty(B_{r}(x_0))} &\ge c r^{C\pr{1 + C_2M^\mu}}, \end{align*} where $\displaystyle \mu = \left\{\betagin{array}{ll} \frac{2t}{3t-2} & 2 < t \le \infty \\ \frac{t}{2t-2-\varepsilon\pr{2t-1-2\varepsilon}} & 1 < t \le 2 \end{array}\right.$, $c = c\pr{t, M, \hat C, \varepsilon}$, and $C_2 = C_2\pr{t,\varepsilon}$. \lambdabel{thhh} \end{theorem} \betagin{remark} If we tried to prove Theorem \ref{thhh} using the same approach that gave Corollary \ref{thhW}, the resulting theorem would be weaker. That is, if $W \equiv 0$, then Theorem \ref{thh} with $s = \infty$ and $K = 0$ implies a version of Theorem \ref{thhh} with $\mu$ replaced by $\displaystyle \tilde \mu = \left\{\betagin{array}{ll} \frac{2}{3} & t = \infty \\ \frac{2t}{3t-4-\varepsilon(2t-4)} & 2 < t < \infty \\ \frac{t}{t-1+\varepsilon\pr{t-2\varepsilon t}} & 1 < t \le 2 \end{array}\right.$. For all $t \in \pr{1, \infty}$, we see that $\tilde \mu > \mu$, and therefore, such an approach gives a worse result than the one presented in Theorem \ref{thhh}. This observation implies that the gradient potential $W$ plays some role in how the vanishing order depends on the norm of $V$. Notice that the different cases in Theorem \ref{thh} are determined by the relationship between $t$ and $s$, and that, in some cases, $\kappa$ and $\mu$ depends on both $t$ and $s$. Where these relationships comes from can be seen in the proofs below, Lemma \ref{CarlpqVW} for example. \end{remark} Finally, as a consequence of Theorem \ref{thhh}, we derive a quantitative unique continuation at infinity theorem. \betagin{theorem} Assume that $\norm{V}_{L^t\pr{\ensuremath{\mathbb{R}}^2}} \le A_0$ for some $t \in \pb{1, \infty}$. Let $u$ be a solution to $\Delta u + V u = 0$ in $\ensuremath{\mathbb{R}}^2$. 
Assume that $\norm{u}_{L^\infty\pr{\ensuremath{\mathbb{R}}^2}} \le C_0$ and $\abs{u\pr{0}} \ge 1$. Then for any sufficiently small $\varepsilon > 0$ and $R$ sufficiently large, \betagin{equation*} \inf_{\abs{x_0} = R} \norm{u}_{L^\infty\pr{B_1\pr{x_0}}} \ge \exp\pr{-C R^\Pi \log R}, \end{equation*} where $\displaystyle \Pi = \left\{\betagin{array}{ll} \frac{4t-4}{3t-2} & t > 2 \\ \frac{2t-2}{2t-2-\varepsilon\pr{2t-1-2\varepsilon}} & 1 < t \le 2, \end{array}\right.$, and $C = C\pr{t, C_0, A_0, \varepsilon}$. \lambdabel{UCV} \end{theorem} Let's review some literature about unique continuation results in $\ensuremath{\mathbb{R}}^2$. The (weak) unique continuation property implies that a solution is trivial if the solution vanishes in an open subset in the domain. And the strong unique continuation property implies that a solution is trivial if the solution vanishes to infinite order at some point in the domain. The results of Schechter and Simon \cite{SS80} as well as Amrein, Berthier and Georgescu \cite{ABG81} show that solutions to \eqref{schro} in $\Omega \subset \ensuremath{\mathbb{R}}^2$ satisfy the weak unique continuation property whenever $V \in L^t_{loc}\pr{\Omega}$, $t > 1$. Jerison and Kenig \cite{JK85} established the strong unique continuation property for (\ref{schro}) if $V \in L^{n/2}_{loc}\pr{\Omega}$ for $n \ge 3$. In the setting where $n = 2$, they showed in \cite{JK85} that strong unique continuation holds for $V\in L^t_{loc}\pr{\Omega}$ with $t > 1$. On the other hand, the counterexample of Kenig and Nadirashvili \cite{KN00} implies that weak unique continuation can fail for $V \in L^1$. For drift equations of the form $\Delta u + W \cdot \nabla u = 0$ in $\Omega \subset \ensuremath{\mathbb{R}}^2$, the counterexamples due to Mandache \cite{Man02} and Koch and Tataru \cite{KT02} show that weak unique continuation can fail for $W \in L^{2^-}$ and $L^2_{weak}$, respectively. It has been believed for some time (see the comments in \cite{KT02}, for example) that strong unique continuation holds for $W \in L^{2^+}$. In the setting where $V$ and $W$ are assumed to be real-valued, with $s > 2$ and $t > 1$, Allesandrini \cite{Ale12} used the correspondence between elliptic equations in the plane and first-order Beltrami equations to prove the strong unique continuation property. In fact, his results are more general since the leading operator may be variable with non-smooth, non-symmetric coefficients. Since we are able to characterize the vanishing order of solutions to \eqref{goal} with $V \in L^{1^+}$ and $W \in L^{2^+}$, in some sense, our results provide a complete description of quantitative uniqueness for elliptic equations in $\mathbb R^2$. As in \cite{DZ17}, our main tool is an $L^p - L^q$ Carleman estimate. To prove such an inequality, we replace Sogge's eigenfunction estimates \cite{Sog86} on $S^{n-1}$ for $n \ge 3$ with eigenfunction estimates that we derive explicitly from Parseval's inequality and from the fact that all eigenfunctions on $S^1$ are bounded. The resulting Carleman estimates hold for $1< p \le 2 < q < \infty$. Moreover, compared with corresponding results in \cite{DZ17}, the exponent of the parameter $\tau$ in the $n = 2$ Carleman estimate is always positive for all $p$ and $q$ within the appropriate ranges. See Theorem \ref{Carlpq} for the details. As a consequence, we can treat all $s > 2$ and all $t > 1$, thereby leaving no gaps between our results and we would expect from the literature. The outline of the paper is as follows. 
In section \ref{CarlEst}, we state and prove the main $L^p - L^q$ Carleman estimates. Section \ref{Thm1Proof} is devoted to the proof of Theorem \ref{thh}. The proof of Theorem \ref{thhh} is very similar to that of Theorem \ref{thh}, but the main differences are described in section \ref{Thm3Proof}. In section \ref{QuantUC}, we show how the quantitative unique continuation at infinity theorems follow from the vanishing order estimates through a scaling argument. Finally, in the appendix we present the proof of an important lemma used in the $L^p\to L^q$ Carleman estimates, together with the proof of the quantitative Caccioppoli inequality. The letters $c$, $C$, $C_0$, $C_1$ and $C_2$ are independent of $u$ and may vary from line to line. \section{Carleman estimates } \lambdabel{CarlEst} In this section, we state and prove the crucial tools, the $L^p-L^{ q}$ type Carleman estimates. Let $r=|x-x_0|$. Define the weight function to be $$\phi(r)=\log r+\log(\log r)^2.$$ We use the notation $\|u\|_{L^p(r^{-2} dx)}$ to denote the $L^p$ norm with weight $r^{-2}$, i.e. $\|u\|_{L^p(r^{-2} dx)} = \displaystyle \pr{\int |u\pr{x}|^p r^{-2}\, dx}^\frac{1}{p}$. Our $L^p - L^q$ Carleman estimate for the Laplacian is as follows. \betagin{theorem} Let $1< p \le 2 < q < \infty$. For any $\varepsilon \in \pr{0,1}$, there exist a constant $C$ and a sufficiently small $R_0$ such that for any $u\in C^{\infty}_{0}\pr{B_{R_0}(x_0)\backslash\set{x_0} }$ and $\tau>1$, one has \betagin{align} &\tau^{1+ \beta_1} \|(\log r)^{-1} e^{-\tau \phi(r)}u\|_{L^2(r^{-2}dx)} +\tau^{\beta_0} \|(\log r)^{-1} e^{-\tau \phi(r)}u\|_{L^q(r^{-2\pr{1 - \varepsilon}}dx)} \nonumber \\ &+ \tau^{\beta_1} \|(\log r )^{-1} e^{-\tau \phi(r)}r \nabla u\|_{L^2(r^{-2}dx)} \leq C \|(\log r ) e^{-\tau \phi(r)} r^2 \Delta u\|_{L^p(r^{-2} dx)} , \lambdabel{mainCar} \end{align} where $\beta_0 = \frac{2}{q}\pr{1 - \varepsilon}+1-\frac{1}{p}$ and $\beta_1= 1- \frac{1}{p}$. Furthermore, $C = C\pr{p, q, \varepsilon}$. \lambdabel{Carlpq} \end{theorem} To prove our Carleman estimate, we first establish some intermediate Carleman estimates for first-order operators. Towards this goal, we introduce polar coordinates in $\mathbb R^2\backslash \{0\}$ by setting $x=r\omegaega$, with $r=|x|$ and $\omegaega=(\omegaega_1, \omegaega_2)\in S^{1}$. Define a new coordinate $t=\log r$. Then $\displaystyle \frac{\partial }{\partial x_j}=e^{-t}(\omegaega_j\partial_t+ \Omegaega_j)$, for $j = 1,2$, where each $\Omegaega_j$ is a vector field on $S^{1}$. It is well known that the vector fields $\Omegaega_j$ satisfy $$\subsetm^{2}_{j=1}\omegaega_j\Omegaega_j=0 \quad \mbox{and} \quad \subsetm^{2}_{j=1}\Omegaega_j\omegaega_j=1.$$ In the new coordinate system, the Laplace operator takes the form \betagin{equation} e^{2t} \Delta=\partial^2_t +\Delta_{\omegaega}, \lambdabel{laplace} \end{equation} where $\displaystyle \Delta_\omegaega=\subsetm_{j=1}^2 \Omegaega^2_j$ is the Laplace-Beltrami operator on $S^{1}$. The eigenvalues for $-\Delta_\omegaega$ are $k^2$, $k\in \ensuremath{\mathbb{Z}}_{\ge 0}$. The corresponding eigenspace is $E_k$, the space of spherical harmonics of degree $k$.
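To make this decomposition concrete in dimension two, we recall (this elementary observation is recorded only for the reader's convenience) that on $S^{1}$ the eigenspaces can be written down explicitly: $$E_0=\set{\mbox{constant functions}}, \qquad E_k=\mbox{span}\set{\cos k\theta,\ \sin k\theta} \quad \mbox{for } k\geq 1,$$ so that the decomposition of a function $v\in L^2(S^{1})$ into the spaces $E_k$ is simply its Fourier series.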
It follows that $$\| \Delta_\omegaega v\|^2_{L^2(dtd\omegaega)}=\subsetm_{k\geq 0} k^4\| v_k\|^2_{L^2(dtd\omegaega)}$$ and \betagin{equation} \subsetm_{j=1}^2\| \Omegaega_j v\|^2_{L^2(dtd\omegaega)} =\subsetm_{k\geq 0} k^2\|v_k\|^2_{L^2(dtd\omegaega)}, \lambdabel{lll} \end{equation} where $v_k$ denotes the projection of $v$ onto $E_k$ and $\|\cdot\|_{L^2(dtd\omegaega)}$ denotes the $L^2$ norm on $(-\infty, \infty)\times S^{1}$. Note that the projection operator, $P_k$, acts only on the angular variables. In particular, $P_k v\pr{t, \omega} = P_k v\pr{t, \cdot} \pr{\omega}$. Let $$\Lambdambda=\sqrt{-\Delta_\omegaega}.$$ The operator $\Lambdambda$ is a first-order elliptic pseudodifferential operator on $L^2(S^{1})$. The eigenvalues for the operator $\Lambdambda$ are $k$, with corresponding eigenspace $E_k$. That is, for any $v\in C^\infty_0(S^{1})$, \betagin{equation} \Lambdambda v= \subsetm_{k\geq 0}k P_k v. \lambdabel{ord} \end{equation} Set \betagin{equation} L^\pm=\partial_t\pm \Lambdambda. \lambdabel{use} \end{equation} From the equation (\ref{laplace}), it follows that \betagin{equation*} e^{2t} \Delta=L^+L^-=L^-L^+. \end{equation*} With $r=e^t$, we define the weight function in terms of $t$, $$\varphi(t)=\phi(e^t)=t+\log t^2.$$ We only consider solutions in balls of small radius $r$. In terms of $t$, this means that we study the case when $t$ is sufficiently close to $-\infty$. We first state an $L^2- L^2$ Carleman inequality for the operator $L^+$. For the proof of this result (which still holds when $n = 2$), we refer the reader to our companion paper, \cite{DZ17}. \betagin{lemma} If $\abs{t_0}$ is sufficiently large, then for any $v \in C^\infty_0\pr{\pr{-\infty, -\abs{t_0}} \times S^{1}}$, we have that \betagin{eqnarray} \tau \norm{t^{-1} e^{-\tau \varphi(t)}v}_{L^2(dtd\omegaega )} &+&\norm{t^{-1}e^{-\tau \varphi(t)} \partial_t v}_{L^2(dtd\omegaega )} +\subsetm_{j=1}^2 \norm{t^{-1}e^{-\tau \varphi(t)} \Omegaega_j v }_{L^2(dtd\omegaega )} \nonumber \\ &\leq& C\norm{t^{-1} e^{-\tau \varphi(t)} L^+v}_{L^2(dtd\omegaega )}. \lambdabel{cond} \end{eqnarray} \lambdabel{Car22} \end{lemma} To prove the $L^p -L^2$ Carleman estimates, we use estimates for the projections of functions into the spaces of spherical harmonics. The eigenfunctions on $S^1$ are those elements of $L^2\pr{S^1}$ that satisfy the system \betagin{equation} \left\{ \betagin{array}{lll} -\Delta_\theta e_\lambdambda &=&\lambdambda e_\lambdambda, \quad 0\leq \theta\leq 2\pi \nonumber \\ e_\lambdambda(0)&=&e_\lambdambda(2\pi), \end{array} \right. \end{equation} where we now use $\Delta_\theta$ to denote the Laplace-Beltrami operator on $S^1$. The orthonormal eigenfunctions are the constant function $\frac{1}{\sqrt{2\pi}}$ together with $\frac{1}{\sqrt{\pi}}\cos k\theta$ and $\frac{1}{\sqrt{\pi}}\sin k\theta$ for $k\in \mathbb Z_{> 0}$, with eigenvalues $\lambda_k = k^2$, $k \ge 0$. We can describe them by $e_k\pr{\theta} = \frac{1}{\sqrt{2\pi}} e^{i k \theta}$ with $k \in \ensuremath{\mathbb{Z}}$. The following result is the $n = 2$ analog of Lemma 3 in \cite{DZ17}. Since Sogge's estimates on $S^{n-1}$ from \cite{Sog86} hold in $n \ge 3$, our proof instead relies on Parseval's inequalities and the fact that every eigenfunction is bounded. \betagin{lemma} Let $M, N \in \ensuremath{\mathbb{N}}$ and let $\set{c_k}$ be a sequence of numbers such that $\abs{c_k} \le 1$ for all $k$. For any $v \in L^2\pr{S^{1}}$ and every $p \in \brac{1, 2}$, we have that \betagin{align} \|\subsetm^M_{k=N} c_k P_k v\|_{L^2(S^{1})} &\leq C \pr{ \subsetm^M_{k=N} |c_k|^2}^{\frac{1}{p}-\frac{1}{2}}\| v\|_{L^p(S^{1})}.
\lambdabel{haha} \end{align} \lambdabel{upDown} \end{lemma} \betagin{proof} Recall that $P_k v = v_k$ is the projection of $v$ on the space of spherical harmonics of degree $k$, $E_k$. Thus, $P_0 v = \innp{v, e_0} e_0$ and $P_k v = \innp{v, e_k} e_k + \innp{v, e_{-k}} e_{- k}$ for every $k \in \ensuremath{\mathbb{Z}}_+$, where we use the notation $\innp{\cdot, \cdot}$ to denote a pairing of elements in dual spaces. By Parseval's identity, $$ \subsetm^\infty_{k=-\infty}|\innp{e_k, \ v}|^2 = \|v\|_{L^2(S^1)}^2. $$ Since $\|e_k\|_{L^\infty (S^1)}= \frac{1}{\sqrt{2\pi}}$ for all $k \in \ensuremath{\mathbb{Z}}$, then for $k \ne 0$, \betagin{align*} \|P_k v\|_{L^\infty (S^1)}^2 &= \| \innp{v, e_k} e_k + \innp{v, e_{-k}} e_{-k}\|_{L^\infty (S^1)}^2 \le 2 \| \innp{v, e_k} e_k\|_{L^\infty (S^1)}^2 + 2 \|\innp{v, e_{-k}} e_{-k}\|_{L^\infty (S^1)}^2 \\ &= 2 |\innp{e_k, \ v}|^2 \|e_k\|_{L^\infty (S^1)}^2 + 2 |\innp{e_{-k}, \ v}|^2 \|e_{-k}\|_{L^\infty (S^1)}^2 = \frac{1}{\pi} \pr{|\innp{e_k, \ v}|^2 + |\innp{e_{-k}, \ v}|^2} \\ &\le \frac{1}{\pi} \subsetm^\infty_{j=-\infty}|\innp{e_j, \ v}|^2 = \frac{1}{\pi} \|v\|_{L^2(S^1)}^2. \end{align*} Similarly, $\|P_0 v\|_{L^\infty (S^1)}^2 \le \frac 1 {2\pi} \|v\|_{L^2(S^1)}^2$. Thus, for every $k \in \ensuremath{\mathbb{Z}}_{\ge 0}$, \betagin{equation} \|P_k v\|_{L^\infty (S^1)}\leq c \|v\|_{L^2(S^1)}. \lambdabel{dim2} \end{equation} It follows that for any $u\in L^2(S^1)$ $$ \abs{\innp{u, P_k v}} = \abs{\innp{P_k u, v}} \le \| P_k u\|_{L^{\infty}(S^{1})}\|v\|_{L^{1}(S^{1})} \le c \| u\|_{L^{2}(S^{1})}\|v\|_{L^{1}(S^{1})}. $$ By duality, we conclude that \betagin{align} \| P_k v\|_{L^2\pr{S^1}} \le c \| v\|_{L^{1}(S^{1})}. \lambdabel{down} \end{align} Since each $e_k$ is normalized in $L^2\pr{S^1}$, then $\sqrt{2\pi} \norm{e_k}_{L^\infty\pr{S^1}} =1 = \norm{e_k}_{L^2\pr{S^1}}$ and therefore, for $k \ne 0$, \betagin{align*} \| P_k v\|_{L^\infty\pr{S^1}} &= \| \innp{v, e_k} e_k + \innp{v, e_{-k}} e_{-k}\|_{L^\infty\pr{S^1}} \le \abs{\innp{v, e_k} } \| e_k\|_{L^\infty\pr{S^1}} + \abs{\innp{v, e_{-k}}} \| e_{-k}\|_{L^\infty\pr{S^1}} \\ &= \frac 1 {\sqrt{2 \pi}} \pr{ \abs{\innp{v, e_k} } + \abs{\innp{v, e_{-k}}} } \le \frac {\sqrt 2} {\sqrt{2 \pi}} \pr{ \abs{\innp{v, e_k}}^2 + \abs{\innp{v, e_{-k}}}^2 }^{\frac 1 2} = \frac 1 {\sqrt{\pi}} \| P_k v \|_{L^2\pr{S^1}}, \end{align*} where we have used the Cauchy-Schwarz inequality and orthogonality. Similarly, $\| P_0 v\|_{L^\infty\pr{S^1}} = \frac 1 {\sqrt{2 \pi}} \| P_0 v \|_{L^2\pr{S^1}}$. Thus, $\| P_k v\|_{L^\infty\pr{S^1}} \le \frac 1 {\sqrt{\pi}} \| P_k v\|_{L^2\pr{S^1}}$ for every $k \in \ensuremath{\mathbb{Z}}_{\ge 0}$. Combining this observation with \eqref{down} shows that \betagin{align} \| P_k v\|_{L^\infty\pr{S^1}} \le C \| v\|_{L^{1}(S^{1})}. \lambdabel{downs} \end{align} Finally, since each $P_k$ is an orthogonal projection, it follows from Parseval's identity that \betagin{equation} \| P_k v\|_{L^{2}(S^{1})} \leq \|v\|_{L^{2}(S^{1})}. \lambdabel{stay} \end{equation} Interpolating \eqref{down} and \eqref{stay} gives that \betagin{align} \| P_k v\|_{L^{2}(S^{1})} &\leq C(p) \|v\|_{L^{p}(S^{1})} \lambdabel{indu} \end{align} for all $1\leq p\leq 2$. Now we consider a more general setting. Let $\{c_k\}$ be a sequence of numbers with $|c_k|\leq 1$.
For all $N\leq M$, it follows from orthogonality and H\"older's inequality that \betagin{align*} \|\subsetm^M_{k=N} c_k P_k v\|^2_{L^2(S^{1})} = \subsetm^M_{k=N} \| c_k P_k v\|^2_{L^2(S^{1})} = \subsetm^M_{k=N} \abs{c_k}^2 \innp{P_k v, v} \leq \subsetm^M_{k=N} |c_k|^2 \|P_k v\|_{L^{\infty}(S^{1})}\|v\|_{L^{1}(S^{1})}. \end{align*} Combining this inequality with (\ref{downs}) shows that \betagin{equation*} \|\subsetm^M_{k=N} c_k P_k v\|_{L^2(S^{1})} \leq C \pr{\subsetm^M_{k=N} |c_k|^2}^{\frac{1}{2}} \|v\|_{L^{1}(S^{1})}. \end{equation*} Clearly, as long as $\abs{c_k} \le 1$, then \betagin{equation} \|\subsetm^M_{k=N} c_k P_k v\|_{L^2(S^{1})} \leq \| v\|_{L^2(S^{1})}. \lambdabel{seqSame} \end{equation} As before, we interpolate the last two inequalities to reach \eqref{haha}. \end{proof} Now we state an $L^p-L^2$ type Carleman estimate for the operator $L^-$. \betagin{lemma} For every $v \in C^\infty_c\pr{(-\infty, \ t_0)\times S^{1}}$ and $1< p \le 2$, \betagin{equation} \|t^{-{1}} e^{-\tau \varphi(t)} v\|_{L^2(dtd\omegaega)} \leq C \tau^\betata \|t e^{-\tau \varphi(t)} L^- v\|_{L^p(dtd\omegaega)}, \lambdabel{key-} \end{equation} where $\beta = \frac{1-p}{p}$. \lambdabel{CarLpp} \end{lemma} When $p = 2$, the proof of Lemma \ref{CarLpp} follows from the proof of Lemma 2 in \cite{DZ17} (which still holds when $n = 2$) combined with the fact that $\abs{t} \ge \abs{t_0} \ge C$. For $p \in \pr{1, 2}$, the proof of Lemma \ref{CarLpp} is similar to the proof of Lemma 4 in \cite{DZ17}. The major difference is that we need to replace the eigenfunction estimates from Lemma 3 in \cite{DZ17} by Lemma \ref{upDown} in the current paper. For the readers' convenience and to make the presentation complete, we include the proof of Lemma \ref{CarLpp} in the Appendix. Lemma \ref{CarLpp} plays the crucial role in the $L^p\to L^q$ Carleman estimates. Let's compare the parameter $\betata$ from Lemma 4 in \cite{DZ17} with the one in Lemma \ref{CarLpp} above. In Lemma \ref{CarLpp}, we see that $\betata<0$ for any $p>1$. However, in Lemma 4 from \cite{DZ17}, $\betata<0$ if $p>\frac{6n-4}{3n+2}$, so $\betata$ changes sign over the full range of $p$ values. The fact that $\beta$ does not change sign here implies that we can deal with all the admissible $s$ and $t$ in Theorem \ref{thh}. The proof of Lemma 4 in \cite{DZ17} was partially motivated by the work in \cite{Reg99}, \cite{Jer86} and \cite{BKRS88}. We now have all of the intermediate results required to prove the general $L^p- L^q$ Carleman estimate given in Theorem \ref{Carlpq}. We combine Lemmas \ref{Car22} and \ref{CarLpp}, apply a Sobolev inequality, then interpolate to reach the conclusion of Theorem \ref{Carlpq}. \betagin{proof}[Proof of Theorem \ref{Carlpq}] Let $u\in C^{\infty}_{0}\pr{B_{R_0}(x_0)\backslash\set{x_0} }$ and let $v$ denote $u$ expressed in the coordinates $(t, \omegaega)$. After an application of Lemma \ref{Car22} to $v$, we apply Lemma \ref{CarLpp} to $L^+v$, and see that \betagin{align*} \tau \norm{t^{-1} e^{-\tau \varphi(t)}v}_{L^2(dtd\omegaega )} &+\norm{t^{-1}e^{-\tau \varphi(t)} \partial_t v}_{L^2(dtd\omegaega )} +\subsetm_{j=1}^2 \norm{t^{-1}e^{-\tau \varphi(t)} \Omegaega_j v }_{L^2(dtd\omegaega )} \\ &\leq C\norm{t^{-1} e^{-\tau \varphi(t)} L^+v}_{L^2(dtd\omegaega )} \le C\tau^{\beta}\norm{t e^{-\tau \varphi(t)} L^- L^+v}_{L^p(dtd\omegaega )}, \end{align*} where $\beta = \frac{1-p}{p}$.
Recalling the definitions of $t$, $\varphi$, and $L^\pm$, this gives \betagin{align} &\tau \|(\log r)^{-1} e^{-\tau \phi(r)}u\|_{L^2(r^{-2}dx)} + \|(\log r )^{-1} e^{-\tau \phi(r)}r \nabla u\|_{L^2(r^{-2}dx)} \nonumber \\ &\leq C \tau^{\betata} \|(\log r ) e^{-\tau \phi(r)} r^2 \Delta u\|_{L^p(r^{-2}dx)}. \lambdabel{dodo} \end{align} A direct computation shows that $\phi'(r)=\frac{1}{r}+\frac{2}{r\log r} \le \frac 1 r$ since $r \le R_0 \le 1$. By the Sobolev imbedding $W^{1, 2}\hookrightarrow L^{q'}$ with any $2<q < q' <\infty$ in dimension $n=2$, we have \betagin{align} \|(\log r)^{-1} e^{-\tau \phi(r)} u\|_{L^{q'}} &\le c_{q'} \|\nabla [(\log r)^{-1} e^{-\tau \phi(r)} u ]\|_{L^2} \nonumber \\ &\le C\tau \| (\log r)^{-1} e^{-\tau \phi(r)} r^{-1} u\|_{L^2} +C \| (\log r)^{-1} e^{-\tau \phi(r)} \nabla u \|_{L^2} \nonumber \\ &+C \| (\log r)^{-2} e^{-\tau \phi(r)} r^{-1} u \|_{L^2} \nonumber \\ &\le C\tau \| (\log r)^{-1} e^{-\tau \phi(r)} u \|_{L^2(r^{-2}dx)} +C \| (\log r)^{-1} e^{-\tau \phi(r)} r \nabla u \|_{L^2(r^{-2}dx)} \nonumber \\ &+C \| (\log r)^{-2} e^{-\tau \phi(r)} u \|_{L^2(r^{-2}dx)} \nonumber \\ &\leq C \tau^\beta \|(\log r ) e^{-\tau \phi(r)} r^2 \Delta u\|_{L^p(r^{-2} dx)}, \lambdabel{L2*Est} \end{align} where the last inequality follows from \eqref{dodo}. Obviously, the inequality \eqref{dodo} indicates that \betagin{equation} \|(\log r)^{-1} e^{-\tau \phi(r)}u\|_{L^2\pr{r^{-2}dx}} \leq C \tau^{\beta -1} \|(\log r ) e^{-\tau \phi(r)} r^2 \Delta u\|_{L^p(r^{-2} dx)}. \lambdabel{L2Est} \end{equation} To get a range of $L^q$-norms on the left, we interpolate the last two inequalities. Choose $\lambda \in \pr{0,1}$ so that $q = 2 \lambda + \pr{1-\lambda} q'$. The application of H\"older's inequality yields that \betagin{align*} \|(\log r)^{-1} e^{-\tau \phi(r)}u\|_{L^q\pr{r^{-2\lambda}dx}} &\le \|(\log r)^{-1} e^{-\tau \phi(r)}u\|_{L^2\pr{r^{-2}dx}}^{\frac{2\lambda}{q}} \|(\log r)^{-1} e^{-\tau \phi(r)}u\|_{L^{q'}}^{\frac{q'\pr{1-\lambda}}{q}}. \end{align*} Since $\lambda = \frac{q'-q}{q' -2}$, if we set $\theta = \frac{2\lambda}{q} = \frac{2(q'-q)}{q(q'-2)}$, then $1 - \theta = 1-\frac{2(q'-q)}{q(q'-2)}= \frac{q'(q-2)}{q(q'-2)}$. Thus, $\theta$ falls in the interval $[0,\ 1]$. From \eqref{dodo} and \eqref{L2Est}, we obtain that \betagin{align*} & \|(\log r)^{-1} e^{-\tau \phi(r)}u\|_{L^q\pr{r^{-2\lambda}dx}} \\ &\leq \|(\log r)^{-1} e^{-\tau \phi(r)}u\|_{L^2\pr{r^{-2}dx}}^{\theta} \|(\log r)^{-1} e^{-\tau\phi(r)}u\|_{L^{q'}}^{1-\theta} \\ &\le \brac{C\tau^{\betata-1} \|(\log r ) e^{-\tau \phi(r)} r^2 \Delta u\|_{L^p(r^{-2} dx)}}^\theta \brac{C \tau^\betata \|(\log r ) e^{-\tau \phi(r)} r^2 \Delta u\|_{L^p(r^{-2} dx)}}^{1 - \theta} \\ &= C \tau^{\betata-\theta} \|(\log r ) e^{-\tau \phi(r)} r^2 \Delta u\|_{L^p(r^{-2} dx)}. \end{align*} Setting $\varepsilon=\frac{q-2}{q'-2}$, we have $\theta = \frac{2(q'-q)}{q(q'-2)} = \frac{2}{q}\pr{1-\varepsilon}$, and therefore $\beta - \theta = \frac{1}{p}-1-\frac{2}{q}\pr{1-\varepsilon}$. Since $q' \in \pr{q,\infty}$ and $q > 2$, then $\varepsilon \in \pr{0,1}$. Moreover, since we can choose $q'$ to be arbitrarily large, then $\varepsilon$ may be made arbitrarily close to zero. Recalling the definition of $\lambda$, we conclude that for any $2 < q<\infty$ and any $\varepsilon \in \pr{0,1}$, \betagin{equation} \tau^{\frac{2}{q}\pr{1 - \varepsilon}+1-\frac{1}{p}}\|(\log r)^{-1} e^{-\tau\phi(r)}u\|_{L^q\pr{r^{-2\pr{1-\varepsilon}}dx}} \leq C \|(\log r ) e^{-\tau \phi(r)} r^2 \Delta u\|_{L^p(r^{-2} dx)}. \lambdabel{inter} \end{equation} Combining \eqref{inter} with \eqref{dodo}, we arrive at the proof of Theorem \ref{Carlpq}.
\end{proof} \section{The proof of Theorem \ref{thh}} \lambdabel{Thm1Proof} The first step in the proof of Theorem \ref{thh} is to establish a Carleman estimate for the operator $\Delta + W \cdot \nabla + V$. We use the triangle inequality and H\"older's inequality along with the crucial Carleman estimates in Theorem \ref{Carlpq}. Because of the correlation between the potentials $W(x)$ and $V(x)$, we need to work in cases depending on the relationships between $s$ and $t$. \betagin{lemma} Assume that for some $s \in \pb{2, \ \infty}$ and $t \in \pb{ 1, \ \infty}$, $\norm{W}_{L^s\pr{B_{R_0}}} \le K$ and $\norm{V}_{L^t\pr{B_{R_0}}} \le M$. Then for every sufficiently small $\varepsilon > 0$, there exist constants $C_0$, $C_1$, $C_2$, and sufficiently small $R_0 < 1$ such that for any $u\in C^{\infty}_{0}(B_{R_0}(x_0)\setminus \set{x_0})$ and $$\tau \ge 1+ C_1 K^{\kappa} + C_2 M^{\mu},$$ one has \betagin{align} \tau^{2 - \frac 1 p} \|(\log r)^{-1} e^{-\tau \phi(r)}u\|_{L^2(r^{-2}dx)} &\leq C_0 \|(\log r ) e^{-\tau \phi(r)} r^2\pr{ \Delta u + W \cdot \nabla u + V u}\|_{L^p(r^{-2} dx)} , \lambdabel{main1} \end{align} where $\displaystyle \kappa = \left\{\betagin{array}{ll} \frac{2s}{s-2} & t > \frac{2s}{s+2} \\ \frac{t}{t-1-\varepsilon t} & 1 < t \le \frac{2s}{s+2} \end{array}\right.$, $\displaystyle \mu = \left\{\betagin{array}{ll} \frac{2s}{3s-2} & t \ge s \\ \frac{2st}{3st+2t-4s-\varepsilon(2st+4t-4s)} & \frac{2s}{s+2} < t < s \\ \frac{t}{t-1+\varepsilon\pr{t-2\varepsilon t}} & 1 < t \le \frac{2s}{s+2} \end{array}\right.$, and \newline $\displaystyle p = \left\{\betagin{array}{ll} \frac{2s}{s+2} & t > \frac{2s}{s+2} \\ \frac{t}{1+\varepsilon t} & 1 < t \le \frac{2s}{s+2} \end{array}\right.$. Moreover, $C_0 = 2C$, where $C\pr{p, q, \varepsilon}$ is from Theorem \ref{Carlpq} with $\displaystyle q = \left\{\betagin{array}{ll} \frac{2st}{st+2t-2s} & \frac{2s}{s+2} < t < s \\ \frac 1 \varepsilon & 1 < t \le \frac{2s}{s+2} \end{array}\right.$, $C_1 = C_1\pr{s, t, \varepsilon}$, and $C_2 = C_2\pr{s,t, \varepsilon}$. \lambdabel{CarlpqVW} \end{lemma} \betagin{proof} Assume that $\varepsilon > 0$ is sufficiently small. From our crucial estimate \eqref{mainCar} in Theorem \ref{Carlpq} and the triangle inequality, we get that \betagin{align} &\tau^{2 - \frac 1 p} \|(\log r)^{-1} e^{-\tau \phi(r)}u\|_{L^2(r^{-2}dx)} +\tau^{\frac 2 q \pr{1 - \varepsilon} + 1 - \frac 1 p} \|(\log r)^{-1} e^{-\tau \phi(r)}u\|_{L^q(r^{-2\pr{1 - \varepsilon}}dx)} \nonumber \\ &+\tau^{1 - \frac 1 p} \|(\log r )^{-1} e^{-\tau \phi(r)}r \nabla u\|_{L^2(r^{-2}dx)} \nonumber \\ &\le C\|(\log r) e^{-\tau \phi(r)} r^2 (\Delta u)\|_{L^p(r^{-2} dx)} \nonumber \\ &\le C\|(\log r) e^{-\tau \phi(r)} r^2 (\Delta u+ W\cdot \nabla u + V u)\|_{L^p(r^{-2} dx)} \nonumber \\ &+ C\|(\log r) e^{-\tau \phi(r)} r^2 W\cdot \nabla u \|_{L^p(r^{-2} dx)} + C\|(\log r) e^{-\tau \phi(r)} r^2 V u \|_{L^p(r^{-2} dx)}. \lambdabel{triIneq} \end{align} To reach the estimate (\ref{main1}) in the lemma, we will absorb the last two terms above into the lefthand side by appropriately choosing $p$ and $q$, and making $\tau$ sufficiently large. 
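Before entering the cases, we illustrate the absorption mechanism in schematic form; the display below is only an illustration, and the precise thresholds are chosen in each case separately. If the right-hand side contains a term $c\, C K \|(\log r)^{-1} e^{-\tau \phi(r)} r |\nabla u|\|_{L^2(r^{-2} dx)}$ while the left-hand side contains $\tau^{1 - \frac 1 p} \|(\log r)^{-1} e^{-\tau \phi(r)} r \nabla u\|_{L^2(r^{-2} dx)}$, then $$\tau \ge \pr{2 c C K}^{\frac{p}{p-1}} \quad \mbox{implies} \quad c\, C K \le \frac 1 2 \tau^{1 - \frac 1 p},$$ so that this term may be moved to the left-hand side at the cost of a factor of $\frac 1 2$; for $p = \frac{2s}{s+2}$ the resulting exponent is $\frac{p}{p-1} = \frac{2s}{s-2}$, which is exactly the power of $K$ appearing in $\kappa$ when $t > \frac{2s}{s+2}$.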
If $p \in \pb{1, 2}$, by H\"older's inequality, it follows that \betagin{align} & \|(\log r) e^{-\tau\phi(r)} r^2 W\cdot\nabla u\|_{L^p(r^{-2} dx)} \nonumber \\ &\le \|W\|_{L^{\frac{2p}{2-p}}\pr{B_{R_0}}} \|\pr{\log r}^2 r^{2 - \frac 2 p}\|_{L^{\infty}\pr{B_{R_0}}} \|(\log r)^{-1} e^{-\tau \phi(r)} r |\nabla u|\|_{L^2(r^{-2} dx)} \nonumber \\ &\le c \|W\|_{L^{\frac{2p}{2-p}}\pr{B_{R_0}}} \|(\log r)^{-1} e^{-\tau \phi(r)} r |\nabla u|\|_{L^2(r^{-2} dx)}, \lambdabel{hod2} \end{align} where we have used the fact that $2 - \frac 2 p > 0$ and $R_0$ is small enough. Similarly, an application of H\"older's inequality implies that \betagin{align} \|(\log r) e^{-\tau \phi(r)} r^2 V u \|_{L^p(r^{-2} dx)} &\le \|V\|_{L^{\frac{2p}{2-p}}\pr{B_{R_0}}} \|\pr{\log r}^2 r^{3 - \frac 2 p}\|_{L^{\infty}\pr{B_{R_0}}} \|(\log r)^{-1} e^{-\tau \phi(r)} u\|_{L^2(r^{-2} dx)} \nonumber \\ &\le c \|V\|_{L^{\frac{2p}{2-p}}\pr{B_{R_0}}} \|(\log r)^{-1} e^{-\tau \phi(r)} u\|_{L^2(r^{-2} dx)}. \lambdabel{hod3} \end{align} Note that $1 - \frac 1 p + \frac 1 q\pr{1 - \varepsilon} > 0$. Therefore, if $p \in \pb{1, 2}$ and $q > 2$, by H\"older's inequality again, we obtain that \betagin{align} & \|(\log r) e^{-\tau \phi(r)} r^2 V u \|_{L^p (r^{-2}dx)} \nonumber \\ &\le \|V\|_{L^{\frac{pq}{q-p}}\pr{B_{R_0}}} \|\pr{\log r}^2 r^{2\brac{1 - \frac 1 p + \frac 1 q\pr{1 - \varepsilon}}} \|_{L^{\infty}\pr{B_{R_0}}} \|(\log r)^{-1} e^{-\tau \phi(r)} u r^{ -\frac 2 q\pr{1 - \varepsilon}}\|_{L^q} \nonumber \\ &\le c \|V\|_{L^{\frac{pq}{q-p}}\pr{B_{R_0}}} \|(\log r)^{-1} e^{-\tau \phi(r)} u\|_{L^q(r^{-2\pr{1 - \varepsilon}}dx)}. \lambdabel{hod33} \end{align} Next we work in cases to achieve the conclusion (\ref{main1}) in the lemma and determine the appropriate power of $\tau$. \\ \noindent {\bf Case 1: $t \in \brac{s, \infty}$} \\ If $t \ge s$, then we choose $p = \frac{2s}{s+2}$. Since $s > 2$, then $p\in (1, \ 2]$ is in the appropriate range of Theorem \ref{Carlpq}. As $\frac{2p}{2-p} = s \le t$, by H\"older's inequality, $\|V\|_{L^{s}} \le c \|V\|_{L^{t}}$. Substituting \eqref{hod2} and \eqref{hod3} into \eqref{triIneq} yields that \betagin{align*} &\tau^{\frac 3 2 - \frac 1 s} \|(\log r)^{-1} e^{-\tau \phi(r)}u\|_{L^2(r^{-2}dx)} +\tau^{\frac 1 2 - \frac 1 s} \|(\log r )^{-1} e^{-\tau \phi(r)}r \nabla u\|_{L^2(r^{-2}dx)} \nonumber \\ &\le C\|(\log r) e^{-\tau \phi(r)} r^2 (\Delta u+ W\cdot \nabla u + V u)\|_{L^p(r^{-2} dx)} \nonumber \\ &+ c C K \|(\log r)^{-1} e^{-\tau \phi(r)} r |\nabla u|\|_{L^2(r^{-2} dx)} + c C M \|(\log r)^{-1} e^{-\tau \phi(r)} u\|_{L^2(r^{-2} dx)}. \end{align*} In order to absorb the last two terms on the right into the lefthand side of the last inequality, we choose $\tau \ge 1 + \pr{c C K}^{\frac {2s} {s-2} }+ \pr{2c C M}^{\frac {2s} {3s-2}} $ to get the conclusion (\ref{main1}). \\ \noindent {\bf Case 2: $t \in \pr{\frac{2s}{s+2}, s}$} \\ In this case, we choose $p = \frac{2s}{s+2}$. We use (\ref{hod33}) to absorb the term involving the potential $V(x)$. We need to choose $t=\frac{pq}{q-p}$. Since $p = \frac{2s}{s+2}$, then $q=\frac{2st}{st+2t-2s}$. We can check that $p$ falls in the appropriate range and that $q \in \pr{2, \infty}$ follows from the assumption on $t$.
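For completeness, we record the elementary verification of these claims: $$q - 2 = \frac{2st - 2\pr{st+2t-2s}}{st+2t-2s} = \frac{4\pr{s-t}}{st+2t-2s} > 0 \quad \mbox{and} \quad st+2t-2s = t\pr{s+2}-2s > 0,$$ since $\frac{2s}{s+2} < t < s$; moreover, $p = \frac{2s}{s+2} \in \pr{1, 2}$ because $s > 2$.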
Since $\frac{2p}{2-p} = s$ and $\frac{pq}{q-p} = t$, substituting \eqref{hod2} and \eqref{hod33} into \eqref{triIneq}, we obtain that \betagin{align} &\tau^{\frac 3 2 - \frac 1 s} \|(\log r)^{-1} e^{-\tau \phi(r)}u\|_{L^2(r^{-2}dx)} +\tau^{\frac 3 2 + \frac 1 s - \frac 2 t - \varepsilon \pr{ 1 + \frac 2 s - \frac 2 t }} \|(\log r)^{-1} e^{-\tau \phi(r)}u\|_{L^q(r^{-2\pr{1 - \varepsilon}}dx)} \nonumber \\ &+\tau^{\frac 1 2 - \frac 1 s} \|(\log r )^{-1} e^{-\tau \phi(r)}r \nabla u\|_{L^2(r^{-2}dx)} \nonumber \\ &\le C\|(\log r) e^{-\tau \phi(r)} r^2 (\Delta u+ W\cdot \nabla u + V u)\|_{L^p(r^{-2} dx)} \nonumber \\ &+ c C K \|(\log r)^{-1} e^{-\tau \phi(r)} r |\nabla u|\|_{L^2(r^{-2} dx)} + c C M \|(\log r)^{-1} e^{-\tau \phi(r)} u\|_{L^q(r^{-2\pr{1 - \varepsilon}}dx)}. \lambdabel{boundSub} \end{align} Since $s > 2$ implies that $3st + 2t - 4s>2st + 4t - 4s$, then $0<\varepsilon<1<\frac{3st + 2t - 4s}{2st + 4t -4s}$ and we have that $\brac{\frac 3 2 + \frac 1 s - \frac 2 t - \varepsilon\pr{ 1 + \frac 2 s - \frac 2 t }}^{-1}>0$. Straightforward computations show that $\brac{\frac 3 2 + \frac 1 s - \frac 2 t - \varepsilon\pr{ 1 + \frac 2 s - \frac 2 t }}^{-1} =\frac{2st}{3st+2t-4s-\varepsilon ( 2st+4t-4s)}$. Therefore, to absorb the last two terms into the lefthand side and get (\ref{main1}), we take $\tau \ge 1 + \pr{c C K}^{\frac {2s} {s-2}}+ \pr{c C M}^{\frac{2st}{3st+2t-4s-\varepsilon ( 2st+4t-4s)}}$. \\ \noindent {\bf Case 3: $t \in \pb{1, \frac{2s}{s+2}}$} \\ As in the previous case, we'll substitute \eqref{hod2} and \eqref{hod33} into \eqref{triIneq}. Therefore, we need to choose $p \in \pr{1,2}$ and $q \in \pr{2, \infty}$ so that $\frac{2p}{2-p} \le s$ and $\frac{pq}{q-p} = t$. Let $q = \frac{1}{\varepsilon}$ and $p =\frac{tq}{q+t}$. Note that $p=\frac{tq}{q+t}=\frac{t}{1+\varepsilon t} \in \pr{1, 2}$ if $0<\varepsilon<\frac{t-1}{t}$. And as long as $\varepsilon < \frac 1 2$, then $q$ is in the appropriate range. Since $1<t\leq \frac{2s}{s+2}$, then $s\geq \frac{2t}{2-t}$. It follows that $$\frac{2p}{2-p} = \frac{2tq}{2q+2t-tq}=\frac{2t}{2-t+2t\varepsilon}<\frac{2t}{2-t} \le s.$$ An application of H\"older's inequality shows that $\|W\|_{L^{\frac{2p}{2-p}}} \le c \|W\|_{L^{s}}$. We have $$\frac{2}{q}\pr{1 - \varepsilon}+1-\frac{1}{p}= 2 \varepsilon \pr{1 - \varepsilon}+1-\frac{1+\frac t q}{t}=\frac{t-1}{t}+\varepsilon_0 >0,$$ where $\varepsilon_0 = \varepsilon\pr{1-2\varepsilon}$. Substituting \eqref{hod2} and \eqref{hod33} into \eqref{triIneq} gives that \betagin{align} &\tau^{\frac{2t-1}{t}-\varepsilon} \|(\log r)^{-1} e^{-\tau \phi(r)}u\|_{L^2(r^{-2}dx)} +\tau^{\frac{t-1}{t}+\varepsilon_0} \|(\log r)^{-1} e^{-\tau \phi(r)}u\|_{L^q(r^{-2\pr{1 - \varepsilon}}dx)} \nonumber \\ &+\tau^{\frac{t-1}{t}-\varepsilon} \|(\log r )^{-1} e^{-\tau \phi(r)}r \nabla u\|_{L^2(r^{-2}dx)} \nonumber \\ &\le C\|(\log r) e^{-\tau \phi(r)} r^2 (\Delta u+ W\cdot \nabla u + V u)\|_{L^p(r^{-2} dx)} \nonumber \\ &+ c C K \|(\log r)^{-1} e^{-\tau \phi(r)} r |\nabla u|\|_{L^2(r^{-2} dx)} + c C M \|(\log r)^{-1} e^{-\tau \phi(r)} u\|_{L^q(r^{-2\pr{1 - \varepsilon}}dx)}. \lambdabel{boundSub} \end{align} If we choose $\tau \ge 1 + \pr{ c C K}^{\frac{t}{t-1-\varepsilon t}} + \pr{ c C M}^{\frac{t}{t-1+\varepsilon_0t}}$, we will arrive at the conclusion (\ref{main1}). \end{proof} With the aid of the Carleman estimates in Lemma \ref{CarlpqVW}, we present the $L^\infty$ three-ball inequality that will serve as an important tool in the proof of Theorem \ref{thh}. 
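Roughly speaking, the three-ball inequality below states that, up to logarithmic factors and up to a correction term that only plays a role when the Carleman parameter cannot be chosen freely, $$\|u\|_{L^\infty \pr{B_{3r_1/4}}} \lesssim \|u\|_{L^\infty(B_{2r_0})}^{k_0}\, \|u\|_{L^\infty(B_{R_1})}^{1 - k_0},$$ where the exponent $k_0$ is determined by the radii $r_0$, $r_1$, and $R_1$ through the weight function $\phi$.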
\betagin{lemma} Let $0 < r_0< r_1< R_1 < R_0$, where $R_0 < 1$ is sufficiently small. Assume that for some $s \in \pb{2, \infty}$, $t \in \pb{1, \infty}$, $\|W\|_{L^s\pr{B_{R_0}}} \le K$ and $\|V\|_{L^t\pr{B_{R_0}}} \le M$. Let $u$ be a solution to \eqref{goal}. Then, for any sufficiently small $\varepsilon, \partialta > 0$, \betagin{align} \|u\|_{L^\infty \pr{B_{3r_1/4}}} &\le C F_ \partialta\pr{r_1} |\log r_1| \brac{ (K+|\log r_0|)F_ \partialta\pr{r_0} \|u\|_{L^\infty(B_{2r_0})}}^{k_0} \nonumber \\ &\times \brac{(K+|\log R_1|) F_ \partialta\pr{R_1}\|u\|_{L^\infty(B_{R_1})}}^{1 - k_0} \nonumber \\ &+C F_ \partialta\pr{r_1} \pr{\frac{R_1 }{r_1}} \pr{1 +\frac{|\log r_0|}{K}} \nonumber \\ &\times \exp\brac{\pr{1 + C_1 K^\kappa + C_2 M^\mu} \pr{\phi\pr{\frac{R_1}{2}}-\phi(r_0)}} \|u\|_{L^\infty(B_{2r_0})}, \lambdabel{three} \end{align} where $\displaystyle k_0 = \frac{\phi(\frac{R_1}{2})-\phi(r_1)}{\phi(\frac{R_1}{2})-\phi(r_0)}$, $F_ \partialta\pr{r} = (1 + r K^{\frac{s}{s-2}+ \partialta} + r M^{\frac{t}{2t-2}+ \partialta})$, and $\kappa$, $\mu$, $C_1$, and $C_2$ are as given in Lemma \ref{CarlpqVW}, and $C = C\pr{s, t, \varepsilon}$. \lambdabel{threeBall} \end{lemma} \betagin{proof} Fix the sufficiently small constants $\varepsilon, \partialta > 0$. Let $r_0< r_1< R_1$. Choose a smooth function $\eta\in C^\infty_{0}(B_{R_0})$ with $B_{2R_1}\subsetbset B_{R_0}$. The standard notation $\brac{a,b}$ is used to denote a closed annulus with inner radius $a$ and outer radius $b$. Let $$D_1=\brac{\frac{3}{2}r_0, \frac{1}{2}R_1 }, \quad \quad D_2= \brac{r_0, \frac{3}{2}r_0}, \quad \quad D_3=\brac{\frac{1}{2}R_1, \frac{3 }{4}R_1}.$$ Let $\eta=1$ on $D_1$ and $\eta=0$ on $[0, \ r_0]\cup \brac{\frac{3}{4}R_1, \ R_1}$. It is easy to see that $|\nabla \eta|\leq \frac{C}{r_0}$ and $|\nabla^2\eta|\leq \frac{C}{r_0^2}$ on $D_2$. Similarly, we have $|\nabla \eta|\leq \frac{C}{R_1}$ and $|\nabla^2 \eta|\leq\frac{C}{R_1^2}$ on $D_3$. Since $u$ is a solution to \eqref{goal} in $B_{R_0}$, as discussed in the introduction, $u \in L^\infty\pr{B_{R_1}} \cap W^{1,2}\pr{B_{R_1}} \cap W^{2,p}\pr{B_{R_1}}$. By regularization, the estimate in Lemma \ref{CarlpqVW} holds for $\eta u$. Taking into account that $u$ is a solution to equation \eqref{goal} and substituting $\eta u$ into the Carleman estimate in Lemma \ref{CarlpqVW}, we get that whenever $$\tau \ge 1+ C_1 K^{\kappa} + C_2 M^{\mu},$$ \betagin{align*} \tau^{2 - \frac 1 p} \|(\log r)^{-1} e^{-\tau \phi(r)} \eta u\|_{L^2(r^{-2}dx)} &\leq C_0 \|(\log r ) e^{-\tau \phi(r)} r^2\pr{ \Delta \pr{\eta u} + W \cdot \nabla\pr{\eta u} + V \eta u}\|_{L^p(r^{-2} dx)} \\ &= C_0 \|(\log r ) e^{-\tau \phi(r)} r^2\pr{ \Delta \eta \, u + 2 \nabla \eta \cdot \nabla u + W \cdot \nabla \eta \, u }\|_{L^p(r^{-2} dx)}, \end{align*} where $\kappa$, $\mu$, and $p$ are as given in the Lemma \ref{CarlpqVW}. 
Thus, the following holds \betagin{equation} \tau^{2 - \frac 1 p} \|(\log r)^{-1} e^{-\tau \phi(r)} u\|_{L^2(D_1, r^{-2}dx )}\leq \mathcal{B}, \lambdabel{jjj} \end{equation} where $$ \mathcal{B}:= C_0 \|(\log r) e^{-\tau \phi(r)} r^2 (\Delta \eta \, u+W\cdot \nabla \eta \, u + 2\nabla \eta \cdot \nabla u)\|_{L^p(D_2\cup D_3, r^{-2} dx)}.$$ To bound $\mathcal{B}$, an application of H\"older's inequality yields that \betagin{align*} \|(\log r) e^{-\tau \phi(r)} r^2 \Delta \eta \, u\|_{L^p(D_2\cup D_3, r^{-2} dx)} &\le \| \pr{\log r} r^{2} \Delta \eta\|_{L^\infty\pr{D_2}} \| r^{- \frac 2 p}\|_{L^{\frac{2p}{2-p}}\pr{D_2}}\| e^{-\tau \phi(r)} u\|_{L^2(D_2)} \\ &+\| \pr{\log r} r^{2} \Delta \eta\|_{L^\infty\pr{D_3}} \| r^{- \frac 2 p}\|_{L^{\frac{2p}{2-p}}\pr{D_3}}\| e^{-\tau \phi(r)} u\|_{L^2(D_3)} \end{align*} and \betagin{align*} \|(\log r) e^{-\tau \phi(r)} r^2 \nabla \eta \cdot \nabla u\|_{L^p(D_2\cup D_3, r^{-2} dx)} &\le \| \pr{\log r} r^{2} \nabla \eta\|_{L^\infty\pr{D_2}} \| r^{- \frac 2 p}\|_{L^{\frac{2p}{2-p}}\pr{D_2}}\| e^{-\tau \phi(r)} \nabla u\|_{L^2(D_2)} \\ &+\| \pr{\log r} r^{2} \nabla \eta\|_{L^\infty\pr{D_3}} \| r^{- \frac 2 p}\|_{L^{\frac{2p}{2-p}}\pr{D_3}}\| e^{-\tau \phi(r)} \nabla u\|_{L^2(D_3)}. \end{align*} Since $\frac{2p}{2-p} \le s$, arguments that are similar to those that appear in \eqref{hod2} show that \betagin{align*} &\|(\log r) e^{-\tau\phi(r)} r^2 W\cdot\nabla \eta \, u\|_{L^p(D_2 \cup D_3, r^{-2} dx)} \\ &\le c \|W\|_{L^{s}\pr{D_2}} \norm{\nabla \eta}_{L^\infty\pr{D_2}}\| e^{-\tau \phi(r)} r u \|_{L^2(D_2, r^{-2} dx)} \\ &+ c \|W\|_{L^{s}\pr{D_3}} \norm{\nabla \eta}_{L^\infty\pr{D_3}}\| e^{-\tau \phi(r)} r u \|_{L^2(D_3, r^{-2} dx)} \\ &\le cK r_0^{-1}\|e^{-\tau \phi(r)} u\|_{{L^2(D_2)}} + cK R_1^{-1}\|e^{-\tau \phi(r)} u \|_{L^2(D_3)}, \end{align*} where we have used the bounds on $\abs{\nabla \eta}$. From the estimates of $\eta$ in $D_2$ and $D_3$, it follows that \betagin{eqnarray*} \mathcal{B} &\leq & C |\log r_0| r_0^{-1}\pr{\|e^{-\tau \phi(r)} u\|_{{L^2(D_2)}} + r_0 \|e^{-\tau \phi(r)} \nabla u\|_{{L^2(D_2)}}} +CK r_0^{-1}\|e^{-\tau \phi(r)}u\|_{{L^2(D_2)}} \\ &+& C|\log R_1|R_1^{-1}\pr{\|e^{-\tau \phi(r)} u\|_{{L^2(D_3)}}+ R_1 \|e^{-\tau \phi(r)} \nabla u\|_{{L^2(D_3)}}} +CK R_1^{-1}\|e^{-\tau \phi(r)} u\|_{{L^2(D_3)}}. \end{eqnarray*} Therefore, \betagin{eqnarray*} \mathcal{B} &\leq & C\pr{K + |\log r_0|} r_0^{-1} e^{-\tau \phi(r_0)} \pr{\| u\|_{{L^2(D_2)}}+ r_0 \|\nabla u\|_{{L^2(D_2)}}} \\ &+& C\pr{K+ |\log R_1|} R_1^{-1}e^{-\tau \phi\pr{\frac{R_1}{2}}} \pr{\| u\|_{{L^2(D_3)}}+ R_1 \| \nabla u\|_{{L^2(D_3)}}}, \end{eqnarray*} where we have used the fact that $e^{-\tau\phi(r)}$ is a decreasing function with respect to $r$. To estimate the gradient term, $\nabla u$, we use the Caccioppoli inequality (see Lemma \ref{CaccLem} in the appendix) to get that \betagin{equation*} \|\nabla u\|_{L^2(D_2)}\leq \frac{C}{r_0}F_ \partialta\pr{r_0} \| u\|_{L^2\pr{B_{2r_0}\backslash B_{{r_0}/{2}}}} \end{equation*} and \betagin{equation*} \|\nabla u\|_{L^2(D_3)}\leq \frac{C}{R_1} F_ \partialta\pr{R_1} \| u\|_{L^2\pr{B_{R_1}\backslash B_{{R_1}/{4}}}}, \end{equation*} where we adopt the notation $F_ \partialta\pr{r} = 1+r K^{\frac{s}{s-2} + \partialta}+r M^{\frac{t}{2t-2}+ \partialta}$. Therefore, \betagin{eqnarray*} \mathcal{B} &\leq& C\pr{K+|\log r_0|} r_0^{-1}e^{-\tau \phi(r_0)}F_ \partialta\pr{r_0}\|u\|_{L^2(B_{2r_0})} \nonumber \\ &+& C\pr{K+|\log R_1|} {R_1}^{-1}e^{-\tau \phi\pr{\frac{R_1}{2}}}F_ \partialta\pr{R_1} \|u\|_{L^2(B_{R_1})}.
\end{eqnarray*} Introduce a new set $D_4=\{r\in D_1, \ r\leq r_1\}$. From \eqref{jjj} and that $\tau \ge 1$ and $2 - \frac 1 p > 0$, it follows that \betagin{align*} \| u\|_{L^2 (D_4)} &\le \tau^{2 - \frac 1 p}\| u\|_{L^2 (D_4)} \le \tau^{2 - \frac 1 p} \|e^{\tau\phi(r)}(\log r) r\|_{L^\infty\pr{D_4}} \| (\log r)^{-1} e^{-\tau\phi(r)} u\|_{L^2 (D_4, r^{-2}dx)} \\ &\le e^{\tau \phi(r_1)} |\log r_1| r_1 \mathcal{B}, \end{align*} where we have considered that $e^{\tau\phi(r)}(\log r) r$ is increasing on $D_1$ for $R_0$ sufficiently small. Adding $\|u\|_{L^2 \pr{B_{3r_0/2}}}$ to both sides of the last inequality and using the upper bound on $\mathcal{B}$ implies that \betagin{align*} \| u\|_{L^2 (B_{r_1})} &\le C |\log r_1| \pr{K+|\log r_0|} \pr{\frac{r_1}{r_0}} e^{\tau \brac{\phi(r_1)- \phi(r_0)}} F_ \partialta\pr{r_0} \|u\|_{L^2(B_{2r_0})} \\ &+ C |\log r_1| \pr{K+|\log R_1|} \pr{\frac{r_1}{R_1}} e^{\tau\brac{\phi(r_1) - \phi\pr{\frac{R_1}{2}}}} F_ \partialta\pr{R_1} \|u\|_{L^2(B_{R_1})}. \end{align*} For the ease of the presentation, we define $$U_1 =\|u\|_{L^2(B_{2r_0})}, \quad \quad U_2=\|u\|_{L^2(B_{R_1})},$$ $$A_1 = C |\log r_1| \pr{K+|\log r_0|} \pr{\frac{r_1}{r_0}} F_ \partialta\pr{r_0},$$ and $$A_2 = C |\log r_1| \pr{K+|\log R_1|} \pr{\frac{r_1}{R_1}} F_ \partialta\pr{R_1}.$$ Then the previous inequality simplifies to \betagin{eqnarray} \| u\|_{L^2 (B_{r_1})} &\leq& A_1 \brac{\frac{\exp \pr{\phi(r_1)}}{\exp\pr{ \phi(r_0)}}}^\tau U_1 + A_2 \brac{\frac{\exp\pr{ \phi(r_1)}}{\exp\pr{ \phi\pr{\frac{R_1}{2}}}}}^\tau U_2. \lambdabel{D4est} \end{eqnarray} Introduce another parameter $k_0$ as $$\frac{1}{k_0}=\frac{\phi(\frac{R_1}{2})-\phi(r_0)}{\phi(\frac{R_1}{2})-\phi(r_1)}.$$ Recall that $\phi(r)=\log r+\log (\log r)^2$. If $r_1$ and $R_1$ are fixed, and $r_0\ll r_1$, i.e. $r_0$ is sufficiently small, then $\frac{1}{k_0}\sigmameq \log \frac{1}{r_0}$. Let $$\tau_1 =\frac{k_0}{\phi\pr{\frac{R_1}{2}}-\phi(r_1)}\log\pr{\frac{A_2{U}_2}{A_1 {U}_1}}.$$ If $\tau_1 \ge 1 + C_1 K^\kappa + C_2 M^\mu$, then the calculations performed above are valid with $\tau = \tau_1$. Substituting $\tau_1$ into \eqref{D4est} gives that \betagin{eqnarray} \| u\|_{L^2 (B_{r_1})} &\leq& 2\pr{A_1 U_1}^{k_0}\pr{A_2 U_2}^{1 - k_0}. \lambdabel{mix1} \end{eqnarray} Instead, if $\tau_1 < 1 + C_1 K^\kappa + C_2 M^\mu$, then \betagin{align*} U_2 < \frac{A_1}{A_2} \exp\brac{\pr{1 + C_1 K^\kappa + C_2 M^\mu} \pr{\phi\pr{\frac{R_1}{2}}-\phi(r_0)}} U_1. \end{align*} The last inequality indicates that \betagin{equation} \|u\|_{L^2 (B_{r_1})} \le C \pr{\frac{R_1}{r_0}} \pr{1 +\frac{|\log r_0|}{K}} e^{\pr{1 + C_1 K^\kappa + C_2 M^\mu} \pr{\phi\pr{\frac{R_1}{2}}-\phi(r_0)}} \|u\|_{L^2(B_{2r_0})}. \lambdabel{mix2} \end{equation} The combination of \eqref{mix1} and \eqref{mix2} gives that \betagin{align} \| u\|_{L^2 (B_{r_1})} &\le C |\log r_1| r_1 \brac{\frac{\pr{K+|\log r_0|} F_ \partialta\pr{r_0}}{r_0} \|u\|_{L^2(B_{2r_0})}}^{k_0} \nonumber \\ &\times \brac{\frac{\pr{K+|\log R_1|} F_ \partialta\pr{R_1}}{R_1} \|u\|_{L^2(B_{R_1})}}^{1 - k_0} \nonumber \\ &+C \pr{\frac{R_1}{r_0}} \pr{1 +\frac{|\log r_0|}{K}}e^{\pr{1 + C_1 K^\kappa + C_2 M^\mu} \pr{\phi\pr{\frac{R_1}{2}}-\phi(r_0)}} \|u\|_{L^2(B_{2r_0})}. \lambdabel{end2} \end{align} By elliptic regularity (see for example \cite{HL11}, \cite{GT01}) and a scaling argument, we have that \betagin{align} \|u\|_{L^\infty(B_r)} \le C \frac{F_ \partialta\pr{r}}{r} \|u\|_{L^2(B_{2r})}. 
\lambdabel{ell} \end{align} From \eqref{end2} and \eqref{ell}, we arrive at the three-ball inequality in the $L^\infty$-norm that is given in \eqref{three}. \end{proof} Now we are ready to give the proof of Theorem \ref{thh}. We first use the three-ball inequality to perform the propagation of smallness argument. Then we apply the three-ball inequality again to obtain the order of vanishing estimate. \betagin{proof} [Proof of Theorem \ref{thh}] Without loss of generality, we may assume that $x_0 = 0$. Let $\varepsilon > 0$ be sufficiently small. Fix some $ \partialta > 0$. Choose $r_0=\frac{r}{2}$, $r_1=4r$ and $R_1=10r$. The application of \eqref{three} shows that \betagin{align} \|u\|_{L^\infty \pr{B_{3r}}} &\le C F_ \partialta\pr{1}^2 \pr{K+|\log r|} |\log r| \|u\|_{L^\infty(B_{r})}^{k_0} \|u\|_{L^\infty(B_{10r})}^{1 - k_0} \nonumber \\ &+C \pr{1 + \frac{\abs{\log r}}{K}} \exp\brac{ \pr{1 + C_1 K^\kappa + C_2 M^\mu} \pr{\phi\pr{5r}-\phi\pr{\frac r 2}} } \|u\|_{L^\infty(B_{r})}, \lambdabel{refi} \end{align} where $\displaystyle k_0 = \frac{\phi(5r)-\phi(4r)}{\phi(5r)-\phi\pr{\frac r 2}}$. It is easy to check that $$c^{-1}\leq \phi(5r)-\phi\pr{\frac{r}{2}}\leq c \quad \mbox{and} \quad c^{-1}\leq \phi(5r)-\phi(4r)\leq c,$$ where $c$ is some universal constant. Thus, $k_0$ is independent of $r$ in this case. Choose a small $r$ such that $$\|u \|_{L^\infty\pr{B_r}}=\ell,$$ where $\ell>0$ by the unique continuation property. Since $\|u \|_{L^\infty\pr{B_1}} \geq 1$, there exists some $\bar x\in \overline{B_1}$ such that $\displaystyle \abs{u(\bar x)}\geq 1$. We select a sequence of balls with radius $r$, centered at $x_0=0, \ x_1, \ldots, x_d$ so that $x_{i+1}\in B_{r}(x_i)$ for every $i$, and $\bar x\in B_{r}(x_d)$. Notice that the number of balls, $d$, depends on the radius $r$ which will be fixed later. Applying the $L^\infty$ version of the three-ball inequality (\ref{refi}) at the origin and using the boundedness assumption on $u$ given in \eqref{bound} yields that \betagin{align*} \|u\|_{L^\infty \pr{B_{3r}(0)}} &\le C \hat C^{1 - k_0} F_ \partialta\pr{1}^2 \ell^{k_0} \pr{1+\frac{|\log r|}{K}} \abs{\log r} K \nonumber \\ &+ C \ell \pr{1 + \frac{|\log r|}{K}} \exp\brac{c\pr{1 + C_1 K^\kappa + C_2 M^\mu} }. \end{align*} Since $B_r(x_{i+1})\subsetbset B_{3r}(x_{i})$, it is true that \betagin{equation} \|u\|_{L^\infty (B_r(x_{i+1}))}\leq \|u\|_{L^\infty (B_{3r}(x_{i}))} \lambdabel{bbb} \end{equation} for every $i = 0, 1, \ldots, d-1$. Repeating the argument as before with balls centered at $x_i$ and using \eqref{bbb}, we get \betagin{equation*} \|u\|_{L^\infty (B_{3r}(x_{i}))} \leq C_i \ell^{D_i} \pr{1+\frac{|\log r|}{K}}^{E_i} |\log r|^{F_i} \exp\brac{H_i\pr{1 + C_1 K^\kappa + C_2 M^\mu} } \end{equation*} for $i=0, 1, \cdots, d$, where each $C_i$ depends on $d$, $\hat C$, $s$, $t$, $K$, $M$, and $C$ from Lemma \ref{threeBall}, and $D_i$, $E_i$, $F_i$, $H_i$ are constants that depend on $d$. Due to the fact that $\abs{u(\bar x)}\geq 1$ and $\bar x \in B_{3r}(x_d)$, we have that \betagin{equation*} \ell \ge c \exp\brac{-C \pr{1 + C_1 K^\kappa + C_2 M^\mu} }\pr{1+\frac{|\log r|}{K}}^{-C} |\log r|^{-C}, \end{equation*} where $c$ and $C$ are new constants with $c\pr{s, t, d, K, M, \hat C, \varepsilon}$ and $C\pr{d}$. Now we fix the radius $r$ as a small number. Consequently, $d$ is a fixed constant. We are going to use the three-ball inequality at the origin again with a different set of radii. Let $\frac{3}{4}r_1=r$, $R_1=10r$ and let $r_0 \ll r$, i.e. $r_0$ is sufficiently small with respect to $r$.
It follows from the three-ball inequality \eqref{three} that, $$ \ell \leq {\rm I} +\Pi,$$ where \betagin{align*} {\rm I} &= C F_ \partialta\pr{r} |\log r| \brac{ (K+|\log r_0|) F_ \partialta\pr{r_0}\|u\|_{L^\infty(B_{2r_0})}}^{k_0} \brac{ (K+|\log 10r|) F_ \partialta\pr{10 r} \|u\|_{L^\infty(B_{10 r})}}^{1 - k_0} \\ \Pi &= C F_ \partialta\pr{r} \pr{\frac{r }{r_0 }} \pr{1 +\frac{|\log r_0|}{K}} e^{\pr{1 + C_1 K^\kappa + C_2 M^\mu} \pr{\phi\pr{5r}-\phi(r_0)}} \|u\|_{L^\infty(B_{2r_0})}, \end{align*} with $\displaystyle k_0 = \frac{\phi(5r)-\phi(\frac 4 3 r)}{\phi(5r)-\phi(r_0)}$ and $F_ \partialta\pr{r} = 1 + r K^{\frac{s}{s-2} + \partialta} + r M^{\frac{t}{2t-2}+ \partialta}$. On one hand, if ${\rm I} \leq \Pi$, we have \betagin{align*} &c \exp\brac{-C \pr{1 + C_1 K^\kappa + C_2 M^\mu} }\pr{1+\frac{|\log r|}{K}}^{-C} |\log r|^{-C} \le \ell \le 2 \Pi \\ &\le 2 C F_ \partialta\pr{r} \pr{\frac{r }{r_0 }} \pr{1 +\frac{|\log r_0|}{K}} e^{\pr{1 + C_1 K^\kappa + C_2 M^\mu} \pr{\phi\pr{5r}-\phi(r_0)}} \|u\|_{L^\infty(B_{2r_0})}. \end{align*} If $r_0 << r$, then $\phi\pr{r_0}-\pr{C + \phi\pr{5r}} \gtrsim \phi\pr{r_0}$. Since $r$ is a fixed small positive constant, we obtain \betagin{align*} \|u\|_{L^\infty(B_{2r_0})} &\ge c r_0^{C \pr{1 + C_1 K^\kappa + C_2 M^\mu} }, \end{align*} where now $c\pr{s, t, M, K, \hat C, \varepsilon}$ and $C$ is some universal constant. On the other hand, if $\Pi \leq {\rm I}$, then \betagin{align*} &c \exp\brac{-C \pr{1 + C_1 K^\kappa + C_2 M^\mu} }\pr{1+\frac{|\log r|}{K}}^{-C} |\log r|^{-C} \le \ell \le 2 {\rm I} \\ &\le 2 C F_ \partialta\pr{r} |\log r| \brac{(K+|\log r_0|) F_ \partialta\pr{r_0} \|u\|_{L^\infty(B_{2r_0})}}^{k_0} \brac{(K+|\log 10r|) F_ \partialta\pr{10 r} \|u\|_{L^\infty(B_{10 r})}}^{1 - k_0}. \end{align*} Raising both sides to $\frac{1}{k_0}$ and using that $\|u\|_{L^\infty\pr{B_{10r}}}\leq \hat{C}$ gives that \betagin{align*} \|u\|_{L^\infty(B_{2r_0})} &\ge \frac{\hat C}{|\log r_0|} \pr{\frac{c/2 C \hat C K}{\abs{\log r}^{1+C} \pr{K+|\log r|}^C F_ \partialta\pr{1}^{2} }}^{\frac 1 {k_0}} \exp\brac{-\frac{C}{k_0} \pr{1 + C_1 K^\kappa + C_2 M^\mu} }. \end{align*} Since $r$ is a fixed small positive constant, then $\frac{1}{k_0}\sigmameq\log \frac{1}{r_0}$ if $ r_0\ll r$. Finally, we arrive at \betagin{align*} \|u\|_{L^\infty(B_{2r_0})} &\ge C r_0^{C\pr{1 + C_1 K^\kappa + C_2 M^\mu}} \end{align*} as before. This completes the proof of Theorem \ref{thh}. \end{proof} \section{The proof of Theorem \ref{thhh}} \lambdabel{Thm3Proof} The proof of Theorem \ref{thhh} is in the same spirit as that of Theorem \ref{thh}. The main difference between these theorems is that Theorem \ref{thhh} has smaller values for $\mu$ than those in Theorem \ref{thh}. See the remark following the statement of Theorem \ref{thhh} for the specific comparison of powers. The improvement in Theorem \ref{thhh} comes from a slightly modified Carleman estimate for the operator $\Delta + V$ which is made possible by the absence of a gradient potential. \betagin{lemma} Assume that for some $t \in \pb{ 1, \ \infty}$, $\norm{V}_{L^t\pr{B_{R_0}}} \le M$. 
Then for every sufficiently small $\varepsilon > 0$, there exist constants $C_0$, $C_1$, and sufficiently small $R_0 < 1$ such that for any $u\in C^{\infty}_{0}(B_{R_0}(x_0)\setminus \set{x_0})$ and $$\tau \ge 1+ C_1 M^{\mu},$$ one has \betagin{align} \tau^{2 - \frac 1 p} \|(\log r)^{-1} e^{-\tau \phi(r)}u\|_{L^2(r^{-2}dx)} &\leq C_0 \|(\log r ) e^{-\tau \phi(r)} r^2\pr{ \Delta u + V u}\|_{L^p(r^{-2} dx)} , \lambdabel{main12} \end{align} where $\displaystyle \mu = \left\{\betagin{array}{ll} \frac{2t}{3t-2} & 2 < t \le \infty \\ \frac{t}{2t-2-\varepsilon\pr{2t-1-2\varepsilon}} & 1 < t \le 2 \end{array}\right.$, and $\displaystyle p = \left\{\betagin{array}{ll} \frac{2t}{t+2} & t > 2 \\ \frac{t}{t- \varepsilon} & 1 < t \le 2 \end{array}\right.$. Moreover, $C_0 = 2C$, where $C\pr{p, q, \varepsilon}$ is from Theorem \ref{Carlpq} with $ q = \frac{t}{t-1-\varepsilon}$ when $1 < t \le 2$, and $C_1= C_1\pr{t, \varepsilon}$. \lambdabel{CarlpqV} \end{lemma} \betagin{proof} Assume that $\varepsilon$ is sufficiently small. The same argument as \eqref{triIneq} from the proof of Lemma \ref{CarlpqVW} implies that \betagin{align} &\tau^{2 - \frac 1 p} \|(\log r)^{-1} e^{-\tau \phi(r)}u\|_{L^2(r^{-2}dx)} +\tau^{\frac 2 q \pr{1 - \varepsilon} + 1 - \frac 1 p} \|(\log r)^{-1} e^{-\tau \phi(r)}u\|_{L^q(r^{-2\pr{1 - \varepsilon}}dx)} \nonumber \\ &\le C\|(\log r) e^{-\tau \phi(r)} r^2 (\Delta u + V u)\|_{L^p(r^{-2} dx)} + C\|(\log r) e^{-\tau \phi(r)} r^2 V u \|_{L^p(r^{-2} dx)}. \lambdabel{triIneqV} \end{align} We proceed to discuss the cases for $t$ to obtain (\ref{main12}). \\ \noindent {\bf Case 1: $t \in \pb{2, \infty}$} \\ Set $p = \frac{2t}{t+2}$ so that $t=\frac{2p}{2-p}$. Substituting the estimate \eqref{hod3} in \eqref{triIneqV} yields that \betagin{align*} &\tau^{\frac 3 2 - \frac 1 t} \|(\log r)^{-1} e^{-\tau \phi(r)}u\|_{L^2(r^{-2}dx)} \\ &\le C\|(\log r) e^{-\tau \phi(r)} r^2 (\Delta u + V u)\|_{L^p(r^{-2} dx)} + c C M \|(\log r)^{-1} e^{-\tau \phi(r)} u\|_{L^2(r^{-2}dx)}. \end{align*} Taking $\tau \ge 1 + \pr{2 c C M}^{\frac{2t}{3t-2}}$, we are able to absorb the second term on the righthand side of the last inequality into the lefthand side. Then the estimate (\ref{main12}) follows. \\ \noindent {\bf Case 2: $t \in \pb{1, 2}$} \\ We optimize the power of $\tau$ by choosing $p$ very close to $1$. Set $p =\frac{1}{1-\frac{\varepsilon}{t}}$ and $q=\frac{pt}{t-p}$. Since $t \le 2$, then $q=\frac{t}{t-1-\varepsilon}>2$. And if $\varepsilon < t -1$, $q < \infty$ as well. We will use the estimate \eqref{hod33} to bound the last term in \eqref{triIneqV}. We have $$\frac{2}{q}(1-\varepsilon)+1-\frac{1}{p}=\frac{1-2\varepsilon}{p}-\frac{2(1-\varepsilon)}{t}+1=\frac{2t-2-\varepsilon\pr{2t-1-2\varepsilon}}{t}>0$$ if we further assume that $\varepsilon\pr{2t-1-2\varepsilon} < 2\pr{t-1}$. Substituting \eqref{hod33} into \eqref{triIneqV} and simplifying shows that \betagin{align*} &\tau^{1 + \frac{\varepsilon}{t}} \|(\log r)^{-1} e^{-\tau \phi(r)}u\|_{L^2(r^{-2}dx)} +\tau^{\frac{ 2t-2-\varepsilon\pr{2t-1-2\varepsilon}}{t}} \|(\log r)^{-1} e^{-\tau \phi(r)}u\|_{L^q(r^{-2\pr{1 - \varepsilon}}dx)} \nonumber \\ &\le C\|(\log r) e^{-\tau \phi(r)} r^2 (\Delta u + Vu)\|_{L^p(r^{-2} dx)} + c C M \|(\log r)^{-1} e^{-\tau \phi(r)} u\|_{L^q(r^{-2\pr{1 - \varepsilon}}dx)}. \end{align*} We may absorb the second term on the right if $\tau \ge 1 +\pr{c C M}^{\frac t {2t-2 -\varepsilon\pr{2t-1-2\varepsilon}}}$. This completes the proof of Lemma \ref{CarlpqV}. 
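We remark, for the reader's convenience, that the two branches of the exponent $\mu$ obtained above are consistent with one another: they agree at $t = 2$ in the limit of small $\varepsilon$, and the exponent tends to $\frac{2}{3}$ for bounded potentials, since $$\lim_{t \searrow 2} \frac{2t}{3t-2} = 1, \qquad \lim_{\varepsilon \to 0} \frac{2}{2\cdot 2 - 2 - \varepsilon\pr{2\cdot 2 - 1 - 2\varepsilon}} = 1, \qquad \lim_{t \to \infty} \frac{2t}{3t-2} = \frac{2}{3}.$$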
\end{proof} \betagin{proof}[Proof of Theorem \ref{thhh}] Using Lemma \ref{CarlpqV} in place of Lemma \ref{CarlpqVW}, we derive an $L^\infty$ version of the three-ball inequality like the one in Lemma \ref{threeBall} for the operator $\Delta + V$. Then by applying the propagation of smallness argument from the proof of Theorem \ref{thh}, we will arrive at the proof of Theorem \ref{thhh}. \end{proof} \section{Unique continuation at infinity} \lambdabel{QuantUC} In this section, using the scaling arguments established in \cite{BK05}, we show how the maximal order of vanishing estimates imply the quantitative unique continuation estimates at infinity. \betagin{proof}[Proof of Theorem \ref{UCVW}] Let $u$ be a solution to \eqref{goal} in $\ensuremath{\mathbb{R}}^2$. Fix $x_0 \in \ensuremath{\mathbb{R}}^2$ and set $\abs{x_0} = R$. We do a scaling as follows, $u_R(x) = u(x_0 + Rx)$, $W_R\pr{x} = R \, W\pr{x_0 + R x}$, and $V_R\pr{x} = R^2 V\pr{x_0 + R x}$. For any $r > 0$, \betagin{align*} \norm{W_R}_{L^s\pr{B_r\pr{0}}} &= \pr{\int_{B_r\pr{0}} \abs{W_R\pr{x}}^s dx}^{\frac 1 s} = \pr{\int_{B_r\pr{0}} \abs{R \, W\pr{x_0 + R x}}^s dx}^{\frac 1 s} \\ &= R^{1 - \frac 2 s } \pr{ \int_{B_r\pr{0}} \abs{W\pr{x_0 + R x}}^s d\pr{Rx} }^{\frac 1 s} = R^{1 - \frac 2 s } \norm{W}_{L^s\pr{B_{r R}\pr{x_0}}} \end{align*} and \betagin{align*} \norm{V_R}_{L^t\pr{B_r\pr{0}}} &= R^{2 - \frac 2 t } \pr{ \int_{B_r\pr{0}} \abs{V\pr{x_0 + R x}}^t d\pr{Rx} }^{\frac 1 t} = R^{2 - \frac 2 t } \norm{V}_{L^t\pr{B_{r R}\pr{x_0}}}. \end{align*} Therefore, $$\norm{W_R}_{L^s\pr{B_{10}\pr{0}}} = R^{1 - \frac 2 s} \norm{W}_{L^s\pr{B_{10R}\pr{x_0}}} \le A_1 R^{1 - \frac 2 s}$$ and $$\displaystyle \norm{V_R}_{L^t\pr{B_{10}\pr{0}}} \le A_2 R^{2 - \frac 2 t}.$$ Moreover, $u_R$ satisfies a scaled version of \eqref{goal} in $B_{10}$, \betagin{align*} & \Delta u_R\pr{x} + W_R\pr{x} \cdot \nabla u_R\pr{x} + V_R\pr{x} u_R\pr{x} \\ &= R^2 \Delta u\pr{x_0 + R x} + R^2 W\pr{x_0 + Rx} \cdot \nabla u\pr{x_0 + R x} + R^2 V\pr{x_0 + R x}u\pr{x_0 + R x} = 0. \end{align*} Clearly, \betagin{align*} \norm{u_R}_{L^\infty\pr{B_{6}}} &= \norm{u}_{L^\infty\pr{B_{6R}\pr{x_0}}} \le C_0. \end{align*} Note that for $\displaystyle\widetilde{x_0} := -x_0/R$, we have $\displaystyle \abs{\widetilde{x_0}} = 1$ and $\abs{u_R(\widetilde{x_0})} = \abs{u(0)} \ge 1$. Thus, $\displaystyle\norm{u_R}_{L^\infty(B_1)} \ge 1$. If $R$ is sufficiently large, then we may apply Theorem \ref{thh} to $u_R$ with some arbitrarily small $\varepsilon \in \pr{ 0, 1}$, $K = A_1 R^{1 - \frac 2 s}$, $M = A_2 R^{2 - \frac 2 t}$, and $\hat C = C_0$ to obtain \betagin{align*} \norm{u}_{L^\infty\pr{{B_{1}(x_0)}}} = & \norm{u_R}_{L^\infty\pr{B_{1/R}(0)}} \\ \ge & c(1/R)^{^{C\brac{1 + C_1 \pr{A_1 R^{1 - \frac 2 s}}^\kappa + C_2 \pr{A_2 R^{2 - \frac 2 t} }^\mu}}} \\ =& c \exp\set{-C\brac{1 + C_1 \pr{A_1 R^{1 - \frac 2 s}}^\kappa + C_2 \pr{A_2 R^{2 - \frac 2 t} }^\mu} \log R}, \end{align*} where $\kappa$ and $\mu$ depend on $s$, $t$, and $\varepsilon$. Further simplifying, we see that \betagin{align*} \norm{u}_{L^\infty\pr{{B_{1}(x_0)}}} \ge &c \exp\brac{-C\pr{1 + C_1 A_1^\kappa + C_2 A_2^\mu } R^\Pi \log R} \end{align*} where \betagin{align*} \Pi := \max\set{\kappa \pr{\frac {s-2} s}, \mu \pr{\frac {2t-2} t }}.
\end{align*} Recalling the values of $\kappa$ and $\mu$ from Theorem \ref{thh}, a computation shows that when $0 < \varepsilon < 1$, \betagin{align*} \Pi = \left\{\betagin{array}{ll} 2 & t > \frac{2s}{s+2}, \\ \frac{t(s-2)}{s(t-1-\varepsilon t)} & 1 < t \le \frac{2s}{s+2} \\ \end{array}\right. \end{align*} and the conclusion of the theorem follows. \end{proof} \betagin{proof}[Proof of Theorem \ref{UCV}] To prove Theorem \ref{UCV}, we follow the same approach as before and use that $$\mu \pr{\frac {2t-2} t} = \left\{\betagin{array}{ll} \frac{4t-4}{3t-2} & t > 2 \\ \frac{2t-2}{2t-2-\varepsilon\pr{2t-1-2\varepsilon}} & 1 < t \le 2, \end{array}\right.$$ where $\mu$ is given in the statement of Theorem \ref{thhh}. \end{proof} \renewcommand{A.\arabic{equation}}{A.\arabic{equation}} \setcounter{equation}{0} \section*{Appendix} In this section, we first present the proof of Lemma \ref{CarLpp} for the case $1<p<2$. The eigenfunction estimates in Lemma \ref{upDown} play an important role in the argument. Then we prove a quantitative Caccioppoli inequality in dimension $n=2$. \betagin{proof}[Proof of Lemma \ref{CarLpp}] We define a conjugated operator $L^-_\tau$ of $L^-$ by $$L^-_\tau u=e^{-\tau\varphi(t)}L^-(e^{\tau \varphi(t)}u).$$ With $v=e^{\tau \varphi(t)}u$, it is equivalent to prove \betagin{equation} \|t^{-{1}}u \|_{L^2(dtd\omegaega)}\leq C \tau^\betata \|t L^-_\tau u\|_{L^p(dtd\omegaega)}. \lambdabel{keyu} \end{equation} From the definition of $\Lambdambda$ and $L^-$ in \eqref{ord} and \eqref{use}, the operator $L^-_\tau$ can be written as \betagin{equation} L^-_\tau=\subsetm_{k\geq 0} (\partial_t+\tau \varphi'(t)-k)P_k. \lambdabel{ord1} \end{equation} Let $M=\lceil 2\tau\rceil$. Since $\displaystyle \subsetm_{k\geq 0} P_k v= v$, we split $\displaystyle \subsetm_{k \ge 0} P_k v$ into $$P^+_\tau=\subsetm_{k> M}P_k, \quad \quad P^-_\tau=\subsetm_{k=0}^{M}P_k.$$ Then \eqref{keyu} is reduced to show both the following inequalities \betagin{equation} \| t^{-1} P^+_\tau {u} \|_{L^2(dtd\omegaega)} \leq \tau^{\betata}\| t{L_\tau^- u} \|_{L^{{p}}(dtd\omegaega)} \lambdabel{key1} \end{equation} and \betagin{equation} \| t^{-1} P^-_\tau {u}\|_{L^2(dtd\omegaega)}\leq \tau^\betata\| t {L_\tau^- u} \|_{L^{{p}}(dtd\omegaega)} \lambdabel{key2} \end{equation} hold for all $u \in C^\infty_c\pr{(-\infty, \ t_0)\times S^{1}}$ and $1< p < 2$. We first establish \eqref{key1}. From \eqref{ord1} and properties of the projection operator $P_k$, it follows that \betagin{equation} P_k L^-_\tau u= (\partial_t+\tau \varphi'(t)-k)P_k u. \lambdabel{sord} \end{equation} For $u\in C^\infty_{0}\pr{ (-\infty, \ t_0)\times S^{1}}$, the solution $P_k u$ of this first order differential equation is given by \betagin{equation} P_k u(t, \omegaega) =-\int_{-\infty}^{\infty} H(s-t)e^{k(t-s)+\tau\brac{\varphi(s)-\varphi(t)}} P_k L^-_\tau u (s,\omegaega)\, ds, \lambdabel{star} \end{equation} where $H(z)=1$ if $z\geq 0$ and $H(z)=0$ if $z<0$. For $k\geq M \ge 2 \tau$, it can be shown that \betagin{equation*} H(s-t)e^{k(t-s)+\tau\brac{\varphi(s)-\varphi(t)}} \leq e^{-\frac{1}{2}k|t-s|} \end{equation*} for all $s, t\in (-\infty, \ t_0)$. Taking the $L^2\pr{S^{1}}$-norm in \eqref{star} yields that \betagin{equation*} \|P_k u(t, \cdot)\|_{L^2(S^{1})} \leq \int_{-\infty}^{\infty} e^{-\frac{1}{2}k|t-s|} \|P_k L^-_\tau u(s, \cdot)\|_{L^2(S^{1})} \,ds. 
\end{equation*} Applying the eigenfunction estimates \eqref{indu} gives that \betagin{equation*} \|P_k u(t, \cdot)\|_{L^2(S^{1})} \leq C \int_{-\infty}^{\infty} e^{-\frac{1}{2}k|t-s|} \|L^-_\tau u(s, \cdot)\|_{L^p(S^{1})} \,ds \end{equation*} for all $1\leq p\leq 2$. Furthermore, the application of Young's inequality for convolution yields that \betagin{equation*} \|P_k u\|_{L^2(dt d\omegaega)} \leq C \pr{\int_{-\infty}^{\infty} e^{-\frac{\sigmagma}{2}k|z|} dz}^{\frac{1}{\sigmagma}}\|L^-_\tau u\|_{L^p(dtd\omegaega)} \end{equation*} with $\frac{1}{\sigmagma}=\frac{3}{2}-\frac{1}{p}$. Therefore, \betagin{equation*} \|P_k u\|_{L^2(dt d\omegaega)} \leq C k^{\frac{1}{p} - \frac{3}{2}} \|L^-_\tau u\|_{L^p(dtd\omegaega)}, \end{equation*} where we have used the fact that $$ \pr{\int_{-\infty}^{\infty} e^{-\frac{\sigmagma}{2}k|z|} dz }^{\frac{1}{\sigmagma}}\leq C k^{\frac{1}{p} - \frac{3}{2}}. $$ Squaring and summing over $k> M$ shows that $$\subsetm_{k> M} \|P_k u\|^2_{L^2(dt d\omegaega)} \leq C \subsetm_{k> M} k^{\frac{2}{p} - 3} \|L^-_\tau u\|^2_{L^p(dtd\omegaega)}.$$ The sum $\displaystyle \subsetm_{k> M} k^{\frac{2}{p} - 3}$ converges if $p>1$ and diverges if $p=1$. In fact, since $M \ge 2\tau$, we have $\displaystyle \subsetm_{k> M} k^{\frac{2}{p} - 3} \le C M^{\frac{2}{p}-2} \le C \tau^{\frac{2}{p}-2}$ whenever $p>1$. Therefore, \betagin{equation*} \| P^+_\tau u\|_{L^2(dtd\omegaega)} \leq C\tau^{\frac{1-p}{p}}\| L^-_\tau u\|_{L^p(dtd\omegaega)}, \end{equation*} which implies estimate \eqref{key1} since $|t|\geq |t_0|$, where $\abs{t_0}$ is large. Set $N=\lceil\tau \varphi^\primeme(t)\rceil$. Recall that $\varphi(t)=t+\log t^2$. By Taylor's theorem, for all $s, t \in (-\infty, \ t_0)$, we have \betagin{align} \varphi(s)-\varphi(t) & = \varphi'(t)(s-t)-\frac{1}{(s_0)^2}(s-t)^2, \lambdabel{Taylor} \end{align} where $s_0$ is some number between $s$ and $t$. If $s>t$, then $$S_k(s, t) = e^{k(t-s) + \tau\brac{\varphi\pr{s} - \varphi\pr{t}}} \leq e^{-(k-\tau\varphi'(t))(s-t)-\frac{\tau}{t^2}(s-t)^2}.$$ Hence \betagin{equation} H(s-t) S_k(s, t)\leq e^{-|k- N ||s-t|-\frac{\tau}{t^2}(s-t)^2}. \lambdabel{case1} \end{equation} Next, we consider the case $N\leq k\leq M$. The summation of \eqref{star} over $k$ shows that \betagin{equation} \| \subsetm^M_{k=N} P_k u(t, \cdot)\|_{L^2(S^{1})} \leq \int_{-\infty}^{\infty} \| \subsetm^M_{k=N} H(s-t)S_k(s,t) P_k L^-_\tau u(s, \cdot) \|_{L^2(S^{1})}\, ds. \lambdabel{back} \end{equation} Let $c_k= H(s-t)S_k(s,t)$. It is clear that $|c_k|\leq 1$. Now we make use of Lemma \ref{upDown}. An application of estimate \eqref{haha} shows that for all $1< p < 2$ \betagin{align} & \|\subsetm^M_{k=N}H(s-t)S_k(s,t) P_k L^-_\tau u(s, \cdot)\|_{L^2(S^{1})}\leq \nonumber \\ & C \pr{\subsetm^M_{k=N} H(s-t)|S_k(s,t)|^2}^{\frac{1}{p}-\frac{1}{2}} \|L^-_\tau u(s, \cdot) \|_{L^p(S^{1})}. \lambdabel{mixSum} \end{align} From \eqref{case1}, we have \betagin{align} \subsetm^M_{k=N} H(s-t)|S_k(s,t)|^2 &\leq \pr{\subsetm^M_{k=N+1} e^{-2|k- N||s-t|}+1}e^{ -\frac{2\tau}{t^2}(s-t)^2} \nonumber \\ & \leq C \pr{ \frac{1}{|s-t|}+1} e^{ -\frac{\tau}{t^2}(s-t)^2} . \lambdabel{sumBnd} \end{align} Therefore, the last two inequalities imply that \betagin{align*} \|\subsetm^M_{k=N}H(s-t)S_k(s,t) P_k L^-_\tau u(s,\cdot)\|_{L^2(S^{1})} & \leq C (|s-t|^{-\alphapha_2}+1)e^{-\frac{\alphapha_2\tau}{t^2}(s-t)^2}\| L^-_\tau u(s, \cdot) \|_{L^p(S^{1})}, \end{align*} where $\alphapha_2=\frac{(2-p)}{2p}$. It can be shown that \betagin{equation} e^{-\frac{\alphapha_2\tau}{t^2}(s-t)^2}\leq C|t|\pr{1+\sqrt{\tau}|s-t|}^{-1}.
\lambdabel{expBnd} \end{equation} Thus, $$ \|\subsetm^M_{k=N}H(s-t)S_k(s,t) P_k L^-_\tau u(s,\cdot)\|_{L^2(S^{1})} \leq \frac{C|t|(|s-t|^{-\alphapha_2}+1)\| L^-_\tau u(s, \cdot) \|_{L^p(S^{1})}}{1+\sqrt{\tau}|s-t|}.$$ It follows from \eqref{back} that \betagin{equation} |t|^{-1}\| \subsetm^M_{k=N} P_k u(t, \cdot)\|_{L^2(S^{1})} \leq C \int_{-\infty}^{\infty}\frac{(|s-t|^{-\alphapha_2}+1)\| L^-_\tau u(s, \cdot) \|_{L^p(S^{1})}}{1+\sqrt{\tau}|s-t|}. \lambdabel{mile} \end{equation} For the case $k\leq N-1$, solving the first order differential equation \eqref{sord} gives that \betagin{equation} P_k u(t, \omegaega)=\int_{-\infty}^{\infty} H(t-s)S_k(s, t) P_k L^-_\tau u\pr{s, \omega}\, ds \lambdabel{star1}. \end{equation} The estimate \eqref{Taylor} shows that for any $s, t$ \betagin{equation} H(t-s) S_k(s, t)\leq e^{-|N- 1 - k ||s-t|-\frac{\tau}{s^2}(t-s)^2}. \lambdabel{case2} \end{equation} Using \eqref{case2} and performing the calculation as before, we conclude that \betagin{equation} \| \subsetm_{k=0}^{N-1} P_k u(t, \cdot)\|_{L^2(S^{1})} \leq C \int_{-\infty}^{\infty} \frac{|s|(|s-t|^{-\alphapha_2}+1)\| L^-_\tau u(s, \cdot) \|_{L^p(S^{1})}}{1+\sqrt{\tau}|s-t|}. \lambdabel{mile1} \end{equation} Since $s,t $ are in $(-\infty, \ t_0)$ with $|t_0|$ large enough, the combination of estimates \eqref{mile} and \eqref{mile1} gives \betagin{equation*} |t|^{-1}\| P_\tau^- u(t, \cdot)\|_{L^2(S^{1})} \leq C \int_{-\infty}^{\infty} \frac{|s|(|s-t|^{-\alphapha_2}+1)\| L^-_\tau u(s, \cdot) \|_{L^p(S^{1})}}{1+\sqrt{\tau}|s-t|}. \end{equation*} Applying Young's inequality for convolution, we obtain \betagin{equation*} \| t^{-1} P_\tau^- u (t, \cdot) \|_{L^2(dtd\omegaega)} \leq C \brac{\int_{-\infty}^{\infty} \pr{\frac{ |z|^{-\alphapha_2}+1}{1+\sqrt{\tau}|z|}}^\sigma \, dz}^{\frac{1}{\sigma}} \| tL^-_\tau u \|_{L^p(dtd\omegaega)}, \end{equation*} where $\frac{1}{\sigma}=\frac{3}{2}-\frac{1}{p}$. Therefore, \betagin{equation*} \| t^{-1} P_\tau^- u (t, \cdot) \|_{L^2(dtd\omegaega)}\leq C\tau^{-\frac{1}{2\sigma}+\frac{\alphapha_2}{2}} \| t L^-_\tau u \|_{L^p(dtd\omegaega)}, \end{equation*} where we have used the fact $$\brac{\int_{-\infty}^{\infty} \pr{\frac{ |z|^{-\alphapha_2}+1} {1+\sqrt{\tau}|z|}}^\sigma \, dz}^{\frac{1}{\sigma}} \leq C\tau^{-\frac{1}{2\sigma}+\frac{\alphapha_2}{2}}$$ with $\alpha_2 \in \pr{0, \frac 1 2}$ and $\sigma \in \pr{1, 2}$. This completes \eqref{key2} since $-\frac{1}{2\sigma}+\frac{\alphapha_2}{2}=\frac{1-p}{p}$. Finally, the proof is complete. \end{proof} We state and prove a Caccioppoli inequality for the second order elliptic equation (\ref{goal}) with singular lower order terms. Because our lower order terms are assumed to be singular, we must employ a Sobolev embedding in $\ensuremath{\mathbb{R}}^2$, and this forces the right hand side to be relatively larger than it was in Lemma 5 from \cite{DZ17}, the corresponding result for $n \ge 3$. \betagin{lemma} Assume that for some $s \in \pb{2, \infty}$ and $t \in \pb{ 1, \infty}$, $\norm{W}_{L^s\pr{B_{R}}} \le K$ and $\norm{V}_{L^t\pr{B_{R}}} \le M$. Let $u$ be a solution to equation \eqref{goal} in $B_R$. Then for any $ \partialta > 0$, there exists a constant $C$, depending only on $s$, $t$ and $ \partialta$, such that for any $r<R$, \betagin{equation*} \|\nabla u\|^2_{L^2(B_r)}\leq C\brac{\frac{1}{(R-r)^2}+M^{\frac{t}{t-1}+ \partialta} +K^{\frac{2s}{s-2} + \partialta }}\| u\|^2_{L^2(B_R)}. \end{equation*} \lambdabel{CaccLem} \end{lemma} \betagin{proof} We start by decomposing $V$ and $W$ into bounded and unbounded parts. 
For some $M_0, K_0$ to be determined, let $$V(x)=\overline{V}_{M_0}+V_{M_0}, \quad W(x)=\overline{W}_{K_0}+W_{K_0},$$ where $${\overline{V}}_{M_0}=V(x)\chi_{\set{|V(x)|\leq \sqrt{M_0}}}, \quad {V}_{M_0}=V(x)\chi_{\set{|V(x)|>\sqrt{M_0}}},$$ and $${\overline{W}}_{K_0}=W(x)\chi_{\set{|W(x)|\leq \sqrt{K_0}}}, \quad {W}_{K_0}=W(x)\chi_{\set{|W(x)|>\sqrt{K_0}}}.$$ For any $q \in \brac{1, t}$, \betagin{equation} \|V_{M_0}\|_{L^q}\leq M_0^{-\frac{t-q}{2q}}\|V_{M_0}\|_{L^{t}}^{\frac{t}{q}}\leq M_0^{-\frac{t-q}{2q}}\|V\|_{L^{t}}^{\frac{t}{q}} \leq M_0^{-\frac{t-q}{2q}}M^{\frac{t}{q}}. \lambdabel{who} \end{equation} Similarly, for any $q \in \brac{1, s}$, we have \betagin{equation} \|W_{K_0}\|_{L^q}\leq K_0^{-\frac{s-q}{2q}}\|W_{K_0}\|_{L^{s}}^{\frac{s}{q}}\leq K_0^{-\frac{s-q}{2q}}\|W\|_{L^{s}}^{\frac{s}{q}} \leq K_0^{-\frac{s-q}{2q}}K^{\frac{s}{q}}. \lambdabel{mmo} \end{equation} Let $B_R\subsetbset B_1$. Choose a smooth cut-off function $\eta \in C^\infty_0\pr{B_R}$ such that $\eta(x) \equiv 1$ in $B_r$ and $|\nabla \eta|\leq \frac{C}{|R-r|}$. Multiplying both sides of equation (\ref{goal}) by $\eta^2 u$ and integrating by parts, we obtain \betagin{equation} \int |\nabla u|^2 \eta^2 = \int V\eta^2 u^2 +\int W\cdot \nabla u \, \eta^2 u - 2\int \nabla u\cdot\nabla \eta \, \eta \, u. \lambdabel{cacc} \end{equation} We estimate the terms on the right side of \eqref{cacc}. For the first term, we see that \betagin{equation*} \int V \eta^2 u^2 \leq \int |{\overline{V}_{M_0}}|\eta^2 u^2 + \int |{{V}_{M_0}}|\eta^2 u^2. \end{equation*} It is clear that \betagin{equation} \int |{\overline{V}_{M_0}}|\eta^2 u^2\leq M_0^{\frac{1}{2}}\int \eta^2 u^2. \lambdabel{sss} \end{equation} Fix $ \partialta > 0$. Let $ \partialta_0 = \partialta \frac{\pr{t-1}^2}{t+ \partialta\pr{t-1}}$. By H\"older's inequality, \eqref{who} with $q = 1 + \partialta_0$, and Sobolev embedding with $2q^\primeme := 2\pr{1 + \partialta_0^{-1}} > 2$, we get \betagin{align} \abs{\int {V}_{M_0}\eta^2 u^2 } &\leq \pr{\int|{{V}_{M_0}}|^{q}}^{\frac1{q}} \pr{\int \abs{\eta^2u^2}^{q^\primeme}}^{\frac{1}{q^\primeme}} \le C_{ \partialta_0} {M_0}^{-\frac{t-q}{2q}}M^{\frac{t}{q}} \int |\nabla (\eta u)|^2. \lambdabel{mmm} \end{align} Taking $C_{ \partialta_0} {M_0}^{-\frac{t-q}{2q}}M^{\frac{t}{q}}=\frac 1 {16}$, i.e $M_0 = C_{ \partialta_0, t} M^{\frac{2t}{t-q}}$, from (\ref{sss}) and (\ref{mmm}), we get \betagin{eqnarray} \int V \eta^2 u^2 & \leq & C M^{\frac{t}{t-q}}\int |\eta u|^2+\frac{1}{16}\int |\nabla (\eta u)|^2 \nonumber \\ & \leq &C M^{\frac{t}{t-1} + \partialta}\int |\eta u|^2+\frac{1}{8}\int |\nabla \eta|^2 u^2 +\frac{1}{8}\int |\nabla u|^2 \eta^2. \lambdabel{more1} \end{eqnarray} Now we estimate the second term in the righthand side of (\ref{cacc}). We have that \betagin{equation} \int W \cdot \nabla u \eta^2 u = \int |{\overline{W}_{K_0}}| |\nabla u| \eta^2 u + \int |{W}_{K_0}| | \nabla u| \eta^2 u. \lambdabel{w1} \end{equation} By Young's inequality for products, \betagin{equation} \int |{\overline{W}_{K_0}}| |\nabla u| \eta^2 u \leq \frac{1}{8} \int |\nabla u|^2\eta^2 +CK_0 \int \eta^2 u^2. \lambdabel{w2} \end{equation} This time, set $ \partialta_0 = \partialta \frac{\pr{s-2}^2}{2s+ \partialta\pr{s-2}}$. 
By H\"older's inequality, \eqref{who} with $q = 2 + \partialta_0$, and Sobolev embedding with $q^\primeme = 2\pr{1 + 2 \partialta_0^{-1}} > 2$, we get \betagin{eqnarray} \int {W}_{K_0} \cdot \nabla u \eta^2 u &\leq & \pr{\int |{{W}_{K_0}}|^{{q}}}^{\frac{1}{q}} \pr{\int |\nabla u\cdot \eta |^{2}}^{\frac{1}{2}} \pr{\int |u \eta |^{q^{\primeme}}}^{\frac{1}{q^\primeme}} \nonumber \\ & \leq &C_{ \partialta_0} K_0^{-\frac{s-q}{2q}}K^{\frac{s}{q}}\||\nabla u|\eta\|_{L^2} \|\nabla(\eta u)\|_{L^2} \nonumber \\ & \leq &C_{ \partialta_0} K_0^{-\frac{s-q}{2q}}K^{\frac{s}{q}}\||\nabla u|\eta\|_{L^2}\pr{\||\nabla\eta | u \|_{L^2} + \||\nabla u|\eta\|_{L^2} }\nonumber \\ &\leq & 2 C_{ \partialta_0} K_0^{-\frac{s-q}{2q}}K^{\frac{s}{2}}\||\nabla u|\eta|\|^2_{L^2} + \frac 1 2 C_{ \partialta_0} K_0^{-\frac{s-q}{2q}}K^{\frac{s}{2}} \||\nabla\eta | u \|_{L^2}^2. \lambdabel{w3} \end{eqnarray} We choose $C_{ \partialta_0} K_0^{-\frac{s-q}{2q}}K^{\frac{s}{q}}=\frac 1 {16}$, that is, $K_0=C_{ \partialta_0, s} K^{\frac{2s}{s-q}}$. The combination of (\ref{w1}), (\ref{w2}) and (\ref{w3}) gives that \betagin{eqnarray} \int W \cdot \nabla u \, \eta^2 u &\leq& \frac{1}{4} \||\nabla u| \eta\|^2_{L^2}+CK^{\frac{2s}{s-2} + \partialta}\|u\eta\|^2_{L^2} +\frac{1}{32}\||\nabla \eta| u\|^2_{L^2} . \lambdabel{more2} \end{eqnarray} Finally, Young's inequality for products implies that \betagin{align*} 2\int \nabla u\cdot\nabla \eta \, \eta \, u \le \frac 1 8 \||\nabla u| \eta\|^2_{L^2} + 8 \||\nabla \eta| u \|^2_{L^2} \end{align*} Together with (\ref{cacc}), (\ref{more1}) and (\ref{more2}), we obtain \betagin{equation*} \int |\nabla u|^2 \eta^2 \leq C \pr{M^{\frac{t}{t-1} + \partialta} +K^{\frac{2s}{s-2} + \partialta}} \int |u\eta|^2\,dx+ C \int |\nabla \eta|^2u^2\,dx. \end{equation*} From the assumptions on $\eta$, this completes the proof in the lemma. \end{proof} \partialtaf$'${$'$} \betagin{thebibliography}{BKRS88} \bibitem[ABG81]{ABG81} W.~O. Amrein, A.-M. Berthier, and V.~Georgescu. \newblock {$L^{p}$}-inequalities for the {L}aplacian and unique continuation. \newblock {\em Ann. Inst. Fourier (Grenoble)}, 31(3):vii, 153--168, 1981. \bibitem[Ale12]{Ale12} Giovanni Alessandrini. \newblock Strong unique continuation for general elliptic equations in 2{D}. \newblock {\em J. Math. Anal. Appl.}, 386(2):669--676, 2012. \bibitem[Bak12]{Bak12} Laurent Bakri. \newblock Quantitative uniqueness for {S}chr{\"o}dinger operator. \newblock {\em Indiana Univ. Math. J.}, 61(4):1565--1580, 2012. \bibitem[BK05]{BK05} Jean Bourgain and Carlos~E. Kenig. \newblock On localization in the continuous {A}nderson-{B}ernoulli model in higher dimension. \newblock {\em Invent. Math.}, 161(2):389--426, 2005. \bibitem[BKRS88]{BKRS88} B.~Barcel\'o, C.~E. Kenig, A.~Ruiz, and C.~D. Sogge. \newblock Weighted {S}obolev inequalities and unique continuation for the {L}aplacian plus lower order terms. \newblock {\em Illinois J. Math.}, 32(2):230--245, 1988. \bibitem[DF88]{DF88} Harold Donnelly and Charles Fefferman. \newblock Nodal sets of eigenfunctions on {R}iemannian manifolds. \newblock {\em Invent. Math.}, 93(1):161--183, 1988. \bibitem[DZ17]{DZ17} Blair Davey and Jiuyi Zhu. \newblock Quantitative uniqueness of solutions to second order elliptic equations with singular lower order terms. \newblock arXiv:1702.04742, 2017. \bibitem[GT01]{GT01} David Gilbarg and Neil~S. Trudinger. \newblock {\em Elliptic partial differential equations of second order}. \newblock Classics in Mathematics. Springer-Verlag, Berlin, 2001. \newblock Reprint of the 1998 edition. 
\bibitem[HL11]{HL11} Qing Han and Fanghua Lin. \newblock {\em Elliptic partial differential equations}, volume~1 of {\em Courant Lecture Notes in Mathematics}. \newblock Courant Institute of Mathematical Sciences, New York, second edition, 2011. \bibitem[Jer86]{Jer86} David Jerison. \newblock Carleman inequalities for the {D}irac and {L}aplace operators and unique continuation. \newblock {\em Adv. in Math.}, 62(2):118--134, 1986. \bibitem[JK85]{JK85} David Jerison and Carlos~E. Kenig. \newblock Unique continuation and absence of positive eigenvalues for {S}chr\"odinger operators. \newblock {\em Ann. of Math. (2)}, 121(3):463--494, 1985. \newblock With an appendix by E. M. Stein. \bibitem[KN00]{KN00} Carlos~E. Kenig and Nikolai Nadirashvili. \newblock A counterexample in unique continuation. \newblock {\em Math. Res. Lett.}, 7(5-6):625--630, 2000. \bibitem[KT02]{KT02} Herbert Koch and Daniel Tataru. \newblock Sharp counterexamples in unique continuation for second order elliptic equations. \newblock {\em J. Reine Angew. Math.}, 542:133--146, 2002. \bibitem[KT16]{KT16} Abel Klein and C.~S.~Sidney Tsang. \newblock Quantitative unique continuation principle for {S}chr{\"o}dinger operators with singular potentials. \newblock {\em Proc. Amer. Math. Soc.}, 144(2):665--679, 2016. \bibitem[Kuk98]{Kuk98} Igor Kukavica. \newblock Quantitative uniqueness for second-order elliptic operators. \newblock {\em Duke Math. J.}, 91(2):225--240, 1998. \bibitem[KW15]{KW15} Carlos Kenig and Jenn-Nan Wang. \newblock Quantitative uniqueness estimates for second order elliptic equations with unbounded drift. \newblock {\em Math. Res. Lett.}, 22(4):1159--1175, 2015. \bibitem[Man02]{Man02} Niculae Mandache. \newblock A counterexample to unique continuation in dimension two. \newblock {\em Comm. Anal. Geom.}, 10(1):1--10, 2002. \bibitem[Mes92]{Mes92} V.~Z. Meshkov. \newblock On the possible rate of decay at infinity of solutions of second order partial differential equations. \newblock {\em Math USSR SB.}, 72:343--361, 1992. \bibitem[Reg99]{Reg99} R.~Regbaoui. \newblock Unique continuation for differential equations of {S}chr{\"o}dinger's type. \newblock {\em Comm. Anal. Geom.}, 7(2):303--323, 1999. \bibitem[Sog86]{Sog86} Christopher~D. Sogge. \newblock Oscillatory integrals and spherical harmonics. \newblock {\em Duke Math. J.}, 53(1):43--65, 1986. \bibitem[SS80]{SS80} M.~Schechter and B.~Simon. \newblock Unique continuation for {S}chr\"odinger operators with unbounded potentials. \newblock {\em J. Math. Anal. Appl.}, 77(2):482--492, 1980. \bibitem[Zhu16]{Zhu16} Jiuyi Zhu. \newblock Quantitative uniqueness of elliptic equations. \newblock {\em Amer. J. Math.}, 138(3):733--762, 2016. \end{thebibliography} \end{document}
\begin{document} \title{Compactly supported solution of the time-fractional porous medium equation on the half-line} \begin{abstract} In this work we prove that the time-fractional porous medium equation on the half-line with Dirichlet boundary condition has a unique compactly supported solution. Our approach is based on a transformation of the fractional integro-differential equation into a nonlinear Volterra integral equation. Then, the shooting method is applied in order to facilitate the analysis of the free-boundary problem. We further show that there exists exactly one choice of initial conditions for which the solution has a zero that guarantees the no-flux condition. Together with our previous considerations, this yields the unique solution of the original problem.\\ \noindent\textbf{Keywords}: fractional derivative, porous medium equation, compact support, existence, uniqueness \end{abstract} \section{Introduction} In the last few decades interest in fractional calculus has increased significantly. This can be seen, for instance, in the profound development of the theoretical aspects of this branch and in its utility as a powerful tool in modelling many physical phenomena. A good example is anomalous diffusion, which has been deeply explored recently \cite{Met00}. Many experiments have been performed and their results indicate either a sub- or superdiffusive character of various processes. For instance, apart from its emergence in biomechanical transport \cite{Lev97,Nos83} and condensed matter physics \cite{Ott90}, anomalous diffusion is also present in the percolation of some porous media \cite{El04,Kun01,Ram08}. For us, the relevant physical experiment is based on moisture imbibition in certain materials. For some specimens water diffuses at a slower pace than in the classical situation and the usual porous medium equation is not adequate to model this correctly \cite{El04,De06}. To deal with this problem an approach based on modelling the waiting time distribution has been proposed, and a time-fractional porous medium equation has been introduced to successfully describe the subdiffusive version of the process \cite{Ger10,Pac03,Sun13}. In our previous works \cite{plociniczak13,plociniczak14,plociniczak15} we stated the main assumptions of the model and proved a number of its mathematical properties. By $u=u(x,t)$ we denote the (nondimensional) moisture concentration at a point $x\geq 0$ and time $t\geq 0$. We consider the following nonlocal PDE \begin{equation} \partial^\alpha_t u = \left(u^m u_x\right)_x, \quad 0<\alpha< 1, \quad m>1, \label{eqn:DiffEqPDE} \end{equation} where the temporal derivative is of the Riemann-Liouville type \begin{equation} \partial^\alpha_t u(x,t) = \frac{1}{\Gamma(1-\alpha)}\frac{\partial}{\partial t}\int_0^t (t-s)^{-\alpha} u(x,s) ds. \end{equation} In \cite{plociniczak15} the reader can find the derivation of the above equation as a consequence of the trapping phenomenon. The initial-boundary conditions are as follows \begin{equation} u(x,0) = 0, \quad u(0,t) = 1, \quad x>0, \quad t>0, \label{eq:CondPDE} \end{equation} which models a one-dimensional semi-infinite medium with the interface kept in contact with water. Since equation (\ref{eqn:DiffEqPDE}) is degenerate at $u=0$ it is natural to expect that its solution has compact support for at least some values of $m$ (as in the classical case \cite{Atk71}). This has a straightforward physical meaning, namely that the wetting front propagates at a finite speed.
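For orientation, we record an elementary illustration of the Riemann-Liouville derivative defined above (a standard computation, not used in the sequel): for a power function one has
\begin{equation}
\partial^\alpha_t t^{\mu} = \frac{1}{\Gamma(1-\alpha)}\frac{\partial}{\partial t}\int_0^t (t-s)^{-\alpha} s^{\mu}\, ds = \frac{\Gamma(\mu+1)}{\Gamma(\mu+1-\alpha)}\, t^{\mu-\alpha}, \quad \mu>-1,
\end{equation}
so that, in contrast with the classical derivative, $\partial^\alpha_t$ does not annihilate constants: $\partial^\alpha_t 1 = t^{-\alpha}/\Gamma(1-\alpha)$.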
In the previous work \cite{ploswi} we proved that (\ref{eqn:DiffEqPDE}) possesses a unique weak compactly supported solution in the class of functions that are sufficiently smooth almost everywhere and have a zero at some point. This paper presents a result that weakens those assumptions and leaves only the requirement of physically motivated boundedness. \section{Main result} In this work we will consider the equation of anomalous diffusion in the self-similar form (for more details see \cite{plociniczak14,plociniczak15}). The transformation leading to it is obtained by putting $\eta = x t^{-\alpha/2}$ for some $0<\alpha\leq 1$ and writing $u(x,t)=U(x t^{-\alpha/2})$. In the previous work \cite{ploswi} we proved the existence and uniqueness of the weak compactly supported solution of the resulting transformed problem \begin{equation}\label{eq:1} (U^m U')'=\left [(1-\alpha)-\tfrac{\alpha}{2}\eta \tfrac{\,d }{\,d \eta}\right]I^{0,1-\alpha}_{-\tfrac{2}{\alpha}}U(\eta), \quad U(0)=1, \quad U(\infty)=0, \quad m>1, \end{equation} under the {\it{a-priori}} assumption that this solution has a zero, i.e. $U(\eta^*)=0$ for some $\eta^* > 0$. Here, the Erd\'elyi-Kober operator $I^{a,b}_c$ (see \cite{kir93,kir97,sneddon}) is defined by \begin{equation} I^{a,b}_c U(\eta) = \frac{1}{\Gamma(b)}\int_0^1 (1-s)^{b-1}s^a U(s^\frac{1}{c}\eta)ds. \label{eqn:EK} \end{equation} The main result of this work is to prove that such an $\eta^{*}$ exists under the assumption that $U$ is bounded. To ensure the uniqueness of $\eta^{*}$ an auxiliary condition needs to be imposed, namely \begin{equation} \lim\limits_{\eta\to\eta^-_*}-U(\eta)^m U'(\eta)=0. \label{co:1} \end{equation} Physically, this is simply the no-flux requirement through the wetting front and can be derived from the equation alone (see \cite{ploswi}). In what follows we will consider $\alpha$ and $m$ to be fixed constants. To prove our statement we will consider the following auxiliary problem \begin{equation}\label{eq:2} (U^m U')'=\left [(1-\alpha)-\tfrac{\alpha}{2}\eta \tfrac{\,d }{\,d \eta}\right]I^{0,1-\alpha}_{-\tfrac{2}{\alpha}}U(\eta), \quad U(0)=1, \quad U'(0)=-\beta, \quad m>1, \end{equation} where $\beta$ is a positive constant. Our technique is based on the fact that the above integro-differential problem can be transformed into a Volterra equation. \begin{prop} If $U(\eta)$ is a continuous solution of \begin{equation}\label{eq:3} U(\eta)^{m+1}=1+(m+1)\left[-\beta \eta+\int\limits_{0}^{\eta}((1-\tfrac{\alpha}{2})\eta-z)I^{0,1-\alpha}_{-\tfrac{2}{\alpha}}U(z)\,d z\right], \end{equation} then it is twice differentiable and is a solution of \eqref{eq:2}. \end{prop} \begin{proof} Assume that $U(\eta)$ is a solution of \eqref{eq:3}. The right-hand side of \eqref{eq:3} is $C^2$ since $I^{0,1-\alpha}_{-\frac{2}{\alpha}} U$ is continuous. Then, the left-hand side is also $C^2$ since $m>1$. Differentiating equation \eqref{eq:3} twice, we obtain \eqref{eq:2}. Thus $U(\eta)$ is also a solution of \eqref{eq:2}. \end{proof} First we prove that a solution of \eqref{eq:3} exists. To do this we need to introduce the function space in which we will operate, \begin{equation} X:=\{U\in C[0,\infty]:0\leq U\leq 1\}, \end{equation} with the uniform norm, i.e. $\left\|U\right\|=\sup_{0\leq t\leq \infty}|U(t)|$. Next, we introduce the function space $M$, \begin{equation} M:=\{U\in C[0,\infty]:U(0)=1,0\leq U\leq 1\}, \end{equation} which is a subspace of $X$ and in which the solution of \eqref{eq:3} will be sought.
It is easy to see that $X$ is a Banach space ($X$ is a closed subspace of $\mathcal{B}[0,\infty]$, the space of bounded functions). For the same reason $M$ is also a Banach space. Moreover, we can make the following simple observation. \begin{prop} \label{prop:M} The subspace $M\subset X$ is bounded and convex. \end{prop} \begin{proof} Let $\gamma\in (0,1)$, $u,v\in M$ and introduce the function $w(x)=\gamma u(x)+(1-\gamma) v(x)$. From the definition of $M$ we know that $u(0)=1$, $v(0)=1$ and $0\leq u\leq 1$, $0\leq v\leq 1$. As a linear combination of continuous functions, $w$ is also continuous. Next, since $u(0)=1$ and $v(0)=1$, we have $w(0)=\gamma u(0)+(1-\gamma)v(0)=\gamma +(1-\gamma)=1$. The last property to verify is the bound \begin{equation} \gamma\cdot 0+(1-\gamma)\cdot 0=0\leq w=\gamma u+(1-\gamma) v\leq \gamma+1-\gamma=1. \end{equation} Hence $w\in M$, which implies that $M$ is convex. The boundedness of $M$ is immediate, since $\left\|U\right\|\leq 1$ for every $U\in M$. \end{proof} In order to state the main result we have to construct an appropriate integral operator and show that it possesses a fixed point. First, define the auxiliary operator $S_\beta:M\rightarrow M$ (well-definedness will be established in the following lemma) by \begin{equation} S_\beta(Y)(\eta)=1+(m+1)\left[-\beta \eta+\int\limits_{0}^{\eta}((1-\tfrac{\alpha}{2})\eta-z)I^{0,1-\alpha}_{-\tfrac{2}{\alpha}}Y(z)^{1/(1+m)}\,d z\right]. \end{equation} Then equation \eqref{eq:3} is equivalent to the fixed-point problem for $S_\beta$, \begin{equation} S_\beta(Y) = Y, \end{equation} which can be seen by the substitution $Y = U^{m+1}$. Now, we can prove several important properties of $S_\beta$. \begin{lem} \label{lem:S} Assume that \begin{equation} \beta\geq\frac{2-\alpha}{\sqrt{2\Gamma (2-\alpha)(m+1)}}=:\beta_0. \end{equation} Then for $Y\in M$ the following hold. \begin{enumerate} \item (\textit{Existence of a zero}) There exists $\eta^*(\beta)$ such that $S_\beta(Y)(\eta^*(\beta)) = 0$. Moreover, \begin{equation} \eta_2(\beta)\leq \eta^*(\beta)\leq \eta_1(\beta), \end{equation} where \begin{equation} \begin{split} \eta_1(\beta)&:=\frac{ 4 \beta \Gamma (2-\alpha )}{(2-\alpha)^2}-\frac{2\sqrt{2} \sqrt{2 \beta ^2 (m+1) \Gamma (2-\alpha )^2-(2-\alpha) \Gamma (3-\alpha)}}{(2-\alpha)^2\sqrt{m+1}}, \\ \eta_2(\beta)&:=\frac{2 \sqrt{2} \sqrt{\Gamma (2-\alpha ) \left(\alpha ^2+2 \beta ^2 (m+1) \Gamma (2-\alpha )\right)}}{\alpha ^2\sqrt{m+1}}-\frac{4 \beta \Gamma (2-\alpha )}{\alpha ^2}.\\ \end{split} \end{equation} \item (\textit{Estimates}) We have the following estimates \begin{equation} g_2(\eta,\beta) \leq S_\beta(Y)(\eta) \leq g_1(\eta,\beta), \end{equation} where \begin{equation} \begin{split} g_1(\eta,\beta)&:=1+(m+1)\left( -\beta \eta+\frac{(2-\alpha)^2}{\Gamma(2-\alpha)}\frac{\eta^2}{8}\right), \\ g_2 (\eta,\beta)&:= 1+(m+1)\left( -\beta \eta-\frac{\alpha ^2}{\Gamma(2-\alpha)}\frac{\eta^2}{8}\right). \end{split} \end{equation} \item (\textit{Range}) The operator $S_\beta$ maps $M$ into itself. \end{enumerate} \end{lem} \begin{proof} Let us take $Y\in M$ and check when the integral operator $S_\beta$ is well-defined. From the definition of the space $M$ we can write an inequality for the Erd\'elyi-Kober operator (with $U=Y^{1/(1+m)}$) \begin{equation} 0\leq{\Large I}^{0,1-\alpha}_{-\tfrac{2}{\alpha}}U(\eta)=\frac{1}{\Gamma(1-\alpha)}\int\limits_0^1(1-s)^{-\alpha}Y(s^{\tfrac{-\alpha}{2}}\eta)^{1/(1+m)}\,d s\leq \frac{1}{\Gamma(2-\alpha)}, \end{equation} which we will use in further parts of the proof.
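For completeness, let us spell out this elementary bound: since $0\leq Y\leq 1$ on $[0,\infty]$, the integrand is nonnegative and bounded by $(1-s)^{-\alpha}$, and
\begin{equation}
\frac{1}{\Gamma(1-\alpha)}\int\limits_0^1(1-s)^{-\alpha}\,d s = \frac{1}{(1-\alpha)\Gamma(1-\alpha)} = \frac{1}{\Gamma(2-\alpha)}.
\end{equation}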
The function $S_\beta(Y)(\eta)$ can be estimated from the above by \begin{equation} S_\beta(Y)(\eta)\leq 1+(m+1)\left[-\beta \eta+\frac{1}{\Gamma (2-\alpha)}\int\limits_{0}^{\left(1-\frac{\alpha}{2}\right)\eta}\left(\left(1-\frac{\alpha}{2}\right)\eta-z\right)\,d z\right], \end{equation} which can be integrated to yield $S_\beta(Y)(\eta)\leq g_1 (\eta,\beta)$. On the other hand, in the same way we can bound $S_\beta(Y)(\eta)$ from below, \begin{equation} S_\beta(Y)(\eta) \geq 1+(m+1)\left(-\beta \eta -\frac{1}{\Gamma(2-\alpha)}\int\limits_{\left(1-\frac{\alpha}{2}\right)\eta}^{\eta}\left(\left(1-\frac{\alpha}{2}\right)\eta-z\right)\,d z\right), \end{equation} which implies that $S_\beta(Y)(\eta) \geq g_2 (\eta,\beta)$. Now, notice that $g_1$ is decreasing in the interval $[0,\eta^*(\beta)]$ and attains its maximal value equal to $1$ at $\eta=0$. Hence, we get that $S_\beta(Y)(\eta)\leq 1$. Also, from the definition we immediately have $S_\beta(Y)(0)=1$ and thus $S_\beta(Y)\in M$. We see that both $g_1$ and $g_2$ are quadratic functions, so it is straightforward (but tedious) to find their positivity intervals. We get that $g_1(\eta,\beta) \geq 0 $ for $\eta\leq \eta_1(\beta)$ and $g_2(\eta,\beta) \geq 0 $ for $\eta\leq \eta_2(\beta)$. Further, we can see that $\eta_2$ is always greater than zero. The square root in $\eta_1(\beta)$ is real only when \begin{equation}\label{beta} \beta\geq\frac{2-\alpha}{\sqrt{2\Gamma (2-\alpha)(m+1)}}, \end{equation} which is satisfied by the assumption. Hence, due to the continuity of $g_1$, $g_2$ and $S_\beta(Y)(\eta)$ we can conclude the existence of at least one $\eta^*(\beta)$ which satisfies the inequalities \begin{equation} \eta_2(\beta)\leq \eta^*(\beta)\leq \eta_1(\beta). \end{equation} This concludes the proof. \end{proof} We thus see that for appropriate values of $\beta$ the operator $S_\beta$ is well-defined on $M$. We can now state the main existence result. \begin{thm} \label{thm:existence} For $\beta\geq\beta_0$ the equation \eqref{eq:3} has at least one nonnegative compactly supported solution $Y\in M$. Moreover, $\text{supp} \; Y = [0,\eta^*(\beta)]$. \end{thm} \begin{proof} It is convenient to introduce yet another operator which maps $M$ into itself, \begin{equation}\label{op:1} A_\beta(Y)= \begin{cases} 1+(m+1)\left[-\beta \eta+\int\limits_{0}^{\eta}((1-\tfrac{\alpha}{2})\eta-z)I^{0,1-\alpha}_{-\tfrac{2}{\alpha}}Y(z)^{1/(1+m)}\,d z\right], & \text{for}\ \eta\leq \eta^*(\beta), \\ 0, & \text{for}\ \eta> \eta^*(\beta), \end{cases} \end{equation} where $\eta^*$ is the smallest argument that satisfies the equation below (Lemma \ref{lem:S} guarantees that it is well-defined) \begin{equation} 1+(m+1)\left[-\beta \eta^*(\beta)+\int\limits_{0}^{\eta^*(\beta)}((1-\tfrac{\alpha}{2})\eta^*(\beta)-z)I^{0,1-\alpha}_{-\tfrac{2}{\alpha}}Y(z)^{1/(1+m)}\,d z\right]=0. \end{equation} Our original problem is thus reduced to showing the existence of a solution of the equation \begin{equation}\label{eq:5} A_\beta(Y)=Y, \end{equation} where the operator $A_\beta$ is defined above. From Proposition \ref{prop:M} we have that $M$ is bounded, closed and convex. Moreover, $M\subset X$, i.e. $M$ is a subspace of a Banach space. The kernel of the integral operator $A_\beta$ is continuous, hence using Theorem 3.4 from \cite{precup} we conclude that the operator $A_\beta$ is completely continuous. Consequently, we can use Schauder's theorem \cite{precup,zeidler}, which says that equation \eqref{eq:5} has a solution.
The form of the compact support comes from the definition of $A_\beta$. \end{proof} So far we know that \eqref{eq:3} has a bounded solution which possesses a zero provided that $\beta$ is large enough. It is also true that inside the interval of admissible $\beta$ there exists exactly one value for which the no-flux condition \eqref{co:1} is satisfied. \begin{thm} There exists a value of $\beta$ (and hence $\eta^{*}(\beta)$) for which the solution of \eqref{eq:3} satisfies \eqref{co:1}. \end{thm} \begin{proof} Assume that $U\in M$ and differentiate equation \eqref{eq:3} to get \begin{equation}\label{eq:6} U(\eta^*(\beta))^m U'(\eta^*(\beta))=-\beta +\left(1-\frac{\alpha}{2}\right)\int\limits_0^{\eta^*(\beta)}{\Large I}^{0,1-\alpha}_{-\tfrac{2}{\alpha}}U(z)\,d z-\frac{\alpha}{2}\eta^*(\beta){\Large I}^{0,1-\alpha}_{-\tfrac{2}{\alpha}}U(\eta^*(\beta)). \end{equation} Since $U(\eta) = 0$ for $\eta\geq \eta^*(\beta)$ we have \begin{equation}\label{eq:7} \frac{\alpha}{2}\eta^*(\beta){\Large I}^{0,1-\alpha}_{-\tfrac{2}{\alpha}}U(\eta^*(\beta))=0, \end{equation} which, along with the former formula, implies \begin{equation}\label{eq:8} U(\eta^*(\beta))^m U'(\eta^*(\beta))=-\beta +\left(1-\frac{\alpha}{2}\right)\int\limits_0^{\eta^*(\beta)}{\Large I}^{0,1-\alpha}_{-\tfrac{2}{\alpha}}U(z)\,d z. \end{equation} Now, we estimate the magnitude of $U(\eta^*(\beta))^m U'(\eta^*(\beta))$. Using the boundedness of the function $U$ we have \begin{equation} U(\eta^*(\beta))^m U'(\eta^*(\beta))\leq-\beta+\left(1-\frac{\alpha}{2}\right)\frac{1}{\Gamma(2-\alpha)}\eta_1(\beta)=:f_+(\beta). \end{equation} On the other hand, we can write \begin{equation} \begin{split} U(\eta^*(\beta))^m U'(\eta^*(\beta))&\geq -\beta+\frac{1-\frac{\alpha}{2}}{\Gamma(2-\alpha)}\int\limits_{0}^{\eta_2(\beta)}\left( 1+(m+1)\left( -\beta z-\frac{\alpha^2}{\Gamma(2-\alpha)}\frac{z^2}{8}\right)\right)^{\tfrac{1}{1+m}}\,d z\\ & =: f_-(\beta). \end{split} \end{equation} The above estimates can be written as \begin{equation} f_-(\beta)\leq U(\eta^*(\beta))^m U'(\eta^*(\beta))\leq f_+(\beta). \end{equation} Next, we compute the limits $\lim\limits_{\beta\to\infty}f_{\pm}(\beta)$. It is easy to see that \begin{gather} \lim\limits_{\beta \to \infty}\eta_{1,2}(\beta)=0, \end{gather} and hence $\lim\limits_{\beta\to\infty}f_{\pm}(\beta)=-\infty$. Moreover, \begin{equation} f_+(\beta_0(m))=\frac{\alpha }{\sqrt{2} \sqrt{(m+1) \Gamma (2-\alpha )}}>0. \end{equation} We further see that $\eta_2(0)>0$ and consequently $f_-(0)>0$. The functions $f_{\pm}$ are decreasing in $\beta$ and continuous. Therefore, since $f_\pm(\beta)$ change their sign as $\beta$ increases, the Darboux (intermediate value) theorem yields a $\beta^*$ (and hence $\eta^*$) such that the condition $U(\eta^*)^m U'(\eta^*)=0$ is satisfied. \end{proof} Equation (\ref{eq:3}) thus has a unique solution that belongs to $M$, satisfies \eqref{co:1} and has a zero. Now, we can invoke our previous results \cite{ploswi} to conclude that \eqref{eq:1} has a unique bounded solution. \begin{cor}[\cite{ploswi}, Corollary 1] Let $U$ be a bounded weak solution of \eqref{eq:1} such that $0\leq U(\eta)\leq 1$. Then, \begin{itemize} \item it is compactly supported with $\text{supp }U = [0,\eta^*]$ for a unique $\eta^*>0$, \item it is unique, \item it is twice differentiable in a neighbourhood of any $\eta$ such that $U(\eta) >0$, \item it is monotone decreasing. \end{itemize} \end{cor} \section{Conclusion} In this paper we proved that the solution of \eqref{eq:1} has compact support.
We proceeded by reducing our problem to an auxiliary equation to which a shooting method was applied. One of the most important steps was to transform the fractional integro-differential equation into a nonlinear Volterra integral equation and to choose an appropriate function space in which the solution of \eqref{eq:2} was sought. To find a unique $\eta^*$ we imposed the no-flux condition \eqref{co:1}, which arises for both physical and mathematical reasons. Based on our current results and the theorems provided in \cite{ploswi}, we can say that the solution of \eqref{eq:1} exists and is a unique, compactly supported, decreasing function. This result rigorously confirms the physically expected behaviour and, additionally, gives some useful estimates on the wetting front. \end{document}
\begin{example}in{equation}gin{document} \begin{example}in{equation}gin{abstract} We show that a relative entropy condition recently shown by Leger and Vasseur to imply uniqueness and stable $L^2$ dependence on initial data of Lax $1$- or $n$-shock solutions of an $n\tauildemes n$ system of hyperbolic conservation laws with convex entropy implies Lopatinski stability in the sense of Majda. This means in particular that Leger and Vasseur's relative entropy condition represents a considerable improvement over the standard entropy condition of decreasing shock strength and increasing entropy along forward Hugoniot curves, which, in a recent example exhibited by Barker, Freist\"uhler and Zumbrun, was shown to fail to imply Lopatinski stability, even for systems with convex entropy. This observation bears also on the parallel question of existence, at least for small $BV$ or $H^s$ perturbations. \varepsilonnd{abstract} \date{\tauoday} \title{ Entropy criteria and stability of extreme shocks: a remark on a paper of Leger and Vasseur } \section{Introduction} In this brief note, we examine for extreme Lax shock solutions of a system of conservation laws \begin{example}in{equation}\lambdaabel{system} u_t + f(u)_x=0, \; u\in {\mathbb R}^n, \varepsilonnd{equation} possessing a convex entropy \begin{example}in{equation}\lambdaabel{ent} \varepsilonnd{theorem}a, \quad P := \nabla^2 \varepsilonnd{theorem}a>0,\quad \nabla_u \varepsilonnd{theorem}a \nabla_u f= \nabla_u q, \varepsilonnd{equation} the relation between Lopatinski stability in the sense of Erpenbeck and Majda \cite{Er,M1,M2,M3} and a relative entropy criterion introduced recently by Leger and Vasseur \cite{LV}. A number of entropy criteria have been proposed over the years to distinguish physically admissible or stable shock waves. Some of the oldest \cite{B,W} are decrease of characteristic speed (compressivity) or increase in entropy across the shock, or their instantaneous equivalents: monotone decrease of shock speed or increase of entropy along the forward $1$-Hugoniot curve from a fixed left state. All of these conditions coincide for small-amplitude waves \cite{Sm}, agreeing also with the property of stability, or well-posedness of the shock solution with respect to nearby initial data, as determined by nonvanishing of a certain Lopatinski determinant \cite{La,Er,M1,M2,M3,Da}; however, for large-amplitude waves, the relations between these different criteria are unclear. {The related concept of entropy dissipation $\varepsilonnd{theorem}a_t-q_x \lambdaeq 0$ in the sense of measures at the shock, or $[\varepsilonnd{theorem}a]-\sigma[q]\lambdaeq 0$ for shock speed $\sigma,$ is justified as a necessary condition for the physicality criterion of {\it admissibility}. Here we are thinking of admissibility in the sense of Lax, for small-amplitude waves, under the assumption of genuine nonlinearity and strictly convex entropy (Theorem 5.7, \cite{La}), but also admissibility in the sense of existence of a nearby viscous shock profile, for arbitrary-amplitude waves, under the assumption of an associated entropy-compatible viscosity \cite{K, KS}.} The property of entropy dissipation may be seen to follow from monotone decrease of shock speed along the Hugoniot curve (Lemma 3, \cite{LV}; see also \cite{La}) making a link (if only one way) between large-amplitude admissibility and the more classical monotonicity condition defined originally in a small-amplitude context. 
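These classical criteria are conveniently illustrated by the scalar Burgers equation (a standard example, recalled here only for orientation): for $u_t+(u^2/2)_x=0$, the shock connecting a left state $u_-$ to a right state $u_+=u_--s$, $s>0$, travels with speed
$$ \sigma(s)=\frac{u_-+u_+}{2}=u_--\frac{s}{2}, $$
so that $u_+<\sigma(s)<u_-$ (compressivity) and $\sigma'(s)=-1/2<0$; that is, the shock speed decreases monotonically along the forward Hugoniot curve.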
Most recently, Leger and Vasseur have introduced a {\it relative entropy condition} pertaining to arbitrary-amplitude waves\fracootnote{See also an earlier analysis by Leger \cite{Leg}, proving $L^2$-contractivity of entropy solutions in the scalar case.}. Specifically, assuming $L^\infty$ boundedness and a strong trace property on solutions\fracootnote{Satisfied by $BV$ solutions.}, these authors establish for systems of conservation laws possessing a convex entropy uniqueness and stable dependence in $L^2$ on initial data for perturbations of extreme Lax shock waves (without loss of generality $1$-shocks) of arbitrary amplitude, under the conditions that \begin{example}in{equation}gin{itemize} \item[(i)] shock speed is nonincreasing along the forward Hugoniot curve from a fixed left state; explicitly $\s'(s) \lambdaeq 0$ for $s \gammaeq 0,$ with notation introduced in Section \ref{s2} below, and \item[(ii)] relative entropy (defined in \varepsilonqref{etarel}) is nondecreasing with respect to the right state along the forward Hugoniot curve; explicitly $d_s \varepsilonnd{theorem}a(u|S_u(s)) \gammaeq 0$ for $s \gammaeq 0.$ \varepsilonnd{itemize} The result of Leger and Vasseur is nominally an a priori short-time stability estimate and leaves open the question of existence, even for more standard perturbations that are small in $BV$ or $H^s.$ The purpose of the present note is to verify that conditions (i)-(ii) imply satisfaction of the Lopatinski condition of Majda \cite{M1,M2,M3,Le1}. Indeed, we show that satisfaction of the Lopatinski condition follows from the much weaker conditions that (i) and (ii) hold only for the single value $s = s_+$ corresponding to the right endstate; see (i')-(ii') below. This observation immediately yields, by the existing theory of \cite{Le1,M1,M2,M3}, existence and stability for the classes of small $BV$ or $H^s$ perturbations under conditions (i')-(ii'). One might hope that it could be eventually of use also in constructing approximate solutions and ultimately the demonstration of existence in the much more delicate $L^2$/strong trace setting considered in \cite{LV}. We remark that Lopatinski stability is necessary for the physicality condition of {\it viscous stability}, or stability of an associated viscous profile \cite{ZS}. \br From the above discussion, (i) is sufficient for entropy dissipation, which is necessary for the physicality condition of admissibility, and (i')-(ii') are sufficient for Lopatinski stability, which is necessary for the physicality condition of viscous stability. As strict entropy dissipation and Lopatinski stability are open conditions, whereas (i) and (i')-(ii'), as nonstrict monotonicity conditions, are closed, it is evident that {\it {\rm (i)} and {\rm (i')-(ii'),} are sufficient but not necessary} for strict entropy dissipation and Lopatinski stability, respectively. For, a closed condition holds at the boundary of its region of satisfaction, whereupon an implied open condition must therefore hold at some point outside. 
\varepsilonr \br \lambdaabel{r31} An examination of the argument of \cite{LV}, reveals that their hypothesis (ii) may be weakened to\fracootnote{Specifically, in the course of the proof of Theorem 3 in \cite{LV}, assumption (ii) is used only in the key Lemma 4, which is invoked only in Lemma 8, in 1-3 page 291, where the weakened form (ii$^*$) is used, not for all $s \gammaeq 0$ but only in an interval $0 \lambdaeq s \lambdaeq s_+ + C$ (in their notation, $0 \lambdaeq s \lambdaeq s_u$), where $C > 0$ is determined by the $L^\infty$ bound assumed on solutions.} $$\mbox{\rm (ii$^*$)} \quad (s - s_+) \bibitemg( \varepsilonnd{theorem}a(u|S_{u}(s))-\varepsilonnd{theorem}a(u|S_u(s_+)) \bibitemg) \gammaeq 0, \quad \mbox{for $s \gammaeq 0,$}$$ with notation introduced in Section \ref{s2}, where $(u, S_u(s_+), \s(s_+))$ is the fixed 1-shock under consideration. Evidently, (ii$^*$) implies our condition (ii') stated in Assumption II below. \varepsilonr \section{Definitions and result} \lambdaabel{s2} Let $\varepsilonnd{theorem}a,q$ be a convex entropy/entropy flux pair. Then \cite{La}, for $P = \nabla^2 \varepsilonnd{theorem}a$, $A:=\nabla f$, $P$ is symmetric positive definite and $PA$ is symmetric. Thus, $A$ is self-adjoint with respect to the inner product induced by $P$, and so the eigenvectors of $A$ corresponding to distinct eigenvalues are $P$-orthogonal. The {\it relative entropy} $\varepsilonnd{theorem}a(u|v)$ is defined following \cite{D,LV} as \begin{example}in{equation}\lambdaabel{etarel} \varepsilonnd{theorem}a(u|v):=\varepsilonnd{theorem}a(u)-\varepsilonnd{theorem}a(v)-\nabla \varepsilonnd{theorem}a(v)(u-v). \varepsilonnd{equation} We assume that $A$ is strictly hyperbolic\fracootnote{For Theorem \ref{main} to hold, we only need strict hyperbolicity to hold at the right endstate $S_u(s_+)$ introduced in Assumption II.}, with eigenvalues $a_1<a_2<\dots<a_n,$ and associated eigenvectors $r_1,r_2,\dots,r_n.$ \varepsilonnsuremath{\mathrm{e}}dskip {\bf Assumptions I.} For a given left state $u$, suppose that there is a well-defined $C^1$ $1$-Hugoniot curve of states $S_u(s)$ and associated speeds $\sigma(s)$, $s\gammaeq 0$, satisfying \begin{example}in{equation}\lambdaabel{rh} \sigma(s)(S_u(s)-u)=f(S_u(s))-f(u), \varepsilonnd{equation} with $S_u(0)=u$, $\sigma(0)=a_1(u)$. More precisely, assume that condition \varepsilonqref{rh} is everywhere full rank, with the linearized equations \begin{example}in{equation}\lambdaabel{lrh} \sigma'(s) (S_u(s)-u) = \bibitemg(A(S_u(s))-\sigma(s) \bibitemg)S_u'(s) \varepsilonnd{equation} (hence, by the Implicit Function Theorem, also the nonlinear equations \varepsilonqref{rh}) uniquely solvable (up to a constant multiplier) for $(\sigma'(s), S_u'(s))$. Moreover, assume that the resulting discontinuity is a Lax $1$-shock, in the sense that \begin{example}in{equation}\lambdaabel{lax} a_1(u)>\sigma(s) \quad \hbox{\rm and} \quad a_1(S_u(s)) <\sigma(s)< a_2(S_u(s)) < a_3(S_u(s)) < \dots < a_n(S_u(s)). \varepsilonnd{equation} \varepsilonnsuremath{\mathrm{e}}dskip The {\it Lopatinski} (stability) {\it condition} for the shock $(u,S_u(s),\sigma(s)),$ with notation introduced in Assumptions I above, is \begin{example}in{equation}\lambdaabel{lop} \det \bp (S_u(s)-u) & r_2(S_u(s)) & \dots & r_n(S_u(s))\varepsilonp \neq 0. \varepsilonnd{equation} This may be recognized as the condition that the Riemann problem be well-posed for data near $(u,S_u(s))$, more precisely, that the Jacobian of the associated Lax wave-map be full rank \cite{La,Sm}. 
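As a simple consistency check (a standard small-amplitude observation, not needed in what follows), note that \eqref{lop} always holds for sufficiently weak shocks: if the parametrization of the Hugoniot curve is regular at $s=0$, then $S_u(s)-u=c\,s\,r_1(u)+O(s^2)$ with $c\neq 0$, while $r_j(S_u(s))\to r_j(u)$ as $s\to 0^+$, so that
$$ \det\big( (S_u(s)-u) \;\; r_2(S_u(s)) \;\;\dots\;\; r_n(S_u(s)) \big) = c\,s\,\det\big( r_1(u)\;\; r_2(u)\;\;\dots\;\; r_n(u)\big)+O(s^2)\neq 0 $$
for $s>0$ small, by strict hyperbolicity.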
Condition \varepsilonqref{lop} is a crucial building block both (through resulting a priori stability estimates) for the small $H^s$-perturbation existence/stability theory of Majda \cite{M1,M2,M3,Me} and (through direct construction based on Riemann solutions) in the small-$BV$ perturbation existence/stability theory of Lewicka and others \cite{Le1,Le2,C} in the vicinity of a single large-amplitude shock. \varepsilonnsuremath{\mathrm{e}}dskip {\bf Assumptions II.} (i') $\sigma'(s_+)\lambdaeq 0$, (ii') $d_s \varepsilonnd{theorem}a(u | S_u(s_+))\gammaeq 0$, for shock $(u,S_u(s_+),\s(s_+))$, $s_+ \gammaeq 0$. \begin{example}in{equation}gin{lemma}\lambdaabel{l} Condition {\rm (ii')} is equivalent to \begin{example}in{equation}\lambdaabel{form} \lambdaangle S_u'(s_+), \nabla^2\varepsilonnd{theorem}a(S_u(s_+)) (S_u(s_+)-u) \tauext{\rm{ran}}gle \gammaeq 0. \varepsilonnd{equation} \varepsilonnd{lemma} \begin{example}in{equation}gin{proof} Differentiating \varepsilonqref{etarel}, we have $$ \begin{example}in{equation}gin{aligned} d_s \varepsilonnd{theorem}a(u|S_u(s)) &= d_s \bibitemg[ \varepsilonnd{theorem}a(u)-\varepsilonnd{theorem}a(S_u(s))-\nabla \varepsilonnd{theorem}a(S_u(s)) (u-S_u(s)) \bibitemg] \\ &= -\nabla \varepsilonnd{theorem}a(S_u(s))S_u'(s) - \bibitemg[ \bibitemg\lambdaangle S_u'(s),\nabla^2 \varepsilonnd{theorem}a (S_u(s))(u-S_u(s))\bibitemg\tauext{\rm{ran}}gle -\nabla \varepsilonnd{theorem}a(S_u(s))S_u'(s) \bibitemg] \\ &= - \bibitemg\lambdaangle S_u'(s),\nabla^2 \varepsilonnd{theorem}a(S_u(s))(u-S_u(s)) \bibitemg\tauext{\rm{ran}}gle, \varepsilonnd{aligned} $$ whence the assertion follows. \varepsilonnd{proof} \begin{example}in{equation}gin{theorem}\lambdaabel{main} Under Assumptions {\rm I} and {\rm II,} Lopatinski condition \varepsilonqref{lop} holds for $(u,S_u(s_+),\sigma(s_+))$. \varepsilonnd{theorem} \begin{example}in{equation}gin{proof} Failure of \varepsilonqref{lop} implies that $$ S_u(s_+) - u = \sum_{2 \lambdaeq j \lambdaeq n} \a_j r_j^+, \qquad r_j^+ := r_j(S_u(s_+)),$$ for some $\a_j \in \R,$ which are not all equal to zero, since we may assume $s_+ > 0,$ $S_u(s_+) \neq u.$ Then, by Assumptions I, $$ (A_+ - \s(s_+)) S'_u(s_+) = \s'(s_+) \sum_{2 \lambdaeq j \lambdaeq n} \a_j r_j^+, \qquad A_+ := A(S_u(s_+)).$$ Hence, inverting $A_+ - \s(s_+)$ (as we may by \varepsilonqref{lax}): $$ S'_u(s_+) = \s'(s_+) \sum_{2 \lambdaeq j \lambdaeq n} \b_j r_j^+, \qquad \b_j := (a_j(S_u(s_+))- \s(s_+))^{-1} \a_j. $$ We remark that, since $(u, S_u(s_+), \s(s_+))$ is a $1$-shock \varepsilonqref{lax}, there holds $\b_j \a_j \gammaeq 0,$ and, since the shock is non-trivial, $\b_j \a_j > 0$ for at least one $j.$ Hence, by $\s'(s_+) \neq 0$ (a consequence of Assumptions I), positive definiteness of $P,$ and the fact that the eigenvectors of $A$ are $P$-orthogonal, we deduce \begin{example}in{equation} \lambdaabel{contr} \bibitemg\lambdaangle S'_u(s_+), P_+ (A_+ - \s(s_+)) S_u'(s_+) \bibitemg\tauext{\rm{ran}}gle = \s'(s_+)^2 \sum_{2 \lambdaeq j \lambdaeq n} \b_j \a_j \bibitemg\lambdaangle r_j^+, P_+ r_j^+\bibitemg\tauext{\rm{ran}}gle > 0, \varepsilonnd{equation} with notation $P_+ := P(S_u(s_+)) = \nabla^2 \varepsilonnd{theorem}a(S_u(s_+)).$ However, Assumptions II and Lemma \ref{l} imply $$ \s'(s_+) \bibitemg\lambdaangle S'_u(s_+), P_+ (S_u(s_+) - u) \bibitemg\tauext{\rm{ran}}gle\lambdaeq 0,$$ so that, by \varepsilonqref{lrh}, $$ \bibitemg\lambdaangle S'_u(s_+), P_+( A_+ - \s(s_+)) S'_u(s_+) \bibitemg\tauext{\rm{ran}}gle \lambdaeq 0,$$ in contradiction with \varepsilonqref{contr}. 
\varepsilonnd{proof} \section{discussion and open problems} The corresponding absolute entropy conditions that shock strengh is decreasing and absolute entropy $\varepsilonnd{theorem}a(S_u(s))$ is increasing along the forward Hugoniot curve have been shown \cite{B} to hold {\it globally} for very general gas dynamical equations of state. However, recently, Barker, Freist\"uhler, and Zumbrun \cite{BFZ} have shown by explicit example that there exist systems satisfying these conditions and also possessing a convex entropy, but for which nonetheless {\it the Lopatinski condition can fail}. Thus, the relative entropy condition represents a considerable sharpening of the older absolute entropy condition. An interesting open problem would be to find an analog of this condition for intermediate shocks; however, we see no obvious candidate for this. Certainly, the approach of Theorem \ref{main} breaks down, since there is no relation between $P(u)$- and $P(S_u(s))$-orthogonality. An interesting, but more speculative, problem would be to make use of the Lopatinski condition to construct approximate solutions in the small $L^2$-perturbation class, towards an eventual possible small $L^2$/strong trace class existence theory. It would be extremely interesting, of course, to find some analog also for the corresponding viscous shock stability problem, whether directly as in \cite{LV}, or indirectly as here through the study of spectral stability and the linearized eigenvalue problem. \begin{example}in{equation}gin{thebibliography}{GMWZ7} \bibitembitem[B]{B} H.A. Bethe, {\it On the theory of shock waves for an arbitrary equation of state,} [Rep. No. 545, Serial No. NDRC-B-237, Office Sci. Res. Develop., U. S. Army Ballistic Research Laboratory, Aberdeen Proving Ground, MD, 1942]. Classic papers in shock compression science, 421--492, High-press. Shock Compression Condens. Matter, Springer, New York, 1998, \bibitembitem[C]{C} I-L. Chern, {\it Stability theorem and truncation error analysis for the Glimm scheme and for a front tracking method for flows with strong discontinuities,} Comm. Pure Appl. Math. 42 (1989), no. 6, 815-844. \bibitembitem[Er]{Er} J. Erpenbeck, {\it Stability of step shocks.} Phys. Fluids 5 (1962) no. 10, 1181--1187. \bibitembitem[BFZ]{BFZ} B. Barker, H. Freist\"uhler, and K. Zumbrun {\it Convex entropy, Hopf bifurcation, and viscous and inviscid shock stability,} Preprint (2012). \bibitembitem[D]{D} R. Diperna, {\it Uniqueness of solutions to hyperbolic conservation laws,} Indiana Univ. Math. J. 28 (1979) no.1, 137--188. \bibitembitem[Da]{Da} C. Dafermos, {\it Hyperbolic conservation laws in continuum physics.} Grundlehren der Mathematischen Wissenschaften, 325. Springer-Verlag, Berlin, Berlin, 2010. xxxvi+708 pp. \bibitembitem[La]{La} P.D. Lax, {\it Hyperbolic systems of conservation laws and the mathematical theory of shock waves}, Conference Board of the Mathematical Sciences Regional Conference Series in Applied Mathematics, No. 11. Society for Industrial and Applied Mathematics, Philadelphia, Pa., 1973. v+48 pp. \bibitembitem[Leg]{Leg} N. Leger, {\it $L^2$ stability estimates for shock solutions of scalar conservation laws using the relative entropy method}, Arch. Ration. Mech. Anal. 199 (2011) 761--778. \bibitembitem [LV]{LV} N. Leger and A. Vasseur, {\it Relative entropy and the stability of shocks and contact discontinuities for systems of conservation laws with non-BV perturbations,} Arch. Ration. Mech. Anal. 201 (2011), no. 1, 271--302. \bibitembitem[Le1]{Le1} M. 
Lewicka, {\it $L^1$ stability of patterns of non-interacting large shock waves,} Indiana Univ. Math. J. 49 (2000), no. 4, 1515--1537. \bibitembitem[Le2]{Le2} M. Lewicka, {\it Well posedness for hyperbolic systems of conservation laws with large BV data}, Arch. Ration. Mech. Anal. 173 (2004), no. 3, 415--445. \bibitembitem[K]{K} S. Kawashima, {\it Systems of a hyperbolic--parabolic composite type, with applications to the equations of magnetohydrodynamics}. thesis, Kyoto University (1983). \bibitembitem[KS]{KS} S. Kawashima and Y. Shizuta, \tauextit{On the normal form of the symmetric hyperbolic-parabolic systems associated with the conservation laws}. Tohoku Math. J. 40 (1988) 449--464. \bibitembitem[M1]{M1} A. Majda, {\it The stability of multi-dimensional shock fronts -- a new problem for linear hyperbolic equations.} Mem. Amer. Math. Soc. 275 (1983). \bibitembitem[M2]{M2} A. Majda, {\it The existence of multi-dimensional shock fronts.} Mem. Amer. Math. Soc. 281 (1983). \bibitembitem[M3]{M3} A. Majda, {\it Compressible fluid flow and systems of conservation laws in several space variables.} Springer-Verlag, New York (1984), viii+ 159 pp. \bibitembitem[Me]{Me} G. M\'etivier, \tauextit{Stability of multidimensional shocks}. Advances in the theory of shock waves, 25--103, Progr. Nonlinear Differential Equations Appl., 47, Birkh\"auser Boston, Boston, MA, 2001. \bibitembitem[Sm]{Sm} J. Smoller, {\it Shock waves and reaction--diffusion equations,} Second edition, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], 258. Springer-Verlag, New York, 1994. xxiv+632 pp. ISBN: 0-387-94259-9. \bibitembitem[W]{W} H. Weyl, {\it Shock Waves in Arbitrary fluids}, Comm. Pure and Appl. Math. 2 (1949). \bibitembitem[ZS]{ZS} K. Zumbrun and D. Serre, {\it Viscous and inviscid stability of multidimensional planar shock fronts,} Indiana Univ. Math. J. 48 (1999) 937--992. \varepsilonnd{thebibliography} \varepsilonnd{document}
\begin{document} \title{Efficient computation of \ Rankin $p$-adic L-functions} \begin{abstract} We present an efficient algorithm for computing certain special values of Rankin triple product $p$-adic L-functions and give an application of this to the explicit construction of rational points on elliptic curves. \end{abstract} \section{Introduction} The purpose of this paper is to describe an efficient algorithm for computing certain special values of Rankin triple product $p$-adic L-functions. These special values are $p$-adic numbers and our algorithm computes them in polynomial-time in the desired precision. This improves on existing algorithms which require exponential time in the desired precision. Our method has the pleasant feature of also being applicable to Rankin double product $p$-adic L-functions, and working equally well in weight one as compared to higher weights (Sections \ref{Sec-Weight1} and \ref{Sec-SingleDouble}). We hope it will usefully complement the powerful methods based upon overconvergent modular symbols for computing $p$-adic L-functions \cite{PS}, which the author understands are less readily adaptable to higher product $p$-adic L-functions. We describe an application of our algorithm to the efficient construction of rational points on elliptic curves over $\mbox{\Bb{Q}}$. The curves we consider all have rank one and relatively small conductor, and so this application does not yield any ``new'' points. However, the constructions give experimental verification both of the correctness of the implementation of our algorithm, and various sophisticated and new conjectural constructions of rational points on elliptic curves. Even in the rank one setting these constructions are of interest; for instance, they allow one to carry out by $p$-adic means the complex analytic calculations in \cite{DDLR} (see Example \ref{Ex-57a}), and in fact $p$-adically interpolate points found using much older but not well-understood methods. A different and enticing application of our algorithm is to the experimental study of conjectural constructions of ``new'' points on elliptic curves over certain number fields using weight one modular forms \cite{DLR}. In this paper though we shall not address experimentally the calculation of these (Stark-Heegner) points attached to weight one forms or the $p$-adic interpolation of points. We plan to return to these questions in future joint work. All of the applications of our algorithm are based upon ideas of Darmon and Rotger \cite{DR-Arizona,DR-GrossZagier}. In particular, Darmon encouraged the author to try to apply the method for computing with overconvergent modular forms in \cite{AL} to Rankin $p$-adic L-functions, and gave him invaluable help during the implementation of the algorithm and preparation of this paper. Much of the work behind this paper was in making the methods in \cite{AL} sufficiently fast in practice to turn a theoretical algorithm (for higher level) into one useful for experimental mathematics. In writing this paper the author had the choice between trying to give a comprehensive background to the theory necessary to define Rankin $p$-adic L-functions and present the work of Darmon and Rotger, or distilling just enough to describe his contribution. He chose the latter, since the long introduction to \cite{DR-GrossZagier} is already very clear but incompressible. 
This introduction should be read in parallel to our brief (and simplified) description below by anyone wishing to get a deeper understanding of the significance of the algorithm in our paper. The reader should also refer to that source for definitions of any unfamiliar terms below. (All the definitions we shall really need are gathered in Sections \ref{Sec-Background} and \ref{Sec-RLdef}.) Let $f,g,h$ be newforms of weights $k,l,m \geq 2$, primitive characters $\chi_f,\chi_g,\chi_h$ with $\chi_f \chi_g \chi_h = 1$, and level $N$. Assume that the Heegner Hypothesis H \cite[Section 1]{DR-GrossZagier} is satisfied. Let $p$ be a prime not dividing the level $N$, and fix an embedding of $\bar{\mbox{\Bb{Q}}}$ into $\mbox{\Bb{C}}p$, the completion of an algebraic closure of the field of $p$-adic numbers $\mbox{\Bb{Q}}p$. Assume $f,g$ and $h$ are ordinary at $p$. Let ${\mathbf f},{\mathbf g}$ and ${\mathbf h}$ be the (unique) Hida families of (overconvergent) $p$-adic modular forms passing through $f,g$ and $h$. The Rankin $p$-adic L-function ${\mathcal L}_p^{f}({\mathbf f},{\mathbf g},{\mathbf h})$ associates to each triple of weights $(x,y,z)$ in (a suitable subset of) $\mbox{\Bb{Z}}_{\geq 2}^3$ a $p$-adic number ${\mathcal L}_p^{f}({\mathbf f},{\mathbf g},{\mathbf h})(x,y,z) \in \mbox{\Bb{C}}p$. It has a defining interpolation property over a certain set $\Sigma_{f}$ of unbalanced weights, relating it to the special value of the classical (Garrett-)Rankin triple product L-function at its central critical point. (Weights $(x,y,z)$ are balanced if the largest is strictly smaller than the sum of the other two, and otherwise unbalanced.) The theorem of Darmon and Rotger \cite[Theorem 1.3]{DR-GrossZagier} equates its value at the {\it balanced} weights to an explicit algebraic number times the $p$-adic Abel-Jacobi map of a certain cycle on a product of Kuga-Sato varieties evaluated at a particular differential form. At balanced weights $(x,y,z)$ for reasons of sign the classical Rankin triple product L-function vanishes at its central critical point, and so the special value ${\mathcal L}_p^{f}({\mathbf f},{\mathbf g},{\mathbf h})(x,y,z)$ is thought of as some kind of first derivative. (Darmon and Rotger actually construct in addition ${\mathcal L}_p^{g}({\mathbf f},{\mathbf g},{\mathbf h})(x,y,z)$ and ${\mathcal L}_p^{h}({\mathbf f},{\mathbf g},{\mathbf h})(x,y,z)$ but we only consider ${\mathcal L}_p^{f}({\mathbf f},{\mathbf g},{\mathbf h})(x,y,z)$ and shall omit from here-on the superscript $f$.) In this paper we present an algorithm for computing ${\mathcal L}_p({\mathbf f},{\mathbf g},{\mathbf h})(x,y,z) \in \mbox{\Bb{C}}p$ for balanced weights $(x,y,z)$ to a given $p$-adic precision in polynomial-time in the precision, provided $p \geq 5$ and {\it under the following assumption on the weights}. Let us specialise the Hida families back to the original weights $(k,l,m)$ to recover the newforms $f$, $g$ and $h$, and assume that $(k,l,m)$ is a balanced triple. (This is only a notational simplification --- we are after all really interested in our original newforms, the Hida families being introduced just to define the interpolation properties of the L-function.) Our algorithm requires that $k = l - m + 2$. This is enough for all our present and immediately envisaged arithmetic applications. The problem which makes finding special values of Rankin triple product $p$-adic L-functions challenging is that of computing ordinary projections of $p$-adic modular forms. 
That is, in the definition of ${\mathcal L}_p({\mathbf f},{\mathbf g},{\mathbf h})(k,l,m)$ one encounters a $p$-adic modular form ``$d^{-(1+t)}(g^{[p]}) \times h$'' which is not classical, and then has to compute its ordinary projection to some precision. Since this form is not classical any straightforward approach to this has exponential-time in the desired $p$-adic precision; for example, by iterating the Atkin operator on $q$-expansions or on some suitable space of classical modular forms (as the latter necessarily has exponential dimension in the required precision, by consideration of weights cf. \cite[Proposition I.2.12 ii.]{FG}). Our solution lies in the fact that ``$d^{-(1+t)}(g^{[p]}) \times h$'' is nearly overconvergent \cite[Section 2.5]{DR-GrossZagier}. More precisely, our assumption ($k = l - m + 2$) on the weights is exactly that which ensures it is overconvergent, and so the methods we developed for computing with such forms in \cite{AL} can be applied. We expect that our methods can be generalised to handle nearly overconvergent modular forms (using their explicit description in \cite{CGJ}) and thus compute Rankin triple product $p$-adic L-functions at {\it any} balanced point $(x,y,z)$, but we have not carried out any detailed work in this direction. The main result of our paper is really the algorithm (and its refinements) in Section \ref{Sec-ProjAlg} for computing the ordinary projection of certain overconvergent modular forms and in addition the ordinary subspace. (We give a full and rigorous analysis of this algorithm, but not of two aspects of our overall algorithm for computing Rankin triple product $p$-adic L-functions. These are of minor practical importance, see Note \ref{Note-RankinL} (\ref{Note-NotProved}) and (\ref{Note-HeckeOps}), but difficult to analyse.) Regarding arithmetic applications, the most immediate is the following one deduced by Darmon from \cite[Theorem 1.3]{DR-GrossZagier} and \cite[Lemma 2.4]{DRS}, in a personal communication. Assume that $f$ and $g$ are newforms of weight $2$ and trivial character, and that $f$ has rational Fourier coefficients. Let $E_f$ denote the elliptic curve over $\mbox{\Bb{Q}}$ associated to $f$, and $\log_{E_f}: E_f(\mbox{\Bb{Q}}p) \rightarrow \mbox{\Bb{Q}}p$ be the formal $p$-adic logarithm map. Then there exists a point $P_g \in E_f (\mbox{\Bb{Q}})$ and a computable positive integer $d_g$ such that \begin{equation}\label{Eqn-Pg} \log_{E_f}(P_g) = 2 d_g \mbox{\Bb F}rac{{\mathcal E}_0(g) {\mathcal E}_1(g)} {{\mathcal E}(g,f,g)} {\mathcal L}_p({\mathbf g},{\mathbf f},{\mathbf g})(2,2,2). \end{equation} Here ${\mathcal E}(g,f,g)/{\mathcal E}_0(g) {\mathcal E}_1(g)$ is the explicit non-zero algebraic (in fact quadratic) number which occurs in the Darmon-Rotger formula \cite[Theorem 1.3]{DR-GrossZagier} --- it depends only upon the $p$th coefficients in the $q$-expansions of $f$ and $g$. Thus if ${\mathcal L}_p({\mathbf g},{\mathbf f},{\mathbf g})(2,2,2)$ is non-vanishing one can recover a point of infinite order on $E_f(\mbox{\Bb{Q}})$. (The integer $d_g$ is that which appears, in different notation ``$d_T$'' for ``$T := T_g$'', in \cite[Remark 3.1.3]{DDLR}.) The point $P_g$ is closely related to classically constructed points (``Zhang points''). We give an example of this application (Example \ref{Ex-57a}), and understand it will be worked out in detail in the forthcoming Ph.D. thesis of Michael Daub \cite{Daub}. 
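Let us indicate, as a sketch rather than a prescription (and under the assumption that a generator $P$ of $E_f (\mbox{\Bb{Q}})$ modulo torsion is already known, say from a descent computation), one simple way of recognising the point once the right hand side of (\ref{Eqn-Pg}) has been computed to some $p$-adic precision: one tests whether
\[ \frac{\log_{E_f}(P_g)}{\log_{E_f}(P)} \]
agrees to the working precision with a rational number of small height. Since the curves considered here have rank one and the formal logarithm kills torsion, this ratio should in fact be an integer, identifying $P_g$ as an integral multiple of $P$ up to torsion.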
We also present a number of variations of this application which suggest generalisations of the different underlying theoretical constructions and also illustrate our algorithm (Section \ref{Sec-Examples}). The paper is organised in a simple manner, Section \ref{Sec-Algorithm} containing the theoretical background and algorithms, and Section \ref{Sec-Examples} our illustrative computations.\\ {\it Acknowledgements:} This paper would have been neither started nor finished without the constant help and encouragement of Henri Darmon. It is a pleasure to thank him for this, and to thank also David Loeffler, Victor Rotger and Andrew Wiles for enlightening discussions, and the anonymous referee for many useful comments. \section{The Algorithm}\label{Sec-Algorithm} In this section we present our algorithm for computing the ordinary projection of overconvergent modular forms and certain special values of Rankin triple product $p$-adic L-functions. \subsection{Theoretical background}\label{Sec-Background} We first gather some background material on overconvergent modular forms and the ordinary subspace. \subsubsection{Katz expansions of overconvergent modular forms}\label{Sec-KatzExp} Let $N$ be a positive integer, and $p \geq 5$ be a prime not dividing $N$. Let $\chi: (\mbox{\Bb{Z}}/N\mbox{\Bb{Z}})^* \rightarrow \mbox{\Bb{Z}}_p^*$ be a Dirichlet character with image in $\mbox{\Bb{Z}}_p^*$. The condition that $\chi$ has image in $\mbox{\Bb{Z}}_p^*$ is partly for notational convenience, but see also Note \ref{Note-Minor} (\ref{Note-Fast}). For each integer $k$ let ${\mathbf M}_k (N,\chi,\mbox{\Bb{Z}}_p)$ denote the space of classical modular forms for $\Gamma_1(N)$ with character $\chi$ whose $q$-expansions at infinity have coefficients in $\mbox{\Bb{Z}}_p$. This is a free $\mbox{\Bb{Z}}_p$-module of finite rank. Let $E_{p-1}$ be the classical Eisenstein series of weight $p-1$ and level $1$ normalised to have constant term $1$. For each integer $i > 0$, one may choose a free $\mbox{\Bb{Z}}_p$-submodule ${\mathbf W}_i(N,\chi,\mbox{\Bb{Z}}_p)$ of ${\mathbf M}_{k + i(p-1)}(N,\chi,\mbox{\Bb{Z}}_p)$ such that \[ {\mathbf M}_{k + i(p-1)}(N,\chi,\mbox{\Bb{Z}}_p) = E_{p-1} \cdot {\mathbf M}_{k + (i-1)(p-1)}(N,\chi,\mbox{\Bb{Z}}_p) \oplus {\mathbf W}_i(N,\chi,\mbox{\Bb{Z}}_p).\] (This choice is not canonical, cf. \cite[Page 105]{Katz}.) Define ${\mathbf W}_0(N,\chi,\mbox{\Bb{Z}}_p) := {\mathbf M}_k(N,\chi,\mbox{\Bb{Z}}_p)$. Let $K$ be a finite extension of $\mbox{\Bb{Q}}_p$ with ring of integers $B$. Define ${\mathbf W}_i(N,\chi,B) := {\mathbf W}_i(N,\chi,\mbox{\Bb{Z}}_p) \otimes_{{\mbox{\Bbs{Z}}}_p} B$. For $r \in B$ the space $M_k(N,\chi,B;r)$ of $r$-overconvergent modular forms is by (our) definition the space of all ``Katz expansions'' of the form \[ f = \sum_{i = 0}^\infty r^i \frac{b_i}{E_{p-1}^i},\quad b_i \in {\mathbf W}_i(N,\chi,B),\quad \lim_{i \rightarrow \infty} b_i = 0\] where $b_i \rightarrow 0$ as $i \rightarrow \infty$ means the $q$-expansions of $b_i$ are more and more divisible by $p$ as $i$ goes to infinity, see \cite[Proposition 2.6.2]{Katz}. We define $M_k(N,\chi,K;r) := M_k(N,\chi,B;r) \otimes_{B} K$, a $p$-adic Banach space. The element $r \in B$ plays a purely auxiliary role, determining the inner radius $p^{-{\rm ord}_p(r)}$ of the annuli of overconvergence into the supersingular locus. (Here ${\rm ord}_p(\cdot)$ is the $p$-adic valuation normalised with ${\rm ord}_p(p) = 1$.)
From a computational point of view it is more convenient for each rational number $\alpha > 0$ to consider series of the form \[ f = \sum_{i = 0}^\infty p^{\lfloor \alpha i \rfloor} \frac{b_i}{E_{p-1}^i},\quad b_i \in {\mathbf W}_i(N,\chi,\mbox{\Bb{Z}}_p).\] We just write $M_k(N,\chi,\mbox{\Bb{Z}}_p,\alpha)$ for the space of all such elements and call it again the space of $\alpha$-overconvergent modular forms as no confusion is likely to arise. The space of {\it overconvergent modular forms} $M_k(N,\chi,\mbox{\Bb{Z}}_p)$ is the union $\cup_{\alpha > 0} M_k(N,\chi,\mbox{\Bb{Z}}_p,\alpha)$. In everything just defined we may also forget the character $\chi$ and consider the space $M_k(N,\mbox{\Bb{Z}}_p)$ of overconvergent modular forms for $\Gamma_1(N)$ itself. \subsubsection{The ordinary subspace} Any overconvergent modular form $f \in M_k(N,\mbox{\Bb{Z}}_p)$ is also a $p$-adic modular form \cite[Section 1.4(b)]{JPS} and has a $q$-expansion, and we define the {\it ordinary projection} in the usual way as $e_{ord}(f) := \lim_{n \rightarrow \infty} U_p^{n!}(f)$, where $U_p$ is the Atkin operator on $q$-expansions, i.e., $U_p: \sum_n a_n q^n \mapsto \sum_n a_{np} q^n$. When $k \geq 2$ the image of $e_{ord}$ on $p$-adic modular forms of level $N$ over $\mbox{\Bb{Z}}_p$ is equal to its image on the space of classical modular forms ${\mathbf M}_k(\Gamma_1(N) \cap \Gamma_0(p),\mbox{\Bb{Z}}_p)$ of level $Np$ with trivial character at $p$, see e.g. \cite[Theorem 6.1]{Cole1} or, for a precise statement (when $k \geq 3$), \cite[Theorem II.4.3 ii]{FG}. We have for each $\nu \geq 1$ an embedding \begin{equation}\label{Eqn-nu} {\mathbf M}_k(\Gamma_1(N) \cap \Gamma_0(p^\nu),B) \hookrightarrow M_k(N,B;r) \end{equation} for any $r \in B$ with ${\rm ord}_p(r) < 1/(p^{\nu - 2}(p+1))$, see (at least for $N \geq 3$) \cite[Corollary II.2.8]{FG}, and also \cite[Page 25]{CGJ}. Thus taking $\nu = 1$ here, one observes for $k \geq 2$ that the image of $e_{ord}$ on $p$-adic modular forms of level $N$ over $\mbox{\Bb{Z}}_p$ is equal (after base change to $B$) to its image on $M_k(N,B;r)$ for any $r \in B$ with ${\rm ord}_p(r) < p/(p+1)$. We shall define the $p$-adic {\it ordinary subspace} over $\mbox{\Bb{Z}}_p$ in level $N$, character $\chi$ and weight $k$ to be the image under $e_{ord}$ of $M_k(N,\chi,\mbox{\Bb{Z}}_p,\frac{1}{p+1})$. (We make this definition since this is precisely the space computed by Algorithm \ref{Alg-A}. For weight $k \geq 2$ this is equivalent to the usual definition as the image of $p$-adic modular forms under $e_{ord}$, by our preceding observation. The definition should also be equivalent for general weight (certainly over $K$) since the ordinary subspace over $K$ can be described as the space of overconvergent (generalised) eigenforms of slope zero \cite[Page 59]{FG}, and (generalised) eigenforms of finite slope are $r$-overconvergent for any $r$ with ${\rm ord}_p(r) < p/(p+1)$ \cite[Page 25]{CGJ}.) \subsection{Projection of overconvergent forms}\label{Sec-ProjAlg} Underlying our algorithm for computing Rankin $p$-adic L-functions is an algorithm for computing ordinary projections of overconvergent modular forms and also a basis for the ordinary subspace. It is an extension of \cite[Algorithm 2.1]{AL}. \subsubsection{The basic algorithm} We first present the basic algorithm, before discussing the steps in more detail and giving some practical refinements. Here the notation and assumptions are as in Section \ref{Sec-KatzExp}.
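As a warm-up to the algorithm, the following minimal sketch (plain Python, acting on a hypothetical truncated coefficient list; it is purely illustrative and not part of our implementation) shows the $U_p$ action on $q$-expansions recalled above, and why naive iteration of $U_p$ on $q$-expansions is costly: each application divides the usable $q$-adic precision by roughly $p$.
\begin{verbatim}
def atkin_up(coeffs, p):
    # Atkin operator on a truncated q-expansion:
    # [a_0, a_1, ..., a_M]  ->  [a_0, a_p, a_{2p}, ...].
    # Only coefficients a_{np} with np <= M are determined, so each
    # application divides the usable q-adic precision by roughly p.
    return [coeffs[n * p] for n in range((len(coeffs) - 1) // p + 1)]

p = 5
coeffs = list(range(626))           # hypothetical coefficients a_0, ..., a_625
once = atkin_up(coeffs, p)          # 126 coefficients remain
twice = atkin_up(once, p)           # 26 coefficients remain
print(len(coeffs), len(once), len(twice))   # 626 126 26
\end{verbatim}
It is exactly this loss of $q$-adic precision at each step which the Katz-basis approach below is designed to avoid.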
(We apologise that the notation ``$m$'' for the $p$-adic precision gives a clash with that used for a weight in the introduction and later, but we wished to follow closely that in \cite{AL}.) \begin{algorithm} \label{Alg-A} {\it Given an element $H \in M_k(N,\chi,\mbox{\Bb{Z}}_p,\frac{1}{p+1})$ where $0 \leq k < p-1$ and an integer $m \geq 1$, this algorithm computes the image in $R:= \mbox{\Bb{Z}}[[q]]/(p^m,q^{s(m,p)})$ of the ordinary projection $e_{ord}(H)$ and in addition the image in $R$ of an echelonised basis for the ordinary subspace. (Here $s(m,p)$ is some explicit function of $m$ and $p$ defined during the algorithm.)} \begin{enumerate}[(1)] \item{[Dimensions]\, Write $k_0 := k$. Compute $n:= \lfloor \frac{p+1}{p-1}(m+1) \rfloor$. For $i = 0,1,\dots,n$ compute $d_i$, the dimension of the space of classical modular forms of level $N$, character $\chi$, and weight $k_0 + i(p-1)$. Compute $m_i := d_i - d_{i-1}$, for $i \geq 1$,\,$m_0 := d_0$,\, and $\ell := m_0 + m_1 + \cdots + m_n = d_n$. Compute working precision $m^\prime := m + \lceil \frac{n}{p+1} \rceil$. Compute $\ell^\prime \geq \ell$, the Sturm bound for the space of classical modular forms of level $N$, character $\chi$ and weight $k_0 + (p-1)n$.} \item{[Complementary spaces]\, For each $0 \leq i \leq n$ compute a row-reduced basis $W_i$ of $q$-expansions in $\mbox{\Bb{Z}}[[q]]/(p^{m^\prime},q^{\ell^\prime p})$ for some choice of the complementary space ${\mathbf W}_i(N,\chi,\mbox{\Bb{Z}}_p)$. } \item{[Katz expansions]\, Compute the $q$-expansion in $\mbox{\Bb{Z}}[[q]]/(p^{m^\prime},q^{\ell^\prime p})$ of the Eisenstein series $E_{p-1}(q)$. For each $0 \leq i \leq n$, let $b_{i,1},\dots,b_{i,m_i}$ denote the elements in $W_i$. Compute the ``Katz basis'' elements $e_{i,s} := p^{\lfloor \frac{i}{p+1} \rfloor} E_{p-1}^{-i} b_{i,s}$ in $\mbox{\Bb{Z}}[[q]]/(p^{m^\prime}, q^{\ell^\prime p})$.} \item{[Atkin operator]\, For each $0 \leq i \leq n$ and $1 \leq s \leq m_i$ compute $t_{i,s} := U_p(u_{i,s})$ in $\mbox{\Bb{Z}}[[q]]/(p^{m^\prime},q^{\ell^\prime})$, where $U_p$ is the Atkin operator on $q$-expansions and $u_{i,s} := e_{i,s}$.} \item{[Atkin matrix]\, Compute $T$, the $\ell \times \ell^\prime$ matrix over $\mbox{\Bb{Z}}/(p^{m^\prime})$ whose entries are the coefficients in the $q$-expansions modulo $q^{\ell^\prime}$ of the $\ell$ elements $t_{i,s}$. Compute $E$, the $\ell \times \ell^\prime$ matrix over $\mbox{\Bb{Z}}/(p^{m^\prime})$ whose entries are the coefficients in the $q$-expansions modulo $q^{\ell^\prime}$ of the $\ell$ elements $e_{i,s}$. Use linear algebra over $\mbox{\Bb{Z}}/(p^{m^\prime})$ to compute the matrix $A^\prime$ over $\mbox{\Bb{Z}}/(p^{m^\prime})$ such that $T = A^\prime E$. Let $A$ be the ``Atkin matrix'' over $\mbox{\Bb{Z}}/(p^m)$ obtained by reducing entries in $A^\prime$ modulo $p^m$.} \item{[Two-stage projection] Compute the image of $H$ in $\mbox{\Bb{Z}}[[q]]/(p^{m^\prime},q^{\ell^\prime p})$. \begin{enumerate} \item{[Improve overconvergence] Compute $U_p(H) \in \mbox{\Bb{Z}}[[q]]/(p^{m^\prime},q^{\ell^\prime})$ and find coefficients $\alpha_{i,s} \in \mbox{\Bb{Z}}/(p^m)$ such that $U_p(H) \equiv \sum_{i,s} \alpha_{i,s} e_{i,s} \bmod{(p^{m},q^{\ell^\prime})}$.} \item{[Projection via Katz expansion] Compute a positive integer $f$ such that all the unit roots of the reverse characteristic polynomial of $A$ lie in some extension of $\mbox{\Bb{Z}}_p$ with residue class field of degree $f$ over $\mbox{\Bb F}_p$.
Compute $A^{r-1}$ for $r:= (p^f - 1)p^m$ using fast exponentiation. Compute $\gamma := \alpha A^{r-1}$ where $\alpha$ is the row vector $(\alpha_{i,s})$. Write $\gamma= (\gamma_{i,s})$ and return the ordinary projection $e_{ord}(H) = \sum_{i,s} \gamma_{i,s} e_{i,s} \in \mbox{\Bb{Z}}[[q]]/(p^m,q^{s(m,p)})$ where $s(m,p) := \ell^\prime p$.} \end{enumerate} } \item{[Ordinary subspace] Compute $A^r = A^{r-1} A$ and let $\{(B_{i,s})\}$ be the set of non-zero rows in the echelon form $B$ of the matrix $A^r$. Return $\sum_{i,s} B_{i,s} e_{i,s} \in \mbox{\Bb{Z}}[[q]]/(p^m,q^{s(m,p)})$ for each non-zero row $(B_{i,s})$, the image of a basis for the ordinary subspace.} \end{enumerate} \end{algorithm} In this algorithm we assume that the $q$-expansion of the input modular form $H$ can be computed in polynomial-time in $N,p$ and any desired $p$-adic and $q$-adic precisions. Regarding the complexity of the whole algorithm, we just refer the reader to the analysis of Steps 1-5 in \cite[Sections 3.2.2, 3.3.1]{AL}, and observe that Steps 6 and 7 can be carried out using standard methods in linear algebra. In particular, the algorithm is certainly polynomial-time in $N,p$ and $m$. \subsubsection{Proof of correctness} The analysis of the correctness of the algorithm is very similar to that in \cite[Section 3.2.1]{AL}. The essential idea is the following. One considers an infinite square matrix for the Atkin $U_p$ operator on the space of $\frac{1}{p+1}$-overconvergent modular forms w.r.t. some choice of Katz basis. After reducing this (assumed integral, see Note \ref{Note-Minor} (\ref{Note-A})) matrix modulo $p^m$, it vanishes except for an $\infty \times \ell$ strip down the lefthand side. The matrix $A$ modulo $p^m$ we compute is the $\ell \times \ell$ matrix which occurs in the top lefthand corner, for our choice of basis (this is proved in \cite[Section 3]{AL}). We would like to iterate the infinite matrix on the infinite row vector representing an overconvergent modular form $H$. When $H \in M_k(N,\chi,\mbox{\Bb{Z}}_p,\frac{p}{p+1})$ we notice that the coefficients in the infinite vector representing $H$ w.r.t. our Katz basis decay $p$-adically (since $p/(p+1) > 1/(p+1)$) and in fact vanish modulo $p^m$, except for the first $\ell$ elements (see the final paragraph in \cite[Section 3.4.2]{AL}). Hence we can iterate $U_p$ on $H$ by iterating the finite matrix $A$ on a finite vector of length $\ell$. (The actual power $r$ is chosen to ensure that we iterate sufficiently often to obtain the correct answer modulo $p^m$.) In our application to Rankin $p$-adic L-functions we will find that in fact $H \in M_k(N,\chi,\mbox{\Bb{Z}}_p,\frac{1}{p+1})$. Hence the preliminary Step 6(a) is to apply the $U_p$ operator once to $q$-expansions to improve overconvergence by a factor $p$, see \cite[Equation (2)]{AL} and Note \ref{Note-Minor} (\ref{Note-A}). (There is a loss of precision of $m^\prime - m$ when one writes $U_p(H)$ as a Katz expansion, cf. the last paragraph of \cite[Section 3.2.1]{AL} where a similar loss occurs during the computation of the matrix $A$.) Observe that this preliminary step is harmless, since we need to compute the elements in our Katz basis to the higher precision modulo $q^{\ell^\prime p}$ anyway. (To make the above argument completely rigorous one fusses over the minor difference between $r$-overconvergent for all ${\rm ord}_p(r) < p/(p+1)$, and $\frac{p}{p+1}$-overconvergent, as in \cite[Section 3.2.1]{AL}.)
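To fix ideas, the following minimal sketch (in Python, with a random matrix standing in for the Atkin matrix $A$ and a random vector standing in for $(\alpha_{i,s})$; it illustrates only the linear algebra of Step 6(b), not the modular-forms computations of Steps 1-5) carries out the fast exponentiation and the projection $\gamma = \alpha A^{r-1}$ over $\mbox{\Bb{Z}}/(p^m)$.
\begin{verbatim}
import random

def mat_mul(A, B, mod):
    n, k, m2 = len(A), len(B), len(B[0])
    C = [[0] * m2 for _ in range(n)]
    for i in range(n):
        for t in range(k):
            a = A[i][t]
            if a:
                rowB, rowC = B[t], C[i]
                for j in range(m2):
                    rowC[j] = (rowC[j] + a * rowB[j]) % mod
    return C

def mat_pow(A, e, mod):
    # square-and-multiply ("fast exponentiation") for matrices over Z/mod
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]
    while e:
        if e & 1:
            R = mat_mul(R, A, mod)
        A = mat_mul(A, A, mod)
        e >>= 1
    return R

p, m, f, ell = 5, 10, 2, 6             # toy sizes; ell is the dimension from Step 1
mod = p ** m
r = (p ** f - 1) * p ** m              # the exponent r of Step 6(b)
A = [[random.randrange(mod) for _ in range(ell)] for _ in range(ell)]
alpha = [random.randrange(mod) for _ in range(ell)]   # stand-in for (alpha_{i,s})
Ar1 = mat_pow(A, r - 1, mod)
gamma = [sum(alpha[i] * Ar1[i][j] for i in range(ell)) % mod
         for j in range(ell)]          # gamma = alpha * A^(r-1), a row vector
print(gamma)
\end{verbatim}
Only about $\log_2 r$ matrix multiplications are needed, which is why the projection step is cheap once the matrix $A$ has been computed.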
\begin{note}\label{Note-Minor} We make some minor comments on the algorithm. \begin{enumerate}[(1)] \item{ For weight $k \geq 2$ the ordinary subspace can be computed instead using classical methods; however, our algorithm is the only ``polynomial-time'' method known to the author for computing this subspace in weight $k \leq 1$. }\label{Note-k1} \item{ We assume that the smallest non-zero slope $s_0$ of (the Newton polygon of) the reverse characteristic polynomial of $A$ is such that $\lceil m/s_0 \rceil \leq (p^f - 1)p^m$. This is reasonable as the smallest non-zero slope which has ever been experimentally observed is $1/2$. (One could of course compute $s_0$ and adjust $r$ accordingly to remove this assumption.) The integer $f$ can be easily computed by reducing the matrix $A$ modulo $p$. The exponent $m$ rather than $m-1$ in the definition of $r$ accounts for the possibility that the unit roots may lie in ramified extensions. (So $u^r \equiv 1 \bmod{p^m}$ for each unit root $u$, and $u^r \equiv 0 \bmod{p^m}$ for all other roots $u$ of the reverse characteristic polynomial.) }\label{Note-s0} \item{ The correctness of the algorithm relies on the assumption that we can solve $T = A^\prime E$ for a $p$-adically integral matrix $A^\prime$, although the theory only guarantees that $p A^\prime$ has integral coefficients, see \cite[Note 3.2, Section 3.2.1]{AL} and also \cite[II.3]{FG}, \cite[Section 3.11]{Katz}. One could modify the algorithm (or rather the refined version in Section \ref{Sec-S6}) to remove this assumption; however, in practice the author has never encountered a situation in which the matrix $A^\prime$ fails to have $p$-adically integral coefficients. }\label{Note-A} \item{ The assumption that $\mbox{\Bb{Z}}[\chi]$ embeds in $\mbox{\Bb{Z}}_p$ allows one to exploit fast algorithms for matrix and polynomial arithmetic over rings of the form $\mbox{\Bb{Z}}_p/(p^m) \cong \mbox{\Bb{Z}}/(p^m)$ which are integrated into the systems {\sc Magma} and {\sc Sage}. The algorithm works perfectly well in principle without this assumption, but it will be much more difficult to get a comparably fast implementation. }\label{Note-Fast} \item{ The hypothesis $0 \leq k < p-1$ can be removed as follows. In Step 1 write $k:= k_0 + j(p-1)$ where $0 \leq k_0 < p-1$. In Step 4 compute $G := E_{p-1}(q)/E_{p-1}(q^p)$ and $G^j \in \mbox{\Bb{Z}}[[q]]/(p^{m^\prime},q^{\ell^\prime p})$, and let $u_{i,s} := G^j e_{i,s}$. The matrix $A$ computed in Step 5 is then for the ``twisted'' Atkin operator $U_p \circ G^j$. After Step 6(a) multiply the $q$-expansion of $U_p(H)$ by $E_{p-1}^{-j}$ and in Step 6(b) multiply the $q$-expansion $\sum_{i,s} \gamma_{i,s} e_{i,s}$ by $E_{p-1}^j$ and return this product as $e_{ord}(H)$. In Step 7 multiply each $q$-expansion $\sum_{i,s} B_{i,s} e_{i,s}$ by $E_{p-1}^j$ to get the basis for the ordinary subspace. For $j \geq 1$, to ensure $E_{p-1}^{-j} U_{p}(H)$ lies in the correct space one should multiply it by $p^{\lceil \frac{j}{p+1} \rceil}$, and so the final answer will only be correct modulo $p^{m - \lceil \frac{j}{p+1} \rceil}$. (One could of course also just run the algorithm without twisting, but then the auxiliary parameters $n,\ell, m^\prime$ etc.\ would have to be worked out afresh, since the algorithm would no longer be an extension of \cite[Algorithm 2.1]{AL}.) }\label{Note-kbig} \item{ In practice the output $q$-adic precision $s(m,p) = \ell^\prime p $ is always large enough for our needed application to Rankin $p$-adic L-functions.
One can insist though on any output precision $s^\prime \geq \ell^\prime p$ simply by computing the Katz basis elements in Step 3 to that $q$-adic precision. }\label{Note-qadic} \end{enumerate} \end{note} \subsubsection{Finding complementary spaces in Step 2}\label{Sec-S23} A key step in the algorithm is the efficient construction in practice of the image in $ \mbox{\Bb{Z}}[[q]]/(p^{m^\prime},q^{\ell^\prime p})$ of a basis for some choice of complementary spaces ${\mathbf W}_i(N,\chi,\mbox{\Bb{Z}}_p)$, for each $0 \leq i \leq n$. The author's implementation (which is at present restricted to trivial or quadratic characters $\chi$) is based upon suggestions of David Loeffler and John Voight. The idea is to use the multiplicative structure on the ring of modular forms. One fixes a choice of weight bound ${\mathcal B} \geq 1$ and computes the image in $\mbox{\Bb{Z}}[[q]]/(p^{m^\prime},q^{\ell^\prime p})$ of a $\mbox{\Bb{Z}}[\chi^\prime]$-basis for each of the spaces of classical modular forms ${\mathbf M}_b (N,\chi^\prime,\mbox{\Bb{Q}}(\chi^\prime))$ where $1 \leq b \leq {\mathcal B}$ and $\chi^\prime$ vary over a set of characters which generate a group containing $\chi$. One then reduces these basis elements modulo $(p,q^{\ell^\prime})$ and for each $0 \leq i \leq n$ looks for random products of these $q$-expansions which generate an $\mbox{\Bb F}_p$-vector space of dimension $d_i$ and have weight $k_0 + i(p-1)$ and character $\chi$. This is done in a recursive manner. Once one has computed the required forms in weight $k_0 + (i-1)(p-1)$ one maps them (via the identity map) into weight $k_0 + i (p-1)$ (recall $E_{p-1} \equiv 1 \mod{p}$) and generates a further $m_i = d_i - d_{i-1}$ linearly independent forms in weight $k_0 + i (p-1)$ and character $\chi$. The correct choices of products, which give forms not in the space already generated, are ``encoded'' in an appropriate manner; that is, the basis elements for each weight $b$ and character $\chi^\prime$ are stored as an ordered list, and products of them (modulo $(p,q^{\ell^\prime})$) can then be represented by ``codes'' which give the positions chosen in each list. Having found these correct choices modulo $(p,q^{\ell^\prime})$ one then repeats the process modulo $(p^{m^\prime},q^{\ell^\prime p})$ to find the complementary spaces ${\mathbf W}_i(N,\chi,\mbox{\Bb{Z}}_p)$, but crucially this time using the ``codes'' to only take products of modular forms which give something not in the $\mbox{\Bb{Z}}_p$-span of the forms already computed. In this way when working to the full precision one does not waste time computing products of modular forms that lie in the space one has already generated. (It surprised the author to discover that in practice in some examples, e.g. Example \ref{Ex-469}, one can generate many such ``dud'' forms --- he has no intuition as to why this is the case.) A good bound to take is ${\mathcal B} := 6$, but one can vary this, playing the time it takes to generate the spaces in low weight off against the time spent looking for suitable products. This choice of bound fits with some theoretical predictions communicated to the author by David Loeffler. \subsubsection{A three-stage projection in Step 6}\label{Sec-S6} In Algorithm \ref{Alg-A} we find $U_p^r(H)$, where $r$ is chosen so that the answer is correct modulo $p^m$, in two separate stages. First, one computes $U_p(H) \in M_k(N,\chi,\mbox{\Bb{Z}}_p,\frac{p}{p+1})$ using $q$-expansions. Second, one computes $U_p^{r-1}(U_p(H))$ using Katz expansions.
However, the matrix $A$ has size growing linearly with $m$ and so the computation of the high power $A^{r-1}$ becomes a bottleneck as the precision $m$ increases. A better approach is to factor the projection map into three parts, as follows. Write $s_0$ for the smallest non-zero slope in the characteristic series of $A$ (one can safely just set $s_0 := 1/2$). Computing $A^{\lceil m/s_0 \rceil }$ and writing its non-zero rows (which are w.r.t. the Katz basis) as $q$-expansions in $\mbox{\Bb{Z}}[[q]]/(p^m,q^{\ell^\prime p})$ gives (the image of) a basis for the ordinary subspace. One can now compute a matrix $A_{ord}$ over $\mbox{\Bb{Z}}/(p^m)$ for the $U_p$ operator on this basis by explicitly computing with $q$-expansions. This matrix is significantly smaller than $A$ itself, since its dimension has no dependence on $m$. To project $H$, one computes as before $U_p(H)$ using $q$-expansions, then $U_p^{\lceil m/s_0 \rceil}(U_p(H))$ via Katz expansions as the product $\beta := \alpha A^{\lceil m/s_0 \rceil}$. Next, one writes the ``Katz vector'' $\beta$ as the image of a $q$-expansion in $\mbox{\Bb{Z}}[[q]]/(p^m,q^{\ell^\prime p})$ and thus as a new vector $\beta^{\prime}$ over $\mbox{\Bb{Z}}/(p^m)$ in terms of the basis for the ordinary subspace. Finally, one computes $U_p^{r - \lceil m/s_0 \rceil - 1}$ on $U_p^{\lceil m/s_0 \rceil}(U_p(H))$ as $\gamma^\prime := \beta^\prime A_{ord}^{r - \lceil m/s_0 \rceil - 1}$ and returns the $q$-expansion associated to $\gamma^\prime$ as the ordinary projection of $H$ modulo $(p^m,q^{s(m,p)})$. This three-stage projection method also works for $k < 0$ or $k \geq p-1$, but one must take care to twist and un-twist by powers of $E_{p-1}$ at the appropriate times. \subsubsection{Avoiding weight one forms}\label{Sec-Weight1} In the case that $k = 1$, one can compute the ordinary projection $e_{ord}(H)$ of the weight one form $H$ without doing {\it any} computations in weight one, except computing the $q$-expansion of $H$ itself modulo $(p^{m^\prime}, q^{\ell^\prime p})$. The idea is to use the Eisenstein series to ``twist'' up to weight $p = 1 + (p-1)$. That is, one proceeds as in Note \ref{Note-Minor} (\ref{Note-kbig}), only writing $k = 1 = k_0 + j(p-1)$ where now $k_0 := p$ and $j:=-1$. In addition, when generating complementary spaces (see Section \ref{Sec-S23}) one only computes bases of classical modular forms in low weights $2 \leq b \leq {\mathcal B}$. The author has implemented this variation in both {\sc Magma} and {\sc Sage}, and used it to compute the characteristic series of the Atkin operator on $p$-adic overconvergent modular forms in weight one (for a quadratic character, and various levels $N$ and primes $p$) without computing the $q$-expansions of any modular forms in weight one. \subsection{Application to $p$-adic L-functions}\label{Sec-RL} We now describe the application of Algorithm \ref{Alg-A} to the computation of $p$-adic L-functions. \subsubsection{Definition of Rankin triple product $p$-adic L-functions}\label{Sec-RLdef} Let $f,g,h$ be newforms of balanced weights $k,l,m \geq 2$, primitive characters $\chi_f,\chi_g,\chi_h$, with $\chi_f \chi_g \chi_h = 1$ and level $N$. Assume that the Heegner hypothesis H from \cite[Section 1]{DR-GrossZagier} is satisfied, e.g. $N$ is squarefree and for each prime $\ell$ dividing $N$ the product of the $\ell$th Fourier coefficients of $f,g$ and $h$ is $-\ell^{(k + l + m - 6)/2}$. Write $k = l + m - 2 - 2t$ with $t \geq 0$, which is possible since the sum of the weights must be even.
We fix an embedding $\bar{\mbox{\Bb{Q}}} \hookrightarrow \mbox{\Bb{C}}_p$ and assume $f,g$ and $h$ are ordinary at $p$. That is, the $p$th coefficient in the $q$-expansion of each is a $p$-adic unit. Define the map $d = q \frac{d}{dq}$ on $q$-expansions as $d: \sum_{n \geq 0} a_n q^n \mapsto \sum_{n \geq 0} n a_n q^n$. Then for $s \geq 0$, the map $d^s$ acts on $p$-adic modular forms increasing weights by $2s$ \cite[Th\'eor\`eme 5(a)]{JPS}. For a $p$-adic modular form $a(q) := \sum_{n \geq 0} a_n q^n$ let $a^{[p]} := \sum_{n \geq 1,\, p \not \;| n} a_n q^n$ denote its $p$-depletion. Then for $s \geq 1$ the map \[ a \mapsto d^{-s}(a^{[p]}) = \sum_{\stackrel{n = 1}{p \not \;| \,n}}^{\infty} \frac{a_n}{n^s} q^n\] acts on $p$-adic modular forms shifting weights by $-2s$ \cite[Th\'eor\`eme 5(b)]{JPS}. So $d^{-(1 + t)}(g^{[p]}) \times h$ is a $p$-adic modular form of weight $l - 2(1 + t) + m = k$ and character $\chi_g \chi_h = \chi_f^{-1}$. Let $f^*$ be the dual form to $f$ and $f^{*(p)}$ be the ordinary $p$-stabilisation of $f^*$, see \cite[Sections 1 and 4]{DR-GrossZagier}. So $f^{*(p)}$ is an ordinary eigenform of character $\chi_f^{-1}$. We define \[ {\mathcal L}_p({\mathbf f},{\mathbf g},{\mathbf h})(k,l,m) := c(f^{*(p)}, e_{ord}(d^{-(1 + t)}(g^{[p]}) \times h)) \in \mbox{\Bb{C}}_p.\] Here we are assuming the action of the Hecke algebra on the ordinary subspace in weight $k$ is semisimple (which the author understands is well-known for $N$ squarefree since $k \geq 2$) and $c(f^{*(p)},\bullet)$ denotes the coefficient of $f^{*(p)}$ when one writes an ordinary form $\bullet$ as a linear combination of ordinary eigenforms, see \cite[Page 222]{Hida-Book}. (Darmon and Rotger take a different but equivalent approach, using the Poincar\'e pairing in algebraic de Rham cohomology to extract the coefficient ${\mathcal L}_p({\mathbf f},{\mathbf g},{\mathbf h})(k,l,m)$ \cite[Proposition 4.10]{DR-GrossZagier}.) \subsubsection{Computation of Rankin triple product $p$-adic L-functions} We shall now (with another apology) introduce the clashing notation $m$ to refer to the $p$-adic precision, as in Section \ref{Sec-ProjAlg}. We wish to apply Algorithm \ref{Alg-A} to compute $e_{ord}(H)$ for $H := d^{-(1 + t)}(g^{[p]}) \times h$ modulo $p^m$ (and $q$-adic precision $s(m,p)$) and also a basis for the ordinary subspace in level $N$, weight $k$ and character $\chi:= \chi_f^{-1}$. (So we should assume that the image of $\chi$ lies in $\mbox{\Bb{Z}}_p$ and $g$ and $h$ are defined over $\mbox{\Bb{Z}}_p$, but see also Notes \ref{Note-Minor} (\ref{Note-Fast}) and \ref{Note-RankinL} (\ref{Note-phi}).) Given these, one can use Hecke operators on the ordinary subspace to extract the coefficient $c(f^{*(p)},e_{ord}(H))$, see Note \ref{Note-RankinL} (\ref{Note-HeckeOps}), as $f^{*(p)}$ (and $H$) are easy to compute (at least within {\sc Magma} and {\sc Sage} using the algorithms from \cite{Stein}). For our projection algorithm to work we require that $H$ is overconvergent (rather than just nearly overconvergent \cite[Section 2.5]{DR-GrossZagier}) and in particular that $H \in M_k(N,\chi,\mbox{\Bb{Z}}_p,\frac{1}{p+1})$ for $\chi := \chi_f^{-1}$. Overconvergence is guaranteed provided $t = l - 2$, since $a \mapsto d^{-s}(a^{[p]})$ maps overconvergent forms in weight $1 + s$ to overconvergent forms in weight $1 - s$ \cite[Proposition 4.3]{Cole1}.
That is, provided \begin{equation}\label{Eqn-klm} k = m - l + 2 \end{equation} we will have that $d^{-(1+t)}(g^{[p]})$, and hence also $H$, is overconvergent. When this condition is not satisfied our algorithm will fail. Regarding the precise radius of convergence of $d^{-(1+t)}(g^{[p]})$, Darmon has explained to the author that when our condition (\ref{Eqn-klm}) is met the methods used by Coleman (the geometric interpretation of the $d$ operator in terms of the Gauss-Manin connection \cite[Section 2.5]{DR-GrossZagier}) show the form $d^{-(1 + t)}(g^{[p]})$ lies in the space $M_{2-l}(N,\chi_g,K;r)$ for any $r \in B \subset \mbox{\Bb{C}}_p$ with ${\rm ord}_p(r) < 1/(p+1)$. Let us outline the argument to get an idea why this is true. First, the $p$-depletion $g^{[p]} := (1 - V_p U_p)g$ is a classical modular form for $\Gamma_1(N) \cap \Gamma_0(p^2)$ with trivial character at $p$ and infinite slope. (Here $V_p$ is the one-sided inverse of $U_p$ \cite[Equation (12)]{DR-GrossZagier} and multiplies the level of $U_p (g)$ by a factor of $p$.) Hence by (\ref{Eqn-nu}) with $\nu := 2$, $g^{[p]}$ lies in $M_{l}(N,\chi_g,B;r)$ for any $r \in B$ with ${\rm ord}_p(r) < 1/(p+1)$. Next, \cite[Theorem 5.4]{Cole1} gives an explicit relation between the action of powers of the $d$ operator on spaces of overconvergent modular forms and that of the Gauss-Manin connection of certain de Rham cohomology spaces associated to rigid analytic modular curves. This relationship associates to $g^{[p]}$ a trivial class in the de Rham cohomology space (the righthand side of \cite[Equation (34)]{DR-GrossZagier} with ``$r$'' equal to $t$ and any ``$\epsilon$'' less than $1/(p+1)$), and hence one in the image of the Gauss-Manin connection. (The class is trivial because the form has infinite slope, cf. \cite[Lemma 6.3]{Cole1}.) The Gauss-Manin connection preserves the radius of convergence, and taking the preimage and untangling the relationship one finds that $d^{-(1+t)}(g^{[p]})$ is an overconvergent modular form of the same radius of convergence as $g^{[p]}$, i.e. $d^{-(1+t)}(g^{[p]}) \in M_{2-l}(N,\chi_g,K;r)$ for any $r \in B$ with ${\rm ord}_p(r) < 1/(p+1)$. Thus multiplying by $h$ (and using (\ref{Eqn-nu}) with $\nu := 1$ to determine the overconvergence of $h$ itself) we find also \begin{equation}\label{Eqn-dgh} d^{-(1 + t)}(g^{[p]}) \times h \in M_k(N,\chi,K;r) \end{equation} for any $r \in B$ with ${\rm ord}_p(r) < 1/(p+1)$. \begin{note}\label{Note-RankinL} \begin{enumerate}[(1)] \item{ The above argument does not quite show that $H := d^{-(1 + t)}(g^{[p]}) \times h$ lies in $M_k(N,\chi,\mbox{\Bb{Z}}_p,\frac{1}{p+1})$ as for this one would need to replace ``$K$'' by ``$B$'' in (\ref{Eqn-dgh}). However, the author just {\it assumed} this was true, and this was not contradicted by our experiments; in particular, when one could relate the value of the Rankin $p$-adic L-function to the $p$-adic logarithm of a point on an elliptic curve, the relationship held to exactly the precision predicted by the algorithm. To be completely rigorous though one would have to carry out a detailed analysis of Darmon's argument and the constructions used by Coleman (and one may have to account for some extra logarithmic growth and loss of precision). }\label{Note-NotProved} \item{ It is helpful to notice that the map $\phi:(g,h) \mapsto e_{ord}(d^{-(1+t)}(g^{[p]}) \times h)$ is bilinear in $g$ and $h$.
Thus one can compute $\phi(g,h)$ by first computing it on a product of bases for the spaces ${\mathbf S}_l(N,\chi_g)$ and ${\mathbf S}_m(N,\chi_h)$. This is useful when these spaces are defined over $\mbox{\Bb{Z}}_p$ but the newforms themselves are defined over algebraic number fields which do not embed in $\mbox{\Bb{Z}}_p$. }\label{Note-phi} \item{ The author implemented a number of different approaches to computing $c(f^{*(p)}, e_{ord}(H))$. The most straightforward is to compute matrices for the Hecke operators $U_\ell$ (for $\ell \mid Np$) and $T_\ell$ (otherwise) on the ordinary subspace for many small $\ell$ by explicitly computing on the $q$-expansion basis for the ordinary subspace output by Algorithm \ref{Alg-A}. One can then try to project onto the ``$f^{*(p)}$-eigenspace'' using any one of these matrices. One difficulty which arises is that congruences between eigenforms may force a small loss of $p$-adic precision during this step. (Congruences with Eisenstein series can be avoided for $k \geq 2$ by using classical methods to compute a basis for the ordinary cuspidal subspace, and working with that space instead.) We did not carry out a rigorous analysis of what loss of precision could occur due to these congruences, but in our examples it was never more than a few $p$-adic digits and one could always determine exactly what loss of precision had occurred after the experiment. The author understands from discussions with Wiles that one should be able to compute an ``upper bound'' on the $p$-adic congruences which can occur, and thus on the loss of precision. This bound of course is entirely independent of the precision $m$. However, such a calculation is beyond the scope of this paper. }\label{Note-HeckeOps} \end{enumerate} \end{note} \subsubsection{Single and double product L-functions}\label{Sec-SingleDouble} The author understands that the usual $p$-adic L-functions can be computed using our methods, by substituting Eisenstein series for newforms in the appropriate places in the triple product L-function, cf. \cite[Section 3]{BD}. However, he has not looked at this application at all, as the methods based upon overconvergent modular symbols are already very good (for $k \geq 2$) \cite{PS}. One can similarly compute double product Rankin $p$-adic L-functions using our approach. In particular, we have used our algorithm to compute a (suitably defined) Rankin double product $p$-adic L-function special value ``${\mathcal L}_p({\mathbf f},{\mathbf g})(2,1)$'' for $f$ of weight $2$ and $g$ of weight $1$, see the forthcoming \cite{DLR} and also \cite[Conjecture 10.1]{DR-Arizona}. \section{Examples}\label{Sec-Examples} In this section we shall freely use the notation from \cite[Section 1]{DR-GrossZagier}. We implemented our basic algorithms in both {\sc Magma} and {\sc Sage}, but focussed our refinements on the former, and all the examples we present here were computed using this package. The running time and space for the examples varied from around 100 seconds with 201 MB RAM (Example \ref{Ex-77a}) to around 19000 seconds with 9.7 GB RAM (Example \ref{Ex-43a}) on a 2.93 GHz machine. All of the examples here are for weights $k,l,m$ with $k = m - l + 2$ and $t = l - 2$, where $t = 0$, i.e., $l = 2$ and $k = m$ (and in fact $f = h$).
We implemented our algorithm for arbitrary $t \geq 0$ and computed ${\mathcal L}_p({\mathbf f},{\mathbf g},{\mathbf h})(k,l,m)$ in cases when $t > 0$; however, the author does not know of any geometric constructions of points when $t > 0$ (or even when $f \ne h$) and so we do not present these computations here. We begin with an example of the explicit construction of rational points mentioned in our introduction, see Equation (\ref{Eqn-Pg}). \begin{example}\label{Ex-57a} Let $E_f\,:\, y^2 + y = x^3 - x^2 - 2x + 2$ be the rank $1$ curve of conductor $57$ with Cremona label ``57a'' associated to the cusp form \[ f:= q - 2q^2 - q^3 + 2q^4 - 3q^5 + 2q^6 - 5q^7 + q^9 + \cdots .\] We choose two other newforms of level $57$ (associated to curves of rank zero): \[ \begin{array}{rcl} g_1 & := & q + q^2 + q^3 - q^4 - 2q^5 + q^6 - 3q^8 + q^9 - \cdots \\ g_2 & := & q - 2q^2 + q^3 + 2q^4 + q^5 - 2q^6 + 3q^7 + q^9 - \cdots. \end{array} \] Taking $p := 5$ and writing ${\mathbf f},{\mathbf g}_1,{\mathbf g}_2$ for the Hida families we compute the special values \[ \begin{array}{rcl} {\mathcal L}_5({\mathbf g}_1,{\mathbf f},{\mathbf g}_1)(2,2,2) & \equiv & -260429402433721822483 \bmod{5^{30}}\\ 5 {\mathcal L}_5({\mathbf g}_2,{\mathbf f},{\mathbf g}_2)(2,2,2) & \equiv & -279706401244025789341 \bmod{5^{31}}. \end{array} \] One computes that for each newform $g_i$, if one multiplies the operator of projection onto the $g_i$-eigenspace by $3$ then one obtains an element in the integral (rather than rational) Hecke algebra. Thus equation (\ref{Eqn-Pg}) predicts that there exist global points $P_1,P_2 \in E_f(\mbox{\Bb{Q}})$ such that \[ \log_{E_f}(P_i) = 6 \times \frac{{\mathcal E}_0(g_i) {\mathcal E}_1(g_i)}{{\mathcal E}(g_i,f,g_i)} \times {\mathcal L}_5({\mathbf g}_i,{\mathbf f},{\mathbf g}_i)(2,2,2) .\] One finds \[ \begin{array}{rcl} \log_{E_f}(P_1) & \equiv & 37060573996879427247 \times 5 \bmod{5^{30}}\\ \log_{E_f}(P_2) & \equiv & -18578369245374641968 \times 5 \bmod{5^{30}}. \end{array} \] Adapting the method in \cite[Section 2.7]{KP} we recover the points \[ \begin{array}{rcccl} P_1 & = & \left(-\frac{1976}{7569},\frac{750007}{658503}\right) & = & -16 P \end{array} \] and $P_2 = (0,1) = 4P$, where $P := (2,-2)$ is a generator for $E_f(\mbox{\Bb{Q}})$. \end{example}
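One ingredient in recovering such rational points from $p$-adic data is the recognition of a rational number from its residue modulo $5^{30}$; we do not claim this is precisely the method of \cite[Section 2.7]{KP}, but the following self-contained Python sketch shows the standard extended-Euclidean rational reconstruction recovering the $x$-coordinate of $P_1$ (the residue is manufactured here from the known answer, purely for illustration).
\begin{verbatim}
from math import gcd, isqrt

def rational_reconstruction(a, N):
    # Standard extended-Euclidean ("Wang") rational reconstruction: return
    # (num, den) with num = den * a (mod N) and |num|, |den| <= sqrt(N/2),
    # or None if no such pair exists.
    a %= N
    bound = isqrt(N // 2)
    r0, t0 = N, 0
    r1, t1 = a, 1
    while r1 > bound:                  # invariant: r_i = t_i * a (mod N)
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if t1 != 0 and abs(t1) <= bound and gcd(t1, N) == 1:
        return (r1, t1) if t1 > 0 else (-r1, -t1)
    return None

N = 5 ** 30
x_residue = (-1976 * pow(7569, -1, N)) % N   # what a p-adic computation would give
print(rational_reconstruction(x_residue, N))  # (-1976, 7569)
\end{verbatim}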
Next we look at an example where the Darmon-Rotger formula may be applied, but the application to constructing points has not been fully worked out. (At least, at the time of the author's computations --- we understand from a personal communication from Darmon and Rotger that this has now been done.) \begin{example} Let $E_f\,:\, y^2 + xy + y = x^3 - x^2$ be the rank $1$ curve of conductor $53$ with Cremona label ``53a'' associated to the cusp form \[ f:= q - q^2 - 3 q^3 - q^4 + 3 q^6 - 4 q^7 + 3 q^8 + 6 q^9 + \cdots .\] There is one newform $g$ of level $53$ and weight $4$ and trivial character with rational Fourier coefficients: \[ g := q + q^3 - 8 q^4 - 18 q^5 + 2 q^7 - 26 q^9 + 54 q^{11} + \cdots.\] Taking $p:=7$ and writing ${\mathbf f}$ and ${\mathbf g}$ for the Hida families we compute the special value \[ {\mathcal L}_7({\mathbf g},{\mathbf f},{\mathbf g})(4,2,4) \equiv -12581507765759084963366603 \bmod{7^{30}}.\] The Darmon-Rotger formula \cite[Theorem 1.3]{DR-GrossZagier} then predicts that \[ {\rm AJ}_7(\Delta)(\eta_g^{{\rm u-r}} \otimes \omega_f \otimes \omega_g) = \frac{{\mathcal E}_0(g) {\mathcal E}_1(g)}{{\mathcal E}(g,f,g)} {\mathcal L}_7({\mathbf g},{\mathbf f},{\mathbf g})(4,2,4)\] and we find that \[ {\rm AJ}_7(\Delta)(\eta_g^{{\rm u-r}} \otimes \omega_f \otimes \omega_g) \equiv 1025211670724558054729221 \times 7 \bmod{7^{30}}.\] Equation (\ref{Eqn-Pg}) does not apply in this setting, but one can hope that this equals $\log_{E_f}(P)$ for some point $P \in E_f(\mbox{\Bb{Q}}) \otimes \mbox{\Bb{Q}}$. Exponentiating one finds a point $\hat{P} = (x(\hat{P}),y(\hat{P})) \in E_1(\mbox{\Bb{Q}}_7)$ for which $7^2 x(\hat{P})$ and $7^3 y(\hat{P})$ are known modulo $7^{30}$ (where $E := E_f$). We have $|E(\mbox{\Bb F}_7)| = 12$ and translating $\hat{P}$ by elements $Q \in E(\mbox{\Bb{Q}}_7)[12]$ we find exactly one rational point, $P = (0,-1)$ (see the method in \cite[Section 2.7]{KP}). Thus we have computed a generator in a rather elaborate manner. \end{example} The author also considered again the curve $E_f$ with Cremona label ``57a'' but took $g$ to be the unique newform of level $57$ and weight $4$ with trivial character and rational Fourier coefficients, and found that ${\rm AJ}_5(\Delta)(\eta_g^{{\rm u-r}} \otimes \omega_f \otimes \omega_g) \equiv - \frac{15}{13} \log_{E_f}(P) \bmod{5^{31}}$ for $P := (2,-2)$ a generator of $E_f(\mbox{\Bb{Q}})$. (So here $k = p - 1$, and we used the ``twisted'' version of the algorithm described in Note \ref{Note-Minor} (\ref{Note-kbig}).) The next example has a similar flavour but involves cusp forms of odd weight. \begin{example}\label{Ex-43a} Let $E_f\,:\, y^2 + y = x^3 + x^2$ be the rank $1$ curve of conductor $43$ with Cremona label ``43a'' associated to the cusp form \[ f:= q - 2 q^2 - 2 q^3 + 2 q^4 - 4 q^5 + 4 q^6 + q^9 + 8 q^{10} + 3 q^{11} + \cdots .\] Let $\chi$ be the Legendre character modulo $43$. Then we find unique newforms $g \in S_3(43,\chi)$ and $h \in S_5(43,\chi)$ with rational Fourier coefficients: \[ \begin{array}{rcl} g & := & q + 4 q^4 + 9 q^9 - 21 q^{11} + \cdots \\ h & := & q + 16 q^4 + 81 q^9 + 199 q^{11} + \cdots. \end{array} \] Taking $p:=11$ and writing ${\mathbf f},{\mathbf g}$ and ${\mathbf h}$ for the Hida families we compute the special values \[ \begin{array}{rcl} {\mathcal L}_{11}({\mathbf g},{\mathbf f},{\mathbf g})(3,2,3) & \equiv &-7831319270947510009065871543799 \bmod{11^{30}}\\ {\mathcal L}_{11}({\mathbf h},{\mathbf f},{\mathbf h})(5,2,5) & \equiv & 4791560577275108790581414445515 \bmod{11^{30}}.
\end{array} \] Using the Darmon-Rotger formula we compute \[ {\rm AJ}_{11}(\Delta)(\eta_g^{{\rm u-r}} \otimes \omega_f \otimes \omega_g) \equiv -646073276230754578213318125190 \times 11 \bmod{11^{30}}.\] Rather than attempt to recover a point from this, we take the generator $P = (0,0)$ for $E_f(\mbox{\Bb{Q}})$ and compute $\log_{E_f}(P)$ and then try to determine a relationship. One finds \[ {\rm AJ}_{11}(\Delta)(\eta_g^{{\rm u-r}} \otimes \omega_f \otimes \omega_g) \equiv \frac{258}{107} \log_{E_f}(P) \bmod{11^{30}}.\] (We checked that multiplying the relevant projection operator by $2 \times 107$ gives an element in the integral Hecke algebra.) Similarly we found \[ {\rm AJ}_{11}(\Delta)(\eta_h^{{\rm u-r}} \otimes \omega_f \otimes \omega_h) \equiv -\frac{6708}{5647} \log_{E_f}(P) \bmod{11^{30}}.\] \end{example} The examples above suggest the construction in \cite{DRS} can be generalised, at least in a $p$-adic setting. We now look at some examples in which one removes one of the main conditions in the Darmon-Rotger theorem \cite[Theorem 1.3]{DR-GrossZagier} itself, that the prime does not divide the level. In each example, rather than try to recover a rational point, we look for an algebraic relationship between the logarithm of a generator and the special value we compute. \begin{example}\label{Ex-77a} Let $E_f\,:\, y^2 + y = x^3 + 2 x$ be the rank $1$ curve of conductor $77$ with Cremona label ``77a'' associated to the cusp form \[ f := q - 3 q^3 - 2 q^4 - q^5 - q^7 + 6 q^9 - q^{11} + \cdots .\] Let $g$ be the level $11$ and weight $2$ newform (associated to a rank zero elliptic curve): \[ g := q - 2 q^2 - q^3 + 2 q^4 + q^5 + 2 q^6 - 2 q^7 - 2 q^9 - 2 q^{10} + q^{11} + \cdots .\] We take the prime $p := 7$, which divides the level of $f$, and, writing ${\mathbf f}$ and ${\mathbf g}$ for the Hida families, compute \[ {\mathcal L}_{7}({\mathbf g},{\mathbf f},{\mathbf g})(2,2,2) \equiv -1861584104004734313229493 \times 7 \bmod{7^{31}}.\] Taking the generator $P = (2,3)$ we compute $\log_{E_f}(P) \bmod{7^{31}}$ and find that $\log_{E_f}(P)/(7\,{\mathcal L}_{7}({\mathbf g},{\mathbf f},{\mathbf g})(2,2,2))$ satisfies the quadratic equation $1600 t^2 + 48 t + 9 = 0$ modulo $7^{29}$. \end{example} The factor $p$ which occurs in the expression relating the special value to the logarithm of a point when the prime divides the level is also seen in the next examples. \begin{example}\label{Ex-469} Let $E_f\,:\, y^2 + xy + y = x^3 - 80x - 275$ and $E_g: y^2 + xy + y = x^3 - x^2 - 12x + 18$ be the rank $1$ curves of conductor $469$ with Cremona labels ``469a'' and ``469b'', respectively, associated to the cusp forms \[ \begin{array}{rcl} f & := & q + q^2 + q^3 - q^4 - 3 q^5 + q^6 - q^7 - 3 q^8 - 2 q^9 - 3 q^{10} + \cdots\\ g & := & q - q^2 - 3 q^3 - q^4 + q^5 + 3 q^6 - q^7 + 3 q^8 + 6 q^9 - q^{10} + \cdots . \end{array} \] Taking the prime $p := 7$ we compute \[ \begin{array}{rcl} {\mathcal L}_{7}({\mathbf g},{\mathbf f},{\mathbf g})(2,2,2) & \equiv & 1435409545849510941783817 \bmod{7^{30}}\\ {\mathcal L}_{7}({\mathbf f},{\mathbf g},{\mathbf f})(2,2,2) & \equiv & 6915472639041460159095363 \bmod{7^{30}}.
\end{array} \] Using generators $P_f = (-5,4)$ and $P_g = (2,-1)$ for $E_f(\mbox{\Bb{Q}})$ and $E_g(\mbox{\Bb{Q}})$, respectively, we found \[ \begin{array}{rcl} 7 {\mathcal L}_{7}({\mathbf g},{\mathbf f},{\mathbf g})(2,2,2) & \equiv & 4 \log_{E_f}(P_f) \bmod{7^{30}}\\ 35 {\mathcal L}_{7}({\mathbf f},{\mathbf g},{\mathbf f})(2,2,2) & \equiv & -16 \log_{E_g}(P_g) \bmod{7^{30}}. \end{array} \] \end{example} In the above example the ``tame'' level used in our computation was $N = 67 = \frac{469}{7}$. In the next example it is one: for tame level one the author's algorithm does not use the theory of modular symbols at all, cf. \cite[Section 3.2]{AL}. \begin{example} Let $E_f\,:\, y^2 + xy + y = x^3 + x^2 - x $ be the rank $1$ curve of conductor $89$ with Cremona label ``89a'' associated to the cusp form \[ f := q - q^2 - q^3 - q^4 - q^5 + q^6 - 4 q^7 + 3 q^8 - 2 q^9 + q^{10} - 2 q^{11} + \cdots .\] Let $g$ be the level $89$ and weight $2$ newform (associated to a rank zero elliptic curve): \[ g := q + q^2 + 2 q^3 - q^4 - 2 q^5 + 2 q^6 + 2 q^7 - 3 q^8 + q^9 - 2 q^{10} - 4 q^{11} + \cdots .\] Taking the prime $p := 89$ we found that \[ 89 {\mathcal L}_{89}({\mathbf g},{\mathbf f},{\mathbf g})(2,2,2) \equiv 72 \log_{E_f}(P) \bmod{89^{21}}\] where $P = (0,0)$ is a generator. \end{example} The author understands that the above examples are consistent with ongoing work of Darmon and Rotger to generalise their formula to the situation in which the prime $p$ does divide the level $N$ \cite{DR-2}. \end{document}
\begin{document} \begin{abstract} The \emph{WL-rank} of a digraph $\Gamma$ is defined to be the rank of the coherent configuration of $\Gamma$. We construct a new infinite family of strictly Deza Cayley graphs for which the WL-rank is equal to the number of vertices. The graphs from this family are integral divisible design graphs. \\ \\ \textbf{Keywords}: WL-rank, Cayley graphs, Deza graphs. \\ \\ \textbf{MSC}: 05C25, 05C60, 05C75. \end{abstract} \title{On WL-rank of Deza Cayley graphs} \section{Introduction} Let $V$ be a finite set and $|V|=n$. A \emph{coherent configuration} $\mathcal{X}$ on $V$ can be thought of as a special partition of $V\times V$ for which the diagonal of $V\times V$ is a union of classes (see~\cite[Definition~2.1.3]{CP}). The number of classes is called the \emph{rank} of $\mathcal{X}$. Let $\Gamma=(V,E)$ be a digraph with vertex set $V$ and arc set $E$. The \emph{WL-rank} (the \emph{Weisfeiler-Leman rank}) of $\Gamma$ is defined to be the rank of the smallest coherent configuration on the set $V$ for which $E$ is a union of classes. The term ``WL-rank of a digraph'' was introduced in~\cite{BPR}. This term was chosen because the coherent configuration of a digraph can be found using the Weisfeiler-Leman algorithm~\cite{WeisL}. Since the diagonal of $V\times V$ is a union of classes of any coherent configuration on $V$, we conclude that $\rkwl(\Gamma)\geq 2$ unless $|V|=1$. One can verify that $\rkwl(\Gamma)\leq 2$ if and only if $\Gamma$ is complete or empty. On the other hand, obviously, $\rkwl(\Gamma)\leq n^2$. From~\cite[Lemma~2.1 (2)]{BPR} it follows that if $\Gamma$ is vertex-transitive then $\rkwl(\Gamma)\leq n$. Let $G$ be a finite group, $|G|=n$, and $S$ an identity-free subset of $G$. The \emph{Cayley digraph} $\cay(G,S)$ is defined to be the digraph with vertex set $G$ and arc set $\{(g,sg):~s\in S,~g\in G\}$. If $S$ is inverse-closed then $\cay(G,S)$ is a \emph{Cayley graph}. If $\Gamma$ is a Cayley digraph over $G$ then $\aut(\Gamma)\geq G_{\mathrm{right}}$, where $G_{\mathrm{right}}$ is the subgroup of $\sym(G)$ induced by right multiplications of $G$. This implies that $\Gamma$ is vertex-transitive and hence $\rkwl(\Gamma)\leq n$. A $k$-regular graph $\Gamma$ is called \emph{strongly regular} if there exist nonnegative integers $\lambda$ and $\mu$ such that every two adjacent vertices have $\lambda$ common neighbors and every two nonadjacent vertices have $\mu$ common neighbors. The following generalization of the notion of a strongly regular graph was introduced in~\cite{EFHHH} and goes back to~\cite{Deza}. A $k$-regular graph $\Gamma$ on $n$ vertices is called a \emph{Deza} graph if there exist nonnegative integers $\alpha$ and $\beta$ such that any pair of distinct vertices of $\Gamma$ has either $\alpha$ or $\beta$ common neighbors. The numbers $(n,k,\beta,\alpha)$ are called the \emph{parameters} of $\Gamma$. Clearly, if $\alpha>0$ and $\beta>0$ then $\Gamma$ has diameter~$2$. A Deza graph is called a \emph{strictly} Deza graph if it is nonstrongly regular and has diameter~$2$. The WL-rank of a strongly regular graph is at most~$3$ (see~\cite[Lemma~2.1 (2)]{BPR}). It is a natural question how large the WL-rank of a Deza graph $\Gamma$ can be. In this paper we are interested in the WL-rank of Deza Cayley graphs. The WL-rank of a nonstrictly Deza Cayley graph can be arbitrarily large.
For example, an undirected cycle on $n$ vertices is a nonstrictly Deza graph of WL-rank~$[\frac{n}{2}]+1$ (see~\cite{BPR}). However, strictly Deza graphs seem close to strongly regular graphs. All known strictly Deza Cayley graphs over cyclic groups have WL-rank at most~$6$~\cite{BPR}. As was noted above, the WL-rank of any Cayley graph does not exceed the number of vertices of this graph. It turns out that there exists an infinite family of strictly Deza Cayley graphs whose WL-rank is equal to the number of vertices. This follows from the theorem below, which is the main result of this paper. The cyclic and dihedral groups of order $n$ are denoted by $C_n$ and $D_n$ respectively. \begin{theo1}\label{main} Let $k\geq 3$ be an odd integer, $G\cong D_{2k}\times C_2\times C_2$, and $n=|G|$. There exists a strictly Deza Cayley graph $\Gamma$ over $G$ such that $\rkwl(\Gamma)=n$. \end{theo1} Note that the graphs from Theorem~\ref{main} are integral divisible design graphs (see Section~$4$). We finish the introduction with a brief outline of the paper. If $\Gamma=\cay(G,S)$ then the WL-rank of $\Gamma$ is equal to the rank of the smallest $S$-ring over $G$ for which $S$ is a union of basic sets. The necessary background on $S$-rings and Cayley graphs is provided in Section~$2$. In Section~$3$ we construct the required family of strictly Deza Cayley graphs and prove Theorem~\ref{main}. In Section~$4$ we prove that each graph from the constructed family is an integral divisible design graph (Lemma~\ref{l4}), has the same parameters as the grid graph but is not isomorphic to it (Lemma~\ref{l5}), and can be identified efficiently (Lemma~\ref{l6}). The authors would like to thank Prof.~I.~Ponomarenko for his valuable comments, which helped us to improve the text significantly. \section{Preliminaries} In this section we provide background on $S$-rings and Cayley graphs. In general, we follow~\cite{BPR,Ry1,Ry2}, where most of the definitions and statements can be found. \subsection{$S$-rings} Let $G$ be a finite group and $\mathbb{Z}G$ the integer group ring. The identity element of $G$ and the set of all nonidentity elements of $G$ are denoted by~$e$ and~$G^\#$ respectively. If $X\subseteq G$ then the element $\sum \limits_{x\in X} {x}$ of the group ring $\mathbb{Z}G$ is denoted by~$\underline{X}$. A straightforward computation shows that $\underline{G}^2=|G|\underline{G}$. The set $\{x^{-1}:x\in X\}$ is denoted by $X^{-1}$. A subring $\mathcal{A}\subseteq \mathbb{Z} G$ is called an \emph{$S$-ring} (a \emph{Schur} ring) over $G$ if there exists a partition $\mathcal{S}=\mathcal{S}(\mathcal{A})$ of~$G$ such that: $(1)$ $\{e\}\in\mathcal{S}$; $(2)$ if $X\in\mathcal{S}$ then $X^{-1}\in\mathcal{S}$; $(3)$ $\mathcal{A}=\Span_{\mathbb{Z}}\{\underline{X}:\ X\in\mathcal{S}\}$. \noindent The notion of an $S$-ring goes back to Schur~\cite{Schur} and Wielandt~\cite{Wi}. The elements of $\mathcal{S}$ are called the \emph{basic sets} of $\mathcal{A}$ and the number $\rk(\mathcal{A})=|\mathcal{S}|$ is called the \emph{rank} of~$\mathcal{A}$. The group ring $\mathbb{Z}G$ is an $S$-ring over $G$ corresponding to the partition of $G$ into singletons and $\rk(\mathbb{Z}G)=|G|$. The following lemma provides a well-known property of $S$-rings (see, e.g.~\cite[Lemma~2.4]{Ry1}). \begin{lemm}\label{basicset} Let $\mathcal{A}$ be an $S$-ring over a group $G$.
If $X,Y\in \mathcal{S}(\mathcal{A})$ then $XY\in \mathcal{S}(\mathcal{A})$ whenever $|X|=1$ or $|Y|=1$. \end{lemm} \begin{lemm}\label{groupring} Let $\mathcal{A}$ be an $S$-ring over a group $G$ and $X\subseteq G$ such that $\langle X \rangle=G$. Suppose that $\{x\}\in \mathcal{S}(\mathcal{A})$ for every $x\in X$. Then $\mathcal{A}=\mathbb{Z}G$. \end{lemm} \begin{proof} Let us prove that $\{g\}\in \mathcal{S}(\mathcal{A})$ for every $g\in G$. Since $\langle X \rangle=G$, there exist $x_1,\ldots,x_k\in X$ and $\varepsilon_1,\ldots,\varepsilon_k\in\{-1,1\}$ such that $g=x_1^{\varepsilon_1}\ldots x_k^{\varepsilon_k}$. We proceed by induction on $k$. Let $k=1$. If $\varepsilon_1=1$ then $\{g\}\in \mathcal{S}(\mathcal{A})$ by the assumption of the lemma; if $\varepsilon_1=-1$ then $\{g\}\in \mathcal{S}(\mathcal{A})$ by the assumption of the lemma and the second property from the definition of an $S$-ring. Now let $k\geq 2$. By the induction hypothesis, we have $\{x_1^{\varepsilon_1}\ldots x_{k-1}^{\varepsilon_{k-1}}\}\in \mathcal{S}(\mathcal{A})$ and $\{x_k^{\varepsilon_k}\}\in \mathcal{S}(\mathcal{A})$. So $\{g\}=\{x_1^{\varepsilon_1}\ldots x_{k-1}^{\varepsilon_{k-1}}\}\{x_k^{\varepsilon_k}\}\in \mathcal{S}(\mathcal{A})$ by Lemma~\ref{basicset}. \end{proof} A set $X \subseteq G$ is called an \emph{$\mathcal{A}$-set} if $\underline{X}\in \mathcal{A}$ or, equivalently, $X$ is a union of some basic sets of $\mathcal{A}$. The set of all $\mathcal{A}$-sets is denoted by $\mathcal{S}^*(\mathcal{A})$. Obviously, if $X\in \mathcal{S}^*(\mathcal{A})$ and $|X|=1$ then $X\in \mathcal{S}(\mathcal{A})$. It is easy to check that if $X,Y\in \mathcal{S}^*(\mathcal{A})$ then $$X\cap Y,X\cup Y, X\setminus Y, Y\setminus X, XY\in \mathcal{S}^*(\mathcal{A}).~\eqno(1)$$ A subgroup $H \leq G$ is called an \emph{$\mathcal{A}$-subgroup} if $H\in \mathcal{S}^*(\mathcal{A})$. For every $\mathcal{A}$-set $X$, the groups $\langle X \rangle$ and $\rad(X)=\{g\in G:~Xg=gX=X\}$ are $\mathcal{A}$-subgroups. \begin{lemm}\cite[Proposition~22.1]{Wi}\label{sw} Let $\mathcal{A}$ be an $S$-ring over $G$, $\xi=\sum \limits_{g\in G} c_g g\in \mathcal{A}$, where $c_g\in \mathbb{Z}$, and $c\in \mathbb{Z}$. Then $\{g\in G:~c_g=c\}\in \mathcal{S}^*(\mathcal{A})$. \end{lemm} Let $L \unlhd U\leq G$. A section $U/L$ is called an \emph{$\mathcal{A}$-section} if $U$ and $L$ are $\mathcal{A}$-subgroups. If $S=U/L$ is an $\mathcal{A}$-section then the module $$\mathcal{A}_S=\Span_{\mathbb{Z}}\left\{\underline{X}^{\pi}:~X\in\mathcal{S}(\mathcal{A}),~X\subseteq U\right\},$$ where $\pi:U\rightarrow U/L$ is the canonical epimorphism, is an $S$-ring over $S$. Let $S=U/L$ be an $\mathcal{A}$-section of $G$. The $S$-ring~$\mathcal{A}$ is called the \emph{$S$-wreath product} or \emph{generalized wreath product} of $\mathcal{A}_U$ and $\mathcal{A}_{G/L}$ if $L\trianglelefteq G$ and $L\leq\rad(X)$ for each basic set $X$ outside~$U$. In this case we write $\mathcal{A}=\mathcal{A}_U\wr_{S}\mathcal{A}_{G/L}$. If $L>\{e\}$ and $U<G$ then the $S$-wreath product is called \emph{nontrivial}. The notion of the generalized wreath product of $S$-rings was introduced in~\cite{EP1}. Since $L\leq\rad(X)$ for each basic set $X$ outside~$U$, the basic sets of $\mathcal{A}$ outside $U$ are in one-to-one correspondence with the basic sets of $\mathcal{A}_{G/L}$ outside $S$.
Therefore $$\rk(\mathcal{A}_U\wr_{S}\mathcal{A}_{G/L})=\rk(\mathcal{A}_U)+\rk(\mathcal{A}_{G/L})-\rk(\mathcal{A}_S).~\eqno(2)$$ The \emph{automorphism group} $\aut(\mathcal{A})$ of $\mathcal{A}$ is defined to be the group $$\bigcap \limits_{X\in \mathcal{S}(\mathcal{A})} \aut(\cay(G,X)).$$ Since $\aut(\cay(G,X))\geq G_{\mathrm{right}}$ for every $X\in \mathcal{S}(\mathcal{A})$, we conclude that $\aut(\mathcal{A})\geq G_{\mathrm{right}}$. It is easy to check that $\aut(\mathcal{A})=G_{\mathrm{right}}$ if and only if $\mathcal{A}=\mathbb{Z}G$. \subsection{Cayley graphs} Let $S\subseteq G$, $e\notin S$, and $\Gamma=\cay(G,S)$. The \emph{WL-closure} $\WL(\Gamma)$ of $\Gamma$ can be thought of as the smallest $S$-ring $\mathcal{A}$ over $G$ such that $S\in\mathcal{S}^*(\mathcal{A})$ (see~\cite[Section~5]{BPR}). If $\mathcal{A}=\WL(\Gamma)$ then $\rkwl(\Gamma)=\rk(\mathcal{A})$ by~\cite[Lemma~5.1]{BPR}. From~\cite[Theorem~2.6.4]{CP} it follows that $\aut(\Gamma)=\aut(\mathcal{A})$. \begin{lemm}\cite[Lemma~5.2]{BPR}\label{deza} Let $G$ be a group of order~$n$, $S\subseteq G$ such that $e\notin S$, $S=S^{-1}$, and $|S|=k$, and $\Gamma=\cay(G,S)$. The graph $\Gamma$ is a Deza graph with parameters $(n,k,\beta,\alpha)$ if and only if $\underline{S}^2=ke+\alpha\underline{X_{\alpha}}+\beta\underline{X_{\beta}}$, where $X_{\alpha}\cup X_{\beta}=G^\#$ and $X_{\alpha}\cap X_{\beta}=\varnothing$. Moreover, $\Gamma$ is strongly regular if and only if $X_{\alpha}=S$ or $X_{\beta}=S$. \end{lemm} \section{Proof of Theorem~1} Let $k\geq 3$ be an integer, $G=(\langle a \rangle \rtimes \langle b \rangle)\times \langle c \rangle \times \langle d \rangle$, where $|a|=k$, $|b|=|c|=|d|=2$, and $bab=a^{-1}$, and $n=|G|$. The groups $\langle a \rangle$, $\langle c\rangle$, and $\langle a \rangle \rtimes \langle b \rangle$ are denoted by $A$, $C$, and $H$ respectively. Clearly, $H\cong D_{2k}$, $G\cong D_{2k}\times C_2 \times C_2$, $|H|=2k$, and $|G|=8k$. Put $$S=b(A\setminus \{a^{-1}\})\cup c(A\cup\{b\})\cup \{db,dcba^{-1}\}.$$ One can see that $S=S^{-1}$ and $|S|=2(k+1)$. Put $\Gamma=\cay(G,S)$. Note that $\Gamma$ is $2(k+1)$-regular. \begin{lemm}\label{l1} In the above notation, the graph $\Gamma$ is a strictly Deza graph with parameters $(8k,2(k+1),2(k-1),2)$. \end{lemm} \begin{proof} A straightforward computation in the group ring $\mathbb{Z}G$ using the equalities $\underline{A}^2=k\underline{A}$, $b\underline{A}=\underline{A}b$, $bab=a^{-1}$, $cg=gc$, and $dg=gd$, where $g\in G$, implies that $$\underline{S}^2=2(k+1)e+2(k-1)(\underline{A}^\#+cb\underline{A})+2(b+c)\underline{A}+2d\underline{C}\underline{H}.~\eqno(3)$$ Indeed, $$\underline{S}^2=(b\underline{A}+c\underline{A}-ba^{-1}+cb+db+dcba^{-1})^2=$$ $$=4e+(2(k-1)e+2(k-1)cb+2b+2c+2d+2dc+2db+2dcb)\underline{A}=$$ $$=2(k+1)e+2(k-1)(\underline{A}^\#+cb\underline{A})+2(b+c)\underline{A}+2d\underline{C}\underline{H}.$$ From Lemma~\ref{deza} and Eq.~(3) it follows that $\Gamma$ is a nonstrongly regular Deza graph with parameters $(8k,2(k+1),2(k-1),2)$, $X_{2(k-1)}=A^\#\cup cbA$, and $X_2=bA\cup cA\cup d(C\times H)$. Since both $2$ and $2(k-1)$ are positive, the graph $\Gamma$ has diameter~$2$ and hence is a strictly Deza graph. \end{proof} All Deza Cayley graphs with at most~$60$ vertices, including the graphs from the constructed family for $k\leq 7$, were enumerated in~\cite{GSh}.
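For small $k$ the conclusion of Lemma~\ref{l1} can also be verified directly by machine. The following Python sketch (the tuple encoding of $G\cong D_{2k}\times C_2\times C_2$ is ours and purely illustrative) builds $\Gamma$ for $k=3$ and checks that it is $2(k+1)$-regular on $8k$ vertices and that every pair of distinct vertices has either $2$ or $2(k-1)$ common neighbours.
\begin{verbatim}
from itertools import product

k = 3   # any odd k >= 3 works here

# Encode g = a^i * b^e * c^x * d^y as (i, e, x, y); recall bab = a^{-1},
# and c, d are central of order 2.
def mul(g, h):
    i1, e1, x1, y1 = g
    i2, e2, x2, y2 = h
    return ((i1 + (-1) ** e1 * i2) % k, (e1 + e2) % 2,
            (x1 + x2) % 2, (y1 + y2) % 2)

def inv(g):
    i, e, x, y = g
    return (((-i) % k) if e == 0 else i, e, x, y)

G = [(i, e, x, y) for i in range(k) for e, x, y in product(range(2), repeat=3)]

S = set()
S |= {mul((0, 1, 0, 0), (j, 0, 0, 0)) for j in range(k) if j != k - 1}  # b(A\{a^{-1}})
S |= {mul((0, 0, 1, 0), (j, 0, 0, 0)) for j in range(k)}                # cA
S.add((0, 1, 1, 0))                                                     # cb
S.add((0, 1, 0, 1))                                                     # db
S.add(mul((0, 1, 1, 1), (k - 1, 0, 0, 0)))                              # dcb*a^{-1}
assert S == {inv(s) for s in S} and len(S) == 2 * (k + 1)

nbrs = {g: {mul(s, g) for s in S} for g in G}                # arcs (g, sg)
counts = {len(nbrs[u] & nbrs[v]) for u in G for v in G if u != v}
print(len(G), len(S), sorted(counts))   # 24 8 [2, 4]
\end{verbatim}
The printed common-neighbour counts $\{2,4\}$ agree with the parameters $(24,8,4,2)$ predicted by Lemma~\ref{l1} for $k=3$.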
If $k$ is odd then $A_1=A$; if $k$ is even then $|A:A_1|=2$. The group $A_1$ is normal in $G$. So one can form the group $L=A_1\rtimes \langle cb \rangle$, which is isomorphic to $D_{2k}$ if $k$ is odd and to $D_k$ if $k$ is even. It can be verified in a straightforward way that $L$ is normal in $G$. Put $U=\langle L,ca,da\rangle$ and $S=U/L$. Since $L\cap \langle ca \rangle=L\cap \langle da \rangle=A_1$, we obtain $|U:L|=4$. \begin{lemm}\label{l2} In the above notations, $\WL(\Gamma)=\mathbb{Z}G$ if $k$ is odd and $\WL(\Gamma)=\mathbb{Z}U\wr_S \mathbb{Z}(G/L)$ if $k$ is even. \end{lemm} \begin{proof} Let $\mathcal{A}=\WL(\Gamma)$. Put $V=A^\#\cup cbA$. From Eq.~(3) it follows that every element of $V$ enters the element $\underline{S}^2$ with coefficient~$2(k-1)$ and any other element of $G$ enters $\underline{S}^2$ with coefficient distinct from~$2(k-1)$. Together with $S\in \mathcal{S}^*(\mathcal{A})$ and Lemma~\ref{sw}, this implies that $V\in \mathcal{S}^*(\mathcal{A})$. So $$V\cap S=\{cb\}\in \mathcal{S}(\mathcal{A})~\eqno(4)$$ by Eq.~(1). Since $S,\{cb\}\in \mathcal{S}^*(\mathcal{A})$, Eq.~(1) implies that $cbS,Scb\in \mathcal{S}^*(\mathcal{A})$. So $$S_1=(cbS\setminus Scb)\cap S=\{ca\}\in \mathcal{S}(\mathcal{A})~\eqno(5)$$ by Eq.~(1). Now from Eqs.~(1) and~(5) it follows that $$S_2=(cbS\setminus Scb)\setminus S_1=\{da^{-1}\}\in \mathcal{S}(\mathcal{A}).~\eqno(6)$$ Due to Eqs.~(1) and~(5), we obtain $S_1S_1=\{a^2\}\in \mathcal{S}(\mathcal{A})$. Since $a^2$ is a generator of $A_1$, Lemma~\ref{groupring} yields that $$\mathcal{A}_{A_1}=\mathbb{Z}A_1.~\eqno(7)$$ Let $k$ be odd. Then $A_1=A$ and $G=\langle A, cb, ca, da^{-1} \rangle$. From Eqs.~(4)--(7) and Lemma~\ref{groupring} it follows that $\mathcal{A}=\mathbb{Z}G$. Let $k$ be even. The partition of $G$ into the sets $$\{g\},~g\in U,~La,~Lc,~Ld,~Lcda$$ defines the $S$-ring $\mathcal{B}$ over $G$ such that $\mathcal{B}=\mathbb{Z}U\wr_S \mathbb{Z}(G/L)$. Note that $S=Lc\cup S_U$, where $$S_U=b(A\setminus (A_1\cup \{a^{-1}\}))\cup c((A\setminus A_1)\cup \{b\})\cup \{db,dcba^{-1}\}\subseteq U.$$ So $S\in \mathcal{S}^*(\mathcal{B})$ and hence $\mathcal{B}\geq \mathcal{A}$. Let us prove that $\mathcal{B}\leq \mathcal{A}$. Observe that $da\in da^{-1}A_1$. So $$\{da\}\in \mathcal{S}(\mathcal{A})~\eqno(8)$$ by Eqs.~(6)--(7) and Lemma~\ref{basicset}. Eqs.~(4),~(5),~(7),~(8), and Lemma~\ref{groupring} imply that $U$ is an $\mathcal{A}$-subgroup and $$\mathcal{A}_U=\mathbb{Z}U=\mathcal{B}_U.~\eqno(9)$$ Since $S\in \mathcal{S}^*(\mathcal{A})$ and $U\in \mathcal{S}^*(\mathcal{A})$, Eq.~(1) implies that $$S\setminus U=Lc\in \mathcal{S}^*(\mathcal{A}).~\eqno(10)$$ From Eqs.~(1),~(5),~(6),~(8), and~(10) it follows that $$La=Lc\{ca\},~Ld=Lc\{ca\}\{da^{-1}\},~Lcda=Lc\{da\}\in\mathcal{S}^*(\mathcal{A}).$$ Together with Eq.~(9), this implies that every basic set of $\mathcal{B}$ is an $\mathcal{A}$-set and hence $\mathcal{B}\leq \mathcal{A}$. Thus, $\mathcal{B}=\mathcal{A}$ and we are done. \end{proof} \begin{rem} If $k$ is odd then $\aut(\Gamma)=\aut(\mathbb{Z}G)=G_{\mathrm{right}}$. If $k$ is even then $\aut(\Gamma)=\aut(\mathbb{Z}U\wr_S \mathbb{Z}(G/L))$ is the canonical generalized wreath product of $U_{\mathrm{right}}$ by $(G/L)_{\mathrm{right}}$ (see~\cite[Section~5.3]{EP2} for the definitions).
\end{rem} \begin{lemm}\label{l3} In the above notations, $\rkwl(\Gamma)=8k=n$ if $k$ is odd and $\rkwl(\Gamma)=4k+4=\frac{n}{2}+4$ if $k$ is even. \end{lemm} \begin{proof} If $k$ is odd then $\WL(\Gamma)=\mathbb{Z}G$ by Lemma~\ref{l2} and hence $\rkwl(\Gamma)=\rk(\mathbb{Z}G)=8k$. Let $k$ be even. Then $\WL(\Gamma)=\mathbb{Z}U\wr_S \mathbb{Z}(G/L)$ by Lemma~\ref{l2}. Since $|L|=k$ and $|U:L|=4$, we have $|U|=4k$ and hence $\rk(\mathbb{Z}U)=4k$. Observe that $|G/L|=8$ and $|S|=|U/L|=4$. So $\rk(\mathbb{Z}(G/L))=8$ and $\rk(\mathbb{Z}S)=4$. Therefore $$\rkwl(\Gamma)=\rk(\mathbb{Z}U\wr_S \mathbb{Z}(G/L))=\rk(\mathbb{Z}U)+\rk(\mathbb{Z}(G/L))-\rk(\mathbb{Z}S)=4k+4$$ by Eq.~(2). \end{proof} Theorem~\ref{main} follows from Lemma~\ref{l1} and Lemma~\ref{l3}. \section{Some properties of $\Gamma$} In this section we collect some properties of the graph $\Gamma$ constructed in the previous section. A $k$-regular graph on $n$ vertices is called a \emph{divisible design graph} (\emph{DDG}) with parameters $(n,k,\alpha,\beta,m,l)$ if its vertex set can be partitioned into $m$ classes of size $l$ such that every two distinct vertices from the same class have $\alpha$ common neighbors and every two vertices from different classes have $\beta$ common neighbors. For a divisible design graph, the partition into classes is called the \emph{canonical partition}. The notion of a divisible design graph was introduced in~\cite{HKM} as a generalization of $(v,k,\lambda)$-graphs~\cite{Rud}. For more information on divisible design graphs, we refer the reader to~\cite{HKM,KS}. A graph is called \emph{integral} if all eigenvalues of its adjacency matrix are integers. The investigation of integral graphs goes back to~\cite{HS}. More information on spectra of graphs and on integral graphs can be found in~\cite{BH}. \begin{lemm}\label{l4} The graph $\Gamma$ is an integral divisible design graph. \end{lemm} \begin{proof} From~\cite[Theorem~1.1]{KS} it follows that $\Gamma$ is a divisible design graph if and only if, in the notation of Lemma~\ref{deza}, $X_2\cup \{e\}$ or $X_{2(k-1)}\cup \{e\}$ is a subgroup of $G$. Moreover, the canonical partition of $G$ is the partition into the right cosets of this subgroup. Eq.~(3) implies that $X_{2(k-1)}\cup\{e\}=A\cup cbA$. Since $A$ is normal in $G$, $|cb|=2$, and $a^{cb}=a^{-1}$, the set $X_{2(k-1)}\cup\{e\}$ is a subgroup of $G$ isomorphic to $D_{2k}$. Therefore $\Gamma$ is a divisible design graph with parameters $(8k,2(k+1),2(k-1),2,4,2k)$. Since $\Gamma$ is a divisible design graph, one can calculate the eigenvalues of its adjacency matrix from its parameters by the formulas from~\cite[Lemma~2.1]{HKM}. It turns out that the set of eigenvalues of the adjacency matrix of $\Gamma$ is equal to $\{2(k+1),\pm 2(k-1),\pm 2\}$. This implies that $\Gamma$ is integral. \end{proof} Recall that the \emph{$(l\times m)$-grid} is the line graph of the complete bipartite graph $K_{l,m}$ (see~\cite[p.~440]{BKN}). \begin{lemm}\label{l5} The graph $\Gamma$ has the same parameters as the $(4 \times 2k)$-grid but is not isomorphic to the $(4 \times 2k)$-grid. \end{lemm} \begin{proof} Let $\Gamma^{\prime}$ be the graph isomorphic to the $(4\times 2k)$-grid. The graph $\Gamma^{\prime}$ has parameters $(8k,2(k+1),2(k-1),2)$ by~\cite[Construction~4.8]{HKM}.
However, due to~\cite[Example~3.2.12]{CP}, we obtain $\rkwl(\Gamma^{\prime})=4$ and $\aut(\Gamma^{\prime})\cong \sym(4) \times \sym(2k)$. So $\Gamma$ is not isomorphic to $\Gamma^{\prime}$. \end{proof} The \emph{Weisfeiler-Leman dimension} $\dimwl(\Delta)$ of a graph $\Delta$ is defined to be the smallest positive integer~$m$ for which $\Delta$ is identified by the $m$-dimensional Weisfeiler-Leman algorithm~\cite[Definition~18.4.2]{Grohe}. If $\dimwl(\Delta)\leq m$ then isomorphism between $\Delta$ and any other graph can be tested in time $n^{O(m)}$ using the Weisfeiler-Leman algorithm~\cite{WeisL}. The Weisfeiler-Leman dimension of Deza circulant graphs was studied in~\cite{BPR}. \begin{lemm}\label{l6} The Weisfeiler-Leman dimension of $\Gamma$ is equal to~$2$. \end{lemm} \begin{proof} The $S$-ring $\WL(\Gamma)$ is separable in the sense of~\cite[Section~4.2]{BPR}. Indeed, if $k$ is odd then $\WL(\Gamma)=\mathbb{Z}G$ by Lemma~\ref{l2} and the required statement follows from~\cite[Theorem~2.3.33]{CP}. If $k$ is even then $\WL(\Gamma)=\mathbb{Z}U\wr_S \mathbb{Z}(G/L)$ by Lemma~\ref{l2} and the required statement follows from~\cite[Theorem~3.4.23]{CP}. The separability of $\WL(\Gamma)$ and~\cite[Theorem~2.5]{FKV} imply that $\dimwl(\Gamma)\leq 2$. Since $\Gamma$ is regular but not strongly regular, $\dimwl(\Gamma)\neq 1$ by~\cite[Lemma~3.2]{BPR}. Thus, $\dimwl(\Gamma)=2$. \end{proof} \end{document}
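As a quick sanity check of the construction above, the strictly Deza parameters $(8k,2(k+1),2(k-1),2)$ of $\Gamma=\cay(G,S)$ can be verified by brute force for a small value of $k$. The following Python sketch is an illustration only and is not part of the proofs; the encoding of the elements of $G\cong D_{2k}\times C_2\times C_2$ as tuples is ours. It builds the connection set $S$, checks that $e\notin S$, $S=S^{-1}$ and $|S|=2(k+1)$, and counts the common neighbours of all pairs of vertices.

\begin{verbatim}
from itertools import product

k = 6                      # any k >= 3; kept small so the brute-force check is fast

def mul(x, y):             # group law of G = (<a> x| <b>) x <c> x <d>, with bab = a^{-1}
    i1, b1, g1, d1 = x; i2, b2, g2, d2 = y
    return ((i1 + (-1) ** b1 * i2) % k, (b1 + b2) % 2, (g1 + g2) % 2, (d1 + d2) % 2)

e = (0, 0, 0, 0)
a, b, c, d = (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
a_inv = (k - 1, 0, 0, 0)
G = list(product(range(k), (0, 1), (0, 1), (0, 1)))
A = [(i, 0, 0, 0) for i in range(k)]
inv = lambda x: next(y for y in G if mul(x, y) == e)

# S = b(A \ {a^{-1}})  u  c(A u {b})  u  {db, dcba^{-1}}
S = {mul(b, x) for x in A if x != a_inv} | {mul(c, x) for x in A} | \
    {mul(c, b), mul(d, b), mul(mul(mul(d, c), b), a_inv)}
assert e not in S and {inv(s) for s in S} == S and len(S) == 2 * (k + 1)

nbr = {g: {mul(g, s) for s in S} for g in G}         # Cayley graph Cay(G, S)
counts = {}
for g in G:
    for h in G:
        if g < h:
            counts.setdefault(len(nbr[g] & nbr[h]), set()).add(h in nbr[g])
print(sorted(counts))                                 # expected: [2, 2*(k-1)]
print(counts)               # each count occurs for adjacent and non-adjacent pairs
\end{verbatim}

For $k=6$ the two common-neighbour counts are $2$ and $2(k-1)=10$, and each of them occurs both for adjacent and for non-adjacent pairs, so the graph is not strongly regular, in agreement with the lemma above.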
\begin{document} \begin{abstract} In this paper we consider the linear inverse problem that consists in recovering the initial state in a first order evolution equation generated by a skew-adjoint operator. We study the well-posedness of the inversion in terms of the observation operator and the spectrum of the skew-adjoint generator. The stability estimate of the inversion can also be seen as a weak observability inequality. The proof of the main results is based on a new resolvent inequality and Fourier transform techniques which are of independent interest. \end{abstract} \subjclass[2010]{35B35, 35B40, 37L15} \keywords{Conditional stability, weak observability, resolvent inequality, Hautus test, Schr\"odinger equation} \maketitle \tableofcontents \break \section{Introduction} \label{sec1} \setcounter{equation}{0} Let $X$ be a complex Hilbert space with norm and inner product denoted respectively by $\|\cdot\|_{X}$ and $\langle \cdot,\cdot \rangle_X$. Let $A: X\rightarrow X$ be a linear unbounded self-adjoint, strictly positive operator with a compact resolvent. Denote by ${D}(A^{\frac{1}{2}})$ the domain of $A^{\frac{1}{2}}$, and introduce for $\beta \in \mathbb R$ the scale of Hilbert spaces $X_{\beta}$ as follows: for every $\beta \geq 0$, $X_{\beta}= {D}(A^{\frac{\beta}{2}})$, with the norm $\|z \|_\beta=\|A^{\frac{\beta}{2}} z\|_X$ (note that $0 \notin \sigma(A)$, where $\sigma(A)$ is the spectrum of $A$). The space $X_{-\beta}$ is defined by duality with respect to the pivot space $X$ as follows: $X_{-\beta} =X_{\beta}^*$ for $\beta>0$.\\ The operator $A$ can be extended (or restricted) to each $X_\beta$, so that it becomes a bounded operator \begin{equation} \label{A0extb} A : X_\beta \rightarrow X_{\beta-2}\;\;\; \forall \beta \in \mathbb R. \end{equation} The operator $iA$ generates a strongly continuous group of isometries in $X$ denoted by $(e^{itA})_{t\in \mathbb{R}}$ \cite{TW09}. Further, let $Y$ be a complex Hilbert space (which will be identified with its dual space) with norm and inner product respectively denoted by $\|\cdot\|_{Y}$ and $\langle \cdot,\cdot \rangle_{Y}$, and let $C \in {\mathcal L}(X_{2}, Y)$, the space of linear bounded operators from $X_2$ into $Y$.\\ This paper is concerned with the following abstract infinite-dimensional dual observation system with output $y \in Y$, described by the equations \begin{equation} \label{maineq} \left\{ \begin{array}{lll} \dot z(t)- i Az(t) = 0, \, t > 0, \\ z(0) = z_0\in X, \\ y(t) = C z(t), \, t > 0. \end{array} \right. \end{equation} In the inverse problems framework the system above is called the direct problem, i.e., determining the observation $y(t)= C z(t)$ of the state $z(t)$ for a given initial state $z_0$ and unbounded operator $A$. The inverse problem is to recover the initial state $z_0$ from the knowledge of the observation $y(t)$ for $t\in [0, T]$, where $T>0$ is chosen to be large enough. Inverse problems for evolution equations, driven by numerous applications, have been a very active area of mathematical and numerical research over the last decades \cite{Is}. They are intrinsically difficult to solve: this is due in part to their very mathematical structure and to the fact that, generally, only partial data are available.
Many different linear inverse problems for evolution equations related to data assimilation, medical imaging, and geoscience may fit in the general formulation of the system~\eqref{maineq} (see for instance \cite{Ya, AC1, AC3, ACT, ACT1, BaK, RT} and references therein). The system \eqref{maineq} has a unique weak solution $z\in C(\mathbb{R},X)$ defined by \begin{equation} \label{eqmildsol} z(t)=e^{it A}z_{0}. \end{equation} If $z_0$ is not in $X_2$, then in general $z(t)$ does not belong to $X_2$, and hence the last equation in \eqref{maineq} might not be defined. We further make the following additional admissibility assumption on the observation operator $C$: $\forall T>0$, \; $\exists C_{T}>0$, \begin{equation} \label{admm} \label{eqq} \forall z_{0}\in X_2, \quad \int_{0}^{T}\|C e^{itA}z_{0}\|_Y^{2}dt \leq C_{T}\|{z_{0}}\|_X^{2}. \end{equation} We immediately deduce from the admissibility assumption that the map from $X_2$ to $L^{2}_{loc}(\mathbb{R}_+;Y)$ that assigns to each $z_0$ the output $y$ has a continuous extension to $X$. Therefore the last equation in \eqref{maineq} is now well defined for all $z_0\in X$. Without loss of generality we assume that $C_T$ is an increasing function of $T$ (if this is not the case we replace $C_T$ by $\sup_{0\leq t\leq T}C_t$). Since $A$ is a self-adjoint operator with a compact resolvent, the spectrum of $A$ is given by $\sigma(A) \,=\, \{ \lambda_k, \; k\in \mathbb N^*\}$, where $(\lambda_k)$ is a strictly increasing sequence of real numbers. We denote by $(\phi_k)_{k\in \mathbb N^*}$ the orthonormal sequence of eigenvectors of $A$ associated to the eigenvalues $(\lambda_k)_{k\in \mathbb N^*}$. Let $ z\in X_2\setminus\{0\}\subset X \longmapsto \lambda(z)\in \mathbb R_+$ be the $A$-frequency function defined by \begin{eqnarray} \label{frequency} \lambda(z) &=& \langle Az, z\rangle_X \|z\|_X^{-2},\\ &=& \sum_{k=1}^{+\infty} \lambda_k |\langle z, \phi_k \rangle_X|^2 \left( \sum_{k=1}^{+\infty} |\langle z, \phi_k \rangle_X|^2 \right)^{-1}. \end{eqnarray} We observe that $z \longmapsto \lambda(z)$ is continuous on $X_2\setminus\{0\}$, and $\lambda(\phi_k) = \lambda_k$ for all $ k \in \mathbb N^*$. Let $\mathfrak C $ be the set of continuous decreasing functions $\psi: \mathbb R_+ \rightarrow \mathbb R_+^*$. Recall that if $\psi \in \mathfrak C$ is not bounded below by a strictly positive constant then it satisfies $\lim_{t\rightarrow +\infty}\psi(t) = 0$. \begin{definition} \label{ww} The system (\ref{maineq}) is said to be weakly observable in time $T>0$ if there exists $\psi \in \mathfrak C $ such that the following observation inequality holds: \begin{equation} \label{wobs} \forall z_{0}\in X_2, \quad \psi(\lambda(z_0))\|z_{0}\|_X^{2}\leq \int_{0}^{T} \|C e^{itA}z_{0}\|_{Y}^2 dt. \end{equation} If $\psi$ is bounded below by a strictly positive constant, the system is said to be exactly observable. \end{definition} \begin{remark} If the system (\ref{maineq}) is weakly observable in time $T>0$, then it is weakly observable in any time $T^\prime$ larger than $T$. The function $\psi$ appearing in the observability inequality \eqref{wobs} may depend on the time $T$. \end{remark} Most of the existing works on observability inequalities for systems of partial differential equations are based on time domain techniques such as nonharmonic Fourier series \cite{AI, KL}, the multiplier method \cite{Li,Li1}, and microlocal analysis \cite{BLR92, LL}.
Only a few of them have considered frequency domain techniques in the spirit of the well known Fattorini-Hautus test for finite dimensional systems \cite{Fa, Ha, BZbb, RTTT, ZY97}. \\ The desired frequency domain test for the observability of the system (\ref{maineq}) would be formulated only in terms of the operators $A$ and $C$: the time domain system (\ref{maineq}) would be converted into a frequency domain one, and the test would involve essentially the solution in the frequency domain and the observability operator $C$. A frequency domain test seems to be more suitable for numerical validation and for the calibration of physical models for several reasons: the parameters of the system are in general measured in the frequency domain, and the computation of the solution is more robust and efficient in the frequency domain. The objective here is to derive sufficient and, if possible, necessary conditions on \begin{itemize} \item[(i)] the spectrum of $A$, and \item[(ii)] the action of the operator $C$ on the associated eigenfunctions of $A$, \end{itemize} such that the system \eqref{maineq} verifies, for some $T>0$ sufficiently large, the inequality \eqref{wobs}. The aim of this paper is to obtain Fattorini-Hautus type tests on the pair $(A, C)$ that guarantee the {\it weak observability property} \eqref{wobs}. The rest of the paper is organized as follows. In Section \ref{sec2} we present the main results of the paper related to weak observability. Section \ref{sec3} contains the proof of the main Theorem \ref{main}, based on a new resolvent inequality and Fourier transform techniques. In Section \ref{sec4} we study the relation between the spectral coercivity of the observability operator and its action on the vector spaces spanned by eigenfunctions associated to close eigenvalues. Finally, in Section \ref{sec5} we apply the results of the main Theorem \ref{main} to the boundary observability of the Schr\"odinger equation in a square. \section{Main results} \label{sec2} We present in this section the main results of the paper. \begin{definition} \label{dwc} The operator $C$ is spectrally coercive if there exist functions $\varepsilon, \psi \in \mathfrak C$ such that if $z\in X_2 \setminus \left\{0\right\}$ satisfies \begin{equation} \label{wc} \frac{\|Az\|_X^2}{\|z\|_X^2} -\lambda^2(z) <\varepsilon(\lambda(z)), \end{equation} then \begin{equation} \label{wcc} \|Cz\|_Y^2 \geq \psi(\lambda(z)) \|z\|_{X }^2. \end{equation} \end{definition} \begin{remark} We remark that the relation \begin{equation} \label{magic} 0 \leq \|(A-\lambda(z)I)z\|_X^2\|z\|_X^{-2} = \frac{\|Az\|_X^2}{\|z\|_X^2} -\lambda^2(z) \end{equation} holds for all $z\in X_2 \setminus\{0\}$. In addition, the equality $\frac{\|Az\|_X^2}{\|z\|_X^2} -\lambda^2(z) = 0$ is satisfied if and only if $z=\phi_k$ for some $ k\in \mathbb N^*$. \end{remark} Now we are ready to state our main result. \begin{theorem} \label{main} The system \eqref{maineq} is weakly observable if and only if $C$ is spectrally coercive, that is, the following two assertions are equivalent. \begin{itemize} \item[(1)] There exist $\varepsilon, \psi \in \mathfrak C $ such that if $z \in X_2 \setminus\left\{0 \right\}$ satisfies \begin{eqnarray*} 0\leq \frac{\|Az\|_X^2}{\|z\|_X^2} -\lambda^2(z) <\varepsilon(\lambda(z)), \end{eqnarray*} then \begin{eqnarray*} \|Cz\|_Y^2 \geq \psi(\lambda(z)) \|z\|_{X }^2.
\end{eqnarray*} \item[(2)] The following weak observation inequality holds: \begin{equation} \label{wobst} \forall z_{0}\in X_2, \quad \theta_2 \psi\left(\theta_0\left(\frac{1}{T}+\lambda(z_0)\right)\right) \|z_0\|_X^2 \leq \int_{0}^T\|{C z(t)}\|^{2}_Y\,dt \end{equation} for all $T\geq T(\lambda(z_0))$, where $T(\lambda(z_0))$ is the unique solution to the equation \begin{equation} \label{timeT} T\varepsilon\left(\theta_0\left(\frac{1}{T}+\lambda(z_0)\right)\right)= \theta_1, \end{equation} and $\varepsilon, \psi \in \mathfrak C $ are the functions appearing in the spectral coercivity of $C$. The strictly positive constants $\theta_i, \, i=0, 1, 2,$ do not depend on the parameters of the observability system. In addition, the function $\lambda \mapsto T(\lambda)$ is increasing. \end{itemize} \end{theorem} The above theorem can be viewed as an extension of several results in the literature \cite{Ha, BZbb, RTTT, ZY97, Miller05}. \section{Proof of the main Theorem~\ref{main}} \label{sec3} In order to prove our main theorem, we need to derive a sequence of preliminary results. We start with the main tool in the proof of the theorem, which is a generalized Hautus-type test. \begin{theorem} \label{hautus} The operator $C\in \mathcal{L}(X_2, Y)$ is spectrally coercive if and only if there exist functions $\psi,\, \varepsilon \in \mathfrak C $ such that the following resolvent inequality holds: \begin{equation} \label{resolventestimate} \|z\|_X^2 \leq \inf\left\{\frac{\|{C z}\|^{2}_Y}{\psi(\lambda(z))}, \frac{\|{(A-\lambda I) z}\|^{2}_X}{(\lambda-\lambda(z))^2+\varepsilon(\lambda(z))} \right\}, \;\forall \lambda\in \mathbb{R},\; \forall z\in X_2\setminus\{0\}. \end{equation} \end{theorem} \begin{proof} Let $z\in X_2\setminus\{0\}$ be fixed. A direct computation gives the following key identity: \begin{equation} \label{maineqh} \|{(A-\lambda I) z}\|^{2}_X = (\lambda-\lambda(z))^2 \|z\|_X^2 + \|{(A-\lambda(z) I) z}\|^{2}_X. \end{equation} We remark that the minimum of $\|{(A-\lambda I) z}\|^{2}_X$ for fixed $z$ with respect to $\lambda \in \mathbb{R}$ is attained at $\lambda = \lambda(z)$.\\ We first assume that $C\in \mathcal{L}(X_2, Y)$ is spectrally coercive and prove that \eqref{resolventestimate} is satisfied. Let $ \varepsilon, \, \psi \in \mathfrak C $ be the functions appearing in the spectral coercivity of the operator $C$ in Definition~\ref{dwc}, and consider the following two possible cases:\\ (i) The inequality $\|A z\|^2_X- \lambda^2(z)\|z\|_X^2<\varepsilon(\lambda(z)) \|z\|_X^2$ is satisfied. Then by the spectral coercivity of $C$ we deduce \begin{equation} \label{i1} \|Cz\|_Y^2 \geq \psi(\lambda(z)) \|z\|_{X}^2. \end{equation} (ii) The inequality $\|A z\|^2_X- \lambda^2(z)\|z\|_X^2 \geq \varepsilon(\lambda(z)) \|z\|_X^2$ holds. Then the identity \eqref{maineqh} implies \begin{equation} \label{i2} \|{(A-\lambda I) z}\|^{2}_X \geq \left((\lambda-\lambda(z))^2 + \varepsilon(\lambda(z))\right) \|z\|_X^2. \end{equation} By combining the inequalities \eqref{i1} and \eqref{i2}, we obtain the resolvent inequality \eqref{resolventestimate}.\\ We now assume that \eqref{resolventestimate} holds and we shall show that $C\in \mathcal{L}(X_2, Y)$ satisfies the spectral coercivity condition of Definition~\ref{dwc}.
Let $ \varepsilon, \, \psi \in \mathfrak C $ be the functions appearing in \eqref{resolventestimate}, and assume that $z \in X_2\setminus\{0\}$ satisfies \begin{equation} \label{j1} \|{(A-\lambda(z) I) z}\|^{2}_X= \|Az\|_X^2 -\lambda^2(z)\|z\|_X^2 <\varepsilon(\lambda(z))\|z\|_X^2. \end{equation} Then we have two possibilities.\\ (i) The inequality \begin{eqnarray*} \frac{\|{C z}\|^{2}_Y}{\psi(\lambda(z))} \leq \frac{\|{(A-\lambda I) z}\|^{2}_X}{(\lambda-\lambda(z))^2+\varepsilon(\lambda(z))} \end{eqnarray*} holds for some $\lambda \in \mathbb{R}$. Consequently the spectral coercivity estimate \begin{eqnarray*} \|Cz\|_Y^2 \geq \psi(\lambda(z)) \|z\|_{X}^2 \end{eqnarray*} follows immediately from the resolvent inequality \eqref{resolventestimate}. \\ (ii) The inequality \begin{eqnarray*} \frac{\|{C z}\|^{2}_Y}{\psi(\lambda(z))} > \frac{\|{(A-\lambda I) z}\|^{2}_X}{(\lambda-\lambda(z))^2+\varepsilon(\lambda(z))} \end{eqnarray*} is valid for all $\lambda \in \mathbb{R}$. We then deduce from the identity \eqref{maineqh} the inequality \begin{eqnarray*} \frac{\|{C z}\|^{2}_Y}{\psi(\lambda(z))} > \frac{(\lambda-\lambda(z))^2 \|z\|_X^2 + \|{(A-\lambda(z) I) z}\|^{2}_X}{(\lambda-\lambda(z))^2+\varepsilon(\lambda(z))}, \quad \forall \lambda \in \mathbb{R}. \end{eqnarray*} Letting $\lambda$ tend to infinity we get the desired inequality, that is, \begin{eqnarray*} \frac{\|{C z}\|^{2}_Y}{\psi(\lambda(z))} \geq \|z\|_{X}^2, \end{eqnarray*} which finishes the proof of the theorem. \end{proof} Next we use a method developed in \cite{BZbb} to derive observability inequalities from resolvent inequalities and Fourier transform techniques. Our objective is to prove the equivalence between the resolvent inequality \eqref{resolventestimate} and the weak observability \eqref{wobst}. The proof of the Theorem is then achieved by combining this equivalence with the results obtained in Theorem~\ref{hautus}.\\ We first assume that the resolvent inequality \eqref{resolventestimate} holds and shall prove the weak observability.\\ Let $\chi \in C_0^\infty(\mathbb{R})$ be a cut-off function with compact support in $(-1, 1)$. For $T>0$, we denote \begin{eqnarray} \label{cutoff} \chi_T(t) &=& \chi \left(\frac{t}{T}\right), \qquad t\in \mathbb R. \end{eqnarray} Let $z_{0}\in X_2\setminus\{0\}$. Set $z(t)=e^{itA}z_{0}$, $x=\chi_T z$ and $f=\dot x -iA x$. Since $\dot z-iA z =0$, we have $f= \dot \chi_T z$. The Fourier transform of $f$ with respect to time is given by $$\widehat{f}(\tau)=(i\tau-iA)\widehat{x}(\tau),$$ where $\widehat{x}(\tau)$ is the Fourier transform of $x(t)$. Applying \eqref{resolventestimate} to $\widehat{x}(\tau)\in X_2\setminus\{0\}$ with $\lambda= \tau$, we obtain \begin{equation} \label{inqq} \|\widehat x(\tau)\|_X^2 \leq \inf\left\{\frac{\|{C \widehat x(\tau)}\|^{2}_Y}{\psi(\lambda(\widehat x(\tau)))}, \frac{\|\widehat{f}(\tau)\|^{2}_X}{(\tau-\lambda(\widehat x(\tau)))^2+\varepsilon(\lambda(\widehat x(\tau)))} \right\}. \end{equation} We remark that since $\widehat x(\tau) \not= 0$ we have $\lambda(\widehat x(\tau))\not= +\infty$, and the inequality \eqref{inqq} is well justified. Next, we study how the frequency $\lambda(\widehat x(\tau))$ behaves as a function of $\tau$.
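Before carrying out the rigorous estimates, it is instructive to evaluate $\lambda(\widehat x(\tau))$ numerically. The following Python sketch is a toy illustration only: the spectrum $\lambda_k=k^2$, the decaying coefficients and the crude quadrature for $\widehat\chi$ are assumptions made here for the purpose of the experiment, and are not data coming from the analysis. It uses the expansion $\widehat x(\tau)=\sum_k\widehat\chi_T(\tau-\lambda_k)\,z_k\,\phi_k$ derived below, and illustrates the at most linear growth in $|\tau|$ established in Theorem~\ref{Tfrequency} below.

\begin{verbatim}
import numpy as np

# Illustrative assumptions (not part of the proofs): eigenvalues of a 1-D
# Dirichlet Laplacian and an initial state with decaying spectral coefficients.
lam = np.arange(1, 41, dtype=float) ** 2          # lambda_k = k^2
z   = 1.0 / (1.0 + np.arange(1, 41)) ** 2         # coefficients z_k = <z_0, phi_k>
lam_z0 = np.sum(lam * z**2) / np.sum(z**2)        # A-frequency of z_0

def chi(s):                                       # cut-off function of the Appendix
    return np.where(np.abs(s) < 1.0, (1.0 - np.abs(s)) * np.exp(-2.0 * np.abs(s)), 0.0)

def chi_hat(xi, n=4001):                          # Fourier transform by a Riemann sum
    s = np.linspace(-1.0, 1.0, n)
    return np.sum(chi(s) * np.exp(-1j * xi * s)) * (s[1] - s[0])

T = 2.0
def freq_windowed(tau):
    # hat{x}(tau) = sum_k hat{chi_T}(tau - lambda_k) z_k phi_k,
    # with hat{chi_T}(xi) = T hat{chi}(T xi)
    w = np.array([abs(T * chi_hat(T * (tau - lk))) ** 2 for lk in lam]) * z**2
    return np.sum(lam * w) / np.sum(w)

for tau in [0.0, 5.0, 20.0, 80.0]:
    # Theorem Tfrequency predicts lambda(hat{x}(tau)) <= 4|tau| + c_0 lambda(z_0)
    print(f"tau={tau:6.1f}  lambda(hat x(tau))={freq_windowed(tau):9.2f}  "
          f"lambda(z_0)={lam_z0:6.2f}")
\end{verbatim}

The numbers produced this way are purely indicative; only the structure of the bound $\lambda(\widehat x(\tau))\leq 4|\tau|+c_0\lambda(z_0)$ matters for the argument that follows.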
We expect $\lambda(\widehat x(\tau))$ to remain comparable to $\lambda(z_0)$, the frequency of the initial state $z_0$, and to grow at most linearly when $|\tau|$ tends to infinity.\\ To simplify the analysis we will make some assumptions on the cut-off function $\chi(s)$. We further assume that $\chi \in C_0(\mathbb{R})$ satisfies the following inequalities: \begin{equation} \label{bbchi} \chi \in H^1_0(-1,1),\; \frac{\kappa_1}{1+\tau^2} \leq |\widehat \chi(\tau)| \leq \frac{\kappa_2}{1+\tau^2}, \quad \tau \in \mathbb{R}, \end{equation} where $\kappa_2> \kappa_1>0$ are two fixed constants that do not depend on $\tau$. We will show in the Appendix the existence of such a function. \\ \begin{theorem} \label{Tfrequency} Let $z_{0}\in X_2\setminus\{0\}$, let $z(t) = e^{itA}z_0$, and let $\widehat{x}(\tau)$ be the Fourier transform of $x(t)= \chi_T(t) z(t)$, where $\chi_T(t)$ is the cut-off function defined by \eqref{cutoff} and satisfying the inequalities \eqref{bbchi}. \\ Then there exists a constant $c_0= c_0(\chi)>0$ such that the inequality \begin{equation} \label{lambdatau} \lambda_1 \leq \lambda(\widehat x(\tau))\leq 4|\tau|+ c_0\lambda(z_0) \end{equation} holds for all $ \tau \in \mathbb{R}.$ \end{theorem} \begin{proof} Recall the expression of the frequency function: \begin{equation} \label{lambda} \lambda(\widehat x(\tau)) = \langle A\widehat x(\tau) , \widehat x(\tau) \rangle_X \|\widehat x(\tau)\|_X^{-2}, \quad \forall \tau \in \mathbb{R}. \end{equation} Let $z_0= \sum_{k=1}^{+\infty} z_k \phi_k \in X_2$. Hence \begin{equation} \widehat x(\tau) = \sum_{k=1}^{+\infty} \widehat \chi_T(\tau-\lambda_k) z_k \phi_k. \end{equation} Therefore \begin{equation} \label{ide} \lambda(\widehat x(\tau)) = \sum_{k=1}^{+\infty} \lambda_k | \widehat \chi_T(\tau-\lambda_k)|^2 z_k^2 \left( \sum_{k=1}^{+\infty} |\widehat \chi_T(\tau-\lambda_k)|^2 z_k^2 \right)^{-1}. \end{equation} We first remark that $\lambda(\widehat x(\tau))\geq \lambda_1$ for all $\tau \in \mathbb R$, and that it tends to $\lambda(z_0)$ when $T$ approaches $0$. In order to study the behavior of $\lambda(\widehat x(\tau))$ when $\tau$ is large we need the behavior of $\widehat \chi_T(s)$ when $s$ tends to infinity. \\ We start with the trivial case where $\tau$ is far away from the spectrum of $A$, that is $\tau<\lambda_1$. Let $K \in \mathbb R_+$ be large enough, and set \begin{eqnarray*} \sum_{k=1}^{+\infty} | \widehat \chi_T(\tau-\lambda_k)|^2 z_k^2 &=& \sum_{|\tau-\lambda_k|\leq K} | \widehat \chi_T(\tau-\lambda_k)|^2 z_k^2 +\sum_{|\tau-\lambda_k|> K} | \widehat \chi_T(\tau-\lambda_k)|^2 z_k^2 = \mathcal I_1+ \mathcal I_2. \end{eqnarray*} We claim that there exists $K_\tau>0$ large enough such that \begin{equation} \label{inN} 2\mathcal I_2 \leq \mathcal I_1 \;\; \textrm{ for all } K \geq K_\tau. \end{equation} We first observe that there exists $r_0>0$ large enough such that \begin{equation} \label{ll1} 2\kappa_2\sum_{\lambda_k> r_0} z_k^2 \leq \kappa_1 \sum_{\lambda_k \leq r_0} z_k^2, \end{equation} or equivalently \begin{eqnarray*} \left(2\frac{\kappa_2}{\kappa_1}+1 \right)\sum_{\lambda_k> r_0} z_k^2 \leq \|z_0\|_X^2. \end{eqnarray*} In fact, we have \begin{equation} \label{ww1} \sum_{\lambda_k> r_0} z_k^2 < \frac{1}{r_0} \sum_{\lambda_k> r_0} \lambda_k z_k^2 \leq \frac{\lambda(z_0)}{r_0}\|z_0\|_X^2.
\end{equation} Hence the inequality \eqref{ll1} holds if \begin{equation} \label{r0} r_0=2 \left(2\frac{\kappa_2}{\kappa_1}+1 \right)\lambda(z_0). \end{equation} Now, by taking $K=|\tau| +r_0$ and using the bounds \eqref{bbchi} with $\widehat \chi_T(s)= T\widehat \chi(Ts)$ in mind, we get \begin{eqnarray} \label{ff1} 2\mathcal I_2 &\leq& \frac{2\kappa_2T^2}{(1+K^2T^2)^2} \sum_{\lambda_k> K+\tau} z_k^2,\\ \label{ff2} \mathcal I_1 &\geq& \frac{\kappa_1T^2}{(1+K^2T^2)^2} \sum_{\lambda_k\leq K+\tau} z_k^2. \end{eqnarray} Since $K\geq r_0$, inequalities \eqref{ll1}, \eqref{ff1} and \eqref{ff2} imply \begin{equation} 2\mathcal I_2 \leq \frac{\kappa_1T^2}{(1+K^2T^2)^2} \sum_{\lambda_k\leq K+\tau} z_k^2 \leq \mathcal I_1. \end{equation} Then inequality \eqref{inN} is valid for $ K_\tau=\max(\tau, r_0)$. Consequently the inequalities \begin{equation} \label{inNN} \frac{1}{2} \mathcal I_1 \leq \|\widehat x(\tau)\|_X^{2}= \sum_{k=1}^{+\infty} | \widehat \chi_T(\tau-\lambda_k)|^2 z_k^2 \leq 3 \mathcal I_1 \end{equation} hold for all $ K \geq K_\tau$.\\ Considering now the identity \eqref{ide} and the inequalities \eqref{inNN}, we obtain \begin{eqnarray} \lambda(\widehat x(\tau)) &\leq& 2 \left(\sum_{|\tau-\lambda_k|\leq K} \lambda_k | \widehat \chi_T(\tau-\lambda_k)|^2 z_k^2\right) \left( \sum_{|\tau-\lambda_k|\leq K} |\widehat \chi_T(\tau-\lambda_k)|^2 z_k^2 \right)^{-1} \nonumber\\ && +\, 2\left(\sum_{|\tau-\lambda_k|>K} \lambda_k | \widehat \chi_T(\tau-\lambda_k)|^2 z_k^2\right) \left( \sum_{|\tau-\lambda_k|\leq K} |\widehat \chi_T(\tau-\lambda_k)|^2 z_k^2 \right)^{-1} = \mathcal J_1 +\mathcal J_2. \label{hhq} \end{eqnarray} On the other hand we have \begin{equation} \label{j1t} \mathcal J_1 \leq 2(\tau +K). \end{equation} In addition, using again the bounds \eqref{bbchi}, we obtain \begin{equation} \label{j2} \mathcal J_2 \leq 2 \left(\sum_{\lambda_k> \tau +K} \lambda_k z_k^2\right) \left( \sum_{\lambda_k\leq K+\tau } z_k^2 \right)^{-1}. \end{equation} Since $K+\tau \geq r_0$, inequality \eqref{ll1} gives \begin{equation} \label{bn1} \sum_{\lambda_k \leq K+\tau } z_k^2 \geq \left(\frac{\kappa_1}{2\kappa_2} +1 \right)^{-1}\|z_0\|_X^2. \end{equation} Hence \begin{equation} \label{jj2} \mathcal J_2 \leq 2 \left(\frac{\kappa_1}{2\kappa_2} +1 \right) \left(\sum_{k=1}^{+\infty} \lambda_k z_k^2\right) \left( \sum_{k=1}^{+\infty} z_k^2 \right)^{-1} = 2 \left(\frac{\kappa_1}{2\kappa_2} +1 \right) \lambda (z_0). \end{equation} Combining inequalities \eqref{hhq}, \eqref{j1t} and \eqref{jj2}, we get \begin{eqnarray*} \lambda(\widehat x(\tau)) \leq 2|\tau| +2K +2\left(\frac{\kappa_1}{2\kappa_2} +1 \right) \lambda(z_0) \end{eqnarray*} for all $ K \geq K_\tau$.\\ Consequently, the proof is achieved by taking $c_0= 8 \frac{\kappa_2}{\kappa_1}+\frac{\kappa_1}{\kappa_2}+6.$ \end{proof} \begin{remark} The upper bound of $\lambda(\widehat x(\tau))$ obtained in Theorem~\ref{Tfrequency} is not optimal, since $\lambda(\widehat x(\tau)) = \lambda_k= \lambda(z_0)$ if $z_0 =\phi_k$. Moreover, when $\lambda_{\max}(z_0) = \max \{\lambda_k, \; k \in \mathbb N^*,\; \langle z_0,\phi_k\rangle_X \not= 0\} < \infty$, we can easily show that $ \lambda(\widehat x(\tau)) \leq \lambda_{\max}(z_0) $. We remark that in both cases the bounds on $\lambda(\widehat x(\tau)) $ are independent of the Fourier frequency $\tau$.
\end{remark} \begin{lemma} Let $c_0^\prime = \frac{\|\dot \chi\|_{L^2(-1, 1)}}{\|\chi\|_{L^2(-1, 1)}}$, let $z_{0}\in X_2\setminus\{0\}$, let $z(t) = e^{itA}z_0$, and let $\widehat{x}(\tau)$ be the Fourier transform of $x(t)= \chi_T(t) z(t)$, where $\chi_T(t)$ is the cut-off function defined by \eqref{cutoff}. \\ Then the inequality \begin{equation} \label{inne1} \left(1- \frac{1}{R}\left( \frac{c_0^\prime}{T} +\lambda(z_0) \right)\right) \|z_0\|_X^2 \leq \|\chi\|_{L^2(-1, 1)}^{-2} \int_{-R}^R\|\widehat{x}(\tau)\|_X^2 d\tau \end{equation} holds for all $R > \frac{c_0^\prime}{T} +\lambda(z_0).$ \end{lemma} \begin{proof} Recall that $\dot x = f+iA x$, where $f= \dot \chi_T z$. By integration by parts we then have \[ \widehat{x}(\tau) = -\frac{i}{\tau} \left(\widehat{f}(\tau)+ iA\widehat{x}(\tau) \right). \] Consequently \[ \|\widehat{x}(\tau)\|_X^2 = \langle -\frac{i}{\tau} \left(\widehat{f}(\tau)+ iA\widehat{x}(\tau) \right), \widehat{x}(\tau)\rangle_X. \] Then for any $R>0$, by the Fourier-Plancherel theorem, we have \begin{eqnarray*} \|\chi\|_{L^2(-1, 1)}^2 \|z_0\|_X^2 \leq \int_{-R}^R\|\widehat{x}(\tau)\|_X^2 d\tau+ \frac{1}{R}\left( \frac{1}{T} \|\dot \chi\|_{L^2(-1, 1)}\|\chi\|_{L^2(-1, 1)} +\lambda(z_0)\|\chi\|_{L^2(-1, 1)}^2 \right)\|z_0\|_X^2. \end{eqnarray*} Hence for $R$ large enough we have \begin{eqnarray*} \left(1- \frac{1}{R}\left( \frac{1}{T} \frac{\|\dot \chi\|_{L^2(-1, 1)}}{\|\chi\|_{L^2(-1, 1)}} +\lambda(z_0) \right)\right) \|z_0\|_X^2 \leq \|\chi\|_{L^2(-1, 1)}^{-2}\int_{-R}^R\|\widehat{x}(\tau)\|_X^2 d\tau, \end{eqnarray*} which finishes the proof of the lemma. \end{proof} We now return to the proof of the theorem. Combining inequalities \eqref{inqq} and \eqref{inne1}, we find \begin{equation} \label{important1} \left(1- \frac{1}{R}\left( \frac{c_0^\prime}{T} +\lambda(z_0) \right)\right) \|z_0\|_X^2 \leq \|\chi\|_{L^2(-1, 1)}^{-2} \left( \int_{-R}^R \frac{\|{C \widehat x(\tau)}\|^{2}_Y}{\psi(\lambda(\widehat x(\tau)))} d\tau + \int_{-R}^R\frac{\|\widehat{f}(\tau)\|^{2}_X}{\varepsilon(\lambda(\widehat x(\tau)))}d\tau\right). \end{equation} Applying the upper bound on $\lambda(\widehat x(\tau))$ derived in Theorem~\ref{Tfrequency}, and using the monotonicity of the functions $\psi$ and $\varepsilon$ in $\mathfrak C$, we obtain \begin{eqnarray*} \left(1- \frac{1}{R}\left( \frac{c_0^\prime}{T} +\lambda(z_0) \right)\right) \|z_0\|_X^2 &\leq& \frac{1}{\psi(4R+ c_0\lambda(z_0))} \frac{\|\chi\|_{L^\infty(-1,1)}^2}{\|\chi\|_{L^2(-1, 1)}^{2}} \int_{0 }^T\|{C z(t)}\|^{2}_Y\,dt \\ && + \ \frac{1}{T\varepsilon(4R+ c_0\lambda(z_0))} \frac{\|\dot \chi\|_{L^2(-1, 1)}^2}{\|\chi\|_{L^2(-1, 1)}^{2}}\|z_0\|_X^2, \end{eqnarray*} for all $R > \frac{c_0^\prime}{T} +\lambda(z_0)$.\\ Now, by taking $R= 2\left(\frac{c_0^\prime}{T} +\lambda(z_0)\right)$ and $\theta_0= \max(8c_0^\prime, 8+c_0)$, we find \begin{eqnarray*} \left(1- \frac{2}{T\varepsilon\left(\theta_0\left(\frac{1}{T}+\lambda(z_0)\right)\right)} \frac{\|\dot \chi\|_{L^2(-1, 1)}^2}{\|\chi\|_{L^2(-1, 1)}^{2}}\right)\|z_0\|_X^2 \leq \frac{2}{\psi\left(\theta_0\left(\frac{1}{T}+\lambda(z_0)\right)\right)} \frac{\|\chi\|_{L^\infty(-1,1)}^2}{\|\chi\|_{L^2(-1, 1)}^{2}} \int_{0 }^T\|{C z(t)}\|^{2}_Ydt. \end{eqnarray*} Let $\theta_1= \frac{4\|\dot \chi\|_{L^2(-1, 1)}^{2}}{\| \chi\|_{L^2(-1,1)}^2}$ and $\theta_2= \frac{\|\chi\|_{L^2(-1,1)}^2}{4\| \chi\|_{L^\infty(-1, 1)}^{2}} $.
\\ Then, for $T\varepsilon(4R+ c_0\lambda(z_0))\geq \theta_1$, we finally obtain the desired estimate: \begin{equation} \label{fee} \theta_2\psi\left(\theta_0\left(\frac{1}{T}+\lambda(z_0)\right)\right) \|z_0\|_X^2 \leq \int_{0 }^T\|{C z(t)}\|^{2}_Ydt. \end{equation} A simple calculation shows that the function $T\mapsto T\varepsilon \left(\theta_0\left(\frac{1}{T}+\lambda(z_0)\right)\right)$ is increasing, tends to infinity when $T$ approaches $+\infty$, and tends to $0$ when $T$ approaches $0$. Then there exists a unique value $T(\lambda(z_0))>0$ that solves the equation \eqref{timeT}. In addition, the function $\lambda \mapsto T(\lambda)$ is increasing. Finally, the inequality \eqref{fee} is valid for all $T \geq T(\lambda(z_0))$. \\ Now we shall prove the converse. Our strategy is to adapt the proof of Theorem 1.2 in ~\cite{RW} for the classical exact controllability to our setting (see also ~\cite{BZbb, Miller05}). We further assume that the weak observability inequality \eqref{wobst} holds for some fixed $\psi$ and $\varepsilon$ in $\mathfrak C$. Our goal now is to show that $C$ is indeed spectrally coercive.\\ Let $z_0 \in X_4$, and $x_0:= (iA-i\tau I)z_0 $ for some $\tau \in \mathbb{R}$. Define $x(t)= e^{itA}x_0$ and $z(t) = e^{itA}z_0$. \\ A direct computation shows that $z(t)$ solves \begin{eqnarray*} \dot z(t) -i\tau z(t) &=& x(t), \;\;\forall t\in \mathbb{R}_+^*,\\ z(0) &=& z_0. \end{eqnarray*} Then \[ z(t) = e^{i\tau t} z_0 +\int_{0}^t e^{i\tau(t-s)} x(s) ds. \] Applying now the observability operator $C$ to both sides gives \begin{eqnarray*} Cz(t) = e^{i\tau t} Cz_0+ \int_{0}^t e^{i\tau(t-s)} Cx(s) ds, \end{eqnarray*} whence \begin{eqnarray*} \|Cz(t)\|_Y^2 \leq 2\|Cz_0\|_Y^2+2 \int_{0}^t \|Cx(s)\|_Y^2ds. \end{eqnarray*} Integrating the above inequality over $(0, T)$, we obtain \begin{eqnarray*} \int_0^T \|Cz(t)\|_Y^2 dt \leq 2T\|Cz_0\|_Y^2+2 T \int_0^T\| Cx(s) \|_Y^2 ds. \end{eqnarray*} We deduce from the admissibility assumption \eqref{admm} that \begin{eqnarray*} \int_0^T \|Cz(t)\|_Y^2 dt \leq 2T\|Cz_0\|_Y^2+2 T C_T\|(A-\tau I)z_0\|^2_X. \end{eqnarray*} Applying the weak observability inequality \eqref{wobst} with $T= T(\lambda(z_0))$ leads to \begin{eqnarray*} \theta_2\psi\left(\theta_0\left(\frac{1}{T(\lambda(z_0))}+\lambda(z_0)\right)\right) \|z_0\|_X^2 \leq 2T(\lambda(z_0))\|Cz_0\|_Y^2+2 T(\lambda(z_0)) C_{T(\lambda(z_0))}\|(A-\tau I)z_0\|^2_X \end{eqnarray*} for all $\tau \in \mathbb R$. \\ Since $T(\lambda) \geq T_0 = T(0)$ for all $\lambda \geq 0$, we have \begin{eqnarray*} \theta_2\psi\left(\theta_0\left(\frac{1}{T_0}+\lambda(z_0)\right)\right) \|z_0\|_X^2 \leq 2T(\lambda(z_0))\|Cz_0\|_Y^2+2 T(\lambda(z_0)) C_{T(\lambda(z_0))}\|(A-\tau I)z_0\|^2_X. \end{eqnarray*} Taking $\tau = \lambda(z_0)$ in the previous inequality implies \begin{eqnarray*} \frac{\theta_2}{2 T(\lambda(z_0)) C_{T(\lambda(z_0))}} \psi\left(\theta_0\left(\frac{1}{T_0}+\lambda(z_0)\right)\right) \|z_0\|_X^2 \leq \frac{1}{C_{T(\lambda(z_0))}} \|Cz_0\|_Y^2+\|(A- \lambda(z_0)I)z_0\|^2_X. \end{eqnarray*} Let \begin{eqnarray*} \widetilde \psi(\lambda) &=& \frac{\theta_2}{4 T(\lambda) }\psi\left(\theta_0\left(\frac{1}{T_0}+\lambda\right)\right),\\ \widetilde \varepsilon(\lambda) &=& \frac{\theta_2}{4 T(\lambda) C_{T(\lambda)}} \psi \left(\theta_0\left(\frac{1}{T_0}+\lambda \right)\right).
\end{eqnarray*} We deduce from the monotonicity properties of $\psi(\lambda)$, $C_{T(\lambda)}$, and $T(\lambda)$ that $\widetilde \psi(\lambda),\, \widetilde \varepsilon(\lambda) \in \mathfrak C $. \\ Consequently $C$ is spectrally coercive with the functions $\widetilde \psi(\lambda),\, \widetilde \varepsilon(\lambda)$, that is, \begin{eqnarray*} 0\leq \frac{\|Az\|_X^2}{\|z\|_X^2} -\lambda^2(z) <\widetilde \varepsilon(\lambda(z)) \end{eqnarray*} implies \begin{eqnarray*} \|Cz\|_Y^2 \geq \widetilde \psi(\lambda(z)) \|z\|_{X }^2, \end{eqnarray*} which finishes the proof of the Theorem. \section{Sufficient conditions for the spectral coercivity} \label{sec4} In this section we study the relation between the spectral coercivity of the observability operator $C$ given in Definition~\ref{dwc} and the action of the operator $C$ on vector spaces spanned by eigenfunctions associated to close eigenvalues. \\ For $\lambda \in \mathbb R_+$ and $\varepsilon>0$, set \begin{equation} N_\varepsilon( \lambda)= \{k \in \mathbb N^* \textrm{ such that } |\lambda-\lambda_k |< \varepsilon\}, \end{equation} the index set of the eigenvalues of $A$ lying in an $\varepsilon$-neighborhood of a given $\lambda$.\\ \begin{definition} \label{dsc} The operator $C$ is weakly spectrally coercive if there exist a constant $\varepsilon>0$ and a function $\psi \in \mathfrak C$ such that for all $ \lambda \in \mathbb R$ the inequality \begin{equation} \label{swcc} \|Cz\|_Y^2 \geq \psi(\lambda) \|z\|_{X }^2 \end{equation} holds for all $z = \sum_{k \in N_\varepsilon(\lambda)} z_k \phi_k \in X_2 \setminus \left\{0\right\}.$ \end{definition} \begin{lemma} \label{dsc2} The operator $C$ is weakly spectrally coercive if and only if there exist a constant $\varepsilon>0$ and a function $\psi \in \mathfrak C$ such that the inequality \begin{equation} \label{swcc2} \|Cz\|_Y^2 \geq \psi(\lambda_n) \|z\|_{X }^2 \end{equation} holds for all $z = \sum_{k \in N_\varepsilon(\lambda_n)} z_k \phi_k$ and for all $n\in \mathbb N^*$. \end{lemma} \begin{proof} Assume that $C$ is weakly spectrally coercive. By taking $\lambda = \lambda_n$ in \eqref{swcc}, inequality \eqref{swcc2} immediately holds. Conversely, assume that inequality \eqref{swcc2} is satisfied, and let $\lambda \in \mathbb R$. One can easily check that the set $N_{\frac{\varepsilon}{2}}(\lambda)$ is either empty or contains at least one element $n_0 \in \mathbb N^*$. Since $ N_{\frac{\varepsilon}{2}}(\lambda) \subset N_{\varepsilon}(\lambda_{n_0})$, the inequality \begin{equation*} \|Cz\|_Y^2 \geq \psi(\lambda_{n_0}) \|z\|_{X }^2 \end{equation*} holds for all $z = \sum_{k \in N_{\frac{\varepsilon}{2}}(\lambda)} z_k \phi_k \in X_2 \setminus \left\{0\right\}.$ On the other hand, the fact that $\psi$ is non-increasing implies that \begin{equation*} \|Cz\|_Y^2 \geq \psi \left(\lambda+\frac{\varepsilon}{2}\right) \|z\|_{X }^2 \end{equation*} holds for all $z = \sum_{k \in N_{\frac{\varepsilon}{2}}(\lambda)} z_k \phi_k \in X_2 \setminus \left\{0\right\},$ which shows that $C$ is weakly spectrally coercive with the constant $\frac{\varepsilon}{2}>0$ and $ \tilde \psi(\lambda):= \psi(\lambda+\frac{\varepsilon}{2}) \in \mathfrak C$. \end{proof} Lemma \ref{dsc2} was proved in \cite{RTTT} in the particular case where $\psi$ is a constant function. \begin{theorem} \label{swc} Let $\varepsilon>0$ be a fixed constant and let $\psi \in \mathfrak C$.
If $C$ is spectrally coercive with $ \varepsilon, \psi$, then it is weakly spectrally coercive. Conversely, if $C$ is weakly spectrally coercive with $ \varepsilon, \psi$, then $C$ is spectrally coercive. \end{theorem} \begin{proof} Let $\lambda \in \mathbb R_+$ and $\beta>0$ be fixed. A direct calculation shows that if \[ z = \sum_{k \in N_\beta(\lambda)} z_k \phi_k, \] then \[ (\lambda(z)-\lambda)\|z\|_X^2 = \sum_{k \in N_\beta(\lambda)} (\lambda_k -\lambda)z_k^2 . \] Hence \[ |\lambda(z)-\lambda| < \beta. \] On the other hand, \[ \|Az\|_X^2 -\lambda^2(z)\|z\|_X^2 =\|(A-\lambda(z)I) z\|_X^2 \leq 2\|(A-\lambda I) z\|_X^2 +2|\lambda-\lambda(z)|^2\|z\|_X^2 <4\beta^2\|z\|_X^2 . \] Then we deduce from the spectral coercivity in Definition~\ref{dwc} that \eqref{swcc} holds if we choose $\beta$ such that $4\beta^2 < \varepsilon$. Now, we shall prove the opposite implication. Assume that \eqref{swcc} is satisfied for all $\lambda \in \mathbb R_+$, and let \[ z = \sum_{k=1}^{+\infty} z_k \phi_k \] be in $X_2\setminus\{0\}$ and satisfy the inequality \begin{equation}\label{opp} \frac{\|Az\|_X^2}{\|z\|_X^2} -\lambda^2(z) <\beta(\lambda(z)), \end{equation} where $\beta$ will be chosen later in terms of $\varepsilon$ and $\psi$. \\ Set \begin{equation} \label{opp2} (A-\lambda(z)I) z= f. \end{equation} We deduce from~\eqref{opp} the estimate \begin{equation} \label{opp3} \|f\|_X^2 \leq \beta\|z\|_X^2. \end{equation} We now introduce the following orthogonal decomposition of $z$: \begin{equation} \label{orthog} z = z^0+\widetilde z, \end{equation} with \begin{equation} z^0 = \sum_{k \in N_\varepsilon(\lambda(z))} z_k \phi_k,\;\; \widetilde z = \sum_{k \notin N_\varepsilon(\lambda(z))} z_k \phi_k. \end{equation} We deduce from \eqref{opp}, \eqref{opp2} and \eqref{opp3} the estimate \begin{equation} \label{luft1} \|\widetilde z \|_X^2 = \sum_{k \notin N_\varepsilon(\lambda(z))} z_k^2 = \sum_{k \notin N_\varepsilon(\lambda(z))} \frac{f_k^2}{(\lambda(z)-\lambda_k)^2} \leq \frac{1}{\varepsilon^2}\|f\|_X^2 \leq \frac{\beta}{\varepsilon^2} \|z \|_X^2. \end{equation} On the other hand, the inequality \eqref{swcc} for $\lambda= \lambda(z)$ implies \begin{equation} \label{luft2} \|z^0\|_X^2\leq \frac{\|Cz^0\|_{Y}^2}{ \psi(\lambda(z))}. \end{equation} The following result was proved for an admissible operator $C$ first on $(0, +\infty)$ in \cite{RW}, and on $(0, T)$ in \cite{RTTT}. \begin{proposition} \label{RW} For each $\varepsilon>0$ and $\lambda \in \mathbb{R}_+$, define the subspace $V(\lambda) \subset X$ by \begin{equation*} V(\lambda) : = \overline{\mathrm{span}}\left\{ \phi_k: \; k\notin N_\varepsilon(\lambda) \right\}, \end{equation*} and denote by $A_\lambda: V(\lambda) \cap X_2 \rightarrow X$ the restriction of the unbounded operator $A$ to $V(\lambda)$. \\ Then there exists a constant $M>0$ such that \begin{equation} \|C(A_\lambda-\lambda I)^{-1}\|_{\mathcal L(V(\lambda), Y)} \leq M \quad \forall \lambda \in \mathbb{R}_+. \end{equation} \end{proposition} We deduce from \eqref{opp2} and \eqref{orthog} the inequality \begin{equation} \|Cz^0\|_{Y}^2 \leq 2\|Cz\|_{Y}^2+ 2\|C\widetilde z\|_{Y}^2 \leq 2\|Cz\|_{Y}^2+2\| C(A_{\lambda(z)}-\lambda(z) I)^{-1}f\|_{Y}^2.
\label{intineq} \end{equation} Applying now the results of Proposition \ref{RW} to \eqref{intineq}, we get \begin{equation} \label{innnn} \|Cz^0\|_{Y}^2 \leq 2\|Cz\|_{Y}^2+2M\|f\|_{X}^2. \end{equation} Inequalities \eqref{opp3} and \eqref{innnn} give \begin{equation*} \|Cz^0\|_{Y}^2 \leq 2\|Cz\|_{Y}^2+ 2M\beta \|z \|_X^2. \end{equation*} Now, using the inequality \eqref{luft2}, we get \begin{equation} \label{luft3} \|z^0\|_X^2\leq 2 \frac{\|Cz\|_{Y}^2}{ \psi(\lambda(z))}+\frac{2M\beta}{\psi(\lambda(z))} \|z \|_X^2. \end{equation} Combining \eqref{luft1} and \eqref{luft3}, we obtain \begin{eqnarray*} \|z\|_X^2 = \|z^0\|_X^2 + \|\widetilde z \|_X^2 \leq \rho(\lambda(z)) \|z \|_X^2+ 2 \frac{\|Cz\|_{Y}^2}{\psi(\lambda(z))}, \end{eqnarray*} with \begin{equation} \rho(\lambda(z)): = \left(\frac{2M}{\psi(\lambda(z))} +\frac{1}{\varepsilon^2} \right)\beta(\lambda(z)). \end{equation} By taking \begin{equation} \beta(\lambda(z)) := \frac{1}{2}\left(\frac{2M}{\psi(\lambda(z))} +\frac{1}{\varepsilon^2} \right)^{-1}, \end{equation} we find \begin{eqnarray*} \frac{1}{4} \, \psi(\lambda(z)) \|z\|_X^2 \leq \|Cz\|_{Y}^2. \end{eqnarray*} One can easily check that $\beta(\lambda)$ belongs to $\mathfrak C$. Then $C$ is spectrally coercive with the functions $ \beta(\lambda), \frac{1}{4} \, \psi(\lambda) \in \mathfrak C$. \end{proof} \begin{remark} Theorem \ref{swc} shows that the results of the paper \cite{RTTT} by M. Tucsnak {\it et al.} correspond to the particular case of spectral coercivity where $\varepsilon$ and $\psi$ are constant functions. Finally, applying Proposition \ref{RW} is not necessary to prove the theorem: in inequality \eqref{intineq} one can bound $\|C\widetilde z\|_Y^2$ by $\|C\|^2 (\lambda(z)+\varepsilon)^2\frac{\beta}{\varepsilon^2} \|z \|_X^2$, where $ \|C\|$ is the norm of $C$ in $\mathcal L(X_2, Y).$ Applying the results of Proposition \ref{RW} improves the behavior of $\varepsilon(\lambda)$ for large $\lambda$. \end{remark} \section{Application to observability of the Schr\"odinger equation} \label{sec5} Let $\Omega = (0, \pi) \times (0, \pi)$ and let $\partial \Omega$ be its boundary. We consider the following initial and boundary value problem: \begin{equation} \label{syst0} \left\{ \begin{array}{ll} z^{ \prime }(x,t) +i \Delta z(x,t) = 0, \, x \in \Omega, \, t > 0, \\ z(x, t) = 0, \, x\in \partial \Omega, \, t >0, \\ z(x, 0) = z_0(x), \, x\in \Omega. \end{array} \right. \end{equation} Let $\Gamma$ be a nonempty open subset of $\partial \Omega$. Define $C $ to be the boundary observation operator \begin{equation}\label{syst01} y(x, t)= C z(x,t) = \partial_\nu z\vert_ \Gamma, \end{equation} where $\nu$ is the outward normal vector on $\partial \Omega$ and $ \partial_\nu $ is the normal derivative. We now show that the observation system \eqref{syst0}-\eqref{syst01} fits in the general formulation of the system~\eqref{maineq}. \\ Let $X = H^1_0(\Omega)$ be the Hilbert space with scalar product \begin{equation*} \langle v, w\rangle_X = \int_{\Omega} \nabla v \cdot \nabla \overline{w} \, dx. \end{equation*} Then $A= -\Delta: X_2 \subset X \rightarrow X$ is a linear unbounded self-adjoint, strictly positive operator with a compact resolvent. Hence the operator $iA$ generates a strongly continuous group of isometries in $X$ denoted by $(e^{itA})_{t\in \mathbb{R}}$.
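Anticipating the spectral analysis below, it is worth noting how degenerate the spectrum of $A=-\Delta$ on the square is. The following short Python aside (the range $m,n\leq 50$ is an arbitrary choice made only for this illustration) lists a few eigenvalues $m^2+n^2$ together with their clusters of lattice points $(p,q)$ satisfying $p^2+q^2=m^2+n^2$; these clusters are exactly the sets $N_{\frac{1}{2}}(\lambda_{m,n})$ that appear in the proof of Theorem~\ref{mainsquare3} below.

\begin{verbatim}
from collections import defaultdict

# Eigenvalues of the Dirichlet Laplacian on (0,pi)^2 are m^2 + n^2, m, n >= 1.
# The clusters {(p,q) : p^2 + q^2 = m^2 + n^2} may contain several lattice
# points; their size is what prevents a uniform observability constant.
clusters = defaultdict(list)
for m in range(1, 51):
    for n in range(1, 51):
        clusters[m * m + n * n].append((m, n))

# print a few eigenvalues with the largest clusters (within m, n <= 50)
big = sorted(clusters.items(), key=lambda kv: len(kv[1]), reverse=True)[:5]
for val, pts in big:
    print(val, len(pts), pts[:6])
\end{verbatim}

The sizes of these clusters grow (slowly) along subsequences of eigenvalues, which is consistent with the fact that under Assumption III below only a weak, frequency-dependent, observability inequality can be expected.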
Moreover, for $\beta \geq 0$, $X_\beta = D(A^{\frac{\beta}{2}})$ is given by \begin{equation*} X_\beta= \left\{\phi \in H^1_0(\Omega): \, (-\Delta)^{\frac{\beta}{2}} \phi \in H^1_0(\Omega) \right\}. \end{equation*} Then the observability operator $C: X_2 \rightarrow Y:=L^2(\Gamma)$, defined by \eqref{syst01}, is a bounded operator. In addition, it is known that $C$ is an admissible observability operator, that is, for any $T>0$ there exists a constant $C_T>0$ such that the inequality \begin{equation*} \int_0^T \int_{\Gamma}\left| \partial_\nu z\right|^2 ds(x) dt \leq C_T^2 \int_{\Omega} \left|\nabla z_0\right|^2 dx \end{equation*} holds for all $z_0 \in X_2$.\\ The eigenvalues of $A$ are \begin{equation} \label{eigenva} \lambda_{m, n} = m^2+n^2, \;\; m, n \in \mathbb N^*. \end{equation} A corresponding family of normalized eigenfunctions in $ H^1_0(\Omega)$ is \begin{equation} \label{eigenfu} \phi_{m,n}(x) = \frac{2}{\pi \sqrt{m^2+n^2}} \sin(n x_1)\sin(m x_2), \;\; m, n \in \mathbb N^*,\;\; x=(x_1, x_2) \in \Omega. \end{equation} Next we derive observability inequalities corresponding to different geometrical assumptions on the observation set $\Gamma$. {\it Assumption {\bf I}}: We assume that $\Gamma$ contains at least two touching sides of $\Omega$. \\ In this case it is known that $\Gamma$ satisfies the geometrical assumptions of \cite{BLR92}, and exact controllability holds \cite{Le}. We will show that this is indeed the case by applying our coercivity test.\\ Consider the Helmholtz equation \begin{equation} \label{usystem} \left\{ \begin{array}{ll} \Delta u +k^2u = f, \, x \in \Omega, \\ u = 0, \, x\in \partial \Omega\setminus \overline \Gamma, \\ \partial_\nu u -ik u = g, \, x\in \Gamma, \end{array} \right. \end{equation} where $g \in L^2(\Gamma)$ and $f\in L^2(\Omega)$. The following result has been shown in \cite{He} using Rellich identities (which are closely related to the multiplier approach in observability \cite{Li,Li1}). \begin{proposition} \label{Pfrequency} Under Assumption {\bf I} on $\Gamma$, a solution $u \in H^1(\Omega)$ to the system \eqref{usystem} satisfies the inequality \begin{equation} k\|u\|_{L^2(\Omega)} + \|\nabla u\|_{L^2(\Omega)} \leq c_0 \left(\|f\|_{L^2(\Omega)} +\|g\|_{L^2(\Gamma)}\right) \end{equation} for all $k \geq k_0$, where $k_0>0$ and $c_0>0$ are constants that only depend on $\Gamma$. \end{proposition} We deduce from Proposition \ref{Pfrequency} the inequality \begin{equation} \|z\|_X \leq c_1\left( \|Az -\lambda(z) z\|_{X} + \|C z\|_{Y}\right) \end{equation} for all $z \in X_2\setminus \{0\}$, where $\lambda(z)$ is the $A$-frequency of $z$ and $c_1>0$ is a constant that only depends on $\Gamma$. Then by taking $\varepsilon (\lambda) = \frac{1}{4c_1^2}$, we find that $C$ is spectrally coercive with $\psi(\lambda) =\frac{1}{4c_1^2}$, which implies in turn that the system \eqref{syst0}-\eqref{syst01} is exactly observable.\\ \begin{theorem} \label{mainsquare1} Under Assumption {\bf I} on $\Gamma$, the system \eqref{syst0}-\eqref{syst01} is exactly observable. \end{theorem} {\it Assumption {\bf II}}: We assume that $\Gamma$ is one whole side of $\Omega$. Without loss of generality, we further assume that $ \Gamma= (0, \pi) \times \{0\}$. The following result has been derived partially in \cite{BaK2}.
\begin{proposition} \label{frequency2} Under Assumption {\bf II} on $\Gamma$, a solution $u \in H^1(\Omega)$ to the system \eqref{usystem} satisfies the inequality \begin{equation} k\|u\|_{L^2(\Omega)} + \|\nabla u\|_{L^2(\Omega)} \leq c_0 k\left(\|f\|_{L^2(\Omega)} + \|g\|_{L^2(\Gamma)}\right) \end{equation} for all $k \geq k_0$, where $k_0>0$ and $c_0>0$ are constants that only depend on $\Gamma$. \end{proposition} We again deduce from Proposition \ref{frequency2} the resolvent inequality \begin{equation} \|z\|_X \leq c_1(1+\sqrt{\lambda(z)}) \left( \|Az -\lambda(z) z\|_{X} + \|C z\|_{Y}\right) \end{equation} for all $z \in X_2\setminus \{0\}$, where $\lambda(z)$ is the $A$-frequency of $z$ and $c_1>0$ is a constant that only depends on $\Gamma$. Then by taking $\varepsilon (\lambda) = \frac{1}{8c_1^2(1+\lambda)}$, we find that $C$ is spectrally coercive with $\psi(\lambda) =\frac{1}{8c_1^2(1+\lambda)}$. This implies in turn that the system \eqref{syst0}-\eqref{syst01} is weakly observable: there exists a constant $T^0>0$ such that \begin{equation} \label{obsRec} \psi(\lambda(z_0)) \|z_0\|^2_{H^1_0(\Omega)} \leq \int_0^T \int_{\Gamma}\left| \partial_\nu z\right|^2 ds(x) dt \end{equation} for all $z_0 \in X_2$ and for all $T\geq T^0$.\\ \begin{theorem} \label{mainsquare2} Under Assumption {\bf II} on $\Gamma$, the system \eqref{syst0}-\eqref{syst01} is weakly observable for any $z_0 \in X$. \end{theorem} {\it Assumption {\bf III}}: We assume that $\overline{\Gamma}$ is contained in one side of $\Omega$. Without loss of generality, we further assume that $ (\alpha, \beta)\times\{0\} \subset \Gamma \subset (0, \pi) \times \{0\}$, with $0<\alpha<\beta<\pi.$ Then we have the following weak observability result. \begin{theorem} \label{mainsquare3} Under Assumption {\bf III} on $\Gamma$, the system \eqref{syst0}-\eqref{syst01} is weakly observable for any $z_0 \in X$, with $\widetilde \varepsilon(\lambda) = \frac{1}{ \frac{4M}{\delta_\Gamma} \lambda +1}$ and $\widetilde \psi(\lambda) = \frac{\delta_\Gamma}{4 \lambda},$ where $\delta_\Gamma>0$ is a constant that only depends on $\Gamma$ and $M>0$ is the admissibility constant appearing in Proposition \ref{RW}. \end{theorem} Unlike the proofs in the first two cases, the proof of the weak observability in the theorem above is based on intrinsic properties of the eigenelements of $A$ and of the operator $C$. We first present the following useful result. \begin{lemma} \label{wscsquare} The operator $C$ is weakly spectrally coercive, that is, the inequality \begin{equation} \label{ttt1} \|Cz\|_Y^2 \geq \psi(\lambda_{m,n}) \|z\|_{X }^2 \end{equation} holds for all $z = \sum_{k \in N_{\frac{1}{2}}(\lambda_{m,n})} z_k \phi_k$, where $\psi(\lambda) =\frac{\delta_\Gamma}{\lambda}$ and $\delta_\Gamma>0$ is a constant that only depends on $\Gamma$. \end{lemma} \begin{proof} Let $\lambda_{m, n} = m^2+n^2$ be a fixed eigenvalue and let $z = \sum_{k \in N_{\frac{1}{2}}(\lambda_{m,n})} z_k \phi_k $ be a fixed vector in $X_2\setminus\{0\}$.\\ It is easy to check that \begin{equation} N_{\frac{1}{2}}(\lambda_{m,n})= \{k = (p, q) \in \mathbb N^*\times \mathbb N^*:~ p^2+q^2 = m^2+n^2\}.
\end{equation} Therefore \begin{eqnarray} \|Cz\|_Y^2&=& \int_\Gamma\left | \sum_{k \in N_{\frac{1}{2}}(\lambda_{m,n})} z_k C\phi_k(x)\right |^2 ds(x) \nonumber\\ &\geq& \frac{4}{\pi^2} \int_\alpha^\beta \left | \sum_{p^2+q^2= m^2+n^2} \frac{q}{(p^2+q^2)^{\frac{1}{2}}} z_{p, q}\sin(px_1) \right |^2 dx_1.\label{rrrH} \end{eqnarray} Based on techniques related to nonharmonic Fourier series, the following inequality has been proved in Proposition 7 of \cite{RTTT}: \begin{equation} \label{rrrH2} \int_\alpha^\beta \left | \sum_{p^2+q^2= m^2+n^2} \frac{q}{(p^2+q^2)^{\frac{1}{2}}}z_{p, q}\sin(px_1) \right |^2 dx_1 \geq \tilde \delta_{\alpha, \beta} \sum_{p^2+q^2= m^2+n^2}\frac{q^2}{p^2+q^2}|z_{p, q}|^2, \end{equation} where $ \tilde \delta_{\alpha, \beta}>0$ only depends on $\alpha$ and $\beta$. \\ Combining now inequalities \eqref{rrrH} and \eqref{rrrH2}, we find \begin{eqnarray*} \|Cz\|_Y^2 \geq \delta_\Gamma \sum_{p^2+q^2= m^2+n^2}\frac{q^2}{p^2+q^2}|z_{p, q}|^2 \geq \frac{\delta_\Gamma}{\lambda_{m,n}} \|z\|_X^2, \end{eqnarray*} which completes the proof. Here $ \delta_\Gamma := \frac{4}{\pi^2} \tilde \delta_{\alpha, \beta}$ only depends on $\Gamma$. \end{proof} \begin{proof}[Proof of Theorem \ref{mainsquare3}.] The result of the theorem is a direct consequence of Lemma~\ref{dsc2}, Theorem~\ref{swc}, and Lemma~\ref{wscsquare}. We finally obtain that $C$ is spectrally coercive with $\widetilde \varepsilon(\lambda) = \frac{1}{2}\left(\frac{2M}{ \psi(\lambda)} +1 \right)^{-1} $ and $\widetilde \psi(\lambda) = \frac{1}{4}\psi(\lambda)$, which finishes the proof. \end{proof} \begin{remark} We observe that the result of Theorem \ref{mainsquare2}, based on a clever analysis of Fourier series derived in \cite{BaK2}, is indeed a particular case of Theorem \ref{mainsquare3} ($\alpha=0$ and $\beta =\pi$) obtained from Ingham-type inequalities. \end{remark} \section*{Appendix} \label{appendixA} Let $\chi \in C_0(\mathbb{R})$ be the cut-off function with compact support in $(-1, 1)$ given by \begin{eqnarray*} \chi(s) = (1-|s|)e^{-2|s|}\mathbbm{1} _{(-1,1)}. \end{eqnarray*} Then we have the following result. \begin{proposition} The function $\chi(s)$ satisfies \begin{eqnarray*} \chi \in H^1_0(-1,1),\; \frac{\kappa_1}{1+\tau^2} \leq |\widehat \chi(\tau)| \leq \frac{\kappa_2}{1+\tau^2}, \quad \tau \in \mathbb{R}, \end{eqnarray*} where $\kappa_2> \kappa_1>0$ are two fixed constants. \end{proposition} \begin{proof} Since $|\widehat \chi(\tau)| $ is even, we shall prove the inequality only for $\tau \in \mathbb{R}_+$.\\ A direct computation gives \[ \widehat \chi(\tau) = \frac{2}{1+\tau^2} +2\Re\left(\frac{1-e^{-(1+i\tau)}}{ (1+i\tau)^2}\right). \] Then \[ |\widehat \chi(\tau)| \leq \frac{6}{1+\tau^2}. \] On the other hand, we have \begin{eqnarray*} \widehat \chi(\tau) &=& \int_{\mathbb{R}} \frac{2}{1+(s-\tau)^2} \mbox{sinc}^2 \left(\frac{s}{2}\right) ds.
Using the estimate $\mbox{sinc}(s) \geq \frac{2}{\pi}$ for $s\in (0, \frac{\pi}{2})$, we get \begin{eqnarray*} \widehat \chi(\tau) &\geq& \frac{4}{\pi}\int_{-\frac{1}{4}}^{\frac{1}{4}} \frac{1}{1+(s-\tau)^2}\, ds \;\geq\; \frac{4}{\pi} \left(\arctan\left(\tfrac{1}{4}-\tau\right)+\arctan\left(\tfrac{1}{4}+\tau\right)\right) = \frac{4}{\pi} \arctan \left({\frac{\frac{1}{2}}{\frac{15}{16}+\tau^2}} \right)\\ &\geq& \frac{4}{\pi}\left[ \frac{\frac{1}{2}}{\frac{15}{16}+\tau^2} - \frac{1}{3} \left(\frac{\frac{1}{2}}{\frac{15}{16}+\tau^2}\right)^3\right] \;\geq\; \frac{\frac{4}{3\pi}}{\frac{15}{16}+\tau^2} \;\geq\; \frac{\frac{4}{3\pi}}{1+\tau^2}, \end{eqnarray*} which finishes the proof. \end{proof} \begin{thebibliography}{99} \bibitem{AI} {\sc S. A. Avdonin and S. A. Ivanov,} {\em Families of exponentials,} Cambridge University Press, Cambridge, (1995). \bibitem{AC1}{\sc K. Ammari and M. Choulli}, {\em Logarithmic stability in determining two coefficients in a dissipative wave equation. Extensions to clamped Euler-Bernoulli beam and heat equations}, J. Diff. Equat., {\bf 259} (2015), 3344--3365. \bibitem{AC3}{\sc K. Ammari and M. Choulli}, {\em Logarithmic stability in determining a boundary coefficient in an ibvp for the wave equation}, Dynamics of PDE, {\bf 14} (2017), 33--45. \bibitem{ACT}{\sc K. Ammari, M. Choulli and F. Triki}, {\em Determining the potential in a wave equation without a geometric condition. Extension to the heat equation}, Proc. Amer. Math. Soc., {\bf 144} (2016), 4381--4392. \bibitem{ACT1} {\sc K. Ammari, M. Choulli and F. Triki,} {\em H\"older stability in determining the potential and the damping coefficient in a wave equation,} Journal of Evolution Equations, in press (2019). \bibitem{ACT2}{\sc K. Ammari, M. Choulli and F. Triki}, {\em How to use observability inequalities to solve some inverse problems for evolution equations? An unified approach,} arXiv:1711.01779. \bibitem{BaK}{\sc G. Bao and Y. Kihyun,} {\em On the stability of an inverse problem for the wave equation,} Inverse Problems, {\bf 25} (2009), 045003. \bibitem{BaK2} {\sc G. Bao and Y. Kihyun,} {\em Stability for the electromagnetic scattering from large cavities,} Archive for Rational Mechanics and Analysis, {\bf 220} (2016), 1003--1044. \bibitem{Bal77} {\sc J.-M. Ball,} {\em Strongly continuous semigroups, weak solutions, and the variation of constants formula,} Proc. Amer. Math. Soc., \textbf{63} (1977), 370--373. \bibitem{BLR92} {\sc C. Bardos, G. Lebeau and J. Rauch,} {\em Sharp sufficient conditions for the observation, control, and stabilization of waves from the boundary,} SIAM J. Control Optim., \textbf{30} (1992), 1024--1065. \bibitem{BZbb} {\sc N. Burq and M. Zworski,} {\em Control in the presence of a black box,} J. Amer. Math. Soc., \textbf{17} (2004), 443--471. \bibitem{Fa} {\sc H.-O. Fattorini,} {\em Some remarks on complete controllability,} SIAM Journal on Control, {\bf 4} (1966), 686--694. \bibitem{Ha} {\sc M. Hautus,} {\em Controllability and observability conditions of linear autonomous systems,} Nederl. Akad. Wetensch. Proc. Ser. A, {\bf 72} (1969), 443--448. \bibitem{He} {\sc U. Hetmaniuk,} {\em Stability estimates for a class of Helmholtz problems,} Communications in Mathematical Sciences, {\bf 5} (2007), 665--678. \bibitem{Is} {\sc V. Isakov}, {\em Inverse Problems for Partial Differential Equations,} Springer-Verlag, Berlin, 2006. \bibitem{KL} {\sc V. Komornik and P. Loreti,} {\em Fourier Series in Control Theory,} Springer Monographs in Mathematics, Springer, New York, 2005.
\bibitem{LL}{\sc C. Laurent and M. L\'eautaud}, {\em Uniform observability estimates for linear waves,} ESAIM: COCV, {\bf 22} (2016), 1097--1136. \bibitem{lebeaujerison} {\sc D. Jerison and G. Lebeau,} {\em Nodal sets of sums of eigenfunctions,} Harmonic analysis and partial differential equations (Chicago, IL, 1996), 223--239, Chicago Lectures in Math., Univ. Chicago Press, Chicago, IL, 1999. \bibitem{Le} {\sc G. Lebeau,} {\em Contr\^ole de l'\'equation de Schr\"odinger,} J. Math. Pures Appl., {\bf 71} (1992), 267--291. \bibitem{Li} {\sc J.-L. Lions,} {\em Contr\^olabilit\'e exacte, perturbations et stabilisation de syst\`emes distribu\'es,} Tome 1, Recherches en Math\'ematiques Appliqu\'ees, Vol. 8, Masson, Paris, 1988. \bibitem{Li1} {\sc J.-L. Lions,} {\em Exact controllability, stabilization and perturbations for distributed systems,} SIAM Review, {\bf 30} (1988), 1--68. \bibitem{Liu97} {\sc K. Liu,} {\em Locally distributed control and damping for the conservative systems,} SIAM J. Control Optim., \textbf{35} (1997), 1574--1590. \bibitem{LLR} {\sc K. Liu, Z. Liu and B. Rao,} {\em Exponential stability of an abstract nondissipative linear system,} SIAM J. Control Optim., {\bf 40} (2001), 149--165. \bibitem{Miller05} {\sc L. Miller,} {\em Controllability cost of conservative systems: resolvent condition and transmutation,} Journal of Functional Analysis, \textbf{218} (2005), 425--444. \bibitem{RTTT} {\sc K. Ramdani, T. Takahashi, G. Tenenbaum and M. Tucsnak,} {\em A spectral approach for the exact observability of infinite-dimensional systems with skew-adjoint generator,} Journal of Functional Analysis, \textbf{226} (2005), 193--229. \bibitem{RT}{\sc K. Ren and F. Triki}, {\em A global stability estimate for the photo-acoustic inverse problem in layered media,} European Journal of Applied Mathematics, (2019). \bibitem{rudin} {\sc W. Rudin,} {\em Real and complex analysis,} McGraw-Hill Book Co., New York, third edition, 1987. \bibitem{TW09} {\sc M. Tucsnak and G. Weiss,} {\em Observation and control for operator semigroups,} Springer Science \& Business Media, 2009. \bibitem{RW} {\sc D. L. Russell and G. Weiss,} {\em A general necessary condition for exact observability,} SIAM J. Control Optim., \textbf{32} (1994), 1--23. \bibitem{Ya} {\sc M. Yamamoto}, {\em Stability, reconstruction formula and regularization for an inverse source hyperbolic problem by a control method,} Inverse Probl., {\bf 11} (1995), 481--496. \bibitem{ZY97} {\sc Q. Zhou and M. Yamamoto,} {\em Hautus condition on the exact controllability of conservative systems,} Internat. J. Control, \textbf{67} (1997), 371--379. \bibitem{Zu}{\sc E. Zuazua}, {\em Controllability and observability of partial differential equations: some results and open problems,} Handbook of differential equations: evolutionary equations, Vol. 3, North-Holland, 2007, 527--621. \end{thebibliography} \end{document}
\begin{document} \title{Effect of thermal noise on atom-field interaction: Glauber-Lachs versus Mixing} \author{ S. Sivakumar\\Materials Physics Division\\ Indira Gandhi Centre for Atomic Research\\ Kalpakkam 603 102 INDIA\\ Email: [email protected]\\ Phone: 91-044-27480500-(Extension)22503} \maketitle \begin{abstract} A coherent signal containing thermal noise is a mixed state of radiation. There are two distinct classes of such states: a Gaussian state obtained by Glauber-Lachs mixing, and a non-Gaussian state obtained by the canonical probabilistic mixing of a thermal state and a coherent state. Though both versions describe signal states containing thermal noise, the effect of the noise is less pronounced in the Glauber-Lachs version. The effects of these two distinct ways of noise addition are considered in the context of atom-field interaction; in particular, the temporal evolution of the population inversion and of the atom-field entanglement is studied. Quantum features such as the collapse-revival structures in the dynamics of population inversion and entanglement are diminished by the presence of thermal noise. It is shown that the features lost due to the presence of thermal noise are restored by the process of photon-addition. \end{abstract} PACS: 42.50.Pq, 03.67.Bg, 03.67.Mn\\ Keywords: displaced thermal states, photon-added coherent states, Glauber-Lachs, Jaynes-Cummings model \section{Introduction}\label{secI} Thermal radiation is a source of noise in the context of coherent signals. This source of noise is unavoidable at finite temperatures, making it necessary to understand its effects on experiments. Thermal radiation is characterized by the temperature ($T$) of the source emitting the thermal photons. The density operator for a single-mode field in a thermal state\cite{gerryknight} is \begin{equation} \hat{\rho}_t=\frac{1}{1+\bar{n}}\sum_{n=0}^\infty\left[\frac{\bar{n}}{1+\bar{n}}\right]^n\vert n\rangle\langle n\vert. \end{equation} The average number of thermal photons is $\bar{n}$, which is related to the temperature of the source of thermal photons. The photon-number distribution $\langle n\vert\hat{\rho}_t\vert n\rangle$ is a monotonically decreasing function of $n$, that is, the maximum probability is for the vacuum state. On the other hand, a coherent state of the radiation field is a pure state which is characterized by a complex number $\alpha$. In the number state basis, the coherent state of amplitude $\alpha$\cite{gerryknight} is \begin{equation} \vert\alpha\rangle=\exp\left[-\frac{\vert\alpha\vert^2}{2}\right]\sum_{n=0}^\infty\frac{\alpha^n}{\sqrt{n!}}\vert n\rangle. \end{equation}
Unlike the thermal state, a coherent state has a single-peaked photon-number distribution, and the peak of the distribution occurs at $n\approx\vert\alpha\vert^2$. Since the thermal state is a mixed state except at $T=0$~K, any state of light that incorporates thermal noise is also a mixed state. Thermal radiation can be incorporated into the coherent radiation {\it via} either mixing or the Glauber-Lachs (GL) superposition of a coherent state and a thermal field. The GL version corresponds to the unitarily displaced thermal state (DTS)\cite{Glauber, Lachs}, \begin{equation}\label{GLS} \hat{\rho}_{_d}=\hat{D}(\alpha)\hat{\rho}_{_t}\hat{D}^\dagger(\alpha), \end{equation} where $\hat{D}(\alpha)=\exp[\alpha\hat{a}^\dagger-\alpha^*\hat{a}]$ is the displacement operator expressed in terms of the creation operator $\hat{a}^\dagger$ and the annihilation operator $\hat{a}$ of the field. These states are the quantum versions of classical channels with additive Gaussian noise\cite{cavesdrummond}. The DTS can be characterized as an intermediate state between the coherent state and the thermal state, these two limiting cases corresponding to $\bar{n}=0$ and $\alpha=0$ respectively \cite{valverde}. The other scheme to incorporate thermal noise is mixing, which is relevant if the field is obtained by probabilistically choosing photons from a coherent source and a thermal source. The resultant state is described by \begin{equation}\label{mtcs} \hat{\rho}_m=(1-q)\hat{\rho}_t+q\vert\alpha\rangle\langle\alpha\vert, \end{equation} where $0\leq q\leq 1$. Such states are relevant in the study of noisy quantum channels that transmit either the coherent state $\vert\alpha\rangle$ with probability $q$ or the thermal state $\hat{\rho}_{t}$ with probability $1-q$. In the subsequent discussion, the state $\hat{\rho}_m$ is referred to as the mixed thermal-coherent state (MTCS). The DTS and the MTCS, albeit constructed out of the same coherent and thermal states, are not equivalent. The DTS is a Gaussian state, while the MTCS, which is a mixture of Gaussian states, namely the coherent state and the thermal state, is not a Gaussian state unless $q=0$ or $1$\cite{paris}. If the mean photon-number $\bar{n}$ is zero, the DTS becomes the coherent state $\vert\alpha\rangle$, a pure state with no thermal noise; in the same limit, the MTCS is a mixture of $\vert\alpha\rangle$ and the vacuum state $\vert 0\rangle$. It is further required to have $q=1$ to ensure that there is no thermal noise in the MTCS. If the amplitude $\alpha$ vanishes, the DTS becomes the thermal state $\hat{\rho}_{t}$, while the MTCS is a mixture of the thermal state and the vacuum state. In Fig.~\ref{fig:Fig1}, the photon-number distributions are shown for the DTS (dashed curve) and the MTCS (continuous curve) corresponding to $\alpha=\sqrt{10}$ and $q=0.5$. The photon-number distribution of the DTS exhibits a single peak, whereas that of the MTCS has two: one at $n=0$, while the location of the other depends on the values of $\alpha$ and $\bar{n}$. The paper is organized as follows: in Section II, the DTS is shown to be a mixture of photon-added coherent states. In Section III, the interaction of a two-level atom with the radiation field is studied. The objective of the study is to highlight the differences in the evolution of entanglement and population inversion when the atom interacts with the DTS and the MTCS. In Section IV, effects of photon-addition on atom-field dynamics are discussed, followed by a summary of the results.
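The photon-number distributions just described are easy to reproduce numerically. The following sketch (in Python, using NumPy and SciPy on a truncated Fock space; the truncation dimension and the value of $\bar{n}$ are illustrative assumptions, not the ones used for Fig.~\ref{fig:Fig1}) builds $\hat{\rho}_d$ by displacing a truncated thermal state and $\hat{\rho}_m$ by mixing, and extracts the diagonal elements $\langle n\vert\hat{\rho}\vert n\rangle$.
\begin{verbatim}
import numpy as np
from math import factorial
from scipy.linalg import expm

dim = 60                                   # Fock-space truncation (illustrative)
alpha, nbar, q = np.sqrt(10.0), 1.0, 0.5   # illustrative parameters

n = np.arange(dim)
a = np.diag(np.sqrt(np.arange(1, dim)), 1)        # annihilation operator
adag = a.conj().T

# thermal state (geometric distribution), renormalized after truncation
p_th = (nbar / (1 + nbar)) ** n / (1 + nbar)
rho_t = np.diag(p_th / p_th.sum())

# coherent state |alpha> in the Fock basis
coh = np.exp(-abs(alpha) ** 2 / 2) * alpha ** n \
      / np.sqrt([float(factorial(k)) for k in n])
rho_coh = np.outer(coh, coh.conj())

# displaced thermal state (Glauber-Lachs) and mixed thermal-coherent state
D = expm(alpha * adag - np.conj(alpha) * a)
rho_d = D @ rho_t @ D.conj().T
rho_m = (1 - q) * rho_t + q * rho_coh

P_d = np.real(np.diag(rho_d))   # photon-number distribution of the DTS
P_m = np.real(np.diag(rho_m))   # photon-number distribution of the MTCS
print(P_d[:15]); print(P_m[:15])
\end{verbatim}
With parameters of this kind one expects \texttt{P\_d} to show a single broad peak and \texttt{P\_m} to show both a peak at $n=0$ and a second one near $n\approx\vert\alpha\vert^2$, in line with the description of Fig.~\ref{fig:Fig1} above.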
\section{Glauber-Lachs states as mixture of photon-added coherent states} A photon-added coherent state (PACS) of order $m$ is defined as\cite{gsatara} \begin{equation} \vert\alpha,m\rangle=\frac{\hat{a}^{\dagger m}\vert\alpha\rangle}{\sqrt{m!L_m(-\vert\alpha\vert^2)}}. \end{equation} Here $L_m(x)$ is the $m$th order Laguerre polynomial\cite{grad} and $m$ is a nonnegative integer. The first index $\alpha$ refers to the complex amplitude and the second index $m$ is the order of the PACS. It is important to note that PACS of different orders are not orthogonal to each other. These states have been studied, both theoretically and experimentally, in a variety of contexts, such as nonclassicality and quantum-classical correspondence\cite{zavatta}. In this section it is shown that the DTS is expressible as a mixture of PACS of all orders, all of the same amplitude. The definition of the DTS given in Eq.~\ref{GLS} yields the following expression \begin{equation}\label{mdncs} \hat{\rho}_{d}=\frac{1}{1+\bar{n}}\sum_{n=0}^\infty\left[\frac{\bar{n}}{1+\bar{n}}\right]^n\vert n,\alpha\rangle\langle n,\alpha\vert, \end{equation} which is a mixture of orthogonal states, namely, the displaced number states $\vert n,\alpha\rangle=\hat{D}(\alpha)\vert n\rangle$. Another way of expressing the displaced thermal states is to note that \begin{equation} \hat{\rho}_d=\frac{1}{1+\bar{n}}\left[\frac{\bar{n}}{1+\bar{n}}\right]^{(\hat{a}^\dagger-\alpha^*)(\hat{a}-\alpha)}. \end{equation} Defining $\bar{n}=1/(e^\gamma-1)$, the above expression for $\hat{\rho}_d$ can be simplified to \cite{loui} \begin{equation} \hat{\rho}_d=\epsilon e^{-\epsilon\vert\alpha\vert^2} e^{\epsilon\alpha\hat{a}^\dagger} e^{-\gamma\hat{a}^\dagger\hat{a}} e^{\epsilon\alpha^*\hat{a}}. \end{equation} The parameter $\epsilon$ is given by $1/(1+\bar{n})$. Expanding $e^{-\gamma\hat{a}^\dagger\hat{a}}$ in terms of the Fock state projectors $\vert n\rangle\langle n\vert$, the density operator of the DTS becomes \begin{equation} \hat{\rho}_{d}=\epsilon e^{-\epsilon\vert\alpha\vert^2}\sum_{n=0}^\infty \frac{e^{-\gamma n}}{n!}e^{\epsilon\alpha\hat{a}^\dagger}\hat{a}^{\dagger n}\vert 0\rangle\langle 0\vert\hat{a}^n e^{\epsilon\alpha^*\hat{a}}. \end{equation} Since $\exp(\epsilon\alpha\hat{a}^\dagger)$ [respectively, $\exp(\epsilon\alpha^*\hat{a})$] commutes with $\hat{a}^\dagger$ [respectively, $\hat{a}$], the order of multiplication can be reversed. It is easy to recognize that $\exp(\epsilon\alpha\hat{a}^\dagger)\vert 0\rangle= \exp(\epsilon^2\vert\alpha\vert^2/2)\vert\epsilon\alpha\rangle$, a coherent state of amplitude $\epsilon\alpha$, apart from the multiplicative factor $\exp(\epsilon^2\vert\alpha\vert^2/2)$. Further, $\hat{a}^{\dagger n}\vert\epsilon\alpha\rangle$ are PACS, apart from the normalization factor $\sqrt{n!L_n(-\vert\epsilon\alpha\vert^2)}$. With these identifications, the previous equation is recast as \begin{equation}\label{mpacs} \hat{\rho}_d=\frac{e^{-\bar{n}\left|\tilde\alpha\right|^2}}{1+\bar{n}} \sum_{n=0}^\infty \left[\frac{\bar{n}}{1+\bar{n}}\right]^n L_n(-\left|\tilde\alpha\right|^2)\left|{\tilde\alpha,n}\rangle\langle\tilde\alpha,n\right|, \end{equation} where $\tilde\alpha=\alpha/(1+\bar{n})$. The expression reveals that the DTS is a mixture of PACS of all orders, each of amplitude $\alpha/(1+\bar{n})$. It is interesting to contrast the two expressions for the DTS, given in Eqs.
\ref{mdncs} and \ref{mpacs}, respectively: the former relation expresses the DTS as a mixture of orthogonal states, while the latter expresses it as a mixture of nonorthogonal states. \section{Population inversion and entanglement dynamics} In this section, the dynamics of a two-level atom interacting with a single-mode field is considered. The initial state of the field is either the DTS or the MTCS. The presence of thermal noise in the initial state affects the dynamics of the interaction. The evolution of the atom-field state is studied to understand the effects of the different ways of incorporating the thermal noise. The atom-field interaction is modelled by the Jaynes-Cummings Hamiltonian\cite{jaynes} \begin{equation} \hat{H}_{JC}=\hbar\omega\hat{a}^\dagger\hat{a}+\frac{\hbar\nu}{2}\sigma_z+\hbar\lambda(\hat{a}^\dagger\sigma_-+\hat{a}\sigma_+), \end{equation} where $\nu$ is the atomic transition frequency, $\omega$ is the field frequency and $\lambda$ is the coupling constant for the atom-field interaction. Using a suitable interaction picture and the rotating-wave approximation\cite{Bradmore}, the Hamiltonian becomes \begin{equation} \hat{H}_I=\hbar\frac{\Delta}{2}\sigma_z+i\hbar\lambda\left(\hat{a}^\dagger\sigma_--\hat{a}\sigma_+\right), \end{equation} where $\Delta=\omega-\nu$ is the detuning parameter. The operators relevant to the two-level atom are the Pauli operator $\sigma_z$ and the lowering and raising operators $\sigma_-$ and $\sigma_+$ respectively. The time-evolved density operator $\hat{\rho}(t)$ is $\hat\rho(t)=\hat{U}(t)\hat\rho(0)\hat{U}^\dagger(t)$, where $\hat{\rho}(0)$ is the initial density operator of the atom-field system. The unitary evolution operator $\hat{U}(t)$ is $\exp\left[-i\hat{H}_I t/\hbar\right]$. In this report, the initial state of the atom-field system is taken to be $\vert e\rangle\langle e\vert\otimes\rho_f$, a factorizable state which has no entanglement. The operator $\rho_f$ stands for the field state, which is taken to be either the DTS or the MTCS. The states $\vert e,n\rangle$ and $\vert g,n+1\rangle$ transform under the action of $\hat{H}_I$ as follows: \begin{eqnarray} \hat{H}_I\vert e,n\rangle&=&\hbar\frac{\Delta}{2}\vert e,n\rangle+i\hbar\lambda\sqrt{n+1}\vert g,n+1\rangle,\\ \hat{H}_I\vert g,n+1\rangle&=&-\hbar\frac{\Delta}{2}\vert g,n+1\rangle-i\hbar\lambda\sqrt{n+1}\vert e,n\rangle. \end{eqnarray} These relations indicate that the span of the states $\vert e,n\rangle$ and $\vert g,n+1\rangle$ is invariant under the action of the unitary evolution operator $\exp(-it\hat{H}_I/\hbar)$. If the interaction is resonant ($\Delta=0$), the time-evolved density operator is \begin{equation} \hat{\rho}(t)=\sum_{n,m=0}^\infty\rho_{n,m}, \end{equation} with \begin{equation} \rho_{n,m}=c_{nm}\left( \begin{array}{c} \cos(\lambda\sqrt{n+1}\,t)\cos(\lambda\sqrt{m+1}\,t)\\ \sin(\lambda\sqrt{n+1}\,t)\cos(\lambda\sqrt{m+1}\,t)\\ \cos(\lambda\sqrt{n+1}\,t)\sin(\lambda\sqrt{m+1}\,t)\\ \sin(\lambda\sqrt{n+1}\,t)\sin(\lambda\sqrt{m+1}\,t)\\ \end{array} \right)^{\tau} \left( \begin{array}{c} \vert e,n\rangle\langle e, m\vert\\ \vert g,n+1\rangle\langle e, m\vert\\ \vert e,n\rangle\langle g, m+1\vert\\ \vert g,n+1\rangle\langle g, m+1\vert \end{array} \right). \end{equation} The superscript $\tau$ denotes the matrix transpose.
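A direct way to visualize this evolution is to carry it out numerically. The sketch below (Python with NumPy/SciPy; the Fock-space truncation, the coupling constant and the field parameters are illustrative assumptions) builds the resonant interaction Hamiltonian on a truncated field space, propagates the product state $\vert e\rangle\langle e\vert\otimes\rho_f$ with $\hat{U}(t)=\exp(-i\hat{H}_I t/\hbar)$, and prints the probability of finding the atom in its excited state.
\begin{verbatim}
import numpy as np
from math import factorial
from scipy.linalg import expm

dim, lam = 40, 1.0                        # truncation and coupling (illustrative)
alpha, nbar, q = np.sqrt(10.0), 1.0, 0.5  # field parameters (illustrative)

n = np.arange(dim)
a = np.diag(np.sqrt(np.arange(1, dim)), 1)       # field annihilation operator
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma_-, basis {|e>, |g>}
sp = sm.conj().T                                 # sigma_+

# resonant interaction Hamiltonian (Delta = 0, hbar = 1)
H = 1j * lam * (np.kron(a.conj().T, sm) - np.kron(a, sp))

# initial field state: here the MTCS; the DTS is handled analogously
p_th = (nbar / (1 + nbar)) ** n / (1 + nbar)
rho_th = np.diag(p_th / p_th.sum())
coh = np.exp(-abs(alpha) ** 2 / 2) * alpha ** n \
      / np.sqrt([float(factorial(k)) for k in n])
rho_f = (1 - q) * rho_th + q * np.outer(coh, coh.conj())

rho0 = np.kron(rho_f, np.diag([1.0, 0.0]).astype(complex))   # rho_f x |e><e|
Pe = np.kron(np.eye(dim), np.diag([1.0, 0.0]))               # projector onto |e>

for t in np.linspace(0.0, 20.0, 5):
    U = expm(-1j * H * t)
    rho = U @ rho0 @ U.conj().T
    print(t, np.real(np.trace(Pe @ rho)))
\end{verbatim}
In this notation the population inversion discussed below is simply $2\,\mathrm{Tr}[\hat{P}_e\hat{\rho}(t)]-1$, and the same evolved state can be partially transposed to obtain the entanglement measure used later in the paper.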
The coefficients $c_{nm}$ are the matrix elements $\langle n\vert\hat{\rho}\vert m\rangle$ of the initial density operator of the field; the relevant expressions for the MTCS and the DTS are \begin{eqnarray} \langle n\vert\hat{\rho}_m\vert m\rangle&=&\frac{1-q}{1+\bar{n}}\left(\frac{\bar{n}}{1+\bar{n}}\right)^n\delta_{n,m}+q\exp(-\vert\alpha\vert^2)\times\nonumber\\ & &~~\exp[i(n-m)\theta]\frac{\vert\alpha\vert^{n+m}}{\sqrt{n!m!}},\\ \langle n\vert\hat{\rho}_d\vert m\rangle &=&\sqrt{\frac{n!}{m!}}\left[\frac{1}{1+\bar{n}}\right]^{m-n+1}\left[\frac{\bar{n}}{1+\bar{n}}\right]^n\exp[-\vert\alpha\vert^2/(1+\bar{n})]\times\nonumber\\ &&~~\exp[i(n-m)\theta]\vert\alpha\vert^{m-n}L_m^{m-n}\left(\frac{\vert\alpha\vert^2}{\bar{n}(1+\bar{n})}\right), \end{eqnarray} respectively. It is also clear that $c_{mn}=c_{nm}^*$, the complex conjugate of $c_{nm}$. For the purpose of comparing the interaction dynamics of the atom with the two classes of fields, namely the DTS and the MTCS, it is essential that the two field states are compared on an equal footing. While the displacement $\alpha$ and the mean number of thermal photons $\bar{n}$ are common to both states, the parameter $q$ in the MTCS is not fixed. A meaningful way is to choose the value of $q$ so that the overlap with the coherent state $\vert\alpha\rangle$ is the same for both the DTS and the MTCS, {\em i.e.}, $\langle\alpha\vert\hat{\rho}_m\vert\alpha\rangle=\langle\alpha\vert\hat{\rho}_d\vert\alpha\rangle$. With this restriction, the value of $q$ is fixed by the parameters $\alpha$ and $\bar{n}$. The respective contributions of the coherent state $\vert\alpha\rangle$ to the MTCS and the DTS are \begin{eqnarray} \langle\alpha\vert\hat{\rho}_m\vert\alpha\rangle&=&q+\frac{1-q}{1+\bar{n}}\exp[-\frac{\vert\alpha\vert^2}{1+\bar{n}}],\\ \langle\alpha\vert\hat{\rho}_d\vert\alpha\rangle&=&\frac{1}{1+\bar{n}}. \end{eqnarray} The contribution from the coherent state $\vert\alpha\rangle$ to the states decreases as $\bar{n}$ increases. Let $\bar{q}$ be that special value of $q$ which ensures equal overlap of the DTS and the MTCS with the coherent state $\vert\alpha\rangle$. Then, the last two expressions yield \begin{equation} \bar{q}=\frac{1-\exp[-\vert\alpha\vert^2/(1+\bar{n})]}{\bar{n}+1-\exp[-\vert\alpha\vert^2/(1+\bar{n})]}. \end{equation} In physical terms, this is the probability with which the coherent state is chosen in the mixed state defined in Eq.~\ref{mtcs} to ensure equal overlap with $\vert\alpha\rangle$. If $\bar{n}$ is zero (no thermal photons), $\bar{q}$ becomes unity. Hence, the noise-free limit of the MTCS is the pure coherent state $\vert\alpha\rangle$, as in the case of the DTS. If $\alpha=0$, then $\bar{q}$ is $0$. In this limit, the states $\hat{\rho}_d$ and $\hat{\rho}_m$ become the thermal state $\hat{\rho}_t$, a mixed state. Thus the condition of equal overlap implies that the limiting cases of the DTS and the MTCS are the same. This implies that the DTS and the MTCS are two classes of states that interpolate between a pure state ($\vert\alpha \rangle$) and a mixed state (the thermal state). \\ During the evolution, the state of the atom and the field is entangled, a feature that originates in the bipartite nature of the system. The entanglement between the atom and the field is quantified by the negativity $N$, defined as the absolute sum of the negative eigenvalues of the partially transposed density operator $\hat{\rho}^{PT}$\cite{wei}.
If $\lambda_k$ are the eigenvalues of $\hat{\rho}^{PT}(t)$, then $N(t)=\sum_k\left[\vert\lambda_k\vert-\lambda_k\right]/2$. In the case of atom-field evolution under the Jaynes-Cummings interaction, the time-averaged entanglement depends on the initial mixedness of the bipartite state\cite{Kayhan}. The mixedness of the DTS is \begin{equation} 1-Tr[\hat{\rho}_d^2]=\frac{2\bar{n}}{1+2\bar{n}}. \end{equation} This expression is independent of $\alpha$ and hence the mixedness of the DTS is the same as that of the thermal state. This is a consequence of the fact that the two states are related by a unitary transformation. The mixedness of the MTCS, given by \begin{equation} 1-Tr[\hat{\rho}_m^2]=1-q^2-\frac{(1-q)^2}{1+2\bar{n}}-\frac{2q(1-q)}{1+\bar{n}}\exp[-\frac{\vert\alpha\vert^2}{1+\bar{n}}], \end{equation} depends on both $\bar{n}$ and $\alpha$. As far as the atomic dynamics is concerned, the relative probabilities of being in the excited and ground states are important. In this context, the population inversion $W(t)$, defined as the difference between the probabilities for the atom to be in the excited state and in the ground state, is of interest. Since the atomic Hamiltonian is proportional to $\vert e\rangle\langle e\vert-\vert g\rangle\langle g\vert$, the population inversion is essentially the energy of the atom, apart from an overall scaling factor $\hbar\nu/2$. If the state of the field is a coherent state of suitably large amplitude, the time-evolution of $W(t)$ is characterized by the well known ``collapse-revival'' structures\cite{cummings,eberly, walther}. There are associated features present in the evolution of the entanglement between the atom and the field\cite{gerryknight}. The effectiveness of the thermal radiation in diminishing these features of the atom-field dynamics is a qualitative indicator of the effects of thermal noise \cite{mvs}. If $\bar{n}$ and $\vert\alpha\vert$ are less than unity, both the DTS and the MTCS are approximately the vacuum state of the field. The interaction of a two-level atom in its excited state with the vacuum state leads to a periodic exchange of energy between the field and the atom. Therefore, nontrivial dynamics can be expected only if $\vert\alpha\vert$ differs significantly from unity. As a representative case, the numerical values considered for the study in this work are: $\vert\alpha\vert^2=10$, with $\bar{n}$ taken to be a fraction of $\vert\alpha\vert^2$. In Fig.~\ref{fig:Fig2}, the temporal variation of the population inversion is shown for three different values of $\bar{n}$. For easy comparison, the corresponding profiles for the DTS and the MTCS are shown in adjacent columns. In the noise-free limit, that is, $\bar{n}=0$, the DTS and the MTCS are identical and there is no difference in the dynamics of the interaction with the two-level atom, as seen from the first row of figures. However, as $\bar{n}$ is increased to 0.1, equal to one percent of the coherent state photon number, the dynamics in the DTS case is very different from that of the MTCS. This is seen by comparing the two figures in the second row. The profiles clearly show the marked differences in the dynamics induced by the two different initial states. As the fraction of thermal photons is increased to ten percent ($\bar{n}=1$), the temporal profile of the population inversion in the case of the MTCS exhibits a higher degree of deviation from the $\bar{n}=0$ case; notably, the collapse-revival structure is totally absent.
In contrast, in the case of the DTS the collapse-revival structure is still intact even for $\bar{n}=1$. This indicates that the thermal noise in the MTCS is more effective in washing out these features than that in the Glauber-Lachs version. The temporal evolution of the negativity $N(t)$ is shown in Fig.~\ref{fig:Fig3}. To compute the eigenvalues of the partially transposed bipartite density matrix to high accuracy, the size of the density matrix has been chosen to be $300\times 300$. Since the initial photon distribution is appreciable for $n\approx10$ and falls rapidly for larger values of $n$, the chosen size is large enough to ensure that the computed eigenvalues are not spurious. The first, second and third rows of figures in Fig.~\ref{fig:Fig3} correspond to $\bar{n}=0, 0.1$ and $1$ respectively. As in the case of the population inversion, the dynamics of $N(t)$ shows significant changes in the case of the MTCS as the value of $\bar{n}$ increases, leading to a complete loss of structures in the temporal profile. The Glauber-Lachs mixing too leads to changes in the evolution of $N(t)$ as the noise level increases, though to a lesser extent than in the MTCS case. Thus, the atom-field entanglement is more robust against thermal noise if the thermal noise is included through the Glauber-Lachs scheme rather than through mixing. It may be noted that if $\vert\alpha\vert$ is large, then the condition of equal overlap with the coherent state also implies that the states are of equal mixedness. \section{Noise reduction by photon-addition} In the previous section the effect of thermal noise on the dynamics of the atom-field interaction was discussed. In this section, the effects of photon-addition are presented. One of the consequences of photon-addition is that the resultant state has no overlap with the vacuum state $\vert 0\rangle$. In the thermal state of the radiation field, the vacuum state probability is not less than that of any other number state. Since the photon-added states do not have any contribution from the vacuum state, it can be expected that the effects of noise are less pronounced. The photon-added versions of the DTS and the MTCS are \begin{eqnarray} \tilde\rho_m&=&\frac{\hat{a}^\dagger\hat{\rho}_m\hat{a}}{Tr[\hat{a}^\dagger\hat{\rho}_m\hat{a}]},\\ \tilde\rho_d&=&\frac{\hat{a}^\dagger\hat{\rho}_d\hat{a}}{Tr[\hat{a}^\dagger\hat{\rho}_d\hat{a}]}, \end{eqnarray} wherein the photon-added states are indicated with a tilde. Photon-addition introduces nonclassical features such as sub-Poissonian statistics, squeezing, etc., apart from increasing the mean number of photons\cite{ctlee, kiesel,arusha}. Additionally, the effect of photon-addition shows up in the atom-field interaction dynamics too. In particular, the features lost due to the presence of thermal photons are restored as a consequence of photon-addition. The overlaps between the coherent state $\vert\alpha\rangle$ and the photon-added versions of the DTS and the MTCS are related to the corresponding overlaps of the DTS and the MTCS {\it via} \begin{eqnarray} \langle\alpha\vert\tilde\rho_m\vert\alpha\rangle&=&\frac{\vert\alpha\vert^2}{1+(1-\bar{q})\bar{n}+\bar{q}\vert\alpha\vert^2}\langle\alpha\vert\hat{\rho}_m\vert\alpha\rangle,\\ \langle\alpha\vert\tilde\rho_d\vert\alpha\rangle&=&\frac{\vert\alpha\vert^2}{1+\bar{n}+\vert\alpha\vert^2}\langle\alpha\vert\hat{\rho}_d\vert\alpha\rangle. \end{eqnarray} If $\vert\alpha\vert^2=10$ and $\bar{n}=1$, the probability $\bar{q}$ is nearly 0.5. It is seen that the overlap of the photon-added MTCS $\tilde\rho_m$ with the coherent state is significantly larger than $\langle\alpha\vert\hat{\rho}_m\vert\alpha\rangle$.
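A quick numerical check of this overlap comparison can be made along the same lines as the earlier sketches (Python/NumPy; the Fock-space truncation is an illustrative assumption, the parameters are the ones quoted above), constructing the photon-added state directly from its definition $\tilde\rho_m=\hat{a}^\dagger\hat{\rho}_m\hat{a}/Tr[\hat{a}^\dagger\hat{\rho}_m\hat{a}]$.
\begin{verbatim}
import numpy as np
from math import factorial

dim = 80                                  # Fock truncation (illustrative)
alpha, nbar = np.sqrt(10.0), 1.0
s = np.exp(-abs(alpha) ** 2 / (1 + nbar))
qbar = (1 - s) / (nbar + 1 - s)           # equal-overlap mixing probability

n = np.arange(dim)
a = np.diag(np.sqrt(np.arange(1, dim)), 1)
adag = a.conj().T

p_th = (nbar / (1 + nbar)) ** n / (1 + nbar)
rho_th = np.diag(p_th / p_th.sum())
coh = np.exp(-abs(alpha) ** 2 / 2) * alpha ** n \
      / np.sqrt([float(factorial(k)) for k in n])
rho_m = (1 - qbar) * rho_th + qbar * np.outer(coh, coh.conj())

rho_m_add = adag @ rho_m @ a
rho_m_add /= np.trace(rho_m_add)          # photon-added MTCS

overlap     = np.real(coh.conj() @ rho_m @ coh)      # <alpha|rho_m|alpha>
overlap_add = np.real(coh.conj() @ rho_m_add @ coh)  # <alpha|photon-added rho_m|alpha>
print(overlap, overlap_add, overlap_add / overlap)
\end{verbatim}
Up to truncation effects, the printed ratio should be close to $\vert\alpha\vert^2/\big(1+(1-\bar{q})\bar{n}+\bar{q}\vert\alpha\vert^2\big)\approx 1.5$ for these parameter values, illustrating the enhancement of the coherent contribution.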
So, it is natural to expect the dynamics of the atom-field interaction to be similar to that in the case of the field being in a coherent state. It is interesting to note that in the case of $\tilde\rho_t$, photon-addition leads to a decrease in the overlap with the coherent state. The evolution of the population inversion in the case of the photon-added states is shown in Fig.~\ref{fig:Fig4}. The effects of photon-addition on the dynamics of $W(t)$ are clear on comparing the corresponding profiles in Fig.~\ref{fig:Fig4} and Fig.~\ref{fig:Fig2}. Noise does not modify the profiles much if the initial state is the DTS or its photon-added version. However, comparing the right columns of the two figures, it is clear that for the MTCS the effect of photon-addition is very significant. In particular, with $\bar{n}=1$, the evolution of $W(t)$ in the photon-added version is very similar to the zero-noise ($\bar{n}=0$) case. In Fig.~\ref{fig:Fig5}, the evolution profiles of the negativity are shown for the case in which the initial state is one of the photon-added states. To assess the effect of photon-addition, the corresponding profiles in Figs.~\ref{fig:Fig3} and \ref{fig:Fig5} are compared. Specifically, the right columns of the two figures show the recovery of the noise-free evolution due to photon-addition to the MTCS. As in the case of $W(t)$, the effect of photon-addition is very mild in the case of the DTS. Though the process of photon-addition is applied to both the coherent component and the thermal component of the MTCS, the net effect, as far as the atom-field dynamics is concerned, is similar to that of coherent radiation. In short, the noise effect is suppressed to a large extent, resulting in the recovery of the collapse-revival structures present in the noise-free evolution. The reason for the recovery of the noise-free evolution on photon-addition is that the vacuum state probability vanishes. The photon-added states $\tilde\rho_m$ and $\tilde\rho_d$ have no overlap with the vacuum state $\vert 0\rangle$, that is, $\langle 0\vert\tilde\rho\vert 0\rangle=0$, where $\tilde\rho$ refers to the photon-added versions of the DTS and MTCS. When the mean number of thermal photons is about unity, the major contribution to the thermal state is from the vacuum. In the photon-added versions, the contribution from the vacuum is absent, resulting in a smaller overlap with the thermal state. This, in turn, means that the overlap with the coherent state is larger for the photon-added versions of the MTCS and the DTS. \section{Summary} Thermal states and coherent states are Gaussian states. Displacing the thermal state is one way of incorporating thermal noise, and the resultant state is still Gaussian. Mixing of thermal and coherent states is another way of including thermal noise; however, the resultant state is non-Gaussian. The displaced thermal state is a mixture of photon-added coherent states of all orders. Though the two classes of states are closely related to the thermal and coherent states, it is only for a special choice of the mixing probability in the mixture of thermal and coherent states that these two classes of states have equal overlap with the coherent state. In the case of the displaced thermal state, the dynamics of the population inversion and of the atom-field entanglement is not very susceptible to thermal photons if the coherent state amplitude is large ($\alpha\approx\sqrt{10}$), even when the thermal photon contribution is as high as one tenth of the total number of photons.
In the case of the MTCS, though it has the same overlap with the coherent state $\vert\alpha\rangle$ as the DTS, increasing the thermal photon contribution renders the dynamics totally devoid of any collapse-revival structure in the evolution of the population inversion and of the atom-field entanglement. Photon-added versions of these two classes of states exhibit, in the context of the atom-field interaction, features very similar to those of the dynamics in the absence of thermal noise. In essence, photon-addition suppresses the effects of thermal noise, thereby enhancing the effect of the coherent component on the dynamics. \end{document}
\begin{document} \title{Homology graph of real arrangements and monodromy of Milnor Fiber} \begin{abstract} We study the first homology group $H_1(F,\mathbb{C})$ of the Milnor fiber $F$ of sharp arrangements $\overline{{\mathcal A}}$ in $\mathbb{P}^2_\mathbb{R}.$ Our work relies on the minimal complex $\mathbf{C}_*(\mathcal{S}({\mathcal A}))$ of the deconing arrangement ${\mathcal A}$ and its boundary map. We describe an algorithm which computes the possible eigenvalues of the monodromy operator $h_1$ on $H_1(F,\mathbb{C})$. We prove that, if a condition on some intersection points of lines in ${\mathcal A}$ is satisfied, then the only possible non-trivial eigenvalues of $h_1$ are cubic roots of unity. Moreover, we give sufficient conditions ensuring that only eigenvalues of order $3$ or $4$ appear in cases in which this condition is not satisfied. \end{abstract} \noindent \section{Introduction} Let $\overline{{\mathcal A}}$ in $\mathbb{P}_{\mathbb{R}}^2$ be a projective line arrangement, ${\mathcal A}$ in $\mathbb{R}^2$ be its deconing and $M({\mathcal A})=\mathbb{C}^2 \setminus \cup_{H \in {\mathcal A}}H_\mathbb{C}$ be the complement of the complexified arrangement. The Milnor fiber $$F=Q^{-1}(1)\subset \mathbb{C}^3$$ of $\overline{{\mathcal A}}$ is the smooth affine hypersurface defined as the preimage of $1$ under the defining polynomial $Q$ of $\overline{{\mathcal A}} $. Consider the geometric monodromy action on $F,$ given by multiplication by $\lambda= \exp(2i\pi /(n+1)),$ where $n+1$ is the cardinality of $\overline{{\mathcal A}}$. This automorphism induces the monodromy operators in homology $$h_q: H_q(F,\mathbb{C}) \to H_q(F,\mathbb{C}).$$ It is known that we have the following equivariant decomposition \begin{equation}\label{eq:decomposition} H_q(F,\mathbb{C})=\bigoplus_{d \mid n+1}[\mathbb{C}[t,t^{-1}]/\varphi_d]^{\beta_{q,d}} \end{equation} where each $\beta_{q,d}$ is the multiplicity of an eigenvalue of $h_q$ of order $d,$ and $\varphi_d$ is the $d$-th cyclotomic polynomial. The computation of the eigenspaces of the monodromy operators, i.e. the cyclic modules $[\mathbb{C}[t,t^{-1}]/\varphi_d]^{\beta_{q,d}}$ appearing in (\ref{eq:decomposition}), is a difficult question which has been intensively studied in the last decades and approached by different techniques such as nonresonant conditions for local systems (see for instance \cite{cdo,co-loc}), multinets (\cite{FalYuz,Torielli-Yoshi}), minimality of the complement (\cite{SS,y-lef,y-mini,y-cham}), graphs (\cite{Bailet},\cite{Sal-Ser}), and also mixed Hodge structures (\cite{BDS,BS,BS1,DNA,DL,DP,D3}). Much progress has been made for braid arrangements (\cite{S1}), graphic arrangements (\cite{PM}) and real line arrangements (\cite{Yoshi2,Yoshi3}). Notice that we can restrict our attention to $h_1,$ since the eigenspaces of $h_1$ determine those of $h_2$ in view of the formula for the Zeta function of the monodromy, see for instance \cite[Proposition 4.1.21]{ST}.
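As a small illustration of the decomposition (\ref{eq:decomposition}), the following sketch (Python with SymPy; the value of $n+1$ is an arbitrary illustrative choice) lists, for a given number of lines $n+1$, the divisors $d$ and the cyclotomic polynomials $\varphi_d$ that can contribute cyclic summands $\mathbb{C}[t,t^{-1}]/\varphi_d$.
\begin{verbatim}
from sympy import symbols, cyclotomic_poly, divisors, expand, Mul

t = symbols('t')
N = 12                      # n + 1, the number of lines (illustrative)

# t^N - 1 factors as the product of the cyclotomic polynomials phi_d with d | N;
# only these phi_d can occur in the equivariant decomposition of H_q(F, C).
for d in divisors(N):
    print(d, cyclotomic_poly(d, t))

# sanity check: the product of the listed factors recovers t^N - 1
assert expand(Mul(*[cyclotomic_poly(d, t) for d in divisors(N)]) - (t**N - 1)) == 0
\end{verbatim}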
Although we know that the monodromy operators and their eigenspaces are closely related to the multiplicities of the intersection points of the arrangement (see among others \cite{Bailet2,Dimcamonodromy,lib-mil,PS2,Yoshi2}), the role of these multiplicities in the determination of the eigenspaces is obscure and many questions are still open, even for $q=1.$ We mention here some of them that motivated the work presented in this paper: \begin{enumerate} \item it is not yet known whether the (co)homology groups of the Milnor fiber are combinatorial; \item it is unknown whether the (co)homology groups of the Milnor fiber with $\mathbb{Z}$ coefficients are torsion-free (but partial results exist, as in \cite{Wil}); \item it is still not known whether a cyclic module $[\mathbb{C}[t,t^{-1}]/\varphi_d]$ appears or not in the decomposition of $H_q(F,\mathbb{C})$ for a given $d$; in particular it is not known, in general, whether a non-trivial monodromy of order $d$ appears or not; \item even more specifically, it is not yet known whether the monodromy operator $h_1$ can have non-trivial eigenvalues which are not roots of unity of order $3$ or $4$ (see \cite{PS2}). \end{enumerate} In case no non-trivial eigenvalues appear, the arrangement ${\mathcal A}$ is said to be \textit{a-monodromic}. Several conjectures have been made on the (co)homology groups of the Milnor fiber. Among others, this paper focuses on the following four: \begin{conjecture}\cite{Sal-Ser}\label{conj:amon} Let $\Gamma({\mathcal A})$ be the graph defined by vertices $H \in {\mathcal A}$ and edges $(H,H')$ if and only if $H \cap H'$ is a point of multiplicity two. If $\Gamma({\mathcal A})$ is connected, then ${\mathcal A}$ is a-monodromic. \end{conjecture} \begin{conjecture}\label{conj:mon}\cite{Yoshi2} If $\overline{{\mathcal A}}$ has a sharp pair of lines and ${\mathcal A}$ is not a-monodromic, then the eigenvalues of the monodromy operator $h_1$ are cubic roots of unity. \end{conjecture} \begin{conjecture}\cite[Conjecture 1.9]{PS2} The only possible non-trivial monodromies of prime power order have order $3$ or $4.$ \end{conjecture} \begin{conjecture}\cite{Yoshi2} If $\overline{{\mathcal A}}$ is a simplicial arrangement and ${\mathcal A}$ is not a-monodromic, then the eigenvalues of the monodromy operator $h_1$ are cubic roots of unity. \end{conjecture} The purpose of this paper is to further investigate these four conjectures in the case of sharp arrangements, that is, arrangements $\overline{{\mathcal A}}$ for which there exists a \textit{sharp} pair $(\overline{H},\overline{H'})$ of hyperplanes satisfying the condition that all intersection points lie on $\overline{H}$, on $\overline{H'}$, or in the same region of $\mathbb{P}_{\mathbb{R}}^2$ delimited by $\overline{H}$ and $\overline{H'}$. In particular, in the case of sharp arrangements, Conjecture \ref{conj:mon} is a direct consequence of a more general conjecture, which states that $d$-monodromy appears if and only if a $d$-multinet appears. Indeed, by \cite[Theorem 3.1 (i)]{DP} and \cite[Theorem 3.11]{FalYuz}, it is known that if a $d$-multinet on $\overline{{\mathcal A}}$ appears for some $d \geq 3$, then $\beta_{1,d} \geq d-2$, while by \cite[Theorem 3.21]{Yoshi2}, if $d \neq 1$ then $\beta_{1,d} \leq 1$. That is, on sharp arrangements only $3$-multinets can exist. In this paper we study $H_1(F,\mathbb{C})$ by means of a minimal complex $\mathbf{C}_*(\mathcal{S}({\mathcal A}))$, homotopy equivalent to the complement $M({\mathcal A}),$ built in \cite{SS} by M.
Salvetti and the second author. We give an algorithm to compute $H_1(F,\mathbb{C})$, alternative to the one given in \cite{Yoshi2} by M. Yoshinaga, and we prove that if a certain condition on some intersection points of lines in ${\mathcal A}$ is satisfied, then the only possible non-trivial monodromy of the fiber $F$ has order $d=3$. Moreover, if this condition is not satisfied, we give sufficient conditions for the monodromy to have order $d=3$ or $4$. \\ More in detail, if $\overline{{\mathcal A}}$ is a sharp arrangement and ${\mathcal A}$ is the affine arrangement obtained by removing one of the two lines in the sharp pair $(\overline{H},\overline{H'})$ (e.g.\ $\overline{H}=H_{\infty}$, the line at infinity), then the lines in ${\mathcal A}$ and the intersection points on each line have a natural order, both denoted by $\lhd$, given as follows. The other line in the sharp pair (e.g.\ $\overline{H'}$) is the line $H^{P_0}_1$, where $P_0=\overline{H} \cap \overline{H'}$. Fix a direction along the line $H^{P_0}_1$. The intersection points on $H^{P_0}_1$ are ordered going along $H^{P_0}_1$ in the positive direction and, for each point $P_i$, we order the lines through it in the order in which they are intersected by a line $H^{P_0}_{(1,\epsilon)}$, oriented as $H^{P_0}_1$ and obtained by translating $H^{P_0}_1$ by an epsilon towards the half-plane containing no intersection points (see Figure \ref{fig1}). Finally, on each line, the intersection points are ordered naturally from the first point, which is the intersection with $H^{P_0}_1 (= \overline{H'})$, to the last real point, which is the last real intersection before the point at infinity (the intersection with $\overline{H}$). Lines passing through $P_0$ are oriented as $H^{P_0}_1$ and numbered from $H^{P_0}_2$, the closest one to $H^{P_0}_1$, up to $H^{P_0}_{m(P_0)-1}$, the last real one (here $m(P_0)$ is the multiplicity of $P_0$). With such an order, if ${\mathsf{last}}(H)$ is the last real intersection point along the line $H$, ${\mathsf{min}}(H)$ the second one, and $H_j^{P_i}$ the $j$-th line passing through the point $P_i \in H^{P_0}_1$, then $H_j^{P_0} \lhd H_{j'}^{P_{0}}$ iff $j >j'$, $H_j^{P_i} \lhd H_{j'}^{P_{i'}}$ iff $(i,j) <(i',j')$ in lexicographic order, and the following results hold. \begin{theorem}\label{thmint1} Let ${\mathcal A} \subset \mathbb{R}^2$ be a sharp arrangement such that $${\mathsf{last}}(H_{m(P_i)-1}^{P_i}) \neq {\mathsf{last}}(H_2^{P_j}), \quad 0 \leq i <j, $$ holds for hyperplanes in ${\mathcal A}_{(0,3)}$. Then ${\mathcal A}$ is a- or 3-monodromic. \end{theorem} \begin{theorem}\label{thmint2} Let ${\mathcal A}$ in $\mathbb{R}^2$ be a sharp arrangement. If $\mathcal{G}_{{\mathcal A}'_{(0,3,4)},{\mathsf{last}},{\mathsf{min}}}$ does not contain any cycle of length $l$ like the one in Figure \ref{fig:cyclastmin} such that: \begin{itemize} \item $\overline{H}\rhd H'$ and $l$ is odd, \item $\overline{H}\lhd H'$ and $l$ is even, \end{itemize} then ${\mathcal A}$ is a-, 3- or 4-monodromic. \end{theorem} Here ${\mathcal A}_{(0,3)}$ and ${\mathcal A}_{(0,3,4)}$ are the sub-arrangements of ${\mathcal A}$ defined, respectively, in equations (\ref{eq:A03}) and (\ref{eq:A034}), $\mathcal{G}_{{\mathcal A}'_{(0,3,4)},{\mathsf{last}},{\mathsf{min}}}$ is the graph defined in \ref{def:grafGLM}, and vertices and edges in $\mathcal{G}_{{\mathcal A}'_{(0,3,4)},{\mathsf{last}},{\mathsf{min}}}$ and in the cycle in Figure \ref{fig:cyclastmin} are drawn as described in Figure \ref{fig:edge}.
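The orders and the points ${\mathsf{last}}(H)$, ${\mathsf{min}}(H)$ introduced above depend only on the real geometry of ${\mathcal A}$ and are easy to compute. The following sketch (Python; the toy arrangement, the choice of $H^{P_0}_1$ as the $x$-axis and its orientation are illustrative assumptions) computes the intersection points of an affine line arrangement together with their multiplicities, and then reads off ${\mathsf{last}}(H)$ and ${\mathsf{min}}(H)$ for each line, ordering the points on a line by their distance from $H^{P_0}_1$ (which reproduces the order above when all intersection points lie on one side of $H^{P_0}_1$, as for a sharp arrangement).
\begin{verbatim}
import numpy as np
from itertools import combinations
from collections import defaultdict

# Lines are (a, b, c) with a*x + b*y = c; line 0 plays the role of H_1^{P_0}.
# The arrangement itself is a toy example, not one taken from the paper.
lines = [
    (0.0, 1.0, 0.0),   # y = 0   (H_1^{P_0}, the x-axis)
    (1.0, -1.0, 0.0),  # y = x
    (1.0, 1.0, 2.0),   # y = -x + 2
    (1.0, 0.0, 1.0),   # x = 1
]

def intersect(L1, L2):
    A = np.array([L1[:2], L2[:2]], dtype=float)
    b = np.array([L1[2], L2[2]], dtype=float)
    if abs(np.linalg.det(A)) < 1e-12:     # parallel lines: no affine intersection
        return None
    return tuple(np.round(np.linalg.solve(A, b), 9))

# group coincident intersection points and record the lines through each of them
points = defaultdict(set)
for i, j in combinations(range(len(lines)), 2):
    p = intersect(lines[i], lines[j])
    if p is not None:
        points[p].update({i, j})

for p, through in sorted(points.items()):
    print("point", p, "multiplicity", len(through), "lines", sorted(through))

# order the points on each line: on H_1^{P_0} by the x-coordinate, on any other
# line by the distance from H_1^{P_0} (i.e. the y-coordinate)
for i in range(len(lines)):
    pts = sorted([p for p, through in points.items() if i in through],
                 key=(lambda p: p[0]) if i == 0 else (lambda p: p[1]))
    if len(pts) >= 2:
        print("line", i, "min =", pts[1], "last =", pts[-1])
\end{verbatim}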
\\ The above theorems partially answer Conjecture \ref{conj:amon}, as it has been proved in \cite{Bailet} by the first author that if the non-trivial monodromies of the fiber $F$ have prime power order, then Conjecture \ref{conj:amon} holds. Moreover, the algorithm described to prove the theorems above can be applied, more generally, to study the a-monodromicity of ${\mathcal A}$.\\ The paper is organized as follows. In Section \ref{sec:Mincomp} we recall the basic constructions which are used throughout this paper. In Section \ref{maingraph} we describe a new graph based on the boundary map defined in \cite{gaiffi2009morse} and we give fundamental lemmas that state the relation between cycles in this graph and the Gauss operations used to triangularize the boundary map. In Section \ref{sec:sharp} we introduce the main simplification that holds for the boundary map when the arrangement ${\mathcal A}$ is sharp and, finally, in Section \ref{sec:mains} we define special graphs associated to the graph defined in Section \ref{maingraph} and we use them to prove Theorem \ref{thmint1} and Theorem \ref{thmint2}, stated as Theorem \ref{thm1.1} and Theorem \ref{thm2.1}. In the last section we give examples illustrating how our algorithm can also be applied to immediately establish a-monodromicity.\\ Many questions are left open, as, for instance, whether there are conditions that allow one to further simplify our algorithm so as to finally obtain Conjecture \ref{conj:mon} in the case of sharp arrangements and, of course, what can be said if we extend it to non-sharp arrangements. This will be the object of further studies. \section{Minimal complex for real line arrangements}\label{sec:Mincomp} In this section we will recall the complex defined in \cite{SS} in the special case of line arrangements (see also \cite{gaiffi2009morse}) and we will also introduce notations and definitions useful in the rest of the paper. Let us denote by $L({\mathcal A})$ the intersection lattice of ${\mathcal A}$ and by ${\mathcal S}({\mathcal A})$ the stratification of $\mathbb{R}^2$ into facets induced by the arrangement. \subsection{Minimal complex and boundary map} In \cite{SS} Salvetti and the second author described a minimal complex $\mathbf{C}_*({\mathcal A})$ homotopy equivalent to the complement $M({\mathcal A})$ of a complexified real arrangement ${\mathcal A}.$ Their construction is based on three main ingredients: \begin{enumerate} \item a polar order on the stratification ${\mathcal S}({\mathcal A})$ induced by an \textit{${\mathcal A}$-generic} polar system of coordinates; \item the Salvetti complex introduced in \cite{Sal}; \item discrete Morse theory. \end{enumerate} In order to fix a polar system of coordinates in $\mathbb{R}^2$, an origin $V_0$ and a line $V_1$ containing $V_0$ are needed; the polar coordinates of a point are then determined by the distance from $V_0$ and the rotation angle of the line $V_1$ inside $\mathbb{R}^2$. In order for this polar system to be \textit{${\mathcal A}$-generic}, $V_0$ has to lie in an unbounded chamber $C_0$ of ${\mathcal S}({\mathcal A})$ and the line $V_1$ containing $V_0$ has to intersect each hyperplane $H_i \in {\mathcal A}$ once, in an unbounded facet $F_i \subset H_i$ of the stratification ${\mathcal S}({\mathcal A})$. Let us remark that, with this choice of polar system, all the points $P \in L({\mathcal A})$ lie on the same side of $V_1$.
\\ We will denote by $\lhd$ the polar order induced on the stratification ${\mathcal S}({\mathcal A})$ of $\mathbb{R}^2$ by the polar system of coordinates $(V_0, V_1)$. This restricts to an order on the hyperplanes of ${\mathcal A}$, given by the distance from $V_0$ of their intersections with $V_1$.\\ Let us now recall that the $k$-cells of the Salvetti complex are pairs $[C \prec F^k]$ in which $C$ and $F^k$ are, respectively, a chamber and a $k$-facet of the stratification ${\mathcal S}({\mathcal A})$ such that $F^k$ is contained in the closure of $C$. More generally, if $F,G \in {\mathcal S}({\mathcal A})$ we write $G \prec F$ if $F$ is contained in the closure of $G$.\\ The cells of the minimal complex are the cells of the Salvetti complex which are critical with respect to a suitable discrete Morse function. Explicitly, in the case of line arrangements: \begin{itemize} \item the critical $2$-cells are the pairs $[C \prec P]$ such that the point $P$ is maximal, with respect to the polar ordering, among the facets of ${\mathcal S}({\mathcal A})$ contained in the closure of $C$; \item the critical $1$-cells are the pairs $[C \prec F]$ such that $C \lhd F$ and $F$ is a $1$-dimensional facet intersected by $V_1$; \item the only critical $0$-cell is the pair $[C_0\prec C_0]$, $V_0 \in C_0$. \end{itemize} In \cite{gaiffi2009morse} Gaiffi and Salvetti gave, in the case of line arrangements, a simplified version of the boundary map defined in \cite{SS}. Following their notations, for any point $P\in L_2({\mathcal A}),$ denote by $$S(P):=\{H\in {\mathcal A}\,|\,P \in H\}$$ the set of hyperplanes containing $P.$ For a given critical $2$-cell $[C \prec P],$ there are two lines in $S(P)$ which bound $C$. Denote by $H_C$ (resp. $H^C$) the one which has minimum (resp. maximum) index. Let $\mathrm{Cone}(P)$ be the closed cone bounded by the minimum and the maximum lines of $S(P),$ having vertex $P$ and whose intersection with $V_1$ is bounded. Then define \begin{equation*} \begin{split} & U(C):=\{H_i \in S(P) \mid i \geq \mbox{ index of } H^C \},\\ & L(C):=\{H_i \in S(P) \mid i \leq \mbox{ index of } H_C \}.\\ \end{split} \end{equation*} Consider the line $V_{P}$ that passes through the points $V_0$ and $P$. This line intersects all the lines $H\in {\mathcal A}$ in points $P_H$ and, if $\theta(P_H)$ is the length of the segment $V_0P_H$, then define the set $$U(P):=\{H \in {\mathcal A} \mid \theta(P_H)> \theta(P)\}.$$ Then the map $\partial_2$ in \cite{gaiffi2009morse} is given by \begin{equation}\label{eq:boundary2} \begin{split} &\partial_2(l.e_{[C \prec P]})=\sum_{\substack{ |F_i|\in S(P)}}\bigg(\prod_{\substack{k<i~s.t.\\H_k\in U(P)}}t_{H_k}\bigg)\bigg[\prod_{\substack{k~s.t.\\H_k\in[C\rightarrow|F_i|)}}t_{H_k}-\prod_{\substack{k<i~s.t.\\H_k\in S(P)}}t_{H_k}\bigg](l).e_{[C_{i-1}\prec F_i]}+\\ &\sum_{\substack{|F_i|\in U(P)\\ F_i\subset \mathrm{Cone}(P)}}\bigg(\prod_{\substack{k<i~s.t.\\H_k\in U(P)}}t_{H_k}\bigg)\bigg(1-\prod_{\substack{k<i~s.t.\\ H_k\in L(C)}}t_{H_k}\bigg)\bigg(\prod_{\substack{k<i~s.t.\\H_k\in U(C)}}t_{H_k}-\prod_{\substack{k~s.t.\\H_k\in U(C)}}t_{H_k}\bigg)(l).e_{[C_{i-1}\prec F_i]}, \end{split} \end{equation} where $l$ is an element of an abelian local system $\mathbb{L}$, $|F_i|=H_i$ is the affine subspace spanned by $F_i,$ and $[C\rightarrow |F_i|)$ are the subsets of $S(P)$ defined by \begin{enumerate} \item[i)] $[C\rightarrow|F_i|):=\{H_k\in U(C)~|~k<i\}$ if $|F_i|\in U(C)$; \item[ii)] $[C\rightarrow|F_i|):=\{H_k\in S(P)~|~k<i\}\cup U(C)$ if $|F_i|\in L(C)$.
\end{enumerate} Furthermore, because the only critical $0$-cell is $[C_0\prec C_0]$, the boundary map $\partial_1$ can be easily computed: $$\partial_1(l.e_{[C_{i-1}\prec F_i]})=(1-t_{H_i})(l).e_{[C_0\prec C_0]}.$$ Notice that computing the (co)homology of the Milnor fiber with integer coefficients is equivalent to setting, in the above boundary map, all the elements $t_H \in$ Aut$(\mathbb{L})$ equal to the same $t\in$ Aut$(\mathbb{L})$. \begin{remark} Notice that each entry of the boundary matrix given by formula (\ref{eq:boundary2}) above is divisible by $1-t,$ by minimality of the complex $\mathbf{C}_*({\mathcal A}).$ \end{remark} \subsection{Polar ordering and indexing of the lattice $L({\mathcal A})$.}\label{polarorder} In this subsection we list definitions and notations useful in the rest of the paper. Let $L_2({\mathcal A})$ be the set of rank $2$ elements in the intersection lattice $L({\mathcal A})$, i.e. the intersection points of lines in ${\mathcal A}$. \begin{enumerate} \item[1.] It is an obvious remark that the critical $1$-cells $[C \prec F]$ are in one-to-one correspondence with the hyperplanes $H= |F|$ of the arrangement. Hence in the rest of the paper we will simply write $H$ instead of $[C \prec F]$. \item[2.] The second boundary map defined in equation (\ref{eq:boundary2}) is the sum of two terms: the first is a summation over the lines belonging to $S(P)$, the second a summation over the lines belonging to the set $U(P)$ which lie in the cone of $P$. Given a point $P \in L_2({\mathcal A})$, let us then define the \textit{Upper Cone} of $P$ as the set of lines \begin{equation}\label{eq:upper} \bold{\widehat{U}}(P):=\{ H=|F| \in U(P) \mid F\in \mathrm{Cone}(P)\}. \end{equation} \begin{example}Looking at the arrangement in Figure \ref{fig1}, $\mathrm{Cone}(P)$ is the cone between the two lines $H_{m(P_1)}^{P_1}$ and $H_{2}^{P_m}$. Any line $H^{P_i}_j$, $1<i<m$, is in $\mathrm{Cone}(P)$, but, for example, the line $H_{2}^{P_2}$ is in the upper cone of $P$, i.e. $H_{2}^{P_2} \in \bold{\widehat{U}}(P)$, while $H_{m(P_2)}^{P_2}$ clearly passes below the point $P$, i.e. $H_{m(P_2)}^{P_2} \notin \bold{\widehat{U}}(P)$. \end{example} \item[3.] Given a point $P\in L_2({\mathcal A}),$ we will denote the hyperplanes in $S(P)$ by \begin{equation}\label{notationlocal} H_1^P \lhd \cdots \lhd H_{m(P)}^P, \end{equation} where $m(P)$ is the multiplicity of $P$ in ${\mathcal A}$ and the hyperplanes $H_i^P$ are ordered following the order $\lhd$ in ${\mathcal A}.$ \item[4.] Given a point $P\in L_2({\mathcal A})$ and following the previous notation for the lines in $S(P),$ we denote by $C_j^P$ the unique chamber $C_j^P \lhd P$ with wall facets $F$ and $F'$, $|F|=H_{j}^P, \,|F'|=H_{j+1}^P$. \item[5.] Given a line $H$, the order $\lhd$ on the stratification ${\mathcal S}({\mathcal A})$ induces a local order on the intersection points \begin{equation}\label{ord:points} P_1^H \lhd P_2^H \lhd \cdots \lhd P_{k-1}^H \lhd P_k^H \end{equation} in $L_2({\mathcal A})$ belonging to $H$. In particular, in the rest of the paper we will denote by ${\mathsf{last}}(H)$ the last point in the above order and by ${\mathsf{min}}(H)$ the second one. (This latter notation will become clearer later in the paper.) \end{enumerate} \begin{remark} The description of the set $U(P)$ (and hence $\bold{\widehat{U}}(P)$) can be given in terms of the intersection lattice, simply using the order in (\ref{ord:points}).
Indeed, for example, if $P=P_i^H$ is a point on $H$, a line $H' \lhd H$ will be in $U(P)$ if and only if $H' \cap H \neq \emptyset$ is a point $P_j^H$ with $j < i$. \end{remark} \subsection{The Milnor fiber of line arrangements} Let $Q: \mathbb{C}^3 \rightarrow \mathbb{C}$ be the defining polynomial (of degree $n+1$) of the projective line arrangement $\overline{{\mathcal A}}$. Then the Milnor fiber is defined as $F=Q^{-1}(1)$, with geometric monodromy \begin{equation*} \begin{split} \pi_1(\mathbb{C}^*,1) &\rightarrow Aut(F) \\ \alpha &\rightarrow e^{\frac{2\pi i}{n+1}}\alpha \end{split} \end{equation*} which induces the monodromy operator $h_q: H_q(F,A) \to H_q(F,A)$, $A$ being any commutative ring with unit. Let $R:=A[t,t^{-1}]$ and let $R_t$ be the ring $R$ endowed with the $\pi_1(M(\overline{{\mathcal A}}))$-module structure given by the abelian representation $$ \pi_1(M(\overline{{\mathcal A}})) \rightarrow H_1(M(\overline{{\mathcal A}}); A) \rightarrow Aut(R) $$ taking each generator $\beta_j$ to $t$-multiplication. Then it is a well known fact that $$ H_*(M(\overline{{\mathcal A}}); R_t) \simeq H_*(F;A), $$ where $t$-multiplication on the left corresponds to the monodromy action on the right. \\ Consider now $A=\mathbb{C}$ and $R=\mathbb{C}[t,t^{-1}]$; then $H_*(M(\overline{{\mathcal A}}); R_t)$ decomposes into cyclic modules either isomorphic to $R$ or to $\frac{R}{(\varphi_d)}$, where $\varphi_d$ is the $d$-th cyclotomic polynomial with $d \mid (n+1)$. Following the notation in \cite{Sal-Ser} we will call an arrangement $\overline{{\mathcal A}}$ \textit{a-monodromic} if it has trivial monodromy, that is, if and only if $$ H_1(F;\mathbb{C}) \simeq \mathbb{C}^n, $$ and, more generally, \textit{$d$-monodromic} if the cyclic module $\frac{R}{(\varphi_d)}$ appears in the decomposition of $H_1(F;\mathbb{C})$. Analogously, if ${\mathcal A}$ is the affine arrangement obtained by deconing $\overline{{\mathcal A}}$, we will call ${\mathcal A}$ \textit{a-monodromic} if $$ H_1(M({\mathcal A});R_t) \simeq \mathbb{C}^n $$ and \textit{$d$-monodromic} if the cyclic module $\frac{R}{(\varphi_d)}$ appears in the decomposition of $H_1(M({\mathcal A});R_t)$. Remark that if ${\mathcal A}$ is a-monodromic, then so is $\overline{{\mathcal A}}$ (the converse is not true in general). \section{Homology graph of line arrangements}\label{maingraph} Let $Mat(\partial_2)$ be the second boundary matrix defined by the map in equation (\ref{eq:boundary2}). For the sake of simplicity, we will denote by \begin{enumerate} \item[i)] $H$ the row corresponding to the critical $1$-cell $[C \prec F],\,|F|=H$; \item[ii)] $c^P$ the column block relative to the critical $2$-cells $[C \prec P]$. \end{enumerate} More in detail, following the local notations in subsection \ref{polarorder}, we will denote by $c_{j}^P$ the column corresponding to the critical $2$-cell $[C_{j}^P\prec P].$ With these notations we have $$c^P=\{c_1^P,\hdots,c_{m(P)-1}^P\}.$$ Rows and columns of $Mat(\partial_2)$ are ordered following the polar ordering of the corresponding critical $1$- and $2$-cells. We have the following straightforward remark. \begin{notation}\label{not:goodcolumn} Given a point $P \in L_2({\mathcal A})$ and a line $H \in S(P)$, if $e(H,c)$ denotes the entry of $Mat(\partial_2)$ corresponding to row $H$ and column $c$, then $$e(H_1^P,c_{m(P)-1}^P)=t^{\alpha_1}(1-t)$$ and $$e(H_i^P,c_1^P)=t^{\alpha_i}(1-t), \quad i>1, $$ where $\alpha_1$ and the $\alpha_i$ are positive integers.
Hence we denote by $c_H^P$ the column of the block $c^P$ such that \begin{equation*} \begin{split} & c_H^P = c_{m(P)-1}^P, \,\,\mbox{if} \,\, H=H_1^P,\\ & c_H^P= c_1^P\,\mbox{otherwise}, \end{split} \end{equation*} in such a way that $e(H,c_H^P)$ has the form $t^\alpha(1-t).$ \end{notation} Our goal is to diagonalize the matrix $Mat(\partial_2).$ In order to do it we introduce the \textit{homology graph} $\mathcal{G}({\mathcal A})$ defined as follows: \begin{enumerate} \item[i)] vertices are couples $(H,P),$ with $H\in {\mathcal A}$ and $P\in L_2({\mathcal A})$ such that $P\in H$; \item[ii)] two vertices $(H,P)$ and $(H',P^\prime)$ are connected by an edge $[(H,P),(H',P'),\lhd]$ (resp. $[(H,P),(H',P'),\rhd]$) oriented from $(H,P)$ to $(H',P^\prime)$ if and only if $H^\prime \in S(P)\cup \bold{\widehat{U}}(P)$ (see Figure \ref{fig:edge}) and $H \lhd H^\prime$ (resp. $H \rhd H^\prime$). \end{enumerate} For simplicity, we will sometimes denote by $[(H,P),(H',P')]$ the edge from $(H,P)$ to $(H',P^\prime)$ when the order between $H$ and $H'$ is not needed. \begin{figure} \caption{Edge $[(H,P),(H',P'),\lhd]$ in $\mathcal{G} \label{fig:edge} \end{figure} Given the matrix $Mat(\partial_2),$ the graph $\mathcal{G}({\mathcal A})$ defines operations on $Mat(\partial_2)$ as follows. The edge $[(H,P),(H^\prime,P^\prime)]$ in $\mathcal{G}({\mathcal A})$ is equivalent to the operation \begin{equation}\label{mainop} O_{H,H^\prime}^{P,P^\prime}(\varphi^\prime)= H^\prime - \varphi^\prime H \end{equation} in $Mat(\partial_2),$ where $\varphi^\prime \in \mathbb{C}[t,t^{-1}]$ is the polynomial satisfying $e(H^\prime, c_H^P)= \varphi^\prime e(H,c_H^P),$ $c_H^P$ being the column defined in Notation \ref{not:goodcolumn}. Notice that $\varphi^\prime$ exists as $e(H,c_H^P)=t^\alpha(1-t).$ We will say that we perform operations along a subgraph in $\mathcal{G}({\mathcal A})$ if we perform all the rows operations $O_{H,H^\prime}^{P,P^\prime}$ for edges $[(H,P),(H^\prime,P^\prime)]$ in this subgraph. The following definition will be used often in the rest of the paper. \begin{definition}\label{def:remove} We say that an hyperplane $H$ is \textit{removed} by operations along a subgraph $\delta$ in $\mathcal{G}({\mathcal A})$ if $(H,P)$ is a vertex in $\delta$ and performing operations $O_{H,H^\prime}^{P,P^\prime}, H^\prime \rhd H$, along this subgraph, the column $c_H^P$ is reduced to a column with all entries $e(H^\prime, c_H^P)=0$ if $H^\prime \rhd H$ and $e(H,c_H^P)=1-t.$ All other entries of $c_H^P$ are left unchanged. \end{definition} \begin{remark}\label{rk:removetrivial} Let us remark that if $H$ is removed by operations along a subgraph $\delta$, then using simple columns operations, the row $H$ can be reduced to a row with all entries 0 but $e(H,c_H^P)=1-t$ without changing the entries $e(H^\prime,c^{P^\prime}),$ for all the $H^\prime \rhd H.$ Hence if we can find a subgraph of $\mathcal{G}({\mathcal A})$ that allows us to remove all hyperplanes of ${\mathcal A}$ at once, then the matrix $Mat(\partial_2)$ can be triangularized with pivots $1-t$ and rows ordered following the polar order, that is ${\mathcal A}$ is a-monodromic. \end{remark} From now on we will assume that operations along the graph are performed following the polar order, that is performing all operations $O_{H,H^\prime}^{P,P^\prime}$ for all $H^\prime \rhd H$, where each $H$ is fixed step by step following the polar order. 
In order to find a suitable graph $\delta$ to remove all hyperplanes at once, we need to study how the entry $e(H,c_H^P)= t^\alpha(1-t)$ is changed when we perform row operations along $\delta$ to remove $\overline{H}\lhd H.$ It is an easy remark on Gauss reduction that we can keep removing hyperplanes $\overline{H}\lhd H$ along the graph $\delta$ following the row (i.e. polar) order, without affecting entry $e(H,c_H^P)= t^\alpha(1-t)$ until we get the following submatrix (where the symbol * means a non zero entry) \begin{equation}\label{submatrixproblem} \bordermatrix{ &c_{H'}^{P'}&c_{H}^{P}\cr H'&S(P') & *\cr H& * & S(P)\cr} \end{equation} for a given row $H^\prime \lhd H$ such that $(H',P') \in \delta$. Indeed, in this case, in order to simplify the entry $e(H,c_{H^\prime}^{P^\prime})$ in the column $c_{H^\prime}^{P^\prime},$ we perform the row operation $H- \varphi H',$ where $\varphi\,\,\text{satisfies}\,\,e(H,c_{H'}^{P'})= \varphi e(H',c_{H'}^{P'}).$ Then the entry $e(H,c_H^P)=t^\alpha (1-t)$ is changed and, in general, we don't know if it will keep the form $t^\beta (1-t).$ \begin{lemma}\label{lemmacycle}If performing row operations along a subgraph $\delta$ of $\mathcal{G}({\mathcal A})$ to triangularize the matrix $Mat(\partial_2),$ we get the submatrix (\ref{submatrixproblem}) in the Gauss transformed matrix, then there exists a cycle in $\delta$ as the one in Figure \ref{fig:cycle} that contains $(H,P)$ and $(H^\prime,P^\prime)$. \begin{figure} \caption{Cycle $\gamma$ in $\delta$} \label{fig:cycle} \end{figure} \end{lemma} \proof Since we get the submatrix (\ref{submatrixproblem}) it means that $e(H,c_{H'}^{P^{\prime}})\neq 0$ (a) or $\rightsquigarrow \neq 0$ (b) and $e(H',c_H^{P})\neq 0$ (c) or $\rightsquigarrow \neq 0$ (d), where the symbol $\rightsquigarrow \neq 0$ means that the entry became non zero after having performed previous rows operations on $Mat(\partial_2)$. Let us study these cases separately. \begin{enumerate} \item[(a)] corresponds to the existence of the edge $[(H',P'),(H,P),\lhd],$ i.e. $H_{i_1}=\cdots = H_{i_k} = \overline{H} = H'$ in Figure \ref{fig:cycle}; \item[(b)] corresponds to existence of $\overline{H} \lhd H$ with $e(\overline{H},c_{H'}^{P^\prime})\neq 0$ and a chain of operations with starting vertex $\overline{H}$ and ending vertex $H$ corresponding to the edges $$[(\overline{H},\overline{P}),(H_{i_k},P_{i_k}),\lhd],\hdots,[(H_{i_2},P_{i_2}),(H_{i_1},P_{i_1}),\lhd],[(H_{i_1},P_{i_1}),(H,P),\lhd].$$ Note that we can have either $\overline{H} \lhd H^\prime$ and the corresponding edge is $[(H',P'),(\overline{H},\overline{P}),\rhd],$ or $\overline{H} \rhd H^\prime$ and the corresponding edge is $[(H',P'),(\overline{H},\overline{P}),\lhd];$ \item[(c)] corresponds to existence of the edge $[(H,P),(H',P'),\rhd],$ i.e. 
$H'= H_{i_{k+1}}=\hdots= H_{i_{k+s}}= \widetilde{H}$ in Figure \ref{fig:cycle}; \item[(d)] corresponds to existence $\widetilde{H} \lhd H^\prime$ with $e(\widetilde{H},c_H^{P})\neq 0$ and a chain of operations with starting vertex $\widetilde{H}$ and ending vertex $H^\prime$ corresponding to the edges $[(\widetilde{H},\widetilde{P}),(H_{i_{k+s}},P_{i_{k+s}}),\lhd],\hdots,[(H_{i_{k+2}},P_{i_{k+2}}),(H_{i_{k+1}},P_{i_{k+1}}),\lhd],[(H_{i_{k+1}},P_{i_{k+1}}),(H',P'),\lhd].$ Note that in this case it is clear that $\widetilde{H} \lhd H$ since $\widetilde{H} \lhd H^\prime \lhd H,$ and we have the edge $[(H,P),(\widetilde{H},\widetilde{P}),\rhd].$ \end{enumerate} \endproof \begin{remark}\label{rkamono} Condition in Lemma \ref{lemmacycle} is not sufficient in general to say that the matrix can not be triangularized. Indeed, it could exist another more suitable point $P'' \in H^\prime$ such that we have the submatrix $\bordermatrix{ & c_{H'}^{P''}&c_{H'}^{P'}&c_{H}^{P}\cr H'& S(P'')&S(P') & *\cr H & 0& * & S(P)\cr}$ and $c_{H^\prime}^{P''}$ can be simplified without changing $H,$ or it can happen that even if we perform $H-\varphi H^\prime$, still $e(H,c_H^P)$ is of the form $t^\beta(1-t).$ But anyway, Lemma \ref{lemmacycle} is sufficient to say that if no cycle appears in a suitable subgraph $\delta \subset \mathcal{G}({\mathcal A})$, then the submatrix $M^\prime$ whose rows $H$ and columns $c_H^P$ correspond to the vertices $(H,P)$ of $\delta$ can be triangularized. Then in particular, if we can find a suitable subgraph $\delta$ with as many vertices $(H,P)$ as the hyperplanes of ${\mathcal A}$ and such that for any two vertices $(H,P) \neq (H',P') \in \delta$, $H \neq H'$ and $P \neq P'$, then cycles as in Figure \ref{fig:cycle} of $\delta$ give informations on the monodromy of ${\mathcal A}.$ For example, if $\delta$ has no cycle it means that we have a submatrix $M^\prime$ with as many rows as $Mat(\partial_2)$ that can be reduced to ${\mathsf{diag}}(1-t),$ that is ${\mathcal A}$ is a-monodromic. \end{remark} In the following sections we will study the monodromy of the Milnor fiber of a special family of line arrangements, the so called sharp arrangements, studying the homology $H_1(M({\mathcal A}); R_t)$ via suitable subgraphs of the graph $\mathcal{G}({\mathcal A})$. 
\section{Sharp arrangements and their boundary map }\label{sec:sharp} In the following of this paper we will deal with a special category of line arrangements, the so called \textit{sharp arrangements.} \begin{definition}\label{sharppair} A \textit{sharp pair} of a projective arrangement $\overline{{\mathcal A}}$ in $\mathbb{P}^2_{\mathbb{R}}$ is a couple of lines $(\overline{H},\overline{H'})$ of $\overline{{\mathcal A}}$ such that all intersection points of lines in $\overline{{\mathcal A}}$ lie on $\overline{H} \cup \overline{H'}$ or are contained in one of the two regions of the projective plane $\mathbb{P}^2_{\mathbb{R}}$ divided in two by $\overline{H}$ and $\overline{H'}.$ \end{definition} \begin{definition}\label{sharparrangement} We say that ${\mathcal A}$ in $\mathbb{R}^2$ is a \textit{sharp arrangement} if it is the deconing of an arrangement $\overline{{\mathcal A}}$ in $\mathbb{P}_{\mathbb{R}}^2$ with respect to the hyperplane at infinity $H_\infty,$ where $H_\infty=\overline{H} \in \overline{{\mathcal A}}$ is a line in a sharp pair $(\overline{H},\overline{H'})$ of $\overline{{\mathcal A}}.$ \end{definition} Let ${\mathcal A}$ be a sharp arrangement, $(\overline{H},\overline{H'})$ the sharp pair, $P_0= \overline{H} \cap \overline{H'}$ be the point at infinity, and $m(P_0)$ the multiplicity of $P_0$ in $\overline{{\mathcal A}}.$ Then keeping the notation (\ref{notationlocal}) we define $\overline{H}=H_\infty=H_{m(P_0)}^{P_0}$ and $$S(P_0)=\{H\in {\mathcal A}\,|\, \overline{H} \cap H_\infty = P_0\}.$$ We choose the pair $(V_0,V_1)$ as follows: $V_0$ lies in the unbounded chamber with wall the hyperplane $H'' \in S(P_0)$ such that the induced polar order in $S(P_0)$ is $$H''=H_{m(P_0)-1}^{P_0} \lhd \cdots \lhd H_{1}^{P_0}=H',$$ where $\overline{H'}$ is the second line in the sharp pair (see Figure \ref{fig1}). \begin{figure} \caption{Polar coordinates system for a sharp arrangement} \label{fig1} \end{figure} \begin{remark} The choice of notation $P_0$ for the point at infinity will be clearer in the next section. Indeed it turns out to be the most natural choice to agree with the properties on all the other points in the sharp line $H_1^{P_0}$. \end{remark} Denote by $$\mathcal{P}:=\{P \in L_2({\mathcal A})\,|\, P \in H_{1}^{P_0}\}$$ the set of all affine points belonging to the sharp line $H_{1}^{P_0}$. As $H_{1}^{P_0} \cup {P_0}=\overline{H'}$ in the sharp pair $(\overline{H},\overline{H'})$ of $\overline{{\mathcal A}},$ it is an easy remark that for any point $P\in \mathcal{P},$ $\bold{\widehat{U}}(P)=\emptyset,$ as any other point $P'\in L_2({\mathcal A})\backslash \mathcal{P}$ has to lie in the same side of $H_{m(P_0)-1}^{P_0}$ with respect to $H_{1}^{P_0}$ (see Figure \ref{fig1}). As in Figure \ref{fig1}, we will index points $P_1 \lhd \cdots \lhd P_m$ in $\mathcal{P}$ following the order induced by $(V_0,V_1)$ along $H_{1}^{P_0}.$ \subsection{On the line $H_1^{P_0}$}\label{inclusion} The arrangement ${\mathcal A}_1=\{H_1^{P_0}\}$ is a subarrangement of ${\mathcal A}$ and it is a simple remark that the complex $\mathbb{C}C_*(\mathcal{S}({\mathcal A}_1))$ is a subcomplex of $\mathbb{C}C_*(\mathcal{S}({\mathcal A})).$ Then the map \begin{equation} \label{eq:inclusion1} \begin{split} i\colon (\mathbb{C}C_*(\mathcal{S}({\mathcal A}_{1})), \partial_*) &\rightarrow (\mathbb{C}C_*(\mathcal{S}({\mathcal A})),\partial_*) \end{split} \end{equation} is a well defined inclusion of algebraic complexes. 
It defines an exact sequence of algebraic complexes \begin{equation}\label{eq:seq1} 0 \rightarrow (\mathcal{C}_*(\mathcal{S}({\mathcal A}_{1})),\partial_*) \rightarrow(\mathcal{C}_*(\mathcal{S}({\mathcal A})),\partial_*) \rightarrow (F^1_*(\mathcal{S}({\mathcal A})),\partial_*)\rightarrow 0, \end{equation} where $F^1_*(\mathcal{S}({\mathcal A}))$ is the quotient complex $\mathbb{C}C_*(\mathcal{S}({\mathcal A}))/\mathbb{C}C_*(\mathcal{S}({\mathcal A}_{1}))$ with the induced boundary map. This exact sequence (\ref{eq:seq1}) gives rise to a long exact sequence in homology \begin{center} $\dots \rightarrow H_2((F^1_*(\mathcal{S}({\mathcal A})),\partial_*)) \rightarrow H_1(\mathcal{C}_*(\mathcal{S}({\mathcal A}_1)),\partial_*) \rightarrow H_1(\mathcal{C}_*(\mathcal{S}({\mathcal A})),\partial_*)$\\ $\rightarrow H_1(F^1_*(\mathcal{S}({\mathcal A})),\partial_*)\rightarrow H_0(\mathcal{C}_*(\mathcal{S}({\mathcal A}_1)),\partial_*) \rightarrow \dots$ \end{center} with \begin{center} $H_1(\mathcal{C}_*(\mathcal{S}({\mathcal A}_{1})),\partial_*)=0$ and $H_0(\mathcal{C}_*(\mathcal{S}({\mathcal A}_{1})),\partial_*) =\mathbb{C}[t,t^{-1}]/1-t,$ \end{center} that is we have the same cyclic modules $\frac{R}{\varphi_d}$ in $H_1(\mathcal{C}_*(\mathcal{S}({\mathcal A})),\partial_*)$ and $H_1(F^1_*(\mathcal{S}({\mathcal A})),\partial_*)$. It is an easy remark that the inclusion (\ref{eq:inclusion1}) is equivalent to remove the row corresponding to $H_1^{P_0}$ from the second boundary matrix. \subsection{On the matrix $Mat(\partial_2)$ for sharp arrangements}\label{reduction} Following section \ref{inclusion}, we denote by $M$ the submatrix of $Mat(\partial_2)$ obtained removing the row $H_1^{P_0}.$ We denote by $M_1$ the submatrix of $M$ with columns blocks $c^P,\,P\in \mathcal{P},$ and by $M_2$ the submatrix with columns $c^P,\,P\in L_2({\mathcal A}) \backslash \mathcal{P}:$ \begin{equation}\label{Matdec} M =( M_1 \mid M_2 ) \quad . \end{equation} We denote by $\partial_2(S(P))$ the sub-matrix of $M$ with columns $c^P$ and rows $H \in S(P).$ Similarly, we denote by $\partial_2(\bold{\widehat{U}}(P))$ the sub-matrix of $M$ with column block $c^P$ and rows $H\in \bold{\widehat{U}}(P).$ Performing operations $O^P_{i+1,i}(t^{-1}):=H_i^P- t^{-1}H_{i+1}^P,$ $1 \leq i \leq m(P)-1,$ we can diagonalize $\partial_2(S(P))$ as (see among others \cite{gaiffi2009morse}) \begin{center} $D(P)=\left(\begin{array}{cccccc} 1-t^{m(P)}&0 &\hdots & 0\\ 0& & &0\\ \vdots &\ddots&&\vdots\\ & & 1-t^{m(P)}&0\\ 0& \hdots & 0&1-t\\ \end{array}\right)$ \end{center} where the last row is obtained simplifying each entry by elementary columns operations with the last one. Since for any $P\in \mathcal{P},$ $\bold{\widehat{U}}(P)=\emptyset,$ the following lemma holds: \begin{lemma}\label{lemdiag2} $M_1$ is diagonalizable in $D(M_1)$ whose diagonal blocks are $D(P_1)$, $\hdots$, $D(P_m),$ and entries corresponding to rows $H_k^{P_0},\,1< k < m(P_0),$ are all zeros. 
\end{lemma} \begin{remark}\label{rkdiag} Since the monodromy has to divide $|{\mathcal A}|+1,$ if $P \in \mathcal{P}$ is a point such that $m(P)$ is coprime with $|{\mathcal A}|+1,$ then $1-t^{m(P)}$ cannot appear in the diagonal form of $M$, that is there exist columns operations on $M$ such that $D(P)={\mathsf{diag}}(1-t),$ where ${\mathsf{diag}}(1-t)=\left(\begin{array}{cccc} 1-t &0 \hdots & 0\\ \vdots &\ddots&\vdots\\ 0& \hdots &1-t\\ \end{array}\right).$\\ Denote by ${\mathcal A}_p =\{H \in ({\mathcal A} \setminus \{H_{1}^{P_0}\}) \mid \gcd(m(H \cap H_{1}^{P_0}), |{\mathcal A}|+1) \neq 1\}\cup \{H_k^{P_0}\}_{1 < k < m(P_0)}$ and ${\mathcal A}_{cp}={\mathcal A} \setminus ({\mathcal A}_p \cup \{H_{1}^{P_0}\})$ two subsets corresponding to rows of $M$ and by $M_p$ and $M_{cp}$ the submatrices of $M$ corresponding, respectively, to rows $H\in {\mathcal A}_p$ an $H \in {\mathcal A}_{cp}$. Then the rows of $M$ can be re-ordered as follows: $$ M -> {{M_p}\choose{ M_{cp}} }. $$ Let's now consider only the columns of $\left( \begin{array}{c} M_p \\ M_{cp} \end{array}\right)$ corresponding to the columns of the matrix $M_1$ in equation (\ref{Matdec}). After diagonalization process of $M_1$ in Lemma \ref{lemdiag2}, the matrix $D(M_1)$ can be re-ordered to keep the diagonal form as $$D(M_1) ->\left( \begin{array}{cc} D_p & 0 \\ 0 & 0 \\ 0 & D_{cp} \end{array}\right),$$ where the first block corresponds to submtarix of rows $H\in {\mathcal A}_p \setminus \{H_k^{P_0}\}_{1 < k < m(P_0)}$, the zero block $( 0 \quad 0 )$ to rows in $\{H_k^{P_0}\}_{1 < k < m(P_0)}$ and the last block to rows $H \in {\mathcal A}_{cp}$. Then for any fixed row $H \in {\mathcal A}_p$ of the matrix $M_p$, the operations $O_{H,H^\prime}^{P,P^\prime}, row( H^\prime ) > row ( H )$ in the matrix $\left( \begin{array}{c} M_p \\ M_{cp} \end{array}\right)$ change, in the matrix $\left( \begin{array}{cc} D_p & 0 \\ 0 & 0 \\ 0 & D_{cp} \end{array}\right)$, the column $c(H)$ of $\left(\begin{array}{c} D_p \\ 0 \\0 \end{array}\right)$ with non zero entry in the row $H$. Since this column is exchanged, in the diagonalization process, with the new reduced column $c^P_H$ with all entries $0$ except the one at row $H$, we can conclude that the diagonalization of rows $H \in {\mathcal A}_p$ of matrix $\left( \begin{array}{c} M_p \\ M_{cp} \end{array}\right)$ leaves the submatrix $( 0 \quad D_{cp} )$ unchanged. Hence, since all non zero entries of $D_{cp}$ are of the form $1-t^m$, $\gcd(m,|{\mathcal A}|+1) = 1$, it follows that, after diagonalization of the first rows $H \in {\mathcal A}_p$ of $\left( \begin{array}{c} M_p \\ M_{cp} \end{array}\right)$, independently of how rows $H \in {\mathcal A}_{cp}$ are changed by this half diagonalization, there exist columns and rows operations involving rows $H \in {\mathcal A}_{cp}$ to diagonalize $( 0 \quad D_{cp} )$ as $( 0 \quad {\mathsf{diag}} (1-t) )$. That is we can reduce to study only the submatrix $M_p$ of $M$ corresponding to rows $H\in {\mathcal A}_p$. Remark that if $H' \in {\mathcal A}_p$, then $ row( H^\prime ) > row ( H )$ if and only if $H' \rhd H$, that is in the matrix $M_p$ we perform operations $O_{H,H^\prime}^{P,P^\prime}, H^\prime \rhd H $. \end{remark} Remark \ref{rkdiag} allows to immediately retrieve the following Libgober's result (see \cite{lib-mil}). 
\begin{theorem}[]\label{thmLib} Let ${\mathcal A}$ in $\mathbb{R}^2$ be a sharp arrangement with Milnor fiber $F$ and order $n=|{\mathcal A}|.$ If $\gcd(m(P),n+1)=1,$ for all $P \in \mathcal{P},$ then ${\mathcal A}$ is a-monodromic. \end{theorem} With more non trivial computations on $M,$ it is also possible to retrieve the following Yoshinaga's result proved in \cite{Yoshi2,Yoshi3} and also in \cite{Wil}. \begin{theorem}[]\label{thmYosh} Let ${\mathcal A}$ in $\mathbb{R}^2$ be a sharp arrangement with Milnor fiber $F$ and order $n=|{\mathcal A}|.$ Assume that $\mathcal{P}$ contains exactly one point $P$ such that $\gcd\{m(P),n+1\} \neq 1.$ Then ${\mathcal A}$ is a-monodromic. \end{theorem} In the end of this section we describe the effect of the diagonalization of $M_1$ on the right part $M_2$ of $M.$ Denote by $D(M_2)$ the new matrix obtained from $M_2$ after diagonalization of $M_1$. \begin{theorem}\label{thmeffect} Operations $O^P_{i+1,i}(t^{-1})=H_i^P- t^{-1}H_{i+1}^P,\,P\in \mathcal{P},$ change the matrix $M_2$ as follows: \begin{enumerate} \item the zero entries in the row $H_i^{P}$ become entries of the form $t^{-1}S(P')$ in the columns corresponding to the block $c^{P'}$ if and only if $H_{i+1}^{P}=H_1^{P'}$; \item submatrices $\partial_2(S(P')),\,P'\in L_2({\mathcal A})\backslash \mathcal{P},$ and rows $H_k^{P_0},\,1<k<m(P_0),$ are unchanged; \item rows in $\partial_2(\bold{\widehat{U}}(P')),\,P'\in L_2({\mathcal A})\backslash \mathcal{P},$ become all zeros except the rows corresponding to $H \in \{\max_{\lhd}S(P) \cap \bold{\widehat{U}}(P') \mid P\in \mathcal{P}\}$ and to $H_k^{P_0},1<k<m(P_0).$ \end{enumerate} All other entries are unchanged. \end{theorem} \proof Using usual notation, $P$ will denote a point in $\mathcal{P}$ and $P'$ a point in $L_2({\mathcal A})\backslash \mathcal{P}.$ Recall that given a row $H$, the entries in the column block $c^{P'}$ are non zero if and only if $H \in S(P') \cup \bold{\widehat{U}}(P')$. Let us now prove 1., 2. and 3. separately. \begin{enumerate} \item Let's the entries of row $H_i^{P}$ and $H_{i+1}^{P}$ be respectively zero and not zero in the block $c^{P'}$, i.e. $H_i^P \notin S(P') \cup \bold{\widehat{U}}(P')$ and $H_{i+1}^P \in S(P') \cup \bold{\widehat{U}}(P')$. By simple geometric remark (see for instance lines $H_i^{P_m}, i>2$, in Figure \ref{fig1}) it follows immediately that if the line $H_i^P \notin S(P') \cup \bold{\widehat{U}}(P')$, then $H_{i+1}^P \notin \bold{\widehat{U}}(P')$. Hence $H_{i+1}^P \in S(P')$ and since if $H_{i+1}^P \in S(P')$ then $H_i^P \in U(P')$, the only possibility left for $H_{i+1}^P$ to be in $S(P')$ when $H_i^P \notin S(P') \cup \bold{\widehat{U}}(P')$ is that $H_i^P \notin \mathbb{C}one(P')$, that is if and only if $H_{i+1}^{P}=H_1^{P'}$. \item Assume $H_i^P \in S(P').$ It is obvious that $H_{i+1}^P \notin S(P') \cup \bold{\widehat{U}}(P')$ and the entries $e(H_i^P,c^{P'})$ in the block $c^{P'}$ are unchanged by the operation $O^P_{i+1,i}(t^{-1}).$ Case of rows $H_k^{P_0},\,1<k<m(P_0),$ is trivial. \item Assume $H_i^P \in \bold{\widehat{U}}(P')$. 
Then the entries $e(H_i^P,c^{P'})$ are changed by the operation $O^P_{i+1,i}(t^{-1})$ if and only if $e(H_{i+1}^P,c^{P'}) \neq 0$, that is if and only if one of the following two cases occurs: \begin{enumerate} \item $H_{i+1}^P \in \bold{\widehat{U}}(P')$ or \item $H_{i+1}^P \in S(P').$ \end{enumerate} In case (b) $H_i^P = \max_{\lhd}S(P) \cap \bold{\widehat{U}}(P')$ and the entries $e(H_i^P,c^{P'})$ will be modified by the operation $O^P_{i+1,i}(t^{-1}).$ In particular, they (may) stay different from zero. Let us now study case (a). Then $H_i^P \neq \max_{\lhd}S(P) \cap \bold{\widehat{U}}(P')$ and we can compute that $e(H_i^P-t^{-1}H_{i+1}^P,c^{P'})=0$ as follows. Let us denote by \begin{center} $\alpha_i^P(P'):= |\{ H \lhd H_i^P\,|\,H \in U(P')\}|,$\\ $l_i^P(C_j^{P'}):= |\{ H \lhd H_i^P\,|\,H \in L(C_j^{P'}))\}|,$ and\\ $u_i^P(C_j^{P'}):= |\{ H \lhd H_i^P\,|\,H \in U(C_j^{P'}))\}|.$ \end{center} With these notations we can express the $\bold{\widehat{U}}(P')$ addendum in formula (\ref{eq:boundary2}) of $\partial_2(l.e_{[C_j^{P'} \prec P']})$ as follows: $$ \sum_{\substack{[C_{i-1}^{P} \prec F_i^{P}] \,s.t. \\H_i^{P} \in \bold{\widehat{U}}(P')}} t^{\alpha_i^{P}(P')} \cdot (1-t^{l_i^{P}(C_j^{P'})}) \cdot (t^{u_i^{P}(C_j^{P'})} - t^{m(P') - j })(l).e_{[C_{i-1}^{P} \prec F_i^{P}]} \quad. $$ Given a critical $2-$cell $[C_j^{P'} \prec P'],$ as $H_{i}^P, H_{i+1}^P \in \bold{\widehat{U}}(P')$, it is easily seen that $\alpha_{i+1}^P(P')= \alpha_i(P')+1,\,l_{i+1}^P(C_j^{P'})=l_i^P(C_j^{P'}),$ and $u_{i+1}^P(C_j^{P'})=u_i^P(C_j^{P'}).$ We deduce directly that the entries $e(H_i^P-t^{-1}H_{i+1}^P,c^{P'})$ are all zeros. \end{enumerate} \endproof In view of the new non zero coefficients of type 1. in Theorem \ref{thmeffect} that appear when we diagonalize $M_1$, define for any $P'\in L_2({\mathcal A})\backslash \mathcal{P}$: $$ \mathcal{N}(P'):=\{H_i^P,\,P\in \mathcal{P}\,|\,H_{i+1}^P = H_1^{P'}\}. $$ Notice that $\mathcal{N}(P')$ is either the empty-set or contains just one line. \subsection{On rows $H_k^{P_0}$ in the matrix $M$}\label{simplification} It is an easy remark that lines of the form $H_k^{P_0}$ are never in $\mathbb{C}one(P)$, unless $P$ belongs to one of them (see for instance points $P$ and $P'$ in Figure \ref{fig1}). On the other hand, if they belong to the cone of $P$, by our choice of polar coordinates, they also belong to the upper of $P$. More precisely, if $P \in L_2({\mathcal A})$, the following holds: $$ H_k^{P_0} \in \bold{\widehat{U}}(P) \mbox{ if and only if } \exists~h > k \mbox{ s.t. } H_h^{P_0} \in S(P) \quad. $$ The first part of the following Lemma is then proved. \begin{lemma}\label{up0} Let $P \in L_2({\mathcal A})$ be a point, then: \begin{enumerate} \item if $H_{k}^{P_0} \notin S(P)$, for all $k,\,1\leq k\leq m(P_0)-1$, then $e(H_{k}^{P_0},c^{P})=0,$ for all $k,\,1 \leq k \leq m(P_0)-1$; \item if $H_{m(P_0)-1}^{P_0} \in S(P),$ then for any other line $H \in S(P)$, if $c^P_H$ is the column defined in Notation \ref{not:goodcolumn}, we have: \begin{equation*} \begin{split} & e(H_{m(P_0)-1}^{P_0},c_H^P)=t^{m(P)-1}-1, \mbox{ and }\\ & e(H_{k}^{P_0},c_H^P)=t^{m(P_0)-k-2}(1-t)(1-t^{m(P)-1}),\, 1< k < m(P_0)-1. \end{split} \end{equation*} \end{enumerate} \end{lemma} \proof Point 1. is trivial. Let's prove point 2. \\ Assume $H_{m(P_0)-1}^{P_0}\in S(P)$. 
Then $H_{k}^{P_0}\in \bold{\widehat{U}}(P)$, for all $k < m(P_0)-1$, and: \begin{enumerate} \item[i)] $H_{m(P_0)-1}^{P_0}= H_1^P$, that is any other line $H \in S(P)$ is of the form $H=H_i^P$, $i >1$ and, by Notation \ref{not:goodcolumn}, $c_H^P=c_1^p$; \item[ii)] the entry $e(H_{m(P_0)-1}^{P_0},c_1^P)$ will be in the $S(P)$ addendum of formula (\ref{eq:boundary2}); \item[ii)] the entry $e(H_{k}^{P_0},c_1^P)$ will be in the $\bold{\widehat{U}}(P)$ addendum of formula (\ref{eq:boundary2}) \end{enumerate} and we get: \begin{equation}\label{SP} e(H_{m(P_0)-1}^{P_0},c_1^P)= \bigg( \displaystyle{\prod_{\substack{H' < H_{m(P_0)-1}^{P_0}~s.t.\\ H'\in U(P)}}} t \bigg) \bigg[\prod_{\substack{H'~s.t.\\ H' \in[C_1^P\rightarrow H_{m(P_0)-1}^{P_0})}}t -\prod_{\substack{H' < H_{m(P_0)-1}^{P_0}~s.t.\\ H' \in S(P)}}t\bigg] \end{equation} \begin{equation}\label{UCP} e(H_k^{P_0},c_1^{P}) = \bigg(\displaystyle{\prod_{\substack{H' < H_{k}^{P_0}~s.t.\\H'\in U(P)}}}t \bigg)\bigg(1-\prod_{\substack{H' < H_{k}^{P_0}~s.t.\\ H' \in L(C_1^P)}}t \bigg)\bigg(\prod_{\substack{H' < H_{k}^{P_0}~s.t.\\H'\in U(C_1^P)}}t -\prod_{\substack{H'~s.t.\\H'\in U(C_1^P)}}t\bigg) \end{equation} Since $H_{m(P_0)-1}^{P_0}$ is the smallest line of ${\mathcal A}$ and $[C_1^P\rightarrow H_{m(P_0)-1}^{P_0}) = \{H_2^P,\hdots,H_{m(P)}^P\},$ equation (\ref{SP}) becomes $e(H_{m(P_0)-1}^{P_0},c_1^P)= t^{m(P)-1}-1.$\\ Since $\{H' < H_{k}^{P_0}~s.t.~H'\in U(P)\} = \{H_{m(P_0)-1}^{P_0} < H' < H_{k}^{P_0}\}$, its cardinality is $m(P_0)-k-2.$ Moreover $|\{H' < H_{k}^{P_0}~s.t.~H' \in L(C_1^P)\}|=1$ as $L(C_1^P)=\{H_{m(P_0)-1}^{P_0}\}$. Finally, any line in $U(C_1^P)=\{H_2^P,\hdots,H_{m(P)}^P\}$ is bigger than $H_k^{P_0},$ and equation (\ref{UCP}) becomes $e(H_{k}^{P_0},c_1^P)=t^{m(P_0)-k-2}(1-t)(1-t^{m(P)-1})$. \endproof \begin{lemma}\label{up0h1} Let $H=H_{m(P_0)-1}^{P_0} $ and $P\in H.$ We have that: \begin{equation*} \begin{split} & e(H,c_H^P)=t-1, \mbox{ and }\\ & e(H_{k}^{P_0},c_H^P)=t^{m(P_0)-k-2}(1-t)^2, 1< k < m(P_0)-1. \end{split} \end{equation*} \end{lemma} \proof Analogously to Proof of Lemma \ref{up0}, $H=H_{m(P_0)-1}^{P_0}=H_1^P$. Then $c_H^P= c_{m(P)-1}^P$ and since $H_{m(P_0)-1}^{P_0}$ is the smallest line of ${\mathcal A}$ and $[C_{m(P)-1}^P\rightarrow H_{m(P_0)-1}^{P_0}) = \{H_{m(P)}^P\},$ equation (\ref{SP}) with $c_{m(P)-1}^P$ instead of $c_{1}^P$ becomes $e(H_{m(P_0)-1}^{P_0},c_{m(P)-1}^P)= t-1.$\\ About second equality, all lines in $L(C_{m(P)-1}^P)=\{H_1^P,\hdots,H_{m(P)-1}^P\}$ but $H_1^P$ are bigger than $H^{P_0}_k$, $k < m(P_0)-1$, that is $|\{H' < H_{k}^{P_0}~s.t.~H' \in L(C_{m(P)-1}^P)\}|=1$. Finally, as $U(C_{m(P)-1}^P)=\{H_{m(P)}^P\},\, H_{m(P)}^P > H_{k}^{P_0}$, equation (\ref{UCP}) with $c_{m(P)-1}^P$ instead of $c_{1}^P$ becomes $e(H_{k}^{P_0},c_{m(P)-1}^P)=t^{m(P_0)-k-2}(1-t)^2.$ \endproof \begin{remark}\label{eliminationP0}The columns $c_H^P$ are key columns in the diagonalization process. Then we are interested in studying and simplifying the non zero entries on those columns. Clearly by Lemmas \ref{up0} and \ref{up0h1} performing operations \begin{equation}\label{op:P0} O^{P_0}_{m(P_0)-1,k}(t^{m(P_0)-k-2}(t-1))=H_k^{P_0} + t^{m(P_0)-k-2}(1-t)H_{m(P_0)-1}^{P_0} \end{equation} we have that for any point P and line $H \in S(P), H \neq H_k^{P_0},1<k<m(P_0)-1$, the entries $e(H_k^{P_0},c_H^P),1<k<m(P_0)-1,$ become all zero. 
All the other columns of the matrix are unchanged by these operations, since they have zeros entries on the row $H_{m(P_0)-1}^{P_0}.$ \end{remark} Theorem \ref{thmeffect} and the previous discussion motivate the following definition. \begin{definition}\label{def:Uppmax} Let $P\in L_2({\mathcal A}) \backslash \mathcal{P}$ be an affine point. We define the upper cone maximal of $P$ as the set $$ \bold{\widehat{U}}_{Max}(P):=\{\max_{\lhd} \bold{\widehat{U}}(P) \cap S(P_i) \mid P_i \in \mathcal{P}\} \cup \mathcal{H}_0(P), $$ where $\max_{\lhd} \bold{\widehat{U}}(P) \cap S(P_i)$ is the maximal hyperplane with respect to the order $\lhd$ in each $S(P_i)$ which also belongs to the upper cone $\bold{\widehat{U}}(P)$, and $\mathcal{H}_0(P)$ is the following subset of $S(P_0)\cap \bold{\widehat{U}}(P):$ $$\mathcal{H}_0(P):= \left \{ \begin{split} &\{H_k^{P_0}\}_{1 <k < m(P_0)-1} \cap \bold{\widehat{U}}(P), & \mbox{ if \,} H_{m(P_0)-1}^{P_0} \notin S(P)\\ &\emptyset & \mbox{ if \,} H_{m(P_0)-1}^{P_0} \in S(P). \end{split} \right . $$ \end{definition} \begin{remark}\label{simplification1} Since $H_k^{P_0} \in \bold{\widehat{U}}(P) \mbox{ if and only if } \exists~h > k \mbox{ s.t. } H_h^{P_0} \in S(P)$, the condition $\mathcal{H}_0(P) \neq \emptyset$ is equivalent to the existence of an $h \neq m(P_0)-1$ such that $H_h^{P_0} \in S(P)$. Hence after we performed operation in equation (\ref{op:P0}), the only non zero entries of rows $H_k^{P_0}$ are in the column blocks $c^P$ such that $H_k^{P_0} \in S(P)\cup \bold{\widehat{U}}_{Max}(P).$ That is, any row operation $H-\varphi H_k^{P_0}$ leaves the entry $e(H,c_H^P)$ unchanged for any point $P$ such that $H_k^{P_0} \notin S(P)\cup \bold{\widehat{U}}_{Max}(P).$ \end{remark} We will use this Remark \ref{simplification1} in section \ref{subgraph1} (respectively \ref{subgraph2}) in order to simplify the rows $H_k^{P_0},\,1<k<m(P_0)-1$ (respectively $H_k^{P_0},\,2<k<m(P_0)-1$). \section{Graph and monodromy of the Milnor fiber}\label{sec:mains} In this section, we study special graphs obtained from $\mathcal{G}({\mathcal A})$ in order to study the monodromy of ${\mathcal A}$. In Section \ref{sec:sharp} we essentially proved the following facts: \begin{enumerate} \item[i)] the line $H_1^{P_0}$ can be removed (see section \ref{inclusion}); \item[ii)] performing operations $O^P_{i+1,i}(t^{-1})$, we diagonalize the matrix $M_1$ getting entries $e(H_i^{P},c_{i-1}^P)=1-t^{m(P)}$, $1 < i < m(P)$, $e(H_{m(P)}^{P},c_{m(P)-1}^P)=1-t$, for all $P \in \mathcal{P}$ (see section \ref{reduction}); \item[iii)]after performing operations $O^P_{i+1,i}(t^{-1})$ and $O^{P_0}_{m(P_0)-1,k}(t^{m(P_0)-k-2}(t-1)),$ we have that \begin{center} $e(H,c^{P'})\neq 0 \Leftrightarrow H \in S(P')\cup \bold{\widehat{U}}_{Max}(P') \cup \mathcal{N}(P')$ \end{center} (see Theorem \ref{thmeffect} and Remark \ref{eliminationP0}). \end{enumerate} Since the only possible monodromies have to divide both $|{\mathcal A}|+1$ (by the equivariant decomposition in equation (\ref{eq:decomposition}) ) and $m(P_0)$ ( see \cite[3.21 (ii)]{Yoshi2}), it follows from i) and ii) that the hyperplanes \begin{equation}\label{eq:remove} {\mathcal A}_r=\{H_i^P \in {\mathcal A} | P \in \mathcal{P} \mbox{ and } \gcd(m(P),|{\mathcal A}|+1)=1 \mbox{ or } \gcd(m(P),m(P_0))=1 \mbox{ or } i=m(P) \} \end{equation} can all be removed in the sense of Definition \ref{def:remove} by the same argument than in Remark \ref{rkdiag}. Notice that $ i=m(P)$ includes the case of double points, i.e. $m(P)=2$. 
\\ Then in order to study the monodromy of ${\mathcal A}$ we can reduce to study the graph $\mathcal{G}_{{\mathcal A}'}$ defined as follows: \begin{enumerate} \item vertices of $\mathcal{G}_{{\mathcal A}'}$ are vertices $(H,P)$ of $\mathcal{G}({\mathcal A})$ such that $H \in {\mathcal A}'={\mathcal A} \setminus {\mathcal A}_r$ and $P \in L_2({\mathcal A}) \setminus \mathcal{P}$; \item two vertices $(H,P)$ and $(H',P^\prime)$ are connected by an edge from $(H,P)$ to $(H',P^\prime)$ if and only if $H^\prime \in S(P)\cup \bold{\widehat{U}}_{Max}(P) \cup \mathcal{N}(P)$. \end{enumerate} Let us remark that $\mathcal{G}_{{\mathcal A}'}$ has lesser vertices than $\mathcal{G}({\mathcal A})$ and all edges $[(H,P),(H',P')]$ for which $H' \in \bold{\widehat{U}}(P) \setminus \bold{\widehat{U}}_{Max}(P)$ disappear. Remark that $\mathcal{G}_{{\mathcal A}'}$ is not properly a subgraph of $\mathcal{G}({\mathcal A})$ since new edges $[(H,P),(H',P')]$ corresponding to $H'\in \mathcal{N}(P)$ appear. Graph $\mathcal{G}_{{\mathcal A}'}$ is more informative than $\mathcal{G}({\mathcal A})$ but still too big. It can be highly reduced as we will show in the two next subsections. \begin{notation}\label{not:arrnozero} Denote by $$ {\mathcal A}_0={\mathcal A}' \setminus \{H^{P_0}_k\}_{1< k <m(P_0)-1} $$ and by $$ {\mathcal A}'_0={\mathcal A}' \setminus \{H^{P_0}_k\}_{2< k <m(P_0)-1} \quad . $$ \end{notation} Note that ${\mathcal A}_0$ is obtained from ${\mathcal A}'_0$ simply removing the hyperplane $H_2^{P_0}$. \subsection{The subgraph $\mathcal{G}_{{\mathcal A}_0,{\mathsf{last}}}$ of $\mathcal{G}_{{\mathcal A}'}$}\label{subgraph1} Recall that given a line $H$ we call ${\mathsf{last}}(H)\in L_2({\mathcal A})\backslash \mathcal{P}$ (resp. ${\mathsf{min}}(H) \in L_2({\mathcal A})\backslash \mathcal{P}$) the last point (resp. 
the second point) in $H$ with respect to the order induced by the polar order $\lhd$ of ${\mathcal A}.$ \begin{remark}\label{lastmininfty} By basic geometric remarks we have: \begin{enumerate} \item if $2 < k < m(P_0)-1,$ then ${\mathsf{last}}(H_k^{P_0}) \neq {\mathsf{last}}(H)\,\text{and}\,{\mathsf{min}}(H),$ for any $H \neq H_k^{P_0};$ \item if ${\mathsf{last}}(H_2^{P_0})= {\mathsf{last}}(H),$ for a certain $H \neq H_2^{P_0},$ then $m(P_0)=3$ and $H=H_2^{P_i},$ for a certain $P_i\in \mathcal{P}.$ \item if ${\mathsf{min}}({H_2}^{P_0}) = {\mathsf{last}}(H),$ for a certain $H \neq H_2^{P_0},$ then $m(P_0)=3;$ \item if ${\mathsf{min}}(H_2^{P_0})={\mathsf{min}}(H_2^{P_i}),$ for a certain $P_i\in \mathcal{P},$ then $m(P_i)=2;$ \item if ${\mathsf{last}}(H_{m(P_0)-1}^{P_0})={\mathsf{last}}(H),$ for a certain $H\neq H_{m(P_0)-1}^{P_0},$ then $H=H_2^{P_i},$ for a certain $P_i\in \mathcal{P}.$ \end{enumerate} \end{remark} The following remark studies the contribution of the lines $H_k^{P_0}$ in the non zero entries of the matrix $D(M_2)$ related to columns $c^{{\mathsf{last}}(H_j^{P_i})}$ or $c^{{\mathsf{min}}(H_2^{P_i})},\,P_i\in \mathcal{P}.$ \begin{remark}\label{rksimplast}The following facts hold: \begin{enumerate} \item for any $H \in {\mathcal A}$, if $H_k^{P_0} \in S({\mathsf{last}}(H))$, then $k=m(P_0)-1$ and \begin{equation*} H_{m(P_0)-1}^{P_0} \in S({\mathsf{last}}(H)) \Leftrightarrow H_k^{P_0}\in \bold{\widehat{U}}({\mathsf{last}}(H)),1 \leq k < m(P_0)-1, \end{equation*} that is $H_k^{P_0} \notin S({\mathsf{last}}(H_j^{P_i})) \cup \bold{\widehat{U}}_{Max}({\mathsf{last}}(H_j^{P_i})) \cup \mathcal{N}({\mathsf{last}}(H_j^{P_i})),$ for any $j$ and $P_i\in \mathcal{P}$; \item let $H_2^{P_i}\in {\mathcal A},\,P_i\in \mathcal{P}.$ Since ${\mathsf{min}}(H_2^{P_i})$ is smaller or equal than $H_2^{P_i} \cap H_2^{P_0},$ for any $2 < k < m(P_0)$ we have that \begin{equation*} H_k^{P_0}\notin S({\mathsf{min}}(H)) \cup \bold{\widehat{U}}_{Max}({\mathsf{min}}(H))\cup \mathcal{N}({\mathsf{min}}(H)) . \end{equation*} \end{enumerate} \end{remark} \begin{lemma}\label{removeHP0last}Rows $\{H^{P_0}_k\}_{1< k <m(P_0)-1}$ can be removed in the sense of Definition \ref{def:remove} without changing columns $c_H^{{\mathsf{last}}(H)}$, $H \in {\mathcal A}'$. \end{lemma} \begin{proof}Let's remove rows $H_k^{P_0}$, $1< k <m(P_0)-1,$ in the matrix $D(M)$ by using the last points ${\mathsf{last}}(H_k^{P_0})$ and the corresponding columns $c_{H_k^{P_0}}^{{\mathsf{last}}(H_k^{P_0})}$ defined in Notation \ref{not:goodcolumn}. It is clear that $e(H',c_{H_k^{P_0}}^{{\mathsf{last}}(H_k^{P_0})})=0$ for all $H'\lhd H_k^{P_0}.$ In order to remove the entries $e(H',c_{H_k^{P_0}}^{{\mathsf{last}}(H_k^{P_0})})\neq 0$ with $H'\rhd H_k^{P_0},$ we perform rows operations by using the entry $e(H_k^{P_0},c_{H_k^{P_0}}^{{\mathsf{last}}(H_k^{P_0})})=t^\alpha(1-t).$ It follows from Remarks \ref{simplification1} and \ref{rksimplast} that these row operations do not affects the other columns $c_H^{{\mathsf{last}}(H)}$ of $D(M).$ We have that the last points ${\mathsf{last}}(H_k^{P_0}),\,1<k<m(P_0)-1,$ are different from all the other last points ${\mathsf{last}}(H)$ for the remaining line $H \in {\mathcal A}'$, see Remark \ref{lastmininfty} 1. and 2. 
\end{proof} \begin{definition}We call $\mathcal{G}_{{\mathcal A}_0,{\mathsf{last}}}$ the subgraph of $\mathcal{G}_{{\mathcal A}'}$ with vertices $$(H,{\mathsf{last}}(H)),\,\,\text{for all} \,\,H\in {\mathcal A}_0.$$ \end{definition} \begin{notation} For the sake of simplicity, we will only use the hyperplane $H$ instead of $(H,{\mathsf{last}}(H))$ for the vertices of $\mathcal{G}_{{\mathcal A}_0,{\mathsf{last}}}$ and the notation $[H,H']$ for the edge oriented from $(H,{\mathsf{last}}(H))$ to $(H',{\mathsf{last}}(H')).$ \end{notation} Our goal is to show that, under special conditions, in order to study the monodromy of ${\mathcal A}$ it is enough to study the graph $\mathcal{G}_{{\mathcal A}_0,{\mathsf{last}}}$. Let's start by observing that since ${\mathcal A}$ is sharp, all intersections lie in the same side of $H_1^{P_0},$ and then for any line $H_h^{P_i}\rhd H_k^{P_j}$, $P_i,P_j \in \mathcal{P}$, we have that $H_h^{P_i}\cap H_k^{P_j}\rhd H_{h+1}^{P_i}\cap H_k^{P_j}.$ Viceversa, for any line $H_h^{P_i}\lhd H_k^{P_j},$ we have that $H_h^{P_i}\cap H_k^{P_j}\lhd H_{h+1}^{P_i}\cap H_k^{P_j}$ (see Figure \ref{fig1}). By this we get the following important remark on edges of $\mathcal{G}_{{\mathcal A}_0,{\mathsf{last}}}$. \begin{remark}\label{rk1} By ${\mathcal A}$ being a sharp arrangement, the following facts hold for ${\mathsf{last}}(H)$, $H \in {\mathcal A}_0$, which allows us to list all the possible edges appearing in $\mathcal{G}_{{\mathcal A}_0,{\mathsf{last}}}$. As usual let $P_i$, $ 1 \leq i \leq |\mathcal{P} |$, denote points in $\mathcal{P}$ with $H^{P_i} \lhd H^{P_j}$ if and only if $i<j$: \\ \begin{enumerate} \item $H_h^{P_i} \parallel H_k^{P_j}\,\,\mathbb{R}ightarrow \,\, \left \{ \begin{array}{cc} h=2,\,k=m(P_j) & \mbox{ if } i>j \\ h=m(P_i),\,k=2 & \mbox{ if } i<j ; \end{array} \right .$ \item if $H_h^{P_i}\in S({\mathsf{last}}(H_k^{P_j}))$ then: \begin{equation}\label{eq:lastp} \left \{ \begin{array}{cccc} h=2 & \mbox{ or } & h=3 \mbox{ and } H_{2}^{P_i} \parallel H_k^{P_j} & \mbox{ if } i>j \\ h=m(P_i) & \mbox{ or } & h=m(P_i)-1 \mbox{ and } H_{m(P_i)}^{P_i} \parallel H_2^{P_j} & \mbox{ if } i<j ; \end{array} \right . \end{equation} since condition $H_{2}^{P_i} \parallel H_k^{P_j}$ implies $k=m(P_j)$ and $H_{m(P_i)}^{P_i} \notin {\mathcal A}_0$, conditions (\ref{eq:lastp}) correspond respectively to edges $E_1=[H_k^{P_j},H_{2}^{P_i},\lhd]$ and $E_2=[H_2^{P_j},H_{m(P_i)-1}^{P_i},\rhd]$ in $\mathcal{G}_{{\mathcal A}_0,{\mathsf{last}}}$. Notice that condition $H_{m(P_i)}^{P_i} \parallel H_2^{P_j}$ is empty for $i=0$ as $H_{m(P_0)}^{P_0}$ is the line at infinity. Indeed we can have that $H_{m(P_0)-1}^{P_0}\in S({\mathsf{last}}(H_k^{P_j}))$ for any $k,j$, $j \neq 0$, that is $E_3=[H_k^{P_j},H_{m(P_0)-1}^{P_0},\rhd]$ is another possible edge in $\mathcal{G}_{{\mathcal A}_0,{\mathsf{last}}}$. 
\item If $H_h^{P_i}\in \bold{\widehat{U}}_{Max}({\mathsf{last}}(H_k^{P_j})),$ then $0<i \leq j.$ Indeed, if $j<i$ then $H_h^{P_i} \in U({\mathsf{last}}(H_k^{P_j})) \Leftrightarrow H_h^{P_i} \parallel H_k^{P_j}$, that is $k=m(P_j)$ and $H_{k}^{P_j}\notin {\mathcal A}_0.$ The following hold: \begin{enumerate} \item if $i=j,$ then $h=k-1$ and the corresponding edge in $\mathcal{G}_{{\mathsf{last}},{\mathcal A}_0}$ is $E_4=[H_k^{P_j},H_{k-1}^{P_j},\rhd]$ (remark that this case also includes the case $H_{k-1}^{P_j} \in \mathcal{N}({\mathsf{last}}(H_k^{P_j}))$; \item if $0<i<j$, then we are in one of the following situations (by $H_{m(P_i)}^{P_i}\notin {\mathcal A}_0$): \begin{enumerate} \item $h=m(P_i)-1$ with $H_{m(P_i)}^{P_i}\in S({\mathsf{last}}(H_k^{P_j}))$ or $H_{m(P_i)}^{P_i} \parallel H_k^{P_j}$ (i.e. $k=2$), and the corresponding edge is $E_5=[H_k^{P_j},H_{m(P_i)-1}^{P_i},\rhd]$; \item $h=m(P_i)-2$ with $H_{m(P_i)}^{P_i} \parallel H_k^{P_j}$ (i.e. $k=2$), $H_{m(P_i)-1}^{P_i}\in S({\mathsf{last}}(H_2^{P_j}))$, and the corresponding edge is $E_6=[H_2^{P_j},H_{m(P_i)-2}^{P_i},\rhd];$ \end{enumerate} \end{enumerate} \item if $H_{m(P_i)-1}^{P_i}\in S({\mathsf{last}}(H_2^{P_j})),\, i <j$, then ${\mathsf{last}}(H_{m(P_i)-1}^{P_i})={\mathsf{last}}(H_2^{P_j}).$ Indeed in this case (see figure \ref{fig:last}) $$H \cap H_{m(P_i)-1}^{P_i}\lhd {\mathsf{last}}(H_{m(P_i)-1}^{P_i}) \lhd H \cap H_2^{P_j} \Leftrightarrow H_{m(P_i)} \lhd H \lhd H_2^{P_j}$$ and, since ${\mathcal A}$ is sharp, by $H_{m(P_i)} \lhd H \lhd H_2^{P_j}$ it follows $H_{m(P_i)} \parallel H \parallel H_2^{P_j}$ and we finished. \begin{figure} \caption{$H_{m(P_i)-1} \label{fig:last} \end{figure} \end{enumerate} This last condition will play an important role in our main Theorem. \end{remark} By Remark \ref{rk1} we get that all possible edges in $\mathcal{G}_{{\mathcal A}_0,{\mathsf{last}}}$ are the one in Figure \ref{fig:edge_glast}. \begin{figure} \caption{Edges in $\mathcal{G} \label{fig:edge_glast} \end{figure} \begin{proposition}\label{prop1} Let $H \neq H'$ be two hyperplanes in ${\mathcal A}_0$ such that \begin{center} ${\mathsf{last}}(H)= {\mathsf{last}}(H'),$ \end{center} then $H,H' \in \{H_2^{P_i}, H_{m(P_i)-1}^{P_i}\}_{1 \leq i \leq |\mathcal{P}|} \cup \{H_{m(P_0)-1}^{P_0}\}.$ \end{proposition} \proof Follows directly from Remark \ref{rk1}. \endproof In particular the two following propositions hold. \begin{proposition}\label{prop2} Given a line $H_2^{P_i}\in {\mathcal A}_0,$ if there exists $H\in {\mathcal A}_0$ such that ${\mathsf{last}}(H)={\mathsf{last}}(H_2^{P_i}),$ then we are in one of the following cases: \begin{enumerate} \item $H=H_{m(P_j)-1}^{P_j}$ with $0 \leq j < i.$ Moreover, if $j\neq 0,$ then $H_{m(P_j)}^{P_j} \parallel H_2^{P_i}.$ \item $H=H_{m(P_j)-1}^{P_j}$ with $0<i<j$ and $m(P_i)=m(P_j)=3$. \item $H=H_2^{P_j}$ with $0<j<i$ and $m(P_j)=3$. \item $H=H_2^{P_j}$ with $0<i<j$ and $m(P_i)=3.$ \end{enumerate} \end{proposition} \proof If $H=H_k^{P_j}$, then by Proposition \ref{prop1}, $k=2$ or $k=m(P_j)-1.$ If $k=2,$ we are in cases 3. and 4. and by remark \ref{rk1} 2., the smallest between $H_2^{P_j}$ and $H_2^{P_i}$ has to be the last minus one line in its point $P_i$ or $P_j,$ which implies $m(P_i)=3$ or $m(P_j)=3.$ If $k=m(P_j)-1,$ then by Remark \ref{rk1} 2. the biggest between $H_{m(P_j)-1}^{P_j}$ and $H_2^{P_i}$ has to be the second line in its point, while the smallest one has to be the last minus one. This directly imply points 1. and 2. 
\endproof \begin{proposition}\label{prop3} Given a line $H_{m(P_i)-1}^{P_i}\in {\mathcal A}_0,$ if $H\in {\mathcal A}_0$ is such that ${\mathsf{last}}(H)={\mathsf{last}}(H_{m(P_i)-1}^{P_i}),$ then we are in one of the following situations: \begin{enumerate} \item $H=H_{m(P_j)-1}^{P_j}$ with $0 \leq j <i$ and $m(P_i)=3$. \item $H=H_{m(P_j)-1}^{P_j}$ with $0 \leq i < j $ and $m(P_j)=3$. \item $H=H_2^{P_j}$ with $0 \leq i < j.$ Moreover, if $i \neq 0$ then $H_{m(P_i)}^{P_i} \parallel H.$ \item $H=H_2^{P_j}$ with $0<j<i$ and $m(P_i)=m(P_j)=3$. \end{enumerate} \end{proposition} \proof Similar to the proof of Proposition \ref{prop2}. \endproof In many situations, it happens that the graph $\mathcal{G}_{{\mathcal A}_{0},{\mathsf{last}}}$ has cycles. We will see that these cycles involve points $P_i \in \mathcal{P}$ of multiplicity 3. Hence, let's now define \begin{equation}\label{eq:A03} {\mathcal A}_{(0,3)}= {\mathcal A}_0 \backslash \{H_h^{P_i} |\,P_i\in \mathcal{P} \mbox{ and } m(P_i)=3\} . \end{equation} By above Propositions \ref{prop1}, \ref{prop2} and \ref{prop3} it follows that if, for hyperplanes in ${\mathcal A}_{(0,3)}$, \begin{equation}\label{cond:last} {\mathsf{last}}(H_{m(P_i)-1}^{P_i}) \neq {\mathsf{last}}(H_2^{P_j}), \mbox{ for all } 0 \leq i <j, \end{equation} then $$| \mathcal{V}_0(\mathcal{G}_{{\mathcal A}_{(0,3)},{\mathsf{last}}})|=|{\mathcal A}_{(0,3)}|, $$ $| \mathcal{V}_0(\mathcal{G}_{{\mathcal A}_{(0,3)},{\mathsf{last}}})|$ being the number of vertices of $\mathcal{G}_{{\mathcal A}_{(0,3)},{\mathsf{last}}}$, and the following results holds. \begin{proposition}\label{thm1} Let ${\mathcal A}$ in $\mathbb{R}^2$ be a sharp arrangement such that $${\mathsf{last}}(H_{m(P_i)-1}^{P_i}) \neq {\mathsf{last}}(H_2^{P_j}), \quad 0 \leq i <j, $$ holds for hyperplanes in ${\mathcal A}_{(0,3)}$. Then the matrix associated to $\mathcal{G}_{{\mathcal A}_{(0,3)},{\mathsf{last}}}$ can be diagonalized as ${\mathsf{diag}}(1-t).$ \end{proposition} The following main theorem, stated in Introduction, follows from the previous proposition and remarks \ref{lastmininfty} and \ref{rkamono}. \begin{theorem}\label{thm1.1} Let ${\mathcal A} \subset \mathbb{R}^2$ be a sharp arrangement such that $${\mathsf{last}}(H_{m(P_i)-1}^{P_i}) \neq {\mathsf{last}}(H_2^{P_j}), \quad 0 \leq i <j, $$ holds for hyperplanes in ${\mathcal A}_{(0,3)}$. Then ${\mathcal A}$ is a- or 3-monodromic. \end{theorem} \begin{remark}\label{rem:diffpol} Condition $${\mathsf{last}}(H_{m(P_i)-1}^{P_i}) \neq {\mathsf{last}}(H_2^{P_j}), \quad 0 \leq i <j $$ for hyperplanes in ${\mathcal A}_{(0,3)}$ and, more in general, for hyperplanes in ${\mathcal A}_{0}$ strictly depends on the chosen polar coordinates system $(V_0,V_1)$. Indeed the only requirement is that lines $(H_{\infty},\overline{H}_1^{P_0})$ are sharp pair. This is equivalent to say that sufficient condition for (\ref{cond:last}) to hold is that it exists a polar coordinates system $(V_0,V_1)$ with lines $(H_{\infty},\overline{H}_1^{P_0})$ sharp pair such that (\ref{cond:last}) holds for this system.\\ In particular for any sharp pair there are four different natural choices that are the two different ways we can choose the line at infinity, that is if $(\overline{H},\overline{H}')$ is the sharp pair we can have \begin{enumerate} \item $(\overline{H},\overline{H}')=(H_{\infty},\overline{H}_1^{P_0})$; \item $(\overline{H}',\overline{H})=(H_{\infty},\overline{H}_1^{P_0})$. 
\end{enumerate} The other two options depend on the choice of the origin $V_0$ that can be \begin{enumerate} \item[i.] in the chamber in the bottom left corner as in Figure \ref{fig1} \item[ii.] or in the chamber in the upper left corner as in Figure \ref{fig:example4}. \end{enumerate} It is not difficult to check that for a given choice $(V_0,V_1)$ of the polar coordinates system such that $(H_{\infty},\overline{H}_1^{P_0})$ is sharp pair, the following four conditions on hyperplanes of ${\mathcal A}$ are equivalent to (\ref{cond:last}) with respect to the four different natural choices described above. \begin{enumerate} \item[1.i.] ${\mathsf{last}}(H_{m(P_i)-1}^{P_i}) \neq {\mathsf{last}}(H_2^{P_j}),\, 0 \leq i <j $; \item[1.ii.]${\mathsf{last}}(H_{3}^{P_i}) \neq {\mathsf{last}}(H_{m(P_j)}^{P_j})$ and ${\mathsf{min}}(H_{m(P_0)-1}^{P_0}) \neq {\mathsf{last}}(H_{m(P_j)}^{P_j}), \, 0 \leq j <i $; \item[2.i] ${\mathsf{min}}(H_{2}^{P_j}) \neq {\mathsf{min}}(H_{m(P_{j+1})}^{P_{j+1}})$ and ${\mathsf{min}}(H_{2}^{P_0}) \neq {\mathsf{min}}(H_{m(P_{j})}^{P_{j}}),\, 0 < j$; \item[2.ii.] ${\mathsf{min}}(H_{2}^{P_j}) \neq {\mathsf{min}}(H_2^{P_{j-1}})$ and ${\mathsf{last}}(H_{2}^{P_0}) \neq {\mathsf{min}}(H_2^{P_{j}}),\,0 < j$; \end{enumerate} Notice that, by ${\mathcal A}$ sharp arrangement, conditions 2.i. and 2.ii. can only fail for points $P_j$ such that $m(P_j)=2$. \end{remark} \begin{example}\label{example1} Let ${\mathcal A}$ be a sharp arrangement such that condition (\ref{cond:last}) is satisfied for at least one choice of polar coordinate system $(V_0,V_1)$ among those that are described in Remark \ref{rem:diffpol}. Then with Theorem \ref{thm1.1} we have that: \begin{center} if $\gcd(3,m(P_0))=1,$ then ${\mathcal A}$ is a-monodromic. \end{center} Such arrangement is given in Example \ref{example4}. \end{example} Theorem \ref{thm1.1} provides a case where Conjecture \ref{conj:amon} holds. Indeed by the upper bound for torsion local system given by Papadima-Suciu (see \cite[Theorem C]{PS}) and the vanishing of the Aomoto complex when ${\mathcal G}amma({\mathcal A})$ is connected (see \cite[Lemma 2.1]{Bailet}), fist author proved (see \cite{Bailet}) that Conjecture \ref{conj:amon} holds if ${\mathcal A}$ only admits monodromies $p^{k}$, $p$ prime. \begin{corollary} Let ${\mathcal A}$ in $\mathbb{R}^2$ be a sharp arrangement such that $${\mathsf{last}}(H_{m(P_i)-1}^{P_i}) \neq {\mathsf{last}}(H_2^{P_j}), \quad 0 \leq i <j $$ holds for hyperplanes in ${\mathcal A}_{(0,3)}$. If the graph of double points ${\mathcal G}amma({\mathcal A})$ is connected, then ${\mathcal A}$ is a-monodromic. \end{corollary} \begin{remark}The reciprocal is false, as shown in example \ref{example0}. Indeed, $H_2^{P_4}$ does not contain any double point and ${\mathcal G}amma({\mathcal A})$ is not connected. Remark also that ${\mathcal A}$ has no multinet, since ${\mathcal A}$ is a-monodromic. \end{remark} The rest of this section is devoted to prove our main result \ref{thm1}, consequence of Lemma's \ref{lem1} and \ref{propmono3last} \begin{lemma}\label{lem1} Let ${\mathcal A}$ in $\mathbb{R}^2$ be a sharp arrangement such that $${\mathsf{last}}(H_{m(P_i)-1}^{P_i}) \neq {\mathsf{last}}(H_2^{P_j}), \quad 0 \leq i <j $$ holds for hyperplanes in ${\mathcal A}_{0}$. If $H_h^{P_i} \in {\mathcal A}_{0}$ is a vertex in a cycle of $\mathcal{G}_{{\mathcal A}_{0},{\mathsf{last}}}$ as the one in Figure \ref{fig:cycle}, then $h=2$ or $h=m(P_i)-1.$ More precisely we have the cycle in Figure \ref{fig:cycleA0}. 
\end{lemma} \begin{figure} \caption{Cycles in $\mathcal{G} \label{fig:cycleA0} \end{figure} Note that Lemma \ref{lem1} essentially states that the only edges involved in a cycle are edges of the form $E_1$ for $k=2, m(P_i)-1$, and $E_2$ (see Figure \ref{fig:edge_glast}). \proof Let us consider the cycle $\gamma$ in Figure \ref{fig:cycle}. By Remark \ref{rk1} we know that any edge $[H_h^{P_i},H_k^{P_j},\lhd]$ have to be of the form $E_1$, that is $k=2$. Then all vertices in $\gamma$ but $\overline{H}$ and $\widetilde{H}$ are of the form $H_2^{P_j}.$ Let us now deal with $\overline{H}$ and $\widetilde{H}$ and their edges in $\gamma$. If $\overline{H} \rhd H',$ then $\overline{H}$ is also of the form $H_2^{P_j}.$ Assume $\overline{H} \lhd H'$, then we are in the case $H'=H_2^{P_j}$, $\overline{H}=H_h^{P_i}$ with $i<j$, and one of the following two situations apply to the edge $[H',\overline{H},\rhd]$: \begin{enumerate} \item $\overline{H} \in S({\mathsf{last}}(H'))$ and by Remark \ref{rk1} 2. $\overline{H}= H_{m(P_i)-1}^{P_i}$. Since, by Remark \ref{rk1} 4., $\overline{H}=H_{m(P_i)-1}^{P_i}, \,0<i<j$, implies ${\mathsf{last}}(\overline{H})= {\mathsf{last}}(H')$, it follows that $\overline{H}= H_{m(P_0)-1}^{P_0}$; \item $\overline{H} \in \bold{\widehat{U}}_{Max}({\mathsf{last}}(H'))$ and, by $i<j$, we are in case Remark \ref{rk1} 3. (b) i., that is $\overline{H}=H_{m(P_i)-1}^{P_i},\,0<i<j,$ with $H_{m(P_i)}^{P_i}\parallel H'$ or $H_{m(P_i)}^{P_i} \in S({\mathsf{last}}(H')),$ (edge $E_5$). Indeed, case 3. (b) ii. $\overline{H}=H_{m(P_i)-2}^{P_i}$ is only possible if $H_{m(P_i)-1}^{P_i} \in S({\mathsf{last}}(H'))$ that is again, by Remark \ref{rk1} 4., ${\mathsf{last}}(H_{m(P_i)-1}^{P_i})= {\mathsf{last}}(H'),\, i<j$. \end{enumerate} Similarily, the existence of the edge $[H,\widetilde{H},\rhd]$ implies by Remark \ref{rk1} that $\widetilde{H}=H_{m(P_i)-1}^{P_i}$ for a certain $i,\, 0 \leq i \leq |\mathcal{P}|.$\\ By denoting $H=H_2^{P_j}$, $\overline{H}=H_2^{P_i}$ or $H_{m(P_i)-1}^{P_i}$, $H'=H_2^{P_{j'}}$, $\widetilde{H}=H_{m(P_{\widetilde{j}})-1}^{P_{\widetilde{j}}}$ and $H_{i_k}=H_2^{P_{j_k}}$, we get the cycle in Figure \ref{fig:cycleA0} from the one in Figure \ref{fig:cycle}, recalling that we denote vertices of $\mathcal{G}_{{\mathcal A}_0,{\mathsf{last}}}$ only by hyperplanes $H$ omitting points in $H$ as they are always of the form ${\mathsf{last}}(H)$. \endproof \begin{lemma}\label{propmono3last} Let ${\mathcal A}$ in $\mathbb{R}^2$ be a sharp arrangement such that $${\mathsf{last}}(H_{m(P_i)-1}^{P_i}) \neq {\mathsf{last}}(H_2^{P_j}), \quad 0 \leq i <j $$ holds for hyperplanes in ${\mathcal A}_{0}$. If there exists a cycle in $\mathcal{G}_{{\mathcal A}_0,{\mathsf{last}}}$ as in Figure \ref{fig:cycleA0}, then it involves at least a vertex $H_h^{P_i}$ with $m(P_i)=3,$ i.e. $\mathcal{G}_{{\mathcal A}_{(0,3)},{\mathsf{last}}}$ doesn't contain any cycle and it is an oriented forest. 
\end{lemma} \proof Let us consider three hyperplanes $H_h^{P_j} \lhd H_2^{P_i} \lhd H_2^{P_k}$ in ${\mathcal A}_0$ with $0 \leq j< i<k$ connected as in Figure \ref{fig:cycleA0} by edges $[H_h^{P_j},H_2^{P_i},\lhd],[H_2^{P_i},H_2^{P_k},\lhd]$ that is, by Remark \ref{rk1}, $H_2^{P_k}\in S({\mathsf{last}}(H_2^{P_i}))$ and $H_2^{P_i}\in S({\mathsf{last}}(H_h^{P_j})).$ It is an easy geometric remark that, on sharp arrangements, this configuration forces ${\mathsf{last}}(H_2^{P_i})={\mathsf{last}}(H_h^{P_j})$ and it follows by Propositions \ref{prop2} and by our assumptions on ${\mathsf{last}}$ that $h=2$ and $m(P_j)=3$. \\ Hence the only cases left are the cycles with no $2$ consecutive edges of the form $[H'',H', \lhd],[H',H,\lhd]$, that is, by Lemma \ref{lem1} (see Figure \ref{fig:cycleA0}), our cycle is composed by two edges: $$[H,H',\rhd] \mbox{ and }[H',H,\lhd] \quad \mbox{(case A)};$$ or four edges: $$[H,\widetilde{H},\rhd],[\widetilde{H},H',\lhd],[H',\overline{H},\rhd],[\overline{H},H,\lhd] \quad \mbox{(case B)};$$ or three edges: $$[H,\widetilde{H},\rhd],[\widetilde{H},H'],[H',H,\lhd],$$ where $\widetilde{H}=H_{m(P_{\widetilde{j}})-1}^{P_{\widetilde{j}}}=H_2^{P_{j'}}$ in Figure \ref{fig:cycleA0}, that is $P_{j'}=P_{\widetilde{j}}$ has multiplicity 3. Note that in the above cases $H=H_2^{P_j}$. Let us study cases A and B separately: \begin{enumerate} \item[A.] $H \in S({\mathsf{last}}(H'))$ that is, by Remark \ref{rk1}, $H'=H_{m(P_0)-1}^{P_0} \in S({\mathsf{last}}(H))$ or $H'=H_{m(P_i)-1}^{P_i} \in \bold{\widehat{U}}_{Max}({\mathsf{last}}(H)),\,0<i<j.$ The first configuration is an absurd since in this case ${\mathsf{last}}(H)={\mathsf{last}}(H').$ In the second configuration, the fact that $H'\in \mathbb{C}one({\mathsf{last}}(H))$ implies the existence of a line $H'' \lhd H'$ such that $H'' \in S({\mathsf{last}}(H)).$ It is obvious that $H''$ has to be parallel with $H'$ as $H \in {\mathsf{last}}(H')$, that is $m(P_i)-1=2$ and $m(P_i)=3.$ \item[B.] This case corresponds to the cycle in Figure \ref{fig:cycle4} and we have the following two cases: \begin{figure} \caption{Case B} \label{fig:cycle4} \end{figure} \begin{enumerate} \item $\overline{H}=H_{m(P_0)-1}^{P_0} \in S({\mathsf{last}}(H')).$ Then the egde $[\overline{H},H,\lhd]$ means that $H\in S({\mathsf{last}}(\overline{H})).$ By assumption we have that $H \rhd H'.$ Since ${\mathsf{last}}(\overline{H}) \rhd_= {\mathsf{last}}(H'),$ we easily see that either ${\mathsf{last}}(H')\in H$ and ${\mathsf{last}}(\overline{H})={\mathsf{last}}(H'),$ not possible by hypothesis, or $H\parallel H'$ and $m(H' \cap H_1^{P_0})=2$, i.e. $H' \notin {\mathcal A}_0.$ \item $\overline{H}=H_{m(P_i)-1}^{P_i} \in \bold{\widehat{U}}_{Max}({\mathsf{last}}(H')),\,0<i<j.$ The fact that $\overline{H}\in \mathbb{C}one({\mathsf{last}}(H'))$ implies the existence of a line $H'' \lhd \overline{H}$ such that $H'' \in S({\mathsf{last}}(H')).$ If $H'' \parallel \overline{H},$ then $m(P_i)-1=2$ and $m(P_i)=3.$ Otherwise, $H'' \cap \overline{H} \neq \emptyset$ and ${\mathsf{last}}(\overline{H}) \rhd_= H'' \cap \overline{H}.$ On the other hand, the edge $[\overline{H},H,\lhd]$ means that $H\in S({\mathsf{last}}(\overline{H}))$ and since $H' \lhd H$ we easily see that either $H \parallel H'$ and $H' \notin {\mathcal A}_0$ or $H \cap H' \rhd {\mathsf{last}}(H')$ along $H',$ wich is an absurd. 
\end{enumerate} \end{enumerate} \endproof Let us remark that in Proposition \ref{thm1} and Theorem \ref{thm1.1} we focused on the arrangement ${\mathcal A}_{(0,3)} \subset {\mathcal A}_0$ since our main goal is to show Conjecture \ref{conj:mon}. But above Lemma's are true more in general if condition (\ref{cond:last}) holds in ${\mathcal A}_0$. Hence our algorithm also provides a way to show a-monodromicity directly via the following result. \begin{proposition}\label{thm0}Let ${\mathcal A}$ in $\mathbb{R}^2$ be a sharp arrangement such that $${\mathsf{last}}(H_{m(P_i)-1}^{P_i}) \neq {\mathsf{last}}(H_2^{P_j}), \quad 0 \leq i <j $$ holds for hyperplanes in ${\mathcal A}_0$ and $\mathcal{G}_{{\mathcal A}_0,{\mathsf{last}}}$ doesn't have any cycle as the one in Figure \ref{fig:cycleA0}, then $\mathcal{G}_{{\mathcal A}_{0},{\mathsf{last}}}$ is an oriented forest and ${\mathcal A}$ is a-monodromic. \end{proposition} \begin{example}\label{example0} Let us consider the sharp arrangement ${\mathcal A}$ in $\mathbb{R}^2$ depicted in Figure \ref{fig:example0}. The cardinality of $\overline{{\mathcal A}}$ is 12 and ${\mathcal A}_0=\{H_2^{P_0},H_2^{P_4}\},$ since $m(P_1)=m(P_2)=m(P_3)=m(P_5)=2$ and $m(P_6)=4$ is coprime with $m(P_0)=3.$ Since ${\mathsf{last}}(H_2^{P_0}) \neq {\mathsf{last}}(H_2^{P_4})$ and the graph $\mathcal{G}_{{\mathcal A}_{0},{\mathsf{last}}}$ is composed of two non connected vertices, we have that ${\mathcal A}$ is a-monodromic by Proposition \ref{thm0}. \end{example} \begin{figure} \caption{${\mathcal A} \label{fig:example0} \end{figure} Let us now study the case in which we have ${\mathsf{last}}(H_{m(P_i)-1}^{P_i}) = {\mathsf{last}}(H_2^{P_j}),$ for certain $0 \leq i < j.$ We will consider a different subgraph: $\mathcal{G}_{{\mathcal A}'_0,{\mathsf{last}}, {\mathsf{min}}}$ involving different vertices. \subsection{The subgraph $\mathcal{G}_{{\mathcal A}'_0,{\mathsf{last}}, {\mathsf{min}}}$}\label{subgraph2} Analogously to the case of graph $\mathcal{G}_{{\mathcal A}_0,{\mathsf{last}}}$, as a first step we will simplify rows $H_k^{P_0}$ with the following Lemma. \begin{lemma}Rows $\{H^{P_0}_k\}_{2< k <m(P_0)-1}$ can be removed in the sense of Definition \ref{def:remove} without changing columns $c_H^{{\mathsf{last}}(H)}$, $H \in {\mathcal A}' \setminus \{H_2^{P_i}\}_{0 \leq i \leq |\mathcal{P}|}$, and columns $c_H^{{\mathsf{min}}(H)}$, $H \in \{H_2^{P_i}\}_{0 \leq i \leq |\mathcal{P}|}$. \end{lemma} \proof In order to simplify rows $H_k^{P_0},\, 2<k<m(P_0)-1,$ in the matrix $D(M),$ let us consider columns $c_{H_k^{P_0}}^{{\mathsf{last}}(H_k^{P_0})}$ defined in Notation \ref{not:goodcolumn}. It is clear that $e(H',c_{H_k^{P_0}}^{{\mathsf{last}}(H_k^{P_0})})=0$ for all $H'\lhd H_k^{P_0}.$ In order to remove the entries $e(H',c_{H_k^{P_0}}^{{\mathsf{last}}(H_k^{P_0})})\neq 0$ with $H'\rhd H_k^{P_0},$ we perform usual rows operations by using the entry $e(H_k^{P_0},c_{H_k^{P_0}}^{{\mathsf{last}}(H_k^{P_0})})=t^\alpha(1-t).$ By Remarks \ref{simplification1} and \ref{rksimplast} these row operations do not affect the other columns $c_H^{{\mathsf{last}}(H)},\,H\neq H_2^{P_i}$, and $c_{H_2^{P_i}}^{{\mathsf{min}}(H_2^{P_i})}$ of $D(M).$ Finally, we have that the last points ${\mathsf{last}}(H_k^{P_0}),\, 2<k<m(P_0)-1,$ are different from the ${\mathsf{last}}(H),\,H\neq H_2^{P_i},$ and the ${\mathsf{min}}(H_2^{P_i})$ (see Remark \ref{lastmininfty} 1.). \endproof For the sake of simplicity, in the rest of this section we will assume $m(P_0) > 3$. 
This choice is essentially made to avoid the case $H_{m(P_0)-1}^{P_0}=H_2^{P_0}$, in which additional considerations would be necessary in order to decide whether to consider the last or the min point along this line. This condition is not a strong one from our point of view, as our main goal is to prove Conjecture \ref{conj:mon}. Note that now the line $H_2^{P_0}$ cannot be removed anymore; hence we will deal with the arrangement ${\mathcal A}'_0$ defined in Notation \ref{not:arrnozero}. \begin{definition}\label{def:grafGLM} We define $\mathcal{G}_{{\mathcal A}'_0,{\mathsf{last}}, {\mathsf{min}}}$ as the subgraph of $\mathcal{G}_{{\mathcal A}'}$ whose vertices are of the form $(H_2^{P_i},{\mathsf{min}}(H_2^{P_i}))$, $ 0 \leq i \leq |\mathcal{P}|$, and $(H,{\mathsf{last}}(H))$ if $H \neq H_2^{P_i}$, $H \in {\mathcal A}'_0$. \end{definition} By definition of $\mathcal{G}_{{\mathcal A}'_0,{\mathsf{last}}, {\mathsf{min}}}$ it follows that its subgraph involving the vertices $(H,{\mathsf{last}}(H))$ with $H \neq H_2^{P_i}$, $H \in {\mathcal A}'_0$, is a subgraph of $\mathcal{G}_{{\mathcal A}_0,{\mathsf{last}}}$. Then we only need to study the edges connecting the new vertices $(H_2^{P_i},{\mathsf{min}}(H_2^{P_i}))$, $ 0 \leq i \leq |\mathcal{P}|$. \begin{remark}\label{rklasth2} Let $H_k^{P_j} \in {\mathcal A}'$ be such that $k \neq 2.$ Then the following are easy geometric remarks on sharp arrangements: \begin{enumerate} \item if $H_2^{P_0}\in S({\mathsf{last}}(H_k^{P_j})),$ then $m(P_0)=3$; \item $H_2^{P_0}\in \bold{\widehat{U}}({\mathsf{last}}(H_k^{P_j})) \Leftrightarrow H_{m(P_0)-1}^{P_0} \in S({\mathsf{last}}(H_k^{P_j})).$ \end{enumerate} If $H_2^{P_0} \neq H_{m(P_0)-1}^{P_0}$, i.e. $m(P_0) >3$, by Definition \ref{def:Uppmax} of $\bold{\widehat{U}}_{Max}({\mathsf{last}}(H_k^{P_j}))$ it follows: $$H_2^{P_0} \notin S({\mathsf{last}}(H_k^{P_j})) \cup \bold{\widehat{U}}_{Max}({\mathsf{last}}(H_k^{P_j})) \cup \mathcal{N}({\mathsf{last}}(H_k^{P_j})), \mbox{ for all } H_k^{P_j} \in {\mathcal A}'_0 .$$ \end{remark} \begin{remark}\label{rkmin}With the usual notations, for the points ${\mathsf{min}}(H), H \in {\mathcal A}'$, we have the following facts, which are consequences of ${\mathcal A}$ being a sharp arrangement: \begin{enumerate} \item if $H_h^{P_i} \in S({\mathsf{min}}(H_2^{P_j}))$ then $\left \{ \begin{array}{cc} h=2 & \mbox{ if } i<j \\ h=m(P_i) & \mbox{ if } i>j \\ \end{array} \right .$\\ that is, as $H_{m(P_i)}^{P_i} \notin {\mathcal A}'$, this corresponds to edges $E_7=[H_2^{P_j},H_2^{P_i},\rhd]$ (i.e. $j>i \geq 0$). \item If $H_h^{P_i}\in \bold{\widehat{U}}_{Max}({\mathsf{min}}(H_2^{P_j}))$ then $j<i$ and $h=m(P_i)$, i.e. $H_{h}^{P_i} \notin {\mathcal A}'$, or $h=m(P_i)-1$ and $H_{m(P_i)}^{P_i}\in S({\mathsf{min}}(H_2^{P_j})).$ The corresponding edges are $E_{8}=[H_2^{P_j},H_{m(P_i)-1}^{P_i},\lhd]$ (i.e. $0 \leq j <i$ ). \end{enumerate} \end{remark} \begin{figure} \caption{Edges in $\mathcal{G}_{{\mathcal A}'_0,{\mathsf{last}},{\mathsf{min}}}$} \label{fig:edge_glmin} \end{figure} By the previous remarks and by Remark \ref{rk1} we have that all possible edges in $\mathcal{G}_{{\mathcal A}'_0,{\mathsf{last}}, {\mathsf{min}}}$ are the ones depicted in Figure \ref{fig:edge_glmin}. Let us remark that, with respect to the subgraph $\mathcal{G}_{{\mathcal A}_0,{\mathsf{last}}}$, new edges of the form $E_7$ and $E_8$ appeared (see Figure \ref{fig:edge_glast}) while $E_2$ and $E_6$ disappeared.
The latter follows from the fact that smaller lines of type $H_{m(P_i)-1}^{P_i}$ and $H_{m(P_i)-2}^{P_i}$ can be neither in $S({\mathsf{min}}(H_2^{P_j}))$ nor in $\bold{\widehat{U}}_{Max}({\mathsf{min}}(H_2^{P_j}))$ for $j>i$, for obvious geometric reasons, since ${\mathcal A}$ is sharp. For the same reason, edges of type $E_1$, $E_3$ and $E_5$ require the condition $k \neq 2$ (otherwise we would have points of multiplicity 2 or 3). We also have the following two easy facts: \begin{enumerate} \item if ${\mathsf{last}}(H_h^{P_i})={\mathsf{min}}(H_2^{P_j}),$ then $H_h^{P_i}\in S({\mathsf{min}}(H_2^{P_j}))$ and by Remark \ref{rkmin} we have that $h=2$ and $i<j$, the case $h=m(P_i)$ being impossible since $H_h^{P_i}\in {\mathcal A}'$. In particular, if ${\mathsf{last}}(H_{m(P_0)-1}^{P_0})={\mathsf{min}}(H_2^{P_j}),$ then $m(P_0)=3;$ \item if ${\mathsf{min}}(H_2^{P_i})={\mathsf{min}}(H_2^{P_j}),$ then \begin{enumerate} \item $i<j \Rightarrow m(P_j)=2$ and $H_2^{P_j}\notin {\mathcal A}'$, \item $j<i \Rightarrow m(P_i)=2$ and $H_2^{P_i}\notin {\mathcal A}'$; \end{enumerate} \end{enumerate} In the rest of this subsection, a theorem, a corollary, a lemma and a proposition analogous to those in the previous subsection are stated and proved. Define \begin{equation}\label{eq:A034} {\mathcal A}'_{(0,3,4)}= {\mathcal A}'_0 \backslash \{H_h^{P_i}\,|\,P_i\in \mathcal{P} \,\text{and}\,m(P_i)\in\{3,4\}\}; \end{equation} then the following result holds. \begin{proposition}\label{thm2} Let ${\mathcal A}$ in $\mathbb{R}^2$ be a sharp arrangement. If $\mathcal{G}_{{\mathcal A}'_{(0,3,4)},{\mathsf{last}},{\mathsf{min}}}$ does not contain any cycle of length $l$ as the one in Figure \ref{fig:cycle} such that \begin{itemize} \item $\overline{H}\rhd H'$ and $l$ is odd \item $\overline{H}\lhd H'$ and $l$ is even \end{itemize} then the matrix associated to the graph $\mathcal{G}_{{\mathcal A}'_{(0,3,4)},{\mathsf{last}},{\mathsf{min}}}$ can be diagonalized in ${\mathsf{diag}}(1-t).$ \end{proposition} The following main theorem, stated in the Introduction, follows from the previous proposition by Remarks \ref{lastmininfty} and \ref{rkamono}. \begin{theorem}\label{thm2.1} Let ${\mathcal A}$ in $\mathbb{R}^2$ be a sharp arrangement. If $\mathcal{G}_{{\mathcal A}'_{(0,3,4)},{\mathsf{last}},{\mathsf{min}}}$ does not contain any cycle of length $l$ as the one in Figure \ref{fig:cycle} such that: \begin{itemize} \item $\overline{H}\rhd H'$ and $l$ is odd \item $\overline{H}\lhd H'$ and $l$ is even \end{itemize} then ${\mathcal A}$ is a-, 3- or 4-monodromic. \end{theorem} When the assumptions of Theorem \ref{thm2.1} are satisfied, we can deduce, as in the previous section, the a-monodromicity of ${\mathcal A}$ from the connectivity of $\Gamma({\mathcal A}).$ \begin{corollary} Let ${\mathcal A}$ in $\mathbb{R}^2$ be a sharp arrangement. Assume $\mathcal{G}_{{\mathcal A}'_{(0,3,4)},{\mathsf{last}},{\mathsf{min}}}$ does not contain any cycle of length $l$ as the one in Figure \ref{fig:cycle} such that: \begin{itemize} \item $\overline{H}\rhd H'$ if $l$ is odd \item $\overline{H}\lhd H'$ if $l$ is even. \end{itemize} Then, if $\Gamma({\mathcal A})$ is connected, ${\mathcal A}$ is a-monodromic. \end{corollary} The rest of this section is devoted to proving our main result, Proposition \ref{thm2}, which is a consequence of Lemmas \ref{lem2} and \ref{prop4}. \begin{lemma}\label{lem2} Let ${\mathcal A}$ in $\mathbb{R}^2$ be a sharp arrangement.
If $H_h^{P_i} \in {\mathcal A}_0'$ is a vertex in a cycle of $\mathcal{G}_{{\mathcal A}'_0,{\mathsf{last}}, {\mathsf{min}}}$ as the one in Figure \ref{fig:cycle}, then $h=2 \,\text{or}\,\, m(P_i)-1\,\text{or}\,\,m(P_i)-2,$ the latter being only possible if $i>0.$ More precisely, we have cycles as the one in Figure \ref{fig:cyclastmin}. \end{lemma} \begin{figure} \caption{Cycles in $\mathcal{G}_{{\mathcal A}'_0,{\mathsf{last}},{\mathsf{min}}}$} \label{fig:cyclastmin} \end{figure} \proof By Remarks \ref{rk1} and \ref{rkmin} (see also Figure \ref{fig:edge_glmin}), we know that edges $[H,H',\lhd]$ in $\mathcal{G}_{{\mathcal A}'_0,{\mathsf{last}},{\mathsf{min}}}$ are of the form $E_1$ and $E_{8}$. It follows that the vertices $H, H' $ and $H_{i_j}$, $ 1 \leq j \leq k+s$, have to be of type $H_2^{P_i}$ or $H_{m(P_i)-1}^{P_i}$ and, moreover, the type alternates if $m(P_i)\neq3$, i.e. two adjacent vertices have to be of different type. By Remarks \ref{rk1} and \ref{rkmin} one can also assess the exact values of $\overline{H}$ and $\widetilde{H}$ depending on the types of $H$, $H'$ and the sign of the edge $[H,H']$ (see Figure \ref{fig:edge_glmin}). More precisely we have: \begin{enumerate} \item if $H=H_2^{P_{j_0}}$ then $\widetilde{H}=H_2^{P_j}$ (see $E_7$), corresponding to type $H_{m(P_{j_{k+s}})-1}^{P_{j_{k+s}}}$ for the subsequent vertex in the cycle in Figure \ref{fig:cyclastmin} (see $E_8$); \item if $H=H_{m(P_{j_0})-1}^{P_{j_0}}$ then $\widetilde{H}=H_{m(P_{j_0})-2}^{P_{j_0}}$ (see $E_4$) or $\widetilde{H}=H_{m(P_j)-1}^{P_j}$ (see $E_3$ and $E_5$), both corresponding to type $H_2^{P_{j_{k+s}}}$ for the subsequent vertex in the cycle in Figure \ref{fig:cyclastmin} (see $E_1$); \item if $H' \rhd \overline{H}$ then, as $H'=H_2^{P_{j'}}$ or $H'=H_{m(P_{j'})-1}^{P_{j'}}$, the previous points 1. and 2. respectively apply with $H'$ instead of $H$ and $\overline{H}$ instead of $\widetilde{H}$; \item if $H' \lhd \overline{H}$ then the same alternating rule as for the other vertices applies (see $E_1$ and $E_8$), that is, if $H'=H_2^{P_{j'}}$ ($H_{m(P_{j'})-1}^{P_{j'}}$) then $\overline{H}=H_{m(P_{i})-1}^{P_{i}}$ ($H_2^{P_{i}}$), \end{enumerate} and we get cycles as the one in Figure \ref{fig:cyclastmin}. \endproof \begin{figure} \caption{Subgraphs $\gamma_1$ and $\gamma_2$ of the cycle $\gamma$ in $\mathcal{G}_{{\mathcal A}'_0,{\mathsf{last}},{\mathsf{min}}}$} \label{fig:gammas} \end{figure} \begin{remark}\label{rem:lami}Let us remark that in the cycle $\gamma$ in Figure \ref{fig:cyclastmin} the type of $H'$ and of $H_{h}^{P_{j_k}}$ ($h=m_{j_k}=m(P_{j_k})-1$ or $h=2$) depends, respectively, on the length of the paths $\gamma_2$ and $\gamma_1$ in Figure \ref{fig:gammas}. In particular, if $H=H_2^{P_{j_0}} (H_{m(P_{j_0})-1}^{P_{j_0}})$ and $l_i$ is the length of $\gamma_i$, we have: \begin{enumerate} \item if $l_2=2h$ then $H'=H_2^{P_{j'}} (H_{m(P_{j'})-1}^{P_{j'}})$ and if $l_1=2h$ then $H_{\bullet}^{P_{j_k}}=H_{m(P_{j_k})-1}^{P_{j_k}}(H_2^{P_{j_k}})$; \item if $l_2=2h-1$ then $H'=H_{m(P_{j'})-1}^{P_{j'}}(H_2^{P_{j'}})$ and if $l_1=2h-1$ then $H_{\bullet}^{P_{j_k}}=H_2^{P_{j_k}}(H_{m(P_{j_k})-1}^{P_{j_k}})$. \end{enumerate} By Lemma \ref{lem2} we also know that \begin{enumerate} \item[3.] if $H' \rhd \overline{H}$, $H'=H_2^{P_{j'}}(H_{m(P_{j'})-1}^{P_{j'}})$, then $\overline{H}=H_2^{P_{i}}(H_{m(P_{i})-1}^{P_{i}}$ or $H_{m(P_{i})-2}^{P_{i}})$; \item[4.] if $H' \lhd \overline{H}$ then the same alternating rule as for the other vertices applies, that is, if $H'=H_2^{P_{j'}}(H_{m(P_{j'})-1}^{P_{j'}})$ then $\overline{H}=H_{m(P_{i})-1}^{P_{i}} (H_2^{P_{i}})$.
\end{enumerate} Finally, remark that, by construction, if $\gamma$, $\gamma_1$ and $\gamma_2$ have lengths, respectively, $l, l_1$ and $l_2$, then $l=l_1+l_2+2$; that is, if $l$ is even then $l_1$ and $l_2$ are either both even or both odd, while if $l$ is odd then one of $l_1$, $l_2$ is odd and the other is even. \end{remark} We can now prove the following final result. \begin{lemma}\label{prop4} Let ${\mathcal A}$ in $\mathbb{R}^2$ be a sharp arrangement, $\gamma$ a cycle in $\mathcal{G}_{{\mathcal A}'_0,{\mathsf{last}}, {\mathsf{min}}}$ of length $l$. If \begin{itemize} \item[i)]$\overline{H}\rhd H'$ and $l=2h$ or \item[ii)]$\overline{H}\lhd H'$ and $l=2h+1$ \end{itemize} then $\gamma$ contains a vertex $H_{\bullet}^{P_i}$ such that $m(P_i)\in \{3,4\}.$ \end{lemma} \proof By Lemma \ref{lem2} and Remark \ref{rem:lami} we know the vertices involved in the cycle $\gamma$ (see Figure \ref{fig:cyclastmin}). Let $l_{(\rhd,<)}$ be the number of edges in $\gamma$ with direction and sign opposite (as in Figure \ref{fig:use}). \begin{figure} \caption{\footnotesize{Edges with direction and sign opposite}} \label{fig:use} \end{figure} Remark that $l=l_{(\rhd,<)}+1$ if $\overline{H}\rhd H'$ and $l=l_{(\rhd,<)}+2$ if $\overline{H}\lhd H'$. \begin{enumerate} \item[i)] Case $\overline{H}\rhd H'$ and $l=2h$, that is, $l_{(\rhd,<)}=2h-1$. If $H=H_{2}^{P_{j_0}}(H_{m(P_{j_0})-1}^{P_{j_0}})$ then, by the alternating rule of edges, going backwards along the cycle, $\widetilde{H}=H_{m(P_j)-1}^{P_j}(H_{2}^{P_j})$ (as $l_{(\rhd,<)}$ is odd), but on the other hand, considering that $[H,\widetilde{H}]$ is of type $E_7$ ($E_4, E_5$ or $E_3$), we have $\widetilde{H}=H_{2}^{P_j} (H_{m(P_j)-2}^{P_j}$ or $\widetilde{H}= H_{m(P_j)-1}^{P_j})$ (see Figure \ref{fig:cyclastmin}). Hence $m(P_j)=3$ or $4$.\\ \item[ii)] Case $l=2h+1$ and $\overline{H}\lhd H'$. In this case the cycle $\gamma$ is divided into the two disjoint subgraphs $\gamma_1$ and $\gamma_2$ by removing the edges $[\overline{H},H',\lhd]$ and $[H,\widetilde{H},\lhd]$ (see Figure \ref{fig:gammas}). By Remark \ref{rem:lami}, since $l$ is odd, if the length of $\gamma_1$ is even then the length of $\gamma_2$ has to be odd and vice versa. Assume that $l_1=2h_1-1$ and $l_2=2h_2$; the converse is similar. By Remark \ref{rem:lami} 1. and 2., if $H=H_2^{P_{j_0}}(H_{m(P_{j_0})-1}^{P_{j_0}})$ then $l_1=2h_1-1$ implies $H_{\bullet}^{P_{j_k}}=H_2^{P_{j_k}}(H_{m(P_{j_k})-1}^{P_{j_k}})$ and $l_2=2h_2$ implies $H'=H_2^{P_{j'}}(H_{m(P_{j'})-1}^{P_{j'}})$. The latter implies, by Remark \ref{rem:lami} 3, $\overline{H}=H_2^{P_{i}}(H_{m(P_{i})-1}^{P_{i}}$ or $H_{m(P_{i})-2}^{P_{i}})$. Then, since all possible edges in $\gamma$ are the ones shown in Figure \ref{fig:edge_glmin}, it follows that $m(P_{j_k})=3$ ($m(P_{j_k})=3$ or $m(P_i)=3,4$). \end{enumerate} \endproof Let us remark that, analogously to the previous section, in Proposition \ref{thm2} and Theorem \ref{thm2.1} we focused on the arrangement ${\mathcal A}'_{(0,3,4)} \subset {\mathcal A}'_0$, since our main goal is to show Conjecture \ref{conj:mon}. But the above lemmas hold more generally in ${\mathcal A}'_0$. Hence our algorithm also provides a way to show a-monodromicity directly via the following result. \begin{proposition}\label{thm0.2}Let ${\mathcal A}$ in $\mathbb{R}^2$ be a sharp arrangement.
If $\mathcal{G}_{{\mathcal A}'_0,{\mathsf{last}},{\mathsf{min}}}$ does not contain any cycle of length $l$ as the one in Figure \ref{fig:cyclastmin} such that: \begin{itemize} \item $\overline{H}\rhd H'$ if $l$ is odd \item $\overline{H}\lhd H'$ if $l$ is even, \end{itemize} then $\mathcal{G}_{{\mathcal A}'_{0},{\mathsf{last}},{\mathsf{min}}}$ is an oriented forest and ${\mathcal A}$ is a-monodromic. \end{proposition} Example \ref{example6} shows that our algorithm is non-trivial, that is, it proves the a-monodromicity of arrangements for which other known results and algorithms cannot provide answers. \section{Examples and Applications} \label{ex} In this section we will present a couple of interesting examples showing how our algorithm can be applied to study the monodromy of line arrangements. In particular, we will also study the case of simplicial arrangements. \begin{example}\label{example4} Figure \ref{fig:example3} (respectively \ref{fig:example4}) corresponds to the same sharp arrangement ${\mathcal A}$ with the choice of polar coordinate system $(V_0,V_1)$ as in Remark \ref{rem:diffpol} 1.i. (respectively Remark \ref{rem:diffpol} 1.ii.). This arrangement satisfies: \begin{enumerate} \item $m(P_0)=4$ divides $|\overline{{\mathcal A}}|=12;$ \item any line of $\overline{{\mathcal A}}$ contains at least two intersection points in $\mathbb{P}^2_\mathbb{R}$ of multiplicity $4;$ \item any band of parallel lines in ${\mathcal A}$ is $4$-resonant (the band includes two unbounded chambers which are separated by $8$ hyperplanes), see \cite{Yoshi2,Yoshi3} for the definitions of band and $k$-resonance introduced by Yoshinaga; \item the graph of double points $\Gamma({\mathcal A})$ is not connected, see $H_2^{P_3}$ in Figure \ref{fig:example3}. \end{enumerate} Since $m(P_0)$ and $m(P_3)$ are coprime and all the other points in $\mathcal{P}$ have multiplicity $2$ or $3,$ $${\mathcal A}'=\{H_3^{P_0},H_2^{P_0},H_2^{P_1},H_3^{P_1}\} = \{\widetilde{H}_3^{P_0},\widetilde{H}_2^{P_0},\widetilde{H}_2^{P_5},\widetilde{H}_3^{P_5}\}.$$ In Figure \ref{fig:example3}, ${\mathsf{last}}(H_2^{P_1})={\mathsf{last}}(H_3^{P_0})$ and we consider $\mathcal{G}_{{\mathcal A}_0',{\mathsf{last}},{\mathsf{min}}},$ where ${\mathcal A}_0'={\mathcal A}',$ which contains the following cycle: \begin{center} \begin{tikzpicture} \tikzset{vertex/.style = {shape=rectangle,minimum size=0.1em}} \tikzset{edge/.style = {->,> = latex'}} \node[vertex] (a) at (0,0) {$H_3^{P_0}$}; \node[vertex] (b) at (3,0) {$H_3^{P_1}$}; \node[vertex] (c) at (1.5,-2) {$H_2^{P_0}$}; \draw[edge] (b.west) to[bend right] (a.east); \draw[edge] (a.south) to[bend right] (c.west); \draw[edge] (c.east) to[bend right] (b.south); \node[vertex] (d) at (1.5,0.5) {$\lhd$}; \node[vertex] (e) at (-0.3,-1) {$\Delta$}; \node[vertex] (f) at (3.3,-1) {$\nabla$}; \end{tikzpicture} \end{center} and no conclusion is possible.
On the other hand, if we consider the polar coordinate system in Figure \ref{fig:example4}, the last points of the lines in ${\mathcal A}'$ are all different and the a-monodromicity of ${\mathcal A}$ follows directly from Theorem \ref{thm1.1}, since the only non-trivial monodromy that can appear has order $3$, which is coprime with $m(P_0).$ \end{example} \begin{figure} \caption{$(V_0,V_1)$ as in Remark \ref{rem:diffpol} 1.i.} \label{fig:example3} \end{figure} \begin{figure} \caption{$(V_0,V_1)$ as in Remark \ref{rem:diffpol} 1.ii.} \label{fig:example4} \end{figure} \begin{remark} It is also possible to prove a-monodromicity for the arrangement in Example \ref{example4} by using \cite[Theorem 3.23, Corollary 3.24]{Yoshi2}. \end{remark} \begin{example}[Simplicial arrangements]\label{ex:18lines} An arrangement ${\mathcal A}$ in $\mathbb{R}^2$ is called \textit{simplicial} if each chamber of $\overline{{\mathcal A}}$ in $\mathbb{P}_{\mathbb{R}}^2$ is a triangle. Gr\"{u}nbaum in \cite{Gru} presents a catalogue of known simplicial arrangements up to 37 lines (see \cite{Cuntz} for additional information). In \cite{Yoshi2} Yoshinaga uses his algorithm to study the monodromy of the simplicial arrangement ${\mathcal A}(6m,1)$ obtained by taking the $3m$ lines determined by the sides of a $3m$-gon together with the $3m$ lines of symmetry of that $3m$-gon. He proved that it is $3$-monodromic (or \textit{pure-tone} using Yoshinaga's definition). Yoshinaga also conjectured that those are the only simplicial arrangements with non-trivial monodromy and that, more generally, if non-trivial monodromy appears in a simplicial arrangement, it can only have order $3$. It is part of a work in progress to prove that if a simplicial arrangement contains three hyperplanes $H,H', H''$ such that $(H,H')$ and $(H',H'')$ are sharp pairs, then the only non-trivial monodromy that can appear has order $3$. In the following we give an example of how our algorithm reduces the computational difficulty of studying a-monodromicity in simplicial arrangements. \begin{figure} \caption{The simplicial arrangement $\overline{{\mathcal A}}$ of Example \ref{ex:18lines}} \label{fig:18lines} \end{figure} In the simplicial arrangement depicted in Figure \ref{fig:18lines} (known to be a-monodromic), $(H_1^{P_0},H_\infty)$ is a sharp pair of lines, since there are no intersection points between them. With this choice of sharp pair we get that the multiplicities $m(P_0)=3$ and $m(P_2)=m(P_4)=m(P_6)=4$ are coprime and $P_3$ and $P_5$ are double points. Hence the set ${\mathcal A}_0={\mathcal A}'_0$ defined in Notation \ref{not:arrnozero} is $${\mathcal A}_0={\mathcal A}'_0 = \{H_2^{P_0},H_2^{P_1},H_2^{P_7}\}$$ and the study of the boundary matrix simply reduces to the study of a three-row matrix containing the three-column triangular submatrix (see Notation \ref{not:goodcolumn}) $$\bordermatrix{ &c_{H_2^{P_0}}^{{\mathsf{last}}(H_2^{P_0})}&c_{H_2^{P_1}}^{{\mathsf{min}}(H_2^{P_1})}&c_{H_2^{P_7}}^{{\mathsf{last}}(H_2^{P_7})}\cr & & & \cr H_2^{P_0}&t^{\alpha_0}(1-t)&0 & 0\cr H_2^{P_1}& * & t^{\alpha_1}(1-t)& 0\cr H_2^{P_7}& * & 0 & t^{\alpha_7}(1-t)\cr}$$ that is, ${\mathcal A}$ is a-monodromic. \end{example} \end{document}
\begin{document} \begin{frontmatter} \title{The Frobenius number for sequences of triangular and tetrahedral numbers} \author[AMRP]{A.M.~Robles-P\'erez\corref{cor1}\fnref{fn1}} \ead{[email protected]} \author[JCR]{J.C.~Rosales\fnref{fn1}} \ead{[email protected]} \cortext[cor1]{Corresponding author} \fntext[fn1]{Both authors are supported by the project MTM2014-55367-P, which is funded by Mi\-nis\-terio de Econom\'{\i}a y Competitividad and Fondo Europeo de Desarrollo Regional FEDER, and by the Junta de Andaluc\'{\i}a Grant Number FQM-343. The second author is also partially supported by the Junta de Andaluc\'{\i}a/Feder Grant Number FQM-5849.} \address[AMRP]{Departamento de Matem\'atica Aplicada, Facultad de Ciencias, Universidad de Granada, 18071-Granada, Spain.} \address[JCR]{Departamento de \'Algebra, Facultad de Ciencias, Universidad de Granada, 18071-Granada, Spain.} \begin{abstract} We compute the Frobenius number for sequences of triangular and tetrahedral numbers. In addition, we study some properties of the numerical semigroups associated to those sequences. \end{abstract} \begin{keyword} Frobenius number \sep triangular numbers \sep tetrahedral numbers \sep telescopic sequences \sep free numerical semigroups. \MSC[2010] 11D07 \end{keyword} \end{frontmatter} \section{Introduction} According to \cite{brauer}, Frobenius raised in his lectures the following question: given relatively prime positive integers $a_1,\ldots,a_n$, compute the largest natural number that is not representable as a non-negative integer linear combination of $a_1,\ldots,a_n$. Nowadays, it is known as the Frobenius (coin) problem. Moreover, the solution is called the Frobenius number of the set $\{a_1,\ldots,a_n\}$ and it is denoted by ${\mathrm F}(a_1, \ldots, a_n)$. It is well known (see \cite{sylvester2, sylvester3}) that ${\mathrm F}(a_1,a_2)=a_1a_2-a_1-a_2$. However, at present, the Frobenius problem is open for $n \geq 3$. More precisely, Curtis showed in \cite{curtis} that it is impossible to find a polynomial formula (that is, a finite set of polynomials) that computes the Frobenius number if $n=3$. In addition, Ram\'{\i}rez Alfons\'{\i}n proved in \cite{alfonsinNP} that this problem is NP-hard for $n$ variables. Many papers study particular cases (see \cite{alfonsin} for more details), especially when $\{a_1,\ldots,a_n\}$ is part of a ``classic'' integer sequence: arithmetic and almost arithmetic (\cite{brauer,roberts,lewin,selmer}), Fibonacci (\cite{marin-alfonsin-revuelta}), geometric (\cite{ong-ponomarenko}), Mersenne (\cite{mersenne}), repunit (\cite{repunit}), squares and cubes (\cite{squares-cubes,moscariello}), Thabit (\cite{thabit}), et cetera. For example, in \cite{brauer} Brauer proves that \begin{equation}\label{brauer1} {\mathrm F}(n,n+1,\ldots,n+k-1)=\Big(\Big\lfloor \frac{n-2}{k-1} \Big\rfloor +1 \Big)n-1, \end{equation} where, if $x\in{\mathbb R}$, then $\lfloor x \rfloor \in {\mathbb Z}$ and $\lfloor x \rfloor\leq x < \lfloor x \rfloor+1$. On the other hand, denoting by $a(n)={\mathrm F}\Big(\frac{n(n+1)}{2},\frac{(n+1)(n+2)}{2},\frac{(n+2)(n+3)}{2}\Big)$, for $n\in{\mathbb N} \setminus \{0\}$, C. Baker conjectured that (see \url{https://oeis.org/A069755/internal}) \begin{equation}\label{baker1} a(n) = \frac{-14 + 6(-1)^n + (3+9(-1)^n)n + 3(5+(-1)^n)n^2 + 6n^3}{8}; \end{equation} \begin{equation}\label{baker2} a(n) = \frac{6n^3 + 18n^2 + 12n - 8}{8}, \; \mbox{ for $n$ even}; \end{equation} \begin{equation}\label{baker3} a(n) = \frac{6n^3 + 12n^2 - 6n - 20}{8}, \; \mbox{ for $n$ odd}.
\end{equation} Let us observe that both of these examples are particular cases of sequences of combinatorial numbers (or binomial coefficients), that is, \begin{itemize} \item ${n \choose 1},{n+1 \choose 1},\ldots,{n+k-1 \choose 1}$ in the first case, \item ${n+1 \choose 2},{n+2 \choose 2},{n+3 \choose 2}$ in the second one. \end{itemize} Let us recall that ${n+1 \choose 2}$ is known as a \emph{triangular} (or \emph{triangle}) \emph{number} and that the \emph{tetrahedral numbers} correspond to ${n+2 \choose 3}$. These classes of numbers are precisely the aim of this paper. In order to achieve our purpose, we use a well-known formula by Johnson (\cite{johnson}): if $a_1,a_2,a_3$ are relatively prime numbers and $\gcd\{a_1,a_2\}=d$, then \begin{equation}\label{eq-johnson} {\mathrm F}(a_1, a_2, a_3) = d{\mathrm F}\Big(\frac{a_1}{d},\frac{a_2}{d},a_3\Big) + (d-1)a_3. \end{equation} In fact, we use the well-known generalization by Brauer and Shockley (\cite{brauer-shockley}): if $a_1,\ldots,a_n$ are relatively prime numbers and $d=\gcd\{a_1,\ldots,a_{n-1}\}$, then \begin{equation}\label{eq-brauer-shockley} {\mathrm F}(a_1,\ldots,a_n) = d{\mathrm F}\Big(\frac{a_1}{d},\ldots,\frac{a_{n-1}}{d},a_n\Big) + (d-1)a_n. \end{equation} An interesting situation in which to apply these formulae is that of telescopic sequences (\cite{kirfel-pellikaan}), which lead to free numerical semigroups; these were introduced by Bertin and Carbonne (\cite{bertin-carbonne-1,bertin-carbonne-2}) and previously used by Watanabe (\cite{watanabe}). Let us note that this idea does not coincide with the categorical concept of a free object. \begin{definition}\label{telescopic-sequence} Let $(a_1,\ldots,a_n)$ be a sequence of positive integers such that $\gcd\{a_1,\ldots,a_n\}=1$ (where $n\geq2$). Let $d_i=\gcd\{a_1,\ldots,a_i\}$ for $i=1,\ldots,n$. We say that $(a_1,\ldots,a_n)$ is a \emph{telescopic sequence} if $\frac{a_i}{d_i}$ is representable as a non-negative integer linear combination of $\frac{a_1}{d_{i-1}},\ldots,\frac{a_{i-1}}{d_{i-1}}$ for $i=2,\ldots,n$. \end{definition} Let us observe that, if $(a_1,\ldots,a_n)$ is a telescopic sequence, then the sequence $\big(\frac{a_1}{d_i},\ldots,\frac{a_i}{d_i}\big)$ is also telescopic for $i=2,\ldots,n-1$. Let $({\mathbb N},+)$ be the additive monoid of non-negative integers. We say that $S$ is a \emph{numerical semigroup} if it is an additive subsemigroup of ${\mathbb N}$ which satisfies $0\in S$ and ${\mathbb N} \setminus S$ is a finite set. Let $X=\{x_1,\ldots,x_n\}$ be a non-empty subset of ${\mathbb N} \setminus \{0\}$. We denote by $\langle X \rangle = \langle x_1,\ldots,x_n\rangle$ the monoid generated by $X$, that is, $$\langle X \rangle=\{\lambda_1x_1+\cdots+\lambda_nx_n \mid \lambda_1,\ldots,\lambda_n\in{\mathbb N}\}.$$ It is well known (see \cite{springer}) that every submonoid $S$ of $({\mathbb N},+)$ has a unique \emph{minimal system of generators}, that is, there exists a unique $X$ such that $S=\langle X \rangle$ and $S\not=\langle Y \rangle$ for any $Y\subsetneq X$. In addition, $X$ is a system of generators of a numerical semigroup if and only if $\gcd(X)=1$. Let $X=\{x_1,\ldots,x_n\}$ be the minimal system of generators of a numerical semigroup $S$. Then $n$ (that is, the cardinality of $X$) is called the \emph{embedding dimension} of $S$ and it is denoted by ${\mathrm e}(S)$. \begin{definition}\label{free-sg} We say that $S$ is a \emph{free numerical semigroup} if there exists a telescopic sequence $(a_1,\ldots,a_n)$ such that $S=\langle a_1,\ldots,a_n\rangle$.
\end{definition} Our purpose in this work is to take advantage of the notion of telescopic sequence in order to compute the Frobenius number associated to sequences of consecutive triangular (or tetrahedral) numbers. In order to achieve this purpose, let us observe two facts. \begin{enumerate} \item Two consecutive triangular numbers are not relatively prime (Lemma~\ref{lem1}). \item It is easy to check that, if $n\geq 6$, then $\Big( {n+1 \choose 2},{n+2 \choose 2},{n+3 \choose 2},{n+4 \choose 2} \Big)$ is a sequence of four consecutive (relatively prime) triangular numbers but it does not admit any permutation which is telescopic. \end{enumerate} Therefore, we have to limit our study to sequences of three consecutive triangular numbers. In the same way, we have to take sequences of four consecutive tetrahedral numbers. In addition, it is possible to apply our techniques (in a much more tedious study) to sequences of five consecutive combinatorial numbers ${n \choose m}$ with $m=4$ (see Remark~\ref{rem-13}). However, if $m\geq5$, it is not possible to use such tools because, in general, we do not have telescopic sequences (see Remark~\ref{rem-14}). Let us summarize the content of this work. In Section~\ref{tri-numbers} we compute the Frobenius number of three consecutive triangular numbers. In Section~\ref{tetra-numbers} we solve the analogous case for four consecutive tetrahedral numbers. In the last section, we show some results on numerical semigroups generated by three consecutive triangular numbers or four consecutive tetrahedral numbers, taking advantage of the fact that they are free numerical semigroups. Finally, let us point out that, in order to get a self-contained paper, we have included in Section~\ref{consequences} a preliminary subsection with background on minimal presentations, Ap\'ery sets, and Betti elements of a numerical semigroup. \section{Triangular numbers}\label{tri-numbers} Let us recall that a \emph{triangular number} (or \emph{triangle number}) is a positive integer which counts the number of dots composing an equilateral triangle. For example, in Figure~\ref{fig1} we show the first six triangular numbers. \begin{figure} \caption{The first six triangular numbers} \label{fig1} \end{figure} It is well known that the $n$th triangular number is given by the combinatorial number ${\mathrm T}_n={n+1 \choose 2}$. In order to compute the Frobenius number of a sequence of three triangular numbers, we need to determine if we have a sequence of relatively prime integers. First, we give a technical lemma. \begin{lemma}\label{lem1} We have that $$\gcd \{{\mathrm T}_n,{\mathrm T}_{n+1}\} = \left\{ \begin{array}{l} \frac{n+1}{2}, \mbox{ if $n$ is odd;} \\[2pt] n+1, \mbox{ if $n$ is even.} \end{array} \right.$$ \end{lemma} \begin{proof} If $n$ is odd, then we have that $$\gcd \{{\mathrm T}_n,{\mathrm T}_{n+1}\} = \gcd \left\{ \frac{n(n+1)}{2}, \frac{(n+1)(n+2)}{2} \right\} = \frac{n+1}{2} \gcd\{n,2\} = \frac{n+1}{2}.$$ On the other hand, if $n$ is even, then $$\gcd \{{\mathrm T}_n,{\mathrm T}_{n+1}\} = \gcd \left\{ \frac{n(n+1)}{2}, \frac{(n+1)(n+2)}{2} \right\} = (n+1) \gcd\left\{ \frac{n}{2}, 1 \right\} = n+1.$$ \end{proof} In the following lemma we show that three consecutive triangular numbers are always relatively prime.
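Before stating it, we record a small computational illustration of Lemma~\ref{lem1} and of the next lemma (an informal Python sketch, not part of the formal arguments; the helper names are ours), which checks both statements for the first few values of $n$.
\begin{verbatim}
from math import gcd

def T(n):
    # n-th triangular number, T_n = binomial(n+1, 2)
    return n * (n + 1) // 2

for n in range(1, 21):
    expected = (n + 1) // 2 if n % 2 == 1 else n + 1   # Lemma lem1
    assert gcd(T(n), T(n + 1)) == expected
    # three consecutive triangular numbers are relatively prime (Lemma lem2)
    assert gcd(gcd(T(n), T(n + 1)), T(n + 2)) == 1
print("checked n = 1, ..., 20")
\end{verbatim}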
\begin{lemma}\label{lem2} $\gcd \{{\mathrm T}_n,{\mathrm T}_{n+1},{\mathrm T}_{n+2}\}=1.$ \end{lemma} \begin{proof} By Lemma~\ref{lem1}, if $n$ is odd, then $$\gcd \{{\mathrm T}_n,{\mathrm T}_{n+1},{\mathrm T}_{n+2}\} = \gcd \big\{ \gcd \{ {\mathrm T}_n,{\mathrm T}_{n+1} \}, \gcd \{ {\mathrm T}_{n+1},{\mathrm T}_{n+2} \} \big\} =$$ $$\gcd \left\{ \frac{n+1}{2},n+2 \right\} = \gcd \left\{ \frac{n+1}{2}, \frac{n+1}{2} +1 \right\}=1.$$ The proof is similar if $n$ is even. Therefore, we omit it. \end{proof} In the next result, we show the key to obtain the answer to our question. \begin{proposition}\label{prop3} The sequences $({\mathrm T}_n,{\mathrm T}_{n+1},{\mathrm T}_{n+2})$ and $({\mathrm T}_{n+2},{\mathrm T}_{n+1},{\mathrm T}_n)$ are telescopic. \end{proposition} \begin{proof} Let $n$ be an odd integer. From Lemmas~\ref{lem1} and \ref{lem2}, $\gcd\{{\mathrm T}_n,{\mathrm T}_{n+1}\}=\frac{n+1}{2}$ and $\gcd\{{\mathrm T}_n,{\mathrm T}_{n+1},{\mathrm T}_{n+2}\}=1$. Now, it is obvious that $$\frac{{\mathrm T}_{n+2}}{1} = \frac{n+3}{2}(n+2) \in \left\langle \frac{{\mathrm T}_n}{\,\frac{n+1}{2}\,}, \frac{{\mathrm T}_{n+1}}{\,\frac{n+1}{2}\,} \right\rangle = \langle n,n+2 \rangle.$$ Therefore, $({\mathrm T}_n,{\mathrm T}_{n+1},{\mathrm T}_{n+2})$ is telescopic if $n$ is odd. Once again, from Lemmas~\ref{lem1} and \ref{lem2}, we have that $\gcd\{{\mathrm T}_{n+2},{\mathrm T}_{n+1}\}=n+2$ (observe that $n+1$ is even) and $\gcd\{{\mathrm T}_n,{\mathrm T}_{n+1},{\mathrm T}_{n+2}\}=1$. Then it is clear that $$\frac{{\mathrm T}_{n}}{1} = \frac{n}{2}(n+1) \in \left\langle \frac{{\mathrm T}_{n+2}}{\,n+2\,}, \frac{{\mathrm T}_{n+1}}{\,n+2\,} \right\rangle = \left\langle \frac{n+3}{2},\frac{n+1}{2} \right\rangle.$$ Thus, $({\mathrm T}_{n+2},{\mathrm T}_{n+1},{\mathrm T}_n)$ is telescopic if $n$ is odd. In a similar way, we can show that $({\mathrm T}_n,{\mathrm T}_{n+1},{\mathrm T}_{n+2})$ and $({\mathrm T}_{n+2},{\mathrm T}_{n+1},{\mathrm T}_n)$ are telescopic if $n$ is even. \end{proof} Now we are ready to give the main result of this section. \begin{proposition}\label{prop5} Let $n \in {\mathbb N} \setminus \{0\}$. Then $${\mathrm F}({\mathrm T}_n,{\mathrm T}_{n+1},{\mathrm T}_{n+2})=\left\{ \begin{array}{l} \frac{3n^3+6n^2-3n-10}{4}, \mbox{ if $n$ is odd;} \\[3pt] \frac{3n^3+9n^2+6n-4}{4}, \mbox{ if $n$ is even.} \end{array} \right.$$ Equivalently, \begin{equation}\label{eq-prop5} {\mathrm F}({\mathrm T}_n,{\mathrm T}_{n+1},{\mathrm T}_{n+2})= \left\lfloor \frac{n}{2} \right\rfloor ({\mathrm T}_n+{\mathrm T}_{n+1}+{\mathrm T}_{n+2}-1)-1. \end{equation} \end{proposition} \begin{proof} Let $n$ be an odd positive integer. From (\ref{eq-johnson}) (or (\ref{eq-brauer-shockley})) and the proof of Proposition~\ref{prop3}, we have that $${\mathrm F}({\mathrm T}_n,{\mathrm T}_{n+1},{\mathrm T}_{n+2})= \frac{n+1}{2} {\mathrm F} \left( \frac{{\mathrm T}_n}{\,\frac{n+1}{2}\,}, \frac{{\mathrm T}_{n+1}}{\,\frac{n+1}{2}\,}, {\mathrm T}_{n+2} \right) + \frac{n-1}{2}{\mathrm T}_{n+2}=$$ $$\frac{n+1}{2} {\mathrm F} \left( n, n+2 \right) + \frac{n-1}{2}\frac{(n+2)(n+3)}{2}$$ and, having in mind that ${\mathrm F} \left( n, n+2 \right)=n^2-2$, then the conclusion is obvious. On the other hand, the reasoning for even $n$ is similar. Finally, a straightforward computation leads to (\ref{eq-prop5}).
\end{proof} \section{Tetrahedral numbers}\label{tetra-numbers} Let us recall that a \emph{tetrahedral number} (or \emph{triangular pyramidal number}) is a positive integer which counts the number of balls composing a regular tetrahedron. The $n$th tetrahedral number is given by the combinatorial number $\mathrm{TH}_n={n+2 \choose 3}$. Thus, in Figure~\ref{fig2}, we see the pyramid (by layers) associated to the 5th tetrahedral number ($\mathrm{TH}_5=35$). \begin{figure} \caption{The tetrahedral number $\mathrm{TH}_5=35$} \label{fig2} \end{figure} In this section, our purpose is to compute the Frobenius number for a sequence of four consecutive tetrahedral numbers. We need a preliminary lemma with an immediate proof. \begin{lemma}\label{lem-aure1} Let $(a_1,a_2,\ldots,a_n)$ be a sequence of positive integers such that $d_1=\gcd\{a_1,a_2,\ldots,a_n\}$. If $d_2=\gcd\{a_2-a_1,\ldots,a_n-a_{n-1}\}$, then $d_1 | d_2$. In particular, if $d_2=1$, then $d_1=1$. \end{lemma} Now, let us see that four consecutive tetrahedral numbers are always relatively prime. \begin{lemma}\label{lem-aure2} $\gcd \{\mathrm{TH}_n,\mathrm{TH}_{n+1},\mathrm{TH}_{n+2},\mathrm{TH}_{n+3}\}=1.$ \end{lemma} \begin{proof} It is clear that $$(\mathrm{TH}_{n+1}-\mathrm{TH}_n, \mathrm{TH}_{n+2}-\mathrm{TH}_{n+1},\mathrm{TH}_{n+3}-\mathrm{TH}_{n+2})=({\mathrm T}_n, {\mathrm T}_{n+1}, {\mathrm T}_{n+2}).$$ Therefore, by applying Lemmas~\ref{lem2} and \ref{lem-aure1}, we have the conclusion. \end{proof} The following lemma has an easy proof too. So, we omit it. \begin{lemma}\label{lem-jc1} Let $n \in {\mathbb N}\setminus \{0\}$. \begin{enumerate} \item If $n=6k$, then $\gcd\left\{ \mathrm{TH}_n,\mathrm{TH}_{n+1} \right\} = (6k+1)(3k+1)$. \item If $n=6k+1$, then $\gcd\left\{ \mathrm{TH}_n,\mathrm{TH}_{n+1} \right\} = (3k+1)(2k+1)$. \item If $n=6k+2$, then $\gcd\left\{ \mathrm{TH}_n,\mathrm{TH}_{n+1} \right\} = (2k+1)(3k+2)$. \item If $n=6k+3$, then $\gcd\left\{ \mathrm{TH}_n,\mathrm{TH}_{n+1} \right\} = (3k+2)(6k+5)$. \item If $n=6k+4$, then $\gcd\left\{ \mathrm{TH}_n,\mathrm{TH}_{n+1} \right\} = (6k+5)(k+1)$. \item If $n=6k+5$, then $\gcd\left\{ \mathrm{TH}_n,\mathrm{TH}_{n+1} \right\} = (k+1)(6k+7)$. \end{enumerate} \end{lemma} In the next two results, we give the tools for getting the answer to our problem. \begin{proposition}\label{prop-aure3} The sequence $(\mathrm{TH}_n,\mathrm{TH}_{n+1},\mathrm{TH}_{n+2},\mathrm{TH}_{n+3})$ is telescopic if and only if $n\equiv r \bmod 6$ with $r\in\{0,1,2,3\}$. \end{proposition} \begin{proof} We are going to study the six possible cases $n=6k+r$ with $k \in {\mathbb N}$ and $r\in\{0,1,\ldots,5\}$. \begin{enumerate} \item Let $n=6k$. Since $\gcd\{\mathrm{TH}_n,\mathrm{TH}_{n+1},\mathrm{TH}_{n+2}\}$ is equal to $$\gcd\big\{ \gcd\{\mathrm{TH}_n,\mathrm{TH}_{n+1} \}, \gcd\{ \mathrm{TH}_{n+1},\mathrm{TH}_{n+2} \} \big\},$$ from items 1 and 2 of Lemma~\ref{lem-jc1}, we have $\gcd\{\mathrm{TH}_n,\mathrm{TH}_{n+1},\mathrm{TH}_{n+2}\} = \gcd \{ (6k+1)(3k+1), (3k+1)(2k+1) \} = 3k+1.$ Now, it is easy to check that $\mathrm{TH}_{n+3} = 0 \frac{\mathrm{TH}_n}{3k+1} + (3k+2) \frac{\mathrm{TH}_{n+1}}{3k+1} + 2 \frac{\mathrm{TH}_{n+2}}{3k+1}$. On the other hand, since $$\frac{(\mathrm{TH}_n,\mathrm{TH}_{n+1},\mathrm{TH}_{n+2})}{3k+1} = \big( 2k(6k+1),(6k+1)(2k+1),2(2k+1)(3k+2) \big),$$ $\gcd\{2k(6k+1),(6k+1)(2k+1)\}=6k+1$, and $$2(2k+1)(3k+2) = 0 \frac{2k(6k+1)}{6k+1} + 2(3k+2) \frac{(6k+1)(2k+1)}{6k+1},$$ we conclude that $(\mathrm{TH}_n,\mathrm{TH}_{n+1},\mathrm{TH}_{n+2},\mathrm{TH}_{n+3})$ is telescopic.
\item Having in mind items 2 and 3 of Lemma~\ref{lem-jc1}, if $n=6k+1$, then we get that $\gcd\{\mathrm{TH}_n,\mathrm{TH}_{n+1},\mathrm{TH}_{n+2}\}=2k+1$. In addition, $$\mathrm{TH}_{n+3} = 0 \frac{\mathrm{TH}_n}{2k+1} + 0 \frac{\mathrm{TH}_{n+1}}{2k+1} + 2(k+1) \frac{\mathrm{TH}_{n+2}}{2k+1}.$$ Since $\gcd \left\{ \frac{\mathrm{TH}_n}{2k+1}, \frac{\mathrm{TH}_{n+1}}{2k+1} \right\} = 3k+1$ and $$\frac{\mathrm{TH}_{n+2}}{2k+1} = (3k+2)\frac{\mathrm{TH}_n}{(2k+1)(3k+1)} + 2 \frac{\mathrm{TH}_{n+1}}{(2k+1)(3k+1)},$$ we have the result. \item For $n=6k+2$, we have that $\gcd\{\mathrm{TH}_n,\mathrm{TH}_{n+1},\mathrm{TH}_{n+2}\}=3k+2$, $$\mathrm{TH}_{n+3} = 0 \frac{\mathrm{TH}_n}{3k+2} + 3(k+1) \frac{\mathrm{TH}_{n+1}}{3k+2} + 2 \frac{\mathrm{TH}_{n+2}}{3k+2},$$ $\gcd \left\{ \frac{\mathrm{TH}_n}{3k+2}, \frac{\mathrm{TH}_{n+1}}{3k+2} \right\} = 2k+1$, and $$\frac{\mathrm{TH}_{n+2}}{3k+2} = 0\frac{\mathrm{TH}_n}{(3k+2)(2k+1)} + 2(k+1)\frac{\mathrm{TH}_{n+1}}{(3k+2)(2k+1)}.$$ \item For $n=6k+3$, we have that $\gcd\{\mathrm{TH}_n,\mathrm{TH}_{n+1},\mathrm{TH}_{n+2}\}=6k+5$, $$\mathrm{TH}_{n+3} = 0 \frac{\mathrm{TH}_n}{6k+5} + 0 \frac{\mathrm{TH}_{n+1}}{6k+5} + 2(3k+4) \frac{\mathrm{TH}_{n+2}}{6k+5},$$ $\gcd \left\{ \frac{\mathrm{TH}_n}{6k+5}, \frac{\mathrm{TH}_{n+1}}{6k+5} \right\} = 3k+2$, and $$\frac{\mathrm{TH}_{n+2}}{6k+5} = 3(k+1)\frac{\mathrm{TH}_n}{(6k+5)(3k+2)} + 2\frac{\mathrm{TH}_{n+1}}{(6k+5)(3k+2)}.$$ \item If $n=6k+4$, then $\gcd\{\mathrm{TH}_n,\mathrm{TH}_{n+1},\mathrm{TH}_{n+2}\}=k+1$. Let us suppose that there exist $\alpha,\beta,\gamma \in {\mathbb N}$ such that $\mathrm{TH}_{n+3} = \alpha \frac{\mathrm{TH}_n}{k+1} + \beta \frac{\mathrm{TH}_{n+1}}{k+1} + \gamma \frac{\mathrm{TH}_{n+2}}{k+1}$. Then $$(6k+7)\big((3k+4)(2k+3)-(6k+5)\beta - 2(3k+4)\gamma\big) = 2(3k+2)(6k+5)\alpha.$$ Since $\gcd\{6k+7,2\}=\gcd\{6k+7,3k+2\}=\gcd\{6k+7,6k+5\}=1$, then there exists $\tilde{\alpha} \in {\mathbb N}$ such that $\alpha=(6k+7)\tilde{\alpha}$. Therefore, $$(3k+4)(2k+3)-(6k+5)\beta - 2(3k+4)\gamma = 2(3k+2)(6k+5)\tilde\alpha$$ and, consequently, $$(3k+4) (2k+3-2\gamma) = (6k+5)\big(\beta+2(3k+2)\tilde\alpha\big).$$ Thus, since $\gcd\{3k+4,6k+5\}=1$, we conclude that $(6k+5) \mid (2k+3-2\gamma)$ (that is, $6k+5$ divides $2k+3-2\gamma$) and, thereby, $2k+3-2\gamma=0$. Now, having in mind that $\gamma$ is an integer and that $2k+3$ is odd, the equality $2k+3=2\gamma$ is not possible. That is, we have a contradiction. \item If $n=6k+5$, then $\gcd\{\mathrm{TH}_n,\mathrm{TH}_{n+1},\mathrm{TH}_{n+2}\}=6k+7$, $$\mathrm{TH}_{n+3} = 0 \frac{\mathrm{TH}_n}{6k+7} + 0 \frac{\mathrm{TH}_{n+1}}{6k+7} + 2(3k+5) \frac{\mathrm{TH}_{n+2}}{6k+7},$$ and $\gcd \left\{ \frac{\mathrm{TH}_n}{6k+7}, \frac{\mathrm{TH}_{n+1}}{6k+7} \right\} = k+1$. Let us suppose that there exist $\alpha,\beta \in {\mathbb N}$ such that $\frac{\mathrm{TH}_{n+2}}{6k+7} = \alpha\frac{\mathrm{TH}_n}{(6k+7)(k+1)} + \beta\frac{\mathrm{TH}_{n+1}}{(6k+7)(k+1)}$. In such a case, $$(3k+4)(2k+3) = (6k+5)\alpha + 2(3k+4)\beta.$$ Now, since $\gcd\{3k+4,6k+5\}=1$, then $\alpha=(3k+4)\tilde{\alpha}$ for some $\tilde{\alpha} \in {\mathbb N}$ and, consequently, $2k+3-2\beta=(6k+5)\tilde{\alpha}$, that is, $(6k+5) \mid (2k+3-2\beta)$. Reasoning as in the previous case, since $2k+3$ is odd, we get a contradiction once again. \qedhere \end{enumerate} \end{proof} Using the same techniques as in the previous proof, we have the next result.
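Before stating it, let us record a small computational illustration of Proposition~\ref{prop-aure3} and of the next result: the following informal Python sketch (ours, not part of the proofs; all helper names are hypothetical) tests the telescopic condition of Definition~\ref{telescopic-sequence} directly for small values of $n$.
\begin{verbatim}
from math import gcd
from functools import reduce

def TH(n):
    # n-th tetrahedral number, TH_n = binomial(n+2, 3)
    return n * (n + 1) * (n + 2) // 6

def representable(x, gens):
    # is x a non-negative integer combination of gens?  (simple DP)
    reach = [True] + [False] * x
    for v in range(1, x + 1):
        reach[v] = any(g <= v and reach[v - g] for g in gens)
    return reach[x]

def is_telescopic(seq):
    # Definition: a_i/d_i must be representable by a_1/d_{i-1}, ..., a_{i-1}/d_{i-1}
    assert reduce(gcd, seq) == 1
    d_prev = seq[0]
    for i in range(1, len(seq)):
        d_i = gcd(d_prev, seq[i])
        gens = [a // d_prev for a in seq[:i]]
        if not representable(seq[i] // d_i, gens):
            return False
        d_prev = d_i
    return True

for n in range(1, 13):
    s = [TH(n), TH(n + 1), TH(n + 2), TH(n + 3)]
    assert is_telescopic(s) == (n % 6 in {0, 1, 2, 3})        # Proposition prop-aure3
    assert is_telescopic(s[::-1]) == (n % 6 in {4, 5})        # next proposition
print("checked n = 1, ..., 12")
\end{verbatim}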
\begin{proposition}\label{prop-aure4} The sequence $(\mathrm{TH}_{n+3},\mathrm{TH}_{n+2},\mathrm{TH}_{n+1},\mathrm{TH}_n)$ is telescopic if and only if $n\equiv r \bmod 6$ with $r\in\{4,5\}$. \end{proposition} By combining Propositions~\ref{prop-aure3} and \ref{prop-aure4} with (\ref{eq-brauer-shockley}), it is clear that we can obtain the Frobenius number for every sequence $(\mathrm{TH}_n,\mathrm{TH}_{n+1},\mathrm{TH}_{n+2},\mathrm{TH}_{n+3})$. Thus we get the following result. \begin{proposition}\label{prop5b} Let $n \in {\mathbb N}\setminus \{0\}$. Then ${\mathrm F}\left( \mathrm{TH}_n,\mathrm{TH}_{n+1},\mathrm{TH}_{n+2},\mathrm{TH}_{n+3} \right) =$ \begin{enumerate} \item $\frac{n-3}{3}\mathrm{TH}_{n+1} + n\mathrm{TH}_{n+2} + \frac{n}{2}\mathrm{TH}_{n+3} - \mathrm{TH}_n$, if $n=6k$; \item $(n-1)\mathrm{TH}_{n+1} + \frac{n-1}{2}\mathrm{TH}_{n+2} + \frac{n-1}{3}\mathrm{TH}_{n+3} - \mathrm{TH}_n$, if $n=6k+1$; \item $(n-1)\mathrm{TH}_{n+1} + \frac{n-2}{3}\mathrm{TH}_{n+2} + \frac{n}{2}\mathrm{TH}_{n+3} - \mathrm{TH}_n$, if $n=6k+2$; \item $\frac{n-3}{3}\mathrm{TH}_{n+1} + \frac{n-1}{2}\mathrm{TH}_{n+2} + (n+1)\mathrm{TH}_{n+3} - \mathrm{TH}_n$, if $n=6k+3$; \item $\frac{n+2}{3}\mathrm{TH}_{n+2} + \frac{n+2}{2}\mathrm{TH}_{n+1} + (n+2)\mathrm{TH}_n - \mathrm{TH}_{n+3}$, if $n=6k+4$; \item $(n+4)\mathrm{TH}_{n+2} + \frac{n+1}{3}\mathrm{TH}_{n+1} + \frac{n+1}{2}\mathrm{TH}_n - \mathrm{TH}_{n+3}$, if $n=6k+5$. \end{enumerate} \end{proposition} \begin{remark}\label{rem-13} From the contents of this section and the previous one, it seems that the problem becomes longer and longer when we consider the sequences $\Big( {n+m-1 \choose m}, \ldots, {n+2m-1 \choose m} \Big)$ with $m$ increasing (and $n$ fixed). In any case, it is easy to see that, \begin{itemize} \item if $n\geq1$, then $\Big( {n+3 \choose 4}, {n+4 \choose 4}, {n+5 \choose 4}, {n+6 \choose 4}, {n+7 \choose 4} \Big)$ is a telescopic sequence if and only if $n\equiv x \bmod 6$ for $x \in \{0,1,2\}$; \item if $n\geq9$, then $\Big( {n+7 \choose 4}, {n+6 \choose 4}, {n+5 \choose 4}, {n+4 \choose 4}, {n+3 \choose 4} \Big)$ is telescopic if and only if $n\equiv x \bmod 6$ for $x \in \{3,4,5\}$; \item if $n\in\{3,4,5\}$, then both of the above sequences are telescopic. \end{itemize} Therefore, we can give a general formula for the Frobenius problem associated to five consecutive combinatorial numbers of the form ${n \choose 4}$. (In order to study this case, it is better to consider $n\equiv x \bmod 12$ for $x \in \{0,1,\ldots,11 \}$.) \end{remark} \begin{remark}\label{rem-14} Let $n,m$ be positive integers. At this moment, we could conjecture that the sequence $\Big( {n+m-1 \choose m}, \ldots, {n+2m-1 \choose m} \Big)$ is telescopic if and only if the sequence $\Big( {n+2m-1 \choose m}, \ldots, {n+m-1 \choose m} \Big)$ is not telescopic and, consequently, we would have an easy algorithmic process to compute ${\mathrm F}\Big( {n+m-1 \choose m}, \ldots, {n+2m-1 \choose m} \Big)$. Unfortunately, neither $\Big( {12 \choose 5}, \ldots, {17 \choose 5} \Big) = (792,1287,2002,3003,4368,6188)$ nor $(6188,4368,3003,2002,1287,792)$ is telescopic. In fact, no permutation of $(792,1287,2002,3003,4368,6188)$ is telescopic. \end{remark} \section{Consequences on numerical semigroups}\label{consequences} As we commented in the introduction, if $(a_1,\ldots,a_n)$ is a sequence of relatively prime positive integers, then the monoid $\langle a_1,\ldots,a_n \rangle$ is a numerical semigroup.
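In particular, membership in such a monoid can be tested effectively, which yields an independent sanity check of the formulas obtained above: the following informal Python sketch (ours, not part of the paper's arguments; all helper names are hypothetical) computes the Frobenius number by a direct search over residue classes modulo the smallest generator and compares it with Propositions~\ref{prop5} and \ref{prop5b} for small values of $n$.
\begin{verbatim}
from math import gcd
from functools import reduce

def T(n):  return n * (n + 1) // 2             # triangular numbers
def TH(n): return n * (n + 1) * (n + 2) // 6   # tetrahedral numbers

def frobenius(gens):
    # For m = gens[0], find the least element of <gens> in each residue
    # class mod m (Bellman-Ford style relaxation); then F = max - m.
    assert reduce(gcd, gens) == 1
    m = gens[0]
    INF = float('inf')
    w = [INF] * m
    w[0] = 0
    changed = True
    while changed:
        changed = False
        for r in range(m):
            if w[r] == INF:
                continue
            for g in gens[1:]:
                s = w[r] + g
                if s < w[s % m]:
                    w[s % m] = s
                    changed = True
    return max(w) - m

def formula_T(n):   # Proposition prop5, equation (eq-prop5)
    return (n // 2) * (T(n) + T(n + 1) + T(n + 2) - 1) - 1

def formula_TH(n):  # Proposition prop5b, case by case
    a, b, c, d = TH(n), TH(n + 1), TH(n + 2), TH(n + 3)
    r = n % 6
    if r == 0: return (n - 3) // 3 * b + n * c + n // 2 * d - a
    if r == 1: return (n - 1) * b + (n - 1) // 2 * c + (n - 1) // 3 * d - a
    if r == 2: return (n - 1) * b + (n - 2) // 3 * c + n // 2 * d - a
    if r == 3: return (n - 3) // 3 * b + (n - 1) // 2 * c + (n + 1) * d - a
    if r == 4: return (n + 2) // 3 * c + (n + 2) // 2 * b + (n + 2) * a - d
    return (n + 4) * c + (n + 1) // 3 * b + (n + 1) // 2 * a - d   # r == 5

for n in range(3, 12):
    assert frobenius([T(n), T(n + 1), T(n + 2)]) == formula_T(n)
    assert frobenius([TH(n), TH(n + 1), TH(n + 2), TH(n + 3)]) == formula_TH(n)
print("checked n = 3, ..., 11")
\end{verbatim}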
In this section we are interested in those numerical semigroups which are generated by three consecutive triangular numbers or by four consecutive tetrahedral numbers and have embedding dimension equal to three or four, respectively. In fact, having in mind that \begin{itemize} \item $({\mathrm T}_n,{\mathrm T}_{n+1},{\mathrm T}_{n+2})$ is always telescopic, \item either $(\mathrm{TH}_n,\mathrm{TH}_{n+1},\mathrm{TH}_{n+2},\mathrm{TH}_{n+3})$ or $(\mathrm{TH}_{n+3},\mathrm{TH}_{n+2},\mathrm{TH}_{n+1},\mathrm{TH}_n)$ is telescopic, \end{itemize} we can use the ideas of \cite[Chapter~8]{springer} to obtain several results for the numerical semigroups ${\mathcal T}_n=\langle {\mathrm T}_n,{\mathrm T}_{n+1},{\mathrm T}_{n+2} \rangle$ and $\mathcal{TH}_n=\langle \mathrm{TH}_n,\mathrm{TH}_{n+1},\mathrm{TH}_{n+2},\mathrm{TH}_{n+3} \rangle$. Firstly, we compute the embedding dimension of ${\mathcal T}_n$ and $\mathcal{TH}_n$. \begin{lemma}\label{lem-aure41} We have that ${\mathrm e}({\mathcal T}_1)=1$, ${\mathrm e}({\mathcal T}_2)=2$, ${\mathrm e}({\mathcal T}_n)=3$ for all $n\geq 3$, ${\mathrm e}(\mathcal{TH}_1)=1$, ${\mathrm e}(\mathcal{TH}_2) = {\mathrm e}(\mathcal{TH}_3)=3$, ${\mathrm e}(\mathcal{TH}_n)=4$ for all $n\geq 4$. \end{lemma} \begin{proof} It is obvious that \begin{itemize} \item ${\mathcal T}_1 = \langle 1,3,6 \rangle = \langle 1 \rangle = {\mathbb N}$; \item ${\mathcal T}_2 = \langle 3,6,10 \rangle = \langle 3,10 \rangle$; \item $\mathcal{TH}_1 = \langle 1,4,10,20 \rangle = \langle 1 \rangle = {\mathbb N}$; \item $\mathcal{TH}_2 = \langle 4,10,20,35 \rangle = \langle 4,10,35 \rangle$; \item $\mathcal{TH}_3 = \langle 10,20,35,56 \rangle = \langle 10,35,56 \rangle$. \end{itemize} Thereby, ${\mathrm e}({\mathcal T}_1)=1$, ${\mathrm e}({\mathcal T}_2)=2$, ${\mathrm e}(\mathcal{TH}_1)=1$, and ${\mathrm e}(\mathcal{TH}_2) = {\mathrm e}(\mathcal{TH}_3)=3$. Now let ${\mathcal T}_n = \langle \frac{n(n+1)}{2}, \frac{(n+1)(n+2)}{2}, \frac{(n+2)(n+3)}{2} \rangle$ with $n\geq 3$. Then, $\frac{n(n+1)}{2} < \frac{(n+1)(n+2)}{2} < \frac{(n+2)(n+3)}{2}$, $\frac{n(n+1)}{2} \nmid \frac{(n+1)(n+2)}{2}$ (that is, $\frac{n(n+1)}{2}$ does not divide $\frac{(n+1)(n+2)}{2}$) and $\frac{n(n+1)}{2} \nmid \frac{(n+2)(n+3)}{2}$. On the other hand, if we suppose that $\frac{(n+2)(n+3)}{2} \in \langle \frac{n(n+1)}{2}, \frac{(n+1)(n+2)}{2} \rangle$, then there exist $\alpha,\beta \in {\mathbb N}$ such that $\frac{(n+2)(n+3)}{2} = \alpha\frac{n(n+1)}{2} + \beta \frac{(n+1)(n+2)}{2}$. Consequently, $(n+2)(n+3) = \alpha n(n+1) + \beta (n+1)(n+2)$, that is, $n+1$ should be a divisor of $(n+2)(n+3)$, which is a contradiction. Thus, we conclude that $\left\{ \frac{n(n+1)}{2}, \frac{(n+1)(n+2)}{2}, \frac{(n+2)(n+3)}{2} \right\}$ is a minimal system of generators of ${\mathcal T}_n$ for all $n\geq 3$ and, in consequence, ${\mathrm e}({\mathcal T}_n)=3$ for all $n\geq 3$. By using similar arguments, we prove that ${\mathrm e}(\mathcal{TH}_n)=4$ for all $n\geq 4$. \end{proof} Since we want to consider general cases, throughout this section we are going to take ${\mathcal T}_n$ with $n\geq 3$ and $\mathcal{TH}_n$ with $n\geq4$. Thus, we always have that ${\mathrm e}({\mathcal T}_n)=3$ and ${\mathrm e}(\mathcal{TH}_n)=4$. The remaining five cases are left as exercises to the reader. \subsection{Preliminaries} The concepts developed below can be extended to more general frameworks. We refer to \cite{presentations} (and the references therein) for more details.
Let $S$ be a numerical semigroup and let $\{n_1,\ldots,n_e\}$ be the minimal system of generators of $S$. Then we define the \emph{factorization homomorphism associated to $S$} $$\varphi_S : {\mathbb N}^e \to S, \quad u=(u_1,\ldots,u_e) \mapsto \textstyle{\sum^e_{i=1}}u_in_i.$$ Let us observe that, if $s\in S$, then the cardinality of $\varphi_S^{-1}(s)$ is just the number of factorizations of $s$ in $S$. We define the \emph{kernel congruence of $\varphi_S$} on ${\mathbb N}^e$ as follows: $$u \sim_S v \mbox{ if } \varphi_S(u)=\varphi_S(v).$$ It is well known that $S$ is isomorphic to the monoid ${\mathbb N}^e/\!\sim_S$. Let $\rho \subseteq {\mathbb N}^e \times {\mathbb N}^e$. Then the intersection of all congruences containing $\rho$ is the so-called \emph{congruence generated by $\rho$}. On the other hand, if $\sim$ is the congruence generated by $\rho$, then we say that $\rho$ is a \emph{system of generators} of $\sim$. In this way, a \emph{presentation} of $S$ is a system of generators of $\sim_S$, and a \emph{minimal presentation} of $S$ is a minimal system of generators of $\sim_S$. Let us observe that, for numerical semigroups, the concepts of minimal presentation with respect to cardinality and with respect to set inclusion coincide (see \cite[Corollary~1.3]{algoritmo-relaciones} or \cite[Corollary~8.13]{springer}). Since every numerical semigroup $S$ is finitely generated, it follows that $S$ has a minimal presentation with finitely many elements, that is, $S$ is \emph{finitely presented} (see \cite{redei}). In addition, all minimal presentations of a numerical semigroup $S$ have the same cardinality (see \cite[Corollary~1.3]{algoritmo-relaciones} or \cite[Corollary~8.13]{springer}). Let us describe an algorithmic process (see \cite{algoritmo-relaciones} or \cite{springer}) to compute all the minimal presentations of $S$. If $s\in S$, then we define over $\varphi_S^{-1}(s)$ the binary relation ${\mathcal R}_s$ as follows: for $u=(u_1,\ldots,u_e),v=(v_1,\ldots,v_e) \in \varphi_S^{-1}(s)$, we say that $u {\mathcal R}_s v$ if there exists a chain $u_0,u_1,\ldots,u_r \in \varphi_S^{-1}(s)$ such that $u_0=u$, $u_r=v$, and $u_i\cdot u_{i+1}\not= 0$ for all $i\in \{0,\ldots,r-1\}$ (where $\cdot$ is the usual element-wise product of vectors). If $\varphi_S^{-1}(s)$ has a unique ${\mathcal R}_s$-class, then we take the set $\rho_s=\emptyset$. Otherwise, if ${\mathcal R}_{s,1},\ldots,{\mathcal R}_{s,l}$ are the different classes of $\varphi_S^{-1}(s)$, then we choose $v_i \in {\mathcal R}_{s,i}$ for all $i\in\{1,\ldots,l\}$ and consider $\rho_s$ to be any set of $l-1$ pairs of elements in $V=\{v_1,\ldots,v_l\}$ such that any two elements in $V$ are connected by a sequence of pairs in $\rho_s$ (or their symmetrics). Finally, the set $\rho = \cup_{s\in S} \rho_s$ is a minimal presentation of $S$. From the previous comments, it is clear that for each numerical semigroup $S$ there are finitely many elements $s\in S$ such that $\varphi_S^{-1}(s)$ has more than one ${\mathcal R}_s$-class. Precisely, such elements are known as the \emph{Betti elements} of $S$ (see \cite{gs-o}). \begin{remark}\label{rem-aure39} Let us observe that Betti elements are known from a minimal presentation. In fact, the Betti elements of a numerical semigroup $S$ can be computed as the evaluation of the elements of a minimal presentation of $S$. \end{remark} Let us define some constants which allow us to characterize free numerical semigroups and, moreover, to easily obtain minimal presentations of such numerical semigroups.
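Before introducing them, let us note that the algorithmic process just described is easy to implement for small inputs. The following informal Python sketch (ours, not part of the paper; the helper names are hypothetical) lists the factorizations of an element $s$ and counts its ${\mathcal R}_s$-classes, thus detecting Betti elements.
\begin{verbatim}
from itertools import product

def factorizations(s, gens):
    # all tuples (u_1,...,u_e) in N^e with sum u_i * n_i = s  (brute force)
    bounds = [s // g for g in gens]
    return [u for u in product(*(range(b + 1) for b in bounds))
            if sum(ui * g for ui, g in zip(u, gens)) == s]

def num_R_classes(s, gens):
    # connected components of the graph on factorizations of s, with an
    # edge when two factorizations share a common support index
    fs = factorizations(s, gens)
    seen, classes = set(), 0
    for start in range(len(fs)):
        if start in seen:
            continue
        classes += 1
        stack = [start]
        seen.add(start)
        while stack:
            i = stack.pop()
            for j in range(len(fs)):
                if j not in seen and any(a and b for a, b in zip(fs[i], fs[j])):
                    seen.add(j)
                    stack.append(j)
    return classes

# example: for <6, 10, 15> (the semigroup T_3 of the previous section),
# the element 30 has three R_s-classes, so it is a Betti element,
# while 60 has a single class
gens = (6, 10, 15)
print(num_R_classes(30, gens), num_R_classes(60, gens))   # -> 3 1
\end{verbatim}
The values reported for ${\mathcal T}_3=\langle 6,10,15\rangle$ agree with the Betti elements obtained in the next subsection.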
\begin{definition} Let $S$ be a numerical semigroup with minimal system of generators $\{n_1,\ldots,n_e\}$. For every $i\in \{2,\ldots,e \}$, we define $$c^*_i=\min\big\{k\in{\mathbb N}\setminus\{0\} \mid kn_i \in \langle n_1,\ldots,n_{i-1} \rangle \big\}.$$ \end{definition} For our purposes, we consider the following characterization of free numerical semigroups (see \cite[Proposition~9.15]{springer} and the comment after it). \begin{lemma}\label{lem-aure43} Let $S$ be a numerical semigroup with minimal system of generators $\{n_1,\ldots,n_e\}$. Then $S$ is free (for the arrangement $\{n_1,\ldots,n_e \}$) if and only if $n_1=c^*_2\cdots c^*_e$. \end{lemma} In order to have a minimal presentation of a free numerical semigroup $S$, we use the following result from \cite{springer}. \begin{lemma}[Corollary 9.18]\label{lem-aure45} Let $S$ be a free numerical semigroup for the arrangement of its minimal set of generators $\{n_1,\ldots,n_e \}$. Assume that $c^*_in_i=a_{i_1}n_1+\cdots+a_{i_{i-1}}n_{i-1}$ for some $a_{i_1},\ldots,a_{i_{i-1}} \in {\mathbb N}$. Then $$\left\{(c^*_ix_i,a_{i_1}x_1+\cdots+a_{i_{i-1}}x_{i-1}) \mid i \in \{2,\ldots,e\} \right\}$$ is a minimal presentation of $S$. \end{lemma} From Remark~\ref{rem-aure39} and Lemma~\ref{lem-aure45}, we can easily compute the Betti elements of a free numerical semigroup $S$. \begin{lemma}\label{lem-aure40} Let $S$ be a free numerical semigroup for the arrangement of its minimal set of generators $\{n_1,\ldots,n_e \}$. Assume that $c^*_in_i=a_{i_1}n_1+\cdots+a_{i_{i-1}}n_{i-1}$ for some $a_{i_1},\ldots,a_{i_{i-1}} \in {\mathbb N}$. Then the Betti elements of $S$ are the numbers $c^*_in_i$ with $i \in \{2,\ldots,e\}$. \end{lemma} Let us recall that, if $S$ is a numerical semigroup and $n\in S\setminus \{0\}$, then the \emph{Ap\'ery set of $n$ in $S$} (see \cite{apery}) is $$\mathrm{Ap}(S,n) = \left\{ s\in S \mid s-n \notin S \right\} = \left\{0,\omega(1),\ldots,\omega(n-1) \right\},$$ where $\omega(i)$, $1\leq i \leq n-1$, is the least element of $S$ congruent with $i$ modulo $n$. Let us observe that the cardinality of $\mathrm{Ap}(S,n)$ is just equal to $n$. In addition, it is clear that ${\mathrm F}(S)=\max\big(\mathrm{Ap}(S,n)\big)-n$. When $S$ is a free numerical semigroup for the arrangement $\{n_1,\ldots,n_e \}$, we can compute the set $\mathrm{Ap}(S,n_1)$ explicitly (and easily). (The next lemma is part of \cite[Lemma~9.15]{springer}, where a detailed proof is not given. By following the ideas presented in \cite[Lemma~9.14]{springer}, we give a full proof.) \begin{lemma}\label{lem-aure46} Let $S= \langle n_1,\ldots,n_e \rangle$ be a free numerical semigroup. Then $$\mathrm{Ap}(S,n_1)=\big\{\lambda_2n_2+\cdots+\lambda_en_e \mid \lambda_j\in \{0,\ldots,c^*_j-1\} \mbox{ for all } j \in \{2,\ldots,e\} \big\}.$$ \end{lemma} \begin{proof} Let $\omega(i)$ be a non-zero element of $\mathrm{Ap}(S,n_1)$. Since $\omega(i) \in S$, there exist $\alpha_1,\ldots,\alpha_e \in {\mathbb N}$ such that $\omega(i)=\alpha_1n_1+\cdots+\alpha_en_e$. From the definition of Ap\'ery set, it is clear that $\alpha_1=0$ and, consequently, $\omega(i)=\alpha_2n_2+\cdots+\alpha_en_e$. Now, let us take $c^*_e$. Then, $\alpha_e = \gamma_e c^*_e + \delta_e$ with $\gamma_e \in {\mathbb N}$ and $0\leq \delta_e < c^*_e$.
And, since $c^*_en_e=a_{e_1}n_1+\cdots+a_{e_{e-1}}n_{e-1}$ for some $a_{e_1},\ldots,a_{e_{e-1}} \in {\mathbb N}$, then $\omega(i)=\beta_1n_1+\cdots+\beta_{e-1}n_{e-1}+\delta_en_e$ with $\beta_1,\ldots,\beta_{e-1} \in {\mathbb N}$ and $\beta_1=0$ (remember that $\omega(i)\in \mathrm{Ap}(S,n_1)$). Repeating this process with the coefficient of $n_{e-1}$ up to that of $n_2$, we have that $\omega(i)=\delta_2n_2+\cdots+\delta_en_e$ with $0\leq \delta_j < c^*_j$ for all $j\in \{2,\ldots,e\}$. At this moment, we have that \begin{equation}\label{eq-1} \mathrm{Ap}(S,n_1) \subseteq \big\{\lambda_2n_2+\cdots+\lambda_en_e \mid \lambda_j\in \{0,\ldots,c^*_j-1\} \mbox{ for all } j \in \{2,\ldots,e\} \big\}. \end{equation} In order to finish the proof, it is enough to observe that the cardinality of $\mathrm{Ap}(S,n_1)$ is $n_1$ and that, from Lemma~\ref{lem-aure43}, the cardinality of the second set in (\ref{eq-1}) is less than or equal to $c^*_2\cdots c^*_e=n_1$. \end{proof} \subsection{Triangular case}\label{subsect-tri} Let us take ${\mathcal T}_n=\langle {\mathrm T}_n, {\mathrm T}_{n+1}, {\mathrm T}_{n+2} \rangle$ with $n\geq 3$. We begin by computing the values of $c^*_2$ and $c^*_3$ of ${\mathcal T}_n$ (for the arrangement $\{{\mathrm T}_n, {\mathrm T}_{n+1}, {\mathrm T}_{n+2}\}$). \begin{lemma}\label{lem-aure42} Let ${\mathcal T}_n=\langle {\mathrm T}_n, {\mathrm T}_{n+1}, {\mathrm T}_{n+2} \rangle$. Then \begin{enumerate} \item $c^*_2=n$ and $c^*_3=\frac{n+1}{2}$ if $n$ is odd; \item $c^*_2=\frac{n}{2}$ and $c^*_3=n+1$ if $n$ is even. \end{enumerate} \end{lemma} \begin{proof} Let $n$ be odd. By definition, there exist $\alpha, \beta \in {\mathbb N}$ such that $c^*_3 {\mathrm T}_{n+2} = \alpha {\mathrm T}_n + \beta {\mathrm T}_{n+1}$, that is, $$c^*_3 \frac{(n+2)(n+3)}{2} = \alpha \frac{n(n+1)}{2} + \beta \frac{(n+1)(n+2)}{2}.$$ From Lemma~\ref{lem1}, we have that $\gcd\left\{\frac{n(n+1)}{2}, \frac{(n+1)(n+2)}{2} \right\} = \frac{n+1}{2}$. Moreover, $\gcd\left\{n+2,\frac{n+1}{2} \right\} = \gcd\left\{\frac{n+3}{2},\frac{n+1}{2} \right\}=1$. Therefore, $c^*_3$ has to be a multiple of $\frac{n+1}{2}$. Since $$\frac{n+1}{2} \frac{(n+2)(n+3)}{2} = 0 \frac{n(n+1)}{2} + \frac{n+3}{2} \frac{(n+1)(n+2)}{2},$$ we conclude that $c^*_3=\frac{n+1}{2}$. Now, we have that $c^*_2 {\mathrm T}_{n+1} = \alpha {\mathrm T}_n$ for some $\alpha \in {\mathbb N}$. That is, $c^*_2 \frac{(n+1)(n+2)}{2} = \alpha \frac{n(n+1)}{2}$ or, equivalently, $c^*_2(n+2)=\alpha n$. Then, it is obvious that $c^*_2=n$. Repeating similar arguments, we have the result in the case of even $n$. \end{proof} \begin{remark} An analogous result is obtained if we consider the arrangement $\{{\mathrm T}_{n+2}, {\mathrm T}_{n+1}, {\mathrm T}_n\}$ for the minimal system of generators of ${\mathcal T}_n$ (see Proposition~\ref{prop3}). In fact, \begin{enumerate} \item $c^*_2=\frac{n+3}{2}$ and $c^*_3=n+2$ if $n$ is odd; \item $c^*_2=n+3$ and $c^*_3=\frac{n+2}{2}$ if $n$ is even. \end{enumerate} \end{remark} From Lemmas~\ref{lem-aure41}, \ref{lem-aure43}, and \ref{lem-aure42}, we have the next result. \begin{proposition}\label{prop-aure44} ${\mathcal T}_n$ is a free numerical semigroup with embedding dimension equal to three. \end{proposition} By combining Proposition~\ref{prop-aure44}, Lemma~\ref{lem-aure45} and the arguments in the proof of Lemma~\ref{lem-aure42}, we get a minimal presentation of ${\mathcal T}_n$.
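Before stating it, let us note that the values of Lemma~\ref{lem-aure42}, as well as the freeness criterion of Lemma~\ref{lem-aure43}, can be confirmed computationally for small $n$; the following informal Python sketch (ours, not part of the proofs; the helper names are hypothetical) does so.
\begin{verbatim}
from math import gcd

def T(n): return n * (n + 1) // 2

def representable(x, gens):
    # non-negative integer combinations of gens (simple DP)
    reach = [True] + [False] * x
    for v in range(1, x + 1):
        reach[v] = any(g <= v and reach[v - g] for g in gens)
    return reach[x]

def c_star(i, gens):
    # c*_i = min{ k >= 1 : k * n_i belongs to <n_1, ..., n_{i-1}> }
    k = 1
    while not representable(k * gens[i], gens[:i]):
        k += 1
    return k

for n in range(3, 10):
    gens = [T(n), T(n + 1), T(n + 2)]
    c2, c3 = c_star(1, gens), c_star(2, gens)
    if n % 2 == 1:
        assert (c2, c3) == (n, (n + 1) // 2)    # Lemma lem-aure42, n odd
    else:
        assert (c2, c3) == (n // 2, n + 1)      # Lemma lem-aure42, n even
    assert c2 * c3 == gens[0]                   # freeness (Lemma lem-aure43)
print("checked n = 3, ..., 9")
\end{verbatim}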
By combining Proposition~\ref{prop-aure44}, Lemma~\ref{lem-aure45} and the arguments in the proof of Lemma~\ref{lem-aure42}, we get a minimal presentation of ${\mathcal T}_n$. \begin{proposition}\label{prop6} A minimal presentation of $\,{\mathcal T}_n = \langle {\mathrm T}_n, {\mathrm T}_{n+1}, {\mathrm T}_{n+2} \rangle$ is \begin{enumerate} \item if $n$ is odd: $\left\{ \big(\frac{n+1}{2}x_3,\frac{n+3}{2}x_2\big), \big(nx_2,(n+2)x_1\big) \right\}$; \item if $n$ is even: $\left\{ \big((n+1)x_3,(n+3)x_2\big), \big(\frac{n}{2}x_2,\frac{n+2}{2}x_1\big) \right\}$. \end{enumerate} \end{proposition} By applying Lemma~\ref{lem-aure40}, we compute the Betti elements of ${\mathcal T}_n$. \begin{corollary}\label{cor8} The Betti elements of ${\mathcal T}_n$ are \begin{enumerate} \item $\frac{3}{2}{n+3 \choose 3}$ and $3{n+2 \choose 3}$, if $n$ is odd; \item $3{n+3 \choose 3}$ and $\frac{3}{2}{n+2 \choose 3}$, if $n$ is even. \end{enumerate} \end{corollary} \begin{remark} Let us observe that the Betti elements of ${\mathcal T}_n$ are given in terms of tetrahedral numbers. An analogous property can be observed in the case of $\mathcal{TH}_n$ (see Corollary~\ref{cor-aure7}): the Betti elements of $\mathcal{TH}_n$ can be expressed in terms of the combinatorial numbers ${p \choose 4}$. \end{remark} Finally, taking in Lemma~\ref{lem-aure46} the values of $c^*_2$ and $c^*_3$ given in Lemma~\ref{lem-aure42}, we obtain the explicit description of the set $\mathrm{Ap}({\mathcal T}_n,{\mathrm T}_n)$. \begin{corollary}\label{cor7} We have that \begin{enumerate} \item if $n$ is odd, then $$\mathrm{Ap}({\mathcal T}_n,{\mathrm T}_n)=\Bigg\{ a{\mathrm T}_{n+1}+b{\mathrm T}_{n+2} \;\Big|\; a\in\{0,\ldots,n-1\}, \; b\in\left\{0,\ldots,\frac{n-1}{2}\right\} \Bigg\};$$ \item if $n$ is even, then $$\mathrm{Ap}({\mathcal T}_n,{\mathrm T}_n)=\Bigg\{ a{\mathrm T}_{n+1}+b{\mathrm T}_{n+2} \;\Big|\; a\in\left\{0,\ldots,\frac{n-2}{2}\right\}, \; b\in\{0,\ldots,n\} \Bigg\}.$$ \end{enumerate} \end{corollary} \begin{remark}\label{rem-aure5c} Bearing in mind that, if $S$ is a numerical semigroup and $n\in S\setminus \{0\}$, then ${\mathrm F}(S)=\max\big(\mathrm{Ap}(S,n)\big)-n$, we recover Proposition~\ref{prop5} (for $n\geq3$). \end{remark} \subsection{Tetrahedral case}\label{subsect-tetra} We finish with the series of results concerning the numerical semigroups $\mathcal{TH}_n$ with $n\geq4$. Since the reasoning and tools are similar to those used in the triangular case (and some of them have already appeared in Section~\ref{tetra-numbers}), we omit the proofs. \begin{lemma}\label{lem-aure51a} Let $\mathcal{TH}_n=\langle \mathrm{TH}_n, \mathrm{TH}_{n+1}, \mathrm{TH}_{n+2}, \mathrm{TH}_{n+3} \rangle$, where $n\equiv r \bmod 6$ with $r\in\{0,1,2,3\}$. Then \begin{enumerate} \item $c^*_2=\frac{n}{3}$, $c^*_3=n+1$, and $c^*_4=\frac{n+2}{2}$ if $n=6k$; \item $c^*_2=n$, $c^*_3=\frac{n+1}{2}$, and $c^*_4=\frac{n+2}{3}$ if $n=6k+1$; \item $c^*_2=n$, $c^*_3=\frac{n+1}{3}$, and $c^*_4=\frac{n+2}{2}$ if $n=6k+2$; \item $c^*_2=\frac{n}{3}$, $c^*_3=\frac{n+1}{2}$, and $c^*_4=n+2$ if $n=6k+3$. \end{enumerate} \end{lemma} \begin{lemma}\label{lem-aure51b} Let $\mathcal{TH}_n=\langle \mathrm{TH}_{n+3}, \mathrm{TH}_{n+2}, \mathrm{TH}_{n+1}, \mathrm{TH}_n \rangle$, where $n\equiv r \bmod 6$ with $r\in\{4,5\}$. Then \begin{enumerate} \item $c^*_2=\frac{n+5}{3}$, $c^*_3=\frac{n+4}{2}$, and $c^*_4=n+3$ if $n=6k+4$; \item $c^*_2=n+5$, $c^*_3=\frac{n+4}{3}$, and $c^*_4=\frac{n+3}{2}$ if $n=6k+5$. \end{enumerate} \end{lemma} \begin{proposition}\label{prop-aure52} $\mathcal{TH}_n$ is a free numerical semigroup with embedding dimension equal to four.
\end{proposition} \begin{proposition}\label{prop-aure5} A minimal presentation of $\mathcal{TH}_n$ is \begin{enumerate} \item if $n=6k$: $${\textstyle \left\{ \Big(\frac{n+2}{2}x_4,2x_3+\frac{n+4}{2}x_2\Big), \big((n+1)x_3,(n+4)x_2\big), \Big(\frac{n}{3}x_2,\frac{n+3}{3}x_1\Big) \right\};}$$ \item if $n=6k+1$: $${\textstyle \left\{ \Big(\frac{n+2}{3}x_4,\frac{n+5}{3}x_3\Big), \Big(\frac{n+1}{2}x_3,2x_2+\frac{n+3}{2}x_1\Big), \big(nx_2,(n+3)x_1\big) \right\};}$$ \item if $n=6k+2$: $${\textstyle \left\{ \Big(\frac{n+2}{2}x_4,2x_3+\frac{n+4}{2}x_2\Big), \Big(\frac{n+1}{3}x_3,\frac{n+4}{3}x_2\Big), \big(nx_2,(n+3)x_1\big) \right\};}$$ \item if $n=6k+3$: $${\textstyle \left\{ \big((n+2)x_4,(n+5)x_3\big), \Big(\frac{n+1}{2}x_3,2x_2+\frac{n+3}{2}x_1\Big), \Big(\frac{n}{3}x_2,\frac{n+3}{3}x_1\Big) \right\};}$$ \item if $n=6k+4$: $${\textstyle \left\{ \big((n+3)x_1,nx_2\big), \Big(\frac{n+4}{2}x_2,\frac{n-1}{3}x_3+\frac{n+2}{6}x_4\Big), \Big(\frac{n+5}{3}x_3,\frac{n+2}{3}x_4\Big) \right\}};$$ \item if $n=6k+5$: $${\textstyle \left\{ \Big(\frac{n+3}{2}x_1,\frac{n-2}{3}x_2+\frac{n+1}{6}x_3\Big), \Big(\frac{n+4}{3}x_2,\frac{n+1}{3}x_3\Big), \big((n+5)x_3,(n+2)x_4\big) \right\}.}$$ \end{enumerate} \end{proposition} \begin{corollary}\label{cor-aure7} The Betti elements of $\mathcal{TH}_n$ are \begin{enumerate} \item $2{n+5 \choose 4}$, $4{n+4 \choose 4}$ and $\,\frac{4}{3}{n+3 \choose 4}$, if $n=6k$; \item $\frac{4}{3}{n+5 \choose 4}$, $2{n+4 \choose 4}$ and $\,4{n+3 \choose 4}$, if $n=6k+1$; \item $2{n+5 \choose 4}$, $\frac{4}{3}{n+4 \choose 4}$ and $\,4{n+3 \choose 4}$, if $n=6k+2$; \item $4{n+5 \choose 4}$, $2{n+4 \choose 4}$ and $\,\frac{4}{3}{n+3 \choose 4}$, if $n=6k+3$; \item $\frac{4}{3}{n+5 \choose 4}$, $2{n+4 \choose 4}$ and $\,4{n+3 \choose 4}$, if $n=6k+4$; \item $4{n+5 \choose 4}$, $\frac{4}{3}{n+4 \choose 4}$ and $\,2{n+3 \choose 4}$, if $n=6k+5$.
\end{enumerate} \end{corollary} \begin{corollary}\label{cor-aure6} We have that \begin{enumerate} \item if $n=6k$, then $$\mathrm{Ap}(\mathcal{TH}_n,\mathrm{TH}_n)=\Bigg\{ a\mathrm{TH}_{n+1}+b\mathrm{TH}_{n+2}+c\mathrm{TH}_{n+3} \;\Big|\; a\in\left\{0,\ldots,\frac{n-3}{3}\right\},$$ $$b\in\{0,\ldots,n\}, \; c\in\left\{0,\ldots,\frac{n}{2}\right\} \Bigg\};$$ \item if $n=6k+1$, then $$\mathrm{Ap}(\mathcal{TH}_n,\mathrm{TH}_n)=\Bigg\{ a\mathrm{TH}_{n+1}+b\mathrm{TH}_{n+2}+c\mathrm{TH}_{n+3} \;\Big|\; a\in\{0,\ldots,n-1\},$$ $$b\in\left\{0,\ldots,\frac{n-1}{2}\right\}, \; c\in\left\{0,\ldots,\frac{n-1}{3}\right\} \Bigg\};$$ \item if $n=6k+2$, then $$\mathrm{Ap}(\mathcal{TH}_n,\mathrm{TH}_n)=\Bigg\{ a\mathrm{TH}_{n+1}+b\mathrm{TH}_{n+2}+c\mathrm{TH}_{n+3} \;\Big|\; a\in\{0,\ldots,n-1\},$$ $$b\in\left\{0,\ldots,\frac{n-2}{3}\right\}, \; c\in\left\{0,\ldots,\frac{n}{2}\right\} \Bigg\};$$ \item if $n=6k+3$, then $$\mathrm{Ap}(\mathcal{TH}_n,\mathrm{TH}_n)=\Bigg\{ a\mathrm{TH}_{n+1}+b\mathrm{TH}_{n+2}+c\mathrm{TH}_{n+3} \;\Big|\; a\in\left\{0,\ldots,\frac{n-3}{3}\right\},$$ $$b\in\left\{0,\ldots,\frac{n-1}{2}\right\}, \; c\in\{0,\ldots,n+1\} \Bigg\};$$ \item if $n=6k+4$, then $$\mathrm{Ap}(\mathcal{TH}_n,\mathrm{TH}_{n+3})=\Bigg\{ a\mathrm{TH}_n+b\mathrm{TH}_{n+1}+c\mathrm{TH}_{n+2} \;\Big|\; a\in\{0,\ldots,n+2\},$$ $$b\in\left\{0,\ldots,\frac{n+2}{2}\right\}, \; c\in\left\{0,\ldots,\frac{n+2}{3}\right\} \Bigg\};$$ \item if $n=6k+5$, then $$\mathrm{Ap}(\mathcal{TH}_n,\mathrm{TH}_{n+3})=\Bigg\{ a\mathrm{TH}_n+b\mathrm{TH}_{n+1}+c\mathrm{TH}_{n+2} \;\Big|\; a\in\left\{0,\ldots,\frac{n+1}{2}\right\},$$ $$b\in\left\{0,\ldots,\frac{n+1}{3}\right\}, \; c\in\{0,\ldots,n+4\} \Bigg\}.$$ \end{enumerate} \end{corollary} \begin{remark} We can get Proposition~\ref{prop5b} (for $n\geq4$) as an immediate consequence of Corollary~\ref{cor-aure6} (see Remark~\ref{rem-aure5c}). \end{remark} \begin{remark} Bearing in mind Remark~\ref{rem-13}, it is possible to develop a series of results, analogous to those obtained in Subsections~\ref{subsect-tri} and \ref{subsect-tetra}, for sequences of the type $\Big( {n+3 \choose 4}, {n+4 \choose 4}, {n+5 \choose 4}, {n+6 \choose 4}, {n+7 \choose 4} \Big)$ with $n\geq 6$. \end{remark} \end{document}
\begin{document} \title{\LARGE \bf Bridging a gap in Kalman filtering output estimation \\with correlated noises or direct feed-through \\from process noise into measurements } \thispagestyle{empty} \pagestyle{empty} \begin{abstract} Traditional statements of the celebrated Kalman filter algorithm focus on the estimation of the state, but not the output. For any outputs, measured or auxiliary, it is usually assumed that the posterior state estimates and known inputs are enough to generate the minimum variance output estimate, given by $y_{n|n} = C x_{n|n} + D u_{n}$. The same equation is implemented in the most popular control design toolboxes. It will be shown that when the measurement and process noises are correlated, or when the process noise directly feeds into the measurements, this equation is no longer optimal, and a correcting term $H w_{n|n}\doteq H \mathbb{E}(w_{n}|z_{n})$ is needed in the above output estimate. This natural extension allows the designer to simplify noise modeling, reduce the estimator order, improve robustness to unknown noise models, and estimate an unknown input when it is expressed as an auxiliary output. This is directly applicable in motion-control applications which exhibit such feed-through, such as estimating a disturbance thrust affecting accelerometer measurements. Based on a proof of suboptimality \cite{KalmanCounterExample}, this correction has been accepted and implemented in Matlab 2016 \cite{MatlabKalmanFnc2016}. \end{abstract} \section{Introduction} Consider a discrete-time system in the canonical form with time-update equations at step $n$ as below: {\small \begin{align} \label{eq:DiscreteDyn} \begin{split} x_{n+1} &= A x_{n} + B u_{n} + G w_{n} \\ y_{n} &= C x_{n} + D u_{n} + H w_{n}\\ z_{n} &= C_{m} x_{n} + D_{m} u_{n} + H_{m} w_{n} + v_{n} \end{split} \end{align} } where $x$ is the state vector, $u$ is a known input vector, $w$ is the unknown input or process noise, $z$ are the measured outputs affected by the measurement noise $v$, and $y$ is the vector of auxiliary outputs (which may well contain as a subset the measured outputs without the measurement noise). The process and measurement noises are white with the following correlations: $\mathbb{E}(ww^{T}) = Q$, $\mathbb{E}(vv^{T}) = R$ and $\mathbb{E}(wv^{T}) = N$. \subsection{Problem of Output Estimation} The problem we wish to solve is to compute the minimum-variance estimates of the state $x_{n}$, the output $y_{n}$ and the step-ahead prediction $x_{n+1}$, given the history of measurements $z_{i}$, $i=n, n-1, n-2, \dots$. These can be denoted by the short-hand $x_{n|n}$, $y_{n|n}$ and $x_{{n+1}|n}$, respectively. The state estimation and prediction part of this problem is at the heart of linear real-time estimation of dynamic systems; it was solved exactly by Kalman \cite{Kalman1960} and is described in many texts (e.g. see \cite{kwakernaak1972linear}, \cite{lewis1986optimal}, \cite{FranklinWorkmanPowell1997}, \cite{kailath2000}, \cite{simon2006}) with varying degrees of generality in the assumptions. The most general form, described for example in Kailath et al. \cite{kailath2000}, has $G=I$, $H_{m}=0$, but allows arbitrary cross-correlation between $w$ and $v$, which makes it equivalent to \eqref{eq:DiscreteDyn} without loss of generality. Yet there is a gap in all but one of the above references: they do not explicitly describe the equation for optimal output estimation, $y_{n|n} \doteq \mathbb{E}(y_{n}|z_{n})$. The only reference among those surveyed which describes the output equation is Kwakernaak and Sivan \cite{kwakernaak1972linear}
(see eq. 4-228, Thm. 4.7), and the output estimate equation is given by: \begin{align}\label{eq:IncorrectOutputEq} y_{n|n} = C x_{n|n} + D u_{n} \end{align} This is proved for uncorrelated measurement and process noises, but a footnote states that the same can be proved for correlated noises, that $\mathtt{var}(y - y_{n|n}) = H_{m}Q H_{m}^{T} + R + H_{m}N + N^{T}H_{m}^{T}$, and that the innovations $y-y_{n|n}$ are white, when $y_{n|n}$ is computed as above and the output vector equals the measurements without the measurement noise, $y \doteq z-v$. The steady-state or time-varying Kalman filter implementations in popular control design toolboxes such as Matlab 2015b \cite{MatlabKalmanFnc2015b}, Labview \cite{LabviewKalmanFnc}, Mathematica \cite{MathematicaKalmanFnc}, Maple \cite{MapleKalmanFnc} and Octave-forge \cite{OctaveKalmanFnc} do allow us to pose the most general problem \eqref{eq:DiscreteDyn} with correlated noises or feed-through, and also allow us to generate estimates for both the states $x_{n|n}$ and the outputs $y_{n|n}$. But as recently as 2015, all of the above implementations use equation \eqref{eq:IncorrectOutputEq} for updating the outputs. As a specific example, the equation for the \emph{current} form of the Kalman filter, per the documentation in \cite{MatlabKalmanFnc2015b}, is: {\small \begin{align} \begin{bmatrix} x_{n|n}\\ y_{n|n}\\ x_{{n+1}|n}\end{bmatrix} &= \begin{bmatrix} I - K_{g}C_{m} & -K_{g}D_{m}\\ C - C K_{g} \,C_{m} & D - C K_{g} \,D_{m} \\ A - M_{A,G} \,C_{m} & B - M_{A,G} \,D_{m} \end{bmatrix} \cdot \label{eq:XYXnextupdate:A}\\ & \hskip 6em \begin{bmatrix} x_{n|{n-1}}\\ u_{n}\end{bmatrix} + \begin{bmatrix} K_{g}\\ C K_{g}\\ M_{A,G} \end{bmatrix} z_{n} \nonumber \end{align} } where $M_{A,G} = A K_{g} + G K_{g2}$, $K_{g} = P_{n|{n-1}}C_{m}' (C_{m}P_{n|{n-1}}C_{m}' + \bar{R})^{-1}$, $K_{g2} = (Q H_{m}'+N) (C_{m}P_{n|{n-1}}C_{m}' + \bar{R})^{-1}$, $\bar{R} = R + H_{m} Q H_{m}^{T} + H_{m} N + N^{T} H_{m}^{T}$, and $P_{n|n-1}$ is the solution of the Riccati difference iteration or of the Riccati equation. Despite the complex form, it is easy to see that the output row of eq. \eqref{eq:XYXnextupdate:A} matches eq. \eqref{eq:IncorrectOutputEq}. This paper proposes a change only to the output update equation for $y_{n|n}$. As will be shown later in this paper, all claims from \cite{kwakernaak1972linear} mentioned above hold except the optimality, and equation \eqref{eq:IncorrectOutputEq} fails to be optimal when there is a direct feed-through, which means either $H_{m}$ or $N$ is non-zero. In such a case the needed correction is: \begin{align} \label{eq:CorrecetdOutputEqA} y_{n|n} = C x_{n|n} + D u_{n} + H \mathbb{E}(w_{n}|z_{n}) \end{align} In section \ref{sec:Example}, we will demonstrate the suboptimality of eq. \eqref{eq:IncorrectOutputEq} relative to eq. \eqref{eq:CorrecetdOutputEqA} through a counter-example. The loss of optimality occurs because the conditional expectation of the process noise $w_{n}$ knowing the measurements $z_{n}$ is non-zero, as they are correlated due to the direct feed-through via $H_{m}$ or $N$, and this must be accounted for to achieve the best estimation of the output $y$. Another way to see this is in the frequency domain. If \eqref{eq:DiscreteDyn} is a discretization of a continuous time system, then $x$ is bound to be smooth by physics, and so is the estimate $x_{n|n}$. But the output $y$ is not smooth and does not have a high-frequency roll-off, due to the direct feed-through in $H$ or $H_{m}$. Using eq. \eqref{eq:IncorrectOutputEq} would constrain $y_{n|n}$ to be smooth even though $y$ is not. Thus the estimation error is bound to be large at high frequencies. This can be avoided by using the measurements $z_{n}$ in a better way in the computation of $y_{n|n}$, as per eq. \eqref{eq:CorrecetdOutputEq} to follow.
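To make the non-zero conditional expectation $\mathbb{E}(w_{n}|z_{n})$ concrete before the derivation, the following short Python sketch (our own scalar illustration with made-up numbers, not taken from any of the cited implementations) estimates it by Monte-Carlo for a measurement with direct feed-through of the process noise:
\begin{verbatim}
# Scalar Monte-Carlo illustration: with feed-through (z = x + w + v),
# the process noise w is correlated with z, so E(w|z) != 0.
import numpy as np

rng = np.random.default_rng(0)
ns = 200_000
x = rng.normal(0.0, 1.0, ns)             # state-like term, independent of w and v
w = rng.normal(0.0, 1.0, ns)             # process noise, Q = 1
v = rng.normal(0.0, np.sqrt(0.1), ns)    # measurement noise, R = 0.1, N = 0 here
z = x + w + v                            # measurement with direct feed-through of w

# For zero-mean Gaussians, E(w|z) = cov(w,z)/var(z) * z.
gain = np.mean(w * z) / np.mean(z * z)   # ~ 1/2.1, clearly non-zero
print(gain)
print(np.mean((w - gain * z)**2), np.mean(w**2))  # conditioning reduces the variance
\end{verbatim}
The estimated gain is far from zero, and subtracting the conditional mean visibly reduces the residual variance of $w$; this is precisely the information that eq. \eqref{eq:IncorrectOutputEq} discards.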
\subsection{Problem of Unknown Input Estimation} Note that after substituting $C=0$, $D=0$ and $H=I$ in \eqref{eq:DiscreteDyn}, we have $y=w$ and the problem of unknown input estimation becomes a specific case of output estimation. Traditional methods of unknown input modelling include augmenting the state-space to include noise states. Such methods work well for smoothly varying unknown inputs, but due to the smoothness requirement on the evolution of the noise states, they cannot model broad-band process noise which has significant energy at high frequencies or which has a hard floor in frequency instead of a roll-off. Moreover, an estimator designed with a smooth noise model can lack robustness to unmodeled spikes in the unknown inputs at high frequencies. In an application like a wind turbine, for example, direct feed-through exists from the unknown wind thrust to sensors like accelerometers. Due to pockets of localized wind gusts and the tower dam effect, the thrust on the blades can have a very broad-band spectrum extending to high frequencies, with peaks at multiples of the blade passing frequency. The method to improve output estimation proposed in this paper has the fortunate side-effect of improving the unknown input estimation, and provides a direct way to model broad-band noise without expanding the state-space. The paper is organized as follows. In sec. \ref{sec:DeriveDiscKalman}, we derive the discrete time minimum variance estimator, or the Kalman filter, from first principles, with the specific goal of state as well as output estimation. Note that we derive the state update equations only for the sake of completeness, and propose no change to them compared to the prior art. The only change proposed is in the output update equations. The comparison with earlier approaches for output estimation is done in sec. \ref{sec:ComparePrevMethod}. A simple numerical example which demonstrates the improved estimation is described in sec. \ref{sec:Example}. \section{Derivation of discrete time Kalman filter}\label{sec:DeriveDiscKalman} This section derives the formulae for a time-varying discrete-time Kalman filter from first principles. In doing so, we will repeatedly leverage a key result on the conditional distribution of a bivariate Gaussian. \subsection{A result from the conditional bivariate Gaussian distribution}\label{sec:bivarGaussDist} Let us denote a normal distribution by its mean and variance, $\mathcal{N}(\mathtt{mean}, \mathtt{variance})$. Assume two normal random variables $x_{1} = \mathcal{N}(m_{1},P_{11})$ and $x_{2} = \mathcal{N}(m_{2},P_{22})$ with cross-covariance $\mathtt{cov}(x_{1}, x_{2})=\mathbb{E}((x_{1}-m_{1})(x_{2}-m_{2})^{T}) = P_{12}$.
Then it is well known \cite{CondBivarDistrib} that the conditional distribution of $x_{1}$ knowing $x_{2} = z_{2}$ is $x_{1|2} = \mathcal{N}(m_{1|2}, P_{1|2})$, where {\small \begin{align} &m_{1|2} = \mathbb{E}(x_{1}|x_{2}=z_{2}) \nonumber\\ &\hskip1em = \mathbb{E}(x_{1}) + \mathtt{cov}(x_{1},x_{2}) \mathtt{var}(x_{2})^{-1}(z_{2}-\mathbb{E}(x_{2})) \nonumber\\ &\hskip1em= m_{1} + P_{12} P_{22}^{-1} (z_{2} - m_{2}) \label{eq:condProb}\\ &P_{1|2} = \mathtt{var}(x_{1}|x_{2}=z_{2})\nonumber\\ &\hskip1em= \mathtt{var}(x_{1}) - \mathtt{cov}(x_{1},x_{2}) \mathtt{var}(x_{2})^{-1} \mathtt{cov}(x_{1},x_{2})^{T} \nonumber\\ &\hskip1em= P_{11} - P_{12} P_{22}^{-1} P_{12}^{T} \label{eq:condProbVar} \end{align} } \subsection{State and Output update rules}\label{sec:updateEqs} At the $n$'th step of the filter update, we know the prior state estimate $x_{n|{n-1}}$ (with variance $P_{n|{n-1}}$), the measurements $z_{n}$ and the known inputs $u_{n}$. Tabulating the cross-covariances and variances will help us in the upcoming development. {\small \begin{align*} &\mathtt{cov}(x_{n|{n-1}}, z_{n|{n-1}}) = P_{n|{n-1}} C_{m}^{T} \\ &\mathtt{cov}(y_{n|{n-1}}, z_{n|{n-1}}) = C P_{n|{n-1}} C_{m}^{T} + H Q H_{m}^{T} + H N\\ &\mathtt{cov}(x_{n+1|{n-1}}, z_{n|{n-1}}) = A P_{n|{n-1}} C_{m}^{T} + G Q H_{m}^{T} + G N\\ &\mathtt{var}(x_{n|{n-1}}) = P_{n|{n-1}}\\ &\mathtt{var}(z_{n|{n-1}}) = C_{m} P_{n|{n-1}} C_{m}^{T} + \bar{R}\\ &\mathtt{var}(x_{n+1|{n-1}}) = A P_{n|{n-1}} A^{T} + G Q G^{T} \end{align*} } where $\bar{R} = R + H_{m} Q H_{m}^{T} + H_{m} N + N^{T} H_{m}^{T}$. Using the above expressions and equation \eqref{eq:condProb}, the posterior expectations and variances of the various random variables can be readily computed by variable substitution. The posterior expectation of the state is given by (measurement update for the state): {\small \begin{align} &\mathbb{E}(x_{n|n}) = \mathbb{E}(x_{n|{n-1}} | z_{n|{n-1}} = z_{n}) \label{eq:Xupdate}\\ &= \mathbb{E}(x_{n|{n-1}}) + \mathtt{cov}(x_{n|{n-1}}, z_{n|{n-1}}) \cdot \nonumber\\ &\hskip 4em \mathtt{var}(z_{n|{n-1}})^{-1} (z_{n} - \mathbb{E}(z_{n|{n-1}})) \nonumber\\ &= x_{n|{n-1}} + P_{n|{n-1}} C_{m}^{T} \cdot \nonumber\\ &\hskip 4em \left(C_{m} P_{n|{n-1}} C_{m}^{T} + \bar{R} \right)^{-1} \left(z_{n} - (C_{m} x_{n|{n-1}} +D_{m} u_{n}) \right) \nonumber \end{align} } Similarly, the posterior expectation of the output is given by (measurement update for the outputs): {\small \begin{align} &\mathbb{E}(y_{n|n}) = \mathbb{E}(y_{n|{n-1}} | z_{n|{n-1}} = z_{n}) \label{eq:Yupdate}\\ &= \mathbb{E}(y_{n|{n-1}}) + \mathtt{cov}(y_{n|{n-1}}, z_{n|{n-1}}) \cdot \nonumber\\ &\hskip 4em \mathtt{var}(z_{n|{n-1}})^{-1} (z_{n} - \mathbb{E}(z_{n|{n-1}})) \nonumber\\ &= C x_{n|{n-1}} + D u_{n} + \left(C P_{n|{n-1}} C_{m}^{T} + H Q H_{m}^{T} + H N \right) \cdot \nonumber\\ & \hskip 4em \left(C_{m} P_{n|{n-1}} C_{m}^{T} + \bar{R} \right)^{-1} \left(z_{n} - (C_{m} x_{n|{n-1}} +D_{m} u_{n}) \right) \nonumber \end{align} }
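These two measurement updates translate directly into code. The following Python sketch is a minimal illustration (with ad hoc names and made-up small matrices; it is not the implementation of any of the cited toolboxes) of eqs. \eqref{eq:Xupdate} and \eqref{eq:Yupdate}:
\begin{verbatim}
# One measurement update for the state and the outputs,
# using E(a|z) = E(a) + cov(a,z) var(z)^{-1} (z - E(z)).
import numpy as np

def measurement_update(x_prior, P_prior, u, z, C, D, H, Cm, Dm, Hm, Q, R, N):
    Rbar  = R + Hm @ Q @ Hm.T + Hm @ N + N.T @ Hm.T
    S     = Cm @ P_prior @ Cm.T + Rbar          # var(z_{n|n-1})
    innov = z - (Cm @ x_prior + Dm @ u)         # innovation
    cov_xz = P_prior @ Cm.T
    cov_yz = C @ P_prior @ Cm.T + H @ Q @ Hm.T + H @ N
    x_post = x_prior + cov_xz @ np.linalg.solve(S, innov)
    y_post = C @ x_prior + D @ u + cov_yz @ np.linalg.solve(S, innov)
    return x_post, y_post

# Tiny made-up example: 1 state, 1 measurement, 2 outputs.
C  = np.array([[1.0], [0.0]]); D = np.zeros((2, 1)); H = np.array([[1.0], [1.0]])
Cm = np.array([[1.0]]); Dm = np.zeros((1, 1)); Hm = np.array([[1.0]])
Q  = np.array([[1.0]]); R = np.array([[0.1]]); N = np.array([[0.0]])
x0 = np.array([0.0]); P0 = np.array([[1.0]]); u = np.array([0.0]); z = np.array([0.7])
print(measurement_update(x0, P0, u, z, C, D, H, Cm, Dm, Hm, Q, R, N))
\end{verbatim}
Note that the output update already uses the full cross-covariance $\mathtt{cov}(y_{n|{n-1}},z_{n|{n-1}})$, which is where the feed-through terms $HQH_{m}^{T}+HN$ enter.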
The posterior expectation of the predicted state becomes (measurement update for the predicted state): {\small \begin{align} &\mathbb{E}(x_{{n+1}|n}) = \mathbb{E}(x_{n+1|{n-1}} | z_{n|{n-1}} = z_{n}) \label{eq:XupdateNext}\\ &= \mathbb{E}(x_{n+1|{n-1}}) + \mathtt{cov}(x_{n+1|{n-1}}, z_{n|{n-1}}) \cdot \nonumber\\ & \hskip 4em \mathtt{var}(z_{n|{n-1}})^{-1} (z_{n} - \mathbb{E}(z_{n|{n-1}})) \nonumber\\ &= A x_{n|{n-1}} + B u_{n} + \left(A P_{n|{n-1}} C_{m}^{T} + G Q H_{m}^{T} + G N \right) \cdot \nonumber\\ & \hskip 4em \left(C_{m} P_{n|{n-1}} C_{m}^{T} + \bar{R} \right)^{-1} \left(z_{n} - (C_{m} x_{n|{n-1}} +D_{m} u_{n}) \right) \nonumber \end{align} } The posterior variance for the predicted state, using eq. \eqref{eq:condProbVar}, is given by the discrete Riccati difference equation below: {\small \begin{align} &P_{{n+1}|n} = \mathtt{var}(x_{n+1|n-1}) - \mathtt{cov}(x_{n+1|n-1}, z_{n|n-1}) \cdot \label{eq:VarupdateXNext}\\ &\hskip 6em \mathtt{var}(z_{n|n-1})^{-1} \mathtt{cov}(x_{n+1|n-1}, z_{n|n-1})^{T} \nonumber\\ &=(A P_{n|{n-1}} A^{T} + G Q G^{T}) - \left(A P_{n|{n-1}} C_{m}^{T} + G Q H_{m}^{T} + G N \right) \cdot \nonumber\\ & \hskip 1em \left(C_{m} P_{n|{n-1}} C_{m}^{T} + \bar{R} \right)^{-1} \left(A P_{n|{n-1}} C_{m}^{T} + G Q H_{m}^{T} + G N \right)^{T} \nonumber \end{align} } Similarly, the posterior variance for the estimated outputs is: {\small \begin{align} &\mathtt{var}(y_{n|n}) = \mathtt{var}(y_{n|n-1}) - \mathtt{cov}(y_{n|n-1}, z_{n|n-1}) \cdot \label{eq:VarUpdateY}\\ &\hskip 6em \mathtt{var}(z_{n|n-1})^{-1} \mathtt{cov}(y_{n|n-1}, z_{n|n-1})^{T} \nonumber\\ &= (C P_{n|{n-1}} C^{T} + H Q H^{T}) - \left(C P_{n|{n-1}} C_{m}^{T} + H Q H_{m}^{T} + H N \right) \cdot \nonumber\\ & \hskip 1em \left(C_{m} P_{n|{n-1}} C_{m}^{T} + \bar{R} \right)^{-1} \left(C P_{n|{n-1}} C_{m}^{T} + H Q H_{m}^{T} + H N \right)^{T} \nonumber \end{align} } In summary, after each measurement $z_{n}$, $x_{n|{n-1}}$ and $P_{n|{n-1}}$ are propagated forward in time to generate $x_{{n+1}|n}$ and $P_{{n+1}|n}$, and recursively thereon, through equations \eqref{eq:XupdateNext} and \eqref{eq:VarupdateXNext}. Estimates of the outputs and states, $y_{n|n}$ and $x_{n|n}$, based on the Kalman filter are also generated by equations \eqref{eq:Yupdate} and \eqref{eq:Xupdate}. \begin{remark} Note that if the estimated outputs are the same as the measured ones, i.e. when $C=C_{m}$, $H=H_{m}$, $N=0$, eq. \eqref{eq:VarUpdateY} simplifies to \begin{align} &\mathtt{var}(y_{n|n}) = (C_{m}P_{n|n-1}C_{m}^{T} +H_{m}Q H_{m}^{T}) \cdot \label{eq:VarupdateYmeas}\\ & \hskip 3em (C_{m}P_{n|n-1}C_{m}^{T} +H_{m}Q H_{m}^{T}+R)^{-1} \cdot R \preceq R \nonumber \end{align} Thus the posterior output estimate is more accurate than the measurement, which matches the intuition. \end{remark} \section{Comparison with the prior art}\label{sec:ComparePrevMethod} Equations \eqref{eq:Xupdate}, \eqref{eq:Yupdate} and \eqref{eq:XupdateNext} can be summarized in the following matrix equation. {\small \begin{align} \begin{bmatrix} x_{n|n}\\ y_{n|n}\\ x_{{n+1}|n}\end{bmatrix} &= \begin{bmatrix} I - K_{g}C_{m} & -K_{g}D_{m}\\ C - M_{C,H} \,C_{m} & D - M_{C,H} \,D_{m} \\ A - M_{A,G} \,C_{m} & B - M_{A,G} \,D_{m} \end{bmatrix} \cdot \label{eq:XYXnextupdate}\\ & \hskip 6em \begin{bmatrix} x_{n|{n-1}}\\ u_{n}\end{bmatrix} + \begin{bmatrix} K_{g}\\ M_{C,H}\\ M_{A,G} \end{bmatrix} z_{n} \nonumber \end{align} } where $M_{C,H} = C K_{g} + H K_{g2}$ and $M_{A,G} = A K_{g} + G K_{g2}$, $K_{g} = P_{n|{n-1}}C_{m}' (C_{m}P_{n|{n-1}}C_{m}' + \bar{R})^{-1}$ and $K_{g2} = (Q H_{m}'+N) (C_{m}P_{n|{n-1}}C_{m}' + \bar{R})^{-1}$. Using \eqref{eq:XYXnextupdate}, it can be shown that $y_{n|n}$ and $x_{n|n}$ are related in the following way. {\small \begin{align} y_{n|n} &= C x_{n|n} + D u_{n} \label{eq:RelateYnnXnn}\\ &\hskip 2em + H K_{g2} \cdot \left(z_{n} - ( C_{m} x_{n|{n-1}} + D_{m} u_{n} ) \right) \nonumber \end{align} }
Comparing eq. \eqref{eq:RelateYnnXnn} with eq. \eqref{eq:IncorrectOutputEq}, used in widely adopted design tools, we see that an additional correction is needed in computing $y_{n|n}$ when the process noise feeds into the measurements, and the traditional equality $y_{n|n} = C x_{n|n} + D u_{n}$ no longer holds. This is because the conditional expectation of the process noise, given a measurement containing its feed-through, is non-zero, \begin{align} \label{eq:ExpectationWnGivenZn} w_{n|n} \doteq \mathbb{E}(w_{n}|z_{n}) = K_{g2} \cdot (z_{n} - ( C_{m} x_{n|{n-1}} + D_{m} u_{n} )) \end{align} and eq. \eqref{eq:RelateYnnXnn} can be expressed as \begin{align}\label{eq:CorrecetdOutputEq} y_{n|n} = C x_{n|n} + D u_{n} + H w_{n|n} \end{align} Note that in the absence of feed-through and of correlation between the process and measurement noises ($H_{m}=0$ and $N=0$), $K_{g2}$ is zero and \eqref{eq:RelateYnnXnn} reverts to the previous implementation \eqref{eq:IncorrectOutputEq}. Thus this correction only affects cases with correlated process and measurement noises or direct feed-through. Note that the state-update and prediction equations from eq. \eqref{eq:XYXnextupdate} exactly match those from previous references and implementations. Thus the only modification proposed here is in the computation of the output estimate, $y_{n|n}$. \section{Steady-state Kalman filter}\label{sec:ssKF} The above time-varying update equations simplify considerably for a steady-state Kalman filter. The variance update equation \eqref{eq:VarupdateXNext} simplifies to the Riccati equation. {\small \begin{align}\label{eq:RiccatiDiscrete} \begin{split} &P = (APA^{T} + GQG^{T}) - \left(A P C_{m}^{T} + G Q H_{m}^{T} + G N \right) \cdot \\ & \hskip 2em \left(C_{m} P C_{m}^{T} + \bar{R} \right)^{-1} \left(A P C_{m}^{T} + G Q H_{m}^{T} + G N \right)^{T} \end{split} \end{align} } and the update equations \eqref{eq:XYXnextupdate} become linear time-invariant once $P$ is fixed, as $K_{g}$, $K_{g2}$ and all the coefficients become constant, and the estimates evolve as per the following dynamical system. {\small \begin{align} \label{eq:SteadyKF} \begin{split} x_{{n+1}|n} &= (A - M_{A,G} \,C_{m}) x_{n|{n-1}} + \begin{bmatrix} M_{A,G} & B - M_{A,G} \,D_{m}\end{bmatrix} \begin{bmatrix} z_{n}\\ u_{n} \end{bmatrix} \\ \begin{bmatrix} x_{n|n}\\ y_{n|n}\end{bmatrix} &= \begin{bmatrix} I - K_{g} \,C_{m} \\ C - M_{C,H} \,C_{m} \end{bmatrix} \cdot x_{n|{n-1}} \\ & \hskip 4em + \begin{bmatrix} K_{g} & -K_{g}D_{m}\\ M_{C,H} & D - M_{C,H} \,D_{m}\end{bmatrix} \cdot \begin{bmatrix} z_{n} \\ u_{n}\end{bmatrix} \end{split} \end{align} }
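For illustration, the steady-state gains and the corrected output-injection matrix can be computed with a few lines of Python. The sketch below is ours (ad hoc names, made-up scalar data, and plain fixed-point iteration of eq. \eqref{eq:RiccatiDiscrete} assumed to converge); it is not the code of any of the toolboxes discussed above.
\begin{verbatim}
# Steady-state gains K_g, K_g2 by iterating the Riccati recursion, and the
# output-injection matrix M_CH = C K_g + H K_g2 used in the corrected update.
import numpy as np

def steady_state_gains(A, G, Cm, Hm, Q, R, N, iters=2000):
    Rbar = R + Hm @ Q @ Hm.T + Hm @ N + N.T @ Hm.T
    P = np.eye(A.shape[0])
    for _ in range(iters):                 # fixed-point iteration of the Riccati eq.
        L = A @ P @ Cm.T + G @ Q @ Hm.T + G @ N
        S = Cm @ P @ Cm.T + Rbar
        P = A @ P @ A.T + G @ Q @ G.T - L @ np.linalg.solve(S, L.T)
    S   = Cm @ P @ Cm.T + Rbar
    Kg  = P @ Cm.T @ np.linalg.inv(S)
    Kg2 = (Q @ Hm.T + N) @ np.linalg.inv(S)
    return P, Kg, Kg2

# Made-up scalar data with feed-through (Hm != 0) and correlation (N != 0):
A  = np.array([[0.9]]);  G  = np.array([[1.0]])
Cm = np.array([[1.0]]);  Hm = np.array([[1.0]])
Q  = np.array([[1.0]]);  R  = np.array([[0.1]]); N = np.array([[0.3]])
P, Kg, Kg2 = steady_state_gains(A, G, Cm, Hm, Q, R, N)
C = np.array([[1.0], [0.0]]); H = np.array([[1.0], [1.0]])
M_CH = C @ Kg + H @ Kg2                    # appears in the y_{n|n} row of the filter
print(P, Kg, Kg2, M_CH, sep="\n")
\end{verbatim}
Setting $H_{m}=0$ and $N=0$ in this sketch makes $K_{g2}$ vanish, recovering the traditional output update, in line with the remark above.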
\section{Remarks on continuous time estimation}\label{sec:contKF} As seen above, in the equations for the discrete time Kalman filter, the only correction needed in order to compute the minimum variance output estimate is in the formula for $y_{n|n}$ in eqs. \eqref{eq:RelateYnnXnn} and \eqref{eq:XYXnextupdate}. The same correction carries over to continuous time systems. In continuous time systems, the state update is continuous, while measurements may be taken at variable times. Thus the prior state estimate $x_{n|n-1}$ of discrete time systems is replaced by the prior estimate $\hat{x}^{-}_{t}$ with variance $\hat{P}^{-}_{t}$, and we can use the second equation from eq. \eqref{eq:XYXnextupdate} to derive the continuous-time analogue of the optimal posterior output estimate, $\hat{y}^{+}_{t}$. \begin{align} \label{eq:RelateYnnXnnCont} \begin{split} \hat{y}^{+}_{t} &= C \hat{x}^{-}_{t} + D u_{t}+ (C K_{g} + H K_{g2}) \\ &\hskip 8em \cdot \left(z_{t} - ( C_{m} \hat{x}^{-}_{t} + D_{m} u_{t} ) \right) \end{split} \end{align} where $K_{g}=P^{-}_{t}C_{m}' (C_{m}P^{-}_{t}C_{m}' + \bar{R})^{-1}$ and $K_{g2} = (Q H_{m}'+N) (C_{m}P^{-}_{t}C_{m}' + \bar{R})^{-1}$, which is the same as the discrete time analog, except that $P^{-}_{t}$ is the variance of the prior state estimate $\hat{x}^{-}_{t}$. \section{A Numerical Example}\label{sec:Example} Consider a simple example of a system with a direct feed-through. {\small \begin{align*} \dot{x} &= -0.1 x + 2 w, \hskip 1em y = \begin{bmatrix}1\\0\end{bmatrix}x + \begin{bmatrix}1\\1\end{bmatrix}w, \hskip 1em z = y(1) + v\\ Q &= \mathtt{var}(w) = 1, \hskip 1em R = \mathtt{var}(v) = 0.1, \hskip 1em N = \mathtt{cov}(w,v) = 0.3 \end{align*} } Note that the first output $y(1)$ has feed-through from the process noise $w$ and is measured with some noise as $z$, while the second output $y(2)=w$ is an estimate of the unknown input expressed as an auxiliary output. Kalman estimators of three kinds were implemented on a discretized version of the above problem with a time-step of $0.1$ seconds. These were: \begin{enumerate}[(a)] \item Time varying Kalman filter as per eq. \eqref{eq:XYXnextupdate}, denoted by legend {\tt new}. \item Steady state Kalman filter as per eq. \eqref{eq:SteadyKF}, denoted by legend {\tt new ss}. \item Steady state Kalman filter using {\tt kalman.m} function in Matlab \cite{MatlabKalmanFnc2015b} controls toolbox, denoted by legend {\tt prev ss}. \end{enumerate} \subsection{Nominal performance} The estimates of the measured output $y(1)$ by the three methods are compared in fig. \ref{fig:estErrY1}, which also compares the estimate errors. It is clear that method (c) fails to capture the high frequencies present in the measured output, and only captures the contribution due to the state, which evolves smoothly. The estimate error variance using both methods (a) and (b) is 0.0910, which matches eq. \eqref{eq:VarupdateYmeas}. This is 10 times lower than for method (c), whose error variance is 0.99, close to $H_{m}QH_{m}^{T}+R$ as claimed in \cite{kwakernaak1972linear}. The estimates from methods (a) and (b) are nearly identical after the initial transient. This matches the common intuition that the time-varying Kalman filter rapidly converges to the steady-state Kalman filter, as the state variance $P_{n}$ approaches the Riccati solution \eqref{eq:RiccatiDiscrete}. \begin{figure} \caption{Measured output estimation $y(1)$} \label{fig:estErrY1} \end{figure} Fig. \ref{fig:estErrY2} compares the unknown input estimates by the three methods, and again shows much better performance by the new method in unknown input estimation. \begin{figure} \caption{Unknown input estimation $y(2)$} \label{fig:estErrY2} \end{figure} Fig. \ref{fig:estErrX1} compares the state estimates by the three methods; they show a near-perfect match, confirming that state estimation by the new method per eq. \eqref{eq:XupdateNext} is done in exactly the same manner as in method (c). \begin{figure} \caption{State estimation $x(1)$} \label{fig:estErrX1} \end{figure} \subsection{Robustness to unmodeled bias in disturbance} Usually it is hard to guarantee robustness of an optimal estimator to unmodeled disturbance dynamics. Still, as an example, the three estimators above were tested against an unmodeled random walk drift in addition to the white noise in the disturbance $w$.
The {\tt true} signal in fig. \ref{fig:estErrWithWBiasY2} shows this disturbance. Fig. \ref{fig:estErrWithWBiasY1} compares the $y(1)$ estimation and shows a great improvement by the new estimators over the previous method (c), which has a clear drift in the estimate error. Fig. \ref{fig:estErrWithWBiasY2} compares the $y(2)$ or unknown input estimation. Since $y(2)=0\cdot x + w$, the estimate from method (c) is zero and the estimate error is large. On the same problem, the new method estimates the drift and the high-frequency disturbance quite well. \begin{figure} \caption{Measured output $y(1)$ estimation with unmodeled bias in process noise} \label{fig:estErrWithWBiasY1} \end{figure} \begin{figure} \caption{Unknown input $y(2)$ estimation with unmodeled bias in process noise} \label{fig:estErrWithWBiasY2} \end{figure} \section{Concluding Remarks} In summary, until now, due to a gap in the output estimate update rules of the Kalman filter and the sub-optimal implementation, per eq. \eqref{eq:IncorrectOutputEq}, within widely used design tools (\cite{MatlabKalmanFnc2015b}, \cite{LabviewKalmanFnc}, \cite{MathematicaKalmanFnc}, \cite{MapleKalmanFnc}, \cite{OctaveKalmanFnc}), designers had no way of accommodating direct feed-through of even a part of the process noise into the measurements, except by expanding the state-space to include noise states, which may or may not be physical. But this workaround left much to be desired, as it was constrained by smoothness requirements on the unknown input estimates and thus incurred large estimation errors at high frequencies and reduced the estimation bandwidth for such outputs. Bandwidth could only be increased at the cost of simplicity or robustness, due to increased model order or new tuning parameters. The proposed simple correction to the output estimates, per eq. \eqref{eq:RelateYnnXnn}, fills this gap as it makes use of the posterior estimate of the unknown input computed from measurements which contain its feed-through. Thus it guarantees a minimum variance output estimate. The performance of the estimator has been demonstrated through a simple numerical example. The robustness to unmodeled inputs (e.g. bias or drift) also seems to be greatly improved by this correction. Such an estimator is directly useful for the problem of estimating thrust from accelerometers mounted on flexible structures, especially if the thrust is broad-band and hard to model physically. The paper also makes the case for correcting the implementations in commonly used toolboxes for control system design, as all the surveyed implementations/documentations suffer from the sub-optimality in the output estimation for problems with direct feed-through or correlated process and measurement noises. \section*{ACKNOWLEDGMENT} Many thanks to Cecilia Mazzaro, Xiongzhe Huang, Hullas Sehgal and Kirk Mathews from GE for their review which refined this work, and to Pascal Gahinet and Nirja Mehta from Mathworks, who reviewed, acknowledged and accepted this improvement, and implemented it in Matlab release 2016a. \end{document}
\begin{document} \title[Complexified phase spaces, IVRs, and the accuracy of SC propagation]{Complexified phase spaces, initial value representations, and the accuracy of semiclassical propagation} \author{Gabriel M.~Lando} \address{Max Planck Institute for the Physics of Complex Systems, N{\"o}thnitzer Stra{\ss}e 38, D-01187 Dresden, Germany} \ead{[email protected]} \begin{indented} \item[]June 2020 \end{indented} \begin{abstract} Using phase-space complexification, an Initial Value Representation (IVR) for the semiclassical propagator in position space is obtained as a composition of inverse Segal-Bargmann (S-B) transforms of the semiclassical coherent state propagator. The result is shown to be free of caustic singularities and identical to the Herman-Kluk (H-K) propagator, found ubiquitously in physical and chemical applications. We contrast the theoretical aspects of this particular IVR with the van~Vleck-Gutzwiller (vV-G) propagator and one of its IVRs, often employed in order to evade the non-linear ``root-search'' for trajectories required by vV-G. We demonstrate that bypassing the root-search comes at the price of serious numerical instability for all IVRs except the H-K propagator. We back up our theoretical arguments with comprehensive numerical calculations performed using the homogeneous Kerr system, about which we also unveil some unexpected new phenomena, namely: (1) the observation of a clear mark of half the Ehrenfest time in semiclassical dynamics; and (2) the accumulation of trajectories around caustics as a function of increasing time (dubbed ``caustic stickiness''). We expect these phenomena to be more general than for the Kerr system alone. \end{abstract} \section*{Introduction} The fundamental parameter of quantum mechanics, namely Planck's constant or its reduced form $\hbar$, is an extremely small quantity. It did not take long after the Schr\"odinger equation was discovered for approximations exploiting this smallness to be developed, especially by van Vleck in his seminal 1928 paper \cite{VanVleck1928}. An arguably surprising consequence of such asymptotic approximations was that the leading order terms in $\hbar$ involved quantities which \emph{made sense} in classical mechanics, such as actions, trajectories and catastrophes \cite{Maslov1981}. This led to the new field being called \emph{semiclassical mechanics}, evincing that for the first time a link between the classical and quantum realms had been devised. Semiclassical objects such as the van~Vleck-Gutzwiller (vV-G) propagator, for instance, established a connection between classical trajectories and quantum superposition, being able to reproduce interference patterns using solely classical data as input. Besides providing quantum mechanics with some geometrical meaning, semiclassical methods are nowadays used to model a plethora of physical and chemical phenomena. The Herman-Kluk (H-K) propagator is an alternative to vV-G's, \emph{i.e.}~another expression for the semiclassical propagator in the position representation, discovered in the mid 1980s \cite{Herman1984}. Used either raw or as a starting point for other approximation methods, it is the most popular Initial Value Representation (IVR) in chemistry, although the development of benchmark semiclassical methods is truly non-stop (see \cite{Heller1991-2,Rost2004,Campolieti1997,Liu2015,Curchod2018,Micciarelli2019,Gottwald2019,Grossmann2020} for some examples).
In addition to providing high-accuracy results for the complicated processes inherent to the chemical sciences, the H-K propagator was also scrutinized by the physical community, who attested to its remarkable accuracy and ease of implementation (\emph{e.g.}~\cite{Schoendorff1998,Rost1999,Maitra2000,Zagoya2012,Lando2019-3}). The reason behind such ease is mostly that, as with all IVRs, it sums over all initial positions and momenta instead of root-searching for specific trajectories, a procedure required by cruder propagators such as vV-G's. The H-K propagator has been shown to be more accurate than other IVRs \cite{Kay1993}, but at present a comprehensive analysis of the reasons behind such accuracy appears to be lacking -- especially one that unveils the role played in the approximations by \emph{caustics}, which are divergences present in most semiclassical propagators. In the first part of this manuscript we collect and reinterpret several scattered results in the physical, chemical and mathematical literatures in order to understand the theory behind the H-K propagator. The main step is to identify this propagator as a sequence of two inverse Segal-Bargmann (S-B \footnote{The Segal-Bargmann space has many names, including: Fock, Fock-Bargmann, Fock-Cook, etc. We choose Segal-Bargmann because this is usually the name given to the transformation mapping the position representation to the coherent state one.}) transforms of the semiclassical propagator in the S-B representation (\emph{i.e.}~the Weyl-ordered coherent state propagator). This is similar to what was done in \cite{Grossmann1998}, although we do not perform any integrals or use steepest descent methods, relying instead on the usual ``IVR trick'' devised by Miller \cite{Miller2001}. The H-K propagator is then shown to be a mere extension of the semiclassical quantization of linear hamiltonian flows to non-linear ones, the only difference between it and vV-G being the representation chosen. This seemingly small difference, however, is responsible for a major contrast between these propagators, as we can then invoke an earlier result that states that the S-B representation does not suffer from caustic singularities as the position one does \cite{Littlejohn1986}. Besides, although we contrast the S-B and position representations, our conclusions generalize to all semiclassical descriptions plagued by caustic singularities, such as the momentum \cite{Littlejohn1992}, mixed \cite{Maslov1981,Kirkwood1933} and even Weyl-Wigner ones \cite{Ozorio1998}. In this manuscript's second part we proceed to visualize the impact of caustics on semiclassical propagation. As our toy model we choose the homogeneous Kerr system, which has a 4th-order hamiltonian and has already been investigated in several instances (\emph{e.g.}~\cite{Yurke1986,Averbukh1989,Raul2012,Lando2019}). This system is particularly tractable due to both its classical and quantum dynamics being analytical, but such analyticity does not prevent it from developing a very intricate caustic web, rendering it the perfect laboratory for our purposes. Comparisons are given at all levels of ``purity'', that is, from the semiclassical propagators themselves to quantities obtained by employing them as integral kernels, and it is found that their contrast decreases with each performed integral.
In particular, an accumulation of defects in the semiclassical propagator is seen due to caustics in the position representation, while for the S-B representation it is sometimes hard even to distinguish the exact quantum result from its semiclassical approximation. For wave functions and autocorrelations, it is seen that caustics are responsible for normalization loss and oscillatory errors. To show that this is not due to the vV-G propagator being a sum and H-K an IVR, we also provide comparisons with results obtained from an IVR based on the position representation. We then see, as predicted in the first part of the manuscript, that a collateral effect of avoiding the root-search in the vV-G propagator is that its IVR requires much denser integration grids in order to achieve equivalent results. We also identify a possibly new phenomenon, in which the crossings of trajectories and caustics start to take longer as time increases, dubbing it ``caustic stickiness''. When generalizable, it might be a fundamental mechanism behind the long-time failure of semiclassical propagators based on representations that contain caustics. \\ The first part of the manuscript is composed of its initial three sections, and the second part deals exclusively with Sec.~\ref{sec:kerr}. The organization of the sections is as follows. In Sec.~\ref{sec:comp} we briefly review the theory of phase space complexification, differentiating it from truly complex phase spaces. Sec.~\ref{sec:lin} deals mainly with 1-parameter groups of symplectic matrices and their quantization in position and S-B representations, and it is here that the problem of caustics and the solution provided by the complexified variables are discussed. In Sec.~\ref{sec:semi} we review IVRs and generalize the linear theory of Sec.~\ref{sec:lin} to the semiclassical realm, discussing the consequences of using IVR techniques in representations with and without caustics. We then move on to the numerical analysis performed in Sec.~\ref{sec:kerr}, in which the points raised in the first part are visualized. A brief discussion of our results is then included in Sec.~\ref{sec:disc}, and we finish the manuscript with the conclusions of Sec.~\ref{sec:conc}. Three appendices are included: in \ref{App:A} we reproduce a proof of the non-singularity of an important matrix, which happens to be directly responsible for the S-B representation having no caustics; \ref{App:B} presents a bit of the mathematics of ``Miller's trick'', \emph{i.e.}~the substitution used to obtain IVRs from raw propagators; and finally \ref{App:C} shows how the root-search for the Kerr system was numerically implemented in Sec.~\ref{sec:kerr}. \section{Complexification}\label{sec:comp} In this section we present some properties of a particular complexification of $\mathbb{R}^{2n}$, together with its effect on relevant classical objects such as generating functions and canonical forms. The following conventions are used throughout the manuscript: \begin{itemize} \item Vectors are bold, like $\mathbf{p}$ and $\bzeta$, and matrices are upper case, like $\mathcal{M}$, $A$ and $\Gamma$. Latin and Greek scripts stand for real and complex quantities, respectively.
\item The space $\mathbb{R}^{2n}$ is decomposed as a direct product of positions and momenta, for which we use condensed coordinates organized as \begin{eqnarray} (q_1, \dots, q_n,p_1,\dots,p_n) &= (\mathbf{q},\mathbf{p}) \in \mathbb{R}^n \oplus \mathbb{R}^n \sim \mathbb{R}^{2n} \\ (\zeta_1, \dots, \zeta_n,\zeta^*_1,\dots,\zeta^*_n) &= (\bzeta,\bzeta^*) \in \mathbb{C}^n \oplus \mathbb{C}^n \sim \mathbb{C}^{2n} \, . \end{eqnarray} Just as the coordinates are condensed, so are derivatives: \begin{eqnarray} \frac{\partial}{\partial \mathbf{q}} = \left(\frac{\partial}{\partial q_1}, \dots, \frac{\partial}{\partial q_n} \right) \, , \qquad \frac{\partial}{\partial \bzeta} = \left(\frac{\partial}{\partial \zeta_1}, \dots, \frac{\partial}{\partial \zeta_n} \right) \, . \end{eqnarray} The same logic applies to differentials/measures. We will sometimes use derivatives and differentials to represent the canonical bases of the tangent and cotangent bundles of $\mathbb{R}^{2n}$ at the origin, which, being isomorphic to $\mathbb{R}^{2n}$, can be just thought of as $\mathbb{R}^{2n}$ itself. \item The symbol ``$\cdot$'' represents an element-by-element product, \emph{e.g.}~$\mathbf{p} \cdot d\mathbf{q} = p_1 \, dq_1 + \dots + p_n \, dq_n$. \item The wedge product is term-by-term, \emph{i.e.}~$d\mathbf{q} \wedge d\mathbf{p} = dq_1 \wedge dp_1 + \dots + dq_n \wedge dp_n$. \item We fix $\hbar=1$ throughout the whole manuscript, and time is always a real parameter. The hamiltonian functions are always time-independent. \end{itemize} \subsection{The embedding $\mathbb{R}^{2n} \hookrightarrow \mathbb{C}^{2n}$}\label{subsec:emb} The fact that $\mathbb{C}^{n} \sim \mathbb{R}^n \oplus i \mathbb{R}^n$ renders the employment of isomorphisms such as $\mathbf{z} \propto \mathbf{p} + i \mathbf{q}$ quite common. However, as is well-known in the mathematical literature, it is often more enlightening to embed $\mathbb{R}^{2n}$ into $\mathbb{C}^{2n}$ when dealing with symplectic geometry \cite{FollandBook,NazaiBook}, as we will now review. We start by defining the \emph{complexification} as the map \begin{eqnarray} \mathcal{W} : \mathbb{R}^{2n} \,\,\, &\longrightarrow \quad \! \mathbb{C}^{2n} \notag \\ \quad (\mathbf{q},\mathbf{p}) &\longmapsto (\bzeta, \bzeta^*) = \mathcal{W} (\mathbf{q},\mathbf{p}) \, , \quad \mathcal{W} = \frac{1}{\sqrt{2}} \begin{pmatrix} iI & I \\ -iI & I \end{pmatrix} \, , \label{comp} \end{eqnarray} with its inverse, the \emph{de-complexification}, given by \begin{eqnarray} \!\! \mathcal{W}^{-1} : \mathbb{C}^{2n} &\longrightarrow \quad \! \mathbb{R}^{2n} \notag \\ \quad \,\,\,\, (\bzeta, \bzeta^*) &\longmapsto (\mathbf{q},\mathbf{p}) = \mathcal{W}^{-1} (\bzeta, \bzeta^*) \, , \quad \mathcal{W}^{-1} = \frac{1}{\sqrt{2}} \begin{pmatrix} -iI & iI \\ I & I \end{pmatrix} \, . \label{decomp} \end{eqnarray} To see why the embedding given by \eqref{comp} is well suited from the symplectic point of view, we recall that a linear operator $\mathcal{S}$ fulfilling \begin{equation} \mathcal{S}^T \mathcal{J} \mathcal{S} = \mathcal{J} \, , \quad \det \mathcal{S} = 1 \,, \quad \mathcal{J} = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix} \, , \end{equation} is called a \emph{symplectic matrix}, \emph{linear canonical transformation} or \emph{linear symplectomorphism}, and the set formed by all such operators composes the so-called \emph{symplectic group}, usually denoted by $\text{Sp}(n)$.
The complexification $\mathcal{W}$, however, obeys the slightly different relation \begin{equation} \mathcal{W}^T \mathcal{J} \mathcal{W} = i \mathcal{J} \,, \label{symp} \end{equation} such that we can consider it to be \emph{$\lambda$-symplectic}, \emph{i.e.}~a linear symplectomorphism with multiplier $\lambda=i$. In this way, the complexification of any linear operator $T$ on $\mathbb{R}^{2n}$ acts on $\mathbb{C}^{2n}$ through the similarity transformation \begin{eqnarray} _\mathbb{C} : \text{Gl}(\mathbb{R}^{2n}) &\longrightarrow \text{Gl}(\mathbb{C}^{2n}) \notag \\ \,\, \qquad T &\longmapsto \quad \, T_\mathbb{C} = \mathcal{W} \, T \, \mathcal{W}^{-1} \, . \end{eqnarray} Evidently, the complexification can also be pushed forward to act on general differential forms on $\mathbb{R}^{2n}$. In particular, it preserves the canonical form $\omega = d \mathbf{q} \wedge d\mathbf{p}$, but again with a multiplier $\lambda=i$: \begin{equation} \omega = d\mathbf{q} \wedge d \mathbf{p} \quad \Longrightarrow \quad \omega_\mathbb{C} = i \, d\bzeta^*\! \wedge d\bzeta \, , \label{forms} \end{equation} as can be verified by direct substitution using \eqref{comp} or \eqref{decomp}. \subsection{Complexified hamiltonian fields and dynamics}\label{subsec:dyn} We now describe how the equations of motion change under the complexification embedding defined earlier. From \eqref{comp}, it is clear that the canonical basis vectors change to \begin{equation} \frac{\partial}{\partial \mathbf{p}} = \frac{1}{\sqrt{2}} \left( \frac{\partial}{\partial \bzeta} + \frac{\partial}{\partial \bzeta^*}\right) \, , \quad \frac{\partial}{\partial \mathbf{q}} = \frac{i}{\sqrt{2}} \left( \frac{\partial}{\partial \bzeta} - \frac{\partial}{\partial \bzeta^*}\right) \, , \end{equation} with inverse \begin{equation} \frac{\partial}{\partial \bzeta} = \frac{1}{\sqrt{2}} \left( \frac{\partial}{\partial \mathbf{p}} - \frac{i \partial}{\partial \mathbf{q}}\right) \, , \quad \frac{\partial}{\partial \bzeta^*} = \frac{1}{\sqrt{2}} \left( \frac{\partial}{\partial \mathbf{p}} + \frac{i \partial}{\partial \mathbf{q}}\right) \, . \end{equation} Then, either by direct substitution or using the pushforward of $\mathcal{W}$, we see that the gradient of a test function $f$ is mapped to \begin{equation} \frac{\partial f}{\partial \mathbf{q}} \cdot \frac{\partial}{\partial \mathbf{q}} + \frac{\partial f}{\partial \mathbf{p}} \cdot \frac{\partial}{\partial \mathbf{p}} = \nabla f \quad \stackrel{\mathbb{C}}{\longmapsto} \quad \nabla_\mathbb{C} f_\mathbb{C} = \frac{\partial f_\mathbb{C}}{\partial \bzeta^*} \cdot \frac{\partial}{\partial \bzeta} + \frac{\partial f_\mathbb{C}}{\partial \bzeta} \cdot \frac{\partial}{\partial \bzeta^*} \, . \label{nablac} \end{equation} Notice the inversion taking place in the complexified gradient. We then see that Hamilton's equations transform as \begin{equation} (\dot{\mathbf{q}},\dot{\mathbf{p}}) = \mathcal{J} \nabla H(\mathbf{q},\mathbf{p}) \quad \stackrel{\mathbb{C}}{\longmapsto} \quad (\dot{\bzeta}, \dot{\bzeta^*}) = \mathcal{J}_\mathbb{C} \nabla_\mathbb{C} H_\mathbb{C}(\bzeta, \bzeta^*) \, , \label{compham} \end{equation} where the complexified canonical matrix is symmetric and given by \begin{equation} \mathcal{J}_\mathbb{C} = \mathcal{W} \mathcal{J} \mathcal{W}^{-1} = i \begin{pmatrix} I & 0 \\ 0 & -I \end{pmatrix} \, . \label{Jc} \end{equation} Since $(\mathbb{R}^{2n},\omega)$ is identified here as the phase space, we will refer to $(\mathbb{C}^{2n},\omega_\mathbb{C})$ as the \emph{complexified} phase space.
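These conventions are straightforward to verify numerically. The following Python sketch (our own illustration for $n=1$; the variable names are ad hoc) checks the $\lambda$-symplectic relation \eqref{symp} and confirms that, for the harmonic oscillator $H_\mathbb{C}(\bzeta,\bzeta^*)=\bzeta\bzeta^*$, the complexified flow $\bzeta(t)=e^{it}\bzeta(0)$ de-complexifies to the familiar rotation in the real phase plane.
\begin{verbatim}
# Numerical check of the complexification conventions (n = 1).
import numpy as np

In = np.eye(1)
W = (1/np.sqrt(2)) * np.block([[1j*In,  In],
                               [-1j*In, In]])
J = np.block([[0*In,  In],
              [-In, 0*In]])
print(np.allclose(W.T @ J @ W, 1j*J))        # lambda-symplectic with lambda = i

# Harmonic oscillator H = (p^2 + q^2)/2, i.e. H_C = zeta * zeta^*:
q0, p0, t = 0.3, -1.2, 0.7
zeta0 = (W @ np.array([q0, p0]))[0]          # second component is its conjugate
zeta_t = np.exp(1j*t) * zeta0                # solution of d(zeta)/dt = i*zeta
qp_t = np.linalg.solve(W, np.array([zeta_t, np.conj(zeta_t)])).real
print(np.allclose(qp_t, [q0*np.cos(t) + p0*np.sin(t),
                         p0*np.cos(t) - q0*np.sin(t)]))
\end{verbatim}
Both checks return \texttt{True}, in agreement with \eqref{symp} and \eqref{compham}.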
We now make a fundamental distinction between what the words ``complex'' and ``complexified'' refer to in this manuscript. There are two ways of interpreting the mapping in \eqref{compham}: The first is to consider it going left-to-right (as written), with real variables just \emph{complexified} and no information gained or lost in working with complex dynamics; and the second is to take complex dynamics as more fundamental, such that the reversed right-to-left map becomes a projection into real coordinates. If one considers a purely \emph{complex} phase space, with $\bzeta$ and $\bzeta^*$ independent of each other, the inverse mapping from complex to real will not be bijective: Complex dynamics \emph{is richer} and involves phenomena impossible to achieve in real phase space. This is due to the existence of trajectories projecting to the same real ones, but with different imaginary parts, allowing for the semiclassical treatment of quantum processes forbidden in the real case, a prominent example being deep tunneling \cite{Aguiar2007,Heller1977,Klauder1994,Ozorio2010}. Thus, in a way, purely complex phase spaces do not really model classical mechanics, but effectively extend it (this extension can be identified with complex times \cite{Ozorio2010}). They also extend quantum mechanics, since complex positions and momenta do not necessarily fulfill the Poisson bracket identity $\{\mathbf{q},\mathbf{p}\}=I$, such that canonical quantization (and others) is not obvious and non-hermitian operators might be required \cite{Graefe2012}. In particular, the Heisenberg group, which underlies both classical and quantum mechanics, is extended. Thus, whenever we say ``complexification'', it must be understood that we are not referring to this general scenario of pure complex phase spaces, only to a mere parametrization in terms of complex variables -- although we will soon see its consequences are rather profound. \subsection{Generating functions on complexified phase spaces}\label{subsec:gen} We begin with a simple observation regarding an often ignored fact in the literature: There is no generating function that can be written as $S(\mathbf{q},\mathbf{p})$. If a function is responsible for implicitly defining a symplectomorphism, \emph{i.e.}~a canonical transformation, then it cannot have its domain fixed on the initial variables -- it must also include the final ones. This is clearly expressed in the well-known generating functions of Goldstein \cite{GoldsteinBook}, which have as their domains four different position-momentum pairs: $(\mathbf{q}',\mathbf{q})$, $(\mathbf{q}',\mathbf{p})$, $(\mathbf{p}',\mathbf{q})$ and $(\mathbf{p}',\mathbf{p})$, where primed variables are final and non-primed, initial. The generating function usually denoted by $S(\mathbf{q},\mathbf{p};t)$ is actually the extended position generating function given by \begin{equation} S(\mathbf{q}',\mathbf{q};t) = \int_\mathbf{q}^{\mathbf{q}'} \mathbf{p} \cdot d\mathbf{Q} - \int_0^t d\tau H(\mathbf{q},\mathbf{p}) \, , \label{genqq} \end{equation} where the first integral is along the path joining $\mathbf{q}$ at $\tau=0$ to $\mathbf{q}'$ at $\tau=t$ by the hamiltonian flow (that is, by the solution to Hamilton's equations). Naturally, we can also write the function above as \begin{equation} S(t) = \int_0^t d\tau \left[ \mathbf{p} \cdot \dot{\mathbf{q}} - H(\mathbf{q},\mathbf{p}) \right] \, , \label{genqt} \end{equation} since $d\mathbf{Q}$ is a function of time.
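As a concrete example (ours, not drawn from the cited references), for the harmonic oscillator $H=(p^2+q^2)/2$ the integral \eqref{genqt} evaluated along a trajectory reproduces the well-known closed form $S(q',q;t)=\left[(q^2+q'^2)\cos t-2qq'\right]/(2\sin t)$. The Python sketch below performs this check by simple quadrature:
\begin{verbatim}
# Check of S(q',q;t) for H = (p^2 + q^2)/2 along an exact trajectory.
import numpy as np

q0, p0, t = 0.4, 1.1, 0.9
tau = np.linspace(0.0, t, 20001)
q = q0*np.cos(tau) + p0*np.sin(tau)       # exact solution of Hamilton's equations
p = p0*np.cos(tau) - q0*np.sin(tau)
H = 0.5*(p**2 + q**2)                     # conserved energy
S_numeric = np.trapz(p*p - H, tau)        # p*dq/dtau = p^2 along the trajectory

q1 = q[-1]                                # final position q'
S_closed = ((q0**2 + q1**2)*np.cos(t) - 2*q0*q1) / (2*np.sin(t))
print(S_numeric, S_closed)                # the two values agree
\end{verbatim}
One may also verify numerically that $\partial S/\partial q' = p'$ and $\partial S/\partial q = -p$, which is the defining property of the generating function.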
Expressions such as \eqref{genqq}, however, will be preferred whenever there is inherent interest in the variables with respect to which the generating function can be differentiated, \emph{e.g.}~for \eqref{genqq} we can use derivatives with respect to positions to obtain momenta \cite{ArnoldBook}. The numerical value of \eqref{genqq} and \eqref{genqt} is, of course, the same. The 1-form defined by the exterior derivative of \eqref{genqq}, namely \begin{equation} \widetilde{\alpha} = \mathbf{p} \cdot d\mathbf{q} - \mathbf{p}' \cdot d\mathbf{q}' - H(\mathbf{q},\mathbf{p}) \, dt \, , \qquad \widetilde{\alpha} = -dS(\mathbf{q}',\mathbf{q};t) \, , \end{equation} just as the generating function itself, does not treat positions and momenta equally. This 1-form is a \emph{tautological form} on the extended product manifold $\mathbb{R}^{4n} \times \mathbb{R} = \{\mathbf{q}',\mathbf{p}',\mathbf{q},\mathbf{p};t\}$ \cite{AnaBook,SpivakBook}. It is a primitive for the extended canonical form \begin{equation} \widetilde{\omega} = d\mathbf{q} \wedge d\mathbf{p} - d\mathbf{q}' \wedge d\mathbf{p}' + dH \wedge dt \, , \label{ext} \end{equation} that is, $d\widetilde{\alpha} = -\widetilde{\omega}$. Virtually all of classical mechanics is encoded in the relations between generating functions, tautological and canonical forms, and particularly important to us is the fact that the extended canonical form has an infinite number of primitives, related through Legendre transforms. This abundance allows for classical mechanics to be expressed in terms of a multitude of coordinates, and in semiclassical mechanics it is directly responsible for us being able to express wave functions using positions, momenta, and others. For instance, the 1-form $ \mathbf{q} \cdot d\mathbf{p} - \mathbf{q}' \cdot d\mathbf{p}' - H(\mathbf{q},\mathbf{p}) \, dt$ will give birth \footnote{Generating functions can only be defined on the kernel of $\widetilde{\omega}$, \emph{i.e.}~the \emph{lagrangian} submanifolds $X$ for which $\widetilde{\omega}|_X=0$. To see this, note $\widetilde{\omega}|_X = 0 \Longrightarrow d\widetilde{\alpha} |_X = 0 \Longrightarrow \widetilde{\alpha} |_X = dS$. The graphs of symplectomorphisms are all lagrangian submanifolds w.r.t.~the extended canonical form \cite{AnaBook}. Since the hamiltonian flow is a family of symplectomorphisms w.r.t.~time, the generating functions used in this manuscript can always be defined.} to a generating function $S(\mathbf{p}',\mathbf{p};t)$ that involves momenta, not positions. This momentum generating function will describe dynamics just as its position equivalent, since they both obey the Hamilton-Jacobi equation. The unequal treatment of position and momentum used to obtain $S(\mathbf{q}',\mathbf{q};t)$ and $S(\mathbf{p}',\mathbf{p};t)$, however, will result in evolution being described using the position and momentum representations, but the symmetrized 1-form \begin{equation} \widetilde{\alpha}_\text{W} = \left( \frac{\mathbf{p} \cdot d\mathbf{q} - \mathbf{q} \cdot d\mathbf{p}}{2} \right) + \left( \frac{\mathbf{q}' \cdot d\mathbf{p}' - \mathbf{p}' \cdot d\mathbf{q}'}{2} \right) - H(\mathbf{q},\mathbf{p}) \, dt \, , \label{sw} \end{equation} while still a primitive for $\widetilde{\omega}$, places momentum and position on an equal footing and describes the evolution using the \emph{Segal-Bargmann representation}, to be discussed later on \cite{HallBook}. The label chosen for the 1-form in \eqref{sw} reflects its connection to Weyl (or symmetric \cite{Fierro2006,Aguiar2005}) quantization.
Its most symmetric form in complexified coordinates is \begin{align} \widetilde{\alpha}_{\text{W}, \mathbb{C}} &= i \left[ \left( \frac{\bzeta^* \cdot d\bzeta - \bzeta \cdot d\bzeta^*}{2} \right) - \left( \frac{\bzeta'^* \cdot d\bzeta' - \bzeta' \cdot d\bzeta'^*}{2} \right) \right] - H_\mathbb{C}(\bzeta,\bzeta^*) \, dt \, , \label{alpha} \end{align} which is a primitive of the complexification of \eqref{ext}, namely \begin{equation} \widetilde{\omega}_\mathbb{C} = i\, d\bzeta^* \wedge d\bzeta - i\, d\bzeta'^* \wedge d\bzeta' + dH_\mathbb{C} \wedge dt \, . \end{equation} The 1-form in \eqref{alpha} is a function of twice as many variables as are needed, since half of them are dummy and can be obtained by complex conjugation. We can then choose any pair of initial and final variables and simplify the expression above using complexified Legendre transforms. The pair $(\bzeta^*, \bzeta')$ was favored by Weissman, who was the first to study these generating functions in physics \cite{Weissman1982}. Although this choice offers no particular advantage, it is unarguably favored by both the physics and mathematics literatures \cite{Littlejohn1986,FollandBook}, and by adopting it we can compare our calculations with earlier works more easily. We then employ the substitutions $ \bzeta^* \cdot d\bzeta = d(|\bzeta|^2) - \bzeta \cdot d\bzeta^*$ and $\bzeta' \cdot d\bzeta'^* = d(|\bzeta'|^2) - \bzeta'^* \cdot d\bzeta'$ to isolate the differentials in \eqref{alpha} as functions of $(\bzeta^*, \bzeta')$, resulting in \begin{align} \widetilde{\alpha}_{\text{W},\mathbb{C}} &= i \left\{ d \left( \frac{ \vert \bzeta' \vert^2 + \vert \bzeta \vert^2 }{2}\right) - \left[ \bzeta \cdot d\bzeta^* + \bzeta'^* \cdot d \bzeta' \right] \right\} - H_\mathbb{C}(\bzeta,\bzeta^*) \, dt \, . \label{alpha2} \end{align} The generating function is then given by \begin{align} S_{\text{W},\mathbb{C}}(\bzeta',\bzeta^*;t) = i \left( \frac{ \vert \bzeta' \vert^2 + \vert \bzeta \vert^2}{2} \right) + F_{\text{W},\mathbb{C}}(\bzeta',\bzeta^*;t) \, , \label{gen2} \end{align} where we have defined \begin{align} F_{\text{W},\mathbb{C}}(\bzeta',\bzeta^*;t) = - i \int \left( \bzeta \cdot d\bzeta^* + \bzeta'^* \cdot d\bzeta' \right) - \int_0^t d\tau H_\mathbb{C}(\bzeta,\bzeta^*) \, . \label{genz} \end{align} The reason for highlighting this function is that it fulfills \begin{equation} \frac{\partial F_{\text{W},\mathbb{C}}(\bzeta',\bzeta^*;t)}{\partial \bzeta^*} = -i \bzeta \, ; \quad \frac{\partial F_{\text{W},\mathbb{C}}(\bzeta',\bzeta^*;t)}{\partial \bzeta'} = -i \bzeta'^* \, , \quad \frac{\partial F_{\text{W},\mathbb{C}}(\bzeta',\bzeta^*;t)}{\partial t} + H_\mathbb{C}(\bzeta,\bzeta^*) = 0 \, , \label{condC} \end{equation} such that, in agreement with \cite{Weissman1982}, $F_{\text{W},\mathbb{C}}(\bzeta',\bzeta^*;t)$ is the generating function of the complexified evolution from $(\bzeta,\bzeta^*)$ to $(\bzeta',\bzeta'^*)$, written in terms of the pair $(\bzeta^*, \bzeta')$. The somewhat obscure expression in \eqref{genz} can be brought to a simpler form by changing the pair to $(\bzeta,\bzeta')$ using $\bzeta^* \cdot d\bzeta = d(|\bzeta|^2) - \bzeta \cdot d\bzeta^*$, resulting in \begin{align} F_{\text{W},\mathbb{C}}(\bzeta,\bzeta';t) = i \int_{\bzeta}^{\bzeta'} \bzeta^* \cdot d\mathbf{Z} - \int_0^t d\tau H_\mathbb{C}(\bzeta,\bzeta^*) - i |\bzeta|^2 \, , \label{gennotpop} \end{align} which is sometimes found in the literature \cite{Weissman1982,Weissman1983}.
The simplest expression for the complexified generating function, however, arises by writing either the above or \eqref{gen2} without isolating absolute values, such that \begin{align} S_{\text{W}, \mathbb{C}} (t) &= \int_0^t d\tau \left[ \frac{i}{2} \left( \bzeta \cdot \dot{\bzeta^*} - \bzeta^* \cdot \dot{\bzeta} \right) - H_\mathbb{C}(\bzeta,\bzeta^*) \right] \, . \label{genpop} \end{align} This function is clearly the most immediate primitive of \eqref{alpha} and is by far the most popular in the literature (\emph{e.g.}~\cite{Klauder1994,Aguiar2005,Klauder1979}). Although simpler and numerically identical to \eqref{gen2}, it makes the connections we shall develop in the following sections harder to draw. \section{Linear theory}\label{sec:lin} The simplest example of a symplectomorphism is found in a linear setting, with mappings given by $\mathcal{M} \in \text{Sp}(n)$. Their quantization results in the group of unitary operators known as the \emph{metaplectic group}. If time-dependence is allowed, $\mathcal{M}(t)$ forms a 1-parameter family of linear symplectomorphisms, \emph{i.e.}~the path $\mathcal{M}(t)$ belongs to $\text{Sp}(n)$ for all $t \in \mathbb{R}$. In order to carry out quantization, however, it is necessary to choose a representation, and we shall now show that not all representations are created equal. \subsection{Linear symplectomorphisms}\label{subsec:lin} Consider linear symplectomorphisms of the form \begin{equation} (\mathbf{q}',\mathbf{p}') = \mathcal{M} (\mathbf{q},\mathbf{p}) \, , \quad \mathcal{M} = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \, , \quad A,\,B,\,C,\,D \in \mathbb{R}^{n \times n} \, , \label{linqp} \end{equation} with positions and momenta necessarily real. The complex equivalent of the above system is obtained by employing the similarity transformation $\mathcal{M}_\mathbb{C} = \mathcal{W} \mathcal{M} \mathcal{W}^{-1}$: \begin{equation} (\bzeta', \bzeta'^*) = \mathcal{M}_\mathbb{C} (\bzeta, \bzeta^*) \, , \quad \mathcal{M}_\mathbb{C} = \begin{pmatrix} \Lambda & \Gamma \\ \Gamma^* & \Lambda^* \end{pmatrix} \, , \label{linzz} \end{equation} with \begin{equation} \Lambda = \frac{1}{2} \left[ \left( D + A\right) + i \left( C - B \right) \right] \, , \quad \Gamma = \frac{1}{2} \left[ \left( D - A \right) - i \left( C + B \right) \right] \, . \label{compzz} \end{equation} The time-independent position generating function $S(\mathbf{q}',\mathbf{q})$ is obtained from \eqref{linqp} by writing $\mathbf{p}'$ and $\mathbf{p}$ as exclusive functions of $(\mathbf{q},\mathbf{q}')$, and then finding a quadratic form fulfilling the required differential conditions, namely \begin{equation} \frac{\partial S(\mathbf{q}',\mathbf{q})}{\partial \mathbf{q}} = - \mathbf{p} \, ; \qquad \frac{\partial S(\mathbf{q}',\mathbf{q})}{\partial \mathbf{q}'} = \mathbf{p}' \, , \label{mom} \end{equation} the well-known answer being \begin{equation} S(\mathbf{q}',\mathbf{q}) = \frac{1}{2} \left[ \mathbf{q}' \cdot (B^{-1} A) \mathbf{q}' + \mathbf{q} \cdot (D B^{-1}) \mathbf{q} - 2 \mathbf{q}' \cdot (B^{-1}) \mathbf{q} \right] \, . \label{genclass} \end{equation} It is clear that the generating function above can only be defined for non-singular $B$, in which case we say the matrix $\mathcal{M}$ is \emph{free} \cite{deGossonBook}.
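As a concrete illustration, the complexified blocks in \eqref{compzz} and the free condition can be checked with a minimal Julia sketch; the harmonic-oscillator flow chosen as the sample $\mathcal{M}(t)$ is an assumption made only for this example.
\begin{verbatim}
# Minimal sketch of Eq. (compzz): build Lambda and Gamma from the blocks
# A, B, C, D of a real symplectic matrix and test the "free" condition
# (non-singular B) required by Eq. (genclass). The harmonic-oscillator flow,
# A = cos t, B = sin t, C = -sin t, D = cos t, is an illustrative choice only.
complexify(A, B, C, D) = (((D + A) + im * (C - B)) / 2,   # Lambda, Eq. (compzz)
                          ((D - A) - im * (C + B)) / 2)   # Gamma,  Eq. (compzz)

for t in (0.25pi, 1.0pi)        # at t = pi the sample matrix is no longer free
    A, B, C, D = cos(t), sin(t), -sin(t), cos(t)
    Lam, _ = complexify(A, B, C, D)
    println("t = $t   B = $(round(B, digits = 3))   |Lambda| = $(abs(Lam))")
end
\end{verbatim}
For this sample family $B(t)$ vanishes at $t=\pi$, while $|\Lambda(t)|$ stays equal to one, anticipating the general statement below.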
If, on the other hand, one looks for a complex generating function $S_{\text{W},\mathbb{C}}(\bzeta',\bzeta^*)$ by writing $\bzeta$ and $\bzeta'^*$ in terms of $(\bzeta^*,\bzeta')$ in \eqref{linzz} and integrating the first two differential conditions in \eqref{condC}, the result is \footnote{Note this generating function has the exact same form as \eqref{gen2}.} \begin{equation} S_{\text{W},\mathbb{C}}(\bzeta^*,\bzeta') = \frac{i}{2} \left( |\bzeta^*|^2 + |\bzeta'|^2 \right) - \frac{i}{2} \left[ \bzeta' \cdot \left( \Gamma^* \Lambda^{-1} \right) \bzeta' - \bzeta^* \cdot \left( \Lambda^{-1} \Gamma \right) \bzeta^* + 2 \bzeta^* \cdot (\Lambda^{-1}) \bzeta' \right] \, . \label{gencomp} \end{equation} In \ref{App:A} we show that the matrix $\Lambda$ in the equation above, defined in \eqref{compzz}, is \emph{always} non-singular, as long as $\mathcal{M}$ is symplectic. Thus, while a description in terms of the position generating function is limited to free symplectic matrices, the complexified generating function above works for any element of $\text{Sp}(n)$. The contrast between employing either \eqref{genclass} or \eqref{gencomp} becomes even starker when we allow the symplectic matrix in \eqref{linqp} to be time-dependent. In this case, the 1-parameter family $\mathcal{M}(t)$ belongs to $\text{Sp}(n)$ for all $t \in \mathbb{R}$, a situation found \emph{e.g.}~in the case of hamiltonian flows obtained from quadratic hamiltonian functions. The allowance of time-dependence means that we can start with a free $\mathcal{M}(0)$ and, as time grows, hit a bump at which $B(t)$ becomes singular. When this happens, evolution can no longer be described using the time-dependent position generating function $S(\mathbf{q}',\mathbf{q};t)$; such a point is known as a \emph{caustic} in position space. Notice this never happens for $S_{\text{W},\mathbb{C}}(\bzeta',\bzeta^*;t)$, which is able to describe the path traced by $\mathcal{M}(t)$ in $\text{Sp}(n)$ for all times. A description in terms of complex variables then naturally allows us to bypass caustics, while in real phase space we were somewhat stuck with the singularities in $S(\mathbf{q}',\mathbf{q};t)$. The usual procedure to evade the caustics in $S(\mathbf{q}',\mathbf{q};t)$ is to move to another coordinate system, that is, to describe \eqref{linqp} using one of the conjugate generating functions $S(\mathbf{p}',\mathbf{p};t)$, $S(\mathbf{p}',\mathbf{q};t)$ or $S(\mathbf{q}',\mathbf{p};t)$. For each of these a different component of $\mathcal{M}(t)$ appears inverted, \emph{i.e.}~$C(t)$ for $S(\mathbf{p}',\mathbf{p};t)$ or $A(t)$ for $S(\mathbf{q}',\mathbf{p};t)$. It is easy to see that these generating functions cannot all be simultaneously singular: If this happened then $\mathcal{M}(t)$ would itself be singular, contradicting its symplecticity. However, these conjugate functions will also develop caustics themselves, such that we end up being forced to go back and forth between at least two of them in order to describe the path traced by $\mathcal{M}(t)$. The complex generating function in \eqref{gencomp}, however, is able to describe the whole evolution on its own. Caustic avoidance is a first piece of evidence that complex variables might present advantages over the usual position and momentum ones, but this is only true if the complex variables are obtained by complexification, \emph{i.e.}~$A+D$ and $C-B$ in \eqref{compzz} need to be real.
If these are complex, $\Lambda$ can become singular, as happens for $A+D=2I$ and $C-B=2iI$, a choice incompatible with the symplecticity of $\mathcal{M}$ due to the imaginary determinant \footnote{As stated earlier, complex symplectic matrices need the underlying structure of classical mechanics to be modified, such that $\lambda$-symplectomorphisms take the place of the usual ones in order to deal with complex determinants.}. Thus, complex phase spaces \emph{do} have caustics, but complexifications of real phase spaces do not \cite{Littlejohn1986,FollandBook,deGossonBook}. \subsection{Metaplectic families and their representations}\label{subsec:meta} The quantization of $\text{Sp}(n)$ results in a unitary group known as the \emph{metaplectic group}, $\text{Mp}(n)$ \cite{FollandBook,deGossonBook,OzorioBook,GutzwillerBook}. It forms a double cover of $\text{Sp}(n)$, such that each symplectic matrix has two unitary operators associated to it in $\text{Mp}(n)$. The position representation of the metaplectic family quantizing $\mathcal{M}(t)$ is given by \begin{eqnarray} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \langle \mathbf{q}' | \widehat{\mathcal{M}}(t) | \mathbf{q} \rangle &= \sigma \,(2\pi)^{-\frac{n}{2}} \left\vert \det B(t) \right\vert^{-\frac{1}{2}} \exp \left\{ \frac{i}{2} \bigg( \mathbf{q}' \cdot \left[ B^{-1}(t) \, A(t) \right] \mathbf{q}' \right. \notag \\ &\qquad \qquad \left. + \mathbf{q} \cdot \left[ D(t) \, B^{-1}(t) \right] \mathbf{q} - 2 \mathbf{q}' \cdot \left[ B^{-1}(t) \right] \mathbf{q} \bigg) - \frac{i \pi \mu}{2} \right\} \, , \label{Uqq} \end{eqnarray} where the index $\sigma = \pm 1$ indicates the two possible quantizations of each $\mathcal{M}(t)$ for a fixed time. It is clear that $\widehat{\mathcal{M}}(t)$ cannot be expressed in the position representation if $\mathcal{M}(t)$ is not free, just as in this case its classical path cannot be described using position generating functions. We are then forced to switch to a non-singular conjugate generating function, which quantum mechanically amounts to changing representation. The intertwining between generating functions across caustics, however, is responsible for an accumulated phase known as the \emph{Maslov index}, $\mu$. Written as in \eqref{Uqq}, this index is nothing but the number of caustics encountered along the trajectory linking $(\mathbf{q},\mathbf{p})$ and $(\mathbf{q}',\mathbf{p}')$ \cite{Littlejohn1992,ArnoldBook,deGossonBook,OzorioBook,GutzwillerBook,Ozorio2014}. The map taking a symplectic matrix to its corresponding metaplectic operators is exact, such that the classical and quantum objects encode exactly the same information. Thus, the caustics appearing in \eqref{Uqq} are not at all failures, but fundamental components inherent to quantum dynamics. As an example we can take the universal caustic at $t=0$, at which $\widehat{\mathcal{M}}(0) = \hat{I}$. Here, the position representation of the metaplectic operator is just $\delta(\mathbf{q}' - \mathbf{q})$, and this is exactly what the caustic reproduces: a divergence in the quantum propagator. There is nothing wrong with a blowup in \eqref{Uqq}, just as there is nothing wrong with $\delta(\mathbf{q}' - \mathbf{q})$, which is a perfectly well-behaved distribution \cite{Littlejohn1986,SchwartzBook}. If, however, we had chosen a representation in which the basis elements do not contract to Dirac deltas, there would be no divergences to be reproduced by their expression in terms of classical generating functions.
In particular, the coherent state basis builds the \emph{Segal-Bargmann} (S-B) representation, and since \begin{equation} \langle \bzeta^* | \bzeta' \rangle = \exp \left( \frac{-|\bzeta^*|^2-|\bzeta'|^2}{2} + \bzeta^* \cdot \bzeta' \right) \neq \delta(\bzeta' - \bzeta^*) \, , \end{equation} it provides a normalizable description that remains in Hilbert space. In this representation, the family $\widehat{\mathcal{M}}(t)$ is expressed using the complex generating function \eqref{gencomp} as \cite{Littlejohn1986,FollandBook} \begin{eqnarray} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \langle \bzeta' | \widehat{\mathcal{M}}(t) | \bzeta^* \rangle &= \left[ \frac{\exp \left[ - \frac{1}{2} \left( |\bzeta^*|^2 + |\bzeta'|^2 \right) \right]}{\sqrt{ \det \Lambda(t)}} \right] \exp \bigg\{ \frac{1}{2} \bigg( \bzeta' \cdot \left[ \Gamma^*(t) \, \Lambda^{-1}(t) \right] \bzeta' \notag \\[8pt] &\qquad \qquad - \bzeta^* \cdot \left[ \Lambda^{-1}(t) \, \Gamma(t) \right] \bzeta^* + 2 \bzeta^* \cdot \left[ \Lambda^{-1}(t) \right] \bzeta' \bigg) \bigg\} \, . \label{Uzz} \end{eqnarray} The discontinuous phase jumps that happen in \eqref{Uqq} across a caustic end up being mapped into branch changes \footnote{Branch changes and caustics do not happen at the same spatial and/or temporal places. See Sec.~\ref{subsec:pre}.} of the complex amplitude in \eqref{Uzz}, \emph{i.e.}~by tracking the continuity of the complex pre-factor we are directly establishing the correct phase changes in the complex propagator \cite{Kay2005}. The choice of metaplectic leaf, specified in \eqref{Uqq} by $\sigma=\pm 1$, is included in \eqref{Uzz} as an indeterminacy in its initial square root \footnote{Absolute values of jacobian determinants are absent in the S-B representation because the real orientation is preserved by any complexified map. Proofs for $\mathbb{C}$-linear and holomorphic maps can be found in \cite{VolkerBook}. The proof for general maps uses Sylvester's theorem and is discussed in \emph{e.g.}~\cite{CrossBook} and \cite{FarautBook}.} sign \cite{FollandBook}. Note that the ``over-completeness of the coherent state basis'', often seen as an undesirable aspect of the S-B representation, is the fundamental reason for its absence of caustics. The mapping taking \eqref{Uqq} to \eqref{Uzz} is a composition of two \emph{S-B transforms} \cite{FollandBook,HallBook,Bargmann1961}. More important to us is the inverse transform, given in our notation by \begin{equation} \langle \mathbf{x} | \bphi \rangle = \int d\bzeta^* \langle \mathbf{x} | \bzeta^* \rangle \langle \bzeta^* | \bphi \rangle \, , \end{equation} where $d\bzeta^*$ is the Lebesgue measure on $\mathbb{C}^n$. This transform maps the state $\langle \bzeta^* | \bphi \rangle$ in the S-B representation back to the position one, and its integral kernel is given by what is known in physics as a \emph{Schr\"odinger coherent state}, which is unnormalized \cite{FollandBook,GazeauBook,SweitseBook}. The lack of normalization is fundamental in order to accommodate the gaussian measure $\mu(d\bzeta^*) = \exp(-|\bzeta^*|^2) \, d\bzeta^*$, with which the S-B representation is equipped. Thus, the map taking \eqref{Uzz} to \eqref{Uqq} is given by the composition \begin{align} \langle \mathbf{x}' | \widehat{\mathcal{M}}(t) | \mathbf{x} \rangle = N \int d\bzeta^* d\bzeta' \langle \mathbf{x}' | \bzeta' \rangle \langle \bzeta' | \widehat{\mathcal{M}}(t) | \bzeta^* \rangle \langle \bzeta^* | \mathbf{x} \rangle \, , \label{sb} \end{align} where $N$ is a normalization factor.
The Schr\"odinger coherent states in the above equation are particularly simple when expressed in terms of complex coordinates, being given by \begin{align} \begin{cases} \langle \bzeta^* \vert \mathbf{x} \rangle = \pi^{-\frac{n}{4}} \exp \left[ \dfrac{1}{2} \left( - \mathbf{x} \cdot \mathbf{x} - 2 i \sqrt{2} \, \bzeta^* \cdot \mathbf{x} + \bzeta^* \cdot \bzeta^* - \vert \bzeta^* \vert^2 \right) \right] \\[8pt] \langle \mathbf{x}' \vert \bzeta' \rangle = \pi^{-\frac{n}{4}} \exp \left[ \dfrac{1}{2} \left( - \mathbf{x}' \cdot \mathbf{x}' + 2 i \sqrt{2} \, \bzeta' \cdot \mathbf{x}' + \bzeta' \cdot \bzeta' -\vert \bzeta' \vert^2 \right) \right] \end{cases} \label{cohs} \, , \end{align} unlike \emph{e.g.}~Klauder coherent states, which have more complicated expressions. Naturally, by performing the gaussian integrals arising from substituting \eqref{Uzz} into \eqref{sb}, the expression for \eqref{Uqq} is exactly recovered \cite{Littlejohn1986}. Note that the S-B transform and its inverse end up complexifying and de-complexifying phase space, respectively, which is why we rely so much on the mappings \eqref{comp} and \eqref{decomp} \cite{Heller1987} . \section{Semiclassical approximations}\label{sec:semi} We now extend what was presented in the earlier section to arbitrary hamiltonian flows, which are also 1-parameter families of [generally non-linear] symplectomorphisms. An exact extension is of course impossible, since it would imply that quantum and classical mechanics are identical, but an approximate link is established through the use of Stationary Phase Approximations. We then review the Initial Value Representation method and obtain an expression for the semiclassical propagator in position representation as a sequence of inverse Segal-Bargmann transforms of its complexified equivalent. This results in the Herman-Kluk propagator, a cornerstone of computational chemistry. \subsection{Semiclassical propagators in the position and Segal-Bargmann representations}\label{subsec:sqb} The early work by van~Vleck \cite{VanVleck1928}, together with the phase correction devised by Gutzwiller and others \cite{GutzwillerBook}, concentrated on generalizing \eqref{Uqq} from symplectic matrices to general hamiltonian flows, which are a particular type of 1-parameter family of non-linear symplectomorphisms. By writing the quantum propagator in position representation as \begin{equation} K (\mathbf{x}',\mathbf{x};t) = \langle \mathbf{x}' | \widehat{U}(t) | \mathbf{x} \rangle \, , \qquad \widehat{U}(t) = e^{-i t \widehat{H}} \, , \end{equation} the van~Vleck-Gutzwiller (vV-G) propagator approximates it as \begin{equation} K_{\text{vV-G}}(\mathbf{x}',\mathbf{x};t) = \left( 2\pi i \right)^{-\frac{n}{2}} \sum_p \left\vert \det \left( \frac{\partial^2 S(\mathbf{x}',\mathbf{x};t)}{\partial \mathbf{x}' \, \partial \mathbf{x}} \right) \right\vert^{\frac{1}{2}} \exp \left( i \left[ S(\mathbf{x}',\mathbf{x};t) - \frac{\pi \mu}{2} \right] \right) \, . \label{vV} \end{equation} The generating function appearing above is the one in \eqref{genqq}, and the sum runs over all the trajectories that connect $\mathbf{x}$ at $\tau=0$ to $\mathbf{x}'$ at $\tau=t$ (thus, over all initial momenta fulfilling the first equation in \eqref{mom}). 
The determinant in the amplitude is now a component of the \emph{monodromy matrix} $\mathbb{M}$, which is symplectic and defined in terms of positions $\mathbf{x}$ and momenta $\mathbf{y}$ as \begin{align} \mathbb{M}(\mathbf{x},\mathbf{y};t) = \begin{pmatrix} \dfrac{\partial \mathbf{x}'(\mathbf{x},\mathbf{y};t)}{\partial \mathbf{x}} & \dfrac{\partial \mathbf{x}'(\mathbf{x},\mathbf{y};t)}{\partial \mathbf{y}} \\[8pt] \dfrac{\partial \mathbf{y}'(\mathbf{x},\mathbf{y};t)}{\partial \mathbf{x}} & \dfrac{\partial \mathbf{y}'(\mathbf{x},\mathbf{y};t)}{\partial \mathbf{y}} \end{pmatrix} \label{mon} \, , \end{align} where $(\mathbf{x}',\mathbf{y}')$ represent the hamiltonian flow as a function of $(\mathbf{x},\mathbf{y})$. The linear setting in the earlier section resulted in the simple expressions \eqref{Uqq} and \eqref{Uzz} because, for linear systems, there is a single trajectory connecting two phase space points for a fixed time $t$ -- indeed, \eqref{vV} is brought to \eqref{Uqq} for linear flows. A further characteristic that renders the linear scenario particularly simple is the fact that, since the matrix in \eqref{linqp} cannot depend on phase space points, its caustics are exclusive functions of time. Besides, by the exactness of quantization in this case, there is no ``semiclassical failure'', since any divergence in \eqref{Uqq} arises due to the divergences in the quantum propagator itself. The non-linear scenario is considerably more intricate. The correspondence between quantum and semiclassical propagators is not exact anymore, and caustics become functions of time \emph{and} space simultaneously. This allows the semiclassical propagator to be truly incapable of reproducing quantum evolution, with its failures traceable back to classical mechanics. It is evident these failures are due to caustics, a problem worsened by the fact that even the caustic structure of simple non-quadratic hamiltonians can be excruciatingly complicated, as will be shown in Sec.~\ref{sec:kerr}. A particularly special caustic is the one at $t=0$, since for this one the vV-G propagator's divergence is a correct one (the quantum propagator also diverges). This might lead to vV-G being reasonably good for extremely short times, when the quantum propagator is still close to a Dirac delta; followed by gradually degrading quality for an ``intermediate short-time regime'', where the quantum propagator is farther from a delta but the semiclassical one is still in the vicinity of the $t=0$ caustic; and finally regaining accuracy as we move away from the short-time regime. In Sec.~\ref{subsec:auto} we will see this is precisely what happens. If we are not at the time-origin, however, the quantum propagator can only become a Dirac delta again in some specific situations. One of these is the case of periodic systems, for which the evolution operator is the identity at each multiple of the system's period. A second one takes place for hamiltonians that are exclusive functions of the position operator, since $K(\mathbf{x}',\mathbf{x};t) = U(\mathbf{x};t) \, \delta(\mathbf{x}'-\mathbf{x})$. For a general hamiltonian that is a function of both momentum and position, however, quantum divergences cannot happen: The quantum propagator is smooth. Caustics are then seen as an exclusively classical problem, their roots traced to choosing representations in terms of non-normalizable bases, as described in Secs.~\ref{subsec:lin} and \ref{subsec:meta}. We now come back to the complexified case.
The extension of \eqref{Uzz} to general hamiltonian flows was solidified in the early 1980s with the work of Weissman \cite{Weissman1982,Weissman1983}, based on a number of earlier developments (\emph{e.g.}~\cite{Heller1977,Miller1974}). His result, often referred to as the \emph{semiclassical coherent state propagator}, is nothing but the semiclassical propagator in the S-B representation, given by \begin{equation} \langle \bzeta' | \widehat{U}(t) | \bzeta^* \rangle \approx \sum_{\bzeta} \left[ i \det \left( \frac{\partial^2 F_\mathbb{C}(\bzeta',\bzeta^*;t)}{\partial \bzeta^* \, \partial \bzeta'} \right) \right]^{\frac{1}{2}} \exp \left[ - \frac{1}{2} \left( |\bzeta^*|^2 + |\bzeta'|^2 \right) + i F_\mathbb{C}(\bzeta',\bzeta^*;t) \right] \, . \label{zZ} \end{equation} This result was re-derived several times and shown to be identical to the semiclassical propagator obtained by Klauder through coherent state path integrals \cite{Weissman1983,Klauder1979}. In particular, it was shown that the expression above is only true if quantization using the Weyl ordering rule is assumed \cite{Aguiar2005}, which in our presentation is obvious due to its phase being exactly the generating function in \eqref{gen2}. If different orderings are used, a correction factor needs to be included in the phase, as discussed in great detail in \cite{Aguiar2005,Baranger2001}. As in Sec.~\ref{subsec:meta}, the phase jumps across branches are directly included in the complex pre-factor. The sum over trajectories now runs over the solutions of the second complexified equation in \eqref{condC}, and has been carefully examined in several instances \cite{Grossmann1998,Aguiar2007,Klauder1994,Fierro2006,Aguiar2005,Kay2005,Baranger2001,Kay1994,Tannor2018}. In the complexified case it is equivalent to the real root-search, but in purely complex phase spaces phenomena with no real counterpart, such as Stokes divergences, generally occur \cite{Klauder1994}. To avoid referring to \eqref{zZ} as the ``semiclassical Weyl-ordered propagator in the S-B representation'', we rename it plainly as the \emph{S-B propagator}. Now, just as in the case of symplectic matrices, we expect \eqref{zZ} to be mapped to \eqref{vV} by two inverse S-B transforms. This correspondence, however, is not obtained exactly: Integral transforms are not endomorphisms in the ``category'' of semiclassical propagators. One needs to evaluate the transforms using Stationary Phase Approximations (SPAs), and only then can we exchange between different quantum representations/classical generating functions. Notwithstanding the fact that the semiclassical propagators \eqref{vV} and \eqref{zZ} are in correspondence through SPAs, we observe the same phenomenon described in Sec.~\ref{subsec:meta}: The vV-G propagator is plagued by divergences, whilst the S-B propagator is strictly continuous. This can be easily seen by using \eqref{condC} to rewrite the pre-factor in \eqref{zZ}: \begin{equation} \left[ i \det \left( \frac{\partial^2 F_\mathbb{C}(\bzeta',\bzeta^*;t)}{\partial \bzeta^* \partial \bzeta'} \right) \right]^{\frac{1}{2}} = \left[ \det \left( \frac{\partial \bzeta'(\bzeta,\bzeta^*;t)}{\partial \bzeta} \right) \right]^{-\frac{1}{2}} = \big[ \det \Lambda(\bzeta^*,\bzeta;t) \big]^{-\frac{1}{2}} \, . \label{detCC} \end{equation} Before moving on, we briefly dedicate some attention to the complexified monodromy matrix appearing in the expression above.
Although the monodromy matrix does not really fit the linear context of Sec.~\ref{sec:lin}, it can be connected to its complexification in an analogous manner. For this, we remind the reader that the monodromy matrix has its dynamics defined by the ordinary differential equation \begin{equation} \dot{\mathbb{M}} = \mathcal{J} \, \text{Hess}(H) \, \mathbb{M} \, , \end{equation} with $\text{Hess}$ representing the hessian \cite{NazaiBook}. This equation is linear, such that by similarity transformations we get \begin{equation} \dot{\mathbb{M}}_\mathbb{C} = \mathcal{J}_\mathbb{C} \, \text{Hess}_\mathbb{C}(H_\mathbb{C}) \, \mathbb{M}_\mathbb{C} \, , \label{monc} \end{equation} with $\mathcal{J}_\mathbb{C}$ defined in \eqref{Jc} and the complex hessian obtainable \emph{e.g.}~from differentiating \eqref{nablac}. This shows the complexified monodromy has components exactly equal to the ones in \eqref{compzz}, except that now $A,\,B,\,C$ and $D$ are given by \eqref{mon}. \subsection{Initial value representation for the van~Vleck-Gutzwiller propagator} Although the S-B propagator \eqref{zZ} is free from caustics, it suffers from a severe drawback also found in \eqref{vV}: The sum over classical trajectories. Except for a tiny number of systems, Hamilton's equations have to be solved numerically, and looking for the trajectories entering the sum through a root-search is a computationally demanding task. After an insight by Miller \cite{Miller1970}, chemists started substituting the sum over trajectories by a full integral with respect to initial momenta, a procedure that can be mnemonically written as \begin{equation} \int d\mathbf{q}' \sum_\mathbf{p} \left\vert \det \left( \frac{\partial \mathbf{q}'}{\partial \mathbf{p}} \right) \right\vert^{-\frac{1}{2}} \longmapsto \int d\mathbf{p} \left\vert \det \left( \frac{\partial \mathbf{q}'}{\partial \mathbf{p}} \right) \right\vert^{\frac{1}{2}} = \int d\mathbf{p} \left\vert \det \left( \frac{\partial^2 S(\mathbf{q}',\mathbf{q};t)}{\partial \mathbf{q}' \, \partial \mathbf{q}} \right) \right\vert^{-\frac{1}{2}} \, , \label{ivrq} \end{equation} where in the last equality we used \eqref{mom}. Instead of looking for all possible momenta that define the trajectories entering the semiclassical sum, the substitution above simply sums over \emph{all} initial momenta and results in an \emph{Initial Value Representation} (IVR). The unfamiliar reader is directed to \cite{Miller2001} for a review and \cite{HellerBook} for a nice geometrical discussion on this substitution, for which we give a bit of mathematical context in \ref{App:B}. A welcome consequence of the pre-factor inversion in \eqref{ivrq} is that the previously divergent contributions now tend to zero as a caustic is approached. Another collateral effect is that, since the pre-factor acts as a weight, the contributions that were large for vV-G become small in the IVR. At this point it is not obvious whether or not this is desirable, a point we shall expand on in Sec.~\ref{subsec:fail}. Applying the substitution \eqref{ivrq} to \eqref{vV} results in a typical IVR expression for the vV-G propagator, given by \begin{equation} K_\text{IVR}(\mathbf{x}',\mathbf{x};t) = \left( 2\pi i \right)^{-\frac{n}{2}} \int d\mathbf{q} \, d\mathbf{p} \,\left\vert \det \left( \frac{\partial \mathbf{q}'}{\partial \mathbf{p}} \right) \right\vert^{\frac{1}{2}} \exp \left( i \left[ S(\mathbf{q}',\mathbf{q};t) - \frac{\pi \mu}{2} \right] \right) \delta \left( \mathbf{q}' - \mathbf{x}' \right) \, .
\label{IVR} \end{equation} The Dirac delta is reminiscent of the root-search problem, and is in fact equivalent to it. This can be seen by noting that, by the compositional property of the delta (see \ref{App:B}), we have \begin{equation} \int d\mathbf{p} \, \delta \left[ \mathbf{q}'(\mathbf{x},\mathbf{p};t) - \mathbf{x}' \right] = \sum_{\ker \mathbf{q}'(\mathbf{x},\mathbf{p};t)} \left\vert \det \left( \frac{\partial \mathbf{q}'}{\partial \mathbf{p}} \right) \right\vert^{-1} \, , \end{equation} where the kernel of $\mathbf{q}'$ is to be searched for w.r.t.~the initial momenta, which are the integration variables. An important characteristic of the IVR in \eqref{IVR} is that it cannot be immediately used to calculate wave functions, and one is forced to choose between limiting its use to numbers, \emph{e.g.}~$\langle \bpsi | \hat{U}(t) | \bphi \rangle$, and developing some clever strategy to substitute the Dirac delta by something smoother \cite{Heller1991-2,Kay1993}. Several important quantities in chemistry and physics, however, are of the desired form for \eqref{IVR} to be promptly employed, a prominent example being the autocorrelation function \begin{equation} C(t) = \langle \bpsi | \hat{U}(t) | \bpsi \rangle \, ,\label{auto} \end{equation} which is immediately seen to have the IVR expression \begin{equation} C(t) \approx \left( 2\pi i \right)^{-\frac{n}{2}} \int d\mathbf{q} \, d\mathbf{p} \,\left\vert \det \left( \frac{\partial \mathbf{q}'}{\partial \mathbf{p}} \right) \right\vert^{\frac{1}{2}} \exp \left( i \left[ S(\mathbf{q}',\mathbf{q};t) - \frac{\pi \mu}{2} \right] \right) \langle \bpsi^* \vert \mathbf{q} \rangle \langle \mathbf{q}' \vert \bpsi \rangle \, \label{autoIVR} \end{equation} after the $\mathbf{x}'$ integrals are performed \cite{Miller2001}. Although the autocorrelation is an important quantity, one might be interested in the semiclassical approximation to more general objects. Several methods to obtain workable IVRs that could calculate proper wave functions were then developed, especially by Kay \cite{Kay1994}, who was one of the first to report problems with slow convergence and errors due to the oscillatory behavior of several IVR expressions \cite{Kay1993}. Interestingly, the Wigner representation allows for IVRs capable of calculating evolved Wigner functions directly, the final expression being free of Dirac deltas \cite{Ozorio1998,Ozorio2013}. As IVR techniques have nowadays developed into a proper branch of computational chemistry, we will limit our exposition to an analysis of \eqref{IVR} and redirect the interested reader to the seminal papers \cite{Kay1993}, \cite{Miller2001} and \cite{Kay1994}. \subsection{Initial value representation for the Segal-Bargmann propagator} The absence of caustics in the S-B representation is a stark motivation to pursue an IVR using the S-B propagator \eqref{zZ}.
We start by introducing it into \eqref{sb}, with $\widehat{\mathcal{M}}$ substituted by the evolution operator $\widehat{U}(t)$, to obtain an expression for the position element in terms of complexified variables, \begin{equation} K(\mathbf{x}',\mathbf{x};t) \approx N \int d\bzeta^* d\bzeta' \sum_{\bzeta} \left[ i \det \left( \frac{\partial^2 F_\mathbb{C}(\bzeta',\bzeta^*;t)}{\partial \bzeta^* \partial \bzeta'} \right) \right]^{\frac{1}{2}} e^{i S_{\text{W},\mathbb{C}}(t)} \langle \mathbf{x}' \vert \bzeta' \rangle \langle \bzeta^* | \mathbf{x} \rangle \, , \label{pre-ivr} \end{equation} where we choose the simplest expression in \eqref{genpop} for the generating function (numerically equivalent to both \eqref{gen2} and \eqref{gennotpop}, and still a function of $\bzeta$ and $\bzeta^*$). To obtain the IVR, notice that the substitution \eqref{ivrq} is translated to the S-B representation as \begin{equation} \int d\bzeta' \sum_{\bzeta} \left[ \det \left( \frac{\partial \bzeta'}{\partial \bzeta} \right) \right]^{-\frac{1}{2}} \longmapsto \int d\bzeta \, \left[ \det \left( \frac{\partial \bzeta'}{\partial \bzeta} \right) \right]^\frac{1}{2} = \int d\bzeta \, \sqrt{\det \Lambda} \, , \label{ivrz} \end{equation} such that all we need to do is to employ it in \eqref{pre-ivr} to get \begin{equation} K(\mathbf{x}',\mathbf{x};t) \approx N \int d\bzeta^* d\bzeta \, \sqrt{\det \Lambda(\bzeta^*,\bzeta;t)} \, e^{i S_{\text{W},\mathbb{C}}(t)} \langle \mathbf{x}' \vert \bzeta'(\bzeta^*,\bzeta;t) \rangle \langle \bzeta^* | \mathbf{x} \rangle \, , \label{pre-preivr} \end{equation} with $N=\pi^{-n}$ obtained by requiring the propagator to reduce to the identity at $t=0$. While the substitution in \eqref{ivrq} reverberates in multiple characteristics of the IVR due to the presence of caustics, their absence in the S-B representation means that \eqref{ivrz} does not really change anything except getting rid of the root-search. The formula in \eqref{pre-preivr} can be seen as a result in itself: It is an IVR for the semiclassical propagator in position representation, obtained from the S-B propagator as a composition of two inverse S-B transforms. It is free of caustics, and phase jumps are implicitly included in the continuity of its complex pre-factor. However, by de-complexifying the IVR, we can write the propagator in terms of our real trajectories explicitly. To this end, we note the de-complexified Schr\"odinger coherent states in position representation are just \begin{eqnarray} \begin{cases} \,\,\, \langle \bzeta^*(\mathbf{q},\mathbf{p}) | \mathbf{x} \rangle \,\,= \pi^{-\frac{n}{4}} \exp \left[ - \dfrac{|\mathbf{x}-\mathbf{q}|^2}{2} - i \mathbf{p} \cdot \left( \mathbf{x}-\dfrac{\mathbf{q}}{2} \right) \right] \\[8pt] \langle \mathbf{x}' \vert \bzeta'(\mathbf{q},\mathbf{p};t) \rangle = \pi^{-\frac{n}{4}} \exp \left[ - \dfrac{|\mathbf{x}'-\mathbf{q}'|^2}{2} + i \mathbf{p}' \cdot \left( \mathbf{x}'-\dfrac{\mathbf{q}'}{2} \right) \right] \end{cases} \, , \label{realcohs} \end{eqnarray} easily obtained by substituting \eqref{decomp} into \eqref{cohs}. Since $\bzeta$ and $\bzeta^*$ in the complexified case \emph{are} complex conjugates of each other, we can change integration variables from $(\bzeta,\bzeta^*)$ to $(\mathbf{q},\mathbf{p})$, the absolute value of the jacobian determinant being easily seen to be $1$.
Then, keeping in mind that \begin{equation} \Lambda(\mathbf{q},\mathbf{p};t) = \frac{1}{2} \left[ \left( \dfrac{\partial \mathbf{p}'(\mathbf{q},\mathbf{p};t)}{\partial \mathbf{p}} + \dfrac{\partial \mathbf{q}'(\mathbf{q},\mathbf{p};t)}{\partial \mathbf{q}} \right) + i \left( \dfrac{\partial \mathbf{p}'(\mathbf{q},\mathbf{p};t)}{\partial \mathbf{q}} - \dfrac{\partial \mathbf{q}'(\mathbf{q},\mathbf{p};t)}{\partial \mathbf{p}} \right) \right] \, , \label{detC} \end{equation} the final IVR is written as \begin{equation} K_\text{H-K}(\mathbf{x}',\mathbf{x};t) = N \int d\mathbf{q} \, d\mathbf{p} \, \sqrt{ \det \Lambda(\mathbf{q},\mathbf{p};t) } \, e^{ i S_\text{W} (t)} \langle \mathbf{x}' \vert \bzeta'(\mathbf{q},\mathbf{p};t) \rangle \langle \bzeta^*(\mathbf{q},\mathbf{p}) | \mathbf{x} \rangle \, . \label{preHK} \end{equation} Here, the coherent states are given by \eqref{realcohs}, the phase by the de-complexified form of \eqref{genpop} (made explicit below) and the determinant by \eqref{detC}. Normalization in this case sets $N = (2\pi)^{-n}$, and all primed variables are evolved by the hamiltonian flow as functions of $\mathbf{q}\,,\mathbf{p}$ and $t$. This IVR is known as the \emph{Herman-Kluk} (H-K) propagator \cite{Herman1984}. The expression in \eqref{preHK} becomes more recognizable after a simple manipulation of its generating function. We start by explicitly de-complexifying \eqref{genpop}, \emph{i.e.} \begin{equation} S_{\text{W}, \mathbb{C}} (t) = \int_0^t d\tau \left[ \frac{i}{2} \left( \bzeta \cdot \dot{\bzeta}^* - \bzeta^* \cdot \dot{\bzeta} \right) - H_\mathbb{C}(\bzeta,\bzeta^*) \right] = \int_0^t d\tau \left[ \left( \frac{\mathbf{p} \cdot \dot{\mathbf{q}} - \mathbf{q} \cdot \dot{\mathbf{p}}}{2} \right) - H(\mathbf{q},\mathbf{p}) \right] \, ; \end{equation} and since $\mathbf{q} \cdot \dot{\mathbf{p}} = d(\mathbf{q} \cdot \mathbf{p})/dt - \mathbf{p} \cdot \dot{\mathbf{q}}$, we can rewrite the action above as a function of the extended position generating function as \begin{equation} S_\text{W}(t) = S(\mathbf{q}',\mathbf{q};t) - \frac{1}{2} \left( \mathbf{q}' \cdot \mathbf{p}' - \mathbf{q} \cdot \mathbf{p} \right) \, . \end{equation} When the expression above enters the complex exponential, the dot products combine with the phases in the Schr\"odinger coherent states and modify them, resulting in \begin{equation} K_\text{H-K}(\mathbf{x}',\mathbf{x};t) = (2\pi)^{-n} \int d\mathbf{q} \, d\mathbf{p} \, \sqrt{ \det \Lambda(\mathbf{q},\mathbf{p};t) } \, e^{ i S(\mathbf{q}',\mathbf{q};t) } \langle \mathbf{x}' \vert \bbeta'(\mathbf{q},\mathbf{p};t) \rangle \langle \bbeta^*(\mathbf{q},\mathbf{p}) | \mathbf{x} \rangle \, , \end{equation} with the integral kernels given by the gaussians \begin{eqnarray} \begin{cases} \,\,\, \langle \bbeta^*(\mathbf{q},\mathbf{p}) | \mathbf{x} \rangle \,\,= \pi^{-\frac{n}{4}} \exp \left[ - \dfrac{|\mathbf{x}-\mathbf{q}|^2}{2} - i \mathbf{p} \cdot \left( \mathbf{x} - \mathbf{q}\right) \right] \\[8pt] \langle \mathbf{x}' \vert \bbeta'(\mathbf{q},\mathbf{p};t) \rangle = \pi^{-\frac{n}{4}} \exp \left[ - \dfrac{|\mathbf{x}'-\mathbf{q}'|^2}{2} + i \mathbf{p}' \cdot \left( \mathbf{x}'- \mathbf{q}' \right) \right] \end{cases} \, . \end{eqnarray} This is the original form of the H-K propagator as discovered by Herman and Kluk \cite{Herman1984}, and remains the most popular one. However, if one interprets the kernels above as Klauder coherent states, the profound connection this propagator has with the S-B representation is lost.
Even more importantly, the fact that the action entering the H-K propagator is Weyl-ordered is also eclipsed \cite{Grossmann1998}. It is also common to encounter the H-K propagator expressed as a function of coherent states that depend on a real parameter associated with their width. Everything we have done generalizes to this case by simply rescaling the complexification map (\emph{e.g.}~as in \cite{Fierro2006}) as a function of a free parameter: This will modify the H-K's pre-factor and the Schr\"odinger states accordingly. However, the matter of why a particular width works better than another appears to be specific to the particular problem at hand. \subsection{Caustics and the failure of semiclassical propagation}\label{subsec:fail} The matter of whether or not the pre-factor inversion taking place in the IVR \eqref{IVR} is an improvement over the vV-G propagator was left unanswered and is now placed under scrutiny. We shall focus on the semiclassical impact of being near or far from a caustic, but not exactly on it, since in this case we already know the answer: Caustics do not contribute in any way. This can be seen by noticing that in vV-G they have to be manually excluded, and in the IVR they are assigned a null pre-factor. The end result is clearly the same, showing that even though the IVR does not diverge, it is still impacted by caustics. The caustic condition of singular $B$ matrices in Sec.~\ref{subsec:lin} is identical in the case of general hamiltonian flows, but now the linearized dynamics is given by the monodromy matrix and a caustic happens when its $B$-equivalent, namely the monodromy block $\partial \mathbf{q}'/\partial \mathbf{p}$ whose inverse enters the vV-G propagator's pre-factor, becomes singular. As discussed earlier, when this happens, \eqref{vV} and \eqref{IVR} follow \begin{equation} \frac{\partial \mathbf{q}'}{\partial \mathbf{p}} \longrightarrow 0 \quad \Longrightarrow \quad K_\text{vV-G}(\mathbf{q},\mathbf{q}';t) \longrightarrow \infty \, , \quad K_\text{IVR}(\mathbf{q},\mathbf{q}';t) \longrightarrow 0 \, . \label{fail1} \end{equation} Since we are in a Wentzel-Kramers-Brillouin (WKB) scenario, phases are assumed to be stationary, restricting the actions to regions in which they vary slowly. As we near a caustic the condition above goes one step further and tells us that the action changes even less as a function of the initial momenta, since its derivatives with respect to them are proportional to $\partial \mathbf{q}'/\partial \mathbf{p}$ \cite{Schulman1994}: The closer we are to a caustic, the slower the oscillations in the integrands of both the vV-G propagator and its IVR. As we move away from the caustic, the pre-factor in vV-G starts to decrease, which helps muffling the fast oscillations that its complex exponential begins to develop. The IVR develops the opposite behavior, assigning small contributions near caustics and huge ones as we move away from them. The consequence is that the IVR becomes both highly oscillatory and numerically large in regions where vV-G is small. Since vV-G relies on the root-search for selecting what trajectories to include, the oscillatory behavior of its complex exponential is not a problem, as the trajectories corresponding to momenta in oscillatory regions are handpicked. The IVR, however, integrates over initial momenta and relies on the Riemann-Lebesgue lemma to annihilate the contributions emanating from unimportant trajectories. The problem is that, as we now know, these regions of fast oscillatory behavior are assigned very large numerical values by the IVR, and expecting them to cancel perfectly is far-fetched.
The obvious way of dealing with the oscillatory numerical errors prone to appear when using the IVR is by employing very large momentum grids: They help cancel contributions from unimportant trajectories far from caustics, while increasing the number of contributions from important trajectories in the neighborhood of caustics (which need some help due to their small pre-factors). We then see that the difficulties of the root-search in \eqref{vV} are transformed into convergence problems in \eqref{IVR}. With the powerful computers nowadays available, however, it is generally easier to increase grid sizes than to solve the root-search. This is especially true due to the existence of many numerical methods specialized in oscillatory integrals, together with a continuing interest by the chemical community in finding strategies that help improve the convergence of IVRs in practical applications. In the beginning of this subsection we stated that caustics do not contribute to either \eqref{vV} or \eqref{IVR}, which is true. The more severe problem is that they \emph{fail} to contribute. To see this, suppose phase space is filled with caustics, a situation that usually takes place at long propagation times in both integrable and chaotic systems \cite{Schulman1994}. This increases the probability of trajectories falling on them, such that a possible contribution from a root-momentum that would be included in the vV-G propagator ends up having to be removed. This becomes an increasingly likely event as time grows, causing more and more contributions that would enter the semiclassical sum to be left out. The missing terms, of course, would be fundamental to conserve normalization, such that we can expect both the vV-G propagator and its IVR to lose normalization as time grows. A time-threshold must then exist at which the density of caustics becomes large enough for a complete failure of the vV-G propagator and its IVR, and in the next section we will see that this problem is rendered even more serious by the phenomenon of \emph{caustic stickiness}. We now see that the H-K propagator behaves in a markedly different way from vV-G and its cousins. In fact, not a single aspect of the mechanisms for semiclassical failure described above applies to it, since their backbone is the caustic singularities it does not possess. The mechanisms for the failure of H-K when applied to integrable systems are, as far as we know, still unclear. However, it suffers from the same problem as all IVRs with regard to its pre-factor possibly diverging when dealing with chaotic dynamics. The exponential separation of trajectories that begin infinitesimally close gives birth to positive Lyapunov exponents, such that \eqref{fail1} is reversed to \begin{equation} \frac{\partial \mathbf{q}'}{\partial \mathbf{p}} \longrightarrow \infty \Longrightarrow K_\text{vV-G}(\mathbf{q}, \mathbf{q}';t) \longrightarrow 0 \, , \,\, K_\text{IVR}(\mathbf{q},\mathbf{q}';t) \longrightarrow \infty \longleftarrow K_\text{HK}(\mathbf{q},\mathbf{q}';t) \, .\label{fail2} \end{equation} We already know that the trajectories that are far from caustics generate small contributions to the vV-G propagator, but for the case of chaotic dynamics the situation is extreme: If a trajectory is both chaotic and far from a caustic, its contribution is inversely proportional to its rate of growth (which is exponential!).
Now, for the IVR and H-K, the only hope is annihilating the oscillatory terms associated with chaotic trajectories upon integration. Since in this case they both have diverging pre-factors, this might be hopeless. Despite these problems, the H-K propagator has been found to perform unexpectedly well in situations of soft chaos, in which phase space is populated by both regular and chaotic dynamics \cite{Schoendorff1998,Maitra2000,Lando2019-2}. The reason for this might be that the main contributions come from the regular trajectories, since, as stated earlier, for chaotic trajectories the pre-factors diverge very fast. Indeed, an artificial erasure of chaos in a strongly chaotic system was shown to provide better semiclassical results than the ones obtained from the system's original chaotic dynamics \cite{Lando2019-3}. \section{Numerical simulations}\label{sec:kerr} We now begin the second half of this manuscript, which concerns numerical aspects of the vV-G propagator, its IVR and the H-K propagator. The homogeneous Kerr system, which we chose as our laboratory, is integrable and does not display the intrinsic complications present in chaotic systems. Nevertheless, it does contain a quite intricate caustic web, and since we have no reason to suspect the caustics in integrable systems to be any different from the ones in chaotic systems, many aspects observed for regular dynamics should migrate unmodified to the chaotic case \cite{Schulman1994}. \subsection{The homogeneous Kerr system}\label{subsec:kerr} In order to test semiclassical propagators we need a system complex enough to have caustics, but as simple as possible for numerical computations to be performed quickly. The Simple Harmonic Oscillator (SHO) is an example of such a system, but since its hamiltonian is quadratic we are stuck with the linear theory developed in Sec.~\ref{sec:lin}, \emph{i.e.}~the classical and quantum evolutions are identical. A small modification of the SHO turns out to be ideal as a toy model for semiclassical techniques, since it presents an intricate web of caustics and both its classical and quantum equations of motion have analytical solutions, with quantum dynamics markedly different from its classical counterpart. The \emph{homogeneous Kerr system} (or simply \emph{Kerr system}), which we have just described, is obtained from the simple 1-dimensional hamiltonian \begin{equation} H_\text{Kerr}(q,p) = (p^2 + q^2)^2 \, , \label{Ckerr} \end{equation} which is nothing but a rescaled SHO squared. Writing the real Hamilton equations in \eqref{compham} and dividing $\dot{p}$ by $\dot{q}$, we see that the differential equation for the flow's geometry is the same as in the SHO, \emph{i.e.}~the orbits are circles. Thus, the distance from the origin is conserved for all orbits and the flow is given by \begin{equation} \begin{cases} q'(q,p,t) = q \cos \left[ \omega(q,p) \, t \right] + p \sin \left[ \omega(q,p) \, t \right] \\[4pt] p'(q,p,t) = p \cos \left[ \omega(q,p) \, t \right] - q \sin \left[ \omega(q,p) \, t \right] \end{cases} \, , \quad \omega(q,p) = 4(q^2+p^2) \, ,\label{flow} \end{equation} which is very similar to the SHO. The only difference is that, whilst the angular velocity is the same for all orbits in the SHO, in the Kerr system it is conserved \emph{per orbit}, increasing monotonically as a function of the distance from the origin.
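The flow \eqref{flow} is straightforward to implement; a minimal sketch in Julia, with the conservation of the radius providing a quick consistency check, reads:
\begin{verbatim}
# Minimal sketch of the Kerr flow in Eq. (flow): each phase-space point rotates
# on its circle with an angular velocity fixed by its squared radius.
omega(q, p) = 4 * (q^2 + p^2)

function kerr_flow(q, p, t)
    c, s = cos(omega(q, p) * t), sin(omega(q, p) * t)
    return q * c + p * s, p * c - q * s          # (q', p')
end

# The squared radius (and hence the energy) is conserved along every orbit:
qf, pf = kerr_flow(0.7, -0.3, 2.5)
@assert isapprox(qf^2 + pf^2, 0.7^2 + 0.3^2; atol = 1e-12)
\end{verbatim}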
Using \eqref{genqq}, it is also easy to show that the classical action for the Kerr system as obtained from the flow above is given by \begin{equation} S_\text{Kerr}(q'(q,p),q,t) = \frac{1}{4} \left\{ \frac{\omega(q,p)^2 \, t}{4} + 2 \, p\, q \left( \cos \left[ 2\, \omega(q,p) \, t \right] - 1 \right) + (p^2-q^2) \sin \left[ 2\, \omega(q,p) \, t \right] \right\} \, , \label{genkerr} \end{equation} while the symmetric action in \eqref{genpop} is just \begin{equation} S_\text{W}(t) = (p^2+q^2)^2 t \, . \end{equation} Moving to the quantum realm, the canonical quantization of the Kerr hamiltonian in \eqref{Ckerr}, namely \begin{equation} \hat{H}_\text{Kerr}(\hat{q},\hat{p}) = (\hat{p}^2 + \hat{q}^2)^2 \, , \label{Qkerrqp} \end{equation} is straightforward and presents no ordering problems. However, just as in the SHO case, it is considerably simpler when expressed as a function of complexified variables. Here, the position and momentum operators are substituted by $\hat{p} = (\hat{\zeta}^* + \hat{\zeta})/\sqrt{2}$ and $\hat{q} = i(\hat{\zeta}^* - \hat{\zeta})/\sqrt{2}$, with the number operator given by $\hat{n} = \hat{\zeta}^* \hat{\zeta}$. In these variables, the hamiltonian in \eqref{Qkerrqp} is brought to \begin{equation} \hat{H}_\text{Kerr} (\hat{n}) = \left( 2 \hat{n} + \hat{1} \right)^2 \, , \label{QkerrC} \end{equation} which is yet again our familiar squared and rescaled SHO; being an exclusive function of $\hat{n}$, it also shares with the SHO its eigenfunctions, the Fock states $|n\rangle$. This is particularly useful for calculating the time-evolution of arbitrary states, since we can decompose them in the complete Fock basis $\{|n\rangle\}$ and deal with the evolution operator as a number. In particular, the eigenfunctions of the annihilation operator $\hat{\zeta}^*$ in the Fock basis are known from basic quantum mechanics \cite{BallentineBook} to be \begin{equation} \vert \alpha \rangle = e^{-\frac{\vert \alpha \vert^2}{2}} \sum_{n=0}^\infty \frac{\alpha^n}{\sqrt{n!}} \vert n \rangle \, , \label{coher} \end{equation} representing Klauder coherent states centered at $(q,p) = \sqrt{2} (\Re(\alpha),\Im(\alpha))$. Notice these states are not the same as the Schr\"odinger states used in H-K, since they are normalized \cite{GazeauBook,KlauderBook}. The normalization is required because these will serve as initial states for quantum propagation and we want the standard probabilistic interpretation of quantum mechanics to remain valid. In one dimension, the position representation of \eqref{coher} is \begin{equation} \langle x | \alpha \rangle = \pi^{-\frac{1}{4}} \exp \left\{ -\frac{ \left[ x - \Re(\alpha) \right]^2 }{2} + i \Im(\alpha) \left[ x - \Re(\alpha) \right] \right\} \, , \end{equation} where we have rescaled $\alpha \mapsto \alpha/\sqrt{2}$ in order to center the state at $(q,p) = (\Re(\alpha),\Im(\alpha))$. The time-evolution of this state in the Kerr system, namely \begin{equation} \vert \alpha(t) \rangle = \hat{U}_\text{Kerr}(t) \vert \alpha \rangle = e^{- i t \hat{H}_\text{Kerr}} \vert \alpha\rangle \, , \label{qflow} \end{equation} has been shown to have an exact closed form for times of the form \begin{equation} t = \left( \frac{2a}{b} \right) T_\text{rev} \, , \quad T_\text{rev} = \frac{\pi}{4} \, , \quad a, b \in \mathbb{Z} \, , \quad b \quad \text{odd} \, , \label{rev} \end{equation} where $T_\text{rev}$ is known as the \emph{revival time} for this system \cite{Yurke1986,Stoler1986}. The final state is then represented as a superposition of $b$ coherent states placed symmetrically around the origin.
Fractions such as $T_\text{rev}/8$ and $T_\text{rev}/16$ are especially striking due to the emergence of \emph{fractional revival patterns}, in which the evolved state is formed by the superposition of 2 (a Schr{\"o}dinger's cat \cite{Robinett2004}) or 4 (a compass state \cite{Zurek2001}) coherent states, respectively. As the name suggests, the initial state is completely recovered at $t=T_\text{rev}$ up to a global phase. The Kerr system has already been investigated in depth \cite{Yurke1986,Lando2019,Stoler1986,Toscano2009}, but its dynamics in phase space is worth revisiting due to its fascinating geometry. We start by noticing that a classical phase space distribution under the action of the Kerr flow \eqref{flow} will simultaneously revolve around the origin and be deformed into a filament, since outer points move faster than inner points. In particular, the 1-dimensional \emph{Wigner transform} of a time-dependent wave function $\langle q | \psi(t) \rangle$, given by \begin{equation} W (q,p;t) = \pi^{-1} \int d\tilde q \, \langle q + \tilde q | \psi(t) \rangle \langle \psi(t) | q-\tilde q \rangle e^{-2i \, \tilde{q} \, p}\,, \label{wig} \end{equation} provides a classical phase space distribution through its \emph{Truncated Wigner Approximation} (TWA), which corresponds to the $\mathcal{O}(\hbar^0)$ term in an $\hbar$-expansion of Moyal's equation \cite{Groenewold1946,Moyal1949,LandoThesis,Titimbo2020}. For the initial coherent state in \eqref{realcohs} the TWA is just \begin{equation} W_\text{TWA}(q,p;t) = \pi^{-1} \exp \left\{ - \left[ q'(q,p,-t) - \Re(\alpha) \right]^2 - \left[ p'(q,p,-t) - \Im(\alpha) \right]^2 \right\} \, , \label{TWA} \end{equation} \emph{i.e.}~a phase space gaussian with classically evolving coordinates. The negative time in the r.h.s.~is not a typo, being instead a fundamental ingredient in order for the TWA to evolve as a classical phase space distribution, obeying the Liouville equation \cite{LandoThesis}. In Fig.~\ref{fig:flow} we display the exact Wigner function in \eqref{wig} and its TWA for a coherent state propagated by the Kerr dynamics for some selected times. \begin{figure} \caption{The TWA in \eqref{TWA} and the exact Wigner function \eqref{wig} for a coherent state propagated by the Kerr dynamics, at selected times.} \label{fig:flow} \end{figure} Fig.~\ref{fig:flow} inspires pessimism with regard to a semiclassical approximation being able to reproduce quantum evolution, especially considering that its filamentary classical backbone gets thinner and thinner as time evolves (although its area obviously remains constant, by Liouville's theorem). This expectation was proven wrong on at least three occasions: In \cite{Toscano2009} it was shown that a careful application of the vV-G propagator was successful in reproducing the evolved wave functions for more than one revival time; in \cite{Grossmann2016} the H-K propagator was used to model a 0-dimensional Bose-Hubbard chain, for which the hamiltonian is slightly different from (yet dynamically identical to) \eqref{Ckerr}; and in \cite{Lando2019} a value representation using final instead of initial values, proposed first in \cite{Ozorio2013}, was able to reproduce quantum dynamics with calculations performed directly in phase space. Since we know that these semiclassical analyses of the Kerr system were successful, we can be sure that semiclassical methods are supposed to work for this system.
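The classical backbone of Fig.~\ref{fig:flow} is easily reproduced from \eqref{TWA}; a minimal Julia sketch, in which the grid, the snapshot time and the centre $(\Re(\alpha),\Im(\alpha))=(5,0)$ are illustrative choices, reads:
\begin{verbatim}
# Minimal sketch of the TWA in Eq. (TWA): the initial gaussian is transported by
# evaluating the Kerr flow of Eq. (flow) at time -t, so that it obeys the
# Liouville equation. Grid, snapshot time and centre are illustrative choices.
omega(q, p) = 4 * (q^2 + p^2)
kerr_flow(q, p, t) = (c = cos(omega(q, p) * t); s = sin(omega(q, p) * t);
                      (q * c + p * s, p * c - q * s))

function W_TWA(q, p, t; qc = 5.0, pc = 0.0)
    qb, pb = kerr_flow(q, p, -t)                  # backwards evolution
    return exp(-(qb - qc)^2 - (pb - pc)^2) / pi   # Eq. (TWA)
end

grid = range(-8, 8; length = 401)
twa  = [W_TWA(q, p, 0.05) for p in grid, q in grid]   # one snapshot in time
\end{verbatim}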
It is fundamental to keep in mind that the previous analysis \cite{Toscano2009} of the Kerr system using the vV-G propagator used a series of approximations to obtain accurate results, such as filtering trajectories and approximating the action up to second order. Here, however, we are not interested in obtaining accurate results, but in providing fair comparisons between all semiclassical propagators, which are calculated on the same grids and with the same number of trajectories unless explicitly stated. Our objective is to use the methods in the most plug-and-play manner possible, without any approximations, trajectory focusing or optimization. This is only possible because all classical objects used by the semiclassical propagators for the Kerr system are obtained from analytical calculations, \emph{except} for the root-trajectories, the Maslov indices and the branch changes. In \ref{App:C} we show how the error in the root-search of vV-G can be made equivalent to the numerical one in the flow, and since we use the same algorithm to calculate Maslov indices and branch changes, the error in both objects is the same. It is also worthwhile to mention that, since branch changes and Maslov indices are both obtained from a comparison algorithm, there is no numerical advantage at all in the absence of an explicit index in H-K: Guaranteeing the continuity of its pre-factor is a numerically identical process to counting caustics in vV-G \cite{Kay1994,SwensonThesis}. \subsection{A quick look at pre-factors}\label{subsec:pre} The first semiclassical aspect we would like to investigate is the difference between the pre-factors in the vV-G and H-K propagators in \eqref{vV} and \eqref{preHK}, which in the 1-dimensional case are just \begin{eqnarray} A_{vVG}(q_0,p_0;t) &= \left\vert \frac{\partial q'(q,p,t)}{\partial p} \right\vert^{-\frac{1}{2}}_{(q_0,p_0)} \label{prevvg} \\ A_{HK}(q_0,p_0;t) &= \left\{ \frac{1}{2} \left[ \left( \dfrac{\partial p'(q,p;t)}{\partial p} + \dfrac{\partial q'(q,p;t)}{\partial q} \right) \right. \right. \notag \\ &\qquad \qquad \left. \left. + i \left( \dfrac{\partial p'(q,p;t)}{\partial q} - \dfrac{\partial q'(q,p;t)}{\partial p} \right) \right] \right\}^{\frac{1}{2}}_{(q_0,p_0)} \, , \label{prehk} \end{eqnarray} and can be analytically obtained from the derivatives of the flow in \eqref{flow}. By choosing an arbitrary point $(q_0,p_0)$ and plotting these pre-factors as a function of time, fundamental differences between both methods can already be seen, as we display in Fig.~\ref{fig:pres}. For instance, by keeping in mind that the classical action \eqref{genkerr} inherits the flow's periodicity, we see that the asymptotic behavior of $A_{vVG}$ towards zero is a further indication that the vV-G propagator might lose normalization as time grows; The absolute value of $A_{HK}$, on the other hand, suggests that in this case the problem is quite the opposite, with the H-K propagator risking growing too much. As is clear in the figure, however, $A_{vVG}$ goes to zero exponentially, while $A_{HK}$ grows only logarithmically, suggesting that the vV-G propagator loses normalization faster than the H-K propagator diverges. Notice that, since $\partial_p q'(q,p;0) = 0$, $A_{vVG}$ has a caustic at the time origin $t=0$ independently of the choice of initial phase-space point, as discussed in Sec.~\ref{subsec:sqb}.
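For reference, the sketch below evaluates \eqref{prevvg} and \eqref{prehk} from the analytic derivatives of the flow, reusing the functions \texttt{w} and \texttt{flow} of the previous sketch (and therefore the same assumption on \eqref{flow}).
\begin{verbatim}
# Analytic derivatives of the flow above and the corresponding pre-factors.
function monodromy(q, p, t)
    th     = w(q, p) * t
    qf, pf = flow(q, p, t)
    dqdq = cos(th) + 8 * q * t * pf          # dq'/dq
    dqdp = sin(th) + 8 * p * t * pf          # dq'/dp
    dpdq = -sin(th) - 8 * q * t * qf         # dp'/dq
    dpdp = cos(th) - 8 * p * t * qf          # dp'/dp
    return dqdq, dqdp, dpdq, dpdp
end

A_vVG(q, p, t) = abs(monodromy(q, p, t)[2])^(-1/2)         # Eq. (prevvg)

function A_HK(q, p, t)                                     # Eq. (prehk)
    dqdq, dqdp, dpdq, dpdp = monodromy(q, p, t)
    return sqrt(complex((dpdp + dqdq) / 2, (dpdq - dqdp) / 2))
end
\end{verbatim}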
\begin{figure} \caption{Pre-factors in the vV-G and H-K propagators, given respectively by \eqref{prevvg} and \eqref{prehk}, plotted as functions of time for a fixed initial phase-space point.} \label{fig:pres} \end{figure} Another important aspect seen in Fig.~\ref{fig:pres} is that the caustics in vV-G do not occur at the same places as the branch changes in H-K, but the number of caustics between each branch change is always equal to two, as demonstrated by Kay in one of the earlier investigations on IVRs \cite{Kay1994}. It is also clear that the absolute value of the H-K pre-factor is always larger than 1, as proved in \ref{App:A}, and that the branch changes happen when the real part of the pre-factor hits zero and its imaginary part changes sign. We do not show the IVR pre-factor in \eqref{IVR} in order to avoid convoluting Fig.~\ref{fig:pres}, but since it is the inverse of vV-G's it is quite clear that it will grow in time and tend to zero as caustics are approached. \subsection{Implementing semiclassical propagation}\label{subsec:impl} We now move to the implementation of the semiclassical propagators in \eqref{vV} and \eqref{preHK}. Since the Kerr system is integrable, the root-search required by vV-G is not particularly challenging, as trajectory multiplicity can be easily dealt with as described in \ref{App:C}. For vV-G, we do not avoid caustics in any way, running over their neighborhoods and divergent points \footnote{Their neighborhoods, of course, enter the calculations in full. It is only the proper caustic, the $\infty$, that is automatically removed by the compiler in the calculations. All coding is done in the Julia programming language.}. Naturally, the process of obtaining the roots can be optimized by grid-focusing and other strategies that depend on the form of the initial state, \emph{e.g.}~if one is interested in propagating a coherent state, all trajectories with initial positions lying outside the initial coherent state can be dismissed in the calculations \cite{Heller1991}. Here, however, we do not wish to be limited to propagating wave functions, since the semiclassical propagators themselves offer a striking visual comparison. This requires us to root-search everywhere for initial momenta, such that all trajectories in our position-momentum grid are included in vV-G; for the case of H-K the whole grid is used for integration. \begin{figure} \caption{Real parts of the quantum \eqref{Kqu}, vV-G and H-K propagators for the same time values as in Fig.~\ref{fig:flow}.} \label{fig:Ks} \end{figure} \begin{figure} \caption{Wave functions for an initial coherent state centered at $(\Re(\alpha)=5,\Im(\alpha)=0)$, the same as in Fig.~\ref{fig:flow}.} \label{fig:waves} \end{figure} Evidently, a comparison of semiclassical results is incomplete without their quantum equivalent. To calculate the quantum propagator for the Kerr hamiltonian, we insert a Klauder coherent state projector in the expression for the position propagator: \begin{equation} K_\text{Kerr}(q',q;t) = \pi^{-n} \int d\alpha \, \langle q' | \hat{U}_\text{Kerr}(t) | \alpha \rangle \langle \alpha |q \rangle \, . \end{equation} Since the evolution of coherent states is exact for the times in \eqref{rev}, by writing the complex measure explicitly we have the exact quantum propagator \begin{equation} K_\text{Kerr}(q',q;t) = (2\pi)^{-1} \int d\Re(\alpha)\,d\Im(\alpha) \, \langle q' | \alpha (t) \rangle \langle \alpha |q \rangle \, , \label{Kqu} \end{equation} with $|\alpha(t)\rangle$ as in \eqref{qflow} for the times in \eqref{rev}.
A comparison of quantum, vV-G and H-K propagators can be seen in Fig.~\ref{fig:Ks} for the same time values as in Fig.~\ref{fig:flow}. Two aspects of Fig.~\ref{fig:Ks} immediately catch the eye: The first is the astonishing accuracy of the H-K propagator, which is almost indiscernible from its exact quantum counterpart; The second is the trapezoidal structure formed by caustic submanifolds lifted to the $(q',q)$-space, visible in the vV-G propagator (for caustics in the $(q,p)$-space, see Fig.~\ref{fig:caustics}). As discussed earlier, quantum propagation is smooth, so that the web of caustics appearing in vV-G has no equivalent in the quantum world and is absent in the caustic-free H-K. Indeed, for $t=t_3$ this web is so dense that the final propagator goes to zero in the outskirts of the grid, where caustics proliferate faster due to larger angular frequencies and, in consequence, a higher number of zeros in the pre-factor. It is clear, however, that the vV-G propagator provides reasonable values for regions near the origin, in which the caustic web is sparser. We will soon see that caustics not only proliferate in time, but that the time spent by a trajectory when crossing a caustic is also increased. We suspect this to be an important mechanism for the failure of vV-G for long propagation times. Despite the visual richness of Fig.~\ref{fig:Ks}, it is important to have a more quantitative comparison between semiclassical propagation methods. The most immediate one is to use the propagators in Fig.~\ref{fig:Ks} to evolve an initial state and compare wave functions. For this we choose the same initial coherent state of Fig.~\ref{fig:flow}, and the result is presented in Fig.~\ref{fig:waves}. Again, the H-K wave functions are indiscernible from the exact quantum ones, while the vV-G wave functions reflect the instability of their respective propagators. Indeed, the cat-state wave function using vV-G is fading away due to normalization loss, a phenomenon also observed in the Wigner functions in \cite{Lando2019}, which required renormalization -- a quite common procedure in the field of quantum chaos. The H-K propagator, however, has already been credited with conserving its normalization for very long times \cite{Herman1986,Miller2011}, a point Fig.~\ref{fig:waves} confirms. Normalization shall be explored more deeply in Sec.~\ref{subsec:caus}. \subsection{Autocorrelation functions}\label{subsec:auto} Several important quantities in physics and chemistry are given by numbers, for which the semiclassical propagator is integrated twice; the oscillatory behavior seen \emph{e.g.}~in the vV-G wave functions of Fig.~\ref{fig:waves}, which is already an attenuation of the one in Fig.~\ref{fig:Ks}, should then be muffled even further. The autocorrelation function in \eqref{auto} is an example, and in Fig.~\ref{fig:auto} we display it as obtained from H-K, vV-G, and the IVR expression in \eqref{autoIVR}. \begin{figure} \caption{Real part of the autocorrelation function for the same initial coherent state used in Figs.~\ref{fig:flow} and \ref{fig:waves}, as obtained from the quantum expression, vV-G, H-K and the IVR in \eqref{autoIVR}.} \label{fig:auto} \end{figure} It is evident from Fig.~\ref{fig:auto} that all semiclassical methods used are successful in reproducing the autocorrelation function: The quantum oscillations are accurately captured and the discrete Fourier transform of the data reproduces the quantum energy spectrum perfectly, such that we choose not to show it here as this manuscript is already quite long.
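For completeness, the quantum reference curve of Fig.~\ref{fig:auto} can be generated directly in the Fock basis. The sketch below assumes that \eqref{auto} is the overlap $\langle \alpha | \alpha(t)\rangle$ and that, in the rescaled convention adopted above, the state centered at $(\Re(\alpha),\Im(\alpha))=(5,0)$ corresponds to $|\alpha|^{2}=25/2$; both identifications are assumptions of this illustration rather than statements about the data in the figure.
\begin{verbatim}
# Quantum autocorrelation of a coherent state under the Kerr hamiltonian,
# assuming <alpha|alpha(t)> with |alpha|^2 = 25/2 (state centered at (5,0)).
nterms  = 200
abs2a   = 25 / 2
weights = [exp(-abs2a) * abs2a^n / factorial(big(n)) for n in 0:nterms]  # Poisson

autocorr(t) = sum(weights[n+1] * cis(-t * (2n + 1)^2) for n in 0:nterms)

ts = range(0, pi/4; length = 1000)
C  = autocorr.(ts)     # compare the real part and modulus with the figure
\end{verbatim}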
Fig.~\ref{fig:auto} allows us to observe several theoretical points raised in the main text, namely: \begin{itemize} \item The general caustic of the position representation at $t=0$ is soon followed by a tiny time regime in which vV-G and its IVR are very accurate, which is then followed by a not so accurate intermediate short-time regime which ends in a severe and general inaccuracy for all methods used, centered around $t=0.031$. \item The oscillations lose amplitude for the vV-G propagator, showing that it does lose normalization as time grows. The IVR is also affected, but as we used a much denser grid to calculate it (see caption of Fig.~\ref{fig:auto}), it loses normalization more slowly. \item The relative error in the autocorrelation function's absolute value calculated using H-K, just as the real and imaginary parts (not shown), shows that H-K is significantly more accurate than vV-G and its IVR. \item The IVR result is at best equivalent to the one obtained using vV-G, despite its 25 times larger grid. \end{itemize} Besides these points, some aspects of Fig.~\ref{fig:auto} were not expected. The first is that despite all propagators having huge errors centered around the cat state revival at $t\approx0.393$, the vV-G is the one that better approximates the autocorrelation, even though its corresponding wave function in Fig.~\ref{fig:waves} has already lost a great deal of normalization and is filled with oscillatory errors. This is a stark demonstration of the filtering of oscillatory behavior taking place when integrating the semiclassical propagators on their whole domains. Of course, the fact that accurate autocorrelation functions can be obtained from semiclassical approximations that provide poor wave functions does not preclude their use for calculating autocorrelation functions. If there is interest in the connections between quantum and classical physics, however, an accurate autocorrelation function is not enough: One needs more general results, such as the propagators in Fig.~\ref{fig:Ks}, which suggest that a semiclassical quantization recipe based on a caustic-free representation ties the quantum and classical worlds much more closely than one that includes the caustics. The second surprising aspect of Fig.~\ref{fig:auto} is the general inaccuracy of semiclassical propagation at $t=0.031$, which is related to the \emph{Ehrenfest time} $T_\text{Ehr}$ for this particular initial state (see the first column of Fig.~\ref{fig:flow}). The Ehrenfest time in this particular case can be taken as the instant at which the initial packet's centroid has performed a full revolution around the origin \cite{Raul2012,Lando2019,Toscano2009}, obtained from the requirement \begin{equation} q'(q,p;0) \equiv q'(q,p; T_\text{Ehr}) \quad \Longrightarrow \quad 4(q^2+p^2) T_\text{Ehr} = 2\pi \quad \Longrightarrow \quad T_\text{Ehr} = \frac{\pi}{2(q^2+p^2)} \, , \end{equation} where $q'(q,p;t)$ is given in \eqref{flow}. For the centroid of the initial packet in Fig.~\ref{fig:auto}, we have $ T_\text{Ehr} = \pi/50 \approx 0.063$. The geometrical meaning of $T_\text{Ehr}/2$ is that at this moment the Wigner function's centroid has achieved the largest distance with respect to where it began, equal to the diameter of its orbit. As we can see from Fig.~\ref{fig:flow}, this is also the time at which the Wigner function's tail has performed a full revolution.
Thus, at $T_\text{Ehr}/2$, the Wigner function has for the first time covered the maximum area available for this particular initial state, defining its \emph{characteristic action} \cite{Raul2012}. An equivalent interpretation of $T_\text{Ehr}$ is as the instant at which the autocorrelations obtained using the classical evolution of the Wigner function, \emph{i.e.}~the TWA, and its quantum equivalent cease to agree \cite{Lando2019,LandoThesis,Heller1991}. This time, which was previously thought to be a barrier for semiclassical propagation to work properly, has been broken on a daily basis by even the simplest of methods \cite{Voros1996}. However, it is often reported that no distinguishing feature can be observed in semiclassical propagation at the Ehrenfest time, which is something we also see in Fig.~\ref{fig:auto}: There is no feature indicating anything special about $t=0.063$. For $T_\text{Ehr}/2$, however, all propagation methods are inaccurate, implying the existence of something deeper than numerical errors, caustics, or grid sizes. This feature is not observable by the naked eye in the autocorrelation functions themselves and requires us to look at the relative errors, perhaps explaining why, to our knowledge, this has not been observed before. We note that it is also possible to define the Ehrenfest time as half the centroid's revolution, although in this case one loses connection with the separation of quantum and classical autocorrelation functions \cite{Raul2012}. \subsection{Caustic stickiness}\label{subsec:caus} Caustics are unavoidable in the vV-G propagator and, as mentioned in Subsec.~\ref{subsec:impl}, infinite pre-factors lead to the contributions from their respective trajectories being lost. We here describe a mechanism that renders the vV-G propagator very unlikely to work for long propagation times, due to trajectories accumulating on caustics. \begin{figure} \caption{The absolute value of the H-K and vV-G propagators for $t_1=0.031$, where for each row one revival time is summed to $t_1$.} \label{fig:longtimes} \end{figure} \begin{figure} \caption{The phenomenon of caustic stickiness, in which trajectories keep falling on caustics as long as they are bound to the same orbit, is exemplified here. Panel \textbf{(a)} shows the general structure of caustics for the Kerr system together with a typical trajectory; panels \textbf{(b)}--\textbf{(d)} zoom in on the crossing region at increasingly longer times.} \label{fig:caustics} \end{figure} In Fig.~\ref{fig:longtimes} we present the absolute value of the vV-G and H-K propagators for $t=t_1$, as in Fig.~\ref{fig:Ks}, but for each column we sum a revival time to it. The absence of caustics causes H-K to be almost unchanged, but their increasing density destroys vV-G. At first it might look perfectly possible to use immense grids and recover reasonable results, since we compensate the normalization loss by including more trajectories that do not finish on caustics. This logic can be expressed as: If my trajectory starts at $(q,p)$ and lands on a caustic at $(q',p')$, then a small perturbation in either initial positions or momenta will evade the caustic and provide a usable contribution. What we have discovered is that this logic is incorrect, as the ``spiraling'' caustic web for the Kerr system becomes more circular with the passing of time, such that \emph{all} initial positions and momenta lying on the same orbit for that fixed time start to fall on caustics. We refer to this as \emph{caustic stickiness}. The mechanism described above is depicted in Fig.~\ref{fig:caustics}.
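Numerically, the caustic web of Fig.~\ref{fig:caustics} can be traced by scanning a phase-space grid for the zeros of $\partial q'/\partial p$; a minimal sketch, reusing \texttt{monodromy} from the pre-factor sketch (and therefore the same assumption on \eqref{flow}), is:
\begin{verbatim}
# Tracing the caustic web: grid points where |dq'/dp| is numerically zero.
function caustic_mask(qs, ps, t; tol = 1e-2)
    [abs(monodromy(q, p, t)[2]) < tol for p in ps, q in qs]
end

qs   = range(-8, 8; length = 601)
ps   = range(-8, 8; length = 601)
mask = caustic_mask(qs, ps, 0.25)    # illustrative propagation time
\end{verbatim}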
In panel \textbf{(a)} we show what the general structure of caustics looks like for the Kerr system, together with a typical trajectory. We then zoom in on the white square to produce panel \textbf{(b)}, where we can see the trajectory crossing the caustic from up close. In panel \textbf{(c)} we go to a longer time and witness the caustic stickiness: Not only does the number of caustics increase, but they also start becoming more circular, increasing the time a trajectory remains on one of them during crossing. In panel \textbf{(d)} we are near the revival time $T_\text{rev}$, and the trajectory remains entirely on the caustic for this particular region. This has an extremely destructive effect on the final propagator, since it shows that not only a single trajectory is lost in the semiclassical sum, but instead all the trajectories ending on the green line in panel \textbf{(d)} are removed. This renders finding trajectories that do not end on caustics very hard for long times, and should also explain the unexpected null-valued regions in the Wigner functions of \cite{Lando2019}. The times in Fig.~\ref{fig:caustics} were, of course, selected on purpose for the caustic submanifolds to support the same trajectory, but it must be kept in mind that this happens in the neighborhoods of all caustics. Besides, since caustics are also removed from the IVR by having null pre-factors, the problem migrates unaltered. It has also been pointed out that the nature of caustic crossings is the same for both integrable and chaotic systems \cite{Schulman1994}, such that we suspect caustic stickiness might not be restricted to the Kerr system. Although there are several interesting considerations regarding the relationship between the distribution of root-trajectories and caustics as time grows (even more stickiness!), these findings are not fundamental to this present work and shall be published elsewhere. \section{Discussion}\label{sec:disc} The preference of the chemical community for the H-K propagator is completely justified, given the quality of its results, but the uneasiness of many with regard to its theoretical background has made it harder to see this propagator as what it is: An IVR for the position propagator expressed in the S-B representation. It does not help that this propagator lacked a consistent derivation until rather recently. Its discovery by Herman and Kluk in \cite{Herman1984} cannot be considered rigorous and received some criticism after more careful examinations failed to establish consistent links with the coherent state representation (\emph{e.g.}~\cite{Baranger2001}). Other authors, however, developed significant arguments in favor of the H-K propagator \cite{Grossmann1998,Miller2001} (and more recently \cite{Swart2009}). Some time later, Kay made use of the over-completeness of the coherent state basis to derive the H-K propagator through a series of very exhausting calculations \cite{Kay2005}. Follow-up papers \cite{Fierro2006} and \cite{Ezra2006} were then the first instances where the H-K propagator was connected to complex variables by either complex WKB theory or SPAs. Nevertheless, the fact that the H-K propagator relies on real trajectories, merely parametrized by complex variables, was never connected to its absence of caustics -- and, therefore, to its roots in the S-B space.
The S-B representation of the metaplectic group, for instance, is a unitary representation of the symplectic group and is exact only because it relies exclusively on complexified, instead of complex, variables \cite{Littlejohn1986}. What we have demonstrated here is that the generalization of this to the semiclassical scenario is exactly the theoretical pillar that renders the H-K propagator caustic-free. Besides, by identifying the map taking the S-B propagator to the position one as a sequence of inverse S-B transforms, one finally understands why the coherent states in the integral kernel of the H-K propagator must follow Schr\"odinger's phase convention instead of the more obvious, normalized Klauder one: It must include the gaussian measure with which the S-B representation is equipped. A consequence is that the generating function entering the S-B propagator is unmistakably identified as the symmetric action given by the Weyl ordering rule. The striking distinction between representations with and without caustics is clear in the analysis of the homogeneous Kerr system. Despite its regular dynamics, the caustic submanifolds for this system are as intricate as they can be, possibly even when compared to chaotic systems. The astonishing accuracy achieved by the H-K propagator becomes even more significant if one considers that the trapezoidal caustic web, visible in the vV-G propagator in Fig.~\ref{fig:Ks}, will not go away regardless of grid size, and the accumulation of caustics as time grows will inevitably lead to general failure. As discussed, the defects in the vV-G propagator are increasingly muffled as one integrates it: The wave functions in Fig.~\ref{fig:waves} are an improvement over the propagators in Fig.~\ref{fig:Ks}, and the autocorrelation in Fig.~\ref{fig:auto} is strikingly more accurate than one would suspect by looking at the earlier figures. However, when we use propagators as integral kernels and integrate them against states, we lose touch with the fact that the links between the quantum and the classical are much clearer by looking at the propagators themselves. In this aspect, the H-K propagators displayed in Fig.~\ref{fig:Ks} are, to this day, the strongest evidence that quantum-classical connections are deeper when employing representations that are invariant with respect to the symplectic group. Although here our invariant representation is the S-B one, another example is the Wigner representation of quantum mechanics, which provides arguably the most important object when looking for quantum-classical connections: The Wigner function \cite{Ozorio1998,Wigner1932}. Unfortunately, the real nature of the Wigner representation overrides its invariance with respect to the symplectic group, such that it happens not to be caustic-free. The complexification of double phase spaces \cite{Ozorio2010}, however, will possibly reward us with a caustic-free, invariant way to describe Wigner evolution. Another important point addressed here is that the bypassing provided by transforming a semiclassical propagator to an IVR does not get rid of the problems raised by caustics, since they have nothing to do with the propagator, but with the representation it uses. Thus, the loss of contributions taking place in raw propagators migrates unaltered to their IVRs, which in turn have much worse convergence than the raw propagators themselves.
Although the uncomfortable process of root-searching is avoided, the inversion of pre-factors that takes place when moving to an IVR drastically increases the amplitude of oscillations already present in the raw propagator, rendering the IVR dependent on much larger integration grids than the raw propagators themselves in order to converge. Naturally, since there are no caustics in the S-B representation, this maximization of amplitudes does not happen in the H-K propagator and, to add yet another desirable numerical aspect of this method, the static and evolving coherent states in its kernel limit the integration domain to trajectories close to the one which connects their centroids. This prevents the H-K propagator from including trajectories that are far from its main stationary one and would contribute mostly oscillatory errors. All these characteristics sum up to provide a semiclassical propagator that converges with very few trajectories and has minimal numerical errors. We here also identify the impact of a characteristically classical time, namely half the Ehrenfest time, on semiclassical propagation. A major mechanism for the failure of semiclassical propagators based on representations that have caustics, which we dubbed ``caustic stickiness'', is also presented. As the density of caustics increases in phase space, whole families of trajectories (the ones lying on the same classical orbit) are lost in both vV-G and its IVR. We suspect that the loss of trajectories due to caustic stickiness explains the presence of blank arcs inside the coherent states at the fractional revivals in \cite{Lando2019}, since these have the same geometry as the caustic submanifolds in the Kerr system. Although the marginals obtained from the Wigner function can be significantly improved by using a larger grid, the arcs will never disappear from the Wigner function itself. Likewise, very large integration grids improve the results obtained \emph{from} the propagator and are useful in calculations, but the propagator itself will always reflect the caustics -- they are the footprints of working with a compromised representation. We do not claim to have implemented the vV-G propagator in the smartest possible manner (as in \cite{Toscano2009}), but such a manner might not even exist in general. In fact, we implemented H-K just as crudely as vV-G, since we used the same integration grids and algorithms for all methods. The numerical advantage given to the position space IVR is necessary in order to achieve convergence, but IVRs of this type are not as interesting as in phase space, where they can be used to calculate the Wigner functions themselves \cite{Ozorio2013}. It could be argued that the vV-G propagator could provide better results if a larger number of trajectories were included in the sum, but we did try to increase the root-search domain and the impact was very small. Besides, we can also reverse the argument and state that the H-K propagator achieved strikingly accurate results with the same trajectories available to vV-G. In the end, we are also unable to see how including more trajectories would get rid of the problem of caustic stickiness. Nothing in this manuscript indicates that using IVRs, whether the position one or even the H-K propagator itself, will be fruitful for the study of hard chaos or even strong soft chaos, in which chaotic trajectories cover a larger portion of phase space than the regular ones.
A comprehensive analysis of the employment of the H-K propagator to chaotic systems is still lacking, but it might indicate that vV-G is not excluded as the method of choice in this case. \section{Conclusion}\label{sec:conc} We have shed new light on the Herman-Kluk propagator, which has for many years evaded proper theoretical contextualization. Its root in the Segal-Bargmann representation was shown to be the reason for its lack of caustics, which are a general feature of semiclassical propagators based on other representations. After a deep numerical investigation, we find that this propagator's striking success might imply that the connections between the classical and quantum realms are better established by using coherent states, an argument as old as quantum mechanics itself \cite{Schrodinger1926}. We did not, however, investigate this propagator's behavior for systems presenting chaotic dynamics, which might well be its Achilles' heel at least for the hard chaotic scenario. Nevertheless, since we have no reason to assume caustics in integrable and chaotic systems to be any different, we expect our conclusions to be generalizable to higher dimensional and/or chaotic dynamics. \section{Proof of the non-singularity of $\Lambda$}\label{App:A} For any symplectic matrix $\mathcal{M}$ we have \begin{equation} \mathcal{M}^T \mathcal{J} \mathcal{M} = \mathcal{J} \quad \Longrightarrow \quad \mathcal{M}^{-1} = \mathcal{J}^{-1} \mathcal{M}^T \mathcal{J} \, . \label{app1} \end{equation} Now, $\mathcal{M}$ can be obtained from the de-complexification of $\mathcal{M}_\mathbb{C}$ as $\mathcal{M} = \mathcal{W}^{-1} \mathcal{M}_\mathbb{C} \mathcal{W}$. Substituting this in \eqref{app1}, \begin{equation} \left( \mathcal{W}^{-1} \mathcal{M}_\mathbb{C} \mathcal{W} \right)^{-1} = \mathcal{J}^{-1} \left( \mathcal{W}^{-1} \mathcal{M}_\mathbb{C} \mathcal{W} \right)^T \mathcal{J} \quad \Longrightarrow \quad \mathcal{M}_\mathbb{C}^{-1} = \mathcal{J}_\mathbb{C} \mathcal{M}_\mathbb{C}^T \mathcal{J}_\mathbb{C} \, , \end{equation} where we have used $\mathcal{W}^{-1} = -\mathcal{W}^T$ and $\mathcal{J}^{-1} = -\mathcal{J}$. Thus, \begin{equation} \begin{pmatrix} \Lambda & \Gamma \\ \Gamma^* & \Lambda^* \end{pmatrix}^{-1} =- \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} \Lambda^T & \Gamma^\dagger \\ \Gamma^T & \Lambda^\dagger \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} = \begin{pmatrix} \Lambda^T & -\Gamma^\dagger \\ -\Gamma^T & \Lambda^\dagger \end{pmatrix} \, . \end{equation} Now, writing $\mathcal{M}_\mathbb{C}^{-1}\mathcal{M}_\mathbb{C} = I$ explicitly, we arrive at several useful properties of $\Gamma$ and $\Lambda$, including $\Lambda^T \Lambda - \Gamma^\dagger \Gamma^* = I$. Applying this equality to an arbitrary vector $u \in \mathbb{C}^{n}$, \begin{equation} \vert \Lambda u \vert^2 = \vert \Gamma u \vert^2 + |u|^2 \quad \Longrightarrow \quad \Arrowvert \Lambda \Arrowvert \geq 1 \, , \end{equation} with $\Arrowvert \Lambda \Arrowvert = \sup_u | \Lambda u |/|u|$. Since $\vert \Lambda u \vert \geq \vert u \vert > 0$ for every $u \neq 0$, it also follows that $\Lambda$ has trivial kernel and is therefore non-singular. $\qquad \square$ Note that, by relying exclusively on the symplecticity of $\mathcal{M}$, this proof is valid regardless of whether $\mathcal{M}$ is a function of any parameter, including obviously time and phase-space points. For the original see \cite{FollandBook}.
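In the $1$-dimensional case, a quick numerical counterpart of this result is obtained by identifying $|\Lambda|$ with $|A_{HK}|^{2}$ from \eqref{prehk} (an identification that is insensitive to the convention chosen for the complexification); reusing \texttt{A\_HK} from the pre-factor sketch of Sec.~\ref{sec:kerr}:
\begin{verbatim}
# Spot-check of ||Lambda|| >= 1 via |A_HK|^2 at a few random points and times.
for _ in 1:5
    q, p, t = 4 * rand() - 2, 4 * rand() - 2, 2 * rand()
    println(abs2(A_HK(q, p, t)))      # never smaller than 1
end
\end{verbatim}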
\section{Compositions with the $\delta$-distribution}\label{App:B} We limit ourselves to 1-dimensional spaces for brevity, but generalizations are trivial and can be found in \emph{e.g.}~\cite{SchwartzBook}. For $x \in \mathbb{R}$, Dirac's $\delta$ distribution is defined as the generalized function fulfilling \begin{equation} \int_\mathbb{R} dx \, \delta(x) \phi(x) = \phi(0) \, , \end{equation} for all continuously differentiable test functions $\phi$. Its composition $\delta \circ f$ can be effortlessly worked out using a change of variables $f(x) \mapsto u$: \begin{eqnarray} \int_\mathbb{R} dx \, (\delta \circ f)(x) \phi(x) &= \int_{f(\mathbb{R})} d[f^{-1}(u)] \, \delta(u) (\phi \circ f^{-1})(u) \\ &= \int_{f(\mathbb{R})} du \, \delta(u) \left[ \frac{(\phi \circ f^{-1})(u)}{| (f' \circ f^{-1})(u)|} \right] \\ &= \sum_{\ker(f)} \frac{\phi(x)}{|f'(x)|} \, , \label{B} \end{eqnarray} where the sum runs over the kernel of $f$, \emph{i.e.}~over all $x^{(i)}$ such that $f(x^{(i)}) = 0$. It is clear that this identification is only valid if $f'(x) \neq 0$ on the kernel of $f$. If we pick $\phi(x) = \sqrt{|f'(x)|}$ (which is not continuously differentiable at the zeros of $f'$), we see that \eqref{B} becomes \begin{align} \int dx \, (\delta \circ f)(x) \sqrt{|f'(x)|} = \sum_{\ker(f)} \sqrt{\frac{1}{|f'(x)|}} \, . \end{align} Identifying $f \leftrightarrow x'(q,p;t) - q'$ and $x \leftrightarrow p$, where $x'(q,p;t)$ is a final position evolved by the hamiltonian flow and $p$ is the initial momentum, the condition above reads \begin{align} \int dp \, \delta \left[ x'(q,p;t) - q' \right] \left\vert \frac{\partial x'(q,p;t)}{\partial p} \right\vert^\frac{1}{2} = \sum_{p} \left\vert \frac{\partial x'(q,p;t)}{\partial p} \right\vert^{-\frac{1}{2}} \, , \end{align} such that the left-hand side is just an integral form of the root-search \cite{HellerBook}. If we integrate both sides with respect to $x'(q,p;t)$, the result is precisely Miller's trick \eqref{ivrq} in a 1-dimensional setting, with the filtering of final trajectories happening as a function of the initial momentum. We can run over any number of caustics in integration, since the area under the curve is asymptotically finite, as long as we do not end on them. If we do, it is just a matter of redefining the integration domain by associating the value $0$ to caustics, or deviating from the landed caustic by an infinitesimal value. As integration is blind to sets of zero measure, this either does not impact the result or sums an infinitesimal value to the final integral. All the arguments above can be reformulated in the complex case, for which we then choose $f \leftrightarrow \gamma'(\zeta,\zeta^*;t) - \zeta'$ and $x \leftrightarrow \zeta$. The preservation of real orientation by complex determinants implies that the absolute values in the Jacobians are not necessary, leading directly to \eqref{ivrz}. There is also no need to worry about caustics, since the complex pre-factor $\Lambda$ is non-singular (see \ref{App:A}). \section{Implementing the root-search}\label{App:C} We again restrict ourselves to the 1-dimensional case, as in Sec.~\ref{sec:kerr}. The vV-G propagator requires root-searching, \emph{i.e.}~in order to obtain the matrix element $\langle q' | \widehat{U}(t) | q \rangle$ we must find all initial momenta whose classical trajectories link $(q,p)$ at $t=0$ to $(q',p')$ at time $t$. Evidently, the final momentum is not important in the process.
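A minimal sketch of this scan over trial momenta, anticipating the steps enumerated below and reusing the flow assumed in Sec.~\ref{sec:kerr}, is the following.
\begin{verbatim}
# Minimal root-search over initial momenta for fixed (q, q', t).
function root_momenta(q, qp, t, pgrid; eps = 1e-3, dlt = 1e-2)
    roots = Float64[]
    for p in pgrid
        Qp, _ = flow(q, p, t)                 # final position for trial momentum p
        if abs(Qp - qp) < eps                 # step 3: within the threshold eps
            # step 4: momenta closer than dlt count as the same root-trajectory
            if isempty(roots) || abs(p - roots[end]) > dlt
                push!(roots, p)
            end
        end
    end
    return roots
end

roots = root_momenta(0.0, 3.0, 0.1, range(-10, 10; length = 200_001))
\end{verbatim}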
Now, regardless of whether or not the system at hand has an analytical solution, the process can be numerically implemented in the same fashion, enumerated below. \begin{enumerate} \item Fix $q$, $q'$ and $t$; \item Draw an initial momentum $\widetilde{p}$ and use it to calculate a final position $Q'(q,\widetilde{p};t)$; \item Fix a tolerance $\epsilon>0$ and check whether $|Q'(q,\widetilde{p};t) - q'| < \epsilon$. The value attributed to $\epsilon$ is the threshold within which a final position must fall for its trajectory to be considered a solution to the root-search problem. If the inequality holds, add $\widetilde{p}$ to the list of root momenta; \item The list of root momenta will consist of several momenta for any non-linear flow, and some of them will be very close to each other, since if $|Q'_1(q,\widetilde{p}_1;t) - Q'_2(q,\widetilde{p}_2;t)|<\epsilon$ and $\widetilde{p}_1$ is a root, then $\widetilde{p}_2$ will in general also be selected as a root. This makes it necessary to define $\delta > 0$ such that root momenta that fulfill $|\widetilde{p}_2 - \widetilde{p}_1| < \delta$ must be considered to correspond to the same trajectory. \end{enumerate} The procedure above requires sampling over grids of initial momenta to obtain each component of the vV-G propagator, but this process is actually faster than two dimensional integration for a single degree of freedom. The process, however, scales quite poorly with dimension, and then integral methods such as IVRs become more efficient. It must be kept in mind that the careless accounting for multiplicity in step~4 is only possible because the trajectories in the Kerr system have monotonically increasing frequencies, such that true root-trajectories are never too close. Another important point is that setting $\epsilon$ smaller than the grid spacing we are using for $q$ and $q'$ has no impact on the results, since errors of this order are already present in the numerical implementation of the flow itself. The geometry of root-searching, together with its corresponding roots for the Kerr system, is displayed in Fig.~\ref{fig:trajs}. \begin{figure} \caption{(a) In this panel we can see that several root-trajectories, displayed in red, start at a fixed $q$ and end on a ball of radius $\epsilon$, centered at $q'$. These trajectories will all be selected by the root-search, but need to count as a single one. This is done \emph{via} the tolerance $\delta$ introduced in step~4.} \label{fig:trajs} \end{figure} \section*{References} \end{document}
\begin{document} \title[Certain subclasses of multivalent functions]{\textbf{Certain subclasses of multivalent functions defined by new multiplier transformations }} \author{Erhan Deniz and Halit Orhan} \address{\thinspace Department of Mathematics, Faculty of Science, Ataturk University, 25240 Erzurum, Turkey} \email{[email protected]} \email{[email protected]} \date{} \begin{abstract} In the present paper the new multiplier transformations $\mathrm{{\mathcal{J} }}_{p}^{\delta }(\lambda ,\mu ,l)$ $(\delta ,l\geq 0,\;\lambda \geq \mu \geq 0;\;p\in \mathrm{ \mathbb{N} )}$ of multivalent functions are defined. Making use of the operator $\mathrm{ {\mathcal{J}}}_{p}^{\delta }(\lambda ,\mu ,l),$ two new subclasses $\mathcal{ P}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p)$ and $\widetilde{\mathcal{P}} _{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p)$\textbf{\ }of multivalent analytic functions are introduced and investigated in the open unit disk. Some interesting relations and characteristics such as inclusion relationships, neighborhoods, partial sums, some applications of fractional calculus and quasi-convolution properties of functions belonging to each of these subclasses $\mathcal{P}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p)$ and $\widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p)$ are investigated. Relevant connections of the definitions and results presented in this paper with those obtained in several earlier works on the subject are also pointed out. \end{abstract} \keywords{Multiplier transformations; Analytic functions; Multivalent functions; Neighborhoods and Partial sums of analytic functions; Fractional calculus operators.} \subjclass[2000]{Primary 30C45} \maketitle \noindent \section{\ INTRODUCTION AND DEFINITIONS} Let $\mathrm{{\mathcal{A}}}(n,p)$ denote the class of functions normalized by \begin{equation} f(z)=z^{p}+\sum_{k=n+p}^{\infty }a_{k}z^{k}\text{\ \ }\left( p,n\in \mathrm{ \mathbb{N} }:=\{1,2,3,...\}\right) \tag{1.1} \end{equation} which are analytic and $p$-valent in the open unit disk $\mathrm{{\mathcal{U} }}=\{z:\;z\in \mathrm{ \mathbb{C} }\;\mathrm{and\;}\left\vert z\right\vert <1\}.$ Let $f(z)$ and $g(z)$ be analytic in $\mathrm{{\mathcal{U}}}.$ Then, we say that the function $f$ is subordinate to $g$ if there exists a Schwarz function $w(z),$ analytic in $\mathrm{{\mathcal{U}}}$ with $w(0)=0,$ $ \left\vert w(z)\right\vert <1,$ such that $f(z)=g(w(z))\mathrm{\;(}z\in \mathrm{{\mathcal{U}})}.$ We denote this subordination by $f\prec g\mathrm{ \;or\;}f(z)\prec g(z)\mathrm{\;(}z\in \mathrm{{\mathcal{U}})}.$ In particular, if the function $g$ is univalent in $\mathrm{{\mathcal{U}}}$, the above subordination is equivalent to $f(0)=g(0),$ $f(\mathrm{{\mathcal{U} }})\subset g(\mathrm{{\mathcal{U}}}).$ For $f\in \mathrm{{\mathcal{A}}}(n,p)$ given by (1.1) and $g(z)$ given by \begin{equation} g(z)=z^{p}+\sum_{k=n+p}^{\infty }b_{k}z^{k}\text{\ \ }\left( p,n\in \mathrm{ \mathbb{N} }:=\{1,2,3,...\}\right) \tag{1.2} \end{equation} their convolution (or Hadamard product), denoted by $(f\ast g),$ is defined as \begin{equation} (f\ast g)(z):=z^{p}+\sum_{k=n+p}^{\infty }a_{k}b_{k}z^{k}=:(g\ast f)(z)\text{ \ }\left( z\in \mathrm{{\mathcal{U}}}\right) .
\tag{1.3} \end{equation} Note that $f\ast g\in \mathrm{{\mathcal{A}}}(n,p).$ In particular, we set \begin{equation*} \mathrm{{\mathcal{A}}}(p,1):=\mathrm{{\mathcal{A}}}_{p},\;\;\mathrm{{ \mathcal{A}}}(1,n):=\mathrm{{\mathcal{A}}}(n),\;\;\mathrm{{\mathcal{A}}} (1,1):=\mathrm{{\mathcal{A}}}_{1}=\mathrm{{\mathcal{A}}}.\;\; \end{equation*} For a function $f$ in $\mathrm{{\mathcal{A}}}(n,p),$ we define the \textit{ multiplier transformations }$\mathrm{{\mathcal{J}}}_{p}^{\delta }(\lambda ,\mu ,l)$ as follows: \noindent \textbf{Definition 1.1. }Let $f\in \mathrm{{\mathcal{A}}}(n,p).$ For the parameters $\delta ,\lambda ,\mu ,l\in \mathrm{ \mathbb{R} };$ $\lambda \geq \mu \geq 0$ and $\delta ,l\geq 0$ define the multiplier transformations\textit{\ }$\mathrm{{\mathcal{J}}}_{p}^{\delta }(\lambda ,\mu ,l)$ on $\mathrm{{\mathcal{A}}}(n,p)$ by the following \begin{equation*} \mathrm{{\mathcal{J}}}_{p}^{0}(\lambda ,\mu ,l)f(z)=f(z) \end{equation*} \begin{equation*} ~(p+l)\mathrm{{\mathcal{J}}}_{p}^{1}(\lambda ,\mu ,l)f(z)=\lambda \mu z^{2}f^{\prime \prime }(z)+\left( \lambda -\mu +(1-p)\lambda \mu \right) zf^{\prime }(z)+\left( p(1-\lambda +\mu )+l\right) f(z) \end{equation*} \begin{align} (p+l)\mathrm{{\mathcal{J}}}_{p}^{2}(\lambda ,\mu ,l)f(z)& =\lambda \mu z^{2}[ \mathrm{{\mathcal{J}}}_{p}^{1}(\lambda ,\mu ,l)f(z)]^{\prime \prime }+\left( \lambda -\mu +(1-p)\lambda \mu \right) z[\mathrm{{\mathcal{J}}} _{p}^{1}(\lambda ,\mu ,l)f(z)]^{\prime } \tag{1.4} \\ & +\left( p(1-\lambda +\mu )+l\right) \mathrm{{\mathcal{J}}}_{p}^{1}(\lambda ,\mu ,l)f(z) \notag \end{align} \begin{equation*} \mathrm{{\mathcal{J}}}_{p}^{\delta _{1}}(\lambda ,\mu ,l)(\mathrm{{\mathcal{J }}}_{p}^{\delta _{2}}(\lambda ,\mu ,l)f(z))=\mathrm{{\mathcal{J}}} _{p}^{\delta _{2}}(\lambda ,\mu ,l)(\mathrm{{\mathcal{J}}}_{p}^{\delta _{1}}(\lambda ,\mu ,l)f(z)) \end{equation*} for $z\in \mathrm{{\mathcal{U}}}$ and $p,n\in \mathrm{ \mathbb{N} }:=\{1,2,...\}.$ If $f$ is given by (1.1) then from the definition of the multiplier transformations $\mathrm{{\mathcal{J}}}_{p}^{\delta }(\lambda ,\mu ,l),$ we can easily see that \begin{equation} \mathrm{{\mathcal{J}}}_{p}^{\delta }(\lambda ,\mu ,l)f(z)=z^{p}+\sum_{k=n+p}^{\infty }\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)a_{k}z^{k} \tag{1.5} \end{equation} where \begin{equation} \Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)=\left[ \frac{(k-p)(\lambda \mu k+\lambda -\mu )+p+l}{p+l}\right] ^{\delta }. \tag{1.6} \end{equation} \noindent \textbf{Remark 1.1.} It should be remarked that the operator $\mathrm{{\mathcal{J}}}_{p}^{\delta }(\lambda ,\mu ,l)$ is a generalization of many other\textit{\ operators} considered earlier. In particular, for $ f\in \mathrm{{\mathcal{A}}}(n,p)$ we have the following: \begin{enumerate} \item $\mathrm{{\mathcal{J}}}_{1}^{\delta }(1,0,0)f(z)\equiv D^{\delta }f(z), $ $\left( \delta \in \mathrm{ \mathbb{N} }_{0}:=\mathrm{ \mathbb{N} }\cup \{0\}\right) $ the Salagean differential operator [42]. \item $\mathrm{{\mathcal{J}}}_{1}^{\delta }(\lambda ,0,0)f(z)\equiv D_{\lambda }^{\delta }f(z),$ $\left( \delta \in \mathrm{ \mathbb{N} }_{0}\right) $ the generalized Salagean differential operator introduced by Al-Oboudi [2]. \item $\mathrm{{\mathcal{J}}}_{1}^{\delta }(\lambda ,\mu ,0)f(z)\equiv D_{\lambda ,\mu }^{\delta }f(z),$ the operator studied by Deniz and Orhan [18]; in the special case $0\leq \mu \leq \lambda \leq 1$ the operator was first studied by Raducanu and Orhan [39].
\item $\mathrm{{\mathcal{J}}}_{1}^{\delta }(1,0,l)f(z)\equiv I_{l}^{\delta }f(z),$ $\left( \delta \in \mathrm{ \mathbb{N} }_{0}\right) $ the operator considered by Cho and Srivastava [15] and Cho and Kim [16]. \item $\mathrm{{\mathcal{J}}}_{1}^{\delta }(1,0,1)f(z)\equiv I^{\delta }f(z), $ $\left( \delta \in \mathrm{ \mathbb{N} }_{0}\right) $ the operator investigated by Uralegaddi and Somonatha [54]. \item $\mathrm{{\mathcal{J}}}_{1}^{\delta }(\lambda ,0,0)f(z)\equiv D_{\lambda }^{\delta }f(z),$ $\left( \delta \in \mathrm{ \mathbb{R} }^{+}\cup \{0\}\right) $ the operator studied by Acu and Owa [1]. \item $\mathrm{{\mathcal{J}}}_{1}^{\delta }(\lambda ,0,l)f(z)\equiv I(\delta ,\lambda ,l)f(z),$ $\left( \delta \in \mathrm{ \mathbb{R} }^{+}\cup \{0\}\right) $ the operator introduced by Cata\c{s} [11]. \item $J_{p}^{\delta }(1,0,0)f(z)\equiv D_{p}^{\delta }f(z)$\textbf{$,$}$ \left( \delta \in \mathrm{ \mathbb{N} }_{0}\right) $ the operator considered by Shenan \textit{et al. }[45]. \item $J_{p}^{\delta }(\lambda ,0,0)f(z)\equiv D_{\lambda ,p}^{\delta }f(z)$ \textbf{$,$}$\left( \delta \in \mathrm{ \mathbb{N} }_{0}\right) $ the operator investigated by Kwon [25]. \item $\mathrm{{\mathcal{J}}}_{p}^{\delta }(1,0,l)f(z)\equiv I_{p}(\delta ,l)f(z),$ the operator considered by Kumar\textit{\ et al.} [48]. \item $\mathrm{{\mathcal{J}}}_{p}^{\delta }(\lambda ,0,l)f(z)\equiv I_{p}(\delta ,\lambda ,l)f(z),$ the operator studied recently by Cata\c{s} \textit{et al}. [12]. \end{enumerate} For special values of the parameters $\lambda ,\mu ,l$ and $p$, from the operator $\mathrm{{\mathcal{J}}}_{p}^{\delta }(\lambda ,\mu ,l)$ the following new operators can be obtained: $\bullet \;\mathrm{{\mathcal{J}}}_{p}^{\delta }(\lambda ,\mu ,1)\equiv \mathrm{{\mathcal{J}}}_{p}^{\delta }(\lambda ,\mu )$ $\bullet \;\mathrm{{\mathcal{J}}}_{1}^{\delta }(\lambda ,\mu ,l)\equiv \mathrm{{\mathcal{J}}}^{\delta }(\lambda ,\mu ,l).$ Now, by making use of the operator $\mathrm{{\mathcal{J}}}_{p}^{\delta }(\lambda ,\mu ,l),$ we define a new subclass of functions belonging to the class $\mathrm{{\mathcal{A}}}(n,p)$. \noindent \textbf{Definition 1.2. }Let\textbf{\ }$\lambda \;\geq \mu \geq 0;\;l,\delta \geq 0;\;p\in \mathrm{ \mathbb{N} }$\textbf{\ }and for the parameters\textbf{\ $\sigma ,\;$}$A$ and $B$ such that \begin{equation} -1\leq A<B\leq 1,\;\;0<B\leq 1\text{ and }0\leq \sigma <p, \tag{1.7} \end{equation} \noindent we say that a function $f(z)\in \mathrm{{\mathcal{A}}}(n,p)$ is in the class $\mathcal{P}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p)$ if it satisfies the following subordination condition: \begin{equation} \frac{1}{p-\sigma }\left( \frac{[\mathrm{{\mathcal{J}}}_{p}^{\delta }(\lambda ,\mu ,l)f(z)]^{\prime }}{z^{p-1}}-\sigma \right) \prec \frac{1+Az}{ 1+Bz}\text{\ \ }(z\in \mathrm{{\mathcal{U}}}). \tag{1.8} \end{equation} The subordination condition (1.8) is equivalent to the following inequality: \begin{equation} \left\vert \frac{\frac{\lbrack \mathrm{{\mathcal{J}}}_{p}^{\delta }(\lambda ,\mu ,l)f(z)]^{\prime }}{z^{p-1}}-p}{B\frac{[\mathrm{{\mathcal{J}}} _{p}^{\delta }(\lambda ,\mu ,l)f(z)]^{\prime }}{z^{p-1}}-[pB+(A-B)(p-\sigma )]}\right\vert <1\text{\ \ }(z\in \mathrm{{\mathcal{U}}}). \tag{1.9} \end{equation} We note that by specializing the parameters $\lambda ,\mu ,l,\delta ,\sigma ,A,B$ and $p,$ the subclass $\mathcal{P}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p)$ reduces to several well-known subclasses of analytic functions.
These subclasses are: \begin{enumerate} \item $\mathcal{P}_{\lambda ,\mu ,l}^{0}(-1,1;0,1)\equiv \mathcal{P} _{0,0,l}^{1}(-1,1;0,1)=\mathrm{{\mathcal{R}}}$ (\textit{see Mac-Gregor} [31]); \item $\mathcal{P}_{\lambda ,\mu ,l}^{0}(A,B;\sigma ,p)\equiv \mathcal{P} _{0,0,l}^{1}(A,B;\sigma ,p)\equiv \mathrm{{\mathcal{S}}}_{p}(A,B,\sigma )$ ( \textit{see Aouf }[3]); \item $\mathcal{P}_{\lambda ,\mu ,l}^{0}(-1,1;\sigma ,p)\equiv \mathcal{P} _{0,0,l}^{1}(-1,1;\sigma ,p)\equiv \mathrm{{\mathcal{S}}}_{p}(\sigma )$ ( \textit{see Owa }[35]); \item $\mathcal{P}_{\lambda ,\mu ,l}^{0}(-1,1-\frac{1}{\alpha };0,1)\equiv \mathcal{P}_{0,0,l}^{1}(-1,1-\frac{1}{\alpha };0,1)\equiv \mathrm{{\mathcal{S }}}(\alpha )$ $\left( \alpha >\frac{1}{2}\right) $ (\textit{see Goel} [20]); \item $\mathcal{P}_{\lambda ,\mu ,l}^{0}(-1,1-\frac{1}{\alpha };0,p)\equiv \mathcal{P}_{0,0,l}^{1}(-1,1-\frac{1}{\alpha };0,p)\equiv \mathrm{{\mathcal{S }}}_{p}(\alpha )$ $\left( \alpha >\frac{1}{2}\right) $ (\textit{see Sohi} [49]); \item $\mathcal{P}_{\lambda ,\mu ,l}^{0}(A,B;0,p)\equiv \mathcal{P} _{0,0,l}^{1}(A,B;0,p)\equiv \mathrm{{\mathcal{S}}}_{p}(A,B)$ (\textit{see Chen} [13]); \item $\mathcal{P}_{\lambda ,\mu ,l}^{0}(A,B;0,1)\equiv \mathcal{P} _{0,0,l}^{1}(A,B;0,1)\equiv \mathrm{{\mathcal{R}}}(A,B),\;\left( -1\leq B<A\leq 1\right) $ (\textit{see Mehrok} [32]); \item $\mathcal{P}_{\lambda ,\mu ,l}^{0}(-\gamma ,\gamma ;0,1)\equiv \mathcal{P}_{0,0,l}^{1}(-\gamma ,\gamma ;0,1)\equiv \mathrm{{\mathcal{R}}} _{(\gamma )},$ $\left( 0<\gamma \leq 1\right) $ (\textit{see Padmanabhan} [38] \textit{and} \textit{Caplinger and Causey }[10]); \item $\mathcal{P}_{\lambda ,\mu ,l}^{0}((2\beta -1)\gamma ,\gamma ;0,1)\equiv \mathcal{P}_{0,0,l}^{1}((2\beta -1)\gamma ,\gamma ;0,1)\equiv \mathrm{{\mathcal{R}}}_{\beta ,\gamma },$ $\left( 0\leq \beta <1,\;0<\gamma \leq 1\right) $ (\textit{see Juneja and Mogra} [23]); \item $\mathcal{P}_{\lambda ,\mu ,l}^{0}((2a-1)b,b;0,p)\equiv \mathcal{P} _{0,0,l}^{1}((2a-1)b,b;0,p)\equiv \mathrm{{\mathcal{S}}}_{p}(a,b),$ $\left( 0\leq a<1,\;0<b\leq 1\right) $ (\textit{see Owa }[36]); \item $\mathcal{P}_{\lambda ,\mu ,l}^{0}((\gamma -1)\beta ,\alpha \beta ;0,1)\equiv \mathcal{P}_{0,0,l}^{1}((\gamma -1)\beta ,\alpha \beta ;0,1)\equiv \mathrm{{\mathcal{L}}}(\alpha ,\beta ,\gamma ),$ $\left( 0\leq \alpha \leq 1,\;0<\beta \leq 1,0\leq \gamma <1\right) $ (\textit{see Kim and Lee }[24]). \end{enumerate} Furthermore, we say that a function $f(z)\in \mathcal{P}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p)$ is in the subclass $\widetilde{\mathcal{P}} _{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p)$ if $f(z)$ is of the following form: \begin{equation} f(z)=z^{p}-\sum_{k=n+p}^{\infty }\left\vert a_{k}\right\vert z^{k}\text{\ \ } \left( p,n\in \mathrm{ \mathbb{N} }:=\{1,2,3,...\}\right) . 
\tag{1.10} \end{equation} Thus, by specializing the parameters $\lambda ,\mu ,l,\delta ,\sigma ,A,B$ and $p,$ we obtain the following familiar subclasses of analytic functions in $\mathrm{{\mathcal{U}}}$ with \textit{negative }coefficients: \begin{enumerate} \item $\widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{0}(-1,1,;\alpha ,1)\equiv \widetilde{\mathcal{P}}_{0,0,l}^{1}(-1,1,;\alpha ,1)\equiv \mathcal{P}^{\ast }(\alpha )$ $\left( 0\leq \alpha <1\right) $ (\textit{see for }$f\in \mathrm{ {\mathcal{A}}}$ \textit{Sarangi and Uralegaddi }[43] \textit{and for $f\in \mathrm{{\mathcal{A}}}(n)$ Sekine and Owa }[44]); \item $\widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{0}((\gamma -1)\beta ,\alpha \beta ;0,1)\equiv \widetilde{\mathcal{P}}_{0,0,l}^{1}((\gamma -1)\beta ,\alpha \beta ;0,1)\equiv \mathrm{{\mathcal{L}}}^{\ast }(\alpha ,\beta ,\gamma )$ $\left( 0\leq \alpha \leq 1,\;0<\beta \leq 1,0\leq \gamma <1\right) $ (\textit{see Kim and Lee} [24]); \item $\widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{0}(A,B;\sigma ,p)\equiv \widetilde{\mathcal{P}}_{0,0,l}^{1}(A,B;\sigma ,p)\equiv \mathcal{P}^{\ast }(p,A,B,\sigma )$ (\textit{see Aouf} [4]); \item $\widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{0}(-\beta ,\beta ;0,1)\equiv \widetilde{\mathcal{P}}_{0,0,l}^{1}(-\beta ,\beta ;0,1)\equiv \mathrm{{\mathcal{D}}}^{\ast }(\beta )$ $0<\beta \leq 1$ (\textit{see Kim and Lee} [24]); \item $\widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{0}(-\beta ,\beta ;\sigma ,1)\equiv \widetilde{\mathcal{P}}_{0,0,l}^{1}(-\beta ,\beta ;\sigma ,1)\equiv \mathcal{P}^{\ast }(\sigma ,\beta )$ $\left( 0\leq \sigma <p;\;0<\beta \leq 1\right) $ (\textit{see Gupta and Jain} [22]); \item $\widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{0}(-\beta ,\beta ;\alpha ,p)\equiv \widetilde{\mathcal{P}}_{0,0,l}^{1}(-\beta ,\beta ;\alpha ,p)\equiv \mathcal{P}_{p}^{\ast }(\alpha ,\beta )$ $\left( 0\leq \alpha <p;\;0<\beta \leq 1\right) $ (\textit{see Aouf } [6]); \item $\widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{0}(A,B;0,p)\equiv \widetilde{\mathcal{P}}_{0,0,l}^{1}(A,B;0,p)\equiv \mathcal{P}^{\ast }(p,A,B) $ (\textit{see} \textit{Shukla and Dashrath} [46]); \item $\widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{0}(-1,1;\sigma ,p)\equiv \widetilde{\mathcal{P}}_{0,0,l}^{1}(-1,1;\sigma ,p)\equiv \mathrm{{\mathcal{F }}}_{p}(1,\beta )$ $\left( 0\leq \sigma <p;\;p\in \mathrm{ \mathbb{N} }\right) $ (\textit{see for} $f\in \mathrm{{\mathcal{A}}}$ \textit{Lee et al. } [26] \textit{and for $f\in \mathrm{{\mathcal{A}}}(n)$} \textit{Yaguchi et al}.[55]). 
\item $\widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{0}((2a-1)b,b;0,p)\equiv \widetilde{\mathcal{P}}_{0,0,l}^{1}((2a-1)b,b;0,p)\equiv \mathrm{{\mathcal{T} }}_{p}(a,b)$ $\left( 0\leq a<1,\;0<b\leq 1\right) $ (\textit{see Owa }[36]). \end{enumerate} In our present paper, we shall make use of the familiar \textit{integral operator }$\mathrm{{\mathcal{I}}}_{\vartheta ,p}$ defined by (see, for details, [9, 27, 30]; see also [54]) \begin{equation} (\mathrm{{\mathcal{I}}}_{\vartheta ,p}f)(z):=\frac{\vartheta +p}{z^{p}} \int_{0}^{z}t^{\vartheta -1}f(t)dt\text{\ \ \ }(f\in \mathrm{{\mathcal{A}}} (n,p);\;\vartheta +p>0;\;p\in \mathrm{ \mathbb{N} }) \tag{1.11} \end{equation} as well as the fractional calculus operator $\mathrm{{\mathcal{D}}}_{z}^{\nu }$ for which it is well known that (see, for details, [37, 50] and [53]; see also Section 7) \begin{equation} \mathrm{{\mathcal{D}}}_{z}^{\nu }\{z^{\rho }\}=\frac{\Gamma (\rho +1)}{\Gamma (\rho +1-\nu )}z^{\rho -\nu }\;\;(\rho >-1;\;\nu \in \mathrm{ \mathbb{R} }) \tag{1.12} \end{equation} in terms of the Gamma function. The main object of the present paper is to investigate the various important properties and characteristics of two subclasses of $\mathrm{{\mathcal{A}}} (n,p)$ of normalized analytic functions in $\mathrm{{\mathcal{U}}}$ with negative and positive coefficients, which are introduced here by making use of the multiplier transformations $\mathrm{{\mathcal{J}}}_{p}^{\delta }(\lambda ,\mu ,l)$ defined by (1.4). Inclusion relationships for the class $ \mathcal{P}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p)$ are investigated by applying the techniques of convolution. Furthermore, several properties involving generalized neighborhoods and partial sums for functions belonging to these subclasses are investigated. We also derive many results for the quasi-convolution of functions belonging to the class $\widetilde{\mathcal{P }}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p)$\textbf{$.$} Finally, some applications of fractional calculus operators are considered. Relevant connections of the definitions and results presented here with those obtained in several earlier works are also pointed out. \noindent \textbf{Remark 1.2. }Throughout our present investigation, we tacitly assume that the parametric constraints listed in (1.6), (1.7) and Definition 1.1 are satisfied. \section{INCLUSION PROPERTIES OF THE FUNCTION CLASS $\mathcal{P}_{\protect \lambda ,\protect\mu ,l}^{\protect\delta }(A,B;\protect\sigma ,p)$} For proving our first inclusion result, we shall need the following lemmas. \noindent \textbf{Lemma 2.1} (\textit{See Fejer} [19] \textit{or Ruscheweyh} [41])\textbf{.} Assume $a_{1}=1$ and $a_{m}\geq 0$ for $m\geq 2,$ such that $ \{a_{m}\}$ is a convex decreasing sequence, i.e., $a_{m}-2a_{m+1}+a_{m+2} \geq 0$ and $a_{m+1}-a_{m+2}\geq 0$ for $m\in \mathrm{ \mathbb{N} }.$ Then \begin{equation} \Re \left\{ \sum_{m=1}^{\infty }a_{m}z^{m-1}\right\} >\frac{1}{2} \tag{2.1} \end{equation} for all $z\in \mathrm{{\mathcal{U}}}.$ \noindent \textbf{Lemma 2.2 }(\textit{See Liu} [28])\textbf{. }Let $-1\leq A_{2}\leq A_{1}<B_{1}\leq B_{2}\leq 1.$ Then, we can write the following subordination result: \begin{equation*} \frac{1+B_{1}z}{1+A_{1}z}\prec \frac{1+B_{2}z}{1+A_{2}z}. \end{equation*} \noindent \textbf{Lemma 2.3. 
}If $\left[ \lambda -\mu \geq \frac{l+p}{2p} \mathrm{\;or\;\;}\lambda =\mu =0\right] ,$ then $\Re \left\{ 1+\sum_{k=n+p}^{\infty }\frac{1}{\Phi _{p}^{k}(1,\lambda ,\mu ,l)} z^{k-p}\right\} >\frac{1}{2}$ for all $z\in \mathrm{{\mathcal{U}}}.$ \noindent \textbf{Proof. }Define: \begin{equation*} q(z)=1+\sum_{k=2}^{\infty }\frac{1}{\Phi _{p}^{k+n+p-2}(1,\lambda ,\mu ,l)} z^{k+n-2}=1+\sum_{k=2}^{\infty }B_{k}z^{k+n-2} \end{equation*} where \begin{equation} B_{k}=\frac{1}{\Phi _{p}^{k+n+p-2}(1,\lambda ,\mu ,l)}=\frac{p+l}{ (k+n-2)(\lambda \mu (k+n+p-2)+\lambda -\mu )+p+l} \tag{2.2} \end{equation} for all $n,p\in \mathrm{ \mathbb{N} },\;k\geq 2.$ \noindent Since the values $k,\;l,\;p,\;\lambda $ and $\mu $ are positive, we have $B_{k}>0$ for all $k\in \mathrm{ \mathbb{N} }.$ We can easily find that \begin{equation} B_{k+1}=\frac{p+l}{(k+n-1)(\lambda \mu (k+n+p-1)+\lambda -\mu )+p+l} \tag{2.3} \end{equation} \begin{equation} B_{k+2}=\frac{p+l}{(k+n)(\lambda \mu (k+n+p)+\lambda -\mu )+p+l} \tag{2.4} \end{equation} and thus from (2.3) and (2.4), we can see that \begin{equation*} B_{k+1}-B_{k+2}\geq 0 \end{equation*} for all $k\in \mathrm{ \mathbb{N} }.$ Next, we show that the inequality \begin{equation} B_{k}-2B_{k+1}+B_{k+2}\geq 0 \tag{2.5} \end{equation} holds for all $k\in \mathrm{ \mathbb{N} }.$ Using (2.2), (2.3) and (2.4) we find that \begin{eqnarray*} {B_{k}-2B_{k+1}+B_{k+2}} &=&{\frac{p+l}{(k+n-2)(\lambda \mu (k+n+p-2)+\lambda -\mu )+p+l}} \\ &&-{2\frac{p+l}{(k+n-1)(\lambda \mu (k+n+p-1)+\lambda -\mu )+p+l}} \\ &&{+\frac{p+l}{(k+n)(\lambda \mu (k+n+p)+\lambda -\mu )+p+l}} \\ &=&\frac{2(\lambda \mu )^{2}\left[ 3(k+n)(k+n+p-2)+p^{2}-3p+2\right] }{ C_{2}C_{1}C_{0}} \\ &&+\frac{2\lambda \mu \left[ 3(\lambda -\mu )(k+n-1)+p(2(\lambda -\mu )-1)-l \right] +(\lambda -\mu )^{2}}{C_{2}C_{1}C_{0}} \end{eqnarray*} where $C_{i}=\left[ (k+n-i)(\lambda \mu (k+n+p-i)+\lambda -\mu )+p+l\right] $ and from the hypothesis of Lemma 2.3, we deduce that (2.5) holds for all $ k\in \mathrm{ \mathbb{N} }.$ Thus the sequence $\{B_{k}\}$ is convex decreasing and by Lemma 2.1 we obtain that \begin{equation*} \Re \{q(z)\}=\Re \left\{ 1+\sum_{k=2}^{\infty }B_{k}z^{k+n-2}\right\} =\Re \left\{ 1+\sum_{k=n+p}^{\infty }\frac{1}{\Phi _{p}^{k}(1,\lambda ,\mu ,l)} z^{k-p}\right\} >\frac{1}{2} \end{equation*} for all $z\in \mathrm{{\mathcal{U}}}.$ The proof of Lemma 2.3 is completed. \noindent \textbf{Theorem 2.1. }If $\left[ \lambda -\mu \geq \frac{l+p}{2p} \mathrm{\;or\;\;}\lambda =\mu =0\right] $ and $\delta \geq 0,$ then \begin{equation} \mathcal{P}_{\lambda ,\mu ,l}^{\delta +1}(A,B;\sigma ,p)\subseteq \mathcal{P} _{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p). \tag{2.6} \end{equation} \noindent \textbf{Proof.} Let $f\in \mathcal{P}_{\lambda ,\mu ,l}^{\delta +1}(A,B;\sigma ,p).$ Using the definition of $\mathcal{P}_{\lambda ,\mu ,l}^{\delta +1}(A,B;\sigma ,p)$ we obtain that \begin{equation} \frac{1}{p-\sigma }\left( \frac{z[\mathrm{{\mathcal{J}}}_{p}^{\delta +1}(\lambda ,\mu ,l)f(z)]^{\prime }}{z^{p}}-\sigma \right) \prec \frac{1+Az}{ 1+Bz}\text{ \ }(z\in \mathrm{{\mathcal{U}}}). 
\tag{2.7}
\end{equation}
Applying the definition of $\mathrm{{\mathcal{J}}}_{p}^{\delta }(\lambda ,\mu ,l)f(z)$ and the properties of the convolution, we find that
\begin{eqnarray*}
&&\frac{z[\mathrm{{\mathcal{J}}}_{p}^{\delta }(\lambda ,\mu ,l)f(z)]^{\prime }}{pz^{p}} \\
&=&\left( 1+\sum_{k=n+p}^{\infty }\frac{1}{\Phi _{p}^{k}(1,\lambda ,\mu ,l)}z^{k-p}\right) \ast \left( 1+\sum_{k=n+p}^{\infty }\frac{k}{p}\Phi _{p}^{k}(\delta +1,\lambda ,\mu ,l)a_{k}z^{k-p}\right) \\
&=&\left( 1+\sum_{k=n+p}^{\infty }\frac{1}{\Phi _{p}^{k}(1,\lambda ,\mu ,l)}z^{k-p}\right) \ast \left( \frac{z[\mathrm{{\mathcal{J}}}_{p}^{\delta +1}(\lambda ,\mu ,l)f(z)]^{\prime }}{pz^{p}}\right) .
\end{eqnarray*}
Therefore, from the last equalities and (1.8) we get
\begin{eqnarray}
&&\frac{1}{p-\sigma }\left( \frac{[\mathrm{{\mathcal{J}}}_{p}^{\delta }(\lambda ,\mu ,l)f(z)]^{\prime }}{z^{p-1}}-\sigma \right)  \TCItag{2.8} \\
&=&\frac{1}{p-\sigma }\left\{ p\left[ \left( 1+\sum_{k=n+p}^{\infty }\frac{1}{\Phi _{p}^{k}(1,\lambda ,\mu ,l)}z^{k-p}\right) \ast \left( \frac{z[\mathrm{{\mathcal{J}}}_{p}^{\delta +1}(\lambda ,\mu ,l)f(z)]^{\prime }}{pz^{p}}\right) \right] -\sigma \right\}  \notag \\
&=&\left( 1+\sum_{k=n+p}^{\infty }\frac{1}{\Phi _{p}^{k}(1,\lambda ,\mu ,l)}z^{k-p}\right) \ast \frac{1}{p-\sigma }\left( \frac{z[\mathrm{{\mathcal{J}}}_{p}^{\delta +1}(\lambda ,\mu ,l)f(z)]^{\prime }}{z^{p}}-\sigma \right)  \notag \\
&=&q(z)\ast \frac{1+Aw(z)}{1+Bw(z)},  \notag
\end{eqnarray}
where, by the subordination (2.7), $w(z)$ is analytic in $\mathrm{{\mathcal{U}}}$ with $\left\vert w(z)\right\vert <1$ and $w(0)=0.$ From the Herglotz theorem and Lemma 2.3 we thus obtain
\begin{equation*}
q(z)=\int_{\left\vert x\right\vert =1}\frac{d\varpi (x)}{1-xz}\;\;(z\in \mathrm{{\mathcal{U}}}),
\end{equation*}
where $\varpi (x)$ is a probability measure on the unit circle $\left\vert x\right\vert =1,$ that is,
\begin{equation*}
\int_{\left\vert x\right\vert =1}d\varpi (x)=1.
\end{equation*}
It follows from (2.8) that
\begin{equation*}
\frac{1}{p-\sigma }\left( \frac{z[\mathrm{{\mathcal{J}}}_{p}^{\delta }(\lambda ,\mu ,l)f(z)]^{\prime }}{z^{p}}-\sigma \right) =\int_{\left\vert x\right\vert =1}\frac{1+Axz}{1+Bxz}d\varpi (x)\prec \frac{1+Az}{1+Bz},
\end{equation*}
because $\frac{1+Az}{1+Bz}$ is convex univalent in $\mathrm{{\mathcal{U}}}.$ Hence we conclude that
\begin{equation*}
\mathcal{P}_{\lambda ,\mu ,l}^{\delta +1}(A,B;\sigma ,p)\subseteq \mathcal{P}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p),
\end{equation*}
which completes the proof of Theorem 2.1.

\noindent \textbf{Theorem 2.2. }If $\left[ \lambda -\mu \geq \frac{l+p}{2p}\mathrm{\;or\;\;}\lambda =\mu =0\right] ,$ $\delta \geq 0$ and $-1\leq A_{2}\leq A_{1}<B_{1}\leq B_{2}\leq 1,$ then
\begin{equation}
\mathcal{P}_{\lambda ,\mu ,l}^{\delta +1}(B_{1},A_{1};\sigma ,p)\subseteq \mathcal{P}_{\lambda ,\mu ,l}^{\delta }(B_{2},A_{2};\sigma ,p). \tag{2.9}
\end{equation}

\noindent \textbf{Proof. }Making use of Lemma 2.2, we can write
\begin{equation*}
\mathcal{P}_{\lambda ,\mu ,l}^{\delta }(B_{1},A_{1};\sigma ,p)\subseteq \mathcal{P}_{\lambda ,\mu ,l}^{\delta }(B_{2},A_{2};\sigma ,p).
\end{equation*}
Using (2.6) and the above inclusion, we have
\begin{equation*}
\mathcal{P}_{\lambda ,\mu ,l}^{\delta +1}(B_{1},A_{1};\sigma ,p)\subseteq \mathcal{P}_{\lambda ,\mu ,l}^{\delta }(B_{1},A_{1};\sigma ,p)\subseteq \mathcal{P}_{\lambda ,\mu ,l}^{\delta }(B_{2},A_{2};\sigma ,p),
\end{equation*}
so we obtain
\begin{equation*}
\mathcal{P}_{\lambda ,\mu ,l}^{\delta +1}(B_{1},A_{1};\sigma ,p)\subseteq \mathcal{P}_{\lambda ,\mu ,l}^{\delta }(B_{2},A_{2};\sigma ,p).
\end{equation*}
Thus, the proof is complete.
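\noindent \textbf{Example 2.1. }As a simple illustration of Lemma 2.3, take, for instance, $p=n=l=1,$ $\lambda =1$ and $\mu =0,$ so that $\lambda -\mu =1\geq \frac{l+p}{2p}=1.$ In this case (2.2) reduces to
\begin{equation*}
B_{k}=\frac{2}{(k-1)+2}=\frac{2}{k+1}\;\;(k\geq 2),
\end{equation*}
and a direct computation gives
\begin{equation*}
B_{k+1}-B_{k+2}=\frac{2}{(k+2)(k+3)}>0\;\;\mathrm{and}\;\;B_{k}-2B_{k+1}+B_{k+2}=\frac{4}{(k+1)(k+2)(k+3)}>0,
\end{equation*}
so that $\{B_{k}\}$ is indeed a convex decreasing sequence, as required in the proof of Lemma 2.3.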
\section{BASIC PROPERTIES OF THE FUNCTION CLASS\textbf{\ }$\protect \widetilde{\mathcal{P}}_{\protect\lambda ,\protect\mu ,l}^{\protect\delta }(A,B;\protect\sigma ,p)$}

We first determine a necessary and sufficient condition for a function $f(z)\in \mathrm{{\mathcal{A}}}(n,p)$ of the form (1.10) to be in the class $\widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p).$

\noindent \textbf{Theorem 3.1. }Let the function $f(z)\in \mathrm{{\mathcal{A}}}(n,p)$ be defined by (1.10). Then the function $f(z)$ is in the class $\widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p)$ if and only if
\begin{equation}
\sum_{k=n+p}^{\infty }k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)\left\vert a_{k}\right\vert \leq (B-A)(p-\sigma ), \tag{3.1}
\end{equation}
where $\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)$ is given by (1.6).

\noindent \textbf{Proof. }If the condition (3.1) holds true, we find from (1.10) and (3.1) that
\begin{eqnarray*}
&&\left\vert [\mathrm{{\mathcal{J}}}_{p}^{\delta }(\lambda ,\mu ,l)f(z)]^{\prime }-pz^{p-1}\right\vert -\left\vert B[\mathrm{{\mathcal{J}}}_{p}^{\delta }(\lambda ,\mu ,l)f(z)]^{\prime }-z^{p-1}[pB+(A-B)(p-\sigma )]\right\vert \\
&=&\left\vert -\sum_{k=n+p}^{\infty }k\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)\left\vert a_{k}\right\vert z^{k-1}\right\vert -\left\vert (B-A)(p-\sigma )z^{p-1}-B\sum_{k=n+p}^{\infty }k\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)\left\vert a_{k}\right\vert z^{k-1}\right\vert \\
&\leq &\sum_{k=n+p}^{\infty }k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)\left\vert a_{k}\right\vert -(B-A)(p-\sigma )\leq 0\text{\ \ }\left( z\in \partial \mathrm{{\mathcal{U}}}=\{z:\;z\in \mathbb{C}\;\mathrm{and\;}\left\vert z\right\vert =1\}\right) .
\end{eqnarray*}
Hence, by the \textit{Maximum Modulus Theorem}, we have
\begin{equation*}
f(z)\in \widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p).
\end{equation*}
Conversely, assume that the function $f(z)$ defined by (1.10) is in the class $\widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p).$ Then we have
\begin{eqnarray}
&&\left\vert \frac{\frac{[\mathrm{{\mathcal{J}}}_{p}^{\delta }(\lambda ,\mu ,l)f(z)]^{\prime }}{z^{p-1}}-p}{B\frac{[\mathrm{{\mathcal{J}}}_{p}^{\delta }(\lambda ,\mu ,l)f(z)]^{\prime }}{z^{p-1}}-[pB+(A-B)(p-\sigma )]}\right\vert  \TCItag{3.2} \\
&=&\left\vert \frac{\sum_{k=n+p}^{\infty }k\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)\left\vert a_{k}\right\vert z^{k-p}}{(B-A)(p-\sigma )-B\sum_{k=n+p}^{\infty }k\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)\left\vert a_{k}\right\vert z^{k-p}}\right\vert <1\;\;(z\in \mathrm{{\mathcal{U}}}).  \notag
\end{eqnarray}
Now, since $\left\vert \Re (z)\right\vert \leq \left\vert z\right\vert $ for all $z,$ we have
\begin{equation}
\Re \left( \frac{\sum_{k=n+p}^{\infty }k\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)\left\vert a_{k}\right\vert z^{k-p}}{(B-A)(p-\sigma )-B\sum_{k=n+p}^{\infty }k\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)\left\vert a_{k}\right\vert z^{k-p}}\right) <1. \tag{3.3}
\end{equation}
We choose values of $z$ on the real axis so that the expression
\begin{equation*}
\frac{\lbrack \mathrm{{\mathcal{J}}}_{p}^{\delta }(\lambda ,\mu ,l)f(z)]^{\prime }}{z^{p-1}}
\end{equation*}
is real. Then, upon clearing the denominator in (3.3) and letting $z\rightarrow 1^{-}$ through real values, we get the following inequality:
\begin{equation*}
\sum_{k=n+p}^{\infty }k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)\left\vert a_{k}\right\vert \leq (B-A)(p-\sigma ).
\end{equation*} This completes the proof of Theorem 3.1. \noindent \textbf{Remark 3.1. }Since $\widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p)$ is contained in the function class $\mathcal{P} _{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p)$\textbf{$,$ }a sufficient condition for $f(z)$ defined by (1.1) to be in the class $\mathcal{P} _{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p)$ is that it satisfies the condition (3.1) of Theorem 3.1. \noindent \textbf{Corollary 3.1. }Let the function $f(z)\in \mathrm{{ \mathcal{A}}}(n,p)$ be defined by (1.10). If the function $f(z)\in \widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p),$ then \begin{equation} \left\vert a_{k}\right\vert \leq \frac{(B-A)(p-\sigma )}{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}\text{\ \ }\left( k,p\in \mathrm{ \mathbb{N} }\right) . \tag{3.4} \end{equation} The result is sharp for the function $f(z)$ given by \begin{equation} f(z)=z^{p}-\frac{(B-A)(p-\sigma )}{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}z^{k}\text{\ \ }\left( k,p\in \mathrm{ \mathbb{N} }\right) . \tag{3.5} \end{equation} We next prove the following growth and distortion properties for the class $ \widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p)$\textbf{$ . $ } \noindent \textbf{Remark 3.2.} \begin{enumerate} \item Putting $A=-\beta ,\;B=\beta ,\;\delta =0$ and $\sigma =\alpha $ in Theorem 3.1, we obtain the corresponding result given earlier by Aouf [6]. \item Putting $A=(\gamma -1)\beta ,\;B=\alpha \beta ,\;\delta =0,\;p=n=1$ and $\sigma =0$ in Theorem 3.1, we obtain result of Kim and Lee [24]. \item Putting $\delta =0$ in Theorem 3.1, we obtain Theorem 1 in [4]. \item Putting $A=(2a-1)b,\;B=b,\;\delta =0,\;n=1$ and $\sigma =0$ in Theorem 3.1, we arrive at the Theorem of Owa [36]. \item Putting $\delta =0$ and $\sigma =0$ in Theorem 3.1, we obtain the corresponding result due to Shukla and Dashrath [46]. \end{enumerate} \noindent \textbf{Theorem 3.2. }If a function $f(z)$ be defined by (1.10) is in the class $\widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p),$ then \begin{eqnarray} &&~~~~~~\left( \frac{p!}{(p-q)!}-\frac{(B-A)(p-\sigma )(n+p-1)!}{(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)(n+p-q)!}\left\vert z\right\vert ^{n}\right) \left\vert z\right\vert ^{p-q} \TCItag{3.6} \\ &\leq &\left\vert f^{(q)}(z)\right\vert \leq \left( \frac{p!}{(p-q)!}+\frac{ (B-A)(p-\sigma )(n+p-1)!}{(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)(n+p-q)!}\left\vert z\right\vert ^{n}\right) \left\vert z\right\vert ^{p-q} \notag \end{eqnarray} for $q\in \mathrm{ \mathbb{N} }_{0},\;p>q$ and all $z\in \mathrm{{\mathcal{U}}}.$ The result is sharp for the function $f(z)$ given by \begin{equation} f(z)=z^{p}-\frac{(B-A)(p-\sigma )}{(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}z^{n+p}\text{\ \ }\left( p\in \mathrm{ \mathbb{N} }\right) . \tag{3.7} \end{equation} \noindent \textbf{Proof. }In view of Theorem 3.1, we have \begin{equation*} \frac{(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}{(B-A)(p-\sigma )(n+p)!}\sum_{k=n+p}^{\infty }k!\left\vert a_{k}\right\vert \leq \sum_{k=n+p}^{\infty }\frac{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}{ (B-A)(p-\sigma )}\left\vert a_{k}\right\vert \leq 1, \end{equation*} which readily yields \begin{equation} \sum_{k=n+p}^{\infty }k!\left\vert a_{k}\right\vert \leq \frac{ (B-A)(p-\sigma )(n+p-1)!}{(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)} \text{\ \ }\left( k,p\in \mathrm{ \mathbb{N} }\right) . 
\tag{3.8} \end{equation} Now, by differentiating both sides of (1.10) $q-$times with respect to $z,$ we obtain \begin{equation} f^{(q)}(z)=\frac{p!}{(p-q)!}z^{p-q}-\sum_{k=n+p}^{\infty }\frac{k!}{(k-q)!} a_{k}z^{k-q}\;\;\left( q\in \mathrm{ \mathbb{N} }_{0};\;p>q\right) . \tag{3.9} \end{equation} Theorem 3.2 follows readily from (3.8) and (3.9). \noindent Finally, it is easy to see that the bounds in (3.6) are attained for the function $f(z)$ given by (3.7). \section{\noindent INCLUSION RELATIONS INVOLVING NEIGHBORHOODS} Following the earlier investigations (based upon the familiar concept of neighborhoods of analytic functions) by Goodman [21], Ruscheweyh [40] and others including Srivastava \textit{et al.} [50, 52], Orhan [33, 34], Deniz \textit{et al. }[17], Aouf \textit{et al}. [8] (see also [11]). Firstly, we define the $(n,\eta )-$neighborhood of function $f(z)\in \mathrm{ {\mathcal{A}}}(n,p)$ of the form (1.1) by means of Definition 4.1 below. \noindent \textbf{Definition 4.1. }For $\eta >0$ and a non-negative sequence $\mathrm{{\mathcal{S}}}=\{s_{k}\}_{k=1}^{\infty },$ where \begin{equation} s_{k}:=\frac{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}{(B-A)(p-\sigma )} \;\;(k\in \mathrm{ \mathbb{N} }). \tag{4.1} \end{equation} The $(n,\eta )-$neighborhood of a function $f(z)\in \mathrm{{\mathcal{A}}} (n,p)$ of the form (1.1) is defined as follows: \begin{equation} \mathrm{{\mathcal{N}}}_{n,p}^{\eta }(f):=\left\{ g:\;g(z)=z^{p}+\sum_{k=n+p}^{\infty }b_{k}z^{k}\in \mathrm{{\mathcal{A}}} (n,p)\;\mathrm{and\;}\sum_{k=n+p}^{\infty }s_{k}\left\vert b_{k}-a_{k}\right\vert \leq \eta \;(\eta >0)\right\} . \tag{4.2} \end{equation} For $s_{k}=k,$ Definition 4.1 would correspond to the $\mathrm{{\mathcal{N}}} _{\eta }-$neighborhood considered by Ruscheweyh [40]. \noindent Our first result based upon the familiar concept of neighborhood defined by (4.2). \noindent \textbf{Theorem 4.1. }Let $f(z)\in \mathcal{P}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p)$ be given by (1.1). If $f$ satisfies the inclusion condition: \begin{equation} \left( f(z)+\varepsilon z^{p}\right) \left( 1+\varepsilon \right) ^{-1}\in \mathcal{P}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p)\text{\ \ }\left( \varepsilon \in \mathrm{ \mathbb{C} };\;\left\vert \varepsilon \right\vert <\eta ;\;\eta >0\right) , \tag{4.3} \end{equation} then \begin{equation} \mathrm{{\mathcal{N}}}_{n,p}^{\eta }(f)\subset \mathcal{P}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p). \tag{4.4} \end{equation} \noindent \textbf{Proof. }It is not difficult to see that a function $f$ \ belongs to $\mathcal{P}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p)$ if and only if \begin{equation} \frac{\lbrack \mathrm{{\mathcal{J}}}_{p}^{\delta }(\lambda ,\mu ,l)f(z)]^{\prime }-pz^{p-1}}{B[\mathrm{{\mathcal{J}}}_{p}^{\delta }(\lambda ,\mu ,l)f(z)]^{\prime }-z^{p-1}[pB+(A-B)(p-\sigma )]}\neq \tau \;\;\left( z\in \mathrm{{\mathcal{U}}};\;\tau \in \mathrm{ \mathbb{C} },\;\left\vert \tau \right\vert =1\right) , \tag{4.5} \end{equation} which is equivalent to \begin{equation} {(f\ast h)(z)\diagup z^{p}}\neq 0\;\;(z\in \mathrm{{\mathcal{U}}}), \tag{4.6} \end{equation} where for convenience, \begin{equation} h(z):=z^{p}+\sum_{k=n+p}^{\infty }c_{k}z^{k}=z^{p}+\sum_{k=n+p}^{\infty } \frac{k(1+\tau B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}{\tau (B-A)(p-\sigma )}z^{k}. 
\tag{4.7}
\end{equation}
We easily find from (4.7) that
\begin{equation}
\left\vert c_{k}\right\vert \leq \left\vert \frac{k(1+\tau B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}{\tau (B-A)(p-\sigma )}\right\vert \leq \frac{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}{(B-A)(p-\sigma )}\;\;(k\in \mathbb{N}). \tag{4.8}
\end{equation}
Furthermore, under the hypotheses of the theorem, (4.3) and (4.6) yield the following inequalities:
\begin{equation*}
\frac{\left( (f(z)+\varepsilon z^{p})(1+\varepsilon )^{-1}\right) \ast h(z)}{z^{p}}\neq 0\;\;(z\in \mathrm{{\mathcal{U}}})
\end{equation*}
or
\begin{equation*}
\frac{f(z)\ast h(z)}{z^{p}}\neq \varepsilon \;\;(z\in \mathrm{{\mathcal{U}}}),
\end{equation*}
which is equivalent to the following:
\begin{equation}
\left\vert \frac{f(z)\ast h(z)}{z^{p}}\right\vert \geq \eta \;\;(z\in \mathrm{{\mathcal{U}}};\;\eta >0). \tag{4.9}
\end{equation}
Now, if we let
\begin{equation*}
g(z):=z^{p}+\sum_{k=n+p}^{\infty }b_{k}z^{k}\in \mathrm{{\mathcal{N}}}_{n,p}^{\eta }(f),
\end{equation*}
then we have
\begin{equation*}
{\left\vert \frac{\left( f(z)-g(z)\right) \ast h(z)}{z^{p}}\right\vert =\left\vert \sum_{k=n+p}^{\infty }(a_{k}-b_{k})c_{k}z^{k-p}\right\vert }
\end{equation*}
\begin{equation*}
{\;\leq \sum_{k=n+p}^{\infty }\frac{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}{(B-A)(p-\sigma )}\left\vert a_{k}-b_{k}\right\vert \left\vert z\right\vert ^{k-p}<\eta }\text{\ \ }{(z\in \mathrm{{\mathcal{U}}};\;\eta >0).}
\end{equation*}
Thus, for any complex number $\tau $ such that $\left\vert \tau \right\vert =1,$ we have
\begin{equation*}
{(g\ast h)(z)\diagup z^{p}}\neq 0\;\;(z\in \mathrm{{\mathcal{U}}}),
\end{equation*}
\noindent which implies that $g\in \mathcal{P}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p).$ The proof is complete.

We now define the $(n,\eta )-$neighborhood of a function $f(z)\in \mathrm{{\mathcal{A}}}(n,p)$ of the form (1.10) as follows:

\noindent \textbf{Definition 4.2. }For $\eta >0,$ the $(n,\eta )-$neighborhood of a function $f(z)\in \mathrm{{\mathcal{A}}}(n,p)$ of the form (1.10) is given by
\begin{eqnarray}
{\widetilde{\mathrm{{\mathcal{N}}}}_{n,p}^{\eta }(f)}{:=} &&\left\{ {g:\;g(z)=z^{p}-\sum_{k=n+p}^{\infty }b_{k}z^{k}\in \mathrm{{\mathcal{A}}}(n,p)\;}\text{and}\right.  \TCItag{4.10} \\
&&\left. \text{ }\sum_{k=n+p}^{\infty }\frac{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}{(B-A)(p-\sigma )}\left\vert \left\vert b_{k}\right\vert -\left\vert a_{k}\right\vert \right\vert \leq \eta \;\;(\eta >0)\right\} .  \notag
\end{eqnarray}
Next, we prove the following result.

\noindent \textbf{Theorem 4.2. }If the function $f(z)$ defined by (1.10) is in the class $\widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta +1}(A,B;\sigma ,p),$ then
\begin{equation}
\widetilde{\mathrm{{\mathcal{N}}}}_{n,p}^{\eta }(f)\subset \widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p), \tag{4.11}
\end{equation}
where
\begin{equation*}
\eta :=\frac{n[\lambda \mu (n+p)+\lambda -\mu ]}{n[\lambda \mu (n+p)+\lambda -\mu ]+p+l}.
\end{equation*}
The result is the best possible in the sense that $\eta $ cannot be increased.

\noindent \textbf{Proof. }For a function $f(z)\in \widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta +1}(A,B;\sigma ,p)$ of the form (1.10), Theorem 3.1 immediately yields
\begin{equation}
\sum_{k=n+p}^{\infty }\frac{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}{(B-A)(p-\sigma )}\left\vert a_{k}\right\vert \leq \frac{p+l}{n[\lambda \mu (n+p)+\lambda -\mu ]+p+l}.
\tag{4.12}
\end{equation}
Similarly, by taking
\begin{equation*}
g(z):=z^{p}-\sum_{k=n+p}^{\infty }\left\vert b_{k}\right\vert z^{k}\in \widetilde{\mathrm{{\mathcal{N}}}}_{n,p}^{\eta }(f)\;\;\left( \eta =\frac{n[\lambda \mu (n+p)+\lambda -\mu ]}{n[\lambda \mu (n+p)+\lambda -\mu ]+p+l}\right) ,
\end{equation*}
we find from the definition (4.10) that
\begin{equation}
\sum_{k=n+p}^{\infty }\frac{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}{(B-A)(p-\sigma )}\left\vert \left\vert b_{k}\right\vert -\left\vert a_{k}\right\vert \right\vert \leq \eta \;\;(\eta >0). \tag{4.13}
\end{equation}
With the help of (4.12) and (4.13), we have
\begin{eqnarray*}
{\sum_{k=n+p}^{\infty }\frac{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}{(B-A)(p-\sigma )}\left\vert b_{k}\right\vert } &\leq &{\sum_{k=n+p}^{\infty }\frac{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}{(B-A)(p-\sigma )}\left\vert a_{k}\right\vert } \\
&&{+\sum_{k=n+p}^{\infty }\frac{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}{(B-A)(p-\sigma )}\left\vert \left\vert b_{k}\right\vert -\left\vert a_{k}\right\vert \right\vert } \\
&{\leq }&{\frac{p+l}{n[\lambda \mu (n+p)+\lambda -\mu ]+p+l}+\eta =1.}
\end{eqnarray*}
Hence, in view of Theorem 3.1 again, we see that $g(z)\in \widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p).$ To show the sharpness of the assertion of Theorem 4.2, we consider the functions $f(z)$ and $g(z)$ given by
\begin{equation}
f(z)=z^{p}-\left[ \frac{(B-A)(p-\sigma )}{(n+p)(1+B)\Phi _{p}^{n+p}(\delta +1,\lambda ,\mu ,l)}\right] z^{n+p}\in \widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta +1}(A,B;\sigma ,p) \tag{4.14}
\end{equation}
and
\begin{equation}
g(z)=z^{p}-\left[ \frac{(B-A)(p-\sigma )}{(n+p)(1+B)\Phi _{p}^{n+p}(\delta +1,\lambda ,\mu ,l)}+\frac{(B-A)(p-\sigma )}{(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}\eta ^{\ast }\right] z^{n+p}, \tag{4.15}
\end{equation}
where $\eta ^{\ast }>\eta .$ Clearly, the function $g(z)$ belongs to $\widetilde{\mathrm{{\mathcal{N}}}}_{n,p}^{\eta ^{\ast }}(f).$ On the other hand, we find from Theorem 3.1 that $g(z)\notin \widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p).$ This evidently completes the proof of Theorem 4.2.

\section{\noindent PARTIAL SUMS OF THE FUNCTION CLASS $\protect\widetilde{\mathcal{P}}_{\protect\lambda ,\protect\mu ,l}^{\protect\delta }(A,B;\protect\sigma ,p)$}

Following the earlier work by Silverman [47] and, more recently, by Liu [29] and Deniz \textit{et al}. [17], in this section we investigate the ratio of the real parts of functions of the form (1.10) and their sequences of partial sums defined by
\begin{equation}
\kappa _{m}(z)=\left\{
\begin{array}{l}
{z^{p},\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\ \ \ \ \ \ \ \;m=1,2,...,n+p-1;} \\
{z^{p}-\sum_{k=n+p}^{m}\left\vert a_{k}\right\vert z^{k},\;\;\;m=n+p,n+p+1,....}
\end{array}
\right. \;\;(k\geq n+p;\;n,p\in \mathbb{N}) \tag{5.1}
\end{equation}
and determine sharp lower bounds for $\Re \left\{ {f(z)\diagup \kappa _{m}(z)}\right\} $ and $\Re \left\{ {\kappa _{m}(z)\diagup f(z)}\right\} .$

\noindent \textbf{Theorem 5.1. }Let $f\in \mathrm{{\mathcal{A}}}(n,p)$ and $\kappa _{m}(z)$ be given by (1.10) and (5.1), respectively.
Suppose also that \begin{equation} \sum_{k=n+p}^{\infty }\theta _{k}\left\vert a_{k}\right\vert \leq 1~~\left( \mathrm{where\;}{\theta }_{k}=\frac{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}{(B-A)(p-\sigma )}\right) . \tag{5.2} \end{equation} Then for $m\geq k+p,$ we have \begin{equation} \Re \left( \frac{f(z)}{\kappa _{m}(z)}\right) >1-\frac{1}{{\theta }_{m+1}} \tag{5.3} \end{equation} and \begin{equation} \Re \left( \frac{\kappa _{m}(z)}{f(z)}\right) >\frac{{\theta }_{m+1}}{1+{ \theta }_{m+1}}. \tag{5.4} \end{equation} The results are sharp for every $m$ with the extremal functions given by \begin{equation} f(z)=z^{p}-\frac{1}{{\theta }_{m+1}}z^{m+1}. \tag{5.5} \end{equation} \noindent \textbf{Proof. }Under the hypothesis of the theorem, we can see from (5.2) that \begin{equation*} {\theta }_{k+1}>{\theta }_{k}>1\text{\ \ }(k\geq n+p). \end{equation*} Therefore, we have \begin{equation} \sum_{k=n+p}^{m}\left\vert a_{k}\right\vert +{\theta }_{m+1}\sum_{k=m+1}^{ \infty }\left\vert a_{k}\right\vert \leq \sum_{k=n+p}^{\infty }{\theta } _{k}\left\vert a_{k}\right\vert \leq 1 \tag{5.6} \end{equation} by using hypothesis (5.2) again. Upon setting \begin{eqnarray} {\omega (z)} &=&{{\theta }_{m+1}\left[ \frac{f(z)}{\kappa _{m}(z)}-\left( 1- \frac{1}{{\theta }_{m+1}}\right) \right] } \TCItag{5.7} \\ &{=}&{1-\frac{{\theta }_{m+1}\sum_{k=m+1}^{\infty }\left\vert a_{k}\right\vert z^{k-p}}{1-\sum_{k=n+p}^{m}\left\vert a_{k}\right\vert z^{k-p}}.} \notag \end{eqnarray} By applying (5.6) and (5.7), we find that \begin{eqnarray} {\left\vert \frac{\omega (z)-1}{\omega (z)+1}\right\vert } &=&{\left\vert \frac{-{\theta }_{m+1}\sum_{k=m+1}^{\infty }\left\vert a_{k}\right\vert z^{k-p}}{2-2\sum_{k=n+p}^{m}\left\vert a_{k}\right\vert z^{k-p}-{\theta } _{m+1}\sum_{k=m+1}^{\infty }\left\vert a_{k}\right\vert z^{k-p}}\right\vert } \TCItag{5.8} \\ &\leq &{\frac{{\theta }_{m+1}\sum_{k=m+1}^{\infty }\left\vert a_{k}\right\vert z^{k-p}}{2-2\sum_{k=n+p}^{m}\left\vert a_{k}\right\vert z^{k-p}-{\theta }_{m+1}\sum_{k=m+1}^{\infty }\left\vert a_{k}\right\vert z^{k-p}}\leq 1\;\;(z\in \mathrm{{\mathcal{U}}};\;k\geq n+p),} \notag \end{eqnarray} which shows that $\Re \left( \omega (z)\right) >0\;(z\in \mathrm{{\mathcal{U} }}).$ From (5.7), we immediately obtain the inequality (5.3). To see that the function $f$ given by (5.5) gives the sharp result, we observe for $z\rightarrow 1^{-}$ that \begin{equation*} \frac{f(z)}{\kappa _{m}(z)}=1-\frac{1}{{\theta }_{m+1}}z^{m-p+1}\rightarrow 1-\frac{1}{{\theta }_{m+1}}, \end{equation*} which shows that the bound in (5.3) is the best possible. 
Similarly, if we put
\begin{eqnarray}
{\phi (z)} &=&{(1+{\theta }_{m+1})\left[ \frac{\kappa _{m}(z)}{f(z)}-\frac{{\theta }_{m+1}}{1+{\theta }_{m+1}}\right] }  \TCItag{5.9} \\
&{=}&{1+\frac{(1+{\theta }_{m+1})\sum_{k=m+1}^{\infty }\left\vert a_{k}\right\vert z^{k-p}}{1-\sum_{k=n+p}^{\infty }\left\vert a_{k}\right\vert z^{k-p}},}  \notag
\end{eqnarray}
and make use of (5.6), we can deduce that
\begin{eqnarray}
{\left\vert \frac{\phi (z)-1}{\phi (z)+1}\right\vert } &=&{\left\vert \frac{(1+{{\theta }_{m+1}})\sum_{k=m+1}^{\infty }\left\vert a_{k}\right\vert z^{k-p}}{2-2\sum_{k=n+p}^{m}\left\vert a_{k}\right\vert z^{k-p}+({{\theta }_{m+1}}-1)\sum_{k=m+1}^{\infty }\left\vert a_{k}\right\vert z^{k-p}}\right\vert }  \TCItag{5.10} \\
&{\leq }&{\;\frac{(1+{{\theta }_{m+1}})\sum_{k=m+1}^{\infty }\left\vert a_{k}\right\vert z^{k-p}}{2-2\sum_{k=n+p}^{m}\left\vert a_{k}\right\vert z^{k-p}-({{\theta }_{m+1}}-1)\sum_{k=m+1}^{\infty }\left\vert a_{k}\right\vert z^{k-p}}\leq 1\;\;(z\in \mathrm{{\mathcal{U}}};\;k\geq n+p),}  \notag
\end{eqnarray}
which leads us immediately to assertion (5.4) of the theorem. The bound in (5.4) is sharp with the extremal function given by (5.5). The proof of the theorem is thus completed.

\section{\noindent PROPERTIES ASSOCIATED WITH QUASI-CONVOLUTION}

In this section, we establish certain results concerning the quasi-convolution of functions in the class $\widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p).$ For the functions $f_{j}(z)\in \mathrm{{\mathcal{A}}}(n,p)$ given by
\begin{equation}
f_{j}(z)=z^{p}-\sum_{k=n+p}^{\infty }\left\vert a_{k,j}\right\vert z^{k}\;\;(j=\overline{1,m},\;p\in \mathbb{N}), \tag{6.1}
\end{equation}
we denote by $(f_{1}\bullet f_{2})(z)$ the quasi-convolution of the functions $f_{1}(z)$ and $f_{2}(z),$ that is,
\begin{equation}
(f_{1}\bullet f_{2})(z)=z^{p}-\sum_{k=n+p}^{\infty }\left\vert a_{k,1}\right\vert \left\vert a_{k,2}\right\vert z^{k}. \tag{6.2}
\end{equation}

\noindent \textbf{Theorem 6.1. }If $f_{j}(z)\in \widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma _{j},p)$ $(j=\overline{1,m}),$ then
\begin{equation}
(f_{1}\bullet f_{2}\bullet ...\bullet f_{m})(z)\in \widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\Upsilon ,p), \tag{6.3}
\end{equation}
where
\begin{equation}
\Upsilon :=p-\frac{\prod_{j=1}^{m}(B-A)(p-\sigma _{j})}{(B-A)[(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)]^{m-1}}. \tag{6.4}
\end{equation}
The result is sharp for the functions $f_{j}(z)$ given by
\begin{equation}
f_{j}(z)=z^{p}-\frac{(B-A)(p-\sigma _{j})}{(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}z^{p+n}\;\;(j=\overline{1,m}). \tag{6.5}
\end{equation}

\noindent \textbf{Proof. }For $m=1,$ we see that $\Upsilon =\sigma _{1}.$ For $m=2,$ Theorem 3.1 gives
\begin{equation}
\sum_{k=n+p}^{\infty }\frac{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}{(B-A)(p-\sigma _{j})}\left\vert a_{k,j}\right\vert \leq 1\;\;(j=1,2). \tag{6.6}
\end{equation}
Therefore, by the Cauchy-Schwarz inequality, we obtain
\begin{equation}
\sum_{k=n+p}^{\infty }\frac{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}{\sqrt{\prod_{j=1}^{2}(B-A)(p-\sigma _{j})}}\sqrt{\left\vert a_{k,1}\right\vert \left\vert a_{k,2}\right\vert }\leq 1.
\tag{6.7}
\end{equation}
To prove the case when $m=2,$ we have to find the largest $\Upsilon $ such that
\begin{equation}
\sum_{k=n+p}^{\infty }\frac{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}{(B-A)(p-\Upsilon )}\left\vert a_{k,1}\right\vert \left\vert a_{k,2}\right\vert \leq 1, \tag{6.8}
\end{equation}
or such that
\begin{equation}
\frac{\left\vert a_{k,1}\right\vert \left\vert a_{k,2}\right\vert }{(B-A)(p-\Upsilon )}\leq \frac{\sqrt{\left\vert a_{k,1}\right\vert \left\vert a_{k,2}\right\vert }}{\sqrt{\prod_{j=1}^{2}(B-A)(p-\sigma _{j})}}, \tag{6.9}
\end{equation}
or, equivalently, that
\begin{equation}
\sqrt{\left\vert a_{k,1}\right\vert \left\vert a_{k,2}\right\vert }\leq \frac{(B-A)(p-\Upsilon )}{\sqrt{\prod_{j=1}^{2}(B-A)(p-\sigma _{j})}}. \tag{6.10}
\end{equation}
Further, by using (6.7), we need to find the largest $\Upsilon $ such that
\begin{equation*}
\frac{\sqrt{\prod_{j=1}^{2}(B-A)(p-\sigma _{j})}}{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}\leq \frac{(B-A)(p-\Upsilon )}{\sqrt{\prod_{j=1}^{2}(B-A)(p-\sigma _{j})}}
\end{equation*}
or, equivalently, that
\begin{equation}
\frac{1}{(B-A)(p-\Upsilon )}\leq \frac{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}{\prod_{j=1}^{2}(B-A)(p-\sigma _{j})}. \tag{6.11}
\end{equation}
It follows from (6.11) that
\begin{equation}
\Upsilon \leq p-\frac{\prod_{j=1}^{2}(B-A)(p-\sigma _{j})}{(B-A)k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}. \tag{6.12}
\end{equation}
Now, defining the function $\psi (k)$ by
\begin{equation}
\psi (k)=p-\frac{\prod_{j=1}^{2}(B-A)(p-\sigma _{j})}{(B-A)k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}, \tag{6.13}
\end{equation}
we see that $\psi ^{\prime }(k)\geq 0$ for $k\geq p+n.$ This implies that
\begin{equation*}
\Upsilon \leq \psi (n+p)=p-\frac{\prod_{j=1}^{2}(B-A)(p-\sigma _{j})}{(B-A)(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}.
\end{equation*}
Therefore, the result is true for $m=2.$

Suppose now that the result is true for a positive integer $m.$ Then we have $(f_{1}\bullet f_{2}\bullet ...\bullet f_{m}\bullet f_{m+1})(z)\in \widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\gamma ,p),$ where
\begin{equation*}
\gamma =p-\frac{(B-A)(p-\Upsilon )(B-A)(p-\sigma _{m+1})}{(B-A)(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}
\end{equation*}
and $\Upsilon $ is given by (6.4). After a simple calculation, we have
\begin{equation*}
\gamma =p-\frac{\prod_{j=1}^{m+1}(B-A)(p-\sigma _{j})}{(B-A)[(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)]^{m}}.
\end{equation*}
Thus the result is true for $m+1.$ Therefore, by mathematical induction, we conclude that the result is true for any positive integer $m.$

Finally, taking the functions $f_{j}(z)$ defined by (6.5), we have
\begin{eqnarray*}
{(f_{1}\bullet f_{2}\bullet ...\bullet f_{m})(z)} &=&{z^{p}-\left\{ \prod_{j=1}^{m}\frac{(B-A)(p-\sigma _{j})}{(p+n)(1+B)\Phi _{p}^{p+n}(\delta ,\lambda ,\mu ,l)}\right\} z^{p+n}} \\
&{=}&{z^{p}-\mathrm{A}_{p+n}z^{p+n},}
\end{eqnarray*}
which shows that
\begin{eqnarray*}
\sum_{k=p+n}^{\infty }\frac{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}{(B-A)(p-\Upsilon )}\mathrm{A}_{k} &=&\frac{(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}{(B-A)(p-\Upsilon )}\mathrm{A}_{p+n} \\
&=&\frac{(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}{(B-A)(p-\Upsilon )}\times \left\{ \prod_{j=1}^{m}\frac{(B-A)(p-\sigma _{j})}{(p+n)(1+B)\Phi _{p}^{p+n}(\delta ,\lambda ,\mu ,l)}\right\} =1.
\end{eqnarray*}
Consequently, the result is sharp.
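\noindent \textbf{Example 6.1. }As an illustration of the case $m=2$ of Theorem 6.1, let $f_{1}(z)$ and $f_{2}(z)$ be the extremal functions given by (6.5). Then, by (6.2),
\begin{equation*}
(f_{1}\bullet f_{2})(z)=z^{p}-\frac{(B-A)^{2}(p-\sigma _{1})(p-\sigma _{2})}{[(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)]^{2}}z^{n+p},
\end{equation*}
while (6.4) with $m=2$ gives
\begin{equation*}
(B-A)(p-\Upsilon )=\frac{(B-A)^{2}(p-\sigma _{1})(p-\sigma _{2})}{(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}.
\end{equation*}
Hence the coefficient condition (3.1) for the class $\widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\Upsilon ,p)$ is satisfied with equality by $(f_{1}\bullet f_{2})(z),$ which is consistent with the sharpness assertion of Theorem 6.1.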
Putting $\sigma _{j}=\sigma $ $(j=\overline{1,m})$ in Theorem 6.1, we have:

\noindent \textbf{Corollary 6.2. }If $f_{j}(z)\in \widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p)$ $(j=\overline{1,m}),$ then
\begin{equation*}
(f_{1}\bullet f_{2}\bullet ...\bullet f_{m})(z)\in \widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\Upsilon ,p),
\end{equation*}
where
\begin{equation*}
\Upsilon :=p-\frac{[(B-A)(p-\sigma )]^{m}}{(B-A)[(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)]^{m-1}}.
\end{equation*}
The result is sharp for the functions $f_{j}(z)$ given by
\begin{equation*}
f_{j}(z)=z^{p}-\frac{(B-A)(p-\sigma )}{(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}z^{p+n}\;\;(j=\overline{1,m}).
\end{equation*}

\noindent \textbf{Remark 6.1. }For special values of the parameters $\lambda ,\mu ,l,\delta ,\sigma ,A,B,n$ and $p,$ our results reduce to several well-known results as follows:
\begin{enumerate}
\item Putting $A=-1,\;B=1,\;\delta =0$ and $m=2$ in Theorem 6.1, we obtain the corresponding results of Yaguchi \textit{et al}. [55] and Aouf and Darwish [7] for $n=1.$

\item Putting $A=-1,\;B=1,\;\delta =0$ and $m=2$ in Corollary 6.2, we obtain the corresponding results of Lee \textit{et al}. [26] for $n=1$ and Sekine and Owa [44] for $p=1.$

\item Putting $A=-1,\;B=1,\;\delta =0$ and $m=3$ in Corollary 6.2, we obtain the corresponding result due to Aouf and Darwish [7] for $n=1.$

\item Putting $A=-\beta ,\;B=\beta ,\;\delta =0$ and $\sigma =\alpha $ in Theorem 6.1, we obtain the corresponding result due to Aouf [5].
\end{enumerate}

\noindent \textbf{Theorem 6.2. }Let the functions $f_{j}(z)$ $(j=\overline{1,m})$ given by (6.1) be in the class $\widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma _{j},p).$ Then the function
\begin{equation}
h(z)=z^{p}-\sum_{k=n+p}^{\infty }\left( \sum_{j=1}^{m}\left\vert a_{k,j}\right\vert ^{2}\right) z^{k} \tag{6.14}
\end{equation}
belongs to the class $\widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\chi ,p),$ where
\begin{equation}
\chi :=p-\frac{m(B-A)(p-\sigma ^{\ast })^{2}}{(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}\;\;(\sigma ^{\ast }:=\min \{\sigma _{1},\sigma _{2},...,\sigma _{m}\}). \tag{6.15}
\end{equation}
The result is sharp for the functions $f_{j}(z)$ $(j=\overline{1,m})$ given by (6.5).

\noindent \textbf{Proof. }By virtue of Theorem 3.1 we have
\begin{equation}
\sum_{k=p+n}^{\infty }\left\{ \frac{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}{(B-A)(p-\sigma _{j})}\right\} ^{2}\left\vert a_{k,j}\right\vert ^{2}\leq \left\{ \sum_{k=p+n}^{\infty }\frac{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}{(B-A)(p-\sigma _{j})}\left\vert a_{k,j}\right\vert \right\} ^{2}\leq 1\;\;(j=\overline{1,m}). \tag{6.16}
\end{equation}
Summing these inequalities over $j$ and noting that $p-\sigma _{j}\leq p-\sigma ^{\ast }$ $(j=\overline{1,m}),$ it follows that
\begin{equation}
\frac{1}{m}\sum_{k=p+n}^{\infty }\left\{ \frac{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}{(B-A)(p-\sigma ^{\ast })}\right\} ^{2}\left( \sum_{j=1}^{m}\left\vert a_{k,j}\right\vert ^{2}\right) \leq 1. \tag{6.17}
\end{equation}
Therefore, we need to find the largest $\chi $ such that
\begin{equation}
\sum_{k=p+n}^{\infty }\frac{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}{(B-A)(p-\chi )}\left( \sum_{j=1}^{m}\left\vert a_{k,j}\right\vert ^{2}\right) \leq 1. \tag{6.18}
\end{equation}
In view of (6.17), this holds true if
\begin{equation}
\chi \leq p-\frac{m(B-A)(p-\sigma ^{\ast })^{2}}{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}\;\;(\sigma ^{\ast }:=\min \{\sigma _{1},\sigma _{2},...,\sigma _{m}\},\;k\geq p+n).
\tag{6.19}
\end{equation}
Now, defining the function $\Im (k)$ by
\begin{equation}
\Im (k):=p-\frac{m(B-A)(p-\sigma ^{\ast })^{2}}{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}, \tag{6.20}
\end{equation}
we see that $\Im (k)$ is an increasing function of $k,$ $k\geq p+n.$ Setting $k=p+n$ in (6.19), we have
\begin{equation*}
\chi \leq \Im (n+p)=p-\frac{m(B-A)(p-\sigma ^{\ast })^{2}}{(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)},
\end{equation*}
which completes the proof of Theorem 6.2.

Setting $\sigma _{j}=\sigma $ $(j=\overline{1,m})$ in Theorem 6.2, we arrive at the following result.

\noindent \textbf{Corollary 6.3. }Let the functions $f_{j}(z)$ $(j=\overline{1,m})$ given by (6.1) be in the class $\widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p).$ Then the function
\begin{equation*}
h(z)=z^{p}-\sum_{k=n+p}^{\infty }\left( \sum_{j=1}^{m}\left\vert a_{k,j}\right\vert ^{2}\right) z^{k}
\end{equation*}
belongs to the class $\widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\chi ,p),$ where
\begin{equation*}
\chi :=p-\frac{m(B-A)(p-\sigma )^{2}}{(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}.
\end{equation*}
The result is sharp for the functions $f_{j}(z)$ $(j=\overline{1,m})$ given by (6.5).

\noindent \textbf{Remark 6.2.}
\begin{enumerate}
\item Putting $A=-1,\;B=1,\;\delta =0$ and $m=2$ in Theorem 6.2, we obtain the corresponding results of Yaguchi \textit{et al}. [55].

\item Putting $A=-1,\;B=1,\;\delta =0$ and $m=2$ in Corollary 6.3, we obtain the corresponding results of Aouf and Darwish [7] for $n=1,$ and Sekine and Owa [44] for $p=1$.

\item Putting $A=-\beta ,\;B=\beta ,\;\delta =0$ and $\sigma =\alpha $ in Theorem 6.2, we obtain the corresponding result due to Aouf [5].
\end{enumerate}

\section{\noindent APPLICATIONS OF FRACTIONAL CALCULUS OPERATORS}

Various operators of fractional calculus (that is, fractional integrals and fractional derivatives) have been studied in the literature rather extensively (\textit{cf., e.g}., [37, 50, 53]; see also [14, 51] and the various references cited therein). For our present investigation, we recall the following definitions.

\noindent \textbf{Definition 7.1.} Let $f(z)$ be analytic in a simply connected region of the $z$-\textit{plane} containing the origin. The fractional integral of $f$ of order $\nu $ is defined by
\begin{equation}
\mathrm{{\mathcal{D}}}_{z}^{-\nu }f(z)=\frac{1}{\Gamma (\nu )}\int_{0}^{z}\frac{f(\zeta )}{(z-\zeta )^{1-\nu }}d\zeta \text{\ \ }(\nu >0), \tag{7.1}
\end{equation}
where the multiplicity of $(z-\zeta )^{\nu -1}$ is removed by requiring that $\log (z-\zeta )$ is real for $z-\zeta >0$.

\noindent \textbf{Definition 7.2.} Let $f(z)$ be analytic in a simply connected region of the $z$-\textit{plane} containing the origin. The fractional derivative of $f$ of order $\nu $ is defined by
\begin{equation}
\mathrm{{\mathcal{D}}}_{z}^{\nu }f(z)=\frac{1}{\Gamma (1-\nu )}\frac{d}{dz}\int_{0}^{z}\frac{f(\zeta )}{(z-\zeta )^{\nu }}d\zeta \text{\ \ }(0\leq \nu <1), \tag{7.2}
\end{equation}
where the multiplicity of $(z-\zeta )^{-\nu }$ is removed by requiring that $\log (z-\zeta )$ is real for $z-\zeta >0$.

\noindent \textbf{Definition 7.3}. Under the hypotheses of Definition 7.2, the fractional derivative of order $n+\nu $ is defined, for a function $f(z)$, by
\begin{equation}
\mathrm{{\mathcal{D}}}_{z}^{n+\nu }f(z)=\frac{d^{n}}{dz^{n}}\{\mathrm{{\mathcal{D}}}_{z}^{\nu }f(z)\}\;\;(0\leq \nu <1;\;n\in \mathbb{N}_{0}).
\tag{7.3} \end{equation} In this section, we shall investigate the growth and distortion properties of functions in the class $\widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p),$ which involving the operators $\mathrm{{\mathcal{I}}} _{\vartheta ,p}$ and $\mathrm{{\mathcal{D}}}_{z}^{\nu }.$ In order to derive our results, we need the following lemma given by Chen \textit{et al.} [14]. \noindent \textbf{Lemma 7.1 }(\textit{see} [14]). Let the function $f(z)$ defined by (1.10). Then \begin{equation} \mathrm{{\mathcal{D}}}_{z}^{\nu }\{(\mathrm{{\mathcal{I}}}_{\vartheta ,p}f)(z)\}=\frac{\Gamma (p+1)}{\Gamma (p+1-\nu )}z^{p-\nu }-\sum_{k=n+p}^{\infty }\frac{(\vartheta +p)\Gamma (k+1)}{(\vartheta +k)\Gamma (k+1-\nu )}a_{k}z^{k-\nu } \tag{7.4} \end{equation} \begin{equation*} (\nu \in \mathrm{ \mathbb{R} };\;\vartheta >-p;\;p,n\in \mathrm{ \mathbb{N} }) \end{equation*} and \begin{equation} \mathrm{{\mathcal{I}}}_{\vartheta ,p}\{(\mathrm{{\mathcal{D}}}_{z}^{\nu }f)(z)\}=\frac{(\vartheta +p)\Gamma (p+1)}{(\vartheta +p-\nu )\Gamma (p+1-\nu )}z^{p-\nu }-\sum_{k=n+p}^{\infty }\frac{(\vartheta +p)\Gamma (k+1) }{(\vartheta +k-\nu )\Gamma (k+1-\nu )}a_{k}z^{k-\nu } \tag{7.5} \end{equation} \begin{equation*} (\nu \in \mathrm{ \mathbb{R} };\;\vartheta >-p;\;p,n\in \mathrm{ \mathbb{N} }) \end{equation*} provided that no zeros appear in the denominators in (7.4) and (7.5). \noindent \textbf{Theorem 7.1. }Let the functions $f(z)$ defined by (1.10) be in the class $\widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p).$ Then \begin{eqnarray} &&{\left\vert \mathrm{{\mathcal{D}}}_{z}^{-\nu }\{(\mathrm{{\mathcal{I}}} _{\vartheta ,p}f)(z)\}\right\vert } \TCItag{7.6} \\ &{\geq }&\left\{ {\frac{\Gamma (p+1)}{\Gamma (p+1+\nu )}-\frac{(\vartheta +p)\Gamma (n+p+1)(B-A)(p-\sigma )}{(\vartheta +n+p)\Gamma (n+p+1+\nu )(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}\left\vert z\right\vert ^{n}}\right\} {\left\vert z\right\vert ^{p+\nu }} \notag \end{eqnarray} \begin{equation*} (z\in \mathrm{{\mathcal{U}}};\;\nu >0;\;\vartheta >-p;\;p,n\in \mathrm{ \mathbb{N} }) \end{equation*} and \begin{eqnarray} &&{\left\vert \mathrm{{\mathcal{D}}}_{z}^{-\nu }\{(\mathrm{{\mathcal{I}}} _{\vartheta ,p}f)(z)\}\right\vert } \TCItag{7.7} \\ &{\leq }&\left\{ {\frac{\Gamma (p+1)}{\Gamma (p+1+\nu )}+\frac{(\vartheta +p)\Gamma (n+p+1)(B-A)(p-\sigma )}{(\vartheta +n+p)\Gamma (n+p+1+\nu )(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}\left\vert z\right\vert ^{n}}\right\} {\left\vert z\right\vert ^{p+\nu }} \notag \end{eqnarray} \begin{equation*} (z\in \mathrm{{\mathcal{U}}};\;\nu >0;\;\vartheta >-p;\;p,n\in \mathrm{ \mathbb{N} }). \end{equation*} Each of the assertions (7.6) and (7.7) is sharp. \noindent \textbf{Proof.} In view of Theorem 3.1, we have \begin{equation} \frac{(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}{(B-A)(p-\sigma )} \sum_{k=n+p}^{\infty }\left\vert a_{k}\right\vert \leq \sum_{k=n+p}^{\infty } \frac{k(1+B)\Phi _{p}^{k}(\delta ,\lambda ,\mu ,l)}{(B-A)(p-\sigma )} \left\vert a_{k}\right\vert \leq 1, \tag{7.8} \end{equation} which readily yields \begin{equation} \sum_{k=n+p}^{\infty }\left\vert a_{k}\right\vert \leq \frac{(B-A)(p-\sigma ) }{(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}. 
\tag{7.9}
\end{equation}
Consider the function $\mathrm{{\mathcal{F}}}(z)$ defined in $\mathrm{{\mathcal{U}}}$ by
\begin{eqnarray*}
\mathrm{{\mathcal{F}}}(z) &=&\frac{\Gamma (p+1+\nu )}{\Gamma (p+1)}z^{-\nu }\mathrm{{\mathcal{D}}}_{z}^{-\nu }\{(\mathrm{{\mathcal{I}}}_{\vartheta ,p}f)(z)\} \\
&=&z^{p}-\sum_{k=n+p}^{\infty }\frac{(\vartheta +p)\Gamma (k+1)\Gamma (p+1+\nu )}{(\vartheta +k)\Gamma (k+1+\nu )\Gamma (p+1)}\left\vert a_{k}\right\vert z^{k} \\
&=&z^{p}-\sum_{k=n+p}^{\infty }\Theta (k)\left\vert a_{k}\right\vert z^{k}\text{ \ }(z\in \mathrm{{\mathcal{U}}}),
\end{eqnarray*}
where
\begin{equation}
\Theta (k):=\frac{(\vartheta +p)\Gamma (k+1)\Gamma (p+1+\nu )}{(\vartheta +k)\Gamma (k+1+\nu )\Gamma (p+1)}\text{\ \ }(k\geq p+n;\;p,n\in \mathbb{N};\;\nu >0). \tag{7.10}
\end{equation}
Since $\Theta (k)$ is a \textit{decreasing} function of $k$ when $\nu >0,$ we get
\begin{equation}
0<\Theta (k)\leq \Theta (n+p)=\frac{(\vartheta +p)\Gamma (n+p+1)\Gamma (p+1+\nu )}{(\vartheta +n+p)\Gamma (n+p+1+\nu )\Gamma (p+1)} \tag{7.11}
\end{equation}
\begin{equation*}
(\nu >0;\;\vartheta >-p;\;p,n\in \mathbb{N}).
\end{equation*}
Thus, by using (7.9) and (7.11), for all $z\in \mathcal{U},$ we deduce that
\begin{eqnarray*}
\left\vert \mathrm{{\mathcal{F}}}(z)\right\vert &\geq &\left\vert z\right\vert ^{p}-\Theta (n+p)\left\vert z\right\vert ^{n+p}\sum_{k=n+p}^{\infty }\left\vert a_{k}\right\vert \\
&\geq &\left\vert z\right\vert ^{p}-\frac{(\vartheta +p)\Gamma (n+p+1)\Gamma (p+1+\nu )(B-A)(p-\sigma )}{(\vartheta +n+p)\Gamma (n+p+1+\nu )\Gamma (p+1)(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}\left\vert z\right\vert ^{n+p}
\end{eqnarray*}
and
\begin{eqnarray*}
\left\vert \mathrm{{\mathcal{F}}}(z)\right\vert &\leq &\left\vert z\right\vert ^{p}+\Theta (n+p)\left\vert z\right\vert ^{n+p}\sum_{k=n+p}^{\infty }\left\vert a_{k}\right\vert \\
&\leq &\left\vert z\right\vert ^{p}+\frac{(\vartheta +p)\Gamma (n+p+1)\Gamma (p+1+\nu )(B-A)(p-\sigma )}{(\vartheta +n+p)\Gamma (n+p+1+\nu )\Gamma (p+1)(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}\left\vert z\right\vert ^{n+p},
\end{eqnarray*}
which yield the inequalities (7.6) and (7.7) of Theorem 7.1. Equalities in (7.6) and (7.7) are attained for the function $f(z)$ given by
\begin{eqnarray*}
{\mathrm{{\mathcal{D}}}_{z}^{-\nu }\{(\mathrm{{\mathcal{I}}}_{\vartheta ,p}f)(z)\}} &=&{\left\{ \frac{\Gamma (p+1)}{\Gamma (p+1+\nu )}\right. } \\
&&{\,\,\left. \,-\frac{(\vartheta +p)\Gamma (n+p+1)(B-A)(p-\sigma )}{(\vartheta +n+p)\Gamma (n+p+1+\nu )(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}z^{n}\right\} z^{p+\nu }}
\end{eqnarray*}
or, equivalently, by
\begin{equation*}
(\mathrm{{\mathcal{I}}}_{\vartheta ,p}f)(z)=z^{p}-\frac{(\vartheta +p)(B-A)(p-\sigma )}{(\vartheta +n+p)(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}z^{n+p}.
\end{equation*}
Thus, we complete the proof of Theorem 7.1.

\noindent \textbf{Theorem 7.2.
}Let the function $f(z)$ defined by (1.10) be in the class $\widetilde{\mathcal{P}}_{\lambda ,\mu ,l}^{\delta }(A,B;\sigma ,p).$ Then
\begin{eqnarray}
&&{\left\vert \mathrm{{\mathcal{D}}}_{z}^{\nu }\{(\mathrm{{\mathcal{I}}}_{\vartheta ,p}f)(z)\}\right\vert }  \TCItag{7.12} \\
&{\geq }&\left\{ {\frac{\Gamma (p+1)}{\Gamma (p+1-\nu )}-\frac{(\vartheta +p)\Gamma (n+p)(B-A)(p-\sigma )}{(\vartheta +n+p)\Gamma (n+p+1-\nu )(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}\left\vert z\right\vert ^{n}}\right\} {\left\vert z\right\vert ^{p-\nu }}  \notag
\end{eqnarray}
\begin{equation*}
(z\in \mathrm{{\mathcal{U}}};\;0\leq \nu <1;\;\vartheta >-p;\;p,n\in \mathbb{N})
\end{equation*}
and
\begin{eqnarray}
&&{\left\vert \mathrm{{\mathcal{D}}}_{z}^{\nu }\{(\mathrm{{\mathcal{I}}}_{\vartheta ,p}f)(z)\}\right\vert }  \TCItag{7.13} \\
&{\leq }&\left\{ {\frac{\Gamma (p+1)}{\Gamma (p+1-\nu )}+\frac{(\vartheta +p)\Gamma (n+p)(B-A)(p-\sigma )}{(\vartheta +n+p)\Gamma (n+p+1-\nu )(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}\left\vert z\right\vert ^{n}}\right\} {\left\vert z\right\vert ^{p-\nu }}  \notag
\end{eqnarray}
\begin{equation*}
(z\in \mathrm{{\mathcal{U}}};\;0\leq \nu <1;\;\vartheta >-p;\;p,n\in \mathbb{N}).
\end{equation*}
Each of the assertions (7.12) and (7.13) is sharp.

\noindent \textbf{Proof.} It follows from Theorem 3.1 that
\begin{equation}
\sum_{k=n+p}^{\infty }k\left\vert a_{k}\right\vert \leq \frac{(B-A)(p-\sigma )}{(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}. \tag{7.14}
\end{equation}
Consider the function $\mathrm{{\mathcal{Q}}}(z)$ defined in $\mathrm{{\mathcal{U}}}$ by
\begin{eqnarray*}
\mathrm{{\mathcal{Q}}}(z) &=&\frac{\Gamma (p+1-\nu )}{\Gamma (p+1)}z^{\nu }\mathrm{{\mathcal{D}}}_{z}^{\nu }\{(\mathrm{{\mathcal{I}}}_{\vartheta ,p}f)(z)\} \\
&=&z^{p}-\sum_{k=n+p}^{\infty }\frac{(\vartheta +p)\Gamma (k)\Gamma (p+1-\nu )}{(\vartheta +k)\Gamma (k+1-\nu )\Gamma (p+1)}k\left\vert a_{k}\right\vert z^{k} \\
&=&z^{p}-\sum_{k=n+p}^{\infty }\wp (k)k\left\vert a_{k}\right\vert z^{k}\text{\ \ }(z\in \mathrm{{\mathcal{U}}}),
\end{eqnarray*}
where, for convenience,
\begin{equation}
\wp (k):=\frac{(\vartheta +p)\Gamma (k)\Gamma (p+1-\nu )}{(\vartheta +k)\Gamma (k+1-\nu )\Gamma (p+1)}\text{\ \ }(k\geq p+n;\;p,n\in \mathbb{N};\;0\leq \nu <1). \tag{7.15}
\end{equation}
Since $\wp (k)$ is a \textit{decreasing} function of $k$ when $0\leq \nu <1,$ we find that
\begin{equation}
0<\wp (k)\leq \wp (n+p)=\frac{(\vartheta +p)\Gamma (n+p)\Gamma (p+1-\nu )}{(\vartheta +n+p)\Gamma (n+p+1-\nu )\Gamma (p+1)} \tag{7.16}
\end{equation}
\begin{equation*}
(0\leq \nu <1;\;\vartheta >-p;\;p,n\in \mathbb{N}).
\end{equation*}
Hence, with the aid of (7.14) and (7.16), for all $z\in \mathcal{U},$ we have
\begin{eqnarray*}
\left\vert \mathrm{{\mathcal{Q}}}(z)\right\vert &\geq &\left\vert z\right\vert ^{p}-\wp (n+p)\left\vert z\right\vert ^{n+p}\sum_{k=n+p}^{\infty }k\left\vert a_{k}\right\vert \\
&\geq &\left\vert z\right\vert ^{p}-\frac{(\vartheta +p)\Gamma (n+p)\Gamma (p+1-\nu )(B-A)(p-\sigma )}{(\vartheta +n+p)\Gamma (n+p+1-\nu )\Gamma (p+1)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}\left\vert z\right\vert ^{n+p}
\end{eqnarray*}
and
\begin{eqnarray*}
\left\vert \mathrm{{\mathcal{Q}}}(z)\right\vert &\leq &\left\vert z\right\vert ^{p}+\wp (n+p)\left\vert z\right\vert ^{n+p}\sum_{k=n+p}^{\infty }k\left\vert a_{k}\right\vert \\
&\leq &\left\vert z\right\vert ^{p}+\frac{(\vartheta +p)\Gamma (n+p)\Gamma (p+1-\nu )(B-A)(p-\sigma )}{(\vartheta +n+p)\Gamma (n+p+1-\nu )\Gamma (p+1)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}\left\vert z\right\vert ^{n+p},
\end{eqnarray*}
which yield the inequalities (7.12) and (7.13) of Theorem 7.2. Equalities in (7.12) and (7.13) are attained for the function $f(z)$ given by
\begin{eqnarray*}
&&{\mathrm{{\mathcal{D}}}_{z}^{\nu }\{(\mathrm{{\mathcal{I}}}_{\vartheta ,p}f)(z)\}} \\
&{=}&\left\{ {\frac{\Gamma (p+1)}{\Gamma (p+1-\nu )}-\frac{(\vartheta +p)\Gamma (n+p)(B-A)(p-\sigma )}{(\vartheta +n+p)\Gamma (n+p+1-\nu )(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}z^{n}}\right\} {z^{p-\nu }}
\end{eqnarray*}
or, equivalently, by
\begin{equation*}
(\mathrm{{\mathcal{I}}}_{\vartheta ,p}f)(z)=z^{p}-\frac{(\vartheta +p)(B-A)(p-\sigma )}{(\vartheta +n+p)(n+p)(1+B)\Phi _{p}^{n+p}(\delta ,\lambda ,\mu ,l)}z^{n+p}.
\end{equation*}
This completes the proof of Theorem 7.2.

\end{document}
\begin{document} \title{A characterization of rich $c$-partite($c \geq 8$) tournaments without $(c+2)$-cycles\thanks{The author's work is supported by NNSF of China (No.12071260).}} \author{Jie Zhang, Zhilan Wang, Jin Yan\thanks{Corresponding author: Jin Yan, Email: [email protected]} \unskip\\ School of Mathematics, Shandong University, Jinan 250100, China} \date{} \maketitle \begin{center} \normalsize\noindent{\bf Abstract}\quad \end{center} Let $c$ be an integer. A $c$-partite tournament is an orientation of a complete $c$-partite graph. A $c$-partite tournament is rich if it is strong and each partite set has at least two vertices. In 1996, Guo and Volkmann characterized the structure of all rich $c$-partite tournaments without $(c+1)$-cycles, which solved a problem by Bondy. They also put forward a problem that what the structure of rich $c$-partite tournaments without $(c+k)$-cycles for some $k \geq 2$ is. In this paper, we answer the question of Guo and Volkmann for $k=2$.\\ \noindent{\bf Keywords}\quad Multipartite tournaments; cycles; strong\\ \noindent{\bf AMS Subject Classification}\quad 05C70, 05C38\\ \section{Introduction} In this paper, we consider only finite digraphs without loops or multiple arcs. For a digraph $D$, we denote its vertex set by $V(D)$ and its arc set by $A(D)$. A digraph is \emph{strong}, if for any pair of distinct vertices $x$ and $y$ there is a path from $x$ to $y$. The notation $q$-cycle ($q$-path) means a cycle (path) with $q$ arcs. We will use $(A,B)$-arc to denote an arc from a vertex in $A$ to a vertex in $B$. A \emph{$c$-partite tournament} is an orientation of a complete $c$-partite graph and is \emph{rich} if it is strong and each partite set has at least two vertices. We denote by $\mathcal{D}$ the family of all rich $c$-partite ($c\geq 5$) tournaments. It is clear that \emph{tournaments} are special $c$-partite tournaments on $c$ vertices with exactly one vertex in each partite set. An increasing interest is to generalize results in tournaments to larger classes of digraphs, such as multipartite tournaments. For results on tournaments and multipartite tournaments, we refer the readers to \cite{Bang-Jensen2,Bang-Jensen,Bang-Jensen3,Beineke,Volkmann}. Many researchers have done a lot of work on the study of cycles whose length does not exceed the number of partite sets. In 1976, Bondy \cite{bondy} proved that every strong $c$-partite ($c\geq 3$) tournament contains a $k$-cycle for all $k \in \{3, 4, \ldots, c\}$. He also showed that every $c$-partite tournament in $\mathcal{D}$ contains a $q$-cycle for some $q>c$, and asked the following question: does every multipartite tournament of $\mathcal{D}$ contains a $(c+1)$-cycle? A negative answer to this question was obtained by Gutin \cite{gutin1}. The same counterexample was found independently by Balakrishnan and Paulraja in \cite{Balakrishnan}. Further in \cite{gutin2}, Gutin proved the following result. \begin{thm1}\cite{gutin2} \label{Gutin} Every multipartite tournament in $\mathcal{D}$ has a $(c+1)$-cycle or a $(c+2)$-cycle. \end{thm1} In \cite{guo}, $\mathcal{W}_m$ is defined as follows. Let $c (\geq 5)$ be an integer and $P=x_1\cdots x_m$ be a path with $m \geq c$. The $c$-partite tournament consisting of the vertex set $\{x_1,\ldots,x_m\}$ and the arc set $A(P)\cup \{x_ix_j: i-j>1\ \text{and}\ i \not\equiv j (\text{mod}\ c)\ \text{where}\ i,j \in [m]\}$ is denoted by $W_m$. 
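For instance, if $c=5$ and $m=6$, then $W_6$ has vertex set $\{x_1,\ldots,x_6\}$, its partite sets are the residue classes modulo $5$ (so that $x_1$ and $x_6$ lie in the same partite set), and its arc set consists of the path arcs $x_1x_2,x_2x_3,x_3x_4,x_4x_5,x_5x_6$ together with the backward arcs $x_3x_1$, $x_4x_1$, $x_5x_1$, $x_4x_2$, $x_5x_2$, $x_6x_2$, $x_5x_3$, $x_6x_3$ and $x_6x_4$; the pair $\{x_1,x_6\}$ is non-adjacent since $6\equiv 1 \pmod 5$.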
The set of all $c$-partite tournaments obtained from $W_m$ by replacing $x_i$ by a vertex set $A_i$ with $|A_i| \geq 2$ for $i\in \{1,2,m-1,m\}$ is denoted by $\mathcal{W}_m$.

In 1996, Guo and Volkmann \cite{guo} gave a complete solution of this problem of Bondy and determined the structure of all $c$-partite ($c \geq 5$) tournaments in $\mathcal{D}$ that have no $(c+1)$-cycle.

\begin{thm1}\cite{guo}\label{guo2}
Let $D$ be a $c$-partite tournament in $\mathcal{D}$. Then $D$ has no $(c+1)$-cycle if and only if $D$ is isomorphic to a member of $\mathcal{W}_m$.
\end{thm1}

In this paper, we characterize all $c$-partite ($c\geq 8$) tournaments in $\mathcal{D}$ without $(c+2)$-cycles. Before defining the families $\mathcal{Q}_m$ and $\mathcal{H}$, we present the main theorem.

\begin{thm1} \label{main}
Let $D$ be a $c$-partite ($c\geq 8$) tournament in $\mathcal{D}$. Then $D$ has no $(c+2)$-cycle if and only if $D$ is isomorphic to a member of $\mathcal{Q}_m$ or $\mathcal{H}$.
\end{thm1}

The families $\mathcal{Q}_m$ and $\mathcal{H}$ are described as follows.

$\bullet$ Let $i$ be a given integer with $2<i< c-1$. Define $\mathcal{H}^\prime$ to be the set of $(c+1)$-partite tournaments whose partite sets are $V_1,\ldots,V_{c+1}$, where $V_1=\{v_1\}$, $|V_i| \geq 1$ and $|V_j| \geq 2$ for $j \in [c+1]\setminus \{1,i\}$, and whose arc set consists of arcs from each vertex of $V_{j_1}$ to each vertex of $V_{j_2}$, where $2 \leq j_1 < j_2 \leq c+1$, together with arcs between $v_1$ and vertices in other partite sets with arbitrary directions. The family of all $c$-partite tournaments obtained from a member of $\mathcal{H}^\prime$ by deleting all arcs between $v_1$ and $V_i$ and merging $V_1$ and $V_i$ into a partite set is denoted by $\mathcal{H}$.

\begin{figure} \caption{An example of the $8$-partite tournament with $|V_1|=3$ and $|V_j|=2$ for $2 \leq j \leq 8$. Here, the arcs between $v_1$ and other vertices are arbitrary.} \label{fig1} \end{figure}

$\bullet$ Let $s$ and $t$ be two fixed integers with $1 \leq s <t-1 \leq c$ and $P=x_1\cdots x_m$ be a path with $m \geq c$. We denote by $Q_m^\prime$ the $(c+1)$-partite tournament consisting of the vertex set $\{x_1,\ldots,x_m\}$ and the arc set $A(P)\cup \{x_ix_j: i-j>1\ \text{and}\ i \not\equiv j\ (\text{mod}\ (c+1))\ \text{where}\ i,j \in [m]\}$, and write $V_r=\{x_i:\ i\equiv r\ (\text{mod}\ (c+1))\}$ for its partite sets. Deleting the arcs of $Q_m^\prime$ between $V_s$ and $V_t$ and merging $V_s$ and $V_t$ into a partite set, we obtain a $c$-partite tournament $Q_m^1$. Let $\mathcal{Q}_m=\mathcal{Q}_m^1 \cup \mathcal{Q}_m^2 $, where $\mathcal{Q}_m^1$ and $ \mathcal{Q}_m^2 $ are defined as follows.
\begin{enumerate}[{\bf($\mathcal{Q}_m^1$)}]
\item The set of all $c$-partite tournaments obtained from $Q_m^1$ by substituting $x_i$ with a vertex set $A_i$ is denoted by $\mathcal{Q}_m^1$ for
\begin{enumerate}[(1)]
\item $i \in \{1,2,m-1,m\};$ or

\item $i=t$ when $s=1$ and $t=3$ or $4$; or

\item $i=m-2$ when $\{m,m-2\}\equiv\{s,t\}$ (mod $(c+1)$), or $i=m-3$ when $\{m,m-3\}\equiv\{s,t\}$ (mod $(c+1)$).
\end{enumerate} \item $\mathcal{Q}_m^2$ is the set of all $c$-partite tournaments obtained from a member of $\mathcal{Q}_m^1$ by reversing some arcs satisfying \begin{align} \begin{split} \left\{ \begin{array}{ll} (A_2, A_3)\hbox{-arcs},& \hbox{when $t=3,s=1$;}\\ (A_1, A_2)\hbox{-arcs}, & \hbox{when $t=c+1,s=2$;} \\ (A_{m-2}, A_{m-1})\hbox{-arcs}, & \hbox{when $\{m-2, m\} \equiv \{s,t\} \ (\text{mod}\ (c+1)) $;} \\ (A_{m-1}, A_m)\hbox{-arcs}, & \hbox{when $\{m-1, m-c\} \equiv \{s,t\} \ (\text{mod}\ (c+1)) $.} \nonumber \end{array} \right. \end{split} \end{align} \end{enumerate} Note that in our main theorem, the structure we found in the proof needs that the parameter $c$ is at least 8. However, this condition may be not sharp. The characterization of rich $c$-partite tournaments with $5 \leq c \leq 7$ needs more techniques. We conclude this section by giving the following organization. In the second section, we set up notation and some helpful lemmas. The proof of the main theorem is presented in the third section. \begin{figure} \caption{An example of $Q_m^1$. Here, all other possible arcs are of the same direction as the path. } \label{fig2} \end{figure} \section{Notation and useful lemmas} {\large{\emph{2.1 Notation }}} For terminology and notation not defined here, we refer to \cite{Bang-Jensen}. Let $D$ be a digraph. For the vertex $x \in V(D)$, the set of out-neighbours of $x$ is denoted by $N_D^+(x)=\{y \in V(D) |\ xy \in A(D)\}$ and the set of in-neighbours of $x$ is denoted by $N_D^-(x)=\{y \in V(D) |\ y x \in A(D)\}$, respectively. For a vertex set $X \subseteq V (D)$, we define $N^+(X)= N_D^+(X) = \cup_{x\in X} N_D^+(x)\setminus X$ and $N^-(X) = N_D^-(X) = \cup_{x\in X} N_D^-(x)\setminus X$. When $X$ is a subdigraph of $D$, we write $N^+(X)$ instead of $N^+(V(X))$. We define $D[X]$ as the subdigraph of $D$ induced by $X$, and let $D - X=D[V(D)\setminus X]$. Define $[t] = \{1,\ldots, t\}$ for simplicity. Let $C$ be a cycle (or path). For a vertex $v$ of $C$, the successor and the predecessor of $v$ on $C$ are denoted by $v^+$ and $v^-$, respectively. We write the $i$-th successor and the $i$-th predecessor of $v$ on $C$ as $v^{i+}$ and $v^{i-}$, respectively. The notation $v_i C v_j$ means the subpath of $C$ from $v_i$ to $v_j$ along the orientation of $C$. The length of $C$ is the number of arcs of $C$. We say a vertex $x$ outside $C$ can be \emph{inserted} into $C$ if there is an in-neighbour of $x$ on $C$, say $v$, such that $v^+$ is an out-neighbour of $x$. If $xy$ is an arc in $A(D)$, then we write $x \rightarrow y$ and say that $x$ dominates $y$. If $X$ and $Y$ are two disjoint vertex sets of $D$, we use $X \rightarrow Y$ to denote that every vertex of $X$ dominates every vertex of $Y$, and define $A \Rightarrow B$ that there is no arc from a vertex in $B$ to a vertex in $A$. If $X$ or $Y$ consists of a single vertex, we omit the braces in all following notation. Correspondingly, $x\nrightarrow y$ expresses that $xy \notin A(D)$. A path $P=x\cdots y$ is \emph{minimal} if no proper subset of $V(P)$ induces a subdigraph of $D$ which contains a path from $x$ to $y$. For two vertices $x$ and $y$ in $D$, the distance from $x$ to $y$ in $D$, denoted by $dist(x, y)$, is the length of a shortest path from $x$ to $y$ in $D$. The \emph{diameter} of $D$, denoted by $diam(D)$, is the maximum distance between all pairs of its vertices. \begin{flushleft} {\large{\emph{2.2 Useful lemmas. 
}}} \end{flushleft} We give the following results that are frequently used in the proof of Theorem \ref{main}. \begin{thm1}\cite{guo}\label{guo1} Let $D$ be a strong $c$-partite tournament. If $D$ has a $k$-cycle containing vertices from exactly $l$ partite sets with $l<c$, then $D$ has a $t$-cycle for all $t$ satisfying $k \leq t \leq c+(k-l)$. \end{thm1} \begin{remark} \label{fact1} Let $C$ be a $k$-cycle in a digraph $D$. If $D$ contains no $(k+1)$-cycle, then no vertex can be inserted into $C$. \end{remark} \begin{lem} \label{lem+-} Let $D$ be a multipartite tournament in $\mathcal{D}$ and $C$ a $(c+1)$-cycle of $D$. Suppose that $D$ has no $(c+2)$-cycle and $D-C \subseteq N^+(C) \cap N^-(C)$. Then for every $y \in D-C$, there exists a vertex $x \in C$ such that $x$ and $y$ belong to the same partite set of $D$ and have the same in-neighbours and out-neighbours in $C$.\end{lem} \begin{proof} Let $C=x_1x_2\cdots x_iy_1x_{i+1}\cdots x_cx_1$ be a $(c+1)$-cycle of $D$, where $x_j \in V_j$ for $j \in [c]$ and $y_1 \in V_1$. Clearly, there exists a vertex $x \in C$ such that $x$ and $y$ belong to the same partite set of $D$. Assume that $x= x_j$ for some $j \in [c]$, since the case $x=y_1$ is similar to the case $x=x_1$. Suppose that $j=1$. First suppose that $yx_{1}^-\in A(D)$. Then $y \Rightarrow x_{i+1}Cx_c$. Obviously, if $y \Rightarrow x_{i}$ then $y \Rightarrow x_{2}Cx_i$. Since $y \in N^+(C) \cap N^-(C)$, we have $x_{i} \Rightarrow y$. Thus $x_{i} \Rightarrow y \Rightarrow x_{i+1}$ and $y$ and $y_1$ have the same in-neighbours and out-neighbours in $C$. Next suppose that $x_{1}^+y\in A(D)$. Similarly, we obtain that $y$ and $y_1$ have the same in-neighbours and out-neighbours in $C$ again. Otherwise, $x_{1}^-y, yx_{1}^+ \in A(D)$, and $y$ and $x_1$ have the same in-neighbours and out-neighbours in $C$. Now suppose that $j\in [c]\setminus \{1\}$. If $yx_{j}^-\in A(D)$, then $y \Rightarrow C$ because $D$ has no $(c+2)$-cycle, which contradicts the assumption. Thus $x_{j}^-y\in A(D)$ for $j \in \{2,\ldots,c\}$. Similarly, we obtain that $y \rightarrow x_j^+$ for $j \in \{2,\ldots,c\}$. Thus $y$ and $x_j$ have the same in-neighbours and out-neighbours in $C$. \end{proof} \begin{lem} \label{clm1} Let $D$ be a $c$-partite tournament in $\mathcal{D}$ and $\mathcal{C}$ the family of all $(c+1)$-cycles of $D$. Suppose that $D$ has no $(c+2)$-cycle. If $D-C \subseteq N^+(C) \cap N^-(C)$ for every $C \in \mathcal{C}$, then $D \in \mathcal{H}$. \end{lem} \begin{proof} Since $D$ has no $(c+2)$-cycle, it follows by Theorem \ref{guo1} that each $(c+1)$-cycle of $D$ meets all partite sets of $D$. This implies that each $(c+1)$-cycle contains exactly two vertices from one partite set and one vertex from each of the other partite sets. Let $C \in \mathcal{C}$ and assume that it contains two vertices of $V_1$, that is, $C=x_1x_2\cdots x_iy_1x_{i+1}\cdots x_cx_1$, where $x_j \in V_j$ for $j \in [c]$ and $y_1 \in V_1$. Since every partite set of $D$ has at least two vertices, there exist $c-1$ vertices $y_2,\ldots,y_c$ such that $y_j \in V_j\setminus\{x_j\} $ for $j\in \{2,\ldots,c\}$. By Lemma \ref{lem+-}, we have $x_j^- \rightarrow y_j \rightarrow x_j^+$ for $j \in [c]\setminus \{1\}$. Note that $x_j$ and $y_j$ have the same in-neighbours and out-neighbours in $C$. Since $y_j$ is an arbitrary vertex of $V_j$ distinct from $x_j$, every vertex in $V_j$ has the same in-neighbours and out-neighbours in $C$ as $x_j$, for $j=2,\ldots ,c$. In what follows, we frequently use this property to determine the directions of the arcs in $A(D)$.
We get $x_1\rightarrow V_2 \rightarrow \cdots \rightarrow V_i \rightarrow y_1 \rightarrow V_{i+1} \rightarrow \cdots \rightarrow V_c \rightarrow x_1$. Let $C^\prime$ be the $(c+1)$-cycle $x_1y_2\cdots y_iy_1y_{i+1}\cdots y_cx_1$. \begin{claim} \label{lem} The following statements hold. \begin{align} & (1)\ \{V_2,\ldots,V_{j-1}\} \rightarrow V_j \rightarrow \{V_{j+1},\ldots,V_{i}\}\ \text{ for }\ 2 \leq j \leq i-1;\nonumber \\ & (2)\ \{V_{i+1},\ldots,V_{j-1}\} \rightarrow V_j \rightarrow \{V_{j+1}, \ldots, V_c\} \ \text{ for }\ i+1 \leq j \leq c-1.\nonumber \end{align} \end{claim} \begin{proof} Suppose that there exists an integer $t \in \{2,\ldots,c\}\backslash \{2,i,i+1,c\}$ such that $y_{t+1}\rightarrow y_{t-1}$. If $x_t^{2+}\rightarrow y_t$, then there is a 6-cycle $x_t^{2+}y_ty_{t+1}y_{t-1}x_tx_{t}^+x_{t}^{2+}$ containing vertices from exactly four partite sets. If $x_t\rightarrow x_{t}^{2-}$, then $x_{t}x_{t}^{2-}x_{t}^-y_ty_{t}^+y_{t}^-x_t$ is a 6-cycle containing vertices from exactly four partite sets. In both cases, we deduce from Theorem \ref{guo1} that $D$ contains a $(c+2)$-cycle, a contradiction. This implies that $y_t \rightarrow x_t^{2+}$ and $x_t^{2-} \rightarrow x_t$. Then $y_tx_{t}^{2+}Cx_{t}^{2-}x_ty_{t+1}y_{t-1}y_t$ is a $(c+2)$-cycle, a contradiction. Thus we obtain that $y_{t-1}\rightarrow y_{t+1}$ for $t \in \{2,\ldots,c\}\backslash \{2,i,i+1,c\}$. By symmetry, we have $V_{t-1} \rightarrow V_{t+1}$. Since $c \geq 8$, it is easy to obtain that $x_j^{3-} \rightarrow y_j$ for $5 \leq j \leq i-1$ and $i+4 \leq j \leq c$ and $y_j \rightarrow x_j^{3+}$ for $ 2 \leq j\leq i-3$ and $i+1 \leq j \leq c-3$. We continue in this fashion to obtain $\{x_{j-1},\ldots,x_2\} \rightarrow y_j \rightarrow \{x_{j+1},\ldots,x_i\}$ for $j\in \{2,\ldots,i-1\}$ and $\{x_{j-1},\ldots,x_{i+1}\} \rightarrow y_j \rightarrow \{x_{j+1}, \ldots, x_c\}$ for $j \in \{i+1,\ldots,c-1\}$, successively. This proves Claim \ref{lem}. \end{proof} By Claim \ref{lem}, we can obtain a $(c+2)$-cycle from any cycle of larger length. Now consider the arcs between $V_i$ and $V_{i+1}$. If $x_ix_{i+1}, x_{i+1}y_i \in A(D)$, then $D$ has a $(c+3)$-cycle $x_ix_{i+1}y_iy_1y_{i+1}x_{i+2}Cx_i$. If $x_{i+1}x_i, x_iy_{i+1} \in A(D)$, then there is a $(c+3)$-cycle $x_{i+1}x_iy_{i+1}C^\prime y_1x_{i+1}$. Since $c \geq 8$, by Claim \ref{lem} we can obtain a $(c+2)$-cycle from such a $(c+3)$-cycle, a contradiction. Thus $V_i \rightarrow V_{i+1}$ or $V_{i+1} \rightarrow V_i$. Suppose that $V_i \rightarrow V_{i+1}$. We show that $\{V_2,\ldots,V_{i-1}\} \rightarrow \{V_i,\ldots,V_c\}$. If $x_{i+2}x_i \in A(D)$, then $x_{i+2}x_ix_{i+1}y_{i+2}C^\prime y_{i+1}x_{i+2}$ is a $(c+4)$-cycle. Since $c \geq 8$, we can obtain a $(c+2)$-cycle by Claim \ref{lem}, a contradiction; hence $x_i \rightarrow x_{i+2}$. Based on this, considering the arcs between $x_{i+3},\ldots,x_c$ and $x_{i}$ in order, we get $x_i \rightarrow \{x_{i+3},\ldots,x_c\} $. Similarly, it is immediate that $\{x_2,\ldots,x_{i-1}\} \rightarrow \{x_{i+1},\ldots,x_c\}$. Thus $\{V_2,\ldots,V_i\} \rightarrow \{V_{i+1},\ldots,V_c\}$. For $V_{i+1} \rightarrow V_i$, we will get $ \{V_{i+1},\ldots,V_c\} \rightarrow \{V_2,\ldots, V_i\} $ in the same way. Note that the structures obtained in the two cases are isomorphic, so we only consider the first structure in the following. We claim that \begin{equation}\label{yy} \{V_2,\ldots,V_i \} \rightarrow y_1\rightarrow \{V_{i+1},\ldots,V_c\}. \end{equation} If $y_1 x_j \in A(D)$ for some $j \in [i-1]$, then $D$ contains a $(c+2)$-cycle $x_1C^\prime y_1x_jx_{i+1}C x_1$, a contradiction.
If $x_j y_1 \in A(D)$ for some $j \in \{i+2,\ldots,c\}$, then the $(c+2)$-cycle $x_jy_1C^\prime x_1C x_ix_j$ is in $D$, a contradiction. Thus (\ref{yy}) holds. If $|V_1|=2$, then $D$ is a member of $\mathcal{H}$, which proves the lemma. Thus assume that $|V_1| \geq 3$. We show that every vertex in $V_1\setminus \{x_1,y_1\}$ has the same in-neighbours and out-neighbours in $C$ as $y_1$. To see this, let $z_1$ be a vertex in $V_1\setminus \{x_1,y_1\}$. Suppose that $z_1\rightarrow x_i$. It is easy to see that $z_1 \rightarrow x_2, \ldots,x_{i-1}$. If $x_j \rightarrow z_1$ for some $j \in \{i+1, \ldots, c\}$, then $x_t \rightarrow z_1$ for all $t \in \{j+1, \ldots, c\}$. Recalling that $V(D-C) \subseteq N^+(C) \cap N^-(C)$, we have $x_c \rightarrow z_1$. Observe that there is a 6-cycle $z_1x_2y_cx_1y_2x_cz_1$ which meets 3 partite sets of $D$, a contradiction by Theorem \ref{guo1} again. Thus $x_i \rightarrow z_1$. Moreover, $\{x_2,\ldots,x_{i-1}\} \rightarrow z_1$; otherwise there exists a $(j+4)$-cycle $x_iz_1x_jx_cx_1C^\prime y_jx_i$ which meets $j+2$ partite sets of $D$ for some $j \in\{2,\ldots,i-1\}$, a contradiction. On the other hand, it is easy to see that $z_1 \rightarrow \{x_{i+1},\ldots,x_j\}$ if $z_1x_{j+1} \in A(D)$ for some $j \in \{i+1,\ldots,c\}$. Since $z_1 \in N^+(C) \cap N^-(C)$, we have $z_1x_{i+1} \in A(D)$. Thus $x_1x_2\cdots x_iz_1x_{i+1}\cdots x_cx_1$ is also a $(c+1)$-cycle. This implies that $z_1$ and $y_1$ have the same in-neighbours and out-neighbours in $C$. Hence $D$ is a member of $\mathcal{H}$. We are done. \end{proof} \section{Proof of Theorem \ref{main}} Now we are ready to prove our main theorem. It is easy to see that every element of $\mathcal{H}$ and $\mathcal{Q}_m$ has no $(c+2)$-cycle. Hence, it suffices to show that the converse is true as well. Suppose that $D$ is a $c$-partite tournament in $\mathcal{D}$ such that $D$ has no $(c+2)$-cycle and is not isomorphic to any element of $\mathcal{H}$ or $\mathcal{Q}_m$. Let $V_1,\ldots ,V_c$ be the partite sets of $D$. By Theorem \ref{Gutin}, we know that $D$ contains a $(c+1)$-cycle. It follows by Theorem \ref{guo1} that each $(c+1)$-cycle of $D$ visits exactly one partite set twice and each of the other partite sets once. Let $\mathcal{C}$ be the set of all $(c+1)$-cycles of $D$. Lemma \ref{clm1} gives that if, for every $C \in \mathcal{C}$, all vertices of $D-C$ are contained in $N^+(C) \cap N^-(C)$, then $D \in \mathcal{H}$. Thus there exists at least one cycle $C$ in $\mathcal{C}$ such that $D-C$ contains a vertex outside $N^+(C) \cap N^-(C)$. Denote $C=x_1x_2\cdots x_{c+1}x_1$, where $x_j \in V_j$ for $j \in [i-1]$, $x_{j} \in V_{j-1}$ for $j \in \{i+1,\ldots,c+1\}$ and $x_i \in V_1$. Without loss of generality, assume that there exists a vertex $z \notin N^-(C)$. Because $D$ is strong, there is a path from $z$ to $C$. Let $P=z_1z_2 \cdots z_p$ be such a minimal path with $z_1 = z$, and assume that $z_p=x_t$. It is clear that $p \geq 3$ and $z_2,\ldots,z_{p-2} \notin N^-(C)$; in particular, $z_{p-2} \nrightarrow x_{t-2}$ and $x_{t-1} \rightarrow z_{p-2}$. Since $D$ has no $(c+2)$-cycle, we see that $x_{t-2}z_{p-2} \notin A(D)$. This implies that there is no arc between $z_{p-2}$ and $x_{t-2}$, that is, $x_{t-2}$ and $z_{p-2}$ must belong to the same partite set of $D$. It is not hard to see that $z_{p-1}\nrightarrow x_{t-1}$, since otherwise $x_{t-3}$ and $z_{p-2}$ would belong to the same partite set of $D$, which is impossible. Together with $x_{t-1}\nrightarrow z_{p-1}$, we obtain that $x_{t-1}$ and $z_{p-1}$ belong to the same partite set of $D$.
Further, the vertices $x_{t-k}$ and $z_{p-k}$ belong to the same partite set for $1 \leq k \leq p-1$. It is obvious that $x_{t-2}\rightarrow z_{p-1}$, since $V(C)\setminus \{x_t\} \Rightarrow z_{p-1}$. We may assume that $x_t$ is on the path $x_{i+1}Cx_1$. If $C$ has a path from $x_t$ to $x_{t-2}$ with at most $c-1$ vertices, then together with the path $x_{t-2}x_{t-1}z_{p-2}z_{p-1}x_t$, we can form a cycle of length at most $c+2$, which contains the vertices $x_{t-1}$ and $z_{p-1}$, and the vertices $x_{t-2}$ and $z_{p-2}$, lying in the same partite sets respectively. We deduce from Theorem \ref{guo1} that $D$ has a $(c+2)$-cycle, a contradiction. This gives $x_{j+2}C x_{t-2} \Rightarrow x_j \Rightarrow x_tCx_{j-1}$ for $x_j \in x_tC x_{t-2}$. For the same reason, $x_{t-1} \Rightarrow x_iCx_{t-3}$ when $t > i+2$; and $ x_{t-1} \Rightarrow x_1Cx_{t-3}$ when $t = i+1$ or $i+2$. \begin{claim} \label{clmc+2} $diam(D) \geq c+2$. \end{claim} \begin{proof} Since $x_{t-2}z_{p-1} \in A(D)$, it is not hard to obtain that $x_{t-1}$ dominates each vertex of $x_{t+1}Cx_{i-1}$ when $t \geq i+2$. Otherwise, there exists a vertex $x_j$ in $x_{t+1}Cx_{i-1}$ with $x_{t-1}\rightarrow x_{j+1}Cx_i$ and $x_j \rightarrow x_{t-1}$. Observe that $x_jx_{t-1}x_{j+1}Cx_{t-2}z_{p-1}x_tCx_j$ is a $(c+2)$-cycle, a contradiction. Therefore, $D[C]$ is isomorphic to $Q_{c+1}$ with the initial vertex $x_t$ and the terminal vertex $x_{t-1}$. This implies that every minimal path from $z$ to $C$ must end at $x_t$ and $dist(z,x_{t-1}) \geq c+2$. Thus $diam(D) \geq c+2$ when $t \geq i+2$. If there is a vertex $x \notin V(C)\cup N^-(C)$ such that some minimal path from $x$ to $C$ ends on the path $x_{i+2}Cx_1$, then the proof is complete. Hence we may assume that all such minimal paths from $x$ to $C$ end at $x_{i+1}$, that is, $x_t=x_{i+1}$. The following proof is divided into two cases. \textbf{Case 1:} $i \geq 5$; or $i = 4$ and $x_i \rightarrow x_{c+1}$. If $x_j \rightarrow x_i$ for some $i+3\leq j \leq c$, or $x_i \rightarrow x_{c+1}$ when $i \geq 5$, observe that $x_jx_ix_2x_3x_1z_{p-2}z_{p-1}x_{i+1}Cx_j$ is a cycle of length at most $c+2$ which visits $V_1$ three times, a contradiction. Thus when $i \geq 5$, or $i = 4$ and $x_i \rightarrow x_{c+1}$, we have $x_i \rightarrow x_j$ for $i+3\leq j \leq c+1$. Hence $dist(z,x_{t-1}) \geq c+2$, which implies that $diam(D) \geq c+2$. \textbf{Case 2:} $i = 4$ and $x_{c+1} \rightarrow x_i$; or $i=3$. Recall that every partite set of $D$ has at least two vertices. Hence there is at least one vertex $y$ in $V_c \backslash \{x_{c+1}\}$. If $y \in N^+(C) \backslash N^-(C)$, then $dist(y,x_1) \geq c+2$. If $y \in N^-(C)\backslash N^+(C)$, then by considering the digraph $D^\prime $ obtained by reversing all arcs of $D$, we get $diam(D^\prime) \geq c+2$, that is, $diam(D) \geq c+2$. If $y \in N^+(C)\cap N^-(C)$, then $x_c \rightarrow y \rightarrow x_1$ by Lemma \ref{lem+-}. This implies that $D$ contains a $(c+2)$-cycle $x_c y x_1 Cx_{i-1}x_{c+1}z_{p-1}x_{i+1}Cx_c$, a contradiction. Thus $diam(D) \geq c+2$ when $i \in \{3, 4\}$. This completes the proof of the claim.\end{proof} Let $P=x_1x_2\cdots x_m$ be a path of $D$ with $dist(x_1,x_m)=diam(D)=m-1 \geq c+2$. As $D$ contains no $(c+2)$-cycle, the vertices $x_i$ and $x_{c+i+1}$ must belong to the same partite set.
If there exists a vertex set $\{x_{i_1},x_{j_1},x_{i_2},x_{j_2}\} \subset V(D)$ with $\max\{i_1,j_1,i_2,j_2\}-\min\{i_1,j_1,i_2,j_2\} \leq c$ such that $x_{i_1}$ and $x_{j_1}$ belong to the same partite set and $x_{i_2}$ and $x_{j_2}$ belong to a common partite set different from that of $x_{i_1}$, then $D$ contains a $(c+2)$-cycle by applying Theorem \ref{guo1}, a contradiction. Thus $x_1Px_{c+1}$ meets all partite sets of $D$ and contains two vertices of exactly one partite set. Therefore, $D[P]$ is isomorphic to $Q_{m}$ with the initial vertex $x_1$ and the terminal vertex $x_m$. If $|V(D)|=m$, we are done. So assume that $|V(D)|> m$. Assume that $x_j \in V_j$ for $j \in [i-1]$, $x_{j} \in V_{j-1}$ for $j \in \{i+1,\ldots,c\}$, and $x_i \in V_t$ for some $t \in [i-1]$. Let $x$ be a vertex of $D-P$. Suppose that $x \in N^+(P) \cap N^-(P)$; we now consider the arcs between $x$ and $V(P)$. We use $V_m$ to denote the partite set to which $x_m$ belongs. \begin{claim} \label{clm2} Suppose that $x \in N^+(P) \cap N^-(P)$. If there exist two vertices $x_p$ and $x_q$ on $P$ with $p < q$ such that $x_p \rightarrow x \rightarrow x_q$, then $x$ belongs to one of $\{V_1,V_2,V_{m-1},V_m\}$. Moreover, $x$ has the same in-neighbours and out-neighbours on $P$ as $x_l \in \{x_2,x_3,x_4,x_{m-3},x_{m-2},x_{m-1}\}$, where \begin{align} \begin{split} \left\{ \begin{array}{ll} x_3 \in V_1, & \hbox{when $l=3$;} \\ x_4 \in V_1, & \hbox{when $l=4$;} \\ x_{m-2} \in V_m, & \hbox{when $l=m-2$;} \\ x_{m-3} \in V_m, & \hbox{when $l=m-3$.} \nonumber \end{array} \right. \end{split} \end{align} \end{claim} \begin{proof} Since $D$ has no $(c+2)$-cycle, there is an integer $l$ such that $x_{l-1} \rightarrow x \rightarrow x_{l+1}$ and $x$ is in the same partite set as $x_l$. If $3 \leq l \leq m-2$, it is easy to check that $D[P\cup \{x\}]$ contains a $(c+2)$-cycle $C$ as follows:\\ (1) when $m \geq c+l-1$, \begin{itemize} \item $C=x_{l-1}xx_{l+1}Px_{c+l-1}x_lx_{l-2}x_{l-1}$, or \item $C=x_{l-1}xx_{l+1}Px_{c+l-2}x_lx_{l-3}x_{l-2}x_{l-1}$ unless $x_3 \in V_1$ and $l=3$, or \item $C=x_{l-1}xx_{l+1}Px_{c+l-3}x_lx_{l-4}Px_{l-1}$ unless $ x_4 \in V_1$ and $l=4$; \end{itemize} or (2) when $l \geq c-1$, \begin{itemize} \item $C=x_{l-1}xx_{l+1}x_{l+2}x_lx_{l+2-c}Px_{l-1}$, or \item $C=x_{l-1}xx_{l+1}x_{l+2}x_{l+3}x_lx_{l+3-c}Px_{l-1}$ unless $x_{m-2} \in V_m$ and $l=m-2$, or \item $C=x_{l-1}xx_{l+1}Px_{l+4}x_lx_{l+4-c}Px_{l-1}$ unless $ x_{m-3} \in V_m$ and $l=m-3$. \end{itemize} Then we consider $m < c+l-1$ and $l < c-1$. Recall that $m \geq c+3$. This implies that $l \geq 4$. Clearly, if $x_lx_1, x_{c+1}x_l \in A(D)$, then there is a $(c+2)$-cycle $x_1Px_{l-1}xx_{l+1}Px_{c+1}x_lx_1$. If $x_l$ and $x_{c+1}$ belong to the same partite set, then $D[P\cup \{x\}]$ contains a $(c+2)$-cycle $x_2Px_{l-1}xx_{l+1}Px_{c+2}x_lx_2$. Thus $x_l \in V_1$. Observe that $D[P\cup \{x\}]$ contains $x_3Px_{l-1}xx_{l+1}Px_{c+3}x_lx_3$, which is a $(c+2)$-cycle, unless $l=4$. Thus $x$ belongs to $V_1,\ V_2,\ V_{m-1}$ or $V_m$. We also get $l\in \{2,3,4,m-3,m-2,m-1\}$, and $x_3 \in V_1$ when $l=3$; $x_4 \in V_1$ when $l=4$; $x_{m-2} \in V_m$ when $l=m-2$; and $ x_{m-3} \in V_m$ when $l=m-3$. In all cases, it is easy to check that $x$ and $x_l$ have the same in-neighbours and out-neighbours on $P$, since otherwise $D[P\cup \{x\}]$ contains a cycle of length at most $c+2$ together with two pairs of vertices which belong to the same partite sets.
By Theorem \ref{guo1}, we get a contradiction.\end{proof} \begin{figure} \caption{A cycle of length at most $c+2$ in $D[P\cup \{x\}]$.} \label{example} \end{figure} \begin{claim} \label{clm3} Suppose that $x \in N^+(P) \cap N^-(P)$. If all vertices $x_p$ and $x_q$ with $x_p \rightarrow x \rightarrow x_q$ satisfy $p >q$, then\\ (i) $x$ has the same in-neighbours and out-neighbours on $P$ as $x_1$ or $x_m$; or\\ (ii) $D[P\cup\{x\}]$ has one of the four specific structures described in Figure \ref{clm3-1}; or\\ (iii) $x\rightarrow x_1$, $x_2Px_m \Rightarrow x$ and $x \in V_{c+1}$; or\\ (iv) $x_m\rightarrow x$, $x \Rightarrow x_1Px_{m-1}$ and $x \in V_{m-c}$. \end{claim} \begin{proof} Note that if some vertex $x_q \rightarrow x$ (or $x \rightarrow x_q$), then $ x_qPx_m \Rightarrow x$ (or $x \Rightarrow x_1Px_q$). Let $q$ be the maximum integer such that $x \rightarrow x_q$. First we suppose that $x$ has at least two in-neighbours and two out-neighbours on $P$. Let $x_{q_2}$ be the out-neighbour of $x$ on $P$ preceding $x_q$, and let $x_{p_1},x_{p_2}$ be the two in-neighbours of $x$ on $P$ which are nearest to $x_q$. Clearly, we have $p_2-q_2<5$. Otherwise, $xx_{q_2}Px_{p_2}x$ is a 7-cycle meeting five partite sets of $D$, a contradiction by Theorem \ref{guo1}. Hence there exists at most one vertex in $x_{q_2}Px_{p_2}$ that is non-adjacent to $x$. Let $x_l$ be such a vertex if it exists. According to the position of $x_l$, there are four possible sequences of $x_{q_2}Px_{p_2}$: (1) $x_{q_2}x_qx_{p_1}x_{p_2}$, (2) $x_{q_2}x_lx_qx_{p_1}x_{p_2}$, (3) $x_{q_2}x_qx_lx_{p_1}x_{p_2}$ and (4) $x_{q_2}x_qx_{p_1}x_lx_{p_2}$. If $m \geq c+q$, then there is the $(c+2)$-cycle $xx_qPx_{c+q}x$ for sequences (1), (3) and (4), and the $(c+2)$-cycle $xx_{q_2}Px_{q_2+c}x$ for sequence (2), unless $x$ and $x_{q+c-2}$ are in the same partite set. Observe that for sequence (2), if $q \geq 5$, then there still exists a $(c+2)$-cycle $xx_{q-5}P x_{q-5+c}x$, since $x_{q+c-2}x \notin A(D)$. If $q \geq c$, then $xx_{q-c+1}Px_{p_1}x$ is a $(c+2)$-cycle for sequences (1)-(3), and $xx_{q-c+3}Px_{p_2}x$ is a $(c+2)$-cycle for sequence (4), unless $x$ and $x_{q-c+3}$ belong to the same partite set. We also note that for sequence (4), if $q \geq m-5$ and $q \geq c$, then there still exists a $(c+2)$-cycle $xx_{q-c+5}P x_{q+5}x$. Hence, either $m < c+q$ and $q < c$, or $P$ meets the partite sets of $D$ in one of the two special orders described in Figure \ref{clm3-1}. Moreover, $x$ has exactly two in-neighbours or two out-neighbours, and $x$ belongs to one of $V_1,V_2,V_{m-1},V_m$. \begin{figure} \caption{The structure of $D[P\cup \{x\}]$.} \label{clm3-1} \end{figure} Clearly, $x \in V_1$ or $x \in V_{c+1}$. Otherwise, there is a $(c+2)$-cycle $xx_1Px_{c+1}x$ in $D$. Suppose that $x \in V_1$. If $x_3 \notin V_1$, then $xx_3Px_{c+3}x$ is a $(c+2)$-cycle because $x$ has at least two out-neighbours. Then $x_3 \in V_1$ and $x \rightarrow x_4$. Note that $x_4$ and $x_{c+5}$ belong to the same partite set. If $x_5\rightarrow x$, then $q=4$, $m=c+3$ and $P$ is isomorphic to Type I in Figure \ref{clm3-1}. Otherwise we obtain a $(c+2)$-cycle $xx_5Px_{c+5}x$ when $m \geq c+5$. Hence $m=c+3$ or $m=c+4$, and $P=x_1x_2\cdots x_{c+3}(x_{c+4})$ where $x_3,x_{c+2} ( x_{c+4}) \in V_1$. Next, suppose that $x \in V_{c+1}$. Obviously, there is a $(c+2)$-cycle $xx_2Px_{c+2}x$ when $x \notin V_2$. Similarly, when $x_4 \rightarrow x$ we have $q=3$ and $m=c+2$, which is impossible. Then $x \rightarrow x_4$. We obtain a $(c+2)$-cycle $xx_4Px_{c+4}x$ when $m \geq c+4$.
Hence $m=c+3$ and $P=x_1x_2\cdots x_{c+3}$ where $x_{c+1} \in V_2$. In summary, when $x$ has at least two in-neighbours and two out-neighbours, $P$ has one of the four specific structures described in Figure \ref{clm3-1}, depending on the partite set to which $x$ belongs. Second, we suppose that $x$ has only one in-neighbour or only one out-neighbour on $P$. Then (i) $x \in V_1$ or $x \in V_m$ and $x$ has the same in-neighbours and out-neighbours on $P$ as $x_1$ or $x_m$; or (ii) $x\rightarrow x_1$, $x_2Px_m \Rightarrow x$ and $x \in V_{c+1}$; or (iii) $x_m\rightarrow x$, $x \Rightarrow x_1Px_{m-1}$ and $x \in V_{m-c}$. \end{proof} By Claim \ref{clm2} and Claim \ref{clm3}, we get the following. \begin{pro}\label{property1} If $x \in N^+(P) \cap N^-(P)$, then $x$ and $P$ satisfy one of the following statements. (i) $x$ and one of $\{x_1,x_2,x_{m-1},x_m\}$ belong to the same partite set and their in-neighbours and out-neighbours on $P$ are the same; (ii) $x$ and $x_l \in \{x_3,x_4,x_{m-3},x_{m-2}\}$ belong to the same partite set and their in-neighbours and out-neighbours on $P$ are the same, where $x_3 \in V_1$ when $l=3$; $x_4 \in V_1$ when $l=4$; $x_{m-2} \in V_m$ when $l=m-2$; and $ x_{m-3} \in V_m$ when $l=m-3$; (iii) $D[P \cup \{x\}]$ has one of the four specific structures Type I--IV, which are shown in Figure \ref{clm3-1}; (iv) $x\rightarrow x_1$, $x_2Px_m \Rightarrow x$ and $x \in V_{c+1}$; or $x_m\rightarrow x$, $x \Rightarrow x_1Px_{m-1}$ and $x \in V_{m-c}$. \end{pro} \begin{figure} \caption{The structure of $D[P\cup \{x\}]$.} \label{clm3-3} \end{figure} Next, suppose that $x$ has only in-neighbours in $V(P)$, i.e., $P \Rightarrow x$. Since $D$ is strong, there is a path from $x$ to $P$. Let $P^\prime =x \cdots x^\prime x^{\prime\prime}$ be a shortest path such that $x^{\prime\prime} \in N^-(P)$. If $N^-(x^{\prime\prime}) \cap V(P) = \emptyset$, then there is an integer $j \leq 4$ such that $x_{j+c-1} \rightarrow x^\prime$, and further $D$ contains a $(c+2)$-cycle $x^\prime x^{\prime\prime} x_jPx_{j+c-1}x^\prime$, a contradiction. Then $x^{\prime\prime} \in N^-(P)\cap N^+(P)$. By Proposition \ref{property1}, there are several possible structures of $D[P \cup \{x^{\prime\prime}\}]$. In each case, we obtain a $(c+2)$-cycle or $diam(D) \geq m$, which contradicts the initial assumption that $D$ has no $(c+2)$-cycle or the fact that $diam(D)=m-1$. \begin{itemize} \item Case 1: $x^{\prime\prime}$ satisfies Proposition \ref{property1} (i) and $x_l\in \{x_2,x_{m-1},x_m\}$. It is easy to check that $D$ contains a $(c+2)$-cycle. \item Case 2: $x^{\prime\prime}$ satisfies Proposition \ref{property1} (ii). There exists a $(c+2)$-cycle $x_2x_3x^\prime x^{\prime\prime}x_4 P x_cx_2$ when $l=3$ (or $x_2x_3x_4x^\prime x^{\prime\prime} x_5Px_cx_2$ when $l=4$, resp.). For the cases $l=m-2$ and $l=m-3$, we can obtain a $(c+2)$-cycle similarly. \item Case 3: $x^{\prime\prime}$ satisfies Proposition \ref{property1} (iii). $D[P\cup \{x^\prime,x^{\prime\prime}\}]$ contains a $(c+2)$-cycle $x_1x_2x^\prime x^{\prime\prime}x_3Px_cx_1$ (or $x_1x_2x_3x^\prime x^{\prime\prime}x_4Px_cx_1$) for Types I and III. For Type II, there is a $(c+2)$-cycle $x_{m-1}x^{\prime}x^{\prime\prime}x_{m-c}Px_{m-1}$ unless there is no arc between $x^{\prime\prime}$ and $x_{m-c}$. Moreover, $D$ contains $x_mx^{\prime}x^{\prime\prime}x_{m-c+1}Px_m$ unless there is no arc between $x^{\prime}$ and $x_m$. Then $x_{m-2}x^{\prime}x^{\prime\prime}x_{m-c-1}Px_{m-2}$ is a $(c+2)$-cycle when $x^{\prime\prime} \nrightarrow x_{m-c}$ and $x_m \nrightarrow x^\prime$.
\item Case 4: $x^{\prime\prime}$ satisfies Proposition \ref{property1} (iv) and $x_m \rightarrow x^{\prime\prime}$. There is a $(c+2)$-cycle $x_{m-c} x^\prime x^{\prime\prime} x_{m-c+1} P x_{m-1} x_{m-c}$ unless $x_{m-c}$, $x^{\prime\prime}$ and $x_{m-1}$ belong to the same partite set. In the latter case, $D[P\cup \{x^\prime,x^{\prime\prime}\}]$ contains a $(c+2)$-cycle $x_{m-c} x^\prime x^{\prime\prime} x_{m-c+2} P x_{m} x_{m-c}$. \item Case 5: $x^{\prime\prime}$ satisfies Proposition \ref{property1} (i) with $x_l=x_1$, or (iv) with $x^{\prime\prime} \rightarrow x_1$. According to the analysis of Cases 1--4, we get $dist(x^\prime,x_m) \geq m$, a contradiction. \end{itemize} Hence, it is impossible that $x$ has only in-neighbours on $P$. Analogously, we can show that $D-P$ does not have any vertex which has only out-neighbours on $P$. Since each partite set of $D$ has at least two vertices, $P$ is not of Type III and $m\geq 2c+1$. In the following, we show that no vertex outside $P$ satisfies (iv). Assume that there is a vertex $x$ satisfying (iv) with $x\rightarrow x_1$ and $x_2Px_m \Rightarrow x$. If there is a vertex $y$ outside $P$ such that $ x \rightarrow y$, it is easy to obtain that $y$ and $x_1$ have the same in-neighbours and out-neighbours on $P$; or $y$ satisfies (iv) with $x_m\rightarrow y$ and $y \Rightarrow x_1Px_{m-1}$; or $D[P \cup \{y\}]$ is of Type II. Thus $x_cxyx_{c+1}x_2Px_c$ is a $(c+2)$-cycle unless $x_{c+1} \nrightarrow x_2$. In the latter case, we obtain that $D$ contains $x_3xyx_4Px_cx_1x_2x_3$ or $x_3xyx_5Px_{c+1}x_1x_2x_3$. Thus $y$ and $x_1$ have the same in-neighbours and out-neighbours on $P$. This implies that $dist(x,x_m)=m$, a contradiction. Analogously, if there is a vertex $x$ satisfying (iv) with $x_m\rightarrow x$ and $x \Rightarrow x_1Px_{m-1}$, we will get $dist(x_1,x)=m$, a contradiction. Hence no vertex outside $P$ satisfies (iv). Finally, if there exist a vertex $x$ of Type I and a vertex $y$ of Type II such that $x\rightarrow y$, then $D$ contains a $(c+2)$-cycle $x_1Px_4xyx_5Px_cx_1$ or $x_1Px_4xyx_6Px_{c+1}x_1$, a contradiction. Thus for any vertex $x$ of Type I and any vertex $y$ of Type II, either there is no arc between $x$ and $y$, or $y \rightarrow x$. Observe that $D$ is then isomorphic to a member of $\mathcal{Q}_m$, contradicting our assumption. This proves Theorem \ref{main}.$\Box$ \section{Data Availability Statement} No data were generated or used during the study. \end{document}
\begin{document} \title[Geometric realizations of cyclic actions on surfaces - II]{Geometric realizations of \\ cyclic actions on surfaces - II} \author{Atreyee Bhattacharya} \address{Department of Mathematics\\ Indian Institute of Science Education and Research Bhopal\\ Bhopal Bypass Road, Bhauri \\ Bhopal 462 066, Madhya Pradesh\\ India} \email{[email protected]} \urladdr{https://sites.google.com/iiserb.ac.in/homepage-atreyee-bhattacharya/home?authuser=1} \author{Shiv Parsad} \address{School of Mathematics and Computer Science\\ Indian Institute of Technology Goa\\ Goa College of Engineering Campus\\ Farmagudi, Ponda-403401, Goa\\ India} \email{[email protected]} \author{Kashyap Rajeevsarathy} \address{Department of Mathematics\\ Indian Institute of Science Education and Research Bhopal\\ Bhopal Bypass Road, Bhauri \\ Bhopal 462 066, Madhya Pradesh\\ India} \email{[email protected]} \urladdr{https://home.iiserb.ac.in/$_{\widetilde{\phantom{n}}}$kashyap/} \subjclass[2020]{Primary 57K20; Secondary 57M60} \keywords{surface, mapping class, finite order maps, Teichm{\"u}ller space} \maketitle \begin{abstract} Let $ \mathrm{Mod}(S_g)$ denote the mapping class group of the closed orientable surface $S_g$ of genus $g\geq 2$. Given a finite subgroup $H$ of $\mathrm{Mod}(S_g)$, let $\mathrm{Fix}(H)$ denote the set of fixed points induced by the action of $H$ on the Teichm\"{u}ller space $\mathrm{Teich}(S_g)$. When $H$ is cyclic with $|H| \geq 3$, we show that $\mathrm{Fix}(H)$ admits a decomposition as a product of two-dimensional strips at least one of which is of bounded width. For an arbitrary $H$ with at least one generator of order $\geq 3$, we derive a computable optimal upper bound for the restriction $\mathrm{sys} : \mathrm{Fix}(H) \to \mathbb{R}^+$ of the systole function. Furthermore, we show that in such a case, $\mathrm{Fix}(H)$ is not symplectomorphic to the Euclidean space of the same dimension. Finally, we apply our theory to recover three well known results, namely: (a) Harvey's result giving the dimension of $\mathrm{Fix}(H)$, (b) Gilman's result that $H$ is irreducible if and only if the corresponding orbifold is a sphere with three cone points, and (c) the Nielsen realization theorem for cyclic groups. \end{abstract} \section{Introduction} \label{sec:intro} Let $S_g$ be a closed orientable surface of genus $g\geq 2$, and let $\text{Mod}(S_g)$ denote the mapping class group of $S_g$. Given a finite subgroup $H \leq \mathrm{Mod}(S_g)$, let $\mathrm{Fix}(H)$ denote the set of fixed points induced by the natural action of $H$ on the Teichm\"{u}ller space $\mathrm{Teich}(S_g)$. The Nielsen realization problem asks whether $\mathrm{Fix}(H) \neq \emptyset$, for an arbitrary finite subgroup $H \leq \mathrm{Mod}(S_g)$. While Kerckhoff~\cite{SK1} answered this in the affirmative, Harvey~\cite[Theorem 2]{H1} showed that $\mathrm{Fix}(H) \approx \hat{i} (\mathrm{Teich}(S_g/H))$, where $\mathrm{Teich}(S_g/H)$ is defined in the sense of Bers~\cite{LB}, and $\hat{i}$ is the natural embedding induced by the branched cover $S_g \to S_g/H$ (identifying $H$ with a group of self-homeomorphisms of $S_g$). Recently, the second and third authors~\cite{PRS} proved a structure theorem for realizing a finite order mapping class as an isometry of a hyperbolic structure on $S_g$.
This package had two parts: \begin{enumerate}[(1)] \item realizing an irreducible periodic mapping class (whose Nielsen representative has a fixed point) called a \textit{spherical Type 1 mapping class}, and \item a complete description of how all reducible \textit{non-rotational} (i.e., those that are not realized as surface rotations) periodic mapping classes are built (or realized) inductively from spherical Type 1 mapping classes. \end{enumerate} \noindent In particular, it was shown that a spherical Type 1 mapping class can be realized as the rotation of a polygon with a suitable side-pairing, as illustrated in Figure~\ref{fig:c14-action} below. \begin{figure} \caption{An irreducible order $14$ element.} \label{fig:c14-action} \end{figure} The inductive process of building a reducible non-rotational periodic mapping class involved two key constructions: \begin{itemize} \item $\alpha$-\textit{compatibility} - Pasting together a pair of spherical Type 1 actions across boundary components induced by deleting invariant cyclically permuted disks around points in a pair of compatible orbits of size $\alpha$, which are orbits where the induced local rotation angles are equal, as illustrated in Figure~\ref{fig:2-compatibility} below. \begin{figure} \caption{\tiny An irreducible order $6$ element $h$.} \caption{\tiny A $1$-compatibility between $h$ and $h^5$.} \caption{A $1$-compatibility realizing an order $6$ element.} \label{fig:2-compatibility} \end{figure} \noindent This notion also includes the compatibility across orbits of size $\alpha$ within the same surface, called a \textit{self $\alpha$-compatibility}. \item \textit{Addition (resp. deletion) of a $g_1$-permutation component} - For $g_1 \geq 1$, pasting (resp. removal) of a cyclic permutation of $n$ copies of $S_{g_1,1}$ to (resp. from) the action induced by $h$ on $S_{g,n}$ by removing invariant cyclically permuted disks around an orbit of size $n$, as shown in Figure~\ref{fig:g1-perm} below. \begin{figure} \caption{\small Addition of a $g_1$-permutation component to a sphere rotation.} \label{fig:g1-perm} \end{figure} \end{itemize} Given any cyclic subgroup $H \leq \mathrm{Mod}(S_g)$ of order $n$, the Nielsen-Kerckhoff theorem says that one may also regard $H$ as a cyclic subgroup of $\text{Homeo}^+(S_g)$ generated by an orientation-preserving map $h$ of order $n$. Throughout this paper, we will refer to both the mapping class represented by $h$, and the group it generates, interchangeably, as a $C_n$-action on $S_g$. Furthermore, $h$ has a \textit{corresponding orbifold} given by $\mathcal{O}_h \approx S_{g}/\langle h \rangle$. In Section~\ref{sec:cyclic_irreducibles}, we extend the results in~\cite{PRS} to surface rotations of order $n \geq 3$ (see Theorem~\ref{thm:arb_cyc_real}). We also prove that an arbitrary $C_n$-action $h$ ($n \geq 3$) admits a canonical decomposition that can be visualized as a \textit{necklace with beads}, where the beads represent spherical Type 1 actions and the strings correspond to admissible compatibilities among those beads (see Section 3 for details).
Each invariant curve orbit arising in an $\alpha$-compatibility (in a canonical decomposition of $h$) contributes one length and one twist parameter to $\mathrm{Fix}(\langle h \rangle )$, while the addition of each $g_1$-permutation component contributes $\dim(\mathrm{Teich}(S_{g_1,1}))+1$ Fenchel-Nielsen coordinates. Moreover, each length parameter contributed by an $\alpha$-compatibility is bounded above by a positive constant determined by $h$. Thus, a canonical decomposition of $h$ yields a canonical decomposition of $\mathrm{Fix}(H)$ as a product of two-dimensional strips (subspaces) at least one of which is of bounded width. (For details, we refer the reader to Section~\ref{sec:main}.) \begin{theorem*} \label{intro:main1} Let $H <\mathrm{Mod}(S_g)$ be a cyclic subgroup with $|H| \geq 3$. Suppose that $H$ decomposes into $k$ spherical Type 1 components that were pasted across $p$ many $\alpha$-compatibilities (where $\alpha<n$), $q$ self $\alpha$-compatibilities, and with the addition of a $g_1$-permutation component and the deletion of a $g_2$-permutation component (where $g_2 \leq g_1+q$). Then $$\mathrm{Fix}(H) \approx \left( \prod_{j=1}^{N}((0,\ell_{j}(H)) \times \mathbb{R}) \right) \times \left( \mathbb{R}_+ \times \mathbb{R}\right )^{2(g_1-g_2)-1},$$ \noindent where $N= 3k+g_1-g_2-2p+q-4 >0$, each $\ell_{j}(H) <\infty$ is a positive constant determined by $H$, the factors involving $(0, \ell_{j}(H))$ and $\mathbb{R}_+$ are length parameters arising from $\alpha$-compatibilities and the addition (or deletion) of permutation components, respectively, and the remaining factors are the corresponding twist parameters arising from these constructions (with the implicit understanding that when $g_1-g_2$ is zero, the last factor disappears). \end{theorem*} \noindent It is important to note that in general $g_2 >0$ in Theorem~\ref{intro:main1}, as the deletion of permutation components is necessary to build arbitrary finite-order mapping classes (see Example~\ref{eg: irr_type2_necklace}). Let $\mathrm{sys} : \mathrm{Teich}(S_g) \to \mathbb{R}^+$ be the systole function. The systole function has been extensively studied. Using a variational approach initiated by Schmutz~\cite{PS2}, Akrout~\cite{HA1} proved that $\mathrm{sys}$ is a topological Morse function. Moreover, its critical points, and in particular, its local maxima, have been characterized~\cite{BC0,PS2}. It is easy to see that by an immediate application of the Gauss-Bonnet Theorem for closed hyperbolic surfaces, the restriction $\mathrm{sys} : \mathrm{Fix}(H) \to \mathbb{R}^+$ is bounded above. When $H= \langle h \rangle$, better bounds for $\mathrm{sys}$ are known~\cite{BBCT}, and for the specific case when $\mathcal{O}_h$ is a triangular surface, a method of explicitly computing $\mathrm{sys}$ was described in~\cite{HK}. Applying Theorem~\ref{intro:main1}, in Section~\ref{sec:injrad-Fix}, we derive an easily computable upper bound for $\mathrm{sys}$ which turns out to be better than the bounds provided by~\cite{BBCT} in many cases.
\begin{theorem*} \label{intro:inj_bd} Let $H = \langle h_1,\ldots, h_s \rangle$ be a finite subgroup of $\mathrm{Mod}(S_g)$ such that $|h_i| \geq 3$ for some $1 \leq i \leq s$. Then the restriction $\mathrm{sys} : \mathrm{Fix}(H) \to \mathbb{R}^+$ of the systole function is bounded above by a global constant $\mathcal{U}(H)$ that is determined by the injectivity radii of points in the compatible orbits under a suitable decomposition of the $h_i$s into spherical Type 1 components as in the hypothesis of Theorem~\ref{intro:main1}. Consequently, the injectivity radii of the structures in $\mathrm{Fix}(H)$ are bounded by $\frac{1}{2}\mathcal{U}(H)$. \end{theorem*} \noindent The upper bound $\mathcal{U}(H)$ in Theorem~\ref{intro:inj_bd} is optimal in the sense that for each $\epsilon>0,$ there exist a point $P\in S_{g}$ and a structure $X \in \mathrm{Fix}(H)$ such that the injectivity radius $\mathrm{inj}_P(X)$ at the point $P$ is $>\mathcal{U}(H)-\epsilon$ (see Example~\ref{eg:inj_bd_real1}). Let $\mathfrak{g}_{WP}$ be the Weil-Petersson metric on $\mathrm{Teich}(S_g)$. By viewing $\mathrm{Fix}(H)$ as a K\"{a}hler submanifold of $(\mathrm{Teich}(S_g),\mathfrak{g}_{WP})$~\cite{SK2,MW1}, and appealing to the symplectic non-squeezing theorem due to Gromov~\cite{MG} from symplectic geometry, we conclude the following. \begin{corollary*} Given a finite subgroup $H < \mathrm{Mod}(S_g)$ with at least one generator of order $\geq 3$, $\mathrm{Fix}(H)$ is not symplectomorphic to the Euclidean space of the same dimension. \end{corollary*} Finally, in Section \ref{sec:classical_results}, taking inspiration from the theory developed in this paper and~\cite{PRS}, we recover some classical results. In particular, we provide alternative proofs for the following well known results due to Harvey~\cite{H1,HM1} and Gilman~\cite{JG3}. \begin{corollary*} Let $H = \langle h \rangle$ be an arbitrary $C_n$-action on $S_g$ such that $\mathcal{O}_h$ has $c$ cone points. Then: \begin{enumerate}[(i)] \item (Harvey) $\dim(\mathrm{Fix}(H)) = 6g_0(h) + 2c - 6$, and \item (Gilman) $h$ is irreducible if, and only if, $g_0(h) = 0$ and $c=3$. \end{enumerate} \end{corollary*} \noindent We conclude the paper by sketching a purely topological proof of the Nielsen realization theorem for cyclic groups~\cite{WF2,WF1,SK1,JN1}, which asserts that every finite-order element in $\mathrm{Mod}(S_g)$ has a representative in $\text{Homeo}^+(S_g)$ of the same order. \section{Preliminaries}\label{prel} Let $H=\langle h\rangle \leq \mathrm{Mod}(S_g)$ be a cyclic subgroup of order $n$. Throughout this section we will fix an order $n$ representative of $h$ in $\text{Homeo}^{+}(S_g)$, i.e. a $C_n$-action on $S_g$. A $C_n$-action $D$ on $S_g$ induces a branched covering $S_g \to S_g/C_n,$ where the quotient orbifold $\mathcal{O}_D := S_g/C_n$ has signature $(g_0;\,n_1,\dots,n_{\ell})$ (see~\cite{FM,WT}).
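For instance, a classical example of this setup is provided by an order-$7$ automorphism of the Klein quartic: it generates a $C_7$-action on $S_3$ whose quotient orbifold is a sphere with three cone points of order $7$, that is, it has signature $(0;\,7,7,7)$. The Riemann--Hurwitz relation (cf. Definition~\ref{defn:data_set} below) is verified here as $$\frac{2-2\cdot 3}{7} \;=\; 2-2\cdot 0 + 3\left(\frac{1}{7}-1\right) \;=\; -\frac{4}{7}.$$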
From orbifold covering space theory, we obtain an exact sequence $$ 1 \to \pi_1(S_g) \to \pi^{\text{orb}}_1(\mathcal{O}_D) \stackrel{\rho}{\to} C_n \to 1,$$ where $\pi^{\text{orb}}_1(\mathcal{O}_D)$ is a Fuchsian group~\cite{SK} given by the presentation \begin{gather*} \langle \alpha_1,\dots,\alpha_{\ell}, x_1,y_1,\dots,x_{g_0},y_{g_0}\,|\, \alpha_1^{n_1}=\cdots =\alpha_{\ell}^{n_{\ell}}=1,\, \prod_{i=1}^{\ell}\alpha_i=\prod_{j=1}^{g_0}[x_j,y_j]\,\rangle. \end{gather*} The epimorphism $\pi^{\text{orb}}_1(\mathcal{O}_D)\xrightarrow{\rho}C_n$ is called the surface kernel map~\cite{WH}, and has the form $\rho(\alpha_i) = t^{(n/n_i)c_i}$, for $1 \leq i \leq \ell$, where $C_n = \langle t \rangle$ and $\gcd(c_i,n_i) = 1$. The map $\rho$ is often described by a $(g_0;\,n_1,\dots,n_{\ell})$-generating vector~\cite{SAB,JG4}. From a geometric viewpoint, a cone point of order $n_i$ lifts to an orbit of size $n/n_i$ on $S_g$, and the local rotation induced by $D$ around the points in the orbit is given by $2 \pi c_i^{-1}/n_i$, where $c_i c_i^{-1} \equiv 1 \pmod{n_i}$. (For more details on the theory of finite group actions on surfaces, we refer the reader to~\cite{TB,BA1,HZ}.) Putting together the notions of orbifold signature and the generating vector, we can obtain a combinatorial encoding of the conjugacy class of a cyclic action. \begin{definition}\label{defn:data_set} A \textit{data set of degree $n$} is a tuple $$ D = (n,g_0, r; (c_1,n_1), (c_2,n_2),\ldots, (c_{\ell},n_{\ell})), $$ where $n\geq 1$, $ g_0 \geq 0$, and $0 \leq r \leq n-1$ are integers, and each $c_i$ is a residue class modulo $n_i$ such that: \begin{enumerate}[(i)] \item $r > 0$ if, and only if $\ell = 0$, and when $r >0$, we have $\gcd(r,n) = 1$, \item each $n_i\mid n$, \item for each $i$, $\gcd(c_i,n_i) = 1$, \item for each $i$, $\lcm(n_1,\ldots, \widehat{n_i}, \ldots,n_{\ell}) = \lcm(n_1,\ldots,n_{\ell})$, and $\lcm(n_1,\ldots,n_{\ell}) = n$, if $g_0 = 0$, and \item $\displaystyle \sum_{j=1}^{\ell} \frac{n}{n_j}c_j \equiv 0\pmod{n}$. \end{enumerate} The number $g$ determined by the Riemann-Hurwitz equation \begin{equation*}\label{eqn:riemann_hurwitz} \frac{2-2g}{n} = 2-2g_0 + \sum_{j=1}^{\ell} \left(\frac{1}{n_j} - 1 \right) \end{equation*} is called the \emph{genus} of the data set, which we shall denote by $g(D)$. Given a data set $D$ as above, we define $$n(D) := n, \, g(D) := g, \, r(D) := r, \text{ and }g_0(D) := g_0.$$ The quantity $r(D)$ associated with a data set $D$ will be non-zero if, and only if, $D$ represents a free rotation of $S_{g(D)}$ by $2\pi r(D)/n$. \end{definition} \noindent The following lemma is a consequence of the classical results in~\cite{WH,JN2}. (For more details, see \cite{ALE,km,KP}.)
\begin{lemma}\label{prop:ds-action} Data sets of degree $n$ and genus $g$ correspond to the conjugacy classes of $C_n$-actions on $S_g$. \end{lemma} \noindent From here on, we will follow the nomenclature of data sets to describe $C_n$-actions on $S_g$. In the remainder of this section, we will summarize the theory developed in~\cite{PRS} to be used extensively throughout this paper. \noindent To begin with, we classify $C_n$-actions on $S_g$ into three broad categories. \begin{definition}\label{def:types_of_actions} Let $D$ be a $C_n$-action on $S_g$. Then $D$ is said to be a: \begin{enumerate}[(i)] \item \textit{rotational action}, if either $r(D) \neq 0$, or $D$ is of the form $$(n,g_0;\underbrace{(s,n),(n-s,n),\ldots,(s,n),(n-s,n)}_{k \text{ pairs}}),$$ for integers $k \geq 1$ and $0<s\leq n-1$ with $\gcd(s,n)= 1$, and $k=1$, if and only if $n>2$. \item \textit{Type 1 action}, if $\ell = 3$, and $n_i = n$ for some $i$. (Note that this is a special type of quasi-platonic action~\cite{BA}.) \item \textit{Type 2 action}, if $D$ is neither a rotational nor a Type 1 action. \end{enumerate} \end{definition} \noindent If $g_0(D) = 0$, then we call $D$ a \textit{spherical} action. The following theorem gives a geometric realization of spherical Type 1 actions. \begin{theorem}\label{res:1} For $g \geq 2$, a spherical Type 1 action $$D = (n,0;(c_1,n_1),(c_2,n_2),(c_3,n))$$ on $S_g$ can be realized explicitly as the rotation $\theta_D = 2\pi c_3^{-1}/n$ of a hyperbolic polygon $\mathcal{P}_D$ with a suitable side-pairing $W(\mathcal{P}_D)$, where $\mathcal{P}_D$ is a hyperbolic $k(D)$-gon with $$ \small k(D) := \begin{cases} 2n, & \text{ if } n_1,n_2 \neq 2, \text{ and } \\ n, & \text{otherwise, } \end{cases}$$ and for $0 \leq m\leq n-1$, $$ \small W(\mathcal{P}_D) = \begin{cases} \displaystyle \prod_{i=1}^{n} a_{2i-1} a_{2i} \text{ with } a_{2m+1}^{-1}\sim a_{2z}, & \text{if } k(D) = 2n, \text{ and } \\ \displaystyle \prod_{i=1}^{n} a_{i} \text{ with } a_{m+1}^{-1}\sim a_{z}, & \text{otherwise,} \end{cases}$$ where $\displaystyle z \equiv m+qj \pmod{n}, \,q= (n/n_2)c_3^{-1}$, and $j=n_{2}-c_{2}$. \end{theorem} \begin{definition} \label{rem:triv_self_comp} Let $D = (n,g_0; (c_1,n_1), (c_2,n_2),\ldots, (c_{\ell},n_{\ell}))$ be a $C_n$-action on $S_g$. For a given $g'\geq 1$, one can obtain a new action from $D$ by removing cyclically permuted (mutually disjoint) disks around points in an orbit of size $n$, and then attaching $n$ copies of the surface $S_{g',1}$ along the resultant boundary components. The resultant action, which is uniquely determined up to conjugacy, is denoted by $ \llbracket D, g'\rrbracket$, where $$ \llbracket D, g'\rrbracket := (n,g_0+g'; (c_1,n_1), (c_2,n_2),\ldots, (c_{\ell},n_{\ell})).$$ \noindent Given an action of type $\llbracket D,g'\rrbracket$ for some $g' \geq 1$, one can reverse the construction process described above to recover the action $D$. We denote this reversal process by $\overline{\llbracket D,g'\rrbracket}$ (i.e. $\overline{\llbracket D,g'\rrbracket}=D$).
\end{definition} It is easy to see that a construction of type $\llbracket D, g'\rrbracket$ (or the \textit{addition of a $g'$-permutation component}) and $\overline{\llbracket D,g'\rrbracket}$ (or the \textit{deletion of a $g'$-permutation component}) for some $g'>0$, can be realized by $g'$ inductively performed constructions of type $\llbracket D, 1\rrbracket$ and $\overline{\llbracket D,1\rrbracket}$, respectively. We will now describe a construction of a new $C_n$-action from a pair of existing $C_n$-actions across a pair of compatible orbits of size $m$, where $m$ is a proper divisor of $n$. \begin{definition}\label{def:comp_pair} For $i = 1,2$, two actions $$D_{i}=(n, g_{i,0}; (c_{i,1} , n_{i,1} ),(c_{i,2},n_{i,2}),\ldots,(c_{i,\ell_i},n_{i,\ell_i}))$$ are said to form an $(r,s)$-\textit{compatible pair} $D = \operatorname{{\llparenthesis}} D_1,D_2, (r,s) \operatorname{{\rrparenthesis}}$ if there exist $1 \leq r \leq \ell_1$ and $ 1 \leq s \leq \ell_2$ such that \begin{enumerate}[(i)] \item $n_{1,r} = n_{2,s} = m$, and \item $c_{1,r}+c_{2,s} \equiv 0 \pmod{m}$. \end{enumerate} The number $1+g(D) - g(D_1) - g(D_2)$ will be denoted by $A(D).$ \end{definition} \noindent This is a formalization of the $\alpha$-compatibility described in Section~\ref{sec:intro}, where $\alpha = A(D)$. The following lemma provides a combinatorial recipe for constructing a new action from an $(r,s)$-compatible pair of existing actions. \begin{lemma} \label{lem:comp_pair} Given a pair of cyclic actions as in Definition~\ref{def:comp_pair}, we have \begin{gather*} \operatorname{{\llparenthesis}} D_1,D_2, (r,s) \operatorname{{\rrparenthesis}}=(n,g_{1,0}+g_{2,0};(c_{1,1},n_{1,1}),\dots,\widehat{(c_{1,r},n_{1,r})}, \ldots, (c_{1,\ell_1},n_{1,\ell_1}),\\ (c_{2,1},n_{2,1}),\dots,\widehat{(c_{2,s},n_{2,s})}, \ldots, (c_{2,\ell_2},n_{2,\ell_2})), \end{gather*} where $A\operatorname{{\llparenthesis}} D_1,D_2, (r,s)\operatorname{{\rrparenthesis}} := \frac{n}{n_{1,r}}.$ \end{lemma}
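\noindent For instance, anticipating Example~\ref{eg: irr_type2_necklace} below, the actions $D_1=(42,0;(2,21),(19,42),(19,42))$ and $D_2=(42,0;(5,6),(13,21),(23,42))$ form a $(3,3)$-compatible pair, since $n_{1,3}=n_{2,3}=42$ and $c_{1,3}+c_{2,3}=19+23\equiv 0 \pmod{42}$; by Lemma~\ref{lem:comp_pair}, we get $$\operatorname{{\llparenthesis}} D_1,D_2,(3,3)\operatorname{{\rrparenthesis}}=(42,0;(2,21),(19,42),(5,6),(13,21)), \text{ with } A\operatorname{{\llparenthesis}} D_1,D_2,(3,3)\operatorname{{\rrparenthesis}}=\tfrac{42}{42}=1.$$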
\noindent It is always possible to construct a new $C_n$-action from a pair of $C_n$-actions $D_i$ as in Definition~\ref{def:comp_pair} across a pair of orbits of size $n$. \begin{definition} Given actions $D_i$ as in Definition~\ref{def:comp_pair}, the action \begin{gather*} \operatorname{{\llparenthesis}} D_1,D_2\operatorname{{\rrparenthesis}} :=(n,g_{1,0}+g_{2,0};(c_{1,1},n_{1,1}),\ldots, (c_{1,\ell_1},n_{1,\ell_1}),\\ (c_{2,1},n_{2,1}), \ldots, (c_{2,\ell_2},n_{2,\ell_2})), \end{gather*} where $g(\operatorname{{\llparenthesis}} D_1,D_2\operatorname{{\rrparenthesis}}) = g(D_1)+g(D_2)+n-1$ and $A\operatorname{{\llparenthesis}} D_1, D_2 \operatorname{{\rrparenthesis}} := n$, is said to be a \textit{full compatibility} between the actions $D_i$. \end{definition} \noindent A pair of compatible orbits of the same action on a surface can also be identified to build a new action. \begin{definition}\label{def:self_comp_ds} For $\ell \geq 4$, let $D = (n,g_0; (c_1,n_1), (c_2,n_2),\ldots, (c_{\ell},n_{\ell}))$ be a $C_n$-action. Then $D$ is said to yield an $(r,s)$-\textit{self compatible} action $D' = \llbracket D,(r,s)\rrbracket$, if there exist $1 \leq r < s \leq \ell$ such that \begin{enumerate}[(i)] \item $n_r = n_s = m$, and \item $\displaystyle c_r+c_s \equiv 0 \pmod{m}$. \end{enumerate} The number $g(D')- g(D)$ will be denoted by $A(D')$. \end{definition} \noindent Note that this is the self $\alpha$-compatibility described in Section~\ref{sec:intro}, where $\alpha = A(D')$. The following result gives an explicit realization of the $(r,s)$-\textit{self compatible} action yielded by an action $D$ as above. \begin{lemma} \label{lem:self_comp_ds} Let $D$ be an $(r,s)$-self compatible $C_n$-action as in Definition~\ref{def:self_comp_ds}. Then we have $$\llbracket D,(r,s)\rrbracket=(n,g_0+1;(c_{1},n_{1}),\dots,\widehat{(c_{r},n_{r})}, \ldots, \widehat{(c_{s},n_{s})}, \dots, (c_{\ell},n_{\ell})),$$ where $g(\llbracket D,(r,s)\rrbracket) = g(D)+n/n_r.$ \end{lemma} We conclude this section with the following theorem that provides a realization of arbitrary Type 2 actions on $S_g$. \begin{theorem}\label{thm:arb-real} For $g \geq 2$, a Type 2 action on $S_g$ can be constructed from finitely many compatibilities of the following types between spherical Type 1 actions: \begin{enumerate}[(i)] \item $\llbracket D, (r,s)\rrbracket$, \item $\llbracket D,g' \rrbracket$, $\overline{\llbracket D,g' \rrbracket}$, \item $\operatorname{{\llparenthesis}} D_1,D_2, (r,s)\operatorname{{\rrparenthesis}}$, and \item $\operatorname{{\llparenthesis}} D_1,D_2 \operatorname{{\rrparenthesis}}$. \end{enumerate} \end{theorem} \section{Decomposing cyclic actions into irreducibles} \label{sec:cyclic_irreducibles} In this section, we generalize Theorem~\ref{thm:arb-real} to obtain a topological description of the decomposition of an arbitrary cyclic action into irreducible components.
We show that this decomposition can be visualized as a ``necklace with beads'', where the beads are the irreducible components, and the strings that connect a pair of beads symbolize the compatibility between them. We will now present an example that captures this idea. \begin{example} \label{eg: irr_type2_necklace} Consider the spherical Type 1 actions $D_1=(42,0;(2,21), \linebreak(19,42),(19,42))$, $D_2=(42,0;(5,6),(13,21),(23,42))$, $D_3 = (42,0;(1,14),\linebreak(8,21),(23,42))$, $D_4=(42,0;(1,6),(11,21),(13,42))$, $D_5=(42,0;(13,14), \linebreak(10,21),(25,42))$, and $D_6 = (42,0;(19,21),(17,42),(29,42))$. The compatibilities $\operatorname{{\llparenthesis}} D_1, D_2, (3,3)\operatorname{{\rrparenthesis}}$, $\operatorname{{\llparenthesis}} D_2, D_3, (2,2) \operatorname{{\rrparenthesis}}$, $\operatorname{{\llparenthesis}} D_3,D_4 \operatorname{{\rrparenthesis}}$, $\operatorname{{\llparenthesis}} D_4, D_5, (2,2) \operatorname{{\rrparenthesis}}$, and $\operatorname{{\llparenthesis}} D_5, D_6, (3,2) \operatorname{{\rrparenthesis}}$, together realize the action \\ $D' = (42,0;(2,21),(19,42),(5,6),(23,42),(1,14),(1,6),(13,42),(13,14),\linebreak(19,21),(29,42))$ on $S_{155}$. \\ \noindent A visual interpretation of this realization is shown in Figure~\ref{fig:linear_chain} below, where the numbers of lines connecting $D_i$ to $D_j$ are the sizes of the corresponding compatible orbits. (Note that the number 42 refers to the number of lines connecting $D_3$ to $D_4$.) \begin{figure} \caption{A visualization of the action $D'$.} \label{fig:linear_chain} \end{figure} \noindent \end{example} \begin{remark} While we realize new actions from successive compatibilities across actions (represented by data sets), for simplicity, we assume from here on that the original indexing of the pairs (that correspond to cone points) in the data sets remains unaltered during the entire process. \end{remark} \noindent Fixing the notation $\operatorname{{\llparenthesis}} D_1, D_2, (0,0) \operatorname{{\rrparenthesis}} := \operatorname{{\llparenthesis}} D_1,D_2 \operatorname{{\rrparenthesis}}$, we formalize this idea in the following definition. \begin{definition} \label{def:linear_chain} For $1 \leq i \leq k$, let $D_i$ be a cyclic action of order $n$ on $S_{g_i}$. Then the $D_i$ are said to form a \textit{linear $k$-chain $T=(D_1,\ldots,D_k)$} if the following conditions hold. \begin{enumerate}[(i)] \item Each $D_i$ is a spherical Type 1 action.
\item For $1 \leq i \leq k-1$, there exist non-negative integers $r_i$ and $s_i$ such that the tuples given by $$D_1' = \operatorname{{\llparenthesis}} D_1, D_2, (r_1,s_1) \operatorname{{\rrparenthesis}} \text{ and } D_j' = \operatorname{{\llparenthesis}} D_{j-1}', D_{j+1}, (r_j,s_j) \operatorname{{\rrparenthesis}}, \text{ for } 2 \leq j \leq k-1,$$ are well-defined data sets. \end{enumerate} \noindent If in addition to conditions (i) and (ii), there exist non-negative integers $\tilde{r}_k$ and $\tilde{s}_k$ such that the tuple $D_k' = \operatorname{{\llparenthesis}} D_{k-1}',D_1', (\tilde{r}_k,\tilde{s}_k) \operatorname{{\rrparenthesis}}$ is also a well-defined data set, then $T$ is said to be a \textit{closed $k$-chain}. \end{definition} \noindent Given a $k$-chain $T$ as above, we fix the following notation. \begin{enumerate}[(a)] \item $T_{i,j}:=(D_i,D_{i+1},\ldots,D_j)$, if $T$ is not closed. \item $\mathfrak{C}(T) := \begin{cases} ((r_1,s_1),\ldots,(r_{k-1},s_{k-1}),(\tilde{r}_k,\tilde{s}_k)), & \text{if } T \text{ is closed, and} \\ ((r_1,s_1),\ldots,(r_{k-1},s_{k-1})), & \text{otherwise.} \end{cases}$ \item $D_T := \begin{cases} D_k', & \text{if } T \text{ is closed, and}\\ D_{k-1}', & \text{otherwise.} \end{cases}$ \item $A_T := \begin{cases} \sum_{i=1}^k A(D_i'), & \text{if } T \text{ is closed, and}\\ \sum_{i=1}^{k-1} A(D_i'), & \text{otherwise.} \end{cases}$ \item $f(T) := |\{j:(r_j,s_j) = (0,0)\}|$. \end{enumerate} \noindent It is implicit in Definition~\ref{def:linear_chain} that for $1 \leq i < j \leq k$, the tuple $T_{i,j}$ forms a linear $(j-i+1)$-chain. \begin{example}\label{eg:linear} In Example~\ref{eg: irr_type2_necklace} above, $(D_i,D_{i+1}, \ldots, D_j)$, for $1 \leq i < j \leq 6$, form linear chains. In particular, for the linear chain $T = (D_1,\ldots,D_6) $, we have $\mathfrak{C}(T) = ((3,3),(2,2),(0,0),(2,2),(3,2))$. \end{example} \begin{example} \label{eg:generic_necklace} In Example~\ref{eg: irr_type2_necklace}, we can simultaneously add the self compatibilities $\llbracket D', (1,9)\rrbracket$, $\llbracket D', (2,4) \rrbracket$, $\llbracket D', (5,8) \rrbracket$, and $\llbracket D', (7,10) \rrbracket$ to realize the $C_{42}$-action on $S_{162}$ given by $D''=(42,4; (5,6), (1,6)).$ An illustration of this realization is given in Figure~\ref{fig:irr_Type2_necklace} below. \begin{figure} \caption{A visualization of the action $D''$.} \label{fig:irr_Type2_necklace} \end{figure} \noindent Furthermore, we delete a $3$-permutation component to obtain a realization of the action $D=(42,1; (5,6), (1,6))$ on $S_{36}$.
\end{example} \noindent This leads us to the following definition, for which we fix the notation $\llbracket D, 0 \rrbracket := D$ and $\overline{\llbracket D, 0 \rrbracket} := D.$ \begin{definition} \label{def: compatibility-gen} Let $T'=(D_1,\ldots,D_k)$ be a linear $k$-chain as in Definition~\ref{def:linear_chain}. Then for integers $1 \leq x_i < y_i \leq k$, $g' \geq 0$, and $0 \leq g'' \leq g'+m$, the tuple $$\mathcal{N} := (T';((x_1,y_1),\ldots,(x_m,y_m)); (g',g''))$$ is said to form a \textit{necklace with $k$ beads} if it satisfies the following conditions. \begin{enumerate}[(i)] \item If $m>1$, then for $1 \leq j \leq m$, there exist pairs of non-negative integers $(r_j',s_j')$ such that the tuples $D_T^j$ given by $$D_T^1 := \llbracket D_{T'}, (r_1',s_1') \rrbracket \text{ and } D_T^{j} := \llbracket D_T^{j-1}, (r_j',s_j') \rrbracket, \text{ for }2 \leq j \leq m,$$ are well-defined data sets. \item For $1 \leq j \leq m$, the tuples $T_{x_j,y_j} = (D_{x_j},D_{x_j+1},\ldots,D_{y_j})$ form closed chains satisfying $A_{T_{x_j,y_j}} = A(D_T^j)$. \item The tuples $D_{\mathcal{N}}':=\llbracket D_T^{m-1},g' \rrbracket$ and $D_{\mathcal{N}} :=\overline{\llbracket D_{\mathcal{N}}', g''\rrbracket}$ are both well-defined data sets. \end{enumerate} \end{definition} \noindent Given a necklace $\mathcal{N}$ as in Definition~\ref{def: compatibility-gen}, we define $$T_{\mathcal{N}}:= T' \text{ and } \mathfrak{C}(\mathcal{N}) := (\mathfrak{C}(T_{\mathcal{N}}); ((r_1',s_1'),\ldots,(r_m',s_m'))).$$ \noindent It follows from the definition that if we replace the pair $(g',g'')$ with $(g'+p,g''+p)$, where $p$ is a natural number, then the necklace remains unchanged. So, in the case when $g' = g''$, we simply omit the pair $(g',g'')$. Moreover, we allow $m = 0$ in a necklace $\mathcal{N}$, in which case we simply write $\mathcal{N} := (T_{\mathcal{N}}; (g',g'')).$ \begin{example} \label{eg:specific_necklace} Going back to Example~\ref{eg:generic_necklace}, we see that the action $D$ is realized as a necklace with $6$ beads $$\mathcal{N} = ((D_1,\ldots,D_6);((1,3),(1,6),(3,5),(4,6));(0,3)),$$ where $$\mathfrak{C}(\mathcal{N}) = (((3,3),(2,2),(0,0),(2,2),(3,2));((3,3),(1,1),(1,1),(3,3))).$$ It is interesting to note that the subnecklaces $((D_1,D_2,D_3);((1,3));(0,1))$ and $((D_4,D_5,D_6);((4,6));(0,1))$ realize spherical Type 2 actions. \end{example} \noindent We will now show that an arbitrary cyclic action can be realized as a necklace, as described in Definition~\ref{def: compatibility-gen}. \begin{theorem} \label{thm:arb_cyc_real} Given an arbitrary cyclic action $D$ on $S_g$ with $n(D) \geq 3$, there exists a necklace $\mathcal{N}$ with $k$ beads, for some $k \geq 0$, such that $D_{\mathcal{N}} = D$.
\end{theorem} \begin{proof} We first consider the case when $D$ is a rotational action. If $D$ is a free rotation of $S_g$ by $2\pi r/n$, then it is of the form $D = (n,\frac{g-1}{n}+1,r;)$, which is realized by the necklace $\mathcal{N} = ((D_1,D_1^{-1}); ((1,2),(1,2));)$, where $$D_1 = \begin{cases} (n,0;(r,n),(r,n),(r(n-2),n)), & \text{if } n \text{ is odd, and} \\ (n,0;(r,n),(r,n),(r(\frac{n-2}{2}),\frac{n}{2})), & \text{if } n \text{ is even,} \end{cases}$$ \noindent $D_1^{-1}$ denotes the inverse of the action $D_1$, and $\mathfrak{C}(T_{\mathcal{N}}) = ((1,1),(2,2),(3,3))$. Along similar lines, when $D$ is a non-free rotation of $S_g$ by $2\pi r/n$, it is of the form $(n,\frac{g}{n};(r,n),(n-r,n))$, and so $D = D_{\mathcal{N}}$, where $\mathcal{N} = ((D_1,D_1^{-1}); ((1,2));)$ with $D_1$ taken as above and $\mathfrak{C}(T_{\mathcal{N}}) = ((1,1),(2,2))$. Now let $D$ be an arbitrary non-rotational action (with $n(D)\geq 3$). By an inductive application of Theorem~\ref{thm:arb-real}, we can decompose $D$ into finitely many irreducible spherical components $D_1,\ldots,D_k$ with finitely many compatibilities between them of types $\llbracket D', (r,s)\rrbracket$, $\llbracket D',g' \rrbracket$, $\overline{\llbracket D,g' \rrbracket}$, $\llparenthesis D_1',D_2', (r,s)\rrparenthesis$, and $\llparenthesis D_1',D_2' \rrparenthesis$. By a suitable rearrangement and relabeling of the $D_i$, it can be seen that the $D_i$ form a linear $k$-chain $T' = (D_1,\ldots,D_k)$, where each compatibility between $D_i$ and $D_{i+1}$ is chosen among the compatibilities of types $\llbracket D', (r,s)\rrbracket$, $\llparenthesis D_1',D_2', (r,s)\rrparenthesis$, and $\llparenthesis D_1',D_2' \rrparenthesis$ arising from the earlier decomposition. (This is possible because any decomposition of $D$ into irreducible Type 1 actions by virtue of Theorem~\ref{thm:arb-real} would yield a maximal reduction system whose maximality would be contradicted by the absence of such a rearrangement.) Any remaining compatibilities (of types $\llbracket D', (r,s)\rrbracket$, $\llparenthesis D_1',D_2', (r,s)\rrparenthesis$, and $\llparenthesis D_1',D_2' \rrparenthesis$) in the decomposition of $D$ can now be viewed as a series of self-compatibilities on the chain $T'$ between specific pairs of its components $D_i$. Let the collection of all such pairs of indices (corresponding to pairs of the $D_i$ involved in self-compatibilities) be $(x_1,y_1), \ldots,(x_m,y_m)$. Finally, any compatibilities that remain in the original decomposition will be of types $\llbracket D',g' \rrbracket$ and $\overline{\llbracket D,g' \rrbracket}$.
Thus, assuming that there are $g'$ compatibilities of type $\llbracket D', 1 \rrbracket$ and $g''$ of type $\overline{\llbracket D, 1 \rrbracket}$, we conclude that $D = D_{\mathcal{N}}$, where $$\mathcal{N} := (T';((x_1,y_1),\ldots,(x_m,y_m)); (g',g'')).$$ \end{proof} \begin{remark} It is important to note that given an action $D$, there could exist two distinct necklaces $\mathcal{N}_1$ and $\mathcal{N}_2$ such that $D_{\mathcal{N}_1} = D = D_{\mathcal{N}_2}$. For example, consider the action $D=(5,1;(1,5),(2,5),(2,5))$ on $S_2$. This can be realized by the necklace $\mathcal{N}_1 = ((D');;(1,0))$, where $D' = (5,0;(1,5),(2,5),(2,5))$. Alternatively, $D_{\mathcal{N}_2} = D$ for $\mathcal{N}_2 = ((D_1,D_2,D');((1,3));),$ where $D_1 = (5,0;(1,5),(1,5),(3,5))$ and $D_2 = (5,0;(2,5),(4,5),(4,5))$. \end{remark} \section{Structures realizing compatibilities} \label{sec:str_real_comps} In this section, we classify the structures that realize the individual components and compatibilities that constitute a necklace, as described in Definition~\ref{def: compatibility-gen}. We begin by describing the structures that realize spherical Type 1 actions, which form the beads of the necklace. \subsection{Spherical Type 1 actions} In this subsection, we show that the structure $\mathcal{P}_D$ (described in Theorem~\ref{res:1}) that realizes a Type 1 action $D$ is unique. \begin{proposition}\label{thm:irr_type1} If $D$ is a spherical Type 1 action, then $\text{Fix}(\langle D \rangle) = \{\mathcal{P}_D\}$. \end{proposition} \begin{proof} First consider the case when $n_i=2$ for some $i$. Then $D$ can be realized as a rotation of the regular hyperbolic $n$-gon $\mathcal{P}_D$ (as in Theorem~\ref{res:1}), with all interior angles equal to $2\pi/n_2$. It follows from basic hyperbolic trigonometry that such a hyperbolic polygon is unique, which proves the result in this case. When $n_1,n_2\neq 2$, $\mathcal{P}_D$ is a semi-regular hyperbolic $2n$-gon with side length $\ell$, and alternate interior angles of measure $2\pi/n_1$ and $2\pi/n_2$, respectively. Let $\{P_0,\dots,P_{2n-1}\}$ be the vertices of $\mathcal{P}_D$, and let $O$ denote the fixed point at the center, as shown in Figure~\ref{fig:s14_fixpt} below. \begin{figure} \caption{The polygon $\mathcal{P}_D$.} \label{fig:s14_fixpt} \end{figure} As the rotation of $\mathcal{P}_D$ by $\theta_D$ is an isometry, it follows that $|OP_i| = |OP_{i+2}|$, for all $i$. Hence, the hyperbolic $SSS$ congruence implies that the triangles $P_iOP_{i+1}$ are mutually congruent, with $\angle P_iOP_{i+1}=\pi/n$, $\angle OP_i P_{i+1}=\pi/n_1$, and $\angle OP_{i+1} P_i=\pi/n_2$. Thus, $\mathcal{P}_D$ is uniquely determined, and the assertion follows. \end{proof} \begin{remark} \label{rem:fixD} Let $D$ be a reducible action on $S_g$, and let $\mathcal{C}$ be a maximal reduction system for $D$. By extending $\mathcal{C}$ to a pants decomposition $P$ of $S_g$, we see that $\dim (\mathrm{Fix} ( \langle D \rangle )) \geq 2|\mathcal{C}| > 0$.
Conversely, if $\dim(\mathrm{Fix}(\langle D \rangle)) > 0$, then we can reverse the above argument to show that $D$ is reducible. \end{remark} \noindent The following corollary is immediate from Proposition~\ref{thm:irr_type1} and Remark~\ref{rem:fixD}. \begin{corollary} A spherical Type 1 action $D$ is irreducible. \end{corollary} \subsection{Compatibilities of type $\llparenthesis D_1,D_2,(r,s) \rrparenthesis$ and $\llbracket D, (r,s) \rrbracket$} Consider an irreducible Type 1 action $D$ on $S_g$, and a $D$-orbit of size $k.$ Removing $k$ mutually disjoint discs around the points in this orbit that are cyclically permuted by the action of $D$, we obtain a homeomorphic copy of $S_{g,k}$ with a homeomorphism $\hat{D}$ induced by $D$, which cyclically permutes the components of $\partial S_{g,k}$. Note that $\mathrm{Teich}(S_g)$ can be viewed as a subspace of $\mathrm{Teich}(S_{g,k})$ in the following manner. The Fenchel-Nielsen coordinates of an arbitrary structure $\xi \in \mathrm{Teich}(S_g)$ are given by $\xi= \prod_{i=1}^{3g-3} (\ell_i, \theta_i),$ where the pair $(\ell_i, \theta_i)$ denotes the length and twist parameters contributed by the $i$-th curve of a pants decomposition $P$ of $S_g$, for $i=1,\ldots, 3g-3$. The decomposition $P$ can always be extended to a pants decomposition $\hat{P}$ of $S_{g,k}$ in which the first $3g-3$ non-boundary curves of $\hat{P}$ belong to $P$. As there are $3g-3+k$ non-boundary curves in $\hat{P}$, an arbitrary $\hat{\xi} \in \mathrm{Teich}(S_{g,k})$ can be decomposed as {\small $$\hat{\xi}= \prod_{i=1}^{3g-3+k} (\ell_i, \theta_i) \times \prod_{j=1}^k \ell_{b_j},$$} where $\ell_{b_j}$ denotes the length parameter of the $j$-th boundary component (for $j=1,\ldots ,k$) of $S_{g,k}.$ In light of the above decomposition of $\hat{\xi}$, two natural questions arise: ``Does there exist an endomorphism $\hat{D}_{\#}\colon \mathrm{Teich}(S_{g,k}) \to \mathrm{Teich}(S_{g,k})$ such that $\hat{D}_{\#}|_{\mathrm{Teich}(S_g)}=D_{\#}$? Moreover, is $\hat{D}_{\#}|_{\mathrm{Teich}(S_{g,k})\setminus \mathrm{Teich}(S_g)}$ a permutation of the coordinates?'' We will show shortly that these questions do not always have positive answers. Consider the decomposition $\mathrm{Teich}(S_{g,k})\approx \mathcal{T}_{NB} \times \mathbb{R}_{+}^k,$ where $$\mathcal{T}_{NB} = \Big\{\prod_{i=1}^{3g-3+k} (\ell_i, \theta_i)\Big\} \text{ (NB refers to ``non-boundary'') and } \mathbb{R}_{+}^k \approx \Big\{\prod_{j=1}^k \ell_{b_j}\Big\}.$$ The action of $D$ implies that $\hat{D}_{\#}$, if it exists, should preserve the above decomposition of $\mathrm{Teich}(S_{g,k})$, and furthermore, $\hat{D}_{\#}\left(\prod_{j=1}^k \ell_{b_j}\right)= \prod_{j=1}^k \ell_{b_{\sigma_k(j)}}$, where $\sigma_k=(12\ldots k).$ The following result shows that $\hat{D}_{\#}$ is completely determined by $D_{\#}$ if, and only if, $k$ is a proper divisor of $n$. \begin{proposition}\label{Thm:induced_action} Let $D$ be a spherical Type 1 action on $S_g$ of order $n$ with a $D$-orbit of size $k$.
Then $D_{\#}$ never extends to an endomorphism of $\mathrm{Teich}(S_{g,k})$ that induces an order-$n$ permutation of the coordinates of $\mathrm{Teich}(S_{g,k})\setminus \mathrm{Teich}(S_g)$. In particular, the extended action $\hat{D}_{\#}$ is completely determined by $D_{\#}$ if, and only if, $k$ is a proper divisor of $n$. \end{proposition} \begin{proof} As $D$ is a spherical Type 1 action, we may assume that there exists a pants decomposition $P$ of $S_g$ with $s$ separating curves $\alpha_1,\ldots,\alpha_{s}$ and $r$ non-separating curves $\beta_1,\ldots, \beta_{r}$ such that for each $1 \leq i \leq s-1$, there exist $1 \leq j \leq s$ ($j \neq i$) and $1 <M_{ij} <n$ with $D^{M_{ij}}(\alpha_{i})= \alpha_{j}$. Similarly, for each $1 \leq i \leq r-1$, there exist $1 \leq j \leq r$ ($j \neq i$) and $1 <N_{ij} <n$ such that $D^{N_{ij}}(\beta_{i})= \beta_{j}$. Without loss of generality, we may assume that $D^{N_{1,r}}(\beta_1) = \beta_r$. In order that $D_{\#}$ extends to an endomorphism of $\mathrm{Teich}(S_{g,k})$, $P$ should extend to a pants decomposition $\hat{P}$ of $S_{g,k}$ as in the discussion above, with $k$ new non-boundary curves $\gamma_1,\ldots,\gamma_k$ and $k$ boundary curves $\gamma_1',\ldots,\gamma_k'$ such that $\hat{D}(\gamma_i')= \gamma_{i+1}'$, for each $i.$ We may assume that $\gamma_1$ is a nonseparating curve isotopic to $\beta_r$ in $S_g$, and thus $\hat{D}^M(\gamma_1)=\beta_{1}$ (since $D^M(\gamma_1)=\beta_{1}$), and the isotopy class of $\beta_1$ remains unaltered in $S_{g,k}$, as illustrated in Figure~\ref{fig:c14_action_on_s3} below. In the case when $k = n$, it is apparent that the class $\sum_{i} \gamma_i' \in H_1(S_{g,k})$ (indicated by the dotted curve in the polygon, and by the curve $\gamma_2$ in the bounded surface, in Figure~\ref{fig:c14_action_on_s3} below) is left invariant by the action of $D$. \begin{figure} \caption{Extension of a pants decomposition of $S_g$.} \label{fig:c14_action_on_s3} \end{figure} \noindent Hence, $D$ has to induce an order-$n$ rotation of the component $S'$ of $\overline{S_g \setminus \gamma_2}$ homeomorphic to $S_{0,k+1}$, which cyclically permutes its $k$ boundary components $\gamma_i'$ and fixes the $(k+1)$-st boundary component, namely, $\gamma_2$. This obviates the possibility of such an extension in this case, as $D\vert_{S'}$ can never induce an order-$k$ permutation of the $\gamma_i$. Furthermore, it is clear from the structure $\mathcal{P}_D$ that when $k$ is a proper divisor of $n$, $\gamma_2$ cannot be left invariant by the action of $D$. Consequently, the action of $\hat{D}$ on the $\gamma_i$ is completely determined by the action of $D$ on $P$, and hence the result follows. \end{proof} \begin{remark}\label{rem:inj_radius} Let $(X,\xi)$ be a closed hyperbolic surface with an isometry $D$ of finite order. Let $B_p(r)$ denote the closed disc of radius $r$ centered at any point $p \in X.$ If $D(p)=p$ and $D(B_p(r)) = B_p(r)$ such that $D|_{B_p(r)}$ is a rotation about $p$, then $r < \mathrm{inj}_p(X,\xi)$, where $\mathrm{inj}_p(X,\xi)$ denotes the injectivity radius of $(X,\xi)$ at $p$.
This is immediate from the fact that the derivative of $D$ at $p$ is a rotation about the origin in $T_pX$, and the exponential map restricted to a normal ball is a radial isometry. \end{remark} \noindent The following result describes the structures that realize compatibilities of type $\llparenthesis D_1,D_2,(r,s) \rrparenthesis$. \begin{corollary}\label{cor:comp_pair} Let $D= \llparenthesis D_1,D_2,(r,s) \rrparenthesis$, where the $D_i$ are spherical Type 1 actions. \begin{enumerate}[(i)] \item If $(r,s) \neq (0,0)$, then $$\mathrm{Fix}(\langle D \rangle) \approx \{[\mathcal{P}_{D_1}]\} \times \{[\mathcal{P}_{D_2}]\} \times (0,\ell(D)) \times \mathbb{R},$$ where $\ell(D)$ is a positive constant determined by $D$. \item If $(r,s) = (0,0)$, then $$\mathrm{Fix}(\langle D \rangle)\approx \{[\mathcal{P}_{D_1}]\} \times \{[\mathcal{P}_{D_2}]\} \times \prod_{j=1}^3 \left( (0,\ell_j(D))\times \mathbb{R}\right),$$ where for each $j$, $\ell_j(D)$ is a positive constant determined by $D$. \end{enumerate} \end{corollary} \begin{proof} We will only prove (i), as (ii) follows from a similar argument. By Proposition~\ref{Thm:induced_action}, it is apparent that the action induced by the $D_i$ on $S_{g_i,k}$ is completely determined by the action of $D_i$ on $S_{g_i}$. So any structure that realizes $\llparenthesis D_1,D_2,(r,s) \rrparenthesis$ as an isometry is uniquely determined by the structures $\mathcal{P}_{D_i}$, together with one additional length parameter and one twist parameter contributed by the isometric boundary components (cyclically permuted by the $D_i$) of $S_{g_i,k}$. Let $\ell$ denote the length of each boundary component of $S_{g_i,k}$. It remains to show that $\ell < \ell(D)$, where $\ell(D)$ is a positive constant determined by $D$. To see this, consider the unique hyperbolic surface $(X_i,\xi_{ih})$ (for $i=1,2$) realizing $D_i$ as an isometry. For each $i$, let $\{p_{ij}\}_{1\leq j\leq k} \subset X_i$ be the points in a distinguished compatible $D_i$-orbit of size $k$. Let $B_{ij}(r_i):=B_{p_{ij}}(r_i)$ denote mutually disjoint disks that are cyclically permuted under $D_i$. Since $D_i^k(B_{{ij}}(r_i))=B_{{ij}}(r_i),$ it follows from Remark~\ref{rem:inj_radius} that $r_i < \mathrm{inj}_{p_{ij}}(X_i,\xi_{ih})$ (each $p_{ij}$ has the same injectivity radius). Thus the circumference $c_{ij}$ of each $B_{{ij}}(r_i)$ satisfies $$c_{ij} =2\pi \sinh (r_i) < 2\pi \sinh (\mathrm{inj}_{p_{ij}}(X_i,\xi_{ih})) =L_i \text{ (say)}.$$ Let $L= \min(L_1, L_2)$ and $r_D=\min(\mathrm{inj}_{p_{1j}}(X_1,\xi_{1h}), \mathrm{inj}_{p_{2j}}(X_2,\xi_{2h}))$.
Removing $\{B_{ij}(r)\}_{1\leq j\leq k}$ (where $r < r_D$ and the circumference $c(r)$ of $B_{ij}(r)$ satisfies $c(r)< L$) from each $X_i$, and gluing the surfaces $\overline{X_i \setminus \cup_{j} B_{p_{ij}}(r)}$ along their boundary components, we obtain a diffeomorphic copy $X$ of $S_{g_1+g_2+k-1}$ with a $C_n$-action $D$ and a reduction system $\mathcal{C}$ consisting of $k$ nonseparating curves. Moreover, $X$ admits a canonical Riemannian metric $\xi$ realizing $D$ as an isometry, with each curve of $\mathcal{C}$ having length $c(r)$. By the uniformization theorem, there is a unique hyperbolic metric $\xi_h = e^{f}\xi$ on $X$, also realizing $D$ as an isometry, where $f=f(\xi_1, \xi_2)$ is a smooth real-valued function on $X$. The result (i) now follows from the observation that under $\xi_h$, each curve of $\mathcal{C}$ has length $\ell_h= \ell_h(c(r),f) < \ell(D)$, where $\ell(D)= \ell(L, f)$ is a unique constant (as $L$ and $f$ are uniquely determined by $D$). \end{proof} \noindent Considering the similarities between the compatibilities $\llbracket D', (r,s) \rrbracket$ and \linebreak $\llparenthesis D_1,D_2, (r,s) \rrparenthesis$, it is quite evident that the structures that realize $\llbracket D', (r,s) \rrbracket$ arise analogously, and so we have the following. \begin{corollary} \label{cor: self_comp_pair} Let $D = \llbracket D', (r,s) \rrbracket$ be an action of order $n$ on $S_g$. Then, $$\mathrm{Fix}(\langle D \rangle) \approx \mathrm{Fix} (\langle D' \rangle) \times (0,\ell(D')) \times \mathbb{R},$$ where $\ell(D')$ is a positive constant determined by $D'$. \end{corollary} \subsection{Compatibilities of type $\llbracket D, g_0\rrbracket$ and $\overline{\llbracket D, g_0\rrbracket}$} Let $D$ be an action of order $n$ on $S_g$. As we saw earlier, an action of type $\llbracket D, g_0\rrbracket$ is realized by pasting a permutation component (that cyclically permutes $n$ isometric copies of $S_{g_0,1}$) to the action $D$; moreover, the action $\llbracket D, g_0\rrbracket$ can also be realized iteratively from $g_0$ compatibilities of type $\llbracket D,1 \rrbracket$. Besides, the arguments in Proposition~\ref{Thm:induced_action} imply that each copy of $S_{1,1}$ (that is attached in a $\llbracket D,1 \rrbracket$-type construction) contributes $2$ additional length parameters and $1$ twist parameter. Furthermore, following the arguments in Corollary~\ref{cor:comp_pair}, we can show that one of the length parameters (contributed by $\partial (S_{1,1})$) is bounded by a positive constant that is determined uniquely by the action to which the permutation component is pasted.
Hence, when the compatibility $\llbracket D, g_0 \rrbracket$ is completed, a total of $3g_0-1$ pairs of length and twist parameters have been added to $\mathrm{Fix}(\langle D \rangle)$, and so we have the following result. \begin{corollary} \label{cor:perm_comp} Let $D$ be a cyclic action of order $n$ on $S_g$. Suppose that the actions $\llbracket D, g_0 \rrbracket$ and $\overline{ \llbracket D, g_1 \rrbracket}$ are well-defined, for some $g_0,g_1 \geq 1$. Then \begin{enumerate}[(i)] \item $\displaystyle \mathrm{Fix}(\langle \llbracket D, g_0 \rrbracket \rangle) \approx \mathrm{Fix}(\langle D \rangle ) \times \prod_{i=1}^{g_0}((0, \ell_i^0(D)) \times \mathbb{R}) \times \prod_{i=1}^{2g_0 - 1} (\mathbb{R}_+ \times \mathbb{R}),$ where each $\ell_i^0(D)$ is a positive constant determined by the action $\llbracket D, g_0 \rrbracket$. \item $\displaystyle \mathrm{Fix}(\langle \overline{\llbracket D , g_1 \rrbracket} \rangle) \approx \mathrm{Fix} (\langle D \rangle ) \Big/\left(\prod_{i=1}^{g_1}((0, \ell_i^1(D)) \times \mathbb{R}) \times \prod_{i=1}^{2g_1 - 1} (\mathbb{R}_+ \times \mathbb{R})\right),$ where each $\ell_i^1(D)$ is a positive constant determined by the action $\overline{\llbracket D, g_1 \rrbracket}$. \end{enumerate} \end{corollary} \subsection{Structures that realize arbitrary actions} \label{sec:main} In this subsection, we piece together the structures detailed in Section~\ref{sec:str_real_comps} (that realize the various kinds of compatibilities) to describe the structures that realize arbitrary cyclic actions of order $3$ and above. We recall that for such a cyclic action $D$, there exists a necklace \[\tag{*} \mathcal{N} = ((D_1,\ldots,D_k); ((x_1,y_1), \ldots, (x_m,y_m)); (g',g''))\] as in Definition~\ref{def: compatibility-gen}, such that $D_{\mathcal{N}} = D$ (see Theorem~\ref{thm:arb_cyc_real}). Putting together the results in Proposition~\ref{Thm:induced_action} and Corollaries~\ref{cor:comp_pair}--\ref{cor:perm_comp}, we obtain an explicit decomposition of $\mathrm{Fix}(\langle D\rangle)$ as a product of two-dimensional strips, some of which have bounded width. \begin{theorem}\label{thm:main} Let $D$ be a cyclic action of order $n \geq 3$ on $S_g$, and let $\mathcal{N}$ be a necklace as in $(*)$ such that $D_{\mathcal{N}} = D$.
Then $\mathrm{Fix}(\langle D \rangle) \approx M_1/M_2$, where $$ M_1=\prod_{i=1}^{k}\{\mathcal{P}_{D_i}\} \times \prod_{i=1}^{g'+k+2f(T_{\mathcal{N}})+m-2}((0, \ell_i'(D)) \times \mathbb{R}) \times \prod_{i=1}^{2g' - 1} (\mathbb{R}_+ \times \mathbb{R})$$ and $$M_2= \prod_{i=1}^{g''}((0, \ell_i''(D)) \times \mathbb{R}) \times \prod_{i=1}^{2g'' - 1} (\mathbb{R}_+ \times \mathbb{R}),$$ where the $\ell_i'(D)$ and $\ell_i''(D)$ are positive constants determined by $D$. Consequently, $$\dim(\mathrm{Fix}(\langle D \rangle)) = 6(g'-g'')+2k+4f(T_{\mathcal{N}})+2m-2.$$ \end{theorem} \noindent Since the manifold $M_2$ (in Theorem~\ref{thm:main}) is determined by the deletion of permutation components in a given realization of $D$, it is crucial to note that $M_2$ is in general nontrivial even when $D$ is irreducible. In fact, it was shown in \cite{PRS} that the deletion of at least one permutation component is required to realize an irreducible Type 2 action. \begin{example} For the necklace structure realizing the action $D$ in Example~\ref{eg:generic_necklace}, we see that $k=6$, $m=4$, $f(T_{\mathcal{N}}) =1$, and $(g',g'') = (0,3)$. Consequently, applying Theorem~\ref{thm:main}, we have $\text{Fix}(\langle D \rangle) \approx M_1/M_2$, where $$M_1 = \left( \prod_{i=1}^{10} (0, \ell'_i(D)) \right)\times \mathbb{R}^{10} \text{ and } M_2 = \left( \prod_{i=1}^3 (0, \ell''_i(D)) \times \mathbb{R} \right) \times \mathbb{R}_+^5 \times \mathbb{R}^5,$$ and so we have $\dim(\text{Fix}(\langle D \rangle)) = 20-16=4$. \end{example} \section{Applications to the geometry of $\mathrm{Fix}(H)$} \label{sec:injrad-Fix} Let $H= \langle D_1, \ldots, D_s \rangle < \mathrm{Mod}(S_g)$ be a finite subgroup with $n(D_i)\geq 3$ for at least one $i$, $1\leq i \leq s$. In this section, we discuss a few applications of Theorem~\ref{thm:main} that help us understand some finer geometric properties of $\mathrm{Fix}(H)$ as a submanifold of $\mathrm{Teich}(S_g)$. \subsection{An upper bound for the injectivity radii of structures in $\mathrm{Fix}(H)$} In this subsection, we begin by deriving an upper bound for the systole (defined below) of the hyperbolic structures in $\mathrm{Fix}(H)$, when $H = \langle D\rangle$ represents a cyclic action on $S_g$ of order at least $3$, in terms of the injectivity radii of the irreducible components of $\langle D \rangle$. This, in turn, yields a bound on the injectivity radii of structures in $\mathrm{Fix}(\langle D \rangle)$, and consequently a bound on the injectivity radii of structures in $\mathrm{Fix}(H)$ for any finite subgroup $H$ of $\mathrm{Mod}(S_g)$. \noindent Let $\mathrm{inj}(X)$ denote the injectivity radius of a Riemannian manifold $X$.
The \textit{systole} (also known as the \textit{systole length}) of a compact hyperbolic surface $(X,\xi)$, denoted by $\mathrm{sys}(X)$, refers to the length of a shortest closed geodesic of $X$. In particular, when $X$ is closed, $\mathrm{sys}(X)=2\,\mathrm{inj}(X)$, where $\mathrm{inj}(X)$ denotes the injectivity radius of $(X,\xi)$. First, we provide estimates for the injectivity radii of the structures of type $\mathcal{P}_D$ realizing spherical Type 1 actions. For this, we will use the following lemma, which is a consequence of basic hyperbolic trigonometry. \begin{lemma}\label{lem:alt} Let $ABC$ be a hyperbolic triangle and let $D$ denote the foot of the perpendicular from the vertex $A$ to the side $BC.$ Then the length of the altitude $AD$ is given by $$ \sinh^{-1}{\left( \frac{\sqrt{\cos^2{A}+\cos^2{B}+\cos^2{C}-1+2\cos{A}\cos{B}\cos{C}}}{\sin{A}}\right)}. $$ \end{lemma} \noindent The following proposition provides estimates for the injectivity radii at the nontrivial orbit points of a spherical Type 1 action in terms of its data set. \begin{proposition}\label{prop:inj_spherical_type1} Let $D=(n,0;(c_1,n_1),(c_2,n_2), (c_3,n_3))$, where $n_3 = n$, be a spherical Type 1 action in $\mathrm{Mod}(S_g)$ realized by a structure $\mathcal{P}_D$ as in Theorem~\ref{res:1}. Let $\mathrm{inj}_P(\mathcal{P}_D)$ be the injectivity radius afforded by the structure $\mathcal{P}_D$ at a point $P \in S_g$. \begin{enumerate}[(i)] \item Let $P_0$ be the center of $\mathcal{P}_D$. \begin{enumerate} \item If $n_1,n_2 \neq 2$, then $$\mathrm{inj}_{P_0}(\mathcal{P}_D) = \sinh^{-1}{\left( \frac{\sqrt{\displaystyle\sum_{i=1}^3 \cos^2{(\pi/n_i)}-1+2\prod_{i=1}^3\cos{(\pi/n_i)}}}{\sin{(\pi/n)}}\right)}.$$ \item If $n_2 =2$, then $\mathrm{inj}_{P_0}(\mathcal{P}_D)$ equals $$\displaystyle \sinh^{-1}{\left( \frac{\sqrt{\cos^2{(2\pi/n)}+2\cos^2{(\pi/n_1)}-1+2 \cos{(2\pi/n)}\cos^2{(\pi/n_1)}}}{\sin{(2\pi/n)}}\right)}.$$ \end{enumerate} \item Let $P$ be any non-central point that belongs to a $\langle D \rangle$-orbit corresponding to a nontrivial cone point of $\mathcal{O}_D$. Then $\mathrm{inj}_P(\mathcal{P}_D) < \ell(\mathcal{P}_D)$, where $\ell(\mathcal{P}_D)$ denotes the length of an edge of $\mathcal{P}_D$, given by $$ \ell(\mathcal{P}_D) = \begin{cases} \cosh^{-1} {\left( \frac{\cos^2 (\pi/n_1)+\cos (2\pi/n)}{\sin^2 (\pi/n_1)} \right)}, & \text{if } k(D) = n, \text{ and} \\ \cosh^{-1} {\left( \frac{\cos (\pi/n_1)\cos (\pi/n_2)+\cos (\pi/n)}{\sin (\pi/n_1) \sin (\pi/n_2)} \right)}, & \text{if } k(D)=2n. \end{cases}$$ \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate}[(i)] \item By definition, $\mathrm{inj}_{P_0}(\mathcal{P}_D)$ is the radius of the biggest hyperbolic disc centered at $P_0$ that can be isometrically embedded in the polygon $\mathcal{P}_D$. Thus, $\mathrm{inj}_{P_0}(\mathcal{P}_D)$ turns out to be the (hyperbolic) inradius of $\mathcal{P}_D$. Here two cases arise, depending upon whether $k(D)=n$ or $k(D)=2n.$ We will argue for the case when $k(D)=2n$, as the argument for the other case is similar.
Using the geometric realization of $D$ as $\mathcal{P}_D,$ one can see that $\mathcal{P}_D$ can be divided into $2n$ congruent triangles with interior angles $\pi/n$, $\pi/n_1$, and $\pi/n_2,$ sharing a common vertex and interior angle $\pi/n$. Hence, the inradius of $\mathcal{P}_D$ is given by the altitude of any one of these $2n$ hyperbolic triangles joining the vertex with interior angle $\pi/n$ to the opposite side with interior angles $\pi/n_1$ and $\pi/n_2.$ Thus, our assertion now follows from Lemma \ref{lem:alt}. \item We break our argument into the following two cases. \begin{enumerate}[\text{Case} 1.] \item $k(D)=n$ (without loss of generality, let $n_2=2$).\\ In this case, $\mathcal{P}_D$ is a regular $n$-gon with side length $\ell(\mathcal{P}_D)$, where each vertex and the midpoint of each side belong to respective orbits of sizes $n/n_1$ and $n/2$. By Lemma \ref{lem:alt}, we have $\ell(\mathcal{P}_D)= \cosh^{-1} {\left( \frac{\cos^2 (\pi/n_1)+\cos (2\pi/n)}{\sin^2 (\pi/n_1)} \right)}.$ Consider two consecutive vertices $P_1,P_2$ and the midpoint $P_{12}$ of the side $P_1P_2$, with respective injectivity radii $r_1$ and $r'.$ From the symmetries of the polygon and the side pairings, it follows directly that the radius of the biggest normal ball centered at either of $P_1$ and $P_{12}$ must be strictly bounded above by the side length $\ell(\mathcal{P}_D)$. \item $k(D)=2n$ (i.e. $n_1,n_2\geq 3$).\\ In this case, $\mathcal{P}_D$ is a hyperbolic $2n$-gon with each side of length $\ell(\mathcal{P}_D)$. By Lemma \ref{lem:alt}, we get {\small $$\ell(\mathcal{P}_D)= \cosh^{-1} {\left( \frac{\cos (\pi/n_1)\cos (\pi/n_2)+\cos (\pi/n)}{\sin (\pi/n_1) \sin (\pi/n_2)} \right)}.$$} Let $P_1,P_2$ be two consecutive vertices, with respective injectivity radii $r_1,r_2$, belonging to respective orbits of sizes $n/n_1$ and $n/n_2$. A similar argument as in the previous case would imply that both $r_1,r_2 <\ell(\mathcal{P}_D)$. \end{enumerate} \end{enumerate} \end{proof} \begin{remark} \label{rem:inj_bound} It can be easily verified that the maximum injectivity radius of a structure of type $\mathcal{P}_D$ realizing a spherical Type 1 action $D$ must be bounded above by one of the upper bounds mentioned in Proposition \ref{prop:inj_spherical_type1}. In fact, from the symmetries of the polygon, it follows that the maximum radius of a normal ball at any point in the interior of the polygon $\mathcal{P}_D$ must be smaller than the inradius of $\mathcal{P}_D$. Similarly, for any point belonging to the interior of an edge of $\mathcal{P}_D$, the maximum radius of a normal ball must be strictly bounded above by the length of the edge. \end{remark} \noindent In view of Proposition~\ref{prop:inj_spherical_type1} and Remark~\ref{rem:inj_bound}, we obtain the following upper bound for the maximum injectivity radius of a structure of type $\mathcal{P}_D$. \begin{corollary} \label{cor:bound_inj_pD} Let $D \in \mathrm{Mod}(S_g)$ be a spherical Type 1 action realized by a structure $\mathcal{P}_D$ as in Theorem~\ref{res:1}.
Then $$M(\mathrm{inj}(\mathcal{P}_D)) \leq \mathcal{U}(\mathcal{P}_D),$$ where $$M(\mathrm{inj}(\mathcal{P}_D)):=\max_{P\in S_g} \{\mathrm{inj}_P(\mathcal{P}_D)\}$$ is the maximum injectivity radius of $S_g$ realized by the structure $\mathcal{P}_D$, and $\mathcal{U}(\mathcal{P}_D) := \max\{\mathrm{inj}_{P_0}(\mathcal{P}_D), \ell(\mathcal{P}_D)\}.$ \end{corollary} \begin{remark} \label{rem:inj_bound_rscomp} Let $D \in \mathrm{Mod}(S_g)$ be such that $D= \llparenthesis D_1,D_2,(r,s) \rrparenthesis$, where the $D_i$ are spherical Type 1 actions. Note that a hyperbolic structure that realizes $D$ is obtained by identifying the boundaries of $k$ cyclically permuted isometric discs of radius $R$ across a pair of compatible orbits of size $k$ belonging to the unique geometric realizations of $D_1$ and $D_2$, respectively. The identified boundaries become a family of $k$ closed geodesics in $S_g$ of length $\ell =2\pi \sinh R$, with $R<\min \{R_1,R_2\}$, where $R_i$ denotes the injectivity radius of each point in the corresponding orbit of $D_i$ (using Corollary \ref{cor:comp_pair}). By definition, the systole of $D$ satisfies $\mathrm{sys}(D) \leq \ell =2\pi \sinh R$. More generally, let $D \in \mathrm{Mod}(S_g)$ be such that $D= \llparenthesis D_1,D_2,(r,s) \rrparenthesis$, where one of the $D_i$ is a spherical Type 1 action. Without loss of generality, let us assume $D_1$ to be a spherical Type 1 action. By a similar argument as above, it follows that $\mathrm{sys}(D) \leq \ell =2\pi \sinh R_1$, where $R_1$ is the injectivity radius of any point in the compatible orbit of $D_1$. \end{remark} \noindent By Corollary~\ref{cor:bound_inj_pD}, Remark~\ref{rem:inj_bound_rscomp}, and the fact that $\mathrm{inj}(D)=\frac{1}{2} \mathrm{sys}(D)$, we have the following result. \begin{corollary} \label{prop:systole1} Let $D \in \mathrm{Mod}(S_g)$ be such that $D= \llparenthesis D_1,D_2,(r,s) \rrparenthesis$, where the $D_i$ are spherical Type 1 actions. Then for any $X \in \mathrm{Fix}(\langle D \rangle)$, we have $$\mathrm{inj}(X) \leq \pi \min\{\sinh(\mathcal{U}(\mathcal{P}_{D_1})), \sinh(\mathcal{U}(\mathcal{P}_{D_2}))\}.$$ \end{corollary} \begin{remark} \label{rem:inj_rad_bound} Let $D$ be a cyclic action on $S_g$ with $n(D) \geq 3$. By (the proof of) Theorem~\ref{thm:arb_cyc_real}, there exists a necklace \[\mathcal{N} = ((D_1,\ldots,D_k); ((x_1,y_1), \ldots, (x_m,y_m)); (g',g''))\] such that $D_{\mathcal{N}} = D$.
If $\mathcal{N}$ is just a linear chain (as in Definition \ref{def: compatibility-gen}), then for any hyperbolic structure $X \in \mathrm{Fix}(\langle D \rangle)$ realized by $\mathcal{N}$, an inductive application of Corollary \ref{prop:systole1} shows that $$\mathrm{sys}(X) \leq \mathcal{U}(\mathcal{N}),$$ and thus $$\mathrm{inj}(X) \leq \frac{1}{2}\mathcal{U}(\mathcal{N}),$$ where $$\mathcal{U}(\mathcal{N}):= 2\pi\min\{\sinh(\mathcal{U}(\mathcal{P}_{D_1})), \ldots, \sinh(\mathcal{U}(\mathcal{P}_{D_k}))\}.$$ Moreover, if $\mathcal{N}$ also includes self-compatibilities and the additions (and deletions) of permutation components, then Remarks~\ref{rem:inj_bound} and~\ref{rem:inj_bound_rscomp} guarantee that $\frac{1}{2}\mathcal{U}(\mathcal{N})$ continues to be an upper bound for $\mathrm{inj}(X)$. \end{remark} \begin{theorem} \label{thm:systole1} Let $D$ be a cyclic action on $S_g$ with $n(D) \geq 3$, and let $\mathcal{N}$ be a necklace as in Theorem~\ref{thm:main} such that $D_{\mathcal{N}} = D$. Then for any $X \in \mathrm{Fix}(\langle D \rangle)$ realized by $\mathcal{N}$, we have $$\mathrm{sys}(X) \leq \mathcal{U}(\mathcal{N}),$$ and consequently, $$\mathrm{inj}(X) \leq \frac{1}{2}\mathcal{U}(\mathcal{N}).$$ \end{theorem} \noindent The following example illustrates a certain optimality of the bounds obtained in Theorem~\ref{thm:systole1}. \begin{example} \label{eg:inj_bd_real1} For a prime $p \geq 5$, consider the $C_p$-action $D=(p, 0; (p-4,p),(2,p), (1,p),(1,p))$ on $S_{p-1}$. The action $D$ can be realized as $D_{\mathcal{N}}$, where $\mathcal{N} = ((D_1,D_2);;)$ with $\mathfrak{C}(\mathcal{N}) = (((1,1));)$, $$D_1=(p,0;(3,p), (p-4,p),(1,p)), \text{ and } D_2=(p,0;(p-3,p),(2,p),(1,p)).$$ It follows from Proposition \ref{prop:inj_spherical_type1} that $\mathrm{inj}_{P_1}(\mathcal{P}_{D_1})=\mathrm{inj}_{P_2}(\mathcal{P}_{D_2})=R$, where $P_1,P_2$ denote the compatible fixed points of $D_1$ and $D_2$, respectively. Since the maximum radius of any disk centered at $P_i$ is strictly bounded above by $R$, for any structure $X \in \mathrm{Fix}(\langle D \rangle)$, we have $\mathrm{inj}(X) < \pi \sinh R$ (Corollary \ref{prop:systole1}). However, the upper bound $R$ is optimal in the sense that for each $\epsilon>0,$ there exist a point $P\in S_{p-1}$ and a structure $X \in \mathrm{Fix}(\langle D \rangle)$ such that the injectivity radius $\mathrm{inj}_P(X)$ at the point $P$ is greater than $R-\epsilon.$ \end{example} Let $\mathcal{N}(D)$ be the set of all necklaces realizing an arbitrary cyclic action $D$, that is, $$\mathcal{N}(D) : = \{\mathcal{N} : D_{\mathcal{N}} = D\}.$$ Clearly, $\mathcal{N}(D)$ is finite, as $\mathrm{Fix}(\langle D \rangle)$ is finite-dimensional.
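For instance, for the action $D=(5,1;(1,5),(2,5),(2,5))$ on $S_2$ considered in the remark following the proof of Theorem~\ref{thm:arb_cyc_real}, both of the necklaces $\mathcal{N}_1$ and $\mathcal{N}_2$ described there lie in $\mathcal{N}(D)$, so $\mathcal{N}(D)$ may contain more than one element.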
We define $$\mathcal{U}(D) := \max \{\mathcal{U}(\mathcal{N}): \mathcal{N} \in \mathcal{N}(D)\}.$$ Now, let $H < \mathrm{Mod}(S_g)$ be a finite subgroup such that $H = \langle D_1, \ldots, D_s \rangle$. Then a simple argument shows that $\mathrm{Fix}(H) = \cap_{i=1}^s \mathrm{Fix}(\langle D_i \rangle)$. Define $$\mathcal{U}(H) := \min_{1 \leq i \leq s} \mathcal{U}(D_i).$$ As an immediate consequence of Theorem~\ref{thm:systole1}, we obtain the following. \begin{corollary} \label{cor:inj_global_bound} Let $H= \langle D_1, \ldots, D_s \rangle < \mathrm{Mod}(S_g)$ be a finite subgroup such that $n(D_i) \geq 3$ for some $1 \leq i \leq s$. Then for any $X \in \mathrm{Fix}(H)$, we have $$\mathrm{sys}(X) \leq \mathcal{U}(H),$$ and consequently, $$\mathrm{inj}(X) \leq \frac{1}{2}\mathcal{U}(H).$$ \end{corollary} For $g \geq 2$, the \textit{systole function} $\mathrm{sys} \colon \mathrm{Teich}(S_g) \to \mathbb{R}_{+}$ is defined by $\mathrm{sys} \colon (X,\xi) \mapsto \mathrm{sys}(X).$ Finally, as a consequence of Corollary~\ref{cor:inj_global_bound}, we obtain a global upper bound for the systole function restricted to the submanifold $\mathrm{Fix}(H)$ of $\mathrm{Teich}(S_g)$. \begin{corollary}\label{cor:systole2} Let $H < \mathrm{Mod}(S_g)$ be a finite subgroup as in Corollary \ref{cor:inj_global_bound}. Then the restriction $\mathrm{sys}\colon\mathrm{Fix}(H) \to \mathbb{R}_{+}$ of the systole function is bounded above by $\mathcal{U}(H)$. \end{corollary} \subsection{$\mathrm{Fix}(H)$ as a symplectic submanifold of $\mathrm{Teich}(S_g)$} As $(\mathrm{Teich}(S_g),\linebreak\mathfrak{g}_{WP})$ is a K\"ahler manifold, where $\mathfrak{g}_{WP}$ denotes the Weil-Petersson metric, it admits a canonical symplectic structure given by $\omega =\sum_{i=1}^{3g-3} d\ell_i \wedge d\theta_i$, where $P=(X,\xi)\in \mathrm{Teich}(S_g)$ is an arbitrary point and $(\ell_1,\theta_1,\ldots,\ell_{3g-3},\theta_{3g-3})$ are the Fenchel-Nielsen coordinates of $P$. This follows from the \textit{magic formula} due to Wolpert (\cite{FM,WPS}). Moreover, using the Fenchel-Nielsen coordinate system, it is easy to see that the symplectic manifold $(\mathrm{Teich}(S_g), \omega)$ is symplectomorphic to the Euclidean space $\mathbb{R}^{6g-6}$ with its standard symplectic structure. \noindent For any finite subgroup $H < \mathrm{Mod}(S_g)$, the fixed-point set $\mathrm{Fix}(H) \approx \mathrm{Teich}(S_g/H) \approx \mathbb{R}^{2k}$ (\cite[Theorem 2]{H1}) is a K\"ahler submanifold of $(\mathrm{Teich}(S_g),\mathfrak{g}_{WP})$, and hence also a symplectic submanifold of $(\mathrm{Teich}(S_g), \omega)$. Therefore, it is natural to ask if it is symplectomorphic to $\mathbb{R}^{2k}$ with the standard symplectic structure.
The answer to the above question is obtained as an application of Theorem~\ref{thm:main}, which provides an explicit embedding of $\mathrm{Fix}(H)$ as a K\"ahler submanifold of $(\mathrm{Teich}(S_g),\mathfrak{g}_{WP})$ when $H =\langle D \rangle$. \begin{corollary} Let $H<\mathrm{Mod}(S_g)$ be a finite subgroup as in Corollaries \ref{cor:inj_global_bound}--\ref{cor:systole2}. Then $\mathrm{Fix}(H)$ is not symplectomorphic to the standard Euclidean space of the corresponding dimension. \end{corollary} \begin{proof} Using Theorems~\ref{thm:main} and \ref{thm:systole1}, it follows that $$\mathrm{Fix}(\langle D \rangle) =\prod_{j=1}^k M_j,$$ where $M_j=(0,\ell_j) \times \mathbb{R}$ for some $j$; that is, $\mathrm{Fix}(\langle D \rangle)$ can be expressed as a product of two-dimensional strips $M_j$ via the global system of Fenchel-Nielsen coordinates, where at least one of the strips is bounded by $\ell_j >0$ and the bound is determined by the irreducible components of $D.$ Moreover, in the same coordinate system, $(\mathrm{Fix}(\langle D \rangle),\omega|_{\mathrm{Fix}(\langle D \rangle)})$ can be realized as a symplectic submanifold of bounded symplectic width (also known as the Gromov width) of the Euclidean space $\mathbb{R}^{2k}$ equipped with the standard symplectic structure (see Chapter 12 of \cite{McS} for a formal definition of the symplectic width). The desired result is a direct consequence of M. Gromov's symplectic non-squeezing theorem (see \cite{MG}), which asserts that a ball cannot be embedded into a cylinder via a symplectic map unless the radius of the ball is less than or equal to the radius of the cylinder. As a consequence, in particular, a symplectic manifold of bounded symplectic width cannot be symplectomorphic to the standard Euclidean space of the same dimension, as the latter has unbounded symplectic width. Finally, for an arbitrary $H= \langle D_1, \ldots, D_s \rangle$, the assertion follows from the fact that $\mathrm{Fix}(H) \approx \cap_{i=1}^s \mathrm{Fix}(\langle D_i \rangle)$, which would also have bounded symplectic width. \end{proof} \section{Recovering some well-known results} \label{sec:classical_results} In this section, we apply our theory to provide alternative proofs of some known results that closely connect with the central theme of this paper. We begin with the following result due to Harvey~\cite{H1,HM1}, which follows as an immediate consequence of Theorem \ref{thm:main}. \begin{corollary}\label{cor:dim-branch-loci} Let $D$ be a cyclic action of order $n$ on $S_g$ such that $\mathcal{O}_D$ has $c$ cone points.
Then $$\dim (\mathrm{Fix} ( \langle D \rangle )) =6g_0(D)+2c-6.$$ \end{corollary} \begin{proof} This follows directly from Theorem \ref{thm:main} by observing that $g_0(D_{\mathcal{N}})=g'-g''+m$ and that the number of cone points of $\mathcal{O}_{D_{\mathcal{N}}}$ is $k+2f({T_{\mathcal{N}}})-2m+2.$ \end{proof} \noindent Corollary~\ref{cor:dim-branch-loci} leads us to the following result due to Gilman~\cite{JG3} that characterizes the irreducibility of cyclic actions. \begin{corollary} \label{cor:gilman} A cyclic action $D$ on $S_g$ is irreducible if, and only if, $g_0(D) = 0$ and $\mathcal{O}_D$ is an orbifold with three cone points. \end{corollary} \begin{proof} Consider an action $D$ on $S_g$ of the form $D = (n,0;(c_1,n_1),(c_2,n_2),(c_3,n_3)),$ and let $\mathcal{N}$ be any necklace with $k$ beads such that $D_{\mathcal{N}} = D$. It follows from Corollary~\ref{cor:dim-branch-loci} that $\dim(\mathrm{Fix}(\langle D \rangle)) = 0$. Therefore, by Remark~\ref{rem:fixD}, we conclude that $D$ is irreducible. Conversely, suppose that $D$ is irreducible. Then $g_0(D) = 0$, as otherwise $D$ would have a nontrivial permutation component. By Remark~\ref{rem:fixD}, it follows that $\dim(\mathrm{Fix}(\langle D \rangle)) = 0$, and so Corollary~\ref{cor:dim-branch-loci} implies that $\mathcal{O}_D$ has exactly 3 cone points, and the assertion follows. \end{proof} \subsection{Nielsen realization theorem for cyclic groups} In this subsection, we provide a purely topological proof of the Nielsen realization theorem for cyclic groups, motivated by some of the key ideas developed in~\cite{PRS} and this paper. Though the general version of this result (for arbitrary finite subgroups of $\mathrm{Mod}(S_g)$) was proved by Kerckhoff~\cite{SK1}, Fenchel~\cite{WF2,WF1} is credited with giving the first complete proof for cyclic groups, and more generally for solvable groups. (The concluding discussion in Nielsen's original paper~\cite{JN1}, settling the cyclic case, was later shown by Zieschang~\cite{HZ1,HZ} to be partly incorrect.) Our proof of this result differs from the approaches of Nielsen and Fenchel, as we use a characterization of irreducible finite-order mapping classes through the orbits of nonseparating curves induced by their actions. A formal statement of the result is as follows: \begin{theorem}[Nielsen-Fenchel] \label{thm:NK} Let $h \in \mathrm{Mod}(S_g)$ be of order $n$. Then $h$ has a representative $\tilde{h} \in \text{Homeo}^+(S_g)$ such that $\tilde{h}^n = 1$. \end{theorem} \noindent We start by giving a characterization of irreducible Type 1 actions in terms of their curve orbits. We will use $i(a,b)$ (resp. $\hat{i}(a,b)$) to denote the geometric (resp. algebraic) intersection number of two simple closed curves $a$ and $b$ in $S_g$. \begin{proposition}\label{prop:irr-type1_NK} Let $h \in \mathrm{Mod}(S_g)$ be an irreducible mapping class of order $n$. Then the following statements are equivalent. \begin{enumerate}[(i)] \item There exists an (oriented) nonseparating curve $c$ in $S_g$ with $i(c,h(c)) = \hat{i}(c,h(c))=1$ such that the $\langle h \rangle$-orbit of $c$ is of size $n$.
\item There exists a representative $\tilde{h} \in \text{Homeo}^+(S_g)$ of $h$ that is realized as an order-$n$ rotation of a polygon of type $\mathcal{P}_D$ by $\theta_D$, as in Proposition~\ref{thm:irr_type1}. Equivalently, $\tilde{h}$ generates a spherical Type 1 action on $S_g$. \end{enumerate} \end{proposition} \begin{proof} Suppose that (i) holds. Since $i(c,h(c))=1$, we have $i(h^i(c),h^{i+1}(c)) =1$, for $0 \leq i \leq n-1$, and so it follows that the isotopy class of the curve $\gamma := c*h(c)*\ldots *h^{n-1}(c)$ is represented by a simple closed curve. Moreover, this representation of $\gamma$, together with the fact that $i(h^i(c),h^{i+1}(c)) =1$, implies that $h$ preserves the isotopy class of $\gamma$. So, we may assume up to isotopy that $h(\gamma) = \gamma$. As $h$ is irreducible, it follows that $\gamma$ is contractible, and thus $\gamma$ bounds a disk $\mathcal{P}$ in $S_g$. Furthermore, since $\hat{i}(h^i(c),h^{i+1}(c)) =1$, the orientations on the curves $h^i(c)$ yield a side-pairing on $\partial \mathcal{P}$. Therefore, $\bar{\mathcal{P}}$ is a topological polygon with a side-pairing, which upon identification yields $S_g$. Since $h$ cyclically permutes the curves $\{c,h(c),\ldots,h^{n-1}(c)\}$ in $\partial \mathcal{P}$, it induces a rotation $\theta$ of $\bar{\mathcal{P}}$ of order $n$. Viewing $\bar{\mathcal{P}}$ as a hyperbolic polygon, the irreducibility of $h$ together with Proposition \ref{thm:irr_type1} implies that $\bar{\mathcal{P}}$ is a polygon of type $\mathcal{P}_D$, which $h$ rotates by $\theta_D(= \theta)$. Conversely, suppose that (ii) holds. Then the structure of the polygon $\mathcal{P}_D$ guarantees the existence of an orbit $\{c,h(c),\ldots,h^{n-1}(c)\}$ of nonseparating curves (on $\partial \mathcal{P}_D$) as desired. \end{proof} Let $h \in \mathrm{Mod}(S_g)$ be an irreducible finite-order mapping class that is not of Type 1. In~\cite[Lemma 2.23]{PRS}, we had provided a combinatorial recipe for the geometric realization of $h$, which involved the attachment of a permutation component followed by a decomposition into Type 1 irreducibles. Taking inspiration from this construction, in the following remark, we sketch a rather technical procedure for realizing such an $h$ topologically. \begin{remark} \label{rem:irr_type2} Let $h \in \mathrm{Mod}(S_g)$ be an irreducible order-$n$ mapping class that is not of Type 1. Assuming, up to isotopy, that $h$ has an orbit of size $n$, we add a $1$-permutation component to $h$ (as in Definition~\ref{rem:triv_self_comp}) along this orbit, to obtain an action $h' = \llbracket h, 1 \rrbracket$ on $S_{g + n}$. For $1 \leq i \leq n$, let $a_i$ and $b_i$ denote the longitudinal and meridional curves on the $i^{th}$ copy of $S_{1,1}$ that was pasted to $S_{g}$ during the construction of $h'$. Further, we assume that, for each $i$, the $a_i$ and the $b_i$ are oriented such that $\hat{i}(a_i,b_i) = 1$. For $1 \leq i \leq n$, let $\beta_i$ be the curve along which the $i^{th}$ copy of $S_{1,1}$ was attached while constructing $h'$.
Then the isotopy class of $\beta = \beta_1 \ast \beta_2 \ast \operatorname{{\llbracket}}dots \ast \beta_n$ is represented by a nontrivial curve in $S_g$, for otherwise $\beta$ would bound a disk $\mathcal{P}$ in $S_g$ which a representative $\tilde{h}$ of $h$ rotates by $2\mathcal{P}i/n$, thereby inducing a fixed point at the center of $\mathcal{P}$. This would contradict our assumption that $h$ is not of Type 1. Further, as $h$ is irreducible, we have $h(\beta) \neq \beta$, and as $h(\beta_i) = \beta_{i+1}$, it follows that $h(\beta)$ and $\beta$ are homologous. Thus, $h(\beta)$ and $\beta$ cobound a subsurface of $S_g$, and so we have that $i(\beta, h(\beta)) = 0$. Now we define $c_i := a_i * b_{i+1}^{-1}$, for $ 1 \operatorname{{\llbracket}}eq i \operatorname{{\llbracket}}eq n$. Then we see that $i(c_i,c_{i+1})= \hat{i}(c_i,c_{i+1}) = 1$. Further, by definition, it follows that $h'(c_i)= c_{i+1}$, for all $i$. So, $\{c_1,\operatorname{{\llbracket}}dots,c_n\}$ is the curve orbit of $c_1$ under the $\operatorname{{\llbracket}}angle h' \operatorname{{\rrbracket}}angle$-action such that $i(c_1,h'(c_1)) = \hat{i}(c_1,h'(c_1)) = 1$. Further, $h'$ preserves the isotopy class of $c := c_1 \ast c_2 \ast \operatorname{{\llbracket}}dots \ast c_n$ in $S_{g+n}$, which is represented by a simple closed curve. Thus, $h'$ may be isotoped to preserve $c$, and hence a closed neighborhood $N$ of $c$, which is a subsurface $\Sigma$ of $S_{g+n}$. Thus, $h'$ induces an order-$n$ map $h_{\Sigma}$ on $\Sigma$, and so let $h_{\widehat{\Sigma}}$ be the map induced by $h_{\Sigma}$ on the surface $\widehat{\Sigma}$ obtained by capping off the boundary components of $\Sigma$. Since $c$ is contractible in $\widehat{\Sigma}$, by the ideas in Proposition~\operatorname{{\rrbracket}}ef{prop:irr-type1_NK}, it follows that $h_{\widehat{\Sigma}}$ defines an action on $\widehat{\Sigma}$ that is realized as a rotation of a polygon with a side-pairing defined by the oriented curves $c_i$. Moreover, $h_{\widehat{\Sigma}}$ must have finitely many $\alpha$-compatibilities (defined along the boundary components of $\Sigma$) with the action $h_{\widehat{\Sigma'}}$ induced by $h'$ on the surface $\widehat{\Sigma}'$ obtained by capping off $\overline{S_{g + n} \setminus \Sigma}$. By removing maximal reduction systems for $h_{\widehat{\Sigma}}$ and $h_{\widehat{\Sigma'}}$ in $\widehat{\Sigma}$ and $\widehat{\Sigma'}$, respectively, and capping (and repeating the process above, if required), we can further decompose $h_{\widehat{\Sigma}}$ and $h_{\widehat{\Sigma'}}$ into Type 1 irreducibles. Now, by Proposition~\operatorname{{\rrbracket}}ef{prop:irr-type1_NK}, each of these components has an order-$n$ representative that is realized as a rotation of a polygon (of type $\mathcal{P}_D$). So, we paste these representatives together across the respective compatibilities to obtain an order-$n$ representative of $h'$. Finally, we delete a $1$-permutation component from $h'$ to obtain an order-$n$ representative of $h$. \end{remark} By a \textit{multicurve} $\mathcal{C} \subset S_g$, we mean a finite collection of disjoint nonisotopic essential simple closed curves in $S_g$. A multicurve $\mathcal{C}$ in $S_g$ is said to be \textit{nonseparating}, if $S_g \setminus \mathcal{C}$ is connected, and is said to be \textit{separating}, otherwise.
Two multicurves $\mathcal{C}$ and $\mathcal{C}'$ in $S_g$ are said to be \textit{mutually disjoint} if there exists no pair of curves $c, c'$ with $c \in \mathcal{C}$ and $c' \in \mathcal{C}'$ such that $i(c,c') > 0$. \begin{definition} Let $h \in \mathrm{Mod}(S_g)$ be of order $n$. Then a multicurve $\mathcal{C} \subset S_g$ is said to form an \textit{essential curve orbit induced by $h$} if: \begin{enumerate}[(i)] \item $\mathcal{C}$ is an orbit of some curve $c$ under the action of $\operatorname{{\llbracket}}angle h \operatorname{{\rrbracket}}angle$, and \item $c$ is nonseparating if $|\mathcal{C}|> 1$, and separating otherwise. \end{enumerate} An essential orbit is said to be of \textit{full size} if $|\mathcal{C}| =n$. \end{definition} \noindent Before we sketch a proof of Theorem~\operatorname{{\rrbracket}}ef{thm:NK}, we will state a lemma (without proof) that gives a characterization of rotational mapping classes based on the essential curve orbits induced by their actions. \begin{lemma} \operatorname{{\llbracket}}abel{lem:rot_condn} Let $h \in \mathrm{Mod}(S_g)$ be of order $n$. Then $h$ is a rotational mapping class if, and only if, $h$ has a maximal reduction system $\mathcal{C}$ such that one of the following holds. \begin{enumerate}[(i)] \item There exist $k$ mutually disjoint full-sized nonseparating essential curve orbits $\mathcal{C}_i$, for $1 \operatorname{{\llbracket}}eq i \operatorname{{\llbracket}}eq k$, induced by $h$ such that $k = g/n$, and $\mathcal{C} = \cup_{i=1}^k \mathcal{C}_i$. \item There exist $k$ mutually disjoint full-sized nonseparating essential curve orbits $\mathcal{C}_i$, for $1 \operatorname{{\llbracket}}eq i \operatorname{{\llbracket}}eq k$, induced by $h$ such that $k = (g-1)/n$, and there exists a nonseparating curve $c$ in $S_g$ disjoint from the curves in each $\mathcal{C}_i$ such that $\mathcal{C} = \cup_{i=1}^k \mathcal{C}_i \cup \{c\}$. \end{enumerate} \end{lemma} \noindent We conclude this paper by sketching a proof of the Nielsen realization theorem for the cyclic case. \begin{proof}[Proof of Theorem~\operatorname{{\rrbracket}}ef{thm:NK}] Let $h\in \mathrm{Mod}(S_g)$ be of order $n$. If $h$ is irreducible, then by Proposition~\operatorname{{\rrbracket}}ef{prop:irr-type1_NK} and Remark~\operatorname{{\rrbracket}}ef{rem:irr_type2}, we can obtain a representative $\tilde{h}$ of $h$, of order $n$, as required. Now suppose that $h$ has a maximal reduction system $\mathcal{C}$ that satisfies condition (i) of Lemma~\operatorname{{\rrbracket}}ef{lem:rot_condn}. Then by capping off $\overline{S_g \setminus \mathcal{C}}$, $h$ reduces to a map on the sphere $S_0$, which is isotopic to a rotation $R$. Note that the action of $R$ on $S_0$ has two marked orbits of size $n$ corresponding to each essential full-sized orbit in $\mathcal{C}$ (that was removed). We may inductively paste $k$ $1$-permutations to $R$ along these marked orbits in $S_0$ (corresponding to the full-sized essential orbits in $\mathcal{C}$ we had removed) to obtain an $\tilde{h} \in \text{Homeo}^+(S_g)$, as desired. If $h$ has a maximal reduction system $\mathcal{C}$ that satisfies condition (ii) of Lemma~\operatorname{{\rrbracket}}ef{lem:rot_condn}, then by a similar argument as above, we may obtain $\tilde{h}$. Suppose that $h$ is a reducible non-rotational mapping class.
Choose a collection $\{\mathcal{C}_1, \operatorname{{\llbracket}}dots,\mathcal{C}_k\}$ of mutually disjoint essential orbits induced by $h$ such that $\mathcal{C} = \cup_{i=1}^k \mathcal{C}_i$ forms a maximal reduction system for $h$. Let $h'$ be the map induced by $h$ on the surface $S'$ obtained by capping off the components of $\overline{S_g \setminus \mathcal{C}}$. As before, we note that the removal of each $\mathcal{C}_i \subset \mathcal{C}$ induces two marked orbits of size $|\mathcal{C}_i|$ in $S'$. By our assumption, it now follows that each component of $S'$ is of genus greater than or equal to $1$, and furthermore, $h'$ induces an irreducible mapping class on each component of $S'$. By Proposition~\operatorname{{\rrbracket}}ef{prop:irr-type1_NK} and Remark~\operatorname{{\rrbracket}}ef{rem:irr_type2}, we can obtain an order-$n$ representative of $h'$ on each component of $S'$. Now we paste together these representatives across $r$-compatibilities along the marked orbits (corresponding to the essential orbits in the collection $\{\mathcal{C}_1, \operatorname{{\llbracket}}dots,\mathcal{C}_k\}$ that we had removed), to obtain a representative $\tilde{h}$ of the mapping class $h$ such that $\tilde{h}^n =1$. \end{proof} \section*{Acknowledgments} The first and third authors were supported by a joint SERB-EMR grant instituted by the Government of India. \end{document}
\begin{document} \title[The three-holed sphere] {Affine deformations of a three-holed sphere} \author[Charette]{Virginie Charette} \address{D\'epartement de math\'ematiques\\ Universit\'e de Sherbrooke\\ Sherbrooke, Quebec, Canada} {\mathsf e}mail{[email protected]} \author[Drumm]{Todd A. Drumm} \address{Department of Mathematics\\ Howard University\\ Washington, DC } {\mathsf e}mail{[email protected]} \author[Goldman]{William M. Goldman} \address{Department of Mathematics\\ University of Maryland\\ College Park, MD 20742 USA} {\mathsf e}mail{[email protected]} \date{\today} \mathfrak{s}ubjclass{57M05 (Low-dimensional topology), 20H10 (Fuchsian groups and their generalizations), 30F60 (Teichm\"uller theory)} \keywords{ hyperbolic surface, affine manifold, discrete group, fundamental polygon, fundamental polyhedron, proper action, Lorentz metric, Fricke space} \thanks{Charette gratefully acknowledges partial support from the Natural Sciences and Engineering Research Council of Canada and from the Fonds qu\'eb\'ecois de la recherche sur la nature et les technologies. Goldman gratefully acknowledges partial support from National Science Foundation grants DMS070781 and the Oswald Veblen Fund at the Institute for Advanced Study.} \begin{abstract} Associated to every complete affine $3$-manifold $M$ with nonsolvable fundamental group is a noncompact hyperbolic surface $\Sigma$. We classify such complete affine structures when $\Sigma$ is homeomorphic to a three-holed sphere. In particular, for every such complete hyperbolic surface $\Sigma$, the deformation space identifies with two opposite octants in $\mathbb R^3$. Furthermore every $M$ admits a fundamental polyhedron bounded by crooked planes. Therefore $M$ is homeomorphic to an open solid handlebody of genus two. As an explicit application of this theory, we construct proper affine deformations of an arithmetic Fuchsian group inside ${\mathsf{Sp}(4,\Z)}$. {\mathsf e}nd{abstract} \maketitle \tableofcontents \mathfrak{s}ection*{Introduction} A {{\mathsf e}m complete affine manifold\/} is a quotient \begin{equation*} M = {\mathbf E}/{\Gamma}amma {\mathsf e}nd{equation*} where ${\mathbf E}$ is an affine space and ${\Gamma}amma\mathfrak{s}ubset {\mathsf{Aff}}({\mathbf E})$ is a discrete group of affine transformations of ${\mathbf E}$ acting properly and freely on ${\mathbf E}$. When $\dim {\mathbf E} =3$, Fried-Goldman~\cite{FG} and Mess~\cite{Me} imply that either: \begin{itemize} \item ${\Gamma}amma$ is solvable, or \item ${\Gamma}amma$ is virtually free. {\mathsf e}nd{itemize} When ${\Gamma}amma$ is solvable, $M$ admits a finite covering homeomorphic to the total space of a fibration composed of points, circles, annuli and tori. The classification of such structures in this case is straightforward~\cite{FG}. When ${\Gamma}amma$ is virtually free, the classification is considerably more interesting. In the early 1980s Margulis discovered~\cite{Margulis1,Margulis2} the existence of such structures, answering a question posed by Milnor~\cite{Mi}. \begin{conj*} Suppose $M^3$ is a $3$-dimensional complete affine manifold with free fundamental group. Then $M$ is homeomorphic to an open solid handlebody. {\mathsf e}nd{conj*} \noindent The purpose of this paper is to prove this conjecture in the first nontrivial case. 
By Fried-Goldman~\cite{FG}, the linear holonomy homomorphism \begin{equation*} {\mathsf{Aff}}({\mathbf E}^3) \xrightarrow{\mathsf L} {\mathsf{GL}(3,\R)} {\mathsf e}nd{equation*} embeds ${\Gamma}amma$ as a discrete subgroup of a subgroup of ${\mathsf{GL}(3,\R)}$ conjugate to the orthogonal group $\mathsf{O}(2,1)$. Thus $M$ admits a {{\mathsf e}m complete flat Lorentz metric \/} and is a {{\mathsf e}m (geodesically) complete flat Lorentz $3$-manifold.\/} Thus we henceforth restrict our attention to the case ${\mathbf E}$ is a $3$-dimensional {{\mathsf e}m Lorentzian affine space\/} ${\EE^3_1}$. A Lorentzian affine space is a simply connected geodesically complete flat Lorentz $3$-manifold, and is unique up to isometry. Furthermore $\mathsf L({\Gamma}amma)$ is a Fuchsian group acting properly and freely on the hyperbolic plane ${\mathbf H}^2$. We model ${\mathbf H}^2$ on a component of the two-sheeted hyperboloid \begin{equation*} \{ {\mathsf v}\in{\R^3_1} \mid \ldot{{\mathsf v}}{{\mathsf v}} \,=\, -1 \}, {\mathsf e}nd{equation*} or equivalently its projectivization in $\mathsf{P}({\R^3_1})$. (Compare \cite{G}.) The quotient \begin{equation*} \Sigma := {\mathbf H}^2/\mathsf L({\Gamma}amma) {\mathsf e}nd{equation*} is a complete hyperbolic surface homotopy-equivalent to $M$, naturally associated to the Lorentz manifold $M$. We prove the above conjecture in the case that the surface $\Sigma$ is homeomorphic to a three-holed sphere. Margulis~\cite{Margulis1,Margulis2} discovered proper actions by bounding (from below) the Euclidean distance that elements of ${\Gamma}amma$ displace points. Our more geometric approach constructs fundamental polyhedra for affine deformations in the spirit of Poincar\'e's theorem on fundamental polyhedra for hyperbolic manifolds. This approach began with Drumm~\cite{Drumm1}, who constructed fundamental polyhedra from {{\mathsf e}m crooked planes\/} to show that certain affine deformations ${\Gamma}amma$ acts properly on all of ${\EE^3_1}$. A {{\mathsf e}m crooked plane\/} is a polyhedron in ${\EE^3_1}$ with four infinite faces, adapted to the invariant Lorentzian geometry of ${\EE^3_1}$. Specifically, representing the hyperbolic surface $\Sigma$ as an identification space of a fundamental polygon for the generalized Schottky group $\mathsf L({\Gamma}amma)\mathfrak{s}ubset \mathsf{O}(2,1)$, we construct a fundamental polyhedron for certain affine deformations ${\Gamma}amma$ bounded by crooked planes \cite{Drumm1}. We call such a fundamental polyhedron a {{\mathsf e}m crooked fundamental polyhedron.\/} \begin{conj*} Suppose $\dim({\EE^3_1}) = 3$ and ${\Gamma}amma\mathfrak{s}ubset{\mathsf{Aff}}({\EE^3_1})$ is a discrete group acting properly on ${\EE^3_1}$. Suppose that ${\Gamma}amma$ is not solvable. Then some finite-index subgroup of ${\Gamma}amma$ admits a crooked fundamental domain. {\mathsf e}nd{conj*} \noindent We prove this conjecture when $\Sigma$ is homeomorphic to a three-holed sphere. Let ${\Gamma}amma_0\mathfrak{s}ubset \mathsf{O}(2,1)$ be a Fuchsian group. Denote the corresponding embedding \begin{equation*} \rho_0: {\Gamma}amma_0 \hookrightarrow \mathsf{O}(2,1) \mathfrak{s}ubset {\mathsf{GL}(3,\R)}. {\mathsf e}nd{equation*} An {{\mathsf e}m affine deformation\/} of ${\Gamma}amma_0$ is a representation \begin{equation*} {\Gamma}amma_0 \xrightarrow{\rho} {\mathsf{Aff}}({\EE^3_1}) {\mathsf e}nd{equation*} satisfying $L\circ\rho = \rho_0$. We refer to the image ${\Gamma}amma$ of $\rho$ as an {{\mathsf e}m affine deformation\/} as well. 
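For concreteness, we record the cocycle calculus behind this notion (a routine verification, stated here for later use): writing
$$ \rho(g)(x) \;=\; \rho_0(g)(x) + u(g) $$
for a vector $u(g)\in{\R^3_1}$, the condition that $\rho$ be a homomorphism is equivalent to the identity
$$ u(g_1 g_2) \;=\; u(g_1) + \rho_0(g_1)\, u(g_2), $$
and conjugating $\rho$ by the translation by ${\mathsf v}\in{\R^3_1}$ replaces $u(g)$ by $u(g) + {\mathsf v} - \rho_0(g){\mathsf v}$.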
An affine deformation is {{\mathsf e}m proper\/} if the affine action of ${\Gamma}amma_0$ on ${\EE^3_1}$ defined by $\rho$ is a proper action. Clearly an affine deformation ${\Gamma}amma$ which admits a crooked fundamental polyhedron is proper. \begin{thm*}[Drumm] Every free discrete Fuchsian group ${\Gamma}amma_0\mathfrak{s}ubset \mathsf{O}(2,1)$ admits a proper affine deformation. {\mathsf e}nd{thm*} \noindent Actions of free groups by Lorentz isometries are the only cases to consider. Fried-Goldman \cite{FG} reduces the problem to when ${\Gamma}amma_0$ is a Fuchsian group, and Mess~\cite{Me} implies ${\Gamma}amma_0$ cannot be cocompact. Thus, after passing to a finite-index subgroup, we may assume that ${\Gamma}amma_0$ is free. The linear representation $\rho_0$ is itself an affine deformation, by composing it with the embedding \begin{equation*} {\mathsf{GL}(3,\R)} \hookrightarrow {\mathsf{Aff}}({\EE^3_1}). {\mathsf e}nd{equation*} Slightly abusing notation, denote this composition by $\rho_0$ as well. Two affine deformations are {{\mathsf e}m translationally equivalent\/} if they are conjugate by a translation in ${\EE^3_1}$. An affine deformation is {{\mathsf e}m trivial\/} (or {{\mathsf e}m radiant\/}) if and only if it is translationally conjugate to the affine deformation $\rho_0$ constructed above. Equivalently, an affine deformation is trivial if it fixes a point in the affine space ${\EE^3_1}$. Let ${\R^3_1}$ denote the vector space underlying the affine space ${\EE^3_1}$, considered as a ${\Gamma}amma_0$-module via the linear representation $\rho_0$. The space of translational equivalence classes of affine deformations of $\rho_0$ identifies with the cohomology group $\mathsf H^1(\G_0,\V)$. For each $g\in{\Gamma}amma_0$, define the {{\mathsf e}m translational part\/} $u(g)$ of $\rho(g)$, as the unique translation taking the origin to its image under $\rho(g)$. That is, $u(g) = \rho(g) (0)$, and \begin{equation*} x \mathfrak{s}tackrel{\rho(g)}\longmapsto \rho_0(g)(x) + u(g). {\mathsf e}nd{equation*} The map ${\Gamma}amma_0\xrightarrow{u}{\R^3_1}$ is a cocycle in $\mathbb ZZ$, and conjugating $\rho$ by a translation changes $u$ by a coboundary. The classification of complete affine structures in dimension $3$ therefore reduces to determining, for a given free Fuchsian group ${\Gamma}amma_0$, the subset of $\mathsf H^1(\G_0,\V)$ corresponding to translational equivalence classes of {{\mathsf e}m proper\/} affine deformations. Margulis~\cite{Margulis1,Margulis2} introduced an invariant of the affine deformation ${\Gamma}amma$, defined for elements $\alphamma\in{\Gamma}amma$ whose linear part $\mathsf L(\alphamma)$ is hyperbolic. Namely, $\alphamma$ preserves a unique affine line $C_\alphamma$ upon which it acts by translation. Furthermore $C_\alphamma$ inherits a {{\mathsf e}m canonical orientation.\/} As $C_\alphamma$ is spacelike, the Lorentz metric and the canonical orientation determines a unique orientation-preserving isometry \begin{equation*} \mathbb R\xrightarrow{j_\alphamma} C_\alphamma. {\mathsf e}nd{equation*} \noindent The {{\mathsf e}m Margulis invariant\/} $\alpha(\alphamma)\in\mathbb R$ is the displacement of the translation $\alphamma|_{C_\alphamma}$ as measured by $j_\alphamma$: \begin{equation*} j_\alphamma(t) \xrightarrow{\alphamma} j_\alphamma(t + \alpha(\alphamma) \big) {\mathsf e}nd{equation*} for $t\in\mathbb R$. 
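To illustrate the definition, here is a model computation (with the conventions of \S\ref{sec:alpha} below). Suppose the linear part of $\alphamma$ is the boost
$$ \mathsf L(\alphamma) \;=\; \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cosh t & \sinh t \\ 0 & \sinh t & \cosh t \end{bmatrix}, \qquad t\neq 0, $$
which is hyperbolic and fixes the unit spacelike vector ${\mathsf v}=(1,0,0)$, and suppose $\alphamma(x) = \mathsf L(\alphamma)(x) + {\mathsf u}$ with ${\mathsf u}=(a,b,c)$. Since $\mathsf L(\alphamma)$ preserves the inner product and fixes ${\mathsf v}$, one finds $(\alphamma(x)-x)\cdot{\mathsf v} = {\mathsf u}\cdot{\mathsf v} = a$ for every $x$. Thus, up to the sign determined by the canonical orientation of $C_\alphamma$, the Margulis invariant is simply the component of the translational part along the fixed axis of the linear part; in particular it does not depend on the choice of $x$.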
Margulis's invariant $\alpha$ is a class function on ${\Gamma}amma_0$ which completely determines the translational equivalence class of the affine deformation \cite{DrummGoldman2, CharetteDrumm2}. Charette and Drumm~\cite{CharetteDrumm1} extended Margulis's invariant to parabolic transformations. However, only its {{\mathsf e}m sign\/} is well defined for parabolic transformations. To obtain a precise numerical value one requires a {{\mathsf e}m decoration\/} of ${\Gamma}amma_0$, that is, a choice of horocycle at each cusp of $\Sigma$. If ${\Gamma}amma$ is an affine deformation of ${\Gamma}amma_0$ with translational part $u\in\mathbb ZZ$, then we indicate the dependence of $\alpha$ on the cohomology class $[u]\in\mathsf H^1(\G_0,\V)$ by writing $\alpha = \alpha_{[u]}$. Let ${\Gamma}amma_0$ be a Fuchsian group whose corresponding hyperbolic surface $\Sigma$ is homeomorphic to a three-holed sphere. Denote the generators of ${\Gamma}amma_0$ corresponding to the three ends of $\partial\Sigma$ by $g_1, g_2, g_3$. Choose a decoration so that the generalized Margulis invariant defines an isomorphism \begin{align*} \mathsf H^1(\G_0,\V) &\longrightarrow \mathbb R^3 \\ [u] &\longmapsto \bmatrix \mu_1([u]) \\ \mu_2([u]) \\ \mu_3([u]) {\mathsf e}ndbmatrix := \bmatrix \alpha_{[u]}(g_1) \\ \alpha_{[u]}(g_2) \\ \alpha_{[u]}(g_3) {\mathsf e}ndbmatrix. {\mathsf e}nd{align*} \begin{thmA*}\label{thm:main} Let ${\Gamma}amma_0,\Sigma_0,\mu_1,\mu_2,\mu_3$ be as above. Then $[u]\in\mathsf H^1(\G_0,\V)$ corresponds to a proper affine deformation if and only if \begin{equation*} \mu_1([u]),\; \mu_2([u]),\; \mu_3([u]) {\mathsf e}nd{equation*} all have the same sign. Furthermore in this case ${\Gamma}amma$ admits a crooked fundamental domain and $M$ is homeomorphic to an open solid handlebody of genus two. {\mathsf e}nd{thmA*} For purely hyperbolic ${\Gamma}amma_0$, Theorem~A was proved by Cathy Jones in her doctoral thesis~\cite{Jones}, using a different method. In the case that $\Sigma$ is a three-holed sphere, Theorem~A gives a complete description of the deformation space and the topological type. As three-holed spheres are the building blocks of all compact hyperbolic surfaces, the present paper plays a fundamental role in our investigation of affine deformations of hyperbolic surfaces of arbitrary topological type. We conjecture that when $\Sigma$ is homeomorphic to a two-holed projective plane or one-holed Klein bottle, the deformation space will again be defined by finitely many inequalities. However, in all other cases, the deformation space will be defined by infinitely many inequalities. For example, when $\Sigma$ is homeomorphic to a one-holed torus, the deformation space is a convex domain with fractal boundary~\cite{GMM}. Margulis's opposite sign lemma~\cite{Margulis1,Margulis2} (see Abels~\cite{Abels} for a beautiful exposition) states that uniform positivity (or negativity) of $\alpha(\alphamma)$ is necessary for properness of an affine deformation. In \cite{GM,G0} uniform positivity was conjectured to be equivalent to properness. Theorem~A implies this conjecture when $\Sigma$ is a three-holed sphere with geodesic boundary. In that case only the three $\alphamma$ corresponding to $\partial \Sigma$ need to be checked. However, when $\Sigma$ has at least one cusp, Theorem~A provides counterexamples to the original conjecture. 
If the generalized Margulis invariant of that cusp is zero, and those of the other ends are positive, then $\alpha(\alphamma) > 0$ for all hyperbolic elements $\alphamma\in{\Gamma}amma$. Other counterexamples are given in \cite{GMM}. We apply Theorem~A to construct a proper affine deformation of an arithmetic group in $\mathsf{SL}(2,\Z)$ inside ${\mathsf{Sp}(4,\Z)}$. Here ${\mathsf{Aff}}({\EE^3_1})$ is represented as the subgroup of ${\mathsf{Sp}(4,\R)}$ stabilizing a Lagrangian plane ${L_{\infty}}$ in a symplectic vector space $\mathbb R^4$ defined over $\mathbb Z$. Its unipotent radical ${\mathsf{U}}$ is the subgroup of ${\mathsf{Sp}(4,\R)}$ which preserves ${L_{\infty}}$, acts identically on ${L_{\infty}}$, and acts identically on the quotient $\mathbb R^4/{L_{\infty}}$. The parabolic subgroup ${\mathsf{Aff}}({\EE^3_1})$ is the normalizer of $U$ in ${\mathsf{Sp}(4,\R)}$. Furthermore ${\mathsf{Aff}}({\EE^3_1})$ acts conformally on a left-invariant flat Lorentz metric on $U$. This model of Minkowski space embeds in the conformal compactification of ${\EE^3_1}$, the {{\mathsf e}m Einstein universe\/} (see \cite{BCDGM}) upon which ${\mathsf{Sp}(4,\R)}$ acts transitively. \begin{thmB*} Choose three positive integers $\mu_1,\mu_2,\mu_3$. Let ${\Gamma}amma$ be the subgroup of ${\mathsf{Sp}(4,\Z)}$ generated by \begin{equation*} \bmatrix -1 & -2 & \mu_1 + \mu_2 -\mu_3 & 0 \\ 0 & -1 & 2\mu_1 & -\mu_1 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 2 & -1 {\mathsf e}ndbmatrix ,\; \bmatrix -1 & 0 & -\mu_2 & -2\mu_2 \\ 2 & -1 & 0 & 0 \\ 0 & 0 & -1 & -2 \\ 0 & 0 & 0 & -1 {\mathsf e}ndbmatrix {\mathsf e}nd{equation*} Let ${\mathsf{U}} \,:=\, {\mathsf e}xp(\mathcal Phi) \,\mathfrak{s}ubset\, {\mathsf{Sp}(4,\R)}$ be the connected unipotent subgroup consisting of matrices \begin{equation*} \bmatrix 1 & 0 & x & y \\ 0 & 1 & y & z \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 {\mathsf e}ndbmatrix {\mathsf e}nd{equation*} where $x,y,z\in\mathbb R$. Then: \begin{itemize} \item ${\Gamma}amma$ normalizes ${\mathsf{U}}$; \item The resulting action of ${\Gamma}amma$ on ${\mathsf{U}}$ is proper and free; \item ${\Gamma}amma$ acts isometrically with respect to a left-invariant flat Lorentz metric on ${\mathsf{U}}$; \item The quotient orbifold ${\mathsf{U}}/{\Gamma}amma$ is homeomorphic to an open solid handlebody of genus two. {\mathsf e}nd{itemize} {\mathsf e}nd{thmB*} Our result complements Goldman-La\-bourie-Mar\-gulis~\cite{GLM} when the hyperbolic surface $\Sigma$ is convex cocompact. In that case the space of proper affine deformations identifies with an open convex cone in $\mathsf H^1(\G_0,\V)$ defined by the nonvanishing of an extension of Margulis's invariant to geodesic currents on $\Sigma$. This cone is the interior of the intersection of half-spaces defined by the functionals \begin{align*} \mathsf H^1(\G_0,\V) & \longrightarrow \mathbb R \\ [u] &\longmapsto \alpha_{[u]} (g) {\mathsf e}nd{align*} for $g\in{\Gamma}amma_0$. In general we expect this cone to be the {{\mathsf e}m union\/} of open regions corresponding to combinatorial configurations realized by crooked planes, thereby giving a crooked fundamental domain for each proper affine deformation. Jones~\cite{Jones} used standard {{\mathsf e}m Schottky fundamental domains\/} to fill the open cone with such regions. Here we decompose $\Sigma$ into two ideal triangles, obtaining a single combinatorial configuration which applies to all proper affine deformations. 
We are grateful to Ian Agol, Francis Bonahon, Dick Canary, David Gabai, Ryan Hoban, Cathy Jones, Fran\c cois Labourie, Misha Kapovich, Grisha Margulis, Yair Minsky and Kevin Scannell for helpful discussions. We also wish to thank the Institute for Advanced Study for their hospitality. \mathfrak{s}ection{Lorentzian geometry}\label{sec:alpha} This section summarizes needed technical background on the geometry of Minkowski (2+1)-spacetime, its isometries and Margulis's invariant of hyperbolic and parabolic isometries. For details, variants and proofs, see \cite{Abels,CharetteDrumm1,CharetteDrumm2, CDGM,Drumm3,DGyellow,DrummGoldman2,G0}. Let ${\EE^3_1}$ denote {{\mathsf e}m Minkowski (2+1)-spacetime}, that is, a simply connected complete three-dimensional flat Lorentzian manifold. Alternatively ${\EE^3_1}$ is an affine space whose underlying vector space ${\R^3_1}$ of translations is a {{\mathsf e}m Lorentzian inner product space,\/} a vector space with an inner product \begin{align*} {\R^3_1} \times {\R^3_1} &\longrightarrow \mathbb R \\ ({\mathsf v},{\mathsf w}) &\longmapsto {\mathsf v}\cdot{\mathsf w} {\mathsf e}nd{align*} of signature $(2,1)$. A vector ${\mathsf x}\in{\R^3_1}$ is: \begin{itemize} \item {{\mathsf e}m null} if $\ldot{{\mathsf x}}{{\mathsf x}}=0$; \item {{\mathsf e}m timelike} if $\ldot{{\mathsf x}}{{\mathsf x}}<0$; \item {{\mathsf e}m spacelike} if $\ldot{{\mathsf x}}{{\mathsf x}}>0$. {\mathsf e}nd{itemize} A spacelike vector ${\mathsf x}$ is {{\mathsf e}m unit spacelike} if $\ldot{{\mathsf x}}{{\mathsf x}}=1$. A null vector is {{\mathsf e}m future-pointing} if its third coordinate is positive -- this corresponds to choosing a connected component of the set of timelike vectors, or a {{\mathsf e}m time-orientation}. Define the {{\mathsf e}m Lorentzian cross-product} as follows. Choose an orientation on ${\R^3_1}$. Let \begin{align*} {\R^3_1} \times {\R^3_1} \times {\R^3_1} \xrightarrow{\mathsf{Det}} \mathbb R {\mathsf e}nd{align*} denote the alternating trilinear form compatible with the Lorentzian inner product and the orientation: if $({\mathsf v}_1,{\mathsf v}_2,{\mathsf v}_3)$ is a positively-oriented basis, with \begin{equation*} \ldot{ {\mathsf v}_i}{{\mathsf v}_j} \,=\, 0 {\text{~if~}} i \neq j, \; \ldot{{\mathsf v}_1}{{\mathsf v}_1} \,=\, \ldot{{\mathsf v}_2}{{\mathsf v}_2} \,=\, -\ldot{{\mathsf v}_3}{{\mathsf v}_3} \,=\, 1 {\mathsf e}nd{equation*} then \begin{equation*} \mathsf{Det}({\mathsf v}_1,{\mathsf v}_2,{\mathsf v}_3) = 1. {\mathsf e}nd{equation*} The Lorentzian cross-product is the unique bilinear map \begin{equation*} {\R^3_1}\times{\R^3_1}\xrightarrow{\boxtimes} {\R^3_1} {\mathsf e}nd{equation*} satisfying \begin{equation*} \ldot{{\mathsf u}}({{\mathsf v}\boxtimes{\mathsf w}}) \;=\; \mathsf{Det}(\left[{\mathsf u}~ {\mathsf v}~{\mathsf w}\right]). {\mathsf e}nd{equation*} The following facts are well known (see for example Ratcliffe~\cite{Ratcliffe}): \begin{lemma}\label{lem:crossprod} Let ${\mathsf u},{\mathsf v},{\mathsf x},{\mathsf y}\in{\R^3_1}$. Then: \begin{align*} \ldot{{\mathsf u}}{({\mathsf x}\boxtimes{\mathsf y})} & = \ldot{{\mathsf x}}{({\mathsf y}\boxtimes{\mathsf u})} \\ \ldot{({\mathsf u}\boxtimes{\mathsf v})}{({\mathsf x}\boxtimes{\mathsf y})} &= (\ldot{{\mathsf u}}{{\mathsf y}})(\ldot{{\mathsf v}}{{\mathsf x}})-(\ldot{{\mathsf u}}{{\mathsf x}})(\ldot{{\mathsf v}}{{\mathsf y}}).
{\mathsf e}nd{align*} {\mathsf e}nd{lemma} For a spacelike vector ${\mathsf v}$, define its {{\mathsf e}m Lorentz-orthogonal plane} to be: \begin{equation*} {\mathsf v}^\perp = \{ {\mathsf x} \, \mid \ldot{{\mathsf x}}{{\mathsf v}}=0\} . {\mathsf e}nd{equation*} It is an {{\mathsf e}m indefinite plane}, since the Lorentzian inner product restricts to an inner product of signature $(1,1)$. In particular, ${\mathsf v}^\perp$ contains two null lines. The two future-pointing, linearly independent vectors of Euclidean length $1$ in these null lines are denoted ${\mathsf v}^-$ and ${\mathsf v}^+$, and they are chosen so that $( {\mathsf v}^-,{\mathsf v}^+, {\mathsf v} )$ is a positively oriented basis for ${\R^3_1}$. A basis $(a,b,c)$ of ${\R^3_1}$ is positively oriented if and only if \begin{equation*} \ldot{(a \boxtimes b)}{c} > 0 . {\mathsf e}nd{equation*} \begin{lemma}\label{lem:xpxo} Let ${\mathsf v}\in{\R^3_1}$ be a unit spacelike vector. Then: \begin{align*} {\mathsf v}\boxtimes{\mathsf v}^+ &={\mathsf v}^+ \\ {\mathsf v}^- \boxtimes{\mathsf v} &={\mathsf v}^- . {\mathsf e}nd{align*} {\mathsf e}nd{lemma} \noindent For the proof, see Charette-Drumm~\cite{CharetteDrumm2}. Let ${\mathsf{G}}$ denote the group of all affine transformations that preserve the Lorentzian scalar product on the space of directions; ${\mathsf{G}}$ is isomorphic to $\mathsf{O}(2,1)\ltimes{\R^3_1}$. We shall restrict our attention to those transformations whose linear parts are in ${\mathsf{SO}(2,1)^0}$, thus preserving orientation and time-orientation. As above, $\mathsf L$ denotes the projection onto the {{\mathsf e}m linear part} of an affine transformation. Suppose $g\in{\mathsf{SO}(2,1)^0}$ and $g\neq \mathbb I$. \begin{itemize} \item $g$ is {{\mathsf e}m hyperbolic} if it has three distinct real eigenvalues; \item $g$ is {{\mathsf e}m parabolic} if its only eigenvalue is 1; \item $g$ is {{\mathsf e}m elliptic} if it has a non-real eigenvalue. {\mathsf e}nd{itemize} Denote the set of hyperbolic elements in ${\mathsf{SO}(2,1)^0}$ by ${\mathsf{Hyp}}_0$ and the set of parabolic elements by ${\mathsf{Par}}_0$. We also call $\alphamma\in{\mathsf{G}}$ {{\mathsf e}m hyperbolic} (respectively {{\mathsf e}m parabolic}, {{\mathsf e}m elliptic}) if its linear part $\mathsf L(\gamma)$ is hyperbolic (respectively parabolic, elliptic). Denote the set of hyperbolic elements in ${\mathsf{G}}$ by ${\mathsf{Hyp}}$ and the set of parabolic transformations by ${\mathsf{Par}}$. Let $\gamma\in {\mathsf{Hyp}} \cup {\mathsf{Par}}$. The eigenspace ${\mathsf{Fix}}(\mathsf L(\gamma))$ is one-dimensional. Let ${\mathsf v}\in{\mathsf{Fix}}\big(\mathsf L(\gamma)\big)$ be a non-zero vector and $x\in{\EE^3_1}$. Define: \begin{equation*} \na{{\mathsf v}}(\gamma) \; := \;\ldot{( \gamma(x)-x)}{{\mathsf v}}. {\mathsf e}nd{equation*} \noindent The following facts are proved in \cite{Abels,CharetteDrumm1, CharetteDrumm2,DrummGoldman2,G0,GM}: \begin{itemize} \item $\na{{\mathsf v}}(\gamma)$ is independent of $x$; \item $\na{{\mathsf v}}(\alphamma)$ vanishes if and only if $\alphamma$ fixes a point;\label{hpfact:neq0} \item For any ${\mathsf e}ta\in{\mathsf{G}}$ with $h=\mathsf L({\mathsf e}ta)$, \begin{equation*} \na{h({\mathsf v})}({\mathsf e}ta\alphamma{\mathsf e}ta^{-1}) \,=\, \na{{\mathsf v}}(\alphamma) {\mathsf e}nd{equation*} where ${\mathsf v}\in {\mathsf{Fix}}(\mathsf L(\alphamma))$; \item For any $n\in\mathbb Z$, \begin{equation*} \na{{\mathsf v}}(\gamma^{n}) \,=\, \vert n\vert \na{{\mathsf v}}(\gamma).
{\mathsf e}nd{equation*} {\mathsf e}nd{itemize} \noindent A linear transformation $g$ induces a natural orientation on ${\mathsf{Fix}}(g)$ as follows. \begin{defn}\label{positive} Let $g\in{\mathsf{Hyp}}_0\cup{\mathsf{Par}}_0$. A vector ${\mathsf v}\in{\mathsf{Fix}}(g)$ is {{\mathsf e}m positive relative to $g$\/} if and only if \begin{equation*} ({\mathsf v}, {\mathsf x}, g{\mathsf x} ) {\mathsf e}nd{equation*} is a positively oriented basis, where ${\mathsf x}$ is any null or timelike vector which is not an eigenvector of $g$. {\mathsf e}nd{defn} \noindent The {{\mathsf e}m sign of $\alphamma$} is the sign of $\na{{\mathsf v}}(\alphamma)$, where ${\mathsf v}$ is any positive vector in ${\mathsf{Fix}}(g)$. For $n<0$ the orientation of ${\mathsf{Fix}}(g^n)$ reverses, so $\alphamma$ and $\gamma^{-1}$ have equal sign. \begin{lemma} [\cite{Margulis1,Margulis2,CharetteDrumm1}]\label{lem:opposite} Let $\gamma_1$, $\gamma_2\in {\mathsf{Hyp}} \cup {\mathsf{Par}}$ and suppose $\gamma_1$ and $\gamma_2$ have opposite signs. Then $\langle \gamma_1, \gamma_2\rangle$ does not act properly on ${\EE^3_1}$. {\mathsf e}nd{lemma} Let ${\Gamma}_0\mathfrak{s}ubset \mathsf{O}(2,1)$ be a free group and $\rho$ an affine deformation of ${\Gamma}_0$: \begin{equation}\label{eqn:affdef} \rho(g) (x) = g(x) + u(g) {\mathsf e}nd{equation} where $x\in{\R^3_1}$. Then ${\Gamma}_0\xrightarrow{u}{\R^3_1}$ is a cocycle of ${\Gamma}_0$ with coefficients in the ${\Gamma}_0$-module ${\R^3_1}$ corresponding to the linear action of ${\Gamma}_0$. As affine deformations of ${\Gamma}_0$ correspond to cocycles in $\mathbb ZZ$, translational conjugacy classes of affine deformations comprise the cohomology group $\mathsf H^1(\G_0,\V)$. If $g\in{\mathsf{Hyp}}_0$, set $\vo{g}$ to be the unique positive vector in ${\mathsf{Fix}}(g)$ such that $\ldot{\vo{g}}{\vo{g}}=1$. If $g\in{\mathsf{Par}}_0$, choose a positive vector in ${\mathsf{Fix}}(g)$ and call it $\vo{g}$. Let $u\in\mathbb ZZ$. Reinterpreting the Margulis invariant as a linear functional on the space of cocycles $\mathbb ZZ$, set: \begin{align*} {\Gamma}_0&\xrightarrow{\alpha_{[u]}} \mathbb R\\ g&\longmapsto \na{\vo{g}}(\gamma), {\mathsf e}nd{align*} \noindent where $\gamma=\rho(g)$ is the affine deformation corresponding to $u(g)$. As the notation indicates, $\alpha_{[u]}$ only depends on the cohomology class of $u$, since $\na{\vo{g}}$ is a class function. \mathfrak{s}ection{Hyperbolic geometry and the three-holed sphere} \label{sec:THS} Let $\Sigma$ denote a complete hyperbolic surface homeomorphic to a three-holed sphere. Each of the three ends can either flare out (that is, have infinite area) or end in a cusp. In the former case, a loop going around the end will have hyperbolic holonomy, and parabolic holonomy in the latter case. We consider certain geodesic laminations on the surface from which we will construct crooked fundamental domains. Fixing some arbitrary basepoint in $\Sigma$, let ${\Gamma}_0$ denote the image under the holonomy representation of the fundamental group of $\Sigma$. We may thus identify $\Sigma$ with ${\mathbf H}^2/ {\Gamma}_0$. The fundamental group of $\Sigma$ is free of rank two and admits a presentation \begin{equation}\label{eq:present} {\Gamma}_0=\langle g_1,g_2,g_3~\mid~g_3g_2g_1=\mathbb I\rangle, {\mathsf e}nd{equation} where the $g_i$ correspond to the components of $\partial\Sigma$ and may be hyperbolic or parabolic. 
For the rest of the paper, unless otherwise noted, the $g_i$ and their affine deformations $\gamma_i$ are indexed by $i= 1,2,3$ with addition in $\mathbb Z/3\mathbb Z$. If $g_i$ is hyperbolic, it admits a unique invariant axis $l_i\mathfrak{s}ubset{\mathbf H}^2$ which projects to an end of the three-holed sphere. For $g_i$ parabolic, we think of this invariant line as shrunk to a point on the ideal boundary. For hyperbolic $g_i$, set $\vp{i}$, $\vm{i}$ to be its attracting and repelling fixed points, respectively; if $g_i$ is parabolic, set $\vp{i}=\vm{i}$ to be its unique fixed point. Since ${\Gamma}_0$ is discrete, the $l_i$'s are pairwise disjoint. Furthermore, substituting inverses if necessary, we assume for convenience that the direction of translation along the axes is as in Figure~\ref{fig:axes}. (In this case, all three $g_i$'s are hyperbolic.) \begin{figure} \centerline{\input{threelines.pstex_t}} \caption{The invariant lines for $g_1,g_2,g_3$, with direction indicated by the arrows.} \label{fig:axes} {\mathsf e}nd{figure} The three arcs in ${\mathbf H}^2$ respectively joining $\vp{i}$ to $\vp{i+1}$ project to a geodesic lamination of $\Sigma$ as drawn in Figures~\ref{fig:ideal1} and~\ref{fig:ideal2}. \begin{figure} \centerline{\input{idealpattern1.pstex_t}} \caption{Three lines in ${\mathbf H}^2$ joining endpoints of the invariant axes $l_i$. On the right, the induced lamination of $\Sigma$.} \label{fig:ideal1} {\mathsf e}nd{figure} \begin{figure} \centerline{\input{idealpattern2.pstex_t}} \caption{Three lines in ${\mathbf H}^2$ joining endpoints of $l_i$, with $g_2$ parabolic and $l_2$ an ideal point.} \label{fig:ideal2} {\mathsf e}nd{figure} We shall adopt the following model for ${\mathbf H}^2$ in terms of Lorentzian affine space ${\EE^3_1}$. A {{\mathsf e}m future-pointing timelike ray\/} is a ray $q + \mathbb R_+ {\mathsf w}$, where $q\in{\EE^3_1}$ and ${\mathsf w}\in{\R^3_1}$ is a future-pointing timelike vector. Parallelism defines an equivalence relation on future-pointing timelike rays, and points of ${\mathbf H}^2$ identify with equivalence classes of future-pointing timelike rays. Denote by $[q + \mathbb R_+ {\mathsf w}]$ the point in ${\mathbf H}^2$ corresponding to the equivalence class of the ray $q + \mathbb R_+ {\mathsf w}$. Geodesics in ${\mathbf H}^2$ identify with parallelism classes of indefinite affine planes; a point in ${\mathbf H}^2$ is incident to a geodesic if and only if the corresponding future-pointing timelike ray and indefinite affine plane are parallel. A half-space $H$ in ${\EE^3_1}$ bounded by an indefinite affine plane determines a half-plane ${\mathfrak H}\mathfrak{s}ubset {\mathbf H}^2$. A point $[q + \mathbb R_+ {\mathsf w}]$ in ${\mathbf H}^2$ lies in ${\mathfrak H}$ if and only if $q+\mathbb R_+{\mathsf w}$ intersects $H$ in a ray, that is, $q + t{\mathsf w} \in H$ for $t \muchbigger 0$. Dually, geodesics in ${\mathbf H}^2$ correspond to spacelike lines, since the Lorentz-orthogonal plane of a spacelike vector is indefinite. In fact, if $l=\mathbb R\vo{g_i}$, then the null vectors $\ypm{(\vo{g_i})}$ respectively project to the ideal points $\vpm{i}$. Furthermore spacelike vectors correspond to oriented geodesics, or, equivalently, to half-planes in ${\mathbf H}^2$. A spacelike vector spans a unique spacelike ray, which contains a unique unit spacelike vector ${\mathsf v}$. 
The corresponding half-plane is \begin{equation*} {\mathfrak H}({\mathsf v}) := \{ [q + \mathbb R_+ {\mathsf w}]\in{\mathbf H}^2 \mid \ldot{{\mathsf w}}{{\mathsf v}} \ge 0 \} . {\mathsf e}nd{equation*} Extending terminology from ${\mathbf H}^2$ to ${\R^3_1}$, we say that two spacelike vectors ${\mathsf u},{\mathsf v}\in{\R^3_1}$ are: \begin{itemize} \item {{\mathsf e}m ultraparallel} if ${\mathsf u}\boxtimes{\mathsf v}$ is spacelike; \item {{\mathsf e}m asymptotic} if ${\mathsf u}\boxtimes{\mathsf v}$ is null; \item {{\mathsf e}m crossing} if ${\mathsf u}\boxtimes{\mathsf v}$ is timelike. {\mathsf e}nd{itemize} \mathfrak{s}ection{Crooked planes and half-spaces}\label{sec:cp} Crooked planes are Lorentzian analogs of equidistant surfaces. We will think of a triple of crooked planes as the natural extension of a lamination. We will see how to get pairwise disjoint crooked plane triples, yielding proper affine deformations of the linear holonomy. In this section, we define crooked planes and discuss criteria for disjointness. Here is a somewhat technical, yet important, point. What we call crooked planes and half-spaces should really be called {{\mathsf e}m positively extended\/} crooked planes and half-spaces. We require crooked planes to be positively extended when the signs of the Margulis invariants are positive. But for the case of negative Margulis invariants, we must use {{\mathsf e}m negatively extended crooked planes\/}. As the arguments are essentially the same up to a change in sign, for the rest of the paper we will restrict to the case of positive signs. The curious reader should consult~\cite{DrummGoldman1}. (In that paper the crooked planes are called positively or negatively {{\mathsf e}m oriented\/}). Given a null vector ${\mathsf x}\in{\R^3_1}$, set $\mathcal P({\mathsf x})$ to be the set of (spacelike) vectors ${\mathsf w}$ such that $\yp{{\mathsf w}}$ is parallel to ${\mathsf x}$. This half-plane in the Lorentz-orthogonal plane ${\mathsf x}^\perp$ is a connected component of ${\mathsf x}^\perp \mathfrak{s}etminus \langle{\mathsf x}\rangle$. If ${\mathsf v}$ is a spacelike vector, then \begin{align*} {\mathsf v}&\in\mathcal P(\yp{{\mathsf v}}) \\ -{\mathsf v}& \in\mathcal P(\ym{{\mathsf v}}). {\mathsf e}nd{align*} Let $p\in{\EE^3_1}$ be a point and ${\mathsf v}\in{\R^3_1}$ a spacelike vector. Define the {{\mathsf e}m crooked plane} ${\mathcal C}({\mathsf v},p)\mathfrak{s}ubset{\EE^3_1}$ with {{\mathsf e}m vertex\/} $p$ and {{\mathsf e}m direction vector\/} ${\mathsf v}$ to be the union of two {{\mathsf e}m wings} \begin{align*} & p +\mathcal P(\yp{{\mathsf v}})\\ & p +\mathcal P(\ym{{\mathsf v}}) {\mathsf e}nd{align*} and a {{\mathsf e}m stem} \begin{equation*} p +\ \{{\mathsf x}\in{\R^3_1} \mid\ \ldot{{\mathsf v}}{{\mathsf x}} = 0, \ldot{{\mathsf x}}{{\mathsf x}} \le 0 \} . {\mathsf e}nd{equation*} Each wing is a half-plane, and the stem is the union of two quadrants in an indefinite plane. The crooked plane itself is a piecewise linear submanifold, which stratifies into four connected open subsets of planes (two wings and the two components of the interior of the stem), four null rays, and a vertex. \begin{defn} Let ${\mathsf v}$ be a spacelike vector and $p\in{\EE^3_1}$.
The {{\mathsf e}m crooked half-space} determined by ${\mathsf v}$ and $p$, denoted $\mathsf H({\mathsf v},p)$, consists of all $q\in{\EE^3_1}$ such that: \begin{itemize} \item $\ldot{(q-p)}{\yp{{\mathsf v}}}\leq0$ if $\ldot{(q-p)}{{\mathsf v}}\geq 0$; \item $\ldot{(q-p)}{\ym{{\mathsf v}}}\geq0$ if $\ldot{(q-p)}{{\mathsf v}}\leq 0$; \item Both conditions must hold for $q-p\in{\mathsf v}^\perp$. {\mathsf e}nd{itemize} {\mathsf e}nd{defn} \noindent Observe that ${\mathcal C}({\mathsf v},p)={\mathcal C}(-{\mathsf v},p)$. In contrast, the crooked half-spaces $\mathsf H({\mathsf v},p)$ and $ \mathsf H(-{\mathsf v},p)$ are distinct spaces. Their union and intersection are respectively: \begin{align*} \mathsf H({\mathsf v},p)\,\cup\, \mathsf H(-{\mathsf v},p) & ={\EE^3_1} \\ \mathsf H({\mathsf v},p)\,\cap\, \mathsf H(-{\mathsf v},p) & ={\mathcal C}({\mathsf v},p)= {\mathcal C}(-{\mathsf v},p). {\mathsf e}nd{align*} Crooked half-spaces in ${\EE^3_1}$ determine half-planes in ${\mathbf H}^2$ as follows. As in the preceding section, a point in ${\mathbf H}^2$ corresponds to the equivalence class of a future-pointing timelike ray. \begin{lemma}\label{lem:halfspace} Let $p,q\in{\EE^3_1}$ and ${\mathsf v},{\mathsf w}\in{\R^3_1}$ spacelike. Suppose that $\mathsf H({\mathsf v},p)$ is a crooked half-space and that $\ldot{{\mathsf w}}{{\mathsf v}}\neq0$. Then $q + t{\mathsf w}\in \mathsf{int}\left(\mathsf H({\mathsf v},p)\right)$ for $t\muchbigger 0$ if and only if $[q + t{\mathsf w}]\in \mathsf{int}\left({\mathfrak H}({\mathsf v})\right)$. {\mathsf e}nd{lemma} \begin{proof} It suffices to consider the case that $p = 0$ and \begin{equation*} {\mathsf v} = \bmatrix 1 \\ 0 \\ 0 {\mathsf e}ndbmatrix, {\mathsf e}nd{equation*} that is, \begin{equation*} \mathsf H({\mathsf v},p) \;=\; \left\{ \bmatrix x \\ y \\ z {\mathsf e}ndbmatrix \;\bigg{|}\; y + z \ge 0 \text{~if~} x \ge 0 \\ \text{~and~} y - z \ge 0 \text{~if~} x \le 0 \right\}. {\mathsf e}nd{equation*} By applying an automorphism preserving $\mathsf H({\mathsf v},p)$, we may assume \begin{equation*} q = \bmatrix x_0 \\ y_0 \\ z_0 {\mathsf e}ndbmatrix, \quad {\mathsf w} = \bmatrix d \\ 0 \\ 1 {\mathsf e}ndbmatrix. {\mathsf e}nd{equation*} where $\vert d \vert < 1$. Set $q(t) := q + t{\mathsf w}$. For any value of $d$, $q(t)$ eventually satisfies \begin{equation*} y + z = y_0 + z_0 + t > 0 {\mathsf e}nd{equation*} for $t\muchbigger 0$ and $ y -z<0$. The point $[q + t{\mathsf w}]$ lies in the interior $\mathsf{int}\left({\mathfrak H}({\mathsf v})\right)$ when $d > 0$. In this case, $q(t)$ eventually satisfies \begin{equation*} x = x_0 + t d > 0. {\mathsf e}nd{equation*} Thus $q(t)\in \mathsf{int}\left(\mathsf H({\mathsf v},p)\right)$. Conversely, if $[q + t{\mathsf w}] \in {\mathfrak H}(-{\mathsf v})$, then $d<0$. If $t\muchbigger0$, then $ x< 0$. Therefore $q(t)\notin \mathsf{int}\left(\mathsf H({\mathsf v},p)\right)$ as desired. {\mathsf e}nd{proof} \mathfrak{s}ection{Disjointness of crooked half-spaces} By~\cite{Drumm1,Drumm2} (see \cite{CG} for another exposition), the complement of a disjoint union of crooked half-spaces with pairwise identifications of its boundary defines a fundamental polyhedron for the group generated by the identifications. This section develops criteria for when two crooked half-spaces are disjoint. Lemma~\ref{lem:inhalfspace} reduces disjointness of crooked half-spaces to disjointness of crooked planes. 
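Before turning to the criteria, it may help to keep in mind the model case already used in the proof of Lemma~\ref{lem:halfspace}. In the coordinates used there, with $p=0$ and ${\mathsf v}=(1,0,0)$, a direct check from the definitions (with the labelling of ${\mathsf v}^{\pm}$ fixed in \S\ref{sec:alpha}) gives
$$ {\mathcal C}({\mathsf v},0) \;=\; \{\, y+z=0,\ x\ge 0 \,\}\;\cup\;\{\, y-z=0,\ x\le 0 \,\}\;\cup\;\{\, x=0,\ y^2\le z^2 \,\}, $$
where the first two pieces are the wings, each lying in a null plane, and the last piece is the stem, the two quadrants $y^2\le z^2$ in the plane $x=0$.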
We need only consider pairs of crooked half-spaces in the case of ultraparallel or asymptotic vectors: when ${\mathsf u}$ and ${\mathsf v}$ are crossing, ${\mathcal C}({\mathsf u},p)$ and ${\mathcal C}({\mathsf v},p)$ always intersect~\cite{DrummGoldman1}. Theorem~\ref{thm:UltraCPCP} and Theorem~\ref{thm:AsCPCP} provide criteria for disjointness for crooked planes, and were established in \cite{DrummGoldman1}. Their respective corollaries, Corollary~\ref{cor:seammove} and Corollary~\ref{cor:seammoveII}, provide more useful criteria in terms of the direction vectors. \begin{defn} \label{def:co} Spacelike vectors ${\mathsf v}_1, \dots, {\mathsf v}_n\in{\R^3_1}$ are {{\mathsf e}m consistently oriented} if and only if, whenever $i\neq j$, \begin{itemize} \item $\ldot{{\mathsf v}_i}{{\mathsf v}_j}<0$; \item $\ldot{{\mathsf v}_i}{\ypm{{\mathsf v}_j}}\leq 0$. {\mathsf e}nd{itemize} \end{defn} \noindent The second requirement implies that the ${\mathsf v}_i$ are pairwise ultraparallel or asymptotic. Equivalently, ${\mathsf v}_i,{\mathsf v}_j,i\neq j$ are consistently oriented if and only if the interiors of the half-planes ${\mathfrak H}({\mathsf v}_i)$ and ${\mathfrak H}({\mathsf v}_j)$ are disjoint. (See~\cite{G}, \S 4.2.1 for details.) \begin{lemma}\label{lem:inhalfspace} Suppose ${\mathsf u},{\mathsf v}$ are consistently oriented, $p\in{\EE^3_1}$, ${\mathsf w}\in{\R^3_1}$, and ${\mathcal C}({\mathsf u},p)$ and ${\mathcal C}({\mathsf v},p+{\mathsf w})$ are disjoint. Then ${\mathcal C}({\mathsf v},p+{\mathsf w})\mathfrak{s}ubset \mathsf H(-{\mathsf u},p)$. {\mathsf e}nd{lemma} \begin{proof} Because \begin{equation*} {\EE^3_1}\mathfrak{s}etminus {\mathcal C}({\mathsf u},p)\;=\; \mathsf{int}\left(\mathsf H({\mathsf u},p)\right)\,\cup\, \mathsf{int}\left(\mathsf H(-{\mathsf u},p)\right), {\mathsf e}nd{equation*} either ${\mathcal C}({\mathsf v},p+{\mathsf w})\mathfrak{s}ubset \mathsf H({\mathsf u},p)$ or ${\mathcal C}({\mathsf v},p+{\mathsf w})\mathfrak{s}ubset \mathsf H(-{\mathsf u},p)$. Suppose that ${\mathcal C}({\mathsf v},p+{\mathsf w})\mathfrak{s}ubset \mathsf H({\mathsf u},p)$. The future-pointing timelike rays on ${\mathcal C}({\mathsf v},p+{\mathsf w})$ lie on the stem of ${\mathcal C}({\mathsf v},p+{\mathsf w})$ and correspond to the geodesic $\partial{\mathfrak H}({\mathsf v})$. Since every future-pointing timelike ray on ${\mathcal C}({\mathsf v},p+{\mathsf w})$ lies entirely in $\mathsf H({\mathsf u},p)$, Lemma~\ref{lem:halfspace} implies that $$ \partial{\mathfrak H}({\mathsf v}) \mathfrak{s}ubset {\mathfrak H}({\mathsf u}). $$ Since ${\mathsf u},{\mathsf v}$ are consistently oriented, the half-planes ${\mathfrak H}({\mathsf u})$ and ${\mathfrak H}({\mathsf v})$ have disjoint interiors, and ${\mathfrak H}({\mathsf v})\mathfrak{s}ubset {\mathfrak H}(-{\mathsf u})$, a contradiction. Thus ${\mathcal C}({\mathsf v},p+{\mathsf w})\mathfrak{s}ubset \mathsf H(-{\mathsf u},p)$ as desired. {\mathsf e}nd{proof} \begin{thm} \label{thm:UltraCPCP} Let ${\mathsf v}_1$ and ${\mathsf v}_2$ be consistently oriented, ultraparallel, unit spacelike vectors and $p_1,p_2\in{\EE^3_1}$. The crooked planes ${\mathcal C}({\mathsf v}_1,p_1)$ and ${\mathcal C}({\mathsf v}_2,p_2)$ are disjoint if and only if \begin{equation}\label{eq:DisUltraCPCP} \ldot{(p_2-p_1)}{({\mathsf v}_1\boxtimes{\mathsf v}_2)} > \vert\ldot{(p_2-p_1)}{{\mathsf v}_2}\vert + \vert\ldot{(p_2-p_1)}{{\mathsf v}_1}\vert .
{\mathsf e}nd{equation} {\mathsf e}nd{thm} \begin{cor}\label{cor:seammove} Let ${\mathsf v}_1, {\mathsf v}_2\in{\R^3_1}$ be consistently oriented, ultraparallel vectors. Suppose \begin{equation*} p_i=a_i\ym{{\mathsf v}_i}-b_i\yp{{\mathsf v}_i}, {\mathsf e}nd{equation*} for $a_i,b_i>0$, $i=1,2$. Then ${\mathcal C}({\mathsf v}_1,p_1)$ and ${\mathcal C}({\mathsf v}_2,p_2)$ are disjoint. {\mathsf e}nd{cor} \begin{proof} Rescaling if necessary, assume ${\mathsf v}_1,~{\mathsf v}_2$ are unit spacelike. By Lemmas~\ref{lem:crossprod} and~\ref{lem:xpxo}, \begin{align*} \ldot{\yp{{\mathsf v}_i}}{({\mathsf v}_i\boxtimes{\mathsf v}_j)}& \;=\; \ldot{\yp{{\mathsf v}_i}}{{\mathsf v}_j} \\ \ldot{\ym{{\mathsf v}_i}}{({\mathsf v}_i\boxtimes{\mathsf v}_j)}& \;=\; -\ldot{\ym{{\mathsf v}_i}}{{\mathsf v}_j}. {\mathsf e}nd{align*} for $i\neq j$. Consequently: \begin{align}\label{eq:UltraLHS} \ldot{(p_2-p_1)}{({\mathsf v}_1\boxtimes{\mathsf v}_2)} &\;=\;-\ldot{(a_2\ym{{\mathsf v}_2} +b_2\yp{{\mathsf v}_2})}{{\mathsf v}_1} \,-\,\ldot{(a_1\ym{{\mathsf v}_1}+b_1\yp{{\mathsf v}_1})}{{\mathsf v}_2}\notag\\ &\;=\; -\ldot{a_2\ym{{\mathsf v}_2}}{{\mathsf v}_1} \,-\, \ldot{b_2\yp{{\mathsf v}_2}}{{\mathsf v}_1} \,-\, \ldot{ a_1\ym{{\mathsf v}_1}}{{\mathsf v}_2} \,-\, \ldot{b_1\yp{{\mathsf v}_1}}{{\mathsf v}_2} \notag \\ &\;>\; \vert\ldot{(a_2\ym{{\mathsf v}_2} -b_2\yp{{\mathsf v}_2})}{{\mathsf v}_1}\vert \notag \,+\,\vert\ldot{ ( a_1\ym{{\mathsf v}_1} -b_1 \yp{{\mathsf v}_1} )}{{\mathsf v}_2}\vert. {\mathsf e}nd{align} The above inequality follows because each term in the previous expression is positive (since ${\mathsf v}_1,{\mathsf v}_2$ are consistently oriented). Finally: \begin{align*} \vert\ldot{(p_2-p_1)}{{\mathsf v}_2}\vert & =\; \vert\ldot{(a_1\ym{{\mathsf v}_1} -b_1 \yp{{\mathsf v}_1})}{{\mathsf v}_2}\vert \\\ \vert\ldot{(p_2-p_1)}{{\mathsf v}_1}\vert & =\; \vert\ldot{(a_2\ym{{\mathsf v}_2} -b_2 \yp{{\mathsf v}_2})}{{\mathsf v}_1}\vert . {\mathsf e}nd{align*} {\mathsf e}nd{proof} \noindent Alternatively, ${\mathcal C}({\mathsf v}_1,p_1)$ and ${\mathcal C}({\mathsf v}_2,p_2)$ are disjoint if and only if $p_2-p_1$ lies in the cone spanned by the four vectors \begin{equation*} \ym{{\mathsf v}_2},\; -\yp{{\mathsf v}_2} ,\; - \ym{{\mathsf v}_1},\; \yp{{\mathsf v}_1}. {\mathsf e}nd{equation*} In fact, we allow $a_1=b_1=0$ or $a_2=b_2=0$ since $p_2-p_1$ would still lie in the open cone. If three of the four coefficients $a_i,b_i$ are zero, then the crooked planes intersect in a single point, on the edges of the stems. Assume now that ${\mathsf v}_1,{\mathsf v}_2\in{\R^3_1}$ are consistently oriented, asymptotic vectors. Assume, without loss of generality: \begin{equation*} \ym{{\mathsf v}_1}=\yp{{\mathsf v}_2}. {\mathsf e}nd{equation*} \begin{thm}\label{thm:AsCPCP} Let ${\mathsf v}_1$ and ${\mathsf v}_2$ be consistently oriented, asymptotic vectors such that $\ym{{\mathsf v}_1}=\yp{{\mathsf v}_2}$, and $p_1,p_2\in{\EE^3_1}$. The crooked planes ${\mathcal C}({\mathsf v}_1,p_1)$ and ${\mathcal C}({\mathsf v}_2,p_2)$ are disjoint if and only if: \begin{align} &\ldot{(p_2-p_1)}{{\mathsf v}_1} \;<\; 0, \notag \\ &\ldot{(p_2-p_1)}{{\mathsf v}_2} \;<\; 0, \notag\\ &\ldot{(p_2-p_1)}{(\yp{{\mathsf v}_1} \boxtimes \ym{{\mathsf v}_2})} \;>\; 0. \label{eq:asymptotic} {\mathsf e}nd{align} {\mathsf e}nd{thm} \noindent As in the ultraparallel case, Theorem~\ref{thm:AsCPCP} provides criteria for when ${\mathcal C}({\mathsf v}_1,p_1)$ and ${\mathcal C}({\mathsf v}_2,p_2)$ are disjoint. 
\begin{cor}\label{cor:seammoveII} Let ${\mathsf v}_1, {\mathsf v}_2\in{\R^3_1}$ be consistently oriented, asymptotic vectors such that $\ym{{\mathsf v}_1}=\yp{{\mathsf v}_2}$. Suppose \begin{equation*} p_i=a_i\ym{{\mathsf v}_i} -b_i \yp{{\mathsf v}_i}, {\mathsf e}nd{equation*} where $a_i,b_i>0$ for $i=1,2$. Then ${\mathcal C}({\mathsf v}_1,p_1)$ and ${\mathcal C}({\mathsf v}_2,p_2)$ are disjoint. {\mathsf e}nd{cor} \begin{proof} Set \begin{equation*} \ym{{\mathsf v}_i}\boxtimes\yp{{\mathsf v}_i}=\kappa_i^2{\mathsf v}_i, {\mathsf e}nd{equation*} for $i=1,2$. Then: \begin{align*} \ldot{(p_2-p_1)}{{\mathsf v}_1} & =a_2\ldot{\ym{{\mathsf v}_2}}{{\mathsf v}_1}<0 \\ \ldot{(p_2-p_1)}{{\mathsf v}_2} & =b_1\ldot{\yp{{\mathsf v}_1}}{{\mathsf v}_2}<0 {\mathsf e}nd{align*} and: \begin{align}\label{eq:asy3} \ldot{(p_2-p_1)}{\left(\yp{{\mathsf v}_1} \boxtimes\ym{{\mathsf v}_2}\right)} &=\; -b_2\ldot{\yp{{\mathsf v}_2}}{\left( \yp{{\mathsf v}_1}\boxtimes\ym{{\mathsf v}_2}\right)} \,-\, a_1\ldot{\ym{{\mathsf v}_1}}{\left( \yp{{\mathsf v}_1}\boxtimes\ym{{\mathsf v}_2}\right)}\notag \\ &=\; -b_2\kappa_2^2 \left( \ldot{\ym{{\mathsf v}_1}}{{\mathsf v}_2}\right) \,-\, a_1\kappa_1^2\left( \ldot{\yp{{\mathsf v}_2}}{{\mathsf v}_1}\right) \notag\\ &>\; 0. {\mathsf e}nd{align} {\mathsf e}nd{proof} \noindent As in the ultraparallel case, we obtain disjoint crooked planes if and only if $p_2-p_1$ lies in a cone spanned by three rays. In Equation~{\mathsf e}qref{eq:asy3}, we allow $b_2=0$ or $a_1=0$ simply because $\ym{{\mathsf v}_1}=\yp{{\mathsf v}_2}$. If $a_2=0$, $b_1=0$ or $a_1=b_2=0$, then the crooked planes intersect in a null ray. \mathfrak{s}ection{Crooked fundamental domains}\label{sub:crookedfd} Now look at how collections of pairwise disjoint crooked planes correspond to groups acting properly on ${\EE^3_1}$. Let ${\mathsf v}$, ${\mathsf v}'\in{\R^3_1}$ be two spacelike vectors. Suppose $\alphamma\in{\mathsf{G}}$ and $p,p'\in{\EE^3_1}$ satisfy: \begin{equation*} \alphamma({\mathcal C}({\mathsf v},p))={\mathcal C}({\mathsf v}',p'). {\mathsf e}nd{equation*} Then $\alphamma(p)=p'$ and $\mathsf L(\alphamma)({\mathsf v})$ is a scalar multiple of ${\mathsf v}'$. In particular, $\alphamma\big(\mathsf H({\mathsf v},p)\big)$ is one of the two crooked half-spaces bounded by ${\mathcal C}({\mathsf v}',p')$. \begin{thm}\label{thm:disjointCPs} Suppose that $\mathsf H({\mathsf v}_i,p_i)$ are $2n$ pairwise disjoint crooked half-spaces and $\gamma_1, \ldots \gamma_n \in{\Gamma}$ such that for all $i$, \begin{equation*} \alphamma_i\left(\mathsf H({\mathsf v}_{-i},p_{-i})\right) \;=\; {\EE^3_1}\mathfrak{s}etminus \mathsf{int} \left(\mathsf H({\mathsf v}_{i},p_i)\right). {\mathsf e}nd{equation*} Then $\langle \gamma_1, \ldots \gamma_n \rangle$ acts freely and properly on ${\EE^3_1}$ with fundamental domain \begin{equation*} {\EE^3_1} \mathfrak{s}etminus \bigcup_{-n\leq i\leq n}\mathsf{int}\left( \mathsf H({\mathsf v}_i,p_i) \right) . {\mathsf e}nd{equation*} {\mathsf e}nd{thm} \begin{proof} By the assumption \begin{equation*} \alphamma_i\left(\mathsf H({\mathsf v}_{-i},p_{-i})\right) \,=\, {\EE^3_1}\,\mathfrak{s}etminus\mathsf{int} \left(\mathsf H({\mathsf v}_{i},p_i)\right), {\mathsf e}nd{equation*} the vectors ${\mathsf v}_{\pm i}$ either cross or are parallel to $\vo{g_i}$. The theorem is shown in \cite{Drumm1, Drumm2}, assuming, in the case of hyperbolic $\gamma_i$, that the vector ${\mathsf v}_{i}$ crosses the fixed vector $\vo{g_i}$. (The vectors ${\mathsf v}_i$ are parallel to $\vo{g_i}$ for parabolic $\gamma_i$.) 
However, the methods used in \cite{Drumm1, Drumm2} extend to the case of hyperbolic generators with ${\mathsf v}_{\pm i}$ parallel to $\vo{g_i}$. In particular, the compression of a tubular neighborhood around lines which touch a boundary crooked plane at a point in particular transverse directions is bounded from below. {\mathsf e}nd{proof} These fundamental domains notably differ from the standard construction (as in \cite{Drumm2}). A crooked fundamental domain $\Delta$ in ${\EE^3_1}$ for ${\Gamma}amma$ determines a polygon $\delta$ in ${\mathbf H}^2$ for $\mathsf L({\Gamma}amma)$; the stems of $\partial\Delta$ define lines in ${\mathbf H}^2$ bounding $\delta$. However, while ${\Gamma}amma\cdot\Delta \,=\, {\EE^3_1}$, the union $\mathsf L({\Gamma}amma)\cdot\delta$ may only be a {{\mathsf e}m proper\/} open subset of ${\mathbf H}^2$. In the present case, this is the universal covering of the interior of the convex core of $\Sigma$. The convex core is an incomplete hyperbolic surface bounded by three closed geodesics. In contrast, the flat Lorentz manifold ${\EE^3_1}/{\Gamma}amma$ is {{\mathsf e}m complete.} While the hyperbolic fundamental domains $\mathsf L(\alphamma)(\delta)$ only fill a proper subset of ${\mathbf H}^2$, the crooked fundamental domains $\alphamma(\Delta)$ fill all of ${\EE^3_1}$. Theorem~\ref{thm:disjointCPs} extends to the case when two of the crooked planes intersect in a single point. \begin{lemma} \label{lem:kissing1} Let ${\mathsf v}_{-2},{\mathsf v}_{-1},{\mathsf v}_1,{\mathsf v}_2\in{\R^3_1}$ be consistently oriented vectors and suppose $p_{-1},p_1,p_2\in{\EE^3_1}$ satisfy: \begin{align*} {\mathcal C}({\mathsf v}_{-2},p_{-1})\,\cap\,{\mathcal C}({\mathsf v}_2,p_2) &=\;{\mathsf e}mptyset\\ {\mathcal C}({\mathsf v}_{-1},p_{-1})\,\cap\,{\mathcal C}({\mathsf v}_1,p_1) &=\;{\mathsf e}mptyset \\ {\mathcal C}({\mathsf v}_{1},p_{1})\,\cap\,{\mathcal C}({\mathsf v}_2,p_2) &=\;{\mathsf e}mptyset . {\mathsf e}nd{align*} Then there exists $p_{-2}\in{\EE^3_1}$ such that the crooked planes ${\mathcal C}({\mathsf v}_{-2},p_{-2})$ and ${\mathcal C}({\mathsf v}_2,\alphamma_2(p_{-2}))$ are each disjoint from ${\mathcal C}({\mathsf v}_1,p_1)$. {\mathsf e}nd{lemma} \begin{proof} Let $\mathsf H({\mathsf v}_0,p_{-1})$ be the smallest crooked half-space containing both $\mathsf H({\mathsf v}_{-2},p_{-1})$ and $\mathsf H({\mathsf v}_{-1},p_{-1})$. Then \begin{equation*} \mathsf H({\mathsf v}_0,p_{-1}),\;\mathsf H({\mathsf v}_1,p_{1}),\; \mathsf H({\mathsf v}_2,p_{2}) {\mathsf e}nd{equation*} are pairwise disjoint. Disjointness of crooked planes is an open condition. Therefore there exists ${\mathsf e}psilon>0$ such that for any ${\mathsf u}\in{\R^3_1}$ of Euclidean norm less than ${\mathsf e}psilon$, the crooked plane ${\mathcal C}({\mathsf v}_{2},p_{2}+{\mathsf u})$ remains disjoint from ${\mathcal C}({\mathsf v}_{0},p_{-1})$ and ${\mathcal C}({\mathsf v}_{1},p_{1})$. Corollaries~\ref{cor:seammove} and~\ref{cor:seammoveII} imply the existence of a $p_{-2}$ such that ${\mathcal C}({\mathsf v}_{-2},p_{-2})$ is disjoint from both ${\mathcal C}({\mathsf v}_{0},p_{-1})$ and ${\mathcal C}({\mathsf v}_{-1},p_{-1})$. The set of choices being closed under positive rescaling, one can choose $p_{-2}$ close enough to $p_{-1}$ so that $\alphamma_2(p_{-2})$ is within an ${\mathsf e}psilon$-neighborhood of $p_2$. Lemma~\ref{lem:inhalfspace} implies: \begin{equation*} {\mathcal C}({\mathsf v}_{-2},p_{-2})\mathfrak{s}ubset\mathsf H({\mathsf v}_0,p_{-1}). 
{\mathsf e}nd{equation*} In particular, ${\mathcal C}({\mathsf v}_{-2},p_{-2})$ is disjoint from each ${\mathcal C}({\mathsf v}_1,p_1)$ and ${\mathcal C}({\mathsf v}_2,\alphamma_2(p_{-2}))$ as claimed. {\mathsf e}nd{proof} \mathfrak{s}ection {The space of proper affine deformations} \label{sec:cocycles} Recall the presentation of the fundamental group of $\Sigma$ in Equation~{\mathsf e}qref{eq:present}. We parametrize the space of translational conjugacy classes $\mathsf H^1(\G_0,\V)$ of affine deformations of ${\Gamma}_0$ by Margulis invariants corresponding to $g_1$, $g_2$, $g_3$. Positivity of the three signs will guarantee a triple of crooked planes arising from the lamination described in~\S\ref{sec:THS}. (Alternatively, if the signs are all negative, use negatively extended crooked planes~\cite{DrummGoldman1} as mentioned in~\S\ref{sec:cp}.) The existence of such a crooked polyhedron thereby completes the proof of Theorem~A. We begin with the parametrization of $\mathsf H^1(\G_0,\V)$. \begin{lemma}\label{lem:muiso} Let $\pi$ denote a free group of rank two with presentation \begin{equation*} \langle A_1, A_2, A_3 \mid A_1 A_2 A_3 \,=\, \mathbb I \rangle. {\mathsf e}nd{equation*} Let $\pi\xrightarrow{\rho_0}{\mathsf{SO}(2,1)^0}$ be a homomorphism such that $\rho_0(A_i) \,\in\, {\mathsf{Hyp}}_0 \cup {\mathsf{Par}}_0$ for $i=1,2,3$. Suppose that $\rho(\pi)$ is not solvable. For each $i$ choose a vector ${\mathsf v}_i\in {\mathsf{Fix}}\big(\rho_0(A_i)\big)$ positive with respect to $\rho_0(A_i)$ and define \begin{align*} \mathsf H^1(\G_0,\V) &\xrightarrow{\mu_i} \mathbb R \\ [u] &\longmapsto\; \na{{\mathsf v}_i}(\rho(A_i)) \,=\, u(A_i) \cdot {\mathsf v}_i {\mathsf e}nd{align*} where $\rho$ is the affine deformation corresponding to $u$. Then \begin{align*} \mathsf H^1(\G_0,\V) & \xrightarrow{\mu} \mathbb R^3 \\ \mu: [u] & \longmapsto \bmatrix \mu_1([u]) \\ \mu_2([u]) \\ \mu_3([u]) {\mathsf e}ndbmatrix {\mathsf e}nd{align*} is a linear isomorphism of vector spaces. {\mathsf e}nd{lemma} Of course this lemma is much more general than our specific application. In our application $\rho_0$ is an isomorphism of $\pi = \pi_1(\Sigma)$ onto the discrete subgroup ${\Gamma}_0\mathfrak{s}ubset{\mathsf{SO}(2,1)^0}$, and corresponds to a complete hyperbolic three-holed sphere $\mathsf{int}(\Sigma)$. The generators $A_1,A_2,A_3$ correspond to the three components of $\partial\Sigma.$ The proof of Lemma~\ref{lem:muiso} is postponed to the Appendix. As in \S\ref{sec:alpha}, choose a positive vector $\vo{i}:=\vo{g_i}\in{\mathsf{Fix}}(g_i)$, further requiring that $\vo{i}$ be unit spacelike when $g_i$ is hyperbolic. With this fixed choice of positive vectors: \begin{equation*} \mu_i([u]) \;=\alpha_{[u]}(g_i). {\mathsf e}nd{equation*} We will now show that every positive cocycle $(\mu_1,\mu_2,\mu_3)\in\mathbb ZZ$ corresponds to a triple of mutually disjoint crooked planes arising from the geodesic lamination described in~\S\ref{sec:THS}. By a slight abuse of notation, set $\vpm{i}=\ypm{(\vo{i})}$ and $\vp{i}=\vm{i}=\vo{i}$ when $g_i$ is parabolic. The three consistently oriented unit spacelike vectors \begin{equation*} {\mathsf v}_i\;=\;\frac{-1}{\vp{i}\cdot\vp{i+1}}\,\vp{i}\boxtimes\vp{i+1} {\mathsf e}nd{equation*} correspond to the arcs joining $\vp{i}$ to $\vp{i+1}$ in ${\mathbf H}^2$. \begin{lemma} For $i=1,2,3$, choose $a_i,b_i>0$. For \begin{equation*} p_i\;:=\; a_i\vp{i}-b_i\vp{i+1} {\mathsf e}nd{equation*} the crooked planes ${\mathcal C}({\mathsf v}_i,p_i)$ are pairwise disjoint. 
{\mathsf e}nd{lemma} \begin{proof} Each pair being asymptotic, we verify condition {\mathsf e}qref{eq:asymptotic} in Theorem~\ref{thm:AsCPCP}. We check this for ${\mathcal C}({\mathsf v}_1,p_1)$ and ${\mathcal C}({\mathsf v}_2,p_2)$; the other cases follow from cyclic symmetry. \begin{itemize} \item $(p_2-p_1)\cdot{\mathsf v}_1>0$: \begin{align*} (p_2-p_1)\cdot{\mathsf v}_1& = \frac{-1}{\vp{1}\cdot\vp{2}} (a_2\vp{2}-b_2\vp{3}-a_1\vp{1}+b_1\vp{2})\cdot (\vp{1}\boxtimes\vp{2}) \\ & =\frac{b_2}{\vp{1}\cdot\vp{2}}\vp{3}\cdot (\vp{1}\boxtimes\vp{2})>0. {\mathsf e}nd{align*} \item $(p_2-p_1)\cdot{\mathsf v}_2>0$: \begin{equation*} (p_2-p_1)\cdot{\mathsf v}_2 = \frac{a_1}{\vp{1}\cdot\vp{2}} \vp{1}\cdot (\vp{2}\boxtimes\vp{3}) >0. {\mathsf e}nd{equation*} \item $(p_2-p_1)\cdot (\vp{1}\boxtimes\vp{2})>0$: \begin{equation*} (p_2-p_1)\cdot (\vp{1}\boxtimes\vp{2}) =-b_2\vp{3}\cdot (\vp{1}\boxtimes\vp{2}) >0. {\mathsf e}nd{equation*} {\mathsf e}nd{itemize} {\mathsf e}nd{proof} \begin{proof}[Conclusion of proof of Theorem~A] Consider the following four crooked planes: \begin{align*} &{\mathcal C}({\mathsf v}_3,p_3),~{\mathcal C}(g_1({\mathsf v}_3),p_1)\mathfrak{s}ubset\mathsf H({\mathsf v}_1,p_1)\\ &{\mathcal C}({\mathsf v}_2,p_2),~{\mathcal C}(g_2^{-1}({\mathsf v}_2),p_1)\mathfrak{s}ubset\mathsf H({\mathsf v}_1,p_1) {\mathsf e}nd{align*} Then apply Lemma~\ref{lem:kissing1} to obtain a crooked fundamental domain for the cocycle $u$ such that $u(g_i)=p_i-p_{i-1}$, $i=1,2,3$. Every positive cocycle arises in this way. Indeed, compute the Margulis invariant for the above cocycle $u$: \begin{eqnarray*} \mu_1 &= &(p_1-p_3)\cdot\vo{1}\\ &=&(a_1\vp{1}-b_1\vp{2}-a_3\vp{3}+b_3\vp{1}) \cdot\frac{-(\vm{1}\boxtimes\vp{1})}{\vm{1}\cdot\vp{1}}\\ & = &(-b_1\vp{2}-a_3\vp{3})\cdot\vo{1} {\mathsf e}nd{eqnarray*} (Omit the second line when $g_1$ is parabolic.) Recall that every product $\beta_{i,j}= -\vp{i}\cdot\vo{j}>0$. In matrix form: \begin{equation*} \begin{bmatrix} \mu_1 \\ \mu_2 \\ \mu_3 {\mathsf e}nd{bmatrix} = \begin{bmatrix} 0 & \beta_{2,1} & 0 & 0 & \beta_{3,1}& 0 \\ \beta_{1,2}& 0 & 0 &\beta_{3,2} & 0 & 0 \\ 0 & 0 &\beta_{2,3} & 0 & 0 & \beta_{1,3}{\mathsf e}nd{bmatrix} \begin{bmatrix} a_1 \\ b_1 \\ a_2 \\ b_2 \\ a_3 \\ b_3 {\mathsf e}nd{bmatrix}. {\mathsf e}nd{equation*} and every positive triple of values $(\mu_1,\mu_2,\mu_3)$ may be realized by choosing appropriate positive values of $a_i,b_i$. Explicitly, for $i=1,2,3$, choose $p_i,q_i>0$ with $p_i + q_i=1$, and define \begin{equation*} \begin{bmatrix} a_1 \\ b_1 \\ a_2 \\ b_2 \\ a_3 \\ b_3{\mathsf e}nd{bmatrix} \;=\; \begin{bmatrix} p_2\mu_2/\beta_{12} \\ q_1\mu_1/\beta_{21} \\ p_3\mu_3/\beta_{23} \\ q_2\mu_2/\beta_{32} \\ p_1\mu_1/\beta_{31} \\ q_3\mu_3/\beta_{13} {\mathsf e}nd{bmatrix}. {\mathsf e}nd{equation*} The proof of Theorem~A is complete.{\mathsf e}nd{proof} \mathfrak{s}ection {Embedding in an arithmetic group} As an application, we construct examples of proper affine deformations of a Fuchsian group as subgroups of the symplectic group ${\mathsf{Sp}(4,\R)}$. Consider a $4$-dimensional real symplectic vector space $S$ with a lattice $S_\mathbb Z$ such that the symplectic form takes values $\mathbb Z$ on $S_\mathbb Z$. Fix an {{\mathsf e}m integral\/} Lagrangian $2$-plane ${L_{\infty}}\mathfrak{s}ubset S$, that is, a Lagrangian $2$-plane generated by ${L_{\infty}}\cap S_\mathbb Z$. Our model for Minkowski space will be the space $\Lag_\infty$ of all Lagrangian $2$-planes $L\mathfrak{s}ubset S$ transverse to ${L_{\infty}}$. 
The underlying Lorentzian vector space is the space of linear maps $S/{L_{\infty}}\rightarrow {L_{\infty}}$ which are {{\mathsf e}m self-adjoint\/} in the sense described below. We denote by ${\mathsf{Aut}}(S)\cong {\mathsf{Sp}(4,\R)}$ the group of linear {{\mathsf e}m symplectomorphisms\/} of $S$. We denote by ${\mathsf{Aut}}({L_{\infty}})\cong{\Gamma}LtwR$ the group of linear automorphisms of the vector space ${L_{\infty}}$. The set of two-dimensional subspaces $L\mathfrak{s}ubset S$ transverse to ${L_{\infty}}$ admits a simply transitive action of the vector space $\mathsf{Hom}(S/{L_{\infty}},{L_{\infty}})$, as follows. Denote the inclusion and quotient mappings by \begin{equation*} {L_{\infty}} \mathfrak{s}tackrel{\iota}\hookrightarrow S \mathfrak{s}tackrel{\mathcal Pi}\twoheadrightarrow S/{L_{\infty}} {\mathsf e}nd{equation*} respectively. Let $L$ be a $2$-plane transverse to ${L_{\infty}}$ and $\phi\in\mathsf{Hom}(S/{L_{\infty}},{L_{\infty}})$. Define the action $\phi\cdot L$ of $\phi$ on $L$ as the {{\mathsf e}m graph\/} of the composition \begin{equation*} L \xrightarrow{\mathcal Pi} S/{L_{\infty}} \xrightarrow{\phi} {L_{\infty}} \mathfrak{s}tackrel{\iota}\hookrightarrow S, {\mathsf e}nd{equation*} that is, \begin{equation*} \phi\cdot L \;:=\; = \{ v + \iota\circ\phi\circ\mathcal Pi(v) \mid v\in L \}. {\mathsf e}nd{equation*} The vector group $\mathsf{Hom}(S/{L_{\infty}},{L_{\infty}})\cong\mathbb R^4$ acts simply transitively on the set of $2$-planes $L$ transverse to ${L_{\infty}}$ as claimed. Such a $2$-plane $L$ is Lagrangian if and only if the corresponding linear map $\phi$ is {{\mathsf e}m self-adjoint\/} as follows. Since $S$ is 4-dimensional and ${L_{\infty}}\mathfrak{s}ubset S$ is Lagrangian, the symplectic structure on $S$ defines an isomorphism of $S/{L_{\infty}}$ with the dual vector space ${L_{\infty}}star$. Let $\phi\in\mathsf{Hom}(S/{L_{\infty}},{L_{\infty}})$ be a linear map. Its {{\mathsf e}m transpose\/} $\phi^\mathsf{T} \in\mathsf{Hom}\big({L_{\infty}}^*,(S/{L_{\infty}})^*\big)$ is the map induced by $\phi$ on the dual spaces. Its {{\mathsf e}m adjoint\/} $\phi^*\in\mathsf{Hom}(S/{L_{\infty}},{L_{\infty}})$ is defined as the composition \begin{equation} S/{L_{\infty}} \xrightarrow{\cong} {L_{\infty}}star \xrightarrow{\phi^\mathsf{T}} (S/{L_{\infty}})^* \xrightarrow{\cong} {L_{\infty}} {\mathsf e}nd{equation} and the isomorphisms above arise from duality between $S/{L_{\infty}}$ and ${L_{\infty}}$. If $L\in \Lag_\infty$, and $\phi\in\mathsf{Hom}(S/{L_{\infty}},{L_{\infty}})$, then $\phi\cdot L$ is Lagrangian if and only if $\phi = \phi^*$, that is, $\phi$ is {{\mathsf e}m self-adjoint.} In this case $\phi$ corresponds to a symmetric bilinear form on $S/{L_{\infty}}$. Let $\mathcal Phi\cong\mathbb R^3$ denote the vector space of such self-adjoint elements $\phi$ of $\mathsf{Hom}(S/{L_{\infty}},{L_{\infty}})$. Then $\Lag_\infty$ is an affine space with underlying vector space of translations $\mathcal Phi$. Choose a fixed ${L_{0}}\in\Lag_\infty$. The symplectic form defines a nondegenerate bilinear form \begin{equation*} {L_{0}} \times {L_{\infty}} \longrightarrow \mathbb R {\mathsf e}nd{equation*} under which ${L_{0}}$ and ${L_{\infty}}$ are dual vector spaces and $S = {L_{\infty}} \operatornameplus {L_{0}}$. The restriction $\mathcal Pi|_{L_{0}}$ induces an isomorphism \begin{equation*} {L_{0}} \xrightarrow{\cong} S/{L_{\infty}}. 
{\mathsf e}nd{equation*} Given $\phi\in\mathcal Phi$, a self-adjoint endomorphism of $\mathsf{Hom}(S/{L_{\infty}},{L_{\infty}})$, the linear transformation of $S = {L_{0}} \operatornameplus {L_{\infty}}$ defined by the exponential map \begin{equation*} U_\phi \;:=\; {\mathsf e}xp \big(0 \operatornameplus \, (\phi\circ \mathcal Pi|_{L_{0}})\, \big) {\mathsf e}nd{equation*} is a unipotent linear symplectomorphism of $S$ which: \begin{itemize} \item acts identically on ${L_{\infty}}$; \item induces the identity on the quotient $S/{L_{\infty}}$. {\mathsf e}nd{itemize} Indeed, the exponential map is an isomorphism of the vector group $\mathcal Phi$ onto the subgroup of the linear symplectomorphism group of $S$ satisfying the above two properties. Every linear automorphism $A$ of ${L_{\infty}}$ extends to the linear symplectomorphism of $S= {L_{\infty}} \operatornameplus {L_{0}}$: \begin{equation*} \mathfrak{s}igma(A) \;:=\; A \operatornameplus (A^\mathsf{T})^{-1} . {\mathsf e}nd{equation*} Such linear symplectomorphisms stabilize the Lagrangian subspaces ${L_{\infty}}$ and ${L_{0}}$, and the image of ${\mathsf{Aut}}({L_{\infty}})$ is characterized by these properties. In particular ${\mathsf{Aut}}({L_{\infty}})$ normalizes the group ${\mathsf e}xp(\mathcal Phi)$ corresponding to translations. These two subgroups generate the subgroup of linear symplectomorphisms of $S$ which stabilize ${L_{\infty}}$. The vector space $\mathcal Phi$ has a natural {{\mathsf e}m Lorentzian\/} structure as follows. Identify $\mathcal Phi$ with the vector space ${\mathsf S}_2$ of $2\times 2$ symmetric matrices. The bilinear form \begin{align*} {\mathsf S}_2 \times {\mathsf S}_2 & \longrightarrow \mathbb R \\ X\cdot Y &\;\longmapsto\; \frac{\mathsf{tr}\left(XY\right) - \mathsf{tr}(X)\mathsf{tr}(Y)}{2} {\mathsf e}nd{align*} is a Lorentzian inner product of signature $(2,1)$. If $A\in{\mathsf{Aut}}({L_{\infty}})$, then \begin{equation*} A X \cdot A Y = (\det A)^2 X \cdot Y {\mathsf e}nd{equation*} so the subgroup ${\mathsf{SAut}}({L_{\infty}})$ of unimodular automorphisms acts isometrically with respect to this inner product. In this way $\Lag_\infty$ is a model for Minkowski space and ${\mathsf{SAut}}({L_{\infty}})$ acts by linear isometries. In particular, ${\mathsf e}xp(\mathcal Phi)$ corresponds to the group of translations. We describe this explicitly by matrices. Consider $\mathbb R^4$ with standard basis vectors ${\mathsf e}_k$ for $1\leq k \leq 4$. Endow $\mathbb R^4$ with the symplectic form such that: \begin{align*} \operatornamemega\left( {\mathsf e}_1, {\mathsf e}_3\right) \;=\; - \operatornamemega\left( {\mathsf e}_3, {\mathsf e}_1\right) & \;=\; 1 \\ \operatornamemega\left( {\mathsf e}_2, {\mathsf e}_4\right) \;=\; - \operatornamemega\left( {\mathsf e}_4, {\mathsf e}_2\right) & \;=\; 1 {\mathsf e}nd{align*} and all other $\operatornamemega\left( {\mathsf e}_i, {\mathsf e}_j\right) = 0$. That is, \begin{equation*} \operatornamemega(u, v) \;:=\; u^{\mathsf{T}} \mathbb{J} v {\mathsf e}nd{equation*} where \begin{equation*} \mathbb{J} \;:=\; \bmatrix 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 {\mathsf e}ndbmatrix. {\mathsf e}nd{equation*} Define the complementary pair of Lagrangian planes: \begin{align*} {L_{\infty}} & \;:=\; \langle {\mathsf e}_1 , {\mathsf e}_2 \rangle \\ {L_{0}} & \;:=\; \langle {\mathsf e}_3 , {\mathsf e}_4 \rangle. 
{\mathsf e}nd{align*} Thus $\left({\mathsf e}_3,{\mathsf e}_4\right)$ is the basis of ${L_{0}}$ dual to the basis $\left({\mathsf e}_1,{\mathsf e}_2\right)$. Vectors in Minkowski space correspond to self-adjoint linear transformations ${L_{\infty}} \rightarrow {L_{0}} \cong {L_{\infty}}star$, that is, $2\times 2$ symmetric matrices as follows. A symmetric matrix \begin{equation*} \psi(x,y,z) \;:=\; \bmatrix x & y \\ y & z {\mathsf e}ndbmatrix {\mathsf e}nd{equation*} corresponds to a vector in Minkowski space with quadratic form \begin{equation*} -\det(\psi) = x z - y^2. {\mathsf e}nd{equation*} The unipotent symplectomorphism corresponding to a symmetric matrix $\psi(x,y,z)\in{\mathsf S}_2$ is: \begin{equation*} U_{\psi(x,y,z)} \;:=\; {\mathsf e}xp\left( \bmatrix 0 & 0 & x & y \\ 0 & 0 & y & z \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 {\mathsf e}ndbmatrix \right) \;=\; \bmatrix 1 & 0 & x & y \\ 0 & 1 & y & z \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 {\mathsf e}ndbmatrix {\mathsf e}nd{equation*} where $x,y,z\;\in\;\mathbb R$. These correspond to the translations of Minkowski space, and comprise the subgroup ${\mathsf{U}}\mathfrak{s}ubset{\mathsf{Sp}(4,\R)}$. The reductive subgroup ${\mathsf{Aut}}({L_{\infty}}) \;\cong\; {\Gamma}LtwR$ embeds in ${\mathsf{Aut}}(S) \;\cong\; {\mathsf{Sp}(4,\R)}$ as follows: let \begin{equation*} A \;:=\; \bmatrix a & b \\ c & d {\mathsf e}ndbmatrix \;\in\;\ {\Gamma}LtwR \;\cong\; {\mathsf{Aut}}({L_{\infty}}) {\mathsf e}nd{equation*} with determinant $\Delta \;:=\; \det(A).$ The corresponding linear symplectomorphism preserving the decomposition $S \;=\; {L_{\infty}} \operatornameplus {L_{0}}$ is: \begin{equation*} \mathfrak{s}igma(A) \;:=\; \bmatrix a & b & 0 & 0 \\ c & d & 0 & 0 \\ 0 & 0 & d/\Delta & - c/\Delta \\ 0 & 0 & -b/\Delta & a/\Delta {\mathsf e}ndbmatrix. {\mathsf e}nd{equation*} These correspond to linear conformal transformations of Minkowski space. The subgroup ${\mathsf{SAut}}({L_{\infty}})$ of {{\mathsf e}m unimodular\/} automorphisms of ${L_{\infty}}$ corresponds to the group of linear isometries of Minkowski space. The subgroup of ${\mathsf{Aut}}(S)$ generated by ${\mathsf{U}}$ and ${\mathsf{SAut}}({L_{\infty}})$ is a semidirect product ${\mathsf{U}}\rtimes{\mathsf{SAut}}({L_{\infty}})$ and acts by conjugation on the normal subgroup ${\mathsf{U}}$. This action corresponds to the action of the group of affine isometries of Minkowski space. We construct subgroups of ${\mathsf{Sp}(4,\Z)}$ which act properly on the ${\mathsf S}_2$ model of ${\EE^3_1}$. The linear parts and translational parts of Lorentzian transformations of ${\mathsf S}_2$ are associated with elements of ${\mathsf{Sp}(4,\Z)}$. The level two congruence subgroup ${\Gamma}amma_0$ of $\mathsf{SL}(2,\Z)$ is generated by \begin{equation*} g_1 \;:=\;-\begin{bmatrix} 1 & 2 \\ 0 & 1 {\mathsf e}nd{bmatrix} ,\; g_2 \;:=\;-\begin{bmatrix} 1 & 0 \\ -2 & 1 {\mathsf e}nd{bmatrix} ,\; g_3 \;:=\; \begin{bmatrix} -1 & 2 \\ -2 & 3 {\mathsf e}nd{bmatrix}. {\mathsf e}nd{equation*} subject to the relation $g_1g_2g_3 = \mathbb I$. It is freely generated by $g_1$ and $g_2$. All three $g_i$ are parabolic and the quotient hyperbolic surface $\Sigma \;:=\; {\mathbf H}^2/{\Gamma}amma_0$ is a three-punctured sphere. 
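\noindent As a quick sanity check of this explicit model (our own illustration, not needed for the construction; it assumes Python with \texttt{numpy}), one can verify numerically that the matrices $U_{\psi(x,y,z)}$ and the block-diagonal matrices representing ${\mathsf{Aut}}({L_{\infty}})$ are symplectic with respect to $\mathbb{J}$, that each generator $g_i$ is parabolic, that the product $g_1g_2g_3$ equals $\pm\mathbb I$ and therefore acts as the identity Lorentz transformation $X \mapsto g X g^{\mathsf{T}}$, and that this conjugation action preserves the Lorentzian inner product on symmetric matrices.
\begin{verbatim}
import numpy as np

J = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [-1, 0, 0, 0],
              [0, -1, 0, 0]])

def U(psi):
    # unipotent symplectomorphism attached to a symmetric 2x2 matrix psi
    M = np.eye(4)
    M[:2, 2:] = psi
    return M

def sig(A):
    # block-diagonal symplectomorphism A + (A^T)^{-1} attached to A
    M = np.zeros((4, 4))
    M[:2, :2] = A
    M[2:, 2:] = np.linalg.inv(A.T)
    return M

def lor(X, Y):
    # Lorentzian inner product on 2x2 symmetric matrices
    return (np.trace(X @ Y) - np.trace(X) * np.trace(Y)) / 2

g1 = -np.array([[1, 2], [0, 1]])
g2 = -np.array([[1, 0], [-2, 1]])
g3 = np.array([[-1, 2], [-2, 3]])

assert all(abs(np.trace(g)) == 2 for g in (g1, g2, g3))    # all parabolic
assert np.array_equal(np.abs(g1 @ g2 @ g3), np.eye(2))     # +/- identity

psi = np.array([[1, 2], [2, 5]])
for M in (U(psi), sig(g1), sig(g2), sig(g3)):
    assert np.allclose(M.T @ J @ M, J)                     # symplectic

X = np.array([[1, 0], [0, -3]])
Y = np.array([[2, 1], [1, 4]])
for g in (g1, g2, g3):
    assert np.isclose(lor(g @ X @ g.T, g @ Y @ g.T), lor(X, Y))
\end{verbatim}
\noindent When the translational parts are integral symmetric matrices, all entries are integers, so the resulting affine transformations lie in ${\mathsf{Sp}(4,\Z)}$.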
The symmetric matrices \begin{equation*} {\mathsf v}_1 \;:=\; \bmatrix -2 & 0 \\ 0 & 0 {\mathsf e}ndbmatrix,\; {\mathsf v}_2 \;:=\; \bmatrix 0 & 0 \\ 0 & -2 {\mathsf e}ndbmatrix,\; {\mathsf v}_3 \;:=\; \bmatrix -2 & -2 \\ -2 & -2 {\mathsf e}ndbmatrix {\mathsf e}nd{equation*} define positive fixed vectors with respect to $g_1,g_2,g_3$ respectively. The triple $({\mathsf v}_1,{\mathsf v}_2,{\mathsf v}_3)$ defines a decoration of $\Sigma$. An affine deformation of ${\Gamma}amma_0$ is defined by two arbitrary vectors $u_1,u_2\in{\mathsf S}_2$ as translational parts: \begin{equation*} u_1 \;:=\;\begin{bmatrix} a_1 & b_1 \\ b_1 & c_1 {\mathsf e}nd{bmatrix} ,\; u_2 \;:=\;\begin{bmatrix} a_2 & b_2 \\ b_2 & c_2 {\mathsf e}nd{bmatrix}. {\mathsf e}nd{equation*} Thus the affine transformations with linear part $g_i$ and translational part $u_i$ are: \begin{equation*} \alphamma_1 \;:=\; \begin{bmatrix} 1 & 0 & a_1 & b_1 \\ 0 & 1 & b_1 & c_1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 {\mathsf e}nd{bmatrix} \begin{bmatrix} -1 & -2 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 2 & -1 {\mathsf e}nd{bmatrix} {\mathsf e}nd{equation*} \begin{equation*} \alphamma_2 \;:=\; \begin{bmatrix} 1 & 0 & a_2 & b_2 \\ 0 & 1 & b_2 & c_2 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 {\mathsf e}nd{bmatrix} \begin{bmatrix} -1 & 0 & 0 & 0 \\ 2 & -1 & 0 & 0 \\ 0 & 0 & -1 & -2 \\ 0 & 0 & 0 & -1 {\mathsf e}nd{bmatrix} {\mathsf e}nd{equation*} and \begin{equation*} \alphamma_3 \;:=\; \begin{bmatrix} 1 & 0 & a_3 &b_3 \\ 0 & 1 & b_3 & c_3 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 {\mathsf e}nd{bmatrix} \begin{bmatrix} 1 & -2 & 0 & 0 \\ 2 & -3 & 0 & 0 \\ 0 & 0 & -3 & -2 \\ 0 & 0 & 2 & 1 {\mathsf e}nd{bmatrix}, {\mathsf e}nd{equation*} where $\alphamma_3 \; =\; (\alphamma_1 \alphamma_2)^{-1}$, \begin{itemize} \item $a_3 =-a_1-a_2+4 b_1 - 4 c_1$, \item $b_3 = -2 a_1 -2 a_2 + 7 b_1 - b_2 -6 c_1$, and \item $c_3 =-4 a_1 -4 a_2 + 12 b_1 -4 b_2 - 9 c_1 - c_2$. {\mathsf e}nd{itemize} The corresponding Margulis invariants taken with respect to ${\mathsf v}_1,{\mathsf v}_2,{\mathsf v}_3$ are: \begin{align*} \mu_1 & =\; c_1 \\ \mu_2 & =\; a_2 \\ \mu_3 & =\; c_1 + c_2 - 2 b_1 + 2 b_2 + a_1 + a_2. {\mathsf e}nd{align*} By Theorem~A, the affine deformation ${\Gamma}amma := \langle \alphamma_1,\alphamma_2\rangle$ acts properly with crooked fundamental domain whenever \begin{align*} \mu_1 & >\; 0 \\ \mu_2 & >\; 0 \\ \mu_3 & >\; 0. {\mathsf e}nd{align*} Furthermore, taking $a_1,b_1,c_1,a_2,b_2,c_2\;\in\; \mathbb Z$ implies ${\Gamma}amma\mathfrak{s}ubset{\mathsf{Sp}(4,\Z)}$. Here are some explicit examples. Consider, for example the slice for translational conjugacy defined by $b_1 = b_2 = c_2 = 0$. Choose three positive integers $\mu_1,\mu_2,\mu_3$. Take \begin{align*} a_1 & =\; \mu_3 - \mu_1 -\mu_2\\ c_1 & =\; \mu_1 \\ a_2 & =\; \mu_2, {\mathsf e}nd{align*} that is, let \begin{equation*} \alphamma_1 \;:=\; \begin{bmatrix} 1 & 0 & \mu_3 - \mu_1 -\mu_2 & 0 \\ 0 & 1 & 0 & \mu_1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 {\mathsf e}nd{bmatrix} \begin{bmatrix} -1 & -2 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 2 & -1 {\mathsf e}nd{bmatrix} {\mathsf e}nd{equation*} and \begin{equation*} \alphamma_2 \;:=\; \begin{bmatrix} 1 & 0 & \mu_2 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 {\mathsf e}nd{bmatrix} \begin{bmatrix} -1 & 0 & 0 & 0 \\ 2 & -1 & 0 & 0 \\ 0 & 0 & -1 & -2 \\ 0 & 0 & 0 & -1 {\mathsf e}nd{bmatrix}. {\mathsf e}nd{equation*} The proof of Theorem~B is complete.\qed \appendix \mathfrak{s}ection*{Appendix. 
Proof of Lemma~\ref{lem:muiso}} We return to the parametrization of the cohomology $\mathsf H^1(\G_0,\V)$ by the three generalized Margulis invariants $\mu_1,\mu_2,\mu_3$ associated to the respective generators $g_1,g_2,g_3$ associated to components of $\partial\Sigma$. When $g_i$ is parabolic, choose a positive vector ${\mathsf v}_i$ to define $\mu_i$. We must show that the triple $\mu = (\mu_1,\mu_2,\mu_3)$ defines an isomorphism \begin{equation*} \mathsf H^1(\G_0,\V)\longrightarrow\mathbb R^3. {\mathsf e}nd{equation*} Under the double covering $\mathsf{SL}(2,\R) \longmapsto {\mathsf{SO}(2,1)^0}$, lift $\rho_0$ to a representation $\pi\xrightarrow{\tilde{\rho}_0} \mathsf{SL}(2,\R)$. The condition that $\rho_0(\pi)$ is not solvable implies that the representation $\tilde{\rho}_0$ on $\mathbb R^2$ is irreducible. By a well-known classic theorem (see, for example, Goldman~\cite{G}), such a representation is determined up to conjugacy by the three traces $$ a_i \; := \; \mathsf{tr}\big( \tilde{\rho}_0(A_i)\big). $$ and, choosing $b_3$ such that $b_3 + 1/b_3 = a_3$, we may conjugate $\tilde{\rho}_0$ to the representation defined by: \begin{align}\label{eq:Aslice} \tilde{\rho}_0(A_1) &=\; \bmatrix a_1 & -1 \\ 1 & 0 {\mathsf e}ndbmatrix \notag \\ \tilde{\rho}_0(A_2) &=\; \bmatrix 0 & -b_3 \\ 1/b_3 & a_2 {\mathsf e}ndbmatrix \notag \\ \tilde{\rho}_0(A_2) &=\; \bmatrix b_3 & -a_1 c_3 + a_2 \\ 0 & 1/b_3 {\mathsf e}ndbmatrix. {\mathsf e}nd{align} \noindent Since $\pi$ is freely generated by $A_1,A_2$, a cocycle $\pi\xrightarrow{u}{\R^3_1}$ is completely determined by two values $u(A_1),u(A_2)\in{\R^3_1}$. Furthermore, since $\rho_0(\pi)$ is nonsolvable, the coboundary map $$ {\R^3_1} \xrightarrow{\partial} \mathbb ZZ $$ is injective. Therefore the vector space $\mathsf H^1(\G_0,\V)$ has dimension three. To show that the linear map $\mu$ is an isomorphism, it suffices to show that $\mu$ is onto. To this end, it suffices to show that for each $i=1,2,3$ there is a cocycle $u\in\mathbb ZZ$ such that $u(A_i) \neq 0$ and $u(A_j) = 0$ for $j\neq i$. By cyclic symmetry it is only necessary to do this for $i=1$. Under the local isomorphism $\mathsf{SL}(2,\R) \longmapsto {\mathsf{SO}(2,1)^0}$, the Lie algebra $\mathfrak{sl}(2,\mathbb R)$ maps to the Lie algebra $\mathfrak{so}(2,1)$ which in turn maps isomorphically to the Lorentzian vector space ${\R^3_1}$. (Compare \cite{GM,G0,CharetteDrummGoldman}.) If $g\in\mathsf{SL}(2,\R)$ is hyperbolic or parabolic, then a neutral eigenvector $\vo(g)$ is a nonzero multiple of the traceless projection $$ \hat{g} := g - \frac{\mathsf{tr}(g)}2 \mathbb I. $$ Define a cocycle for the representation $\tilde{\rho}_0$ defined in {\mathsf e}qref{eq:Aslice} by: \begin{align*} u(A_1) &:=\; \bmatrix 1 & 0 \\ 0 & 0 {\mathsf e}ndbmatrix \\ u(A_2) &:=\; \bmatrix 0 & 0 \\ 0 & 0 {\mathsf e}ndbmatrix \\ u(A_3) &:=\; \bmatrix 0 & 0 \\ -1/c & 0 {\mathsf e}ndbmatrix. {\mathsf e}nd{align*} Then $\mu_1(u) \neq 0$ but $\mu_2(u)= \mu_3(u) =0$ as claimed. The proof of Lemma~\ref{lem:muiso} is complete.\qed \begin{thebibliography}{10} \bibitem{Abels} Abels, H., {{\mathsf e}m Properly discontinuous groups of affine transformations, A survey,\/} Geom.\ Ded. {\bf 87} (2001) 309--333. \bibitem{BCDGM} Barbot, T., Charette, V., Drumm, T., Goldman, W.\ and Melnick, K., {{\mathsf e}m A primer on the Einstein (2+1)-universe,\/} in ``Recent Developments in Pseudo-Riemannian Geometry,'' (D.\ Alekseevsky and H.\ Baum, eds.) Erwin Schr\"odinger Institute Lectures in Mathematics and Physics, Eur.\ Math.\ Soc. 
(2008), 179--221. \bibitem{Charette1} Charette, V., {{\mathsf e}m Affine deformations of ultraideal triangle groups,\/} Geom.\ Ded. {\bf 97} (2003), 17--31. \bibitem{Charette2} \bysame, {{\mathsf e}m The affine deformation space of a rank two Schottky group: a picture gallery,\/} in ``Discrete Groups and Geometric Structures, with Applications: Proceedings of the Oostende Workshop 2005,'' Geom.\ Ded. {\bf 122} (2006), 173--183. \bibitem{Charette} \bysame, {{\mathsf e}m Non-proper affine actions of the holonomy group of a punctured torus,\/} Forum\ Math. {\bf 18} (2006), no. 1, 121--135. \bibitem{CharetteDrumm1} \bysame and Drumm, T., {{\mathsf e}m Strong marked isospectrality of affine Lorentzian groups,\/} J.\ Diff.\ Geom. {\bf 66} (2004), no. 3, 437--452. \bibitem{CharetteDrumm2} \bysame and \bysame , {{\mathsf e}m The Margulis invariant for parabolic transformations,\/} Proc.\ Amer.\ Math.\ Soc. {\bf 133} (2005), no. 8, 2439--2447 (electronic). \bibitem{CharetteDrummGoldman} \bysame, \bysame, and Goldman, W., {{\mathsf e}m Stretching three-holed spheres and the Margulis invariant,\/} Proceedings of the 2008 Ahlfors-Bers Colloquium, Contemp.\ Math. (to appear). \bibitem{CDGM} \bysame and \bysame, Goldman, W.\ and Morrill, M., {{\mathsf e}m Complete flat affine and Lorentzian manifolds,\/} Geom.\ Ded. {\bf 97} (2003), 187--198. \bibitem{CG} Charette, V., and Goldman, W., {{\mathsf e}m Affine Schottky groups and crooked tilings,\/} in ``Crystallographic Groups and their Generalizations,'' Contemp.\ Math. {\bf 262} (2000), 69--98, Amer.\ Math.\ Soc. \bibitem{Drumm1} Drumm, T., {{\mathsf e}m Fundamental polyhedra for Margulis space-times,\/} Topology {\bf 31} (4) (1992), 677-683. \bibitem{Drumm3} \bysame, {{\mathsf e}m Examples of nonproper affine actions,\/} Mich.\ Math.\ J. {\bf 39} (1992), 435--442. \bibitem{Drumm2} \bysame, {{\mathsf e}m Linear holonomy of Margulis space-times,\/} J.\ Diff.\ Geo. {\bf 38} (1993), 679--691. \bibitem{DGyellow} \bysame\ and Goldman, W., {{\mathsf e}m Complete flat Lorentz 3-manifolds with free fundamental group,\/} Int.\ J.\ Math.\ {\bf 1} (1990), 149--161. \bibitem{DrummGoldman1} \bysame and \bysame, {{\mathsf e}m The geometry of crooked planes,\/} Topology {\bf 38}, No. 2, (1999) 323--351. \bibitem{DrummGoldman2} \bysame and \bysame, {{\mathsf e}m Isospectrality of flat Lorentz 3-manifolds,\/} J.\ Diff.\ Geo. {\bf 38}, No. 2, (1999) 323--351. \bibitem{Frances1} Frances, C., The conformal boundary of Margulis space-times, {\mathsf e}mph{C.\ R.\ Acad.\ Sci.\ Paris} t.\ 336 (2003), no.\ 9, 751--756. \bibitem{FG} Fried, D.\ and Goldman, W., {{\mathsf e}m Three-dimensional affine crystallographic groups,\/} Adv.\ Math. {\bf 47} (1983), 1--49. \bibitem{G0} Goldman, W., {{\mathsf e}m The Margulis Invariant of Isometric Actions on Minkowski \linebreak (2+1)-Space,\/} in ``Ergodic Theory, Geometric Rigidity and Number Theory,'' Springer-Verlag (2002), 149--164. \bibitem{G} \bysame, {{\mathsf e}m Trace coordinates on Fricke spaces of some simple hyperbolic surfaces,\/} in ``Handbook of Teichm\"uller theory, vol.\ II,'' (A. Papadopoulos, ed.) Chapter 15, pp. 611--684, European Mathematical Society 2009. {\tt math.GT/0901.1404} \bibitem{GLM} \bysame, Labourie, F.\ and Margulis, G., {{\mathsf e}m Proper affine actions and geodesic flows of hyperbolic surfaces,\/} Ann.\ Math. (to appear) {\tt math.DG/0406247}. 
\bibitem{GM} Goldman, W.\ and Margulis, G., {{\mathsf e}m Flat Lorentz 3-manifolds and cocompact Fuchsian groups,\/} in ``Crystallographic Groups and their Generalizations,'' Contemp.\ Math.\ {\bf 262} (2000), 135---146, Amer.\ Math.\ Soc. \bibitem{GMM} \bysame, \bysame and Minsky, Y., {{\mathsf e}m Complete flat Lorentz $3$-manifolds and laminations of hyperbolic surfaces,\/} (in preparation). \bibitem{Jones} Jones, C., {{\mathsf e}m Pyramids of properness,\/} doctoral dissertation, University of Maryland (2003). \bibitem{Labourie} Labourie, F., {{\mathsf e}m Fuchsian affine actions of surface groups,\/} J.\ Diff.\ Geo. {\bf 59} (1), (2001), 15 -- 31. \bibitem{Margulis1} Margulis, G., {{\mathsf e}m Free properly discontinuous groups of affine transformations,\/} Dokl.\ Akad.\ Nauk SSSR {\bf 272} (1983), 937--940. \bibitem{Margulis2} \bysame, {{\mathsf e}m Complete affine locally flat manifolds with a free fundamental group,\/} J.\ Soviet Math. {\bf 134} (1987), 129--134. \bibitem{Me} Mess, G., {{\mathsf e}m Lorentz spacetimes of constant curvature,\/} Geom.\ Ded. {\bf 126}, no. 1 (2007), 3-45, in ``New techniques in Lorentz manifolds : Proceedings of the BIRS 2004 workshop,'' (V.\ Charette, and W.\ Goldman, eds.) \bibitem{Mi} Milnor, J., {{\mathsf e}m On fundamental groups of complete affinely flat manifolds,\/} Adv.\ Math. {\bf 25} (1977), 178--187. \bibitem{Ratcliffe} Ratcliffe, J., ``Foundations of hyperbolic manifolds.'' Second edition. Graduate Texts in Mathematics, 149. Springer, New York, 2006. \bibitem{Thurston} Thurston, W., {{\mathsf e}m Minimal stretch maps between hyperbolic surfaces, \/} {\tt math.GT/9801039}. {\mathsf e}nd{thebibliography} {\mathsf e}nd{document}
\begin{document} \title{NP-Hardness of Speed Scaling with a Sleep State} \author{Gunjan Kumar, Saswata Shannigrahi} \institute{Indian Institute of Technology Guwahati, India. \email{\{k.gunjan,saswata.sh\}@iitg.ernet.in} } \maketitle \begin{abstract} A modern processor can dynamically set it's speed while it's active, and can make a transition to sleep state when required. When the processor is operating at a speed $s$, the energy consumed per unit time is given by a convex power function $P(s)$ having the property that $P(0) > 0$ and $P''(s) > 0$ for all values of $s$. Moreover, $C > 0$ units of energy is required to make a transition from the sleep state to the active state. The jobs are specified by their arrival time, deadline and the processing volume. We consider a scheduling problem, called {\it speed scaling with sleep state}, where each job has to be scheduled within their arrival time and deadline, and the goal is to minimize the total energy consumption required to process these jobs. Albers et. al. \cite{albers} proved the NP-hardness of this problem by reducing an instance of an NP-hard {\it partition problem} to an instance of this scheduling problem. The instance of this scheduling problem consists of the arrival time, the deadline and the processing volume for each of the jobs, in addition to $P$ and $C$. Since $P$ and $C$ depend on the instance of the {\it partition problem}, this proof of the NP-hardness of the {\it speed scaling with sleep state} problem doesn't remain valid when $P$ and $C$ are fixed. In this paper, we prove that the {\it speed scaling with sleep state} problem remains NP-hard for any fixed positive number $C$ and convex $P$ satisfying $P(0) > 0$ and $P''(s) > 0$ for all values of $s$. \noindent \textbf{Keywords:} Energy efficient algorithm, scheduling algorithm, NP-hardness \end{abstract} \section{Introduction} A modern processor can dynamically set it's speed while it's active, and can make a transition to sleep state when required. When the processor is operating at a speed $s$, the energy consumed per unit time is given by a convex power function $P(s)$ having the property that $P(0) > 0$ and $P''(s) > 0$ for all values of $s$. Therefore, some energy is consumed even if the processor is not scheduling any job in the active state. On the other hand, no energy is consumed when the processor is in the sleep state. However, $C > 0$ units of energy is required to make a transition from the sleep state to the active state and therefore it is not always fruitful to go asleep when there is no work to be processed at some point of time. We assume that no energy is required to make a transition from the active state to the sleep state, as we can always include this energy requirement in the sleep to active state transition. A number of problems have been studied under this model, e.g., \cite{albers}, \cite{bampis}, \cite{baptiste}, \cite{baptiste_chrobak}, \cite{bansal_chan_lam}, \cite{bansal_chan_pruhs}, \cite{chan}, \cite{han}, \cite{irani}, \cite{irani_pruhs}, \cite{yao}. The jobs are specified by their arrival time, deadline and the processing volume. We consider a scheduling problem where each job has to be scheduled within their arrival time and deadline, and the goal is to minimize the total energy consumption required to process these jobs. Albers et. al. \cite{albers} proved the NP-hardness of this problem by reducing an instance of an NP-hard {\it partition problem} (defined below) to an instance of this scheduling problem. 
The instance of this scheduling problem consists of the arrival time, the deadline and the processing volume for each of the jobs, in addition to $P(s)$ and $C$ that depend on the problem instance of the partition problem. As a result, this proof of NP-hardness does not remain valid when we are given any fixed convex function $P(s)$ and a positive number $C$. In this paper, we prove that the problem remains NP-hard for any fixed positive number $C$ and convex function $P$ satisfying $P(0) > 0$ and $P''(s) > 0$ for all values of $s$. We do the reduction from the following NP-hard {\it partition problem}: Given a finite set $A$ of $n$ positive integers $a_{1}, a_{2}, \ldots, a_{n}$, the problem is to decide whether there exists a subset $A' \subset A $ such that $\sum_{a_{i} \in A'} a_{i} = \sum_{a_{i} \notin A'} a_{i}$. It is assumed that $a_{max} \ge 2$; otherwise, the problem becomes trivial. \section{The Reduction and its Properties} \begin{figure} \caption{An instance of $I_{S}$} \end{figure} Let us start with a few definitions and notations. The {\it density} of an interval is defined as the total workload of the jobs that completely lie in the interval divided by the length of the interval. The critical speed $s*$ for $P(s)$ is defined as the minimum speed that minimizes $\frac{P(s)}{s}$. Note that the critical speed is not well-defined if $\frac{P(s)}{s}$ decreases monotonically. However, this is not a realistic case as this would mean we can schedule all jobs at an infinite speed to get the schedule that requires the minimum amount of energy consumption. Therefore, we assume that $\frac{P(s)}{s}$ decreases for $s < s*$ and attains its minimum at $s = s*$. Under this assumption, the following property can be easily observed. \begin{lemma} \label{lemma1} $P'(s) < \frac {P(s*)}{s*} < \frac {P(s)}{s} $ for $ s < s* $, and $ P'(s*) = \frac{P(s*)}{s*}$. \end{lemma} \begin{proof} The derivative of the function $\frac{P(s)}{s}$ is $\frac {sP'(s)-P(s)} {s^2}$. Since $\frac{P(s)}{s}$ decreases for $s < s*$, we have $ \frac {sP'(s)-P(s)} {s^2}\le 0$ for $s \le s*$ with equality only when $s = s*$. In particular, $P'(s*) = \frac{P(s*)}{s*}$ and $\frac{P(s)}{s} > \frac{P(s*)}{s*}$ for $s < s*$. Moreover, since $P''(s) > 0$, the derivative $P'$ is increasing, so $P'(s) < P'(s*) = \frac{P(s*)}{s*}$ for $s < s*$. $\Box$ \end{proof} Given the function $P(s)$, a positive number $C$, and an instance $I_{p} $ of the {\it partition problem}, i.e., the integers $a_1, a_2, \ldots, a_n$, let us define the parameters that will be used to construct an instance $I_{S}$ of the scheduling problem. \\ \\ $R(s) = P(s) - \frac {P(s*)} {s*} s$. \\ \\ $L_{i} = e - f a_{i} $ where $ e = C/R(d), d = (1-\epsilon) s*, 0 < \epsilon <1/2$, and $f = \frac {C} {P(0) a_{max}} $.\\ \\ $l_{i} = d L_{i} - \frac{a_{i}} {k} $ where $ k = \frac {- R'(d)} {f R(d)} $. \\ \\ $ B = \frac{\sum\limits_{i=1}^n a_{i}} {2k} $. \\ \\ The structure of $I_S$ is the same as the one used by Albers et al. \cite{albers}. The job set in $I_S$ is partitioned into three levels. In level $0$, there is only one job having a processing volume equal to $B$. Level $1$ comprises $n$ jobs; the $i$-th job has a processing volume $l_{i}$ and $L_{i}$ time units to process it. There are $n+1$ jobs in level $2$, with each job having a processing volume of $\delta s*$ and $\delta > 0$ time units to process it, thereby making the density of each of these jobs equal to $s*$. In the rest of this section, we establish a few lemmas that will be useful in our proof of NP-hardness. \begin{lemma} \label{lemma2} $R(d) < \epsilon P(0)$. \end{lemma} \begin{proof} Since $R(s) = P(s) - \frac{P(s*)}{s*}s$, we obtain $R'(s) = P'(s) - \frac{P(s*)}{s*}$.
We also note that $R(0) = P(0) > 0 $ and $R(s*) = 0$. Furthermore, we obtain from Lemma \ref{lemma1} that $R'(s*) = P'(s*) - \frac{P(s*)}{s*} = 0$, and $R'(s) = P'(s) - \frac{P(s*)}{s*} < 0$ for $s< s*$. Along with the properties established above, $ R''(s) = P''(s) > 0 $ implies that the following relationship holds. \[R(d) < (1-\frac{d}{s*}) R(0) + \frac{d}{s*} R(s*) \Rightarrow R(d) < \epsilon P(0). \] $\Box$ \end{proof} \begin{lemma} \label{lemma3} $\frac{R(d)}{|R'(d)|} < \epsilon s*.$ \end{lemma} \begin{proof} It can be easily seen from Figure \ref{figure2} that $|R'(d)| > \frac{R(d)}{s* - d}$, the slope of the line $ab$. Since $d = (1-\epsilon) s*$, it follows that $\frac{R(d)}{|R'(d)|} < \epsilon s*$. $\Box$ \end{proof} \begin{figure} \caption{Plot of $s$ vs $R(s)$} \label{figure2} \end{figure} We now show that our choices of $l_{i}$ and $L_{i}$ satisfy the trivial constraints $l_{i}, L_{i} > 0$. We also show that the densities of all the intervals except those corresponding to level $2$ jobs are strictly less than $s*$. \begin{lemma} \label{lemma4} $L_{i}, l_{i} > 0 $ for all $i$. \end{lemma} \begin{proof} We first prove that $L_{i} > 0$ for all $i$. Note that \[ L_{i} > 0 \Leftrightarrow e - f a_{i} > 0 \Leftrightarrow e > f a_{i}. \] Since $a_{max} \ge a_{i} $, it suffices to show that $ e > f a_{max} $. This follows easily from Lemma \ref{lemma2}, $C>0$ and $0 < \epsilon < \frac{1}{2}$, as shown below. \[ e > f a_{max} \Leftrightarrow \frac{C}{R(d)} > \frac{C}{P(0)} \Leftrightarrow R(d) < P(0). \] \noindent In order to show that $l_{i} > 0 $, we observe that \[ l_{i} > 0 \Leftrightarrow d L_{i} > \frac {a_{i}}{k} \Leftrightarrow d (e - f a_{i}) > \frac {a_{i}} {k} \Leftrightarrow d > \frac {a_{i}} {k (e-fa_{i})} \Leftrightarrow d > \frac {1} { k (\frac{e}{a_{i}} - f)}. \] Since $ a_{max} \ge a_{i} $, it suffices to show that $d > \frac{1}{k\left(\frac{e}{a_{max}} - f\right)}$. We show this below using Lemma \ref{lemma2} and Lemma \ref{lemma3}. \begin{eqnarray*} \frac{1}{k\left(\frac{e}{a_{max}} - f\right)} & = & \frac{f R(d)} {|R'(d)| (\frac{C}{R(d) a_{max}} -f)} \\ & = & \frac{R(d)} {|R'(d)| (\frac{C}{f R(d) a_{max}} - 1)} \\ & = & \frac{R(d)^2} {|R'(d)| (P(0) - R(d))} \\ & = & \frac{R(d)}{|R'(d)|} \cdot \frac{1}{(\frac{P(0)}{R(d)} - 1)} \\ & < & \epsilon s* (\frac{1}{\frac{1}{\epsilon} - 1 }) \\ & = & \frac{\epsilon^2 s*}{1 - \epsilon} \\ & < & (1 - \epsilon ) s* = d. \end{eqnarray*} \noindent Note that the last inequality follows from our choice of $0 < \epsilon < \frac{1}{2}$. $\Box$ \end{proof} \begin{figure} \caption{Plot of $x$ vs $G(x)$} \end{figure} \begin{lemma} \label{lemma5} The densities of all the intervals except those corresponding to level $2$ jobs are strictly less than $s*$. \end{lemma} \begin{proof} Let us first consider the intervals corresponding to level $1$ jobs. The density of such an interval is $\frac{ l_{i}} {L_{i}} $. Observe that \[ \frac{l_{i}} {L_{i}} < s* \Leftrightarrow \frac {d L_{i} - \frac {a_{i}}{k}} { L_{i}} < s* \Leftrightarrow d < s* + \frac{a_{i}}{k L_{i}}. \] Since $d < s*$, the density of an interval corresponding to a level $1$ job is strictly less than $s*$. This also proves that the density of any interval corresponding to the union of a proper subset of the level $1$ and level $2$ jobs is less than $s*$, since such jobs are non-overlapping and the density of any interval corresponding to a level $2$ job is exactly $s*$.
\noindent Finally, we consider the interval that starts from the first arrival of the level $0$ job and lasts till it's deadline. The density of this interval is $\frac {(n+1) \delta s* + B + \sum_{i} l_{i} } {(n+1) \delta + \sum_{i} L_{i} } $. This quantity is less than $s*$ since \[ \frac {B + \sum_{i} l_{i} } { \sum_{i} L_{i} } < s* \Leftrightarrow \frac { \sum_{i} a_{i} } {2k} + d \sum_{i} L_{i} - \sum_{i} \frac {a_{i}} {k} < s* \sum_{i} L_{i} \Leftrightarrow d < s* + \frac { \sum_{i} a_{i}} { 2k \sum_{i} L_{i} }. \] The last inequality is true since $d < s* $. $\Box$ \end{proof} Let us introduce the functions $F(x) = \frac{P(s*)}{s*} x + C $ and $H_{i}(x) = P(\frac{x}{L_{i}}) L_{i}$, and establish some of their properties. \begin{lemma} \label{lemma6} $ F(x) $ and $ H_{i}(x) $ intersect at two different points for any $i$. \end{lemma} \begin{proof} Consider $G(x) = H_{i}(x) - F(x) = P(\frac{x}{L_{i}}) L_{i} - \frac{P(s*)}{s*} x - C$. Note that \begin{eqnarray*} G(0) & = & P(0)L_{i} - C \\ & = & P(0)(e-fa_{i}) - C \\ & = & P(0) (\frac{C}{R(d)} - \frac {C a_{i}}{P(0) a_{max}}) - C \\ & \ge & P(0) (\frac{C}{R(d)} - \frac{C}{P(0)}) - C \\ & = & \frac{C (P(0) - R(d))}{R(d)} - C \\ & > & C(\frac {P(0) (1-\epsilon)}{\epsilon P(0)}) - C \\ & = & C(\frac{1-2\epsilon}{\epsilon}) \\ & > & 0. \end{eqnarray*} \noindent The last-but-one inequality follows from Lemma \ref{lemma2}. The last inequality follows since $0 < \epsilon < 1/2$. \noindent Next, we would show that $G(x)$ decreases for $x < L_{i} s*$, attains minimum at $x = L_{i}s*$ and then finally increases. Note that $G'(x) = L_{i} P'(\frac{x}{L_{i}}) \frac{1}{L_{i}} - \frac{P(s*)}{s*} = P'(\frac{x}{L_{i}}) -\frac{P(s*)}{s*}$. The following inequalities follow easily from Lemma \ref{lemma1} and from the fact that $P''(s) > 0$. \[ G'(x) \le 0 \Leftrightarrow P'(\frac{x}{L_{i}}) \le \frac{P(s*)}{s*} \Leftrightarrow P'(\frac{x}{L_{i}}) \le P'(s*) \Leftrightarrow x \le L_{i}s* .\] By Lemma \ref{lemma1}, the inequalities above would be strict since $x < L_{i} s*$. \\ \\ We complete the proof by showing that $ G(L_{i} s*) = P(s*) L_{i} - P(s*) L_{i} - C = -C < 0 $. Since $G$ is a strictly convex function (note that $ G''(x) = \frac{1}{L_{i}} P''(\frac{x}{L_{i}}) $), it would eventually intersects the $x$-axis at some point $ x > L_{i} s* $. $\Box$ \end{proof} \begin{lemma} \label{lemma7} $H'_{i}(l_{i}+\frac{a_{i}}{k}) = \frac {H_{i}(l_{i} + \frac{a_{i}}{k}) - F(l_{i})} {a_{i}/k} = P'(d)$. \end{lemma} \begin{proof} It can be easily seen that $H'_{i}(l_{i}+\frac{a_{i}}{k}) = H'_{i}(dL_{i}) = P'(\frac{dL_{i}}{L_{i}}) = P'(d)$. The following calculation also shows that $ \frac {H_{i}(l_{i} + \frac{a_{i}}{k}) - F(l_{i})} {a_{i}/k} = P'(d)$. \begin{eqnarray*} \frac {H_{i}(l_{i} + \frac{a_{i}}{k}) - F(l_{i})} {a_{i}/k} & = &\frac{k}{a_{i}} [H_{i}(dL_{i}) - F(l_{i})] \\ & = &\frac {k}{a_{i}} [P(d) L_{i} - \frac{P(s*)}{s*} l_{i} - C] \\ & = & \frac{k}{a_{i}} [P(d) L_{i} - \frac{P(s*)}{s*} d L_{i} +\frac{P(s*)}{s*} \frac{a_{i}}{k} - C] \\ & = & \frac{k}{a_{i}} R(d) L_{i} + \frac{P(s*)}{s*} - \frac{Ck}{a_{i}} \\ & = & \frac{k}{a_{i}} R(d)e - \frac{k}{a_{i}} R(d) f a_{i} +\frac {P(s*)}{s*} - \frac{Ck}{a_{i}} \\ & = & \frac{k}{a_{i}} C +R'(d) + \frac{P(s*)}{s*} - \frac{Ck}{a_{i}} \\ & = & P'(d) - \frac{P(s*)}{s*} + \frac{P(s*)}{s*} \\ & = & P'(d). \end{eqnarray*} $\Box$ \end{proof} Let $r_{i1}$ and $r_{i2}$ be the two roots of the equation $ F(x) = H_{i}(x) $ such that $ r_{i1} < r_{i2}$. We establish the following two lemmas. 
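\noindent As a sanity check (ours, not part of the proof; it assumes Python with \texttt{numpy} and fixes the admissible power function $P(s) = s^{2}+1$, the wake-up cost $C=1$ and an arbitrary partition instance), the following sketch instantiates the parameters of the reduction and verifies numerically the chain $0 < l_{i} < r_{i1} < l_{i} + \frac{a_{i}}{k} < r_{i2}$ established in the two lemmas below.
\begin{verbatim}
import numpy as np

P  = lambda s: s**2 + 1          # admissible: P(0) = 1 > 0, P''(s) = 2 > 0
Pp = lambda s: 2*s               # P'(s)
C  = 1.0                         # wake-up cost
a  = np.array([3, 5, 8, 2, 4])   # an arbitrary partition instance
a_max = a.max()

s_star = 1.0                     # minimizes P(s)/s = s + 1/s
R  = lambda s: P(s) - (P(s_star)/s_star)*s
Rp = lambda s: Pp(s) - P(s_star)/s_star

eps = 0.25
d = (1 - eps)*s_star
e = C / R(d)
f = C / (P(0)*a_max)
k = -Rp(d) / (f*R(d))
L = e - f*a                      # interval lengths L_i
l = d*L - a/k                    # level-1 workloads l_i
B = a.sum() / (2*k)              # level-0 workload

F = lambda x: (P(s_star)/s_star)*x + C   # run at s*, then sleep
H = lambda x, Li: P(x/Li)*Li             # run at constant speed x/L_i

for ai, Li, li in zip(a, L, l):
    # for this P the roots of F = H_i are L_i -/+ sqrt(C*L_i) (cf. Lemma 6)
    r1, r2 = Li - np.sqrt(C*Li), Li + np.sqrt(C*Li)
    assert np.isclose(F(r1), H(r1, Li)) and np.isclose(F(r2), H(r2, Li))
    assert 0 < li < r1 < li + ai/k < r2  # Lemmas 8 and 9 below
\end{verbatim}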
\begin{lemma} \label{lemma8} $0 < l_{i} < r_{i1}$, where $r_{i1} $ is the first intersection of $H_{i}(x)$ and $F(x)$. \end{lemma} \begin{proof} Since $ H_{i}(x)$ is strictly convex, we obtain, using Lemma \ref{lemma7}, \begin{eqnarray*} H'_{i}(l_{i} + \frac{a_{i}}{k}) > \frac {H_{i}(l_{i}+\frac{a_{i}}{k}) - H_{i}(l_{i})} {a_{i}/k} & \Rightarrow & \frac {H_{i}(l_{i} + \frac{a_{i}}{k}) - F(l_{i})} {a_{i}/k} > \frac {H_{i}(l_{i}+\frac{a_{i}}{k}) - H_{i}(l_{i})} {a_{i}/k} \\ & \Rightarrow & -F(l_{i}) > - H_{i}(l_{i}) \\ & \Rightarrow & G(l_{i}) >0. \end{eqnarray*} The lemma follows since $l_{i} < L_{i}s*$. $\Box$ \end{proof} \begin{lemma} \label{lemma9} $r_{i1} < (l_{i} + \frac{a_{i}}{k})< r_{i2}$. \end{lemma} \begin{proof} Assume for the sake of contradiction that $ l_{i} + \frac{a_{i}}{k} \le r_{i1}$. It can be seen from Figure \ref{figure4} that this implies $\frac {H_{i}(l_{i} + \frac{a_{i}}{k}) - F(l_{i})} {a_{i}/k} \ge \frac{P(s*)}{s*} $ $\Rightarrow P'(d) \ge \frac{P(s*)}{s*}$. However, this leads to a contradiction since $d < s*$, and $P'(d) < \frac{P(s*)}{s*} $ by Lemma \ref{lemma1}. On the other hand, it is easy to see that $ l_{i} + \frac{a_{i}}{k} = d L_{i} < L_{i}s* < r_{i2} $. $\Box$ \end{proof} \begin{figure} \caption{Plot of $x$ vs $F(x)$ and $H_{i}(x)$} \label{figure4} \end{figure} \section{Proof of NP-hardness} In this section, we complete the proof of the NP-hardness of the {\it speed scaling with sleep state} problem. We use the following result by Irani et al. \cite{irani} along with the results derived in the previous section. \begin{lemma} \label{lemma10} \cite{irani} There exists an optimal solution of the ``speed scaling with a sleep state'' problem that satisfies the following properties: \begin{itemize} \item A job $j$ must be scheduled at a constant speed $ s_{j}. $ \\ \item Suppose that the arrival time and the deadline of a job $j$ are $r_{j}$ and $d_{j}$, respectively. If another job $k$ is scheduled in the interval $[r_{j}, d_{j}]$, then $s_{k} \ge s_{j} $. \\ \item The jobs in the intervals having density at least $s*$ are scheduled according to the YDS algorithm \cite{yao}. The YDS algorithm is an iterative algorithm. In each iteration, an interval with the maximum density is identified and an {\it earliest-deadline-first} policy is used to construct a schedule for the jobs that lie completely in that interval. After an iteration, the YDS algorithm removes the jobs that lie completely in the maximum density interval corresponding to that iteration, and updates the arrival time and deadline of any job that overlaps with that interval. \end{itemize} \end{lemma} \begin{theorem} \label{theorem1} An instance $ I_{p} $ of the partition problem admits a partition if and only if there exists a feasible schedule for $ I_{s} $ with total energy consumption of at most $(n+1) \delta P(s*) + \sum \limits_{i=1}^n F(l_{i}) + B P'(d)$. \end{theorem} \begin{proof} $(\Rightarrow) $ Let us first assume that $ I_{p} $ admits a partition and construct a feasible schedule of energy at most $(n+1) \delta P(s*) + \sum \limits_{i=1}^n F(l_{i}) + B P'(d)$. We start with some notation. Let $A'$ be the set of indices $i$ corresponding to a solution of the partition problem, i.e., $ \sum \limits_{i \in A'} a_{i} = \frac{\sum \limits_{i=1}^n a_{i}}{2}$. Let $ b_{i} $ denote the portion of the workload of the level $0$ job scheduled in gap $ g_{i}$. It can be seen that $ \sum \limits_{i=1}^n b_{i} = B $.
We set the $b_{i}$'s as follows: \begin{center} $ b_{i} = \left\lbrace \begin{array}{r l} a_{i}/k, & \mbox{if }i\in A' \\ 0, & \mbox{otherwise}. \end{array} \right. $ \end{center} Our schedule executes each level $2$ job at speed $s*$ between its release time and deadline. This is feasible since the density of any such job is equal to $s*$. Therefore, a total workload of $ l_{i} + a_{i}/k = d L_{i}$ has to be scheduled in each gap $g_{i}$ corresponding to an $i \in A'$. We schedule both jobs in gap $g_i$ (the level $1$ job and the assigned portion of the level $0$ job) at speed $d$. In the rest of the gaps, the level $1$ jobs are scheduled at speed $s*$. The processor transitions to the sleep state at the completion of the job in such gaps, and wakes up at the release time of a level $2$ job. Since the density of any interval corresponding to a level $1$ job is less than $s*$, we get a feasible schedule. Let us calculate the total energy consumed by the jobs at every level. First of all, the total energy consumed by the level $2$ jobs is $(n+1) \delta P(s*) $. In the gaps corresponding to $ i \in A'$, we note that the jobs are processed at speed $ d = \frac { l_{i} + a_{i}/k} {L_{i}} $ for $L_{i}$ units of time. The energy consumption in such a gap equals $ P(\frac { l_{i} + a_{i}/k} {L_{i}}) L_{i}$, which is the same as $ H_{i}(l_{i}+ \frac {a_{i}} {k})$. In a gap corresponding to $ i\notin A'$, a total of $l_{i}$ units of workload is scheduled at speed $ s* $ and then the processor transitions to the sleep state. Therefore, the energy consumed is given by $P(s*)\frac {l_{i}}{s*} + C $, which is the same as $ F(l_{i})$. From Lemma \ref{lemma7}, $ H_{i}(l_{i}+ \frac {a_{i}} {k})$ can be written as $ F(l_{i}) + P'(d) \frac {a_{i}}{k} $. \\ \\ Let $ E_{0,1} $ denote the total energy consumed by the level $0$ and level $1$ jobs. We obtain the following. \begin{eqnarray*} E_{0,1} & = & \sum \limits_{i \in A'} H_{i}(l_{i} + \frac{a_{i}} {k}) + \sum \limits_{i \notin A'} F(l_{i}) \\ & = & \sum \limits_{i \in A'} ( F(l_{i}) + P'(d) \frac{a_{i}} {k}) + \sum \limits_{i \notin A'} F(l_{i})\\ & = & \sum \limits_{i = 1}^n F(l_{i}) + P'(d) \sum \limits_{i \in A'} \frac {a_{i}}{k} \\ & = & \sum \limits_{i = 1}^n F(l_{i}) + P'(d) B. \end{eqnarray*} \\ The last equality follows since $ \sum \limits_{i = 1}^n b_{i} = \sum \limits_{i \in A'} a_{i}/k = \left(\sum \limits_{i =1}^n a_{i}\right) /(2k) = B $. Hence, we get a feasible schedule whose total energy consumption is $ (n+1) \delta P(s*) + \sum \limits_{i = 1}^n F(l_{i}) + P'(d) B $. \begin{figure} \caption{Plot of $x$ vs the lower envelope function $LE_{i}(x)$} \label{figure5} \end{figure} ($ \Leftarrow $) In the reverse direction of the proof, we assume that $I_{p}$ does not admit a partition and show that the energy consumption in any feasible schedule is strictly greater than $(n+1) \delta P(s*) + \sum \limits_{i=1}^n F(l_{i}) + B P'(d)$. Let $ LE_{i}(x) = \min \{F(x),H_{i}(x) \}$ denote the lower envelope of the functions $ F(x) $ and $ H_{i}(x)$, represented by the solid curve in Figure \ref{figure5}. Let slope(x) denote the slope of the line joining $ (l_{i}, LE_{i}(l_{i}))$ and $ (x, LE_{i}(x)) $. For $x \ge l_{i} $, $ LE_{i}(x) $ can be written as $LE_{i}(x) = LE_{i}(l_{i}) + ( LE_{i}(x) - LE_{i}(l_{i})) = F(l_{i}) + slope(x) * (x - l_{i})$. We note that slope(x) attains its minimum at $ x = l_{i} + \frac{a_{i}} {k} $, and the minimum value is $ H'_{i}(l_{i} + \frac {a_{i}} {k}) = P'(d)$ (by Lemma \ref{lemma7}), which is independent of $i$.
Consider an optimal schedule $ S $ satisfying the properties of Lemma \ref{lemma10}, and let $ b_{1},b_{2},\ldots,b_{n} $ units of workload of the level $0$ job be scheduled in the gaps $ g_{1},g_{2},\ldots,g_{n}$, respectively. Let $ A' = \{ i \mid r_{i1} \le l_{i} + b_{i} \le r_{i2} \} $. \\ \\ \textbf{Case 1.} $ b_{i} = \frac {a_{i}}{k} $ for some $ i \in A' $. \\ \\ Since the workload $ l_{i} + b_{i} $ lies between $r_{i1} $ and $r_{i2} $, it is beneficial to schedule it at the speed $ (l_{i} + b_{i})/L_{i} $ rather than to schedule it at the speed $s*$ and then transition to the sleep state. From Lemma \ref{lemma10}, it follows that the ratio $ (l_{i} + b_{i})/L_{i} $ must be the same for all $ i \in A' $ in the schedule $ S$. Take $ i \in A' $ corresponding to $ b_{i} = \frac {a_{i}} {k} $. We show below that $ b_{j} $ must also be equal to $ a_{j}/k $ for all $ j \in A' $ in an optimal schedule. \begin{eqnarray*} \frac{(b_{i} + l_{i})}{L_{i}} = \frac {(b_{j} + l_{j})}{L_{j}} & \Rightarrow & \frac {a_{i}/k + dL_{i} - a_{i}/k} { L_{i}} = \frac {b_{j}+ dL_{j} - a_{j}/k} { L_{j}} \\ & \Rightarrow & d = d + \frac { b_{j} - a_{j}/k } {L_{j}} \\ & \Rightarrow & b_{j} = a_{j}/k. \end{eqnarray*} Lemma \ref{lemma10} says that all the intervals having density greater than or equal to $s*$ must be scheduled according to YDS in the schedule $S$. Also, Lemma \ref{lemma5} shows that all the intervals except those corresponding to level $2$ jobs have density less than $s*$. Therefore, in the schedule $S$, all level $2$ jobs must be scheduled at $s*$. Thus, the total energy consumed by the level $2$ jobs is $ (n+1) \delta P(s*) $. Let us again denote the total energy required by the level $0$ and level $1$ jobs as $E_{0,1}$. In a gap corresponding to $i \notin A'$, it is optimal to schedule the job at speed $s*$ and then transition to the sleep state, rather than scheduling it at the speed $ (l_{i} + b_{i})/L_{i} $. When this is feasible, the energy consumption in the gap would be given by $ LE_{i}(l_{i} +b_{i}) $. When it is not (i.e., if $ (l_{i} + b_{i})/L_{i} > s*$), the energy consumption in the gap would be greater than $ LE_{i}(l_{i} +b_{i}) $. Therefore, we obtain the following lower bound on $E_{0,1}$. \begin{align*} E_{0,1} & \ge \sum \limits_{i \in A'} LE_{i}(l_{i} + \frac{a_{i}} {k}) + \sum \limits_{i \notin A'} LE_{i}(l_{i} + b_{i}) \\ & = \sum \limits_{i \in A'} (F(l_{i}) + P'(d) \frac{a_{i}} {k}) + \sum \limits_{i \notin A'} (F(l_{i}) + \frac{P(s*)}{s*} b_{i}) \\ & = \sum \limits_{i = 1}^n F(l_{i}) + P'(d) \sum \limits_{i \in A'} a_{i}/k + \frac {P(s*)}{s*} (B - \sum \limits_{i \in A'}b_{i}). \end{align*} If $\sum \limits_{i \in A'}b_{i} = B$, it implies that $\sum \limits_{i \in A'} \frac{a_{i}}{k} = \frac {\sum \limits_{i = 1}^n a_{i}} {2k}$, which contradicts our assumption that a solution of the partition problem does not exist. Therefore, $\sum \limits_{i \in A'}b_{i} < B$, which implies that $\sum \limits_{i \in A'} a_{i}/k < B$. The following calculation completes the proof of Case $1$.
\begin{align*} E_{0,1} & \ge \sum \limits_{i = 1}^n F(l_{i}) + P'(d) \sum \limits_{i \in A'} a_{i}/k + \frac {P(s*)}{s*} ( B - \sum \limits_{i \in A'} a_{i}/k) \\ & = \sum \limits_{i = 1}^n F(l_{i}) + B \frac {P(s*)}{s*} - (\sum \limits_{i \in A'} a_{i}/k) ( \frac {P(s*)}{s*} - P'(d)) \\ & > \sum \limits_{i = 1}^n F(l_{i}) + B \frac {P(s*)}{s*} - B( \frac {P(s*)}{s*} - P'(d)) \\ & = \sum \limits_{i = 1}^n F(l_{i}) + B P'(d) \end{align*} \\ \\ \textbf{Case 2.} $b_{i} \neq a_{i}/k$ for all $i \in A'$ \\ \\ In this case, the following calculation completes the proof. \begin{align*} E_{0,1} & = \sum \limits_{i \in A'} (F(l_{i}) + slope(l_{i}+b_{i}) * b_{i}) + \sum \limits_{i \notin A'} (F(l_{i}) + slope(l_{i}+b_{i}) * b_{i}) \\ & > \sum \limits_{i \in A'} F(l_{i}) + P'(d) \sum \limits_{i \in A'} b_{i} + \sum \limits_{i \notin A'} F(l_{i}) + P'(d) \sum \limits_{i \notin A'} b_{i} \\ & = \sum \limits_{i = 1}^n F(l_{i}) + B P'(d). \end{align*} $\Box$ \end{proof} \small \end{document}
\begin{document} \mathtoolsset{showonlyrefs,mathic} \begin{abstract} The \foreignlanguage{german}{Spiegelungssatz} is an inequality between the \(4\)-ranks of the narrow ideal class groups of the quadratic fields \(\Q(\sqrt{D})\) and \(\Q(\sqrt{-D})\). We provide a combinatorial proof of this inequality. Our interpretation gives an affine system of equations that allows us to describe precisely some equality cases. \end{abstract} \title{Spiegelungssatz: a combinatorial proof for the \(4\)-rank} \tableofcontents \section*{Introduction} Let \(\K\) be a quadratic field. Let \(\idK\) be the multiplicative group of fractional nonzero ideals of the ring of integers of \(\K\) and \(\prK\) be the subgroup of principal fractional ideals. We consider the subgroup \(\prK^+\) of \(\prK\), whose elements are the ones generated by an element with positive norm. The narrow class group \(\ClK^+\) of \(\K\) is the quotient \(\quotientdroite{\idK}{\prK^+}\). If \(\K\) is imaginary, this is the usual class group \(\ClK\coloneqq\quotientdroite{\idK}{\prK}\), whereas if \(\K\) is real, the group \(\ClK\) is a quotient of \(\ClK^+\). We have \(\ClK^+=\ClK\) if and only if the fundamental unit of \(\K\) has norm \(-1\). Otherwise, the cardinalities of these two groups differ by a factor \(2\). For more details about the relations between \(\ClK\) and \(\ClK^+\) we refer to~\cite[Section 3.1]{MR2726105}. The narrow class group being finite, we can define its \(p^k\)-rank for any power of a prime number \(p^k\) by \[ \rk_{p^k}(\K)\coloneqq\dim_{\F_p}\quotientdroite{\left(\ClK^+\right)^{p^{k-1}}}{\left(\ClK^+\right)^{p^{k}}}. \] In other words, \(\rk_{p^k}(\K)\) is the number of elementary divisors of \(\ClK^+\) divisible by \(p^k\). If \(\K=\Q(\sqrt{\Delta})\), the \emph{reflection} of \(\K\) is the quadratic field \(\reK\coloneqq\Q(\sqrt{-\Delta})\). Assume that \(\K\) is totally real. In \cite[Th\'eor\`emes II.9 and II.10]{MR0280466}, Damey \& Payan proved the following inequality (the so-called \emph{\foreignlanguage{german}{Spiegelungssatz}} for the \(4\)-rank, see~\cite{MR0096633}): \[ \rkq(\K)\leq\rkq(\reK)\leq\rkq(\K)+1. \] In this article, we provide a combinatorial proof of this \emph{\foreignlanguage{german}{Spiegelungssatz}} using expressions involving character sums due to Fouvry \& Kl\"uners \cite{MR2276261}. The letter \(D\) will always denote a positive, odd, squarefree integer. Let \(d_{\K}\) be the discriminant of the real quadratic field \(\K\) and \(d_{\K}^{\#}\) be the discriminant of the imaginary quadratic field \(\reK\). The usual computation of the discriminant allows us to consider three families of quadratic fields. These families are described in table~\ref{tab_link}. \begin{table}[h] \begin{tabular}{|Sc||Sc|Sc|Sc|} \hline \(d_{\K}\) & \(1\pmod{4}\) & \(0\pmod{8}\) & \(4\pmod{8}\)\\ \hline\hline \(d_{\K}\) & \(D\) & \(8D\) & \(4D\)\\ \hline \(d_{\K}^{\#}\) & \(-4D\) & \(-8D\) & \(-D\)\\ \hline \(d_{\K}^{\#}\) & \(4\pmod{8}\) & \(0\pmod{8}\) & \(1\pmod{4}\)\\ \hline \(D \) & \(1\pmod{4}\) & & \(-1\pmod{4}\)\\ \hline \(\K \) & \(\Q(\sqrt{D})\) & \(\Q(\sqrt{2D})\) & \(\Q(\sqrt{D})\)\\ \hline \end{tabular} \caption{Link between \(D\), \(d_{\K}\) and their reflections.} \label{tab_link} \end{table} We introduce for any integers \(u\) and \(v\) coprime with \(D\) the cardinality \[ \E(u,v)\coloneqq\#\{(a,b)\in\N^2\colon D=ab,\, ua\equiv\square\pmod{b},\, vb\equiv\square\pmod{a}\} \] where \(x\equiv\square\pmod{y}\) means that \(x\) is the square of an integer modulo \(y\).
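For small \(D\), the quantity \(\E(u,v)\) can be computed by brute force directly from this definition; the following short Python sketch (ours, purely for illustration) does so and can be used to experiment with the \(4\)-rank formulas recalled below.
\begin{verbatim}
def is_square_mod(x, y):
    # is x a square modulo y?  (every residue is a square modulo 1)
    return any((t * t - x) % y == 0 for t in range(y))

def E(u, v, D):
    # number of factorisations D = a*b with u*a a square mod b
    # and v*b a square mod a
    return sum(1 for a in range(1, D + 1)
               if D % a == 0
               and is_square_mod(u * a, D // a)
               and is_square_mod(v * (D // a), a))

# example: D = 21 = 3*7, so D = 1 (mod 4)
print([E(u, v, 21) for (u, v) in [(-1, 1), (1, 1), (2, 2)]])
\end{verbatim}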
Using table~\ref{tab_link}, we find in \cite{MR2276261} (where what the authors denote by \(D\) is what we denote by \(d_{\K}\) or \(d_{\K}^{\#}\)) the following expressions for the \(4\)-rank of \(\K\) and \(\reK\). \begin{enumerate}[\indent 1)] \item If \(d_{\K}\equiv 1\pmod{4}\), then \begin{equation*} 2^{\rkq(\K)}=\frac{1}{2}\E(-1,1) \end{equation*} \cite[Lemma 27]{MR2276261} and \begin{equation*} 2^{\rkq(\reK)}=\frac{1}{2}\left(\E(1,1)+\E(2,2)\right) \end{equation*} \cite[Lemma 40]{MR2276261} with \(D\equiv 1\pmod{4}\). \item If \(d_{\K}\equiv 0\pmod{8}\), then \begin{equation*} 2^{\rkq(\K)}=\frac{1}{2}\left(\E(-2,1)+\E(-1,2)\right) \end{equation*} \cite[Lemma 38]{MR2276261} and \begin{equation*} 2^{\rkq(\reK)}=\E(2,1) \end{equation*} \cite[Lemma 33]{MR2276261}. \item If \(d_{\K}\equiv 4\pmod{8}\), then \begin{equation*} 2^{\rkq(\K)}=\frac{1}{2}\left(\E(-1,1)+\E(-2,2)\right) \end{equation*} \cite[Lemma 42]{MR2276261} and \begin{equation*} 2^{\rkq(\reK)}=\frac{1}{2}\E(1,1) \end{equation*} \cite[Lemma 16]{MR2276261} with \(D\equiv 3\pmod{4}\). \end{enumerate}
\begin{rem} These expressions for \(2^{\rkq(\K)}\) and \(2^{\rkq(\reK)}\) either have one term or are a sum of two terms. If they have one term, it cannot be zero and this term is a power of \(2\). If they are a sum of two terms, we will show that each of these terms is either zero or a power of two; then, considering the solutions of the equation \(2^a=2^b+2^c\), we see that either exactly one of the two terms is zero or the two terms are equal. \end{rem}
To prove Damey \& Payan's \emph{\foreignlanguage{german}{Spiegelungssatz}}, it therefore suffices to prove the following three inequalities. \begin{enumerate}[\indent 1)] \item If \(D\equiv 1\pmod{4}\) then \begin{equation}\label{eq_DPun} \E(-1,1)\leq\E(1,1)+\E(2,2)\leq2\E(-1,1). \end{equation} \item For any \(D\), \begin{equation}\label{eq_DPdeux} \E(-2,1)+\E(-1,2)\leq 2\E(2,1)\leq2\E(-2,1)+2\E(-1,2). \end{equation} \item If \(D\equiv 3\pmod{4}\) then \begin{equation}\label{eq_DPtrois} \E(-1,1)+\E(-2,2)\leq \E(1,1)\leq2\E(-1,1)+2\E(-2,2). \end{equation} \end{enumerate}
In section~\ref{sec_charsum}, we establish a formula for \(\E(u,v)\) involving Jacobi characters. We average this formula over a group of order \(8\) generated by three permutations. We deduce properties for \(\E(u,v)\) from this formula. In section~\ref{sec_affine}, we give an interpretation of \(\E(u,v)\) in terms of the cardinality of an affine space. In particular, this shows that \(\E(u,v)\) is either \(0\) or a power of \(2\). Finally, in section~\ref{sec_spiegel}, we combine the character sum interpretation with the affine interpretation to deduce the \emph{\foreignlanguage{german}{Spiegelungssatz}}. We also prove the equality cases found by Uehara~\cite[Theorem 2]{MR987569} and give a new one.
\section{A character sum}\label{sec_charsum} Denote by \(\J{m}{n}\) the Jacobi symbol of \(m\) and \(n\), for any coprime odd integers \(m\) and \(n\). The letter \(p\) will always denote a prime number. For any integers \(s\), \(t\), \(u\) and \(v\) coprime with \(D\), we introduce the sum \[ \sigma_D(s,t,u,v)=\sum_{ab=D}\prod_{p\mid b}\left(\J{s}{p}+\J{ua}{p}\right)\prod_{p\mid a}\left(\J{t}{p}+\J{vb}{p}\right). \] We have \[ \sigma_D(1,1,u,v)=\sum_{ab=D}\prod_{p\mid b}\left(1+\J{ua}{p}\right)\prod_{p\mid a}\left(1+\J{vb}{p}\right)\eqqcolon S_D(u,v).
\] This last sum is nonnegative and related to our problem by the easy equality \begin{equation}\label{eq_lienES} \E(u,v)=2^{-\omega(D)}S_D(u,v) \end{equation} where \(\omega(D)\) stands for the number of prime divisors of \(D\). The aim of this section is to establish some properties of \(\sigma_D\). We note the symmetry relation \begin{equation}\label{eq_sym} \sigma_D(s,t,u,v)=\sigma_D(t,s,v,u) \end{equation} which gives \(S_D(u,v)=S_D(v,u)\). The factorisation \begin{equation}\label{eq_facto} \sigma_D(s,t,u,v)=\sum_{ab=D}\J{s}{b}\J{t}{a}\prod_{p\mid b}\left(1+\J{sua}{p}\right)\prod_{p\mid a}\left(1+\J{tvb}{p}\right) \end{equation} implies the upper bound \begin{equation}\label{eq_huitL} \abs*{\sigma_D(s,t,u,v)}\leq S_D(su,tv). \end{equation} Finally, we shall use the elementary formula \begin{equation}\label{eq_fmag} 2(-1)^{xy+yz+zx}=(-1)^x+(-1)^y+(-1)^z-(-1)^{x+y+z} \end{equation} valid for any integers \(x,y\) and \(z\). We introduce the element \(\be(n)\in\F_2\) by \[ \J{-1}{n}=(-1)^{\be(n)}. \] If \(m\) and \(n\) are coprime, the multiplicativity of the Jacobi symbol gives \(\be(m)+\be(n)=\be(mn)\). With this notation the quadratic reciprocity law reads \begin{equation}\label{eq_recqua} \J{m}{n}\J{n}{m}=(-1)^{\be(m)\be(n)}. \end{equation} We shall combine~\eqref{eq_fmag} and~\eqref{eq_recqua} to get the linearisation formula \[ 2\J{x}{y}\J{y}{z}\J{z}{x}\J{x}{z}\J{z}{y}\J{y}{x}=\J{-1}{x}+\J{-1}{y}+\J{-1}{z}-\J{-1}{xyz}. \] \begin{lem}\label{lem_septL} For any integers \(s,t,u,v\) coprime with \(D\), the following equality \[ \sigma_D(s,t,u,v)=\sum_{abcd=D}(-1)^{\be(c)\be(d)}\J{a}{d}\J{b}{c}\J{s}{b}\J{t}{a}\J{u}{d}\J{v}{c} \] holds. \end{lem} \begin{proof} By bimultiplicativity of the Jacobi symbol, equation~\eqref{eq_facto} gives \[ \sigma_D(s,t,u,v)=\sum_{ab=D}\J{s}{b}\J{t}{a}\sum_{d\mid b}\J{usa}{d}\sum_{c\mid a}\J{tvb}{c}. \] By the change of variables \((a,b,c,d)=(\alpha\gamma,\beta\delta,\gamma,\delta)\), we get \[ \sigma_D(s,t,u,v)=\sum_{D=\alpha\beta\gamma\delta}\J{s}{\beta}\J{u}{\delta}\J{v}{\gamma}\J{t}{\alpha}\J{\gamma}{\delta}\J{\delta}{\gamma}\J{\alpha}{\delta}\J{\beta}{\gamma} \] and we conclude using the quadratic reciprocity law~\eqref{eq_recqua} to \(\J{\gamma}{\delta}\J{\delta}{\gamma}\). \end{proof} To build symmetry, we average the formula in lemma~\ref{lem_septL} over an order \(8\) group, namely the group generated by three permutations: the permutation \((a,d)\), the permutation \((b,c)\) and the permutation \(\left((a,b),(d,c)\right)\). The quadratic reciprocity law allows to factorise the term \((-1)^{\be(c)\be(d)}\J{a}{d}\J{b}{c}\) in every transformed sum and then to see \(u\) and \(v\) as describing the action of each permutation. \begin{prop}\label{prop_Spermu} For any integers \(s,t,u,v\) coprime with \(D\), the following equality \begin{align*} 8S_D(u,v) &= \sum_{abcd=D}(-1)^{\be(c)\be(d)}\J{a}{d}\J{b}{c}\times \\% \Bigl[2\J{u}{d}\J{v}{c} &+ \J{u}{a}\J{v}{c}\left(\J{-1}{a}+\J{-1}{c}+\J{-1}{d}-\J{-1}{acd}\right)\\ &+ \J{u}{d}\J{v}{b}\left(\J{-1}{b}+\J{-1}{c}+\J{-1}{d}-\J{-1}{bcd}\right)\\ &+ \J{u}{a}\J{v}{b}\left(1+\J{-1}{ac}+\J{-1}{bd}-\J{-1}{D}\right)\Bigr]. \end{align*} holds. \end{prop} \begin{proof} From lemma~\ref{lem_septL} follows \begin{equation}\label{eq_deper} S_D(u,v)=\sum_{abcd=D}(-1)^{\be(c)\be(d)}\J{a}{d}\J{b}{c}\J{u}{d}\J{v}{c}. 
\end{equation} We permute \(a\) and \(d\) and use the quadratic reciprocity law~\eqref{eq_recqua} to obtain \begin{multline*} S_D(u,v)=\sum_{abcd=D}(-1)^{\be(c)\be(d)}\J{a}{d}\J{b}{c}\J{u}{a}\J{v}{c}\\\times(-1)^{\be(c)\be(d)+\be(d)\be(a)+\be(a)\be(c)}. \end{multline*} Formula~\eqref{eq_fmag} gives \begin{multline}\label{eq_permun} 2S_D(u,v)=\sum_{abcd=D}(-1)^{\be(c)\be(d)}\J{a}{d}\J{b}{c}\J{u}{a}\J{v}{c}\\\times \left(\J{-1}{a}+\J{-1}{c}+\J{-1}{d}-\J{-1}{acd}\right). \end{multline} Similarly, we permute \(b\) and \(c\), then use the quadratic reciprocity law~\eqref{eq_recqua} and formula~\eqref{eq_fmag} to get \begin{multline}\label{eq_permdeux} 2S_D(u,v)=\sum_{abcd=D}(-1)^{\be(c)\be(d)}\J{a}{d}\J{b}{c}\J{u}{d}\J{v}{b}\\\times \left(\J{-1}{b}+\J{-1}{c}+\J{-1}{d}-\J{-1}{bcd}\right). \end{multline} Finally, we permute \((a,b)\) and \((d,c)\), and apply the quadratic reciprocity law~\eqref{eq_recqua} twice to get \begin{multline} 2S_D(u,v)=\sum_{abcd=D}(-1)^{\be(c)\be(d)}\J{a}{d}\J{b}{c}\J{u}{a}\J{v}{b}\\\times (-1)^{\be(c)\be(d)+\be(b)\be(a)+\be(a)\be(d)+\be(b)\be(c)}. \end{multline} Since \(\be(c)\be(d)+\be(b)\be(a)+\be(a)\be(d)+\be(b)\be(c)=\be(ac)\be(bd)\), using formula~\eqref{eq_fmag} with \(z=0\) we get \begin{multline}\label{eq_permtrois} 2S_D(u,v)=\sum_{abcd=D}(-1)^{\be(c)\be(d)}\J{a}{d}\J{b}{c}\J{u}{a}\J{v}{b}\\\times \left(1+\J{-1}{ac}+\J{-1}{bd}-\J{-1}{D}\right). \end{multline} We obtain the result by adding twice~\eqref{eq_deper} to the sum of~\eqref{eq_permun}, \eqref{eq_permdeux} and~\eqref{eq_permtrois}. \end{proof}
When two expressions are equivalent under the action of the symmetry group, we get an identity. We give two such formulas in the next two corollaries. \begin{cor}\label{cor_dun} If \(D\equiv 1\pmod{4}\) then \(S_D(-1,1)=S_D(1,1)\). \end{cor} \begin{proof} For any \(D\), we obtain from proposition~\ref{prop_Spermu} the formula \begin{multline}\label{eq_difSD} 8\left(S_D(1,1)-S_D(-1,1)\right)=\sum_{abcd=D}(-1)^{\be(c)\be(d)}\J{a}{d}\J{b}{c}\times\\ \left(1-\J{-1}{D}\right)\left(1+\J{-1}{b}\right)\left(1+\J{-1}{c}\right). \end{multline} This gives the result since \(\J{-1}{D}=1\) if \(D\equiv 1\pmod{4}\). \end{proof} \begin{cor}\label{cor_egFdpm} If \(D\equiv{3}\pmod{4}\) then \(S_D(1,1)=2S_D(-1,1)\). \end{cor} \begin{proof} For any \(D\), proposition~\ref{prop_Spermu} gives \begin{multline*} 8S_D(1,-1)=\sum_{abcd=D}(-1)^{\be(c)\be(d)}\J{a}{d}\J{b}{c}\Bigl[ 2+\J{-1}{b}+2\J{-1}{c}+\\ \J{-1}{d}+\J{-1}{ac}+\J{-1}{bd}+\J{-1}{bc}-\J{-1}{ad}+\J{-1}{abc}-\J{-1}{acd} \Bigr]. \end{multline*} Thanks to~\eqref{eq_difSD}, we deduce for any \(D\) the equality \begin{multline*} -8\left(S_D(1,1)-S_D(-1,1)-S_D(1,-1)\right) = \sum_{abcd=D}(-1)^{\be(c)\be(d)}\J{a}{d}\J{b}{c}\times \\% \Biggl[1+\J{-1}{c}+\J{-1}{d}+\J{-1}{ac}+\J{-1}{bd}+\J{-1}{abc}+\J{-1}{abd}+\J{-1}{D}\Biggr]. \end{multline*} It follows that \begin{multline*} -8\left(S_D(1,1)-S_D(-1,1)-S_D(1,-1)\right) = \sum_{abcd=D}(-1)^{\be(c)\be(d)}\J{a}{d}\J{b}{c}\times \\% \left(1+\J{-1}{D}\right)\left(1+\J{-1}{c}+\J{-1}{d}+\J{-1}{ac}\right). \end{multline*} This finishes the proof since \(\J{-1}{D}=-1\) if \(D\equiv 3\pmod{4}\). \end{proof}
Finally, having dealt with equalities, we shall need the following inequalities. \begin{lem}\label{lem_ineq} For any \(D\), for any \(u\) coprime with \(D\), the following inequalities \[ S_D(u,1)\leq S_D(-u,1)+S_D(u,-1)\leq 2S_D(u,1) \] hold. \end{lem} \begin{proof} We first prove the inequality \begin{equation}\label{eq_majoam} S_D(-u,1)+S_D(u,-1)\leq 2S_D(u,1).
\end{equation} With proposition~\ref{prop_Spermu}, we write \begin{multline} 8\left(S_D(-u,1)+S_D(u,-1)\right)=\sum_{abcd=D}(-1)^{\be(c)\be(d)}\J{a}{d}\J{b}{c}\times\\ \Biggl[2\J{u}{d}\left(1+\J{-1}{c}+\J{-1}{d}+\J{-1}{bd}\right)\\ +\J{u}{a}\Biggl(2+\J{-1}{a}+\J{-1}{b}+\J{-1}{c}+\J{-1}{d}+2\J{-1}{ac}\\+\J{-1}{abd}+\J{-1}{abc}-\J{-1}{acd}-\J{-1}{bcd}\Biggr) \Biggr]. \end{multline} Using \begin{equation}\label{eq_xyzabc} \J{-1}{xyz}=\J{-1}{D}\J{-1}{t} \end{equation} for any \(\{x,y,z,t\}=\{a,b,c,d\}\) together with~\eqref{eq_deper} and lemma~\ref{lem_septL} we deduce \begin{multline*} 8\left(S_D(-u,1)+S_D(u,-1)\right)=2\left(S_D(u,1)+S_D(u,-1)+S_D(-u,1)\right)\\ +2\left(\sigma_D(-1,1,-u,1)+\sigma_D(1,u,1,1)+\sigma_D(1,-u,1,-1)\right)\\% +\left(1-\J{-1}{D}\right)\left(\sigma_D(1,-u,1,1)+\sigma_D(-1,u,1,1)\right)\\% +\left(1+\J{-1}{D}\right)\left(\sigma_D(1,u,1,-1)+\sigma_D(1,u,-1,1)\right). \end{multline*} Since \(1-\J{-1}{D}\) and \(1+\J{-1}{D}\) are nonnegative, the upper bound~\eqref{eq_huitL} gives \[ 8\left(S_D(-u,1)+S_D(u,-1)\right)\leq 4\left(2S_D(u,1)+S_D(u,-1)+S_D(-u,1)\right) \] hence~\eqref{eq_majoam}. We next prove the inequality \begin{equation}\label{eq_minoam} S_D(u,1)\leq S_D(-u,1)+S_D(u,-1). \end{equation} As for~\eqref{eq_majoam}, we use equation~\eqref{eq_xyzabc}, proposition~\ref{prop_Spermu}, equation~\eqref{eq_deper} and lemma~\ref{lem_septL} to get \begin{multline*} 8S_D(u,1)=2S_D(u,1)+S_D(u,-1)+S_D(-u,1)\\ +\sigma_D(1,-u,1,1)+\sigma_D(1,u,1,-1)+\sigma_D(1,u,-1,1)+\sigma_D(-1,1,u,1)\\ +\left(1+\J{-1}{D}\right)\sigma_D(1,-u,1,-1)+\left(1-\J{-1}{D}\right)\sigma_D(1,u,1,1)\\% -\J{-1}{D}\bigl(\sigma_D(-1,u,1,1)+\sigma_D(1,-1,u,1)\bigr). \end{multline*} Then~\eqref{eq_huitL} leads to \[ 8S_D(u,1)\leq4\left(S_D(u,1)+S_D(u,-1)+S_D(-u,1)\right) \] hence~\eqref{eq_minoam}. \end{proof}
\section{An affine interpretation}\label{sec_affine} We write \(p_1<\dotsm<p_{\omega(D)}\) for the prime divisors of \(D\) and define a bijection between the set of divisors \(a\) of \(D\) and the set of sequences \((x_i)_{1\leq i\leq\omega(D)}\) in \(\F_2^{\omega(D)}\) by \[ x_i=\begin{cases} 1 & \text{if \(p_i\mid a\)}\\ 0 & \text{otherwise}. \end{cases} \] Let \(a\) and \(b\) satisfy \(D=ab\), and let \(u\) and \(v\) be two integers coprime with \(D\). We extend the notation of the previous section by writing \[ \J{a}{b}=(-1)^{\alpha(a,b)}=(-1)^{\beta_a(b)} \] with \(\alpha(a,b)=\beta_a(b)\in\F_2\). The condition that \(vb\) is a square modulo \(a\) is equivalent to \(\J{vb}{p}=1\) for any prime divisor \(p\) of \(a\), that is \[ \J{v}{p_i}\prod_{j\colon x_j=0}\J{p_j}{p_i}=1 \] for any \(i\) such that \(x_i=1\). With our notation, this gives \[ \forall i,\, x_i=1\Longrightarrow (-1)^{\beta_v(p_i)}(-1)^{\sum_{j\colon x_j=0}\alpha(p_j,p_i)}=1. \] We rewrite this as \[ \forall i,\, x_i=1\Longrightarrow (-1)^{\beta_v(p_i)}(-1)^{\sum_{j\neq i}(1-x_j)\alpha(p_j,p_i)}=1 \] and so \begin{equation}\label{eq_condun} \forall i,\, x_i\beta_v(p_i)+\sum_{j\neq i}x_i(1-x_j)\alpha(p_j,p_i)=0. \end{equation} Similarly, the condition that \(ua\) is a square modulo \(b\) is equivalent to \begin{equation}\label{eq_conddeux} \forall i,\, (1-x_i)\beta_u(p_i)+\sum_{j\neq i}(1-x_i)x_j\alpha(p_j,p_i)=0. \end{equation} Since \(x_i\) is either \(0\) or \(1\), equations \eqref{eq_condun} and \eqref{eq_conddeux} are equivalent to their sum. We deduce the following lemma.
\begin{lem}\label{lem_cardi} The cardinality \(\E(u,v)\) is the cardinality of the affine space \(\Af(u,v)\) in \(\F_2^{\omega(D)}\) of equations \[ \left(\beta_u(p_i)+\beta_v(p_i)+\sum_{j\neq i}\alpha(p_j,p_i)\right)x_i+\sum_{j\neq i}\alpha(p_j,p_i)x_j=\beta_u(p_i) \] for all \(i\in\{1,\dotsc,\omega(D)\}\). \end{lem} \begin{rem} In particular, lemma~\ref{lem_cardi} shows that \(\E(u,v)\) if not zero is a power of \(2\), the power being the dimension of the direction of \(\Af(u,v)\). This is not \emph{a priori} obvious. \end{rem} \begin{rem} This interpretation slightly differs from the one found by Redei~\cite{0009.05101,MR759260}. The matrix with coefficients in \(\F_2\) associated to our affine space is \((a_{ij})_{1\leq i,j\leq\omega(D)}\) with \[ a_{ij}=\begin{dcases} \alpha(p_j,p_i) & \text{ if \(i\neq j\)}\\ \beta_u(p_i)+\beta_v(p_i)+\sum_{\ell\neq i}\alpha(p_\ell,p_i) & \text{ if \(i=j\)} \end{dcases} \] whereas the matrix considered by Redei is \((\widetilde{a}_{ij})_{1\leq i,j\leq\omega(D)}\) with \[ \widetilde{a}_{ij}=\begin{dcases} \alpha(p_j,p_i) & \text{ if \(i\neq j\)}\\ \omega(D)+1+\sum_{\ell\neq i}\alpha(p_\ell,p_i) & \text{ if \(i=j\).} \end{dcases} \] \end{rem} \begin{cor}\label{cor_tard} For any \(D\), we have \(S_D(1,1)\neq 0\) and, either \(S_D(2,2)=0\) or \(S_D(2,2)=S_D(1,1)\). \end{cor} \begin{proof} The affine space \(\Af(2,2)\) has equations \[ \left(\sum_{j\neq i}\alpha(p_j,p_i)\right)x_i+\sum_{j\neq i}\alpha(p_j,p_i)x_j=\beta_2(p_i) \] for all \(i\in\{1,\dotsc,\omega(D)\}\). The affine space \(\Af(1,1)\) has equations \[ \left(\sum_{j\neq i}\alpha(p_j,p_i)\right)x_i+\sum_{j\neq i}\alpha(p_j,p_i)x_j=0 \] for all \(i\in\{1,\dotsc,\omega(D)\}\). Hence, both spaces have the same direction, and same dimension. The space \(\Af(1,1)\) is not empty: it contains \((1,\dotsc,1)\). Its cardinality is then \(2^{\dim_{\F_2}\Af(1,1)}\). The affine space \(\Af(2,2)\) might be empty and, if it is not, then its cardinality is \(2^{\dim_{\F_2}\Af(2,2)}=2^{\dim_{\F_2}\Af(1,1)}\). It follows that \(\E(1,1)\neq 0\) and, either \(\E(2,2)=0\) or \(\E(2,2)=\E(1,1)\). We finish the proof thanks to~\eqref{eq_lienES}. \end{proof} \begin{cor}\label{cor_plustard} For any \(D\), we have \(S_D(-1,1)\neq 0\) and, either \(S_D(-2,2)=0\) or \(S_D(-2,2)=S_D(-1,1)\). \end{cor} \begin{proof} Since \(\beta_{-2}(p_i)+\beta_{2}(p_i)=\beta_{-1}(p_i)\), the affine space \(\Af(-2,2)\) has equations \[ \left(\beta_{-1}(p_i)+\sum_{j\neq i}\alpha(p_j,p_i)\right)x_i+\sum_{j\neq i}\alpha(p_j,p_i)x_j=\beta_{-2}(p_i) \] for all \(i\in\{1,\dotsc,\omega(D)\}\). The affine space \(\Af(-1,1)\) has equations \[ \left(\beta_{-1}(p_i)+\sum_{j\neq i}\alpha(p_j,p_i)\right)x_i+\sum_{j\neq i}\alpha(p_j,p_i)x_j=\beta_{-1}(p_i) \] for all \(i\in\{1,\dotsc,\omega(D)\}\). Hence, both spaces have the same direction, and same dimension. The space \(\Af(-1,1)\) is not empty: it contains \((1,\dotsc,1)\). It follows that \(\E(-1,1)\neq 0\) and, either \(\E(-2,2)=0\) or \(\E(-2,2)=\E(-1,1)\). We finish the proof thanks to~\eqref{eq_lienES}. \end{proof} \section{Damey-Payan Spiegelungssatz}\label{sec_spiegel} \subsection{Proof of the Spiegelungssatz} We have to prove~\eqref{eq_DPun}, \eqref{eq_DPdeux} and \eqref{eq_DPtrois}. Consider the case \(d_{\K}\equiv 1\pmod{4}\). Recall that \(D=d_{\K}\). Thanks to~\eqref{eq_lienES}, equation~\eqref{eq_DPun} is \[ S_D(-1,1)\leq S_D(1,1)+S_D(2,2)\leq 2S_D(-1,1) \] for any \(D\equiv 1\pmod{4}\). 
By corollary~\ref{cor_dun}, this inequality is equivalent to \(S_D(2,2)\leq S_D(1,1)\) and this last inequality is implied by corollary~\ref{cor_tard}. Consider the case \(d_{\K}\equiv 0\pmod{8}\). Recall that \(D=d_{\K}/8\). Thanks to~\eqref{eq_lienES}, equation~\eqref{eq_DPdeux} is \[ S_D(2,1)\leq S_D(-2,1)+S_D(2,-1)\leq 2S_D(2,1) \] for any \(D\). This is implied by lemma~\ref{lem_ineq} with \(u=2\). Finally, consider the case \(d_{\K}\equiv 4\pmod{8}\). Recall that \(D=d_{\K}/4\). Thanks to~\eqref{eq_lienES}, equation~\eqref{eq_DPtrois} is \[ S_D(-1,1)+S_D(-2,2)\leq S_D(1,1)\leq2S_D(-1,1)+2S_D(-2,2) \] for any \(D\equiv 3\pmod{4}\). By corollary~\ref{cor_egFdpm}, this inequality is equivalent to \(S_D(-2,2)\leq S_D(-1,1)\) and this last inequality is implied by corollary~\ref{cor_plustard}.
\subsection{Some equality cases} It is clear from our previous computations that \begin{itemize} \item if \(d_{\K}\equiv 1\pmod{4}\) then \[ \rkq(\reK)=\begin{cases} \rkq(\K) & \text{if \(\E(2,2)=0\)}\\ \rkq(\K)+1 & \text{otherwise;} \end{cases} \] \item if \(d_{\K}\equiv 4\pmod{8}\) then \[ \rkq(\reK)=\begin{cases} \rkq(\K)+1 & \text{if \(\E(-2,2)=0\)}\\ \rkq(\K) & \text{otherwise.} \end{cases} \] \end{itemize} We do not have such a clear criterion in the case \(d_{\K}\equiv 0\pmod{8}\). The reason is that our study of the cases \(d_{\K}\equiv 1\pmod{4}\) and \(d_{\K}\equiv 4\pmod{8}\) rests on equalities (corollaries~\ref{cor_dun}, \ref{cor_egFdpm}, \ref{cor_tard} and \ref{cor_plustard}), whereas our study of the case \(d_{\K}\equiv 0\pmod{8}\) rests on inequalities (lemma~\ref{lem_ineq} and mainly equation~\eqref{eq_huitL}). We study some special cases more explicitly by proving the following proposition due to Uehara~\cite[Theorem 2]{MR987569} (case~\ref{item_new} seems to be new).
\begin{thm} Let \(\K\) be a real quadratic field of discriminant \(d_{\K}\) and let \(D\) be as described in table~\ref{tab_link}. Suppose that every prime divisor of \(D\) is congruent to \(\pm 1\) modulo \(8\). Then \begin{enumerate}[\indent a)] \item If \(d_{\K}\equiv 1\pmod{4}\), then \(\rkq(\reK)=\rkq(\K)+1\). \item If \(d_{\K}\equiv 0\pmod{8}\) and \(D\equiv -1\pmod{4}\), then \(\rkq(\reK)=\rkq(\K)+1\). \item\label{item_new} If \(d_{\K}\equiv 0\pmod{8}\) and \(D\equiv 1\pmod{4}\), then \(\rkq(\reK)=\rkq(\K)\). \item If \(d_{\K}\equiv 4\pmod{8}\), then \(\rkq(\K)=\rkq(\reK)\). \end{enumerate} \end{thm}
\begin{proof} Since every prime divisor of \(D\) is congruent to \(\pm 1\) modulo \(8\), we have \(\beta_2(p_i)=0\) for any \(i\). \begin{itemize} \item If \(d_{\K}\equiv 1\pmod{4}\), then \(D\equiv 1\pmod{4}\). By lemma~\ref{lem_cardi}, we know that \(\E(2,2)\) is the cardinality of an affine space having equations \[ \sum_{j\neq i}\alpha(p_j,p_i)(x_i+x_j)=0\quad(1\leq i\leq\omega(D)) \] hence it is nonzero (\(x_i=1\) for every \(i\) gives a solution). \item If \(d_{\K}\equiv 0\pmod{8}\), then \[ 2^{\rkq(\reK)-\rkq(\K)}=\frac{2\E(2,1)}{\E(-2,1)+\E(-1,2)}. \] Since \(\beta_{-2}(p_i)=\beta_{-1}(p_i)\) for any \(i\), lemma~\ref{lem_cardi} shows that \(\E(-2,1)=\E(-1,2)=\E(-1,1)\). Lemma~\ref{lem_cardi} also shows that \(\E(2,1)=\E(1,1)\), hence \[ 2^{\rkq(\reK)-\rkq(\K)}=\frac{\E(1,1)}{\E(-1,1)}. \] If \(D\equiv -1\pmod{4}\), corollary~\ref{cor_egFdpm} implies that \[ 2^{\rkq(\reK)-\rkq(\K)}=2 \] whereas, if \(D\equiv 1\pmod{4}\), corollary~\ref{cor_dun} implies that \[ 2^{\rkq(\reK)-\rkq(\K)}=1. \] \item If \(d_{\K}\equiv 4\pmod{8}\), then \(D\equiv -1\pmod{4}\).
By lemma~\ref{lem_cardi}, we know that \(\E(-2,2)\) is the cardinality of an affine space having equations \[ \beta_{-1}(p_i)x_i+\sum_{j\neq i}\alpha(p_j,p_i)(x_i+x_j)=\beta_{-1}(p_i)\quad(1\leq i\leq\omega(D)) \] hence it is nonzero (\(x_i=1\) for every \(i\) gives a solution). \end{itemize} \end{proof}
\begin{rem} Probabilistic results have been given by Gerth~\cite{MR1838376} and, for a more natural probability, by Fouvry \& Klüners in~\cite{MR2679097}. Among other results, Fouvry \& Klüners prove that \begin{multline*} \lim_{X\to+\infty}\frac{\#\left\{d_{\K}\in\mathcal{D}(X)\colon \rkq(\reK)=s \vert \rkq(\K)=r\right\}}{\#\mathcal{D}(X)}\\=\begin{cases} 1-2^{-r-1}&\text{if \(r=s\)}\\ 2^{-r-1}&\text{if \(r=s-1\)}\\ 0&\text{otherwise.} \end{cases} \end{multline*} where \(\mathcal{D}(X)\) is the set of fundamental discriminants in \(]0,X]\). \end{rem} \end{document}
\begin{document} \title[Certain character sums and hypergeometric series] {Certain character sums and hypergeometric series} \author{Rupam Barman} \address{Department of Mathematics, Indian Institute of Technology Guwahati, North Guwahati, Guwahati-781039, Assam, INDIA} \curraddr{} \email{[email protected]} \author{Neelam Saikia} \address{Department of Mathematics, Indian Institute of Technology Guwahati, North Guwahati, Guwahati-781039, Assam, INDIA} \curraddr{} \email{[email protected]} \thanks{} \subjclass[2010]{33E50, 33C99, 11S80, 11T24.} \date{13th February 2018} \keywords{character sum; hypergeometric series; $p$-adic gamma function.} \thanks{Acknowledement: This work is partially supported by a start up grant of the first author awarded by Indian Institute of Technology Guwahati. The second author acknowledges the financial support of Department of Science and Technology, Government of India for supporting a part of this work under INSPIRE Faculty Fellowship.} \begin{abstract} We prove two transformations for the $p$-adic hypergeometric series which can be described as $p$-adic analogues of a Kummer's linear transformation and a transformation of Clausen. We first evaluate two character sums, and then relate them to the $p$-adic hypergeometric series to deduce the transformations. We also find another transformation for the $p$-adic hypergeometric series from which many special values of the $p$-adic hypergeometric series as well as finite field hypergeometric functions are obtained. \end{abstract} \maketitle \section{Introduction and statement of results} For a complex number $a$, the rising factorial or the Pochhammer symbol is defined as $(a)_0=1$ and $(a)_k=a(a+1)\cdots (a+k-1), ~k\geq 1$. For a non-negative integer $r$, and $a_i, b_i\in\mathbb{C}$ with $b_i\notin\{\ldots, -3,-2,-1\}$, the classical hypergeometric series ${_{r+1}}F_{r}$ is defined by \begin{align} {_{r+1}}F_{r}\left(\begin{array}{cccc} a_1, & a_2, & \ldots, & a_{r+1} \\ & b_1, & \ldots, & b_r \end{array}| \lambda \right):=\sum_{k=0}^{\infty}\frac{(a_1)_k\cdots (a_{r+1})_k}{(b_1)_k\cdots(b_r)_k}\cdot\frac{\lambda^k}{k!},\notag \end{align} which converges for $|\lambda|<1$. Throughout the paper $p$ denotes an odd prime and $\mathbb{F}_q$ denotes the finite field with $q$ elements, where $q=p^r, r\geq 1$. Greene \cite{greene} introduced the notion of hypergeometric functions over finite fields analogous to the classical hypergeometric series. Finite field hypergeometric series were developed mainly to simplify character sum evaluations. Let $\widehat{\mathbb{F}_q^{\times}}$ be the group of all multiplicative characters on $\mathbb{F}_q^{\times}$. We extend the domain of each $\chi\in \widehat{\mathbb{F}_q^{\times}}$ to $\mathbb{F}_q$ by setting $\chi(0)=0$ including the trivial character $\varepsilon$. For multiplicative characters $A$ and $B$ on $\mathbb{F}_q$, the binomial coefficient ${A \choose B}$ is defined by \begin{align}\label{eq-0} {A \choose B}:=\frac{B(-1)}{q}J(A,\overline{B})=\frac{B(-1)}{q}\sum_{x \in \mathbb{F}_q}A(x)\overline{B}(1-x), \end{align} where $J(A, B)$ denotes the usual Jacobi sum and $\overline{B}$ is the character inverse of $B$. Let $n$ be a positive integer. 
For characters $A_0, A_1,\ldots, A_n$ and $B_1, B_2,\ldots, B_n$ on $\mathbb{F}_q$, Greene defined the ${_{n+1}}F_n$ finite field hypergeometric functions over $\mathbb{F}_q$ by \begin{align} {_{n+1}}F_n\left(\begin{array}{cccc} A_0, & A_1, & \ldots, & A_n\\ & B_1, & \ldots, & B_n \end{array}\mid x \right)_q =\frac{q}{q-1}\sum_{\chi\in \widehat{\mathbb{F}_q^\times}}{A_0\chi \choose \chi}{A_1\chi \choose B_1\chi} \cdots {A_n\chi \choose B_n\chi}\chi(x).\notag \end{align} \par Some of the biggest motivations for studying finite field hypergeometric functions have been their connections with Fourier coefficients and eigenvalues of modular forms and with counting points on certain kinds of algebraic varieties. Their links to Fourier coefficients and eigenvalues of modular forms are established by many authors, for example, see \cite{ahlgren, evans-mod, frechette, fuselier, Fuselier-McCarthy, lennon, mccarthy4, mortenson}. Very recently, McCarthy and Papanikolas \cite{mc-papanikolas} linked the finite field hypergeometric functions to Siegel modular forms. It is well-known that finite field hypergeometric functions can be used to count points on varieties over finite fields. For example, see \cite{BK, BK1, fuselier, koike, lennon2, ono, salerno, vega}. \par Since the multiplicative characters on $\mathbb{F}_q$ form a cyclic group of order $q-1$, a condition like $q\equiv 1 \pmod{\ell}$ must be satisfied where $\ell$ is the least common multiple of the orders of the characters appeared in the hypergeometric function. Consequently, many results involving these functions are restricted to primes in certain congruence classes. To overcome these restrictions, McCarthy \cite{mccarthy1, mccarthy2} defined a function ${_{n}}G_{n}[\cdots]_q$ in terms of quotients of the $p$-adic gamma function which can best be described as an analogue of hypergeometric series in the $p$-adic setting (defined in Section 2). \par Many transformations exist for finite field hypergeometric functions which are analogues of certain classical results \cite{greene, mccarthy3}. Results involving finite field hypergeometric functions can readily be converted to expressions involving ${_{n}}G_{n}[\cdots]$. However these new expressions in ${_{n}}G_{n}[\cdots]$ will be valid for the same set of primes for which the original expressions involving finite field hypergeometric functions existed. It is a non-trivial exercise to then extend these results to almost all primes. There are very few identities and transformations for the $p$-adic hypergeometric series ${_{n}}G_{n}[\cdots]_q$ which exist for all but finitely many primes (see for example \cite{BS2, BS1, BSM}). Recently, Fuselier and McCarthy \cite{Fuselier-McCarthy} proved certain transformations for ${_{n}}G_{n}[\cdots]_q$, and used them to establish a supercongruence conjecture of Rodriguez-Villegas between a truncated ${_4}F_3$ hypergeometric series and the Fourier coefficients of a certain weight four modular form. \par Let $\chi_4$ be a character of order $4$. Then a finite field analogue of ${_2F}_{1}\left(\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ &1 \end{array}|x\right)$ is the function ${_2F}_{1}\left(\begin{array}{cc} \chi_4, & \chi_4^3 \\ &\varepsilon \end{array}|x\right)$. 
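Since the defining sum is finite, such functions can be evaluated directly from the definition above together with \eqref{eq-0}. The following Python sketch is only illustrative (the prime $p=13$ and the realisation of characters through discrete logarithms with respect to a fixed generator are our choices, not part of the cited works); it evaluates the finite field function just mentioned for a few values of $x$.
\begin{verbatim}
import cmath

p = 13                                   # arbitrary prime with p % 4 == 1, so chi4 exists
g = next(a for a in range(2, p)
         if len({pow(a, k, p) for k in range(p - 1)}) == p - 1)   # a generator of F_p^*
dlog = {pow(g, k, p): k for k in range(p - 1)}                    # discrete logarithm table
zeta = cmath.exp(2j * cmath.pi / (p - 1))

def chi(j, x):
    # multiplicative character of index j (chi_j(g^k) = zeta^(j*k)), extended by chi_j(0) = 0
    x %= p
    return 0 if x == 0 else zeta ** ((j * dlog[x]) % (p - 1))

def binom(a, b):
    # Greene's binomial coefficient (A choose B) = B(-1)/p * sum_x A(x) * conj(B)(1 - x)
    return chi(b, -1) / p * sum(chi(a, x) * chi(-b, 1 - x) for x in range(p))

def F21(a0, a1, b1, x):
    # Greene's 2F1(A0, A1; B1 | x)_p = p/(p-1) * sum over chi of (A0chi|chi)(A1chi|B1chi) chi(x)
    return p / (p - 1) * sum(binom(a0 + j, j) * binom(a1 + j, b1 + j) * chi(j, x)
                             for j in range(p - 1))

q4 = (p - 1) // 4                        # index of a character chi4 of order 4
for x in (2, 3, 5):
    val = F21(q4, 3 * q4, 0, x)          # 2F1(chi4, chi4^3; epsilon | x)_p
    print(x, round(val.real, 6), round(val.imag, 6))
\end{verbatim}
Here $\chi_4$ is taken to be the character of index $(p-1)/4$ with respect to the chosen generator, and $\varepsilon$ is the character of index $0$.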
Using the relation between finite field hypergeometric functions and ${_n}G_n$-functions as given in Proposition \ref{prop-301} in Section 3, the function ${_2G}_{2}\left[\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \end{array}|\dfrac{1}{x}\right]_q$ can be described as a $p$-adic analogue of the classical hypergeometric series ${_2F}_{1}\left(\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ &1 \end{array}|x\right)$. In this article, we prove the following transformation for the $p$-adic hypergeometric series which can be described as a $p$-adic analogue of the Kummer's linear transformation \cite[p. 4, Eq. (1)]{bailey}. Let $\varphi$ be the quadratic character on $\mathbb{F}_q$. \begin{theorem}\label{MT-1} Let $p$ be an odd prime and $x\in \mathbb{F}_q$. Then, for $x\neq 0, 1$, we have \begin{align} {_2G}_{2}\left[\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \end{array}|\frac{1}{x}\right]_q=\varphi(-2){_2G}_{2}\left[\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \end{array}|\frac{1}{1-x}\right]_q.\notag \end{align} \end{theorem} We note that the finite field analogue of Kummer's linear transformation was discussed by Greene \cite[p. 109, Eq. (7.7)]{greene2} when $q\equiv 1\pmod{4}$. \par We have $\varphi(-2)=-1$ if and only if $p\equiv 5, 7\pmod{8}$. Hence, using Theorem \ref{MT-1} for $x=\frac{1}{2}$, we obtain the following special value of the ${_n}G_n$-function. \begin{cor} Let $p$ be a prime such that $p\equiv 5, 7\pmod{8}$. Then we have \begin{align}\label{spv-1} {_2G}_{2}\left[\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \end{array}|2\right]_p=0. \end{align} \end{cor} If we convert the ${_n}G_n$-function given in \eqref{spv-1} using Proposition \ref{prop-301} in Section 3, then we have ${_2}F_1\left( \begin{array}{cc} \chi_4, & \chi_4^3 \\ ~ & \varepsilon \\ \end{array}|\frac{1}{2} \right)_p=0$ for $p\equiv 5\pmod{8}$ which also follows from \cite[Eq. (4.15)]{greene}. The value of ${_2G}_{2}\left[\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \end{array}|2\right]_p$ can be deduced from \cite[Eq. (4.15)]{greene} when $p\equiv 1 \pmod{8}$. It would be interesting to know the value of ${_2G}_{2}\left[\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \end{array}|2\right]_p$ when $p\equiv 3 \pmod{8}$. \par The following transformation for classical hypergeometric series is a special case of Clausen's famous classical identity \cite[p. 86, Eq. (4)]{bailey}. \begin{align}\label{clausen} {_3}F_2\left(\begin{array}{ccc} \frac{1}{2}, & \frac{1}{2}, &\frac{1}{2} \\ &1,&1 \end{array}|x\right)=(1-x)^{-1/2}~{_2}F_1\left(\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ &1 \end{array}|\frac{x}{x-1}\right)^2. \end{align} A finite field analogue of \eqref{clausen} was studied by Greene \cite[p. 94, Prop. 6.14]{greene2}. In \cite{EG}, Evans and Greene gave a finite field analogue of the Clausen's classical identity. We prove the following transformation for the ${_n}G_n$-function which can be described as a $p$-adic analogue of \eqref{clausen}. Let $\delta$ be the function defined on $\mathbb{F}_q$ by $$\delta(x)=\left\{ \begin{array}{ll} 1, & \hbox{if $x=0$;} \\ 0, & \hbox{if $x\neq0$.} \end{array} \right. $$ \begin{theorem}\label{MT-4} Let $p$ be an odd prime and $x\in\mathbb{F}_p$. 
Then, for $x\neq 0, 1$, we have \begin{align} {_3G}_{3}\left[\begin{array}{ccc} \frac{1}{2}, & \frac{1}{2}, & \frac{1}{2} \\ 0, & 0, & 0 \end{array}|\frac{1}{x} \right]_p &=\varphi(1-x)\cdot {_2G}_{2}\left[\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \end{array}|\frac{x-1}{x}\right]_p^2-p\cdot\varphi(1-x).\notag \end{align} \end{theorem} We also prove the following transformation using Theorem \ref{MT-1} and \cite[Thm. 4.16]{greene}. \begin{theorem}\label{MT-3} Let $p$ be an odd prime and $x\in\mathbb{F}_q$. Then, for $x\neq0, \pm 1$, we have \begin{align}\label{eqn-MT-3} {_2G}_{2}\left[\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \end{array}|\frac{(1+x)^2}{(1-x)^2}\right]_q&=\varphi(-2)\varphi(1+x){_2G}_{2}\left[\begin{array}{cc} \frac{1}{2}, & \frac{1}{2} \\ 0, & 0 \end{array}|x^{-1}\right]_q. \end{align} \end{theorem} The following transformation is a finite field analogue of \eqref{eqn-MT-3}. \begin{theorem}\label{MT-2} Let $p$ be an odd prime and $q=p^r$ for some $r\geq1$ such that $q\equiv1\pmod{4}$. Then, for $x\neq0, \pm 1$, we have \begin{align} {_2}F_1\left( \begin{array}{cc} \chi_4, & \chi_4^3 \\ ~ & \varepsilon \\ \end{array}|\frac{(1-x)^2}{(1+x)^2} \right)_q=\varphi(-2)\varphi(1+x){_2}F_1\left( \begin{array}{cc} \varphi, & \varphi \\ ~ & \varepsilon \\ \end{array}|x \right)_q.\notag \end{align} \end{theorem} Using Theorem \ref{MT-3} and Theorem \ref{MT-2}, one can deduce many special values of the $p$-adic hypergeometric series as well as the finite field hypergeometric functions. For example, we have the following special values of a ${_n}G_n$-function and its finite field analogue. \begin{theorem}\label{MT-5} For any odd prime $p$, we have \begin{align*} {_2}G_2\left[\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \end{array}|9\right]_p=\left\{ \begin{array}{ll} 0, & \hbox{if $p\equiv 3\pmod{4}$;} \\ -2x\varphi(6)(-1)^{\frac{x+y+1}{2}}, & \hbox{if $p\equiv 1\pmod{4}$, $x^2+y^2=p$, and $x$ odd.} \end{array} \right. \end{align*} For $p\equiv1\pmod{4}$, we have \begin{align*} {_2}F_1\left( \begin{array}{cc} \chi_4, & \chi_4^3 \\ ~ & \varepsilon \\ \end{array}|\frac{1}{9}\right)_p=\frac{2x\varphi(6)(-1)^{\frac{x+y+1}{2}}}{p}, \end{align*} where $x^2+y^2=p$ and $x$ is odd. \end{theorem} We also find special values of the following ${_n}G_n$-function. \begin{theorem}\label{MT-7} For $q\equiv 1\pmod{8}$ we have \begin{align}\label{eq-8.5} {_2}G_2\left[ \begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \\ \end{array}|\left(\frac{6\sqrt{2}\pm3}{-2\sqrt{2}\pm3}\right)^2 \right]_q=-q\varphi(6\pm 12\sqrt{2}) \left\{{\chi_4 \choose \varphi}+{\chi_4^3 \choose \varphi}\right\}. \end{align} For $q\equiv 11\pmod{12}$ we have \begin{align}\label{eq-8.6} {_2}G_2\left[ \begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \\ \end{array}|\left(\frac{6\pm\sqrt{3}}{-2\pm\sqrt{3}}\right)^2 \right]_q=0. \end{align} For $q\equiv 1\pmod{12}$ we have \begin{align}\label{eq-8.7} {_2}G_2\left[ \begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \\ \end{array}|\left(\frac{6\pm\sqrt{3}}{-2\pm\sqrt{3}}\right)^2 \right]_q=-q\varphi\left(\frac{8\pm5\sqrt{3}}{12\pm 6\sqrt{3}}\right) \left\{{\varphi \choose \chi_3}+{\varphi \choose \chi_3^2}\right\}. \end{align} \end{theorem} The following theorem is a finite field analogue of Theorem \ref{MT-7}. 
\begin{theorem}\label{MT-6} For $q\equiv 1\pmod{8}$ we have \begin{align}\label{eq-8.3} {_2}F_1\left( \begin{array}{cc} \chi_4, & \chi_4^3 \\ ~ & \varepsilon \\ \end{array}|\left(\frac{-2\sqrt{2}\pm3}{6\sqrt{2}\pm3}\right)^2 \right)_q=\varphi(6\pm 12\sqrt{2}) \left\{{\chi_4 \choose \varphi}+{\chi_4^3 \choose \varphi}\right\}. \end{align} For $q\equiv 1\pmod{12}$ we have \begin{align}\label{eq-8.4} {_2}F_1\left( \begin{array}{cc} \chi_4, & \chi_4^3 \\ ~ & \varepsilon \\ \end{array}|\left(\frac{-2\pm\sqrt{3}}{6\pm\sqrt{3}}\right)^2 \right)_q=\varphi\left(\frac{8\pm5\sqrt{3}}{12\pm6\sqrt{3}}\right) \left\{{\varphi \choose \chi_3}+{\varphi \choose \chi_3^2}\right\}. \end{align} \end{theorem} In section 3 we prove two character sum identities and then use them to prove Theorem \ref{MT-1}, Theorem \ref{MT-4}, and Theorem \ref{MT-3}. We also prove Theorem \ref{MT-2} in section 3. In section 4 we prove Theorem \ref{MT-5}, Theorem \ref{MT-7} and Theorem \ref{MT-6}. \section{Notations and Preliminaries} Let $\mathbb{Z}_p$ and $\mathbb{Q}_p$ denote the ring of $p$-adic integers and the field of $p$-adic numbers, respectively. Let $\overline{\mathbb{Q}_p}$ be the algebraic closure of $\mathbb{Q}_p$ and $\mathbb{C}_p$ the completion of $\overline{\mathbb{Q}_p}$. Let $\mathbb{Z}_q$ be the ring of integers in the unique unramified extension of $\mathbb{Q}_p$ with residue field $\mathbb{F}_q$. We know that $\chi\in \widehat{\mathbb{F}_q^{\times}}$ takes values in $\mu_{q-1}$, where $\mu_{q-1}$ is the group of $(q-1)$-th roots of unity in $\mathbb{C}^{\times}$. Since $\mathbb{Z}_q^{\times}$ contains all $(q-1)$-th roots of unity, we can consider multiplicative characters on $\mathbb{F}_q^\times$ to be maps $\chi: \mathbb{F}_q^{\times} \rightarrow \mathbb{Z}_q^{\times}$. Let $\omega: \mathbb{F}_q^\times \rightarrow \mathbb{Z}_q^{\times}$ be the Teichm\"{u}ller character. For $a\in\mathbb{F}_q^\times$, the value $\omega(a)$ is just the $(q-1)$-th root of unity in $\mathbb{Z}_q$ such that $\omega(a)\equiv a \pmod{p}$. \par We now introduce some properties of Gauss sums. For further details, see \cite{evans}. Let $\zeta_p$ be a fixed primitive $p$-th root of unity in $\overline{\mathbb{Q}_p}$. The trace map $\text{tr}: \mathbb{F}_q \rightarrow \mathbb{F}_p$ is given by \begin{align} \text{tr}(\alpha)=\alpha + \alpha^p + \alpha^{p^2}+ \cdots + \alpha^{p^{r-1}}.\notag \end{align} For $\chi \in \widehat{\mathbb{F}_q^\times}$, the \emph{Gauss sum} is defined by \begin{align} g(\chi):=\sum\limits_{x\in \mathbb{F}_q}\chi(x)\zeta_p^{\text{tr}(x)}.\notag \end{align} Now, we will see some elementary properties of Gauss and Jacobi sums. We let $T$ denote a fixed generator of $\widehat{\mathbb{F}_q^\times}$. \begin{lemma}\emph{(\cite[Eq. 1.12]{greene}).}\label{lemma2_1} If $k\in\mathbb{Z}$ and $T^k\neq\varepsilon$, then $$g(T^k)g(T^{-k})=qT^k(-1).$$ \end{lemma} Let $\delta$ denote the function on multiplicative characters defined by $$\delta(A)=\left\{ \begin{array}{ll} 1, & \hbox{if $A$ is the trivial character;} \\ 0, & \hbox{otherwise.} \end{array} \right. $$ \begin{lemma}\emph{(\cite[Eq. 1.14]{greene}).}\label{lemma2_2} For $A,B\in\widehat{\mathbb{F}_q^{\times}}$ we have \begin{align} J(A,B)=\frac{g(A)g(B)}{g(AB)}+(q-1)B(-1)\delta(AB).\notag \end{align} \end{lemma} The following are character sum analogues of the binomial theorem \cite{greene}. 
For any $A\in\widehat{\mathbb{F}_q^{\times}}$ and $x\in\mathbb{F}_q$ we have \begin{align}\label{lemma2_3} \overline{A}(1-x)=\delta(x)+\frac{q}{q-1}\sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}}{A\chi \choose \chi}\chi(x), \end{align} \begin{align}\label{lemma2_4} A(1+x)=\delta(x)+\frac{q}{q-1}\sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}}{A\choose \chi}\chi(x). \end{align} We recall some properties of the binomial coefficients from \cite{greene}. We have \begin{align}\label{eq-1} {A\choose B}={A\choose A\overline{B}}, \end{align} \begin{align}\label{eq-2} {A\choose \varepsilon}={A\choose A}=\frac{-1}{q}+\frac{q-1}{q}\delta(A). \end{align} \begin{theorem}\emph{(\cite[Davenport-Hasse Relation]{evans}).}\label{thm2_2} Let $m$ be a positive integer and let $q=p^r$ be a prime power such that $q\equiv 1 \pmod{m}$. For multiplicative characters $\chi, \psi \in \widehat{\mathbb{F}_q^\times}$, we have \begin{align} \prod\limits_{\chi^m=\varepsilon}g(\chi \psi)=-g(\psi^m)\psi(m^{-m})\prod\limits_{\chi^m=\varepsilon}g(\chi).\notag \end{align} \end{theorem} Now, we recall the $p$-adic gamma function. For further details, see \cite{kob}. For a positive integer $n$, the $p$-adic gamma function $\Gamma_p(n)$ is defined as \begin{align} \Gamma_p(n):=(-1)^n\prod\limits_{0<j<n,p\nmid j}j\notag \end{align} and one extends it to all $x\in\mathbb{Z}_p$ by setting $\Gamma_p(0):=1$ and \begin{align} \Gamma_p(x):=\lim_{x_n\rightarrow x}\Gamma_p(x_n)\notag \end{align} for $x\neq0$, where $x_n$ runs through any sequence of positive integers $p$-adically approaching $x$. This limit exists, is independent of how $x_n$ approaches $x$, and determines a continuous function on $\mathbb{Z}_p$ with values in $\mathbb{Z}_p^{\times}$. For $x \in \mathbb{Q}$ we let $\lfloor x\rfloor$ denote the greatest integer less than or equal to $x$ and $\langle x\rangle$ denote the fractional part of $x$, i.e., $x-\lfloor x\rfloor$, satisfying $0\leq\langle x\rangle<1$. We now recall the McCarthy's $p$-adic hypergeometric series $_{n}G_{n}[\cdots]$ as follows. \begin{definition}\cite[Definition 5.1]{mccarthy2} \label{defin1} Let $p$ be an odd prime and $q=p^r$, $r\geq 1$. Let $t \in \mathbb{F}_q$. For positive integer $n$ and $1\leq k\leq n$, let $a_k$, $b_k$ $\in \mathbb{Q}\cap \mathbb{Z}_p$. Then the function $_{n}G_{n}[\cdots]$ is defined by \begin{align} &_nG_n\left[\begin{array}{cccc} a_1, & a_2, & \ldots, & a_n \\ b_1, & b_2, & \ldots, & b_n \end{array}|t \right]_q:=\frac{-1}{q-1}\sum_{a=0}^{q-2}(-1)^{an}~~\overline{\omega}^a(t)\notag\\ &\times \prod\limits_{k=1}^n\prod\limits_{i=0}^{r-1}(-p)^{-\lfloor \langle a_kp^i \rangle-\frac{ap^i}{q-1} \rfloor -\lfloor\langle -b_kp^i \rangle +\frac{ap^i}{q-1}\rfloor} \frac{\Gamma_p(\langle (a_k-\frac{a}{q-1})p^i\rangle)}{\Gamma_p(\langle a_kp^i \rangle)} \frac{\Gamma_p(\langle (-b_k+\frac{a}{q-1})p^i \rangle)}{\Gamma_p(\langle -b_kp^i \rangle)}.\notag \end{align} \end{definition} Let $\pi \in \mathbb{C}_p$ be the fixed root of $x^{p-1} + p=0$ which satisfies $\pi \equiv \zeta_p-1 \pmod{(\zeta_p-1)^2}$. Then the Gross-Koblitz formula relates Gauss sums and the $p$-adic gamma function as follows. \begin{theorem}\emph{(\cite[Gross-Koblitz]{gross}).}\label{thm2_3} For $a\in \mathbb{Z}$ and $q=p^r$, \begin{align} g(\overline{\omega}^a)=-\pi^{(p-1)\sum\limits_{i=0}^{r-1}\langle\frac{ap^i}{q-1} \rangle}\prod\limits_{i=0}^{r-1}\Gamma_p\left(\langle \frac{ap^i}{q-1} \rangle\right).\notag \end{align} \end{theorem} The following lemma relates products of values of $p$-adic gamma function. 
\begin{lemma}\emph{(\cite[Lemma 3.1]{BS1}).}\label{lemma3_1} Let $p$ be a prime and $q=p^r$. For $0\leq a\leq q-2$ and $t\geq 1$ with $p\nmid t$, we have \begin{align} \omega(t^{-ta})\prod\limits_{i=0}^{r-1}\Gamma_p\left(\langle\frac{-tp^ia}{q-1}\rangle\right) \prod\limits_{h=1}^{t-1}\Gamma_p\left(\langle \frac{hp^i}{t}\rangle\right) =\prod\limits_{i=0}^{r-1}\prod\limits_{h=0}^{t-1}\Gamma_p\left(\langle\frac{p^i(1+h)}{t}-\frac{p^ia}{q-1}\rangle \right).\notag \end{align} \end{lemma} We now prove the following lemma which will be used to prove our results. \begin{lemma}\label{lemma3.2} Let $p$ be an odd prime and $q=p^r$. Then for $0\leq a\leq q-2$ and $0\leq i\leq r-1$ we have \begin{align}\label{eq-3.10} -\left\lfloor\frac{-4ap^i}{q-1}\right\rfloor +\left\lfloor\frac{-2ap^i}{q-1}\right\rfloor =-\left\lfloor\langle\frac{p^i}{4}\rangle-\frac{ap^i}{q-1}\right\rfloor-\left\lfloor\langle\frac{3p^i}{4}\rangle-\frac{ap^i}{q-1}\right\rfloor. \end{align} \end{lemma} \begin{proof} Let $\left\lfloor\frac{-4ap^i}{q-1}\right\rfloor=4k+s$, where $k, s \in \mathbb{Z}$ satisfying $0\leq s\leq 3$. Then we have \begin{align}\label{eq-10.1} 4k+s\leq\frac{-4ap^i}{q-1}< 4k+s+1. \end{align} If $p^i\equiv 1\pmod{4}$, then \eqref{eq-10.1} yields \begin{align} \left\lfloor\frac{-2ap^i}{q-1}\right\rfloor=\left\{ \begin{array}{ll} 2k, & \hbox{if $s=0,1$;} \\ 2k+1, & \hbox{if $s=2,3$,} \end{array} \right.\notag \end{align} \begin{align} \left\lfloor\langle\frac{p^i}{4}\rangle-\frac{ap^i}{q-1}\right\rfloor=\left\{ \begin{array}{ll} k, & \hbox{if $s=0,1,2$;} \\ k+1, & \hbox{if $s=3$,} \end{array} \right.\notag \end{align} and \begin{align} \left\lfloor\langle\frac{3p^i}{4}\rangle-\frac{ap^i}{q-1}\right\rfloor=\left\{ \begin{array}{ll} k, & \hbox{if $s=0$;} \\ k+1, & \hbox{if $s=1,2,3$.} \end{array} \right.\notag \end{align} Putting the above values for different values of $s$ we readily obtain \eqref{eq-3.10}. The proof of \eqref{eq-3.10} is similar when $p^i\equiv 3\pmod{4}$. \end{proof} \section{Proofs of the main results} We first prove two propositions which enable us to express certain character sums in terms of the $p$-adic hypergeometric series. \begin{proposition}\label{prop-1} Let $p$ be an odd prime and $x\in\mathbb{F}_q^{\times}$. 
Then we have \begin{align} \sum_{y\in\mathbb{F}_q}\varphi(y)\varphi(1-2y+xy^2)& = \varphi(2x)+\frac{q^2\varphi(-2)}{q-1}\sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}} {\varphi\chi^2\choose \chi}{\varphi\chi\choose \chi}\chi\left(\frac{x}{4}\right)\notag\\ & = -\varphi(-2){_2G}_{2}\left[\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \end{array}|\frac{1}{x}\right]_q.\notag \end{align} \end{proposition} \begin{proof} Applying \eqref{eq-1} and then \eqref{eq-0} we have \begin{align} \sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}} {\varphi\chi^2\choose \chi}{\varphi\chi\choose \chi}\chi\left(\frac{x}{4}\right)&=\sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}}{\varphi\chi\choose \chi}\chi\left(\frac{x}{4}\right){\varphi\chi^2\choose \varphi\chi}\notag\\ &=\frac{\varphi(-1)}{q}\sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}}{\varphi\chi\choose \chi}\chi\left(\frac{-x}{4}\right)J(\varphi\chi^2,\varphi\overline{\chi})\notag\\ &=\frac{\varphi(-1)}{q}\sum_{\substack{\chi\in\widehat{\mathbb{F}_q^{\times}}\\ y\in\mathbb{F}_q}}{\varphi\chi\choose \chi}\chi\left(\frac{-x}{4}\right)\varphi\chi^2(y)\varphi\overline{\chi}(1-y)\notag\\ &=\frac{\varphi(-1)}{q}\sum_{\substack{\chi\in\widehat{\mathbb{F}_q^{\times}}\\ y\in\mathbb{F}_q, y\neq 1}} \varphi(y)\varphi(1-y){\varphi\chi\choose \chi}\chi\left(-\frac{xy^2}{4(1-y)}\right).\notag \end{align} Now, \eqref{lemma2_3} yields \begin{align} &\sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}} {\varphi\chi^2\choose \chi}{\varphi\chi\choose \chi}\chi\left(\frac{x}{4}\right)\notag\\ &=\frac{\varphi(-1)(q-1)}{q^2}\sum_{y\in\mathbb{F}_q, y\neq 1} \varphi(y)\varphi(1-y)\left(\varphi\left(1+\frac{xy^2}{4(1-y)}\right)-\delta\left(\frac{xy^2}{4(1-y)}\right)\right) \notag\\ &=\frac{(q-1)\varphi(-1)}{q^2}\sum_{y\in\mathbb{F}_q, y\neq 1}\varphi(y)\varphi(1-y)\varphi\left(1+\frac{xy^2}{4(1-y)}\right).\notag \end{align} Since $p$ is an odd prime, taking the transformation $y\mapsto 2y$ we obtain \begin{align} &\sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}} {\varphi\chi^2\choose \chi}{\varphi\chi\choose \chi}\chi\left(\frac{x}{4}\right)\notag\\ &=\frac{(q-1)\varphi(-2)}{q^2}\sum_{\substack{y\in\mathbb{F}_q\\ y\neq \frac{1}{2}}}\varphi(y)\varphi(1-2y)\varphi\left(1+\frac{xy^2}{1-2y}\right)\notag\\ &=\frac{(q-1)\varphi(-2)}{q^2}\sum_{\substack{y\in\mathbb{F}_q\\ y\neq \frac{1}{2}}}\varphi(y)\varphi(1-2y+xy^2)\notag\\ &=\frac{(q-1)\varphi(-2)}{q^2}\sum_{y\in\mathbb{F}_q}\varphi(y)\varphi(1-2y+xy^2)-\frac{\varphi(-x)(q-1)}{q^2},\notag \end{align} from which we readily obtain the first identity of the proposition. \par To complete the proof of the proposition, we relate the above character sums to the $p$-adic hypergeometric series. 
From \eqref{eq-0}, Lemma \ref{lemma2_2}, and then using the facts that $\delta(\chi)=0$ for $\chi\neq \varepsilon, \delta(\varepsilon)=1$ and $g(\varepsilon)=-1$, we deduce that \begin{align} A & := \sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}} {\varphi\chi^2\choose\chi}{\varphi\chi\choose\chi}\chi\left(\frac{x}{4}\right) =\frac{1}{q^2}\sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}}J(\varphi\chi^2,\overline{\chi})J(\varphi\chi, \overline{\chi})\chi\left(\frac{x}{4}\right)\notag\\ &=\frac{1}{q^2}\sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}}\frac{g(\varphi\chi^2)g^2(\overline{\chi})}{g(\varphi)}\chi\left(\frac{x}{4}\right) +\frac{q-1}{q^2}\sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}}\frac{g(\varphi\chi)g(\overline{\chi})} {g(\varphi)}\chi\left(-\frac{x}{4}\right)\delta(\varphi\chi)\notag\\ &=\frac{1}{q^2}\sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}} \frac{g(\varphi\chi^2)g^2(\overline{\chi})}{g(\varphi)}\chi\left(\frac{x}{4}\right)-\frac{q-1}{q^2}\varphi(-x).\notag \end{align} Now, taking $\chi=\omega^a$ we have \begin{align} A&=\frac{1}{q^2}\sum_{a=0}^{q-2}\frac{g(\varphi\omega^{2a})g^2(\overline{\omega}^a)}{g(\varphi)} \omega^a\left(\frac{x}{4}\right)-\frac{q-1}{q^2}\varphi(-x).\notag \end{align} Using Davenport-Hasse relation for $m=2$ and $\psi=\omega^{2a}$ we obtain \begin{align} g(\varphi\omega^{2a})=\frac{g(\omega^{4a})\overline{\omega}^{2a}(4)g(\varphi)}{g(\omega^{2a})}.\notag \end{align} Thus, \begin{align} A&=\frac{1}{q^2}\sum_{a=0}^{q-2}\omega^a(x)\overline{\omega}^{3a}(4) \frac{g(\omega^{4a})g^2(\overline{\omega}^a)}{g(\omega^{2a})}-\frac{q-1}{q^2}\varphi(-x).\notag \end{align} Applying Gross-Koblitz formula we deduce that \begin{align} A&=\frac{1}{q^2}\sum_{a=0}^{q-2}\omega^a(x)\overline{\omega}^{3a}(4)\pi^{(p-1)\alpha}\prod_{i=0}^{r-1} \frac{\Gamma_p(\langle\frac{-4ap^i}{q-1}\rangle)\Gamma_p^2(\langle\frac{ap^i}{q-1}\rangle)}{\Gamma_p(\langle\frac{-2ap^i}{q-1}\rangle)} -\frac{q-1}{q^2}\varphi(-x),\notag \end{align} where $\alpha=\sum_{i=0}^{r-1}\{\langle\frac{-4ap^i}{q-1}\rangle+2\langle\frac{ap^i}{q-1}\rangle-\langle\frac{-2ap^i}{q-1}\rangle\}$. Using Lemma \ref{lemma3_1} for $t=4$ and $t=2$, we deduce that \begin{align} A&=\frac{1}{q^2}\sum_{a=0}^{q-2}\omega^a(x)\pi^{(p-1)\alpha}\prod_{i=0}^{r-1}\frac{\Gamma_p(\langle(\frac{1}{4}-\frac{a}{q-1})p^i\rangle) \Gamma_p(\langle(\frac{3}{4}-\frac{a}{q-1})p^i\rangle)\Gamma_p^2(\langle\frac{ap^i}{q-1}\rangle)}{\Gamma_p(\langle\frac{p^i}{4}\rangle) \Gamma_p(\langle\frac{3p^i}{4}\rangle)}\notag\\ &\hspace{1cm}-\frac{q-1}{q^2}\varphi(-x).\notag \end{align} Finally, using Lemma \ref{lemma3.2} we have \begin{align} A=-\frac{q-1}{q^2}\cdot{_2G}_{2}\left[\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \end{array}|\frac{1}{x}\right]_q-\frac{q-1}{q^2}\varphi(-x).\notag \end{align} This completes the proof of the proposition. \end{proof} \begin{proposition}\label{prop-2} Let $p$ be an odd prime and $x\in \mathbb{F}_q$. 
Then, for $x\neq 1$, we have \begin{align} \sum_{y\in\mathbb{F}_q}\varphi(y)\varphi(1-2y+xy^2)&=2\varphi(x-1)+\frac{q^2}{q-1}\sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}} {\varphi\chi^2\choose \chi}{\varphi\chi\choose \chi^2}\chi(x-1)\notag\\ &=-{_2G}_{2}\left[\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \end{array}|\frac{1}{1-x}\right]_q.\notag \end{align} \end{proposition} \begin{proof} From \eqref{eq-0} and then using Lemma \ref{lemma2_2}, we have \begin{align}\label{eq-5} {\varphi\chi^2\choose \chi}{\varphi\chi\choose \chi^2}&=\frac{\chi(-1)}{q^2}J(\varphi\chi^2, \overline{\chi})J(\varphi\chi, \overline{\chi}^2)\notag\\ &=\frac{\chi(-1)}{q^2}\left[\frac{g(\varphi\chi^2)g(\overline{\chi})}{g(\varphi\chi)}+(q-1)\chi(-1)\delta(\varphi\chi)\right]\notag\\ &\hspace{1 cm}\times\left[\frac{g(\varphi\chi)g(\overline{\chi}^2)}{g(\varphi\overline{\chi})}+(q-1)\delta(\varphi\overline{\chi})\right]. \end{align} From Lemma \ref{lemma2_1}, we have $g(\varphi)^2=q\varphi(-1)$. Since $\delta(\chi)=0$ for $\chi\neq \varepsilon, \delta(\varepsilon)=1$ and $g(\varepsilon)=-1$, \eqref{eq-5} yields \begin{align}\label{eq-3.9} B:=\sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}}& {\varphi\chi^2\choose \chi}{\varphi\chi\choose \chi^2}\chi(x-1)\notag\\&=\frac{1}{q^2}\sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}}\frac{g(\varphi\chi^2)g(\overline{\chi})g(\overline{\chi}^2)} {g(\varphi\overline{\chi})}\chi(1-x)-2\frac{q-1}{q^2}\varphi(x-1). \end{align} Using Lemma \ref{lemma2_2} and then \eqref{eq-0} we obtain \begin{align}\label{eq-3.3} \frac{g(\varphi\chi^2)g(\overline{\chi}^2)}{g(\varphi)}=q{\varphi\chi^2\choose \chi^2}, \end{align} and \begin{align}\label{eq-3.4} \frac{g(\varphi)g(\overline{\chi})} {g(\varphi\overline{\chi})}=q\chi(-1){\varphi\choose \chi}-(q-1)\chi(-1)\delta(\varphi\overline{\chi}). \end{align} From \eqref{eq-2}, we have ${\varphi\choose \varepsilon}=-\frac{1}{q}$. Hence, \eqref{eq-3.3} and \eqref{eq-3.4} yield \begin{align}\label{eq-3.5} &\frac{1}{q^2}\sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}}\frac{g(\varphi\chi^2)g(\overline{\chi})g(\overline{\chi}^2)} {g(\varphi\overline{\chi})}\chi(1-x)\notag\\ &=\sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}} {\varphi\chi^2\choose \chi^2}{\varphi\choose \chi}\chi(x-1)-\frac{q-1}{q}\sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}}\chi(x-1){\varphi\chi^2\choose \chi^2}\delta(\varphi\overline{\chi})\notag\\ &=\sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}}{\varphi\chi^2\choose \chi^2}{\varphi\choose \chi}\chi(x-1)-\frac{q-1}{q}{\varphi\choose \varepsilon}\varphi(x-1)\notag\\ &=\sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}}{\varphi\chi^2\choose \chi^2}{\varphi\choose \chi}\chi(x-1)+\frac{q-1}{q^2}\varphi(x-1). 
\end{align} Applying \eqref{eq-0} on the right hand side of \eqref{eq-3.5}, and then \eqref{lemma2_4} we have \begin{align} &\frac{1}{q^2}\sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}}\frac{g(\varphi\chi^2)g(\overline{\chi})g(\overline{\chi}^2)} {g(\varphi\overline{\chi})}\chi(1-x)\notag\\ &= \frac{1}{q}\sum_{\substack{\chi\in\widehat{\mathbb{F}_q^{\times}}\\ y\in\mathbb{F}_q}}{\varphi\choose \chi}\chi(x-1)\varphi\chi^2(y)\overline{\chi}^2(1-y)+\frac{q-1}{q^2}\varphi(x-1)\notag\\ &=\frac{1}{q}\sum_{\substack{\chi\in\widehat{\mathbb{F}_q^{\times}}\\ y\in\mathbb{F}_q, y\neq 1}}\varphi(y){\varphi\choose \chi}\chi\left(\frac{(x-1)y^2}{(1-y)^2}\right)+\frac{q-1}{q^2}\varphi(x-1)\notag\\ &=\frac{q-1}{q^2}\sum_{y\in\mathbb{F}_q, y\neq 1}\varphi(y)\left[\varphi\left(1+\frac{(x-1)y^2}{(1-y)^2}\right)-\delta\left(\frac{(x-1)y^2}{(1-y)^2}\right)\right] +\frac{q-1}{q^2}\varphi(x-1)\notag\\ &=\frac{q-1}{q^2}\sum_{\substack{y\in\mathbb{F}_q\\y\neq1}}\varphi(y)\varphi(1-2y+xy^2)+\frac{q-1}{q^2}\varphi(x-1).\notag \end{align} Adding and subtracting the term under summation for $y=1$, we have \begin{align}\label{eq-3.7} &\frac{1}{q^2}\sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}}\frac{g(\varphi\chi^2)g(\overline{\chi})g(\overline{\chi}^2)} {g(\varphi\overline{\chi})}\chi(1-x)\notag\\ &=\frac{q-1}{q^2}\sum_{y\in\mathbb{F}_q}\varphi(y)\varphi(1-2y+xy^2). \end{align} Combining \eqref{eq-3.9} and \eqref{eq-3.7} we readily obtain the first equality of the proposition. \par To complete the proof of the proposition, we relate the character sums given in \eqref{eq-3.9} to the $p$-adic hypergeometric series. Using Davenport-Hasse relation for $m=2, \psi=\chi^2$ and $m=2, \psi=\overline{\chi}$, we have \begin{align} g(\varphi\chi^2)=\frac{g(\chi^4)g(\varphi)\overline{\chi}^2(4)}{g(\chi^2)}\notag \end{align} and \begin{align} g(\varphi\overline{\chi})=\frac{g(\overline{\chi}^2)g(\varphi)\chi(4)}{g(\overline{\chi})},\notag \end{align} respectively. Plugging these two expressions in \eqref{eq-3.9} we obtain \begin{align} B&=\frac{1}{q^2}\sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}}\frac{g(\chi^4)g^2(\overline{\chi})}{g(\chi^2)}\overline{\chi}^3(4)\chi(1-x) -2\frac{(q-1)}{q^2}\varphi(x-1).\notag \end{align} Now, considering $\chi=\omega^a$ and then applying Gross-Koblitz formula we obtain \begin{align} B&=\frac{1}{q^2}\sum_{a=0}^{q-2}\omega^{a}(1-x)~\overline{\omega}^{3a}(4)\pi^{(p-1)\alpha}\prod_{i=0}^{r-1} \frac{\Gamma_p(\langle\frac{-4ap^i}{q-1}\rangle)\Gamma_p^2(\langle\frac{ap^i}{q-1}\rangle)}{\Gamma_p(\langle\frac{-2ap^i}{q-1}\rangle)} -2\frac{(q-1)}{q^2}\varphi(x-1).\notag \end{align} where $\alpha=\sum_{i=0}^{r-1}\{\langle\frac{-4ap^i}{q-1}\rangle+2\langle\frac{ap^i}{q-1}\rangle-\langle\frac{-2ap^i}{q-1}\rangle\}$. Proceeding similarly as shown in the proof of Proposition \ref{prop-1}, we deduce that \begin{align} B=-\frac{q-1}{q^2}\cdot{_2G}_{2}\left[\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \end{array}|\frac{1}{1-x}\right]_q-2\frac{q-1}{q^2}\varphi(x-1).\notag \end{align} This completes the proof of the proposition. \end{proof} \par Before we prove our main results, we now recall the following definition of a finite field hypergeometric function introduced by McCarthy in \cite{mccarthy3}. \begin{definition}\cite[Definition 1.4]{mccarthy3} Let $A_0, A_1, \ldots A_n, B_1, B_2, \ldots, B_n\in\widehat{\mathbb{F}_q^{\times}}$. 
Then the ${_{n+1}F}_n(\cdots)^{\ast}$ finite field hypergeometric function over $\mathbb{F}_q$ is defined by \begin{align} &{_{n+1}F}_n\left(\begin{array}{cccc} A_0, & A_1, & \ldots, & A_n\\ & B_1, & \ldots, & B_n \end{array}\mid x \right)^{\ast}_q\notag\\ &=\frac{1}{q-1}\sum_{\chi\in \widehat{\mathbb{F}_q^\times}}\prod_{i=0}^{n}\frac{g(A_i\chi)}{g(A_i)}\prod_{j=1}^n\frac{g(\overline{B_j\chi})}{g(\overline{B_j})}g(\overline{\chi}) \chi(-1)^{n+1}\chi(x).\notag \end{align} \end{definition} The following proposition gives a relation between McCarthy's and Greene's finite field hypergeometric functions when certain conditions on the parameters are satisfied. \begin{proposition}\cite[Proposition 2.5]{mccarthy3}\label{prop-300} If $A_0\neq\varepsilon$ and $A_i\neq B_i$ for $1\leq i\leq n$, then \begin{align} &{_{n+1}F}_n\left(\begin{array}{cccc} A_0, & A_1, & \ldots, & A_n\\ & B_1, & \ldots, & B_n \end{array}\mid x \right)_q^{\ast}\notag\\ &=\left[\prod_{i=1}^n{A_i \choose B_i}^{-1}\right]{_{n+1}F}_n\left(\begin{array}{cccc} A_0, & A_1, & \ldots, & A_n\\ & B_1, & \ldots, & B_n \end{array}\mid x \right)_q.\notag \end{align} \end{proposition} In \cite[Lemma 3.3]{mccarthy2}, McCarthy proved a relation between ${_{n+1}F}_n(\cdots)^{\ast}$ and the $p$-adic hypergeometric series ${_{n}G}_n[\cdots]$. We note that the relation is true for $\mathbb{F}_q$ though it was proved for $\mathbb{F}_p$ in \cite{mccarthy2}. Hence, we obtain a relation between ${_{n}G}_n[\cdots]$ and the Greene's finite field hypergeometric functions due to Proposition \ref{prop-300}. In the following proposition, we list three such identities which will be used to prove our main results. \begin{proposition}\label{prop-301} Let $x\neq 0$. Then \begin{align} \label{tr1}{_2G}_{2}\left[\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \end{array}|x\right]_q & = -q\cdot {_2}F_1\left( \begin{array}{cc} \chi_4, & \chi_4^3 \\ ~ & \varepsilon \\ \end{array}|\frac{1}{x} \right)_q;\\ \label{tr2} {_2G}_{2}\left[\begin{array}{cc} \frac{1}{2}, & \frac{1}{2} \\ 0, & 0 \end{array}|x\right]_q & = -q\cdot {_2}F_1\left( \begin{array}{cc} \varphi, & \varphi \\ ~ & \varepsilon \\ \end{array}|\frac{1}{x} \right)_q;\\ \label{tr3} {_3G}_{3}\left[\begin{array}{ccc} \frac{1}{2}, & \frac{1}{2} &\frac{1}{2}\\ 0, & 0, &0 \end{array}|x\right]_q & = q^2\cdot {_3}F_2\left( \begin{array}{ccc} \varphi, & \varphi, &\varphi \\ ~ & \varepsilon, &\varepsilon \\ \end{array}|\frac{1}{x} \right)_q. \end{align} We note that \eqref{tr1} is valid when $q\equiv 1\pmod{4}$. \end{proposition} \begin{proof} Applying \cite[Lemma 3.3]{mccarthy2} we have \begin{align}\label{tr_eq3} {_2}F_1\left( \begin{array}{cc} \chi_4, & \chi_4^3 \\ ~ & \varepsilon \\ \end{array}|\frac{1}{x} \right)_q^{\ast}={_2G}_{2}\left[\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \end{array}|x\right]_q. \end{align} From \eqref{eq-2}, we have ${\chi_4^3 \choose \varepsilon}=\frac{-1}{q}$. Using this value and Proposition \ref{prop-300} we find that \begin{align}\label{tr_eq1} {_2}F_1\left( \begin{array}{cc} \chi_4, & \chi_4^3 \\ ~ & \varepsilon \\ \end{array}|\frac{1}{x} \right)_q=-\frac{1}{q}{_2}F_1\left( \begin{array}{cc} \chi_4, & \chi_4^3 \\ ~ & \varepsilon \\ \end{array}|\frac{1}{x} \right)_q^{\ast}. \end{align} Now, combining \eqref{tr_eq3} and \eqref{tr_eq1} we readily obtain \eqref{tr1}. Proceeding similarly we deduce \eqref{tr2} and \eqref{tr3}. This completes the proof. \end{proof} We now prove our main results. 
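Before giving the proofs, we observe that, by the first identities in Propositions \ref{prop-1} and \ref{prop-2}, Theorem \ref{MT-1} translates into the elementary relation
\begin{align*}
\sum_{y\in\mathbb{F}_p}\varphi(y)\varphi(1-2y+xy^2)=\varphi(-2)\sum_{y\in\mathbb{F}_p}\varphi(y)\varphi(1-2y+(1-x)y^2), \qquad x\neq 0, 1,
\end{align*}
between quadratic character sums, which is easy to test numerically. The following Python sketch is only a sanity check (the prime $p=11$ and the helper names are arbitrary choices of ours); it verifies this relation for all admissible $x\in\mathbb{F}_p$.
\begin{verbatim}
p = 11                                    # an arbitrary odd prime

def phi(a):
    # quadratic character on F_p (Legendre symbol), with phi(0) = 0
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def T(x):
    # the character sum from the two propositions above: sum_y phi(y)*phi(1 - 2y + x*y^2)
    return sum(phi(y) * phi(1 - 2 * y + x * y * y) for y in range(p))

for x in range(2, p):                     # x != 0, 1 in F_p
    assert T(x) == phi(-2) * T(1 - x)
print("identity checked for p =", p)
\end{verbatim}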
\begin{proof}[Proof of Theorem \ref{MT-1}] From Proposition \ref{prop-1} and Proposition \ref{prop-2} we have \begin{align} \sum_{y\in\mathbb{F}_q}\varphi(y)\varphi(1-2y+xy^2)&=-\varphi(-2)\cdot{_2G}_{2}\left[\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \end{array}|\frac{1}{x}\right]_q\notag\\ &=-{_2G}_{2}\left[\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \end{array}|\frac{1}{1-x}\right]_q, \notag \end{align} which readily gives the desired transformation. \end{proof} \begin{proof}[Proof of Theorem \ref{MT-4}] From \cite[Eq. 4.5]{GS} we have \begin{align}\label{eq-7.1} \varphi((1-u)/u){_3}F_{2}\left( \begin{array}{ccc} \varphi, & \varphi, & \varphi \\ & \varepsilon, & \varepsilon \\ \end{array}|\frac{u}{u-1} \right)_p=&\varphi(u)f(u)^2+2\frac{\varphi(-1)}{p}f(u)-\frac{p-1}{p^2}\varphi(u)\notag\\ &+\frac{p-1}{p^2}\delta(1-u), \end{align} where $u=\frac{x}{x-1}, x\neq 1$ and \begin{align} f(u):=\frac{p}{p-1}\sum_{\chi\in\widehat{\mathbb{F}_p^{\times}}} {\varphi\chi^2\choose\chi}{\varphi\chi\choose\chi}\chi\left(\frac{u}{4}\right).\notag \end{align} From \eqref{tr3} and \eqref{eq-7.1}, we have \begin{align}\label{eq-7.6} \frac{\varphi((1-u)/u)}{p^2}\cdot{_3}G_{3}\left[ \begin{array}{ccc} \frac{1}{2}, & \frac{1}{2}, & \frac{1}{2} \\ 0, & 0, & 0 \\ \end{array}|\frac{u-1}{u} \right]_p& = \varphi(u)f(u)^2+2\frac{\varphi(-1)}{p}f(u)-\frac{p-1}{p^2}\varphi(u)\notag\\ &+\frac{p-1}{p^2}\delta(1-u). \end{align} Now, Proposition \ref{prop-1} gives \begin{align}\label{eq-7.2} f(u)=\frac{-\varphi(-u)}{p}-\frac{1}{p}\cdot{_2}G_2\left[\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \end{array}|\frac{1}{u}\right]_p. \end{align} Finally, combining \eqref{eq-7.6} and \eqref{eq-7.2} and then putting $u=\frac{x}{x-1}$ we obtain the desired result. This completes the proof of the theorem. \end{proof} \begin{proof}[Proof of Theorem \ref{MT-3}] Let $A=B=\varphi$ and $x\neq0,\pm1$. Then \cite[Thm. 4.16]{greene} yields \begin{align}\label{eq-6.1} {_2}F_1\left( \begin{array}{cc} \varphi, & \varphi \\ ~ & \varepsilon \\ \end{array}|x \right)_q=&\frac{\varphi(-1)}{q}\varphi(x(1+x))\notag\\ &+\varphi(1+x)\frac{q}{q-1}\sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}} {\varphi\chi^2\choose\chi}{\varphi\chi\choose\chi}\chi\left(\frac{x}{(1+x)^2}\right). \end{align} Now, using Proposition \ref{prop-1} we have \begin{align}\label{eq-6.2} \sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}} {\varphi\chi^2\choose\chi}{\varphi\chi\choose\chi}\chi\left(\frac{x}{(1+x)^2}\right)=&-\frac{q-1}{q^2}\varphi\left(\frac{-4x}{(1+x)^2}\right)\notag\\ &-\frac{q-1}{q^2}\cdot {_2}G_2\left[\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \end{array}|\frac{(1+x)^2}{4x}\right]_q. \end{align} Applying Theorem \ref{MT-1} on the right hand side of \eqref{eq-6.2} we obtain \begin{align}\label{eq-6.3} \sum_{\chi\in\widehat{\mathbb{F}_q^{\times}}} {\varphi\chi^2\choose\chi}{\varphi\chi\choose\chi}\chi\left(\frac{x}{(1+x)^2}\right)=&-\frac{q-1}{q^2}\varphi\left(-x\right)\notag\\ &-\frac{q-1}{q^2}\varphi(-2)\cdot {_2}G_2\left[\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \end{array}|\frac{(1+x)^2}{(1-x)^2}\right]_q. \end{align} Combining \eqref{eq-6.1} and \eqref{eq-6.3} we have \begin{align}\label{eq-6.4} {_2}G_2\left[\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \end{array}|\frac{(1+x)^2}{(1-x)^2}\right]_q=-q\varphi(-2)\varphi(1+x)\cdot{_2}F_1\left( \begin{array}{cc} \varphi, & \varphi \\ ~ & \varepsilon \\ \end{array}|x \right)_q, \end{align} which completes the proof of the theorem due to \eqref{tr2}. 
\end{proof} We finally present the proof of Theorem \ref{MT-2}. \begin{proof}[Proof of Theorem \ref{MT-2}] Let $q\equiv1\pmod{4}$. Then we readily obtain the desired transformation for the finite field hypergeometric functions from \eqref{eqn-MT-3} using \eqref{tr1} and \eqref{tr2}. \end{proof} \section{Special values of ${_2}G_2[\cdots]$} Finding special values of hypergeometric function is an important and interesting problem. Only a few special values of the ${_n}G_n$-functions are known (see for example \cite{BSM}). In \cite{BSM}, the authors with McCarthy obtained some special values of ${_n}G_n[\cdots]$ when $n=2, 3, 4$. From \eqref{eq-6.4}, for any odd prime $p$ and $x\neq 0, \pm 1$, we have \begin{align}\label{eq-6.4-1} {_2}G_2\left[\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \end{array}|\frac{(1+x)^2}{(1-x)^2}\right]_q=-q\varphi(-2)\varphi(1+x)\cdot{_2}F_1\left( \begin{array}{cc} \varphi, & \varphi \\ ~ & \varepsilon \\ \end{array}|x \right)_q. \end{align} Values of the finite field hypergeometric function ${_2}F_1\left( \begin{array}{cc} \varphi, & \varphi \\ ~ & \varepsilon \\ \end{array}|x \right)_q$ are obtained for many values of $x$. For example, see Barman and Kalita \cite{BK1, BK3}, Evans and Greene \cite{evans2}, Greene \cite{greene}, Kalita \cite{GK}, and Ono \cite{ono}. \begin{proof}[Proof of Theorem \ref{MT-5}] Let $\lambda \in \{-1, \frac{1}{2}, 2\}$. If $p$ is an odd prime, then from \cite[Thm. 2]{ono} we have \begin{align*} {_2}F_1\left( \begin{array}{cc} \varphi, & \varphi \\ ~ & \varepsilon \\ \end{array}|\lambda \right)_p=\left\{ \begin{array}{ll} 0, & \hbox{if $p\equiv 3\pmod{4}$;} \\ \frac{2x(-1)^{\frac{x+y+1}{2}}}{p}, & \hbox{if $p\equiv 1\pmod{4}$, $x^2+y^2=p$, and $x$ odd.} \end{array} \right. \end{align*} Putting the above values for $\lambda=\frac{1}{2}, 2$ into \eqref{eq-6.4-1} we readily obtain the required values of the ${_n}G_n$-function. \par Let $q\equiv 1\pmod{4}$. Then from \eqref{tr1} we have \begin{align*} {_2}F_1\left( \begin{array}{cc} \chi_4, & \chi_4^3 \\ ~ & \varepsilon \\ \end{array}|\frac{1}{9} \right)_q=-\frac{1}{q}~{_2}G_2\left[\begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \end{array}|9\right]_q. \end{align*} From the above identity we readily obtain the required value of the finite field hypergeometric function. This completes the proof of the theorem. \end{proof} We now have the following corollary. \begin{cor}\label{cor-1} Let $p\equiv 1\pmod{4}$. We have \begin{align*} {\chi_4\choose \varphi}+ {\chi_4^3\choose \varphi}= \frac{2x(-1)^{\frac{x+y+1}{2}}}{p}, \end{align*} where $x^2+y^2=p$ and $x$ is odd. \end{cor} \begin{proof} From Theorem \ref{MT-5} and \cite[Thm. 1.4 (i)]{BK1} we have \begin{align*} {\chi_4\choose \varphi}+ {\chi_4^3\choose \varphi}= \frac{2x\varphi(2)\chi_4(-1)(-1)^{\frac{x+y+1}{2}}}{p}, \end{align*} where $x^2+y^2=p$ and $x$ is odd. Let $m$ be the order of $\chi\in \widehat{\mathbb{F}_q^{\times}}$. We know that $\chi(-1)=-1$ if and only if $m$ is even and $(q-1)/m$ is odd. Since $p\equiv 1\pmod{4}$, therefore, either $p\equiv 1\pmod{8}$ or $p\equiv 5\pmod{8}$. If $p\equiv 1\pmod{8}$, then $\varphi(2)=\chi_4(-1)=1$. Also, if $p\equiv 5\pmod{8}$, then $\varphi(2)=\chi_4(-1)=-1$. Hence, in both the cases $\varphi(2)\cdot\chi_4(-1)=1$. This completes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{MT-7}] From \cite[Thm. 
1.1]{GK}, for $q\equiv 1\pmod{8}$, we have \begin{align}\label{eq-8.2} {_2}F_1\left( \begin{array}{cc} \varphi, & \varphi \\ ~ & \varepsilon \\ \end{array}|\frac{4\sqrt{2}}{2\sqrt{2}\pm3}\right)_q=\varphi(3\pm2\sqrt{2})\left\{{\chi_4\choose \varphi}+{\chi_4^3\choose \varphi}\right\}. \end{align} Now, comparing \eqref{eq-6.4} and \eqref{eq-8.2} for $x=\frac{4\sqrt{2}}{2\sqrt{2}\pm3}$ we obtain \eqref{eq-8.5}. Similarly, using \cite[Thm. 1.1]{GK} and \eqref{eq-6.4} for $x=\frac{4}{2\pm\sqrt{3}}$ we derive \eqref{eq-8.6} and \eqref{eq-8.7}. \end{proof} \begin{proof}[Proof of Theorem \ref{MT-6}] From \eqref{tr1}, we have \begin{align}\label{eq-8.8} {_2}F_1\left( \begin{array}{cc} \chi_4, & \chi_4^3 \\ ~ & \varepsilon \\ \end{array}|\left(\frac{-2\sqrt{2}\pm3}{6\sqrt{2}\pm3}\right)^2 \right)_q=-\frac{1}{q}\cdot{_2}G_2\left[ \begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \\ \end{array}|\left(\frac{6\sqrt{2}\pm3}{-2\sqrt{2}\pm3}\right)^2 \right]_q. \end{align} Comparing \eqref{eq-8.5} and \eqref{eq-8.8} we readily obtain \eqref{eq-8.3}. Again, we have \begin{align}\label{eq-8.9} {_2}F_1\left( \begin{array}{cc} \chi_4, & \chi_4^3 \\ ~ & \varepsilon \\ \end{array}|\left(\frac{-2\pm\sqrt{3}}{6\pm\sqrt{3}}\right)^2 \right)_q=-\frac{1}{q}\cdot{_2}G_2\left[ \begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \\ \end{array}|\left(\frac{6\pm\sqrt{3}}{-2\pm\sqrt{3}}\right)^2 \right]_q. \end{align} Now, comparing \eqref{eq-8.7} and \eqref{eq-8.9} we deduce \eqref{eq-8.4}. \end{proof} Applying Corollary \ref{cor-1}, from \eqref{eq-8.5} and \eqref{eq-8.3} we have the following corollary. \begin{cor} Let $p\equiv 1\pmod{8}$. Then \begin{align*} {_2}G_2\left[ \begin{array}{cc} \frac{1}{4}, & \frac{3}{4} \\ 0, & 0 \\ \end{array}|\left(\frac{6\sqrt{2}\pm3}{-2\sqrt{2}\pm3}\right)^2 \right]_p &= -2x\varphi(6\pm 12\sqrt{2})(-1)^{\frac{x+y+1}{2}}, \end{align*} where $x^2+y^2=p$ and $x$ is odd. \end{cor} \end{document}
\begin{document} \title{Deterministic preparation of optical qubits with coherent feedback control} \author{Amy Rouillard} \affiliation{School of Chemistry and Physics, University of KwaZulu-Natal, Durban, South Africa} \author{Tanita Permaul} \affiliation{School of Chemistry and Physics, University of KwaZulu-Natal, Durban, South Africa} \author{Sandeep K. Goyal} \email{[email protected]} \affiliation{Department of Physical Sciences, Indian Institute of Science Education \& Research (IISER) Mohali, Sector 81 SAS Nagar, Manauli PO 140306 Punjab India.} \author{Thomas Konrad} \email{[email protected]} \affiliation{School of Chemistry and Physics, University of KwaZulu-Natal, Durban, South Africa} \affiliation{National Institute of Theoretical and Computational Sciences (NITheCS), KwaZulu-Natal, South Africa} \begin{abstract} We propose a class of preparation schemes for orbital angular momentum and polarisation qubits carried by single photons or classical states of light based on coherent feedback control by an ancillary degree of freedom of light. The preparation methods use linear optics and include the transcription of an arbitrary polarisation state onto a two-level OAM system (swap) for arbitrary OAM values $\pm\ell$ within a light beam, i.e. without spatial interferometer. The preparations can be carried out with unit efficiency independent from the potentially unknown initial state of the system. Moreover, we show how to translate measurement-based qubit control channels into coherent feedback schemes for optical implementation. \end{abstract} \maketitle \section{Introduction} The control of quantum systems is an important prerequisite for all quantum technologies, i.e.\ quantum computation and communication~\cite{Nielsen2011} as well as quantum metrology~\cite{Haroche2011, Wineland2011}. Closed-loop quantum control techniques, probing the quantum system, can be classified in two groups, measurement-based techniques~\cite{WisemanBook} and coherent-feedback techniques~\cite{Lloyd00, Nelson_etal00, HiroseCappellaro16, konrad2020robust}. For quantum communication, which involves non-classical states of light, measurement based-control is technically challenging since non-invasive measurements that do not absorb photons require coupling to matter or other photons, which is in general weak and difficult to realise~\cite{Haroche2011} or comes with low efficiency~\cite{wang2015quantum}. Therefore coherent control feedback techniques have the potential to improve quantum communication and all quantum technologies using light as a medium. Here, we give a recipe to translate any measurement-based control channel for two-level systems into a coherent feedback control scheme that can be implemented with optics given that the control channel is described by two Kraus operators. Our scheme can be generalised to systems with Hilbert spaces of finite dimension and arbitrary number of Kraus operators. Moreover, we present a class of control methods based on coherent feedback where a particular degree of freedom of light can be controlled by a second degree of freedom. Such methods enable, e.g., the deterministic preparation of any pure polarisation state in a Mach-Zehnder interferometer for a photon with initially mixed polarisation. 
Similarly, any coherent superposition $\alpha \ket{-\ell}+ \beta \ket{\ell}$ can be prepared from an unknown possibly incoherent mixture of light modes with orbital angular momentum (OAM) values $\pm \ell$ by means of a spatial interferometer (path-degree of freedom) or, using polarisation in a single light beam. Unlike other methods of preparation for spatial modes, e.g. by means of spatial light modulation, the present one suggests low noise and works ideally without photon loss. For the special case of equally weighted superpositions, $\alpha=\beta=1/\sqrt{2}$, we show how to produce such a target state for all pairs of OAM values $\pm \ell $, simultaneously. All preparation schemes described below work equally well for single-photon and classical states of light including thermal states with finite temperature. The underlying coherent feedback control method is robust and can be used to protect a target state against noise \cite{konrad2020robust}. The conditions required for a control channel to successfully drive an arbitrary initial state into a desired target state are reviewed in Section~\ref{Sec:SFP}. The optical implementation scheme, for various degrees of freedom of light, is informed by a decomposition technique~\cite{sandeep_csd} described in Section~\ref{Sec:CS}. In Section~\ref{Sec:2-out} this technique is applied to unitary couplings that realise coherent feedback control, and the translation from measurement-based to coherent feedback control is presented. Section~\ref{Sec:2-out} also details the optical realisation of the coherent feedback control of polarisation and OAM qubits. Methods which allow for the repeated application of coherent feedback control are discussed in Section~\ref{Sec:Reuse}. Such repetitions also protect target states against noise \cite{konrad2020robust} and can steer the system into target dynamics \cite{Uys2018SFP}. In Section~\ref{Sec:Examples} we give three examples of coherent feedback control and their respective optical implementations. A discussion of the results in Section~\ref{discussion} concludes this article. \section{Coherent feedback control}\label{Sec:SFP} In order to prepare light in a given target state $\ket{T}\in\mathcal{H}_s$, where $\mathcal{H}_s$ is the Hilbert space of the system, the following theorem can be applied \cite{konrad2020robust}. Consider a trace-preserving quantum channel \$ described by an $n$-element set of Kraus operators $K_i$ satisfying the fix-point condition \begin{align} K_i \ket{T} = z_i\ket{T} \label{eq:fp_contition} \end{align} for all $i=0,\dots, n-1$ with $z_i\in \mathbb{C}$. In addition, let the Kraus operators obey a second condition, \begin{align} \operatorname{span}\{K_i^\dagger \ket{T}\}_{i=0,\dots,n-1} = \mathcal{H}_s\, . \label{eq:span_contition} \end{align} Then {an arbitrary initial} state of the system $\rho$ converges to target state $\ket{T}\in\mathcal{H}_s$ under repeated application of the channel $\$$, \begin{align} \rho \rightarrow \$(\rho) = \sum_{i=0}^{n-1} K_i \rho K_i^\dagger\, . \label{Eq:channel_k} \end{align} The second condition~\eqref{eq:span_contition} ensures that the system is driven towards the target state \cite{konrad2020robust}, while fix point condition~\eqref{eq:fp_contition} arrests it there. Various construction methods of control channels are discussed in \cite{ konrad2020robust,Uys2018SFP}. 
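As a purely numerical illustration of the above convergence criterion (not part of the argument), the following Python sketch checks the fixed-point condition \eqref{eq:fp_contition}, the span condition \eqref{eq:span_contition} and the convergence of the iterated channel \eqref{Eq:channel_k} for a concrete pair of qubit Kraus operators; for concreteness it borrows the Kraus pair of the weak-swap coupling discussed later (Sec.~\ref{Sec:weakswap} and App.~\ref{Ap:fid}). All variable names are ours.
\begin{verbatim}
# Minimal numerical sketch (assumptions ours): check the fixed-point and span
# conditions and iterate the channel for the weak-swap Kraus pair.
import numpy as np

lam = 0.3                                                   # coupling strength
T  = np.array([np.cos(0.4), np.exp(1.1j) * np.sin(0.4)])    # target state |T>
Tp = np.array([-np.exp(-1.1j) * np.sin(0.4), np.cos(0.4)])  # orthogonal state

K0 = np.exp(-1j * lam) * np.outer(T, T.conj()) + np.cos(lam) * np.outer(Tp, Tp.conj())
K1 = np.sin(lam) * np.outer(T, Tp.conj())

# trace preservation and the two conditions of the theorem
assert np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2))
for K in (K0, K1):
    v = K @ T
    assert np.allclose(v, (T.conj() @ v) * T)      # K_i |T> is proportional to |T>
span = np.column_stack([K0.conj().T @ T, K1.conj().T @ T])
assert np.linalg.matrix_rank(span) == 2            # {K_i^dag |T>} spans H_s

# repeated application of the channel drives a mixed state towards |T><T|
rho = np.eye(2) / 2
for n in range(1, 21):
    rho = K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T
    if n % 5 == 0:
        print(n, np.real(T.conj() @ rho @ T))      # fidelity <T|rho|T> -> 1
\end{verbatim}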
Any such channel $\$$ can be implemented by a unitary time evolution $U$ which couples the system to a suitable ancilla system (quantum controller) in initial state $\ket{0}$, such that \begin{align} \$(\rho) = \operatorname{Tr}[U(\ket{0}\bra{0} \otimes \rho) U^\dagger] \,.\label{Eq:channel_u} \end{align} The quantum controller probes the system's state and actuates coherent feedback accordingly. If we compare Eqs.~\eqref{Eq:channel_k} and \eqref{Eq:channel_u} the Kraus operators are revealed as functions of the time evolution, \begin{align} K_i = \braket{i|U|0}\,, \label{Eq:K_U} \end{align} where $(\ket{i})_{i=0,\ldots, n-1}$ is an orthonormal basis of the quantum controller's Hilbert space. Obviously, there is no way to map all initial states of system and controller onto a single state by a unitary (i.e.\ reversible) evolution. Consequently, the information about the arbitrary, and possibly unknown, initial state of the system must be transferred to the controller system during their {coupling~\cite{samal2011experimental}}. This implies that after one application of the channel $\$$, the controller will be in an unknown state and must be reset to $\ket{0}$ if $\$$ is to be repeated using the same controller. The resetting of the controller presents the main challenge in the repeated implementation of the control scheme and will be discussed in Section~\ref{Sec:Reuse} (abbreviated to Sec.~\ref{Sec:Reuse}). It is possible to design a control channel that reaches the target within a single application; an example of such a channel is given in Sec.~\ref{Sec:Basic}. However, it is still useful to investigate methods which allow for multiple iterations of the channel as this will protect the state against noise.
\section{Cosine-Sine decomposition}\label{Sec:CS} The time evolution $U$ can be {constructed} from elementary single- and two-partite unitary gates. For this purpose, we employ the Cosine-Sine (CS) decomposition of an arbitrary $U$ into a product of conditional unitaries and Hadamard gates. When applied to light, these conditional unitaries correspond to operations on a single degree of freedom of light ({system}) conditioned on the state of another ancillary degree of freedom (controller). An arbitrary $(m+n)\times(m+n)$ unitary matrix $U_{m+n}$ ($n\geq m$) can be decomposed into $m\times m$ and $n\times n$ unitaries, $L_m, R_m$ and $L'_n, R'_n$ respectively, as well as a cosine-sine (CS) matrix, according to the CS decomposition~\cite{sandeep_csd}, given by \begin{equation} U_{m+n} = \begin{pmatrix} L_{m} & 0\\ 0 & L'_n \end{pmatrix}(\mathcal{S}_{2m}\oplus\mathds{1}_{n-m})\begin{pmatrix} R^\dagger_{m} & 0\\ 0 & R'^{\dagger}_n \end{pmatrix}\,, \end{equation} where $\mathds{1}_{n-m}$ is the identity matrix in $(n-m)$ dimensions and the so-called CS matrix $\mathcal{S}_{2m}$ reads \begin{equation}\label{Eq:CS} \mathcal{S}_{2m} = \begin{pmatrix} C_m & S_m\\ -S_m & C_m \end{pmatrix} \end{equation} with {$C_m \equiv \sum_{i=1}^m \cos\theta_i \ket{i}\bra{i}$ and $S_m \equiv \sum_{i=1}^m \sin\theta_i \ket{i}\bra{i}$}. The direct sum $\mathcal{S}_{2m}\oplus\mathds{1}_{n-m}$ can also be written in the form of a block diagonal matrix \begin{align} \begin{pmatrix} \mathcal{S}_{2m} & 0\\ 0& \mathds{1}_{n-m} \end{pmatrix}\,.
\end{align} In this work we will only consider cases where $m=n$, so that \begin{equation} U = \begin{pmatrix} L & 0\\ 0 & L' \end{pmatrix} \begin{pmatrix} C & S\\ -S & C \end{pmatrix} \begin{pmatrix} R^\dagger & 0\\ 0 & R'^{\dagger} \end{pmatrix}\,,\label{eq:CS_1} \end{equation} where, for convenience, we drop the indices which indicate dimension. The matrix $\mathcal{S}$ can be further decomposed, see Appendix (App.)~\ref{Ap:1}, as \begin{equation} \mathcal{S} = \begin{pmatrix} C & S\\ -S & C \end{pmatrix}= \Big(P_{\frac{\pi}{4}}^\dagger H\otimes\mathds{1}\Big) \begin{pmatrix} \Theta & 0\\ 0 & \Theta^\dagger \end{pmatrix} \Big(H P_{\frac{\pi}{4}}\otimes\mathds{1}\Big), \label{eq:bs_internal_decomp} \end{equation} where $P_{\phi} =\exp(i\phi) \ket{0}\bra{0}+\exp(-i\phi)\ket{1}\bra{1}$ may represent a phase shift and {$H= \ket{0}(\bra{0}+\bra{1})/\sqrt{2} + \ket{1}(\bra{0}-\bra{1})/\sqrt{2} $} is the Hadamard transformation -- both acting on a two-level system. In addition, $\Theta=C+iS$ can be implemented as a state-dependent phase shift of an $n$-level system. Any unitary operator acting on a $2n$-dimensional Hilbert space can therefore be written \begin{equation}\label{eq:CS_U} U = \begin{pmatrix} L & 0\\ 0 & iL' \end{pmatrix} H\otimes\mathds{1} \begin{pmatrix} \Theta & 0\\ 0 & \Theta^\dagger \end{pmatrix} H \otimes\mathds{1} \begin{pmatrix} R^\dagger & 0\\ 0 & -i R'^{\dagger} \end{pmatrix}\,. \end{equation} The CS decomposition thus points to the physical realisation of a ($2n \times 2n$) unitary operator in terms of the evolution of a closed system formed by two subsystems, i.e., a two-level system coupling to an $n$-level system. This agrees with a sequence of evolutions of a system of $n$ spatial modes of light depending on two paths (as e.g.\ realised in a Mach-Zehnder) or depending on its polarisation. Hence, the CS decomposition has an operational meaning for a composite system, which is represented in Fig.~\ref{fig:fig1}. \begin{figure} \caption{\textbf{Operational meaning of the CS decomposition.} \label{fig:fig1} \end{figure} \section{Optical implementation of coherent feedback}\label{Sec:2-out} Based on a realisation method for optical qubit channels \cite{SandeepBox21}, we provide a recipe for the design of the optical implementation of coherent feedback given the corresponding Kraus operators from Sec.~\ref{Sec:SFP}. The construction of the latter is discussed in \cite{ konrad2020robust,Uys2018SFP}. We note that the corresponding control channel can also be realised by measurement, constituting measurement-based feedback control \cite{Uys2018SFP}. Therefore what follows is also a recipe to translate measurement-based feedback control into coherent feedback control. The discussion is limited to the control of two-level systems (qubits) by qubit controllers, corresponding to control channels with a pair of Kraus operators. However, an analogous scheme can be readily constructed for $d$-level systems controlled by $n$-level ancilla systems, i.e.,\ control channels involving $n$ Kraus operators~\cite{SandeepBox21}. A simple deterministic preparation channel for qubits that reaches target state $\ket{T}$ in a single shot, is given by $K_0=\ket{T}\bra{T}$ and $K_1=\ket{T}\bra{T^\perp}$ with $\langle T^\perp\ket{T}=0$. 
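As an aside, this single-shot behaviour is easy to verify numerically. The following sketch (ours, purely illustrative) checks that the channel built from $K_0=\ket{T}\bra{T}$ and $K_1=\ket{T}\bra{T^\perp}$ is trace preserving and maps an arbitrary mixed input state to $\ket{T}\bra{T}$ in a single application.
\begin{verbatim}
# Sketch (ours): the single-shot preparation channel K0 = |T><T|,
# K1 = |T><T_perp| maps every density matrix to |T><T| in one application.
import numpy as np

rng = np.random.default_rng(0)
theta, phi = 0.7, 1.9                       # an arbitrary target qubit state
T = np.array([np.cos(theta), np.exp(1j * phi) * np.sin(theta)])
Tp = np.array([-np.exp(-1j * phi) * np.sin(theta), np.cos(theta)])

K0 = np.outer(T, T.conj())
K1 = np.outer(T, Tp.conj())
assert np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2))

# random mixed input state
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho)

out = K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T
assert np.allclose(out, np.outer(T, T.conj()))   # target reached in one shot
print(np.real(T.conj() @ out @ T))               # = 1.0
\end{verbatim}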
In general, let $K_0$ and $K_1$ be Kraus operators which meet conditions~\eqref{eq:fp_contition} and \eqref{eq:span_contition}, from which we can construct a unitary of the form \begin{equation} U=\begin{pmatrix} K_0 & A\\ K_1 & B \end{pmatrix}, \label{eq:unitary_two_outcome} \end{equation} where the matrices $A\equiv \braket{0|U|1}$ and $B\equiv \braket{1|U|1}$ are appropriately chosen such that $U$ is unitary. In addition, Eq.~\eqref{Eq:K_U} prescribes $K_0\equiv \braket{0|U|0}$ and $K_1\equiv \braket{1|U|0}$. This is in agreement with the form of $U$ in Eq.~\eqref{eq:unitary_two_outcome}, which acts on the Hilbert space $\mathcal{H}_c\otimes \mathcal{H}_s$, where $c$ and $s$ stands for controller and system, respectively. The CS decomposition of $U$ is given by Eq.~\eqref{eq:CS_1}, by comparing this to Eq.~\eqref{eq:unitary_two_outcome} we can write the Kraus operators as \begin{equation} \begin{aligned} K_0 &= L C R^\dagger \\ K_1 &= i L'' S R^\dagger\,, \end{aligned} \label{eq:svd_two_outcome} \end{equation} where $L''=iL'$. This is simply a singular value decomposition which exists for all Kraus operators $K_i$. Unitary $R'$ can be freely chosen and this can be exploited in order to simplify the optical implementation of $U$. For this purpose, we choose $R' = -iR$. This converts the conditional unitary $R^\dagger \oplus -iR'^\dagger$ to the local unitary $\mathds{1}\otimes R^\dagger$. Our aim is now to design an optical set-up to implement the unitary operator \begin{equation}\label{eq:U_4} U = \begin{pmatrix} L & 0\\ 0 & L'' \end{pmatrix} H\otimes\mathds{1} \begin{pmatrix} \Theta & 0\\ 0 & \Theta^\dagger \end{pmatrix} H \otimes R^{\dagger} \end{equation} where $L$, $L''$, $R^\dagger$ and $\Theta = C +i S$ are determined by the singular value decomposition of the Kraus operators $K_0$ and $K_1$, given in Eq.~\eqref{eq:svd_two_outcome}. Thus far the system and controller degree of freedom have not been specified. Below we discuss two specific examples of controller and system, namely path and polarisation, followed by the polarisation and orbital angular momentum. \subsection{Path and polarisation}\label{Sec:Pol_real} If we consider the paths of a photon as the controller and its polarisation as the system degree of freedom, then the CS decomposition of $U$, Eq.~(\ref{eq:U_4}), can be implemented using balanced beam-splitters and local unitary operations on the polarisation. The implementation of the latter require just one half-wave plate and two quarter-wave plates mounted coaxially~\cite{simon_gadget1,simon_gadget2}. The decomposition of $U$ represents a generalized Mach-Zehnder interferometer and can be made robust against noise associated with the vibration of optical elements by employing a Sagnac interferometer, shown in Fig.~\ref{fig:fig_3}. \begin{figure} \caption{\textbf{Optical realisation of coherent feedback control for path and polarisation.} \label{fig:fig_3} \end{figure} In order to determine the configuration of half- and quarter-wave plates which realises $L$, for example, we first expressed $L$ in terms of Euler angles $(\xi,\eta,\zeta)$, \begin{align} L(\xi,\eta,\zeta) &= e^{-i\frac{1}{2}\xi\sigma_y}e^{i\frac{1}{2}\eta\sigma_z}e^{-i\frac{1}{2}\zeta\sigma_y}\,. 
\end{align} The relationship between the Euler angles and the rotation angles of a half-wave plate ($H_{\phi}$) and two quarter-wave plates ($Q_\phi$) is given by~\cite{simon_gadget2}: \begin{align} L(\xi,\eta,\zeta)& = Q_{\frac{\pi}{4}+\frac{\xi}{2}} H_{-\frac{\pi}{4}+\frac{\xi+\eta-\zeta}{4}}Q_{\frac{\pi-\zeta}{4} }\,.\label{eq:QQH} \end{align} It is possible to reorder the optical elements by exploiting the fact that \begin{align} Q_\phi H_{\phi'} = H_{\phi'}Q_{2\phi'-\phi}\,. \end{align} The same decomposition procedure can be performed for $L''$, $\Theta$, $R$ and $R'$.
\subsection{Polarisation and orbital angular momentum}\label{Sec:OAM_real} In this section we present a linear optical scheme to implement the coherent feedback control of a two-dimensional subspace of the OAM degree of freedom of light. Here the polarisation degree of freedom is considered as the controller; hence, the scheme can be made non-interferometric. In order to determine the optical implementation of $U$ for polarisation and OAM we first express $U$ by means of local unitaries and controlled unitaries of the form $C_A := \mathds{1}\oplus A$. Eq.~\eqref{eq:U_4} can be written in terms of controlled unitary operations $C_{ L'' L^\dagger}$ and $C_{(\Theta^\dagger)^2}$ as \begin{align} U &= \begin{pmatrix} \mathds{1} & 0\\ 0 & L''L^\dagger \end{pmatrix} H\otimes L\Theta \begin{pmatrix} \mathds{1} & 0\\ 0 & (\Theta^\dagger)^2 \end{pmatrix} H \otimes R^{\dagger}\nonumber \\&= C_{ L''L^\dagger} \left(H\otimes L\Theta\right) C_{(\Theta^\dagger)^2} \left(H \otimes R^\dagger\right) \,.\label{eq:U_pol_oam} \end{align} In order to determine the optical implementation of the controlled unitary $C_{L''L^\dagger}$, it is first diagonalised: \begin{align} C_{ L''L^\dagger} = (\mathds{1}\otimes W) (\mathds{1}\oplus \Phi ) (\mathds{1}\otimes W^\dagger)\,,\label{eq:LL_diag} \end{align} where $W\Phi W^\dagger = L''L^\dagger$ and $\Phi = \sum_j \exp(i\phi_j)\ket{j}\bra{j}$, so that \begin{equation} U = \left(\mathds{1} \otimes W\right) C_{\Phi} \left(H\otimes W^\dagger L\Theta \right) C_{(\Theta^\dagger)^2} \left(H \otimes R^\dagger\right) \,. \label{eq:oam_pol_u} \end{equation} The controlled unitary operations $C_{\Phi}$ and $C_{(\Theta^\dagger)^2}$ have polarisation as control bit, where $\ket{0}\equiv \ket{V}$ (vertically polarised light) and $\ket{1}\equiv \ket{H}$ (horizontally polarised light), and as target bit the OAM subspace spanned by $\{\ket{-\ell}\equiv \ket{0},\ket{+\ell}\equiv \ket{1}\}$. Such controlled unitaries can be implemented using a linear optical device named a polarisation selective Dove prism (PSDP)~\cite{yasir2021polarization}. \begin{figure} \caption{\textbf{Modified Dove prism.} \label{fig:psdp} \caption{\textbf{Modified Dove prism between two quarter-wave plates forms the PSDP device.} \label{fig:psdp_rot} \caption{Diagram showing the components of the polarisation selective Dove prism (PSDP).} \label{figthree} \end{figure} A PSDP consists of a modified Dove Prism (DP) mounted between two half-wave plates, cp.\ Fig.\ \ref{figthree}. A rotated PSDP realises the following controlled operation \begin{align} \mathcal{C}^{PSDP}_{\ell,\alpha} := \begin{pmatrix} \mathds{1} & 0 \\ 0 & \sigma_x P_{2\ell\alpha}^\dagger \end{pmatrix} = \begin{pmatrix} \mathds{1} & 0 \\ 0 & P_{2\ell\alpha} \sigma_x \end{pmatrix} \,, \label{Eq:PSDP} \end{align} where $\alpha$ is the angle of rotation and $P_{2\ell\alpha}^\dagger$ is the previously defined phase shift operator.
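As a short illustrative check (ours, not part of the scheme itself), the composition rule behind Eq.~\eqref{Eq:PSDP} can be verified numerically: an unrotated PSDP followed by a rotated one yields a polarisation-controlled phase on the two-level OAM subspace, which is the building block used below for $C_{\Phi}$ and $C_{(\Theta^\dagger)^2}$ (an additional local phase on polarisation supplies the remaining overall phases, cf.\ App.~\ref{Ap:2}).
\begin{verbatim}
# Sketch (ours): two PSDPs compose to a controlled phase 1 (+) P^dag_{2 ell alpha}.
import numpy as np

def P(phi):                                   # phase operator P_phi
    return np.diag([np.exp(1j * phi), np.exp(-1j * phi)])

sx = np.array([[0, 1], [1, 0]])
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
ctrl = lambda A: np.kron(P0, np.eye(2)) + np.kron(P1, A)   # control on polarisation

ell, alpha = 2, 0.35                          # OAM value and rotation angle
psdp_rot = ctrl(sx @ P(2 * ell * alpha).conj().T)          # rotated PSDP, Eq. (Eq:PSDP)
psdp_0 = ctrl(sx)                                          # unrotated PSDP (C-NOT)

assert np.allclose(psdp_0 @ psdp_rot, ctrl(P(2 * ell * alpha).conj().T))
print("two PSDPs realise the controlled phase on the OAM qubit")
\end{verbatim}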
The rotated PSDP device is configured by rotating the modified DP by an angle $\alpha$ and the half-wave plates by $\alpha/2$. It is therefore possible to implement the controlled unitaries $C_\Phi$ and $C_{(\Theta^\dagger)^2}$ using a local phase shift on polarisation and two PSDPs. This is further outlined in App.~\ref{Ap:2}. If $L''L^\dagger$ can be written in the form $\exp(i2\phi)\sigma_x P^\dagger_{\theta}$, then a phase shifter implementing $P^\dagger_{\phi}$ on polarisation and a single rotated PSDP can be used to implement $C_{L''L^\dagger}$ in Eq.~\eqref{eq:U_pol_oam}, without the need to diagonalise it. The Hadamard operation on polarisation can be performed using a half-wave plate rotated by an angle $\pi/8$. The final requirement for the optical realisation of $U$ is local unitary operations on the OAM degree of freedom. It is not currently known how to implement arbitrary unitary operations on OAM. However, cylindrical-lens mode converters~\cite{beijersbergen1993astigmatic} can be used to perform unitary operations on the $\{\ket{\ell=-1},\ket{\ell=+1}\}$ subspace of OAM. The so-called $\pi$- and $\pi/2$-converters are analogous to the half-wave and quarter-wave plates, respectively~\cite{padgett1999poincare}, and can therefore be used to construct arbitrary unitary operations on this subspace of OAM. In addition, a rotated Dove prism can be used to implement the $P_{2\ell\alpha}\sigma_x$ operation on the two-level OAM subspace associated with $\ell$, where $\alpha$ is the angle of rotation of the Dove prism.
\section{Iterative application of control channels}\label{Sec:Reuse} In Sec.~\ref{Sec:2-out} we showed how to construct coherent feedback control schemes from control channels which satisfy the conditions for convergence to a target state given in Eqs.~\eqref{eq:fp_contition} and \eqref{eq:span_contition}. Such schemes may require repetition for various reasons. Some control schemes work weakly, meaning convergence to the target state occurs over multiple iterations. Examples of weak schemes are given in Sec.~\ref{Sec:weakswap} and Sec.~\ref{Sec:Example_2}. In addition, repeated coherent feedback control protects the system against noise~\cite{konrad2020robust}. Here we discuss two methods to achieve repeated application of coherent feedback control. If the system starts in the pure state $\ket{\psi_0}$, the initial state of system and controller is given by $\ket{\Psi_0}:=\ket{0}\ket{\psi_0}$. The output state $\ket{\Psi_1}:= U \ket{\Psi_0}$ of a coherent feedback scheme then reads \begin{equation} \begin{aligned} \ket{\Psi_1} & = \sum_i \ket{i}\bra{i}U \ket{0}\ket{\psi_0} \\ &= \ket{0} K_0 \ket{\psi_0}+ \ket{1} K_1 \ket{\psi_0}\,, \end{aligned}\label{bla0} \end{equation} where the Kraus operators $K_0$ and $K_1$ obey the conditions~\eqref{eq:fp_contition} and \eqref{eq:span_contition} to drive the system into a target state. In order to iterate the control channel, the controller must be reset to its initial state $\ket{0}$. This could be accomplished by submitting each of the two components in Eq.\ (\ref{bla0}) into an additional control device adjusting the state of the controller where necessary. However, this would require $2^{N}-1$ devices for $N$ iterations. We discuss in the following two ways to circumvent this exponential growth of machinery. The first method involves the resetting of the controller by filtering. This results in losses that we propose to compensate by parametric amplification.
The second method involves the transfer of the controller state to an additional degree of freedom. \subsection{Resetting of controller by filtering}\label{reset} One way to reset the controller is to project it onto the state $\ket{+}\equiv\tfrac{1}{\sqrt{2}}(\ket{0}+\ket{1})$ to create a superposition of the Kraus operators: \begin{equation} \ket{\Psi_1}\xrightarrow{\ket{+}\bra{+}} \ket{\Psi_1'} = \frac{1}{\sqrt{2}} \ket{+} (K_0 \ket{\psi}+K_1 \ket{\psi}) . \label{project} \end{equation} A subsequent rotation restores the initial controller state $\ket{0}$, \begin{equation} \ket{\Psi_1'} \xrightarrow{} \ket{\Psi_1''}= \frac{1}{\sqrt{2}}\ket{0}(K_0 \ket{\psi}+ K_1\ket{\psi}). \end{equation} After the projection (\ref{project}) the information about the controller states is deleted, and the state change no longer corresponds to a trace-preserving channel that satisfies the conditions for convergence given in Eqs.~\eqref{eq:fp_contition} and~\eqref{eq:span_contition}. However, we show that for the specific examples of coherent feedback schemes discussed in Sec.~\ref{Sec:Examples}, the system will still converge to the target state (see App.~\ref{Ap:fid}). The state of the composite system after the second round of coherent control is given by $\ket{\Psi_2}:=U\ket{\Psi_1''}$. The projection \eqref{project} will, in general, lead to photon losses. In order to compensate for these we consider parametric amplification. The preparation of light modes or polarisation by coherent feedback control can be applied to coherent (classical) states of light. In this case, amplification is an adequate means to compensate for losses. Parametric amplification is available for both spatial light modes and polarisation as system degree of freedom. This technique uses a non-linear crystal and a pump beam to amplify an input signal beam. The intensity gain factor of the parametric amplifier can be appropriately adjusted in order to restore the original input beam intensity. The parametric intensity gain is given by~\cite{opampManzoni}, \begin{equation} G(L)= 1 + \left(\frac{\Gamma}{g} \sinh(g L) \right)^2. \end{equation} Here $L$ is the length of the crystal (interaction length), and $g$ and $\Gamma$ are generalised wave numbers that depend on the parameters of the non-linear process, i.e., the pump beam intensity, the phase matching, the angular frequencies, the wave numbers in the medium as well as the non-linear susceptibility (see App.~\ref{Ap:4}). Resetting by means of filtering is available for different controller degrees of freedom. For the path degree of freedom, the light from the two output ports of the interferometer shown in Fig.~\ref{fig:fig_3} can be combined according to Eq.~\eqref{project} using a balanced beam-splitter. However, one path will still have to be discarded and this intensity loss could be compensated by amplification. For polarisation, the intensity losses are due to the use of a linear polariser (filter) which performs the projection onto $(\ket{V}+\ket{H})/\sqrt{2}$ (Eq.~\eqref{project}). A polarisation rotator can then be employed to reset the initial polarisation $\ket{V}$. We are not limited to the use of classical light, as amplification still works using low numbers of photons, but with finite success probabilities. In this regime, the output photons would only reach a limited fidelity, in agreement with the no-cloning theorem. 
It has been shown in \cite{quantclon} that stimulated emission, and thus parametric amplification, is capable of producing quantum clones with near optimal fidelity. For an optimal universal $N \rightarrow M$ cloner, the optimal fidelity is given by \cite{qclonfid}, \begin{equation}\label{Eq:opfid} F= \frac{NM +N +M}{M (N+2)}. \end{equation} For the case of $N,M\rightarrow\infty$ in Eq.~\eqref{Eq:opfid}, the classical fidelity of $1$ is recovered. Possibly, the noise present in the amplification process on the single photon level is reduced with each repetition of the control channel. This may lead to a high fidelity asymptotically, but with finite success probability. However, we do not know whether the coherent feedback method introduced here works for non-classical states of light with more than one photon. This is subject of further investigation. \subsection{Storage in an additional degree of freedom}\label{Sec:discard} Here we make use of an additional degree of freedom in order to reset the controller. Taking into account an additional ancilla ``$a$" in initial state $\ket{0}_a$, the composite state $\ket{\Psi_1}:= (U\ket{0}_c\ket{\psi_0}_s)\ket{0}_a$ after the system interacted with the controller reads \begin{equation} \ket{\Psi_1 }= (\ket{0} K_0 \ket{\psi_0} + \ket{1} K_1 \ket{\psi_0}) \ket{0}_a\, \end{equation} where $\ket{\psi_0}$ is the initial state of the system. In a first step, we mark the controller basis states using basis states of the ancilla $a$, e.g., by a C-NOT operation acting on ``$a$" conditioned on the controller, \begin{equation} \ket{\Psi_1 } \rightarrow \ket{\Psi_1'} = \ket{0} K_0 \ket{\psi}\ket{0}_a + \ket{1} K_1 \ket{\psi}\ket{1}_a.\label{Eq:marking} \end{equation} In this case, the initial controller state can be restored by a C-NOT conditioned on the state of subsystem ``$a$": \begin{equation} \ket{\Psi_1'} \rightarrow \ket{\Psi_1''}= \ket{0} \left(K_0 \ket{\psi} \ket{0}_a + K_1\ket{\psi}\ket{1}_a\right).\label{Eq:reset} \end{equation} In order to restore the initial controller state $\ket{0}$ unitarily, the information about the unknown state of the controller after the application of the control channel must be stored in the ancillary degree of freedom. When this resetting process is extended to $N$ iterations of the control channel, the dimension of the ancilla is given by $2^N$. Below we discuss this mechanism at the example of the time-bins and OAM as ancillas. \subsubsection{Time-bins as additional degree of freedom}\label{Sec:timebin} We consider coherent feedback control with polarisation and OAM of a light pulse as the controller and the system, respectively. The ancilla (time-bins) is in initial state $\ket{t=0}$, where $t$ represents the time delay between light pulses. Distinct time-bins can be created by varying the path lengths that the two different polarisation components experience by means of a delay loop. This marks each polarisation state with a time-bin state, analogous to Eq.~\eqref{Eq:marking}. The initial polarisation can then be restored in each pulse before it re-enters the coherent feedback scheme for the next iteration (Eq.~\eqref{Eq:reset}). 
Given that the output state of the pulse after the first application of the coherent feedback is $\ket{\Psi_{1}}:=(U\ket{V}_c\ket{\psi_0}_s)\ket{0}_a$, then the total transformation described can be written as \begin{equation}\label{timetrans} \begin{aligned} &\ket{\Psi_{1}}=\ket{V} K_0 \ket{\psi_0}\ket{0}_a+ \ket{H} K_1 \ket{\psi_0}\ket{0}_a\\ & \longrightarrow \ket{\Psi_1''}= \ket{V} (K_0 \ket{\psi_0}\ket{0}_a+ K_1 \ket{\psi_0}\ket{ \tau}_a)\,, \end{aligned} \end{equation} where $\tau$ is a delay period. A proposed method to iteratively implement Eq.~\eqref{timetrans} is shown in Fig.~\ref{fig:time}. \begin{figure} \caption{Setup to create disjoint time-bins depending on the order of Kraus operators in $N$ iterations of the channel which is realised by coupling polarisation (controller) to OAM (system). The setup uses electro-optic modulators (EOM) that change the polarisation of a pulse when switched on, and a polarising beam-splitter (PBS) that reflects (transmits) vertically (horizontally) polarised light~\cite{EOMswitch} \label{fig:time} \end{figure} The delay time $\tau$ must change in each iteration, in such a way that the pulses do not overlap. This allows for the selective change of the polarisation in order to restore the initial controller state for each time-bin. One way to create distinct pulses is to double the number of round trips in the delay loop from one iteration to the next. This is demonstrated in App.~\ref{unique}. During the second iteration the two distinct pulses in $\ket{\Psi_1''}$ are taken (at different times) as inputs in the coherent feedback scheme so that the state $\ket{\Psi_{2}}:= U\ket{\Psi_1''}$ is given by \begin{equation} \begin{aligned} \ket{\Psi_{2}} =& \left(\ket{V} K_0^2\ket{\psi} + \ket{H} K_1 K_0\ket{\psi}\right)\ket{0}_a\\ & + \left(\ket{V} K_0 K_1 \ket{\psi}+ \ket{H} K_1^2\ket{\psi}\right)\ket{\tau}_a. \end{aligned} \end{equation} The polarisation states are marked by the conditional operation on the controller and ancilla ``$a$", \begin{align} \ket{V}\bra{V}\otimes\mathds{1}+\ket{H}\bra{H}\otimes\sum_{n=0}^{2^{N-1}-1}\ket{n\tau+2^{N-1}\tau}\bra{n\tau}\,, \end{align} where $N$ is the number of applications of the coherent feedback control which have already occurred, and therefore corresponds to the index of the composite state. Here $N=2$ so that \begin{equation} \begin{aligned} \ket{\Psi_2'}=& \ket{V}K_0^2\ket{\psi}\ket{0}_a + \ket{H} K_1 K_0\ket{\psi}\ket{2\tau}_a\\ & + \ket{V} K_0 K_1 \ket{\psi}\ket{\tau}_a+ \ket{H} K_1^2\ket{\psi}\ket{3\tau}_a. \end{aligned} \end{equation} It is clear that the leading pulses in the first half of the pulse train are in the desired polarisation state. However, the second half of all pulses in the pulse train require resetting. This can be achieved by the conditional operation on the controller and ancilla ``$a$", \begin{align} \mathds{1}\otimes \sum_{n=0}^{2^{N-1}-1} \ket{n\tau}\bra{n\tau} + \sigma_x\otimes \sum_{n=2^{N-1}}^{2^N-1}\ket{n\tau}\bra{n\tau}\,, \end{align} so that \begin{equation} \begin{aligned} \ket{\Psi_2''}= \ket{V} &\left( K_0^2\ket{\psi}\ket{0}_a + K_1 K_0\ket{\psi} \ket{\tau}_a \right.\\ & \quad \left.+ K_0 K_1 \ket{\psi}\ket{2\tau}_a + K_1^2\ket{\psi}\ket{3\tau}_a\right). \end{aligned} \end{equation} After a sufficiently high number $N$ of applications of the control channel, the system converges to the target state $\ket{T}$ \cite{konrad2020robust}. 
Therefore, after $N$ iterations of coherent control and controller reset the composite system is in a product state of the form \begin{equation} \ket{\Psi_N''}=\ket{V} \ket{T}\left(\sum_{m=0}^{2^N-1} \alpha_m \ket{m\tau}_a\right). \end{equation} The amplitudes of the $2^N$ pulses are given by $\alpha_m= \bra{T} \mathcal{K}_m\ket{\psi}$ as follows from writing the final state in terms of the initial state, $\ket{\Psi_N''}= \ket{V} \left(\sum_m \mathcal{K}_m\ket{\psi_0}\ket{m\tau}_a\right)$. Here $\mathcal{K}_m$ refers to the $m$th permutation of the product $\Pi_{j=1}^N K_{i}^{(j)}$ of the two Kraus operators, with $i=0,1$. Although the system degree of freedom of each pulse is in the target state, the pulses have low amplitudes. In measurements on the system, collecting the accumulated signal over a large time period compensates the low amplitudes. \subsubsection{OAM as additional degree of freedom} \label{Sec:oam_mul} We consider coherent feedback control with path as the controller, polarisation as the system, and employ OAM as an additional ancillary degree of freedom. The input state has an even OAM mode so that the composite system is $\ket{0}_c\ket{\psi_0}_s\ket{\ell=0}_a$. Here $\ket{0}_c$ corresponds to the initial state of the path degree of freedom, which is the upper path, and $\ket{\psi_0}_s$ is the initial state of the system. After the coherent feedback, one unit of OAM is added to the light in the lower path (with $\ket{1}_c$) by use of a spiral phase plate. This is done in order to mark the controller, analogous to Eq.~\eqref{Eq:marking}. The controller can then be reset using an inverse even-odd OAM mode sorter, so that the total transformation is given by \begin{equation}\label{eq:oamtrans} \begin{aligned} &\ket{\Psi_{1}}=(\ket{0}K_0 \ket{\psi_0} + \ket{1}K_1 \ket{\psi_0})\ket{\ell=0}_a \\ & \longrightarrow\ket{\Psi_1''}= \ket{0}(K_0 \ket{\psi_0}\ket{\ell=0}_a + K_1 \ket{\psi_0}\ket{\ell=1}_a). \end{aligned} \end{equation} For the next iteration of the coherent feedback control, the OAM values must be doubled as to only obtain even OAM values. To accomplish this we must have a method of multiplying the OAM values of an input beam. In \cite{oam_mult}, multiplication is achieved by using superpositions of circular-sector transformations of the input beam. This method works best for low OAM values, as the conversion efficiencies decrease for increasing OAM values e.g., when doubling the values $\ell=+1$, $\ell=+2$ and $\ell=+3$, conversion efficiencies are $0,97$, $0,93$ and $0,86$, respectively \cite{oam_mult}. \begin{figure} \caption{Setup to reset controller degree of freedom for coherent feedback with polarisation (system) and path (controller) using OAM as additional controller. The marking of the controller states is achieved by S, a spiral phase plate, on the lower path. The initial controller state is restored through the use of M, the inverse OAM mode sorter for even and odd OAM values. Before the next application of the coherent feedback, the OAM values of the light must be doubled with the OAM doubler D, so only even OAM modes are present.} \label{fig:oammult} \end{figure} The setup to iteratively implement the resetting transformation (Eq.~\eqref{eq:oamtrans}) is shown in Fig.~\ref{fig:oammult}. A description of the $N$th iteration of the transformation is now briefly considered. 
The conditional operation on the controller and ancilla ``$a$" that marks the path states with different OAM values reads \begin{equation} \sum_{\ell=0}^{2^{N}-2} \left( \ket{0}\bra{0}\otimes\ket{\ell}\bra{\ell}+\ket{1}\bra{1}\otimes\ket{\ell+1}\bra{\ell}\right). \end{equation} Since the ingoing OAM modes are all even, the upper path will contain only even OAM modes, while the lower path contains only odd OAM modes. The initial path state is restored by \begin{equation} \sum_{\ell=0}^{2^{N-1}-1} \left(\mathds{1}\otimes\ket{2\ell}\bra{2\ell}+\sigma_x\otimes\ket{2\ell+1}\bra{2\ell+1}\right), \end{equation} which can be implemented by a mode sorter for even and odd OAM values run in reverse. Before entering the coherent feedback device the OAM value of the light is doubled. The system converges to the target state $\ket{T}$ for a sufficiently high number $N$ of iterations as described in the previous subsection. The composite system, after $N$ iterations of coherent feedback and controller reset but before the final doubling of OAM values, is given by \begin{equation} \ket{\Psi_N''}= \ket{0} \ket{T}\left(\sum_{m=0}^{2^{N}-1} \alpha_m \ket{\ell=m}_a\right). \end{equation} Each OAM mode has an amplitude $\alpha_m= \bra{T} \mathcal{K}_m\ket{\psi}$. After convergence to the target state, the output yields a scalar beam with the desired polarisation. In this case, it is easy to discard the OAM degree of freedom, as we are only interested in the polarisation.
\section{Examples of coherent feedback schemes}\label{Sec:Examples} In this section we discuss three coherent feedback schemes. The first is a basic control mechanism which allows a target state to be reached in a single iteration. The second is based on the swap operation and allows any target state to be reached provided it is encoded in the controller degree of freedom. The third scheme allows for the preparation of an equal superposition of $+\ell$ and $-\ell$ OAM states.
\subsection{Basic control scheme}\label{Sec:Basic} Let us consider the simple control channel mentioned in Sec.~\ref{Sec:2-out}, with the Kraus operators $K_0= \ket{T}\bra{T}$ and $K_1=\ket{T}\bra{T^\perp}$. This scheme allows the target state to be reached in a single iteration. The singular value decomposition of the Kraus operators reads $K_0= V \ket{0}\bra{0}V^\dagger$ and $K_1= iV\sigma_y \ket{1}\bra{1}V^\dagger$, where $V = \ket{T}\bra{0} +\ket{T^\perp}\bra{1}$. Hence, according to Eq.~\eqref{eq:U_4}, the CS-decomposition of the unitary that implements the control channel determined by $K_0$ and $K_1$ is given by \begin{align} U & = \mathds{1}\otimes V \begin{pmatrix} \mathds{1} & 0\\ 0 & \sigma_y \end{pmatrix} H \otimes \mathds{1} \begin{pmatrix} e^{i\frac{\pi}{4}} P_{\frac{\pi}{4}}^\dagger & 0\\ 0 & e^{-i\frac{\pi}{4}} P_{\frac{\pi}{4}} \end{pmatrix} H \otimes V^\dagger \,. \label{Eq:simple_example} \end{align} If polarisation is the system degree of freedom and the path is the controller, then the above unitary can be implemented using balanced beam-splitters as well as half- and quarter-wave plates as described in Sec.~\ref{Sec:Pol_real}. If the system degree of freedom is OAM and the controller is polarisation, then the scheme could be realised using local operations in polarisation and PSDPs as described in Sec.~\ref{Sec:OAM_real}.
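As a cross-check (ours, not part of the scheme itself), Eq.~\eqref{Eq:simple_example} can be multiplied out numerically for an arbitrary target state: the resulting $U$ is unitary and its blocks $\braket{0|U|0}$ and $\braket{1|U|0}$ reproduce $K_0=\ket{T}\bra{T}$ and $K_1=\ket{T}\bra{T^\perp}$, in line with the general construction of Eq.~\eqref{eq:U_4}.
\begin{verbatim}
# Sketch (ours): verify the circuit of the basic control scheme numerically.
import numpy as np

theta, phi = 0.6, 2.1
T = np.array([np.cos(theta), np.exp(1j * phi) * np.sin(theta)])
Tp = np.array([-np.exp(-1j * phi) * np.sin(theta), np.cos(theta)])
V = np.column_stack([T, Tp])                     # V = |T><0| + |T_perp><1|

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
sy = np.array([[0, -1j], [1j, 0]])
P = lambda a: np.diag([np.exp(1j * a), np.exp(-1j * a)])   # phase operator P_a

def blkdiag(A, B):                               # A (+) B on the controller blocks
    Z = np.zeros((2, 2))
    return np.block([[A, Z], [Z, B]])

U = (np.kron(np.eye(2), V) @ blkdiag(np.eye(2), sy) @ np.kron(H, np.eye(2))
     @ blkdiag(np.exp(1j * np.pi / 4) * P(np.pi / 4).conj().T,
               np.exp(-1j * np.pi / 4) * P(np.pi / 4))
     @ np.kron(H, V.conj().T))

assert np.allclose(U @ U.conj().T, np.eye(4))              # U is unitary
assert np.allclose(U[0:2, 0:2], np.outer(T, T.conj()))     # <0|U|0> = |T><T|
assert np.allclose(U[2:4, 0:2], np.outer(T, Tp.conj()))    # <1|U|0> = |T><T_perp|
print("basic scheme circuit verified")
\end{verbatim}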
In the simplest case the target state is given by $\ket{0} \equiv \ket{-\ell}$, so that $V=\mathds{1}$ and (up to a global phase factor) \begin{align} U & = \begin{pmatrix} \mathds{1} & 0\\ 0 & \sigma_x P^\dagger_{\frac{\pi}{2}} \end{pmatrix} P^\dagger_{\frac{\pi}{4}}HP_{\frac{\pi}{4}} \otimes\mathds{1} \begin{pmatrix} \mathds{1} & 0\\ 0 & P_{\frac{\pi}{2}} \end{pmatrix} H \otimes P_{\frac{\pi}{4}}^\dagger \,.\label{eq:simlpe_gen} \end{align} The local operations $P^\dagger_{\frac{\pi}{4}}HP_{\frac{\pi}{4}}$ and $H$ on polarisation can be decomposed into half- and quarter-wave plates as $Q_{\frac{\pi}{2}}H_{\frac{\pi}{8}} Q_0$ and $H_{\frac{\pi}{8}}$, respectively. The operations which act on OAM are $\ell$-value dependent. The first controlled unitary in Eq.~\eqref{eq:simlpe_gen} is realised by a PSDP rotated by $\frac{\pi}{4\ell}$, according to Eq.~\eqref{Eq:PSDP}. The second controlled unitary requires two PSDPs mounted coaxially. Finally, the local phase operation $P_{\frac{\pi}{4}}^\dagger$ on OAM can be realised by two coaxially mounted Dove prisms, one rotated by $\frac{\pi}{8\ell}$. This example is in essence a mechanism to prepare the system in the state $\ket{0}$. The local unitary $V^\dagger$ can be absorbed into the initial (possibly unknown) system state, while $V$ simply rotates $\ket{0}$ to the target state. This local operation may not be readily available for OAM, as discussed earlier, which limits the applications of this control scheme. In the following examples we explore optical set-ups which allow for more general target states to be reached without the use of inaccessible local OAM operations. \begin{figure*} \caption{Implementation of the weak swap, where $\gamma = \frac{2\alpha+\lambda} \label{fig:weakswap} \caption{Implementation of the target state dependent control scheme. This optical set-up allows the state $(\ket{-\ell} \label{fig:example2} \caption{Optical implementation of two coherent feedback control schemes on polarisation (controller) and the subspace of OAM spanned by $\{\ket{-\ell} \label{figfive} \end{figure*}
\subsection{Optical implementation of the weak swap}\label{Sec:weakswap} In~\citep{konrad2020robust} it is shown that the so-called weak swap unitary \begin{align} U = \exp\left(-i\lambda S\right)\,, \end{align} where $\lambda \in \mathbb{R}$ and $S$ is the swap operator, leads to convergence to any target state $\ket{T}$ encoded in the controller system since the Kraus operators $\{K_i = \braket{i|U|T}_c\}_i$ satisfy conditions~\eqref{eq:fp_contition} and \eqref{eq:span_contition}. By adding a unitary transformation of the controller state, $U_c\ket{0}_c= \ket{T}_c$, to the weak swap $U\rightarrow U U_c\otimes \mathds{1}$, this case can be reduced to coherent control with initial controller state $\ket{0}_c$. It might be worth noting that a scheme where the state of the controller determines the target allows the preparation of an unknown target state that might be the result of a quantum computation.
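For completeness, the Kraus operators generated by the weak swap can be extracted numerically. The sketch below (ours, purely illustrative) builds $U=\exp(-i\lambda S)$ from the closed form $\cos\lambda\,\mathds{1}-i\sin\lambda\, S$ (valid since $S^2=\mathds{1}$), computes $K_i=\braket{i|U|T}_c$, and checks trace preservation, the fixed-point and span conditions, and agreement with the expressions listed in App.~\ref{Ap:fid} up to a phase of $K_1$.
\begin{verbatim}
# Sketch (ours): Kraus operators of the weak swap with the controller in |T>.
import numpy as np

lam = 0.35
T  = np.array([np.cos(0.5), np.exp(0.8j) * np.sin(0.5)])       # controller state |T>
Tp = np.array([-np.exp(-0.8j) * np.sin(0.5), np.cos(0.5)])     # orthogonal state

# swap operator on H_c (x) H_s with basis ordering |a>_c |b>_s -> index 2a+b
S = np.zeros((4, 4))
for a in range(2):
    for b in range(2):
        S[2 * b + a, 2 * a + b] = 1                            # S|a>|b> = |b>|a>
U = np.cos(lam) * np.eye(4) - 1j * np.sin(lam) * S             # exp(-i lam S), as S^2 = 1

def kraus(i_vec):
    """K_i = (<i|_c (x) 1_s) U (|T>_c (x) 1_s)."""
    bra = np.kron(i_vec.conj().reshape(1, 2), np.eye(2))
    ket = np.kron(T.reshape(2, 1), np.eye(2))
    return bra @ U @ ket

K0, K1 = kraus(T), kraus(Tp)
assert np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2))   # trace preserving
assert np.allclose(K0, np.exp(-1j * lam) * np.outer(T, T.conj())
                   + np.cos(lam) * np.outer(Tp, Tp.conj()))          # as in the appendix
assert np.allclose(np.abs(K1), np.sin(lam) * np.abs(np.outer(T, Tp.conj())))
for K in (K0, K1):
    v = K @ T
    assert np.allclose(v, (T.conj() @ v) * T)                        # fixed-point condition
span = np.column_stack([K0.conj().T @ T, K1.conj().T @ T])
assert np.linalg.matrix_rank(span) == 2                              # span condition
print("weak-swap Kraus operators verified")
\end{verbatim}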
The CS decomposition of the weak swap on $\mathcal{H}_c\otimes \mathcal{H}_s = \mathcal{H}_2\otimes \mathcal{H}_2$ (up to a phase factor) is given by \begin{align} U& = \begin{pmatrix} \mathds{1}& 0\\ 0&\sigma_x \end{pmatrix} ( H \otimes \mathds{1} ) \begin{pmatrix} \mathds{1} & 0\\ 0& P_{\lambda}^\dagger \end{pmatrix} (P^\dagger_{\frac{\lambda}{2}} H \otimes \mathds{1} ) \begin{pmatrix} \mathds{1}& 0\\ 0&\sigma_x \end{pmatrix}\,,\label{Eq:swap_dec} \end{align} where $L = P^\dagger_{\frac{\lambda}{2}}$, $ L''=\sigma_xP^\dagger_{\frac{\lambda}{2}}$, $\Theta = e^{-i\frac{\lambda}{2}}P_{\frac{\lambda}{2}}$, $R =\mathds{1}$ and $R' =\sigma_x $. The implementation of the weak swap for path (controller) and polarisation (system) can be achieved using beam-splitters, half- and quarter-wave plates, as discussed in Sec.~\ref{Sec:Pol_real}. The optical implementation of the weak swap for polarisation (controller) and a two-dimensional subspace of OAM spanned by $\{\ket{-\ell},\ket{+\ell}\}$ (system) is depicted in Fig.~\ref{fig:weakswap}. An unrotated PSDP can be used to implement a C-NOT, $\mathds{1}\oplus \sigma_x$, independent of the choice of $|\ell|$ which determines the OAM subspace. It is clear from Eq.~\eqref{Eq:PSDP} that the controlled unitary $\mathds{1} \oplus P_{\lambda}^\dagger$ can be implemented using two PSDP's. One of these PSDP's should be rotated by $\alpha = \tfrac{\lambda}{2l}$ and mounted coaxially with the second unrotated PSDP. In order to reduce the number of half- and quarter-wave plates required, we combine the action of one of the rotated half-wave plates from the rotated PSDP with the action of $P^\dagger_{\frac{\lambda}{2}} H$ (App.~\ref{Ap:3}). By selecting an appropriate angle of rotation $\alpha$, namely $\alpha=\frac{\pi}{4\ell}$, the optical scheme shown in Fig.~\ref{fig:weakswap} allows us to perform swap operation $S$ between polarisation and a particular OAM subspace, in one iteration. We note that the swap will also be performed for the two-level OAM subspace associated with $\ell'=(4n+1)\ell$, $n\in \mathbb{Z}$, since $2\alpha l' = \frac{\pi}{2}+2\pi n$. \subsection{Target state dependent control mechanism}\label{Sec:Example_2} In~\citep{konrad2020robust} it is shown that the unitary \begin{align}\label{bla} U(\lambda) = \exp\left(-i\frac{\lambda}{2} (\sigma_y \otimes \sigma_y + \sigma_z \otimes \sigma_z)\right)\,, \end{align} with coupling parameter $\lambda \in \mathbb{R}$ leads to convergence to the target state $\ket{T}=(\ket{0}+\ket{1})/\sqrt{2}$ encoded in the controller system, since the Kraus operators $\{K_i = \braket{i|U|T}\}_i$ satisfy condition~\eqref{eq:fp_contition} and \eqref{eq:span_contition}. A different target state can be reached by choosing different combinations of Pauli operators as generators. The following decomposition represents the optical implementation of this unitary~\eqref{bla}: \begin{align} U(\lambda) & = \begin{pmatrix} \mathds{1}& 0\\ 0&\sigma_x \end{pmatrix} ( H \otimes \mathds{1} ) \begin{pmatrix} \mathds{1} & 0\\ 0& P_{\lambda}^\dagger \end{pmatrix} ( H \otimes \mathds{1} ) \begin{pmatrix} \mathds{1}& 0\\ 0&\sigma_x \end{pmatrix}\,.\label{Eq:example_2} \end{align} The optical implementation for polarisation (controller) and a two-dimensional subspace of OAM spanned by $\{\ket{-\ell},\ket{\ell}\}$ (system) is shown in Fig.~\ref{fig:example2}. The unitary in Eq.~\eqref{Eq:example_2} is almost identical to the weak swap discussed in the previous example, Eq.~\eqref{Eq:swap_dec}. 
However, as a consequence of the absence of the phase shift $P_{\frac{\lambda}{2}}^\dagger$ acting on polarisation, a fixed optical implementation achieves $U(\lambda)$ with a value of $\lambda$ that varies with the OAM subspace considered. If we fix the angle $\alpha$ then the apparatus implements the unitary $U(2\alpha \ell)$ on the subspace spanned by $\{\ket{-\ell},\ket{\ell}\}$. Consequently, the target state can be reached in all subspaces with close to unit fidelity, if a sufficient number of iterations are performed. This fails if $\alpha \ell$ is equal to an integer multiple of $\pi$. For a chosen optical set-up, i.e.\ for fixed $\alpha$, the fidelity, $F^\ell_n$, for a particular $\ell$ subspace after $n$ iterations is given by the overlap between the final state of the system and the target state. The fidelity increases in an exponential fashion~\cite{konrad2020robust} \begin{align} F^\ell_n = 1-(1-F^\ell_0)(1-\sin^2(\alpha \ell))^n\, , \end{align} where $F^\ell_0$ is the initial overlap between the system and target state. From this we can also determine the number of iterations necessary to reach a certain fidelity $F$ \begin{align} n = \frac{1}{\ln\left(1-\sin^2(\alpha \ell)\right)} \ln\left(\frac{1-F}{1-F_0^\ell} \right)\,. \end{align}
\section{Discussion}\label{discussion} In conclusion, we have presented a class of schemes to prepare OAM and polarisation qubits using coherent feedback control. Our results are valid for single photons as well as classical beams of light and require mostly linear optical setups. The biggest obstacle in realising coherent feedback control in optical systems is performing the non-local unitaries jointly on the system and controller. This was accomplished by using the CS-decomposition, which reduces the joint unitaries to simple unitaries acting on individual degrees of freedom of light. Our coherent feedback control methods allow us to prepare arbitrary superpositions of two OAM modes, even without spatial interferometers, and in principle without photon losses. This is an important step forward compared to other preparation methods, such as using spatial light modulators. A generalisation to the preparation of arbitrary structured light modes seems possible. While for massive systems measurement-based feedback can be used to prepare target states without losses, non-destructive and efficient measurements are not easily available for photons. Our results show how to translate measurement-based feedback into coherent feedback coupling various degrees of freedom of light. This recipe might also be employed for composite massive systems. Most of the schemes discussed here enable the preparation of a desired state in a single shot. However, coherent feedback control requires in general iterative interaction of the system and controller. This is, for example, the case when protecting a system against noise or steering the system into target dynamics. For this purpose, the controller needs to be reset to its initial state, or we need a fresh controller after each iteration. While this is readily achievable for systems with strong coupling, the situation is more demanding for optical systems. In optical systems, resetting the controller or using a fresh controller leads to an exponential increase in the resources. To overcome this problem we have suggested two methods, one involving coherent amplification of light and the other using an additional degree of freedom.
Both methods have their own limitations, which result in low fidelities and an inefficient implementation of coherent feedback control. However, using these techniques the required resources grow only linearly with the number of interactions. The coherent feedback control methods discussed here can in principle compensate weak control by repeated application of the control channel, for example the weak swap or the state-dependent control scheme. This compensation mechanism is important for photons, which couple weakly to their environment, and might lead to further applications in quantum communication tasks. \section{Decomposition of the Sine-Cosine matrix}\label{Ap:1} In this section we provide details of the derivation of Eq.~\eqref{eq:bs_internal_decomp}. \begin{align} \mathcal{S} & = \begin{pmatrix} C & S\\ -S & C \end{pmatrix}\\ &= \mathds{1} \otimes C + i \sigma_y \otimes S\\ & = \mathds{1} \otimes \left( \frac{\Theta + \Theta^\dagger}{2} \right) + \sigma_y \otimes \left( \frac{\Theta - \Theta^\dagger}{2} \right)\\ & = \left( \frac{\mathds{1} + \sigma_y }{2}\right) \otimes \Theta + \left( \frac{\mathds{1} - \sigma_y }{2} \right) \otimes \Theta^\dagger \end{align} It is straightforward to show that \begin{align} \frac{\mathds{1} + \sigma_y }{2} & =P_{\tfrac{\pi}{4}}^\dagger H \ket{0}\bra{0} HP_{\tfrac{\pi}{4}} \end{align} and \begin{align} \frac{\mathds{1} - \sigma_y }{2} & = P_{\tfrac{\pi}{4}}^\dagger H \ket{1}\bra{1} H P_{\tfrac{\pi}{4}} \,, \end{align} so that \begin{align} \mathcal{S} & = \left(P_{\tfrac{\pi}{4}}^\dagger H \otimes\mathds{1}\right) \begin{pmatrix} \Theta & 0\\ 0 & \Theta^\dagger \end{pmatrix} \left( H P_{\tfrac{\pi}{4}} \otimes\mathds{1}\right)\,. \end{align} \section{Decomposition of controlled unitary operations into PSDPs}\label{Ap:2} We wish to implement an arbitrary controlled unitary of the form \begin{align} C_{\Theta} = \mathds{1} \oplus \begin{pmatrix} e^{i\theta_1} & 0\\ 0 & e^{i\theta_2}\\ \end{pmatrix}\, . \end{align} An appropriate phase shift on the first subsystem symmetrises the controlled operation as follows: \begin{align} C_{\Theta} &= e^{i\theta'_1} \left(\mathds{1} \oplus P_{\theta'_2} \right) \left(P_{\theta'_1}^\dagger \otimes \mathds{1}\right)\, , \end{align} where $\theta'_1 = \frac{\theta_1+\theta_2}{4}$ and $\theta'_2 = \frac{\theta_1-\theta_2}{2}$. Discarding the global phase factor, we write the controlled unitary $C_\Theta$ as the product \begin{align} C_{\Theta} & = \left(\mathds{1} \oplus \sigma_x\right) \left(\mathds{1} \oplus \sigma_x P_{\theta'_2} \right)\left(P_{\theta'_1}^\dagger \otimes \mathds{1}\right) \,. \end{align} The C-NOT operation $\mathds{1} \oplus \sigma_x$ can be implemented using a single (unrotated) PSDP, while the second controlled operation $\mathds{1} \oplus \sigma_x P_{\theta'_2}$ can be implemented using a PSDP rotated by an angle $\theta'_2$ (Eq.~\eqref{Eq:PSDP}). \section{Wave plate configuration}\label{Ap:3} In this section we give the explicit calculation that provides the decomposition of $H_{\frac{\alpha}{2}}P_{\frac{\lambda}{2}} H$ into quarter- and half-wave plates.
\begin{align} P_{\frac{\lambda}{2}} &= e^{i\frac{\lambda}{2}\sigma_z}\\ H&= e^{-i\frac{\pi}{4}\sigma_y }\sigma_z = \sigma_z e^{i\frac{\pi}{4}\sigma_y }\\ H_{\frac{\alpha}{2}} & = e^{-i\alpha\sigma_y} \sigma_z\\ \Rightarrow H_{\frac{\alpha}{2}}P_{\frac{\lambda}{2}} H &= e^{-i\alpha\sigma_y} e^{i\frac{\lambda}{2}\sigma_z} e^{i\frac{\pi}{4}\sigma_y } \end{align} From Eq.~\eqref{eq:QQH} we have: \begin{align} H_{\frac{\alpha}{2}}P_{\frac{\lambda}{2}} H &= Q_{\alpha+\frac{\pi}{4}} H_{\frac{2\alpha+\lambda}{4}-\frac{\pi}{8}}Q_{\frac{3\pi}{8}}\,. \end{align} \section{Examples of iterative coherent feedback using filtering}\label{Ap:fid} In this section we provide the target fidelity $F_n$ after $n$ iterations of control and filtering for the three examples discussed in Sec.~\ref{Sec:Examples}: \begin{align} F_n &:= \frac{\left|\braket{T|\widetilde{K}^n|\psi_0}\right|^2}{\braket{\psi_0|(\widetilde{K}^\dagger)^n\widetilde{K}^n|\psi_0}}\,, \end{align} where $\widetilde{K} := ({K_0+K_1})/{\sqrt{2}}$, $\ket{\psi_0}$ is the initial state of the system and $\braket{T|T_\perp} =0$. \subsection{Basic control scheme} In this example the Kraus operators are given by \begin{align} K_0& = \ket{T}\bra{T} \\ K_1& =\ket{T}\bra{T_\perp} \end{align} so that \begin{align} \widetilde{K} & = \frac{1}{\sqrt{2}}\left(\ket{T}\bra{T} + \ket{T}\bra{T_\perp}\right)\,. \end{align} Taken to the $n$th power, \begin{align} \widetilde{K}^n = \frac{1}{2^{\frac{n}{2}}}\left(\ket{T}\bra{T} + \ket{T}\bra{T_\perp}\right)\,. \end{align} Provided that $\braket{T|\psi_0}+\braket{T_\perp|\psi_0} \neq 0$, the target fidelity after each iteration can be shown to be unity, which is to be expected since this scheme works in a single round. \subsection{Weak swap} The Kraus operators associated with the weak swap are given by~\cite{konrad2020robust} \begin{align} K_0& = e^{-i\lambda}\ket{T}\bra{T}+ \cos\lambda\ket{T_\perp}\bra{T_\perp} \\ K_1& = \sin\lambda \ket{T}\bra{T_\perp} \end{align} so that \begin{align} \widetilde{K} & =\frac{1}{\sqrt{2}} \left(e^{-i\lambda} \ket{T}\bra{T} + \cos\lambda \ket{T_\perp}\bra{T_\perp}+ \sin\lambda\ket{T}\bra{T_\perp} \right)\,. \end{align} Taken to the $n$th power, \begin{align} \widetilde{K}^n = \left(\frac{e^{-i\lambda}}{\sqrt{2}}\right)^n \Bigg( \ket{T}&\bra{T}+ e^{in\lambda}\cos^n\lambda \ket{T_\perp}\bra{T_\perp} \nonumber \\ & +\sin\lambda \sum_{k=0}^{n-1} e^{i(k+1)\lambda}\cos^{k}\lambda \ket{T}\bra{T_\perp} \Bigg) \end{align} so that \begin{align} F_n & = \frac{1}{1 + \Lambda(\cos\lambda)^{2n}}\,, \end{align} where \begin{align} \Lambda = \frac{|\braket{T_\perp|\psi_0}|^2}{|\braket{T|\psi_0}+e^{i\lambda}\sin\lambda \sum_{k=0}^{n-1} (e^{i\lambda}\cos\lambda)^{k}\braket{T_\perp|\psi_0} |^2}\,. \end{align} The target fidelity $F_n$ converges to one for large $n$ provided that $\lambda$ is not an integer multiple of $\pi$ and $\Lambda<\infty$. \subsection{Target state dependent control mechanism} The Kraus operators in this example are given by~\cite{konrad2020robust} \begin{align} K_0& = \frac{1}{\sqrt{2}}\left(\ket{T}\bra{T} +\sin\lambda \ket{T}\bra{T_\perp} +\cos\lambda \ket{T_\perp}\bra{T_\perp}\right)\\ K_1& =\frac{1}{\sqrt{2}}\left(\ket{T}\bra{T} -\sin\lambda \ket{T}\bra{T_\perp} +\cos\lambda \ket{T_\perp}\bra{T_\perp}\right) \end{align} so that \begin{align} \widetilde{K}& =\ket{T}\bra{T} +\cos\lambda \ket{T_\perp}\bra{T_\perp}\,. \end{align} Taken to the $n$th power, \begin{align} \widetilde{K}^n =\ket{T}\bra{T} +\cos^n\lambda \ket{T_\perp}\bra{T_\perp}\,.
\end{align} so that \begin{align} F_n & = \frac{1}{1 + (\cos\lambda)^{2n}\frac{|\braket{T_\perp|\psi_0}|^2}{|\braket{T|\psi_0}|^2}}\,. \end{align} The target fidelity $F_n$ converges to one for large $n$ provided that $\lambda$ is not an integer multiple of $\pi$ and $|\braket{T|\psi_0}|> 0$. \section{Parametric Amplification}\label{Ap:4} The process of parametric amplification involves the interaction of three fields, the signal $E_1(z,t)$, the idler $E_2(z,t)$ and the pump field $E_3(z,t)$. We follow the method presented in \cite{opampManzoni}. For the case of monochromatic plane waves, where the pump beam is undepleted during the nonlinear interaction and there is no initial idler field, the signal field evolution along the crystal is given by \begin{equation} \frac{\partial^2 A_1}{\partial z^2} = -i \Delta k\ \frac{\partial A_1}{\partial z} + \Gamma^2 A_1. \end{equation} Here $A_1$ refers to the complex amplitude of the signal field, $\Delta k = k_3 - k_2 -k_1$ is the wave vector mismatch and $\Gamma$ is the coupling constant from the nonlinear wave equations, defined as \begin{equation} \Gamma^2 = \frac{4 d_{\mathrm{eff}}^2 \omega_1^2 \omega_2^2 }{k_1 k_2 c^4} |A_3|^2. \end{equation} The constant $d_{\mathrm{eff}}$ relates to the nonlinear susceptibility of the crystal, $\omega_i$ and $k_i$ refer to the angular frequencies and the wave numbers of the fields in the medium, respectively, and $c$ is the speed of light. Since the beam intensity is given by $I_i = \tfrac{1}{2}n_i \varepsilon_0 c |A_i|^2$, the signal and idler intensities after the interaction length of the nonlinear crystal are \begin{equation} \begin{aligned} I_1(L)&= I_1(0) \left(1+ \left(\tfrac{\Gamma}{g} \sinh(gL) \right)^2 \right)\\ I_2(L)&= I_1(0)\tfrac{\omega_2}{\omega_1} \left(\tfrac{\Gamma}{g} \sinh(gL) \right)^2, \end{aligned} \end{equation} where $I_1(0)$ is the initial signal field intensity, $\varepsilon_0$ is the electric permittivity and $g$ is a generalised wave number given by \begin{equation} g=\sqrt{\Gamma^2-\frac{\Delta k ^2}{4}}. \end{equation} The parametric gain $G$ is therefore defined as the ratio of the signal intensity after and before the nonlinear process, $G=\frac{I_1(L)}{I_1(0)}$. \section{Creating distinct time-bins} \label{unique} If the number of round trips in the delay loop (Fig.~\ref{fig:time}) is doubled in each iteration, the delay time $T_{N}$ a specific pulse spends in the delay loop after $N$ iterations reads \begin{equation} T_{N}(\mathbf{a})= \sum_{n=1}^N a_n\ 2^{n}\ \tau \,. \end{equation} Here the vector $\mathbf{a}= (a_1, a_2,\ldots, a_N)$ contains the information about which path the pulse took in each iteration. The component $a_i$ is $0$ or $1$, depending on whether the pulse entered the delay loop in the $i$th iteration. The time period $\tau$ is the duration of a single round-trip in the delay loop. Since each vector $\mathbf{a}$ is the binary representation of a specific number $T_{N}/\tau$, the corresponding delay $T_{N}$ is unique and hence the time bins do not overlap. \end{document}
\begin{document} \title{On Kalamidas' proposal of faster than light quantum communication} \author{GianCarlo Ghirardi} \email{[email protected]} \affiliation{Emeritus, University of Trieste\\ The Abdus Salam ICTP, Trieste, Strada Costiera 11, 34014 Trieste, Italy} \author{Raffaele Romano} \email{[email protected]} \affiliation{Department of Mathematics, Iowa State University, Ames, IA (USA)} \begin{abstract} \noindent In a recent paper, Kalamidas has advanced a new proposal of faster than light communication which has not yet been proven invalid. In this paper, by strictly adhering to the standard quantum formalism, we prove that, like all previous proposals, it does not work. \\ \noindent KEY WORDS: faster-than-light signalling. \end{abstract} \maketitle \section{Introduction and preliminaries on the experimental set up} The idea that quantum entanglement and quantum interactions with a part of a composite system allow faster than light communication has been entertained for quite a long time. All existing proposals have been shown to be unviable. For a general overview we refer the reader to papers by Herbert~\cite{he}, Selleri~\cite{se}, Eberhard~\cite{eb}, Ghirardi \& Weber~\cite{gw}, Ghirardi, Rimini \& Weber~\cite{grw}, Herbert~\cite{he2}, Ghirardi (who derived the no-cloning theorem precisely to reject the challenging proposal [6] by Herbert; see the document attached to Ref.~[7]), and, more recently, by Greenberger~\cite{gr} and Kalamidas~\cite{ka}. A detailed analysis of the problem and the explicit refutation of all proposals except that of Kalamidas appear in the recent work by Ghirardi~\cite{ghir}. In view of the interest of the subject and of the fact that a lively debate on the topic is still going on, we consider it our duty to make rigorously clear that the proposal [9] is fundamentally flawed. We will not go into the details of the precise suggestion and will simply present a very sketchy description of the experimental set-up. The main point can be grasped from the following picture, taken from the paper by Kalamidas, depicting a source S of entangled photons in precise modes which impinge on appropriate beam splitters $BS_{0}, BS_{a}$ and $BS_{b}$, the first one with equal transmittivity and reflectivity, the other two with (real) parameters $t$ and $r$ characterizing these properties. Finally, in the region at right, one can (or not, at one's free will) inject coherent photon states $|\alpha\rangle$ characterized by the indicated modes: \begin{figure} \caption{The experimental set up devised by Kalamidas.} \label{f1} \end{figure} \section{The initial state} Kalamidas' mechanism for superluminal signaling rests on the possibility of injecting, or not injecting, the coherent states at the extreme right. Correspondingly, one has as initial state either: \begin{equation} |\psi_{in}\rangle=\frac{1}{\sqrt{2}}(a_{1}^{\dag}a_{2}^{\dag}+e^{i\phi}b_{1}^{\dag}b_{2}^{\dag})D_{a3}(\alpha)D_{b3}(\alpha)|0\rangle, \end{equation} \noindent where $D_{a3}(\alpha)$ and $D_{b3}(\alpha)$ are the displacement operators generating coherent states in modes $a3$ and $b3$, or, alternatively, the state: \begin{equation} |\tilde{\psi}_{in}\rangle=\frac{1}{\sqrt{2}}(a_{1}^{\dag}a_{2}^{\dag}+e^{i\phi}b_{1}^{\dag}b_{2}^{\dag})|0\rangle. \end{equation} \section{Second step: the evolution and the beam splitters} Once the initial state has been fixed, the process starts and the state evolves in time. The evolution implies the passage of photons through the indicated beam splitters.
It has to be mentioned that the recent debate on Ref.~[9] has seen disagreeing positions concerning the functioning of these devices. We will not enter into technical details; we simply describe the effect of crossing a beam splitter in terms of appropriate unitary operations which account for its functioning. The result is the one considered by Kalamidas. Let us stress, due to its importance, that this move of simply relying on the unitary nature of the transformations overcomes any specific debate. Actually, the quite general and legitimate assumption that {\bf any unitary transformation of the Hilbert space can actually be implemented} makes it unnecessary to enter into the details of the functioning of the beam splitters, a move that we consider important since, apparently, different people make different claims concerning such functioning. We simply consider the evolution of the initial statevector induced by the unitary transformation $U=U_{0}U_{a}U_{b}$, with: \begin{eqnarray} U_{0}a_{1}^{\dag}U_{0}^{\dag}&= &\frac{1}{\sqrt{2}}(a_{1}^{\dag}+b_{1}^{\dag}) \nonumber \\ U_{0}b_{1}^{\dag}U_{0}^{\dag}&=& \frac{1}{\sqrt{2}}(-a_{1}^{\dag}+b_{1}^{\dag}) \nonumber \\ U_{a}a_{2}^{\dag}U_{a}^{\dag}&=&(ta_{2}^{\dag}+ra_{3}^{\dag}) \nonumber \\ U_{a}a_{3}^{\dag}U_{a}^{\dag}&=&(-ra_{2}^{\dag}+ta_{3}^{\dag})\nonumber \\ U_{b}b_{2}^{\dag}U_{b}^{\dag}&=&(tb_{2}^{\dag}+rb_{3}^{\dag}) \nonumber \\ U_{b}b_{3}^{\dag}U_{b}^{\dag}&=&(-rb_{2}^{\dag}+tb_{3}^{\dag}). \end{eqnarray} Using these expressions one easily evaluates the evolution of each of the two initial states through all the beam splitters with their particular characteristics. The computation is quite easy and the final state, when the coherent states are present at right, turns out to have the following form: \begin{eqnarray} |\psi_{fin}\rangle &\equiv& U_{0}U_{a}U_{b}|\psi_{in}\rangle\nonumber \\ &=&\frac{1}{2}[(a_{1}^{\dag}+b_{1}^{\dag})(ta_{2}^{\dag}+ra_{3}^{\dag})+e^{i\phi}(-a_{1}^{\dag}+b_{1}^{\dag})(tb_{2}^{\dag}+rb_{3}^{\dag})]\nonumber \\ &\times& D_{a3}(t\alpha)D_{a2}(-r\alpha)D_{b3}(t\alpha)D_{b2}(-r\alpha)|0\rangle. \end{eqnarray} \noindent Alternatively, when the second initial state is considered, the evolution leads to: \begin{eqnarray} |\tilde{\psi}_{fin}\rangle&\equiv & U_{0}U_{a}U_{b}|\tilde{\psi}_{in}\rangle \nonumber \\ &=& \frac{1}{2}[(a_{1}^{\dag}+b_{1}^{\dag})(ta_{2}^{\dag}+ra_{3}^{\dag})+e^{i\phi}(-a_{1}^{\dag}+b_{1}^{\dag})(tb_{2}^{\dag}+rb_{3}^{\dag})]|0\rangle. \end{eqnarray} \section{Possible actions at right} We must confess that the original paper by Kalamidas, as well as many of the comments which followed, are not sufficiently clear concerning what one does at right on the photons appearing there. One finds statements of the type ``when there is one photon in mode ${\bf a2'}$ and one photon in mode ${\bf b2'}$'' then ``there is a coherent superposition of single photon occupation possibilities between modes ${\bf a1}$ and ${\bf b2}$''. Here we cannot avoid stressing that such statements, as they stand, are meaningless because they take into account one of the possible outcomes and not the complete unfolding of the measurement process. If one is advancing a precise proposal for an experiment, one must clearly specify which actions are actually performed. And here comes the crucial point: the alleged important consequences of an action performed at right on the outcomes at left must be deduced from the analysis of the outcomes of possible observations in the region at left (we want to have a signal there).
It seems to us that the proponent of the new mechanism for superluminal communication has not taken into account a fundamental fact which has been repeatedly stressed in the literature on the subject. What we have to investigate are the implications of precise actions at right for the physics of the systems in the region at left. In turn, everything that is physically relevant at left is, as is well known, exhaustively accounted for by the reduced statistical operator $\rho_{red}^{(L)}$ referring to the systems which are there, i.e. the one obtained from the full statistical operator $\rho(L,R)\equiv |\psi_{fin}\rangle\langle\psi_{fin}|$ by taking the partial trace over the right degrees of freedom: $\rho_{red}^{(L)}=Tr^{(R)} [ |\psi_{fin}\rangle\langle\psi_{fin}|]$, with obvious meaning of the symbols. Now, the operator $\rho_{red}^{(L)}$ is unaffected by all conceivable legitimate actions made at right. The game is the usual one. One can consider: \begin{itemize} \item Unitary evolutions involving the systems at right: $\rho(L,R)\rightarrow U^{(R)}\rho(L,R)U^{\dag(R)}$ \item Projective measurements of an observable with spectral family $P_{k}^{(R)}$: $\rho(L,R)\rightarrow\sum_{k}P_{k}^{(R)}\rho(L,R)P_{k}^{(R)}$ \item Nonideal measurements associated to a family $A_{k}^{(R)}$: $\rho(L,R)\rightarrow\sum_{k}A_{k}^{(R)}\rho(L,R)A_{k}^{\dag(R)}$. \end{itemize} In all these cases (which exhaust all legitimate quantum possibilities), due to the cyclic property of the trace, to the unitarity of $U^{(R)}$, and to the fact that the projection operators $P_{k}^{(R)}$ as well as the quantities $A_{k}^{\dag(R)}A_{k}^{(R)}$ sum to the identity operator (obviously the one referring to the Hilbert space of the systems at right), the reduced statistical operator $\rho_{red}^{(L)}$ does not change in any way whatsoever as a consequence of the action at right. In brief, for investigating the physics at left one can completely ignore possible evolutions or measurements of any kind done at right. Obviously the same does not hold if one performs a selective measurement at right. But in this case the changes at left induced by the measurement depend on the outcome which one gets, so that, to take advantage of the change, the receiver at left must be informed of the outcome at right, and this requires a luminal communication. In accordance with these remarks, sentences like those we have mentioned above and appearing in Ref.[8] must be made much more precise. If at right one performs a measurement identifying the occupation numbers of the various states, one has to describe it appropriately, taking into account all possible outcomes. Concentrating attention on a specific outcome means actually considering a selective measurement, an inappropriate procedure, as just discussed. Concluding this part: to compare the situation at left in the cases in which coherent states are or are not injected at right, one can plainly work with the evolved states (4) and (5). The fundamental question concerning the possibility of superluminal communication then becomes: does there exist an observable for the particles at left (i.e. involving modes ${\bf a1}$ and ${\bf b1}$) which has a different mean value or spread or probability for individual outcomes when the state is that of Eq.~(4) or that of Eq.~(5)?
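For readers who prefer a concrete check, the following small numerical sketch (ours; the dimensions and the state are arbitrary choices) illustrates the general statement above: the reduced statistical operator on the left is unchanged by a unitary, or by a non-selective projective measurement, performed on the right.
\begin{verbatim}
# Illustration: the left reduced state ignores unitaries and non-selective
# measurements applied to the right subsystem.
import numpy as np

rng = np.random.default_rng(0)
dL, dR = 3, 4
psi = rng.normal(size=dL * dR) + 1j * rng.normal(size=dL * dR)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

def reduced_left(r):
    # partial trace over the right factor
    return np.trace(r.reshape(dL, dR, dL, dR), axis1=1, axis2=3)

# random unitary acting only on the right (QR of a complex Gaussian matrix)
V, _ = np.linalg.qr(rng.normal(size=(dR, dR)) + 1j * rng.normal(size=(dR, dR)))
U = np.kron(np.eye(dL), V)
print(np.allclose(reduced_left(rho), reduced_left(U @ rho @ U.conj().T)))   # True

# non-selective projective measurement on the right, computational basis
rho_meas = np.zeros_like(rho)
for k in range(dR):
    Pk = np.kron(np.eye(dL), np.outer(np.eye(dR)[k], np.eye(dR)[k]))
    rho_meas += Pk @ rho @ Pk
print(np.allclose(reduced_left(rho), reduced_left(rho_meas)))               # True
\end{verbatim}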
\section{Proof that no effect is induced at left} In accordance with the previous analysis, to answer the just raised question we consider the most general self-adjoint operator of the Hilbert space of the modes at left which we will simply denote as $h(a_{1},a_{1}^{\dag},b_{1},b_{1}^{\dag})$, and we will evaluate its mean value in the two states (4) and (5). In the case of state (5), $|\psi_{fin}\rangle$ we have: \begin{eqnarray} & &\langle \psi_{in} |U^{\dag}h(a_{1},a_{1}^{\dag},b_{1},b_{1}^{\dag})U|\psi_{in}\rangle \equiv \nonumber \\ & &\frac{1}{4}\langle 0|D_{b3}^{\dag}(t\alpha) D_{b2}^{\dag}(-r\alpha)D_{a3}^{\dag}(t\alpha)D_{a2}^{\dag}(-r\alpha)[(a_{1}+b_{1})(t a_{2}+r a_{3})+e^{i\phi}(-a_{1}+b_{1})(t b_{2}+r b_{3})] h(a_{1},a_{1}^{\dag},b_{1},b_{1}^{\dag})\nonumber \\ & & [(a_{1}^{\dag}+b_{1}^{\dag})(t a_{2}^{\dag}+r a_{3}^{\dag})+e^{i\phi}(-a_{1}^{\dag}+b_{1}^{\dag})(t b_{2}^{\dag}+r b_{3}^{\dag})]D_{a2}(-r\alpha)D_{a3}(t\alpha)D_{b2}(-r\alpha)D_{b3}(t\alpha)|0\rangle. \end{eqnarray} One has now to take into account that the vacuum is the product of the vacua for all modes, $|0\rangle=|0\rangle_{1}|0\rangle_{2}|0\rangle_{3}$. The previous equation becomes: \begin{eqnarray} & &\langle \psi_{in} |U^{\dag}h(a_{1},a_{1}^{\dag},b_{1},b_{1}^{\dag})U|\psi_{in}\rangle \equiv \nonumber \\ & &\frac{1}{4}[ _{2}\langle0|_{3}\langle 0|D_{b3}^{\dag}(t\alpha) D_{b2}^{\dag}(-r\alpha)D_{a3}^{\dag}(t\alpha)D_{a2}^{\dag}(-r\alpha)(t a_{2}+r a_{3})(t a_{2}^{\dag}+r a_{3}^{\dag}) \nonumber \\ & &D_{a2}(-r\alpha)D_{a3}(t\alpha)D_{b2}(-r\alpha)D_{b3}(t\alpha)|0\rangle_{2}|0\rangle_{3}]\cdot _{1} \langle0|(a_{1}+b_{1})h(a_{1},a_{1}^{\dag},b_{1},b_{1}^{\dag})(a_{1}^{\dag}+b_{1}^{\dag})|0\rangle_{1}+\nonumber \\ & &\frac{1}{4}e^{i\phi}[ _{2}\langle0|_{3}\langle 0|D_{b3}^{\dag}(t\alpha) D_{b2}^{\dag}(-r\alpha)D_{a3}^{\dag}(t\alpha)D_{a2}^{\dag}(-r\alpha)(t a_{2}+r a_{3})(t b_{2}^{\dag}+r b_{3}^{\dag}) \nonumber \\ & &D_{a2}(-r\alpha)D_{a3}(t\alpha)D_{b2}(-r\alpha)D_{b3}(t\alpha)|0\rangle_{2}|0\rangle_{3}]\cdot _{1} \langle0|(a_{1}+b_{1})h(a_{1},a_{1}^{\dag},b_{1},b_{1}^{\dag})(-a_{1}^{\dag}+b_{1}^{\dag})|0\rangle_{1}+\nonumber \\ & &\frac{1}{4}e^{-i\phi}[ _{2}\langle0|_{3}\langle 0|D_{b3}^{\dag}(t\alpha) D_{b2}^{\dag}(-r\alpha)D_{a3}^{\dag}(t\alpha)D_{a2}^{\dag}(-r\alpha)(t b_{2}+r b_{3})(t a_{2}^{\dag}+r a_{3}^{\dag}) \nonumber \\ & &D_{a2}(-r\alpha)D_{a3}(t\alpha)D_{b2}(-r\alpha)D_{b3}(t\alpha)|0\rangle_{2}|0\rangle_{3}]\cdot _{1} \langle0|(-a_{1}+b_{1})h(a_{1},a_{1}^{\dag},b_{1},b_{1}^{\dag})(-a_{1}^{\dag}+b_{1}^{\dag})|0\rangle_{1}+\nonumber \\ & &\frac{1}{4}[ _{2}\langle0|_{3}\langle 0|D_{b3}^{\dag}(t\alpha) D_{b2}^{\dag}(-r\alpha)D_{a3}^{\dag}(t\alpha)D_{a2}^{\dag}(-r\alpha)(t b_{2}+r b_{3})(t b_{2}^{\dag}+r b_{3}^{\dag}) \nonumber \\ & &D_{a2}(-r\alpha)D_{a3}(t\alpha)D_{b2}(-r\alpha)D_{b3}(t\alpha)|0\rangle_{2}|0\rangle_{3}]\cdot _{1} \langle0|(-a_{1}+b_{1})h(a_{1},a_{1}^{\dag},b_{1},b_{1}^{\dag})(-a_{1}^{\dag}+b_{1}^{\dag})|0\rangle_{1}. \end{eqnarray} Let us take now into consideration the expression in square brackets of the first term (the one which contains the coherent states and the vacua for modes 2 and 3). If one keeps in mind that the coherent states are eigenstates of the annihilation operators one can apply the four terms arising from the expression $(t a_{2}+r a_{3})(t a_{2}^{\dag}+r a_{3}^{\dag})$ to the coherent states. Obviously, before doing this one has to commute the operators $a_{2}$ and $a_{2}^{\dag}$ in the expression $ta_{2}a_{2}^{\dag}$ and the similar one for mode 3. 
In so doing, the expression $(t a_{2}+r a_{3})(t a_{2}^{\dag}+r a_{3}^{\dag})$ reduces to 1. For the same reason and with the same trick one shows that the expression $(t b_{2}+r b_{3})(t b_{2}^{\dag}+r b_{3}^{\dag})$ in the last term can also be replaced by 1. The same calculation shows that the corresponding expressions in the second and third terms reduce to 0. The final step consists therefore in evaluating, for the first and fourth terms, the expressions: \begin{equation} [ _{2}\langle0|_{3}\langle 0|D_{b3}^{\dag}(t\alpha) D_{b2}^{\dag}(-r\alpha)D_{a3}^{\dag}(t\alpha)D_{a2}^{\dag}(-r\alpha) D_{a2}(-r\alpha)D_{a3}(t\alpha)D_{b2}(-r\alpha)D_{b3}(t\alpha)|0\rangle_{2}|0\rangle_{3}]. \end{equation} Taking into account that $D_{a}^{\dag}(\alpha)D_{a}(\alpha)=I$, one gets the final expression for the expectation value of the arbitrary hermitian operator $h(a_{1},a_{1}^{\dag},b_{1},b_{1}^{\dag})$ when one starts with the initial state containing the coherent states: \begin{eqnarray} & &\langle \psi_{in} |U^{\dag}h(a_{1},a_{1}^{\dag},b_{1},b_{1}^{\dag})U|\psi_{in}\rangle =\nonumber \\ & &\frac{1}{4}[_{1} \langle0|(a_{1}+b_{1})h(a_{1},a_{1}^{\dag},b_{1},b_{1}^{\dag})(a_{1}^{\dag}+b_{1}^{\dag})|0\rangle_{1}+\nonumber \\ & & _{1} \langle0|(-a_{1}+b_{1})h(a_{1},a_{1}^{\dag},b_{1},b_{1}^{\dag})(-a_{1}^{\dag}+b_{1}^{\dag})|0\rangle_{1}]. \end{eqnarray} It is now an easy game to repeat the calculation for the much simpler case in which the initial state is $|\tilde{\psi}_{in}\rangle$. One simply gets precisely the expression (7) with all the coherent states missing. Taking into account that the operators of modes 2 and 3 now act on the vacuum state, one immediately realizes that one gets once more the result (9). \section{Conclusion} We have proved, with complete rigour, that the expectation value of any conceivable self-adjoint operator of the space of the modes 1 at left remains the same when one injects or does not inject the coherent states at right. Note that the result is completely independent of the choice of the phase $\phi$ characterizing the two terms of the entangled initial state and of the parameters $t$ and $r$ of the beam splitters, and it does not involve any approximate procedure. Accordingly, we have shown once more that devices of the type suggested by Kalamidas do not allow superluminal communication. A last remark. During the lively debate which took place recently in connection with Kalamidas' proposal, other authors have reached the same conclusion. However, the reasons for claiming this were not always crystal clear, and much of the discussion had to do with the approximations introduced by Kalamidas. For these reasons we have decided to be extremely general, and we have been pedantic in discussing even well known facts and properties of an ensemble of photons. Our aim has been to refute, in a completely clean and logically consistent way, the idea that the device allows faster than light signaling. \end{document}
\begin{document}
\begin{title} {\LARGE {\bf On local analysis}} \end{title} \author{Felipe Cucker\thanks{Partially supported by a GRF grant from the Research Grants Council of the Hong Kong SAR (project number CityU 11302418).} \\ Dept. of Mathematics\\ City University of Hong Kong\\ {\tt [email protected]} \and Teresa Krick\thanks{Corresponding author.
Partially supported by grant CONICET-PIP2014-2016-112 20130100073CO.}\\ Departamento de Matem\'atica \& IMAS\\ Univ. de Buenos Aires \&\ CONICET\\ ARGENTINA\\ {\tt [email protected]} } \date{} \maketitle \begin{quote} {\small {\bf Abstract.} We extend to Gaussian distributions a result providing smoothed analysis estimates for condition numbers given as relativized distances to ill-posedness. We also introduce a notion of {\em local analysis} meant to capture the behavior of these condition numbers around a point.\\ 2010 Mathematics Subject Classification: Primary 65Y20, Secondary 65F35.\\ Keywords: Conic condition number. Smoothed analysis. Local analysis. }\end{quote} \section{Introduction} In the 1990s D.~Spielman and S.H.~Teng introduced the notion of {\em smoothed analysis}, in an attempt to give a more realistic analysis of the practical performance of an algorithm than those obtained through the use of worst-case or average-case analyses. In a nutshell, this new paradigm in probabilistic analysis interpolates between worst-case and average-case by considering the worst-case (over the data) of the average value (over possible random perturbations) of the analyzed quantity. See, for instance, \cite{SpiTen2009} for an overview. An example of this analysis, applied to the quantity $\ln\kappa(A)$, where $A$ is a square matrix and $\kappa(A):=\|A\|\,\|A^{-1}\|$, was provided by M.~Wschebor in~\cite{Wsch:04}. Wschebor showed that \begin{equation}\label{eq:Mario} \max_{\overline{A}\in\mathbb{S}(\mathbb{R}^{n\times n})} \mathop{\mathbb E}_{A\sim N(\overline{A},\sigma^2{\rm Id})}\ln\kappa(A) \leq \ln \Big(\frac{n}{\min\{\sigma,1\}}\Big) +\mathcal{O}(1), \end{equation} where here, and in what follows, $x\sim N(\overline{x},\sigma^2{\rm Id})$ indicates that $x$ is drawn from an isotropic Gaussian distribution centered at $\overline{x}$ with covariance matrix $\sigma^2{\rm Id}$. The behavior of the bound $H(n,\sigma)$ in the right-hand side of~\eqref{eq:Mario} shows two expected properties of a smoothed analysis: \begin{description} \item[(SA1)] When $\sigma\to0$, $H(n,\sigma)$ tends to its worst-case value (there are no random perturbations of the input in this case). \item[(SA2)] When $\sigma\to\infty$, $H(n,\sigma)$ tends to the average value of the analyzed quantity (the random perturbation is over all the input data in this case). \end{description} Indeed, the convergence of $H(n,\sigma)$ to infinity when $\sigma\to0$ is clear, and with it (SA1). And a result of A.~Edelman~\cite{Edelman88} proves that $\mathop{\mathbb E}_{A\sim N(0,\sigma^2{\rm Id})}\ln\kappa(A)=\ln n +\mathcal{O}(1)$, thus showing (SA2). The main agenda of this paper is to introduce the notion of {\em local analysis}, which aims to study, locally at a base point $\overline{x}$, the average value over possible random perturbations of the analyzed quantity, without then taking the worst case over all input data.
The benefit of such analysis is that it provides information depending directly on the base point instead of assuming a worst case, as in the smoothed analysis. We illustrate this notion by developing it for a {\em conic condition number}. This is a condition number satisfying a Condition Number Theorem. We next describe more precisely this notion and its context. In~1936 Eckart and Young~\cite{EckYou} proved that for a square matrix $A$, $\kappa(A)=\|A\|/d(A,\Sigma)$ where $\Sigma$ is the set of non-invertible matrices and $d$ denotes distance. This result came to be known as the {\em Condition Number Theorem}, even though it was proved more than ten years before the introduction of condition numbers by Turing~\cite{Turing48} and von Neumann and Goldstine~\cite{vNGo47}. In~1987 J.~Demmel observed (and proved) that similar Condition Number Theorems hold true for the condition numbers of various problems~\cite{Demmel87}. More precisely, he showed that these condition numbers were either equal to or closely bounded by the (normalized) inverse of the distance to ill-posedness. That is, that for an input $x$ of the problem at hand, the condition number of $x$ for that problem is either equal to or closely bounded by \begin{equation}\label{eq:CCN} \mathop{\mathscr C}(x)=\frac{\|x\|}{d(x,\Sigma)}, \end{equation} where $\Sigma\ne \{0\}$ is an algebraic cone of {\em ill-posed inputs}. One year later, Demmel~\cite{Demmel88} derived general average analysis bounds for those (conic) condition numbers. These bounds depend only on the dimension $N+1$ of the ambient space, the codimension of $\Sigma$, and its degree. He carried out this idea for the complex case and stated it for the real case (requiring $\Sigma$ to be a complete intersection) based on an unpublished (and not findable anywhere) result by Ocneanu. The underlying probability distribution is the isotropic Gaussian on $\mathbb{R}^{N+1}$, but it is easy to observe that the bounds hold as well for the uniform distribution on the unit sphere $\mathbb{S}^N$ (or, equivalently, on any half-sphere, due to the equality $\mathop{\mathscr C}(-x)=\mathop{\mathscr C}(x)$). In~\cite{BuCuLo:07} Demmel's idea was extended to perform a smoothed analysis of the conic condition number $\mathop{\mathscr C}(x)$ in the case that $\Sigma$ is the zero set of a single real homogeneous polynomial $F$ in $N+1$ variables. For this analysis one considers the centers $\overline{x}$ of the distributions in $\mathbb{S}^N$ (as in~\eqref{eq:Mario}) and there are two natural choices for the distribution itself: a Gaussian supported on $\mathbb{R}^{N+1}$ or a uniform distribution on a spherical cap in $\mathbb{S}^N$. The uniform case is studied in \cite{BuCuLo:07}, where the following bound is obtained for $\theta\in[0,\pi/2]$: \begin{equation}\label{eq:smooth-u} \max_{\overline{x}\in\mathbb{S}^N}\mathop{\mathbb E}_{x\in B_{\mathbb{S}}(\overline{x}, \theta) } \ln \mathop{\mathscr C}(x) \;\le\; \ln \frac{Nd}{\sin\theta} + 2(\ln 2 + 1) \end{equation} where $d$ is the degree of $F$ and $B_{\mathbb{S}}(\overline{x},\theta)$ is the spherical cap of radius $\theta$ centered at $\overline{x}$, which we endow with the uniform distribution. This bound $H(N,d,\theta)$ recovers an average analysis in the particular case that the spherical cap is a half-sphere.
That is, \begin{description} \item[(SA2')] $H(N,d,\pi/2)=\ln (Nd)+\mathcal{O}(1)$ is the average value of $\ln \mathop{\mathscr C}(x)$ for $x\in \mathbb{S}^N$, see~\cite{Demmel88}. \end{description} A smoothed analysis of the conic condition number $\mathop{\mathscr C}(x)$ in the Gaussian case $N(\overline x, \sigma^2{\rm Id})$ was still lacking, and it is one of the results we present in this paper, since it is strongly linked with our local analysis as we will see below. Theorem \ref{th:smoothedGaussian} shows that $$ \max_{\overline x\in \mathbb{S}^N} \mathop{\mathbb E}_{x\sim N(\overline x,\sigma^2{\rm Id})}\ln \mathop{\mathscr C}(x) \, \le \, H(N,d,\sigma)$$ where $H(N,d,\sigma)$ is an explicit bound that satisfies {\bf (SA1)} and {\bf (SA2)}. That is, $$\lim_{\sigma\to 0} H(N,d,\sigma)=\infty \quad \mbox{and} \quad \lim_{\sigma\to \infty} H(N,d,\sigma)=\ln ({Nd}) + \mathcal{O}(1).$$ With respect to local analysis, the gist is to obtain bounds for the quantities $$ \mathop{\mathbb E}_{x\sim {\mathcal D}(\overline{x})}\ln\mathop{\mathscr C}(x) $$ where $\overline{x}\in\mathbb{S}^N$ and ${\mathcal D}(\overline{x})$ is either the uniform distribution on the spherical cap $B_{\mathbb{S}}(\overline{x},\theta)$ or the Gaussian $N(\overline{x},\sigma^2{\rm Id})$. These bounds will be expressions $H(N,d,\nu,\mathop{\mathscr C}(\overline{x}))$ where $\nu$ is either $\theta$ or $\sigma$ depending on the underlying distribution, which should coincide with smoothed analysis bounds when $\mathop{\mathscr C}(\overline{x})=\infty$. More precisely, if we denote by $H_\infty(N,d,\nu)$ the result of replacing $\mathop{\mathscr C}(\overline{x})$ by $\infty$ in $H(N,d,\nu,\mathop{\mathscr C}(\overline{x}))$, then we want the following: \begin{description} \item[(LA0)] $H_\infty(N,d,\nu)$ has the same behavior as the smoothed analysis bound $H(N,d,\nu)$. \end{description} Furthermore, when $\mathop{\mathscr C}(\overline{x})<\infty$ we seek the following limiting behavior: \begin{description} \item[(LA1)] $\displaystyle\lim_{\nu\to 0} H(N,d,\nu,\mathop{\mathscr C}(\overline{x})) = \ln(\mathop{\mathscr C}(\overline{x}))+ \mathcal{O}(1)$, the local complexity at $\overline x$. \item[(LA2)] $\displaystyle\lim_{\sigma\to \infty} H(N,d,\sigma,\mathop{\mathscr C}(\overline{x})) = \ln(Nd)+ \mathcal{O}(1)$ in the Gaussian case, the average complexity. \item[(LA2')] $H(N,d,\pi/2,\mathop{\mathscr C}(\overline{x})) =\ln(Nd)+ \mathcal{O}(1)$ in the uniform case, the average complexity. \end{description} Indeed, we show that this is the case in Theorem~\ref{thm:instance} (uniform case) and Theorem~\ref{thm:main-local} (Gaussian case). \bigskip \noindent {\bf Acknowledgments.} We are grateful to Pierre Lairez for many useful discussions. In particular, for pointing out to us an argument in Proposition 4.2.
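Before fixing notation, we include an informal numerical illustration (ours; not part of the mathematical development) of the limiting behaviours {\bf (LA1)} and {\bf (LA2)} for the classical matrix condition number $\kappa(A)=\|A\|\,\|A^{-1}\|$, which by the Condition Number Theorem is a conic condition number relative to the set of singular matrices. For a well-conditioned centre $\overline{A}$, the Monte Carlo average of $\ln\kappa$ stays close to $\ln\kappa(\overline{A})$ for small $\sigma$ and approaches the average-case value for large $\sigma$:
\begin{verbatim}
# Informal Monte Carlo illustration of (LA1)-(LA2); the choices of n, sigma and
# sample size are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
n = 10
Abar = np.eye(n) / np.linalg.norm(np.eye(n))     # normalised centre, kappa(Abar) = 1
for sigma in [1e-3, 1e-1, 1.0, 10.0]:
    vals = [np.log(np.linalg.cond(Abar + sigma * rng.normal(size=(n, n)), 2))
            for _ in range(2000)]
    print(sigma, np.mean(vals))   # ~ ln kappa(Abar) = 0 for small sigma,
                                  # grows towards the average case for large sigma
\end{verbatim}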
\section{Notations and preliminaries} In all that follows we consider the space $\mathbb{R}^{N+1}$ endowed with the standard inner product $\langle~ ,~\rangle $ and its induced norm $\|~\|$. Within this space we have the unit sphere $\mathbb{S}^N=\{x\in \mathbb{R}^{N+1}:\, \|x\|=1\}$, and for $\overline x\in \mathbb{S}^N$ we denote by $B(\overline x,r)=\{x\in\mathbb{R}^{N+1}:\, \|x-\overline x\|\leq r\}$ the closed ball centered at $\overline x\in\mathbb{R}^{N+1}$ with radius $r\ge 0$, and by $$ B_{\mathbb{S}}(\overline x,\theta)=\{x\in\mathbb{S}^N:\, 0\le \sphericalangle(x,\overline x)\leq \theta\} =\{x\in \mathbb{S}^{N}\,:\,\langle x,\overline x\rangle \ge \cos\theta\} $$ the spherical cap in $\mathbb{S}^N$ centered at $\overline x\in \mathbb{S}^N$ with radius $0\le \theta\le \pi$, that is, the closed ball of radius $\theta$ around $\overline x$ in $\mathbb{S}^N$ with respect to the Riemannian distance in $\mathbb{S}^N$. We will also refer to the sine distance $d_{\sin}$ in $\mathbb{R}^{N+1}\setminus\{0\}$ given by $d_{\sin}(x,\overline x):=\sin(\sphericalangle(x,\overline x))$. Let $B_{\sin}(\overline{x},\rho):=\{x\in \mathbb{S}^N: \, d_{\sin}(x,\overline{x})\le \rho\}$ denote the closed ball of radius $\rho$ with respect to $d_{\sin}$ around $\overline{x} \in \mathbb{S}^N$. This is the union of $B_{\mathbb{S}}(\overline{x},\theta)$ with $B_{\mathbb{S}}(-\overline{x},\theta)$ where $\theta\in[0,\pi/2]$ is such that $\rho=\sin\theta$. We will denote by $\mathcal{O}_N=\mathsf{vol}(\mathbb{S}^N)$ the volume of $\mathbb{S}^N$. We recall (see~\cite[Prop.~2.19(a)]{Condition}) that \begin{equation}\label{eq:volSN} \mathcal{O}_N=\frac{2\pi^{\frac{N+1}{2}}}{\Gamma(\frac{N+1}{2})} \end{equation} as well as~\cite[Cor.~2.20]{Condition} \begin{equation}\label{eq:volB} \mathsf{vol}(B(0,1))=\frac{\mathcal{O}_N}{N+1} \end{equation} and, for $x\in\mathbb{S}^N$ and $\theta\in[0,\frac{\pi}{2}]$, the bound (see~\cite[Lem.~2.34]{Condition}) \begin{equation}\label{eq:volBS} \frac{\mathcal{O}_N}{\sqrt{2\pi (N+1)}}(\sin \theta)^N \le \mathsf{vol}(B_{\mathbb{S}}(x,\theta))\leq\frac{\mathcal{O}_N}{2} (\sin\theta)^N. \end{equation} The main object in this paper is a {\em conic condition number} on $\mathbb{R}^{N+1}$, i.e.\ a function given by $$ \mathop{\mathscr C}:\mathbb{R}^{N+1}\to [1,\infty],\quad \mathop{\mathscr C}(x)=\frac{\|x\|}{d(x,\Sigma)}, $$ where $\Sigma\ne \{0\}$ is the set of {\em ill-posed inputs} in $\mathbb{R}^{N+1}$, which we assume closed under scalar multiplication. We note that $\mathop{\mathscr C}(x)\ge 1$ for all $x$ since $0\in \Sigma$. As $\mathop{\mathscr C}$ is scale invariant we may restrict to data $x$ lying in $\mathbb{S}^N$, where $\mathop{\mathscr C}$ can also be expressed as $$ \mathop{\mathscr C}(x)=\frac{1}{d_{\sin}(x,\Sigma\cap \mathbb{S}^N)}. $$
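As a concrete instance of the conic formula above (added for illustration; it is not used in the sequel), the Condition Number Theorem of Eckart and Young can be checked numerically: for an invertible matrix $A$, the distance in spectral norm to the set $\Sigma$ of singular matrices equals the smallest singular value of $A$, so that $\|A\|/d(A,\Sigma)=\kappa(A)$.
\begin{verbatim}
# Numerical check of kappa(A) = ||A|| / d(A, Sigma) for the spectral norm.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(6, 6))
U, s, Vt = np.linalg.svd(A)
B = A - s[-1] * np.outer(U[:, -1], Vt[-1])       # nearest singular matrix
print(np.linalg.svd(B, compute_uv=False)[-1] < 1e-12)                    # B is singular
print(np.isclose(np.linalg.norm(A - B, 2), s[-1]))                       # d(A, Sigma) = s_min
print(np.isclose(np.linalg.cond(A, 2), np.linalg.norm(A, 2) / s[-1]))    # kappa = ||A||/d
\end{verbatim}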
\section{The uniform case}\label{sec:uniform} We endow $B_{\sin}(\overline{x},\rho)$ with the uniform probability measure. A smoothed analysis for this measure is given in~\cite[Th. 21.1]{Condition}. Assume that $\Sigma$ is contained in a real algebraic hypersurface, given as the zero set of a homogeneous polynomial of degree~$d$. Then, for all $\theta\in[0,\frac{\pi}2]$ and $\rho:=\sin \theta$, we have \begin{equation}\label{eq:smooth} \mathop{\mathbb E}_{x\in B_{\mathbb{S}}(\overline{x}, \theta) } \ln \mathop{\mathscr C}(x) = \mathop{\mathbb E}_{x\in B_{\sin}(\overline{x}, \rho) } \ln \mathop{\mathscr C}(x) \;\le\; \ln \frac{Nd}{\sin\theta} + K \end{equation} and \begin{equation}\label{eq:smooth-pi} \mathop{\mathbb E}_{x\in {\mathbb{S}}^N } \ln \mathop{\mathscr C}(x) \;\le\; \ln (Nd) + K, \end{equation} where $K=2(\ln 2 + 1)$. Here $\ln$ denotes the natural logarithm. We observe that the equality above is due to the fact that $\mathop{\mathscr C}(x)=\mathop{\mathscr C}(-x)$ for all $x\in\mathbb{S}^N$ and that $\mathsf{vol}\, B_{\sin}(\overline{x}, \rho) = \mathsf{vol}\, B_{\mathbb{S}}(\overline{x}, \theta)+ \mathsf{vol}\, B_{\mathbb{S}}(-\overline{x}, \theta)$. The same observation applies to the following result. \begin{theorem}\label{thm:instance} Let $\mathop{\mathscr C}$ be a conic condition number on $\mathbb{R}^{N+1}$ with set of ill-posed inputs $\Sigma$. Assume that $\Sigma$ is contained in a real algebraic hypersurface, given as the zero set of a homogeneous polynomial of degree $d$. Let $\overline{x}\in \mathbb{S}^N$ and $0\le \theta\le \pi$. Then, for $\rho :=\sin\theta$, $$ \mathop{\mathbb E}_{x\in B_{\mathbb{S}}(\overline{x}, \theta)}\ln \mathop{\mathscr C}(x)\le \left\{\begin{array}{ll} \ln \dfrac{Nd}{\rho + \frac{1-\rho}{\mathop{\mathscr C}(\overline{x})}} +\ln 12 + 2& \mbox{if $\rho>\dfrac{1}{2\mathop{\mathscr C}(\overline{x})+1}$}\\ \ln \dfrac{1}{\rho + \frac{1-\rho}{\mathop{\mathscr C}(\overline{x})}} +\ln 4& \mbox{if $\rho\leq\dfrac{1}{2\mathop{\mathscr C}(\overline{x})+1}$.} \end{array}\right. $$ In particular, there is a uniform explicit bound $H(N,d,\theta, \mathop{\mathscr C}(\overline x))$ --defined in \eqref{eq:UniformBound} below-- such that $$ \mathop{\mathbb E}_{x\in B_{\mathbb{S}}(\overline{x}, \theta)}\ln \mathop{\mathscr C}(x)\le H(N,d,\theta,\mathop{\mathscr C}(\overline x) ).$$ This bound satisfies {\bf (LA0)}, since $H_\infty(N,d,\theta)= \ln \frac{Nd}{\sin\theta} + \mathcal{O}(1)$, as $H(N,d,\theta)$ in \eqref{eq:smooth-u}, as well as {\bf (LA1)} and {\bf (LA2')}. \end{theorem} \proof Assume first that $\dfrac{1}{2\mathop{\mathscr C}(\overline{x})+1}\le \rho\le 1$.
In this case, we have $$ \rho (2\mathop{\mathscr C}(\overline{x})+1)\ge 1 \iff 2\rho \, \mathop{\mathscr C}(\overline{x})+\rho \ge 1 \iff 2\rho\,\mathop{\mathscr C}(\overline{x}) \ge 1-\rho\iff 2\rho \ge \frac{ 1-\rho }{ \mathop{\mathscr C}(\overline{x})} $$ and we can decompose $$ \rho = \frac{1}{3}\rho + \frac{1}{3}(2\rho) \ge \frac{1}{3}\Big(\rho + \frac{1-\rho}{\mathop{\mathscr C}(\overline{x})}\Big), \quad \mbox{i.e.} \quad \frac{1}{\rho}\le \frac{3}{\rho + \frac{1-\rho}{\mathop{\mathscr C}(\overline{x})}}. $$ Therefore, by~\eqref{eq:smooth}, \begin{align*} \mathop{\mathbb E}_{x\in B_{\mathbb{S}}(\overline{x}, \theta) } \ln \mathop{\mathscr C}(x)& \le \ln \frac{Nd}{\rho} + \ln 4 + 2\\ & \le \ln \frac{3Nd}{\rho + \frac{1-\rho}{\mathop{\mathscr C}(\overline{x})} } + \ln 4 +2 = \ln \frac{Nd}{\rho + \frac{1-\rho}{\mathop{\mathscr C}(\overline{x})} } + \ln 12 +2. \end{align*} We next assume $0\le \rho < \dfrac{1}{2\mathop{\mathscr C}(\overline{x})+1}$. In this case, $$ \frac{1}{2\mathop{\mathscr C}(\overline{x})+1} = \frac{1}{4}\Big(\frac{1}{2\mathop{\mathscr C}(\overline{x})+1}+ \frac{3}{2\mathop{\mathscr C}(\overline{x})+1}\Big) > \frac{1}{4}\Big(\rho + \frac{1-\rho}{\mathop{\mathscr C}(\overline{x})}\Big) $$ since $\dfrac{3}{2 \mathop{\mathscr C}(\overline{x})+1}> \dfrac{1}{\mathop{\mathscr C}(\overline{x})}>\dfrac{1-\rho}{\mathop{\mathscr C}(\overline{x})}$. Equivalently, $$ 2 \mathop{\mathscr C}(\overline{x})+1 < \frac{4}{\rho +\frac{1-\rho}{\mathop{\mathscr C}(\overline{x})}}. $$ We also use here that for all $x\in B_{\mathbb{S}}(\overline{x},\theta)$, \begin{align}\label{eq:small_rho} \frac{1}{\mathop{\mathscr C}(x)}& =\ d_{\sin}(x,\Sigma)\ \ge \ d_{\sin}(\overline{x},\Sigma)-d_{\sin}(x,\overline{x})\ \ge \ \frac{1}{\mathop{\mathscr C}(\overline{x})} -\rho \nonumber \\ & \ \ge \frac{1}{\mathop{\mathscr C}(\overline{x})+\frac{1}{2}} - \frac{1}{2\mathop{\mathscr C}(\overline{x})+1}\ =\ \frac{1}{2\mathop{\mathscr C}(\overline{x})+1}, \end{align} and therefore $$ \mathop{\mathscr C}(x)\le {2\mathop{\mathscr C}(\overline{x})}+1 < \frac{4}{\rho + \frac{1-\rho}{\mathop{\mathscr C}(\overline{x})}} $$ which implies \begin{align*} \ln \mathop{\mathscr C}(x) \le \ln \frac{1}{\rho + \frac{1-\rho}{\mathop{\mathscr C}(\overline{x})}} +\ln 4. \end{align*} This shows the first statement. We now derive the expression of a bound $H(N,d,\theta,\mathop{\mathscr C}(\overline x))$.\\ Let $\varphi:[0,1]\to\mathbb{R}$ be the function defined by $$ \rho\mapsto 2(Nd-1) \rho^{\log_{_{\!\!\frac{1}{2\mathop{\mathscr C}(\overline{x})}}}\frac12} +1 $$ where the exponent of $\rho$ is the logarithm in base $\dfrac{1}{2\mathop{\mathscr C}(\overline{x}) }$ of $\dfrac{1}{2}$, which, by continuity, we take to be 0 when $\mathop{\mathscr C}(\overline{x})=\infty$. We note that $\varphi$ is concave, monotonically increasing, and satisfies $\varphi(0)=1$, $\varphi(1)=2Nd-1$, and, when $\mathop{\mathscr C}(\overline{x})=\infty$, $\varphi(\rho)=2Nd-1$.
Moreover, by monotonicity, $$ \varphiphi\Big(\frac{1}{2\mathop{\mathscr C}(\omegaverline{x})+1}\Big) \lambdaeq \varphiphi\Big(\frac{1}{2\mathop{\mathscr C}(\omegaverline{x})}\Big) = 2(Nd-1)\frac12 +1= Nd. $$ This implies, since $$\lambdan \frac{1}{\rho + \frac{1-\rho}{\mathop{\mathscr C}(\omegaverline{x})}} +\lambdan 4 \lambdae \lambdan \frac{\varphiphi(\rho)} {\rho + \frac{1-\rho}{\mathop{\mathscr C}(\omegaverline{x})}} +\lambdan 12 +2\ \mbox{for} \ \ 0\lambdae \rho < \deltafrac{1}{2\mathop{\mathscr C}(\omegaverline{x})+1}$$ and using also concavity, $$\lambdan \frac{Nd}{\rho + \frac{1-\rho}{\mathop{\mathscr C}(\omegaverline{x})}} +\lambdan 12+2\lambdae \lambdan \frac{\varphiphi(\rho)} {\rho + \frac{1-\rho}{\mathop{\mathscr C}(\omegaverline{x})}} +\lambdan 12+2 \ \mbox{for} \ \ \deltafrac{1}{2\mathop{\mathscr C}(\omegaverline{x})+1}\lambdae \rho \lambdae 1,$$ that $$ \mathop{\mathbb E}_{x\in B_{\mathbb{S}}(\omegaverline{x}, \thetaheta) } \lambdan \mathop{\mathscr C}(x)\lambdae \lambdan \frac{\varphiphi(\rho)} {\rho + \frac{1-\rho}{\mathop{\mathscr C}(\omegaverline{x})}} +\lambdan 12 + 2. $$ That is, \betaegin{equation}\lambdaabel{eq:UniformBound}H(N,d,\thetaheta,\mathop{\mathscr C}(\omegaverline x))= \lambdan \frac{2(Nd-1) \rho^{\lambdaog_{_{\!\!\frac{1}{2\mathop{\mathscr C}(\omegaverline{x})}}}\frac12} +1} {\rho + \frac{1-\rho}{\mathop{\mathscr C}(\omegaverline{x})}} +\lambdan 12 + 2. \varphiepsilonnd{equation} Finally, it is trivial to verify, from the specific values taken by $\varphiphi$ mentioned previously, that $ H(N,d,\thetaheta,\mathop{\mathscr C}(\omegaverline x))$ satisfies {\betaf (LA0)}, {\betaf (LA1)} and {\betaf (LA2')}. \varphiepsilonproof \overline{\rm sign}\,maection{The Gaussian case}\lambdaabel{sec:gaussian} We keep the same conic condition number $\mathop{\mathscr C}$ but now consider a Gaussian measure $N(\omegaverline{x},\overline{\rm sign}\,maigma^2{\rm Id})$ in $\mathbb{R}^{N+1}$ centered at $\omegaverline{x}\in\mathbb{S}^N$ and with covariance matrix $\overline{\rm sign}\,maigma^2{\rm Id}$ for $0<\overline{\rm sign}\,maigma <\infty$, that is with density function given by $$ \frac{1}{(2\partiali\overline{\rm sign}\,maigma^2)^{\frac{N+1}{2}}}\varphiepsilonxp \Big(\frac{-\|x-\omegaverline x\|^2}{2\overline{\rm sign}\,maigma^2}\Big). $$ Since our local analysis will rely on a smoothed analysis in this case, which is not yet known, we begin by studying a general smoothed analysis for the Gaussian case. \overline{\rm sign}\,maubsection{Smoothed analysis} Let $\omegaverline x \in \mathbb{S}^N$. We recall that, for any $0\lambdae \thetaheta\lambdae \frac{\partiali}{2}$, $$ B_{\mathbb{S}}(\omegaverline{x},\thetaheta)= \{ x\in \mathbb{S}^N\,:\, 0\lambdae\overline{\rm sign}\,maphericalangle(x,\omegaverline x) < \thetaheta \} , $$ and in the particular case $\thetaheta =\frac{\partiali}{2}$ we denote $$ \mathbb{S}^N_+({\omegaverline x}):=B_{\mathbb{S}}\Big(\omegaverline x,\frac{\partiali}{2}\Big) = \Big\{x\in \mathbb{S}^N\,:\, 0\lambdae \overline{\rm sign}\,maphericalangle(x,\omegaverline x) < \frac{\partiali}{2}\Big\} =\betaig\{x\in \mathbb{S}^N\,:\, \lambdaangle x, \omegaverline x\rangle > 0\betaig\}, $$ the open half-sphere centered at $\omegaverline x$.\\ The main result of this section is the following smoothed analysis for the Gaussian distribution. \betaegin{theorem}\lambdaabel{th:smoothedGaussian} Let $\mathop{\mathscr C}$ be a conic condition number on $\mathbb{R}^{N+1}$ with set of ill-posed inputs $\Sigmama$. 
Assume that $\Sigmama$ is contained in a real algebraic hypersurface, given as the zero set of a homogeneous polynomial of degree $d$, and that $N\gammae 5$. Then, there exists an explicit bound $H(N,d,\overline{\rm sign}\,maigma)$ --defined in \varphiepsilonqref{SmoothGaussianBound}-- such that $$\max_{\omegaverline x\in \mathbb{S}^N} \mathop{\mathbb E}_{x\overline{\rm sign}\,maim N(\omegaverline{x}, \overline{\rm sign}\,maigma^2{\rm Id})}\lambdan \mathop{\mathscr C}(x) \lambdae H(N,d,\overline{\rm sign}\,maigma). $$ This bound satisfies \betaegin{description} \item[(SA1)] $\deltaisplaystyle \lambdaim_{\overline{\rm sign}\,maigma\thetao 0}H(N,d,\overline{\rm sign}\,maigma)=\infty$, the worst-case value. \item[(SA2)] $\deltaisplaystyle \lambdaim_{\overline{\rm sign}\,maigma\thetao \infty}H(N,d,\overline{\rm sign}\,maigma)=\lambdan(Nd)+2(\lambdan 2+1)$, the average value, in remarkable coincidence with~\varphiepsilonqref{eq:smooth-pi}. \varphiepsilonnd{description} \varphiepsilonnd{theorem} The following map plays a central role in all that follows: \betaegin{equation}\lambdaabel{eq:Psi} \mathbb Psi: \mathbb{R}^{N+1}\overline{\rm sign}\,maetminus \omegaverline{x}^\partialerp \thetao \mathbb{S}^N_+(\omegaverline{x}), \qquad x\mapsto \betaegin{cases} \|x\|^{-1}x& \mbox{if } \lambdaangle x,\omegaverline x\rangle > 0\\ -\|x\|^{-1}x& \mbox{otherwise.} \varphiepsilonnd{cases} \varphiepsilonnd{equation} The main stepping stone towards the proof of Theorem~\ref{th:smoothedGaussian} is the following. \betaegin{proposition}\lambdaabel{prop:tech} Let $\omegaverline x \in \mathbb{S}^N$. There exists a probability density $f: [0,\frac{\partiali}{2}]\thetao \mathbb{R}_{\gammae 0}$ of a random variable $\thetaheta \in [0,\frac{\partiali}{2}]$, associated to $\omegaverline x$, $\overline{\rm sign}\,maigma$ and $N$, such that for every measurable function $F:\mathbb{R}^{N+1}\thetao \mathbb{R}_{\gammae 0}$ satisfying $F(x)=F(\lambdaambda x)$ for all $\lambdaambda\in \mathbb{R}^\thetaimes$, one has $$ \mathop{\mathbb E}_{y\overline{\rm sign}\,maim N(\omegaverline{x},\overline{\rm sign}\,maigma^2{\rm Id})} F(y) = (1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}})\mathop{\mathbb E}_{\thetaheta \overline{\rm sign}\,maim f} \Big(\mathop{\mathbb E}_{x\in B_{\mathbb{S}}(\omegaverline{x},\thetaheta)} F(x)\Big) \ + \ e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}}\mathop{\mathbb E}_{x\in \mathbb{S}^N_+(\omegaverline x)} F(x). $$ \varphiepsilonnd{proposition} We begin by proving the following lemma. \betaegin{lemma}\lambdaabel{lem:tech} For any measurable function $F:\mathbb{R}^{N+1}\thetao \mathbb{R}_+$ satisfying $F(\lambdaambda y)=F(y), \ \forall \lambdaambda \in \mathbb{R}^\thetaimes$, one has $$ \mathop{\mathbb E}_{y\overline{\rm sign}\,maim N(\omegaverline{x}, \overline{\rm sign}\,maigma^2{\rm Id})} F(y)= \int_{\mathbb{S}^N_+(\omegaverline x)} G_{\omegaverline x}(\overline{\rm sign}\,maphericalangle(x,\omegaverline x)) F(x) \mathrm{d} x $$ where $G_{\omegaverline x}: [0,\frac{\partiali}{2}] \thetao \mathbb{R}_{>0}$ is a decreasing function of $\alphalpha$ defined by \betaegin{align*} G_{\omegaverline x}(\alphalpha)&=\frac{1}{(2\partiali\overline{\rm sign}\,maigma^2)^{\frac{N+1}{2}}} \int_{-\infty}^{\infty} \varphiepsilonxp\Big(-\frac{\lambdaambda^2+1 - 2 \lambdaambda \cos\alphalpha }{2\overline{\rm sign}\,maigma^2}\Big) |\lambdaambda |^N\mathrm{d} \lambdaambda.
\varphiepsilonnd{align*} \varphiepsilonnd{lemma} \partialroof We have \betaegin{align*} \mathop{\mathbb E}_{y\overline{\rm sign}\,maim N(\omegaverline{x}, \overline{\rm sign}\,maigma^2{\rm Id})} F(y) &= \frac{1}{(2\partiali\overline{\rm sign}\,maigma^2)^{\frac{N+1}{2}}} \int_{\mathbb{R}^{N+1}} F(y) \varphiepsilonxp\Big(\frac{-\|y-\omegaverline{x}\|^2}{2\overline{\rm sign}\,maigma^2}\Big) \mathrm{d} y\nablaonumber\\ &= \frac{1}{(2\partiali\overline{\rm sign}\,maigma^2)^{\frac{N+1}{2}}}\int_{\mathbb{S}^N_+(\omegaverline x)} \Big(\int_{-\infty}^{\infty} F(\lambdaambda x) \varphiepsilonxp\Big(\frac{-\|\lambdaambda x-\omegaverline{x}\|^2 }{2\overline{\rm sign}\,maigma^2}\Big)|\lambdaambda|^N \mathrm{d} \lambdaambda\Big)\mathrm{d} x \nablaonumber\\ &=\int_{\mathbb{S}^N_+(\omegaverline x)} F(x) \betaigg[\frac{1}{(2\partiali\overline{\rm sign}\,maigma^2)^{\frac{N+1}{2}}}\int_{-\infty}^{\infty} \varphiepsilonxp\Big(\frac{-\|\lambdaambda x-\omegaverline{x}\|^2 }{2\overline{\rm sign}\,maigma^2}\Big) |\lambdaambda|^N\mathrm{d} \lambdaambda\betaigg]\mathrm{d} x\\ & = \int_{\mathbb{S}^N_+(\omegaverline x)} F(x) G(x)\mathrm{d} x \varphiepsilonnd{align*} where the second equality follows from the transformation formula~\cite[Thm.~2.1]{Condition} applied to the diffeomorphism $$ \mathbb Phi: \mathbb{R}^{N+1}\overline{\rm sign}\,maetminus \omegaverline{x}^\partialerp \thetao \mathbb{S}^N_+(\omegaverline{x})\thetaimes \mathbb{R}\overline{\rm sign}\,maetminus\{0\}, \qquad x\mapsto \betaegin{cases} (\mathbb Psi(x),\|x\|^2)& \mbox{if } \lambdaangle x,\omegaverline x\rangle > 0\\ (\mathbb Psi(x),-\|x\|^2)& \mbox{otherwise,} \varphiepsilonnd{cases} $$ and $$ G(x):= \frac{1}{(2\partiali\overline{\rm sign}\,maigma^2)^{\frac{N+1}{2}}}\int_{-\infty}^{\infty} \varphiepsilonxp\Big(\frac{-\|\lambdaambda x-\omegaverline{x}\|^2 }{2\overline{\rm sign}\,maigma^2}\Big) |\lambdaambda|^N\mathrm{d} \lambdaambda $$ does not depend on $F$. Now, for $\omegaverline x, x\in \mathbb{S}^N_+(\omegaverline x)$, $$\|\lambdaambda x-\omegaverline{x}\|^2=\lambdaambda^2-2\lambdaambda \cos(\overline{\rm sign}\,maphericalangle(x,\omegaverline x)) +1. $$ Therefore, $G(x)=:G_{\omegaverline x}(\overline{\rm sign}\,maphericalangle(x,\omegaverline x))$ where for $0\lambdae \alphalpha \lambdae \frac{\partiali}{2}$, $$ G_{\omegaverline x}(\alphalpha)= \frac{1}{(2\partiali\overline{\rm sign}\,maigma^2)^{\frac{N+1}{2}}}\int_{-\infty}^{\infty} \varphiepsilonxp\Big( - \frac{\lambdaambda^2+1 - 2 \lambdaambda \cos\alphalpha }{2\overline{\rm sign}\,maigma^2}\Big) |\lambdaambda|^N\mathrm{d} \lambdaambda, $$ which is a continuously differentiable decreasing function of $\alphalpha$. \varphiepsilonproof \partialroofof{Proposition~\ref{prop:tech}} By Lemma~\ref{lem:tech}, \betaegin{equation}\lambdaabel{eq1} \mathop{\mathbb E}_{y\overline{\rm sign}\,maim N(\omegaverline{x}, \overline{\rm sign}\,maigma^2{\rm Id})} F(y)= \int_{\mathbb{S}^N_+(\omegaverline x)} G_{\omegaverline x}(\overline{\rm sign}\,maphericalangle(x,\omegaverline x) ) F(x) \mathrm{d} x. \varphiepsilonnd{equation} Now, by the fundamental Theorem of Calculus for $0<\alphalpha<\frac{\partiali}{2}$, $$ G_{\omegaverline x}(\alphalpha)= G_{\omegaverline x}\Big(\frac{\partiali}{2}\Big) -\int_\alphalpha^{\frac{\partiali}{2}}G'_{\omegaverline x}(\thetaheta)\mathrm{d} \thetaheta= G_{\omegaverline x}\Big(\frac{\partiali}{2}\Big) -\int_0^{\frac{\partiali}{2}}\ \mathsf{1\hspace*{-2pt}l}_{\{\alphalpha\lambdae \thetaheta\}}G'_{\omegaverline x}(\thetaheta)\mathrm{d} \thetaheta. 
$$ Replacing this in \varphiepsilonqref{eq1} and changing the order of integration, we obtain $$ \mathop{\mathbb E}_{y\overline{\rm sign}\,maim N(\omegaverline{x}, \overline{\rm sign}\,maigma^2{\rm Id})} F(y)= G_{\omegaverline x}\Big(\frac{\partiali}{2}\Big) \int_{\mathbb{S}^N_+(\omegaverline x)} F(x) \mathrm{d} x - \int_0^{\frac{\partiali}{2}} \Big(\int_{\mathbb{S}^N_+(\omegaverline x)}F(x) \mathsf{1\hspace*{-2pt}l}_{\{\overline{\rm sign}\,maphericalangle(x,\omegaverline x)\lambdae \thetaheta\}}dx\Big)G'_{\omegaverline x}(\thetaheta)\mathrm{d} \thetaheta. $$ Now, since $$ \mathop{\mathbb E}_{x\in \mathbb{S}^N_+(\omegaverline x)}F(x)= \frac{\deltaisplaystyle \int_{\mathbb{S}^N_+(\omegaverline x)}F(x)\mathrm{d} x}{\mathsf{vol}(\mathbb{S}^N_+(\omegaverline x))} \quad \mbox{ and } \quad \mathop{\mathbb E}_{x\in B_\mathbb{S}(\omegaverline x,\thetaheta)}F(x) = \frac{\deltaisplaystyle \int_{B_\mathbb{S}(\omegaverline x,\thetaheta)}F(x)\mathrm{d} x} {\mathsf{vol}(B_{\mathbb{S}}(\omegaverline{x},\thetaheta))}, $$ we obtain \betaegin{align*} \mathop{\mathbb E}_{y\overline{\rm sign}\,maim N(\omegaverline{x},\overline{\rm sign}\,maigma^2{\rm Id})} F(y) =\;&G_{\omegaverline x}\Big(\frac{\partiali}{2}\Big) \mathsf{vol}(\mathbb{S}^N_+(\omegaverline x))\mathop{\mathbb E}_{x\in \mathbb{S}^N_+(\omegaverline x)}F(x)\\ &- \int_0^{\frac{\partiali}{2}} \Big(\mathsf{vol}(B_{\mathbb{S}}(\omegaverline{x},\thetaheta)) \mathop{\mathbb E}_{x\in B_\mathbb{S}(\omegaverline x,\thetaheta)}F(x)\Big)G'_{\omegaverline x}(\thetaheta)\mathrm{d} \thetaheta. \varphiepsilonnd{align*} We now denote $$f(\thetaheta):=- \frac{\mathsf{vol}(B_{\mathbb{S}}(\omegaverline{x},\thetaheta))G'_{\omegaverline x}(\thetaheta)}{1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}}},$$ which is a non-negative function since $G_{\omegaverline x}$ is decreasing, and rewrite the equality above as \betaegin{equation*} \mathop{\mathbb E}_{y\overline{\rm sign}\,maim N(\omegaverline{x},\overline{\rm sign}\,maigma^2{\rm Id})} F(y)=H(N,\overline{\rm sign}\,maigma)\mathop{\mathbb E}_{x\in \mathbb{S}^N_+(\omegaverline x)}F(x) +(1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}})\int_0^{\frac{\partiali}{2}}\Big(\mathop{\mathbb E}_{x\in B_{\mathbb{S}}(\omegaverline{x},\thetaheta)} F(x)\Big) f(\thetaheta)\mathrm{d} \thetaheta , \varphiepsilonnd{equation*} where $$ H(N,\overline{\rm sign}\,maigma)=G_{\omegaverline x}\Big(\frac{\partiali}{2}\Big) \mathsf{vol}(\mathbb{S}^N_+(\omegaverline x)) = \mathsf{vol}(\mathbb{S}^N_+(\omegaverline x))\frac{1}{(2\partiali\overline{\rm sign}\,maigma^2)^{\frac{N+1}{2}}}\int_{-\infty}^{\infty} \varphiepsilonxp\Big( - \frac{\lambdaambda^2+1 }{2\overline{\rm sign}\,maigma^2}\Big) |\lambdaambda|^N\mathrm{d} \lambdaambda. 
$$ We now prove that $H(N,\overline{\rm sign}\,maigma)= e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}}$: Changing variables $\nablau=\frac{\lambdaambda}{\overline{\rm sign}\,maigma}$ we have \betaegin{eqnarray*} H(N,\overline{\rm sign}\,maigma)&=&\frac{ \mathsf{vol}(\mathbb{S}^N_+(\omegaverline x))}{(2\partiali\overline{\rm sign}\,maigma^2)^{\frac{N+1}{2}}} \int_{-\infty}^{\infty} \varphiepsilonxp\Big( - \frac{\lambdaambda^2+1 }{2\overline{\rm sign}\,maigma^2}\Big) |\lambdaambda|^N\mathrm{d} \lambdaambda\\ &=& \frac{ \mathsf{vol}(\mathbb{S}^N_+(\omegaverline x))}{(2\partiali\overline{\rm sign}\,maigma^2)^{\frac{N+1}{2}}} \int_{-\infty}^{\infty} \varphiepsilonxp\Big( - \frac{\nablau^2}{2}-\frac{1 }{2\overline{\rm sign}\,maigma^2}\Big) |\nablau|^N \overline{\rm sign}\,maigma^{N+1} \mathrm{d} \nablau\\ &=& e^{-\frac{1 }{2\overline{\rm sign}\,maigma^2}} \betaigg[\frac{\mathsf{vol}(\mathbb{S}^N_+(\omegaverline x))}{(2\partiali)^{\frac{N+1}{2}}} \int_{-\infty}^{\infty} e^{- \frac{\nablau^2}{2}} |\nablau|^N \mathrm{d} \nablau\betaigg]. \varphiepsilonnd{eqnarray*} To estimate the quantity between the square brackets we use the known equality $$ \int_0^\infty \nablau^{N}e^{-\frac{\nablau^2}{2}}\mathrm{d} \nablau = \Gammaamma\Big(\frac{N+1}{2}\Big)2^{\frac{N-1}{2}} $$ together with~\varphiepsilonqref{eq:volSN} to obtain \betaegin{eqnarray*} \frac{\mathsf{vol}(\mathbb{S}^N_+(\omegaverline x))}{(2\partiali)^{\frac{N+1}{2}}} \int_{-\infty}^{\infty} \varphiepsilonxp\Big(- \frac{\nablau^2}{2}\Big) |\nablau|^N \mathrm{d} \nablau &=&\frac{\mathsf{vol}(\mathbb{S}^N_+(\omegaverline x))}{(2\partiali)^{\frac{N+1}{2}}} \Gammaamma\Big(\frac{N+1}{2}\Big)2^{\frac{N+1}{2}}\\ &=& \frac{\partiali^{\frac{N+1}{2}}}{\Gammaamma\Big(\frac{N+1}{2}\Big)(2\partiali)^{\frac{N+1}{2}}} \Gammaamma\Big(\frac{N+1}{2}\Big)2^{\frac{N+1}{2}}\\ &=& 1. \varphiepsilonnd{eqnarray*} Therefore \betaegin{equation*} \mathop{\mathbb E}_{y\overline{\rm sign}\,maim N(\omegaverline{x},\overline{\rm sign}\,maigma^2{\rm Id})} F(y)=e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}}\mathop{\mathbb E}_{x\in \mathbb{S}^N_+(\omegaverline x)}F(x) +(1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}})\int_0^{\frac{\partiali}{2}}\Big(\mathop{\mathbb E}_{x\in B_{\mathbb{S}}(\omegaverline{x},\thetaheta)} F(x)\Big) f(\thetaheta)\mathrm{d} \thetaheta. \varphiepsilonnd{equation*} This implies, by taking $F=1$, that $$1 = e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}} + (1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}})\int_0^{\frac{\partiali}{2}} f(\thetaheta)\mathrm{d} \thetaheta,$$ i.e. $$\int_0^{\frac{\partiali}{2}} f(\thetaheta)\mathrm{d} \thetaheta=1.$$ Therefore $f$ is a density on $[0,\frac{\partiali}{2}]$, and $$\int_0^{\frac{\partiali}{2}}\Big(\mathop{\mathbb E}_{x\in B_{\mathbb{S}}(\omegaverline{x},\thetaheta)} F(x)\Big) f(\thetaheta)\mathrm{d} \thetaheta = \mathop{\mathbb E}_{\thetaheta\overline{\rm sign}\,maim f} \Big( \mathop{\mathbb E}_{x\in B_{\mathbb{S}}(\omegaverline{x},\thetaheta)} F(x) \Big) .$$ \varphiepsilonproof \betaigskip Since $\mathop{\mathscr C}(x)=\mathop{\mathscr C}(\lambdaambda x)$ for all $\lambdaambda \in \mathbb{R}^\thetaimes$, we can now focus on $F(x):=\lambdan\mathop{\mathscr C}(x)$.
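As a quick numerical sanity check of the normalisation constant $H(N,\overline{\rm sign}\,maigma)$ computed in the proof above, one can verify that the quantity between the square brackets is indeed equal to $1$. The following is a minimal sketch (assuming NumPy and SciPy are available; the helper name \texttt{bracket} is ours).
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def bracket(N):
    # vol(S^N_+) * (2*pi)^(-(N+1)/2) * integral over R of |nu|^N exp(-nu^2/2) d nu,
    # which the computation above shows to be identically equal to 1.
    vol_half_sphere = np.pi ** ((N + 1) / 2) / gamma((N + 1) / 2)
    moment = 2.0 * quad(lambda nu: nu ** N * np.exp(-nu ** 2 / 2), 0.0, np.inf)[0]
    return vol_half_sphere * moment / (2.0 * np.pi) ** ((N + 1) / 2)

for N in (5, 7, 10):
    print(N, bracket(N))   # each printed value should be numerically 1.0
\end{verbatim}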
\betaegin{proposition} \lambdaabel{together} With the notation in Proposition~\ref{prop:tech}, we have \betaegin{equation*} \mathop{\mathbb E}_{y\overline{\rm sign}\,maim N(\omegaverline{x},\overline{\rm sign}\,maigma^2{\rm Id})} \lambdan \mathop{\mathscr C}(y) \lambdae (1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}})\mathop{\mathbb E}_{\thetaheta \overline{\rm sign}\,maim f}(\lambdan \Big( \frac{1}{\overline{\rm sign}\,main\thetaheta}\Big)) + \lambdan (Nd)+ 2(\lambdan 2 + 1). \varphiepsilonnd{equation*} \varphiepsilonnd{proposition} \partialroof Replacing the expectations in the right-hand side of the equality in Proposition~\ref{prop:tech} by their bound in~\varphiepsilonqref{eq:smooth} for $\rho=\overline{\rm sign}\,main \thetaheta$ and $\rho=1=\overline{\rm sign}\,main\frac{\partiali}{2}$, we obtain \betaegin{eqnarray*} \mathop{\mathbb E}_{y\overline{\rm sign}\,maim N(\omegaverline{x},\overline{\rm sign}\,maigma^2{\rm Id})} \lambdan\mathop{\mathscr C}(y) &=& (1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}}) \mathop{\mathbb E}_{\thetaheta \overline{\rm sign}\,maim f} \Big( \mathop{\mathbb E}_{x\in B_{\mathbb{S}}(\omegaverline{x},\thetaheta)} \lambdan\mathop{\mathscr C}(x) \Big)+e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}} \mathop{\mathbb E}_{x\in {\mathbb{S}_+^N(\omegaverline x)}}\lambdan \mathop{\mathscr C}(x)\\ &\lambdae &(1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}}) \mathop{\mathbb E}_{\thetaheta \overline{\rm sign}\,maim f}(\lambdan \frac{Nd}{\overline{\rm sign}\,main\thetaheta} + K) +e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}} \mathop{\mathbb E}_{x\in {\mathbb{S}_+^N(\omegaverline x)}}(\lambdan (Nd) + K)\\ & \lambdae & (1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}}) \Big(\mathop{\mathbb E}_{\thetaheta \overline{\rm sign}\,maim f}(\lambdan \Big( \frac{1}{\overline{\rm sign}\,main\thetaheta}\Big) + \lambdan (Nd) + K) \Big) +e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}} (\lambdan (Nd) + K)\\ &\lambdae & (1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}})\mathop{\mathbb E}_{\thetaheta \overline{\rm sign}\,maim f}(\lambdan \Big( \frac{1}{\overline{\rm sign}\,main\thetaheta}\Big)) + \lambdan (Nd) + K, \varphiepsilonnd{eqnarray*} where $K=2(\lambdan 2 + 1)$. The result follows from the last equality in Proposition~\ref{prop:tech}. \varphiepsilonproof Our next goal is to estimate the right-hand side in Proposition~\ref{together}. \betaegin{lemma}\lambdaabel{lem:A2} Let $0\lambdae t \lambdae \frac{\partiali}{4}$. Then \betaegin{align*} \int_{t}^{\frac{\partiali}{2}}\lambdan\Big(\frac1{\overline{\rm sign}\,main\thetaheta}\Big)f(\thetaheta) \mathrm{d} \thetaheta \lambdae \lambdan \overline{\rm sign}\,maqrt 2 + \int_{\overline{\rm sign}\,main t}^{\frac{\overline{\rm sign}\,maqrt 2}{2}} \Big(\int_{t}^{\alpharcsin s} f(\thetaheta)\mathrm{d} \thetaheta\Big)\frac1s\mathrm{d} s. 
\varphiepsilonnd{align*} \varphiepsilonnd{lemma} \partialroof Write $$ \int_{t}^{\frac{\partiali}{2}}\lambdan\Big(\frac1{\overline{\rm sign}\,main\thetaheta}\Big)f(\thetaheta) \mathrm{d} \thetaheta = \int_{t}^{\frac{\partiali}{4}}\lambdan\Big(\frac1{\overline{\rm sign}\,main\thetaheta}\Big)f(\thetaheta) \mathrm{d} \thetaheta+ \int_{\frac{\partiali}{4}}^{\frac{\partiali}{2}}\lambdan\Big(\frac1{\overline{\rm sign}\,main\thetaheta}\Big)f(\thetaheta) \mathrm{d} \thetaheta.$$ Since $\frac{1}{\overline{\rm sign}\,main\thetaheta}\lambdae \overline{\rm sign}\,maqrt 2$ for $\thetaheta \in [\frac{\partiali}{4},\frac{\partiali}{2}]$, the second term satisfies $$\int_{\frac{\partiali}{4}}^{\frac{\partiali}{2}}\lambdan\Big(\frac1{\overline{\rm sign}\,main\thetaheta}\Big)f(\thetaheta) \mathrm{d} \thetaheta \lambdae \lambdan \overline{\rm sign}\,maqrt 2 \int_{\frac{\partiali}{4}}^{\frac{\partiali}{2}}f(\thetaheta) \mathrm{d} \thetaheta.$$ We analyze the first term. Let $A=\{ (\thetaheta,r) \in [t,\frac{\partiali}{4}]\thetaimes [0,\lambdan\betaig(\frac{1}{\overline{\rm sign}\,main t}\betaig)]\,:\, r\lambdae \lambdan (\frac1{\overline{\rm sign}\,main\thetaheta})\}$, and $A_r=\{\thetaheta \in [t,\frac{\partiali}{4}]: r\lambdae \lambdan (\frac1{\overline{\rm sign}\,main\thetaheta})\}$. By Fubini's Theorem we have both $$ \int_{(\thetaheta, r)\in A} f(\thetaheta)\mathrm{d}(\thetaheta,r) = \int_t^{\frac{\partiali}{4}}\Big( \int_{0}^{ \lambdan (\frac1{\overline{\rm sign}\,main\thetaheta} )} \mathrm{d} r\Big) f(\thetaheta)\mathrm{d} \thetaheta = \int_t^{\frac{\partiali}{4}}\lambdan\Big(\frac1{\overline{\rm sign}\,main\thetaheta}\Big)f(\thetaheta)\mathrm{d}\thetaheta $$ and \betaegin{align*} \int_{(\thetaheta, r)\in A} f(\thetaheta)\mathrm{d}(\thetaheta,r) & \ = \ \int_{0}^{\lambdan (\frac{1}{\overline{\rm sign}\,main t})} \Big(\int_{\thetaheta\in A_r} f(\thetaheta) \mathrm{d} \thetaheta\Big) \mathrm{d} r\\ & \ = \ \int_{0}^{\lambdan\overline{\rm sign}\,maqrt 2} \Big(\int_{\thetaheta\in A_r} f(\thetaheta) \mathrm{d} \thetaheta\Big) \mathrm{d} r + \int_{\lambdan\overline{\rm sign}\,maqrt2}^{\lambdan (\frac{1}{\overline{\rm sign}\,main t})} \Big(\int_{\thetaheta\in A_r} f(\thetaheta) \mathrm{d} \thetaheta\Big) \mathrm{d} r\\ & = \ \lambdan \overline{\rm sign}\,maqrt 2 \int_t^{\frac{\partiali}{4}} f(\thetaheta)\mathrm{d} \thetaheta + \int_{\lambdan \overline{\rm sign}\,maqrt 2}^{\lambdan (\frac{1}{\overline{\rm sign}\,main t}) } \Big(\int_{\overline{\rm sign}\,main t\lambdae \overline{\rm sign}\,main \thetaheta \lambdae e^{- r}} f(\thetaheta)\mathrm{d} \thetaheta\Big) \mathrm{d} r, \varphiepsilonnd{align*} since $t\lambdae \frac{\partiali}{4}$ implies $\lambdan \overline{\rm sign}\,maqrt 2\lambdae \lambdan\betaig(\frac{1}{\overline{\rm sign}\,main t}\betaig) $ and when $r\lambdae \lambdan \overline{\rm sign}\,maqrt 2$, then $A_r=[t,\frac{\partiali}{4}]$. Therefore, $$ \int_{(\thetaheta, r)\in A} f(\thetaheta)\mathrm{d}(\thetaheta,r) \ = \ \lambdan \overline{\rm sign}\,maqrt 2 \int_t^{\frac{\partiali}{4}} f(\thetaheta)\mathrm{d} \thetaheta + \int_{\overline{\rm sign}\,main t}^{\frac{\overline{\rm sign}\,maqrt 2}{2}} \Big(\int_{t}^{\alpharcsin s} f(\thetaheta) \mathrm{d} \thetaheta\Big)\frac1s\mathrm{d} s , $$ by taking $s=e^{-r}$.
Finally, \betaegin{align*} \int_{t}^{\frac{\partiali}{2}}\lambdan\Big(\frac1{\overline{\rm sign}\,main\thetaheta}\Big)f(\thetaheta) \mathrm{d} \thetaheta & \lambdae \lambdan \overline{\rm sign}\,maqrt 2 \int_{\frac{\partiali}{4}}^{\frac{\partiali}{2}}f(\thetaheta) + \lambdan \overline{\rm sign}\,maqrt 2 \int_t^{\frac{\partiali}{4}} f(\thetaheta)\mathrm{d} \thetaheta + \int_{\overline{\rm sign}\,main t}^{\frac{\overline{\rm sign}\,maqrt 2}{2}} \Big(\int_{t}^{\alpharcsin s} f(\thetaheta) \mathrm{d} \thetaheta\Big)\frac1s\mathrm{d} s \\ & \lambdae \lambdan \overline{\rm sign}\,maqrt 2 \int_t^{\frac{\partiali}{2}} f(\thetaheta)\mathrm{d} \thetaheta + \int_{\overline{\rm sign}\,main t}^{\frac{\overline{\rm sign}\,maqrt 2}{2}} \Big(\int_{t}^{\alpharcsin s} f(\thetaheta) \mathrm{d} \thetaheta\Big)\frac1s\mathrm{d} s\\ & \lambdae \lambdan \overline{\rm sign}\,maqrt 2 + \int_{\overline{\rm sign}\,main t}^{\frac{\overline{\rm sign}\,maqrt 2}{2}} \Big(\int_{t}^{\alpharcsin s} f(\thetaheta) \mathrm{d} \thetaheta\Big)\frac1s\mathrm{d} s \varphiepsilonnd{align*} since $\int_t^{\frac{\partiali}{2}}f(\thetaheta)\mathrm{d} \thetaheta \lambdae 1$. \varphiepsilonproof \betaegin{lemma}\lambdaabel{lem:B} Assume $N\gammae 5$. For all $t\in[0,\frac{\partiali}{4}]$, one has $$ \int_{0}^ t f(\thetaheta)\mathrm{d} \thetaheta \lambdaeq \min\Big\{1,\frac{1}{1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}}}\Big(\frac12 (\overline{\rm sign}\,main(2t))^N + \frac{(\overline{\rm sign}\,main t)^N}{\overline{\rm sign}\,maigma^{N+1}}\Big)\Big\}. $$ \varphiepsilonnd{lemma} \partialroof For $t\lambdae \frac{\partiali}{2}$, \betaegin{align*} \int_{0} ^{ t} f(\thetaheta)\mathrm{d} \thetaheta &= \mathop{\mathbb E}_{\thetaheta\overline{\rm sign}\,maim f}(\mathsf{1\hspace*{-2pt}l}_{\{\thetaheta\lambdaeq t\}}) \;\lambdae\; \mathop{\mathbb E}_{\thetaheta\overline{\rm sign}\,maim f} \betaig(\mathop{\mathbb E}_{B_{\mathbb{S}}(\omegaverline{x},\thetaheta)} \mathsf{1\hspace*{-2pt}l}_{\{\overline{\rm sign}\,maphericalangle(x,\omegaverline{x})\lambdaeq t\}} \betaig) )\\ &\lambdaeq \frac{1}{1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}}}\mathop{\mathbb E}_{y\overline{\rm sign}\,maim N(\omegaverline{x}, \overline{\rm sign}\,maigma^2{\rm Id})} (\mathsf{1\hspace*{-2pt}l}_{ \{\overline{\rm sign}\,maphericalangle (\mathbb Psi(y),\omegaverline{x} )\lambdaeq t \}})\\ &=\frac{1}{1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}}} \mathop{\rm Prob}_{y\overline{\rm sign}\,maim N(\omegaverline{x}, \overline{\rm sign}\,maigma^2{\rm Id})} \betaig\{\overline{\rm sign}\,maphericalangle (\mathbb Psi (y),\omegaverline{x} )\lambdaeq t\betaig\}, \varphiepsilonnd{align*} for $\mathbb Psi$ defined in \varphiepsilonqref{eq:Psi}. The first inequality holds because for $\thetaheta \lambdae t$, $\overline{\rm sign}\,maphericalangle(x,\omegaverline{x})\lambdaeq \thetaheta $ implies $\overline{\rm sign}\,maphericalangle(x,\omegaverline{x})\lambdaeq t $, and the second by Proposition~\ref{prop:tech} applied to $F=\mathsf{1\hspace*{-2pt}l}_{\betaig\{\overline{\rm sign}\,maphericalangle (\mathbb Psi(y),\omegaverline{x})\lambdae t\betaig\}}$. It is then enough to bound the right-hand expression.\\ We observe that for $0\lambdae t\lambdae \frac{\partiali}{2}$, the set $K=\betaig\{y\in\mathbb{R}^{N+1}\,:\,\overline{\rm sign}\,maphericalangle(\mathbb Psi(y),\omegaverline{x})\lambdaeq t\betaig\}$ is a pointed cone with vertex at $0$, central axis passing through $\omegaverline{x}$ and angular opening $\alphalpha:=2t$. 
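Before continuing with the geometric argument, we note that the probability being bounded here can be probed empirically. The following Monte Carlo sketch (assuming NumPy; the sample values of $N$, $\overline{\rm sign}\,maigma$ and $t$ are ours, chosen only for illustration) estimates the probability that $\mathbb Psi(y)$ lies within angle $t$ of $\omegaverline{x}$ and compares it with the sum of the two bounds derived below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, sigma, t = 6, 0.5, 0.3               # illustrative values only (N >= 5)
xbar = np.zeros(N + 1)
xbar[0] = 1.0                           # a point on the unit sphere S^N

y = xbar + sigma * rng.standard_normal((200000, N + 1))
cosang = np.abs(y @ xbar) / np.linalg.norm(y, axis=1)  # cos of angle(Psi(y), xbar)
prob = np.mean(np.arccos(np.clip(cosang, -1.0, 1.0)) <= t)

bound = 0.5 * np.sin(2 * t) ** N + np.sin(t) ** N / sigma ** (N + 1)
print(prob, bound)          # the empirical probability should not exceed bound
\end{verbatim}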
In addition, one can prove by the cosine theorem that this cone is included in the union of the pointed cone $\omegaverline{K}$ with vertex at $\omegaverline{x}$, central axis passing through $2\omegaverline{x}$ and angular opening $2\alphalpha$ with the intersection $K\cap B(\omegaverline{x},1)$ (see Figure~\ref{fig:cones}). Hence, the measure of $K$ (with respect to $N(\omegaverline{x},\overline{\rm sign}\,maigma^2{\rm Id})$) is bounded by the sum of the measures of $\omegaverline{K}$ and $K\cap B(\omegaverline{x},1)$. \betaegin{figure}[H]\centering \betaegin{tikzpicture}[scale=.5,point/.style={draw,minimum size=0pt, inner sep=1pt,circle,fill=black}] \betaegin{scope} \fill[gray!20!white] (0,0) circle (7); \fill[white] (-0.4,-7)-- (2.45,7) -- (7,7) -- (7,-7) -- (-0.4,-7); \fill[white] (0.4,-7)-- (-2.45,7) -- (-7,7) -- (-7,-7) -- (0.4,-7); \fill[pattern=north west lines] (0,0)-- (-2.85,6.7) -- (2.85,6.7) -- (0,0); \varphiepsilonnd{scope} \deltaraw (0,0) node(x) [point,label=220:{\footnotesize$\omegaverline{x}$}] {}; \deltaraw(x) circle [radius=5]; \deltaraw[dotted](0,-5) circle [radius=5]; \deltaraw (0,-5) node(q) [point,label=-120:{\footnotesize$0$}] {}; \deltaraw[dashed] (0,0) --(2.9,6.8); \deltaraw[dashed] (0,0) --(-2.9,6.8); \deltaraw (q) -- (2.45,7); \deltaraw (q) -- (-2.45,7); \deltaraw ([shift={(0,0)}]65:0.8) arc [radius =0.8, start angle=65, end angle=115]; \partialath (0,1.4) node{\footnotesize $2\alphalpha$}; \deltaraw ([shift={(0,-5)}]78:0.8) arc [radius =0.8, start angle=78, end angle=102]; \partialath (0,-3.6) node{\footnotesize $\alphalpha$}; \partialath (1,-2) node{\footnotesize $K$}; \partialath (3.1,6) node{\footnotesize $\omegaverline{K}$}; \varphiepsilonnd{tikzpicture} \caption{{\overline{\rm sign}\,mamall The cones $K$ (shaded) and $\omegaverline{K}$ (line patterned).}} \lambdaabel{fig:cones} \varphiepsilonnd{figure} As the vertex $\omegaverline{x}$ of $\omegaverline{K}$ coincides with the center of $N(\omegaverline{x},\overline{\rm sign}\,maigma)$, the measure of $\omegaverline{K}$ with respect to $N(\omegaverline{x},\overline{\rm sign}\,maigma)$ equals the proportion of the volume (in $\mathbb{S}(\omegaverline{x},1)$) of the intersection of $\omegaverline{K}$ with $\mathbb{S}(\omegaverline{x},1)$ within this sphere. That is, the measure of $\omegaverline{K}$ with respect to $N(\omegaverline{x},\overline{\rm sign}\,maigma)$ satisfies $$ \mathop{\rm Prob}_{x\overline{\rm sign}\,maim N(\omegaverline{x},\overline{\rm sign}\,maigma^2{\rm Id})}\{x\in \omegaverline{K}\} = \frac{\mathsf{vol}(B_{\mathbb{S}}(\omegaverline{x},2t))}{\Omegah_{N}} $$ where, we recall, $\Omegah_N:=\mathsf{vol}(\mathbb{S}^N)$. Using~\varphiepsilonqref{eq:volBS} we deduce that, for $t\in[0,\frac{\partiali}{4}]$, $$ \mathop{\rm Prob}_{x\overline{\rm sign}\,maim N(\omegaverline{x},\overline{\rm sign}\,maigma^2{\rm Id})}\{x\in \omegaverline{K}\} \lambdaeq \frac12 (\overline{\rm sign}\,main(2t))^N. 
$$ Also, \betaegin{align*} \mathop{\rm Prob}_{x\overline{\rm sign}\,maim N(\omegaverline{x},\overline{\rm sign}\,maigma^2{\rm Id})}&\{x\in K\cap B(\omegaverline{x},1)\} \ = \ \int_{x\in K\cap B(\omegaverline{x},1)} \frac{1}{(2\partiali\overline{\rm sign}\,maigma^2)^{\frac{N+1}{2}}} \varphiepsilonxp\Big(-\frac{\|x-\omegaverline{x}\|^2}{2\overline{\rm sign}\,maigma^2}\Big)\mathrm{d} x \\ &\lambdaeq \frac{1}{(2\partiali\overline{\rm sign}\,maigma^2)^{\frac{N+1}{2}}} \int_{x\in K\cap B(\omegaverline{x},1)} 1\mathrm{d} x \ = \ \frac{1}{(2\partiali\overline{\rm sign}\,maigma^2)^{\frac{N+1}{2}}} \mathsf{vol}(K\cap B(\omegaverline{x},1))\\ &\lambdae\frac{1}{(2\partiali\overline{\rm sign}\,maigma^2)^{\frac{N+1}{2}}}\mathsf{vol}(K\cap B(0,2)) \ = \ \frac{\mathsf{vol}(B_\mathbb{S}(\omegaverline x, t))}{(2\partiali\overline{\rm sign}\,maigma^2)^{\frac{N+1}{2}}\Omegah_N} \mathsf{vol}(B(0,2))\\ &\underset{\varphiepsilonqref{eq:volBS}}{\lambdae} \frac{2^{N+1}(\overline{\rm sign}\,main t)^N}{(2\partiali\overline{\rm sign}\,maigma^2)^{\frac{N+1}{2}}\cdot 2} \mathsf{vol}(B(0,1))\ \underset{\varphiepsilonqref{eq:volSN}\varphiepsilonqref{eq:volB}}{\lambdae} \ \frac{2^{\frac{N+1}{2}}(\overline{\rm sign}\,main t)^N}{\Gammaamma(\frac{N+1}{2})(N+1)\overline{\rm sign}\,maigma^{N+1}} \\ &\lambdae \frac{2^{N+\frac12}e^{\frac{N-1}{2}}(\overline{\rm sign}\,main t)^N}{\overline{\rm sign}\,maqrt{\partiali} (N-1)^{\frac{N}{2}}(N+1)\overline{\rm sign}\,maigma^{N+1}}. \varphiepsilonnd{align*} Here we used the well-known lower bound $\Gammaamma(\frac{N+1}{2})>\overline{\rm sign}\,maqrt{2\partiali}\Big(\frac{N-1}{2}\Big)^{\frac{N}{2}} e^{-\frac{N-1}{2}}$ (see for instance~\cite[Eq.~2.14]{Condition}) for the last inequality. We finish the proof by noting that it can be easily proven by induction, using for instance that $N^{N+1}\gammae 2N(N-1)^N$, that for all $N\gammae 5$, we have \betaegin{equation}\thetaag*{\qed} \frac{2^{N+\frac12}e^{\frac{N-1}{2}}}{\overline{\rm sign}\,maqrt{\partiali} (N-1)^{\frac{N}{2}}(N+1)}\lambdaeq 1. \varphiepsilonnd{equation} \betaegin{lemma}\lambdaabel{lem:int1} Assume $N\gammae 5$. Then, $$ \mathop{\mathbb E}_{\thetaheta\overline{\rm sign}\,maim f}(\lambdan\Big(\frac1{\overline{\rm sign}\,main\thetaheta}\Big))\lambdae \frac{1}{N} \Big(1+ \lambdan \betaig( 2^{N-1}+\frac{1}{\overline{\rm sign}\,maigma^{N+1}}\betaig) - \lambdan (1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}}) \Big). 
$$ \varphiepsilonnd{lemma} \partialroof We have by Lemma~\ref{lem:A2} with $t=0$, $$ \mathop{\mathbb E}_{\thetaheta\overline{\rm sign}\,maim f}(\lambdan\Big(\frac1{\overline{\rm sign}\,main\thetaheta}\Big)) \lambdae \lambdan \overline{\rm sign}\,maqrt 2 + \int_0^{\frac{\overline{\rm sign}\,maqrt 2}{2}} \Big(\int_{0}^{\alpharcsin s} f(\thetaheta)\mathrm{d} \thetaheta\Big)\frac1s\mathrm{d} s, $$ where by Lemma~\ref{lem:B}, since $0\lambdae \alpharcsin s \lambdae \frac{\partiali}{4}$ for $0\lambdae s\lambdae \frac{\overline{\rm sign}\,maqrt 2}{2}$, \betaegin{align*} \int_{0}^{\alpharcsin s} f(\thetaheta)\mathrm{d} \thetaheta & \lambdaeq \min\betaigg\{1, \frac{1}{1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}}}\Big(\frac12 (\overline{\rm sign}\,main(2\alpharcsin s))^N + \frac{(\overline{\rm sign}\,main(\alpharcsin s))^N}{\overline{\rm sign}\,maigma^{N+1}}\Big)\betaigg\} \\ & \lambdae \min\betaigg\{1, \frac{1}{1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}}}\Big(2^{N-1}s^N + \frac{s^N}{\overline{\rm sign}\,maigma^{N+1}}\Big)\betaigg\} \\ & \lambdae \min\Big\{1, \frac{2^{N-1} + \frac{1}{\overline{\rm sign}\,maigma^{N+1}}}{1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}}}s^N\Big\}. \varphiepsilonnd{align*} We have $$\frac{2^{N-1} + \frac{1}{\overline{\rm sign}\,maigma^{N+1}}}{1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}}}s^N\lambdae 1 \iff \Big(2^{N-1} + \frac{1}{\overline{\rm sign}\,maigma^{N+1}}\Big)s^N \lambdae 1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}} \iff s\lambdae c(N,\overline{\rm sign}\,maigma), $$ where $c(N,\overline{\rm sign}\,maigma):=\overline{\rm sign}\,maqrt[N] {\deltafrac{(1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}})\overline{\rm sign}\,maigma^{N+1}}{1+2^{N-1}\overline{\rm sign}\,maigma^{N+1}}}$. In addition we observe that for all $N\gammae 2$, $c(N,\overline{\rm sign}\,maigma)< \deltafrac{\overline{\rm sign}\,maqrt 2}{2}$ since \betaegin{eqnarray*} c(N,\overline{\rm sign}\,maigma) <\frac{1}{\overline{\rm sign}\,maqrt{2}} &\iff & \deltafrac{(1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}})\overline{\rm sign}\,maigma^{N+1}}{1+2^{N-1}\overline{\rm sign}\,maigma^{N+1}} <\frac{1}{2^\frac{N}{2}}\\ &\iff & 2^\frac{N}{2}(1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}}) \overline{\rm sign}\,maigma^{N+1} < 1+2^{N-1}\overline{\rm sign}\,maigma^{N+1}. 
\varphiepsilonnd{eqnarray*} Rewriting $c(N,\overline{\rm sign}\,maigma)^{-N}= \deltafrac{2^{N-1}+\frac{1}{ \overline{\rm sign}\,maigma^{N+1}}}{ 1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}}} $ we get \betaegin{align*} \mathop{\mathbb E}_{\thetaheta\overline{\rm sign}\,maim f}(\lambdan\Big(\frac1{\overline{\rm sign}\,main\thetaheta}\Big)) &\lambdae \lambdan \overline{\rm sign}\,maqrt 2 + \int_0^{c(N,\overline{\rm sign}\,maigma)}\Big(\int_{0}^{\alpharcsin s} f(\thetaheta)\mathrm{d} \thetaheta\Big)\frac1s\mathrm{d} s + \int_{c(N,\overline{\rm sign}\,maigma)}^{\frac{\overline{\rm sign}\,maqrt 2}{2}}\Big(\int_{0}^{\alpharcsin s} f(\thetaheta)\mathrm{d} \thetaheta\Big)\frac1s\mathrm{d} s\\ &\lambdae \lambdan \overline{\rm sign}\,maqrt 2 + \int_0^{c(N,\overline{\rm sign}\,maigma)}c(N,\overline{\rm sign}\,maigma)^{-N}s^{N-1}\mathrm{d} s + \int_{c(N,\overline{\rm sign}\,maigma)}^{\frac{\overline{\rm sign}\,maqrt 2}{2}}\frac{1}{s}\mathrm{d} s\\ &\lambdae \lambdan \overline{\rm sign}\,maqrt 2+ \frac{1}{N} + \lambdan \frac{\overline{\rm sign}\,maqrt 2}{2}- \lambdan c(N,\overline{\rm sign}\,maigma) \\ & = \frac{1}{N} \Big( 1 + \lambdan \frac{2^{N-1}+\frac{1}{\overline{\rm sign}\,maigma^{N+1}}} {1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}}} \Big)\\ & = \frac{1}{N} \Big(1+ \lambdan \betaig( 2^{N-1}+\frac{1}{\overline{\rm sign}\,maigma^{N+1}}\betaig) - \lambdan(1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}}) \Big). \varphiepsilonnd{align*} \varphiepsilonproof \partialroofof{Theorem~\ref{th:smoothedGaussian}} By Proposition~\ref{together} and Lemma~\ref{lem:int1}, \betaegin{align*} \mathop{\mathbb E}_{y\overline{\rm sign}\,maim N(\omegaverline{x},\overline{\rm sign}\,maigma^2{\rm Id})} \lambdan \mathop{\mathscr C}(y) &\lambdae (1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}})\mathop{\mathbb E}_{\thetaheta \overline{\rm sign}\,maim f}(\lambdan \Big( \frac{1}{\overline{\rm sign}\,main\thetaheta}\Big)) + \lambdan (Nd) +K \\ & \lambdae \frac{1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}}}{N} \Big(1+ \lambdan \betaig( 2^{N-1}+\frac{1}{\overline{\rm sign}\,maigma^{N+1}}\betaig) - \lambdan(1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}}) \Big)\\ & \quad +\lambdan(Nd) + K, \varphiepsilonnd{align*} with $K=2(\lambdan 2 + 1)$. We then define \betaegin{equation}\lambdaabel{SmoothGaussianBound} H(N,d,\overline{\rm sign}\,maigma)= \frac{(1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}})}{N} \Big(1+ \lambdan \betaig( 2^{N-1}+\frac{1}{\overline{\rm sign}\,maigma^{N+1}}\betaig) - \lambdan (1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}}) \Big)+ \lambdan (Nd)+2(\lambdan 2 +1). \varphiepsilonnd{equation} We now verify that $H(N,d,\overline{\rm sign}\,maigma)$ satisfies {\betaf (SA1)} and {\betaf (SA2)}: \betaegin{description} \item[(SA1)] $\deltaisplaystyle \lambdaim_{\overline{\rm sign}\,maigma\thetao 0} H(N,d,\overline{\rm sign}\,maigma)= \deltaisplaystyle\lambdaim_{\overline{\rm sign}\,maigma\thetao 0}\Big(\deltafrac{1}{N}\Big(1+ \lambdan \betaig( 2^{N-1}+\deltafrac{1}{\overline{\rm sign}\,maigma^{N+1}}\betaig) \Big)+ \lambdan (Nd) + 2(\lambdan 2 +1)\Big)$\\ ${\ } \qquad \qquad \qquad \quad =\deltaisplaystyle \lambdaim_{\overline{\rm sign}\,maigma\thetao 0} \Big(\deltafrac{N+1}{N}\lambdan\deltafrac{Nd}{\overline{\rm sign}\,maigma} + \Omegah(1)\Big) \ = \ \infty. $\\ Note that the formula in the last line differs only negligibly from~\varphiepsilonqref{eq:smooth}, with the dispersion parameter $\overline{\rm sign}\,maigma$ playing the role of $\overline{\rm sign}\,main \thetaheta$.
\item[(SA2)] $\deltaisplaystyle \lambdaim_{\overline{\rm sign}\,maigma\thetao \infty} H(N,d,\overline{\rm sign}\,maigma)=\lambdan(Nd)+2(\lambdan 2+1)$, and we recover the well-known, average-case analysis, bound for $\mathop{\mathbb E}_{x\in\mathbb{S}^N}\lambdan(\mathop{\mathscr C}(x))$ (see~\cite{Demmel88} and~\cite[Theorem~21.1]{Condition}). \varphiepsilonnd{description} \varphiepsilonproof \overline{\rm sign}\,maubsection{Local analysis} The main result of this section is the following. \betaegin{theorem}\lambdaabel{thm:main-local} Let $\mathop{\mathscr C}$ be a conic condition number on $\mathbb{R}^{N+1}$ with $N\gammae 6$, with set of ill-posed inputs $\Sigmama$. Assume that $\Sigmama$ is contained in a real algebraic hypersurface, given as the zero set of a homogeneous polynomial of degree $d$. Let $\omegaverline x\in \mathbb{S}^N$ and $\overline{\rm sign}\,maigma\gammaeq 0$. Then, there is an explicit bound $H(N,d,\overline{\rm sign}\,maigma, \mathop{\mathscr C}(\omegaverline x))$ --defined in \varphiepsilonqref{eq:GaussianBound} below-- such that $$ \mathop{\mathbb E}_{x\overline{\rm sign}\,maim N(\omegaverline{x}, \overline{\rm sign}\,maigma^2{\rm Id})}\lambdan \mathop{\mathscr C}(x)\lambdae H(N,d,\overline{\rm sign}\,maigma,\mathop{\mathscr C}(\omegaverline x) ).$$ This bound satisfies {\betaf (LA0)}, {\betaf (LA1)} and {\betaf (LA2)}. \varphiepsilonnd{theorem} In order to prove Theorem~\ref{thm:main-local} we need the following lemma. \betaegin{lemma}\lambdaabel{lem:otra-cota} Assume $N\gammaeq 2$. For all $t\in[0,\partiali/2]$, $$ \int_t^{\frac{\partiali}{2}}f(\thetaheta)\mathrm{d}\thetaheta \lambdaeq \min\Big\{1,\frac{2\partiali\overline{\rm sign}\,maigma\overline{\rm sign}\,maqrt {N+1}}{(1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}})t}\Big\}. $$ \varphiepsilonnd{lemma} \partialroof The idea is to apply Markov's inequality~(e.g. \cite[Corollary~2.9]{Condition}) to the density $f$ to deduce that $$ \int_t^{\frac{\partiali}{2}} f(\thetaheta)\mathrm{d} \thetaheta=\mathop{\rm Prob}_{\thetaheta \overline{\rm sign}\,maim f}(\thetaheta \gammae t) \lambdae \frac{1}{t}\mathop{\mathbb E}_{\thetaheta \overline{\rm sign}\,maim f} (\thetaheta) $$ Therefore we need to bound $\deltaisplaystyle{\mathop{\mathbb E}_{\thetaheta\overline{\rm sign}\,maim f}(\thetaheta)}$. We first prove that \betaegin{equation}\lambdaabel{eq:des1}\mathop{\mathbb E}_{\thetaheta\overline{\rm sign}\,maim f}(\thetaheta) \lambdae \frac{\overline{\rm sign}\,maqrt 2 \partiali}{1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}}}\mathop{\mathbb E}_{y\in N(\omegaverline{x},\overline{\rm sign}\,maigma^2{\rm Id})}(\|\mathbb Psi(y)-\omegaverline{x}\|), \varphiepsilonnd{equation} where $\mathbb Psi$ is given by~\varphiepsilonqref{eq:Psi}, and then that \betaegin{equation}\lambdaabel{eq:des2} \mathop{\mathbb E}_{y\in N(\omegaverline{x},\overline{\rm sign}\,maigma^2{\rm Id})}(\|\mathbb Psi(y)-\omegaverline{x}\|) \lambdae {\overline{\rm sign}\,maqrt{2}}\, \overline{\rm sign}\,maigma\overline{\rm sign}\,maqrt{N+1}. \varphiepsilonnd{equation} This implies $$ \mathop{\mathbb E}_{\thetaheta\overline{\rm sign}\,maim f}(\thetaheta)\lambdae \frac{2\partiali \overline{\rm sign}\,maigma \,\overline{\rm sign}\,maqrt{N+1}}{1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}}}. 
$$ To show \varphiepsilonqref{eq:des1} we apply Proposition~\ref{prop:tech} with $F(y)=\|\mathbb Psi(y)-\omegaverline{x}\|$ and get \betaegin{equation}\lambdaabel{eq:exp1} \mathop{\mathbb E}_{y\in N(\omegaverline{x},\overline{\rm sign}\,maigma^2{\rm Id})}(\|\mathbb Psi(y)-\omegaverline{x}\|) \gammaeq (1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}}) \mathop{\mathbb E}_{\thetaheta \overline{\rm sign}\,maim f}\Big(\mathop{\mathbb E}_{x\in B_{\mathbb{S}}(\omegaverline{x},\thetaheta)}(\|x-\omegaverline{x}\|)\Big). \varphiepsilonnd{equation} We claim that \betaegin{equation}\lambdaabel{eq:exp3} \mathop{\mathbb E}_{x\in B_{\mathbb{S}}(\omegaverline{x},\thetaheta)}(\|x-\omegaverline{x}\|) \gammaeq \frac{\overline{\rm sign}\,maqrt 2}{2\partiali}\thetaheta. \varphiepsilonnd{equation} \betaegin{comment} \betaegin{eqnarray*} \mathop{\mathbb E}_{x\in B_{\mathbb{S}}(\omegaverline{x},\thetaheta)}(\|x-\omegaverline{x}\|) &\gammaeq & \mathop{\mathbb E}_{x\in B_{\mathbb{S}}(\omegaverline{x},\thetaheta)} \overline{\rm sign}\,maphericalangle(x,\omegaverline{x})\\ &=& \frac{1}{\overline{\rm sign}\,mafv(\thetaheta)} \betaigg(\int_{B_{\mathbb{S}}(\omegaverline{x},\frac{\thetaheta}{2})}\overline{\rm sign}\,maphericalangle[x,\omegaverline{x}] \mathrm{d} x + \int_{B_{\mathbb{S}}(\omegaverline{x},\thetaheta)\overline{\rm sign}\,maetminus B_{\mathbb{S}}(\omegaverline{x},\frac{\thetaheta}{2})} \overline{\rm sign}\,maphericalangle[x,\omegaverline{x}] \mathrm{d} x \betaigg)\\ &\gammae& \frac{1}{\overline{\rm sign}\,mafv(\thetaheta)} \int_{B_{\mathbb{S}}(\omegaverline{x},\thetaheta)\overline{\rm sign}\,maetminus B_{\mathbb{S}}(\omegaverline{x},\frac{\thetaheta}{2})} \frac{\thetaheta}{2} \mathrm{d} x \;=\; \frac{\overline{\rm sign}\,mafv(\thetaheta)-\overline{\rm sign}\,mafv(\frac{\thetaheta}{2})}{\overline{\rm sign}\,mafv(\thetaheta)}\frac{\thetaheta}{2}. \varphiepsilonnd{eqnarray*} \varphiepsilonnd{comment} Indeed, for $0\lambdae \alphalpha:=\overline{\rm sign}\,maphericalangle(x,\omegaverline x) \lambdae \frac{\partiali}{2}$, one has $$ \frac{2\overline{\rm sign}\,maqrt 2}{\partiali} \alphalpha \lambdae \|x-\omegaverline x\|\lambdae \alphalpha. $$ \betaegin{comment} the right-hand side inequality is quite obvious (and well-known) since $$\|x-\omegaverline x\|^2 = 2 (1-\cos \alphalpha) = 4 \overline{\rm sign}\,main^2 \frac{\alphalpha}{2},$$ which holds since $\overline{\rm sign}\,main \alphalpha\lambdae \alphalpha$ for $0\lambdae \alphalpha \lambdae \frac{\partiali}{4}$. We obtain the left-hand side inequality by showing that $2\overline{\rm sign}\,main \frac{\alphalpha}{2}\gammae \frac{2\overline{\rm sign}\,maqrt 2}{\partiali}\alphalpha$, i.e. $$ \overline{\rm sign}\,main \alphalpha - \frac{2\overline{\rm sign}\,maqrt 2}{\partiali}\alphalpha \gammae 0 \ \mbox{ for } \ 0\lambdae \alphalpha\lambdae \frac{\partiali}{4}.$$ We note that in $0$ and in $\frac{\partiali}{4}$ this function gives $0$ and furthermore, it is concave, which shows the left-hand side inequality. 
\varphiepsilonnd{comment} Therefore, writing $\overline{\rm sign}\,mafv(\thetaheta):=\mathsf{vol}(B_{\mathbb{S}}(\omegaverline{x},\thetaheta))$, \betaegin{eqnarray*} \mathop{\mathbb E}_{x\in B_{\mathbb{S}}(\omegaverline{x},\thetaheta)}(\|x-\omegaverline{x}\|) &{\gammaeq }& \frac{2\overline{\rm sign}\,maqrt 2}{\partiali} \mathop{\mathbb E}_{x\in B_{\mathbb{S}}(\omegaverline{x},\thetaheta)} \overline{\rm sign}\,maphericalangle(x,\omegaverline{x})\\ &=& \frac{2\overline{\rm sign}\,maqrt 2}{\partiali\,\overline{\rm sign}\,mafv(\thetaheta)} \betaigg(\int_{B_{\mathbb{S}}(\omegaverline{x},\frac{\thetaheta}{2})}\overline{\rm sign}\,maphericalangle[x,\omegaverline{x}] \mathrm{d} x + \int_{B_{\mathbb{S}}(\omegaverline{x},\thetaheta)\overline{\rm sign}\,maetminus B_{\mathbb{S}}(\omegaverline{x},\frac{\thetaheta}{2})} \overline{\rm sign}\,maphericalangle[x,\omegaverline{x}] \mathrm{d} x \betaigg)\\ &\gammae& \frac{2\overline{\rm sign}\,maqrt 2}{\partiali\overline{\rm sign}\,mafv(\thetaheta)} \int_{B_{\mathbb{S}}(\omegaverline{x},\thetaheta)\overline{\rm sign}\,maetminus B_{\mathbb{S}}(\omegaverline{x},\frac{\thetaheta}{2})} \frac{\thetaheta}{2} \mathrm{d} x \;=\; \frac{\thetaheta\overline{\rm sign}\,maqrt 2}{\partiali}\lambdaeft( \frac{\overline{\rm sign}\,mafv(\thetaheta)-\overline{\rm sign}\,mafv(\frac{\thetaheta}{2})}{\overline{\rm sign}\,mafv(\thetaheta)}R^\inftyght). \varphiepsilonnd{eqnarray*} Now, for $0\lambdae \thetaheta \lambdae \frac{\partiali}{2}$, we have $$\overline{\rm sign}\,main\thetaheta =2\overline{\rm sign}\,main\frac{\thetaheta}{2}\cos\frac{\thetaheta}{2}\gammae \overline{\rm sign}\,maqrt 2 \overline{\rm sign}\,main\frac{\thetaheta}{2},$$ which implies $$\overline{\rm sign}\,main \frac{\thetaheta}{2}\lambdae \frac{\overline{\rm sign}\,main \thetaheta}{\overline{\rm sign}\,maqrt 2}.$$ Using~\varphiepsilonqref{eq:volBS} twice we have, for $N\gammaeq 6$, $$ \overline{\rm sign}\,mafv\Big(\frac{\thetaheta}{2}\Big) \lambdaeq \frac{\Omegah_N}{2}\Big(\overline{\rm sign}\,main\frac{\thetaheta}{2}\Big)^N \lambdae \frac{\Omegah_N}{2}\frac{1}{2^{\frac{N}{2}}}(\overline{\rm sign}\,main\thetaheta)^N \lambdae \frac{\Omegah_N}{2} \frac{1}{\overline{\rm sign}\,maqrt{2\partiali ( N+1)}}(\overline{\rm sign}\,main\thetaheta)^N \lambdae \frac{\overline{\rm sign}\,mafv(\thetaheta)}{2} $$ and we deduce that $\frac{\overline{\rm sign}\,mafv(\thetaheta)-\overline{\rm sign}\,mafv(\frac{\thetaheta}{2})}{\overline{\rm sign}\,mafv(\thetaheta)}\gammae\frac12$. With this, $$ \mathop{\mathbb E}_{x\in B_{\mathbb{S}}(\omegaverline{x},\thetaheta)}(\|x-\omegaverline{x}\|) \gammaeq \frac{\overline{\rm sign}\,maqrt 2}{2\partiali}\thetaheta $$ which shows \varphiepsilonqref{eq:exp3}. From~\varphiepsilonqref{eq:exp1} and~\varphiepsilonqref{eq:exp3} it follows that $$ \mathop{\mathbb E}_{y\in N(\omegaverline{x},\overline{\rm sign}\,maigma^2{\rm Id})}(\|\mathbb Psi(y)-\omegaverline{x}\|) \gammaeq \frac{\overline{\rm sign}\,maqrt 2 (1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}})}{2\partiali} \mathop{\mathbb E}_{\thetaheta\overline{\rm sign}\,maim f}(\thetaheta), $$ which shows \varphiepsilonqref{eq:des1}. We now show \varphiepsilonqref{eq:des2}. 
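The inequality \varphiepsilonqref{eq:des2} can also be probed empirically before it is proved. The following Monte Carlo sketch (assuming NumPy; the sample values of $N$ and $\overline{\rm sign}\,maigma$ are ours, chosen only for illustration) folds Gaussian samples onto the half-sphere, as the map $\mathbb Psi$ does, and compares the empirical mean of $\|\mathbb Psi(y)-\omegaverline{x}\|$ with the right-hand side of \varphiepsilonqref{eq:des2}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, sigma = 6, 0.2                       # illustrative values only
xbar = np.zeros(N + 1)
xbar[0] = 1.0                           # a point on the unit sphere S^N

y = xbar + sigma * rng.standard_normal((200000, N + 1))
psi = y / np.linalg.norm(y, axis=1, keepdims=True)
psi *= np.sign(psi @ xbar)[:, None]     # fold onto the half-sphere centred at xbar
empirical = np.linalg.norm(psi - xbar, axis=1).mean()
bound = np.sqrt(2.0) * sigma * np.sqrt(N + 1)
print(empirical, bound)                 # the empirical mean should not exceed bound
\end{verbatim}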
We let $\mathbb Psi^*(y)$ be the closest point to $\omegaverline{x}$ on the line through $0$ and $y$ (see Figure~\ref{fig:proj}) and have \betaegin{eqnarray*} \mathop{\mathbb E}_{y\in N(\omegaverline{x},\overline{\rm sign}\,maigma^2{\rm Id})}(\|\mathbb Psi(y)-\omegaverline{x}\|) &\lambdaeq& {\overline{\rm sign}\,maqrt{2}} \mathop{\mathbb E}_{y\in N(\omegaverline{x},\overline{\rm sign}\,maigma^2{\rm Id})}(\|\mathbb Psi^*(y)-\omegaverline{x}\|) \\ &\lambdaeq& {\overline{\rm sign}\,maqrt{2}} \mathop{\mathbb E}_{y\in N(\omegaverline{x},\overline{\rm sign}\,maigma^2{\rm Id})}(\|y-\omegaverline{x}\|) \;\lambdae \; {\overline{\rm sign}\,maqrt{2}}\, \overline{\rm sign}\,maigma\overline{\rm sign}\,maqrt{N+1}, \varphiepsilonnd{eqnarray*} where the last inequality is a consequence of~\cite[Prop. 2.10 \& Lem. 2.15]{Condition}. \betaegin{figure}[H]\centering \betaegin{tikzpicture}[scale=.5,point/.style={draw,minimum size=0pt, inner sep=1pt,circle,fill=black}] \deltaraw (0,0) node(x) [point,label=220:{\footnotesize$\omegaverline{x}$}] {}; \deltaraw [dotted] (0,-5) circle [radius=5]; \deltaraw (0,-5) node(q) [point,label=-120:{\footnotesize$0$}] {}; \deltaraw (8,1.5) node(p) [point,label=-270:{\footnotesize$y$}] {}; \deltaraw (q) -- (p); \deltaraw (2.5,-2.95) node(r) [point,label=-60:{\footnotesize$\mathbb Psi^*(y)$}] {}; \deltaraw (0,0) --(r); \deltaraw (3.9,-1.83) node(s) [point,label=0:{\footnotesize$\mathbb Psi(y)$}] {}; \varphiepsilonnd{tikzpicture} \caption{{\overline{\rm sign}\,mamall The point $\mathbb Psi^*(y)$.}} \lambdaabel{fig:proj} \varphiepsilonnd{figure} This shows \varphiepsilonqref{eq:des2}. Therefore, $$ \mathop{\mathbb E}_{\thetaheta\overline{\rm sign}\,maim f}(\thetaheta)\lambdae \frac{2\partiali \overline{\rm sign}\,maigma \,\overline{\rm sign}\,maqrt{N+1}}{1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}}}. $$ as desired, and hence, \betaegin{equation}\thetaag*{\qed} \int_t^{\frac{\partiali}{2}}f(\thetaheta)\mathrm{d}\thetaheta \lambdae \frac{2\partiali\overline{\rm sign}\,maigma\overline{\rm sign}\,maqrt {N+1}}{(1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}})t}. \varphiepsilonnd{equation} \partialroofof{Theorem~\ref{thm:main-local}} Let $t:=\alpharcsin\frac{1}{2\mathop{\mathscr C}(\omegaverline{x})}$. Since $\mathop{\mathscr C}(\omegaverline{x})\gammae 1$, $\frac{1}{2\mathop{\mathscr C}(\omegaverline{x})}\lambdae \frac12$ and we have $t\lambdae \frac{\partiali}{6}$. 
For all $\thetaheta \lambdaeq t$ and all $x\in B_{\mathbb{S}}(\omegaverline{x},\thetaheta)$ we have $$ \frac{1}{\mathop{\mathscr C}(x)}=d_{\overline{\rm sign}\,main}(x,\Sigmama)\gammae d_{\overline{\rm sign}\,main}(\omegaverline{x},\Sigmama) -d_{\overline{\rm sign}\,main}(x,\omegaverline{x}) \gammae \frac{1}{\mathop{\mathscr C}(\omegaverline{x})}-\overline{\rm sign}\,main\thetaheta \gammaeq \frac{1}{\mathop{\mathscr C}(\omegaverline{x})}-\frac{1}{2\mathop{\mathscr C}(\omegaverline{x})} \gammaeq \frac{1}{2\mathop{\mathscr C}(\omegaverline{x})}, $$ which implies $\lambdan (\mathop{\mathscr C}(x) ) \lambdae \lambdan ( 2\mathop{\mathscr C}(\omegaverline{x}) )$.\\ We apply Proposition~\ref{prop:tech} to $F(y)=\lambdan\mathop{\mathscr C}(y)$ and use the previous inequality and the bounds~\varphiepsilonqref{eq:smooth} and~\varphiepsilonqref{eq:smooth-pi} to obtain \betaegin{align}\lambdaabel{eq:final} \mathop{\mathbb E}_{y\overline{\rm sign}\,maim N(\omegaverline{x},\overline{\rm sign}\,maigma)} \lambdan\mathop{\mathscr C}(y) &= (1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}})\mathop{\mathbb E}_{\thetaheta \overline{\rm sign}\,maim f}\Big(\mathop{\mathbb E}_{x\in B_{\mathbb{S}}(\omegaverline{x},\thetaheta)}\lambdan\mathop{\mathscr C}(x)\Big) + e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}} \mathop{\mathbb E}_{x\in \mathbb{S}_+^N(\omegaverline x)} \lambdan\mathop{\mathscr C}(x)\nablaonumber\\ &\lambdae (1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}})\Big( \lambdan(2\mathop{\mathscr C}(\omegaverline{x}))\int_0^t f(\thetaheta)\mathrm{d} \thetaheta + \int_t^{\frac{\partiali}{2}}\Big(\lambdan\Big(\frac{Nd}{\overline{\rm sign}\,main\thetaheta}\Big)+K\Big) f(\thetaheta) \mathrm{d} \thetaheta\Big)\nablaonumber\\ &\quad + e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}}(\lambdan(Nd)+K)\nablaonumber\\ & \lambdae \lambdan \mathop{\mathscr C}(\omegaverline{x})(1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}})\int_0^t f(\thetaheta)\mathrm{d} \thetaheta + (1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}})\int_t^{\frac{\partiali}{2}}\lambdan\Big(\frac{1}{\overline{\rm sign}\,main\thetaheta}\Big) f(\thetaheta) \mathrm{d} \thetaheta\\ &\;+ \lambdan(Nd) \Big(e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}}+(1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}})\int_t^{\frac{\partiali}{2}} f(\thetaheta) \mathrm{d}\thetaheta\Big) + K,\nablaonumber \varphiepsilonnd{align} since $$(1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}})\betaig(\lambdan 2 \int_0^t f(\thetaheta)\mathrm{d}\thetaheta + K \int_t^{\frac{\partiali}{2}} f(\thetaheta )\mathrm{d}\thetaheta \betaig)+ Ke^{-\frac{1}{2\overline{\rm sign}\,maigma^2}} \lambdae K.$$ We next bound each of the first three terms in the right-hand side. Applying Lemma~\ref{lem:B} and the inequality $\overline{\rm sign}\,main(2t)\lambdaeq 2\overline{\rm sign}\,main t$ we obtain \betaegin{eqnarray*} (1-e^{\frac{-1}{2\overline{\rm sign}\,maigma^2}})\int_0^t f(\thetaheta)\mathrm{d} \thetaheta &\lambdae& \min\Big\{1- e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}},\frac12 (\overline{\rm sign}\,main(2t))^N + \frac{(\overline{\rm sign}\,main t)^N}{\overline{\rm sign}\,maigma^{N+1}}\Big\}\nablaonumber \\ &\lambdae& \min\Big\{1- e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}},\frac1{2(\mathop{\mathscr C}(\omegaverline{x}))^N} + \frac{1}{(2\mathop{\mathscr C}(\omegaverline{x}))^N\overline{\rm sign}\,maigma^{N+1}}\Big\}\\ &=& \min\Big\{1- e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}},\frac1{(2\mathop{\mathscr C}(\omegaverline{x}))^N} \Big( 2^{N-1}+\frac{1}{\overline{\rm sign}\,maigma^{N+1}}\Big)\Big\} . 
\varphiepsilonnd{eqnarray*} This bounds the first term in~\varphiepsilonqref{eq:final} by \betaegin{equation}\lambdaabel{eq:term1} \lambdan \mathop{\mathscr C}(\omegaverline{x}) \min\Big\{1- e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}},\frac1{(2\mathop{\mathscr C}(\omegaverline{x}))^N} \Big( 2^{N-1}+\frac{1}{\overline{\rm sign}\,maigma^{N+1}}\Big)\Big\}. \varphiepsilonnd{equation} Second, by Lemma~\ref{lem:A2} since $t\lambdae \frac{\partiali}{6}$, Lemma~\ref{lem:otra-cota} and $t\gammae \overline{\rm sign}\,main t=\frac{1}{2\mathop{\mathscr C}(\omegaverline{x})}$, \betaegin{align}\lambdaabel{eq:2do} \int_t^{\frac{\partiali}{2}}\lambdan&\Big(\frac{1}{\overline{\rm sign}\,main\thetaheta}\Big) f(\thetaheta) \mathrm{d} \thetaheta \lambdae \lambdan \overline{\rm sign}\,maqrt 2 + \int_{\frac{1}{2 \mathop{\mathscr C}(\omegaverline x)}}^{\frac{\overline{\rm sign}\,maqrt 2}{2}} \Big(\int_{t}^{\alpharcsin s} f(\thetaheta)\mathrm{d} \thetaheta\Big)\frac1s\mathrm{d} s \nablaonumber\\ \lambdae\; & \lambdan \overline{\rm sign}\,maqrt 2 + \int_{\frac{1}{2 \mathop{\mathscr C}(\omegaverline x)}}^{\frac{\overline{\rm sign}\,maqrt 2}{2}} \min\Big\{1-e^{-\frac1{2\overline{\rm sign}\,maigma^2}},\frac{2\partiali\overline{\rm sign}\,maigma\overline{\rm sign}\,maqrt{N+1}}{t}\Big\} \frac{1}{s}\mathrm{d} s \nablaonumber \\ \lambdae\; & \lambdan \overline{\rm sign}\,maqrt 2 + \min\Big\{1,\frac{2\partiali\overline{\rm sign}\,maigma\overline{\rm sign}\,maqrt{N+1}}{(1-e^{-\frac1{2\overline{\rm sign}\,maigma^2}})t}\Big\} \Big(\lambdan \Big(\frac{\overline{\rm sign}\,maqrt 2}{2}\Big) - \lambdan \frac{1}{ 2 \mathop{\mathscr C}(\omegaverline x)}\Big)\nablaonumber \\ =\; & \lambdan \overline{\rm sign}\,maqrt 2 + \min\Big\{1,\frac{4\partiali\overline{\rm sign}\,maigma\mathop{\mathscr C}(\omegaverline x)\overline{\rm sign}\,maqrt{N+1}}{1-e^{-\frac1{2\overline{\rm sign}\,maigma^2}}}\Big\} \lambdan(\overline{\rm sign}\,maqrt 2 \mathop{\mathscr C}(\omegaverline x) )\nablaonumber \\ \lambdae\; &\min\betaig\{ 1, \frac{4\partiali \overline{\rm sign}\,maigma \mathop{\mathscr C}(\omegaverline x) \overline{\rm sign}\,maqrt {N+1}}{1-e^{-\frac1{2\overline{\rm sign}\,maigma^2}}}\betaig\}\lambdan \mathop{\mathscr C}(\omegaverline x) + \lambdan 2. \varphiepsilonnd{align} Also, as $t\gammaeq 0$, we have by Lemma~\ref{lem:int1} that \betaegin{eqnarray*} \int_t^{\frac{\partiali}{2}}\lambdan\Big(\frac1{\overline{\rm sign}\,main\thetaheta}\Big)f(\thetaheta) \mathrm{d} \thetaheta &\lambdae& \frac{1}{N} \Big( 1 + \lambdan\betaig( 2^{N-1}+\frac {1}{\overline{\rm sign}\,maigma^{N+1}}\betaig)- \lambdan( 1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}}) \Big)\\ &\lambdae & \frac{1}{N} \Big(\lambdan\betaig( 2^{N-1}+\frac {1}{\overline{\rm sign}\,maigma^{N+1}}\betaig)- \lambdan( 1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}})\Big) +\lambdan 2. 
\varphiepsilonnd{eqnarray*} Putting together this inequality and~\varphiepsilonqref{eq:2do} we deduce that the second term in~\varphiepsilonqref{eq:final} is bounded by \betaegin{align}\lambdaabel{eq:term2} \min&\betaigg\{(1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}})\lambdan(\mathop{\mathscr C}(\omegaverline x)), 4\partiali \overline{\rm sign}\,maigma \mathop{\mathscr C}(\omegaverline x) \overline{\rm sign}\,maqrt {N+1}\lambdan(\mathop{\mathscr C}(\omegaverline x)), \nablaonumber\\ &\qquad \frac{(1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}})}{N}\Big(\lambdan\betaig( 2^{N-1}+\frac {1}{\overline{\rm sign}\,maigma^{N+1}}\betaig)- \lambdan( 1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}})\Big)\betaigg\}+\lambdan 2. \varphiepsilonnd{align} Finally, using again Lemma~\ref{lem:otra-cota} and $t\gammae \overline{\rm sign}\,main t=\frac{1}{2\mathop{\mathscr C}(\omegaverline{x})}$ we obtain \betaegin{eqnarray*}\lambdaabel{eq:3ro} e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}}+( 1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}})\int_t^{\frac{\partiali}{2}} f(\thetaheta)\mathrm{d}\thetaheta &\lambdaeq& e^{-\frac1{2\overline{\rm sign}\,maigma^2}}+ \min\Big\{1-e^{-\frac1{2\overline{\rm sign}\,maigma^2}},\frac{2\partiali\overline{\rm sign}\,maigma\overline{\rm sign}\,maqrt{N+1}}{t}\Big\}\nablaonumber \\ &\lambdaeq & \min\Big\{1,e^{-\frac1{2\overline{\rm sign}\,maigma^2}}+4\partiali\mathop{\mathscr C}(\omegaverline{x})\overline{\rm sign}\,maigma\overline{\rm sign}\,maqrt{N+1}\Big\} \varphiepsilonnd{eqnarray*} which bounds the third term in~\varphiepsilonqref{eq:final} by \betaegin{equation}\lambdaabel{eq:term3} \lambdan(Nd)\min\Big\{1,e^{-\frac1{2\overline{\rm sign}\,maigma^2}}+4\partiali\mathop{\mathscr C}(\omegaverline{x})\overline{\rm sign}\,maigma\overline{\rm sign}\,maqrt{N+1}\Big\}. \varphiepsilonnd{equation} Combining~\varphiepsilonqref{eq:term1},~\varphiepsilonqref{eq:term2} and~\varphiepsilonqref{eq:term3} with the bound in~\varphiepsilonqref{eq:final}, we obtain \betaegin{align}\lambdaabel{eq:GaussianBound} H(N,d,\overline{\rm sign}\,maigma,\mathop{\mathscr C}(\omegaverline x)) &= \lambdan \mathop{\mathscr C}(\omegaverline{x}) \min\Big\{1- e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}},\frac1{(2\mathop{\mathscr C}(\omegaverline{x}))^N} \Big( 2^{N-1}+\frac{1}{\overline{\rm sign}\,maigma^{N+1}}\Big)\Big\}\nablaonumber \\ & +\; \min\betaigg\{(1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}})\lambdan(\mathop{\mathscr C}(\omegaverline x)), 4\partiali \overline{\rm sign}\,maigma \mathop{\mathscr C}(\omegaverline x) \overline{\rm sign}\,maqrt {N+1}\lambdan \mathop{\mathscr C}(\omegaverline x),\\ &\qquad \frac{(1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}})}{N}\Big(\lambdan\betaig( 2^{N-1}+\frac {1}{\overline{\rm sign}\,maigma^{N+1}}\betaig)- \lambdan( 1-e^{-\frac{1}{2\overline{\rm sign}\,maigma^2}})\Big)\betaigg\}\nablaonumber \\ &+\; \lambdan(Nd)\min\Big\{1,e^{-\frac1{2\overline{\rm sign}\,maigma^2}}+4\partiali\mathop{\mathscr C}(\omegaverline{x})\overline{\rm sign}\,maigma\overline{\rm sign}\,maqrt{N+1}\Big\} + \omegaverline{K}, \nablaonumber \varphiepsilonnd{align} where $\omegaverline{K}=\lambdan 2 + K=3\lambdan 2+2$. We now verify that $ H(N,d,\overline{\rm sign}\,maigma,\mathop{\mathscr C}(\omegaverline x))$ satisfies {\betaf (LA0)}, {\betaf (LA1)} and {\betaf (LA2)}. 
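Numerically, the limiting behaviour of \varphiepsilonqref{eq:GaussianBound} is easy to observe. The following minimal sketch (assuming NumPy; the function name \texttt{gaussian\_local\_bound} and the sample values are ours, chosen only for illustration) tabulates the bound for small and large $\overline{\rm sign}\,maigma$; the formal verification follows.
\begin{verbatim}
import numpy as np

def gaussian_local_bound(N, d, sigma, C):
    # The bound H(N, d, sigma, C(xbar)) displayed above.
    e = np.exp(-1.0 / (2.0 * sigma ** 2))
    A = 2.0 ** (N - 1) + sigma ** -(N + 1)
    term1 = np.log(C) * min(1.0 - e, A / (2.0 * C) ** N)
    term2 = min((1.0 - e) * np.log(C),
                4.0 * np.pi * sigma * C * np.sqrt(N + 1) * np.log(C),
                (1.0 - e) / N * (np.log(A) - np.log(1.0 - e)))
    term3 = np.log(N * d) * min(1.0, e + 4.0 * np.pi * C * sigma * np.sqrt(N + 1))
    return term1 + term2 + term3 + 3.0 * np.log(2.0) + 2.0

N, d, C = 6, 3, 100.0                   # illustrative values only
for sigma in (1e-8, 1e-2, 1.0, 1e6):
    print(sigma, gaussian_local_bound(N, d, sigma, C))
# small sigma: close to ln C(xbar) + 3 ln 2 + 2; large sigma: close to ln(Nd) + 3 ln 2 + 2
print(np.log(C) + 3 * np.log(2) + 2, np.log(N * d) + 3 * np.log(2) + 2)
\end{verbatim}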
\begin{description}
\item[(LA0)] When $\mathop{\mathscr C}(\overline{x})=\infty$ we get
$$ H_\infty(N,d,\sigma)= \frac{(1-e^{-\frac{1}{2\sigma^2}})}{N}\Big(\ln\big( 2^{N-1}+\frac {1}{\sigma^{N+1}}\big)- \ln( 1-e^{-\frac{1}{2\sigma^2}})\Big) + \ln(Nd) + O(1), $$
which is that of \eqref{SmoothGaussianBound} (with a slightly bigger constant) as required in {\bf (LA0)}.
\item[(LA1)] When $\sigma\to0$, we have
$$\lim_{\sigma\to 0} H(N,d,\sigma,\mathop{\mathscr C}(\overline x))=\ln(\mathop{\mathscr C}(\overline{x}))+ \overline{K},$$
as required.
\item[(LA2)] Also, when $\sigma\to \infty$, we get
$$\lim_{\sigma\to \infty} H(N,d,\sigma,\mathop{\mathscr C}(\overline x))= \ln(Nd)+\overline{K},$$
and we recover the average-case analysis bound for $\mathop{\mathbb E}_{x\in\mathbb{S}^N}\ln(\mathop{\mathscr C}(x))$.
\end{description}
\eproof
\end{document}
\begin{document} \baselineskip=17pt \title{Additive Correlation and the Inverse Problem for the Large Sieve} \author[Brandon Hanson]{Brandon Hanson} \address{Pennsylvania State University\\ University Park, PA} \email{[email protected]} \date{}
\begin{abstract} Let $A\subset [1,N]$ be a set of positive integers with $|A|\gg \sqrt N$. We show that if $A$ avoids about $p/2$ residue classes modulo $p$ for each prime $p$, then $A$ must correlate additively with the squares $S=\{n^2:1\leq n\leq \sqrt N\}$, in the sense that we have the additive energy estimate \[E(A,S)\gg N\log N.\] This is, in a sense, optimal. \end{abstract}
\maketitle
\section{Introduction}
Let $A\subset [1,N]$ be a set of positive integers. For a positive integer $v$, let \[A_v=\{a\mod v: a\in A\}\subseteq \ZZ/v\ZZ\] denote the set of residue classes covered by $A$ modulo $v$, and let \[A(v;h)=\{a\in A: a\equiv h\mod v\}\subseteq A\] denote the set of elements of $A$ congruent to $h$ modulo $v$. The Large Sieve of Linnik (developed further by others, notably Montgomery \cite{M}) and the Larger Sieve of Gallagher are useful tools when trying to estimate the cardinality of sets $A$ for which $|A_p|$ is small for primes $p$, see for instance \cite[Chapter 9]{FI}. In fact, under the hypothesis that $|A_p|\leq \alpha p$ for each $p$, the Larger Sieve tells us the remarkably strong result $|A|\ll N^\alpha$ in a fairly elementary way. When $\alpha$ is about $1/2$, the estimate $|A|\ll N^{1/2}$ is sharp. This can be seen by taking $A$ to consist of the perfect squares up to $N$, or any other dense quadratic sequence. The quadratic case of the ``Inverse Conjecture for the Large Sieve'' states that such quadratic sequences are the unique extremizers. Here is a fairly crude form of the conjecture.
\begin{Conjecture}[Quadratic Inverse Large Sieve]\label{Conjecture} \footnote{In this article we make frequent use of the asymptotic notation $X\ll Y$, which means that $|X|\leq c|Y|$ for some constant $c$, or equivalently $X=O(Y)$. We also write $X\gg Y$ to mean $Y\ll X$.} Let $A\subset [1,N]$ be a set of integers such that $|A_p|\leq (p+1)/2$ for each $p$ and $|A|\gg N^{1/2}$. Then there is a subset $A'\subseteq A$ of size $|A'|\geq \frac{9}{10}|A|$ and such that $A'\subseteq q(\ZZ)$ for some quadratic polynomial $q(x)\in\QQ[x]$. \end{Conjecture}
\noindent Further reading about this conjecture can be found in \cite{CL, GH, HV, W1, W2}.
Let $X$ and $Y$ be two finite sets of integers. The additive energy between $X$ and $Y$ is the quantity \[E(X,Y)=|\{(x_1,x_2,y_1,y_2)\in X\times X\times Y\times Y:x_1+y_1=x_2+y_2\}|.\] This is a quantity intimately related to the sumset \[X+Y=\{x+y:x\in X, y\in Y\}.\] For context, the trivial estimates for additive energy are \[|X||Y|\leq E(X,Y)\leq |X||Y|\min\{|X|,|Y|\}.\] We will write $r_{X+Y}(n)$ and $r_{X-Y}(n)$ for the number of solutions to $x+y=n$ and $x-y=n$, respectively, with $x\in X$ and $y\in Y$. A few moments' thought reveals the following formulas: \begin{equation}\label{Energy1} E(X,Y)=\sum_{n}r_{X+Y}(n)^2 \end{equation} and \begin{equation}\label{Energy2} E(X,Y)=\sum_n r_{X-X}(n)r_{Y-Y}(n). \end{equation} The main theorem of this article is the following.
\begin{Theorem}\label{MainTheorem} Suppose $A$ is a set of positive integers in the interval $[1,N]$ satisfying the condition \[|A_p|\leq \frac{p}{2}+\eps(p)\] for all primes $p$ and some uniformly bounded sequence $\eps(p)\geq 0$. Let $S$ denote the set of perfect squares up to $N$.
We have \[E(A,S)\gg |A||S|+|A|^2\log N.\] In particular, if $|A|\gg \sqrt{N}$, then \[E(A,S)\gg |A||S|\log N.\] \end{Theorem}
At first glance, a factor of $\log N$ away from the trivial bound appears quite weak, so before proceeding, here are a few remarks about this theorem.

Firstly, a logarithmic factor is non-trivial. This can be observed by considering dense Sidon subsets of $[1,N]$. Recall that a set $X$ is called Sidon if all of its sums are distinct. It is well-known that there are Sidon subsets $X$ of $[1,N]$ of cardinality about $\sqrt N$, the existence of which was proved in \cite{BC}. For such sets $X$, $r_{X-X}(n)\leq 1$ for $n\neq 0$, so that by (\ref{Energy2}) \[E(X,S)=|X||S|+\sum_{n\neq 0}r_{X-X}(n)r_{S-S}(n)\leq |X||S|+\sum_{n}r_{S-S}(n)=|S|(|X|+|S|)\ll N.\] Now, if we consider arbitrary subsets $X,Y\subseteq [1,N]$ of integers, then $X+Y\subseteq[2,2N]$ and we have the obvious estimate $|X+Y|\leq 2N-1$. The Cauchy-Schwarz inequality gives \[|X|^2|Y|^2\leq (2N-1)\sum_{2\leq n\leq 2N} r_{X+Y}(n)^2=(2N-1)E(X,Y),\] showing that sets $X$ and $Y$ which are a little larger than $\sqrt N$ in size necessarily have some additive energy between them. So, while sets at the $\sqrt N$-threshold need not have any substantial additive correlation, they only just fail to do so. However, the squares and other sets which avoid residue classes are biased in arithmetic progressions, which are after all cosets of subgroups of $\ZZ$, and so this bias hints at a bit of underlying additive structure.

Secondly, this logarithmic factor is interesting in that it is what should be expected if Conjecture \ref{Conjecture} were to hold, and so is in that sense best possible. Indeed, a well-known theorem of Ramanujan in \cite{R} states \begin{equation}\label{Ramanujan} E(S,S)=\sum_{1\leq n\leq N}r_{S+S}(n)^2\sim\frac{1}{4}N\log N. \end{equation} By (\ref{Energy2}), Cauchy-Schwarz, and (\ref{Energy1}), we have for any sets of integers $X$ and $Y$, \begin{equation}\label{EnergyCauchySchwarz} E(X,Y)^2\leq E(X,X)E(Y,Y). \end{equation} If we believe Conjecture \ref{Conjecture}, then $A$ should look like a quadratic sequence $a=sx_a^2+t$ for rational $s$ and $t$ (by completing the square), so that $A$ is an affine transform of the squares. Inequality (\ref{EnergyCauchySchwarz}) shows that for $u\neq 0$ and arbitrary $v$, \[E(X,uX+v)^2\leq E(X,X)E(uX+v,uX+v)=E(X,X)^2,\] so that the logarithmic factor of Theorem \ref{MainTheorem} is the most one could hope to prove.

Finally, we mention that the inequality (\ref{EnergyCauchySchwarz}) also shows that among all sets $Y$ with additive energy comparable to that of $X$, $X$ is essentially the best ``additive partner'' for itself, in that $E(X,Y)$ is maximized when $Y=X$, provided $Y$ has additive structure comparable to that of $X$. In this sense, Theorem \ref{MainTheorem} supports Conjecture \ref{Conjecture}. One corollary of Theorem \ref{MainTheorem} is that, at the very least, there are $\gg\log N$ elements of $A$ in the image of a single quadratic. Thus we have proved an $\omega$-result. This is far weaker than Conjecture \ref{Conjecture} would give, but it is a positive result in this direction.
\begin{Corollary} Suppose $A$ is a set of positive integers in the interval $[1,N]$ satisfying the condition \[|A_p|\leq \frac{p}{2}+\eps(p)\] for all primes $p$ and some uniformly bounded sequence $\eps(p)\geq 0$. If $|A|\gg \sqrt N$, then there is a rational quadratic $q(x)\in \QQ[x]$ such that $|q(\ZZ)\cap A|\gg \log N$.
\end{Corollary}
\begin{proof} Since $E(A,S)\gg |A||S|\log N$, we have by (\ref{Energy1}), \[|A||S|\log N\ll\sum_n r_{A+S}(n)^2\leq|A||S|\max_n r_{A+S}(n),\] whence, for some $n$ and some $c>0$, there are at least $c\log N$ solutions $x_a$ to \[a=n-x_a^2\] with $a\in A$. The quadratic $q(x)=n-x^2$, which is in fact integral, will suffice. \end{proof}
Finally, it is known that dense Sidon subsets of $[1,N]$ are well-distributed modulo primes. This has been asserted in \cite{Li} and \cite{K}. We recover a similar result here.
\begin{Corollary} Suppose $A$ is a Sidon set of positive integers in the interval $[1,N]$ satisfying the condition \[|A_p|\leq \frac{p}{2}+\eps(p)\] for all primes $p$ and some uniformly bounded sequence $\eps(p)\geq 0$. Then \[|A|\ll\frac{\sqrt N}{\log N}.\] \end{Corollary}
\begin{proof} By Theorem \ref{MainTheorem} we have \[E(A,S)\gg |A|(|A|+\sqrt N)\log N.\] On the other hand \[E(A,S)\leq |A||S|+\sum_{n\neq 0}r_{A-A}(n)r_{S-S}(n)\leq (|A|+\sqrt N)\sqrt N\] since $A$ is Sidon. Rearranging gives the corollary. \end{proof}

\section{Lemmas and Proofs}
One of the main observations is that one can do a little bit better than the Larger Sieve by considering composite moduli.
\begin{Lemma}\label{CompositeModuli} Suppose $A$ is a set of integers such that $|A_p|\leq \frac{p}{2}+\eps(p)$ for each prime $p$. Let $\Delta:\NN\to \RR$ be the multiplicative function such that \[\Delta(p^k)=p^{k-1}\lr{\frac{p}{2}+\eps(p)}.\] Then for any $v\geq 1$, we have \[\frac{|A|^2}{\Delta(v)}\leq \sum_{h\mod v}|A(v;h)|^2.\] \end{Lemma}
\begin{proof} Since $|A_p|\leq \frac{p}{2}+\eps(p)$, we also have $|A_{p^k}|\leq p^{k-1}\lr{\frac{p}{2}+\eps(p)}$. By the Chinese Remainder Theorem, it follows that $|A_v|\leq \Delta(v)$. By Cauchy-Schwarz, \begin{align*} |A|^2=\lr{\sum_{h\mod v}|A(v;h)|}^2&\leq |A_v|\sum_{h\mod v}|A(v;h)|^2\\ &\leq\Delta(v)\sum_{h\mod v}|A(v;h)|^2. \end{align*} \end{proof}
In addition, we will use an estimate for averages of multiplicative functions. This particular estimate can be found in \cite[Appendix A]{FI} as Corollary A.6.
\begin{Lemma}\label{FI} Let $g:\NN\to \RR$ be a multiplicative function supported on square-free integers. Suppose \[g(p)=\frac{k}{p}+O\lr{\frac{1}{p^2}}\] for some integer $k\geq 1$. Then \[\sum_{n\leq x}g(n)=\frac{\fS_g}{k!}(\log x)^k+O\lr{(\log x)^{k-1}},\] where \[\fS_g=\prod_p\lr{1-\frac{1}{p}}^k(1+g(p)).\] \end{Lemma}
\begin{Corollary}\label{AverageDelta} Let $\Delta:\NN\to \RR$ be the multiplicative function such that \[\Delta(p)=\frac{p}{2}+\eps(p)\] for all primes $p$ and some uniformly bounded sequence $\eps(p)\geq 0$. Then, \[\sum_{n\leq x}\frac{n^2}{\Delta(n)}\gg x^2\log x.\] \end{Corollary}
\begin{proof} Since $\Delta$ is non-negative, we get a lower bound by summing over only those $n$ which are square-free. We have \[\Delta(p)^{-1}=\frac{2}{p}-\frac{4\eps(p)}{p(p+2\eps(p))}=\frac{2}{p}+O\lr{\frac{1}{p^2}},\] so that if \[M(x)=\sum_{n\leq x}\frac{\mu(n)^2}{\Delta(n)}\] then Lemma \ref{FI} gives \[M(x)=\frac{\fS_\Delta}{2}(\log x)^2+O\lr{\log x}.\] Here, \[\fS_\Delta=\prod_p\lr{1-\frac{1}{p}}^2\lr{1+\frac{2}{p}+O\lr{\frac{1}{p^2}}}=\prod_p\lr{1+O\lr{\frac{1}{p^2}}}\] is a convergent product. Again, since $\Delta(n)$ is non-negative, we are free to include only the terms with $n\geq x/C$ for some $C>1$. Thus \begin{align*} \sum_{n\leq x}\frac{n^2\mu(n)^2}{\Delta(n)}&\geq \sum_{x/C\leq n\leq x}\frac{n^2\mu(n)^2}{\Delta(n)}\\ &\geq \lr{x/C}^2\lr{M(x)-M(x/C)}\\ &\geq \lr{x/C}^2\log x\lr{\fS_\Delta\log C+O(1)}.
\end{align*} Since $\fS_\Delta$ is a convergent product and hence positive, if $C$ is large enough then $\fS_\Delta\log C+O(1)\geq 1$, and the result is proved. \end{proof}
\begin{Lemma}\label{DivisorsLowerBound} Let $A\subset[1,N]$ be a set of integers with \[|A_p|\leq\frac{p}{2}+\eps(p)\] for all primes $p$ and some uniformly bounded sequence $\eps(p)\geq 0$. Then \[\sum_{1\leq u<v\leq \sqrt N}r_{A-A}(uv)\gg|A|^2\log N.\] \end{Lemma}
\begin{proof} Observe that $a-b=uv$ with $1\leq u<v$ if and only if $a\equiv b\mod v$ and $b\leq a< b+v^2$, so that \begin{equation}\label{Identity} \sum_{1\leq u<v\leq \sqrt N}r_{A-A}(uv)=\sum_{1\leq v\leq \sqrt N}\sum_{\substack{a,b\in A\\ a\equiv b\mod v\\b\leq a< b+v^2}}1. \end{equation} For fixed $v$, we divide the interval $[1,N]$ into $J_v=[N/v^2]$ intervals $I_j=[jv^2,(j+1)v^2)$ of length $v^2$ (the last interval may be shorter). If $a,b$ are elements of $A\cap I_j$ with $b\leq a$ and $a$ congruent to $b$ modulo $v$, then they are counted in the inner sum of (\ref{Identity}). Thus, we produce a partition of $A$ into sets $A_j=A\cap I_j$ and deduce that \begin{align*} \sum_{1\leq u<v\leq \sqrt N}r_{A-A}(uv)&\gg \sum_{1\leq v\leq \sqrt N}\sum_{1\leq j\leq J_v}\sum_{h\mod v}|A_j(v;h)|^2\\ &\geq\sum_{1\leq v\leq \sqrt N}\frac{1}{\Delta(v)}\sum_{1\leq j\leq J_v}|A_j|^2, \end{align*} having applied Lemma \ref{CompositeModuli} to each of the sets $A_j$. By Cauchy-Schwarz, \[\sum_{1\leq j\leq J_v}|A_j|^2\geq\frac{|A|^2}{J_v}\geq \frac{v^2|A|^2}{N},\] so we have deduced \[\sum_{1\leq u<v\leq \sqrt N}r_{A-A}(uv)\gg\frac{|A|^2}{N}\sum_{1\leq v\leq \sqrt N}\frac{v^2}{\Delta(v)}.\] The lemma now follows from Corollary \ref{AverageDelta}. \end{proof}
\begin{proof}[Proof of Theorem \ref{MainTheorem}] By passing to a subset of $A$, we may assume that all elements of $A$ lie in a single congruence class modulo $4$. The cost of this is that $|A|$ decreases by a factor of at most $4$. The additive energy between $A$ and $S$ is \begin{align*} E(A,S)&=\sum_{m^2,n^2\in S}r_{A-A}(n^2-m^2)\\ &=|A||S|+2\sum_{1\leq m^2<n^2\leq N}r_{A-A}((n-m)(n+m)), \end{align*} where in the second line we have extracted the contribution from $r_{A-A}(0)$ and used that $r_{A-A}(k)=r_{A-A}(-k)$. Now, for $k\neq 0$, $k=n^2-m^2$ has a solution if and only if $k\not\equiv 2\mod 4$, and its solutions are indexed by pairs $(u,v)$ where $uv=k$, $u<v$, and $u\equiv v\mod 2$. This can be seen by considering the system \[n-m=u,\ n+m=v,\] which has as its solution \[n=\frac{v+u}{2},\ m=\frac{v-u}{2}.\] Thus \[E(A,S)\geq|A||S|+2\sum_{\substack{1\leq u<v\leq \sqrt N\\u\equiv v\mod 2}}r_{A-A}(uv).\] Because all elements of $A$ are in a single congruence class modulo $4$, $A-A$ is supported only on numbers which are divisible by $4$. Thus if $a-b=n=uv$ for some $a,b\in A$, either $u\equiv v\mod 2$ or else one of $(u/2,2v)$ or $(2u,v/2)$ is going to satisfy the necessary congruence condition. Thus, by reducing our count of pairs $(u,v)$ by a factor of at most $4$, we can remove the condition $u\equiv v\mod 2$, and we obtain a lower bound \[E(A,S)\geq \frac{1}{2}\sum_{1\leq u<v\leq \sqrt N/2}r_{A-A}(uv).\] The proof is complete after an application of Lemma \ref{DivisorsLowerBound}. \end{proof}
\end{document}
\begin{document}
\title{Entanglement of trapped-ion qubits separated by 230 meters: Supplemental material}
\author{V.~Krutyanskiy} \affiliation{Institut f\"ur Quantenoptik und Quanteninformation, \"Osterreichische Akademie der Wissenschaften, Technikerstr. 21a, 6020 Innsbruck, Austria} \affiliation{Institut f\"ur Experimentalphysik, Universit\"at Innsbruck, Technikerstr. 25, 6020 Innsbruck, Austria}
\author{M.~Galli} \affiliation{Institut f\"ur Experimentalphysik, Universit\"at Innsbruck, Technikerstr. 25, 6020 Innsbruck, Austria}
\author{V.~Krcmarsky} \affiliation{Institut f\"ur Quantenoptik und Quanteninformation, \"Osterreichische Akademie der Wissenschaften, Technikerstr. 21a, 6020 Innsbruck, Austria} \affiliation{Institut f\"ur Experimentalphysik, Universit\"at Innsbruck, Technikerstr. 25, 6020 Innsbruck, Austria}
\author{S.~Baier} \affiliation{Institut f\"ur Experimentalphysik, Universit\"at Innsbruck, Technikerstr. 25, 6020 Innsbruck, Austria}
\author{D.~A.~Fioretto} \affiliation{Institut f\"ur Experimentalphysik, Universit\"at Innsbruck, Technikerstr. 25, 6020 Innsbruck, Austria}
\author{Y.~Pu} \affiliation{Institut f\"ur Experimentalphysik, Universit\"at Innsbruck, Technikerstr. 25, 6020 Innsbruck, Austria}
\author{A.~Mazloom} \affiliation{Department of Physics, Georgetown University, 37th and O Sts. NW, Washington, DC 20057, USA}
\author{P.~Sekatski} \affiliation{Department of Applied Physics, University of Geneva, 1211 Geneva, Switzerland}
\author{M.~Canteri} \affiliation{Institut f\"ur Quantenoptik und Quanteninformation, \"Osterreichische Akademie der Wissenschaften, Technikerstr. 21a, 6020 Innsbruck, Austria} \affiliation{Institut f\"ur Experimentalphysik, Universit\"at Innsbruck, Technikerstr. 25, 6020 Innsbruck, Austria}
\author{M.~Teller} \affiliation{Institut f\"ur Experimentalphysik, Universit\"at Innsbruck, Technikerstr. 25, 6020 Innsbruck, Austria}
\author{J.~Schupp} \affiliation{Institut f\"ur Quantenoptik und Quanteninformation, \"Osterreichische Akademie der Wissenschaften, Technikerstr. 21a, 6020 Innsbruck, Austria} \affiliation{Institut f\"ur Experimentalphysik, Universit\"at Innsbruck, Technikerstr. 25, 6020 Innsbruck, Austria}
\author{J.~Bate} \affiliation{Institut f\"ur Experimentalphysik, Universit\"at Innsbruck, Technikerstr. 25, 6020 Innsbruck, Austria}
\author{M.~Meraner} \affiliation{Institut f\"ur Quantenoptik und Quanteninformation, \"Osterreichische Akademie der Wissenschaften, Technikerstr. 21a, 6020 Innsbruck, Austria} \affiliation{Institut f\"ur Experimentalphysik, Universit\"at Innsbruck, Technikerstr. 25, 6020 Innsbruck, Austria}
\author{N.~Sangouard} \affiliation{Institut de Physique Th\'eorique, Universit\'e Paris-Saclay, CEA, CNRS, 91191 Gif-sur-Yvette, France}
\author{B.~P.~Lanyon} \email[Correspondence should be sent to]{ [email protected]} \affiliation{Institut f\"ur Quantenoptik und Quanteninformation, \"Osterreichische Akademie der Wissenschaften, Technikerstr. 21a, 6020 Innsbruck, Austria} \affiliation{Institut f\"ur Experimentalphysik, Universit\"at Innsbruck, Technikerstr. 25, 6020 Innsbruck, Austria}
\author{T.~E.~Northup} \affiliation{Institut f\"ur Experimentalphysik, Universit\"at Innsbruck, Technikerstr.
25, 6020 Innsbruck, Austria}
\renewcommand{\theequation}{S\arabic{equation}}
\date{\today}
\begin{abstract} \end{abstract}
\maketitle

\section{Ion-trap network nodes}
\paragraph*{Overview.} \label{sec:overview} The ion-trap network nodes are both in room-temperature vacuum chambers and employ the same basic design. Specifically, a macroscopic linear Paul trap is rigidly suspended from the top flange of each vacuum chamber; thus, the ion's motional mode along the trap's axis of symmetry (the axial mode) is vertical, and the two other modes (radial modes) lie in the horizontal plane. An in-vacuum optical cavity around the ion trap is mounted via nanopositioning stages on the bottom flange of each chamber; the cavity axis is a few degrees off horizontal. Both cavities are \SI{20}{\milli\meter} long and in the near-concentric regime, corresponding to microscopic waists at the ion location. Ions are loaded into each trap using a resistively heated oven of atomic calcium and a two-photon ionization process driven by lasers at \SI{422}{\nano\meter} and \SI{375}{\nano\meter}. Details on Node A can be found in~\cite{Russo2009, Stute2012a, Friebe2019}. Details on Node B can be found in~\cite{Krutyanskiy2019, Schupp2021, Schupp2021a}.
\paragraph*{Cavity parameters.} At Node A, the transmission of the cavity mirrors at \SI{854}{\nano\meter} was measured to be \SI{13(1)}{ppm} for the output mirror and \SI{1.3(3)}{ppm} for the second mirror~\cite{Stute2012a}, with a probability of \SI{20(2)}{\percent} that a photon in the cavity mode leaves the cavity through the output mirror~\cite{Casabone2015a}. At Node B, the measured transmission values at \SI{854}{\nano\meter} are \SI{90(4)}{ppm} for the output mirror and \SI{2.9(4)}{ppm} for the second mirror, and the probability that a photon leaves through the output mirror is \SI{78(3)}{\percent}~\cite{Schupp2021}. The decay rates of the cavity fields, measured via cavity ringdown, are $\kappa_A = 2\pi \times \SI{68.4(6)}{\kilo\hertz}$~\cite{Friebe2019} and $\kappa_B = 2\pi \times \SI{70(2)}{\kilo\hertz}$~\cite{Schupp2021}.
\paragraph*{Trap frequencies.} At Node A, the frequencies of the axial and radial modes are $(\omega_\text{ax},\omega_\text{r1},\omega_\text{r2}) = 2\pi \times (1.13,1.70,1.76)\,{\rm MHz}$. At Node B, they are $2\pi \times (0.92,2.40,2.44)\,{\rm MHz}$~\cite{Schupp2021}.
\paragraph*{Ion-cavity geometry.} For the remaining discussions in this section, we use a Cartesian coordinate system with three orthogonal axes: $x$, $y$ and $z$. At each node, the $z$ axis is the ion trap's axis of symmetry, defined by the line connecting the trap's DC endcap electrodes, which is the axis of the ion's motion at frequency $\omega_\text{ax}$. The $xz$ plane is defined as the plane containing both the $z$ axis and the cavity axis. The cavity axis subtends an angle with respect to the $x$ axis of \SI{4}{\degree} at both Node~A and Node~B~\cite{Stute2012a,Casabone2013, Schupp2021a}.
\paragraph*{Quantization axis.} At each node, the atomic quantization axis is chosen to be parallel to the axis of an applied static magnetic field. This magnetic-field axis is set to subtend an angle of \SI{45}{\degree} with respect to the $z$ axis and to be perpendicular to the cavity axis; at Node B, it is likely a few degrees off from perpendicular. At Node A, a magnetic field of \SI{4.2303(2)}{G} is set by DC currents in coils attached to the outside of the vacuum chamber.
At Node B, a magnetic field of \SI{4.1713(4)}{G} is set by permanent magnets attached to the outside of the vacuum chamber. Both field strengths are measured via Ramsey spectroscopy of a single ion.
\paragraph*{Laser beam geometry.} A bichromatic laser field at \SI{393}{\nano\meter} drives the cavity-mediated Raman transition. At each node, the propagation direction of the Raman laser field is parallel to the magnetic-field axis. The field is circularly polarized in order to maximize the coupling strength on the $\ket{S}\equiv\ket{4^2S_{1/2},m_j=-1/2}$ to $\ket{P}\equiv\ket{4^2P_{3/2},m_j=-3/2}$ transition. This coupling is depicted in Fig.~1c of the main text. At Node A, Doppler cooling and state detection are implemented using \SI{397}{\nano\meter} laser fields along two axes and an \SI{866}{\nano\meter} field along a third axis. Optical pumping and ion-qubit rotations are implemented using a \SI{729}{\nano\meter} field that lies in the $xz$ plane at an angle of \SI{45}{\degree} with respect to the $z$ axis. At Node B, Doppler cooling is implemented using a single beam path that lies in the $xz$ plane at an angle of \SI{45}{\degree} with respect to the $z$ axis, along which both \SI{397}{\nano\meter} and \SI{866}{\nano\meter} laser fields are sent. Optical pumping is implemented using a second, circularly polarized, \SI{397}{\nano\meter} laser field in a direction parallel to the magnetic-field axis. Ion-qubit rotations are implemented using a \SI{729}{\nano\meter} field at an angle of \SI{45}{\degree} with respect to the $z$ axis.

\section{Fiber-optic channels}
\label{sec:channels}
\paragraph*{Fiber bundles.} The laboratories in which Nodes A and B are located are connected with two optical fiber bundles, each of which contains eight single-mode optical fibers. The bundles are installed along the same path between the laboratories, which follows underground corridors but includes a section several tens of meters in length that is exposed to outdoor air. Three optical signals are sent between the laboratories using the fiber bundles, each in a different fiber:
\begin{enumerate}
\item \SI{854}{\nano\meter} single photons,
\item \SI{1550}{\nano\meter} laser light carrying digital trigger signals,
\item \SI{854}{\nano\meter} laser light that is used to match the resonance frequencies of the cavities.
\end{enumerate}
Signal~1 is sent through one of the bundles. Signals~2 and 3 are sent through different fibers in the other bundle. None of the fibers are polarization-maintaining.
\paragraph*{Stabilization of fiber polarization dynamics.} Signal~1 consists of single photons that travel from Node A over one fiber bundle and through local fiber extensions to reach the photonic Bell-state measurement (PBSM) setup introduced in the main text. Every 20 minutes during attempts to generate remote ion entanglement, the polarization rotation of this fiber channel is characterized and corrected for, a process that takes a few minutes. The polarization rotation is characterized via quantum process tomography, for which six input states are injected sequentially into the channel: single photons with horizontal, vertical, diagonal, antidiagonal, right-circular, and left-circular polarizations. The single photons are produced at Node A via a monochromatic cavity-mediated Raman process that is repumped continuously at \SI{854}{\nano\meter}; this process generates linearly polarized photons with a measured contrast ratio of 10.5:1.
After exiting the vacuum chamber, the photons pass through motorized waveplates, which we use to prepare the six input states. For each input state, the output state is analyzed using existing components at the PBSM setup (a polarizing beam splitter and photon detectors) along with additional waveplates. We perform measurements in sufficiently many bases to reconstruct each output state via quantum state tomography. A numerical search is then carried out over the data from all six states to find the nearest unitary polarization rotation, which we identify as the transformation of the fiber channel. Finally, at the input to the PBSM setup, the angles of three waveplates (a half-waveplate sandwiched between two quarter-waveplates) are set so that, collectively, the waveplates implement the inverse of the unitary operation, thereby correcting for the transformation of the channel.

\section{Photonic Bell-State Measurement (PBSM) setup}
\label{sec:PBSM}
A simplified schematic of the PBSM setup is shown in Fig.~1b of the main text. The three waveplates described in the previous paragraph are not depicted in the figure. They are located between the output fiber coupler from Node A and the nonpolarizing beamsplitter. Two additional waveplates, also not depicted, are located between the output fiber coupler from Node B and the nonpolarizing beamsplitter. They consist of a quarter-waveplate and a half-waveplate and are used for calibration and analysis of the ion--photon state from Node B. As shown in Fig.~1b, the PBSM setup has four single-photon detectors: two for each output mode of the nonpolarizing beamsplitter. In one of the beamsplitter output paths, the two detectors are single-photon counting modules (SPCMs); in the other output path, they are superconducting nanowire single-photon detectors (SNSPDs). To determine the background counts and efficiency for each detector, we execute the same sequence as used for ion--ion entanglement (described in detail in Sec.~\ref{sec:exp_seq}) with one difference: photon detection does not terminate the photon-generation loop. In order to evaluate these values for each node separately, we block the beam path from the other node. First, we define the background window as the interval $[t = \SI{70}{\micro\second},t = \SI{100}{\micro\second}]$, where, as in the main text, $t = 0$ indicates the start of the \SI{50}{\micro\second} detection window. No photons generated by an ion are expected in this window, as the Raman laser pulse has been off for at least \SI{20}{\micro\second}. We determine the mean value of background counts per second as well as the probability $p_{{\rm bg-det}_r}$ of a background count during the detection window, where the detection window is defined as $[t = \SI{5.5}{\micro\second},t = \SI{23}{\micro\second}]$. Next, we determine the mean photon number within the detection window and subtract $p_{{\rm bg-det}_r}$, yielding the probability $p^{k}_{{\rm det}_r}$ of detecting a photon at detector $r$ due to the Raman process at node $k \in \{\text{A},\text{B}\}$ within this window. All values are summarized in Tab.~\ref{tab:detectors}. These values are used in the empirical model of Sec.~\ref{Empirical_model} in order to evaluate the influence of background counts on the ion--ion density matrices.
\newcolumntype{M}[1]{>{\centering\arraybackslash}m{#1}}
\begin{table}[h!]
\begin{center}
\begin{tabular}{|M{1.5cm}||M{1.7cm} M{1.6cm} M{1.4cm} M{1.4cm}||}
\hline
detector $r$ & background (1/s) & $p_{{\rm bg-det}_r}$ (\%) & $p^{\rm A}_{{\rm det}_r}$ (\%) & $p^{\rm B}_{{\rm det}_r}$ (\%) \\ [1ex]
\hline\hline
SPCM$_1$ & 9.69 & 0.017 & 0.08 & 1.30 \\ [1ex] \hline
SPCM$_2$ & 9.37 & 0.016 & 0.12 & 1.96 \\ [1ex] \hline
SNSPD$_1$ & 0.25 & 0.0004 & 0.19 & 2.82 \\ [1ex] \hline
SNSPD$_2$ & 2.00 & 0.0035 & 0.24 & 3.62 \\ [1ex] \hline
\end{tabular}
\caption{Background counts, background-count probability within each detection window, and background-subtracted detection probability for each node, for each of the four detectors.}
\label{tab:detectors}
\end{center}
\end{table}

\section{Experimental sequences}
\label{sec:exp_seq}
\paragraph*{Initialization and handshake.} At each node, we implement a finite-length and node-specific sequence. The sequences at both Nodes A and B begin with Doppler cooling a single ion for at least \SI{1.52}{\milli\second}. Subsequently,
\begin{enumerate}
\item Node A sets TTL$^{A\rightarrow B}$ high on a \SI{1550}{\nano\meter} communication channel to Node B (Signal 2 in Sec.~\ref{sec:channels}).
\item Upon receipt of the high TTL$^{A\rightarrow B}$, Node B sets TTL$^{B\rightarrow A}$ high on another communication channel on the same optical fiber to Node A. (The optical multiplexer supports four communication channels on one fiber.)
\item Upon receipt of the high TTL$^{B\rightarrow A}$, Node A sets TTL$^{A\rightarrow B}$ to low.
\item Upon receipt of the low TTL$^{A\rightarrow B}$, Node B sets TTL$^{B\rightarrow A}$ to low, completing the handshake.
\end{enumerate}
Appropriate wait times are added between the operations to allow for processing and signal travel time at both nodes. The shortest time for a handshake is about \SI{10}{\micro\second}. We estimate a remote clock-frequency mismatch of at most \SI{50}{\milli\hertz}, which has a negligible effect on sequence synchronization given the maximum sequence length of \SI{11.9}{\milli\second}. Following the handshake, the sequences at both nodes enter a photon generation loop.
\paragraph*{Photon generation loop.} Each iteration of the loop consists of the following operations:
\begin{enumerate}
\item Doppler cooling,
\begin{itemize}
\item Node A: \SI{63}{\micro\second}
\item Node B: \SI{60}{\micro\second} + wait time
\end{itemize}
\item optical pumping,
\begin{itemize}
\item Node A: \SI{280}{\micro\second}
\item Node B: \SI{60}{\micro\second} + wait time
\end{itemize}
\item a bichromatic Raman laser pulse,
\begin{itemize}
\item Node A: \SI{50}{\micro\second}
\item Node B: \SI{50}{\micro\second}
\end{itemize}
\item a wait time for a signal that heralds coincident photon detection to be received at both nodes.
\end{enumerate}
Each iteration lasts \SI{420}{\micro\second}. The loop is iterated up to 20 times. In the absence of coincident photon detection within any of the 20 iterations, the initialization and handshake are repeated. In the case of coincident detection of two photons produced within the same iteration, the loop is terminated, and the sequences proceed to ion-qubit measurement.
\paragraph*{Ion-qubit measurement.} Measurement of the ion's electronic state at each node proceeds in three steps:
\begin{enumerate}
\item A \SI{729}{\nano\meter} $\pi$-pulse maps the state $\ket{D}\equiv\ket{3^2D_{5/2},m_j=-5/2}$ to $\ket{S}$ at Node A.
As a result, information that was encoded in a superposition of $\ket{D}$ and $\ket{D'}\equiv\ket{3^2D_{5/2},m_j=-3/2}$ at Node A is now encoded in a superposition of $\ket{S}$ and $\ket{D'}$. At the same time, at Node B, a \SI{729}{\nano\meter} $\pi$-pulse maps the state $\ket{D'}\equiv\ket{3^2D_{5/2},m_j=-3/2}$ to $\ket{S}$, so that the encoding there is in a superposition of $\ket{S}$ and $\ket{D}$. It is irrelevant whether $\ket{D'}$ or $\ket{D}$ is used for the measurement encoding; the experimenters at the two nodes just happened to make different choices.
\begin{itemize}
\item Node A: \SI{5.2}{\micro\second}
\item Node B: \SI{11.1}{\micro\second}
\end{itemize}
\item An optional \SI{729}{\nano\meter} $\pi/2$-pulse is implemented on the $\ket{S}$ to $\ket{D'}$ transition at Node A and on the $\ket{S}$ to $\ket{D}$ transition at Node B~\cite{Haeffner2008}. The pulse is implemented when the ion-qubit is to be measured in the Pauli $\sigma_x$ or $\sigma_y$ basis; we set the optical phase of the pulse to determine in which of the two bases the measurement is made. The pulse is not implemented when the ion-qubit is to be measured in the $\sigma_z$ basis.
\begin{itemize}
\item Node A: \SI{4.3}{\micro\second}
\item Node B: \SI{7.81}{\micro\second}
\end{itemize}
\item A projective fluorescence measurement on the \SI{397}{\nano\meter} $4^2S_{1/2} \leftrightarrow 4^2P_{1/2}$ transition determines whether the ion is in $\ket{S}$ or $\ket{D'}$ at Node A, and whether it is in $\ket{S}$ or $\ket{D}$ at Node B. A photomultiplier tube is used to collect fluorescence.
\begin{itemize}
\item Node A: \SI{1.5}{\milli\second}
\item Node B: \SI{1.5}{\milli\second}
\end{itemize}
\end{enumerate}

\section{Ion-ion state fidelities}
\label{sec:ion-ion_f}
In this section, we explain how uncertainties are calculated for the ion--ion state fidelities presented in the main text. As described in the main text, the joint state of two remote ions is characterized via quantum state tomography, yielding the density matrices $\rho^{\pm}(T)$, where $\rho^+$ and $\rho^-$ are reconstructed for the coincidences that should herald the Bell states $|\Psi^+\rangle$ and $|\Psi^-\rangle$, respectively, and $T$ is the maximum time difference between coincident photons for which entanglement is heralded. The state $\rho^+$ is obtained if coincident detection occurs in the output path of the beamsplitter in which the SNSPDs are placed, while $\rho^-$ is obtained if coincident detection occurs in opposite beamsplitter outputs, i.e., for the two combinations of a coincidence at one SPCM and one SNSPD. The fidelity is determined according to the expression $F^{\pm}(T) \equiv \langle \Psi^{\pm}| \rho^{\pm}(T) | \Psi^{\pm} \rangle$. We use Monte Carlo resampling~\cite{Efron93} to obtain the uncertainties in $F^{\pm}(T)$: Recall that $\rho^{\pm}(T)$ is determined from a set of measurement outcomes, which we can express as a vector. It is assumed that noise on these measurement outcomes is due to projection noise. We numerically generate $M = 200$ vectors of ``noisy'' observations based on a multinomial distribution around the experimentally recorded values. For each of these vectors, we reconstruct a density matrix just as for the experimental data, via the maximum likelihood technique. As a result, for each state $\rho^{\pm}(T)$ reconstructed directly from the raw data, we have $M$ states reconstructed from simulated data.
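The resampling step can be illustrated with a short numerical sketch. The following Python snippet is not the analysis code used for this work: it replaces the maximum-likelihood reconstruction and fidelity evaluation by a simple stand-in function of the counts and uses a single count vector in place of the full tomographic record, but it shows how the multinomial resampling is performed and how the asymmetric uncertainties defined in the next paragraph are extracted from the resulting distribution.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=1)

# One vector of measurement outcomes (counts); in the experiment there is one
# such vector per tomography setting, here a single vector is used for brevity.
counts = np.array([480, 12, 15, 493])
n_shots = counts.sum()
p_hat = counts / n_shots            # empirical outcome probabilities

def quantity_of_interest(c):
    # Stand-in for: maximum-likelihood reconstruction of rho followed by
    # F = <Psi|rho|Psi>; here simply a function of the counts.
    return (c[0] + c[3]) / c.sum()

F = quantity_of_interest(counts)    # value obtained from the raw data

# Monte Carlo resampling: M "noisy" count vectors drawn from a multinomial
# distribution around the recorded values (projection noise only).
M = 200
samples = rng.multinomial(n_shots, p_hat, size=M)
values = np.array([quantity_of_interest(c) for c in samples])

F_m, delta = values.mean(), values.std(ddof=1)
upper = (F_m + delta) - F           # asymmetric uncertainties (see below)
lower = F - (F_m - delta)
print(f"F = {F:.4f} (+{upper:.4f} / -{lower:.4f})")
\end{verbatim}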
We calculate the value of some quantity of interest, e.g., the fidelity $F^{\pm}(T)$, not only for $\rho^{\pm}(T)$ but also for the associated $M$ states, yielding a distribution $D$ of values with mean $F_{m}$ and standard deviation $\delta$. The uncertainties are then given by $F^{\pm}(T)^{+(F_m+\delta-F)}_{-(F-F_m+\delta)}$, where $F$ is shorthand for $F^{\pm}(T)$. If $F^{\pm}(T)$ is optimized over the phase $\phi$ of the Bell state, then this calculation is carried out for each value of $\phi$.

\section{Ion-photon state fidelities}
Here, we provide more details on the calibration measurement of ion--photon entanglement that was carried out at each node immediately prior to ion--ion entanglement. For the ion--photon state generated at Node~B, photons were analyzed using the PBSM setup, details of which are given in Sec.~\ref{sec:PBSM}. Specifically, photon counts were recorded on the two SNSPDs. For the ion--photon state generated at Node~A, photons were analyzed using a separate setup in the Node~A laboratory. For each ion--photon state, measurements are made in all nine combinations of the Pauli measurement bases for two qubits~\cite{James2001}. The measurement basis of the photon is changed using waveplates in the photon analysis path. Tomographic reconstruction via the maximum likelihood technique yields the ion--photon density matrices $\rho^{\rm ion-photon}_k$ for $k \in \{\text{A},\text{B}\}$. The fidelities given in the main text are calculated as $\langle \Psi^{\theta}_k | \rho^{\rm ion-photon}_k | \Psi^{\theta}_k \rangle$, where $\ket{\Psi^{\theta}_k} = 1/\sqrt{2}\left (|\text{DV}\rangle + e^{\mathrm{i} \theta} |\text{D}'\text{H}\rangle\right)$ is the maximally entangled two-qubit state nearest to the state $\rho^{\rm ion-photon}_k$, found by numerical optimization over $\theta$. The method used to determine uncertainties in these fidelities is described in Sec.~\ref{sec:ion-ion_f}, where we replace the vector of ion--ion measurement outcomes by a vector of ion--photon measurement outcomes.

\section{Empirical model for the ion--ion density matrix}
\label{Empirical_model}
The target states for ion--ion entanglement are the two Bell states in Eq.~1 of the main text:
\begin{equation}
|\Psi^{\pm} \rangle = 1/\sqrt{2} \left( \ket{\rm D_A D'_B} \pm e^{\mathrm{i} \phi} \ket{\rm D'_A D_B} \right).
\label{eq:eq1s}
\end{equation}
The corresponding density matrices are $\rho^\pm = \ket{\Psi^{\pm}}\bra{\Psi^{\pm}}$. Here we describe an empirical model for the density matrix $\rho$ heralded by two-photon detection in our experiments. For this model, we adapt $\rho^\pm$ to account for three sources of infidelity: detector background counts, photon distinguishability due to spontaneous emission, and imperfect ion--photon entanglement. We first account for detector background counts. We define $p_{mn}$ as the probability to detect the ion at Node~A in state $m$ and the ion at Node~B in state $n$ in a single experimental trial, where $m, n \in \{\text{D},\text{D}'\}$. In the absence of detector background counts and all other imperfections, $p_\text{DD} = p_{\text{D}'\text{D}'} = 0$.
We write
\begin{align}
p_\text{DD} &= p_{\rm ph-bg}/4 +p_{\rm bg-bg}/4, \nonumber \\
p_{\text{D}'\text{D}'} &= p_\text{DD}, \nonumber \\
p_{\rm DD'} &= p_{\rm ph-ph}/2 + p_{\rm ph-bg}/4 +p_{\rm bg-bg}/4,\nonumber\\
p_{\rm D'D} &= p_{\rm DD'},
\label{eq:bkgd_model}
\end{align}
where $p_{\rm ph-bg}$, $p_{\rm bg-bg}$, and $p_{\rm ph-ph}$ are the probabilities to detect a coincidence in a single experimental trial between a photon and a background count, between two background counts, and between two photons. The scaling factors account for the chance to measure a certain ion-ion correlator given a coincidence. An underlying assumption of Eqs.~\eqref{eq:bkgd_model} is that when a coincidence due to one or two background counts occurs, it is equally likely to find the two ions in each of their four possible states. This assumption is valid for the Bell states considered here, and it will still be valid when we introduce a depolarizing channel to model imperfect ion-photon entanglement later in this section. The ion--ion density matrix that accounts for background counts is given by
\begin{widetext}
\begin{align}
\rho_{\rm bg}^{\pm} = \frac{1}{\sum_{m,n} p_{mn}} ~~
\begin{blockarray}{ccccc}
\ket{\rm D'_A, D'_B} & \ket{\rm D'_A, D_B} & \ket{\rm D_A, D'_B} & \ket{\rm D_A, D_B} \\
\begin{block}{(cccc)c}
p_{\rm D'D'} & 0 & 0 & 0 & \:\bra{\rm D'_A, D'_B}\\
0 & p_{\rm D'D} & \pm e^{-\mathrm{i}\phi}p_{\rm ph-ph}/2 & 0 & \:\bra{\rm D'_A, D_B}\\
0 & \pm e^{\mathrm{i}\phi}p_{\rm ph-ph}/2 & p_{\rm DD'} & 0 & \:\bra{\rm D_A, D'_B}\\
0 & 0 & 0 & p_{\rm DD} & \:\bra{\rm D_A, D_B}\\
\end{block}
\end{blockarray}
\end{align}
\end{widetext}
The matrix $\rho_{\rm bg}$ can also be expressed as
\begin{equation}
\rho_{\rm bg}^{\pm} = \frac{1}{\sum_{m,n} p_{mn}} \left( p_{\rm ph-ph}\, \rho^\pm+\frac{p_{\rm tot-bg}}{4}\mathbbm{1}\right),
\end{equation}
where $p_{\rm tot-bg}=p_{\rm ph-bg} +p_{\rm bg-bg}$ and $\mathbbm{1}$ is the two-qubit identity matrix. Here one sees more clearly that the background-count model acts to add white noise to the ion-ion state. In general, for a given detector combination $\rm det_1$ and $\rm det_2$, one can write the coincidence probabilities as:
\begin{align}
p_{\rm ph-ph} &= p^{\rm A}_{\rm det_1} \times p^{\rm B}_{\rm det_2} + p^{\rm B}_{\rm det_1} \times p^{\rm A}_{\rm det_2},\nonumber\\
p_{\rm ph-bg} &= (p^{\rm A}_{\rm det_1}+p^{\rm B}_{\rm det_1})\times p_{\rm bg-det_2}\nonumber\\&+ (p^{\rm A}_{\rm det_2}+p^{\rm B}_{\rm det_2})\times p_{\rm bg-det_1},\nonumber\\
p_{\rm bg-bg} &= p_{\rm bg-det_1} \times p_{\rm bg-det_2},
\label{eq:eq4s}
\end{align}
where $p^{k}_{{\rm det}_r}$ is the probability of detecting a photon at detector $r$ emitted by node $k \in \{\text{A},\text{B}\}$ and $p_{{\rm bg-det}_r}$ is the probability to get a background count within the detection window at detector $r$. Note that we use four detectors, two of which are SNSPDs and two of which are SPCMs. Background counts and efficiencies have been measured independently for each detector (Sec.~\ref{sec:PBSM}), from which we calculate the probabilities in Eqs.~\eqref{eq:eq4s}. Second, we account for photon distinguishability using a two-qubit dephasing channel.
We define a completely dephased density matrix, for which we set the off-diagonal elements of $\rho_{\rm bg}$ to zero:
\begin{align}
\rho_{\rm bg,dephase} = \frac{1}{\sum_{m,n} p_{mn}}
\begin{pmatrix}
p_{\rm D'D'} & 0 & 0 & 0 \\
0 & p_{\rm D'D} & 0 & 0 \\
0 & 0 & p_{\rm DD'} & 0 \\
0 & 0 & 0 & p_{\rm DD}
\end{pmatrix}.
\end{align}
The probability for dephasing in the channel is parameterized by the Hong-Ou-Mandel interference visibility $V$. The density matrix $\rho_{\rm dist}^{\pm}$ accounts for both background counts and photon distinguishability:
\begin{equation}
\rho_{\rm dist}^{\pm} = V \times\rho_{\rm bg}^{\pm} + (1 - V) \times \rho_{\rm bg,dephase}.
\label{eq:rho_dist}
\end{equation}
As discussed in the main text, the value of $V$ is experimentally determined as a function of the coincidence window for photon detection. In the absence of background counts or other imperfections, Eq.~\eqref{eq:rho_dist} predicts an ion--ion state of the form $\rho^{\pm}(1+V)/2 + \rho^{\mp}(1-V)/2$. An equivalent model of the effect of photon distinguishability on entanglement swapping is derived in the Supplemental Material of Ref.~\cite{Craddock2019}; see in particular Eq.~(S29). Finally, we account for imperfect ion--photon entanglement at Nodes A and B, for which we introduce a two-qubit depolarizing channel. We define $F'_{\rm ip,k}$ to be the fidelity of ion--photon entanglement with respect to a maximally entangled state at node $k$, where $F'_{\rm ip,k}$ has been corrected for background counts, and we define $\rho_{\rm depol}$ to be a completely depolarized density matrix:
\begin{align}
\rho_{\rm depol} = \frac{1}{4} \mathbbm{1}.
\end{align}
If we assume that the infidelity $1-F'_{\rm ip,k}$ is due to depolarizing noise, and that the entanglement-swapping process that creates ion--ion entanglement between Nodes A and B is perfect, then the fidelity of ion--ion entanglement with respect to a maximally entangled state is given by \cite{Briegel2000}
\begin{equation}
F'_{\rm ii} = \frac{1}{4} \left( 1 + 3\left( \frac{4F'_{\rm ip,A}-1}{3} \right) \left( \frac{4F'_{\rm ip,B}-1}{3} \right) \right).
\end{equation}
We can then describe the depolarizing channel that generates the state with fidelity $F'_{\rm ii}$ with a parameter $\lambda$ \cite{Horodecki99}:
\begin{equation}
\rho^\pm = \lambda \times \rho_{\rm dist}^{\pm} + (1 -\lambda) \times \rho_{\rm depol},
\end{equation}
where
\begin{align}
\lambda = \frac{4 F'_{\rm ii}-1}{3}.
\end{align}
We thus arrive at the density matrix $\rho^\pm$ from which the fidelities plotted in Fig.~2d of the main text are calculated:
\begin{equation}
F^\pm_{\rm model} = \bra{\Psi^\pm} \rho^\pm \ket{\Psi^\pm}.
\end{equation}
In Fig.~2d, the fidelities are plotted as a function of coincidence window. To calculate $\rho^\pm$ for a given coincidence window $T$, we take into account the visibility $V(T)$ and the background counts that (on average) occur within the detection window. The depolarizing correction is treated as independent of $T$. To calculate the dashed lines in Fig.~2d, we omit the second step in this model, the dephasing channel parameterized by the visibility, and determine $\rho^\pm$ only taking into account detector background counts and imperfect ion--photon entanglement. The ion--photon entanglement fidelities at Nodes~A and~B without background-count subtraction are given in the main text.
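For concreteness, the three steps of this model can be chained together in a few lines of code. The sketch below is not the code used to produce Fig.~2d: it takes the SNSPD detection probabilities of Tab.~\ref{tab:detectors}, the background-corrected ion--photon fidelities quoted at the end of this section, and an assumed visibility $V=0.9$ purely as illustrative inputs.
\begin{verbatim}
import numpy as np

# Illustrative inputs only (not the values used for Fig. 2d of the main text):
# SNSPD detection probabilities from the detector table (Psi+ herald), an
# assumed Hong-Ou-Mandel visibility V, and background-corrected ion-photon
# fidelities quoted in this section.
p_det1 = {"A": 0.0019, "B": 0.0282, "bg": 0.000004}   # SNSPD_1
p_det2 = {"A": 0.0024, "B": 0.0362, "bg": 0.000035}   # SNSPD_2
V = 0.90                                              # assumed value
F_ipA, F_ipB = 0.938, 0.956
phi, sign = 0.0, +1                                   # +1: Psi+, -1: Psi-

# Coincidence probabilities, Eqs. (eq4s)
p_phph = p_det1["A"] * p_det2["B"] + p_det1["B"] * p_det2["A"]
p_phbg = (p_det1["A"] + p_det1["B"]) * p_det2["bg"] \
       + (p_det2["A"] + p_det2["B"]) * p_det1["bg"]
p_bgbg = p_det1["bg"] * p_det2["bg"]

# Target Bell state in the basis {D'D', D'D, DD', DD}
psi = np.zeros(4, dtype=complex)
psi[2] = 1 / np.sqrt(2)                               # |D_A D'_B>
psi[1] = sign * np.exp(1j * phi) / np.sqrt(2)         # |D'_A D_B>
rho_bell = np.outer(psi, psi.conj())

# Step 1: detector background counts (add white noise)
norm = p_phph + p_phbg + p_bgbg
rho_bg = (p_phph * rho_bell + 0.25 * (p_phbg + p_bgbg) * np.eye(4)) / norm

# Step 2: photon distinguishability (dephasing parameterized by V)
rho_dist = V * rho_bg + (1 - V) * np.diag(np.diag(rho_bg))

# Step 3: imperfect ion-photon entanglement (depolarizing channel)
F_ii = 0.25 * (1 + 3 * ((4 * F_ipA - 1) / 3) * ((4 * F_ipB - 1) / 3))
lam = (4 * F_ii - 1) / 3
rho_model = lam * rho_dist + (1 - lam) * np.eye(4) / 4

F_model = np.real(psi.conj() @ rho_model @ psi)
print(f"predicted Bell-state fidelity: {F_model:.3f}")
\end{verbatim}
Each step of the sketch corresponds directly to one of the three corrections described above.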
After background-count subtraction, these values are $F'_{\rm ip,A} = 93.8^{+0.4}_{-0.5}\,\%$ and $F'_{\rm ip,B} = 95.6^{+0.7}_{-0.8}\,\%$.

\section{Master-equation model for two-photon interference visibility}
\subsection{The master equation}\label{sec:Model-master}
We present in this section the master-equation model of the ion--cavity system. We start with the Hamiltonian, then review the noise terms, and conclude the section with the master equation that is relevant for the description of the experiment. Ultimately, the model is used to predict the visibility of the interference obtained by combining on a beamsplitter two photons emitted from the two nodes of the ion-trap quantum network (Fig.~3b of the main text). As a first step, we calculate the joint ion--photon states produced at each node. Then the ions are traced out and the interference visibility is computed from the marginal states of the two photons.
\subsubsection{Hamiltonian of the bichromatic cavity-mediated Raman transition}
\label{sec:H-four}
We start by presenting our model for a single $^{40}$Ca$^+$ ion trapped inside a cavity and driven by laser light. We restrict the atom model to a simple four-level system that includes the sublevels of direct importance for the experiment: $\ket{S}, \ket{P}, \ket{D},$ and $\ket{D'}$ (see Fig.~\ref{fig:FourLevel}). The ion is initially prepared in $\ket{S}$. The $\ket{S}-\ket{P}$ transition is driven off-resonantly with a bichromatic laser field with frequencies $\omega_1$ and $\omega_2$ and Rabi frequencies $\Omega_1$ and $\Omega_2$. The bichromatic field is detuned from the $\ket{S}-\ket{P}$ transition frequency $\omega_{PS}$ by $\Delta_{1}=\omega_{1}-\omega_{PS}$ and $\Delta_{2}=\omega_{2}-\omega_{PS}$. In addition, an exchange interaction between the ion and the cavity couples the $\ket{P}-\ket{D}$ transition to the emission and absorption of a photon with vertical polarization into the cavity and the $\ket{P}-\ket{D'}$ transition to the emission and absorption of a photon with horizontal polarization. The cavity has frequency $\omega_{\rm c}$. The vertically and horizontally polarized cavity modes are described with bosonic operators $\hat a^\dag_{\rm v}$ and $\hat a^\dag_{\rm h}$, and the corresponding coupling constants are denoted $g_1$ and $g_2$.
\begin{figure}[h]
\centering
\includegraphics[width=0.3\textwidth]{Four-complete.pdf}
\caption{Representation of the energy levels $\ket{S}, \ket{P}, \ket{D},$ and $\ket{D'}$ relevant for the experiment. The frequencies of the bichromatic laser field are denoted $\omega_{1}$ and $\omega_{2}$, with corresponding Rabi frequencies $\Omega_{1}$ and $\Omega_{2}$ and detunings $\Delta_{1}$ and $\Delta_{2}$ from $\ket{P}$. The cavity frequency is $\omega_{\rm c}$, and $g_1$ and $g_2$ are the cavity coupling constants. The Stark shift due to the bichromatic field is $\delta_{\rm s}$. (Note that in Fig.~1 of the main text, $\delta_{\rm s}$ is set to zero for simplicity.)
\label{fig:FourLevel}}
\end{figure}
\begin{widetext}
The Hamiltonian $H$ of the ion-cavity system is given by
\begin{equation}
\begin{split}
H/\hbar&=\omega_{\rm c}(\hat a^\dag_{\rm h} \hat a_{\rm h}+ \hat a^\dag_{\rm v} \hat a_{\rm v})+\omega_{PS}\ketbra{P}{P}+\omega_{DS}\ketbra{D}{D}+\omega_{D'S}\ketbra{D'}{D'} \\
&+\frac{1}{2}\Big(\Omega_1e^{\mathrm{i}\omega_1 t}+\Omega_1e^{-\mathrm{i}\omega_1 t}\Big)\Big(\ketbra{S}{P}+\ketbra{P}{S}\Big)+\frac{1}{2}\Big(\Omega_2e^{\mathrm{i}\omega_2 t}+\Omega_2e^{-\mathrm{i}\omega_2 t}\Big)\Big(\ketbra{S}{P}+\ketbra{P}{S}\Big) \\
&+g_1\Big(\ketbra{D}{P}+\ketbra{P}{D}\Big)\left(\hat a^\dag_{\rm v}+\hat a_{\rm v}\right)+g_2\Big(\ketbra{D'}{P}+\ketbra{P}{D'}\Big)\left(\hat a^\dag_{\rm h}+\hat a_{\rm h}\right).
\end{split}
\end{equation}
Note that the energies of the ion levels are defined with respect to $\ket{S}$. An effective Hamiltonian with a simpler form can be obtained by noting that the cavity is initially empty and, consequently, the atom-cavity system remains in the four-level manifold $\{|S,0\rangle, |P,0\rangle, |D,1_{\rm v}\rangle, |D',1_{\rm h}\rangle\}$, where 0 and 1 are cavity photon numbers and subscripts indicate polarization. The corresponding Hilbert space is labelled $\mathcal{H}^C$. Below, we shorten the notation to $\ket{D,1_{\rm v}}=\ket{D,1}$ and $\ket{D',1_{\rm h}}=\ket{D',1}$ as there is no ambiguity with the polarization of the cavity photon. Under the rotating wave approximation, the effective Hamiltonian $H_t^C$ is given by
\begin{equation}
\begin{split}
H_t^C/\hbar&= -\Delta_1 \ketbra{P,0}{P,0}+\big(\Delta_{\rm c_1}-\Delta_1 \big)\ketbra{D,1}{D,1}+\big(\Delta_{\rm c_2}-\Delta_1\big)\ketbra{D',1}{D',1} \\
&+\frac{1}{2}\Big(\Omega_1+\Omega_2e^{\mathrm{i}(\omega_2-\omega_1)t}\Big)\ketbra{S,0}{P,0}+\frac{1}{2}\Big(\Omega_1+\Omega_2e^{-\mathrm{i}(\omega_2-\omega_1)t}\Big)\ketbra{P,0}{S,0} \\
&+g_1\Big(\ketbra{D,1}{P,0}+\ketbra{P,0}{D,1}\Big)+g_2\Big(\ketbra{D',1}{P,0}+\ketbra{P,0}{D',1}\Big).
\end{split}
\end{equation}
Here we have moved to the rotating frame $\ket{P}_{L.F.}\rightarrow e^{\mathrm{i}\omega_1 t}\ket{P}_{R.F.}$, $\ket{1}_{L.F.}\rightarrow e^{\mathrm{i}\omega_c t}\ket{1}_{R.F.}$, $\ket{D}_{L.F.}\rightarrow e^{\mathrm{i}(\omega_1-\omega_c)t}\ket{D}_{R.F.}$, and $\ket{D'}_{L.F.}\rightarrow e^{\mathrm{i}(\omega_1-\omega_c)t}\ket{D'}_{R.F.}$, where $L.F.$ and $R.F.$ stand for lab frame and rotating frame. We have also introduced the cavity detunings $\Delta_{\rm c_{1}} = \omega_{\rm c} -\omega_{PD}$ and $\Delta_{\rm c_{2}} = \omega_{\rm c} -\omega_{PD'}$, with $\omega_{PD}= \omega_{PS}-\omega_{DS}$ and $\omega_{PD'}= \omega_{PS}-\omega_{D'S}$. In the subspace $\mathcal{H}_E$ spanned by $\{\ket{D,0},\ket{D',0}\}$, the Hamiltonian is simply
\begin{equation}\label{eq:HE}
H_E/\hbar = \big(\Delta_{\rm c_1}-\Delta_1 \big)\ketbra{D,0}{D,0}+\big(\Delta_{\rm c_2}-\Delta_1\big)\ketbra{D',0}{D',0}.
\end{equation}
In the experiment, the detunings are calibrated with respect to the observed resonance frequency. It is thus natural to define the detunings $\Delta_{1}' = \Delta_{1}-|\delta_{\rm s}|$ and $\Delta_{2}' = \Delta_{2}-|\delta_{\rm s}|$ that incorporate the AC Stark shift $\delta_{\rm s}=\Omega_1^2/(4\Delta_1)+\Omega_2^2/(4\Delta_2)$ calculated for the $\ket{S}-\ket{P}$ transition.
In terms of the new detunings, the Hamiltonian is recast as
\begin{equation}
\begin{split}
H_t^C/\hbar&=-(\Delta_1'+|\delta_{\rm s}|)\ketbra{P,0}{P,0}+\big(\Delta_{\rm c_1}-\Delta_1'-|\delta_{\rm s}|\big)\ketbra{D,1}{D,1}+\big(\Delta_{\rm c_2}-\Delta_1'-|\delta_{\rm s}|\big)\ketbra{D',1}{D',1} \\
&+\frac{1}{2}\Big(\Omega_1+\Omega_2e^{\mathrm{i}(\omega_2-\omega_1)t}\Big)\ketbra{S,0}{P,0}+\frac{1}{2}\Big(\Omega_1+\Omega_2e^{-\mathrm{i}(\omega_2-\omega_1)t}\Big)\ketbra{P,0}{S,0} \\
&+g_1\Big(\ketbra{D,1}{P,0}+\ketbra{P,0}{D,1}\Big)+g_2\Big(\ketbra{D',1}{P,0}+\ketbra{P,0}{D',1}\Big),\\
H_E/\hbar &= \big(\Delta_{c_1}-\Delta_1'-|\delta_{\rm s}| \big)\ketbra{D,0}{D,0}+\big(\Delta_{c_2}-\Delta_1'-|\delta_{\rm s}|\big)\ketbra{D',0}{D',0}.
\label{eq:HE2}
\end{split}
\end{equation}
\noindent The total Hamiltonian is denoted $H_t=H_t^C+H_E.$
\end{widetext}
\subsubsection{Noise terms}
In addition to the Hamiltonian evolution, there are noise terms that affect the dynamics of the system. We review them below.
\paragraph*{Spontaneous decay of the ion.} To account for spontaneous decay of the $P$ level to $S$, $D$ or $D'$ (outside of the cavity mode), we introduce the noise operators
\begin{equation}
\begin{split}
L_{sp}&= \sqrt{2\gamma_{sp}} \ketbra{S,0}{P,0}, \\
L_{dp}&= \sqrt{2 \gamma_{dp}}\ketbra{D,0}{P,0}, \\
L_{d'p}&= \sqrt{2 \gamma_{d'p}}\ketbra{D',0}{P,0}, \\
\end{split}
\end{equation}
where $\gamma_{sp}$, $\gamma_{dp}$, and $\gamma_{d'p}$ are atomic polarization decay rates. These operators pick up a phase in the rotating frame. However, these phases do not influence the master equation (see Eq.~\eqref{eq:ME-four-F}) and can thus be ignored.
\paragraph*{Laser noise.} A finite coherence time of the Raman drive laser can be modelled by a process in which each of the Rabi frequencies $\Omega_1$ and $\Omega_2$ (which originate from the same laser field) has a small chance to acquire a random phase $e^{\mathrm{i} \varphi_t}$ at each moment of time. Since the level $\ket{S,0}$ only couples to other levels by absorbing a laser photon, the laser phase noise can be accounted for in the master equation by introducing a dephasing channel that reduces the coherences $\ketbra{S,0}{P,0}$, $\ketbra{S,0}{D,1}$, and $\ketbra{S,0}{D',1}$. This is done by introducing the noise operator
\begin{equation}
L_{ss}= \sqrt{2\gamma_{ss}} \ketbra{S,0}{S,0}.
\end{equation}
\paragraph*{Cavity jitter.} The cavity jitter stems from slow drifts of the cavity frequency away from the reference frequency between recalibration steps, which we attribute to imperfect active stabilization of the cavity length. The resonator is a massive system, so that the cavity length drifts on timescales much slower than the duration of the Raman pulse. Therefore, we assume the cavity frequency $\omega_{\rm c}$ to be fixed during a single iteration of the experiment (i.e., an attempt to generate a single photon). On the other hand, $\omega_{\rm c}$ can change from one iteration to the next. We thus assume that for each iteration, the cavity frequency is a Gaussian random variable with standard deviation $\gamma_{clj}$, which is well justified because the data analysis of the run sequence is unordered. That is, at each iteration, $\hat{\omega}_{\rm c}$ is sampled from the Gaussian distribution
\begin{equation} \label{eq:Gaussian}
\text{p}(\hat {\omega}_{\rm c}) = \frac{1}{\sqrt{2\pi}\,\gamma_{clj}} \exp\left (-{\frac{(\hat{\omega}_{\rm c}-\omega_{\rm c})^2}{2 \gamma_{clj}^2}} \right).
\end{equation} Concretely, this means that we solve the dynamics of the two ion--cavity systems for fixed values of $\hat{\omega}_{\rm c}$ that are sampled from $\text{p}(\hat {\omega}_{\rm c})$. The final state is a mixture of these solutions. In practice, to compute the model for $\hat \omega_{\rm c}$, we take a discrete ensemble of $2 k_{\rm max} +1$ equally spaced values $\omega_k = w_{\rm c} + \Delta k$ for $|k| \leq k_{\rm max}$, then renormalize the distribution by a constant such that it sums to one: $\sum_k \text{p}(\omega_k) =1$, that is, the contribution of each frequency in the ensemble is weighted by the distribution. For the numerical analysis below, we take $k_{\rm max}=6$ for Node A (yielding $13$ possible values for $\hat \omega_{\rm c}$), and we neglect the effect of the cavity lock jitter for Node B (fixing $\hat \omega_c = \omega_c$) as it is estimated to be an order of magnitude smaller. \paragraph*{Photon emission} The possibility for the photon to leave the cavity gives rise to two noise operators \begin{equation}\begin{equation}gin{split} \label{eq:L4L5} L_4 &= \sqrt{2\kappa} \ketbra{D,0}{D,1} \\ L_5 &= \sqrt{2\kappa} \ketbra{D',0}{D',1} \end{split} \end{equation} with $\kappa$ the cavity field decay rate. In our rotating frame, the noise operators are time dependent: $L_4 = \sqrt{2\kappa} \ketbra{D,0}{D,1} e^{-\mathrm{i} \omega_c t}$, $L_5 = \sqrt{2\kappa} \ketbra{D',0}{D',1} e^{-\mathrm{i} \omega_{\rm c} t}$. For the master equation, however (see Eq.~\eqref{eq:ME-four-F}), the phase of the noise operators plays no role. Note that the noise channels $L_4$ and $L_5$ encompass all cavity decay processes, including transmission, scattering, and absorption at both mirrors. Only a fraction of these photons are transmitted through the output mirror and sent to the PBSM. \subsubsection{The master equation for the full dynamics} \label{sec:four-photon} To capture not only the unitary dynamics of the ion-cavity system but also decoherence and photon emission from the cavity, we use the master equation \begin{equation}\label{eq:ME-four-F} \dot \varrho_t = -\mathrm{i}\, [H_t,\varrho_t]/\hbar + \sum_{i} \left(L_i \varrho_t L_i^\dag - \frac{1}{2}\{L_i^\dag L_i, \varrho_t\}\right) , \end{equation} where the density matrix $\varrho_t$ is defined on the six-level subspace $\mathcal{H}$ spanned by $\{|S,0\rangle, |P,0\rangle, |D,1\rangle, |D',1\rangle,|D,0\rangle,|D',0\rangle\}$. The index $i$ includes all the terms described above, that is, $i=sp,ss,dp,d'p,4,5$. The probability density (rate) for a noise event $L_i$ to occur at time $t$ is denoted by $P_i(t) = \tr L_i \varrho_t L_i^\dag$. The event leaves the system in the state \begin{equation} \varrho_{t|i} = \frac{L_i \varrho_t L_i^\dag}{\tr L_i \varrho_t L_i^\dag}. \end{equation} \subsection{Photon envelope and scattering rates} In this section, we solve the dynamics of the master-equation model developed in Sec.~\ref{sec:Model-master} for the ion--cavity system. As we will see, it is enough to model the system's state inside the four-dimensional subspace $\mathcal{H}^C$ for this purpose. Below, the density matrix is thus restricted to this subspace. Knowledge of the ion--cavity state is sufficient to predict the scattering rates and the temporal envelopes of photons leaving the cavity. 
Through a comparison between the prediction of our theoretical model and the experimental data for the photon temporal envelopes, we are able to fix free parameters in the model, including the cavity loss, cavity jitter and the overall detection efficiency. \subsubsection{Ion-cavity dynamics} In the master equation given in Eq.~\eqref{eq:ME-four-F}, different noises play different roles. The terms $L_{sp}$ and $L_{ss}$ leave the system in a state in the $\mathcal{H}^C$ subspace where it can still emit a photon. However, if the noise events $L_{dp}$, $L_{d' p}, L_4$, or $L_5$ occur, no photon can be emitted afterwards as the system is projected into $\mathcal{H}_E$. Since we are only interested in the evolution branch that can lead to the emission of a photon, we solve the master equation with the system remaining inside $\mathcal{H}^C$, that is, \begin{equation}\label{eq:ME-four}\begin{equation}gin{split} \dot \varrho_t =& -\mathrm{i} \, [H_t^C,\varrho_t]/\hbar + \sum_{i=sp,ss} \left( L_i \varrho_t L_i^\dag - \frac{1}{2} \{ L_i^\dag L_i, \varrho_t \}\right) \\ &- \sum_{i=dp,d'p,4,5} \frac{1}{2} \{L_i^\dag L_i, \varrho_t \}. \end{split} \end{equation} Note that the solution of this equation is not trace preserving, as it ignores the branches where $L_{dp}$, $L_{d' p}, L_4$, or $L_5$ happen. In fact, the trace of $\varrho_t$ gives the probability that none of these noises have happened before time $t$. \subsubsection{Photon envelope} We are primarily interested in the emission of a photon from each cavity to the PBSM setup when the ion-cavity system is initially in state $\varrho_0= \ketbra{S,0}$. If a photon is generated in the cavity mode, it leaves the cavity with rate $2\kappa$. To compute the probability that a photon of a given polarization (horizontal or vertical) is emitted at time $t$, it is thus sufficient to solve the master equation~\eqref{eq:ME-four} for the initial state $\varrho_0= \ketbra{S,0}$ and compute \begin{equation}\label{eq:envelope}\begin{equation}gin{split} p_{\rm v}(t) &= 2\kappa \bra{D,1} \varrho_t \ket{D,1}\\ p_{\rm h}(t) &= 2\kappa \bra{D',1} \varrho_t \ket{D',1}. \end{split} \end{equation} The envelope of this photon is thus defined by the functions $p_{\rm v}(t)$ and $p_{ \rm h}(t)$. In the presence of cavity jitter, the photon envelopes are the weighted averages over the different cavity frequency values \begin{equation} \label{eq:weighted_envelope} \begin{equation}gin{split} \bar{p}_{\rm v}(t) =\sum_k \text{p}(\omega_k)\, p_{\rm v}(t|\omega_k),\\ \bar{p}_{\rm h}(t) =\sum_k \text{p}(\omega_k)\, p_{\rm h}(t|\omega_k), \end{split} \end{equation} where $p_{\rm v}(t|\omega_k)$ and $p_{\rm h}(t|\omega_k)$ give the probabilities that a photon of a given polarization leaves the cavity at time $t$ for a fixed cavity frequency $\omega_k$. The photon envelopes of Eq.~\eqref{eq:envelope} and Eq.~\eqref{eq:weighted_envelope} can be compared with the time histograms of click events obtained at the PBSM setup. For these measurements, data are taken when only one node is sending photons, while the other is blocked. In Fig.~\ref{fig:benwavepacket}, we compare our model with data obtained from Node B. 
To obtain agreement between the observed detection rates and the model, we have multiplied the predicted emission rate $\bar{p}_{\rm h(v)}(t)$ by a factor $1/10.5\approx 0.095$, which corresponds to the overall detection efficiency $\eta$, including detector efficiencies, photon loss in the channel, and scattering and absorption losses contained in the noise channels $L_4$ and $L_5$. \begin{equation}gin{figure}[t!] \centering \includegraphics[width=0.98\columnwidth]{Ben-ExT.pdf} \caption{Single-photon temporal wavepacket emitted from Node B and detected on the PBSM setup. Orange squares and blue circles correspond to vertical and horizontal polarizations. Squares and circles represent experimental data; error bars are calculated from Poissonian statistics. Lines are the envelopes found theoretically, which have been multiplied by $\eta=1/10.5\approx 0.095$.} \label{fig:benwavepacket} \end{figure} In Fig.~\ref{fig:tracywavepacket}, we compare our model with data obtained from Node A. Here as well, the predicted emission rates at time $t$ are multiplied by a prefactor that accounts for the detection efficiency. In contrast to the comparison in Fig.~\ref{fig:benwavepacket}, here we include cavity jitter, that is, we use Eq.~\eqref{eq:weighted_envelope} instead of Eq.~\eqref{eq:envelope}. All parameters used for the numerical simulation are reported in Table.~\ref{tab:params}. \begin{equation}gin{figure}[t!] \subfloat{\includegraphics[width=0.98\columnwidth]{Tracy-ExT-clj006.pdf}} \hspace{0.1cm} \subfloat{\includegraphics[width=0.98\columnwidth]{Tracy-ExT-clj01.pdf}} \caption{Single-photon temporal wavepacket emitted from Node A and detected a few meters away. Orange squares and blue circles correspond to vertical and horizontal polarizations. Squares and circles represent experimental data; error bars are calculated from Poissonian statistics. Lines are the envelopes found theoretically, which have been multiplied by $\eta=1/14.47\approx 0.069$ (above) and $\eta =1/12.46\approx0.08$ (below). Cavity jitter has been added with $\gamma_{clj}=0.06$ (above) and $\gamma_{clj}= 0.1$ (below). Both parameter regimes are consistent with the data, that is, are within the uncertainties of experimentally determined values for $\eta$ and $\gamma_{clj}$.} \label{fig:tracywavepacket} \end{figure} \begin{equation}gin{table*}[t] \label{table 1} \centering \begin{equation}gin{tabular}{|l | c c c c c c c c c c c|} \hline Node & $\Omega_1$ & $\Omega_2$ & $g$ & $\Delta_1$ & $\Delta_2$ &$\kappa$ & $\gamma_{sp}$ & $\gamma_{dp}+\gamma_{d'p}$ & $\gamma_{ss}$ & $\gamma_{clj}$ &$\eta$\\ \hline A & 43.8 & 30.9 & 0.77 & 412.8206 & 419.8574 & 0.0684 & 10.74 & 0.75 & 0.01 & 0.06 -- 0.1 & 0.069-- 0.08\\ B & 24.76 & 21.05 & 1.2 & 414.0917 & 421.2091 & 0.07 & 10.74 & 0.75 & 0 & 0 & 0.095\\ \hline \end{tabular} \caption{\label{tab:params} The parameters that are used in the theoretical model to simulate the experimental data. All parameters have units of $\rm MHz$ and must be multiplied by $2\pi$. In order to obtain the coupling strengths $g_1$ and $g_2$ shown in Fig.~\ref{fig:FourLevel}, we multiply $g$ with the relevant atomic transition strength and with the projection of the transition polarization onto the photon polarization \cite{Stute2012a}.} \end{table*} \subsubsection{Scattering rates} To compute the interference visibility in the next section, we need to predict the scattering rates of the ion-cavity system back to its initial state. 
Once Eq.~\eqref{eq:ME-four} has been solved and the state $\varrho_t$ has been computed, the rate of scattering back to to $\ket{S,0}$ can be obtained as \begin{equation}\label{eq:scatt-Ps} \text{P}_s(t) = \tr \Big((L_{sp}^\dag L_{sp} +L_{ss}^\dag L_{ss}) \varrho_t \Big). \end{equation} Note that whenever such a scattering event occurs, the system is projected onto the state $\ket{S,0}$ at the corresponding time. \subsection{The full state of the photon} The photon envelopes $p_{\rm v}(t)$ and $p_{\rm h}(t)$ defined in Eq.~\eqref{eq:envelope} give the probabilities for photon emission at different times, but they do not tell us how coherent the emission process is. In particular, they do not tell us about the purity of the state of the emitted photon (for a fixed polarization) and are not sufficient to predict the interference visibility between two photons coming from different nodes. A more detailed analysis is thus required. Such an analysis is reported below in three steps. First, we compute the ion--cavity state conditional on no noise events occurring during the evolution. Combining this pure state with the scattering rate computed in the previous section, we compute the actual ion--photon state. Finally, tracing out the ion, we obtain the full state of the photon emitted from each cavity and use it to predict the interference visibility. \subsubsection{No-noise branch} To compute the final ion--photon state, our first step is to extract from the master equation the branch that corresponds to the evolution branch on which no noise events occur. This is given by the equation \begin{equation} \dot{\rho} = -\mathrm{i}[H_t^C, \rho]/\hbar - \frac{1}{2} \{ \sum_i L_i^\dag L_i, \rho\}, \end{equation} where we have simply removed all post-noise terms $L_i \rho_t L_i^\dag$. This equation can be cast in the form \begin{equation}\begin{equation}gin{split} &\dot{\rho_t} = - D_t \rho_t - \rho_t D_t^\dag, \\ &\text{with} \quad D_t = \mathrm{i} H_t^C/\hbar +\frac{1}{2} \sum_i L_i^\dag L_i. \end{split} \end{equation} One sees that if the state is initially pure, $\rho_{t_0}=\ketbra{\Psi_{t_0}}$, it will remain pure in the no-noise-branch evolution, that is, the evolution given by the Schr\"odinger equation \begin{equation} \label{eq: non-hermitian shrodinger} \ket{\dot{\Psi}_{t}} = -D_t \ket{\Psi_{t}}, \end{equation} with a non-Hermitian Hamiltonian $D_t$. The norm of the state decreases in general as $\frac{d}{dt}\| \ket{{\Psi}_{t}}\| = - \bra{{\Psi}_{t}} \sum_i L_i^\dag L_i \ket{{\Psi}_{t}}$, reflecting the fact that the system leaves the no-noise branch whenever a noise event occurs. The solution of Eq.~\eqref{eq: non-hermitian shrodinger} can be expressed formally by defining the time-ordered propagator \begin{equation}\begin{equation}gin{split} \ket{\Psi_t} &= V_{t_0}(t-t_0)\ket{\Psi_{t_0}}, \\ V_{t_0}(\tau) &= \mathcal{T} \left [e^{-\int_{t_0}^{t_0+\tau}D_s \dd s} \right], \end{split}\end{equation} where $\mathcal{T}[\bullet]$ is the time-ordering operator. For our noise model, the initial state for the no-noise evolution is always pure and given by $\ket{\Psi_{t_0}} = \ket{S,0}$ for some time $t_0$ where $t_0$ is determined by a noise event projecting the system onto $\ket{S}$, as discussed below. Let us denote $\ket{\Psi_{t|t_0}}$ the state of the system at time $t$, given that it was prepared in $\ket{S,0}$ at time $t_0\leq t$ and no scattering events occurred in between, that is, given that the system has evolved between $t_0$ and $t$ following the no-noise branch. 
This state is the solution of $\ket{\dot{\Psi}_{t|t_0}} = -D_t \ket{\Psi_{t|t_0}}$ and can also be expressed as \begin{equation} \label{eq:psitt0} \ket{{\Psi}_{t|t_0}}= V_{t_0}(t-t_0)\ket{S,0}. \end{equation} It is worth noting that the Hamiltonian has a time dependence, meaning that time-translation symmetry is broken: $V_{t_1}(\tau) \neq V_{t_0}(\tau)$, that is, the evolution for a duration $\tau$ depends on the start time. Nevertheless, in our numerical computations we ignore this asymmetry and use the approximation $\ket{\Psi_{t|t_0}} \approx \ket{\Psi_{(t-t_0)|0}}$. This approximation results in a substantial computational speedup. We have established the validity of this approximation by comparing its results with the results of a time-dependent model for several values of $t_0$. \subsubsection{Ion--cavity state revisited} \label{sec:ion_cavity_revisited} At this point, we know how to compute the scattering rate $\text{P}_s(t)$ and the state $\ket{\Psi_{t|t_0}}$. It is then convenient to express the total state of the system in the form \begin{equation} \label{eq: state decomposition} \begin{equation}gin{split} \varrho_t &= \ketbra{\Psi_{t|0}} + \int_{0}^t \dd s\, \text{P}_s(s)\, \ketbra{\Psi_{t|s}}\\ & \approx \ketbra{\Psi_{t|0}} + \int_{0}^t \dd s\, \text{P}_s(s)\, \ketbra{\Psi_{t-s|0}}, \end{split}\end{equation} where in the second step, we use the approximation $\ket{\Psi_{t|t_0}} \approx \ket{\Psi_{(t-t_0)|0}}$ discussed above. This expression captures the fact that given a state at a certain time, the system will either evolve without noise until $t$ (no-noise branch), trigger a noise event $L_{ss}$ or $L_{ps}$ at a later time $t'$ ($s\leq t'\leq t$) that keeps it within the four-dimensional manifold $\mathcal{H}^C$, or trigger one of the other four noise events that causes it to leave $\mathcal{H}^C$ (and never emit a photon that is sent to the PBSM setup). Note that the probability that at time $t$, the most recent scattering event happened at time $s\leq t$ is $\dd s\, \text{P}_s(s)\|\ket{\Psi_{t|s}}\|$, which explains the term in the integral of Eq.~\eqref{eq: state decomposition}. \subsubsection{Ion--photon state} We now show that the decomposition of the state $\varrho_t$ in the form proposed in Eq.~\eqref{eq: state decomposition} results in a natural description of the entangled state of the ion and the cavity photon. First, note that the states entering in the decomposition ($\Psi_{t|s}$) are pure, i.e., Eq.~\eqref{eq: state decomposition} gives an explicit decomposition of $\varrho_t$ into pure states. For a pure ion--cavity state $\ket{\Psi_{t}}$, the probability amplitude that a photon leaves the cavity after a time duration $\dd t$ (corresponding to the $L_4$ and $L_5$ decay channels when the photon is traced out) is obtained from \begin{equation}\begin{equation}gin{split} &\dd t \, E_t\ket{\Psi_t} \equiv \\ &\dd t \sqrt{2 \kappa}\left(\ketbra{D,0}{D,1} a_{\rm v}^\dag(t) +\ketbra{D',0}{D',1} a_{\rm_h}^\dag(t)\right) \ket{\Psi_{t}} , \end{split} \end{equation} where the ion--cavity state is projected into the $\mathcal{H}_E$ subspace. Here we have introduced the creation and annihilation operators for the continuous temporal (and polarization) modes outside the cavity directed to the PBSM setup, which satisfy $[a_{\rm v}(t),a_{\rm v}^\dag(t')] =[a_{\rm h}(t),a_{\rm h}^\dag(t')]=\delta(t-t')$. 
Thus, for the ion--cavity system evolving in the no-noise branch, with the system in state $\ket{S,0}$ at time $s$ and in $\ket{\Psi_{t|s}}$ at time $t$, we can associate a probability amplitude that a photon is emitted from the cavity towards the PBSM setup in an infinitesimal time window $[t',t'+dt']$ with $s\leq t'$ and $t'+\dd t'\leq t$. These events are coherent and described by the states $E_t'\, \dd t' \ket{\Psi_{t'|s}}\ket{0}_{t'}$, where $\ket{0}_{t'}$ is the vacuum state of all the temporal modes in the interval $[t',t'+dt']$. It follows that the no-noise evolution branch corresponds to a branch where a single photon has been coherently emitted, which is described by the state \begin{equation}\label{eq: emision state}\begin{equation}gin{split} & \left( \int_s^t \dd t' \, e^{-\mathrm{i}(t-t') H_E} E_{t'} \ket{\Psi_{t'|s}}\right) \ket{\bm 0} = \sqrt{2\kappa} \int_s^t \dd t' \times \\ &\Big( \ket{D,0} e^{-\mathrm{i} (t-t')(\Delta_{{\rm c}_1}-\Delta_1'-|\delta_{s}|)} \braket{D,1}{\Psi_{t'|s}} a_{\rm v}^\dag(t') \\ &+ \ket{D',0} e^{-\mathrm{i} (t-t')(\Delta_{{\rm c}_2}-\Delta_1'-|\delta_{s}|)}\braket{D',1}{\Psi_{t'|s}} a_{\rm h}^\dag(t') \Big)\ket{\bm 0}. \end{split} \end{equation} Here, $\ket{\bm 0}$ denotes all the temporal modes of the photons traveling to the PBSM setup. In Eq.~\eqref{eq: emision state}, the term $ e^{-\mathrm{i}(t-t') H_E}$ describes the evolution of the ion--cavity system following the emission of a photon at time $t'$. Recall from Eq.~\eqref{eq:HE2} that the states $\ket{D,0}$ and $\ket{D',0}$ acquire phases $ \ket{D,0} \mapsto e^{-\mathrm{i} (t-t')(\Delta_{{\rm c}_1}-\Delta_1'-|\delta_{s}|}\ket{D,0}$ and $\ket{D',0} \mapsto e^{-\mathrm{i} (t-t')(\Delta_{{\rm c}_2}-\Delta_1'-|\delta_{s}|)}\ket{D',0}$ between the times $t'$ and $t$, as given by the energies of the Hamiltonian $H_E$. To shorten the notation, it is convenient to introduce the complex amplitudes \begin{equation}\begin{equation}gin{split} \alpha(t'|s) &= e^{\mathrm{i} t' (\Delta_{\rm c_1}-\Delta_1'-|\delta_{s}|)} \braket{D,1}{\Psi_{t'|s}}, \\ \begin{equation}ta(t'|s) &= e^{\mathrm{i} t'(\Delta_{\rm c_2}-\Delta_1'-|\delta_{s}|)} \braket{D',1}{\Psi_{t'|s}}. \end{split}\end{equation} Then in the photon-emitted branch of the evolution, with the ion--cavity system prepared in $\ket{S,0}$ at time $s$, the ion--photon state at time $t$ is given by \begin{equation} \label{eq: phi t|s} \begin{equation}gin{split} \ket{\Phi_{t|s}} = &\sqrt{2 \kappa}\Big(\ket{D,0} \int_{s}^t \dd t' \alpha(t'|s) a_{\rm v}^\dag(t') \\ &+ e^{\mathrm{i} t(\Delta_{\rm c_1}-\Delta_{\rm c_2})} \ket{D',0} \int_{s}^t \dd t' \begin{equation}ta(t'|s) a_{\rm h}^\dag(t') \Big) \ket{\bm 0}. \end{split}\end{equation} This state can be rewritten as \begin{equation} \label{eq:phits} \ket{\Phi_{t|s}} = \ket{D,0}\ket{V_{t|s}} + e^{\mathrm{i} t(\Delta_{\rm c_1}-\Delta_{\rm c_2})} \ket{D',0}\ket{H_{t|s}} \end{equation} with the unnormalized single-photon states \begin{equation}\begin{equation}gin{split} \label{eq:htsvts} &\ket{V_{t|s}} = \sqrt{2 \kappa} \int_{s}^t \dd t' \alpha(t'|s) a_{\rm v}^\dag(t') \ket{\bm 0}, \\ &\ket{H_{t|s}} = \sqrt{2 \kappa} \int_{s}^t \dd t' \begin{equation}ta(t'|s) a_{\rm h}^\dag(t') \ket{\bm 0}. \end{split} \end{equation} From this point on, we will write $\ket{D}$ and $\ket{D'}$ instead of $\ket{D,0}$ and $\ket{D',0}$ since there is no ambiguity concerning the absence of cavity photons. 
From the decomposition in pure states of the ion--cavity state given in Eq.~\eqref{eq: state decomposition}, we can now deduce the full (unnormalized) ion--photon state associated with the evolution branch in which a single cavity photon has been emitted towards the PBSM setup: \begin{equation} \label{eq:ion-photon state} \rho^E_{t} = \ketbra{\Phi_{t|0}} + \int_{0}^t \dd s\, \text{P}_s(s)\, \ketbra{\Phi_{t|s}}, \end{equation} with the pure states $\ket{\Phi_{t|s}}$ given in Eqs.~\eqref{eq: phi t|s} and \eqref{eq:phits}. \subsubsection{The marginal state of the photon} From the ion--photon state $\rho^{E}_t$ (Eq.~\eqref{eq:ion-photon state}) (with an empty cavity), it is straightforward to compute the marginal state $\sigma_t$ of the emitted photon by tracing out the ion--cavity system. We obtain \begin{equation}\label{eq: photon states} \sigma_t = \tr_{\rm ion-cavity} \rho^E_t = \mathbf{V}_t + \mathbf{H}_t, \end{equation} with \begin{equation}\label{eq: photon states 2}\begin{equation}gin{split} \mathbf{V}_t &= \ketbra{V_{t|0}} + \int_{0}^t \dd s\, \text{P}_s(s)\, \ketbra{V_{t|s}}, \\ \mathbf{H}_t &= \ketbra{H_{t|0}} + \int_{0}^t \dd s\, \text{P}_s(s)\, \ketbra{H_{t|s}}. \end{split} \end{equation} Here the density matrices $\mathbf{V}_t$ and $\mathbf{H}_t$ are not normalized. Their traces corresponds to the probabilities that a vertically or horizontally polarized photon has been emitted outside of the cavity in the mode of interest before time~$t$. Note that the components of the states in Eq.~\eqref{eq: photon states 2} can be conveniently written as \begin{equation}\begin{equation}gin{split} &\mathbf{V}_t = \int_{0}^t \widetilde{\text{P}}_s (s) \ketbra{V_{t|s}} \\ &\text{with~} \widetilde{\text{P}}_s (s) =\text{P}_s(s) + \delta(s) \end{split}\end{equation} such that $\int_{0}^t \dd s\, \delta(s) \ketbra{V_{t|s}} = \ketbra{V_{t|0}}$, with an equivalent expression for $\mathbf{H}_t$. It is then straightforward to include the effects of cavity jitter, as we now show. \subsubsection{Effects of cavity jitter}\label{sec:Eff-jitt} We first remark that the above derivation of the ion--photon state assumes that the cavity frequency $\omega_{\rm c}$ is constant, which is not the case in the presence of cavity jitter, where $\hat \omega_c$ is a random variable distributed according to $\text{p}(\delta w)$, as discussed earlier in the context of Eq.~\eqref{eq:Gaussian}. Nevertheless, the effects of cavity jitter on the final state can be straightforwardly included, as we now discuss. In our model, we take a discrete set of possible values: $\hat{\omega}_c \in \{\omega_k\}_{k=1}^{13}$. 
The final ion-cavity state is then a mixture \begin{equation} \bar{\rho}_t^E = \sum_{k} \text{p}(\omega_k) \rho^{E,(\delta \omega_k)}_t \qquad \text{for} \qquad \delta \omega_k = \omega_k -\omega_c, \end{equation} where each state $\rho^{E,(\delta w)}_t$ takes the form \begin{equation}\begin{equation}gin{split} \rho_{t}^{E,(\delta w)} &= \ketbra{\Phi^{(\delta w)}_{t|0}} \\ &+ \int_{0}^t \dd s\, \text{P}^{(\delta w)}_s(s)\, \ketbra{\Phi^{(\delta w)}_{t|s}}, \\ \alpha^{(\delta w)}(t'|s) &= e^{\mathrm{i} t' (\widehat{\Delta}_{\rm c_1}-\Delta_1'-|\delta_{s}|)} \braket{D,1}{\Psi^{(\delta w)}_{t'|s}} \\ &= e^{\mathrm{i} t' (\Delta_{\rm c_1} +\delta w -\Delta_1'-|\delta_{s}|)} \braket{D,1}{\Psi^{(\delta w)}_{t'|s}}\\ \begin{equation}ta^{(\delta w)}(t'|s) &= e^{\mathrm{i} t' (\widehat{\Delta}_{\rm c_2}-\Delta_1'-|\delta_{s}|)} \braket{D',1}{\Psi^{(\delta w)}_{t'|s}}\\ & = e^{\mathrm{i} t' (\Delta_{\rm c_2} +\delta w -\Delta_1'-|\delta_{s}|)} \braket{D',1}{\Psi^{(\delta w)}_{t'|s}}; \end{split} \end{equation} see Eqs.~\eqref{eq:phits} and \eqref{eq:htsvts}. Here $\text{P}^{(\delta w)}_s(s)$ and $\ket{\Psi^{(\delta w)}_{t|s}}$ are obtained similarly to $\text{P}_s(s)$ in Eq.~\eqref{eq:scatt-Ps} and $\ket{\Psi_{t|s}}$ in Eq.~\eqref{eq:psitt0} for a shifted cavity frequency $\omega_c +\delta w$. The final state of the emitted photon also becomes a statistical mixture over the possible values of the cavity frequency $\omega_c+\delta\omega_k$: \begin{equation} \bar{\mathbf{V}}_t = \sum_{k} \text{p}(\omega_k) \mathbf{V}_t^{(\delta \omega_k)}, \qquad \bar{\mathbf{H}}_t = \sum_{k} \text{p}(\omega_k) \mathbf{H}_t^{(\delta \omega_k)}, \end{equation} with \begin{equation} \begin{equation}gin{split} \mathbf{V}_t^{(\delta \omega)} &= \int_{0}^t \widetilde{\text{P}}^{(\delta \omega)}_s (s) \ketbra{V_{t|s}^{(\delta \omega)}}, \\ \mathbf{H}_t^{(\delta \omega)} &= \int_{0}^t \widetilde{\text{P}}^{(\delta \omega)}_s (s) \ketbra{H_{t|s}^{(\delta \omega)}}, \\ \ket{V_{t|s}^{(\delta \omega)}} &= \sqrt{2 \kappa} \int_{s}^t \dd t' \alpha^{(\delta \omega)}(t'|s) a_{\rm v}^\dag(t') \ket{\bm 0}, \\ \ket{H_{t|s}^{(\delta \omega)}} &= \sqrt{2 \kappa} \int_{s}^t \dd t' \begin{equation}ta^{(\delta \omega)}(t'|s) a_{\rm h}^\dag(t') \ket{\bm 0}. \end{split} \end{equation} \subsection{Visibility of a Hong-Ou-Mandel-type interference} At this point, we know how to compute the state of the photon emitted by a single node, and we are ready to analyze the interference between photons coming from two nodes. First, note that we are only interested in events where two photons are detected at the PBSM setup. For such an event to occur (neglecting background counts), a single photon has to be emitted from both Nodes A and B, as fully captured by the non-normalized state $\mathbf{H}_t + \mathbf{V}_t$ given in Eq.~\eqref{eq: photon states}. We first model the two-photon interference by considering the cavity frequency to be fixed. We then come back to the effect of cavity jitter towards the end of this section. \subsubsection{Two-photon state} To fix our notation, we denote the single-photon states of Eq.~\eqref{eq: photon states 2} as $\mathbf{V}^{\rm A}_t$, $\mathbf{V}^{\rm B}_t$, $\mathbf{H}^{\rm A}_t$ and $\mathbf{V}^{\rm B}_t$ for Nodes A and B. 
The underlying pure states will be denoted \begin{equation} \label{eq: pure wavepackets}\begin{equation}gin{split} \ket{V^{\rm A}_{t|s}} &= \sqrt{2 \kappa} \int_{s}^t \dd t' \alpha^{\rm A}(t'|s) a_{\rm v}^\dag(t') \ket{\bm 0} \\ \ket{H^{\rm A}_{t|s}} &= \sqrt{2 \kappa} \int_{s}^t \dd t' \begin{equation}ta^{ \rm A}(t'|s) a_{\rm h}^\dag(t') \ket{\bm 0} \\ \ket{V^{\rm B}_{t|s}} &= \sqrt{2 \kappa} \int_{s}^t \dd t' \alpha^{\rm B}(t'|s) b_{\rm v}^\dag(t') \ket{\bm 0}\\ \ket{H^{\rm B}_{t|s}} &= \sqrt{2 \kappa} \int_{s}^t \dd t' \begin{equation}ta^{\rm B}(t'|s) b_{\rm h}^\dag(t') \ket{\bm 0} \end{split} \end{equation} with the natural notation for the bosonic operators $a_{\rm v}(t),a_{\rm h}(t)$ and $b_{\rm v}(t),b_{\rm h}(t)$ for Nodes A and B respectively. The scattering rates are $\text{P}_s^{\rm A}(s)$ and $\text{P}_s^{\rm B}(s)$. The overall density matrix $\Sigma_t$ describing the two photons (one emitted from each node) at time $t$ is thus the tensor product of the (unnormalized) states emitted from each node: \begin{equation} \Sigma _t = (\mathbf{V}_t^{\rm A} +\mathbf{H}^{\rm A}_t)\otimes (\mathbf{V}_t^{\rm B} +\mathbf{H}^{\rm B}_t). \end{equation} \subsubsection{Model of the PBSM} \begin{equation}gin{figure} \centering \includegraphics[width=0.98 \columnwidth]{Fig_twophotoninterference.pdf} \caption{Representation of the detection scheme for interfering two photons from two distant nodes in a photonic Bell-state measurement (PBSM). The bosonic operators associated to the fields leaving the cavity at Node A (B) are labeled $a_h$ and $a_v$ ($b_h$ and $b_v$). The fields are combined on a nonpolarizing beamsplitter at a central station. Two detectors preceded by a polarizing beamsplitter are placed at each output of the nonpolarizing beamsplitter. The detected fields are called $u_h$, $u_v$, $r_h$ and $r_v$.} \label{fig:central_station} \end{figure} Given the two photon states, we now consider the PBSM; see Fig.~\ref{fig:central_station}. The beamsplitter output modes $u$ and $r$ are linked to the input modes $a$ and $b$ via \begin{equation}\label{eq: beamsplitter} \binom{u}{r} = \frac{1}{\sqrt 2}\begin{equation}gin{pmatrix} 1 & \mathrm{i}\\ \mathrm{i} & 1 \end{pmatrix} \binom{a}{b} \Leftrightarrow \binom{a}{b} = \frac{1}{\sqrt 2}\begin{equation}gin{pmatrix} 1 & -\mathrm{i}\\ -\mathrm{i} & 1 \end{pmatrix} \binom{u}{r}. \end{equation} Each output mode of the (nonpolarizing) beamsplitter consists of a polarizing beamsplitter followed by two detectors; each detector detects one of the four modes $u_h, u_v, r_h$ and $r_v$. Let us consider the coincidence events where two clicks occur at detectors on opposite outputs of the beamsplitter, that is, clicks at detector pairs $\{u_h, r_h\}$, $\{u_h, r_v\}$, $\{u_v, v_h\}$ or $\{u_v, r_v\}$. We denote the rate of such coincidences for detector $u_{h(v)}$ at time $t_1$ and detector $r_{h'(v')}$ at time $t_2$ as $\text{det}_{h(v), h'(v')}(t_1,t_2)$, that is, $\text{det}_{v,h}(t_1,t_2) \dd t^2$ corresponds to the probability to get a click at $u_v$ in the time interval $[t_1,t_1+\dd t ]$ and a click at $r_h$ in the time interval $[t_2,t_2+\dd t ]$, for example. The rate $\text{det}_{v,h}(t_1,t_2)$ corresponds to a POVM density \begin{equation} E_{v,h}(t_1,t_2) = \eta_{u_v} \eta_{r_h} \ketbra{ v(t_1), h(t_2)} \end{equation} with $\ket{ v(t_1), h(t_2)} = u^\dag_v(t_1) r^\dag_h(t_2) \ket{\bm 0}$, where $\eta_{u_v}$ is the overall detection efficiency of detector $u_v$ and $\eta_{r_h}$ is the overall detection efficiency of detector $r_h$. 
Analogously, one defines POVM densities related to the other relevant coincidence rates $E_{h,v}(t_1,t_2)$, $E_{h,h}(t_1,t_2)$ and $E_{v,v}(t_1,t_2)$, with \begin{equation}\begin{equation}gin{split} E_{\pi_1,\pi_2}&(t_1,t_2) =\\ &\eta_{u_{\pi_1}}\eta_{v_{\pi_2}} \, u^\dag_{\pi_1}(t_1) r^\dag_{\pi_2}(t_2)\ketbra{\bm 0} u_{\pi_1}(t_1) r_{\pi_2}(t_2), \end{split} \end{equation} which describes the event where the upper detector for polarization $\pi_1$ clicks at time $t_1$ and the right detector for polarization $\pi_2$ clicks at time $t_2$. In principle, one can also compute the probability of events where both upper detectors or both right detectors click at different times, but here we are not interested in those events. \begin{equation}gin{widetext} \subsubsection{Coincidence rates for orthogonally polarized photons} Let us now compute the coincidence rates for two clicks from orthogonally polarized photons. We will explicitly compute the rate $\text{det}_{v,h}(t_1,t_2)$. By the Born rule, one has \begin{equation}\begin{equation}gin{split} \text{det}_{v,h}(t_1,t_2) &= \tr \Sigma_t E_{v,h}(t_1,t_2) \\ & =\tr \left(\mathbf{H}_t^{\rm A}\otimes \mathbf{H}_t^{\rm B} + \mathbf{H}_t^{\rm A}\otimes \mathbf{V}_t^{\rm B}+ \mathbf{V}^{\rm A}_t\otimes \mathbf{H}_t^{\rm B} +\mathbf{V}^{\rm A}_t\otimes \mathbf{V}_t^{\rm B}\right)E_{v,h}(t_1,t_2) \\ &= \tr (\mathbf{H}_t^{\rm A} \otimes \mathbf{V}^{\rm B}_t + \mathbf{V}_t^{\rm A} \otimes \mathbf{H}^{\rm B}_t) E_{v,h}(t_1,t_2) \\ & = \frac{\eta_{u_v} \eta_{r_{h}}}{4} \big(\tr \mathbf{H}_t^{\rm A} a_{\rm h}^\dag(t_2)\ketbra{\bm 0}a_{\rm h}(t_2) \big) \big(\tr \mathbf{V}_t^{\rm B} b_{\rm v}^\dag(t_1)\ketbra{\bm 0}b_{\rm v}(t_1) \big) \\ &+ \frac{\eta_{u_v} \eta_{r_h}}{4} \big(\tr \mathbf{H}_t^A a_{\rm v}^\dag(t_1)\ketbra{\bm 0}a_{\rm v}(t_2) \big) \big(\tr \mathbf{V}_t^B b_{\rm h}^\dag(t_1)\ketbra{\bm 0}b_{\rm h}(t_2) \big), \end{split} \end{equation} or simply \begin{equation}\label{eq: det vh} \text{det}_{v,h}(t_1,t_2) =\frac{\eta_{u_v} \eta_{r_h}}{4} \big(p_{\rm h}^{\rm A}(t_2) p_{\rm v}^{\rm B}(t_1)+ p_{\rm v}^{\rm A}(t_1) p_{\rm h}^{\rm B}(t_2)\big). \end{equation} where $p_{\rm h}^{\rm A}(t_2)= \tr \mathbf{H}_t^{\rm A} a_{\rm h}^\dag(t_2)\ketbra{\bm 0}a_{\rm h}(t_2)$ is the probability density that Node A emits a horizontally polarized photon at time $t_2$, and the probability densities for a vertically polarized photon and for Node B are defined equivalently. This coincidence rate can already be computed from the ion-cavity state according to Eq.~\eqref{eq:envelope} because photons of orthogonal polarization do not interfere. Similarly, one finds \begin{equation}\label{eq: det hv} \text{det}_{h,v}(t_2,t_1) =\frac{\eta_{u_h} \eta_{r_v}}{4} \big(p_{\rm h}^{\rm A}(t_2) p_{\rm v}^{\rm B}(t_1)+ p_{\rm v}^{\rm A}(t_1) p_{\rm h}^{\rm B}(t_2)\big). \end{equation} \subsubsection{Coincidence rates for photons with identical polarization} It is more interesting to analyze the detection rates for two detectors sensitive to the same polarization. 
For example, consider $\text{det}_{h,h}(t_1,t_2)$, which is related to the projector on \begin{equation} \begin{equation}gin{split} \ket{ h(t_1), h(t_2)} & =u^\dag_h(t_1) r^\dag_h(t_2) \ket{\bm 0} \\ &= \frac{1}{2}(a^\dag_{\rm h}(t_1) + \mathrm{i} b^\dag_{\rm h}(t_1)) (\mathrm{i} a^\dag_{\rm h}(t_2) + b^\dag_{\rm h}(t_2))\ket{\bm 0} \\ & = \frac{1}{2}(a^\dag_{\rm h}(t_1) b^\dag_{\rm h}(t_2) - a^\dag_{\rm h}(t_2)b^\dag_{\rm h}(t_1))\ket{\bm 0} + \dots \end{split}\end{equation} The dots here indicate terms with two photons emitted by a single node; these terms can be ignored as $\Sigma_t$ has no support on such states. For $t_1,t_2\leq t$, the rate is thus given by \begin{equation}\label{eq: bunch hh} \begin{equation}gin{split} \text{det}_{h,h}(t_1,t_2) &= \eta_{u_h}\eta_{r_h} \tr \Sigma_t \ketbra{ h(t_1), h(t_2)} \\ & =\eta_{u_h}\eta_{r_h} \tr \left(\mathbf{H}_t^{\rm A}\otimes \mathbf{H}_t^{\rm B} + \mathbf{H}_t^{\rm A}\otimes \mathbf{V}_t^{\rm B}+ \mathbf{V}^{\rm A}_t\otimes \mathbf{H}_t^{\rm B} +\mathbf{V}^{\rm A}_t\otimes \mathbf{V}_t^{\rm B}\right) \ketbra{ h(t_1), h(t_2)}\\ &= \eta_{u_h}\eta_{r_h} \tr \mathbf{H}^{\rm A}_t \otimes \mathbf{H}^{\rm B}_t \ketbra{ h(t_1), h(t_2)}\\ &= \frac{\eta_{u_h}\eta_{r_h}}{4} \int_0^t \dd s\, \dd s' \, \tilde{\rm P}_s^{\rm A}(s) \tilde{\rm P}_s^{\rm B}(s') \left| \bra{H^{\rm A}_{t|s},H^{\rm B}_{t|s'}}\big(a^\dag_{\rm h}(t_1) b^\dag_{\rm h}(t_2) - a^\dag_{\rm h}(t_2)b^\dag_{\rm h}(t_1)\big)\ket{\bm 0}\right|^2 \\ &= \frac{\eta_{u_h}\eta_{r_h}}{4} \int_0^t \dd s\, \dd s' \, \tilde{\rm P}_s^{\rm A}(s) \tilde{\rm P}_s^{\rm B}(s') \Big| \begin{equation}ta^{\rm A}(t_1|s)\begin{equation}ta^{\rm B}(t_2|s')- \begin{equation}ta^{\rm A}(t_2|s)\begin{equation}ta^{\rm B}(t_1|s')\Big|^2. \end{split} \end{equation} Similarly, \begin{equation}\label{eq: bunch vv} \text{det}_{v,v}(t_1,t_2) = \frac{\eta_{u_v}\eta_{r_v}}{4} \int_0^t \dd s\, \dd s' \, \tilde{\rm P}_s^{\rm A}(s) \tilde{\rm P}_s^{\rm B}(s') \Big| \alpha^{\rm A}(t_1|s)\alpha^{\rm B}(t_2|s')- \alpha^{\rm A}(t_2|s)\alpha^{\rm B}(t_1|s')\Big|^2. \end{equation} In the integrals above, in order to use a more compact notation, we formally extend the function $\alpha(t|s)=\begin{equation}ta(t|s)$ to times $t<s$ by setting $\alpha(t|s)=\begin{equation}ta(t|s) = 0$ for $t<s$, as it is impossible for the photon to be emitted from the cavity before a scattering event to the $\ket{S,0}$ level. One can easily see from Eqs.~\eqref{eq: bunch hh} and \eqref{eq: bunch vv} that for indistinguishable pure photons, that is, $\alpha^{\rm A}(t|s)=\alpha^{\rm B}(t|s)$ and $\begin{equation}ta^{\rm A}(t|s)=\begin{equation}ta^{\rm B}(t|s)$, and no scattering, that is, $\tilde{\rm P}_s^{\rm A}(s) = \tilde{\rm P}_s^{\rm B}(s) = \delta(s)$, the photons bunch perfectly as expected at the outputs of the nonpolarizing beamsplitter, that is, $\text{det}_{h,h}(t_1,t_2)=\text{det}_{v,v}(t_1,t_2)=0$. \end{widetext} \subsubsection{Interference visibility} Since we can now compute the coincidence rates at all pairs of detection times $(t_1,t_2)$ (Eqs.~\eqref{eq: bunch hh} and \eqref{eq: bunch vv}), we are also able to calculate the two-photon interference visibility. To do so, let us first define probabilities to detect two clicks delayed by at most $T$: \begin{equation}\label{eq: Det} \text{Det}_{\pi_1,\pi_2}(T) \equiv \int_{|t_1-t_2|\leq T} \hspace{- 5 pt} \dd t_1 \dd t_2 \, \text{det}_{\pi_1,\pi_2}(t_1,t_2). 
\end{equation} Then the two-photon interference visibility is by definition given by \begin{equation} V(T) = 1- \frac{\text{Det}_{h,h}(T)+\text{Det}_{v,v}(T)}{\text{Det}_{v,h}(T)+\text{Det}_{h,v}(T)}, \end{equation} which one computes with the help of Eqs.~\eqref{eq: det vh}, \eqref{eq: det hv}, \eqref{eq: bunch hh}, \eqref{eq: bunch vv}, and \eqref{eq: Det}.\\ To account for the effects of the cavity jitter at Node A, we simply replace the detection probabilities above with average quantities \begin{equation} \text{Det}_{{\pi_1,\pi_2}}(T) = \sum_{k} \text{p}(\omega_k) \text{Det}^{(\delta \omega_k)}_{{\pi_1,\pi_2}}(T), \end{equation} which are obtained by averaging the detection probabilities over the possible values of $\omega_k$, as discussed previously in Sec.~\ref{sec:Eff-jitt}. \subsubsection{Comparison with the experimental data} \label{se: deconstructiong} In this section, we focus on Fig.~3b of the main text, in which the interference visibility computed with the theoretical model presented above is compared with the experimentally determined values. The figure has already been explained and discussed in the main text; our goal here is to make the connection clear between the notation used in the previous sections and the values in the plot. The green solid line and green dashed line in Fig.~3b, which have the lowest values for visibility as a function of coincidence window, are computed with the model discussed above. The only difference between the two is the value of the cavity jitter parameter $\gamma_{clj}$ for Node A, which is given by $\gamma_{clj}= \SI{0.1}{\mega\hertz}$ for the dashed line and $\gamma_{clj}= \SI{0.06}{\mega\hertz}$ for the solid line. Both values are consistent with independently characterized experimental parameters within uncertainties. Next, we compute the visibility expected in the absence of both laser noise ($\gamma_{ss} = 0$) and cavity jitter ($\gamma_{clj}=0$), which is plotted in orange. These are ``technical" noises that could be reduced to negligible values by realistic improvements to the setup at Node A. Finally, the top (blue) line provides information about the role of the mismatch between pure photon wavepackets. Concretely, we consider the case $\gamma_{ss}= \gamma_{clj} = 0$ and compute the interference visibility between pure photons with the wavepackets $\ket{H_{t|0}^{\rm A(B)}}$ and $\ket{V_{t|0}^{\rm A(B)}}$ given in Eq.~\eqref{eq: pure wavepackets}, which describe the photonic states with no scattering on the $\ket{S}-\ket{P}$ transition during their evolution. The difference between the orange line and the blue line is thus solely due to the photon purity, that is, to the fact that the orange line takes into account emitted photons that are not pure due to spontaneous emission from $\ket{P}$ to $\ket{S}$. This effect can be in principle reduced by improving the coherent coupling strengths $g_1$ and $g_2$ between the ion and the cavity modes. Note that the computation of the average number of scattering events from $\ket{P}$ to $\ket{S}$ per experimental run gives $2.1$ for Node B and $5.3$ for Node A. \end{document}
\begin{document} \title{Two remarks about Ma\~n\'e's conjecture} \section{Introduction} In this note we consider an autonomous Tonelli Lagrangian $L$ on a closed manifold $M$, that is, a $C^2$ function $L \colon\thinspace TM \longrightarrow {\mathbb R}$ such that $L$ is fiberwise strictly convex and superlinear. Then the Euler-Lagrange equation associated with $L$ defines a complete flow $\phi_t$ on $TM$. Define $\mathcal{M}_{inv}$ to be the set of $\Phi_t$-invariant, compactly supported, Borel probability measures on $TM$. Mather showed that the function (called action of the Lagrangian on measures) \[ \begin{array}{rcl} \mathcal{M}_{inv} & \longrightarrow & {\mathbb R} \\ \mu & \longmapsto & \int_{TM} L d\mu \end{array} \] is well defined and has a minimum. A measure achieving this minimum is called $L$-minimizing. The union, in $TM$, of the support of all minimizing measures is called Mather set of $L$, and denoted $\mathcal{M}(L)$. It is compact and $\phi_t$-invariant. See \cite{Mather91} and \cite{Fathi_bouquin} for more background. Observe that if $f$ is a $C^2$ function on $M$, $L+f$ is also a Tonelli Lagrangian. Adding a function to a Lagrangian is called perturbing the Lagrangian by a potential. Following Ma\~n\'e we say a property holds for a generic Lagrangian if, given any Lagrangian, the property holds for a generic perturbation by a potential. Ma\~n\'e conjectured a generic description of the minimizing measures : \begin{conjecture} [\cite{Mane97}] \label{forte} Let \begin{itemize} \item $M$ be a closed manifold \item $L$ be an autonomous Tonelli Lagrangian on $TM$ \item $\mathcal{O}_1(L)$ be the set of $f$ in $C^{\infty}(M)$ such that the Mather set of $L+f$ consists of one periodic orbit. \end{itemize} Then the set $\mathcal{O}_1(L)$ is residual in $C^{\infty}(M)$. \end{conjecture} In other words, for a generic Lagrangian, there exits a unique minimizing measure, and it is supported by a periodic orbit. A similar conjecture can be made replacing $C^{\infty}(M)$ by $C^{k}(M)$ for any $k \geq 2$. Many more interesting invariant sets can be obtained by minimization than just the Mather set. If $\omega$ is a closed one-form on $M$, then $L-\omega$ is a Tonelli Lagrangian, and it has the same Euler-Lagrange flow as $L$. Its Mather set, however, is different in general. The Mather set of $L-\omega$ only depends on the cohomology class $c$ of $\omega$, we denote it $\mathcal{M}(L,c)$. It is often interesting to obtain information simultaneously on the Mather sets $\mathcal{M}(L,c)$ for a large set of cohomology classes. Thus Ma\~n\'e proposed the \begin{conjecture}[\cite{Mane96}]\label{faible} If $L$ is a Tonelli Lagrangian on a manifold $M$, there exists a residual subset $\mathcal{O}_2(L)$ of $C^{\infty}(M)$, such that for any $f$ in $\mathcal{O}_2(L)$, there exists an open and dense subset $U(L,f)$ of $H^1 (M,{\mathbb R})$ such that, for any $c$ in $U(L,f)$, the Mather set of $(L,c)$ consists of one periodic orbit. \end{conjecture} Intuitively Conjecture \ref{faible} is weaker than Conjecture \ref{forte} because we allow a larger set of perturbations (potentials and closed one-forms instead of just potentials). However the requirement of an open dense set in Conjecture \ref{faible} makes it far from obvious. In section \ref{section2} we prove that Conjecture \ref{forte} contains Conjecture \ref{faible}, using recent tools from Fathi's weak KAM theory, the most prominent of which is the Aubry set $\mathcal{A}(L)$. 
All we need to know about the Aubry set is that \begin{itemize} \item it consists of the Mather set, and (possibly) orbits homoclinic to the Mather set (see \cite{Fathi_bouquin}) \item when there is only one minimizing measure, the Aubry set is upper semi-continuous as a function of the Lagrangian, that is, for any neighborhood $V$ of $\mathcal{A}(L)$ in $TM$, there exists a neighborhood $\mathcal{U}$ of $L$ in the $C^2$ compact-open topology, such that for any $L_1$ in $\mathcal{U}$, we have $\mathcal{A}(L_1)\subset V$ (see \cite{Bernard_Conley}). \end{itemize} We first prove that Conjecture \ref{forte} is equivalent to the apparently stronger \begin{conjecture}\label{forte_Aubry} Let \begin{itemize} \item $M$ be a closed manifold \item $L$ be an autonomous Tonelli Lagrangian on $TM$ \item $\mathcal{O}_3(L)$ be the set of $f$ in $C^{\infty}(M)$ such that the Aubry set of $L+f$ consists of one, hyperbolic periodic orbit. \end{itemize} Then the set $\mathcal{O}_3(L)$ is residual in $C^{\infty}(M)$. \end{conjecture} Then we prove that Conjecture \ref{forte_Aubry} contains the following, which obviously contains Conjecture \ref{faible} : \begin{conjecture}\label{faible_Aubry} If $L$ is a Tonelli Lagrangian on a manifold $M$, there exists a residual subset $\mathcal{O}_4(L)$ of $C^{\infty}(M)$, such that for any $f$ in $\mathcal{O}_2(L)$, there exists an open and dense subset $U(L,f)$ of $H^1 (M,{\mathbb R})$ such that, for any $c$ in $U(L,f)$, the Aubry set of $(L+f,c)$ consists of one, hyperbolic periodic orbit. \end{conjecture} Conjecture \ref{faible_Aubry} is proved, in the case where the dimension of $M$ is two, in \cite{lowdim} (after a sketch of a proof appeared in \cite{ijm}). The analogous statement for Lagrangians which depend periodically on time is proved, in the case where the dimension of $M$ is one, in \cite{Osvaldo}. Conjecture \ref{forte} may be seen as an Aubry-Mather version of the Closing Lemma. This suggests that it should be true in the $C^2$ topology on Lagrangians and false in the $C^k$ topology for $k > 2$. If we want to prove the $C^k$ version of Conjecture \ref{forte}, and we are lucky enough to have a sequence of periodic orbits $\gamma_n$ which approximate our Mather set, then the first idea that comes to mind is to perturb $L$ by a non-negative potential $f_n$ which vanishes only on $\gamma_n$. Then $\gamma_n$ is still an orbit of $L+f_n$. If we can find $f_n$ big enough for $\gamma_n$ to be $L+f_n$-minimizing, but small enough for the $C^k$-norm of $f_n$ to converge to zero, then we are done. In Section \ref{section3} we prove that this naive approach doesn't work in the $C^k$-topology, for $k \geq 4$. Specifically, we give an example of a Lagrangian $L$ on the two-torus, such that for any periodic orbit $\gamma$ of $L$, and any $C^4$ function $f$ on the two-torus, if $\gamma$ is $L+f$-minimizing, then the $C^4$ norm of $f$ is bounded below by a constant which only depends on $L$. \textbf{Acknowledgements} This work was partially supported by the ANR project ''Hamilton-Jacobi et th\'eorie KAM faible''. \section{}\label{section2} \begin{lemma}\label{lemme1} Let \begin{itemize} \item $M$ be a closed manifold \item $L$ be an autonomous Tonelli Lagrangian on $TM$ \item $\mathcal{O}_3(L)$ be the set of $f$ in $C^{\infty}(M)$ such that the Aubry set of $L+f$ consists of one, hyperbolic periodic orbit \item $\mathcal{O}_1(L)$ be the set of $f$ in $C^{\infty}(M)$ such that the Mather set of $L+f$ consists of one periodic orbit. 
\end{itemize} Then $\mathcal{O}_3(L)$ is open and dense in $\mathcal{O}_1(L)$. \end{lemma} \proof We first prove that $\mathcal{O}_3(L)$ is open in $\mathcal{O}_1(L)$. Take $f \in \mathcal{O}_3(L)$. Replacing $L$ with $L+f$, we may assume $f=0$. Let $\gamma$ be the hyperbolic periodic orbit which comprises $\mathcal{A}(L+f)$. By a classical property of hyperbolic periodic orbits, there exists a neighborhood $\mathcal{U}_1$ of the zero function in $C^{\infty}(M)$, and a neighborhood $V$ of $\gamma$ in $TM$ such that for any $f \in \mathcal{U}_1$, for any energy level $E$ of $L$, the only invariant set of the Euler-Lagrange flow of $L$ contained in $E \cap V$, if any, is a hyperbolic periodic orbit homotopic to $\gamma$. Since $\mathcal{A}(L)$ is a periodic orbit, the quotient Aubry set $A$ has but one element. Thus by \cite{Bernard_Conley}, there exists a neighborhood $\mathcal{U}_2$ of the zero function in $C^{\infty}(M)$, such that for all $f$ in $\mathcal{U}_2$, we have $\mathcal{A}(L+f)\subset V$. Therefore, for any $ f \in \mathcal{U}_1 \cap \mathcal{U}_2$, the Aubry set $\mathcal{A}(L+f)$ consists of one, hyperbolic periodic orbit. Now let us prove that $\mathcal{O}_3(L)$ is dense in $\mathcal{O}_1(L)$. Take $f \in \mathcal{O}_1(L)$. Replacing $L$ with $L+f$, we may assume $f=0$. Let $\gamma$ be the periodic orbit which comprises $\mathcal{M}(L)$. Now let us take a smooth function $g$ on $M$ such that $g$ vanishes on the projection to $M$ of $\gamma$ (which we again denote $\gamma$ for simplicity), and $\forall x \in M,\ g(x) \geq d(x,\gamma)^2$, where the distance is meant with respect to some Riemannian metric on $M$. Let $\lambda$ be any positive number. We will show that $\lambda g \in \mathcal{O}(h)$, which proves that $\mathcal{O}(h)$ is dense in $\mathcal{O}_1(L)$. Observe that $\gamma$ is a minimizing hyperbolic periodic orbit of the Euler-Lagrange flow of $L+\lambda g$ (see \cite{CI99}). Furthermore , $\alpha_{L+\lambda g}(0) =\alpha_L(0)$, where $\alpha_L(0)$ is Ma\~n\'e's critical value for the Lagrangian $L$. Adding a constant to $L$ if necessary, we assume $\alpha_L(0)=0$. Recall that the Aubry set is the union of the Mather set and orbits homoclinic to the Mather set. Therefore, to prove that $\lambda g \in \mathcal{O}_0 (h)$, it suffices to prove that the Aubry set $\mathcal{A}\left(L+\lambda g \right)$ does not contain any orbit homoclinic to $\gamma$. Assume $\delta \colon\thinspace {\mathbb R} \longrightarrow M$ is an extremal of $L+\lambda g$, homoclinic to $\gamma$. Since $g(\delta(t)) >0$ for all $t$, there exists $C>0$ such that \[ \int^{+\infty}_{-\infty} g(\delta (t))dt \geq 2C. \] Let $u$ be a weak KAM solution for $L$. We have, for any $s,t \in {\mathbb R}$, remembering that $\alpha_L(0)=0$, \[ \int^{t}_{s} L(\delta (t),\dot{\delta}(t))dt \geq u\left(\delta(t)\right)-u\left(\delta(s)\right). \] Since is homoclinic to $\gamma$ there exist two sequences $t_n$ and $s_n$ that converge to $+\infty$, such that $\delta(t_n)$ and $\delta (-s_n)$ converge to the same point $x$ on $\gamma$, so for $n$ large enough \[ \int^{t_n}_{-s_n} L(\delta (t),\dot{\delta}(t))dt \geq -C. \] Therefore, for $n$ large enough, \[\int^{t_n}_{-s_n} \left(L+\lambda g \right)(\delta (t),\dot{\delta}(t))dt > C. 
\] On the other hand, since $\alpha_{L+\lambda g}(0) =0$, if $\delta$ were contained in the projected Aubry set of $L +\lambda g $, we would have, denoting $u_{\lambda}$ a weak KAM solution for $L +\lambda g $, \[\int^{t_n}_{-s_n} \left(L+\lambda g \right)(\delta (t),\dot{\delta}(t))dt = u_{\lambda}\left(\delta(t)\right)-u_{\lambda}\left(\delta(s)\right) \] which converges to zero because $\delta(t_n)$ and $\delta (-s_n)$ converge to the same point $x$ on $\gamma$. Therefore the Aubry set of $L +\lambda g $ consists of $\gamma$ alone, which proves that $\lambda g \in \mathcal{O}_3(L)$, and the Lemma. \qed Therefore $\mathcal{O}(L)$ is residual in $C^{\infty}(M)$ if and only if $\mathcal{O}_1(L)$ is. Therefore, Conjecture \ref{forte} is equivalent to Conjecture \ref{forte_Aubry}. Now we show that Conjecture \ref{forte_Aubry} contains Conjecture \ref{faible_Aubry}, which obviously contains Conjecture \ref{faible}. Assume Conjecture \ref{forte_Aubry} is true. Let $L$ be an autonomous Tonelli Lagrangian on a manifold $M$. Let $c_i, i \in {\mathbb N}$ be a countable dense subset of $H^1 (M,{\mathbb R})$. Take, for every $i \in {\mathbb N}$, a closed one-form $\omega_i$ with cohomology $c_i$. Since Conjecture \ref{forte_Aubry} is true for every Lagrangian $L-\omega_i$, for every $i \in {\mathbb N}$, there exists a residual subset $\mathcal{O}_i$ of $C^{\infty}(M)$ such that for every $f$ in $\mathcal{O}_i$, $\mathcal{A}(L+f,c_i)$ consists of one hyperbolic periodic orbit. Then the intersection, over $i \in {\mathbb N}$, of $\mathcal{O}_i$ is a residual subset $\mathcal{O}$ of $C^{\infty}(M)$. For every $f$ in $\mathcal{O}$, for every $i \in {\mathbb N}$, $\mathcal{A}(L+f,c_i)$ consists of one hyperbolic periodic orbit $\gamma_i$. As in the proof of Lemma \ref{lemme1}, there exists a neighborhood $V_i$ of $c_i$ in $H^1(M,{\mathbb R})$, such that for any $c$ in $V_i$, the Aubry set $\mathcal{A}(L+f,c)$ consists of one hyperbolic periodic orbit homotopic to $\gamma_i$. The union, over $i \in {\mathbb N}$, of the $V_i$, is an open and dense subset $V$ of $H^1(M,{\mathbb R})$, and for any $c$ in $V$, $\mathcal{A}(L+f,c)$ consists of one hyperbolic periodic orbit, so Conjecture \ref{faible_Aubry} is true. \section{An example}\label{section3} Let \begin{itemize} \item $r$ be a quadratic irrational number, for instance $\sqrt{2}$ \item $p_0$ and $q_0$ be real numbers such that $p^{2}_{0}+q^{2}_{0}=1$ and $p_0 / q_0 = r$ \item ${\mathbb T}^2$ be ${\mathbb R}^2 / {\mathbb Z}^2$, endowed with canonical coordinates $(x,y)$ \item $L$ be the Lagrangian on $T{\mathbb T}^2$ defined by \[ L(x,y,u,v) := \frac{u^2 + v^2}{2}- \left(p_0 u + q_0 v\right) \] where $(u,v)$ are the tangent coordinates to $(x,y)$. \end{itemize} Assume that for some function $f$ on ${\mathbb T}^2$, $L+f$ has a minimizing periodic orbit $\gamma$, and furthermore, $\gamma$ is an orbit of $L$, that is, it has the form $t \longmapsto (pt,qt)$ for some real numbers $p$ and $q$. Then, if $T$ is the smallest period of $\gamma$, $(pT,qT) \in {\mathbb Z}^2$ and $pT$, $qT$ are mutually prime. Consider the map \[ \begin{array}{rcl} F \colon\thinspace {\mathbb R} & \longrightarrow & {\mathbb R} \\ \lambda & \longmapsto & \frac{1}{T}\int^{T}_{0} f(pt, qt + \lambda) dt. \end{array} \] Observe that $F$ is $1$-periodic. We now prove that $F$ is $(pT)^{-1}$-periodic. Indeed, take $r,s$ in ${\mathbb Z}$ such that $pTr-qTs=1$. 
Then for any $t$, \begin{eqnarray*} (pt, qt + \frac{1}{pT})& = & (pt, qt + r -\frac{qs}{p})\\ & = & \left(pt, q(t -\frac{s}{p}) \right) \ \mbox{mod}{\mathbb Z}^2 \\ &=& \left(p(t -\frac{s}{p})+s, q(t -\frac{s}{p}) \right) \ \mbox{mod}{\mathbb Z}^2 \\ &=& \left(p(t -\frac{s}{p}), q(t -\frac{s}{p}) \right) \ \mbox{mod}{\mathbb Z}^2 \\ \end{eqnarray*} so \begin{eqnarray*} F\left(\lambda + \frac{1}{pT}\right)&=& \frac{1}{T}\int^{T}_{0} f(pt, qt + \frac{1}{pT}+ \lambda) dt \\ &=& \frac{1}{T}\int^{T}_{0} f(p(t -\frac{s}{p}), q(t -\frac{s}{p})+ \lambda) dt = F\left(\lambda \right) \end{eqnarray*} using the change of variable $t \mapsto t -s/p$ (and the fact that $F$ is $1$-periodic). Now we prove that \[ \int^{1}_{0}F(\lambda) d\lambda = \int_{{\mathbb T}^2}f d\mbox{leb} \] where $\mbox{leb}$ denotes the standard Lebesgue measure on ${\mathbb T}^2={\mathbb R}^2 / {\mathbb Z}^2$. Indeed, let $\nu$ be the measure on ${\mathbb T}^2$ defined by \[ \int g (x,y) d\nu (x,y) := \int^{1}_{0} d\lambda \left\{ \frac{1}{T}\int^{T}_{0} g(pt,qt + \lambda) dt \right\} \] for any continuous function $g$ on ${\mathbb T}^2$. We want to prove that $\nu $ is actually $\mbox{leb}$. First let us show that $\nu$ is invariant under translations. Let $(u,v)$ be any vector in ${\mathbb R}^2$. We have \begin{eqnarray*} \int g(x+u,y+v) d\nu (x,y) &=& \int^{1}_{0} d\lambda \left\{ \frac{1}{T}\int^{T}_{0} g(pt+u,qt + \lambda +v ) dt \right\} \\ &=& \int^{1}_{0} d\lambda \left\{ \frac{1}{T}\int^{T}_{0} g(p(t+\frac{u}{p}),q(t+\frac{u}{p}) + \lambda - \frac{uq}{p}) dt \right\} \\ &=& \int^{1}_{0} d\lambda \left\{ \frac{1}{T}\int^{T}_{0} g(pt,qt + \lambda - \frac{uq}{p}) dt \right\} \\ &=& \int^{1}_{0} d\lambda \left\{ \frac{1}{T}\int^{T}_{0} g(pt,qt ) dt \right\} = \int g (x,y) d\nu (x,y)\\ \end{eqnarray*} where we have used, in succession, the changes of variables $t \mapsto t +u/p$ and $\lambda \mapsto \lambda - uq/p$. So $\nu$ is invariant under translations. Furthermore $\int 1 d\nu = 1$ so $\nu $ is actually $\mbox{leb}$. Now let us use the fact that $\gamma$ is $L+f$-minimizing. Let $\mu$ be the probability measure equidistributed along $\gamma$. We have \begin{eqnarray*} \int \left(L+f\right) d\mu &=& \frac{1}{T}\int^{T}_{0} dt \left\{ \frac{p^2 + q^2}{2}- \left(p_0 p + q_0 q\right) +f(pt,qt) \right\} \\ &=& \frac{p^2 + q^2}{2}- \left(p_0 p + q_0 q\right) +F(0). \end{eqnarray*} Let $\mu_0$ be the measure on $T{\mathbb T}^2$ defined by \[ \int g (x,y,u,v) d\mu_0 (x,y,u,v) := \int^{1}_{x=0}dx \; \int^{1}_{y=0} g(x,y,p_0,q_0) dy \] for any continuous function $g$ on $T{\mathbb T}^2$. Observe that the measure $\mu_0$ is $L$-minimizing. In particular it is closed (see \cite{FS}, Theorem 1.6). We have \[ \int_{T{\mathbb T}^2} L d\mu_0 = -\frac{1}{2} \mbox{ and } \int_{T{\mathbb T}^2} f d\mu_0 = \int_{{\mathbb T}^2} f d\mbox{leb}= \int^{1}_{0}F(\lambda) d\lambda \] where we have implicitely extended $f$ to a function on $T{\mathbb T}^2$ by setting $f(x,y,u,v):=f(x,y)$ for any $u$ and $v$. Since $\mu$ is $L+f$-minimizing, and $\mu_0$ is closed, we have (see \cite{FS}, Theorem 1.6) \[ \int_{T{\mathbb T}^2} (L+f) d\mu_0 \geq \int_{T{\mathbb T}^2} (L+f) d\mu \] that is, \begin{eqnarray*} \int^{1}_{0}F(\lambda) d\lambda -F(0) & \geq & \int_{T{\mathbb T}^2} L d\mu -\int_{T{\mathbb T}^2} L d\mu_0 \\ &=& \frac{p^2 + q^2}{2}- \left(p_0 p + q_0 q\right) +\frac{1}{2} \\ &=& \frac{1}{2}(p-p_0)^2 + \frac{1}{2}(q-q_0)^2 . 
\end{eqnarray*} Now let us use the fact that $r=p_0 /q_0$ is quadratic, and $pT/qT = p/q$ is rational, so there exists a constant $C_0$ such that \[\left| \frac{p_0}{q_0}-\frac{p}{q}\right| \geq \frac{C_0}{(pT)^2}. \] Hence, setting $C := C^{2}_{0}/2$, \[ \frac{1}{2}(p-p_0)^2 + \frac{1}{2}(q-q_0)^2 \geq \frac{C}{(pT)^4} \] whence \[\int^{1}_{0}F(\lambda) d\lambda -F(0) \geq \frac{C}{(pT)^4}. \] Therefore, since $F$ is $(pT)^{-1}$-periodic, there exists a $\lambda_0 \in \left[0, (pT)^{-1} \right]$ such that \[ F(\lambda_0) -F(0) \geq \frac{C}{(pT)^4}. \] Hence there exists a $\lambda_1 \in \left[0, (pT)^{-1} \right]$ such that $\left|F'(\lambda_1)\right| \geq C(pT)^{-3}$. On the other hand, since $F$ is $(pT)^{-1}$-periodic, there exists a $\lambda_2 \in \left[0, (pT)^{-1} \right]$ such that $F'(\lambda_2)=0$. Thus $ \left| F'(\lambda_1)-F'(\lambda_2)\right|\geq C(pT)^{-3}$, so there exists a $\lambda_3 \in \left[0, (pT)^{-1} \right]$ such that $\left|F''(\lambda_3)\right| \geq C(pT)^{-2}$. Iterating this process we show there exists a $\lambda \in \left[0, (pT)^{-1} \right]$ such that $\left|F^{(4)}(\lambda)\right| \geq C$, that is, \[\left|\frac{1}{T}\int^{T}_{0} f^{(4)}(pt,qt+\lambda ) dt \right|\geq C. \] In particular the $C^4$-norm of $f$ is bounded below by $C$. \end{document}
\begin{document} \newcommand{\diracsl}[1]{\not\hspace{-3.0pt}#1} \newcommand{\psi_{\!{}_1}}{\psi_{\!{}_1}} \newcommand{\psi_{\!{}_2}}{\psi_{\!{}_2}} \newcommand{\psv}[1]{\psi_{\!{}_#1}} \newcommand{\lab}[1]{{}^{(#1)}} \newcommand{\psub}[1]{{\cal P}_{{}_{\! \! #1}}} \newcommand{\sst}[1]{{\scriptstyle #1}} \newcommand{\ssst}[1]{{\scriptscriptstyle #1}} \newcommand{{{\scriptscriptstyle\succ}}}{{{\scriptscriptstyle\succ}}} \newcommand{{{\scriptscriptstyle\prec}}}{{{\scriptscriptstyle\prec}}} \title[Quantum Averages of Weak Values] {Quantum Averages of Weak Values} \author{ Yakir Aharonov } \affiliation{ Department of Physics and Astronomy, University of South Carolina, Columbia, SC 29208} \affiliation{Department of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978, Israel} \affiliation{Department of Physics and Astronomy, George Mason University, Fairfax, VA 22030 } \author{ Alonso Botero } \email{[email protected]} \affiliation{ Department of Physics and Astronomy, University of South Carolina, Columbia, SC 29208} \affiliation{ Departamento de F\'{\i}sica, Universidad de Los Andes, Apartado A\'ereo 4976, Bogot\'a, Colombia } \date{\today} \begin{abstract} We re-examine the status of the weak value of a quantum mechanical observable as an objective physical concept, addressing its physical interpretation and general domain of applicability. We show that the weak value can be regarded as a \emph{definite} mechanical effect on a measuring probe specifically designed to minimize the back-reaction on the measured system. We then present a new framework for general measurement conditions (where the back-reaction on the system may not be negligible) in which the measurement outcomes can still be interpreted as \emph{quantum averages of weak values}. We show that in the classical limit, there is a direct correspondence between quantum averages of weak values and posterior expectation values of classical dynamical properties according to the classical inference framework. \end{abstract} \pacs{PACS numbers 03.65.Ud, 03.67.-a} \maketitle \section{Introduction} In previous publications\ccite{AV90,AV91,RA95,Vaid96,Vaidman96b}, an objective description of a quantum system in the time interval between two complete measurements has been proposed in terms of \emph{two} state vectors, together with a new type of physical quantity, the ``weak value" of a quantum mechanical observable. Specifically, for a system drawn from an ensemble preselected in the state $\ket{\psi_1}$ and postselected in the state $\ket{\psi_2}$, the weak value for the observable $\hat{A}$ is defined as \begin{equation}\label{weakvdef} A_w \equiv \weakv{\psi_2}{\hat{A}}{\psi_1} \, , \end{equation} where the real part is the quantity of primary physical interest (and to which the term ``weak value" shall henceforth apply unless otherwise noted). The suggestion was motivated operationally by the fact that both real and imaginary parts of weak values can be linked to conditional measurement statistics predicted by standard quantum mechanics for the general class of ``weak measurements", defined so as to minimize the disturbance to the system as a result of a diminished interaction with the measuring instrument. Under these conditions, joint weak measurements of two non-commuting observables can be made with negligible mutual interference, thus ensuring that the simultaneous assignment of weak values to all elements of the observable algebra is operationally consistent. 
The usefulness of this description has been demonstrated, both theoretically and experimentally, in a number of applications in which novel aspects of quantum processes have been uncovered when analyzed in terms of weak values. These include photon polarization interference\ccite{Duck89,KnightVaid,RSH,Parks99,Brunner03}, barrier tunnelling times \ccite{Stein94,Stein95,AER03}, photon arrival times \ccite{Ruseckas,Ahnert}, anomalous pulse propagation\ccite{RA02,Solli04,Brunner04}, correlations in cavity QED experiments\ccite{Wise02}, complementarity in ``which-way" experiments\ccite{Wise03,Garretsonetal}, non-classical aspects of light\ccite{JohNC1,JohNC2}, communication protocols\ccite{BR00} and retrodiction ``paradoxes" of quantum entanglement\ccite{ABPRT01,Molmer01,RLS03}. A certain amount of skepticism\ccite{Leggett,Peres,AVReply,Kastner98,AVKasreply,Kastner03} has nevertheless prevailed regarding the physical status of weak values, particularly in the light of the unconventional range of values that is possible according to\rref{weakvdef}. Indeed, the real part of $A_w$, describing the ``pointer variable" response in a weak measurement, may lie outside the bounds of the spectrum of $\hat{A}$. Manifestly ``eccentric" weak values, as are negative kinetic energies \ccite{APRV93,RAPV95} or negative particle numbers \ccite{ ABPRT01,RLS03}, are not easily reconciled with the physical interpretation that is traditionally attached to the respective observables. Less intuitive yet is when $\hat{A}$ stands for a projection operator, in which case the weak value suggests ``weak probabilities" taking generally non-positive values\ccite{Stein95,Wang,Garretsonetal}. Such bizarre interpretations call for a sharper clarification of what physical meaning should be attached to the weak value of an observable. Another item of skepticism surrounding the physical significance of weak values has to do with their general domain of applicability. It seems reasonable to demand of any new physical concept that it be applicable to a wide variety of situations outside the restricted context in which it is defined operationally. Although progress has been made in this direction\ccite{Vaidman96b,Adiabatic}, convincing evidence of the general validity of the concept of the weak value is still lacking. With these questions in mind, the aim of this paper is two-fold: First, we address the physical meaning of weak values by showing that there exists an unambiguous interpretation of the real part of the weak value as a \emph{definite} mechanical effect of the system on a measuring probe that is specifically designed to minimize the dispersion in the back-reaction on the system. Second, based on this interpretation, we present a new framework for the analysis of general von Neumann measurements, in which the measurement statistics are interpreted as {\em quantum averages of weak values} (QAWV). We believe this framework is physically intuitive and provides compelling evidence for the ubiquity of weak values in more general measurement contexts. In particular, we show that for arbitrary system ensembles, the expectation value of the reading of any von Neumann-type measurement is an average of weak values over a suitable posterior probability distribution. We furthermore show how QAWV framework has a natural correspondence in the classical limit with the posterior analysis of measurement data according to the classical inference framework. 
Thus, we can establish a correspondence between weak values and what in the macroscopic domain are regarded as objective classical dynamical variables. The paper is structured as follows: In Sec. \ref{eigvvswv}, we motivate the idea of averaging weak values by discussing the connection between preselected and pre- and postselected statistics in arbitrary von Neumann-type measurements. In Sec. \ref{mechint} we present the operational definition of the weak value as a definite mechanical effect associated with infinitesimally uncertain unitary transformations. The QAWV framework is then introduced in Sec. \ref{qaves} for arbitrary strength measurements. We provide an illustration in Sec. \ref{likecc}, where we discuss a number of measurement situations in which the framework gives a simple characterization of the outcome statistics. Finally, we establish in Sec. \ref{Clas} the classical correspondence of the QAWV framework. Some conclusions are given in Sec. \ref{concl}. \section{ Pre- and Postselected Measurement Statistics, Eigenvalues and Weak Values} \label{eigvvswv} The conventional interpretation of a quantum mechanical expectation value, such as $\qave{\psi}{\hat{A}}$ for an observable $\hat{A}$, is as an average of the eigenvalues of $\hat{A}$ over a probability distribution that is realized in the context of a complete strong measurement of $\hat{A}$. Our main suggestion in this paper is that for a wide class of generalized conditions on the von Neumann measurement of $\hat{A}$, the statistics of measurement outcomes can alternatively be related to an underlying statistics of a different quantity, the weak value of $\hat{A}$, which is to be regarded as a definite physical property of an unperturbed quantum system in the time interval between two complete measurements. We shall therefore begin by discussing in this preliminary section the connection between preselected and pre- and postselected measurement statistics of arbitrary strength von Neumann measurements, and from this discussion show an instance in which averages of weak values more aptly describe the posterior break-up of the measurement outcome distribution. In the von Neumann measurement scheme\ccite{VonNeum}, the device is some external system, described by canonical variables $\hat{q}$ and $\hat{p}$, with $[\hat{q},\hat{p}]=i$ ($\hbar\equiv 1$). The system-device interaction is designed so that the measurement result is read off from the effect on some designated device ``pointer variable", which we take to be $\hat{p}$. For a measurement of the system observable $\hat{A}$ at the time $t=t_i$, this interaction is modelled by the impulsive Hamiltonian \begin{equation}\label{measham} \hat{H}_m = - \delta(t-t_i) \hat{A} \hat{q} \, . \end{equation} (Note that a possible coupling constant can always be absorbed by canonically redefining $q$ and $p$.) The effect of the measurement is then described by the unitary operator $ \hat{U} = e^{ i \hat{A} \hat{q} }\, $. Since we will only be concerned with the effect of this interaction from times immediately before to immediately after $t_i$, we shall henceforth assume that all additional free evolution is already contained in the states. We first consider the pointer variable statistics from an ensemble defined by pure initial conditions on the system and the apparatus, described by states $\ket{\psi_1}$ and $\ket{\phi}$, respectively. For later convenience, we shall term this ensemble the \emph{preselected measurement ensemble} (PME) $\Omega_1$.
Further, we introduce the notation $\prec$ or $\succ$ to denote times immediately before or immediately after the measurement time $t_i$. Now, for the PME $\Omega_1$, the effect of the measurement interaction is easily described by the Heisenberg picture transformation \begin{equation} \hat{p}_{{\scriptscriptstyle\succ}} = \hat{p}_{{\scriptscriptstyle\prec}} + \hat{A} \, , \end{equation} induced by the evolution operator $e^{ i \hat{A} \hat{q} }$. Since the initial system plus apparatus state is separable, the final statistics of the pointer variable are easily obtained from the spectral decomposition of $\hat{A}$, and are given by the probability distribution \begin{equation}\label{prestats} {\cal P}(p|\Omega_{1}^{{\scriptscriptstyle\succ}} ) = \sum_a \langle \psi_{\!{}_1} |\hat{\Pi}_a|\psi_{\!{}_1} \rangle {\cal P}(p-a|\phi)\, , \end{equation} where $ {\cal P}(p|\phi) =|\langle p |\phi\rangle|^2$, and $\hat{\Pi}_a$ is the projector onto the eigenspace of the system Hilbert space with eigenvalue $a$. In this description, a ``strong" or projective measurement corresponds to the limit $\Delta p \rightarrow 0$ (i.e., ${\cal P}(p|\phi) \rightarrow \delta(p)$), in which case the pointer distribution mimics the spectral distribution of the Born interpretation, $\langle \psi_{\!{}_1} |\hat{\Pi}_a|\psi_{\!{}_1} \rangle$. Note however that even if the spectrum cannot be resolved, the resulting expression\rref{prestats} for the pointer statistics can still be interpreted as if, on every single trial, the pointer variable is displaced in proportion to one of the eigenvalues of $\hat{A}$, with the eigenvalues distributed randomly throughout the sample according to $\langle \psi_{\!{}_1} |\hat{\Pi}_a|\psi_{\!{}_1} \rangle$. Thus, \emph{regardless} of the form of ${\cal P}(p |\phi)$, the mean and variance of the distribution \rref{prestats} will always satisfy \begin{eqnarray} \ave{p}_{_{\Omega_{1}^{{\scriptscriptstyle\succ}}}} & = & \qave{\psi_{\!{}_1}}{\hat{A}}\, \\ \ave{\Delta p^2}_{_{\Omega_{1}^{{\scriptscriptstyle\succ}}}} & = & \ave{\Delta p^2}_\phi + \qave{\psi_{\!{}_1}}{\Delta \hat{A}^2}\, \, , \end{eqnarray} where $\ave{\Delta p^2}_\phi $ is the variance in $p$ of the state $\ket{\phi}$ and we have assumed $\ave{p}_\phi \equiv \qave{\phi}{p} = 0$ for simplicity. \begin{figure}\label{fig:poolings} \end{figure} Now suppose that after time $t_i$, a postselection is performed on the system, and we wish to concentrate on the subset of measurement outcomes arising only from those systems that ended up in some specific state, $\ket{\psi_{\!{}_2}}$. This final condition defines for us a subensemble $\Omega_{12}$ of the PME $\Omega_1$, which we call a \emph{pre- and postselected measurement ensemble} (PPME), the measurement statistics of which can be obtained from the conditional final state of the apparatus\cite{AV90} \begin{equation}\label{statetransf} \ket{\tilde{\phi}_{_{12}}^{{\scriptscriptstyle\succ}}} = \frac{1}{\sqrt{\psub{12}(\phi)}} \langle{\psi_{\!{}_2}}|e^{ i \hat{A} \hat{q}}|{\psi_{\!{}_1}}\rangle \ket{\phi}\, , \end{equation} where the normalization $\psub{12}(\phi)$ is shorthand for the transition probability ${\cal P}(\psi_{\!{}_2}|\psi_{\!{}_1}\phi)$ (i.e., the average relative size of the ensemble $\Omega_{12}$). From this state, the corresponding pointer variable distribution is given by ${\cal P}(p|\Omega_{12}^{{\scriptscriptstyle\succ}}) = | \tilde{\phi}_{_{12}}^{{\scriptscriptstyle\succ}} (p)|^2$.
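The mean and variance relations above are easy to check numerically. The following minimal sketch (in Python; the qubit observable $\hat{A}=\hat{\sigma}_z$, the particular preselected state and the Gaussian pointer profile are arbitrary illustrative choices, not taken from the text) builds the preselected pointer distribution of Eq.\rref{prestats} on a grid and compares its first two moments with $\qave{\psi_{\!{}_1}}{\hat{A}}$ and $\ave{\Delta p^2}_\phi + \qave{\psi_{\!{}_1}}{\Delta\hat{A}^2}$.
\begin{verbatim}
# Illustrative sketch: PME pointer statistics, Eq. (prestats), for A = sigma_z
# and a Gaussian apparatus state (all numbers below are arbitrary choices).
import numpy as np

p = np.linspace(-25, 25, 5001)
dp = p[1] - p[0]
sigma_p = 2.0                                   # pointer spread Delta p of |phi>
rho_phi = np.exp(-p**2 / (2 * sigma_p**2)) / np.sqrt(2 * np.pi * sigma_p**2)

a_vals = np.array([1.0, -1.0])                  # eigenvalues of sigma_z
psi1 = np.array([np.cos(0.3), np.sin(0.3)])     # arbitrary preselected qubit state
weights = np.abs(psi1)**2                       # <psi_1|Pi_a|psi_1>

# Eq. (prestats): P(p|Omega_1) = sum_a <psi_1|Pi_a|psi_1> P(p - a|phi)
P_pre = sum(w * np.interp(p - a, p, rho_phi) for w, a in zip(weights, a_vals))

mean_p = np.sum(p * P_pre) * dp
var_p = np.sum((p - mean_p)**2 * P_pre) * dp
mean_A = np.dot(weights, a_vals)
var_A = np.dot(weights, a_vals**2) - mean_A**2

print(mean_p, mean_A)                  # <p> = <A>_{psi_1}
print(var_p, sigma_p**2 + var_A)       # Var(p) = (Delta p)^2_phi + (Delta A)^2_{psi_1}
\end{verbatim}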
Let us briefly discuss some relations between the PME and PPME statistics. Suppose the postselection involves a complete measurement of some non-degenerate observable $\hat{B}$, with eigenstates $ \{ \ket{b} \} $. A pooling of the data from all the subensembles $\{ \Omega_{1b} \}$ of $\Omega_1$ must then yield the preselected distribution (Eq.\rref{prestats}), in other words \begin{equation}\label{pooldist} {\cal P}(p|{\Omega_{1}^{{\scriptscriptstyle\succ}}}) = \sum_b \psub{1 b}(\phi){\cal P}(p|{\Omega_{1b}^{{\scriptscriptstyle\succ}}}) \, , \end{equation} where $\psub{1 b}(\phi)$ is the relative size of each PPME. Two important consequences follow from this decomposition: First, the PME expectation value of the pointer $\ave{p}_{_{\Omega_{1}^{{\scriptscriptstyle\succ}}}}$ breaks up in a similar fashion as $\ave{p}_{_{\Omega_{1}^{{\scriptscriptstyle\succ}}}} = \sum_b \psub{1 b}(\phi)\ave{p}_{_{\Omega_{1b}^{{\scriptscriptstyle\succ}}}}$; assuming that the prior expectation value of $p$ vanishes, this entails the sum rule \begin{equation}\label{avesumrule} \qave{\psi_{\!{}_1}}{\hat{A}} = \sum_b \psub{1 b}(\phi)\ave{p}_{_{\Omega_{1b}^{{\scriptscriptstyle\succ}}}}\, , \end{equation} i.e., the weighted average of the PPME pointer expectation values has to yield the standard expectation value of $\hat{A}$. A second consequence of \rref{pooldist} is a ``covering" condition satisfied by the individual PPME distributions, \begin{equation}\label{covercond} {\cal P}(p|{\Omega_{1}^{{\scriptscriptstyle\succ}}}) \geq \psub{1 b}(\phi){\cal P}(p|{\Omega_{1 b}^{{\scriptscriptstyle\succ}}}) \, ; \end{equation} for all values of $p$ and all final outcomes $b$. This imposes a constraint on how rare a PPME $\Omega_{1b}$ should be were the corresponding ${\cal P}(p|{\Omega_{1 b}^{{\scriptscriptstyle\succ}}})$ to be peaked somewhere in the tail region of ${\cal P}(p|{\Omega_{1}^{{\scriptscriptstyle\succ}}})$. The relevance of \rref{avesumrule} and \rref{covercond} is that indeed the weight of a PPME measurement outcome distribution need not lie within the ``normal" region of expectation defined by the bounds of the spectrum of $\hat{A}$ (Fig.\ref{fig:poolings}), contrary to what one would have naively expected given the generality of the spectral expansion of the PME distribution\rref{prestats}. This may not be obvious under strong measurement conditions, for which, when the apparatus wave function in $p$ is expanded as a superposition of shifted wave functions \begin{equation}\label{specexp} \tilde{\phi}_{_{12}}^{{\scriptscriptstyle\succ}}(p) \propto \sum_a \langle{\psi_{\!{}_2}}|\hat{\Pi}_a|{\psi_{\!{}_1}}\rangle \phi(p- a)\, , \end{equation} the overlap between two shifted functions $\phi(p- a)$ and $\phi(p- a')$ for all $a \neq a'$ is negligible. Indeed, in such a case, the resulting p.d.f. for the pointer variable takes the form of a mixture of ``strong" measurement distributions, i.e., $ {\cal P}(p|{\Omega_{12}^{{\scriptscriptstyle\succ}}}) \propto \sum_a|\langle{\psi_{\!{}_2}}|\hat{\Pi}_a|{\psi_{\!{}_1}}\rangle|^2\, {\cal P}(p- a|\phi) $, each term centered at one of the eigenvalues of $\hat{A}$ with weights given by the Aharonov, Bergmann and Liebowitz rule for projective measurement sequences\ccite{ABL}; the weight of this distribution is, of course, within the bounds of the spectrum of $\hat{A}$.
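The sum rule\rref{avesumrule} implied by the pooling relation\rref{pooldist} can be verified directly. The sketch below (again a purely illustrative Python example: a qubit with $\hat{A}=\hat{\sigma}_z$, postselection in the $\hat{\sigma}_x$ basis, and a Gaussian apparatus state with $\ave{p}_\phi=0$, none of which is specified in the text) constructs each conditional pointer distribution from the spectral expansion\rref{specexp}, including the interference between the shifted wave functions, and checks that the weighted average of the conditional means reproduces $\qave{\psi_{\!{}_1}}{\hat{A}}$.
\begin{verbatim}
# Illustrative sketch: pooling of PPME pointer distributions, Eqs. (pooldist)
# and (avesumrule), for A = sigma_z and postselection in the sigma_x basis.
import numpy as np

p = np.linspace(-30, 30, 6001)
dp = p[1] - p[0]
sigma_p = 3.0
phi = (2 * np.pi * sigma_p**2)**(-0.25) * np.exp(-p**2 / (4 * sigma_p**2))

a_vals = np.array([1.0, -1.0])                      # sigma_z eigenvalues
psi1 = np.array([np.cos(0.4), np.sin(0.4) * np.exp(0.7j)])
B_basis = [np.array([1, 1]) / np.sqrt(2),           # sigma_x eigenstates |b>
           np.array([1, -1]) / np.sqrt(2)]

def shift(f, a):                                    # f(p - a) on the same grid
    return np.interp(p - a, p, f)

sum_rule = 0.0
for b in B_basis:
    # Eq. (specexp): unnormalized conditional pointer wave function for outcome b
    amp = sum(np.conj(b[k]) * psi1[k] * shift(phi, a_vals[k]) for k in range(2))
    P1b = np.sum(np.abs(amp)**2) * dp               # relative size of the PPME
    P_b = np.abs(amp)**2 / P1b                      # P(p|Omega_{1b})
    sum_rule += P1b * np.sum(p * P_b) * dp          # term of Eq. (avesumrule)

mean_A = np.vdot(psi1, a_vals * psi1).real          # <psi_1|A|psi_1>
print(sum_rule, mean_A)                             # the two numbers should agree
\end{verbatim}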
However, away from strong measurement conditions ${\cal P}(p|\Omega_{12}^{{\scriptscriptstyle\succ}} )$ will involve interference terms between the shifted wave functions $\phi(p- a)$ with coefficients $\langle{\psi_{\!{}_2}}|\hat{\Pi}_a|{\psi_{\!{}_1}}\rangle$ that are in general neither real nor positive-definite, preventing the resolution of the individual shifted peaks and allowing for destructive interference effects that may place the weight of $\tilde{\phi}_{_{12}}^{{\scriptscriptstyle\succ}}(p)$ beyond the spectrum of $\hat{A}$. For a wide class of wave functions of the apparatus, weak values emerge from the limiting behavior of these interference effects in a complementary limit to that of strong measurement conditions\ccite{AV90}, namely when $q$, the conjugate to the pointer variable, satisfies $\Delta q \rightarrow 0$ ($\Delta p \rightarrow \infty$). In particular, if $\ave{q}=0$, one obtains the weak value as the limiting conditional expectation value \begin{equation}\label{genweakvalapproach} \lim_{\Delta q \rightarrow 0} \langle p \rangle_{_{\Omega_{12}^{{\scriptscriptstyle\succ}}}} \rightarrow {\rm Re}\weakv{\psi_{\!{}_2}}{\hat{A}}{\psi_{\!{}_1}}\, . \end{equation} This limiting behavior is furthermore accompanied by the limit $\psub{12}(\phi)\rightarrow |\amp{\psi_{\!{}_2}}{\psi_{\!{}_1}}|^2 \, $, as if indeed no measurement had taken place, justifying the term ``weak limit". Hence, in this limit the posterior break-up\rref{pooldist} of the PME pointer distribution is essentially that of a mixture of distributions, each of which is centered at the weak value defined by its corresponding final state in the post-selection and weighted by the corresponding (unperturbed) transition probability. Thus we have an instance in which the expectation value of $\hat{A}$ is more appropriately interpreted operationally as an average of weak values than as an average of eigenvalues. Indeed, it is easily verified that the weighted average of the weak values defined by a complete post-selection is the standard expectation value of $\hat{A}$ \begin{equation}\label{avewvs} \sum_b |\amp{\psv{b}}{\psi_{\!{}_1}}|^2\ {\rm Re} \weakv{\psv{b}}{\hat{A}}{\psi_{\!{}_1}} = \qave{\psi_{\!{}_1}}{\hat{A}}\, , \end{equation} as expected from the sum rule \rref{avesumrule}. Note that this is a \emph{classical} averaging process, as it arises from the mixing of the distributions conditioned on the distinguishable outcomes of the post-selection. The sum rule \rref{avewvs} embodies a general rule of thumb, namely that \emph{``eccentric weak values are unlikely"}, according to which weak values lying outside the spectrum of $\hat{A}$ must be weighted by correspondingly small relative probabilities, ensuring that the average over all pre- and postselected subensembles yields a quantity within the spectral bounds of $\hat{A}$. This generic property of weak values is at the heart of the QAWV framework presented in Sec.~\ref{qaves}, where we show that pre- and postselected statistics away from weak measurement conditions can also be interpreted in terms of a \emph{quantum} averaging process involving weak values. \section{Mechanical Interpretation of Weak Values} \label{mechint} Implicit in the suggestion that standard expectation values can be interpreted (at least under certain conditions) as averages of weak values is the idea that weak values are in some sense ``sharp" physical properties.
We therefore expand on this notion of ``sharpness" by giving an operational sense in which the weak value can indeed be regarded as a definite mechanical property of a system that is known to belong to an ensemble defined by complete pre- and post-selections. The functional dependence on $q$ of the transition amplitude $\langle{\psi_{\!{}_2}}|e^{ i q \hat{A} }|{\psi_{\!{}_1}}\rangle$ in \rref{statetransf} furnishes the necessary elements to build a description of the PPME statistics based on a picture of ``action and reaction", in which, if the variable $q$ is sharply defined, then a) the measured system is subject to a sharply-defined unitary transformation generated by $\hat{A}$, and b) the measuring apparatus suffers a sharply-defined response given by the weak value of $\hat{A}$. This elementary picture serves as the basis for the more general QAWV framework discussed in the following section. Let us look at the polar decomposition of $\langle{\psi_{\!{}_2}}|e^{ i q \hat{A} }|{\psi_{\!{}_1}}\rangle$, which we choose to express as \begin{equation} \label{polardecomp} \langle{\psi_{\!{}_2}}|e^{ i \hat{A} q}|{\psi_{\!{}_1}}\rangle =\sqrt{\psub{12}(q)} \, e^{ i S_{\!_{12}}(q) }\, , \end{equation} where \begin{equation} \psub{12}(q) \equiv \left|\,\langle{\psi_{\!{}_2}}|e^{ i \hat{A} q}|{\psi_{\!{}_1}}\rangle\right|^2 \, \end{equation} (see also refs. \ccite{Botero03,Botero04,Solli04}) gives the transition probability from $\ket{\psi_{\!{}_1}}$ to $\ket{\psi_{\!{}_2}}$, but mediated by an intermediate unitary transformation $e^{ i \hat{A} q}$. Thus, the variable $q$ can be regarded as the parameter of a back-reaction on the system, generated by the operator $\hat{A}$, inducing the transformation of the initial state \begin{equation}\label{backreact} \ket{\psi_{\!{}_1}} \stackrel{q}{\rightarrow} \ket{\psi_{\!{}_1}(q)} \equiv e^{i \hat{A} q}\ket{\psi_{\!{}_1}}\, \end{equation} (alternatively, the reaction can be viewed as the inverse transformation $e^{-i \hat{A} q}$ on the final state $\ket{\psi_{\!{}_2}}$). On the other hand, the phase factor in\rref{polardecomp} can be viewed as the generator of a certain reaction of the system on the apparatus corresponding to a specific rotation parameterized by $q$: viewed as a unitary operator on the apparatus degrees of freedom, $e^{ i S_{\!_{12}}(\hat{q})}$ induces in the Heisenberg picture the generally nonlinear canonical transformation of the pointer operator \begin{equation}\label{canshift} \hat{p}_{{\scriptscriptstyle\succ}} = \left. e^{- i S_{\!_{12}}(\hat{q}) } \hat{p}\, e^{i S_{\!_{12}}(\hat{q}) }\right|_{{\scriptscriptstyle\prec}} \equiv \hat{p}_{{\scriptscriptstyle\prec}} + \wva{12}(\hat{q})\, , \end{equation} where $ \wva{12}(q) \equiv S'_{_{12}}(q) ={\rm Im} \frac{d }{dq} \log\langle{\psi_{\!{}_2}}|e^{ i \hat{A} q}|{\psi_{\!{}_1}}\rangle \, . $ A straightforward derivation then shows that $\wva{12}(q)$ is indeed a weak value \begin{equation} \wva{12}(q) = {\rm Re} \frac{\bra{ \psi_{\!{}_2}}{\hat{A}e^{i \hat{A} q}}\ket{\psi_{\!{}_1}}}{\bra{ \psi_{\!{}_2}}{e^{i \hat{A} q}}\ket{\psi_{\!{}_1}}}={\rm Re}\weakv{\psi_{\!{}_2}}{\hat{A}}{\psi_{\!{}_1}(q)}\, , \end{equation} namely the weak value of $\hat{A}$ for the rotated state $\ket{\psi_{\!{}_1}(q)}$ and the final state $\ket{\psi_{\!{}_2}}$. Equation\rref{canshift} therefore shows that for a definite value of $q$, there is an associated definite reaction on the measuring device pointer variable, given by the weak value for the corresponding pair of states $(\ \ket{\psi_{\!{}_1}(q)}\, ,\, \ket{\psi_{\!{}_2}}\ )$.
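This identity between the phase derivative and the weak value is easily checked numerically. The following minimal sketch (Python; a randomly generated three-level observable and random pre- and postselected states, chosen purely for illustration) compares a finite-difference derivative of ${\rm Im}\log\langle{\psi_{\!{}_2}}|e^{i\hat{A}q}|{\psi_{\!{}_1}}\rangle$ with ${\rm Re}\weakv{\psi_{\!{}_2}}{\hat{A}}{\psi_{\!{}_1}(q)}$.
\begin{verbatim}
# Illustrative sketch: the reaction function A_12(q) = S'_12(q) equals the weak
# value of A for the rotated state |psi_1(q)> (random 3-level example).
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = (M + M.conj().T) / 2                         # arbitrary Hermitian observable
w, V = np.linalg.eigh(A)                         # spectral data for exp(iAq)
psi1 = rng.normal(size=3) + 1j * rng.normal(size=3); psi1 /= np.linalg.norm(psi1)
psi2 = rng.normal(size=3) + 1j * rng.normal(size=3); psi2 /= np.linalg.norm(psi2)

def U(q):                                        # exp(i A q) via the eigenbasis
    return (V * np.exp(1j * w * q)) @ V.conj().T

def amplitude(q):                                # <psi_2| exp(iAq) |psi_1>
    return np.vdot(psi2, U(q) @ psi1)

def weak_value(q):                               # Re <psi_2|A|psi_1(q)>/<psi_2|psi_1(q)>
    return (np.vdot(psi2, A @ U(q) @ psi1) / amplitude(q)).real

q, h = 0.7, 1e-6
S_prime = np.angle(amplitude(q + h) / amplitude(q - h)) / (2 * h)
print(S_prime, weak_value(q))                    # the two numbers should agree
print(weak_value(0.0))                           # q = 0: the usual weak value Re A_w
\end{verbatim}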
More precisely, note that for the general pointer variable statistics for the PPME $\Omega_{12}$, Eq.\rref{specexp}, we can equivalently express the final apparatus wave function $\tilde{\phi}_{_{12}}^{{\scriptscriptstyle\succ}}(p)$ as the Fourier integral \begin{equation}\label{fourint} \tilde{\phi}_{_{12}}^{{\scriptscriptstyle\succ}}(p) = \frac{1}{\sqrt{ 2 \pi}}\int_{-\infty}^{\infty} d q\, \sqrt{\frac{\psub{12}(q)}{\psub{12}(\phi)}}\phi(q)e^{-i [p q-S_{_{12}}(q) ] }\, . \end{equation} Let us now suppose that $q$ is constrained to lie exclusively within a finite range around some value $q = q_i$, by taking $\phi(q)$ to be the ``window" function of width $\varepsilon$ centered at $q = q_i$. \begin{equation}\label{weaktrans} W_{q_i, \varepsilon}(q) = \left \{ \begin{array}{ccc} \frac{1}{\sqrt{ \varepsilon}} \, , & & |q - q_i| < \frac{\varepsilon}{2} \\ 0\, , & & |q - q_i| \geq \frac{\varepsilon}{2}\end{array}\right. \, . \end{equation} In this case the wave function in the $p$-representation is a modulated ``sinc" function \begin{equation} W_{q_i, \varepsilon}(p) = \sqrt{\frac{2}{\varepsilon \pi}}\, \frac{\sin\left( \frac{\varepsilon p}{2}\right)}{p}e^{i p q_i} \, , \end{equation} of characteristic width $\sim 1/\varepsilon$. Now let $\varepsilon$ be small enough that variations of $\psub{12}(q)$ and $\wva{12}(q)$ are negligible within the interval $|q - q_i| < \frac{\varepsilon}{2}$. Thus, we can approximate $\psub{12}(\phi) \simeq \psub{12}(q_i)$, and perform the Fourier integral in the ``group velocity approximation", i.e., by expanding the phase about $q_i$ to first order and replacing $\psub{12}(q)$ by $\psub{12}(q_i)$; this yields \begin{equation}\label{mechdef} \tilde{\phi}_{_{12}}^{{\scriptscriptstyle\succ}}(p) \simeq e^{i \Gamma_{\!_{12}}(q_i)}W_{q_i, \varepsilon}(p-\wva{12}(q_i)) \, , \end{equation} where we define $ \Gamma_{\!_{12}}(q) \equiv S_{\!_{12}}(q) - q\, \wva{12}(q) \, . $ Hence, in the limit $\varepsilon \rightarrow 0$, where the apparatus wave function approaches an eigenstate of $\hat{q}$ with eigenvalue $q_i$, the final wave function for the pointer becomes (up to a phase) the initial wave function \emph{rigidly shifted by a definite weak value}, the weak value $\wva{12}(q_i)$ for the rotated state $\ket{\psi_{\!{}_1}(q_i)}$ and the final state $\ket{\psi_{\!{}_2}}$. From the point of view of the system, the limit $\varepsilon \rightarrow 0$ can be regarded as an idealization of a situation often encountered in more general contexts, where the evolution of a quantum system is treated as effectively unitary despite the fact that certain parameters of the evolution are actually physical variables of some external (and typically macroscopic) system; for example a spin rotation, where a macroscopic external magnetic field sets the rotation angle. That such parameters can be treated as classical numbers is a consequence of a negligible uncertainty of the quantum variable of the external system acting as the parameter for the transformation. The interaction with such an external system may thus be idealized as an \emph{infinitesimally uncertain unitary transformation} at a given parameter value. This idealization provides the desired mechanical definition of weak values: \emph{The weak value $\wva{12}(q)$ corresponds to a definite conditional reaction of the system on the variable conjugate to the external physical ``parameter variable" $\hat{q}$ of an infinitesimally uncertain unitary transformation generated by $\hat{A}$ at parameter value $q$}. 
The essence of a weak measurement is thus to approach, as close as possible, the ideal conditions of an infinitesimally uncertain transformation. The above definition presents no ambiguities in the physical interpretation of ``eccentric" weak values or in the sometimes unexpected relationships that may arise between the weak values of, say, $\hat{A}$ and $\hat{A}^2$ (e.g., negative ``weak variances", etc.). To the extent that we associate weak values to infinitesimal unitary transformations, no \emph{a-priori} connection between the weak values of two commuting observables should be expected; typically, commuting operators such as $\hat{A}$ and $\hat{A}^2$ generate entirely different types of unitary transformations. Rather, relations between weak values follow from the \emph{linear}, vector space structure of the Lie Algebra of hermitian operators generating infinitesimal transformations. The vector space structure is reflected, for instance, in the fact that for any two initial and final states that are eigenstates of the observables $\hat{A}$ and $\hat{B}$, with eigenvalues $a$ and $b$ respectively, the reaction to an infinitesimal unitary transformation generated by the linear combination $\hat{C}\equiv\alpha \hat{A} + \beta \hat{B} $ at $q=0$ is the linear combination $\mathcal{C}=\alpha a + \beta b$. Finally, let us emphasize the significance of the present mechanical interpretation of weak values in connection with certain quantum mechanical operators, such as kinetic energy or particle number\ccite{AV91,APRV93,ABPRT01}, for which any association with negative values would appear to be forbidden. The fact that the reactions associated with weak values will almost always lie within the range of the observable's spectrum is what gives us a reference from which to identify, in those unlikely circumstances where the reaction is ``eccentric", what are unique quantum-mechanical effects associated with the role of the observable as a generator of infinitesimal transformations. One would hardly suspect that such effects could indeed be possible given the physical interpretations that we have traditionally attached to the eigenvalues of a quantum mechanical observable. \section{Quantum Averages of Weak Values} \label{qaves} The framework of quantum averages of weak values (QAWV) is the extension of the previous analysis to general von Neumann measurements, with arbitrary pure initial states of the apparatus not necessarily satisfying a ``weakness condition". Given a \emph{pure} PPME $\Omega_{12}$, we shall show how the conditional average of measurement outcomes can nevertheless be interpreted as a quantum average of weak values over a suitable distribution. For more general initial and final conditions on the system (as well as more general initial conditions on the apparatus), the corresponding averages can then be obtained by a classical averaging process, similar to that of \rref{avewvs}, given that any such ensemble can always be broken up into complete pre- and postselected measurement subensembles with appropriate relative weights. The heuristics of the framework are straightforward: a general apparatus pure state $\ket{\phi}$ entails indefiniteness in the parameter value $q$ driving the back-reaction on the system according to \rref{backreact}, so that a generally finite range of system configurations is sampled in the orbit of transformed initial states $\ket{\psi_{\!{}_1}(q)}$.
Correspondingly, the pointer measurement statistics should reflect the sampling of a certain range of weak values $\wva{}(q)$ associated with this orbit. However, once $q$ is allowed to take arbitrary values, a new element in the description comes into play. This has to do with the relative weights associated with the sampled values of $q$, which reflect a probability-reassessment in the light of the additional conditions entailed by the post-selection. The central idea of the framework is then that an arbitrary strength von Neumann measurement on a pre- and postselected system may be viewed as a superposition of weak measurements at different sampling points $q$, with a re-assessment of the weights of each sample in accordance with Bayes' theorem. Let us for simplicity consider an initial apparatus wave function that is real and smooth. This function may then be represented as the limit of a superposition of infinitesimally-wide window functions \begin{equation} \phi(q) =\lim_{\varepsilon\rightarrow 0} \sum_{k=-\infty}^{\infty} \sqrt{\varepsilon} \phi(q_k)W_{q_k, \varepsilon}(q) \, , \end{equation} centered at the ``sampling points" $q_k = k \varepsilon + \delta$ with $k \in \mathbb{Z}$ and $\delta \in [-\varepsilon/2, \varepsilon/2)$. From the results of the previous section, and by linearity, the corresponding final apparatus state wave function may be represented as \begin{equation} \tilde{\phi}_{_{12}}^{{\scriptscriptstyle\succ}}(p) =\lim_{\varepsilon\rightarrow 0} \sum_{k=-\infty}^{\infty} \sqrt{\varepsilon\frac{\psub{12}(q_k)}{\psub{12}(\phi) }} \phi(q_k)e^{i \Gamma_{\!_{12}}(q_k)}W_{q_k, \varepsilon}(p-\wva{12}(q_k)) \, . \end{equation} The final apparatus state can therefore be viewed as a superposition of weak measurements at the sampling points $q_k$ but with the initial weights $\phi(q_k)$ replaced by new weights $\sqrt{ {\psub{12}(q_k)}/{\psub{12}(\phi) }}\phi(q_k)$. As is easily seen, this re-assessment of weights is in correspondence with ${\cal P}(q|\Omega_{12})$, the p.d.f. for strong measurements of $q$ (performed either before or after the measurement interaction) on the PPME $\Omega_{12}$. Consistently with Bayes' theorem, ${\cal P}(q|\Omega_{12})$ is the posterior distribution for $q$ after a re-assessment of the prior p.d.f. ${\cal P}(q|\phi)$ by the likelihood $\psub{12}(q)/\psub{12}(\phi)$ of the post-selection given the $q$-dependent rotation of the initial state: \begin{equation}\label{postdef} {\cal P}(q|\Omega_{12}) = \frac{\psub{12}(q)}{\psub{12}(\phi) }\, {\cal P}(q|\phi) \, . \end{equation} Note that in accordance with the ``eccentric weak values are unlikely" rule of thumb, the likelihood factor $\propto \psub{12}(q)$ will tend to suppress the contributions in the superposition for which the weak value falls outside the spectrum of $\hat{A}$. As we shall illustrate in the coming section, it is this mechanism that ensures, together with quantum mechanical interference, that the strong measurement distributions peaked at the eigenvalues of $\hat{A}$ can nevertheless be understood as quantum superpositions of weak measurements. It becomes convenient to capture in compact form the two conceptually different processes involved in the updating of the apparatus state $\ket{\phi} \rightarrow \ket{\tilde{\phi}_{_{12}}^{{\scriptscriptstyle\succ}}}$ as a result of the measurement.
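Before describing these two processes in compact form, the re-assessment\rref{postdef} itself can be made concrete with a small numerical example. In the sketch below (Python; a qubit with $\hat{A}=\hat{\sigma}_z$, arbitrarily chosen pre- and postselected states, and a Gaussian prior ${\cal P}(q|\phi)$, none of which is taken from the text), the likelihood $\psub{12}(q)$ re-weights the prior, the resulting posterior is automatically normalized, and the posterior-averaged weak value is obtained from the orbit $\wva{12}(q)$.
\begin{verbatim}
# Illustrative sketch: posterior re-assessment of q, Eq. (postdef), for A = sigma_z
# with arbitrarily chosen pre/postselected qubit states and a Gaussian prior in q.
import numpy as np

q = np.linspace(-15, 15, 4001)
dq = q[1] - q[0]
sigma_q = 2.0
prior = np.exp(-q**2 / (2 * sigma_q**2)) / np.sqrt(2 * np.pi * sigma_q**2)

a = np.array([1.0, -1.0])                       # sigma_z eigenvalues
psi1 = np.array([np.cos(0.4), np.sin(0.4)])
psi2 = np.array([np.cos(1.2), np.sin(1.2) * np.exp(0.5j)])

# Transition amplitude <psi_2|exp(iAq)|psi_1> on the q grid
amp = sum(np.conj(psi2[k]) * psi1[k] * np.exp(1j * a[k] * q) for k in range(2))
P12_q = np.abs(amp)**2                          # likelihood P_12(q)
P12_phi = np.sum(P12_q * prior) * dq            # P_12(phi): prior-averaged likelihood
posterior = P12_q * prior / P12_phi             # Eq. (postdef)
print(np.sum(posterior) * dq)                   # = 1: the posterior is normalized

# Weak-value orbit A_12(q) and its posterior average
num = sum(np.conj(psi2[k]) * a[k] * psi1[k] * np.exp(1j * a[k] * q) for k in range(2))
A12 = (np.conj(amp) * num).real / P12_q
print(np.sum(A12 * posterior) * dq)             # posterior-averaged weak value
\end{verbatim}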
The first step, the generally irreversible process of probability re-assessment, can be expressed conveniently by defining a fiducial state $\ket{\tilde{\phi}_{_{12}}^{{\scriptscriptstyle\prec}}}$, which we term the \emph{re-assessed initial state} of the apparatus. Defining the state by its wave function in $q$, it corresponds to the (prior) initial wave function $\phi(q)$ multiplied by the square root of the likelihood factor in\rref{postdef}: \begin{equation}\label{statereassess} \tilde{\phi}_{_{12}}^{{\scriptscriptstyle\prec}}(q) = \sqrt{\frac{\psub{12}(q)}{\psub{12}(\phi) }}\, \phi(q) \, . \end{equation} The other process is the reversible mechanical action of the system on the measurement apparatus generated by the unitary operator $e^{i S_{\!_{12}}(\hat{q})}$ defined by the polar decomposition \rref{polardecomp}. The final conditional state of the measuring device can then be expressed as a unitary transformation applied to the re-assessed state $\ket{\tilde{\phi}_{_{12}}^{{\scriptscriptstyle\prec}}}$, $ \ket{\tilde{\phi}_{_{12}}^{{\scriptscriptstyle\succ}}} = e^{i S_{\!_{12}}(\hat{q}) }\ket{\tilde{\phi}_{_{12}}^{{\scriptscriptstyle\prec}}} $. Equivalently, one can compute pointer statistics from the Heisenberg picture transformation\rref{canshift} using the apparatus state $\ket{\tilde{\phi}_{_{12}}^{{\scriptscriptstyle\prec}}}$. In particular, the conditional p.d.f. of pointer readings can be expressed as a quantum-mechanical analogue of a marginal distribution of shifted pointer values, \begin{equation}\label{quantpdf} {\cal P}(p|\Omega_{12}^{{\scriptscriptstyle\succ}}) = \left \langle \tilde{\phi}_{_{12}}^{{\scriptscriptstyle\prec}} \biggl| \, \delta\!\bigl(p - \hat{p} - \wva{12}(\hat{q})\, \bigr)\biggr|\tilde{\phi}_{_{12}}^{{\scriptscriptstyle\prec}}\, \right\rangle \, , \end{equation} in other words, as a \emph{quantum average of weak values}, where the average is taken with respect to the re-assessed initial state $\ket{\tilde{\phi}_{_{12}}^{{\scriptscriptstyle\prec}}}$. As we shall see in Sec.~\ref{Clas}, Eq.\rref{quantpdf} has a natural correspondence in the classical limit; it can be shown to correspond with the marginal posterior p.d.f. for the measurement outcomes of the classical function corresponding to $\hat{A}$ on a classical canonical system specified by initial and final boundary conditions in time. Using the Heisenberg picture, we finally obtain the pointer reading mean and variance for the PPME $\Omega_{12}$ \begin{subequations}\label{finalmoms} \begin{eqnarray} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\langle p \rangle_{{\scriptscriptstyle\succ}} &\!\!\! =\!\!\! & \langle p \rangle_{{\scriptscriptstyle\prec}} + \langle \wva{12} \rangle_{{\scriptscriptstyle\prec}} \, \\ \!\!\!\!\!\!\!\!\!\!\! \langle \Delta p^2 \rangle_{{\scriptscriptstyle\succ}} &\!\!\! =\!\!\! & \langle \Delta p^2 \rangle_{{\scriptscriptstyle\prec}} + \langle \{\Delta p, \Delta \wva{12}\} \rangle_{{\scriptscriptstyle\prec}} + \langle \Delta \wva{12}^2 \rangle_{{\scriptscriptstyle\prec}}\, , \end{eqnarray} \end{subequations} where the subscripts ${{\scriptscriptstyle\prec}}$ and ${{\scriptscriptstyle\succ}}$ stand for expectation values in the states $\ket{\tilde{\phi}_{_{12}}^{{\scriptscriptstyle\prec}}}$ and $\ket{\tilde{\phi}_{_{12}}^{{\scriptscriptstyle\succ}}}$, respectively, and where the quantum weak value average $ \ave{\wva{12}}$ and variance $\langle \Delta \wva{12}^2 \rangle_{{\scriptscriptstyle\prec}}$ are directly evaluated using the posterior p.d.f.
\rref{postdef}. These expressions can be further simplified if the initial apparatus state has a real $\phi(q)$\ccite{BoteroThesis,JohArb} and vanishing expectation value of $p$, in which case the posterior expectation $\langle p \rangle_{{\scriptscriptstyle\prec}}$ and the correlation $\langle \{\Delta p, \Delta \mathcal{A}\} \rangle_{{\scriptscriptstyle\prec}}$ vanish. Under such conditions, the first two central moments of\rref{quantpdf} are indistinguishable from those obtained from classically averaging weak values with a variability defined through the posterior distribution\rref{postdef}. Note therefore that a condition for a weak measurement that is more general than the one discussed in the previous section is that we have a sharp \emph{posterior} p.d.f. in $q$ around some value $q=q_{{}_*}$, in which case the pointer average reflects a measurement of a sharply-defined weak value $\simeq \wva{12}(q_{{}_*})$ with a small uncertainty $\langle \Delta \wva{12}^2 \rangle_{{\scriptscriptstyle\prec}}$. Examples of how such effective weak measurements are attained will be given in the next section. Equation\rref{quantpdf} provides a statistical characterization of the pointer variable response as a quantum average of weak values, given the most restrictive conditions possible for a pre- and postselected measurement ensemble. Statistics from less restrictive measurement ensembles can then be obtained using standard probability assessments on the PPME $\Omega_{12}$ consistent with the specified conditions. In particular, for the preselected measurement ensemble $\Omega_1$ and some specific post-selection measurement, the pointer variable distribution \rref{quantpdf} obeys Eq.\rref{pooldist}. This correspondence yields a generalization of the sum rule\rref{avewvs} to arbitrary measurement strengths, involving both classical and quantum averages, \begin{equation} \qave{\psi_{\!{}_1}}{\hat{A}} = \sum_b \psub{1 b}(\phi) \ave{\mathcal{A}_{_{1b}}}_{_{\Omega_{1b}}}\, , \end{equation} which is easily verified using Eqs.\rref{postdef} and\rref{avewvs}. Even more generally, since the statistics for any set of less restrictive conditions on the system and/or the apparatus will involve a classical averaging over the states $\ket{\phi}$, $\ket{\psi_{\!{}_1}}$, and $\ket{\psi_{\!{}_2}}$, the final expectation value of any von Neumann-type measurement can always be connected to a suitable average of weak values. \section{Illustration of the QAWV Framework} \label{likecc} \begin{figure}\label{fig:spin} \end{figure} \begin{figure}\label{fig:sl} \end{figure} Let us then illustrate how the QAWV framework provides new insight into the measurement statistics of arbitrary-strength von Neumann measurements in pre- and postselected ensembles. A particularly graphic example of the interrelationship between the orbit of weak values $\mathcal{A}_{\!_{12}}(q)$ and the corresponding likelihood function $\propto P_{12}(q)$ is that of spin-component measurements given initial and final spin-$j$ coherent states\ccite{Perelomov}. Let $\vect{n}$ be a unit vector with direction parameterized by the polar angles $\theta$ and $\phi$, and $\ket{j,j}$ the maximal weight $J_z$ eigenstate ($\hat{J}_z\ket{j,j}= j\ket{j,j}$); a spin-$j$ coherent state is then defined as \begin{equation} \ket{\vect{n};j} = e^{ -i \hat{J}_z \phi}e^{ -i \hat{J}_y \theta}\ket{j,j}\, , \end{equation} and is hence an eigenstate of $\hat{J}_\vect{n} \equiv \hat{\vect{J}}\cdot\vect{n}$.
Calculations are simplified by the fact that this state can be realized as a product state of $2j$ copies of the spin-$1/2$ coherent state $\ket{\vect{n};\frac{1}{2}}$. In particular, the transition probability between two coherent states is \begin{equation} |\amp{\vect{n}_2;j}{\vect{n}_1;j}|^2 \propto ( 1 + \vect{n}_2\cdot\vect{n}_1 )^{2 j} \, , \end{equation} while the weak values of all spin components are easily captured by a weak spin vector \begin{equation} \vect{\mathcal{J}} \equiv{\rm Re} \weakv{\vect{n}_2;j}{\hat{\vect{J}}}{\vect{n}_1;j} = j \frac{ \vect{n}_2 + \vect{n}_1}{1 +\vect{n}_2\cdot\vect{n}_1} \, , \end{equation} for which the projection onto both $\vect{n}_2$ and $\vect{n}_1$ is $j$ (Fig.~\ref{fig:spin}). Note the relation between $|\amp{\vect{n}_2;j}{\vect{n}_1;j}|^2$ and the length $\mathcal{J}$ of the weak spin vector, $ |\amp{\vect{n}_2;j}{\vect{n}_1;j}|^2 \propto {\mathcal{J}}^{-4 j} \, $, consistent with the ``eccentric weak values are unlikely" rule. \begin{figure}\label{fig:spincurves} \end{figure} In a measurement of the spin component $\hat{J}_z$ on a PPME defined by initial and final coherent states $\ket{\vect{n}_1;j}$ and $\ket{\vect{n}_2;j}$, the back-reaction corresponds to a spin rotation of the initial state about the $z$-axis by the angle $-q$: \begin{equation} \ket{\vect{n}_1;j} \stackrel{q}{\rightarrow} \ket{\vect{n}_1(q);j}\, , \ \ \ \vect{n}_1(q) = R_z(-q)\vect{n}_1\, . \end{equation} This reaction in turn entails an orbit for the weak spin vector $\vect{\mathcal{J}}(q)$ (see Fig.~\ref{fig:sl}), from which the weak value function $\mathcal{J}_{z}(q)$ for $\hat{J}_z$ can be obtained by projecting onto the $z$-axis. Note that since the $z$ component of $\vect{n}_1$ is unaffected by the rotation, $\mathcal{J}_{z}(q) \propto (1 + \vect{n}_2\cdot\vect{n}_1(q))^{-1}$; thus, the likelihood factor satisfies \begin{equation}\label{spinzbias} \mathcal{P}_{12}(q) \propto \mathcal{J}_{z}(q)^{-2 j} \, . \end{equation} Figure~\ref{fig:spincurves} illustrates the correlated behavior of $\mathcal{J}_{z}(q)$ and $\ln \mathcal{P}_{12}(q)$ for the case $j=20$ and with initial and final spin coherent states with $\vect{n}_1 = (0,\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})$ and $\vect{n}_2=(0,-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})$. For these conditions, the weak value is given by \begin{equation}\label{wvalj} \mathcal{J}_{z}(q) = j \frac{\sqrt{2}}{1 + \sin^2\!\left(\frac{q}{2}\right) } \, , \end{equation} oscillating between $j \sqrt{2}$ at $q_+ = 2n\pi $ (full rotations of the initial state), and $j/ \sqrt{2}$ at $q_- = (2n+1)\pi$ (when $\vect{n}_1(q)$ coincides with $ \vect{n}_2$). As the figure shows, for $j \gg 1$, the likelihood $\mathcal{P}_{12}(q)$ shows essentially an exponential behavior similar to a modular gaussian distribution; in particular, near values $q_+$ or $q_-$ (both periodic), for which the magnitude of $\mathcal{J}_{z}(q)$ is respectively either maximal or minimal on the orbit, we have the approximations for large $j$ \begin{equation}\label{likeapp} \mathcal{P}_{12}(q) \simeq \left|\mathcal{J}_{z}(q_\pm)\right|^{-2 j}e^{\pm j \left|\frac{\mathcal{J}_z''(q_\pm)}{\mathcal{J}_z(q_\pm)}\right| (q - q_\pm)^2 } \, . \end{equation} The exponential suppression of\rref{likeapp} near $q_+$, where the weak value is maximal in magnitude, is characteristic of the phenomenon of Fourier superoscillations\ccite{AAPV90,Berry92,ABRS98,Kempf00,BCG93}, exhibited by the amplitude $\langle{\vect{n}_2;j}|e^{ i \hat{J}_z q}|\vect{n}_1;j\rangle$ near $q_{+}$.
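As an aside, the weak spin vector formula is easily verified numerically in the lowest-spin case. The following minimal sketch (Python; $j=1/2$ only, with two arbitrarily chosen directions, a choice made purely for illustration) checks both $\vect{\mathcal{J}} = j(\vect{n}_1+\vect{n}_2)/(1+\vect{n}_1\cdot\vect{n}_2)$ and the overlap law $|\amp{\vect{n}_2;j}{\vect{n}_1;j}|^2\propto(1+\vect{n}_1\cdot\vect{n}_2)^{2j}$.
\begin{verbatim}
# Illustrative sketch: weak spin vector and transition probability for j = 1/2,
# with two arbitrarily chosen directions n1 and n2.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
J = [s / 2 for s in (sx, sy, sz)]                 # spin-1/2 operators

def coherent(theta, phi):    # |n;1/2> = exp(-i Jz phi) exp(-i Jy theta) |1/2,1/2>
    return np.array([np.exp(-0.5j * phi) * np.cos(theta / 2),
                     np.exp(+0.5j * phi) * np.sin(theta / 2)])

def unit(theta, phi):
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi), np.cos(theta)])

t1, f1, t2, f2 = 0.9, 0.3, 2.1, 1.7               # arbitrary polar angles
n1, n2 = unit(t1, f1), unit(t2, f2)
k1, k2 = coherent(t1, f1), coherent(t2, f2)

overlap = np.vdot(k2, k1)
J_weak = np.array([(np.vdot(k2, Jk @ k1) / overlap).real for Jk in J])
print(J_weak)
print(0.5 * (n1 + n2) / (1 + n1 @ n2))            # formula from the text with j = 1/2
print(abs(overlap)**2, (1 + n1 @ n2) / 2)         # transition probability, 2j = 1
\end{verbatim}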
This suppression imposes a ``robustness" condition on the prior distribution in $q$ if one is to measure eccentric weak values near $q_+$: not only must the prior distribution be ``sharp" around $q=q_+$, but additionally it must show a sufficiently fast fall-off to overcome the exponential rise in likelihood. \begin{figure}\label{fig:likeffex} \end{figure} On the basis of the generic correlated behaviors of the likelihood function and the weak value, the PPME pointer statistics for a relatively wide range of von Neumann measurement conditions--ranging from weak to strong measurements--can easily be described in the QAWV framework using simple sampling profiles. As discussed in the previous section, a sharp \emph{posterior} distribution ${\cal P}(q|\Omega_{12})$ about some well-defined ``sampling point" $q_{{}_*}$ satisfies the conditions for a weak measurement. More generally, however, the reassessment by the likelihood factor of the initial apparatus state $\ket{\phi}$ may yield a state $\ket{\phi_{_{12}}^{{\scriptscriptstyle\prec}}}$ for which the wave function in $q$ shows several well-separated narrow peaks, each satisfying weak measurement conditions. In this case, the PPME pointer distribution will be the result of a coherent superposition of weak measurement pointer wave functions, and will therefore exhibit interference fringes. In simple examples, the existence of just two peaks may be all that is needed to produce the statistical distributions associated with strong measurement conditions (i.e., with maxima at the eigenvalues of the measured observable). Figures~\ref{fig:likeffex}a to~\ref{fig:likeffex}g show how such single or multiple weak measurement conditions are attained from prior distributions in $q$, of various shapes and locations, for the spin-$j$ PPME setting of Fig.~\ref{fig:spincurves} (weak value ranging between $j/\sqrt{2}$ to $\sqrt{2}j$), with real wave functions for the initial state of the apparatus. Starting with Figs.~\ref{fig:likeffex}a and ~\ref{fig:likeffex}b, we illustrate the likelihood effects on an initial robust state of the apparatus given by a narrow window function in $q$ of the form given by Eq.\rref{weaktrans}, with two different locations $q_i$. Such profiles guarantee that $q$, and hence the average weak value, will always lie within a specific interval; thus, the effect of the likelihood factor will primarily be a distortion in the shape of the pointer distribution, with minimum effect on the expectation value of $p$. In Figs.~\ref{fig:likeffex}c through ~\ref{fig:likeffex}e, we show the likelihood effects on robust gaussian priors of variance $\sigma_i^2$ at different locations. Here, the prior sampling region may be significantly altered while still preserving a gaussian profile with relatively narrow width. For general initial and final state, these effects can be described by performing a gaussian approximation of the posterior distribution around its maximum $q_{{}_*}$, determined by the equation \begin{equation}\label{betaxep} q_{{}_*} = q_i -2\sigma_i^2 {\rm Im} \frac{\bra{ \psi_{\!{}_2}}{\hat{A}e^{i \hat{A} q_{{}_*}}}\ket{\psi_{\!{}_1}}}{\bra{ \psi_{\!{}_2}}{e^{i \hat{A} q_{{}_*}}}\ket{\psi_{\!{}_1}}}\, , \end{equation} showing that the imaginary part of the complex weak value can be interpreted as a ``bias function" for the posterior sampling point. The resulting pointer distribution will be approximately a gaussian centered at $p =\wva{12}(q_{{}_*}) $ with a corrected width determined by the gaussian approximation. 
Two interesting effects are then worth noting from these examples: First, as illustrated in Fig.~\ref{fig:likeffex}d, if the bias function is large at the prior sampling point $q_i$, the posterior sampling point $q_{{}_*}$ may lie in the tail region of the prior distribution. Thus, even if the prior distribution is quite narrow, the sampled weak value $\wva{12}(q_{{}_*})$ may differ significantly from the weak value $\wva{12}(q_i)$ at the prior sampling point. The second effect has to do with appreciable alterations of the widths as illustrated in Figs.~\ref{fig:likeffex}c and~\ref{fig:likeffex}e: if the prior sampling point $q_i$ is set at a minimum (maximum) of the likelihood function (cases for which $q_{{}_*} = q_i$ in the gaussian approximation), the respective posterior distributions in $q$ will be widened (narrowed) with respect to the prior; correspondingly, the pointer distributions may be narrowed (widened) with respect to the prior distribution $\mathcal{P}(p|\phi)$. In particular, it follows that gaussian measurement conditions probing the most ``eccentric" weak value on the orbit will generically show a \emph{squeeze} of the prior pointer distribution--a surprising effect if the statistics are viewed as the result of sampling eigenvalues. \begin{figure}\label{fig:trans} \end{figure} Turning finally to Figs.~\ref{fig:likeffex}f and~\ref{fig:likeffex}g, we show the effects on two quite dissimilar non-robust priors centered at $q=0$: a wide window function of width $\varepsilon = 3 \pi$, and a narrow Lorentzian of half-width $\Gamma = \pi/24$ (comparable to the prior widths in Figs.~\ref{fig:likeffex}a and~\ref{fig:likeffex}c), both encompassing the maximum likelihood regions around $q = \pm \pi$, with either no suppression of the likelihood factor (the window) or a tail fall-off too slow to suppress it (the Lorentzian). The resulting posterior distributions in $q$ are then both qualitatively very similar and similar in turn to the likelihood factor $\propto \mathcal{P}_{12}(q)$ within the region $q \in [-3\pi/2,3\pi/2]$, which from\rref{likeapp} is approximately the sum of two equally-shaped narrow gaussians at $q = \pm \pi$. Thus, conditions are achieved for the superposition of two weak measurements at $q_{{}_*} = \pm \pi$, both sampling in this case the least eccentric weak value on the orbit, $\mathcal{J}_z(\pm \pi) = j/\sqrt{2}$. The two peaks in these cases are in fact quite similar to the single peak from the gaussian profile of Fig.~\ref{fig:likeffex}e at $q =\pi$; thus, even while the prior pointer distributions in Figs.~\ref{fig:likeffex}e through~\ref{fig:likeffex}g differ substantially in their shapes, the resulting PPME pointer distributions for all three cases share essentially the same envelope, with the last two cases showing interference fringes from the superposition of the two weak measurement sampling points. This interference pattern can then be connected to the spectral distribution expected from a strong measurement: given weak value and likelihood curves symmetric about $q=0$, and a posterior distribution in $q$ with two similarly-shaped narrow peaks at locations $q = \pm q_{{}_*}$, the resulting PPME pointer distribution will be the PPME pointer distribution for the single-peak weak measurement at $q_{{}_*}$, but modulated by the term \begin{equation} 2\cos^2\left( 2 p q_{{}_*} - \delta(q_{{}_*})\right) , \ \ \ \delta(q) = \int_{-q}^{q}dq'\, \wva{12}(q')\, , \end{equation} describing the interference pattern.
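For the spin orbit of Eq.\rref{wvalj} the phase $\delta(q)$ is readily evaluated by direct quadrature; the short sketch below (Python, using the $j=20$ setting of Fig.~\ref{fig:spincurves}) does so and confirms, in particular, that $\delta(\pi)$ approaches $2\pi j$.
\begin{verbatim}
# Illustrative sketch: the interference phase delta(q) for the weak-value orbit
# of Eq. (wvalj), evaluated by simple quadrature (j = 20 as in the figure).
import numpy as np

j = 20.0
def Jz(q):                                        # Eq. (wvalj)
    return j * np.sqrt(2.0) / (1.0 + np.sin(q / 2.0)**2)

def delta(q, n=20001):                            # delta(q) = int_{-q}^{q} J_z(q') dq'
    u = np.linspace(-q, q, n)
    f = Jz(u)
    return (f.sum() - 0.5 * (f[0] + f[-1])) * (u[1] - u[0])   # trapezoid rule

print(delta(np.pi / 2))
print(delta(np.pi), 2 * np.pi * j)                # delta(pi) approaches 2*pi*j
\end{verbatim}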
For the situation depicted in Fig.~\ref{fig:likeffex}, the phase shift is easily obtained from Eq.\rref{wvalj} and is given by \begin{equation} \delta(q) = 4 j \tan^{-1}\left(\sqrt{2} \tan\left(\frac{q}{2}\right)\right)\, . \end{equation} For $q_{{}_*} \rightarrow \pi$, we have $\delta(q_{{}_*}) \rightarrow 2 \pi j$; hence, interference patterns similar to those of Figs~\ref{fig:likeffex}f and ~\ref{fig:likeffex}g will show maxima at integer values of $p$ (corresponding to integer values of $j$), or at half-integer values of $p$ for half-integer $j$, consistently with spectrum of $\hat{J}_z$. The foregoing suggests a fairly general picture underlying the transition from weak to strong measurement conditions for fixed initial and final conditions, as the width of the prior distribution in $q$ is varied. Illustrating this passage with a gaussian prior of variable width $\sigma_i$ centered at $q=0$ (Fig.~\ref{fig:trans}) for the same $\hat{J}_z$ measurement, we find the onset of a transitional behavior at a critical value of $\sigma_i$ (Fig.~\ref{fig:trans}b) where the gaussian approximation fails. Beyond this critical value, the exponential rise of the likelihood factor dominates the prior on both sides, thus producing two symmetrically opposed peaks, the locations of which gradually move towards $q = \pm \pi$ as $\sigma_i$ is increased. This transitional behavior is reflected in the resulting pointer distribution by the emergence of an interference pattern with increasingly closer fringes, modulated by an envelope that gradually shifts with the sampled weak value $\wva{12}(q_{{}_*})$ from the eccentric to the normal region of expectation. The pattern eventually settles at the characteristic shape of the strong measurement distribution when the location of the two peaks reaches $q = \pm \pi$, only becoming sharper with increasing $\sigma_i$ when the tails of the gaussian prior ``activate" the next likelihood peaks at $q = \pm 3\pi, \pm 5\pi$, etc. \section{Classical Correspondence of QAWV Framework.} \label{Clas} The connection between macroscopic ``classical" properties and weak values has already been suggested in the literature\ccite{AV90, Tanaka,Parks03}. In this section we give further evidence of this connection by showing the correspondence of the QAWV framework in the classical limit. In particular, we show that in the semi-classical limit, the necessary conditions for a precise measurement of a classical dynamical quantity $A$ according to classical mechanics are at the same time the conditions that in the quantum description guarantee a weak measurement of the corresponding observable $\hat{A}$ yielding the same numerical outcome. Let $x$ be the configuration variable of a classical system, with free dynamics described by the Lagrangian $L_o(\dot{x},x,t)$. For simplicity, we concentrate on a measurement of a function $A(x)$ of the configuration variable $x$ alone, with a measurement Lagrangian of the form \begin{equation} L_M(q,x,t) = \delta(t-t_i) A(x) q \, , \end{equation} coupling the system and an external classical apparatus with pointer variable $p$ and canonical conjugate $q$. To connect with the results of section \ref{qaves}, we interpret the pre- and post- selection as the fixing of initial and final boundary conditions on the system trajectory: $x_1 \equiv x(t_1)$ and $x_2 \equiv x(t_2)$ with $t_2 > t_i > t_1$. Let us assume for simplicity throughout that only one solution is possible for the Euler-Lagrange equations. 
For non-zero $q$, the trajectory of the system will differ from its free trajectory due to a modification of the equations of motion by an additional $q$-dependent impulsive force $ F_M=\delta(t-t_i) A'(x) q \, $ arising from the back-reaction of the apparatus on the system. Then, since the actual trajectory will be some function $x_{12}(t;q) = x(t;x_1,x_2,q)$ of the boundary conditions and $q$, the quantity $A(x(t_i))$ will generally depend on $q$ as well. In analogy with our previous notation, define the function \begin{equation} \widetilde{\mathcal{A}}_{12}(q) \equiv A(x_{12}(t_i;q)) \, . \end{equation} As one can show from the equations of motion, the classical action for the total Lagrangian $L_T = L_o +L_M$ evaluated on the trajectory $x_{12}(t;q)$, \begin{equation}\label{clasact} \widetilde{S}_{12}(q) \equiv \int_{t_1}^{t_2} dt\, L_T[{x_{12}(t;q)}] \, , \end{equation} serves as a generating function for $\widetilde{\mathcal{A}}_{12}(q)$, i.e., $\widetilde{\mathcal{A}}_{12}(q) =\widetilde{S}_{12}'(q)$. Thus, from the equations of motion for the apparatus, we find that the pointer variable $p$ suffers at the time $t_i$ the impulse \begin{equation} p_{{{\scriptscriptstyle\succ}}} = p_{{{\scriptscriptstyle\prec}}} + \widetilde{\mathcal{A}}_{12}(q) = p_{{{\scriptscriptstyle\prec}}} + \widetilde{S}_{12}'(q)\, , \end{equation} in direct correspondence with Eq.\rref{canshift}. We now turn to the probabilistic aspects of the measurement. Allowing for uncertainties in the initial state (i.e., the point in phase space) of the apparatus, we describe our knowledge with a prior p.d.f. ${\cal P}(qp|I^{{\scriptscriptstyle\prec}})$ for the state of the apparatus before the measurement, where $I$ denotes all available prior information. We also assume that initial conditions on the system are irrelevant for this prior assessment of probabilities so that ${\cal P}(qp|I x_1^{{\scriptscriptstyle\prec}}) ={\cal P}(qp|I ^{{\scriptscriptstyle\prec}})$. Since the variable $q$ enters the equations of motion of the system, knowledge of the final condition $x_2$ becomes relevant for inferences about $q$ at the time of the measuring interaction, and will therefore determine a re-assessment of prior probabilities. We must therefore compute the posterior p.d.f. ${\cal P}(qp|I x_1 x_2 ^{{\scriptscriptstyle\prec}})$ for the apparatus, conditioned on the endpoints of the system trajectory, at the time \textit{before} the interaction. The dynamics of the measurement can then be described by the Liouville evolution generated by $\widetilde{S}_{12}(q)$, i.e., \begin{equation}\label{classtatetarns} {\cal P}(qp|I x_1 x_2 ^{{\scriptscriptstyle\succ}}) = e^{-\widetilde{\mathcal{A}}_{12}(q)\frac{\partial}{\partial p}}{\cal P}(qp|I x_1 x_2 ^{{\scriptscriptstyle\prec}}) \, . \end{equation} Using Bayes' theorem, we find that \begin{equation}\label{clasBayes} {\cal P}(qp|I x_1 x_2 ^{{\scriptscriptstyle\prec}}) = \frac{{\cal P}(x_2| x_1 q )}{{\cal P}(x_2|I x_1 )} {\cal P}(qp|I^{{\scriptscriptstyle\prec}} )\, , \end{equation} where we have used the fact that $q$ is the only relevant apparatus variable entering the dynamics of the system, thus yielding a likelihood factor ${\cal P}(x_2| x_1 q )$ analogous to $P_{12}(q)$ in the quantum case.
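As a concrete check of the generating-function property $\widetilde{S}_{12}'(q)=A(x_{12}(t_i;q))$, consider the simplest possible case, a free particle $L_o=\frac{m}{2}\dot{x}^2$ with $A(x)=x$ (a choice made here purely for illustration and not used in the text). The kicked trajectory with both endpoints fixed can be written down explicitly, and a finite-difference derivative of the action confirms the identity numerically.
\begin{verbatim}
# Illustrative sketch: free particle with A(x) = x.  The impulsive coupling kicks
# the velocity by q/m at t_i; with both endpoints fixed, the action S_12(q)
# evaluated on this trajectory satisfies S_12'(q) = x(t_i; q).
import numpy as np

m, t1, ti, t2 = 1.3, 0.0, 0.6, 1.0                # arbitrary mass and times
x1, x2 = 0.2, 1.1                                 # fixed endpoints
tau1, tau2 = ti - t1, t2 - ti

def trajectory(q):
    v1 = (x2 - x1 - (q / m) * tau2) / (tau1 + tau2)   # initial velocity from x(t2) = x2
    v2 = v1 + q / m                                   # velocity after the kick
    xi = x1 + v1 * tau1                               # x(t_i; q)
    return v1, v2, xi

def action(q):                                    # kinetic action + q * A(x(t_i))
    v1, v2, xi = trajectory(q)
    return 0.5 * m * (v1**2 * tau1 + v2**2 * tau2) + q * xi

q, h = 0.8, 1e-6
print((action(q + h) - action(q - h)) / (2 * h))  # finite-difference S_12'(q)
print(trajectory(q)[2])                           # x(t_i; q): should agree
\end{verbatim}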
Finally, evolving to the time after the measurement through Eq.\rref{classtatetarns} and marginalizing, we obtain for the pointer variable distribution after the measurement: \begin{equation}\label{claspdf} {\cal P}(p|I x_1 x_2^{{\scriptscriptstyle\succ}}) =\left \langle\, \delta\!\left(\, p - p' - \widetilde{\mathcal{A}}_{12}(q')\, \right) \,\right \rangle_{{\scriptscriptstyle\prec}} \end{equation} where the dummy variables $q'$ and $p'$ are averaged over the reassessed initial phase space p.d.f. for the apparatus ${\cal P}(q'p'|I x_1 x_2^{{\scriptscriptstyle\prec}})$. This distribution is in complete analogy with Eq.\rref{quantpdf} if averages over ${\cal P}(qp|I x_1 x_2^{{\scriptscriptstyle\prec}}) $ are identified with averages over the reassessed state $\ket{\tilde{\phi}_{_{12}}^{{\scriptscriptstyle\prec}}}$ and if $\widetilde{\mathcal{A}}_{12}(q)$ is identified with the $q$-dependent weak value $\wva{12}(q)$. With this identification, Eq.\rref{finalmoms} for the associated moments can be used for both the classical and quantum descriptions. Furthermore, the terms $\langle p \rangle_{{\scriptscriptstyle\prec}}$ and $\langle \{\delta p ,\delta \mathcal{A} \} \rangle_{{\scriptscriptstyle\prec}}$ in \rref{finalmoms} can also be eliminated in the classical case by requiring that the prior phase space distribution factors as ${\cal P}(qp|I^{{\scriptscriptstyle\prec}})={\cal P}(q|I^{{\scriptscriptstyle\prec}}){\cal P}(p|I^{{\scriptscriptstyle\prec}})$ with the expectation value of $p$ vanishing over ${\cal P}(p|I^{{\scriptscriptstyle\prec}})$. We can now show that under appropriate semi-classical conditions on a corresponding quantum system, the above analogy is not only formal but rather constitutes a true numerical correspondence between classical and quantum averages. For this, we need to calculate the so-far unspecified likelihood factor ${\cal P}(x_2| x_1 q )$ in Eq.\rref{clasBayes}, which plays the role of $P_{12}(q)$ in the state reassessment of Eq.\rref{statereassess}. In the classical description, the probability of being at $x_2$ at the time $t_2$ is proportional to the integral $\int d\pi\, \delta(x_2 -x(t_2;x_1, \pi, q, t_1))$ over all possible initial momenta $\pi$ of the system, yielding \begin{equation} {\cal P}(x_2|x_1 q) \propto \left|\frac{\partial\pi_1 }{\partial x_2} \right| \, , \end{equation} where $\pi_1 = \pi(t_1;x_1,x_2,q)$ is the value of the initial momentum as determined from the boundary conditions. This initial momentum can be obtained from a variation of the classical action, $\pi_1 = -\partial_{x_1} \widetilde{S}_{12}(q)$\ccite{Arnold}, so that \begin{equation} {\cal P}(x_2|x_1 q) \propto \left|\frac{\partial^2 \widetilde{S}_{12}(q) }{\partial x_1 \partial x_2}\right|\, , \end{equation} (known as the Van Vleck determinant\ccite{Cecile} from its extension to higher dimensions). Correspondence with the quantum description can now be established by calculating the quantum mechanical propagator $ \langle x_2|\hat{U}(t_2,t_1;q)| x_1 \rangle $ for the corresponding quantum system, with $\hat{U}(t_2,t_1;q)$ being the time evolution operator associated with the classical Lagrangian $L_o + L_M(q)$. As is easily verified, this is the relevant amplitude for the von Neumann measurement of $A(\hat{x})$ at the time $t_i$ with the given boundary conditions.
Under appropriate semiclassical conditions\ccite{Cecile} (e.g., small times, large masses, slowly varying potentials, etc.), the propagator reduces to the semiclassical or WKB form \begin{equation} \langle x_2|\hat{U}(t_2,t_1;q)| x_1 \rangle \stackrel{WKB}{\longrightarrow}\frac{1}{(2 \pi i)^{\frac{1}{2}}} \sqrt{\left|\frac{\partial^2 \widetilde{S}_{12}(q) }{\partial x_1 \partial x_2}\right|}\ e^{i \widetilde{S}_{12}(q)} \, , \end{equation} where $\widetilde{S}_{12}$ is the classical action of Eq.\rref{clasact} (for quadratic Lagrangians, such as that of the free particle, the WKB form is in fact exact). Consequently, under semiclassical conditions, the weak value $\wva{12}(q)$ of $A(\hat{x})$ at the time $t_i$ coincides with the classical $\widetilde{\mathcal{A}}_{12}(q)$; similarly, the likelihood factor in the re-assessment of the initial state of the apparatus (Eq.\rref{statereassess}) is the square root of the likelihood factor $\propto \left|\partial_1 \partial_2 \widetilde{S}_{12}(q) \right|$ involved in the re-assessment probabilities in the classical description. Thus, assuming the conditions ensuring $\langle p \rangle_{{\scriptscriptstyle\prec}}=0$, the final posterior mean value of $p$ will be given both in the classical and quantum descriptions by the average value $\langle \mathcal{A}\rangle_{{\scriptscriptstyle\prec}}$ over the respective posterior distributions in $q$, which can be made to coincide. This allows us to claim a stronger correspondence between the classical and quantum descriptions when the system satisfies semiclassical conditions: for the same prior distributions in $q$, the classical and quantum expectation values and variances of $A$ \emph{are numerically equal} and hence, in particular, the final pointer expectation values are equal. It follows that the minimum dispersion conditions on the variable $q$ that in a classical description are required for a precise measurement of $A$ (i.e., $\delta q \rightarrow 0 \Rightarrow \delta A \rightarrow 0$), are at the same time the conditions that in the quantum description will guarantee a \emph{weak} measurement of $\hat{A}$ yielding the same numerical value. This correspondence strongly suggests that, indeed, what we call macroscopic ``classical" properties are in fact weak values. Let us elaborate on this assertion: the use of classical mechanics to describe macroscopic systems or other quantum systems exhibiting classical behavior relies on the fact that individual measurements may be devised so that: a) the effect on the measurement device accurately reflects the numerical value of the classical observable being measured; b) no appreciable disturbance is produced on the system as a result of the measurement interaction; and c) the effect on the measurement device is statistically distinguishable (i.e., the signal to noise ratio is large). The three conditions can be stated as follows: a) $\frac{\delta \mathcal{A}}{\mathcal{A}} \ll 1$, b) $ \ave{q}=0$, $\delta q \rightarrow 0$ and c) $\frac{\delta p }{\mathcal{A}} \ll 1$. In the quantum description, conditions a) and b) are weak measurement conditions and can be attained asymptotically by making the posterior uncertainty $ \delta q_{{\scriptscriptstyle\succ}}$ tend to zero, with the posterior average fixed at $q=0$; however, condition c) cannot be upheld in the limit $ \delta q \rightarrow 0$ since $\delta p \rightarrow \infty$ due to the uncertainty principle.
Equivalently, conditions a) and b) cannot be fulfilled if condition c) is to be satisfied by demanding $\delta p \rightarrow 0$ as in the case of an ideally strong measurement. While it is therefore impossible to satisfy the three conditions in either the absolute strong or weak limit, relatively weak measurement conditions can nevertheless be found as a compromise in the uncertainty relations so that conditions a), b) and c) are simultaneously satisfied ``for all practical purposes" when classical-like physical quantities are involved. Indeed, for such quantities one expects $\mathcal{A}$ to be in a sense ``large" relative to atomic scales, or more precisely, to scale extensively with some scale parameter $\lambda$ growing with the size or ``classicality" of the system (such as the mass or the number of atoms). One can then choose a scaling relation for $\delta q$, i.e., $\delta q \sim \lambda^{-\gamma}$, so that \begin{equation} \frac{\delta p}{ \mathcal{A} } \sim \frac{\delta \mathcal{A} }{\mathcal{A} } \ll 1 \, , \end{equation} in which case conditions a), b) and c) can be satisfied in the limit $\lambda \rightarrow \infty$. Assuming that $\mathcal{A}'$ scales as $\mathcal{A}$, then with the aid of the uncertainty relation $\delta p \sim 1/\delta q$ and $\delta \mathcal{A} \simeq \mathcal{A}' \delta q$, we find that this is possible in the quantum description if $\delta q$ can be made to scale as \begin{equation} \delta q \sim \lambda^{-1/2} \, , \end{equation} (indeed, $\delta p/\mathcal{A} \sim \lambda^{\gamma-1}$ while $\delta \mathcal{A}/\mathcal{A} \sim \lambda^{-\gamma}$, and the two exponents match precisely for $\gamma=1/2$), in which case \begin{equation} \frac{\delta p }{\mathcal{A}} \sim \frac{\delta \mathcal{A}}{\mathcal{A}} \sim \lambda^{-\frac{1}{2}} \, . \end{equation} As was recently shown\ccite{Poullin}, this is precisely the scaling relation of the optimal compromise for measurements of ``classical" collective properties (such as center of mass position or total momentum) of a large number $(\sim \lambda)$ of independent atomic constituents. \section{Conclusion} \label{concl} In this paper we have advanced the claim that weak values of quantum mechanical observables constitute legitimate physical concepts providing an objective description of the properties of a quantum system known to belong to a completely pre- and postselected ensemble. This we have done by addressing two aspects, namely the physical interpretation of weak values, and their applicability as a physical concept outside the weak measurement context. Regarding the physical meaning of weak values, we have shown that the weak value corresponds to a definite mechanical response of an ideal measuring probe, the effect of which, from the point of view of the system, can be described as an infinitesimally uncertain unitary transformation. We have stressed how from this operational definition the weak value of an observable $\hat{A}$ is tied to the role of $\hat{A}$ as a generator of infinitesimal unitary transformations. We believe that this sharper operational formulation of weak values in terms of well-defined mechanical effects clarifies the sense in which weak values describe new and surprising features of the quantum domain. Regarding the applicability of the concept of weak values in more general contexts, we have shown that arbitrary-strength von Neumann measurements can be analyzed in the framework of quantum averages of weak values, in which dispersion in the apparatus variable driving the back-reaction on the system entails a quantum sampling of weak values.
The framework has been shown to merge naturally into the classical inferential framework in the semi-classical limit. It is our hope that the framework introduced in the present paper may serve as a motivation for a refreshed analysis of the measurement process in quantum mechanics. \section{Acknowledgments} Y. A. acknowledges support from the Basic Research Foundation of the Israeli Academy of Sciences and Humanities and the National Science Foundation. A.B. acknowledges support from Colciencias (contract No. 245-2003). This paper is based in part on the latter's doctoral dissertation\ccite{BoteroThesis}, the completion of which owes much to Prof. Yuval Ne'eman and financial support from Colciencias-BID II and a one-year scholarship from ICSC - World Laboratory. \end{document}
\begin{document} \title{Arnold-Liouville theorem for integrable PDEs: a case study of the focusing NLS equation} \author{T. Kappeler\footnote{T.K. is partially supported by the Swiss National Science Foundation.} and P. Topalov\footnote{P.T. is partially supported by the Simons Foundation, Award \#526907}} \maketitle \begin{abstract} We prove an infinite dimensional version of the Arnold-Liouville theorem for integrable non-linear PDEs: in a case study, we consider the {\em focusing} NLS equation with periodic boundary conditions. \end{abstract} \noindent{\small\em Keywords}: {\small Arnold-Liouville theorem, focusing NLS equation, focusing mKdV equation, normal forms, Birkhoff coordinates} \noindent{\small\em 2010 MSC}: {\small 37K10, 37K20, 35P10, 35P15} \section{Introduction}\label{sec:introduction} Let us first review the classical Arnold-Liouville theorem (\cite{Liou,Min,Arn}) in the simplest setup: assume that the phase space $M$ is an open subset of $\mathbb{R}^{2n}=\{(x,y)\,|\,x,y\in\mathbb{R}^n\}$, $n\ge 1$, endowed with the Poisson bracket \[ \{F,G\}=\sum_{j=1}^n\Big({\partial}_{y_j}F\cdot{\partial}_{x_j}G-{\partial}_{x_j}F\cdot{\partial}_{y_j}G\Big). \] The Hamiltonian system with smooth Hamiltonian $H : M\to\mathbb{R}$ then has the form \begin{equation*} \dot x={\partial}_y H,\quad \dot y=-{\partial}_x H \end{equation*} with corresponding Hamiltonian vector field $X_H=({\partial}_y H,-{\partial}_x H)$ where ${\partial}_x\equiv({\partial}_{x_1},...,{\partial}_{x_n})$ and ${\partial}_y\equiv({\partial}_{y_1},...,{\partial}_{y_n})$. Such a vector field is said to be {\em completely integrable} if there exist $n$ pairwise Poisson commuting smooth integrals $F_1,...,F_n :M\to\mathbb{R}$ (i.e. $\{H,F_k\}=0$ and $\{F_k,F_l\}=0$ for any $1\le k,l\le n$) so that the differentials $d_{(x,y)} F_1$,...,$d_{(x,y)}F_n\in T^*_{(x,y)}\mathbb{R}^{2n}$ are linearly independent on an open dense subset of points $(x,y)$ in $M$. In this setup the Arnold-Liouville theorem reads as follows (cf. e.g. \cite{Arn} or \cite{MZ}): \begin{Th*}[Arnold-Liouville] Assume that $c\in F(M)\subseteq\mathbb{R}^n$ is a regular value of the momentum map $F=(F_1,...,F_n) : M\to\mathbb{R}^n$ so that a connected component of $F^{-1}(c)$, denoted by $N_c$, is compact. Then $N_c$ is an $n$-dimensional torus that is invariant with respect to the Hamiltonian vector field $X_H$. Moreover, there exist an $X_H$-invariant open neighborhood $U$ of $N_c$ in $M$ and a diffeomorphism \[ \Psi : D\times(\mathbb{R}/2\pi\mathbb{Z})^n\to U,\quad(I,\theta)\mapsto\Psi(I,\theta), \] where $D$ is an open disk in $\mathbb{R}^n$, so that the actions $I=(I_1,...,I_n)$ and the angles $\theta=(\theta_1,...,\theta_n)$ are canonical coordinates (i.e., $\{\theta_j,I_j\}=1$ for all $1\le j\le n$ whereas all other brackets between the coordinates vanish) and the pull-back ${\mathcal H}=H\circ\Psi$ of the Hamiltonian $H$ depends only on the actions. \end{Th*} An immediate implication of this theorem is that the equation of motion, when expressed in action-angle coordinates, becomes \[ \dot\theta={\partial}_I\mathcal{H}(I),\quad\dot I=-{\partial}_\theta\mathcal{H}(I)=0. \] Hence, the actions are conserved and so is the frequency vector $\omega(I):={\partial}_I\mathcal{H}(I)$. Therefore, the equation can be solved explicitly, \[ \theta(t)=\theta(0)+\omega(I)t\;\big(\mathop{\rm mod} (2\pi\mathbb{Z})^n\big),\quad I(t)=I(0). \] In particular, this shows that every solution is quasi-periodic.
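As an elementary illustration of this theorem (the textbook case $n=1$, included here only for orientation and not specific to the PDE setting considered below), consider $M=\mathbb{R}^2\setminus\{0\}$ with $H(x,y)=\frac12(x^2+y^2)$ and $F_1=H$. Setting
\[
I=\frac12(x^2+y^2),\qquad \theta=\arg(x+i y),
\]
one checks that $\{\theta,I\}=1$ for the bracket above, that ${\mathcal H}(I)=I$, and that every level set $N_c=\{H=c\}$ with $c>0$ is an invariant circle on which the motion is periodic with frequency $\omega(I)={\partial}_I{\mathcal H}(I)=1$.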
Furthermore, the invariant tori in the neighborhood $U$ are of maximal dimension $n$ and their tangent spaces at any given point $(x,y)$ are spanned by the Hamiltonian vectors $X_{F_1}(x,y)$,...,$X_{F_n}(x,y)$. Informally, one can say that the Arnold-Liouville theorem assures that generically there is no room for other types of dynamics besides quasi-periodic motion. In fact, by Sard's theorem one sees that if $F$ is e.g. proper then, under the conditions above, the union of invariant tori of maximal dimension $n$ is open and dense in $M$. In addition, the set of regular values of the momentum map $F =(F_1,...,F_n) : M\to\mathbb{R}^n$ is open and dense in $F(M)$. Typically, action-angle coordinates cannot be extended globally. One reason for this is the existence of singular values of the momentum map $F : M\to\mathbb{R}^n$. Such a situation appears e.g. in the case of an elliptic fixed point $\xi\in M$. In that case, it might be possible to extend the coordinates $X_j=\sqrt{2I_j}\cos\theta_j$, $Y_j=\sqrt{2I_j}\sin\theta_j$, to an open neighborhood of $\xi$ in $M$. We refer to such coordinates as Birkhoff coordinates and the Hamiltonian $H$, when expressed in these coordinates, is said to be in Birkhoff normal form. For results in this direction we refer to \cite{Vey,Eliasson,Zung1} and references therein. In special cases, such as systems of coupled oscillators, Birkhoff coordinates can be defined on the entire phase space and are referred to as global Birkhoff coordinates. In what follows we will keep this terminology and will call $(X_j,Y_j)$ defined in terms of the action-angle coordinates as above again {\em Birkhoff coordinates}, even if they are not necessarily related to an elliptic fixed point. Other obstructions to extending action-angle coordinates globally are the presence of hyperbolic fixed points as well as focus-focus fixed points (cf. \cite{Duistermaat,Eliasson,Zung2}) where one observes topological obstructions in case of a non-trivial monodromy of the action variables. Finally note that action-angle coordinates are used for obtaining KAM type results for small perturbations of integrable Hamiltonian systems. Our aim is to present an extension of the above described version of the Arnold-Liouville theorem to integrable PDEs in the form of a case study of the focusing nonlinear Schr\"odinger (fNLS) equation \begin{equation}\label{eq:nls} i {\partial}_t u=-{\partial}^2_x u-2|u|^2u,\quad u|_{t=0}=u_0 \end{equation} with periodic boundary conditions. Our motivation for choosing the fNLS equation stems from the fact that it is known to be an integrable PDE which does {\em not} admit local action-angle coordinates in open neighborhoods of specific potentials (see \cite{KT3}), whereas in contrast, integrable PDEs such as the Korteweg-de Vries equation (KdV) or the defocusing nonlinear Schr\"odinger equation (dNLS) admit global Birkhoff coordinates (\cite{GK1,KP1}). To state our results we first need to introduce some notation and review some facts about the fNLS equation. It is well known that \eqref{eq:nls} can be written as a Hamiltonian PDE. To describe it, let $L^2\equiv L^2(\mathbb{T},\mathbb{C})$ denote the Hilbert space of square-integrable complex valued functions on the unit torus $\mathbb{T}:=\mathbb{R}/\mathbb{Z}$ and let $L^2_c:=L^2\times L^2$.
On $L^2_c$ introduce the Poisson bracket defined for $C^1$-functions $F$ and $G$ on $L^2_c$ by \begin{equation}\label{eq:poisson_bracket} \{F,G\}(\varphi):=-i \int_0^1\big({\partial}_1 F\cdot{\partial}_2 G-{\partial}_2 F\cdot{\partial}_1 G\big)\,dx \end{equation} where $\varphi=(\varphi_1,\varphi_2)$ and ${\partial}_j F\equiv{\partial}_{\varphi_j}F$ for $j=1,2$ are the two components of the $L^2$-gradient of $F$ in $L^2_c$. More generally, we will consider $C^1$-functions $F$ and $G$ defined on a dense subspace of $L^2_c$ having sufficiently regular $L^2$-gradients so that the integral in \eqref{eq:poisson_bracket} is well defined when viewed as a dual pairing. The NLS-Hamiltonian, defined on the Sobolev space $H^1_c=H^1\times H^1$, $H^1\equiv H^1(\mathbb{T},\mathbb{C})$, is given by \[ \mathcal{H}_{NLS}(\varphi):= \int_0^1\big({\partial}_x\varphi_1\cdot{\partial}_x\varphi_2+\varphi_1^2\varphi_2^2\big)\,dx. \] The corresponding Hamiltonian equation then reads \begin{equation}\label{eq:nls'} {\partial}_t(\varphi_1,\varphi_2)=-i \big({\partial}_2\mathcal{H}_{NLS},-{\partial}_1\mathcal{H}_{NLS}\big), \end{equation} where ${\partial}_1\mathcal{H}_{NLS}=-{\partial}_x^2\varphi_2+2\varphi_1\varphi_2^2$ and ${\partial}_2\mathcal{H}_{NLS}=-{\partial}_x^2\varphi_1+2\varphi_1^2\varphi_2$. Equation \eqref{eq:nls} is obtained by restricting \eqref{eq:nls'} to the real subspace $i L^2_r:=\{\varphi\in L^2_c\,|\,\varphi_2=-\overline{\varphi_1}\}$ of the complex vector space $L^2_c$. More precisely, for $\varphi=i (u,\overline{u})$, one gets the fNLS equation $i {\partial}_t u=-{\partial}_x^2 u-2|u|^2u$. We also remark that when restricting \eqref{eq:nls'} to the real subspace $L^2_r:=\{\varphi\in L^2_c\,|\,\varphi_2=\overline{\varphi_1}\}$ one obtains the dNLS equation mentioned above. According to \cite{ZS} equation \eqref{eq:nls'} admits a Lax pair representation (cf. \cite{Lax}) \[ {\partial}_t L(\varphi)=[P(\varphi),L(\varphi)] \] where $L(\varphi)$ is the Zakharov-Shabat operator (ZS operator) \[ L(\varphi):=i \begin{pmatrix} 1&0\\0&-1\end{pmatrix}{\partial}_x+ \begin{pmatrix} 0&\varphi_1\\\varphi_2&0\end{pmatrix} \] and $P(\varphi)$ is a certain differential operator of second order. As a consequence, the periodic spectrum of $L(\varphi)$ is invariant with respect to the NLS flow. Actually, we need to consider the periodic and the anti-periodic spectrum of $L(\varphi)$ or, by a slight abuse of notation, of $\varphi$. In order to treat the two spectra at the same time we consider $L(\varphi)$ on the interval $[0,2]$ and impose periodic boundary conditions. Denote the spectrum of $L(\varphi)$ defined this way by $\mathop{\rm Spec}\nolimits_p L(\varphi)$. In what follows we refer to it as the {\em periodic spectrum} of $L(\varphi)$ or, by a slight abuse of terminology, of $\varphi$. Note that $\mathop{\rm Spec}\nolimits_p L(\varphi)$ is discrete and invariant with respect to the NLS flow on $L^2_c$. We say that the periodic spectrum $\mathop{\rm Spec}\nolimits_p L(\varphi)$ is {\em simple} if every eigenvalue has algebraic multiplicity one. Furthermore, for any $\psi\in i L^2_r$ introduce the isospectral set, \[ \mathop{\rm Iso}\nolimits(\psi):=\big\{\varphi\in i L^2_r\,\big|\,\mathop{\rm Spec}\nolimits_p L(\varphi)=\mathop{\rm Spec}\nolimits_p L(\psi)\big\} \] where the equality of the two spectra means that they coincide together with the corresponding algebraic multiplicities of the eigenvalues. Denote by $\mathop{\rm Iso}\nolimits_o(\psi)$ the connected component of $\mathop{\rm Iso}\nolimits(\psi)$ that contains $\psi$.
For any integer $N\ge 0$ introduce the Sobolev spaces $H^N_c:=H^N\times H^N$ and the real subspace \[ i H^N_r:=\{\varphi\in H^N_c\,|\,\varphi_2=-\overline{\varphi_1}\} \] where $H^N\equiv H^N(\mathbb{T},\mathbb{C})$ denotes the Sobolev space of functions $f : \mathbb{T}\to\mathbb{C}$ with distributional derivatives up to order $N$ in $L^2$. In a similar way introduce the following spaces of complex valued sequences \[ \mathfrak{h}^N_c:=\mathfrak{h}^N\times\mathfrak{h}^N, \] \[ \mathfrak{h}^N_r:=\big\{(z,w)\in\mathfrak{h}^N_c\,\big|\,w_n=\overline{z_{(-n)}}\,\,\forall n\in\mathbb{Z}\big\}, \] \[ i \mathfrak{h}^N_r:=\big\{(z,w)\in\mathfrak{h}^N_c\,\big|\,w_n=-\overline{z_{(-n)}}\,\,\forall n\in\mathbb{Z}\big\}, \] where \[ \mathfrak{h}^N:=\big\{z=(z_n)_{n\in\mathbb{Z}}\,\big|\,\|z\|_N<\infty\big\},\quad \|z\|_N:=\Big(\sum_{j\in\mathbb{Z}}(1+j^2)^N|z_j|^2\Big)^{1/2}. \] Note that $\mathfrak{h}^N_r$ and $i\mathfrak{h}^N_r$ are real subspaces of $\mathfrak{h}^N_c$. In the case when $N=0$ we set $\ell^2_c\equiv\mathfrak{h}^0_c$, $i \ell^2_r\equiv i \mathfrak{h}^0_r$, and $\ell^2\equiv\mathfrak{h}^0$. For simplicity, we will also use the same symbols for the spaces of sequences with indices $|n|>R$ with $R\ge 0$. Finally, we say that the subset ${\mathcal W}\subseteq i L^2_r$ is {\em saturated} if $\mathop{\rm Iso}\nolimits_o(\psi)\subseteq{\mathcal W}$ for any $\psi\in{\mathcal W}$. The main result of this paper is the following Theorem. \begin{Th}\label{th:main} Assume that $\psi\in i L^2_r$ has simple periodic spectrum $\mathop{\rm Spec}\nolimits_p L(\psi)$. Then there exist a saturated open neighborhood ${\mathcal W}$ of $\mathop{\rm Iso}\nolimits_o(\psi)$ in $i L^2_r$ and a real analytic diffeomorphism \begin{equation}\label{eq:Psi} \Psi : {\mathcal W}\to\Psi({\mathcal W})\subseteq i\ell^2_r,\quad\varphi\mapsto \Big(\big(z_n(\varphi)\big)_{n\in\mathbb{Z}},\big(w_n(\varphi)\big)_{n\in\mathbb{Z}}\Big), \end{equation} onto the open subset $\Psi({\mathcal W})$ of $i\ell^2_r$ so that the following holds: \begin{itemize} \item[(NF1)] $\Psi$ is canonical, i.e., $\{z_n,w_{(-n)}\}=-i$ for any $n\in\mathbb{Z}$ whereas all other brackets between coordinate functions vanish. \item[(NF2)] For any integer $N\ge 0$, $\Psi({\mathcal W}\cap i H^N_r)\subseteq i \mathfrak{h}^N_r$ and \[ \Psi : {\mathcal W}\cap i H^N_r\to\Psi({\mathcal W}\cap i H^N_r)\subseteq i \mathfrak{h}^N_r \] is a real analytic diffeomorphism onto its image. \item[(NF3)] The pull-back $\mathcal{H}_{NLS}\circ\Psi^{-1} : \Psi\big({\mathcal W}\cap i H^1_r\big)\to\mathbb{R}$ of the fNLS Hamiltonian is a real analytic function that depends only on the actions $I_n:=z_n w_{(-n)}$, $n\in\mathbb{Z}$. \end{itemize} \end{Th} The open neighborhood ${\mathcal W}\subseteq i L^2_r$ in Theorem \ref{th:main} is chosen in such a way that for any $\varphi\in{\mathcal W}$ the spectrum $\mathop{\rm Spec}\nolimits_p L(\varphi)$ has the property that all multiple eigenvalues are real with algebraic and geometric multiplicity two whereas all simple eigenvalues are non-real and appear in complex conjugate pairs. Hence the periodic eigenvalues of $L(\varphi)$ have algebraic multiplicity at most two. Recall also that a potential $\varphi\in i L^2_r$ is called a {\em finite gap potential} if the number of simple periodic eigenvalues of $L(\varphi)$ is finite (cf. e.g. \cite{KLT1,KLT2}).
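Let us also record an elementary consequence of the reality conditions entering Theorem \ref{th:main} (a small computation, stated here only for orientation and not used in the sequel): for $(z,w)\in i\ell^2_r$ one has $w_{(-n)}=-\overline{z_n}$ and hence
\[
I_n=z_n w_{(-n)}=-|z_n|^2\,,\quad n\in\mathbb{Z}\,,
\]
so that on $\Psi({\mathcal W})$ the actions are real valued, and for fixed values of the actions each pair $\big(z_n,w_{(-n)}\big)$ with $I_n\ne 0$ traces out a circle $\{|z_n|={\rm const}\}$. This is the picture behind item (i) of the corollary below.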
As an immediate application of Theorem \ref{th:main} one obtains the following \begin{Coro}\label{coro:main} Assume that $\psi\in i H^N_r$ with $N\in\mathbb{Z}_{\ge 0}$ has simple periodic spectrum $\mathop{\rm Spec}\nolimits_p L(\psi)$. Then for any $\varphi\in{\mathcal W}$, where ${\mathcal W}$ is the open neighborhood of Theorem \ref{th:main}, the following holds: \begin{itemize} \item[(i)] The set $\Psi(\mathop{\rm Iso}\nolimits_o(\varphi))$ is compact in $i \mathfrak{h}^N_r$ and can be represented as a direct product of countably many circles, one for each pair of complex conjugate simple periodic eigenvalues. \item[(ii)] For $N\ge 1$, the fNLS equation on $\Psi({\mathcal W}\cap i H^N_r)\subseteq i \mathfrak{h}^N_r$ takes the form \[ \dot z_n=-i\omega_n z_n,\quad \dot w_n=i \omega_n w_n,\quad n\in\mathbb{Z} \] where $\omega_n\equiv\omega_n(I):={\partial}_{I_n}(\mathcal{H}_{NLS}\circ\Psi^{-1})$ are the NLS frequencies and $I_n=z_n w_{(-n)}$, $n\in\mathbb{Z}$, are the actions. Hence, the solutions of the fNLS equation with initial data in ${\mathcal W}\cap i H^N_r$ are globally defined and almost periodic in time. \item[(iii)] The finite gap potentials in ${\mathcal W}$ lie on finite dimensional fNLS invariant tori contained in ${\mathcal W}\cap i H^N_r$ for any $N\ge 0$. The set of these potentials is dense in ${\mathcal W}\cap i H^N_r$. \end{itemize} \end{Coro} \begin{Rem} \begin{itemize} \item[(i)] For any $\varphi\in{\mathcal W}\cap i H^N_r$, $N\ge 1$, and $t\in\mathbb{R}$ denote by $S_t(\varphi)$ the solution of the fNLS equation obtained in Corollary \ref{coro:main} (ii) with initial data $\varphi$. Then for any given $N\in\mathbb{Z}_{\ge 1}$ and $t\in\mathbb{R}$, one can prove that the flow map $S_t : {\mathcal W}\cap i H^N_r\to{\mathcal W}\cap i H^N_r$ is a homeomorphism (cf. \cite{KT1}). In addition, the map $S :\mathbb{R}\times\big({\mathcal W}\cap i H^N_r\big)\to{\mathcal W}\cap i H^N_r$, $(t,\varphi)\mapsto S_t(\varphi)$, is continuous. \item[(ii)] It can be shown that the frequencies $\omega_n$, $n\in\mathbb{Z}$, extend real analytically to the larger set ${\mathcal W}\cap i L^2_r$ (cf. \cite{KT1,KM}). \end{itemize} \end{Rem} The next result addresses the question of how restrictive the assumption of $\mathop{\rm Spec}\nolimits_p L(\psi)$ being simple is. Let \begin{equation}\label{eq:T} \mathcal{T}:=\big\{\psi\in i L^2_r\,\big|\,\mathop{\rm Spec}\nolimits_p L(\psi)\,\,\text{is simple}\big\}. \end{equation} Recall that a subset $A$ of a complete metric space $X$ is said to be {\em residual} if it is the intersection of countably many open dense subsets. By Baire's theorem, such a set is dense in $X$. We prove in \cite{KTPreviato} the following \begin{Th}\label{prop:general_position} For any integer $N\ge 0$, the set $\mathcal{T}\cap i H^N_r$ is residual in $i H^N_r$. \end{Th} \begin{Rem} It is well known that the fNLS equation is well-posed on $i H^N_r$ for any integer $N\ge 0$ (\cite{Bo}). It then follows from Theorem \ref{prop:general_position} and Corollary \ref{coro:main} (iii) that any solution in $i H^N_r$ with $N\ge 0$ can be approximated in $C\big([-T,T],i H^N_r\big)$ by finite gap solutions for any $T>0$. \end{Rem} Finally we mention that the results of Theorem \ref{th:main} apply to any Hamiltonian in the fNLS hierarchy.
In particular, these results hold for the focusing modified KdV equation \[ {\partial}_t v=-{\partial}_x^3 v-6 v^2 {\partial}_xv,\quad v|_{t=0}=v_0 \] with periodic boundary conditions, which can be obtained as the restriction of the Hamiltonian PDE on the Poisson manifold $L^2_c$ with Hamiltonian \[ \mathcal{H}_{mKdV}(\varphi):= \int_0^1\big(-({\partial}_x^3\varphi_1)\varphi_2+3(\varphi_1{\partial}_x\varphi_1)\varphi_2^2\big)\,dx \] to the real subspace of $i L^2_r$, \[ \big\{\varphi=i (v,v)\in i L^2_r\,\big|\,v\,\,\text{real valued}\big\}\cong L^2(\mathbb{T},\mathbb{R}). \] We remark that Theorem \ref{prop:general_position} also holds in this setup, meaning that the subset $\{v\in L^2(\mathbb{T},\mathbb{R})\,|\,i(v,v)\in\mathcal{T}\cap i H^N_r\}$ is residual in $H^N(\mathbb{T},\mathbb{R})$ for any integer $N\ge 0$ -- see \cite{KTPreviato} for more details. \noindent{\em Method of proof:} The proof of Theorem \ref{th:main} is based on the following key ingredients: (1) Setup allowing the construction of analytic coordinates: One of the principal merits of our analytic setup is that it allows us to prove the canonical relations between the action-angle coordinates by a deformation argument using the canonical relations between these coordinates in a neighborhood of the zero potential established in \cite{KLTZ} (cf. also \cite{GK1} for the construction of such coordinates in the defocusing case). (2) Choice of contours: For an integrable system on a $2n$-dimensional symplectic space $M$ (such as the one discussed at the beginning of the introduction), action coordinates $I_j$, $1\le j\le n$, on the invariant tori $N_c$ of dimension $n$, smoothly parametrized by regular values $c\in\mathbb{R}^n$ in the image of the momentum map $F : M\to \mathbb{R}^n$, can be defined by Arnold's formula \[ I_j:=\frac{1}{2\pi}\int_{\gamma_j(c)}\alpha,\quad 1\le j\le n, \] in terms of the canonical $1$-form $\alpha$, stemming from the Poisson structure on $M$, and a set of cycles $\gamma_j(c)$, $1\le j\le n$, on $N_c$ that form a basis in the first homology group of $N_c$ and depend smoothly on the parameter $c$. (For one degree of freedom with $\alpha=y\,dx$ this is the familiar prescription that, up to orientation, $2\pi I_1$ equals the area enclosed by the orbit $\gamma_1(c)$.) For integrable PDEs such as the KdV or the dNLS equations, Arnold's procedure for constructing the action coordinates has been successfully implemented in \cite{FMcL} (cf. also \cite{VN} and \cite{McKV}). More precisely, in the case of the dNLS equation, the Dirichlet eigenvalues of $L(\varphi)$ can be used to define cycles $\gamma_j$, $j\in\mathbb{Z}$. Surprisingly, the integrals $\frac{1}{2\pi}\int_{\gamma_j}\alpha$ can be interpreted as contour integrals on the complex plane (cf. e.g. \cite{FMcL,McKV,GK1}). We emphasize that in the case of the fNLS equation, treated in the present paper, these contours can {\em not} be obtained from the Dirichlet spectrum of $L(\varphi)$. It turns out that the contours in the complex plane which work in the case of the dNLS equation for potentials $\varphi$ near the origin in $H^N_r$ also work in the case of the fNLS equation for potentials near the origin in $i H^N_r$ (\cite{KLTZ}). We then use a deformation argument along an appropriately chosen path that connects $\psi$ with a small open neighborhood of the zero potential in $i H^N_r$, to obtain contours for potentials in an open neighborhood of the isospectral set $\mathop{\rm Iso}\nolimits_o(\psi)\cap i H^N_r$.
(3) Normalized differentials: The angle coordinates are defined in terms of a set of normalized differentials on an open Riemann surface (of possibly infinite genus), associated to the periodic spectrum $\mathop{\rm Spec}\nolimits_p L(\varphi)$, and the Dirichlet spectrum of $L(\varphi)$ (cf. e.g. \cite{BBEIM,DK} for the case of finite gap potentials as well as \cite{GK1,McKT,McKV,FKT} for the case of more general potentials in $H^N_r$). Such normalized differentials (with properties needed for our purposes) for generic potentials in $i H^N_r$ have been constructed in \cite{KT2} (cf. also \cite{KLT3}). Note that the case of potentials in $i H^N_r$ is more complicated since the operator $L(\varphi)$ is not selfadjoint. An important ingredient for estimates, needed to construct the angle coordinates, is the rather precise localization of the zeros of these differentials provided in \cite{KT2}. We emphasize that no assumptions are made on the Dirichlet eigenvalues of $L(\varphi)$. In particular, they might have algebraic multiplicities greater than or equal to two. As in the case of the actions we construct the angles using a deformation argument. (4) Generic spectral properties of non-selfadjoint ZS operators: The deformation argument, briefly discussed in item (1), requires that the path of deformation stays within the part of phase space $i H^N_r$ which admits action-angle coordinates. This part of the phase space contains the set of potentials $\varphi\in i H^N_r$ which have the property that all multiple eigenvalues are real with geometric multiplicity two whereas all simple eigenvalues are non-real and appear in complex conjugate pairs. In \cite{KLT2}, it is shown that this set is open and path connected. We remark that in order to prove that the actions and the angles, first defined in an open neighborhood of the potential $\psi\in i H^N_r$, analytically extend to an open neighborhood of $\mathop{\rm Iso}\nolimits_o(\psi)$, we make use of the assumption that {\em all} periodic eigenvalues of $L(\psi)$ are simple. (5) Lyapunov type stability of isospectral sets: To ensure that the Birkhoff map \eqref{eq:Psi} is injective in an open neighborhood ${\mathcal W}$ of $\mathop{\rm Iso}\nolimits_o(\psi)$ in $i L^2_r$ with $\psi\in\mathcal{T}$, we show that for any open neighborhood ${\mathcal U}\subseteq{\mathcal W}$ of $\mathop{\rm Iso}\nolimits_o(\psi)$ in $i L^2_r$ there exists an open neighborhood ${\mathcal V}\subseteq{\mathcal U}$ of $\mathop{\rm Iso}\nolimits_o(\psi)$ in $i L^2_r$ with the property that $\mathop{\rm Iso}\nolimits_o(\varphi)\subseteq{\mathcal U}$ for any $\varphi\in{\mathcal V}$. \noindent{\em Additional comments:} Informally, Theorem \ref{th:main} means that $(z_n,w_n)$, $n\in\mathbb{Z}$, can be thought of as nonlinear Fourier coefficients of $\varphi=(\varphi_1,\varphi_2)$. They are referred to as {\em Birkhoff coordinates}. The Birkhoff coordinates are constructed in terms of action and angle variables. In the case of a finite gap potential, the angle variables are defined by {\em real} valued expressions involving the Abel map of a special curve of finite genus associated to the finite gap potential. The question whether these expressions are real valued has been a longstanding issue, raised by experts in the field in connection with special solutions of the fNLS equation, given in terms of theta functions.
In \cite{KLTZ}, coordinates of the type provided by Theorem \ref{th:main} have been constructed in an open neighborhood of the origin in $i H^N_r$. Note however that $\mathop{\rm Spec}\nolimits_p L(0)$ is {\em not} simple since it consists of real eigenvalues of algebraic and geometric multiplicity two. \noindent{\em Related work:} In the seventies and the eighties, several groups of scientists made pioneering contributions to the development of the theory of integrable PDEs. In the periodic or quasi-periodic setup, deep connections between such equations and complex geometry as well as spectral theory were discovered and much of the effort was aimed at representing classes of solutions (referred to as finite band solutions) by means of theta functions, leading to the discovery of finite dimensional invariant tori. See e.g. \cite{Lax,DN} as well as the books \cite{NMPZ,BBEIM,GH} and the references therein. Further developments of these connections made it possible to treat more general classes of solutions. In particular, it was established that many integrable PDEs admit invariant tori with infinitely many degrees of freedom. See e.g. \cite{McKT,FMcL,KP1,GK1,KT1} and the references therein. \noindent{\em Organization of the paper:} In Section \ref{sec:setup} we review various results on ZS operators and related topics which are used in the paper. In Section \ref{sec:actions_in_U_tn} we construct a tubular neighborhood $U_{\rm tn}$ of an appropriate path, connecting a potential $\psi^{(0)}\in\mathcal{T}$ (cf. \eqref{eq:T}) near zero with a given potential $\psi\in\mathcal{T}$ and define the actions in $U_{\rm tn}$. In Section \ref{sec:angles_in_U_tn} we define the angles in $U_{\rm tn}$. In Section \ref{sec:actions_and_angles_in_U_iso} we introduce actions and angles in a tubular neighborhood in $i L^2_r$ of the isospectral set $\mathop{\rm Iso}\nolimits_o(\psi)$ of $\psi$. In Section \ref{sec:birkhoff_map} we define the pre-Birkhoff map and study its local properties whereas in Section \ref{sec:proofs} we prove Theorem \ref{th:main}. \noindent{\em Acknowledgment:} The authors gratefully acknowledge the support and hospitality of the FIM at ETH Zurich and the Mathematics Departments of Northeastern University and the University of Zurich. \section{Setup}\label{sec:setup} In this Section we review results needed throughout the paper. In particular we recall the spectral properties of ZS operators and the results on Birkhoff coordinates in a neighborhood of the zero potential (\cite{KLTZ}). For $\varphi = (\varphi _1,\varphi _2) \in L^2_c$ and $\lambda \in \mathbb{C}$, let $M = M(x,\lambda ,\varphi )$, $x \in {\mathbb R}$, be the fundamental solution of $L(\varphi )M = \lambda M$ satisfying the initial condition $M(0,\lambda,\varphi )={\rm Id}_{2 \times 2}$, where ${\rm Id}_{2 \times 2}$ is the identity $2\times 2$ matrix. It is convenient to write \begin{equation}\label{eq:M} M := \begin{pmatrix} m_1 & m_2 \\ m_3 & m_4 \end{pmatrix}, \quad M_1 :=\begin{pmatrix} m_1 \\ m_3 \end{pmatrix}, \quad M_2 := \begin{pmatrix}m_2 \\ m_4 \end{pmatrix} .
\end{equation} The fundamental solution $M(x,\lambda ,\varphi )$ is a continuous function on ${\mathbb R} \times \mathbb{C} \times L^2_c$, for any given $x \in {\mathbb R}$ it is analytic in $(\lambda,\varphi)\in\mathbb{C} \times L^2_c$, and for any given $(\lambda , \varphi )$ in $\mathbb{C} \times L^2_c$, $M(\cdot,\lambda,\varphi)\in H^1([0,1],{\rm Mat}_{2\times 2}(\mathbb{C}))$ -- see \cite{GK1}. For $\varphi = 0$, $M$ is given by the diagonal $2\times 2$ matrix $E_\lambda (x) = \mbox{diag}(e^{- i\lambda x},e^{ i\lambda x})$. \noindent {\em Periodic spectrum:} Recall that a complex number $\lambda$ is said to be a {\em periodic eigenvalue} of $L(\varphi )$ iff there exists a nonzero solution of $L(\varphi )f = \lambda f$ with $f(1) = \pm f(0)$. As $f(1) = M(1, \lambda )f(0)$, this means that $1$ or $-1$ is an eigenvalue of the Floquet matrix $M(1,\lambda )$. Denote by $\Delta (\lambda , \varphi )$ the discriminant of $L(\varphi )$ \[ \Delta (\lambda , \varphi ) := m_1(1, \lambda , \varphi ) + m_4(1,\lambda , \varphi ) \] and note that by the discussion above $\Delta : \mathbb{C}\times L^2_c\to \mathbb{C}$ is analytic. It follows easily from the Wronskian identity that $\lambda$ is a periodic eigenvalue of $L(\varphi )$ iff $\Delta(\lambda )\in\{2,-2\}$. Hence the periodic spectrum of $L(\varphi )$ coincides with the zero set of the entire function \begin{equation}\label{eq:chi_p} \chi_p(\lambda,\varphi ):=\Delta^2(\lambda,\varphi) - 4. \end{equation} In fact, by \cite[Lemma 2.3]{KLT2}, the algebraic multiplicity of a periodic eigenvalue coincides with its multiplicity as a root of $\chi _p(\cdot,\varphi )$. We say that two complex numbers $a$ and $b$ are {\em lexicographically ordered}, $a \preccurlyeq b$, if $[\mathop{\rm Re}(a) < \mathop{\rm Re}(b)]$ or $[\mathop{\rm Re}(a) = \mathop{\rm Re}(b) \mbox { and } \mathop{\rm Im}(a) \leq \mathop{\rm Im}(b)]$. Furthermore, for any $n\in\mathbb{Z}$ and $R\in\mathbb{Z}_{\ge 0}$ introduce the disks \[ D_n := \{ \lambda \in \mathbb{C}\,|\, |\lambda - n\pi | < \pi /6\}\, \text{and}\, B_R := \{ \lambda \in \mathbb{C}\,|\, |\lambda | < R \pi + \pi/6 \}. \] For a proof of the following well known Lemma see e.g. \cite{GK1}. \begin{Lem}\label{lem:counting_lemma} For any $\psi\in L^2_c$ there exist an open neighborhood $V_\psi$ of $\psi$ in $L^2_c$ and $R_p\in\mathbb{Z}_{\ge 0}$ so that for any $\varphi\in V_\psi$ the following properties hold: \begin{itemize} \item[(i)] The periodic spectrum $\mathop{\rm Spec}\nolimits_p L(\varphi)$ is discrete. The set of eigenvalues counted with their algebraic multiplicities consists of two sequences of complex numbers $(\lambda_n^+)_{|n|>R_p}$ and $(\lambda_n^-)_{|n|>R_p}$ with $\lambda_n^+,\lambda_n^-\in D_n$, $\lambda^-_n\preccurlyeq\lambda^+_n$, and a set $\Lambda_{R_p}\equiv\Lambda_{R_p}(\varphi)$ of $4 R_p+2$ additional eigenvalues that lie in the disk $B_{R_p}$. For any $|n|>R_p$, $\Delta(\lambda_n^\pm,\varphi)=(-1)^n 2$ and \[ \lambda_n^\pm=n\pi+\ell_n^2 \] where the remainder $(\lambda_n^\pm-n\pi)_{|n|>R_p}$ is bounded in $\ell^2$ locally uniformly in $\varphi\in V_\psi$. (Later we will list the eigenvalues in $B_{R_p}$ in a way convenient for our purposes.)
\item[(ii)] The set of roots of the entire function $\lambda\mapsto\dot\Delta(\lambda)\equiv{\partial}_\lambda\Delta(\lambda,\varphi)$, when counted with multiplicities, consists of a sequence $(\dot\lambda_n)_{|n|>R_p}$, so that $\dot\lambda_n\in D_n$, and a set $\dot\Lambda_{R_p}\equiv\dot\Lambda_{R_p}(\varphi)$ of $2 R_p+1$ additional roots that lie in the disk $B_{R_p}$. For $|n|>R_p$, one has \[ \dot\lambda_n=\frac{\lambda_n^++\lambda_n^-}{2}+(\lambda_n^+-\lambda_n^-)^2\ell^2_n \] where the remainder $\ell^2_n$ is bounded in $\ell^2$ locally uniformly in $\varphi\in V_\psi$. The roots in $\dot\Lambda_{R_p}$ are listed in lexicographic order and with their multiplicities \[ \dot\lambda_{-R_p}\preccurlyeq\cdots\preccurlyeq \dot\lambda_{k}\preccurlyeq\dot\lambda_{k+1} \preccurlyeq\cdots\preccurlyeq\dot\lambda_{R_p}, \quad -R_p\le k\le R_p-1. \] \end{itemize} \end{Lem} For potentials $\varphi $ in $ i L^2_r$, the periodic spectrum of $L(\varphi )$ has additional properties. By \cite[Proposition 2.6]{KLT2} the following holds. \begin{Lem}\label{lem:spectrum_symmetries} For any given $\varphi\in i L^2_r$, any real periodic eigenvalue of $L(\varphi )$ has geometric multiplicity two and even algebraic multiplicity. For any periodic eigenvalue $\lambda\in\mathbb{C} \backslash {\mathbb R}$, its complex conjugate $\overline \lambda$ is also a periodic eigenvalue and has the same algebraic and geometric multiplicity as $\lambda $. The periodic eigenvalues $\lambda^+_n$ and $\lambda^-_n$, $|n| > R_p$, given by Lemma \ref{lem:counting_lemma}, satisfy $\mathop{\rm Im}(\lambda_n^+)\ge 0$ and \[ \lambda^-_n=\overline{\lambda ^+_n}\quad \forall |n| > R_p\,. \] \end{Lem} \noindent{\em Discriminant:} The following properties of $\Delta $ and $\dot\Delta $ are well known -- see e.g. \cite[Section 5]{GK1}. To state them introduce $\pi _n:=n\pi \,\,\, \forall n \in \mathbb{Z} \backslash \{ 0 \}$ and $\pi _0 := 1$. \begin{Lem}\label{lem:product1} For $\varphi\in L^2_c$ arbitrary, let $R_p \in \mathbb{Z}_{\ge 0}$ be as in Lemma \ref{lem:counting_lemma}. \begin{itemize} \item[(i)] The function $\lambda \mapsto \chi _p(\lambda ) = \Delta (\lambda ,\varphi )^2 - 4$ is entire and admits the product representation \[ \chi _p(\lambda ) =- 4\Big(\prod _{|n|\leq R_p}\frac{1}{\pi _n^2}\Big) \cdot\chi _{R_p}(\lambda) \cdot\prod_{|n|>R_p} \frac{(\lambda ^+_n-\lambda )(\lambda ^-_n-\lambda )}{\pi ^2_n} \] where $\chi _{R_p}(\lambda )\equiv\chi _{R_p}(\lambda,\varphi)$ denotes the polynomial of degree $4R_p + 2$ given by \[ \chi_{R_p}(\lambda):=\prod _{\eta\in\Lambda_{R_p}} (\eta-\lambda) . \] \item[(ii)] The function $\lambda\mapsto\dot\Delta(\lambda )\equiv\dot\Delta(\lambda,\varphi)$ is entire and admits the product representation \[ \dot\Delta(\lambda)=2\Big(\prod _{|n|\leq R_p}\frac{1}{\pi _n}\Big) \cdot\prod_{\eta\in\dot\Lambda_{R_p}} (\eta-\lambda) \cdot\prod _{|n|>R_p}\frac{\dot\lambda _n-\lambda }{\pi _n} . \] \item[(iii)] For any $\varphi\in i L^2_r$ and $\lambda\in\mathbb{C}$ \[ \Delta(\overline\lambda,\varphi)=\overline{\Delta (\lambda,\varphi )}, \quad \dot\Delta(\overline\lambda,\varphi )=\overline{\dot\Delta(\lambda,\varphi)} . \] In particular, the zero set of $\dot\Delta(\cdot,\varphi )$ is invariant under complex conjugation and thus $\dot\lambda_n$ is a simple real root for any $|n|>R_p$.
\end{itemize} \end{Lem} \noindent The spectrum of $L(\varphi)$, $\varphi \in L^2_c$, when considered as an unbounded operator on $L^2({\mathbb R},\mathbb{C})\times L^2 ({\mathbb R}, \mathbb{C})$ is given by \[ \mathop{\rm Spec}\nolimits_{\mathbb{R}} L(\varphi )=\big\{\lambda \in \mathbb{C}\,|\,\Delta (\lambda)\in[-2, 2]\big\} \] -- see e.g. \cite{KLT1}. Now consider the case $\varphi\in i L^2_r$. By Lemma \ref{lem:product1} (iii), $\Delta (\lambda )$ is real for $\lambda \in{\mathbb R}$ and by Lemma \ref{lem:spectrum_symmetries} and the Wronskian identity one concludes (see e.g. \cite{KLT1}) that \begin{equation}\label{eq:real_line} \Delta(\lambda,\varphi)\in [-2,2]\quad\forall\lambda\in\mathbb{R}\,\,\,\forall\varphi\in i L^2_r. \end{equation} \noindent{\em Dirichlet spectrum:} Denote by $\mathop{\rm Spec}\nolimits _D L(\varphi )$ the Dirichlet spectrum of the operator $L(\varphi )$, i.e. the spectrum of the operator $L(\varphi )$ considered with domain \[ \big\{ f = (f_1, f_2) \in H^1([0,1], \mathbb{C})^2\,\big|\,f_1(0) = f_2(0),\ f_1(1) = f_2(1)\big\} . \] (When the operator $L(\varphi)$ is written as an AKNS operator, the above boundary conditions become the standard Dirichlet boundary conditions -- see e.g. \cite{GK1}.) The Dirichlet spectrum is discrete and the eigenvalues satisfy the following Counting Lemma -- see e.g. \cite{GK1}. \begin{Lem}\label{lem:dirichlet_spectrum} For any $\psi \in L^2_c$ the Dirichlet spectrum $\mathop{\rm Spec}\nolimits_D L(\psi)$ of $L(\psi)$ is discrete. Moreover, there exist $R_D\in\mathbb{Z}_{\ge 0}$ and an open neighborhood $V_\psi$ of $\psi$ in $L^2_c$ so that for any $\varphi\in V_\psi$, the set of Dirichlet eigenvalues, when counted with their multiplicities, consists of a sequence $(\mu_n)_{|n|>R_D}$ with $\mu_n\in D_n$ and a set of $2 R_D+1$ additional Dirichlet eigenvalues that lie in the disk $B_{R_D}$. These additional Dirichlet eigenvalues are listed in lexicographic order and with multiplicities $\mu_{-R_D}\preccurlyeq\cdots\preccurlyeq\mu_{k}\preccurlyeq\mu_{k+1}\preccurlyeq\cdots\preccurlyeq\mu_{R_D}$, $-R_D\le k\le R_D-1$. For $|n|>R_D$, one has \[ \mu_n=n\pi+\ell^2_n \] where the remainder $\ell^2_n$ is bounded in $\ell^2$ locally uniformly in $\varphi\in V_\psi$. If $\lambda $ is a periodic eigenvalue of $L(\varphi )$ of geometric multiplicity two then $\lambda $ is also a Dirichlet eigenvalue. \end{Lem} Note that for any given $\varphi\in L^2_c$ the Dirichlet spectrum of $L(\varphi)$ coincides (with multiplicities) with the zeros of the entire function $\chi_D(\cdot,\varphi) :\mathbb{C}\to\mathbb{C}$ where \[ \chi_D : \mathbb{C}\times L^2_c\to\mathbb{C},\quad(\lambda,\varphi)\mapsto\chi_D(\lambda,\varphi), \] is analytic and \begin{equation}\label{eq:chi_D} 2 i\chi_D(\lambda,\varphi):=\big(m_4+m_3-m_2-m_1\big)\big|_{(1,\lambda,\varphi)} \end{equation} (see \cite[Theorem 5.1]{GK1} and the deformation argument in \cite[Appendix C]{KLT1}). \noindent{\em Birkhoff normal form:} We review results from \cite{KLTZ} where Birkhoff coordinates near $\psi = 0$ were constructed.
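For orientation, we record the explicit form of the above quantities at the zero potential (an elementary computation, not used later): since $M(x,\lambda,0)=E_\lambda(x)$, one has
\[
\Delta(\lambda,0)=2\cos\lambda,\qquad \chi_p(\lambda,0)=-4\sin^2\lambda,\qquad \dot\Delta(\lambda,0)=-2\sin\lambda,\qquad \chi_D(\lambda,0)=\sin\lambda .
\]
Hence the periodic eigenvalues of $L(0)$ are $n\pi$, $n\in\mathbb{Z}$, each of algebraic multiplicity two, whereas the Dirichlet eigenvalues $\mu_n=n\pi$ and the roots $\dot\lambda_n=n\pi$ of $\dot\Delta(\cdot,0)$ are simple. In this case the product representation of Lemma \ref{lem:product1} (i), with $R_p=0$ and $\Lambda_0$ consisting of the double eigenvalue at $0$, reduces to the classical product formula $\sin\lambda=\lambda\prod_{n\ge 1}\big(1-\lambda^2/(n\pi)^2\big)$.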
It follows from Lemma \ref{lem:counting_lemma} and Lemma \ref{lem:dirichlet_spectrum} that there exists an open ball ${\mathcal U}_0$ centered at zero in $L^2_c$ so that for any $\varphi$ in ${\mathcal U}_0$, the periodic eigenvalues of $L(\varphi )$ are given by two sequences $\lambda ^\pm _n$, $n \in\mathbb{Z}$, such that for any $n\in\mathbb{Z}$, $\lambda^\pm_n\in D_n$ and $\Delta (\lambda ^\pm _n) = 2(-1)^n$. In addition, the Dirichlet eigenvalues and the roots of $\dot \Delta$ are given by two sequences $\mu_n$, $n\in\mathbb{Z}$, and $\dot\lambda_n$, $n\in\mathbb{Z}$, which are all simple and satisfy $\mu_n,\dot \lambda _n\in D_n$ for any $n\in\mathbb{Z}$. By shrinking the ball ${\mathcal U}_0$ in $L^2_c$ if necessary so that it is contained in the domain of definition of the Birkhoff map constructed in \cite[Theorem 1.1]{KLTZ}, we obtain the following \begin{Th}\label{th:main_near_zero} The claims of Theorem \ref{th:main} hold on the open ball ${\mathcal W}_0\equiv{\mathcal U}_0\cap i L^2_r$. The diffeomorphism $\Phi : {\mathcal W}_0\to\Phi({\mathcal W}_0)\subseteq i\mathfrak{h}^0_r$ is the restriction to ${\mathcal U}_0$ of the Birkhoff map constructed in \cite[Theorem 1.1]{KLTZ}. \end{Th} \section{Actions on the neighborhood $U_{\rm tn}$}\label{sec:actions_in_U_tn} By Theorem \ref{th:main_near_zero}, actions have been constructed for potentials in the open ball ${\mathcal U}_0$ centered at zero in $L^2_c$. Our goal in this Section is to show that they analytically extend along a suitably chosen path to an open neighborhood of any given potential $\psi^{(1)}$ with simple periodic spectrum. First we need to make some preliminary considerations. For a given $R\in\mathbb{Z}_{\ge 0}$ introduce the set $\mathcal{T}^R\subseteq L^2_c$, defined as follows. \begin{Def}\label{def:T^R} An element $\varphi\in L^2_c$ lies in $\mathcal{T}^R$ if the following conditions on the periodic spectrum $\mathop{\rm Spec}\nolimits_p L(\varphi)$ and the Dirichlet spectrum $\mathop{\rm Spec}\nolimits_D L(\varphi)$ (both counted with multiplicities) hold: \begin{itemize} \item[(R1)] For any $|n|> R$, the disk $D_n$ contains precisely two periodic eigenvalues. The set $\Lambda_R(\varphi)$ of the remaining periodic eigenvalues consists of $4R+2$ eigenvalues which are simple and contained in the disk $B_R$, $\Lambda_R(\varphi)\subseteq B_R$. \item[(R2)] For any $|n|> R$, the disk $D_n$ contains precisely one Dirichlet eigenvalue denoted by $\mu_n\equiv\mu_n(\varphi)$. There are $2R+1$ remaining Dirichlet eigenvalues which are contained in the disk $B_R$. These remaining eigenvalues are listed in lexicographic order with multiplicities $\mu_{-R}\preccurlyeq\cdots\preccurlyeq \mu_{k}\preccurlyeq\mu_{k+1} \preccurlyeq\cdots\preccurlyeq\mu_R$, $-R\le k\le R-1$. \end{itemize} \end{Def} Note that by Lemma \ref{lem:counting_lemma} and Lemma \ref{lem:dirichlet_spectrum}, the set $\mathcal{T}^R$ is open in $L^2_c$. For the given potential $\psi^{(1)}\in i L^2_r$ with $\mathop{\rm Spec}\nolimits_p L(\psi^{(1)})$ simple we choose $R\in\mathbb{Z}_{\ge 0}$ as follows: denote by $\ell$ the line segment in $i L^2_r$ connecting $\psi^{(0)}$ with $\psi^{(1)}$, where $\psi^{(0)}\in\mathcal{T}\cap{\mathcal W}_0$ is a potential near zero.
By replacing $\psi^{(1)}$, if necessary, by the first intersection point of $\ell$ with $\mathop{\rm Iso}\nolimits_o(\psi^{(1)})$ we can assume without loss of generality that $\ell$ intersects $\mathop{\rm Iso}\nolimits_o(\psi^{(1)})$ only at $\psi^{(1)}$. For any $\varphi\in i L^2_r$ we choose an open ball $U_\varphi$ centered at $\varphi$ in $L^2_c$ and $R_\varphi\in\mathbb{Z}_{\ge 0}$ so that the statements of Lemma \ref{lem:counting_lemma} and Lemma \ref{lem:dirichlet_spectrum} hold with $R_p$ and $R_D$ replaced by $R_\varphi$. In view of the compactness of $\ell$, we can find $\varphi_j$, $1\le j\le K$, all in $\ell$, so that $\ell\subseteq\bigcup_{1\le j\le K}U_{\varphi_j}$. Now, define \begin{equation}\label{eq:R} R:=\max_{1\le j\le K} R_{\varphi_j}. \end{equation} Further, by using Theorem \ref{prop:general_position} and by arguing as in the proof of \cite[Corollary 3.3]{KLT2}, one can construct a simple (i.e. without self-intersections) continuous path \begin{equation*}\label{eq:gamma} \gamma : [0,1]\to\Big(\bigcup_{1\le j\le K}U_{\varphi_j}\Big)\cap i L^2_r,\quad s\mapsto\psi^{(s)}, \end{equation*} connecting $\psi^{(0)}$ with $\psi^{(1)}$, so that $\gamma\subseteq\mathcal{T}^R\cap i L^2_r$. In particular, we see that the statements of Lemma \ref{lem:counting_lemma} and Lemma \ref{lem:dirichlet_spectrum} still hold uniformly on $\gamma$ with $R_p$ and $R_D$ replaced by $R$. In addition, as $\gamma\subseteq\mathcal{T}^R\cap i L^2_r$, for any $\varphi\in\gamma$ the periodic eigenvalues of $L(\varphi)$ inside $B_R$ are simple and non-real (see Lemma \ref{lem:spectrum_symmetries}). This together with the compactness of $\gamma$ and the openness of $\mathcal{T}^R$ in $L^2_c$ implies that there exists a connected open tubular neighborhood $U_{\rm tn}$ of $\gamma$ in $L^2_c$ so that the following holds: \begin{itemize} \item[(T1)] The statements of Lemma \ref{lem:counting_lemma} and Lemma \ref{lem:dirichlet_spectrum} hold uniformly in $\varphi\in U_{\rm tn}$ with $R_p$ and $R_D$ replaced by $R$. \item[(T2)] The set $U_{\rm tn}\cap i L^2_r$ is connected and for any $\varphi\in U_{\rm tn}$ the periodic eigenvalues of $L(\varphi)$ in the disk $B_R$ are simple, non-real, and have the symmetries of Lemma \ref{lem:spectrum_symmetries}. \end{itemize} It follows from the construction of the neighborhood ${\mathcal U}_0$ of zero in $L^2_c$ (see the discussion preceding Theorem \ref{th:main_near_zero}) and the property (T2) that for any $\varphi\in U_{\rm tn}\cap{\mathcal U}_0$ and for any $n\in\mathbb{Z}$, the disk $D_n$ contains precisely two periodic eigenvalues of $L(\varphi)$. For any $\varphi\in U_{\rm tn}\cap{\mathcal U}_0$ we list the periodic eigenvalues as follows: for $|n|\le R$, \[ \lambda_n^+,\lambda_n^-\in D_n\quad\text{with} \quad \pm\mathop{\rm Im}(\lambda_n^\pm)>0, \] and for $|n|> R$, \[ \lambda_n^+,\lambda_n^-\in D_n\quad\text{with} \quad\lambda_n^-\preccurlyeq\lambda_n^+. \] Then for any $|n|\le R$, in view of their simplicity, the periodic eigenvalues $\lambda_n^+$ and $\lambda_n^-$ of $L(\varphi)$ considered as functions of the potential, $\lambda_n^+,\lambda_n^- : U_{\rm tn}\cap{\mathcal U}_0\to\mathbb{C}$, are analytic.
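Let us briefly indicate the standard argument behind this analyticity (a sketch, included only for the reader's convenience and using nothing beyond the analyticity of $\chi_p$ recalled in Section \ref{sec:setup}): if $\lambda_*$ is a simple root of $\chi_p(\cdot,\varphi_*)$, then ${\partial}_\lambda\chi_p(\lambda_*,\varphi_*)\ne 0$ and, since $\chi_p$ is analytic on $\mathbb{C}\times L^2_c$, the implicit function theorem provides an analytic function $\varphi\mapsto\lambda(\varphi)$, defined near $\varphi_*$, with $\lambda(\varphi_*)=\lambda_*$ and $\chi_p(\lambda(\varphi),\varphi)=0$; applying this to $\lambda_*=\lambda_n^{\pm}(\varphi_*)$, $|n|\le R$, at any $\varphi_*\in U_{\rm tn}\cap{\mathcal U}_0$ yields the analyticity of the maps above.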
Since by (T2) the simplicity of these eigenvalues holds on the entire tubular neighborhood $U_{\rm tn}$, the analytic maps above extend to analytic maps \[ \lambda_n^+,\lambda_n^- : U_{\rm tn}\to\mathbb{C}\quad\forall |n|\le R. \] (This is in sharp contrast to the eigenvalues outside the disk $B_R$ which, at least for $|n|$ sufficiently large, are not even continuous in view of the lexicographic ordering.) We note that on $U_{\rm tn}\cap i L^2_r$, \[ \lambda_n^-=\overline{\lambda_n^+}\quad\text{where}\quad\mathop{\rm Im}(\lambda_n^+)>0\quad\forall |n|\le R. \] An important ingredient for the construction of the actions on $U_{\rm tn}$ is the choice of pairwise disjoint simple continuous paths $G_n\subseteq\mathbb{C}$, $n\in\mathbb{Z}$, also referred to as {\em cuts}, that connect $\lambda_n^-$ with $\lambda_n^+$, and continuous contours $\Gamma_n$ around $G_n$. In a first step we define the cuts $G_n$ along the path $\gamma : [0,1]\to U_{\rm tn}\cap i L^2_r$, $s\mapsto\psi^{(s)}$. To simplify notation, introduce $\lambda_n^\pm(s):=\lambda_n^\pm(\psi^{(s)})$. We note that $\forall s\in[0,1]$, \[ \lambda_n^-(s)=\overline{\lambda_n^+(s)}\quad\text{and}\quad\mathop{\rm Im}(\lambda_n^+(s))>0\quad\forall |n|\le R \] while \[ \lambda_n^-(s)=\overline{\lambda_n^+(s)}\quad\text{and}\quad\mathop{\rm Im}(\lambda_n^+(s))\ge 0\quad\forall |n|> R. \] For any $s\in[0,1]$ and $|n|>R$, we define the cuts $G_n(s):=G_n(\psi^{(s)})$ to be the vertical line segments $[\lambda_n^-(s),\lambda_n^+(s)]\subseteq D_n$, parametrized by \[ G_n : [0,1]\times[-1,1]\to\mathbb{C},\quad (s,t)\mapsto\tau_n(s)+t\big(\lambda_n^+(s)-\lambda_n^-(s)\big)/2, \] where for any $n\in\mathbb{Z}$, $\tau_n(s):=(\lambda_n^-(s)+\lambda_n^+(s))/2$. Note that $G_n(s,-t)=\overline{G_n(s,t)}$ for any $t\in[-1,1]$. For $|n|\le R$ and $s$ sufficiently small so that $\psi^{(s)}\in{\mathcal W}_0\equiv{\mathcal U}_0\cap i L^2_r$, we define $G_n(s,t)$ in a similar fashion as in the case $|n|>R$ and then show by a somewhat lengthy but straightforward argument that the cuts $G_n(s)$ can be chosen so that they depend continuously on $s\in[0,1]$. More precisely, the following holds: \begin{Lem}\label{lem:deformation_G_n} There exist continuous functions $G_n$, $|n|\le R$, with values in the disk $B_R$, \[ G_n : [0,1]\times[-1,1]\to B_R,\quad (s,t)\mapsto G_n(s,t), \] so that for any $s\in[0,1]$ the following properties hold: \begin{itemize} \item[(i)] $G_n(s)\equiv G_n(s,\cdot)$ is a simple $C^1$-smooth path such that for any $t\in[-1,1]$, \[ G_n(s,-1)=\lambda_n^-(s),\quad G_n(s,1)=\lambda_n^+(s), \quad G_n(s,-t)=\overline{G_n(s,t)}. \] \item[(ii)] The paths $G_n(s)$ and $G_k(s)$ do not intersect for any integers $n$ and $k$, $n\ne k$, such that $|n|,|k|\le R$. \item[(iii)] There exist $0<\rho_0<1$ and $\delta_0>0$ such that \[ G_n(s,t)=\tau_n(s)+t\big(\lambda_n^+(s)-\lambda_n^-(s)\big)/2\quad\forall t\in[-\rho_0,\rho_0] \] where $\tau_n(s):=\big(\lambda_n^+(s)+\lambda_n^-(s)\big)/2$ and \[ \mathop{\rm Im}\big(G_n(s,t)\big)\ge\delta_0\quad\forall t\in[\rho_0,1]. \] In particular, the path $G_n(s)$ intersects the real line at the unique point $\tau_n(s)=G_n(s,0)$.
\end{itemize}
\end{Lem}
In the next step, for any $s\in[0,1]$ and $n\in\mathbb{Z}$ we will choose a counterclockwise oriented simple $C^1$-smooth contour $\Gamma_n(s)$ in $\mathbb{C}$ around the cut $G_n(s)$ so that $\Gamma_n(s)$ is invariant under complex conjugation, $\overline{\Gamma_n(s)}=\Gamma_n(s)$, $\Gamma_n(s)\cap\mathbb{R}$ consists of two points, and $\Gamma_n(s)\cap\Gamma_k(s)=\emptyset$ for $n\ne k$. In the case $|n|>R$ we choose $\Gamma_n(s)$ to be the boundary $\Gamma_n$ of the disk $\{\lambda\in\mathbb{C}\,|\,|\lambda-n\pi|<\pi/4\}$ whereas for $|n|\le R$, the contours $\Gamma_n(s)$ are chosen in the disk $B_R$. In a similar way, for any $s\in[0,1]$ and $n\in\mathbb{Z}$ we choose a counterclockwise oriented simple $C^1$-smooth contour $\Gamma_n'(s)$ around $G_n(s)$ so that $\Gamma_n'(s)$ lies in the interior domain of the contour $\Gamma_n(s)$, $\overline{\Gamma_n'(s)}=\Gamma_n'(s)$, $\Gamma_n'(s)\cap\mathbb{R}$ consists of two points, and $\Gamma_n'(s)\cap\Gamma_k'(s)=\emptyset$ for $n\ne k$. In the case $|n|>R$ we choose $\Gamma_n'(s)$ to be the boundary of the disk $D_n$ whereas for $|n|\le R$, the contours $\Gamma_n'(s)$ are chosen so that $\inf\limits_{s\in[0,1], |n|\le R}\mathop{\rm dist}\big(\Gamma_n'(s),\Gamma_n(s)\big)>0$. For any $s\in[0,1]$ and $n\in\mathbb{Z}$ denote by $D_n'(s)$ the interior domain of the contour $\Gamma_n'(s)$ and by $D_n(s)$ the interior domain of $\Gamma_n(s)$. The domains $D_n'(s)$ and $D_n(s)$ are topologically open disks with the property that $G_n(s)\subseteq D_n'(s)\subseteq D_n(s)$. Note that by definition $D_n(s)=D_n$ for any $s\in[0,1]$ and $|n|>R$ whereas for $|n|\le R$ this does not necessarily hold. The contours $\Gamma_n'(s)$, constructed above for any $s\in[0,1]$ and $n\in\mathbb{Z}$, are now used to choose open neighborhoods $U_s\subseteq U_{\rm tn}$ of $\psi^{(s)}$ in $L^2_c$ and cuts $G_n(s,\varphi)\subseteq D_n'(s)$ for any $\varphi\in U_s$ (and not just for $\varphi$ in $\gamma$). In the case $|n|\le R$ we proceed as follows: for any given $s\in[0,1]$ consider the potential $\psi^{(s)}$. Then we choose an open ball $U_s\subseteq U_{\rm tn}$ in $L^2_c$ centered at $\psi^{(s)}$ so that for any $\varphi\in U_s$ and $k\in\mathbb{Z}$, the periodic eigenvalues $\lambda_k^+(\varphi)$ and $\lambda_k^-(\varphi)$ are in the interior domain $D_k'(s)$ of $\Gamma_k'(s)$. Then for any $|n|\le R$ we choose a $C^1$-smooth simple path $G_n(s,\varphi)$ in the interior domain $D_n'(s)$ of $\Gamma_n'(s)$, connecting $\lambda_n^-(\varphi)$ with $\lambda_n^+(\varphi)$ so that $G_n(s,\varphi)$ intersects the real axis at a unique point $\varkappa_n(s,\varphi)$. In the case when $\varphi\in U_s\cap i L^2_r$, in addition to the properties described above, $G_n(s,\varphi)$ is chosen so that $\overline{G_n(s,\varphi)}=G_n(s,\varphi)$. It is convenient to define $G_n(s,\varphi)$ for $\varphi=\psi^{(s)}$ by $G_n(s,\psi^{(s)}):=G_n(s)$. Note that then $\varkappa_n(s,\psi^{(s)})=\tau_n(s)$. By construction, the cuts $G_n(s,\varphi)$, $|n|\le R$, do not necessarily depend continuously on $\varphi\in U_s$. In the case when $|n|>R$ we choose $G_n(s,\varphi)$ to be the line segment $G_n(\varphi):=[\lambda_n^-(\varphi),\lambda_n^+(\varphi)]$ for any $\varphi\in U_{\rm tn}$.
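To fix ideas, here is a minimal numerical sketch (Python) of the contour geometry in the model case $|n|>R$. It is not part of the construction: the eigenvalue data and the radius of the inner circle are purely illustrative (the true $\Gamma_n'(s)$ is the boundary of $D_n$). The sketch builds the vertical cut for a conjugate pair of eigenvalues near $n\pi$, takes the circle of radius $\pi/4$ about $n\pi$ in the role of $\Gamma_n$ and a smaller concentric circle in the role of $\Gamma_n'$, and checks the endpoint and conjugation properties of the cut as well as the inclusions $G_n\subseteq D_n'\subseteq D_n$.
\begin{verbatim}
import numpy as np

# Toy data for one index |n| > R: a conjugate pair of periodic eigenvalues
# close to n*pi (values are illustrative, not computed from any operator L).
n = 5
lam_minus = n * np.pi - 0.07j
lam_plus  = n * np.pi + 0.07j
tau = (lam_plus + lam_minus) / 2          # midpoint of the cut

# Vertical cut G_n(t) = tau + t*(lam_plus - lam_minus)/2 for t in [-1, 1].
t = np.linspace(-1.0, 1.0, 201)
G = tau + t * (lam_plus - lam_minus) / 2

# Outer radius pi/4 (the Gamma_n of the text); the inner radius is a
# hypothetical stand-in for the boundary of D_n.
r_outer, r_inner = np.pi / 4, np.pi / 6

# Endpoints of the cut are the two eigenvalues.
assert np.isclose(G[0], lam_minus) and np.isclose(G[-1], lam_plus)
# Conjugation symmetry of the cut: G(-t) is the conjugate of G(t).
assert np.allclose(G[::-1], np.conj(G))
# The cut lies inside the inner disk, which lies inside the outer one.
assert np.max(np.abs(G - n * np.pi)) < r_inner < r_outer
print("cut from", G[0], "to", G[-1], "with midpoint", tau)
\end{verbatim}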
The contours $\Gamma_n(s)$, $0\le s\le 1$, and the cuts $G_n(s,\varphi)$, $\varphi\in U_s$, $n\in\mathbb{Z}$, are a key ingredient not only for the construction of the actions but also for the construction of a family of one forms used in the subsequent section to define the angles. Let us begin with the one forms. To obtain such a family of one forms, we apply Theorem 1.3 in \cite{KT2}: shrinking the ball $U_s$ if necessary, it follows that there exist analytic functions
\[
\zeta_n^{(s)} : \mathbb{C}\times U_s\to\mathbb{C},\quad n\in\mathbb{Z},
\]
and an integer $R_s\ge R$, used to describe the location of the zeros of these functions, so that for any $\varphi\in U_s$ and $n\in\mathbb{Z}$,
\begin{equation}\label{eq:normalization_s}
\frac{1}{2\pi}\int_{\Gamma_m(s)}\frac{\zeta_n^{(s)}(\lambda,\varphi)}{\sqrt[c]{\Delta^2(\lambda,\varphi)-4}} \,d\lambda=\delta_{nm},\quad m\in\mathbb{Z}.
\end{equation}
The {\em canonical root} appearing in the denominator of the integrand in \eqref{eq:normalization_s} is defined by the infinite product
\begin{equation}\label{eq:canonical_root}
\sqrt[c]{\Delta^2(\lambda,\varphi)-4}:=2 i \prod_{k\in\mathbb{Z}} \frac{\sqrt[\rm st]{(\lambda_k^+(\varphi)-\lambda)(\lambda_k^-(\varphi)-\lambda)}}{\pi_k}
\end{equation}
and the {\em standard root} $\sqrt[\rm st]{(\lambda_k^+(\varphi)-\lambda)(\lambda_k^-(\varphi)-\lambda)}$ is defined as the unique holomorphic function on $\mathbb{C}\setminus G_k(s,\varphi)$ satisfying the asymptotic relation
\begin{equation}\label{eq:standard_root}
\sqrt[\rm st]{(\lambda_k^+(\varphi)-\lambda)(\lambda_k^-(\varphi)-\lambda)}\sim -\lambda \quad\text{as}\quad |\lambda|\to\infty.
\end{equation}
Note that by the asymptotic estimate in Lemma \ref{lem:counting_lemma} (i), for any $\varphi\in U_s$ the canonical root $\sqrt[c]{\Delta^2(\lambda,\varphi)-4}$ is a holomorphic function of $\lambda$ in the domain $\mathbb{C}\setminus\big(\bigsqcup_{k\in\mathbb{Z}}G_k(s,\varphi)\big)$. In addition, the map
\begin{equation}\label{eq:canonical_root_analytic}
\Big(\mathbb{C}\setminus\big(\bigsqcup_{k\in\mathbb{Z}}\overline{D_k'(s)}\big)\Big)\times U_s\to\mathbb{C}, \quad(\lambda,\varphi)\mapsto\sqrt[c]{\Delta^2(\lambda,\varphi)-4},
\end{equation}
is analytic and its image does not contain zero.
\begin{Rem}
The infinite product in \eqref{eq:canonical_root} is understood as the limit
\[
\lim_{N\to\infty} 2 i\prod_{|k|\le N}\frac{\sqrt[\rm st]{(\lambda^+_k(\varphi)-\lambda)(\lambda^-_k(\varphi)-\lambda)}}{\pi_k}.
\]
In order to see that this limit exists locally uniformly in $\varphi\in U_s$ and $\lambda\in\mathbb{C}\setminus\big(\bigsqcup_{k\in\mathbb{Z}}\overline{D_k'(s)}\big)$ one combines the terms corresponding to $k$ and $-k$ for $1\le k\le N$ and notices that in view of Lemma \ref{lem:counting_lemma} (i),
\[
\frac{(\lambda^+_k(\varphi)-\lambda)(\lambda^+_{-k}(\varphi)-\lambda)}{(-\pi_k^2)}\cdot \frac{(\lambda^-_k(\varphi)-\lambda)(\lambda^-_{-k}(\varphi)-\lambda)}{(-\pi_k^2)}=1+\frac{a_k}{k},
\]
where the remainder $(a_k)_{k\in\mathbb{Z}}$ is bounded in $\ell^2$ locally uniformly in $\varphi\in U_s$ and $\lambda\in\mathbb{C}\setminus\big(\bigsqcup_{k\in\mathbb{Z}}\overline{D_k'(s)}\big)$ (cf. \cite{GK1}).
\end{Rem}
According to Theorem 1.3 in \cite{KT2}, there exists $R_s>R$ so that for any $\varphi\in U_s$ and $n\in\mathbb{Z}$ the zeros of the entire function $\zeta_n^{(s)}(\cdot,\varphi)$, counted with their multiplicities, listed {\em lexicographically}, and denoted by $\big\{\sigma^n_k(\varphi)\,\big|\,k\in\mathbb{Z}\setminus\{n\}\big\}$, have the following properties:
\begin{itemize}
\item[(D1)] For any $|k|>R_s$, $k\ne n$, $\sigma^n_k(\varphi)$ is the only zero of $\zeta_n^{(s)}(\cdot,\varphi)$ in the disk $D_k$ and the map $\sigma^n_k : U_s\to D_k$ is analytic. Furthermore, for any $|k|\le R_s$, $k\ne n$, $\sigma^n_k(\varphi)\in B_{R_s}$.
\item[(D2)] For any $|k|>R_s$, $k\ne n$, we have that
\[
\sigma^n_k(\varphi)=\tau_k(\varphi)+\gamma_k^2(\varphi)\ell_k^2,\quad\gamma_k(\varphi):=\lambda_k^+(\varphi)-\lambda_k^-(\varphi),
\]
uniformly in $n\in\mathbb{Z}$ and locally uniformly in $\varphi\in U_s$.
\item[(D3)] The entire function $\zeta_n^{(s)}(\cdot,\varphi)$ admits the product representation
\[
\zeta_n^{(s)}(\lambda,\varphi)=-\frac{2}{\pi_n}\prod_{k\ne n}\frac{\sigma^n_k(\varphi)-\lambda}{\pi_k}.
\]
Moreover, if $\lambda_k^+(\varphi)=\lambda_k^-(\varphi)=\tau_k(\varphi)$ for some $k\ne n$ then $\tau_k(\varphi)$ is a zero of $\zeta_n^{(s)}(\cdot,\varphi)$.
\end{itemize}
Finally, we use the compactness of the path $\gamma$ to find finitely many numbers $s_1<\dots<s_N$ in the interval $[0,1]$ so that $\big\{U_{s_1},\dots,U_{s_N}\big\}$ is an open cover of $\gamma$ in $L^2_c$. We can assume without loss of generality that $s_1=0$ and $s_N=1$. We now shrink $U_{\rm tn}$ and set
\begin{equation}\label{eq:U_tn}
U_{\rm tn}:=\bigcup\limits_{1\le j\le N}U_{s_j},\quad R':=\max\limits_{1\le j\le N}R_{s_j}.
\end{equation}
In the sequel we will always assume that $U_{s_1}\subseteq{\mathcal U}_0$ where ${\mathcal U}_0$ is the open ball in $L^2_c$ centered at zero introduced at the end of Section \ref{sec:setup}. By the construction above, $U_{\rm tn}$ is a connected tubular neighborhood of $\gamma$ so that the properties (T1) and (T2) hold.
\begin{Lem}\label{lem:important}
For any $1\le k<l\le N$, $\varphi\in U_{s_k}\cap U_{s_l}$ and $m\in\mathbb{Z}$ the contours $\Gamma_m(s_k)$ and $\Gamma_m(s_l)$ are homologous within the resolvent set of $L(\varphi)$.
\end{Lem}
The statement of this Lemma holds since by Lemma \ref{lem:deformation_G_n}, for any $m\in\mathbb{Z}$ the cut $G_m(s_l)$ is obtained from $G_m(s_k)$ by a continuous deformation $\big\{G_m(s)\,\big|\,s\in[s_k,s_l]\big\}$ satisfying the properties listed in Lemma \ref{lem:deformation_G_n}. Lemma \ref{lem:important} implies the following. If $\varphi\in U_{s_k}\cap U_{s_l}$ for some $1\le k<l\le N$, then in view of the normalization condition \eqref{eq:normalization_s}, we conclude that for any $n\in\mathbb{Z}$,
\[
\int_{\Gamma_m(s_k)}\frac{\zeta_n^{(s_k)}(\lambda,\varphi)}{\sqrt[c]{\Delta^2(\lambda,\varphi)-4}} \,d\lambda = \int_{\Gamma_m(s_l)}\frac{\zeta_n^{(s_l)}(\lambda,\varphi)}{\sqrt[c]{\Delta^2(\lambda,\varphi)-4}} \,d\lambda,\quad m\in\mathbb{Z}.
\]
This together with Lemma \ref{lem:important} and the definition of the canonical root \eqref{eq:canonical_root} shows that
\[
\int_{\Gamma_m(s_l)} \frac{\zeta_n^{(s_k)}(\lambda,\varphi)-\zeta_n^{(s_l)}(\lambda,\varphi)}{\sqrt[c]{\Delta^2(\lambda,\varphi)-4}} \,d\lambda=0,\quad m\in\mathbb{Z}.
\]
Now we can apply \cite[Proposition 5.2]{KT2} to conclude that for any $n\in\mathbb{Z}$,
\[
\zeta_n^{(s_k)}(\cdot,\varphi)=\zeta_n^{(s_l)}(\cdot,\varphi).
\]
\begin{Rem}\label{rem:uniqueness_differentials}
An important point in the above argument is that we apply \cite[Proposition 5.2]{KT2} to the difference of the forms $\omega_n^{(s_k)}:=\frac{\zeta_n^{(s_k)}(\lambda,\varphi)\,d\lambda}{\sqrt{\Delta^2(\lambda,\varphi)-4}}$ and $\omega_n^{(s_l)}:=\frac{\zeta_n^{(s_l)}(\lambda,\varphi)\,d\lambda}{\sqrt{\Delta^2(\lambda,\varphi)-4}}$. More specifically, by construction (see the ansatz (16) in \cite{KT2}), the two forms have the same ``leading'' term $\Omega_n$ which cancels when we take their difference. This allows us to apply \cite[Proposition 5.2]{KT2} to the difference of the forms and then conclude that they coincide.
\end{Rem}
The above allows us to define $\zeta_n(\lambda,\varphi)$ for $(\lambda,\varphi)\in\mathbb{C}\times U_{\rm tn}$ by setting
\[
\zeta_n\big|_{\mathbb{C}\times U_{s_k}}:=\zeta_n^{(s_k)}.
\]
In this way we proved
\begin{Prop}\label{lem:differentials_on_U_tn}
For any $n\in\mathbb{Z}$, the analytic function
\[
\zeta_n :\mathbb{C}\times U_{\rm tn}\to\mathbb{C}
\]
satisfies the above properties {\rm (D1)--(D3)} with $\zeta_n^{(s)}$ replaced by $\zeta_n$ and $R_s$ replaced by $R'$ uniformly on $U_{\rm tn}$. In addition, for any $\varphi\in U_{\rm tn}$ so that $\varphi\in U_{s_k}$ for some $1\le k\le N$, the analytic function $\zeta_n$ satisfies the normalization conditions
\[
\frac{1}{2\pi}\int_{\Gamma_m(s_k)}\frac{\zeta_n(\lambda,\varphi)}{\sqrt[c]{\Delta^2(\lambda,\varphi)-4}} \,d\lambda=\delta_{nm},\quad m\in\mathbb{Z}.
\]
\end{Prop}
We now turn to the construction of the actions on the connected tubular neighborhood $U_{\rm tn}=\bigcup_{1\le j\le N}U_{s_j}$ of the path $\gamma$ defined in \eqref{eq:U_tn}. Recall that $s_1=0$ and $U_{s_1}\subseteq{\mathcal U}_0$ where ${\mathcal U}_0$ is the open ball in $L^2_c$ centered at zero, introduced at the end of Section \ref{sec:setup}. For any $1\le j\le N$ and $\varphi\in U_{s_j}$ define the (prospective) actions
\begin{equation}\label{eq:actions}
I_n^{(j)}(\varphi):=\frac{1}{\pi}\int_{\Gamma_n(s_j)}\frac{\lambda\dot\Delta(\lambda,\varphi)} {\sqrt[c]{\Delta^2(\lambda,\varphi)-4}}\,d\lambda,\quad n\in\mathbb{Z}.
\end{equation}
This definition is motivated by \cite{KLTZ} where actions were defined by formula \eqref{eq:actions} for potentials $\varphi$ in the ball ${\mathcal U}_0$ (cf. Theorem \ref{th:main_near_zero}). Note that the contours of integration $\Gamma_n(s_j)$, $n\in\mathbb{Z}$, appearing in \eqref{eq:actions} are independent of $\varphi\in U_{s_j}$ and are contained in the set $\mathbb{C}\setminus\big(\bigsqcup_{k\in\mathbb{Z}}\overline{D_k'(s_j)}\big)$. In addition, the mapping $\dot\Delta : \mathbb{C}\times L^2_c\to\mathbb{C}$ is analytic and the mapping \eqref{eq:canonical_root_analytic} is analytic and does not have zeros (cf. Section \ref{sec:setup}). This shows that for any $n\in\mathbb{Z}$ and $1\le j\le N$,
\[
I_n^{(j)} : U_{s_j}\to\mathbb{C}
\]
is analytic. If $\varphi\in U_{s_k}\cap U_{s_l}$ for some $1\le k<l\le N$, then for any $n\in\mathbb{Z}$ the contours $\Gamma_n(s_k)$ and $\Gamma_n(s_l)$ are homologous in the resolvent set of $L(\varphi)$ -- see Lemma \ref{lem:important}. This together with the definition of the canonical root \eqref{eq:canonical_root} shows that for any $n\in\mathbb{Z}$,
\[
I_n^{(k)}(\varphi)=I_n^{(l)}(\varphi).
\]
Hence,
\begin{equation}\label{eq:I_n}
I_n : U_{\rm tn}\to\mathbb{C},\quad I_n\big|_{U_{s_j}}:=I_n^{(j)},\quad 1\le j\le N,
\end{equation}
is well defined. In this way we proved
\begin{Prop}\label{lem:actions}
For any $n\in\mathbb{Z}$, the function $I_n : U_{\rm tn}\to\mathbb{C}$, defined by \eqref{eq:I_n}, is analytic. On $U_{s_1}\cap{\mathcal U}_0$ the function $I_n$ coincides with the $n$-th action variable constructed in \cite{KLTZ}.
\end{Prop}
\begin{Rem}
Alternatively, one can analytically extend the actions by arguments similar to the ones used in the proof of Proposition \ref{prop:beta^n_tn} to analytically extend the angles. Here we use instead a deformation of the cuts (cf. Lemma \ref{lem:deformation_G_n}) and a subsequent deformation of the contours $\Gamma_n(s)$, which provide a geometrically simple approach.
\end{Rem}
In view of Theorem \ref{th:main_near_zero}, for any $n\in\mathbb{Z}$, the action $I_n|_{U_{s_1}\cap{\mathcal W}_0}$ is real valued and for any $m,n\in\mathbb{Z}$,
\[
\big\{I_m|_{U_{s_1}\cap{\mathcal U}_0},I_n|_{U_{s_1}\cap{\mathcal U}_0}\big\}=0.
\]
We have the following
\begin{Coro}\label{coro:actions_poisson_relations}
For any $m,n\in\mathbb{Z}$, $\{I_m,I_n\}=0$ on $U_{\rm tn}$. Moreover, the action $I_n : U_{\rm tn}\to\mathbb{C}$ is real-valued when restricted to $U_{\rm tn}\cap i L^2_r$.
\end{Coro}
\begin{proof}[Proof of Corollary \ref{coro:actions_poisson_relations}]
Note that the analyticity of the action $I_n : U_{\rm tn}\to\mathbb{C}$ implies that for any $j=1,2$ the $L^2$-gradient $\partial_j I_n : U_{\rm tn}\to L^2_c$ is analytic. By the definition \eqref{eq:poisson_bracket} of the Poisson bracket we conclude that $\{I_n,I_m\} : U_{\rm tn}\to\mathbb{C}$ is analytic for any $m,n\in\mathbb{Z}$. Since $U_{\rm tn}$ is connected and $\{I_n,I_m\}|_{U_{s_1}}=0$ by the considerations above, the first statement of the Corollary follows. The second statement follows from Proposition \ref{lem:actions}, the fact that $I_n : U_{\rm tn}\to\mathbb{C}$ is real-valued when restricted to $U_{s_1}\cap{\mathcal W}_0$, and Lemma \ref{lem:real-analyticity} below.
\end{proof}
In the proof of Corollary \ref{coro:actions_poisson_relations} we used the following result about real analytic functions.
\begin{Lem}\label{lem:real-analyticity}
Let $X_r$ be a real subspace inside the complex Banach space $X_c=X_r\otimes\mathbb{C}$, $U$ a connected open set in $X_c$ such that $U\cap X_r$ is connected, and $f : U\to\mathbb{C}$ an analytic function. Assume that there exists an open ball $B(x_0)$ in $X_c$ centered at $x_0\in U\cap X_r$ such that $B(x_0)\subseteq U$ and $f|_{B(x_0)\cap X_r}$ is real-valued. Then $f : U\to\mathbb{C}$ is real-valued on $U\cap X_r$.
\end{Lem}
The Lemma follows easily from standard arguments involving Taylor series expansions of $f$.
\section{Angles on the neighborhood $U_{\rm tn}$}\label{sec:angles_in_U_tn}
In this Section we analytically extend the angles, constructed in \cite{KLTZ} inside $U_{s_1}$, along the tubular neighborhood $U_{\rm tn}$ defined by \eqref{eq:U_tn}. First we need some preparation.
As by construction $R'\ge R$ where $R'$ and $R$ are defined by \eqref{eq:U_tn} and, respectively, \eqref{eq:R}, there is a (possibly empty) set of indices $R<|k|\le R'$ so that the statements of Lemma \ref{lem:counting_lemma} and Lemma \ref{lem:dirichlet_spectrum} hold uniformly in $\varphi\in U_{\rm tn}$ with $R_p$ and $R_D$ replaced by $R'$ but, in contrast to (T2), there could be double periodic eigenvalues of $L(\varphi)$ inside the disk $B_{R'}$. More specifically, double periodic eigenvalues in $B_{R'}$ can only appear in the union of disks $\bigcup_{R<|k|\le R'}D_k$, since by construction, $\lambda_k^+(\varphi),\lambda_k^-(\varphi)\in D_k\subseteq B_{R'}$ for any $\varphi\in U_{\rm tn}$ and $R<|k|\le R'$. Next we argue as in the proof of Corollary 3.3 in \cite{KLT2} to construct a continuous path ${\tilde\gamma} : [0,1]\to U_{\rm tn}\cap i L^2_r$ so that $\tilde\gamma(0)=\psi^{(0)}$, $\tilde\gamma(1)=\psi^{(1)}$, and for any potential $\varphi\in{\tilde\gamma}\big([0,1]\big)$ the operator $L(\varphi)$ has only simple (and hence non-real) periodic eigenvalues in the disk $B_{R'}$. In view of the compactness of $\tilde\gamma$ we can find a connected open neighborhood $V_{\rm tn}$ of $\tilde\gamma$ in $L^2_c$ so that $V_{\rm tn}\subseteq U_{\rm tn}$, $V_{\rm tn}\cap i L^2_r$ is connected, and for any $\varphi\in V_{\rm tn}$, the operator $L(\varphi)$ has only simple, non-real periodic eigenvalues in the disk $B_{R'}$. To simplify notation, in the sequel we denote $V_{\rm tn}$ by $U_{\rm tn}$ and $R'$ by $R$. In this way, we obtain
\begin{Lem}\label{lem:U_tn}
The neighborhood $U_{\rm tn}$ is connected, contains the potentials $\psi^{(0)}$ and $\psi^{(1)}$, and satisfies the properties (T1) and (T2) as well as the statement of Proposition \ref{lem:differentials_on_U_tn} with $R'$ replaced by $R$.
\end{Lem}
Now, we proceed with the construction of the angles. For any given $\varphi\in U_{\rm tn}$ consider the affine curve
\[
\mathcal{C}_\varphi:=\big\{(\lambda,w)\in\mathbb{C}^2\,\big|\,w^2=\Delta^2(\lambda,\varphi)-4\big\}
\]
and the projection $\pi_1 : \mathcal{C}_\varphi\to\mathbb{C}$, $(\lambda,w)\mapsto\lambda$. In fact, in what follows we work only with the following subsets of $\mathcal{C}_\varphi$:
\begin{equation}\label{eq:riemann_surface}
\mathcal{C}_{\varphi,R}:=\pi_1^{-1}\big(B_R\big)
\end{equation}
and
\begin{equation}\label{eq:k-th_handle}
{\mathcal D}_{\varphi,k}:=\pi_1^{-1}\big(D_k\big),\quad|k|>R.
\end{equation}
By Lemma \ref{lem:U_tn}, for any $\varphi\in U_{\rm tn}$ there are precisely $4R+2$ periodic eigenvalues of $L(\varphi)$ inside the disk $B_R$ and they are all simple. This implies that $\mathcal{C}_{\varphi,R}$ is an open Riemann surface with $2 R+1$ handles whose boundary in $\mathcal{C}_\varphi$ is a disjoint union of two circles. The same Lemma also implies that for any $\varphi\in U_{\rm tn}$ and $|k|>R$,
\[
\lambda_k^+(\varphi),\,\,\lambda_k^-(\varphi),\,\, \mu_k(\varphi)\in D_k,
\]
and there are no other periodic or Dirichlet eigenvalues of $L(\varphi)$ inside $D_k$. While for any $|k|>R$, the Dirichlet eigenvalue $\mu_k$ is simple and hence depends analytically on $\varphi\in U_{\rm tn}$, the periodic eigenvalues $\lambda_k^+$ and $\lambda_k^-$ are not necessarily simple.
In particular, we see that ${\mathcal D}_{\varphi,k}$ is either a Riemann surface diffeomorphic to $(0,1)\times\mathbb{T}$ or, when $\lambda_k^+=\lambda_k^-$, a transversal intersection in $\mathbb{C}^2$ of two complex disks at their centers. Now, let $\varphi\in U_{\rm tn}$ and assume that $\varphi\in U_{s_l}$ for some $1\le l\le N$ (cf. \eqref{eq:U_tn}). For any $k\in\mathbb{Z}$ consider the contour $\Gamma_k(s_l)$. By Lemma \ref{lem:important}, the homology class of the cycle $\Gamma_k(s_l)$ within the resolvent set of $L(\varphi)$ is independent of the choice of $1\le l\le N$ with the property that $\varphi\in U_{s_l}$. Denote by $a_k$ the homology class in $\mathcal{C}_\varphi$ of the component of $\pi_1^{-1}\big(\Gamma_k(s_l)\big)$ that lies on the canonical sheet
\[
\mathcal{C}_\varphi^c:=\Big\{(\lambda,w)\,\Big|\,\lambda\in\mathbb{C}\setminus\Big(\bigsqcup_{k\in\mathbb{Z}}G_k(s_l,\varphi)\Big), w=\sqrt[c]{\Delta^2(\lambda,\varphi)-4}\Big\}
\]
of the curve $\mathcal{C}_\varphi$ where the canonical root $\sqrt[c]{\Delta^2(\lambda,\varphi)-4}$ is defined by \eqref{eq:canonical_root}. By the discussion above, for any $\varphi\in U_{\rm tn}$ and $k\in\mathbb{Z}$ the class $a_k$ is independent of the choice of $1\le l\le N$ with the property that $\varphi\in U_{s_l}$. In what follows we will not distinguish between the class $a_k$ and a given $C^1$-smooth representative of $a_k$, which we call an {\em $a_k$-cycle}. In a similar way for any $1\le |k|\le R$ we define the {\em $b_k$-cycle}. More specifically, given any $1\le |k|\le R$ and $\varphi\in U_{\rm tn}$ so that $\varphi\in U_{s_l}$ for some $1\le l\le N$, consider the intersection point $\varkappa_k(s_l,\varphi)$ of the cut $G_k(s_l,\varphi)$ with the real axis. Denote by $b_k$ the homology class in $\mathcal{C}_\varphi$ of the cycle $\pi_1^{-1}\big([\varkappa_{k-1}(s_l,\varphi),\varkappa_k(s_l,\varphi)]\big)$ if $1\le k\le R$ and the cycle $\pi_1^{-1}\big([\varkappa_k(s_l,\varphi),\varkappa_{k+1}(s_l,\varphi)]\big)$ if $-R\le k\le-1$, oriented so that the intersection index $a_k\circ b_k$ of $a_k$ with $b_k$ is equal to one. It is not hard to see that for any $1\le |k|\le R$ the class $b_k$ is independent of the choice of $1\le l\le N$ with the property that $\varphi\in U_{s_l}$. Moreover, the first homology group
\begin{equation}\label{eq:homology_group}
H_1(\mathcal{C}_{\varphi,R},\mathbb{Z})=\mathop{\rm span}\big\langle a_0, a_k, b_k, 1\le |k|\le R\big\rangle_\mathbb{Z}\cong\mathbb{Z}^{4R+1}.
\end{equation}
In view of Lemma \ref{lem:U_tn} and Proposition \ref{lem:differentials_on_U_tn}, for any $n\in\mathbb{Z}$ the analytic functions $\zeta_n : \mathbb{C}\times U_{\rm tn}\to\mathbb{C}$ are well defined and satisfy the normalization conditions
\begin{equation}\label{eq:a-normalization}
\frac{1}{2\pi}\int_{a_k}\frac{\zeta_n(\lambda,\varphi)}{\sqrt{\chi_p(\lambda,\varphi)}}\,d\lambda= \delta_{nk},\quad k\in\mathbb{Z},
\end{equation}
where by \eqref{eq:chi_p}, $\chi_p(\lambda,\varphi)=\Delta^2(\lambda,\varphi)-4$. For any $\varphi\in U_{\rm tn}$, $n\in\mathbb{Z}$, and $1\le |k|\le R$, denote by $p_{nk}$ the $b_k$-period
\[
p_{nk}\equiv p_{nk}(\varphi):=\int_{b_k}\frac{\zeta_n(\lambda,\varphi)}{\sqrt{\chi_p(\lambda,\varphi)}}\,d\lambda.
\]
(Note that $p_{nk}(\varphi)$ is well defined since $b_k$ is independent of the choice of $1\le l\le N$ with the property that $\varphi\in U_{s_l}$.)
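The normalization \eqref{eq:a-normalization} is a condition on $a$-periods, and its mechanism can be tested in the simplest possible model of a single cut. The following numerical sketch (Python) is purely illustrative: the values are toy data, the single-gap standard root $w_k$ of \eqref{eq:w_k} below is used in place of the canonical root, and the factor $1/(2\pi i)$ in place of the normalization constant of the text. It evaluates $a$-periods over circles around the cut by the trapezoid rule: the $a$-period of $d\lambda/w_k$ is $-1$ independently of the contour, while the $a$-period of $(\sigma-\lambda)\,d\lambda/w_k$ equals $\tau_k-\sigma$ and hence vanishes exactly when the zero $\sigma$ sits at the midpoint $\tau_k$ of the cut -- consistent with the location of the zeros $\sigma^n_k$ near $\tau_k$ described in (D2).
\begin{verbatim}
import numpy as np

# Toy single-gap model (illustrative numbers): a conjugate pair of
# "periodic eigenvalues" lam = tau +/- gamma/2 and the standard root
#   w(lam) = (tau - lam) * sqrt(1 - (gamma/2)^2 / (tau - lam)^2),
# holomorphic outside the cut and asymptotic to -lam at infinity.
tau, gamma = 0.3, 0.8j
lam_plus, lam_minus = tau + gamma / 2, tau - gamma / 2

def w(lam):
    return (tau - lam) * np.sqrt(1.0 - (gamma / 2) ** 2 / (tau - lam) ** 2)

def a_period(f, center, radius, m=4000):
    """(1/(2*pi*i)) * integral of f over the ccw circle |lam-center|=radius."""
    t = np.linspace(0.0, 2 * np.pi, m, endpoint=False)
    lam = center + radius * np.exp(1j * t)
    dlam = 1j * radius * np.exp(1j * t)
    return np.sum(f(lam) * dlam) * (2 * np.pi / m) / (2j * np.pi)

# (i) The a-period of dlam/w is -1, independently of the enclosing contour.
print(a_period(lambda l: 1 / w(l), tau, 1.0))   # approx -1
print(a_period(lambda l: 1 / w(l), tau, 2.0))   # approx -1

# (ii) The a-period of (sigma - lam) dlam / w equals tau - sigma; it vanishes
#      exactly when the zero sigma sits at the midpoint tau of the cut.
sigma = 0.55
print(a_period(lambda l: (sigma - l) / w(l), tau, 1.0))  # approx -0.25
print(a_period(lambda l: (tau - l) / w(l), tau, 1.0))    # approx 0
\end{verbatim}
In the text the corresponding computation is carried out with the canonical root and the products over all gaps, which only modifies the analytic factor multiplying $(\sigma^n_k-\lambda)/w_k(\lambda)$ on $D_k$ (cf. \eqref{4.6} below).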
Recall from Lemma \ref{lem:dirichlet_spectrum} that for any $\varphi\in U_{\rm tn}$ we list the Dirichlet eigenvalues $\mu_k=\mu_k(\varphi)$, $k\in\mathbb{Z}$, in lexicographic order and with multiplicities. For any $k\in\mathbb{Z}$ we define the {\em Dirichlet divisor}
\begin{equation}\label{eq:dirichlet_divisor}
\mu_k^*(\varphi):=\Big(\mu_k(\varphi),\grave{m}_2\big(\mu_k(\varphi),\varphi\big)+ \grave{m}_3\big(\mu_k(\varphi),\varphi\big)\Big)
\end{equation}
where for any $\lambda\in\mathbb{C}$
\[
\begin{pmatrix} \grave{m}_1(\lambda,\varphi)&\grave{m}_2(\lambda,\varphi)\\ \grave{m}_3(\lambda,\varphi)&\grave{m}_4(\lambda,\varphi) \end{pmatrix} :=M(1,\lambda,\varphi)
\]
and $M(x,\lambda,\varphi)$ is the fundamental solution \eqref{eq:M}. Note that $\mu_k^*(\varphi)$ lies on the curve $\mathcal{C}_\varphi$ since by \cite[Lemma 6.6]{GK1},
\[
\big(\grave{m}_2(\mu_k)+\grave{m}_3(\mu_k)\big)^2= \big(\grave{m}_1(\mu_k)+\grave{m}_4(\mu_k)\big)^2-4= \Delta(\mu_k)^2-4.
\]
As the Floquet matrix $M(1,\lambda,\varphi)\in{\rm Mat}_{2\times 2}$ is analytic in $(\lambda,\varphi)\in\mathbb{C}\times L^2_c$, we conclude from \eqref{eq:dirichlet_divisor} and the discussion above that for any $|k|>R$, the mapping $U_{\rm tn}\to \mathbb{C}^2$, $\varphi\mapsto\mu_k^*(\varphi)$, is analytic. With these preparations we are ready to define for any $\varphi\in U_{\rm tn}$ and $n\in\mathbb{Z}$ the following {\em multivalued} functions,
\begin{equation}\label{4.1}
\beta^n(\varphi):=\sum_{|k| \leq R}\int^{\mu^*_k(\varphi)}_{\lambda^-_k(\varphi)} \frac{\zeta_n(\lambda,\varphi)}{\sqrt{\chi_p(\lambda,\varphi)}}\,d\lambda
\end{equation}
and, for any $|k| > R$,
\begin{equation}\label{4.2}
\beta^n_k(\varphi):=\int^{\mu^*_k(\varphi)}_{\lambda^-_k(\varphi)} \frac{\zeta_n(\lambda,\varphi)}{\sqrt{\chi_p(\lambda,\varphi)}}\,d\lambda.
\end{equation}
Let us discuss the definition of these path integrals in more detail. In \eqref{4.1} the paths of integration are chosen in $\mathcal{C}_{\varphi,R}$. The integrals in \eqref{4.1} depend on the choice of the path but only up to integer linear combinations of the periods of the one form $\frac{\zeta _n(\lambda)}{\sqrt{\chi _p(\lambda )}}\,d\lambda$ with respect to the basis of cycles $(a_k)_{|k|\leq R}$ and $(b_k)_{1\le |k|\le R}$ on $\mathcal{C}_{\varphi,R}$. More specifically, if $|n|>R$, then since $\int_{a_k}\frac{\zeta _n(\lambda)}{\sqrt{\chi _p(\lambda )}}\,d\lambda=0$ for any $|k|\le R$, the quantity $\beta^n(\varphi)$ is defined modulo the lattice
\begin{equation}\label{4.3}
\mathcal{L}_n\equiv \mathcal{L}_n(\varphi ):= \Big\{ \sum_{1\le |k| \leq R} m_k\,p_{nk}(\varphi)\,\Big|\, m_k\in\mathbb{Z},\, 1\le |k|\leq R\Big\}
\end{equation}
whereas, if $|n|\le R$, it is defined modulo $\mathop{\rm span}\langle 2\pi,\mathcal{L}_n\rangle_\mathbb{Z}$ since $\frac{1}{2\pi}\int_{a_k}\frac{\zeta _n(\lambda)}{\sqrt{\chi _p(\lambda )}}\,d\lambda=\delta_{nk}$. In \eqref{4.2}, for $|k| > R$, the path of integration is chosen to be in ${\mathcal D}_{\varphi,k}$. If $k\ne n$, the integral $\beta^n_k(\varphi)$ is independent of the path whereas for $k=n$ with $|n|>R$, the integral $\beta^n_n(\varphi)$ is defined modulo $2\pi$ on $U_{\rm tn}\backslash\mathcal{Z}_n$ where
\begin{equation}\label{eq:Z_n}
\mathcal{Z}_n :=\big\{\varphi\in U_{\rm tn}\,\big|\,\gamma ^2_n=0\big\}.
\end{equation}
Since for any $|n|>R$, $\gamma_n^2$ is analytic on $U_{\rm tn}$ (cf. \cite[Lemma 12.4]{GK1}), $\mathcal{Z}_n$ is an analytic subvariety of $U_{\rm tn}$.
Furthermore, by the proof of Lemma 7.9 and Proposition 7.10 in \cite{GK1}, $\mathcal{Z}_n\cap i L^2_r$ is a real analytic submanifold of $U_{\rm tn}\cap i L^2_r$ of real codimension two. Defining $\gamma_n:=\lambda^+_n-\lambda^-_n$ for $|n|\le R$ it is clear that $\mathcal{Z}_n=\big\{\varphi\in U_{\rm tn}\,\big|\,\gamma ^2_n=0\big\}=\emptyset$ for $|n|\le R$. Note that the integrals in \eqref{4.1} and \eqref{4.2} exist since whenever $\lambda ^+_k \not=\lambda^-_k$, the integrands have a singularity of the form $(\lambda-\lambda^{\pm} _k)^{-1/2}$ for $\lambda$ near $\lambda ^{\pm} _k$, and hence are integrable. If $\lambda ^+_k = \lambda ^-_k$ (and hence by the construction of $U_{\rm tn}$ necessarily $|k|>R$), the singularity of the integrand in \eqref{4.2} is removable since, by Lemma \ref{lem:U_tn} and Proposition \ref{lem:differentials_on_U_tn} (see property (D3)), the root $\sigma^n_k$ of $\zeta_n(\lambda)$ in $D_k$ then coincides with $\tau_k$ which, in view of \eqref{eq:canonical_root}, is a zero of the denominator since $\sqrt[\rm st]{(\lambda_k^+-\lambda)(\lambda_k^--\lambda)}=\tau_k-\lambda$. For convenience, we introduce
\begin{equation}\label{eq:w_k}
w_k(\lambda):=\sqrt[\rm st]{(\lambda_k^+-\lambda)(\lambda_k^--\lambda)}.
\end{equation}
The aim of this Section is to show that
\begin{equation}\label{eq:angles_U_tn}
\theta_n(\varphi):={\tilde\beta}^n(\varphi)+\sum_{|k|>R}\beta^n_k(\varphi),\quad \varphi\in U_{\rm tn}\setminus\mathcal{Z}_n,\quad n\in\mathbb{Z},
\end{equation}
are bona fide angle variables, conjugate to the action variables introduced in Section \ref{sec:actions_in_U_tn}. In particular, the series in \eqref{eq:angles_U_tn} converges. The definition \eqref{eq:angles_U_tn} is motivated by \cite{KLTZ} where the angle $\theta_n$ was defined by a formula of the same type as \eqref{eq:angles_U_tn} for potentials in $U_{s_1}\setminus\mathcal{Z}_n$ (cf. Theorem \ref{th:main_near_zero}). Here ${\tilde\beta}^n(\varphi)$ denotes an analytic branch of the multivalued function $\beta^n$, defined by \eqref{4.1}, which is well defined modulo $2\pi$ and obtained by analytic extension of the corresponding function defined on $U_{s_1}\setminus\mathcal{Z}_n$ in \cite{KLTZ}. We begin by studying the integrals $\beta^n_k(\varphi)$, $|k|>R$. First we establish the following estimates.
\begin{Lem}\label{lem:beta^n_k-asymptotics}
For any $n\in\mathbb{Z}$ and $|k|>R$, $k\ne n$,
\[
\beta^n_k=O\left(\frac{|\gamma_k|+|\mu _k-\tau _k|}{|k-n|}\right)
\]
locally uniformly in $\varphi\in U_{\rm tn}$ and uniformly in $n\in\mathbb{Z}$.
\end{Lem}
\begin{proof}[Proof of Lemma \ref{lem:beta^n_k-asymptotics}]
We follow the arguments of the proof of Lemma 5.1 in \cite{GK1}. It follows from \eqref{4.2}, the normalization condition \eqref{eq:a-normalization}, and the discussion above that $\beta^n_k = 0$ for any $\varphi\in U_{\rm tn}$, $n\in\mathbb{Z}$, and $|k|>R$ with $k\ne n$, such that $\mu_k\in\{\lambda^+_k,\lambda^-_k\}$. Moreover, in view of the normalization condition \eqref{eq:a-normalization}, the value of $\beta^n_k$ with $|k|>R$, $k\ne n$, will not change if we replace in formula \eqref{4.2} the eigenvalue $\lambda^-_k$ by $\lambda ^+_k$. Hence it is sufficient to prove the claimed estimate only for those $\varphi\in U_{\rm tn}$, $n\in\mathbb{Z}$, and $|k|>R$ with $k\ne n$, for which
\begin{equation*}
\mu_k\notin\{\lambda^+_k,\lambda^-_k\}\quad\text{and} \quad |\mu_k-\lambda^-_k|\le|\mu_k-\lambda^+_k|.
\end{equation*}
Using the definition \eqref{eq:canonical_root} of the canonical root and (D3) we write for $\lambda\in D_k$,
\begin{equation}\label{4.6}
\frac{\zeta_n(\lambda )}{\sqrt[c]{\chi _p(\lambda )}} = \frac{\sigma ^n_k - \lambda}{w_k(\lambda )}\,\zeta ^n_k(\lambda ) \quad \mbox { with } \quad \zeta ^n_k(\lambda ) := \frac{i}{w_n(\lambda )} \prod _{r \not= k,n} \frac{\sigma ^n_r - \lambda }{w_r(\lambda )}.
\end{equation}
Arguing as in \cite[Corollary 12.7]{GK1} and in view of the asymptotics for $(\sigma^n_m)_{m\ne n}$ given by Lemma \ref{lem:U_tn} and Proposition \ref{lem:differentials_on_U_tn} (property (D2)) we get
\begin{equation}\label{eq:zeta^n_k}
\zeta^n_k(\lambda)=O\Big(\frac{1}{|n - k|}\Big),\quad k\ne n,\,\,\lambda\in D_k,
\end{equation}
locally uniformly on $U_{\rm tn}$. To estimate $\beta^n_k$, we parametrize the segment $[\lambda_k^-,\mu_k]\subseteq\mathbb{C}$ in formula \eqref{4.2}, $t\mapsto\lambda(t):=\lambda^-_k+t d_k$, $t\in[0,1]$, where $d_k:=\mu_k-\lambda^-_k$ and $d_k\ne 0$, and then use \eqref{4.2}, \eqref{4.6}, and \eqref{eq:zeta^n_k}, to get
\begin{eqnarray}
|\beta^n_k|&=&\Big| \int ^{\mu _k}_{\lambda ^-_k}\frac{\sigma ^n_k -\lambda }{w_k(\lambda )}\, \zeta ^n_k(\lambda )\,d\lambda\Big|\nonumber\\
&=&O\Big( \frac{1}{|n-k|}\Big)\!\int ^1_0\!\Big|\frac{\sigma^n_k-\lambda (t)}{t d_k}\Big|^{1/2} \Big|\frac{\sigma^n_k-\lambda (t)}{\lambda ^+_k-\lambda (t)}\Big|^{1/2}\,|d_k|\,dt.\label{eq:beta^n_k}
\end{eqnarray}
Further, we have for any $0\le t\le 1$,
\[
\frac{\sigma ^n_k - \lambda (t)}{\lambda ^+_k - \lambda (t)} = 1 + \frac{\sigma ^n_k - \lambda ^+_k}{\lambda ^+_k - \lambda (t)}.
\]
Using that $|\mu _k - \lambda ^-_k| \leq |\mu _k - \lambda ^+_k|$ one easily sees that $|\lambda ^+_k - \lambda (t)| \geq |\gamma _k| / 2$ for any $0 \leq t \leq 1$. Then, by Lemma \ref{lem:U_tn} and Proposition \ref{lem:differentials_on_U_tn} (property (D2)) and the triangle inequality, $|\sigma^n_k-\lambda ^+_k|\le|\sigma^n_k-\tau _k|+|\gamma _k|/2=O(\gamma _k)$. This implies that
\begin{equation}\label{4.7}
\Big|\frac{\sigma^n_k-\lambda (t)}{\lambda ^+_k-\lambda (t)}\Big| = O(1), \quad t\in[0,1],
\end{equation}
locally uniformly on $U_{\rm tn}$. On the other hand,
\begin{equation}\label{4.8}
\Big|\frac{\sigma^n_k-\lambda(t)}{t d_k}\Big|^{1/2}\le \frac{\big(|\sigma^n_k-\lambda^-_k|+|d_k |\big)^{1/2}}{\sqrt{t}\,|d_k|^{1/2}}= O\left(\frac{\big(|\gamma_k|+|d_k|\big)^{1/2}}{\sqrt{t}\,|d_k|^{1/2}}\right).
\end{equation}
Combining \eqref{eq:beta^n_k} with \eqref{4.7} and \eqref{4.8} we finally obtain for $|k|>R$, $k\ne n$,
\begin{equation}\label{eq:|beta|}
|\beta ^n_k|=\Big\arrowvert\int ^{\mu _k}_{\lambda ^-_k} \frac{\sigma^n_k-\lambda }{w_k(\lambda)}\,\zeta^n_k(\lambda )\,d\lambda \Big\arrowvert=O\left(\frac{\big(|\gamma_k|+|d_k|\big)^{1/2} |d_k|^{1/2}}{|n-k|}\right).
\end{equation}
The claimed estimate then follows by the Cauchy-Schwarz inequality. Going through the arguments of the proof one sees that the estimates for $\beta ^n_k$ hold uniformly in $n\in\mathbb{Z}$ and locally uniformly on $U_{\rm tn}$.
\end{proof}
The next result claims that $\beta^n_k$, $|k|>R$, are analytic. More precisely, the following holds.
\begin{Lem}\label{lem:beta^n_k-analytic}
Let $n \in \mathbb{Z}$ be arbitrary.
\begin{itemize}
\item[(i)] For any $|k| > R$ with $k\ne n$, $\beta ^n_k$ is analytic on $U_{\rm tn}$.
\item[(ii)] For $|n| > R$, $\beta^n_n$ is defined modulo $2\pi$.
It is analytic on $U_{\rm tn} \backslash{\mathcal Z}_n$ when considered modulo $\pi$.
\end{itemize}
\end{Lem}
\begin{proof}[Proof of Lemma \ref{lem:beta^n_k-analytic}]
We follow the arguments of the proof of Lemma 15.2 in \cite{GK1}. (i) Fix $k \not= n$ with $|k| > R$. In addition to the analytic subvariety ${\mathcal Z}_k$ introduced in \eqref{eq:Z_n} we also consider
\[
{\mathcal E}_k := \big\{ \varphi \in U_{\rm tn}\,\big|\,\mu_k\in \{\lambda ^{\pm}_k\}\big\} = \big\{ \varphi \in U_{\rm tn}\,\big|\, \Delta (\mu _k) = 2(-1)^k \big\}
\]
which clearly is also an analytic subvariety of $U_{\rm tn}$. We prove that $\beta ^n_k$ is analytic on $U_{\rm tn} \backslash ({\mathcal Z}_k \cup{\mathcal E}_k)$ when taken modulo $\pi$, extends continuously to $U_{\rm tn}$ and has weakly analytic restrictions to ${\mathcal Z}_k$ and ${\mathcal E}_k$. It then follows by \cite[Theorem A.6]{GK1} that $\beta ^n_k$ is analytic on $U_{\rm tn}$. To prove that $\beta ^n_k$ is analytic on $U_{\rm tn}\backslash ({\mathcal Z}_k \cup {\mathcal E}_k)$ it suffices to prove its differentiability. Note that $\lambda ^{\pm} _k$ are simple eigenvalues on $U_{\rm tn}\backslash{\mathcal Z}_k$, but as they are listed in lexicographic order, they are not necessarily continuous. For any given $\varphi\in U_{\rm tn}\backslash ({\mathcal Z}_k \cup {\mathcal E}_k)$, according to \cite[Proposition 7.5]{GK1}, in a neighborhood of $\varphi$ there exist two analytic functions $\varrho ^{\pm} _k$ such that $\{ \lambda ^+_k, \lambda ^-_k\} = \{ \varrho ^+_k, \varrho ^-_k \} $. Choose $\varrho ^{\pm} _k$ so that $\mathop{\rm dist}\big([ \varrho ^-_k, \mu _k], \varrho ^+_k\big)\ge\frac{1}{3} |\gamma_k|$. In view of the normalization condition in Proposition \ref{lem:differentials_on_U_tn} we can write
\[
\beta ^n_k = \int ^{\mu _k}_{\varrho ^-_k}\frac{\zeta _n(\lambda )} {\sqrt[\ast ]{\chi _p(\lambda )}}\,d\lambda
\]
where the integral is taken along any path from $\varrho ^-_k$ to $\mu _k$ inside $D_k$ that besides its end point(s) does not contain any point of $G_k$. The sign of the $\ast $-root along such a path is the one determined by
\[
\sqrt[\ast ]{\chi _p(\mu _k)} = \grave{m} _2(\mu _k) + \grave {m} _3(\mu _k) .
\]
As in \eqref{4.6} write
\[
\frac{\zeta_n(\lambda)}{\sqrt[c]{\chi _p(\lambda )}} = \frac{\sigma ^n_k-\lambda }{w_k(\lambda )}\,\zeta ^n_k(\lambda )
\]
and let $d_k := \mu _k - \varrho^-_k$. With the substitution $\lambda(t)=\varrho^-_k+t d_k$ one has $w_k(\lambda )^2 = t d_k(\lambda(t) - \varrho^+_k)$ and, as by assumption $|\lambda(t) - \varrho ^+_k|\ge|\gamma _k / 3|$ for $0 \le t\le 1$ and $\psi$ in a neighborhood of $\varphi$, the argument of $\lambda(t)-\varrho_k^+$ is contained in an interval of length strictly smaller than $\pi$. Hence the square root $\sqrt{\lambda (t) - \varrho ^+_k}$ can be chosen to be continuous in $t$ and analytic near $\varphi $. With the appropriate choice of the root $\sqrt{d_k}$ it then follows that
\[
\beta ^n_k = \int ^1_0 \frac{1}{\sqrt{t}} \frac{\sigma ^n_k - \lambda } {\sqrt{\lambda - \varrho ^+_k}}\,\zeta ^n_k(\lambda ) \sqrt{d_k}\,dt
\]
is differentiable at $\varphi $. Next let us show that $\beta ^n_k$ is continuous on $U_{\rm tn}$. By the previous considerations, $\beta ^n_k$ is continuous at all points of $U_{\rm tn}\backslash ({\mathcal Z}_k \cup {\mathcal E}_k)$.
By \eqref{eq:|beta|} and $\beta^n_k\big\arrowvert _{{\mathcal E}_k} = 0$, it follows that $\beta ^n_k$ is continuous at points of ${\mathcal E}_k$. It thus remains to prove that $\beta ^n_k$ is continuous in the points of ${\mathcal Z}_k \backslash {\mathcal E}_k$. First we show that $\beta ^n_k \big\arrowvert _{{\mathcal Z}_k \backslash {\mathcal E}_k}$ is continuous. Indeed, on ${\mathcal Z}_k$, $\lambda ^-_k = \tau _k$ and $(\sigma ^n_k - \lambda ) / w_k(\lambda )=1$, hence
\[
\beta ^n_k \big\arrowvert _{{\mathcal Z}_k \backslash {\mathcal E} _k} = \int ^{\mu _k}_{\tau _k} \zeta ^n_k (\lambda )\,d\lambda \big\arrowvert _{{\mathcal Z}_k \backslash {\mathcal E}_k}
\]
and it follows that $\beta ^n_k \big\arrowvert _{{\mathcal Z}_k \backslash {\mathcal E}_k}$ is continuous. As ${\mathcal E}_k$ is closed in $U_{\rm tn}$, it then remains to show that for any sequence $(\varphi^{(j)})_{j \geq 1} \subseteq U_{\rm tn}\backslash ({\mathcal Z}_k\cup {\mathcal E}_k)$ converging to an element $\varphi \in {\mathcal Z}_k \backslash {\mathcal E}_k$ one has
\[
\beta ^n_k(\varphi ^{(j)}) \underset {j \to \infty } {\longrightarrow } \beta ^n_k(\varphi ) .
\]
Without loss of generality we may assume that $\inf\limits_j\big|(\mu _k - \tau _k)(\varphi ^{(j)})\big| > 0$,
\[
\big|\lambda ^+_k(\varphi ^{(j)}) - \mu _k(\varphi ^{(j)})\big| \ge \big|\lambda ^-_k(\varphi ^{(j)}) - \mu _k(\varphi ^{(j)})\big|
\]
(otherwise pass to a subsequence of $\varphi ^{(j)}$ and/or, if necessary, switch the roles of $\lambda ^+_k$ and $\lambda ^-_k$), and
\[
\sqrt[\ast]{\chi _p\big(\mu _k(\varphi ^{(j)})\big)} = \sqrt[c]{\chi _p\big(\mu _k(\varphi ^{(j)})\big)} .
\]
Let $0 < \varepsilon\ll 1$. As $\lim\limits_{j \to \infty }\gamma _k(\varphi^{(j)}) = 0$ as well as $\lim\limits_{j \to \infty} d_k(\varphi ^{(j)}) = \mu _k (\varphi ) - \tau _k(\varphi ) \not= 0$ there exists $j_0 \geq 1$ so that
\begin{equation} \label{4.9}
\Big\arrowvert \frac{\gamma _k(\varphi ^{(j)})}{d_k (\varphi ^{(j)})}\Big\arrowvert \leq \varepsilon / 2 \quad \forall j \geq j_0 .
\end{equation}
With the substitution $\lambda(t)=\lambda_k^-+t d_k$, $d_k=\mu_k-\lambda_k^-$, one gets
\[
\beta ^n_k(\varphi^{(j)}) = \left( \int ^\varepsilon _0 + \int ^1_\varepsilon \right) \frac{\sigma ^n_k - \lambda }{w_k(\lambda )}\,\zeta ^n_k(\lambda )\,d_k\,dt.
\]
By using the estimates \eqref{4.8}--\eqref{4.9} one sees that for any $j \geq j_0$
\[
\Big| \int ^\varepsilon _0 \frac{\sigma ^n_k - \lambda } {w_k(\lambda )}\,\zeta ^n_k(\lambda )\,d_k\,dt \Big| \le C \sqrt{\varepsilon }
\]
where $C > 0$ is a constant independent of $j$. To estimate the integral
\[
J_\varepsilon (\varphi ^{(j)}) := \int ^1_\varepsilon \frac{\sigma ^n_k - \lambda }{w_k(\lambda )}\,\zeta ^n_k(\lambda )\,d_k\,dt
\]
note that for any $\varepsilon \le t \leq 1$ and $j \geq j_0$
\[
\Big\arrowvert \frac{\gamma ^2_k / 4}{(\tau _k-\lambda )^2}\Big\arrowvert = \Big \arrowvert t\,\frac{2 d_k}{\gamma _k} - 1 \Big\arrowvert ^{-2} \leq 3^{-2}
\]
and thus according to the definition \eqref{eq:standard_root} of the standard root
\[
w_k(\lambda ) = (\tau _k - \lambda ) \sqrt[+]{1 - \frac{\gamma ^2_k / 4} {(\tau _k - \lambda ) ^2}}
\]
for $\varepsilon \leq t \leq 1$ and $\varphi ^{(j)}$ with $j \geq j_0$. Thus
\[
J_\varepsilon (\varphi ^{(j)}) = \int ^1_\varepsilon \left( 1 + \frac{\sigma ^n_k - \tau _k}{\tau _k - \lambda } \right) \left( 1 - \frac{\gamma ^2_k / 4}{(\tau _k - \lambda )^2} \right)^{-1/2}\!\!\!\!\zeta^n_k(\lambda )\,d_k\,dt .
\]
As $\sigma ^n_k - \tau _k = \gamma ^2_k \ell ^2_k$ (property (D2)) one has for $\varepsilon \leq t \leq 1$
\[
\frac{\sigma ^n_k - \tau _k}{\tau _k - \lambda } = \frac{\gamma ^2_k \ell ^2_k}{\tau _k - \lambda ^-_k - t d_k} \to 0 \quad \mbox{as } j \to \infty
\]
as well as
\[
\frac{\gamma _k}{\tau _k - \lambda } = \frac{\gamma _k}{\tau _k - \lambda ^-_k - t d_k} \to 0 \quad \mbox{ as } j \to \infty .
\]
By dominated convergence it then follows that
\[
J_\varepsilon (\varphi ^{(j)}) \to \int ^1_\varepsilon \zeta ^n_k(\lambda , \varphi )\,d_k(\varphi )\,dt \quad \mbox{ as } j \to \infty .
\]
But $\beta^n_k(\varphi )-\int^1_\varepsilon\zeta^n_k(\lambda,\varphi )\,d_k(\varphi )\,dt = O(\varepsilon )$ by the continuity of $\zeta ^n_k$ in $\lambda $. Altogether we showed that there exists $j_1 \geq j_0$, depending on $\varepsilon $, so that for any $j \geq j_1$,
\[
\big|\beta ^n_k(\varphi^{(j)}) - \beta ^n_k(\varphi )\big| \le C \sqrt{\varepsilon }
\]
where $C$ can be chosen independently of $\varepsilon $. As $\varepsilon > 0$ is arbitrarily small this establishes the claimed convergence. It remains to check the weak analyticity on ${\mathcal E}_k$ and ${\mathcal Z}_k$. On ${\mathcal E}_k$ this is trivial since $\beta ^n_k \big\arrowvert _{{\mathcal E}_k} \equiv 0$. On ${\mathcal Z}_k$ we can write $ \beta ^n_k = \int ^{\mu _k}_{\tau _k} \varepsilon _k \zeta ^n_k (\lambda )\,d\lambda$ where $\varepsilon _k \in \{ 0, \pm 1\} $ is defined on ${\mathcal Z}_k \backslash {\mathcal E}_k$ by $ \sqrt[\ast ]{\chi _p(\mu _k)} = \varepsilon _k \sqrt[c]{\chi _p(\mu_k)}$ and is zero on ${\mathcal Z}_k \cap {\mathcal E}_k$. Now consider a disk $D \subseteq {\mathcal Z}_k$. As ${\mathcal E} _k$ is an analytic subvariety one has either $D \subseteq {\mathcal Z}_k \cap {\mathcal E}_k$ -- in which case $\beta ^n_k \big\arrowvert _D \equiv 0$ -- or $D \cap {\mathcal E}_k$ is finite. As $\int ^{\mu _k}_{\tau _k} \zeta ^n_k(\lambda )\,d\lambda \big \arrowvert _D$ is analytic and $\beta ^n_k$ is continuous on $D$ it then follows that $\int ^{\mu _k}_{\tau _k} \zeta ^n_k (\lambda )\,d\lambda \big \arrowvert _D \equiv 0$ or $\varepsilon _k \big\arrowvert _{D \backslash {\mathcal E} _k}$ is constant. In both cases it follows that $\beta ^n_k \big\arrowvert _D$ is analytic. This establishes the claimed analyticity. Item (ii) is proved in an analogous way except for the fact that switching from $\lambda ^-_n$ to $\varrho ^-_n$ may change the value of $\beta ^n_n$ by $\pi $ in view of the normalization condition in Proposition \ref{lem:differentials_on_U_tn}. Hence we have $\beta ^n_n = \int ^{\mu _n}_{\varrho ^-_n} \frac{\zeta _n(\lambda )} {\sqrt[\ast ]{\chi _p(\lambda )}}\,d\lambda$ modulo $\pi$.
\end{proof}
As a next step we show that by choosing appropriate integration paths in \eqref{4.1} the quantity $\beta^n(\varphi)$ is well defined modulo $2\pi$ on $U_{\rm tn}$ and real valued when restricted to $U_{\rm tn}\cap i L^2_r$. In \cite{KLTZ} it is proved that such a choice is possible in a small neighborhood of zero in $L^2_c$.
More specifically, we have the following
\begin{Lem}\label{lem:near_zero}
For any $\varphi\in U_{s_1}$, where $U_{s_1}\subseteq U_{\rm tn}\cap{\mathcal U}_0$ and ${\mathcal U}_0$ is the open ball in $L^2_c$ centered at zero, given by Theorem \ref{th:main_near_zero}, and for any $n\in\mathbb{Z}$ and $|k| \leq R$, the path of integration in $\beta^n_k=\int ^{\mu ^\ast _k}_{\lambda^-_k} \frac{\zeta _n(\lambda )}{\sqrt{\chi _p(\lambda )}}\,d\lambda$ can be chosen to lie in the handle ${\mathcal D}_{\varphi,k}\equiv\pi_1^{-1}(D_k)$ which is a Riemann surface biholomorphic to $\big\{z\in\mathbb{C}\,\big|\,1<|z|<2\big\}$ since $\lambda ^-_k \ne\lambda ^+_k$. For $|k|\le R$ with $k\ne n$ the quantity $\beta^n_k$ is well defined and analytic on $U_{s_1}$ whereas for $k=n$ it is defined modulo $2\pi$ and as such analytic. Furthermore, for any $n\in\mathbb{Z}$ and $|k|\le R$, $\beta^n_k$ is {\em real-valued} when restricted to $U_{s_1}\cap i L^2_r$. As a consequence, for any $n\in\mathbb{Z}$, the sum $\beta^n=\sum _{|k|\le R}\beta^n_k$ is real valued when restricted to $U_{s_1}\cap i L^2_r$.
\end{Lem}
Let us introduce for any $\varphi\in U_{\rm tn}$ the set
\[
M_R\equiv M_R(\varphi):=\big\{\mu_k\,\big|\,|k|\le R\big\}.
\]
By the definition \eqref{eq:dirichlet_divisor} for any $|k|\le R$, the Dirichlet divisor $\mu_k^*(\varphi)$ on the Riemann surface $\mathcal{C}_{\varphi,R}$ is uniquely determined by $\mu_k(\varphi)$. Similarly we introduce
\[
\Lambda^-_R\equiv\Lambda^-_R(\varphi):=\big\{\lambda_k^-\,\big|\,|k|\le R\big\}
\]
and recall that $\lambda^-_k(\varphi)$, $|k|\le R$, are simple periodic eigenvalues of $L(\varphi)$ that satisfy $\mathop{\rm Im}(\lambda^-_k)<0$. Then, we have the following important
\begin{Prop}\label{prop:beta^n_tn}
After shrinking the tubular neighborhood $U_{\rm tn}$ of the path $\gamma : [0,1]\to U_{\rm tn}\cap i L^2_r$, if necessary, so that $U_{\rm tn}$ and $U_{\rm tn}\cap i L^2_r$ are still connected, there exist for any $\varphi\in U_{\rm tn}$ a bijective correspondence
\[
\Lambda^-_R(\varphi)\to M_R(\varphi),\quad z\mapsto\mu_\varphi(z),
\]
and for any $z\in\Lambda^-_R(\varphi)$ a continuous, piecewise $C^1$-smooth path ${\mathcal P}^*\big[z,\mu_\varphi(z)\big]$ on $\mathcal{C}_{\varphi,R}$ from $z^*=(z,0)$ to $\mu_\varphi^*(z)$ so that for any $n\in\mathbb{Z}$,
\begin{equation}\label{eq:betatilde^n}
{\tilde\beta}^n(\varphi):=\sum_{z\in\Lambda_R^-(\varphi)}\int_{{\mathcal P}^*[z,\mu_\varphi(z)]} \frac{\zeta_n(\lambda,\varphi)}{\sqrt{\Delta^2(\lambda,\varphi)-4}}\,d\lambda,
\end{equation}
defined modulo $2\pi$ if $|n|\le R$, is analytic on $U_{\rm tn}$, and real valued when restricted to $U_{\rm tn}\cap i L^2_r$.
\end{Prop}
\begin{Rem}
Since the curve $\gamma$ is simple and can be assumed to be piecewise $C^1$-smooth, by shrinking $U_{\rm tn}$ once more if necessary, we can ensure that \eqref{eq:betatilde^n} is single valued on $U_{\rm tn}$, and hence there is no need, in the case $|n|\le R$, to define ${\tilde\beta}^n$ modulo $2\pi$ as stated in Proposition \ref{prop:beta^n_tn}. We chose to state Proposition \ref{prop:beta^n_tn} as is because we want to use its proof without modification in the subsequent section when constructing angle coordinates in a tubular neighborhood of the isospectral set $\mathop{\rm Iso}\nolimits_o(\psi)$ which is {\em not} simply connected.
\end{Rem}
\begin{proof}[Proof of Proposition \ref{prop:beta^n_tn}]
For a given $0\le\tau\le 1$, let $\psi:=\gamma(\tau)$ and for any $z\in\mathbb{C}$ and $\varepsilon>0$ denote by $D^\varepsilon(z)$ the open disk of radius $\varepsilon$ in $\mathbb{C}$ centered at $z$. Let $\overline{D}^\varepsilon(z)$ be the corresponding closed disk. We refer to the point $Q_z$ on the boundary of $D^\varepsilon(z)$ with the smallest real part as the {\em base point} of $D^\varepsilon(z)$. Choose $\varepsilon\equiv\varepsilon_\psi>0$ so that the following holds: For any $z,z'\in\Lambda_R(\psi)\cup M_R(\psi)$ where $\Lambda_R(\psi)\equiv\big\{\lambda_k^{\pm}\,\big|\,|k|\le R\big\}$ we have
\begin{itemize}
\item[(1)] $\overline{D}^\varepsilon(z)\cap \overline{D}^\varepsilon(z')=\emptyset$ if $z\ne z'$.
\item[(2)] $\overline{D}^\varepsilon(z)\setminus\{z\}$ does not contain any periodic eigenvalue of $L(\psi)$.
\item[(3)] $\overline{D}^\varepsilon(z)\subseteq B_R$ and if $z\in\Lambda^-_R(\psi)$, $\overline{D}^\varepsilon(z)\subseteq\big\{\lambda\in\mathbb{C} \,\big|\,\mathop{\rm Im}(\lambda)<0\big\}$.
\end{itemize}
Now we choose an open ball $U_\psi\subseteq U_{\rm tn}$ in $L^2_c$ centered at $\psi$ so that for any $\varphi\in U_\psi$ the following holds: For any Dirichlet eigenvalue $\mu\in M_R(\psi)$ with (algebraic) multiplicity $m_D\ge 1$ there exist exactly $m_D$ Dirichlet eigenvalues of $L(\varphi)$ in the disk $D^\varepsilon(\mu)$ and for any periodic eigenvalue $\nu\in\Lambda^-_R(\psi)$ there exists a unique periodic eigenvalue $\nu_\varphi$ of $L(\varphi)$ in $D^\varepsilon(\nu)$. Denote by $B^\bullet_{R,\psi}$ the complement in $B_R$ of the union of disks $D^\varepsilon(z)$, $z\in\Lambda_R(\psi)\cup M_R(\psi)$,
\[
B^\bullet_{R,\psi}:=B_R\setminus\Big(\bigcup_{z\in\Lambda_R(\psi)\cup M_R(\psi)}D^\varepsilon(z)\Big).
\]
Furthermore, in what follows, $\pi_1$ will denote the projection $\pi_1 : \mathbb{C}^2\to\mathbb{C}$, $(\lambda,w)\mapsto\lambda$, onto the first component of $\mathbb{C}^2$. Then the set $\big(\pi_1|_{\mathcal{C}_{\psi,R}}\big)^{-1}\big(B^\bullet_{R,\psi}\big)$ is a connected Riemann surface obtained from the Riemann surface $\mathcal{C}_{\psi,R}$ by removing a certain number of open disks. Choose an arbitrary bijection
\begin{equation*}\label{eq:initial_correspondence}
\Lambda^-_R(\psi)\to M_R(\psi),\quad\nu\mapsto\mu_\psi(\nu).
\end{equation*}
Then, for any $\nu\in\Lambda^-_R(\psi)$, take a $C^1$-smooth curve in $B^\bullet_{R,\psi}$ that connects the base point of $D^\varepsilon(\nu)$ with the base point of $D^\varepsilon(\mu_\psi(\nu))$. We denote this curve by $Y_\psi[\nu,\mu_\psi(\nu)]$. In this way we obtain a set of $2R+1$ curves
\[
\big\{Y_\psi[\nu,\mu_\psi(\nu)]\,\big|\,\nu\in\Lambda^-_R(\psi)\big\}
\]
in $B^\bullet_{R,\psi}$. By construction, these curves depend on the choice of $\psi$ and the bijective correspondence. Now take $\varphi\in U_\psi$.
For any $z\in\Lambda^-_R(\varphi)$ with $z\in D^\varepsilon(\nu)$, where $\nu\in\Lambda^-_R(\psi)$, denote by $\mathcal{P}_{\varphi,\psi}[z,\mu_\varphi(z)]$ the concatenated path
\begin{equation}\label{eq:concatenated_path}
[z,Q_\nu]\cup Y_\psi[\nu,\mu_\psi(\nu)]\cup[Q_{\mu_\psi(\nu)},\mu_\varphi(z)]
\end{equation}
where by $\mu_\varphi(z)$ we denote one of the Dirichlet eigenvalues in $M_R(\varphi)$ that lies in the disk $D^\varepsilon(\mu_\psi(\nu))$ and $[a,b]$ denotes the line segment connecting two complex numbers $a,b\in\mathbb{C}$. Requiring that every Dirichlet eigenvalue in $M_R(\varphi)$ appears as the endpoint of such a concatenated curve precisely once, we construct for the considered potential $\varphi\in U_\psi$ a bijective correspondence
\begin{equation}\label{eq:correspondence}
\Lambda^-_R(\varphi)\to M_R(\varphi),\quad z\mapsto\mu_\varphi(z)\equiv\mu_{\varphi,\psi}(z).
\end{equation}
\begin{Rem}\label{rem:multiple_choice}
In the case when $\mu_\psi(\nu)$ has algebraic multiplicity $m_D\ge 2$ there are multiple options for the choice of the third part $[Q_{\mu_\psi(\nu)},\mu_\varphi(z)]$ of the path $\mathcal{P}_{\varphi,\psi}[z,\mu_\varphi(z)]$. This leads to different bijective correspondences \eqref{eq:correspondence}. The final result, however, will {\em not} depend on the choice of the correspondence in \eqref{eq:correspondence}.
\end{Rem}
We now want to lift the path $\mathcal{P}_{\varphi,\psi}[z,\mu_\varphi(z)]$ constructed above to the Riemann surface $\mathcal{C}_{\varphi,R}$. Let us first treat the case where $\mu_\psi(\nu)$ is not a ramification point of $\mathcal{C}_{\psi,R}$, i.e. $\mu_\psi(\nu)\notin\Lambda_R(\psi)$. Then the preimage $\big(\pi_1|_{\mathcal{C}_{\psi,R}}\big)^{-1}\big(\overline{D}^\varepsilon(\mu_\psi(\nu))\big)$ consists of two disjoint closed disks. Denote by $Q_{\mu_\psi(\nu),\psi}^*$ the lift of $Q_{\mu_\psi(\nu)}$ which is in the disk containing the Dirichlet divisor $\mu_\psi^*(\nu)$. Similarly, for $\varphi\in U_\psi$ the preimage $\big(\pi_1|_{\mathcal{C}_{\varphi,R}}\big)^{-1}\big(\overline{D}^\varepsilon(\mu_\psi(\nu))\big)$ consists of two disjoint disks. We denote by $Q_{\mu_\psi(\nu),\varphi}^*$ the lift of $Q_{\mu_\psi(\nu)}$ which is in the disk containing the Dirichlet divisor $\mu_\varphi^*(z)$. Denote by $Q_{\nu,\varphi}^*$ the starting point of the lift $Y^*_{\varphi,\psi}[\nu,\mu_\psi(\nu)]$ of $Y_\psi[\nu,\mu_\psi(\nu)]$ by $\big(\pi_1|_{\mathcal{C}_{\varphi,R}}\big)^{-1}$ that ends at $Q_{\mu_\psi(\nu),\varphi}^*$. By construction,
\[
\pi_1(Q_{\nu,\varphi}^*)=Q_\nu\quad\text{and}\quad\pi_1(Q_{\mu_\psi(\nu),\varphi}^*)=Q_{\mu_\psi(\nu)}.
\]
This yields a uniquely determined lift $\mathcal{P}^*_{\varphi,\psi}[z,\mu_\varphi(z)]$ of $\mathcal{P}_{\varphi,\psi}[z,\mu_\varphi(z)]$ to $\mathcal{C}_{\varphi,R}$, starting at $(z,0)$ and ending at $\mu_\varphi^*(z)$. Now, let us turn to the case where $\mu_\psi(\nu)\in\Lambda_R(\psi)$ and hence $\mu_\psi(\nu)$ is a ramification point of $\pi_1|_{\mathcal{C}_{\psi,R}} : \mathcal{C}_{\psi,R}\to\mathbb{C}$.
Then the preimage $\big(\pi_1|_{\mathcal{C}_{\psi,R}}\big)^{-1}\big(\overline{D}^\varepsilon(\mu_\psi(\nu))\big)$ is a closed disk in $\mathcal{C}_{\psi,R}$ and the restriction of the map $\pi_1$ to this disk is two-to-one except at the branch point $\mu^*_\psi(\nu)$ where it is one-to-one. Denote by $Q^*_{\mu_\psi(\nu),\psi}$ one of the preimages of $Q_{\mu_\psi(\nu)}$. For $\varphi\in U_\psi$ the preimage $\big(\pi_1|_{\mathcal{C}_{\varphi,R}}\big)^{-1}\big(\overline{D}^\varepsilon(\mu_\psi(\nu))\big)$ is also a closed disk in $\mathcal{C}_{\varphi,R}$ and contains a unique branch point in its interior. We denote by $Q_{\mu_\psi(\nu),\varphi}^*$ the preimage of $Q_{\mu_\psi(\nu)}$ defined uniquely by the condition that the map
\[
U_\psi\to\mathbb{C}^2,\quad\varphi\mapsto Q_{\mu_\psi(\nu),\varphi}^*,
\]
is analytic and $Q_{\mu_\psi(\nu),\varphi}^*\big|_{\varphi=\psi}=Q^*_{\mu_\psi(\nu),\psi}$. Then, by proceeding in the same way as in the previous case, we construct the point $Q_{\nu,\varphi}^*$, the lift $Y^*_{\varphi,\psi}[\nu,\mu_\psi(\nu)]$ of $Y_\psi[\nu,\mu_\psi(\nu)]$, and the uniquely determined lift $\mathcal{P}^*_{\varphi,\psi}[z,\mu_\varphi(z)]$ of $\mathcal{P}_{\varphi,\psi}[z,\mu_\varphi(z)]$ to $\mathcal{C}_{\varphi,R}$ that starts at $(z,0)$, passes consecutively through $Q_{\nu,\varphi}^*$ and $Q_{\mu_\psi(\nu),\varphi}^*$, and then ends at the Dirichlet divisor $\mu_\varphi^*(z)$. It then follows that for any $n\in\mathbb{Z}$,
\begin{equation}\label{eq:beta_tilde}
\beta^n_{\psi}(\varphi):=\sum_{z\in\Lambda_R^-(\varphi)} \int_{{\mathcal P}^*_{\varphi,\psi}[z,\mu_\varphi(z)]} \frac{\zeta_n(\lambda,\varphi)}{\sqrt{\Delta^2(\lambda,\varphi)-4}}\,d\lambda,
\end{equation}
is well defined and analytic on $U_\psi$. Indeed, if $\nu\in\Lambda^-_R(\psi)$ is such that $\mu_\psi(\nu)$ is a simple Dirichlet eigenvalue of $L(\psi)$, the integral $\int_{{\mathcal P}^*_{\varphi,\psi}[z,\mu_\varphi(z)]} \frac{\zeta_n(\lambda,\varphi)}{\sqrt{\Delta^2(\lambda,\varphi)-4}}\,d\lambda$ is clearly analytic on $U_\psi$ where $z\in\Lambda^-_R(\varphi)$ is the periodic eigenvalue of $L(\varphi)$ in the disk $D^\varepsilon(\nu)$. In case $\mu_\psi(\nu)$ is a Dirichlet eigenvalue of $L(\psi)$ of multiplicity $m_D\ge 2$, denote by $\nu_j$, $1\le j\le m_D$, the periodic eigenvalues of $L(\psi)$ such that $\mu_\psi(\nu_j)=\mu_\psi(\nu)$ and for any $\varphi\in U_\psi$ by $z_j$, $1\le j\le m_D$, the periodic eigenvalues of $L(\varphi)$ with $z_j\in D^\varepsilon(\nu_j)$. Then the argument principle implies that
\begin{equation}\label{eq:argument_principle}
\sum_{1\le j\le m_D} \int_{{\mathcal P}^*_{\varphi,\psi}[z_j,\mu_\varphi(z_j)]} \frac{\zeta_n(\lambda,\varphi)}{\sqrt{\Delta^2(\lambda,\varphi)-4}}\,d\lambda,
\end{equation}
is analytic in $U_\psi$ -- see Lemma \ref{lem:argument_principle} in the Appendix for more details. One can easily check that the value \eqref{eq:beta_tilde} does {\em not} depend on the choice described in Remark \ref{rem:multiple_choice}. Note that the restriction of $\beta^n_{\psi}$ to $U_\psi\cap i L^2_r$ is {\em not} necessarily real valued.
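The lifts entering \eqref{eq:beta_tilde} amount to continuing the square root of $\Delta^2(\lambda,\varphi)-4$ along the projected paths without switching sheets. The following minimal sketch (Python) is purely illustrative: the quadratic polynomial below is a toy stand-in for $\Delta^2-4$ near one pair of simple periodic eigenvalues, and all values are made up. It continues the root along a path by selecting, at each step, the square root closest to the previously computed value: a loop around a single branch point ends on the other sheet, whereas a loop around both branch points -- an $a$-cycle -- lifts to a closed cycle, which is the reason the lifts above are uniquely determined once their starting points on $\mathcal{C}_{\varphi,R}$ are fixed.
\begin{verbatim}
import numpy as np

# Toy stand-in for Delta^2 - 4 near one pair of simple periodic eigenvalues
# (the branch points); all values are illustrative.
lam_minus, lam_plus = -0.4j, 0.4j
f = lambda lam: (lam - lam_plus) * (lam - lam_minus)

def lift(path, w0):
    """Continue w(lam) = sqrt(f(lam)) along `path`, starting from w0, by
    picking at each step the square root closest to the previous value."""
    w, lifted = w0, [w0]
    for lam in path[1:]:
        r = np.sqrt(f(lam))                        # one of the two roots
        w = r if abs(r - w) <= abs(-r - w) else -r
        lifted.append(w)
    return np.array(lifted)

t = np.linspace(0.0, 2 * np.pi, 2000)

# A closed loop around a single branch point ends on the other sheet ...
loop_one = lam_plus + 0.2 * np.exp(1j * t)
w0 = np.sqrt(f(loop_one[0]))
print(lift(loop_one, w0)[-1] / w0)    # approx -1: the sign has flipped

# ... while a closed loop around both branch points (an a-cycle) returns
# to the starting sheet, so it lifts to a closed cycle on the curve.
loop_both = 1.0 * np.exp(1j * t)
w0 = np.sqrt(f(loop_both[0]))
print(lift(loop_both, w0)[-1] / w0)   # approx +1: same sheet
\end{verbatim}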
For later reference we record that by construction, for any $\varphi\in U_{\partial}si$, the path ${\mathcal P}^*_{\varphi,{\partial}si}[z,\mu_\varphi(z)]$ is a concatenation of three paths which we write as \begin{equation}\label{eq:concatenation*} {\mathcal P}^*_{\varphi,{\partial}si}[z,\mu_\varphi(z)]= [z,Q_\nu]^*\cup Y^*_{\varphi,{\partial}si}[\nu,\mu_{\partial}si(\nu)]\cup[Q_{\mu_{\partial}si(\nu)},\mu_\varphi(z)]^*. \end{equation} Now consider $0\le\tau_0\le 1$ with $\tau_0\ne\tau$ and let ${\partial}si_0:=\gamma(\tau_0)$. Assume that, following the same construction as for ${\partial}si=\gamma(\tau)$, one finds an open ball $U_{{\partial}si_0}\subseteq U_{\rm tn}$ of ${\partial}si_0$ in $L^2_c$ with $U_{\partial}si\cap U_{{\partial}si_0}\ne\emptyset$ and for any $\varphi\in U_{{\partial}si_0}$ a system of paths ${\mathcal P}^*_{\varphi,{\partial}si_0}[z,\mu_{\varphi,{\partial}si_0}(z)]$, $z\in\Lambda^-(\varphi)$, so that the restriction of $\beta^n_{{\partial}si_0}$ to $U_{{\partial}si_0}\cap i L^2_r$ is {\em real valued}. We now want to show that one can modify the system of paths ${\mathcal P}^*_{\varphi,{\partial}si}[z,\mu_\varphi(z)]$, $z\in\Lambda^-(\varphi)$, so that for any $n\in\mathbb{Z}$, \begin{equation}\label{eq:beta1=beta2} \beta^n_{\partial}si(\varphi)=\beta^n_{{\partial}si_0}(\varphi),\quad \forall\varphi\in U_{\partial}si\cap U_{{\partial}si_0}, \end{equation} where $\beta^n_{\partial}si(\varphi)$ denotes the value \eqref{eq:beta_tilde} with the system of paths modified. Then by Lemma \mathop{\rm Re}f{lem:real-analyticity} for any $n\in\mathbb{Z}$, the quantity $\beta^n_{\partial}si$ will be real valued when restricted to $U_{\partial}si\cap i L^2_r$. Indeed, take $\varphi\in U_{\partial}si\cap U_{{\partial}si_0}$. The first problem arises if the bijection \[ \mu_{\varphi,{\partial}si_0} : \Lambda^-_R\to M_R(\varphi) \] corresponding to the neighborhood $U_{{\partial}si_0}$ is different from the bijection \[ \mu_\varphi \equiv\mu_{\varphi,{\partial}si} : \Lambda^-_R\to M_R(\varphi) \] corresponding to the neighborhood $U_{\partial}si$. In such a case, denote by $\sigma_\varphi : \Lambda^-_R(\varphi)\to\Lambda^-_R(\varphi)$ the permutation with the property that $\mu_{\varphi,{\partial}si_0}=\mu_{\varphi,{\partial}si}\circ\sigma_\varphi$. Since $\sigma_\varphi$ can be written as a product of transpositions, we can assume without loss of generality that $\sigma_\varphi$ is a transposition interchanging two periodic eigenvalues $z$ and $z'$ in $\Lambda^-_R(\varphi)$. Then $z\in D^\varepsilon(\nu)$ and $z'\in D^\varepsilon(\nu')$ for some $\nu,\nu'\in\Lambda^-_R({\partial}si)$. In view of \eqref{eq:concatenation*}, \[ {\mathcal P}^*_{\varphi,{\partial}si}[z,\mu_\varphi(z)]= [z,Q_\nu]^*\cup Y^*_{\varphi,{\partial}si}[\nu,\mu_{\partial}si(\nu)]\cup[Q_{\mu_{\partial}si(\nu)},\mu_\varphi(z)]^* \] and \[ {\mathcal P}^*_{\varphi,{\partial}si}[z',\mu_\varphi(z')]= [z',Q_{\nu'}]^*\cup Y^*_{\varphi,{\partial}si}[\nu',\mu_{\partial}si(\nu')]\cup[Q_{\mu_{\partial}si(\nu')},\mu_\varphi(z')]^*. \] Choose a path $Y^*_{\nu,\nu'}$ on the Riemann surface \[ \mathcal{C}_{{\partial}si,R}^\bullet:=\big({\partial}i_1|_{\mathcal{C}_{{\partial}si,R}}\big)^{-1}(B_{R,{\partial}si}^\bullet) \] that connects the point $Q^*_\nu$ with $Q^*_{\nu'}$.
Let $Y_{\nu,\nu'}$ be its projection into $B_{R,{\partial}si}^\bullet$ by the map ${\partial}i_1$ and denote by $Y^*_{\nu,\nu';\varphi}$ the unique lift of $Y_{\nu,\nu'}$ by ${\partial}i_1|_{\mathcal{C}_{\varphi,R}} : \mathcal{C}_{\varphi,R}\to\mathbb{C}$ which connects $Q^*_{\nu,\varphi}$ with $Q^*_{\nu',\varphi}$. We then replace ${\mathcal P}^*_{\varphi,{\partial}si}[z,\mu_\varphi(z)]$ by the concatenated curve \[ [z,Q_\nu]^*\cup Y^*_{\nu,\nu';\varphi}\cup Y^*_{\varphi,{\partial}si}[\nu',\mu_{\partial}si(\nu')]\cup [Q_{\mu_{\partial}si(\nu')},\mu_\varphi(z')]^* \] and ${\mathcal P}^*_{\varphi,{\partial}si}[z',\mu_\varphi(z')]$ by \[ [z',Q_{\nu'}]^*\cup (Y^*_{\nu,\nu';\varphi})^{-1}\cup Y^*_{\varphi,{\partial}si}[\nu,\mu_{\partial}si(\nu)]\cup [Q_{\mu_{\partial}si(\nu)},\mu_\varphi(z)]^* \] where $(Y^*_{\nu,\nu';\varphi})^{-1}$ denotes the path obtained from $Y^*_{\nu,\nu';\varphi}$ by traversing it in the opposite direction. Since this change of paths does not affect the value of $\beta^n_{\partial}si(\varphi)$, we can assume without loss of generality that $\mu_{\varphi,{\partial}si}=\mu_{\varphi,{\partial}si_0}$. Therefore, for any $z\in\Lambda^-_R(\varphi)$, the paths ${\mathcal P}^*_{\varphi,{\partial}si}[z,\mu_\varphi(z)]$ and ${\mathcal P}^*_{\varphi,{\partial}si_0}[z,\mu_{\varphi,{\partial}si_0}(z)]$ in $\mathcal{C}_{\varphi,R}$ have the same initial points and the same end points. Now, take $\varphi_0\in U_{\partial}si\cap U_{{\partial}si_0}$. Then, for any $z\in\Lambda^-_R(\varphi_0)$, choose an arbitrary point on the part $Y^*_{\varphi_0,{\partial}si}[\nu,\mu_{\partial}si(\nu)]$ of the path ${\mathcal P}^*_{\varphi_0,{\partial}si}[z,\mu_{\varphi_0}(z)]$ and modify the path by attaching at this point a bouquet of $a$-cycles and $b$-cycles of the Riemann surface $\mathcal{C}_{\varphi_0,R}$, contained in $\big({\partial}i_1|_{\mathcal{C}_{\varphi_0,R}}\big)^{-1}(B_{R,{\partial}si}^\bullet)$, so that, when modified in this way, the path ${\mathcal P}^*_{\varphi_0,{\partial}si}[z,\mu_{\varphi_0}(z)]$ is homologous to ${\mathcal P}^*_{\varphi_0,{\partial}si_0}[z,\mu_{\varphi_0,{\partial}si_0}(z)]$ on $\mathcal{C}_{\varphi_0,R}$. Hence, for any $n\in\mathbb{Z}$, $\beta^n_{\partial}si(\varphi_0)$ defined by \eqref{eq:beta_tilde} with these modified paths equals $\beta^n_{{\partial}si_0}(\varphi_0)$. For any $\nu\in\Lambda^-_R({\partial}si)$, let \[ {\widetilde Y}_{\partial}si[\nu,\mu_{\partial}si(\nu)]:={\partial}i_1\big({\widetilde Y}^*_{\varphi_0,{\partial}si}[\nu,\mu_{\partial}si(\nu)]\big) \] where ${\widetilde Y}^*_{\varphi_0,{\partial}si}[\nu,\mu_{\partial}si(\nu)]$ denotes the middle part of the path ${\mathcal P}^*_{\varphi_0,{\partial}si}[z,\mu_{\varphi_0}(z)]$ modified as described above. Note that ${\widetilde Y}_{\partial}si[\nu,\mu_{\partial}si(\nu)]\subseteq B^\bullet_R$. Now, use ${\widetilde Y}_{\partial}si[\nu,\mu_{\partial}si(\nu)]$ instead of $Y_{\partial}si[\nu,\mu_{\partial}si(\nu)]$ to construct the paths ${\mathcal P}_{\varphi,{\partial}si}[z,\mu_\varphi(z)]$ and their lifts ${\mathcal P}^*_{\varphi,{\partial}si}[z,\mu_\varphi(z)]$ for any $\varphi\in U_{\partial}si$ and $z\in\Lambda^-(\varphi)$. By this construction and since \eqref{eq:beta_tilde} is independent of the choice described in Remark \mathop{\rm Re}f{rem:multiple_choice}, the equality \eqref{eq:beta1=beta2} follows.
As mentioned above, Lemma \mathop{\rm Re}f{lem:real-analyticity} then implies that ${\tilde\beta}_{\partial}si^n : U_{\partial}si\to\mathbb{C}$ is real valued when restricted to $U_{\partial}si\cap i L^2_r$ for any $n\in\mathbb{Z}$. Since the set $\gamma\big([0,1]\big):=\big\{\gamma(t)\,\big|\,t\in[0,1]\big\}\subseteq U_{\rm tn}\cap i L^2_r$ is compact in $L^2_c$ there are finitely many potentials $\varphi_1,...,\varphi_M\in\gamma([0,1])$, open balls $U(\varphi_1),...,U(\varphi_M)$ in $U_{\rm tn}$, and for any $1\le j\le M$ and any $\varphi\in U(\varphi_j)$, a system of paths \[ \big\{{\mathcal P}^*_{\varphi,\varphi_j}[z,\mu_{\varphi,\varphi_j}(z)]\,\big|\,z\in\Lambda_R^-(\varphi)\big\}, \] constructed as described above so that \[ \gamma\big([0,1]\big)\subseteq\bigcup_{j=1}^M U(\varphi_j)\subseteq U_{\rm tn}. \] Without loss of generality we can assume that $\varphi_1={\partial}si^{(0)}\in\mathcal{U}_0\cap U_{\rm tn}$ where $\mathcal{U}_0$ is the open ball in $L^2_c$ centered at zero given by Theorem \mathop{\rm Re}f{th:main_near_zero}. By choosing for any $\varphi\in U(\varphi_1)$ the system of paths as above and, at the same time, as described in Lemma \mathop{\rm Re}f{lem:near_zero}, it follows that $\beta^n_{\varphi_1} : U(\varphi_1)\to\mathbb{C}$ is real-valued on $U(\varphi_1)\cap i L^2_r$. By modifying the systems of paths as described above we conclude that for any $1\le j<k\le M$ and $\varphi\in U(\varphi_j)\cap U(\varphi_k)$ the quantities $\beta^n_{\varphi_j}(\varphi)$ and $\beta^n_{\varphi_k}(\varphi)$ are real valued and coincide up to a linear combination of $a$ and $b$-periods with integer coefficients. Taking into account Lemma \mathop{\rm Re}f{lem:lattice_imaginary} it then follows that modulo $2{\partial}i$, \[ \beta^n_{\varphi_j}(\varphi)\equiv\beta^n_{\varphi_k}(\varphi),\quad \forall\varphi\in U(\varphi_j)\cap U(\varphi_k). \] By setting ${\tilde\beta}^n|_{U(\varphi_j)}:=\beta^n_{\varphi_j}$ for $1\le j\le M$ we then obtain a well defined function modulo $2{\partial}i$, \begin{equation}\label{eq:beta_global} {\tilde\beta}^n : U_{\rm tn}'\to\mathbb{C},\quad U_{\rm tn}':=\bigcup_{j=1}^M U(\varphi_j), \end{equation} so that ${\tilde\beta}^n|_{U_{\rm tn}'\cap i L^2_r}$ is real-valued. This completes the proof of Proposition \mathop{\rm Re}f{prop:beta^n_tn}. \end{proof} In the proof of Proposition \mathop{\rm Re}f{prop:beta^n_tn} we used the following \begin{Lem}\label{lem:lattice_imaginary} For any $\varphi\in U_{\rm tn}\cap i L^2_r$ and for any $n\in\mathbb{Z}$, the lattice of periods $\mathcal{L}_n(\varphi)$, introduced in \eqref{4.3}, consists of imaginary numbers, $\mathcal{L}_n(\varphi)\subseteq i\mathbb{R}$. \end{Lem} \begin{proof}[Proof of Lemma \mathop{\rm Re}f{lem:lattice_imaginary}] Take $\varphi\in U_{\rm tn}\cap i L^2_r$ and assume that $\varphi\in U_{s_k}$ for some $1\le k\le N$ (see \eqref{eq:U_tn}). Then, by Proposition \mathop{\rm Re}f{lem:differentials_on_U_tn} for any $n\in\mathbb{Z}$, \begin{equation}\label{eq:normalization_condition1} \int_{\Gamma_m(s_k)}\frac{\zeta_n(\lambda,\varphi)}{\sqrt[c]{\Delta^2(\lambda,\varphi)-4}}\,d\lambda=2{\partial}i\delta_{mn}, \quad m\in\mathbb{Z}, \end{equation} where by construction, \begin{equation}\label{eq:conjugation1} \overline{\Gamma_m(s_k)}=\Gamma_m(s_k). 
\end{equation} It follows from Lemma \mathop{\rm Re}f{lem:spectrum_symmetries} and the definition of the canonical root \eqref{eq:canonical_root} that \begin{equation}\label{eq:conjugation2} \overline{\big(\sqrt[c]{\Delta^2(\lambda,\varphi)-4}\big)}=-\sqrt[c]{\Delta^2(\overline{\lambda},\varphi)-4}. \end{equation} By taking the complex conjugate of both sides of \eqref{eq:normalization_condition1}, then using \eqref{eq:conjugation1} and \eqref{eq:conjugation2}, and finally by passing to the complex conjugate variable in the integral, we obtain that for any $n\in\mathbb{Z}$, \begin{equation}\label{eq:normalization_condition2} \int_{\Gamma_m(s_k)}\frac{\overline{\zeta_n(\overline{\lambda},\varphi)}} {\sqrt[c]{\Delta^2(\lambda,\varphi)-4}}\,d\lambda=2{\partial}i\delta_{mn}, \quad m\in\mathbb{Z}. \end{equation} Now, by comparing \eqref{eq:normalization_condition1} and \eqref{eq:normalization_condition2} we conclude from \cite[Proposition 5.2]{KT2} (cf. Remark \mathop{\rm Re}f{rem:uniqueness_differentials}) that \begin{equation}\label{eq:zeta_symmetry} \overline{\zeta_n(\overline{\lambda},\varphi)}=\zeta_n(\lambda,\varphi) \end{equation} for any $\varphi\in U_{\rm tn}\cap i L^2_r$, $\lambda\in\mathbb{C}$, and $n\in\mathbb{Z}$. In particular, we see that in this case $\zeta_n(\lambda,\varphi)$ is real-valued for $\lambda\in\mathbb{R}$. Now consider the $b_l$-period of the one form $\frac{\zeta_n(\lambda,\varphi)}{\sqrt{\Delta^2(\lambda,\varphi)-4}}\,d\lambda$ for some $1\le |l|\le R$. For ease of notation we assume that $l\ge 1$. Then, by construction ${\partial}i_1(b_l)=[\varkappa_{l-1}(s_k,\varphi),\varkappa_l(s_k,\varphi)]$, and therefore \begin{equation}\label{eq:p_nl} p_{nl}=\int_{b_l}\frac{\zeta_n(\lambda,\varphi)} {\sqrt{\Delta^2(\lambda,\varphi)-4}}\,d\lambda\in\left\{ {\partial}m2\int_{\varkappa_{l-1}(s_k,\varphi)}^{\varkappa_l(s_k,\varphi)} \frac{\zeta_n(\lambda,\varphi)} {\sqrt[c]{\Delta^2(\lambda,\varphi)-4}}\,d\lambda\right\} \end{equation} where the integration is performed over the real interval $[\varkappa_{l-1}(s_k,\varphi),\varkappa_l(s_k,\varphi)]$. By construction $B_R\cap\mathbb{R}$ does not contain periodic eigenvalues of $L(\varphi)$. In view of \eqref{eq:zeta_symmetry} and the property that $\Delta(\lambda,\varphi)\in(-2,2)$ for $\lambda\in B_R\cap\mathbb{R}$ (see \eqref{eq:real_line}) we then conclude from \eqref{eq:p_nl} that the $b_l$-period $p_{nl}$ is an imaginary number. \end{proof} Arguing as in the proof of Lemma \mathop{\rm Re}f{lem:beta^n_k-asymptotics} one obtains the following estimates for ${\tilde\beta}^n(\varphi)$. \begin{Lem}\label{lem:beta_tilde-asymptotics} For any $n\in\mathbb{Z}$, \[ {\tilde\beta}^n=O(1/n)\quad\text{as}\quad|n|\to\infty, \] locally uniformly in $\varphi\in U_{\rm tn}$. \end{Lem} In what follows we assume that the tubular neighborhood $U_{\rm tn}$ is chosen as in Proposition \mathop{\rm Re}f{prop:beta^n_tn}. We summarize the results obtained in this section so far as follows. \begin{Prop}\label{Theorem 4.4} For any $n \in \mathbb{Z}$, the following statements hold: \begin{itemize} \item[(i)] $\sum _{\underset{\scriptstyle{k \not= n}}{|k| > R}} \beta ^n_k$ converges locally uniformly on $U_{\rm tn}$ to an analytic function on $U_{\rm tn}$ which is of the order $o(1)$ as $|n|\to\infty$. \item[(ii)] If $|n| > R$, then $\beta ^n_n$ is defined modulo $2{\partial}i$ on $U_{\rm tn}\backslash{\mathcal Z}_n$. It is analytic, when taken modulo ${\partial}i$. 
\item[(iii)] For any $\varphi \in U_{\rm tn}$ and $n\in\mathbb{Z}$, the quantity ${\tilde\beta}^n(\varphi)$ is defined modulo $2{\partial}i$ and is analytic on $U_{\rm tn}$. Furthermore, ${\tilde\beta}^n=O(1/n)$ locally uniformly on $U_{\rm tn}$. \end{itemize} \end{Prop} \begin{proof}[Proof of Proposition \mathop{\rm Re}f{Theorem 4.4}] (i) By Lemma \mathop{\rm Re}f{lem:beta^n_k-asymptotics} and the Cauchy-Schwarz inequality, \begin{align*} &\sum _{\underset{\scriptstyle{k \not= n}}{|k| > R}} |\beta ^n_k| =\sum _{0 < |k - n| \le \frac{|n|}{2},|k|>R} |\beta ^n_k| + \sum _{|k - n| >\frac{|n|}{2}, |k|>R} |\beta ^n_k| \\ &\le C \Big( \sum _{|k| \geq |n|/2} |\gamma _k|^2 + |\mu _k - \tau _k|^2 \Big) ^{1/2} + C \big( \| \gamma \|_0 + \| \mu - \tau \|_0 \big) \Big( \sum _{m \geq \frac{|n|}{2}} \frac{1}{m^2} \Big)^{1/2} \end{align*} where here $\gamma = (\gamma _m)_{m \in \mathbb{Z}}$ and $\mu - \tau = (\mu_m - \tau _m)_{m \in \mathbb{Z}}$. Both of the latter displayed terms converge to zero locally uniformly on $U_{\rm tn}$ as $|n|$ tends to infinity, whence $\sum_{\underset{\scriptstyle {k \not= n}}{|k| > R}} \beta ^n_k = o(1)$. By Lemma \mathop{\rm Re}f{lem:beta^n_k-analytic} (i) and \cite[Theorem A.4]{GK1}, it then follows that $\sum _{\underset{\scriptstyle{k \not= n}}{|k| > R}}\beta ^n_k$ is analytic on $U_{\rm tn}$. Item (ii) is proved in Lemma \mathop{\rm Re}f{lem:beta^n_k-analytic} (ii) and item (iii) follows from Proposition \mathop{\rm Re}f{prop:beta^n_tn} and Lemma \mathop{\rm Re}f{lem:beta_tilde-asymptotics}. \end{proof} As a consequence, the angle variables \eqref{eq:angles_U_tn}, \begin{equation}\label{eq:angles_U_tn'} \theta_n(\varphi)={\tilde\beta}^n(\varphi)+\sum_{|k|>R}\beta^n_k(\varphi),\quad \varphi\in U_{\rm tn}\setminus\mathcal{Z}_n,\quad n\in\mathbb{Z}, \end{equation} are well defined. \begin{Prop}\label{prop:angles_tn} For any $n\in\mathbb{Z}$, the angle variable $\theta_n$ is defined on $U_{\rm tn}\setminus\mathcal{Z}_n$ modulo $2{\partial}i$ by \eqref{eq:angles_U_tn'}. For $|n|\le R$, $\theta_n$ is analytic and for $|n|>R$, it is analytic on $U_{\rm tn}\setminus\mathcal{Z}_n$ when taken modulo ${\partial}i$. Moreover, $\theta_n$ is real valued when restricted to $\big(U_{\rm tn}\setminus\mathcal{Z}_n\big)\cap i L^2_r$. \end{Prop} \begin{proof}[Proof of Proposition \mathop{\rm Re}f{prop:angles_tn}] In view of Proposition \mathop{\rm Re}f{prop:beta^n_tn} and Proposition \mathop{\rm Re}f{Theorem 4.4} it only remains to prove that for any $n\in\mathbb{Z}$ the angle variable $\theta_n$ is real valued when restricted to $\big(U_{\rm tn}\setminus\mathcal{Z}_n\big)\cap i L^2_r$. This follows from the analyticity of $\theta_n$, Lemma \mathop{\rm Re}f{lem:real-analyticity}, and the fact that, by Theorem \mathop{\rm Re}f{th:main_near_zero}, $\theta_n$ is real valued when restricted to the neighborhood of zero $\big(U_{s_1}\setminus\mathcal{Z}_n\big)\cap i L^2_r\subseteq{\mathcal W}_0\setminus\mathcal{Z}_n$ (cf. \eqref{eq:U_tn}). \end{proof} By combining Lemma \mathop{\rm Re}f{lem:near_zero}, Theorem \mathop{\rm Re}f{th:main_near_zero} and Proposition \mathop{\rm Re}f{prop:angles_tn}, we obtain, by arguing as in the proof of Corollary \mathop{\rm Re}f{coro:actions_poisson_relations}, the following theorem, which summarizes the results obtained in Section \mathop{\rm Re}f{sec:actions_in_U_tn} and Section \mathop{\rm Re}f{sec:angles_in_U_tn}.
\begin{Th}\label{th:commutation_relations_tn} For any $k,n\in\mathbb{Z}$ we have $\{I_n,I_k\}=0$ on $U_{\rm tn}$, $\{\theta _n,I_k\}=\delta _{nk}$ on $U_{\rm tn}\backslash {\mathcal Z}_n$, and $\{\theta_n,\theta _k\}=0$ on $U_{\rm tn}\setminus\big({\mathcal Z} _n\cup{\mathcal Z}_k\big)$. Furthermore, the action $I_n$ and the angle $\theta_n$ are real valued when restricted to $U_{\rm tn}\cap i L^2_r$ and, respectively, $\big(U_{\rm tn}\setminus\mathcal{Z}_n\big)\cap i L^2_r$. \end{Th} \section{Actions and angles in $U_{\rm iso}$}\label{sec:actions_and_angles_in_U_iso} Let ${\partial}si^{(1)}\in i L^2_r$ and assume that ${\partial}si^{(1)}$ has simple periodic spectrum. Following the construction of action-angle coordinates along the path $\gamma : [0,1]\to U_{\rm tn}\cap i L^2_r$, $s\mapsto{\partial}si^{(s)}$, in Section \mathop{\rm Re}f{sec:actions_in_U_tn} and Section \mathop{\rm Re}f{sec:angles_in_U_tn} we construct in the present Section action-angle coordinates in an open neighborhood of the isospectral set $\mathop{\rm Iso}\nolimits_o({\partial}si^{(1)})$. We prove additional properties of these coordinates that will be used in the subsequent sections. For any $n\in\mathbb{Z}$, let $\Gamma_n:=\Gamma_n(s_N)$, $\Gamma_n':=\Gamma_n'(s_N)$ where $\Gamma_n(s_N)$ and $\Gamma_n'(s_N)$ are the contours corresponding to the cut $G_n(s_N)$ constructed in Section \mathop{\rm Re}f{sec:actions_in_U_tn}. Here $s=s_N=1$ is the endpoint of the deformation $\{G_n(s)\,|\,n\in\mathbb{Z}\}_{s\in[0,1 ]}$ of cuts given by Lemma \mathop{\rm Re}f{lem:deformation_G_n}. For any ${\partial}si\in\mathop{\rm Iso}\nolimits_o({\partial}si^{(1)})$ we choose an open ball $U_{\partial}si$ of ${\partial}si$ in $L^2_c$, an integer $R_{\partial}si\in\mathbb{Z}_{\ge 0}$, and positive constants $\varepsilon_0>0$ and $\delta_0>0$ such that the following holds: \begin{itemize} \item[(I1)] The statements of Lemma \mathop{\rm Re}f{lem:counting_lemma}, Lemma \mathop{\rm Re}f{lem:dirichlet_spectrum}, and Lemma \mathop{\rm Re}f{lem:spectral_bands} in the Appendix with $\varepsilon=\varepsilon_0$ and $\delta=\delta_0$, hold uniformly in $\varphi\in U_{\partial}si$ with $R_p$, $R_D$, and $\tilde R$ replaced by $R_{\partial}si$. Moreover, for any $\varphi\in U_{\partial}si$ the $4 R_{\partial}si+2$ periodic eigenvalues of $L(\varphi)$ inside the disk $B_{R_{\partial}si}$ are {\em simple}. \item[(I2)] For any $\varphi\in U_{\partial}si$ and for any $n\in\mathbb{Z}$ the pair of periodic eigenvalues $\lambda^{\partial}m_n$ is contained in the interior domain $D_n'$ encircled by the contour $\Gamma_n'$. \item[(I3)] For any $n\in\mathbb{Z}$ there exists an analytic function $\zeta_n^{({\partial}si)} : \mathbb{C}\times U_{\partial}si\to\mathbb{C}$ such that for any $m\in\mathbb{Z}$ one has \begin{equation}\label{eq:normalization_condition} \frac{1}{2{\partial}i}\int_{\Gamma_m}\frac{\zeta_n^{({\partial}si)}(\lambda,\varphi)}{\sqrt[c]{\Delta^2(\lambda,\varphi)-4}}\,d\lambda= \delta_{nm}. \end{equation} Moreover, for any $\varphi\in U_{\partial}si$ and for any $n\in\mathbb{Z}$ the zeros $\big\{\sigma^n_k \,\big|\,k\in\mathbb{Z}\setminus\{n\}\big\}$ of the entire function $\zeta_n^{({\partial}si)}(\cdot,\varphi)$, when listed with their multiplicities, satisfy the conditions (D1)-(D3) with $R_s$ replaced by $R_{\partial}si$ and $\zeta_n^{(s)}$ replaced by $\zeta_n^{({\partial}si)}$. 
The canonical root in \eqref{eq:normalization_condition} is defined by \eqref{eq:canonical_root} and the map \[ \Big(\mathbb{C}\setminus\big(\bigsqcup_{k\in\mathbb{Z}}\overline{D_k'}\big)\Big)\times U_{\partial}si\to\mathbb{C}, \quad(\lambda,\varphi)\mapsto\sqrt[c]{\Delta^2(\lambda,\varphi)-4}, \] is analytic. \end{itemize} Since the isospectral set $\mathop{\rm Iso}\nolimits_o({\partial}si^{(1)})$ is compact (see Proposition 2.2 in \cite{KTPreviato}), there exist finitely many elements $\eta^{(j)}\in\mathop{\rm Iso}\nolimits_o({\partial}si^{(1)})$, $1\le j\le J$, such that $\mathop{\rm Iso}\nolimits_o({\partial}si^{(1)})$ is contained in the connected open set \begin{equation}\label{eq:U_iso} U_{\rm iso}:=\bigcup_{1\le j\le J} U_{\eta^{(j)}}\subseteq L^2_c. \end{equation} Take $R:=\max_{1\le j\le J} R_{\eta^{(j)}}$. Using the compactness of $\mathop{\rm Iso}\nolimits_o({\partial}si^{(1)})$ and the fact that ${\partial}si^{(1)}$ has simple periodic spectrum one sees that if necessary, the neighborhood $U_{\rm iso}$ can be shrunk so that $U_{\rm iso}$ and $U_{\rm iso}\cap i L^2_r$ are connected and for any $\varphi\in U_{\rm iso}$ the periodic eigenvalues of $L(\varphi)$ inside the disk $B_R$ are simple. Note that if $\varphi\in U_{\eta^{(k)}}\cap U_{\eta^{(l)}}$ for some $1\le k<l\le J$ then in view of the normalization condition \eqref{eq:normalization_condition} and \cite[Proposition 5.2]{KT2} one concludes (cf. Remark \mathop{\rm Re}f{rem:uniqueness_differentials}) that for any $n\in\mathbb{Z}$ and for any $\lambda\in\mathbb{C}$ we have that $\zeta_n^{(\eta_l)}(\lambda,\varphi)=\zeta_n^{(\eta_k)}(\lambda,\varphi)$. This allows us to define for any $n\in\mathbb{Z}$ the analytic map \[ \zeta_n : \mathbb{C}\times U_{\rm iso}\to\mathbb{C} \] such that for any $\varphi\in U_{\rm iso}$ and for any $m\in\mathbb{Z}$, \begin{equation}\label{eq:normalization_condition_U_iso} \frac{1}{2{\partial}i}\int_{\Gamma_m}\frac{\zeta_n(\lambda,\varphi)}{\sqrt[c]{\Delta^2(\lambda,\varphi)-4}}\,d\lambda= \delta_{nm}. \end{equation} Hence the following holds. \begin{Prop}\label{prop:preparation_U_iso} Let ${\partial}si^{(1)}\in i L^2_r$ and assume that $\mathop{\rm Spec}\nolimits_p L({\partial}si^{(1)})$ is simple. Then there exist a connected open neighborhood $U_{\rm iso}$ of $\mathop{\rm Iso}\nolimits_o({\partial}si^{(1)})$ in $L^2_c$ with $U_{\rm iso}\cap i L^2_r$ connected, analytic maps $\zeta_n : \mathbb{C}\times U_{\rm iso}\to\mathbb{C}$, $n\in\mathbb{Z}$, and an integer $R\in\mathbb{Z}_{\ge 0}$ such that the conditions (I1)-(I3) above hold with $U_{\partial}si$ replaced by $U_{\rm iso}$, $\zeta_n^{({\partial}si)}$ replaced by $\zeta_n$, and $R_{\partial}si$ replaced by $R$. In particular, for any $\varphi\in U_{\rm iso}$ and $m,n\in\mathbb{Z}$ we have the normalization condition \eqref{eq:normalization_condition_U_iso}. \end{Prop} As in Section \mathop{\rm Re}f{sec:actions_in_U_tn}, for any $n\in\mathbb{Z}$ and any $\varphi\in U_{\rm iso}$ define the {\em $n$-th action} \begin{equation}\label{eq:I_n_in_U_iso} I_n(\varphi):=\frac{1}{{\partial}i}\int_{\Gamma_n} \frac{\lambda{\dot\Delta}(\lambda,\varphi)}{\sqrt[c]{\Delta^2(\lambda,\varphi)-4}}\,d\lambda. \end{equation} One easily sees from Proposition \mathop{\rm Re}f{prop:preparation_U_iso}, Corollary \mathop{\rm Re}f{coro:actions_poisson_relations}, and Lemma \mathop{\rm Re}f{lem:real-analyticity} that for any $n\in\mathbb{Z}$, the map $I_n : U_{\rm iso}\to\mathbb{C}$ is analytic and real-valued when restricted to $U_{\rm iso}\cap i L^2_r$. 
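Note that, by Proposition \mathop{\rm Re}f{prop:preparation_U_iso} and item (I3), the canonical root is analytic on $\Big(\mathbb{C}\setminus\big(\bigsqcup_{k\in\mathbb{Z}}\overline{D_k'}\big)\Big)\times U_{\rm iso}$ and has no zeros there, its zeros in $\lambda$ being the periodic eigenvalues of $L(\varphi)$, which by (I2) are contained in $\bigsqcup_{k\in\mathbb{Z}}D_k'$. Hence the integrand in \eqref{eq:I_n_in_U_iso} is analytic in $\lambda$ on $\mathbb{C}\setminus\big(\bigsqcup_{k\in\mathbb{Z}}\overline{D_k'}\big)$ and, by Cauchy's theorem, \[ I_n(\varphi)=\frac{1}{{\partial}i}\int_{{\widetilde\Gamma}_n} \frac{\lambda{\dot\Delta}(\lambda,\varphi)}{\sqrt[c]{\Delta^2(\lambda,\varphi)-4}}\,d\lambda \] for any contour ${\widetilde\Gamma}_n$ -- a notation used only in this remark -- which is homotopic to $\Gamma_n$ within this domain; compare the deformation of $\Gamma_n$ to the spectral band $g_n$ in the proof of Lemma \mathop{\rm Re}f{lem:negative_actions} below.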
Next let us define the angle variables. To this end introduce for any $n\in\mathbb{Z}$ the analytic subvariety of $U_{\rm iso}$, \begin{equation}\label{eq:Z_n_in_U_iso} \mathcal{Z}_n=\big\{\varphi\in U_{\rm iso}\,|\,\gamma_n^2(\varphi)=0\big\}. \end{equation} For $|n|>R$, the set $\mathcal{Z}_n\cap i L^2_r$ is a real analytic submanifold of $i L^2_r$ of real codimension two. Since for any $\varphi\in U_{\rm iso}$ the periodic spectrum of $L(\varphi)$ is simple inside the disk $B_R$, we see that $\mathcal{Z}_n=\emptyset$ for any $|n|\le R$. Arguing as in the proof of Proposition \mathop{\rm Re}f{prop:beta^n_tn} one shows that after shrinking $U_{\rm iso}$, if needed, one can analytically extend the angle variable $\theta_n$, introduced near ${\partial}si^{(1)}$ in Proposition \mathop{\rm Re}f{prop:angles_tn}, to $U_{\rm iso}\setminus\mathcal{Z}_n$ so that it is of the form \begin{equation}\label{eq:theta_n_U_iso} \theta_n(\varphi)={\tilde\beta}^n(\varphi)+\sum_{|k|>R}\beta^n_k(\varphi),\quad \beta^n_k(\varphi)=\int_{\lambda_k^-(\varphi)}^{\mu_k^*(\varphi)} \frac{\zeta_n(\lambda,\varphi)}{\sqrt{\Delta^2(\lambda,\varphi)-4}}\,d\lambda\quad\forall\,|k|>R, \end{equation} \[ {\tilde\beta}^n(\varphi)=\sum_{|k|\le R}\int_{{\mathcal P}^*[\lambda_k^-(\varphi),\mu_\varphi(\lambda_k^-(\varphi))]} \frac{\zeta_n(\lambda,\varphi)}{\sqrt{\Delta^2(\lambda,\varphi)-4}}\,d\lambda, \] defined modulo $2{\partial}i$ on $U_{\rm iso}\setminus\mathcal{Z}_n$, real valued when restricted to $\big(U_{\rm iso}\setminus\mathcal{Z}_n\big)\cap i L^2_r$, and analytic on $U_{\rm iso}\setminus\mathcal{Z}_n$ when considered modulo ${\partial}i$ if $|n|>R$ and modulo $2{\partial}i$ if $|n|\le R$. By the arguments yielding Theorem \mathop{\rm Re}f{th:commutation_relations_tn} (cf. Corollary \mathop{\rm Re}f{coro:actions_poisson_relations}) we then obtain \begin{Th}\label{th:commutation_relations_iso} Let ${\partial}si\in i L^2_r$ and assume that $\mathop{\rm Spec}\nolimits_p L({\partial}si)$ is simple and $U_{\rm iso}$ is the neighborhood of $\mathop{\rm Iso}\nolimits_o({\partial}si)$ introduced above. Then for any $k,n\in\mathbb{Z}$ we have $\{I_n,I_k\}=0$ on $U_{\rm iso}$, $\{\theta _n, I_k \} = \delta _{nk}$ on $U_{\rm iso}\backslash {\mathcal Z}_n$, and $\{\theta _n,\theta _k\} = 0$ on $U_{\rm iso}\setminus\big({\mathcal Z}_n \cup {\mathcal Z}_k\big)$. Furthermore, the action $I_n$ and the angle $\theta_n$ are real valued when restricted to $U_{\rm iso}\cap i L^2_r$ and, respectively, $\big(U_{\rm iso}\setminus\mathcal{Z}_n\big)\cap i L^2_r$. \end{Th} \begin{Rem}\label{rem:U_iso} Note that the statements of Lemma \mathop{\rm Re}f{lem:beta^n_k-asymptotics}, Lemma \mathop{\rm Re}f{lem:beta_tilde-asymptotics}, and Proposition \mathop{\rm Re}f{Theorem 4.4} $(i)$ also hold in $U_{\rm iso}$. \end{Rem} We finish this Section by proving properties of the actions used in the subsequent Section for constructing Birkhoff coordinates in $U_{\rm iso}$. \begin{Lem}\label{lem:negative_actions} For any $\varphi\in U_{\rm iso}\cap i L^2_r$ and for any $|n| > R$ we have that $I_n(\varphi ) \le 0$. Moreover, $I_n(\varphi ) = 0$ if and only if $\lambda^+_n(\varphi) =\lambda^-_n(\varphi)$. \end{Lem} \begin{proof}[Proof of Lemma \mathop{\rm Re}f{lem:negative_actions}] We follow the arguments of the proof of Theorem 13.1 in \cite{GK1}.
For any $\varphi\in U_{\rm iso}\cap i L^2_r$ and $|n| > R$ with $\lambda^+_n=\lambda^-_n$, one has $\lambda^+_n=\dot\lambda_n$ and hence by Cauchy's theorem applied to the integral in \eqref{eq:I_n_in_U_iso}, $I_n(\varphi ) = 0$. In the case where $\lambda^+_n\ne\lambda^-_n$, let $g_n$ be the spectral band of Lemma \mathop{\rm Re}f{lem:spectral_bands} in Appendix, which by Proposition \mathop{\rm Re}f{prop:preparation_U_iso} and item (I1) holds with $\tilde R$ replaced by $R$. Recall that $\Delta (\lambda ^{\partial}m _n) = (-1)^n 2$ and, for $\lambda \in g_n \backslash \{ \lambda ^{\partial}m _n \}$, $\Delta(\lambda)$ is real and $-2<\Delta(\lambda)<2$. One easily sees that \[ \sqrt[c]{\Delta (\lambda {\partial}m 0)^2 - 4} = {\partial}m i (-1)^{n+1}\sqrt[+]{4 -\Delta (\lambda )^2}\quad\forall\lambda \in g_n . \] By deforming $\Gamma _n$ to $g_n$ and denoting by $g^+_n$ the arc $g_n$ with the orientation determined by starting at $\lambda ^-_n$ one gets \[ I_n = \frac{1}{{\partial}i } \int _{g^+_n} \frac{\lambda \dot \Delta (\lambda )} { i (-1)^{n+1} \sqrt[+]{4 - \Delta (\lambda )^2}}\,d\lambda - \frac{1}{{\partial}i }\int_{g^+_n}\frac{\lambda\dot\Delta (\lambda )}{- i (-1)^{n+1}\sqrt[+]{4 -\Delta (\lambda )^2}}\,d\lambda . \] Moreover, for any $\lambda \in g_n$ the quantity $(-1)^n\Delta(\lambda){\partial}m i\sqrt[+]{4 - \Delta(\lambda )^2}$ is in the domain of the principal branch of the logarithm, $\log z=\log|z|+ i \arg z$ where $-{\partial}i<\arg(z)<{\partial}i$. Integrating by parts one then gets \begin{align*} I_n = &- \frac{1}{{\partial}i } \int _{g^+_n} \log \left( (-1)^n \Delta (\lambda ) + i \sqrt[+]{4 - \Delta (\lambda )^2}\right) d\lambda \\ &+ \frac{1 }{{\partial}i } \int _{g^+_n} \log \left( (-1)^n \Delta(\lambda ) - i\sqrt[+]{4 - \Delta (\lambda )^2}\right)\,d\lambda . \end{align*} Let $f_{\partial}m(\lambda):= (-1)^n\Delta(\lambda){\partial}m i\sqrt[+]{4-\Delta(\lambda )^2}$ and note that for any $\lambda \in g_n$, $|f_+(\lambda )| = |f_-(\lambda )|=2$ whereas $\arg f_{\partial}m (\lambda ) = 0$ for $\lambda\in\{\lambda^{\partial}m _n\}$ and $0 <{\partial}m\arg f_{\partial}m (\lambda)< {\partial}i$ for any $\lambda\in g_n\setminus\{\lambda^{\partial}m_n \}$. Hence $I_n=-\frac{1}{{\partial}i}\int_{g^+_n}\big(\arg f_+(\lambda )-\arg f_-(\lambda )\big)\,d\lambda < 0$. \end{proof} Lemma \mathop{\rm Re}f{lem:negative_actions} is applied to study the quotient $I_n / \gamma ^2_n$. We have the following \begin{Lem}\label{lem:actions_asymptotics} \begin{itemize} \item[(i)] For any $|n| > R$, the quotient $I_n / \gamma ^2_n : U_{\rm iso}\setminus{\mathcal Z}_n\to\mathbb{C}$ extends analytically to $U_{\rm iso}$ so that \begin{equation}\label{3.8} \frac{4 I_n}{\gamma ^2_n} = 1 + \ell ^2_n, \quad |n| > R \end{equation} locally uniformly on $U_{\rm iso}$. Furthermore, $I_n / \gamma ^2_n$ is real on $U_{\rm iso}\cap i L^2_r$. \item[(ii)] For any $|n| > R$ and $\varphi \in U_{\rm iso}\cap i L^2_r$, \[ 4 I_n /\gamma ^2_n>0. \] Furthermore, after shrinking $U_{\rm iso}$ if necessary, for any $|n| > R$, the real part of $4 I_n / \gamma ^2_n$ is bounded away from $0$ uniformly in $|n| > R$ and locally uniformly on $U_{\rm iso}$. Moreover, the square root $\xi _n:= \sqrt[+]{4 I_n / \gamma ^2_n}$ satisfies the asymptotics \[ \xi _n = 1 + \ell ^2_n \] locally uniformly on $U_{\rm iso}$. \end{itemize} \end{Lem} \begin{proof}[Proof of Lemma \mathop{\rm Re}f{lem:actions_asymptotics}] We follow the arguments of the proof of Theorem 13.3 in \cite{GK1}. 
(i) We show that for any $|n|>R$ the quantity $I_n / \gamma ^2_n$ continuously extends to all of $U_{\rm iso}$ and its restriction to ${\mathcal Z}_n$ is weakly analytic. By \cite[Theorem A.6]{GK1}, it then follows that $I_n / \gamma ^2_n$ is analytic on $U_{\rm iso}$. Let $\varphi \in U_{\rm iso}\setminus{\mathcal Z}_n$ for some given $|n| > R$. By the definition \eqref{eq:canonical_root} of the canonical root and the product representation of $\dot\Delta(\lambda )$ (cf. Lemma \mathop{\rm Re}f{lem:product1}) one has for $\lambda $ near $\Gamma_n$, \[ \frac{\dot\Delta (\lambda )}{\sqrt[c]{\Delta (\lambda )^2 - 4}} =\frac{\dot \lambda _n - \lambda }{ i w_n(\lambda )}\,\chi_n(\lambda) \quad\text{\rm and}\quad \chi_n(\lambda)={\partial}rod_{k \ne n}\frac{\dot\lambda_k - \lambda }{w_k(\lambda )} \] where, by \eqref{eq:w_k}, $w_k(\lambda)=\sqrt[\rm st]{(\lambda ^+_k - \lambda )(\lambda ^-_k - \lambda )}$. By the definition \eqref{eq:I_n_in_U_iso} of $I_n$, \[ I_n = \frac{ i }{{\partial}i } \int _{\Gamma _n} \frac{(\lambda-\dot\lambda _n)^2}{\sqrt[\rm st]{(\lambda^+_n-\lambda )(\lambda^-_n-\lambda)}}\, \chi_n(\lambda)\,d\lambda. \] The assumption $\gamma _n \not= 0$ allows us to make the substitution $\lambda= \tau _n + z \gamma _n / 2$, $\tau _n = (\lambda ^+_n + \lambda ^-_n) / 2$, which in view of the definition \eqref{eq:standard_root} of the standard root leads to \[ \sqrt[\rm st]{(\lambda ^+_n - \lambda )(\lambda ^-_n - \lambda )} \big \arrowvert _{z - i0} = i \frac{\gamma _n}{2} \sqrt[+]{1 - z^2},\quad - 1 \leq z \leq 1 . \] Hence, with $z_n = 2(\dot \lambda _n - \tau _n) / \gamma _n$, \[ \frac{4 I_n}{\gamma^2_n} = \frac{2}{{\partial}i}\int^1_{-1}\frac{(z - z_n)^2}{\sqrt[+]{1 - z ^2}}\,\chi_n(\tau_n+z\gamma_n/2)\,dz . \] By Lemma \mathop{\rm Re}f{lem:counting_lemma} (ii), $z_n \rightarrow 0$ as $\gamma_n\to 0$. Thus \[ \frac{4 I_n}{\gamma^2_n}\to\chi_n(\tau _n)\, \frac{2}{{\partial}i}\int^1_{-1}\frac{z^2}{\sqrt[+]{1-z^2}}\,dz=\chi_n(\tau_n) . \] Indeed, the substitution $z=\sin t$ shows that $\int^1_{-1}\frac{z^2}{\sqrt[+]{1-z^2}}\,dz=\int_{-{\partial}i/2}^{{\partial}i/2}\sin^2 t\,dt={\partial}i/2$. This shows that $I_n / \gamma ^2_n$ is continuous on all of $U_{\rm iso}$. Note that $\chi _n(\tau _n) \ne 0$. Moreover, by the argument principle, $\tau_n$ is analytic on $U_{\rm iso}$ and arguing as in the proof of \cite[Lemma 12.7]{GK1} one sees that $\chi _n$ is analytic on $D_n \times U_{\rm iso}$. Hence the composition $\chi _n(\tau _n)$ is analytic on $U_{\rm iso}$ and thus in particular weakly analytic on ${\mathcal Z}_n$. By \cite[Theorem A.6]{GK1}, $I_n / \gamma ^2_n$ extends analytically to all of $U_{\rm iso}$. Arguing as in the proof of \cite[Lemma 12.10]{GK1} one sees that $\chi _n(\lambda ) = 1 + \ell ^2_n$ for $\lambda$ near the interval \[ [\lambda ^-_n, \lambda ^+_n] = \big\{ (1 - t) \lambda ^-_n + t \lambda ^+_n\,\big|\,0 \leq t \leq 1\big\} \] locally uniformly on $U_{\rm iso}$. By the asymptotics $z_n = \gamma _n \ell ^2_n$ (cf. Lemma \mathop{\rm Re}f{lem:counting_lemma} $(ii)$) it then follows that $4 I_n/\gamma^2_n = 1+\ell ^2_n$ locally uniformly on $U_{\rm iso}$. (ii) By Lemma \mathop{\rm Re}f{lem:spectrum_symmetries} one has for any $\varphi\in U_{\rm iso}\cap i L^2_r$, $\gamma^2_n\le 0$. By Lemma \mathop{\rm Re}f{lem:negative_actions} it then follows that for any $|n| > R$ with $\gamma _n \ne 0$ we have $I_n/\gamma^2_n> 0$ whereas if $\gamma_n = 0$, one has by the proof of item (i) that $4 I_n/\gamma^2_n = \chi_n(\tau_n)\ne 0$. By the continuity of $I_n/\gamma^2_n$ on $U_{\rm iso}\cap i L^2_r$ one then concludes that $I_n / \gamma ^2_n > 0$ on $U_{\rm iso}\cap i L^2_r$.
Furthermore, by the asymptotics established in item (i), $4 I_n/\gamma ^2_n\to 1$ locally uniformly on $U_{\rm iso}$. By shrinking $U_{\rm iso}$ if necessary we can assure that $\mathop{\rm Re}(4 I_n / \gamma ^2_n)$ is bounded away from $0$ locally uniformly on $U_{\rm iso}$ and uniformly in $|n| > R$. Then $\xi_n = \sqrt[+]{4I_n /\gamma^2_n}$ is well defined and real analytic on $U_{\rm iso}$. It is positive on $U_{\rm iso}\cap i L^2_r$ and satisfies the asymptotics $1+ \ell ^2_n$ locally uniformly on $U_{\rm iso}$. \end{proof} \section{The pre-Birkhoff map and its Jacobian}\label{sec:birkhoff_map} In this Section we construct the pre-Birkhoff map $\Phi : U_{\rm iso}\to\ell^2_c$ in an open neighborhood $U_{\rm iso}$ of the isospectral set $\mathop{\rm Iso}\nolimits_o({\partial}si^{(1)})$ of an arbitrary given potential ${\partial}si^{(1)}\in i L^2_r$ with simple periodic spectrum, using the action and angle variables introduced in Section \mathop{\rm Re}f{sec:actions_and_angles_in_U_iso}. We then prove that the restriction $\Phi : U_{\rm iso}\cap i L^2_r\to i\ell^2_r$ is a local diffeomorphism.\footnote{For simplicity of notation we will use the same symbol for the map $\Phi$ and its restriction to $U_{\rm iso}\cap i L^2_r$.} Without further reference we will use the notations and results of the previous sections. For any $|n| > R$ and $\varphi\in U_{\rm iso}\setminus{\mathcal Z}_n$ we define \begin{equation}\label{eq:x,y} x_n:= \frac{\xi _n \gamma _n}{\sqrt{2}} \cos\theta _n , \quad y_n:= \frac{\xi _n \gamma _n}{\sqrt{2}} \sin\theta _n \end{equation} where $\theta_n : U_{\rm iso}\setminus\mathcal{Z}_n\to\mathbb{C}$ is the $n$-th angle \eqref{eq:theta_n_U_iso}, $\gamma_n=\lambda^+_n-\lambda^-_n$, and $\xi_n : U_{\rm iso}\to\mathbb{C}$ is the real-analytic non vanishing function introduced in Section \mathop{\rm Re}f{sec:actions_and_angles_in_U_iso}. Recall from Lemma \mathop{\rm Re}f{lem:actions_asymptotics} that \[ 4 I_n=(\xi_n\gamma_n)^2 \] where the $n$-th action $I_n$ is defined by \eqref{eq:I_n_in_U_iso} and $\xi_n$ satisfies \begin{equation}\label{eq:xi_asymptotics} \xi _n = 1 + \ell ^2_n \end{equation} locally uniformly in $U_{\rm iso}$. Note that on $U_{\rm iso}\cap i L^2_r$, $\xi_n$ is real valued and $\gamma_n\in i\mathbb{R}$ for any $|n|>R$. Since $\theta_n$, defined modulo $2{\partial}i$, is real valued on $\big(U_{\rm iso}\setminus\mathcal{Z}_n\big)\cap i L^2_r$ it then follows that $x_n, y_n\in i\mathbb{R}$ on $\big(U_{\rm iso}\setminus\mathcal{Z}_n\big)\cap i L^2_r$, $|n|>R$. Using the arguments from \cite[Section 16]{GK1} we now show that $(x_n,y_n)_{|n|>R}$ can be analytically extended to $U_{\rm iso}$ in $L^2_c$. First note that due to the lexicographic ordering $\lambda ^-_n {\partial}reccurlyeq\lambda ^+_n$, the $n$-th gap length $\gamma _n$ is not necessarily continuous on $U_{\rm iso}$. On the other hand, by Proposition \mathop{\rm Re}f{Theorem 4.4} the quantity $\beta ^n_n$ is defined modulo $2{\partial}i$ on $U_{\rm iso}\setminus{\mathcal Z}_n$ and real analytic when taken modulo ${\partial}i$ whereas in view of the definition \eqref{eq:theta_n_U_iso} of the angle $\theta_n$ and Proposition \mathop{\rm Re}f{prop:beta^n_tn}, the difference $\theta_n-\beta ^n_n$ is real analytic on $U_{\rm iso}$. We will first focus our attention on the complex functions \[ z^+_n:=\gamma _n e^{i \beta ^n_n}, \quad z^-_n:=\gamma _n e^{- i \beta ^n_n} , \quad |n| > R . 
\] \begin{Lem}\label{Lemma 5.1} The functions $z^{\partial}m_n=\gamma_n e^{{\partial}m i \beta ^n_n}$ are analytic on $U_{\rm iso}\setminus{\mathcal Z}_n$ for any $|n| > R$. \end{Lem} \begin{proof}[Proof of Lemma \mathop{\rm Re}f{Lemma 5.1}] We follow the arguments of the proof of Lemma 16.1 in \cite{GK1}. Fix $|n| > R$ arbitrarily. Arguing as in \cite[Proposition 7.5]{GK1} one easily sees that locally around any potential in $U_{\rm iso}\setminus{\mathcal Z}_n$, there exist analytic functions $\varrho ^+_n$ and $\varrho ^-_n$ such that the set equality $\{ \varrho ^-_n, \varrho ^+_n\} = \{ \lambda ^-_n, \lambda ^+_n\}$ holds. Let \[ \tilde \gamma _n := \varrho ^+_n - \varrho ^-_n,\quad {\tilde\beta}^n_n := \int ^{\mu _n}_{\varrho ^-_n} \frac{\zeta _n (\lambda )}{\sqrt[\ast ]{\Delta (\lambda )^2 - 4}}\,d\lambda\quad(\mathop{\rm mod} 2{\partial}i). \] If $\varrho^-_n = \lambda^-_n$ (and then $\varrho ^+_n = \lambda ^+_n$), one has $\gamma _n = \tilde \gamma _n$ and $\beta ^n_n = \tilde \beta ^n_n$, whereas if $\varrho ^-_n = \lambda ^+_n$ (and hence $\varrho ^+_n = \lambda^-_n$), in view of the normalization condition \[ \int ^{\lambda ^+_n}_{\lambda^-_n} \frac{\zeta _n (\lambda )}{\sqrt[\ast ]{\Delta (\lambda )^2 - 4}}\,d\lambda \in{\partial}i+2{\partial}i\mathbb{Z}, \] one has \[ \gamma_n=-\tilde \gamma _n,\quad \beta ^n_n=\int ^{\mu _n}_{\lambda^-_n} \frac{\zeta _n(\lambda )}{\sqrt[\ast ]{\Delta(\lambda )^2 - 4}}\,d\lambda ={\tilde\beta}^n_n+{\partial}i\quad(\mathop{\rm mod} 2{\partial}i). \] Thus in both cases $\gamma _n e^{{\partial}m i \beta ^n_n}=\tilde \gamma _n e ^{{\partial}m i \tilde \beta ^n_n }$. As the right-hand side of the latter identity is analytic, the Lemma follows. \end{proof} Next we study the limiting behavior of $z^{\partial}m _n$, $|n| > R$, as $\varphi$ approaches a potential ${\partial}si\in U_{\rm iso}$ with the $n$-th gap collapsed. This limit is different from zero when ${\partial}si$ is in the set \[ {\mathcal F}_n:=\big\{{\partial}si\in U_{\rm iso}\,\big|\,\mu _n \notin G_n\big\} \] where $G_n=[\lambda ^-_n,\lambda ^+_n]$. Note that the set ${\mathcal F}_n$ is open. On ${\mathcal F}_n$, the sign function \begin{equation}\label{5.1} \varepsilon_n:=\frac{\sqrt[\ast ]{\Delta(\mu _n)^2-4}}{\sqrt[c]{\Delta (\mu _n)^2 - 4}} \end{equation} is well defined and locally constant. \begin{Lem}\label{Lemma 5.2} Let $|n| > R$ and ${\partial}si \in {\mathcal Z}_n$. If $\varphi\in {\mathcal F}_n \backslash {\mathcal Z}_n$ tends to ${\partial}si $, then \[ \gamma _n e^{{\partial}m i \beta ^n_n} \to \begin{cases} 2(\tau_n-\mu _n)(1{\partial}m\varepsilon_n) e^{\chi _n}&\text{\rm if}\quad{\partial}si\in {\mathcal F}_n\cap{\mathcal Z}_n \\ 0 &\text{\rm if}\quad{\partial}si\in {\mathcal Z}_n\setminus{\mathcal F}_n \end{cases} \] where \[ \chi_n=\int ^{\mu_n}_{\tau_n} \frac{Z_n(\tau_n)-Z_n(\lambda )}{\tau_n-\lambda }\,d\lambda,\quad Z_n(\lambda)=-{\partial}rod_{m\ne n}\frac{\sigma ^n_m-\lambda} {w_m(\lambda)}, \] and $w_m(\lambda)=\sqrt[\rm st]{(\lambda_m^+-\lambda)(\lambda_m^--\lambda)}$. \end{Lem} \begin{proof}[Proof of Lemma \mathop{\rm Re}f{Lemma 5.2}] We follow the arguments of the proof of Lemma 16.2 in \cite{GK1}. Without loss of generality we may choose for $\varphi\in{\mathcal F}_n \backslash{\mathcal Z}_n$ a path of integration in the integral of $\beta ^n_n$ which meets $G_n$ only at the initial point $\lambda ^-_n$.
Using the definition \eqref{5.1} of the sign $\varepsilon_n$ and the definition \eqref{eq:canonical_root} of $\sqrt[c]{\Delta (\lambda )^2 -4}$, we then can write \begin{equation}\label{5.1bis} \frac{\zeta _n(\lambda )}{\sqrt[\ast ]{\Delta (\lambda )^2- 4}} = - i\varepsilon _n \frac{Z_n(\lambda )}{w_n(\lambda )} \end{equation} and get modulo $2{\partial}i$, \[ i \beta ^n_n = i \int ^{\mu _n}_{\lambda ^-_n} \frac{\zeta _n (\lambda )}{\sqrt[\ast ]{\Delta (\lambda )^2 - 4}}\,d\lambda = \varepsilon_n\int ^{\mu _n}_{\lambda ^-_n} \frac{Z_n(\lambda )}{w_n(\lambda )}\,d\lambda, \] where $w_n(\lambda )$ is well defined as we consider a path of integration from $\lambda ^-_n$ to $\mu _n$ inside $D_n$ which meets $G_n$ only at its initial point. We decompose the numerator $Z_n(\lambda )$ into three terms, \[ Z_n(\lambda ) = \big(Z_n(\lambda ) - Z_n(\tau _n)\big) + \big(Z_n(\tau _n) + 1\big) - 1 \] and denote the corresponding integrals by $o_n, v_n$, and $\omega _n$, respectively. The limit of the first term is straightforward. Note that $Z_n$ is analytic in some neighborhood $D_n \times V_{\partial}si \subseteq\mathbb{C} \times L^2_c$ of $(n{\partial}i , {\partial}si )$. Moreover, if $\varphi\to {\partial}si $, then $\lambda ^{\partial}m _n(\varphi ) \to \tau_n({\partial}si )$, $w_n(\lambda )\to\tau_n-\lambda $, and $\mu_n(\varphi )\to\mu _n({\partial}si )$. Thus by the definition of $\chi_n({\partial}si)$, \[ o_n = \int ^{\mu _n}_{\lambda ^-_n} \frac{Z_n(\lambda ) - Z_n(\tau _n)}{w_n(\lambda )}\,d\lambda\to \int ^{\mu _n}_{\tau _n} \frac{Z_n(\lambda ) - Z_n(\tau _n)}{\tau _n -\lambda }\,d\lambda = - \chi _n({\partial}si ) . \] For the second term we have in view of the identity \eqref{5.2} below and the estimate $Z_n(\tau _n) + 1 = O(\gamma _n)$ by Lemma \mathop{\rm Re}f{Lemma 5.3} below \[ v_n = \big(Z_n(\tau _n) +1\big)\int ^{\mu _n}_{\lambda ^-_n} \frac{1}{w_n(\lambda )} d\lambda\to 0 . \] Now consider the third term. Using a limiting argument we can compute it modulo $2{\partial}i i$ on ${\mathcal F}_n \backslash {\mathcal Z}_n$ by choosing the line segment $\lambda=\tau _n+t\gamma_n /2$ with $-1\le t \le \varrho _n$ as path of integration where $\varrho _n =2(\mu _n - \tau _n) / \gamma _n$. In case the interval $[\lambda^-_n,\mu_n]$ intersects $G_n\setminus\{\lambda_n^-\}$ it actually contains all of $G_n$. One easily verifies that in this case the choice of the sign of $w_n(\lambda)$ along $G_n$ does not matter. We then get modulo $2{\partial}i i$ \begin{equation}\label{5.2} \omega _n = \int ^{\mu _n}_{\lambda ^-_n} \frac{d\lambda }{w_n(\lambda ) } = \int ^{\mu _n}_{\lambda ^-_n} \frac{d\lambda }{\sqrt[\rm st]{(\lambda ^+_n - \lambda )(\lambda ^-_n- \lambda )}} = \int ^{\varrho _n}_{-1} \frac{dt}{\sqrt[\rm st]{(1-t)(-1-t)}} \end{equation} with $\sqrt[\rm st]{(1-t)(-1-t)} = - t \sqrt[+]{1 - t^{-2}}$ for $|t| \to \infty $. Note that for $\varphi \in {\mathcal F}_n \backslash {\mathcal Z}_n$ one has $\varrho _n \in \mathbb{C} \backslash [-1,1]$. We claim that \begin{equation}\label{5.3} e^{-\varepsilon _n \omega _n} =- \varrho _n + \varepsilon_n\sqrt[\rm st]{(1 - \varrho _n)(-1-\varrho _n)} . \end{equation} Indeed both sides of \eqref{5.3}, viewed as functions of $\varrho _n$, are solutions of the initial value problem \[ \frac{f'(w)}{f(w)} = \frac{-\varepsilon_n}{\sqrt[\rm st]{(1 - w)(-1-w)}},\quad f(-1) = 1, \quad w\in\mathbb{C}\setminus[-1,1]. \] Now we compute the limit $\varphi \to {\partial}si $. First consider the case where ${\partial}si \in {\mathcal F}_n \cap {\mathcal Z}_n$. 
Then $\mu _n - \tau _n$ does not converge to zero. This implies $\varrho ^{-1}_n \to 0$ and further \begin{eqnarray} \gamma _n e^{-\varepsilon _n \omega _n} &=&-\gamma_n\varrho_n + \varepsilon_n\gamma_n\sqrt[\rm st]{(1-\varrho_n)(-1-\varrho_n)}\nonumber\\ &=&-\gamma_n\varrho_n-\varepsilon_n\gamma_n\varrho_n \sqrt[+]{1-\varrho_n^{-2}}\nonumber\\ &=&2(\tau _n - \mu _n)(1 + \varepsilon _n \sqrt[+]{1 -\varrho ^{-2}_n})\nonumber\\ &\to&2(\tau _n - \mu _n)(1 + \varepsilon _n).\label{5.4} \end{eqnarray} In the case where ${\partial}si \in {\mathcal Z}_n \backslash {\mathcal F}_n$, one has $\gamma_n \varrho _n = 2(\mu _n - \tau _n) \to 0$ and thus concludes that \[ \gamma _n e^{-\varepsilon _n \omega _n} = - \gamma_n\varrho_n + \varepsilon _n \gamma _n \sqrt[\rm st]{(1 - \varrho _n)(-1-\varrho _n)} \to 0 . \] (Actually, this case is included in the above result since it does not matter that $\varepsilon _n$ is not well defined outside of ${\mathcal F}_n$.) So in both cases we obtain \[ \gamma_n e^{-\varepsilon_n\omega_n}\to 2(\tau_n-\mu_n)(1+\varepsilon_n) . \] Combining the results for all three integrals we obtain \begin{equation}\label{5.5} \gamma _n e^{ i \beta^n_n} = \gamma_n e^{-\varepsilon _n \omega _n}e^{\varepsilon_n(o_n+v_n)}\to 2(\tau_n-\mu_n)(1+\varepsilon_n)e^{\varepsilon_n\chi_n}. \end{equation} This agrees with the claim for $z^+_n$: for $\varepsilon _n = -1$ the limit vanishes, and for $\varepsilon_n = 1$ one has $e^{\varepsilon _n \chi _n}= e^{\chi _n}$. For $z^-_n$ we just have to switch the sign of $\varepsilon_n$ in \eqref{5.3} to obtain \[ \gamma_n e^{- i\beta^n_n} = \gamma_n e^{\varepsilon_n\omega _n}e^{-\varepsilon_n(o_n+v_n)}\to 2(\tau_n-\mu_n)(1-\varepsilon_n)e^{-\varepsilon_n\chi_n} . \] In particular the limit vanishes for $\varepsilon_n=1$. \end{proof} \begin{Lem}\label{Lemma 5.3} For $\lambda\in G_n$, $|n| > R$, one has $Z_n(\lambda)=-1+O(\gamma _n)$ locally uniformly on $U_{\rm iso}$. \end{Lem} \begin{proof}[Proof of Lemma \mathop{\rm Re}f{Lemma 5.3}] We follow the arguments of the proof of Lemma 16.3 in \cite{GK1}. In analogy to \eqref{5.1bis} we write \[ \frac{\zeta _n(\lambda )}{\sqrt[c]{\Delta (\lambda )^2 - 4}} = \frac{Z_n(\lambda )}{ i w_n(\lambda )} , \quad Z_n(\lambda ) = - {\partial}rod _{m \not= n} \frac{\sigma ^n_m - \lambda }{w_m(\lambda )}. \] Integrating over $\Gamma _n$ and referring to Proposition \mathop{\rm Re}f{prop:preparation_U_iso} we obtain for $\tau \in G_n$, \begin{align*} 1 &= \frac{1}{2{\partial}i i } \int _{\Gamma _n}\frac{Z_n(\lambda )}{w_n(\lambda )}\,d \lambda \\ &=\frac{1}{2{\partial}i i}Z_n(\tau )\int_{\Gamma _n}\frac{1}{w_n(\lambda )}\,d\lambda + \frac{1}{2{\partial}i i}\int_{\Gamma_n}\frac{Z_n(\lambda ) - Z_n(\tau )}{w_n(\lambda )}\,d\lambda . \end{align*} Since $\frac{1}{2{\partial}i i}\int_{\Gamma_n}\frac{1}{w_n(\lambda)}\,d\lambda=-1$ we get \[ 1 = -Z_n(\tau ) + \frac{1}{2{\partial}i i}\int_{\Gamma_n} \frac{Z_n(\lambda) - Z_n(\tau )}{w_n(\lambda )}\,d\lambda =-Z_n(\tau)+O\left(\big\arrowvert Z_n -Z_n(\tau )\big\arrowvert _{G_n}\right) \] where the latter asymptotics follow from \cite[Lemma 14.3]{GK1}. Note that $Z_n(\lambda)$ is analytic on $D_n$ by \cite[Corollary 12.8]{GK1}. Since $Z_n$ is bounded on $D_n$ locally uniformly in $\varphi $ and uniformly in $n$ by \cite[Lemma 12.10]{GK1}, it then follows that the same is true for $\dot Z_n(\lambda ), \lambda \in G_n$, by Cauchy's estimate.
Therefore \[ \max _{\lambda \in G_n} \big\arrowvert Z_n(\lambda ) - Z_n(\tau ) \big\arrowvert \leq \max _{\lambda \in G_n} |\dot Z_n(\lambda )| |\gamma _n| = O(\gamma _n) \] locally uniformly in $\varphi $ and uniformly in $n$. This proves the claim. \end{proof} For any $|n|>R$, we now extend $z^{\partial}m _n$ to all of $U_{\rm iso}$ as follows \[ z^{\partial}m_n = \begin{cases} 2(\tau _n - \mu _n)(1{\partial}m\varepsilon _n)e^{\chi _n}&\text{\rm on}\quad{\mathcal Z}_n \cap {\mathcal F}_n \\ 0 &\text{\rm on}\quad{\mathcal Z}_n \backslash{\mathcal F}_n \end{cases} \] where $\chi_n$ is given by Lemma \mathop{\rm Re}f{Lemma 5.2}. To establish the proper target space of $(x_n,y_n)_{|n|>R}$ we need the following asymptotic estimates. \begin{Lem}\label{Lemma 5.4} For $|n| > R$ one has $z^{\partial}m _n = O\big(|\gamma_n|+|\mu_n-\tau_n|\big)$ locally uniformly on $U_{\rm iso}$. \end{Lem} \begin{proof}[Proof of Lemma \mathop{\rm Re}f{Lemma 5.4}] We follow the arguments of the proof of Lemma 16.4 in \cite{GK1}. From the proof of Lemma~\mathop{\rm Re}f{Lemma 5.2}, in particular equations \eqref{5.4} and \eqref{5.5}, one sees that for $\varphi $ in ${\mathcal F}_n \backslash {\mathcal Z}_n$, \[ \gamma_n e^{ i \beta^n_n} = \left(-\gamma_n\varrho _n+ \varepsilon_n\gamma _n\sqrt[\rm st]{(1-\varrho_n)(-1-\varrho_n)}\right) e^{\varepsilon _n(v_n + o_n)} . \] In the case $2|\mu _n - \tau _n| \leq |\gamma _n|$, i.e., $|\varrho _n| \leq 1$, one has \begin{equation}\label{5.6} \big\arrowvert - \gamma _n \varrho _n +\varepsilon _n \gamma _n \sqrt[\rm st]{(1 - \varrho _n)(- 1 - \varrho _n)}\big\arrowvert \leq 3|\gamma _n| \end{equation} while in the case $2|\mu _n - \tau _n| > |\gamma _n|$, i.e., $|\varrho _n| > 1$, \[ \gamma_n e^{i \beta^n_n} = 2(\tau_n - \mu_n) \left(1+\varepsilon_n\sqrt[+]{1-\varrho_n^{-2}}\right) e^{\varepsilon_n(v_n + o_n)} \] yielding the estimate \begin{equation}\label{5.7} \left|2(\tau _n - \mu _n) \left(1+\varepsilon_n\sqrt[+]{1-\varrho_n^{-2}}\right)\right| \le 6 |\mu _n-\tau _n|. \end{equation} The exponential term $e^{\varepsilon _n(v_n + o_n)}$ is locally uniformly bounded in view of \cite[Lemma 12.10]{GK1}. So we get on ${\mathcal F}_n\setminus{\mathcal Z}_n$, \begin{equation}\label{5.8} z^+_n = O\big( |\gamma _n| + |\mu _n - \tau _n|\big) . \end{equation} By Lemma~\mathop{\rm Re}f{Lemma 5.2}, \eqref{5.7} continues to hold on ${\mathcal F}_n \cap {\mathcal Z}_n$. Furthermore, one easily verifies that \eqref{5.6} is also valid on $U_{\rm iso}\setminus{\mathcal F}_n$ for any choice of $\varepsilon _n \in \{ {\partial}m 1\}$. Hence \eqref{5.8} holds in a locally uniform fashion on all of $U_{\rm iso}$. The argument for $\gamma_n e^{- i\beta_n^n}$ is the same. \end{proof} We are now ready to prove \begin{Prop}\label{Theorem 5.5} For any $|n|>R$, the functions $z^{\partial}m _n$ are analytic on $U_{\rm iso}$. \end{Prop} \begin{proof}[Proof of Proposition \mathop{\rm Re}f{Theorem 5.5}] We follow the arguments of the proof of Proposition 16.5 in \cite{GK1}. First, we apply \cite[Theorem A.6]{GK1} to the functions $z^{\partial}m _n$ on the domain $U_{\rm iso}$ with the subvariety ${\mathcal Z}_n$. These functions are analytic on $U_{\rm iso}\setminus{\mathcal Z}_n$ by Lemma~\mathop{\rm Re}f{Lemma 5.1}. We claim that they are also continuous at points of $\mathcal{Z}_n$. First note that their restrictions to ${\mathcal Z}_n$ are continuous by inspection.
Approaching a point in ${\mathcal Z}_n$ from within ${\mathcal F}_n \backslash {\mathcal Z}_n$, the corresponding values $z^{\partial}m _n$ converge to those of the limiting potential by Lemma \mathop{\rm Re}f{Lemma 5.2}. On the other hand, approaching a point in ${\mathcal Z}_n$ from outside of ${\mathcal F}_n\cup{\mathcal Z}_n$ one has $|\mu _n - \tau _n| \leq |\gamma _n|$ and thus $z^{\partial}m_n =\gamma _n e^{{\partial}m i \beta^n_n}$ converges to zero by Lemma \mathop{\rm Re}f{Lemma 5.4}. Thus the functions $z^{\partial}m _n$ are continuous on all of $U_{\rm iso}$. To show that their restrictions to ${\mathcal Z}_n$ are weakly analytic, let $D$ be a one-dimensional complex disk contained in ${\mathcal Z}_n$. If the center of $D$ is in ${\mathcal F}_n$, then the entire disk $D$, if chosen sufficiently small, is in ${\mathcal F}_n$. The analyticity of $z^{\partial}m _n = \gamma _n e ^{{\partial}m i \beta^n_n}$ on $D$ is then evident from the above formula, the definition of $\chi _n$, and the local constancy of $\varepsilon _n$ on ${\mathcal F}_n$. If the center of $D$ does not belong to ${\mathcal F}_n$, then consider the analytic function $\mu _n - \tau _n$ on $D$. This function either vanishes identically on $D$, in which case $z^{\partial}m _n$ vanishes identically, too, or it vanishes at only finitely many points. Outside these points, $D$ is in ${\mathcal F}_n$, hence $z^{\partial}m _n$ is analytic there. By continuity and analytic continuation, these functions are analytic on all of $D$. \end{proof} For $\varphi\in U_{\rm iso}$ we now define $\Phi (\varphi ) = \big(x_n(\varphi ), y_n(\varphi )\big)_{n \in \mathbb{Z}}$ where for $|n| > R$, \begin{equation}\label{eq:x,y'} \left\{ \begin{array}{l} x_n=\frac{\xi _n}{\sqrt{8}}\left(z_n^+e^{ i(\theta _n-\beta_n^n)}+z_n^-e^{- i(\theta _n-\beta_n^n)}\right),\\ y_n =\frac{\xi _n}{\sqrt{8} i}\left(z_n^+e^{ i(\theta _n-\beta_n^n)}-z_n^-e^{- i(\theta _n-\beta_n^n)}\right) , \end{array} \right. \end{equation} and for $|n|\le R$, \[ x_n = \sqrt{2 I_n}\cos\theta _n, \quad y_n = \sqrt{2 I_n}\sin \theta _n. \] Here the roots $\sqrt{2 I_n}$ are defined as follows: first note that by adding constants to the actions and by shrinking the neighborhood $U_{\rm iso}$, if necessary, we can ensure that $\mathop{\rm Re}(I_n)\le-1$ on $U_{\rm iso}$ for any $|n| \le R$. Then, $\sqrt{2 I_n}$ is analytic on $U_{\rm iso}$ where $\sqrt{\cdot}$ is the branch of the square root satisfying \[ \sqrt{2 I_n}=i\sqrt[+]{-2 I_n} \] on $U_{\rm iso}\cap i L^2_r$. From the preceding asymptotic estimates it is then evident that $\Phi $ defines a continuous, locally bounded map into $\ell^2_c= \ell^2\times\ell ^2$ which, when restricted to $U_{\rm iso}\cap i L^2_r$, takes values in $ i\ell ^2_r$. Moreover, each component is real analytic. In what follows we identify the real Hilbert space $i\ell^2_r$ with $\ell^2(\mathbb{Z},i\mathbb{R}^2)$ by the $\mathbb{R}$-linear isomorphism \begin{equation}\label{eq:identification} \ell^2(\mathbb{Z},i\mathbb{R}^2)\to i\ell^2_r,\quad\big(i(u_n,v_n) \big)_{n\in\mathbb{Z}}\mapsto\big(z_n(\varphi),w_n(\varphi) \big)_{n\in\mathbb{Z}} \end{equation} where $z_n=(v_n+i u_n)/\sqrt{2}$ and $w_n=-\overline{z_{(-n)}}$ for any $n\in\mathbb{Z}$. We also write $x_n$ for $i u_n$ and $y_n$ for $i v_n$.
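For the convenience of the reader we note that, in terms of $x_n=i u_n$ and $y_n=i v_n$, the identification \eqref{eq:identification} can be written more explicitly as \[ z_n=\frac{x_n-i\,y_n}{\sqrt{2}},\qquad w_n=-\,\overline{z_{(-n)}}=\frac{x_{-n}+i\,y_{-n}}{\sqrt{2}},\qquad n\in\mathbb{Z}, \] where for the second identity we used that $x_{-n}$ and $y_{-n}$ are purely imaginary, so that $\overline{x_{-n}}=-x_{-n}$ and $\overline{y_{-n}}=-y_{-n}$.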
We have proved the following \begin{Prop}\label{Theorem 5.6} The map \[ \Phi : U_{\rm iso}\cap i L^2_r \to i \ell ^2_r , \quad \varphi\mapsto\big(x_n(\varphi),y_n(\varphi) \big)_{n\in\mathbb{Z}}, \] is real analytic and extends to an analytic map $U_{\rm iso}\to\ell ^2_c$. \end{Prop} Using Theorem \mathop{\rm Re}f{th:commutation_relations_iso} we now can establish the following \begin{Prop}\label{Theorem 5.7} For any $m, n \in \mathbb{Z}$ and $\varphi\in U_{\rm iso} \cap i L^2_r$, \[ \{ x_m, x_n \} = 0, \quad \{ x_m, y_n \} = - \delta _{mn} , \quad \{ y_m, y_n \} = 0 . \] \end{Prop} \begin{proof}[Proof of Proposition \mathop{\rm Re}f{Theorem 5.7}] Let $n \in \mathbb{Z}$. For any $|n| > R$, the set ${\mathcal Z}_n \cap i L^2_r$ is a real analytic submanifold of $U_{\rm iso}\cap i L^2_r$ of codimension two whereas ${\mathcal Z}_n \cap i L^2_r =\emptyset $ for $|n| \leq R$ (cf. \eqref{eq:Z_n_in_U_iso}). Since the coordinates $x_n$ and $y_n$ are smooth it suffices to check that for any $m \in \mathbb{Z}$ the claimed commutation relations hold on the subset $\big(i L^2_r \cap U_{\rm iso}\big)\setminus\big({\mathcal Z}_n\cup{\mathcal Z}_m\big)$. Then for any $m\in\mathbb{Z}$, on $\big(U_{\rm iso}\setminus\mathcal{Z}_m\big)\cap i L^2_r$ \[ x_m=i\sqrt[+]{-2 I_m}\cos\theta_m,\quad y_m=i\sqrt[+]{-2 I_m}\sin\theta_m. \] Arguing as in the proof of \cite[Theorem 18.8]{GK1} one has by the chain rule \begin{align*} \{ x_m, y_n\} &= {\partial}artial _{\theta _m} x_m {\partial}artial _{\theta _n} y_n \{ \theta _m, \theta _n\} + {\partial}artial _{\theta _m} x_m {\partial}artial _{I_n} y_n \{ \theta _m, I_n \} \\ &+ {\partial}artial _{I_m} x_m {\partial}artial _{\theta _n} y_n \{ I_m, \theta _n\} + {\partial}artial _{I_m} x_m {\partial}artial _{I_n} y_n \{ I_m, I_n \} . \end{align*} By Theorem \mathop{\rm Re}f{th:commutation_relations_iso} it then follows that \begin{eqnarray*} \{x_m,y_n\}&=&\left({\partial}artial _{\theta _m} x_m {\partial}artial _{I_n} y_n - {\partial}artial _{I_m} x_m {\partial}artial _{\theta _n} y_n\right)\delta_{mn}\\ &=&\left(-\sqrt{2 I_m}\sin\theta_m\frac{1}{\sqrt{2 I_n}}\sin\theta_n- \sqrt{2 I_m}\cos\theta_m\frac{1}{\sqrt{2 I_n}}\cos\theta_n\right)\delta_{mn}\\ &=&-\delta_{mn}. \end{eqnarray*} All other commutation relations between coordinates are verified in a similar fashion. \end{proof} In the remaining part of this Section we prove that for any $\varphi \in U_{\rm iso}\cap i L^2_r$, the Jacobian $d_\varphi \Phi $ of $\Phi $ is a Fredholm operator of index $0$. First we need to introduce some more notation and establish some auxiliary results. Recall that ${\mathcal Z}_n\cap i L^2_r$, $|n| > R$, is a real analytic submanifold of real codimension two (cf. \eqref{eq:Z_n_in_U_iso}). Note that for $\varphi\in{\mathcal Z}_n\cap i L^2_r$ with $|n|> R$, the double periodic eigenvalue $\lambda ^+_n(\varphi ) = \lambda ^-_n(\varphi )$ has geometric multiplicity two and hence is also a Dirichlet eigenvalue. It turns out to be convenient to introduce for any $|n|> R$, \[ {\mathcal M}_n:=\big\{\varphi \in U_{\rm iso}\,\big|\,\mu_n(\varphi)\in\{ \lambda ^{\partial}m _n(\varphi)\}\big\} . \] Note that ${\mathcal Z}_n\cap i L^2_r\subseteq{\mathcal M}_n\cap i L^2_r$. \begin{Lem}\label{Lemma 6.0} For any $|n| > R$, $\big(U_{\rm iso}\setminus{\mathcal M}_n\big)\cap i L^2_r$ is open and dense in $U_{\rm iso}\cap i L^2_r$. 
\end{Lem} \begin{proof}[Proof of Lemma \mathop{\rm Re}f{Lemma 6.0}] Note that $ {\mathcal M}_n=\big\{\varphi \in U_{\rm iso}\,\big|\,\chi_p(\mu_n,\varphi )=0\big\}$ where we recall that $\chi _p(\lambda , \varphi ) = \Delta (\lambda , \varphi )^2 - 4$. Using that $\chi _p : \mathbb{C} \times L^2_c \to\mathbb{C}$, $(\lambda,\varphi )\mapsto\chi_p(\lambda,\varphi )$, and for any $|n| > R$, $\mu _n : U_{\rm iso}\to\mathbb{C}$, $\varphi\mapsto\mu _n(\varphi )$, are analytic it follows that the composition $F_n : U_{\rm iso}\to\mathbb{C}$, $\varphi\mapsto\chi_p\big(\mu_n(\varphi),\varphi\big)$ is analytic. The claimed result follows if one can show that $F_n$ does not vanish identically on $U_{\rm iso} \cap i L^2_r$. Assume on the contrary that $F_n$ vanishes identically on $U_{\rm iso} \cap i L^2_r$. By analyticity it then vanishes on all of $U_{\rm iso}$. Actually, the $n$-th Dirichlet eigenvalue $\mu _n$ is well defined and analytic on $U_{\rm iso}\cup U_{\rm tn}$ where $U_{\rm tn}$ is the tubular neighborhood introduced in Section \mathop{\rm Re}f{sec:actions_in_U_tn}. Thus $F_n$ defines an analytic function on $U_{\rm iso}\cup U_{\rm tn}$ which vanishes also on $U_{\rm tn}$. Note that $U_{\rm tn}$ contains potentials ${\partial}si$ in $U_{\rm tn}\cap L^2_r$ near zero for which $\lambda ^-_n < \lambda ^+_n$ and $\big\{\varphi\in L^2_r\,\big|\,\mathop{\rm Spec}\nolimits_p\big(L(\varphi)\big)=\mathop{\rm Spec}\nolimits_p\big(L({\partial}si)\big)\big\}\subseteq U_{\rm tn}\cap L^2_r$. The vanishing of $F_n$ on $U_{\rm tn}\cap L^2_r$ then contradicts \cite[Corollary 9.4]{GK1}. \end{proof} In a first step we establish asymptotics for the gradients of $z^{\partial}m_n$ as $|n| \to \infty $. Since finite gap potentials are dense in $U_{\rm iso}\cap i L^2_r$ (\cite[Corollary 1.1]{KST}), it turns out that for our purposes it is sufficient to establish them for such potentials. Recall that for any $n \in \mathbb{Z}$ we have that $e^+_n = (0, e^{-2{\partial}i i n x})$ and $e^-_n = (e^{2{\partial}i i n x},0)$. \begin{Lem}\label{Lemma 6.1} At any finite gap potential in $U_{\rm iso}\cap i L^2_r$, one has \[ {\partial}artial z^{\partial}m _n = - 2e^{\partial}m _n + \ell ^2_n . \] These estimates hold uniformly on $0\le x\le 1$ in the sense that $\|{\partial}artial z^{\partial}m _n+2e^{\partial}m _n\|_{L^\infty}=\ell^2_n$. \end{Lem} \begin{proof}[Proof of Lemma \mathop{\rm Re}f{Lemma 6.1}] We follow the proof of \cite[Lemma 17.1]{GK1}. In view of Lemma~\mathop{\rm Re}f{Lemma 6.0} we may approximate a given potential ${\partial}si \in {\mathcal Z}_n \cap i L^2_r, |n| > R$, by potentials in $ i L^2_r \cap (U_{\rm iso}\setminus{\mathcal M}_n)$. Then it can be approximated by potentials $\varphi $ in $ i L^2_r \cap U_{\rm iso}$ satisfying either $|\mu _n - \tau _n| \leq |\gamma _n| / 2, \mu _n \not= \lambda ^+_n, \lambda ^-_n$ (case 1) or $|\mu _n - \tau _n| > |\gamma _n| / 2$ (case 2). Both cases are treated in a similar fashion so we concentrate on case 1 only. As in \eqref{5.1} we define a sign $\varepsilon _n$ by \[ \varepsilon_n = \frac{\sqrt[\ast ]{\Delta ^2(\lambda ) - 4}}{\sqrt[c]{\Delta ^2(\lambda ) - 4}} \Bigg\arrowvert _{\lambda=\mu _n} \] where the root $\sqrt[c]{\Delta ^2(\lambda ) - 4} = 2 i {\partial}rod _{m \in \mathbb{Z}} \frac{w_m(\lambda )}{{\partial}i _m}$ is extended to $G_n$ by extending $w_n(\lambda )$ from the left of $G_n$. 
Since by definition $\zeta_n =- \frac{2}{{\partial}i _n} {\partial}rod _{m \not= n}\frac{\sigma ^n_m - \lambda}{{\partial}i_m}$ one has \[ \frac{\zeta _n(\lambda )}{\sqrt[\ast ]{\Delta ^2(\lambda ) - 4}}\Bigg\arrowvert _{\lambda=\mu _n} = - i \varepsilon _n \frac{Z_n(\lambda )}{w_n(\lambda )}\Bigg\arrowvert _{\lambda=\mu _n}\quad \text{\rm with}\quad Z_n(\lambda ) = - {\partial}rod _{m \not= n} \frac{\sigma ^n_m - \lambda }{w_m(\lambda )} . \] Hence \[ i\beta ^n_n = \varepsilon _n \int ^{\mu _n}_{\lambda ^-_n} \frac{Z_n(\lambda )}{w_n(\lambda )}\,d\lambda \] and we are in the same situation as in the proof of Lemma~\mathop{\rm Re}f{Lemma 5.2}. With the notation introduced there we conclude again that $z_n^+=\gamma _n e^{ i \beta _n^n} = \gamma _n e^{-\varepsilon _n \omega _n} e^{\varepsilon _n(o_n + v_n)}$ and \[ \gamma _n e^{-\varepsilon _n \omega _n} = -\gamma _n \varrho _n + \varepsilon _n \gamma _n \sqrt[\rm st]{(1 - \varrho _n)(-1 - \varrho _n)} \] where $\varrho _n = 2(\mu _n - \tau _n) / \gamma _n$. As by assumption, $2|\mu _n - \tau _n| \leq |\gamma _n|$, $\mu _n \notin \{ \lambda ^{\partial}m _n\}$ and $\gamma _n \in i {\mathbb R}_{> 0}$ one has $|\varrho_n| \le 1$, $\varrho _n \not= {\partial}m 1$. By extending the standard root (cf. \eqref{eq:standard_root}) by continuity from above to the interval $[-1,1]$, one obtains that \[ \sqrt[\rm st]{(1-\varrho_n)(-1-\varrho _n)} = \begin{cases} -i\sqrt[+]{1 - \varrho ^2_n} &\text{\rm if}\quad\mathop{\rm Im}(\varrho_n)\ge 0\\ +i\sqrt[+]{1 - \varrho ^2_n} &\text{\rm if}\quad\mathop{\rm Im}(\varrho_n) < 0. \end{cases} \] As $-i\gamma_n >0$ it then follows that $\gamma_n e^{-\varepsilon_n\omega_n}=2(\tau_n-\mu_n)+2\varepsilon_n r_n$ where $r_n = w_n(\mu_n)$. As both $\gamma_n e^{-\varepsilon_n\omega _n}$ and $e^{\varepsilon _n(o_n + v_n)}$ are analytic and as $o_n + v_n$ vanishes for $\varphi \to {\partial}si $ (see proof of Lemma~\mathop{\rm Re}f{Lemma 5.2}) we get by the product rule \[ {\partial}artial z^+_n={\partial}artial\big(\gamma _n e^{ i \beta ^n_n}\big)\to {\partial}artial\big(\gamma _n e^{-\varepsilon _n \omega _n}\big) = 2({\partial}artial \tau _n - {\partial}artial \mu _n) +2\varepsilon _n{\partial}artial r_n. \] To study the gradient of $r_n$ we use the representation \eqref{5.1bis} at $\mu _n$ to write \[ i\varepsilon_n r_n = \frac{Z_n(\mu _n)}{\zeta _n(\mu _n)} \sqrt[\ast ]{\Delta ^2(\mu _n) - 4} = {\partial}hi _n\cdot\delta(\mu_n) \] where ${\partial}hi _n := \frac{Z_n(\mu _n)}{\zeta_n(\mu _n)}$, $\delta(\mu_n):=\grave{m}_2(\mu_n)+\grave{m}_3(\mu_n)$ and, by the definition of the $*$-root, \[ \sqrt[\ast ]{\Delta (\mu_n)^2 - 4} = \delta (\mu_n). \] By \cite[Lemma 4.4]{GK1} and the asymptotics of \cite[Lemma 5.3]{GK1} \[ i{\partial}artial\delta(\mu _n) =(-1)^n\big(e^+_n - e^-_n\big) + \ell ^2_n \] where these estimates hold uniformly for $0\le x\le 1$. In addition, by \cite[Theorem 2.4, Lemma 5.3]{GK1}, $\delta(\mu_n)=\ell^2_n$ and $\dot \delta (\mu _n) = \ell ^2_n$. It then follows from \cite[Lemma 7.6]{GK1} that $\dot \delta (\mu _n) {\partial}artial \mu _n = \ell ^2_n .$ Moreover, by Lemma \mathop{\rm Re}f{Lemma 5.3}, $Z_n(\mu _n) = -1 + O(\gamma _n)$ and by \cite[Lemma C.4]{GK1}, $\zeta _n(\mu _n) = - 2(-1)^n + \ell ^2_n$ and thus $2{\partial}hi _n = (-1)^n + \ell ^2_n$ and with Cauchy's estimate ${\partial}artial{\partial}hi_n=O(1)$, leading to $\delta(\mu_n){\partial}artial{\partial}hi_n=\ell ^2_n$. 
On the other hand \begin{align*} 2{\partial}hi _n {\partial}artial (\delta (\mu _n)) &= 2{\partial}hi _n{\partial}artial\delta(\mu _n) + 2{\partial}hi _n \dot \delta (\mu _n) {\partial}artial \mu _n \\ &=\left( (-1)^n + \ell ^2_n\right) {\partial}artial \delta(\mu _n) + \ell ^2_n = -i\big(e^+_n - e^-_n\big) + \ell ^2_n . \end{align*} Hence \[ 2 \varepsilon _n {\partial}artial r_n = -i{\partial}artial \big(2{\partial}hi_n\delta (\mu _n)\big) = e^-_n - e^+_n + \ell ^2_n . \] Finally, by \cite[Lemma 7.6]{GK1} and by \cite[Lemma 12.3]{GK1} and its proof as well as Lemma 12.4 in \cite{GK1} \[ 2\big({\partial}artial \tau _n - {\partial}artial \mu _n\big) = - \big(e^+_n + e^-_n\big) + \ell ^2_n . \] Altogether we thus have proved that for finite gap potentials \[ {\partial}artial z^+_n = 2\big({\partial}artial \tau _n - {\partial}artial \mu _n\big) + 2 \varepsilon_n {\partial}artial r_n = - 2 e^+_n + \ell ^2_n\,. \] Analogously, one has \[ {\partial}artial z^-_n = 2\big({\partial}artial \tau _n - {\partial}artial \mu _n\big) - 2 \varepsilon_n {\partial}artial r_n = - 2 e^-_n + \ell ^2_n. \] \end{proof} \begin{Lem}\label{Proposition 6.2} At any finite gap potential in $U_{\rm iso}\cap i L^2_r$, \[ {\partial}artial x_n = - \frac{1}{\sqrt{2}}\big(e^+_n + e^-_n\big) + \ell ^2_n , \quad {\partial}artial y_n = - \frac{1}{\sqrt{2} i }\big(e^+_n - e^-_n\big) + \ell ^2_n . \] \end{Lem} \begin{proof}[Proof of Lemma \mathop{\rm Re}f{Proposition 6.2}] We follow the arguments in the proof of Theorem 17.2 in \cite{GK1}. By the definition of $x_n$ and $y_n$, for any $|n| > R$, \begin{align} \label{6.1}&x_n = \frac{\xi _n}{2 \sqrt{2}} \left(z^+_n e^{ i(\theta_n - \beta^n_n)} + z^-_n e^{- i (\theta_n- \beta^n_n)} \right)\\ \label{6.2}&y_n = \frac{\xi _n}{2 \sqrt{2} i } \left( z^+_n e^{ i (\theta _n - \beta ^n_n)} - z^-_n e^{- i (\theta _n -\beta ^n_n)} \right). \end{align} At a finite gap potential we have $z^{\partial}m _n = 0$ for $|n|$ sufficiently large, $\xi_n = 1 + \ell ^2_n$ by \eqref{eq:xi_asymptotics} and $\theta _n - \beta ^n_n = O \left( \frac{1}{n} \right)$ by Remark \mathop{\rm Re}f{rem:U_iso}, using that $|\gamma_m|+|\mu_m-\tau_m|=0$ for all but finitely many $m$. Furthermore by Lemma \mathop{\rm Re}f{lem:beta^n_k-asymptotics} and the asymptotics of $\lambda ^{\partial}m _n$ and $\mu _n$, the $(z^{\partial}m _n)_{n\in \mathbb{Z}}$ are locally bounded in $\ell ^2$. By applying the product rule and Cauchy's estimate we thus obtain from the formulas \eqref{6.1} and \eqref{6.2} that \[ {\partial}artial x_n = \frac{1}{2\sqrt{2}}\big({\partial}artial z^+_n +{\partial}artial z^-_n\big) + \ell ^2_n ,\quad {\partial}artial y_n = \frac{1}{2\sqrt{2} i}\big({\partial}artial z^+_n -{\partial}artial z^-_n\big) + \ell ^2_n . \] From Lemma \mathop{\rm Re}f{Lemma 6.1} we then get the claimed asymptotics. \end{proof} Now consider the Jacobian of $\Phi $. At any $\varphi\in U_{\rm iso}\cap i L^2_r$ it is the linear map given by \[ d_\varphi\Phi : i L^2_r \to i \ell ^2_r ,\quad h\mapsto\left(\big(\langle b^+_n,h\rangle_r\big)_{n\in\mathbb{Z}},\big(\langle b^-_n,h\rangle_r\big)_{n\in\mathbb{Z}}\right) \] where \[ b^+_n := {\partial}artial x_n,\quad b^-_n := {\partial}artial y_n \] and $\langle \cdot ,\cdot \rangle _r$ denotes the bilinear form \[ L^2_c \times L^2_c \to \mathbb{C} , \quad (h,g) \mapsto \int ^1_0 (h_1 g_1 + h_2 g_2) dx . \] For any $n\in\mathbb{Z}$, introduce \[ d^+_n := - \frac{1}{\sqrt{2}}\big(e^+_n + e^-_n\big),\quad d^-_n := - \frac{1}{\sqrt{2} i}\big(e^+_n - e^-_n\big) . 
\] As $d_n^+$, $d_n^-$, $n\in\mathbb{Z}$, represent a Fourier basis of $ i L^2_r$, the linear map \[ F : i L^2_r \to i \ell ^2_r, \quad h\mapsto\left( \langle d^+_n,h \rangle _r, \langle d^-_n, h \rangle _r \right) _{n \in \mathbb{Z}} \] is a linear isomorphism. To prove the same for the Jacobian $d_\varphi\Phi $ it therefore suffices to show that \[ B_\varphi := F^{-1} d_\varphi \Phi : i L^2_r \to i L^2_r \] is a linear isomorphism for any $\varphi$ in $U_{\rm iso}\cap i L^2_r$. Clearly, $B_\varphi $ is continuous in $\varphi $ by the analyticity of $\Phi $ and is given by \[ B_\varphi h = \sum _{n \in \mathbb{Z}} \langle b^+_{-n}, h \rangle_r d^+_n + \langle b^-_{-n}, h\rangle_r d^-_n . \] Its adjoint $A_\varphi $ with respect to $\langle\cdot,\cdot\rangle_r$ is then a bounded linear operator on $ i L^2_r$ which also depends continuously on $\varphi$ and is given by \begin{equation}\label{eq:A} A_\varphi h = \sum _{n \in \mathbb{Z}} \langle d^+_{-n}, h \rangle _r b^+_n + \langle d^-_{-n}, h \rangle _r b^-_n . \end{equation} Moreover, $B_\varphi $ is a linear isomorphism if and only if $A_\varphi$ is. We obtain the following \begin{Lem}\label{prop:jacobian} For any $\varphi \in U_{\rm iso}\cap i L^2_r$, the differential $d_\varphi \Phi $ is a linear isomorphism if and only if the operator $A_\varphi $ is a linear isomorphism. The latter is a compact perturbation of the identity on $ i L^2_r$. \end{Lem} \begin{proof}[Proof of Lemma \mathop{\rm Re}f{prop:jacobian}] We follow the arguments of the proof of Lemma 17.3 in \cite{GK1}. It remains to prove the compactness claim. At any finite gap potential in $U_{\rm iso}\cap i L^2_r$ we have by Lemma \mathop{\rm Re}f{Proposition 6.2} \[ \sum _{n \in \mathbb{Z}} \| (A_\varphi - Id)d^{\partial}m _n \| ^2 = \sum _{n \in \mathbb{Z}} \| b^{\partial}m _n - d^{\partial}m _n \| ^2 < \infty . \] Thus $A_\varphi - Id$ is Hilbert-Schmidt and therefore compact. Since $A_\varphi $ depends continuously on $\varphi$ and finite gap potentials are dense in $U_{\rm iso} \cap i L^2_r$ we see that $A_\varphi-Id$ is compact for any $\varphi \in U_{\rm iso} \cap i L^2_r$. \end{proof} We summarize the main results of this Section as follows. \begin{Th}\label{th:local_diffeo} The map $\Phi : U_{\rm iso}\cap i L^2_r\to i\ell^2_r$, $\varphi\mapsto\big(x_n(\varphi),y_n(\varphi) \big)_{n\in\mathbb{Z}}$, is real analytic and for any $m,n\in\mathbb{Z}$ and $\varphi\in U_{\rm iso}\cap i L^2_r$, \[ \{x_m,x_n\}=0,\quad\{x_m,y_n\}=-\delta_{mn},\quad\{y_m,y_n\}=0\,. \] Furthermore, for any $\varphi\in U_{\rm iso}\cap i L^2_r$ we have that $d_\varphi\Phi : i L^2_r\to i\ell^2_r$ is a linear isomorphism. \end{Th} \begin{proof}[Proof of Theorem \mathop{\rm Re}f{th:local_diffeo}] In view of Proposition \mathop{\rm Re}f{Theorem 5.6} and Proposition \mathop{\rm Re}f{Theorem 5.7} it remains to prove that for any $\varphi\in U_{\rm iso}\cap i L^2_r$ the differential $d_\varphi\Phi : i L^2_r\to i\ell^2_r$ is a linear isomorphism. By Lemma \mathop{\rm Re}f{prop:jacobian} it suffices to show that $A_\varphi : i L^2_r\to i L^2_r$ is such a map. This follows from Proposition \mathop{\rm Re}f{Theorem 5.7}, Lemma \mathop{\rm Re}f{prop:jacobian}, and Lemma F.7 in \cite{GK1}. \end{proof} \section{Proof of Theorem \mathop{\rm Re}f{th:main}}\label{sec:proofs} In this Section we prove Theorem \mathop{\rm Re}f{th:main}, stated in the Introduction. 
Recall that the map \begin{equation}\label{eq:birkhoff_map} \Phi : U_{\rm iso}\cap i L^2_r\to i\ell^2_r \end{equation} constructed in Theorem \mathop{\rm Re}f{th:local_diffeo} is a canonical local diffeomorphism. The map $\Psi$ of Theorem \mathop{\rm Re}f{th:main} will be obtained from $\Phi$ by a slight adjustment to ensure that $\Psi$ is one-to-one. We begin with some preliminary considerations. Let us first study isospectral sets of the ZS operator with potentials in $i L^2_r$. By Proposition 2.2 in \cite{KTPreviato}, for any ${\partial}si\in i L^2_r$, the isospectral set $\mathop{\rm Iso}\nolimits({\partial}si)$ is compact. Furthermore, recall that any connected component in a topological space is closed. If the space is locally path connected, then its connected components are also open (see e.g. \cite{Munkres}). We have \begin{Lem}\label{lem:iso_components} For any ${\partial}si\in i L^2_r$ with simple periodic spectrum the isospectral set $\mathop{\rm Iso}\nolimits({\partial}si)$ is locally path connected. In particular, the connected components of $\mathop{\rm Iso}\nolimits({\partial}si)$ are open and closed. Since $\mathop{\rm Iso}\nolimits({\partial}si)$ is compact we conclude that $\mathop{\rm Iso}\nolimits({\partial}si)$ has finitely many connected components. \end{Lem} \begin{proof}[Proof of Lemma \mathop{\rm Re}f{lem:iso_components}] It is enough to prove that $\mathop{\rm Iso}\nolimits({\partial}si)$ is locally path connected. First note that we can assume without loss of generality that ${\partial}si\in\mathop{\rm Iso}\nolimits_o({\partial}si^{(1)}) $ where ${\partial}si^{(1)}\in U_{\rm iso}\cap i L^2_r$ is the potential with simple periodic spectrum used in the construction of the map \eqref{eq:birkhoff_map} in Section \mathop{\rm Re}f{sec:birkhoff_map}. Since $\Phi$ is a local diffeomorphism we can find an open neighborhood $U_{\partial}si$ of ${\partial}si$ in $U_{\rm iso}\cap i L^2_r$ and an open neighborhood $V_{p^0}$ of $p^0:=\Phi({\partial}si)$ in $i\ell^2_r$ such that \[ \Phi : U_{\partial}si\to V_{p^0} \] is a diffeomorphism. Here $p^0=(p_n^0)_{n\in\mathbb{Z}}$ with $p_n^0=i(u_n^0,v_n^0)$ and the neighborhood $V_{p^0}$ of $p^0$ in $i\ell^2_r$ is chosen of the form \begin{equation}\label{eq:tailneighborhood} V_{p^0}=B^{\delta_0',\delta_0''}_{|n|\le R'}(p^0)\times B^{\epsilon_0}_{|n|>R'}(0) \end{equation} where $R',\delta_0',\delta_0'',\epsilon_0>0$ are appropriately chosen parameters, \begin{equation}\label{eq:finite_component} B^{\delta_0',\delta_0''}_{|n|\le R'}(p^0):= \bigtimes_{|n|\le R'}\Big\{p_n=i (u_n,v_n)\in i\,\mathbb{R}^2\,\Big|\,\big||p_n^0|-|p_n|\big|<\delta_0', |\theta_n-\theta_n^0|<\delta_0''\Big\}, \end{equation} \begin{equation}\label{eq:tail_component} B^{\epsilon_0}_{|n|>R'}(0):= \Big\{p_n=i (u_n,v_n)_{|n|>R'}\in i\ell^2_r\,\Big|\,\Big(\sum_{|n|>R'}|p_n|^2\Big)^{1/2}<\epsilon_0\Big\}, \end{equation} and $|p_n|=\sqrt{u_n^2+v_n^2}$ and $\theta_n$ are the polar coordinates of the point $(u_n,v_n)$ in the Euclidean plane $\mathbb{R}^2$. By construction, for any $\varphi\in U_{\rm iso}\cap i L^2_r$ and $p=\Phi(\varphi)$ we have that $iu_n=x_n(\varphi)$ and $i v_n=y_n(\varphi)$ and hence \begin{equation}\label{eq:correspondence_coordinates} \frac{1}{2}\,|p_n|^2=-I_n(\varphi)=-\frac{1}{4}(\xi_n\gamma_n)^2\ge 0\quad\text{and}\quad \theta_n=\theta_n(\varphi). \end{equation} Since ${\partial}si$ has simple periodic spectrum we have that $|p_n|>0$ for any $n\in\mathbb{Z}$. 
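As an illustration of \eqref{eq:correspondence_coordinates}, consider an index $|n|\le R$: there $x_n=\sqrt{2 I_n}\cos\theta_n$ and $y_n=\sqrt{2 I_n}\sin\theta_n$, so that with $x_n=iu_n$ and $y_n=iv_n$, $u_n,v_n\in\mathbb{R}$,
\[
|p_n|^2=u_n^2+v_n^2=-\big(x_n^2+y_n^2\big)=-2 I_n(\varphi),
\]
which is the first equality in \eqref{eq:correspondence_coordinates}; the second one says that the polar angle of $(u_n,v_n)$ coincides with the angle variable $\theta_n(\varphi)$.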
We will assume that $0<\delta_0'<|p_n^0|$ for $|n|\le R'$ and $0<\delta_0''<{\partial}i$. \begin{Rem}\label{rem:tail_neighborhood} Here we used that for any $p^0\in i\ell^2_r$ and for any open neighborhood $W_{p^0}$ of $p^0$ in $i\ell^2_r$ there exists an open neighborhood $V_{p^0}$ of $p^0$, $V_{p^0}\subseteq W_{p^0}$, of the form \eqref{eq:tailneighborhood}. An important property of $V_{p^0}$ is that its ``tail'' component $B^{\epsilon_0}_{|n|>R'}(0)$ is a ball in $i \ell^2_r$ centered at zero. We will call such a neighborhood a {\em tail neighborhood} of $p^0$ in $i\ell^2_r$. \end{Rem} \noindent Note that the action variables $I_n$, $n\in\mathbb{Z}$, in $U_{\rm iso}$ constructed in Section \mathop{\rm Re}f{sec:actions_and_angles_in_U_iso} are constant on isospectral potentials. This follows from the fact that the contours $\Gamma_n$, $n\in\mathbb{Z}$, used in the definition of the actions on $U_{\rm iso}$ are fixed. Hence \[ \Phi\big(\mathop{\rm Iso}\nolimits({\partial}si)\cap U_{\partial}si\big)\subseteq \mathbb{T}or(p^0)\cap V_{p^0} \] where for $q^0\in i\ell^2_r$ we set \begin{equation}\label{eq:Tor} \mathbb{T}or(q^0):=\Big\{\big(i (u_n,v_n)\big)_{n\in\mathbb{Z}}\in i\ell^2_r\,\Big|\,u_n^2+v_n^2=|q_n^0|^2, n\in\mathbb{Z}\Big\}. \end{equation} By a slight abuse of terminology we refer to $\mathbb{T}or(q^0)$ as a torus of dimension $\#\big\{n\in\mathbb{Z}\,\big|\,|q_n^0|>0\big\}$. Note that $\mathbb{T}or(p^0)$ is a compact set in $i\ell^2_r$ that is an infinite product of circles whereas the set $\mathbb{T}or(p^0)\cap V_{p^0}$ is the product of $2R'+1$ arcs \begin{equation}\label{eq:finitely_many_circles} \bigtimes_{|n|\le R'}\Big\{i (u_n,v_n)\in i\,\mathbb{R}^2\,\Big|\,|p_n|=|p^0_n|, |\theta_n-\theta^0_n|<\delta_0''\Big\} \end{equation} times the infinite product of circles \begin{equation}\label{eq:infinitely_many_circles} \Big\{i (u_n,v_n)_{|n|>R'}\in i\ell^2_r\,\Big|\,|p_n|=|p^0_n|,\,|n|>R'\Big\}. \end{equation} It follows from Lemma 8.3 in \cite{GK1} and the fact that the actions are defined only in terms of the discriminant that $\{I_n,\Delta(\lambda)\}=0$ on $U_{\rm iso}\cap iL^2_r$ for any $n\in\mathbb{Z}$ and $\lambda\in\mathbb{C}$. This implies that for any $n\in\mathbb{Z}$ the Hamiltonian vector field $X_{I_n}$ corresponding to the action variable $I_n$ in $U_{\rm iso}\cap iL^2_r$ is {\em isospectral}, i.e. for any $\varphi\in U_{\rm iso}\cap iL^2_r$ the integral trajectory of $X_{I_n}$ in $U_{\rm iso}\cap iL^2_r$ with initial data at $\varphi$ lies in $\mathop{\rm Iso}\nolimits(\varphi)$. In view of the commutation relations \begin{equation}\label{eq:commutation_relations} \{\theta_n,I_m\}=\delta_{nm},\quad\{I_n,I_m\}=0,\quad n,m\in\mathbb{Z}, \end{equation} the fact that $\Phi : U_{\partial}si\to V_{p^0}$ is a canonical diffeomorphism and \eqref{eq:correspondence_coordinates}, one easily sees from the closedness of $\mathop{\rm Iso}\nolimits({\partial}si)$ that the set $\mathop{\rm Iso}\nolimits({\partial}si)\cap U_{\partial}si$ is diffeomorphic (via $\Phi$) to the product of the sets \eqref{eq:finitely_many_circles} and \eqref{eq:infinitely_many_circles}. Since this set is locally path connected, so is $\mathop{\rm Iso}\nolimits({\partial}si)$. \end{proof} Given a non-empty subset $A$ of $i L^2_r$ and $\delta>0$ denote by $B_\delta(A)$ the open $\delta$-{\em tubular} neighborhood of $A$, \[ B_\delta(A):=\bigcup_{\varphi\in A} B_\delta(\varphi), \] where $B_\delta(\varphi)$ denotes the open ball of radius $\delta$ in $i L^2_r$ centered at $\varphi$. 
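Equivalently, $B_\delta(A)$ consists of all potentials at $L^2$-distance less than $\delta$ from $A$,
\[
B_\delta(A)=\big\{\varphi\in i L^2_r\,\big|\,\inf_{\alpha\in A}\|\varphi-\alpha\|<\delta\big\} .
\]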
In view of Lemma \mathop{\rm Re}f{lem:iso_components}, the isospectral set $\mathop{\rm Iso}\nolimits({\partial}si)$ of any potential ${\partial}si$ with simple periodic spectrum consists of finitely many connected components. This implies that there exists $\delta_0\equiv\delta_0({\partial}si)>0$ such that for any $0<\delta<\delta_0$ the $\delta$-tubular neighborhoods of the different connected components of $\mathop{\rm Iso}\nolimits({\partial}si)$ do not intersect. We have the following {\em Lyapunov type} stability property of $\mathop{\rm Iso}\nolimits({\partial}si)$. \begin{Lem}\label{lem:lyapunov_stability} Assume that ${\partial}si\in i L^2_r$ has simple periodic spectrum. Then, for any $0<\delta<\delta_0$ there exists $0<\delta_1\le\delta$ such that for any $\varphi\in B_{\delta_1}\big(\mathop{\rm Iso}\nolimits_o({\partial}si)\big)$, one has $\mathop{\rm Iso}\nolimits_o(\varphi)\subseteq B_\delta\big(\mathop{\rm Iso}\nolimits({\partial}si)\big)$. Since $\mathop{\rm Iso}\nolimits_o(\varphi)$ is connected and $0<\delta<\delta_0$, we conclude from the choice of $\delta_0>0$ that $\mathop{\rm Iso}\nolimits_o(\varphi)\subseteq B_\delta\big(\mathop{\rm Iso}\nolimits_o({\partial}si)\big)$. \end{Lem} \begin{proof}[Proof of Lemma \mathop{\rm Re}f{lem:lyapunov_stability}] Take ${\partial}si\in i L^2_r$ with simple periodic spectrum and choose $0<\delta<\delta_0$. We will prove the statement by contradiction. Assume that the statement of Lemma \mathop{\rm Re}f{lem:lyapunov_stability} does not hold. Then, there exist two sequences $({\partial}si_k)_{k\ge 1}$ and $({\widetilde{\partial}si}_k)_{k\ge 1}$ in $i L^2_r$ such that \begin{equation}\label{eq:negation1} {\widetilde{\partial}si}_k\in\mathop{\rm Iso}\nolimits_o({\partial}si_k)\quad\text{\rm and}\quad{\widetilde{\partial}si}_k\notin B_\delta\big(\mathop{\rm Iso}\nolimits({\partial}si)\big), \end{equation} and a sequence $({\partial}si_k^*)_{k\ge 1}$ in $\mathop{\rm Iso}\nolimits_o({\partial}si)$ such that $\|{\partial}si_k^*-{\partial}si_k\|\to 0$ as $k\to\infty$. By using the compactness of $\mathop{\rm Iso}\nolimits_o({\partial}si)$ and then passing to subsequences if necessary we obtain that there exists ${\partial}si^*\in\mathop{\rm Iso}\nolimits_o({\partial}si)$ such that \begin{equation}\label{eq:negation2} {\partial}si_k\to{\partial}si^*\quad\text{\rm as}\quad k\to\infty. \end{equation} Since by Proposition 2.3 in \cite{KTPreviato} the $L^2$-norm is a spectral invariant of the ZS operator for potentials in $i L^2_r$, we conclude from \eqref{eq:negation1} that for any $k\ge 1$, $\|{\widetilde{\partial}si}_k\|=\|{\partial}si_k\|$. This together with \eqref{eq:negation2} then implies that \begin{equation}\label{eq:norms_converge} \|{\widetilde{\partial}si}_k\|\to\|{\partial}si^*\|\quad\text{\rm as}\quad k\to\infty. \end{equation} Hence, the sequence $({\widetilde{\partial}si}_k)_{k\ge 1}$ is bounded in $i L^2_r$. This implies that, after passing to a further subsequence if necessary, there exists ${\widetilde{\partial}si}\in i L^2_r$ such that $({\widetilde{\partial}si}_k)_{k\ge 1}$ converges weakly to ${\widetilde{\partial}si}$ in $i L^2_r$. Proposition 1.2 in \cite{GK1} then implies that for any $\lambda\in\mathbb{C}$, \[ \Delta(\lambda,{\widetilde{\partial}si}_k)\to\Delta(\lambda,{\widetilde{\partial}si})\quad\text{\rm as}\quad k\to\infty. \] By \eqref{eq:negation2}, for any $\lambda\in\mathbb{C}$, \[ \Delta(\lambda,{\partial}si_k)\to\Delta(\lambda,{\partial}si^*)\quad\text{\rm as}\quad k\to\infty. 
\] On the other side, by \eqref{eq:negation1}, we conclude that for any $k\ge 1$ and for any $\lambda\in\mathbb{C}$, \[ \Delta(\lambda,{\widetilde{\partial}si}_k)=\Delta(\lambda,{\partial}si_k). \] The three displayed formulas above then imply that $\Delta(\lambda,{\widetilde{\partial}si})=\Delta(\lambda,{\partial}si^*)$ for any $\lambda\in\mathbb{C}$. Hence, \begin{equation}\label{eq:isospectral_potentials} {\widetilde{\partial}si}\in\mathop{\rm Iso}\nolimits({\partial}si^*)=\mathop{\rm Iso}\nolimits({\partial}si). \end{equation} By Proposition 2.3 in \cite{KTPreviato} we obtain $\|{\widetilde{\partial}si}\|=\|{\partial}si^*\|$ which, in view of \eqref{eq:norms_converge}, implies that \[ \|{\widetilde{\partial}si}_k\|\to\|\widetilde{\partial}si\|\quad\text{\rm as}\quad k\to\infty. \] Since $({\widetilde{\partial}si}_k)_{k\ge 1}$ converges weakly to ${\widetilde{\partial}si}$ in $i L^2_r$, we conclude that \[ {\widetilde{\partial}si}_k\to{\widetilde{\partial}si}\quad\text{\rm as}\quad k\to\infty. \] The second relation in \eqref{eq:negation1} then implies that ${\widetilde{\partial}si}\notin B_\delta\big(\mathop{\rm Iso}\nolimits({\partial}si)\big)$ which contradicts \eqref{eq:isospectral_potentials}. \end{proof} \begin{Rem}\label{rem:local_theta_connectedness} Assume that ${\partial}si\in i L^2_r$ has simple periodic spectrum. It follows from the proof of Lemma \mathop{\rm Re}f{lem:iso_components} that there exist an open neighborhood $U_{\partial}si$ of ${\partial}si$ in $U_{\rm iso}\cap i L^2_r$ and a tail neighborhood $V_{p^0}$ of $p^0:=\Phi({\partial}si)$ in $i\ell^2_r$ with parameters $R'>0$, $0<\delta_0'<|p^0_n|$ for $|n|\le R'$, $0<\delta_0''<{\partial}i$, and $\epsilon_0>0$ (see \eqref{eq:tailneighborhood}) such that $\Phi : U_{\partial}si\to V_{p^0}$ is a diffeomorphism and for any $\varphi\in U_{\partial}si$ we have that $\Phi : U_{\partial}si\to V_{p^0}$ maps the set $\mathop{\rm Iso}\nolimits_o(\varphi)\cap U_{\partial}si$ bijectively onto the set $\mathbb{T}or\big(\Phi(\varphi)\big)\cap V_{p^0}$ where $\mathbb{T}or\big(\Phi(\varphi)\big)$ is given by \eqref{eq:Tor} with $q^0=\Phi(\varphi)\in i\ell^2_r$. \end{Rem} \noindent Now, let ${\partial}si:={\partial}si^{(1)}$ and $p^0:=\Phi({\partial}si)$. In view of the compactness of $\mathop{\rm Iso}\nolimits_o({\partial}si)$ and Remark \mathop{\rm Re}f{rem:local_theta_connectedness} we can construct a {\em finite} set of open neighborhoods ${\widetilde U}_{{\widetilde{\partial}si}_j}$ of ${\widetilde{\partial}si}_j\in\mathop{\rm Iso}\nolimits_o({\partial}si)$ in $U_{\rm iso}\cap i L^2_r$, $1\le j\le{\widetilde N}$, such that for any $1\le j\le{\widetilde N}$, $\Phi : {\widetilde U}_{{\widetilde{\partial}si}_j}\to{\widetilde V}_{{\tilde p}^j}$ is a diffeomorphism onto a tail neighborhood ${\widetilde V}_{{\tilde p}^j}$ of ${\tilde p}^j=\Phi({\widetilde{\partial}si}_j)\in\mathbb{T}or(p^0)$ in $i\ell^2_r$. In what follows we set \begin{equation}\label{eq:U_iso_shrinked} U_{\rm iso}\cap iL^2_r=\bigcup_{j=1}^{\widetilde N}{\widetilde U}_{\widetilde{\partial}si_j}. \end{equation} Then, by Lemma \mathop{\rm Re}f{lem:lyapunov_stability}, we can choose $0<\delta<\delta_0$ such that $B_{\delta}\big(\mathop{\rm Iso}\nolimits_o({\partial}si)\big)\subseteq U_{\rm iso}\cap i L^2_r$ and $\mathop{\rm Iso}\nolimits_o(\varphi)\subseteq U_{\rm iso}\cap i L^2_r$ for any $\varphi\in B_{\delta}\big(\mathop{\rm Iso}\nolimits_o({\partial}si)\big)$. 
By Remark \mathop{\rm Re}f{rem:local_theta_connectedness} there exist a neighborhood $U_{\partial}si$ of ${\partial}si$ in $U_{\rm iso}\cap i L^2_r$ and a neighborhood $V_{p^0}$ of $p^0:=\Phi({\partial}si)$ in $i\ell^2_r$ such that $\Phi : U_{\partial}si\to V_{p^0}$ is a diffeomorphism, $V_{p^0}$ is a tail neighborhood \begin{equation}\label{eq:tailneighborhood1} V_{p^0}=B^{\delta_0',\delta_0''}_{|n|\le R'}(p^0)\times B^{\epsilon_0}_{|n|>R'}(0) \end{equation} with parameters $R'>0$, $0<\delta_0'<|p^0_n|$ for any $|n|\le R'$, $0<\delta_0''<{\partial}i$, and $\epsilon_0>0$, and $B^{\delta_0',\delta_0''}_{|n|\le R'}(p^0)$ and $B^{\epsilon_0}_{|n|>R'}(0)$ are given by \eqref{eq:finite_component} and \eqref{eq:tail_component}. By taking the parameters $\delta_0'$, $\delta_0''$, $\epsilon_0>0$ smaller and $R'>0$ larger if necessary we can ensure that $U_{\partial}si\subseteq B_{\delta}\big(\mathop{\rm Iso}\nolimits_o({\partial}si)\big)$ and hence for any $\varphi\in U_{\partial}si$ we have that \begin{equation}\label{eq:flows_well_defined} \mathop{\rm Iso}\nolimits_o(\varphi)\subseteq U_{\rm iso}\cap i L^2_r. \end{equation} In view of the compactness of the isospectral component $\mathop{\rm Iso}\nolimits_o(\varphi)$ for any $\varphi\in i L^2_r$ and the fact that the flow of the Hamiltonian vector field $X_{I_n}$ corresponding to the action variable $I_n$, $n\in\mathbb{Z}$, is {\em isospectral} (see the proof of Lemma \mathop{\rm Re}f{lem:iso_components}), we conclude from \eqref{eq:flows_well_defined} that the integral trajectory of $X_{I_n}$ with initial data $\varphi\in U_{\partial}si$ is defined for all $t\in\mathbb{R}$. Moreover, it follows from the definition \eqref{eq:poisson_bracket} of the Poisson bracket on $i L^2_r$ that $X_{I_n} : U_{\rm iso}\cap i L^2_r\to i L^2_r$, $n\in\mathbb{Z}$, is an analytic vector field on $U_{\rm iso}\cap i L^2_r$. By \eqref{eq:commutation_relations} the vector fields $X_{I_n}$ and $X_{I_m}$ commute for any $m,n\in\mathbb{Z}$. Further, consider the direct product of $2R'+1$ open intervals \begin{equation}\label{eq:tail_section_finite_part} J_{|n|\le R'}:=\bigtimes_{|n|\le R'}\Big\{p_n=i(u_n,v_n)\in i\mathbb{R}^2\,\Big|\,\big||p_n^0|-|p_n|\big|<\delta_0',\, \theta_n=\theta_n^0\Big\} \end{equation} and denote by $\mathbb{T}T_{{\partial}si}\subseteq U_{{\partial}si}$ the preimage of \begin{equation}\label{eq:tail_section} T_{p^0}:=J_{|n|\le R'}\times B^{\epsilon_0}_{|n|>R'}(0)\subseteq V_{p^0} \end{equation} under the diffeomorphism $\Phi : U_{\partial}si\to V_{p^0}$. Since $\Phi : U_{\rm iso}\to i\ell^2_r$ is canonical we have that (see \eqref{eq:commutation_relations}) \begin{equation}\label{eq:X_I_n_in_coordinates} \Phi_*(X_{I_n})={\partial}artial_{\theta_n},\quad n\in\mathbb{Z}. \end{equation} In particular, the vector fields $X_{I_n}$, $|n|\le R'$, are {\em transversal} to the submanifold $\mathbb{T}T_{{\partial}si}$ in $U_{\partial}si$. Now, take an arbitrary $\varphi\in\mathbb{T}T_{{\partial}si}$ and consider the orbit $G(\varphi):=\big\{G^\tau(\varphi)\,\big|\,\tau\in\mathbb{R}^{2R'+1}\big\}$ where \begin{equation}\label{eq:action1} G^\tau(\varphi):=G_{X_{I_{(-R')}}}^{\tau_{(-R')}}\circ\cdots\circ G_{X_{I_{R'}}}^{\tau_{R'}}(\varphi), \quad\tau=(\tau_{(-R')},...,\tau_{R'})\in\mathbb{R}^{2R'+1}, \end{equation} and $G_{X_{I_n}}^{\tau_n}$ with $|n|\le R'$ is the isospectral flow corresponding to the Hamiltonian vector field $X_{I_n}$. 
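Since the vector fields $X_{I_n}$, $n\in\mathbb{Z}$, pairwise commute, so do their flows, and hence the order of composition in \eqref{eq:action1} is irrelevant. Moreover, whenever all flows involved are defined, one has the group property
\[
G^{\tau}\circ G^{\sigma}=G^{\tau+\sigma},\qquad\tau,\sigma\in\mathbb{R}^{2R'+1},
\]
which is what turns \eqref{eq:action1} into an action of $\mathbb{R}^{2R'+1}$ (cf. \eqref{eq:action2} below).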
Since the flows of $X_{I_n}$, $n\in\mathbb{Z}$, are isospectral and commute we conclude from \eqref{eq:flows_well_defined} that for any $\varphi\in\mathbb{T}T_{\partial}si\subseteq U_{\partial}si$ and for any $\tau\in\mathbb{R}^{2R'+1}$, $G^\tau(\varphi)$ in \eqref{eq:action1} is well defined and $G(\varphi)\subseteq\mathop{\rm Iso}\nolimits_o(\varphi)$. \begin{Lem}\label{lem:compact_orbit} For any $\varphi\in\mathbb{T}T_{{\partial}si}$ the orbit $G(\varphi)$ is a compact smooth submanifold of $U_{\rm iso}\cap i L^2_r$ of dimension $2R'+1$. Moreover, for $\varphi_1,\varphi_2\in\mathbb{T}T_{{\partial}si}$ we have that $G(\varphi_1)\cap G(\varphi_2)=\emptyset$ if $\varphi_1\neq\varphi_2$. \end{Lem} \begin{proof}[Proof of Lemma \mathop{\rm Re}f{lem:compact_orbit}] To see that $G(\varphi)$, $\varphi\in\mathbb{T}T_{{\partial}si}$, is compact let $(\varphi_k)_{k\ge 1}$ be a sequence in $G(\varphi)$. Since the flows $X_{I_n}$, $n\in\mathbb{Z}$, are isospectral, we have that $G(\varphi)\subseteq\mathop{\rm Iso}\nolimits_o(\varphi)$. We then conclude from the compactness of $\mathop{\rm Iso}\nolimits_o(\varphi)$ that there exist $\varphi^*\in\mathop{\rm Iso}\nolimits_o(\varphi)$ and a subsequence of $(\varphi_k)_{k\ge 1}$, denoted again by $(\varphi_k)_{k\ge 1}$, such that \begin{equation}\label{eq:convergence1} \varphi_k\to\varphi^*,\quad\text{as}\quad k\to\infty. \end{equation} In view of \eqref{eq:U_iso_shrinked} and \eqref{eq:flows_well_defined}, $\varphi^*\in{\widetilde U}_{{\widetilde{\partial}si}_j}\cap\mathop{\rm Iso}\nolimits_o(\varphi)$ where ${\widetilde U}_{{\widetilde{\partial}si}_j}$ is one of the neighborhoods appearing in \eqref{eq:U_iso_shrinked}. Since $\Phi : {\widetilde U}_{{\widetilde{\partial}si}_j}\to{\widetilde V}_{{\tilde p}^j}$ is a diffeomorphism and since ${\widetilde V}_{{\tilde p}^j}$ is a tail neighborhood, we conclude that $\Phi(\varphi^*)\in{\widetilde V}_{{\tilde p}^j}\cap\mathbb{T}or\big(\Phi(\varphi)\big)$ (see Remark \mathop{\rm Re}f{rem:local_theta_connectedness}). It follows from \eqref{eq:convergence1} that there exists $k_0\ge 1$ so that $\Phi(\varphi_k)\in{\widetilde V}_{{\tilde p}^j}\cap\mathbb{T}or\big(\Phi(\varphi)\big)$ for any $k\ge k_0$. This implies that $\varphi^*=G^{\tau^*}(\varphi_{k_0})$ for some $\tau^*\in\mathbb{R}^{2R'+1}$ where $\varphi_{k_0}\in G(\varphi)$. In particular, we see that $\varphi^*\in G(\varphi)$ and hence $G(\varphi)$ is compact. The coordinates $(\theta_{-R'},...,\theta_{R'})$ on ${\widetilde V}_{{\tilde p}^j}\cap\mathbb{T}or\big(\Phi(\varphi)\big)$, $1\le j\le{\widetilde N}$, define a smooth (in fact, real analytic) submanifold structure on $G(\varphi)$ in $U_{\rm iso}\cap i L^2_r$. It remains to prove the last statement of the Lemma. Take $\varphi_1,\varphi_2\in\mathbb{T}T_{{\partial}si}$ so that $\varphi_1\neq\varphi_2$. Then, in view of the definition of $\mathbb{T}T_{{\partial}si}$, either there exists $|n_0|>R'$ such that \begin{equation}\label{eq:separation1} (x_{n_0},y_{n_0})|_{\varphi_1}\neq(x_{n_0},y_{n_0})|_{\varphi_2} \end{equation} or there exists $|n_0|\le R'$ such that \begin{equation}\label{eq:separation2} I_{n_0}(\varphi_1)\neq I_{n_0}(\varphi_2). 
\end{equation} Since the vector fields $X_{I_n}$ with $|n|\le R'$ preserve the functions $x_n,y_n$ with $|n|>R'$ and the functions $I_n$ with $|n|\le R'$ we conclude that for any $\tau,\mu\in\mathbb{R}^{2R'+1}$ the relation \eqref{eq:separation1} (or \eqref{eq:separation2}, respectively) holds with $\varphi_1$ and $\varphi_2$ replaced respectively by $G^\tau(\varphi_1)$ and $G^\mu(\varphi_2)$. This implies that $G(\varphi_1)\cap G(\varphi_2)=\emptyset$. \end{proof} Let us now consider the following set in $i L^2_r$, \begin{equation}\label{eq:W} {\mathcal W}:=\bigsqcup_{\varphi\in\mathbb{T}T_{\partial}si} G(\varphi). \end{equation} \begin{Lem}\label{lem:W} ${\mathcal W}$ is an open neighborhood of ${\partial}si$ in $i L^2_r$ that is invariant with respect to the flows of the Hamiltonian vector fields $X_{I_n}$, $n\in\mathbb{Z}$. \end{Lem} \begin{proof}[Proof of Lemma \mathop{\rm Re}f{lem:W}] It follows from the definition \eqref{eq:tailneighborhood1} of the tail neighborhood $V_{p^0}$ and the set $T_{p^0}\subseteq V_{p^0}$ defined in \eqref{eq:tail_section} that \[ U_{\partial}si=\bigcup_{|\tau_n|<\delta_0'',\,|n|\le R'}G^{\tau}(\mathbb{T}T_{\partial}si). \] This implies that \begin{equation}\label{eq:W'} {\mathcal W}=\bigsqcup_{\varphi\in\mathbb{T}T_{\partial}si} G(\varphi)=\bigcup_{\tau\in\mathbb{R}^{2R'+1}}G^\tau(\mathbb{T}T_{\partial}si)= \bigcup_{\tau\in\mathbb{R}^{2R'+1}}G^\tau(U_{\partial}si). \end{equation} Since the sets $G^\tau(U_{\partial}si)$, $\tau\in\mathbb{R}^{2R'+1}$, are open we conclude from \eqref{eq:W'} that ${\mathcal W}$ is an open set in $i L^2_r$. The invariance of ${\mathcal W}$ with respect to the flow of $X_{I_n}$, $n\in\mathbb{Z}$, follows from the fact that the section $\mathbb{T}T_{\partial}si$ in \eqref{eq:W} is invariant with respect to the flows of $X_{I_n}$, $|n|>R'$, and since $X_{I_n}$ and $X_{I_k}$ commute for any $n,k\in\mathbb{Z}$. \end{proof} By restricting \eqref{eq:action1} to $\mathbb{R}^{2R'+1}\times{\mathcal W}$ we obtain a smooth action \begin{equation}\label{eq:action2} G : \mathbb{R}^{2R'+1}\times{\mathcal W}\to{\mathcal W} \end{equation} of $\mathbb{R}^{2R'+1}$ on ${\mathcal W}$. In view of \eqref{eq:X_I_n_in_coordinates} we have the following commutative diagram \begin{equation}\label{eq:diagram_action} \begin{tikzcd} {\mathcal W}\arrow{r}{G^\tau}\arrow{d}{\Phi}&{\mathcal W}\arrow{d}{\Phi}\\ W\arrow{r}{\rho^\tau}&W \end{tikzcd} \end{equation} where $W$ is the tail neighborhood (cf. \eqref{eq:tailneighborhood1}) \begin{equation}\label{eq:Wdown} W:=B^{\delta_0',2{\partial}i}_{|n|\le R'}(p^0)\times B^{\epsilon_0}_{|n|>R'}(0)\subseteq i\ell^2_r \end{equation} where the parameter $\delta_0''>0$ is replaced by $2{\partial}i$ and for any $\tau=(\tau_{(-R')},...,\tau_{R'})\in\mathbb{R}^{2R'+1}$ the map $\rho^\tau : W\to W$ rotates the component $p_n=i(u_n,v_n)\in i\mathbb{R}^2$ of $p\in W$ by the angle $\tau_n$ for $|n|\le R'$ while keeping the components of $p$ for $|n|>R'$ unchanged. The commutative diagram \eqref{eq:diagram_action} and the invariance of ${\mathcal W}$ (cf. Lemma \mathop{\rm Re}f{lem:W}) then easily imply that $\Phi :{\mathcal W}\to W$ is {\em onto}. Note also that by \eqref{eq:Wdown}, the open set $W$ in $i\ell^2_r$ is a direct product of $2R'+1$ two-dimensional annuli and an open ball in $i\ell^2_r$. 
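In more detail, the surjectivity of $\Phi : {\mathcal W}\to W$ can be seen as follows: given $p\in W$, the moduli $|p_n|$, $|n|\le R'$, satisfy $\big||p_n|-|p^0_n|\big|<\delta_0'$, so $|p_n|>0$ and the polar angles $\theta_n$ of $p_n$ are defined; choosing $\tau\in\mathbb{R}^{2R'+1}$ with $\tau_n=\theta_n-\theta^0_n$ for $|n|\le R'$ one has $\rho^{-\tau}(p)\in V_{p^0}$, hence $\varphi:=\Phi^{-1}\big(\rho^{-\tau}(p)\big)\in U_{\partial}si\subseteq{\mathcal W}$ and, by \eqref{eq:diagram_action} and the invariance of ${\mathcal W}$,
\[
\Phi\big(G^{\tau}(\varphi)\big)=\rho^{\tau}\big(\Phi(\varphi)\big)=\rho^{\tau}\big(\rho^{-\tau}(p)\big)=p .
\]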
\begin{Coro}\label{coro:stabilizer} For any $\varphi\in\mathbb{T}T_{{\partial}si}$ the orbit $G(\varphi)$ is diffeomorphic to the $2R'+1$ dimensional torus \begin{equation}\label{eq:torus} \mathcal{G}:=\mathbb{R}^{2R'+1}/\mathop{\rm Span}\nolimits\langle e_{(-R')},...,e_{R'}\rangle_\mathbb{Z} \end{equation} where $e_{(-R')},...,e_{R'}\in(2{\partial}i\mathbb{Z})^{2R'+1}$ are linearly independent over $\mathbb{R}$. The vectors $e_{(-R')},...,e_{R'}$ are independent of the choice of $\varphi\in\mathbb{T}T_{{\partial}si}$.\footnote{Note that we do {\em not} claim that $e_{(-R')},...,e_{R'}$ is a basis of $(2{\partial}i\mathbb{Z})^{2R'+1}$ over $\mathbb{Z}$.} \end{Coro} \begin{proof}[Proof of Corollary \mathop{\rm Re}f{coro:stabilizer}] By restricting the action \eqref{eq:action2} to the orbit $G(\varphi)$ with $\varphi\in\mathbb{T}T_{\partial}si$ we obtain a smooth {\em transitive} action \begin{equation}\label{eq:action3} G : \mathbb{R}^{2R'+1}\times G(\varphi)\to G(\varphi) \end{equation} of $\mathbb{R}^{2R'+1}$ on the submanifold $G(\varphi)$. Since $G(\varphi)$ is compact (cf. Lemma \mathop{\rm Re}f{lem:compact_orbit}) the stabilizer $\text{\rm St}(\varphi)$ of \eqref{eq:action3} is of the form \[ \text{\rm St}(\varphi)=\mathop{\rm Span}\nolimits\langle e_{(-R')},...,e_{R'}\rangle_\mathbb{Z} \] where the vectors $e_{(-R')},...,e_{R'}\in\mathbb{R}^{2R'+1}$ form a basis in $\mathbb{R}^{2R'+1}$ and $G(\varphi)$ is diffeomorphic to the factor group $\mathbb{R}^{2R'+1}/\text{\rm St}(\varphi)$, which is a torus of dimension $2R'+1$ (cf. \cite[\S\,49]{Arn}). It then follows from the commutative diagram \eqref{eq:diagram_action} that \begin{equation}\label{eq:integer_condition} e_{(-R')},...,e_{R'}\in(2{\partial}i\mathbb{Z})^{2R'+1}. \end{equation} Indeed, assume that $\tau\in\text{\rm St}(\varphi)$. Then $G^\tau(\varphi)=\varphi$ and, by the commutative diagram \eqref{eq:diagram_action}, we obtain that $\rho^\tau\big(\Phi(\varphi)\big)=\Phi(\varphi)$. This implies that $\tau\in(2{\partial}i\mathbb{Z})^{2R'+1}$, and hence, completes the proof of the first statement of the Corollary. Let us now prove the second statement. Take $\varphi^*\in\mathbb{T}T_{\partial}si$ and let $\tau^*\in\text{\rm St}(\varphi^*)$. In view of the continuity of the action \eqref{eq:action2}, there exists an open neighborhood $U_1$ of zero in $\mathbb{R}^{2R'+1}$ and an open neighborhood $U_2$ of $\varphi^*$ in $U_{\partial}si$ so that for any $(\tau,\varphi)\in U_1\times U_2$ we have that $G^{\tau^*+\tau}(\varphi)\in U_{\partial}si$. It then follows from the commutative diagram \eqref{eq:diagram_action} and the fact that $\tau^*\in(2{\partial}i\mathbb{Z})^{2R'+1}$ that \begin{equation}\label{eq:time_shift_in_coordinates} \Phi\big(G^{\tau^*+\tau}(\varphi)\big)=\left\{ \begin{array}{l} i\,|p_n|\,\big(\cos(\theta_n(\varphi)+\tau_n),\sin(\theta_n(\varphi)+\tau_n)\big),\quad |n|\le R',\\ p_n,\quad|n|>R', \end{array} \right. \end{equation} where $(p_n)_{n\in\mathbb{Z}}=\Phi(\varphi)$. Since $\Phi : U_{\partial}si\to V_{p^0}$ is a diffeomorphism, we conclude from \eqref{eq:time_shift_in_coordinates} that $G^{\tau^*}(\varphi)=\varphi$ for any $\varphi\in U_2$. In view of the connectedness of $U_{\partial}si$ we then obtain that $\tau^*\in\text{\rm St}(\varphi)$ for any $\varphi\in U_{\partial}si$. Since $\mathbb{T}T_{\partial}si$ is connected this shows that the $2R'+1$ linearly independent vectors in \eqref{eq:integer_condition} can be chosen independently of $\varphi\in\mathbb{T}T_{\partial}si$. 
\end{proof} With these preparations done, we are now ready to introduce the slight adjustment of $\Phi$, mentioned at the beginning of the Section. By \eqref{eq:W} and Lemma \mathop{\rm Re}f{lem:compact_orbit} the open neighborhood ${\mathcal W}$ of ${\partial}si$ in $i L^2_r$ is foliated by orbits of the action \eqref{eq:action2} with $\mathbb{T}T_{\partial}si$ as a global section. By Corollary \mathop{\rm Re}f{coro:stabilizer} any orbit $G(\varphi)$, $\varphi\in\mathbb{T}T_{\partial}si$, is diffeomorphic to the $2R'+1$ dimensional torus \eqref{eq:torus}. Denote by $(t_{(-R')},...,t_{R'})$ the coordinates corresponding to the frame $(e_{(-R')},...,e_{R'})$ in $\mathbb{R}^{2R'+1}$. For any given $\varphi\in\mathbb{T}T_{\partial}si$ denote by $(\theta^*_{(-R')},...,\theta^*_{R'})$ the {\em coordinates} on $G(\varphi)$ that are obtained by taking the pull-back of $(2{\partial}i t_{(-R')},...,2{\partial}i t_{R'})$ via the diffeomorphism $G(\varphi)\to\mathcal{G}$ given by the action \eqref{eq:action3}. For any $|n|\le R'$ denote the coordinates of $e_n$ by $\tau_{nk}$, $|k|\le R'$, \begin{equation}\label{eq:e_n} e_n=\big(\tau_{n(-R')},...,\tau_{nR'}\big)\in(2{\partial}i\mathbb{Z})^{2R'+1},\quad |n|\le R'. \end{equation} For $|n|\le R'$ consider the (analytic) functions $I^*_n : {\mathcal W}\to\mathbb{R}$, \begin{equation}\label{eq:I*} I^*_n:=\frac{1}{2{\partial}i}\sum_{|k|\le R'}\tau_{nk} I_k, \end{equation} where $I_k$, $|k|\le R'$, is the $k$-th action on $U_{\rm iso}$. Then, it follows from \eqref{eq:I*} and the second statement of Corollary \mathop{\rm Re}f{coro:stabilizer} that $X_{I^*_n}=\frac{1}{2{\partial}i}\sum_{|k|\le R'}\tau_{nk} X_{I_k}$, and by the definition of the action \eqref{eq:action2} (see also \eqref{eq:action1}) and the coordinates $(\theta^*_{(-R')},...,\theta^*_{R'})$, we conclude from \eqref{eq:e_n} that on any orbit $G(\varphi)$, $\varphi\in\mathbb{T}T_{\partial}si$, we have that \begin{equation}\label{eq:*-canonical} \big\{\theta^*_n,I^*_k\big\}\equiv(d\theta^*_n)\big(X_{I^*_k}\big)=d\big(2{\partial}i t_n\big)(e_k/2{\partial}i)=\delta_{nk} \end{equation} for any $|n|\le R'$ and $|k|\le R'$. This implies that on any orbit $G(\varphi)$, $\varphi\in\mathbb{T}T_{\partial}si$, \begin{equation}\label{eq:theta*} d\theta^*_n=2{\partial}i\sum_{|k|\le R'}\tau^*_{nk}d\theta_k \end{equation} where $\theta_k : {\mathcal W}\to\mathbb{R}/2{\partial}i$, $|k|\le R'$, is the $k$-th angle on $U_{\rm iso}\cap i L^2_r$ and $(\tau^*_{nk})_{|n|,|k|\le R'}$ are the elements of the {\em non-degenerate} $(2R'+1)\times(2R'+1)$ matrix $P^*:= \big(P^{-1}\big)^T$ where $P:=(\tau_{nk})_{|n|,|k|\le R'}$ and $\big(P^{-1}\big)^T$ denotes the transpose of $P^{-1}$. Formula \eqref{eq:theta*} shows that the coordinates $\theta^*_{(-R')},...,\theta^*_{R'}$ taken modulo $2{\partial}i$ are real analytic functions on ${\mathcal W}$. In this way, we have \begin{Lem}\label{lem:separating_points} The angles $(\theta^*_n)_{|n|\le R'}$ taken modulo $2{\partial}i$, the actions $(I^*_n)_{|n|\le R'}$, and $(x_n)_{|n|>R'}$, $(y_n)_{|n|>R'}$, are real analytic on ${\mathcal W}$ and separate the points on ${\mathcal W}$. Moreover, $\{x_n,y_n\}=1$, $|n|>R'$, $\{\theta^*_n,I^*_n\}=1$, $|n|\le R'$, whereas all other Poisson brackets vanish. \end{Lem} \begin{proof}[Proof of Lemma \mathop{\rm Re}f{lem:separating_points}] We already proved that the functions in Lemma \mathop{\rm Re}f{lem:separating_points} are real analytic and that their Poisson brackets satisfy the relations stated in the Lemma. 
Moreover, by the commutation relations \eqref{eq:commutation_relations}, the functions $(I^*_n)_{|n|\le R'}$, $(x_n)_{|n|>R'}$, and $(y_n)_{|n|>R'}$ are constant on any of the orbits $G(\varphi)$, $\varphi\in\mathbb{T}T_{\partial}si$, and they form a coordinate system on the section $\mathbb{T}T_{\partial}si$ in \eqref{eq:W}. Hence, these functions separate the orbits $G(\varphi)$, $\varphi\in\mathbb{T}T_{\partial}si$. Since the functions $(\theta^*_n)_{|n|\le R'}$ taken modulo $2{\partial}i$ separate the points on any of these orbits, we conclude the proof of the Lemma. \end{proof} Lemma \mathop{\rm Re}f{lem:separating_points} allows us to define the modification $\Psi : {\mathcal W}\to i\ell^2_r$ of the map $\Phi : {\mathcal W}\to i\ell^2_r$ by setting \begin{equation}\label{eq:modified_coordinates} x_n:=\sqrt{2I^*_n}\cos\theta^*_n,\quad y_n:=\sqrt{2I^*_n}\sin\theta^*_n,\quad|n|\le R', \end{equation} while keeping the other components of $\Phi$ unchanged. By shrinking the open neighborhood ${\mathcal W}$ if necessary we can ensure that $\Psi({\mathcal W})$ is a tail neighborhood of the form \eqref{eq:Wdown} with $p^0:=\Psi({\partial}si)$ and the same value of the parameter $\epsilon_0>0$. For simplicity of notation, we will denote $\Psi({\mathcal W})$ again by $W$ and for any $|n|\le R'$, write $\theta_n$ and $I_n$ instead of $\theta^*_n$ and $I^*_n$, respectively. We have \begin{Prop}\label{prop:almost_done} The map \begin{equation}\label{eq:Phi*} \Psi : {\mathcal W}\to W \end{equation} is a canonical real analytic diffeomorphism. Moreover, the open neighborhood ${\mathcal W}$ is a saturated neighborhood of $\mathop{\rm Iso}\nolimits_o({\partial}si)$ and $W$ is a tail neighborhood of $\Psi({\partial}si)$ in $i\ell^2_r$. \end{Prop} \begin{proof}[Proof of Proposition \mathop{\rm Re}f{prop:almost_done}] The Proposition follows from Lemma \mathop{\rm Re}f{lem:separating_points}. Indeed, it follows from Lemma \mathop{\rm Re}f{lem:separating_points} and \eqref{eq:modified_coordinates} that the map \eqref{eq:Phi*} is analytic. Furthermore, it follows from \eqref{eq:I*}, \eqref{eq:theta*}, \eqref{eq:modified_coordinates}, and the fact that $\Phi : {\mathcal W}\to i\ell^2_r$ is a local diffeomorphism (cf. Theorem \mathop{\rm Re}f{th:local_diffeo}) onto its image, that \eqref{eq:Phi*} is a local diffeomorphism onto $W$. Since, by Lemma \mathop{\rm Re}f{lem:separating_points}, the components of $\Psi$ separate the points on ${\mathcal W}$, we then conclude that $\Psi$ is a diffeomorphism. This map is canonical by the last statement in Lemma \mathop{\rm Re}f{lem:separating_points}. The statement that ${\mathcal W}$ is a saturated neighborhood of $\mathop{\rm Iso}\nolimits_o({\partial}si)$ follows from the invariance of the neighborhood ${\mathcal W}$ with respect to the complete isospectral flows of $X_{I_n}$, $n\in\mathbb{Z}$ (see Lemma \mathop{\rm Re}f{lem:W}), and the arguments in the proof of Lemma \mathop{\rm Re}f{lem:iso_components} that allow us, for any $\varphi\in{\mathcal W}$, to identify $\mathop{\rm Iso}\nolimits_o(\varphi)$ with $\mathbb{T}or\big(\Psi(\varphi)\big)$ (see \eqref{eq:Tor}) in the tail neighborhood $W$. \end{proof} Finally, we prove \begin{Prop}\label{prop:N>=1} For any integer $N\ge 1$ the restriction of the map \eqref{eq:Phi*} to ${\mathcal W}\cap i H^N_r$ takes values in $i\mathfrak{h}^N_r$ and $\Psi\big|_{{\mathcal W}\cap i H^N_r} : {\mathcal W}\cap i H^N_r\to W\cap i\mathfrak{h}^N_r$ is a real analytic diffeomorphism. 
\end{Prop} \begin{proof}[Proof of Proposition \mathop{\rm Re}f{prop:N>=1}] Assume that $N\ge 1$. Since the coordinate functions $x_n$ and $y_n$ are real analytic on ${\mathcal W}$, so are their restrictions to ${\mathcal W}\cap i H^N_r$. Recall from Lemma \mathop{\rm Re}f{Lemma 5.4} that $z_n^{\partial}m=O\big(|\gamma_n|+|\mu_n-\tau_n|\big)$ for $|n|>R$, locally uniformly on $U_{\rm iso}$. Taking into account the estimates for $\gamma_n$ on $H^N_c$ in \cite[Corollary 1.1]{KSchT2} and the ones for $\mu_n-\tau_n$ on $H^N_c$ in \cite[Theorem 1.1]{KSchT2} and \cite[Theorem 1.3]{KSchT2} it follows that \begin{equation}\label{eq:z_n-estimate} \sum_{|n|>R}\langle n\rangle^{2N}|z_n^{\partial}m|^2<\infty \end{equation} locally uniformly on $U_{\rm iso}\cap H^N_c$. Furthermore, recall from Remark \mathop{\rm Re}f{rem:U_iso} that ${\tilde\beta}^n$, defined modulo $2{\partial}i$, is of order $O(1/n)$ as $|n|\to\infty$ locally uniformly on $U_{\rm iso}\setminus\mathcal{Z}_n$ and that $\sum_{|k|>R,k\ne n}\beta^n_k=O(1/n)$ as $|n|\to\infty$ locally uniformly on $U_{\rm iso}$. By combining this with \eqref{eq:z_n-estimate} we conclude from \eqref{eq:x,y'} that the map $\Psi\big|_{{\mathcal W}\cap i H^N_r} : {\mathcal W}\cap i H^N_r\to i\mathfrak{h}^N_r$ is locally bounded in a complex neighborhood of ${\mathcal W}\cap i H^N_r$. By \cite[Theorem A.5]{GK1} it then follows that $\Psi\big|_{{\mathcal W}\cap i H^N_r} : {\mathcal W}\cap i H^N_r\to i\mathfrak{h}^N_r$ is real analytic. Furthermore, by the characterization, provided in \cite[Theorem 1.2]{KST}, of those potentials $\varphi\in i L^2_r$ which belong to $i H^N_r$, it follows that $\Psi({\mathcal W}\cap i H^N_r)=W\cap i\mathfrak{h}^N_r$, implying that $\Psi\big|_{{\mathcal W}\cap i H^N_r} : {\mathcal W}\cap i H^N_r\to W\cap i\mathfrak{h}^N_r$ is bijective. To see that the latter map is a real analytic diffeomorphism it remains to show that for any $\varphi\in{\mathcal W}\cap i H^N_r$, $\big(d_\varphi\Psi\big)\big|_{i H^N_r} : i H^N_r\to i\mathfrak{h}^N_r$ is a linear isomorphism. Since $d_\varphi\Psi : i L^2_r\to i\ell^2_r$ is an isomorphism by Proposition \mathop{\rm Re}f{prop:almost_done}, we conclude that $\mathop{\rm Ker}\big(d_\varphi\Psi\big)\big|_{i H^N_r}=\{0\}$. Hence, we will complete the proof if we show that $\big(d_\varphi\Psi\big)\big|_{i H^N_r} : i H^N_r\to i\mathfrak{h}^N_r$ is a Fredholm operator of index zero. This will follow once we prove that $\big(d_\varphi\Phi\big)\big|_{i H^N_r} : i H^N_r\to i\mathfrak{h}^N_r$ is a Fredholm operator of index zero, where $\Phi$ is the map \eqref{eq:birkhoff_map}. In order to prove this, we show by analytic extension that the formulas for $z_n^{\partial}m$ in \cite[Theorem 2.2]{KSchT1}, valid for potentials near zero, continue to hold on $U_{\rm iso}$ for any $|n|>R$. These formulas involve the quantities $\tau_n-\mu_n$, $\delta(\mu_n)$, $\delta_n(\mu_n)$, and $\eta_n^{\partial}m$, which can be estimated using \cite[Theorem 1.1, Theorem 1.3, and Theorem 1.4]{KSchT2} and \cite[Lemma 12.7]{GK1}. In this way we show that for any $N\ge 1$, \begin{equation}\label{eq:one-smoothing} \Phi-\mathcal{F} : {\mathcal W}\cap i H^N_r\to i\mathfrak{h}^{N+1}_r \end{equation} is a real analytic map. Here $\mathcal{F} : H^N_c\to\mathfrak{h}^N_c$ is the Fourier transform $(\varphi_1,\varphi_2)\mapsto\big(-{\widehat\varphi}_1(n),-{\widehat\varphi}_2(n)\big)_{n\in\mathbb{Z}}$ (cf. the identification introduced in \eqref{eq:identification}). 
Hence, for any $\varphi\in{\mathcal W}\cap i H^N_r$ we obtain that $d_\varphi\Phi-\mathcal{F} : i H^N_r\to i\mathfrak{h}^{N+1}_r$ is a bounded linear map. Since the Sobolev embedding $i\mathfrak{h}^{N+1}_r\to i\mathfrak{h}^N_r$ is compact, we then conclude that $d_\varphi\Phi-\mathcal{F} : i H^N_r\to i\mathfrak{h}^N_r$ is a compact operator. This completes the proof of the Proposition. \end{proof} Now, we are ready to prove Theorem \mathop{\rm Re}f{th:main}. \begin{proof}[Proof of Theorem \mathop{\rm Re}f{th:main}] By setting $z_n:=(x_n-i y_n)/\sqrt{2}$ and $w_n:=-\overline{z_{(-n)}}$ for any $n\in\mathbb{Z}$, Proposition \mathop{\rm Re}f{prop:almost_done} implies that the statements {\em(NF1)} and {\em(NF2)} (for $N=0$) of Theorem \mathop{\rm Re}f{th:main} hold (cf. \eqref{eq:identification}). The statement {\em(NF2)} for $N\ge 1$ follows from Proposition \mathop{\rm Re}f{prop:N>=1}. To prove {\em(NF3)} we follow the arguments of the proof of Theorem 20.3 in \cite{GK1}. \end{proof} \section{Appendix: Auxiliary results} First we provide details of the proof that the sum of the integrals \eqref{eq:argument_principle}, introduced in the proof of Proposition \mathop{\rm Re}f{prop:beta^n_tn}, is analytic. We use the notation introduced there. Assume that for a given $\nu\in\Lambda^-_R({\partial}si)$, $\mu_{\partial}si(\nu)$ is a Dirichlet eigenvalue of $L({\partial}si)$ of multiplicity $m_D\ge 2$. Denote by $\nu_j$, $1\le j\le m_D$, the periodic eigenvalues of $L({\partial}si)$ such that $\mu_{\partial}si(\nu_j)=\mu_{\partial}si(\nu)$ and for any $\varphi\in U_{\partial}si$ denote by $z_j$, $1\le j\le m_D$, the periodic eigenvalues of $L(\varphi)$ with $z_j\in D^\varepsilon(\nu_j)$. \begin{Lem}\label{lem:argument_principle} The quantity \begin{equation}\label{eq:argument_principle'} \sum_{1\le j\le m_D} \int_{{\mathcal P}^*_{\varphi,{\partial}si}[z_j,\mu_\varphi(z_j)]} \frac{\zeta_n(\lambda,\varphi)}{\sqrt{\Delta^2(\lambda,\varphi)-4}}\,d\lambda \end{equation} is analytic in $U_{\partial}si$. \end{Lem} \begin{proof}[Proof of Lemma \mathop{\rm Re}f{lem:argument_principle}] In view of the representation \eqref{eq:concatenation*}, it is enough to prove that \begin{equation}\label{eq:sum1} \sum_{1\le j\le m_D} \int_{[Q_{\mu_{\partial}si(\nu_j)},\mu_\varphi(z_j)]^*} \frac{\zeta_n(\lambda,\varphi)}{\sqrt{\Delta^2(\lambda,\varphi)-4}}\,d\lambda \end{equation} is analytic in $U_{\partial}si$. In the case where $\mu_{\partial}si(\nu)\notin\Lambda_R({\partial}si)$ it follows that for any $\varphi\in U_{\partial}si$ the initial point of $\big[Q_{\mu_{\partial}si(\nu_j)},\mu_\varphi(z_j)\big]^*$ does not depend on $1\le j\le m_D$. One then concludes by the argument principle that \[ \sum_{1\le j\le m_D} \int\limits_{[Q_{\mu_{\partial}si(\nu_j)},\mu_\varphi(z_j)]^*} \frac{\zeta_n(\lambda,\varphi)}{\sqrt{\Delta^2(\lambda,\varphi)-4}}\,d\lambda =\frac{1}{2{\partial}i i}\int_{{\partial}artial\overline{D}^\varepsilon(\mu_{\partial}si(\nu))}\!\!\!\!\!\!\!\! F(\mu,\varphi)\,\frac{\dot\chi_D(\mu,\varphi)}{\chi_D(\mu,\varphi)}\,d\mu \] where $\chi_D$ is given by \eqref{eq:chi_D} and \[ F(\mu,\varphi):=\int_{Q_{\mu_{\partial}si(\nu)}}^\mu \frac{\zeta_n(\lambda,\varphi)}{\sqrt[*]{\Delta^2(\lambda,\varphi)-4}}\,d\lambda \] is analytic on the disk $D^\varepsilon\big(\mu_{\partial}si(\nu)\big)$ and continuous up to its boundary. The case when $\mu_{\partial}si(\nu)\in\Lambda_R({\partial}si)$ can be treated in a similar way. 
We only remark that in this case, the initial points $Q^*_{\mu_{\partial}si(\nu_j),\varphi}$, $1\le j\le m_D$, of the paths $\big[Q_{\mu_{\partial}si(\nu_j)},\mu_\varphi(z_j)\big]^*$, $1\le j\le m_D$, are {\em not} necessarily the same. If $Q^*_{\mu_{\partial}si(\nu_j),\varphi}=Q^*_{\mu_{\partial}si(\nu),\varphi}$, let ${\mathcal P}^*_{j,\varphi}$ be the constant path $Q^*_{\mu_{\partial}si(\nu),\varphi}$ and if $Q^*_{\mu_{\partial}si(\nu_j),\varphi}\ne Q^*_{\mu_{\partial}si(\nu),\varphi}$, let ${\mathcal P}^*_{j,\varphi}$ be the counterclockwise oriented lift of the circle ${\partial}artial\overline{D}^\varepsilon\big(\mu_{\partial}si(\nu)\big)$ by ${\partial}i_1|_{\mathcal{C}_{\varphi,R}} : \mathcal{C}_{\varphi,R}\to\mathbb{C}$, which connects $Q^*_{\mu_{\partial}si(\nu),\varphi}$ with $Q^*_{\mu_{\partial}si(\nu_j),\varphi}$. We then write the path integral $\int_{[Q_{\mu_{\partial}si(\nu_j)},\mu_\varphi(z_j)]^*} \frac{\zeta_n(\lambda,\varphi)}{\sqrt{\Delta^2(\lambda,\varphi)-4}}\,d\lambda$ as a sum of two path integrals with paths ${\mathcal P}^*_{j,\varphi}\cup\big[Q_{\mu_{\partial}si(\nu_j)},\mu_\varphi(z_j)\big]^*$ and $\big({\mathcal P}^*_{j,\varphi}\big)^{-1}$. By the argument principle, \[ \sum_{1\le j\le m_D} \int\limits_{{\mathcal P}^*_{j,\varphi}\cup[Q_{\mu_{\partial}si(\nu_j)},\mu_\varphi(z_j)]^*} \frac{\zeta_n(\lambda,\varphi)}{\sqrt{\Delta^2(\lambda,\varphi)-4}}\,d\lambda \] is analytic on $U_{\partial}si$. Since \[ \sum_{1\le j\le m_D} \int\limits_{\big({\mathcal P}^*_{j,\varphi}\big)^{-1}} \frac{\zeta_n(\lambda,\varphi)}{\sqrt{\Delta^2(\lambda,\varphi)-4}}\,d\lambda \] is also analytic on $U_{\partial}si$, it then follows that \eqref{eq:argument_principle'} is analytic on $U_{\partial}si$ in this case. \end{proof} The second result characterizes the spectral bands of $\mathop{\rm Spec}\nolimits_{\mathbb R} L(\varphi )$. It is used in the proof of Lemma \mathop{\rm Re}f{lem:negative_actions} to show that the actions $I_n$, $|n|>R$, are non-positive on $U_{\rm iso}\cap i L^2_r$. For any given $\varepsilon>0$ and $\delta > 0$ sufficiently small, and $z\in\mathbb{C}$, denote by $B^{\varepsilon,\delta}_z$ the following {\em box} in $\mathbb{C}$, \[ B^{\varepsilon,\delta }_z := \big\{\lambda\in\mathbb{C}\,\big|\, |\mathop{\rm Re}(\lambda - z)|<\varepsilon,\,\, |\mathop{\rm Im}(\lambda - z)|<\delta \big\}. \] In the sequel we use results on the discriminant $\Delta(\lambda,\varphi)$ of $L(\varphi)$ reviewed in Section \mathop{\rm Re}f{sec:setup}. \begin{Lem}\label{lem:spectral_bands} For any ${\partial}si\in i L^2_r$, choose $R_p \in\mathbb{Z}_{\geq 0}$ as in Lemma \mathop{\rm Re}f{lem:counting_lemma}. Then there exist an integer $\tilde R\ge R_p$, as well as $\varepsilon > 0$, $\delta > 0$, and a neighborhood $W_{\partial}si $ of ${\partial}si $ in $i L^2_r$ so that for any $|n| > \tilde R$ and $\varphi\in W_{\partial}si $, $B^{\varepsilon , \delta }_{\dot \lambda _n(\varphi )}\subseteq D_n$ and $B^{\varepsilon , \delta }_{\dot \lambda _n(\varphi )}\cap \mathop{\rm Spec}\nolimits_{\mathbb R}L(\varphi )$ consists of the interval $(\dot\lambda _n(\varphi ) - \varepsilon,\dot\lambda _n(\varphi ) + \varepsilon )\subseteq\mathbb{R}$ and a smooth arc $g_n$ connecting $\lambda ^-_n(\varphi )$ with $\lambda ^+_n(\varphi )$ within $B^{\varepsilon,\delta }_{\dot \lambda_n(\varphi )}$ so that $\Delta(\lambda)$ is real valued on $g_n$ and satisfies $-2 < \Delta (\lambda ) < 2$ for any $\lambda \in g_n\setminus\{ \lambda ^{\partial}m _n(\varphi ) \} $. 
In fact for any $\varphi \in W_\psi $, the arc $g_n$, also referred to as {\em spectral band}, is the graph $\{ a_n(v) +i v : |v| \leq \mathop{\rm Im}(\lambda^+ _n)\}$ of a smooth real valued function $a_n : [- \mathop{\rm Im}(\lambda ^+_n), \mathop{\rm Im} (\lambda^+_n)]\to\mathbb{R}$ with the property that $a_n(0) = \dot \lambda _n(\varphi )$, $a_n(-t) = a_n (t)$ for any $0 \le t \le\mathop{\rm Im}(\lambda ^+_n)$, and $a_n(\pm\mathop{\rm Im}(\lambda ^+_n)) = \mathop{\rm Re}(\lambda ^+_n)$. \end{Lem} For the convenience of the reader we include the proof of Lemma \ref{lem:spectral_bands} given in \cite{KLT1}. \begin{proof}[Proof of Lemma \ref{lem:spectral_bands}] First let us introduce some more notation. For any $\lambda \in \mathbb{C}$ write $\lambda = u + i v$ with $u, v \in {\mathbb R}$ and $\Delta = \Delta _1 + i \Delta _2$ where for $\varphi \in L^2_c$ arbitrary, $\Delta_j(u,v) \equiv \Delta_j(u,v,\varphi )$, $j=1,2$, are given by \[ \Delta _1(u,v):= \mathop{\rm Re}\big(\Delta (u + iv, \varphi )\big), \qquad \Delta _2(u,v):= \mathop{\rm Im}\big(\Delta (u + iv, \varphi )\big) . \] In a first step we want to study $\Lambda \cap D_n$ for $\varphi \in iL ^2_r$ where \[ \Lambda = \{ u + iv\,|\, u,v \in {\mathbb R},\ \Delta _2(u,v) = 0 \} . \] By Lemma \ref{lem:product1} $(iii)$ for any $\varphi \in i L^2_r$, $\Delta _2(u,0,\varphi ) = 0$. Hence \[ F : {\mathbb R} \times {\mathbb R} \times i L^2_r \to {\mathbb R}, \ (u,v, \varphi ) \mapsto \Delta _2(u,v,\varphi ) / v \] is well defined. As $F$ and $\Delta _2$ have the same zero set on ${\mathbb R} \times({\mathbb R} \backslash \{ 0 \})$ we investigate $\{ (u,v) \in {\mathbb R}^2 : F(u,v,\varphi ) = 0 \} $ further. We note that \begin{equation}\label{2.20} F(u,v) \equiv F(u,v,\varphi ) = \int ^1_0\big(\partial _v \Delta_2\big)(u,tv)\,dt \end{equation} and hence $F(u,0) = \partial _v \Delta _2(u,0)$. Since $\Delta _2(u,0)$ vanishes for any $u \in {\mathbb R}$, so does $\partial _u \Delta _2(u,0)$. By the Cauchy-Riemann equation it then follows that for $u \in {\mathbb R}$, $\dot\Delta (u,0) = 0$ if and only if $\partial _v \Delta _2(u,0) = 0$. The real roots of $F(\cdot , 0) = \partial _v \Delta _2(\cdot , 0)$ are thus given by $\dot \lambda _n$, $n \in \mathbb{Z}$, with $\dot \lambda _n \in {\mathbb R}$. Since $\Delta $ is an analytic function on $\mathbb{C} \times L^2_c$ one concludes that $\Delta_2(\cdot,\cdot ,\cdot )$ is a real analytic function on $\mathbb{R}\times\mathbb{R}\times i L^2_r$, and by formula \eqref{2.20}, so is $F$. For $\psi \in i L^2_r$ let $R_p \in \mathbb{Z}_{\ge 0}$ and the neighborhood $V_\psi$ of $\psi $ be as in Lemma \ref{lem:counting_lemma}. For any $|k| > R_p$, let $S_k$ be the map \[ S_k : B_{\ell^\infty} \times (-1,1) \times\big(V_\psi\cap i L^2_r\big)\to {\mathbb R}, \, \, \big( (\zeta _n)_{|n| > R_p}, v, \varphi \big) \mapsto F(\dot \lambda _k + \zeta_k, v, \varphi ) \] where $\dot \lambda _k = \dot \lambda _k(\varphi )$ is in $D_k$ and $B_{\ell^\infty} $ denotes the open unit ball in the space of real valued sequences \[ \ell ^\infty = \big\{ \zeta = (\zeta _n)_{|n| > R_p}\,\big|\, \| \zeta \| _\infty < \infty\big\},\quad\| \zeta \| _\infty := \sup _{|n| > R_p} |\zeta_n|\,.
\] Note that $S_k$ is the composition of the map $B_{\ell^\infty} \times (-1,1) \times\big(V_\psi\cap i L^2_r\big)\to\mathbb{R}\times (-1,1) \times\big(V_\psi\cap i L^2_r\big)$, \[ (\zeta , v, \varphi ) \mapsto\big(\dot \lambda _k (\varphi ) + \zeta _k, v, \varphi\big), \] with $F$ and hence real analytic. It follows from the asymptotics of $\Delta $ of \cite[Lemma 4.3]{GK1} and the asymptotics of $\dot\lambda_k$ stated in Lemma \ref{lem:counting_lemma} that for any sequence $\zeta = (\zeta _n)_{|n| > R_p} \in B_{\ell^\infty}$, $(S_k)_{|k| > R_p}$ is also in $\ell ^\infty$. We claim that \[ S: B_{\ell^\infty} \times (-1,1) \times\big(V_\psi\cap i L^2_r\big)\to \ell ^\infty , \ (\zeta , v, \varphi ) \mapsto\big(S_k(\zeta , v, \varphi )\big)_{|k| > R_p} \] is smooth. To see this, note that by the Cauchy-Riemann equation, $\Delta _2(u,v,\varphi )$ and its derivatives, when restricted to ${\mathbb R} \times {\mathbb R} \times i L^2_r$, can be estimated in terms of $\Delta (\lambda , \varphi )$ and its derivatives. Hence the asymptotic estimates of $\Delta (\lambda , \varphi )$ (see \cite[Theorem 2.2, Lemma 4.3]{GK1}) together with Cauchy's estimate can be used to estimate the derivatives of $S_k$ to conclude that the map $S = (S_k)_{|k| > R_p}$ is smooth. We now would like to apply the implicit function theorem to $S$. First note that $S\big\arrowvert _{\zeta = 0, v = 0, \psi } = 0$ as $\dot \lambda_k(\psi )$, $k\in\mathbb{Z}$, are the roots of $\dot \Delta (\cdot,\psi )$. In addition, since $\dot\lambda_k(\psi)$, $|k|>R_p$, are real and simple roots of ${\dot\Delta}(\cdot,\psi)$, one has \[ \partial _{\zeta _k} S_k \big\arrowvert _{\zeta = 0, v = 0, \psi } = \partial _u \partial _v \Delta _2\big\arrowvert _{u = \dot \lambda _k(\psi ),v = 0,\psi} \] whereas by the definition of $S_k$, $\partial_{\zeta _n} S_k$ vanishes identically for any $n \not= k$. By the asymptotics $\dot \lambda _k(\psi ) = k\pi + \ell ^2_k$ (cf. Lemma \ref{lem:counting_lemma}) and the one of $\Delta $ (cf.\ \cite[Lemma 4.3]{GK1}) it follows from Cauchy's estimate that \[ \partial _u \partial _v \Delta _2\big\arrowvert _{u=\dot\lambda_k(\psi),v = 0,\psi} = \partial_u \partial _v \mathop{\rm Im}\big(\Delta(u + iv, 0)\big) \big\arrowvert _{u=\dot \lambda_k(\psi),v=0} + \ell ^2_k . \] On the other hand $\Delta (\lambda , 0) = 2 \cos \lambda $ and thus $\mathop{\rm Im}\big(\Delta (u + iv, 0)\big) = - 2 \sin u \sinh v$ implying that \[ \partial _u \partial _v \mathop{\rm Im}\big(\Delta (u + iv,0)\big)\big\arrowvert _{u=\dot\lambda_k(\psi),v = 0} = 2(-1)^{k+1}\big(1 + \ell^2_k\big) . \] Altogether we then conclude that \[ \partial _\zeta S \big\arrowvert _{\zeta = 0, v = 0,\psi} = 2 \mathop{\rm diag} \left( \big((-1)^{k+1}\big)_{|k|> R_p} + \ell ^2_k \right) . \] Thus there exists an integer $\tilde R\ge R_p$ so that for the restriction of $S$ to $B_{\tilde\ell^\infty}\times(-1,1)\times\big(V_\psi\cap i L^2_r\big)$, \[ \tilde S : B_{\tilde\ell^\infty} \times (-1, 1) \times\big(V_\psi\cap i L^2_r\big)\to \tilde\ell^\infty, \] the differential $\partial _{\tilde\zeta}{\tilde S}\big\arrowvert _{{\tilde\zeta}=0, v = 0,\psi}$ is invertible.
Here $B_{\tilde\ell^\infty} $ denotes the unit ball in \[{\tilde\ell}^\infty = \big\{{\tilde\zeta} = (\zeta _n)_{|n|> \tilde R} \subseteq {\mathbb R}\,\big|\, \|{\tilde\zeta}\| _\infty < \infty\big\} . \] By the implicit function theorem there exist $\delta > 0$, a neighborhood $W_\psi \subseteq V_\psi$ of $\psi$ in $i L^2_r$, $0<\varepsilon<1$, and a smooth map \[ h : (-\delta , \delta ) \times W_\psi \to B_{\tilde\ell^\infty} (\varepsilon) , \ (v,\varphi ) \mapsto h(v,\varphi ) = \big(h_n(v,\varphi )\big)_{|n| > \tilde R} \] so that $h(0,\psi ) = 0$ and $\tilde S(h(v,\varphi ), v, \varphi ) = 0$ for any $(v,\varphi ) \in (-\delta , \delta ) \times W_\psi $. Here \[ B_{\tilde\ell^\infty}(\varepsilon) = \big\{{\tilde\zeta}\in \tilde\ell^\infty\,\big|\,\|{\tilde\zeta}\| _\infty < \varepsilon\big\} \] and $(v,\varphi ) \mapsto (h(v,\varphi ),v,\varphi )$ parametrizes the zero level set of $\tilde S$ in $B_{\tilde\ell^\infty}(\varepsilon) \times (- \delta , \delta ) \times W_\psi $. In particular, for any $|n| > \tilde R$ and $\varphi $ in $W_\psi $, the intersection of $\{ F(u,v,\varphi ) = 0\}$ with the box $B^{\varepsilon , \delta }_{\dot \lambda _n(\varphi )}$ is smoothly parametrized by \[ z_n(\cdot , \varphi ) : (-\delta , \delta ) \to B^{\varepsilon , \delta }_{\dot \lambda _n(\varphi )} , \ v \mapsto \dot \lambda _n(\varphi ) + h_n(v,\varphi ) + iv . \] By choosing $\tilde R$ larger and by shrinking $W_\psi $, if necessary, one can assure that $\lambda^\pm _n\equiv\lambda ^\pm _n(\varphi )$ is in $B^{\varepsilon , \delta }_{\dot \lambda _n(\varphi )}$ for any $\varphi \in W_\psi $ and $|n| > \tilde R$. Since $\Delta (\lambda ^\pm _n) = (-1)^n 2$ and $\Delta (\dot \lambda _n) \in {\mathbb R}$, the pair $\lambda ^+_n$, $\lambda ^-_n$ as well as $\dot \lambda _n$ are in the range of $z_n$. In fact, one has \[ z_n(0,\varphi ) ={\dot\lambda}_n \mbox{ and } z_n\big(\pm\mathop{\rm Im}(\lambda ^+_n),\varphi\big) = \lambda ^\pm _n,\quad\forall\varphi\in W_\psi. \] Furthermore recall that $\chi _p(\dot \lambda _n, \varphi ) \leq 0$ since $\dot\lambda_n\in\mathbb{R}$. As $\lambda ^\pm _n$ are the only roots of $\chi _p(\cdot,\varphi )$ in $D_n$, it follows that if $\dot \lambda _n \not= \lambda ^\pm _n$, then $\chi_p(\dot \lambda _n,\varphi ) < 0$ and hence \[ \chi _p(z_n(v), \varphi ) < 0 \quad \forall\, - \mathop{\rm Im}(\lambda^+_n) < v < \mathop{\rm Im}(\lambda ^+_n) . \] Altogether we thus have proved that for any $|n|>\tilde R$ \[ g_n = \big\{ z_n(v)\,\big|\, - \mathop{\rm Im}(\lambda ^+_n) \le v \le \mathop{\rm Im}(\lambda ^+_n)\big\} \] is a smooth arc on which $\Delta $ takes values in $[-2,2]$. This establishes all the properties listed in the statement of the Lemma. \end{proof} \end{document}
\begin{document} \title[Maximal subgroups and complexity of the flow semigroup]{The maximal subgroups and the complexity of the flow semigroup of finite (di)graphs} \dedicatory{Dedicated to John Rhodes on the occasion of his 80th birthday.} \author{G\'{a}bor Horv\'{a}th} \address{Institute of Mathematics, University of Debrecen, Pf.~400, Debrecen, 4002, Hungary} \email{[email protected]} \author{Chrystopher L.~Nehaniv} \address{Royal Society / Wolfson Foundation Biocomputation Research Laboratory, Centre for Computer Science and Informatics Research, University of Hertfordshire, College Lane, Hatfield, Hertfordshire AL10 9AB, United Kingdom} \email{[email protected]} \author{K\'{a}roly Podoski} \address{Alfr\'{e}d R\'{e}nyi Institute of Mathematics, 13--15 Re\'{a}ltanoda utca, Budapest, 1053, Hungary} \email{[email protected]} \thanks{ The research was partially supported by the European Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC under grant agreements no.~318202 and no.~617747, by the MTA R\'{e}nyi Institute Lend\"{u}let Limits of Structures Research Group, the first author was partially supported by the Hungarian National Research, Development and Innovation Office (NKFIH) grant no.~K109185 \ and grant no.~FK124814, and the third author was funded by the National Research, Development and Innovation Office (NKFIH) Grant No. ERC\_HU\_15 118286. } \date{30 July 2017} \subjclass[2010]{20M20, 05C20, 05C25, 20B30} \keywords{Rhodes's conjecture, flow semigroup of digraphs, Krohn--Rhodes complexity, complete invariants for graphs, invariants for digraphs, permutation groups} \begin{abstract} The flow semigroup, introduced by John Rhodes, is an invariant for digraphs and a complete invariant for graphs. After collecting together previous partial results, we refine and prove Rhodes's conjecture on the structure of the maximal groups in the flow semigroup for finite, antisymmetric, strongly connected digraphs. Building on this result, we investigate and fully describe the structure and actions of the maximal subgroups of the flow semigroup acting on all but $k$ points for all finite digraphs and graphs for all $k\geq 1$. A linear algorithm (in the number of edges) is presented to determine these so-called `defect $k$ groups' for any finite (di)graph. Finally, we prove that the complexity of the flow semigroup of a 2-vertex connected (and strongly connected di)graph with $n$ vertices is $n-2$, completely confirming Rhodes's conjecture for such (di)graphs. \end{abstract} \maketitle \section{Introduction}\label{sec:intro} John Rhodes in \cite{wildbook} introduced the {\em flow semigroup}, an invariant for graphs and digraphs (that is, isomorphic flow semigroups correspond to isomorphic digraphs). In the case of graphs, this is a complete invariant determining the graph up to isomorphism. The flow semigroup is the semigroup of transformations of the vertices generated by elementary collapsings corresponding to the edges of the (di)graph. An elementary collapsing corresponding to the directed edge $uv$ is a map on the vertices moving $u$ to $v$ and acting as the identity on all other vertices. (See Section~\ref{sec:digraphs} for all the precise definitions.) A maximal subgroup of this semigroup for a finite (di)graph $D=(V_D, E_D)$ acts by permutations on all but $k$ of its vertices ($1 \leq k \leq \abs{V_D}-1$) and is called a ``defect $k$ group''. The set of defect $k$ groups of a (di)graph is also an invariant. 
For each fixed $k$, they are all isomorphic to each other in the case of (strongly) connected (di)graphs. Rhodes formulated a conjecture on the structure of these groups for strongly connected digraphs whose edge relation is anti-symmetric in \cite[Conjecture~6.51i (2)--(4)]{wildbook}. We show that his conjecture was correct, and we prove it here in sharper form. Moreover, extending this result, we fully determine the defect $k$ groups for all finite graphs and digraphs. Rhodes further conjectured \cite[Conjecture~6.51i (1)]{wildbook} that the Krohn--Rhodes complexity of the flow semigroup of a strongly connected, antisymmetric digraph $D$ on $n$ vertices is $n-2$. We confirm this conjecture when the digraph is 2-vertex connected, and bound the complexity in the remaining cases. The structure of the argument is as follows. First, a maximal group in the flow semigroup of a digraph $D$ is the direct product of maximal groups of the flow semigroups of its strongly connected components. Thus one needs only to consider strongly connected digraphs. It turns out, that if $D$ is a strongly connected digraph, then the defect $k$ group (up to isomorphism) does not depend on the choice of the vertices it acts on. Furthermore, for a strongly connected digraph, its flow semigroup is the same as the flow semigroup of the simple graph obtained by ``forgetting'' the direction of the edges. This is detailed in Section~\ref{sec:digraphs} and is based on \cite[p.~159--169]{wildbook}. Thus, one only needs to consider the defect $k$ groups of the flow semigroup for simple connected graphs. In Section~\ref{sec:prelim} we list some useful lemmas and determine the defect $k$ group of a cycle. In Section~\ref{sec:defect1} we prove that the defect 1 group of arbitrary simple connected graph is the direct product of the defect 1 groups of its 2-vertex connected components. The defect 1 group of an arbitrary 2-vertex connected graph $\Gamma$ has been determined by Wilson \cite{wilson}. He proved that the defect 1 group is either $A_{n-1}$ or $S_{n-1}$, unless $\Gamma$ is a cycle or the exceptional graph displayed in Figure~\ref{fig:exceptionalgraph}. \begin{figure} \caption{Exceptional graph} \label{fig:exceptionalgraph} \end{figure} In particular, Rhodes's conjecture (as phrased for strongly connected, antisymmetric digraphs in \cite[Conjecture~6.51i~(2)]{wildbook}) about the defect 1 group holds, and more generally: the defect 1 group of the flow semigroup of a simple connected graph is indeed the product of cyclic, alternating and symmetric groups of various orders. A straightforward linear algorithm is given to determine the direct components of the defect 1 group of an arbitrary connected graph (see Section~\ref{sec:algorithm}). In Section~\ref{sec:defectk} we determine the defect $k$ groups ($k \geq 2$) of arbitrary graphs by considering the so-called maximal $k$-subgraphs (maximal subgraphs for which the defect $k$ group is the full symmetric group) and prove that the defect $k$ group of a graph is the direct product of the defect $k$ groups of the maximal $k$-subgraphs (i.e.\ of full symmetric groups). In Section~\ref{sec:algorithm} we provide a linear algorithm (in the number of edges of $\Gamma$) to determine the maximal $k$-subgraphs of an arbitrary connected graph. 
Finally, in Section~\ref{sec:cpx} we confirm \cite[Conjecture~6.51i (1)]{wildbook} about the Krohn--Rhodes complexity of digraphs when the digraph is 2-vertex connected, and we prove some bounds on the complexity of the flow semigroup in the remaining cases. (See Section~\ref{sec:cpx} for the definition of Krohn--Rhodes complexity.) We have collected all these results into the following main theorem. \begin{thm}\label{thm:main}\ \begin{enumerate} \item\label{it:digraph} Let $D$ be a digraph, then every maximal subgroup of $S_D$ is (isomorphic to) the direct product of maximal subgroups of $S_{D_i}$, where the $D_i$ are the strongly connected components of $D$. \item\label{it:stronglyG_k} Let $D$ be a strongly connected digraph. Let $V_k, V_k' \subseteq D$ be subsets of nodes such that $\abs{V_k} = \abs{V_k'}=k$. Let $G_{k, V_k}, G_{k, V_k'}$ be the defect $k$ groups acting on $V \setminus V_k$ and $V \setminus V_k'$, respectively. Then $G_{k, V_k} \simeq G_{k, V_k'}$ as permutation groups. \myitem[(\getrefnumber{it:stronglyG_k}\textsuperscript{r})]\label{it:stronglyS_D} Let $D$ be a strongly connected digraph, and $\Gamma_D$ be the graph obtained from $D$ by forgetting the direction of the edges in $D$. Then $S_D = S_{\Gamma_D}$. \item\label{it:graph} Let $\Gamma$ be a simple connected graph of $n$ vertices, and let $\Gamma_1, \dots , \Gamma_m$ be its 2-vertex connected components. Then the defect 1 group of $\Gamma$ is the direct product of the defect 1 groups of $\Gamma_i$ ($1\leq i\leq m$). \item\label{it:2-vertex} Let $\Gamma$ be a 2-vertex connected simple graph with $n \geq 2$ vertices. Then the defect 1 group of $\Gamma$ is isomorphic (as a permutation group) to \begin{enumerate} \item\label{it:defect1cycle} the cyclic group $Z_{n-1}$ if $\Gamma$ is a cycle; \item $S_5 \simeq PGL_2(5)$ acting sharply 3-transitively on 6 points, if $\Gamma$ is the exceptional graph (see Figure~\ref{fig:exceptionalgraph}); \item $S_{n-1}$ or $A_{n-1}$, otherwise, where the defect 1 group is $A_{n-1}$ if and only if $\Gamma$ is bipartite. \end{enumerate} \myitem[(\getrefnumber{it:2-vertex}\textsuperscript{c})]\label{it:compl2vertex} Let $\Gamma$ be a 2-vertex connected simple graph with $n \geq 2$ vertices. Then the complexity of $S_\Gamma$ is $\cpx{S_{\Gamma}}=n-2$. \myitem[(\getrefnumber{it:2-vertex}\textsuperscript{cc})]\label{it:compl2edge} Let $\Gamma$ be a 2-edge connected simple graph with $n \geq 2$ vertices. Then for the complexity of $S_\Gamma$ we have $n-3 \leq \cpx{S_{\Gamma}} \leq n-2$. \item\label{it:defectk} Let $k \geq 2$, $\Gamma$ be a simple connected graph of $n$ vertices, $n > k$. \begin{enumerate} \item\label{it:defectkcycle} If $\Gamma$ is a cycle, then its defect $k$ group is the cyclic group $Z_{n-k}$. \item\label{it:defectkmain} Otherwise, let $\Gamma_1, \dots , \Gamma_m$ be the maximal $k$-subgraphs of $\Gamma$, and let $\Gamma_i$ have $n_i$ vertices. Then the defect $k$ group of $\Gamma$ is the direct product of the defect $k$ groups of $\Gamma_i$ ($1 \leq i\leq m$), thus it is isomorphic (as a permutation group) to \[ S_{n_1-k} \times \dots \times S_{n_m-k}. \] \end{enumerate} \end{enumerate} \end{thm} Our main contribution to Theorem~\ref{thm:main} are items~(\ref{it:graph}), \ref{it:compl2vertex}, \ref{it:compl2edge} and (\ref{it:defectk}). Items~(\ref{it:digraph}), (\ref{it:stronglyG_k})~and~\ref{it:stronglyS_D} (among some basic definitions and notations) are detailed in Section~\ref{sec:digraphs} and are based on \cite[p.~159--169]{wildbook}. 
In Section~\ref{sec:prelim} we list some useful lemmas and determine the defect $k$ group of a cycle. Item~(\ref{it:graph}) is proved in Section~\ref{sec:defect1}, while item~(\ref{it:2-vertex}) has already been proved by Wilson \cite{wilson}. Then in Section~\ref{sec:defectk} we prove item~(\ref{it:defectk}). In Section~\ref{sec:algorithm} we provide a linear algorithm (in the number of edges of $\Gamma$) to determine the maximal $k$-subgraphs of an arbitrary connected graph to help putting item~(\ref{it:defectk}) more into context. Finally, items~\ref{it:compl2vertex}~and~\ref{it:compl2edge} are proved in Section~\ref{sec:cpx}. East, Gadouleau and Mitchell \cite{mitchell} are currently looking into other properties of flow semigroups. In particular, they provide a linear algorithm (in the number of vertices of a digraph) for whether or not the flow semigroup contains a cycle of length $m$ for a fixed positive integer $m$. Furthermore, they classify all those digraphs whose flow semigroups have any of the following properties: inverse, completely regular, commutative, simple, 0-simple, a semilattice, a rectangular band, congruence-free, is $\mathcal{K}$-trivial or $\mathcal{K}$-universal, where $\mathcal{K}$ is any of Green's $\mathcal{H}$-, $\mathcal{L}$-, $\mathcal{R}$-, or $\mathcal{J}$-relation, and when the flow-semigroup has a left, right, or two-sided zero. Rhodes's original conjecture \cite[Conjecture~6.51i]{wildbook} is about strongly connected, antisymmetric digraphs. By \cite{Robbins1939} a strongly connected antisymmetric digraph becomes a 2-edge connected graph after forgetting the directions. Therefore Theorem~\ref{thm:main} almost completely settles Rhodes's conjecture \cite[Conjecture 6.51i]{wildbook}. To completely settle the last remaining part of Rhodes's conjecture \cite[Conjecture 6.51i (1)]{wildbook}, one should find the complexity of the flow semigroups for the rest of the 2-edge connected graphs. \begin{prob} Determine the complexity of $S_{\Gamma}$ for a 2-edge connected graph $\Gamma$ which is not 2-vertex connected. \end{prob} The smallest such graph is the ``bowtie'' graph: \begin{prob} Let $\Gamma$ be the graph with vertex set $\halmaz{u,v,w,x,y}$ and edge set $\halmaz{uv,vw,wu,wx,xy,yw}$. Determine the complexity of $S_{\Gamma}$. \end{prob} Ultimately, the goal is the determine the complexity for all flow semigroups. \begin{prob} Determine the complexity of $S_{\Gamma}$ for an arbitrary finite graph (or digraph) $\Gamma$. \end{prob} \section{Flow semigroup of digraphs}\label{sec:digraphs} For notions in graph theory we refer to \cite{GraphTheory, HandbookOfCombinatorics}, in group theory to \cite{Robinson} in permutation groups to \cite{Cameron, DixonMortimer}, in semigroup theory to \cite{CliffordPreston1, CliffordPreston2}. A {\em semigroup} is a set with a binary associative multiplication. A {\em transformation} on a set $X$ is a function $s \colon X\rightarrow X$. It {\em operates} (or {\em acts}) {\em on} $X$ by mapping each $x\in X$ to some $x\cdot s \in X$. Here we write $x\cdot s$ or $xs$ for transformation $s$ applied to $x\in X$. A {\em transformation semigroup} $S$ is a set of transformations $s \in S$ on some set $X$ such that $S$ is closed under (associative) function composition. Also, $S$ itself is then said to operate or {\em to act on} the set $X$. Note that in this paper functions act on the right, therefore transformations are multiplied from left to right. 
Denoting by $ss'$ the transformation of $X$ obtained by first applying $s$ and then $s'$, we have $x \cdot ss' = (x \cdot s) \cdot s'$. If a semigroup element $s$ acts on a set $X$, and for some $Y \supseteq X$ the action of $s$ is not defined on $Y \setminus X$, then we may consider $s$ acting on $Y$, as well, with the identity action on $Y \setminus X$. A {\em permutation group} is a nonempty transformation semigroup $G$ that contains only permutations and such that that if $g \in G$ then the inverse permutation $g^{-1}$ is also in $G$. Furthermore, for a set $Y \subseteq X$ and a transformation $s$ on $X$ define \[ Ys = \halmazvonal{ys}{y \in Y}. \] A {\em subgroup} $G$ of a transformation semigroup $S$ is a subset of $S$ whose transformations satisfy the (abstract) group axioms. It is not hard to show that if $S$ is a transformation semigroup acting on $X$, then $G$ contains a (unique) idempotent $e^2=e$ (which does not generally act as the identity map on $X$), and furthermore distinct elements of $G$ when restricted to $X e$ are distinct, permute $Xe$, and comprise a permutation group acting on $Xe$ (see \cite[p.~49]{wildbook}). A {\em digraph} $(V, E)$ is a set of \emph{nodes} (or \emph{vertices}) $V$, and a binary relation $E \subseteq V\times V$. An element $e=(u,v) \in E$ is called a {\em directed edge} from node $u$ to node $v$, and also denoted $uv$. A {\em loop-edge} is an edge from a vertex to itself. A {\em graph} $(V,E)$ is a set of nodes $V$ and a symmetric binary relation $E \subseteq V \times V$. If $(u,v) \in E$, then $uv$ is called an (undirected) edge. Such a graph is called {\em simple} if it has no loop-edges. \emph{In this paper we consider only digraphs without loop-edges and simple graphs}. A \emph{walk} is a sequence of vertices $\left( v_1, \dots, v_n \right)$ such that $v_iv_{i+1}$ is a (directed) edge for all $1\leq i\leq n-1$. By \emph{cycle} we will mean a simple cycle, that is a closed walk with no repetition of vertices except for the starting and ending vertex. A \emph{path} is a walk with no repetition of vertices. A (di)graph $\Gamma = (V, E)$ is (strongly) connected if there is a path from $u$ to $v$ for all distinct $u, v \in V$. By \emph{subgraph} $\Gamma' = (V', E') \subseteq \Gamma$ we mean a graph for which $V' \subseteq V$, $E' \subseteq E$. If $\Gamma'$ is an \emph{induced subgraph}, that is $E'$ consists of all edges from $E$ with both endpoints in $V'$, then we explicitly indicate it. A strongly connected component of a digraph $\Gamma$ is a maximal strongly connected subgraph of $\Gamma$. For a digraph $D = (V_D, E_D)$ without any loop-edges, the \emph{flow semigroup} $S = S_D$ is the semigroup of transformations acting on $V_D$ defined by \[ S = S_D = \left<e_{uv} \mid uv \in E_D \right>, \] where $e_{uv}$ is the \emph{elementary collapsing} corresponding to the directed edge $uv \in E_D$, that is, for every $x \in V_D$ we have \[ x\cdot e_{uv} = xe_{uv} = \begin{cases} v, & \text{if } x=u, \\ x, & \text{otherwise.} \end{cases} \] Thus, the flow semigroup of a (di)graph $D$ is generated by idempotents (elementary collapsings) corresponding to the edges of $D$. The flow semigroup $S_D$ is also called the {\em Rhodes semigroup of the (di)graph}. A maximal subgroup of $S_D$ is a subgroup that is not properly contained in any other subgroup of $S_D$. In order to determine the maximal subgroups of $S_D$, one can make several reductions by \cite[Proposition~6.51f]{wildbook}. 
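Before turning to these reductions, the definitions above can be made concrete with a small computational sketch. The following Python fragment is ours and not part of \cite{wildbook}; the vertex encoding, the function names and the brute-force closure are illustrative assumptions, feasible only for very small (di)graphs. It encodes elementary collapsings as tuples and closes them under the right-action composition $x\cdot(st)=(x\cdot s)\cdot t$ fixed above.
\begin{verbatim}
# Minimal sketch (ours, not from Rhodes's book): elementary collapsings and
# the flow semigroup S_D of a tiny digraph, computed by brute-force closure.
# Vertices are 0, ..., n-1; a transformation t acts on the right: x . t = t[x].

def collapsing(n, u, v):
    """Elementary collapsing e_{uv}: moves u to v, fixes every other vertex."""
    return tuple(v if x == u else x for x in range(n))

def compose(s, t):
    """Right-action convention: x . (s t) = (x . s) . t."""
    return tuple(t[s[x]] for x in range(len(s)))

def flow_semigroup(n, edges):
    """Close the set of elementary collapsings of the digraph under composition."""
    gens = [collapsing(n, u, v) for (u, v) in edges]
    semigroup, frontier = set(gens), set(gens)
    while frontier:
        new = {compose(s, g) for s in frontier for g in gens} - semigroup
        semigroup |= new
        frontier = new
    return semigroup

if __name__ == "__main__":
    # Directed 3-cycle 0 -> 1 -> 2 -> 0.
    S = flow_semigroup(3, [(0, 1), (1, 2), (2, 0)])
    print(len(S))
    # 1 -> 0 is not an edge, but 0 -> 1 lies on a directed cycle, so the
    # collapsing e_{10} nevertheless belongs to S_D.
    print(collapsing(3, 1, 0) in S)   # True
\end{verbatim}
The last check already illustrates the phenomenon behind Lemma~\ref{lem:reverseedge} below: $e_{10}\in S_D$ although $1\to 0$ is not an edge of the directed $3$-cycle.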
First, one only needs to consider the maximal subgroups of $S_{D_i}$ for the strongly connected components $D_i$ of $D$. Strongly connected components are maximal induced subgraphs such that any vertex can be reached from any other vertex by a directed path. \begin{lem}[{\cite[Proposition~6.51f (1)]{wildbook}}] Let $D$ be a digraph, then every maximal subgroup of $S_D$ is (isomorphic to) the direct product of maximal subgroups of $S_{D_i}$, where the $D_i$ are the strongly connected components of $D$. \end{lem} This is (\ref{it:digraph}) of Theorem~\ref{thm:main}. An element $s \in S$ is \emph{of defect $k$} if $\abs{V_D s} = \abs{V_D}-k$. Let $V_k=\halmaz{ v_1,v_2,\dots , v_k} \subseteq V_D$. The \textit{defect $k$ group} $G_{k,V_k}$ associated to $V_k$ (called the \emph{defect set}) is generated by all elements of $S$ restricted to $V_D\setminus V_k$ which permute the elements of $V_D\setminus V_k$ and move elements of $V_k$ to elements of $V_D\setminus V_k$: \[ G_{k, V_k}=\left< s\restriction_{V_D\setminus V_k}\, : \, s\in S, (V_D\setminus V_k)s=V_D\setminus V_k, V_k s \subseteq V_D\setminus V_k \right>, \] where $s\restriction_{V_D\setminus V_k}$ denotes the restriction of the transformation $s$ onto the set $V_D\setminus V_k$. Now, $G_{k, V_k}$ is a permutation group acting on $V_D\setminus V_k$. For this reason $V_D\setminus V_k$ is called the \emph{permutation set} of $G_{k, V_k}$, and the elements of $G_{k, V_k}$ are sometimes called \emph{defect $k$ permutations}. Furthermore, if the defect set contains only one vertex $v$, then by abuse of notation we write \emph{defect $v$} or \emph{defect point $v$} instead of defect $\halmaz{v}$. In general, the defect $k$ group $G_{k, V_k}$ can depend on the choice of $V_k$. However, by \cite[Proposition~6.51f (2)]{wildbook} it turns out that if the graph is strongly connected then the defect $k$ group $G_k$ is unique up to isomorphism. \begin{lem}[{\cite[Proposition~6.51f (2)]{wildbook}}]\label{lem:stronglyconnected} Let $D$ be a strongly connected digraph. Let $V_k, V_k' \subseteq V_D$ be subsets of nodes such that $\abs{V_k} = \abs{V_k'}=k$. Then the action of $G_{k, V_k}$ on $V_D \setminus V_k$ is equivalent to that of $G_{k, V_k'}$ on $V_D \setminus V_k'$. That is, $G_{k, V_k} \simeq G_{k, V_k'}$ as permutation groups. \end{lem} This is (\ref{it:stronglyG_k}) of Theorem~\ref{thm:main}. By Lemma~\ref{lem:stronglyconnected}, we may write $G_k$ instead of $G_{k, V_k}$ without any loss of generality. Furthermore, the case of strongly connected graphs can be reduced to the case of simple graphs. Let $\Gamma=(V, E)$ be a simple (undirected) graph, we define $S_{\Gamma}$ by considering $\Gamma$ as a directed graph where every edge is directed both ways. Namely, let $D_{\Gamma}=(V,E_D)$ be the directed graph on vertices $V$ such that both $uv \in E_D$ and $vu \in E_D$ if and only if the undirected edge $uv \in E$. Then let $S_{\Gamma} = S_{D_\Gamma}$. Furthermore, for every digraph $D = (V_D, E_D)$, one can associate an undirected graph $\Gamma$ by ``forgetting'' the direction of edges in $D$. Precisely, let $\Gamma_D=(V_D,E)$ be the undirected graph such that $uv \in E$ if and only if $uv \in E_D$ or $vu \in E_D$. The following lemma due to Nehaniv and Rhodes shows that if a digraph $D$ is strongly connected then the semigroup $S_D$ corresponding to $D$ and the semigroup $S_{\Gamma_D}$ corresponding to the simple graph $\Gamma_D$ are the same. 
Moreover, Lemma~\ref{lem:reverseedge} immediately implies that the transformation semigroup $S_D$ is an invariant for digraphs and a complete invariant for (simple) graphs: that is, isomorphic digraphs have isomorphic flow semigroups, and graphs are isomorphic if and only if their flow semigroups are isomorphic as transformation semigroups. \begin{lem}[{\cite[Lemma~6.51b]{wildbook}}]\label{lem:reverseedge} Let $D$ be an arbitrary digraph. Then \[ e_{ab} \in S_D \Longleftrightarrow \begin{cases} a \to b \text{ is an edge in $D$, or } \\ b \to a \text{ is an edge in a directed cycle in $D$.} \end{cases} \] In particular, if $D$ is strongly connected then $S_D = S_{\Gamma_D}$. \end{lem} \begin{proof} Let $b \to a \to u_1 \to \dots \to u_{n-1} \to b$ be a directed cycle in $D$. Then an easy calculation shows that \[ e_{ab} = \left(e_{ba}e_{u_{n-1}b}e_{u_{n-2}u_{n-1}} \dots e_{u_{1}u_2} e_{a u_1} \right)^{n}. \] For the other direction, assume $e_{ab} = e_{uv} s$ for some $s \in S_D$. Then $e_{uv}s$ moves $u$ and $v$ to the same vertex, while $e_{ab}$ moves only $a$ and $b$ to the same vertex. Thus $\halmaz{a,b} = \halmaz{u,v}$. \end{proof} This is \ref{it:stronglyS_D} of Theorem~\ref{thm:main}. Therefore, in the following we only consider simple, connected, undirected graphs $\Gamma = (V, E)$, that is, no self-loops or multiple edges are allowed. Furthermore, $\Gamma$ is 2-edge connected if removing any edge does not disconnect $\Gamma$. Rhodes's conjecture \cite[Conjecture~6.51i (2)--(4)]{wildbook} is about strongly connected, antisymmetric digraphs. Note that by \cite{Robbins1939} a strongly connected antisymmetric digraph becomes a 2-edge connected graph after forgetting the directions. Let us fix some notation. The letters $k$, $l$, $m$ and $n$ will denote nonnegative integers. The number of vertices of $\Gamma$ is usually denoted by $n$, while $k$ will denote the size of the defect set. Usually we denote the defect $k$ group of a graph $\Gamma$ by $G_k$ or $G_{\Gamma}$, depending on the context. We try to heed the convention of using $u$, $v$, $w$, $x$, $y$ as vertices of graphs, $V$ as the set of vertices, $E$ as the set of edges. Furthermore, the flow semigroup is mostly denoted by $S$, and its elements by $s$, $t$, $g$, $h$, $p$, $q$. The cyclic group of $m$ elements is denoted by $Z_m$. We will need the notions of an open ear and of an open ear decomposition. \begin{dfn} Let $\Gamma$ be an arbitrary graph, and let $\Gamma'$ be a proper subgraph of $\Gamma$. A path $(u, c_1, \dots, c_m, v)$ is called a \emph{$\Gamma'$-ear} (or \emph{open ear}) with respect to $\Gamma$, if $u, v\in\Gamma'$, $u\neq v$, and either $m=0$ and the edge $uv \notin \Gamma'$, or $c_1, \dots, c_m \in \Gamma \setminus \Gamma'$. An \emph{open ear decomposition} of a graph is a partition of its set of edges into a sequence of subsets, such that the first element of the sequence is a cycle, and all other elements of the sequence are open ears of the union of the previous subsets in the sequence. \end{dfn} A connected graph $\Gamma$ with at least $k$ vertices is \emph{$k$-vertex connected} if removing any $k-1$ vertices does not disconnect $\Gamma$. By~\cite{Whitney1932} a graph is 2-vertex connected if and only if it is a single edge or it has an open ear decomposition. \section{Preliminaries}\label{sec:prelim} Let $\Gamma = (V, E)$ be a simple, connected (undirected) graph, and for every $1\leq k\leq \abs{V}-1$, let $G_k$ denote its defect $k$ group for some $V_k \subseteq V$, $\abs{V_k} = k$.
Let $S = S_{\Gamma}$ be the flow semigroup of $\Gamma$. The following is immediate. \begin{lem}[{\cite[Fact~6.51c]{wildbook}}]\label{lem:sTisdefectk} Let $s\in S$ be of defect $k$. If $se_{uv}$ is of defect $k$, as well, then $u \notin Vs$ or $v \notin Vs$. \end{lem} Furthermore, it is not too hard to see that every defect 1 permutation arises from the permutations generated by cycles (in the graph) containing the defect point. \begin{lem}[{\cite[Proposition~6.51e]{wildbook}}]\label{lem:cyclepermutation} Let $\Gamma$ be a connected graph, and let $G_{1}$ denote its defect 1 group, such that the defect point is $v \in V$. Then \[ G_{1} = \left< \left( u_1, \dots , u_k \right) \text{ as permutation} \mid (u_1, \dots , u_k, v) \text{ is a cycle in } \Gamma \right>. \] \end{lem} These yield that the defect $k$ group of the $n$-cycle graph is cyclic, proving items~(\ref{it:defect1cycle})~and~(\ref{it:defectkcycle}) of Theorem~\ref{thm:main}: \begin{lem}\label{lem:cycle} The defect $k$ group of the $n$-cycle is isomorphic to $Z_{n-k}$. \end{lem} \begin{proof} Let $x_1,x_2,\dots x_{n}$ be the consecutive elements of the cycle $\Gamma=(V, E)$. If $s\in S$ is an element of defect $k$ then by Lemma~\ref{lem:sTisdefectk} we have that $se_{x_ix_{i+1}}$ is of defect $k$ if and only if $x_i\notin Vs$ or $x_{i+1}\notin Vs$. This means that if $u_1,u_2,\dots u_{n-k}$ are the consecutive elements of $Vs$ in the cycle and $se_{x_ix_{i+1}}$ is of defect $k$, as well, then \[ u_1 e_{x_ix_{i+1}}, u_2 e_{x_ix_{i+1}}, \dots, u_{n-k}e_{x_ix_{i+1}} \] are the consecutive elements of $Vse_{x_ix_{i+1}}$. Thus the cyclic ordering of these elements cannot be changed. Hence $G_k$ is isomorphic to a subgroup of $Z_{n-k}$. Now, assume that $v_1, v_2,\dots v_k, u_1, u_2,\dots u_{n-k}$ are the consecutive elements of $\Gamma$, and the defect set is $V_k = \halmaz{v_1, \dots , v_k}$. Let \begin{align*} s_1 &=e_{v_1v_2}\dots e_{v_jv_{j+1}}\dots e_{v_{k-1}v_k}, \\ s_2 &= e_{u_{n-k}v_k}e_{u_{n-k-1}u_{n-k}}\dots e_{u_{j-1}u_j}\dots e_{u_1u_2}e_{v_{k}u_1}, \\ s &=s_1s_2. \end{align*} It is easy to check that \[ v_is=u_1, \ \ u_1s=u_2,\dots , u_j s=u_{j+1}, \dots, u_{n-k} s=u_1. \] Therefore $s, s^2,\dots, s^{n-k}$ are distinct elements of $G_k$, hence $G_k\simeq Z_{n-k}$. \end{proof} \section{Defect 1 groups}\label{sec:defect1} In this Section we prove item~(\ref{it:graph})~of~Theorem~\ref{thm:main}, which states that the defect 1 group of a simple connected graph is the direct product of the defect 1 groups of its 2-vertex connected components. This follows by induction on the number of 2-vertex connected components from Lemma~\ref{lem:defect1directproduct}. The case where $\Gamma$ is 2-vertex connected (that is item~(\ref{it:2-vertex}) of Theorem~\ref{thm:main}) is covered by \cite[Theorem~2]{wilson}. \begin{lem}\label{lem:defect1directproduct} Let $\Gamma_1$ and $\Gamma_2$ be connected induced subgraphs of $\Gamma$ such that $\Gamma_1\cap\Gamma_2 = \halmaz{v}$, where there are no edges in $\Gamma$ between $\Gamma_1 \setminus \halmaz{v}$ and $\Gamma_2 \setminus \halmaz{v}$. Then the defect $1$ group of $\Gamma_1\cup\Gamma_2$ is the direct product of the defect $1$ groups of $\Gamma_1$ and $\Gamma_2$. \end{lem} \begin{proof} Let $G_{\Gamma_i}$ denote the defect 1 group of $\Gamma_i$, where the defect point is $v$. By Lemma~\ref{lem:cyclepermutation}, $G_{\Gamma}$ is generated by cyclic permutations corresponding to cycles through $v$ in $\Gamma$.
Now, $\Gamma_1 \cap \Gamma_2 = \halmaz{v}$, and every path between a node from $\Gamma_1$ and a node from $\Gamma_2$ must go through $v$, hence every cycle in $\Gamma$ is either in $\Gamma_1$ or in $\Gamma_2$. Let $c_i^{(1)}, \dots , c_i^{(m_i)}$ be the permutations corresponding to the cycles in $\Gamma_i$ ($i =1, 2$). By Lemma~\ref{lem:cyclepermutation} these permutations do not move $v$, so the permutations coming from $\Gamma_1$ move only vertices of $\Gamma_1 \setminus \halmaz{v}$, while those coming from $\Gamma_2$ move only vertices of $\Gamma_2 \setminus \halmaz{v}$; having disjoint supports, they commute, that is, $c_1^{(j_1)}c_2^{(j_2)} = c_2^{(j_2)}c_1^{(j_1)}$ for all $1\leq j_i \leq m_i$, $i=1, 2$, thus \begin{multline*} G_{\Gamma} = \left< c_1^{(1)}, \dots , c_1^{(m_1)}, c_2^{(1)}, \dots , c_2^{(m_2)} \right> \\ =\left< c_1^{(1)}, \dots , c_1^{(m_1)} \right> \times \left< c_2^{(1)}, \dots , c_2^{(m_2)} \right> = G_{\Gamma_1} \times G_{\Gamma_2}. \end{multline*} \end{proof} \section{Defect $k$ groups}\label{sec:defectk} We prove item~(\ref{it:defectkmain}) of Theorem~\ref{thm:main} in this Section. In the following we assume $k \geq 2$, and every graph $\Gamma$ is assumed to be simple connected. We start with some simple observations. \begin{lem}\label{lem:simple} Let $\Gamma$ be a connected graph, and let $\Gamma'$ be a connected subgraph of $\Gamma$. If $\Gamma'$ has at least $k+1$ vertices, then the defect $k$ group of $\Gamma$ contains a subgroup isomorphic (as a permutation group) to the defect $k$ group of $\Gamma'$. Furthermore, if $\Gamma \setminus \Gamma'$ contains at least one vertex, and $\Gamma'$ has at least $k$ vertices, then the defect $k$ group of $\Gamma$ contains a subgroup isomorphic (as a permutation group) to the defect $k-1$ group of $\Gamma'$. \end{lem} \begin{proof} Let $\Gamma = \left(V, E \right)$, $\Gamma' = \left(V', E' \right)$. First, assume $\abs{V'} \geq k+1$, and let $V_k = \halmaz{v_1, \dots, v_k} \subseteq V'$. Let $G_{k, V_k}$ and $G'_{k, V_k}$ be the defect $k$-groups of $\Gamma$ and $\Gamma'$. Let $g \in G'_{k, V_{k}}$ be arbitrary. Then there exists $s \in S_{\Gamma'}$ with defect set $V_{k}$ such that $s \restriction_{V' \setminus V_{k}} = g$. Now, $E' \subseteq E$, hence every elementary collapsing of $\Gamma'$ is an elementary collapsing of $\Gamma$, as well. Thus $s \in S_{\Gamma}$, and $s$ acts as the identity on $V \setminus V'$. Furthermore, if $s' \in S_{\Gamma'}$ is another element with defect set $V_{k}$ such that $s' \restriction_{V' \setminus V_{k}} = g = s \restriction_{V' \setminus V_{k}}$, then $s' \in S_{\Gamma}$ with $s' \restriction_{V \setminus V_k} = s \restriction_{V \setminus V_k}$. Thus $\varphi \colon G'_{k, V_{k}} \to G_{k, V_{k}}$, $\varphi(g) = s \restriction_{V \setminus V_k}$ is a well defined injective homomorphism of permutation groups. Second, assume $\abs{V'} \geq k$, and let $V_{k-1} = \halmaz{v_1, \dots, v_{k-1}} \subseteq V'$. Let $v \in V \setminus V'$, and let $V_k = V_{k-1} \cup \halmaz{v}$. Let $u$ be a neighbor of $v$ and let $e = e_{vu}$. Let $G_{k, V_k}$ be the defect $k$-group of $\Gamma$ and let $G'_{k-1, V_{k-1}}$ be the defect $(k-1)$-group of $\Gamma'$. Let $g \in G'_{k-1, V_{k-1}}$ be arbitrary. Then there exists $s \in S_{\Gamma'}$ with defect set $V_{k-1}$ such that $s \restriction_{V' \setminus V_{k-1}} = g$. Now, $es \in S_{\Gamma}$ has defect set $V_k$, and $es \restriction_{V \setminus V_k}$ acts as $g$ on $V' \setminus V_{k-1}$, and acts as the identity on $V \setminus \left( V' \cup \halmaz{v} \right)$.
Furthermore, if $s' \in S_{\Gamma'}$ is another element with defect set $V_{k-1}$ such that $s' \restriction_{V' \setminus V_{k-1}} = g = s \restriction_{V' \setminus V_{k-1}}$, then $es \restriction_{V \setminus V_k} = es' \restriction_{V \setminus V_k}$. As $g \in G'_{k-1, V_{k-1}}$ was arbitrary, we have that $\varphi \colon G'_{k-1, V_{k-1}} \to G_{k, V_{k}}$, $\varphi(g) = es \restriction_{V \setminus V_k}$ is a well defined injective homomorphism of permutation groups. \end{proof} \begin{lem}\label{lem:sgraph2} Let $1 \leq m\leq l < k \leq n-2$, and assume $\Gamma$ contains the following subgraph: \begin{center} \begin{tikzpicture}[-,>=stealth',shorten >=1pt,auto,node distance=2cm, thin, main node/.style={circle,draw}, rectangle node/.style={rectangle,draw}, empty node/.style={}] \node[rectangle node] (x1) {$x_1$}; \node[rectangle node] (y) [below of=x1] {$y$}; \node[rectangle node] (x2) [right of=x1] {$x_2$}; \node[empty node] (dots) [right of=x2] {$\dots$}; \node[rectangle node] (xl) [right of=dots] {$x_{l}$}; \node[main node] (v) [above right of=xl] {$v$}; \node[main node] (u_1) [ above right of=x1] {$u_1$}; \node[rectangle node] (x2) [right of=x1] {$x_2$}; \node[empty node] (dots) [right of=x2] {$\dots$}; \node[rectangle node] (xl) [right of=dots] {$x_{l}$}; \node[main node] (u_2) [above right of=u_1] {$u_2$}; \node[empty node] (dots_1) [above right of=u_2] {\reflectbox{$\ddots$}}; \node[main node] (u_m) [above right of=dots_1] {$u_m$}; \path[every node/.style={font=\sffamily\small}] (u_1) edge node {} (x1) (x1) edge node {} (x2) (x2) edge node {} (dots) (dots) edge node {} (xl) (xl) edge node {} (v) (y) edge node {} (x1) (u_1) edge node {} (u_2) (u_2) edge node {} (dots_1) (dots_1) edge node {} (u_m); \end{tikzpicture} \end{center} If $V_k$ is a set of nodes of size $k$ such that $y, x_1, \dots, x_l \in V_k$, and $v, u_i \notin V_k$ for some $1\leq i\leq m$, then the defect $k$ group $G_{k, V_k}$ contains the transposition $(u_i, v)$. \end{lem} \begin{proof} Let \[ r = \begin{cases} s s_1 e_{yx_1} e_{x_1u_1}, & \text{ if } i =1, \\ s s_1 \dots s_i p t t_{i-1} \dots t_1q, & \text{ if } i\geq 2, \end{cases} \] where \begin{align*} s &= e_{vx_l}e_{x_lx_{l-1}}\dots e_{x_2x_{1}}e_{x_1y}, \\ s_1 &= e_{u_1x_1}e_{x_1x_2}\dots e_{x_{l-1}x_{l}}e_{x_lv}, \\ s_j &= e_{u_{j}u_{j-1}}\dots e_{u_2u_1} e_{u_1x_1} e_{x_{1}x_{2}}\dots e_{x_{l-j+1}x_{l-j+2}}, & (2 &\leq j\leq m), \\ p &= e_{yx_1}e_{x_1u_{1}}e_{u_1u_{2}}\dots e_{u_{i-1}u_{i}}, \\ t &= e_{x_{l-i+2}x_{l-i+1}}\dots e_{x_2x_{1}}e_{x_1y}, \\ t_j &= e_{x_{l-j+2}x_{l-j+1}}\dots e_{x_2x_{1}}e_{x_1u_1}e_{u_1u_2}\dots e_{u_{j-1}u_j}, & (2 &\leq j\leq m), \\ t_1 &= e_{vx_{l}}e_{x_lx_{l-1}}\dots e_{x_2x_{1}}e_{x_1u_1}, \\ q &= e_{yx_1}e_{x_1x_2}\dots e_{x_{l-1}{x_l}}e_{x_lv}. \end{align*} Then $r$ transposes $u_i$ and $v$ and fixes all other vertices of $\Gamma$ outside the defect set. \end{proof} Note that Lemma~\ref{lem:sgraph2} is going to be useful whenever $\Gamma$ contains a node with degree at least 3. \begin{lem}\label{lem:kcycleplusvertex} Let $k \geq 2$, $\Gamma' = \left( V', E' \right)$ be such that $\abs{V'} > k$ and its defect $k$ group is transitive (e.g.\ if $\Gamma'$ is a cycle with at least $k+1$ vertices). Let $\Gamma=\left( V'\cup \halmaz{ v }, E' \cup \halmaz{x_1 v} \right)$ for a new vertex $v$ and some $x_1 \in \Gamma'$, where the degree of $x_1$ in $\Gamma'$ is at least 2. Then the defect $k$ group of $\Gamma$ is isomorphic to $S_{n-k}$. 
\end{lem} \begin{proof} Let $n$ be the number of vertices of $\Gamma$, then $n \geq k+2$. Let the vertices of $\Gamma'$ be $y, x_1, x_2, \dots , x_{k-1}, u_{1}, u_{2}, \dots , u_{n-k-1}$ such that $u_1$ and $y$ are neighbors of $x_1$ in $\Gamma'$. Let the defect set be $\halmaz{y, x_1, \dots , x_{k-1}}$. Applying Lemma~\ref{lem:sgraph2} to the subgraph with vertices $\halmaz{x_1, v, y, u_1}$ we obtain that the defect $k$ group of $\Gamma$ contains the transposition $(u_1, v)$. Since the defect $k$ group of $\Gamma'$ is transitive and contained in the defect $k$ group of $\Gamma$ by Lemma~\ref{lem:simple}, the defect $k$ group of $\Gamma$ contains the transposition $(u_i, v)$ for all $1\leq i \leq n-k-1$. Therefore, the defect $k$ group of $\Gamma$ is isomorphic to $S_{n-k}$. \end{proof} Motivated by Lemma~\ref{lem:kcycleplusvertex}, we define the \emph{$k$-sub\-graphs} and the \emph{maximal $k$-subgraphs} of a graph $\Gamma$. \begin{dfn} Let $\Gamma$ be a simple connected graph, $k\geq 2$. A connected subgraph $\Gamma' \subseteq \Gamma$ is called a \emph{$k$-subgraph} if its defect $k$ group is the symmetric group of degree $\abs{\Gamma'}-k$. A $k$-subgraph is a \emph{maximal $k$-subgraph} if it has no proper extension in $\Gamma$ to a $k$-subgraph. Finally, we say that a $k$-subgraph $\Gamma'$ is \emph{nontrivial} if it contains a vertex having at least 3 distinct neighbors in $\Gamma'$. \end{dfn} Note that every maximal $k$-subgraph is an induced subgraph. A trivial $k$-subgraph is either a line on $k+1$ points or a cycle on $k+1$ or $k+2$ points. Furthermore, a trivial maximal $k$-subgraph cannot be a cycle by Lemma~\ref{lem:kcycleplusvertex}, unless the graph itself is a cycle. Finally, any connected subgraph of $k+1$ points is trivially a $k$-subgraph, thus every connected subgraph of $k+1$ points is contained in a maximal $k$-subgraph. Note that the intersection of two maximal $k$-subgraphs cannot contain more than $k$ vertices: \begin{lem}\label{lem:kintersection} Let $\Gamma_1, \Gamma_2$ be $k$-subgraphs such that $\abs{ \Gamma_1\cap\Gamma_2 } > k$. Then $\Gamma_1\cup \Gamma_2$ is a $k$-subgraph, as well. \end{lem} \begin{proof} Choose the defect set $V_k$ such that $V_k \subsetneqq \Gamma_1 \cap \Gamma_2$, and let $v \in \left( \Gamma_1 \cap \Gamma_2 \right) \setminus V_k$. Then the symmetric groups acting on $\Gamma_1\setminus V_k$ and $\Gamma_2\setminus V_k$ are subgroups in the defect $k$ group of $\Gamma_1\cup\Gamma_2$. Thus, we can transpose every member of $\Gamma_i \setminus \left( V_k \cup \halmaz{v} \right)$ with $v$. Therefore, the defect $k$ group of $\Gamma_1 \cup \Gamma_2$ is the symmetric group on $\left( \Gamma_1\cup\Gamma_2 \right) \setminus V_k$. \end{proof} \begin{lem}\label{lem:deg3} Let $\Gamma$ be a simple connected graph, and let $\Gamma'$ be a $k$-subgraph of $\Gamma$. Let $x_1\in\Gamma'$, $v \notin \Gamma'$, and let $P = \left( x_1, x_2, \dots, x_l, v \right)$ be a shortest path between $x_1$ and $v$ in $\Gamma$ for some $l\leq k-1$. Assume that $x_1$ has at least 2 neighbors in $\Gamma'$ apart from $x_2$. Then the subgraph $\Gamma' \cup P$ is a $k$-subgraph. \end{lem} \begin{proof} First, consider the case $x_2, \dots , x_l \in \Gamma'$. Let $u, y$ be two neighbors of $x_1$ in $\Gamma'$ distinct from $x_2$, and choose the defect set $V_k$ such that it contains $y, x_1, \dots , x_l$ and does not contain $u$. By Lemma~\ref{lem:sgraph2} the defect $k$ group of $\Gamma'\cup \halmaz{ v }$ contains the transposition $(u, v)$. 
Furthermore, the defect $k$ group of $\Gamma'$ is the whole symmetric group on $\Gamma' \setminus V_k$. Thus, the defect $k$ group of $\Gamma' \cup \halmaz{v}$ is the whole symmetric group on $\left( \Gamma' \setminus V_k \right) \cup \halmaz{v}$. Now, if not all of $x_2, \dots , x_l$ are in $\Gamma'$, then, by the previous argument, one can add them (and then $v$) to $\Gamma'$ one by one, and obtain an increasing chain of $k$-subgraphs. \end{proof} As a corollary, we obtain that every vertex of degree at least 3 together with at least two of its neighbors is contained in exactly one nontrivial maximal $k$-subgraph. \begin{cor}\label{cor:kdeg3} Let $\Gamma$ be a simple connected graph with $n$ vertices such that $n>k$, and let $x_1$ be a vertex having degree at least $3$. Then there exists exactly one maximal $k$-subgraph $\Gamma'$ containing $x_1$ such that $x_1$ has degree at least 2 in $\Gamma'$. Furthermore, $\Gamma'$ is a nontrivial $k$-subgraph, and if $\Gamma_{x_1}$ is the induced subgraph of the vertices in $\Gamma$ that are of at most distance $k-1$ from $x_1$, then $\Gamma_{x_1} \subseteq \Gamma'$. \end{cor} \begin{proof} Any connected subgraph of $\Gamma$ with $k+1$ vertices containing $x_1$ and any two of its neighbors is a $k$-subgraph. Thus there exists at least one maximal $k$-subgraph containing $x_1$ and two of its neighbors. Let $\Gamma'$ be a maximal $k$-subgraph containing $x_1$ and at least two of its neighbors. Assume that $\Gamma_{x_1} \not\subseteq \Gamma'$. Let $v \in \Gamma_{x_1} \setminus \Gamma'$ be any vertex at a minimal distance from $x_1$, and let $P = (x_1, \dots , x_l, v)$ be a shortest path between $x_1$ and $v$. If $l = 1$, then $P=(x_1, v)$. Now $x_1$ has at least two neighbors in $\Gamma'$ apart from $v$, therefore $\Gamma' \cup P$ is a $k$-subgraph by Lemma~\ref{lem:deg3}, which contradicts the maximality of $\Gamma'$. Thus $l\geq 2$, in particular all neighbors of $x_1$ in $\Gamma$ are in $\Gamma'$, as well, and thus $\Gamma'$ is a nontrivial $k$-subgraph. Hence $x_1$ has at least two neighbors in $\Gamma'$ apart from $x_2$, therefore $\Gamma' \cup P$ is a $k$-subgraph by Lemma~\ref{lem:deg3}, which contradicts the maximality of $\Gamma'$. Thus $\Gamma_{x_1} \subseteq \Gamma'$. Now, assume that $\Gamma'$ and $\Gamma''$ are maximal $k$-subgraphs containing $x_1$ and at least two of its neighbors. Then $\Gamma_{x_1} \subseteq \Gamma'$ and $\Gamma_{x_1} \subseteq \Gamma''$. Note that either $\Gamma_{x_1} = \Gamma$ (and hence $\abs{\Gamma_{x_1}} = n >k$), or there exists a vertex $v \in \Gamma$ which is of distance exactly $k$ from $x_1$. Let $P = (x_1, \dots , x_k, v)$ be a shortest path between $x_1$ and $v$, and let $u$ and $y$ be two neighbors of $x_1$ distinct from $x_2$. Then $\halmaz{x_1, \dots , x_k , y, u} \subseteq \Gamma_{x_1}$, thus $\abs{\Gamma_{x_1}} > k$. Therefore $\abs{\Gamma' \cap \Gamma''} \geq \abs{\Gamma_{x_1}} > k$, yielding $\Gamma' = \Gamma''$ by Lemma~\ref{lem:kintersection}. \end{proof} \begin{lem}\label{lem:kear} Let $\Gamma'$ be a nontrivial $k$-subgraph of $\Gamma$, and let $P$ be a $\Gamma'$-ear. Then $\Gamma' \cup P$ is a (nontrivial) $k$-subgraph of $\Gamma$. \end{lem} \begin{proof} Let $\Gamma$, $\Gamma'$ and $P=\left( w_0,w_1,\dots w_i, w_{i+1} \right)$ be a counterexample, where $i$ is minimal. There exists a shortest path $(w_0, y_1, \dots, y_l, w_{i+1})$ in $\Gamma'$ among those where the degree of some $y_j$ or of $w_0$ or of $w_{i+1}$ is at least $3$ in $\Gamma'$. 
(At least one such path exists, because $\Gamma'$ is connected, and is a nontrivial $k$-subgraph, hence contains a vertex of degree at least 3.) For easier notation, let $y_0 = w_0$, $y_{l+1} = w_{i+1}$. Let $y' \in \Gamma' \setminus \halmaz{y_0, y_1, \dots, y_l, y_{l+1}}$ be a neighbor of $y_j$; this exists, because the degree of $y_j$ is at least 3, and otherwise a shorter path would exist between $w_0$ and $w_{i+1}$. If $j+1 \leq k-1$ (that is $j \leq k-2$), then by Lemma~\ref{lem:deg3} the induced subgraph on $\Gamma' \cup \halmaz{w_1}$ is a $k$-subgraph, thus $\Gamma' \cup \halmaz{w_1}$ with the ear $( w_1, \dots , w_i, w_{i+1} )$ is a counterexample with a shorter ear. Similarly, if $l-j+2 \leq k-1$ (that is $l+3-k \leq j$), then by Lemma~\ref{lem:deg3} the induced subgraph on $\Gamma' \cup \halmaz{w_i}$ is a $k$-subgraph, thus $\Gamma' \cup \halmaz{w_i}$ with the ear $( w_0, w_1, \dots , w_i )$ is a counterexample with a shorter ear. Finally, if $k-1 \leq j \leq l+2-k$, then $ 2k-3 \leq l$. Let $\Gamma''$ be the cycle $P \cup \left( y_0, y_1, \dots , y_l, y_{l+1} \right)$ together with $y'$ and the edge $y_j y'$. Then $\Gamma''$ is a $k$-subgraph by Lemma~\ref{lem:kcycleplusvertex}, $\abs{\Gamma' \cap \Gamma''} = l+2 \geq 2k-1 > k$, hence $\Gamma' \cup \Gamma'' = \Gamma' \cup P$ is a $k$-subgraph by Lemma~\ref{lem:kintersection}. \end{proof} \begin{cor}\label{cor:2edge} Let $\Gamma$ be a simple connected graph with $n$ vertices such that $n>k$, and assume that $\Gamma$ is not a cycle. Suppose $uv$ is an edge contained in a cycle of $\Gamma$. Then there exists exactly one maximal $k$-subgraph $\Gamma'$ containing the edge $uv$. Furthermore, $\Gamma'$ is a nontrivial $k$-subgraph, and if $\Gamma_{uv}$ is the 2-edge connected component containing $uv$, then $\Gamma_{uv} \subseteq \Gamma'$. \end{cor} \begin{proof} Any connected subgraph of $\Gamma$ with $k+1$ vertices containing the edge $uv$ is a $k$-subgraph. Thus there exists at least one maximal $k$-subgraph $\Gamma'$ containing the edge $uv$. We prove first that $\Gamma'$ is a nontrivial $k$-subgraph, then prove $\Gamma_{uv} \subseteq \Gamma'$, and only after that do we prove that $\Gamma'$ is unique. Assume first that $\Gamma'$ is a trivial $k$-subgraph. If $\Gamma'$ were a cycle, then $\Gamma \setminus \Gamma'$ contains at least one vertex, because $\Gamma'$ is an induced subgraph of $\Gamma$. Then Lemma~\ref{lem:kcycleplusvertex} contradicts the maximality of $\Gamma'$. Thus $\Gamma'$ is a line of $k+1$ vertices. Let $\Gamma_2$ be a shortest cycle containing $uv$. Now, there must exist a vertex in $\Gamma \setminus \Gamma_2$, otherwise either $\Gamma = \Gamma_2$ would be a cycle, or there would exist an edge in $\Gamma \setminus \Gamma_2$ yielding a shorter cycle than $\Gamma_2$ containing the edge $uv$. Let $x_2 \in \Gamma \setminus \Gamma_2$ be a neighbor of a vertex in $\Gamma_2$. By Lemma~\ref{lem:kcycleplusvertex} the induced subgraph on $\Gamma_2 \cup \halmaz{x_2}$ is a $k$-subgraph. Thus $\Gamma' \not\subseteq \Gamma_2$, otherwise $\Gamma'$ would not be a maximal $k$-subgraph. Let $x_1 \in \Gamma' \cap \Gamma_2$ be a vertex such that two of its neighbors are in $\Gamma_2$ and its third neighbor is some $x_2 \in \Gamma' \setminus \Gamma_2$. Note that every vertex in $\Gamma'$ is of distance at most $k-1$ from $x_1$, because $u,v \in \Gamma' \cap \Gamma_2$. 
Thus, if $\abs{\Gamma_2} \geq k+1$, then $\Gamma_2$ together with $x_2$ and the edge $x_1x_2$ is a $k$-subgraph by Lemma~\ref{lem:kcycleplusvertex}, and hence $\Gamma_2 \cup \Gamma'$ is a $k$-subgraph by Lemma~\ref{lem:deg3}, contradicting the maximality of $\Gamma'$. Otherwise, if $\abs{\Gamma_2} \leq k$, then every vertex in $\Gamma_2$ is of distance at most $k-1$ from $x_1$, and hence $\Gamma_2 \cup \Gamma'$ is a $k$-subgraph by Lemma~\ref{lem:deg3}, contradicting the maximality of $\Gamma'$. Therefore $\Gamma'$ is a nontrivial $k$-subgraph. Now we show that the two-edge connected component $\Gamma_{uv} \subseteq \Gamma'$. Let $\Gamma, \Gamma'$ be a counterexample to this such that the number of vertices of $\Gamma_{uv}$ is minimal, and among these counterexamples choose one where the number of edges of $\Gamma_{uv}$ is minimal. Using an ear-decomposition \cite{Robbins1939}, $\Gamma_{uv}$ is either a cycle, or there exists a 2-edge connected subgraph $\Gamma_1 \subseteq \Gamma_{uv}$ and there exists \begin{enumerate} \item\label{it:ear} either a $\Gamma_1$-ear $P$ such that $\Gamma_{uv} = \Gamma_1 \cup P$, \item\label{it:cycle} or a cycle $\Gamma_2$ such that $\abs{\Gamma_1 \cap \Gamma_2} = 1$ and $\Gamma_{uv} = \Gamma_1 \cup \Gamma_2$. \end{enumerate} If $\Gamma_{uv}$ is a cycle containing the edge $uv$, and $\Gamma_{uv} \not \subseteq \Gamma'$, then going along the edges of $\Gamma_{uv}$, one can find a $\Gamma'$-ear $P \subseteq \Gamma_{uv}$. Then $\Gamma' \cup P$ is a $k$-subgraph by Lemma~\ref{lem:kear}, contradicting the maximality of $\Gamma'$. Thus $\Gamma_{uv}$ is not a cycle. Let us choose $\Gamma_1$ from cases~(\ref{it:ear})~and~(\ref{it:cycle}) so that it would have the least number of vertices. Assume first that case~(\ref{it:ear}) holds. By minimality of the counterexample, $\Gamma_1 \subseteq \Gamma'$. If $P \not \subseteq \Gamma'$, then going along the edges of $P$ one can find a $\Gamma'$-ear $P' \subseteq P$. But then $\Gamma' \cup P'$ is a $k$-subgraph by Lemma~\ref{lem:kear}, contradicting the maximality of $\Gamma'$. Assume now that case~(\ref{it:cycle}) holds. Again, by induction, $\Gamma_1 \subseteq \Gamma'$. If $\Gamma_2 \not \subseteq \Gamma'$, then either $\abs{\Gamma' \cap \Gamma_2} = 1$ or going along the edges of $\Gamma_2$ one can find a $\Gamma'$-ear $P' \subseteq \Gamma_2$. The latter case cannot happen, because then $\Gamma' \cup P'$ is a $k$-subgraph by Lemma~\ref{lem:kear}, contradicting the maximality of $\Gamma'$. Thus $\abs{\Gamma' \cap \Gamma_2} = 1$, and hence $\Gamma' \cap \Gamma_2 = \Gamma_1 \cap \Gamma_2$. Let $\Gamma_1 \cap \Gamma_2 = \halmaz{x_1}$, and let $v_1$ be a neighbor of $x_1$ in $\Gamma_1 \setminus \Gamma_2$, and let $v_2$ be a neighbor of $x_1$ in $\Gamma_2 \setminus \Gamma_1$. If $\abs{\Gamma_2} \leq k$, then $\Gamma_2$ can be extended to a connected subgraph of $\Gamma$ having exactly $k+1$ vertices, which is a $k$-subgraph. If $\abs{\Gamma_2} \geq k+1$, then $\Gamma_2 \cup \halmaz{v_1}$ is a $k$-subgraph by Lemma~\ref{lem:kcycleplusvertex}. In any case, there exists a maximal $k$-subgraph $\Gamma_2' \supseteq \Gamma_2$. For notational convenience, let $\Gamma_1'$ denote the maximal $k$-subgraph $\Gamma'$ containing $\Gamma_1$. We prove that $\Gamma_2' = \Gamma_1'=\Gamma'$, thus $\Gamma'$ contains $\Gamma_2$, contradicting that we chose a counterexample. Now, both $\Gamma_1$ and $\Gamma_2$ contain at least two neighbors of $x_1$. 
Let $V_i \subseteq \Gamma_i$ be the set of vertices with distance at most $k-1$ from $x_1$ ($i \in \halmaz{1,2}$). If $\abs{\Gamma_i} \leq k$, then $V_i$ contains all vertices of $\Gamma_i$, otherwise $\abs{V_i} \geq k$ ($i \in \halmaz{1,2}$). By Lemma~\ref{lem:deg3}, the induced subgraph on $V_1$ is contained in $\Gamma_2'$. Thus, if $V_1$ contains all vertices of $\Gamma_1$, then $\Gamma_1 \subseteq \Gamma_2'$, hence we have $\Gamma_1' = \Gamma_2'$. Similarly, the induced subgraph on $V_2$ is contained in $\Gamma_1'$. Thus, if $V_2$ contains all vertices of $\Gamma_2$, then $\Gamma_2 \subseteq \Gamma_1'$, hence we have $\Gamma_1' = \Gamma_2'$. Otherwise, $\abs{\Gamma_1' \cap \Gamma_2'} \geq \abs{V_1} + \abs{V_2} - \abs{\halmaz{x_1}} \geq 2k-1 > k$, hence by Lemma~\ref{lem:kintersection} we have $\Gamma_1' = \Gamma_2'$. Finally, we prove uniqueness. Let $\Gamma'$ and $\Gamma''$ be two maximal $k$-subgraphs containing the edge $uv$. Then both $\Gamma'$ and $\Gamma''$ contain $\Gamma_{uv}$. If $\Gamma = \Gamma_{uv}$, then $\Gamma' = \Gamma_{uv} = \Gamma''$. Otherwise, there exists a vertex $x_2 \in \Gamma \setminus \Gamma_{uv}$ such that it has a neighbor $x_1 \in \Gamma_{uv}$. Note that $x_1$ has degree at least 3 in $\Gamma$. Let $V_1$ be the vertices of $\Gamma$ of distance at most $k-1$ from $x_1$. Note that if $V_1$ does not contain all vertices of $\Gamma$, then $\abs{V_1} > k$. By 2-edge connectivity, $\Gamma_{uv} \subseteq \Gamma'$ contains at least two neighbors of $x_1$, thus $V_1 \subseteq \Gamma'$ by Lemma~\ref{lem:deg3}. Similarly, $\Gamma_{uv} \subseteq \Gamma''$ contains at least two neighbors of $x_1$, thus $V_1 \subseteq \Gamma''$ by Lemma~\ref{lem:deg3}. If $V_1$ contains all vertices of $\Gamma$, then $\Gamma' = \Gamma = \Gamma''$. Otherwise, $\abs{\Gamma' \cap \Gamma''} \geq \abs{V_1} > k$, and $\Gamma' = \Gamma''$ by Lemma~\ref{lem:kintersection}. \end{proof} Recall that by \cite{Robbins1939} a strongly connected antisymmetric digraph becomes a 2-edge connected graph after forgetting the directions. Thus Rhodes's conjecture about strongly connected, antisymmetric digraphs \cite[Conjecture~6.51i~(3)--(4)]{wildbook} follows immediately from the following theorem on 2-edge connected graphs: \begin{thm}\label{thm:defectk} Let $n>k \geq 2$, $\Gamma$ be a $2$-edge connected simple graph having $n$ vertices. If $\Gamma$ is a cycle, then the defect $k$ group is $Z_{n-k}$. If $\Gamma$ is not a cycle, then the defect $k$ group is $S_{n-k}$. \end{thm} \begin{proof} If $\Gamma$ is a cycle, then its defect $k$ group is $Z_{n-k}$ by Lemma~\ref{lem:cycle}. Since $\Gamma$ is 2-edge connected with at least 3 vertices, every edge of $\Gamma$ is contained in a cycle. Thus, if $\Gamma$ is not a cycle, then the defect $k$ group is $S_{n-k}$ by Corollary~\ref{cor:2edge}. \end{proof} The final part of this section is devoted to prove item~(\ref{it:defectkmain}) of Theorem~\ref{thm:main}. First, we define bridges in $\Gamma$: \begin{dfn} A path $\left( x_1, \dots, x_l \right)$ in a connected graph $\Gamma$ for some $l \geq 2$ is called a \textit{bridge} if the degree of $x_i$ in $\Gamma$ is $2$ for all $2 \leq i\leq l-1$, and if $\Gamma \setminus \halmaz{x_jx_{j+1}}$ is disconnected for all $1 \leq j\leq l-1$. The \emph{length} of the bridge $\left( x_1, \dots, x_l \right)$ is $l$. \end{dfn} The intersection of maximal $k$-subgraphs turn out to be bridges: \begin{lem}\label{lem:kcomp3} Let $\Gamma_1$ and $\Gamma_2$ be distinct maximal $k$-subgraphs of the connected simple graph $\Gamma$. 
Assume that $\Gamma$ is not a cycle. Then $\Gamma_1 \cap \Gamma_2$ is either empty, or is a bridge $(x_1, \dots , x_l)$ such that \begin{enumerate} \item\label{it:lk} $l \leq k$, and \item\label{it:x_1Gamma_1} if $l\geq 2$ and $\Gamma_i \setminus \halmaz{x_1, \dots, x_l}$ ($i \in \halmaz{1,2}$) contains a neighbor of $x_1$ (resp.\ $x_l$), then $\Gamma_i$ contains all neighbors of $x_1$ (resp.\ $x_l$), \end{enumerate} \end{lem} \begin{proof} Note that $\Gamma_1$ and $\Gamma_2$ are induced subgraphs of $\Gamma$, thus so is $\Gamma_1 \cap \Gamma_2$. We prove first that $\Gamma_1 \cap \Gamma_2$ is connected (or empty) if $\Gamma_1$ is a nontrivial maximal $k$-subgraph. Suppose that $u, v \in \Gamma_1 \cap \Gamma_2$ are in different components of $\Gamma_1\cap\Gamma_2$ such that the distance between $u$ and $v$ is minimal in $\Gamma_2$. Due to the minimality, there exists a path $(u, x_1, \dots , x_l, v)$ such that $x_1, \dots, x_l \in \Gamma_2 \setminus \Gamma_1$. Then $P = (u, x_1, \dots, x_l, v)$ is a $\Gamma_1$-ear, and $\Gamma_1 \cup P$ would be a $k$-subgraph by Lemma~\ref{lem:kear}, contradicting the maximality of $\Gamma_1$. Thus $\Gamma_1 \cap \Gamma_2$ is connected. One can prove similarly that $\Gamma_1 \cap \Gamma_2$ is connected if $\Gamma_2$ is a nontrivial maximal $k$-subgraph. Now we prove that $\Gamma_1 \cap \Gamma_2$ is connected, even if both $\Gamma_1$ and $\Gamma_2$ are trivial maximal $k$-subgraphs. As $\Gamma_1 \subsetneqq \Gamma$, $\Gamma_1$ cannot be a cycle hence must be a line $(x_1, \ldots, x_{k+1})$. Note that the degree of $x_i$ in $\Gamma$ for $2 \leq i \leq k$ must be 2, otherwise a nontrivial maximal $k$-subgraph would contain $x_i$, and thus also $\Gamma_1$ by Corollary~\ref{cor:kdeg3}. In particular, if $\Gamma_1 \cap \Gamma_2$ is not connected, then $x_1, x_{k+1} \in \Gamma_1 \cap \Gamma_2$, $x_i \notin \Gamma_1 \cap \Gamma_2$ for some $2 \leq i\leq k$, and $\Gamma_1 \cup \Gamma_2$ would be a cycle. However, by Corollary~\ref{cor:2edge}, the edge $x_1x_2$ is contained in a unique nontrivial maximal $k$-subgraph, contradicting that it is also contained in the trivial maximal $k$-subgraph $\Gamma_1$. Now, we prove (\ref{it:lk}). By Corollary~\ref{cor:2edge}, $\Gamma_1 \cap \Gamma_2$ cannot contain any edge $uv$ which is contained in a cycle. As $\Gamma_1 \cap \Gamma_2$ is connected, it must be a tree. However, $\Gamma_1 \cap \Gamma_2$ cannot contain any vertex of degree at least 3 in $\Gamma_1 \cap \Gamma_2$, otherwise that vertex would be contained in a unique maximal $k$-subgraph by Corollary~\ref{cor:kdeg3}. Thus $\Gamma_1 \cap \Gamma_2$ is a path $(x_1, \dots , x_l)$. Now, $l\leq k$ by Lemma~\ref{lem:kintersection}, proving (\ref{it:lk}). Note that if any $x_i$ ($2\leq i\leq l-1$) is of degree at least 3 in $\Gamma$, then $\halmaz{x_{i-1}, x_i, x_{i+1}}$ is contained in a unique maximal $k$-subgraph by Corollary~\ref{cor:kdeg3}, a contradiction. For (\ref{it:x_1Gamma_1}) observe that at least two neighbors of $x_1$ (resp.\ $x_l$) are in $\Gamma_i$, and thus all its neighbors must be in $\Gamma_i$ by Corollary~\ref{cor:kdeg3}. Finally, if $l\geq 2$ then $\Gamma \setminus \halmaz{x_jx_{j+1}}$ is disconnected for all $1\leq j\leq l-1$ follows immediately from Corollary~\ref{cor:2edge} and the fact that any edge that is not contained in any cycle disconnects the graph $\Gamma$. 
\end{proof} Edges of short maximal bridges (having length at most $k-1$) are contained in a unique maximal $k$-subgraph: \begin{lem}\label{lem:longbridgeedge} Let $\Gamma$ be a simple connected graph with $n$ vertices such that $n>k$, and let $uv$ be an edge which is not contained in any cycle. Let $(x_1, \dots , x_l)$ be a longest bridge containing the edge $uv$. If $l\leq k-1$, then $uv$ is contained in a unique maximal $k$-subgraph $\Gamma'$, and furthermore, $\Gamma'$ is a nontrivial $k$-subgraph. \end{lem} \begin{proof} As $uv$ is not part of any cycle in $\Gamma$, $uv$ is a bridge of length 2. Note that a longest bridge $(x_1, \dots , x_l)$ containing $uv$ is unique, because as long as the degree of at least one of the path's end vertices is 2 in $\Gamma$, the path can be extended in that direction. The obtained path is the unique longest bridge containing $uv$. Let $\Gamma'$ be a maximal $k$-subgraph containing $uv$, and assume $l\leq k-1$. Note that the distance of $x_1$ and $x_l$ is $l-1 \leq k-2$. As $\abs{\Gamma} \geq k+1$, at least one of $x_1$ and $x_l$ has degree at least 3 in $\Gamma$, say $x_1$. We distinguish two cases according to the degree of $x_l$. Assume first that $x_l$ is of degree 1. As $\Gamma'$ is a connected subgraph having at least $k+1$ vertices, $\Gamma'$ must contain $x_1$ and at least two of its neighbors. Then by Corollary~\ref{cor:kdeg3} it contains all vertices of $\Gamma$ of distance at most $k-1$ from $x_1$. In particular, $\Gamma'$ must contain the bridge $(x_1, \dots , x_l)$. However, there is a unique (nontrivial) maximal $k$-subgraph $\Gamma_1'$ containing $x_1$ and two of its neighbors by Corollary~\ref{cor:kdeg3}, and thus $\Gamma' = \Gamma_1'$ is that unique maximal $k$-subgraph. Assume now that $x_l$ is of degree at least 3. As $\Gamma'$ is a connected subgraph having at least $k+1$ vertices, $\Gamma'$ must contain $x_1$ and at least two of its neighbors, or $x_l$ and at least two of its neighbors. If $\Gamma'$ contains $x_1$ and at least two of its neighbors, then by Corollary~\ref{cor:kdeg3} it contains all vertices of $\Gamma$ of distance at most $k-1$ from $x_1$. In particular, $\Gamma'$ must contain the bridge $(x_1, \dots , x_l)$ and all of the neighbors of $x_l$. Similarly, one can prove that if $\Gamma'$ contains $x_l$ and two of its neighbors, then it also contains the bridge $(x_1, \dots , x_l)$ and all of the neighbors of $x_1$. However, there is a unique (nontrivial) maximal $k$-subgraph $\Gamma_1'$ containing $x_1$ and two of its neighbors by Corollary~\ref{cor:kdeg3}, and also a unique (nontrivial) maximal $k$-subgraph $\Gamma_l'$ containing $x_l$ and two of its neighbors by Corollary~\ref{cor:kdeg3}. Therefore $\Gamma'$ must equal to both $\Gamma_1'$ and $\Gamma_l'$, and hence is unique. \end{proof} In particular, in non-cycle graphs trivial maximal $k$-subgraphs or intersections of two different maximal $k$-subgraphs consist of edges that are contained in long bridges (having length at least $k$). 
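Before turning to the key lemma on defect sets, the following short script provides a brute-force sanity check of Theorem~\ref{thm:defectk} on toy examples. It is only an illustrative sketch and rests on two conventions that are assumptions of ours rather than restatements of this section: the elementary collapsing $e_{uv}$ is taken to map $u$ to $v$ and to fix every other vertex (with both orientations of every edge used as generators of $S_\Gamma$), and the defect $k$ group for a defect set $V_k$ is realized as the set of permutations induced on $V \setminus V_k$ by those elements of $S_\Gamma$ whose image is $V \setminus V_k$ and which map $V \setminus V_k$ onto itself.
\begin{verbatim}
# Brute-force check of the defect k group on small graphs (illustration only).
def flow_semigroup(n, edges):
    gens = []
    for u, v in edges:
        for a, b in ((u, v), (v, u)):     # e_ab sends a to b, fixes the rest
            gens.append(tuple(b if x == a else x for x in range(n)))
    sg, frontier = set(gens), set(gens)
    while frontier:                        # close the generators under composition
        new = set()
        for s in frontier:
            for g in gens:
                t = tuple(g[s[x]] for x in range(n))   # apply s, then g
                if t not in sg:
                    new.add(t)
        sg |= new
        frontier = new
    return sg

def defect_group(n, edges, defect_set):
    rest = [x for x in range(n) if x not in defect_set]
    group = set()
    for s in flow_semigroup(n, edges):
        if set(s) == set(rest) and len({s[x] for x in rest}) == len(rest):
            group.add(tuple(s[x] for x in rest))   # permutation induced on V \ V_k
    return group

cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # C_5
chord = cycle + [(0, 2)]          # C_5 with a chord: 2-edge connected, not a cycle
print(len(defect_group(5, cycle, {0, 1})))         # expected 3 = |Z_{5-2}|
print(len(defect_group(5, chord, {0, 1})))         # expected 6 = |S_{5-2}|
\end{verbatim}
Under the stated conventions the two printed orders should be $3$ and $6$, matching $Z_{n-k}$ and $S_{n-k}$ for $n=5$ and $k=2$; the closure computation is exponential in general and is intended only for such small examples.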
The key observation in proving item~(\ref{it:defectkmain}) of Theorem~\ref{thm:main} is that a defect $k$ group cannot move a vertex across a bridge of length at least $k$: \begin{lem}\label{lem:bridge} Let $2\leq k\leq l$, let $\Gamma_1$ and $\Gamma_2$ be disjoint connected subgraphs of the connected graph $\Gamma$, and let $\left( x_1, x_2, \dots, x_l \right)$ be a bridge in $\Gamma$ such that $x_1, \dots, x_{l} \notin \Gamma_1 \cup \Gamma_2$, $x_1$ has all its neighbors in $\Gamma_1$ (except for $x_2$), and $x_l$ has all its neighbors in $\Gamma_2$ (except for $x_{l-1}$). Assume that $\Gamma$ has no vertices other than those of $\Gamma_1 \cup \Gamma_2 \cup \halmaz{ x_1, \dots, x_l }$. Let the defect set be $V_k = \halmaz{x_1, \dots , x_k}$. Then for any $u \in \Gamma_1$ and $v \in \Gamma_2$ there does not exist any permutation in $G_{k,V_k}$ which moves $u$ to $v$. \end{lem} \begin{proof} Let $S = S_{\Gamma}$. Assume that there exists $u \in \Gamma_1$, $v \in \Gamma_2$, and a transformation $g \in S$ of defect $V_k$ such that $g \restriction_{V \setminus V_k} \in G_{k,V_k}$ and $ug=v$. Let $s_0 \in G_{k,V_k}$ be the unique idempotent power of $g$, that is, $s_0$ is a transformation of defect $V_k$ that acts as the identity on $\Gamma \setminus V_k$. Then there exists a series of elementary collapsings $e_1, \dots , e_m$ such that $g = e_1 \dots e_m$. For every $1 \leq d \leq m$ let $s_d = s_0e_1 \dots e_d$. Now, $s_m = s_0 e_1 \dots e_m = s_0 g = gs_0 = g$. In particular, both $s_m$ and $s_0$ are of defect $k$, hence $s_d$ is of defect $k$ for all $1\leq d \leq m$. Consequently, $\abs{\Gamma_1 s_d} = \abs{\Gamma_1}$, $\abs{\Gamma_2 s_d} = \abs{\Gamma_2}$ and $\Gamma_1 s_d \cap \Gamma_2 s_d = \emptyset$ for all $1\leq d \leq m$. For an arbitrary $s \in S$, let \begin{align*} i(s) &= \begin{cases} 0, & \text{if } \Gamma_1 s \subseteq \Gamma_1, \\ l+1, & \text{if } \Gamma_1 s \not\subseteq \Gamma_1 \cup \halmaz{x_1, \dots , x_l}, \\ \displaystyle{\min\halmaz{1\leq i \leq l \colon \Gamma_1 s \subseteq \Gamma_1 \cup \halmaz{x_1, \dots , x_i}}}, & \text{otherwise}. \end{cases} \intertext{Similarly, let} j(s) &= \begin{cases} l+1, & \text{if } \Gamma_2 s \subseteq \Gamma_2, \\ 0, & \text{if } \Gamma_2 s \not\subseteq \Gamma_2 \cup \halmaz{x_1, \dots , x_l}, \\ \displaystyle{\max\halmaz{1\leq j \leq l \colon \Gamma_2 s \subseteq \Gamma_2 \cup \halmaz{x_j, \dots , x_l}}}, & \text{otherwise}. \end{cases} \end{align*} Note that for arbitrary $s \in S$ and elementary collapsing $e$, we have $\abs{i(s)-i({se})} \leq 1$ and $\abs{j(s)-j({se})} \leq 1$. Furthermore, $\abs{i(s_d)-i({s_de})} = 1$ and $\abs{j(s_d)-j({s_de})}=1$ cannot both happen at the same time for any $1\leq d \leq m$, because then the vertex collapsed by $e$ would belong to both $\Gamma_1 s_d$ and $\Gamma_2 s_d$, contradicting $\Gamma_1 s_d \cap \Gamma_2 s_d = \emptyset$. For $s_0$ we have $i({s_0}) = 0 < l+1 = j({s_0})$, and for $s_m$ we have $i({s_m}) = l+1 \geq j({s_m})$. Let $1 \leq d \leq m$ be minimal such that $i({s_d}) \geq j({s_d})$. Then $i({s_{d-1}}) < j({s_{d-1}})$. From $s_{d-1}$ to $s_d$ either $i$ or $j$ can change and by at most 1, thus $i({s_d}) = j({s_d})$. If $i({s_d}) = j({s_d}) \in \halmaz{1, \dots , l}$, then $x_{i(s_d)} \in \Gamma_1 s_d \cap \Gamma_2 s_d$, contradicting $\Gamma_1 s_d \cap \Gamma_2 s_d = \emptyset$. Thus $i({s_d}) = j({s_d}) \notin \halmaz{1, \dots , l}$. Assume $i(s_d) = j(s_d) = l+1$; the case $i(s_d) = j(s_d) = 0$ can be handled similarly. Now, $j(s_d) = l+1$ yields $\Gamma_2 s_d \subseteq \Gamma_2$. Furthermore, $\abs{\Gamma_2 s_d} = \abs{\Gamma_2}$, thus $\Gamma_2 s_d = \Gamma_2$.
From $i(s_d) = l+1$ we have $\Gamma_1 s_d \cap \Gamma_2 \neq \emptyset$. Thus $\Gamma_1 s_d \cap \Gamma_2 s_d = \Gamma_1 s_d \cap \Gamma_2 \neq \emptyset$, a contradiction. \end{proof} \begin{cor}\label{cor:dir} Let $\Gamma_1$ and $\Gamma_2$ be connected subgraphs of $\Gamma$ such that $\Gamma_1 \cap \Gamma_2$ is a length $k$ bridge in $\Gamma$. Let $V_k = \Gamma_1 \cap \Gamma_2$ be the defect set. Let $G_i$ be the defect $k$ group of $\Gamma_i$ ($i \in \halmaz{1,2}$), and let $G$ be the defect $k$ group of $\Gamma_1 \cup \Gamma_2$. Then \[ G = G_1 \times G_2. \] \end{cor} \begin{proof} By Lemma~\ref{lem:simple} we have $G_1, G_2 \leq G$. Since $G_1$ and $G_2$ act on disjoint vertices, their elements commute. Thus $G_1\times G_2 \leq G$. Now, $V_k$ is a bridge of length $k$, thus by Lemma~\ref{lem:bridge} (applied to the disjoint subgraphs $\Gamma_1 \setminus V_k$ and $\Gamma_2 \setminus V_k$) there exists no element of $G$ moving a vertex from $\Gamma_1 \setminus V_k$ to $\Gamma_2 \setminus V_k$ or vice versa. Therefore $G \leq G_1\times G_2$. \end{proof} Finally, we are ready to prove item~(\ref{it:defectkmain}) of Theorem~\ref{thm:main}. \begin{proof}[Proof of item~(\ref{it:defectkmain}) of Theorem~\ref{thm:main}.] If $\Gamma$ is a cycle, then its defect $k$ group is $Z_{n-k}$ by Lemma~\ref{lem:cycle}. Otherwise, we prove the theorem by induction on the number of maximal $k$-subgraphs of $\Gamma$. If $\Gamma$ is a maximal $k$-subgraph, then the theorem holds, and the defect $k$ group of $\Gamma$ is $S_{n-k}$. In the following we assume that $\Gamma$ contains $m$ maximal $k$-subgraphs for some $m\geq 2$, and that the theorem holds for all graphs with at most $m-1$ maximal $k$-subgraphs. We consider two cases. Assume first that there exists a degree 1 vertex $x_1 \in \Gamma$ such that there exists a path $(x_1, \dots, x_{k+1})$ which is a bridge. Let $\Gamma_1$ be the path $(x_1, \dots, x_{k+1})$, and let $\Gamma_2$ be $\Gamma \setminus \halmaz{x_1}$. Now, $\Gamma_1$ is a trivial maximal $k$-subgraph, hence $\Gamma_2$ contains the same maximal $k$-subgraphs as $\Gamma$ except $\Gamma_1$. Furthermore, $\Gamma_2$ is connected, and cannot be a cycle because the degree of $x_2$ in $\Gamma_2$ is 1. Let the sizes of the maximal $k$-subgraphs of $\Gamma_2$ be $n_2, \dots , n_m$; then by induction the defect $k$ group of $\Gamma_2$ is $S_{n_2-k} \times \dots \times S_{n_m-k}$. The size of $\Gamma_1$ is $n_1 = k+1$, and its defect $k$ group is $S_{n_1-k}$. Furthermore, $\Gamma_1 \cap \Gamma_2$ is a bridge of length $k$. By Corollary~\ref{cor:dir} the defect $k$ group of $\Gamma$ is $S_{n_1-k} \times S_{n_2-k} \times \dots \times S_{n_m-k}$. In the second case, no degree 1 vertex $x_1$ is in a path $(x_1, \dots , x_{k+1})$ which is a bridge. Then any maximal bridge $(x_1, \dots, x_l)$ with a degree 1 vertex $x_1$ has length $l \leq k$, and, as the bridge cannot be extended, $x_l$ must have degree at least 3. Moreover, $(x_1, \dots , x_l)$ lies in a maximal $k$-subgraph containing $x_l$ and all its neighbors by Lemma~\ref{lem:longbridgeedge}~and~Corollary~\ref{cor:kdeg3}. In particular, every bridge in $\Gamma$ of length at least $k+1$ occurs between nodes of degree at least 3. Hence every bridge of length at least $k+1$ occurs between two nontrivial maximal $k$-subgraphs by Corollary~\ref{cor:kdeg3}. For every vertex $v$ having degree at least 3 in $\Gamma$, let $\Gamma_v$ be the unique maximal $k$-subgraph containing $v$ and all its neighbors (Corollary~\ref{cor:kdeg3}). By definition, these are all the nontrivial maximal $k$-subgraphs of $\Gamma$.
Let $\Gamma^k$ be the graph whose vertices are the nontrivial maximal $k$-subgraphs, and $\Gamma_u\Gamma_v$ is an edge in $\Gamma^k$ (for $\Gamma_u \neq \Gamma_v$) if and only if there exists a bridge in $\Gamma$ between a vertex $u' \in \Gamma_u$ of degree at least 3 in $\Gamma_u$ and a vertex $v' \in \Gamma_v$ of degree at least 3 in $\Gamma_v$. By Corollary~\ref{cor:2edge}, $\Gamma_u = \Gamma_v$ if $u$ and $v$ are in the same 2-edge connected component. As the 2-edge connected components of $\Gamma$ form a tree, the graph $\Gamma^k$ is a tree. Now, $\Gamma^k$ has $m$ vertices. Let $\Gamma_1$ be a leaf in $\Gamma^k$, and let $\Gamma_m$ be its unique neighbor in $\Gamma^k$. Let $x_1 \in \Gamma_1$ and $x_l \in \Gamma_m$ be the unique vertices of degree at least 3 in $\Gamma_1$ and in $\Gamma_m$, respectively, such that there exists a bridge $P = ( x_1, \dots, x_l)$ in $\Gamma$. Note that the length of $P$ is at least $k$, otherwise $\Gamma_1 = \Gamma_m$ would follow by Lemma~\ref{lem:longbridgeedge}. Furthermore, any other bridge having an endpoint in $\Gamma_1$ must be of length at most $k$, because every degree 1 vertex is of distance at most $k-1$ from a vertex of degree at least 3. Thus every bridge other than $P$ and having an endpoint in $\Gamma_1$ is a subset of $\Gamma_1$ by Corollary~\ref{cor:kdeg3}. Let $\Gamma_2 = \left( \Gamma \setminus \Gamma_1 \right) \cup P$. Now, $\Gamma_1$ is a maximal $k$-subgraph, and $\Gamma_2$ has one fewer maximal $k$-subgraph than $\Gamma$. Furthermore, $\Gamma_2$ is connected, because every bridge other than $P$ and having an endpoint in $\Gamma_1$ is a subset of $\Gamma_1$. Finally, $\Gamma_2$ is not a cycle, because it contains the vertex $x_1$ which is of degree 1 in $\Gamma_2$. Let the sizes of the maximal $k$-subgraphs of $\Gamma_2$ be $n_2, \dots , n_m$; then by induction the defect $k$ group of $\Gamma_2$ is $S_{n_2-k} \times \dots \times S_{n_m-k}$. Let the size of $\Gamma_1$ be $n_1$; its defect $k$ group is $S_{n_1-k}$. Furthermore, $\Gamma_1 \cap \Gamma_2$ is a bridge of length $k$. By Corollary~\ref{cor:dir} the defect $k$ group of $\Gamma$ is $S_{n_1-k} \times S_{n_2-k} \times \dots \times S_{n_m-k}$. \end{proof} \section{An algorithm to calculate the defect $k$ group}\label{sec:algorithm} Note that by items~(\ref{it:graph})~and~(\ref{it:2-vertex}) of Theorem~\ref{thm:main} the defect 1 group can be trivially computed in $O \left( \abs{E} \right)$ time by first determining the 2-vertex connected components \cite{Tarjan2vertex} and then checking whether each is a cycle, the exceptional graph (Figure~\ref{fig:exceptionalgraph}), or, if neither, whether or not it is bipartite. For $k \geq 2$ one can check first if $\Gamma$ is a cycle (and then the defect group is $Z_{n-k}$) or a path (and then the defect group is trivial). In the following, we give a linear algorithm (running in $O\left( \abs{E} \right)$ time) to determine the maximal $k$-subgraphs ($k \geq 2$) of a connected graph $\Gamma$ having $n$ vertices and $\abs{E}$ edges in which at least one vertex is of degree at least 3. During the algorithm we color the vertices. Let us call a maximal subgraph with vertices having the same color a \emph{monochromatic component}. First, one finds all 2-edge connected components and the tree of 2-edge connected components in $O \left( \abs{E} \right)$ time using e.g.~\cite{Tarjan2edge}.
Color the vertices of the nontrivial (i.e.\ having size greater than 1) 2-edge connected components such that two distinct vertices have the same color if and only if they are in the same nontrivial 2-edge connected component. Furthermore, color the uncolored vertices having degree at least 3 by different colors from each other and from the colors of the 2-edge connected components. Then the monochromatic components are each contained in a unique nontrivial maximal $k$-subgraph by Corollaries~\ref{cor:kdeg3}~and~\ref{cor:2edge} (a nontrivial maximal $k$-subgraph may contain more than one of these monochromatic components). Furthermore, the monochromatic components and the degree 1 vertices are connected by bridges. If any of the bridges connecting two monochromatic components is of length at most $k-1$, then recolor the two monochromatic components at the ends of the bridge and the vertices of the bridge by the same color, because these are contained in the same maximal $k$-subgraph by Corollary~\ref{cor:kdeg3}. Similarly, if any of the bridges connecting a monochromatic component and a degree 1 vertex is of length at most $k-1$, then recolor the monochromatic component and the vertices of the bridge by the same color, because these are contained in the same maximal $k$-subgraph by Lemma~\ref{lem:longbridgeedge}. Repeat recoloring along all bridges of length at most $k-1$ in $O \left( \abs{E} \right)$ time. Then we obtain monochromatic components $\Gamma_1, \dots , \Gamma_l$ connected by long bridges (i.e.\ bridges of length at least $k$), and possibly some long bridges to degree 1 vertices. Now, we have finished coloring. For every $1\leq i\leq l$, let $\Gamma_i'$ be the induced subgraph having all vertices of distance at most $k-1$ from $\Gamma_i$, which can be obtained in $O\left( \abs{E} \right)$ time by adding the appropriate $k-1$ vertices of the long bridges to the appropriate monochromatic component. Note that the obtained induced subgraphs are not necessarily disjoint. Then $\Gamma_1', \dots , \Gamma_l'$ are the nontrivial maximal $k$-subgraphs of $\Gamma$ by Lemma~\ref{lem:longbridgeedge}. Again, by Lemma~\ref{lem:longbridgeedge}, the trivial maximal $k$-subgraphs of $\Gamma$ are the paths containing exactly $k+1$ vertices in a long bridge. These can also be computed in $O \left( \abs{E} \right)$ time by going through all long bridges. By item~(\ref{it:defectkmain}) of Theorem~\ref{thm:main}, the defect $k$ group of $\Gamma$ as a permutation group is the direct product of the defect $k$ groups of $\Gamma_1', \dots \Gamma_l'$, and the defect $k$ groups of the trivial maximal $k$-subgraphs. \section{Complexity of the flow semigroup of (di)graphs}\label{sec:cpx} In this section we apply our results and the complexity lower bounds of \cite{ComplexityLowerBounds} to verify \cite[Conjecture~6.51i (1)]{wildbook} for 2-vertex connected graphs. That is, we prove that the Krohn--Rhodes (or group-) complexity of the flow semigroup of a 2-vertex connected graph with $n$ vertices is $n-2$ (item~\ref{it:compl2vertex} of Theorem~\ref{thm:main}). Then we derive item~\ref{it:compl2edge} of Theorem~\ref{thm:main} as a further consequences of our results. For standard definitions on wreath product of semigroups, we refer the reader to e.g.~\cite[Definition~2.2]{wildbook}. A finite semigroup $S$ is called \emph{combinatorial} if and only if every maximal subgroup of $S$ has one element. 
Recall that the \emph{Krohn--Rhodes (or group-) complexity of a finite semigroup $S$} (denoted by $\cpx{S}$) is the smallest non-negative integer $n$ such that $S$ is a homomorphic image of a subsemigroup of the iterated wreath product \[ C_n \wr G_n \wr \dots \wr C_1 \wr G_1 \wr C_0, \] where $G_1, \dots , G_n$ are finite groups, $C_0, \dots , C_n$ are finite combinatorial semigroups, and $\wr$ denotes the wreath product (for the precise definition, see e.g.~\cite[Definition~3.13]{wildbook}). The definition immediately implies that if a finite semigroup $S$ is the homomorphic image of a subsemigroup of $T$, then $\cpx{S} \leq \cpx{T}$. More can be found on the complexity of semigroups in e.g.~\cite[Chapter~3]{wildbook}. We need the following results on the complexity of semigroups. \begin{lem}[{\cite[Prop.~6.49(b)]{wildbook}}]\label{lem:K_n} The flow semigroup $K_n$ of the complete graph on $n \geq 2$ vertices has $\cpx{K_n} = n-2$. \end{lem} \begin{lem}[{\cite[Sec.~3.7]{ComplexityLowerBounds}}]\label{lem:F_n} The complexity of the full transformation semigroup $F_n$ on $n$ points is $\cpx{F_n}=n-1$. \end{lem} The well-known \emph{$\mathcal{L}$-order} is a pre-order, i.e.\ a transitive and reflexive binary relation, on the elements of a semigroup $S$ given by $s_1 \succeq_{\mathcal{L}} s_2$ if $s_1 = s_2$ or $ss_1 = s_2$ for some $s \in S$. The \emph{$\mathcal{L}$-classes} of $S$ are the equivalence classes of the $\mathcal{L}$-order. The $\mathcal L$-classes are thus partially ordered by $L_1 \succeq_{\mathcal{L}} L_2$ if and only if $SL_1 \cup L_1 \supseteq SL_2 \cup L_2$. One says that a finite semigroup $S$ is a \emph{$T_1$-semigroup} if it is generated by some $\succeq_{\mathcal{L}}$-chain of its $\mathcal{L}$-classes, i.e.\ if there exist $\mathcal L$-classes $L_1 \succeq_{\mathcal{L}} \dots \succeq_{\mathcal{L}} L_m$ of $S$ such that $S = \langle L_1 \cup \ldots \cup L_m \rangle$. Equivalently, $S$ is a $T_1$-semigroup if there exist subsets $U_i \subseteq L_i$ ($1 \leq i \leq m$) for such a chain of $\mathcal L$-classes of $S$ with $S= \langle U_1 \cup \ldots \cup U_m \rangle$. \begin{lem}[{\cite[Lemma 3.5(b)]{ComplexityLowerBounds}}]\label{lem:LB} Let $S$ be a noncombinatorial $T_1$-semigroup. Then \[ \cpx{S} \geq 1 + \cpx{EG(S)}, \] where $EG(S)$ is the subsemigroup of $S$ generated by all its idempotents. \end{lem} Now we prove \cite[Conjecture~6.51i (1)]{wildbook} for 2-vertex connected graphs. \begin{proof}[Proof of item~\ref{it:compl2vertex} of Theorem~\ref{thm:main}] Let $\Gamma$ be a 2-vertex connected simple graph with $n \geq 2$ vertices. Let $K_n$ denote the flow semigroup of the complete graph on the vertex set $V$, where $\abs{V} = n$. Then $\cpx{S_{\Gamma}} \leq \cpx{K_n} = n-2$ by Lemma~\ref{lem:K_n}. We proceed by induction on $n$. If $n \leq 3$, then $\Gamma$ is a complete graph, and $\cpx{S_{\Gamma}}=n-2$ by Lemma~\ref{lem:K_n}. From now on we assume $n>3$ and $\Gamma = (V, E)$. \textbf{Case 1.} Assume first that $\Gamma$ is not a cycle. Let $(u,v)$ and $(x,y)$ be two disjoint edges in $\Gamma$. Let $G_1$ be the defect~1 group with defect set $\halmaz{u}$ and idempotent $e_{uv}$ as its identity element. Then $e_{uv} \succeq_{\mathcal{L}} e_{xy}e_{uv} = e_{uv}e_{xy}$. Let $T$ be $\left\langle G_1 \cup \halmaz{e_{uv}e_{xy}} \right\rangle$. Since $G_1 \succeq_{\mathcal{L}} \halmaz{e_{uv}e_{xy}}$ is an ${\mathcal{L}}$-chain in $T$, $T$ is a $T_1$-semigroup. Furthermore, $T$ is noncombinatorial since $G_1$ is nontrivial.
Thus, by Lemma~\ref{lem:LB} \begin{equation}\label{eq:LB1} \cpx{T} \geq 1 + \cpx{EG(T)}. \end{equation} Let $\Gamma'$ be the complete graph on $V\setminus\halmaz{u}$. Let $a,b\in V\setminus\halmaz{u}$ be arbitrary distinct vertices. By item~(\ref{it:2-vertex}) of Theorem~\ref{thm:main}, $G_1$ is 2-transitive. Let $\pi \in G_1$ be such that $\pi(x)=a$ and $\pi(y)=b$. There is a positive integer $\omega>1$ with $\pi^{\omega}=e_{uv}$. In particular, $e_{uv}$ commutes with $\pi$. Observe that \begin{align*} \pi^{\omega-1} e_{uv}e_{xy} \pi = e_{uv} \left( \pi^{\omega-1} e_{xy} \pi \right) &= e_{uv} e_{ab}, \text{ and thus} \\ \left(\pi^{\omega-1} e_{uv}e_{xy} \pi\right)\restriction_{V\setminus\halmaz{u}} &= e_{ab}. \end{align*} That is, we obtain the generators $e_{ab}$ of $S_{\Gamma'}$ by restricting the idempotents $e_{uv} e_{ab} \in T$ to $V \setminus \halmaz{u}$. Therefore, $S_{\Gamma'}$ is a homomorphic image of a subsemigroup of $EG(T)$, yielding \[ \cpx{EG(T)}\geq \cpx{S_{\Gamma'}}. \] By induction, $\cpx{S_{\Gamma'}}=n-3$. Applying \eqref{eq:LB1}, we obtain $\cpx{T}\geq n-2$. Since $T$ is a subsemigroup of $S_\Gamma$, we obtain $\cpx{S_\Gamma} \geq \cpx{T}\geq n-2$. \textbf{Case 2.} Assume now that $\Gamma$ is the $n$-node cycle $(u, v_1,\dots,v_{n-1})$. Then $(u,v_1)$ and $(v_2,v_3)$ are disjoint edges. Let $G_1 \simeq Z_{n-1}$ be the defect~1 group with defect set $\halmaz{u}$ and idempotent $e_{uv_1}$ as its identity element. Let $\pi$ be a generator of $G_1$ with cycle structure $\left( v_1, \dots , v_{n-1}\right)$. Then $e_{uv_1} \succeq_{\mathcal{L}} e_{v_2v_3}e_{uv_1} = e_{uv_1} e_{v_2v_3}$. Let $T$ be $\left\langle G_1 \cup \halmaz{e_{uv_1}e_{v_2v_3}} \right\rangle$. Since $G_1 \succeq_{\mathcal{L}} \halmaz{e_{uv_1}e_{v_2v_3}}$ is an ${\mathcal{L}}$-chain in $T$, $T$ is a $T_1$-semigroup. Furthermore, $T$ is noncombinatorial since $G_1$ is nontrivial. Thus, by Lemma~\ref{lem:LB} \begin{equation}\label{eq:LB2} \cpx{T} \geq 1 + \cpx{EG(T)}. \end{equation} Let $\Gamma'$ be an $(n-1)$-node cycle with nodes $V \setminus \halmaz{u} = \halmaz{v_1, \dots, v_{n-1}}$. Note that $e_{uv_1} = \pi^{n-1}$, and therefore $e_{u v_1}$ commutes with $\pi$. Let $v_{i-1}, v_i, v_{i+1} \in V\setminus\halmaz{u}$ be three neighboring nodes in $\Gamma'$, where the indices are in $\halmaz{1,\dots,n-1}$ taken modulo $n-1$. Observe that \begin{align*} \pi^{n-2} e_{uv_1} e_{v_{i-1}v_i }\pi = e_{uv_1} \left(\pi^{n-2} e_{v_{i-1}v_i } \pi \right) &= e_{uv_1} e_{v_i v_{i+1}}, \text{ and thus} \\ \left(\pi^{n-2} e_{uv_1} e_{v_{i-1}v_i }\pi\right)\restriction_{V\setminus\halmaz{u}} &= e_{v_i v_{i+1}}. \end{align*} That is, we obtain the generators $e_{v_i v_{i+1}}$ of $S_{\Gamma'}$ by restricting the idempotents $e_{u v_1} e_{v_i v_{i+1}} \in T$ to $V \setminus \halmaz{u}$. Therefore, $S_{\Gamma'}$ is a homomorphic image of a subsemigroup of $EG(T)$, yielding \[ \cpx{EG(T)}\geq \cpx{S_{\Gamma'}}. \] By induction, $\cpx{S_{\Gamma'}}=n-3$. Applying \eqref{eq:LB2}, we obtain $\cpx{T}\geq n-2$. Since $T$ is a subsemigroup of $S_\Gamma$, we have $\cpx{S_\Gamma} \geq \cpx{T}\geq n-2$. \end{proof} Note that by Lemma~\ref{lem:reverseedge} a strongly connected digraph has the same flow semigroup as the corresponding graph. Thus, item~\ref{it:compl2vertex} of Theorem~\ref{thm:main} proves Rhodes's conjecture \cite[Conjecture~6.51i (1)]{wildbook} for 2-vertex connected strongly connected digraphs, as well. The following lemma bounds the complexity in the remaining cases.
\begin{lem} Let $k$ be the smallest positive integer such that for a graph $\Gamma$ the flow semigroup $S_{\Gamma}$ has defect $k$ group $S_{n-k}$. Then $\cpx{S_{\Gamma}}\geq n-1-k$. \end{lem} \begin{proof} Assume first $k=n-1$. Then the lemma holds trivially. From now on, assume $k\leq n-2$. Let $uv$ be an edge in $\Gamma$. Let $V_k$ be an arbitrary $k$-element subset of the vertex set $V$ disjoint from $\halmaz{u,v}$. Let $G_k$ be the defect $k$ group with defect set $V_k$. Let $S$ be the subsemigroup of $S_{\Gamma}$ generated by $G_k$ and $e_{uv}$. As $G_k \simeq S_{n-k}$, we have that $S$ is the semigroup of all transformations on $V\setminus V_k$. Hence, $\cpx{S}=\cpx{F_{n-k}}=n-k-1$ by Lemma~\ref{lem:F_n}. Whence, $\cpx{S_{\Gamma}} \geq \cpx{S}= n-k-1$. \end{proof} By Theorem~\ref{thm:defectk}, it immediately follows that the complexity of the flow semigroup of a 2-edge connected graph $\Gamma$ is at least $n-3$. Furthermore, $\cpx{S_{\Gamma}} \leq \cpx{K_n} = n-2$ by Lemma~\ref{lem:K_n}. This finishes the proof of item~\ref{it:compl2edge} of Theorem~\ref{thm:main}. \end{document}
\begin{document} \title{Lower Bound on the Size-Ramsey Number of Tight Paths} \author{Christian Winter} \affil{University of Hamburg, Hamburg, Germany; and Karlsruhe Institute of Technology, Karlsruhe, Germany; E-mail: \textit{[email protected]}} \maketitle \begin{abstract} The size-Ramsey number $\mathbb{R}RR^{(k)}(\ensuremath{\mathcal{H}})$ of a $k$-uniform hypergraph $\ensuremath{\mathcal{H}}$ is the minimum number of edges in a $k$-uniform hypergraph $\ensuremath{\mathcal{G}}$ with the property that every `$2$-edge coloring' of $\ensuremath{\mathcal{G}}$ contains a monochromatic copy of $\ensuremath{\mathcal{H}}$. For $k\ge2$ and $n\in\mathbb{N}$, a $k$-uniform tight path on $n$ vertices~$\ensuremath{\mathcal{P}}^{(k)}_{n}$ is defined as a $k$-uniform hypergraph on $n$ vertices for which there is an ordering of its vertices such that the edges are all sets of $k$ consecutive vertices with respect to this order. We prove a lower bound on the size-Ramsey number of $k$-uniform tight paths which, considered asymptotically in both the uniformity $k$ and the number of vertices $n$, is $\mathbb{R}RR^{(k)}(\ensuremath{\mathcal{P}}^{(k)}_{n})= \Omega\big(n\log k\big)$. \end{abstract} \textbf{Keywords\ --} size-Ramsey, Ramsey theory, tight path, uniform hypergraph\\ \section{Introduction}\label{sec_intro} For a $k$-graph $\ensuremath{\mathcal{G}}=(V,E)$, i.e.\ a $k$-uniform hypergraph on a vertex set $V$ and an edge set $E\subseteq \binom{V}{k}$, a \textit{$2$-edge coloring} of $\ensuremath{\mathcal{G}}$ is a function $c\colon E(\ensuremath{\mathcal{G}})\to \{\text{red},\text{blue}\}$ that maps every edge to one of the given colors \textit{red} or \textit{blue}. In the following we refer to such a function simply as a \textit{coloring} of $\ensuremath{\mathcal{G}}$. We say that a $k$-graph $\ensuremath{\mathcal{G}}$ has the \textit{Ramsey property} $\ensuremath{\mathcal{G}}\rightarrow \ensuremath{\mathcal{H}}$ for some $k$-graph $\ensuremath{\mathcal{H}}$ if every coloring of $\ensuremath{\mathcal{G}}$ contains a monochromatic copy of $\ensuremath{\mathcal{H}}$. The \textit{size-Ramsey number} of a $k$-graph $\ensuremath{\mathcal{H}}$ is defined as $$\mathbb{R}RR^{(k)}(\ensuremath{\mathcal{H}})=\min\big\{|E(\ensuremath{\mathcal{G}})| \colon \ensuremath{\mathcal{G}}\ k\text{-graph with }\ensuremath{\mathcal{G}}\rightarrow \ensuremath{\mathcal{H}}\big\}.$$ Size-Ramsey problems were introduced by Erd\H{o}s, Faudree, Rousseau and Schelp \cite{erdos_size} for graphs. One of the focus points of studies on the graph case is estimating the size-Ramsey number of paths. Beck \cite{beck83} disproved a conjecture of Erd\H{o}s \cite{erdos_pn} by showing that $\mathbb{R}RR^{(2)}(P_n)=O(n)$. Since then, estimates on this number have been gradually improved, with the current best known bounds being $\big(3.75-o(1)\big)n\le\mathbb{R}RR^{(2)}(P_n)\le74n$ given by Bal and DeBiasio \cite{bal_lb} and by Dudek and Pralat \cite{dud_74}, respectively. \\ Let $n,k\in\mathbb{N}$ with $k\ge2$. A \textit{$k$-uniform tight path} on $n$ vertices $\ensuremath{\mathcal{P}}^{(k)}_{n}$ is a $k$-graph on $n$ vertices for which there exists an ordering of its vertices such that every edge is a $k$-element set of consecutive vertices with respect to this order, two consecutive edges have precisely $k-1$ vertices in common, and there are no isolated vertices.
Equivalently, $\ensuremath{\mathcal{P}}^{(k)}_{n}$ is a $k$-graph isomorphic to the hypergraph $(\{1,\dots,n\},E)$ with edge set $$E=\big\{\{i,\dots,i+k-1\}\colon i\in\{1,\dots,n-k+1\}\big\}.$$ If the uniformity is clear from the context we omit the prefix `$k$-uniform' when referring to tight paths. \\ Research on the size-Ramsey number of hypergraphs has been substantially driven forward by Dudek, La Fleur, Mubayi and Rödl \cite{dud_general}. Among other results, they conjectured that the size-Ramsey number of tight paths is linear in terms of $n$. This conjecture was recently verified by Letzter, Pokrovskiy and Yepremyan \cite{let_ub}. \begin{theorem}[\cite{let_ub}]\label{ub_linear} Let $k\ge 2$ be fixed. Then $$\mathbb{R}RR^{(k)}(\ensuremath{\mathcal{P}}^{(k)}_{n})=O(n).$$ \varepsilonnd{theorem} Regarding a lower bound on this number, the following is a simple observation. \begin{observation*}\label{lb_trivial} Let $n,k\in\mathbb{N}$, $k\ge2$. Then $$\mathbb{R}RR^{(k)}(\ensuremath{\mathcal{P}}^{(k)}_{n})\ge 2n-2k+1.$$ \varepsilonnd{observation*} In this paper we show an improved lower bound on the size-Ramsey number of tight~paths. \begin{theorem}\label{lb_tight2} Let $n\ge 7$. Then $$\mathbb{R}RR^{(3)}(\ensuremath{\mathcal{P}}^{(3)}_{n})\ge\tfrac{8}{3} n -\tfrac{28}{3}.$$ \varepsilonnd{theorem} \begin{theorem}\label{lb_tight} Let $k\ge4$ and $n>\frac{k^2+k-2}{2}$. Then $$\mathbb{R}RR^{(k)}(\ensuremath{\mathcal{P}}^{(k)}_{n})\ge \big\lceil\log_2(k+1)\big\rceil\cdot n-2k^2.$$ \varepsilonnd{theorem} Section \ref{sec_neigh} discusses some properties which are useful for the main proofs. In Section \ref{sec_tight} the proofs of Theorem \ref{lb_tight} and Theorem \ref{lb_tight2} are presented. \section{Preliminaries}\label{sec_neigh} Let $\ensuremath{\mathcal{G}}$ be a $k$-graph and $Z\subseteq E(\ensuremath{\mathcal{G}})$ be an edge set. Let $\medcup Z =\{v\in e\colon e\in Z\}$ be the set of vertices that are \textit{covered} by~$Z$. We say that the $k$-graph $(\medcup Z, Z)$ is \textit{formed} by $Z$. Given a vertex set $W\subseteq V(\ensuremath{\mathcal{G}})$ the subhypergraph \textit{induced} by $W$ is $\ensuremath{\mathcal{G}}[W]=(W,\{e\in E(\ensuremath{\mathcal{G}})\colon e\subseteq W\})$. For $q\in\mathbb{R}$, $0\le q< k$, the \textit{$q$-neighborhood} of $Z$ is the edge set $$N_{> q}(Z)=\big\{e\in E(\ensuremath{\mathcal{G}}) \colon \varepsilonxists e'\in Z\text{ with }|e\cap e'|> q\big\}.$$ Note that we allow $e=e'$, thus $Z\subseteq N_{>q}(Z)$ for all $0\le q< k$. \\ For each $k$-uniform tight path $\ensuremath{\mathcal{P}}$ on $n$ vertices we fix an ordering of the vertices such that each edge is a set of consecutive vertices. We say that such an enumeration $V(\ensuremath{\mathcal{P}})=\{v_1,\dots,v_n\}$ is \textit{according to} $\ensuremath{\mathcal{P}}$. For a $k$-graph $\ensuremath{\mathcal{G}}$, we define $e(\ensuremath{\mathcal{G}})=|E(\ensuremath{\mathcal{G}})|$, e.g.\ $e(\ensuremath{\mathcal{P}}^{(k)}_{n})=n-k+1$. Furthermore, let $[n]=\{1,\dots,n\}$ for $n\in\mathbb{N}$. For any other notation, see Diestel \cite{diestel}. \begin{proposition}\label{prop_gen} Let $n,k\in\mathbb{N}$ such that $k\ge 2$ and $n>\frac{k^2+k-2}{2}$. Let $\ensuremath{\mathcal{P}}$ be a $k$-uniform tight path on $n$ vertices. Furthermore, let $\alpha\in\mathbb{R}$ such that $1\le \alpha \le k$ and $W\subseteq V(\ensuremath{\mathcal{P}})$ be a vertex set such that for every edge $e\in E(\ensuremath{\mathcal{P}})$ we have $|e\cap W|\ge \alpha$. 
Then $$|W|\ge \frac{\alpha(n-k+1)}{k}.$$ \noindent In particular, if for each $e\in E(\ensuremath{\mathcal{P}})$, $|e\cap W|> \frac{k+1}{2}$, then for $n>\frac{k^2+k-2}{2}$, $$|W|> \frac{n}{2}.$$ \end{proposition} \begin{proof} We estimate the size of $W$ by double-counting ordered pairs $(v,e)$ consisting of a vertex $v\in W$ and an edge $e\in E(\ensuremath{\mathcal{P}})$ with $v\in e$. Let $\rho$ be the number of such pairs. Considering the edges of $\ensuremath{\mathcal{P}}$ it is immediate that $$\rho\ge\alpha \cdot e(\ensuremath{\mathcal{P}})= \alpha (n-k+1).$$ Now consider the vertices in $W\subseteq V(\ensuremath{\mathcal{P}})$. The maximum degree of the tight path $\ensuremath{\mathcal{P}}$ is at most $k$, so $$\rho\le k\cdot |W|.$$ \noindent Combining both inequalities, we obtain $$|W|\ge \frac{\alpha (n-k+1)}{k}.$$ Now consider the case that for each edge $e\in E(\ensuremath{\mathcal{P}})$ we have $|e\cap W|>\frac{k+1}{2}$; since $|e\cap W|$ is an integer, also $|e\cap W|\ge\frac{k+2}{2}$. Therefore, for $n>\frac{k^2+k-2}{2}$, we obtain \begin{align*} |W|\ge \frac{k+2}{2}\cdot\frac{n-k+1}{k}> \frac{n}{2}.&\qedhere \end{align*} \end{proof} \section{Proofs of the main results}\label{sec_tight} \begin{proof}[Proof of Theorem \ref{lb_tight}] Let $\ensuremath{\mathcal{G}}$ be a $k$-uniform hypergraph with $\ensuremath{\mathcal{G}}\rightarrow \ensuremath{\mathcal{P}}^{(k)}_{n}$, i.e.\ such that every $2$-coloring contains a monochromatic $k$-uniform tight path on $n$ vertices. We show that there are at least $\left\lceil\log_2(k+1)\right\rceil\cdot n-2k^2$ edges in $\ensuremath{\mathcal{G}}$ by iteratively constructing many edge-disjoint tight paths of length $n$. Let $\lambda=\left\lceil\log_2(k+1)\right\rceil-1$; this is the number of iteration steps to be executed. Additionally, we define the function $q\colon\{0,\dots,\lambda\}\rightarrow \mathbb{R}$, $$q(i)=\left(1-\frac{1}{2^i}\right)(k+1),$$ which will be the parameter of the $q$-neighborhoods considered in each iteration step. Clearly, $q$ is an increasing function and $q(i)\ge 0$ for $i\in\{0,\dots,\lambda\}$. For $i\le\lambda$ (or equivalently $i<\log_2(k+1)$) it can be seen that $q(i)<k$, which implies that the $q(i)$-neighborhood is well-defined for all $i\in\{0,\dots,\lambda\}$. \\ As an initial step of the iteration, the Ramsey property $\ensuremath{\mathcal{G}}\rightarrow \ensuremath{\mathcal{P}}^{(k)}_{n}$ provides that there is some tight path on $n$ vertices in $\ensuremath{\mathcal{G}}$, which we denote by $\ensuremath{\mathcal{P}}_0$. \\ \noindent From now on we proceed iteratively, so let $i= 1,\dots,\lambda$ and suppose that the iteration has been performed for all smaller values of $i$. In each step of the iteration we construct the following: \begin{itemize} \item Edge sets $Z^1_i, Z^2_i\subseteq E(\ensuremath{\mathcal{P}}_{i-1})$ such that $\medcup Z^1_i \cap \medcup Z^2_i=\varnothing$ and each of the sets forms a tight path in $\ensuremath{\mathcal{G}}$ on precisely $\left\lfloor\frac{n}{2}\right\rfloor$ vertices. \item A tight path $\ensuremath{\mathcal{P}}_{i}$ on $n$ vertices with $E(\ensuremath{\mathcal{P}}_{i})\cap N_{>q(i)}(Z^{a}_{b})=\varnothing$ for all $a\in[2], b\in[i]$. \end{itemize} \noindent First we construct $Z^1_i$ and $Z^2_i$ by dividing the tight path $\ensuremath{\mathcal{P}}_{i-1}$ into two parts of equal length and considering the edge sets of the two created shorter tight paths.
For this purpose, consider an ordering of the vertices $V(\ensuremath{\mathcal{P}}_{i-1})=\{v_1,\dots,v_n\}$ according to $\ensuremath{\mathcal{P}}_{i-1}$. Let $$V^1_i=\big\{v_1,\dots,v_{\left\lfloor\frac{n}{2}\right\rfloor}\big\}\quad\text{ and }\quad V^2_i=\big\{v_{\left\lceil\frac{n}{2}\right\rceil+1},\dots,v_n\big\}.$$ Then $|V^1_i|=\left\lfloor\frac{n}{2}\right\rfloor=|V^2_i|$. Now let $Z^1_i=E(\ensuremath{\mathcal{P}}_{i-1}[V^1_i])$ and $Z^2_i=E(\ensuremath{\mathcal{P}}_{i-1}[V^2_i])$. Clearly, these two sets form vertex-disjoint tight paths on $\left\lfloor\frac{n}{2}\right\rfloor$ vertices in $\ensuremath{\mathcal{G}}$. The size of $Z^1_i$ and $Z^2_i$ is $$|Z^1_i|=|Z^2_i|=e(\ensuremath{\mathcal{P}}^{(k)}_{\left\lfloor\tfrac{n}{2}\right\rfloor})=\left\lfloor\frac{n}{2}\right\rfloor-k+1\ge \frac{n-2k+1}{2}.$$ \noindent In the next step we show a key property of the edge sets $Z^{a}_{b}$ for $a\in[2]$, $b\in[i]$. \\ \noindent \textbf{Claim.} Let $a_1,a_2\in[2]$, $b_1,b_2\in[i]$ such that $(a_1,b_1)\neq (a_2,b_2)$. Then for any two edges $e_1\in N_{>q(i)}(Z^{a_1}_{b_1})$ and $e_2\in N_{>q(i)}(Z^{a_2}_{b_2})$ we have $$|e_1\cap e_2|<k-1.$$ \noindent \textit{Proof of the claim.} Assume that there are edges $e_1\in N_{>q(i)}(Z^{a_1}_{b_1})$, $e_2\in N_{>q(i)}(Z^{a_2}_{b_2})$ with $|e_1\cap e_2|\ge k-1$. By definition, there is an edge $z_1\in Z^{a_1}_{b_1}$ such that $|e_1\cap z_1|>q(i)$ and an edge $z_2\in Z^{a_2}_{b_2}$ with $|e_2\cap z_2|>q(i)$. \\ \begin{figure}[H] \centering \includegraphics[scale=0.65]{figs/neigh} \caption{Possible constellation of the edges in iteration step $i=1$ where $k=6$} \varepsilonnd{figure} \noindent We estimate the size of $z_1\cap z_2$ in order to find a contradiction to our assumption. Since $|e_1\cap e_2|\ge k-1$, we have $|e_1\backslash e_2|\le 1$ and so $|e_2\cap z_1|>q(i)-1$. Applying this, we obtain: \begin{align*} |z_1\cap z_2|&\ge |e_2\cap z_1\cap z_2|\ge |e_2|-|e_2\backslash z_1|-|e_2\backslash z_2|= -|e_2|+|e_2\cap z_1|+|e_2\cap z_2|\\ &>-k+q(i)-1+q(i)=\left(1-\frac{1}{2^{i-1}}\right)(k+1)=q(i-1). \varepsilonnd{align*} If $b_1=b_2$, we have $\medcup Z^{a_1}_{b_1}\cap \medcup Z^{a_2}_{b_2}=\varnothing$ by construction. But then $q(i-1)<|z_1\cap z_2|=0$, which is a contradiction. We suppose that $b_1\neq b_2$, then without loss of generality $b_1>b_2$ (and by this $b_1-1\ge1$). By construction we know $z_1\in Z^{a_1}_{b_1}\subseteq E(\ensuremath{\mathcal{P}}_{b_1-1})$. In the iteration step $b_1-1$ the tight path $\ensuremath{\mathcal{P}}_{b_1-1}$ was chosen to be edge-disjoint from $\bigcup_{a\in[2],b<{b_1}} N_{>q(b_1-1)}(Z^a_b)$. This yields that $z_1\notin N_{>q(b_1-1)}(Z^{a_2}_{b_2})$ and so $$|z_1\cap z_2|\le q(b_1-1)\le q(i-1),$$ where the last inequality holds because $q$ is an increasing function, and we again reach a contradiction. This concludes the proof of the claim. \qed\\ Now we find the next tight path $\ensuremath{\mathcal{P}}_{i}$ in $\ensuremath{\mathcal{G}}$ by considering the following coloring of $\ensuremath{\mathcal{G}}$. For all $a\in[2]$ and $b\in[i]$, assign the color red to each edge in $N_{>q(i)}(Z^{a}_{b})$. The remaining edges are colored blue. We will prove that there is a monochromatic blue $\ensuremath{\mathcal{P}}^{(k)}_{n}$ in this coloring. We shall let $\ensuremath{\mathcal{P}}_{i}$ be that path. With this in mind, assume for a contradiction that there is a monochromatic red tight path $\mathbb{R}R$ on $n$ vertices in $\ensuremath{\mathcal{G}}$. 
\\ Clearly, each edge in $E(\mathbb{R}R)$ is in some neighborhood $N_{>q(i)}(Z^{a}_{b})$, $a\in[2], b\in[i]$. Now the above claim provides that any two edges which are consecutive in $\mathbb{R}R$, so intersect in precisely $k-1$ vertices, belong to the same neighborhood $N_{>q(i)}(Z^{a}_{b})$ for some $a\in[2], b\in[i]$. By repeating this argument, we obtain that $E(\mathbb{R}R)\subseteq N_{>q(i)}(Z^a_b)$ for some $a\in[2], b\in[i]$. This implies that for all $e\in E(\mathbb{R}R)$, $$|e\cap \medcup Z^a_b|>q(i)\ge q(1)=\tfrac{k+1}{2}.$$ Then applying Proposition \ref{prop_gen} for the tight path $\mathbb{R}R$ and the vertex set $\medcup Z^a_b$ yields $|\medcup Z^a_b|>\frac{n}{2}$. But by construction $Z^a_b$ forms a $k$-graph on precisely $\left\lfloor\frac{n}{2}\right\rfloor$ vertices, a contradiction. \\ Consequently, there is no red tight path on $n$ vertices in the coloring, so the Ramsey property $\ensuremath{\mathcal{G}}\rightarrow \ensuremath{\mathcal{P}}^{(k)}_{n}$ implies the existence of a monochromatic blue $\ensuremath{\mathcal{P}}^{(k)}_{n}$, which we denote $\ensuremath{\mathcal{P}}_{i}$. Observe that for all $e\in E(\ensuremath{\mathcal{P}}_{i})$ and for all $a\in[2], b\in[i]$, we have $e\notin N_{>q(i)}(Z^a_b)$, since all edges in these neighborhoods are colored in red. \\ By iterating the described procedure for $i=1,\dots,\lambda$, we obtain edge sets $Z^a_b$ for $a\in[2]$, $b\in[\lambda]$ which are pairwise disjoint and additionally a tight path $\ensuremath{\mathcal{P}}_{\lambda}$ on $n$ vertices such that each edge in $E(\ensuremath{\mathcal{P}}_{\lambda})$ is not contained in any set $Z^a_b$. This allows for the following estimate on the number of edges in $\ensuremath{\mathcal{G}}$ \begin{align*} e(\ensuremath{\mathcal{G}})&\ge \sum_{b\in[\lambda]} \big(|Z^1_b|+|Z^2_b|\big) + e(\ensuremath{\mathcal{P}}_{\lambda})\ge\lambda (n-2k-1)+(n-k+1)\\ &\ge \left\lceil\log_2(k+1)\right\rceil\cdot n-(k-1)(2k+2)\ge \left\lceil\log_2(k+1)\right\rceil\cdot n-2k^2, \varepsilonnd{align*} where in the last line we used $\left\lceil\log_2(k+1)\right\rceil\le k$. \varepsilonnd{proof} We point out that the above proof also applies to $3$-uniform tight paths, but does not yield an improvement of the trivial bound. In order to obtain a refined bound in this case, we instead use a non-iterative adaption of the above proof. \\ \begin{proof}[Proof of Theorem \ref{lb_tight2}] Let $\ensuremath{\mathcal{G}}$ be an arbitrary $3$-uniform hypergraph which has the Ramsey property $\ensuremath{\mathcal{G}}\rightarrow \ensuremath{\mathcal{P}}^{(3)}_{n}$. As before, we show that $\ensuremath{\mathcal{G}}$ is a $3$-graph on at least $\tfrac{8}{3} n -\tfrac{28}{3}$ many edges. Using the Ramsey property $\ensuremath{\mathcal{G}}\rightarrow \ensuremath{\mathcal{P}}^{(3)}_{n}$, there exists some tight path on $n$ vertices in~$\ensuremath{\mathcal{G}}$. In particular, we find a shorter tight path $\ensuremath{\mathcal{P}}_0$ on only $\left\lceil\tfrac{2}{3} n -\tfrac{7}{3}\right\rceil$ many vertices. Observe that $e(\ensuremath{\mathcal{P}}_0)= \left\lceil\tfrac{2}{3} n -\tfrac{7}{3}\right\rceil-2\ge \tfrac{2}{3}n-\tfrac{13}{3}$. \\ In order to find a tight path $\ensuremath{\mathcal{P}}_1$ which is edge-disjoint from $\ensuremath{\mathcal{P}}_0$, we consider the following coloring. Color all edges in the $1$-neighborhood $N_{>1}\big(E(\ensuremath{\mathcal{P}}_0)\big)$ in red and the remaining edges in blue. 
Assume for a contradiction that in this coloring there is a monochromatic red tight path on $n$ vertices, say $\mathbb{R}R$. Then Proposition \ref{prop_gen} applied to the tight path $\mathbb{R}R$ and the vertex set $V(\ensuremath{\mathcal{P}}_0)$ provides a contradiction. Since $\ensuremath{\mathcal{G}}\rightarrow \ensuremath{\mathcal{P}}^{(3)}_{n}$, there is a monochromatic blue tight path on $n$ vertices in~$\ensuremath{\mathcal{G}}$. This implies that there is also a blue tight path on $n-1$ vertices, i.e.\ on $n-3$ edges. We fix such a tight path $\ensuremath{\mathcal{P}}_1$ with $e(\ensuremath{\mathcal{P}}_1)=n-3$. Note that $N_{>1}\big(E(\ensuremath{\mathcal{P}}_0)\big)$ and $E(\ensuremath{\mathcal{P}}_1)$ are disjoint edge sets. \\ In the following, in order to find a third edge-disjoint tight path, we consider another coloring of $\ensuremath{\mathcal{G}}$. From now on, let each edge in $E(\ensuremath{\mathcal{P}}_0) \cup E(\ensuremath{\mathcal{P}}_1)$ be colored red and all other edges blue. Assume for a contradiction that there is a red tight path $\mathbb{R}R$ on $n$ vertices in this coloring. Then neither $E(\mathbb{R}R)\subseteq E(\ensuremath{\mathcal{P}}_0)$ nor $E(\mathbb{R}R)\subseteq E(\ensuremath{\mathcal{P}}_1)$, because the two edge sets have size strictly less than $e(\ensuremath{\mathcal{P}}^{(3)}_{n})$. Therefore, $\mathbb{R}R$ consists of edges of both $E(\ensuremath{\mathcal{P}}_0)$ and $E(\ensuremath{\mathcal{P}}_1)$. Both of these edge sets are disjoint, so there exist two edges $e_1\in E(\ensuremath{\mathcal{P}}_0)\cap E(\mathbb{R}R),e_2\in E(\ensuremath{\mathcal{P}}_1)\cap E(\mathbb{R}R)$ which are consecutive in $\mathbb{R}R$, i.e.\ $|e_1\cap e_2|=2$. But that is a contradiction to the fact that $N_{>1}\big(E(\ensuremath{\mathcal{P}}_0)\big)$ and $E(\ensuremath{\mathcal{P}}_1)$ are disjoint. Consequently, there is no red $\ensuremath{\mathcal{P}}^{(3)}_{n}$ in this coloring. By the same argument as before, there is a blue tight path $\ensuremath{\mathcal{P}}_2$ on $n$ vertices in $\ensuremath{\mathcal{G}}$. Then the three edge sets $E(\ensuremath{\mathcal{P}}_0), E(\ensuremath{\mathcal{P}}_1), E(\ensuremath{\mathcal{P}}_2)$ are pairwise disjoint. Thus, \begin{align*} e(\ensuremath{\mathcal{G}})\ge e(\ensuremath{\mathcal{P}}_0)+e(\ensuremath{\mathcal{P}}_1)+e(\ensuremath{\mathcal{P}}_2)\ge \tfrac{8}{3} n -\tfrac{28}{3}.&\qedhere \varepsilonnd{align*} \varepsilonnd{proof} \section*{Acknowledgments} The author would like to thank Mathias Schacht for helpful discussions and support with the thesis on which this paper is based, Maria Axenovich for comments on the manuscript and her help with its final polish as well as the two thorough referees for their careful reading of the paper and their useful comments. \begin{thebibliography}{99} \bibitem{bal_lb} D. Bal, and L. DeBiasio. \textit{New lower bounds on the size-Ramsey number of a path.} Electron. J. Combin. \textbf{29} (2022), no. 1, P1.18. \bibitem{beck83} J. Beck. \textit{On size Ramsey number of paths, trees, and circuits. I.} J. Graph Theory \textbf{7} (1983), no. 1, 115–129. \bibitem{diestel} R. Diestel. \textit{Graph Theory. Fifth Edition. Graduate Texts in Mathematics}, 173. Springer, Berlin, 2017. \bibitem{dud_general} A. Dudek, S. La Fleur, D. Mubayi, and V. Rödl. \textit{On the size-Ramsey number of hypergraphs.} J. Graph Theory \textbf{86} (2017), no. 1, 104–121. \bibitem{dud_74} A. Dudek, and P. Pralat. \textit{On some multicolour Ramsey properties of random graphs.} SIAM J. Discrete Math. 
\textbf{31} (2017), no. 3, 2079–2092. \bibitem{erdos_pn} P. Erd\H{o}s. \textit{On the combinatorial problems which I would most like to see solved.} Combinatorica \textbf{1} (1981), no. 1, 25–42. \bibitem{erdos_size} P. Erd\H{o}s, R. J. Faudree, C. C. Rousseau, and R. H. Schelp. \textit{The size Ramsey number.} Period. Math. Hungar. \textbf{9} (1978), no. 1-2, 145–161. \bibitem{let_ub} S. Letzter, A. Pokrovskiy, and L. Yepremyan. \textit{Size-Ramsey numbers of powers of hypergraph trees and long subdivisions}. Preprint, available at {arXiv:2103.01942v1}, 2021. \varepsilonnd{thebibliography} \varepsilonnd{document}
\begin{document} \title{Lower Bounds on the Generalized Central Moments of the Optimal Alignments Score of Random Sequences} \begin{abstract} We present a general approach to the problem of determining tight asymptotic lower bounds for generalized central moments of the optimal alignment score of two independent sequences of i.i.d.~random variables. At first, these are obtained under a main assumption for which sufficient conditions are provided. When the main assumption fails, we nevertheless develop a ``uniform approximation" method leading to asymptotic lower bounds. Our general results are then applied to the length of the longest common subsequence of binary strings, in which case asymptotic lower bounds are obtained for the moments and the exponential moments of the optimal score. As a byproduct, a local upper bound on the rate function associated with the length of the longest common subsequences of two binary strings is also obtained. \noindent \textbf{AMS Mathematics Subject Classification 2010}:05A05, 60C05, 60F10. \noindent \textbf{Key words}: Longest Common Subsequence, Optimal Alignment, Last Passage Percolation. \end{abstract} \section{Introduction} Throughout this paper, $\mathbf{X}_{n}:=(X_{1},X_{2},\ldots,X_{n})$ and $\mathbf{Y}_{n}:=(Y_{1},Y_{2},\ldots,Y_{n})$ are two random strings, usually referred to as the (finite) sequences, so that any random variable $X_{i}$ or $Y_{i}$, $i=1,\ldots,n$, takes its values in a fixed finite alphabet $\mathcal{A}$. The sequences $\mathbf{X}_{n}$ and $\mathbf{Y}_{n}$ are assumed to have the same distribution and to be independent. The sample space of $\mathbf{X}_{n}$ and $\mathbf{Y}_{n}$ will be denoted by $\mathcal{X}_{n}$; clearly $\mathcal{X}_{n}\subseteq\mathcal{A}^{n}$, but, depending on the model, equality might not hold. The problem of measuring the similarity of $\mathbf{X}_{n}$ and $\mathbf{Y}_{n}$ is central to many areas of applications including computational molecular biology (cf.~\cite{ChristianiniHahn:2007}, \cite{DurbinEddyKroghMitchison:1998}, \cite{Pevzner:2000}, \cite{SmithWaterman:1981} and~\cite{Waterman:1995}) and computational linguistics (cf.~\cite{YangLi:2003}, \cite{LinOch:2004}, \cite{Melamed:1995} and~\cite{Melamed:1999}). In this paper we adopt the notation of~\cite{LemberMatzingerTorres:2012}, namely, we consider a general scoring scheme, where $S:\mathcal{A}\times\mathcal{A}\rightarrow\mathbb{R}^{+}$ is a symmetric {\it pairwise scoring function}, that assigns a score to each pair of letters from $\mathcal{A}$. Throughout, an {\it alignment} is a pair $(\boldsymbol\pi,\boldsymbol\mu)$, where $\boldsymbol\pi:=(\pi_{1},\pi_{2},\ldots,\pi_{k})$ and $\boldsymbol\mu:=(\mu_{1},\mu_{2},\ldots,\mu_{k})$ are two increasing sequences of positive integers such that $1\leq\pi_{1}<\pi_{2}<\cdots<\pi_{k}\leq n$ and $1\leq\mu_{1}<\mu_{2}<\cdots<\mu_{k}\leq n$. The integer $k$ is the number of aligned letters, and $n-k$ is the number of {\it gaps} in the alignment. Note that our definition of gap differs from the one that is commonly used in the sequence alignment literature. More precisely, in most of the literature, a gap is a block of consecutive {\it indels} (insertion and deletion, formally denoted by one or more consecutive ``--") in both strings, and it depends on the alignment. For example, the left alignment below has four gaps, while the right one has five gaps. 
\begin{align*} \begin{array}{ccccccccc} 1 & 1 & 3 & 1 & 2 & - & - & - & 3 \\ 1 & - & 3 & - & 2 & 1 & 1 & 1 & - \end{array}\qquad\qquad\begin{array}{ccccccccc} 1 & 1 & 3 & 1 & 2 & - & 3 & - & - \\ 1 & - & 3 & - & 2 & 1 & - & 1 & 1 \end{array}\quad . \end{align*} Since we consider sequences of equal length, and since we do not have a gap opening penalty (which refers to a constant cost to open a gap of any length), our gap corresponds to a pair of indels, one on the $\mathbf{X}$-side and the other on the $\mathbf{Y}$-side. In other words, the number of gaps in this sense is the number of indels in either one of the sequences. Hence, in our framework, both the left and the right alignment above have three gaps. Given a symmetric pairwise scoring function $S:\mathcal{A}\times\mathcal{A}\rightarrow\mathbb{R}^{+}$ and a {\it constant} gap price $\delta\in\mathbb{R}$, the score of the alignment $(\boldsymbol\pi,\boldsymbol\mu)$, when aligning $\mathbf{X}_{n}$ and $\mathbf{Y}_{n}$, is defined by \begin{align*} U_{(\boldsymbol\pi,\boldsymbol\mu)}\!\left(\mathbf{X}_{n};\mathbf{Y}_{n}\right):=\sum_{i=1}^{k}S(X_{\pi_{i}},Y_{\mu_{i}})+\delta(n-k). \end{align*} In our general scoring scheme $\delta$ can be positive, although usually $\delta\leq 0$ penalizes matches with ``--". For negative $\delta$, the quantity $-\delta$ is usually called the {\it gap penalty}. The optimal alignment score of $\mathbf{X}_{n}$ and $\mathbf{Y}_{n}$ is then defined as \begin{align*} L_{n}=L_{n}\!\left(\mathbf{X}_{n};\mathbf{Y}_{n}\right):=\max_{(\boldsymbol\pi,\boldsymbol\mu)}U_{(\boldsymbol\pi,\boldsymbol\mu)}\!\left(\mathbf{X}_{n};\mathbf{Y}_{n}\right), \end{align*} where the maximum is taken over all possible alignments. To simplify notations, in what follows, we often set $\mathbf{Z}_{n}:=(\mathbf{X}_{n},\mathbf{Y}_{n})$ so that $L_{n}=L_{n}(\mathbf{Z}_{n})$. When $\delta=0$ and the scoring function assigns the value one to every pair of common letters and zero to all other pairs, i.e., \begin{align*} S(a,b)=\left\{\begin{array}{ll} 1, & \hbox{if $a=b$;} \\ 0, & \hbox{if $a\ne b$,} \end{array}\right. \end{align*} then $L_{n}(\mathbf{Z}_{n})$ is just the maximal number of pairs of aligned letters -- the length of the {\it longest common subsequences} (LCS) of $\mathbf{X}_{n}$ and $\mathbf{Y}_{n}$, which is probably the most frequently used measure of global similarity. \subsection{A Summary of Known Results} The study of $L_{n}$ is not only of theoretical interest but also has some practical consequences. For example, it is useful for distinguishing related (homologous) pairs of strings from unrelated ones. Unfortunately, for fixed $n$ and an arbitrary distribution for $\mathbf{X}_{n}$ and $\mathbf{Y}_{n}$, the distribution of $L_{n}$ is unknown. In view of these, an alternative approach is to study the typical values and the fluctuations of $L_{n}$. When $\mathbf{X}_{n}$ and $\mathbf{Y}_{n}$ are taken from an ergodic process, by Kingman's subadditive ergodic theorem, there exists a constant $\gamma^{*}$ such that \begin{align}\label{eq:ConvLn} \frac{L_{n}}{n}\rightarrow\gamma^{*}\quad\text{a.}\,\text{s.}\,\,\,\text{and in }\,L_{1},\quad\text{as }\,n\rightarrow\infty. \end{align} In the LCS case, the existence of $\gamma^{*}$ was first shown by Chv\'{a}tal and Sankoff~\cite{ChvatalSankoff:1975}, but its exact value (or an identity for it) remains unknown even for independent i.i.d. Bernoulli sequences (although numerically estimated). 
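As an aside that is not needed in the sequel, $L_{n}$ is straightforward to compute for two given strings: since the gap term equals $\delta(n-k)$, one has $U_{(\boldsymbol\pi,\boldsymbol\mu)}\!\left(\mathbf{X}_{n};\mathbf{Y}_{n}\right)=\delta n+\sum_{i=1}^{k}\big(S(X_{\pi_{i}},Y_{\mu_{i}})-\delta\big)$, so that maximizing over all alignments reduces to a weighted longest-common-subsequence recursion. The following short script is an illustrative sketch of ours (the function names are not taken from the literature); it implements this recursion and uses it for the kind of Monte-Carlo estimation of $\gamma^{*}$ alluded to above.
\begin{verbatim}
import random

def optimal_score(x, y, S, delta=0.0):
    # L_n = delta*n + max over alignments of sum(S - delta), computed by the
    # standard weighted-LCS dynamic program over prefixes of x and y.
    n = len(x)
    M = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            M[i][j] = max(M[i - 1][j], M[i][j - 1],
                          M[i - 1][j - 1] + S(x[i - 1], y[j - 1]) - delta)
    return delta * n + M[n][n]

def lcs_score(a, b):
    return 1.0 if a == b else 0.0   # the LCS special case (with delta = 0)

random.seed(0)
n, reps = 200, 10                   # crude Monte-Carlo estimate of E(L_n)/n
est = sum(optimal_score([random.randint(0, 1) for _ in range(n)],
                        [random.randint(0, 1) for _ in range(n)],
                        lcs_score)
          for _ in range(reps)) / (reps * n)
print(est)                          # a rough finite-n estimate of gamma*
\end{verbatim}
With $S$ the indicator of equality and $\delta=0$ the routine returns the length of the longest common subsequences of the two strings; its quadratic time and memory are adequate for the kind of simulations behind the numerical estimates of $\gamma^{*}$ mentioned above.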
Alexander~\cite{Alexander:1994} obtained the rate of convergence in \eqref{eq:ConvLn} in the LCS case, which was extended to general scoring functions in~\cite{LemberMatzingerTorres:2012}, and to multiple (two or more) independent i.i.d. sequences with general score function in~\cite{GongHoudreIslak:2016}. Since the exact distribution of $L_{n}$ is rather hard to determine even for moderate $n$, it is natural to look for a limiting theorem, e.g., \begin{align*} \frac{L_{n}-\mathbb{E}(L_{n})}{n^{\alpha}}\;{\stackrel{\mathfrak{D}}{\longrightarrow}}\;\mathcal{P},\quad n\rightarrow\infty, \end{align*} for some $\alpha\in(0,1)$. Here $\mathcal{P}$ is a limiting distribution, and $\;{\stackrel{\mathfrak{D}}{\longrightarrow}}\;$ stands for convergence in law. Typically, one expects $\alpha=1/2$, and $\mathcal{P}$ to be a centered normal distribution, i.e., \begin{align}\label{eq:CLTLn} \frac{L_{n}-\mathbb{E}(L_{n})}{\sqrt{n}}\;{\stackrel{\mathfrak{D}}{\longrightarrow}}\;\mathcal{N},\quad n\rightarrow\infty, \end{align} where $\mathcal{N}$ stands for a centered normal distribution with variance $\sigma^{2}>0$. Under \eqref{eq:CLTLn}, for any $r>0$, the $r$-th absolute moment of $L_{n}$ would then be expected to grow at speed $n^{r/2}$, as $n\rightarrow\infty$. In particular, the variance would grow linearly, i.e., \begin{align*} \lim_{n\rightarrow\infty}\frac{\text{Var}(L_{n})}{n}=\sigma^{2}>0, \end{align*} and then, \eqref{eq:CLTLn} would be equivalent to \begin{align}\label{eq:CLTLn2} \frac{L_{n}-\mathbb{E}(L_{n})}{\sqrt{\text{Var}(L_{n})}}\;{\stackrel{\mathfrak{D}}{\longrightarrow}}\;\mathcal{N}(0,1),\quad n\rightarrow\infty. \end{align} Such a limiting theorem would make it possible to construct asymptotic tests, based on the optimal score $L_{n}$, of whether two given sequences $\mathbf{X}_{n}$ and $\mathbf{Y}_{n}$ are independent (not homologous) or not. Note that in the gapless case, i.e., when $\delta=-\infty$, the optimal score is just the sum of the pairwise scores $L_{n}=\sum_{i=1}^{n}S(X_{i},Y_{i})$, and thus, under rather general assumptions on $\mathbf{X}_{n}$ and $\mathbf{Y}_{n}$, the limiting theorem \eqref{eq:CLTLn2} (equivalently, \eqref{eq:CLTLn}) holds true. This observation suggests the conjecture that \eqref{eq:CLTLn2} also holds in more general cases, in particular, in the LCS case. No limiting theorem of this type was known until~\cite{HoudreIslak:2015}, which proved \eqref{eq:CLTLn2} in the LCS case, when $\mathbf{X}_{n}$ and $\mathbf{Y}_{n}$ are independent i.i.d. random sequences such that $\text{Var}(L_{n})$ admits a sublinear (in $n$) lower bound. The result in~\cite{HoudreIslak:2015} upper-bounds the Monge-Kantorovich-Wasserstein distance, and in turn the Kolmogorov distance, implying weak convergence. The limiting theorem in~\cite{HoudreIslak:2015} was extended to multiple independent i.i.d. sequences with a general score function in~\cite{GongHoudreIslak:2016}. In contrast to~\cite{HoudreIslak:2015}, the result in~\cite{GongHoudreIslak:2016} directly upper-bounds the Kolmogorov distance, which improves the rate of weak convergence, but still requires a (looser) sublinear variance lower bound. Hence, establishing a linear-order variance lower bound for a general distribution of $X_{1}$ immediately implies the corresponding limiting theorem \eqref{eq:CLTLn2}.
Therefore, there is a direct connection between a variance lower bound and the limiting theorem \eqref{eq:CLTLn2}, leading to an extra motivation to study the rate of convergence of $\text{Var}(L_{n})$ as well as all the other central moments of $L_{n}$. The study of the asymptotic order of $\text{Var}(L_{n})$ in the case of LCS was first proposed by Chv\'{a}tal and Sankoff~\cite{ChvatalSankoff:1975}, who conjectured that $\text{Var}(L_{n})=o(n^{2/3})$, for $\mathbf{X}_{n}$ and $\mathbf{Y}_{n}$ independent i.i.d. symmetric Bernoulli sequences, based on Monte-Carlo simulations. By the Efron-Stein inequality, in the case of independent i.i.d. sequences with score functions satisfying \eqref{eq:MaxScoreChange} below, there exists a universal constant $C_{2}>0$ (independent of $n$), such that \begin{align}\label{eq:UpperBoundLn} \text{Var}(L_{n})\leq C_{2}\,n,\quad\text{for all }\,n\in\mathbb{N}. \end{align} (see Section \ref{sec:upper} for more details). For the LCS case, this result was proved by Steele~\cite{Steele:1986}. In~\cite{Waterman:1994}, Waterman asked whether or not the linear bound on the variance can be improved, at least in the LCS case. His simulations showed that, in some special cases (including the LCS case), $\text{Var}(L_{n})$ should grow linearly in $n$. These simulations suggest a linear lower bound on $\text{Var}(L_{n})$, which would invalidate the conjecture of Chv\'{a}tal and Sankoff. In the past ten years, the asymptotic behavior of $\text{Var}(L_{n})$ has been investigated under various choices of sequences $\mathbf{X}_{n}$ and $\mathbf{Y}_{n}$ (cf.~\cite{BonettoMatzinger:2006}, \cite{DurringerLemberMatzinger:2007}, \cite{HoudreMatzinger:2007}, \cite{LemberMatzinger:2009}, \cite{LemberMatzingerTorres:2012(2)}, \cite{AmsaluHoudreMatzinger:2014}, \cite{AmsaluHoudreMatzinger:2016}, etc). In particular, in~\cite{LemberMatzingerTorres:2012(2)}, the asymptotic behavior of $\text{Var}(L_{n})$ is investigated within two scenarios. In the first one, a general scoring function with a high gap penalty is considered, and $\mathbf{X}_{n}$ and $\mathbf{Y}_{n}$ are independent i.i.d. sequences drawn from a finite alphabet (see also Theorem \ref{thm:app1} and Remark \ref{rem:GapPrice} below). In the second one, the LCS case is studied when both $\mathbf{X}_{n}$ and $\mathbf{Y}_{n}$ are binary sequences having a multinomial block structure, which is a generalization of the model in~\cite{Torres:2009} (the first analysis of the so-called {\it high entropy} case). In many ways, the present paper complements~\cite{LemberMatzingerTorres:2012(2)}. The results of~\cite{LemberMatzinger:2009} (the so-called {\it low entropy} case) were generalized in~\cite{HoudreMa:2016}, and a linear variance lower bound was proved for an arbitrary finite alphabet (not just binary) and for the central moments of arbitrary order (not just the variance), but still under a strongly asymmetric distribution over the letters. The goal of the present paper is to study in this general framework the lower bounds on generalized moments of the optimal alignment score $L_{n}$. To start with, in the next section we quickly show how to obtain upper bounds on the moments and exponential moments of $L_{n}$. In Section \ref{sec:GenMomentBounds}, we develop a general argument to find lower bounds for the generalized central moments of $L_{n}$. These lower bounds are obtained under a main assumption, and sufficient conditions for the validity of this main assumption are provided.
Next, we relax some of these conditions (so that the main assumption is no longer satisfied), and develop a ``uniform approximation" method giving rougher lower bounds for the generalized central moments of $L_{n}$. In Section \ref{sec:LCS}, the general results are applied to the binary LCS case, and some refinements of those lower bounds are also presented. Finally, in Section \ref{sec:Rate}, we obtain a lower bound on some moment generating function, which is then used to derive a (local) upper bound on the rate function associated with the binary LCS case. \section{Upper Bounds on Moments and Exponential Moments}\label{sec:upper} In this short section, we briefly review the upper bounds on central moments and exponential moments of $L_{n}$. Recall that for any constant gap price $\delta\in\mathbb{R}$, changing the value of one of the $2n$ random variables $X_{1},\ldots,X_{n}$ and $Y_{1},\ldots,Y_{n}$ changes the value of $L_{n}$ by at most $K$, where \begin{align}\label{eq:MaxScoreChange} K:=\max_{u,v,w\in\mathcal{A}}|S(u,v)-S(u,w)|. \end{align} Thus, by Hoeffding's exponential martingale inequality, for every $t>0$, \begin{align}\label{eq:HoeffdingIneqOptScore} \mathbb{P}\left(\left|L_{n}-\mathbb{E}(L_{n})\right|\geq t\right)\leq 2\,e^{-t^{2}/(nK^{2})}, \end{align} which indicates that $V_{n}:=(L_{n}-\mathbb{E}(L_{n}))/\sqrt{n}$ is subgaussian. Hence, for any $r>0$, \begin{align*} \mathbb{E}\left(\left|V_{n}\right|^{r}\right)\leq r\,2^{r/2}\left(9K^{2}\right)^{r/2}\Gamma\left(\frac{r}{2}\right)=r(18)^{r/2}\,\Gamma\left(\frac{r}{2}\right)K^{r}. \end{align*} The above inequality provides an upper estimate, of the correct order, on the central absolute moments of $L_{n}$. Some further simple and direct computations allow to improve the constant. Let $r\geq 2$, $x>0$, and let $\widetilde{V}_{n}:=|L_{n}-\mathbb{E}(L_{n})|$. From \eqref{eq:HoeffdingIneqOptScore}, \begin{align*} \mathbb{E}\left(\widetilde{V}_{n}^{r}\right)=\int_{0}^{\infty}\mathbb{P}\left(\widetilde{V}_{n}\geq t^{1/r}\right)dt\leq x+2\int_{x}^{\infty}e^{-t^{2/r}/(nK^{2})}\,dt. \end{align*} Minimizing in $x$, i.e., taking $x=\left(K^{2}(\ln 2)n\right)^{{r/2}}$, and changing variables $u=t^{2/r}/(K^{2}n)$, lead to \begin{align*} \mathbb{E}\!\left(\widetilde{V}_{n}^{r}\right)\leq K^{r}\left[(\ln 2)^{r/2}+r\int_{\ln 2}^{\infty}e^{-u}u^{r/2-1}du\right]n^{r/2}=:C(r)\,n^{r/2}. \end{align*} When $x=0$, the corresponding constant is slightly bigger than $C(r)$, and is given by \begin{align*} D(r):=rK^{r}\int_{0}^{\infty}e^{-u}u^{r/2-1}du=rK^{r}\,\Gamma\left(\frac{r}{2}\right). \end{align*} \begin{remark}\label{rem:LCSUpConst} For the LCS case, the following constant was obtained, as a consequence of a general tensorization inequality, in~\cite{HoudreMa:2016}: \begin{align*} E(r):=(r-1)^{r}2^{r/2-1}\mathbb{P}\left(X_{1}\neq Y_{1}\right). \end{align*} For $r$ close to 2, $E(r)$ is smaller than $C(r)$. Indeed, \begin{align*} E(2)=\mathbb{P}\left(X_{1}\neq Y_{1}\right)\leq 1,\quad C(2)=1+\ln 2,\quad D(2)=2. \end{align*} However, for $r>2$ large, $E(r)$ might be bigger, e.g., \begin{align*} E(4)=162\,\mathbb{P}\left(X_{1}\neq Y_{1}\right),\quad D(4)=4\Gamma(2)=4. \end{align*} \end{remark} In addition to moment estimates, the above methodology also provides an upper bound on the moment generating function of $|V_{n}|=\widetilde{V}_{n}/\sqrt{n}=|L_{n}-\mathbb{E}(L_{n})|/\sqrt{n}$. Let $t>0$ and $r>1$. 
Then, with $K=1$ and using \eqref{eq:HoeffdingIneqOptScore}, \begin{align*} \mathbb{P}\left(e^{t|V_{n}|}\geq r\right)=\mathbb{P}\left(\widetilde{V}_{n}\geq\frac{\sqrt{n}\ln r}{t}\right)\leq 2\,e^{-\left(\ln r\right)^{2}/t^{2}}. \end{align*} Thus, \begin{align} M(t):=\mathbb{E}\left(e^{t\left|V_{n}\right|}\right)&\leq 1+2\int_{1}^{\infty}e^{-\left(\ln r\right)^{2}/t^{2}}\,dr=1+2\int_{0}^{\infty}\exp\left(-\frac{x^{2}}{t^{2}}+x\right)dx\nonumber\\ \label{eq:UpperBoundMGFVn} &\leq 1+t\sqrt{\pi}\left(1+\text{erf}\left(\frac{t}{2}\right)\right)e^{t^{2}/4}<1+2t\sqrt{\pi}\,e^{t^{2}/4}, \end{align} where $\text{erf}\,(\cdot)$ denotes the Gaussian error function. \begin{remark}\label{rem:UpperBoundSerExp} The upper bound on the moment generating function can clearly also be found through the upper bounds on the moments. Indeed, when $K=1$ and with the bound \begin{align*} \mathbb{E}\left(\left|V_{n}\right|^{r}\right)\leq D(r)=r\,\Gamma\left(\frac{r}{2}\right)=2\,\Gamma\left(\frac{r+2}{2}\right)=2\sqrt{\pi}\,\mathbb{E}\left(\left|\xi\right|^{r+1}\right),\quad r\in\mathbb{N}, \end{align*} where $\xi$ is a $\mathcal{N}(0,1/2)$ random variable, it follows that \begin{align*} \mathbb{E}\left(e^{t|V_{n}|}\right)=1+\sum_{r=1}^{\infty}\mathbb{E}\left(\left|V_{n}\right|^{r}\right)\frac{t^{r}}{r!}\leq 1+2\sqrt{\pi}\sum_{r=1}^{\infty}\mathbb{E}\left(\left|\xi\right|^{r+1}\right)\frac{t^{r}}{r!}=1+\frac{2\sqrt{\pi}}{t}\sum_{k=2}^{\infty}k\,\mathbb{E}\left(\left|\xi\right|^{k}\right)\frac{t^{k}}{k!}. \end{align*} Since, \begin{align*} \mathbb{E}\left(\left|\xi\right|e^{t|\xi|}\right)=\mathbb{E}\left(|\xi|\right)+\frac{1}{t}\sum_{k=2}^{\infty}k\,\mathbb{E}\left(\left|\xi\right|^{k}\right)\frac{t^{k}}{k!}=\frac{1}{\sqrt{\pi}}+\frac{1}{t}\sum_{k=2}^{\infty}k\,\mathbb{E}\left(\left|\xi\right|^{k}\right)\frac{t^{k}}{k!}, \end{align*} while, \begin{align*} \mathbb{E}\left(\left|\xi\right|e^{t|\xi|}\right)=\frac{d}{dt}\mathbb{E}\left(e^{t|\xi|}\right)=\frac{d}{dt}\left[\left(1+\emph{erf}\left(\frac{t}{2}\right)\right)e^{t^{2}/4}\right]=\frac{t}{2}\left(1+\emph{erf}\left(\frac{t}{2}\right)\right)e^{t^{2}/4}+\frac{1}{\sqrt{\pi}}, \end{align*} the upper bound in \eqref{eq:UpperBoundMGFVn} is recovered: \begin{align*} \mathbb{E}\left(e^{t|V_{n}|}\right)\leq 1+2\sqrt{\pi}\left(\mathbb{E}\left(\left|\xi\right|e^{t|\xi|}\right)-\mathbb{E}\left(|\xi|\right)\right)=1+t\sqrt{\pi}\left(1+\text{erf}\left(\frac{t}{2}\right)\right)e^{t^{2}/4}. \end{align*} \end{remark} \section{Lower Bounds on Generalized Moments}\label{sec:GenMomentBounds} Let $\Phi:\mathbb{R}^{+}\rightarrow\mathbb{R}^{+}$ be a convex, nondecreasing function. The main objective of this section is to find a lower bound for the expectation \begin{align}\label{mir} \mathbb{E}\left(\Phi\left(\left|L_{n}\!\left(\mathbf{X}_{n};\mathbf{Y}_{n}\right)-\mathbb{E}\left(L_{n}\!\left(\mathbf{X}_{n};\mathbf{Y}_{n}\right)\right)\right|\right)\right), \end{align} where, again \begin{align*} L_{n}=L_{n}\!\left(\mathbf{X}_{n};\mathbf{Y}_{n}\right)=\max_{(\boldsymbol\pi,\boldsymbol\mu)}U_{(\boldsymbol\pi,\boldsymbol\mu)}\!\left(\mathbf{X}_{n};\mathbf{Y}_{n}\right), \end{align*} and where again the maximum is taken over all possible alignments. \begin{remark} In what follows, we deal with lower bounds on \eqref{mir} with no further restrictions on $\Phi$. This is somehow different from the upper bound case, where to the best of our knowledge, there is no uniform approach that applies to every convex nondecreasing $\Phi$. 
Indeed, in the previous section, we established upper bounds for $\Phi(x)=|x|^{r}$, $r>0$, and $\Phi(x)=e^{tx}$, $t>0$. As shown in~\cite{HoudreMa:2016}, Efron-Stein type inequalities can be applied to $\Phi(x)=|x|^{r}$, $r>0$, via Burkholder's inequality, giving the aforementioned constant $E(r)$. Another generalization of the Efron-Stein inequality, given, for example, in~\cite[Chapter 14]{BoucheronLugosiMassart:2013}, is the subadditivity inequality of the so-called $\Phi$-entropy. However, it only applies to those functions $\Phi$ having strictly positive second derivative and such that $1/\Phi''$ is concave, e.g., $\Phi(x)=|x|^{r}$, for $r\in(1,2]$, or $\Phi(x)=x\log x$. Moreover, the upper bounds on \eqref{mir} for those $\Phi$ are linear in $n$ as easily seen via, e.g., \cite[Theorem 14.6]{BoucheronLugosiMassart:2013}, together with \eqref{eq:MaxScoreChange}. In~\cite[Chapter 15]{BoucheronLugosiMassart:2013}, some further generalizations of the Efron-Stein inequality for $\Phi(x)=|x|^{r}$, $r\geq 2$, are also obtained, which, together with \eqref{eq:MaxScoreChange}, also imply upper bounds of order $n^{r/2}$ on the $r$-th central moments (cf. Theorem 15.5 therein). \end{remark} \subsection{The Random Variable $U_{n}$ and the Set $\mathcal{U}_{n}$} Recall that $\mathcal{X}_{n}$ is the sample space of $\mathbf{X}_{n}$ and $\mathbf{Y}_{n}$, so that $\mathcal{X}_{n}\times\mathcal{X}_{n}$ is the sample space of $\mathbf{Z}_{n}=(\mathbf{X}_{n};\mathbf{Y}_{n})$. In the sequel, we consider a function $\mathfrak{u}:\,\mathcal{X}_{n}\times\mathcal{X}_{n}\rightarrow\mathbb{Z}$, so that $U_{n}:=\mathfrak{u}(\mathbf{Z}_{n})$ is an integer-valued random variable. Let $\mathcal{S}_{n}$ be the support of $U_{n}$, and for every $u\in\mathcal{S}_{n}$, let \begin{align*} \ell_{n}(u):=\mathbb{E}\left(L_{n}(\mathbf{Z}_{n})\,|\,U_{n}=u\right), \end{align*} and set $\mu_{n}:=\mathbb{E}(L_{n}(\mathbf{Z}_{n}))$. Since $x\mapsto\Phi(|x-\mu_{n}|)$ is convex, \begin{align*} \mathbb{E}\left(\Phi\left(|L_{n}(\mathbf{Z}_{n})-\mu_{n}|\right)\,|\,U_{n}\right)\geq\Phi\left(\left|\mathbb{E}\left(L_{n}(\mathbf{Z}_{n})|\,U_{n}\right)-\mu_{n}\right|\right)=\Phi\left(\left|\ell_{n}(U_{n})-\mu_{n}\right|\right), \end{align*} so that \begin{align*} \mathbb{E}\left(\Phi\left(\left|L_{n}(\mathbf{Z}_{n})-\mu_{n}\right|\right)\right)\geq\mathbb{E}\left(\Phi\left(\left|\ell_{n}(U_{n})-\mu_{n}\right|\right)\right). \end{align*} Let $\delta>0$, and let the set $\mathcal{U}_{n}\subset\mathcal{S}_{n}$ be such that \begin{align}\label{eq:MainAssump} \ell_{n}(u_{2})-\ell_{n}(u_{1})\geq\delta,\quad\text{for any }\,u_{1},u_{2}\in\mathcal{U}_{n}\,\text{ with }\,u_{1}<u_{2}. \end{align} Since $\mathcal{S}_{n}$ is finite, so is $\mathcal{U}_{n}$. Therefore, set \begin{align}\label{eq:DefUnk0} \mathcal{U}_{n}:=\{u_{1},\ldots,u_{m}\}\quad\text{and}\quad k_{0}:=\max_{i=1,\,\cdots,m-1}\left(u_{i+1}-u_{i}\right), \end{align} where $u_{1}<u_{2}<\cdots<u_{m}$ and where $m=m(n)$ depends on $n$. Clearly, except perhaps in some very special cases, such a set $\mathcal{U}_{n}$ formally always exists (it might even consist of only two elements). As we will see, $\mathcal{U}_{n}$ becomes useful if $\mathbb{P}(U_{n}\in\mathcal{U}_{n})\geq c_{0}$, for $\delta$ and $c_{0}$ independent of $n$. \begin{lemma}\label{lemma:MeanValelln} Under \eqref{eq:MainAssump}, for every $u_{i},u_{j}\in\mathcal{U}_{n}$, \begin{align*} \left|\ell_{n}(u_{i})-\ell_{n}(u_{j})\right|\geq\frac{\delta}{k_{0}}\left|u_{i}-u_{j}\right|.
\end{align*} Therefore, there exists $a_{n}\in[u_{1},u_{m}]$, such that for every $u_{i}\in\mathcal{U}_{n}$, \begin{align}\label{eq:MainAssumpA} \left|\ell_{n}(u_{i})-\mu_{n}\right|\geq\frac{\delta}{k_{0}}\left|u_{i}-a_{n}\right|. \end{align} \end{lemma} \noindent \textbf{Proof.} Let $g:[u_{1},u_{m}]\rightarrow\mathbb{R}^{+}$ be a differentiable function, such that $g(u_{i})=\ell_{n}(u_{i})$, for every $i=1,\ldots,m$, and such that $g'(x)\geq\delta/k_{0}$, for every $x\in(u_{1},u_{m})$. Then, one of the following three mutually exclusive possibilities holds: \begin{itemize} \item $\mu_{n}<\ell_{n}(u_{1})$, \item $\ell_{n}(u_{1})\leq\mu_{n}\leq\ell_{n}(u_{m})$, \item $\ell_{n}(u_{m})<\mu_{n}$. \end{itemize} In the first case, for every $u_{i}\in\mathcal{U}_{n}$, $i=1,\ldots,m$, \begin{align*} \left|\ell_{n}(u_{i})-\mu_{n}\right|=\ell_{n}(u_{i})-\mu_{n}\geq\ell_{n}(u_{i})-\ell_{n}(u_{1})\geq\frac{\delta}{k_{0}}\left(u_{i}-u_{1}\right), \end{align*} so \eqref{eq:MainAssumpA} holds with $a_{n}=u_{1}$. The third case can be handled similarly with $a_{n}=u_{m}$. For the middle case, since $g$ is increasing and continuous, there exists $a_{n}\in[u_{1},u_{m}]$ such that $\mu_{n}=g(a_n)$. Hence, by the mean value theorem, \begin{align*} \left|\ell_{n}(u_{i})-\mu_{n}\right|=\left|g(u_{i})-g(a_{n})\right|\geq\frac{\delta}{k_{0}}\left|u_{i}-a_{n}\right|, \end{align*} completing the proof. $\Box$ Now, in view of \eqref{eq:MainAssumpA}, since $\Phi$ is monotone and non-negative, \begin{align*} \mathbb{E}\left(\Phi\left(\left|\ell_{n}(U_{n})-\mu_{n}\right|\right)\right)\geq\sum_{u_{i}\in\mathcal{U}_{n}}\Phi\left(\left|\ell_{n}(u_{i})-\mu_{n}\right|\right)\mathbb{P}\left(U_{n}=u_{i}\right)\geq\sum_{u_{i}\in\mathcal{U}_{n}}\Phi\left(\frac{\delta}{k_{0}}\left|u_{i}-a_{n}\right|\right)\mathbb{P}\left(U_{n}=u_{i}\right), \end{align*} i.e., the lower bound on the generalized moment is of the form \begin{align}\label{eq:BasicEstEPhi} \mathbb{E}\!\left(\Phi\!\left(\left|L_{n}(\mathbf{Z}_{n})-\mu_{n}\right|\right)\right)\geq\mathbb{E}\!\left(\Phi\!\left(\left|\ell_{n}(U_{n})-\mu_{n}\right|\right)\right)&\geq\sum_{u_{i}\in\mathcal{U}_{n}}\Phi\left(\frac{\delta}{k_{0}}\left|u_{i}-a_{n}\right|\right)\mathbb{P}\left(U_{n}=u_{i}\right)\\ \label{eq:BasicEstEPhi2} &=\mathbb{E}\!\left(\!\left.\Phi\!\left(\frac{\delta}{k_{0}}\left|U_{n}-a_{n}\right|\right)\,\right|U_{n}\in\mathcal{U}_{n}\right)\mathbb{P}\!\left(U_{n}\in\mathcal{U}_{n}\right). \end{align} When $\Phi(x)=|x|^{r}$, for some $r\geq 1$, then \eqref{eq:BasicEstEPhi2} becomes \begin{align*} \mathbb{E}\left(\left|L_{n}(\mathbf{Z}_{n})-\mu_{n}\right|^{r}\right)\geq\left(\frac{\delta}{k_{0}}\right)^{r}\mathbb{E}\left(\left.\left|U_{n}-a_{n}\right|^{r}\,\right|U_{n}\in\mathcal{U}_{n}\right)\mathbb{P}\left(U_{n}\in\mathcal{U}_{n}\right). \end{align*} If $\mathbb{P}(U_{n}\in\mathcal{U}_{n})\geq c_{0}>0$, with $\delta$, $k_{0}$ and $c_{0}$ all independent of $n$, then \begin{align*} \mathbb{E}\left(\left|L_{n}(\mathbf{Z}_{n})-\mu_{n}\right|^{r}\right)\geq\left(\frac{\delta}{k_{0}}\right)^{r}c_{0}\,\mathbb{E}\left(\left.\left|U_{n}-a_{n}\right|^{r}\,\right|U_{n}\in\mathcal{U}_{n}\right)\geq\left(\frac{\delta}{k_{0}}\right)^{r}c_{0}\,\min_{a\in\mathbb{R}}\mathbb{E}\left(\left.\left|U_{n}-a\right|^{r}\,\right|U_{n}\in\mathcal{U}_{n}\right). \end{align*} Thus a lower bound on the centered absolute $r$th moment of $L_{n}(\mathbf{Z}_{n})$ is obtained via conditioning on $U_{n}$. 
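Anticipating the binomial situation described in the next paragraph, the following numerical sketch (in Python; purely illustrative, with arbitrary parameter values) evaluates the right-hand side of \eqref{eq:BasicEstEPhi2} for $\Phi(x)=|x|^{r}$, i.e., the quantity $(\delta/k_{0})^{r}\,\mathbb{E}\left(\left|U_{n}-a\right|^{r};U_{n}\in\mathcal{U}_{n}\right)$ minimized over $a$, when $U_{n}\sim B(2n,q)$ and $\mathcal{U}_{n}$ is a window of integers centered at $2nq$.
\begin{verbatim}
from math import comb

def binom_pmf(k, m, q):
    """P(B(m, q) = k)."""
    return comb(m, k) * q ** k * (1 - q) ** (m - k)

def rhs_lower_bound(n, q, r, delta, k0, half_width):
    """(delta/k0)^r * min_a E(|U_n - a|^r ; U_n in the window), for
    U_n ~ B(2n, q) and the window of integers within half_width of 2nq.
    The minimum over a is taken on the integer grid, which suffices
    for an illustration."""
    center = round(2 * n * q)
    window = range(center - half_width, center + half_width + 1)
    pmf = {k: binom_pmf(k, 2 * n, q) for k in window}

    def restricted_moment(a):
        return sum(abs(k - a) ** r * p for k, p in pmf.items())

    return (delta / k0) ** r * min(restricted_moment(a) for a in window)

if __name__ == "__main__":
    n = 200   # illustrative values; delta and k0 come from the main assumption
    print(rhs_lower_bound(n, q=0.5, r=2, delta=0.1, k0=1,
                          half_width=int((2 * n) ** 0.5)))
\end{verbatim}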
Typically, in applications, the random variable $U_{n}$ counts the number of some particular letters in $\mathbf{Z}_{n}$, and, therefore, it has a binomial distribution. In this case, the right hand side of \eqref{eq:BasicEstEPhi2} is relatively easy to compute. \begin{remark}\label{rem:Phin} Note that the argument leading to \eqref{eq:BasicEstEPhi2} is independent of $n$. Therefore, \eqref{eq:BasicEstEPhi} continues to hold whenever $\Phi=\Phi_{n}$ depends on $n$, provided each $\Phi_{n}$ remains convex and non-decreasing. \end{remark} \subsection{On the Existence of $U_{n}$ and $\mathcal{U}_{n}$}\label{subsec:ExistUns} As shown in the previous subsection, the lower bound on the $\Phi$-moment can be obtained via the random variable $U_{n}$, provided that there exists a universal constant $\delta>0$ such that the corresponding set $\mathcal{U}_{n}$ has a large enough probability. In what follows, we show that the existence of a suitable $\mathcal{U}_{n}$ can be guaranteed using a {\it random transformation}, \begin{align*} \mathcal{R}:\Omega\times\mathcal{X}_{n}\times\mathcal{X}_{n}\rightarrow\mathcal{X}_{n}\times\mathcal{X}_{n}, \end{align*} such that, for most of the outcomes of $\mathbf{Z}_{n}$, $\mathcal{R}$ increases the score by at least some fixed amount $\varepsilon_{0}>0$. Here, with a slight abuse of notation, $\Omega$ is a sample space for the randomness involved in $\mathcal{R}$. More precisely, the random transformation should be such that there exists a set $B_{n}\subset\mathcal{X}_{n}\times\mathcal{X}_{n}$ having a relatively large probability so that, for every $\mathbf{z}_{n}\in B_{n}$, the expected score of $\mathcal{R}(\mathbf{z}_{n})$ exceeds the score of $\mathbf{z}_{n}$ by at least $\varepsilon_{0}$, namely, \begin{align*} \mathbb{E}\left(L_{n}(\mathcal{R}(\mathbf{z}_{n}))\right)\geq L_{n}(\mathbf{z}_{n})+\varepsilon_{0}. \end{align*} Above $\mathbb{E}$ denotes the expectation over the randomness involved in $\mathcal{R}$ (i.e., the integral over $\Omega$). The requirement that the set $B_{n}$ has a relatively large probability is formalized by requiring that \begin{align}\label{eq:ReqRanTrans} \Delta_{n}(\varepsilon_0):=\mathbb{P}\left(\mathbb{E}\left(\left.L_{n}(\mathcal{R}(\mathbf{Z}_{n}))-L_{n}(\mathbf{Z}_{n})\,\right|\mathbf{Z}_{n}\right)<\varepsilon_{0}\right)\rightarrow 0,\quad\text{as }\,n\rightarrow\infty. \end{align} We will see that the faster $\Delta_{n}(\varepsilon_{0})$ tends to zero, the better the lower bound is. The random transformation should be associated with $U_{n}$ and the set $\mathcal{U}_{n}=\{u_{1},\ldots,u_{m}\}$ so that, when $U_{n}$ takes a value $u_{i}\in\mathcal{U}_{n}$, the transformation $\mathcal{R}$ moves this value to the next element $u_{i+1}$, and the law of $\mathcal{R}(\mathbf{Z}_{n})$ given $U_{n}=u_{i}$ is exactly the law of $\mathbf{Z}_{n}$ given $U_{n}=u_{i+1}$. To formalize this requirement, for every $u\in\mathcal{U}_{n}$, let $\mathbb{P}^{(u)}$ be the law of $\mathbf{Z}_{n}$ given $U_{n}=u$, i.e., \begin{align*} \mathbb{P}^{(u)}(A)=\mathbb{P}\left(\left.\mathbf{Z}_{n}\in A\,\right|U_{n}=u\right),\quad A\subset\mathcal{X}_{n}\times\mathcal{X}_{n}.
\end{align*} Now, for every $u_{i},u_{i+1}\in\mathcal{U}_{n}$, the following implication should hold: if $\mathbf{Z}_{n}^{(u_{i})}$ is a random vector with law $\mathbb{P}^{(u_{i})}$, then \begin{align}\label{eq:RanTransImpli} \mathcal{R}\left(\mathbf{Z}_{n}^{(u_{i})}\right)\sim\mathbb{P}^{(u_{i+1})}, \end{align} which means that for every $\mathbf{z}_{n}\in\mathcal{X}_{n}\times\mathcal{X}_{n}$, \begin{align*} \mathbb{P}\left(\left.\mathcal{R}(\mathbf{Z}_{n})=\mathbf{z}_{n}\,\right|U_{n}=u_{i}\right)=\mathbb{P}\left(\left.\mathbf{Z}_{n}=\mathbf{z}_{n}\,\right|U_{n}=u_{i+1}\right). \end{align*} In particular, \eqref{eq:RanTransImpli} implies that if $\mathfrak{u}(\mathbf{Z}_{n})=u_{i}$, then $\mathfrak{u}(\mathcal{R}(\mathbf{Z}_{n}))=u_{i+1}$, but the converse statement might not be true. \noindent \textbf{Fast Convergence Condition.} Our first general lower bound theorem assumes the existence of $\varepsilon_{0}$ so that $\Delta_{n}(\varepsilon_{0})$ converges to zero sufficiently fast. To simplify the notation, in what follows, let $\widetilde{\mathbf{Z}}_{n}:=\mathcal{R}(\mathbf{Z}_{n})$. The following theorem is a generalization of~\cite[Theorem 2.2]{LemberMatzingerTorres:2012(2)}. \begin{theorem}\label{thm:FastConvThm} For $n\in\mathbb{N}$, let there exist a random variable $U_{n}=\mathfrak{u}(\mathbf{Z}_{n})$, a set $\mathcal{U}_{n}=\{u_{1},\ldots,u_{m(n)}\}$, and a random transformation $\mathcal{R}$ satisfying \eqref{eq:RanTransImpli}, such that the following two assumptions hold: \begin{itemize} \item [(i)] There exists a universal constant $A>0$ such that \begin{align}\label{eq:DiffLtildeZLZ} L_{n}\big(\widetilde{\mathbf{Z}}_{n}\big)-L_{n}(\mathbf{Z}_{n})\geq -A. \end{align} \item [(ii)] There exists a universal constant $\varepsilon_{0}>0$ such that \begin{align}\label{eq:DeltafastVarphi} \Delta_{n}(\varepsilon_{0})=o(\varphi(n)),\quad n\rightarrow\infty, \end{align} where $\Delta_{n}(\varepsilon_{0})$ is given by \eqref{eq:ReqRanTrans}, and where $\varphi$ is a function on $\mathbb{N}$ such that \begin{align}\label{eq:GenLowerBoundProbU} \mathbb{P}\left(U_{n}=u\right)\geq\varphi(n),\quad\text{for all }\,u\in\mathcal{U}_{n}. \end{align} \end{itemize} Then, there exists $a_{n}\in[u_{1},u_{m}]$ so that for $n$ large enough, \begin{align}\label{eq:trm31} \mathbb{E}\left(\Phi\left(\left|L_{n}(\mathbf{Z}_{n})-\mu_{n}\right|\right)\right)\geq\sum_{u_{i}\in\mathcal{U}_{n}}\Phi\left(\frac{\delta}{k_{0}}\left|u_{i}-a_{n}\right|\right)\mathbb{P}\left(U_{n}=u_{i}\right)\geq\sum_{u_{i}\in\mathcal{U}_{n}}\Phi\left(\frac{\delta}{k_{0}}\left|u_{i}-a_{n}\right|\right)\varphi(n), \end{align} where $\delta=\varepsilon_{0}/2$ and $k_{0}(n)=\max_{i=1,\cdots,m-1}(u_{i+1}-u_{i})$. \end{theorem} \noindent \textbf{Proof.} Let $u_{i}\in\mathcal{U}_{n}$, and let $\mathbf{Z}_{n}^{(u_{i})}$ be a random vector with law $\mathbb{P}^{(u_{i})}$. By \eqref{eq:RanTransImpli}, \begin{align*} \ell_{n}(u_{i+1})=\mathbb{E}\left(L_{n}\!\left(\widetilde{\mathbf{Z}}_{n}^{(u_{i})}\right)\right). \end{align*} Hence, \begin{align*} \ell_{n}(u_{i+1})-\ell_{n}(u_{i})=\mathbb{E}\left(L_{n}\!\left(\widetilde{\mathbf{Z}}_{n}^{(u_{i})}\right)\right)-\mathbb{E}\left(L_{n}\!\left(\mathbf{Z}_{n}^{(u_{i})}\right)\right)=\mathbb{E}\left[\mathbb{E}\left(\left.L_{n}\!\left(\widetilde{\mathbf{Z}}_{n}^{(u_{i})}\right)-L_{n}\!\left(\mathbf{Z}_{n}^{(u_{i})}\right)\,\right|\mathbf{Z}_{n}^{(u_{i})}\right)\right].
\end{align*} Let $\varepsilon_{0}$ be as in (ii), and let $B_{n}(\varepsilon_{0})\subset\mathcal{X}_{n}\times\mathcal{X}_{n}$ be the set of outcomes of $\mathbf{Z}_{n}$ such that \begin{align}\label{eq:DefZBn} \left\{\mathbb{E}\left(\left.L_{n}\big(\widetilde{\mathbf{Z}}_{n}\big)-L_{n}(\mathbf{Z}_{n})\,\right|\mathbf{Z}_{n}\right)\geq\varepsilon_{0}\right\}=\left\{\mathbf{Z}_{n}\in B_{n}(\varepsilon_{0})\right\}. \end{align} By \eqref{eq:DiffLtildeZLZ}, for any pair of sequences $\mathbf{z}_{n}$, applying the transformation $\mathcal{R}$ can decrease the score by at most $A$. Hence \begin{align*} \mathbb{E}\left[\mathbb{E}\left(\left.L_{n}\!\left(\widetilde{\mathbf{Z}}_{n}^{(u_{i})}\right)-L_{n}\!\left(\mathbf{Z}_{n}^{(u_{i})}\right)\,\right|\mathbf{Z}_{n}^{(u_{i})}\right)\right]\geq\varepsilon_{0}\,\mathbb{P}\left(\mathbf{Z}_{n}^{(u_{i})}\in B_{n}(\varepsilon_{0})\right)-A\,\mathbb{P}\left(\mathbf{Z}_{n}^{(u_{i})}\not\in B_{n}(\varepsilon_{0})\right). \end{align*} By definition, $\mathbb{P}(\mathbf{Z}_{n}\not\in B_{n}(\varepsilon_{0}))=\Delta_{n}(\varepsilon_{0})$. Therefore, by \eqref{eq:DeltafastVarphi} and \eqref{eq:GenLowerBoundProbU}, \begin{align*} \mathbb{P}\left(\mathbf{Z}_{n}^{(u_{i})}\not\in B_{n}(\varepsilon_{0})\right)=\mathbb{P}\left(\left.\mathbf{Z}_{n}\not\in B_{n}(\varepsilon_{0})\,\right|U_{n}=u_{i}\right)\leq\frac{\Delta_{n}(\varepsilon_{0})}{\mathbb{P}\left(U_{n}=u_{i}\right)}\leq\frac{\Delta_{n}(\varepsilon_{0})}{\varphi(n)}=o(1),\quad n\rightarrow\infty. \end{align*} Choose $n_{0}$ large enough (depending on $\varepsilon_{0}$, $A$ and the sequence $\Delta_{n}(\varepsilon_{0})/\varphi(n)$) so that for any $n>n_{0}$, \begin{align*} \varepsilon_{0}\left(1-\frac{\Delta_{n}(\varepsilon_{0})}{\varphi(n)}\right)-A\,\frac{\Delta_{n}(\varepsilon_{0})}{\varphi(n)}\geq\frac{\varepsilon_{0}}{2}. \end{align*} Therefore, for any $n>n_{0}$ and any $u_{i}\in\mathcal{U}_{n}$, \begin{align*} \ell_{n}(u_{i+1})-\ell_{n}(u_{i})\geq\frac{\varepsilon_{0}}{2}=:\delta. \end{align*} This proves \eqref{eq:MainAssump}, which entails \eqref{eq:MainAssumpA} and thus \eqref{eq:trm31}. The proof is now complete. $\Box$ \begin{remark}\label{rem:SufCondMainAssump} The assumption \eqref{eq:DiffLtildeZLZ} typically holds for any meaningful function $\mathfrak{u}$. The difficulties in applying Theorem \ref{thm:FastConvThm} lie in finding $\mathfrak{u}$, $\mathcal{R}$ and $\mathcal{U}_{n}$, such that \eqref{eq:DeltafastVarphi} holds for a given $\varepsilon_{0}$. Since typically for every $u\in\mathcal{S}_{n}$, $\mathbb{P}(U_{n}=u)\rightarrow 0$, $\Delta_{n}(\varepsilon_{0})$ has to converge to zero sufficiently fast as $n\rightarrow\infty$. The larger the probability $\mathbb{P}(U_{n}\in\mathcal{U}_n)$ is required to be (recall that it has to be bounded away from zero), the more atoms $\mathcal{U}_{n}$ must contain, hence the smaller $\varphi(n)$ is, and the faster $\Delta_{n}(\varepsilon_{0})$ must converge to zero. \end{remark} \subsection{Uniform Approximation}\label{subsec:UnifApprox} As mentioned in Remark \ref{rem:SufCondMainAssump}, the most important assumption in Theorem \ref{thm:FastConvThm} is the existence of the random transformation $\mathcal{R}$ and of the corresponding random variable $U_{n}$ such that (for some $\varepsilon_{0}>0$) the probability $\Delta_{n}(\varepsilon_{0})$ converges to zero sufficiently fast.
For all purposes, this is often a very specific requirement and so far, the existence of suitable $\mathcal{R}$ and $U_{n}$ has only been shown for very specific models (cf.~\cite{HoudreMatzinger:2007}, \cite{LemberMatzinger:2009}, \cite{LemberMatzingerTorres:2012(2)} and~\cite{HoudreMa:2016}). It is therefore important to relax the assumption of fast convergence, which is the goal of the present subsection. Below, the condition \eqref{eq:DeltafastVarphi} is replaced by an arbitrarily slowly convergent sequence $\Delta_{n}(\varepsilon_{0})\rightarrow 0$. \begin{theorem}\label{thm:UnifConv} For every $n\in\mathbb{N}$, let there exist a random variable $U_{n}=\mathfrak{u}(\mathbf{Z}_{n})$, a set $\mathcal{U}_{n}=\{u_{1},\ldots,u_{m(n)}\}\subset\mathcal{S}_{n}$, a random transformation $\mathcal{R}$, a positive function $\varphi$ on $\mathbb{N}$, and a constant $A>0$, such that \eqref{eq:RanTransImpli}, \eqref{eq:DiffLtildeZLZ} and \eqref{eq:GenLowerBoundProbU} are satisfied. Moreover, let there exist a universal constant $c>0$, such that \begin{align}\label{eq:LowerBoundSizeUn} m(n)\geq c\,\varphi^{-1}(n),\quad\text{for all }\,n\in\mathbb{N}. \end{align} If for an $\varepsilon_{0}>0$, $\Delta_{n}(\varepsilon_{0})\rightarrow 0$, then for $n$ large enough, \begin{align*} \mathbb{E}\left(\Phi\left(\left|L_{n}(\mathbf{Z}_{n})-\mu_{n}\right|\right)\right)\geq\sum_{j=1}^{\frac{c}{8}\varphi^{-1}(n)}\Phi\left(\frac{\varepsilon_{0}}{2}\left(\frac{c}{8\varphi(n)}+j\right)\right)\varphi(n)+\frac{c}{8}\,\Phi\left(\frac{\varepsilon_{0}c}{16\varphi(n)}\right)\geq\frac{c}{4}\,\Phi\left(\frac{\varepsilon_{0}c}{16\varphi(n)}\right). \end{align*} \end{theorem} \noindent \textbf{Proof.} Let $\varepsilon_{0}$ be as in the assumption of the theorem and let the set $\mathcal{U}_{n}^{0}\subseteq\mathcal{S}_{n}$ be defined as follows: \begin{align}\label{eq:DefUn0} u\in\mathcal{U}_{n}^{0}\quad\Leftrightarrow\quad\mathbb{P}\left(\left.\mathbf{Z}_{n}\not\in B_{n}(\varepsilon_{0})\right|U_{n}=u\right)\leq\sqrt{\Delta_{n}(\varepsilon_{0})}, \end{align} where $B_{n}(\varepsilon_{0})$ is given by \eqref{eq:DefZBn}. Thus, by the very definition of $\Delta_{n}(\varepsilon_{0})$, \begin{align*} \Delta_{n}(\varepsilon_{0})=\mathbb{P}\left(\mathbf{Z}_{n}\not\in B_{n}(\varepsilon_{0})\right)\geq\sum_{u\not\in\mathcal{U}_{n}^{0}}\mathbb{P}\left(\left.\mathbf{Z}_{n}\not\in B_{n}(\varepsilon_{0})\,\right|U_{n}=u\right)\mathbb{P}\left(U_{n}=u\right)\geq\sqrt{\Delta_{n}(\varepsilon_{0})}\,\mathbb{P}\left(U_{n}\not\in\mathcal{U}_{n}^{0}\right), \end{align*} which implies that \begin{align}\label{eq:ProbUnotUn0} \mathbb{P}\left(U_{n}\not\in\mathcal{U}_{n}^{0}\right)\leq\sqrt{\Delta_{n}(\varepsilon_{0})}. \end{align} When $u_{i}\in\mathcal{U}_{n}\cap\mathcal{U}_{n}^{0}$, using an argument as in the proof of Theorem \ref{thm:FastConvThm}, and by \eqref{eq:DiffLtildeZLZ} and \eqref{eq:DefUn0}, \begin{align}\label{eq:DiffellUnUn0} \ell_{n}(u_{i+1})-\ell_{n}(u_{i})\geq\varepsilon_{0}\left(1-\sqrt{\Delta_{n}(\varepsilon_{0})}\right)-A\sqrt{\Delta_{n}(\varepsilon_{0})}\geq\frac{\varepsilon_{0}}{2}, \end{align} provided that $n>n_{1}$, for some positive integer $n_{1}$ large enough. When $u_{i}\not\in\mathcal{U}_{n}\cap\mathcal{U}_{n}^{0}$, by \eqref{eq:DiffLtildeZLZ}, \begin{align}\label{eq:DiffEllNotUnUn0} \ell_{n}(u_{i+1})-\ell_{n}(u_{i})\geq -A. 
\end{align} Assuming $n>n_{1}$, we are interested in the set $\mathcal{U}_{n}\cap\mathcal{U}_{n}^{0}$, which can be represented as the union of disjoint subintervals of $\mathcal{U}_{n}=\{u_{1},\,\cdots,u_{m}\}$, i.e., $\mathcal{U}_{n}\cap\mathcal{U}_{n}^{0}=\cup_{j=1}^{N}I_{j}$, where for $j=1,\ldots,N$, \begin{align*} I_{j}=\left\{u_{1}^{(j)},\ldots,u_{r_{j}}^{(j)}\right\},\quad\text{with }\,u_{1}^{(1)}<\cdots<u_{r_{1}}^{(1)}<u_{1}^{(2)}<\cdots<u_{r_{2}}^{(2)}<\cdots<u_{1}^{(N)}<\cdots<u_{r_{N}}^{(N)}, \end{align*} are disjoint subintervals of $\mathcal{U}_{n}$. Clearly, the number $N$ of such intervals as well as the intervals $I_{1},\ldots,{I_{N}}$ themselves depend on $n$. On each interval $I_{j}$, \eqref{eq:DiffellUnUn0} implies that the function $\ell_{n}$ increases with a slope of at least $\varepsilon_{0}/2$, and moreover, note that if $u_{i}=u_{r_{j}}^{(j)}$, then $u_{i+1}\not\in I_{j}$. Note also that the approach used in proving Theorem \ref{thm:FastConvThm} largely worked because $\mathcal{U}_{n}$ was a single interval on which $\ell_{n}$ increases. In the present case, the set $\mathcal{U}_{n}\cap\mathcal{U}^{0}_{n}$ consists of several intervals and between them the function $\ell_{n}$ does not necessarily increase. At first, it might appear that the approach of the previous subsection would work, when replacing \begin{align*} \sum_{u_{i}\in\mathcal{U}_{n}}\Phi\left(\delta\left|u_{i}-a_{n}\right|\right)\mathbb{P}\left(U_{n}=u_{i}\right) \end{align*} with \begin{align*} \sum_{u_{i}\in\mathcal{U}_{n}\cap\,\mathcal{U}^{0}_{n}}\Phi\left(\delta\left|u_{i}-a_{n}\right|\right)\mathbb{P}\left(U_{n}=u_{i}\right)=\sum_{j=1}^{N}\sum_{u_{i}\in I_{j}}\Phi\left(\delta\left|u_{i}-a_{n}\right|\right)\mathbb{P}\left(U_{n}=u_{i}\right). \end{align*} Unfortunately, $a_{n}$ would now depend on $I_{j}$, and we would have to deal with the sums \begin{align*} \sum_{j=1}^{N}\sum_{u_{i}\in I_{j}}\Phi\left(\delta\left|u_{i}-a_{n}^{(j)}\right|\right)\mathbb{P}\left(U_{n}=u_{i}\right). \end{align*} To bypass this problem, we proceed differently. Consider the sets \begin{align*} J_{j}:=\left\{\ell_{n}\!\left(u_{1}^{(j)}\right),\,\ldots\,,\ell_{n}\!\left(u_{r_{j}}^{(j)}\right)\right\},\quad j=1,\ldots,N, \end{align*} i.e., $J_{j}$ is the image of $I_{j}$ under $\ell_{n}$. By \eqref{eq:DiffellUnUn0}, all elements of $J_{j}$ are at least $\varepsilon_{0}/2$-apart from each other (the intervals $J_{j}$ might overlap, although the intervals $I_{j}$ do not). By \eqref{eq:DiffEllNotUnUn0}, \begin{align*} \sum_{u_{i}\in\mathcal{U}_{n}\setminus\mathcal{U}_{n}^{0}}\left(\ell_{n}(u_{i+1})-\ell_{n}(u_{i})\right)\geq -A\cdot\text{Card}(\mathcal{U}_{n}\setminus\mathcal{U}_{n}^{0})=: -A\,m_{0}(n), \end{align*} implying that the sum of the lengths of the intervals $J_{j}$ differs from the length of $J:=\cup_{j=1}^{N}J_{j}$ by at most $A\,m_{0}(n)$. Formally, \begin{align*} \sum_{j=1}^{N}\left(\ell_{n}\!\left(u_{r_{j}}^{(j)}\right)-\ell_{n}\!\left(u_{1}^{(j)}\right)\right)-|J|\leq A\,m_{0}(n), \end{align*} where $|J|$ denotes the length of $J$, i.e., the difference between the largest element and the smallest element of $J$, and $\ell_{n}(u_{r_{j}}^{(j)})-\ell_{n}(u_{1}^{(j)})$ is the length of each $J_{j}$. The number of elements $\varepsilon_{0}/2$-apart needed for covering a (real) interval with length $A\,m_{0}(n)$ is at most $2A\,m_{0}(n)/\varepsilon_{0}+1$.
Hence, at most $2A\,m_{0}(n)/\varepsilon_{0}+1$ elements $\varepsilon_{0}/2$-apart will be lost due to overlappings, which implies that the set $J$ contains at least \begin{align*} \left|\mathcal{U}_{n}\right|-\frac{2A\,m_{0}(n)}{\varepsilon_{0}}-1=m(n)-\frac{2A\,m_{0}(n)}{\varepsilon_{0}}-1 \end{align*} elements which are (at least) $\varepsilon_{0}/2$-apart from each other. Using \eqref{eq:GenLowerBoundProbU} and \eqref{eq:ProbUnotUn0}, \begin{align*} \sqrt{\Delta_{n}(\varepsilon_{0})}\geq\mathbb{P}\left(U_{n}\in\mathcal{U}_{n}\setminus\mathcal{U}^{0}_{n}\right)=\sum_{u\in\mathcal{U}_{n}\setminus\mathcal{U}^{0}_{n}}\mathbb{P}\left(U_{n}=u\right)\geq m_{0}(n)\,\varphi(n), \end{align*} and therefore, \begin{align*} m_{0}(n)\leq\sqrt{\Delta_{n}(\varepsilon_{0})}\,\varphi^{-1}(n). \end{align*} Recalling the assumption \eqref{eq:LowerBoundSizeUn}, there exists $n_{2}\in\mathbb{N}$ such that for all $n>n_{2}$, \begin{align}\label{eq:LowerBoundNumberEps2Points} m(n)-\frac{2A\,m_{0}(n)}{\varepsilon_{0}}-1\geq m(n)-\frac{2A\sqrt{\Delta_{n}(\varepsilon_{0})}}{\varepsilon_{0}\varphi(n)}\geq\frac{c\varepsilon_{0}-2A\sqrt{\Delta_{n}(\varepsilon_{0})}}{\varepsilon_{0}\varphi(n)}\geq\frac{c}{2}\varphi^{-1}(n). \end{align} Now let \begin{align*} \mathcal{B}_{n}:=\left\{u\in\mathcal{U}_{n}:\,\left|\ell_{n}(u)-\mu_{n}\right|\geq\frac{\varepsilon_{0}c}{16\varphi(n)}\right\}=\mathcal{B}_{n}^{+}\cup\mathcal{B}_{n}^{-}, \end{align*} where \begin{align*} \mathcal{B}_{n}^{+}:=\left\{u\in\mathcal{U}_{n}:\,\ell_{n}(u)-\mu_{n}\geq\frac{\varepsilon_{0}c}{16\varphi(n)}\right\}\quad\text{and}\quad\mathcal{B}_{n}^{-}:=\left\{u\in\mathcal{U}_{n}:\,\ell_{n}(u)-\mu_{n}\leq -\frac{\varepsilon_{0}c}{16\varphi(n)}\right\}. \end{align*} Since the interval \begin{align*} \left(\mu_{n}-\frac{\varepsilon_{0}c}{16\varphi(n)},\,\mu_{n}+\frac{\varepsilon_{0}c}{16\varphi(n)}\right) \end{align*} contains at most $c\varphi^{-1}(n)/4$ elements which are $\varepsilon_{0}/2$-apart from each other, and since the set $J$ contains at least $c\varphi^{-1}(n)/2$ elements which are $\varepsilon_{0}/2$-apart from each other, it follows that $\mathcal{B}_{n}$ contains at least $c\varphi^{-1}(n)/4$ elements (the values of $\ell_{n}$ being $\varepsilon_{0}/2$-apart from each other). This implies that at least one of the two sets $\mathcal{B}_{n}^{+}$ and $\mathcal{B}_{n}^{-}$, say $\mathcal{B}_{n}^{+}$, contains at least $c\varphi^{-1}(n)/8$ elements whose $\ell_{n}$-images are at least $\varepsilon_{0}/2$-apart from each other. Let $\widetilde{\mathcal{B}}_{n}$ be the union of those $c\varphi^{-1}(n)/8$ elements in $\mathcal{B}_{n}^{+}$. Then the set $\mathcal{B}_{n}\setminus\widetilde{\mathcal{B}}_{n}$ consists of at least $c\varphi^{-1}(n)/8$ elements (whose $\ell_{n}$-images are at least $\varepsilon_{0}/2$-apart from each other). 
Hence, by the monotonicity of $\Phi$, \begin{align*} \sum_{i=1}^{m(n)}\Phi\left(\left|\ell_{n}(u_{i})-\mu_{n}\right|\right)\mathbb{P}\left(U_{n}=u_{i}\right)&\geq\sum_{i=1}^{m(n)}\Phi\left(\left|\ell_{n}(u_{i})-\mu_{n}\right|\right)\varphi(n)\geq\sum_{u\in\mathcal{B}_{n}}\Phi\left(\left|\ell_{n}(u)-\mu_{n}\right|\right)\varphi(n)\\ &\geq\sum_{u\in\widetilde{\mathcal{B}}_{n}}\Phi\left(\left|\ell_{n}(u)-\mu_{n}\right|\right)\varphi(n)+\sum_{u\in\mathcal{B}_{n}\setminus\widetilde{\mathcal{B}}_{n}}\Phi\left(\left|\ell_{n}(u)-\mu_{n}\right|\right)\varphi(n)\\ &\geq\sum_{j=1}^{\frac{c}{8}\varphi^{-1}(n)}\!\Phi\!\left(\frac{\varepsilon_{0}}{2}\left(\frac{c}{8\varphi(n)}\!+\!j\right)\right)\!\varphi(n)+\Phi\!\left(\frac{\varepsilon_{0}c}{16\varphi(n)}\right)\frac{c}{8}\varphi^{-1}(n)\varphi(n)\\ &\geq\sum_{j=1}^{\frac{c}{8}\varphi^{-1}(n)}\Phi\left(\frac{\varepsilon_{0}}{2}\left(\frac{c}{8\varphi(n)}+j\right)\right)\varphi(n)+\Phi\left(\frac{\varepsilon_{0}c}{16\varphi(n)}\right)\frac{c}{8}\\ &\geq\frac{c}{4}\,\Phi\left(\frac{\varepsilon_{0}c}{16\varphi(n)}\right). \end{align*} The inequality \eqref{eq:BasicEstEPhi} now finishes the proof. $\Box$ \begin{remark}\label{rem:UniformLargeN} In both Theorem \ref{thm:FastConvThm} and Theorem \ref{thm:UnifConv}, the thresholds on $n$ are independent of the choice of the convex function $\Phi$. In Theorem \ref{thm:FastConvThm}, the threshold $n_{0}$ depends on the sequence $\Delta_{n}(\varepsilon_{0})/\varphi(n)$, but not on $\Phi$. In Theorem \ref{thm:UnifConv}, the threshold on $n$ is obtained in two steps: the first threshold $n_{1}$ is to ensure the validity of \eqref{eq:DiffellUnUn0}, while the second threshold $n_{2}$ is to guarantee the validity of the last inequality in \eqref{eq:LowerBoundNumberEps2Points}. Both steps are independent of the choice of $\Phi$. \end{remark} \begin{remark}\label{rem:UnifApprox} The assumption $m(n)=|\mathcal{U}_{n}|\geq c\varphi^{-1}(n)$ (for all $n\in\mathbb{N}$), together with \eqref{eq:GenLowerBoundProbU}, implies that for every $u\in\mathcal{U}_{n}$, $\mathbb{P}(U_{n}=u)\geq\varphi(n)\geq c\,m^{-1}(n)$. Hence, $\mathcal{U}_{n}$ cannot be too large, since all the atoms in $\mathcal{U}_{n}$ have roughly equal masses, justifying our naming this approach ``uniform approximation". On the other hand, these two assumptions imply that $\mathbb{P}(U_{n}\in\mathcal{U}_{n})\geq c$, so that $\mathcal{U}_{n}$ cannot be too small either. In the next section, we will see that a set $\mathcal{U}_{n}$ satisfying the requirements of Theorem \ref{thm:UnifConv} does exist, but its choice must be made more carefully when compared to the one in Theorem \ref{thm:FastConvThm}. This is the price to pay for the arbitrarily slow convergence of $\Delta_{n}(\varepsilon_{0})$. \end{remark} \section{The Binary LCS Case}\label{sec:LCS} As an application of the general results of the previous section, in the rest of the paper, we consider the binary alphabet $\mathcal{A}=\{0,1\}$, i.e., $\mathbf{X}_{n}$ and $\mathbf{Y}_{n}$ are two independent i.i.d. sequences such that $\mathbb{P}(X_{1}=1)=\mathbb{P}(Y_{1}=1)=p$, $p\in(0,1)$, and $L_{n}(\mathbf{X}_{n};\mathbf{Y}_{n})$ is the length of the LCS of $\mathbf{X}_{n}$ and $\mathbf{Y}_{n}$. Let $\mathfrak{u}(\mathbf{z}_{n})$ count the zeros in $\mathbf{z}_{n}=(\mathbf{x}_{n};\mathbf{y}_{n})$. Thus $U_{n}$ is equal to the number of zeros in $\mathbf{Z}_{n}=(\mathbf{X}_{n};\mathbf{Y}_{n})$ and therefore $U_{n}\sim B(2n,q)$, where $q=1-p$, and its support is $\mathcal{S}_{n}=\{0,1,\ldots,2n\}$.
Let the random transformation $\mathcal{R}$ be defined as follows: given $\mathbf{z}_{n}$, choose randomly (with uniform probability) a one and turn it into a zero. With this transformation, the condition \eqref{eq:RanTransImpli} holds for any $u=0,\ldots,2n-1$. To see this, for any $u=0,\ldots,2n$, let $\mathcal{A}_{n}(u)\subseteq\{0,1\}^{2n}$ consist of all the binary sequences containing exactly $u$ zeros. For any $\mathbf{z}_{n}\in\mathcal{A}_{n}(u)$, \begin{align*} \mathbb{P}\left(\left.\mathbf{Z}_{n}=\mathbf{z}_{n}\,\right|U_{n}=u\right)=\frac{\mathbb{P}\left(\mathbf{Z}_{n}=\mathbf{z}_{n},\,U_{n}=u\right)}{\mathbb{P}\left(U_{n}=u\right)}=\frac{\mathbb{P}\left(\mathbf{Z}_{n}=\mathbf{z}_{n}\right)}{\mathbb{P}\left(U_{n}=u\right)}=\frac{q^{u}(1-q)^{2n-u}}{\displaystyle{\binom{2n}{u}}q^{u}(1-q)^{2n-u}}=\binom{2n}{u}^{-1}. \end{align*} In other words, $\mathbb{P}^{(u)}$ is the uniform distribution on $\mathcal{A}_{n}(u)$. Now, for any such $u$, let $\mathbf{Z}_{n}^{(u)}$ be a random vector with law $\mathbb{P}^{(u)}$; then $\widetilde{\mathbf{Z}}_{n}^{(u)}=\mathcal{R}(\mathbf{Z}_{n}^{(u)})$ is supported on $\mathcal{A}_{n}(u+1)$. For any $\mathbf{z}_{n}\in\mathcal{A}_{n}(u+1)$, let $1\leq i_{1}<\cdots<i_{u+1}\leq 2n$ be the positions of the zeros in $\mathbf{z}_{n}$, and let $\widehat{\mathbf{z}}_{n}^{(i_{j})}$, $j=1,\ldots,u+1$, be the sequence in $\mathcal{A}_{n}(u)$ obtained by changing the zero of $\mathbf{z}_{n}$ at the position $i_{j}$ to a one. Then, \begin{align*} \mathbb{P}\!\left(\widetilde{\mathbf{Z}}_{n}^{(u)}\!=\!\mathbf{z}_{n}\!\right)\!=\!\sum_{j=1}^{u+1}\mathbb{P}\!\left(\left.\widetilde{\mathbf{Z}}_{n}^{(u)}\!=\!\mathbf{z}_{n}\,\right|\mathbf{Z}_{n}^{(u)}\!=\!\widehat{\mathbf{z}}_{n}^{(i_{j})}\right)\mathbb{P}\!\left(\mathbf{Z}_{n}^{(u)}\!=\!\widehat{\mathbf{z}}_{n}^{(i_{j})}\right)\!=\!\frac{u\!+\!1}{2n\!-\!u}\binom{2n}{u}^{-1}\!\!\!\!=\!\binom{2n}{u\!+\!1}^{-1}\!\!\!\!=\!\mathbb{P}^{(u+1)}(\mathbf{z}_{n}). \end{align*} Finally note that since we are considering the LCS, any single-entry change in $\mathbf{z}_{n}$ changes the score by at most one. Hence, $A=1$ in \eqref{eq:DiffLtildeZLZ} and \begin{align}\label{eq:star} -1\leq\ell_{n}(u+1)-\ell_{n}(u)\leq 1,\quad\text{for any }\,u\in \{0,1,\ldots,2n-1\}. \end{align} \subsection{Lower Bounds Under Fast Convergence}\label{subsec:FastConvLCS} The following theorem is proved in~\cite[Theorem 2.2]{LemberMatzinger:2009} (see also~\cite[Theorem 2.1]{HoudreMa:2016} for a quantitative version with an arbitrary finite alphabet size). \begin{theorem}\label{thm:LCSLargeProb} There exist positive constants $\varepsilon_{1}$ and $\varepsilon_{2}$ with $\varepsilon_{1}>\varepsilon_{2}$, and a set $B_{n}\subseteq\mathcal{A}^{n}\times\mathcal{A}^{n}$ such that for every $\mathbf{z}_{n}\in B_{n}$, \begin{align*} \mathbb{P}\left(\left.L_{n}\big(\widetilde{\mathbf{Z}}_{n}\big)-L_{n}(\mathbf{Z}_{n})=1\right|\mathbf{Z}_{n}=\mathbf{z}_{n}\right)\geq\varepsilon_{1},\quad\mathbb{P}\left(\left.L_{n}\big(\widetilde{\mathbf{Z}}_{n}\big)-L_{n}(\mathbf{Z}_{n})=-1\right|\mathbf{Z}_{n}=\mathbf{z}_{n}\right)\leq\varepsilon_{2}. \end{align*} Moreover, there exists $p_{0}>0$, such that for every $0<p<p_{0}$, \begin{align*} \mathbb{P}\left(\mathbf{Z}_{n}\in B_{n}\right)\geq 1-e^{-c_{1}n}, \end{align*} where $c_{1}>0$ does not depend on $n$, but may depend on $p$. \end{theorem} Hence \eqref{eq:ReqRanTrans} holds with $\varepsilon_{0}=\varepsilon_{1}-\varepsilon_{2}$ and $\Delta_{n}(\varepsilon_{0})\leq e^{-c_{1}n}$, for $n$ large enough.
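To make the effect described in Theorem \ref{thm:LCSLargeProb} concrete, the following simulation sketch (in Python; purely illustrative and not part of the formal argument) draws $\mathbf{Z}_{n}$ with i.i.d. Bernoulli($p$) entries, applies the transformation $\mathcal{R}$ (a uniformly chosen one is turned into a zero) and records the empirical distribution of $L_{n}(\widetilde{\mathbf{Z}}_{n})-L_{n}(\mathbf{Z}_{n})\in\{-1,0,1\}$; for small $p$, the $+1$ outcomes should be markedly more frequent than the $-1$ outcomes, in line with the theorem.
\begin{verbatim}
import random

def lcs_length(x, y):
    """Length of the longest common subsequence of x and y (standard DP)."""
    n, m = len(x), len(y)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = (D[i - 1][j - 1] + 1 if x[i - 1] == y[j - 1]
                       else max(D[i - 1][j], D[i][j - 1]))
    return D[n][m]

def transform(z):
    """The transformation R: turn one uniformly chosen 1 of z into a 0."""
    ones = [i for i, b in enumerate(z) if b == 1]
    if not ones:                 # u = 2n: no ones left, R is not needed there
        return z[:]
    w = z[:]
    w[random.choice(ones)] = 0
    return w

if __name__ == "__main__":
    random.seed(1)
    n, p, reps = 200, 0.1, 200   # illustrative parameter values
    counts = {-1: 0, 0: 0, 1: 0}
    for _ in range(reps):
        z = [1 if random.random() < p else 0 for _ in range(2 * n)]
        before = lcs_length(z[:n], z[n:])
        w = transform(z)
        after = lcs_length(w[:n], w[n:])
        counts[after - before] += 1
    print(counts)   # for small p the +1 count should dominate the -1 count
\end{verbatim}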
Furthermore, as shown above, \eqref{eq:RanTransImpli} also holds for $u_{i}=0,1,\ldots,2n-1$. In what follows, we will apply the fast convergence result (Theorem \ref{thm:FastConvThm}) to two possible choices of $\mathcal{U}_{n}$, and then compare the corresponding lower bounds on moments and exponential moments of $|L_{n}(\mathbf{Z_{n}})-\mu_{n}|$. \subsubsection{The Standard $\mathcal{U}_{n}$}\label{subsubsec:StandUnLCS} Often (cf.~\cite{LemberMatzinger:2009}, \cite{LemberMatzingerTorres:2012(2)} and~\cite{HoudreMa:2016}) the following choice of $\mathcal{U}_{n}$ is used: \begin{align*} \mathcal{U}_{n}=\left[2nq-\sqrt{2n},\,2nq+\sqrt{2n}\right]\cap\mathcal{S}_{n}. \end{align*} This ``standard $\mathcal{U}_{n}$" is such that $k_{0}=1$ (recall \eqref{eq:DefUnk0}). By the de Moivre-Laplace local limit theorem (cf.~\cite[pp. 56]{Shiryaev:1995}), there exists a universal constant $b=b(p)>0$, which is independent of $n$, but depends on $p$, such that for $n$ large enough, \begin{align}\label{eq:LowerBoundProbULocalCLT} \mathbb{P}\left(U_{n}=u\right)\geq\frac{1}{b\sqrt{n}},\quad\text{for any }\,u\in\mathcal{U}_{n}. \end{align} The (best) constant $b$ depends on the choice of $\mathcal{U}_{n}$, and it is easy to see (see \eqref{eq:ExtendedUnLowerBound} below) that for the standard $\mathcal{U}_{n}$ as above, $b\geq e^{2}\sqrt{\pi}$. Without loss of generality, assume that $2nq$ is a positive integer (if not, replace $2nq$ in the definition of $\mathcal{U}_{n}$ with $\lceil 2nq\rceil$). Hence, with $\varphi(n)=1/b \sqrt{n}$, the condition \eqref{eq:DeltafastVarphi} is verified and thus, all the assumptions of Theorem \ref{thm:FastConvThm} are satisfied. Therefore, recalling that $\delta=\varepsilon_{0}/2$, $\mathbb{E}(\Phi(|L_{n}(\mathbf{Z}_{n})-\mu_{n}|))$ is lower-bounded by \begin{align} \sum_{u_{i}\in\mathcal{U}_{n}}\Phi\left(\delta\left|u_{i}-a_{n}\right|\right)\mathbb{P}\left(U_{n}=u_{i}\right)&\geq\sum_{u_{i}\in\mathcal{U}_{n}}\Phi\left(\delta\left|u_{i}-a_{n}\right|\right)\,\frac{1}{b\sqrt{n}}=\frac{1}{b\sqrt{n}}\sum_{k=2qn-\sqrt{2n}}^{2qn+\sqrt{2n}}\Phi\left(\delta\left|k-a_{n}\right|\right)\nonumber\\ \label{eq:LowerBoundPhi} &\geq\frac{1}{b\sqrt{n}}\sum_{j=-\sqrt{2n}}^{\sqrt{2n}}\Phi\left(\delta|j|\right)=\frac{1}{b\sqrt{n}}\left(\Phi(0)+2\sum_{j=1}^{\sqrt{2n}}\Phi\left(\frac{\varepsilon_{0}j}{2}\right)\right), \end{align} where the last inequality is obtained by minimizing over $a_{n}$ and since $2nq$ is a positive integer. \begin{example}\label{eg:MomentFastConvStadardU} For $\Phi(x)=|x|^{r}$, $r\geq 1$, then \eqref{eq:LowerBoundPhi} becomes \begin{align*} \frac{\varepsilon_{0}^{r}}{2^{r-1}b\sqrt{n}}\sum_{j=1}^{\sqrt{2n}}j^{r}\geq\frac{\varepsilon_{0}^{r}}{2^{r-1}b\sqrt{n}}\int_{0}^{\sqrt{2n}}x^{r}\,dx=\frac{\varepsilon_{0}^{r}(2n)^{(r+1)/2}}{(r+1)2^{r-1}b\sqrt{n}}=\frac{2^{(3-r)/2}\varepsilon_{0}^{r}}{b\,(r+1)}\,n^{r/2}. \end{align*} Hence, for $n$ large enough (independent of $r$, see Remark \ref{rem:UniformLargeN}), \begin{align}\label{eq:LowerBoundxr} \mathbb{E}\left(\left|L_{n}(\mathbf{Z}_{n})-\mu_{n}\right|^{r}\right)\geq d_{1}(r)\,n^{r/2},\quad\text{where }\,d_{1}(r)=d_{1}(r,p,\varepsilon_{0}):=\frac{2^{(3-r)/2}\varepsilon_{0}^{r}}{b\,(r+1)}. 
\end{align} \end{example} \begin{example}\label{eg:ExpMomentFastConvStadardU} For $\Phi(x)=e^{tx}$, $t>0$, then the second summation in \eqref{eq:LowerBoundPhi} becomes \begin{align*} \sum_{j=1}^{\sqrt{2n}}\Phi\left(\frac{\varepsilon_{0}j}{2}\right)=\sum_{j=1}^{\sqrt{2n}}e^{t\varepsilon_{0}j/2}=\sum_{j=1}^{\sqrt{2n}}\rho_{t}^{j}=\frac{\rho_{t}-\rho_{t}^{\sqrt{2n}+1}}{1-\rho_{t}},\quad\text{where }\,\rho_{t}:=e^{\varepsilon_{0}t/2}. \end{align*} Hence, for $n$ large enough, \begin{align}\label{eq:LowerBoundetx} \mathbb{E}\!\left(e^{\left|L_{n}(\mathbf{Z}_{n})-\mu_{n}\right|t}\right)\!\geq\!\frac{1}{b\sqrt{n}}\!\left(\!1\!+\!\frac{2\rho_{t}\!-\!2\rho_{t}^{\sqrt{2n}+1}}{1-\rho_{t}}\right)\!=\!\frac{\rho_{t}\!\left(2\rho_{t}^{\sqrt{2n}}\!-\!\rho^{-1}_{t}\!-\!1\right)}{b\left(\rho_{t}-1\right)\sqrt{n}}=\frac{2\rho_{t}^{\sqrt{2n}}-1-\rho^{-1}_{t}}{b\left(1-\rho^{-1}_{t}\right)\sqrt{n}}. \end{align} Now, for any $t>0$, there exists $n$ large enough (depending on $t$) such that \begin{align*} \mathbb{E}\left(e^{\left|L_{n}(\mathbf{Z}_{n})-\mu_{n}\right|t}\right)\geq\frac{2\rho_{t}}{b\left(\rho_{t}-1\right)}\,\rho_{t}^{\sqrt{n}}. \end{align*} Taking $t=s/\sqrt{n}$, with $s>0$, gives a lower bound on the moment generating function of $|V_{n}|=|L_{n}(\mathbf{Z}_{n})-\mu_{n}|/\sqrt{n}$. Moreover, by \eqref{eq:LowerBoundetx}, \begin{align}\label{eq:AsyLowerBoundMGFVn} \liminf_{n\rightarrow\infty}\mathbb{E}\left(e^{s|V_{n}|}\right)\geq\liminf_{n\rightarrow\infty}\frac{2\rho_{t}^{\sqrt{2n}}-1-\rho^{-1}_{t}}{b\left(1-\rho^{-1}_{t}\right)\sqrt{n}}=\frac{4}{b\varepsilon_{0}s}\left(e^{\varepsilon_{0}s/\sqrt{2}}-1\right). \end{align} Since \begin{align*} \lim_{s\downarrow 0}\frac{1}{\varepsilon_{0}s}\left(e^{\varepsilon_{0}s/\sqrt{2}}-1\right)=\frac{1}{\sqrt{2}}, \end{align*} the right-hand side of \eqref{eq:AsyLowerBoundMGFVn} approaches $2\sqrt{2}/b$ as $s\downarrow 0$. Since $b\geq\sqrt{\pi}e^{2}$, then $2\sqrt{2}<b$, showing the deficiency of the obtained bound, since we naturally expect the right hand side of \eqref{eq:AsyLowerBoundMGFVn} to approach one as $s\downarrow 0$. \end{example} \begin{remark}\label{rem:TaylorApproxMGFVn} Since $|V_{n}|$ is non-negative, a series expansion, together with lower bounds on moments of $|V_{n}|$, can also lower-bound its moment generating function. Indeed, by \eqref{eq:LowerBoundxr}, for $n$ large enough (independent of $r$), \begin{align}\label{eq:RefAsyLowerBoundMGFVn} \mathbb{E}\left(e^{s|V_{n}|}\right)=1+\sum_{r=1}^{\infty}\mathbb{E}\left(|V_{n}|^{r}\right)\frac{s^{r}}{r!}\geq 1+\frac{2\sqrt{2}}{b}\sum_{r=1}^{\infty}\frac{\left(\frac{\varepsilon_{0}s}{\sqrt{2}}\right)^{r}}{(r+1)!}=1-\frac{2\sqrt{2}}{b}+\frac{4}{b\varepsilon_{0}s}\left(e^{\varepsilon_{0}s/\sqrt{2}}-1\right). \end{align} This gives a lower bound similar to \eqref{eq:AsyLowerBoundMGFVn}, except for the additive constant $1-2\sqrt{2}/b$, which ensures that the right-hand side of \eqref{eq:RefAsyLowerBoundMGFVn} tends to one as $s\downarrow 0$. Note finally that \eqref{eq:AsyLowerBoundMGFVn} is computed using the lower bound in \eqref{eq:LowerBoundPhi} with $\Phi(x)=e^{tx}$ and $t=s/\sqrt{n}$: \begin{align*} \frac{1}{b\sqrt{n}}\!\sum_{j=-\sqrt{2n}}^{\sqrt{2n}}\!e^{t\varepsilon_{0}|j|/2}=\frac{1}{b\sqrt{n}}\!\sum_{j=-\sqrt{2n}}^{\sqrt{2n}}\sum_{r=0}^{\infty}\frac{1}{r!}\left(\frac{t\varepsilon_{0}|j|}{2}\right)^{r}\!=\frac{2\sqrt{2n}+1}{b\sqrt{n}}+\sum_{r=1}^{\infty}\left[\frac{\varepsilon_{0}^{r}}{2^{r-1}b\sqrt{n}r!}\left(\frac{s}{\sqrt{n}}\right)^{r}\sum_{j=1}^{\sqrt{2n}}j^{r}\right]. 
\end{align*} Above, the constant term (i.e., $r=0$) converges to $2\sqrt{2}/b$ as $n\rightarrow\infty$, which is different from the constant term (equal to $1$) in the series expansion in \eqref{eq:RefAsyLowerBoundMGFVn}. The lower bound on the last series above can be computed using \eqref{eq:LowerBoundxr}: \begin{align*} \sum_{r=1}^{\infty}\left[\frac{\varepsilon_{0}^{r}}{2^{r-1}b\sqrt{n}r!}\left(\frac{s}{\sqrt{n}}\right)^{r}\sum_{j=1}^{\sqrt{2n}}j^{r}\right]\geq\sum_{r=1}^{\infty}\frac{1}{r!}\left(\frac{s}{\sqrt{n}}\right)^{r}\frac{2^{(3-r)/2}\varepsilon_{0}^{r}}{b\,(r+1)}\,n^{\frac{r}{2}}=-\frac{2\sqrt{2}}{b}+\frac{4}{b\varepsilon_{0}s}\left(e^{\varepsilon_{0}s/\sqrt{2}}-1\right). \end{align*} Therefore, the difference in the constants in \eqref{eq:AsyLowerBoundMGFVn} and \eqref{eq:RefAsyLowerBoundMGFVn} stems from the different constant terms ($r=0$) in the respective series expansions. \end{remark} \subsubsection{The Extended $\mathcal{U}_{n}$} The bounds presented in the previous subsection give the correct order for the centered absolute moments (given that $p$ is small enough), i.e., $\mathbb{E}(|L_{n}-\mu_{n}|^{r})\asymp n^{r/2}$. On the other hand, as argued at the end of Example \ref{eg:ExpMomentFastConvStadardU}, the lower bound on the moment generating function of $|V_{n}|$ can be improved. A way to do so (also valid for the moments of $|V_{n}|$) is to extend the set $\mathcal{U}_{n}$ and to refine the lower bound \eqref{eq:LowerBoundProbULocalCLT}. In the present subsection, we show how the refined lower bound yields better approximations. In what follows, take $\beta\in(1/2,2/3)$, and redefine $\mathcal{U}_{n}$ as follows: \begin{align}\label{eq:ExtendedUn} \mathcal{U}_{n}=\left[2nq-(2n)^{\beta},\,2nq+(2n)^{\beta}\right]\cap\mathcal{S}_{n}=\left\{k\in\mathbb{Z}:\,|k-2nq|\leq (2n)^{\beta}\right\}. \end{align} In the sequel, this $\mathcal{U}_{n}$ is called the ``extended" $\mathcal{U}_{n}$, and note that again $k_{0}=1$ (recall \eqref{eq:DefUnk0}). Since $(2n)^{\beta}=o(n^{2/3})$, it follows from the local de Moivre-Laplace theorem (cf.~\cite[pp. 56]{Shiryaev:1995}) that \begin{align*} \sup_{k:\,|k-2nq|\leq (2n)^{\beta}}\left|\frac{\mathbb{P}\left(U_{n}=k\right)}{\phi_{n}(k)}-1\right|\rightarrow 0, \end{align*} where \begin{align*} \phi_{n}(k):=\frac{1}{2\sqrt{\pi pqn}}\exp\left(-\frac{\left(k-2nq\right)^{2}}{4npq}\right),\quad\text{and }\,q=1-p. \end{align*} Hence, for every $\epsilon>0$, there exists $N\in\mathbb{N}$ such that, for any $n>N$, \begin{align}\label{eq:LocalDMVIneq} \left(1+\epsilon\right)\phi_{n}(k)\geq\mathbb{P}\left(U_{n}=k\right)\geq\left(1-\epsilon\right)\phi_{n}(k),\quad\text{for all }\,k\in\mathbb{Z}\,\,\,\,\text{with }\,\left|k-2nq\right|\leq (2n)^{\beta}. \end{align} In particular, \begin{align}\label{eq:ExtendedUnLowerBound} \mathbb{P}\left(U_{n}=k\right)\geq\frac{\left(1-\epsilon\right)}{2\sqrt{\pi pqn}}\exp\left(-\frac{(2n)^{2\beta-1}}{2pq}\right)=:\varphi(n). \end{align} Since $2\beta-1<1$, it follows that \begin{align*} \varphi^{-1}(n)\,e^{-c_{1}n}\rightarrow 0,\quad\text{whenever }\,c_{1}>0. \end{align*} On the other hand, by Theorem \ref{thm:LCSLargeProb}, there exist $\varepsilon_{0}>0$ and $c_{1}>0$, such that $\Delta_{n}(\varepsilon_{0})\leq e^{-c_{1}n}$, for any $n\in\mathbb{N}$ large enough. Thus, the condition \eqref{eq:DeltafastVarphi} holds and therefore, all the assumptions of Theorem \ref{thm:FastConvThm} are verified.
Thus, recalling that $\delta=\varepsilon_{0}/2$ and $a_{n}\in[2nq-(2n)^{\beta},2nq+(2n)^{\beta}]$, \begin{align*} \mathbb{E}\left(\Phi\left(\left|L_{n}(\mathbf{Z}_{n})-\mu_{n}\right|\right)\right)\geq\sum_{k\in\mathcal{U}_{n}}\Phi\left(\delta\left|k-a_{n}\right|\right)\mathbb{P}\left(U_{n}=k\right)\geq(1-\epsilon)\sum_{k:\,|k-2nq|\leq (2n)^{\beta}}\Phi\left(\delta\left|k-a_{n}\right|\right)\phi_{n}(k), \end{align*} for $n$ large enough, where the inequality follows from \eqref{eq:LocalDMVIneq}. From the symmetry of $\Phi(\delta|\cdot|)$, the right-hand side above is minimized at $a_{n}=2nq$. Moreover, let \begin{align*} j_{k}(n):=\frac{(k-2nq)}{\sqrt{2n}},\quad\text{for }\,k\in\mathbb{Z}\,\text{ with }\,\left|k-2nq\right|\leq (2n)^{\beta}. \end{align*} Clearly, $(j_{k}(n))_{k\in\mathcal{U}_{n}}$ forms a $1/\sqrt{2n}$-net over the interval $[-(2n)^{\beta-1/2},(2n)^{\beta-1/2}]$. Hence, \begin{align} \mathbb{E}\left(\Phi\left(\left|L_{n}(\mathbf{Z}_{n})-\mu_{n}\right|\right)\right)&\geq (1-\epsilon)\sum_{k:\,|k-2nq|\leq (2n)^{\beta}}\Phi\left(\delta\left|k-2nq\right|\right)\phi_{n}(k)\nonumber\\ \label{eq:LowerBoundPhiExtU} &=(1-\epsilon)\!\sum_{k:\,|k-2nq|\leq (2n)^{\beta}}\frac{1}{2\sqrt{\pi pqn}}\Phi\!\left(\delta\sqrt{2n}\left|j_{k}(n)\right|\right)\exp\left(-\frac{\left(j_{k}(n)\right)^{2}}{2pq}\right). \end{align} \begin{example}\label{eg:MomentFastConvExtU} For $\Phi(x)=|x|^{r}$, $r\geq 1$, since $n^{\beta-1/2}\rightarrow\infty$, as $n\rightarrow\infty$, \begin{align*} \lim_{n\rightarrow\infty}\frac{1}{\sqrt{2\pi pq}}\sum_{k:\,|k-2nq|\leq (2n)^{\beta}}\frac{1}{\sqrt{2n}}\left|j_{k}(n)\right|^{r}\exp\left(-\frac{\left(j_{k}(n)\right)^{2}}{2pq}\right)=\frac{1}{\sqrt{2\pi pq}}\int_{-\infty}^{\infty}|x|^{r}e^{-x^{2}/(2pq)}\,dx. \end{align*} Since $\epsilon>0$ is arbitrary, \eqref{eq:LowerBoundPhiExtU} leads to \begin{align*} \liminf_{n\rightarrow\infty}\mathbb{E}\left(|V_{n}|^{r}\right)=\liminf_{n\rightarrow\infty}\frac{\mathbb{E}\left(\left|L_{n}(\mathbf{Z}_{n})-\mu_{n}\right|^{r}\right)}{n^{r/2}}\geq 2^{r/2}\delta^{r}\mathbb{E}\left(|\xi|^{r}\right)=\frac{\varepsilon_{0}^{r}}{\sqrt{\pi}}(pq)^{r/2}\,\Gamma\left(\frac{r+1}{2}\right)=:d_{2}(r), \end{align*} where $\xi\sim\mathcal{N}(0,pq)$. Therefore, for $n\in\mathbb{N}$ large enough, \begin{align*} \mathbb{E}\left(\left|L_{n}(\mathbf{Z}_{n})-\mu_{n}\right|^{r}\right)\geq\frac{\varepsilon_{0}^{r}}{\sqrt{\pi}}(pq)^{r/2}\,\Gamma\left(\frac{r+1}{2}\right)n^{r/2}. \end{align*} Let us compare this last bound with \eqref{eq:LowerBoundxr} of Example \ref{eg:MomentFastConvStadardU}. From \eqref{eq:LocalDMVIneq}, \begin{align*} \frac{1}{b}\leq\frac{1}{2\sqrt{\pi pq}}\,e^{-1/(2pq)}\quad\text{implies that}\quad d_{1}(r)\leq\frac{2^{\frac{1-r}{2}}\varepsilon_{0}^{r}}{(r+1)\sqrt{\pi pq}}\,e^{-1/(2pq)}. \end{align*} Hence, for each $r\geq 1$, $d_{2}(r)>d_{1}(r)$ will follow from \begin{align}\label{eq:GammaFunt} (2pq)^{\frac{r+1}{2}}\Gamma\left(\frac{r+1}{2}\right)>\frac{2}{r+1}\,e^{-1/(2pq)},\quad\text{for }\,p\in(0,1)\,\,\,\text{and}\,\,\,q=1-p. \end{align} To show \eqref{eq:GammaFunt}, set $\theta:=(r+1)/2$ and \begin{align*} f(x):=x^{-\theta}e^{x}\,\Gamma(\theta+1),\quad x:=\frac{1}{2pq}\geq 2. \end{align*} Then \eqref{eq:GammaFunt} is equivalent to $f(x)>1$, for all $x\geq 2$. 
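Before the analytic argument, inequality \eqref{eq:GammaFunt} can also be spot-checked numerically; the short Python sketch below (an aside, with arbitrary illustrative grids for $p$ and $r$) evaluates both sides on a grid and should print \texttt{True}.
\begin{verbatim}
# Numerical spot-check of (eq:GammaFunt):
#   (2pq)^((r+1)/2) * Gamma((r+1)/2) > 2 exp(-1/(2pq)) / (r+1).
import numpy as np
from scipy.special import gamma

ok = True
for p in np.linspace(0.05, 0.95, 19):
    q = 1.0 - p
    for r in range(1, 41):
        lhs = (2 * p * q) ** ((r + 1) / 2) * gamma((r + 1) / 2)
        rhs = 2.0 / (r + 1) * np.exp(-1.0 / (2 * p * q))
        ok = ok and (lhs > rhs)
print(ok)   # expected: True
\end{verbatim}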
Since \begin{align*} f'(x)=\Gamma(\theta+1)\left(-\theta x^{-\theta-1}e^{x}+x^{-\theta}e^{x}\right)=x^{-\theta}e^{x}\,\Gamma(\theta+1)\left(1-\frac{\theta}{x}\right) \end{align*} has a unique zero at $x=\theta$, it is sufficient to show that \begin{align*} f(\theta)=\theta^{-\theta}e^{\theta}\,\Gamma(\theta+1)=\theta^{1-\theta}e^{\theta}\,\Gamma(\theta)>1,\quad\text{for any }\,\theta\geq 1. \end{align*} But this follows from a classical inequality for the gamma function (cf.~\cite[Theorem 1]{KeckicVasic:1971}) \begin{align*} \frac{\Gamma(x)}{\Gamma(y)}\geq\frac{x^{x-1}e^{y}}{y^{y-1}e^{x}},\quad\text{for any }\,x\geq y\geq 1. \end{align*} Therefore, $d_{2}(r)>d_{1}(r)$, for any $r\geq 1$. That is, with the extended $\mathcal{U}_{n}$, Theorem \ref{thm:FastConvThm} provides larger constants than the ones obtained with the standard $\mathcal{U}_{n}$. \end{example} \begin{example}\label{eg:ExpMomentFastConvExtU} For $\Phi_{n}(x)=e^{sx/\sqrt{2n}}$ with $s>0$, \begin{align} \sum_{k:\,|k-2nq|\leq (2n)^{\beta}}\Phi_{n}\left(\delta|k-2nq|\right)\phi_{n}(k)&=\frac{1}{\sqrt{2\pi pq}}\frac{1}{\sqrt{2n}}\sum_{k:\,|k-2nq|\leq (2n)^{\beta}}\exp\left(s\delta\left|j_{k}(n)\right|-\frac{\left(j_{k}(n)\right)^{2}}{2pq}\right)\nonumber\\ \label{eq:LimitExpNormal} &\rightarrow\frac{1}{\sqrt{2\pi pq}}\int_{-\infty}^{\infty}\exp\left(s\delta|x|-\frac{x^{2}}{2pq}\right)dx,\quad n\rightarrow\infty. \end{align} Hence, \eqref{eq:LowerBoundPhiExtU} leads to \begin{align*} \liminf_{n\rightarrow\infty}\mathbb{E}\left(\exp\left(s\frac{\left|L_{n}(\mathbf{Z}_{n})-\mu_{n}\right|}{\sqrt{2n}}\right)\right)\geq\mathbb{E}\left(e^{s\delta|\xi|}\right), \end{align*} where again $\xi\sim\mathcal{N}(0,pq)$. Therefore, for any $s>0$, \begin{align}\label{eq:LowerBoundVnExtUn} \liminf_{n\rightarrow\infty}\mathbb{E}\left(e^{s|V_{n}|}\right)\geq\mathbb{E}\left(e^{s\varepsilon_{0}|\xi|/\sqrt{2}}\right). \end{align} Unlike the lower bound obtained in \eqref{eq:AsyLowerBoundMGFVn}, the present lower bound approaches one as $s\downarrow 0$. Thus, again, the bound obtained via Theorem \ref{thm:FastConvThm}, with the extended $\mathcal{U}_{n}$, outperforms the one obtained with the standard $\mathcal{U}_{n}$. \end{example} \begin{remark}\label{rem:TaylorApproxMGFVnExtUn} Since $d_{2}(r)$ is precisely the $r$-th moment of $\varepsilon_{0}|\xi|/\sqrt{2}$, clearly \begin{align*} 1+\sum_{r=1}^{\infty}d_{2}(r)\frac{s^{r}}{r!}=\mathbb{E}\left(e^{s\varepsilon_{0}|\xi|/\sqrt{2}}\right), \end{align*} and so in this case, the lower bound on the moment generating function obtained via the series expansion gives the same result as \eqref{eq:LowerBoundVnExtUn}. \end{remark} \begin{remark} There is another way to get \eqref{eq:LimitExpNormal}. By \eqref{eq:LocalDMVIneq}, it is enough to prove that, as $n\rightarrow\infty$, \begin{align*} \sum_{k:\,|k-2nq|\leq (2n)^{\beta}}\!\!\!\!\Phi_{n}\!\left(\delta\!\left|k-2nq\right|\right)\mathbb{P}\left(U_{n}=k\right)=\!\!\sum_{k:|k-2nq|\leq (2n)^{\beta}}\!\!\!\!\exp\left\{\frac{s\delta}{\sqrt{2n}}\left|k-2nq\right|\right\}\mathbb{P}\left(U_{n}=k\right)\rightarrow\mathbb{E}\!\left(e^{s\delta|\xi|}\right). \end{align*} To see this, note that by the CLT and the continuous mapping theorem, for any $s\in\mathbb{R}$, \begin{align}\label{eq:WeakConvUn} \exp\left(\frac{s\delta}{\sqrt{2n}}\left|U_{n}-2nq\right|\right)\;{\stackrel{\mathfrak{D}}{\longrightarrow}}\; e^{s\delta|\xi|},\quad n\rightarrow\infty, \end{align} where ``$\;{\stackrel{\mathfrak{D}}{\longrightarrow}}\;$" stands for convergence in law.
Moreover, for any $\epsilon>0$, by Hoeffding's exponential inequality, \begin{align*} \mathbb{E}\left(\exp\left(\frac{s\delta(1+\epsilon)}{\sqrt{2n}}\left|U_{n}-2nq\right|\right)\right)&=1+\int_{1}^{\infty}\mathbb{P}\left(\exp\left(s\delta(1+\epsilon)\sqrt{2n}\left|\frac{U_{n}-2nq}{2n}\right|\right)\geq x\right)dx\\ &=1+\int_{1}^{\infty}\mathbb{P}\left(\left|\frac{U_{n}}{2n}-q\right|\geq\frac{\ln x}{s\delta(1+\epsilon)\sqrt{2n}}\right)dx\\ &\leq 1+2\int_{1}^{\infty}\exp\left(-4n\frac{(\ln x)^{2}}{2s^{2}\delta^{2}(1+\epsilon)^{2}n}\right)dx\\ &=1+2\int_{0}^{\infty}\exp\left(-\frac{2y^{2}}{s^{2}\delta^{2}(1+\epsilon)^{2}}+y\right)dy<\infty, \end{align*} which implies that the family of non-negative random variables \begin{align*} \left\{\exp\left(\frac{s\delta}{\sqrt{2n}}\left|U_{n}-2nq\right|\right):\,\,n\in\mathbb{N}\right\} \end{align*} is uniformly integrable. Let $A_{n}:=\left\{U_{n}\not\in\mathcal{U}_{n}\right\}$. Again, Hoeffding's exponential inequality leads to \begin{align}\label{eq:ConvProbAn} \mathbb{P}\left(A_{n}\right)=\mathbb{P}\left(\left|U_{n}-2nq\right|>(2n)^{\beta}\right)\leq 2\exp\left(-2(2n)^{2\beta-1}\right)\rightarrow 0,\quad n\rightarrow\infty, \end{align} since $2\beta>1$. Therefore, for any $s\in\mathbb{R}$, \begin{align*} &\lim_{n\rightarrow\infty}\sum_{k:\,|k-2nq|\leq (2n)^{\beta}}\exp\left(\frac{s\delta}{\sqrt{2n}}\left|k-2nq\right|\right)\mathbb{P}\left(U_{n}=k\right)\\ &\quad\,=\lim_{n\rightarrow\infty}\mathbb{E}\left({\bf 1}_{A_{n}^{c}}\exp\left(\frac{s\delta}{\sqrt{2n}}\left|U_{n}-2nq\right|\right)\right)=\lim_{n\rightarrow\infty}\mathbb{E}\left(\exp\left(\frac{s\delta}{\sqrt{2n}}\left|U_{n}-2nq\right|\right)\right)=\mathbb{E}\left(e^{s\delta|\xi|}\right), \end{align*} where the last equality follows from \eqref{eq:WeakConvUn} and the uniform integrability. \end{remark} \subsection{Uniform Approximation}\label{subsec:UnifApproxLCS} We next show that in the binary LCS case, the uniform approximation approach (Theorem \ref{thm:UnifConv}) also applies provided that $\mathcal{U}_{n}$ is standard. Indeed, with the standard $\mathcal{U}_{n}$, we have $m(n)=2\sqrt{2n}$ (ignoring the rounding), and $\varphi(n)=1/(b\sqrt{n})$, so that the condition $m(n)\geq c\varphi^{-1}(n)=bc\sqrt{n}$ holds with $c=2\sqrt{2}/b$, and thus Theorem \ref{thm:UnifConv} applies. Therefore, \begin{align}\label{eq:UnifBoundLCS} \mathbb{E}\left(\Phi\!\left(\left|L_{n}(\mathbf{Z}_{n})\!-\!\mu_{n}\right|\right)\right)\geq\!\!\sum_{j=1}^{\frac{c}{8}\varphi^{-1}(n)}\!\!\Phi\left(\frac{\varepsilon_{0}}{2}\!\left(\frac{c}{8\varphi(n)}\!+\!j\right)\right)\varphi(n)+\frac{c}{8}\Phi\!\left(\frac{\varepsilon_{0}c}{16\varphi(n)}\right)\geq\frac{c}{4}\,\Phi\!\left(\frac{\varepsilon_{0}c}{16\varphi(n)}\right). \end{align} Again, let us apply this result to $|L_{n}(\mathbf{Z}_{n})-\mu_{n}|$. \begin{example}\label{eg:MomentUnifApprox} For $\Phi(x)=|x|^{r}$, $r\geq 1$, the rightmost bound in \eqref{eq:UnifBoundLCS} gives \begin{align*} \mathbb{E}\left(\left|L_{n}(\mathbf{Z}_{n})-\mu_{n}\right|^{r}\right)\geq\frac{c}{4}\left(\frac{\varepsilon_{0}c}{16\varphi(n)}\right)^{r}=\frac{\varepsilon_{0}^{r}}{2^{(5r+1)/2}\,b}\,n^{r/2}. \end{align*} Note that \begin{align*} d_{3}(r)=d_{3}(r,p,\varepsilon_{0}):=\frac{\varepsilon_{0}^{r}}{2^{(5r+1)/2}\,b}<\frac{2^{\frac{3-r}{2}}\varepsilon_{0}^{r}}{b\,(r+1)}=d_{1}(r), \end{align*} where $d_{1}(r)$ is the constant given in \eqref{eq:LowerBoundxr}. 
This last fact is quite expected since we had a cruder approximation and a weaker assumption $\Delta_{n}(\varepsilon_{0})\rightarrow 0$, as $n\rightarrow\infty$. Next, applying the finer lower middle bound in \eqref{eq:UnifBoundLCS} gives \begin{align*} \sum_{j=1}^{\frac{c}{8}\varphi^{-1}(n)}\!\Phi\!\left(\frac{\varepsilon_{0}}{2}\!\left(\frac{c}{8\varphi(n)}\!+\!j\right)\right)\varphi(n)+\frac{c}{8}\,\Phi\!\left(\frac{\varepsilon_{0}c}{16\varphi(n)}\right)&=\frac{1}{b\sqrt{n}}\left(\frac{\varepsilon_{0}}{2}\right)^{r}\!\sum_{j=1}^{\sqrt{n}/(2\sqrt{2})}\!\left(\frac{\sqrt{n}}{2\sqrt{2}}+j\right)^{r}\!+\frac{\varepsilon_{0}^{r}\,n^{r/2}}{2^{(5r+3)/2}\,b}\\ &\geq\frac{\varepsilon_{0}^{r}}{b\,2^{r}\sqrt{n}}\int_{0}^{\sqrt{n}/(2\sqrt{2})}\!\left(\frac{\sqrt{n}}{2\sqrt{2}}\!+\!x\right)^{r}\!dx+\frac{\varepsilon_{0}^{r}\,n^{r/2}}{2^{(5r+3)/2}\,b}\\ &=\frac{\varepsilon_{0}^{r}}{2^{(5r+3)/2}\,b}\left(\frac{2^{r+1}-1}{r+1}+1\right)n^{r/2}. \end{align*} Once more, a better approximation leads to a larger constant \begin{align*} d_{4}(r)=d_{4}(r,p,\varepsilon_{0}):=b^{-1}\varepsilon_{0}^{r}\,2^{-\frac{5r+3}{2}}\left(\frac{2^{r+1}-1}{r+1}+1\right)=\frac{2^{r+1}+r}{2(r+1)}\,d_{3}(r)<d_{1}(r). \end{align*} \end{example} \begin{example}\label{eg:ExpMomentUnifApprox} For $\Phi(x)=e^{tx}$ with $t>0$, the rightmost bound in \eqref{eq:UnifBoundLCS} gives \begin{align}\label{eq:uniformApproxetx} \mathbb{E}\left(e^{\left|L_{n}(\mathbf{Z}_{n})-\mu_{n}\right|t}\right)\geq\frac{c}{4}\,\Phi\left(\frac{\varepsilon_{0}c}{16\varphi(n)}\right)=\frac{1}{\sqrt{2}\,b}\,e^{\varepsilon_{0}t\sqrt{n}/(4\sqrt{2})}, \end{align} while the more refined middle bound, with $\rho_{t}=e^{\varepsilon_{0}t/2}$, gives \begin{align*} \sum_{j=1}^{\frac{c}{8}\varphi^{-1}(n)}\!\!\Phi\!\left(\frac{\varepsilon_{0}}{2}\!\left(\frac{c}{8\varphi(n)}\!+\!j\right)\!\right)\!\varphi(n)+\frac{c}{8}\Phi\!\left(\frac{\varepsilon_{0}c}{16\varphi(n)}\right)&=\frac{e^{\varepsilon_{0}t\sqrt{n}/(4\sqrt{2})}}{b\sqrt{n}}\sum_{j=1}^{\sqrt{n}/(2\sqrt{2})}\rho_{t}^{j}+\frac{1}{2\sqrt{2}\,b}\rho_{t}^{\sqrt{n}/(2\sqrt{2})}\\ &=\frac{\rho_{t}}{b\sqrt{n}\left(\rho_{t}\!-\!1\right)}\left(\rho_{t}^{\sqrt{n}/\sqrt{2}}\!-\!\rho_{t}^{\sqrt{n}/(2\sqrt{2})}\right)+\frac{\rho_{t}^{\sqrt{n}/(2\sqrt{2})}}{2\sqrt{2}\,b}. \end{align*} For every $t>0$, the convergence to infinity (as $n\rightarrow\infty$) of this last bound is slower than that of the bound \eqref{eq:LowerBoundetx}. Hence, again, the uniform bound is smaller (for large $n$) than the ones obtained in the previous subsection using Theorem \ref{thm:FastConvThm}. Since $t>0$, for $n$ large enough (depending on $t$), \begin{align*} \mathbb{E}\left(e^{\left|L_{n}(\mathbf{Z}_{n})-\mu_{n}\right|t}\right)\geq\frac{1}{b}\,e^{\varepsilon_{0}t\sqrt{n}/(4\sqrt{2})}\,\frac{\rho_{t}^{\frac{1}{4}\sqrt{n}+1}}{\rho_{t}-1}+\frac{e^{\varepsilon_{0}t\sqrt{n}/(4\sqrt{2})}}{2\sqrt{2}\,b}\geq\frac{1}{b}\,e^{\varepsilon_{0}t\sqrt{n}/4}+\frac{e^{\varepsilon_{0}t\sqrt{n}/(4\sqrt{2})}}{2\sqrt{2}\,b}. 
\end{align*} \end{example} \begin{remark}\label{remark:TaylorUniformApprox} Again, when estimating the moment generating function of $|V_{n}|=|L_{n}(\mathbf{Z})-\mu_{n}|/\sqrt{n}$ via its series expansion, with the lower bound $\mathbb{E}(|V_{n}|^{r})\geq d_{3}(r)$, for large $n$ (uniformly in $r$), \begin{align*} \mathbb{E}\!\left(e^{s|V_{n}|}\right)\geq 1+\frac{1}{\sqrt{2}b}\sum_{r=1}^{\infty}\frac{1}{r!}\left(\frac{\varepsilon_{0}}{4\sqrt{2}}s\right)^{r}=\frac{1}{\sqrt{2}\,b}\,e^{\varepsilon_{0}s/(4\sqrt{2})}+1-\frac{1}{\sqrt{2}b}, \end{align*} which is the same bound as in \eqref{eq:uniformApproxetx} with $t=s/\sqrt{n}$, except for the constant term. \end{remark} \begin{remark}\label{rem:NoUnifApproxExtUn} With the extended $\mathcal{U}_{n}$, the uniform approximation (Theorem \ref{thm:UnifConv}) cannot be applied, since there does not exist a constant $c>0$ such that $m(n)=2(2n)^{\beta}\geq c\,\varphi^{-1}(n)$, for any $n\in\mathbb{N}$, with $\varphi(n)$ given in \eqref{eq:ExtendedUnLowerBound}. \end{remark} \begin{remark}\label{rem:EstMoment} We have obtained several constants $d_{3}(r)<d_{4}(r)<d_{1}(r)<d_{2}(r)$ involved in lower bounding the moment $\mathbb{E}(|V_{n}|^{r})$. The best constant, $d_{2}(r)$, is obtained under fast convergence with extended $\mathcal{U}_{n}$. Clearly, in order to deduce the existence of $c(r)$ such that $\mathbb{E}(|V_{n}|^{r})\geq c(r)$, it suffices to tackle the case $r=1$. However, it is easy to verify that in all the cases considered above (i.e., $c(r)$ being either $d_{1}(r)$, $d_{2}(r)$, $d_{3}(r)$ or $d_{4}(r)$), the constant $(c(1))^{r}$ is smaller than $c(r)$. Moreover, in all these cases, it is also true that $c(r+1)\geq (c(r))^{(r+1)/r}$, so that a better constant is obtained when estimating the $r$-th moment directly rather than estimating a lower-order moment. \end{remark} \section{An Upper Bound on the Rate Function}\label{sec:Rate} \subsection{Background and Preliminary Results} The analysis right below (before Proposition \ref{prop:LowerBoundMGFLnl2nq}) is similar to~\cite[Theorem 2]{ArratiaWaterman:1994}, but with a more general notion of score function and a slightly different definition of gap\footnote{In~\cite{ArratiaWaterman:1994}, the scoring function takes the value $1$ for matches and the penalty $-\mu$ for mismatches. Moreover, the gap in~\cite{ArratiaWaterman:1994} is an indel in one sequence, and the gap price $-\delta$ is assumed to be negative.}. Again, let $(X_{n})_{n\in\mathbb{N}}$ and $(Y_{n})_{n\in\mathbb{N}}$ be two independent sequences of i.i.d. random variables, and let us consider a general scoring function $S:\mathcal{A}\times\mathcal{A}\rightarrow\mathbb{R}^{+}$. By subadditivity, \begin{align*} L_{n+m}\geq L_{n}+L\left(\mathbf{X}_{n+1}^{n+m};\mathbf{Y}_{n+1}^{n+m}\right), \end{align*} where $L(\mathbf{X}_{n+1}^{n+m};\mathbf{Y}_{n+1}^{n+m})$ denotes the optimal alignments score of $\mathbf{X}_{n+1}^{n+m}:=(X_{n+1},\ldots,X_{n+m})$ and $\mathbf{Y}_{n+1}^{n+m}:=(Y_{n+1},\ldots,Y_{n+m})$. Hence, for any $s\geq 0$, \begin{align*} \mathbb{P}\left(L_{n+m}\geq s(n+m)\right)\geq\mathbb{P}\left(L_{n}\geq sn,\,L\left(\mathbf{X}_{n+1}^{n+m};\mathbf{Y}_{n+1}^{n+m}\right)\geq sm\right)=\mathbb{P}\left(L_{n}\geq sn\right)\mathbb{P}\left(L_{m}\geq sm\right). 
\end{align*} Thus, by Fekete's lemma, for any $s\geq 0$, the following limit -- the {\it rate function} -- exists: \begin{align*} r(s):=\lim_{n\rightarrow\infty}-\frac{1}{n}\ln\mathbb{P}\left(L_{n}\geq sn\right)=\inf_{n\in\mathbb{N}}-\frac{1}{n}\ln\mathbb{P}\left(L_{n}\geq sn\right)<\infty. \end{align*} and so, \begin{align*} \mathbb{P}\left(L_{n}\geq sn\right)\leq e^{-r(s)\,n},\quad\text{for any }\,s\geq 0,\,\,n\in\mathbb{N}. \end{align*} Since for any $n\in\mathbb{N}$, \begin{align*} \mathbb{E}\left(L_{n}\right)\leq\gamma^{*}n,\quad\text{where }\,\gamma^{*}:=\lim_{n\rightarrow\infty}\frac{\mathbb{E}\left(L_{n}\right)}{n}, \end{align*} it follows from \eqref{eq:HoeffdingIneqOptScore} that for any $s>\gamma^{*}$, \begin{align*} \mathbb{P}\left(L_{n}\geq sn\right)=\mathbb{P}\!\left(\!L_{n}\!-\!\mathbb{E}(L_{n})\!\geq\!\left(s\!-\!\frac{\mathbb{E}(L_{n})}{n}\right)\!n\!\right)\leq\mathbb{P}\left(L_{n}\!-\!\mathbb{E}(L_{n})\!\geq\!\left(s\!-\!\gamma^{*}\right)n\right)\leq\exp\!\left(\!-\frac{\left(s-\gamma^{*}\right)^{2}n}{K^{2}}\!\right). \end{align*} Therefore, for $s>\gamma^{*}$, \begin{align}\label{eq:LowerBoundRateFunt} r(s)\geq\frac{\left(s-\gamma^{*}\right)^{2}}{K^{2}}. \end{align} Moreover, for $s=\gamma^{*}$, \begin{align*} \mathbb{P}\left(L_{n}\geq\gamma^{*}n\right)=\mathbb{P}\left(L_{n}-\mathbb{E}(L_{n})\geq\left(\gamma^{*}-\frac{\mathbb{E}(L_{n})}{n}\right)n\right)\leq\exp\left(-\frac{n}{K^{2}}\left(\gamma^{*}-\frac{\mathbb{E}(L_{n})}{n}\right)^{2}\right). \end{align*} The aim of the present section is to show that the methodology developed to date allows us to partially reverse \eqref{eq:LowerBoundRateFunt} in the LCS case. That is, we will show that there exists a universal constant $B>0$, such that \begin{align}\label{eq:ConjCond} r(s)\leq B(s-\gamma^{*})^{2}, \end{align} for any $s$ belonging to an upper neighborhood of $\gamma^{*}$. Moreover, the full claim \eqref{eq:ConjCond}, i.e., for all $s>\gamma^{*}$, is shown to hold under a uniform convergence assumption, which is not proved in general. Hereafter and throughout this section, we will only consider the binary LCS case. For the rest of this subsection, we obtain a lower bound on $\mathbb{E}(e^{t(L_{n}(\mathbf{Z}_{n})-\ell_{n}(2nq))})$ (note that there is no absolute value in the exponent), which will be used to prove the claim \eqref{eq:ConjCond} in the neighborhood of $\gamma^{*}$. Since we are considering the special case of LCS and since we have already seen the advantage of the extended $\mathcal{U}_{n}$ over the standard one, we shall use the extended $\mathcal{U}_{n}$. Throughout this section, let $p_{0}>0$ be given as in Theorem \ref{thm:LCSLargeProb}. \begin{proposition}\label{prop:LowerBoundMGFLnl2nq} Let $t>0$ and let $p\in(0,p_{0})$. Then, there exists $\varepsilon_{0}>0$, such that \begin{align*} \liminf_{n\rightarrow\infty}\mathbb{E}\left(\exp\left(\frac{t}{\sqrt{2n}}\left(L_{n}(\mathbf{Z}_{n})-\ell_{n}(2nq)\right)\right)\right)\geq\lambda\,e^{\,pq\,\varepsilon_{0}^{2}t^{2}/8}, \end{align*} where \begin{align}\label{eq:ConstA} \lambda=\lambda(\varepsilon_{0}):=\min_{t>0}\left(1-\,\mathbb{P}\left(\frac{pq\,\varepsilon_{0}t}{2}<\xi<pqt\right)\right)\in(0,1), \end{align} and where $\xi\sim\mathcal{N}(0,pq)$. \end{proposition} \noindent \textbf{Proof.} For simplicity, assume $2nq$ to be an integer. Let $\mathcal{U}_{n}$ be given by \eqref{eq:ExtendedUn}. 
By the conditional Jensen's inequality, \begin{align} \mathbb{E}\!\left(e^{t\left(L_{n}(\mathbf{Z}_{n})-\ell_{n}(2nq)\right)}\right)&\geq\mathbb{E}\left(e^{t\left(\ell_{n}(U_{n})-\ell_{n}(2nq)\right)}\right)\geq\sum_{k\in\mathcal{U}_{n}}e^{t\left(\ell_{n}(k)-\ell_{n}(2nq)\right)}\mathbb{P}\left(U_{n}=k\right)\nonumber\\ \label{eq:DecomMGFLZl2pn} &=\!\sum_{k=2nq}^{2nq+(2n)^{\beta}}\!\!\!e^{t\left(\ell_{n}(k)-\ell_{n}(2nq)\right)}\mathbb{P}\!\left(U_{n}\!=\!k\right)\!+\!\!\!\sum_{k=2nq-(2n)^{\beta}}^{2nq-1}\!\!\!\!\!e^{t\left(\ell_{n}(k)-\ell_{n}(2nq)\right)}\mathbb{P}\!\left(U_{n}\!=\!k\right). \end{align} Theorem \ref{thm:LCSLargeProb} ensures that, with $\varepsilon_{0}:=\varepsilon_{1}-\varepsilon_{2}$, the condition \eqref{eq:DeltafastVarphi} holds and therefore, all the assumptions of Theorem \ref{thm:FastConvThm} are satisfied. If $k\in\mathcal{U}_{n}$ and $k\geq 2nq$, then by Theorem \ref{thm:FastConvThm}, $\ell_{n}(k)-\ell_{n}(2nq)\geq\delta(k-2nq)$, where $\delta=\varepsilon_{0}/2<1$; while if $k<2nq$, then by \eqref{eq:star}, $\ell_{n}(k)-\ell_{n}(2nq)\geq k-2nq$. Hence, by \eqref{eq:LocalDMVIneq}, for any $\epsilon>0$ and $n$ (depending on $\epsilon$, but not on $t$) large enough, \begin{align*} \sum_{k=2nq}^{2nq+(2n)^{\beta}}\exp\left(\frac{t}{\sqrt{2n}}\left(\ell_{n}(k)-\ell_{n}(2nq)\right)\right)\mathbb{P}\left(U_{n}=k\right)\geq (1-\epsilon)\sum_{k=2nq}^{2nq+(2n)^{\beta}}\exp\left(\frac{t\delta}{\sqrt{2n}}\left(k-2nq\right)\right)\phi_{n}(k), \end{align*} and \begin{align*} \sum_{k=2nq}^{2nq+(2n)^{\beta}}\exp\left(\frac{t\delta}{\sqrt{2n}}\left(k-2nq\right)\right)\phi_{n}(k)\rightarrow\frac{1}{2}\left(1+\text{erf}\left(\frac{\sqrt{pq}\,\delta t}{\sqrt{2}}\right)\right)e^{\,pq\,\varepsilon_{0}^{2}t^{2}/8}. \end{align*} Similarly, for $n$ (depending on $\epsilon$, but not on $t$) large enough, \begin{align*} \sum_{k=2nq-(2n)^{\beta}}^{2nq-1}\!\!\!\exp\left(\frac{t}{\sqrt{2n}}\left(\ell_{n}(k)-\ell_{n}(2nq)\right)\right)\mathbb{P}\left(U_{n}=k\right)\geq (1-\epsilon)\!\!\!\sum_{k=2nq-(2n)^{\beta}}^{2nq-1}\!\!\!\exp\left(\frac{t}{\sqrt{2n}}\left(k-2nq\right)\right)\phi_{n}(k), \end{align*} and \begin{align*} \sum_{k=2nq-(2n)^{\beta}}^{2nq-1}\exp\left(\frac{t}{\sqrt{2n}}\left(k-2nq\right)\right)\phi_{n}(k)\rightarrow\frac{1}{2}\left(1-\text{erf}\left(\frac{\sqrt{pq}\,t}{\sqrt{2}}\right)\right)e^{pq\,t^{2}/2}. \end{align*} Since $\epsilon>0$ is arbitrarily chosen, \begin{align*} &\liminf_{n\rightarrow\infty}\,\mathbb{E}\left(\exp\left(\frac{t}{\sqrt{2n}}\left(L_{n}(\mathbf{Z}_{n})-\ell_{n}(2nq)\right)\right)\right)\\ &\quad\,\geq\frac{1}{2}\left(1+\text{erf}\left(\frac{\sqrt{pq}\,\delta t}{\sqrt{2}}\right)\right)e^{pq\,\varepsilon_{0}^{2}t^{2}/8}+\frac{1}{2}\left(1-\text{erf}\left(\frac{\sqrt{pq}\,t}{\sqrt{2}}\right)\right)e^{pq\,t^{2}/2}\\ &\quad\,>\left[1+\frac{1}{2}\left(\text{erf}\left(\frac{\sqrt{pq}\,\delta t}{\sqrt{2}}\right)-\text{erf}\left(\frac{\sqrt{pq}\,t}{\sqrt{2}}\right)\right)\right]e^{pq\,\varepsilon_{0}^{2}t^{2}/8}\\ &\quad\,=\left(1-\mathbb{P}\left(pq\,\delta t<\xi<pq\,t\right)\right)e^{pq\,\varepsilon_{0}^{2}t^{2}/8}, \end{align*} where $\xi\sim N(0,pq)$. $\Box$ To conclude this subsection, we provide an estimate on $|\mathbb{E}(L_{n}(\mathbf{Z}_{n}))-\ell_{n}(2nq)|/\sqrt{2n}$, which will be useful in the sequel. 
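Before stating it, the chain of limiting inequalities just derived can be sanity-checked numerically; the short Python sketch below is an aside (the values of $p$ and $\varepsilon_{0}$ are arbitrary illustrations, not the constants of Theorem \ref{thm:LCSLargeProb}), evaluating the two error-function terms above on a grid of $t$ and approximating $\lambda$ in \eqref{eq:ConstA} by a minimum over that grid.
\begin{verbatim}
# Numerical spot-check of the limiting lower bound in the proposition above.
import numpy as np
from scipy.special import erf
from scipy.stats import norm

p, eps0 = 0.05, 0.3                      # illustrative choices only
q, delta = 1.0 - p, eps0 / 2.0
sd = np.sqrt(p * q)                      # xi ~ N(0, pq)
ts = np.linspace(1e-3, 50.0, 2000)

lhs = (0.5 * (1 + erf(sd * delta * ts / np.sqrt(2))) * np.exp(p * q * delta**2 * ts**2 / 2)
       + 0.5 * (1 - erf(sd * ts / np.sqrt(2))) * np.exp(p * q * ts**2 / 2))
factor = 1.0 - (norm.cdf(p * q * ts, scale=sd) - norm.cdf(p * q * delta * ts, scale=sd))
lam = factor.min()                       # grid approximation of lambda(eps0) in (eq:ConstA)
print(lam, np.all(lhs >= lam * np.exp(p * q * eps0**2 * ts**2 / 8)))   # expected: True
\end{verbatim}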
\begin{corollary}\label{cor:LowerBoundLnZnln2nq} In the binary LCS setting, with $p\in(0,p_{0})$, \begin{align*} \limsup_{n\rightarrow\infty}\frac{\left|\mathbb{E}\left(L_{n}(\mathbf{Z}_{n})\right)-\ell_{n}(2nq)\right|}{\sqrt{2n}}\leq\sqrt{\frac{2pq}{\pi}}. \end{align*} In particular, \begin{align}\label{eq:LimitLn} \lim_{n\rightarrow\infty}\frac{\ell_{n}(2nq)}{n}=\lim_{n\rightarrow\infty}\frac{\mathbb{E}\left(L_{n}\right)}{n}=\gamma^{*}. \end{align} \end{corollary} \noindent \textbf{Proof.} By \eqref{eq:star}, \begin{align*} \left|\ell_{n}(k)-\ell_{n}(2nq)\right|\leq\left|k-2nq\right|,\quad k\in\mathcal{U}_{n}. \end{align*} Hence, for any $\epsilon>0$ and $n$ large enough, \begin{align*} \sum_{k\in\mathcal{U}_{n}}\frac{\left|\ell_{n}(k)-\ell_{n}(2nq)\right|}{\sqrt{2n}}\mathbb{P}\left(U_{n}=k\right)\leq (1+\epsilon)\sum_{k\in\mathcal{U}_{n}}\frac{\left|k-2nq\right|}{\sqrt{2n}}\phi_{n}(k). \end{align*} Again, \begin{align*} \sum_{k\in\mathcal{U}_{n}}\frac{\left|k-2nq\right|}{\sqrt{2n}}\phi_{n}(k)\rightarrow\int_{-\infty}^{\infty}\frac{|x|}{\sqrt{2\pi pq}}\,e^{-x^{2}/(2pq)}\,dx=\sqrt{\frac{2pq}{\pi}}, \end{align*} and by \eqref{eq:ConvProbAn}, \begin{align*} \sum_{k\not\in\mathcal{U}_{n}}\frac{\left|\ell_{n}(k)-\ell_{n}(2nq)\right|}{\sqrt{2n}}\mathbb{P}\left(U_{n}=k\right)\leq\frac{n}{\sqrt{2n}}\mathbb{P}\left(U_{n}\not\in\mathcal{U}_{n}\right)\leq\sqrt{2n}\exp\left[-2(2n)^{2\beta-1}\right]\rightarrow 0,\quad n\rightarrow\infty. \end{align*} Thus, \begin{align*} \limsup_{n\rightarrow\infty}\frac{\left|\mathbb{E}\left(L_{n}(\mathbf{Z}_{n})\right)-\ell_{n}(2nq)\right|}{\sqrt{2n}}\leq\limsup_{n\rightarrow\infty}\frac{\mathbb{E}\left(\left|\ell_{n}(U_{n})-\ell_{n}(2nq)\right|\right)}{\sqrt{2n}}\leq\sqrt{\frac{2pq}{\pi}}, \end{align*} which clearly implies \eqref{eq:LimitLn}. $\Box$ \subsection{An Upper Bound on the Rate Function in the Neighborhood of $\gamma^{*}$}\label{subsec:UpperBoundRateFunt} The goal of this subsection is to prove the claim \eqref{eq:ConjCond} in the neighborhood of $\gamma^{*}$, i.e., for some constant $B>0$ and $\widetilde{\gamma}>0$, \begin{align}\label{eq:ConjCondLocal} r(s)\leq B\left(s-\gamma^{*}\right)^{2},\quad\text{for any }\,s\in(\gamma^{*},\gamma^{*}+\widetilde{\gamma}]. \end{align} Let \begin{align*} \Lambda_{n}(t):=\log\mathbb{E}\left(e^{tL_{n}(\mathbf{Z}_{n})}\right),\quad t\in\mathbb{R}. \end{align*} The following result of~\cite{Hammersley:1974} (see also~\cite[Theorem 1]{GrossmannYakir:2004}) will be useful in the sequel of the proof: there exists $\Lambda(t)$, $t\in\mathbb{R}$, such that \begin{align*} \lim_{n\rightarrow\infty}\frac{\Lambda_{n}(t)}{n}=\Lambda(t),\quad\text{for any }\,t\in\mathbb{R}. \end{align*} Moreover, $r(s)$ and $\Lambda(t)$ are convex functions and are related via \begin{align}\label{eq:RelatRateLambda} r(s)=\Lambda^{*}(s):=\sup_{t\geq 0}\left(ts-\Lambda(t)\right),\quad\Lambda(t)=r^{*}(t)=\sup_{s\in\mathbb{R}}\left(ts-r(s)\right),\quad\text{for all }s\in\mathbb{R}\,\,\,\text{and}\,\,\,t\geq 0. \end{align} \subsubsection{A Global Upper Bound Under a Uniform Lower Bound} Recall from Proposition \ref{prop:LowerBoundMGFLnl2nq} that \begin{align*} \liminf_{n\rightarrow\infty}\,\mathbb{E}\left[\exp\left(\frac{t}{\sqrt{2n}}\left(L_{n}(\mathbf{Z})-\ell_{n}(2nq)\right)\right)\right]\geq\lambda\,e^{pq\,\varepsilon_{0}^{2}t^{2}/8}, \end{align*} where $\lambda$ is given by \eqref{eq:ConstA}. 
Hence, for every $0<\lambda_{1}<\lambda$ and $t>0$, \begin{align}\label{eq:LowerBounda1} \mathbb{E}\left[\exp\left(\frac{t}{\sqrt{2n}}\left(L_{n}(\mathbf{Z}_{n})-\ell_{n}(2nq)\right)\right)\right]\geq\lambda_{1}\,e^{pq\,\varepsilon_{0}^{2}t^{2}/8}, \end{align} provided that $n$ (which depends on $t$) is large enough. \begin{proposition}\label{lem:GlobUpBoundRateFunt} Assume that \eqref{eq:LowerBounda1} holds uniformly over $[0,\infty)$, i.e., that there exists $n_{0}\in\mathbb{N}$ (independent of $t$) such that, whenever $n\geq n_{0}$, \eqref{eq:LowerBounda1} holds for every $t\geq 0$. Then \eqref{eq:ConjCond} holds for all $s>\gamma^{*}$. \end{proposition} \noindent \textbf{Proof.} Let $\lambda_{2}:=pq\,\varepsilon_{0}^{2}/8$. Since \eqref{eq:LowerBounda1} holds uniformly in $t$, for $n\geq n_{0}$, \begin{align*} \Lambda_{n}\left(\frac{t}{\sqrt{2n}}\right)\geq\frac{\ell_{n}(2nq)t}{\sqrt{2n}}+\ln\lambda_{1}+\lambda_{2}\,t^{2},\quad\text{for any }\,t\geq 0. \end{align*} With $u=t/\sqrt{2n}$, for $n\geq n_{0}$, \begin{align*} \frac{\Lambda_{n}(u)}{n}\geq\frac{\ell_{n}(2nq)u}{n}+\frac{\ln\lambda_{1}}{n}+2\lambda_{2}u^{2},\quad\text{for any }\,u\geq 0, \end{align*} and thus (recalling \eqref{eq:LimitLn}), \begin{align*} \Lambda(u)=\lim_{n\rightarrow\infty}\frac{\Lambda_{n}(u)}{n}\geq\gamma^{*}u+2\lambda_{2}u^{2},\quad\text{for any }\,u\geq 0. \end{align*} Now, for every $s>\gamma^{*}$, $u\geq 0$, \begin{align*} su-\Lambda(u)\leq\left(s-\gamma^{*}\right)u-2\lambda_{2}u^{2}, \end{align*} which implies that \begin{align*} r(s)=\sup_{u\geq 0}\left(su-\Lambda(u)\right)\leq\sup_{u\geq 0}\left[\left(s-\gamma^{*}\right)u-2\lambda_{2}u^{2}\right]=\frac{\left(s-\gamma^{*}\right)^{2}}{8\lambda_{2}}. \end{align*} Therefore, \eqref{eq:ConjCond} holds true, for all $s>\gamma^{*}$, with $B=1/(8\lambda_{2})=1/(4pq\delta^{2})$. $\Box$ \subsubsection{Binomial Approximation With the Further Extended $\mathcal{U}_{n}$} In this part, we derive \eqref{eq:ConjCondLocal} without assuming that \eqref{eq:LowerBounda1} holds uniformly in $t$. Let $M_{q}(t)$ and $K_{q}(t)$ be respectively the moment generating function and the cumulant generating function of the Rademacher law with parameter $q$, i.e., \begin{align*} M_{q}(t):=e^{t(1-q)}q+e^{-tq}(1-q),\quad K_{q}(t):=\ln M_{q}(t)=\ln\left(e^{t(1-q)}q+e^{-tq}(1-q)\right). \end{align*} We start with the following general theorem. \begin{theorem}\label{thm:LocUpBoundRateFunt} Let $p\in(0,p_{0})$. Let there exist $t_{0}>0$ and $\delta>0$ such that, for any $t\in[0,t_{0}]$, whenever $n$ is large enough (possibly depending on $t$), \begin{align}\label{eq:lambda} \mathbb{E}\left(e^{t\left(L_{n}(\mathbf{Z}_{n})-\ell_{n}(2nq)\right)}\right)\geq\nu_{n}M^{2n}_{q}(\delta t), \end{align} where $(\nu_{n})_{n\in\mathbb{N}}\subseteq (0,\infty)$ is such that $\ln\nu_{n}=o(n)$, as $n\rightarrow\infty$. Then, there exist constants $B>0$ and $\widetilde{\gamma}>0$ such that \eqref{eq:ConjCondLocal} holds true. \end{theorem} \noindent \textbf{Proof.} Let $t\in [0,t_{0}]$. Taking logarithms in \eqref{eq:lambda} leads to \begin{align*} \Lambda_{n}(t)=\ln\mathbb{E}\left(e^{tL_{n}(\mathbf{Z}_{n})}\right)\geq t\,\ell_{n}(2nq)+\ln\nu_{n}+2n\,K_{q}(\delta t). \end{align*} Dividing both sides of the above inequality by $n$, letting $n\rightarrow\infty$, and using \eqref{eq:LimitLn} together with $\ln\nu_{n}=o(n)$, leads to \begin{align}\label{lambdabound} \Lambda(t)\geq t\gamma^{*}+2K_{q}(\delta t).
\end{align} Now for any $s\in\mathbb{R}$, \begin{align*} ts-\Lambda(t)\leq ts-t\gamma^{*}-2K_{q}(\delta t)=t(s-\gamma^{*})-2K_{q}(\delta t), \end{align*} which implies that \begin{align}\label{eq:SupLambda0t0} \sup_{t\in [0,t_{0})}\left(ts-\Lambda(t)\right)\leq\sup_{t\in [0,t_{0})}\left[t\left(s-\gamma^{*}\right)-2K_{q}(\delta t)\right]. \end{align} We next show that $\Lambda'(0+)=\gamma^{*}$ (recall that $\Lambda(t)$ and $r(s)$ are related to each other only for $t\in[0,\infty)$, see \eqref{eq:RelatRateLambda}). First, by \eqref{lambdabound}, since $K_{q}(\delta t)\geq 0$ for any $t\geq 0$, we have \begin{align}\label{eq:LowerBoundLambdat} \Lambda(t)\geq\gamma^{*}t,\quad\text{for any }\,t\geq 0. \end{align} Next, by definition, $r(s)=0$ for any $s<\gamma^{*}$. As stated in~\cite[Theorem 1]{GrossmannYakir:2004}, $r$ is a convex and, therefore, continuous function, and so $r(\gamma^{*})=0$. Thus, the function $r$ is equal to zero up to $\gamma^{*}$ and is strictly increasing afterwards, and so, \begin{align*} \Lambda(t)=\sup_{s\in\mathbb{R}}\left(ts-r(s)\right)=\sup_{s>\gamma^{*}}\left(ts-r(s)\right). \end{align*} Using \eqref{eq:LowerBoundRateFunt} with $K=1$ in the LCS case, for any $t>0$, \begin{align}\label{eq:UpperBoundLambdat} \Lambda(t)=\sup_{s>\gamma^{*}}\left(ts-r(s)\right)\leq\sup_{s>\gamma^{*}}\left[ts-\left(s-\gamma^{*}\right)^{2}\right]=ts_{t}-\left(s_{t}-\gamma^{*}\right)^{2}=t\gamma^{*}+\frac{1}{4}t^{2}, \end{align} where $s_{t}:=t/2+\gamma^{*}>\gamma^{*}$, $t>0$. Combining \eqref{eq:LowerBoundLambdat} and \eqref{eq:UpperBoundLambdat}, it follows that $\Lambda'(0+)=\gamma^{*}$, which, together with \eqref{eq:SupLambda0t0}, implies that there exists $\widetilde{\gamma}_{1}>0$, such that for any $s\in(\gamma^{*},\gamma^{*}+\widetilde{\gamma}_{1}]$, \begin{align*} r(s)=\sup_{t\geq 0}\left(ts-\Lambda(t)\right)=\sup_{t\in[0,t_{0}]}\left(ts-\Lambda(t)\right)\leq\sup_{t\in[0,t_{0}]}\left[t\left(s-\gamma^{*}\right)-2K_{q}(\delta t)\right]. \end{align*} Now, \begin{align*} \sup_{t\in [0,t_{0}]}\left[t\left(s-\gamma^{*}\right)-2K_{q}(\delta t)\right]&=2\sup_{t\in [0,t_{0}]}\left[\delta t\left(q+\frac{s-\gamma^{*}}{2\delta}\right)-\ln\left(qe^{t\delta}+(1-q)\right)\right]\\ &=2\sup_{u\in[0,\delta t_{0}]}\left[u\left(q+\frac{s-\gamma^{*}}{2\delta}\right)-\ln\left(qe^{u}+(1-q)\right)\right]. \end{align*} With \begin{align*} x:=q+\frac{s-\gamma^{*}}{2\delta}, \end{align*} we obtain that the solution of \begin{align*} x=\frac{qe^{u}}{qe^{u}+(1-q)} \end{align*} is \begin{align*} u(x)=\ln\frac{x(1-q)}{q(1-x)}. \end{align*} Clearly, there exists $\widetilde{\gamma}_{2}>0$, such that when $s-\gamma^{*}\leq\widetilde{\gamma}_{2}$, $u(x)\in [0,\delta t_{0}]$, and in this case \begin{align*} \sup_{u\in[0,\delta t_{0})}\left[u\left(q+\frac{s-\gamma^{*}}{2\delta}\right)-\ln\left(qe^{u}+(1-q)\right)\right]&=\sup_{u\geq 0}\left[u\left(q+\frac{s-\gamma^{*}}{2\delta}\right)-\ln\left(qe^{u}+(1-q)\right)\right]\\ &=u(x)x-\ln\left(qe^{u(x)}+(1-q)\right)\\&=x\ln{x\over q}+(1-x)\ln {1-x\over 1-q}\\ &=:D(x||q). \end{align*} Since for $w>0$, $\ln(1+w)<w$, and since for $w\in(0,1)$, $\ln(1-w)<-w$, it follows that \begin{align*} D(x||q)\leq x\,\frac{x-q}{q}-(1-x)\frac{x-q}{1-q}=\frac{(x-q)^{2}}{q(1-q)}=\frac{1}{4\delta^{2}q(1-q)}\left(s-\gamma^{*}\right)^{2}. \end{align*} Therefore, \eqref{eq:ConjCondLocal} holds with $B=1/(4\delta^{2}q(1-q))$ and $\widetilde{\gamma}=\min(\widetilde{\gamma}_{1},\widetilde{\gamma}_{2})$. 
$\Box$ \begin{remark}\label{rem:GlobLowerBoundRateFunt} By examining the above proof, it is easy to see that if \eqref{eq:lambda} holds for any $t>0$, then \eqref{eq:ConjCond} holds for any $s>\gamma^{*}$. \end{remark} Let us return to \eqref{eq:lambda} and examine under which conditions the hypotheses of Theorem \ref{thm:LocUpBoundRateFunt} are satisfied. Since $U_{n}\sim B(2n,q)$, \begin{align*} \sum_{k=0}^{2n}e^{t\delta\left(k-2nq\right)}\mathbb{P}\left(U_{n}=k\right)=M_{q}^{2n}(\delta t), \end{align*} one might try to use the first term in \eqref{eq:DecomMGFLZl2pn} to get \eqref{eq:lambda}: \begin{align*} \mathbb{E}\left(e^{t\left(L_{n}(\mathbf{Z}_{n})-\ell_{n}(2nq)\right)}\right)\geq\sum_{k\in\mathcal{U}_{n},\,k\geq 2nq}e^{t\left(\ell_{n}(k)-\ell_{n}(2nq)\right)}\mathbb{P}\left(U_{n}=k\right). \end{align*} Thus \eqref{eq:lambda} holds if \begin{align*} \sum_{k=2nq}^{2nq+(2n)^{\beta}}e^{t\delta\left(k-2nq\right)}\mathbb{P}\left(U_{n}=k\right)\geq\nu_{n}\,M_{q}^{2n}(\delta t), \end{align*} for a sequence $(\nu_{n})_{n\in\mathbb{N}}$ of positive reals such that $\ln\nu_{n}=o(n)$ (as $n\rightarrow\infty$). Unfortunately, there does not exist such $(\nu_{n})_{n\in\mathbb{N}}$, since \begin{align*} \frac{1}{n}\ln\left(\sum_{k=2nq}^{2nq+(2n)^{\beta}}e^{t\delta(k-2nq)}\mathbb{P}\left(U_{n}=k\right)\right)<\frac{1}{n}\ln\left(\exp\left(t\delta (2n)^{\beta}\right)\right)\rightarrow 0,\quad n\rightarrow\infty, \end{align*} while $\frac{1}{n}\ln\left(\nu_{n}\,M_{q}^{2n}(\delta t)\right)\rightarrow 2K_{q}(\delta t)>0$, for any $t>0$. To obtain a bound of the form \eqref{eq:lambda}, we therefore need to further enlarge $\mathcal{U}_{n}$. The following lemma shows how to do so. \begin{lemma}\label{lem:LargerUn} For any $\eta>0$, there exists $b:=b(\eta)>0$ such that for every $n\in\mathbb{N}$ large enough, \begin{align*} \inf_{k:\,|k-2nq|<bn}\mathbb{P}\left(U_{n}=k\right)\geq e^{-\eta n}. \end{align*} \end{lemma} \noindent \textbf{Proof.} Since $U_{n}\sim B(2n,q)$, \begin{align*} \mathbb{P}\left(U_{n}=k\right)=\binom{2n}{k}q^{k}(1-q)^{2n-k},\quad k=0,\ldots,2n. \end{align*} Next (cf.~\cite[Example 12.1.3]{CoverThomas:2006}), \begin{align*} \binom{2n}{k}\geq\frac{1}{2n+1}\exp\left(2n\,h_{e}\left(\frac{k}{2n}\right)\right),\quad k=0,\ldots,2n,\quad n\in\mathbb{N}, \end{align*} where $h_{e}$ is the binary entropy function of base $e$: \begin{align*} h_{e}(q)=-q\ln q-(1-q)\ln(1-q),\quad q\in(0,1). \end{align*} Hence, \begin{align*} \mathbb{P}\left(U_{n}=k\right)\geq\frac{1}{2n+1}\exp\left(2n\left[h_{e}\left(\frac{k}{2n}\right)+\frac{k}{2n}\ln q+\left(1-\frac{k}{2n}\right)\ln(1-q)\right]\right). \end{align*} The continuous function $g(x)=h_{e}(x)+x\ln q+(1-x)\ln(1-q)$, $x\in(0,1)$, is such that $g(x)\leq 0$, for any $x\in(0,1)$, with $g(q)=0$. Thus, for any $\eta>0$, there exists $b(\eta)>0$ such that, whenever $|x-q|\leq b/2$, $g(x)\geq -\eta/4$, and so, if $|k-2nq|\leq b(\eta)n$, \begin{align*} h_{e}\left(\frac{k}{2n}\right)+\frac{k}{2n}\ln q+\left(1-\frac{k}{2n}\right)\ln(1-q)>-\frac{\eta}{4}. \end{align*} Therefore, for $n$ (depending only on $\eta$) large enough, \begin{align*} \inf_{k:\,|k-2nq|<bn}\mathbb{P}\left(U_{n}=k\right)\geq\frac{1}{2n+1}\exp\left(-\frac{\eta}{2}n\right)\geq e^{-\eta n}. \end{align*} The proof is now complete. $\Box$ Using Lemma \ref{lem:LargerUn}, we can now verify the condition \eqref{eq:lambda} of Theorem \ref{thm:LocUpBoundRateFunt}. \begin{theorem}\label{thm:LowerBoundCondRateFunt} Let $p\in(0,p_{0})$.
Let $\varepsilon_{0}:=\varepsilon_{1}-\varepsilon_{2}>0$, where $\varepsilon_{1}>0$ and $\varepsilon_{2}>0$ are given in Theorem \ref{thm:LCSLargeProb}, and let $\delta:=\varepsilon_{0}/2$. Let $t_{0}>0$ be the unique positive solution to \begin{align}\label{eq:tMq} 2\delta t-b^{2}-2\ln M_{q}(\delta t)=0. \end{align} Then, for any $\epsilon>0$ and any $t\in [0,t_{0}]$, there exists $N(t)\in\mathbb{N}$ such that for any $n\geq N(t)$, \eqref{eq:lambda} holds true with $\nu_{n}\equiv 1-\epsilon$, $n\in\mathbb{N}$. \end{theorem} \noindent \textbf{Proof.} By Lemma \ref{lem:LargerUn}, the set $\mathcal{U}_{n}$ can now be further enlarged. Indeed, taking $0<\eta<c_{1}$ (where $c_{1}>0$ is given in Theorem \ref{thm:LCSLargeProb}), there exists $b:=b(\eta)>0$ such that \begin{align*} \sup_{k:\,|k-2nq|\leq bn}\frac{e^{-c_{1}n}}{\mathbb{P}\left(U_{n}=k\right)}\leq e^{(\eta-c_{1})n}\rightarrow 0,\quad n\rightarrow\infty. \end{align*} Let us therefore take \begin{align*} \mathcal{U}_{n}=\left[2nq-bn,\,2nq+bn\right]\cap\mathcal{S}_{n}. \end{align*} With $\varphi(n)=e^{-\eta n}$ and by Theorem \ref{thm:LCSLargeProb}, the inequalities \eqref{eq:DeltafastVarphi} (with $\varepsilon_{0}=\varepsilon_{1}-\varepsilon_{2}$) and \eqref{eq:GenLowerBoundProbU} are both satisfied. Since in our LCS case \eqref{eq:RanTransImpli} also holds, all the assumptions of Theorem \ref{thm:FastConvThm} are verified and thus, for any $k\in\mathcal{U}_{n}$, $\ell_{n}(k+1)-\ell_{n}(k)\geq\delta=\varepsilon_{0}/2$, provided that $n$ is large enough. Therefore, by the conditional Jensen's inequality, \begin{align} \mathbb{E}\left(e^{t\left(L_{n}(\mathbf{Z}_{n})-\ell_{n}(2nq)\right)}\right)&\geq\mathbb{E}\left(e^{t\left(\ell_{n}(U_{n})-\ell_{n}(2nq)\right)}\right)\geq\sum_{k=2nq}^{2nq+bn}e^{t\left(\ell_{n}(k)-\ell_{n}(2nq)\right)}\mathbb{P}\left(U_{n}=k\right)\nonumber\\ \label{eq:ExpLnS3Ineq} &\geq\sum_{k=2nq}^{2nq+bn}e^{t\delta\left(k-2nq\right)}\mathbb{P}\left(U_{n}=k\right)=:S_{n}^{(3)}(t). \end{align} On the other hand, \begin{align*} S_{n}(t):=M_{q}(\delta t)^{2n}&=\mathbb{E}\left(e^{\delta t\left(U_{n}-2nq\right)}\right)=\left[e^{t\delta(1-q)}q+e^{-t\delta q}(1-q)\right]^{2n}\\ &=\sum_{k=0}^{2nq-bn-1}e^{\delta t\left(k-2nq\right)}\mathbb{P}\left(U_{n}=k\right)+\sum_{k=2nq-bn}^{2nq-1}e^{\delta t\left(k-2nq\right)}\mathbb{P}\left(U_{n}=k\right)\\ &\quad+\sum_{k=2nq}^{2nq+bn}e^{\delta t\left(k-2nq\right)}\mathbb{P}\left(U_{n}=k\right)+\sum_{k=2nq+bn+1}^{2n}e^{\delta t\left(k-2nq\right)}\mathbb{P}\left(U_{n}=k\right)\\ &=:S_{n}^{(1)}(t)+S_{n}^{(2)}(t)+S_{n}^{(3)}(t)+S_{n}^{(4)}(t). \end{align*} It is easy to see that for every $t>0$, \begin{align*} S_{n}^{(2)}(t)\leq\sum_{k=2nq-bn}^{2nq-1}e^{t\delta(k-2nq)}=\sum_{j=1}^{nb}e^{-t\delta j}=\frac{e^{-t\delta}-e^{-t\delta(nb)}}{1-e^{-t\delta}}\rightarrow\frac{1}{e^{t\delta}-1}=:c(t),\quad n\rightarrow\infty. \end{align*} Moreover, for any $t>0$, \begin{align*} S_{n}^{(1)}(t)\!+\!S_{n}^{(4)}(t)\!=\!\left(\sum_{k=0}^{2nq-bn-1}\!\!\!+\!\!\!\sum_{k=2nq+bn+1}^{2n}\right)\!e^{t\delta(k-2nq)}\mathbb{P}\!\left(U_{n}\!=\!k\right)\!<\!e^{2\delta tn}\mathbb{P}\!\left(\left|U_{n}\!-\!2nq\right|\!>\!nb\right)\!\leq\!2e^{\left(2\delta t-b^{2}\right)n}, \end{align*} where the last inequality follows from Hoeffding's exponential inequality. Thus, for $t>0$, \begin{align*} 1=\frac{S_{n}^{(1)}(t)+S_{n}^{(2)}(t)+S_{n}^{(3)}(t)+S_{n}^{(4)}(t)}{S_{n}(t)}\leq\frac{c(t)}{S_{n}(t)}+\frac{S_{n}^{(3)}(t)}{S_{n}(t)}+2\exp\left(\left(2\delta t-b^{2}-2\ln M_{q}(\delta t)\right)n\right). 
\end{align*} Hence, for any $0<t<t_{0}$, where $t_{0}$ is the solution to \eqref{eq:tMq}, \begin{align*} \frac{S_{n}^{(3)}(t)}{S_{n}(t)}\rightarrow 1,\quad\text{as }\,n\rightarrow\infty. \end{align*} Together with \eqref{eq:ExpLnS3Ineq}, it follows that, for every $t\in(0,t_{0}]$, there exists $N(t)\in\mathbb{N}$, such that for any $n\geq N(t)$, \begin{align*} \mathbb{E}\left(e^{t\left(L_{n}(\mathbf{Z}_{n})-\ell_{n}(2nq)\right)}\right)\geq S_{n}^{(3)}(t)\geq (1-\epsilon)M_{q}(\delta t)^{2n}, \end{align*} which completes the proof. $\Box$ \section{Beyond the Binary LCS Case: Discussion}\label{sec:diss} In this paper we applied the general methodology developed in Section \ref{sec:GenMomentBounds} to the binary LCS case with strongly asymmetric distributions. From Theorem \ref{thm:LCSLargeProb}, this setup is one of the few known models where a suitable random transformation and an $\varepsilon_{0}$ (such that \eqref{eq:RanTransImpli} and $\Delta_{n}(\varepsilon_{0})\rightarrow 0$ both hold) have been proven to exist. In the binary LCS case, Theorem \ref{thm:LCSLargeProb} assumes a very asymmetric distribution, and further research is needed to relax this asymmetry assumption. Other setups where the assumptions of Theorem \ref{thm:FastConvThm} have been proved to hold were studied in~\cite{HoudreMatzinger:2007}, \cite{LemberMatzingerTorres:2012(2)} and~\cite{LemberMatzingerTorres:2013}. For example, in~\cite{LemberMatzingerTorres:2012(2)} and~\cite{LemberMatzingerTorres:2013}, the random variables $(X_{n})_{n\in\mathbb{N}}$ and $(Y_{n})_{n\in\mathbb{N}}$ are all i.i.d., every letter $\beta\in\mathcal{A}$ is taken with a positive probability (i.e., $\mathbb{P}(X_{1}=\beta)>0$), and the scoring function $S$ as well as the distribution of $X_{1}$ satisfy the following {\it asymmetry assumption}. \begin{assumption}\label{assump:mimi} There exist $a,b\in\mathcal{A}$ such that \begin{align}\label{eq:condmimi} \sum_{\beta\in\mathcal{A}}\mathbb{P}(X_{1}=\beta)\left(S(b,\beta)-S(a,\beta)\right)>0. \end{align} \end{assumption} For the binary alphabet $\mathcal{A}=\{a,b\}$, condition \eqref{eq:condmimi} reads \begin{align*} \left(S(b,a)-S(a,a)\right)\mathbb{P}\left(X_{1}=a\right)+\left(S(b,b)-S(b,a)\right)\mathbb{P}\left(X_{1}=b\right)>0. \end{align*} Since $S$ is symmetric and one could exchange $a$ and $b$, the condition \eqref{eq:condmimi} actually becomes \begin{align*} \left(S(b,a)-S(a,a)\right)\mathbb{P}\left(X_{1}=a\right)+\left(S(b,b)-S(b,a)\right)\mathbb{P}\left(X_{1}=b\right)\neq 0. \end{align*} When $S(b,b)=S(a,a)>S(b,a)$ (recall that $S$ is assumed to be symmetric and non-constant), Assumption \ref{assump:mimi} is satisfied if and only if $\mathbb{P}(X_{1}=a)\neq\mathbb{P}(X_{1}=b)$. Thus, in terms of the distribution of $X_{i}$ and of the scoring function $S$, \eqref{eq:condmimi} requires much less than Theorem~\ref{thm:LCSLargeProb}. However, the price to be paid there is in terms of $\delta$. Namely, the analogue of Theorem~\ref{thm:LCSLargeProb} can be shown to hold only if the gap penalty $-\delta$ is sufficiently large. The theorem itself is as follows (cf.~\cite[Theorem 3.1]{LemberMatzingerTorres:2012(2)}): \begin{theorem}\label{thm:app1} Let Assumption \ref{assump:mimi} hold. Let $a,b\in\mathcal{A}$ be as in \eqref{eq:condmimi}, and let the random transformation $\mathcal{R}$ turn the letter $a$ in $\mathbf{Z}_{n}$ into the letter $b$.
Then there exist constants $\delta_{0}<0$, $\epsilon_{0}>0$, $\alpha>0$, and $n_{0}\in\mathbb{N}$, such that for any $\delta<\delta_{0}$ and $n\geq n_{0}$, \begin{align*} \mathbb{P}\left(\mathbb{E}\left(\left.L_{n}(\mathcal{R}(\mathbf{Z}_{n}))-L_{n}(\mathbf{Z}_{n})\,\right|\mathbf{Z}_{n}\right)\geq\epsilon_{0}\right)\geq 1-e^{-\alpha n}. \end{align*} \end{theorem} \begin{remark}\label{rem:GapPrice} Typically $\delta_{0}<0$, so that the condition $\delta<\delta_{0}$ indicates that the gap penalty $-\delta$ has to be sufficiently large. Hence, the result does not apply to the LCS case. Intuitively, the larger the gap penalty (the smaller the gap price), the fewer gaps appear in the optimal alignment, so that the optimal alignment is closer to the pairwise comparison (Hamming distance). Some methods for determining a sufficient $\delta_{0}$, as well as some examples, are discussed in~\cite{LemberMatzingerTorres:2013}. We believe that the assumption on $\delta$ can be relaxed so that Theorem \ref{thm:app1} holds under more general assumptions. \end{remark} If the assumptions of Theorem \ref{thm:app1} hold (i.e., the scoring function $S$ and the law of $X_{1}$ satisfy Assumption \ref{assump:mimi} and $\delta$ is sufficiently small) and the alphabet $\mathcal{A}=\{a,b\}$ is binary, then the situation is exactly as in the binary LCS case considered in Section \ref{sec:LCS} except that $A$ (see \eqref{eq:DiffLtildeZLZ}) might not be $1$. However, the constant $A$ is not essential in any of the proofs, so all the results of Sections \ref{sec:LCS} and \ref{sec:Rate} continue to hold (with appropriate changes stemming from $A$). If $|\mathcal{A}|>2$, then to apply Theorem \ref{thm:app1}, one should use a more general version of Theorem \ref{thm:FastConvThm}, see~\cite[Theorem 2.2]{LemberMatzingerTorres:2012(2)}. The same theorem should apply to the LCS case when the alphabet is not binary, but the model is still strongly asymmetric, i.e., one letter has to have a very large probability. \end{document}
\begin{document} \begin{frontmatter} \title{An Erd\H{o}s-Ko-Rado theorem in general linear groups} \author[JG]{Jun Guo}\ead{[email protected]} \author[KW]{Kaishun Wang\corref{cor}} \ead{[email protected]} \cortext[cor]{Corresponding author} \address[JG]{Math. and Inf. College, Langfang Teachers' College, Langfang 065000, China } \address[KW]{Sch. Math. Sci. \& Lab. Math. Com. Sys., Beijing Normal University, Beijing 100875, China} \begin{abstract} Let $S_n$ be the symmetric group on $n$ points. Deza and Frankl [M. Deza and P. Frankl, On the maximum number of permutations with given maximal or minimal distance, J. Combin. Theory Ser. A 22 (1977) 352--360] proved that if ${\cal F}$ is an intersecting set in $S_n$ then $|{\cal F}|\leq(n-1)!$. In this paper we consider the $q$-analogue version of this result. Let $\mathbb{F}_q^n$ be the $n$-dimensional row vector space over a finite field $\mathbb{F}_q$ and $GL_n(\mathbb{F}_q)$ the general linear group of degree $n$. A set ${\cal F}_q\subseteq GL_n(\mathbb{F}_q)$ is {\it intersecting} if for any $T,S\in{\cal F}_q$ there exists a non-zero vector $\alpha\in \mathbb{F}_q^n$ such that $\alpha T=\alpha S$. Let ${\cal F}_q$ be an intersecting set in $GL_n(\mathbb{F}_q)$. We show that $|{\cal F}_q|\leq q^{(n-1)n/2}\prod_{i=1}^{n-1}(q^i-1)$. \end{abstract} \begin{keyword} Erd\H{o}s-Ko-Rado theorem\sep general linear group \end{keyword} \end{frontmatter} \section*{} The Erd\H{o}s-Ko-Rado theorem \cite{EKR} is a central result in extremal combinatorics. There are many interesting proofs and extensions of this theorem, for a summary see \cite{Deza2}. Let $S_n$ be the symmetric group on $n$ points. A set ${\cal F}\subseteq S_n$ is {\it intersecting} if for any $f, g\in{\cal F}$ there exists an $x\in [n]$ such that $f(x) = g(x)$. The following result is an Erd\H{o}s-Ko-Rado theorem for intersecting families of permutations. \begin{thm}\label{thm1.3} Let ${\cal F}$ be an intersecting set in $S_n$. Then \begin{itemize} \item[\rm(i)] {\rm(Deza and Frankl \cite{Deza})} $|{\cal F}|\leq(n-1)!$. \item[\rm(ii)] {\rm(Cameron and Ku \cite{Cameron})} Equality in {\rm(i)} holds if and only if ${\cal F}$ is a coset of the stabilizer of a point. \end{itemize} \end{thm} Wang and Zhang \cite{WZ} gave a simple proof of Theorem~\ref{thm1.3}. Recently, Godsil and Meagher \cite{Godsil} presented another proof. In this paper we consider the $q$-analogue of Theorem~\ref{thm1.3}, and obtain an Erd\H{o}s-Ko-Rado theorem in general linear groups. Let $\mathbb{F}_q$ be a finite field and $\mathbb{F}_q^n$ the $n$-dimensional row vector space over $\mathbb{F}_q$. The set of all $n\times n$ nonsingular matrices over $\mathbb{F}_q$ forms a group under matrix multiplication, called the {\it general linear group} of degree $n$ over $\mathbb{F}_q$, denoted by $GL_{n}(\mathbb{F}_q)$. There is an action of $GL_{n}(\mathbb{F}_q)$ on $\mathbb{F}_q^{n}$ defined as follows: \begin{eqnarray} \mathbb{F}_q^{n}\times GL_{n}(\mathbb{F}_q)& \longrightarrow& \mathbb{F}_q^{n}\nonumber\\ ((x_1,x_2,\ldots,x_{n}),T)&\longmapsto& (x_1,x_2,\ldots,x_{n})T.\nonumber \end{eqnarray} Let $P$ be an $m$-subspace of $\mathbb{F}_q^{n}$. Denote also by $P$ an $m\times n$ matrix of rank $m$ whose rows span the subspace $P$ and call the matrix $P$ a matrix representation of the subspace $P$. \begin{defin} A set ${\cal F}_q\subseteq GL_n(\mathbb{F}_q)$ is {\it intersecting} if for any $T, S\in{\cal F}_q$ there exists a non-zero vector $\alpha\in \mathbb{F}_q^{n}$ such that $\alpha T = \alpha S$. 
\end{defin} In this paper, we shall prove the following result: \begin{thm}\label{thm1.4} Let ${\cal F}_q$ be an intersecting set in $GL_n(\mathbb{F}_q)$. Then $|{\cal F}_q|\leq q^{(n-1)n/2}\prod_{i=1}^{n-1}(q^i-1)$. \end{thm} For the group $GL_n(\mathbb{F}_q)$ we can define a graph, denoted by $\Gamma$, on vertex set $GL_n(\mathbb{F}_q)$ by joining $T$ and $S$ if they are intersecting. Since $GL_n(\mathbb{F}_q)$ is an automorphism group of $\Gamma$, this graph is vertex-transitive. In order to prove Theorem~\ref{thm1.4}, we require a useful lemma obtained by Cameron and Ku and a classical result about finite geometry. \begin{lemma}{\rm (\cite{Cameron})}\label{lem2.1} Let $C$ be a clique and $A$ a coclique in a vertex-transitive graph on $v$ vertices. Then $|C||A|\leq v$. Equality implies that $|C\cap A|=1$. \end{lemma} An {\it $n$-spread} of $\mathbb{F}_q^l$ is a collection of $n$-subspaces $\{W_1,\ldots,W_t\}$ such that every non-zero vector in $\mathbb{F}_q^l$ belongs to exactly one $W_i$. \begin{thm} {\rm(\cite{Demb})}\label{Thm:spread} An $n$-spread of $\mathbb{F}_q^l$ exists if and only if $n$ is a divisor of $l$. \end{thm} \begin{lemma}\label{lem2.4} Let $\alpha(\Gamma)$ be the size of the largest coclique of $\Gamma$. Then $\alpha(\Gamma)=q^n-1$. \end{lemma} \begin{pf} By Theorem~\ref{Thm:spread}, there exists an $n$-spread $\{W_0,W_1,\ldots,W_{q^n}\}$ of $\mathbb{F}_q^{2n}$. Since $W_0\cap W_{q^n}=\{0\}$ and $W_0+W_{q^n}=\mathbb{F}_q^{2n}$, by \cite[Theorem~1.3]{wanbook}, there exists a $G\in GL_{2n}(\mathbb{F}_q)$ such that $W_0G=(I^{(n)}\;0^{(n)}),W_{q^n}G=(0^{(n)}\;I^{(n)})$, and $\{W_0G,W_1G,\ldots,W_{q^n}G\}$ is an $n$-spread of $\mathbb{F}_q^{2n}$, where $I^{(n)}$ is the identity matrix of order $n$ and $0^{(n)}$ is the zero matrix of order $n$. Without loss of generality, we may assume that $W_0=(I^{(n)}\;0^{(n)})$ and $W_{q^n}=(0^{(n)}\;I^{(n)})$. Then each $W_i\,(1\leq i\leq q^n-1)$ has a matrix representation of the form $(I^{(n)}\;T_i)$, where $T_i\in GL_n(\mathbb{F}_q)$. For all $1\leq i\not=j\leq q^n-1$, since $W_i+W_j$ is of dimension $2n$, $T_i-T_j\in GL_n(\mathbb{F}_q)$. By the fact that $T_i-T_j\in GL_n(\mathbb{F}_q)$ if and only if $\alpha T_i\not=\alpha T_j$ for all $\alpha\in\mathbb{F}_q^n\backslash\{0\}$, $\{T_1,\ldots,T_{q^n-1}\}$ is a coclique of $\Gamma$; and so $\alpha(\Gamma)\geq q^n-1$. Suppose $\alpha(\Gamma)>q^n-1$ and ${\cal I}=\{T_1,T_2,\ldots,T_{\alpha(\Gamma)}\}$ is a coclique of $\Gamma$. Then $T_i-T_j\in GL_n(\mathbb{F}_q)$ for all $1\leq i\not=j\leq\alpha(\Gamma)$. Take $W_0=(I^{(n)}\;0^{(n)})$, $W_{\alpha(\Gamma)+1}=(0^{(n)}\;I^{(n)})$ and $W_i=(I^{(n)}\;T_i)\;(1\leq i\leq \alpha(\Gamma))$. Then $W_k\cap W_l=\{0\}$ for all $0\leq k\not=l\leq\alpha(\Gamma)+1$. The number of non-zero vectors in $\bigcup_{k=0}^{\alpha(\Gamma)+1}W_k\subseteq \mathbb{F}_q^{2n}$ is $(\alpha(\Gamma)+2)(q^n-1)>(q^n+1)(q^n-1)=q^{2n}-1$, a contradiction. \qed \end{pf} Combining Lemma~\ref{lem2.1} and Lemma~\ref{lem2.4}, we complete the proof of Theorem~\ref{thm1.4}. Let $G_v$ be the stabilizer of a given non-zero vector $v$ in $GL_n(\mathbb F_q)$. Then $G_v$ is an intersecting set meeting the bound in Theorem~\ref{thm1.4}. It would be interesting to characterize the intersecting sets meeting the bound in Theorem~\ref{thm1.4}. \section*{Acknowledgment} This research is partially supported by NSF of China (10971052, 10871027), NCET-08-0052, Langfang Teachers' College (LSZB201005), and the Fundamental Research Funds for the Central Universities of China. \end{document}
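As an aside to the paper above (not part of it), the bound of Theorem~\ref{thm1.4} and the value $\alpha(\Gamma)=q^{n}-1$ of Lemma~\ref{lem2.4} can be verified by brute force in the smallest case $n=2$, $q=2$, where the bound equals $2$ and $q^{n}-1=3$. The following Python sketch uses the fact, noted in the proof above, that $T$ and $S$ intersect precisely when $T-S$ is singular.
\begin{verbatim}
# Brute-force check for GL_2(F_2): largest intersecting set and largest coclique.
import itertools

def det_mod2(m):                       # determinant of a 2x2 matrix (a,b,c,d) over F_2
    return (m[0] * m[3] - m[1] * m[2]) % 2

GL = [m for m in itertools.product(range(2), repeat=4) if det_mod2(m) == 1]

def intersecting(a, b):                # T, S intersect iff T - S is singular
    return det_mod2(tuple((x - y) % 2 for x, y in zip(a, b))) == 0

def largest(pred):                     # largest subset of GL with pred on all pairs
    return max(r for r in range(1, len(GL) + 1)
               for s in itertools.combinations(GL, r)
               if all(pred(x, y) for x, y in itertools.combinations(s, 2)))

print(largest(intersecting))                          # expected: 2  (Theorem thm1.4)
print(largest(lambda x, y: not intersecting(x, y)))   # expected: 3  (Lemma lem2.4)
\end{verbatim}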
\begin{document} \operatorname{m}aketitle \begin{abstract} We construct a nonexpansive linear operator on the \mathbb Gurarii\ space that ``captures" all nonexpansive linear operators between separable Banach spaces. Some additional properties involving its restrictions to finite-dimensional subspaces describe this operator uniquely up to an isometry. \noindent {\bf MSC (2010):} 47A05, 47A65, 46B04. \noindent {\bf Keywords:} Isometrically universal operator, \mathbb Gurarii\ space, almost isometry. \end{abstract} \tableofcontents \section{Introduction} There exist at least two different notions of universal operators between Banach spaces (by an \emph{operator} we mean a bounded linear operator). Perhaps the most popular one, due to Caradus~\cite{Caradus} is the following: An operator $\operatorname{m}ap U X X$ is \emph{universal} if for every other operator $\operatorname{m}ap T X X$ there exist a $U$-invariant subspace $Y \subseteq X$ and a linear isomorphism $\operatorname{m}ap \varphi X Y$ such that ${\lambda} T = \varphi^{-1} \circ (U\restriction Y) \circ \varphi$ for some constant ${\lambda} > 0$. Caradus~\cite{Caradus} described universal operators on the separable Hilbert space. Some arguments from dilation theory show that the left-shift on the Hilbert space is actually universal in a stronger sense: the isomorphism $\varphi$ is a linear isometry, whenever the operator $T$ is contractive and satisfies $\lim_{n\to\infty}T^n x = 0$ for every $x \in H$. The details can be found in~\cite{AmbMul}. A much weaker notion of a universal operator is due to Lindenstrauss and Pe\l czy\'nski~\cite{LinPel}: An operator $U$ is \emph{universal} for a given class ${\cal{F}}$ of operators if for every $T \in {\cal{F}}$ there exist operators $L, R$ such that $L \circ T \circ R = U$. One of the results of \cite{LinPel} says that the ``partial sums'' operator $\operatorname{m}ap U {\ell_1}{\ell_\infty}$ is universal for the class of non-compact operators. We are concerned with a natural concept of an \emph{isometrically universal} operator, that is, an operator $U$ between separable Banach spaces having the property that for every other operator acting on separable Banach spaces, whose norm does not exceed the norm of $U$, there exist isometric embeddings $i$, $j$ such that $U \circ i = j \circ T$. This property is weaker than the isometric variant of Caradus' concept (since we allow two different embeddings and no invariant subspace), although much stronger than the universality in the sense of Lindenstrauss and Pe\l czy\'nski. Our main result is the existence of an isometrically universal operator $\mathbf\Omega$. We also formulate its extension property which describes this operator uniquely, up to isometries. This is in contrast with the result of Caradus, where a rather general criterion for being universal is given. It turns out that both the domain and the co-domain of our operator $\mathbf\Omega$ are isometric to the \mathbb Gurarii\ space. Recall that the \emph{\mathbb Gurarii\ space} is the unique separable Banach space $\mathbb G$ satisfying the following condition: Given finite-dimensional Banach spaces $X \subseteq Y$, $\varepsilon > 0$, every isometric embedding $\operatorname{m}ap i X \mathbb G$ extends to an $\varepsilon$-isometric embedding $\operatorname{m}ap j Y \mathbb G$. This space was constructed by \mathbb Gurarii~\cite{gurarii} in 1966; the non-trivial fact that it is unique up to isometry is due to Lusky \cite{lusky} in 1976. 
An elementary proof has been recently found by Solecki and the second author~\cite{KS}. For a recent survey of the \mathbb Gurarii\ space and its non-separable versions we refer to \cite{kubis-gar}. We shall construct a nonexpansive (i.e. of norm $\leq1$) linear operator $\operatorname{m}ap {\mathbf\Omega}\mathbb G \mathbb G$ with the following property: Given an arbitrary linear operator $\operatorname{m}ap T X Y$ between separable Banach spaces such that $\norm T \leq 1$, there exist isometric copies $X' \subseteq \mathbb G$ and $Y' \subseteq \mathbb G$ of $X$ and $Y$ respectively, such that $\img {\mathbf\Omega}{X'} \subseteq Y'$ and $\mathbf\Omega \restriction X$ is isometric to $T$. More formally, there exist isometric embeddings $\operatorname{m}ap i X \mathbb G$ and $\operatorname{m}ap j Y \mathbb G$ such that the following diagram is commutative. $$\xymatrix{ \mathbb G \ar[r]^{\mathbf\Omega} & \mathbb G \\ X \ar[u]^i \ar[r]_T & Y \ar[u]_j }$$ In other words, up to linear isometries, restrictions of $\mathbf\Omega$ to closed subspaces of $\mathbb G$ give \emph{all} nonexpansive linear operators between separable Banach spaces. Furthermore, we show that the operator $\mathbf\Omega$ can be characterized by a condition similar to the one defining the \mathbb Gurarii\ space. \section{Preliminaries} We shall use standard notation concerning Banach space theory. By $\nat$ we mean the set of all nonnegative integers. We shall deal exclusively with nonexpansive linear operators, i.e., operators of norm $\leq 1$. According to this agreement, a linear operator $\operatorname{m}ap fXY$ is an \emph{$\varepsilon$-isometric embedding} if $$ (1+\varepsilon)^{-1} \cdot \norm {x} \leq \norm {f(x)} \leq \norm {x}$$ holds for every $x\in X$. We shall often say ``\emph{$\varepsilon$-embedding}'' instead of ``$\varepsilon$-isometric embedding". In particular, an \emph{embedding} of one Banach space into another is a linear isometric embedding. When dealing with a linear operator we shall always have in mind, besides its domain, also its \emph{co-domain}, which is just a (fixed in advance) Banach space containing the range (set of values) of the operator. The \mathbb Gurarii\ space will be denoted by $\mathbb G$. We shall use some standard category-theoretic notions. Our basis is ${\mathfrak B_1}$, the category of Banach spaces with linear operators of norm $\leq1$. An important property of ${\mathfrak B_1}$ is the following standard and well-known fact (see e.g. \cite{ACCGM}, \cite{gurarii} or \cite{pelczynski}). \begin{lm}\label{faktone} Let $\operatorname{m}ap i Z X$, $\operatorname{m}ap f Z Y$ be nonexpansive operators between Banach spaces. Then there are nonexpansive operators $\operatorname{m}ap {g} X W$ and $\operatorname{m}ap {j} Y W$ such that $$\xymatrix{ Y \ar[r]^{j} & W \\ Z \ar[r]_i \ar[u]^f & X \ar[u]_{g} }$$ is a pushout square in ${\mathfrak B_1}$. Furthermore, if $i$ is an isometric embedding then so is $j$. \end{lm} It is worth mentioning the description of the pushout. Namely, given $i$, $f$ as above, one usually defines $W = (X \oplus Y) / \Delta$, where $X \oplus Y$ denotes the $\ell_1$-sum of $X$ and $Y$ and $$\Delta = \setof{(i(z), -f(z))}{z \in Z}.$$ The operators $j$, $g$ are defined in the obvious way. In case both $i$, $f$ are isometric embeddings, it can be easily seen that the unit ball of $W$ is the convex hull of the union of the unit balls of $X$ and $Y$, canonically embedded into $W$. This remark will be used later. 
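As a toy illustration of this description (our own example, not taken from the cited references), let $Z=X=Y=\mathbb{R}$ and $i=f=\operatorname{id}_{\mathbb{R}}$. Then $\Delta=\{(z,-z):z\in\mathbb{R}\}$ and the quotient norm on $W=(\mathbb{R}\oplus\mathbb{R})/\Delta$ is
$$\norm{(x,y)+\Delta}=\inf_{z\in\mathbb{R}}\bigl(|x-z|+|y+z|\bigr)=|x+y|,$$
so $W$ is isometric to $\mathbb{R}$ via $(x,y)+\Delta\mapsto x+y$, and both $j$ and $g$ correspond to the identity; in particular they are isometric embeddings, consistent with Lemma~\ref{faktone}.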
\subsection{Correcting almost isometries} Assume $\operatorname{m}ap f X Y$ is an $\varepsilon$-embedding of Banach spaces, where $\varepsilon > 0$. It is natural to ask whether there exists an embedding $\operatorname{m}ap h X Y$ $\varepsilon$-close to $f$. Obviously, this may be impossible, since $Y$ may not contain isometric copies of $X$ at all. Thus, a well-posed question is whether $f$ is $\varepsilon$-close to some isometric embedding into some bigger Banach space containing $Y$. This is indeed true; it was proved as Lemma~2.1 in \cite{KS}. It is an elementary fact and very likely appears somewhere in the literature, although the authors were unable to find a reference. The proof of \cite[Lemma~2.1]{KS} uses linear functionals. Below we provide a more elementary argument (coming from \cite{CsGwK}), at the same time showing that the ``correcting'' isometric embedding is universal in the appropriate category. Throughout this section we fix $\varepsilon > 0$ and an $\varepsilon$-embedding $\operatorname{m}ap f X Y$. Actually, it is enough to require that $f$ satisfies $$(1 - \varepsilon) \norm x \leq \norm {f(x)} \leq (1 + \varepsilon) \norm x,$$ although, since we consider nonexpansive operators only, we always have $\norm{f(x)} \leq \norm x$. Note that $1 - \varepsilon < (1 + \varepsilon)^{-1}$. We define the following category ${\mathfrak{K}}(f,\varepsilon)$. The objects of ${\mathfrak{K}}(f,\varepsilon)$ are pairs $\pair i j$ of linear operators $\operatorname{m}ap i X Z$, $\operatorname{m}ap j Y Z$ of norm $\leq 1$ such that $$\norm{i - j \circ f} \leq \varepsilon.$$ Given two objects $a_1 = \pair {i_1}{j_1}$, $a_2 = \pair {i_2}{j_2}$, an arrow from $a_1$ to $a_2$ is a linear operator $h$ of norm $\leq 1$ such that $$h \circ i_1 = i_2 \qquad\text{and}\qquad h \circ j_1 = j_2.$$ By \cite[Lemma~2.1]{KS}, we know that if $f$ is an $\varepsilon$-embedding then the category ${\mathfrak{K}}(f,\varepsilon)$ contains an object $\pair i j$ such that both $i$ and $j$ are isometries. Below we improve this fact, at least from the category-theoretic perspective. \begin{lm}[cf. \cite{CsGwK}]\label{Lkluczowy} The category ${\mathfrak{K}}(f,\varepsilon)$ has an initial object $\pair {i_X}{j_Y}$ such that $\operatorname{m}ap {i_X} X {Z_0}$, $\operatorname{m}ap {j_Y} Y {Z_0}$ are isometries. More precisely: $i_X$, $j_Y$ are the canonical embeddings into $X \oplus Y$ endowed with the norm defined by the formula $$\norm{v}_C = \inf\Bigsetof{\norm {x}_X + \norm {y}_Y + \varepsilon \norm {w}_X}{v = \pair {x + w}{y - f(w)},\; x,w\in X,\; y\in Y},$$ where $\|\cdot\|_X$, $\|\cdot\|_Y$ are the norms of $X$ and $Y$ respectively. \end{lm} \begin{pf} It is easy to check that $\|\cdot\|_C$ is indeed a norm. In fact, the unit ball of $\|\cdot\|_C$ is the convex hull of the set $(\ubal X \times \sn 0) \cup (\sn 0 \times \ubal Y) \cup G$, where $$G = \setof{\pair w{-f(w)}}{\norm{w}_X \leq \varepsilon^{-1}}.$$ Note that $\pair{i_X}{j_Y}$ is an object of ${\mathfrak{K}}(f,\varepsilon)$, because $\norm{\pair w{-f(w)}}_C \leq \varepsilon \norm {w}_X$ and $\norm{\pair x0}_C \leq \norm {x}_X$, $\norm{\pair 0y}_C \leq \norm {y}_Y$. Fix an object $\pair ij$ of ${\mathfrak{K}}(f,\varepsilon)$, and let $Z$ be the common co-domain of $i$ and $j$. Clearly, there exists a unique linear operator $\operatorname{m}ap h{X \oplus Y} Z$ such that $h \circ i_X = i$ and $h \circ j_Y = j$. Namely, $h(x,y) = i(x) + j(y)$.
Note that $\norm{h(x,0)} \leq \norm{x}_X$, $\norm{h(0,y)} \leq \norm{y}_Y$, and $\norm{h(w,-f(w))} \leq \varepsilon \norm{w}_X$. The last inequality comes from the fact that $\pair i j$ is an object of ${\mathfrak{K}}(f,\varepsilon)$. It follows that $\norm{h(a)} \leq 1$ whenever $a$ is in the convex hull of $(\ubal X \times \sn 0) \cup (\sn 0 \times \ubal Y) \cup G$. This shows that $\norm h \leq 1$, concluding the fact that $\pair {i_X}{j_Y}$ is an initial object of ${\mathfrak{K}}(f,\varepsilon)$. It remains to show that $i_X$ and $j_Y$ are isometries. Fix $x\in X$. Clearly, $\norm{\pair x0}_C \leq \norm {x}_X$. On the other hand, for every $v\in Y$, $u,w\in X$ such that $u+w = x$ and $v - f(w) = 0$, we have \begin{align*} \norm {u}_X + \norm {v}_Y + \varepsilon \norm {w}_X &= \norm {u}_X +\norm {f(w)}_Y + \varepsilon \norm {w}_X\\ &\geq \norm {u}_X + (1 - \varepsilon) \norm {w}_X + \varepsilon \norm {w}_X\\ &\geq \norm {u+w}_X = \norm {x}_X. \end{align*} Passing to the infimum, we see that $\norm{\pair x0}_C \geq \norm {x}_X$. This shows that $i_X$ is an isometric embedding. Now fix $y\in Y$. Again, $\norm{\pair 0y}_C \leq \norm {y}_Y$ is clear. Given $u,w\in X$, $v\in Y$ such that $u + w = 0$ and $v - f(w) = y$, we have \begin{align*} \norm {u}_X + \norm {v}_Y + \varepsilon \norm {w}_X &= \norm {w}_X + \norm {v}_Y + \varepsilon \norm {w}_X \\ &\geq \norm {v}_Y + \norm {f(w)}_Y\\ &\geq \norm {v - f(w)}_Y = \norm {y}_Y. \end{align*} Again, passing to the infimum we get $\norm{\pair 0y}_C \geq \norm {y}_Y$. This shows that $j_Y$ is an isometric embedding and completes the proof. \end{pf} Note that when $\varepsilon \geq 1$ then $f$ does not have to be an almost isometric embedding. In fact $f = 0$ can be taken into account. In such a case the initial object is just the coproduct $X \oplus Y$ with the $\ell_1$-norm. In general, we shall denote by $X \oplus_{(f,\varepsilon)} Y$ the space $X \oplus Y$ endowed with the norm described in Lemma~\ref{Lkluczowy} above. Note that if $\alphamap f X Y$ is an $\varepsilon$-embedding and $0 < \varepsilon < \delta$ then $f$ is also a $\delta$-embedding, however the norm of $X \oplus_{(f,\varepsilon)} Y$ is different from that of $X \oplus_{(f,\delta)} Y$. The following statement will be used several times later. \begin{lm}\label{Lkeycz} Let $\varepsilon, \delta > 0$ and let $$\xymatrix{ X_0 \ar[d]_{T_0} \ar@{~>}[r]^{f_0} & Y_0 \ar[d]^{T_1} \\ X_1 \ar@{~>}[r]_{f_1} & Y_1 }$$ be a $\delta$-commutative diagram in ${\mathfrak B_1}$ (i.e., $\norm{f_1 \circ T_0 - T_1 \circ f_0} \leq \delta$), such that $f_0$, $f_1$ are $\varepsilon$-embeddings. Then the operator $$\operatorname{m}ap{ T_0 \oplus T_1 }{ X_0 \oplus_{(f_0,\varepsilon+\delta)} Y_0 }{ X_1 \oplus_{(f_1,\varepsilon)} Y_1 }$$ has norm $\leq 1$ and \begin{equation} (T_0 \oplus T_1) \circ i_{X_0} = i_{X_1} \circ T_0 \qquad\text{and}\qquad (T_0 \oplus T_1) \circ j_{Y_0} = j_{Y_1} \circ T_1. \tag{$*$}\label{Eqtgno} \end{equation} \end{lm} The situation is described in the following diagram, where the side squares are commutative, the bottom one is $\delta$-commutative, the left-hand side triangle $(\varepsilon+\delta)$-commutative and the right-hand side triangle is $\varepsilon$-commutative. 
$$\xymatrix{ & X_0\oplus Y_0\ar@{->}[rrr]^{T_0\oplus T_1}& \ar@{-}[r]& \ar[r] &X_1\oplus Y_1\\ &\\ &Y_0 \ar@{^{(}->}[uu]|{j_{Y_0}} \ar@{->}[rrr]^{T_1}& \ar@{-}[r]& \ar[r] &Y_1 \ar@{^{(}->}[uu]|{j_{Y_1}} \\ X_0 \ar@{^{(}->}[uuur]|{i_{X_0}} \ar@{~>}[ur]^{f_0} \ar@{->}[rrr]^{T_0}& \ar@{-}[r]& \ar[r] & X_1\ar@{~>}[ur]^{f_1} \ar@{^{(}->}[uuur]|{i_{X_1}} }$$ \begin{pf} By Lemma~\ref{Lkluczowy} applied to the category ${\mathfrak{K}}(f_0,\varepsilon+\delta)$, there is a unique nonexpansive operator $\operatorname{m}ap S { X_0 \oplus_{(f_0,\varepsilon+\delta)} Y_0 }{ X_1 \oplus_{(f_1,\varepsilon)} Y_1 }$ satisfying (\ref{Eqtgno}) in place of $T_0 \oplus T_1$. On the other, obviously $T_0 \oplus T_1$ satisfies (\ref{Eqtgno}), therefore $S = T_0 \oplus T_1$, showing that $T_0 \oplus T_1$ is nonexpansive. \end{pf} \subseteqection{Rational operators} We say that a Banach space $(X, \|\cdot\|)$ is \emph{rational} if $X$ is finite-dimensional and there exists a linear isomorphism $\operatorname{m}ap h {{\mathbb{R}}^n}X$ such that $$\norm x = \operatorname{m}ax_{i \leq k} |f_i(x)|, \qquad x \in X,$$ where $f_0,\dots,f_{k-1}$ are linear functionals preserving vectors with rational coordinates, that is, $\img {f_i h}{{\mathbb{Q}}^n} = {\mathbb{Q}}$ for $i<k$. Very formally, a rational Banach space is a triple of the form $(X, \|\cdot\|, h)$, where $(X, \|\cdot\|)$ and $h$ are as above. This notion is needed for catching countably many spaces that approximate the class of all finite-dimensional Banach spaces. In other words, a finite-dimensional Banach space is rational if there is a coordinate-wise system (induced by a linear isomorphism from ${\mathbb{R}}^n$ and by the standard basis of ${\mathbb{R}}^n$) such that the closed unit ball is the convex hull of finitely many vectors, each of them having rational coordinates. Note that ``being rational" depends both on the norm and on the coordinate-wise system. For instance, the two-dimensional Hilbert space is not rational, however the space ${\mathbb{R}}^2$ endowed with a scaled $\ell_1$-norm $$\norm{(x,y)} = \sqrt 2 (|x| + |y|)$$ is rational, which is witnessed by the isomorphism $h(v) = \sqrt 2 v$, $v \in {\mathbb{R}}^2$. Later on, forgetting this example, when considering ${\mathbb{R}}^n$ as a rational Banach space, we shall always have in mind the usual coordinate-wise system. We shall also need the notion of a rational operator. Namely, an operator $\operatorname{m}ap T {X}{Y}$ is rational if $\norm T \leq 1$ and $\img {T h}{{\mathbb{Q}}^m} \subseteq \img g {{\mathbb{Q}}^n}$, where $\operatorname{m}ap h {{\mathbb{R}}^m} X$ and $\operatorname{m}ap g {{\mathbb{R}}^n} Y$ are linear isomorphisms with respect to which $X$ and $Y$ are rational Banach spaces. Note that there are, up to isometry, only countably many rational operators. Note also that being a rational operator again depends on fixed linear isomorphism inducing coordinate-wise systems. \begin{lm}\label{LmNormyRacjonalne} Let $X \subseteq Y$ be finite-dimensional Banach spaces and assume that $Y = {\mathbb{R}}^m$ so that $X$ is its rational subspace and the norm $\|\cdot\|_Y$ is rational when restricted to $X$. Then for every $\delta>0$ there exists a rational norm $\|\cdot\|_Y'$ on $Y$ that is $\delta$-equivalent to $\|\cdot\|_Y$ and such that $\norm{x}_Y = \norm{x}'_Y$ for every $x\in X$. \end{lm} \begin{pf} Let $\Phi$ be a finite collection of rational functionals on $X$ such that $$\norm{x}_X = \operatorname{m}ax_{\varphi \in \Phi}|\varphi(x)|$$ for every $x \in X$. 
By the Hahn-Banach Theorem, we may assume that each $\varphi \in \Phi$ is actually a rational functional on $Y$ that has ``almost'' the same norm as its restriction to $X$. Finally, enlarge $\Phi$ to a finite collection $\Phi'$ by adding finitely many rational functionals so that the new norm induced by $\Phi'$ will become $\delta$-equivalent to $\|\cdot\|_Y$. \end{pf} \begin{lm}\label{Lmkorektwym} Let $\operatorname{m}ap {T_0} {X_0} {Y_0}$ be a rational operator, $\varepsilon>0$, and let $\operatorname{m}ap T X Y$ be an operator of norm $\leq1$ extending $T_0$ and such that $X \supseteq X_0$, $Y \supseteq Y_0$ are finite-dimensional. Let $\|\cdot\|_X$, $\|\cdot\|_Y$ denote the norms of $X$, $Y$. Then there exist rational norms $\|\cdot\|_X'$ and $\|\cdot\|_Y'$ on $X$ and $Y$, respectively, such that \begin{enumerate} \item[(i)] $T$ is a rational operator from $(X,\|\cdot\|_X')$ to $(Y,\|\cdot\|_Y')$, \item[(ii)] $\|\cdot\|_X'$ is $\varepsilon$-equivalent to $\|\cdot\|_X$ and $\|\cdot\|_Y'$ is $\varepsilon$-equivalent to $\|\cdot\|_Y$, \item[(iii)] $X_0 \subseteq X$ and $Y_0 \subseteq Y$ are rational isometric embeddings when $X$, $Y$ are endowed with $\|\cdot\|_X'$, $\|\cdot\|_Y'$ and $X_0$, $Y_0$ are endowed with their original norms. \end{enumerate} \end{lm} \begin{pf} For simplicity, let us assume that $X = X_0 \oplus {\mathbb{R}} u$ and $Y = Y_0 \oplus {\mathbb{R}} v$ and either $T(u)=v$ or $T(u)$ is a rational vector in $X_0$. The general case will follow by induction. Let $\operatorname{m}ap {h_0}{{\mathbb{R}}^m}{X_0}$, $\operatorname{m}ap {g_0}{{\mathbb{R}}^n}{Y_0}$ be linear isomorphisms witnessing that $X_0$, $Y_0$ are rational Banach spaces and that $T_0$ is a rational operator. Extend $h_0$, $g_0$ to $\operatorname{m}ap h{{\mathbb{R}}^{m+1}}X$, $\operatorname{m}ap g{{\mathbb{R}}^{n+1}}Y$ by setting $h(e_{m+1}) = u$, $g(e_{n+1}) = v$, where $e_i$ denotes the $i$th vector from the standard vector basis of ${\mathbb{R}}^k$ ($k \geq i$). Note that $T$ will become a rational operator, as long as we define suitable rational norms on $X$ and $Y$. Fix $\delta>0$. Let $\|\cdot\|_X'$ and $\|\cdot\|_Y'$ be obtained from Lemma~\ref{LmNormyRacjonalne}. Conditions (i) and (iii) are obviously satisfied. The only obstacle is that the operator $T$ may not be nonexpansive with respect to these new norms. However, we have $\norm{T x}'_Y \leq (1+\delta) \norm {T x}_Y \leq (1+\delta)\norm {x}_X \leq (1+\delta)^2 \norm {x}'_X$. Assuming that $\delta$ is rational, we can replace $\|\cdot\|'_X$ by $(1+\delta)^2 \|\cdot\|'_X$, so that $T$ is again nonexpansive. Finally, if $\delta$ is small enough, then condition (ii) holds. \end{pf} \subseteqection{The \mathbb Gurarii\ property} Before we construct the isometrically universal operator, we consider its crucial property which is similar to the condition defining the \mathbb Gurarii\ space. Namely, we shall say that a linear operator $\operatorname{m}ap \Omega U V$ has the \emph{\mathbb Gurarii\ property} if $\norm \Omega \leq1$ and the following condition is satisfied. 
\begin{enumerate} \item[($G$)] Given $\varepsilon>0$, given a nonexpansive operator $\operatorname{m}ap {T}{X}{Y}$ between finite-dimensional spaces, given $X_0 \subseteq X$, $Y_0 \subseteq Y$ and isometric embeddings $\operatorname{m}ap i {X_0}U$, $\operatorname{m}ap j {Y_0}V$ such that $\Omega \circ i = j \circ (T\restriction X_0)$, there exist $\varepsilon$-embeddings $\operatorname{m}ap {i'} X U$, $\operatorname{m}ap {j'} Y V$ satisfying $$\norm{i'\restriction X_0 - i}<\varepsilon, \quad \norm{j'\restriction Y_0 - j}<\varepsilon, \qquad\text{and}\qquad \norm{\Omega \circ i' - j' \circ T}<\varepsilon.$$ \end{enumerate} We shall also consider condition ($G^*$) which is, by definition, the same as ($G$) with the stronger requirement that $\Omega \circ i' = j' \circ T$. We shall see later that ($G$) is equivalent to ($G^*$). In the next section we show that an operator with the \mathbb Gurarii\ property exists. \section{The construction} Fix two real vector spaces $U$, $V$, each having a fixed countable infinite vector basis, which provides a coordinate system and the notion of rational vectors (namely, rational combinations of the vectors from the basis). For the sake of brevity, we may assume that $U$,$V$ are disjoint, although this is not essential. We shall construct rational subspaces of $U$ and $V$. Notice the following trivial fact: given a rational space $X \subseteq U$ (that is, $X$ is finite-dimensional and its norm is rational in $U$), given a rational isometric embedding $\operatorname{m}ap e X Y$ (so $Y$ is also a rational space), there exists a rational extension $X'$ of $X$ in $U$ and a bijective rational isometry $\operatorname{m}ap h Y {X'}$ such that $h \circ e$ is identity on $X$. In other words, informally, every rational extension of a rational space ``living" in $U$ is realized in $U$. Of course, the same applies to $V$. We shall now construct a sequence of rational operators $\operatorname{m}ap {F_n}{U_n}{V_n}$ such that \begin{enumerate} \item[(a)] $U_n$ is a rational subspace of $U$ and $V_n$ is a rational subspace of $V$. \item[(b)] $F_{n+1}$ extends $F_n$ (in particular, $U_n \subseteq U_{n+1}$ and $V_n \subseteq V_{n+1}$). \item[(c)] Given $n\in{\mathbb{N}}$, given rational embeddings $\operatorname{m}ap i{U_n}X$, $\operatorname{m}ap j{V_n}Y$ and given a rational operator $\operatorname{m}ap T X Y$ such that $T \circ i = j \circ F_n$, there exist $m > n$ and rational embeddings $\operatorname{m}ap {i'}X{U_m}$, $\operatorname{m}ap {j'}Y{V_m}$ satisfying $j' \circ T = F_m \circ i'$ and such that $i' \circ i$ and $j' \circ j$ are identities on $U_n$ and $V_n$, respectively. \end{enumerate} For this aim, let ${\cal{F}}$ denote the family of all triples $\triple T e k$, where $\operatorname{m}ap T X Y$ is a rational operator, $k$ is a natural number and $e = \pair i j$ is a pair of rational embeddings like in condition (c) above, namely $\operatorname{m}ap {i}{X_0}X$, $\operatorname{m}ap {j}{Y_0}Y$, where $X_0$ and $Y_0$ are rational subspaces of $U$ and $V$, respectively. We also assume that $X$, $Y$ are rational subspaces of $U$, $V$, therefore the family ${\cal{F}}$ is indeed countable. Enumerate it as $\sett{\triple {T_n}{e_n}{k_n}}{{n\in\omega}}$ so that each $\triple T e k$ appears infinitely many times. We now start with $U_0=0$, $V_0=0$ and $F_0=0$. Fix $n > 0$ and suppose $\operatorname{m}ap {F_{n-1}}{U_{n-1}}{V_{n-1}}$ has been defined. 
We look at triple $\triple {T_n}{e_n}{k_n}$ and consider the following condition, where $e_n = \pair {i_n}{j_n}$: \begin{enumerate} \item[($*$)] $k_n = k < n$, the domain of $i_n$ is $U_k$, the domain of $j_n$ is $V_k$ and $T \circ i_n = j_n \circ F_k$. \end{enumerate} If ($*$) fails, we set $F_n = F_{n-1}$ (and $U_n = U_{n-1}$, $V_n = V_{n-1}$). Suppose now that ($*$) holds and $\operatorname{m}ap{i_n}{U_k}X$, $\operatorname{m}ap{j_n}{V_k}Y$. Using the push-out property (more precisely, its version for rational operators), we find rational spaces $X' \supseteq U_{n-1}$ and $Y' \supseteq V_{n-1}$ (and the inclusions are rational embeddings), together with rational embeddings $\operatorname{m}ap {i'}X{X'}$, $\operatorname{m}ap {j'}Y{Y'}$ such that $i' \circ i_n$ is identity on $U_k$ and $j' \circ j_n$ is identity on $V_k$. As we have mentioned, we may ``realize'' the spaces $X'$ and $Y'$ inside $U$ and $V$, respectively. We set $U_n = X'$, $V_n = Y'$ and we define $F_n$ to be the unique operator from $U_n$ to $V_n$ obtained from the push-outs. In particular, $F_n$ extends $F_{n-1}$ and satisfies $j' \circ T = F_n \circ i'$. This completes the description of the construction of $\sett{F_n}{{n\in\omega}}$. Note that the sequence satisfies (a)--(c). Only condition (c) requires an argument. Namely, fix $n$ and $T,i,j$ as in (c). Find $m>n$ such that $\triple {T_m}{e_m}{k_m} = \triple T e n$, where $e = \pair i j$. Then at the $m$th stage of the construction condition ($*$) is fulfilled and therefore $F_m$ witnesses that (c) holds. Now denote by $U_\infty$ and $V_\infty$ the completions of $\bigcup_{n\in{\mathbb{N}}}U_n$ and $\bigcup_{n\in{\mathbb{N}}}V_n$ and let $\operatorname{m}ap{F_\infty}{U_\infty}{V_\infty}$ the unique extension of $\bigcup_{n\in{\mathbb{N}}}F_n$. \begin{prop} The operator $F_\infty$ satisfies condition ($G^*$) and therefore has the \mathbb Gurarii\ property. \end{prop} \begin{pf} Fix finite-dimensional Banach spaces $X_0 \subseteq X_1$, $Y_0 \subseteq Y_1$, fix isometric embeddings $\operatorname{m}ap i {X_0}{U_\infty}$, $\operatorname{m}ap j {Y_0}{V_\infty}$ so that $F_\infty \circ i = j \circ T_0$. Furthermore, fix two non\-ex\-pan\-sive operators $\operatorname{m}ap {T_i}{X_i}{Y_i}$ for $i=0,1$ such that $T_1$ extends $T_0$. Fix $\varepsilon>0$. We need to find $\varepsilon$-isometric embeddings $\operatorname{m}ap f{X_1}{U_\infty}$, $\operatorname{m}ap g{Y_1}{V_\infty}$ such that $$\norm{f\restriction X_0 - i}<\varepsilon, \quad \norm{g\restriction Y_0 - j}<\varepsilon,$$ and $F_\infty \circ f = g \circ T_1$. Let $\delta = \varepsilon/3$. The remaining part of the proof is divided into four steps: \paragraph{Step 1} We first ``distort" the embeddings $i$, $j$, so that their images will be some $U_n$ and $V_n$, respectively. Formally, we find $\delta$-isometric embeddings $\operatorname{m}ap {i_0}{X_0}{U_n}$, $\operatorname{m}ap {j_0}{Y_0}{V_n}$ for some fixed $n$, such that $\norm {i-i_0} < \delta$, $\norm{j-j_0}<\delta$ and $$\norm{j_0 \circ T_0 - F_n \circ i_0} < \delta.$$ \paragraph{Step 2} Applying Lemma~\ref{Lkeycz} for obtaining finite-dimensional spaces $X_2 \supseteq X_0$, $Y_2 \supseteq Y_0$ with isometric embeddings $\operatorname{m}ap k {U_n}{X_2}$, $\operatorname{m}ap \ell {V_n}{Y_2}$, together with a nonexpansive operator $\operatorname{m}ap {T_2}{X_2}{Y_2}$ extending $T_0$ and satisfying $T_2 \circ k = \ell \circ F_n$. 
\paragraph{Step 3} We now use the Pushout Lemma for two pairs of embeddings: $X_0 \subseteq X_1$, $X_0 \subseteq X_2$ and $Y_0 \subseteq Y_1$, $Y_0 \subseteq Y_2$; we obtain a further extension of the operator $T_2$. Thus, in order to avoid too many objects, we shall assume that $X_1 \subseteq X_2$, $Y_1 \subseteq Y_2$ and the operator $T_2$ extends both $T_0$ and $T_1$. At this point, we may actually forget about $T_1$, replacing it by $T_2$. \paragraph{Step 4} Apply Lemma~\ref{Lmkorektwym} in order to change the norms of $X_2$ and $Y_2$ by $\delta$-equivalent ones, so that $T_2$ becomes a rational operator extending $F_n$. Using (c), we can now ``realize'' $T_2$ in $F_m$ for some $m > n$. Formally, there are isometric embeddings $\operatorname{m}ap {i_2}{X_2}{U_m}$, $\operatorname{m}ap {j_2}{Y_2}{V_m}$ satisfying $F_m \circ i_2 = j_2 \circ T_2$. Coming back to the original norms of $X_2$ and $Y_2$, we see that the embeddings $i_2$ and $j_2$ are $\delta$-isometric and their restrictions to $X_0$ and $Y_0$ are $(2\delta)$-close to $i_0$ and $j_0$, respectively, hence $\varepsilon$-close to $i$ and $j$ (recall that $\delta = \varepsilon/3$). This completes the proof. \end{pf} \section{Properties} We now show that the domain and the co-domain of an operator with the \mathbb Gurarii\ property acting between separable spaces is the \mathbb Gurarii\ space (thus, justifying the name) and later we show its uniqueness as well as some kind of homogeneity. \subseteqection{Recognizing the domain and the co-domain} Recall that a separable Banach space $W$ is linearly isometric to the \mathbb Gurarii\ space $\mathbb G$ if and only if it satisfies the following condition: \begin{enumerate} \item[($\mathfrak G$)] Given finite-dimensional spaces $X_0 \subseteq X$, given $\varepsilon>0$, given an isometric embedding $\operatorname{m}ap i {X_0}W$, there exists an $\varepsilon$-embedding $\operatorname{m}ap f X W$ such that $\norm{f \restriction X_0 - i} \leq \varepsilon$. \end{enumerate} Usually, the condition defining the \mathbb Gurarii\ space is stronger, namely, it is required that $f \restriction X_0 = i$. For our purposes, the formally weaker condition ($\mathfrak G$) is more suitable. It is not hard to see that both conditions are actually equivalent, see~\cite{kubis-gar} for more details. \begin{tw}\label{Thmrigbiw} Let $\operatorname{m}ap \Omega U V$ be a linear operator with the \mathbb Gurarii\ property, where $U,V$ are separable. Then both $U$ and $V$ are linearly isometric to the \mathbb Gurarii\ space. \end{tw} \begin{pf} (1) $U$ is isometric to $\mathbb G$. \noindent Fix finite-dimensional spaces $X_0 \subseteq X$ and fix an isometric embedding $\operatorname{m}ap i {X_0}U$. Fix $\varepsilon>0$. Let $Y_0 = \img \Omega {\img i {X_0}}$ and let $T_0 = \Omega \restriction \img i{X_0}$, treated as an operator into $Y_0$. Applying the pushout property (Lemma~\ref{faktone}), we find a finite-dimensional space $Y \supseteq Y_0$ and a nonexpansive linear operator $\operatorname{m}ap T X Y$ extending $T_0$. Applying condition ($G$), we get in particular an $\varepsilon$-embedding $\operatorname{m}ap f X U$ satisfying $\norm{f\restriction X_0 - i} \leq \varepsilon$. This shows that $U$ satisfies ($\mathfrak G$). (2) $V$ is isometric to $\mathbb G$. \noindent Fix $\varepsilon>0$ and fix finite-dimensional spaces $Y_0 \subseteq Y$ and an isometric embedding $\operatorname{m}ap j {Y_0} V$. Let $X_0 = \sn0 = X$ and let $\operatorname{m}ap {T_0}{X_0}{Y_0}$, $\operatorname{m}ap T X Y$ be the 0-operators. 
Applying ($G$), we get an $\varepsilon$-embedding $\operatorname{m}ap {j'} Y V$ satisfying $\norm{j' \restriction Y_0 - j}\leq \varepsilon$, showing that $V$ satisfies ($\mathfrak G$). \end{pf} From now on, we shall denote by $\mathbf\Omega$ the operator constructed in the previous section. According to the results above, this operator has the \mathbb Gurarii\ property and it is of the form $\operatorname{m}ap \mathbf\Omega \mathbb G \mathbb G$. \subseteqection{Universality} We shall now simplify the notation, in order to avoid too many parameters and shorten some arguments. Namely, given nonexpansive linear operators $\operatorname{m}ap S X Y$, $\operatorname{m}ap T Z W$, a pair $i = \pair {i_0}{i_1}$ of isometric embeddings of the form $\operatorname{m}ap {i_0} X Z$, $\operatorname{m}ap {i_1} Y W$ and satisfying $T \circ i_0 = i_1 \circ S$, will be called an \emph{embedding of operators} from $S$ into $T$ and we shall write $\operatorname{m}ap i S T$. Now fix $\varepsilon > 0$ and suppose that $f = \pair {f_0}{f_1}$ is a pair of $\varepsilon$-embeddings of the form $\operatorname{m}ap {f_0}X Z$, $\operatorname{m}ap {f_1}Y W$ satisfying $\norm{T \circ f_0 - f_1 \circ S} \leq \varepsilon$. We shall say that $f$ is \emph{$\varepsilon$-embedding of operators} from $S$ into $T$ and we shall write $\alphamap f S T$. Finally, an \emph{almost embedding of operators} of $S$ into $T$ will be, by definition, an $\varepsilon$-embedding of $S$ into $T$ for some $\varepsilon>0$. The composition of (almost) embeddings of operators is defined in the obvious way. Given two almost embeddings of operators $\operatorname{m}ap f S T$, $\operatorname{m}ap g S T$, we shall say that $f$ is \emph{$\varepsilon$-close} to $g$ (or that $f$, $g$ are \emph{$\varepsilon$-close}) if $\norm{f_0 - g_0} \leq \varepsilon$ and $\norm{f_1 - g_1} \leq \varepsilon$, where $f = \pair {f_0}{f_1}$ and $g = \pair {g_0}{g_1}$. The notation described above is in accordance with category-theoretic philosophy: almost embeddings of operators obviously form a category, which is actually a special case of much more general constructions on categories, where the objects are diagrams of certain shape. Now, observe that a consequence of the pushout property stated in Lemma~\ref{faktone} (that we have already used) says that for every two embeddings of operators $\operatorname{m}ap i S T$, $\operatorname{m}ap j S R$ there exist embeddings of operators $\operatorname{m}ap {i'} T P$, $\operatorname{m}ap {j'} R P$ such that $i' \circ i = j' \circ j$. The crucial property of almost embeddings is Lemma~\ref{Lkeycz} which says, in the new notation, that for every $\varepsilon$-embedding of operators $\operatorname{m}ap f S T$ there exist embeddings of operators $\operatorname{m}ap i S R$, $\operatorname{m}ap j T R$ such that $j \circ f$ is $(2\varepsilon)$-close to $i$. Before proving the universality of $\mathbf\Omega$, we formulate a statement which is crucial for the proof. \begin{lm}\label{Lmrgeihei} Let $\Omega$ be an operator with the \mathbb Gurarii\ property. Assume $\varepsilon>0$ and $\alphamap f T \Omega$ is an $\varepsilon$-embedding of operators and $\operatorname{m}ap j T R$ is an embedding of operators, where both $T$ and $R$ act between finite-dimensional spaces. Then for every $\delta > 0$ there exists a $\delta$-embedding of operators $\alphamap g R \Omega$ whose composition with $j$ is $(2\varepsilon+\delta)$-close to $f$. 
\end{lm} \begin{pf} We first replace $f$ by $\alphamap {f_1} T {T_1}$, where $T_1$ is some restriction of $\Omega$ to finite-dimensional spaces (so $f = e \circ f_1$, where the components of $e$ are inclusions). Using Lemma~\ref{Lkeycz}, we find embeddings of operators $\operatorname{m}ap i {T_1} S$, $\operatorname{m}ap {j_1} T S$, where $S$ is an operator between finite-dimensional spaces and $j_1$ is $(2\varepsilon)$-close to $i \circ f_1$. Applying the amalgamation property to $j_1$ and $j$, we find embeddings of operators $\operatorname{m}ap k R {\tilde S}$ and $\operatorname{m}ap {j_2} S {\tilde S}$ satisfying $j_2 \circ j_1 = k \circ j$. In order to avoid too many parameters, we replace $S$ by $\tilde S$, $j_1$ by $j_2 \circ j_1$ and $i$ by $j_2 \circ i$. In this way, we have an embedding of operators $\operatorname{m}ap k R S$ such that $k \circ j = j_1$ and still $j_1$ is $(2\varepsilon)$-close to $i \circ f_1$. Now, condition ($G$) in our terminology says that there is a $\delta$-embedding of operators $\alphamap \ell S \Omega$ whose composition with $i$ is $\delta$-close to the inclusion $\operatorname{m}ap e {T_1}\Omega$. Finally, $g = \ell \circ k$ is the required $\delta$-embedding, because $g \circ j$ is $(2\varepsilon + \delta)$-close to $f$. \end{pf} \begin{tw} Given a nonexpansive linear operator $\operatorname{m}ap T X Y$ between separable Banach spaces, there exist isometric embeddings $\operatorname{m}ap i X \mathbb G$, $\operatorname{m}ap j Y \mathbb G$ such that $\mathbf\Omega \circ i = j \circ T$, that is, the following diagram is commutative. $$\xymatrix{ \mathbb G \ar[r]^{\mathbf\Omega} & \mathbb G \\ X \ar[u]^i \ar[r]_T & Y \ar[u]_j }$$ \end{tw} \begin{pf} We first ``decompose'' $T$ into a chain $T_0 \subseteq T_1 \subseteq T_2 \subseteq \cdots$ so that $\operatorname{m}ap {T_n}{X_n}{Y_n}$ and $X_n$, $Y_n$ are finite-dimensional spaces. Formally, we construct inductively two chains of finite-dimensional spaces $\sett{X_n}{{n\in\omega}}$, $\sett{Y_n}{{n\in\omega}}$ such that $\img T {X_n} \subseteq Y_n$ and $\bigcup_{{n\in\omega}}X_n$ is dense in $X$, and $\bigcup_{{n\in\omega}}Y_n$ is dense in $Y$. It is clear that such a decomposition is always possible and the operator $T$ is determined by the chain $\sett{T_n}{{n\in\omega}}$. Let $\varepsilon_n = 2^{-n}$. We shall construct almost embeddings of operators $\operatorname{m}ap {i_n} {T_n} \mathbf\Omega$ so that the following conditions are satisfied: \begin{enumerate} \item[(i)] $i_n$ is an $\varepsilon_n$-embedding of $T_n$ into $\mathbf\Omega$. \item[(ii)] $i_{n+1} \restriction T_n$ is $(3\varepsilon_n)$-close to $i_n$. \end{enumerate} If we take $i_0$ to be the 0-operator between the 0-subspaces of $X$ and $Y$, there is no problem in starting the inductive construction. Fix $n \geq 0$ and suppose $i_n$ has already been constructed. Let $Z = \img {T_n}{X_n}$. By Lemma~\ref{Lmrgeihei} applied to $i_n$ with $\varepsilon=\varepsilon_n$ and $\delta=\varepsilon_{n+1}$, we find $i_{n+1}$ satisfying (i) and (ii). Thus, the construction can be carried out. Finally, the sequence $\sett {i_n}{{n\in\omega}}$ converges to an embedding of operators $\operatorname{m}ap {i_\infty} T {\mathbf\Omega}$. Formally, both components of $i_\infty$ are obtained as the pointwise limits of the components of the $i_n$'s, extended by continuity to $X$ and $Y$. The two components of $i_\infty$ are the required isometric embeddings.
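Indeed (a routine verification, spelled out only for completeness): write $i_n = \pair{i_n^0}{i_n^1}$ and fix $x \in X_m$. For $n \geq m$, condition (ii) gives $\norm{i_{n+1}^0(x) - i_n^0(x)} \leq 3\varepsilon_n \norm x$, so the sequence $(i_n^0(x))_{n \geq m}$ is Cauchy; let $i_\infty^0(x)$ be its limit. By (i) we have $(1+\varepsilon_n)^{-1}\norm x \leq \norm{i_n^0(x)} \leq \norm x$, hence $\norm{i_\infty^0(x)} = \norm x$ and $i_\infty^0$ extends by continuity to an isometric embedding of $X$ into $\mathbb G$. The same argument applied to the second components yields $i_\infty^1$ on $Y$, and passing to the limit in $\norm{\mathbf\Omega \circ i_n^0 - i_n^1 \circ T_n} \leq \varepsilon_n$ gives $\mathbf\Omega \circ i_\infty^0 = i_\infty^1 \circ T$ on the dense subspace $\bigcup_{{n\in\omega}}X_n$, hence on all of $X$.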
\end{pf} \subseteqection{Uniqueness and almost homogeneity} We start with the main, somewhat technical, lemma from which we easily derive all the announced properties of $\mathbf\Omega$. We shall say that an operator $\operatorname{m}ap f X Y$ is a \emph{strict} $\varepsilon$-embedding if it is a $\delta$-isometric embedding for some $0<\delta<\varepsilon$. \begin{lm}\label{LmMejnn} Assume $\operatorname{m}ap \Omega U V$ and $\operatorname{m}ap {\Omega'}{U'}{V'}$ are two operators between separable Banach spaces, both with the \mathbb Gurarii\ property (that is, nonexpansive and with property ($G$)). Assume $X_0 \subseteq U$, $Y_0 \subseteq V$, $X'_0 \subseteq U'$ and $Y'_0 \subseteq V'$ are finite-dimensional spaces. Fix $\varepsilon > 0$ and let $T_0 = \Omega \restriction X_0$, $T_0' = \Omega' \restriction X'_0$. Assume further that $\operatorname{m}ap {i_0}{X_0}{X'_0}$ and $\operatorname{m}ap {j_0}{Y_0}{Y'_0}$ are strict $\varepsilon$-embeddings satisfying $$\norm{T_0' \circ i_0 - j_0 \circ T_0} < \varepsilon.$$ Then there exist bijective linear isometries $\operatorname{m}ap I U{U'}$ and $\operatorname{m}ap J V{V'}$ such that $$J \circ \Omega = \Omega' \circ I$$ and $\norm{I\restriction X_0 - i_0} < \varepsilon$, $\norm{J\restriction Y_0 - j_0} < \varepsilon$. \end{lm} Before proving this lemma, we formulate and prove some of its consequences. First of all, let us say that an operator $\operatorname{m}ap \Omega U V$ is \emph{almost homogeneous} if the following condition is satisfied. \begin{enumerate} \item[(AH)] Given $\varepsilon>0$, given finite-dimensional spaces $X_0, X_1 \subseteq U$, $Y_0, Y_1 \subseteq V$ such that $\img \Omega {X_0} \subseteq Y_0$, $\img \Omega {X_1} \subseteq Y_1$, given linear isometries $\operatorname{m}ap i{X_0}{X_1}$, $\operatorname{m}ap j{Y_0}{Y_1}$ such that $\Omega \circ i = j \circ \Omega$, there exist bijective linear isometries $\operatorname{m}ap I U U$, $\operatorname{m}ap J V V$ such that $$\Omega \circ I = J \circ \Omega$$ and $\norm{I \restriction X_0 - i} \leq \varepsilon$, $\norm{J \restriction Y_0 - j} \leq \varepsilon$. \end{enumerate} Eliminating $\varepsilon$ from this definition, we obtain the notion of a \emph{homogeneous operator}. We shall see in a moment, using our knowledge on the \mathbb Gurarii\ space, that no operator between separable Banach spaces can be homogeneous. \begin{tw}\label{Thmenrgog} Let $\Omega$ be a linear operator with the \mathbb Gurarii\ property, acting between separable Banach spaces. Then $\Omega$ is isometric to $\mathbf\Omega$ in the sense that there exist bijective linear isometries $I$, $J$ such that $\mathbf\Omega \circ I = J \circ \Omega$. Furthermore, $\Omega$ is almost homogeneous. \end{tw} \begin{pf} We already know that $\mathbf\Omega$ has the \mathbb Gurarii\ property. Applying Lemma~\ref{LmMejnn} to the zero operator, we obtain the required isometries $I$, $J$. In order to show almost homogeneity, apply Lemma~\ref{LmMejnn} again to the operator $\Omega$ (on both sides) and to the embeddings $i$, $j$ specified in condition (AH). \end{pf} Let us say that an operator $\Omega$ is \emph{isometrically universal} if, up to isometries, its restrictions to closed subspaces provide all operators between separable Banach spaces whose norms do not exceed the norm of $\Omega$. \begin{uwgi} No bounded linear operator between separable Banach spaces can be isometrically universal and homogeneous. \end{uwgi} \begin{pf} Suppose $\Omega$ is such an operator and consider $G = \ker \Omega$. 
Then $G$ would have the following property: Every isometry between finite-dimensional subspaces of $G$ extends to an isometry of $G$. Furthermore, $G$ contains isometric copies of all separable spaces, because $\Omega$ is assumed to be isometrically universal. On the other hand, it is well-known that no separable Banach space can be homogeneous and isometrically universal for all finite-dimensional spaces, since this would be necessarily the \mathbb Gurarii\ space, which is not homogeneous (see \cite{gurarii} or \cite{kubis-gar}). \end{pf} It remains to prove Lemma~\ref{LmMejnn}. It will be based on the ``approximate back-and-forth argument", similar to the one in \cite{KS}. In the inductive step we shall use the following fact, formulated in terms of almost embeddings of operators. \begin{claim}\label{Clejpwrt} Assume $\Omega$ is an operator with the \mathbb Gurarii\ property, $\varepsilon > 0$ and $\alphamap f T R$ is an $\varepsilon$-embedding of operators acting on finite-dimensional spaces, and $\operatorname{m}ap e T \Omega$ is an embedding of operators. Then for every $\delta > 0$ there exists a $\delta$-embedding $\alphamap g R \Omega$ such that $g \circ f$ is $(2\varepsilon+\delta)$-close to $e$. \end{claim} \begin{pf} Using Lemma~\ref{Lkeycz}, we find embeddings of operators $\operatorname{m}ap i T S$, $\operatorname{m}ap j R S$ such that $j \circ f$ is $(2\varepsilon)$-close to $i$. Property ($G$) tells us that there exists a $\delta$-embedding $\alphamap h S \Omega$ such that $h \circ i$ is $\delta$-close to $e$. Finally, $g = h \circ j$ is $(2\varepsilon+\delta)$-close to $e$. \end{pf} The usefulness of the above claim comes from the fact that $\delta$ can be arbitrarily small comparing to $\varepsilon$. \begin{pf}[Proof of Lemma~\ref{LmMejnn}] We first choose $0 < \varepsilon_0 < \varepsilon$ such that $k_0 := \pair{i_0}{j_0}$ is an $\varepsilon_0$-embedding of operators. Our aim is to build two sequences $\ciag k$ and $\ciag \ell$ of almost embeddings of operators between finite subspaces of $U$, $V$, $U'$, $V'$. Notice that once we are given operators and almost embeddings as in the statement of Lemma~\ref{LmMejnn}, we are always allowed to enlarge the co-domains (namely the spaces $Y_0$ and $Y'_0$) to arbitrarily big finite-dimensional subspaces of $V$ and $V'$, respectively. This is important for showing that our sequences of almost embeddings will ``converge'' to bijective isometries. The formal requirements are as follows. We choose sequences $\ciag u$, $\ciag v$, $\sett{u'_n}{{n\in\omega}}$, $\sett{v'_n}{{n\in\omega}}$ that are linearly dense in $U$, $V$, $U'$, $V'$, respectively. We fix a decreasing sequence $\ciag \varepsilon$ of positive real numbers satisfying \begin{equation} 3\sum_{n=1}^\infty \varepsilon_n < \varepsilon - \varepsilon_0, \tag{s}\label{EqSerious} \end{equation} where $\varepsilon_0<\varepsilon$ is as above. We require that: \begin{enumerate} \item[(1)] $\alphamap {k_n}{T_n}{T'_n}$, $\alphamap {\ell_n}{T'_n}{T_{n+1}}$ are almost embeddings of operators; $T_n$ is a restriction of $\Omega$ to some pair of finite-dimensional spaces and $T'_n$ is a restriction of $\Omega'$ to some pair of finite-dimensional spaces. \item[(2)] $k_n$ is an $\varepsilon_n$-embedding of operators and $\ell_n$ is an $\varepsilon_{n+1}$-embedding of operators. \item[(3)] The composition $\ell_n \circ k_n$ is $(2\varepsilon_n+\varepsilon_{n+1})$-close to the identity (formally, to the inclusion $T_n \subseteq T_{n+1}$). 
\item[(4)] $k_{n} \circ \ell_{n-1}$ is $(2\varepsilon_{n}+\varepsilon_{n+1})$-close to the identity. \item[(5)] $u_n$ belongs to the domain of $T_{n+1}$, $v_n$ belongs to its co-domain; similarly for $u_n'$, $v_n'$ and $T'_{n+1}$. \end{enumerate} Fix $n \geq 0$ and suppose that $k_n$ and $\ell_{n-1}$ have already been constructed (if $n=0$ then we ignore condition (4)). We apply Claim~\ref{Clejpwrt} twice: first time to $k_n$ with $\varepsilon = \varepsilon_n$ and $\delta = \varepsilon_{n+1}$, thus obtaining $\ell_n$; second time to $\ell_n$ with $\varepsilon = \varepsilon_{n+1}$ and $\delta = \varepsilon_{n+2}$, thus obtaining $k_{n+1}$. Between these two steps, we choose a sufficiently big operator $T_{n+1}$ which is a restriction of $\Omega$ to some finite-dimensional spaces. Also, after obtaining $k_{n+1}$, we choose a sufficiently big operator $T'_{n+1}$ contained in $\Omega'$ and acting between finite-dimensional spaces. In this way, we may ensure that condition (5) holds. Fix $n>0$. We have the following (non-commutative) diagram of almost embeddings of operators $$\xymatrix{ T_n \ar@{~>}[rrr]^{k_n} & & & T'_n \\ T_{n-1} \ar[u] \ar@{~>}[rrr]_{k_{n-1}} & & & T'_{n-1} \ar[u] \ar@{~>}[lllu]_{\ell_{n-1}} }$$ in which the vertical arrows are inclusions, the lower triangle is $(2\varepsilon_{n-1}+\varepsilon_n)$-com\-mu\-ta\-tive by (3), and the upper triangle is $(2\varepsilon_n+\varepsilon_{n+1})$-commutative by (4). Formally, these relations are true for the two components of all ``arrows'' appearing in this diagram. Using the triangle inequality of the norm and the fact that all operators are nonexpansive, we conclude that $k_n$ restricted to $T_{n-1}$ is $\eta_n$-close to $k_{n-1}$, where $\eta_n = 2\varepsilon_{n-1} + 3\varepsilon_n + \varepsilon_{n+1}$. In particular, both components of the sequence $\ciag k$ are pointwise convergent and, by (2), the limit extends by continuity to an isometric embedding of operators $\operatorname{m}ap K \Omega {\Omega'}$. Interchanging the roles of $\Omega$ and $\Omega'$, we deduce that the sequence $\ciag \ell$ converges to an isometric embedding of operators $\operatorname{m}ap L {\Omega'}\Omega$. Conditions (3) and (4) say that $L$ is the inverse of $K$; therefore $K$ is bijective. Finally, denoting $K = \pair I J$, we see that $I$, $J$ are as required, because $\sum_{n=1}^\infty \eta_n < 2\varepsilon_0 + 6\sum_{n=1}^\infty\varepsilon_n < 2\varepsilon$. \end{pf} \subsection{Kernel and range} We finally show some structural properties of our operator. \begin{tw} The operator $\mathbf\Omega$ is surjective and its kernel is linearly isometric to the \mathbb Gurarii\ space. \end{tw} \begin{pf} We first show that $\ker \mathbf\Omega$ is isometric to $\mathbb G$. Fix finite-dimensional spaces $X_0 \subseteq X$, an isometric embedding $\operatorname{m}ap i{X_0}{\ker \Omega}$ and $\varepsilon > 0$. Let $Y_0 = \sn0 = Y$ and let $\operatorname{m}ap j{Y_0}Y$ be the 0-operator. Applying ($G^*$), we obtain $\varepsilon$-embeddings $\operatorname{m}ap {i'} X U$ and $\operatorname{m}ap {j'} Y V$ such that $\norm{i'\restriction X_0 - i} \leq \varepsilon$, $\norm{j'\restriction Y_0 - j} \leq \varepsilon$ and $j' \circ 0 = \Omega \circ i'$, where $0$ denotes the 0-operator from $X$ to $Y$. By the last equality, $i'$ maps $X$ into $\ker \Omega$, showing that $\ker \Omega$ satisfies ($\mathfrak G$).
In order to show that $\img \mathbf\Omega \mathbb G = \mathbb G$, fix $v \in \mathbb G$ and consider the zero operator $\operatorname{m}ap {T_0}{X_0}{Y_0}$, where $X_0 = \sn 0$ and $Y_0$ is the one-dimensional subspace of $\mathbb G$ spanned by $v$. Let $X = Y = Y_0$ and let $\operatorname{m}ap T X Y$ be the identity operator. Let $\operatorname{m}ap i {X_0}\mathbb G$ and $\operatorname{m}ap j {Y_0}\mathbb G$ denote the inclusions. Applying condition ($G^*$) with $\varepsilon=1$, we obtain linear operators $\operatorname{m}ap {i'} X \mathbb G$, $\operatorname{m}ap {j'} Y \mathbb G$ such that $\norm{i'\restriction X_0 - i}<1$, $\norm{j'\restriction Y_0 - j}<1$ and $\mathbf\Omega \circ i' = j' \circ T$. Obviously, $j'=j$ and hence $\mathbf\Omega(i'(v))= j'(T(v)) = j(v) = v$. \end{pf} \subsection*{Acknowledgments} The authors would like to thank Vladimir M\"uller for pointing out references concerning universal operators on Hilbert spaces. The second author would like to thank the Hausdorff Research Institute for Mathematics (Bonn, September 2013) for their support and warm hospitality. \end{document}
\begin{document} \title{Uniform stabilization in weighted Sobolev spaces for the KdV equation posed on the half-line} \author{Ademir F. Pazoto \thanks{Instituto de Matem\'atica, Universidade Federal do Rio de Janeiro, P.O. Box 68530, CEP 21945-970, Rio de Janeiro, RJ, Brasil ({\tt [email protected]})} \and Lionel Rosier \thanks{Institut Elie Cartan, UMR 7502 UHP/CNRS/INRIA, B.P. 239, F-54506 Vand\oe uvre-l\`es-Nancy Cedex, France ({\tt [email protected]})} } \maketitle \begin{abstract} Studied here is the large-time behavior of solutions of the Korteweg-de Vries equation posed on the right half-line under the effect of a localized damping. Assuming as in \cite{linares-pazoto} that the damping is active on a set $(a_0,+\infty)$ with $a_0>0$, we establish the exponential decay of the solutions in the weighted spaces $L^2((x+1)^mdx)$ for $m\in \mathbb N ^*$ and $L^2(e^{2bx}dx)$ for $b>0$ by a Lyapunov approach. The decay of the spatial derivatives of the solution is also derived.\\ {\bf MSC:} Primary: 93D15, 35Q53; Secondary: 93B05.\\ {\bf Key words.} Exponential Decay, Korteweg-de Vries equation, Stabilization. \end{abstract} \section{Introduction} The Korteweg-de Vries (KdV) equation was first derived as a model for the propagation of small amplitude long water waves along a channel \cite{bouss,jager,korteweg}. It has been intensively studied from various aspects for both mathematics and physics since the 1960s when solitons were discovered through solving the KdV equation, and the inverse scattering method, a so-called nonlinear Fourier transform, was invented to seek solitons \cite{gardner,miura}. It is now well known that the KdV equation is not only a good model for water waves but also a very useful approximation model in nonlinear studies whenever one wishes to include and balance weak nonlinear and dispersive effects. The initial boundary value problems (IBVP) arise naturally in modeling small-amplitude long waves in a channel with a wavemaker mounted at one end \cite{bona1,bona2,bona3,R04}. Such mathematical formulations have received considerable attention in the past, and a satisfactory theory of global well-posedness is available for initial and boundary conditions satisfying physically relevant smoothness and consistency assumptions (see e.g. \cite{bona1,bona4,bona6,bona7,colliander,faminskii,faminskii2} and the references therein). The analysis of the long-time behavior of IBVP on the quarter-plane for KdV has also received considerable attention over recent years, and a review of some of the results related to the issues we address here can be found in \cite{bona5,bona7,leach}. For stabilization and controllability issues on the half line, we refer the reader to \cite{linares-pazoto} and \cite{R00,R02}, respectively. In this work, we are concerned with the asymptotic behavior of the solutions of the IBVP for the KdV equation posed on the positive half line under the presence of a localized damping represented by the function $a$; that is, \begin{equation} \label{1} \begin{cases} u_t + u_x + u_{xxx} + uu_x + a(x)u = 0, \quad x,\, t \in \mathbb{R}^+,\\ u(0,t)=0, \quad t>0,\\ u(x,0)=u_0(x), \quad x > 0. \end{cases} \end{equation} Assuming $a(x)\ge 0$ a.e. 
and that $u(.,t)\in H^3(\mathbb R ^+ )$, it follows from a simple computation that \begin{eqnarray}\label{3} \frac{dE}{dt} = - \int_0^\infty a(x)|u(x,t)|^2 dx - \frac{1}{2} |u_x(0,t)|^2 \end{eqnarray} where \begin{eqnarray}\label{4} E(t) = \frac{1}{2}\,\int_0^\infty |u(x,t)|^2 dx \end{eqnarray} is the total energy associated with (\ref{1}). Then, we see that the term $a(x)u$ plays the role of a feedback damping mechanism and, consequently, it is natural to wonder whether the solutions of (\ref{1}) tend to zero as $t\rightarrow \infty$ and under what rate they decay. When $a(x)>a_0>0$ almost everywhere in $\mathbb{R}^+$, it is very simple to prove that $E(t)$ converges to zero as $t$ tends to infinity. The problem of stabilization when the damping is effective only in a subset of the domain is much more subtle. The following result was obtained in \cite{linares-pazoto}. \begin{theorem} \label{thm0} Assume that the function $a=a(x)$ satisfies the following property \begin{equation} \label{2} a\in L^\infty(\mathbb R ^+),\ a\ge 0\mbox{ a.e. in }\ \mathbb R ^+ \mbox{ and } a(x)\ge a_0>0 \mbox{ a.e. in } \ (x_0,+\infty) \end{equation} for some numbers $a_0,x_0>0$. Then for all $R>0$ there exist two numbers $C>0$ and $\nu >0$ such that for all $u_0\in L^2(\mathbb R ^+)$ with $||u_0||_{L^2(\mathbb R ^+ )}\le R$, the solution $u$ of \eqref{1} satisfies \begin{equation} \label{L2} ||u(t)||_{L^2(\mathbb R ^+)} \le C e^{-\nu t} ||u_0||_{L^2(\mathbb R ^+)}\cdot \end{equation} \end{theorem} Actually, Theorem \ref{thm0} was proved in \cite{linares-pazoto} under the additional hypothesis that \begin{equation} \label{L3} a(x)\ge a_0 \ \mbox{ a.e. in }\ (0,\delta ) \end{equation} for some $\delta >0$, but \eqref{L3} may be dropped by replacing the unique continuation property \cite[Lemma 2.4]{linares-pazoto} by \cite[Theorem 1.6]{rosier-zhang}. The exponential decay of $E(t)$ is obtained following the methods in \cite{pazoto,PMVZ,R97} which combine multiplier techniques and compactness arguments to reduce the problem to some unique continuation property for weak solutions of KdV. Along this work we assume that the real-valued function $a=a(x)$ satisfies the condition \eqref{2} for some given positive numbers $a_0,x_0$. In this paper we investigate the stability properties of \eqref{1} in the weighted spaces introduced by Kato in \cite{kato}. More precisely, for $b> 0$ and $m\in \mathbb{N}$, we prove that the solution $u$ exponentially decays to $0$ in $L^2_b$ and $L^2_{(x+1)^m dx}$ (if $u(0)$ belongs to one of these spaces), where $$L^2_b = \{u:\mathbb{R}^+\rightarrow \mathbb{R} ; \int_0^\infty |u(x)|^2 e^{2bx} dx < \infty \},$$ $$L^2_{(x+1)^m dx} = \{u:\mathbb{R}^+\rightarrow \mathbb{R}; \int_0^\infty |u(x)|^2 (x + 1)^m dx < \infty \} .$$ The following weighted Sobolev spaces $$H^s_b = \{u:\mathbb{R}^+\rightarrow \mathbb{R};\ \partial _x ^i u\in L^2_b \ \mbox{ for } 0 \le i\le s;\ u(0)=0 \hbox{ if } s\ge 1 \}$$ and $$H^s_{(x+1)^mdx} = \{u:\mathbb{R}^+\rightarrow \mathbb{R} ; \,\partial^i_x u \in L^2_{(x+1)^{m-i} dx}\,\hbox{ for }\, 0\leq i \leq s ;\ u(0)=0 \hbox{ if } s\ge 1\},$$ endowed with their usual inner products, will be used thereafter. Note that $H^0_b=L^2_b$ and that $H^0_{(x+1)^mdx}=L^2_{(x+1)^mdx}$. The exponential decay in $L^2_{(x+1)^m dx}$ is obtained by constructing a convenient Lyapunov function (which actually decreases strictly on the sequence of times $\{ kT \}_{k\ge 0}$) by induction on $m$. 
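To give a flavour of the weighted-energy computations behind this Lyapunov approach, here is a formal sketch for the case $m=1$, written for the linearized equation (the nonlinear term $uu_x$ is dropped and enough decay at infinity is assumed to justify the integrations by parts): multiplying $u_t + u_x + u_{xxx} + a(x)u = 0$ by $(x+1)u$, integrating over $\mathbb{R}^+$ and using $u(0,t)=0$ yields $$\frac{d}{dt}\,\frac{1}{2}\int_0^\infty (x+1)|u|^2\,dx + \frac{3}{2}\int_0^\infty |u_x|^2\,dx + \frac{1}{2}|u_x(0,t)|^2 + \int_0^\infty (x+1)\,a(x)|u|^2\,dx = \frac{1}{2}\int_0^\infty |u|^2\,dx,$$ so the weighted energy is dissipated up to a lower-order term on the right-hand side, which is in turn controlled by means of the damping and the $L^2$ decay provided by Theorem \ref{thm0}.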
For $u_0\in L^2_{(x+1)^m dx}$, we also prove the following estimate \begin{equation} ||u(t)||_{H^1_{(x+1)^mdx}} \le C\frac{e^{-\mu t}}{\sqrt{t}} ||u_0||_{L^2_{(x+1)^mdx}} \label{globalkato} \end{equation} in two situations: (i) $m=1$ and $||u_0||_{L^2_{(x+1)^m dx}}$ is arbitrarily large; (ii) $m\ge 2$ and $||u_0||_{L^2_{(x+1)^m dx}}$ is small enough. In the situation (ii), we first establish a similar estimate for the linearized system and next apply the contraction mapping principle in a space of functions fulfilling the exponential decay. Note that \eqref{globalkato} combines the (global) Kato smoothing effect to the exponential decay. The exponential decay in $L^2_b$ is established for any initial data $u_0\in L^2_b$ under the additional assumption that $4 b^3 + b < a_0$. Next, we can derive estimates of the form $$ ||u(t)||_{H^s_b} \le C \frac{e^{-\mu t}}{t^{s/2}} ||u_0||_{L^2_b} $$ for any $s\ge 1$, revealing that $u(t)$ decays exponentially to 0 in strong norms. It would be interesting to see if such results are still true when the function $a$ has a smaller support. It seems reasonable to conjecture that similar positive results can be derived when the support of $a$ contains a set of the form $\cup_{k\ge 1}[ka_0,ka_0+b_0]$ where $0<b_0<a_0$, while a negative result probably holds when the support of $a$ is a finite interval, as the $L^2$ norm of a soliton-like initial data may not be sufficiently dissipated over time. Such issues will be discussed elsewhere. The plan of this paper is as follows. Section 2 is devoted to global well-posedness results in the weighted spaces $L^2_b$ and $L^2_{(x+1)^2dx}$. In section 3, we prove the exponential decay in $L^2_{(x+1)^mdx}$ and $L^2_b$, and establish the exponential decay of the derivatives as well. \section{Global well-posedness} \subsection{Global well-posedness in $\bf L^2_b$} Fix any $b>0$. To begin with, we apply the classical semigroup theory to the linearized system \begin{equation} \label{linear} \begin{cases} u_t + u_x + u_{xxx} + a(x)u = 0, \quad x,\, t \in \mathbb{R}^+,\\ u(0,t)=0, \quad t>0,\\ u(x,0)=u_0(x), \quad x > 0. \end{cases} \end{equation} Let us consider the operator $$A: D(A)\subset L^2_b \rightarrow L^2_b$$ with domain $$D(A) = \{ u\in L^2_b;\ \partial_x^i u\in L^2_b \mbox{ for } 1\leq i \leq 3\,\,\mbox{and}\,\,u(0)=0\}$$ defined by $$Au = - u_{xxx} - u_x - a(x)u.$$ Then, the following result holds. \begin{lemma}\label{semigroup} The operator $A$ defined above generates a continuous semigroup of operators $(S(t))_{t\geq 0}$ in $L^2_b$. \end{lemma} \noindent {\bf Proof.} We first introduce the new variable $v=e^{bx}u$ and consider the following (IBVP) \begin{equation} \label{1.1} \begin{cases} v_t + (\partial_x -b)v + (\partial_x - b)^3v + a(x)v = 0, \quad x,\, t \in \mathbb{R}^+,\\ v(0,t)=0, \quad t>0,\\ v(x,0)=v_0(x)=e^{bx}u_0(x), \quad x > 0. \end{cases} \end{equation} Clearly, the operator $B: D(B)\subset L^2(\mathbb{R} ^+) \rightarrow L^2(\mathbb{R}^+)$ with domain $$D(B) = \{u\in H^3(\mathbb{R}^+);\ u(0)=0\}$$ defined by $$Bv = - (\partial_x -b)v - (\partial_x - b)^3v - a(x)v$$ is densely defined and closed. So, we are done if we prove that for some real number $\lambda$ the operator $B-\lambda $ and its adjoint $B^\ast -\lambda $ are both dissipative in $L^2(\mathbb{R}^+)$. 
It is readily seen that $B^\ast : D(B^\ast)\subset L^2(\mathbb{R}^+)\rightarrow L^2(\mathbb{R}^+)$ is given by $B^\ast v= (\partial_x +b)v + (\partial_x + b)^3v - a(x)v$ with domain $$D(B^\ast) = \{v\in H^3(\mathbb{R}^+);\ v(0)=v'(0)=0\}.$$ Pick any $v\in D(B)$. After some integration by parts, we obtain that $$(Bv,v)_{L^2} = -\frac{1}{2}v_x^2(0) - 3b\int_0^\infty v^2_x dx + (b+b^3)\int_0^\infty v^2dx - \int_0^\infty a(x)v^2 dx,$$ that is, $$([B - (b^3 + b) ]v,v)_{L^2} \leq 0.$$ Analogously, we deduce that for any $v\in D(B^\ast)$ $$(v,[B^\ast - (b^3 + b) ]v)_{L^2} \leq 0$$ which completes the proof. { $\quad$\qd\\} The following linear estimates will be needed. \begin{lemma} \label{lin} Let $u_0 \in L^2_b$ and $u = S(\cdot)u_0$. Then, for any $T>0$ \begin{equation}\label{en-id1} \frac{1}{2}\int_0^\infty |u(x,T)|^2 dx - \frac{1}{2}\int_0^\infty |u_0(x)|^2 dx +\int_0^T\int_0^\infty a(x)|u|^2 dxdt +\frac{1}{2}\int_0^T u_x^2(0,t)dt = 0 \end{equation} \begin{equation}\label{en-id2} \begin{array}{l} \displaystyle\frac{1}{2}\int_0^\infty |u(x,T)|^2 e^{2bx}dx - \frac{1}{2}\int_0^\infty |u_0(x)|^2 e^{2bx}dx +\,3b\displaystyle\int_0^T\int_0^\infty u^2_x e^{2bx} dx dt\\ - \displaystyle(4b^3 + b)\int_0^T\int_0^\infty u^2 e^{2bx} dx dt +\displaystyle\int_0^T\int_0^\infty a(x)|u|^2 e^{2bx}dxdt +\frac{1}{2}\int_0^T u_x^2(0,t)dt = 0. \end{array} \end{equation} As a consequence, \begin{equation}\label{lin-est} \begin{array}{l} ||u||_{L^\infty (0,T;L^2_b)} + ||u_x||_{L^2(0,T;L^2_b)}\leq C\,||u_0||_{L^2_b}, \end{array} \end{equation} where $C=C(T)$ is a positive constant. \end{lemma} \noindent {\bf Proof.} Pick any $u_0\in D(A)$. Multiplying the equation in (\ref{1}) by $u$ and integrating over $(0,+\infty )\times(0,T)$, we obtain (\ref{en-id1}). Then, the identity may be extended to any initial state $u_0\in L^2_b$ by a density argument. To derive (\ref{en-id2}) we first multiply the equation by $(e^{2bx} - 1)u$ and integrate by parts over $(0,+\infty )\times (0,T)$ to deduce that \begin{equation} \begin{array}{l} \displaystyle\frac{1}{2}\int_0^\infty |u(x,T)|^2(e^{2bx}-1)dx - \frac{1}{2}\int_0^\infty |u_0(x)|^2 (e^{2bx}-1)dx \,+\\ +3b\displaystyle\int_0^T\int_0^\infty u^2_x e^{2bx} dx dt - \displaystyle(4b^3 + b)\int_0^T\int_0^\infty u^2 e^{2bx} dx dt \, + \\ +\displaystyle\int_0^T\int_0^\infty a(x)|u|^2 (e^{2bx}- 1) dxdt = 0. \nonumber \end{array} \end{equation} Adding the above equality and (\ref{en-id1}) hand to hand, we obtain (\ref{en-id2}) using the same density argument. Then, Gronwall inequality, (\ref{2}) and (\ref{en-id2}) imply that $$||u||_{L^\infty(0,T;L^2_b)} \leq \, C\,||u_0||_{L^2_b},$$ with $C=C(T) > 0$. This estimate together with (\ref{en-id2}) gives us $$||u_x||_{L^2(0,T;L^2_b)}\leq C\,||u_0||_{L^2_b},$$ where $C=C(T)$ is a positive constant. { $\quad$\qd\\} The global well-posedness result reads as follows: \begin{theorem} \label{global-exp} For any $u_0 \in L^2_b$ and any $T>0$, there exists a unique solution $u\in C([0,T];L^2_b)\cap L^2(0,T;H^1_b)$ of \eqref{1}. \end{theorem} \noindent {\bf Proof.} By computations similar to those performed in the proof of Lemma \ref{lin}, we obtain that for any $f\in C^1([0,T];L^2_b)$ and any $u_0\in D(A)$, the solution $u$ of the system $$ \left\{ \begin{array}{ll} u _t + u_x + u_{xxx} + a(x)u =f, \quad & x\in \mathbb{R}^+,\ t\in (0,T), \\ u(0,t)=0, & t\in (0,T),\\ u(x,0)=u_0(x),& x\in \mathbb{R}^+, \end{array} \right. 
$$ fulfills \begin{equation}\label{est-ex} \sup_{0\le t\le T} ||u(t)||_{L^2_b} +(\int_0^T\!\!\!\int_0^\infty |u_x|^2 e^{2bx}dxdt)^{\frac{1}{2}} \leq C\left( ||u_0||_{L^2_b} +\int_0^T ||f||_{L^2_b} dt \right) \end{equation} for some constant $C=C(T)$ nondecreasing in $T$. A density argument yields that $u\in C([0,T]; L^2_b)$ when $f\in L^1(0,T;L^2_b)$ and $u_0\in L^2_b$. Let $u_0\in L^2_b$ be given. To prove the existence of a solution of \eqref{1} we introduce the map $\Gamma$ defined by $$ (\Gamma u)(t)=S(t)u_0+\int_0^tS(t-s)N(u(s))\, ds $$ where $N(u)= -uu_x$, and the space $$F = C([0,T];L^2_b)\cap L^2(0,T;H^1_b)$$ endowed with its natural norm. We shall prove that $\Gamma$ has a fixed-point in some ball $B_R(0)$ of $F$. We need the following \\ {\sc Claim 1.} If $u\in H^1_b$ then $$||u^2e^{2bx}||_{L^\infty (\mathbb R ^+)} \leq (2 + 2b)\, ||u||_{L^2_b} ||u||_{H^1_b}.$$ From Cauchy-Schwarz inequality, we get for any $\overline{x}\in \mathbb{R}^+$ $$ \begin{array}{l} u^2(\overline{x})e^{2b\overline{x}} = \displaystyle\int_0^{\overline{x}} [u^2e^{2bx}]_x dx = \int_0^{\overline{x}} [2uu_xe^{2bx} + 2bu^2e^{2bx}]dx \\ \leq 2(\displaystyle\int_0^\infty u^2 e^{2bx}dx)^{\frac{1}{2}}(\int_0^\infty u_x^2 e^{2bx}dx)^{\frac{1}{2}} + 2b\int_0^\infty u^2 e^{2bx} dx \leq (2 + 2b) ||u||_{L^2_b} ||u||_{H^1_b} \end{array} $$ which guarantees that Claim 1 holds. {\sc Claim 2.} There exists a constant $K>0$ such that for $0<T\le 1$ \begin{equation} ||\Gamma(u) - \Gamma(v) ||_F \leq KT^{\frac{1}{4}} (||u||_F + ||v||_F) ||u - v||_F, \quad \forall\,u, v\, \in F.\nonumber \end{equation} According to the previous analysis, \begin{equation}\label{fp-0} ||\Gamma(u) - \Gamma(v) ||_F \leq C ||uu_x - vv_x||_{L^1(0,T;L^2_b)}.\nonumber \end{equation} So, applying triangular inequality and H\"older inequality, we have \begin{eqnarray} &&||\Gamma(u) - \Gamma(v) ||_F \leq C \{||u - v||_{L^2(0,T;L^\infty(0,\infty))}||u||_{L^2(0,T;H^1_b)} + \nonumber\\ &&\qquad\qquad + ||v||_{L^2(0,T;L^\infty(0,\infty))}||u - v||_{L^2(0,T;H^1_b)} \}. \label{fp-1} \end{eqnarray} Now, by Claim 1, we have \begin{equation}\label{fp-2} \begin{array}{l} ||u||_{L^2(0,T;L^\infty(0,\infty))}\leq C\, T^{\frac{1}{4}} ||u||_{L^\infty(0,T;L^2_b)}^{\frac{1}{2}}||u||_{L^2(0,T;H^1_b)}^{\frac{1}{2}}. \end{array} \end{equation} Then, combining (\ref{fp-1}) and (\ref{fp-2}), we deduce that \begin{equation} \label{fp-3} \begin{array}{l} ||\Gamma(u) - \Gamma(v) ||_F \leq C\, T^{\frac{1}{4}} \{\,||u||_F + ||v||_F\,\}||u - v||_F. \end{array} \end{equation} Let $T>0$, $R>0$ be numbers whose values will be specified later, and let $u\in B_R(0)\subset F$ be given. Then, by Claim 2 and Lemma \ref{lin}, $\Gamma u \in F$ and $$ ||\Gamma u||_F \leq C\,(\,||u_0||_{L^2_b} + T^{\frac{1}{4}}||u||^2_F\,). $$ Consequently, for $R=2C||u_0||_{L^2_b}$ and $T>0$ small enough, $\Gamma$ maps $B_R(0)$ into itself. Moreover, we infer from \eqref{fp-3} that this mapping contracts if $T$ is small enough. Then, by the contraction mapping theorem, there exists a unique solution $u\in B_R(0)\subset F$ to the problem \eqref{1} for $T$ small enough. In order to prove that this solution is global, we need some a priori estimates. 
So, we proceed as in the proof of Lemma \ref{lin} to obtain for the solution $u$ of \eqref{1} \begin{equation}\label{fp30} \frac{1}{2}\int_0^\infty |u(x,T)|^2 dx - \frac{1}{2}\int_0^\infty |u_0(x)|^2 dx +\int_0^T\int_0^\infty a(x)|u|^2 dxdt +\frac{1}{2}\int_0^T u_x^2(0,t)dt = 0 \end{equation} and \begin{eqnarray} \displaystyle\frac{1}{2}\int_0^\infty |u(x,T)|^2 e^{2bx}dx - \frac{1}{2}\int_0^\infty |u_0(x)|^2 e^{2bx}dx + \frac{1}{2}\int_0^T u_x^2(0,t)dt \nonumber\\ +\,3b\displaystyle\int_0^T\int_0^\infty u^2_x e^{2bx} dx dt - \displaystyle(4b^3 + b)\int_0^T\int_0^\infty u^2 e^{2bx} dx dt\nonumber\\ +\displaystyle\int_0^T\int_0^\infty a(x)|u|^2 e^{2bx}dxdt -\frac{2b}{3}\int_0^T \int_0^\infty u^3 e^{2bx}dx dt= 0.\label{fp-4} \end{eqnarray} First, observe that $$|\int_0^\infty u^2 e^{2bx} dx| = |-\frac{1}{b}\int_0^\infty uu_x e^{2bx}dx| \leq \frac{1}{b} (\int_0^\infty u^2 e^{2bx} dx)^{\frac{1}{2}}(\int_0^\infty u^2_x e^{2bx} dx)^{\frac{1}{2}},$$ therefore, \begin{equation} \begin{array}{l} \displaystyle\int_0^\infty u^2 e^{2bx} dx \leq \frac{1}{b^2}\int_0^\infty u^2_x e^{2bx} dx.\nonumber \end{array} \end{equation} Combined to Claim 1, this yields $$ ||u(x)e^{bx}||_{L^\infty (\mathbb R ^+)} \le C ||u_x ||_{L^2_b}. $$ On the other hand, it follows from \eqref{fp30} that $$ || u(t) ||_{L^2(\mathbb R ^+)} \le ||u_0||_{L^2(\mathbb R ^+)}, $$ hence \begin{eqnarray*} \int_0^T \!\!\! \int_0^\infty |u|^3 e^{2bx} dx dt &\leq& \int_0^T ||ue^{bx}||_{L^\infty(\mathbb R ^+)} (\int_0^\infty |u|^2e^{bx}dx)dt \\ &\le& C \int_0^T ||u_x||_{L^2_b} ||u||_{L^2_b}||u||_{L^2}dt \\ &\le& \delta ||u_x||^2_{L^2(0,T;L^2_b)} + C_\delta ||u||^2_{L^2(0,T;L^2_b)}, \end{eqnarray*} where $\delta > 0 $ is arbitrarily chosen and $C=C(b,\delta, ||u_0||_{L^2(\mathbb R ^+)} )$ is a positive constant. Combining this inequality (with $\delta <9/2$) to (\ref{fp-4}) results in $$ ||u(T)||^2_{L^2_b} \le ||u_0||^2_{L^2_b} + C\int_0^T||u||^2_{L^2_b}dt $$ where $C=C(b,||u_0||_{L^2(\mathbb R ^+)})$ does not depend on $T$. It follows from Gronwall lemma that $$ ||u(T)||^2_{L^2_b} \le ||u_0||^2_{L^2_b} e^{CT} $$ for all $T>0$, which gives the global well-posedness. { $\quad$\qd\\} \subsection{Global well-posedness in $L^2_{(x+1)^2dx} $} \begin{definition} For $u_0\in L^2_{(x+1)^2dx}$ and $T>0$, we denote by a {\em mild solution} of \eqref{1} any function $u\in C([0,T];L^2_{(x+1)^2 dx})\cap L^2(0,T;H^1_{(x+1)^2dx})$ which solves \eqref{1}, and such that for some $b>0$ and some sequence $\{ u_{n,0}\}\subset L^2_b$ we have \begin{eqnarray*} &&u_{n,0}\to u_0 \,\, \mbox{ strongly in }\,\, L^2_{(x+1)^2dx},\\ &&u_n \to u\,\,\mbox{weakly}\ast\,\,\mbox{in}\,\, L^\infty (0,T; L^2_{(x+1)^2dx}),\\ &&u_n \to u\,\,\mbox{weakly}\,\,\mbox{in}\,\, L^2(0,T; H^1_{(x+1)^2dx}), \end{eqnarray*} $u_n$ denoting the solution of \eqref{1} emanating from $u_{n,0}$ at $t=0$. \end{definition} \begin{theorem}\label{global-pol} For any $u_0 \in L^2_{(x+1)^2dx}$ and any $T>0$, there exists a unique mild solution $u\in C([0,T];L^2_{(x+1)^2dx})\cap L^2(0,T;H^1_{(x+1)^2dx} )$ of \eqref{1}. \end{theorem} \noindent {\bf Proof.} We prove the existence and the uniqueness in two steps.\\ {\sc Step 1. Existence}\\ Since the embedding $L^2_b\subset L^2_{(x+1)^2 dx}$ is dense, for any given $u_0\in L^2_{(x+1)^2dx}$ we may construct a sequence $\{u_{n,0}\}\subset L^2_b$ such that $u_{n,0}\rightarrow u_0$ in $L^2_{(x+1)^2 dx}$ as $n\rightarrow \infty$. For each $n$, let $u_n$ denote the solution of \eqref{1} emanating from $u_{n,0}$ at $t=0$, which is given by Theorem \ref{global-exp}. 
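Let us point out that such an approximating sequence is easily constructed; one admissible choice (among many) is the truncation $u_{n,0}:=u_0\,\mathbf{1}_{(0,n)}$. Indeed, $u_{n,0}\in L^2_b$ for every $b>0$, since $$\int_0^\infty |u_{n,0}(x)|^2 e^{2bx}dx = \int_0^n |u_0(x)|^2 e^{2bx}dx \leq e^{2bn}\int_0^n |u_0(x)|^2 dx <\infty,$$ while $$||u_{n,0}-u_0||^2_{L^2_{(x+1)^2dx}} = \int_n^\infty (x+1)^2 |u_0(x)|^2 dx \longrightarrow 0 \quad \mbox{as } n\to \infty,$$ being the tail of a convergent integral.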
Then $u_n\in C([0,T];L^2_b)\cap L^2(0,T;H^1_b)$ and it solves \begin{eqnarray} && u_{n,t} + u_{n,x} + u_{n,xxx} + u_n u_{n,x} + a(x)u_{n} =0, \label{C1}\\ && u_n(0,t) =0 \label{C2}\\ && u_n(x,0)=u_{n,0}(x). \label{C3} \end{eqnarray} Multiplying \eqref{C1} by $(x+1)^2 u_n$ and integrating by parts, we obtain \begin{eqnarray} &&\frac{1}{2} \int_0^\infty (x+1)^2 |u_n (x,T)|^2 dx +3\int _0^T\!\!\!\int_0^\infty (x+1)|u_{n,x}|^2dxdt +\frac{1}{2}\int_0^T |u_{n,x}(0,t)|^2dt \nonumber\\ &&-\int_0^T\!\!\! \int_0^\infty (x+1)|u_n|^2 dxdt -\frac{2}{3} \int_0^T\!\!\!\int_0^\infty (x+1)u_n^3 \, dxdt + \int_0^T\!\!\!\int_0^\infty (x+1)^2u_n^2a(x)dx \nonumber \\ &&\qquad = \frac{1}{2}\int_0^\infty (x+1)^2 |u_{n,0}(x)|^2 dx. \label{C4} \end{eqnarray} Scaling in \eqref{C1} by $u_n$ gives \begin{eqnarray*} &&\frac{1}{2}\int_0^\infty |u_n(x,T)|^2 dx + \frac{1}{2}\int_0^T |u_{n,x}(0,t)|^2 dt + \int_0^T\!\!\!\int_0^\infty a(x)|u_n(x,t)|^2dxdt \\ &&=\frac{1}{2}\int_0^\infty |u_{n,0}(x)|^2dx, \end{eqnarray*} hence \begin{equation} ||u_n||_{L^2(\mathbb R ^+)} \le ||u_{n,0}||_{L^2(\mathbb R ^+)} \le C \label{C5} \end{equation} where $C=C( ||u_0||_{L^2(\mathbb R ^+)} )$. It follows that \begin{eqnarray} \frac{2}{3}\int_0^\infty (x+1)|u_n|^3dx &\le& \frac{2\sqrt{2}}{3} ||u_{n,x}||_{L^2(\mathbb R ^+)}^{\frac{1}{2}} ||u_n||_{L^2(\mathbb R ^+)}^{\frac{3}{2}} ||(x+1)u_n||_{L^2(\mathbb R ^+)} \nonumber \\ &\le& \int_0^\infty (x+1)|u_{n,x}|^2 dx + C \int_0^\infty (x+1)^2 |u_n|^2 dx \label{C6} \end{eqnarray} which, combined to \eqref{C4}, gives \begin{eqnarray*} &&\frac{1}{2}\int_0^\infty (x+1)^2 |u_n(x,T)|^2 dx + 2 \int_0^T\!\!\!\int_0^\infty (x+1)|u_{n,x}|^2 dxdt +\frac{1}{2}\int_0^T |u_{n,x}(0,t)|^2 dt \\ &&\qquad \le \frac{1}{2} \int_0^\infty (x+1)^2 |u_{n,0}(x)|^2dx + C\int_0^T\!\!\!\int_0^\infty (x+1)^2 |u_n (x,t)|^2 dxdt. \end{eqnarray*} An application of Gronwall's lemma yields \begin{eqnarray*} ||u_n||_{L^\infty (0,T;L^2_{(x+1)^2dx} )} &\leq& C(T,||u_{n,0}||_{L^2_{ (x+1)^2dx}}),\\ ||u_{n,x}||_{L^2(0,T;H^1_{ (x+1)^2 dx} )}&\leq& C(T,||u_{n,0}||_{L^2_{(x+1)^2 dx}}),\\ ||u_{n,x}(0,.)||_{L^2(0,T)} &\leq& C(T,||u_{n,0}||_{L^2_{ (x+1)^2dx}}). \end{eqnarray*} Therefore, there exists a subsequence of $\{u_n\}$, still denoted by $\{ u_n \}$, such that \begin{equation} \begin{cases} u_n \rightharpoonup u\,\,\mbox{weakly}\,\,\ast\,\,\mbox{in}\,\, L^\infty (0,T; L^2_{(x+1)^2dx}),\\ u_n \rightharpoonup u\,\,\mbox{weakly}\,\,\mbox{in}\,\, L^2(0,T; H^1_{(x+1)^2dx}),\\ u_{n,x}(0,.) \rightharpoonup u_x(0,.) \,\,\mbox{weakly}\,\,\mbox{in}\,\, L^2(0,T). \nonumber \end{cases} \end{equation} Note that, for all $L>0$, $\{ u_n\}$ is bounded in $L^2(0,T;H^1(0,L))\cap H^1(0,T;H^{-2}(0,L))$, hence by Aubin's lemma, we have (after extracting a subsequence if needed) $$u_n\to u\ \ \mbox{\rm strongly in } \ L^2(0,T;L^2(0,L)) \mbox{ for all } L>0.$$ This gives that $u_n u_{n,x} \to u u_x$ in the sense of distributions, hence the limit $u\in L^\infty (0,T;L^2_{(x+1)^2dx}) \cap L^2(0,T;H^1_{(x+1)^2dx})$ is a solution of \eqref{1}. Let us check that $u\in C([0,T]; L^2_{(x+1)^2dx})$. Since $u\in C([0,T]; H^{-2}(\mathbb R ^+))\cap L^\infty (0,T; L^2_{(x+1)^2dx})$, we have that $u\in C_w ([0,T]; L^2_{(x+1)^2dx})$ (see e.g. \cite{lions-magenes}), where $C_w ([0,T]; L^2_{(x+1)^2 dx})$ denotes the space of sequentially weakly continuous functions from $[0,T]$ into $L^2_{(x+1)^2dx}$. We claim that $u\in L^3(0,T;L^3(\mathbb R ^+))$. 
Indeed, from Moser estimate (see \cite{Taylor3}) \begin{equation} \label{Moser} ||u||_{L^\infty (\mathbb R ^+)}\le \sqrt{2}||u_x||^{\frac{1}{2}}_{L^2(\mathbb R ^+)} ||u||^{\frac{1}{2}}_{L^2(\mathbb R ^+)} \end{equation} and Young inequality we get \begin{equation} \label{exp-5bis} \int_0^\infty |u|^3 dx \le ||u||_{L^\infty} ||u||^2_{L^2} \le \sqrt{2} ||u_x||^{\frac{1}{2}}_{L^2} ||u||^{\frac{5}{2}}_{L^2} \le \varepsilon ||u_x||_{L^2}^2 + c_{\varepsilon} ||u||_{L^2}^{\frac{10}{3}} \end{equation} where $\varepsilon >0$ is arbitrarily chosen and $c_\varepsilon$ denotes some positive constant. Since $u\in C_w([0,T];L^2_{(x+1)^2dx}) \cap L^2(0,T;H^1_{(x+1)^2 dx})$, it follows that $u\in L^3(0,T;L^3(\mathbb R ^+))$. On the other hand, $u(0,t)=0$ for $t\in (0,T)$ and $u_x(0,.) \in L^2(0,T)$. Scaling in \eqref{1} by $(x+1)^2 u$ yields for all $t_1,t_2\in (0,T)$ \begin{eqnarray} &&\frac{1}{2}\int_0^\infty (x+1)^2 |u(x,t_2)|^2 dx - \frac{1}{2}\int_0^\infty (x+1)^2 |u(x,t_1)|^2 dx \nonumber \\ &&=-3 \int_{t_1}^{t_2} \!\!\!\int_0^\infty (x+1)|u_x|^2 dxdt -\frac{1}{2}\int_{t_1}^{t_2}|u_x(0,t)|^2 dt + \int_{t_1}^{t_2}\!\!\!\int_0^\infty (x+1)|u|^2 dxdt \nonumber\\ &&+\frac{2}{3}\int_{t_1}^{t_2}\!\!\!\int_0^\infty (x+1)u^3 dxdt -\int_{t_1}^{t_2}\!\!\!\int_0^\infty (x+1)^2 a(x) |u|^2 dxdt. \label{LL28} \end{eqnarray} Therefore $\lim_{t_1\to t_2} \left\vert ||u(t_2)||^2_{L^2_{(x+1)^2dx}}-||u(t_1)||^2_{L^2_{(x+1)^2dx}} \right\vert =0$. Combined to the fact that $u\in C_w([0,T];L^2_{(x+1)^2dx})$, this yields $u\in C([0,T],L^2_{(x+1)^2dx})$. \\ {\sc Step 2. Uniqueness}\\ Here, $C$ will denote a universal constant which may vary from line to line. Pick $u_0\in L^2_{(x+1)^2dx}$, and let $u,v\in C([0,T];L^2_{(x+1)^2dx}) \cap L^2(0,T;H^1_{(x+1)^2dx})$ be two mild solutions of \eqref{1}. Pick two sequences $\{u_{n,0}\}$, $ \{v_{n,0} \}$ in $L^2_b$ for some $b>0$ such that \begin{eqnarray} &&u_{n,0}\to u_0 \,\, \mbox{ strongly in }\,\, L^2_{(x+1)^2dx}, \label{X1}\\ &&u_n \to u\,\,\mbox{weakly}\ast\,\,\mbox{in}\,\, L^\infty (0,T; L^2_{(x+1)^2dx}),\label{X2}\\ &&u_n \to u\,\,\mbox{weakly}\,\,\mbox{in}\,\, L^2(0,T; H^1_{(x+1)^2dx})\label{X3} \end{eqnarray} and also \begin{eqnarray} &&v_{n,0}\to u_0 \,\, \mbox{ strongly in }\,\, L^2_{(x+1)^2dx}, \label{X4}\\ &&v_n \to v\,\,\mbox{weakly}\ast\,\,\mbox{in}\,\, L^\infty (0,T; L^2_{(x+1)^2dx}),\label{X5}\\ &&v_n \to v\,\,\mbox{weakly}\,\,\mbox{in}\,\, L^2(0,T; H^1_{(x+1)^2dx}).\label{X6} \end{eqnarray} We shall prove that $w=u-v$ vanishes on $\mathbb R ^+\times [0,T]$ by providing some estimate for $w_n=u_n-v_n$. Note first that $w_n$ solves the system \begin{eqnarray} &&w_{n,t} + w_{n,x}+ w_{n,xxx} + aw_n = f_n = v_n v_{n,x} - u_n u_{n,x}, \label{X7}\\ &&w_n(0,t)=0, \label{X8}\\ &&w_n(x,0)=w_{n,0}(x)=u_{n,0}(x)-v_{n,0}(x). \label{X9} \end{eqnarray} Scaling in \eqref{X7} by $(x+1)w_n$ yields \begin{eqnarray*} &&\frac{1}{2}\int_0^\infty (x+1)|w_n(x,t)|^2dx + \frac{3}{2}\int_0^t \!\!\! \int_0^\infty |w_{n,x}|^2 dxd\tau -\frac{1}{2}\int_0^t\!\!\!\int_0^\infty |w_n|^2dxd\tau \nonumber\\ &&\le \frac{1}{2} \int_0^\infty (x+1)|w_{n,0}|^2 dx + \int_0^t (\int _0^\infty (x+1)|w_n|^2dx)^{\frac{1}{2}} (\int _0^\infty (x+1)|f_n|^2dx)^{\frac{1}{2}}d\tau \\ &&\le \frac{1}{2} \int_0^\infty (x+1)|w_{n,0}|^2 dx + \frac{1}{4} \sup_{0 < \tau < t}\int _0^\infty (x+1)|w_n(x,\tau )|^2dx \\ &&\qquad + [\int_0^T(\int _0^\infty (x+1)|f_n|^2dx)^{\frac{1}{2}} d\tau ]^2. 
\end{eqnarray*} Since $||w_n(t)||_{L^2(\mathbb R ^+)}\le ||w_n(t)||_{L^2_{(x+1)dx}}$, this yields for $T<1/10$ \begin{eqnarray} &&\sup_{0<t<T}\int_0^\infty (x+1)|w_n(x,t)|^2dx + \int_0^T\!\!\!\int_0^\infty |w_{n,x}|^2dxdt \nonumber\\ &&\qquad \le C[ \int_0^\infty (x+1)|w_{n,0}(x)|^2dx + \left( \int_0^T (\int_0^\infty (x+1) |f_n|^2 dx) ^{\frac{1}{2}} d\tau \right)^2 ]. \label{X10} \end{eqnarray} It remains to estimate $\int_0^T(\int_0^\infty (x+1)|f_n|^2dx)^{\frac{1}{2}}dt$. We split $f_n$ into $$ f_n = (v_n-u_n) v_{n,x} + u_n ( v_{n,x} - u_{n,x} ) = f_n^1 + f_n^2. $$ We have that \begin{eqnarray*} \int_0^T (\int_0^\infty (x+1)|f_n^1|^2dx )^\frac{1}{2}dt &=& \int_0^T (\int_0^\infty (x+1) |w_n|^2 |v_{n,x}|^2dx )^{\frac{1}{2}} dt \\ &\le & \int_0^T ||w_n||_{L^\infty (\mathbb R ^+)} (\int_0^\infty (x+1) |v_{n,x}|^2 dx)^{\frac{1}{2}} dt \\ &\le& (\int_0^T ||w_n||^2_{L^\infty (\mathbb R ^+)}dt )^\frac{1}{2} (\int_0^T\!\!\!\int_0^\infty (x+1) |v_{n,x}|^2dxdt )^\frac{1}{2}. \end{eqnarray*} By Sobolev embedding, we have that \begin{eqnarray*} (\int_0^T ||w_n||^2_{L^\infty (\mathbb R ^+)} dt )^\frac{1}{2} &\le& (\int_0^T ||w_n||^2_{H^1 (\mathbb R ^+)} dt )^\frac{1}{2} \\ &\le& \sqrt{T} \sup_{0<t<T} ||w_n||_{L^2(\mathbb R ^+)} + ||w_{n,x}||_{L^2(0,T;L^2(\mathbb R ^+))}\cdot \end{eqnarray*} Thus \begin{eqnarray} &&\int_0^T (\int_0^\infty (x+1)|f_n^1|^2dx)^{\frac{1}{2}} dt \le ||v_{n,x}||_{L^2(0,T;L^2_{(x+1)dx})} \big( \sqrt{T} \sup_{0<t<T}||w_n||_{L^2(\mathbb R ^+)} \nonumber\\ &&\qquad+ ||w_{n,x}||_{L^2(0,T;L^2(\mathbb R ^+))}\big) \label{X11} \end{eqnarray} On the other hand, we have that \begin{eqnarray} &&\int_0^T ( \int_0^\infty (x+1)|f_n^2|^2 dx)^\frac{1}{2}dt \nonumber\\ &&\qquad = \int_0^T (\int_0^\infty (x+1)|u_n|^2 |w_{n,x}|^2 dx )^{\frac{1}{2}} dt\nonumber \\ &&\qquad \le \int_0^T ||(x+1)^{\frac{1}{2}} u_n||_{L^\infty (\mathbb R ^+)} ||w_{n,x}||_{L^2(\mathbb R ^+)} dt \nonumber \\ &&\qquad \le C\int_0^T \big(||(x+1)^\frac{1}{2}u_n||_{L^2(\mathbb R ^+)} + ||(x+1)^\frac{1}{2} u_{n,x}||_{L^2( \mathbb R ^+)} \big)||w_{n,x}||_{L^2(\mathbb R ^+ )} dt \nonumber \\ &&\qquad \le C \bigg(\sqrt{T} ||(x+1)u_n||_{L^\infty (0,T;L^2(\mathbb R ^+))} \nonumber\\ &&\qquad\qquad + ||(x+1)^\frac{1}{2}u_{n,x}||_{L^2(0,T,L^2(\mathbb R ^+))} \bigg) ||w_{n,x}||_{L^2(0,T;L^2(\mathbb R ^+))}. \label{X12} \end{eqnarray} Gathering together \eqref{X10}, \eqref{X11} and \eqref{X12}, we conclude that for $T<1/10$ $$ h_n(T) \le K_n(T) h_n(T) + C||w_{n,0}||^2_{L^2_{(x+1)dx}} $$ where \begin{eqnarray} h_n(t)&:=&\sup_{0<\tau <T} \int_0^\infty (x+1)|w_n(x,\tau )|^2 dx +\int_0^T \!\!\!\int_0^\infty |w_{n,x}|^2 dxdt\\ K_n(T)&\le& C \left( \int_0^T\!\!\!\int_0^\infty (x+1)|v_{n,x}|^2dxdt +T ||(x+1)u_n||^2_{L^\infty (0,T;L^2(\mathbb R ^+))} \right. \nonumber \\ &&\left. +\int_0^T\!\!\! \int_0^\infty (x+1)|u_{n,x}|^2 dxdt \right) \end{eqnarray} and $C$ denotes a universal constant. The following claim is needed.\\ {\sc Claim 3.} $$\lim_{T\to 0}\limsup _{n\to \infty}\int_0^T\!\!\!\int_0^\infty (x+1)|u_{n,x}|^2dxdt=0,\quad \lim_{T\to 0}\limsup _{n\to \infty} \int_0^T\!\!\!\int_0^\infty (x+1)|v_{n,x}|^2dxdt=0.$$ Clearly, it is sufficient to prove the claim for the sequence $\{ u_n \}$ only. 
From \eqref{LL28} applied with $u=u_n$ on $[0,T]$, we obtain \begin{eqnarray*} &&\frac{1}{2}\int_0^\infty (x+1)^2|u_n(x,T)|^2dx + 3\int_0^T\!\!\!\int_0^\infty (x+1)|u_{n,x}|^2 dxdt \nonumber \\ &&\le \frac{1}{2}\int_0^\infty (x+1)^2|u_{n,0}|^2dx +\int_0^T\!\!\!\int_0^\infty (x+1)|u_n|^2 dxdt +\frac{2}{3} \int_0^T\!\!\!\int_0^\infty (x+1)|u_n|^3 dxdt.\label{P1} \end{eqnarray*} Combined to \eqref{C5}-\eqref{C6}, this gives \begin{eqnarray} &&||u_n(T)||^2_{L^2_{(x+1)^2dx}} +\int_0^T\!\!\!\int_0^\infty (x+1)|u_{n,x}|^2dxdt \nonumber\\ &&\qquad \le ||u_{n,0}||^2_{L^2_{(x+1)^2 dx}} +C\int_0^T ||u_n||^2_{L^2_{(x+1)^2dx}}dt. \label{P2} \end{eqnarray} It follows from Gronwall lemma that \begin{equation} \label{gron} ||u_n(t)||^2_{L^2_{(x+1)^2dx}}\le ||u_{n,0}||^2_{L^2_{(x+1)^2dx}} e^{Ct} \end{equation} Using \eqref{gron} in \eqref{P2} and taking the limit sup as $n\to \infty$ gives for a.e. $T$ $$ ||u(T)||^2_{L^2_{(x+1)^2dx}} + \limsup_{n\to \infty} \int_0^T\!\!\!\int_0^\infty |u_{n,x}|^2dxdt \le e^{CT}||u_0||^2_{L^2_{(x+1)^2dx}} $$ As $u$ is continuous from $\mathbb R^+$ to $L^2_{(x+1)^2dx}$, we infer that $$ \lim_{T\to 0} \limsup_{n\to \infty}\int_0^T\!\!\!\int_0^\infty |u_{n,x}|^2dxdt=0. $$ The claim is proved. Therefore, we have that for $T>0$ small enough and $n$ large enough, $K_n(T)<\frac{1}{2}$, and hence $$ h_n(T)\le 2C ||w_n(0)||^2_{L^2_{(x+1)dx}}. $$ This yields \begin{equation*} ||u-v||^2_{L^\infty (0,T;L^2_{(x+1)dx})} \le \liminf _{n\to \infty} h_n(T) \le 2C \liminf _{n\to \infty} ||w_n(0)||^2_{L^2_{(x+1)dx}} =0 \end{equation*} and $u=v$ for $0<t<T$. This proves the uniqueness for $T$ small enough. The general case follows by a classical argument. { $\quad$\qd\\} \begin{remark} \begin{enumerate} \item If we assume only that $u_0\in L^2_{(x+1)dx}$, then a proof similar to Step 1 gives the existence of a mild solution $u\in C([0,T];L^2_{(x+1)dx}) \cap L^2(0,T;H^1_{(x+1)dx})$ of \eqref{1}. The uniqueness of such a solution is open. The existence and uniqueness of a solution issuing from $u_0\in L^2_{(x+1)dx}$ in a class of functions involving a Bourgain norm has been given in \cite{faminskii2}. \item If $u_0\in L^2_{(x+1)^m dx}$ with $m\ge 3$, then $u\in C([0,T];L^2_{(x+1)^mdx})\cap L^2(0,T;H^1_{(x+1)^mdx})$ for all $T>0$ (see below Theorem \ref{dec-pol}). \end{enumerate} \end{remark} \section{Asymptotic Behavior} \subsection{Decay in $L^2_{(x+1)^m dx}$} \begin{theorem} \label{dec-pol} Assume that the function $a=a(x)$ satisfies (\ref{2}). Then, for all $R>0$ and $m\ge 1$, there exist numbers $C > 0$ and $\nu > 0$ such that $$||u(t)||_{L^2_{(x+1)^mdx}} \leq C\, e^{-\nu t} ||u_0||_{L^2_{(x+1)^mdx}} $$ for any solution given by Theorem \ref{global-pol}, whenever $||u_0||_{ L^2_{(x+1)^mdx} }\leq R$. \end{theorem} \noindent {\bf Proof.} The proof will be done by induction in $m$. We set \begin{equation}\label{V0} V_0(u) = E(u) = \frac{1}{2} \int_0^\infty u^2 dx \end{equation} and define the Lyapunov function $V_m$ for $m\ge 1 $ in an inductive way \begin{equation}\label{pol1} V_m(u) = \displaystyle\frac{1}{2}\int_0^\infty (x + 1)^m u^2 dx + d_{m-1}V_{m-1}(u), \end{equation} where $d_{m-1} > 0$ is chosen sufficiently large (see below). Suppose first that $m=1$ and put $V = V_1$. Multiplying the first equation in (\ref{1}) by $u$ and integrating by parts over $\mathbb R^+ \times (0,T)$, we obtain \begin{equation}\label{exp-3} \frac{1}{2}\int_0^\infty |u(x,T)|^2 dx = \frac{1}{2}\int_0^\infty |u_0(x)|^2 dx - \int_0^T\int_0^\infty a(x)|u|^2 dxdt - \frac{1}{2}\int_0^T u_x^2(0,t)dt. 
\end{equation} Now, multiplying the equation by $xu$, we deduce that \begin{eqnarray} && \displaystyle\frac{1}{2}\int_0^\infty x |u(x,T)|^2 dx - \frac{1}{2}\int_0^\infty x |u_0(x)|^2 dx + \frac{3}{2} \int_0^T\int_0^\infty u_x^2 dx dt \nonumber \\ && -\displaystyle\frac{1}{2} \int_0^T\int_0^\infty u^2 dx dt-\displaystyle\frac{1}{3} \int_0^T\int_0^\infty u^3 dx dt+\int_0^T\int_0^\infty x a(x)|u|^2dxdt = 0.\qquad \label{exp-4} \end{eqnarray} Combining (\ref{exp-3}) and (\ref{exp-4}) it follows that \begin{eqnarray} && V(u) - V(u_0) + (d_0+1)\left( \displaystyle\frac{1}{2}\int_0^T u_x^2(0,t)dt + \int_0^T\int_0^\infty a(x)|u|^2 dxdt\right) \nonumber\\ && + \displaystyle \frac{3}{2} \int_0^T\int_0^\infty u_x^2 dx dt-\displaystyle\frac{1}{2} \int_0^T\int_0^\infty u^2 dx dt -\displaystyle\frac{1}{3} \int_0^T\int_0^\infty u^3 dx dt \nonumber\\ && \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad + \displaystyle\int_0^T\int_0^\infty xa(x)|u|^2 dxdt = 0. \label{exp-5} \end{eqnarray} The next step is devoted to estimate the nonlinear term in the left hand side of (\ref{exp-5}). To do that, we first assume that $||u_0||_{L^2} \le 1$. By \eqref{exp-5bis} we have that \begin{equation*} \int_0^\infty |u|^3 dx \le \varepsilon ||u_x||_{L^2}^2 + c_{\varepsilon} ||u||_{L^2}^{\frac{10}{3}} \end{equation*} for any $\varepsilon >0$ and some constant $c_\varepsilon >0$. Thus, if $||u_0||_{L^2} \le 1$, we have $||u||_{L^2}^{\frac{10}{3}} \leq ||u||_{L^2}^2$ and \begin{equation} \label{exp-6} \int_0^T\int_0^\infty |u|^3 dx dt \leq \varepsilon \int_0^T\int_0^\infty u_x^2 dx dt+ c_\varepsilon\int_0^T\int_0^\infty u^2 dx dt. \end{equation} Moreover, according to \cite{linares-pazoto}, there exists $c_1>0$, satisfying \begin{equation}\label{exp-7} \int_0^T\int_0^\infty u^2 dx dt \leq c_1\{ \frac{1}{2} \int_0^T u_x^2(0,t) dt+ \int_0^T\int_0^\infty a(x) u^2 dx dt \}. \end{equation} Now, combining (\ref{exp-5})-(\ref{exp-7}) and taking $\varepsilon < \frac{1}{2}$ and $d_0 := 2 c_1(\frac{1}{2} + \frac{c_\varepsilon}{3})$ we obtain \begin{eqnarray} &&V(u(T)) - V(u_0) + \frac{d_0+1}{2}(\frac{1}{2}\int_0^T u_x^2(0,t)dt + \int_0^T\int_0^\infty a(x)|u|^2 dxdt) \nonumber\\ &&\qquad+\,(\frac{3}{2} - \frac{\varepsilon}{3}) \int_0^T\int_0^\infty u_x^2 dx dt +\displaystyle\int_0^T\int_0^\infty xa(x)|u|^2 dxdt \leq 0 \label{exp-71} \end{eqnarray} or \begin{equation} \label{exp-72} V(u(T)) - V(u_0) \leq - \widetilde{c}\,\{\displaystyle\int_0^T u_x^2(0,t)dt + \int_0^T\int_0^\infty (x + 1) a(x)|u|^2 dxdt + \int_0^T\int_0^\infty u_x^2 dx dt \} \end{equation} where $\widetilde{c}>0$. We aim to prove the existence of a constant $c>0$ satisfying \begin{equation} \label{exp-73} V(u(T)) - V(u_0) \leq - c\,V(u_0) \end{equation} Indeed, such an inequality gives at once the decay $V(u(t))\le c e^{-\nu t} V(u_0)$. To this end, we need to establish two claims. 
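Before turning to these claims, let us briefly indicate why \eqref{exp-73} yields the exponential decay of $V$ (a standard iteration argument). By \eqref{exp-72}, applied between two arbitrary times (which is licit since, by \eqref{exp-3}, $||u(t)||_{L^2}$ is nonincreasing and thus remains bounded by $1$), the map $t\mapsto V(u(t))$ is nonincreasing. Moreover, \eqref{exp-73} applied with $u(kT)$ as initial state gives $V(u((k+1)T))\le (1-c)\,V(u(kT))$ for every integer $k\ge 0$, where, decreasing $c$ if necessary, we may assume that $0<c<1$. Hence, for $t\in [kT,(k+1)T)$, $$V(u(t)) \le V(u(kT)) \le (1-c)^k V(u_0) \le \frac{1}{1-c}\, e^{-\nu t} V(u_0), \qquad \nu:=\frac{1}{T}\log\frac{1}{1-c}.$$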
{\sc Claim 4.} There exists $c>0$ such that $$\displaystyle\int_0^T V(u) dt \leq c\, \{\int_0^T u_x^2(0,t) dt + \int_0^T\int_0^\infty (x +1) a(x) u^2 dx dt\}.$$ Since $u_0\in L^2_{(x+1)dx}\subset L^2$, from (\ref{2}) and (\ref{exp-7}) we get \begin{eqnarray*} \int_0^T V(u) dt &=& \frac{1}{2}\int_0^T\int_0^\infty (x + 1) u^2 dx dt + \frac{d_0}{2} \int_0^T\int_0^\infty u^2 dx dt \\ &\leq& \frac{c_1 d_0}{2} \{\displaystyle \frac{1}{2} \int_0^T u_x^2(0,t) dt + \int_0^T\int_0^\infty a(x) u^2 dx dt\}\\ &&\qquad + \frac{1}{2}\int_0^T\int_0^{x_0} (x + 1 )u^2 dx dt +\frac{1}{2}\int_0^T\int_{x_0}^\infty (x + 1) u^2 dx dt\\ &\leq& \frac{c_1 d_0}{2} \{\frac{1}{2} \int_0^T u_x^2(0,t) dt + \int_0^T\int_0^\infty a(x) u^2 dx dt\} \\ &&\qquad +\frac{1}{2}(x_0 + 1)\int_0^T\int_0^{x_0}u^2 dx dt + \frac{1}{2}\int_0^T\int_{x_0}^\infty (x + 1)\frac{a(x)}{a_0} u^2 dx dt\\ &\leq& c\,\{ \int_0^T u_x^2(0,t) dt + \int_0^T\int_0^\infty (x + 1) a(x) u^2 dx dt\}. \end{eqnarray*} {\sc Claim 5.} \begin{equation} \label{exp-74} V(u_0) \leq C (\int_0^T u_x^2(0,t) dt + \int_0^T\int_0^\infty (x + 1) a(x) u^2 dx dt + \int_0^T\int_0^\infty u_x^2 dx dt ) \end{equation} where $C > 0$. Multiplying the first equation in (\ref{1}) by $(T-t)u$ and integrating by parts in $(0,\infty)\times (0,T)$, we obtain \begin{equation}\label{exp-75} \begin{array}{l} \displaystyle\frac{T}{2}\int_0^\infty |u_0(x)|^2 dx =\\ \displaystyle\frac{1}{2}\int_0^T\int_0^\infty |u|^2 dx dt + \int_0^T\int_0^\infty (T-t) a(x)|u|^2 dxdt +\frac{1}{2}\int_0^T (T-t) u_x^2(0,t)dt, \end{array} \end{equation} and therefore, using (\ref{exp-7}) \begin{equation}\label{exp-8} \begin{array}{l} \displaystyle\int_0^\infty |u_0(x)|^2 dx \leq C\left( \int_0^T\int_0^\infty a(x)|u|^2 dxdt + \int_0^T u_x^2(0,t) dt\right) . \end{array} \end{equation} Now, multiplying by $(T-t)x u$, it follows that \begin{equation} \begin{array}{l} -\displaystyle\frac{T}{2}\int_0^\infty x |u_0(x)|^2 dx + \frac{1}{2}\int_0^T\int_0^\infty x |u|^2 dx dt +\,\frac{3}{2}\displaystyle\int_0^T\int_0^\infty (T-t) u^2_x dx dt\\ - \displaystyle\frac{1}{2}\int_0^T\int_0^\infty (T- t) u^2 dx dt +\displaystyle\int_0^T\int_0^\infty (T - t)x a(x)|u|^2 dx dt - \\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad -\displaystyle\frac{1}{3}\int_0^T \int_0^\infty (T - t) u^3 dx dt= 0. \nonumber \end{array} \end{equation} The identity above and (\ref{exp-6}) allow us to conclude that \begin{equation} \begin{array}{l}\label{exp-9} \displaystyle\int_0^\infty x |u_0(x)|^2 dx \\ \leq C\,\{ \displaystyle\int_0^T\!\!\!\int_0^\infty (x + 1) |u|^2 dx dt +\,\displaystyle\int_0^T\!\!\!\int_0^\infty u^2_x dx dt+\displaystyle\int_0^T\!\!\!\int_0^\infty x a(x)|u|^2 dx dt +\\ +\displaystyle\int_0^T\!\!\! \int_0^\infty |u|^3 dx dt\} \leq C\,\{ \displaystyle\int_0^T V(u(t))dt + \int_0^T\!\!\!\int_0^\infty x a(x) u^2 dx dt +\,\displaystyle\int_0^T\!\!\!\int_0^\infty u^2_x dx dt\} \end{array} \end{equation} for some $C > 0$. Claim 5 follows from Claim 4 and (\ref{exp-8})-(\ref{exp-9}). { $\quad$\qd\\} The previous computations give us \eqref{exp-73} (and the exponential decay) when \\ $||u_0||_{L^2} \leq 1$. The general case is proved as follows. Let $u_0\in L^2_{(x+1)dx} \subset L^2$ be such that $||u_0||_{L^2} \leq R$. Since $u\in C(\mathbb{R} ^+ ;L^2(\mathbb{R} ^+))$ and $||u(t)||_{L^2} \leq \alpha e^{-\beta t}||u_0||_{L^2}$, where $\alpha =\alpha (R)$ and $\beta =\beta (R)$ are positive constants, $||u(T)||_{L^2} \leq 1$ if we pick $T$ satisfying $\alpha e^{-\beta T} R< 1$. 
Then, it follows from (\ref{exp-5})-(\ref{exp-5bis}) and \eqref{exp-73} that for some constants $\nu >0, \ c>0,\ C>0$ $$V(u(t+T)) \leq c e^{-\nu t}V(u(T)) \leq c (T||u_0||^2_{L^2} + T||u_0||_{L^2}^{\frac{10}{3}} + V(u_0)) e^{-\nu t},$$ hence $$V(u(t)) \leq C e^{-\nu t}V(u_0),$$ where $C = C(R)$, which concludes the proof when $m=1$. \vglue 0.2 cm \noindent{\bf Induction Hypothesis:} There exist $c > 0$ and $\rho >0$ such that if $V_{m-1}(u_0)\le \rho$, we have \begin{equation} \begin{array}{l} (*)_{m}\qquad V_{m}(u) - V_{m}(u_0)\\ \qquad\qquad \leq - c\{\displaystyle\int_0^T u_x^2(0,t) dt + \int_0^T\int_0^\infty (x + 1)^{m-1}u_x^2 dx dt + \int_0^T\int_0^\infty (x + 1)^{m} a(x) u^2 dx dt \},\\ (**)_{m}\qquad V_{m}(u_0)\\ \qquad\qquad \leq c\, \{\displaystyle\int_0^T u_x^2(0,t) dt + \int_0^T\int_0^\infty (x + 1)^{m-1}u_x^2 dx dt + \int_0^T\int_0^\infty (x + 1)^{m} a(x) u^2 dx dt \}.\nonumber \end{array} \end{equation} By \eqref{exp-72}-\eqref{exp-74}, the induction hypothesis is true for $m=1$. Pick now an index $m\ge 2$ and assume that $d_0, ..., d_{m-2}$ have been constructed so that $(*)_k$ and $(**)_k$ are fulfilled for $1\le k \le m-1$. We aim to prove that for a convenient choice of the constant $d_{m-1}$ in \eqref{pol1}, the properties $(*)_m$ and $(**)_m$ hold true. Let us investigate first $(*)_m$. We multiply the first equation in \eqref{1} by $(x + 1)^m u$ to obtain \begin{equation} \begin{array}{l}\label{exp-10} V_m(u) - V_m(u_0) - d_{m-1}(V_{m-1}(u) - V_{m-1}(u_0))\\ -\displaystyle\frac{m(m-1)(m-2)}{2}\int_0^T\int_0^\infty (x + 1)^{m-3} u^2 dx dt + \displaystyle\frac{1}{2}\displaystyle\int_0^T u_x^2(0,t) dt\\ +\displaystyle\frac{3m}{2}\int_0^T\int_0^\infty (x + 1)^{m-1} u^2_x dx dt - \displaystyle\frac{m}{2}\int_0^T\int_0^\infty (x + 1)^{m-1}u^2 dx dt\\ - \displaystyle\frac{m}{3}\int_0^T\int_0^\infty (x + 1)^{m-1}u^3 dx dt + \displaystyle\int_0^T\int_0^\infty (x + 1)^m a(x) u^2 dx dt = 0. \end{array} \end{equation} The next steps are devoted to estimating the terms in the above identity. First, combining (\ref{2}) and (\ref{exp-7}) we infer the existence of a positive constant $c > 0$ such that \begin{equation}\label{exp-12} \begin{array}{l} \displaystyle\int_0^T\int_0^\infty (x + 1)^{m-1} u^2 dx dt \\ = \displaystyle\int_0^T\int_0^{x_0} (x + 1)^{m-1} u^2 dx dt + \int_0^T\int_{x_0}^\infty (x + 1)^{m-1} u^2 dx dt \\ \leq (x_0 + 1)^{m-1}\displaystyle\int_0^T\int_0^\infty u^2 dx dt + \displaystyle\frac{1}{a_0}\int_0^T\int_0^\infty a(x) (x + 1)^{m-1} u^2 dx dt \\ \leq c\,\{\displaystyle\int_0^T u_x^2 (0,t) dt + \displaystyle\int_0^T\int_0^\infty (x + 1)^{m-1} a(x) u^2 dx dt\} \\ \leq - c\,\{V_{m-1}(u) - V_{m-1}(u_0)\} \end{array} \end{equation} where we used $(*)_{m-1}$. In the same way \begin{equation} \begin{array}{l}\label{exp-13} \displaystyle\int_0^T\int_0^\infty (x + 1)^{m-3} u^2 dx dt \\ \leq \displaystyle\int_0^T\int_0^\infty (x + 1)^{m-1} u^2 dx dt \leq - c\,\{V_{m-1}(u) - V_{m-1}(u_0)\} \end{array} \end{equation} where $c > 0$ is a positive constant. Moreover, assuming $V_{m-1}(u_0) \le \rho$ with $\rho >0$ small enough (so that by exponential decay of $V_{m-1}(u(t))$ we have $\int_0^\infty (x+1)^{m-1}|u(x,t)|^2dx \le 1$ for all $t\ge 0$) and proceeding as in the case $m=1$, we obtain the existence of $\varepsilon > 0$ and $c_\varepsilon > 0$ satisfying \begin{equation}\label{exp-14} \begin{array}{l} \displaystyle\int_0^T\int_0^\infty (x + 1)^{m-1} |u|^3 dx dt \\ \leq \varepsilon\displaystyle\int_0^T\int_0^\infty (x + 1)^{m-1} u^2_x dx dt + c_{\varepsilon}\int_0^T\int_0^\infty (x + 1)^{m-1} u^2 dx dt.
\end{array} \end{equation} Indeed, \begin{eqnarray} &&\displaystyle\int_0^\infty (x + 1)^{m-1} |u|^3 dx \\ &&\leq ||u||_{L^\infty}\int_0^\infty (x + 1)^{m-1} u^2 dx \leq \sqrt{2} ||u_x||_{L^2}^{\frac{1}{2}}||u||_{L^2}^{\frac{1}{2}} \int_0^\infty (x+1)^{m-1}u^2 dx \nonumber \\ &&\leq \varepsilon\displaystyle\int_0^\infty (x + 1)^{m-1} u^2_x dx + c_\varepsilon \int_0^\infty u^2dx + c_{\varepsilon}\left(\int_0^\infty (x + 1)^{m-1} u^2 dx \right)^{2}.\nonumber \end{eqnarray} Then, if we return to (\ref{exp-10}) and take $\varepsilon <9/2$ and $d_{m-1} > 0$ large enough, from (\ref{exp-12})-(\ref{exp-14}) if follows that \begin{equation} \begin{array}{l}\label{exp-15} V_m(u) - V_m(u_0) \\ \leq -c\,\{\displaystyle \int_0^T u^2_x (0,t) dt + \int_0^T\int_0^\infty (x + 1)^{m-1} u^2_x dx dt + \displaystyle \int_0^T\int_0^\infty a(x) (x + 1)^m u^2 dx dt\}\\ +\, \displaystyle \frac{d_{m-1}}{2} (V_{m-1}(u) - V_{m-1}(u_0)). \end{array} \end{equation} This yields $(*)_m$, by $(*)_{m-1}$. Let us now check $(**)_m$. It remains to estimate the terms in the right hand side of (\ref{exp-15}). We multiply the first equation in \eqref{1} by $(T-t)(x + 1)^m u$ to obtain \begin{equation} \begin{array}{l} \displaystyle \frac{T}{2}\int_0^\infty (x + 1)^m u_0^2 dx = \frac{1}{2}\int_0^T\int_0^\infty (x + 1)^m u^2 dx dt\\ -\displaystyle\frac{m(m-1)(m-2)}{2}\int_0^T\int_0^\infty (T-t)(x + 1)^{m - 3}u^2 dx dt + \displaystyle\frac{1}{2}\int_0^T (T-t)u_x^2(0,t) dt \\ +\displaystyle\frac{3m}{2}\int_0^T\int_0^\infty (T - t) (x + 1)^{m-1} u^2_x dx dt - \frac{m}{2}\int_0^T\int_0^\infty (T - t) (x + 1)^{m-1}u^2 dx dt\\ - \displaystyle\frac{m}{3}\int_0^T\int_0^\infty (T - t) (x + 1)^{m-1}u^3 dx dt + \int_0^T\int_0^\infty (T - t)(x + 1)^m a(x) u^2 dx dt. \nonumber \end{array} \end{equation} Then, proceeding as above, we deduce that \begin{equation} \begin{array}{l} \displaystyle\int_0^T (x + 1)^m u_0^2 dx \\ \leq c\,\{\displaystyle\int_0^T\int_0^\infty (x + 1)^{m-1} u^2 dx dt + \int_0^T u_x^2(0,t) dt + \int_0^T\int_0^\infty (x + 1)^{m-1}u_x^2 dx dt\\ + \displaystyle\int_0^T\int_0^\infty (x + 1)^m a(x) u^2 dx dt\}\\ \leq c\{\displaystyle\int_0^T u_x^2(0,t) dt + \int_0^T\int_0^\infty (x + 1)^{m-1}u_x ^2 dx dt + \displaystyle\int_0^T\int_0^\infty (x + 1)^m a(x) u^2 dx dt \}.\nonumber \end{array} \end{equation} Combined to $(**)_{m-1}$, this yields $(**)_m$. This completes the construction of the sequence $\{ V_m\}_{m\ge 1}$ by induction. Let us now check the exponential decay of $V_m$ for $m\ge 2$. It follows from $(*)_m-(**)_{m}$ that $$V_m(u) - V_m(u_0) \leq - c\,V_m(u_0)$$ where $c > 0$, which completes the proof when $V_{m-1}(u_0)\le \rho$. The global result ($V_{m-1} (u_0)\leq R$) is obtained as above for $m=1$. { $\quad$\qd\\} \begin{corollary}\label{c3} Let $a=a(x)$ fulfilling (\ref{2}) and $a\in W^{2,\infty }(0,\infty)$. Then for any $R>0$, there exist positive constants $c=c(R)$ and $\mu = \mu (R)$ such that \begin{equation} ||u_x(t)||_{L^2(\mathbb R ^+)} \leq c \frac{e^{-\mu t}}{\sqrt{t}} ||u_0||_{L^2_{(x+1)dx}} \label{L4} \end{equation} for all $t>0$ and all $u_0\in L^2_{(x+1)dx}$ satisfying $|| u_0 ||_{L^2_{(x+1)dx}}\le R$. \end{corollary} \noindent {\bf Proof.} Pick any $R>0$ and any $u_0\in L^2_{(x+1)dx}$ with $||u_0||_{L^2_{(x+1)dx}}\le R$. By Theorem \ref{dec-pol} there are some constants $C=C(R)$ and $\nu =\nu (R)$ such that \begin{equation} \label{L4bis} || u(t) ||_{L^2_{(x+1)dx}} \le C e^{-\nu t} ||u_0||_{L^2_{(x+1)dx}}. 
\end{equation} Using the multiplier $t(u^2 + 2 u_{xx})$ we obtain after some integrations by parts that for all $0<t_1<t_2$ \begin{eqnarray} &&t_2\int_0^\infty u_x^2(x,t_2) dx + \int_{t_1}^{t_2} tu_x^2 (0,t) dt + 2\int_{t_1}^{t_2}\!\!\!\int_0^\infty t a(x)u_x^2 dx dt + \int_{t_1}^{t_2} t u_{xx}^2 (0,t) dt\nonumber\\ &&=-\frac{1}{3}\int_{t_1}^{t_2}\!\!\!\int_0^\infty u^3 dx dt + \frac{t_2}{3}\int_0^\infty u^3(x,t_2) dx + \int_{t_1}^{t_2}\!\!\!\int_0^\infty t u^3 a(x)dx dt \nonumber\\ &&+\int_{t_1}^{t_2}\!\!\!\int_0^\infty u_x^2 dxdt + \int_{t_1}^{t_2}\!\!\!\int_0^\infty t a''(x) u^2 dx dt. \label{L5} \end{eqnarray} 1. Let us assume first that $T>1$. Applying \eqref{L5} on the time interval $[T-1,T]$, we infer that \begin{equation} \int_0^\infty |u_x(x,T)|^2 dx \leq c\left(\int_{T-1}^T\!\int_0^\infty |u|^3 dx dt + ||u(T)||^3_{L^3(\mathbb R ^+)} + \int_{T-1}^T||u||^2_{H^1(\mathbb R ^+)} dt\right) . \label{L8} \end{equation} To estimate the cubic terms in \eqref{L8}, we use \eqref{exp-5bis} to obtain \begin{eqnarray} &&\int_0^\infty |u_x(x,T)|^2dx \le \varepsilon \int_0^\infty |u_x(x,T)|^2dx \nonumber\\ &&\qquad + c_\varepsilon\big( ||u(T)||_{L^2(\mathbb R ^+)}^{\frac{10}{3}} + \int_{T-1}^T (||u||^2_{H^1(\mathbb R ^+)} + ||u||_{L^2(\mathbb R ^+)}^{\frac{10}{3}})dt\big). \label{L9} \end{eqnarray} Note that by \eqref{L4bis} $$ ||u(T)||_{L^2(\mathbb R ^+)}^{\frac{10}{3}} \le (C e^{-\nu T}||u_0||_{L^2_{(x+1)dx}} )^{\frac{10}{3}} \le C^{\frac{10}{3}} R^{\frac{4}{3}} e^{-\nu T}||u_0||^2_{L^2_{(x+1)dx}} .$$ It follows from \eqref{exp-5}, \eqref{exp-5bis}, and \eqref{L4bis} that \begin{eqnarray} &&\int_{T-1}^T(||u||^2_{H^1(\mathbb R ^+)} + ||u||^{\frac{10}{3}}_{L^2(\mathbb R ^+)})dt \nonumber\\ &&\qquad \le C\left( V_1(u(T-1))+ \int_{T-1}^T \big( ||u||^2_{L^2 (\mathbb R ^+)} +||u||^{\frac{10}{3}}_{L^2(\mathbb R ^+)}\big) dt \right) \nonumber\\ &&\qquad \le C e^{-\nu T}||u_0||^2_{L^2_{(x+1)dx}}\label{L10} \end{eqnarray} where $C=C(R,\nu)$. \eqref{L4} for $T\ge 1$ follows from \eqref{L9} and \eqref{L10} by choosing $\varepsilon <1$ and $\mu < \nu$.\\ 2. Assume now that $T\le 1$. Estimating again the cubic terms in \eqref{L5} (with $[t_1,t_2]=[0,T]$) by using \eqref{exp-5bis}, we obtain \begin{eqnarray} T\int_0^\infty u_x^2(x,T)dx &\le& \frac{T}{3} \left( \varepsilon ||u_x(T)||^2_{L^2(\mathbb R ^+ )} +C_\varepsilon ||u(T)||^{\frac{10}{3}}_{L^2(\mathbb R ^+)} \right) \nonumber\\ &&\quad +C_\varepsilon \int_0^T (||u||^2_{H^1(\mathbb R ^+)} +||u||^{\frac{10}{3}}_{L^2(\mathbb R ^+)}) dt. \label{L6} \end{eqnarray} By \eqref{exp-5}, \eqref{exp-5bis} and \eqref{L4bis}, we have that \begin{equation} \int_0^1\!\!\!\int_0^\infty |u_x|^2dxdt \le C(R) ||u_0||^2_{L^2_{(x+1)dx}} \label{L7} \end{equation} which, combined to \eqref{L6} with $\varepsilon =1$ and \eqref{L4bis}, gives $$ ||u_x(T)||^2_{L^2(\mathbb R ^+)} \le C(R)T^{-1}||u_0||^2_{L^2_{(x+1)dx}} $$ for all $T<1$. This gives \eqref{L4} for $T<1$. { $\quad$\qd\\} Corollary \ref{c3} may be extended (locally) to the weighted space $L^2_{(x+1)^mdx}$ ($m\ge 2$) in following the method of proof of \cite[Theorem 1.1]{pazoto-rosier}. \begin{corollary}\label{c4} Let $a=a(x)$ fulfilling \eqref{2} and $m\ge 2$. Then there exist some constants $\rho >0$, $C>0$ and $\mu >0$ such that $$||u(t)||_{H^1_{(x+1)^mdx}} \leq C \frac{e^{-\mu t}}{\sqrt{t}} ||u_0||_{L^2_{(x+1)^mdx}}$$ for all $t>0$ and all $u_0\in L^2_{(x+1)^mdx}$ satisfying $|| u_0 ||_{L^2_{(x+1)^mdx}}\le \rho$. 
\end{corollary} \noindent {\bf Proof.} We first prove estimates for the linearized problem \begin{eqnarray} &&u_t+u_x+u_{xxx}+au=0 \label{W1}\\ &&u(0,t)=0 \label{W2} \\ &&u(x,0)=u_0(x) \label{W3} \end{eqnarray} and next apply a perturbation argument to extend them to the nonlinear problem \eqref{1}. Let us denote by $W(t)u_0=u(t)$ the solution of \eqref{W1}-\eqref{W3}. By computations similar to those performed in the proof of Theorem \ref{dec-pol}, we have that \begin{equation*} ||W(t)u_0||_{L^2_{(x+1)^mdx}} \le C_0 e^{-\nu t}||u_0||_{L^2_{(x+1)^mdx}}. \end{equation*} We need the\\ {\sc Claim 6.} Let $k\in \{ 0, ... , 3\}$. Then there exists a constant $C_k>0$ such that for any $u_0\in H^k_{ (x+1)^m dx}$, \begin{equation} ||W(t)u_0||_{H^k_{(x+1)^m dx}} \le C_k e^{-\nu t} ||u_0||_{H^k_{(x+1)^m dx}}. \label{L12} \end{equation} Indeed, if $u_0\in H^3_{(x+1)^mdx}$, then $u_t(.,0)\in L^2_{(x+1)^{m-3}dx}$, and since $v=u_t$ solves \eqref{W1}-\eqref{W2}, we also have that $$ ||u_t(.,t)||_{L^2_{ (x+1)^{m-3} dx}} \le C_0 e^{-\nu t} ||u_t(.,0)||_{L^2_{(x+1)^{m-3}dx}}. $$ Using \eqref{W1}, this gives $$ ||W(t)u_0||_{H^3_{(x+1)^m dx}} \le C_3 e^{-\nu t} ||u_0||_{H^3_{(x+1)^mdx}}. $$ This proves \eqref{L12} for $k=3$. The fact that \eqref{L12} is valid for $k=1,2$ follows from a standard interpolation argument, for $H^k_{(x+1)^mdx}=[H^0_{(x+1)^m dx},H^3_{(x+1)^mdx}]_{\frac{k}{3}}$. \begin{lemma}\label{l2} Pick any number $\mu \in (0,\nu)$. Then there exists some constant $C=C(\mu )>0$ such that for any $u_0\in L^2_{(x+1)^mdx}$ \begin{equation} ||W(t)u_0||_{H^1_{(x+1)^mdx}} \le C \frac{e^{-\mu t}}{\sqrt{t}} ||u_0||_{L^2_{(x+1)^mdx}}\cdot \label{W100} \end{equation} \end{lemma} \noindent {\bf Proof.} Let $u_0\in L^2_{(x+1)^m dx}$ and set $u(t)=W(t)u_0$ for all $t\ge 0$. By scaling in \eqref{W1} by $(x+1)^mu$, we see that for some constant $C_K=C_K(T)$ $$ ||u||_{L^2(0,1;H^1_{(x+1)^mdx})} \le C_K ||u_0||_{L^2_{(x+1)^m dx}}\cdot $$ This implies that $u(t)\in H^1_{(x+1)^mdx}$ for a.e. $t\in (0,1)$ which, combined to \eqref{L12}, gives that $u(t)\in H^1_{(x+1)^m dx}$ for all $t>0$. Pick any $T\in (0,1]$. Note that, by \eqref{L12}, \begin{equation}\label{W4} ||u(T)||_{H^1_{(x+1)^mdx}} \leq C_1 e^{-\nu (T-t)} ||u(t)||_{H^1_{(x+1)^mdx}}, \quad \forall t\in (0,T). \end{equation} Integrating with respect to $t$ in \eqref{W4} yields $$[C_1^{-1}||u(T)||_{H^1_{(x+1)^mdx}}]^2\int_0^T e^{2\nu (T-t)}dt \leq \int_0^T ||u(t)||^2_{H^1_{(x+1)^mdx}}dt,$$ and hence \begin{eqnarray*} ||u(T)||_{H^1_{(x+1)^mdx}} &\leq& C_K\,C_1 \sqrt{\frac{2\nu}{e^{2\nu T}-1}} ||u_0||_{L^2_{(x+1)^mdx}}\\ &\leq& \frac{C_K\,C_1}{\sqrt{T}}||u_0||_{L^2_{(x+1)^mdx}} \end{eqnarray*} for $0<T\le 1$. Therefore \begin{equation} \label{W5} ||u(t)||_{H^1_{(x+1)^m dx}} \le C_K\, C_1 e^\nu \frac{e^{-\nu t}}{\sqrt{t}} ||u_0||_{L^2_{(x+1)^mdx}} \qquad \forall t\in (0,1). \end{equation} \eqref{W100} follows from \eqref{W5} and \eqref{L12}, since $\mu < \nu$. { $\quad$\qd\\} Let us return to the proof of Corollary \ref{c4}. Fix a number $\mu\in (0,\nu )$, where $\nu$ is as in \eqref{L12}, and let us introduce the space $$ F=\{u\in C(\mathbb R ^+; H^1_{ (x+1)^m dx}); \quad ||e^{\mu t} u(t)||_{L^\infty (\mathbb R ^+; H^1_{(x+1)^mdx })} <\infty \} $$ endowed with its natural norm. Note that \eqref{1} may be recast in the following integral form \begin{equation} \label{W6} u(t)=W(t)u_0 +\int_0^t W(t-s) N(u(s))\, ds \end{equation} where $N(u)=-uu_x$. 
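Before proceeding, we record an elementary bound which will be used in the estimates below: for every $\mu >0$ and every $t\ge 0$, $$\int_0^t \frac{e^{-\mu (t-s)}}{\sqrt{s}}\, ds \leq 2+\mu^{-1}.$$ Indeed, $\int_0^{\min (t,1)} \frac{e^{-\mu (t-s)}}{\sqrt{s}}\, ds \leq \int_0^1 s^{-\frac{1}{2}}ds =2$, while, if $t>1$, $\int_1^t e^{-\mu (t-s)}\, ds \leq \mu^{-1}$.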
We first show that \eqref{W6} has a solution in $F$ provided that $u_0\in H^1_{(x+1)^mdx}$ with $||u_0||_{H^1_{(x+1)^m dx}}$ small enough. Let $u_0\in H^1_{(x+1)^m dx}$ and $u\in F$ with $||u_0||_{H^1_{(x+1)^m dx}}\le r_0$ and $||u||_F\le R$, $r_0$ and $R$ being chosen later. We introduce the map $\Gamma$ defined by \begin{equation}\label{gama} (\Gamma u)(t)=W(t)u_0+\int_0^t W(t-s)N(u(s))\, ds\qquad \forall t\ge 0. \end{equation} We shall prove that $\Gamma$ has a fixed point in the closed ball $B_R(0)\subset F$ provided that $r_0>0$ is small enough. For the forcing problem \begin{equation} \begin{cases} u_t + u_x + u_{xxx} +au = f\\ u(0,t) = 0 \\ u(x,0) = u_0(x)\nonumber \end{cases} \end{equation} we have the following estimate \begin{eqnarray*} &&\sup_{0\leq t\leq T}||u(t)||^2_{L^2_{(x+1)^m dx}} + \int_0^T \!\!\!\int_0^\infty (x+1)^{m-1} u_x^2 dxdt\\ &&\qquad \leq C\,\left(||u_0||^2_{L^2_{(x+1)^mdx }} + ||f||^2_{L^1(0,T;L^2_{(x+1)^m dx}} \right) . \end{eqnarray*} Let us take $f=N(u)=-uu_x$. Observe that for all $x>0$ \begin{eqnarray*} (x+1) u^2(x) & = & \left\vert \int_0^\infty \frac{d}{dx}[(x+1)u^2(x)]dx \right\vert \\ &\le& C\left( \int_0^\infty (x+1)^m |u|^2 dx + \int_0^\infty (x+1)^{m-1} |u_x|^2 dx \right) \end{eqnarray*} whenever $m\ge 2$. It follows that for some constant $K>0$ \begin{eqnarray*} ||uu_x||^2_{L^2_{(x+1)^m dx}} &\le& ||(x+1) u^2||_{L^\infty (\mathbb R ^+ )} \int_0^\infty (x+1)^{m-1}|u_x|^2dx\\ &\le& K ||u||^4_{H^1_{(x+1)^m dx}}. \end{eqnarray*} Therefore, for any $T>0$, \begin{eqnarray*} &&\sup_{0\le t\le T} ||(\Gamma u)(t)||^2_{L^2_{(x+1)^m dx}} + \int_0^T\!\!\!\int_0^\infty (x+1)^{m-1}|(\Gamma u)_x|^2dxdt \nonumber\\ &&\qquad \le C\left( ||u_0||^2_{L^2_{(x+1)^m dx }} + \big(\int_0^T||u(t)||^2_{H^1_{(x+1)^mdx }}dt \big)^2 \right) <\infty. \end{eqnarray*} Thus $\Gamma u\in C(\mathbb R ^+, L^2_{(x+1)^mdx})\cap L^2_{loc}(\mathbb R ^+; H^1_{(x+1)^m dx})$ with $(\Gamma u)(0)=u_0$. We claim that $\Gamma u\in F$. Indeed, by \eqref{L12}, $$ ||e^{\mu t} W(t) u_0||_{H^1_{(x+1)^m dx}} \le C_1 ||u_0||_{H^1_{(x+1)^m dx}} $$ and for all $t\ge 0$ \begin{eqnarray*} ||e^{\mu t} \int_0^t W(t-s) N(u(s)) ds||_{H^1_{(x+1)^m dx}} &\le& C e^{\mu t} \int_0^t \frac{e^{-\mu (t-s)}}{\sqrt{t-s}} ||N(u(s))||_{L^2_{(x+1)^m dx}} ds\\ &\le& C\int_0^t \frac{e^{\mu s}}{\sqrt{t-s}} K (e^{-\mu s}||u||_F)^2 ds \\ &\le& CK||u||^2_F \int_0^t\frac{e^{-\mu (t-s)}}{\sqrt{s}}ds\\ &\le& CK (2+\mu ^{-1}) ||u||^2_F \end{eqnarray*} where we used Lemma \ref{l2}. Pick $R>0$ such that $CK(2 + \mu ^{-1})R \leq\frac{1}{2}$, and $r_0$ such that $C_1r_0 = \frac{R}{2}$. Then, for $||u_0||_{H^1_{(x+1)^m dx}} \leq r_0$ and $||u||_F \leq R$, we obtain that $$||e^{\mu t}(\Gamma u)(t)||_{H^1_{(x+1)^mdx}} \leq C_1r_0 + CK(2 + \mu ^{-1})R^2 \leq R, \quad t \geq 0.$$ Hence $\Gamma$ maps the ball $B_R(0)\subset F$ into itself. Similar computations show that $\Gamma$ contracts. By the contraction mapping theorem, $\Gamma$ has a unique fixed point $u$ in $B_R(0)$. Thus $||u(t)||_{H^1_{(x+1)^mdx}} \le Ce^{-\mu t}||u_0||_{H^1_{(x+1)^m dx}}$ provided that $||u_0||_{H^1_{(x+1)^m dx}} \le r_0$ with $r_0$ small enough. Proceeding as in the proof of Lemma \ref{l2}, we have that $$ ||u(t)||_{H^1_{(x+1)^mdx}} \le C\frac{e^{-\mu t}}{\sqrt{t}} ||u_0||_{L^2_{(x+1)^m dx}} \qquad \mbox{ for } 0<t<1,$$ provided that $||u_0||_{L^2_{(x+1)^m dx}} \le \rho _0$ with $\rho _0<1$ small enough. The proof is complete with a decay rate $\mu' < \mu $. 
{ $\quad$\qd\\} \begin{corollary} \label{c5} Assume that $a(x)$ satisfies \eqref{2} and that $\partial _x^ka\in L^\infty (\mathbb R ^+)$ for all $k\ge 0$. Pick any $u_0\in L^2_{(x+1)^m dx}$. Then for all $\varepsilon >0$, all $T>\varepsilon$, and all $k\in \{1 ,... , m\}$, there exists a constant $C=C(\varepsilon,T, k)>0$ such that \begin{eqnarray} \int_\varepsilon^\infty (x+1)^{m-k} |\partial_x^k u(x,t)|^2dx \le C||u_0||^2_{L^2_{(x+1)^mdx}}\qquad \forall t\in [\varepsilon ,T]. \end{eqnarray} \end{corollary} \noindent {\bf Proof.} The proof is very similar to the one in \cite[Lemma 5.1]{KF} and so we only point out the small changes. First, it should be noticed that the presence in the KdV equation of the extra terms $u_x$ and $a(x)u$ does not cause any serious trouble. On the other hand, choosing a cut-off function in $x$ of the form $\eta(x)=\psi _0 (x/\varepsilon)$ (instead of $\eta (x)=\psi _0(x-x_0+2)$ as in \cite{KF}), where $\psi _0\in C^\infty (\mathbb R , [0,1])$ satisfies $\psi_0(x)=0$ for $x\le 1/2$ and $\psi_0(x)=1$ for $x\ge 1$, allows us to overcome the fact that $u$ is a solution of \eqref{1} on the half-line only. { $\quad$\qd\\} \subsection{Decay in $L^2_b$} This section is devoted to the exponential decay in $L^2_b$. Our result reads as follows: \begin{theorem} \label{dec-exp} Assume that the function $a=a(x)$ satisfies \eqref{2} with $4 b^3 + b < a_0$. Then, for all $R>0$, there exist $C > 0$ and $\nu > 0$ such that $$||u(t)||_{L^2_b} \leq C e^{-\nu t}||u_0||_{L^2_b}, \qquad t\ge 0,$$ for any solution $u$ given by Theorem \ref{global-exp}, whenever $||u_0||_{L^2_b}\leq R$. \end{theorem} \noindent {\bf Proof.} We introduce the Lyapunov function \begin{equation}\label{exp1} V(u) = \displaystyle\frac{1}{2}\int_0^\infty u^2 e^{2bx} dx + c_b\int_0^\infty u^2 dx, \end{equation} where $c_b$ is a positive constant that will be chosen later. Then, multiplying (\ref{fp30}) by $2c_b$ and adding the resulting identity to (\ref{fp-4}), we obtain \begin{equation}\label{exp3} \begin{array}{l} V(u) - V(u_0) = \displaystyle(4b^3 + b)\int_0^T\int_{x_0}^\infty u^2 e^{2bx} dx dt + \displaystyle(4b^3 + b)\int_0^T\int_0^{x_0} u^2 e^{2bx} dx dt\\ \qquad\qquad\qquad\qquad -\,\,3b\,\displaystyle\int_0^T\int_0^\infty u^2_x e^{2bx} dx dt +\frac{2b}{3}\int_0^T \int_0^\infty u^3 e^{2bx}dx dt\\ \qquad\qquad\qquad\qquad- \,(c_b + \displaystyle\frac{1}{2})\int_0^T u_x^2(0,t)dt - \,\, \displaystyle\int_0^T\int_0^\infty a(x)|u|^2 (e^{2bx} +2c_b) dxdt, \end{array} \end{equation} where $x_0$ is the number introduced in \eqref{2}. On the other hand, since $L^2_b \subset L^2_{(x+1)dx}$, $||u(t)||_{L^2(0,\infty)}$ and $||u_x(t)||_{L^2(0,\infty)}$ decay to zero exponentially. Consequently, from the Moser estimate we deduce that $||u(t)||_{L^\infty (0,\infty)}\rightarrow 0$. We may assume that $(2b/3)||u(t)||_{L^\infty} <\varepsilon =a_0-(4b^3+b)$ for all $t\ge 0$, by changing $u_0$ into $u(t_0)$ for $t_0$ large enough. Therefore \begin{equation}\label{exp3.1} \begin{array}{l} \displaystyle\frac{2b}{3}\int_0^T\int_0^\infty |u|^3 e^{2bx}dx dt\\ \leq \displaystyle\frac{2b}{3}\int_0^T ||u(t)||_{L^\infty(0,\infty)} \left( \int_0^\infty |u|^2 e^{2bx}dx\right) dt \leq \varepsilon \int_0^T\int_0^\infty u^2 e^{2bx}dx dt.
\end{array} \end{equation} So, returning to (\ref{exp3}), the following holds \begin{equation}\label{exp4} \begin{array}{l} V(u) - V(u_0) -(4b^3+b +\varepsilon )\displaystyle\int_0^T\int_0^{x_0} u^2 e^{2bx} dx dt\\ +3b\displaystyle\int_0^T\int_0^\infty u^2_x e^{2bx} dx dt + (c_b + \displaystyle\frac{1}{2})\int_0^T u_x^2(0,t)dt +2 c_b \displaystyle\int_0^T\int_0^\infty a(x)|u|^2dxdt \leq 0. \end{array} \end{equation} Moreover, according to \cite{linares-pazoto} there exists $C > 0$ satisfying \begin{equation} \begin{array}{l} \displaystyle\int_0^T\int_0^{x_0} u^2 e^{2bx} dx dt\\ \leq e^{2bx_0}\displaystyle\int_0^T\int_0^{x_0} u^2 dx dt \leq C\,\{ \displaystyle\int_0^T u^2_x(0,t)dt + \displaystyle\int_0^T\int_0^{\infty} a(x) u^2 dx dt \}\nonumber \end{array} \end{equation} since $L^2_b \subset L^2(\mathbb R ^+)$. Then, choosing $c_b$ sufficiently large, the above estimate and (\ref{exp4}) give us that \begin{equation}\label{exp5} \begin{array}{l} V(u) - V(u_0) \leq - C\,\{\displaystyle\int_0^T u_x^2(0,t)dt + \displaystyle\int_0^T\int_0^\infty a(x) u^2 dx dt\\ \qquad\qquad\qquad\qquad + \displaystyle\int_0^T\int_0^\infty u^2_x e^{2bx} dx dt\}\leq - C\,V(u_0), \end{array} \end{equation} which allows to conclude that $V(u)$ decays exponentially. The last inequality is a consequence of the following results: {\sc Claim 7.} There exists a positive constant $C>0$, such that $$\int_0^T V(u(t)) dt \leq C \int_0^T\int_0^\infty u^2_x e^{2bx} dx dt. $$ First, observe that $$|\int_0^\infty u^2 e^{2bx} dx| = |-\frac{1}{b}\int_0^\infty uu_x e^{2bx}dx| \leq \frac{1}{b} (\int_0^\infty u^2 e^{2bx} dx)^{\frac{1}{2}}(\int_0^\infty u^2_x e^{2bx} dx)^{\frac{1}{2}},$$ therefore, \begin{equation} \begin{array}{l}\label{exp5.1} \displaystyle\int_0^\infty u^2 e^{2bx} dx \leq \frac{1}{b^2}\int_0^\infty u^2_x e^{2bx} dx. \end{array} \end{equation} Then, from (\ref{2}) and (\ref{exp5.1}) we have \begin{equation*} V(u(t)) \le (\frac{1}{2} + c_b) \int_0^\infty u^2 e^{2bx}dx \le (\frac{1}{2} + c_b)b^{-2} \int_0^\infty u_x^2 e^{2bx}dx \end{equation*} which gives us Claim 7. {\sc Claim 8.} $$V(u_0) \leq C\,\{ \displaystyle\int_0^T u_x^2(0,t)dt + \displaystyle\int_0^T\int_0^\infty u_x^2 e^{2bx} dxdt + \displaystyle\int_0^T V(u(t)) dt\},$$ where $C$ is a positive constant. Multiplying the first equation in (\ref{1}) by $(T-t)ue^{2bx}$ and integrating by parts in $(0,\infty)\times (0,T)$, we obtain \begin{equation}\label{exp7} \begin{array}{l} -\displaystyle\frac{T}{2}\int_0^\infty |u_0(x)|^2 e^{2bx} dx + \frac{1}{2}\int_0^T\int_0^\infty |u|^2 e^{2bx} dx dt +\,3b\displaystyle\int_0^T\int_0^\infty (T-t) u^2_x e^{2bx} dx dt\\ + \displaystyle\frac{1}{2} \int_0^T (T - t) u_x^2(0,t)dt - \displaystyle(4b^3 + b)\int_0^T\int_0^\infty (T- t) u^2 e^{2bx} dx dt\\ +\displaystyle\int_0^T\int_0^\infty (T - t) a(x)|u|^2 e^{2bx} dxdt -\frac{2b}{3}\int_0^T \int_0^\infty (T - t) u^3 e^{2bx}dx dt= 0 \end{array} \end{equation} and therefore, \begin{equation}\label{exp8} \begin{array}{l} \displaystyle \int_0^\infty |u_0(x)|^2 e^{2bx} dx \leq C (\displaystyle\int_0^T u_x^2(0,t)dt + \frac{1}{2}\int_0^T\int_0^\infty u^2 e^{2bx} dxdt \\+\,\displaystyle\int_0^T\int_0^\infty u^2_x e^{2bx} dx dt +\displaystyle\int_0^T \int_0^\infty |u|^3 e^{2bx}dx dt). \end{array} \end{equation} Then, combining (\ref{exp5.1}) and (\ref{exp3.1}), we derive Claim 8. \eqref{exp5} follows at once. This proves the exponential decay when $||u(t)||_{L^\infty}\le 3\varepsilon/(2b)$. 
The general case is obtained as in Theorem \ref{dec-pol} { $\quad$\qd\\} \begin{corollary} \label{c6} Assume that the function $a=a(x)$ satisfies \eqref{2} with $4 b^3 + b < a_0$. Then for any $R>0$, there exist positive constants $c=c(R)$ and $\mu = \mu (R)$ such that \begin{equation} ||u_x(t)||_{L^2_b} \leq c \frac{e^{-\mu t}}{\sqrt{t}} ||u_0||_{L^2_b} \end{equation} for all $t>0$ and all $u_0\in L^2_b$ satisfying $||u_0||_{L^2_b}\le R$. \end{corollary} \begin{corollary}\label{c7} Assume that the function $a=a(x)$ satisfies \eqref{2} with $4 b^3 + b < a_0$, and let $s\ge 2$. Then there exist some constants $\rho >0$, $C>0$ and $\mu >0$ such that $$||u(t)||_{ H^s_b } \leq C \frac{e^{-\mu t}}{t^{\frac{s}{2}}} ||u_0||_{ L^2_b }$$ for all $t>0$ and all $u_0\in L^2_b$ satisfying $|| u_0 ||_{L^2_b}\le \rho$. \end{corollary} The proof of Corollary \ref{c6} (resp. \ref{c7}) is very similar to the proof of Corollary \ref{c3} (resp. \ref{c4}), so it is omitted. \section*{Acknowledgments.} This work was achieved while the first author (AP) was visiting Universit\'e Paris-Sud with the support of the Cooperation Agreement Brazil-France and the second author (LR) was visiting IMPA and UFRJ. LR was partially supported by the ``Agence Nationale de la Recherche'' (ANR), Project CISIFS, Grant ANR-09-BLAN-0213-02. \end{document}
\begin{document} \begin{flushright} Publ. Math. Debrecen \textbf{80}(1-2) (2012), 107–126. \\ \href{http://dx.doi.org/110.5486/PMD.2012.4930}{110.5486/PMD.2012.4930} \\[1cm] \end{flushright} \title[]{On $\varphi$-convexity} \author[J.\ Makó]{Judit Makó} \author[Zs. Páles]{Zsolt Páles} \address{Institute of Mathematics, University of Debrecen, H-4010 Debrecen, Pf.\ 12, Hungary} \email{\{makoj,pales\}@science.unideb.hu} \subjclass[2010]{Primary 39B62, 26A51} \keywords{Approximate convexity, Jensen convexity, $\varphi$-convexity} \thanks{This research has been supported by the Hungarian Scientific Research Fund (OTKA) Grant NK81402 and by the TÁMOP 4.2.1./B-09/1/KONV-2010-0007 project implemented through the New Hungary Development Plan co-financed by the European Social Fund, and the European Regional Development Fund.} \begin{abstract} In this paper, approximate convexity and approximate midconvexity properties, called $\varphi$-convexity and $\varphi$-midconvexity, of real valued function are investigated. Various characterizations of $\varphi$-convex and $\varphi$-midconvex functions are obtained. Furthermore, the relationship between $\varphi$-midconvexity and $\varphi$-convexity is established. \end{abstract} \maketitle \section{Introduction} The stability theory of functional inequalities started with the paper \cite{HyeUla52} of Hyers and Ulam who introduced the notion of $\varepsilon$-convex function: If $D$ is a convex subset of a real linear space $X$ and $\varepsilon$ is a nonnegative number, then a function $f:D\to\mathbb{R}$ is called {\it $\varepsilon$-convex} if \Eq{0}{ f(tx+(1-t)y)\le tf(x)+(1-t)f(y) + \varepsilon } for all $x,y\in D$, $t\in[0,1]$. The basic result obtained by Hyers and Ulam states that if the underlying space $X$ is of finite dimension then $f$ can be written as $f=g+h$, where $g$ is a convex function and $h$ is a bounded function whose supremum norm is not larger than $k_n\varepsilon$, where the positive constant $k_n$ depends only on the dimension $n$ of the underlying space $X$. Hyers and Ulam proved that $k_n\le(n(n+3))/(4(n+1))$. Green \cite{Gre52}, Cholewa \cite{Cho84a} obtained much better estimations of $k_n$ showing that asymptotically $k_n$ is not bigger than $(\log_2(n))/2$. Laczkovich \cite{Lac99} compared this constant to several other dimension-depending stability constants and proved that it is not less than $(\log_2(n/2))/4$. This result shows that there is no analogous stability results for infinite dimensional spaces $X$. A counterexample in this direction was earlier constructed by Casini and Papini \cite{CasPap93}. The stability aspects of $\varepsilon$-convexity are discussed by Ger \cite{Ger94e}. An overview of results on $\deltalta$-convexity can be found in the book of Hyers, Isac, and Rassias \cite{HyeIsaRas98a}. If $t=1/2$ and \eq{0} holds for all $x,y\in D$, then $f$ is called an {\it $\varepsilon$-Jensen-convex function}. There is no analogous decomposition for $\varepsilon$-Jensen-convex functions by the counterexample given by Cholewa \cite{Cho84a}. However, one can get Bernstein-Doetsch type regularity theorems which show that $\varepsilon$-Jensen-convexity and local upper boundedness imply $2\varepsilon$-convexity. This result is due to Bernstein and Doetsch \cite{BerDoe15} for $\varepsilon=0$, and to Ng and Nikodem \cite{NgNik93} in the case $\varepsilon\ge0$. For some recent extensions of these results to more general convexity concepts, see \cite{Pal00b}. 
For locally upper bounded $\varepsilon$-Jensen-convex functions one can obtain the existence of an analogous stability constant $j_n$ (defined similarly to $k_n$ above). The sharp value of this stability constant has recently been found by Dilworth, Howard, and Roberts \cite{DilHowRob99} who have shown that $$ j_n=\frac{1}{2}\Bigl([\log_2(n)]+1+\frac{n}{2^{[\log_2(n)]}}\Bigr) \le 1+\frac{1}{2}\log_2(n) $$ is the best possible value for $j_n$. (Here $[\cdot]$ denotes the integer-part function.) The connection between $\varepsilon$-Jensen-convexity and $\varepsilon$-$\mathbb{Q}$-convexity has been investigated by Mrowiec \cite{Mro01}. If $D\subset\mathbb{R}$ and \eq{0} is supposed to be valid for all $x,y\in D$ except a set of 2-dimensional Lebesgue measure zero then one can speak about {\it almost $\varepsilon$-convexity}. Results in this direction are due to Kuczma \cite{Kuc70a} (the case $\varepsilon=0$) and Ger \cite{Ger88c} (the case $\varepsilon\ge0$). In a recent paper \cite{Pal03a}, the second author introduced a more general notion than $\varepsilon$-convexity. Let $\varepsilon$ and $\delta$ be nonnegative constants. A function $f:D \to\mathbb{R}$ is called $(\varepsilon,\delta)$-convex if $$ f\left(t x+(1-t) y \right) \leq t f(x) + (1-t) f(y) + \delta + \varepsilon t (1-t) \|x-y\| $$ for every $x,y\in D$ and $t \in[0,1]$. The main results of the paper \cite{Pal03a} obtain a complete characterization of $(\varepsilon,\delta)$-convexity if $D\subseteq\mathbb{R}$ is an open real interval by showing that these functions are of the form $f=g+h+\ell$, where $g$ is convex, $h$ is bounded with $\|h\|\leq \delta/2$ and $\ell$ is Lipschitzian with Lipschitz modulus Lip$(\ell)\leq\varepsilon$. In the papers \cite{HazPal04}, \cite{HazPal05}, the notions of $(\varepsilon,p)$-convexity and $(\varepsilon,p)$-midconvexity were introduced: If $\varepsilon,p\geq0$ and $t\in[0,1]$, then a function $f:D \to\mathbb{R}$ is called \textit{$(\varepsilon,p,t)$-convex} if $$ f\left(t x+(1-t) y \right) \leq t f(x) + (1-t) f(y) + \varepsilon (t (1-t) \|x-y\|)^p $$ for every $x,y\in D$. If the above property holds for $t=1/2$ and for all $t\in[0,1]$, then we speak about \textit{$(\varepsilon,p)$-midconvexity} and \textit{$(\varepsilon,p)$-convexity}, respectively. The main result in \cite{HazPal05} shows that, for locally upper bounded functions, $(\varepsilon,p)$-midconvexity implies $(c\varepsilon,p)$-convexity for some constant $c$. Another, but related, notion of approximate convexity, the concept of so-called paraconvexity, was introduced by Rolewicz \cite{Rol79b,Rol79a,Rol05b} in the late 70s. It also turned out that Takagi-like functions appear naturally in the investigation of approximate convexity, see, for example, Boros \cite{Bor08}, Házy \cite{Haz07a,Haz07b}, Házy and Páles \cite{HazPal04,HazPal05,HazPal09}, Makó and Páles \cite{MakPal11b,MakPal11a}, Mrowiec, Tabor and Tabor \cite{MroTabTab08}, Tabor and Tabor \cite{TabTab09b,TabTab09a}, Tabor, Tabor, and Żołdak \cite{TabTabZol10a,TabTabZol10b}. The aim of this paper is to offer a unified framework for most of the mentioned approximate convexity notions by introducing the notions of $\varphi$-convexity and $\varphi$-midconvexity and to extend the previously known results to this more general setting. We also introduce the relevant Takagi type functions which appear naturally in the description of the connection of $\varphi$-convexity and $\varphi$-midconvexity.
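To illustrate how the framework developed in the next section covers the notions recalled above, observe for instance that $(\varepsilon,\delta)$-convexity is precisely $\varphi$-convexity, in the sense introduced in Section 2 below, with the particular choice $\varphi(s):=\delta+\frac{\varepsilon}{2}\,s$; indeed, a direct substitution gives $$ t\varphi\big((1-t)\|x-y\|\big)+(1-t)\varphi\big(t\|x-y\|\big) = \delta + \varepsilon\, t(1-t)\|x-y\|. $$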
\section{$\varphi$-convexity and $\varphi$-midconvexity} Throughout the paper $\mathbb{R}$, $\mathbb{R}_+$, and $\mathbb{N}$ denote the sets of real, nonnegative real, and natural numbers, respectively. Assume that $D$ is a nonempty convex subset of a real normed space $X$ and denote $D^+:=\{\|x-y\|:x,y\in D\}$. Let $\varphi:D^+\to \mathbb{R}_+$ be a given function. \Defi{1}{A function $f:D\to \mathbb{R}$ is called \textit{$\varphi$-convex} on $D$, if \Eq{1a} { f(tx+(1-t)y)\leq tf(x)+(1-t)f(y)+t\varphi\big((1-t)\|x-y\|\big)+(1-t)\varphi\big(t\|x-y\|\big) } holds for all $t\in [0,1]$ and for all $x,y \in D.$ If \eq{1a} holds for $t=1/2$, i.e., if, for all $x,y \in D,$ \Eq{1b} { f\bigg(\frac{x+y}{2}\bigg)\leq \frac{f(x)+f(y)}{2}+\varphi\bigg(\Big\|\frac{x-y}{2}\Big\|\bigg), } then we say that $f$ is \textit{$\varphi$-midconvex}. } In the case $\varphi\equiv0$, the meaning of inequalities \eq{1a} and \eq{1b} is the convexity and midconvexity (Jensen-convexity) of $f$, respectively. An important particular case occurs when $\varphi:D^+\to\mathbb{R}_+$ is of the form $\varphi(x):=\varepsilon x^p,$ where $p,\varepsilon\geq0$ are arbitrary constants. Then the function $f$ is called \textit{$(\varepsilon,p)$-convex} and \textit{$(\varepsilon,p)$-midconvex} on $D$, respectively (cf.\ \cite{Pal03a}). The next results describe the structure of $\varphi$-convex functions and $\varphi$-midconvex functions. \Prp{2}{{\color{white}.} \begin{enumerate}[(i)] \item If, for $j=1,\dots,n$, $\varphi_j:D^+\to\mathbb{R}_+$, the function $f_j:D\to \mathbb{R}$ is $\varphi_j$-convex and $c_j$ is a nonnegative number, then $c_1f_1+\cdots+c_nf_n$ is $(c_1\varphi_1+\cdots+c_n\varphi_n)$-convex. In particular, the set of $\varphi$-convex functions on $D$ is convex. \item Let $\{f_{\gamma}:D\to\mathbb{R}\mid\gamma\in \Gamma \}$ be a family of $\varphi$-convex functions. Assume, for all $x\in D$, that $f(x):=\sup_{\gamma\in \Gamma}f_{\gamma}(x)<+\infty$. Then $f$ is $\varphi$-convex. \item Let $\{f_{\gamma}:D\to\mathbb{R}\mid\gamma\in \Gamma \}$ be a downward directed family of $\varphi$-convex functions in the following sense: for all $\gamma_1,\gamma_2\in\Gamma$ and $x_1,x_2\in D$, there exists $\gamma\in\Gamma$ such that $f_\gamma(x_i)\leq f_{\gamma_i}(x_i)$ for $i=1,2$. Assume, for all $x\in D$, that $f(x):=\inf_{\gamma\in \Gamma}f_{\gamma}(x)>-\infty$. Then $f$ is $\varphi$-convex. \end{enumerate}} \begin{proof} (i) is easy to prove. (ii) Let $x,y\in D$ and $t\in[0,1]$. For all $\gamma\in \Gamma,$ we have \Eq{*} { f_{\gamma}(tx+(1-t)y) &\leq tf_{\gamma}(x)+(1-t)f_{\gamma}(y)+ t\varphi\big((1-t)\|x-y\|\big)+(1-t)\varphi\big(t\|x-y\|\big)\\ &\leq tf(x)+(1-t)f(y)+ t\varphi\big((1-t)\|x-y\|\big)+(1-t)\varphi\big(t\|x-y\|\big).} Thus, \Eq{*} { f(tx+(1-t)y)&=\sup_{\gamma\in \Gamma}f_{\gamma}(tx+(1-t)y)\\ &\leq tf(x)+(1-t)f(y)+t\varphi\big((1-t)\|x-y\|\big)+(1-t)\varphi\big(t\|x-y\|\big). } Hence $f$ is $\varphi$-convex. (iii) Let $x,y \in D$ and $t\in [0,1].$ Let $\delta>0$ be arbitrary. Then $f(x)<f(x)+\delta$ and $f(y)<f(y)+\delta.$ Thus there exist $\gamma_1,\gamma_2,$ such that $f_{\gamma_1}(x)<f(x)+\delta$ and $f_{\gamma_2}(y)<f(y)+\delta.$ By the conditions of the proposition, there exists $\gamma\in \Gamma,$ such that \Eq{*} { f_{\gamma}(x)&\leq f_{\gamma_1}(x)<f(x)+\delta,\\ f_{\gamma}(y)&\leq f_{\gamma_2}(y)<f(y)+\delta.
} Then we get \Eq{*} { f(tx+(1-t)y)&\leq f_{\gamma}(tx+(1-t)y)\\ &\leq tf_{\gamma}(x)+(1-t)f_{\gamma}(y)+t\varphi\big((1-t)\|x-y\|\big)+(1-t)\varphi\big(t\|x-y\|\big)\\ &\leq tf(x)+(1-t)f(y)+\deltalta+t\varphi\big((1-t)\|x-y\|\big)+(1-t)\varphi\big(t\|x-y\|\big). } This proves that $f$ is $\varphi$-convex. \end{proof} The following statements concern midconvex funtions, they are analogous to those of \prp{2}. (\mathscr{P})rp{3}{{\mathop{\mbox{\rm co}}lor{white}.} \begin{enumerate}[(i)] \item If, for $j=1,\dots,n$, $\varphi_j:D^+\to\mathbb{R}_+$, the function $f_j:D\to \mathbb{R}$ is $\varphi_j$-midconvex and $c_j$ is a nonnegative number, then $c_1f_1+\cdots+c_nf_n$ is $(c_1\varphi_1+\cdots+c_n\varphi_n)$-midconvex. In particular, the set of $\varphi$-midconvex functions on $D$ is convex. \item Let $\{f_{\gamma}:D\to\mathbb{R}\mid\gamma\in \Gamma \}$ be a family of $\varphi$-midconvex functions. Assume, for all $x\in D$, that $f(x):=\mathop{\mbox{\rm sup}}_{\gamma\in \Gamma}f_{\gamma}(x)<+\infty$. Then $f$ is $\varphi$-midconvex. \item Let $\{f_{\gamma}:D\to\mathbb{R}\mid\gamma\in \Gamma \}$ be a downward directed family of $\varphi$-midconvex functions in the following sense: for all $\gamma_1,\gamma_2\in\Gamma$ and $x_1,x_2\in D$, there exists $\gamma\in\Gamma$ such that $f_\gamma(x_i)\leq f_{\gamma_i}(x_i)$ for $i=1,2$. Assume, for all $x\in D$, that $f(x):=\inf_{\gamma\in \Gamma}f_{\gamma}(x)>-\infty$. Then $f$ is $\varphi$-midconvex. \end{enumerate}} \Defi{H}{A function $f:D\to \mathbb{R}$ is said to be of \textit{$\varphi$-Hölder class} on $D$ or briefly $f$ is called \textit{$\varphi$-Hölder} on $D$ if there exists a nonnegative constant $H$ such that, for all $x,y\in D,$ \Eq{H} { |f(x)-f(y)|\leq H\varphi(\|x-y\|). } The smallest constant $H$ such that \eq{H} holds is said to be the \textit{$\varphi$-Hölder modulus} of $f$ and is denoted by ${H}_{\varphi}(f)$. } A relationship between the $\varphi$-Hölder property and $\varphi$-convexity is obtained in the following result. (\mathscr{P})rp{H}{Let $f:D\to \mathbb{R}$ be of $\varphi$-Hölder class on $D$. Then $f$ is $\big({H}_{\varphi}(f)\cdot\varphi\big)$-convex on $D$.} \begin{proof} Let $x,y\in D$ and let $t\in [0,1].$ Then \Eq{*} { f(tx+(1-t)y)&-tf(x)-(1-t)f(y)\\ &=t\big(f(tx+(1-t)y)-f(x)\big)+(1-t)\big(f(tx+(1-t)y)-f(y)\big)\\ &\leq t{H}_{\varphi}(f)\varphi(\|tx+(1-t)y-x\|)+ (1-t){H}_{\varphi}(f)\varphi(\|tx+(1-t)y-y\|), } which is equivalent to the $\big({H}_{\varphi}(f)\cdot\varphi\big)$-convexity of $f.$ \end{proof} For functions $\varphi:D^+\to \mathbb{R}$, we introduce the following subadditivity-type property: \Defi{IS}{ We say that $\varphi$ is \textit{increasingly subadditive on $D^+$} if, for all $u,v,w\in D^+$ with $u\leq v+w$, \Eq{uvw}{ \varphi(u)\leq\varphi(v)+\varphi(w) } holds.} Clearly, if $\varphi:\mathbb{R}_+\to \mathbb{R}$ is nondecreasing and subadditive then it is also increasingly subadditive on $\mathbb{R}^+$. (\mathscr{P})rp{5}{Assume that $\varphi:D^+\to \mathbb{R}$ is increasingly subadditive. Then, for all $z\in D$, the map $x \mapsto -\varphi(\|x-z\|)$ is of $\varphi$-Hölder class on $D$ with $\varphi$-Hölder modulus $1$, and therefore, it is also $\varphi$-convex on $D$.} \begin{proof} Let $z\in D$ be fixed. To prove the $\varphi$-Hölder property of the map $x \mapsto -\varphi(\|x-z\|)$, let $x,y\in D.$ Then $u=\|x-z\|$, $v=\|x-y\|$, and $w=\|y-z\|$ are elements of $D^+$ such that \eq{uvw} holds. 
Therefore, by the increasing subadditivity, we get \Eq{*} { \varphi(\|x-z\|)-\varphi(\|y-z\|) \leq \varphi(\|x-y\|)+\varphi(\|y-z\|)-\varphi(\|y-z\|)=\varphi(\|x-y\|). } Interchanging $x$ and $y$, we also have $\varphi(\|y-z\|)-\varphi(\|x-z\|)\leq \varphi(\|y-x\|).$ These two inequalities imply \Eq{*} { \big|\varphi(\|x-z\|)-\varphi(\|y-z\|)\big|\leq \varphi(\|x-y\|), } which means that the map $x\mapsto -\varphi(\|x-z\|)$ is $\varphi$-Hölder on $D$ with $\varphi$-Hölder modulus $1$. \end{proof} The next lemma is well known, for completeness we provide its short proof. \mathscr{L}em{4}{Let $0\leq p\leq 1$ be an arbitrary constant. Then the map $x\mapsto x^p$ is subadditive and nondecreasing on $\mathbb{R}_+$ and hence it is also increasingly subadditive on $\mathbb{R}^+$.} \begin{proof} For $s\in ]0,1[$, we have $s\leq s^p.$ Hence \Eq{*}{ 1=s+(1-s)\leq s^p+(1-s)^p. } If $x,y\in\mathbb{R}_+$ then, with $s:=\frac{x}{x+y}\in ]0,1[$, we get \Eq{*}{ 1\leq \mathscr{B}ig(\frac{x}{x+y}\mathscr{B}ig)^p+\mathscr{B}ig(\frac{y}{x+y}\mathscr{B}ig)^p, } which shows the subadditivity of the function $x\mapsto x^p$. \end{proof} \Defi{5}{ Let $0<p\leq 1$ be an arbitrary constant. For all $t\in D^+$ let $\varphi(t):=t^p,$ then if $f:D\to \mathbb{R}$ is a $\varphi$-Hölder function, then it is called (classical) \textit{$p$-Hölder functions}. In this case the $\varphi$-Hölder modulus is called \textit{$p$-Hölder modulus} of $f$ and it is denoted by ${H}_{p}(f)$. } The next corollary gives a relationship between the $p$-Hölder functions and the $p$-convex functions. \Cor{4}{ Let $0<p\leq 1$ be an arbitrary constant and $z\in X$. Then $x \mapsto -\|x-z\|^p$ is of $p$-Hölder class on $X$ with the $p$-Hölder modulus $1$, and therefore, it is $(1,p)$-convex on $X$. } The subsequent theorem, which is one of the main results of this paper, offers equivalent conditions for $\varphi$-convexity. It generalizes the result of \cite[Thm. 1]{Pal03a}. \mathscr{T}hm{2}{Let $D$ be an open real interval and $f:D\to \mathbb{R}.$ Then the following conditions are equivalent. \begin{enumerate}[(i)] \item $f$ is $\varphi$-convex on $D.$ \item For $x,u,y\in D$ with $x<u<y,$ \Eq{2b} { \frac{f(u)-f(x)-\varphi(u-x)}{u-x}\leq \frac{f(y)-f(u)+\varphi(y-u)}{y-u}. } \item There exists a function $a:D\to \mathbb{R}$ such that, for $x,u\in D,$ \Eq{2c} { f(x)-f(u)\geq a(u)(x-u)-\varphi(|x-u|). } \end{enumerate}} \begin{proof} $(\mbox{i}) \mathbb{R}ightarrow (\mbox{ii})$ Assume that $f:D\to \mathbb{R}$ is $\varphi$-convex and let $x<u<y$ be arbitrary elements of $D.$ Choose $t\in [0,1]$ such that $u=tx+(1-t)y,$ that is let $t:=\frac{y-u}{y-x}.$ Then, applying the $\varphi$-convexity of $f$, we get \Eq{*} { f(u)\leq \frac{y-u}{y-x}f(x)+\frac{u-x}{y-x}f(y)+\frac{y-u}{y-x}\varphi\mathscr{B}ig(\frac{u-x}{y-x}(y-x)\mathscr{B}ig) +\frac{u-x}{y-x}\varphi\mathscr{B}ig(\frac{y-u}{y-x}(y-x)\mathscr{B}ig), } which is equivalent to \Eq{*} { (y-u)\big(f(u)-f(x)-\varphi(u-x)\big)\leq (u-x)\big(f(y)-f(u)+\varphi(y-u)\big). } Dividing by $(y-u)(u-x)>0$, we arrive at \eq{2b}. $(\mbox{ii}) \mathbb{R}ightarrow (\mbox{iii})$ Assume that $(\mbox{ii})$ holds and, for $u\in D$, define \Eq{*} { a(u):= \inf_{y\in D,u<y} \frac{f(y)-f(u)+\varphi(y-u)}{y-u}. 
} Then in view of $(\mbox{ii}),$ we get \Eq{2d} { \frac{f(x)-f(u)-\varphi(u-x)}{x-u}\leq a(u)\leq \frac{f(y)-f(u)+\varphi(y-u)}{y-u}, } for all $x<u<y$ in $D.$ The left-hand side inequality in $\eq{2d}$ yields $\eq{2c}$ in the case $x<u,$ and analogously, the right-hand side inequality (with the substitution $y:=x$) reduces to $\eq{2c}$ in the case $x>u.$ The case $x=u$ is obvious. $(\mbox{iii}) \mathbb{R}ightarrow (\mbox{i})$ Let $x,y\in D,$ $t\in [0,1]$, and set $u:=tx+(1-t)y$. Then, by $(\mbox{iii})$, we have \Eq{*} { f(x)-f(u)&\geq a(u)(x-u)-\varphi(|x-u|),\\ f(y)-f(u)&\geq a(u)(y-u)-\varphi(|y-u|). } Multiplying the first inequality by $t$ and the second inequality by $1-t$ and adding up the inequalities so obtained, we get $\eq{1a}.$ \end{proof} \mathbb{R}em{12}{In the vector variable setting (i.e., when $D$ is an open convex subset of a normed space $X$), instead of condition (iii), the following analogous property can be formulated: \begin{enumerate} \item[(iii)$^*$] There exists a function $a:D\to X^*$ such that, for $x,u\in D,$ \Eq{*} { f(x)-f(u)\geq a(u)(x-u)-\varphi(\|x-u\|). } \end{enumerate} One can easily see (by the same argument as above) that (iii)$^*$ implies (i), that is, the $\varphi$-convexity of $f$. The validity of the reversed implication is an open problem.} The next theorem gives another characterization of $\varphi$-convex functions, if $\varphi$ is increasingly subadditive. \mathscr{T}hm{6}{Let $D$ be an open real interval and let $\varphi:D^+\to \mathbb{R}_+$ be increasingly subadditive. Then a function $f:D\to\mathbb{R}$ is $\varphi$-convex if and only if there exist two functions $a:D\to \mathbb{R}$ and $b:D\to \mathbb{R}$ such that \Eq{6a} { f(x)=\mathop{\mbox{\rm sup}}_{u\in D}\big(a(u)x+b(u)-\varphi(|x-u|)\big), } for all $x\in D.$} \begin{proof} Assume that $f$ is $\varphi$-convex. By \thm{2}, there exists a function $a:D\to \mathbb{R}$ such that \Eq{*} { f(x)\geq f(u)+a(u)(x-u)-\varphi(|x-u|), } for all $u,x\in D.$ Define $b(u):=f(u)-a(u)u,$ for $u\in D.$ Thus, for $u,x\in D$, \Eq{*} { f(x)\geq a(u)x+b(u)-\varphi(|x-u|) } and we have equality for $u=x$. Therefore, \eq{6a} holds. Conversely, assume that \eq{6a} is valid for $x\in D$. By \prp{5}, for fixed $u\in D$, the mapping $x\mapsto-\varphi(|x-u|)$ is $\varphi$-convex. The map $x\mapsto a(u)x+b(u)$ is affine, and hence the function $f_u:D\to\mathbb{R}$ defined by $f_u(x):=a(u)x+b(u)-\varphi(|x-u|)$ is $\varphi$-convex for all fixed $u\in D$. Now applying (ii) of \prp{2}, we obtain that $f$ is $\varphi$-convex. \end{proof} \mathbb{R}em{13}{In the vector variable setting (i.e., when $D$ is an open convex subset of a normed space $X$), the following implication can be formulated: If $\varphi:D^+\to \mathbb{R}_+$ is increasingly subadditive and there exist two function $a:D\to X^*$ and $b:D\to \mathbb{R}$ such that, for $x\in D,$ \Eq{*} { f(x)=\mathop{\mbox{\rm sup}}_{u\in D}\big(a(u)(x)+b(u)-\varphi(\|x-u\|)\big), } then $f$ is $\varphi$-convex. The validity of the reversed implication is an open problem.} \Cor{11} {Let $D$ be an open real interval and let $0< p\leq1$ and $\varepsilon\geq0$ be arbitrary constants. Then a function $f:D\to\mathbb{R}$ is $(\varepsilon,p)$-convex if and only if there exist two functions $a:D\to \mathbb{R}$ and $b:D\to \mathbb{R}$ such that \Eq{*} { f(x)=\mathop{\mbox{\rm sup}}_{u\in D}\big(a(u)x+b(u)-\varepsilon|x-u|^p\big), } for all $x\in D.$ } The subsequent theorem offers a sufficient condition for the $\varphi$-midconvexity. 
The result is analogous to the implication (iii)$\mathbb{R}ightarrow$(i) of \thm{2}. Unfortunately, we were not able to obtain the necessity of this condition, i.e., the reversed implication. \mathscr{T}hm{2M}{Let $f:D\to \mathbb{R}$ and assume that, for all $u\in D$, there exists an additive function $A_u: X\to X$ such that \Eq{2Ma} { f(x)-f(u)\geq A_u(x-u)-\varphi(\|x-u\|) \qquad(x\in D). } Then, $f$ is $\varphi$-midconvex.} \begin{proof} Let $x,y\in D$ and set $u:=\frac{x+y}{2}$. Then, by \eq{2Ma}, we have \Eq{*} { f(x)-f(u)&\geq A_u(x-u)-\varphi(\|x-u\|)=A_u\bigg(\frac{x-y}{2}\bigg)-\varphi \bigg(\mathscr{B}ig\|\frac{x-y}{2}\mathscr{B}ig\| \bigg),\\ f(y)-f(u)&\geq A_u(y-u)-\varphi(\|y-u\|)=A_u\bigg(\frac{y-x}{2}\bigg)-\varphi \bigg(\mathscr{B}ig\|\frac{y-x}{2}\mathscr{B}ig\| \bigg). } Adding up the inequalities and multiplying the inequality so obtained by $\frac{1}{2}$, we get $\eq{1b}.$ \end{proof} The following result is analogous to \thm{6}, however it offers only a sufficient condition for $\varphi$-midconvexity. \mathscr{T}hm{6M}{Let $\varphi:D^+\to \mathbb{R}_+$ be increasingly subadditive and let $f:D\to \mathbb{R}$. Assume that, for all $u\in D$, there exists an additive function $A_u:X\to X$ and there exists a function $b:D\to\mathbb{R}$ such that \Eq{6M} { f(x)=\mathop{\mbox{\rm sup}}_{u\in D}\big(A_u(x)+b(u)-\varphi(\|x-u\|)\big), } for all $x\in D.$ Then $f$ is $\varphi$-midconvex.} \begin{proof} Assume that \eq{6M} is valid for $x\in D$. By \prp{5}, for fixed $u\in D$, the mapping $x\mapsto-\varphi(\|x-u\|)$ is $\varphi$-convex, so it is $\varphi$-midconvex. The map $x\mapsto A_u(x)+b(u)$ is affine, and hence the function $f_u:D\to\mathbb{R}$ defined by $f_u(x):=A_u(x)+b(u)-\varphi(\|x-u\|)$ is $\varphi$-midconvex for all fixed $u\in D$. Now applying (ii) of \prp{3}, we obtain that $f$ is $\varphi$-midconvex. \end{proof} Henceforth we search for relations between the local upper-bounded $\varphi$-midconvex functions and $\varphi$-convex functions with the help of the results from the papers \cite{Haz05a} and \cite{HazPal05} by Házy and Páles. Define the function $d_\mathbb{Z}:\mathbb{R}\to\mathbb{R}_+$ by \Eq{*}{ d_\mathbb{Z}(t)=\mathop{\mbox{\rm dist}}(t,\mathbb{Z}):=\min\{|t-k|:k\in\mathbb{Z}\}. } It is immediate to see that $d_\mathbb{Z}$ is 1-periodic and symmetric with respect to $t=1/2$, i.e., $d_\mathbb{Z}(t)=d_\mathbb{Z}(1-t)$ holds for all $t\in\mathbb{R}$. For a fixed $\varphi:\frac12 D^+\to\mathbb{R}_+$, we introduce the Takagi type function $\mathscr{T}_\varphi:\mathbb{R}\times D^+\to\mathbb{R}_+$ by \Eq{TT} { \mathscr{T}_\varphi(t,u):=\sum_{n=0}^{\infty}\frac{\varphi\big(d_\mathbb{Z}(2^{n}t)u\big)}{2^{n}} \qquad((t,u)\in \mathbb{R}\times D^+). } Applying the estimate $0\leq d_\mathbb{Z}\leq\frac12$, one can easily see that $\mathscr{T}_\varphi(t,u)\leq2\varphi\big(\frac{u}{2}\big)$ for $u\in D^+$ whenever $\varphi$ is nondecreasing. For $p\geq0$, we also define the Takagi type function $T_p:\mathbb{R}\to\mathbb{R}_+$ by \Eq{Tp} { T_p(t):=\sum_{n=0}^{\infty}\frac{\big(d_\mathbb{Z}(2^{n}t)\big)^p}{2^{n}} \qquad(t\in\mathbb{R}). } In the case when $\varphi$ is of the form $\varphi(t)=\varepsilon|t|^p$ for some constants $\varepsilon\geq0$ and $p\geq0$, the following identity holds: \Eq{*}{ \mathscr{T}_\varphi(t,u)=\varepsilon T_p(t)u^p \qquad((t,u)\in \mathbb{R}\times D^+). } Observe that $\mathscr{T}_\varphi$ and $T_p$ are also 1-periodic and symmetric with respect to $t=1/2$ in their first variables. 
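For instance, at $t=\frac12$ we have $d_\mathbb{Z}\big(2^n\cdot\frac12\big)=\frac12$ for $n=0$ and $d_\mathbb{Z}\big(2^n\cdot\frac12\big)=0$ for $n\geq1$; hence, provided that $\varphi(0)=0$ (which holds, for example, when $\varphi(t)=\varepsilon|t|^p$ with $p>0$),
\Eq{*}{
  \mathscr{T}_\varphi\big(\frac12,u\big)=\varphi\big(\frac{u}{2}\big) \qquad(u\in D^+),
}
an identity that will be used in the proof of \thm{H7} below. In the particular case $p=1$, the function $T_1$ is the classical Takagi function.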
In order to obtain lower and upper estimates for the functions $\mathscr{T}_\varphi$ and $T_p$ defined above, we need to recall de Rham's classical theorem \cite{Rha57}. By $\mathscr{B}(\mathbb{R},\mathbb{R})$ we denote the space of bounded functions $f:\mathbb{R}\to\mathbb{R}$ equipped with the supremum norm. \mathscr{T}hm{TT}{Let $\psi\in \mathscr{B}(\mathbb{R},\mathbb{R}), a,b\in \mathbb{R},$ $|a|<1.$ Let $F_\psi:\mathscr{B}(\mathbb{R},\mathbb{R})\to \mathscr{B}(\mathbb{R},\mathbb{R})$ be an operator defined as follows \Eq{*} { \big(F_\psi f\big)(t) :=af(bt)+\psi(t) \qquad \mbox{for}\quad f\in \mathscr{B}(\mathbb{R},\mathbb{R}),\,\,t\in \mathbb{R}. } Then \begin{enumerate}[(i)] \item $F_\psi$ is a contraction on $\mathscr{B}(\mathbb{R},\mathbb{R})$ with a unique fixed point $f_\psi$ which is given by the formula \Eq{*}{ f_\psi(t)=\sum_{n=0}^{\infty}a^n\psi(b^nt) \qquad (t\in \mathbb{R}); } \item if $a\geq 0$ and the functions $g,h\in\mathscr{B}(\mathbb{R},\mathbb{R})$ satisfy the inequalities $g\leq F_\psi g$ and $F_\psi h\leq h$, then $g\leq f_\psi\leq h.$ \end{enumerate}} \mathbb{R}em{1}{In view of the first assertion of this theorem, observe that the functions $\mathscr{T}_\varphi(\cdot,u)$ and $T_p$ defined in \eq{TT} and \eq{Tp} are the fixed points of the operator: \Eq{FF}{ \big(F_\psi f\big)(t):=\frac12 f(2t)+\psi(t) \qquad \mbox{for}\quad f\in \mathscr{B}(\mathbb{R},\mathbb{R}),\,\,t\in\mathbb{R} } where $\psi\in\mathscr{B}(\mathbb{R},\mathbb{R})$ is given by $\psi(t):=\varphi\big(d_\mathbb{Z}(t)u\big)$ and $\psi(t):=\big(d_\mathbb{Z}(t)\big)^p$, respectively.} In the results below, we establish upper and lower bounds for $\mathscr{T}_\varphi$ in terms of the function $\tau_\varphi:\mathbb{R}\times D^+\to\mathbb{R}$ defined by \Eq{*} { \tau_{\varphi}(t,u):= d_\mathbb{Z}(t)\varphi\big((1-d_\mathbb{Z}(t))u\big) +(1-d_\mathbb{Z}(t))\varphi\big(d_\mathbb{Z}(t)u\big) \qquad ((t,u)\in \mathbb{R}\times D^+). } Observe that, for $t\in[0,1]$, we have \Eq{*} { \tau_{\varphi}(t,u):= t\varphi\big((1-t)u\big) +(1-t)\varphi\big(tu\big) \qquad (u\in D^+), } which is exactly the error term related to $\varphi$-convexity. (\mathscr{P})rp{7}{Let $\varphi:D^+\to \mathbb{R}_+$ be subadditive. Then, for all $(t,u)\in\mathbb{R}\times D^+,$ \Eq{7a} { \tau_\varphi(t,u)\leq \mathscr{T}_\varphi(t,u). }} \begin{proof} Let $u\in D^+$ be arbitrarily fixed. By the 1-periodicity and symmetry with respect to the point $t=1/2$, it suffices to show that \eq{7a} holds for all $t\in\big[0,\frac12\big]$. If $t=0$ then \eq{7a} is obvious. Now assume that $0<t\leq \frac12.$ Then there exists a unique $k\in \mathbb{N}$ such that $\frac{1}{2^{k+1}}<t\leq\frac{1}{2^{k}}.$ Then, one can easily see that \Eq{kk}{ d_\mathbb{Z}(t)=t,\qquad d_\mathbb{Z}(2t)=2t,\quad\dots,\quad d_\mathbb{Z}(2^{k-1}t)=2^{k-1}t,\qquad d_\mathbb{Z}(2^kt)=1-2^kt. } On the other hand, by the well-known identity $\sum_{j=0}^{k-1} 2^j=2^k-1$, we have \Eq{*} { (1-t)u=tu+2tu+\cdots+2^{k-1}tu+(1-2^kt)u. } Then, by the subadditivity of $\varphi$, and by $t\leq \frac{1}{2^{k}}<\frac{1}{2^{k-1}}<\cdots<\frac{1}{2},$ it follows that \Eq{*} { t\varphi((1-t)u) &\leq t\varphi(tu)+t\varphi(2tu)+\cdots+t\varphi(2^{k-1}tu)+t\varphi((1-2^kt)u)\\ &\leq t\varphi(tu)+\frac{\varphi(2tu)}{2}+\cdots +\frac{\varphi(2^{k-1}tu)}{2^{k-1}}+\frac{\varphi((1-2^kt)u)}{2^k}. 
} Adding $(1-t)\varphi(tu)$ to the previous inequality and using \eq{kk}, we get \Eq{*} { \tau_\varphi(t,u):=t\varphi((1-t)u)+(1-t)\varphi(tu)\leq& \varphi(tu)+\frac{\varphi(2tu)}{2}+\cdots+\frac{\varphi(2^{k-1}tu)}{2^{k-1}}+\frac{\varphi((1-2^kt)u)}{2^k}\\ =& \sum_{j=0}^k \frac{\varphi(d_\mathbb{Z}(2^jt)u)}{2^j}\leq \mathscr{T}_\varphi(t,u). } Which completes the proof of \eq{7a}. \end{proof} (\mathscr{P})rp{8}{Let $\varphi:D^+\to \mathbb{R}_+$ be nondecreasing with $\varphi(s)>0$ for $s>0$ and assume that \Eq{*}{ \gamma_\varphi:=\mathop{\mbox{\rm sup}}_{0<s\in\frac{1}{2}D^+}\frac{\varphi(2s)}{\varphi(s)}<2. } Then, for all $(t,u)\in\mathbb{R}\times D^+$, \Eq{8a} { \mathscr{T}_\varphi(t,u)\leq \frac{2}{2-\gamma_\varphi}\tau_\varphi(t,u) } holds. } \begin{proof} To prove \eq{8a}, we fix an arbitrary element $u\in D^+$. By \rem{1}, the function $\mathscr{T}_\varphi(\cdot,u)$ is the fixed point of the operator \Eq{*} { (F_\varphi f)(t)=\frac{1}{2}f(2t)+\varphi(d_\mathbb{Z}(t)u). } Define the function $g:\mathbb{R}\to\mathbb{R}$ by $g(t):= \frac{2}{2-\gamma_\varphi}\tau_\varphi(t,u)$. In view of \thm{TT}, in order to prove inequality \eq{8a}, it is enough to show that \Eq{8b} { (F_\varphi g)(t)\leq g(t)\qquad(t\in\mathbb{R}). } Since $g$ is periodic by $1$ and symmetric with respect to $t=1/2$, it suffices to prove that \eq{8b} is satisfied on $\big[0,\frac12\big].$ Trivially, $\gamma_\varphi\geq1$, hence the inequality \eq{8b} is obvious for $t=0$ or for $u=0$. Thus, we may assume that $u>0$ and $0<t\leq \frac12.$ By the definition of the constant $\gamma_\varphi$, we have that \Eq{8c} { \varphi(tu)\mathscr{B}ig(1-\frac{\gamma_\varphi}{2}\mathscr{B}ig) \leq \varphi(tu)-\frac{\varphi(2tu)}{2}. } Since $t\leq 2t$ and $1-2t\leq 1-t$ and $\varphi$ is nondecreasing we also have that \Eq{*} { 0\leq t(\varphi((1-t)u)-\varphi((1-2t)u))+t\varphi(2tu)-t\varphi(tu). } Adding $\varphi(tu)-\dfrac{\varphi(2tu)}{2}$ to the previous inequality and also using \eq{8c}, we obtain \Eq{*} { \varphi(tu)\mathscr{B}ig(1-\frac{\gamma_\varphi}{2}\mathscr{B}ig) &\leq \varphi(tu)-\frac{\varphi(2tu)}{2}\\&\leq t\big(\varphi((1-t)u)-\varphi((1-2t)u)\big)+(1-t)\varphi(tu)-\mathscr{B}ig(\frac12-t\mathscr{B}ig)\varphi(2tu). } Rearranging this inequality, we finally obtain that \Eq{*} { \frac{1}{2-\gamma_\varphi}\big(2t\varphi\big((1-2t)u\big)+(1-2t)\varphi\big(2tu\big)\big)+\varphi(tu) \leq \frac{2}{2-\gamma_\varphi}\big(t\varphi\big((1-t)u\big)+(1-t)\varphi\big(tu\big)\big), } which means that \eq{8b} is satisfied for all $0<t\leq \frac12.$ \end{proof} Let $\mu$ be a nonnegative finite Borel measure on $[0,1]$ and let $\mathop{\mbox{\rm sup}}p \mu$ denote the support of $\mu.$ \mathscr{L}em{10}{Let $\mu$ be a nonnegative and nonzero finite Borel measure on $[0,1]$ and let $\chi:]0,\infty[\to \mathbb{R}_+$ be defined by \Eq{*} { \chi(s)=\frac{\int_{[0,1]}(2s)^pd\mu(p)}{\int_{[0,1]}s^pd\mu(p)}. } Then $\chi$ is nondecreasing on $]0,\infty[$ and \Eq{lim}{ \lim_{s\to\infty}\chi(s)=2^{p_0}, } where $p_0:=\mathop{\mbox{\rm sup}}(\mathop{\mbox{\rm sup}}p\mu)$.} \begin{proof}The function $x\mapsto 2^x$ is strictly increasing, hence, for $p,q\in\mathbb{R}$, we have $ (2^p-2^q)(p-q)\geq0$. It suffices to show that $\chi'\geq0$. 
For $s>0,$ we obtain \Eq{*} { \chi'(s)&=\frac{\int_{[0,1]}2^pps^{p-1}d\mu(p) \cdot \int_{[0,1]}s^{p}d\mu(p) - \int_{[0,1]}2^ps^{p}d\mu(p) \cdot \int_{[0,1]}ps^{p-1}d\mu(p)} {\big(\int_{[0,1]}s^{p}d\mu(p)\big)^2}\\ &=\frac{\int_{[0,1]}2^pps^{p-1}d\mu(p) \cdot \int_{[0,1]}s^{q}d\mu(q) + \int_{[0,1]}2^qqs^{q-1}d\mu(q) \cdot \int_{[0,1]}s^{p}d\mu(p)} {2\big(\int_{[0,1]}s^{p}d\mu(p)\big)^2}\\ &\quad- \frac{\int_{[0,1]}2^ps^{p}d\mu(p) \cdot \int_{[0,1]}qs^{q-1}d\mu(q) + \int_{[0,1]}2^qs^{q}d\mu(q) \cdot \int_{[0,1]}ps^{p-1}d\mu(p)} {2\big(\int_{[0,1]}s^{p}d\mu(p)\big)^2}\\ &=\frac{\int_{[0,1]}\int_{[0,1]}(2^p-2^q)(p-q)s^{p+q-1} d\mu(p)d\mu(q)} {2\big(\int_{[0,1]}s^{p}d\mu(p)\big)^2}\geq 0, } which proves that $\chi$ is nondecreasing. Using $\mathop{\mbox{\rm sup}}p\mu\subseteq[0,p_0]$, for $s>0,$ we obtain \Eq{*} { \int_{[0,1]}s^pd\mu(p)=\int_{[0,1]}2^p\mathscr{B}ig(\frac{s}{2}\mathscr{B}ig)^pd\mu(p)\leq 2^{p_0}\int_{[0,1]}\mathscr{B}ig(\frac{s}{2}\mathscr{B}ig)^pd\mu(p), } which proves that $\chi(s)\leq 2^{p_0}$, and hence, $\lim_{s\to\infty}\chi(s)\leq2^{p_0}$. To show that in \eq{lim} the equality is valid, assume that $\lim_{s\to\infty}\chi(s)<2^{p_0}$. Choose $q<q_0<p_0$ so that $\lim_{s\to\infty}\chi(s)\leq 2^{q}$. Then, for all $s>0$, \Eq{*}{ \int_{[0,1]}(2s)^pd\mu(p)\leq2^q\int_{[0,1]}s^pd\mu(p), } i.e., for all $s\geq1$, \Eq{*}{ 0&\leq\int_{[0,1]}(2^q-2^p)s^pd\mu(p)\\ &=\int_{[0,q[}(2^q-2^p)s^pd\mu(p) +\int_{[q,q_0[}(2^q-2^p)s^pd\mu(p)+\int_{[q_0,1]}(2^q-2^p)s^pd\mu(p)\\ &\leq\int_{[0,q[}(2^q-2^p)s^pd\mu(p)+\int_{[q_0,1]}(2^q-2^p)s^pd\mu(p)\\ &\leq\int_{[0,q[}(2^q-2^p)s^pd\mu(p)+\int_{[q_0,1]}(2^q-2^p)s^{q_0}d\mu(p). } Therefore, for $s\geq1$, \Eq{*}{ 0\leq\int_{[0,q[}(2^q-2^p)s^{p-q_0}d\mu(p)+\int_{[q_0,1]}(2^q-2^p)d\mu(p). } The first integrand converges uniformly to $0$ on $[0,q[$ as $s\to\infty$. Thus, by taking the limit $s\to\infty$, we get \Eq{xx}{ 0\leq\int_{[q_0,1]}(2^q-2^p)d\mu(p). } On the other hand, the inequality $q_0<p_0=\mathop{\mbox{\rm sup}}(\mathop{\mbox{\rm sup}}p\mu)$ implies $\mu([q_0,1])>0$ and, obviously, $2^q-2^p<0$ for $p\in[q_0,1]$. Hence the right hand side of \eq{xx} is negative. The contradiction so obtained proves \eq{lim}. \end{proof} (\mathscr{P})rp{9}{Let $\mu$ be a nonnegative and nonzero finite Borel measure on $[0,1]$. Denote $\alpha:=\mathop{\mbox{\rm sup}} D^+$ and $p_0:=\mathop{\mbox{\rm sup}}(\mathop{\mbox{\rm sup}}p\mu)$ and define $\varphi:D^+\to \mathbb{R}_+$ by \Eq{*} { \varphi(s):=\int_{[0,1]}s^pd\mu(p) \quad \mbox{for all} \quad s\in D^+. } Then $\varphi$ is subadditive and nondecreasing, furthermore, \Eq{gp} { \gamma_{\varphi}=\left\{ \begin{array}{lcl} \dfrac{\int_{[0,1]}\alpha^pd\mu(p)}{\int_{[0,1]}(\alpha/2)^pd\mu(p)}, &\text{if} & \alpha<\infty,\\[6mm] 2^{p_0}, &\text{if} & \alpha=\infty \end{array}\right. } and $\gamma_\varphi<2$ if either $\alpha<\infty$ and $\mu$ is not concentrated at the singleton $\{1\}$ or $p_0<1$. In addition, for all $t\in[0,1]$ and $u\in D^+$, \Eq{9A} { \int_{[0,1]}\big[t(1-t)^p+(1-t)t^p\big]u^pd\mu(p) \leq \int_{[0,1]}T_p(t)u^pd\mu(p) } and, provided that $\gamma_\varphi<2$, \Eq{9B} { \int_{[0,1]}T_p(t)u^pd\mu(p) \leq \frac{2}{2-\gamma_\varphi}\int_{[0,1]}\big[t(1-t)^p+(1-t)t^p\big]u^pd\mu(p). }} \begin{proof}It can be easily seen that $\varphi$ is nondecreasing. The subadditivity is a consequence of \lem{4}. 
Let $\alpha<\infty.$ Then, by \lem{10}, the map $s\mapsto\dfrac{\varphi(2s)}{\varphi(s)}= \dfrac{\int_{[0,1]}(2s^p)d\mu(p)}{\int_{[0,1]}s^pd\mu(p)}=\chi(s)$ is nondecreasing on $\frac12D^+,$ so it attains its supremum at $\alpha/2.$ Thus, in this case, $\gamma_\varphi=\dfrac{\int_{[0,1]}\alpha^pd\mu(p)}{\int_{[0,1]}(\alpha/2)^pd\mu(p)}.$ To prove that $\gamma_\varphi<2$, we use the inequality $2^p<2$ for $p\in[0,1]$ to obtain: \Eq{*} { \int_{[0,1]}\alpha^pd\mu(p) =\int_{[0,1]}2^p\mathscr{B}ig(\frac{\alpha}{2}\mathscr{B}ig)^pd\mu(p) <2\int_{[0,1]}\mathscr{B}ig(\frac{\alpha}{2}\mathscr{B}ig)^pd\mu(p). } In the case $\alpha=\infty$, by \lem{10}, we have that $\gamma_\varphi=\lim_{s\to\infty}\chi(s)=2^{p_0}$. Obviously, $\gamma_{\varphi}<2$ if $p_0<1$. The inequalities \eq{9A} and \eq{9B} are immediate consequences of \prp{7} and \prp{8}, respectively. \end{proof} In the case when the measure $\mu$ is concentrated at a singleton $\{p\}$, \prp{9} simplifies to the following result. \Cor{13}{Let $0\leq p\leq1$ be an arbitrary constant. Then, for all $t\in[0,1]$, \Eq{*} { t(1-t)^p+(1-t)t^p\leq T_p(t) } and, provided that $p<1$, \Eq{*} { T_p(t)\leq \frac{2}{2-2^p}\big(t(1-t)^p+(1-t)t^p\big). } } The proof of the next theorem is analogous to that in \cite{HazPal05}. \mathscr{T}hm{10}{Let $\varphi:D^+\to\mathbb{R}_+$ be nondecreasing. If $f : D\to \mathbb{R}$ is $\varphi$-midconvex and locally bounded from above at a point of $D$, then $f$ is locally bounded from above on $D.$} The following theorem generalizes the analogous result of the paper \cite{HazPal05} obtained for $(\varepsilon,p)$-convexity. A similar result was also established by Tabor and Tabor \cite{TabTab09b}, \cite{TabTab09a}. \mathscr{T}hm{H7}{Let $f:D\to \mathbb{R}$ be locally bounded from above at a point of $D$ and let $\varphi:\frac12 D^+\to\mathbb{R}_+$ be nondecreasing. Then $f$ is $\varphi$-midconvex on $D$, i.e., \eq{1b} holds for all $x,y\in D$ if and only if \Eq{H7a} { f(tx+(1-t)y)\leq tf(x)+(1-t)f(y)+\mathscr{T}_\varphi(t,\|x-y\|) } for all $x,y\in D$ and $t\in [0,1].$} \begin{proof} Assume that $f$ is $\varphi$-midconvex on $D$ and locally bounded from above at a point of $D$. From \thm{10}, it follows that $f$ is locally bounded from above at each point of $D.$ Thus $f$ is bounded from above on each compact subset of $D,$ in particular, for each fixed $x,y\in D$, $f$ is bounded from above on $[x,y]=\{tx+(1-t)y\mid t\in[0, 1]\}$. Denote by $K_{x,y}$ a finite upper bound of the function \Eq{11b} { t\mapsto f(tx +(1-t)y)-tf(x)-(1-t)f(y) \qquad (t \in [0, 1]). } We are going to show, by induction on $n$, that \Eq{11c} { f(tx+(1-t)y) \leq tf(x)+(1-t)f(y)+\frac{K_{x,y}}{2^n} +\sum_{j=0}^{n-1}\frac{\varphi\big(d_\mathbb{Z}(2^{j}t)\|x-y\|\big)}{2^{j}} } for all $x, y \in D$ and $t \in [0, 1].$ For $n = 0,$ the statement follows from the definition of $K_{x,y}$ (with the convention that the summation for $j=0$ to $(-1)$ is equal to zero). Now assume that \eq{11c} is true for some $n \in \mathbb{N}.$ Assume that $t\in [0,1/2].$ Then, due to the $\varphi$-midconvexity of $f,$ we get \Eq{*} { f(tx+(1-t)y) = f \mathscr{B}ig(\frac{y+(2tx+(1-2t)y)}{2}\mathscr{B}ig) \leq \frac{f(y)+f(2tx+(1-2t)y)}{2}+\varphi(t\|x-y\|). } On the other hand, by $\eq{11c}$, we get that \Eq{*} { f(2tx+(1-2t)y)\leq 2tf(x)+(1-2t)f(y)+\frac{K_{x,y}}{2^n} +\sum_{j=0}^{n-1}\frac{\varphi\big(d_\mathbb{Z}(2^{j+1}t)\|x-y\|\big)}{2^{j}}. 
} Combining these two inequalities, we obtain \Eq{*} { f(tx+(1-t)y)&\leq tf(x)+ (1-t)f(y)+\frac{1}{2}\bigg(\frac{K_{x,y}}{2^n} +\sum_{j=0}^{n-1}\frac{\varphi\big(d_\mathbb{Z}(2^{j+1}t)\|x-y\|\big)}{2^{j}}\bigg)+\varphi(t\|x-y\|)\\ &= tf(x) + (1-t)f(y)+\frac{K_{x,y}}{2^{n+1}} +\sum_{j=0}^{n}\frac{\varphi\big(d_\mathbb{Z}(2^{j}t)\|x-y\|\big)}{2^{j}}. } In the case $t\in[1/2,1]$, the proof is similar. Thus, \eq{11c} is proved for all $n\in\mathbb{N}$. Finally, taking the limit $n\to \infty$ in \eq{11c}, we get the desired inequality \eq{H7a}. To see that \eq{H7a} implies the $\varphi$-midconvexity of $f$, substitute $t=1/2$ into \eq{H7a} and use the easy-to-see identity $\mathscr{T}_\varphi\big(\frac12,u\big)=\varphi(\frac{|u|}{2})$ ($u\in\mathbb{R}$). \end{proof} The optimality of the error term in \eq{H7a} and the appropriate convexity properties of $\mathscr{T}_\varphi$ have recently been obtained in \cite{MakPal10b}. \mathscr{T}hm{8}{Let $\varphi:D^+\to \mathbb{R}_+$ be nondecreasing with $\varphi(s)>0$ for $s>0$ and assume that $\gamma_\varphi:=\mathop{\mbox{\rm sup}}_{0<s\in\frac12D^+}\frac{\varphi(2s)}{\varphi(s)}<2.$ If $f:D\to \mathbb{R}$ is locally bounded from above a point of $D$ and it is also $\varphi$-midconvex, then $f$ is $\big(\frac{2}{2-\gamma_\varphi}\cdot\varphi\big)$-convex on $D$.} \begin{proof} By \prp{8} and by \thm{H7}, the proof of this theorem is evident. \end{proof} \Cor{9}{Let $\mu$ be a nonnegative and nonzero finite Borel measure on $[0,1]$. Denote $\alpha:=\mathop{\mbox{\rm sup}} D^+$ and $p_0:=\mathop{\mbox{\rm sup}}(\mathop{\mbox{\rm sup}}p\mu)$ and assume that either $\alpha<\infty$ and $\mu$ is not concentrated at the singleton $\{1\}$ or $p_0<1$. Define $\varphi:D^+\to \mathbb{R}_+$ by \Eq{*} { \varphi(s):=\int_{[0,1]}s^pd\mu(p) \quad \mbox{for all} \quad s\in D^+. } If $f:D\to \mathbb{R}$ is locally bounded from above a point of $D$ and it is also $\varphi$-midconvex, then $f$ is $\big(\frac{2}{2-\gamma_{\varphi}}\cdot \varphi\big)$-convex on $D$, where $\gamma_\varphi$ is given by \eq{gp}.} \Cor{10}{Let $0\leq p<1$ and $\varepsilon\geq 0$ be arbitrary constants. If $f:D\to \mathbb{R}$ is locally bounded from above a point of $D$ and it is also $(\varepsilon,p)$-midconvex, then $f$ is $\big(\frac{2\varepsilon}{2-2^p},p\big)$-convex on $D$.} \end{document}
\begin{document} \begin{abstract} We provide explicit motion planners for Euclidean configuration spaces. This allows us to recover some known values of the topological complexity and the Lusternik-Schnirelmann category of these spaces. \end{abstract} \title{Motion planning algorithms for Configuration Spaces} \section{Introduction} The Topological Complexity (TC) of a space $X$ is, in practical terms, the smallest number of local domains in each of which there is a continuous motion planning algorithm. This number turns out to be a homotopy invariant and is denoted by $TC(X)$, see~\cite{farber}. Recall that the space of configurations of $k$ labeled points in $X$ is given by \[ F(X,k) = \{ (x_1,\ldots,x_k)\in X^k: x_i \neq x_j \mbox{ for } i\neq j \} .\] This space turns out to be of crucial importance in algebraic topology and of course in many of its applications in fields such as robotics. The latter connection arises by noticing that a path in $F(\mathbb{R}^n,k)$ is essentially a set of $k$ non-colliding paths in $\mathbb{R}^n$. A related concept is that of the orbit configuration space of a $G$-space $X$, which is defined as \[ F_G(X,k) = \{ (x_1,\ldots,x_k)\in X^k: Gx_i\neq Gx_j \mbox{ for } i\neq j \} ,\] where $Gx= \{ gx: g\in G \}$ is the orbit of $x$. For instance, the space $F_{O(n)}(\mathbb{R}^n,k)$, where $O(n)$ is the linear orthogonal group, is the subspace of $F(\mathbb{R}^n,k)$ consisting of configurations whose components are vectors of different lengths. Note that $F_1(X,k)$, where 1 is the trivial group, is just the space of configurations of $k$ labeled points in $X$. We will construct explicit motion planners on $F(\mathbb{R}^n,k)$ that realize the value of its TC when $n$ is odd; when $n$ is even, this number of motion planners is just one unit off the actual value of its TC. Before embarking on the construction of these planners we will provide a lower bound for the TC of these spaces by constructing a retract of $F(\mathbb{R}^n,k)$ that realizes the value of TC of $F(\mathbb{R}^n,k)$ when $n$ is odd, as well as the TC of some orbit configuration spaces. The values of TC and the Lusternik-Schnirelmann category ($cat$) of $F(\mathbb{R}^n,k)$ had already been computed (\cite{fy},~\cite{fg},~\cite{roth}), and for $n,k\geq 2$ are given by \[ TC(F(\mathbb{R}^n,k) )=\left\{ \begin{array}{cc} 2k -1 & n \mbox{ odd} \\ 2k- 2 & n \mbox{ even} \end{array} \right. , \] and \[ cat(F(\mathbb{R}^n,k)) =k.\] The conditions $n,k\geq 2$ guarantee that the space $F(\mathbb{R}^n,k)$ is a non-contractible, connected space. The contribution of this paper is two-fold: on the one hand we show that no sophisticated machinery is needed to compute $cat$ or TC when $n$ is odd; and on the other we find explicit motion planning algorithms for $F(\mathbb{R}^n, k)$ with $2k-1$ local rules, solving a problem posed in~\cite{farber}. Previous motion planning algorithms described in~\cite{farber2} consisted of $k^2-k+1$ local rules. \section{Retracts for Configuration Spaces} In this section we will show that there exist retracts given by products of spheres sitting in the configuration spaces that we will consider. This will allow us to obtain lower bounds for $cat$ and TC.
\begin{proposition}\label{retraction} If $G$ is a subgroup of $O(m+1)$, then there are retractions \[ (S^m)^{k-1}\hookrightarrow F_G(\mathbb{R}^{m+1}, k) \to (S^m)^{k-1} \] and \[ (S^m)^k\hookrightarrow F_G(\mathbb{R}^{m+1}\setminus \{ 0 \} , k) \to (S^m)^k .\] \end{proposition} \begin{proof} To see these let \[ \alpha_1: (S^m)^{k-1} \to F_G(\mathbb{R}^{m+1} , k) \] \[ (x_1,\ldots,x_{k-1}) \mapsto (0,x_1,x_1+3x_2,\ldots, x_1+3x_2+\cdots+3^{k-2}x_{k-1} ) \] and \[ \beta_1: F_G(\mathbb{R}^{m+1} , k) \to (S^m)^{k-1} \] \[ (y_1,\ldots,y_k) \mapsto (N(y_2- y_1),\ldots, N(y_k - y_{k-1})) ,\] where $N(y) = y/|y|$. Similarly, let us define \[ \alpha_2: (S^m)^k \to F_G(\mathbb{R}^{m+1}\setminus \{ 0 \} , k) \] \[ (x_1,\ldots,x_k) \mapsto (x_1,x_1+3x_2,\ldots, x_1+3x_2+\cdots+3^{k-1}x_k ) \] and \[ \beta_2: F_G(\mathbb{R}^{m+1}\setminus \{ 0 \} , k) \to (S^m)^k \] \[ (y_1,\ldots,y_k) \to (N(y_1),N(y_2- y_1),\ldots, N(y_k - y_{k-1})) \] These maps satisfy $\beta_i\circ\alpha_i = 1$, for $i=1,2$. By definition $\beta_i$ lands in the respective product of spheres. One only needs to show that the map \[ \alpha_2(x_1,\ldots, x_k) = (x_1, x_1 + 3x_2,\ldots, x_1 + 3x_2 + \dots + 3^{k-1}x_k )\] does land in $ F_G(\mathbb{R}^{m+1}\setminus \{ 0\} , k)$. The case of $\alpha_1$ is analogous. It is easy to see that each coordinate of this map is non-zero. If $A(x_1+\cdots+3^{l-1}x_l) = x_1+\cdots+ 3^{l+p-1} x_{l+p}$ for some $A\in G\subseteq O(m+1)$ and $p\geq 1$, then $|x_1+\cdots+3^{l-1}x_l| = |x_1+\cdots+ 3^{l+p-1} x_{l+p}|$. Now, if any two vectors $u$ and $v$ satisfy $|u|= |u+v|$, then $|v| \leq 2|u|$. Thus, if we take $u=x_1+\cdots+3^{l-1}x_l$ and $v= 3^lx_{l+1}+\cdots+ 3^{l+p-1} x_{l+p}$, then \[ 3^l| x_{l+1}+\cdots+ 3^{p-1} x_{l+p}| \leq 2|x_1+\cdots+3^{l-1}x_l|<3^l, \] So $| x_{l+1}+\cdots+ 3^{p-1} x_{l+p}|< 1$. On the other hand, \[ 1\leq \frac{3^{p-1} +1}{2} \leq |3^{p-1} - |x_{l+1}+\cdots+ 3^{p-2} x_{l+p-1}||\leq |x_{l+1}+\cdots+ 3^{p-1} x_{l+p}| ,\] a contradiction. Therefore the vector $(x_1, x_1 + 3x_2,\ldots, x_1 + 3x_2 + \dots + 3^{k-1}x_k )$ does live in the configuration space $F_G(\mathbb{R}^{m+1} \setminus \{ 0 \}, k)$. \end{proof} \begin{remark} The arguments in the proof of the previous result can be used to show that there is also a retraction \[ (S^m)^k\hookrightarrow F(\mathbb{R}^{m+1} \setminus Q_r, k) \to (S^m)^k\] where $Q_r$ is a subset of fixed points with $r$ elements. This follows from the homeomorphism induced between configuration spaces by a homeomorphism between $\mathbb{R}^{m+1} \setminus Q_r$ and $\mathbb{R}^{m+1} \setminus \overline{Q}_r$, where $\overline{Q}_r$ is a subset of $r$ fixed points of norm less than one (note that each component of the map $\alpha_2$ is a vector of norm greater than or equal to 1). The space $F(\mathbb{R}^{m+1} \setminus Q_r, k)$ is related to the collision free motion planning problem in the presence of multiple moving obstacles, see~\cite{fgy}. 
\end{remark} \begin{theorem}\label{tc-inequalities} Suppose that $Q_r$ is a set of $r$ points in $\mathbb{R}^{m+1}$, then \begin{itemize} \item[(1)] \[ cat(F(\mathbb{R}^{m+1}\setminus Q_r, k)) =\left\{\begin{array}{cl} k & \mbox{if } r=0\\ k+1 & \mbox{if } r>0\end{array}\right.\] \item[(2)] \[ TC(F(\mathbb{R}^{2m+1}\setminus Q_r, k)) =\left\{\begin{array}{cl} 2k-1 & \mbox{if } r=0\\ 2k+1 & \mbox{if } r>0\end{array}\right.\] \end{itemize} Suppose that $G$ is a finite subgroup of $O(2m+1)$ acting freely on $\mathbb{R}^{2m+1}\setminus \{ 0 \}$, then \begin{itemize} \item[(3)] \[ cat(F_G(\mathbb{R}^{m+1}\setminus \{ 0\}, k)) =k+1\] \item[(4)] \[TC( F_G(\mathbb{R}^{2m+1} \setminus \{ 0 \}, k)) = 2k +1.\] \end{itemize} \end{theorem} \begin{proof} Notice that there are fibrations of the form \[ \bigvee^{r+k-1} S^m \to F(\mathbb{R}^{m+1}\setminus Q_r,k) \to F(\mathbb{R}^{m+1}\setminus Q_r,k-1) \] and \[ \bigvee^{g(k-1)+1} S^m \to F_G(\mathbb{R}^{m+1}\setminus \{0\},k) \to F_G(\mathbb{R}^{m+1}\setminus \{0\},k-1) \] where $g$ is the order of $G$. An inductive argument shows that the space $F_G(\mathbb{R}^{m+1}\setminus \{0\},k)$ is $(m-1)$-connected and homotopy equivalent to a finite CW-complex of dimension at most $mk$. Similarly, the space $F(\mathbb{R}^{m+1}\setminus Q_r,k)$ is $(m-1)$-connected and homotopy equivalent to a CW complex of dimension at most $m(k-1)$ when $r=0$, and of dimension at most $mk$ when $r>0$. To get a lower bound for $cat$ and TC of these spaces we just need to apply Proposition~\ref{retraction}, recall the fact that if $X$ is dominated by $Y$ then $TC(X)\leq TC(Y)$, and also make use of the known values $TC((S^{2m})^k)=2k+1$ and $cat((S^m)^k) = k+1$. For the upper bounds we can apply the following two properties: $TC(X)\leq 2cat(X)-1$; and if $X$ is a $q$-connected finite CW-complex then $cat(X)\leq \frac{\dim(X)}{q+1} +1$. \end{proof} \begin{remark} The value of $TC(F(\mathbb{R}^{2}\setminus Q_r, k))$ and $TC(F(\mathbb{R}^{3}\setminus Q_r, k))$ had already been computed in~\cite{fgy}, and more recently for any Euclidean space in~\cite{gg}. \end{remark} \section{Partitions on Configuration Spaces} Throughout this section we will be working with the space $F(\mathbb{R}^2,k)$, and we will keep $k$ fixed. A vector of positive integers $A=(a_1,\ldots,a_l)$ such that $\sum a_i =k$ will be called a partition of $k$, and we will call the number $|A| = l$ the number of levels of $A$. We will consider the (reverse) lexicographic order on $\mathbb{R}^2$, that is: $(b_1,b_2)\leq (c_1,c_2)$ if $b_2<c_2$, or if $b_2=c_2$ and $b_1\leq c_1$. Now, if $x=(x_1,\ldots,x_k)\in F(\mathbb{R}^2,k)$ then there is a unique permutation $\sigma \in \Sigma_k$ such that $x_{\sigma(1)} <\cdots < x_{\sigma(k)}$. This permutation will be denoted by $\sigma_x$, and if $\sigma_x =1$ we will say that $x$ is (lexicographically) ordered. Let $\pi_2:\mathbb{R}^2\to \mathbb{R}$ be the projection of the second factor. If $x=(x_1,\ldots,x_k)\in F(\mathbb{R}^2,k)$ is (lexicographically) ordered, then there are positive integers $a_1,\ldots,a_l$ such that \begin{align*} \pi_2(x_1)= \cdots =\pi_2(x_{a_1}) & < \pi_2( x_{a_1+1}) \\ \pi_2(x_{a_1+1})= \cdots =\pi_2(x_{a_1+a_2}) & < \pi_2(x_{a_1+a_2+1}) \\ & \vdots \\ \pi_2(x_{a_1+...+a_{l-2}+1})= \cdots =\pi_2(x_{a_1+...+a_{l-1}}) & < \pi_2(x_{a_1+...+a_{l-1}+1}) \\ \pi_2(x_{a_1+...+a_{l-1}+1})= \cdots =\pi_2(x_{a_1+...+a_l}) \end{align*} These of course define a partition $(a_1,\ldots,a_l)$ of $k$. This partition will be denoted by $A_x$. 
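For instance, the ordered configuration $x=((0,0),(2,0),(1,1))\in F(\mathbb{R}^2,3)$ satisfies $\pi_2(x_1)=\pi_2(x_2)=0<\pi_2(x_3)=1$, so that $A_x=(2,1)$.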
Note that this partition tells us how the configuration $x$ is sitting in $\mathbb{R}^2$ with respect to the $y$-axis. In this context, $|A|=l$ is the number of lines parallel to the $x$-axis on which the configuration $x$ sits. \begin{definition} Given a partition $A=(a_1,\ldots,a_l)$ of $k$ and $x=(x_1,\ldots,x_k)\in F(\mathbb{R}^2,k)$, we will say that $x$ is an $A$-configuration if $A_{\sigma_x(x)} = A$. \end{definition} \begin{definition} Given an $A$-configuration $x=(x_1,\ldots,x_k)\in F(\mathbb{R}^2,k)$, we will say that $x$ has $|A|$ levels and that $x_i$ and $x_j$ are on the same level if $\pi_2(x_i)=\pi_2(x_j)$. \end{definition} \begin{definition} Given a partition $A$ of $k$ and a permutation $\sigma \in \Sigma_k$, we let \[ F_{A,\sigma} =\{ x=(x_1,\ldots,x_k)\in F(\mathbb{R}^2,k): \sigma_x = \sigma \mbox{ and } x \mbox{ is an } A\mbox{-configuration} \}. \] We also define \[ F_A = \bigcup_{\sigma\in \Sigma_k} F_{A,\sigma} \] \end{definition} This latter is precisely the subspace of all $A$-configurations. Note that the subspaces $F_{A,\sigma}$ are disjoint, and that \[ F(\mathbb{R}^2,k) = \bigcup_A F_A . \] \begin{theorem} Suppose that $x \in F(\mathbb{R}^2,k)$ is a limit point of $F_{A,1}$, then $|A_x| \leq |A|$. The equality holds if and only if, $A_x = A$. \end{theorem} \begin{proof} Suppose that $|A|=l$. Note that any element of a sequence of (lexicographically) ordered $A$-configurations converging to $x$ defines a set of increasing real numbers $h_1<\cdots< h_l$ which are determined by the map $\pi_2$. Moreover, this latter set of real numbers depends continuously on the sequence converging to $x$. The position of the levels of $x$ is determined by the limit of these real numbers, and since some of these may collapse into a single real number in the limit, it follows that $|A_x| \leq |A|$. For the second part, it suffices to show that if $|A_x| = |A|$ then $A_x = A$. This can be seen by noticing that the condition $|A_x| = |A|$ tells us that the levels determined by the sequence do not collapse resulting in a smaller number of levels when converging to $x$, and hence $x$ must be an $A$-configuration. \end{proof} Note that since the subspaces $F_{A,\sigma}$ and $F_{A,\mu}$ are homeomorphic for any two permutations $\sigma,\mu$, it follows that the latter result holds for any $F_{A,\sigma}$. \begin{corollary}\label{limit_point} Suppose that $(x,y) \in F(\mathbb{R}^2,k) \times F(\mathbb{R}^2,k)$ is a limit point of $F_{A,\sigma}\times F_{B,\mu}$. Then $|A_x| + |A_y| \leq |A| + |B|$, and the equality holds if and only if $A_x = A$ and $B_y = B$. \end{corollary} \begin{proof} The result follows from the following observation: if $a,b,c,d$ are postive real numbers such that $a\geq c$, $b\geq d$, then $a+b \geq c+d$, and the equality holds if and only if $a=c$ and $b = d$. \end{proof} Recall that a space $X$ is called ENR (Euclidean Neighborhood Retract) if it is homeomorphic to a subspace $X'$ of some $\mathbb{R}^N$ such that $X'$ is a retract of an open neighborhood $X'\subset U\subset \mathbb{R}^N$. Here we recall a definition of TC from~\cite{farber}, Proposition 4.12. \begin{definition} Suppose that $X$ is an ENR. 
The topological complexity of $X$ is the smallest integer $r$ such that there exists a section $s:X\times X \to X^I$ of the double-evaluation map $\epsilon:X^I\to X\times X$ and a splitting $F_1\cup\cdots\cup F_r = X\times X$ such that: \begin{enumerate} \item $F_i\cap F_j =\emptyset$ when $i\neq j$, \item the restriction of $s$ to each $F_i$ is continuous, and \item each $F_i$ is a locally compact subspace of $X\times X$. \end{enumerate} \end{definition} \begin{definition} Given $i\in \{ 2,...,2k\}$, we let \[ F_i = \bigcup_{|A|+|B| = i} F_A \times F_B .\] \end{definition} Note that these $F_i$ are disjoint and they cover $F(\mathbb{R}^2,k)\times F(\mathbb{R}^2,k)$. \begin{example} When $k=3$, the first two $F_i$ are given as follows \begin{align*} F_2 & = F_{(3)} \times F_{(3)}, \\ F_3 &= F_{(3)} \times F_{(1,2)} \cup F_{(3)} \times F_{(2,1)} \cup F_{(1,2)} \times F_{(3)} \cup F_{(2,1)} \times F_{(3)}. \end{align*} \end{example} \begin{lemma}\label{enr} Each $F_{A,\sigma}$, $F_A$ and $F_i$ are locally compact, locally contractible, and hence ENR. \end{lemma} \begin{proof} If $A=(a_1,\ldots,a_l)$ is a partition of $k$, then there is a homeomorphism \[ F_{A,1} \to F(\mathbb{R},a_1)\times\cdots\times F(\mathbb{R},a_l)\times \tilde{F}(\mathbb{R},l) ,\] where $\tilde{F}(\mathbb{R},l) = \{ (h_1,\ldots,h_l)\in \mathbb{R}^l: h_1 < \cdots< h_l \}$. This homeomorphism is obtained by projecting each level onto the $x$-axis, and by projecting each level onto the $y$-axis. Therefore $F_{A,1}$ is homeomorphic to an open set of $\mathbb{R}^{k+l}$, and thus each $F_{A,\sigma}$, $F_A$, and $F_i$ are locally compact, and locally contractible. Finally, a subspace of $\mathbb{R}^N$ is an ENR if and only if, it is locally compact and locally contractible \cite{dold}. \end{proof} \begin{lemma} If $V=F_{A,\sigma} \times F_{B,\mu}\subset F_i$ and $(x,y) \in \overline{V} - V$, then $(x,y)\in F_j$ for some $j<i$. \end{lemma} \begin{proof} Note that if $x$ is a limit point of $F_{A,\sigma}$ and $|A_x|=|A|$, then $x\in F_{A,\sigma}$. Now apply Corollary~\ref{limit_point}. \end{proof} \begin{lemma} Suppose that $U$ and $V$ are disjoint subspaces of $\mathbb{R}^N$ such that $\overline{U}\cap V$ and $U\cap \overline{V}$ are empty. If $f$ is a function from $\mathbb{R}^N$ to $\mathbb{R}^M$ such that $f$ restricted to both $U$ and $V$ is continuous, then $f$ is continuous on $U\cup V$. \end{lemma} These latter two results are crucial since they tell us that if we are able to find a planner on $F_i$ then it will be continuous on $F_i$ as long as it is continuous on each $F_{A_1,\sigma_1}\times F_{A_2,\sigma_2}\subset F_i$. \section{Motion Planners} The following result will be a basic ingredient needed to construct motion planners and its proof will be omitted since it is straightforward. \begin{lemma}\label{distance} Let $\pi_1:\mathbb{R}^2\to \mathbb{R}$ be the projection of the first factor, and define $p:(\mathbb{R}^2)^{2k} \to \mathbb{R}$ by $(x_1,\ldots,x_{2k})\mapsto \max_{1\leq j\leq 2k} \{ \pi_1(x_j)\}$. The map $p$ is continuous, and so is its restriction to $F(\mathbb{R}^2,k)\times F(\mathbb{R}^2,k)$. \end{lemma} We will define a planner $s_i$ on $F_i$ by means of planners $s_{A,\sigma,B,\mu}$ on each $F_{A,\sigma}\times F_{B,\mu} \subset F_i$, where $i=|A|+|B|$. Without loss of generality we will provide a recipe only for $F_{A,1}\times F_{B,1}\subset F_i$: \begin{enumerate} \item Take a pair of configurations $(x,y) \in F_{A,1}\times F_{B,1}\subset F_i$. 
\item Each level of the $A$-configuration $x$ will be connected by means of straight lines to a set of points on a line which is parallel to the $y$-axis and whose $x$-coordinate is given by $p(x,y)+1$. More precisely, if $A_x = (a_1,\ldots,a_l)$ and we let $h_j = \pi_2(x_{a_1+\cdots+a_j})$, then \begin{enumerate} \item $x_1,\ldots,x_{a_1}$ will be mapped onto the line $X=p(x,y)+1$ by means of straight lines, $x_1$ will go to the point on the line at height $h_1 - |x_1 - x_{a_1}|$, $x_2$ will go to the point on the line at height $h_1 - |x_2 -x_{a_1}|$, and so on. \item For the next level, send $x_{a_1+j}$ to the point on the line $X=p(x,y)+1$ at height \[ h_2 - \frac{( a_2 - j )(h_2 - h_1)}{2(a_2 - 1)}, \] for $1\leq j \leq a_2$. \item Proceed as in (b) with each level of $x$. \end{enumerate} This set of paths define a path $Q_x$ in $F(\mathbb{R}^2,k)$ connecting the configuration $x$ to a configuration sitting on the line $X=p(x,y)+1$. \item We proceed with $y$ the same way we did with $x$ to obtain a path $Q_y$ except that in this case we use the line $X=p(x,y)+2$ to avoid possible collisions in the following step. \item Let $\alpha_{(x,y)}$ be the path that connects by means of straight lines (following the order of both $x$ and $y$) the configuration $Q_{x}(1)$ to the configuration $Q_y(1)$. \item The motion planner is determined by the path from $x$ to $y$ given by $Q_x\cdot\alpha_{(x,y)} \cdot Q_y^{-1}$ (concatenation of paths). \end{enumerate} The following picture illustrates the construction of the path $Q_x$ when $A=(3,2,1,2)$. \[ \xygraph{ !{<0cm,0cm>;<2cm,0cm>:<0cm,2cm>::} !{(.4,0) }*+{\bullet^{x_1}} ="x1" !{(1.2,0) }*+{\bullet^{x_2}} ="x2" !{(2,0) }*+{\bullet^{x_3}} ="x3" !{(.8,1) }*+{\bullet^{x_4}} ="x4" !{(1.8,1) }*+{\bullet^{x_5}} ="x5" !{(1.5,1.5) }*+{\bullet^{x_6}} ="x6" !{(1.5,2.5) }*+{\bullet^{x_7}} ="x7" !{(2.5,2.5) }*+{\bullet^{x_8}} ="x8" !{(4.6,0) }*+{\bullet^{h_1}} ="h1" !{(4.5,-.8) }*+{\bullet} ="h12" !{(4.5,-1.6) }*+{\bullet} ="h13" !{(4.6,1) }*+{\bullet^{h_2}} ="h2" !{(4.5,.5) }*+{\bullet} ="h22" !{(4.6,1.5) }*+{\bullet^{h_3}} ="h3" !{(4.6,2.5) }*+{\bullet^{h_4}} ="h4" !{(4.5,2.0) }*+{\bullet} ="h42" !{(4.5,-2) }*+{} ="A" !{(4.5,3.4) }*+{X=p(x,y)+1} !{(4.5,3.2) }*+{} ="B" "A"-"B" "x1":"h13" "x2":"h12" "x3":"h1" "x4":"h22" "x5":"h2" "x6":"h3" "x7":"h42" "x8":"h4" } \] \begin{theorem} The collection $(F_i,s_i)$, $2\leq i\leq 2k$, forms a set of motion planning algorithms for $F(\mathbb{R}^2,k)$. \end{theorem} \section{Higher dimensions and higher TC} For simplicity and convenience we will denote the coordinates of $\mathbb{R}^n$ by $z_1,\ldots,z_n$. We can extend the ideas of partitions and levels to this scenario: given a configuration $x\in F(\mathbb{R}^n,k)$, each level will be a hyperplane perpendicular to the $z_n$--axis containing a number of elements of $x$, and this number of elements is a component of the partition determined by $x$. Now, given $y\in F(\mathbb{R}^n,k)$, we define \[ p(x,y) = \max_{1\leq i,j \leq k} \{ \pi_{1}(x_i), \pi_{1}(y_j) \} ,\] where $\pi_{1}$ is the projection of the first factor, see Lemma \ref{distance}. Then the elements on a level of $x$ are connected to a configuration on the line $L_{x,y}$ which is parallel to the $z_n$-axis and intersects the $z_1$-axis at $p(x,y)+1$. The recipe spelled out for $\mathbb{R}^2$ works for $\mathbb{R}^n$, the only difference is that we will consider the lexicographic order on each level to assign to each point a point on the line $L_{x,y}$ (this is implicit in step (2)(b) for $\mathbb{R}^2$). 
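To illustrate the construction of $Q_x$ with explicit coordinates, take $k=4$ and the ordered configuration $x=((0,0),(1,0),(0,1),(2,1))$, so that $A_x=(2,2)$ with $h_1=0$ and $h_2=1$, and suppose that the second configuration $y$ is such that $p(x,y)=2$. Following steps (2)(a) and (2)(b), the points $x_1,x_2$ of the first level move along straight lines to $(3,-1)$ and $(3,0)$, while the points $x_3,x_4$ of the second level move to $(3,\frac{1}{2})$ and $(3,1)$; the endpoint $Q_x(1)$ is thus a configuration of four points with distinct heights on the line $X=3$.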
\\ The concept of higher topological complexity was developed in~\cite{bgrt}. The basic idea is that in this case the motion planning involves a set of $(n-2)$ prescribed intermediate stages that the system (robot) has to reach. This turns out to be a homotopy invariant and it is denoted by $TC_n$. The case $n=2$ is just that of $TC$. The arguments applied in the proof of Theorem~\ref{tc-inequalities} can be used in this context since the analogous ideas for $TC_n$ are available in~\cite{bgrt}. This allows us to obtain \[ TC_n(F(\mathbb{R}^{2m+1}\setminus Q_r, k)) =\left\{\begin{array}{cl} n(k-1)+1 & \mbox{if } r=0\\ nk+1 & \mbox{if } r>0\end{array}\right. ,\] and if $G$ is a finite subgroup of $O(2m+1)$ acting freely on $\mathbb{R}^{2m+1}\setminus \{ 0 \}$, then \[TC_n( F_G(\mathbb{R}^{2m+1} \setminus \{ 0 \}, k)) = nk+1.\] The value of $TC_n(F(\mathbb{R}^m\setminus Q_r, k))$ was obtained in~\cite{gg}; their arguments, however, are considerably more elaborate. \\ It is also worth mentioning that the motion planning algorithms described in this paper can also be extended to the case of higher topological complexity. It is not hard to see what modifications are needed, and the details are left to the interested reader. As we pointed out in the introduction, the contribution of this paper resides more in the construction of the motion planners. This construction may be of more practical importance than just knowing the value of TC. \section{LS-category} The LS-category of $F(\mathbb{R}^n,k)$ has been computed in \cite{roth} and it is equal to $k$ when $n\geq 2$. We will construct a categorical cover that realizes this value. Consider the sets \[ W_i = \bigcup_{|A|=i} F_A,\] where $i\in\{ 1,\ldots,k\}$ and notice that they are ENR by Lemma \ref{enr}. Now we use the following result from \cite{dold}. \begin{lemma} If $W$ is a subspace of $X$ and both are ENR, then there is an open neighborhood $W\subset U \subset X$ and a retraction $r : U \to W$ such that the natural inclusion $ j : U \to X$ is homotopic to $i\circ r$, where $i$ is the natural inclusion map of $W$ into $X$. \end{lemma} \begin{theorem} The subspaces $W_i$, $1\leq i\leq k$, can be enlarged to define a categorical covering for $F(\mathbb{R}^n,k)$. \end{theorem} \begin{proof} Note that each $W_i$ is contractible in $F(\mathbb{R}^n,k)$ by using the ideas from steps (1) and (2) in the definition of the motion planners and by connecting the resulting configurations on the corresponding line (see step (2)(a)) to a fixed configuration in $\mathbb{R}^n$. A straightforward application of the previous result allows us to enlarge each subspace $W_i$ to an open subset $U_i\subset F(\mathbb{R}^n,k)$ so that $U_i$ is contractible in $F(\mathbb{R}^n,k)$. The fact that $k$ is the smallest possible size of a categorical covering is a consequence of Proposition~\ref{retraction}. Therefore the subsets $U_1,\ldots, U_k$ define a categorical cover of $F(\mathbb{R}^n,k)$. \end{proof} \end{document}
\begin{document} \title{$k$-noncrossing and $k$-nonnesting graphs and fillings of Ferrers diagrams} \begin{abstract} We give a correspondence between graphs with a given degree sequence and fillings of Ferrers diagrams by nonnegative integers with prescribed row and column sums. In this setting, $k$-crossings and $k$-nestings of the graph become occurrences of the identity and the antiidentity matrices in the filling. We use this to show the equality of the numbers of $k$-noncrossing and $k$-nonnesting graphs with a given degree sequence. This generalizes the analogous result for matchings and partition graphs of Chen, Deng, Du, Stanley, and Yan, and extends results of Klazar to $k>2$. Moreover, this correspondence reinforces the links recently discovered by Krattenthaler between fillings of diagrams and the results of Chen et al. \end{abstract} \section{Introduction}\label{sec:intro} Let $G$ be a graph on $[n]$; unless otherwise stated, we allow multiple edges and isolated vertices, but no loops. Two edges $\{i,j\}$ and $\{k,l\}$ are a \emph{crossing} if $i<k<j<l$ and they are a \emph{nesting} if $i<k<l<j$. If we draw the vertices of $G$ on a line and represent the corresponding edges by arcs above the line, crossings and nestings have the obvious geometric meaning. A graph without crossings (respectively, nestings) is called \emph{noncrossing} (resp., \emph{nonnesting}). Klazar~\cite{klazar_crossings} proves the equality between the numbers of noncrossing and nonnesting simple graphs, counted by order, and also between the numbers of noncrossing and nonnesting graphs without isolated vertices, counted by size. The purpose of this paper is to study analogous results for sets of $k$ pairwise crossing and $k$ pairwise nested edges. A \emph{$k$-crossing} is a set of $k$ edges every two of them being a crossing, that is, edges $\{i_1,j_1\},\ldots,$ $\{i_k,j_k\}$ such that $i_1<i_2<\cdots<i_k<j_1<\cdots< j_k$. A \emph{$k$-nesting} is a set of $k$ edges pairwise nested, that is, $\{i_1,j_1\},\ldots,$ $\{i_k,j_k\}$ such that $i_1<i_2<\cdots<i_k<j_k<\cdots< j_1$. A graph with no $k$-crossing is called \emph{$k$-noncrossing} and a graph with no $k$-nesting is called \emph{$k$-nonnesting}. The largest $k$ for which a graph $G$ has a $k$-crossing (respectively, a $k$-nesting) is denoted $\mathrm{cross}(G)$ (resp., $\mathrm{nest}(G)$). The aim of this paper is to show that the number of $k$-noncrossing graphs equals the number of $k$-nonnesting graphs, counted by order, size, and degree sequences. This problem was originally posed by Martin Klazar and we learned of it at the Homonolo 2005 workshop~\cite{homonolo}; the case where the number of vertices of the graph is $2k+1$ was proved by A. P\'or (unpublished). Our main result (Theorem~\ref{thm:kcrossknest}) states that the numbers of $k$-noncrossing and $k$-nonnesting graphs with a given degree sequence are the same. Chen~et~al.~\cite{match_and_part} prove the equality of the numbers of $k$-noncrossing and $k$-nonnesting graphs for two subclasses of graphs, namely for perfect matchings and for partition graphs, also counted by degree sequences (under a different but equivalent terminology). A perfect matching is a graph where each vertex has degree one, and a partition graph is a graph that is a disjoint union of monotone paths, that is, where each vertex has at most one edge to its right and at most one to its left. The latter correspond in a natural way to set partitions, hence the result can be stated in terms of these. 
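For a minimal example, consider the three perfect matchings on $[4]$: the matching with edges $\{1,2\},\{3,4\}$ is both $2$-noncrossing and $2$-nonnesting, the edges $\{1,3\},\{2,4\}$ form a $2$-crossing, and the edges $\{1,4\},\{2,3\}$ form a $2$-nesting. In particular, there are as many $2$-noncrossing as $2$-nonnesting perfect matchings on $[4]$, in agreement with Theorem~\ref{thm:kcrossknest}.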
The paper~\cite{match_and_part} also contains other identities and enumerative results on $k$-noncrossing and $k$-nonnesting matchings and partitions. Krattenthaler~\cite{krattenthaler} deduces most of these from his more general results on fillings of Ferrers diagrams. In this paper we also use fillings of diagrams to prove results about graphs. The difference is that whereas in~\cite{krattenthaler}, and also in~\cite{jonsson}, the results about graphs follow from general theorems by restricting the shape of the diagram, here we show that the results about graphs are in fact equivalent to those about fillings with arbitrary shapes. The main idea is to encode graphs by fillings of Ferrers diagrams in such a way that $k$-crossings and $k$-nestings are easy to recognize. A $k$-noncrossing ($k$-nonnesting) graph becomes a filling of a diagram that avoids the identity (antiidentity) matrix of order $k$, and the degree sequence of the graph can be recovered from the shape of the diagram and the row and column sums of the filling. Then proving that there are as many $k$-noncrossing as $k$-nonnesting graphs is equivalent to showing that the numbers of fillings avoiding these two matrices are the same. This idea generalizes easily to other subgraphs in addition to crossing and nestings, and allows us to show that the study of fillings of Ferrers diagrams with forbidden configurations is equivalent to the study of graphs avoiding certain subgraphs, in the sense defined in Section~\ref{sec:degree}. The structure of the paper is as follows. In Section~\ref{sec:general} we show that the equality of the numbers of $k$-noncrossing and $k$-nonnesting graphs counted by size and order is already in the literature, although not explicitly stated in this form. We introduce some notation on pattern avoiding fillings of Ferrers diagrams and we rephrase results of Krattenthaler~\cite{krattenthaler} and Jonsson and Welker~\cite{pfaffians} in terms of $k$-noncrossing and $k$-nonnesting graphs. Section~\ref{sec:degree} introduces a new correspondence between graphs and fillings of diagrams that keeps track of degree sequences. Then we discuss why, from the perspective of pattern avoiding, graphs and fillings of diagrams are equivalent objects. In particular, showing that the number of $k$-noncrossing graphs with a fixed degree sequence equals the number of such $k$-nonnesting graphs is equivalent to proving a result on fillings of diagrams with restrictions on the row and column sums. Our proof is an adaptation of the one in~\cite{filling_boards} to allow arbitrary entries in the filling, and this is the content of Section~\ref{sec:proof}. We conclude with some remarks and open questions. \section{Fillings of diagrams}\label{sec:general} We start by setting some notation on fillings of Ferrers diagrams. Let $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_k)$ be an integer partition. The \emph{Ferrers diagram of shape $\lambda$} (or simply a \emph{diagram}) is the arrangement of square cells, left-justified and from top to bottom, having $\lambda_i$ cells in row $i$, for $i$ with $1\leq i \leq k$. For a Ferrers diagram $T$ of shape $\lambda$ with rows indexed from top to bottom and columns from left to right, a \emph{filling} $L$ of $T$ consists of assigning a nonnegative integer to each cell of the diagram. We say that a cell is \emph{empty} if it has been assigned the integer $0$. Let $M$ be an $s\times t$ $0-1$ matrix. 
We say that the filling \emph{contains $M$} if there is a selection of rows $(r_1,\ldots,r_s)$ and columns $(c_1,\ldots,c_t)$ of $T$ such that if $M_{i,j}=1$ then the cell $(r_i,c_j)$ of $T$ is nonempty and moreover the cell $(r_s,c_t)$ is in the diagram (in other words, we require that the matrix $M$ is fully contained in $T$). We say that the filling \emph{avoids $M$} if there is no such selection of rows and columns. If a filling $L$ contains $M$, by an \emph{occurrence} of $M$ we mean the set of cells of $T$ that correspond to the $1$'s in $M$. We are mainly concerned with fillings avoiding the identity matrix $I_t$ and the antiidentity matrix $J_t$; the latter is the matrix with $1$'s on the main antidiagonal and $0$'s elsewhere. As an example of these concepts, Figure~\ref{fig:diagram1} shows a filling of a diagram of shape $(7,6,5,4,3,2,1)$ that contains the matrices $I_3$ and $J_2$ but avoids $J_3$. (For clarity, we omit the zeros corresponding to the empty cells.) \begin{figure} \caption{Left: a filling of a diagram that contains $I_3$ and $J_2$ but avoids $J_3$. Right: the graph determined by the filling has a $3$-nesting and several $2$-crossings, but no $3$-crossing.} \label{fig:diagram1} \end{figure} Studying fillings of diagrams avoiding matrices is a natural generalization of pattern avoiding permutations, as explained in~\cite{filling_boards, stankova_west}. We explore two types of connections between graphs and fillings of diagrams. The first one is straightforward, being essentially the adjacency matrix, and it has been used in~\cite{jonsson, krattenthaler} to derive results on $k$-noncrossing maximal graphs and $k$-noncrossing and $k$-nonnesting matchings and partitions. Suppose $G$ is a graph on $[n]$ and consider a diagram $\Delta$ of shape $(n-1,n-2,\ldots,2,1)$. Then if there are $d\geq 0$ edges joining vertices $i$ and $j$, with $i<j$, fill the cell of column $i$ and row $n-j+1$ with $d$. Let this filling of the diagram be called $\Delta(G)$. Obviously the sum of the entries of $\Delta(G)$ is the number of edges of $G$ and the number of vertices is just one plus the number of rows of $\Delta$. If $G$ is a simple graph, then $\Delta(G)$ is a $0-1$ filling. If the edges $\{i_1,j_1\},\ldots,\{i_k,j_k\}$ are a $k$-nesting of $G$, then $\Delta(G)$ contains the $k\times k$ identity matrix $I_k$ in columns $i_1,\ldots,i_k$ and rows $n-j_1+1,\ldots,n-j_k+1$. Similarly, if $G$ contains a $k$-crossing, then $\Delta(G)$ contains the antiidentity matrix $J_k$ (the condition $i_k<j_1$ guarantees that the matrix is indeed contained in the diagram). Krattenthaler~\cite{krattenthaler} derives many of the results of Chen et al.~\cite{match_and_part} for matchings and partitions by specializing to $\Delta(G)$ his results on fillings of diagrams avoiding large identity or antiidentity matrices. A generalization to arbitrary graphs is implicitly contained in his Theorem~13 and the remark after it; we state this result explicitly here (see also the comment after Theorem~\ref{thm:kcrossknest} in the next section). The following is a weaker version of~\cite[Theorem 13]{krattenthaler}. \begin{thm}\label{thm:thm13} For any diagram $T$ and any integer $m$, consider fillings of $T$ with nonnegative integers adding up to $m$. Then for each $k>1$, the number of such fillings that do not contain the identity matrix $I_k$ equals the number of fillings that do not contain the antiidentity matrix $J_k$. \end{thm} By restricting to $T=\Delta$ we immediately get the following.
\begin{cor}\label{cor:kcrossknestmulti} The number of $k$-noncrossing graphs with $n$ vertices and $m$ edges equals the number of $k$-nonnesting such graphs. \end{cor} Actually, from the statement of~\cite[Theorem 13]{krattenthaler} one gets a stronger result. For this we need to introduce weak $k$-crossings and weak $k$-nestings. The edges $\{i_1,j_1\},\ldots,\{i_k,j_k\}$ are a \emph{weak $k$-crossing} if $i_1\leq i_2\leq \cdots \leq i_k < j_1\leq \cdots\leq j_k$; similarly, they are a \emph{weak $k$-nesting} if $i_1\leq i_2\leq \cdots \leq i_k < j_k\leq \cdots\leq j_1$. Let $\mathrm{cross}^*(G)$ (respectively, $\mathrm{nest}^*(G)$) be the largest $k$ for which $G$ has a weak $k$-crossing (resp., weak $k$-nesting). Then the following is a corollary of the full version of~\cite[Theorem 13]{krattenthaler}. \begin{cor}\label{cor:weak} The number of graphs with $n$ vertices and $m$ edges with $\mathrm{cross}(G)=r$ and $\mathrm{nest}^*(G)=s$ equals the number of such graphs with $\mathrm{cross}^*(G)=s$ and $\mathrm{nest}(G)=r$. \end{cor} Ideally, one would like to have an analogous result proving the symmetry of the distribution of $\mathrm{cross}(G)$ and $\mathrm{nest}(G)$ for all graphs. This is known to be true for matchings and partition graphs~\cite[Theorem 1 and Corollary 4]{match_and_part}; actually, for these graphs weak crossings (respectively, weak nestings) are the same as crossings (resp., nestings). For simple graphs, the result would follow if Problem~2 in~\cite{krattenthaler} has a positive answer for the diagram $\Delta$. The bijection used to prove~\cite[Theorem 13]{krattenthaler} does not preserve the values of the entries of the filling, so we cannot deduce from it the corresponding result for simple graphs. However, this follows from a result of Jonsson and Welker. They deal with fillings not of diagrams, but of stack polyominoes. A \emph{stack polyomino} is obtained by taking a diagram, reflecting it through the vertical axis, and gluing it to another (unreflected) diagram. The content of a stack polyomino is the multiset of the lengths of its columns. The definitions of fillings and containment of matrices in stack polyominoes are analogous to those for diagrams. The following is Corollary~6.5 of~\cite{pfaffians}. (The particular case where $m$ below is maximal was proved in~\cite{jonsson}.) \begin{thm}\label{thm:pfaffians} The number of $0-1$ fillings of a stack polyomino with $m$ nonzero entries that avoid the matrix $I_k$ depends only on the content of the polyomino and not on the ordering of the columns. \end{thm} By a simple reflection argument we get the following for the triangular diagram $\Delta$ of shape $(n-1,n-2,\ldots,2,1)$: the number of $0-1$ fillings of $\Delta$ with $m$ nonzero entries that avoid the matrix $I_k$ equals the number of those that avoid the matrix $J_k$. Hence we have the following in terms of graphs. \begin{cor}\label{cor:simple} The number of $k$-noncrossing simple graphs on $n$ vertices and $m$ edges equals the number of such $k$-nonnesting simple graphs. \end{cor} In the next section we deal with graphs with a fixed degree sequence. For this we need to consider diagrams of arbitrary shapes, since the correspondence between graphs and fillings is no longer restricted to the triangular diagram $\Delta$.
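As a quick illustration of the encoding $\Delta(G)$ and of containment of a matrix in a filling, the following Python sketch (our own illustrative code, with hypothetical helper names) builds $\Delta(G)$ from an edge list and tests containment of $I_k$ and $J_k$ by brute force.
\begin{verbatim}
# Illustrative sketch: the staircase filling Delta(G) and brute-force containment.
from itertools import combinations
from collections import Counter

def delta_filling(n, edges):
    """Return (shape, filling) for Delta(G); edges are pairs (i, j) with i < j."""
    shape = [n - 1 - r for r in range(n - 1)]       # row r+1 has n-1-r cells
    filling = Counter()
    for i, j in edges:
        filling[(n - j + 1, i)] += 1                # row n-j+1, column i
    return shape, dict(filling)

def contains(shape, filling, M):
    """Does the filling contain the 0-1 matrix M (given as a list of rows)?"""
    s, t = len(M), len(M[0])
    for rows in combinations(range(1, len(shape) + 1), s):
        for cols in combinations(range(1, max(shape) + 1), t):
            if cols[-1] > shape[rows[-1] - 1]:      # corner cell must lie in the diagram
                continue
            if all(filling.get((rows[a], cols[b]), 0) > 0
                   for a in range(s) for b in range(t) if M[a][b] == 1):
                return True
    return False

def identity(k):
    return [[1 if a == b else 0 for b in range(k)] for a in range(k)]

def anti_identity(k):
    return [[1 if a + b == k - 1 else 0 for b in range(k)] for a in range(k)]

# The 3-nesting {1,6},{2,5},{3,4} shows up as I_3 in Delta(G), and there is no J_3.
shape, L = delta_filling(6, [(1, 6), (2, 5), (3, 4)])
print(contains(shape, L, identity(3)), contains(shape, L, anti_identity(3)))  # True False
\end{verbatim}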
\section{Degree sequences and fillings with prescribed row and column sums}\label{sec:degree} The \emph{left-right degree sequence} of a graph on $[n]$ is the sequence $\left((l_i , r_i)\right)_{1\leq i \leq n}$, where $l_i$ (resp., $r_i$) is the left (resp., right) degree of vertex $i$; by the left (resp., right) degree of $i$ we mean the number of edges that join $i$ to a vertex $j$ with $j<i$ (resp., $j>i$). Obviously $l_i+r_i$ is the degree of vertex $i$ (loops are not allowed). For instance, if $r_i\leq 1$ and $l_i\leq 1$ for all $i$, then the graph is either a matching or a partition graph, perhaps with some isolated vertices. If a graph $G$ has $D$ as its left-right degree sequence, we say that $G$ is a graph \emph{on $D$}. A useful way of thinking of left-right degree sequences is drawing for each vertex $i$, $l_i$ half-edges going left and $r_i$ half-edges going right. Then a graph is just a way of matching these half-edges; recall that we allow multiple edges. For completeness we mention here that a sequence $\left((l_i, r_i)\right)_{1\leq i \leq n}$ is the left-right degree sequence of some graph on $[n]$ if and only if \begin{equation}\label{eq:deg} \sum_{i=1}^n l_i = \sum_{i=1}^{n} r_i \mbox{ and } \sum_{i=1}^k l_i \leq \sum_{i=1}^{k-1} r_i, \quad \forall k\in [n].\end{equation} This and the next section are devoted to proving that for each left-right degree sequence $D$ there are as many $k$-noncrossing graphs on $D$ as $k$-nonnesting. We stress that the fact that we allow multiple edges is essential, since if we restrict to simple graphs the result does not hold. For instance, one can check that there is one simple nonnesting graph with left-right degree sequence $(0,2),(0,2),(1,1),(2,0),(2,0)$, but no such noncrossing simple graph. However, it turns out that there is a bijection between $k$-noncrossing and $k$-nonnesting simple graphs that preserves left degrees (or right degrees, but not both simultaneously). This follows from the following result of Rubey~\cite[Theorem 4.2]{rubey} applied to the filling $\Delta(G)$ by noting that the sum of the entries in row $n-j+1$ of $\Delta(G)$ corresponds to the left degree of vertex $j$. Rubey's result is for moon polyominoes, but we state the version for stack polyominoes. (A weaker version of this result was proved by Jonsson~\cite[Corollary 26]{jonsson}.) \begin{thm}\label{thm:cor26} For any stack polyomino $\Lambda$ with $s$ rows and for any sequence $(d_1,\ldots, d_s)$ of nonnegative integers, the number of $0-1$ fillings of $\Lambda$ that avoid $I_k$ and have $d_i$ nonzero entries in row $i$ depends only on the content of $\Lambda$ and not on the ordering of the columns. \end{thm} By the same reflection argument as at the end of Section~\ref{sec:general} we obtain the following corollary. \begin{cor}\label{cor:simplemax} Let $(l_2,\ldots,l_n)$ be a sequence of nonnegative integers. Then the number of $k$-noncrossing simple graphs on $[n]$ with vertex $i$ having left degree $l_i$ for $2\leq i\leq n$ is the same as the number of such $k$-nonnesting simple graphs. \end{cor} The main result of this paper says that by allowing multiple edges we can simultaneously fix left and right degrees. \begin{thm}\label{thm:kcrossknest} For any left-right degree sequence $D$, the number of $k$-noncrossing graphs on $D$ equals the number of $k$-nonnesting graphs on $D$. 
\end{thm} This result generalizes to arbitrary graphs some of the results of~\cite{match_and_part}, which are stated only for partition graphs but which also take degree sequences into account (with different terminology). To approach Theorem~\ref{thm:kcrossknest} we could again use the filling $\Delta(G)$ of the previous section and fix the sums of the entries in each row and column. In this setting Theorem~\ref{thm:kcrossknest} is again implicitly included in the remark after~\cite[Theorem 13]{krattenthaler} by keeping track of the changes in the partitions involved in the proof of that theorem. (I am grateful to Christian Krattenthaler for this observation.) However, our approach consists of encoding graphs not by the triangular diagram $\Delta$ but by an arbitrary diagram whose shape depends on the degree sequence. By doing this we show that not only can results on $k$-noncrossing and $k$-nonnesting graphs be deduced from results on fillings of Ferrers diagrams avoiding $I_k$ and $J_k$, but that these two families of results are in fact completely equivalent. Moreover, we have an analogous assertion for arbitrary matrices (see Theorem~\ref{thm:equivalence}). We start with an easy lemma that follows immediately from the fact that the edges in a $k$-crossing or a $k$-nesting must be vertex-disjoint. \begin{lemma}\label{lem:split} The number of $k$-noncrossing (resp., $k$-nonnesting) graphs with left-right degree sequence $$(l_1, r_1),\ldots,(l_n,r_n)$$ is the same as that of those with left-right degree sequence $$(l_1,r_1),\ldots,(l_{i-1},r_{i-1}),(l_i,0),(0,r_i),(l_{i+1},r_{i+1}), \ldots,(l_n,r_n), $$ for any $i$ with $1\leq i \leq n$. \end{lemma} Hence it is enough to prove Theorem~\ref{thm:kcrossknest} for left-right degree sequences whose elements $(l_i,r_i)$ are such that either $l_i$ or $r_i$ is $0$. We call these graphs \emph{left-right} graphs; note though that we do not require that the degrees of the vertices alternate between right and left. The case where both left and right degrees are $0$ corresponds to an isolated vertex. \comment{Even if isolated vertices are easy to deal with, allowing them makes some statements more homogeneous, hence we keep them.} We now describe a bijection between left-right graphs and fillings of Ferrers diagrams of arbitrary shape; this bijection has the property that the left-right degree sequence of the graph can be recovered from the shape and filling of the diagram. Let $G$ be a left-right graph. If the degree of vertex $i$ is of the form $(0,r_i)$ we say that $i$ is \emph{opening}, and if it is of the form $(l_i,0)$ we say that $i$ is \emph{closing}. An isolated vertex is both opening and closing. Let $i_1,\ldots, i_c$ be the closing vertices of $G$ and let $j_1,\ldots, j_o$ be the opening ones. For each closing vertex $i$, let $p(i)$ be the number of vertices $j$ with $j<i$ that are opening. We consider a diagram $T(G)$ of shape $(p(i_c),p(i_{c-1}),\ldots,p(i_1))$, and if there are $d$ edges going from the opening vertex $j_s$ to the closing vertex $i_r$, we fill the cell in column $s$ and row $c-r+1$ with the integer $d$ (see Figure~\ref{fig:diagram2}). Thus graphs with left degrees $l_1,\ldots,l_c$ and right degrees $r_1,\ldots,r_o$ correspond to fillings of this diagram with nonnegative entries such that the sum of the entries in row $i$ is $l_i$ and the sum of the entries in column $j$ is $r_j$. Conversely, any filling of a diagram arises in this way.
Indeed, given a filling $L$ of a diagram $T$, the shape of $T$ gives the ordering of the opening and closing vertices of the graph, the row and column sums give the left and right degrees (it is easy to see that they must satisfy equation~(\ref{eq:deg})), and the entries of the filling give the edges of the graph. Given a graph $G$, we denote by $L(G)$ the filling of $T(G)$ corresponding to $G$. Similarly, given a filling $L$ of a diagram, we denote by $G(L)$ the left-right graph corresponding to this filling. In this setting, it is immediate to check that again $k$-crossings of $G$ correspond to occurrences of $I_k$ in $L(G)$ and $k$-nestings to occurrences of $J_k$. \begin{figure} \caption{A filling $L$ of a diagram with row sums $4,2,3,2$ and column sums $2,2,3,2,2,$ and the corresponding graph $G(L)$. } \label{fig:diagram2} \end{figure} By a \emph{diagram with prescribed row and column sums} we mean a diagram and two sequences $(\rho_i)$ and $(\gamma_j)$ of nonnegative integers such that the only fillings allowed for this diagram are those where the row and column sums are given by the sequences $(\rho_i)$ and $(\gamma_j)$. Given two matrices $M$ and $N$, we say that they are \emph{equirestrictive} if for all diagrams $T$ with prescribed row and column sums, the number of fillings of $T$ that avoid $M$ equals the number of fillings of $T$ that avoid $N$. With this notation, Theorem~\ref{thm:kcrossknest} is an immediate consequence of the following result, the proof of which is the content of the next section. \begin{thm}\label{thm:avoidk} The identity matrix $I_k$ and the antiidentity matrix $J_k$ are equirestrictive. \end{thm} \comment{ Let us mention that recently M. Rubey~\cite{rubey} has proved the analogous result for fillings of moon polyominoes, also with prescribed row and column sums. } Before moving to the proof of Theorem~\ref{thm:avoidk}, let us make some remarks and point out some consequences of the proof. We start by further exploring the bijection between left-right graphs and fillings of diagrams. Let $G$ and $H$ be graphs on $[n]$ and $[h]$, respectively, with $h\leq n$. For the rest of this section we assume that $H$ is simple (but $G$ can have multiple edges as usual). We say that $G$ contains $H$ if there is an order-preserving injection $\sigma: [h]\rightarrow [n]$ such that if $\{i,j\}$ is an edge of $H$ then $\{\sigma(i),\sigma(j)\}$ is an edge of $G$. For instance, a $k$-noncrossing graph is a graph that does not contain the graph on $[2k]$ with edges $\{1,k+1\},\{2,k+2\},\ldots,\{k,2k\}$. A $0-1$ matrix $M$ with $s$ rows and $t$ columns can also be viewed as a filling of the diagram of shape $(t,t,\stackrel{(s)}{\ldots},t)$. By the correspondence between graphs and fillings of diagrams described above, we have that $M$ gives a graph $G(M)$ with $t$ opening vertices and $s$ closing vertices and such that all opening vertices appear before the closing vertices. Let us call such a graph a \emph{split graph}, a particular case being the graph of a $k$-crossing or a $k$-nesting. As a consequence of the previous discussion we have that in terms of containment of substructures (matrices or split graphs), fillings of diagrams and graphs are equivalent objects. \begin{thm}\label{thm:bijection} For any split graph $H$ there is a matrix $M(H)$ such that a left-right graph $G$ contains $H$ if and only if the filling $L(G)$ contains $M(H)$.
And conversely, for each matrix $M$ there is a split graph $H(M)$ such that a filling $L$ of a diagram $T$ contains $M$ if and only if the graph $G(L)$ contains $H(M)$. \end{thm} Observe now that Lemma~\ref{lem:split} can be generalized by substituting ``$k$-noncrossing graphs'' with ``graphs that do not contain the split graph $H$''. Hence the following. \begin{thm}\label{thm:equivalence} Let $H$ and $H'$ be two split graphs. Then for any left-right degree sequence $D$ there are as many graphs on $D$ avoiding $H$ as graphs on $D$ avoiding $H'$ if and only if for each diagram with prescribed row and column sums there are as many fillings avoiding $M(H)$ as fillings avoiding $M(H')$. \end{thm} Following the notation for matrices, we say that two split graphs $H$ and $H'$ are \emph{equirestrictive} if for any left-right degree sequence $D$, there are as many graphs on $D$ avoiding $H$ as graphs on $D$ avoiding $H'$. All the split graphs that are known to be equirestrictive are obtained from the graph of a $k$-crossing or a $k$-nesting by using Proposition~\ref{prop:avoidextension} from the next section. This proposition states that if $M$ and $N$ are equirestrictive matrices, then for any other matrix $A$ the matrices $$\left( \begin{array}{cc} M & 0 \\ 0 & A \end{array}\right) \quad \mbox{and} \quad \left( \begin{array}{cc} N & 0 \\ 0 & A \end{array}\right)$$ defined by blocks are also equirestrictive. This has the following implications in terms of graphs. Given a split graph $H$ on $[h]$, a \emph{$(k-H)$-crossing} is a graph on $2k+h$ such that the graph induced by the vertices $[k]\cup \{k+h+1,\ldots,2k+h\}$ is a $k$-crossing, the graph induced by $\{k+1,\ldots,k+h\}$ is $H$, and there are no other edges. A \emph{$(k-H)$-nesting} is defined similarly. Then by combining Theorems~\ref{thm:avoidk} and~\ref{thm:equivalence} and Proposition~\ref{prop:avoidextension} we deduce the following. \begin{cor}\label{cor:kH} For any split graph $H$ and any nonnegative integer $k$, $(k-H)$-crossings and $(k-H)$-nestings are equirestrictive. \end{cor} Observe that if we take $H$ to be an $h$-nesting, a $(k-H)$-nesting is a $(k+h)$-nesting, so it follows that a $k$-nesting, a $k$-crossing, and any combination of a $t$-crossing ``over" a $(k-t)$-nesting are equirestrictive. However, it is not true that $t$-nestings over $(k-t)$-crossings are equirestrictive, not even within matchings, as observed in the remark after Theorem~1 of~\cite{jelinek_matchings}. This implies also that there is no analogous version of Proposition~\ref{prop:avoidextension} where $A$ is the top-left block and $M$ and $N$ are the bottom-right blocks of the matrix. Finally, we comment on the results known for $0-1$ fillings of diagrams with row and column sums equal to $1$. Our correspondence translates these results into results for matchings and partition graphs, as we next explain. In the literature, two permutation matrices $M$ and $N$ are called \emph{shape-Wilf-equivalent} if for each diagram $T$ with row and column sums set to $1$, the number of fillings avoiding $M$ equals the number of fillings avoiding $N$. (In view of this notation, we could have chosen the name graph-Wilf-equivalent instead of equirestrictive.) Let $P$ be a $t\times t$ permutation matrix. The split graph corresponding to $P$ is a matching (these are sometimes called \emph{permutation matchings}). 
Now if two permutation matrices $P$ and $P'$ are shape-Wilf-equivalent, then by straightforward application of Theorem~\ref{thm:equivalence} we have that for all graphs whose left and right degrees are one, the number of graphs avoiding the matching $H(P)$ equals the number of graphs avoiding the matching $H(P')$. Since graphs with left and right degrees one are exactly partition graphs, it turns out that shape-Wilf-equivalence is equivalent to the matchings $H(P)$ and $H(P')$ being equirestrictive among partition graphs, counted by left-right degree sequences. There are not many pairs of permutation matrices known to be shape-Wilf-equivalent. Backelin, West, and Xin~\cite{filling_boards} show that $I_k$ and $J_k$ are shape-Wilf-equivalent; in graph theoretic terms, this gives another alternative proof of the equality between $k$-noncrossing and $k$-nonnesting partition graphs from~\cite{match_and_part}. Let us mention here that Krattenthaler~\cite{krattenthaler} deduces both the result of Chen et al. and that of Backelin, West, and Xin from his Theorem~3, but for the first one he sets $T=\Delta$ and for the second he restricts the number of non-empty cells in the filling (and takes arbitrary shapes). Since these two apparently unrelated results are in fact equivalent, it is obvious that they must follow from the same theorem, but it is interesting that they do in different ways. Another observation is that by Lemma~\ref{lem:split}, and its generalization to split graphs, if we know that two split graphs are equirestrictive within matchings, then they are so within partition graphs. For instance, a bijective proof of the equality of the numbers of $k$-noncrossing and $k$-nonnesting matchings would immediately give a bijection for $k$-noncrossing and $k$-nonnesting partition graphs. In addition to the matrices $I_t$ and $J_t$ and the ones that follow from Proposition~\ref{prop:avoidextension}, the only other pair of matrices known to be shape-Wilf-equivalent are (see~\cite{stankova_west}) $$M(213)=\left( \begin{array}{ccc} 0& 0& 1 \\ 1&0&0 \\ 0&1&0 \end{array} \right) \qquad \mathrm{and} \qquad M(132)=\left(\begin{array}{ccc} 0&1&0\\ 0&0&1\\1&0&0 \end{array} \right). $$ The graph theoretic version of this result has been independently proved by Jel\'inek~\cite{jelinek_matchings}. It is not known to us if $M(231)$ and $M(132)$ are also equirestrictive, or more generally if there is a pair of shape-Wilf-equivalent permutation matrices that are not equirestrictive. Lastly, let us mention that all the discussion of this section can be carried out with almost no changes to the case where the matrix we want to avoid in the filling can have arbitrary nonnegative entries; this corresponds to avoiding split graphs with multiple edges. The interested reader will have no problems in filling in the details. \section{Proof of Theorem~\ref{thm:avoidk}}\label{sec:proof} This section is devoted to the proof of Theorem~\ref{thm:avoidk}. We show that we can adapt to our setting the proof of~\cite{filling_boards}, which is for shape-Wilf-equivalence, that is, row and column sums equal to $1$; we include the details for the sake of completeness. (Actually, \cite{filling_boards} contains two proofs of the analogous of our Theorem~\ref{thm:avoidk} for shape-Wilf-equivalence; the proof we adapt is the first one.) This bijection has been further studied in~\cite{bms}. Here we show that it extends, in a quite straightforward way, to arbitrary fillings. 
This gives a result stronger than Theorem~\ref{thm:avoidk}, the consequences of which in graph theoretic terms have already been pointed out at the end of the previous section. Let us also mention that Theorem~\ref{thm:avoidk} can also be proved using the techniques of~\cite{krattenthaler}. From now on $\bar{T}$ denotes a diagram with prescribed row and column sums. When we say that a cell is above (or below, to the right, to the left) of another cell we always mean strictly. If we say that a cell is weakly above (below, etc.) we mean not above (not below, etc.) If $A$ and $B$ are two matrices, by $[A|B]$ we mean the matrix having $A$ and $B$ as blocks, that is, $$\left( \begin{array}{cc} A & 0 \\ 0 & B\end{array}\right) .$$ \begin{prop}\label{prop:avoidextension} Let $M$ and $N$ be a pair of equirestrictive matrices and let $A$ be any matrix. Then the matrices $[M|A]$ and $[N|A]$ are also equirestrictive. \end{prop} \begin{proof} Let $L$ be a filling of the diagram $\bar{T}$ that avoids $[M|A]$. Let $T'$ be the set of cells $(i,j)$ of $T$ such that the cells to the right and below $(i,j)$ contain the matrix $A$. $T'$ is a diagram, since if $(i,j)$ is in $T'$ all the cells weakly above and weakly to the left of it are also in $T'$. Now set the row and column sums of $T'$ according to the restriction of $L$ to $T'$, call it $L'$, giving a diagram $\bar{T'}$. Now $L'$ is a filling of $\bar{T'}$ that avoids $M$, so by assumption there is a bijection between such fillings and the ones that avoid $N$. Change the entries of $L$ corresponding to $T'$ to obtain a filling of $\bar{T}$ that avoids $[N|A]$. The bijection in the other direction goes just in the same way. \end{proof} Let $F_t$ be the matrix $[J_{t-1}|I_1]$. The proof of the following proposition takes the rest of this section. \begin{prop}\label{prop:jt_ft} For all $t$, $F_t$ and $J_t$ are equirestrictive. \end{prop} We get as a corollary a stronger version of Theorem~\ref{thm:avoidk}. \begin{cor}\label{cor:it_jt} For all $t$, $[I_t|A]$ and $[J_t|A]$ are equirestrictive. \end{cor} \begin{proof} By Proposition~\ref{prop:avoidextension} it is enough to show that $I_t$ and $J_t$ are equirestrictive. The proof is by induction on $t$; clearly $I_1$ and $J_1$ are equirestrictive. By Proposition~\ref{prop:jt_ft}, it is enough to show that $I_{t}$ and $F_{t}$ are equirestrictive, and this follows by the induction hypothesis combined with Proposition~\ref{prop:avoidextension}. \end{proof} A sketch of the proof of Proposition~\ref{prop:jt_ft} is as follows. We first define two maps between fillings that transform occurrences of $F_t$ into occurrences of $J_t$, and conversely, and use them to define to algorithms that transform a filling avoiding $F_t$ into a filling avoiding $J_t$, and conversely. The fact that these two algorithms are inverses of each other follows from a series of lemmas. For any filling $L$, given two occurrences $G_1$ and $G_2$ of $J_t$ in $L$, we say that $G_1$ \emph{precedes} $G_2$ if the first entry in which they differ, from left to right, is either higher in $G_1$ or it is at the same height and the one in $G_1$ is to the left. So two occurrences are either equal or comparable. The order for the occurrences of $F_t$ goes the other way around, i.e., we look at the first entry in which they differ, from right to left, and the lower entries have preference, and if they are at the same height, the one more to the right goes first. 
Let $L$ be a filling with the first occurrence of $J_t$ in rows $r_1,\ldots,r_t$ and columns $c_1,\ldots,c_t$. Let $\phi(L)$ be the result of substracting $1$ from each cell $(r_s,c_s)$, $1\leq s\leq t$ and adding $1$ to each cell $(r_s,c_{s-1})$, $2\leq s\leq t$ and to cell $(r_1,c_t)$. Since row and column sums have not been altered, $\phi(L)$ is a filling of $\bar{T}$. So we have changed an occurrence of $J_t$ to an occurrence of $F_t$. Define $\psi$ as the inverse procedure, that is, $\psi$ takes a filling of the diagram, looks for the first occurrence of $F_t$, and replaces it by an occurrence of $J_t$. We define the algorithms $A1$ and $A2$ in the following way. Algorithm $A1$ starts with a filling avoiding $F_t$ and applies $\phi$ successively until there is no occurrence of $J_t$. The result (provided the algorithm finishes) is a filling that avoids $J_t$. Similarly, algorithm $A2$ starts with a filling avoiding $J_t$ and applies $\psi$ until there are no occurrences of $F_t$ left. We claim that $A1$ and $A2$ are inverse of each other. We prove this through a series of analogous lemmas. It is enough to prove the following claims. \begin{itemize} \item That both algorithms end. (Lemmas~\ref{lem:nojtabove} and~\ref{lem:noftbelow2}.) \item That $\psi(\phi^n(L))=\phi^{n-1}(L)$ for all $n$. (Lemma~\ref{lem:psipicksb}.) \item That $\phi(\psi^n(L))=\psi^{n-1}(L)$ for all $n$. (Lemma~\ref{lem:phipicksa}.) \end{itemize} In order to prove these claims, we need to investigate some properties of the maps $\phi$ and $\psi$. We start by studying the map $\phi$. Let us first introduce some notation. Let $L$ be a filling of the diagram and let $a_1,\ldots,a_t$ be the cells of the first $J_t$ in $L$, listed from left to right; say they are $(r_1,c_1),\ldots,(r_t,c_t)$. So in each cell $a_i$ there is a positive integer, possibly greater than one. Let $b_1,\ldots,b_t$ be the cells $(r_2,c_1),(r_3,c_2),\ldots,(r_t,c_{t-1})$ and $(r_1,c_t)$; hence, $b_1,\ldots,b_t$ are the cells corresponding to the occurrence of $F_t$ that is created after applying $\phi$ to $L$. So cell $b_i$ is in the same row as $a_{i+1}$ and in the same column as $a_i$, for $i$ with $1\leq i \leq t-1$. Consider now the following two paths of cells determined by $a_1,\ldots,a_t$ and $b_1,\ldots,b_t$ (see Figure~\ref{fig:abe}). The path $A$ starts at the leftmost cell in the row of $a_1$, continues to the right until it reaches the column of $a_2$, then takes this column up until it hits cell $a_2$, then turns right until reaching the column of $a_3$, goes up until $a_3$, then turns right again, and so on, until it reaches cell $a_t$, at which point continues up until the top of the diagram. The path $B$ is defined in a similar manner. It starts at the leftmost cell of the row of cell $b_1$, and goes right until it hits $b_1$. Then it turns up until the row of $b_2$, where it turns and continues to the right until hitting $b_2$. Then it goes up until the row of $b_3$, and then turns to the right until $b_3$, and so on, until reaching $b_{t-1}$, at which point it goes up until reaching the top of the diagram. Since $a_1,\ldots,a_t$ are the first occurrence of $J_t$, the cells that are both to the right of $B$ and to the left of $A$ are empty, or, in other words, this region of the diagram avoids $J_1$. We denote this region by $E$. The choice of the first $J_t$ also imposes some other less trivial bounds on the longest $J_i$'s that can be found in some other areas determined by $E$. 
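To make the map $\phi$ and the choice of the first occurrence concrete, the following Python sketch (our own illustration, with hypothetical helper names; brute force, so intended only for tiny fillings) locates the first $J_t$ with respect to the order defined above and performs the move; note that the move preserves all row and column sums by construction.
\begin{verbatim}
# Illustrative sketch of the move phi; fillings are dicts {(row, col): value}.
from itertools import combinations

def occurrences_of_Jt(shape, filling, t):
    """All occurrences of J_t, each listed as cells a_1,...,a_t from left to right
    (columns increasing, rows decreasing; rows are numbered from the top)."""
    cells = sorted((c for c, v in filling.items() if v > 0), key=lambda rc: rc[1])
    occs = []
    for combo in combinations(cells, t):
        rows = [r for r, _ in combo]
        cols = [c for _, c in combo]
        if all(cols[s] < cols[s + 1] and rows[s] > rows[s + 1] for s in range(t - 1)) \
                and cols[-1] <= shape[rows[0] - 1]:   # corner (row of a_1, column of a_t)
            occs.append(list(combo))
    return occs

def first_Jt(shape, filling, t):
    """First occurrence: scanning a_1, a_2, ... from left to right, prefer the higher
    cell (smaller row index), and at equal height the cell further to the left."""
    occs = occurrences_of_Jt(shape, filling, t)
    return min(occs) if occs else None

def phi(shape, filling, t):
    """Subtract 1 on the first J_t and add 1 on the cells b_1,...,b_t it determines."""
    occ = first_Jt(shape, filling, t)
    if occ is None:
        return dict(filling)
    bs = [(occ[s + 1][0], occ[s][1]) for s in range(t - 1)] + [(occ[0][0], occ[-1][1])]
    new = dict(filling)
    for cell in occ:
        new[cell] = new.get(cell, 0) - 1
    for cell in bs:
        new[cell] = new.get(cell, 0) + 1
    return new

# Sanity check on a single J_3 in a 3x3 diagram: row and column sums are unchanged.
shape = [3, 3, 3]
L = {(3, 1): 1, (2, 2): 1, (1, 3): 1}
L2 = phi(shape, L, 3)
line_sum = lambda F, idx, k: sum(v for c, v in F.items() if c[idx] == k)
print(all(line_sum(L, i, k) == line_sum(L2, i, k) for i in (0, 1) for k in (1, 2, 3)))
\end{verbatim}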
Note that in the next lemma the area left of $E$ includes the path $B$. \begin{figure} \caption{The regions $A, B,$ and $E$} \label{fig:abe} \end{figure} \begin{lemma}\label{lem:leftofE} With the above notation, the following hold for any filling $L$ and for the corresponding $\phi(L)$. \begin{itemize} \item[(i)] For all $i$ with $1\leq i \leq t-1$, there is no $J_i$ below $b_i$ and to the left of $E$. \item[(ii)] For all $i$ with $1\leq i \leq t-1$, there is no $J_{t-i}$ above and to the right of $b_i$ and to the left of $E$. \item[(iii)] For all $i,j$ with $1\leq i<j\leq t-1$, the rectangle determined by $b_i$ and $b_j$ contains no $J_{j-i}$ to the left of $E$; that is, there is no $J_{j-i}$ below $b_j$, above $b_i$, to the right of $b_i$, and to the left of $E$. \end{itemize} \end{lemma} \begin{proof} The arguments below apply to both $L$ and $\phi(L)$ since they do not use the entries in cells $b_i$. \begin{itemize} \item[(i)] Assume there was such a $J_i$. Then this $J_i$ together with $a_{i+1},\ldots,a_t$ would form a $J_t$ contradicting the choice of $a_1$. \item[(ii)] Suppose there was such a $J_{t-i}$. Then $a_1,\ldots,a_i$ followed by this $J_{t-i}$ form a $J_t$ that contradicts the choice of $a_{i+1}$. \item[(iii)] Again, if there was such a $J_{j-i}$, combined with $a_1,\ldots,a_{i-1}$ and $a_{j+1},\ldots,a_t$, it would create a $J_t$ contradicting the choice of $a_i$. \end{itemize} \end{proof} \begin{lemma}\label{lem:nojtabove} There is no $J_t$ in $\phi(L)$ in the rows above $a_1$ \end{lemma} \begin{proof} We argue by contradiction. Let $G$ be an occurrence of $J_t$ in $\phi(L)$. Since $\phi$ picked $a_1$ as the topmost cell being the left-bottom cell of a $J_t$, $G$ must use at least one of the cells $b_1,\ldots,b_{t-1}$. The idea is to substitute these cells $b_i$, and possibly others, by some of the cells $a_i$, to find an occurrence of $J_t$ in $L$ in the rows above $a_1$, hence contradicting the choice of $a_1$. Now for each cell $b_{i}$ which belong to $G$, find the largest integer $j$ such that all cells of $G$ above $b_{i}$ and weakly below $b_{j-1}$ lie left of $E$. In this way it is possible to find two sequences $i_1,\ldots,i_s$ and $j_1,\ldots, j_s$ with the following properties: \begin{itemize} \item $i_k<j_k$, $1\leq i_{k-1}<i_k$, and $j_{k-1}<j_k\leq t$ for all $k$; \item $b_{i_k}$ is in $G$; \item if $b_l$ is in $G$, then $i_k\leq l \leq j_k-1$ for some $k$; \item all cells of $G$ above $b_{i_k}$ and weakly below $b_{j_k-1}$ are to the left of $E$, and $j_k$ is the largest integer with this property. \end{itemize} Now we show that we can replace the cells of $G$ that fall left of $E$ and are contained in the rectangles determined by $b_{i_k}$ and $b_{j_k}$ by some of the $a_i$, giving an instance of $J_t$ contained in $L$ and above $a_1$. We need to distinguish two cases, according to whether $j_s=t$ or not. Assume first that $j_s\neq t$. For each $k$, consider the rectangles determined by $b_{i_k}$ and $b_{j_k}$. By Lemma~\ref{lem:leftofE}.(iii), there are at most $j_k-i_k-1$ elements of $G$ in this rectangle and to the left of $E$. Replace these cells, together with $b_{i_k}$, by a (possibly proper) subset of $a_{i_k+1},\ldots,a_{j_k}$. After doing this for each $k$, we still have an occurrence of $J_t$ starting above $a_1$, but now it is contained in the original filling $L$, contradicting the hypothesis. Now assume that $j_s=t$. 
For $k<s$, do the same substitutions as in the previous case; for $k=s$, we have by Lemma~\ref{lem:leftofE}.(ii) that there are at most $t-i_k-1$ cells of $G$ left of $E$ and above $b_{i_k}$. Replace these cells and $b_{i_k}$ by a subset of $a_{i_k+1},\ldots,a_t$. Again we obtain an occurrence of $J_t$ in $L$ that starts above $a_1$, a contradiction. \end{proof} This lemma alone shows that algorithm $A1$ terminates. Indeed, after one application of $\phi$ all the cells in the row of $a_1$ and to the left of $a_1$ are empty (because of the choice of $a_1$), and the cell $a_1$ has decreased its value by one. So the leftmost cell of the first occurrence of $J_t$ in $\phi(L)$ is either $a_1$, or it is to the right of $a_1$, or it is below $a_1$. But since the value in cell $a_1$ decreases and cells to the left of $a_1$ stay empty, eventually there will be no occurrence of $J_t$ whose leftmost cell is $a_1$. So the selection of $J_t$'s goes from top to bottom and from left to right, so for some $n$ the filling $\phi^n(L)$ is free of $J_t$'s. It is not the case that if we apply $\phi$ to an arbitrary filling $L$ of $\bar{T}$ we have that $\psi(\phi(L))=L$. But algorithm $A1$ starts with a filling that avoids $F_t$ and the successive applications of $\phi$ create occurrences of $F_t$ from top to bottom and from left to right. We need to show that in this situation after each application of $\phi$, the first occurrence of $F_t$ is precisely the one created by $\phi$. The next lemmas are devoted to proving this. \begin{lemma}\label{lem:noftbelow} If $L$ contains no $F_t$ with at least one square below $a_1$, then $\phi(L)$ contains no such $F_t$. \end{lemma} \begin{proof} The proof is similar to the one of the previous lemma. Let $G$ be an occurrence of $F_t$ in $\phi(L)$ with at least one cell below $a_1$. Since $L$ had no such occurrence, $G$ contains at least one of the cells $b_i$. The bottom-right cell of $G$ is below $a_1$, and it cannot be to the right of $a_{t-1}$, otherwise this cell together with $a_1,\ldots, a_{t-1}$ would form an $F_t$ in $L$. By an argument similar to the one in the previous lemma, we change all cells $b_i$ of $G$, and possibly others, to some of the cells $a_i$, so that at the end we have an occurrence of $J_{t-1}$ that together with the bottom-right cell of $G$ gives an occurrence of $F_t$ that contradicts the hypothesis. For each $b_i$ that is in $G$, look for the smallest $j$ such that all cells in $G$ that are left of $b_i$ and weakly to the right of $b_{j+1}$ are left of $E$. By doing this we find integers $i_1,\ldots, i_s$ and $j_1,\ldots, j_s$ with the following properties: \begin{itemize} \item $i_k> j_k$, $t-1\geq i_{k-1}>i_k$, and $j_{k-1}>j_k\geq 0$ for all $k$; \item $b_{i_k}$ is in $G$ for all $k$ with $1\leq k \leq s$; \item $j_k$ is the smallest integer such that all cells of $G$ that are left of $b_{i_k}$ and weakly to the right of $b_{j_k+1}$ are to the left of $E$; \item if $b_l$ is in $G$, then $j_k+1 \leq l\leq i_k$ for some $k$. \end{itemize} We have to distinguish whether ${j_s}=0$ or not. Assume first $j_s\neq 0$. Since by Lemma~\ref{lem:leftofE}.(iii) there are at most $i_k-j_k-1$ cells of $G$ in the rectangle determined by $b_{j_k}$ and $b_{i_k}$, these cells, together with $b_{i_k}$, can be replaced by a (possibly proper) subset of $a_{j_k+1},\ldots,a_{i_k}$. By doing this for all $k$, we have an occurrence of $J_{t-1}$ in $L$ that together with the right-bottom cell of $G$ contradicts the hypothesis. 
If $j_s=0$, then we do the same substitutions for all $k\neq s$; for $k=s$, we have by Lemma~\ref{lem:leftofE}.(i) that there are at most $i_s-1$ cells of $G$ left of $E$ and below $b_{i_s}$, so we can substitute those and $b_{i_s}$ by $a_1,\ldots,a_{i_s}$. After these substitutions, the result is again an occurrence of $F_t$ in $L$ that contains a cell below $a_1$, contradicting the hypothesis. \end{proof} The following is easy but we state it for the sake of completeness. \begin{lemma}\label{lem:noftright} If $L$ contains no $F_t$ with a cell to the right of $a_t$ and below $a_2$, then $\phi(L)$ contains no such $F_t$. \end{lemma} \begin{proof} Again we argue by contradiction. Suppose $G$ is an $F_t$ in $\phi(L)$ that contains a cell to the right of $a_t$ and below $a_2$. This cell together with $a_2,\ldots,a_t$ gives an occurrence of $F_t$ in $L$ that contradicts the assumption. \end{proof} \begin{lemma}\label{lem:ftfromphi} For each $k$ with $1\leq k\leq t-1$, there is no $J_k$ in $\phi(L)$ above $a_1$ and to the left of and below $a_{k+1}$. \end{lemma} \begin{proof} Let $G$ be an occurrence of such a $J_k$. If $G$ contains none of $b_1,\ldots, b_{k-1}$, then $G$ followed by $a_{k+1},\ldots,a_t$ forms a $J_t$ in $L$ that is above $a_1$, and this contradicts the choice of $a_1$. Hence, $G$ uses some $b_i$ for $1\leq i \leq k-1$. By an argument analogous to that of the proof of Lemma~\ref{lem:nojtabove}, we can substitute the cells $b_i$ that are in $G$ and possibly others by some $a_i$'s so that we get an occurrence of $J_{k}$ in $L$ that is below $a_{k+1}$ and above $a_1$. This, followed by $a_{k+1},\ldots,a_t$, gives a $J_t$ in $L$ that contradicts the choice of $a_1$. \end{proof} The following lemma is just a combination of the previous ones and induction; it implies that the inverse of algorithm $A1$ is $A2$. \begin{lemma}\label{lem:psipicksb} \begin{itemize} \item[(i)] If $L$ does not contain any occurrence of $F_t$ below $a_1$, then the first occurrence of $F_t$ in $\phi(L)$ is $b_1,\ldots,b_t$. \item[(ii)] If $L$ is a filling that avoids $F_t$, then $\psi(\phi^n(L))=\phi^{n-1}(L)$. \end{itemize} \end{lemma} \begin{proof} For the first statement, let $f_1,\ldots,f_t$ be the first occurrence of $F_t$ in $\phi(L)$, with the elements ordered from left to right. Recall that $b_1,\ldots,b_t$ is an occurrence of $F_t$ in $\phi(L)$; we need to show that $f_i=b_i$ for all $i$. By Lemma~\ref{lem:noftbelow}, $f_t$ is in the same row as $b_t$. By Lemma~\ref{lem:noftright}, $f_t$ cannot be to the right of $a_t$, hence $f_t=b_t$. Now use induction on $t-i$. Suppose we know $f_{i+1}=b_{i+1},\ldots,f_t=b_t$. It is enough now to show that $f_i$ lies in the same row as $b_i$, since all the cells to the right of $b_i$ but left of $b_{i+1}$ lie in $E$, which we know contains only empty cells. But now Lemma~\ref{lem:ftfromphi} guarantees that there is no $J_i$ below $b_i$, to the left of $b_{i+1}$, and above $b_t$, as required. For the second statement, it follows by Lemma~\ref{lem:noftbelow} and induction on $n$ that the filling $\phi^{n}(L)$ contains no $F_t$ whose lowest cell is below the lowest cell of the first occurrence of $J_t$. Hence the previous statement applied to $\phi^n(L)$ gives immediately that $\psi(\phi^n(L))=\phi^{n-1}(L)$. \end{proof} So the inverse of algorithm $A1$ is $A2$. Now we only need to prove the converse. The proof follows exactly the same steps and we content ourselves with stating and proving the corresponding lemmas. Actually in this case some proofs are slightly simpler.
We keep the notation as above. Let $L$ be now a filling of $\bar{T}$ and let $b_1,\ldots,b_t$ be the first occurrence of $F_t$ and let $a_1,\ldots, a_t$ be the occurrence of $J_t$ in $\psi(L)$ created after applying $\psi$ to $L$. Consider again the region $E$ as defined above. By the choice of $b_1,\ldots,b_t$ as the first occurrence of $F_t$ in $L$, all the cells of $E$ are again empty. \begin{lemma}\label{lem:rightofE} For all $i,j$ with $1\leq i<j\leq t$, the rectangle determined by $a_i$ and $a_j$ contains no $J_{j-i}$ to the right of $E$ in either $L$ or $\psi(L)$; that is, there is no $J_{j-i}$ below $a_j$, above $a_i$, to the left of $a_j$, and to the right of $E$. \end{lemma} \begin{proof} Suppose there was such a $J_{j-i}$. Then $b_1,\ldots,b_{i-1}$, followed by this $J_{j-i}$ and then followed by $b_j,\ldots,b_{t}$ gives an occurrence of $F_t$ in $L$ that contradicts the choice of $b_{j-1}$. \end{proof} \begin{lemma}\label{lem:noftbelow2} There is no $F_t$ in $\psi(L)$ with at least one cell in a row below $a_1$. \end{lemma} \begin{proof} Suppose there is such an $F_t$. Its right-bottom cell is below $a_1$ and also weakly to the left of $b_{t-1}$, since otherwise $b_1,\ldots,b_{t-1}$ and this cell would form an $F_t$ contradicting the choice of $b_t$. Let $G$ be this occurrence of $F_t$ except the right-bottom cell. $G$ must contain some of the cells $a_1,\ldots,a_t$. As in the previous lemmas, the idea is to substitute the $a_i$ in $G$ together with other cells by some of the $b_i$ so that we obtain an occurrence of $F_t$ in $L$ contradicting the choice of $b_t$. Find integers $i_1,\ldots,i_s$ and $j_1,\ldots,j_s$ with the following properties: \begin{itemize} \item $ i_k< j_k$, $1\leq i_{k-1}<i_k$, and $j_{k-1}<j_k\leq t-1$ for all $k$; \item $a_{i_k}$ is in $G$ for all $k$ with $1\leq k \leq s$; \item $j_k$ is the largest integer such that all cells of $G$ that are to the right of $a_{i_k}$ and weakly to the left of $a_{j_k-1}$ are to the right of $E$; \item if $a_l$ is in $G$, then $i_k\leq l \leq j_k-1$ for some $k$. \end{itemize} Now, by Lemma~\ref{lem:rightofE}, there are at most $j_k-i_k-1$ elements of $G$ in the rectangle determined by $a_{i_k}$ and $a_{j_k}$. Together with $a_{i_k}$, they account for at most $j_k-i_k$ elements of $G$; substitute them for a subset of $b_{i_k},\ldots,b_{j_k-1}$. Doing this for all $k$, we get an occurrence of $F_t$ in $L$ that contains a cell below $a_1$, hence contradicting the choice of $b_1,\ldots,b_t$ as the first $F_t$ in $L$. \end{proof} \begin{lemma}\label{lem:nojtabove2} If $L$ contains no $J_t$ that is above $a_1$, then $\psi(L)$ contains no such $J_t$. \end{lemma} \begin{proof} Let $G$ be such a $J_t$; $G$ must contain some of the cells $a_i$. Find integers $i_1,\ldots,i_s$ and $j_1,\ldots,j_s$ with the following properties: \begin{itemize} \item $ i_k > j_k$, $t\geq i_{k-1}>i_k$, and $j_{k-1}>j_k\geq 1$ for all $k$; \item $a_{i_k}$ is in $G$ for all $k$ with $1\leq k \leq s$; \item $j_k$ is the smallest integer such that all cells of $G$ that are below $a_{i_k}$ and weakly above $a_{j_k+1}$ are to the right of $E$; \item if $a_l$ is in $G$, then $j_k+1\leq l\leq i_k$ for some $k$. \end{itemize} As in the proof of the previous lemma, it is possible to substitute the elements of $G$ contained in the rectangles determined by $a_{i_k}$ and $a_{j_k}$, plus the cell $a_{i_k}$, by (a subset of) the elements $b_{j_k},\ldots,b_{i_k-1}$. These substitutions give a $J_t$ in $L$ that is above $a_1$, contrary to the hypothesis. 
\end{proof} \begin{lemma}\label{lem:nojtleft} If $L$ contains no $J_t$ with a cell to the left of $a_1$ and below $a_2$, then neither does $\psi(L)$. \end{lemma} \begin{proof} If this were the case, the leftmost cell of this $J_t$ together with $b_1,\ldots,b_{t-1}$ would give a $J_t$ contradicting the hypothesis. \end{proof} \begin{lemma}\label{lem:jtfrompsi} If $L$ contains no $J_t$ above $a_1$, there is no $J_{t-r}$ in $\psi(L)$ above $a_{r+1}$ such that the lowest cell of this $J_{t-r}$ is weakly to the left of $a_{r+1}$. \end{lemma} \begin{proof} Suppose $G$ is an occurrence of such a $J_{t-r}$. $G$ must contain some of the cells $a_{r+2}, \ldots,a_t$, otherwise $b_1,\ldots,b_r$ followed by $G$ would form a $J_t$ contradicting the hypothesis. Find integers $i_1,\ldots,i_s$ and $j_1,\ldots,j_s$ with the following properties: \begin{itemize} \item $ i_k > j_k$, $t\geq i_{k-1}>i_k$, and $j_{k-1}>j_k\geq r-1$ for all $k$; \item $a_{i_k}$ is in $G$ for all $k$ with $1\leq k \leq s$; \item $j_k$ is the smallest integer such that all cells of $G$ that are below $a_{i_k}$ and weakly above $a_{j_k+1}$ are to the right of $E$; \item if $a_l$ is in $G$, then $j_k+1\leq l\leq i_k$ for some $k$. \end{itemize} As before, the rectangle determined by $a_{i_k}$ and $a_{j_k}$ contains at most $i_k-j_k-1$ cells of $G$; these cells, together with $a_{i_k}$, can be replaced by a subset of $b_{j_k},\ldots,b_{i_k-1}$. After all these substitutions we get an occurrence of $J_{t-r}$ in $L$ that combined with $b_1,\ldots,b_r$ gives an occurrence of $J_t$ in $L$ contradicting the hypothesis. \end{proof} \begin{lemma}\label{lem:phipicksa} \begin{itemize} \item[(i)] If $L$ does not contain any occurrence of $J_t$ above $b_t$, then the first occurrence of $J_t$ in $\psi(L)$ is $a_1,\ldots,a_t$. \item[(ii)] If $L$ is a filling that avoids $J_t$, then $\phi(\psi^n(L))=\psi^{n-1}(L)$. \end{itemize} \end{lemma} \begin{proof} For the first statement, let $d_1,\ldots,d_t$ be the first occurrence of $J_t$ in $\psi(L)$, with cells listed from left to right. We want to show that $a_i=d_i$ for all $i$ with $1\leq i \leq t$. By Lemma~\ref{lem:nojtabove2}, $d_1$ is in the same row as $a_1$, and by Lemma~\ref{lem:nojtleft} it is weakly to the right of $a_1$, hence $d_1=a_1$. Now we proceed by induction on $i$. Suppose $d_1=a_1,\ldots,d_i=a_i$. By Lemma~\ref{lem:jtfrompsi} we have that the only $J_{t-i}$ in $\psi(L)$ that is weakly above and weakly to the left of $a_{i+1}$ is $a_{i+1},\ldots, a_t$, hence $d_{i+1}=a_{i+1}$, as needed. For the second statement, by induction and Lemma~\ref{lem:nojtabove2} we get that $\psi^n(L)$ satisfies the hypothesis of part (i), hence it follows that $\phi(\psi^n(L))=\psi^{n-1}(L)$. \end{proof} \section{Concluding remarks}\label{sec:comments} In his paper~\cite{krattenthaler}, Krattenthaler speaks of a ``bigger picture'' that would encompass several recent results on pattern avoiding fillings of diagrams. We believe that our correspondence between graphs and fillings of diagrams also belongs to this picture and that it may shed some light on it. We have shown that for each statement about pattern avoiding fillings there is a statement about graphs avoiding certain split graphs. So we can claim that in some sense the resources available to attack either problem have doubled. An example of this is given by the ``repeated'' results in the literature mentioned at the end of Section~\ref{sec:degree}.
\comment{Being essentially the same object, it is not casual that fillings of diagrams have become a useful tool in studying $k$-noncrossing and $k$-nonnesting graphs. In the same way as one can start from pattern avoiding permutations and build the way up to fillings of diagrams avoiding certain matrices (in principle permutation matrices, but nothing would prevent us from considering arbitrary matrices), one can go in the parallel road that starts at the by now completely understood bijection between noncrossing and nonnesting matchings and ends, currently, at the equality between the numbers of $k$-noncrossing and $k$-nonnesting graphs with given degree sequences. We are not far from the truth if we say that for each statement in pattern avoiding fillings there is a statement about graphs avoiding certain subgraphs. There is a type of filling, the one without virtually any restriction, whose graph equivalent we have not considered yet; we do it here for completeness. If we do not restrict the sums in each row or column, the entries allowed in the filling ($0-1$ or arbitrary) distinguish between simple and general (multi)graphs, but then we encode graphs only by the triangular diagram $\Delta$, so in some sense we lose the richness of the arbitrary shapes. By prescribing row and column sums and taking arbitrary diagrams, we encode graphs according to their degree sequences. In the spirit of this paper, the graph theoretic interpretation of fillings with arbitrary entries and arbitrary shapes (and no restriction on row and column sums) is the following. With the same notation as in Section~\ref{sec:degree}, a filling of a diagram of shape $(\lambda_1,\ldots,\lambda_s)$ corresponds to a graph with $\lambda_1$ opening vertices and $s$ closing vertices such that the $i$-th closing vertex is preceded by $\lambda_{s-i+1}$ opening vertices (for this correspondence, an isolated vertex can act as either opening or closing, but not both simultaneously). In this setting one can translate results about arbitrary fillings avoiding certain matrices into graph theoretic terms, and in particular, Theorem~\ref{thm:thm13} gives an identity between $k$-noncrossing and $k$-nonnesting graphs that lies half-way between Corollary~\ref{cor:kcrossknestmulti} and Theorem~\ref{thm:kcrossknest}.} For completeness, we mention here a result by Bousquet-M\'elou and Steingr\'imsson~\cite{bms} that can be cast in terms of $k$-noncrossing and $k$-nonnesting graphs. They restrict to diagrams with self-conjugate shape and row and column sums are set to $1$, and they only consider symmetric $0-1$ fillings (that is, symmetric with respect to the main diagonal of the diagram). For these fillings, they show that $I_t$ and $J_t$ are equirestrictive. In terms of matchings, this says that for each left-right degree sequence, the number of $k$-noncrossing symmetric matchings is the same as the number of $k$-nonnesting ones, where a matching on $[2n]$ is symmetric if it equals its reflection through the vertical axis that goes between vertices $n$ and $n+1$. Similar results for symmetric graphs can be deduced from~\cite[Theorem 15]{krattenthaler}. \comment{ It is also possible to go in the other direction, that is, to use graph theoretic results to find shape-Wilf-equivalent matrices. The following is Theorem~1 of~\cite{match_and_part}, written in our notation. 
\begin{thm}\label{thm:thm1} For any left-right degree sequence $D$ having only terms of the form $(1,0),(0,1),$ and $(1,1)$, the number of graphs $G$ on $D$ with $\mathrm{cross}(G)=i$ and $\mathrm{nest}(G)=j$ is the same as the number of such graphs with $\mathrm{cross}(G)=j$ and $\mathrm{nest}(G)=i$. \end{thm} If we sum for $i,j$ with $1\leq i <k$ and $1\leq j <l$, we deduce that the number of $k$-noncrossing and $l$-nonnesting graphs on $D$ is the same as the number of $l$-noncrossing and $k$-nonnesting such graphs. (This is the same as~\cite[Corollary 2]{match_and_part}, but fixing the degrees.) To rephrase this in terms of fillings of diagrams, extend the previous notation by saying that two sets of permutation matrices $M_1,\ldots,M_s$ and $N_1,\ldots,N_r$ are equirestrictive if for each diagram $T$ with prescribed row and column sums, the number of fillings avoiding each of $M_1,\ldots,M_s$ equals the number of fillings avoiding each of $N_1,\ldots,N_r$. If the row and column sums are equal to $1$, we say the two sets of permutation matrices are shape-Wilf-equivalent. Then as a corollary of Theorem~\ref{thm:thm1} we get the following. (This is also mentioned in the final section of~\cite{krattenthaler}.) \begin{thm}\label{thm:i_k_j_l} For all $k,l$, $\{I_k,J_l\}$ and $\{I_l,J_k\}$ are shape-Wilf-equivalent. \end{thm} } Let us finish by going back to our initial motivation of studying $k$-noncrossing and $k$-nonnesting graphs. Even though our main question has been answered positively, it is fair to say that it has not been solved in the most satisfactory way; ideally we would like to find a bijective proof in graph theoretic terms. Note that due to its roundabout character, our proof of Theorem~\ref{thm:avoidk} does not give a clear bijection, either in terms of graphs or in terms of fillings. Also the proofs of Corollaries~\ref{cor:kcrossknestmulti}, \ref{cor:simple}, and~\ref{cor:simplemax} do not provide bijections in graph theoretic terms. A bijective proof of Theorem~\ref{thm:kcrossknest} for $k=2$ has recently been found by Jel\'inek, Klazar, and de Mier~\cite{notes_jkm}. Other interesting questions related to $k$-crossings and $k$-nestings of graphs include, as mentioned before, determining whether the pairs $(\mathrm{cross}(G), \mathrm{nest}(G))$ are symmetrically distributed among all graphs. This is already known for matchings and partition graphs~\cite{match_and_part}. One would also hope for a wide generalization of Theorem~\ref{thm:kcrossknest} stating that the number of graphs with $r$ $k$-crossings and $s$ $k$-nestings equals the number of graphs with $s$ $k$-crossings and $r$ $k$-nestings. Again, the case $k=2$ is known for matchings~\cite{klazar_match_trees} and partition graphs~\cite{distribution}. Unfortunately, for $k=3$ this is not true even for matchings; for instance, Marc Noy~\cite{marc} checked that there are more matchings with six edges and only one $3$-crossing than with only one $3$-nesting. \comment{ We do not get symmetry of the distribution of the number of occurrences of $I_t$ and $J_t$ in fillings of a given $\bar{T}$. An easy example is given by the diagram of shape $(2,2)$ with column sums $(2,1)$ and row sums $(1,2)$. There are only two fillings, by rows $(1,0|1,1)$ and $(0,1|2,0)$, the first having one $I_2$ and the second having two $J_2$. If we only look at the cells, and not the values, they do actually have the same number of occurrences.
Is there a counterexample to symmetric distribution if we count each occurrence by the minimum value of the cells it occupies?} \end{document}
\begin{document} \def\spacingset#1{\renewcommand{\baselinestretch} {#1}\small\normalsize} \spacingset{1} \if11 { \title{\bf A Process of Dependent Quantile Pyramids} \author{Hyoin An\thanks{Correspondence: Hyoin An ([email protected]), Department of Statistics, The Ohio State University} \footnotemark[2] \hspace{.05em} and Steven N. MacEachern \thanks{This work was supported by NSF grants DMS-2015552 and SES-1921523.} \\ Department of Statistics, The Ohio State University} \maketitle } \fi \begin{abstract} Despite the practicality of quantile regression (QR), simultaneous estimation of multiple QR curves continues to be challenging. We address this problem by proposing a Bayesian nonparametric framework that generalizes the quantile pyramid by replacing each scalar variate in the quantile pyramid with a stochastic process on a covariate space. We propose a novel approach to show the existence of a quantile pyramid for all quantiles. The process of dependent quantile pyramids allows for non-linear QR and automatically ensures non-crossing of QR curves on the covariate space. Simulation studies document the performance and robustness of our approach. An application to cyclone intensity data is presented. \end{abstract} \noindent {\it Keywords:} Simultaneous Quantile Regression, Bayesian Nonparametrics, Quantile Pyramids, Stochastic Processes, Non-crossing Quantiles \spacingset{1.9} \section{Introduction} \subsection{Quantile Regression} Quantile regression (QR) has drawn increased attention as an alternative to mean regression. QR was motivated by the realization that extreme quantiles often have a different relationship with covariates than do the centers of the response distributions. QR can target quantiles in the tail of the distribution and is more robust to outliers than is mean regression. The advantages of QR can be substantial and have led to its use in many areas, including econometrics, finance, medicine, and climatology. The seminal work of \cite{koenker1978regression} extended median regression, which dates back at least as far as \cite{edgeworth1888mathematical}, to QR by allowing asymmetry in the objective function defining the regression. That is, when one is interested in the $\tau^{th}$ quantile ($0<\tau<1$) for the response $y_i$ and the covariate $x_i \in \mathbb{R}^{p+1}$, $i = 1, \ldots, n$, and assuming independent responses, the estimated QR surface is $x_i^T b^*$, where $b^* = \arg\min_{b \in \mathbb{R}^{p+1}} \sum_{i=1}^n \rho_{\tau}(y_i - x_i^T b),$ and $\rho_{\tau}(u) = u(\tau - 1_{(u<0)})$ is the (asymmetric) check loss function. This method is implemented in the R package `quantreg' \citep{koenker_2005} and has led to a wide range of developments. A recent overview of the area is provided in \cite{koenker2017quantile}. The Bayesian counterpart of quantile regression was introduced by \cite{yu2001bayesian}, who used the asymmetric Laplace distribution (ALD) for the sampling density of $Y \vert x$ as a device to focus on the QR for the $\tau^{th}$ quantile. The ALD has density $f(u) = \tau(1-\tau)\exp (-\rho_{\tau}(u))$, which can be seen as a scaled and exponentiated check loss function. This substitution of a loss function for the log-density is an early example of the generalized Bayes technology developed in \cite{bissiri2016general}. \cite{kozumi2011gibbs} made use of the reparametrization of the ALD illustrated in \cite{kotz2001laplace} and \cite{tsionas2003bayesian} to create an efficient Gibbs sampling algorithm.
This Gibbs sampling approach is implemented in the R package `bayesQR' \citep{benoit2017bayesqr}. Many authors have followed Yu and Moyeed's approach, describing properties of the method. See, for example, \cite{geraci2007quantile, reich2010flexible, waldmann2013bayesian, lum2012spatial}. Some have appealed to a semi- or nonparametric approach. \cite{kottas2009bayesian}, for instance, proposed two approaches to model the error distribution nonparametrically in QR, using a Dirichlet process (DP) mixture of uniform densities and a dependent DP mixture of normal densities. \cite{chen2009automatic} developed a QR function in a nonparametric fashion using piecewise polynomials.
\subsection{Crossing quantiles}
When more than one quantile level is considered, however, fitting a QR curve for each level by itself does not correspond to an encompassing model, may not respect the monotonicity of the quantile function, and can result in crossing quantiles. Researchers have suggested various approaches to handle this issue. \cite{rodrigues2017regression} constructed a likelihood inspired by Yu \& Moyeed's approach, while ensuring monotonicity of quantiles with an additional adjustment step. Semi- or non-parametric Bayesian approaches to simultaneous QR include \cite{taddy2010bayesian}, who suggested an approach to estimate the entire joint density of $(\boldsymbol{x}, y)$ and then extract the QR from this density. This ensures monotonicity of quantiles since the inference of quantiles is based on a single density. \cite{reich2011bayesian} and \cite{reich2012spatiotemporal} model the entire quantile process using Bernstein polynomial basis functions in spatial and spatiotemporal settings. In both papers, the prior is specified to satisfy the monotonicity constraint on the quantile function. \cite{kadane2012simultaneous} developed a characterization of the quantile function that induces monotonicity in the joint estimation of linear QR models for a univariate covariate. \cite{yang2017joint} extended this to any bounded covariate space in $\mathbb{R}^p$ via reparameterization. \cite{chen2021joint} generalized this to the spatial setting. \cite{hjort2009quantile} proposed the \textit{quantile pyramid} (QP) for nonparametric inference for a single distribution and briefly mentioned a possible extension to QR. Most similar to our approach, \cite{rodrigues2019pyramid} used the QP for QR. In their work, independent QPs are used to specify the prior distribution for quantiles at $(p+1)$ pivotal locations in a bounded $p$-dimensional covariate space. For each quantile, a linear QR is then constructed as the hyperplane passing through the specified quantile at each of the pivotal locations. \cite{rodrigues2019simultaneous} adapted this idea to a spline regression setting.
\subsection{Our contribution}
We generalize Hjort \& Walker's construction by incorporating dependence in the QPs across the covariate space and by allowing for non-binary splits in the pyramids. Our approach allows direct and flexible modeling of the quantiles over covariate spaces and, by construction, naturally respects the monotonicity of QR curves. Our contribution is twofold: (1) a novel approach to show the existence of a single QP, and (2) extension of the QP from a model for a single distribution to a model for a collection of distributions that vary with the covariate. The first point is a stepping stone to generalizing the idea of the QP.
With an eye to the second point, it also involves expansion of the mathematical framework to move from a single QP to a process of QPs, with greater attention to mappings between the interval $[0,1]$ and the real line. The rest of this article is organized as follows. In Section~\ref{sec:DQP}, we introduce the idea of a process of dependent quantile pyramids (DQPs) and a canonical construction of the model. Section \ref{sec:theory} provides theoretical results. Section~\ref{sec:posteriorinference} describes prior specification and posterior inference. Simulation studies appear in Section~\ref{sec:simulation} and an application to real data appears in Section~\ref{sec:cyclone}. Section~\ref{sec:discussion} presents discussion and directions for future work.
\section{A Process of Dependent Quantile Pyramids} \label{sec:DQP}
In this section, we briefly recap the QP of \cite{hjort2009quantile} and introduce a DQP. The following remark comes from \cite{parzen2004quantile}. \begin{remark} For a random variable $Y$ whose distribution function is $F(\cdot)$, its quantile function is defined as a left-continuous function $Q(\tau) \equiv F^{-1}(\tau) = \inf\{y: F(y) \ge \tau\}$ that satisfies $F(y) \ge \tau$ if and only if $y \ge Q(\tau)$ for $0 < \tau < 1$. \end{remark} In other words, if we define the quantile function, there exists a random variable with the corresponding distribution function. This fact is useful to understand how constructing quantile functions can lead to a distribution over distribution functions.
\subsection{Quantile Pyramid} \label{sec:quantilepyramid}
\cite{hjort2009quantile} created the QP, a Bayesian nonparametric model that focuses on the quantiles of a distribution. The QP provides a distribution over distribution functions, and so it is suited to use as a prior distribution for an unknown distribution function. The quantile function, $Q(\cdot)$, on the unit interval $[0, 1]$, is at the heart of the QP. $Q(0) \equiv 0$ and $Q(1) \equiv 1$. The pyramid is built in levels for dyadic quantile levels, as a binary tree. The $0^{th}$ level is the unit interval $[0, 1]$. At the first level, the median of the unit interval is drawn from some density, which divides the interval into two subintervals. Intervals are recursively split into smaller subintervals, doubling the number of subintervals with each new level. Thus, at level $m$, we have specified the $2^m-1$ quantiles, $Q^m(i/2^m)$, $i = 1, 2, \ldots, 2^m-1$. The quantiles $Q^m(j/2^m) \equiv Q^{m-1}(j/2^m)$, $j = 2, 4, \ldots, 2^m-2$, are inherited from level $m-1$. The new quantiles at the $m^{th}$ level can be expressed, for $j = 1, 3, \ldots, 2^m-1$, as \begin{equation} \label{eq:qpconstruction} Q^m(j/2^m) = Q^{m-1}((j-1)/2^m)(1-V_{m, j}) + Q^{m-1}((j+1)/2^m)V_{m, j}, \end{equation} where $V_{m,j}$, $j = 1, 3, \ldots, 2^m-1$, are a set of mutually independent random variables with support $[0,1]$. For $\tau \in (0,1)$ other than the specified quantile levels, $Q(\tau)$ is filled in by linear interpolation. There is scope for a wide variety of choices for the distribution of the conditional medians (the $V_{m,j}$). If the $V_{m,j}$ are assumed to have mean $1/2$, for example, then $Q^m(\tau)$ forms a martingale sequence and has a limit almost surely by Doob's martingale convergence theorem. Moreover, if the $V_{m,j}$ are chosen so that $\max_{j \le 2^m}\{Q^m(j/2^m) - Q^m((j-1)/2^m)\} \overset{p}{\to} 0$, \cite{hjort2009quantile} showed that there exists a continuous limiting quantile process to which $Q^m$ converges.
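A minimal simulation sketch of this construction, based on equation~(\ref{eq:qpconstruction}) and assuming only NumPy, is given below. The symmetric beta$(c_m, c_m)$ choice for the conditional medians is illustrative; it has mean $1/2$, as in the martingale case, and echoes the hyperparameter $c_m = (m+5)^2$ used later in Section~\ref{sec:simulation}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def simulate_binary_qp(levels, c=lambda m: (m + 5) ** 2):
    """Draw one binary quantile pyramid on [0, 1].

    Returns the dyadic levels tau_j = j / 2^levels and the corresponding
    quantile values Q(tau_j), with Q(0) = 0 and Q(1) = 1.
    """
    Q = np.array([0.0, 1.0])                    # level 0: the endpoints
    for m in range(1, levels + 1):
        newQ = [Q[0]]
        for j in range(len(Q) - 1):
            v = rng.beta(c(m), c(m))            # conditional median V_{m,j}
            # new quantile between Q[j] and Q[j+1]
            newQ.append(Q[j] * (1 - v) + Q[j + 1] * v)
            newQ.append(Q[j + 1])
        Q = np.array(newQ)
    return np.linspace(0, 1, len(Q)), Q

taus, Q = simulate_binary_qp(levels=6)
# Q is nondecreasing by construction; np.interp fills in the remaining
# quantile levels by linear interpolation.
print(np.interp([0.05, 0.5, 0.95], taus, Q))
\end{verbatim}
Because each draw only subdivides an existing interval, the realized quantile function is nondecreasing by construction, which is the property exploited throughout the paper.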
While \cite{hjort2009quantile} focused on quantiles at the dyadic levels, we adapt the \textit{oblique pyramid} construction developed by \cite{rodrigues2019pyramid}, where the splits are not necessarily binary and the quantile levels are not necessarily dyadic.
\subsection{Dependent Quantile Pyramids} \label{sec:dqp}
In the sequel, we extend the QP from a single distribution to a collection of distributions, creating {\it a process of dependent quantile pyramids}. To do so, we construct a QP at each value of $x$ in some index set, $\mathcal{X}$. We replace each scalar $V_{m,j}$ in (\ref{eq:qpconstruction}) with an appropriate stochastic process, say $\{V_{x,m,j}, x \in \mathcal{X}\}$. This leads to a collection of QPs that may exhibit dependence across ${\cal X}$. Formally, we construct a distribution-valued stochastic process. For modeling purposes, the index set of the process is identified with the covariate space, as in QR. Alternatively, the index set may be described as containing values of predictors, spatial locations, constructed features, or time, to name a few possibilities. The important part is that, conditional on $x$, we have a model for a QP. Following \cite{hjort2009quantile}, we construct QPs on the unit interval $[0,1]$. We adopt the idea of the oblique QP \citep{rodrigues2019pyramid} and allow more than one quantile to be drawn in a subinterval. This allows more flexibility than does the binary tree structure with dyadic points. Figure~\ref{fig:DQPidea} illustrates this idea.
\begin{figure} \caption{\small (a) A binary QP by \cite{hjort2009quantile}; (b) an oblique QP with non-binary splits \citep{rodrigues2019pyramid}.} \label{fig:DQPidea} \end{figure}
\subsubsection{New Notation}
The DQP is constructed sequentially. We begin with a DQP with $m-1$ levels, where for any $n$-tuple $x_1, \ldots, x_n \in {\cal X}$ and the set of specified quantile levels, $\mathcal{T}_{m-1}$, the functions $Q_{x_i}^{m-1}(\tau)$, $\tau \in \mathcal{T}_{m-1}$, have been determined by the construction. At the next level, these values remain unchanged. Additionally, the functions $Q_{x_i}^m(\cdot)$ are pinned down at a new set of quantile levels, mirroring the construction of a single QP.
\begin{figure} \caption{\small An example of a DQP for $x$ with three levels. At level $0$, the interval at each $x$ is $[0,1]$. At each sub-interval, $K$ conditional quantiles at $x$ are specified, creating $(K+1)$ sub-intervals. ($K$ may differ for different sub-intervals.) At level $1$, for example, $K=2$ quantiles are specified, creating three sub-intervals. Collecting the quantiles across $x$ gives the quantile curves. The corresponding tree structure is displayed in the right panel. Dots and rhombi represent sub-intervals and quantiles, respectively. } \label{fig:notation_DQP} \end{figure}
The DQP can have more than just a binary tree structure, and so we introduce `$\epsilon$-notation' to indicate the location of conditional quantiles in a pyramid. Consider the sequence $\epsilon_1 \ldots \epsilon_m$. Each $\epsilon_l$, $l = 1, \ldots, m$, takes a positive integer value. This length-$m$ sequence denotes the location of a quantile in the $m^{th}$ level of a pyramid. For example, $Q_{x, \epsilon_1\epsilon_2}$ is a quantile located in the $2^{nd}$ level of the pyramid. This notation also tells us the structural relationship between quantiles. The parent quantiles of $Q_{x, \epsilon_1 \ldots \epsilon_m}$ are $Q_{x, \epsilon_1 \ldots \epsilon_{m-1}}$ and $Q_{x, \epsilon_1 \ldots (\epsilon_{m-1}+1)}$.
Within the sub-interval ($Q_{x, \epsilon_1 \ldots \epsilon_{m-1}}$, $Q_{x, \epsilon_1 \ldots (\epsilon_{m-1}+1)}$), $Q_{x, \epsilon_1 \ldots \epsilon_m}$ is the $\epsilon_m^{th}$ quantile. For convenience, a short form $\epsilon \equiv \epsilon_1 \cdots \epsilon_{m-1}$ will be used for the length-$(m-1)$ sequence throughout the paper. Figure \ref{fig:notation_DQP} illustrates the notation for a three-level DQP with a given value of $x$. A special case of the pyramid has binary splits for the dyadic quantile levels. In this case, all values of the $\epsilon_i$ are $0$ or $1$. The quantile $Q_{x,\epsilon_1 \cdots \epsilon_m}(\tau)$ at level $m$ is the $\tau = \sum_{i=1}^m \epsilon_i 2^{-i}$ quantile of the distribution at $x$. The $\epsilon$-notation is used in constructing each level of the quantile pyramid presented in Section \ref{sec:definition}. Another set of notation considers all the specified quantile levels in an $m$-level pyramid. The whole set of specified quantile levels up to level $m$ is denoted by $\mathcal{T}_m = \{\tau_1^*, \ldots, \tau_T^*\}$, where $\tau_1^* < \cdots < \tau_T^*$. This notation is used in showing existence of the process in Section \ref{sec:theory} and in specifying the density in Section \ref{sec:posteriorinference}. Indeed, $\mathcal{T}_m$ and $\{\tau_{\epsilon_1 \cdots \epsilon_m}\}$ consist of the same elements, represented in different notation.
\subsubsection{Definition} \label{sec:definition}
DQPs come in two main varieties. The first is the process of finite dependent quantile pyramids (FDQP) while the second, arrived at from a countable sequence of FDQPs, is termed the limit process of dependent quantile pyramids (LDQP). We begin with the FDQP. The FDQP focuses on a finite number of quantiles. As with the QP, the FDQP is defined sequentially. We focus on the quantiles in a single interval (e.g., the center interval at level $2$ in Figure~\ref{fig:notation_DQP}). The description applies to all such intervals. \begin{definition} We call $Q^M = \{Q_x^M, x \in \mathcal{X} \}$ a process of Finite Dependent Quantile Pyramids (FDQP) with $M$ levels if there exists an $M$-level QP-valued stochastic process whose distribution $Q^M$ follows. That is, for each $x \in \mathcal{X}$, $Q_x^M$ is an $M$-level QP on $[0,1]$, and, for each $n$ and distinct $x_1, \ldots, x_n \in \mathcal{X}$, Kolmogorov's permutation and marginalization conditions are satisfied. \end{definition} \begin{definition} We call $F^M = \{F_x^M, x\in\mathcal{X}\}$ a set of conditional distribution functions induced by a process of FDQP with $M$ levels, if, for every $x\in\mathcal{X}$, $F_x^M$ is a distribution function and $F_x^M(Q_x^M(\tau)) = \tau$ for all $\tau \in (0,1)$. \end{definition} \noindent The FDQP with $M$ levels is given by its quantile functions, $\{ Q_x^M(\tau) \}$, $x \in {\cal X}$, $\tau \in [0, 1]$. It is defined sequentially, beginning with level $1$ of the pyramid. Assume that $K$ quantiles are to be drawn at level $m$, for $m = 1, \ldots, M$, in a subinterval that comes from level $m-1$. The subinterval is $(Q_{x, \epsilon 0}, Q_{x, \epsilon (K+1)})$ for $x \in \mathcal{X}$ and $\epsilon = \epsilon_1 \cdots \epsilon_{m-1}$. Let $\{V_{x, \epsilon_1 \cdots \epsilon_{m}}, x \in \mathcal{X}\}$ be a multivariate stochastic process with index set ${\cal X}$.
For each $x \in \mathcal{X}$ and $\epsilon = \epsilon_1 \cdots \epsilon_{m-1}$, a random vector $(V_{x, \epsilon 1}, \ldots, V_{x, \epsilon (K+1)})$ follows a distribution with the following conditions: (1) $0 < V_{x, \epsilon k} < 1, \text{ for } k = 1, \ldots, (K + 1)$; and (2) $\sum_{k=1}^{K + 1} V_{x, \epsilon k} = 1$. That is, the vector lies in the interior of the $K$-dimensional simplex. The quantiles for the $K$ specified quantile levels in the interval are \begin{equation} \label{eq:dqpconstruction} Q_{x, \epsilon k} = Q_{x, \epsilon 0} \left(1 - \sum_{j=1}^k V_{x, \epsilon j} \right) + Q_{x, \epsilon (K+1)} \left(\sum_{j=1}^k V_{x, \epsilon j}\right), \quad k = 1, \ldots, K. \end{equation} A finite number of quantiles, $\tau_1^* < \cdots < \tau_T^*$, are specified in the $M$-level pyramid. Quantiles that are not specified directly in the pyramid are given by linear interpolation. For all $\tau \in (\tau_t^*, \tau_{t+1}^*)$, $t = 0, \ldots, T$ (with the conventions $\tau_0^* = 0$ and $\tau_{T+1}^* = 1$), we linearly interpolate $Q_{x}^M(\tau)$, i.e. \[ Q_{x}^M(\tau) = Q_{x}^M(\tau_t^*) + [Q_{x}^M(\tau_{t+1}^*)-Q_{x}^M(\tau_t^*)](\tau - \tau_t^*)/(\tau_{t+1}^* - \tau_t^*). \] The sequential construction and interpolation together define the conditional quantiles given $x$ for all quantile levels $\tau \in (0,1)$. The construction of the FDQP leads to its existence, which is formally proved in Section \ref{sec:dqpexistence}. At each value $x$, the distribution is defined by a finite collection of real-valued random variates. The use of stochastic processes that are well-defined ensures that the QP construction for a single $x$ extends to $\mathcal{X}$. We restrict attention to cases defined on a single probability space where the FDQP with $M$ levels is the marginal distribution for each of the FDQPs with more than $M$ levels, with appropriate renumbering of the quantiles in $\cup_{m=1}^M \mathcal{T}_m$, where $\mathcal{T}_m$ is the set of quantile levels specified at level $m$. Under suitable conditions the limit exists, in which case we have an infinite pyramid, leading to the LDQP. Our concern is with cases where a set of quantiles that is dense in $[0,1]$ is determined in the limit, and so we have no need for interpolation. The existence of the LDQP is established in Section \ref{sec:dqpexistence}. When the limit of a sequence of FDQPs exists as $m \to \infty$, the limit process of dependent quantile pyramids (LDQP) is defined. \begin{definition} We call $Q = \{Q_x, x \in \mathcal{X} \}$ a limit process of Dependent Quantile Pyramids (LDQP) if there exists a QP-valued stochastic process whose distribution $Q$ follows. That is, for each $x \in \mathcal{X}$, $Q_x$ is a QP on $[0,1]$, and, for each $n$ and distinct $x_1, \ldots, x_n \in \mathcal{X}$, Kolmogorov's permutation and marginalization conditions are satisfied. \end{definition} \begin{definition} We call $F = \{F_x, x\in\mathcal{X}\}$ a set of conditional distribution functions induced by a process of LDQP, if for every $x\in\mathcal{X}$, $F_x$ is a distribution function and $Q_x(\tau) = \inf\{y: F_x(y) \ge \tau\}$ for all $\tau \in (0,1)$. \end{definition}
\subsection{Canonical Construction} \label{sec:canonical}
In this section, we provide a canonical construction of the quantiles in a single subinterval at level $m$ of a process of DQPs. To do so, we construct $U$-processes and $V$-processes and then make use of equation~(\ref{eq:dqpconstruction}) in Section~\ref{sec:dqp}.
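Before describing the $U$- and $V$-processes, a minimal numerical sketch of the level-$m$ update in equation~(\ref{eq:dqpconstruction}) may be helpful. The sketch assumes only NumPy and uses illustrative names; the Dirichlet draw is purely for illustration, since the canonical construction that follows couples the $V$-vectors at different covariate values through Gaussian processes rather than drawing them independently.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def level_update(q_left, q_right, alpha):
    """One level-m update in a single subinterval at one covariate value.

    q_left, q_right : the endpoints Q_{x, eps 0} and Q_{x, eps (K+1)}.
    alpha           : Dirichlet parameters of length K+1; the draw
                      (V_1, ..., V_{K+1}) lies in the K-dimensional simplex.
    Returns the K new quantiles Q_{x, eps k}, k = 1, ..., K.
    """
    v = rng.dirichlet(alpha)            # V_{x, eps 1}, ..., V_{x, eps (K+1)}
    cum = np.cumsum(v)[:-1]             # partial sums for k = 1, ..., K
    return q_left * (1 - cum) + q_right * cum

# K = 2 new quantiles inside (0, 1) at three covariate values; independent
# draws are used here, whereas the canonical construction induces dependence
# across x.
for x in (1.0, 2.0, 3.0):
    print(x, level_update(0.0, 1.0, alpha=[1.0, 1.0, 1.0]))
\end{verbatim}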
We begin under the assumption that the choice of quantiles has been made and that this choice is consistent, matching the correct ordering of the quantiles. By repeatedly sampling $U$-processes and $V$-processes, we can proceed to sequential construction of the entire process of pyramids. \subsubsection{U-processes induced from Gaussian processes} Suppose that we are interested in obtaining $K$ quantiles at level $m$ for a subinterval $(Q_{x, \epsilon 0}, Q_{x, \epsilon (K+1)})$, $m \in \mathbb{N}$. For each sequence $\epsilon k$ for $k=1, \ldots, K$, we consider a Gaussian process (GP) $\{Z_{x, \epsilon k}, x \in \mathcal{X}\}$ with zero mean, unit variance, and some correlation function $\gamma(x, x')$, so that $Z_{x, \epsilon k} \sim \mathcal{GP}(\*0, \gamma(x, x')).$ The correlation function governs the interdependence of quantiles across the covariate space. We then construct the $U$-processes, $\{U_{x, \epsilon k}, x \in \mathcal{X} \}, k = 1, \ldots, K+1$, using the normal cdf transformation element-wise, i.e. for each $x \in \mathcal{X}$, $U_{x, \epsilon k} = \Phi(Z_{x, \epsilon k}),$ where $\Phi(\cdot)$ denotes the cumulative distribution function of the standard normal distribution. \subsubsection{V-processes induced from U-processes via gamma variates} We construct the $V$-processes, $\{V_{x, \epsilon k}, x \in \mathcal{X} \}, k = 1, \ldots, K+1$, from the $U$-processes. Let $G_{\epsilon k}(\cdot)$ be the gamma distribution function with shape $\alpha_k>0$ and scale $1$, for $k = 1, \ldots, K+1$. For each $x \in \mathcal{X}$ and combination $\epsilon = \epsilon_1 \cdots \epsilon_{m-1}$, define $Y_{x, \epsilon k} \equiv G^{-1}_{\epsilon k}(U_{x, \epsilon k}).$ Define $Y_x = \sum_{k=1}^{K+1} Y_{x, \epsilon k}$. Then, for each $x \in \mathcal{X}$ and $\epsilon$, we set \begin{equation*} \label{eq:Vprocess} (V_{x, \epsilon 1}, \ldots, V_{x, \epsilon (K+1)}) \equiv \left(\frac{Y_{x, \epsilon 1}}{Y_x}, \ldots, \frac{Y_{x, \epsilon (K+1)}}{Y_x}\right), \end{equation*} which forms a Dirichlet$(\alpha_1, \ldots, \alpha_{K+1})$ random vector. Lastly, collecting $V_{x, \epsilon k}$ together across $x \in \mathcal{X}$, we have constructed a vector-valued process $\{ (V_{x, \epsilon 1}, \ldots, V_{x, \epsilon (K+1)}), x \in \mathcal{X} \}$ with component processes $\{V_{x, \epsilon k}, x \in \mathcal{X}\}$, $k = 1, \ldots, K+1.$ \subsubsection{V-processes induced from U-processes via beta variates} The Dirichlet distribution can be derived from the product of independent beta random variates. This fact leads to a second construction of the $V$-processes. Define $\alpha_1, \ldots, \alpha_{K+1}$ as before. Let $G_{\epsilon k}(\cdot)$ be the distribution function of a beta variate with parameters $\alpha_k$ and $\sum_{j=k+1}^{K+1} \alpha_j$. Define $Y_{x, \epsilon k} = G_{\epsilon k}^{-1}(U_{x, \epsilon k})$ for each $x$ and $k = 1, \ldots, K$. Set \begin{eqnarray*} \label{eq:canonical_beta} V_{x, \epsilon k} & = & Y_{x, \epsilon k} \prod_{j=1}^{k-1} (1 - Y_{x, \epsilon j}) , \end{eqnarray*} with the conventions that an empty product is $1$ and that $V_{x, \epsilon (K+1)} = 1 - \sum_{k=1}^K V_{x, \epsilon k}$. The vector $(V_{x, \epsilon 1}, \ldots, V_{x, \epsilon (K+1)})$ follows the desired Dirichlet distribution. \subsubsection{Martingale construction} \label{sec:martingale} One construction of the dyadic QP in \cite{hjort2009quantile} relies on a martingale. At each level of the QP, each interval is split in half. 
In this canonical construction, $K=1$ and the martingale property requires that $\alpha_1 = \alpha_2$ for a beta distribution. Allowing for splits that do not bisect the interval, this becomes, at level $m$ of the pyramid, $\alpha_1 = c (\tau_{\epsilon 1} - \tau_{\epsilon 0})$ and $\alpha_2 = c (\tau_{\epsilon 2} - \tau_{\epsilon 1})$ for some $c > 0$. For a non-binary split of an interval, the values of $\alpha_k$ need to be mapped to the relevant subintervals. Let $\gamma_{\epsilon}$ denote the scaled quantile level in a subinterval and assume that the subinterval at the $m^{th}$ level contains quantiles associated with $K$ specified quantile levels $\tau_{\epsilon 1} < \ldots < \tau_{\epsilon K}$. The quantile levels of the left and right endpoints of the subinterval are denoted by $\tau_{\epsilon 0}$ and $\tau_{\epsilon (K+1)}$, respectively. Then \begin{equation*} \gamma_{\epsilon k} = (\tau_{\epsilon k} - \tau_{\epsilon 0})/(\tau_{\epsilon (K+1)} - \tau_{\epsilon 0}), \qquad k=1, \ldots, K. \end{equation*} From this, we can derive the values of $\alpha_k$ satisfying the martingale condition. That is, $\alpha_k = c(\tau_{\epsilon k} - \tau_{\epsilon (k-1)})$ for $k=1,\ldots, K+1$ and some $c > 0$. We note that the chosen quantiles $\tau_\epsilon$ do not depend on $x$.
\subsection{Mapping to the Response Space} \label{sec:mapping}
Assume that the DQP has been constructed on $(0,1)$ and denote the QP at $x$ by $Q_x(\tau)$. As briefly mentioned in \cite{hjort2009quantile}, we can transform from $(0,1)$ to an appropriate response space, say the real line, using the inverse normal cumulative distribution function (cdf). Defining trend parameters $\mu_x$ and scale parameters $\sigma_x$, we have the canonical DQP regression model, centered on a normal theory regression: \begin{equation} \label{eq:DQPtransform} \{ Q_x^{\cal R}(\tau) = \underbrace{\mu_x}_\text{Trend} + \overbrace{\underbrace{\sigma_x}_\text{Scale} \cdot \Phi^{-1}(Q_x(\tau))}^{\text{Local Fluctuation}}, x \in \mathcal{X} \} \end{equation} The trend parameter $\mu_x$ is shared across the quantiles and controls the overall trend of quantiles throughout the covariate space. The scale parameter controls the dispersion of the distributions while the realized QPs determine the departures from normality. Together, the scale and QP determine the departure (or local fluctuation) from a set of constant-variance normal models with centers given by $\mu_x$. Various choices can be made for the trend and scale. If one believes that there is no significant global trend, $\mu_x$ can be set to a constant. Alternatively, a linear regression model, $\mu_x = x^\top\*\beta$, may provide an effective choice. The model is compatible with more complicated models for the trend. Similarly, models for $\sigma_x$ can range from a simple constant scale to more complex forms. The trend and scale parameters can be incorporated into Markov chain Monte Carlo (MCMC) procedures used to fit the model. Alternatively, to save computational effort, estimates can be plugged in and the parameters treated as fixed.
\section{Theoretical Results} \label{sec:theory}
The novel notation used in this section is formally defined in \nameref{sec:appendix_notation}, while proofs of the results appear in \nameref{sec:appendix_proofs}.
\subsection{A Novel Approach to Existence of Quantile Pyramid} \label{sec:qpexistence}
\cite{hjort2009quantile} provide two different proofs for the existence of the QP.
The following results establish the existence of the QP under slightly weaker conditions than those of \cite{hjort2009quantile}. They also apply to the oblique pyramid of \cite{rodrigues2019pyramid}. The QP is a probability measure for a distribution-valued random element. To show its existence, we wish to show that the sequence of probability measures that define the finite quantile pyramids (FQPs) converges to a limiting probability measure. Our argument relies on two key facts. First, the space of probability measures over distribution functions with support contained in $[0, 1]$, when equipped with the Prokhorov metric, is compact \citep{parthasarathy1967probability}. Second, and yet to be established, the sequence of probability measures forms a Cauchy sequence. Together, these facts lead to the conclusion that the sequence of FQPs converges to a limit and that the limit is a probability measure on distribution-valued elements. A first lemma bounds the L\'evy distance, $d_L(\cdot,\cdot)$, between two distribution functions that share a set of quantiles. Let $0 = \tau_0^* < \tau_1^* < \tau_2^* < \ldots < \tau_T^* < 1 = \tau_{T+1}^*$ be ordered quantile levels. The largest gap between consecutive quantile levels appears in the bound below. \begin{lemma} \label{Lemma:Levy} Define $\epsilon = \max_{t = 1, \ldots, T+1} (\tau_t^* - \tau_{t-1}^*)$. Assume that $F$ and $G$ are two distribution functions such that there exist $y_1, \ldots, y_T$ for which $F(y_t) = G(y_t) = \tau_t^* $ for $ t = 1, \ldots, T$. Then, $d_L(F,G) \leq \epsilon$. \end{lemma} The construction of the QP proceeds through a sequence of FQPs, indexed by $m$, the number of levels in the pyramid. These FQPs are defined on a single probability space $(\Omega, {\cal B}, \mu)$. Each $\omega \in \Omega$ defines a sequence of FQPs with $m = 1, 2, \ldots$ levels. The distributions in the sequence all have support contained in $[0,1]$ and share values of the quantile function at certain specified quantile levels, as described in Section~\ref{sec:quantilepyramid}. In particular, the $\tau_{\epsilon_1\cdots\epsilon_m}$ quantiles in $F^m(\omega)$ and $F^{m+k}(\omega)$ are identical for all $\omega$ and for any positive integer $k$. $\mathcal{B}$ is the Borel $\sigma$-field generated by the open sets under the L\'evy metric. The next lemma shows that the sequence of probability measures arising from the FQPs is Cauchy. The measure $\mu_m$ provides the probability distribution on $F^m$. The distance between $\mu_m$ and $\mu_n$ is measured by the Prokhorov metric, $d_P(\mu_m, \mu_n)$. \begin{lemma} \label{Lemma:Cauchy} Suppose that $\cup_{m=1}^{\infty} \mathcal{T}_m$ is dense in $[0, 1]$. Then, under the Prokhorov metric $d_P$, the sequence $\{ \mu_m \}_{m=1}^{\infty}$ is a Cauchy sequence. \end{lemma} A Cauchy sequence on a compact set converges to a limit in the set. In this case, the sequence of measures $\mu_m$ converges to a limit measure $\mu$. The limit measure is a probability measure on distribution-valued elements where the distributions have support contained in the interval $[0,1]$. This reasoning leads to the existence of the QP, stated formally in the next theorem. We note that the dyadic construction of the QP satisfies the denseness condition in Lemma~\ref{Lemma:Cauchy}. \begin{theorem} \label{Theorem:existence} Suppose that $\cup_{m=1}^{\infty} \mathcal{T}_m$ is dense in $[0, 1]$. 
Then, the QP constructed as in Section \ref{sec:quantilepyramid} is a random element whose distribution is determined by $(\Omega, {\cal B}, \mu)$. \end{theorem}
\subsection{Existence of a Process of Dependent Quantile Pyramids} \label{sec:dqpexistence}
The existence of a process of DQPs follows from consideration of the joint distributions of the QPs at finite sets of indices in the index set. The construction of the FDQP in Section~\ref{sec:dqp} ensures the existence of the joint distribution of the corresponding sequence of FQPs. From here, the argument parallels that of the previous section, with the L\'evy metric and Prokhorov metric replaced with their suprema over the finite set of indices. This ensures the existence of a probability space on which the limiting DQPs satisfy the requisite permutation and marginalization conditions. The DQP relies on a countable collection of real-valued stochastic processes, $V_{x,\epsilon k}$, all of which have index set ${\mathcal X}$. Each of these processes satisfies Kolmogorov's consistency axioms. The processes are defined on a single probability space. Selecting a finite set of distinct indices, say, $x_1, \ldots, x_n \in {\cal X}$, the sequence of FDQPs at these indices is generated by a countable collection of real-valued random variables. The permutation and marginalization conditions follow immediately. This establishes the following lemma. \begin{lemma} (Existence of FDQP) \label{Lemma:existenceFDQP} Under the conditions in the previous paragraph, for each $m \in \mathbb{Z}^+$, there exists a distribution-valued stochastic process, $F^m = \{F_x^m, x \in \mathcal{X}\}$, with the specified finite-dimensional distributions. \end{lemma} Define the suprema of the L\'evy metric and Prokhorov metric over a set $S$ to be $d_{L_u}(F^m,F^n) = \sup_{x \in S} d_L(F_x^m, F_x^n)$ and $d_{P_u}(\mu^m,\mu^n) = \sup_{x \in S} d_P(\mu_x^m,\mu_x^n)$, respectively. \nameref{sec:appendix_proofs} shows that $d_{L_u}$ and $d_{P_u}$ are metrics. We first reprise Lemma~\ref{Lemma:Levy}. \begin{lemma} \label{Lemma:Levy2} Define $\epsilon = \max_{t = 1, \ldots, T+1} (\tau_t^* - \tau_{t-1}^*)$. Assume that $F_x$ and $G_x$, $x \in S$, are two sets of distribution functions such that, for each $x \in S$, there exist $y_{x,1}, \ldots, y_{x,T}$ for which $F_x(y_{x,t}) = G_x(y_{x,t}) = \tau_t^*$ for $t = 1, \ldots, T$. Then, $d_{L_u}(F,G) \leq \epsilon$. \end{lemma} The construction of the FDQPs ensures that, for all $\omega$ and for each $x \in \{ x_1, \ldots, x_n \}$, the $\tau_{\epsilon_1\cdots\epsilon_m}$ quantiles in $F_x^m(\omega)$ and $F_x^{m+k}(\omega)$ are identical for every positive integer $k$. This lets us apply the lemma. With a suitable choice of quantile levels, the sequence of probability measures for the set of $n$ FQPs is Cauchy. \begin{lemma} \label{Lemma:Cauchy2} Suppose that $\cup_{m=1}^{\infty} \mathcal{T}_m$ is dense in $[0, 1]$. Then, with $S = \{ x_1, \ldots, x_n \}$, under the supremum Prokhorov metric $d_{P_u}$, the sequence $\{ \mu_m \}_{m=1}^{\infty}$ is a Cauchy sequence. \end{lemma} Finally, noting that the product space $D_S$ of collections of distributions with support contained in $[0,1]$ is compact under $d_{L_u}$ by Tychonoff's Theorem, we know that $\mu_m$ converges to some probability measure $\mu$. This ensures the existence of the limiting DQP. \begin{theorem} \label{Theorem:existence2} Suppose that $\cup_{m=1}^{\infty} \mathcal{T}_m$ is dense in $[0, 1]$.
Then, for any $S = \{x_1, \ldots, x_n\} \subset \mathcal{X}$, the DQP for $x \in S$ constructed as in Section \ref{sec:dqp} is a random element whose distribution is determined by $(\Omega, {\cal B}, \mu)$. Furthermore, there exists a stochastic process defined for all $x \in {\cal X}$ whose finite-dimensional joint distributions on the QPs are exactly these. \end{theorem}
\section{Posterior Inference} \label{sec:posteriorinference}
In this section, we discuss how to specify the DQP prior and the likelihood under the canonical construction. Suppose we have $T$ quantiles of interest, $\tau_1^*, \ldots, \tau_T^*$, to be specified on the DQP. For simplicity, we work with the binary pyramid. A similar formulation can be laid out in the general case. Recall that with the canonical construction of the DQP in Section \ref{sec:canonical} and the canonical transformation in Section \ref{sec:mapping}, quantiles are generated on the uniform scale and are transformed to the real line.
\subsection{Prior specification} \label{sec:prior}
A DQP $\{Q_{x}(\tau), \tau \in (0, 1)\}$ is defined for all $x \in \mathcal{X}$ as a stochastic process. Then, the finite-dimensional distributions of the DQP $(\{Q_{x_1}(\tau), \tau \in (0,1)\}, \ldots, \{Q_{x_n}(\tau), \tau \in (0,1)\})$ can be defined for each $n$-tuple $(x_1, \ldots, x_n)$ of distinct elements of $\mathcal{X}$ \citep{billingsley1968}. While the $n$-tuple can be arbitrary, in practice, we choose the coordinates to be covariate values for the data points. Let the quantiles of interest be denoted by $\*Q$. For convenience, we view $\*Q$ as a $T \times n$ matrix. The $t^{th}$ row of $\*Q$ (denoted by $Q_{\tau_t^*}$) is the vector of ${\tau_t^*}$ quantiles at $x_1, \ldots, x_n$ for $t = 1, \ldots, T$. The $\tau_t^*$ quantile at $x_i$ is denoted by $Q_{x_i, \tau_t^*}$. The parent quantiles that define the left and right endpoints of the interval on which $Q_{x_i, \tau_t^*}$ is generated are denoted by $Q_{x_i, \tau_t^*}^L$ and $Q_{x_i, \tau_t^*}^R$. The vectors $Q_{\tau_t^*}^L$ and $Q_{\tau_t^*}^R$ have $Q_{x_i, \tau_t^*}^L$ and $Q_{x_i, \tau_t^*}^R$ as their elements, respectively. Write $\*\mu_x$ for the vector of trend transformation parameters and $\*\sigma_x$ for the vector of scale transformation parameters, where $\mu_{x_i}$ and $\sigma_{x_i}$ denote the elements of $\*\mu_x$ and $\*\sigma_x$, respectively, corresponding to the value of $x_i$. For a GP given the input points $(x_1, \ldots, x_n)$, denoted by $\*Z$, let the mean vector $\*\mu = (\mu_{1}, \ldots, \mu_{n})$ and covariance matrix $\*\Sigma = [\sigma_{x_i, x_j}]_{i,j=1}^n$ be parameters, where $\mu_{i}$ is the mean and $\sigma_{i} = \sqrt{\sigma_{x_i, x_i}}$ is the standard deviation corresponding to the value of $x_i$. Let $\Psi(\cdot; a, b)$ denote the cdf of the $beta(a, b)$ distribution and $\Phi(\cdot; \eta, \nu)$ denote the cdf of the $N(\eta, \nu^2)$ distribution.
Given the parameters $\*\mu, \*\Sigma, \*\mu_x, \*\sigma_x$, the joint conditional density of the finite dimensional distribution of the specified quantile levels of the DQP at $(x_1, \ldots, x_n)$ is \begin{equation*} \begin{aligned} \pi \left(\mathbf{Q} \mid \*\mu, \*\Sigma, \*\mu_x, \*\sigma_x \right) =\prod_{t=1}^{T} \left\{ \phi_n\left( h_1(Q_{x_1, \tau_t^*}), \ldots, h_n(Q_{x_n, \tau_t^*}); \*\mu, \*\Sigma \right) \times \left\vert \*J(\tau_t^*) \right\vert \right\}, \end{aligned} \end{equation*} where $\phi_n(\cdot; \eta, V)$ denotes the pdf of the $n$-variate Normal, $N_n(\eta, V)$, the back-transformed quantiles to the $Z$-scale \begin{align*} h_i(Q_{x_i, \tau}) &= h_i(Q_{x_i, \tau} | Q_{x_i, \tau}^L, Q_{x_i, \tau}^R, \mu_i, \sigma_i, \mu_{x_i}, \sigma_{x_i}) \\ &= \Phi^{-1}\left( \Psi \left(\frac{\Phi\left(Q_{x_i, \tau} ; \mu_{x_i}, \sigma_{x_i}\right)-\Phi\left(Q_{x_i, \tau}^L ; \mu_{x_i}, \sigma_{x_i}\right)}{\Phi\left(Q_{x_i, \tau}^R ; \mu_{x_i}, \sigma_{x_i}\right)-\Phi\left(Q_{x_i, \tau}^L ; \mu_{x_i}, \sigma_{x_i}\right)}; a_{\tau}, b_{\tau} \right) ; \mu_{i}, \sigma_{i} \right), \end{align*} and the determinant of the Jacobian matrix is of the form \begin{align*} |\*J(\tau) | = \prod_{i=1}^n & \left\vert \frac{\partial h_i(Q_{x_i, \tau}| Q_{x_i, \tau}^L, Q_{x_i, \tau}^R)}{\partial Q_{x_i, \tau}}\right\vert\\ = \prod_{i=1}^n & \left\{ \frac{1}{\phi(Z_{x_i, \tau}; \mu_{i}, \sigma_{i})} \times \psi \left(\frac{\Phi\left(Q_{x_i, \tau} ; \mu_{x_i}, \sigma_{x_i}\right)-\Phi\left(Q_{x_i, \tau}^L ; \mu_{x_i}, \sigma_{x_i}\right)}{\Phi\left(Q_{x_i, \tau}^R ; \mu_{x_i}, \sigma_{x_i}\right)-\Phi\left(Q_{x_i, \tau}^L ; \mu_{x_i}, \sigma_{x_i}\right)}; a_{\tau}, b_{\tau} \right) \right.\\ & \left. \times \frac{\phi\left(Q_{x_i, \tau} ; \mu_{x_i}, \sigma_{x_i}\right)}{\Phi\left(Q_{x_i, \tau}^R ; \mu_{x_i}, \sigma_{x_i}\right)-\Phi\left(Q_{x_i, \tau}^L ; \mu_{x_i}, \sigma_{x_i}\right)} \right\}, \end{align*} where $\psi(\cdot; a, b)$ denotes the pdf of the $beta(a, b)$ distribution. \subsection{Likelihood} Since we use the normal distribution for the transformation from $[0,1]$ to ${\cal R}$, the conditional density of the response variable is piecewise-normal. Given the values of the covariate $\*X = [x_1 \cdots x_n]^\top$ and response $\*y = (y_1, \cdots, y_n)^\top$, the likelihood is \begin{align*} f(\*y | \*X, \*Q, \*\mu_x, \*\sigma_x) &= \prod_{i=1}^n \left[ \sum_{t=1}^{T+1} \frac{(\tau_t^* - \tau_{t-1}^*)\times I_{(Q_{x_i} (\tau_{t-1}^*), Q_{x_i}(\tau_t^*)]}(y_i) \times \phi(y_i; \mu_{x_i}, \sigma_{x_i}) }{\Phi(Q_{x_i}(\tau_t^*); \mu_{x_i}, \sigma_{x_i}) - \Phi(Q_{x_i}(\tau_{t-1}^*); \mu_{x_i}, \sigma_{x_i})} \right], \end{align*} where $I_A(x)$ is an indicator function whose value is one when $x \in A$ and zero otherwise. \subsection{Inference through the DQP Construction} Posterior inference can be made based on the DQP prior and likelihood using an MCMC procedure. The final estimate of the QR model would be obtained by taking the posterior mean of the conditional quantiles at each $x$. The integration of dependence across the covariate space forms a fundamental building block in the DQP framework. This leads to conditional quantiles being intrinsically dependent on each other through both the GP covariance structure, $\*\Sigma$, and the pyramid structure. Increasing the level of dependence results in a smoother QR model across the covariate space, while reducing dependence leads to a more flexible and bumpier model. 
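As a check on the likelihood just defined, the following is a minimal sketch, assuming SciPy and NumPy, with illustrative function and variable names. It evaluates the piecewise-normal density of a single response given the specified quantiles at its covariate value and verifies that the density collapses to the $N(\mu_x, \sigma_x^2)$ density when the supplied quantiles are exactly those of that normal distribution.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def piecewise_normal_density(y, q, tau, mu_x, sigma_x):
    """Density of one response y given the specified quantiles at its x.

    q   : the T specified quantiles Q_x(tau_1*), ..., Q_x(tau_T*) on the
          response scale (strictly increasing).
    tau : the quantile levels tau_1* < ... < tau_T*.
    Uses the conventions tau_0* = 0, tau_{T+1}* = 1 and
    Q_x(0) = -inf, Q_x(1) = +inf.
    """
    edges = np.concatenate([[-np.inf], q, [np.inf]])
    levels = np.concatenate([[0.0], tau, [1.0]])
    t = np.searchsorted(edges, y)          # y lies in (edges[t-1], edges[t]]
    prob = levels[t] - levels[t - 1]       # tau_t* - tau_{t-1}*
    mass = norm.cdf(edges[t], mu_x, sigma_x) - norm.cdf(edges[t - 1], mu_x, sigma_x)
    return prob * norm.pdf(y, mu_x, sigma_x) / mass

mu_x, sigma_x = 1.0, 2.0
tau = np.array([0.25, 0.5, 0.75])
q = norm.ppf(tau, mu_x, sigma_x)           # quantiles of N(mu_x, sigma_x^2)
print(piecewise_normal_density(0.3, q, tau, mu_x, sigma_x))
print(norm.pdf(0.3, mu_x, sigma_x))        # identical to the line above
\end{verbatim}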
\subsubsection{Linearized inference under the DQP} Bayesian methods allow one to distinguish between beliefs about a quantity, say a QR surface, and inference about the quantity. One strategy that has proven successful is to perform modeling in a large space and to impose parsimony through a restriction on the inference (e.g., \cite{maceachern2001decision}, \cite{hahn2015decoupling}). In contrast, practitioners of classical statistics generally impose parsimony by working in a smaller model space or through model selection. Much of the literature on QR, both classical and Bayesian, focuses on linear QR. The DQP model naturally produces a posterior distribution that assigns probability one to nonlinear QR surfaces. To linearize inference, one needs a distribution over the covariate and a set of draws from the posterior distribution of the QR surface. Each draw of a posterior QR surface is projected into a linear QR surface. The projection makes use of the distribution over covariates to map the nonlinear QR surface to the best fitting linear QR surface. As a criterion, we minimize the integrated squared difference between the nonlinear and linear QR surfaces. That is, with $x$ following distribution $G$ and a quantile level $\tau$, \begin{equation} \label{eq:minimization} \*\beta_L(\tau) = \arg\min_{\*\beta^*} \left(\int ( Q_x(\tau) - x^\top\*\beta^*)^2 dG(x) \right) . \end{equation} For a discrete distribution $G$, this minimization is obtained as a least squares fit. The minimization in (\ref{eq:minimization}) produces a collection of draws of linear QR surfaces. These surfaces can be summarized, by computing, for example, the posterior mean linear QR surface. We examine the benefits of linearized inference in the simulation studies of the next section. \section{Simulation Study} \label{sec:simulation} We conducted simulation studies to evaluate how the DQP performs. We considered two sets of quantiles: $T = 3$ $(0.25, 0.50, 0.75)$ and $T = 7$ $(0.05, 0.10, 0.25, 0.50, 0.75, 0.90, 0.95)$. The covariate is $x_{ij} = i$ and the response is $Y_{ij}$ for $i=1, \ldots, 10$ and $j = 1, \ldots, r$. Let $N(0, 1)$ denote the standard normal distribution and $t_{df}$ a t-distribution with $df$ degrees of freedom. We considered the following three scenarios for the simulation study. \begin{description} \item[Scenario 1.] (Homogeneous Error) $Y_{ij} = x_{ij} + \epsilon_{ij}, \text{ where } \epsilon_{ij} \overset{iid}{\sim} F $ \item[Scenario 2.] (Heterogeneous Error) $Y_{ij} = x_{ij} + \epsilon_{ij}, \text{ where } \epsilon_{ij} \overset{iid}{\sim} \begin{cases} \sqrt{10} F \hspace{0.5em} \text{ if } 5 \le i \le 6 \\ F \hspace{2.3em} \text{ otherwise } \end{cases}$ \item[Scenario 3.] (Non-linear association between the covariate and the response)\\ (1) $Y_{ij} = \sin(x_{ij}) + \epsilon_{ij}, \text{ where } \epsilon_{ij} \overset{iid}{\sim} N(0, 1)$ \\ (2) $Y_{ij} = \exp(1/x_{ij}) + \epsilon_{ij}, \text{ where } \epsilon_{ij} \overset{iid}{\sim} N(0, 1)$ \end{description} For Scenarios 1 and 2, (1) $F = N(0,1)$; (2) $F = t_{20}$; (3) $F = t_3$. Scenario 1 examines the case that the error variance is homogeneous and the normality assumption used in the mapping from $(0,1)$ to ${\cal R}$ is satisfied (1-1), slightly violated (1-2), and more severely violated (1-3). In Scenario 2, the error variance is not constant, with a larger variance for certain covariate values, and the mapping is correctly specified (2-1), slightly misspecified (2-2), or more severely misspecified (2-3). 
Finally, Scenario 3 showcases settings where the error is normally distributed with homogeneous variance, but the covariate and response variables have a nonlinear association. For each setting, we used $r = 10$ and $r = 30$ replicates at each value of $x$, corresponding to overall sample sizes of $n = 100$ and $n = 300$, respectively. In each of our $32$ scenario-$T$-sample size combinations, we generated $N=100$ data sets, fit QRs, and calculated the empirical mean-squared error (MSE) for each quantile level $\tau$ at $x \in \mathcal{X}$: $\text{MSE}_x(\tau) = \frac{1}{N} \sum_{s=1}^N (\widehat{Q}_x^s(\tau) - Q_x(\tau))^2,$ where $\widehat{Q}_x^s(\tau)$ is the estimated $\tau$ quantile at $x$ from the $s^{th}$ simulated data set. We further computed the MSE for the quantile, averaged over the distribution of $x$, as $\text{MSE}(\tau) = \sum_{x=1}^{10} 0.1 \,\, \text{MSE}_x(\tau)$, which in turn is averaged over $\tau$ as $\text{AMSE} = \frac{1}{T} \sum_{t=1}^T \sum_{x=1}^{10} 0.1 \,\, \text{MSE}_x(\tau_t^*)$. We fit a binary FDQP with two levels for the $T=3$ case and with three levels for the $T=7$ case based on the normal inverse CDF transformation from (\ref{eq:DQPtransform}), centering the FDQP on a linear regression model with $\mu_x = x^\top \*\beta$. Regarding the dependence of the pyramids, we used the canonical construction for the Gaussian processes with zero mean vector and Gaussian covariance function with no nugget effect, variance parameter $\sigma^2 = 1$, and distance parameter $\phi=5$. For the beta distribution in the construction of the binary pyramid prior, we used the canonical values for the hyperparameters with $c_m = (m+5)^2$. The prior distribution for $\boldsymbol{\beta}$ is bivariate normal with mean $\mu_0 = (5, 0)^\top$ and variance matrix $\Sigma_0 = \mbox{diag}(3, 3)$. For the scale transformation parameter $\sigma_x$, we used the sample standard deviation at $x$ as a plug-in estimator. We compare our method to three alternatives: quantreg \citep{koenker1978regression}, Yu \& Moyeed's approach as implemented in bayesQR \citep{benoit2017bayesqr}, and qrjoint \citep{yang2017joint}. The estimators are evaluated by AMSE, with values summarized in Figure~\ref{fig:AMSE}; the individual AMSE values, with standard errors, appear in Tables~\ref{table:MSEn100} and~\ref{table:MSEn300} in Appendix C. For the MCMC runs of bayesQR, qrjoint, and DQP, we used a warm-up of $1,000$ iterates followed by $100,000$ draws for estimation, thinned at a rate of $1$ in $100$.
\begin{figure} \caption{\small AMSE values for each combination of scenario-$T$-sample size} \label{fig:AMSE} \end{figure}
Examining Figure~\ref{fig:AMSE}, the linearized DQP (DQP-lm) demonstrated competitive performance relative to the alternative methods under Scenario 1. Under Scenarios 2-1 and 2-2, the DQP had the lowest AMSE for every combination of $n$ and the number of quantiles. This is because the DQP is not only robust to mild misspecification of the mapping from $(0, 1)$ to the response variable scale but can also account for the local nonlinearity arising from a larger variance at specific locations of the covariate. Among the linear methods, DQP-lm performed the best under Scenarios 2-1 and 2-2. Even under Scenario 2-3, where the normal assumption in the DQP transformation is violated, the DQP still outperformed the other methods in most of the cases, except when $n=100$ and $T=7$.
Regarding the nonlinear association between the covariate and the response, represented by Scenarios 3-1 and 3-2, the DQP has lower AMSE than the three other alternatives for any combination of $n$ and the number of quantiles, except for the case of $n=100$ and $T=7$. This is because the DQP can capture the local nonlinearity of quantiles, such as the cyclic movement in Scenario 3-1 and the sudden drop for small covariate values in Scenario 3-2. The advantage of the DQP becomes more evident as $n$ increases. \section{Application} \label{sec:cyclone} \cite{elsner2008increasing} observed that the trend in tropical cyclone intensity in the North Atlantic Ocean from 1981 to 2006 is different for different quantiles. They fit separate linear QRs for multiple quantiles using the R package `quantreg' \citep{koenker_2005} and found that the higher quantiles showed an upward trend, with statistically significant slopes above the 0.7 quantile. Lower quantiles had slopes closer to zero. \cite{kadane2012simultaneous} analyzed the same data set and came to a different conclusion. They developed a Bayesian approach to simultaneously fit a collection of linear QRs. They found that the increasing trend in the cyclone intensity was significant for almost all quantiles, not just the upper quantiles. We analyzed Elsner et al.'s data set (https://myweb.fsu.edu/jelsner/temp/Data.html). The data consist of the lifetime maximum wind speeds of $291$ North Atlantic tropical cyclones derived from satellite imagery. The wind speeds are the response ($Y$) and range from 29.8 to 159.5 $ms^{-1}$. The year is the covariate ($x$). We fit a binary DQP with four levels and 15 quantiles at $\tau$ = 0.05, 0.10, 0.20, 0.25, 0.30, 0.35, 0.40, 0.50, 0.60, 0.65, 0.70, 0.75, 0.80, 0.90, and 0.95. We used the beta canonical construction with Gaussian processes with zero mean vector and the exponential covariance function with no nugget effect, variance parameter $\sigma^2 = 1$, and distance parameter $\phi=5$. For the beta distribution, the canonical values were used for the hyperparameters with $c_m = (m+5)^2$. We again used the canonical transformation with linear regression model for the trend parameter, i.e. $\mu_x = x^\top \*\beta$. The prior distribution for $\boldsymbol{\beta}$ is bivariate normal with mean $\mu_0 = (75, 0.5)^\top$ and variance matrix $\Sigma_0 = \mbox{diag}(15, 2)$. We used the sample standard deviation for each year as a plug-in value for the scale parameter $\sigma_x$. We ran $10,000$ warm up iterates of MCMC followed by $200,000$ iterates, thinned to $2,000$ iterates for estimation. \begin{figure} \caption{\small (a) 15 quantiles of cyclone intensity. Grey lines are the posterior means of the quantiles and black lines are the linearized fit of the posterior mean. Dots are the data points. (b) Estimated slopes of the quantile lines with 95\% empirical credible intervals.} \label{fig:cyclone_plots} \end{figure} Figure~\ref{fig:cyclone_plots} presents the results of our analysis. In panel (a), the grey lines provide the posterior means at each year and the black lines the linearized fit. The lines are overlaid on the data points. The 95\% credible intervals for the slopes of the linearized fit are shown in panel (b). Overall, the slopes are greater for greater quantiles, implying that stronger cyclones have been getting stronger more quickly. Indeed, the 95\% credible interval for the slope of the 0.95 quantile is observed to be well above that of the 0.50 quantile. 
This means that the most intense cyclones are increasing in strength at a significantly faster rate than the moderately strong cyclones. On the other hand, the 95\% credible intervals for the slope of the quantiles below 0.50 include zero. The slopes do not significantly differ from zero for lower quantiles. This is consistent with the perspective of \cite{elsner2008increasing} and contrary to that of \cite{kadane2012simultaneous}. The difference between our result and that of \cite{elsner2008increasing} lies in the mid-range quantiles, from 0.50 to 0.65. While the slopes for these quantiles were not significant in \cite{elsner2008increasing}, our result shows that these slopes are significantly different from zero. Overall, our method applied to these data shows the intensity of the upper half of the cyclones to be increasing, with more powerful cyclones intensifying at a more rapid rate.
\section{Discussion} \label{sec:discussion}
We propose a nonparametric Bayesian approach to quantile regression, based on a process of DQPs. The DQP generalizes Hjort \& Walker's QP to allow dependence across a predictor space. The flexibility of the model allows us to depart from linearity and to handle regressions for multiple quantiles in unbounded predictor spaces without quantile crossing. The canonical construction can be adapted to account for a variety of features of the data. As examples, the standard normal distribution used in the inverse cdf transformation can be replaced with a distribution with different tails to handle thick- or thin-tailed data; skewness in the quantiles can be handled by replacing the symmetric normal distribution with an asymmetric distribution; and positive-valued responses can be handled by basing the transformation on a distribution supported on the half line. The linear regression model for the mean can be replaced with a nonlinear form, resulting in large-scale nonlinearity of the QRs. Simple choices for this include the use of a deterministic form such as a fixed-knot spline or a stochastic form such as a Gaussian process. Replacement of the linear form for the scale factor ($x^\top \*\gamma$) with a form that ensures positivity, for example, $\exp( x^\top \*\gamma)$, relieves concerns about the implications of unbounded $\mathcal{X}$. In the simulation examples, we employed the sample standard deviation at the location $x$ as a plug-in estimator for the scale transformation parameter $\sigma_x$. An alternative is to use a pooled sample standard deviation in situations where heterogeneity is not suspected, such as Scenario 1. The results of this alternative will be included in our future work. The simulations we report above rely on a single predictor. The stochastic processes used for the FDQP are Gaussian processes that are then passed through transformations to arrive at the FDQP. Extension to the case of multiple predictors is straightforward through the use of Gaussian processes with a multivariate index. The Gaussian processes can be replaced with other processes.
\section{Appendix A} \label{sec:appendix_notation}
\begin{itemize} \item $(\Omega, \mathcal{B}, \mu)$ := a triple that defines a probability space \item $D$ := the space of all distributions with support contained in $[0, 1]$ \item $D_S$ := the product space of collections of (conditional) distributions with support contained in $[0, 1]$, indexed by a finite set $S \subset \mathcal{X}$ \item $\mathcal{B}'$ := the Borel $\sigma$-field of $D$, i.e.
all Borel sets of distributions on $D$ \item $\mathcal{B}'_S$ := the Borel $\sigma$-field of $D_S$ \item $d_L(F,G)$ := the L\'evy metric between distributions $F$ and $G$ ($F,G \in D$) \item $d_L(F,A) = \inf \{d_L(F,G) \mid G \in A \}$ := the L\'evy distance between a distribution $F \in D$ and a Borel set $A \in \mathcal{B}'$ \item $A_\alpha = \{F \in D \mid d_L(F,A) < \alpha \}$ := the open $\alpha$ ball ($\alpha > 0$) about $A \in \mathcal{B}'$ \\ (or $A_\alpha = \{F \in D_S \mid d_{L_u}(F,A) < \alpha \}$ for $A \in \mathcal{B}'_S$) \item $\mathcal{P} = \mathcal{P}(\mathcal{B}')$ := all Borel probability measures on the Borel $\sigma$-field of $D$ \item $\mathcal{P}_{S} = \mathcal{P}_{S}(\mathcal{B}'_{S})$ := all Borel probability measures on the Borel $\sigma$-field of $D_{S}$ \item $d_P(\mu,\nu)$ := the Prokhorov metric on $\mathcal{P}$ for $\mu, \nu \in \mathcal{P}$, that is \[ d_P(\mu,\nu) = \inf_{\alpha > 0} \{ \alpha \mid \mu(A) \leq \nu(A_\alpha) + \alpha \mbox{ and } \nu(A) \leq \mu(A_\alpha) + \alpha \mbox{ for all } A \in \mathcal{B}' \} \] \item $F^m$ := an $m$-level FQP in Section \ref{sec:qpexistence} and FDQP in Section \ref{sec:dqpexistence} \item $\Omega_{A,m} = \{ \omega \mid F^m(\omega) \in A \}$ := the $\omega$-set for which $F^m(\omega)$ lies in $A \in \mathcal{B}'$ \item $\Omega_{A_\alpha,m} = \{ \omega \mid F^m(\omega) \in A_\alpha \}$ := the $\omega$-set for which $F^m(\omega)$ lies in the $\alpha$ ball about $A \in \mathcal{B}'$ \item $\mu_m$ := a probability measure in $\mathcal{P}$ arising from all the $m$-level pyramids, which assigns probabilities to subsets of $D$ (or $D_S$ for some $S \subset \mathcal{X}$) \item $\mu_m(A) = \mu(\Omega_{A,m})$ := the probability that the distribution induced from an $m$-level pyramid belongs to $A \in \mathcal{B}'$ \end{itemize}
\section{Appendix B} \label{sec:appendix_proofs}
\noindent {\bf Proof of Lemma~\ref{Lemma:Levy}.} Set $y_0 = 0$ and $y_{T+1} = 1$, and consider $y \in [y_{t-1}, y_t]$. Then $\tau_{t-1}^* \leq F(y), G(y) \leq \tau_t^*$. Thus $F(y) - \epsilon \leq \tau_t^* - \epsilon \leq \tau_{t-1}^*$ and so $F(y) - \epsilon \leq G(y)$. Similarly, $F(y) + \epsilon \geq \tau_{t-1}^* + \epsilon \geq \tau_t^*$ and so $F(y) + \epsilon \geq G(y)$. Thus $F(y - \epsilon) - \epsilon \leq G(y) \leq F(y + \epsilon) + \epsilon$. Repeating the argument above for $t = 1, \ldots, T+1$ establishes that $d_L(F,G) \leq \epsilon$. $\blacksquare$ \noindent {\bf Proof of Lemma~\ref{Lemma:Cauchy}.} Fix $\epsilon > 0$. Choose $M$ such that $\max_{t \in \{1, \ldots, T_M+1\}} (\tau_t^* - \tau_{t-1}^*) < \epsilon$. Such an $M$ always exists provided that $\cup_{m=1}^{\infty}\mathcal{T}_m$ is dense in $[0, 1]$ and each $F^m(\omega)$ has the same pyramid structure. Applying Lemma~\ref{Lemma:Levy}, we have $d_L(F^m(\omega), F^n(\omega)) < \epsilon$ for all $m, n \geq M$ and for all $\omega \in \Omega$. That is, $F^n(\omega)$ is in the $\epsilon$-ball about $F^m(\omega)$. Consider an arbitrary Borel set $A \in \mathcal{B}'$. Define the sets $\Omega_{A,m} = \{ \omega \mid F^{m}(\omega) \in A \}$ and $\Omega_{A_\epsilon,n} = \{ \omega \mid F^{n}(\omega) \in A_\epsilon \}$. For every $\omega \in \Omega_{A,m}$, $F^n(\omega)$ is in the $\epsilon$-ball about $F^m(\omega)$. Hence $\omega \in \Omega_{A_\epsilon,n}$ and so $\Omega_{A,m} \subset \Omega_{A_\epsilon,n}$. Turning to the probability measures on the FQPs with $m$ and $n$ levels, we note that $\mu_m(A) = \mu(\Omega_{A,m})$ and $\mu_n(A_\epsilon) = \mu(\Omega_{A_\epsilon,n})$.
Since $\Omega_{A,m} \subset \Omega_{A_\epsilon,n}$, $\mu_m(A) \leq \mu_n(A_\epsilon) < \mu_n(A_\epsilon) + \epsilon$. A similar argument shows that $\mu_n(A) < \mu_m(A_\epsilon) + \epsilon$. This holds for all Borel $A$, and so $d_P(\mu_m, \mu_n) \leq \epsilon$. Thus, for each $\epsilon > 0$, there is an $M$ such that, for all $m, n \geq M$, $d_P(\mu_m, \mu_n) \leq \epsilon$. The sequence $\{ \mu_m \}_{m=1}^\infty$ is Cauchy. $\blacksquare$ \noindent {\bf Proof of Theorem~\ref{Theorem:existence}.} By Lemma~\ref{Lemma:Cauchy}, the sequence $\{\mu_m\}_{m=1}^{\infty}$ is Cauchy. Moreover, the space $\mathcal{P}$ equipped with the Prokhorov metric is compact, and thus complete, since the space $D$ equipped with Lévy metric is compact (see Theorem 2.6.4 in \cite{parthasarathy1967probability}). Therefore, the sequence $\{\mu_m\}_{m=1}^{\infty}$ is convergent. That is, there exists $\mu$ to which $\mu_m$ converges and that $\mu$ provides a probability distribution on $\lim_{m\to \infty} F^m$. Thus, a limit of QP exists. $\blacksquare$ \noindent {\bf Lemma Appendix 1.} $d_{L_u}$ as defined in Section~\ref{sec:dqpexistence} is a metric on the space of distributions. $d_{P_u}$ is a metric on the space of probability measures over distributions. The set $\cal{S}$ need not have finite cardinality. \noindent{\bf Proof.} Symmetry, non-negativity, the triangle inequality, and the zero property follow from straightforward calculation. With a nod to the St.\ Petersburg paradox, we must show that $d_{L_u}(F^m,F^n) < \infty$. Since $d_L(F_x^m,F_x^n) \leq 1$ for all $x \in {\cal S}$, we have that $d_{L_u}(F^m,F^n) \le 1$. The argument for $d_{P_u}$ is established in the same way. $\blacksquare$ \noindent {\bf Proof of Lemma~\ref{Lemma:Levy2}.} From Lemma~\ref{Lemma:Levy}, we have $d_L(F_x,G_x) \leq \epsilon$ for all $x \in {\cal S}$. Thus $d_{L_u}(F,G) \leq \epsilon$. $\blacksquare$ \noindent {\bf Proof of Lemma~\ref{Lemma:Cauchy2}.} Replace $d_L$ with $d_{L_u}$, $d_P$ with $d_{P_u}$, and $\mathcal{B}'$ with $\mathcal{B}'_S$ in the proof of Lemma~\ref{Lemma:Cauchy}. $\blacksquare$ \noindent{\bf Proof of Theorem~\ref{Theorem:existence2}.} The proof of the first part of the theorem follows that of Theorem~\ref{Theorem:existence}. Kolmogorov's permutation condition is satisfied at each step in the sequence of FDQPs since each of the $V_x$ processes satisfies the condition. His marginalization condition is also satisfied. Since $\{x_1, \ldots, x_n \}$ was arbitrary, this ensures the existence of a stochastic process with the specified limiting distributions (e.g., \cite{billingsley1968}, chapter 7). $\blacksquare$ \section{Appendix C} \begin{table}[h!] 
{\renewcommand{1.7}{1.7} \centering \scriptsize \begin{tabular}{ x{1.5em} | x{4em} x{4em} x{3.7em} x{3.7em} x{4.2em} | x{4em} x{4em} x{3.7em} x{3.7em} x{4.2em} } &&& T = 3 &&&&& T = 7 && \tabularnewline & quantreg & bayesQR & qrjoint & DQP & DQP-lm & quantreg & bayesQR & qrjoint & DQP & DQP-lm \tabularnewline \hline 1-1 & 0.0353 & 0.0293 & 0.0277 & 0.0463 & 0.0301 & 0.0574 & 0.0421 & 0.0350 & 0.1076 & 0.0510 \tabularnewline & (0.0035) & (0.0029) & (0.0028) & (0.0032) & (0.0031) & (0.0055) & (0.0042) & (0.0034) & (0.0058) & (0.0051) \tabularnewline 1-2 & 0.0395 & 0.0323 & 0.0304 & 0.0540 & 0.0331 & 0.0702 & 0.0506 & 0.0404 & 0.1293 & 0.0530 \tabularnewline & (0.0035) & (0.0030) & (0.0028) & (0.0035) & (0.0033) & (0.0076) & (0.0054) & (0.0039) & (0.0064) & (0.0050) \tabularnewline 1-3 & 0.0588 & 0.0512 & 0.0516 & 0.2043 & 0.0959 & 0.4409 & 0.3432 & 0.2989 & 0.6842 & 0.1712 \tabularnewline & (0.0074) & (0.0064) & (0.0057) & (0.0176) & (0.0108) & (0.0523) & (0.0344) & (0.0284) & (0.0642) & (0.0195) \tabularnewline 2-1 & 0.2937 & 0.2809 & 0.2788 & 0.0861 & 0.2636 & 1.3685 & 1.1694 & 1.1271 & 0.2460 & 1.1033 \tabularnewline & (0.0059) & (0.0049) & (0.0050) & (0.0058) & (0.0038) & (0.0511) & (0.0187) & (0.0117) & (0.0166) & (0.0080) \tabularnewline 2-2 & 0.3021 & 0.2917 & 0.2888 & 0.1041 & 0.2762 & 1.4621 & 1.2579 & 1.2129 & 0.2958 & 1.1887 \tabularnewline & (0.0061) & (0.0056) & (0.0054) & (0.0069) & (0.0041) & (0.0525) & (0.0170) & (0.0110) & (0.0185) & (0.0073) \tabularnewline 2-3 & 0.3832 & 0.3679 & 0.3641 & 0.4922 & 0.4434 & 2.8263 & 2.2268 & 2.1619 & 1.7753 & 2.1811 \tabularnewline & (0.0087) & (0.0073) & (0.0068) & (0.0944) & (0.0206) & (0.1651) & (0.0434) & (0.0390) & (0.3882) & (0.0539) \tabularnewline 3-1 & 0.5219 & 0.5192 & 0.5099 & 0.3356 & 0.5105 & 0.5869 & 0.6074 & 0.5684 & 0.4031 & 0.5253 \tabularnewline & (0.0053) & (0.0048) & (0.0041) & (0.0063) & (0.0041) & (0.0116) & (0.0104) & (0.0080) & (0.0094) & (0.0055) \tabularnewline 3-2 & 0.1365 & 0.1302 & 0.1292 & 0.1134 & 0.1324 & 0.1661 & 0.1633 & 0.1408 & 0.1756 & 0.1526 \tabularnewline & (0.0035) & (0.0033) & (0.0030) & (0.0037) & (0.0030) & (0.0071) & (0.0063) & (0.0043) & (0.0068) & (0.0049) \tabularnewline \end{tabular} } \caption{AMSE values over 100 simulated datasets with standard errors in parentheses when $n=100$.} \label{table:MSEn100} \end{table} \begin{table}[h!] 
{\renewcommand{1.7}{1.7} \centering \scriptsize \begin{tabular}{ x{1.5em} | x{4em} x{4em} x{3.7em} x{3.7em} x{4.2em} | x{4em} x{4em} x{3.7em} x{3.7em} x{4.2em} } &&& T = 3 &&&&& T = 7 && \tabularnewline & quantreg & bayesQR & qrjoint & DQP & DQP-lm & quantreg & bayesQR & qrjoint & DQP & DQP-lm \tabularnewline \hline 1-1 & 0.0116 & 0.0105 & 0.0100 & 0.0175 & 0.0092 & 0.0183 & 0.0148 & 0.0133 & 0.0367 & 0.0151 \tabularnewline & (0.0012) & (0.0011) & (0.0010) & (0.0010) & (0.0009) & (0.0019) & (0.0016) & (0.0013) & (0.0018) & (0.0014) \tabularnewline 1-2 & 0.0134 & 0.0121 & 0.0105 & 0.0195 & 0.0106 & 0.0239 & 0.0199 & 0.0153 & 0.0417 & 0.0176 \tabularnewline & (0.0014) & (0.0013) & (0.0010) & (0.0012) & (0.0010) & (0.0024) & (0.0021) & (0.0015) & (0.0022) & (0.0016) \tabularnewline 1-3 & 0.0221 & 0.0211 & 0.0197 & 0.0781 & 0.0343 & 0.2384 & 0.2234 & 0.2123 & 0.3621 & 0.1015 \tabularnewline & (0.0022) & (0.0023) & (0.0020) & (0.0075) & (0.0036) & (0.0196) & (0.0171) & (0.0133) & (0.0504) & (0.0133) \tabularnewline 2-1 & 0.2593 & 0.2571 & 0.2561 & 0.0434 & 0.2409 & 1.1491 & 1.1068 & 1.0972 & 0.0965 & 1.0521 \tabularnewline & (0.0026) & (0.0026) & (0.0024) & (0.0035) & (0.0015) & (0.0162) & (0.0103) & (0.0107) & (0.0072) & (0.0026) \tabularnewline 2-2 & 0.2714 & 0.2681 & 0.2654 & 0.0454 & 0.2503 & 1.2422 & 1.1992 & 1.1776 & 0.0986 & 1.1391 \tabularnewline & (0.0030) & (0.0031) & (0.0025) & (0.0039) & (0.0014) & (0.0193) & (0.0122) & (0.0071) & (0.0064) & (0.0025) \tabularnewline 2-3 & 0.3386 & 0.3359 & 0.3287 & 0.2687 & 0.3596 & 2.1926 & 2.0864 & 2.0261 & 1.4277 & 2.1116 \tabularnewline & (0.0040) & (0.0040) & (0.0032) & (0.0557) & (0.0097) & (0.0338) & (0.0208) & (0.0155) & (0.4497) & (0.0539) \tabularnewline 3-1 & 0.4944 & 0.4937 & 0.4907 & 0.1457 & 0.4790 & 0.5555 & 0.5547 & 0.5427 & 0.1866 & 0.4825 \tabularnewline & (0.0023) & (0.0024) & (0.0020) & (0.0034) & (0.0013) & (0.0065) & (0.0059) & (0.0048) & (0.0051) & (0.0016) \tabularnewline 3-2 & 0.1115 & 0.1102 & 0.1091 & 0.0530 & 0.1083 & 0.1232 & 0.1214 & 0.1158 & 0.0755 & 0.1144 \tabularnewline & (0.0014) & (0.0014) & (0.0011) & (0.0017) & (0.0010) & (0.0025) & (0.0023) & (0.0017) & (0.0028) & (0.0016) \tabularnewline \end{tabular} } \caption{AMSE values over 100 simulated datasets with standard errors in parentheses when $n=300$.} \label{table:MSEn300} \end{table} \end{document}
\begin{document} \title{Factorization identities for reflected processes, with applications} \author{Brian H. Fralix, Johan S.H. van Leeuwaarden and Onno J. Boxma \footnote{BF (corresponding author) is with Clemson University, Department of Mathematical Sciences, O-110 Martin Hall, Box 340975, Clemson, SC 29634, USA. Email: {\tt [email protected]}. JvL and OB are with Eindhoven University of Technology, Department of Mathematics and Computer Science, Eindhoven, The Netherlands. Emails: {\tt [email protected]}, {\tt [email protected]}}} \maketitle \begin{abstract} We derive factorization identities for a class of preemptive-resume queueing systems, with batch arrivals and catastrophes that, whenever they occur, eliminate multiple customers present in the system. These processes are quite general, as they can be used to approximate L\'evy processes, diffusion processes, and certain types of growth-collapse processes; thus, all of the processes mentioned above also satisfy similar factorization identities. In the L\'evy case, our identities simplify to both the well-known Wiener-Hopf factorization and another interesting factorization of reflected L\'evy processes starting at an arbitrary initial state. We also show how the ideas can be used to derive transforms for some well-known state-dependent/inhomogeneous birth-death processes and diffusion processes. \end{abstract} \noindent \textbf{Keywords:} L\'evy processes, Palm distribution, random walks, time-dependent behavior, Wiener-Hopf factorization \noindent \textbf{2010 MSC:} 60G50, 60G51, 60G55, 60K25 \section{Introduction} The Wiener-Hopf factorization is a classical result in both the theory of random walks and the theory of L\'evy processes. For a L\'evy process $X$, the factorization allows us to write the position of $X$ at an independent exponential time $e_q$, i.e. $X(e_q)$, as the sum of two independent random variables: $\inf_{0 \leq s \leq e_q}X(s)$ and $X(e_q) - \inf_{0 \leq s \leq e_q}X(s)$, with the latter random variable representing the reflection of $X$ at the random time $e_q$. In principle, the distribution of the reflected process at time $e_q$ can be derived if and only if the distribution of the infimum of $X$ over $[0,e_q]$ is known. We show that a similar type of property is also found in processes that may not necessarily be expressible as a reflection of a simpler process. To do this, we introduce the Preemptive-Resume Production system, or PRP system, and we show that it satisfies a factorization identity. Technically, for an arbitrary PRP system the identity is not a true factorization, but it is in some cases: in the L\'evy case, for instance, our factorization identity is equivalent to the Wiener-Hopf factorization. The notion of a PRP system may appear at first to be somewhat contrived, but this is not the case: such systems can be used to approximate many types of important processes found in the probability literature, such as L\'evy processes, diffusion processes, and even Markovian growth-collapse models. Our factorization results also provide insight into the time-dependent behavior of a number of important birth-death processes, with birth/death rates that may depend on the state of the system. For instance, our Wiener-Hopf identity shows how the probability mass function of the $M/M/s$ queue-length at an independent exponential time $e_q$ can be expressed entirely in terms of quantities from an $M/M/1$ queue and an $M/M/\infty$ queue.
Similarly, an $M/M/s/K$ queue (assuming $s < K$; the case $s = K$ is trivial) can be expressed in terms of an $M/M/\infty$ queue and an $M/M/1/(K-s)$ queue, and a similar observation may be made for Markovian queues with reneging. In particular, the pmf for the $M/M/s/K$ queue can be quickly derived from the solutions to the $M/M/\infty$ queue and the $M/M/1/(K-s)$ queue, without having to make use of the Kolmogorov forward equations corresponding to the $M/M/s/K$ queue. Similar expressions can also be derived for diffusions that can be expressed as limits of birth-death processes. Readers wondering why we are interested in studying the distribution of $X(e_q)$ should note that $P(X(e_{q}) = k)$ can be expressed as $q$ times the Laplace transform of the function $P(X(t) = k)$ evaluated at $q$, where $q$ is a positive real number. Hence, having knowledge of $X(e_q)$ yields insight into the behavior of $X(t)$, for each $t \geq 0$. Even though we restrict ourselves to the case where $q$ is real and positive, it is possible to derive similar transform expressions for the function $P(X(t) = k)$ at complex numbers with positive real part: readers will find explanations of how to make such extensions at various places throughout the paper, whenever they are needed. The factorization results we present here seem to be somewhat related to those found in Millar \cite{Millar}. The main result of \cite{Millar} establishes that for a Markov process $X$ satisfying suitable regularity conditions, the distribution of the path of $X$ from the time at which a functional of it attains a minimum is independent of the behavior of $X$ before this minimum is attained. In contrast to \cite{Millar}, our factorization results are valid for processes that are not necessarily Markovian, and our results also show how various transforms associated with some processes can be decomposed into computable transforms associated with other types of simpler stochastic processes, as previously mentioned. \section{Model Description} We now define what we refer to as a Preemptive-Resume Production system, or PRP system. At time zero there is a countably infinite number of customers present, labeled $n_0, n_0 -1, n_0 -2, n_0 -3, \ldots$. The system then begins to process the work of the customer that possesses the highest label, or number, which at time zero is customer $n_0$. The server processes jobs in accordance with the Last-Come-First-Served Preemptive-Resume discipline. All customers possess a random, generally distributed amount of work, and the amount of work possessed by a given customer is independent of the amounts of work of all other customers that will visit, or have visited, the system. We are interested in studying the process $Q := \{Q(t); t \geq 0 \}$, where $Q(t)$ represents the label of the customer being served by the server at time $t$: for example, $Q(0) = n_0$. There are two sets of Poisson processes governing arrivals to the production system. The first set governs single arrivals to the system, and consists of an independent collection of Poisson processes $\{A_{0, j}\}_{j \in \mathbb{Z}}$, where $A_{0,j}$ has rate $\lambda_{0,j}$. At an arbitrary time $t$, when $Q(t-) = j$, we say that $A_{0,j}$ is active: in other words, if a point of $A_{0,j}$ occurs at time $t$ while $Q(t-) = j$, then $Q(t) = j + 1$, and the new arrival is immediately given label $j + 1$. Otherwise, i.e. if $Q(t-) = k \neq j$, the point of $A_{0,j}$ occurring at time $t$ is ignored, so no new customer arrives to the system at that time.
Once the server finishes with the customer having label $j+1$, it begins serving customer $j$ again, resuming from where it previously left off. The second set of Poisson processes governs batch arrivals of customers to the system (we allow batches to be of size one). This second set consists of an independent collection of Poisson processes $\{A_{1,j,k}\}_{j,k \in \mathbb{Z}}$, where $A_{1,j,k}$ has rate $\lambda_{1,j}P(Z_{1,j} = k - j)$. Again, while $Q(t-) = j$, we say that the subcollection $\{A_{1,j,k}\}_{k \in \mathbb{Z}}$ is active, so a point of $A_{1,j,k}$ (with $k > j$) at time $t$ pushes $Q$ from level $j$ to level $k$, the $k - j$ customers in the batch are instantaneously assigned labels $j+1$, $j + 2$, \ldots, $k$, and the server immediately begins processing customer $k$. Here $Z_{1,j}$ is a generic random variable representing the jump size of the $Q$ process from level $j$: we allow the distribution of these jumps to depend on the current level. We further assume that catastrophes occur according to a modulated Poisson process $D := \{D(t); t \geq 0\}$, with rate $\delta_{Q(t-)}$. At the time of a catastrophe, a random number of customers are removed from the system: in particular, if $Q(t-) = n$, and a catastrophe occurs at time $t$, which eliminates $k$ customers, then customers $n, n-1, n-2, \ldots, n-k+1$ are immediately removed from the system, and at time $t$ the server begins to process the remaining amount of work possessed by customer $n-k$, and so $Q(t) = n-k$. We assume that the distribution function of the number of removals at time $t$ depends on $Q(t-)$, so that the downward jump distribution of the process may depend on the level of the process immediately before the jump. Readers may wonder why we chose to use an infinite collection of independent Poisson processes to govern arrivals to our queueing system, while not modeling catastrophes in the same manner. The answer lies in the proof of our main result, as modeling the arrival processes in this way allows us to derive a linear system of equations in an efficient manner. Indeed, catastrophes can be modeled in the same way, but these will not play as important a role in our proofs. Our use of collections of Poisson processes to model the arrival process was inspired by Chapter 9 of Br\'emaud \cite{BremaudMC}, who makes use of such a framework when constructing continuous-time Markov chains. Readers wishing to rigorously construct our PRP systems in the same manner can follow the procedure given there, by expanding the state space of the PRP system to include the residual service time of each customer in the system, thus making it a stochastic recursive system, and Markovian. Indeed, since customers in the system possess generally distributed amounts of work, $\{Q(t); t \geq 0\}$ is not a Markov process unless the state space is expanded to include the residual service times. Later we will use these processes to approximate L\'evy processes: arrivals from the $\{A_{0,j}\}_{j}$ collection and service completions of the server will be used to construct Brownian motion, while the batch arrivals and catastrophe processes will be used to construct compound Poisson processes. Finally, we also consider a `reflected' PRP system $\{Q_{l}(t); t \geq 0\}$, where $l$ is a fixed integer.
This system behaves in the same manner as $Q$, with the following exception: whenever $Q_{l}$ is in a state $i$ and a catastrophe occurs which, in the original system, would place $Q$ at or below level $l$, $Q_{l}$ instead makes a transition from state $i$ to state $l$. When $Q_l$ is at level $l$, the server stops working until the next arrival: hence, customer $l$ is in the system for all time. Finally, upward jumps of $Q_{l}$ behave the same as upward jumps of $Q$. We refer to $Q_{l}$ as a reflected PRP system with reflection at level $l$. \section{Main Results} Our main result establishes that the process $\{Q(t); t \geq 0\}$ from the PRP system satisfies a factorization identity, which we now give. \begin{theorem} \label{mainresult} Let $e_q$ be an exponential random variable with rate $q > 0$, independent of $Q$. For any two integers $k, l$, where $k \geq 0$ and $l \leq n_0 = Q(0)$, \begin{eqnarray*} P(Q_{l}(e_q) = k + l \mid Q_{l}(0) = l) &=& P(Q(e_q) = k + l \mid \inf_{0 \leq u \leq e_q}Q(u) = l) \\ &=& P(Q(e_q) - \inf_{0 \leq u \leq e_q}Q(u) = k \mid \inf_{0 \leq u \leq e_q}Q(u) = l). \end{eqnarray*} \end{theorem} \begin{proof} To help readers understand the proof, we break it up into three steps. \noindent \textbf{Step 1} We begin by presenting the following identity, which is satisfied by the sample paths of our PRP system: for each $t \geq 0$, we see that for any two integers $k,l$ with $k \geq 1$, $l \leq n_0 = Q(0)$, \begin{eqnarray} \label{mainindicator} & & \textbf{1}(Q(t) \geq k + l, \inf_{0 \leq u \leq t}Q(u) = l) \nonumber \\ &=& \int_{0}^{t}\textbf{1}(Q(s-) = k - 1 + l, \inf_{0 \leq u < s}Q(u) = l)\textbf{1}(\inf_{u \in [s,t]}Q(u) \geq k + l)A_{0,k-1+l}(ds) \nonumber \\ &+& \sum_{j=0}^{k-1}\sum_{m = k}^{\infty}\int_{0}^{t}\textbf{1}(Q(s-) = j + l, \inf_{0 \leq u < s}Q(u) = l)\textbf{1}(\inf_{u \in [s,t]} Q(u) \geq k + l)A_{1,j + l, m + l}(ds). \end{eqnarray} The identity (\ref{mainindicator}) says that, in order that $Q(t) \geq k + l$ while the infimum of the process over $[0,t]$ equals $l$, exactly one of two things must happen: either (i) there exists a time point $s \leq t$ such that $Q(s-) = k-1 + l$, $Q(s) = k + l$ (due to the arrival of a single customer, i.e. a point of $A_{0,k-1+l}$, at time $s$), and the process stays at or above level $k + l$ on $[s,t]$, giving the first term, or (ii) there exists a time point $s \leq t$ such that, due to a batch of customers arriving at time $s$ (a point of one of the $A_{1,\cdot,\cdot}$ processes), the process crosses level $k + l$, reaching some level at or above $k + l$ at time $s$, and stays at or above $k + l$ during $[s,t]$, giving the second term. After taking expected values of both sides of (\ref{mainindicator}), we get \begin{eqnarray} \label{firstcalculation1} & & P(Q(t) \geq k + l, \inf_{0 \leq u \leq t}Q(u) = l) \nonumber \\ &=& E\left[\int_{0}^{t}\textbf{1}(Q(s-) = k-1 + l, \inf_{0 \leq u < s}Q(u) = l)\textbf{1}(\inf_{u \in [s,t]}Q(u) \geq k + l)A_{0,k-1 + l}(ds)\right] \nonumber \\ &+& \sum_{j=0}^{k-1}\sum_{m=k}^{\infty}E\left[\int_{0}^{t}\textbf{1}(Q(s-) = j + l, \inf_{0 \leq u < s}Q(u) = l)\textbf{1}(\inf_{u \in [s,t]}Q(u) \geq k + l)A_{1,j + l, m + l}(ds)\right]. \end{eqnarray} We can use the Campbell-Mecke formula to evaluate the expected values found on the right-hand side of Equation (\ref{firstcalculation1}).
Notice first that \begin{eqnarray*} & & E\left[\int_{0}^{t}\textbf{1}(Q(s-) = k-1 + l, \inf_{0 \leq u < s}Q(u) = l)\textbf{1}(\inf_{u \in [s,t]}Q(u) \geq k + l)A_{0,k-1 + l}(ds)\right] \\ &=& \lambda_{0,k-1+l}\int_{0}^{t}\mathcal{P}_{s}(Q(s-) = k-1 + l, \inf_{0 \leq u < s}Q(u) = l, \inf_{u \in [s,t]}Q(u) \geq k + l)ds, \end{eqnarray*} where $\mathcal{P}$ represents the Palm kernel induced by $A_{0,k-1 + l}$. Furthermore, since the server processes work in a preemptive-resume manner, we can also use the Campbell-Mecke formula to establish that \begin{eqnarray*} \label{goodpreemptiveresumecalc} & & \mathcal{P}_{s}(\inf_{u \in [s,t]}Q(u) \geq k + l, Q(s-) = k-1 + l, \inf_{0 \leq u < s}Q(u) = l) \nonumber \\ &=& P(\tau_{k + l,k + l} > t-s)\mathcal{P}_{s}(Q(s-) = k-1 + l, \inf_{0 \leq u < s}Q(u) = l) \end{eqnarray*} where $\tau_{k,j}$ is the amount of time it takes the PRP system to go below state $j$, starting from state $k$, $j \leq k$, where all customers labeled $j,j+1, \ldots, k$ have not yet received any attention from the server. Moreover, if we let $\{\mathcal{F}_{t}; t \geq 0\}$ represent the minimal filtration induced by $Q$ and our arrival and catastrophe processes, we see that the event $\{Q(s-) = k-1 + l, \inf_{0 \leq u < s}Q(u) = l\} \in \mathcal{F}_{s-}$, and so Proposition \ref{ASTAprop} in the Appendix yields \begin{eqnarray*} \mathcal{P}_{s}(Q(s-) = k-1 + l, \inf_{0 \leq u < s}Q(u) = l) = P(Q(s) = k-1 + l, \inf_{0 \leq u \leq s}Q(u) = l). \end{eqnarray*} An analogous argument can be used to evaluate the second type of expectation found in (\ref{firstcalculation1}). Plugging these expressions into (\ref{firstcalculation1}) gives \begin{eqnarray} \label{firstcalculation2} & & P(Q(t) \geq k + l, \inf_{0 \leq u \leq t}Q(u) = l) = \lambda_{0, k-1 + l}\int_{0}^{t}P(Q(s) = k-1 + l, \inf_{0 \leq u \leq s}Q(u) = l)P(\tau_{k + l,k + l} > t-s)ds \nonumber \\ &+& \sum_{j=0}^{k-1}\sum_{m=k}^{\infty}\lambda_{1,j + l}P(Z_{1,j + l} = m-j)\int_{0}^{t}P(\tau_{m + l, k + l} > t-s)P(Q(s-) = j + l, \inf_{0 \leq u \leq s}Q(u) = l)ds. \end{eqnarray} After integrating both sides of (\ref{firstcalculation2}) with respect to an exponential density with rate $q > 0$, we get \begin{eqnarray*} & & P(Q(e_q) \geq k + l, \inf_{0 \leq u \leq e_q} Q(u) = l) = \lambda_{0,k-1 + l}\frac{(1 - \phi_{k + l,k + l}(q))}{q}P(Q(e_q) = k-1 + l, \inf_{0 \leq u \leq e_q}Q(u) = l) \\ &+& \sum_{j=0}^{k-1}\sum_{m=k}^{\infty}\lambda_{1,j + l} P(Z_{1,j + l} = m-j) \frac{(1 - \phi_{m + l, k + l}(q))}{q}P(Q(e_q) = j + l, \inf_{0 \leq u \leq e_q}Q(u) = l) \end{eqnarray*} where $\phi_{m + l,k + l}$ represents the Laplace-Stieltjes transform of $\tau_{m + l,k + l}(0)$ (with $Q(0) = m + l$). Dividing by $P(\inf_{0 \leq u \leq e_q}Q(u) = l)$ finally yields \begin{eqnarray} \label{infequations} & & P(Q(e_q) \geq k + l \mid \inf_{0 \leq u \leq e_q} Q(u) = l) = \lambda_{0,k-1 + l}\frac{(1 - \phi_{k + l,k + l}(q))}{q}P(Q(e_q) = k-1 + l \mid \inf_{0 \leq u \leq e_q}Q(u) = l) \nonumber \\ &+& \sum_{j=0}^{k-1}\sum_{m=k}^{\infty}\lambda_{1,j + l} P(Z_{1,j+l} = m - j) \frac{(1 - \phi_{m + l, k + l}(q))}{q}P(Q(e_q) = j + l \mid \inf_{0 \leq u \leq e_q}Q(u) = l). \end{eqnarray} \noindent \textbf{Step 2} We now show that the system of equations (\ref{infequations}) has a unique solution. Notice that for a fixed integer $l$, these equations can be iteratively solved, since \begin{eqnarray*} \sum_{k=0}^{\infty}P(Q(e_q) = k + l \mid \inf_{0 \leq u \leq e_q}Q(u) = l) = 1. 
\end{eqnarray*} Indeed, notice that \begin{eqnarray*} & & 1 - P(Q(e_q) = l \mid \inf_{0 \leq u \leq e_q}Q(u) = l) = P(Q(e_q) \geq l + 1 \mid \inf_{0 \leq u \leq e_q}Q(u) = l) \\ &=& \lambda_{0,l}\frac{(1 - \phi_{l+1,l+1}(q))}{q}P(Q(e_q) = l \mid \inf_{0 \leq u \leq e_q}Q(u) = l) \nonumber \\ &+& \sum_{m=1}^{\infty}\lambda_{1,l} P(Z_{1,l} = m) \frac{(1 - \phi_{m + l, 1 + l}(q))}{q}P(Q(e_q) = l \mid \inf_{0 \leq u \leq e_q}Q(u) = l) \end{eqnarray*} which allows us to determine $P(Q(e_q) = l \mid \inf_{0 \leq u \leq e_q}Q(u) = l)$, and all other probabilities can be determined in a similar, iterative manner. Hence, there is a unique probability measure on the integers that satisfies these equations. \noindent \textbf{Step 3} By precisely the same arguments, we see that the $Q_{l}$ process satisfies the same system of equations. Indeed, when $Q_{l}(0) = l$, \begin{eqnarray*} & & P(Q_{l}(e_q) \geq k + l) = \lambda_{0, k + l - 1}\frac{1 - \phi_{k+l,k+l}(q)}{q}P(Q_{l}(e_q) = k - 1 + l) \\ &+& \sum_{j=0}^{k-1}\sum_{m=k}^{\infty}\lambda_{1,l + j}P(Z_{1,l + j} = m - j)\frac{1 - \phi_{m + l, k + l}(q)}{q}P(Q_{l}(e_q) = j + l). \end{eqnarray*} Thus, we see that \begin{eqnarray*} P(Q_{l}(e_q) = k + l \mid Q_{l}(0) = l) = P(Q(e_q) = k + l \mid \inf_{0 \leq s \leq e_q}Q(s) = l) \end{eqnarray*} completing the proof. \end{proof} \begin{remark} It is worth noting, from the point of view of numerical transform inversion \cite{AbateWhittInversion}, that a similar result can be derived when we consider complex-valued $q$, i.e. expressions of the form \begin{eqnarray*} \int_{0}^{\infty}P(Q(t) = k + l, \inf_{0 \leq s \leq t}Q(s) = l)qe^{-qt}dt \end{eqnarray*} for complex $q$ with positive real part, i.e. those $q$ satisfying $\Re(q) > 0$, as opposed to $P(Q(e_q) = k + l, \inf_{0 \leq s \leq e_q}Q(s) = l)$ for real $q > 0$. First note that for $q = x + iy$ satisfying $\Re(q) = x > 0$, with $e_{x}$ being exponential with rate $x$, independent of $Q$, \begin{eqnarray*} \int_{0}^{\infty}P(Q(t) = k + l, \inf_{0 \leq s \leq t}Q(s) = l)qe^{-qt}dt &=& \int_{0}^{\infty}P(Q(t) = k + l, \inf_{0 \leq s \leq t}Q(s) = l)(x + iy)e^{-iyt}e^{-xt}dt \\ &=& \frac{(x + iy)}{x}E[\textbf{1}(Q(e_{x}) = k + l, \inf_{0 \leq s \leq e_{x}}Q(s) = l)e^{-iye_{x}}]. \end{eqnarray*} Using this observation, we can mimic the proof of Theorem \ref{mainresult} in a straightforward manner to determine that \begin{eqnarray*} & & E[\textbf{1}(Q(e_{x}) = k+l)e^{-iye_{x}} \mid \inf_{0 \leq s \leq e_{x}}Q(s) = l, Q(0) = n_0] \\ &=& \frac{E[e^{-iye_{x}}\textbf{1}(\inf_{0 \leq s \leq e_{x}}Q(s) = l) \mid Q(0) = n_0]E[\textbf{1}(Q_{l}(e_{x}) = k + l)e^{-iye_{x}} \mid Q_{l}(0) = l]}{P(\inf_{0 \leq s \leq e_{x}}Q(s) = l \mid Q(0) = n_{0})}\frac{x + iy}{x} \end{eqnarray*} which contains quantities that are given in terms of either the reflection $Q_{l}$ reflected at $l$, or hitting-time transforms associated with the original process $Q$.
To see why only these types of transforms need to be computed, note that letting $\tau_{l} = \inf\{t \geq 0: Q(t) \leq l\}$ yields \begin{eqnarray*} E[e^{-iye_{x}}\textbf{1}(\inf_{0 \leq u \leq e_{x}}Q(u) = l) \mid Q(0) = n_0] &=& E[e^{-iye_{x}}\textbf{1}(\inf_{0 \leq u \leq e_{x}}Q(u) \leq l) \mid Q(0) = n_{0}] \\ &-& E[e^{-iye_{x}}\textbf{1}(\inf_{0 \leq u \leq e_{x}}Q(u) \leq l-1) \mid Q(0) = n_0] \\ &=& E[e^{-iye_{x}}\textbf{1}(\tau_{l} \leq e_{x}) \mid Q(0) = n_{0}] \\ &-& E[e^{-iye_{x}}\textbf{1}(\tau_{l-1} \leq e_{x}) \mid Q(0) = n_0] \\ &=& \frac{x}{x + iy}E[e^{-iy\tau_{l}}\textbf{1}(\tau_{l} \leq e_{x}) \mid Q(0) = n_0] \\ &-& \frac{x}{x + iy}E[e^{-iy \tau_{l-1}}\textbf{1}(\tau_{l-1} \leq e_{x}) \mid Q(0) = n_0] \\ &=& \frac{x}{x + iy}\left[E[e^{-q\tau_{l}} \mid Q(0) = n_0] - E[e^{-q \tau_{l-1}} \mid Q(0) = n_0]\right] \end{eqnarray*} This gives \begin{eqnarray*} & & E[\textbf{1}(Q(e_{x}) = k + l)e^{-iye_{x}} \mid \inf_{0 \leq s \leq e_{x}}Q(s) = l, Q(0) = n_0] \\ &=& \frac{\left[E[e^{-q\tau_{l}} \mid Q(0) = n_0] - E[e^{-q \tau_{l-1}} \mid Q(0) = n_0]\right]}{\left[E[e^{-x\tau_{l}} \mid Q(0) = n_0] - E[e^{-x \tau_{l-1}} \mid Q(0) = n_0]\right]}E[\textbf{1}(Q_{l}(e_{x}) = k + l)e^{-iye_{x}} \mid Q_{l}(0) = l] \end{eqnarray*} implying \begin{eqnarray*} & & \int_{0}^{\infty}P(Q(t) = k + l, \inf_{0 \leq s \leq t}Q(s) = l \mid Q(0) = n_0)qe^{-qt}dt \\ &=& \left[E[e^{-q\tau_{l}} \mid Q(0) = n_0] - E[e^{-q \tau_{l-1}} \mid Q(0) = n_0]\right] \int_{0}^{\infty}P(Q_{l}(t) = k+l \mid Q_{l}(0) = l)qe^{-qt}dt \end{eqnarray*} which is clearly the complex analogue of the formula given in Theorem \ref{mainresult}. All other types of transforms that we will need can be computed in a similar manner, for complex $q$. \end{remark} We now show that the reflected process $\{Q_{0}(t); t \geq 0\}$ exhibits a similar type of factorization identity. \begin{theorem} \label{mainreflectedresult} Suppose $Q$ is a PRP system with $Q(0) = n_0$, and let $Q_{0}$ be the reflected version of $Q$ at level zero, with $Q_{0}(0) = n_0$. Then for each integer $l \geq 0$, and each integer $k \geq 1$, \begin{eqnarray*} P(Q(e_q) - \inf_{0 \leq u \leq e_q}Q(u) = k \mid \inf_{0 \leq u \leq e_q}Q(u) = l) &=& P(Q_{0}(e_q) - \inf_{0 \leq u \leq e_q}Q_{0}(u) = k \mid \inf_{0 \leq u \leq e_q}Q_{0}(u) = l). \end{eqnarray*} \end{theorem} \begin{proof} Notice that a sample-path identity that is completely analogous to (\ref{mainindicator}) can be established for $Q_{0}$: for each $l \geq 0$, $k \geq 1$, \begin{eqnarray} \label{secondmainindicator} & & \textbf{1}(Q_{0}(t) \geq k + l, \inf_{0 \leq u \leq t}Q_{0}(u) = l) \nonumber \\ &=& \int_{0}^{t}\textbf{1}(Q_{0}(s-) = k-1 + l, \inf_{0 \leq u \leq s}Q_{0}(u) = l)\textbf{1}(\inf_{u \in [s,t]}Q_{0}(u) = k + l)A_{0,k-1 + l}(ds) \nonumber \\ &+& \sum_{j=0}^{k-1}\sum_{m = k}^{\infty}\int_{0}^{t}\textbf{1}(Q_{0}(s-) = j + l, \inf_{0 \leq u < s}Q_{0}(u) = l)\textbf{1}(\inf_{u \in [s,t]}Q_{0}(u) \geq k + l)A_{1,j + l, m + l}(ds). \end{eqnarray} \noindent Applying the same steps found in Step 1 of the proof of Theorem \ref{mainresult} yields \begin{eqnarray} \label{secondinfequations} & & P(Q_{0}(e_q) \geq k + l \mid \inf_{0 \leq u \leq e_q} Q_{0}(u) = l) = \lambda_{0,k-1 + l}\frac{(1 - \phi_{k + l,k + l}(q))}{q}P(Q_{0}(e_q) = k-1 + l \mid \inf_{0 \leq u \leq e_q}Q_{0}(u) = l) \nonumber \\ &+& \sum_{j=0}^{k-1}\sum_{m=k}^{\infty}\lambda_{1,j + l} P(Z_{1,j + l} = m - j) \frac{(1 - \phi_{m + l, k + l}(q))}{q}P(Q_{0}(e_q) = j + l \mid \inf_{0 \leq u \leq e_q}Q_{0}(u) = l). 
\end{eqnarray} For our fixed $l$, we notice that the equations that form system (\ref{infequations}) are the same as the equations found in (\ref{secondinfequations}). Hence, by the uniqueness result proven in Step 2 of Theorem \ref{mainresult} we have \begin{eqnarray*} P(Q(e_q) \geq k + l \mid \inf_{0 \leq u \leq e_q}Q(u) = l) = P(Q_{0}(e_q) \geq k + l \mid \inf_{0 \leq u \leq e_q} Q_{0}(u) = l) \end{eqnarray*} which completes the proof. \end{proof} Two interesting factorization results can be derived, when the batch and catastrophe sizes of both $Q$ and $Q_0$ have distributions that are state-independent. Clearly, in this case we see that for each $k \geq 0$ and $l$, $P(Q_{l}(e_q) = k + l \mid Q_{l}(0) = l) = P(Q_{0}(e_q) = k \mid Q_{0}(0) = 0)$, and since $Q_{0}$ is the reflection of $Q$ at level 0, we also find that \begin{eqnarray*}Q_{0}(e_q) \stackrel{d}{=} Q(e_q) - \inf_{0 \leq u \leq e_q}Q(u) \end{eqnarray*} which follows since customers are processed in a Last-Come-First-Served Preemptive-Resume manner. Hence, Theorem \ref{mainresult} yields for each $k \geq 0$, $l \leq 0 = Q(0)$, \begin{eqnarray*} P(Q(e_q) - \inf_{0 \leq u \leq e_q}Q(u) = k) = P(Q(e_q) - \inf_{0 \leq u \leq e_q}Q(u) = k \mid \inf_{0 \leq u \leq e_q}Q(u) = l). \end{eqnarray*} In other words, the following corollary holds. \begin{corollary} \label{PRPcorollary} Suppose that $\{Q(t); t \geq 0\}$ represents a PRP system, with state-independent jumps, and let $e_q$ be an exponential random variable with rate $q > 0$, independent of $Q$. Then for each $\omega \in \mathbb{R}$, \begin{eqnarray*} E_{0}[e^{i \omega Q(e_q)}] = E_{0}[e^{i \omega \inf_{0 \leq u \leq e_q}Q(u)}]E_{0}[e^{i \omega (Q(e_q) - \inf_{0 \leq u \leq e_q}Q(u))}]. \end{eqnarray*} \end{corollary} Here $E_{x}$ is the expectation corresponding to $P_{x}$, where $P_{x}$ is a probability measure under the condition that our process starts at level $x$. This notation will be used in many places throughout the rest of the paper. This factorization has been well-known for L\'evy processes since the late 60's, due to Percheskii and Rogozin \cite{PercheskiiRogozin}, and the first probabilistic proof of this result was given in Greenwood and Pitman \cite{GreenwoodPitman}. We can also conclude from Theorem \ref{mainreflectedresult} that for $l \geq 0$, when $Q_{0}(0) = Q(0) = n_0$, \begin{eqnarray*} P(Q_{0}(e_q) - \inf_{0 \leq u \leq e_q}Q_{0}(u) = k \mid \inf_{0 \leq u \leq e_q}Q_{0}(u) = l) &=& P(Q(e_q) - \inf_{0 \leq u \leq e_q}Q(u) = k) \\ &=& P(Q_{0}(e_q) - \inf_{0 \leq u \leq e_q}Q_{0}(u) = k) \end{eqnarray*} where the second equality follows from the simple fact that the reflection of $Q_0$ at its infimum is equal in distribution to the reflection of $Q$ at its infimum. Hence, we see that $Q_{0}(e_q) - \inf_{0 \leq u \leq e_q}Q_{0}(u)$ is actually independent of $\inf_{0 \leq u \leq e_q}Q_{0}(u)$, which gives us another interesting corollary. \begin{corollary} \label{reflectedPRPcorollary} Suppose that $\{Q_{0}(t); t \geq 0\}$ is a reflected version of our PRP system, reflected at 0. Then for each $\omega \in \mathbb{R}$, and each integer $n_0 \geq 0$, \begin{eqnarray*} E_{n_0}[e^{i \omega Q_{0}(e_q)}] = E_{n_0}[e^{i \omega \inf_{0 \leq u \leq e_q}Q_{0}(u)}]E_{0}[e^{i \omega Q_{0}(e_q)}]. \end{eqnarray*} \end{corollary} Such a factorization result is useful when studying reflected processes starting in an arbitrary initial state. 
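As an aside (not part of the argument), the independence asserted in Corollary \ref{PRPcorollary} is easy to illustrate by simulation. The sketch below is only an illustration: all parameter values are assumptions, and the process simulated is the simplest PRP system with state-independent, unit-sized jumps and no catastrophes, namely a continuous-time random walk on $\mathbb{Z}$ started at $0$ that moves up at rate $\lambda$ and down at rate $\mu$. It compares the empirical characteristic function of $Q(e_q)$ with the product of those of $\inf_{0 \leq u \leq e_q}Q(u)$ and $Q(e_q) - \inf_{0 \leq u \leq e_q}Q(u)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not taken from the paper)
lam, mu, q = 1.0, 1.3, 0.5     # up-rate, down-rate, rate of the exponential time e_q
omega = 0.7                    # argument of the characteristic functions
n_reps = 100_000

Q_end = np.empty(n_reps)
Q_inf = np.empty(n_reps)
for r in range(n_reps):
    horizon = rng.exponential(1.0 / q)           # e_q, drawn independently of the path
    t, level, running_inf = 0.0, 0, 0
    while True:
        t += rng.exponential(1.0 / (lam + mu))   # epoch of the next jump
        if t > horizon:
            break
        level += 1 if rng.random() < lam / (lam + mu) else -1
        running_inf = min(running_inf, level)
    Q_end[r], Q_inf[r] = level, running_inf

refl = Q_end - Q_inf           # Q(e_q) - inf_{0 <= u <= e_q} Q(u)
lhs = np.exp(1j * omega * Q_end).mean()
rhs = np.exp(1j * omega * Q_inf).mean() * np.exp(1j * omega * refl).mean()
print(lhs)   # E_0[ exp(i*omega*Q(e_q)) ]
print(rhs)   # E_0[ exp(i*omega*inf Q) ] * E_0[ exp(i*omega*(Q - inf Q)) ]
\end{verbatim}
Up to Monte Carlo error, the two printed values should coincide.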
Corollary \ref{PRPcorollary} shows that, since $\inf_{0 \leq u \leq e_q}Q(u)$ is independent of $Q(e_q) - \inf_{0 \leq u \leq e_q}Q(u)$, the transforms of $Q(e_q)$ and $\inf_{0 \leq u \leq e_q}Q(u)$ can be used to derive the transform of $Q(e_q) - \inf_{0 \leq u \leq e_q}Q(u)$, which represents the distribution of the reflected process, starting in level zero. Theorem \ref{mainreflectedresult} can then be used to find the distribution of the reflected process, starting in any initial state, since it is clearly equal in distribution to a convolution of the reflected PRP system $Q_0$ starting in level zero, and a truncated version of $\inf_{0 \leq u \leq e_q}Q(u)$. We are now ready to see how the Wiener-Hopf factorization for L\'evy processes follows as a consequence of our factorization identities for PRP systems, whose arrival rates, service rates, and jump distributions do not depend on the level of the process. \subsection{The Wiener-Hopf factorization} We begin with establishing the well-known version of the Wiener-Hopf factorization, for L\'evy processes. \begin{theorem} \label{classicalLevyThm} Suppose $X$ is a L\'evy process, and let $e_q$ be an exponential random variable, independent of $X$, with rate $q > 0$. Then $\inf_{0 \leq s \leq e_q}X(s)$ and $X(e_q) - \inf_{0 \leq s \leq e_q}X(s)$ are independent. \end{theorem} \begin{proof} Suppose first that $\tilde{X}$ is a L\'evy process that consists of only a Brownian component and a compound Poisson component. In this case, there exists a sequence of PRP systems $\{\tilde{X}_n\}_{n \geq 1}$, such that $\tilde{X}_n$ converges uniformly on compact sets to $\tilde{X}$: in fact, each $\tilde{X}_{n}$ process is also a L\'evy process. We omit the details on constructing the $\{\tilde{X}_n\}_{n}$ sequence, as they are somewhat standard: interested readers can also find them in a previous online version \cite{FralixvanLeeuwaardenBoxma} of the paper. From Corollary \ref{PRPcorollary}, we see that the Wiener-Hopf factorization is valid for each PRP system with state-independent jumps. Applying the L\'evy continuity theorem yields, for each $(\omega_{1}, \omega_{2}) \in \mathbb{R}^{2}$, \begin{eqnarray*} E[e^{i(\omega_{1} \inf_{0 \leq s \leq e_{q}}X(s) + \omega_{2}(X(e_q) - \inf_{0 \leq s \leq e_q}X(s)))}] &=& \lim_{n \rightarrow \infty}E[e^{i(\omega_{1} \inf_{0 \leq s \leq e_{q}}\tilde{X}_{n}(s) + \omega_{2}(\tilde{X}_{n}(e_q) - \inf_{0 \leq s \leq e_q}\tilde{X}_{n}(s)))}] \\ &=& \lim_{n \rightarrow \infty} E[e^{i\omega_{1} \inf_{0 \leq s \leq e_{q}}\tilde{X}_{n}(s)}]E[e^{i(\omega_{2}(\tilde{X}_{n}(e_q) - \inf_{0 \leq s \leq e_q}\tilde{X}_{n}(s)))}] \\ &=& E[e^{i \omega_{1} \inf_{0 \leq s \leq e_q}\tilde{X}(s)}]E[e^{i \omega_{2} (\tilde{X}(e_q) - \inf_{0 \leq s \leq e_{q}}\tilde{X}(s))}] \end{eqnarray*} proving independence. To derive this result for an arbitrary L\'evy process, use this result in conjunction with the proof of the L\'evy-It\^o decomposition: again, finer details of this procedure can be found in \cite{FralixvanLeeuwaardenBoxma}. \end{proof} Our idea of proving a factorization result for a special type of process, then taking limits is similar to the older approaches of proving the Wiener-Hopf factorization, along with related results: see for instance Percheskii and Rogozin \cite{PercheskiiRogozin}, along with Gusak and Korolyuk \cite{GusakKorolyuk}. 
Our approach differs in that we use a discrete state space in continuous time: this allows us to state a simple sample-path identity, from which we derive a linear system of equations that has a unique solution. Moreover, our limiting argument makes use of classical heavy-traffic results from queueing theory. Readers interested in learning more about classical approaches towards proving the Wiener-Hopf factorization are referred to the recent paper of Kuznetsov \cite{Kuznetsov}. \subsection{An analogous factorization for the reflection} We now show how to use Corollary \ref{reflectedPRPcorollary} to deduce an analogous factorization for reflected L\'evy processes, with an arbitrary initial state. \begin{theorem} \label{reflectionLevyThm} Suppose $X$ represents a L\'evy process, and let $e_q$ be an exponential random variable with rate $q > 0$, independent of $X$. Moreover, let $R := \{R(t); t \geq 0\}$ represent the reflection of $X$, with a reflecting barrier at state zero. Then, assuming $X(0) = x \geq 0$, \begin{eqnarray} \label{reflectedlevystatement} E_{x}[e^{i \omega R(e_q)}] &=& E_{0}[e^{i \omega R(e_q)}]E_{x}[e^{i \omega \inf_{0 \leq u \leq e_q}R(u)}]. \end{eqnarray} \end{theorem} \begin{proof} The proof of this result is completely analogous to the proof of Theorem \ref{classicalLevyThm}. First, we use Corollary \ref{reflectedPRPcorollary} to establish that it holds for a L\'evy process $X$ that consists of only a Brownian and compound Poisson part. The general statement then again follows as before, from the proof of the L\'evy-It\^o decomposition. \end{proof} Theorem \ref{reflectionLevyThm} can also be derived directly from the Wiener-Hopf factorization. Here $X(0) = x$, and for each $t \geq 0$ \begin{eqnarray*} R(t) = X(t) - \inf_{0 \leq s \leq t}\min(X(s), 0) \end{eqnarray*} and so \begin{eqnarray*} R(t) - \inf_{0 \leq s \leq t}R(s) &=& X(t) - \inf_{0 \leq s \leq t}\min(X(s), 0) - \inf_{0 \leq s \leq t}\left(X(s) - \inf_{0 \leq u \leq s}\min(X(u), 0)\right). \end{eqnarray*} Let $\tau_{0} = \inf\{t \geq 0: X(t) = 0\}$. If $\tau_{0} > t$, then \begin{eqnarray*} R(t) - \inf_{0 \leq s \leq t}R(s) &=& X(t) - \inf_{0 \leq s \leq t}X(s) \end{eqnarray*} since $\min(X(s), 0) = 0$ for $0 \leq s \leq \tau_{0}$. Next, if $\tau_{0} \leq t$, we also see that \begin{eqnarray*} R(t) - \inf_{0 \leq s \leq t}R(s) = X(t) - \inf_{0 \leq s \leq t}X(s) - \inf_{\tau_{0} \leq s \leq t}\left(X(s) - \inf_{\tau_{0} \leq u \leq s}X(u)\right) = X(t) - \inf_{0 \leq s \leq t}X(s)\end{eqnarray*} since $\inf_{\tau_{0} \leq s \leq t}\left(X(s) - \inf_{\tau_{0} \leq u \leq s}X(u)\right) \geq 0$, and $X(\tau_{0}) - \inf_{\tau_{0} \leq u \leq \tau_{0}}X(u) = 0$. Moreover, for each $t \geq 0$ \begin{eqnarray*} \inf_{0 \leq s \leq t}R(s) = \max(\inf_{0 \leq s \leq t}X(s), 0).
\end{eqnarray*} Thus, for an exponential random variable $e_q$ with parameter $q > 0$, independent of $X$, we have \begin{eqnarray*} E_{x}[e^{i\omega(R(e_q) - \inf_{0 \leq s \leq e_q}R(s))}e^{i\omega\inf_{0 \leq s \leq e_q}R(s)}] &=& \int_{0}^{\infty}E_{x}[e^{i\omega(R(t) - \inf_{0 \leq s \leq t}R(s))}e^{i\omega\inf_{0 \leq s \leq t}R(s)}]qe^{-qt}dt \\ &=& \int_{0}^{\infty}E_{x}[e^{i\omega(X(t) - \inf_{0 \leq s \leq t}X(s))}e^{i\omega\max(0, \inf_{0 \leq s \leq t}X(s))}]qe^{-qt}dt \\ &=& E_{x}[e^{i\omega(X(e_q) - \inf_{0 \leq s \leq e_q}X(s))}e^{i\omega\max(0, \inf_{0 \leq s \leq e_q}X(s))}] \\ &=& E_{x}[e^{i\omega(X(e_q) - \inf_{0 \leq s \leq e_q}X(s))}]E_{x}[e^{i\omega\max(0, \inf_{0 \leq s \leq e_q}X(s))}] \end{eqnarray*} where the last step follows from the Wiener-Hopf factorization, i.e. Theorem \ref{classicalLevyThm}. Theorem \ref{reflectionLevyThm} does not seem to be explicitly known; however, direct computations of $E_{x}[e^{i \omega R(e_q)}]$ have appeared in various places: see e.g. Theorem 9.1 of Abate and Whitt \cite{AbateWhittRBM2}, Theorem 2.1 of Abate and Whitt \cite{AbateWhittMM12}, Bingham \cite{Bingham}, Bekker et al. \cite{BekkerBoxmaResing}, and Chapter 9, Theorem 3.10 of Asmussen \cite{Asmussen}; all of these references address the factorization in the case where $X$ is spectrally positive, i.e. $X$ has only positive jumps. Theorem \ref{reflectionLevyThm} is also implicitly stated in Example 3 of Palmowski and Vlasiou \cite{PalmowskiVlasiou}, in terms of the steady-state distribution of a reflected L\'evy process that experiences catastrophes at times forming a homogeneous Poisson process. Their result, like the previous references, considers only the spectrally positive case, but their arguments can also be used to establish Theorem \ref{reflectionLevyThm}. Other results similar to Theorem \ref{reflectionLevyThm} can also be found in the recent work of Debicki et al. \cite{DebickiKosinskiMandjes}, and in Kella and Mandjes \cite{KellaMandjes}. \section{Applications to birth-death processes and diffusions} We now apply our factorization identities, i.e. Theorems \ref{mainresult} and \ref{mainreflectedresult}, to the study of birth-death processes, which form another interesting subclass of PRP systems. It will also be possible to apply our identities to the study of diffusion processes, as these are often weak limits of birth-death processes. Readers should note that the transforms derived below can also be modified so that the domain is complex-valued, as we noted in the remark following Theorem \ref{mainresult} above. \subsection{Birth-death processes} Suppose that $Q := \{Q(t); t \geq 0\}$ represents a birth-death process on the integers, with birth rates $\{\lambda_{n}\}_{n \in \mathbb{Z}}$ and death rates $\{\mu_{n}\}_{n \in \mathbb{Z}}$. Let $e_q$ represent an exponential random variable with rate $q > 0$, independent of $Q$. Throughout we assume that $Q$ is ergodic, and we let $\pi$ represent its stationary distribution. Our object of study is now the probability mass function of $Q(e_q)$. We remind readers that $Q$ can easily be related to a PRP system: units arrive according to a collection of independent Poisson processes $\{A_{0,j}\}_{j \in \mathbb{Z}}$ where $A_{0,j}$ has rate $\lambda_j$, each customer brings to the system a unit exponential amount of work, and the server processes work at a rate $\mu_n$ whenever the system is in state $n$, for $n \in \mathbb{Z}$.
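The developments below rely repeatedly on hitting-time Laplace-Stieltjes transforms of the form $E_{k}[e^{-q \tau_{j}}]$, where $\tau_{j}$ denotes the first time the birth-death process reaches level $j$. As a purely numerical aside (not part of the paper's derivations), such transforms can be computed for any skip-free birth-death process by standard one-step first-passage recursions; the sketch below does this for $M/M/s$-type rates, with the truncation level $N$ and all parameter values being illustrative assumptions.
\begin{verbatim}
import numpy as np

# Illustrative M/M/s parameters and truncation level (assumptions, not from the paper)
lam, mu, s, q = 2.0, 1.0, 3, 0.4
N = 500                                    # truncation level for the downward recursion

def birth(n): return lam
def death(n): return min(n, s) * mu        # M/M/s death rates; equal to 0 in state 0

# g[n] ~ E_n[exp(-q * first-passage time from n to n-1)], backward recursion from N
g = np.zeros(N + 2)
for n in range(N, 0, -1):
    g[n] = death(n) / (birth(n) + death(n) + q - birth(n) * g[n + 1])

# h[n] = E_n[exp(-q * first-passage time from n to n+1)], forward recursion from 0
h = np.zeros(N + 1)
h[0] = birth(0) / (birth(0) + q)
for n in range(1, N + 1):
    h[n] = birth(n) / (birth(n) + death(n) + q - death(n) * h[n - 1])

def lst_hit(k, j):
    """E_k[exp(-q * tau_j)] for a skip-free birth-death process (k, j <= N)."""
    if k >= j:                              # downward passage from k to j
        return float(np.prod(g[j + 1:k + 1]))
    return float(np.prod(h[k:j]))           # upward passage from k to j

for k in (0, 1, s, s + 2, s + 5):
    print(k, lst_hit(k, s))                 # the transforms E_k[e^{-q tau_s}] used below
\end{verbatim}
For levels above $s$, where the rates are constant, the downward recursion reproduces (up to truncation error) the $M/M/1$ busy-period transform used below, and for levels below $s$ it provides an independent check on the Kummer-function expressions derived in the next subsection.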
By Corollary 4.1.1 of Abate and Whitt \cite{AbateWhittMM12}, we see that for each $n \in \mathbb{Z}$, \begin{eqnarray*} P_{0}(Q(e_q) = n) = \frac{\pi_{n}E_{n}[e^{-q \tau_{0}}]}{\sum_{k \in \mathbb{Z}}\pi_{k}E_{k}[e^{-q \tau_{0}}]} \end{eqnarray*} where $P_{n}$ denotes conditional probability, given $Q(0) = n$. This expression also holds in the absence of ergodicity, and also for complex $q$ when $P_{0}(Q(e_q) = n)$ is interpreted as a Laplace transform, multiplied by $q$. However, suppose we would like to change the initial condition. While the same method will tell us that \begin{eqnarray*} P_{n_0}(Q(e_q) = n) = \frac{\pi_{n}E_{n}[e^{-q \tau_{n_0}}]}{\sum_{j \in \mathbb{Z}}\pi_{j}E_{j}[e^{-q \tau_{n_0}}]} \end{eqnarray*} for an arbitrary $n_0$, we must be careful: how do we know that $E_{n}[e^{-q \tau_{n_0}}]$ is tractable? This is a very legitimate question, as there are many instances where $E_{n}[e^{-q \tau_{n_0}}]$ will be tractable for some choices of $n_0$, but not for others. Thus, the key to computing these probabilities is to choose the appropriate \emph{reference point}, i.e. the point found in the hitting-time Laplace-Stieltjes transforms given in the pmf of $Q(e_q)$. This is where our factorization identities become useful: they allow us to use whatever reference point we like, regardless of the initial value. We illustrate our approach by computing the pmf of the number of customers in an $M/M/s$ queueing system at an independent exponential time $e_q$. The reader will see that our expressions will be given in terms of an $M/M/1$ model and an $M/M/\infty$ model, which are much simpler. \subsubsection{The $M/M/s$ queue} Recall that the $M/M/s$ queue is a birth-death process on $\{0,1,2, \ldots\}$ with birth rates $\lambda_{n} = \lambda$, for $n \geq 0$, and death rates $\mu_{n} = \min\{n,s\}\mu$, for $n \geq 1$. A classical reference on the time-dependent behavior of the $M/M/s$ queue is Saaty \cite{Saaty}, which makes use of the approach found in Bailey \cite{Bailey}. Assume first that $Q(0) = s$. In this case, for each $n \geq 0$, \begin{eqnarray*} P_{s}(Q(e_q) = n) = \frac{\pi_{n}E_{n}[e^{-q \tau_{s}}]}{\sum_{j \geq 0}\pi_{j}E_{j}[e^{-q \tau_{s}}]}. \end{eqnarray*} This is a nice expression: notice that if $k < s$, $E_{k}[e^{-q \tau_{s}}]$ is the Laplace-Stieltjes transform of the amount of time it takes an $M/M/s$ queue to go from level $k$ to level $s$, but this is the same as the Laplace-Stieltjes transform of the amount of time it takes to go from $k$ to $s$ in an $M/M/\infty$ queue, with arrival rate $\lambda$ and service rate $\mu$. Similarly, for $k > s$, $E_{k}[e^{-q \tau_{s}}]$ is just the LST of the amount of time it takes to go from level $k$ to level $s$ in an $M/M/1$ queue, with arrival rate $\lambda$ and service rate $s\mu$. Hence, all of the terms in our expression for $P_{s}(Q(e_q) = n)$ can theoretically be derived from two simpler models, the $M/M/1$ queue and the $M/M/\infty$ queue. For $k > s$, we already have a closed-form expression for $E_{k}[e^{-q \tau_{s}}]$: letting $\psi(q) = E_{s+1}[e^{-q \tau_{s}}]$ be the Laplace-Stieltjes transform of the busy period of an $M/M/1$ queue with arrival rate $\lambda$ and service rate $s\mu$, we see that \begin{eqnarray*} E_{k}[e^{-q \tau_{s}}] = \psi(q)^{k-s}. \end{eqnarray*} We now focus on the case where $k < s$.
Letting $\{Q_{M/M/\infty}(t); t \geq 0\}$ represent the queue-length process of an $M/M/\infty$ queue (including the customers in service), we use a classical argument found in Darling and Siegert \cite{DarlingSiegert} to find that \begin{eqnarray*} P_{k}(Q_{M/M/\infty}(e_q) = s) &=& P_{k}(Q_{M/M/\infty}(e_q) = s, \tau_{s} \leq e_q) \\ &=& P_{s}(Q_{M/M/\infty}(e_q) = s)E_{k}[e^{-q \tau_{s}}] \end{eqnarray*} giving \begin{eqnarray} \label{hittingtimeratio} E_{k}[e^{-q \tau_{s}}] = \frac{P_{k}(Q_{M/M/\infty}(e_q) = s)}{P_{s}(Q_{M/M/\infty}(e_q) = s)}. \end{eqnarray} To compute $P_{k}(Q_{M/M/\infty}(e_q) = s)$, we need to use the following known lemma. The $\mu = 1$ case was observed in Flajolet and Guillemin \cite{FlajoletGuillemin}, but we repeat it here for convenience. \begin{lemma} \label{KummerLemma} For a positive real number $q$, \begin{eqnarray*} \int_{0}^{\infty}qe^{-(qt + \rho(1 - e^{-\mu t}))}dt &=& M\left(1, \frac{q}{\mu} + 1, -\rho \right) \end{eqnarray*} where $M$ is Kummer's function, i.e. \begin{eqnarray*} M(a,b,z) = \sum_{n=0}^{\infty}\frac{(a)_n z^{n}}{(b)_n n!} \end{eqnarray*} with $(a)_0 = 1$, and for $n \geq 1$, $(a)_n = (a)(a + 1)\cdots(a + n - 1)$. \end{lemma} \begin{proof} Applying partial integration gives \begin{eqnarray*} \int_{0}^{\infty}e^{-\rho (1 - e^{-\mu t})}qe^{-q t}dt &=& 1 - \rho \mu \int_{0}^{\infty}e^{-(q + \mu)t}e^{-\rho(1 - e^{-\mu t})}dt. \end{eqnarray*} After repeatedly applying partial integration and taking limits, we get the result. \end{proof} \begin{lemma} For each $k \leq s$, \begin{eqnarray*} P_{k}(Q_{M/M/\infty}(e_q) = s) &=& \sum_{j=0}^{k}\sum_{m=0}^{k + s - 2j}{k \choose j}{k + s - 2j \choose m}\frac{(\rho)^{s-j}(-1)^{m}}{(s-j)!}\frac{q}{q + (j+m)\mu}M\left(1, \frac{q}{\mu} + j + m + 1, -\rho\right). \end{eqnarray*} \end{lemma} \begin{proof} This identity can be derived from the known fact that, at a fixed time $t \geq 0$, $Q(t)$ is the convolution of a binomial random variable with parameters $(k,e^{-\mu t})$ and a Poisson random variable with parameter $\rho(1 - e^{-\mu t})$. The result then follows by integrating the pmf of $Q(t)$, and applying Lemma \ref{KummerLemma}. \end{proof} By making use of this lemma in equation (\ref{hittingtimeratio}), we arrive at the following result. \begin{lemma} \label{MMinftyhittingtimelemma} For each $k \leq s$, we see that \begin{eqnarray*} E_{k}[e^{-q \tau_{s}}] = \frac{\sum_{j=0}^{k}\sum_{m=0}^{k + s - 2j}{k \choose j}{k + s - 2j \choose m}\frac{(\rho)^{s-j}(-1)^{m}}{(s-j)!}\frac{q}{q + (j+m)\mu}M\left(1, \frac{q}{\mu} + j + m + 1, -\rho\right)}{\sum_{j=0}^{s}\sum_{m=0}^{2(s-j)}{s \choose j}{2(s-j) \choose m}\frac{(\rho)^{s-j}(-1)^{m}}{(s-j)!}\frac{q}{q + (j+m)\mu}M\left(1, \frac{q}{\mu} + j + m + 1, -\rho\right)}. \end{eqnarray*} \end{lemma} \begin{remark} As discussed in the remark following Theorem \ref{mainresult}, Lemmas 4.1, 4.2 and 4.3 can be modified so that $q$ is allowed to take on complex values. \end{remark} Our next step is to use the Wiener-Hopf identity to compute probabilities of the form $P_{k}(Q(e_q) = n)$, for arbitrary $k,n \geq 0$. Notice that we already have a nice expression for such a pmf, when $k = s$. \noindent \textbf{Case 1}: $k > s$, $n \leq s$. Notice that \begin{eqnarray*} P_{k}(Q(e_q) = n) &=& P_{k}(Q(e_q) = n, \tau_{s} \leq e_q) \\ &=& P_{k}(Q(e_q) = n \mid \tau_{s} \leq e_q)E_{k}[e^{-q \tau_{s}}] \\ &=& P_{s}(Q(e_q) = n) E_{k}[e^{-q \tau_{s}}] \end{eqnarray*} showing, from our previous calculations, that this probability is tractable. 
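Before turning to the complex-$q$ analogue of Case 1, it is worth noting that the Kummer-function expressions above are straightforward to check numerically. The sketch below is an illustration only: the parameter values are assumptions, and \texttt{scipy}'s \texttt{hyp1f1} plays the role of Kummer's function $M$. It evaluates $E_{k}[e^{-q \tau_{s}}]$ for $k \leq s$ both via the double sums of Lemma \ref{MMinftyhittingtimelemma} and via direct numerical integration of the $M/M/\infty$ transition probabilities appearing in (\ref{hittingtimeratio}).
\begin{verbatim}
import math
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp1f1   # Kummer's function M(a, b, z)

# Illustrative parameters (assumptions, not from the paper)
lam, mu, s, k, q = 2.0, 1.0, 4, 1, 0.3
rho = lam / mu

def pmf_mminf(j, t, j0):
    """P_{j0}(Q_{M/M/inf}(t) = j): Binomial(j0, e^{-mu t}) convolved with Poisson(rho(1-e^{-mu t}))."""
    p, nu = math.exp(-mu * t), rho * (1.0 - math.exp(-mu * t))
    return sum(math.comb(j0, b) * p**b * (1.0 - p)**(j0 - b)
               * math.exp(-nu) * nu**(j - b) / math.factorial(j - b)
               for b in range(0, min(j, j0) + 1))

def kummer_double_sum(k0):
    """The double sum giving P_{k0}(Q_{M/M/inf}(e_q) = s) for k0 <= s."""
    total = 0.0
    for j in range(k0 + 1):
        for m in range(k0 + s - 2 * j + 1):
            total += (math.comb(k0, j) * math.comb(k0 + s - 2 * j, m)
                      * rho ** (s - j) * (-1) ** m / math.factorial(s - j)
                      * q / (q + (j + m) * mu)
                      * hyp1f1(1.0, q / mu + j + m + 1.0, -rho))
    return total

# E_k[e^{-q tau_s}] two ways: via the lemma, and via quadrature of the pmf ratio
lst_kummer = kummer_double_sum(k) / kummer_double_sum(s)
num = quad(lambda t: q * math.exp(-q * t) * pmf_mminf(s, t, k), 0.0, np.inf)[0]
den = quad(lambda t: q * math.exp(-q * t) * pmf_mminf(s, t, s), 0.0, np.inf)[0]
print(lst_kummer, num / den)   # the two values should agree up to quadrature error
\end{verbatim}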
Readers should again note that a similar argument can be made for complex $q = x + iy$ satisfying $x > 0$. Here \begin{eqnarray*} \int_{0}^{\infty}P_{k}(Q(t) = n)qe^{-qt}dt &=& \frac{x + iy}{x}E_{k}[e^{-iye_{x}}\textbf{1}(Q(e_{x}) = n)] \\ &=& \frac{x + iy}{x}E_{k}[e^{-iye_{x}}\textbf{1}(Q(e_{x}) = n)\textbf{1}(\tau_{s} \leq e_{x})] \\ &=& \frac{x + iy}{x}E_{k}[e^{-iy(e_{x} - \tau_{s} + \tau_{s})}\textbf{1}(Q(e_{x} - \tau_{s} + \tau_{s}) = n)\textbf{1}(e_{x} \geq \tau_{s})] \\ &=& \frac{x + iy}{x}E_{s}[e^{-iye_{x}}\textbf{1}(Q(e_{x}) = n)]E_{k}[e^{-iy\tau_{s}}\textbf{1}(\tau_{s} \leq e_{x})] \\ &=& E_{k}[e^{-q\tau_{s}}]\int_{0}^{\infty}P_{s}(Q(t) = n)qe^{-qt}dt \end{eqnarray*} where the fourth equality holds by the strong Markov property. \noindent \textbf{Case 2}: $k > s$, $n > s$. This case is much more interesting, since it is possible for our process to go from $k$ to $n$ without ever reaching level $s$ in $[0,e_q]$. Proceeding in the same manner as in Case 1 yields \begin{eqnarray*} P_{k}(Q(e_q) = n) &=& P_{k}(Q(e_q) = n, \tau_{s} \leq e_q) + P_{k}(Q(e_q) = n, \tau_{s} > e_q) \\ &=& P_{s}(Q(e_q) = n)E_{k}[e^{-q \tau_{s}}] + \sum_{l=s+1}^{\min\{n,k\}}P_{k}(Q(e_q) = n \mid \inf_{0 \leq u \leq e_q}Q(u) = l)P_{k}(\inf_{0 \leq u \leq e_q}Q(u) = l). \end{eqnarray*} These terms are computable: first note that \begin{eqnarray*} P_{k}(\inf_{0 \leq u \leq e_q}Q(u) = l) &=& P_{k}(\tau_{l} \leq e_q) - P_{k}(\tau_{l-1} \leq e_q) \\ &=& E_{k}[e^{-q \tau_{l}}] - E_{k}[e^{-q \tau_{l-1}}] \\ &=& \psi(q)^{k-l} - \psi(q)^{k - l + 1} \end{eqnarray*} and from Theorem \ref{mainreflectedresult}, we find that conditional on $\inf_{0 \leq u \leq e_q}Q(u) = l$, $Q$ behaves as an $M/M/1$ queue on $[0,e_q]$ with arrival rate $\lambda$ and service rate $s\mu$. Hence, \begin{eqnarray*} P_{l}(Q(e_q) = n \mid \inf_{0 \leq u \leq e_q}Q(u) = l) = \left(1 - \frac{\lambda \psi(q)}{s\mu}\right)\left(\frac{\lambda \psi(q)}{s \mu}\right)^{n - l}. \end{eqnarray*} \noindent \textbf{Case 3}: $0 \leq k < s$, $n \geq s$. This case is analogous to Case 1: here \begin{eqnarray*} P_{k}(Q(e_q) = n) = P_{s}(Q(e_q) = n)E_{k}[e^{-q \tau_{s}}]. \end{eqnarray*} Now we can use Lemma \ref{MMinftyhittingtimelemma} to express $E_{k}[e^{-q \tau_{s}}]$ in terms of Kummer functions. \noindent \textbf{Case 4}: $0 \leq k < s$, $n < s$. As expected, this case is analogous to Case 2, but the expression here is more complicated than in the other cases. Here \begin{eqnarray*} P_{k}(Q(e_q) = n) &=& P_{k}(Q(e_q) = n, \tau_{s} \leq e_q) + P_{k}(Q(e_q) = n, \tau_{s} > e_q) \\ &=& P_{s}(Q(e_q) = n)E_{k}[e^{-q \tau_{s}}] + \sum_{l=\max\{k,n\}}^{s-1}P_{k}(Q(e_q) = n \mid \sup_{0 \leq u \leq e_q}Q(u) = l)P_{k}(\sup_{0 \leq u \leq e_q}Q(u) = l). \end{eqnarray*} However, we again observe that \begin{eqnarray*} P_{k}(\sup_{0 \leq u \leq e_q}Q(u) = l) = E_{k}[e^{-q \tau_{l}}] - E_{k}[e^{-q \tau_{l+1}}] \end{eqnarray*} and conditional on $\sup_{0 \leq u \leq e_q}Q(u) = l$, we use Theorem \ref{mainreflectedresult} to deduce that $Q$ behaves as an $M/M/l/l$ queue on $[0,e_q]$, starting at level $l$. This yields \begin{eqnarray*} P_{k}(Q(e_q) = n \mid \sup_{0 \leq u \leq e_q}Q(u) = l) &=& \frac{\frac{\rho^{n}}{n!}E_{n}[e^{-q \tau_{l}}]}{\sum_{j=0}^{l}\frac{\rho^{j}}{j!}E_{j}[e^{-q \tau_{l}}]} \end{eqnarray*} implying that this final case is tractable as well, in that it can be expressed in terms of Kummer functions. There is an important lesson to be learned from our calculations of the pmf of $Q(e_q)$.
Given a proper choice of initial point and reference point, the probability mass function of $Q(e_q)$ can be expressed in terms of quantities related to three simpler models: the $M/M/1$ queue, the $M/M/l/l$ queue, and the $M/M/\infty$ queue. Had we chosen a reference point different from $s$, our hitting-time transforms would have been much more difficult to compute. \subsubsection{The $M/M/s/K$ queue} Our factorization identities can also be used to derive the pmf of the $M/M/s/K$ queue-length process at an independent exponential time $e_q$, where $s$ is the number of servers and $K$ the system capacity. By choosing our reference point to be $s$, we mimic the procedure used in the $M/M/s$ case to express the desired pmf in terms of two simpler models: the $M/M/s/s$ queue (which is expressible in terms of $M/M/\infty$ hitting-time transforms), and the $M/M/1/(K-s)$ queue. Note that the relevant hitting-time transforms for the $M/M/1/(K-s)$ queue can be derived from the $M/M/1$ queue, since we can use the pmf of an $M/M/1$ queue at an exponential time to derive the LST of the time it takes us to go from level $j_1$ to level $j_2$ in an $M/M/1$ queue, when $j_1 < j_2$. Such a result can then be used to derive all of the corresponding hitting-time transforms for an $M/M/1/(K-s)$ queue. \subsubsection{Time-dependent moments} It is possible to make use of the factorization identities to derive the moments of $Q(e_q)$ as well. To illustrate the main idea, we first suppose that $\{Q(t); t \geq 0\}$ represents an $M/M/1$ queue-length process, with arrival rate $\lambda$ and service rate $\mu$. It has been shown in Abate and Whitt \cite{AbateWhittMM11} that, for each $t \geq 0$, \begin{eqnarray*} E[Q(t) \mid Q(0) = 0] = \frac{\rho}{1 - \rho}P(R_{\tau} \leq t) \end{eqnarray*} where $\tau$ represents the busy period of an $M/M/1$ queue, and $R_{\tau}$ represents the residual busy period, i.e. for each $t > 0$, \begin{eqnarray*} P(R_{\tau} > t) = \frac{1}{E[\tau]}\int_{t}^{\infty}P(\tau > x)dx. \end{eqnarray*} Letting $e_q$ be an exponential r.v. with rate $q > 0$, independent of $Q$, gives \begin{eqnarray*} E[Q(e_q) \mid Q(0) = 0] &=& \frac{\rho}{1 - \rho}E[e^{-q R_{\tau}}] \\ &=& \frac{\rho}{1 - \rho}\frac{1 - E[e^{-q \tau}]}{qE[\tau]} \\ &=& \frac{\lambda(1 - E[e^{-q \tau}])}{q} \end{eqnarray*} which implies that the first moment of $Q(e_q)$ is tractable, assuming we start in state 0. Our factorization identities can now be used to compute the first moment of $Q(e_q)$, for any initial condition. Suppose that $Q(0) = n_0 \geq 0$. Then \begin{eqnarray*} E[Q(e_q) \mid Q(0) = n_0] &=& E[Q(e_q) \mid \inf_{0 \leq s \leq e_q}Q(s) = 0, Q(0) = n_0]P(\inf_{0 \leq s \leq e_q}Q(s) = 0 \mid Q(0) = n_0) \\ &+& \sum_{k=1}^{n_0}E[Q(e_q) \mid \inf_{0 \leq s \leq e_q}Q(s) = k, Q(0) = n_0]P(\inf_{0 \leq s \leq e_q}Q(s) = k \mid Q(0) = n_0) \\ &=& E[Q(e_q) \mid Q(0) = 0]P(\inf_{0 \leq s \leq e_q}Q(s) = 0 \mid Q(0) = n_0) \\ &+& \sum_{k=1}^{n_0}(E[Q(e_q) \mid Q(0) = 0] + k)P(\inf_{0 \leq s \leq e_q}Q(s) = k \mid Q(0) = n_0) \\ &=& E[Q(e_q) \mid Q(0) = 0] + \sum_{k=1}^{n_0}k P(\inf_{0 \leq s \leq e_q}Q(s) = k \mid Q(0) = n_0) \\ &=& \frac{\lambda(1 - E[e^{-q \tau}])}{q} + \sum_{k=1}^{n_0}k \psi(q)^{n_0 - k}(1 - \psi(q)).
\end{eqnarray*} The key step in this derivation is the second equality: if $\inf_{0 \leq s \leq e_q}Q(s) = k$, then Theorem \ref{mainreflectedresult} tells us that $Q(e_q)$ is equal in distribution to an $M/M/1$ queue on the states $\{k, k+1, k+2, \ldots\}$ with arrival rate $\lambda$ and service rate $\mu$. This result agrees with the result given in \cite{AbateWhittMM12}, and also in \cite{FralixRiano}. With a bit of patience, higher moments can also be computed through the use of this approach, but there are better ways to do this for the $M/M/1$ model: see \cite{FralixRiano} for details. An analogous procedure can be used to compute the moments of $Q(e_q)$, for more complicated processes. Suppose now that $\{Q(t); t \geq 0\}$ represents the queue-length process of an $M/M/s$ queue, with arrival rate $\lambda$ and service rate $\mu$, and $s$ servers. While the transient moments of the $M/M/s$ queue have been studied in Marcell\'an and P\'erez \cite{MarcellanPerez}, the point here is to show how to construct the moments from simpler birth-death processes. The key to computing the moments of $Q(e_q)$ for an arbitrary initial condition is to first compute the moments, while assuming that $Q(0) = s$, since we will want to again use $s$ as a reference point when we apply Theorem \ref{mainreflectedresult}. Again, since $Q$ is a reversible process, we can say that \begin{eqnarray*} E[Q(e_q) \mid Q(0) = s] = \pi_{0}(q)\sum_{k = 0}^{s}k E_{k}[e^{-q \tau_{s}}]\frac{\rho^{k}}{k!} + \pi_{0}(q)\frac{\rho^{s}}{s!}\sum_{k = s+1}^{\infty}k E_{k}[e^{-q \tau_{s}}](\rho/s)^{k-s} \end{eqnarray*} with \begin{eqnarray*} \pi_{0}(q) = \left[\sum_{k=0}^{s}E_{k}[e^{-q \tau_s}]\frac{(\rho)^{k}}{k!} + \sum_{k = s + 1}^{\infty}\frac{(\rho)^{s}}{s!}\left(\rho/s\right)^{k-s}E_{k}[e^{-q \tau_s}]\right]^{-1} \end{eqnarray*} being the normalizing constant. There are a few observations here worth noting. First, notice that \begin{eqnarray*} \pi_{0}(q)\sum_{k=0}^{s}k E_{k}[e^{-q \tau_{s}}]\frac{\rho^{k}}{k!} &=& P_{s}(Q(e_q) \leq s)E_{s}[Q_{M/M/s/s}(e_q)] \end{eqnarray*} where $Q_{M/M/s/s}$ represents an $M/M/s/s$ loss model with arrival rate $\lambda$, service rate $\mu$, and $s$ servers, and this is a known expected value; see Abate and Whitt \cite{AbateWhittErlangLoss} for details. Second, we see that \begin{eqnarray*} \pi_{0}(q)\frac{\rho^{s}}{s!}\sum_{k = s+1}^{\infty}k E_{k}[e^{-q \tau_{s}}](\rho/s)^{k-s} &=& \pi_{0}(q)\frac{\rho^{s}}{s!}\sum_{k = s+1}^{\infty}(k-s) E_{k}[e^{-q \tau_{s}}](\rho/s)^{k-s} \\ &+& \pi_{0}(q)\frac{\rho^{s}}{s!}\sum_{k = s+1}^{\infty}s E_{k}[e^{-q \tau_{s}}](\rho/s)^{k-s} \\ &=& P_{s}(Q(e_q) \geq s)E_{0}[Q_{M/M/1}(e_q)] + sP_{s}(Q(e_q) \geq s)P_{0}(Q_{M/M/1}(e_q) \geq 1) \end{eqnarray*} where $Q_{M/M/1}$ represents an $M/M/1$ queue with arrival rate $\lambda$ and service rate $s\mu$. Thus, we conclude that $E[Q(e_q) \mid Q(0) = s]$ is a quantity that can be computed. To get $E[Q(e_q) \mid Q(0) = i]$ for an arbitrary $i \geq 0$, we now invoke Theorem \ref{mainreflectedresult}. Suppose first that $i < s$. 
Then \begin{eqnarray*} E[Q(e_q) \mid Q(0) = i] &=& \sum_{j=i}^{s-1}E[Q(e_q) \mid \sup_{0 \leq s \leq e_q}Q(s) = j, Q(0) = i]P(\sup_{0 \leq s \leq e_q}Q(s) = j \mid Q(0) = i) \\ &+& E[Q(e_q) \mid Q(0) = s]P(\tau_{s} \leq e_q) \end{eqnarray*} and we observe from Theorem \ref{mainreflectedresult} that, conditional on $\sup_{0 \leq s \leq e_q}Q(s) = j$, $Q(e_q)$ behaves as an $M/M/j/j$ queue on $\{0,1,2,\ldots,j\}$, meaning \begin{eqnarray*} E[Q(e_q) \mid \sup_{0 \leq s \leq e_q}Q(s) = j, Q(0) = i] &=& E_{j}[Q_{M/M/j/j}(e_q)]. \end{eqnarray*} All of the other terms in the sum are, for similar reasons, also tractable. A similar argument can be used to derive $E[Q(e_q) \mid Q(0) = i]$ for $i > s$; we omit the details. We also point out that a similar argument can be used to derive moment expressions for the $M/M/s$ queue with exponential reneging, i.e. the $M/M/s-M$ queue, which is the model studied in Garnett et al. \cite{GarnettMandelbaumReiman}. Such moments would be decomposed into components from an $M/M/s/s$ queue, and a $M/M/1-M$ queue, and the $M/M/1-M$ queue moments have recently been studied in \cite{FralixReneging}. \subsection{Diffusion processes} The factorization identities can also be used to establish similar expressions for diffusion processes. We illustrate how the procedure works by applying it to a classical reflected diffusion: regulated Brownian motion. \subsubsection{Regulated Brownian motion} Suppose that $\{B(t); t \geq 0\}$ represents a Brownian motion, with drift $\mu = -1$ and volatility $\sigma^2 = 1$. We are interested in understanding the time-dependent behavior of $\{R(t); t \geq 0\}$, where \begin{eqnarray*} R(t) = B(t) - \inf_{0 \leq u \leq t}\min(B(u),0) \end{eqnarray*} i.e. $R$ is the one-sided reflection of $B$. Granted, since $B$ is a L\'evy process, we can already use the Wiener-Hopf factorization to derive the Laplace-Stieltjes transform of $R(e_q)$. However, we will instead be interested in showing how our factorization identities can also be used to derive the probability density function of $R(e_q)$. To derive this pdf, we will need to know a bit about the distribution of the hitting times associated with a Brownian motion. Following the classical argument of applying the optional sampling theorem to the Wald martingale, we see that \begin{eqnarray*} E_{x}[e^{-q \tau_{0}}] = e^{-(-1 + \sqrt{1 + 2q})x}. \end{eqnarray*} Moreover, $R$ has a unique stationary distribution $\pi$, where $\pi(dx) = 2e^{-2x}dx$. We will now compute the density of $R(e_q)$, given $R(0) = x_0$: we denote this density at the point $x$ as $f_{R(e_q)}(x; x_0)$. Again, we will need to break the calculation up into cases. Considering first the case where $x > x_0$, we may use Theorem \ref{mainreflectedresult}, along with a weak-convergence argument to show that \begin{eqnarray*} P_{x_0}(R(e_q) > x) &=& E_{x_0}[e^{-q \tau_{0}}]\frac{\int_{x}^{\infty}E_{y}[e^{-q \tau_{0}}]\pi(dy)}{\int_{0}^{\infty}E_{y}[e^{-q \tau_{0}}]\pi(dy)} \\ &+& \int_{0}^{x_0} \frac{\int_{x}^{\infty}E_{y}[e^{-q \tau_{0}}]\pi(dy)}{\int_{z}^{\infty}E_{y}[e^{-q \tau_{0}}]\pi(dy)}dP(\inf_{0 \leq u \leq e_q}R(u) \leq z). \end{eqnarray*} Careful readers will note that this identity is valid for a large class of reflected diffusion processes (namely, those processes that are expressible as a scaling-limit of a sequence of birth-death processes), not just for regulated Brownian motion. 
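As a quick numerical sanity check of the hitting-time transform quoted above, the closed form $E_{x}[e^{-q \tau_{0}}] = e^{-(-1 + \sqrt{1 + 2q})x}$ can be compared against a crude Euler--Monte Carlo estimate. The following sketch is purely illustrative and is not part of the derivation; the values of $x$, $q$, the step size, the time horizon, and the number of sample paths are arbitrary choices.
\begin{verbatim}
# Illustrative sketch (not from the paper): Monte Carlo check of
# E_x[exp(-q*tau_0)] = exp(-(-1+sqrt(1+2q))*x) for a Brownian motion with
# drift -1 and variance 1 started at x > 0.  All numerical choices below
# (x, q, dt, horizon, number of paths) are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
x, q = 1.0, 0.5
dt, n_steps, n_paths = 0.01, 1_000, 10_000   # time horizon = n_steps*dt = 10

increments = rng.normal(loc=-dt, scale=np.sqrt(dt), size=(n_paths, n_steps))
paths = x + np.cumsum(increments, axis=1)

# first time the discretized path reaches level 0 (inf if not hit by horizon)
hit = paths <= 0.0
steps_to_hit = np.where(hit.any(axis=1), hit.argmax(axis=1) + 1, np.inf)
tau = steps_to_hit * dt

mc_estimate = np.mean(np.where(np.isfinite(tau), np.exp(-q * tau), 0.0))
closed_form = np.exp(-(-1.0 + np.sqrt(1.0 + 2.0 * q)) * x)
print(mc_estimate, closed_form)
\end{verbatim}
Up to discretization bias and sampling error, the two printed numbers agree.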
Success in using this identity for a given diffusion depends on the tractability of both the hitting-time transforms and the integrals containing them. For $x \geq 0$, we can use our expressions for both the hitting-time LST and the stationary distribution to show that \begin{eqnarray*} \int_{x}^{\infty}E_{y}[e^{-q \tau_{0}}]\pi(dy) &=& \int_{x}^{\infty}e^{-(-1 + \sqrt{1 + 2q})y}2e^{-2y}dy \\ &=& \frac{2}{1 + \sqrt{1 + 2q}}e^{-(1 + \sqrt{1 + 2q})x}. \end{eqnarray*} Also, for $0 < z < x_0$, \begin{eqnarray*} P_{x_0}(\inf_{0 \leq u \leq e_q}R(u) \leq z) &=& P_{x_0}(\tau_{z} \leq e_q) \\ &=& E_{x_0}[e^{-q \tau_{z}}] \\ &=& E_{x_0 - z}[e^{-q \tau_{0}}] \\ &=& e^{-(-1 + \sqrt{1 + 2q})(x_0 - z)} \end{eqnarray*} so for positive $z$, we find that the density of $\inf_{0 \leq u \leq e_q}R(u)$ is just \begin{eqnarray*} dP_{x_0}(\inf_{0 \leq u \leq e_q}R(u) \leq z) = (-1 + \sqrt{1 + 2q})e^{-(-1 + \sqrt{1 + 2q})x_0}e^{(-1 + \sqrt{1 + 2q})z}dz. \end{eqnarray*} Plugging everything in, we can now say that \begin{eqnarray*} P_{x_0}(R(e_q) > x) &=& e^{-(-1 + \sqrt{1 + 2q})x_0}e^{-(1 + \sqrt{1 + 2q})x} \\ &+& \int_{0}^{x_0}e^{-(1 + \sqrt{1 + 2q})x}e^{(1 + \sqrt{1 + 2q})z}(-1 + \sqrt{1 + 2q})e^{-(-1 + \sqrt{1 + 2q})x_0}e^{(-1 + \sqrt{1 + 2q})z}dz \\ &=& e^{-(-1 + \sqrt{1 + 2q})x_0}e^{-(1 + \sqrt{1 + 2q})x}\left[1 + \frac{(-1 + \sqrt{1 + 2q})}{2\sqrt{1 + 2q}}\left[e^{2\sqrt{1 + 2q}x_0} - 1\right]\right] \end{eqnarray*} and so after taking derivatives and multiplying by $(-1)$, we find that the transient density of $R(e_q)$, for $x > x_0$, is just \begin{eqnarray*} f_{R(e_q)}(x; x_0) &=& (1 + \sqrt{1 + 2q})e^{-(-1 + \sqrt{1 + 2q})x_0}e^{-(1 + \sqrt{1 + 2q})x} \\ &+& \frac{q}{\sqrt{1 + 2q}}e^{-(-1 + \sqrt{1 + 2q})x_0}e^{-(1 + \sqrt{1 + 2q})x}\left[e^{2\sqrt{1 + 2q}x_0} - 1\right]. \end{eqnarray*} We will now focus on computing $f_{R(e_q)}(x; x_0)$, for $x < x_0$. After applying our weak-convergence results, we see that \begin{eqnarray*} P_{x_0}(R(e_q) > x) &=& 1 - E_{x_0 - x}[e^{-q \tau_{0}}] + E_{x_0}[e^{-q \tau_{0}}] \frac{\int_{x}^{\infty}E_{y}[e^{-q \tau_{0}}]\pi(dy)}{\int_{0}^{\infty}E_{y}[e^{-q \tau_{0}}]\pi(dy)} \\ &+& \int_{0}^{x}\frac{\int_{x}^{\infty}E_{y}[e^{-q \tau_{0}}]\pi(dy)}{\int_{z}^{\infty}E_{y}[e^{-q \tau_{0}}]\pi(dy)}dP_{x_0}(\inf_{0 \leq u \leq e_q}R(u) \leq z). \end{eqnarray*} Evaluating this quantity and then taking derivatives shows that the transient density of $R(e_q)$, for $x < x_0$, is just \begin{eqnarray*} f_{R(e_q)}(x;x_0) &=& (-1 + \sqrt{1 + 2q})e^{-(-1 + \sqrt{1 + 2q})x_0}e^{-(1 - \sqrt{1 + 2q})x} \\ &+& (1 + \sqrt{1 + 2q})e^{-(-1 + \sqrt{1 + 2q})x_0}e^{-(1 + \sqrt{1 + 2q})x} \\ &+& \frac{(1 - \sqrt{1 + 2q})(-1 + \sqrt{1 + 2q})}{2\sqrt{1 + 2q}}e^{-(-1 + \sqrt{1 + 2q})x_0}e^{-(1 - \sqrt{1 + 2q})x} \\ &-& \frac{q}{\sqrt{1 + 2q}}e^{-(-1 + \sqrt{1 + 2q})x_0}e^{-(1 + \sqrt{1 + 2q})x} \\ &=& \frac{q}{\sqrt{1 + 2q}}e^{-(-1 + \sqrt{1 + 2q})x_0}e^{-(1 - \sqrt{1 + 2q})x} \\ &+& \frac{\sqrt{1 + 2q} + 1 + q}{\sqrt{1 + 2q}}e^{-(-1 + \sqrt{1 + 2q})x_0}e^{-(1 + \sqrt{1 + 2q})x}. \end{eqnarray*} \appendix \section{Palm measures} Throughout this paper, we assume that all of our random elements reside on a probability space $(\Omega, \mathcal{F}, P)$, where $\Omega$ represents a complete, separable metric space, $\mathcal{F}$ the Borel $\sigma$-field generated by the open sets of the metric, and $P$ a probability measure on $\mathcal{F}$. These additional restrictions will be needed in order to properly define a collection of Palm measures, which are used to derive our main result.
The reader should not be alarmed by such restrictions, as the space $D[0, \infty)$ endowed with the proper choice of Skorohod metric is a complete, separable metric space, and many queueing processes (and stochastic processes in general) can reside on such a space. Moreover, $\mathbb{R}_{+}$ is used to represent the nonnegative real line, and $\mathcal{B}$ the Borel $\sigma$-field generated by the open sets of $\mathbb{R_{+}}$. Let $N := \{N(t); t \geq 0\}$ represent a point process on the nonnegative real line, with mean measure $\mu$, where $\mu(A) = E[N(A)] < \infty$ for all bounded $A \in \mathcal{B}$. Under such assumptions, it is known that $N$ induces a $\mu$-a.e. unique probability kernel $\mathcal{P}: \mathbb{R}_{+} \times \mathcal{F} \rightarrow [0,1]$, where for each fixed $E \in \mathcal{F}$, $\mathcal{P}_{s}(E)$ is a Borel measurable function in $s$, and for each fixed $s \in \mathbb{R}_{+}$, $\mathcal{P}_{s}$ is a probability measure on $\mathcal{F}$. The probability distributions of this kernel are referred to as the Palm measures of $N$, and these are defined to be the measures that satisfy the following condition: for each $B \in \mathcal{B}$, and each $A \in \mathcal{F}$, \begin{eqnarray} \label{Palmdefinition} E[N(B)\textbf{1}_{A}] = \int_{B}\mathcal{P}_{s}(A)\mu(ds). \end{eqnarray} An important consequence of equation (\ref{Palmdefinition}) is the Campbell-Mecke formula; see for instance Kallenberg \cite{KallenbergRM}. The proof of this formula follows from applying a monotone class argument to (\ref{Palmdefinition}). \begin{theorem} (Campbell-Mecke formula) For any measurable stochastic process $\{X(t); t \geq 0\}$, we find that \begin{eqnarray*} E\left[\int_{0}^{\infty}X(s)N(ds)\right] = \int_{0}^{\infty}\mathcal{E}_{s}[X(s)]\mu(ds) \end{eqnarray*} where $\mathcal{E}_{s}$ represents expectation, under the probability measure $\mathcal{P}_{s}$. \end{theorem} Throughout, we say that a stochastic process is measurable if it is measurable with respect to the $\sigma$-field $\mathcal{A}$, which is generated by sets of the form $A \times C$, where $A \in \mathcal{B}$, and $C \in \mathcal{F}$, i.e. if for each $B \in \mathcal{B}$, $\{(t, \omega); X(t,\omega) \in B\} \in \mathcal{A}$. The Campbell-Mecke formula is a very important, fundamental result in the theory of Palm measures, and is typically the main tool used when applying Palm measures to a given problem. Readers wishing to consult a rigorous treatment of such measures are referred to Chapters 10-12 of \cite{KallenbergRM}: other classical references on point process theory include the series of textbooks by Daley and Vere-Jones \cite{DaleyVereJones1, DaleyVereJones2}. A collection of sub-$\sigma$-fields $\{\mathcal{F}_{s}; s \geq 0\}$ of $\mathcal{F}$ is said to be a filtration, if for each $s < t$, $\mathcal{F}_{s} \subset \mathcal{F}_{t}$. We say that a stochastic process $\{X(t); t \geq 0\}$ is adapted to the filtration if, for each $t \geq 0$, $X(t)$ is measurable with respect to $\mathcal{F}_t$. Associated with a filtration is a collection of $\sigma$-fields $\{\mathcal{F}_{s-}; s > 0\}$, where $\mathcal{F}_{s-}$ is the smallest $\sigma$-field containing all $\sigma$-fields $\mathcal{F}_{r}$, for $r < s$. These are standard concepts within stochastic calculus, and can be found in virtually any textbook on the subject. Some examples of textbooks that focus on point processes, and include such concepts, are Br\'emaud \cite{BremaudPP} and Baccelli and Br\'emaud \cite{BaccelliBremaud}. 
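Before stating the result we need, we record a simple illustration of the Campbell-Mecke formula. Suppose that the measurable process $X$ is nonnegative and independent of $N$. Then, for each $A$ belonging to the $\sigma$-field generated by $X$ and each $B \in \mathcal{B}$, we have $E[N(B)\textbf{1}_{A}] = \mu(B)P(A)$, so comparing with (\ref{Palmdefinition}) shows that $\mathcal{P}_{s}(A) = P(A)$ for $\mu$-almost every $s$. In this case the Campbell-Mecke formula reduces to \begin{eqnarray*} E\left[\int_{0}^{\infty}X(s)N(ds)\right] = \int_{0}^{\infty}E[X(s)]\mu(ds). \end{eqnarray*}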
We are now ready to quote a result that is used to derive the main result of this paper. Suppose $N := \{N(t); t \geq 0\}$ represents a point process on $[0, \infty)$, and suppose $\{\mathcal{F}_{t}; t \geq 0\}$ represents a filtration, to which $N$ is adapted. Within this framework, we say that $N$ is an $\mathcal{F}_{t}$-Poisson process, if (i) $N$ is adapted to the filtration, and (ii) the distribution of $N(a,b]$, conditional on $\mathcal{F}_{a}$, is Poisson with rate \begin{eqnarray*} \mu(a,b] = \int_{(a,b]}\lambda(s)ds \end{eqnarray*} for some deterministic function $\lambda: [0, \infty) \rightarrow [0, \infty)$ (i.e. $N(a,b]$ is independent of $\mathcal{F}_{a}$). Under these conditions, we can apply the following result, which is a corollary of a time-dependent analogue of Papangelou's lemma for point processes; see \cite{FralixRianoSerfozo} for details. \begin{proposition} \label{ASTAprop} If $N$ is an $\mathcal{F}_{t}$-Poisson process, then $\mathcal{P}_t = P$ on $\mathcal{F}_{t-}$, for almost all $t$ (w.r.t. Lebesgue measure). \end{proposition} \noindent \textbf{Acknowledgements} The authors would like to thank an anonymous referee for providing valuable comments on our paper, and for bringing reference \cite{Millar} to our attention. \end{document}
\begin{document} \author[H.~July]{Henry July} \address{Bergische Universit\"at Wuppertal, Fakult\"at 4, Gau\ss stra\ss e 20, 42119 Wuppertal, Germany} \email{\href{mailto:[email protected]}{[email protected]}} \author[A.~St\"abler]{Axel St\"abler} \address{Johannes Gutenberg-Universit\"at Mainz\\ Fachbereich 08\\ Staudingerweg 9\\ 55099 Mainz\\Germany} \email{\href{mailto:[email protected]}{[email protected]}} \keywords{Cartier algebra, Gauge boundedness, Complexity} \subjclass{13A35} \thanks{We thank Manuel Blickle for useful comments on an earlier draft.} \begin{abstract} We show that a gauge bounded Cartier algebra has finite complexity. We also give an example showing that the converse does not hold in general. \end{abstract} \title{Complexity of gauge bounded Cartier algebras} \section{Introduction} The central notion of this note is the following \begin{defi} Let $R$ be an $F$-finite noetherian (commutative) ring. \begin{enumerate}[(i)] \item We denote by \[\mathcal{C}^R=\bigoplus_{e\geq 0}\mathcal{C}^R_e=\bigoplus_{e\geq 0}\Hom_R(F^e_\ast R,R) \] the \emph{total Cartier algebra} of $R$. We note that this is a non-commutative ring, where multiplication $\varphi \cdot \psi$ is defined as $\varphi \circ F_\ast^a \psi$ for $\varphi \in \mathcal{C}_a^R$ and $\psi \in \mathcal{C}_b^R$. Also note that any $\mathcal{C}_e^R$ is an $R$-bimodule where the left module structure is the ordinary one and the right module structure is given by noting that $F_\ast^e R = R$ as rings. These module structures are then related by $\varphi \cdot r^{p^e} = r \cdot \varphi$ for any $\varphi \in \mathcal{C}_e^R$. \item We call a graded subring $\mathcal{D}\subseteq \mathcal{C}^R$ a Cartier subalgebra of $R$, if $\mathcal{D}_0=R$ and $\mathcal{D}_e\neq 0$ holds for some $e>0$. We write $\mathcal{D}_+$ for $ \bigoplus_{e \geq 1} \mathcal{D}_e$. \end{enumerate} \end{defi} Given an $F$-finite field $k$ we fix an isomorphism $k \to F^! k = \Hom_k(F_\ast k, k)$ (where on the right we consider the right $k$-module structure). Denote the adjoint map $F_\ast k \to k$ by $\lambda$. If $R = k[x_1,\ldots, x_n]$ is a polynomial ring over an $F$-finite field, then $\Hom_R(F_\ast^e R, R)$ is generated as a right-$R$-module by the so-called Cartier operator $\kappa^e\colon F_\ast^e R \to R$ \[ \kappa^e(c x^\alpha) = \begin{cases} \lambda^e(c), & \text{ if } \alpha_i = p^e -1 \text { for all } 1 \leq i \leq n, \\ 0, &\text{ if } \alpha_i < p^e -1 \text{ for some } i. \end{cases}\] Little is lost if the reader assumes that $k$ is perfect and $\lambda$ is defined as the $p$th root map. In particular, in the case that $R$ is a polynomial ring, $\mathcal{C}^R$ is generated by $\kappa$ as an $R$-algebra. It is understood that $\mathcal{C}^R$ is not finitely generated in many cases of interest if $R$ is not a Gorenstein ring (\cite{katzmannonfg}, \cite{MR2905024}). Due to this, weaker finiteness notions have been introduced. The first one is that of the \emph{complexity} of the Cartier algebra which loosely speaking measures how fast the number of generators as a right $R$-module grows with the homogeneous degree. \begin{defi}[{see \cite{enescuyaofcomplexity}}] Let $(R, \mathcal{D})$ be as above. \begin{enumerate}[(i)] \item For $e \geq -1$ let $G_e \coloneqq G_e(\mathcal{D})$ be the subring of $\mathcal{D}$ generated by elements of degree $\leq e$. We write $k_e$ for the minimal number of homogeneous generators of $G_e$ and call $(k_e - k_{e-1})_e$ the \emph{complexity sequence} of $\mathcal{D}$. 
\item Provided that the $k_e$ are finite the complexity of $\mathcal{D}$ is defined as \begin{align*} \cx(\mathcal{D})\coloneqq\inf\{n\in \R_{>0}:k_e-k_{e-1}=\mathcal{O}(n^e)\}. \end{align*} We follow the convention that the infimum over $\varnothing$ is $\infty$. \item The \emph{Frobenius exponent} of $\mathcal{D}$ is defined as $\exp_F(\mathcal{D}) = \log_p(cx(\mathcal{D}))$. \end{enumerate} \end{defi} The second finiteness notion is that of \emph{gauge boundedness} (\cite{blickletestidealsvia}). This notion is very useful in proving various desired properties for test ideal filtrations. From now on we let $S = k[x_1, \ldots, x_n]$ be a polynomial ring in $n$ variables over an $F$-finite field $k$. \begin{defi} \label{gaugebound} \begin{enumerate}[(i)] \item For each $d \geq 0$ let $S_d$ be the $k$-subspace of $S$ which is generated by all monomials $x_1^{\alpha_1}\cdots x_n^{\alpha_n}$ such that $0\leq \alpha_1,...,\alpha_n\leq d$. We will loosely refer to this as the \emph{maximum norm}. Let $I \subseteq S$ be an ideal and $R = S/I$. The $S_d$ induce an increasing filtration on $R$ by setting $R_{- \infty} = 0$ and $R_d = S_d \cdot 1_R$ for $d \geq 0$. For $r \in R$ we define a \emph{gauge} $\delta\colon R \to \mathbb{N} \cup \{- \infty\}$ via \begin{align*} \delta(r)=\begin{cases} \ -\infty, &\textit{if}\ r=0,\\ \ d,\ &\textit{if}\ r\neq 0\ \textit{and}\ r\in R_d\smallsetminus R_{d-1}.\end{cases} \end{align*} \item Let $\mathcal{D} \subseteq \mathcal{C}^R$ be a Cartier algebra. We say that $\mathcal{D}$ (or the pair $(R, \mathcal{D})$) is \emph{gauge bounded} if there is a set \[\{ \psi_i \, \vert \, \psi_i \in \mathcal{D}_{e_i}, e_i \geq 1 \}\] which generates $\mathcal{D}_{+}$ as a right $R$-module and a constant $K$ such that \[\delta(\psi_i(r)) \leq \frac{\delta(r)}{p^{e_i}} + K \] for all $r \in R$. \end{enumerate} \end{defi} Overall, the notion of gauge boundedness is still poorly understood. While there are examples of non-gauge bounded Cartier algebras (\cite{blicklestaeblerfuntestmodules}), it is unknown whether the full Cartier algebra $\mathcal{C}^R$ for a quotient $R$ of $S$ is always gauge bounded or not. Our goal in this note is to show that gauge boundedness implies finite complexity and that the converse is not true. We also give an elementary proof of a result of Enescu--Perez (\cite[Theorem 3.9]{enescuperezfexponent}) at the expense of a worse bound. \section{The result} Recall that throughout $S = k[x_1,\ldots, x_n]$ is a polynomial ring over an $F$-finite field $k$. When dealing with $\mathcal{C}^R$ for $R = S/I$ we have by Fedder (\cite{fedderisom}) an isomorphism of $R$-modules \[\Psi\colon \frac{(I^{[p^e]}:I)}{I^{[p^e]}} \longrightarrow \Hom_R(F_\ast^e R, R). \] In particular, for a Cartier subalgebra $\mathcal{D} \subseteq \mathcal{C}^R$ we may also study the action of the Cartier algebra \[\bigoplus_{e \geq 0} \kappa^e J_e \] on $R$, where $\kappa$ is a generator of $\Hom_S(F_\ast S, S)$ and $J_e = \Psi^{-1}(\mathcal{D}_e)$. \begin{lem} \label{keylemma} Let $I \subset S$ be an ideal, $R = S/I$ and $\mathcal{D} \subseteq \mathcal{C}^R$ a subalgebra on $R$. For each $e$ we fix a minimal system of generators $(f_1, \ldots, f_{a_e})$ of $J_e$ and denote the maximum of the $\deg f_i$ by $d(J_e)$. If $d(J_e) \leq K p^{te}$ for some constants $K$ and $t$, then the complexity sequence $(k_e - k_{e-1})$ is in $\mathcal{O}(p^{etn})$. 
\end{lem} \begin{proof} We have the following inequality where for the first equality we simply count the number of monomials of degree $\leq Kp^{te}$ in $S$ \begin{align*} k_{e}-k_{e-1}&\leq k_e\leq \sum_{d=0}^{Kp^{te}}\dim_kS_d =\binom{n+Kp^{te}}{Kp^{te}}\\&=\frac{(n+Kp^{te})\cdots(Kp^{te}+1)}{n(n-1)\cdots 1}\\&=\left(1+\frac{Kp^{te}}{n}\right)\left(1+\frac{Kp^{te}}{n-1}\right)\cdots \left(1+Kp^{te}\right)\\&\leq (Kp^{te}+1)^n\\& \leq K'p^{te n} \end{align*} for some constant $K'$. Thus $(k_e - k_{e-1}) \in \mathcal{O}(p^{etn})$ as claimed. \end{proof} We can now give an elementary proof of \cite[Theorem 3.9]{enescuperezfexponent} with a worse bound (they achieve $\dim R$ instead of $\dim S = n$). \begin{theorem} \label{elementaryproof}Let $I \subset S$ be an ideal, $R = S/I$ and $\mathcal{D} \subseteq \mathcal{C}^R$ a subalgebra on $R$. With notation as above, if $d(J_e)=\mathcal{O}(p^{te})$ holds for some fixed $t$ (and some choice of generating system), then \begin{align*} \exp_F(\mathcal{D})\leq t\cdot n. \end{align*} \end{theorem} \begin{proof} From \Cref{keylemma} we get $k_e-k_{e-1}=\mathcal{O}(p^{ten})$ and therefore also $\cx(\mathcal{D})\leq p^{tn}$. Hence, $\exp_F(\mathcal{D})=\log_p(\cx(\mathcal{D}))\leq \log_p(p^{tn})=tn$. \end{proof} \begin{bem} \begin{enumerate}[(a)] \item Note that \cite[Theorem 3.9]{enescuperezfexponent} requires the ideal to be homogeneous whereas we do not need this assumption. Alternatively, one can also avoid this by arguing as in \cite[Remark 2.3]{MR3192605} to obtain the bound $t (\dim R +1)$ via \cite[Theorem 3.9]{enescuperezfexponent}. \item The strategy to apply \Cref{elementaryproof} above is to use \cite[Lemma 3.2, Theorem 3.3]{MR3192605} as outlined in \cite{enescuperezfexponent}. Unfortunately, the proof of \cite[Lemma 3.2]{MR3192605} contains a gap. Namely, it is not clear why equation (5) on page $3525$ holds. Hence, it is at present not known whether there are interesting cases where we can apply \Cref{elementaryproof}. \end{enumerate} \end{bem} \begin{theorem} Let $I \subseteq S$ be an ideal. If $(R=S/I,\mathcal{D})$ is gauge bounded, then $\cx(\mathcal{D}) \leq p^n$. \end{theorem} \begin{proof} First, we assert \begin{claim} \label{claim1} There are generators $f_1, \ldots, f_m$ of $J_e$ such that $\delta(f_i) \leq K p^e$ where $K$ is a constant independent of $e$. \end{claim} Assuming this claim we proceed as follows. Using the bound obtained from \Cref{claim1} and an argument just as in \Cref{keylemma} (with the maximum norm instead of the degree) yields \[ k_e -k_{e-1} \leq K' p^{en}\] for some constant $K'$. Hence, $\cx(\mathcal{D})\leq p^n$. It remains to prove \cref{claim1}: We fix a generating system $\{\psi_{\gamma}\}$ as in \Cref{gaugebound} and $e \geq 1$. Further, choose $\psi_1, \ldots, \psi_r$ in our generating system which generate $\mathcal{D}_e$. As explained before \Cref{keylemma} $\psi_i$ is of the form $\kappa^e f_i$ and the $f_i$ form a system of generators of $J_e$. Our goal is to bound the gauges of these $f_i$. We therefore fix one and omit the index. Let $\alpha$ be a multiindex such that $c x^\alpha$ is a monomial of $f$ with $\delta(f) = \delta(c x^{\alpha})$. Write $\alpha = p^e r + \alpha'$ for unique $r, \alpha' \in \mathbb{N}^n$ with $0 \leq \alpha'_i \leq p^e -1$.
Then \begin{align*} \kappa^e\left(c x^\alpha x^{p^e - 1 - \alpha'}\right) = \kappa^e \left( c x^{p^e r + p^e - 1}\right) = x^r \kappa^e\left(c x^{p^e -1}\right) = x^r \kappa^e(c) \end{align*} and therefore \[ \delta\left(\kappa^e\left(c x^\alpha x^{p^e - 1 - \alpha'}\right)\right) = \max \{ r_1, \ldots, r_n\} \eqqcolon r_\mathrm{max}.\] On the other hand, we have \begin{align*} \delta \left( \kappa^e \left( cx^{\alpha} x^{p^e -1 -\alpha'} \right) \right) = \delta\left( \kappa^e \left(f x^{p^e -1 -\alpha'}\right) \right) &= \delta \left( \psi_i\left(x^{p^e -1 - \alpha'}\right)\right)\\ &\leq \frac{\delta\big( x^{p^e-1 - \alpha'} \big)}{p^e} + K\\ &\leq \frac{p^e -1}{p^e} +K \leq 1 + K, \end{align*} where the first inequality holds due to \Cref{gaugebound} (ii). Hence, $r_\mathrm{max} \leq 1 + K$ which is independent of $e$. Thus all $\alpha'_i$ and $r_i p^{e}$ are in $\mathcal{O}(p^e)$. Hence also all $\alpha_i$ which proves \cref{claim1}. \end{proof} We now come to the promised example showing that there are Cartier algebras of finite complexity which are not gauge bounded. \begin{bei} Let $R=k[x,y]$ for an $F$-finite field $k$ of prime characteristic $p$. It is verified in \cite[Example 7.13]{blicklestaeblerfuntestmodules} that \[ \mathcal{D} \coloneqq \bigoplus_{e \geq 0} \kappa^e \mathfrak{a}_e\] with $\mathfrak{a}_0 = R$ and $\mathfrak{a}_e = (x^2, xy^{ep^e})$ for $e \geq 1$ is a Cartier algebra which is not gauge bounded. Note that in \cite{blicklestaeblerfuntestmodules} the gauge is induced by the degree, but we may just as well use the one induced by the maximum norm as in \Cref{gaugebound} (i). However, the complexity of $\mathcal{D}$ is clearly one. \end{bei} \end{document}
\begin{document} \title{A study on cost behaviors of binary classification measures in class-imbalanced problems} \author{Bao-Gang Hu,~\IEEEmembership{Senior Member,~IEEE}, ~ Wei-Ming Dong, ~\IEEEmembership{Member,~IEEE} \IEEEcompsocitemizethanks{\IEEEcompsocthanksitem B.-G. Hu and W.-M. Dong are with NLPR/LIAMA, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China.\protect\\ E-mail: [email protected] \protect\\ E-mail: [email protected] } \thanks{}} \IEEEcompsoctitleabstractindextext{ \begin{abstract} This work investigates the cost behaviors of binary classification measures in the context of class-imbalanced problems. Twelve performance measures are studied, such as the $F$ measure, G-means in terms of accuracy rates and in terms of recall and precision, the balance error rate ($BER$), the Matthews correlation coefficient ($MCC$), and the Kappa coefficient ($\kappa$). A new perspective is presented for those measures by revealing their cost functions with respect to the class imbalance ratio. Basically, they are described by four types of cost functions. These functions provide a theoretical understanding of why some measures are suitable for dealing with class-imbalanced problems. Based on their cost functions, we are able to conclude that the G-mean of accuracy rates and $BER$ are suitable measures because they show {\it ``proper''} cost behaviors in the sense of {\it ``a misclassification from a small class will cause a greater cost than that from a large class''}. On the contrary, the $F_1$ measure, the G-mean of recall and precision, and the $MCC$ and $\kappa$ measures do not produce such behaviors, so they are unsuitable for dealing with these problems properly. \end{abstract} \begin{keywords} Binary classification, class imbalance, performance, measures, cost functions \end{keywords}} \maketitle \IEEEdisplaynotcompsoctitleabstractindextext \IEEEpeerreviewmaketitle \section{Introduction} Class-imbalanced problems have become more common and more serious with the emergence of {\it ``Big Data''} processing. The primary reason is that useful information is generally represented by a minority class. Therefore, the {\it class-imbalance} (or {\it skewness}) {\it ratio} between a {\it majority} class and a {\it minority} one can be extremely large \cite{Chawla}. Another reason arises from the use of the {\it ``one-versus-rest''} binary classification scheme for fast processing of multiple classes \cite{Akata}. Generally, the greater the number of classes, the larger the class-imbalance ratio. While most investigations in conventional classification apply the {\it accuracy} (or {\it error}) {\it rate} as a learning criterion, this performance measure is no longer appropriate for dealing with highly-imbalanced datasets \cite{He}. To address class-imbalanced problems properly, {\it cost-sensitive learning} has been proposed, in which users are required to specify the costs according to error types \cite{Elkan}. At the same time, other investigations apply {\it ``proper"} measures \cite{Weiss}, or learning criteria, which do not require information about costs. Those measures, such as $F$-measures, $AUC$ and $G$-means, are considered to represent {\it cost-free learning} \cite{Zhang}. Significant progress has been reported on using those measures \cite{Kubat, Daskalaki, Huang, Menon}. Within these classification studies, however, we consider that the two important issues below are still theoretically unclear, that is: {\it I.
Why are some measures successful in dealing with highly-imbalanced datasets? II. What are the functional behaviors of binary classification measures when the class-imbalance ratio increases? } The questions above form the motivation of this work. In principle, we can view that any classification measure implies cost information, even if one does not specify it explicitly. Take the error rate measure as an example. When this measure is set as a learning criterion in binary classifications, a {\it ``zero-one''} cost function is given to the criterion \cite{Duda}. This function assigns an equal cost to the errors from the two classes. Therefore, a new perspective based on cost behaviors is proposed in this study in order to answer these questions. Twelve measures are selected in this study of binary classifications. The rest of this brief paper is organized as follows. In Section II, we discuss two levels of evaluations in the selection of measures. Twelve measures in binary classifications are presented in Section III. Their cost functions are derived in Section IV. We demonstrate numerical examples in Section V. The conclusions are given in Section VI. \section{Function-based vs performance-based evaluations} This section will discuss measure selection in classifications. Fig. 1 shows two levels of evaluations, namely, {\it function-based} and {\it performance-based} evaluations. From an application viewpoint, the performance-based evaluation seems more common because it can provide a fast and overall picture of the candidate measures. One typical investigation is that of Ferri et al. \cite{Ferri} on eighteen performance measures over thirty datasets. However, this kind of investigation generally produces performance responses not only to the measures, but also to the data and the associated learning algorithms. Therefore, conclusions from the performance-based evaluation may change with different datasets. Due to the coupling feature in the performance responses, one may fail to obtain the intrinsic properties of the measures. We consider the function-based evaluation to be more fundamental for measure selection. This evaluation will reveal {\it function} (or {\it property}) {\it differences} among the measures. Without involving any learning algorithm or noisy data, one is able to obtain the intrinsic properties of the measures. The properties can vary depending on the specific concerns, such as ROC isometrics \cite{Flach}, statistical properties of the AUC measure \cite{Ling}, and monotonicity and error-type differentiability \cite{Leichter}. According to a specific property, one is able to see why one measure is more {\it ``proper"} than the others. The findings from the function-based evaluation will be independent of the learning algorithms and datasets. In this work, we will focus on a specific property which is not well studied in the function-based evaluation. Suppose that any binary classification measure produces cost functions in an implicit form. We consider a measure to be {\it ``proper''} for processing class-imbalanced problems only when it holds a {\it ``desirable''} property, namely that {\it ``a misclassification from a small class will cause a greater cost than that from a large class''} \cite{Hu}. We call this property a {\it ``meta measure''} because it describes high-level or qualitative knowledge about a specific measure.
If a binary classification measure satisfies (or does not satisfy) the meta measure, we call it {\it ``proper''} (or {\it ``improper''}). The examination in terms of the meta-measure enables clarification of the intrinsic causes of performance differences among classification measures. \section{Two-class measures} A binary classification is considered in this work, and it is given by a {\it confusion matrix} $\textbf{C}$ in a form of: \begin{equation} \textbf{C} = \left[ {\begin{array}{*{20}c} {TN} & {FP } \\ {FN} & {TP } \\ \end{array}} \right], \end{equation} where ``\emph{TN}", ``\emph{TP}", ``\emph{FN}", ``\emph{FP}", represent ``\emph{true negative}" , ``\emph{true positive}", ``\emph{false negative}", ``\emph{false positive}", respectively. Suppose $N$ ($= TN+TP+FN+FP$) to be the {\it total number} of samples in the classification. The confusion matrix can be shown in the other form: \begin{equation} \textbf{C} = N \left[ {\begin{array}{*{20}c} {CR_1} & {E_1 } \\ {E_2} & {CR_2} \\ \end{array}} \right], \end{equation} where $CR_1$, $CR_2$, $E_1$, and $E_2$ are the {\it correct recognition rates} and {\it error rates} \cite {Hu} of Class 1 and Class 2, respectively. They are defined by: \begin{equation} CR_1= \frac{TN} {N}, ~CR_2= \frac{TP} {N}, \end{equation} \begin{equation} E_1= \frac{FP} {N} , ~E_2= \frac{FN} {N} , \end{equation} and form the relations to the {\it population rates} by: \begin{equation} p_1=CR_1+E_1, ~p_2=CR_2+E_2. \end{equation} From the non-negative terms in the confusion matrix, one can get the following constraints: \begin{equation} \begin{array}{r@{\quad}l} & 0 < p_1 <1, ~~ 0 < p_2 <1, ~ p_1+p_2=1 \\ & 0 \leq E_1 \leq p_1, ~ 0 \leq E_2 \leq p_2. \end{array} \end{equation} Twelve measures are investigated in this work. The first measure is the {\it total accuracy rate}: \begin{equation} A_T= \frac{TN+TP} {N} = 1 -E_1-E_2. \end{equation} In this work, we will adopt the notions of four means (Fig. 2), namely, {\it Arithmetic Mean}, {\it Geometric Mean}, {\it Quadratic Mean} and {\it Harmonic Mean}, in constructions of performance measures. From the definitions of {\it precision} ($P$) and {\it recall} ($R$): \begin{equation} P= \frac{TP} {TP+FP}=\frac{CR_2} {CR_2+E_1},~ R= \frac{CR_2} {p_2}, \end{equation} one can obtain four precision-recall-based means: \begin{equation} A_{PR}=(P+R)/2. \end{equation} \begin{equation} G_{PR}= \sqrt{PR}. \end{equation} \begin{equation} Q_{PR}= \sqrt{\frac{P^2 + R ^2} {2} }. \end{equation} \begin{equation} H_{PR}=F_1= 2 \frac{P*R} {P+R}. \end{equation} Eq. (12) shows that $F_1$ measure is the harmonic mean of precision and recall. More definitions are given below \begin{equation} \begin{array}{r@{\quad}l} & A_1=TNR= Specificity = \frac{TN} {TN+FP}= \frac{CR_1} {p_1}, \\ & A_2=TPR= Sensitivity = \frac{TP} {TP+FN} = \frac{CR_2} {p_2}=R, \end{array} \end{equation} where the {\it accuracy rate of the first class} ($A_1$) can also be called {\it true negative rate} ($TNR$) or {\it specificity}; the {\it accuracy rate of the second class} ($A_2$) called {\it true positive rate} ($TPR$), {\it sensitivity} or {\it recall}. In this work, we adopt the term of {\it accuracy rate of the $i$th class} ($A_i$) because it is extendable if multiple-class problems are considered. The relation between the total accuracy rate and the accuracy rate of the $i$th class is \begin{equation} A_T =p_1*A_1+p_2*A_2. \end{equation} Then, four accuracy-rate-based means are formed as: \begin{equation} A_{A_i}=AUC_b=(A_1+A_2)/2. 
\end{equation} \begin{equation} G_{A_i}= \sqrt{A_1 * A_2}. \end{equation} \begin{equation} Q_{A_i}= \sqrt{\frac{A_1^2 + A_2 ^2} {2} }. \end{equation} \begin{equation} H_{A_i}= 2 \frac{A_1*A_2} {A_1+A_2}. \end{equation} In eq. (15), $AUC_b$ is the {\it area under the curve} ($AUC$) for a single classification point in the $ROC$ curve. $AUC_b$ is also called {\it balanced accuracy} \cite{Velez}. Three other measures are also received attentions. The {\it balance error rate} ($BER$) is given in a form of: \begin{equation} BER= \frac{1} {2}( \frac{E_1} {p_1} + \frac{E_2} {p_2}). \end{equation} The {\it Matthews correlation coefficient} ($MCC$) is given by: \begin{equation} MCC= \frac{TP*TN-FP*FN} {\sqrt{p_1 p_2 N^2(TN+FN)(TP+FP)}}. \end{equation} The {\it Kappa coefficient} ($ \kappa $) is given by: \begin{equation} \begin{array}{r@{\quad}l} & \kappa = \frac{Pr(a)-Pr(e)} {1-Pr(e)}, \\ & Pr(a)= \frac{TN+TP} {N}, \\ & Pr(e)= p_1 \frac{TN+FN} {N} +p_2\frac{TP+FP} {N} . \end{array} \end{equation} One needs to note that the first ten measures are given in a range of [0, 1], and the last two measures, $MCC$ and $\kappa$, are within a range of [-1,1]. When the four precision-recall-based measures do not take the true negative rate into account, all other measures do. Some measures above may be not well adopted in applications. We investigate them for the reason of a comparative study. \section{Cost functions of measures} The risk of binary classifications can be described by \cite{Duda}: \begin{equation} Risk = \lambda_{11}CR_1 + \lambda_{12}E_1 + \lambda_{22}CR_2 + \lambda_{21}E_2, \end{equation} where $\lambda_{ij}$ is a cost term for the true class of a pattern to be $i$, but be misclassified as $j$. In the cost sensitive learning, the cost terms are generally assigned with constants \cite{Elkan}. However, we consider all costs in binary classifications can be described in a function form of $\lambda_{ij} (\textbf{v})$, where \textbf{v} is a variable vector. The size of the vector will be discussed later. We call $\lambda_{ij} (\textbf{v})$ {\it ``cost function''}, or {\it ``equivalent cost''} if it is not given explicitly. In the derivation of cost functions of the given measures, we make several assumptions below: \begin{itemize} \item[] $\cal {A}$1. The basic information to derive the cost functions is a confusion matrix in a binary classification problem without a reject option. \item[] $\cal {A}$2. The population rate of the second class $p_2$ corresponds to the minority class, that is, $p_2<0.5$. Hence, $p_1$ corresponds to the majority class, \item[] $\cal {A}$3. For simplifying analysis without losing generality, we assume $\lambda_{11}=\lambda_{22}=0$. Therefore, only $\lambda_{12} (\textbf{v})$ and $\lambda_{21} (\textbf{v})$ are considered, but required to be non-negative ($\geq 0$) for $Risk \geq 0$. \item[] $\cal {A}$4. When the exact cost function cannot be obtained, the Taylor approximation will be applied by keeping the linear terms, and neglecting the remaining higher-order terms. The function is then denoted by $ \hat{\lambda}_{ij} (\textbf{v})$. \end{itemize} When all the measures, except $BER$, are given in a maximum sense to the task of classifications, we need to transfer them into the minimum sense in the form of eq. (22). This transformation should not destroy the evaluation conclusions. 
For example, we can find an equivalent relation between the total accuracy rate and error rates: \begin{equation} max ~ A_T ~ \Leftrightarrow ~ min ~ {\cal R}isk ~(A_T) = E_1 + E_2 \end{equation} where {\it ``max''} and {\it ``min''} denote the {\it ``maximization''} and {\it ``minimization''} operators, respectively; the symbol ``$\Leftrightarrow$'' stands for {\it ``equivalence''}; and ``${\cal R}isk$'' is the transformation operator. Using the expression of eq. (22), one can immediately obtain the equivalent costs for the accuracy measure, $\lambda_{12}=\lambda_{21}=1 $. These costs take constant values and make no distinction between the two types of errors. However, in most cases, one fails to obtain the exact expressions for $\lambda_{ij}$. One example is given by the general form of the $F$ measure through a transformation \cite{Martino}: \begin{equation} \begin{array}{r@{\quad}l} & max ~ F_\beta = (1+\beta ^2 )\frac{PR} {\beta ^2 P+R} ~ \Leftrightarrow \\ & min ~ {\cal R}isk ~(F_\beta) = \frac{E_1} {p_2-E_2} + \frac{\beta ^2 E_2} {p_2-E_2} , \end{array} \end{equation} from which we can only get so-called {\it ``apparent cost functions''} in a form of: \begin{equation} \lambda_{12}^A=\frac{1} {p_2-E_2}, ~ \lambda_{21}^A= \frac{\beta ^2} {p_2-E_2}. \end{equation} The term {\it ``apparent''} is used because exact functions that are not coupled with $E_i$ may never be obtained from the given measure. Hence, the apparent cost functions in binary classifications without a reject option can be described in a general form of: \begin{equation} \lambda_{ij}^A = \lambda_{ij}^A (E_1, E_2,p_2). \end{equation} From the relations of eqs. (2)-(6), only three independent variables are used in describing the functions. One can apply the {\it ``class imbalance} (or {\it skewness}) {\it ratio"}, $S_r=p_1 / p_2$, to replace the variable $p_2$ for the analysis. The apparent cost functions provide users with analytical power in terms of a complete set of independent variables. However, one is unable to obtain unique representations of the costs, either exact or apparent, for all measures, such as for $G_{Ai}$ or $G_{PR}$. To overcome this difficulty, we adopt the first-order approximation strategy $\cal {A}$4. Therefore, one will get a general form of $ \hat{\lambda}_{ij} (p_2) $ with only a single variable for binary classifications. From the relation \cite{Elkan} of $min ~ Risk \Leftrightarrow min ~ a*Risk + b$, the constants $a$ and $b$ will be removed in the derivation of $ \hat{\lambda}_{ij} (p_2) $, which will not destroy the classification conclusions. Table I lists all the measures and their cost functions or values. Only three measures admit exact solutions for the costs. The other measures, originally given in a maximization sense in classifications, need to be transformed into a minimization sense. Supposing $M$ to be one of those measures, we adopt the following transformation: \begin{equation} {\cal R}isk ~(M) = \frac{1} {M-M_{min}}, \end{equation} where $M_{min}$ is the minimum value of $M$. The transformation above is meaningful in three respects. First, it keeps classification conclusions invariant. Second, it satisfies the assumption of $Risk \geq 0$ because $M-M_{min}\geq0$. Third, it can describe an infinite risk when $M=M_{min}$.
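The measure definitions of Section III and the transformation above are straightforward to evaluate numerically. The following minimal sketch is purely illustrative (it is not part of the analysis, and the confusion-matrix counts in it are hypothetical); it computes several of the twelve measures together with the transformed risk ${\cal R}isk(M)=1/(M-M_{min})$, and checks the identity $BER = 1 - A_{A_i}$.
\begin{verbatim}
# Illustrative sketch only (not part of the original study); the confusion
# matrix below is hypothetical.  It evaluates a few of the twelve measures,
# checks BER = 1 - A_{Ai}, and applies the risk transform 1/(M - M_min).
import math

TN, FP, FN, TP = 900, 10, 40, 50
N = TN + FP + FN + TP
p1, p2 = (TN + FP) / N, (FN + TP) / N        # population rates, eq. (5)
E1, E2 = FP / N, FN / N                      # error rates, eq. (4)

A_T = (TN + TP) / N                          # total accuracy rate, eq. (7)
P, R = TP / (TP + FP), TP / (TP + FN)        # precision and recall, eq. (8)
F1 = 2 * P * R / (P + R)                     # harmonic mean of P and R, eq. (12)
A1, A2 = TN / (TN + FP), TP / (TP + FN)      # per-class accuracy rates, eq. (13)
AUC_b = (A1 + A2) / 2                        # balanced accuracy, eq. (15)
G_Ai = math.sqrt(A1 * A2)                    # G-mean of accuracy rates, eq. (16)
BER = 0.5 * (E1 / p1 + E2 / p2)              # balance error rate, eq. (19)

assert abs(BER - (1 - AUC_b)) < 1e-12        # BER = 1 - A_{Ai}

def transformed_risk(M, M_min=0.0):
    # the transformation above: maximizing M <=> minimizing 1/(M - M_min)
    return 1.0 / (M - M_min)

print(A_T, F1, G_Ai, BER, transformed_risk(F1), transformed_risk(G_Ai))
\end{verbatim}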
\begin{table*}[htbp] \caption{twelve measures and their cost functions.} \centering \begin{tabular}{lllll} \hline Name of measures & Calculation & Cost & When & Remark on\\ $[$Main reference$]$ & formulas & functions & $p_2 \rightarrow 0$ & cost functions\\ \hline \parbox[c][1.1cm]{3.0cm }{Total accuracy rate \cite{Duda}} & $ A_T= 1-E_1-E_2 $ & \parbox[c][1.1cm]{2.7cm }{$ \lambda_{12}=1 $ \\ $ \lambda_{21}=1 $} & \parbox[c][1.1cm]{1.8cm }{$ \lambda_{12}=1 $ \\ $ \lambda_{21}=1 $} & \parbox[c][1.1cm]{2.4cm }{Exact \\ costs} \\ \hline \parbox[c][1.1cm]{3.0cm }{Arithmetic mean of \\ precision and recall \cite{Henderson}} & $A_{PR}=\frac{P+R} {2} $ & \parbox[c][1.1cm]{2.7cm }{$ \hat{\lambda}_{12}=\frac{1} {p_2} $ \\ $ \hat{\lambda}_{21}=\frac{1} {p_2} $} & \parbox[c][1.1cm]{1.8cm }{$ \hat{\lambda}_{12} \rightarrow \infty $ \\ $ \hat{\lambda}_{21} \rightarrow \infty $} & \parbox[c][1.1cm]{2.4cm }{Lower bounds \\ if $E_1 > (2+\sqrt{5})E_2$ } \\ \hline \parbox[c][1.1cm]{3.0cm }{Geometric mean of \\ precision and recall \cite{Daskalaki}} & $G_{PR}= \sqrt{P R}$ & \parbox[c][1.1cm]{2.7cm }{$ \hat{\lambda}_{12}=\frac{1} {p_2} $ \\ $ \hat{\lambda}_{21}=\frac{1} {p_2} $} & \parbox[c][1.1cm]{1.8cm }{$ \hat{\lambda}_{12} \rightarrow \infty $ \\ $ \hat{\lambda}_{21} \rightarrow \infty $} & \parbox[c][1.1cm]{3.1cm }{Lower bounds \\ if $E_1 > (3+2\sqrt{3})E_2$ } \\ \hline \parbox[c][1.1cm]{3.0cm }{Quadratic mean of \\ precision and recall \cite{Kan}} & $Q_{PR}= \sqrt{\frac{P^2 + R ^2} {2} }$ & \parbox[c][1.1cm]{2.7cm }{$ \hat{\lambda}_{12}=\frac{1} {p_2} $ \\ $ \hat{\lambda}_{21}=\frac{1} {p_2} $} & \parbox[c][1.1cm]{1.8cm }{$ \hat{\lambda}_{12} \rightarrow \infty $ \\ $ \hat{\lambda}_{21} \rightarrow \infty $} & \parbox[c][1.1cm]{3.1cm }{Lower bounds \\ if $E_1 > (\frac{5} {3}+\frac{2} {3}\sqrt{7})E_2$ } \\ \hline \parbox[c][1.2cm]{3.0cm }{Harmonic mean of \\ precision and recall \\ (or $F_1$ measure) \cite{Rijsbergen} } & $H_{PR}=F_1 = 2 \frac{P*R} {P+R}$ & \parbox[c][1.1cm]{2.7cm }{$ \hat{\lambda}_{12}=\frac{1} {p_2} $ \\ $ \hat{\lambda}_{21}=\frac{1} {p_2} $} & \parbox[c][1.1cm]{1.8cm }{$ \hat{\lambda}_{12} \rightarrow \infty $ \\ $ \hat{\lambda}_{21} \rightarrow \infty $} & \parbox[c][1.1cm]{2.4cm }{Lower bounds \\ for any $E_i$} \\ \hline \parbox[c][1.1cm]{3.0cm }{Arithmetic mean of \\ accuracy rates \cite{Velez}} & $ A_{A_i}=AUC_b=(A_1+A_2)/2 $ & \parbox[c][1.1cm]{2.7cm }{$ \lambda_{12}=\frac{1} {1-p_2} $ \\ $ \lambda_{21}=\frac{1} {p_2} $} & \parbox[c][1.1cm]{1.8cm }{$ \lambda_{12}=1 $ \\ $ \lambda_{21} \rightarrow \infty $} & \parbox[c][1.1cm]{2.4cm }{Exact \\ functions} \\ \hline \parbox[c][1.1cm]{3.0cm }{Geometric mean of \\ accuracy rates \cite{Kubat}} & $G_{A_i}= \sqrt{A_1 * A_2}$ & \parbox[c][1.1cm]{2.7cm }{$ \hat{\lambda}_{12}=\frac{1} {1-p_2} $ \\ $ \hat{\lambda}_{21}=\frac{1} {p_2} $} & \parbox[c][1.1cm]{1.8cm }{$ \hat{\lambda}_{12} =1 $ \\ $ \hat{\lambda}_{21} \rightarrow \infty $} & \parbox[c][1.1cm]{2.4cm }{Lower bounds \\ for any $E_i$} \\ \hline \parbox[c][1.1cm]{3.0cm }{Quadratic mean of \\ accuracy rates \cite{Liu}} & $ Q_{A_i}= \sqrt{\frac{A_1^2 + A_2 ^2} {2}}$ & \parbox[c][1.1cm]{2.7cm }{$ \hat{\lambda}_{12}=\frac{1} {1-p_2} $ \\ $ \hat{\lambda}_{21}=\frac{1} {p_2} $} & \parbox[c][1.1cm]{1.8cm }{$ \hat{\lambda}_{12} =1 $ \\ $ \hat{\lambda}_{21} \rightarrow \infty $} & \parbox[c][1.1cm]{2.4cm }{Lower bounds \\ for any $E_i$} \\ \hline \parbox[c][1.1cm]{3.0cm }{Harmonic mean of \\ accuracy rates \cite{Kennedy}} & $H_{A_i}= 2 \frac{A_1*A_2} {A_1+A_2}$ & \parbox[c][1.1cm]{2.7cm }{$ 
\hat{\lambda}_{12}=\frac{1} {1-p_2} $ \\ $ \hat{\lambda}_{21}=\frac{1} {p_2} $} & \parbox[c][1.1cm]{1.8cm }{$ \hat{\lambda}_{12} =1 $ \\ $ \hat{\lambda}_{21} \rightarrow \infty $} & \parbox[c][1.1cm]{2.4cm }{Lower bounds \\ for any $E_i$} \\ \hline \parbox[c][1.1cm]{3.0cm }{Balance error \\ rate (BER) \cite{Guyon}} & $BER= \frac{1} {2}( \frac{E_1} {p_1} + \frac{E_2} {p_2})$ & \parbox[c][1.1cm]{2.7cm }{$ \lambda_{12}=\frac{1} {1-p_2}$ \\ $ \lambda_{21}=\frac{1} {p_2} $} & \parbox[c][1.1cm]{1.8cm }{$ \lambda_{12} =1 $ \\ $ \lambda_{21} \rightarrow \infty $} & \parbox[c][1.1cm]{2.4cm }{Exact \\ functions} \\ \hline \parbox[c][1.1cm]{3.0cm }{Matthews correlation \\ coefficient (MCC) \cite{Baldi}} & $MCC= \frac{TP*TN-FP*FN} {\sqrt{p_1 p_2 N^2(TN+FN)(TP+FP)}}$ & \parbox[c][1.1cm]{2.7cm }{$ \hat{\lambda}_{12}=\frac{1} {p_2(1-p_2)} $ \\ $ \hat{\lambda}_{21}=\frac{1} {p_2(1-p_2)} $} & \parbox[c][1.1cm]{1.8cm }{$ \hat{\lambda}_{12} \rightarrow \infty $ \\ $ \hat{\lambda}_{21} \rightarrow \infty $} & \parbox[c][1.1cm]{3.1cm }{Unknown for \\ bound features } \\ \hline \parbox[c][1.1cm]{3.0cm }{Kappa coefficient ($\kappa$) \\ \cite{Cohen}} & $\kappa= \frac{TN+TP-p_1(TN+FN)-p_2(TP+FP)} {N-p_1(TN+FN)-p_2(TP+FP)}$ & \parbox[c][1.1cm]{2.7cm }{$ \hat{\lambda}_{12}=\frac{1} {p_2(1-p_2)} $ \\ $ \hat{\lambda}_{21}=\frac{1} {p_2(1-p_2)} $} & \parbox[c][1.1cm]{1.8cm }{$ \hat{\lambda}_{12} \rightarrow \infty $ \\ $ \hat{\lambda}_{21} \rightarrow \infty $} & \parbox[c][1.1cm]{3.1cm }{Unknown for \\ bound features } \\ \hline \end{tabular} \end{table*} From Table I, one can observe that the all measures investigated in this work can be classified by four types of cost functions. Fig. 3 depicts the functions with respect to a single independent variable $p_2$. We will discuss the cost behaviors according to the function types first, and then the specific measures. Type I: $\lambda_{12}=\lambda_{21}=\lambda > 0 $. The costs are positive constants with equality. The classification solutions will be independent of the constant values of costs whenever their equality relation holds. According to the meta measure, this feature suggests that the total accuracy (or error) rate measure be {\it ``improper"} for dealing with class-imbalanced problems. Type II: $ {\lambda}_{12}={\lambda}_{21}=\frac{1} {p_2} $. Within this type of cost functions, both types of errors show the same cost behaviors with respect to the $p_2$. It indicates no distinctions between two types of errors, which can be considered as an {\it ``improper"} feature in class-imbalanced problems. Four measures from the precision-recall-based means demonstrate the same approximation expressions of $ \hat{\lambda}_{12}= \hat{\lambda}_{21}=\frac{1} {p_2} $ as the lower bounds to the exact functions (Table I). However, their approximation rates are different and are not given for the reason of their tedious expressions. The feature of the lower bounds will support the conclusions about the cost behaviors of their exact functions on: $ {\lambda}_{12}$ and ${\lambda}_{21} \rightarrow \infty$ when $ {p_2} \rightarrow 0$. Another important feature is that this type of functions is {\it asymmetric} and imposes more costs on the positive class than on the negative class. For example, from eq. (25), $F_1$ measure shows smaller costs of $ {\lambda}_{12}$ = ${\lambda}_{21} =\frac{1} {1-E_2}$ if $ {p_1} = 0$. Type III: $ {\lambda}_{12}=\frac{1} {1-p_2} , ~{\lambda}_{21}=\frac{1} {p_2} $. 
This type of cost functions shows a {\it ``proper''} feature in processing class-imbalanced problems, because it satisfies the meta measure. One can observe that in Fig. 3, when $p_2$ decreases, Type II error will receive a higher cost than Type I error. Only when two classes are equal (also called {\it ``balanced''}), two types of errors will share the same values of costs. Note that the meta measure implies such requirement. Four measures from the accuracy-rate-based means and $BER$ measure are within this type of the functions. In a study of the cost-sensitive learning, this type of the functions can be viewed a {\it ``rebalance''} approach \cite{Elkan,Weiss,Akata}. The exact solutions of the cost functions inform that $BER$ and $A_{A_i}$ ($=AUC_{b}$) are fully equivalent in classifications. Their equivalency can also be gained from a relation of $BER=1-A_{A_i}$. The other three measures, $G_{A_i}$, $Q_{A_i}$ and $H_{A_i}$, present only approximations to the exact cost functions. Their lower bound features guarantee the cost behaviors of their exact functions on $ {\lambda}_{12}=1$ and ${\lambda}_{21} \rightarrow \infty$ when $ {p_2} \rightarrow 0$. This type of functions shows {\it symmetric} cost behaviors for any class to be a minority. Type IV: $ {\lambda}_{12}= {\lambda}_{21}=\frac{1} {p_2(1-p_2)} $. Both $MCC$ and $\kappa$ measures approximate this type of cost functions. Because the same functions are given for the two types of errors, any measure within this category will be {\it ``improper"} for processing class-imbalanced problems. The functions are {\it symmetric} to either class being a minority. From the context of class-imbalanced problems, one can further aggregate the four types of cost functions within two categories, namely, {\it ``proper cost type"} and {\it ``improper cost type"}. We consider only Type III cost function falls in the proper cost type, and all others belong to the improper cost type. Hence, one can reach the most important finding from the category discussions about each measure. For example, when the two geometric mean measures, $G_{A_i}$ and $G_{PR}$, are applied in the class-imbalanced problems \cite{Kubat, Daskalaki}, respectively, their intrinsic differences are not well disclosed. The present cost function study reveals their property differences about the cost response to the skewness ratio. When $G_{A_i}$ satisfies the desirable feature on the costs, $G_{PR}$ does not hold such feature. To our best knowledge, this theoretical finding has not been reported before. Further finding is gained on $F$ measure. This measure is initially proposed in the area of information retrieval \cite{Rijsbergen} for an overall balance between precision and recall. Recently, $F$ measure is adopted increasingly in the study of class-imbalanced learning \cite{Dembczynski,Ye,Maratea}. When $F$ measure is designed by concerning a {\it positive} ({\it minority}) class correctly without taking the {\it negative} ({\it majority}) class into account directly, it does not mean suitability in processing highly-imbalanced problems. The cost function analysis above confirms that $F$ measure is {\it ``improper"} in either class to be a {\it minority} when its population approximates zero. \section{Numerical examples} For a better understanding of the investigated measures, we present numerical examples within two specific scenarios below. 
{\it Scenario I: Class populations are given.} Within this scenario, only two measures, $BER$ and $F_1$, are considered in the investigation for the following reasons. First, we need to demonstrate the exact cost functions graphically. When $BER$ is qualified to this aspect, $F_1$ can also present the exact cost values when $E_2$ is known in eq. (25). Second, $BER$ and $F_1$ measures are representative to be {\it ``proper cost type"} and {\it ``improper cost type"} respectively in cost functions. They form the {\it baselines} for understanding the other measures. In the numerical examples, we assume the following data: \begin{equation} \begin{split} & N=10000, E_1=0.1, E_2=\frac{p_2} {2} , \\ & p_2=[0.5,0.1,0.05,0.01,0.005,0.001], \end{split} \end{equation} where $p_2$ is given in a vector form to present classification changes, such as from the {\it ``balanced''} to the {\it ``minority''} and {\it ``rare''} stages, respectively. \begin{table*}[htbp] \caption{Solution data of ``$\lambda_{ij}$ vs. $p_2$'' for $BER$ and $F_1$ measures from the given data in eq. (28). } \label{tab:comdist} \centering \setlength{\tabcolsep}{1.345pc} \begin{tabular}{lcccccc} \hline ~~ $p_2$ & 0.500 & 0.100 & 0.050 & 0.010 & 0.005 & 0.001\\ \hline $BER$ & 0.350 & 0.306 & 0.303 & 0.301 & 0.300 & 0.300\\ \cline{2-7} $~~\lambda_{12}$ & 2.000 & 1.111 & 1.053 & 1.010 & 1.005 & 1.001\\ \cline{2-7} $~~\lambda_{21}$ & 2.0 & 10.0 & 20.0 & 100.0& 200.0 & 1000.0\\ \hline $F_1$ & 0.588 & 0.400 & 0.286 & 0.087 & 0.047 & 0.010\\ \cline{2-7} $~~\lambda_{12}$ & 4.0 & 20.0 & 40.0 & 200.0 & 400.0 & 2000.0\\ \cline{2-7} $~~\lambda_{21}$ & 4.0 & 20.0 & 40.0 & 200.0 & 400.0 & 2000.0\\ \hline \end{tabular} \end{table*} Table II shows the solutions to the given data in (28) for both $BER$ and $F_1$ measures. The data of $BER$ and $F_1$ are calculated directly from the equations defined. The data of $\lambda_{ij}$ are the exact values to each measure, respectively. One is able to confirm the correctness of $\lambda_{ij}$ data through the following relations: \begin{equation} BER=\frac{1} {2}(\lambda_{12}*E_1+ \lambda_{21}*E_2). \end{equation} \begin{equation} \frac{1} {F_1} =1+\frac{1} {2}(\lambda_{12}*E_1+ \lambda_{21}*E_2). \end{equation} From the data in Table II, we can depict the plots of ``$\lambda_{ij}$ vs. $p_2$'' for $BER$ and $F_1$ measures (Fig. 4). One can observe that $F_1$ measure is unable to distinct the costs, but produces the same costs on the given data when $p_2$ decreases. Although $F_\beta$ can generate different cost functions shown in (25) when $\beta \ne 1$, the infinity feature still remains in the both cost functions if $p_2=0$. This numerical example is sufficient to conclude that $F_1$, or the other measures having the similar feature, is not suitable for processing class-imbalanced problems. On the contrary, the cost plots of $BER$ measure confirm the theoretical findings in the previous section. Among the twelve measures investigated, the measures within Type III cost functions will exhibit the {\it ``proper"} cost behaviors in compatible with our intuitions for solving class-imbalanced problems. {\it Scenario II: Gaussian distributions are given.} This scenario is designed for a class-imbalance learning. 
A specific set of Gaussian distributions is exactly known, \begin{equation} \begin{split} & \mu_{1}=-1, \mu_{2}=1,\sigma_{1}=\sigma_{2}=1, \\ & p_2=[0.5,0.1,0.01,0.001,0.0001, 0.00001], \end{split} \end{equation} where $ \mu_{i}$ and $\sigma_{i}$ are the mean and standard deviation to the {\it i}th class. Five measures, $A_T$, $BER$, $F_1$, $G_{Ai}$ and $G_{PR}$, are considered for a comparative study. Table III shows the {\it optimum} solutions to the given data in (31) from using the five measures, respectively. Based on the data in Table III, Fig. 5 depicts the plots of ``$\frac {E_2} {p_2} $ vs. $\frac {p_1} {p_2}$'' for the measures. When the class-imbalance ratio $\frac {p_1} {p_2}$ increases, the minority class (or Class 2) is mostly misclassified for measures $A_T$, $F_1$ and $G_{PR}$. The value of $\frac {E_2} {p_2} =1.0$ suggests a {\it complete misclassification} on all samples in Class 2. In comparison, $BER$ and $G_{Ai}$ measures show a small constant value of $\frac {E_2} {p_2} $ ($=0.1587$), which implies a good protection on the minority class. The two measures share the same solutions for the given distribution data in eq. (31). One can show that, when $\sigma_{1} \neq \sigma_{2}$, $BER$ and $G_{Ai}$ will present the different constant values. It can be further proved that all measures in Type III will produce a constant behavior shown in Fig. 5, because their decision boundaries, $x_{b}$, will be independent with the population variables. \begin{table*}[htbp] \caption{Optimum solutions using the five measures respectively to the given data in eq. (31). \newline (The subscripts "max" and "min" stand for maximum and minimum respectively. $x_{b}$ is a decision boundary.)} \label{tab:comdist} \centering \setlength{\tabcolsep}{1.345pc} \begin{tabular}{lcccccc} \hline ~~ $p_2$ & 0.50000 & 0.10000 & 0.01000 & 0.00100 & 0.00010 & 0.00001\\ \hline $(A_T)_{max}$ & 0.8413 & 0.9299 & 0.9905 & 0.9990 & 0.9999 & 0.9999\\ \cline{2-7} $~~x_{b}$ & 0.0 & 1.0986 & 2.2976 & 3.4534 & 4.6051 & 5.7564 \\ \cline{2-7} $E_{1}/p_{1}$ & 1.587e-1 & 1.792e-2 & 4.876e-4 & 4.226e-6 & 1.041e-8 & 7.070e-12\\ \cline{2-7} $E_{2}/p_{2}$ & 0.1587 & 0.5393 & 0.9028 & 0.9929 & 0.9998 & 0.9999\\ \hline $(BER)_{min}$ & 0.1587 & 0.1587 & 0.1587 & 0.1587 & 0.1587 & 0.1587\\ \cline{2-7} $~~x_{b}$ & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ \cline{2-7} $E_{1}/p_{1}$ & 0.1587 & 0.1587 & 0.1587 & 0.1587 & 0.1587 & 0.1587\\ \cline{2-7} $E_{2}/p_{2}$ & 0.1587 & 0.1587 & 0.1587 & 0.1587 & 0.1587 & 0.1587\\ \hline $(F_1)_{max}$ & 0.8443 & 0.6121 & 0.3211 & 0.1291 & 0.0420 & 0.0118\\ \cline{2-7} $~~x_{b}$ & -.1570 & 0.6893 & 1.4705 & 2.1167 & 2.6843 & 3.1948\\ \cline{2-7} $E_{1}/p_{1}$ & 1.996e-1 & 4.557e-2 & 6.746e-3 & 9.145e-4 & 1.147e-4 & 1.365e-5\\ \cline{2-7} $E_{2}/p_{2}$ & 0.1236 & 0.3780 & 0.6810 & 0.8679 & 0.9539 & 0.9859\\ \hline $(G_{Ai})_{max}$ & 0.8413 & 0.8413 & 0.8413 & 0.8413 & 0.8413 & 0.8413\\ \cline{2-7} $~~x_{b}$ & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0\\ \cline{2-7} $E_{1}/p_{1}$ & 0.1587 & 0.1587 & 0.1587 & 0.1587 & 0.1587 & 0.1587\\ \cline{2-7} $E_{2}/p_{2}$ & 0.1587 & 0.1587 & 0.1587 & 0.1587 & 0.1587 & 0.1587\\ \hline $(G_{PR})_{max}$ & 0.8450 & 0.6123 & 0.3211 & 0.1293 & 0.0436 & 0.0139\\ \cline{2-7} $~~x_{b}$ & -.1946 & 0.6697 & 1.4826 & 2.0481 & 2.2840 & 2.3260\\ \cline{2-7} $E_{1}/p_{1}$ & 2.103e-1 & 4.749e-2 & 6.519e-3 & 1.151e-3 & 5.116e-4 & 4.407e-5 \\ \cline{2-7} $E_{2}/p_{2}$ & 0.1161 & 0.3706 & 0.6853 & 0.8527 & 0.9004 & 0.9076\\ \hline \end{tabular} \end{table*} The numerical study in this scenario 
provides counterexamples confirming the general conclusion that $A_T$, $F_1$ and $G_{PR}$ are {\it ``improper"} measures. If {\it ``improper"} measures are set as {\it ``learning targets"} (or {\it ``criteria"}) in highly-imbalanced problems, they may have a deleterious impact on classification quality. The numerical solutions of $BER$ and $G_{Ai}$ support the two measures being {\it ``proper"}, but only for the given data; one is unable to reach a general conclusion about the two measures via numerical studies alone. This scenario study is also a function-based evaluation. If real datasets were used for a performance-based evaluation, inconsistent findings might be introduced by population changes caused by sampling. \section{Conclusions} \label{sec: con} This work aims at developing a theoretical insight into why some performance measures are appropriate, and some are not, for solving class-imbalanced problems. Before reviewing the existing approaches, we discuss the two levels of measure evaluation, namely function-based evaluation and performance-based evaluation. To reveal the intrinsic properties of the measures, we consider the function-based evaluation to be necessary, and we investigate one important aspect which has not been well studied. This aspect is the cost behavior of binary classification measures in terms of the class-imbalance skewness ratio. We adopt the {\it meta measure} in \cite{Hu} to examine whether each measure is {\it ``proper"} or {\it ``improper"} in applications. Twelve measures are studied and their cost functions, either exact or approximate, are derived. Although the cost functions formed from the given measures fall into four types, they are basically of two kinds according to the meta measure. The {\it ``proper"} kind includes the four means on accuracy rates and $BER$ (equivalently including $AUC_b$). The other measures, i.e. $A_T$, the four means on precision and recall (including $F_1$), $MCC$ and $\kappa$, belong to the {\it ``improper"} kind. Through the cost function analysis, one can observe the intrinsic equivalences or differences among the measures. Apart from the measures investigated in this work, one can add other performance or meta measures for a systematic study. From an application viewpoint, we understand that a final selection of measures (or learning criteria) may need to be based on an overall consideration of each aspect in function-based evaluation and performance-based evaluation. The main point raised in this work confirms that {\it ``what to learn (or learning-target selection)"} is the most imperative and primary issue in the study of machine learning. \section*{Acknowledgment} This work is supported in part by NSFC (No. 61273196) for B.-G. Hu, and NSFC (No. 61172104) for W.-M. Dong. \end{document}
\begin{document} \title{A Survey on Optimal Transport for Machine Learning: Theory and Applications} \begin{abstract} Optimal Transport (OT) theory has seen an increasing amount of attention from the computer science community due to its potency and relevance in modeling and machine learning. It provides powerful ways to compare probability distributions with each other, as well as to produce optimal mappings that minimize cost functions. Therefore, it has been deployed in computer vision, improving image retrieval, image interpolation, and semantic correspondence algorithms, as well as in other fields such as domain adaptation, natural language processing, and variational inference. In this survey, we propose to convey the emerging promises of optimal transport methods across various fields, as well as future directions of study for OT in machine learning. We will begin by looking at the history of optimal transport and introducing the founders of this field. We then give a brief glance at the algorithms related to OT. Then, we follow up with a mathematical formulation and the prerequisites needed to understand OT; these include Kantorovich duality, entropic regularization, KL divergence, and Wasserstein barycenters. Since OT is a computationally expensive problem, we then introduce the entropy-regularized version of computing optimal mappings, which has allowed OT to become applicable to a wide range of machine learning problems. In fact, the methods generated from OT theory are competitive with the current state-of-the-art methods. The last portion of this survey analyzes papers that focus on the application of OT within the context of machine learning. We first cover computer vision problems; these include GANs, semantic correspondence, and convolutional Wasserstein distances. Furthermore, we follow this up by breaking down research papers that focus on graph learning, neural architecture search, document representation, and domain adaptation. We close the paper with a small section on future research. Of the recommendations presented, three main problems are fundamental to making OT widely applicable, but they rely strongly on its mathematical formulation and are thus the hardest to answer. Since OT is a novel method, there is plenty of space for new research, and with more and more competitive methods (either on an accuracy level or a computational speed level) being created, the future of applied optimal transport is bright as it becomes pervasive in machine learning. \end{abstract} \keywords{Optimal Transport \and Machine Learning \and Computer Vision \and Wasserstein distance} \section{Introduction} The Optimal Transport problem sits at the intersection of various fields, including probability theory, PDEs, geometry, and optimization theory. Its theory has seen a natural progression since Monge first posed the problem in 1781 \cite{monge1781memoire}. Now, it serves as a powerful tool due to its natural formulation in various contexts. It has recently seen a wide range of applications in computer science--most notably in computer vision, but also in natural language processing and other areas. Different elements such as the Convolutional Wasserstein Distance \cite{solomon2015convolutional} and the Minibatch Energy Distance \cite{arjovsky2017wasserstein} have made significant improvements on image interpolation, heat maps, and GANs.
These are examples of some problems in machine learning that are being recast using Optimal Transport elements, such as Wasserstein distance being used as an error measure for comparing different probability distributions. We note the effectiveness with which optimal transport deals with both discrete and continuous problems and the easy transition between the two classes of problems. The powerful tools from convex geometry and optimization theory have made optimal transport more viable in applications. To that extent, we note the remarkable implementation of Sinkhorn's algorithm to significantly speed up computation of Wasserstein distances \cite{cuturi2013sinkhorn}. Although the theory is well-developed \cite{villani2003topics}, much work is being made in determining the state-of-the-art algorithms for computing optimal transport plans under various conditions. In this survey, we explore the main tools from the theory and summarize some of the major advancements in its application. While it is not all-encompassing, we aim to provide an application-focused summary. The rest of this paper is organized as follows: Section 2 provides an overview of algorithms from different applications and major breakthroughs in computation. Section 3 presents a brief history of the topic. Section 4 details some mathematical formalism. Section 5 reviews ways to overcome the computational challenges. Section 6 and on then explores applications of OT to different fields, most notably in GANs and general image processing. We then conclude with remarks and proposed directions and close with open problems. The interested reader can dive deeper into the rich OT material using some superb books such as \cite{villani2003topics}, \cite{villani2008optimal}, \cite{peyre2019computational}, \cite{santambrogio2015optimal}. \section{OT Algorithms at a Glance} \begin{table} [h] \caption{OT Algorithms in Machine Learning Presented} \label{table} \setlength{\tabcolsep}{10pt} \begin{tabular}{|p{85pt}|p{185pt}|p{90pt}|p{20pt}|} \hline \rule{0pt}{2ex} Application& Publication& Metric Employed& Year\\ \hline \rule{0pt}{2ex} Computations & Sinkhorn Entropy-Reg OT \cite{cuturi2013sinkhorn} & Ent-Reg W-Distance & 2013\\ \rule{0pt}{2ex} Computations & 2-W Barycenters \cite{cuturi2014fast} & Ent-Reg W-Distance & 2014\\ \rule{0pt}{2ex} Comp. Vision & Conv-W Dist \cite{solomon2015convolutional} & Conv-W & 2015\\ \rule{0pt}{2ex} Comp. Vision & WGANs \cite{arjovsky2017wasserstein} & EMD & 2017\\ \rule{0pt}{2ex} Comp. Vision & OT-GAN \cite{salimans2018improving} & MED & 2018\\ \rule{0pt}{2ex} Graphs & GWL \cite{kandasamy2018neural} & Gromov - W Dist & 2018\\ \rule{0pt}{2ex} Domain Adaptation & GCG \cite{flamary2016optimal} & Ent-Reg W-Distance & 2016 \\ \hline \multicolumn{4}{p{444pt}}{An Overview of the algorithms presented in detail. Abbreviations used: Entropy Regularized Wasserstein Distance (Ent-Reg W-Distance), Minibatch Energy Distance (MED), Convolutional Wasserstein Distance (Conv-W), Gromov Wasserstein Distance (Gromov-W Dist) Earth Mover Distance (EMD), Domain Adaptation (Dom. Adap.), 2-Wasserstein (2-W), Gromov-Wasserstein Learning (GWL), Generalized Conditional Gradient (GCG) } \end{tabular} \label{tab1} \end{table} \section{History} The central idea of Optimal Transport (OT) can be found in the work by French geometer Gaspard Monge. 
In his paper, \textit{Mémoire sur la théorie des déblais et des remblais}, published in 1781, Monge asked the question: How do I move a pile of earth (some natural resource) to a target location with the least amount of effort, or cost \cite{monge1781memoire}? The idea was to find a better way of optimizing such cost that was not simply iterating through every possible permutation of supplier vs. receiver and choosing the one with the lowest cost. One of the major breakthroughs following Monge's work was by Russian mathematician Leonid Vitaliyevich Kantorovich who was the founder of linear programming. His research in optimal resource allocation, which earned him his Nobel Prize in Economics, led him to study optimal coupling and duality, thereby recasting some parts of the OT problem into a linear programming problem. Kantorovich's work led to the renaming of optimal coupling between two probability measures as the Monge-Kantorovich problem. After Kantorovich, the field of OT gained traction and its applications expanded to several fields. For example, while John Mather worked on Lagrangian dynamical systems, he developed the theory of action-minimizing stationary measures in phase space, which led to the solution of certain Monge-Kantorovich problems \cite{mather1989minimal}. Although he did not make the connection between his work and OT, Buffoni and Bernard in their paper \textit{Optimal mass transportation and Mather theory} showed the existence of an optimal transport map while studying the "Monge transportation problem when the cost is the action associated to a Lagrangian function on a compact manifold \cite{bernard2004optimal}." Several other names helped expand the field OT. For example, Yann Brenier introduced optimal coupling to his research in incompressible fluid mechanics, thus linking the two fields. Mike Cullen introduced OT in meteorology while working on semi-geostrophic equations. Both Brenier's and Cullen's work brought forth the notion that there is a connection, previously not expected, between OT and PDEs. Fields medalist Cédric Villani also contributed much to the field in connection with his work in statistical mechanics and the Boltzmann equation. Recently, OT is being applied in several fields, including Machine Learning (ML). It started with image processing by utilizing color histograms of images (or gray images) and Wasserstein's distance to compute the similarity between images. Then, it was followed by shape recognition \cite{peleg1989unified, gangbo2000shape, ahmad2003geometry}. For example, in \textit{A Metric for Distributions with Applications to Image Databases}, Rubner et al. introduced a new distance between two distributions, called Earth Mover's Distance (EMD), which reflects the minimal amount of work that must be performed to transform one distribution into the other by moving "distribution mass" around \cite{rubner1998metric, rubner2000earth}. Next, Haker et al. introduced a method for computing elastic registration and warping maps based on the Monge-Kantorovich theory of OT \cite{haker2003monge, haker2004optimal}. Due to the important role of matrix factorization in ML, it was a natural progression to use OT as the divergence component of Nonnegative Matrix Factorization (NMF) \cite{sandler2011nonnegative}. In 2014, Solomon et al., looked at the applications of OT in semi supervised learning in their paper \textit{Wasserstein Propagation for Semi-Supervised Learning} \cite{solomon2014wasserstein}. 
Other applications have been utilizing OT in mappings between distributions; more specifically, a recent paper was published on using Wasserstein's metric in variational inference, which lies at the heart of ML \cite{ambrogioni2018wasserstein}. More recently, researchers have made advancements in the theory of OT with Marco Cuturi proposing methods to solve approximations of the OT problems by introducing a regularization term \cite{cuturi2013sinkhorn}. The field is now more active than ever, with researchers extending the theories that work for low-dimensional ML problems into high-dimensional problems, bringing forth several complex theoretical and algorithmic questions \cite{santambrogiooptimal}. \section{Mathematical Formalism} \subsection{Problem Statement} Given a connected compact Riemannian manifold $M$, \textit{Optimal Transport Plans} (OT plans) offer a way to mathematically formulate the mapping of one probability measure $\mu_0$ \emph{onto} another probability measure $\mu_1$. These plans $\pi$ are couplings that obey mass conservation laws and therefore belong to the set $$ \Pi(\mu_0 , \mu_1 ) = \{ \pi \in \text{Prob}(M \times M) | \pi(\cdot , M) = \mu_0 , \pi(M , \cdot) = \mu_1 \} $$ Here, $\Pi$ is meant to be the set of all joint probabilities that exhibit $\mu_0$ and $\mu_1$ as marginal distributions. The OT plan $\pi(x,y)$ seeks to transport mass from point $x$ to point $y$. This formulation allows for \emph{mass-splitting} which is to say that the optimal transport map can take portions of the mass at point $x$ to multiple points $y_i$. Kantorovich sought to rephrase the Monge question into a minimization of a linear functional \begin{equation} \pi \rightarrow \text{inf} \int_{M \times M} c(x,y) d \pi(x,y) \end{equation} on the nonempty and convex $\Pi$ and appropriate cost function $c$. We note that some formulations accommodating multiple cost have also been proposed, e.g. \cite{scetbon2020handling}. Alternatively, these OT plans will minimize the distance between two measures denoted formally as the \emph{2-Wasserstein Distance}, where $d$ is a metric: \begin{equation} \label{w2metric} W^{d}_{2} (\mu_0, \mu_1) = \inf_{\pi \in \prod(\mu_0 , \mu_1)} \bigg( \int_{M \times M} d(x,y)^2 d \pi (x,y) \bigg)^{1/2} \end{equation} This distance defines a metric\footnote{Here, we mean a metric in the mathematics sense, i.e. a function $d(\cdot,\cdot): M \times M \to \mathbb{R}_+$ that is positive definite, symmetric, and subadditive on a metrizable space $M$. See Appendix A for more details.} as shown in Villani's book \cite{villani2003topics}. This distance metric will be integral to applications as we will see that it offers a new way to define loss functions. The goal is to find, or approximate, the optimal transport plan, $\pi$. \subsection{Kantorovich Duality} Duality arguments are central to both the theoretical and numerical arguments in the OT framework. Kantorovich noticed that the minimization of the linear functional problem emits a dual problem. Here, let $c$ denote a lower semicontinuous cost function, $\mu_0$ and $\mu_1$ denote marginal probabilities, and $\Pi$ be the set of all probability measures on $M \times M$ which emit $\mu_0$ and $\mu_1$ as marginals. 
Then, for continuous $\phi(x), \psi(y)$ satisfying $\phi(x) + \psi(y) \leq c(x,y)$, we have that \begin{equation} \inf\limits_{\pi \in \Pi(\mu_0,\mu_1)} \int_{M \times M} c(x,y) d \pi(x,y) = \sup\limits_{\phi, \psi} \bigg[ \int_M \phi(x) d\mu_0 + \int_M \psi(y) d\mu_1 \bigg] \end{equation} The right-hand side of the equation is known as the \emph{dual problem} of the minimization problem and is a very useful tool in proving consequences regarding optimal transport maps. A proof of this result, along with further discussion, can be found in \cite{villani2003topics}. \subsection{Entropic Regularization} We can define the entropy of a coupling on $M \times M$ by the negative energy functional coming from information theory: \begin{equation}\label{entropy} H(\pi) = - \int \int_{M \times M} \pi (x,y) \ln(\pi(x,y)) dx dy \end{equation} This entropy essentially tracks the information loss of a given estimate versus the true value, as it provides a lower bound for the square loss error. Then, we can consider the entropy-regularized Wasserstein distance: \begin{equation} \label{w2regmetric} W^2_{2,\gamma} (\mu_0 , \mu_1 ) = \inf_{\pi \in \Pi(\mu_0 , \mu_1)} \bigg[ \int \int_{M \times M} d(x,y)^2 d\pi (x,y) - \gamma H(\pi) \bigg] \end{equation} Cuturi proved that this regularized distance offers a transport plan that is more spread out and also offers much faster computational convergence \cite{cuturi2013sinkhorn}. This computational breakthrough will be pivotal in the tractability of Wasserstein-distance-dependent algorithms. \subsection{KL Divergence} A lot of results for optimal transport maps can be related to the familiar KL divergence. If we let $p(x)$ and $q(x)$ be probability densities of a random variable $x$ over a manifold of distributions, then we define the KL divergence as: \begin{equation} \label{kldiv} D_{KL}(p(x)|q(x)) \coloneqq \int p(x) \bigg(\ln \frac{p(x)}{q(x)}\bigg) dx \end{equation} \subsection{Wasserstein barycenters} The barycenter problem is central to the interpolation of points in Euclidean space. Agueh and Carlier present the analog in Wasserstein space, proving its existence and uniqueness and providing characterizations \cite{agueh2011barycenters}. The analog is presented as the solution to the minimization of a convex combination problem \begin{equation} \label{wassbary} \inf_{\mu} \sum\limits_{i=1}^{p} \alpha_i W^2_2(\mu_i,\mu) \end{equation} where $\mu_i$ are probability measures and the $\alpha_i$'s, known as barycentric coordinates, are nonnegative and sum to unity. These conclusions are derived from considering the dual problem and desirable properties of the Legendre-Fenchel transform, as well as conclusions from convex geometry. These barycenters are also uniquely characterized in relation to Brenier maps, which offer a direct formulation as a push-forward operator. Barycenters will play a major role in applications such as the interpolation of images under transport maps as in \cite{solomon2015convolutional}. Computing these barycenters is discussed in the Computational Challenges section. \section{Computational Challenges} One of the biggest challenges in the implementation of optimal transport has been its computational cost. A widely used implementation of Sinkhorn's algorithm, formulated by Cuturi, significantly decreased the computation cost \cite{cuturi2013sinkhorn}.
In the following, $KL$ denotes the Kullback-Leibler divergence, $U$ denotes the transport polytope of transport plans $P$ that emit $r$ and $c$ as marginal distributions: $U(r,c) = \{ P \in \mathbb{R}^{d \times d}_+ | P \mathbbm{1}_d = r, P^T \mathbbm{1}_d = c \}$; and $U_\alpha(r,c) = \{ P\in U(r,c) | KL(P | rc^T) \leq \alpha\} $. We present the discrete version as opposed to the continuous analog presented in equation (\ref{w2regmetric}). Define the Sinkhorn distance as \begin{equation} \label{sinkhorn} d_{M,\alpha}(r,c) \coloneqq \min\limits_{P \in U_\alpha(r,c)} \langle P,M \rangle \end{equation} Then we can introduce an entropy regularization argument stated in a Lagrangian for $\lambda >0 $: \begin{equation} \begin{aligned} \label{Regent} d_M^\lambda (r,c) \coloneqq & \langle P^\lambda , M \rangle, \quad \\ \text{where} \quad P^\lambda = \text{argmin}_{P \in U(r,c)} &\langle P,M \rangle - \frac{1}{\lambda} h(P) \end{aligned} \end{equation} where $h(P) = - \sum\limits_{i,j=1}^d p_{ij} \log(p_{ij})$ is the entropy of $P$. Then, Sinkhorn's famed algorithm serves as a proper approximation tool for the minimum (which, for the unregularized linear program, is attained at a vertex of the polytope), as seen in Algorithm \ref{alg:RegEnt}. Here, a main result proved by Cuturi is used which states that the solution $P^\lambda$ is unique and, moreover, has the particular form $P^\lambda = \textit{diag}(u) K \textit{diag}(v)$, where $u,v$ are two nonnegative vectors that are unique up to constants and $K = e^{- \lambda M}$ denotes the entrywise exponential of $ - \lambda M$. This result is pivotal in further speeding up the computation of (\ref{Regent}). This type of result is also commonly used, for example in \cite{solomon2014wasserstein}, which is explored in Section \ref{cwd}. \LinesNumberedHidden{ \begin{algorithm} \DontPrintSemicolon \caption{\textbf{Computation of the Entropy-Regularized Distances} \\ $d = [d^\lambda_M(r, c_1) ,d^\lambda_M(r, c_2), ... , d^\lambda_M(r, c_N)]$ } \label{alg:RegEnt} Input: $M,\lambda, r, C = [c_1, ... , c_N]$ \\ $I = (r>0) ; r = r(I) ; M = M(I,:); K = exp(- \lambda M)$ \\ $u = ones(length(r),N) / length(r)$ \\ $\hat{K} = diag(1./r) K$ \\ While $u$ changes or any stopping criterion Do \\ $\quad u = 1./(\hat{K}(C./(K^Tu)))$ \\ end while \\ $v = C./ (K^T u)$ \\ $d = sum(u.*((K.*M)v))$ \end{algorithm}} The implementation of Sinkhorn's algorithm to find optimal transport maps has improved the general tractability of OT algorithms. We note the improvement on the problem of computing barycenters in Wasserstein space made by Cuturi and Doucet in \cite{cuturi2014fast}, where they prove the polyhedral convexity of a function that is a discrete analog of (\ref{wassbary}) $$ f(r,X) = \frac{1}{N} \sum\limits_{i=1}^{N} d(r,c_i,M_{XY_i}) $$ where $d(\cdot,\cdot)$ is the previously defined Sinkhorn distance (\ref{sinkhorn}), $r,c$ are the marginal probabilities, and $M_{XY}$ is the pairwise distance matrix. Here, the problem is phrased using the dual linear programming form known as the dual optimal transport problem: $$ d(r,c,M) = \max_{(\alpha,\beta) \in C_M} \alpha^T r + \beta^T c$$ where $C_M$ is the polyhedron of dual variables $$C_M = \{ (\alpha,\beta) \in \mathbb{R}^{n+m} | \alpha_i + \beta_j \leq m_{ij}\}$$ This problem has a solution, and the computation of the barycenters centers on it. While the theoretical groundwork for optimal transport has been laid, efficient algorithms are still needed for it to be implemented at large scale.
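As an illustration of how compact the entropic approach is in practice, the following is a minimal NumPy sketch of the Sinkhorn iteration for a single pair of histograms; the function name \texttt{sinkhorn}, the stopping rule, and the toy data are our own illustrative choices and are not taken verbatim from \cite{cuturi2013sinkhorn}.
\begin{verbatim}
import numpy as np

def sinkhorn(r, c, M, lam, n_iter=1000, tol=1e-9):
    # Entropy-regularized OT in the spirit of Algorithm 1 (illustrative only).
    # r (n,), c (m,): marginal histograms; M (n, m): cost matrix; lam > 0.
    K = np.exp(-lam * M)                 # entrywise Gibbs kernel
    u = np.full(len(r), 1.0 / len(r))
    for _ in range(n_iter):
        v = c / (K.T @ u)                # enforce column marginals
        u_new = r / (K @ v)              # enforce row marginals
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    v = c / (K.T @ u)
    P = u[:, None] * K * v[None, :]      # plan of the form diag(u) K diag(v)
    return float(np.sum(P * M)), P       # regularized transport cost <P, M>

# Toy example: two Gaussian-like histograms on a 1-D grid, squared cost.
x = np.linspace(0.0, 1.0, 50)
M = (x[:, None] - x[None, :]) ** 2
r = np.exp(-(x - 0.3) ** 2 / 0.01); r /= r.sum()
c = np.exp(-(x - 0.7) ** 2 / 0.01); c /= c.sum()
cost, plan = sinkhorn(r, c, M, lam=50.0)
\end{verbatim}
The moderate value of the regularization parameter is chosen here only to keep the Gibbs kernel numerically well-behaved; larger values approximate the unregularized cost more closely at the price of slower, less stable iterations.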
Genevay \emph{et al.} formulate stochastic descent methods for large scale computations, making use of the duality arguments previously presented along with entropic regularization for various cases in \cite{genevay2016stochastic}. Then, Sinkhorn's algorithm plays an important role in the discrete case, while the continuous case is very elegantly dealt with using reproducing kernel Hilbert spaces. For a complete discussion of the numerical methods associated with the OT problem as well as other relevant algorithms, see \cite{vialard2019elementary,merigot2020optimal}. \section{Applications} Here, we hope to bring light to some of the many applications of OT within a machine learning setting. \subsection{Computer Vision} OT finds a natural formulation within the context of computer vision. The common method is to make a probability measure out of color histograms relating to the image. Then, one can find a dissimilarity measure between the images using the Wasserstein distance. Early formulations of OT in computer vision can be found in \cite{rubner2000earth} and in work relating to the Earth Mover's Distance (EMD), which acts as a slightly different discrete version of the 1-Wasserstein distance. A formulation of the EMD on discrete surfaces can be found in \cite{solomon2014earth}. In what follows, we note a use of OT in improving GANs and the Convolutional Wasserstein Distances, which serve well for image interpolation. \subsubsection{OT Meets GANs} Multiple attempts have been made to improve GANs using optimal transport. Arjovsky \emph{et al.} recast the GANs problem into an OT theory problem \cite{arjovsky2017wasserstein}. OT lends itself well to the GANs problem of learning models that generate data like images or text with a distribution that is similar to that of training data. Here in WGANs, we can take two probability measures $\mu_0,\mu_1$ on $M$, with $\mu_1$ being the distribution of a locally Lipschitz $g_\theta(Z)$ acting as a neural network with \emph{nice} convergence properties and with $Z$ a random variable with density $\rho$; the Kantorovich-Rubinstein duality then gives $$ W(\mu_0,\mu_1) = \sup\limits_{\|f\|_{\text{Lip}} \leq 1} \mathbb{E}_{x\sim \mu_0} [f(x)] - \mathbb{E}_{x\sim\mu_1} [f(x)] $$ with the supremum taken over all $1$-Lipschitz functions $f:M \to \mathbb{R}$. It is shown here that there is a solution to this problem with relevant gradient $$ \nabla_\theta W(\mu_0,\mu_1) = - \mathbb{E}_{z \sim \rho} [\nabla_\theta f(g_\theta(z))]$$ wherever both are well-defined. This formulation poses an alternative to the classical GANs and is found to be more stable than its counterparts, especially when dealing with lower-dimensional data. We also note the progress made by Salimans \emph{et al.} \cite{salimans2018improving}, where they improve upon the idea of mini-batches \cite{genevay2017learning} and the use of energy functionals \cite{BellemareDDMLHM17} to introduce an OT variant using the W-distance named the Minibatch Energy Distance: \begin{align*} D^2_{MED} (\mu_0, \mu_1) = 2 \mathbb{E}[W_c(X,Y)] - \mathbb{E}[W_c(X,X')] - \mathbb{E}[W_c(Y,Y')] \end{align*} where $X,X'$ are sampled mini-batches from $\mu_0$ and $Y,Y'$ are sampled mini-batches from $\mu_1$, and $c$ is the optimal transport cost function that is learned adversarially through the alternating gradient descent common to GANs. These algorithms exhibit greater statistical consistency.
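To make the Minibatch Energy Distance concrete, the sketch below estimates $D^2_{MED}$ from a single draw of mini-batches, reusing the \texttt{sinkhorn} routine sketched in the Computational Challenges section as an entropy-regularized surrogate for $W_c$. In \cite{salimans2018improving} the cost $c$ is learned adversarially; here we simply fix it to the squared Euclidean distance, so this is an illustrative toy computation rather than the algorithm of that paper.
\begin{verbatim}
import numpy as np

def pairwise_sq_dists(X, Y):
    # Squared Euclidean cost matrix between two mini-batches of points.
    return ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)

def w_cost(X, Y, lam=1.0):
    # Entropy-regularized surrogate for W_c between empirical mini-batches,
    # reusing the sinkhorn() routine sketched earlier in this survey.
    r = np.full(len(X), 1.0 / len(X))
    c = np.full(len(Y), 1.0 / len(Y))
    cost, _ = sinkhorn(r, c, pairwise_sq_dists(X, Y), lam)
    return cost

def minibatch_energy_distance_sq(X, Xp, Y, Yp):
    # D^2_MED = 2 E[W_c(X,Y)] - E[W_c(X,X')] - E[W_c(Y,Y')],
    # estimated here from a single draw of mini-batches.
    return 2 * w_cost(X, Y) - w_cost(X, Xp) - w_cost(Y, Yp)

rng = np.random.default_rng(0)
X, Xp = rng.normal(0.0, 1.0, (64, 2)), rng.normal(0.0, 1.0, (64, 2))
Y, Yp = rng.normal(0.5, 1.0, (64, 2)), rng.normal(0.5, 1.0, (64, 2))
d2 = minibatch_energy_distance_sq(X, Xp, Y, Yp)
\end{verbatim}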
\subsubsection{Semantic Correspondence} OT is one of the few, if not the only, method that deals with mass-splitting phenomenon which commonly occurs in establishing dense correspondence across semantically similar images. This occurrence is in the form of a many-to-one matching in the assignment of pixels from a source of pixels to a target pixel as well as a one-to-many matching of the same type. The one-to-one matching problem can be recast as an OT problem as done in \cite{liu2020semantic}. Liu \emph{et al.} replace it with maximizing a total correlation where the optimal matching probability is denoted as $$ P^* = \text{argmax}_{P} \sum\limits_{i,j} P_{ij} C_{ij}$$ where $P \in \mathbb{R}^{n \times m}_+, P \mathbbm{1}_n = r, P^T \mathbbm{1}_m = c$ and $r,c$ are marginals in the same vein as in the section on computational challenges. Then, we can call $M = 1-C$ to be the cost matrix. Then, the problem becomes the optimal transport problem $$ P^* = \text{argmin}_P \sum_{i,j} P{ij} M_{ij} $$ where $P \in \mathbb{R}^{n \times m}_+, P \mathbbm{1}_n = r, P^T \mathbbm{1}_m = c$. This problem can then be solved using known algorithms, like those proposed in the computation challenges section. Using the percentage of correct keypoints (PCK) evaluation metric, their proposed algorithm outperformed state-of -the-art algorithms by 7.4 (or 26\%), making it a huge improvement over other methods. \subsubsection{Convolutional Wasserstein Distances} \label{cwd} In \cite{solomon2015convolutional}, Solomon \emph{et al.} propose an algorithm for approximating optimal transport distances across geometric domains. Here, they make use of the entropy-regularized Wasserstein distance given by (\ref{w2regmetric}) for its computational advantages discussed in the Computational Challenges section: \begin{equation} \label{w2regmetric2} \begin{aligned} W^2_{2,\gamma} (\mu_0 , \mu_1 ) = \inf\limits_{\pi \in \Pi(\mu_0 , \mu_1)} \bigg[ \int_{M \times M} d(x,y)^2 d\pi (x,y) - \gamma H(\pi) \bigg] \end{aligned} \end{equation} They use Varadhan's formula \cite{varadhan1967behavior} to approximate the distance $d(x,y)$ by transferring heat from $x$ to $y$ over a short time interval: $$d(x,y)^2 = \lim\limits_{t \to 0} [-2t \ln H_t(x,y) ] $$ where $H_t$ is the heat kernel associated to the geodesic distance $d(x,y)$. Then, we can use this value in a kernel defined by $K_\gamma (x,y) = e^{- \frac{d(x,y)^2}{\gamma}}$. We can conclude through algebraic manipulations that $$ W_{2,\gamma}^2(\mu_0,\mu_1) = \gamma [1 + \min\limits_{\pi \in \Pi} KL(\pi | K_\gamma)] $$ where $KL$ denotes the K-L divergence (\ref{kldiv}). Then, in order to compute the convolutional distances, we can discretize the domain $M$ with function and density vectors $\mathbf{f} \in \mathbb{R}^n$. Then, define area weights vector $\mathbf{a} \in \mathbb{R}^n_+$ with $\mathbf{a}^T \mathbbm{1} = 1$ and a symmetric matrix $\mathbf{H}_t$ discretizing $H_t$ such that $$ \int_M f(x) dx \approx \mathbf{a}^T \mathbf{f} \quad \text{and} \quad \int_M f(y) H_t(\cdot,y)dy \approx \mathbf{H}_t(\mathbf{a} \otimes \mathbf{f} ) $$ Thus we are ready to compute the convolutional Wasserstein distance as in Algorithm \ref{alg:convwass2}. 
\begin{algorithm} \DontPrintSemicolon \caption{\textbf{Convolutional Wasserstein Distance}} \label{alg:convwass2} Input: $\mu_0 , \mu_1 , H_t , a, \gamma$ \; Sinkhorn Iterations: \; $\mathbf{v}, \mathbf{w} \leftarrow 1$ \; for $i = 1,2,3,...$ \; \quad $\mathbf{v} \leftarrow \mu_0 ./ \mathbf{H}_t(\mathbf{a}.*\mathbf{w})$ \; \quad $\mathbf{w} \leftarrow \mu_1 ./ \mathbf{H}_t(\mathbf{a}.*\mathbf{v})$ \; KL Divergence: \; Return $\gamma \mathbf{a}^t[(\mu_0.* \ln(\mathbf{v})) + (\mu_1 .* \ln(\mathbf{w})]$ \end{algorithm} We note the authors' use of the Convolution Wasserstein Distance along with barycenters in the Wasserstein space to implement an image interpolation algorithm. \subsection{Graphs} \label{Graphs} The OT problem also lends itself to the formulation of dissimilarity measures within different contexts. In \cite{kolouri2020wasserstein}, authors developed a fast framework, referred to as WEGL (Wasserstein Embedding for Graph Learning), to embed graphs in a vector space. We find that analogs of the dissimilarity measures can be defined on graphs and manifolds where the source manifold and target manifolds need not be the same. In \cite{xu2019gromov}, Xu et al. propose a new method to solve the joint problem of learning embeddings for associated graph nodes and graph matching. This is done using a regularized Gromov-Wasserstein discrepancy when computing the levels of dissimilarity between graphs. The computed distance allows us to study the topology each of the spaces. The Gromov-Wasserstein discrepancy was proposed by Peyre as a succession to the Gromov-Wasserstein distance which is defined as follows: \\\\ \textbf{Definition:} Let $(X,d_X,\mu_{X})$ and $(Y,d_Y,\mu_{Y})$ be two metric measure spaces, where $(X,d_X)$ is a compact metric space and $\mu_X$ is a probability measure on X (with $(Y,d_Y,\mu_{Y})$ defined in the same way). The Gromov Wasserstein distance $d_{GW}(\mu_X,\mu_Y)$ is defined as \\ \begin{equation*} \inf\limits_{\pi \in \Pi (\mu_X, \mu_Y)}\int\limits_{X \times Y} \int\limits_{X \times Y} L(x,y,x',y')d\pi(x,y)d\pi(x',y'), \end{equation*} where $L(x,y,x',y') = |d_X(x,x') - d_Y(y,y')|$ is the loss function and $\Pi (\mu_X,\mu_Y)$ is the set of all probability measures on X x Y with $\mu_X$ and $\mu_Y$ as marginals. We note that the loss function could be continuous depending on the topology the metric space $X$ is endowed with. At the very least, we would want it to be $\pi$-measurable. \\\\ When $d_x$ and $d_y$ are replaced with dissimilarity measurements rather than strict distance metrics and the loss function \emph{L} is defined more flexibly, the GW distance can be relaxed to the \emph{discrepancy}. From graph theory, a graph is represented by its vertices and edges, \emph{G(V,E)}. If we let a metric-measure space be defined by the pair \emph{\textbf{(C,$\mu$)}}, then we can define the Gromov-Wasserstein discrepancy between two spaces, \emph{\textbf{($C_s,\mu_s$)}} and \emph{\textbf{($C_t,\mu_t$)}}, as: \begin{equation*} \begin{aligned} d_{GW}(\mu_s,\mu_t) &= \min_{T\in \pi (\mu_s, \mu_t)}\sum_{i,j,i',j'}L(c^{s}_{ij},c^{t}_{i'j'})Tii'Tjj' \\ &= \min_{T\in \pi (\mu_s, \mu_t)}\langle L(C_s,C_t,T),T \rangle \end{aligned} \end{equation*} In order to learn the mapping that includes the correspondence between graphs and also the node embeddings, Xu et al. 
proposed the regularized GW discrepancy: \begin{equation*}\begin{aligned} \min\limits_{X_s,X_t}\min\limits_{T\in\Pi(\mu_s,\mu_t)}\langle L(C_s(X_s),C_t(X_t),T),T\rangle +\alpha\langle K(X_s,X_t),T\rangle + \beta R (X_s,X_t) \end{aligned} \end{equation*} To solve this problem, the authors present Algorithm \ref{alg:gwl}. \begin{algorithm} \DontPrintSemicolon \caption{\textbf{Gromov-Wasserstein Learning (GWL)}} \label{alg:gwl} Input: $\{C_s,C_t\}$, $\{\mu_s$,$\mu_t\}$, $\beta$, $\gamma$, the dimension D, the number of outer/inner iterations $\{M,N\}$. \; Output: $X_s$, $X_t$, and $\hat{T}$\; Initialize $X_s^{(0)}$, $X_t^{(0)}$ randomly, $\hat{T}^{(0)}=\mu_s \mu_t^T$.\; For $m=0 : M-1$:\; \quad Set $\alpha_m = \frac{m}{M}$.\; \quad For $n=0 : N-1$\; \quad \quad Update optimal transport $\hat{T}^{(m+1)}$\; \quad Obtain $X_s^{(m+1)}$, $X_t^{(m+1)}$\; $X_s = X_s^{(M)}, X_t = X_t^{(M)}$ and $\hat{T}=\hat{T}^{(M)}.$\; Graph matching:\; Initialize correspondence set $P=\emptyset$\; For $v_i \in V_s$\; \quad $j = \mathrm{arg max}_j \hat{T}_{ij}. P=P\bigcup\{ (v_i \in V_s,v_j \in V_t)\}$. \end{algorithm} The proposed methodology produced matching results that are better than all other comparable methods and opens the opportunity for the improvement of well-known systems (i.e. recommendation systems). We note that the Gromov-Wasserstein discrepancy can also be used to improve GANs, as is done in \cite{bunne2019learning}. Here, Bunne, et al., adapt the generative model to use the Gromov-Wasserstein discrepancy to perform GANs across different types of data. \subsection{Neural Architecture Search} In this section we will look at the following paper: \emph{Neural Architecture Search with Bayesian Optimisation and Optimal Transport} \cite{kandasamy2018neural}. Bayesian Optimization (BO) refers to a set of methods used for optimization of a function $f$, thus making it perfect for solving the \emph{model selection} problem over the space of neural architectures. The difficulty posed in BO when dealing with network architecture is figuring out how to quantify \emph{(dis)similarity} between any two networks. To do this, the authors developed what they call a (pseudo-)distance for neural network architectures, called OTMANN (Optimal Transport Metrics for Architectures of Neural Networks). Then, to perform BO over neural network architectures, they created NASBOT, or Neural Architecture Search with Bayesian Optimization and Optimal Transport. To understand their formulation, we first look at the following definitions and terms. First, a Gaussian process is a random process characterized by an expectation function (mean function) $\mu: \chi \rightarrow \mathbb{R}$ and a covariance (kernel) $\kappa = \chi^2 \rightarrow \mathbb{R}$. In the context of architecture search, having a large $\kappa (x,x')$, where $x,x' \in \chi$ and $\kappa(x,x')$ is the measure of similarity so that $f(x)$ and $f(x')$ are highly correlated; implying the GP imposes a smoothness condition on $f:\chi \rightarrow \mathbb{R}$. Next, the authors view a neural network (NN) as a graph whose vertices are the layers of the network $G =(L,E)$, where $L$ is a set of layers and $E$ the directed edges. Edges are denoted by a pair of layers, $(u,v)\in E$. A layer $u\in L$ is equipped with a layer label $ll(u)$, which denotes the type of operations performed at layer $u$ (i.e. $ll(1) = conv3$ means 3x3 convolutions). Then, the attribute $lu$ denotes the number of computational units in a layer. 
Furthermore, each network has \emph{decision layers}, which are used to obtain the predictions of the network. When networks have more than one decision layer, one considers the average of the output given by each layer. Lastly, each network has an input and output layer, $u_{in}$ and $u_{op}$ respectively; any other layer is denoted as a \emph{processing layer}. \\ Using the definitions above, the authors describe the distance for neural architectures as $d:\chi^2 \rightarrow \mathbb{R}_+$, with the goal of obtaining a kernel for the GP where $\kappa(x,x')=\exp(-\beta d(x,x')^p)$, given that $\beta,p\in\mathbb{R}_+$. We first look at the OTMANN distance. OTMANN is defined as the minimum of a matching scheme which attempts to match the computation at the layers of one network to the layers of another, where penalties occur when different types of operations appear in matched layers. The OTMANN distance is that which minimizes said penalties. Given two networks $G_1(L_1,E_1)$ and $G_2(L_2,E_2)$ with $n_1, n_2$ layers respectively, the OTMANN distance is computed by solving the following optimization problem: \begin{align*} \underset{Z}{\text{minimize}} \hspace{8pt} \phi_{lmm}(Z) + \phi_{nas}(Z) +\nu_{str}\phi_{str}(Z) \\ \text{subject to}\quad \sum\limits_{j\in L_2}Z_{ij}\leq lm(i), \sum\limits_{i\in L_1} Z_{ij} \leq lm(j), \forall i,j \end{align*} In the above problem, $\phi_{lmm}$ is the label mismatch penalty, $\phi_{str}$ is the structural term penalty, $\phi_{nas}$ is the non-assignment penalty, $Z \in \mathbb{R}^{n_1 \times n_2}$ denotes the amount of mass matched between layer $i\in G_1$ and $j\in G_2$, $lm: L \rightarrow \mathbb{R}_+$ is a layer mass, and lastly $\nu_{str} > 0$ determines the trade-off between the structural term and other terms. This problem can be formulated as an Optimal Transport problem, as is proved in the appendix of the paper.\\ Next, we look at NASBOT. The goal here is to use the kernel $\kappa$, as previously mentioned, to define a GP over neural architectures and to find a method to optimize the acquisition function: \begin{equation*} \begin{aligned} \phi_t(x) = \mathbb{E}[\max &\{ 0, f(x) -\tau_{t-1}\}|\{(x_i,y_i)\}_{i=1}^{t-1} ], \\ \tau_{t-1} &= \underset{i\leq t-1}{\max}\hspace{4pt} f(x_i) \end{aligned} \end{equation*} The authors solve this optimization problem using an evolutionary algorithm, whose solution leads to the creation of NASBOT. Detailed explanations of the algorithm and the methodology by which the optimization was solved can be found in the appendix of the original paper. After running an experiment to compare NASBOT against known methods, the authors show that NASBOT consistently had the smallest cross-validation mean squared error. For the interested reader, the paper illustrates the best architectures found for the problems posed in the experiments. \subsection{Document Representation} In this section we will look at the following paper: \emph{Hierarchical Optimal Transport for Document Representation} \cite{yurochkin2019hierarchical}. In this paper, Yurochkin \emph{et al.} combine hierarchical latent structures from topic models with geometry from word embeddings. \emph{Hierarchical} optimal topic transport document distances, referred to as HOTT, combine language information (via word embeddings) with topic distributions from latent Dirichlet allocation (LDA) to measure the similarities between documents.
Given documents $d^1$ and $d^2$, HOTT is defined as: \begin{equation} HOTT(d^1,d^2) = W_1(\sum_{k=1}^{|T|}\bar{d}_k^1 \delta_{t_k}, \sum_{k=1}^{|T|}\bar{d}_k^2 \delta_{t_k} ) \label{HOTT} \end{equation} Here, $\bar{d}^i$ represents document distributions over topics, the Dirac delta $\delta_{t_k}$ is a probability distribution supported on the corresponding topic $t_k$, and $W_1(d^1,d^2) = WMD(d^1,d^2)$ (\emph{WMD} being the Word Mover's Distance). By truncating topics, the authors were able to reduce the computational time and make HOTT a competitive model against common methods. Their experiments show that although there is no uniformly best method, HOTT has on average the smallest error with respect to nBOW (normalized bag of words). More importantly, what was shown was that the process of truncating topics to improve computational time does not hinder the goal of obtaining high-quality distances. Interested readers will find in the paper more detailed reports about the setup and results of the experiments run. \subsection{Domain Adaptation} In this section we will cover \emph{Optimal Transport for Domain Adaptation} \cite{flamary2016optimal}. In their paper, Flamary \emph{et al.} propose a regularized unsupervised optimal transportation model to perform an alignment of the representations in the source and target domains. By learning a transportation plan that matches the source and target PDFs, they constrain labeled samples of the same class to remain grouped during the transport. This helps resolve the discrepancies (known as drift) in data distributions. \\ In real-world problems, the drift that occurs between the source and target domains generally implies a change in marginal and conditional distributions. In this paper, the authors assume the domain drift is due to an unknown, possibly nonlinear transformation of the input space $T: \Omega_s \rightarrow \Omega_t$ (here $\Omega$ is a measurable space, with subscripts $s$ and $t$ denoting source and target). Searching for $T$ directly is intractable and requires restrictions before it can be approximated. Hence, the authors consider the problem of finding $T$ as that of choosing a $T$ which minimizes the transportation cost $C(T)$: \begin{equation} C(T) = \int_{\Omega_s} c(x, T(x))d\mu(x) \end{equation} where $c: \Omega_s \times \Omega_t \rightarrow \mathbb{R}^+$ is a cost function and $\mu$ is the (source) probability measure. \\ This is precisely the optimal transport problem. Then, to further improve the computational aspect of the model, a regularization component that preserves label information and sample neighborhood during the transportation is introduced.
Now, the problem is as follows: \begin{equation} \min\limits_{\pi \in \Pi} \langle\pi,C\rangle_F + \lambda\Omega_s(\pi)+\eta\Omega_c(\pi) \end{equation} where $\lambda \in \mathbb{R}$, $\eta \geq 0$, $\Omega_c(\cdot)$ is a class-based regularization term, and \begin{equation*} \Omega_s(\pi) = \sum_{i,j}\pi(i,j)\log\pi(i,j) \end{equation*} This problem is solved using Algorithm \ref{alg:GenCondGradient}: \begin{algorithm} \DontPrintSemicolon \caption{\textbf{Generalized Conditional Gradient}} \label{alg:GenCondGradient} Initialize: $k=0$, and $\pi^0 \in P$\; repeat \; \quad With $G \in \nabla f(\pi^k)$, solve $\pi^* = \underset{\pi\in B}{\mathrm{argmin}} \langle\pi,G\rangle_F + g(\pi)$ \; \quad Find the optimal step $\alpha^k$, $\alpha^k = \underset{0\leq\alpha\leq 1}{\mathrm{argmin}} f(\pi^k + \alpha\Delta\pi)+g(\pi^k+\alpha\Delta\pi)$, with $\Delta\pi = \pi^* - \pi^k$ \; \quad $\pi^{k+1} \leftarrow \pi^k + \alpha^k\Delta\pi$, set $k \leftarrow k+1$\; until Convergence \; \end{algorithm} In the algorithm above, $f(\pi) = \langle\pi,C\rangle_F + \eta\Omega_c(\pi)$ and $g(\pi)=\lambda\Omega_s(\pi)$. Using the assumption that $\Omega_c$ is differentiable, step 3 of the algorithm becomes \begin{equation*} \pi^* = \underset{\pi\in \Pi}{\mathrm{argmin}} \langle\pi, C+\eta\nabla\Omega_c(\pi^k)\rangle_F+\lambda\Omega_s(\pi) \end{equation*} By using a constrained optimal transport method, the overall performance was better than that of other state-of-the-art methods. Readers can find detailed results in Table 1 of \cite{flamary2016optimal}. For readers interested in domain adaptation, a different approach to studying heterogeneous domain adaptation problems using OT can be found in \cite{yan2018semi}. \section{Future Research} Further research will allow OT to be implemented in more areas and become more widely accepted. The main problem with optimal transport is scaling to higher dimensions. The optimal mappings that need to be solved are currently intractable in high dimensions, which is where most problems of current interest lie. For example, Google's NLP model has roughly 1 trillion parameters; this type of problem is currently outside the scope of OT. Another interesting research topic is the use of optimal transport in approximating intractable distributions. This would compete with currently known methods like the KL divergence and open up interesting opportunities when working with variational inference and/or expectation propagation. Another fundamental area to explore lies with the choice of using the Wasserstein distance. As shown throughout the paper, it is the most commonly used metric, but as one can see in Appendix A, there are various other metrics, or distances, that may be used to replace the W-distance. Interested readers can read more about them in Villani's book, \emph{Optimal transport: old and new} \cite{villani2008optimal}. For further research from an applied perspective, one possibility is the use of the GWL framework explained in Section \ref{Graphs} to improve on recommendation systems. On the other hand, all of the papers we have referenced above are quite novel in their applications and thus they all provide space for continuation or extension into more specific sub-fields within their respective contexts. \section{Concluding Remarks} Throughout this survey, we have shown that Optimal Transport is seeing growing attention within the machine learning community due to its applicability in different areas.
Although OT is becoming widely accepted in the machine learning world, it is deeply rooted in mathematics and so we extracted the most important topics so that interested readers can access only what is needed to have a high-level understanding of what is happening. These excerpts explain Kantorovich duality, entropic regularization, KL divergence, and Wasserstein barycenters. Although the applications of OT span a wide range, it is limited by computational challenges. Within this section we explored how using an entropic regularization term allowed for the formation of an algorithm that made OT problems computationally feasible and thus applicable. This takes us to the last section of this survey, the applications of optimal transport in machine learning. We began with computer vision, as it was one of the first applications of OT in ML. First, OT has been used to improve GANs by providing better statistical stability in low-dimensional data. Furthermore, since OT is one of the few methods that deal with the mass-splitting phenomenon, it allowed for many-to-one matching in pixel assignments which yielded a new approach to semantic correspondence with a 26\% performance improvement over state-of-the-art methods. The last application we covered with respect to computer vision was the use of W-distance to create a novel method for image interpolation called Convolutional Wasserstein Distance. Next, with respect to graphs, OT has allowed for the creation of the Gromov-Wasserstein Learning (GWL) algorithm which have also been shown to improve GANs. Other interesting areas that OT has shown promising results include neural architecture search, document representation, and domain adaptation. All of the papers we have analyzed and summarized will show that in some form (computational/accuracy) the use of OT has yielded better results than traditional methods. Although the computational inefficiencies are prevalent, the future for optimal transport in machine learning looks promising as more researchers become aware of this new intersection of areas. \section*{Appendix A} The implementation of the conclusions of OT in machine learning rely mostly on the implementation of the various metrics that can be used as error measures in model tuning. The most notable ones arise from the reformulation or approximation of metrics into convex functionals that can be optimized by drawing on the many beautiful conclusions of convex geometry. Here, we recall a metric, many times called a distance, as a function $d(\cdot, \cdot): X \times X \to \mathbf{R}_+$, where $X$ is a metrizable space, that satisfies \begin{itemize} \item Positive Definite: $d(x, y) \geq 0$ with $d(x, y) = 0$ if, and only if, $x = y$ \item Symmetric: $d(x,y) = d(y,x)$ for all $x,y \in X$ \item Subadditive: $d(x,y) \leq d(x,z) + d(z,y)$ for all $x,y,z \in X$ \end{itemize} Here, we want to note some different error estimates that come up in the OT literature as well as some that are traditionally used to compare probability distributions. The most notable comparison of probability measures in the OT literature is the p-Wasserstein Distance \begin{equation} \label{wpmetric} W^d_p (\mu_0 , \mu_1 ) = \inf_{\pi \in \Pi(\mu_0 , \mu_1)} \bigg( \int_{M \times M} d(x,y)^p d\pi (x,y) \bigg)^{1/p} \end{equation} In \ref{wpmetric}, d is a metric. We see from definition that it should very much act like a minimal $L^p$ distance on the space of probability measures. The most relevant choice of parameter p is $p=1,2$. 
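On the real line the discrete formulation is particularly simple: for two empirical measures with the same number of atoms and uniform weights, the optimal coupling pairs the sorted samples, so $W_p$ reduces to a one-line computation. The sketch below is our own toy example of this special case and is not taken from any of the cited works.
\begin{verbatim}
import numpy as np

def wasserstein_p_1d(x, y, p=2):
    # p-Wasserstein distance between two equal-size empirical measures on R
    # with uniform weights: the optimal coupling pairs the sorted samples.
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    assert x.shape == y.shape
    return float(np.mean(np.abs(x - y) ** p) ** (1.0 / p))

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 1000)
b = rng.normal(2.0, 1.0, 1000)
w1 = wasserstein_p_1d(a, b, p=1)   # close to the mean shift, here about 2
w2 = wasserstein_p_1d(a, b, p=2)
\end{verbatim}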
This distance was formulated in the most general sense possible and it has a natural discrete formulation for discrete measures. Therefore, it allows for different contexts. For example, we saw the analog in the context of graphs as the Gromov-Wasserstein distance as: Let $(X,d_X,\mu_{X})$ and $(Y,d_Y,\mu_{Y})$ be two metric measure spaces, where $(X,d_X)$ is a compact metric space and $\mu_X$ is a probability measure on X (with $(Y,d_Y,\mu_{Y})$ defined in the same way). The Gromov-Wasserstein distance $d_{GW}(\mu_X,\mu_Y)$ is defined as \\ \begin{equation} \inf\limits_{\pi \in \Pi (\mu_X, \mu_Y)}\int\limits_{X \times Y} \int\limits_{X \times Y} L(x,y,x',y')d\pi(x,y)d\pi(x',y'), \end{equation} where $L(x,y,x',y') = |d_X(x,x') - d_Y(y,y')|$. Here, we see that the formulas look naturally similar. The Gromov-Wasserstein distance would be a particular choice of the 1-Wasserstein distance to a general metric space which can then be relaxed to be able to work with graphs as we saw before. The novelty in using OT in applications is principally the different error estimates. We recall some of the well-known distances that are traditionally used to compare probability measures: \begin{itemize} \item KL Divergence: \begin{align*} &KL(\pi|\kappa) \coloneqq \\ &\int\int_{M \times M} \pi(x,y) \bigg[\ln \frac{\pi(x,y)}{\kappa(x,y)} -1 \bigg] dxdy \end{align*} \item Hellinger distance: $H^2(\mu_0,\mu_1) = \frac{1}{2} \int (\sqrt{\frac{d\mu_0}{d\lambda}} - \sqrt{\frac{d\mu_1}{d\lambda}} )^2 d\lambda$, where $\mu_0,\mu_1$ are absolutely continuous with respect to $\lambda$ and $\frac{d\mu_0}{d\lambda}, \frac{d\mu_1}{d\lambda}$ denote the Radon-Nykodym derivatives, respectively. \item Lèvy-Prokhorov distance: $d_P(\mu_0,\mu_1) = \inf \{ \epsilon>0 ; \exists X,Y ; \inf \mathbb{P}[d(X,Y) >\epsilon] \leq \epsilon \}$ \item Bounded Lipschitz distance (or Fortet-Mourier distance): $d_bL(\mu_0, \mu_1) = \sup \{ \int \phi d\mu_0 - \int \phi d\mu_1; ||\phi ||_\infty + || \phi ||_{\text{Lip}} \leq 1 \} $ \item (in the case of nodes) Euclidean distance: $d(x,y) = \sqrt{(x-y)^2}$ \end{itemize} We note that the Lèvy-Prokhorov and bounded Lipschitz distances can work in much the same way that the Wasserstein distance does. At the present, the Wasserstein distance proves useful because of it's capabilities in dealing with large distances and its convenient formulation in many problems such as the ones presented in this paper as well as others coming from partial differential equations. It's definition using infimum makes it easy to majorate. Its duality properties are useful--particularly in the case when $p=1$ as we see with the Kantorovich-Rubinstein distance where it is defined as an equivalence to its dual: \begin{equation} W_1(\mu_0, \mu_1) = \sup_{||\phi||_{\text{Lip}} \leq 1} \bigg\{ \int_X \phi d\mu_0 - \int_x \phi d\mu_1 \bigg\} \end{equation} The interested reader can read more about the different distances in \cite{rachev1991probability,villani2008optimal} As we presently see in this paper, we notice that much of the work on the optimal transport in machine learning is in the reformulation of the algorithms, which classically used the traditional distances, into new versions that use the Wasserstein distance. Then, a lot of the work is done in dealing with the computational inefficiency of the Wasserstein distance. 
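As a small numerical illustration of the point about large distances (our own toy example, not drawn from the cited works), the snippet below compares the KL divergence, the Hellinger distance, and the 1-Wasserstein distance between two narrow histograms as one of them is translated away from the other; the first two barely change once the supports stop overlapping, while $W_1$ keeps tracking the separation.
\begin{verbatim}
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # Discrete KL divergence with a small floor to avoid log(0).
    p, q = np.clip(p, eps, None), np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))

def hellinger(p, q):
    # Discrete Hellinger distance, H^2 = (1/2) sum (sqrt(p) - sqrt(q))^2.
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

def w1_1d(p, q, grid):
    # 1-Wasserstein distance on the line via W1 = int |F_p - F_q| dx.
    dx = grid[1] - grid[0]
    return float(np.sum(np.abs(np.cumsum(p) - np.cumsum(q))) * dx)

grid = np.linspace(-10.0, 10.0, 2001)

def bump(center):
    h = np.exp(-(grid - center) ** 2 / 0.02)
    return h / h.sum()

p = bump(0.0)
for center in (0.2, 1.0, 5.0):
    q = bump(center)
    # KL and Hellinger saturate once the bumps stop overlapping,
    # while W1 keeps growing with the separation.
    print(center, kl_divergence(p, q), hellinger(p, q), w1_1d(p, q, grid))
\end{verbatim}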
Moving forward, the authors think that many machine learning algorithms will implement some of the "deeper" features of the optimal transport theory to improve such algorithms after their best formulation becomes abundantly clear. \section*{Appendix B} For the readers interested in papers that apply OT in machine learning, here are a few more references to be considered. First we have OT in GANS: \begin{itemize} \item \textit{A geometric view of optimal transportation and generative model} \cite{lei2019geometric} \end{itemize} Next, for semantic correspondence and NLP we have: \begin{itemize} \item \textit{Improving sequence-to-sequence learning via optimal transport} \cite{chen2019improving} \end{itemize} Lastly, on domain adaptation we have: \begin{itemize} \item \textit{Joint distribution optimal transportation for domain adaptation} \cite{courty2017joint} \item \textit{Theoretical analysis of domain adaptation with optimal transport} \cite{redko2017theoretical} \end{itemize} \end{document}
\begin{document} \title{Existence of multi-point boundary Green's function for chordal Schramm-Loewner evolution (SLE)} \author{Rami Fakhry and Dapeng Zhan} \affil{Michigan State University} \date{\today} \maketitle \begin{abstract} In this paper we prove that, for $\kappa\in(0,8)$, the $n$-point boundary Green's function of exponent $\frac8\kappa -1$ for chordal SLE$_\kappa$ exists. We also prove that the convergence is uniform over compact sets and that the Green's function is continuous, and we give up-to-constant bounds for the Green's function. \end{abstract} \tableofcontents \section{Introduction}\label{chapter1} The Schramm-Loewner evolution (SLE for short) is a one-parameter ($\kappa\in(0,\infty)$) family of random fractal curves which grow in plane domains. It was defined in the seminal work of Schramm \cite{S-SLE} in 1999. Because of its close relation with two-dimensional lattice models, the Gaussian free field, and Liouville quantum gravity, SLE has attracted a lot of attention for over two decades. The geometric properties of an SLE$_\kappa$ curve depend on the parameter $\kappa$. When $\kappa\ge 8$, an SLE$_\kappa$ curve visits every point in the domain (cf.\ \cite{RS}); when $\kappa\in(0,8)$, an SLE$_\kappa$ curve has Hausdorff dimension $d_0=1+\frac\kappa 8$ (cf.\ \cite{Bf}). There are several types of SLE. In this paper we focus on chordal SLE, which grows in a simply connected plane domain from one boundary point to another boundary point. Suppose $\gamma$ is a chordal SLE$_\kappa$ curve, $\kappa\in(0,8)$, in a domain $D$, and $z_0\in D$. The Green's function for $\gamma$ at $z_0$ is the limit \begin{equation} G(z_0):=\lim_{r\to 0^+} r^{-\alpha} \mathbb{P}[\dist(z_0,\gamma)\le r]\label{1-pt Green}\end{equation} for some suitable exponent $\alpha>0$ depending on $\kappa$ such that the limit exists and is not trivial, i.e., lies in $(0,\infty)$. This notion easily extends to the $n$-point Green's function: \begin{equation} G(z_1,\dots,z_n):=\lim_{r_1,\dots,r_n\to 0^+} \prod_{j=1}^n r_j^{-\alpha} \mathbb{P}[\dist(z_j,\gamma)\le r_j,1\le j\le n],\label{n-pt Green}\end{equation} where $z_1,\dots,z_n$ are distinct points in $D$, provided that the limit exists and is not trivial. The term ``Green's function'' is used for the following reasons. Recall the Laplacian Green's function $G_D(z,w)$, $z\ne w\in D$, for a planar domain $D$, which is characterized by the following properties: for any $w\in D$, \begin{itemize} \item $G_D(\cdot,w)$ is positive and harmonic on $D\setminus \{w\}$. \item As $z\to\partial D$, $G_D(z,w)\to 0$. \item As $z\to w$, $G_D(z,w)=-\frac 1{2\pi} \ln|z-w|+O(1)$. \end{itemize} One important fact is \begin{equation} G_D(z,w)= \lim_{r\to 0^+} \frac{-\ln r}{2\pi}\cdot \mathbb{P}^w[\dist(z,B[0,\tau_D])\le r],\label{Laplacian Green}\end{equation} where $B$ is a planar Brownian motion started from $w$, and $\tau_D$ is the exit time of $D$. Notice the similarity between (\ref{1-pt Green}) and (\ref{Laplacian Green}). The main difference between them is the normalization factor: one is $r^{-\alpha}$ and the other is $\frac{-\ln r}{2\pi}$.
Another important fact is: for any measurable set $U\subset D$, \mbox{\textcolor{blue}fseries B}GE \mathbb{ E}E^w[|\{t\in[0,\tau_D): B_t\in U\}|]=\int_U G(z,w) dA(z).\label{integral-Laplacian}\mathbb{ E}DE Here $|\cdot |$ stands for the Lebesgue measure on $\mathbb{R}$, and $A$ is the area. It turns out that the correct exponent $\alpha$ is the co-dimension of the SLE curve, i.e., $\alpha=2-d_0=2-(1+\frac\kappa 8)=1-\frac \kappa 8$. The existence of a variation of Green's function for chordal SLE$_\kappa$, $\kappa\in(0,8)$, was given in \cite{Law4}, where the conformal radius was used instead of Euclidean distance. The existence of $2$-point Green's function was proved in \cite{LW} (again for conformal radius instead of Euclidean distance) following a method initiated by Beffara \cite{Bf}. In \cite{LR} the authors showed that Green's function as defined in (\textcolor{red}ef{n-pt Green}) (using Euclidean distance) exists for $n = 1,2$, and then used those Green's functions to prove that \textcolor{blue}egin{itemize} \item an SLE$_\kappa$ curve $\textcolor{green}amma$ can be parametrized by its $d_0$-dimensional Minkowski content, i.e., for any $t_1<t_2$, the $(1+\frac\kappa 8)$-dimensional Minkowski content of $\textcolor{green}amma[t_1,t_2]$ is $t_2-t_1$; and \item under such parametrization, for any measurable set $U\subset D$, \mbox{\textcolor{blue}fseries B}GE \mathbb{ E}E[|\{t: \textcolor{green}amma(t)\in U\}|]=\int_U G(z) dA(z).\label{integral-NP}\mathbb{ E}DE \end{itemize} The Minkowski content parametrization agrees with the natural parametrization introduced earlier (cf.\ \cite{LS,LZ}) The similarity between (\textcolor{red}ef{integral-Laplacian}) and (\textcolor{red}ef{integral-NP}) further justifies the terminology ``Green's function''. In a series of papers (\cite{higher,existence}) the authors showed that the Green's function of chordal SLE exists for any $n\in\mathbb{N}$. In addition, they found convergence rate and modulus of continuity of the Green's functions, and provided up-to-constant sharp bounds for them. If the reference point(s) is (are) on the boundary of the domain instead of the interior, we may use (\textcolor{red}ef{1-pt Green}) and (\textcolor{red}ef{n-pt Green}) to define the one-point and $n$-point boundary Green's function. Again, if $\kappa\textcolor{green}e 8$, the boundary Green's function makes no sense for SLE$_\kappa$ since it visits every point on the boundary; if $\kappa\in (0,8)$, the intersection of the SLE$_\kappa$ curve with the boundary has Hausdorff dimension $d_1=2-\frac 8\kappa$ (\cite{dim-real}), and so the reasonable choice of the exponent $\alpha$ is $\alpha=1-d_1=\frac 8\kappa-1$. Greg Lawler proved (cf.\ \cite{Mink-real}) the existence of the $1$- and $2$-point (on the same side) boundary Green's function for chordal SLE, and used them to prove that the $d_1$-dimensional Minkowski content of the intersection of SLE$_\kappa$ with the domain boundary exists. He also obtained the exact formulas of these Green's function up to some multiplicative constant. We will use the exact formula of the one-point Green's function: \mbox{\textcolor{blue}fseries B}GE G(z)=\widehat c |z|^{-\alpha},\quad z\ne0,\label{G(z)}\mathbb{ E}DE where $\widehat c>0$ is some (unknown) constant depending only on $\kappa$. 
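To fix ideas, here are the numerical values of these exponents in two frequently studied cases (a direct computation from the formulas above, included only for orientation): for $\kappa=6$,
$$d_0=\tfrac 74,\quad 2-d_0=\tfrac 14,\quad d_1=\tfrac 23,\quad \alpha=\tfrac 8\kappa-1=\tfrac 13,$$
while for $\kappa=\tfrac 83$,
$$d_0=\tfrac 43,\quad 2-d_0=\tfrac 23,\quad d_1=2-\tfrac 8\kappa=-1,\quad \alpha=2.$$
In the latter case the curve is simple and a.s.\ does not touch $\mathbb{R}\setminus\{0\}$, so the boundary Green's function only quantifies the decay of the probability of approaching a boundary point.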
We will also use the convergence rate of the one-point Green's function (\cite[Theorem 1]{Mink-real}): there are constants $C,\beta_1>0$ depending only on $\kappa$ such that for any $z\in\mathbb{R}\setminus\{0\}$ and $\varepsilon\in (0,|z|)$, \begin{equation} |\mathbb{P}[\dist(z,\gamma)\le \varepsilon]-G(z)\varepsilon^\alpha|\le C (\varepsilon/|z|)^{\alpha+\beta_1}.\label{G(z)-approx} \end{equation} To the best of our knowledge, the existence of the $n$-point boundary Green's function for $n > 2$, as well as of the $2$-point boundary Green's function when the two reference points lie on different sides of $0$, has not been proved so far. The main goal of the paper is to prove this existence for all $n\in\mathbb{N}$ without assuming that the reference points all lie on the same side of $0$. In addition, we prove that the Green's functions are continuous. We do not have exact formulas for these functions, but we find sharp bounds for them in terms of simple functions. We will mainly follow the approach in \cite{existence}, and apply the results from there as well as from \cite{Mink-real} and \cite{higher}. Below is our main result. \begin{Theorem} Let $\kappa\in(0,8)$ and $\alpha=\frac 8\kappa -1$. Let $\gamma$ be an SLE$_\kappa$ curve in $\mathbb{H}:=\{z\in\mathbb{C}:\operatorname{Im} z>0\}$ from $0$ to $\infty$. Let $n\in\mathbb{N}$ and $\Sigma_n=\{(z_1,\dots,z_n)\in(\mathbb{R}\setminus \{0\})^n: z_j\ne z_k\mbox{ whenever }j\ne k\}$. Then for any $\underline z=(z_1,\dots,z_n)\in \Sigma_n$, the limit $G(\underline z)$ in (\ref{n-pt Green}) exists and lies in $(0,\infty)$. Moreover, the convergence in (\ref{n-pt Green}) is uniform on each compact subset of $\Sigma_n$, the function $G$ is continuous on $\Sigma_n$, and there is an explicit function $F$ on $\Sigma_n$ (defined in (\ref{F})) with a simple form such that $G(\underline z)\asymp F(\underline z)$, where the implicit constants depend only on $\kappa$ and $n$. \label{Main-thm} \end{Theorem} Our result will shed light on the study of multiple SLE. For example, if we condition the chordal SLE$_\kappa$ in Theorem \ref{Main-thm} to pass through small discs centered at $z_1<z_2<\cdots<z_n\in (0,\infty)$, and suitably take limits while sending the radii of the discs to zero, then we should get an $(n+1)$-SLE$_\kappa$ configuration in $\mathbb{H}$ with link pattern $(0\leftrightarrow z_1;z_1\leftrightarrow z_2;z_2\leftrightarrow z_3;\dots;z_{n-1}\leftrightarrow z_n;z_n\leftrightarrow \infty)$, which is a collection of $(n+1)$ random curves $(\gamma_0,\dots,\gamma_n)$ in $\overline\mathbb{H}$ such that $\gamma_j$ connects $z_j$ with $z_{j+1}$, where $z_0:=0$ and $z_{n+1}:=\infty$, and such that, when any $n$ curves among the $(n+1)$ curves are given, the remaining curve is a chordal SLE$_\kappa$ curve in a connected component of the complement of the given $n$ curves in $\mathbb{H}$. The $n$-point boundary Green's function is then closely related to the partition function associated to such multiple SLE. Here are a few topics that we could study in the near future. We may consider ``mixed'' multi-point Green's functions for chordal SLE, where some reference points lie in the interior of the domain and some lie on the boundary. We expect that the Green's functions still exist, if $1-\frac\kappa 8$ is used as the exponent for interior points and $\frac 8\kappa-1$ is used as the exponent for boundary points.
We may also work on other types of SLE such as radial SLE, which grows from a boundary point to an interior point. The multi-point (interior) Green's function for radial SLE was proved to exist in \cite{MZ}. The next natural objects to study are the boundary and mixed multi-point Green's functions for radial SLE. The rest of the paper is organized as follows. In Section \ref{Chap-Prel}, we recall symbols, notation and some basic results that are relevant to the paper. Section \ref{mainestimates} contains the most technical part of the paper, where we derive a number of important estimates. We finish the proof of the main theorem in Section \ref{Chap-main-thm}. \section{Preliminaries}\label{Chap-Prel} \subsection{Notation and symbols} Let $\mathbb{H}=\{z\in\mathbb{C}:\operatorname{Im} z>0\}$ be the open upper half plane. Given $z_0\in\mathbb{C}$ and $S\subset \mathbb{C}$, we use $\rad_{z_0}(S)$ to denote $\sup\{|z-z_0|:z\in S\cup\{z_0\}\}$. We write $\mathbb{N}_n$ for $\{k\in\mathbb{N}:k\le n\}$, where $\mathbb{N}=\{1,2,3,\dots\}$ is the set of all positive integers. For $a,b\in\mathbb{R}$, we write $a\wedge b$ and $a\vee b$ respectively for $\min\{a,b\}$ and $\max\{a,b\}$. We fix $\kappa\in(0,8)$ and set $d=1+\frac\kappa 8$ and $\alpha=\frac 8\kappa -1$. Throughout, a constant may depend only on $\kappa$ and on the number of points $n\in\mathbb{N}$, unless otherwise specified. We use $X\lesssim Y$ or $Y\gtrsim X$ if there is a constant $C>0$ such that $X\le C Y$. We write $X\asymp Y$ if $X\lesssim Y$ and $Y\lesssim X$. When a (deterministic or random) curve $\gamma(t)$, $t\ge 0$, is fixed in the context, we let $\tau_S=\inf(\{t\ge 0:\gamma(t)\in S\}\cup\{\infty\})$. We write $\tau^{z_0}_r$ for $\tau_{\{z:|z-z_0|\le r\}}$, and $T_{z_0}$ for $\tau^{z_0}_0=\tau_{\{z_0\}}$. So another way to say that $\dist(z_0,\gamma)\le r$ is $\tau^{z_0}_r<\infty$. We also write $\tau^\infty_R$ for $\tau_{\{z:|z|\ge R\}}$. A crosscut in a domain $D$ is an open simple curve in $D$ whose two ends approach two boundary points of $D$. When $D$ is a simply connected domain, any crosscut $\rho$ of $D$ divides $D$ into two connected components. \subsection{$\mathbb{H}$-Hulls}\label{H-hull} A relatively closed bounded subset $K$ of $\mathbb{H}$ is called an $\mathbb{H}$-hull if $\mathbb{H}\setminus K$ is simply connected. The complement domain $\mathbb{H}\setminus K$ is then called an $\mathbb{H}$-domain. Given an $\mathbb{H}$-hull $K$, we use $g_K$ to denote the unique conformal map from $\mathbb{H}\setminus K$ onto $\mathbb{H}$ that satisfies $g_K(z)=z+O(|z|^{-1})$ as $z\to \infty$. Let $f_K=g_K^{-1}$. The half-plane capacity of $K$ is $\hcap(K):=\lim_{z\to\infty} z(g_K(z)-z)$. If $K=\emptyset$, then $g_K=f_K=\id$, and $\hcap(K)=0$. Now suppose $K\ne\emptyset$. Let $a_K=\min(\overline K\cap\mathbb{R})$ and $b_K=\max(\overline K\cap\mathbb{R})$. Let $K^{\doub}=K\cup[a_K,b_K]\cup\{\overline z: z\in K\}$. By the Schwarz reflection principle, $g_K$ extends to a conformal map from $\mathbb{C}\setminus K^{\doub}$ onto $\mathbb{C}\setminus[c_K,d_K]$ for some $c_K<d_K\in\mathbb{R}$, and satisfies $g_K(\overline z)=\overline{g_K(z)}$. In this paper, we write $S_K$ for $[c_K,d_K]$. In the case $K=\emptyset$, we understand $S_K$ and $[a_K,b_K]$ as the empty set.
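As a small worked example of these normalizations (a sanity check only, not used directly below), the scaling behaviour of $g_K$ and $\hcap$ can be read off from the definitions: if $K$ is a nonempty $\mathbb{H}$-hull and $c>0$, then $z\mapsto c\,g_K(z/c)$ maps $\mathbb{H}\setminus cK$ conformally onto $\mathbb{H}$ and has the expansion $c\,g_K(z/c)=z+c^2\hcap(K)/z+O(|z|^{-2})$ at $\infty$, so by the uniqueness of $g_{cK}$,
$$g_{cK}(z)=c\,g_K(z/c),\qquad \hcap(cK)=c^2\hcap(K),\qquad S_{cK}=c\,S_K.$$
This capacity scaling is what underlies the rescaling property of chordal SLE recalled later in this section.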
Given two $\mathbb{H}$-hulls $K_1\subset K_2$, we get another $\mathbb{H}$-hull $K_2/K_1$ defined by $K_2/K_1=g_{K_1}(K_2\setminus K_1)$. \textcolor{blue}egin{Example} For $x_0\in\mathbb{R}$ and $r>0$, the set $K:=\{z\in\mathbb{H}:|z-x_0|\le r\}$ is an $\mathbb{H}$-hull, $a_K=x_0-r$, $b_K=x_0+r$, $g_K(z)=z+\frac{r^2}{z-x_0}$, $\hcap( K)=r^2$, and $S_{K}=[x_0-2r,x_0+2r]$. \label{semi-disc} \end{Example} \textcolor{blue}egin{Proposition} For any $\mathbb{H}$-hull $K$, $[a_K,b_K]\subset S_K$. If $K_1\subset K_2$ are two $\mathbb{H}$-hulls, then $S_{K_1}\subset S_{K_2}$ and $S_{K_2/K_1}\subset S_{K_2}$. \label{SKSH} \end{Proposition} \textcolor{blue}egin{proof} This is \cite[Lemmas 5.2 and 5.3]{LERW}. \end{proof} \textcolor{blue}egin{Proposition} If a nonempty $\mathbb{H}$-hull $K$ satisfies that $\textcolor{red}ad_{x_0}(K)\le r$ for some $x_0\in\mathbb{R}$ and $r>0$, then $\hcap(K)\le r^2$, $S_K\subset[x_0-2r,x_0+2r]$, and \mbox{\textcolor{blue}fseries B}GE |g_K(z)-z|\le 3r, \quad z\in\mathbb{C}\setminus K^{\doub}.\label{f-z0}\mathbb{ E}DE Moreover, for any $z\in\mathbb{C}$ with $|z-x_0|\textcolor{green}e 5r$, we have \mbox{\textcolor{blue}fseries B}GE |g_K(z)-z|\le 2|z-x_0|\mbox{\textcolor{blue}fseries B}ig(\frac{r}{|z-x_0|}\mbox{\textcolor{blue}fseries B}ig)^2 ;\label{1}\mathbb{ E}DE \mbox{\textcolor{blue}fseries B}GE |g_K'(z)-1|\le 5\mbox{\textcolor{blue}fseries B}ig(\frac{r}{|z-x_0|}\mbox{\textcolor{blue}fseries B}ig)^2 .\label{3}\mathbb{ E}DE \label{small} \end{Proposition} \textcolor{blue}egin{proof} This is \cite[Lemmas 2.5 and 2.6]{existence}. \end{proof} \textcolor{blue}egin{Proposition} Let $H$ be a nonempty $\mathbb{H}$-hull, and ${\mathcal H}(H)$ denote the space of $\mathbb{H}$-hulls, which are subsets of $H$. Then ${\mathcal H}(H)$ is compact in the sense that any sequence $(K_n)$ in ${\mathcal H}(H)$ contains a convergent subsequence $(K_{n_k})$ whose limit $K$ is contained in ${\mathcal H}(H)$. Here the convergence means that $g_{K_{n_k}}$ converges to $g_K$ locally uniformly in $\mathbb{C}\setminus H^{\doub}$. \label{compact-prop} \end{Proposition} \textcolor{blue}egin{proof} This is \cite[Lemma 5.4]{LERW}. \end{proof} \subsection{Chordal Loewner Processes} Let $U(t)$, $0\le t<T$, be a real valued continuous function, where $T\in(0,\infty]$. The chordal Loewner equation driven by $U$ is the equation \mbox{\textcolor{blue}fseries B}GE \partial_t g_t(z)=\frac 2{g_t(z)-U_t},\quad g_0(z)=z.\label{chordal}\mathbb{ E}DE For every $z\in\mathbb{C}$, let $\tau^*_z$ denote the first time that the solution $g_\cdot(z)$ blows up; when such time does not exist, $\tau^*_z$ is set to be $\infty$. Let $K_t=\{z\in\mathbb{H}:\tau^*_z\le t\}$. We call $g_t$ and $K_t$, $0\le t<T$, the chordal Loewner maps and hulls, respectively, driven by $U$. It turns out that, for each $t\in[0,T)$, $K_t$ is an $\mathbb{H}$-hull, $\hcap(K_t)=2t$, and $g_t=g_{K_t}$. \textcolor{blue}egin{Proposition} For any $0\le t<T$, $$\{U_t\}=\textcolor{blue}igcap_{\varepsilon\in(0,T-t)} \overline{K_{t+\varepsilon}/K_t}.$$ \label{K/U} \end{Proposition} \textcolor{blue}egin{proof} This a restatement of \cite[Theorem 2.6]{LSW1}. \end{proof} \textcolor{blue}egin{Corollary} If for some $\mathbb{H}$-hull $H$ and $t_0\in(0,T)$, $K_{t_0}\subset H$, then $U_t\in S_H$ for $0\le t<t_0$. 
\label{UinSK} \end{Corollary} \textcolor{blue}egin{proof} By Proposition \textcolor{red}ef{K/U}, for every $t\in[0,t_0)$, $U_t\in [a_{K_{t_0}/K_t},b_{K_{t_0}/K_t}]$, which implies by Proposition \textcolor{red}ef{SKSH} that $U_t\in S_{K_{t_0}/K_t}\subset S_{K_{t_0}}\subset S_H$. By the continuity of $U$, we also have $U_{t_0}\in S_H$. \end{proof} We call the maps $Z_t=g_t-U_t$ the centered Loewner maps driven by $U$. \textcolor{blue}egin{Proposition} Let $b>a\in [0,T)$. Suppose that $\textcolor{red}ad_{x_0}(K_b/K_a)\le r$ for some $x_0\in\mathbb{R}$ and $r>0$. Then $|Z_{a}(z)-Z_b(z)|\le 7r$ for any $z\in\overline\mathbb{H}\setminus \overline{K_b}$. \label{centered-Delta} \end{Proposition} \textcolor{blue}egin{proof} Let $U_{a;t}=U_{a+t}$, $g_{a;t}=g_{a+t}\circ g_a^{-1}$, and $K_{a;t}=K_{a+t}/K_a$, $0\le t<T-a$. It is straightforward to check that $g_{a;\cdot}$ and $K_{a;\cdot}$ are respectively the chordal Loewner maps and hulls driven by $U_{a;\cdot}$. By Corollary \textcolor{red}ef{UinSK}, $U_a,U_b\in S_{K_{a;b-a}}$. By the assumption, $\textcolor{red}ad_{x_0 }(K_{a;b-a})\le r$. By Proposition \textcolor{red}ef{small}, $S_{K_{a;b-a}}\subset [x_0 -2r,x_0 +2r]$. Thus, $|U_a-U_b|\le 4r$. By Proposition \textcolor{red}ef{small}, $|g_{a;b-a}(z)-z|\le 3r$ for any $z\in \overline\mathbb{H}\setminus \overline{K_{a;b-a}}$. So for any $z\in\overline\mathbb{H}\setminus \overline{K_b}$, $|g_a(z)-g_b(z)|\le 3 r$. Since $Z_t=g_t-U_t$, and $|U_a-U_b|\le 4r$, we get the conclusion. \end{proof} If there exists a function $\textcolor{green}amma(t)$, $0\le t<T$, in $\overline\mathbb{H}$, such that for any $t$, $\mathbb{H}\setminus K_t$ is the unbounded connected component of $\mathbb{H}\setminus \textcolor{green}amma[0,t]$, we say that such $\textcolor{green}amma$ is the chordal Loewner curve driven by $U$. Such $\textcolor{green}amma$ may not exist in general, but when it exists, it is determined by $U$, and for each $t\in[0,T)$, $g_t^{-1}$ and $Z_t^{-1}$ extend continuously from $\mathbb{H}$ to $\overline\mathbb{H}$ and satisfy $g_t^{-1}(U_t)=Z_t^{-1}(0)=\textcolor{green}amma(t)$. \subsection{Chordal SLE} Let $\kappa>0$. Let $B_t$ be a standard Brownian motion. If the driving function is $U_t=\sqrt\kappa B_t$, $0\le t<\infty$, then the chordal Loewner curve driven by $U$ exists, starts from $0$ and ends at $\infty$ (cf.\ \cite{RS}). Such curve is called a chordal SLE$_\kappa$ trace or curve in $\mathbb{H}$ from $0$ to $\infty$. Its geometric property depends on $\kappa$: if $\kappa\le 4$, it is simple; if $4<\kappa<8$, it is not simple and not space-filling; if $\kappa\textcolor{green}e 8$, it is space-filling (cf.\ \cite{RS}). The Hausdorff dimension of an SLE$_\kappa$ curve is $\min\{1+\frac\kappa 8,2\}$ (cf. \cite{RS,Bf}). The definition of chordal SLE extends to general simply connected domains via conformal maps. Let $D$ be a simply connected domain with two distinct boundary points (more precisely, prime ends) $a,b$. Let $f$ be a conformal map from $\mathbb{H}$ onto $D$, which sends $0$ and $\infty$ respectively to $a$ and $b$. Let $\textcolor{green}amma$ be a chordal SLE$_\kappa$ curve in $\mathbb{H}$ from $0$ to $\infty$. Then $f\circ \textcolor{green}amma$ is called a chordal SLE$_\kappa$ curve in $D$ from $a$ to $b$. A remarkable property of SLE is the Domain Markov Property (DMP). 
Suppose $\gamma$ is a chordal SLE$_\kappa$ curve in $\mathbb{H}$ from $0$ to $\infty$, which generates the $\mathbb{H}$-hulls $K_t$, $0\le t<\infty$, and a filtration ${\mathcal F}=({\mathcal F}_t)_{t\ge 0}$. Let $\tau$ be a finite ${\mathcal F}$-stopping time. Conditionally on ${\mathcal F}_\tau$, $\gamma(\tau+\cdot)$ has the same law as a chordal SLE$_\kappa$ curve in $\mathbb{H}\setminus K_\tau$ from $\gamma(\tau)$ to $\infty$. Equivalently, there is a chordal SLE$_\kappa$ curve $\widetilde\gamma$ in $\mathbb{H}$ from $0$ to $\infty$ independent of ${\mathcal F}_\tau$ such that $\gamma(\tau+t)=Z_\tau^{-1}(\widetilde \gamma(t))$, $t\ge 0$. Here $Z_\tau$ is the centered Loewner map at the time $\tau$ that corresponds to $\gamma$, and its inverse $Z_\tau^{-1}$ has been extended continuously to $\overline\mathbb{H}$. We will also use the left-right symmetry and the rescaling property of chordal SLE. Suppose $\gamma$ is a chordal SLE$_\kappa$ curve in $\mathbb{H}$ from $0$ to $\infty$. The left-right symmetry states that, if $f(z)=-\overline z$ is the reflection about $i\mathbb{R}$, then $f\circ \gamma$ has the same law as $\gamma$. This follows easily from the fact that $(-\sqrt \kappa B_t)$ has the same law as $(\sqrt\kappa B_t)$. The rescaling property states that, for any $c>0$, $(c\gamma(t))$ has the same law as $(\gamma(c^2 t))$. This follows easily from the rescaling property of Brownian motion. \subsection{Extremal Length}\label{Extremal} We will need some lemmas on extremal length, which is a nonnegative quantity $\lambda(\Gamma)$ associated with a family $\Gamma$ of rectifiable curves (\cite[Definition 4-1]{Ahl}). One remarkable property of extremal length is its conformal invariance (\cite[Section 4-1]{Ahl}): if every $\gamma\in\Gamma$ is contained in a domain $\Omega$, and $f$ is a conformal map defined on $\Omega$, then $\lambda(f(\Gamma))=\lambda(\Gamma)$. We use $d_\Omega(X,Y)$ to denote the extremal distance between $X$ and $Y$ in $\Omega$, i.e., the extremal length of the family of curves in $\Omega$ that connect $X$ with $Y$. It is known that in the special case when $\Omega$ is a semi-annulus $\{z\in\mathbb{H}:R_1<|z-x|<R_2\}$, where $x\in\mathbb{R}$ and $R_2>R_1>0$, and $X$ and $Y$ are the two boundary arcs $\{z\in\mathbb{H}:|z-x|=R_j\}$, $j=1,2$, then $d_\Omega(X,Y)=\log(R_2/R_1)/\pi$ (\cite[Section 4-2]{Ahl}). We will use the comparison principle (\cite[Theorem 4-1]{Ahl}): if every $\gamma\in\Gamma$ contains a $\gamma'\in\Gamma'$, then $\lambda(\Gamma)\ge \lambda(\Gamma')$. Thus, if every curve in $\Omega$ connecting $X$ with $Y$ crosses a semi-annulus with radii $R_1<R_2$, then $d_\Omega(X,Y)\ge \log(R_2/R_1)/\pi$. We will also use the composition law (\cite[Theorem 4-2]{Ahl}): if for $j=1,2$, every $\gamma_j$ in a family $\Gamma_j$ is contained in $\Omega_j$, where $\Omega_1$ and $\Omega_2$ are disjoint open sets, and if every $\gamma$ in another family $\Gamma$ contains a $\gamma_1\in\Gamma_1$ and a $\gamma_2\in\Gamma_2$, then $\lambda(\Gamma)\ge \lambda(\Gamma_1)+\lambda(\Gamma_2)$. The following propositions are applications of the Teichm\"uller Theorem.
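Before stating them, we record a one-line verification of the semi-annulus formula quoted above, since this is the computation used repeatedly below (a standard fact, included only for the reader's convenience): the map $z\mapsto \log(z-x)$ takes the semi-annulus $\{z\in\mathbb{H}:R_1<|z-x|<R_2\}$ conformally onto the rectangle $(\log R_1,\log R_2)\times(0,\pi)$ and sends the two circular boundary arcs to the two vertical sides; the extremal distance between the vertical sides of a rectangle of width $\log(R_2/R_1)$ and height $\pi$ is $\log(R_2/R_1)/\pi$, so by conformal invariance $d_\Omega(X,Y)=\log(R_2/R_1)/\pi$.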
\textcolor{blue}egin{Proposition} Let $S_1$ and $S_2$ be a disjoint pair of connected closed subsets of $\overline\mathbb{H}$ that intersect $\mathbb{R}$ such that $S_1$ is bounded and $S_2$ is unbounded. Let $z_j\in S_j\cap\mathbb{R}$, $j=1,2$. Then $$1\wedge \frac{\textcolor{red}ad_{z_1}(S_1)}{|z_1-z_2|}\le 32 e^{-\pi d_{\mathbb{H}}(S_1,S_2)}.$$ \label{lem-extremal2} \end{Proposition} \textcolor{blue}egin{proof} For $j=1,2$, let $S_j^{\doub}$ be the union of $S_j$ and its reflection about $\mathbb{R}$. By reflection principle (\cite[Exercise 4-1]{Ahl}), $d_{\mathbb{H}}(S_1,S_2)=2d_{\mathbb{C}}(S_1^{\doub},S_2^{\doub})$. Let $r=\textcolor{red}ad_{z_2}(S_1)$, $L=|z_1-z_2|$ and $R=L/r$. From Teichm\"uller Theorem (\cite[Theorem 4-7]{Ahl}), $$ d_{\mathbb{C}}(S_1^{\doub},S_2^{\doub})\le d_{\mathbb{C}}([-r,0],[L,\infty))= d_{\mathbb{C}}([-1,0],[R,\infty))={\mathcal L}ambda(R).$$ From \cite[Formula (4-21)]{Ahl}, we have $$e^{-\pi d_{\mathbb{H}}(S_1,S_2)}=e^{-2\pi d_{\mathbb{C}}(S_1^{\doub},S_2^{\doub})}\textcolor{green}e e^{-2\pi {\mathcal L}ambda(R)}\textcolor{green}e \frac 1{16(R+1)} .$$ Since $1\wedge \frac 1 R\le \frac 2{1+R}$, we get the conclusion. \end{proof} \textcolor{blue}egin{Proposition} Let $D$ be an $\mathbb{H}$-domain and $S\subset \mathbb{H}$. Suppose that there are $z_0\in\mathbb{R}$ and $r>0$ such that $\{|z-z_0|=r\}\cap D$ has a connected component $C_r$, which disconnect $S$ from $\infty$. In other words, $S$ lies in the bounded component of $D\setminus C_r$. Let $g$ be a conformal map from $D$ onto $\mathbb{H}$ such that $\lim_{z\to \infty} g(z)/z=1$. Then there is $w_0\in\mathbb{R}$ such that $$\textcolor{red}ad_{w_0}(g(S))\le 4 \textcolor{red}ad_{z_0}(S).$$ \label{lem-extremal2'} \end{Proposition} \textcolor{blue}egin{proof} Since $C_r$ is a crosscut of $D$, and $S$ lies in the bounded component of $D\setminus C_r$, $g(C_r)$ is a crosscut of $g(D)=\mathbb{H}$, and $g(S)$ lies in the bounded component of $\mathbb{H}\setminus g(C_r)$. Let $w_0$ be one endpoint of $g(C_r)$. It suffices to show that $\textcolor{red}ad_{w_0}(g(C_r))\le 4 r$. Let $L=\textcolor{red}ad_{z_0}(K)$ and $L_0=|z_0-w_0|$. Take a big number $R>r+L+L'$ and let $C_R=\{z\in\mathbb{H}:|z-z_0|=R\}$. Then $S$ and $C_R$ can be separated by the semi-annulus $\{z\in\mathbb{H}:r<|z-x_0|<R\}$ in $D$. By the comparison principle and conformal invariance of extremal length, $$d_{\mathbb{H}}(g(C_r),g(C_R))=d_{D}(C_r,C_R)\textcolor{green}e \frac 1\pi \log(R/r).$$ Let $K$ be the $\mathbb{H}$-hull $\mathbb{H}\setminus D$. Then $g-g_K$ is a real constant. So we may assume that $g=g_K$. By Proposition \textcolor{red}ef{small} again, $g(C)$ is a crosscut of $\mathbb{H}$ with $\textcolor{red}ad_{w_0}(g(C))\le R+L+L_0$. Let $r'=\textcolor{red}ad_{w_0}(g(C_r))$ and $R'=R+L+L_0$. By comparison principle, reflection principle and Teichm\"uller Theorem, $$d_{\mathbb{H}}(g(C_r),g(C_R))\le d_{\mathbb{H}}(g(C_r),\{z\in\mathbb{H}:|z-w_0|=R'\})$$ $$=2 d_{\mathbb{C}}(g(C_r)^{\doub},\{|z-w_0|=R'\})\le 2 d_{\mathbb{C}}([-1,0], \{|z|=R'/{r'}\}=2M(R'/r').$$ By \cite[Formula 4-14]{Ahl}, $2M(R'/r')= {\mathcal L}ambda((R'/r')^2-1)$. Thus, by the above displayed formulas and \cite[Formula (4-21)]{Ahl} $$\frac 1\pi \log(R/r)\le {\mathcal L}ambda((R'/r')^2-1)\le \frac 1{2\pi} \log (16(R'/r')^2)=\frac 1\pi (4(R'/r')).$$ So we get $r'\le 4(R'/R) r$. Letting $R\to \infty$, we get $R'/R\to 1$. So $r'\le 4r$. 
\end{proof} \textcolor{blue}egin{comment} \textcolor{blue}egin{Lemma} Let $S_1$ and $S_2$ be a disjoint pair of connected bounded closed subsets of $\overline\mathbb{H}$ that intersect $\mathbb{R}$. Then $$\prod_{j=1}^2 \mbox{\textcolor{blue}fseries B}ig(\frac{\diam(S_j)}{\dist(S_1 ,S_2 )}\wedge 1\mbox{\textcolor{blue}fseries B}ig)\le 144 e^{-\pi d_{\mathbb{H}}(S_1,S_2)}.$$ \label{lem-extremal2} \end{Lemma} \end{comment} \subsection{Two-sided Chordal SLE} Suppose $\textcolor{green}amma$ is a chordal SLE$_\kappa$ curve in $\mathbb{H}$ from $0$ to $\infty$, which generates the filtration ${\mathcal F}=({\mathcal F}_t)_{t\textcolor{green}e 0}$. Let $\mathbb{P}$ denote the law of $\textcolor{green}amma$, and $\mathbb{ E}E$ denote the corresponding expectation. Let $z\in\mathbb{R}\setminus \{0\}$. By (\textcolor{red}ef{chordal}) and the fact that $U_t=\sqrt\kappa B_t$ for some standard Brownian motion $B_t$, up to $\tau^*_z$, $Z_t(z)$ and $g_t'(z)$ satisfy the following SDE and ODE: \textcolor{blue}egin{align*} d Z_t(z)&=-\sqrt\kappa dB_t+\frac 2{Z_t(z)} \,dt;\\ \frac{ d g_t'(z)}{g_t'(z)}&=\frac{-2}{Z_t(z)^2}\,dt. \end{align*} By It\^o's formula (cf.\ \cite{RY}), we get the following continuous positive local martingale: \mbox{\textcolor{blue}fseries B}GE M_t(z):=\frac{|g_t'(z)|^\alpha |z|^\alpha}{|Z_t(z)|^\alpha}, \quad 0\le t<\tau^*_z,\label{M}\mathbb{ E}DE which satisfies the SDE: \mbox{\textcolor{blue}fseries B}GE \frac{dM_t(z)}{M_t(z)}=\frac{\kappa-8}{\sqrt\kappa} \frac{dB_t}{Z_t(z)}.\label{dM}\mathbb{ E}DE By Girsanov Theorem (cf.\ \cite{RY}), if we tilt the law $\mathbb{P}$ by the local martingale $M_\cdot(z)$, we get a new random curve $\widetilde\textcolor{green}amma$, whose driving function $\widetilde U$ satisfies the SDE: $$d\widetilde U_t=\sqrt\kappa d\widetilde B_t +\frac{\kappa-8}{\widetilde Z_t(z)}\,dt,$$ where $\widetilde B$ is another standard Brownian motion, and $\widetilde Z_t$'s are the centered Loewner maps associated with $\widetilde\textcolor{green}amma$. In fact, such $\widetilde \textcolor{green}amma$ is a chordal SLE$_\kappa(\kappa-8)$ curve (cf.\ \cite{LSW-8/3}) in $\mathbb{H}$ started from $0$, aimed at $\infty$, with the force point located at $z$. Since $\kappa-8<\frac\kappa 2-4$, with probability $1$, $\widetilde\textcolor{green}amma$ ends at $z$ (cf.\ \cite{MS1}). The above curve $\widetilde\textcolor{green}amma$ from $0$ to $z$ is the first arm of a two-sided chordal SLE$_\kappa$ curve in $\mathbb{H}$ from $0$ to $\infty$ passing through $z$. Given this arm $\widetilde\textcolor{green}amma$, the rest of the two-sided chordal SLE$_\kappa$ curve is a chordal SLE$_\kappa$ curve from $z$ to $\infty$ in the unbounded connected component of $\mathbb{H}\setminus\widetilde\textcolor{green}amma$. We use $\mathbb{P}^*_z$ to denote the law of such a two-sided chordal SLE$_\kappa$ curve, and let $\mathbb{ E}E^*_z$ denote the corresponding expectation. For $r>0$, we use $\mathbb{P}^r_z$ to denote the conditional law $\mathbb{P}[\cdot|\tau^z_r<\infty]$, i.e., the law of a chordal SLE$_\kappa$ curve in $\mathbb{H}$ from $0$ to $\infty$ conditioned to visit the disk with radius $r$ centered at $z$; and let $\mathbb{ E}E^r_z$ denote the corresponding expectation. \textcolor{blue}egin{Proposition} Let $z\in\mathbb{R}\setminus \{0\}$ and $R\in(0,|z|)$. 
Then $\mathbb{P}_z^*$ is absolutely continuous w.r.t.\ $\mathbb{P}_z^R$ on ${\mathcal F}_{\tau^z_R}\cap\{\tau^z_R<\infty\}$, and the Radon-Nikodym derivative is uniformly bounded by some constant $C_\kappa\in[1,\infty)$ depending only on $\kappa$. \label{RN<1} \end{Proposition} \begin{proof} By symmetry we may assume $z>0$. Let $\tau=\tau^z_R$. By the construction of $\mathbb{P}_z^*$ (through tilting $\mathbb{P}$ by $M_\cdot(z)$), we have $$ \frac{d\mathbb{P}_z^*|{{\mathcal F}_{\tau }\cap\{\tau <\infty\}}}{d\mathbb{P} |{{\mathcal F}_{\tau }\cap\{\tau <\infty\}}}= {M_\tau(z)} .$$ By the definition of $\mathbb{P}_z^R$, $$ \frac{d\mathbb{P}_z^R|{{\mathcal F}_{\tau }\cap\{\tau <\infty\}}}{d\mathbb{P} |{{\mathcal F}_{\tau }\cap\{\tau <\infty\}}}=\frac{1}{\mathbb{P}[\tau <\infty]}.$$ Thus, it suffices to prove that $M_\tau(z) \cdot \mathbb{P}[\tau <\infty]$ is uniformly bounded. By (\ref{1pt}), $\mathbb{P}[\tau <\infty]\lesssim (R/|z|)^\alpha$. Since $g_\tau$ maps the simply connected domain $\Omega:=\mathbb{C}\setminus (K_\tau^{\doub}\cup (-\infty,0])$ conformally onto $\mathbb{C}\setminus (-\infty, b_{K_\tau}]$, by Koebe's $1/4$ theorem, \begin{equation} |g_\tau'(z)|\cdot R=|g_\tau'(z)|\cdot \dist(z,\partial \Omega)\asymp \dist(g_\tau(z),\partial(\mathbb{C}\setminus (-\infty, b_{K_\tau}]))=|g_\tau(z)-b_{K_\tau}|\le Z_\tau(z),\label{gz-Z}\end{equation} where in the last step we used $g_\tau(z)>b_{K_\tau}\ge U_\tau$. Thus, $$M_\tau(z)\cdot \mathbb{P}[\tau <\infty]\lesssim \frac{|g_\tau'(z)|^\alpha |z|^\alpha}{|Z_\tau(z)|^\alpha}\cdot\frac{R^\alpha}{|z|^\alpha} = \frac{|g_\tau'(z)|^\alpha R^\alpha}{|Z_\tau(z)|^\alpha}\lesssim 1.$$ \end{proof} \begin{Proposition} Let $z\in\mathbb{R}\setminus \{0\}$ and $0<r<\eta<|z|$. Then $\mathbb{P}^r_z$ restricted to ${\mathcal F}_{\tau_\eta^z}$ is absolutely continuous with respect to $\mathbb{P}^*_z$, and there is a constant $\beta>0$ depending only on $\kappa$ such that \begin{equation} \Big|\log\Big(\frac{d\mathbb{P}^r_z|{\mathcal F}_{\tau_\eta^z}}{d\mathbb{P}^*_z|{\mathcal F}_{\tau_\eta^z}}\Big) \Big| \lesssim \Big(\frac r \eta\Big)^{\beta},\quad \mbox{if }r/\eta<1/6. \label{Prop2.13-eqn}\end{equation} \label{Prop2.13} \end{Proposition} \begin{proof} Recall that $G(z)=\widehat c |z|^{-\alpha}$ was defined in (\ref{G(z)}). Define $G_t(z)=|Z_t'(z)|^\alpha G(Z_t(z))$ if $\tau^*_z>t$; and $G_t(z)=0$ if $\tau^*_z\le t$.
Then $$\frac{d\mathbb{P}^*_z|{\mathcal F}_{\tau_\eta^z}}{d\mathbb{P}|{\mathcal F}_{\tau_\eta^z}}=M_{\tau^z_\eta}(z)=\frac{G_{\tau^z_\eta}(z)}{G(z)}.$$ By the definition of $\mathbb{P}^r_z$, we have $$\frac{d\mathbb{P}^r_z|{\mathcal F}_{\tau_\eta^z}}{d\mathbb{P}|{\mathcal F}_{\tau_\eta^z}}=\frac{\mathbb{P}[\tau^z_r<\infty|{\mathcal F}_{\tau^\eta_z}]}{\mathbb{P}[\tau^z_r<\infty]}.$$ Since $\mathbb{P}[\tau^z_r<\infty|{\mathcal F}_{\tau^\eta_z}]=0$ implies that $\tau^\eta_z\le \tau^*_z$, which in turn implies that $G_{\tau^z_\eta}(z)=0$, by the above two displayed formulas, $\mathbb{P}^r_z$ restricted to ${\mathcal F}_{\tau_\eta^z}$ is absolutely continuous with respect to $\mathbb{P}^*_z$, and $$\frac{d\mathbb{P}^r_z|{\mathcal F}_{\tau_\eta^z}}{d\mathbb{P}^*_z|{\mathcal F}_{\tau_\eta^z}} =\frac{\mathbb{P}[\tau^z_r<\infty|{\mathcal F}_{\tau^\eta_z}]/(G_{\tau^z_\eta}(z)r^\alpha)}{\mathbb{P}[\tau^z_r<\infty]/(G(z)r^\alpha)}.$$ By (\textcolor{red}ef{G(z)-approx}) and Koebe's distortion theorem, there are constants $\textcolor{blue}eta,\deltata>0$ such that, if $r/\eta<1/6$, then $$\log(\mathbb{P}[\tau^z_r<\infty]/(G(z)r^\alpha))\lesssim (r/|z|)^{\textcolor{blue}eta},\quad \log(\mathbb{P}[\tau^z_r<\infty|{\mathcal F}_{\tau^\eta_z}]/(G_{\tau^z_\eta}(z)r^\alpha))\lesssim (r/\eta)^{\textcolor{blue}eta}.$$ The above two displayed formulas together imply (\textcolor{red}ef{Prop2.13-eqn}). \end{proof} \section{Main Estimates} \label{mainestimates} In this section, we will provide some useful estimates for the proof of the main theorem. We use the notion and symbols in the previous section. We now define the function $F(z_1,\dots,z_n)$ that appeared in Theorem \textcolor{red}ef{Main-thm}. From now on, let $d_0=1+\frac \kappa 8$ and $\alpha=\frac 8\kappa -1$. For $y\textcolor{green}e 0$, define $P_y$ on $[0,\infty)$ by $$P_y(x)=\left\{ \textcolor{blue}egin{array}{ll} y^{\alpha-(2-d_0)} x^{2-d_0},&x\le y;\\ x^\alpha,& x\textcolor{green}e y. \end{array} \textcolor{red}ight. $$ For an (ordered) set of distinct points $z_1,\dots,z_n\in\overline\mathbb{H}\setminus \{0\}$, we let $z_0=0$ and define \mbox{\textcolor{blue}fseries B}GE y_k={\infty}mm z_k,\quad l_k=\min_{0\le j\le k-1}\{|z_k-z_j|\},\quad R_k=\min_{0\le j\le n,j\ne k}\{|z_k-z_j|\},\quad 1\le k\le n.\label{lR}\mathbb{ E}DE Note that we have $R_k\le l_k$. For $r_1,\dots,r_n>0$, define \mbox{\textcolor{blue}fseries B}GE F(z_1,\dots,z_n;r_1,\dots,r_n)=\prod_{k=1}^n \frac{P_{y_k}(r_k)}{P_{y_k}(l_k)}.\label{Fzr}\mathbb{ E}DE The following is \cite[Formula (2.7)]{existence}. For any permutation $\sigma$ of $\{1,\dots,n\}$, \mbox{\textcolor{blue}fseries B}GE F(z_1,\dots,z_n;r_1,\dots,r_n)\asymp F(z_{\sigma(1)},\dots,z_{\sigma(n)};r_{\sigma(1)},\dots,r_{\sigma(n)}).\label{perm-Fzr}\mathbb{ E}DE The following proposition combines \cite[Theorem 1.1]{higher} (which gives the upper bound) and \cite[Theorem 4.3]{existence} (which gives the lower bound). \textcolor{blue}egin{Proposition} Let $z_1,\dots,z_n$ be distinct points on $\overline\mathbb{H}\setminus \{0\}$. Let $R_1,\dots,R_n$ be defined by (\textcolor{red}ef{lR}). Let $r_j>0$, $1\le j\le n$. Then for a chordal SLE$_\kappa$ curve $\textcolor{green}amma$ in $\mathbb{H}$ from $0$ to $\infty$, we have \textcolor{blue}egin{itemize} \item $\mathbb{P}[\tau^{z_j}_{r_j}<\infty,1\le j\le n]\lesssim \prod_{j=1}^n (1\wedge \frac{P_{y_j}( r_j)}{P_{y_j}(l_j)} )$; \item $\mathbb{P}[\tau^{z_j}_{r_j}<\infty,1\le j\le n]\textcolor{green}trsim F(z_1,\dots,z_n;r_1,\dots r_n)$, if $r_j\le R_j$, $1\le j\le n$. 
\end{itemize} \label{lower-upper} \end{Proposition} Now suppose $z_1,\dots,z_n$ are distinct points on $\mathbb{R}\setminus \{0\}$. Then $y_k=0$, $1\le k\le n$. So, the $\frac{P_{y_k}(r_k)}{P_{y_k}(l_k)}$ in (\textcolor{red}ef{Fzr}) simplifies to $\frac{r_k^\alpha}{l_k^\alpha}$. Then we define \mbox{\textcolor{blue}fseries B}GE F(z_1,\dots,z_n)= \prod_{k=1}^n r_k^{-\alpha} F(z_1,\dots,z_n;r_1,\dots,r_n) =\prod_{k=1}^n l_k^{-\alpha}. \label{F}\mathbb{ E}DE This function is different from the $F(z_1,\dots,z_n)$ that appeared in \cite{existence}, which was defined for $z_1,\dots,z_n\in\mathbb{H}$. By (\textcolor{red}ef{perm-Fzr}), we have \mbox{\textcolor{blue}fseries B}GE F(z_1,\dots,z_n )\asymp F(z_{\sigma(1)},\dots,z_{\sigma(n)} ).\label{perm-F}\mathbb{ E}DE A simple but useful special case of Proposition \textcolor{red}ef{lower-upper} is: when $n=1$ and $z_1\in\mathbb{R}\setminus\{0\}$, we have \mbox{\textcolor{blue}fseries B}GE \mathbb{P}[\tau^{z_1}_{r_1}<\infty]\asymp (r_1/|z_1|)^\alpha,\quad 0<r<|z_1|.\label{1pt}\mathbb{ E}DE The estimate includes a lower bound and an upper bound. They first appeared in \cite{boundary}. The upper bound in (\textcolor{red}ef{1pt}) was called the boundary estimate in the literature. \textcolor{blue}egin{comment} By conformal invariance of SLE, this estimate extends to SLE in general simply connected domains. The following proposition is \cite[Lemma 2.5]{higher}. Recall the definitions/symbols of crosscuts and $D(\textcolor{red}ho;Z),D^*(\textcolor{red}ho;Z)$ in Section \textcolor{red}ef{crosscuts} and the extremal distance $d_\Omega(X,Y)$ in Section \textcolor{red}ef{Extremal}. \textcolor{blue}egin{Proposition} Let $D$ be a simply connected domain, and $w_0$ and $w_\infty$ be two distinct prime ends of $D$. Let $\textcolor{red}ho_{\ee}$ and $\textcolor{red}ho_{\ii}$ be two disjoint crosscuts in $D$ such that $D(\textcolor{red}ho_{\ee};\textcolor{red}ho_{\ii})$ is neither a neighborhood of $w_0$ nor a neighborhood of $w_\infty$ in $D$. The condition for $w_0$ means that either $D\setminus \textcolor{red}ho_{\ee}$ is a neighborhood of $w_0$ and $D(\textcolor{red}ho_{\ee};w_0)=D^*(\textcolor{red}ho_{\ee};\textcolor{red}ho_{\ii})$, or $w_0$ is a prime end determined by $\textcolor{red}ho_{\ee}$; and likewise for $w_\infty$. Let $\textcolor{green}amma$ be a chordal SLE$_\kappa$ curve in $D$ from $w_0$ to $w_\infty$. Then $$ \mathbb{P}[\textcolor{green}amma\cap ({\textcolor{red}ho_{\ii}}\cup D^*(\textcolor{red}ho_{\ii};w_\infty))\ne\emptyset]\lesssim e^{-\alpha \pi d_D(\textcolor{red}ho_{\ee},\textcolor{red}ho_{\ii})}. $$ \label{boundary-lem} \end{Proposition} \end{comment} From now on till the end of this section, $\mathbb{P}$ denotes the law of a chordal SLE$_\kappa$ curve in $\mathbb{H}$ from $0$ to $\infty$; for $z\in\mathbb{R}\setminus\{0\}$ and $r>0$, $\mathbb{P}_z^r$ denotes the conditional law $\mathbb{P}[\cdot|\tau^z_r<\infty]$, and $\mathbb{P}_z^*$ denotes the law of a two-sided chordal SLE$_\kappa$ curve in $\mathbb{H}$ from $0$ to $\infty$ passing through $z$. When $\textcolor{green}amma$ follows some law above in the context, let $U_t$, $K_t$ and $g_t$ be respectively the chordal Loewner driving function, hulls and maps which correspond to $\textcolor{green}amma$. Let $Z_t=g_t-U_t$ be the centered Loewner maps, and let $H_t=\mathbb{H}\setminus K_t$. 
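To illustrate the definition (\ref{F}) (a quick example, not needed for the proofs): for $n=1$ we simply have $F(z_1)=|z_1|^{-\alpha}$, which matches the exact formula (\ref{G(z)}) up to the constant $\widehat c$, consistently with Theorem \ref{Main-thm}; for $n=2$ and $z_1,z_2\in\mathbb{R}\setminus\{0\}$,
$$F(z_1,z_2)=|z_1|^{-\alpha}\big(|z_2|\wedge|z_2-z_1|\big)^{-\alpha},$$
so $F(z_1,z_2)=(|z_1|\,|z_2-z_1|)^{-\alpha}$ when $z_1,z_2$ lie on the same side of $0$ with $|z_1|<|z_2|$, while $F(z_1,z_2)=(|z_1|\,|z_2|)^{-\alpha}$ when they lie on opposite sides of $0$.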
For $t\ge 0$, let $S^+_{t}$ be the set of prime ends of $H_t$ that lie on the right side of $\gamma[0,t]$ or on $[b_{K_t},\infty)$, and let $S^-_{t}$ be the set of prime ends of $H_t$ that lie on the left side of $\gamma[0,t]$ or on $(-\infty,a_{K_t}]$. More precisely, $S^+_t$ and $S^-_t$ are respectively the images of $[0,+\infty)$ and $(-\infty,0]$ under $Z_t^{-1}$. \begin{Proposition} Let $z_1,\dots,z_{n}$ be distinct points in $\mathbb{R}\setminus\{0\}$, where $n\ge 2$. Let $R_1,\dots,R_n$ be defined by (\ref{lR}). Let $r_j\in(0,R_j/8)$, $1\le j\le n$. Then there is a constant $\beta>0$ such that for any $k_0\in\{2,\dots,n\}$ and $s_{0}\ge 0$, \begin{align*} &\mathbb{P}[\tau^{z_1}_{r_1}< \tau^{z_k}_{r_k}<\infty,2\le k\le n; \dist(z_{k_0},\gamma[0,\tau^{z_1}_{r_1}])\le s_{0} ]\\ \lesssim & F(z_1,\dots,z_{n} )\prod_{j=1}^n r_j^\alpha \Big(\frac{s_{0}}{|z_{k_0}-z_1|\wedge |z_{k_0}|}\Big)^{\beta}. \end{align*} \label{RZ-Thm3.1} \end{Proposition} \begin{proof} This proposition is very similar to \cite[Theorem 3.1]{existence}. The following estimate is \cite[Formula (A.14)]{existence}. For distinct points $z_1,\dots,z_n\in\overline\mathbb{H}\setminus\{0\}$, $r_j\in(0,R_j)$, $1\le j\le n$, and $s_0>0$, $$ \mathbb{P}[\tau^{z_j}_{r_j}<\infty,1\le j\le n; \tau^{z_{1}}_{s_{0}}<\tau^{z_{2}}_{r_{2}}<\tau^{z_1}_{r_1}] \lesssim F(z_1,\dots,z_n;r_1,\dots,r_n)\cdot\Big(\frac{s_{0}}{|z_{1}-z_{2}|\wedge |z_{1}|}\Big)^{\frac{\alpha}{32n^2}}. $$ Let $k_0\in \{2,\dots,n\}$ and $\beta=\frac{\alpha}{32n^2}$. Applying (\ref{perm-Fzr}) to the above formula with a permutation $\sigma$ of $\mathbb{N}_n$ which sends $1$ to $k_0$ and $2$ to $1$, we find that $$\mathbb{P}[\tau^{z_j}_{r_j}<\infty,1\le j\le n; \tau^{z_{k_0}}_{s_{0}}<\tau^{z_{1}}_{r_{1}}<\tau^{z_{k_0}}_{r_{k_0}}] \lesssim F(z_1,\dots,z_n;r_1,\dots,r_n)\cdot\Big(\frac{s_{0}}{|z_{k_0}-z_{1}|\wedge |z_{k_0}|}\Big)^{\beta}.$$ We then complete the proof by restricting to $z_1,\dots,z_n\in\mathbb{R}\setminus\{0\}$. \end{proof} \begin{Proposition} Let $z_1 \in\mathbb{R}\setminus \{0\}$ and $0\le s< r<R\wedge |z_1|$. On the event $\{\tau^{z_1}_ r<\tau^*_{z_1}\}$, let $ \xi_+ $ be the connected component of $\{|z-z_1|=R\}\cap H_{\tau^{z_1}_ r}$ with one endpoint being $z_1+\sign(z_1)R$; otherwise let $\xi_+=\emptyset$. Let $$E_{r,s;R}=\{\gamma[\tau^{z_1}_ r,\tau^{z_1}_s]\cap \xi_+ =\emptyset \}.$$ Then \begin{enumerate} \item [(i)] If $s>0$, $\mathbb{P}_{z_1}^s[ E_{r,s;R}^c]\lesssim ({ r}/{R})^{\alpha}$. \item [(ii)] If $s=0$, $\mathbb{P}_{z_1}^*[E_{r,0;R}^c] \lesssim ({ r}/{R} )^{\alpha}$. \end{enumerate}\label{stayin} \end{Proposition} \begin{proof} (i) Assume that $z_1 >0$ by the left-right symmetry of chordal SLE. Suppose $\gamma$ follows the law $\mathbb{P}$. Since $\kappa\in(0,8)$, the probability that $\gamma$ visits $\{z_1+ s,z_1-s,z_1+R\}$ is zero. We now assume that $\gamma$ does not visit this set. Let $\tau=\inf(\{t\ge \tau^{z_1}_r:\gamma(t)\in \xi_+\}\cup\{\infty\})$.
Then $\tau$ is a stopping time, and $E_{r,s;R}=\{\tau<\tau^{z_1}_s<\infty\}$. Let $E_\tau=\{\tau<\tau^{z_1}_{s}\}\in {\mathcal F}_\tau$. By DMP of chordal SLE, conditionally on ${\mathcal F}_{\tau}$ and the event $E_\tau$, there is a random curve $\widetilde \textcolor{green}amma$ following the law $\mathbb{P}$ such that $\textcolor{green}amma(\tau+\cdot)=Z_\tau^{-1}\circ \widetilde \textcolor{green}amma$. Let $D=\{z\in\mathbb{H}\setminus K_\tau:|z-z_1|\le s\}$, $\widetilde D=Z_\tau(D)$, $\widetilde z_1=Z_\tau(z_1)>0$, and $\widetilde s=\textcolor{red}ad_{\widetilde z_1}(\widetilde D)>0$. On the event $E_\tau$, in order for $E_{r,s;R}$ to happen, we need that $\textcolor{green}amma(\tau+\cdot)$ visits $D$, which is equivalent to that $\widetilde\textcolor{green}amma$ visits $\widetilde D$. By (\textcolor{red}ef{1pt}), $$\mathbb{P}[E_{r,s;R}^c|{\mathcal F}_\tau,E_\tau]\lesssim (1\wedge (\widetilde r_1/\widetilde z_1))^\alpha.$$ By Lemma \textcolor{red}ef{lem-extremal2} and conformal invariance of extremal length, $$1\wedge ({\widetilde r_1}/{\widetilde z_1})\lesssim e^{-\pi d_{\mathbb{H}}((-\infty, 0],\widetilde D)}=e^{-\pi d_{H_\tau}(S^-_\tau,D)}.$$ Since $S^-_\tau$ can be separated from $D$ in $H_\tau$ by the semi-annulus $\{s<|z-z_1|<R\}$, by the comparison principle of extremal length, $ d_{H_\tau}(S^-_\tau,D)\textcolor{green}e \log(R/s)/\pi$. So by the above two displayed formulas we get $\mathbb{P}[E_{r,s;R}^c|{\mathcal F}_\tau,E_\tau]\lesssim (s/R)^\alpha$, which together with $\mathbb{P}[E_\tau]\le \mathbb{P}[\tau^{z_1}_ r<\infty]\lesssim ( r/|z_1|)^\alpha $ (the upper bound in (\textcolor{red}ef{1pt})) implies that $\mathbb{P}[E_{r,s;R}^c]\lesssim ( r /{|z_1|} )^\alpha ( {s}/{R} )^{\alpha}$. Combining this estimate with the lower bound in (\textcolor{red}ef{1pt}), i.e., $\mathbb{P}[\tau^{z_1}_{s}]\textcolor{green}trsim (s/z_1)^\alpha$, we get (i). \textcolor{blue}egin{comment} Suppose $z_1>R$ and $\tau^{z_1}_ r<\tau^*_{z_1- R}$ so that $\xi_-$ is not empty. Since $|z_1-z_1'|< r-s$, we have $e^m \cdot s\le r-|z_1-z_1'|\le e^{m+1} \cdot s$ for some $m\in\mathbb{N}\cup\{0\}$. Since $|z_1-z_1'|<\frac r 2$, $ r\asymp e^m\cdot s$. Let $\tau_-$ be the first time after $\tau^{z_1}_ r$ that $ r$ visits $\xi_-$. Let $\tau_j=\tau^{z_1'}_{e^{j}\cdot s}$. Then \mbox{\textcolor{blue}fseries B}GE \{\textcolor{green}amma[\tau^{z_1}_ r,\tau^{z_1'}_s]\cap \xi_-\ne \emptyset\}\subset \textcolor{blue}igcup_{j=1}^{m+1} E_j,\label{union}\mathbb{ E}DE where $$E_j:=\{\tau_j<\tau_-<\tau_{j-1},\quad 1\le j\le m;\quad E_{m+1}=\{\tau<\tau_m\}.$$ Fix $1\le j\le m$. 
By (\textcolor{red}ef{1pt}), \mbox{\textcolor{blue}fseries B}GE \mathbb{P}[\tau_j<\infty]\lesssim (e^{j}\cdot s/|z_1'|)^\alpha.\label{Ptauj}\mathbb{ E}DE Applying Proposition \textcolor{red}ef{boundary-lem} to the conditional SLE$_\kappa$ curve $\textcolor{green}amma(\tau_j+\cdot)$ in $H_{\tau_j}$ (given ${\mathcal F}_{\tau_j}$ and the events $\tau_j<\infty$ and that $\textcolor{green}amma[\tau^{z_1}_ r,\tau_j]\cap \xi_-=\emptyset$) and the crosscuts $\textcolor{red}ho^{e^j\cdot s}_-$ and $\xi_-$, where $\textcolor{red}ho^{e^j\cdot s}_-$ is the connected component of $\{ |z-z_1'|=e^j\cdot s\}\cap H_{\tau_j}$ with one endpoint being $z_1'- e^j\cdot s$, we find that \mbox{\textcolor{blue}fseries B}GE \mathbb{P}[ \textcolor{green}amma[\tau^{z_1}_ r,\tau^{z_1'}_s]\cap\xi_- \ne\emptyset|{\mathcal F}_{\tau_j},\tau_j<\tau]\lesssim e^{-\alpha \pi d_{H_{\tau_j}}(\textcolor{red}ho^{e^j\cdot s}_-,\xi_-)}\le (e^j\cdot s/R)^\alpha,\label{Ptaujn}\mathbb{ E}DE where the last inequality follows from the comparison principle of extremal length and the facts that every curve connecting $\textcolor{red}ho^{e^j\cdot s}_-$ and $\xi_-$ crosses the semi-annulus $\{z\in\mathbb{H}:e^j\cdot s<|z-z_1'|<R-|z_1-z_1'| \}$ and that $R-|z_1-z_1'|\asymp R$. Applying Proposition \textcolor{red}ef{boundary-lem} to the conditional SLE$_\kappa$ curve $\textcolor{green}amma(\tau_-+\cdot)$ in $H_{\tau_-}$ (given ${\mathcal F}_{\tau_-}$ and the event $E_j$) and the crosscuts $ \{z\in\mathbb{H}:|z-z_1'|=e^{j-1}\cdot s\}$ and $ \{z\in\mathbb{H}:|z-z_1'|= s\}$, we find that, for $2\le j\le m+1$, \mbox{\textcolor{blue}fseries B}GE \mathbb{P}[\tau^{z_1'}_s<\infty|{\mathcal F}_{\tau_-},E_j]\lesssim e^{-\alpha (j-1)}.\label{Ptaujlast}\mathbb{ E}DE The inequality trivially holds for $j=1$. Combining (\textcolor{red}ef{Ptauj},\textcolor{red}ef{Ptaujn},\textcolor{red}ef{Ptaujlast}) and that $e^m \cdot s\le r$, we find that, for $1\le j\le m$, \mbox{\textcolor{blue}fseries B}GE \mathbb{P}[\tau^{z_1'}_s<\infty;E_j; \textcolor{green}amma[\tau^{z_1}_ r,\tau^{z_1'}_s]\cap\xi_- \ne\emptyset] \lesssim \frac{(e^j \cdot s^2)^\alpha}{(|z_1'| R)^\alpha} \le e^{(j-m)\alpha} \mbox{\textcolor{blue}fseries B}ig( \frac{ s r }{ |z_1'| R }\mbox{\textcolor{blue}fseries B}ig)^\alpha. \label{Ptaucomp}\mathbb{ E}DE We claim that (\textcolor{red}ef{Ptaucomp}) also holds for $j=m+1$. First, by (\textcolor{red}ef{1pt}) and that $|z_1|\asymp |z_1'|$, we have \mbox{\textcolor{blue}fseries B}GE \mathbb{P}[\tau^{z_1}_ r<\infty]\lesssim ( r/|z_1|)^\alpha\lesssim ( r/|z_1'|)^\alpha.\label{Ptauj*}\mathbb{ E}DE Second, applying Proposition \textcolor{red}ef{boundary-lem} to the conditional SLE$_\kappa$ curve $\textcolor{green}amma(\tau^{z_1}_ r+\cdot)$ in $H_{\tau^{z_1}_ r}$ (given ${\mathcal F}_{\tau^{z_1}_ r}$ and the event $\tau^{z_1}_ r<\infty$) and the crosscuts $\textcolor{red}ho^ r_-$ and $ \xi_-$, where $\textcolor{red}ho^ r_-$ is the connected component of $\{z\in\mathbb{H}:|z-z_1|= r\}\cap H_{\tau^{z_1}_ r}$ with one endpoint being $z_1- r$, we find that \mbox{\textcolor{blue}fseries B}GE \mathbb{P}[\textcolor{green}amma[\tau^{z_1}_ r,\infty)\cap \xi_-\ne\emptyset|{\mathcal F}_{\tau^{z_1}_ r}, \tau^{z_1}_ r<\infty]\lesssim ( r/R)^\alpha.\label{Ptaujn*}\mathbb{ E}DE Third, we have (\textcolor{red}ef{Ptaujlast}) for $j=m+1$. Combining (\textcolor{red}ef{Ptauj*},\textcolor{red}ef{Ptaujn*},\textcolor{red}ef{Ptaujlast}) and that $ r\asymp e^m\cdot s$, we find that (\textcolor{red}ef{Ptaucomp}) holds for $j=m+1$. 
Summing (\textcolor{red}ef{Ptaucomp}) over $j\in \{1,\dots,,m+1\}$ and using (\textcolor{red}ef{union}), we conclude that \mbox{\textcolor{blue}fseries B}GE \mathbb{P}[\tau^{z_1'}_s<\infty; \textcolor{green}amma[\tau^{z_1}_ r,\tau^{z_1'}_s]\cap\xi_- \ne\emptyset] \lesssim ( r /{|z_1'|} )^\alpha ( {s}/{R} )^{\alpha}. \label{eta-eps-R-}\mathbb{ E}DE Combining (\textcolor{red}ef{eta-eps-R+},\textcolor{red}ef{eta-eps-R-}) with the lower bound in (\textcolor{red}ef{1pt}) applied to $z_1'$ and $s$, we get (i). \end{comment} (ii) From Proposition \textcolor{red}ef{RN<1} and (i), we get $\mathbb{P}_{z_1}^*[E_{r,s;R}^c] \lesssim ( { r}/{R} )^{\alpha}$ for any $s\in (0, r)$. We then complete the proof by sending $s$ to $0^+$. \end{proof} \textcolor{blue}egin{Lemma} Let $z_1,\dots,z_n,w_1,\dots,w_m$ be distinct points in $\mathbb{R}\setminus\{0\}$, where $n\textcolor{green}e 1$ and $m\textcolor{green}e 0$. Suppose that all $z_j$ have the same sign $\sigma_z\in\{+,-\}$, all $w_k$ have the same sign $\sigma_w\in\{+,-\}$, $\sigma_z\ne \sigma_w$, and both $j\mapsto |z_j|$ and $k\mapsto |w_k|$ are increasing. Let $z_0=w_0=0$, $z_{n+1}=\sigma_z\cdot \infty$, and $w_{m+1}=\sigma_w\cdot \infty$. Let $r_j\in(0,(|z_j-z_{j-1}|\wedge |z_j-z_{j+1}|)/2)$, $1\le j\le n$, and $s_k\in (0,(|w_k-w_{k+1}|\wedge |w_k-w_{k-1}|)/2)$, $1\le k\le m$. Let $R>2(|z_n|\vee |w_m|)$. Then \textcolor{blue}egin{align} &\mathbb{P}[\tau^\infty_R<\tau^{z_j}_{r_j}<\infty,1\le j\le n;\tau^\infty_R<\tau^{w_k}_{s_k}<\infty,1\le k\le m] \noindentnumber\\ \lesssim & \mbox{\textcolor{blue}fseries B}ig(\frac{|z_1|}{R}\mbox{\textcolor{blue}fseries B}ig)^\alpha F(z_1,\dots,z_n,w_1,\dots,w_m) \prod_{j=1}^n r_j^\alpha\cdot \prod_{k=1}^m s_k^\alpha.\label{PR} \end{align} \label{Lemma-PR} \end{Lemma} \textcolor{blue}egin{proof} By symmetry, we may assume that $w_m<\cdots<w_1<0<z_1<\cdots <z_n$. Define $F_z$ and $F_w$ such that $F_z= \prod_{j=1}^n |z_j-z_{j-1}|^{-\alpha}$; $F_w =\prod_{k=1}^m |w_k-w_{k-1}|^{-\alpha}$, if $m\textcolor{green}e 1$; and $F_w=1$ if $m=0$. Then we have $F(z_1,\dots,z_n,w_1,\dots,w_m)=F_z F_w$. Let $\tau=\tau^\infty_R$. Let $E$ denote the event in (\textcolor{red}ef{PR}). Then $E=E^\tau_*\cap E_\#$, where \textcolor{blue}egin{align*} E^\tau_*&:=\{\tau^\infty_R<\tau^{z_j}_{r_j}\wedge \tau^*_{z_j},1\le j\le n;\tau^\infty_R<\tau^{w_k}_{s_k}\wedge \tau^*_{w_k},1\le k\le m\}\in{\mathcal F}_\tau;\\ E_\#:&=\{ \tau^{z_j}_{r_j}<\infty,1\le j\le n; \tau^{w_k}_{s_k}<\infty,1\le k\le m \}. \end{align*} Suppose the event $E_*^\tau$ occurs. Let $\widetilde z_j=Z_\tau(z_j)$, $D_j=\{z\in H_\tau:|z-z_j|\le r_j\}$, $\widetilde D_j=Z_\tau(D_j)$, and $\widetilde r_j=\textcolor{red}ad_{\widetilde z_j}(\widetilde D_j)$, $1\le j\le n$. Let $\widetilde w_k=Z_\tau(w_k)$, $E_k=\{z\in H_\tau:|z-w_k|\le s_k\}$, $\widetilde E_k=Z_\tau(E_k)$, and $\widetilde s_k=\textcolor{red}ad_{\widetilde w_k}(\widetilde E_k)$, $1\le k\le m$. Then $\widetilde w_m<\cdots <\widetilde w_1<0<\widetilde z_1<\cdots <\widetilde z_n$. 
By DMP of chordal SLE$_\kappa$ and Proposition \textcolor{red}ef{lower-upper}, \textcolor{blue}egin{align} \mathbb{P}[E_\#|{\mathcal F}_\tau,E_*^\tau] \lesssim & \mbox{\textcolor{blue}fseries B}ig(1\wedge \frac{\widetilde r_{n}}{|\widetilde z_{n}|}\mbox{\textcolor{blue}fseries B}ig)^\alpha\cdot \prod_{j=1}^{n-1} \mbox{\textcolor{blue}fseries B}ig(1\wedge \frac{\widetilde r_j}{|\widetilde z_j|\wedge |\widetilde z_j-\widetilde z_{j+1}|}\mbox{\textcolor{blue}fseries B}ig)^\alpha \noindentnumber\\ \cdot& \mbox{\textcolor{blue}fseries B}ig(1\wedge \frac{\widetilde s_m}{|\widetilde w_m|}\mbox{\textcolor{blue}fseries B}ig)^\alpha\cdot \prod_{k=1}^{m-1} \mbox{\textcolor{blue}fseries B}ig(1\wedge \frac{\widetilde s_k}{|\widetilde w_k|\wedge |\widetilde w_k-\widetilde w_{k+1}|}\mbox{\textcolor{blue}fseries B}ig)^\alpha. \label{EE0R} \end{align} Here we organize $\widetilde z_j$'s and $\widetilde w_k$'s by $\widetilde z_n,\dots,\widetilde z_1,\widetilde w_m,\dots,\widetilde w_1$ when applying Proposition \textcolor{red}ef{lower-upper}. In the case that $m=0$, the second line disappears. By Proposition \textcolor{red}ef{lem-extremal2} and conformal invariance of extremal distance, \mbox{\textcolor{blue}fseries B}GE 1\wedge \frac{\widetilde r_{j}}{|\widetilde z_{j}|}\lesssim e^{-\pi d_{\mathbb{H}} ((-\infty,0],\widetilde D_j)}=e^{-\pi d_{H_\tau}(S^-_\tau, D_j)},1\le j\le n;\label{disjRz}\mathbb{ E}DE \mbox{\textcolor{blue}fseries B}GE 1\wedge \frac{\widetilde r_{j}}{|\widetilde z_{j}-\widetilde z_{j+1}|}\lesssim e^{-\pi d_{\mathbb{H}} ([\widetilde z_{j+1},\infty),\widetilde D_j)}=e^{-\pi d_{H_\tau}([z_{j+1},\infty), D_j)},1\le j\le n-1;\label{disj-1Rz}\mathbb{ E}DE \mbox{\textcolor{blue}fseries B}GE 1\wedge \frac{\widetilde s_{k}}{|\widetilde w_{k}|}\lesssim e^{-\pi d_{\mathbb{H}} ([0,+\infty),\widetilde E_k)}=e^{-\pi d_{H_\tau}(S^+_\tau, E_k)},1\le k\le m;\label{disjRw}\mathbb{ E}DE \mbox{\textcolor{blue}fseries B}GE 1\wedge \frac{\widetilde s_{k}}{|\widetilde w_{k}-\widetilde w_{k+1}|}\lesssim e^{-\pi d_{\mathbb{H}} ((-\infty,\widetilde w_{k+1}],\widetilde E_k)}=e^{-\pi d_{H_\tau}((-\infty,w_{k+1}], E_k)},1\le k\le m-1.\label{disj-1Rw}\mathbb{ E}DE \textcolor{blue}egin{figure} \centering \includegraphics[width=1\textwidth]{zwR.png} \caption[A figure for the proof of Lemma \textcolor{red}ef{Lemma-PR}]{{\textbf{A figure for the proof of Lemma \textcolor{red}ef{Lemma-PR}.}} This figure illustrates an application of the comparison principle of extremal distance in the proof of Lemma \textcolor{red}ef{Lemma-PR}. Here $n=3$ and $m=2$. The curve $\textcolor{green}amma$ is stopped at the time $\tau=\tau^\infty_R$. Assume that the event $E_*^\tau$ occurs. To bound the extremal distance $d_{H_\tau}(D_2,S^-_\tau)$ from below for example, we use the semi-annulus (shaded region) $A_2:=\{z\in\mathbb{H}: r_2<|z-z_2|<R-|z_2|\}$ and the fact that any curve in $H_\tau$ that connects the semi-circle $\partial D_2\cap \mathbb{H}$ with the left side of $\textcolor{green}amma[0,\tau]$ or the real interval $(-\infty,0]$ must cross $A_2$, i.e., contain a subpath in $A_2$ connecting its two semi-circles. The intersection of $A_2$ with $\textcolor{green}amma[0,\tau]$ does not cause a problem in the application. 
} \label{zwR} \end{figure} Since $S^-_\tau$ can be separated from $D_j$ in $H_\tau$ by $\{z\in\mathbb{H}: r_j<|z-z_j|<R-|z_j|\}$, by comparison principle of extremal distance, $$d_{H_\tau}(S^-_\tau, D_j)\textcolor{green}e \frac 1\pi \log\mbox{\textcolor{blue}fseries B}ig(\frac{R-|z_j|}{r_j}\mbox{\textcolor{blue}fseries B}ig),$$ which combined with (\textcolor{red}ef{disjRz}) and that $R>2|z_j|$ implies that \mbox{\textcolor{blue}fseries B}GE 1\wedge \frac{\widetilde r_{j}}{|\widetilde z_{j}|}\lesssim \frac{r_j}{R-|z_j|}\asymp \frac{r_j}{R},\quad 1\le j\le n.\label{rjzjR}\mathbb{ E}DE See Figure \textcolor{red}ef{zwR}. Since $[z_{j+1},\infty)$ can be separated from $D_j$ in $H_\tau$ by $\{z\in\mathbb{H}: r_j<|z-z_j|<|z_{j+1}-z_j|\}$, by comparison principle of extremal distance, $$d_{H_\tau}([z_{j+1},\infty), D_j)\textcolor{green}e \frac 1\pi \log\mbox{\textcolor{blue}fseries B}ig(\frac{|z_{j+1}-z_j|}{r_j}\mbox{\textcolor{blue}fseries B}ig),$$ which combined with (\textcolor{red}ef{disj-1Rz}) implies that \mbox{\textcolor{blue}fseries B}GE 1\wedge \frac{\widetilde r_{j}}{|\widetilde z_{j}-\widetilde z_{j+1}|}\lesssim \frac{r_j}{|z_{j+1}-z_j|},\quad 1\le j\le n-1.\label{rjzj+1R}\mathbb{ E}DE For $1\le j\le n-1$, since $R-|z_j|\textcolor{green}e |z_{j+1}|-|z_j|=|z_{j+1}-z_j|$, by (\textcolor{red}ef{rjzjR}) and (\textcolor{red}ef{rjzj+1R}), \mbox{\textcolor{blue}fseries B}GE 1\wedge \frac{\widetilde r_j}{|\widetilde z_j|\wedge |\widetilde z_j-\widetilde z_{j+1}|} \lesssim \frac{r_j}{|z_{j+1}-z_j|},\quad 1\le j\le n-1.\label{rjzjR-com}\mathbb{ E}DE Similarly, \mbox{\textcolor{blue}fseries B}GE 1\wedge \frac{\widetilde s_{k}}{|\widetilde w_{k}|}\lesssim \frac{s_k}{R-|w_k|}\asymp \frac{s_k}{R},\quad 1\le k\le m;\label{skwkR}\mathbb{ E}DE \mbox{\textcolor{blue}fseries B}GE 1\wedge \frac{\widetilde s_k}{|\widetilde w_k|\wedge |\widetilde w_k-\widetilde w_{k+1}|} \lesssim \frac{s_k}{|w_{k+1}-w_k|},\quad 1\le k\le m-1.\label{skwkR-com}\mathbb{ E}DE Combining (\textcolor{red}ef{EE0R}) with (\textcolor{red}ef{rjzjR}) (for $j=n$), (\textcolor{red}ef{rjzjR-com}), (\textcolor{red}ef{skwkR}) (for $k=m$) and (\textcolor{red}ef{skwkR-com}), we get \textcolor{blue}egin{align*}\mathbb{P}[E_\#|{\mathcal F}_\tau,E_*^\tau]& \lesssim \mbox{\textcolor{blue}fseries B}ig(\frac{r_n}{R }\mbox{\textcolor{blue}fseries B}ig)^\alpha \prod_{j=1}^{n-1} \mbox{\textcolor{blue}fseries B}ig(\frac{r_j}{|z_j-z_{j+1}|}\mbox{\textcolor{blue}fseries B}ig)^\alpha \cdot \mbox{\textcolor{blue}fseries B}ig(\frac{s_m}{R }\mbox{\textcolor{blue}fseries B}ig)^\alpha \prod_{k=1}^{m-1} \mbox{\textcolor{blue}fseries B}ig(\frac{s_k}{|w_k-w_{k+1}|}\mbox{\textcolor{blue}fseries B}ig)^\alpha\\ &\le \mbox{\textcolor{blue}fseries B}ig(\frac{|z_1|}{R}\mbox{\textcolor{blue}fseries B}ig)^\alpha F_z F_w\prod_{j=1}^n r_j^\alpha \prod_{k=1}^m s_k^\alpha \end{align*} Here, if $m=0$, the factors involving $s_k$ and $s_m$ disappear; if $m\textcolor{green}e 1$, we used that $R\textcolor{green}e |w_1|$ in the estimate. Since $ F(z_1,\dots,z_n,w_1,\dots,w_m)=F_zF_w$, taking expectation we get (\textcolor{red}ef{PR}). \end{proof} \textcolor{blue}egin{Lemma} Suppose $x_0,\dots,x_N$, $N\textcolor{green}e 1$, are distinct points in $\mathbb{R}\setminus\{0\}$ that have the same sign $\nu\in\{+,-\}$, and $j\mapsto |x_j|$ is increasing. Let $x_{N+1}=\nu\cdot \infty$. Let $R_j=(|x_j-x_{j+1}|\wedge |x_j-x_{j-1}|)/2$ and $r_j\in (0,R_j)$, $1\le j\le N$. Let $r_0\in (0,|x_0-x_1|/2)$. 
Then \begin{align} &\mathbb{P}[\tau^{x_j}_{r_j}<\tau^{x_0}_{r_0}<\infty;1\le j\le N] \lesssim \Big( \frac{r_N}{|x_N|}\Big)^\alpha \prod_{k=0}^{N-1} \Big( \frac{ r_k }{|x_k-x_{k+1}|}\Big)^\alpha\cdot \prod_{k=1}^N \Big( \frac{ r_{k} }{|x_k-x_{k-1}|}\Big)^\alpha \label{n-1}\\ \lesssim & \Big( \frac{R_N}{|x_N|}\Big)^\alpha \Big(\frac{r_0}{|x_0-x_1|}\Big)^\alpha \prod_{k=1}^{N} \Big( \frac{ r_k }{R_k}\Big)^{2\alpha} \le \Big(\frac{r_0}{|x_0-x_1|}\Big)^\alpha \prod_{k=1}^{N} \Big( \frac{ r_k }{R_k}\Big)^{2\alpha}. \label{n-1'} \end{align} \label{inner-last} \end{Lemma} \begin{proof} Assume all $x_j$'s are positive by symmetry. Let $P$ denote the RHS of (\ref{n-1}) (depending on $x_0,\dots,x_N$ and $r_0,\dots,r_N$). We write $\tau_j$ for $\tau^{x_j}_{r_j}$, $0\le j\le N$. Let $S_N^*$ denote the set of permutations $\sigma$ of $\{0,1,\dots,N\}$ such that $\sigma(N)=0$. For each $\sigma\in S_N^*$, let $E_\sigma=\{\tau_{\sigma(0)}<\tau_{\sigma(1)}<\cdots<\tau_{\sigma(N)}<\infty\}$. Then $\bigcup_{\sigma\in S_N^*} E_\sigma$ is the event in (\ref{n-1}). To prove (\ref{n-1}), it suffices to show that, for any $\sigma\in S_N^*$, $\mathbb{P}[E_\sigma]\lesssim P$. Fix $\sigma\in S_N^*$. For $0\le k\le N-1$, let $$E^\sigma_k=\{\tau_{\sigma(0)}<\tau_{\sigma(1)}<\cdots <\tau_{\sigma(k)}<\tau_{\sigma(k+1)}\wedge \tau^*_{x_{\sigma(k+1)}}\}\in {\mathcal F}_{\tau_{\sigma(k)}};$$ and let $E^\sigma_N=E_\sigma$. Then $E^\sigma_0\supset E^\sigma_1\supset\cdots\supset E^\sigma_N=E_\sigma$. Let \begin{equation} S_\sigma=\{j:N-1\ge j\ge \sigma^{-1}(N),\sigma(j+1)<\sigma(j)\}.\label{Tsigma}\end{equation} For each $j\in S_\sigma$, let \begin{equation} S^\sigma_j=\{k : \sigma(j+1)<k<\sigma(j), \sigma^{-1}(k)<j\}.\label{Ssigma}\end{equation} In plain words, $S_\sigma$ is the set of indices $j\ge j_0$, where $j_0:= \sigma^{-1}(N)$, such that $\sigma(j+1)<\sigma(j)$; and $S^\sigma_j$ is the set of indices $k$ lying strictly between $\sigma(j+1)$ and $\sigma(j)$ such that the disc $\{|z-x_k|\le r_k\}$ was visited by $\gamma$ before $\{|z-x_{\sigma(j)}|\le r_{\sigma(j)}\}$. For example, $j_0$ and $N-1$ belong to $S_\sigma$. For $j\in S_\sigma$, the set $S^\sigma_j$ may or may not be empty. See Figure \ref{Figure-x}. \begin{figure} \centering \includegraphics[width=1\textwidth]{x.png} \caption[The first figure for the proof of Lemma \ref{inner-last}]{\textbf{The first figure for the proof of Lemma \ref{inner-last}.} This figure illustrates a situation in the proof of Lemma \ref{inner-last}. Here $N=5$, and the event $E_\sigma$ happens, where $\sigma = \bigl(\begin{smallmatrix} 0 & 1 & 2 & 3 & 4 & 5 \\ 2 & 5 & 3 & 4 & 1 & 0\end{smallmatrix}\bigr)$.
We have $S_\sigma=\{1,3,4\}$ since $ \sigma^{-1}(5)=1$, $\sigma(1)=5>3=\sigma(2)$, $\sigma(3)=4>1=\sigma(4)$, $\sigma(4)=1>0=\sigma(5)$, but $\sigma(2)=3<4=\sigma(3)$. We have $S^\sigma_1=\emptyset$ because the only index between $\sigma(2)$ and $\sigma(1)$ is $4$, and $\sigma^{-1}(4)=3>1$. We have $S^\sigma_3=\{2,3\}$ because $2,3$ lie between $\sigma(4)$ and $\sigma(3)$, and $\sigma^{-1}(2),\sigma^{-1}(3)<3$. We have $S^\sigma_4=\emptyset$ because there is no index that lies between $\sigma(5)$ and $\sigma(4)$. } \label{Figure-x}
\end{figure}
For $0\le j\le N-1$, let $Q^+_j=( \frac{2r_j}{|x_j-x_{j+1}|})^\alpha$. For $1\le j\le N$, let $Q^-_j=( \frac{2r_j}{|x_j-x_{j-1}|})^\alpha$. Let $Q_N=( \frac{r_N}{|x_N|})^\alpha$. Then $P\asymp Q_N\cdot \prod_{j=0}^{N-1} Q^+_j\cdot \prod_{j=1}^N Q^-_j$. By (\ref{1pt}),
\BGE \mathbb{P}[E^\sigma_{\sigma^{-1}(N)}]\le \mathbb{P}[\tau_N<\infty]\lesssim Q_N.\label{PQn}\EDE
We claim that, for any $j\in S_\sigma$,
\BGE \mathbb{P}[E^\sigma_{j+1}|{\mathcal F}_{\tau_{\sigma(j)}},E^{\sigma}_j]\lesssim Q_{\sigma(j)}^- Q_{\sigma(j+1)}^+ \prod_{k\in S^\sigma_j}(Q_{k}^+Q_{k}^-);\label{PQj}\EDE
and
\BGE \prod_{j\in S_\sigma}\Big(Q_{\sigma(j)}^- Q_{\sigma(j+1)}^+ \prod_{k\in S^\sigma_j}(Q_{k}^+Q_{k}^-)\Big)\le \prod_{l=0}^{N-1} Q^+_l\cdot \prod_{l=1}^N Q^-_l.\label{PQj-union}\EDE
Note that (\ref{PQn},\ref{PQj},\ref{PQj-union}) together imply that $\mathbb{P}[E_\sigma]=\mathbb{P}[E^\sigma_N]\lesssim P$. We first prove (\ref{PQj-union}). It suffices to show that
\BGE \{0,\dots,N-1\}\subset \bigcup_{j\in S_\sigma} ( \{\sigma(j+1)\}\cup S^\sigma_j);\label{Q1n-1}\EDE
\BGE \{1,\dots,N\}\subset \bigcup_{j\in S_\sigma} ( \{\sigma(j)\}\cup S^\sigma_j).\label{Q2n}\EDE
Let $l\in\{0,\dots,N-1\}$. We consider several cases. Case 1. $\sigma^{-1}(l)< \sigma^{-1}(N)$. Since $\sigma^{-1}(0)=N$, we have $l\ge 1$. Since $\sigma(\sigma^{-1}(N))=N>l$ and $\sigma(N)=0<l$, there exists $\sigma^{-1}(N)\le j_0\le N-1$ such that $\sigma(j_0)>l>\sigma(j_0+1)$. By (\ref{Tsigma},\ref{Ssigma}) we have $j_0\in S_\sigma$ and $l\in S^\sigma_{j_0}$. Case 2. $\sigma^{-1}(l)\ge \sigma^{-1}(N)$. Then $\sigma^{-1}(l)-1\ge \sigma^{-1}(N)$ since $l\ne N$. Consider two subcases. Case 2.1. $\sigma(\sigma^{-1}(l)-1)>\sigma(\sigma^{-1}(l))=l$. In this subcase, $j_1:=\sigma^{-1}(l)-1\in S_\sigma$ by (\ref{Tsigma}), and $\sigma(j_1+1)=l$. Case 2.2. $\sigma(\sigma^{-1}(l)-1)<\sigma(\sigma^{-1}(l))=l$. Since $\sigma(\sigma^{-1}(N))=N>l>\sigma(\sigma^{-1}(l)-1)$ and $\sigma^{-1}(N)\le \sigma^{-1}(l)-1$, there exists $\sigma^{-1}(N)\le j_2\le \sigma^{-1}(l)-2$ such that $\sigma(j_2)>l>\sigma(j_2+1)$. This implies that $j_2\in S_\sigma$ and $l\in S^\sigma_{j_2}$. Thus, in all cases, there is some $j\in S_\sigma$ such that $l\in \{\sigma(j+1)\}\cup S^\sigma_j$. So we get (\ref{Q1n-1}). Let $l\in\{1,\dots,N\}$. We consider several cases. Case 1. $\sigma^{-1}(l)< \sigma^{-1}(N)$. Then $l\le N-1$. By Case 1 of the last paragraph, there exists $j_0\in S_\sigma$ such that $l\in S^\sigma_{j_0}$. Case 2. $\sigma^{-1}(l)\ge \sigma^{-1}(N)$. Consider two subcases. Case 2.1.
$\sigma(\sigma^{-1}(l))>\sigma(\sigma^{-1}(l)+1)$. In this subcase, $j_1:=\sigma^{-1}(l)\in S_\sigma$ and $l=\sigma(j_1)$. Case 2.2. $l=\sigma(\sigma^{-1}(l))<\sigma(\sigma^{-1}(l)+1)$. Since $\sigma(\sigma^{-1}(l)+1)>l>0=\sigma(N)$, there exists $\sigma^{-1}(l)+1\le j_2\le N-1$ such that $\sigma(j_2)>l>\sigma(j_2+1)$. This implies that $j_2\in S_\sigma$ and $l\in S^\sigma_{j_2}$. Thus, in all cases, there is some $j\in S_\sigma$ such that $l\in \{\sigma(j)\}\cup S^\sigma_j$. So we get (\ref{Q2n}). Combining (\ref{Q1n-1},\ref{Q2n}) we get (\ref{PQj-union}).
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{x2.png}
\caption[The second figure for the proof of Lemma \ref{inner-last}]{\textbf{The second figure for the proof of Lemma \ref{inner-last}.} This figure illustrates an application of the comparison principle of extremal distance in the proof of Lemma \ref{inner-last}. Here $N=4$, and the event $E_\sigma$ happens, where $\sigma = \bigl(\begin{smallmatrix} 0 & 1 & 2 & 3 & 4 \\ 3& 4 & 1 & 2 & 0 \end{smallmatrix}\bigr)$. We stop the curve at the time $\tau:=\tau_4$. Then the next semi-disc to visit is $D_1=\{z\in\mathbb{H}:|z-x_1|\le r_1\}$. We know that $1\in S_\sigma$, $\sigma(1)=4$, $\sigma(2)=1$, and $S^\sigma_1=\{3\}$. The semi-disc $D_1$ is separated from $S^-_{\tau}$ in $H_\tau$ by the disjoint regions $A_1$, $A_4$, $\widetilde A_3^+$ and $\widetilde A_3^-$, among which $A_1$ and $A_4$ are semi-annuli, and $\widetilde A_3^+$ and $\widetilde A_3^-$ are subsets of two semi-annuli, which have the same center $x_3$, the same inner radius $r_3$, but different outer radii. } \label{x2}
\end{figure}
Finally, we prove (\ref{PQj}). Fix $j\in S_\sigma$. Let $\tau=\tau_{\sigma(j)}$. Suppose the event $E^{\sigma}_j$ occurs. Let $w=x_{\sigma(j+1)}$, $D=\{z\in\mathbb{H}: |z-w|\le r_{\sigma(j+1)}\}$, $\widetilde w=Z_\tau(w)$, $\widetilde D=Z_\tau(D)$, and $\widetilde r=\rad_{\widetilde w}(\widetilde D)$. By DMP of chordal SLE$_\kappa$ and (\ref{1pt}),
\BGE \mathbb{P}[E^\sigma_{j+1}|{\mathcal F}_\tau,E^\sigma_j]\le \mathbb{P}[\tau_{\sigma(j+1)}<\infty |{\mathcal F}_\tau,E^\sigma_j]\lesssim \Big(1\wedge \frac{\widetilde r}{|\widetilde w|}\Big)^\alpha.\label{1wedge>}\EDE
By Proposition \ref{lem-extremal2} and conformal invariance of extremal distance,
\BGE 1\wedge \frac{\widetilde r}{|\widetilde w|}\lesssim e^{-\pi d_{\mathbb{H}}((-\infty,0],\widetilde D)} =e^{-\pi d_{H_\tau}(S^-_\tau, D)}.\label{1wedge<}\EDE
Define semi-annuli
\begin{align*}
A_{\sigma(j)}&=\{z\in\mathbb{H}: r_{\sigma(j)} <|z-x_{\sigma(j)}|<|x_{\sigma(j)-1}-x_{\sigma(j)}|/2\};\\
A_{\sigma(j+1)}&=\{z\in\mathbb{H}: r_{\sigma(j+1)} <|z-x_{\sigma(j+1)}|<|x_{\sigma(j+1)+1}-x_{\sigma(j+1)}|/2\}; \\
A_k^\pm&=\{z\in\mathbb{H} : r_k <|z-x_k|<|x_{k\pm 1}-x_k|/2\},\quad k\in S^\sigma_j.
\end{align*}
For each $k\in S^\sigma_j$, define $\widetilde A_k^\pm$ to be the connected component of $A_k^\pm \cap H_\tau$ whose boundary contains $x_k\pm r_k$. Then $A_{\sigma(j)}$, $A_{\sigma(j+1)}$, $\widetilde A_k^+$ and $\widetilde A_k^-$, $k\in S^\sigma_j$, are mutually disjoint. See Figure \ref{x2}.
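As a sanity check on the index sets defined in (\ref{Tsigma}) and (\ref{Ssigma}) (it plays no role in the proof), the following short Python sketch recomputes $S_\sigma$ and the sets $S^\sigma_j$ for the two permutations shown in Figures \ref{Figure-x} and \ref{x2}; the function and variable names are ours.
\begin{verbatim}
# Sanity check of the index sets S_sigma and S^sigma_j.
def index_sets(sigma):
    """sigma encodes a permutation of {0,...,N} with sigma[N] == 0;
    returns S_sigma and the dictionary {j: S^sigma_j}."""
    N = len(sigma) - 1
    j0 = sigma.index(N)                        # j0 = sigma^{-1}(N)
    inv = {v: i for i, v in enumerate(sigma)}  # inv[k] = sigma^{-1}(k)
    S = [j for j in range(j0, N) if sigma[j + 1] < sigma[j]]
    Sj = {j: [k for k in range(sigma[j + 1] + 1, sigma[j]) if inv[k] < j]
          for j in S}
    return S, Sj

# Permutation of the first figure: sigma = (2,5,3,4,1,0), N = 5.
assert index_sets([2, 5, 3, 4, 1, 0]) == ([1, 3, 4], {1: [], 3: [2, 3], 4: []})
# Permutation of the second figure: sigma = (3,4,1,2,0), N = 4.
assert index_sets([3, 4, 1, 2, 0]) == ([1, 3], {1: [3], 3: [1]})
\end{verbatim}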
Since the event $E^\sigma_j$ occurs, any curve in $H_\tau$ connecting $D$ with $S^-_\tau$ must contain a subarc crossing $A_{\sigma(j)}$, a subarc crossing $A_{\sigma(j+1)}$, a subarc contained in $\widetilde A_k^+$ crossing $A_k^+$ for each $k\in S^\sigma_j$, and a subarc contained in $\widetilde A_k^-$ crossing $A_k^-$ for each $k\in S^\sigma_j$. By the comparison principle and composition rule of extremal length, we know that \textcolor{blue}egin{align}d_{H_\tau}(S^-_\tau, D)\textcolor{green}e & \frac 1\pi\log\mbox{\textcolor{blue}fseries B}ig(\frac{|x_{\sigma(j)}-x_{\sigma(j)-1}|}{2r_{\sigma(j)} } \mbox{\textcolor{blue}fseries B}ig) +\frac 1\pi\log\mbox{\textcolor{blue}fseries B}ig(\frac{|x_{\sigma(j+1)}-x_{\sigma(j+1)+1}|}{2r_{\sigma(j+1)}} \mbox{\textcolor{blue}fseries B}ig) \noindentnumber\\ &+ \frac 1\pi \sum_{k\in S^\sigma_j} \log\mbox{\textcolor{blue}fseries B}ig(\frac{|x_k-x_{k+1}|}{2r_k }\mbox{\textcolor{blue}fseries B}ig) + \frac 1\pi \sum_{k\in S^\sigma_j} \log\mbox{\textcolor{blue}fseries B}ig(\frac{|x_k-x_{k-1}|}{2r_k}\mbox{\textcolor{blue}fseries B}ig).\label{ext-lower} \end{align} Combining (\textcolor{red}ef{1wedge>},\textcolor{red}ef{1wedge<},\textcolor{red}ef{ext-lower}) we get (\textcolor{red}ef{PQj}). Then we get (\textcolor{red}ef{n-1}), which implies (\textcolor{red}ef{n-1'}) because $|x_k-x_{k\pm 1}|\textcolor{green}e R_k$, $1\le k\le N$, and $R_N=|x_N-x_{N-1}|\le |x_N|$. \end{proof} \textcolor{blue}egin{Remark} By a slight modification of the above proof, we can obtain the following estimate. Let $x_0,\dots,x_{N+1},R_1,\dots,R_N,r_0,\dots,r_N$ be as in Lemma \textcolor{red}ef{inner-last}. Let $I=[a,x_0]$ for some $a\in (0,x_0)$, and $\tau^{I}_{r_0}=\tau_{I\times [0,r_0]}$. Then \textcolor{blue}egin{align}&\mathbb{P}[\tau^{z_j}_{r_j}<\tau^{I}_{r_0}<\infty;1\le j\le N] \lesssim \prod_{j=1}^N \mbox{\textcolor{blue}fseries B}ig( \frac{r_j}{R_j}\mbox{\textcolor{blue}fseries B}ig)^{2\alpha}.\noindentnumber \end{align} To prove the estimate, we may use the same extremal length argument except that we do not use a semi-annulus centered at $x_0$ because such a semi-annulus may not disconnect $I\times [0,r_0]$ from other $x_j$'s in $H_\tau$. So we have the same factor in the upper bound except for $(\frac{r_0}{|x_1-x_0|})^\alpha$. \label{after-inner-last} \end{Remark} \textcolor{blue}egin{Lemma} Let $z_j$, $0\le j\le n+1$, $w_k$, $0\le k\le m+1$, $r_j$, $1\le j\le n$, $s_k$, $1\le k\le m$, be as in Lemma \textcolor{red}ef{Lemma-PR}. Now assume $n\textcolor{green}e 2$. Let $j_0\in\{2,\dots,n\}$. Let \textcolor{blue}egin{align*} Q= & |z_{j_0-1}-z_{j_0}|^{-\alpha} \cdot \prod_{j=1}^{j_0-1} |z_j-z_{j+1}|^{-\alpha} \cdot |w_m|^{-\alpha}\cdot \prod_{k=1}^{m-1}(|w_k|\wedge |w_k-w_{k+1}|)^{-\alpha}\\ &\cdot (|z_{j_0}|\wedge |z_{j_0}-z_{j_0+1}|)^{-\alpha}\cdot \prod_{j=j_0+1}^{n} (|z_j-z_{j-1}|\wedge |z_j -z_{j+1}|)^{-\alpha}. \end{align*} Here when $m=0$, the $|w_m|^{-\alpha}\cdot \prod_{k=1}^{m-1}(|w_k|\wedge |w_k-w_{k+1}|)^{-\alpha}$ disappears; and when $j_0=n$, the $\prod_{j=j_0+1}^{n} (|z_j-z_{j-1}|\wedge |z_j -z_{j+1}|)^{-\alpha}$ disappears. 
Then we have
\begin{align}
& \mathbb{P}[\tau^{z_{j_0}}_{r_{j_0}}<\tau^{z_j}_{r_j}<\infty, j\in\mathbb{N}_n\setminus\{j_0\}; \tau^{z_{j_0}}_{r_{j_0}}<\tau^{w_k}_{s_k} <\infty,k\in\mathbb{N}_m] \lesssim Q r_{j_0}^\alpha \cdot \prod_{j=1}^n r_j^\alpha \cdot \prod_{k=1}^m s_k^\alpha.\label{ping-pong-ineq'}
\end{align}
\label{ping-pong}
\end{Lemma}
\begin{proof}
By symmetry, we may assume that $w_m<\cdots<w_1<0<z_1<\cdots <z_n$. Let $\tau=\tau^{z_{j_0}}_{r_{j_0}}$. Let $E$ denote the event in (\ref{ping-pong-ineq'}). See Figure \ref{zwj}. Let
\begin{align*}
E_*^\tau&=\{\tau<\tau^{z_j}_{r_j}\wedge \tau^*_{z_j}: j\in\mathbb{N}_n\setminus\{j_0\}\}\in{\mathcal F}_\tau;\\
E_\#&=\{\tau^{z_j}_{r_j}<\infty,j\in\mathbb{N}_n\setminus\{j_0\}; \tau^{w_k}_{s_k}<\infty,1\le k\le m\}.
\end{align*}
Then $E=E_*^\tau\cap E_\#$. By (\ref{1pt}), $\mathbb{P}[E_*^\tau]\lesssim (r_{j_0}/|z_{j_0}|)^\alpha$.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{zwj.png}
\caption[A figure for the proof of Lemma \ref{ping-pong}]{\textbf{A figure for the proof of Lemma \ref{ping-pong}.} This figure illustrates the event $E$ in Lemma \ref{ping-pong}. Here $n=3$, $m=2$, and $j_0=2$. The curve $\gamma$ visits the five semi-discs centered at $z_1,z_2,z_3,w_1,w_2$, among which the one centered at $z_2$ is first visited (at the time $\tau=\tau^{z_2}_{r_2}$). The parts of $\gamma$ before $\tau$ and after $\tau$ are respectively drawn in solid and dashed lines. } \label{zwj}
\end{figure}
Suppose the event $E_*^\tau$ occurs. Let $\widetilde z_j=Z_\tau(z_j)$, $D_j=\{z\in H_\tau:|z-z_j|\le r_j\}$, $\widetilde D_j=Z_\tau(D_j)$, and $\widetilde r_j=\rad_{\widetilde z_j}(\widetilde D_j)$, $1\le j\le n$. Let $\widetilde w_k=Z_\tau(w_k)$, $E_k=\{z\in H_\tau:|z-w_k|\le s_k\}$, $\widetilde E_k=Z_\tau(E_k)$, and $\widetilde s_k=\rad_{\widetilde w_k}(\widetilde E_k)$, $1\le k\le m$. Then $\widetilde w_m<\cdots <\widetilde w_1<0<\widetilde z_1<\cdots <\widetilde z_n$. By DMP of chordal SLE$_\kappa$ and Proposition \ref{lower-upper} and that $E=E_*^\tau\cap E_\#$,
\begin{align}
\mathbb{P}[E|{\mathcal F}_\tau,E_*^\tau] \lesssim & \prod_{j=1}^{j_0-1} \Big(1\wedge \frac{\widetilde r_j}{|\widetilde z_j|\wedge |\widetilde z_j-\widetilde z_{j+1}|}\Big)^\alpha \cdot \prod_{j=j_0+1}^n \Big(1\wedge \frac{\widetilde r_j}{|\widetilde z_j-\widetilde z_{j-1}|}\Big)^\alpha\nonumber\\
\cdot& \Big(1\wedge \frac{\widetilde s_m}{|\widetilde w_m|}\Big)^\alpha\cdot \prod_{k=1}^{m-1} \Big(1\wedge \frac{\widetilde s_k}{|\widetilde w_k|\wedge |\widetilde w_k-\widetilde w_{k+1}|}\Big)^\alpha. \label{EE0}
\end{align}
Here when applying Proposition \ref{lower-upper}, we ordered the points $\widetilde z_j$ and $\widetilde w_k$ by
$$\widetilde z_{j_0},\dots,\widetilde z_1, \widetilde z_{j_0+1},\dots,\widetilde z_n,\widetilde w_m,\dots,\widetilde w_1,$$
and omitted the factor $(1\wedge \frac{\widetilde r_{j_0}}{|\widetilde z_{j_0}|} )^\alpha$, which is bounded by $1$.
By Proposition \textcolor{red}ef{lem-extremal2} and conformal invariance of extremal length, \mbox{\textcolor{blue}fseries B}GE 1\wedge \frac{\widetilde r_j}{|\widetilde z_j|} \lesssim e^{-\pi d_{\mathbb{H}}((-\infty,0],\widetilde D_j)}=e^{-\pi d_{H_\tau} (S^-_\tau,D_j)} ,\quad 1\le j\le j_0-1;\label{jD}\mathbb{ E}DE \mbox{\textcolor{blue}fseries B}GE 1\wedge \frac{\widetilde s_k}{|\widetilde w_k|}\lesssim e^{-\pi d_{\mathbb{H}}([0,+\infty),\widetilde E_k)}=e^{-\pi d_{H_\tau }(S^+_\tau,\mathbb{ E}_k)},\quad 1\le k\le m;\label{kE}\mathbb{ E}DE \mbox{\textcolor{blue}fseries B}GE 1\wedge \frac{\widetilde r_j}{|\widetilde z_j-\widetilde z_{j+1}|}\lesssim e^{-\pi d_{\mathbb{H}}([\widetilde z_{j+1},\infty),\widetilde D_j)}=e^{-\pi d_{H_\tau}([z_{j+1},\infty),D_j)},\quad 1\le j\le j_0-1 ;\label{j+D}\mathbb{ E}DE \mbox{\textcolor{blue}fseries B}GE 1\wedge \frac{\widetilde r_j}{|\widetilde z_j-\widetilde z_{j-1}|}\lesssim e^{-\pi d_{\mathbb{H}}((-\infty,\widetilde z_{j-1}],\widetilde D_j)}= e^{-\pi d_{H_\tau}(K_\tau\cup (-\infty,z_{j-1}],D_j)},\quad j_0+1\le j\le n;\label{j-D}\mathbb{ E}DE \mbox{\textcolor{blue}fseries B}GE 1\wedge \frac{\widetilde s_k}{|\widetilde w_k-\widetilde w_{k+1}|}\lesssim e^{-\pi d_{\mathbb{H}}((-\infty,\widetilde w_{k+1}],\widetilde E_k)}=e^{-\pi d_{H_\tau}((-\infty, w_{k+1}],E_k)},\quad 1\le k\le m-1.\label{k+E}\mathbb{ E}DE Since $S^+_\tau$ and $E_m$ are separated by the semi-annulus $\{z\in\mathbb{H}: s_m<|z-w_m|<|w_m|\}$ in $H_\tau$, we have $d_{H_\tau} (S^+_\tau,E_m)\textcolor{green}e \frac 1\pi \log(\frac{|w_m|}{s_m})$, which together with (\textcolor{red}ef{kE}) implies that \mbox{\textcolor{blue}fseries B}GE 1\wedge \frac{\widetilde s_m}{|\widetilde w_m|}\lesssim \frac{s_m}{|w_m|}.\label{mE1}\mathbb{ E}DE For $1\le j\le j_0-2$, since $D_j$ is separated from both $S^-_\tau$ and $[z_{j+1},\infty)$ by the semi-annulus $\{z\in\mathbb{H}: r_j<|z-z_j|<|z_j-z_{j+1}|\}$ in $H_\tau$, we have $$d_{H_\tau} (S^-_\tau,D_j),d_{H_\tau} ([z_{j+1},+\infty),D_j) \textcolor{green}e\frac 1\pi \log\mbox{\textcolor{blue}fseries B}ig(\frac{ |z_j-z_{j+1}|}{r_j}\mbox{\textcolor{blue}fseries B}ig),$$ which combined with (\textcolor{red}ef{jD},\textcolor{red}ef{j+D}) implies that \mbox{\textcolor{blue}fseries B}GE 1\wedge \frac{\widetilde r_j}{|\widetilde z_j|\wedge |\widetilde z_j-\widetilde z_{j+1}|}=\mbox{\textcolor{blue}fseries B}ig(1\wedge \frac{\widetilde r_j}{|\widetilde z_j| }\mbox{\textcolor{blue}fseries B}ig)\vee \mbox{\textcolor{blue}fseries B}ig(1\wedge \frac{\widetilde r_j}{ |\widetilde z_j-\widetilde z_{j+1}|}\mbox{\textcolor{blue}fseries B}ig)\lesssim \frac{r_j}{|z_j-z_{j+1}|}. \label{jD1}\mathbb{ E}DE For $j=j_0-1$, we have a better estimate. 
Since $D_{j_0-1}$ is separated from both $S^-_\tau$ and $[z_{j_0},\infty)$ by a disjoint pair of semi-annuli $\{z\in\mathbb{H}: r_{j_0-1}<|z-z_{j_0-1}|<|z_{j_0-1}-z_{j_0}|/2\}$ and $\{z\in\mathbb{H}: r_{j_0}<|z-z_{j_0}|<|z_{j_0-1}-z_{j_0}|/2\}$, we have
$$d_{H_\tau} (S^-_\tau,D_{j_0-1}),d_{H_\tau} ([z_{j_0},+\infty),D_{j_0-1}) \ge \frac 1\pi \log\Big( \frac{|z_{j_0-1}-z_{j_0}|}{2r_{j_0-1}}\Big)+\frac 1\pi \log \Big( \frac{|z_{j_0-1}-z_{j_0}|}{2r_{j_0}}\Big),$$
which combined with (\ref{jD},\ref{j+D}) implies that
\BGE 1\wedge \frac{\widetilde r_{j_0-1}}{|\widetilde z_{j_0-1}|\wedge |\widetilde z_{j_0-1}-\widetilde z_{j_0}|} \lesssim \frac{r_{j_0-1} }{|z_{j_0-1}-z_{j_0}| }\cdot \frac{r_{j_0} }{|z_{j_0-1}-z_{j_0}| }. \label{jD1'}\EDE
For $1\le k\le m-1$, since $E_k$ is separated from $S^+_\tau$ by $\{z\in\mathbb{H}:s_k<|z-w_k|<|w_k|\}$ in $H_\tau$, we get $d_{H_\tau} (S^+_\tau,E_k) \ge \frac 1\pi \log (\frac{|w_k |}{2s_k})$. Since $E_k$ is separated from $(-\infty,w_{k+1}]$ by $\{z\in\mathbb{H}:s_k<|z-w_k|<|w_k-w_{k+1}|\}$ in $H_\tau$, we get $ d_{H_\tau} ((-\infty, w_{k+1}],E_k) \ge \frac 1\pi \log (\frac{|w_k-w_{k+1}|}{2s_k}) $. These two lower bounds on the extremal distances, combined with (\ref{kE},\ref{k+E}), imply that
\BGE 1\wedge \frac{\widetilde s_k}{|\widetilde w_k|\wedge |\widetilde w_k-\widetilde w_{k+1}|}\lesssim \frac{s_k}{|w_k-w_{k+1}|\wedge |w_k|}.\label{kE1}\EDE
Suppose $j_0=n$. Combining (\ref{EE0},\ref{mE1}-\ref{kE1}), we get
$$\mathbb{P}[E|{\mathcal F}_\tau,E_*^\tau]\lesssim \Big(\frac{r_{j_0}}{|z_{j_0-1}-z_{j_0}|}\Big)^\alpha \Big(\frac{s_m}{|w_m |}\Big)^\alpha \cdot \prod_{j=1}^{j_0-1} \Big(\frac{r_j}{|z_j-z_{j+1}|}\Big)^\alpha \cdot \prod_{k=1}^{m-1} \Big(\frac{s_k}{|w_k-w_{k+1}|\wedge|w_k|}\Big)^\alpha,$$
which together with $\mathbb{P}[E_*^\tau]\lesssim (r_{j_0}/|z_{j_0} |)^\alpha$ and $z_{j_0+1}=\infty$ implies (\ref{ping-pong-ineq'}) for $j_0=n$. Now suppose $2\le j_0\le n-1$. Let $\mathbb{N}_{(j_0,n]}=\{j_0+1,\dots,n\}$. For $j\in\mathbb{N}_{(j_0,n]}$, let $R_j=(|z_j-z_{j-1}|\wedge |z_j-z_{j+1}|)/2$. For each $\underline k=(k_{j_0+1},\dots,k_n)\in (\mathbb{N}\cup\{0\})^{\mathbb{N}_{(j_0,n]}}$, let $S_{\underline k}=\{j\in\mathbb{N}_{(j_0,n]}:k_j\ge 1\}$, and let $E_{\underline k}$ denote the event that $\tau<\infty$ and $\dist(z_j,K_\tau)\ge R_j$, for $j\in \mathbb{N}_{(j_0,n]}\setminus S_{\underline k}$, and $R_j e^{-k_j}\le \dist(z_j,K_\tau)<R_j e^{1-k_j}$ for $j\in S_{\underline k}$. We now bound $\mathbb{P}[E_{\underline k}]$. If $S_{\underline k}=\emptyset$, we use (\ref{1pt}) to conclude that
$$\mathbb{P}[E_{\underline k}]\le \mathbb{P}[\tau^{z_{j_0}}_{r_{j_0}}<\infty]\lesssim (r_{j_0}/|z_{j_0}|)^\alpha.$$
Suppose $S_{\underline k}\ne \emptyset$. We express $S_{\underline k}=\{j_1<\cdots<j_N\}$. Let $x_s=z_{j_s}$, $0\le s\le N$.
By the definition of $E_{\underline k}$ and Lemma \ref{inner-last}, we have
\begin{align*}
&\mathbb{P}[E_{\underline k}]\le \mathbb{P}[\tau^{x_s}_{e^{1-k_{j_s}} R_{j_s}}<\tau^{x_0}_{r_{j_0}}<\infty,1\le s\le N]\\
\lesssim &\Big(\frac{r_{j_0}}{|x_1-x_0|}\Big)^\alpha \prod_{s=1}^N \Big(\frac{e^{1-k_{j_s}} R_{j_s}}{ R_{j_s}}\Big)^{2\alpha}\lesssim \Big( \frac{r_{j_0}}{|z_{j_0}-z_{j_0+1}|}\Big)^\alpha \prod_{j=j_0+1}^n e^{-2\alpha k_j}.
\end{align*}
Combining the two formulas, we conclude that, for any $\underline k \in (\mathbb{N}\cup\{0\})^{\mathbb{N}_{(j_0,n]}}$,
\BGE \mathbb{P}[E_{\underline k}]\lesssim \Big( \frac{r_{j_0}}{|z_{j_0}|\wedge |z_{j_0}-z_{j_0+1}|}\Big)^\alpha \prod_{j=j_0+1}^n e^{-2\alpha k_j}.\label{PEk}\EDE
Suppose for some $\underline k=(k_{j_0+1},\dots,k_n)$, $E_{\underline k}\cap E_*^\tau$ happens. We claim that
\BGE 1\wedge \frac{\widetilde r_j}{|\widetilde z_j-\widetilde z_{j-1}|}\lesssim \frac{r_j}{R_j e^{-k_j}},\quad j_0+1\le j\le n.\label{j-D=>}\EDE
Let $j\in\mathbb{N}_{(j_0,n]}$. First, (\ref{j-D=>}) holds trivially if $r_j\ge R_j e^{-k_j}$. Suppose that $r_j< R_j e^{-k_j}$. Then $D_j$ can be disconnected from $K_\tau$ and $(-\infty,z_{j-1}]$ in $H_\tau$ by $\{z\in\mathbb{H}:r_j<|z-z_j|<R_j e^{-k_j}\}$. By the comparison principle of extremal distance, we have
$$d_{H_\tau}(K_\tau\cup (-\infty,z_{j-1}],D_j)\ge \frac 1\pi \log\Big(\frac{R_j e^{-k_j}}{r_j}\Big),$$
which together with (\ref{j-D}) implies (\ref{j-D=>}). So the claim is proved. Combining (\ref{EE0},\ref{mE1}-\ref{j-D=>}) and that $R_j=(|z_j-z_{j-1}|\wedge |z_j-z_{j+1}|)/2$, we get
\begin{align*}
\mathbb{P}[E\cap E_{\underline k}]\lesssim& \Big(\frac{r_{j_0} }{|z_{j_0}-z_{j_0-1}| }\Big)^\alpha \Big(\frac{s_m}{|w_m|}\Big)^\alpha \cdot \prod_{k=1}^{m-1} \Big(\frac{s_k}{|w_k-w_{k+1}|\wedge |w_k|}\Big)^\alpha\cdot \prod_{j=1}^{j_0-1}\Big( \frac{r_j}{|z_j-z_{j+1}|}\Big)^\alpha \\
\cdot & \Big( \frac{r_{j_0}}{|z_{j_0}|\wedge |z_{j_0}-z_{j_0+1}|}\Big)^\alpha \prod_{j=j_0+1}^n \Big(\frac{r_j}{|z_j-z_{j-1}|\wedge |z_j-z_{j+1}|}\Big)^\alpha \cdot \prod_{j=j_0+1}^n e^{-\alpha k_j}.
\end{align*}
Summing the inequality over $\underline k\in(\mathbb{N}\cup\{0\})^{\mathbb{N}_{(j_0,n]}}$, we get (\ref{ping-pong-ineq'}).
\end{proof}
\begin{Lemma}
Let $n,m,j_0$, $z_0,\dots,\widehat z_{j_0},\dots,z_{n+1}$, and $w_0,\dots,w_{m+1}$ be as in Lemma \ref{ping-pong}. Here the symbol $\widehat z_{j_0}$ means that $z_{j_0}$ is missing in the list. Let $I$ be a compact real interval that lies strictly between $z_{j_0-1}$ and $z_{j_0+1}$. Let $L_\pm= \dist(z_{j_0\pm 1},I)>0$. Here if $j_0=n$, then $L_+=\infty$.
Let $r_1,\dots,\widehat r_{j_0},\dots,r_n$, and $s_1,\dots,s_m$ be as in Lemma \ref{ping-pong} except that we now require that $r_{j_0\pm 1}<(|z_{j_0\pm 1}-z_{j_0\pm 2}|\wedge L_\pm)/2$. Let
\begin{align*}
Q=&L_-^{-2\alpha} \prod_{j=1}^{j_0-2} |z_j-z_{j+1}|^{-\alpha} \cdot |w_m|^{-\alpha} \prod_{k=1}^{m-1} (|w_k|\wedge |w_k-w_{k+1}|)^{-\alpha} \\
&\cdot (L_+\wedge |z_{j_0+1}-z_{j_0+2}|)^{-\alpha} \prod_{j=j_0+2}^n (|z_j-z_{j+1}|\wedge |z_j-z_{j-1}|)^{-\alpha}.
\end{align*}
Here when $m=0$, the $|w_m|^{-\alpha} \prod_{k=1}^{m-1} (|w_k|\wedge |w_k-w_{k+1}|)^{-\alpha}$ disappears; and when $j_0=n$, the second line in the formula disappears. Let $h\in\big(0,(L_+\wedge L_-)/2\big)$ and $\tau^I_{h}=\tau_{I\times [0,h]}$. Then
\begin{align}
\mathbb{P}[\tau^I_{h}<\tau^{z_j}_{r_j}<\infty,j\in\mathbb{N}_n\setminus\{j_0\}; \tau^I_{h}<\tau^{w_k}_{s_k}<\infty,k\in\mathbb{N}_m]\lesssim Q h^\alpha \prod_{j\in \mathbb{N}_n\setminus \{j_0\}} r_j^\alpha \cdot \prod_{k=1}^m s_k^\alpha. \label{ping-pong-ineq-I}
\end{align}
\label{ping-pong-I}
\end{Lemma}
\begin{proof}
The proof is similar to that of Lemma \ref{ping-pong}. The only essential difference is that now we do not get an upper bound of $\mathbb{P}[\tau^I_{h}<\infty]$ using (\ref{1pt}). By symmetry we assume that the $z_j$'s are positive and the $w_k$'s are negative. Let $\tau=\tau^I_{h}$ and $z_{j_0}=\Ree \gamma(\tau)$. Then $z_{j_0}$ is ${\mathcal F}_\tau$-measurable, and $z_{j_0-1}<z_{j_0}<z_{j_0+1}$. Let $E$ denote the event in (\ref{ping-pong-ineq-I}). Then $E=E_*^\tau\cap E_\#$, where
$$E^\tau_*:=\{\tau <\tau^{z_j}_{r_j}\wedge \tau^*_{z_j},j\in\mathbb{N}_n\setminus \{j_0\};\tau <\tau^{w_k}_{s_k}\wedge \tau^*_{w_k},1\le k\le m\}\in{\mathcal F}_\tau;$$
$$E_\#:=\{ \tau^{z_j}_{r_j}<\infty,j\in\mathbb{N}_n\setminus \{j_0\}; \tau^{w_k}_{s_k}<\infty,1\le k\le m \}.$$
Suppose $E_*^\tau$ occurs. Define $\widetilde z_j,D_j,\widetilde D_j,\widetilde r_j,\widetilde w_k,E_k,\widetilde E_k,\widetilde s_k$ as in the previous proof. By DMP of chordal SLE$_\kappa$ and Proposition \ref{lower-upper}, we see that (\ref{EE0}) also holds here.
\begin{comment}
\begin{align}
\mathbb{P}[E|{\mathcal F}_\tau,E_*^\tau] \lesssim & \prod_{j=1}^{j_0-1} \Big(1\wedge \frac{\widetilde r_j}{|\widetilde z_j|\wedge |\widetilde z_j-\widetilde z_{j+1}|}\Big)^\alpha \cdot \prod_{j=j_0+1}^n \Big(1\wedge \frac{\widetilde r_j}{|\widetilde z_j-\widetilde z_{j-1}|}\Big)^\alpha\nonumber\\
\cdot& \Big(1\wedge \frac{\widetilde s_m}{|\widetilde w_m|}\Big)^\alpha\cdot \prod_{k=1}^{m-1} \Big(1\wedge \frac{\widetilde s_k}{|\widetilde w_k|\wedge |\widetilde w_k-\widetilde w_{k+1}|}\Big)^\alpha. \label{EE0-I}
\end{align}
Here we organize the points $\widetilde z_j$'s and $\widetilde w_k$'s by
$$\widetilde z_{j_0-1},\dots,\widetilde z_1,\widetilde z_{j_0+1},\dots,\widetilde z_{n},\widetilde w_m,\dots,\widetilde w_1,$$
and use the inequalities $|\widetilde z_{j_0-1}|\ge |\widetilde z_{j_0-1}|\wedge |\widetilde z_{j_0-1}-\widetilde z_{j_0}|$ and $|\widetilde z_{j_0+1}-\widetilde z_{j_0-1}|\ge |\widetilde z_{j_0+1}-\widetilde z_{j_0}|$.
\end{comment} The estimates (\textcolor{red}ef{mE1},\textcolor{red}ef{jD1},\textcolor{red}ef{kE1}) still hold here by the same extremal length argument. Estimate (\textcolor{red}ef{jD1'}) should be replaced by \mbox{\textcolor{blue}fseries B}GE 1\wedge \frac{\widetilde r_{j_0-1}}{|\widetilde z_{j_0-1}|\wedge |\widetilde z_{j_0-1}-\widetilde z_{j_0}|} \lesssim \frac{r_{j_0-1} h }{|z_{j_0-1}-z_{j_0}|^2 } \le \frac{r_{j_0-1} h }{L_-^2 } . \label{jD1'-I}\mathbb{ E}DE When $j_0=n$, combining (\textcolor{red}ef{mE1},\textcolor{red}ef{jD1},\textcolor{red}ef{kE1},\textcolor{red}ef{jD1'-I}) with (\textcolor{red}ef{EE0}), we get \textcolor{blue}egin{align*} \mathbb{P}[E_\#|{\mathcal F}_\tau,E_*^\tau]\lesssim & \mbox{\textcolor{blue}fseries B}ig(\frac{h r_{j_0-1}}{L_-^2}\mbox{\textcolor{blue}fseries B}ig)^\alpha \mbox{\textcolor{blue}fseries B}ig(\frac{s_m}{|w_m |}\mbox{\textcolor{blue}fseries B}ig)^\alpha \cdot \prod_{j=1}^{j_0-2} \mbox{\textcolor{blue}fseries B}ig(\frac{r_j}{|z_j-z_{j+1}|}\mbox{\textcolor{blue}fseries B}ig)^\alpha \cdot \prod_{k=1}^{m-1} \mbox{\textcolor{blue}fseries B}ig(\frac{s_k}{|w_k-w_{k+1}|\wedge|w_k|}\mbox{\textcolor{blue}fseries B}ig)^\alpha \end{align*} Taking expectation, we then get (\textcolor{red}ef{ping-pong-ineq-I}) in the case $j_0=n$. Suppose $2\le j_0\le n-1$. Let $R_j$, $j_0+2\le j\le n$, be as in the proof of Lemma \textcolor{red}ef{ping-pong}. We redefine $R_{j_0+1}=(|z_{j_0+1}-z_{j_0+2}|\wedge L_+)/2$. For each $\underline k=(k_{j_0+1},\dots,k_n)\in (\mathbb{N}\cup\{0\})^{\mathbb{N}_{(j_0,n]}}$, let $E_{\underline k}$ be defined as in the proof of Lemma \textcolor{red}ef{ping-pong} using the $R_j$, $j_0+1\le j\le n$ defined here. By Remark \textcolor{red}ef{after-inner-last}, \mbox{\textcolor{blue}fseries B}GE \mathbb{P}[E_{\underline k}]\lesssim \prod_{j=j_0+1}^n e^{-2\alpha k_j}.\label{PEk-after}\mathbb{ E}DE This inequality holds no matter whether all $k_j$'s are zero or not. On the event $E_{\underline k}\cap E_*$, the same extremal length argument shows that \mbox{\textcolor{blue}fseries B}GE 1\wedge \frac{\widetilde r_j}{|\widetilde z_j-\widetilde z_{j-1}|}\lesssim \frac{r_j}{R_j e^{-k_j}},\quad j_0+1\le j\le n.\label{j-D=>-after}\mathbb{ E}DE Combining (\textcolor{red}ef{EE0},\textcolor{red}ef{mE1},\textcolor{red}ef{jD1},\textcolor{red}ef{kE1},\textcolor{red}ef{jD1'-I}) with (\textcolor{red}ef{PEk-after},\textcolor{red}ef{j-D=>-after}) we get \textcolor{blue}egin{align*} \mathbb{P}[E\cap E_{\underline k}]\lesssim & \mbox{\textcolor{blue}fseries B}ig(\frac{h r_{j_0-1}}{L_-^2}\mbox{\textcolor{blue}fseries B}ig)^\alpha \prod_{j=1}^{j_0-2} \mbox{\textcolor{blue}fseries B}ig(\frac{r_j}{|z_j-z_{j+1}|}\mbox{\textcolor{blue}fseries B}ig)^\alpha \cdot \prod_{j=j_0+1}^n \mbox{\textcolor{blue}fseries B}ig( \frac{r_j}{R_j }\mbox{\textcolor{blue}fseries B}ig)^\alpha \\ \cdot & \mbox{\textcolor{blue}fseries B}ig(\frac{s_m}{|w_m |}\mbox{\textcolor{blue}fseries B}ig)^\alpha \cdot \prod_{k=1}^{m-1} \mbox{\textcolor{blue}fseries B}ig(\frac{s_k}{|w_k-w_{k+1}|\wedge|w_k|}\mbox{\textcolor{blue}fseries B}ig)^\alpha \cdot \prod_{j=j_0+1}^n e^{-\alpha k_j} \end{align*} Summing up the above inequality over $\underline k\in (\mathbb{N}\cup\{0\})^{\mathbb{N}_{(j_0,n]}}$, we get (\textcolor{red}ef{ping-pong-ineq-I}) for $j_0<n$. \end{proof} \textcolor{blue}egin{Definition} Recall the $\Sigma_n$, $n\in\mathbb{N}$, defined in Theorem \textcolor{red}ef{Main-thm}. 
For $\underline z\in \Sigma_n$ and $j_0\in\mathbb{N}_n$, we say that $z_{j_0}$ is an innermost component of $\underline z$ if there is no $k\in\mathbb{N}_n\setminus \{j_0\}$ such that $z_k$ lies strictly between $0$ and $z_{j_0}$. An element $\underline z\in\Sigma_n$ may have one or two innermost components. For $\underline z=(z_1,\dots,z_n)\in \Sigma_n$, we define the inner distance of $\underline z$ by $d(\underline z):=\min\{|z_j-z_k|:0\le j<k\le n\}$, where $z_0:=0$.
\label{Definition-Sigma}
\end{Definition}
\begin{Lemma}
Let $\underline z^*=(z_1^*,\dots,z_n^*)\in\Sigma_n$. Suppose that $z_1^*$ is an innermost component of $\underline z^*$. Then for any $\varepsilon>0$, there are $\delta\in(0,d(\underline z^*)/3]$ and an $\mathbb{H}$-hull $H$ (depending on $\underline z^*$ and $\varepsilon$) such that
\begin{itemize}
\item $\{z\in\mathbb{H}:|z-z_1^*|\le 3\delta\}\subset H$;
\item $\dist(z_j^*,H)\ge 3\delta$, $2\le j\le n$; and
\item if $\underline z\in\Sigma_n$ and $\underline r\in (0,\infty)^n$ satisfy $\Vert \underline z-\underline z^*\Vert_\infty\le \delta$ and $\Vert \underline r\Vert_\infty\le \delta$, then
\BGE \mathbb{P}[K_{\tau^{z_1}_{r_1}} \not\subset H;\tau^{z_1}_{r_1}<\tau^{z_j}_{r_j} <\infty,2\le j\le n]<\varepsilon \prod_{j=1}^n r_j^\alpha .\label{compact-lem-ineq}\EDE
\end{itemize}
\label{compact-lem}
\end{Lemma}
\begin{proof}
For $\underline r=(r_1,\dots,r_n)\in(0,\infty)^n$, let $P(\underline r)=\prod_{j=1}^n r_j^\alpha$. Fix a chordal SLE$_\kappa$ curve $\gamma$ in $\mathbb{H}$ from $0$ to $\infty$. For $\underline z=(z_1,\dots,z_n)\in\Sigma_n$, $\underline r=(r_1,\dots,r_n)\in(0,\infty)^n$, and $S\subset\mathbb{H}$, let
$$E^{\underline z}_{\underline r;S}=\{\tau^{z_1}_{r_1}<\tau^{z_j}_{r_j} <\infty,2\le j\le n;K_{\tau^{z_1}_{r_1}}\cap S\ne \emptyset \} .$$
Then (\ref{compact-lem-ineq}) can be rewritten as $\mathbb{P}[ E^{\underline z}_{\underline r;\mathbb{H}\setminus H}]< \varepsilon P(\underline r)$. By Lemma \ref{Lemma-PR}, there is a positive continuous function $F_\infty$ on $\Sigma_n$ such that, for any $\underline z\in\Sigma_n$ and any $\underline r\in(0,\infty)^n$,
\BGE \mathbb{P}[E^{\underline z}_{\underline r; \{z\in\mathbb{H}:|z|\ge R\}}]\le F_\infty(\underline z)R^{-\alpha} P(\underline r),\quad \mbox{if }\Vert \underline r\Vert_\infty<d(\underline z)/2\mbox{ and } R\ge 2\max\{|z_k|\}.\label{R-infty}\EDE
By Proposition \ref{RZ-Thm3.1}, for any $2\le k\le n$, there are a constant $\beta>0$ and a positive continuous function $F_k$ on $\Sigma_n$ such that, for any $\underline z\in\Sigma_n$, $\underline r\in(0,\infty)^n$, and $r>0$,
\BGE \mathbb{P}[E^{\underline z}_{\underline r; \{z\in\mathbb{H}:|z-z_k|\le r\}}]\le F_k(\underline z) r^\beta P(\underline r),\quad \mbox{if }\Vert \underline r\Vert_\infty<d(\underline z)/8.\label{rk}\EDE
Note that, if $\Vert \underline z-\underline z^*\Vert_\infty\le d(\underline z^*)/4$, then $d(\underline z)\ge d(\underline z^*)/2$ and $\max\{|z_j|\}\le 2 \max\{|z_j^*|\}$.
By (\ref{R-infty},\ref{rk}) and the continuity of $F_\infty$ and $F_k$, $2\le k\le n$, there are $R>4\max\{|z_k^*|\}$ and $r\in (0,d(\underline z^*)/3)$ such that if $\Vert \underline z-\underline z^*\Vert_\infty\le d(\underline z^*)/4$, and $\Vert \underline r\Vert_\infty<d(\underline z^*)/16$, then
$$ \mathbb{P}[E^{\underline z}_{\underline r; \{z\in\mathbb{H}:|z|\ge R\}\cup \bigcup_{k=2}^n\{z\in\mathbb{H}:|z-z_k|\le r\} }] <\frac \varepsilon 2 P(\underline r). $$
We further assume that $\Vert \underline z-\underline z^*\Vert_\infty\le r/2$. Then $\{z\in\mathbb{H}:|z-z_k^*|\le r/2\}\subset \{z\in\mathbb{H}:|z-z_k|\le r\}$ for $2\le k\le n$, which, by the above formula, implies that
\BGE \mathbb{P}[E^{\underline z}_{\underline r; \{z\in\mathbb{H}:|z|\ge R\}\cup \bigcup_{k=2}^n\{z\in\mathbb{H}:|z-z_k^*|\le r/2\} }] <\frac \varepsilon 2 P(\underline r),\quad \mbox{if }\Vert \underline r\Vert_\infty<d(\underline z^*)/16. \label{Rinftyrk}\EDE
Since $R>2\max\{|z_k|\}$ and $r<d(\underline z^*)/3$, the semi-discs $\{z\in\mathbb{H}:|z-z_j^*|\le r\}$, $1\le j\le n$, are mutually disjoint, and are all contained in the semi-disc $\{z\in\mathbb{H}:|z|\le R\}$. By symmetry, we assume that $z_1^*>0$. We relabel the components of $\underline z^*$ by $z_j^*$, $1\le j\le n'$, and $w_k^*$, $1\le k\le m'$, where $n'\ge 1$, $m'\ge 0$, and $n'+m'=n$, such that $w_{m'}^*<\cdots<w_1^*<0<z_1^*<\dots<z_{n'}^*$. After relabeling, the symbol $z_1^*$ still refers to the same point. Correspondingly, we relabel the components of every $\underline z\in\Sigma_n$ and $\underline r\in(0,\infty)^n$ by $z_j$, $1\le j\le n'$, $w_k$, $1\le k\le m'$, $r_j$, $1\le j\le n'$, and $s_k$, $1\le k\le m'$. It is clear that, if $\Vert \underline z-\underline z^*\Vert_\infty<d(\underline z^*)/2$, then $w_{m'}<\cdots<w_1<0<z_1<\dots<z_{n'}$, and so $z_1$ is an innermost component of $\underline z$. Define compact intervals $I_j$, $2\le j\le n'$, and $J_k$, $1\le k\le m'$, as follows. If $n'=1$, we do not define $I_j$'s. If $n'\ge 2$, let $I_{n'}=[z_{n'}^*+r/2,R ]$, and $I_j=[z_j^*+r/2,z_{j+1}^*-r/2]$, $2\le j\le n'-1$. If $m'=0$, we do not define $J_k$'s. If $m'\ge 1$, let $J_{m'}=[-R ,w_{m'}^*-r/2]$, and $J_k=[w_{k+1}^*+r/2,w_k^*-r/2]$, $1\le k\le m'-1$. If $\Vert \underline z-\underline z^*\Vert_\infty\le r/4$, then the distance from every component of $\underline z$ to every interval $I_j$ or $J_k$ is at least $r/4$.
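The construction of the intervals $I_j$ and $J_k$ and the distance claim above can be illustrated by the following minimal Python sketch; the numerical configuration is hypothetical and not taken from the paper.
\begin{verbatim}
# Hypothetical configuration: n' = 3 positive and m' = 2 negative marked points.
zs = [1.0, 3.0, 6.0]          # z_1^* < z_2^* < z_3^*
ws = [-2.0, -5.0]             # w_1^* > w_2^*
r, R = 0.4, 25.0

I = {j: (zs[j - 1] + r / 2, zs[j] - r / 2) for j in range(2, len(zs))}
I[len(zs)] = (zs[-1] + r / 2, R)
J = {k: (ws[k] + r / 2, ws[k - 1] - r / 2) for k in range(1, len(ws))}
J[len(ws)] = (-R, ws[-1] - r / 2)

def dist(x, iv):
    a, b = iv
    return max(a - x, x - b, 0.0)

# Perturb each marked point by at most r/4 and check the distance claim.
pts = [x + s for x in zs + ws for s in (-r / 4, r / 4)]
ivs = list(I.values()) + list(J.values())
assert all(dist(x, iv) >= r / 4 - 1e-12 for x in pts for iv in ivs)
\end{verbatim}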
By Lemma \textcolor{red}ef{ping-pong-I}, there are continuous functions $F_{I_j}$, $2\le j\le n'$, and $F_{J_k}$, $1\le k\le m'$, defined on the set of $\underline z\in\Sigma_n$ with $\Vert \underline z-\underline z^*\Vert_\infty\le r/4$, such that, if $\Vert \underline z-\underline z^*\Vert_\infty\le r/4$, $\Vert \underline r\Vert_\infty< r/8$, and $h<r/8$, then for each $2\le j\le n'$ and $1\le k\le m'$, $$ \mathbb{P}[E^{\underline z}_{\underline r; I_j\times [0,h] }]< F_{I_j}(\underline z) h^\alpha P(\underline r),\quad \mathbb{P}[E^{\underline z}_{\underline r; J_k\times [0,h] }]< F_{J_k}(\underline z) h^\alpha P(\underline r).$$ Thus, there is $h>0$ such that, if $\Vert \underline z-\underline z^*\Vert_\infty\le r/4$ and $\Vert \underline r\Vert_\infty\le r/8$, then \mbox{\textcolor{blue}fseries B}GE \mathbb{P}[E^{\underline z}_{\underline r; I_j\times [0,h] }],\mathbb{P}[E^{\underline z}_{\underline r; J_k\times [0,h] }] <\frac \varepsilon {2n} P(\underline r),\quad 2\le j\le n',\quad 1\le k\le m'. \label{IJh}\mathbb{ E}DE Let $$H=\{z\in\mathbb{H}:|z|\le R\}\setminus \textcolor{blue}igcup_{j=2}^{n'}(\{|z-z_j^*|\le r\}\cup I_j\times [0,h])\setminus \textcolor{blue}igcup_{k=1}^{m'} (\{|z-w_k^*|\le r\}\cup J_k\times [0,h]).$$ See Figure \textcolor{red}ef{H}. Then $H$ is an $\mathbb{H}$-hull, which contains $\{z\in\mathbb{H}:|z-z_1^*|\le r\}$, and the distance from each of $z_2^*,\dots,z_{n'}^*$ and $w_1^*,\dots,w_{m'}^*$ to $H$ is at least $r$. Combining (\textcolor{red}ef{Rinftyrk}) and (\textcolor{red}ef{IJh}), we get $\mathbb{P}[ E^{\underline z}_{\underline r;\mathbb{H}\setminus H}]< \varepsilon P(\underline r)$ if $\Vert \underline z-\underline z^*\Vert_\infty\le r/4$, and $\Vert \underline r\Vert_\infty\le r/16$. So we find that (\textcolor{red}ef{compact-lem-ineq}) holds for such $H$ and $\deltata:=r/16$. \end{proof} \textcolor{blue}egin{figure} \centering \includegraphics[width=1\textwidth]{H.png} \caption[A figure for the proof of Lemma \textcolor{red}ef{compact-lem}]{\textbf{A figure for the proof of Lemma \textcolor{red}ef{compact-lem}.} This figure illustrates the construction of the $\mathbb{H}$-hull $H$ in the proof of Lemma \textcolor{red}ef{compact-lem} in the case that $n'=3$ and $m'=2$. The $\mathbb{H}$-hull $H$ (the shaded region) is obtained by removing small discs of radius $r$ centered at $z_2,z_3,w_1,w_2$ and $4$ rectangles with real interval bases and height $h$ from the big semi-disc $\{z\in\mathbb{H}:|z|\le R\}$. } \label{H} \end{figure} \section{Proof of the Main Theorem}\label{Chap-main-thm} We will finish the proof of Theorem \textcolor{red}ef{Main-thm} in this section. Recall that $\mathbb{P}$ denotes the law of a chordal SLE$_\kappa$ curve in $\mathbb{H}$ from $0$ to $\infty$; and for $z\in\mathbb{R}\setminus\{0\}$ and $r>0$, $\mathbb{P}_z^*$ denotes the law of a two-sided chordal SLE$_\kappa$ curve in $\mathbb{H}$ from $0$ to $\infty$ passing through $z$, and $\mathbb{P}_z^r$ denotes the conditional law $\mathbb{P}[\cdot|\tau^z_r<\infty]$. We will use an induction on $n$. By (\textcolor{red}ef{G(z)-approx}), Theorem \textcolor{red}ef{Main-thm} holds for $n=1$. Let $n\textcolor{green}e 2$. We make the induction hypothesis that Theorem \textcolor{red}ef{Main-thm} holds for $n-1$. 
For any $\underline w=(w_1,\dots,w_{n-1})\in \Sigma_{n-1}$ (Definition \ref{Definition-Sigma}) and $\underline s=(s_1,\dots,s_{n-1})\in (0,\infty)^{n-1}$, we define
\BGE G(\underline w,\underline s)=\mathbb{P}[\tau^{w_j}_{s_j}<\infty,1\le j\le n-1].\label{Gws}\EDE
By the induction hypothesis, $\lim_{s_1,\dots,s_{n-1}\to 0^+} \prod_{j=1}^{n-1} s_j ^{-\alpha} G(\underline w,\underline s)=G(\underline w)$. Given a chordal Loewner curve $\gamma$ with the corresponding centered Loewner maps $Z_t$'s, we define a family of functions $G^\gamma_t$, $t\ge 0$, on $\Sigma_{n-1}$ associated with $\gamma$ by
\BGE G_t^\gamma(z_2,\dots,z_{n})=\left\{\begin{array}{ll} \prod_{j=2}^{n} |Z_t'(z_j)|^{\alpha} G(Z_t(z_2) ,\dots,Z_t(z_{n})),&\mbox{if }t<\tau^*_{z_j}, 2\le j\le n;\\ 0,&\mbox{otherwise.} \end{array} \right. \label{vvGt} \EDE
When $\gamma$ is a random Loewner curve, the $G_t^\gamma$ are random functions. We use $\EE_{z_1}^*[G_{T_{z_1}}(\cdot)]$ to denote the expectation of $G^\gamma_t(\cdot)$ when $\gamma$ follows the law $\mathbb{P}^*_{z_1}$ and $t=T_{z_1}$. Following the approach in \cite{LW}, we will prove that for any $1\le j_0\le n$ and $\underline z=(z_1,\dots,z_n)\in \Sigma_n$, the following limit exists and is finite:
\BGE G^{j_0}(\underline z):=\lim_{r_1,\dots,r_n\to 0^+} \prod_{j=1}^n r_j^{-\alpha} \mathbb{P}[\tau^{z_{j_0}}_{r_{j_0}}<\tau^{z_k}_{r_k}<\infty,\forall k\in\mathbb{N}_n\setminus\{j_0\}].\label{ordered}\EDE
It is clear that if the above limit exists and is finite for any $1\le j_0\le n$, then the same is true for the limit in (\ref{n-pt Green}), and we have
\BGE G(\underline z)=\sum_{j=1}^n G^j(\underline z).\label{G-sum}\EDE
In this section we will prove the following theorem.
\begin{Theorem}
Given the induction hypothesis, for any $1\le j_0\le n$, the limit in (\ref{ordered}) converges uniformly on any compact subset of $\Sigma_n$, and the limit function $G^{j_0}$ is continuous on $\Sigma_n$. Moreover, we have
\BGE G^{j_0} (z_1,\dots,z_n)=G(z_{j_0})\EE_{z_{j_0}}^*[ G_{T_{z_{j_0}}}(z_1,\dots,\widehat z_{j_0},\dots,z_n)],\label{induction}\EDE
where the symbol $\widehat z_{j_0}$ means that $z_{j_0}$ is omitted in the list from $z_1$ to $z_n$.
\label{main}
\end{Theorem}
It is clear that all statements of Theorem \ref{Main-thm} in the induction step except for $G\asymp F$ follow from Theorem \ref{main} and (\ref{G-sum}). Once we have the existence of $G$ on $\Sigma_n$, the statement $G\asymp F$ follows immediately from Proposition \ref{lower-upper} by sending $r_1,\dots,r_n$ to $0^+$. After proving Theorem \ref{main}, we get a local martingale related to the Green's function.
\begin{Corollary}
\label{martingale}
For any fixed $\underline z=(z_1,\dots,z_n)\in \Sigma_n$, the process $t\mapsto G^\gamma_t(\underline z)$ associated with a chordal SLE$_\kappa$ curve $\gamma$ in $\mathbb{H}$ from $0$ to $\infty$ is a local martingale up to $\tau:=\min\{\tau^*_{z_j},1\le j\le n\}$.
\end{Corollary} \textcolor{blue}egin{proof} Fix $\underline z=(z_1,\dots,z_n)\in \Sigma_n$ and let $M_t= G_t^\textcolor{green}amma(\underline z)$. It suffices to prove that for any $\mathbb{H}$-hull $K$, whose closure does not contain any of $z_1,\dots,z_n$, $M_{\cdot\wedge T_K}$ is a martingale, where $T_K:=\inf\{t>0:\textcolor{green}amma[0,t]\noindentt\subset \overline K\}$. The reason is that $\tau$ is the supremum of all such $T_K$. To prove that $M_{\cdot\wedge T_K}$ is a martingale, we pick a small $r>0$, and consider the martingale $$M^{(r)}_t:=r^{-n\alpha }\mathbb{P}[\tau^{z_j}_r< \infty,1\le j\le n|{\mathcal F}_{t}].$$ By Theorem \textcolor{red}ef{main}, DMP of chordal SLE and Koebe's distortion theorem, we have $M^{(r)}_t\to M_{t }$ on $[0,\tau)$ as $r\to 0^+$. We claim that the convergence is uniform on $[0,T_K]$. To see this, we apply Proposition \textcolor{red}ef{compact-prop} to conclude that there exist an $\mathbb{H}$-hull $H$ and $a<b\in(0,\infty)$ such that $(Z_t(z_1),\dots,Z_t(z_n))\in H$ and $a\le |Z_t'(z_j)|\le b$, $1\le j\le n$, for any $t\in[0,T_K]$. So we get the uniform convergence of $M^{(r)}_t\to M_{t }$ over $[0,T_K]$ by the uniform convergence of the $n$-point Green's function on the $\mathbb{H}$-hull in $H$. So the claim is proved, which then implies that $M_{\cdot\wedge T_K}$ is a martingale, as desired. \end{proof} \textcolor{blue}egin{Remark} We may write $M_t=\prod_{j=1}^n|g_t'(z_j)|^{\alpha} G(g_t(z_1)-U_t,\dots,g_t(z_n)-U_t)$. If we know that $ G$ is $C^2$, then using It\^o's formula and Loewner's equation (\textcolor{red}ef{chordal}), one can easily get the following second order PDE for $ G$: $$\frac{\kappa}{2}\mbox{\textcolor{blue}fseries B}ig(\sum_{j=1}^n \partial_{z_j}\mbox{\textcolor{blue}fseries B}ig)^2 G +\sum_{j=1}^n \partial_{z_j}\frac{2}{z_j }\cdot G +\alpha \sum_{j=1}^n \frac{-2 }{z_j^2}\cdot G=0.$$ Since the PDE does not depend on the order of points, it is also satisfied by the unordered Green's function $G$. We expect that the smoothness of $ G$ can be proved by H\"ormander's theorem because the differential operator in the above displayed formula satisfies H\"ormander's condition. \end{Remark} The rest part of the paper is devoted to the proof of Theorem \textcolor{red}ef{main}. By symmetry it suffices to work on the case $j_0=1$. We will first prove in Section \textcolor{red}ef{proof1} the existence of $G^1$ as well as the uniform convergence on compact subsets of $\Sigma_n$, and then prove in Section \textcolor{red}ef{proof2} the continuity of $G^1$. \textcolor{blue}egin{comment} Fix $\underline z=(z_1,\dots,z_n)\in\Sigma_n$. Let $l_j$, $R_j$, $1\le j\le n$, and $F(z_1,\dots,z_n)$ be as defined in (\textcolor{red}ef{lR},\textcolor{red}ef{F}). When a chordal Loewner curve $\textcolor{green}amma$ is fixed in the context, we write $F_t$ for the $F_{(H_t;\textcolor{green}amma(t),\infty)}$ defined by (\textcolor{red}ef{FD}). Since $Z_t$ maps $\mathbb{H}\setminus K_t$ conformally onto $\mathbb{H}$ and sends $\textcolor{green}amma(t)$ and $\infty$ respectively to $0$ and $\infty$, we have \mbox{\textcolor{blue}fseries B}GE F_t(z_2,\dots,z_n)=\prod_{j=1}^n |Z_t'(z_j)|^{\alpha} F(Z_t(z_2) ,\dots,Z_t(z_n)),\label{vvFt}\mathbb{ E}DE when $z_2,\dots,z_n$ are distinct points in $\mathbb{R}\setminus \overline{K_t}$. One may compare it with the $\vv G_t$ defined by (\textcolor{red}ef{vvGt}). From the induction hypothesis and Proposition \textcolor{red}ef{lower-upper}, we have that $\vv G\le G\asymp F$ holds for $n-1$ points. 
Thus, Lemma \textcolor{red}ef{FF-F} holds with $K_t$ in place of $K$, $G(z_1)$ in place of $|z_1|^{-\alpha}$, and $\vv G_t$ in place of $F_{(\mathbb{H}\setminus K;w_0,w_\infty)}$. \textcolor{blue}egin{Lemma} There is some constant $\textcolor{blue}eta>0$ depending only on $\kappa$ and $n$ such that for any $k_0\in\{2,\dots,n\}$ and $s_{k_0}\textcolor{green}e 0$, \textcolor{blue}egin{align*} & G(z_1)E^*_{z_1}[\vv G_{T_{z_1}}(z_2,\dots,z_{n}){\mathbf 1}\{\dist(z_{k_0},\textcolor{green}amma[0,T_{z_1}])\leq s_{k_0} \}]\\ \lesssim & F(z_1,\dots,z_n) \mbox{\textcolor{blue}fseries B}ig(\frac{s_{k_0}}{|z_{k_0}-z_1|\wedge |z_{k_0}|}\mbox{\textcolor{blue}fseries B}ig)^{\textcolor{blue}eta}. \end{align*} \label{RZ-Thm3.1-lim} \end{Lemma} \textcolor{blue}egin{proof} This lemma essentially follows from Proposition \textcolor{red}ef{RZ-Thm3.1} and the existence of $(n-1)$-point Green's function. Below are the details. Let $r_j\in(0,R_j/8)$, $1\le j\le n$. From Propositions \textcolor{red}ef{lower-upper} and \textcolor{red}ef{RZ-Thm3.1}, there is a constant $\textcolor{blue}eta>0$ such that \textcolor{blue}egin{align*} & \mathbb{P}[\tau^{z_1}_{r_1}<\infty]\cdot\mathbb{ E}E[{\mathbf 1}\{\dist(z_{k_0},\textcolor{green}amma[0,\tau_{r_1}^{z_1}])\leq s_{k_0}\}\mathbb{P}[\tau^{z_1}_{r_1}<\infty,1\le j\le n|{\mathcal F}_{\tau^{z_1}_{r_1}},\tau^{z_1}_{r_1}<\infty]]\\ \lesssim & F(z_1,\dots,z_{n} ) \mbox{\textcolor{blue}fseries B}ig(\frac{s_{k_0}}{|z_{k_0}-z_1|\wedge |z_{k_0}|}\mbox{\textcolor{blue}fseries B}ig)^{\textcolor{blue}eta}\prod_{j=1}^n r_j^\alpha. \end{align*} Using the convergence of $(n-1)$-point Green's function and applying Fatou's lemma by sending $r_2,\dots,r_n$ to $0^+$, we get \textcolor{blue}egin{align*} &\mathbb{P}[\tau^{z_1}_{r_1}<\infty]\cdot\mathbb{ E}E[{\mathbf 1}\{\dist(z_{k_0},\textcolor{green}amma[0,\tau^{z_1}_{r_1}])\leq s_{k_0}\} \vv G_{\tau^{z_1}_{r_1}}(z_2,\dots,z_n)|\tau^{z_1}_{r_1}<\infty] \\ \lesssim & F(z_1,\dots,z_{n} ) r_1^\alpha \mbox{\textcolor{blue}fseries B}ig(\frac{s_{k_0}}{|z_{k_0}-z_1|\wedge |z_{k_0}|}\mbox{\textcolor{blue}fseries B}ig)^{\textcolor{blue}eta}, \end{align*} which together with Proposition \textcolor{red}ef{RN<1} implies that \textcolor{blue}egin{align*} &\mathbb{P}[\tau^{z_1}_{r_1}<\infty]\cdot \mathbb{ E}E_{z_1}^*[{\mathbf 1}\{\dist(z_{k_0},\textcolor{green}amma[0,\tau_{r_1}^{z_1}])\leq s_{k_0}\} \vv G_{\tau^{z_1}_{r_1}}(z_2,\dots,z_n)] \\ \lesssim & F(z_1,\dots,z_{n} ) r_1^\alpha \mbox{\textcolor{blue}fseries B}ig(\frac{s_{k_0}}{|z_{k_0}-z_1|\wedge |z_{k_0}|}\mbox{\textcolor{blue}fseries B}ig)^{\textcolor{blue}eta}. \end{align*} By the continuity two-sided chordal SLE and the continuity of $(n-1)$-point Green's function, we see that, under the law $\mathbb{P}_{z_1}^*$, as $r_1\to 0$, $\dist(z_{k_0},\textcolor{green}amma[0,\tau_{r_1}^{z_1}])\to \dist(z_{k_0},\textcolor{green}amma[0,T_{z_1}])$ and $\vv G_{\tau^{z_1}_{r_1}}(z_2,\dots,z_n)\to \vv G_{T_{z_1}}(z_2,\dots,z_n)$. Since $\lim_{r_1\to 0} r_1^{-\alpha} \mathbb{P}[\tau^{z_1}_{r_1}<\infty]=G(z_1)$, applying Fatou's lemma again by sending $r_1$ to $ 0^+$, we get the conclusion. \end{proof} \end{comment} \subsection{Existence}\label{proof1} In this subsection, we work on the inductive step to prove the existence of the limit in (\textcolor{red}ef{ordered}) with $j_0=1$. We now define $G^1$ on $\Sigma_n$ using (\textcolor{red}ef{induction}) instead of (\textcolor{red}ef{ordered}). \textcolor{blue}egin{comment} For the $\vv G$ defined in this way, we have the following upper bound. 
\textcolor{blue}egin{Lemma} We have $\vv G(\underline z)\lesssim F(\underline z)$ on $\Sigma_n$. \end{Lemma} \textcolor{blue}egin{proof} Let $\underline z=(z_1,\dots,z_n)\in\Sigma_n$. Let $r_1,\dots,r_n>0$. Let $E=\{\tau^{z_1}_{r_1}<\cdots<\tau^{z_n}_{r_n}<\infty\}$, $E'=\{\tau^{z_2}_{r_2}<\cdots<\tau^{z_n}_{r_n}<\infty$, and $\underline z'=(z_2,\dots,z_n)$. By Proposition \textcolor{red}ef{lower-upper}, $$F(\underline z)\textcolor{green}trsim \prod_{k=1}^n r_k^{-\alpha} \mathbb{P}[E] =\prod_{k=1}^n r_k^{-\alpha} \mathbb{ E}E[\mathbb{P}[E|{\mathcal F}_{\tau^{z_1}_{r_1}}]=\prod_{k=1}^n r_k^{-\alpha} \mathbb{ E}E[{\mathbf 1}_{\{\tau^{z_1}_{r_1}<\tau^{z_2}_{r_2}\}} \mathbb{P}[E'|{\mathcal F}_{\tau^{z_1}_{r_1}}]].$$ Sending $r_2,\dots,r_n$ to $0^+$, using the convergence of $(n-1)$-point ordered Green's function and applying Fatou's lemma, we get $$F(\underline z)\textcolor{green}trsim r_1^{-\alpha} \mathbb{ E}E[{\mathbf 1}_{\{\tau^{z_1}_{r_1}<\infty\}}G_{\tau^{z_1}_{r_1}}(\underline z')]=r_1^{-\alpha} \mathbb{P}[\tau^{z_1}_{r_1}<\infty] \cdot \mathbb{ E}E^{r_1}_{z_1}[\vv G_{\tau^{z_1}_{r_1}}(\underline z')].$$ By Proposition \textcolor{red}ef{RN<1}, we get $F(\underline z)\textcolor{green}trsim r_1^{-\alpha} \mathbb{P}[\tau^{z_1}_{r_1}<\infty] \mathbb{ E}E^{*}_{z_1}[\vv G_{\tau^{z_1}_{r_1}}(\underline z')]$. Sending $r_1$ to $0^+$, we get $r_1^{-\alpha} \mathbb{P}[\tau^{z_1}_{r_1}<\infty]\to G(z_1)$ by the convergence of $1$-point Green's function, and $\vv G_{\tau^{z_1}_{r_1}}(\underline z')\to \vv G_{T_{z_1}}(\underline z')$ by the continuity of the two-sided chordal SLE$_\kappa$ curve. Applying Fatou's lemma again, we get $F(\underline z)\textcolor{green}trsim \vv G(\underline z)$. \end{proof} \end{comment} In order to prove that the limit in (\textcolor{red}ef{ordered}) converges uniformly on each compact subset of $\Sigma_n$, it suffices to show that, for any $\underline z^*=(z_1^*,\dots,z_n^*)\in\Sigma_n$ and $\varepsilon>0$, there exists $\deltata>0$ such that if $\underline z=(z_1,\dots,z_n)\in \Sigma_n$ and $\underline r=(r_1,\dots,r_n)\in(0,\infty)^n$ satisfy that $\Vert \underline z-\underline z^*\Vert_\infty<\deltata$ and $\Vert \underline r\Vert_\infty<\deltata$, then \mbox{\textcolor{blue}fseries B}GE |\prod_{j=1}^n r_j^{-\alpha} \mathbb{P}[\tau^{z_1}_{r_1}< \tau^{z_j}_{r_j}<\infty,2\le j\le n]- G^1(z_1,\dots,z_n)|<\varepsilon. \label{local-uniform}\mathbb{ E}DE Fix $\underline {z}^*=(z_1^*,\dots,z_n^*)\in\Sigma_n$ and $\varepsilon>0$. Recall Definition \textcolor{red}ef{Definition-Sigma}. Let $d^*=d(\underline z^*)$. Let $\underline z=(z_1,\dots,z_n)\in\Sigma_n$ satisfy $\Vert \underline z-\underline z^*\Vert_\infty< d^*/2$. First suppose $z_1^*$ is not an innermost component of $\underline z^*$. Then $z_1$ is not an innermost component of $\underline z$. Then there is $k_0\in\{2,\dots,n\}$ such that $z_{k_0}$ lies strictly between $0$ and $z_1$. Under the law $\mathbb{P}_{z_1}^*$, we have $\tau^*_{z_{k_0}}\le T_{z_1}$, and so $ G_{T_{z_1}}(z_2,\dots,z_n)=0$, which implies that $G^1(\underline z)=0$. On the other hand, by Lemma \textcolor{red}ef{ping-pong}, $$\lim_{r_1,\dots,r_n\to 0^+} \prod_{j=1}^n r_j^{-\alpha} \mathbb{P}[\tau^{z_1}_{r_1}<\tau^{z_k}_{r_k}<\infty,2\le k\le n]=0,$$ and the convergence is uniform in some neighborhood of $\underline z^*$. So we have (\textcolor{red}ef{local-uniform}) if $z_1^*$ is not an innermost component of $\underline z^*$. From now on, we assume that $z_1^*$ is an innermost component of $\underline z^*$. 
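For the reader's convenience, here is a minimal Python sketch (not used anywhere in the argument) of the notions from Definition \ref{Definition-Sigma} that drive the above case distinction: which components of a tuple are innermost, and the inner distance $d(\underline z)$. The configuration and the function names are ours.
\begin{verbatim}
def innermost(z):
    """1-based indices j such that no other z_k lies strictly between 0 and z_j."""
    return [j for j, zj in enumerate(z, 1)
            if not any(k != j and zk * zj > 0 and abs(zk) < abs(zj)
                       for k, zk in enumerate(z, 1))]

def inner_distance(z):
    pts = (0.0,) + tuple(z)
    return min(abs(a - b) for i, a in enumerate(pts) for b in pts[i + 1:])

z = (1.0, 3.0, -2.0)              # hypothetical element of Sigma_3
assert innermost(z) == [1, 3]     # z_1 = 1 and z_3 = -2 are innermost
assert inner_distance(z) == 1.0   # realized by |z_1 - 0|
\end{verbatim}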
By symmetry we assume that $z_1^*>0$. Let $\underline z=(z_1,\dots,z_n)\in\Sigma_n$ and $\underline r=(r_1,\dots,r_n)\in (0,\infty)^n $. Suppose $\Vert \underline z-\underline z^*\Vert_\infty< d^*/4$ and $\Vert \underline r\Vert_\infty<d^*/4$. Then the discs $\{|z-z_j|\le r_j\}$, $1\le j\le n$, are mutually disjoint. Let $E_{\underline r}$ denote the event $\{\tau^{z_1}_{r_1}<\tau^{{z_j}}_{r_{j}}<\infty,2\le j\le n\}$. We will transform the rescaled probability $\prod_{j=1}^n r_j^{-\alpha} \mathbb{P}[E_{\underline r}]$ into $G^1(\underline z)$ (defined by (\textcolor{red}ef{induction})) in a number of steps. In each step we get an error term, and have an upper bound of the error term. We define some good events depending on $\underline z$. For any $r>0$ and $\mathbb{H}$-hull $H$, let $E_{r;H}$ denote the event that $K_{\tau^{z_1}_r}\subset H$. For $R>r>s\textcolor{green}e 0$, let $E_{r,s;R}$ be the event that $\textcolor{green}amma[\tau^{z_1}_r,\tau^{z_1}_s]$ does not intersect the connected component of $\{z\in\mathbb{H}:|z-z_1|=R\}\cap H_{\tau^{z_1}_{r}}$ which has $z_1+R$ as an endpoint. \textcolor{blue}egin{comment} Fix $\underline s=( s_2,\dots, s_{n})$ with $0< s_j\le l_j$ to be determined later. Define the (good) events \mbox{\textcolor{blue}fseries B}GE E_{r;\underline{ s}}=\textcolor{blue}igcap_{j=2}^{n}\{\dist(z_j,K_{\tau^{z_1}_r})\textcolor{green}eq s_j\},\quad r\textcolor{green}e 0.\label{Ers}\mathbb{ E}DE Note that on the complement event $E_{r_1;\underline{ s}}^c$, $\textcolor{green}amma$ approaches $z_{k_0}$ by distance $s_{k_0}$ for some $2\le k_0\le n$ before it approaches $z_1$ by distance $r_1$. Fix $\theta\in(0,1)$ to be determined later. Define the (good) events \mbox{\textcolor{blue}fseries B}GE E_{r;\theta}=\{\dist(g_{\tau^{z_1}_{r}}(z_j),S_{K_{\tau^{z_1}_{r}}} )\textcolor{green}e \theta |g_{\tau^{z_1}_{r}}(z_j)-U_{\tau^{z_1}_{r}}|,2\le j\le n\},\quad r>0.\label{Erth}\mathbb{ E}DE \end{comment} In the following, we use $X\stackrel{e}{\approx}Y$ to denote the approximation relation $|X-Y|=e$, and call $e$ the error term. Let $\underline z'=(z_2,\dots,z_n)$, $\underline r'=(r_2,\dots,r_n)$, and $E_{\underline r'}=\{\tau^{z_j}_{r_j} <\infty,2\le j\le n\}$. For some $\mathbb{H}$-hull $H$ to be determined we use the following approximation relations: \textcolor{blue}egin{align*} & \mathbb{P}[E_{\underline r}]\stackrel{e_1^*}{\approx} \mathbb{P}[E_{\underline r}\cap E_{r_1;H}] = \mathbb{P}[\tau^{z_1}_{r_1}<\infty]\cdot \mathbb{ E}E^{r_1}_{z_1}[{\mathbf 1}_{E_{r_1;H} }\mathbb{P}[E_{\underline r'}| {\mathcal F}_{\tau^{z_1}_{r_1}}]]\\ \stackrel{e_2^*}{\approx} & r_1^{\alpha} G(z_1) \mathbb{ E}E^{r_1}_{z_1}[{\mathbf 1}_{E_{r_1;H}}\mathbb{P}[E_{\underline r'}| {\mathcal F}_{\tau^{z_1}_{r_1}}] ] \stackrel{e_3^*}{\approx} G(z_1) \mathbb{ E}E^{r_1}_{z_1}[{\mathbf 1}_{E_{r_1;H}} G_{\tau^{z_1}_{r_1}}(\underline z')] \prod_{k=1}^n r_k^{\alpha}. \end{align*} We write $G(r,\cdot)$ for $G_{\tau^{z_1}_r}$. 
For some $\eta_2>\eta_1>r_1$ to be determined, we further use the following approximation relations:
\begin{align*}
& G(z_1)\EE_{z_1}^{{r_1}}[{\mathbf 1}_{E_{r_1;H}} G({r_1},\underline z')] \stackrel{e_4}{\approx} G(z_1)\EE_{z_1}^{{r_1}}[{\mathbf 1}_{E_{r_1;H}\cap E_{\eta_1,r_1;\eta_2}} G({r_1},\underline z')]\\
\stackrel{e_5}{\approx} & G(z_1)\EE_{z_1}^{{r_1}}[{\mathbf 1}_{E_{\eta_1;H}\cap E_{\eta_1,{r_1};\eta_2}} G({r_1},\underline z')] \stackrel{e_6}{\approx} G(z_1)\EE_{z_1}^{{r_1}}[{\mathbf 1}_{E_{\eta_1;H}\cap E_{\eta_1,{r_1};\eta_2}} G({\eta_1},\underline z')]\\
\stackrel{e_{7}}{\approx} & G(z_1)\EE_{z_1}^{{r_1}}[{\mathbf 1}_{E_{\eta_1;H} } G({\eta_1},\underline z')] \stackrel{e_{8}}{\approx} G(z_1)\EE^*_{z_1}[ {\mathbf 1}_{E_{\eta_1;H}} G({\eta_1},\underline z')] \\
\stackrel{e_{9}}{\approx} & G(z_1)\EE^*_{z_1}[{\mathbf 1}_{E_{\eta_1;H}\cap E_{\eta_1,0;\eta_2}} G({\eta_1},\underline z')] \stackrel {e_{10}}{\approx} G(z_1)\EE^*_{z_1}[{\mathbf 1}_{E_{\eta_1;H}\cap E_{\eta_1,0;\eta_2}} G(0 ,\underline z')] \\
\stackrel {e_{11}}{\approx} & G(z_1)\EE^*_{z_1}[{\mathbf 1}_{E_{0;H}\cap E_{\eta_1,0;\eta_2}} G(0 ,\underline z')] \stackrel {e_{12}}{\approx} G(z_1)\EE^*_{z_1}[{\mathbf 1}_{E_{0;H}} G(0,\underline z')] \\
\stackrel {e_{13}}{\approx} & G(z_1)\EE^*_{z_1}[ G(0,\underline z')] = G^1(\underline z).
\end{align*}
Let $e_j=e_j^*/\prod_{k=1}^n r_k^\alpha$, $j=1,2,3$. Then
\BGE \Big|\prod_{k=1}^n r_k^{-\alpha} \mathbb{P}[E_{\underline r}] - G^1(\underline z)\Big|\le \sum_{j=1}^{13} e_j.\label{sum-ej}\EDE
Let $\tau=\tau^{z_1}_{r_1}$, $D_j=\{z\in H_\tau:|z-z_j|\le r_j\}$, $\widetilde z_j=Z_\tau(z_j)$, $\widetilde D_j=Z_\tau(D_j)$, $\widetilde r_j^+=\rad_{\widetilde z_j}(\widetilde D_j)$ and $\widetilde r_j^-=\dist(\widetilde z_j,\partial \widetilde D_j\cap \mathbb{H})$, $2\le j\le n$. Let $\underline{\widetilde z}'=(\widetilde z_2,\dots,\widetilde z_n)$ and $\underline {\widetilde r}'_\pm=(\widetilde r_2^\pm,\dots,\widetilde r_n^\pm)$. By the Koebe distortion theorem, for any $2\le j\le n$, if $r_j<\dist(z_j,K_\tau)$,
\BGE \frac{|Z_\tau'(z_j)| r_j}{(1+r_j/\dist(z_j,K_\tau))^2}\le \widetilde r_j^-\le \widetilde r_j^+\le \frac{|Z_\tau'(z_j)| r_j}{(1-r_j/\dist(z_j,K_\tau))^2} .\label{distortion}\EDE
By DMP of chordal SLE and (\ref{Gws}),
\BGE G(\underline{\widetilde z}',\underline{\widetilde r}'_-)\le \mathbb{P}[E_{\underline r'}|{\mathcal F}_\tau,\tau<\tau^{z_j}_{r_j},2\le j\le n]\le G(\underline{\widetilde z}',\underline{\widetilde r}'_+).\label{DMP}\EDE
Let $C_\kappa\in[1,\infty)$ be the constant in Proposition \ref{RN<1}.
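The two-sided Koebe distortion bound (\ref{distortion}) can be checked numerically on a concrete conformal map. The sketch below (a sanity check only, not part of the proof) uses the M\"obius map $f(w)=w/(1-w)$, which is conformal on the unit disc, so that the distance to the boundary is $1$ and $f'(0)=1$; the maximal and minimal distances from $f(0)$ to the image of the circle $|w|=r$ play the roles of $\widetilde r_j^+$ and $\widetilde r_j^-$.
\begin{verbatim}
import numpy as np

d, fprime = 1.0, 1.0
for r in (0.1, 0.3, 0.5):
    w = r * np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 4000))
    image = w / (1.0 - w)               # image of the circle |w| = r
    r_plus, r_minus = np.abs(image).max(), np.abs(image).min()
    lower = fprime * r / (1.0 + r / d) ** 2
    upper = fprime * r / (1.0 - r / d) ** 2
    assert lower <= r_minus <= r_plus <= upper
\end{verbatim}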
By Lemma \ref{compact-lem}, there are a nonempty $\mathbb{H}$-hull $H$ and $\delta_H\in (0, d^*/3]$, such that $\{z\in\mathbb{H}:|z-z_1^*|\le 3\delta_H\}\subset H$, $\dist(z_j,H)\ge 3\delta_H$, $2\le j\le n$, and whenever $\Vert\underline z-\underline z^*\Vert_\infty\le \delta_H$ and $\Vert \underline r\Vert _\infty\le \delta_H$, we have
\begin{equation}
\prod_{j=1}^n r_j^{-\alpha} \mathbb{P}[E_{r_1;H}^c\cap E_{\underline r}]<\frac\varepsilon{11 C_\kappa } .\label{est-H}
\end{equation}
From now on, we always assume that $\Vert\underline z-\underline z^*\Vert_\infty<\delta_H$. Then $H\supset \{z\in\mathbb{H}:|z-z_1|\le 2\delta_H\}$, and $\dist(z_j,H)\ge 2\delta_H$, $2\le j\le n$. By (\ref{est-H}), if $\Vert \underline r\Vert_\infty\le \delta_H$,
$$e_1\le \frac\varepsilon{11 C_\kappa }.$$
Sending $r_2,\dots,r_n$ to $0^+$ in (\ref{est-H}) and using Fatou's lemma, the estimates (\ref{distortion},\ref{DMP}) and the convergence of the $(n-1)$-point Green's function, we get
$$ r_1^{-\alpha} \mathbb{P}[\tau^{z_1}_{r_1}<\infty] \cdot \mathbb{E}_{z_1}^{r_1}[{\mathbf 1}_{E_{r_1;H}^c} G (r_1,\underline z')]\le \frac \varepsilon {11 C_\kappa},\quad \mbox{if }r_1\le \delta_H.$$
By Proposition \ref{RN<1}, if $r_1\le \delta_H$,
$$ r_1^{-\alpha} \mathbb{P}[\tau^{z_1}_{r_1}<\infty] \cdot \mathbb{E}_{z_1}^{*}[{\mathbf 1}_{ E_{r_1;H}^c} G (r_1,\underline z')]\le \frac \varepsilon {11}.$$
Let $r_1\to 0^+$. From $ r_1^{-\alpha} \mathbb{P}[\tau^{z_1}_{r_1}<\infty]\to G(z_1)$, $E_{r_1;H}^c\to E_{0;H}^c$, $G(r_1,\underline z')\to G(0,\underline z')$ and Fatou's lemma, we get $G(z_1) \mathbb{E}_{z_1}^{*}[{\mathbf 1}_{ E_{0;H}^c} G (0,\underline z')]\le \frac \varepsilon{11}$, which implies
$$e_{13}\le \frac \varepsilon{11}.$$
By Proposition \ref{compact-prop}, the set
$$ \Omega_H:=\{(g_K(z_2)-u,\dots,g_K(z_n)-u): K\in{\mathcal H}(H),u\in S_H,|z_j-z_j^*|\le \delta_H,2\le j\le n\}$$
is a compact subset of $\Sigma_{n-1}$, and the set
$$ Q_H:= \{|g_K'(z)|:K\in{\mathcal H}(H),z\in\bigcup_{j=2}^n [z_j^*-\delta_H,z_j^*+\delta_H]\}$$
is a compact subset of $(0,\infty)$. Let $\xi_H=\min\{|w_k|:\underline w=(w_2,\dots,w_n)\in \Omega_H,2\le k\le n\}>0$. For $a\ge 0$, we write $\underline Z_a(\underline z')$ for $(Z_{\tau^{z_1}_a}(z_2),\dots,Z_{\tau^{z_1}_a}(z_n))$, when all components are well defined. We will use the fact that, on the event $E_{a;H}$, $\underline Z_a(\underline z')\in \Omega_H$ because $Z_{\tau^{z_1}_a}=g_{K_{\tau^{z_1}_a}}-U_{\tau^{z_1}_a}$, $K_{\tau^{z_1}_a}\subset H$, and $U_{\tau^{z_1}_a}\in S_{K_{\tau^{z_1}_a}}\subset S_H$ by Corollary \ref{UinSK} and Proposition \ref{SKSH}. Recall that we assume that $\Vert\underline z-\underline z^*\Vert_\infty\le \delta_H$. So we have
\begin{equation}
|Z_{\tau^{z_1}_a}(z_j)|\ge \xi_H,\quad 2\le j\le n,\mbox{ on the event }E_{a;H}.\label{xiH}
\end{equation}
By the continuity of the $(n-1)$-point Green's function and the compactness of $\Omega_H$ and $Q_H$, we see that, for any $a\ge 0$, $G(a,\underline z')$ is bounded on the event $E_{a;H}$ by a constant depending only on $\kappa,n,\underline z^*,H,\delta_H$.
By (\ref{distortion},\ref{DMP}), the compactness of $\Omega_H$ and $Q_H$, and Proposition \ref{lower-upper}, $\mathbb{P}[E_{\underline r'}|{\mathcal F}_{\tau^{z_1}_{r_1}},E_{r_1;H}]$ is bounded by $\prod_{j=2}^n r_j^\alpha$ times some constant depending only on $\kappa,n,\underline z^*,H,\delta_H$. By (\ref{G(z)-approx}) and the above bound, there are $\beta_1\in(0,\infty)$ depending only on $\kappa$ and $C^H_1\in(0,\infty)$ depending only on $\kappa,n,\underline z^*,H,\delta_H$ such that, if $r_1<|z_1|$,
$$e_2\le C^H_1 r_1^{\beta_1}.$$
Since the convergence of the $(n-1)$-point ordered Green's function is uniform over compact sets, by (\ref{distortion},\ref{DMP}) and the compactness of $\Omega_H$ and $Q_H$, we find that there is $\delta_H'\in(0,\delta_H)$ depending only on $\kappa,n,H,\delta_H$ such that if $\Vert\underline r\Vert_\infty\le \delta_H'$, then
$$e_3<\frac\varepsilon {11}.$$
As observed above, since $G$ is continuous on $\Sigma_{n-1}$ and $\Omega_H$ and $Q_H$ are compact, $G(a,\underline z')$ is bounded on the event $E_{a;H}$ by a constant depending only on $\kappa,n,\underline z^*, H,\delta_H$. Combining this fact for $a\in\{r_1,\eta_1,0\}$ with Proposition \ref{stayin} and the boundedness of $G(z_1)$ (over $[z_1^*-\delta_H,z_1^*+\delta_H]$), we find that there is $C^H_2\in(0,\infty)$ depending only on $\kappa,n,\underline z^*,H,\delta_H$ such that
$$e_4,e_7,e_9,e_{12}\le C^H_2 (\eta_1/\eta_2)^\alpha.$$
Since $H\supset \{z\in\mathbb{H}:|z-z_1^*|\le 3\delta_H\}$, if $\eta_2\le 2\delta_H$, then $E_{r_1;H}\cap E_{\eta_1,r_1;\eta_2}=E_{\eta_1;H}\cap E_{\eta_1,r_1;\eta_2}$ and $E_{\eta_1;H}\cap E_{\eta_1,0;\eta_2}=E_{0;H}\cap E_{\eta_1,0;\eta_2}$, which implies that
$$e_5=e_{11}=0.$$
Combining Proposition \ref{Prop2.13} with the boundedness of $G(z_1)$ and $G(\eta_1,\underline z')$ on the event $E_{\eta_1;H}$, we find that there are $\beta_2>0$ depending only on $\kappa$ and $C^H_3\in(0,\infty)$ depending only on $\kappa,n,\underline z^*,H,\delta_H$ such that, if $r_1<\eta_1/6$,
$$e_8\le C_3^H (r_1/\eta_1)^{\beta_2}.$$
Recall that
$$G(\eta_1,\underline z')=\prod_{j=2}^n |Z_{\tau^{z_1}_{\eta_1}}'(z_j)|^\alpha \cdot G(\underline Z_{ {\eta_1}}(\underline z')),\quad G(r_1,\underline z')= \prod_{j=2}^n |Z_{\tau^{z_1}_{r_1}}'(z_j)|^\alpha\cdot G(\underline Z_{ {r_1}}(\underline z')).$$
Assume $\eta_2\le 2\delta_H$. Then $E_{r_1;H}\cap E_{\eta_1,r_1;\eta_2}=E_{\eta_1;H}\cap E_{\eta_1,r_1;\eta_2}$, and on this common event, $\underline Z_{{\eta_1}}(\underline z'),\underline Z_{{r_1}}(\underline z')\in \Omega_H$. Let $K_\Delta= K_{\tau^{z_1}_{r_1}}/ K_{\tau^{z_1}_{\eta_1}}$.
On the event $E_{\eta_1,r_1;\eta_2}$, by Proposition \ref{lem-extremal2'}, $\diam(K_\Delta)\le 8 \eta_2$, and by Proposition \ref{centered-Delta}, we have
\begin{equation}
\Vert \underline Z_{\eta_1}(\underline z')-\underline Z_{r_1}(\underline z')\Vert_\infty\le 56 \eta_2 . \label{Z-r-eta}
\end{equation}
From $K_\Delta=K_{\tau^{z_1}_{r_1}}/ K_{\tau^{z_1}_{\eta_1}}$ we know $g_{\tau^{z_1}_{r_1}}=g_{K_\Delta}\circ g_{\tau^{z_1}_{\eta_1}}$. Let $Z_\Delta=g_{K_\Delta}(\cdot+U_{\tau^{z_1}_{\eta_1}})-U_{\tau^{z_1}_{r_1}}$. Then $Z_{\tau^{z_1}_{r_1}}=Z_\Delta\circ Z_{\tau^{z_1}_{\eta_1}}$ and $Z_\Delta'=g_{K_\Delta}'(\cdot+U_{\tau^{z_1}_{\eta_1}})$. By Proposition \ref{K/U}, $U_ {\tau^{z_1}_{\eta_1}}\in \overline{K_\Delta}$. By Proposition \ref{small}, for $z\in\overline\mathbb{H}$,
\begin{equation}
|Z_\Delta'(z)-1|\le 5 \Big(\frac{8\eta_2}{|z |}\Big)^2,\quad \mbox{if }|z |\ge 40\eta_2.\label{Z-r-eta'}
\end{equation}
Let $\widetilde z_j=Z_ {\tau^{z_1}_{\eta_1}}(z_j)$, $2\le j\le n$. Then $Z_{\tau^{z_1}_{r_1}}'(z_j)=Z_\Delta'(\widetilde z_j)\cdot Z_{\tau^{z_1}_{\eta_1}}'(z_j)$, and by (\ref{xiH}) $|\widetilde z_j|\ge \xi_H$ on the event $E_{\eta_1;H}$. Thus, if $\eta_2\le \xi_H/40$, then $|Z_\Delta'(\widetilde z_j)-1|\le 320 {\eta_2^2}/{\xi_H^2}$. Since $G$ is continuous on $\Sigma_{n-1}$, it is uniformly continuous on the compact set $\Omega_H$. By (\ref{Z-r-eta}), $|G(\underline Z_{\eta_1}(\underline z'))-G(\underline Z_{r_1}(\underline z'))|\to 0$ uniformly as $\eta_2\to 0^+$. Combining these facts with the compactness of $Q_H$ and the expressions of $G(\eta_1,\cdot)$ and $G(r_1,\cdot)$, we find that there is $\delta_H''\in(0,\delta_H')$ depending only on $\kappa,n,\underline z^*,H,\delta_H$ such that, if $\eta_2\le\delta_H''$, then
$$e_6,e_{10}< \frac\varepsilon {11}.$$
We now explain how to choose $H$ and $\eta_1,\eta_2$ in the approximations with errors $e_1$ through $e_{13}$. First, we choose the $\mathbb{H}$-hull $H$ and $\delta_H>0$ such that $e_1,e_{13}\le \frac \varepsilon{11}$ if $\Vert \underline z-\underline z^*\Vert_\infty\le \delta_H$ and $\Vert \underline r\Vert_\infty\le \delta_H$. We have the quantities $C^H_1,C^H_2,C^H_3,\delta_H',\delta_H''\in(0,\infty)$ depending only on $\kappa,n,\underline z^*,H,\delta_H$. Assume that $\Vert \underline z-\underline z^*\Vert_\infty\le \delta_H$. If $\Vert\underline r\Vert_\infty\le \delta_H'$, then $e_3<\frac\varepsilon {11}$. Let $\eta_2= \delta_H''$. Then we have $e_6,e_{10}<\frac\varepsilon{11}$. Since $\delta_H''< \delta_H$, we have $\eta_2< \delta_H< 2\delta_H$, and so $e_5=e_{11}=0$. Let $\eta_1=(\varepsilon/(11 C^H_2))^{1/\alpha} \eta_2$. Then $e_4,e_7,e_9,e_{12}\le \frac\varepsilon{11}$. If $r_1<(\varepsilon/(11 C^H_1))^{1/\beta_1}$, then $e_2<\frac\varepsilon{11}$; and if $r_1<(\varepsilon/(11 C^H_3))^{1/\beta_2} \eta_1$, then $e_8<\frac\varepsilon{11}$.
In conclusion, if $\Vert \underline z-\underline z^*\Vert_\infty\le \delta_H$ and
$$\Vert\underline r\Vert_\infty< \delta_H'\wedge \Big(\frac{\varepsilon}{11 C^H_1}\Big)^{1/\beta_1}\wedge \Big(\Big(\frac{\varepsilon}{11 C^H_3}\Big)^{1/\beta_2} \cdot\Big(\frac {\varepsilon}{11 C^H_2}\Big)^{1/\alpha}\cdot \delta_H''\Big)=:\delta,$$
then $e_5=e_{11}=0$ and the remaining $e_j$'s are bounded by $\varepsilon/11$, which by (\ref{sum-ej}) implies that (\ref{local-uniform}) holds. Thus, we get the existence of the limit in (\ref{ordered}) with $j_0=1$ as well as the uniform convergence on compact subsets of $\Sigma_n$.

\subsection{Continuity}\label{proof2}
In this subsection, we prove the continuity of the function $G^1$ on $\Sigma_n$. We adopt the notation of the previous subsection. By the rescaling property and left-right symmetry of SLE, for any $c\in\mathbb{R}\setminus \{0\}$, $\underline z=(z_1,\dots,z_n)\in \Sigma_n$, and $r>0$,
$$\mathbb{P}[\tau^{ z_1}_{ r }<\tau^{z_k}_{ r }<\infty,2\le k\le n]=\mathbb{P}[\tau^{c z_1}_{|c| r }<\tau^{c z_k}_{|c| r }<\infty,2\le k\le n].$$
Multiplying both sides by $r^{-n\alpha}=|c|^{n\alpha}(|c|r)^{-n\alpha}$ and sending $r$ to $0^+$, we get by the existence of the limit in (\ref{ordered}) that $G^1(\underline z)=| c|^{n \alpha} G^1( c\underline z)$. In particular, we have $G^1(\underline z)=|z_1|^{-n\alpha} G^1(1,z_2/z_1,\dots,z_n/z_1)$. Thus, it suffices to prove that $G^1(1,\cdot)$ is continuous on $\Sigma^1_{n-1}$, which is the set of $\underline w\in \Sigma_{n-1}$ such that $(1,\underline w)\in\Sigma_n$. Define $\widehat G$ on $\Sigma^1_{n-1}$ by $\widehat G(\underline w)=\mathbb{E}^*_1 [G_{T_1}(\underline w)]$. Then $G^1(1,\underline w)=G(1) \widehat G(\underline w)$. So it suffices to prove that $\widehat G$ is continuous on $\Sigma^1_{n-1}$. From the previous subsection, $\widehat G$ vanishes on the set of $\underline w$ which have at least one component lying in $(0,1)$. Since this set is open in $\Sigma^1_{n-1}$, it suffices to prove the continuity of $\widehat G$ at the other points of $\Sigma^1_{n-1}$.

Fix $\underline w^*=(w_2^*,\dots,w_n^*)\in \Sigma^1_{n-1}$ such that $w_k^*\notin(0,1)$ for any $2\le k\le n$. Let $w_0=0$ and $w_1=1$. Let $d^*=\min\{|w_j^*-w_k^*|:0\le j<k\le n\}>0$. Let $\varepsilon>0$. By the argument used to bound $e_{13}$ in the previous subsection, there are $\delta_H\in(0,d^*/3)$ and an $\mathbb{H}$-hull $H$, such that $\{z\in\mathbb{H}:|z-1|\le 3\delta_H\}\subset H$, $\dist(w_j^*,H)\ge 3\delta_H$, $2\le j\le n$, and for any $\underline w=(w_2,\dots,w_n)\in\mathbb{R}^{n-1}$ satisfying $\Vert \underline w-\underline w^*\Vert_\infty\le \delta_H$, we have $ \mathbb{E}_{1}^{*}[{\mathbf 1}_{ E_{0;H}^c} G_{T_1} (\underline w)]<\varepsilon/3$, where $E_{0;H}$ is the event that $\gamma[0,T_1]\subset H$.
Suppose $\Vert \underline w-\underline w^*\Vert_\infty\le \delta_H$. For such $H$ we use the following approximation relations:
$$\widehat G(\underline w)=\mathbb{E}_{1}^{*}[ G _{T_1}(\underline w)]\stackrel{e_1}{\approx} \mathbb{E}_{1}^{*}[{\mathbf 1}_{ E_{0;H}} G _{T_1}(\underline w)]\stackrel{e_2}{\approx} \mathbb{E}_{1}^{*}[{\mathbf 1}_{ E_{0;H}} G_{T_1} (\underline w^*)]\stackrel{e_3}{\approx} \mathbb{E}_{1}^{*}[ G_{T_1} (\underline w^*)] =\widehat G(\underline w^*).$$
We have already shown that $e_1,e_3<\varepsilon/3$. It remains to bound $e_2$. We write $\underline Z(\underline w)=(Z_{T_1}(w_2),\dots,Z_{T_1}(w_n))$. Then $G_{T_1}(\underline w)=\prod_{j=2}^n Z_{T_1}'(w_j)^\alpha G(\underline Z (\underline w))$. As $\underline w\to \underline w^*$, we have $G(\underline Z (\underline w))\to G(\underline Z (\underline w^*))$ by the continuity of the $(n-1)$-point Green's function, and $Z_{T_1}'(w_j)\to Z_{T_1}'(w_j^*)$, $2\le j\le n$, which together imply that $G_{T_1}(\underline w)\to G_{T_1}(\underline w^*)$. We now show that the convergence is uniform (independent of the randomness) on the event $E_{0;H}$. By the previous subsection, on the event $E_{0;H}$, we have $ \underline Z(\underline w),\underline Z(\underline w^*)\in \Omega_H$, and $Z_{T_1}'(w_j),Z_{T_1}'(w_j^*)\in Q_H$, $2\le j\le n$. By the compactness of $Q_H$, on the event $E_{0;H}$, the random map $Z_{T_1}$ is equicontinuous (independently of the randomness) on $[w_j^*-\delta_H,w_j^*+\delta_H]$ for $2\le j\le n$. Thus, as $\underline w\to \underline w^*$, $\underline Z(\underline w)\to \underline Z(\underline w^*)$ uniformly on the event $E_{0;H}$. Since $G$ is uniformly continuous on the compact set $\Omega_H$, we get $G(\underline Z(\underline w))\to G (\underline Z(\underline w^*))$ uniformly on the event $E_{0;H}$ as $\underline w\to \underline w^*$. By the Koebe distortion theorem, for $2\le j\le n$, $Z_{T_1}'(w_j)\to Z_{T_1}'(w_j^*)$ uniformly on the event $E_{0;H}$ as $\underline w\to \underline w^*$. Thus, $G_{T_1}(\underline w)\to G_{T_1}(\underline w^*)$ uniformly on the event $E_{0;H}$ as $\underline w\to \underline w^*$. In particular, there is $\delta_H'\in(0,\delta_H)$ such that if $\Vert \underline w-\underline w^*\Vert_\infty\le \delta_H'$, then $|G_{T_1}(\underline w)-G_{T_1}(\underline w^*)|<\varepsilon/3$ on the event $E_{0;H}$, which implies that $e_2<\varepsilon/3$. Thus, if $\Vert \underline w-\underline w^*\Vert_\infty\le \delta_H'$, then
$$|\widehat G(\underline w)-\widehat G(\underline w^*)|\le e_1+e_2+e_3<\frac \varepsilon 3+\frac \varepsilon 3+\frac \varepsilon 3=\varepsilon.$$
So we get the desired continuity of $\widehat G$ at $\underline w^*$. The proof of the continuity of $G^1$ on $\Sigma_n$ is thus complete, and so is the proof of Theorem \ref{main}.

\begin{thebibliography}{00}
\bibitem{Ahl} Lars V.\ Ahlfors. {\itshape Conformal invariants: topics in geometric function theory}. McGraw-Hill Book Co., New York, 1973.
\bibitem{boundary} Tom Alberts and Michael Kozdron. Intersection probabilities for a chordal SLE path and a semicircle. {\itshape Electron.\ Comm.\ Probab.}, {\bfseries 13}:448-460, 2008.
\bibitem{dim-real} Tom Alberts and Scott Sheffield. Hausdorff dimension of the SLE curve intersected with the real line. {\itshape Electron.\ J.\ Probab.}, {\bfseries 40}:1166-1188, 2008.
\bibitem{Bf} Vincent Beffara.
The dimension of SLE curves. {\itshape Ann.\ Probab.}, {\bfseries 36}:1421-1452, 2008.
\bibitem{Law4} Gregory Lawler. Schramm-Loewner evolution, in {\itshape Statistical Mechanics}, S.\ Sheffield and T.\ Spencer, eds., IAS/Park City Mathematical Series, AMS, 231-295, 2009.
\bibitem{Mink-real} Gregory Lawler. Minkowski content of the intersection of a Schramm-Loewner evolution (SLE) curve with the real line. {\itshape J.\ Math.\ Soc.\ Japan}, {\bfseries 67}:1631-1669, 2015.
\bibitem{Law-SLE} Gregory Lawler. {\itshape Conformally invariant processes in the plane}. Amer.\ Math.\ Soc., 2005.
\bibitem{LR} Gregory Lawler and Mohammad Rezaei. Minkowski content and natural parametrization for the Schramm-Loewner evolution. {\itshape Ann.\ Probab.}, {\bfseries 43}(3):1082-1120, 2015.
\bibitem{LSW1} Gregory Lawler, Oded Schramm and Wendelin Werner. Values of Brownian intersection exponents I: half-plane exponents. {\itshape Acta Math.}, {\bfseries 187}(2):237-273, 2001.
\bibitem{LSW-8/3} Gregory Lawler, Oded Schramm and Wendelin Werner. Conformal restriction: the chordal case. {\itshape J.\ Amer.\ Math.\ Soc.}, {\bfseries 16}(4):917-955, 2003.
\bibitem{LS} Gregory Lawler and Scott Sheffield. A natural parametrization for the Schramm-Loewner evolution. {\itshape Ann.\ Probab.}, {\bfseries 39}(5):1896-1937, 2011.
\bibitem{LW} Gregory Lawler and Brent Werness. Multi-point Green's function for SLE and an estimate of Beffara. {\itshape Ann.\ Probab.}, {\bfseries 41}:1513-1555, 2013.
\bibitem{LZ} Gregory Lawler and Wang Zhou. SLE curves and natural parametrization. {\itshape Ann.\ Probab.}, {\bfseries 41}(3A):1556-1584, 2013.
\bibitem{MZ} Benjamin Mackey and Dapeng Zhan. Multipoint estimates for radial and whole-plane SLE. {\itshape J.\ Stat.\ Phys.}, {\bfseries 175}:879-903, 2019.
\bibitem{MS1} Jason Miller and Scott Sheffield. Imaginary geometry I: intersecting SLEs. {\itshape Probab.\ Theory Relat.\ Fields}, {\bfseries 164}(3):553-705, 2016.
\bibitem{existence} Mohammad Rezaei and Dapeng Zhan. Green's function for chordal SLE curves. {\itshape Probab.\ Theory Relat.\ Fields}, {\bfseries 171}:1093-1155, 2018.
\bibitem{higher} Mohammad Rezaei and Dapeng Zhan. Higher moments of the natural parameterization for SLE curves. {\itshape Ann.\ Inst.\ Henri Poincar\'e Probab.\ Stat.}, {\bfseries 53}(1):182-199, 2017.
\bibitem{RY} Daniel Revuz and Marc Yor. {\itshape Continuous martingales and Brownian motion}. Springer, Berlin, 1991.
\bibitem{RS} Steffen Rohde and Oded Schramm. Basic properties of SLE. {\itshape Ann.\ Math.}, {\bfseries 161}:879-920, 2005.
\bibitem{S-SLE} Oded Schramm. Scaling limits of loop-erased random walks and uniform spanning trees. {\itshape Israel J.\ Math.}, {\bfseries 118}:221-288, 2000.
\bibitem{LERW} Dapeng Zhan. The scaling limits of planar LERW in finitely connected domains. {\itshape Ann.\ Probab.}, {\bfseries 36}:467-529, 2008.
\end{thebibliography}
\end{document}
\begin{document}
\begin{abstract}
In an effort to study the stability of contact lines in fluids, we consider the dynamics of an incompressible viscous Stokes fluid evolving in a two-dimensional open-top vessel under the influence of gravity. This is a free boundary problem: the interface between the fluid in the vessel and the air above (modeled by a trivial fluid) is free to move and experiences capillary forces. The three-phase interface where the fluid, air, and solid vessel wall meet is known as a contact point, and the angle formed between the free interface and the vessel is called the contact angle. We consider a model of this problem that allows for fully dynamic contact points and angles. We develop a scheme of a priori estimates for the model, which then allow us to show that for initial data sufficiently close to equilibrium, the model admits global solutions that decay to equilibrium exponentially fast.
\end{abstract}
\maketitle

\section{Introduction}

\subsection{Formulation in Eulerian coordinates}

Consider a viscous incompressible fluid evolving in a two-dimensional open-top vessel. We model the vessel as a bounded, connected, open set $\mathcal{V} \subseteq \mathbb{R}^{2}$ subject to the following two assumptions. First, we assume that
\begin{equation}
\mathcal{V}_{top} := \mathcal{V} \cap \{y \in \mathbb{R}^{2} \;\vert\; y_2 \ge 0 \} = \{y \in \mathbb{R}^{2} \;\vert\; -\ell < y_1 < \ell, 0 \le y_2 < L \}
\end{equation}
for some $\ell, L >0$. This means that the ``top'' part of the vessel, $\mathcal{V}_{top}$, consists of a rectangular channel. Second, we assume that $\partial \mathcal{V}$ is $C^2$ away from the points $(\pm \ell, L)$. We will write
\begin{equation}
\mathcal{V}_{btm} := \mathcal{V} \cap \{y \in \mathbb{R}^{2} \;\vert\; y_2 \le 0 \}
\end{equation}
for the ``bottom'' part of the vessel. See Figure \ref{fig:vessel} for a cartoon of two possible vessels.

We will assume that the fluid fills the entirety of the bottom part of the vessel and partially fills the top. More precisely, we assume that the fluid occupies the moving domain
\begin{equation}
\Omega(t) = \mathcal{V}_{btm} \cup \{ y \in \mathbb{R}^{2} \;\vert\; -\ell < y_1 < \ell, 0 < y_2 < \zeta(y_1,t)\},
\end{equation}
where the free surface of the fluid is given as the graph of a function $\zeta:[-\ell,\ell] \times \mathbb{R}^{+} \to \mathbb{R}$ such that $0 < \zeta(\pm \ell,t) \le L$ for all $t \in \mathbb{R}^{+}$, which guarantees that the fluid does not ``spill'' out of the vessel top. We write $\Sigma(t) = \{(y_1,\zeta(y_1,t)) \;\vert \; \abs{y_1} < \ell\}$ for the free surface and $\Sigma_s(t) = \partial \Omega(t) \backslash \Sigma(t)$ for the interface between the fluid and the fixed solid walls of the vessel. See Figure \ref{fig:omega} for a cartoon of two possible fluid domains.

For each $t\ge0$, the fluid is described by its velocity and pressure functions $(u,P) :\Omega(t) \to \mathbb{R}^{2} \times \mathbb{R}$. The viscous stress tensor is determined in terms of $P$ and $u$ according to
\begin{equation}\label{stress_def}
S(P,u):= PI - \mu \mathbb{D} u,
\end{equation}
where $I$ is the $2 \times 2$ identity matrix, $(\mathbb{D} u)_{ij} = \partial_i u_j + \partial_j u_i$ is the symmetric gradient of $u$, and $\mu>0$ is the viscosity of the fluid.
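For later reference, note that the definition of $\mathbb{D} u$ immediately gives the componentwise identity
\[
\partial_j (\mathbb{D} u)_{ij} = \partial_j(\partial_i u_j + \partial_j u_i) = \partial_i \diverge{u} + \Delta u_i,
\]
which explains the formula for the divergence of the stress tensor used just below.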
We write $\diverge S(P,u)$ for the vector with components $\diverge S(P,u)_i = \partial_j S(P,u)_{ij}$; note that if $\diverge{u}=0$ then $\diverge S(P,u) = \nabla P - \mu \Delta u$.

Before stating the equations of motion we define a number of terms that will appear. We will write $g>0$ for the strength of gravity, $\sigma>0$ for the surface tension coefficient along the free surface, and $\beta > 0$ for the Navier slip friction coefficient on the vessel side walls. The coefficients $\gamma_{sv}, \gamma_{sf} \in \mathbb{R}$ are a measure of the free energy per unit length associated to the solid-vapor and solid-fluid interaction, respectively. We set $\jump{\gamma} := \gamma_{sv} - \gamma_{sf}$ and assume that the classical Young relation \cite{young} holds:
\begin{equation}\label{gamma_assume}
\frac{\abs{\jump{\gamma}}}{\sigma} < 1.
\end{equation}
Finally, we define the contact point velocity response function $\mathscr{V}: \mathbb{R} \to \mathbb{R}$ to be a $C^2$ increasing diffeomorphism such that $\mathscr{V}(0) =0$. We will refer to its inverse as $\mathscr{W} := \mathscr{V}^{-1} \in C^2(\mathbb{R})$.

\begin{figure}
\caption{Two possible vessels}
\label{fig:vessel}
\end{figure}

\begin{figure}
\caption{Fluid domains}
\label{fig:omega}
\end{figure}

We require that $(u, P, \zeta)$ satisfy the gravity-driven free-boundary incompressible Stokes equations in $\Omega(t)$ for $t>0$:
\begin{equation}\label{ns_euler}
\begin{cases}
\diverge{S(P,u)} = \nabla P - \mu \Delta u = 0 & \text{in }\Omega(t) \\
\diverge{u}=0 & \text{in }\Omega(t) \\
S(P,u) \nu = g \zeta \nu - \sigma \mathcal{H}(\zeta) \nu & \text{on } \Sigma(t) \\
(S(P,u)\nu - \beta u)\cdot \tau =0 &\text{on } \Sigma_s(t) \\
u \cdot \nu =0 &\text{on } \Sigma_s(t) \\
\partial_t \zeta = u_2 - u_1 \partial_{y_1}\zeta &\text{on } \Sigma(t) \\
\partial_t \zeta(\pm \ell,t) = \mathscr{V}\left( \jump{\gamma} \mp \sigma \frac{\partial_1 \zeta}{\sqrt{1+\abs{\partial_1 \zeta}^2}}(\pm \ell,t) \right)
\end{cases}
\end{equation}
for $\nu$ the outward-pointing unit normal, $\tau$ the associated unit tangent, and
\begin{equation}\label{H_def}
\mathcal{H}(\zeta) := \partial_1 \left( \frac{\partial_1 \zeta}{\sqrt{1+\abs{\partial_1 \zeta}^2}} \right)
\end{equation}
twice the mean-curvature operator. Note that in \eqref{ns_euler} we have shifted the gravitational forcing to the boundary and eliminated the constant atmospheric pressure, $P_{atm}$, in the usual way by adjusting the actual pressure $\bar{P}$ according to $P = \bar{P} + g y_2 - P_{atm}$. Note also that the final equation in \eqref{ns_euler} is equivalent to
\begin{equation}
\mathscr{W}( \partial_t \zeta(\pm \ell,t)) = \jump{\gamma} \mp \sigma \frac{\partial_1 \zeta}{\sqrt{1+\abs{\partial_1 \zeta}^2}}(\pm \ell,t),
\end{equation}
which is a more convenient version for our analysis.

We assume that initial data are specified with the initial mass of the fluid given as
\begin{equation}
M_0 := \abs{\Omega(0)} = \abs{\mathcal{V}_{btm}} + \int_{-\ell}^\ell \zeta(y_1,0) dy_1.
\end{equation}
We may identify the second term with the mass of the fluid in the top of the vessel,
\begin{equation}
M_{top} := \int_{-\ell}^\ell \zeta(y_1,0) dy_1.
\end{equation}
The mass of the fluid is conserved in time since $\partial_t \zeta = u \cdot \nu \sqrt{1 + \abs{\partial_1 \zeta}^2}$:
\begin{equation}\label{avg_prop}
\frac{d}{dt} \abs{\Omega(t)} = \frac{d}{dt} \int_{-\ell}^\ell \zeta = \int_{-\ell}^\ell \partial_t \zeta = \int_{\Sigma(t)} u \cdot \nu = \int_{\Omega(t)} \diverge{u} = 0.
\end{equation}
We defer a deeper explanation of the system \eqref{ns_euler} to Section \ref{sec_model} and turn now to the construction of equilibrium solutions to \eqref{ns_euler}.

\subsection{Equilibrium state}

A steady state equilibrium solution to \eqref{ns_euler} corresponds to setting $u =0$, $P(y,t) = P_0 \in \mathbb{R}$, and $\zeta(y_1,t) = \zeta_0(y_1)$ with $\zeta_0$ and $P_0$ solving
\begin{equation}\label{zeta0_eqn}
\begin{cases}
g \zeta_0 - \sigma \mathcal{H}(\zeta_0) = P_0 & \text{on } (-\ell,\ell) \\
\sigma \frac{\partial_1 \zeta_0}{\sqrt{1+\abs{\partial_1 \zeta_0}^2}}(\pm \ell) = \pm \jump{\gamma}.
\end{cases}
\end{equation}
Here the boundary conditions follow from the assumptions on the inverse of the velocity response function, $\mathscr{W}$, which in particular require that $\mathscr{W}(z) =0$ if and only if $z=0$. A solution to \eqref{zeta0_eqn} is called an equilibrium capillary surface.

The constant $P_0$ in \eqref{zeta0_eqn} is determined through the fixed-mass condition
\begin{equation}\label{zeta0_constraint}
M_{top} = \int_{-\ell}^\ell \zeta_0(y_1) dy_1.
\end{equation}
Indeed, this allows us to integrate the first equation in \eqref{zeta0_eqn} to compute $P_0$:
\begin{equation}
2\ell P_0 = \int_{-\ell}^\ell P_0 = \int_{-\ell}^\ell g \zeta_0 - \sigma \mathcal{H}(\zeta_0) = g M_{top} -\sigma \left. \frac{\partial_1 \zeta_0}{\sqrt{1+\abs{\partial_1 \zeta_0}^2}} \right\vert_{-\ell}^\ell = g M_{top} -2 \jump{\gamma},
\end{equation}
which means that
\begin{equation}\label{p0_def}
P_0 = \frac{g M_{top} -2 \jump{\gamma}}{2\ell}.
\end{equation}
The problem \eqref{zeta0_eqn} is variational in nature. Indeed, solutions correspond to critical points of the energy functional $\mathscr{I} : W^{1,1}(-\ell,\ell) \to \mathbb{R}$ given by
\begin{equation}\label{zeta0_energy}
\mathscr{I}(\zeta) = \int_{-\ell}^\ell \frac{g}{2} \abs{\zeta}^2 + \sigma \sqrt{1 + \abs{\zeta'}^2} - \jump{\gamma}(\zeta(\ell) + \zeta(-\ell))
\end{equation}
subject to the mass constraint $M_{top} = \int_{-\ell}^\ell \zeta$. The equilibrium pressure $P_0$ arises as a Lagrange multiplier associated to the constraint.

We now state a well-posedness result for \eqref{zeta0_eqn}, the proof of which we defer to Appendix \ref{app_surf}.
\begin{thm}\label{zeta0_wp}
There exists a constant $M_{min} \ge 0$ such that if $M_{top} > M_{min}$ then there exists a unique solution $\zeta_0 \in C^\infty([-\ell,\ell])$ to \eqref{zeta0_eqn} that satisfies \eqref{zeta0_constraint} with $P_0$ given by \eqref{p0_def}. Moreover, $\min_{[-\ell,\ell]} \zeta_0 >0$, and if $\mathscr{I}$ is given by \eqref{zeta0_energy}, then $\mathscr{I}(\zeta_0) \le \mathscr{I}(\psi)$ for all $\psi \in W^{1,1}((-\ell,\ell))$ such that $\int_{-\ell}^\ell \psi = M_{top}$.
\end{thm}

\begin{remark}
Throughout the rest of the paper we make two assumptions on the parameters. First, we assume that $M_{top} > M_{min}$ in order to have $\zeta_0$ as in Theorem \ref{zeta0_wp}.
Second, we assume that the parameter $L >0$, the height of the side walls of $\mathcal{V}_{top}$, satisfies the condition $\zeta_0(\pm\ell) < L$, which means that the equilibrium fluid does not spill out of $\mathcal{V}_{top}$.
\end{remark}

\subsection{Discussion of the model}\label{sec_model}

We now turn to a discussion of the model \eqref{ns_euler}. The interface between a fluid, solid, and vapor phase, known as a contact line in 3D or a contact point in 2D, has been the subject of serious research for two centuries, dating to the early work of Young in 1805 and continuing to this day. We refer to the exhaustive survey by de Gennes \cite{degennes} for a more thorough discussion.

Equilibrium configurations (given by \eqref{zeta0_energy}) were studied first by Young \cite{young}, Laplace \cite{laplace}, and Gauss \cite{gauss}. They showed that the equilibrium contact angle $\theta_{eq}$, the angle formed between the solid wall and the fluid (see Figure \ref{fig:angle}), is determined through a variational principle. This leads to the well-known equation of Young:
\begin{equation}\label{young_relat}
\cos(\theta_{eq}) = \frac{\gamma_{sf} - \gamma_{sv}}{\sigma} = -\frac{\jump{\gamma}}{\sigma},
\end{equation}
which is related to the assumption \eqref{gamma_assume}. This relation is enforced in \eqref{zeta0_eqn} in the boundary conditions.

\begin{figure}
\caption{Equilibrium angle}
\label{fig:angle}
\end{figure}

The behavior of a contact line or point in a dynamical setting is a much more complicated issue. The basic problem stems from the incompatibility of the standard no-slip boundary conditions for viscous fluids ($u=0$ at the fluid-solid interface) and the free boundary kinematics ($\partial_t \zeta = u\cdot \nu\sqrt{1 + \abs{\nabla \zeta}^2}$). Combining the two at the contact point shows that the fluid cannot move along the solid, which is clearly in gross contradiction with the actual behavior of fluids in solid vessels. This suggests that the no-slip condition is inappropriate for modeling fluid-solid-vapor junctions and that slip must be introduced into the model.

Much work has gone into the study of contact line motion: we refer to the surveys of Dussan \cite{drussan} and Blake \cite{blake} for a thorough discussion of theoretical and experimental studies. The general picture that has emerged is that the contact line moves as a result of the deviation of the dynamic contact angle $\theta_{dyn}$ from the equilibrium angle $\theta_{eq}$ determined by \eqref{young_relat}. More precisely, these quantities are related via
\begin{equation}\label{cl_motion}
V_{cl} = F( \cos(\theta_{eq}) - \cos(\theta_{dyn}) ),
\end{equation}
where $V_{cl}$ is the contact line normal velocity and $F$ is some increasing function such that $F(0)=0$. These assumptions on $F$ show that the slip of the contact line acts to restore the equilibrium angle.

Equations of the form \eqref{cl_motion} have been derived in a number of ways. Blake-Haynes \cite{blake_haynes} combined thermodynamic and molecular kinetics arguments to arrive at $F(z) = A \sinh(Bz)$ for material constants $A,B>0$. Cox \cite{cox} used matched asymptotic analysis and hydrodynamic arguments to derive \eqref{cl_motion} with a different $F$ but of the same general form. Ren-E \cite{ren_e} performed molecular dynamics simulations to probe the physics near the contact line and also found an equation of the form \eqref{cl_motion}.
Ren-E \cite{ren_e_deriv} also derived \eqref{cl_motion} from constitutive equations and thermodynamic principles.

The above studies confirm that an appropriate model for contact line dynamics involves an equation of the form \eqref{cl_motion} and slip. The molecular dynamics simulations of Ren-E \cite{ren_e} found that the slip of the fluid along the solid obeys the well-known Navier-slip condition
\begin{equation}\label{navier_slip}
u \cdot \nu =0 \text{ and } S(P,u) \nu \cdot \tau = \beta u \cdot \tau
\end{equation}
for some parameter $\beta >0$. The former is a no-penetration condition, which prevents the fluid from flowing into the solid, while the latter states that the fluid experiences a tangential stress due to the slipping of the fluid along the solid wall. Note that the limiting case $\beta=\infty$ corresponds to the no-slip condition.

The equations \eqref{ns_euler} studied in this article combine the Navier-slip boundary conditions \eqref{navier_slip} with a general form of the contact point equation \eqref{cl_motion}. Indeed, the last equation in \eqref{ns_euler} may be rewritten as
\begin{equation}
V_{cl} = \partial_t \zeta = \mathscr{V}\left( \jump{\gamma} \mp \sigma \frac{\partial_1 \zeta}{\sqrt{1+\abs{\partial_1 \zeta}^2}}(\pm \ell,t) \right) = \mathscr{V}\left( \sigma (\cos(\theta_{eq}) - \cos(\theta_{dyn})) \right),
\end{equation}
which is clearly of the form \eqref{cl_motion}. Our assumptions on $\mathscr{V}$ then correspond to the assumptions on $F$ mentioned above. Note that in principle we only need $\mathscr{V} : [-2\sigma,2\sigma] \to \mathbb{R}$ to be increasing with $\mathscr{V}(0)=0$, but we can easily make a $C^2$ extension so that $\mathscr{V} : \mathbb{R} \to \mathbb{R}$ is a $C^2$ diffeomorphism. Thus, it is no loss of generality to assume that $\mathscr{V}$ is defined on all of $\mathbb{R}$. We do this because it makes it simpler to work with $\mathscr{W} = \mathscr{V}^{-1}$.

We use the Stokes equations for the bulk fluid mechanics in \eqref{ns_euler} in order to somewhat simplify the problem and focus on the contact point dynamics. Thus in some sense the problem \eqref{ns_euler} is quasi-stationary: the problem is time-dependent and the free boundary moves, but the fluid dynamics are stationary at each time. In future work, based on the techniques we develop in this paper, we will address the full Navier-Stokes equations coupled to the boundary conditions in \eqref{ns_euler}.

Much work has been devoted to studying contact lines and points in simplified thin-film models; we will not attempt to enumerate these results here and instead refer to the survey by Bertozzi \cite{bertozzi}. By contrast, there are relatively few results in the literature related to models in which the full fluid mechanics are considered, and to the best of our knowledge none that allow for both a dynamic contact point and a dynamic contact angle. Schweizer \cite{schweizer} studied a 2D Navier-Stokes problem with a fixed contact angle of $\pi/2$. Bodea studied a similar problem with a fixed $\pi/2$ contact angle in 3D channels with periodicity in one direction. Kn\"upfer-Masmoudi studied the dynamics of a 2D drop with fixed contact angle when the fluid is assumed to be governed by Darcy's law.
Related analysis of the fully stationary Navier-Stokes system, with a free but unmoving boundary, was carried out in 2D by Solonnikov \cite{solonnikov} with contact angle fixed at $\pi$, by Jin \cite{jin} in 3D with angle $\pi/2$, and by Socolowsky \cite{socolowsky} for 2D coating problems with fixed contact angles.

Given that our choice of boundary conditions can be derived in all the different ways mentioned above, we believe that the system \eqref{ns_euler} is a good general model for the dynamics of a viscous fluid with dynamic contact points and contact angles. The purpose of our work is to show that the problem \eqref{ns_euler} is globally well-posed for data sufficiently close to the equilibrium state and that these solutions return to equilibrium exponentially fast. In other words, we will prove that the equilibrium capillary surfaces are asymptotically stable in this model. This provides further evidence for the validity of the model.

Let us now turn to a simple motivation for why we should expect solutions to \eqref{ns_euler} to exist globally and decay to equilibrium. Solutions to the system \eqref{ns_euler} satisfy the following energy-dissipation equality:
\begin{equation}\label{fund_en_evolve}
\frac{d}{dt} \mathscr{I}( \zeta(\cdot,t)) + \int_{\Omega(t)} \frac{\mu}{2} \abs{\mathbb{D} u(\cdot,t)}^2 + \int_{\Sigma_s(t)} \frac{\beta}{2} \abs{u(\cdot,t) \cdot \tau}^2 + \sum_{a=\pm 1} \partial_t \zeta(a \ell,t) \mathscr{W}(\partial_t \zeta(a \ell,t))= 0,
\end{equation}
where $\mathscr{I}$ is the energy functional given by \eqref{zeta0_energy} and $\mathscr{W} = \mathscr{V}^{-1}$. Indeed, this can be shown in the usual way by taking the dot product of the first equation in \eqref{ns_euler} with $u$, integrating by parts over $\Omega(t)$, and employing all of the other equations in \eqref{ns_euler}. Rather than prove the result we refer to Theorem \ref{linear_energy}, which provides the proof of a similar result.

Since $\mathscr{W}$ is increasing and vanishes precisely at the origin, we have that $z \mathscr{W}(z) >0$ for $z \neq 0$. This means that the last term on the left of \eqref{fund_en_evolve} provides positive definite control of $\partial_t \zeta$ at the fluid-solid-vapor contact points. Thus the latter three terms on the left of \eqref{fund_en_evolve} serve as a positive-definite dissipation functional, and so we can use \eqref{fund_en_evolve} as the basis of a nonlinear energy method. We also deduce from \eqref{fund_en_evolve} that $\mathscr{I}(\zeta(\cdot,t))$ is non-increasing, and since $\zeta_0$ is the unique minimizer of $\mathscr{I}$ this suggests that solutions return to equilibrium as $t \to \infty$. However, the exponential rate of decay to equilibrium is not obvious from \eqref{fund_en_evolve} and requires deeper analysis.

\subsection{Reformulation of \eqref{ns_euler}}

Let $\zeta_0 \in C^\infty([-\ell,\ell])$ be the equilibrium capillary surface given by Theorem \ref{zeta0_wp}. We then define the equilibrium domain $\Omega \subseteq \mathbb{R}^{2}$ by
\begin{equation}\label{omega_def}
\Omega := \mathcal{V}_{btm} \cup \{ x \in \mathbb{R}^{2} \;\vert\; -\ell < x_1 < \ell \text{ and } 0 < x_2 < \zeta_0(x_1) \}.
\end{equation}
We write $\partial \Omega = \Sigma \sqcup \Sigma_s$, where
\begin{equation}\label{sigma_def}
\Sigma := \{ x \in \mathbb{R}^{2} \;\vert\; -\ell < x_1 < \ell \text{ and } x_2 = \zeta_0(x_1) \} \text{ and } \Sigma_s := \partial \Omega \backslash \Sigma.
\end{equation}
Here $\Sigma$ is the equilibrium free surface, and $\Sigma_s$ denotes the ``sides'' of the equilibrium fluid configuration. We will write $x \in \Omega$ as the spatial coordinate in the equilibrium domain.

Let us now assume that the free surface of $\Omega(t)$ is given as a perturbation of $\zeta_0$, i.e. we assume that
\begin{equation}
\zeta(x_1,t) = \zeta_0(x_1) + \eta(x_1,t) \text{ for } \eta:(-\ell,\ell) \times \mathbb{R}^{+} \to \mathbb{R}.
\end{equation}
We define
\begin{equation}\label{extension_def}
\bar{\eta}(x,t) = \mathcal{P} E \eta(x_1,x_2 - \zeta_0(x_1),t),
\end{equation}
where $E:H^s(-\ell,\ell) \to H^s(\mathbb{R})$ is a bounded extension operator for all $0 \le s \le 3$ and $\mathcal{P}$ is the lower Poisson extension given by
\begin{equation}\label{poisson_def}
\mathcal{P}f(x_1,x_2) = \int_{\mathbb{R}} \hat{f}(\xi) e^{2\pi \abs{\xi}x_2} e^{2\pi i x_1 \xi} d\xi.
\end{equation}
Let $\phi \in C^\infty(\mathbb{R})$ be such that $\phi(z) =0$ for $z \le \frac{1}{4} \min \zeta_0$ and $\phi(z) =z$ for $z \ge \frac{1}{2} \min \zeta_0$. The extension $\bar{\eta}$ allows us to map the equilibrium domain to the moving domain $\Omega(t)$ via the mapping
\begin{equation}\label{mapping_def}
\Omega \ni x \mapsto \left( x_1,x_2 + \frac{\phi(x_2)}{\zeta_0(x_1)} \bar{\eta}(x,t) \right) := \Phi(x,t) = (y_1,y_2) \in \Omega(t).
\end{equation}
Note that
\begin{equation}
\begin{split}
\Phi(x_1,\zeta_0(x_1),t) &= (x_1, \zeta_0(x_1) + \eta(x_1,t)) = (x_1,\zeta(x_1,t)) \Rightarrow \Phi(\Sigma,t) = \Sigma(t) \\
\Phi(\mathcal{V}_{btm},t) &= \mathcal{V}_{btm} \\
\Phi(\pm \ell, x_2,t) &= (\pm \ell, x_2+ \phi(x_2)\bar{\eta}(\pm\ell ,x_2,t)/\zeta_0(\pm \ell)) \\
&\Rightarrow \Phi(\Sigma_s \cap \{x_1 = \pm \ell, x_2 \ge 0\},t) = \Sigma_s(t) \cap \{y_1 = \pm \ell, y_2 \ge 0\}.
\end{split}
\end{equation}
If $\eta$ is sufficiently small (in an appropriate Sobolev space), then the mapping $\Phi$ is a $C^1$ diffeomorphism of $\Omega$ onto $\Omega(t)$ that maps the components of $\partial \Omega$ to the corresponding components of $\partial \Omega(t)$. We have
\begin{equation}\label{A_def}
\nabla \Phi = \begin{pmatrix} 1 & 0 \\ A & J \end{pmatrix} \text{ and } \mathcal{A} := (\nabla \Phi^{-1})^T = \begin{pmatrix} 1 & -A K \\ 0 & K \end{pmatrix}
\end{equation}
for
\begin{equation}\label{AJK_def}
W = \frac{\phi}{\zeta_0}, \quad A = W \partial_1 \bar{\eta} - \frac{W}{\zeta_0} \partial_1 \zeta_0 \bar{\eta}, \quad J = 1 + W \partial_2 \bar{\eta} + \frac{\phi' \bar{\eta}}{\zeta_0}, \quad K = J^{-1}.
\end{equation}
Here $J = \det{\nabla \Phi}$ is the Jacobian of the coordinate transformation. We will assume that in fact $\Phi$ is a diffeomorphism. This allows us to transform the problem \eqref{ns_euler} to one on the fixed spatial domain $\Omega$ for $t \ge 0$.
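Before changing coordinates, we note that the second formula in \eqref{A_def} is a direct consequence of the first: by the chain rule, $\nabla \Phi^{-1}$ agrees (at corresponding points) with the inverse matrix $(\nabla \Phi)^{-1}$, and inverting the lower-triangular matrix $\nabla \Phi$ gives
\[
(\nabla \Phi)^{-1} = \frac{1}{J}\begin{pmatrix} J & 0 \\ -A & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ -AK & K \end{pmatrix},
\qquad \text{so that} \qquad
\mathcal{A} = \left((\nabla \Phi)^{-1}\right)^T = \begin{pmatrix} 1 & -AK \\ 0 & K \end{pmatrix}.
\]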
In the new coordinates, the PDE \eqref{ns_euler} becomes
\begin{equation}\label{geometric_full}
\begin{cases}
\diverge_{\mathcal{A}} S_{\mathcal{A}}(P,u) = -\mu \Delta_{\mathcal{A}} u + \nabla_{\mathcal{A}} P =0 & \text{in } \Omega \\
\diverge_{\mathcal{A}} u = 0 & \text{in }\Omega \\
S_{\mathcal{A}}(P,u) \mathcal{N} = g\zeta \mathcal{N} -\sigma \mathcal{H}(\zeta) \mathcal{N} & \text{on } \Sigma \\
(S_{\mathcal{A}}(P,u)\nu - \beta u)\cdot \tau =0 &\text{on }\Sigma_s \\
u\cdot \nu =0 &\text{on }\Sigma_s \\
\partial_t \zeta = u \cdot \mathcal{N} & \text{on } \Sigma \\
\mathscr{W}(\partial_t \zeta(\pm \ell,t)) = \jump{\gamma} \mp \sigma \frac{\partial_1 \zeta}{\sqrt{1+\abs{\partial_1 \zeta}^2}}(\pm \ell,t) \\
u(x,0) = u_0(x), \quad \zeta(x_1,0) = \zeta_0(x_1) + \eta_0(x_1).
\end{cases}
\end{equation}
Here we have written the differential operators $\nabla_{\mathcal{A}}$, $\diverge_{\mathcal{A}}$, and $\Delta_{\mathcal{A}}$ with their actions given by $(\nabla_{\mathcal{A}} f)_i := \mathcal{A}_{ij} \partial_j f$, $\diverge_{\mathcal{A}} X := \mathcal{A}_{ij}\partial_j X_i$, and $\Delta_{\mathcal{A}} f = \diverge_{\mathcal{A}} \nabla_{\mathcal{A}} f$ for appropriate $f$ and $X$; for $u\cdot \nabla_{\mathcal{A}} u$ we mean $(u \cdot \nabla_{\mathcal{A}} u)_i := u_j \mathcal{A}_{jk} \partial_k u_i$. We have also written $\mathcal{N} := -\partial_1 \zeta e_1 + e_2$ for the non-unit normal to $\Sigma(t)$, and we write $S_{\mathcal{A}}(P,u) = P I - \mu \mathbb{D}_{\mathcal{A}} u$ for the stress tensor, where $I$ is the $2 \times 2$ identity matrix and $(\mathbb{D}_{\mathcal{A}} u)_{ij} = \mathcal{A}_{ik} \partial_k u_j + \mathcal{A}_{jk} \partial_k u_i$ is the symmetric $\mathcal{A}$-gradient. Note that if we extend $\diverge_{\mathcal{A}}$ to act on symmetric tensors in the natural way, then $\diverge_{\mathcal{A}} S_{\mathcal{A}}(P,u) = \nabla_{\mathcal{A}} P - \mu \Delta_{\mathcal{A}} u$ for vector fields satisfying $\diverge_{\mathcal{A}} u=0$.

Recall that $\mathcal{A}$ is determined by $\eta$ through the relation \eqref{A_def}. This means that all of the differential operators in \eqref{geometric_full} are connected to $\eta$, and hence to the geometry of the free surface. This geometric structure is essential to our analysis, as it allows us to control high-order derivatives that would otherwise be out of reach.

\subsection{Perturbation}

We want to consider solutions as perturbations around the equilibrium state $(0,P_0,\zeta_0)$ given by Theorem \ref{zeta0_wp}, i.e. we assume that $u = 0 + u$, $P = P_0 + p$, $\zeta = \zeta_0 + \eta$ for new unknowns $(u,p,\eta)$. We will now reformulate the equations \eqref{geometric_full} in terms of the perturbed unknowns. To begin, we use a Taylor expansion in $z$ to write
\begin{equation}
\frac{y+z}{(1+\abs{y+z}^2)^{1/2}} = \frac{y}{(1+\abs{y}^2)^{1/2}} + \frac{z}{(1+\abs{y}^2)^{3/2}} + \mathcal{R}(y,z),
\end{equation}
where $\mathcal{R} \in C^\infty(\mathbb{R}^{2})$ is given by
\begin{equation}\label{R_def}
\mathcal{R}(y,z) = \int_0^z 3 \frac{(s-z)(s+y)}{(1+ \abs{y+s}^2)^{5/2}} ds.
\end{equation}
Then
\begin{equation}
\frac{\partial_1 \zeta}{(1+\abs{\partial_1 \zeta}^2)^{1/2}} = \frac{\partial_1 \zeta_0}{(1+\abs{\partial_1 \zeta_0}^2)^{1/2}} + \frac{\partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta),
\end{equation}
which allows us to use \eqref{zeta0_eqn} to compute
\begin{multline}\label{pert_comp_1}
g \zeta - \sigma \mathcal{H}(\zeta) = \left( g \zeta_0 - \sigma \mathcal{H}(\zeta_0)\right) + g \eta - \sigma \partial_1 \left(\frac{\partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}}\right) -\sigma \partial_1 \left( \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \right) \\
= P_0 + g \eta - \sigma \partial_1 \left(\frac{\partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}}\right) -\sigma \partial_1 \left( \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \right)
\end{multline}
and
\begin{multline}
\jump{\gamma} \mp \sigma \frac{\partial_1 \zeta}{\sqrt{1+\abs{\partial_1 \zeta}^2}}(\pm \ell,t) = \jump{\gamma} \mp \frac{\sigma \partial_1 \zeta_0}{(1+\abs{\partial_1 \zeta_0}^2)^{1/2}}(\pm \ell) \mp \frac{\sigma \partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}}(\pm \ell,t) \\
\mp \sigma \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)(\pm \ell,t) = \mp \frac{\sigma \partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}}(\pm \ell,t) \mp \sigma \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)(\pm \ell,t).
\end{multline}
On the other hand,
\begin{multline}\label{pert_comp_2}
\diverge_{\mathcal{A}} S_{\mathcal{A}}(P,u) = \diverge_{\mathcal{A}} S_{\mathcal{A}}(p,u) \text{ in } \Omega, \; S_{\mathcal{A}}(P,u) \mathcal{N} = S_{\mathcal{A}}(p,u) \mathcal{N} + P_0 \mathcal{N} \text{ on }\Sigma, \\
\text{and } S_{\mathcal{A}}(P,u)\nu \cdot \tau = S_{\mathcal{A}}(p,u)\nu \cdot \tau \text{ on }\Sigma_s.
\end{multline}
Next we expand the inverse of the velocity response function, $\mathscr{W} \in C^2(\mathbb{R})$. Since $\mathscr{W}$ is increasing, we may set
\begin{equation}\label{kappa_def}
\kappa = \mathscr{W}'(0) >0.
\end{equation}
We then define the perturbation $\hat{\mathscr{W}} \in C^2(\mathbb{R})$ as
\begin{equation}\label{V_pert}
\hat{\mathscr{W}}(z) = \frac{1}{\kappa} \mathscr{W}(z) - z.
\end{equation}
We now plug \eqref{pert_comp_1}--\eqref{pert_comp_2} and \eqref{V_pert} into \eqref{geometric_full} to see that $(u,p,\eta)$ solve
\begin{equation}\label{geometric}
\begin{cases}
\diverge_{\mathcal{A}} S_{\mathcal{A}}(p,u) = -\mu \Delta_{\mathcal{A}} u + \nabla_{\mathcal{A}} p =0 & \text{in } \Omega \\
\diverge_{\mathcal{A}} u = 0 & \text{in }\Omega \\
S_{\mathcal{A}}(p,u) \mathcal{N} = g\eta \mathcal{N} - \sigma \partial_1 \left(\frac{\partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \right)\mathcal{N} & \text{on } \Sigma \\
(S_{\mathcal{A}}(p,u)\nu - \beta u)\cdot \tau =0 &\text{on }\Sigma_s \\
u\cdot \nu =0 &\text{on }\Sigma_s \\
\partial_t \eta = u \cdot \mathcal{N} & \text{on } \Sigma \\
\kappa \partial_t \eta(\pm \ell,t) + \kappa \hat{\mathscr{W}}(\partial_t \eta(\pm \ell,t)) = \mp \sigma \left( \frac{\partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)\right)(\pm \ell,t) \\
u(x,0) = u_0(x), \quad \eta(x_1,0) = \eta_0(x_1).
\end{cases}
\end{equation}
Here we still have that $\mathcal{N}$, $\mathcal{A}$, etc.\ are determined in terms of $\zeta = \zeta_0 + \eta$. Throughout the paper we will write
\begin{equation}\label{n_0_def}
\mathcal{N}_0 = -\partial_1 \zeta_0 e_1 + e_2
\end{equation}
for the non-unit normal associated to the equilibrium surface. Then
\begin{equation}
\mathcal{N} = \mathcal{N}_0 - \partial_1 \eta e_1.
\end{equation}

\section{Main results and discussion}

\subsection{Main results}

In order to state our main results we must first define a number of energy and dissipation functionals. We define the basic or ``parallel'' (since temporal derivatives are the only ones parallel to the boundary) energy as
\begin{equation}\label{ed_def_1}
\mathcal{E}b = \sum_{j=0}^2 \ns{\partial_t^j \eta}_{H^1((-\ell,\ell))},
\end{equation}
and we define the basic dissipation as
\begin{equation}\label{ed_def_2}
\mathcal{D}bb = \sum_{j=0}^2 \ns{\partial_t^j u}_{H^1(\Omega)} +\ns{\partial_t^j u}_{H^0(\Sigma_s)} + \bs{\partial_t^j u \cdot \mathcal{N}},
\end{equation}
where we have written
\begin{equation}
\bs{f} = [f,f]_\ell.
\end{equation}
We also define the improved basic dissipation as
\begin{equation}\label{ed_def_3}
\mathcal{D}b = \mathcal{D}bb + \sum_{j=0}^2 \ns{\partial_t^j p}_{H^0(\Omega)} + \ns{\partial_t^j \eta}_{H^{3/2}((-\ell,\ell))}.
\end{equation}
The basic energy and dissipation arise through a version of the energy-dissipation equation \eqref{fund_en_evolve}. However, once we control these terms we are then able to control much more.
This extra control is encoded in the full energy and dissipation, which are defined as follows:
\begin{equation}\label{ed_def_4}
\mathcal{E} = \mathcal{E}b + \ns{\eta}_{W^{5/2}_\delta((-\ell,\ell))} + \ns{\partial_t \eta}_{H^{3/2}((-\ell,\ell))} + \ns{u}_{W^{2}_\delta(\Omega)} + \ns{\partial_t u}_{H^1(\Omega)} + \ns{p}_{\mathring{W}^{1}_\delta(\Omega)} + \ns{\partial_t p}_{H^0(\Omega)},
\end{equation}
and
\begin{equation}\label{ed_def_5}
\mathcal{D} = \mathcal{D}b + \ns{\eta}_{W^{5/2}_\delta((-\ell,\ell))} + \ns{\partial_t \eta}_{W^{5/2}_\delta((-\ell,\ell))} + \ns{\partial_t^3 \eta}_{W^{1/2}_\delta((-\ell,\ell))} + \ns{u}_{W^{2}_\delta(\Omega)} + \ns{\partial_t u}_{W^{2}_\delta(\Omega)} + \ns{p}_{\mathring{W}^{1}_\delta(\Omega)} + \ns{\partial_t p}_{\mathring{W}^{1}_\delta(\Omega)}.
\end{equation}
In \eqref{ed_def_4} and \eqref{ed_def_5} the spaces $W^{r}_\delta$ are weighted Sobolev spaces, as defined in Appendix \ref{app_weight}, for a fixed weight parameter $\delta \in (0,1)$.

We now state our main results. In these we make reference to the corner angles of $\Omega$, which we call $\omega \in (0,\pi)$. These are related to the equilibrium contact angle $\theta_{eq} \in (0,\pi)$, given by \eqref{young_relat}, via
\begin{equation}
\theta_{eq} + \omega = \pi,
\end{equation}
i.e. $\omega$ is supplementary to $\theta_{eq}$. Thus we have that $\omega$ may be computed via
\begin{equation}
\cos(\omega) = \frac{\zeta_0'(-\ell)}{\sqrt{1+\abs{\zeta_0'(-\ell)}^2}}.
\end{equation}
We now state our a priori estimates for solutions to \eqref{geometric}.

\begin{thm}\label{main_apriori}
Let $\omega \in (0,\pi)$ be the angle formed by $\zeta_0$ at the corners of $\Omega$, $\delta_\omega \in [0,1)$ be given by \eqref{crit_wt}, and $\delta \in (\delta_\omega,1)$. There exists a universal constant $\gamma >0$ such that if a solution to \eqref{geometric} exists on the temporal interval $[0,T]$ and obeys the estimate
\begin{equation}
\sup_{0 \le t \le T} \mathcal{E}(t) + \int_0^T \mathcal{D}(t) dt \le \gamma,
\end{equation}
then there exist universal constants $\lambda,C >0$ such that
\begin{multline}
\sup_{0\le t \le T} \left( \mathcal{E}(t) + e^{\lambda t} \left[ \mathcal{E}b(t) + \ns{u(t)}_{H^1(\Omega)} + \ns{u(t)\cdot \tau}_{H^0(\Sigma_s)} + \bs{u\cdot \mathcal{N}(t)} + \ns{p(t)}_{H^0(\Omega)} \right] \right) \\
+ \int_0^T \mathcal{D}(t) dt \le C \mathcal{E}(0).
\end{multline}
\end{thm}
\begin{proof}
The result is proved later in Theorems \ref{ap_decay} and \ref{ap_bound}.
\end{proof}

The a priori estimates of Theorem \ref{main_apriori} couple with a local existence theory, which we will develop in a companion paper, to prove the existence of global decaying solutions.

\begin{thm}\label{main_gwp}
Let $\omega \in (0,\pi)$ be the angle formed by $\zeta_0$ at the corners of $\Omega$, $\delta_\omega \in [0,1)$ be given by \eqref{crit_wt}, and $\delta \in (\delta_\omega,1)$.
There exists a universal smallness parameter $\gamma >0$ such that if
\begin{equation}
\mathcal{E}(0) \le \gamma,
\end{equation}
then there exists a unique global solution triple $(u,p,\eta)$ to \eqref{geometric} such that
\begin{multline}
\sup_{t \ge 0} \left( \mathcal{E}(t) + e^{\lambda t} \left[ \mathcal{E}b(t) + \ns{u(t)}_{H^1(\Omega)} + \ns{u(t)\cdot \tau}_{H^0(\Sigma_s)} + \bs{u\cdot \mathcal{N}(t)} + \ns{p(t)}_{H^0(\Omega)} \right] \right) \\
+ \int_0^\infty \mathcal{D}(t) dt \le C \mathcal{E}(0),
\end{multline}
where $\lambda,C >0$ are universal constants.
\end{thm}
\begin{proof}
The result is proved later in Theorem \ref{gwp}.
\end{proof}

In Theorems \ref{main_apriori} and \ref{main_gwp} we have stated the estimates in terms of a mixture of standard and weighted Sobolev norms. The weighted norms do provide control of standard norms. For instance, we find in Appendix \ref{app_weight} that
\begin{equation}
W^2_\delta \hookrightarrow H^s \text{ and } W^1_\delta \hookrightarrow H^{s-1},
\end{equation}
where our choice of $\delta$ in the theorems allows for the choice of any
\begin{equation}
1 < s < \min\{\frac{\pi}{\omega},2\}.
\end{equation}
We also have that
\begin{equation}
W^{5/2}_\delta((-\ell,\ell)) \hookrightarrow H^{s+1/2}((-\ell,\ell)),
\end{equation}
so that we can control $\ns{\eta}_{H^{s+1/2}} + \ns{\partial_t \eta}_{H^{s+1/2}}$. Theorem \ref{main_gwp} and the above embeddings tell us that
\begin{equation}
\sup_{t \ge 0} \left[ \ns{u(t)}_{H^s} + \ns{p(t)}_{H^{s-1}} + \ns{\eta(t)}_{H^{s+1/2}} + \ns{\partial_t \eta(t)}_{H^{3/2}} \right] \le C \mathcal{E}(0)
\end{equation}
and that
\begin{equation}
\sup_{t \ge 0} e^{\lambda t}\left[ \ns{u(t)}_{H^1} + \ns{p(t)}_{H^0} + \ns{\eta(t)}_{H^1} + \ns{\partial_t \eta(t)}_{H^1} \right] \le C \mathcal{E}(0).
\end{equation}
We may interpolate between these two estimates to deduce the following.

\begin{cor}
Under the assumptions of Theorem \ref{main_gwp}, the following hold. For any $1 < r < s$ there exists $\lambda_r >0$ such that
\begin{equation}
\sup_{t \ge 0} e^{\lambda_r t}\left[ \ns{u(t)}_{H^r} + \ns{p(t)}_{H^0} \right] \le C \mathcal{E}(0).
\end{equation}
For any $1 < r < s+1/2$ there exists $\lambda_r >0$ such that
\begin{equation}
\sup_{t \ge 0} e^{\lambda_r t}\ns{\eta(t)}_{H^r} \le C \mathcal{E}(0).
\end{equation}
For any $1 < r < 3/2$ there exists $\lambda_r >0$ such that
\begin{equation}
\sup_{t \ge 0} e^{\lambda_r t} \ns{\partial_t \eta(t)}_{H^r} \le C \mathcal{E}(0).
\end{equation}
\end{cor}

\subsection{Sketch of proof and summary of methods}

We now provide a summary of the principal difficulties encountered in the analysis of \eqref{geometric} and our techniques for overcoming them. We employ a nonlinear energy method that combines energy estimates, enhanced dissipation estimates, and elliptic estimates in weighted Sobolev spaces into a closed scheme of a priori estimates.

\textbf{Energy estimates:} The starting point for our analysis is the basic energy-dissipation equation satisfied by solutions to \eqref{geometric}, which comes in essentially the form of a perturbation of \eqref{fund_en_evolve}. The control provided by the terms in this basic energy-dissipation equation is insufficient for closing a scheme of a priori estimates, so we must move to a higher-regularity context.
In order to employ the energy-dissipation equality to this end, we can only apply differential operators to \eqref{geometric} that are compatible with the boundary conditions. This leads us to apply temporal derivatives to \eqref{geometric}, since they are indeed compatible with the boundary conditions. Upon doing so and summing we arrive at an equation of the form \begin{equation}\label{sum_1} \frac{d}{dt} \mathcal{E}b + \mathcal{D}bb = \mathscr{N}, \end{equation} where $\mathcal{E}b$ and $\mathcal{D}bb$ are given by \eqref{ed_def_1} and \eqref{ed_def_2}, and where $\mathscr{N}$ denotes nonlinear interaction terms. Of course, temporal derivatives do not commute with the other differential operators in \eqref{geometric}, so the nonlinear terms $\mathscr{N}$ become more complicated as the number of applied derivatives grows. The identity \eqref{sum_1}, which we derive more precisely in Theorem \ref{linear_energy}, forms the basis of our scheme of a priori estimates. The key term in $\mathcal{D}bb$ is the third term, which comes from the linearization of $\mathscr{W}$ in \eqref{fund_en_evolve}. It provides dissipative control of the contact line velocity $u\cdot \mathcal{N} = \partial_t \eta$ and its temporal derivatives. This is essential in our analysis due to the second-to-last equation in \eqref{geometric}, which provides a Neumann-type boundary condition for $\partial_t^j \eta$ that is compatible with the linearized mean curvature operator appearing in the third equation. As we describe later, we crucially exploit this in order to gain higher-regularity control of $\partial_t^j \eta$ in terms of the dissipation. The goal of a nonlinear energy method is to control the nonlinear term $\mathscr{N}$ in such a way that it can be absorbed onto the left side. Roughly speaking, we aim to show that \begin{equation}\label{sum_3} \abs{\mathscr{N}} \le C \mathcal{E}b^\theta \mathcal{D}bb \text{ for some } \theta >0, \end{equation} which, when coupled with a bound of the form $\mathcal{E}b \le \delta$ for $\delta>0$ sufficiently small, allows for the nonlinear term to be absorbed onto the left side of \eqref{sum_1}, leading to an inequality of the form \begin{equation}\label{sum_4} \frac{d}{dt} \mathcal{E}b + \frac{1}{2} \mathcal{D}bb \le 0. \end{equation} This requires two crucial ingredients. First, the nonlinear terms must not exceed the level of regularity controlled by the energy or dissipation. Second, the nonlinear terms must obey structured estimates of the form \eqref{sum_3}; for instance, an estimate of the form $\abs{\mathscr{N}} \le C \mathcal{D}bb^\theta \mathcal{E}b$ cannot be used to derive \eqref{sum_4}. For the problem \eqref{geometric} neither of these ingredients is available for the energy and dissipation coming directly from the equations. This dictates that we seek to augment the control provided by $\mathcal{E}b$ and $\mathcal{D}bb$ by appealing to auxiliary estimates. Even with these auxiliary estimates in hand, delicate care is still required to show that these ingredients are available. Our choice of the coordinate system and of the form of the differential operators in \eqref{geometric} is made for this reason, as these choices have already proven successful in this regard in the analysis of other free boundary problems \cite{gt_hor,gt_inf,jtw}. The regularity demands needed to prove an estimate of the form \eqref{sum_3} dictate that we move beyond the estimates available strictly through energy-type estimates.
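We pause to record the elementary computation behind the absorption step described above (a sketch, granting \eqref{sum_1} and \eqref{sum_3}): if $\mathcal{E}b(t) \le \delta$ on $[0,T]$ with $C\delta^\theta \le 1/2$, then \begin{equation*} \frac{d}{dt} \mathcal{E}b + \mathcal{D}bb = \mathscr{N} \le \abs{\mathscr{N}} \le C \mathcal{E}b^\theta \mathcal{D}bb \le C\delta^\theta \mathcal{D}bb \le \frac{1}{2}\mathcal{D}bb, \end{equation*} which rearranges to \eqref{sum_4}.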
In particular, we could not prove \eqref{sum_3} with control of only first derivatives of $u$. To control higher-order derivatives we need elliptic estimates, but the ones we would use in smooth domains are unavailable due to the presence of the corners. Consequently, we are forced to employ weighted Sobolev estimates, which work well in domains with corners. However, the estimates for the boundary terms needed in this theory are not available from the basic energy or dissipation, so we must first prove some enhanced dissipation estimates in order to form a bridge between the basic energy-dissipation estimate and the weighted elliptic theory. \textbf{Enhanced dissipation estimates:} Notice that the basic dissipation $\mathcal{D}bb$ provides no control of either $\eta$ or $p$. However, by appealing to the structure of the PDEs in \eqref{geometric} we can achieve such control and thereby derive enhanced dissipation estimates. Our control of the pressure is based on the technique of viewing it as a Lagrange multiplier associated to the divergence-free condition. Here we adapt this argument to the context of a function space appropriate for weak solutions to \eqref{geometric}. Interestingly, the estimate for the pressure decouples from $\eta$, and so the control of $u$ provided by $\mathcal{D}bb$ is sufficient to control $\ns{\partial_t^j p}_{H^0(\Omega)}$ for $j=0,1,2$. We prove this in Theorem \ref{pressure_est}. With the above control of $u$ and $p$ we formally expect from the third equation in \eqref{geometric} that $\partial_1^2 \eta \in H^{-1/2}$ and hence that $\eta \in H^{3/2}$. It turns out that this formal derivative counting can be made rigorous by using the system \eqref{geometric} to derive a weak elliptic problem for $\eta$. Here the control of the contact line velocity in the basic dissipation is essential, as it provides control of a Neumann boundary condition for $\eta$. Thus in Theorem \ref{xi_est} we are able to use elliptic estimates to gain control of $\ns{\partial_t^j \eta}_{H^{3/2}((-\ell,\ell))}$ for $j=0,1,2$. Interestingly, these estimates decouple from $p$ and are determined only by the $u$ terms in $\mathcal{D}b$. The decoupling of these estimates is ultimately related to the fact that \eqref{geometric} involves the Stokes problem for the fluid mechanics rather than the Navier-Stokes problem. Note, though, that we would be able to achieve the same decoupling with the stationary Navier-Stokes problem. The control provided by the above estimates allows us to move from the dissipation functional $\mathcal{D}bb$ to the functional $\mathcal{D}b$, as defined by \eqref{ed_def_3}. Roughly speaking, this means that \eqref{sum_1} also holds with $\mathcal{D}b$ replacing $\mathcal{D}bb$, at the price of introducing more terms to $\mathscr{N}$. \textbf{Elliptic estimates in weighted Sobolev spaces:} The enhanced dissipation estimates for $(u,p,\eta)$ are what we would expect for weak solutions to the $\eta$-coupled Stokes problem in \eqref{geometric}. Here $\eta$-coupling means that $\eta$ is treated as an unknown with an elliptic operator on the boundary rather than as a forcing term (compare \eqref{A_stokes_stress} vs. \eqref{A_stokes_beta}). The trade-off for this coupling to a new unknown is that we must consider an extra boundary condition on $\Sigma$; indeed, in \eqref{A_stokes_stress} there are three scalar boundary conditions on $\Sigma$, whereas in \eqref{A_stokes_beta} there are only two because there is one fewer unknown.
The above suggests that we might be able to invoke the higher-order elliptic regularity results of Agmon-Douglis-Nirenberg \cite{adn_2}. However, the equilibrium domain (and even the dynamic domain at any time) is piecewise $C^2$ but only globally Lipschitz because of the corners formed at the contact points. It is well-known that the usual elliptic regularity theory fails in domains with corners, as the corners allow for weak singularities, and the results of \cite{adn_2} are inapplicable. This means that while we can derive the weak-formulation estimates for $(u,p,\eta) \in H^1 \times H^0 \times H^{3/2}$, we cannot in general expect even $u \in H^2$. Moreover, the analysis of \cite{adn_2} only applies if we view $\eta$ as a forcing term, and is thus unsuitable for the $\eta$-coupled problem. The extension of elliptic regularity theory to domains with corner singularities, which originated in the work of \`{E}skin \cite{eskin}, Lopatinski\u{i} \cite{lopatinskii}, and Kondrat'ev \cite{kondra}, is achieved by replacing the standard Sobolev spaces with their weighted counterparts. The weights are tuned to cancel out the singular behavior near the corners. It is now well-understood that the relationship between the weights, the corner angles, and the choice of boundary conditions is determined by the eigenvalues of certain operator pencils: we refer to the books of Grisvard \cite{grisvard,grisvard_2} and Kozlov-Maz'ya-Rossman \cite{kmr_1,kmr_2,kmr_3} for exhaustive surveys. The particular choice of boundary conditions we use in \eqref{geometric} does not seem to be directly available in the literature, so we have to develop the theory here. This is the content of Section \ref{sec_elliptics}, which culminates in Theorem \ref{A_stokes_stress_solve}, the weighted $\eta$-coupled Stokes regularity theory for the triple $(u,p, \eta)$. Fortunately, the boundary conditions in \eqref{geometric} essentially amount to a compact perturbation of boundary conditions for which the operator pencil is well-understood thanks to the work of Orlt-S\"{a}ndig \cite{orlt_sandig}. Combining these ingredients with existing estimates in \cite{kmr_1,kmr_2,kmr_3} then allows us to prove the theorem. It is worth noting that the weighted Sobolev estimates imply estimates in the standard Sobolev spaces $W^{k,p}$, for $1 \le p < 2$, but the weighted estimates are actually sharper. Our analysis actually depends crucially on the weight terms, and so we cannot simplify our approach by working directly with the derived $W^{k,p}$ estimates. In order to invoke the weighted elliptic estimates of Theorem \ref{A_stokes_stress_solve} we must have control of the forcing terms. It is here that the velocity response function plays another essential role. The control it provides in $\mathcal{D}bb$ leads to the $\partial_t^j \eta \in H^{3/2}$ estimates in $\mathcal{D}b$, which in turn provide exactly the right level of regularity needed to employ the weighted elliptic estimates. More precisely, in order to get weighted estimates for $(\partial_t^j u, \partial_t^j p, \partial_t^j \eta)$ we must control $\partial_t^{j+1} \eta \in H^{3/2}$. This dictates the count $j=0,1,2$ that we use in $\mathcal{E}b$ and $\mathcal{D}bb$, as we need weighted estimates for $(u,p,\eta)$ and $(\partial_t u,\partial_t p,\partial_t \eta)$ in order to close our a priori estimates.
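Schematically, and only roughly, the bootstrapping chain just described reads \begin{equation*} \mathcal{D}bb \text{ controls } \partial_t^j(u\cdot\mathcal{N}) \text{ for } j \le 2 \;\Longrightarrow\; \partial_t^j \eta \in H^{3/2} \text{ for } j \le 2 \;\Longrightarrow\; \text{weighted estimates for } (\partial_t^j u, \partial_t^j p, \partial_t^j \eta) \text{ for } j \le 1. \end{equation*}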
This extra control then allows us to move from $\mathcal{D}b$ to $\mathcal{D}$, as defined by \eqref{ed_def_5}. \textbf{A priori estimates:} We combine the above ingredients into a scheme of a priori estimates for solutions to \eqref{geometric}. Our method is a nonlinear energy method based on the full dissipation $\mathcal{D}$ given by \eqref{ed_def_5} and the full energy $\mathcal{E}$ given by \eqref{ed_def_4}. In the above analysis we have not mentioned any improvements of $\mathcal{E}b$ from enhanced or elliptic estimates. These appear to be unavailable, and so we are only able to enhance the control of the energy functional by integrating the dissipation functional in time. This leads to the extra terms controlled in $\mathcal{E}$. Thus for \eqref{geometric} we see that the dissipation functional plays the principal role in our nonlinear energy method. This is in stark contrast with the nonlinear energy methods we have employed in \cite{gt_hor,gt_inf,jtw}, for which the energy and dissipation each play an essential role. In Section \ref{sec_nlin_en} we develop the estimates of the nonlinearities that appear in the energy-dissipation equality, i.e. the term $\mathscr{N}$ in \eqref{sum_1}. The goal is to prove a variant of \eqref{sum_3}, namely an estimate of the form $\abs{\mathscr{N}} \le C \mathcal{E}^\theta \mathcal{D}$. Here we make very frequent use of the weighted estimates, which is why we choose to work with them directly, and we employ a number of product estimates from Appendix \ref{app_prods}. The choice of operators and coordinates is also important here, as it keeps the derivative count of the nonlinear terms low enough. In Section \ref{sec_nlin_ell} we develop the estimates for the nonlinear terms appearing in the weighted elliptic estimates. We prove that (roughly speaking) \begin{equation} \mathcal{D} \lesssim \mathcal{D}b + \mathcal{E}^\theta \mathcal{D}. \end{equation} This structure is again essential in order to work with an absorbing argument. Finally, in Section \ref{sec_apriori} we complete the a priori estimates by combining the energy-dissipation equality and the nonlinear estimates. This results in an estimate of the form \begin{equation} \frac{d}{dt} \mathcal{E}b + \mathcal{D} \lesssim \mathcal{E}^\theta \mathcal{D}, \end{equation} which then shows that if we know a priori that the solution obeys the estimate $\mathcal{E} \le \delta$ for some sufficiently small universal constant $\delta$, then \begin{equation}\label{sum_5} \frac{d}{dt} \mathcal{E}b + \frac{1}{2} \mathcal{D} \le 0. \end{equation} From \eqref{sum_5} we then close our a priori estimates by integrating in time to see that \begin{equation} \sup_{0 \le t\le T} \mathcal{E}(t) + \int_0^T \mathcal{D}(t) dt \lesssim \mathcal{E}(0). \end{equation} The coercivity of the dissipation over the energy, $\mathcal{E}b \lesssim \mathcal{D}$, combines with \eqref{sum_5} to allow us to prove decay (again, roughly): \begin{equation} \sup_{0 \le t \le T} e^{\beta t} \mathcal{E}b(t) \lesssim \mathcal{E}(0) \end{equation} for some universal $\beta >0$. \subsection{Plan of paper} The paper is organized as follows. In Section \ref{sec_weakform} we discuss the weak formulation of \eqref{geometric}. In Section \ref{sec_basic_ests} we develop the basics on which our nonlinear energy method is based. This includes the energy-dissipation equation as well as the ingredients needed for the enhanced dissipation estimates. In Section \ref{sec_elliptics} we develop the weighted Sobolev elliptic regularity theory for \eqref{geometric}.
Section \ref{sec_nlin_en} provides estimates for the nonlinear terms that arise in the energy-dissipation equality. Section \ref{sec_nlin_ell} develops the estimates for the nonlinearities in the elliptic problems. In Section \ref{sec_apriori} we complete our a priori estimates and record the proofs of our main results. Appendix \ref{app_nonlin_form} records the precise form of various nonlinearities that appear in our analysis. Appendix \ref{app_r} records some estimates for the perturbation function $\mathcal{R}$. Appendices \ref{app_weight} and \ref{app_prods} provide some key details about weighted Sobolev spaces and product estimates. Appendix \ref{app_coeff} records some coefficient estimates. Finally, Appendix \ref{app_surf} records useful properties about equilibrium capillary surfaces. \subsection{Notation and terminology} We now mention some of the notational conventions that we will use throughout the paper. \textbf{Einstein summation and constants:} We will employ the Einstein convention of summing over repeated indices for vector and tensor operations. Throughout the paper $C>0$ will denote a generic constant that can depend on $\Omega$ or any of the parameters of the problem. We refer to such constants as ``universal.'' They are allowed to change from one inequality to the next. We will employ the notation $a \lesssim b$ to mean that $a \le C b$ for a universal constant $C>0$. \textbf{Norms:} We write $H^r(\Omega)$, $H^r(\Sigma)$, and $H^r(\Sigma_s)$ with $r \in \mathbb{R}$ for the usual Sobolev spaces. We will typically write $H^0 = L^2$. To avoid notational clutter, we will often avoid writing $H^r(\Omega)$, $H^r(\Sigma)$, or $H^r(\Sigma_s)$ in our norms and typically write only $\norm{\cdot}_{r}$. When we need to refer to norms on the space $L^r$ we will explicitly write $\norm{\cdot}_{L^r}$. Since we will do this for functions defined on $\Omega$, $\Sigma$, and $\Sigma_s$, this presents some ambiguity. We avoid this by adopting two conventions. First, we assume that functions have natural spaces on which they ``live.'' For example, the functions $u$, $p$, and $\bar{\eta}$ live on $\Omega$, while $\eta$ lives on $\Sigma$. Second, whenever the norm of a function is computed on a space different from the one in which it lives, we will explicitly write the space. This typically arises when computing norms of traces onto $\Sigma$ of functions that live on $\Omega$. \section{Weak formulation}\label{sec_weakform} The purpose of this section is to define a number of useful function spaces and to give a weak formulation of the problem \eqref{geometric}. \subsection{Some spaces and bilinear forms } Suppose that $\eta$ is given and that $\mathcal{A}$, $J$, $\mathcal{N}$, etc.\ are determined in terms of it. Let us define \begin{equation} \pp{u,v} := \int_\Omega \frac{\mu}{2} \mathbb{D}_{\mathcal{A}} u : \mathbb{D}_{\mathcal{A}} v\, J + \int_{\Sigma_s} \beta (u\cdot \tau) (v\cdot \tau) J. \end{equation} We also define \begin{equation} (\phi,\psi)_{1,\Sigma} := \int_{-\ell}^\ell g \phi \psi + \sigma \frac{\partial_1 \phi\, \partial_1 \psi }{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \end{equation} and \begin{equation} [a,b]_\ell := \kappa\left( a(\ell)b(\ell) + a(-\ell)b(-\ell)\right).
\end{equation} We define the time-dependent spaces \begin{equation} \mathcal{H}^0(\Omega) := \{ u: \Omega \to \mathbb{R}^2 \;\vert\; \sqrt{J} u \in H^0(\Omega) \}, \end{equation} \begin{equation} {_0}\mathcal{H}^1(\Omega) := \{ u: \Omega \to \mathbb{R}^2 \;\vert\; \pp{u,u}<\infty,\ u\cdot \nu=0 \text{ on }\Sigma_s \}, \end{equation} and \begin{equation} {_0}\mathcal{H}^1(\Omega)z := \{ u \in {_0}\mathcal{H}^1(\Omega) \;\vert\; u\cdot \mathcal{N}=0 \text{ on }\Sigma \}. \end{equation} Here we suppress the time-dependence in the notation, though each space depends on $t$ through the dependence of $J,\mathcal{A},\mathcal{N}$ on $t$. We also define the time-independent spaces \begin{equation} \mathring{H}^0(\Omega) = \{ q \in H^0(\Omega) \;\vert\; \int_{\Omega} q =0\}, \end{equation} \begin{equation} \overset{\circ}{H}{}^0(-\ell,\ell) = \{ \phi \in H^0((-\ell,\ell)) \;\vert\; \int_{-\ell}^\ell \phi =0\}, \end{equation} \begin{equation} W = \{ v \in {_0}H^1(\Omega) \;\vert\; v \cdot \mathcal{N}_0 \in H^1(-\ell,\ell) \cap \overset{\circ}{H}{}^0(-\ell,\ell) \}, \end{equation} where $\mathcal{N}_0$ is given by \eqref{n_0_def}, and \begin{equation} V = \{ v \in W \;\vert\; \diverge{v}=0\}. \end{equation} We endow both with the natural inner-product on $W$: \begin{equation} (u,v)_W = \int_\Omega \frac{\mu}{2} \mathbb{D} u: \mathbb{D} v + \int_{\Sigma_s} \beta (u\cdot \tau)(v\cdot \tau) + (u\cdot \mathcal{N}_0, v\cdot \mathcal{N}_0)_{1,\Sigma}. \end{equation} We then define the space \begin{equation} \mathcal{W}(t):= \{ w \in {_0}H^1(\Omega) \;\vert\; w \cdot \mathcal{N} \in H^1(-\ell,\ell) \cap \overset{\circ}{H}{}^0(-\ell,\ell) \}, \end{equation} which we endow with the inner-product \begin{equation} (v,w)_{\mathcal{W}} = \pp{v,w} + (v\cdot \mathcal{N},w\cdot \mathcal{N})_{1,\Sigma}. \end{equation} We also define the subspace \begin{equation} \mathcal{V}(t) := \{v \in \mathcal{W}(t) \;\vert\; \diverge_{\mathcal{A}} v = 0\}. \end{equation} \subsection{Formulation } We now aim to justify a weak formulation of \eqref{geometric}. Suppose that $\zeta=\zeta_0 + \eta$ and $\mathcal{A}$ and $\mathcal{N}$ are determined in terms of $\zeta$. Then suppose that $(v,q,\xi)$ satisfy \begin{equation}\label{linear_geometric} \begin{cases} \diverge_{\mathcal{A}}S_{\mathcal{A}}(q,v) =F^1 & \text{in } \Omega \\ \diverge_{\mathcal{A}} v = F^2 & \text{in }\Omega \\ S_{\mathcal{A}}(q,v) \mathcal{N} = g\xi \mathcal{N} -\sigma \partial_1\left( \frac{\partial_1 \xi}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + F^3\right) \mathcal{N} + F^4& \text{on } \Sigma \\ (S_{\mathcal{A}}(q,v)\nu - \beta v)\cdot \tau =F^5 &\text{on }\Sigma_s \\ v\cdot \nu =0 &\text{on }\Sigma_s \\ \partial_t \xi = v \cdot \mathcal{N} + F^6& \text{on } \Sigma \\ \kappa \partial_t \xi(\pm \ell,t) = \mp \sigma \left(\frac{\partial_1 \xi}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + F^3 \right)(\pm \ell,t) - \kappa F^7(\pm\ell,t). \end{cases} \end{equation} We have the following integral identity for $(v,q,\xi)$. \begin{lem}\label{geometric_evolution} Suppose that $\zeta = \zeta_0 + \eta$ is given as above and that $(v,q,\xi)$ are sufficiently regular and satisfy \eqref{linear_geometric}. Suppose that $w \in \mathcal{W}(t)$.
Then \begin{multline}\label{ge_0} \pp{v,w} - (q,\diverge_{\mathcal{A}} w)_0 + (\xi,w\cdot \mathcal{N})_{1,\Sigma} + [v\cdot \mathcal{N},w\cdot \mathcal{N}]_\ell = \int_\Omega F^1 \cdot w J \\- \int_{\Sigma_s} J (w\cdot \tau)F^5 - \int_{-\ell}^\ell \sigma F^3 \partial_1 (w \cdot \mathcal{N}) + F^4 \cdot w - [w\cdot \mathcal{N}, F^6 + F^7]_\ell. \end{multline} \end{lem} \begin{proof} We take the dot product of the first equation in \eqref{linear_geometric} with $J w$ and integrate over $\Omega$ to find that \begin{equation}\label{ge_1} \int_\Omega \diverge_{\mathcal{A}} S_{\mathcal{A}}(q,v) \cdot w J =: I = II := \int_\Omega F^1 \cdot w J. \end{equation} We will compute $I$ and then plug into \eqref{ge_1} to deduce \eqref{ge_0}. In computing, we will utilize the geometric identity $\partial_k(J \mathcal{A}_{jk}) =0$ for each $j$. We will also employ the identity \begin{equation}\label{ge_4} J \mathcal{A} \nu = \begin{cases} J \nu &\text{on }\Sigma_s \\ \mathcal{N}/\sqrt{1 +\abs{\partial_1 \zeta_0}^2} &\text{on }\Sigma, \end{cases} \end{equation} which follows from a straightforward computation. Now we turn to the computation of $I$. The geometric identity and an integration by parts allow us to rewrite \begin{equation}\label{ge_7} I = \int_\Omega \partial_k (J \mathcal{A}_{jk} S_{\mathcal{A}}(q,v)_{ij}) w_i = \int_\Omega -J \mathcal{A}_{jk} \partial_k w_i S_{\mathcal{A}}(q,v)_{ij} + \int_{\partial \Omega} (J \mathcal{A} \nu) \cdot (S_{\mathcal{A}}(q,v) w) := I_1 + I_2. \end{equation} We may use the definition of $S_{\mathcal{A}}(q,v)$ to compute \begin{equation} I_1 = \int_\Omega \frac{\mu}{2} \mathbb{D}_{\mathcal{A}} v: \mathbb{D}_{\mathcal{A}} w J - q \diverge_{\mathcal{A}}{w} J. \end{equation} On the other hand, the first equality in \eqref{ge_4} allows us to compute \begin{multline} \int_{\Sigma_s} (J \mathcal{A} \nu) \cdot (S_{\mathcal{A}}(q,v) w) = \int_{\Sigma_s} J \nu \cdot (S_{\mathcal{A}}(q,v) w) = \int_{\Sigma_s} J w \cdot (S_{\mathcal{A}}(q,v) \nu) \\ = \int_{\Sigma_s} J \left(\beta (v \cdot \tau) (w \cdot \tau) + w\cdot \tau F^5\right). \end{multline} Similarly, the second equality in \eqref{ge_4} shows that \begin{multline} \int_{\Sigma} (J \mathcal{A} \nu) \cdot (S_{\mathcal{A}}(q,v) w) \\ = \int_{-\ell}^\ell (S_{\mathcal{A}}(q,v) \mathcal{N})\cdot w = \int_{-\ell}^\ell g \xi (w \cdot \mathcal{N}) - \sigma \partial_1 \left( \frac{\partial_1 \xi }{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} +F^3\right)w\cdot \mathcal{N} + F^4 \cdot w. \end{multline} We then compute the second of these terms by using the equations in \eqref{linear_geometric}: \begin{multline} \int_{-\ell}^\ell - \sigma \partial_1 \left( \frac{\partial_1 \xi }{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} +F^3 \right)w\cdot \mathcal{N} \\ = \int_{-\ell}^\ell \sigma \left( \frac{\partial_1 \xi }{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} +F^3 \right)\partial_1 (w\cdot \mathcal{N}) - \sigma \left. \left( \frac{\partial_1 \xi }{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} +F^3 \right) (w\cdot \mathcal{N}) \right\vert_{-\ell}^\ell \end{multline} and \begin{multline}\label{ge_8} - \sigma \left.
\left( \frac{\partial_1 \xi }{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} +F^3 \right) (w\cdot \mathcal{N}) \right\vert_{-\ell}^\ell = \sum_{a=\pm 1} \left(\kappa \partial_t\xi(a\ell) + \kappa F^7(a\ell) \right)(w\cdot \mathcal{N}(a\ell) ) \\ = \sum_{a=\pm 1} \kappa (v \cdot \mathcal{N}(a\ell))(w\cdot \mathcal{N}(a\ell) ) + \kappa (w\cdot\mathcal{N}(a\ell)) ( F^6(a\ell) + F^7(a\ell) ). \end{multline} Combining \eqref{ge_7}--\eqref{ge_8}, we find that \begin{multline}\label{ge_9} I = \int_\Omega \frac{\mu}{2} \mathbb{D}_{\mathcal{A}} v: \mathbb{D}_{\mathcal{A}} w J + \int_{\Sigma_s} J \beta (v \cdot \tau)(w \cdot \tau) + \int_{-\ell}^\ell g \xi (w\cdot \mathcal{N}) + \sigma \frac{\partial_1 \xi }{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \partial_1(w \cdot \mathcal{N}) \\ + \sum_{a=\pm 1} \kappa (v \cdot \mathcal{N}(a\ell)) (w \cdot \mathcal{N}(a\ell)) - \int_\Omega q \diverge_{\mathcal{A}}{w} J + \int_{\Sigma_s} J (w\cdot \tau)F^5 + \int_{-\ell}^\ell \sigma F^3 \partial_1( w \cdot \mathcal{N}) + F^4 \cdot w \\ + [w\cdot \mathcal{N}, F^6 + F^7]_\ell. \end{multline} Plugging this into \eqref{ge_1} yields \eqref{ge_0}. \end{proof} The lemma motivates our definition of a weak solution. \begin{dfn} A weak solution to \eqref{linear_geometric} is a triple $(v,q,\xi)$ that satisfies \begin{multline}\label{weak_form} \pp{v,w} - (q,\diverge_{\mathcal{A}} w)_0 + (\xi,w\cdot \mathcal{N})_{1,\Sigma} + [v\cdot \mathcal{N},w\cdot \mathcal{N}]_\ell = \int_\Omega F^1 \cdot w J \\- \int_{\Sigma_s} J (w\cdot \tau)F^5 - \int_{-\ell}^\ell \sigma F^3 \partial_1(w \cdot \mathcal{N}) + F^4 \cdot w - [w\cdot \mathcal{N},F^6 +F^7 ]_\ell \end{multline} for a.e. $t \in [0,T]$ and for every $w \in \mathcal{W}(t)$. \end{dfn} \section{Basic estimates}\label{sec_basic_ests} \subsection{The energy estimate } We have the following equation for the evolution of the energy of $(v,q,\xi)$. \begin{thm}\label{linear_energy} Suppose that $\zeta=\zeta_0 + \eta$ is given and $\mathcal{A}$ and $\mathcal{N}$ are determined in terms of $\zeta$. Suppose that $(v,q,\xi)$ satisfy \eqref{linear_geometric}. Then \begin{multline} \label{linear_energy_0} \partial_t \left( \int_{-\ell}^\ell \frac{g}{2} \abs{\xi}^2 + \frac{\sigma}{2} \frac{\abs{\partial_1 \xi}^2}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \right) + \frac{\mu}{2} \int_\Omega \abs{\mathbb{D}_{\mathcal{A}} v}^2 J +\int_{\Sigma_s} \beta J \abs{v \cdot \tau}^2 + [v\cdot \mathcal{N},v\cdot \mathcal{N}]_\ell \\ = \int_\Omega F^1 \cdot v J + q F^2 J - \int_{\Sigma_s} J (v \cdot \tau)F^5 \\ - \int_{-\ell}^\ell \sigma F^3 \partial_1(v \cdot \mathcal{N}) + F^4 \cdot v - g \xi F^6 - \sigma \frac{\partial_1 \xi \partial_1 F^6}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} - [v\cdot \mathcal{N}, F^6 + F^7]_\ell. \end{multline} \end{thm} \begin{proof} We use $v$ as a test function in Lemma \ref{geometric_evolution} to see that \begin{multline}\label{linear_energy_1} \pp{v,v} - (q,\diverge_{\mathcal{A}} v)_0 + (\xi,v\cdot \mathcal{N})_{1,\Sigma} + [v\cdot \mathcal{N},v\cdot \mathcal{N}]_\ell = \int_\Omega F^1 \cdot v J \\ - \int_{\Sigma_s} J (v\cdot \tau)F^5 - \int_{-\ell}^\ell \sigma F^3 \partial_1(v \cdot \mathcal{N}) + F^4 \cdot v - [v\cdot \mathcal{N}, F^6 + F^7]_\ell.
\end{multline} We then compute \begin{multline} (\xi,v\cdot \mathcal{N})_{1,\Sigma} = (\xi,\partial_t \xi - F^6)_{1,\Sigma} \\ = \partial_t \left( \int_{-\ell}^\ell \frac{g}{2} \abs{\xi}^2 + \frac{\sigma}{2} \frac{\abs{\partial_1 \xi}^2}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \right) - \int_{-\ell}^\ell g \xi F^6 + \sigma \frac{\partial_1 \xi \partial_1 F^6}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \end{multline} and plug into \eqref{linear_energy_1} to deduce \eqref{linear_energy_0}. \end{proof} We now apply this to solutions to \eqref{geometric}. First we define \begin{equation}\label{Q_def} \mathcal{Q}(y,z) := \int_0^z \mathcal{R}(y,r) dr \Rightarrow \frac{\partial \mathcal{Q}}{\partial z}(y,z) = \mathcal{R}(y,z), \end{equation} where $\mathcal{R}$ is defined by \eqref{R_def}. \begin{cor}\label{basic_energy} Suppose that $(u,p,\eta)$ solve \eqref{geometric}. Let $\mathcal{Q}$ be given by \eqref{Q_def}. Then \begin{multline}\label{be_0} \partial_t \left( \int_{-\ell}^\ell \frac{g}{2} \abs{\eta}^2 + \frac{\sigma}{2} \frac{\abs{\partial_1 \eta}^2}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \right) + \frac{\mu}{2} \int_\Omega \abs{\mathbb{D}_{\mathcal{A}} u}^2 J +\int_{\Sigma_s} \beta J \abs{u \cdot \tau}^2 + \sum_{a=\pm 1} \kappa \abs{u\cdot \mathcal{N}(a\ell)}^2 \\ =- \partial_t \left( \int_{-\ell}^\ell \sigma \mathcal{Q}(\partial_1 \zeta_0,\partial_1 \eta) \right) - [u\cdot \mathcal{N},\mathscr{W}h(\partial_t \eta)]_\ell . \end{multline} \end{cor} \begin{proof} According to \eqref{geometric}, $v =u$, $q =p$, and $\xi = \eta$ solve \eqref{linear_geometric} with $F^i=0$ for $i\neq 3,7$ and $F^3 = \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)$, $F^7 = \mathscr{W}h(\partial_t \eta)$. The equality \eqref{be_0} then follows directly from Theorem \ref{linear_energy} and a simple computation with $\mathcal{Q}$. \end{proof} \subsection{The pressure estimate } The basic energy estimate does not control the pressure. We get this control from another estimate. Recall that \begin{equation} {_0}H^1(\Omega)z =\{ u \in H^1(\Omega) \;\vert\; u\cdot \nu =0 \text{ on } \Sigma_s \text{ and } u \cdot \mathcal{N}_0 =0 \text{ on } \Sigma\}. \end{equation} \begin{prop}\label{pressure_v} Let $p \in \mathring{H}^0(\Omega)$. Then there exists $v \in {_0}H^1(\Omega)z$ such that $\diverge v = p$ and \begin{equation} \ns{v}_1 \lesssim \ns{p}_0. \end{equation} \end{prop} \begin{proof} Consider the elliptic problem \begin{equation} \begin{cases} -\Delta \varphi =p &\text{in }\Omega \\ \nabla \varphi \cdot \nu =0 & \text{on } \Sigma_s \\ \nabla \varphi \cdot \mathcal{N}_0 =0 & \text{on } \Sigma. \end{cases} \end{equation} By the Neumann problem analysis in planar domains with convex corners (see for instance \cite{tice_neumann}), we know that there exists a unique solution $\varphi \in H^2(\Omega) \cap \mathring{H}^0(\Omega)$ and that $\ns{\varphi}_2 \lesssim \ns{p}_0$. Set $v=-\nabla \varphi$. \end{proof} We can parlay this into a solution of $\diverge_{\mathcal{A}} v =p$. We need a preliminary result first. Consider the matrix \begin{equation}\label{M_def} M = K \nabla \Phi = (J \mathcal{A}^T)^{-1}. \end{equation} We view multiplication by $M$ as a linear operator. The following proposition summarizes some of the properties of this operator.
\begin{prop}\label{M_properties} Assume that $\eta \in H^r$ for $r >3/2$. Let $M$ be defined by \eqref{M_def}. The following hold. \begin{enumerate} \item $M: H^0(\Omega) \to \mathcal{H}^0(\Omega)$ is a bounded linear isomorphism, and \begin{equation} \norm{M u}_{\mathcal{H}^0(\Omega)} \lesssim (1+ \norm{\eta}_{r-1/2}) \norm{u}_0. \end{equation} \item $M : {_0}H^1(\Omega) \to {_0}\mathcal{H}^1(\Omega)$ is a bounded linear isomorphism, and \begin{equation} \norm{M u}_{{_0}\mathcal{H}^1(\Omega)} \lesssim (1+ \norm{\eta}_{r}) \norm{u}_1. \end{equation} \item $M : {_0}H^1(\Omega)z \to {_0}\mathcal{H}^1(\Omega)z$ is a bounded linear isomorphism. \item Let $u \in H^1(\Omega)$. Then $\diverge u =p$ if and only if $\diverge_{\mathcal{A}}(Mu)=K p$. \item $M : W \to \mathcal{W}(t)$ and $M: V \to \mathcal{V}(t)$ are bounded linear isomorphisms. \end{enumerate} \end{prop} \begin{proof} The boundedness of $M$ on $\mathcal{H}^0(\Omega)$ and on ${_0}\mathcal{H}^1(\Omega)$ follows directly from standard product estimates in Sobolev spaces. This proves the first item. To complete the proof of the second we note that a straightforward computation reveals that \begin{equation} K\nabla \Phi^T \nu = K\nu \text{ on } \{ x \in \partial \Omega \;\vert \; x_1 = \pm \ell, x_2 \ge 0 \} \text{ and } K\nabla \Phi^T \nu = \nu \text{ on } \{x \in \partial \Omega \; \vert \; x_2 < 0 \}. \end{equation} Hence, on $\Sigma_s$ we have that \begin{equation} M u \cdot \nu =0 \Leftrightarrow u \cdot (K \nabla \Phi^T \nu) =0 \Leftrightarrow u \cdot \nu =0. \end{equation} This then proves the second item. We also have that on $\Sigma$ \begin{equation} J \mathcal{A} \mathcal{N}_0 = \mathcal{N} \Rightarrow \mathcal{N}_0 = K (\mathcal{A})^{-1} \mathcal{N} = K \nabla \Phi^T \mathcal{N}, \end{equation} which implies that \begin{equation} u \cdot \mathcal{N}_0 = u \cdot K \nabla \Phi^T \mathcal{N} = K \nabla \Phi u \cdot \mathcal{N} = M u \cdot \mathcal{N}. \end{equation} This then proves the third item. To prove the fourth item we compute \begin{equation} \diverge(M^{-1} v) = \partial_j( J\mathcal{A}_{ij}v_i) = J \mathcal{A}_{ij} \partial_j v_i = J \diverge_{\mathcal{A}}{v}. \end{equation} Thus if $Mu = v$ we have that \begin{equation} \diverge{u} = p \text{ if and only if } \diverge_{\mathcal{A}}(Mu) = Kp. \end{equation} The fifth item follows easily from the previous four. \end{proof} We now combine Propositions \ref{pressure_v} and \ref{M_properties} to deduce an orthogonal decomposition of $\mathcal{W}(t)$. Using the Riesz representation theorem we may define the operator $\mathcal{Q}^1_t: \mathring{H}^0(\Omega) \to \mathcal{W}(t)$ via \begin{equation} \ip{\mathcal{Q}^1_t p }{w}_{\mathcal{W}(t)} = \int_{\Omega} p \diverge_{\mathcal{A}}{w} J. \end{equation} We may estimate \begin{equation} \ns{\mathcal{Q}^1_t p}_{\mathcal{W}(t)} = \int_{\Omega} p \diverge_{\mathcal{A}}{\mathcal{Q}^1_t p} J \lesssim \norm{p}_{0}\norm{\mathcal{Q}^1_t p}_{\mathcal{W}(t)} \Rightarrow \norm{\mathcal{Q}^1_t p}_{\mathcal{W}(t)} \lesssim \norm{p}_0. \end{equation} On the other hand, for $p \in \mathring{H}^0(\Omega)$ we use Proposition \ref{pressure_v} to find $u \in {_0}H^1(\Omega)z \subset W$ such that $\diverge{u} =p$ and $\ns{u}_1 \lesssim \ns{p}_0$.
Proposition \ref{M_properties} then implies that if we set $v = Mu \in \mathcal{W}(t)$ we have that $J \diverge_{\mathcal{A}}{v} = p$ and $\ns{v}_{\mathcal{W}(t)} \lesssim \ns{p}_0$. Then \begin{equation} \int_\Omega \abs{p}^2 = \int_\Omega p \diverge_{\mathcal{A}}{v}J = \ip{\mathcal{Q}^1_t p }{v}_{\mathcal{W}(t)} \lesssim \norm{\mathcal{Q}^1_t p}_{\mathcal{W}(t)} \norm{v}_{\mathcal{W}(t)} \lesssim \norm{\mathcal{Q}^1_t p}_{\mathcal{W}(t)} \norm{p}_0 \Rightarrow \norm{p}_0 \lesssim \norm{\mathcal{Q}^1_t p}_{\mathcal{W}(t)}. \end{equation} Hence \begin{equation}\label{Qt_closed} \norm{p}_0 \lesssim \norm{\mathcal{Q}^1_t p}_{\mathcal{W}(t)} \lesssim \norm{p}_0. \end{equation} We deduce from \eqref{Qt_closed} that the range of $\mathcal{Q}^1_t$ is closed in $\mathcal{W}(t)$ and hence that $\mathcal{Q}^1_t$ is an isomorphism from $\mathring{H}^0(\Omega)$ to $\ran(\mathcal{Q}^1_t) \subseteq \mathcal{W}(t)$. Now we argue as per usual (see for instance \cite{GT_lwp}) to deduce that \begin{equation}\label{Qt_decomp} (\ran(\mathcal{Q}^1_t))^\bot = \mathcal{V}(t) \text{ and hence } \mathcal{W}(t) = \mathcal{V}(t) \oplus_{\mathcal{W}(t)} \ran(\mathcal{Q}^1_t). \end{equation} Indeed, the inclusion $\mathcal{V}(t) \subseteq (\ran(\mathcal{Q}^1_t))^\bot$ is trivial. To prove the opposite inclusion we suppose $w \in (\ran(\mathcal{Q}^1_t))^\bot$. Then \begin{equation} 0 = \int_{\Omega} p \diverge_{\mathcal{A}}{w} J \text{ for every } p \in \mathring{H}^0(\Omega) \Rightarrow \diverge_{\mathcal{A}}{w} J = C \end{equation} for some constant $C \in \mathbb{R}$. However, \begin{equation} C \abs{\Omega} = \int_\Omega \diverge_{\mathcal{A}}{w} J = \int_{\Sigma_s} J w \cdot \nu + \int_\Sigma \frac{\mathcal{N}}{\sqrt{1+\abs{\partial_1 \zeta_0}^2}}\cdot w = 0 + \int_{-\ell}^\ell \mathcal{N} \cdot w =0, \end{equation} so $C=0$ and $w \in \mathcal{V}(t)$. Now we use \eqref{Qt_decomp} to deduce the following. \begin{thm}\label{pressure_lagrange} Suppose that $\Lambda \in (\mathcal{W}(t))^\ast$ and that $\Lambda(v) =0$ for all $v \in \mathcal{V}(t)$. Then there exists a unique $p \in \mathring{H}^0(\Omega)$ such that \begin{equation} \Lambda(w) = \ip{\mathcal{Q}^1_t p}{w}_{\mathcal{W}(t)} = \int_\Omega p \diverge_{\mathcal{A}}{w} J \text{ for all }w \in \mathcal{W}(t). \end{equation} Moreover, \begin{equation} \norm{p}_0 \lesssim \norm{\Lambda}_{(\mathcal{W}(t))^\ast}. \end{equation} \end{thm} We can also use this to deduce the following theorem. \begin{thm}\label{pressure_est} If $(v,\xi)$ satisfy \begin{multline} \pp{v,w} + (\xi,w\cdot \mathcal{N})_{1,\Sigma} + [v\cdot \mathcal{N},w\cdot \mathcal{N}]_\ell = \int_\Omega F^1 \cdot w J - \int_{\Sigma_s} J (w\cdot \tau)F^5 \\ - \int_{-\ell}^\ell \sigma F^3 \partial_1 (w \cdot \mathcal{N}) + F^4 \cdot w - [w\cdot \mathcal{N}, F^6 + F^7]_\ell \end{multline} for all $w \in \mathcal{V}(t)$, then there exists a unique $q \in \mathring{H}^0(\Omega)$ such that \eqref{weak_form} is satisfied. Moreover, \begin{equation}\label{pressure_est_0} \norm{q}_0 \lesssim \norm{v}_{1} + \norm{\mathcal{F}}_{(H^1)^\ast}, \end{equation} where $\mathcal{F} \in (H^1)^\ast$ is given by \begin{equation} \br{\mathcal{F},w} = \int_\Omega F^1 \cdot w J - \int_{\Sigma_s} J (w\cdot \tau)F^5.
\end{equation} \end{thm} \begin{proof} Define $\Lambda \in (\mathcal{W}(t))^\ast$ via \begin{multline} \Lambda(w) = -\pp{v,w} - (\xi,w\cdot \mathcal{N})_{1,\Sigma} - [v\cdot \mathcal{N},w\cdot \mathcal{N}]_\ell + \int_\Omega F^1 \cdot w J \\- \int_{\Sigma_s} J (w\cdot \tau)F^5 - \int_{-\ell}^\ell \sigma F^3 \partial_1 (w \cdot \mathcal{N}) + F^4 \cdot w - [w\cdot \mathcal{N}, F^6 + F^7]_\ell. \end{multline} Then $\Lambda(v) =0$ for all $v\in \mathcal{V}(t)$, so Theorem \ref{pressure_lagrange} yields $q$ and shows that \eqref{weak_form} is satisfied. Using Propositions \ref{pressure_v} and \ref{M_properties} we can find $w \in {_0}\mathcal{H}^1(\Omega)z \subseteq \mathcal{W}(t)$ such that $J \diverge_{\mathcal{A}} w = q$. Then \begin{multline} \ns{q}_0 = -\pp{v,w} + \int_\Omega F^1 \cdot w J - \int_{\Sigma_s} J (w\cdot \tau)F^5 \\ \lesssim \norm{w}_{\mathcal{W}(t)} \left( \norm{v}_{1} + \norm{\mathcal{F}}_{(H^1)^\ast} \right) \lesssim \norm{q}_{0} \left( \norm{v}_{1} + \norm{\mathcal{F}}_{(H^1)^\ast} \right), \end{multline} which yields the estimate \eqref{pressure_est_0}. \end{proof} \subsection{The $\eta$ dissipation estimate } In what follows we write \begin{equation} \mathring{H}^s(-\ell,\ell) = H^s(-\ell,\ell) \cap \mathring{H}^0(-\ell,\ell) \end{equation} for $s \ge 0$. Two important facts about these spaces are recorded below. \begin{prop}\label{ohs_interp} Let $(X,Y)_{s,q}$ denote the (real) interpolant of Banach spaces $X,Y$ with parameters $s\in (0,1)$, $q \in [1,\infty]$ (see for instance Triebel's book \cite{triebel} for definitions). We have that \begin{equation} (\mathring{H}^0,\mathring{H}^1)_{s,2} = \mathring{H}^s \end{equation} and \begin{equation} ((\mathring{H}^0)^\ast,(\mathring{H}^1)^\ast)_{s,2} = (\mathring{H}^s)^\ast. \end{equation} \end{prop} \begin{proof} The former follows from Theorem 1.17.1/1 of \cite{triebel} and the latter follows from Theorem 1.11.2/1 of \cite{triebel}. \end{proof} \begin{remark} Here we have used real interpolation, but analogous results hold for complex interpolation. \end{remark} We can now use these facts and the usual elliptic estimates to get an interpolated elliptic estimate. \begin{thm}\label{eta_elliptic} Suppose that $\xi \in \mathring{H}^1(-\ell,\ell)$ is the unique solution to \begin{equation}\label{eta_elliptic_01} (\xi,\theta)_{1,\Sigma} + [h,\theta]_\ell = \br{F,\theta} \end{equation} for all $\theta \in \mathring{H}^{1}(-\ell,\ell)$. If $F \in (\mathring{H}^s(-\ell,\ell))^\ast$ for $s \in [0,1]$ then $\xi \in \mathring{H}^{2-s}(-\ell,\ell)$ and \begin{equation}\label{eta_elliptic_0} \ns{\xi}_{\mathring{H}^{2-s}} \lesssim [h]_\ell^2 + \ns{F}_{(\mathring{H}^s)^\ast}. \end{equation} \end{thm} \begin{proof} We sketch the proof in several steps. \emph{Step 1 - Elliptics at the endpoints with $h=0$} Suppose for now that $h=0$. We get the following estimates. If $F \in (\mathring{H}^0)^\ast = \mathring{H}^0$, then $\xi \in \mathring{H}^2$ and \begin{equation} \ns{\xi}_{\mathring{H}^2} \lesssim \ns{F}_{(\mathring{H}^0)^\ast}. \end{equation} On the other hand, if $F \in (\mathring{H}^1)^\ast$ then we get no improvement of regularity for $\xi$, but we have the estimate \begin{equation} \ns{\xi}_{\mathring{H}^1} \lesssim \ns{F}_{(\mathring{H}^1)^\ast}. \end{equation} \emph{Step 2 - Elliptics with $h=0$} Again assume $h=0$.
We may then apply Proposition \ref{ohs_interp} and the usual theory of operator interpolation to deduce that if $F \in (\mathring{H}^s)^\ast$, then $\xi \in \mathring{H}^{2-s}$ and \begin{equation} \ns{\xi}_{\mathring{H}^{2-s}} \lesssim \ns{F}_{(\mathring{H}^s)^\ast}. \end{equation} \emph{Step 3 - Elliptics with $F =0$} Now we consider the case $F=0$. In this case a standard argument reveals that $\xi \in C^\infty$ and \begin{equation} \ns{\xi}_{\mathring{H}^k} \le C_k [h]_\ell^2 \end{equation} for any $k \ge 0$, where $C_k >0$ does not depend on the solution or $h$. \emph{Step 4 - synthesis} We now combine Steps 2 and 3 to deduce that \eqref{eta_elliptic_0} holds. \end{proof} The derivative operator $\partial_1$ is a bounded operator from $H^1(-\ell,\ell)$ to $L^2(-\ell,\ell)$ and from $L^2(-\ell,\ell)$ to $(H_0^1(-\ell,\ell))^\ast$, and so the usual theory of interpolation guarantees that \begin{equation} \partial_1 : H^{1/2}(-\ell,\ell) \to [L^2(-\ell,\ell),H_0^1(-\ell,\ell)]_{1/2}^\ast = (H_{00}^{1/2}(-\ell,\ell))^\ast \end{equation} is a bounded linear operator. The trouble is that $H_{00}^{1/2}(-\ell,\ell) \subset H^{1/2}(-\ell,\ell)$ (for a proof of this and precise definitions we refer to \cite{lions_magenes_1}), and so the previous theorem is not ideal for dealing with $F$ of the form \begin{equation} \br{F,\theta} = \int_{-\ell}^\ell f \partial_1 \theta. \end{equation} As a result, we need the following variant. \begin{thm}\label{eta_elliptic_var} Suppose that $\xi \in \mathring{H}^1(-\ell,\ell)$ is the unique solution to \begin{equation}\label{eta_elliptic_var_01} (\xi,\theta)_{1,\Sigma} = \int_{-\ell}^\ell f \partial_1 \theta \end{equation} for all $\theta \in \mathring{H}^{1}(-\ell,\ell)$, where $f \in H^{1/2}(-\ell,\ell)$. Then $\xi \in H^{3/2}(-\ell,\ell)$ and \begin{equation}\label{eta_elliptic_var_02} \ns{\xi}_{3/2} \lesssim \ns{f}_{1/2}. \end{equation} \end{thm} \begin{proof} Using $\theta = \xi$ as a test function in \eqref{eta_elliptic_var_01} provides us with the estimate \begin{equation}\label{eta_elliptic_var_1} \norm{\xi}_1 \lesssim \norm{f}_0. \end{equation} Now let $\varphi \in C_c^\infty(-\ell,\ell)$ and let $\bar{\varphi} = \frac{1}{2\ell}\int_{-\ell}^\ell \varphi$. Then $\varphi - \bar{\varphi} \in \mathring{H}^1(-\ell,\ell)$ and hence \begin{equation} \int_{-\ell}^\ell \sigma \frac{\partial_1 \xi}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \partial_1 (\varphi -\bar{\varphi}) + g \xi (\varphi -\bar{\varphi}) = \int_{-\ell}^\ell f \partial_1 (\varphi -\bar{\varphi}). \end{equation} Since \begin{equation} \partial_1 (\varphi -\bar{\varphi}) = \partial_1 \varphi \text{ and } \int_{-\ell}^\ell g \xi \bar{\varphi} = g \bar{\varphi} \int_{-\ell}^\ell \xi =0 \end{equation} we then have that \begin{equation} \int_{-\ell}^\ell \sigma \frac{\partial_1 \xi}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \partial_1 \varphi + g \xi \varphi = \int_{-\ell}^\ell f \partial_1 \varphi \end{equation} for every $\varphi \in C_c^\infty(-\ell,\ell)$. From this we immediately deduce that $\chi:= \sigma \partial_1 \xi (1+\abs{\partial_1 \zeta_0}^2)^{-3/2} -f$ is weakly differentiable with \begin{equation} \partial_1 \chi = g \xi \in H^1(-\ell,\ell).
\end{equation} Thus $\chi \in H^2(-\ell,\ell)$ and \begin{equation}\label{eta_elliptic_var_2} \norm{\chi}_{2} \le \norm{\chi}_0 + \norm{\partial_1 \chi}_{1} \lesssim \norm{\xi}_1 + \norm{f}_0 + \norm{g \xi}_1 \lesssim \norm{f}_0, \end{equation} where in the last inequality we have used \eqref{eta_elliptic_var_1}. Now we may estimate \begin{equation} \norm{\partial_1 \xi}_{1/2} = \norm{\sigma^{-1}(1+\abs{\partial_1 \zeta_0}^2)^{3/2} (f + \chi ) }_{1/2} \lesssim \norm{f+\chi}_{1/2} \lesssim \norm{f}_{1/2} + \norm{\chi}_{1/2} \lesssim \norm{f}_{1/2}. \end{equation} From this estimate we immediately deduce \eqref{eta_elliptic_var_02}. \end{proof} Now we use Theorems \ref{eta_elliptic} and \ref{eta_elliptic_var} to get the $\eta$ dissipation estimate. \begin{thm}\label{xi_est} Suppose that $(v,\xi)$ satisfy \begin{multline} \pp{v,w} + (\xi,w\cdot \mathcal{N})_{1,\Sigma} + [v\cdot \mathcal{N},w\cdot \mathcal{N}]_\ell = \int_\Omega F^1 \cdot w J \\- \int_{\Sigma_s} J (w\cdot \tau)F^5 - \int_{-\ell}^\ell \sigma F^3 \partial_1 (w \cdot \mathcal{N}) + F^4 \cdot w - [w\cdot \mathcal{N}, F^6 + F^7]_\ell \end{multline} for all $w \in \mathcal{V}(t)$. Then for each $\theta \in \mathring{H}^1(-\ell,\ell)$ there exists $w[\theta] \in \mathcal{V}(t)$ such that the following hold: \begin{enumerate} \item $w[\theta]$ depends linearly on $\theta$, \item $w[\theta]\cdot \mathcal{N} =\theta$ on $\Sigma$, \item we have the estimates \begin{equation}\label{xi_est_02} \ns{w[\theta]}_{1} \lesssim \ns{\theta}_{\mathring{H}^{1/2}} \text{ and } \ns{w[\theta]}_{\mathcal{W}(t)} \lesssim \ns{\theta}_{\mathring{H}^{1}}, \end{equation} \item we have the identity \begin{equation}\label{xi_est_00} (\xi,\theta)_{1,\Sigma} + [h,\theta]_\ell = \br{\mathcal{G},\theta} - \int_{-\ell}^\ell \sigma F^3 \partial_1 \theta, \end{equation} where $\mathcal{G}$ and $h$ are defined as follows. First, $h$ is given by \begin{equation} [h,\theta]_\ell = [v\cdot \mathcal{N},\theta]_\ell - [ F^6 +F^7 ,\theta]_\ell. \end{equation} Second, $\mathcal{G} \in (\mathring{H}^{1/2})^\ast$ is defined via \begin{equation} \br{\mathcal{G},\theta} = - \pp{v,w[\theta]} + \br{\mathcal{F},w[\theta]}, \end{equation} with $\mathcal{F} \in (H^1)^\ast$ given by \begin{equation} \br{\mathcal{F},w} = \int_\Omega F^1 \cdot w J - \int_{\Sigma_s} J (w\cdot \tau)F^5 - \int_{-\ell}^\ell F^4 \cdot w. \end{equation} \end{enumerate} Consequently, $\xi$ satisfies \begin{equation}\label{xi_est_01} \ns{\xi}_{\mathring{H}^{3/2}} \lesssim \ns{v}_1 +[v\cdot \mathcal{N}]_\ell^2 + \ns{\mathcal{F}}_{(H^1)^\ast} + \ns{F^3}_{1/2} + [ F^6 +F^7 ]_\ell^2. \end{equation} \end{thm} \begin{proof} Let $\theta \in \mathring{H}^1$. We may again employ the Neumann problem analysis (see for example \cite{tice_neumann}) to find $\varphi \in H^2(\Omega)$ such that $w = M \nabla \varphi \in \mathcal{V}(t)$ (with $M$ as in \eqref{M_def}) satisfies \begin{equation} \begin{cases} \diverge_{\mathcal{A}} w = 0 &\text{in }\Omega \\ w \cdot \mathcal{N} = \theta &\text{on } \Sigma \\ w\cdot \nu =0 &\text{on }\Sigma_s \end{cases} \end{equation} and \eqref{xi_est_02} holds. Let us write $w[\theta]$ to denote this function. We then have that \begin{equation} (\xi,\theta)_{1,\Sigma} + [h,\theta]_\ell = \br{\mathcal{G},\theta} - \int_{-\ell}^\ell \sigma F^3 \partial_1 \theta \end{equation} for all $\theta \in \mathring{H}^1$, which is \eqref{xi_est_00}.
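We note for use just below that, by the first bound in \eqref{xi_est_02} (and with implicit constants depending on bounds for the coefficients $\mathcal{A}$, $J$, and $\beta$), \begin{equation*} \abs{\br{\mathcal{G},\theta}} \le \abs{\pp{v,w[\theta]}} + \abs{\br{\mathcal{F},w[\theta]}} \lesssim \left( \norm{v}_1 + \norm{\mathcal{F}}_{(H^1)^\ast} \right) \norm{w[\theta]}_1 \lesssim \left( \norm{v}_1 + \norm{\mathcal{F}}_{(H^1)^\ast} \right) \norm{\theta}_{\mathring{H}^{1/2}}, \end{equation*} so that $\ns{\mathcal{G}}_{(\mathring{H}^{1/2})^\ast} \lesssim \ns{v}_1 + \ns{\mathcal{F}}_{(H^1)^\ast}$.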
We may decompose $\xi = \xi_1 + \xi_2$, where \begin{equation} (\xi_1,\theta)_{1,\Sigma} + [h,\theta]_\ell = \br{\mathcal{G},\theta} \end{equation} and \begin{equation} (\xi_2,\theta)_{1,\Sigma} = - \int_{-\ell}^\ell \sigma F^3 \partial_1 \theta. \end{equation} We then apply Theorem \ref{eta_elliptic} with $s=1/2$ to $\xi_1$ and Theorem \ref{eta_elliptic_var} to $\xi_2$ in order to arrive at \eqref{xi_est_01}. \end{proof} \section{Elliptic theory for the Stokes problem}\label{sec_elliptics} \subsection{Analysis in cones } Consider the cone of opening angle $\omega \in (0,\pi)$ given by \begin{equation}\label{cone_def} K_\omega = \{x \in \mathbb{R}^2 \;\vert\; r>0 \text{ and } \theta \in (-\pi/2,-\pi/2 + \omega) \}, \end{equation} where $(r,\theta)$ are standard polar coordinates in $\mathbb{R}^2$ (i.e. $\theta=0$ corresponds to the positive $x_1$ axis). We write \begin{equation} \Gamma_- = \{ x \in \mathbb{R}^2 \;\vert\; r>0 \text{ and } \theta =-\pi/2 \} \text{ and } \Gamma_+ = \{x \in \mathbb{R}^2 \;\vert\; r>0 \text{ and } \theta =-\pi/2 + \omega \} \end{equation} for the lower and upper boundaries of $K_\omega$. For a given $\omega \in (0,\pi)$ we will often need to refer to the critical weight \begin{equation}\label{crit_wt} \delta_\omega := \max\{0,2-\pi/\omega\} \in [0,1). \end{equation} Next we introduce a special matrix-valued function. Suppose that $\mathfrak{A}: K_\omega \to \mathbb{R}^{2\times 2}$ is a map satisfying the following four properties. First, $\mathfrak{A}$ is smooth on $K_\omega$ and $\mathfrak{A}$ extends to a smooth function on $\bar{K}_\omega \backslash \{0\}$ and a continuous function on $\bar{K}_\omega$. Second, $\mathfrak{A}$ satisfies the following for all $a,b \in \mathbb{N}$: \begin{equation}\label{frak_A_assump} \begin{split} & \lim_{r\to 0} \sup_{\theta \in [-\pi/2,-\pi/2 + \omega]} \abs{ (r \partial_r)^a \partial_\theta^b [ \mathfrak{A}(r,\theta) \mathfrak{A}^T(r,\theta) - I ] } =0 \\ & \lim_{r\to 0} \sup_{\theta \in [-\pi/2,-\pi/2 + \omega]} \abs{ (r \partial_r)^a \partial_\theta^b [ \mathfrak{A}_{ij}(r,\theta)\partial_j \mathfrak{A}_{ik}(r,\theta) ] } =0 \text{ for }k \in \{1,2\} \\ & \lim_{r\to 0} \sup_{\theta \in [-\pi/2,-\pi/2 + \omega]} \abs{ (r \partial_r)^a \partial_\theta^b [ \mathfrak{A}(r,\theta) - I ] } =0 \\ & \lim_{r\to 0} (r \partial_r)^a [ \mathfrak{A}(r,\theta_0)\nu - \nu ] =0 \text{ for } \theta_0 =-\pi/2,-\pi/2 + \omega \\ & \lim_{r\to 0} (r \partial_r)^a \left[ \left(\mathfrak{A}\nu \otimes \mathfrak{A}^T (\mathfrak{A} \nu)^\bot + (\mathfrak{A}\nu)^\bot \otimes \mathfrak{A}^T (\mathfrak{A}\nu)\right)(r,\theta_0) - I \right] =0 \text{ for } \theta_0 =-\pi/2,-\pi/2 + \omega \\ \end{split} \end{equation} where $(r,\theta)$ denote the standard polar coordinates and $(z_1,z_2)^\bot = (z_2,-z_1)$. Third, the matrix $\mathfrak{A} \mathfrak{A}^T$ is uniformly elliptic on $K_\omega$. Fourth, $\det \mathfrak{A} =1$ and \begin{equation} \partial_j( \mathfrak{A}_{ij}) = 0 \text{ for }i=1,2.
\end{equation} We now concern ourselves with solving the $\mathfrak{A}$-Stokes problem in the cone $K_\omega$: \begin{equation}\label{af_stokes_cone} \begin{cases} \diverge_\mathfrak{A} S_\mathfrak{A}(q,v) = G^1 &\text{in } K_\omega \\ \diverge_\mathfrak{A} v = G^2 &\text{in } K_\omega \\ v \cdot \mathfrak{A} \nu = G^3_\pm &\text{on } \Gamma_\pm \\ \mu \mathbb{D}_\mathfrak{A} v \mathfrak{A} \nu \cdot (\mathfrak{A} \nu)^\bot = G^4_\pm &\text{on } \Gamma_\pm, \end{cases} \end{equation} where here the operators $\diverge_\mathfrak{A}$ and $S_\mathfrak{A}$ are defined in the same way as $\diverge_{\mathcal{A}}$ and $S_\mathcal{A}$. Note that in the case that $\mathfrak{A} = I_{2\times 2}$, the system \eqref{af_stokes_cone} is the standard Stokes problem \begin{equation}\label{stokes_cone} \begin{cases} \diverge S(q,v) = G^1 &\text{in } K_\omega \\ \diverge v = G^2 &\text{in } K_\omega \\ v \cdot \nu = G^3_\pm &\text{on } \Gamma_\pm \\ \mu \mathbb{D} v \nu \cdot \tau = G^4_\pm &\text{on } \Gamma_\pm. \end{cases} \end{equation} We note that the assumptions in \eqref{frak_A_assump} are needed to show that the operators appearing in \eqref{af_stokes_cone} behave like the operators in \eqref{stokes_cone} near $0 \in \bar{K}_\omega$. Following \cite{kmr_1}, for $k \in \mathbb{N}$ and $\delta >0$ we define the weighted Sobolev spaces \begin{equation} W^k_\delta(K_\omega) = \{ u \;\vert\; \norm{u}_{W^k_\delta} < \infty\}, \end{equation} where \begin{equation} \ns{u}_{W^k_\delta} = \sum_{\abs{\alpha} \le k} \int_{K_\omega} \abs{x}^{2\delta} \abs{\partial^\alpha u(x)}^2 dx. \end{equation} We then define the trace spaces $W^{k-1/2}_\delta(\Gamma_\pm)$ as in \cite{kmr_1}. \begin{thm}\label{cone_solve} Let $\omega \in (0,\pi)$, $\delta_\omega \in [0,1)$ be given by \eqref{crit_wt}, and $\delta \in (\delta_\omega,1)$. Suppose that $\mathfrak{A}$ satisfies the four properties stated above. Assume that the data $G^1,G^2, G^3_\pm,G^4_\pm$ for the problem \eqref{af_stokes_cone} satisfy \begin{equation}\label{cone_solve_01} G^1 \in W^0_{\delta}(K_\omega), G^2 \in W^1_{\delta}(K_\omega), G^3_\pm \in W^{3/2}_{\delta}(\Gamma_\pm), G^4_\pm \in W^{1/2}_{\delta}(\Gamma_\pm) \end{equation} as well as the compatibility condition \begin{equation}\label{cone_solve_cc} \int_{K_\omega} G^2 = \int_{\Gamma_+} G^3_+ + \int_{\Gamma_-} G^3_- . \end{equation} Suppose that $(v,q) \in H^1(K_\omega) \times H^0(K_\omega)$ satisfy $\diverge_\mathfrak{A} v = G^2$, $v\cdot \mathfrak{A} \nu = G^3_\pm$ on $\Gamma_\pm$, and \begin{equation} \int_{K_\omega} \frac{\mu}{2} \mathbb{D}_\mathfrak{A} v : \mathbb{D}_\mathfrak{A} w - q \diverge_\mathfrak{A} w = \int_{K_\omega} G^1 \cdot w + \int_{\Gamma_+} G^4_+ w\cdot \frac{(\mathfrak{A} \nu)^\bot}{\abs{\mathfrak{A} \nu}} + \int_{\Gamma_-} G^4_- w\cdot \frac{(\mathfrak{A} \nu)^\bot}{\abs{\mathfrak{A} \nu}} \end{equation} for all $w \in \{ w \in H^1(K_\omega) \;\vert\; w\cdot (\mathfrak{A} \nu) =0 \text{ on } \Gamma_\pm\}$.
Finally, suppose that $v,q$ and all of the data $G^i$ are supported in $\bar{K}_\omega \cap B[0,1]$. Then $D^2 v, \nabla q \in W^0_{\delta}(K_\omega)$ and \begin{multline}\label{cone_solve_02} \ns{D^2 v}_{W^0_{\delta}} + \ns{\nabla q}_{W^0_{\delta}} \lesssim \ns{G^1}_{W^0_{\delta} } + \ns{G^2}_{W^1_{\delta}} + \ns{G^3_-}_{W^{3/2}_{\delta} } + \ns{G^3_+}_{W^{3/2}_{\delta}} + \ns{G^4_-}_{W^{1/2}_{\delta}} + \ns{G^4_+}_{W^{1/2}_{\delta}}. \end{multline} \end{thm} \begin{proof} In the case $\mathfrak{A} = I$ the result is essentially proved in Theorem 9.4.5 in \cite{kmr_3}, except that there the results are stated in a three-dimensional dihedral angle. However, the analysis begins with the problem in two dimensions and is easily adaptable to the $\mathfrak{A}$-Stokes problem \eqref{af_stokes_cone}. The key to the proof is an application of Theorem 8.2.1 of \cite{kmr_1}, which characterizes the solvability of elliptic systems in terms of the eigenvalues of an associated operator pencil. The assumptions on $\mathfrak{A}$, in particular \eqref{frak_A_assump}, guarantee that the ``leading operators'' (in the terminology of \cite{kmr_1}) associated to \eqref{af_stokes_cone} are exactly the operators appearing in \eqref{stokes_cone}, and hence the problems \eqref{af_stokes_cone} and \eqref{stokes_cone} give rise to the same associated operator pencil. The eigenvalues of the pencil associated to \eqref{stokes_cone} may be found in the ``G-G eigenvalue computations'' of \cite{orlt_sandig} (with $\chi_1 = \chi_2 = \pi/2$). Indeed, the latter guarantees that the strip \begin{equation} \{ z \in \mathbb{C} \;\vert\; 0 \le \operatorname{Re}(z) \le 1- \delta\} \end{equation} contains no eigenvalues of the operator pencil associated to the Stokes problem \eqref{stokes_cone} in the cone $K_\omega$, which are $\pm 1 + n \pi/\omega$ for $n \in \mathbb{Z}$. Thus we may use Theorem 8.2.1 of \cite{kmr_1} on \eqref{af_stokes_cone} and then argue as in Theorem 9.4.5 in \cite{kmr_3}. \end{proof} \subsection{The Stokes problem in $\Omega$} We now turn to the study of the Stokes problem in $\Omega$: \begin{equation}\label{stokes_omega} \begin{cases} \diverge S(q,v) = G^1 &\text{in } \Omega \\ \diverge v = G^2 &\text{in } \Omega \\ v \cdot \nu = G^3_+ &\text{on } \Sigma \\ \mu \mathbb{D} v \nu \cdot \tau = G^4_+ &\text{on } \Sigma\\ v \cdot \nu = G^3_- &\text{on } \Sigma_s \\ \mu \mathbb{D} v \nu \cdot \tau = G^4_- &\text{on } \Sigma_s. \end{cases} \end{equation} In what follows we will work with the spaces $W^k_\delta(\Omega)$, $W^{k-1/2}_\delta(\partial \Omega)$, and $\mathring{W}^k_\delta(\Omega)$ as defined in Appendix \ref{app_weight}. Next we define $\mathfrak{X}_\delta$ for $0 < \delta < 1$ to be the space of $6$-tuples \begin{equation} (G^1,G^2,G^3_+,G^3_-,G^4_+,G^4_-) \in W^{0}_\delta(\Omega) \times W^1_\delta(\Omega) \times W^{3/2}_\delta(\Sigma )\times W^{3/2}_\delta(\Sigma_s ) \times W^{1/2}_\delta(\Sigma) \times W^{1/2}_\delta(\Sigma_s) \end{equation} such that \begin{equation} \int_{\Omega} G^2 = \int_{\Sigma} G^3_+ + \int_{\Sigma_s} G^3_-. \end{equation} We will now formulate a definition of weak solution to \eqref{stokes_omega} for data in this space. \begin{dfn} Assume that $(G^1,G^2,G^3_+,G^3_-,G^4_+,G^4_-) \in \mathfrak{X}_\delta$ for some $0 < \delta < 1$.
We say that a pair $(v,q) \in H^1(\Omega) \times \mathring{H}^0(\Omega)$ is a weak solution to \eqref{stokes_omega} if $\diverge v = G^2$, $v\cdot \nu = G^3$ on $\partial \Omega$, and
\begin{equation}\label{stokes_om_weak_form}
\int_{\Omega} \frac{\mu}{2} \mathbb{D} v : \mathbb{D} w - q \diverge w = \int_{\Omega} G^1 \cdot w + \int_{\Sigma} G^4_+ (w\cdot \tau) + \int_{\Sigma_s} G^4_- (w \cdot \tau)
\end{equation}
holds for all $w \in \{ w \in H^1(\Omega) \;\vert\; w\cdot \nu =0 \text{ on } \partial \Omega\}$. Note that the integrals on the right side of \eqref{stokes_om_weak_form} are well-defined by virtue of \eqref{hardy_embed} and \eqref{hardy_embed_2}. Also $G^2 \in H^0(\Omega)$ and $G^3 \in H^{1/2}(\partial \Omega)$ for the same reason.
\end{dfn}
We have the following weak existence result.
\begin{thm}\label{stokes_om_weak} Let $(G^1,G^2,G^3_+,G^3_-,G^4_+,G^4_-) \in \mathfrak{X}_\delta$ for some $0 < \delta < 1$. Then there exists a unique pair $(v,q) \in H^1(\Omega) \times \mathring{H}^0(\Omega)$ that is a weak solution to \eqref{stokes_omega}. Moreover,
\begin{equation}\label{stokes_weak_0}
\ns{v}_1 + \ns{q}_0 \lesssim \ns{G^1}_{W^0_\delta} + \ns{G^2}_{W^1_\delta} + \ns{G^3}_{W^{3/2}_\delta} + \ns{G^4}_{W^{1/2}_\delta}.
\end{equation}
\end{thm}
\begin{proof}
We first use \eqref{hardy_embed} to see that $G^2 \in H^0(\Omega)$ and $G^3 \in H^{1/2}(\partial \Omega)$. Choose $\bar{v} \in W^2_\delta(\Omega)$ such that $\bar{v}\vert_{\Sigma} = G^3_+$ and $\bar{v}\vert_{\Sigma_s} = G^3_-$ with $\norm{\bar{v}}_{W^2_\delta} \lesssim \norm{G^3}_{W^{3/2}_\delta}$. Using, for instance, the analysis in \cite{tice_neumann}, we may find $\varphi \in H^2(\Omega)$ solving
\begin{equation}
\begin{cases} -\Delta \varphi = G^2 - \diverge \bar{v} &\text{in }\Omega \\ \nabla \varphi \cdot \nu = G^3 &\text{on } \partial \Omega \end{cases}
\end{equation}
with
\begin{equation}
\ns{\varphi}_{2} \lesssim \ns{G^2}_0 + \ns{G^3}_{1/2} \lesssim \ns{G^2}_{W^1_\delta} + \ns{G^3}_{W^{3/2}_\delta}.
\end{equation}
Next we find $u \in {}_0 H^1(\Omega)$ with $\diverge u =0$ such that
\begin{equation}
\int_{\Omega} \frac{\mu}{2} \mathbb{D} u : \mathbb{D} w = \int_{\Omega} G^1 \cdot w - \frac{\mu}{2} \mathbb{D} (\nabla \varphi + \bar{v}) : \mathbb{D} w + \int_{\partial \Omega} G^4 (w\cdot \tau)
\end{equation}
for all $w \in {}_0 H^1(\Omega)$ such that $\diverge w =0$. This is readily done with the Riesz representation theorem, and we find that
\begin{equation}
\ns{u}_1 \lesssim \ns{G^1}_{W^0_\delta} + \ns{G^2}_{W^1_\delta} + \ns{G^3}_{W^{3/2}_\delta} + \ns{G^4}_{W^{1/2}_\delta}.
\end{equation}
Finally, we use Theorem \ref{pressure_lagrange} (with $\eta =0$ so that $\mathcal{A} = I$, etc.) to find $q \in \mathring{H}^0(\Omega)$ such that \eqref{stokes_om_weak_form} holds with $v = u + \bar{v}+ \nabla \varphi$. We then easily deduce the estimate \eqref{stokes_weak_0}, which in turn implies the uniqueness claim.
\end{proof}
Next we turn to the issue of second-order regularity. To develop this theory we will first need the following technical result, which constructs a special diffeomorphism.
\begin{prop}\label{wedge_diffeo} Let $K_\omega \subset \mathbb{R}^2$ be the cone of opening $\omega \in (0,\pi)$ defined by \eqref{cone_def}, where $\omega$ is the angle of $\Omega$ near the corners, and let $0 < r < \min\{\ell,\zeta_0(-\ell)/2\}$.
Then there exists a smooth diffeomorphism $\Psi : K_\omega \to \Psi(K_\omega) \subset \mathbb{R}^2$ satisfying the following properties.
\begin{enumerate}
\item $\Psi$ is smooth up to $\bar{K}_\omega$.
\item $\Gamma_- = \Psi^{-1}(\{x \in \mathbb{R}^2 \;\vert\; x_1=-\ell, x_2 < \zeta_0(-\ell)\})$.
\item We have that $\Psi^{-1}( \Sigma \cap B((-\ell,\zeta_0(-\ell)),r)) \subseteq \Gamma_+ \cap B(0,R)$ and $\Psi^{-1}( \Omega \cap B((-\ell,\zeta_0(-\ell)),r) ) \subseteq K_\omega \cap B(0,R)$ for $R = \sqrt{2r^2 + 2r^4\ns{\zeta_0}_{C^2}}$.
\item The matrix function $\mathfrak{A}(x) = (D\Psi(x))^{-T}$ is smooth on $\bar{K}_\omega$, and all its derivatives are bounded. Moreover, $\mathfrak{A}$ satisfies the four properties listed near \eqref{frak_A_assump}.
\end{enumerate}
\end{prop}
\begin{proof}
Let $\chi \in C^\infty(\mathbb{R})$ be such that $\chi(s) =1$ for $s \le r$ and $\chi(s) =0$ for $s \ge 2r$. Let $\alpha = \zeta_0'(-\ell)$, which is related to $\omega$ via $-\text{cotan}(\omega) = \alpha$. Define $\zeta : [0,\infty) \to \mathbb{R}$ by
\begin{equation}
\zeta(s) = \chi(s) \zeta_0(-\ell +s) + (1-\chi(s)) \alpha s,
\end{equation}
which is well-defined for all $s \in [0,\infty)$ since $2r < 2\ell$ and hence $\zeta_0(-\ell+s)$ is defined on the support of $\chi$. It's easy to see that $\zeta$ is smooth, $\zeta(0) = \zeta_0(-\ell)$, and $\zeta'(0) = \alpha$. Also, $\zeta(s) - \alpha s$ is compactly supported in $[0,\infty)$. We also define the open set
\begin{equation}
G_\zeta = \{x \in \mathbb{R}^2 \;\vert\; x_1 >-\ell \text{ and } x_2 < \zeta(x_1+\ell)\}
\end{equation}
and note that
\begin{equation}\label{wedge_diffeo_1}
G_\zeta \cap B((-\ell,\zeta_0(-\ell)),r) = \Omega \cap B((-\ell,\zeta_0(-\ell)),r)
\end{equation}
since $\zeta(s) = \zeta_0(-\ell+s)$ for $s \in [0, r]$. Next we define the map $\Psi : K_\omega \to \mathbb{R}^2$ via
\begin{equation}
\Psi(x) = (x_1 - \ell, x_2 -\alpha x_1 + \zeta(x_1)).
\end{equation}
It is a trivial matter to see that $\Psi$ is smooth on $\bar{K}_\omega$ and that $\Psi$ is a smooth diffeomorphism from $K_\omega$ to $G_\zeta$ with inverse given by
\begin{equation}\label{wedge_diffeo_2}
\Psi^{-1}(y_1,y_2) = (y_1+\ell, y_2 - \zeta(y_1+\ell) + \alpha (y_1+\ell)).
\end{equation}
This proves the first item, and the second item follows trivially. To prove the third item we first note that $\Psi(\Gamma_+) = \{x \in \mathbb{R}^2 \;\vert\; x_1 >-\ell, x_2 = \zeta(x_1+\ell)\}$. From \eqref{wedge_diffeo_1} and \eqref{wedge_diffeo_2} we find that if $y \in \Omega \cap B((-\ell,\zeta_0(-\ell)),r)$ then
\begin{equation}
\abs{\Psi^{-1}(y)}^2\le (y_1+\ell)^2 + 2(y_2 - \zeta_0(-\ell))^2 + 2[\zeta_0(-\ell) - \zeta_0(y_1) + \alpha(y_1+\ell) ]^2 \le 2 r^2 + 2 r^4 \ns{\zeta_0}_{C^2}.
\end{equation}
A similar calculation works for $y \in \Sigma \cap B((-\ell,\zeta_0(-\ell)),r)$, completing the proof of the third item. We now turn to the proof of the fourth item. The matrix $\mathfrak{A}(x) = (D\Psi(x))^{-T}$ is given by
\begin{equation}\label{wedge_diffeo_3}
\mathfrak{A}(x) = \begin{pmatrix} 1 & \alpha - \zeta'(x_1) \\ 0 & 1 \end{pmatrix}.
\end{equation}
From this we easily deduce that $\mathfrak{A}$ is smooth with derivatives of all orders bounded in $\bar{K}_\omega$. The equality \eqref{wedge_diffeo_3} implies that $\mathfrak{A}$ satisfies \eqref{frak_A_assump}.
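For the reader's convenience we record the product $\mathfrak{A} \mathfrak{A}^T$ explicitly, since it is what drives the ellipticity computation below. Writing $\beta(x_1) = \alpha - \zeta'(x_1)$, the formula \eqref{wedge_diffeo_3} gives
\begin{equation}
\mathfrak{A}(x) \mathfrak{A}^T(x) = \begin{pmatrix} 1 + \beta(x_1)^2 & \beta(x_1) \\ \beta(x_1) & 1 \end{pmatrix},
\end{equation}
a symmetric matrix with determinant $1$ and trace $2 + \beta(x_1)^2$, so its smallest eigenvalue is
\begin{equation}
1 + \frac{\beta(x_1)^2 - \sqrt{\beta(x_1)^4 + 4\beta(x_1)^2}}{2},
\end{equation}
which is positive and non-increasing as a function of $\beta(x_1)^2$.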
The fact that $\alpha - \zeta'(x_1)$ is compactly supported in $[0,\infty)$ then implies that $\mathfrak{A} \mathfrak{A}^T$ is uniformly elliptic; indeed, it is easily verified that
\begin{equation}
(\mathfrak{A}(x) \mathfrak{A}^T(x))_{ij} \xi_i \xi_j \ge \gamma \abs{\xi}^2
\end{equation}
for all $x \in \bar{K}_\omega$, where
\begin{equation}
\gamma = 1 + \frac{\norm{\alpha - \zeta'}_{L^\infty}^2 -\sqrt{4 \norm{\alpha - \zeta'}_{L^\infty}^2 + \norm{\alpha - \zeta'}_{L^\infty}^4} }{2} > 0.
\end{equation}
Finally, we note that $\partial_j(\mathfrak{A}_{ij})=0$ for $i=1,2$, which follows by direct computation. This completes the proof of the fourth item.
\end{proof}
We may now proceed to the proof of second-order regularity.
\begin{thm}\label{stokes_om_reg} Let $\omega \in (0,\pi)$ be the angle formed by $\zeta_0$ at the corners of $\Omega$, $\delta_\omega \in [0,1)$ be given by \eqref{crit_wt}, and $\delta \in (\delta_\omega,1)$. Let $(G^1,G^2,G^3_+,G^3_-,G^4_+,G^4_-) \in \mathfrak{X}_{\delta}$, and let $(v,q) \in H^1(\Omega) \times \mathring{H}^0(\Omega)$ be the weak solution to \eqref{stokes_omega} constructed in Theorem \ref{stokes_om_weak}. Then $v \in W^2_{\delta}(\Omega)$, $q \in \mathring{W}^1_{\delta}(\Omega)$, and
\begin{equation}\label{stokes_om_reg_0}
\ns{v}_{W^2_{\delta}} + \ns{q}_{\mathring{W}^1_{\delta}} \lesssim \ns{G^1}_{W^0_{\delta} } + \ns{G^2}_{W^1_{\delta}} + \ns{G^3}_{W^{3/2}_{\delta} } + \ns{G^4}_{W^{1/2}_{\delta}} .
\end{equation}
\end{thm}
\begin{proof}
For the sake of brevity we will only sketch the proof. The omitted details may be filled in readily using standard arguments.

\emph{Step 1 -- Estimates away from the corners }

Away from the corners we know that $\partial \Omega$ is $C^2$, so we may apply the standard elliptic regularity theory (see for example \cite{adn_2}) for the Stokes problem with boundary conditions as in \eqref{stokes_omega} to deduce that if $V \subset \Omega$ is an open set whose boundary is $C^2$ and agrees with $\partial \Omega$ except near the corners, then $(v,q) \in H^2(V) \times H^1(V)$ and
\begin{equation}\label{stokes_om_reg_1}
\ns{v}_{H^2(V)} + \ns{q}_{H^1(V)} \le C(V) \left( \ns{G^1}_{W^0_\delta} + \ns{G^2}_{W^1_\delta} + \ns{G^3}_{W^{3/2}_\delta} + \ns{G^4}_{W^{1/2}_\delta} \right).
\end{equation}
Here we have used the fact that $V$ avoids the corners to trivially estimate
\begin{multline}
\ns{G^1}_{H^0(W) } + \ns{G^2}_{H^1(W)} + \ns{G^3}_{H^{3/2}(\bar{W} \cap \partial \Omega) } + \ns{G^4}_{H^{1/2}(\bar{W} \cap \partial \Omega) } \\
\lesssim \ns{G^1}_{W^0_\delta} + \ns{G^2}_{W^1_\delta} + \ns{G^3}_{W^{3/2}_\delta} + \ns{G^4}_{W^{1/2}_\delta},
\end{multline}
where $V \subset W \subset \Omega$ is another open set that avoids the corners of $\Omega$.

\emph{Step 2 -- Estimates near the corners }

The key step is to get weighted estimates for the solution near the corners of the domain. To this end we introduce a small parameter
\begin{equation}\label{stokes_om_reg_2}
0 < r < \min\left\{ \ell, \frac{\zeta_0(-\ell)}{2}, \frac{-1 + \sqrt{1 + 2\ns{\zeta_0}_{C^2}} }{2 \ns{\zeta_0}_{C^2} } \right\}
\end{equation}
and consider $U_r = \Omega \cap B((-\ell,\zeta_0(- \ell)),r)$. We choose a cutoff function $\psi \in C^\infty_c(B((-\ell,\zeta_0(- \ell)),r))$ such that $\psi\ge 0$ and $\psi =1$ on $B((-\ell,\zeta_0(- \ell)),r/2)$.
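Before proceeding we record why the third restriction in \eqref{stokes_om_reg_2} was imposed. Writing $N = \ns{\zeta_0}_{C^2}$, the quantity $t_\ast = (-1+\sqrt{1+2N})/(2N)$ is the positive root of $2Nt^2 + 2t = 1$ and satisfies $t_\ast < 1$; consequently $r < t_\ast$ implies $r^2 < r < t_\ast$ and $r^4 < t_\ast^2$, and hence
\begin{equation}
2r^2 + 2 r^4 N < 2 t_\ast + 2 N t_\ast^2 = 1.
\end{equation}
This is what allows us to invoke the third item of Proposition \ref{wedge_diffeo} below.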
By using $\psi v$ as a test function in the weak formulation and integrating by parts (which is justified by Step 1 since $\nabla \psi$ is supported away from the corner) we find that $(\tilde{v} ,\tilde{q}) = (v \psi, q \psi)$ is a weak solution to \eqref{stokes_omega} with $G^i$ replaced by $\tilde{G}^i$, for
\begin{equation}
\begin{split}
\tilde{G}^1 & = \psi G^1 - \mu \mathbb{D} v \nabla \psi - \mu \diverge(v \otimes \nabla \psi + \nabla \psi \otimes v) + q \nabla \psi \\
\tilde{G}^2 & = \psi G^2 + v\cdot \nabla \psi \\
\tilde{G}^3 & = \psi G^3 \\
\tilde{G}^4 & = \psi G^4 + \mu (v \otimes \nabla \psi + \nabla \psi \otimes v )\nu \cdot \tau.
\end{split}
\end{equation}
It's clear that $(\tilde{G}^1,\tilde{G}^2,\tilde{G}^3,\tilde{G}^4) \in \mathfrak{X}_{\delta}$ and that $(\tilde{v} ,\tilde{q}) \in H^1(\Omega) \times H^0(\Omega)$. Next we note that because of the assumption \eqref{stokes_om_reg_2} we know that $\bar{U}_r \cap \partial \Omega$ is actually smooth away from the corner point $(-\ell,\zeta_0(-\ell))$; indeed, the upper boundary is the graph of $\zeta_0$, which is smooth, and the side boundary is a straight line. We then employ the diffeomorphism $\Psi^{-1}$ constructed in Proposition \ref{wedge_diffeo} to map $U_r$ to $\Psi^{-1}(U_r) \subseteq K_\omega$, where $K_\omega$ is a cone of opening angle $\omega$. Let $(w,\theta)$ and $\mathcal{G}^i$ denote the composition of $(\tilde{v} ,\tilde{q})$ and $\tilde{G}^i$, respectively, with $\Psi^{-1}$. It is then a simple matter to verify that $(w,\theta) \in H^1(K_\omega) \times H^0(K_\omega)$ and that
\begin{equation}
\mathcal{G}^1 \in W^0_{\delta}(K_\omega), \mathcal{G}^2 \in W^1_{\delta}(K_\omega), \mathcal{G}^3_\pm \in W^{3/2}_{\delta}(\Gamma_\pm), \mathcal{G}^4_\pm \in W^{1/2}_{\delta}(\Gamma_\pm)
\end{equation}
where $\Gamma_\pm$ denote the top and bottom sides of the cone $K_\omega$. Moreover, $(w,\theta)$ and the $\mathcal{G}^i$ are all supported in $\bar{K}_\omega \cap B[0,1]$ due to the third item of Proposition \ref{wedge_diffeo}, since (as noted above) \eqref{stokes_om_reg_2} guarantees that $2r^2 + 2r^4 \ns{\zeta_0}_{C^2} \le 1$. Next we use the diffeomorphism to change variables in the weak formulation to derive new identities for $(w,\theta)$: $\diverge_\mathfrak{A} w = \mathcal{G}^2$ in $K_\omega$, $w \cdot \mathfrak{A} \nu = \mathcal{G}^3 \abs{\mathfrak{A} \nu}$ on $\Gamma_\pm$, and
\begin{equation}
\int_{K_\omega} \frac{\mu}{2} \mathbb{D}_\mathfrak{A} w : \mathbb{D}_\mathfrak{A} \Upsilon - \theta \diverge_\mathfrak{A} \Upsilon = \int_{K_\omega} \mathcal{G}^1 \cdot \Upsilon + \int_{\Gamma_+} \mathcal{G}^4_+ \Upsilon \cdot \frac{(\mathfrak{A} \nu)^\bot}{\abs{\mathfrak{A} \nu}} + \int_{\Gamma_-} \mathcal{G}^4_- \Upsilon \cdot \frac{(\mathfrak{A} \nu)^\bot}{\abs{\mathfrak{A} \nu}}
\end{equation}
for all $\Upsilon \in H^1(K_\omega)$ such that $\Upsilon\cdot (\mathfrak{A} \nu) =0$ on $\Gamma_\pm$.
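We note, as a guide to this computation, that the appearance of the vector $\mathfrak{A}\nu$ is dictated by the standard transformation rule for normal vectors: under the change of variables $y = \Psi(x)$, with $\mathfrak{A} = (D\Psi)^{-T}$, the outward unit normal $n$ along $\partial G_\zeta$ pulls back to $n \circ \Psi = \mathfrak{A}\nu/\abs{\mathfrak{A}\nu}$, where $\nu$ is the outward unit normal along $\partial K_\omega$. This is why the boundary conditions for $(w,\theta)$ are phrased in terms of $\mathfrak{A}\nu$ and $(\mathfrak{A}\nu)^\bot$.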
This means that $(w,\theta)$ is a weak solution to the problem \eqref{af_stokes_cone} with $G^1,G^2$ replaced by $\mathcal{G}^1,\mathcal{G}^2$ and $G^3_\pm,G^4_\pm$ replaced by $\mathcal{G}^3_\pm\abs{\mathfrak{A} \nu} ,\mathcal{G}^4_\pm\abs{\mathfrak{A}\nu}$. The properties of $\Psi$ given in Proposition \ref{wedge_diffeo} guarantee that Theorem \ref{cone_solve} is applicable, and we then arrive at the inclusion $(w,\theta) \in W^2_{\delta} \times W^1_{\delta}$ and the estimate
\begin{equation}\label{stokes_om_reg_3}
\ns{w}_{W^2_{\delta}} + \ns{\theta}_{W^1_{\delta}} \lesssim \ns{\mathcal{G}^1}_{W^0_{\delta} } + \ns{\mathcal{G}^2}_{W^1_{\delta}} + \ns{\mathcal{G}^3_-}_{W^{3/2}_{\delta} } + \ns{\mathcal{G}^3_+}_{W^{3/2}_{\delta}} + \ns{\mathcal{G}^4_-}_{W^{1/2}_{\delta}} + \ns{\mathcal{G}^4_+}_{W^{1/2}_{\delta}} .
\end{equation}
Upon changing coordinates back to $\Omega$ we then find that
\begin{multline}\label{stokes_om_reg_4}
\ns{\tilde{v}}_{W^2_{\delta}(U_r) } + \ns{\tilde{q}}_{W^1_{\delta}( U_r) } \lesssim \ns{\tilde{G}^1}_{W^0_{\delta} } + \ns{\tilde{G}^2}_{W^1_{\delta}} + \ns{\tilde{G}^3}_{W^{3/2}_{\delta} } + \ns{\tilde{G}^4}_{W^{1/2}_{\delta}} \\
\lesssim \ns{G^1}_{W^0_{\delta} } + \ns{G^2}_{W^1_{\delta}} + \ns{G^3}_{W^{3/2}_{\delta} } + \ns{G^4}_{W^{1/2}_{\delta}} + \ns{v}_{H^1} + \ns{q}_{H^0}.
\end{multline}
A similar argument provides us with an estimate analogous to \eqref{stokes_om_reg_4} near the right corner of $\Omega$, namely the point $(\ell,\zeta_0(\ell))$. In this case we must employ a reflection of $\Omega$ across the $x_2$ axis in order to use the diffeomorphism from Proposition \ref{wedge_diffeo}, but this does not change any of the essential properties of the diffeomorphism, and so the analysis proceeds as above. Writing $U_r^\pm$ for the neighborhoods of the left corner $(-)$ and the right corner $(+)$, noting that our cutoff functions are unity on $U^\pm_{r/2}$, and employing the estimate \eqref{stokes_weak_0}, we then find that
\begin{multline}\label{stokes_om_reg_5}
\ns{v}_{W^2_{\delta}(U^-_{r/2}) } + \ns{q}_{W^1_{\delta}( U^-_{r/2}) } + \ns{v}_{W^2_{\delta}(U^+_{r/2}) } + \ns{q}_{W^1_{\delta}( U^+_{r/2}) } \\
\le C(r) \left( \ns{G^1}_{W^0_{\delta} } + \ns{G^2}_{W^1_{\delta}} + \ns{G^3}_{W^{3/2}_{\delta} } + \ns{G^4}_{W^{1/2}_{\delta}}\right).
\end{multline}

\emph{Step 3 -- Synthesis }

To conclude we simply sum \eqref{stokes_om_reg_1} and \eqref{stokes_om_reg_5} with an appropriate choice of $V$ and $r$ to deduce that \eqref{stokes_om_reg_0} holds.
\end{proof}
It will be useful to rephrase Theorem \ref{stokes_om_reg} as follows. For $0 < \delta < 1$ we define the operator
\begin{equation}\label{stokes_om_iso_def1}
T_\delta : W^2_\delta(\Omega) \times \mathring{W}^1_\delta(\Omega) \to \mathfrak{X}_\delta
\end{equation}
via
\begin{equation}\label{stokes_om_iso_def2}
T_\delta(v,q) = (\diverge S(q,v), \diverge v, v\cdot \nu \vert_{\Sigma},v\cdot \nu \vert_{\Sigma_s}, \mu \mathbb{D} v \nu \cdot \tau \vert_{\Sigma}, \mu \mathbb{D} v \nu \cdot \tau \vert_{\Sigma_s}).
\end{equation}
We may then deduce the following from Theorems \ref{stokes_om_weak} and \ref{stokes_om_reg}.
\begin{cor}\label{stokes_om_iso} Let $\delta \in (\delta_\omega,1)$ be as in Theorem \ref{stokes_om_reg}. Then the operator $T_{\delta}$ defined by \eqref{stokes_om_iso_def1} and \eqref{stokes_om_iso_def2} is an isomorphism.
\end{cor} \subsection{The $\mathcal{A}$-Stokes problem in $\Omega$} We now assume that $\eta \in W^{5/2}_\delta$ is a given function with $\delta \in (0,1)$, which in turn determines $\mathcal{A}, J$, etc, and we consider the problem \begin{equation}\label{A_stokes} \begin{cases} \diverge_{\mathcal{A}} S_\mathcal{A}(q,v) = G^1 & \text{in }\Omega \\ \diverge_{\mathcal{A}} v = G^2 & \text{in } \Omega \\ v\cdot \mathcal{N} = G^3_+ &\text{on } \Sigma \\ \mu \mathbb{D}_\mathcal{A} v \mathcal{N} \cdot \mathcal{T} = G^4_+ &\text{on } \Sigma \\ v\cdot \nu = G^3_- &\text{on } \Sigma_s \\ \mu \mathbb{D}_\mathcal{A} v \nu \cdot \tau = G^4_- &\text{on } \Sigma_s. \end{cases} \end{equation} Note here that $\mathcal{N} = N - \partial^\alphartial_1 \eta e_1$ for $N = -\partial^\alphartial_1\zeta_0 e_1 + e_2$ an outward normal vector on $\Sigma$ and $\mathcal{T} = T + \partial^\alphartial_1 \eta e_2$ for $T = e_1 + \partial^\alphartial_1 \zeta_0 e_2$ the associated tangent vector. We now show that under a smallness assumption on $\eta$, the problem \eqref{A_stokes} is solvable in weighted Sobolev spaces. We begin by introducing the operator \begin{equation}\label{A_stokes_om_iso_def1} T_\delta[\eta] : W^2_\delta(\Omega) \times \mathring{W}^1_\delta(\Omega) \to \mathfrak{X}_\delta \end{equation} given by \begin{equation}\label{A_stokes_om_iso_def2} T_\delta[\eta](v,q) = (\diverge_{\mathcal{A}} S_\mathcal{A}(q,v), \diverge_{\mathcal{A}} v, v\cdot \mathcal{N} \vert_{\Sigma}, v\cdot \nu \vert_{\Sigma_s}, \mu \mathbb{D} v \mathcal{N} \cdot \mathcal{T} \vert_{\Sigma}, \mu \mathbb{D} v \nu \cdot \tau \vert_{\Sigma_s}). \end{equation} \begin{prop}\label{A_stokes_T_well_def} Suppose that $\eta \in W^{5/2}_\delta$ is a given function that determines $\mathcal{A}, J$, etc. Then the map $T_\delta[\eta]$ defined above is well-defined and bounded. \end{prop} \begin{proof} Proposition \ref{weighted_embed} implies that \begin{equation} W^2_\delta(\Omega) \hookrightarrow W^{1,r}(\Omega) \text{ for } 1 \le r < \frac{2}{\delta}. \end{equation} We similarly find that $\eta \in H^{s+1/2}$ for each $1 < s < \min\{\partial^\alphartiali/\omega,2\}$. This, the usual Sobolev embeddings, the weighted Sobolev embeddings of Appendix \ref{app_weight}, the product estimates of Appendix \ref{app_prods}, and trace theory then imply that the map $T_\delta[\eta]$ is well-defined from $W^2_\delta(\Omega) \times W^1_\delta(\Omega)$ to $\mathfrak{X}_\delta$. \end{proof} In fact, the map $T_\delta[\eta]$ is an isomorphism for some values of $\delta$ under a smallness assumption on $\eta$. \begin{thm} \label{A_stokes_om_iso} Let $\delta \in (\delta_\omega,1)$ be as in Theorem \ref{stokes_om_reg}. There exists a $\gamma >0$ such that if $\ns{\eta}_{W^{5/2}_{\delta} } < \gamma$, then the operator $T_{\delta}[\eta]$ defined by \eqref{A_stokes_om_iso_def1} and \eqref{A_stokes_om_iso_def2} is an isomorphism. \end{thm} \begin{proof} Assume initially that $\gamma < 1$ is as small as in Lemma \ref{eta_small}. 
We can rewrite \eqref{A_stokes} as
\begin{equation}\label{A_stokes_om_iso_1}
T_\delta(v,q) = (\mathcal{G}^1(v,q), \mathcal{G}^2(v), \mathcal{G}^3_+(v), \mathcal{G}^3_-, \mathcal{G}^4_+(v), \mathcal{G}^4_-(v)) =: \mathcal{G}(v,q),
\end{equation}
where $T_\delta$ is the operator defined by \eqref{stokes_om_iso_def1} and \eqref{stokes_om_iso_def2} and
\begin{equation}
\begin{split}
\mathcal{G}^1(v,q) &= G^1 + \diverge_{I-\mathcal{A}} S_{\mathcal{A}}(q,v) - \diverge \mu \mathbb{D}_{I-\mathcal{A}}(v) \\
\mathcal{G}^2(v) &= G^2 + \diverge_{I-\mathcal{A}} v \\
\mathcal{G}^3_+(v) &= (1+(\partial_1\zeta_0)^2)^{-1/2}[G^3_+ + \partial_1 \eta v_1 ] \\
\mathcal{G}^4_+(v) &= (1+(\partial_1\zeta_0)^2)^{-1} [G^4_+ + \mu \mathbb{D}_{I-\mathcal{A}} v N\cdot T -\mu \partial_1 \eta (\mathbb{D}_\mathcal{A} v N \cdot e_2 - \mathbb{D}_\mathcal{A} v e_1 \cdot T ) -\mu (\partial_1 \eta)^2 \mathbb{D}_\mathcal{A} v e_1 \cdot e_2 ] \\
\mathcal{G}^3_- & = G^3_- \\
\mathcal{G}^4_-(v) & = G^4_- + \mu \mathbb{D}_{I-\mathcal{A}} v \nu \cdot \tau.
\end{split}
\end{equation}
A variant of the argument used in Proposition \ref{A_stokes_T_well_def} shows that $\mathcal{G}: W^2_\delta(\Omega) \times \mathring{W}^1_\delta(\Omega) \to \mathfrak{X}_\delta$ and that we have the estimates
\begin{equation}\label{A_stokes_om_iso_2}
\begin{split}
&\norm{\mathcal{G}(v,q) }_{\mathfrak{X}_\delta} \le P(\norm{\eta}_{W^{5/2}_\delta} ) \left( \norm{(G^1,G^2,G^3_+,G^3_-,G^4_+,G^4_-)}_{\mathfrak{X}_\delta } + \norm{v}_{W^2_\delta} + \norm{q}_{\mathring{W}^1_\delta} \right) \\
&\norm{\mathcal{G}(v_1,q_1) - \mathcal{G}(v_2,q_2)}_{\mathfrak{X}_\delta} \le P(\norm{\eta}_{W^{5/2}_\delta} ) \left( \norm{v_1 - v_2}_{W^2_\delta} + \norm{q_1 - q_2 }_{\mathring{W}^1_\delta} \right),
\end{split}
\end{equation}
where $P$ is a polynomial with non-negative coefficients such that $P(0) = 0$. The coefficients depend on $\Omega$ and the parameters of the problem but not on $v,q, \eta$ or the data. Since $\delta \in (\delta_\omega,1)$ as in Theorem \ref{stokes_om_reg}, we know that $T_{\delta}$ is an isomorphism. Consequently, \eqref{A_stokes_om_iso_1} is equivalent to the fixed point problem
\begin{equation}\label{A_stokes_om_iso_3}
(v,q) = T_{\delta}^{-1} \mathcal{G}(v,q)
\end{equation}
on $W^2_{\delta}(\Omega) \times \mathring{W}^1_{\delta}(\Omega)$. The fact that $T_{\delta}$ is an isomorphism and the estimate \eqref{A_stokes_om_iso_2} then imply that if $\gamma$ is sufficiently small, then
\begin{equation}\label{A_stokes_om_iso_4}
P(\norm{\eta}_{W^{5/2}_\delta} ) \norm{T^{-1}_{\delta}}_{\mathfrak{X}_{\delta} \to W^2_{\delta} \times \mathring{W}^1_{\delta} } \le 1/2
\end{equation}
and so the map $(v,q) \mapsto T_{\delta}^{-1} \mathcal{G}(v,q)$ is a contraction. Hence \eqref{A_stokes_om_iso_3} admits a unique solution $(v,q) \in W^2_{\delta}(\Omega) \times \mathring{W}^1_{\delta}(\Omega)$, which in turn implies that \eqref{A_stokes} is uniquely solvable for every $6$-tuple $(G^1,\dotsc,G^4_-) \in \mathfrak{X}_{\delta}$. This, the first estimate in \eqref{A_stokes_om_iso_2}, and \eqref{A_stokes_om_iso_4} then imply that $T_{\delta}[\eta]$ is an isomorphism with this choice of $\gamma$.
\end{proof}
\subsection{The $\mathcal{A}$-Stokes problem in $\Omega$ with $\beta \neq 0$}
Previously we considered the $\mathcal{A}$-Stokes problem \eqref{A_stokes} with the boundary condition $\mu \mathbb{D}_\mathcal{A} v \nu \cdot \tau = G^4_-$ on $\Sigma_s$.
Now we consider the problem with the boundary condition $\mu \mathbb{D}_\mathcal{A} v \nu \cdot \tau + \beta v\cdot \tau = G^4_-$ on $\Sigma_s$:
\begin{equation}\label{A_stokes_beta}
\begin{cases} \diverge_{\mathcal{A}} S_\mathcal{A}(q,v) = G^1 & \text{in }\Omega \\ \diverge_{\mathcal{A}} v = G^2 & \text{in } \Omega \\ v\cdot \mathcal{N} = G^3_+ &\text{on } \Sigma \\ \mu \mathbb{D}_\mathcal{A} v \mathcal{N} \cdot \mathcal{T} = G^4_+ &\text{on } \Sigma \\ v\cdot \nu = G^3_- &\text{on } \Sigma_s \\ \mu \mathbb{D}_\mathcal{A} v \nu \cdot \tau + \beta v\cdot \tau = G^4_- &\text{on } \Sigma_s, \end{cases}
\end{equation}
where $\beta>0$ is the Navier slip friction coefficient on the vessel walls.
\begin{thm}\label{A_stokes_beta_solve} Let $\delta \in (\delta_\omega,1)$ be as in Theorem \ref{stokes_om_reg}. Suppose that $\ns{\eta}_{W^{5/2}_{\delta} } < \gamma$, where $\gamma$ is as in Theorem \ref{A_stokes_om_iso}. If $(G^1,G^2,G^3_+,G^3_-,G^4_+,G^4_-) \in \mathfrak{X}_{\delta}$ then there exists a unique $(v,q) \in W^2_{\delta}(\Omega) \times \mathring{W}^1_{\delta}(\Omega)$ solving \eqref{A_stokes_beta}. Moreover, the solution obeys the estimate
\begin{equation}\label{A_stokes_beta_solve_0}
\ns{v}_{W^2_{\delta}} + \ns{q}_{\mathring{W}^1_{\delta}} \lesssim \ns{G^1}_{W^0_{\delta} } + \ns{G^2}_{W^1_{\delta}} + \ns{G^3_+}_{W^{3/2}_{\delta} } +\ns{G^3_-}_{W^{3/2}_{\delta} } + \ns{G^4_+}_{W^{1/2}_{\delta}} + \ns{G^4_-}_{W^{1/2}_{\delta}}.
\end{equation}
\end{thm}
\begin{proof}
For $0 < \delta < 1$ define the operator $R: W^2_\delta(\Omega) \times \mathring{W}^1_\delta(\Omega) \to \mathfrak{X}_\delta$ via
\begin{equation}
R(v,q) = (0,0,0,0,0,\beta v\cdot \tau\vert_{\Sigma_s}),
\end{equation}
which is bounded and well-defined since $v\cdot \tau \in W^{3/2}_\delta(\Sigma_s)$. In fact, the embedding $W^{3/2}_\delta(\Sigma_s) \hookrightarrow W^{1/2}_\delta(\Sigma_s)$ is compact, so $R$ is a compact operator. Theorem \ref{A_stokes_om_iso} tells us that the operator $T_{\delta}[\eta]$ is an isomorphism from $W^2_\delta(\Omega) \times \mathring{W}^1_\delta(\Omega)$ to $\mathfrak{X}_\delta$. Since $R$ is compact we have that $T_{\delta}[\eta] + R$ is a Fredholm operator of index $0$. We claim that $T_{\delta}[\eta] + R$ is injective. To see this we assume that $T_{\delta}[\eta](v,q) + R(v,q) =0$, which is equivalent to \eqref{A_stokes_beta} with vanishing $G^i$ data. Multiplying the first equation in \eqref{A_stokes_beta} by $J v$ and integrating by parts as in Lemma \ref{geometric_evolution}, we find that
\begin{equation}
\int_\Omega \frac{\mu}{2} \abs{\mathbb{D}_\mathcal{A} v}^2 J + \int_{\Sigma_s} \beta\abs{v\cdot \tau}^2 J =0
\end{equation}
and thus that $v=0$. Then $0 = \nabla_\mathcal{A} q = \mathcal{A} \nabla q$, which implies, since $\mathcal{A}$ is invertible (via Lemma \ref{eta_small}), that $q$ is constant. Since $q \in \mathring{W}^1_{\delta}$ we then have that $q =0$. This proves the claim. We now know that $T_{\delta}[\eta] + R$ is injective, so the Fredholm alternative guarantees that it is also surjective and hence is an isomorphism. From this we deduce that \eqref{A_stokes_beta} is uniquely solvable for any choice of data in $\mathfrak{X}_{\delta}$ and that the estimate \eqref{A_stokes_beta_solve_0} holds.
\end{proof} \subsection{The $\mathcal{A}$-Stokes problem in $\Omega$ with a boundary equations for $\xi$} We now consider another version of the $\mathcal{A}-$Stokes system in $\Omega$ with boundary conditions on $\Sigma$ involving a new unknown $\xi$: \begin{equation}\label{A_stokes_stress} \begin{cases} \diverge_{\mathcal{A}} S_\mathcal{A}(q,v) = G^1 & \text{in }\Omega \\ \diverge_{\mathcal{A}} v = G^2 & \text{in } \Omega \\ v\cdot \mathcal{N} = G^3_+ &\text{on } \Sigma \\ S_\mathcal{A}(q,v) \mathcal{N} = \left[ g\xi -\sigma \partial^\alphartial_1\left( \frac{\partial^\alphartial_1 \xi}{(1+\abs{\partial^\alphartial_1 \zeta_0}^2)^{3/2}} + G^6 \right) \right] \mathcal{N} + G^4_+\frac{\mathcal{T}}{\abs{\mathcal{T}}^2} + G^5\frac{\mathcal{N}}{\abs{\mathcal{N}}^2} &\text{on } \Sigma \\ v\cdot \nu = G^3_- &\text{on } \Sigma_s \\ (S_{\mathcal{A}}(q,v)\nu - \beta v)\cdot \tau = G^4_- &\text{on } \Sigma_s \\ \mp \sigma \frac{\partial^\alphartial_1 \xi}{(1+\abs{\partial^\alphartial_1 \zeta_0}^2)^{3/2}} (\partial^\alphartialm \ell) = G^7_\partial^\alphartialm. \end{cases} \end{equation} We now construct solutions to \eqref{A_stokes_stress}. \begin{thm}\label{A_stokes_stress_solve} Let $\delta \in (\delta_\omega,1)$ be as in Theorem \ref{stokes_om_reg}. Suppose that $\ns{\eta}_{W^{5/2}_{\delta} } < \gamma$, where $\gamma$ is as in Theorem \ref{A_stokes_om_iso}. If \begin{equation} (G^1,G^2,G^3_+,G^3_-,G^4_+,G^4_-) \in \mathfrak{X}_{\delta}, \end{equation} $G^5,\partial^\alphartial_1 G^6 \in W^{1/2}_{\delta}$, and $G^7_\partial^\alphartialm \in \mathcal{R}$, then there exists a unique triple $(v,q,\xi) \in W^2_{\delta}(\Omega) \times \mathring{W}^1_{\delta}(\Omega) \times W^{5/2}_{\delta}$ solving \eqref{A_stokes_stress}. Moreover, the solution obeys the estimate \begin{multline}\label{A_stokes_stress_0} \ns{v}_{W^2_{\delta}} + \ns{q}_{\mathring{W}^1_{\delta}} + \ns{\xi}_{W^{5/2}_{\delta}}\lesssim \ns{G^1}_{W^0_{\delta} } + \ns{G^2}_{W^1_{\delta}} + \ns{G^3_+}_{W^{3/2}_{\delta} } +\ns{G^3_-}_{W^{3/2}_{\delta} } \\ + \ns{G^4_+}_{W^{1/2}_{\delta}} + \ns{G^4_-}_{W^{1/2}_{\delta}} + \ns{G^5}_{W^{1/2}_{\delta}} + \ns{\partial^\alphartial_1 G^6}_{W^{1/2}_{\delta}} + [G^7]_\ell^2. \end{multline} \end{thm} \begin{proof} We employ Theorem \ref{A_stokes_beta_solve} to find $(v,q) \in W^2_{\delta}(\Omega) \times \mathring{W}^1_{\delta}(\Omega)$ solving \eqref{A_stokes_beta} and obeying the estimates \eqref{A_stokes_beta_solve_0}. With this $(v,q)$ in hand we then have a solution to \eqref{A_stokes_stress} as soon as we find $\xi$ solving \begin{equation}\label{A_stokes_stress_solve_1} g\xi -\sigma \partial^\alphartial_1\left( \frac{\partial^\alphartial_1 \xi}{(1+\abs{\partial^\alphartial_1 \zeta_0}^2)^{3/2}} \right) = S_\mathcal{A}(q,v) \mathcal{N}\cdot \frac{\mathcal{N}}{\abs{\mathcal{N}}^2} +\sigma \partial^\alphartial_1 G^6 - \frac{G^5}{\abs{\mathcal{N}}^2} \end{equation} on $\Sigma$ subject to the boundary conditions \begin{equation}\label{A_stokes_stress_solve_2} \mp \sigma \left(\frac{\partial^\alphartial_1 \xi}{(1+\abs{\partial^\alphartial_1 \zeta_0}^2)^{3/2}} + F^3 \right)(\partial^\alphartialm \ell) = G^7_\partial^\alphartialm. 
\end{equation} The estimate \eqref{A_stokes_beta_solve_0} guarantees that $S_\mathcal{A}(q,v) \mathcal{N}\cdot \frac{\mathcal{N}}{\abs{\mathcal{N}}^2} \in W^{1/2}_{\delta}(\Sigma)$, so the usual weighted elliptic theory implies that there exists a unique $\xi \in W^{5/2}_{\delta}(\Sigma)$ satisfying \eqref{A_stokes_stress_solve_1} and \eqref{A_stokes_stress_solve_2} and obeying the estimate
\begin{multline}\label{A_stokes_stress_solve_3}
\ns{\xi}_{W^{5/2}_{\delta}} \lesssim \ns{S_\mathcal{A}(q,v) \mathcal{N}\cdot \frac{\mathcal{N}}{\abs{\mathcal{N}}^2}}_{W^{1/2}_{\delta}} + \ns{\partial_1 G^6}_{W^{1/2}_{\delta}} + \ns{\frac{G^5}{\abs{\mathcal{N}}^2}}_{W^{1/2}_{\delta}} + [G^7]_\ell^2 \\
\lesssim \ns{v}_{W^2_{\delta}} + \ns{q}_{\mathring{W}^1_{\delta}} + \ns{\partial_1 G^6}_{W^{1/2}_{\delta}} + \ns{G^5}_{W^{1/2}_{\delta}} + [G^7]_\ell^2.
\end{multline}
Then \eqref{A_stokes_stress_0} follows by combining \eqref{A_stokes_beta_solve_0} and \eqref{A_stokes_stress_solve_3}.
\end{proof}
\section{Energy estimate terms}\label{sec_nlin_en}
We will employ the basic energy estimate of Theorem \ref{linear_energy} as the starting point for our a priori estimates. In order for this to be effective we must be able to estimate the interaction terms appearing on the right side of \eqref{linear_energy_0} when the $F^i$ terms are given as in Appendices \ref{fi_dt1} and \ref{fi_dt2}. For the sake of brevity we will only present these estimates when the $F^i$ terms are given for the twice temporally differentiated problem, i.e. when $F^i$ are given by \eqref{dt2_f1}--\eqref{dt2_f6}. The corresponding estimates for the once temporally differentiated problem follow from similar, though often simpler, arguments. When possible we will present our estimates in the most general form, as estimates for general functionals generated by the $F^i$ terms. It is only for a few essential terms that we must resort to employing the special structure of the interaction terms in order to close our estimates.

In all of the subsequent estimates we abbreviate $d = \dist(\cdot,M)$, where $M =\{(-\ell,\zeta_0(-\ell)), (\ell, \zeta_0(\ell))\}$ is the set of corner points of $\partial \Omega$. Throughout this section we will repeatedly make use of the following simple lemma, whose trivial proof we omit.
\begin{lem} Suppose that $d = \dist(\cdot, M)$. Let $0 < \delta < 1$. Then $d^{-\delta} \in L^r(\Omega)$ for $1 \le r < 2/\delta$. \end{lem}
Note also that we will assume throughout the entirety of Section \ref{sec_nlin_ell} that $\eta$ is given and satisfies
\begin{equation}\label{eta_assume}
\sup_{0 \le t \le T} \left( \mathcal{E}b(t) + \ns{\eta(t)}_{W^{5/2}_\delta(\Omega)} + \ns{\partial_t \eta(t)}_{H^{3/2}((-\ell,\ell))} \right) \le \gamma < 1,
\end{equation}
where $\gamma \in (0,1)$ is as in Lemma \ref{eta_small}. For the sake of brevity we will not explicitly state this in each result's hypotheses.
\subsection{Generic functional estimates: velocity term}
On the right side of \eqref{linear_energy_0} we find an interaction term of the form
\begin{equation}
\br{\mathcal{F},w} = \int_\Omega F^1 \cdot w J - \int_{-\ell}^\ell F^4 \cdot w - \int_{\Sigma_s} J (w \cdot \tau)F^5
\end{equation}
for $w \in H^1(\Omega)$. Our goal now is to prove estimates for this functional. We will estimate each term separately and then synthesize the estimates. We begin with an analysis of the $F^1$ term.
\begin{prop}\label{ee_f1} Let $F^1$ be given by \eqref{dt1_f1} or \eqref{dt2_f1}.
Then we have the estimate
\begin{equation}\label{ee_f1_0}
\abs{\int_\Omega J w\cdot F^1 } \lesssim \norm{w}_1 (\sqrt{\mathcal{E}} + \mathcal{E}) \sqrt{\mathcal{D}}
\end{equation}
for each $w \in H^1(\Omega)$.
\end{prop}
\begin{proof}
We will prove the result only when $F^1$ is given by \eqref{dt2_f1}, i.e.
\begin{multline}\label{ee_f1_1}
F^1 = - 2\diverge_{\partial_t \mathcal{A}} S_\mathcal{A}(\partial_t p,\partial_t u) + 2\mu \diverge_{\mathcal{A}} \mathbb{D}_{\partial_t \mathcal{A}} \partial_t u \\
- \diverge_{\partial_t^2 \mathcal{A}} S_\mathcal{A}(p,u) + 2 \mu \diverge_{\partial_t \mathcal{A}} \mathbb{D}_{\partial_t \mathcal{A}} u + \mu \diverge_{\mathcal{A}} \mathbb{D}_{\partial_t^2 \mathcal{A}} u.
\end{multline}
The result when $F^1$ is of the form \eqref{dt1_f1} follows from a similar but simpler argument. We will examine each of the terms in \eqref{ee_f1_1} separately. The estimate \eqref{ee_f1_0} follows by combining the subsequent estimates of each term.

\textbf{TERM: $- 2\diverge_{\partial_t \mathcal{A}} S_\mathcal{A}(\partial_t p,\partial_t u)$.} We begin by estimating
\begin{multline}
\abs{\int_\Omega J w \cdot (- 2\diverge_{\partial_t \mathcal{A}} S_\mathcal{A}(\partial_t p,\partial_t u)) } \lesssim \int_\Omega \abs{w} \abs{\partial_t \nabla \bar{\eta}} (\abs{\nabla \partial_t p} + \abs{\nabla^2 \partial_t u} ) \\
+ \int_\Omega \abs{w} \abs{\partial_t \nabla \bar{\eta}} \abs{\nabla^2 \bar{\eta}} (\abs{\partial_t p} + \abs{\nabla \partial_t u}) =: I + II.
\end{multline}
For $I$ we choose $q\in [1,\infty)$ and $2 < r < 2/\delta$ such that $2/q + 1/r = 1/2$ and estimate
\begin{multline}
I \le \norm{w}_{L^q} \norm{\partial_t \nabla \bar{\eta}}_{L^q} \norm{ d^{-\delta} }_{L^r} \left( \norm{ d^\delta \nabla \partial_t p }_{L^2} + \norm{ d^\delta \nabla^2 \partial_t u }_{L^2} \right) \\
\lesssim \norm{w}_{1} \norm{\partial_t \nabla \bar{\eta}}_{1} \left( \norm{\partial_t p}_{\mathring{W}^1_\delta} + \norm{\partial_t u}_{W^2_\delta} \right) \lesssim \norm{w}_{1} \norm{\partial_t \eta }_{3/2} \left( \norm{\partial_t p}_{\mathring{W}^1_\delta} + \norm{\partial_t u}_{W^2_\delta} \right) \lesssim \norm{w}_1 \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}.
\end{multline}
For $II$ we choose $m = 2/(2-s)$, $2 < r < 2/\delta$ such that $1/m + 1/r < 1$, which is possible since $\delta < 1 < s$. We then choose $q \in [1,\infty)$ such that $3/q + 1/m + 1/r =1$.
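We remark, only to emphasize that these constraints on the exponents are compatible, that one admissible choice (among many) is $r = 4/(1+\delta)$, which obeys $2 < r < 2/\delta$ precisely because $\delta < 1$; with this $r$ the condition $1/m + 1/r < 1$ reduces to $\delta < 2s-1$, which holds since $\delta < 1 < s$, and $q$ is then determined by the displayed relations.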
This allows us to estimate \begin{multline} II \le \norm{w}_{L^q} \norm{\partial^\alphartialartial_t \nabla \bar{\eta}}_{L^q} \norm{\nabla^2 \bar{\eta}}_{L^m} \norm{d^{-\delta}}_{L^r} \left(\norm{ d^\delta \partial^\alphartialartial_t p}_{L^q} + \norm{d^\delta \nabla \partial^\alphartialartial_t u}_{L^q} \right) \\ \lesssim \norm{w}_1 \norm{\partial^\alphartialartial_t \eta}_{3/2} \norm{\nabla^2 \bar{\eta}}_{s-1} \left(\norm{ \partial^\alphartialartial_t p}_{\mathcal{W}^1_\delta} + \norm{ \nabla \partial^\alphartialartial_t u}_{W^1_\delta} \right) \lesssim \norm{w}_1 \norm{\partial^\alphartialartial_t \eta}_{3/2} \norm{\eta}_{s+1/2 } \left(\norm{ \partial^\alphartialartial_t p}_{\mathring{W}^1_\delta} + \norm{\partial^\alphartialartial_t u}_{W^2_\delta} \right) \\ \lesssim \norm{w}_1 \sqrt{\mathcal{E}} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} = \norm{w}_1 \mathcal{E} \sqrt{\mathcal{D}}. \end{multline} \textbf{TERM: $2\mu \diverge_{\mathcal{A}} \mathbb{D}_{\partial^\alphartialartial_t \mathcal{A}} \partial^\alphartialartial_t u$.} We begin by estimating \begin{equation} \abs{\int_\Omega J w \cdot (2\mu \diverge_{\mathcal{A}} \mathbb{D}_{\partial^\alphartialartial_t \mathcal{A}} \partial^\alphartialartial_t u )} \lesssim \int_\Omega \abs{w} \abs{\nabla^2 \partial^\alphartialartial_t \bar{\eta}} \abs{\nabla \partial^\alphartialartial_t u} + \int_\Omega \abs{w}\abs{\partial^\alphartialartial_t \nabla \bar{\eta}} \abs{\nabla^2 \partial^\alphartialartial_t u} =: I + II. \end{equation} For $I$ we let $2 < r < 2/\delta$ and choose $q \in [1,\infty)$ such that $2/q + 1/r =1/2$. We then estimate \begin{multline} I \le \norm{w}_{L^q} \norm{\nabla^2 \partial^\alphartialartial_t \bar{\eta}}_{L^2} \norm{d^{-\delta}}_{L^r} \norm{d^\delta \nabla \partial^\alphartialartial_t u}_{L^q} \lesssim \norm{w}_{1} \norm{\partial^\alphartialartial_t \bar{\eta}}_{2} \norm{\nabla \partial^\alphartialartial_t u}_{W^1_\delta} \\ \lesssim \norm{w}_{1} \norm{\partial^\alphartialartial_t \eta}_{3/2} \norm{\partial^\alphartialartial_t u}_{W^2_\delta} \lesssim \norm{w}_1 \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}. \end{multline} For $II$ we choose the same $r,q$ as for $I$ to estimate \begin{equation} II \le \norm{w}_{L^q} \norm{\partial^\alphartialartial_t \bar{\nabla}\eta}_{L^q} \norm{d^{-\delta} }_{L^r} \norm{d^\delta \nabla^2 \partial^\alphartialartial_t u}_{L^2} \lesssim \norm{w}_{1} \norm{\partial^\alphartialartial_t \eta}_{3/2} \norm{\partial^\alphartialartial_t u}_{W^2_\delta} \lesssim \norm{w}_{1} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}. \end{equation} \textbf{TERM: $- \diverge_{\partial^\alphartialartial_t^2 \mathcal{A}} S_\mathcal{A}(p,u)$.} We start by bounding \begin{equation} \abs{\int_\Omega J w \cdot (- \diverge_{\partial^\alphartialartial_t^2 \mathcal{A}} S_\mathcal{A}(p,u)) } \lesssim \int_\Omega \abs{w} \abs{\nabla \partial^\alphartialartial_t^2 \bar{\eta}} (\abs{\nabla p} + \abs{\nabla^2 u}) + \int_\Omega \abs{w} \abs{\nabla \partial^\alphartialartial_t^2 \bar{\eta}} \abs{\nabla^2 \bar{\eta}} \abs{\nabla u} =: I + II. \end{equation} For $I$ we choose $2 < r < 2/\delta$ and $q \in [1,\infty)$ such that $2/q + 1/r =1/2$. 
We then bound
\begin{multline}
I \le \norm{w}_{L^q} \norm{\nabla \partial_t^2 \bar{\eta}}_{L^q} \norm{d^{-\delta}}_{L^r} \left( \norm{d^\delta \nabla p }_{L^2} + \norm{d^\delta \nabla^2 u}_{L^2} \right) \lesssim \norm{w}_1 \norm{\nabla \partial_t^2 \bar{\eta}}_{1} \left( \norm{p}_{\mathring{W}^1_\delta} + \norm{u}_{W^2_\delta}\right) \\
\lesssim \norm{w}_1 \norm{\partial_t^2\eta}_{3/2} \left( \norm{p}_{\mathring{W}^1_\delta} + \norm{u}_{W^2_\delta}\right) \lesssim \norm{w}_1 \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}.
\end{multline}
For $II$ we choose $m = 2/(2-s)$, $2 < r < 2/\delta$ such that $1/m + 1/r < 1$, which is possible since $\delta < 1 < s$. We then choose $q \in [1,\infty)$ such that $3/q + 1/m + 1/r =1$. Then
\begin{multline}
II \lesssim \norm{w}_{L^q} \norm{\nabla \partial_t^2 \bar{\eta}}_{L^q} \norm{\nabla^2 \bar{\eta}}_{L^m} \norm{d^{-\delta}}_{L^r} \norm{d^\delta \nabla u}_{L^q} \lesssim \norm{w}_1 \norm{\nabla \partial_t^2 \bar{\eta}}_{1} \norm{\nabla^2 \bar{\eta}}_{s-1} \norm{\nabla u}_{W^1_\delta} \\
\lesssim \norm{w}_1 \norm{\partial_t^2 \eta}_{3/2} \norm{\eta}_{s+1/2} \norm{u}_{W^2_\delta} \lesssim \norm{w}_1 \sqrt{\mathcal{D}} \sqrt{\mathcal{E}} \sqrt{\mathcal{E}} = \norm{w}_1 \sqrt{\mathcal{D}}\mathcal{E}.
\end{multline}

\textbf{TERM: $2 \mu \diverge_{\partial_t \mathcal{A}} \mathbb{D}_{\partial_t \mathcal{A}} u$.} We first bound
\begin{equation}
\abs{\int_\Omega J w \cdot (2 \mu \diverge_{\partial_t \mathcal{A}} \mathbb{D}_{\partial_t \mathcal{A}} u) } \lesssim \int_\Omega \abs{w} \abs{\nabla \partial_t \bar{\eta}}^2 \abs{\nabla^2 u} + \int_\Omega \abs{w} \abs{\nabla \partial_t \bar{\eta}} \abs{\nabla^2 \partial_t \bar{\eta}} \abs{\nabla u} =: I + II.
\end{equation}
For $I$ we let $2 < r < 2/\delta$ and choose $q \in [1,\infty)$ such that $3/q + 1/r = 1/2$. Then
\begin{multline}
I \le \norm{w}_{L^q} \ns{\nabla \partial_t \bar{\eta}}_{L^q} \norm{d^{-\delta}}_{L^r} \norm{d^\delta \nabla^2 u}_{L^2} \lesssim \norm{w}_1 \ns{\nabla \partial_t \bar{\eta}}_1 \norm{u}_{W^2_\delta } \\
\lesssim \norm{w}_1 \ns{\partial_t \eta}_{3/2} \norm{u}_{W^2_\delta} \lesssim \norm{w}_1 \mathcal{E} \sqrt{\mathcal{D}}.
\end{multline}
For $II$ we let $m = 2/(2-s)$ and choose $2 < r < 2/\delta$ such that $1/m + 1/r < 1$, which is possible since $\delta < 1 < s$. We then choose $q \in [1,\infty)$ such that $3/q + 1/m + 1/r =1$. Then
\begin{multline}
II \lesssim \norm{w}_{L^q} \norm{\nabla \partial_t \bar{\eta}}_{L^q} \norm{\nabla^2 \partial_t \bar{\eta}}_{L^m} \norm{d^{-\delta}}_{L^r} \norm{ d^\delta \nabla u}_{L^q} \lesssim \norm{w}_1 \norm{\nabla \partial_t \bar{\eta}}_1 \norm{\nabla^2 \partial_t \bar{\eta}}_{s-1} \norm{\nabla u}_{W^1_\delta} \\
\lesssim \norm{w}_1 \norm{\partial_t \eta}_{3/2} \norm{\partial_t \eta}_{s+1/2} \norm{u}_{W^2_\delta} \lesssim \norm{w}_1 \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}} = \norm{w}_1 \mathcal{E} \sqrt{\mathcal{D}}.
\end{multline} \textbf{TERM: $\mu \diverge_{\mathcal{A}} \mathbb{D}_{\partial^\alphartialartial_t^2 \mathcal{A}} u$.} We estimate \begin{equation} \abs{\int_\Omega J w \cdot (\mu \diverge_{\mathcal{A}} \mathbb{D}_{\partial^\alphartialartial_t^2 \mathcal{A}} u)} \lesssim \int_\Omega \abs{w} \abs{\nabla \partial^\alphartialartial_t^2 \bar{\eta}} \abs{\nabla^2 u} + \int_\Omega \abs{w} \abs{\nabla^2 \partial^\alphartialartial_t^2 \bar{\eta}} \abs{\nabla u} =: I + II. \end{equation} For $I$ we choose $2 < r < 2/\delta$ and $q \in [1,\infty)$ such that $2/q + 1/r =1/2$. Then \begin{multline} I \le \norm{w}_{L^q} \norm{\nabla \partial^\alphartialartial_t^2 \bar{\eta}}_{L^q} \norm{d^{-\delta}}_{L^r} \norm{d^\delta \nabla^2 u}_{L^2} \lesssim \norm{w}_1 \norm{\nabla \partial^\alphartialartial_t^2 \bar{\eta}}_1 \norm{u }_{W^2_\delta} \\ \lesssim \norm{w}_1 \norm{\partial^\alphartialartial_t^2 \eta}_{3/2} \norm{u}_{W^2_\delta} \lesssim \norm{w}_1 \sqrt{\mathcal{D}} \sqrt{\mathcal{E}} . \end{multline} For $II$ we choose $2 < r < 2/\delta$ and $q \in [1,\infty)$ such that $2/q + 1/r = 1/2$. Then \begin{multline} II \le \norm{w}_{L^q} \norm{\nabla^2 \partial^\alphartialartial_t^2 \bar{\eta}}_{L^2} \norm{d^{-\delta}}_{L^r} \norm{d^\delta \nabla u}_{L^q} \lesssim \norm{w}_1 \norm{ \partial^\alphartialartial_t^2 \bar{\eta}}_2 \norm{\nabla u}_{W^1_\delta} \\ \lesssim \norm{w}_1 \norm{\partial^\alphartialartial_t^2 \eta}_{3/2} \norm{u}_{W^2_\delta} \lesssim \norm{w}_1 \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}. \end{multline} \end{proof} Next we handle the $F^4$ term. \begin{prop}\label{ee_f4} Let $F^4$ be given by \eqref{dt1_f4} or \eqref{dt2_f4}. Then we have the estimate \begin{equation}\label{ee_f4_0} \abs{ \int_{-\ell}^\ell w \cdot F^4 } \lesssim \norm{w}_1 (\sqrt{\mathcal{E}} + \mathcal{E}) \sqrt{\mathcal{D}} \end{equation} for all $w \in H^1(\Omega)$. \end{prop} \begin{proof} Again we will only prove the result in the more complicated case when $F^4$ is given by \eqref{dt2_f4}, i.e. \begin{multline}\label{ee_f4_1} F^4 = 2\mu \mathbb{D}_{\partial^\alphartialartial_t \mathcal{A}} \partial^\alphartialartial_t u \mathcal{N} + \mu \mathbb{D}_{\partial^\alphartialartial_t^2 \mathcal{A}} u \mathcal{N} + \mu \mathbb{D}_{\partial^\alphartialartial_t \mathcal{A}} u \partial^\alphartialartial_t \mathcal{N}\\ +\left[ 2g \partial^\alphartialartial_t \eta - 2\sigma \partial^\alphartial_1 \left(\frac{\partial^\alphartial_1 \partial^\alphartialartial_t \eta}{(1+\abs{\partial^\alphartial_1 \zeta_0}^2)^{3/2}} + \partial^\alphartialartial_t[ \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta)] \right) -2 S_{\mathcal{A}}(\partial^\alphartialartial_t p,\partial^\alphartialartial_t u) \right] \partial^\alphartialartial_t \mathcal{N} \\ + \left[ g\eta - \sigma \partial^\alphartial_1 \left(\frac{\partial^\alphartial_1 \eta}{(1+\abs{\partial^\alphartial_1 \zeta_0}^2)^{3/2}} + \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta) \right) - S_{\mathcal{A}}(p,u) \right] \partial^\alphartialartial_t^2 \mathcal{N}. \end{multline} The case when $F^4$ is given by \eqref{dt1_f4} is handled by a similar and simpler argument. We will examine each of the terms in \eqref{ee_f4_1} separately. The estimate \eqref{ee_f4_0} follow by combining the subsequent estimates of each term. 
In what follows we always let $p$, $q$, and $r$ be given by \begin{equation} p = \frac{3+\delta}{2+2\delta}, q = \frac{6+2\delta}{1-\delta}, \text{ and } r = \frac{9+3\delta}{1-\delta} \end{equation} which implies that \begin{equation} \frac{1}{p} + \frac{3}{r} = 1 \text{ and } \frac{1}{p} + \frac{2}{q} = 1. \end{equation} \textbf{TERM: $ 2\mu \mathbb{D}_{\partial^\alphartialartial_t \mathcal{A}} \partial^\alphartialartial_t u \mathcal{N}$.} We estimate \begin{multline} \abs{\int_{-\ell}^\ell 2\mu w \cdot (\mathbb{D}_{\partial^\alphartialartial_t \mathcal{A}} \partial^\alphartialartial_t u)(\mathcal{N}) } \lesssim \norm{w}_{L^q(\Sigma)} \norm{\partial^\alphartialartial_t \mathcal{A}}_{L^q(\Sigma)} \norm{\nabla \partial^\alphartialartial_t u}_{L^p(\Sigma)} \norm{\mathcal{N}}_{L^\infty} \\ \lesssim \norm{w}_{H^{1/2}(\Sigma)} \norm{\partial^\alphartialartial_t \bar{\eta}}_{H^{3/2}(\Sigma)} \norm{\partial^\alphartialartial_t u}_{W^2_\delta} \lesssim \norm{w}_{1} \norm{\partial^\alphartialartial_t \eta}_{3/2} \norm{\partial^\alphartialartial_t u}_{W^2_\delta} \lesssim \norm{w}_{1} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} . \end{multline} \textbf{TERM: $ \mu \mathbb{D}_{\partial^\alphartialartial_t^2 \mathcal{A}} u \mathcal{N}$.} We estimate \begin{multline} \abs{ \int_{-\ell}^\ell w \cdot \mu \mathbb{D}_{\partial^\alphartialartial_t^2 \mathcal{A}} u \mathcal{N} } \lesssim \norm{w}_{L^{q}(\Sigma)} \norm{\mathcal{N}}_{L^\infty(\Sigma)} \norm{\partial^\alphartialartial_t^2 \mathcal{A}}_{L^q(\Sigma)} \norm{\nabla u}_{L^{p}(\Sigma)} \\ \lesssim\norm{w}_{H^{1/2}(\Sigma)} \norm{\partial^\alphartialartial_t^2 \eta}_{3/2} \norm{u}_{W^{2}_\delta} \lesssim \norm{w}_{1} \norm{\partial^\alphartialartial_t^2 \eta}_{3/2} \norm{u}_{W^2_\delta} \lesssim \norm{w}_{1} \sqrt{\mathcal{D}}\sqrt{\mathcal{E}} . \end{multline} \textbf{TERM: $\mu \mathbb{D}_{\partial^\alphartialartial_t \mathcal{A}} u \partial^\alphartialartial_t \mathcal{N}$.} We estimate \begin{multline} \abs{ \int_{-\ell}^\ell w \cdot \mu \mathbb{D}_{\partial^\alphartialartial_t \mathcal{A}} u \partial^\alphartialartial_t \mathcal{N} } \lesssim \norm{w}_{L^r(\Sigma)} \norm{\partial^\alphartialartial_t \mathcal{A}}_{L^r(\Sigma)} \norm{\nabla u}_{L^p(\Sigma)} \norm{\partial^\alphartialartial_t \mathcal{N}}_{L^r} \\ \lesssim \norm{w}_{H^{1/2}(\Sigma)} \ns{\partial^\alphartialartial_t \eta}_{3/2} \norm{u}_{W^2_\delta} \lesssim \norm{w}_{1} \ns{\partial^\alphartialartial_t \eta}_{3/2} \norm{u}_{W^2_\delta} \lesssim \norm{w}_{1} \mathcal{E} \sqrt{\mathcal{D}}. 
\end{multline} \textbf{TERM: $\left[ 2g \partial^\alphartialartial_t \eta - 2\sigma \partial^\alphartial_1 \left(\frac{\partial^\alphartial_1 \partial^\alphartialartial_t \eta}{(1+\abs{\partial^\alphartial_1 \zeta_0}^2)^{3/2}} \right) \right] \partial^\alphartialartial_t \mathcal{N} $.} We estimate \begin{multline} \abs{ \int_{-\ell}^\ell w \cdot \left[ 2g \partial^\alphartialartial_t \eta - 2\sigma \partial^\alphartial_1 \left(\frac{\partial^\alphartial_1 \partial^\alphartialartial_t \eta}{(1+\abs{\partial^\alphartial_1 \zeta_0}^2)^{3/2}} \right) \right] \partial^\alphartialartial_t \mathcal{N} } \lesssim \norm{w}_{L^q(\Sigma)}\left( \norm{\partial^\alphartialartial_t \eta}_{L^p} +\norm{\partial^\alphartial_1^2 \partial^\alphartialartial_t \eta}_{L^p} \right) \norm{\partial^\alphartial_1 \partial^\alphartialartial_t \eta}_{L^q} \\ \lesssim \norm{w}_{H^{1/2}(\Sigma)} \norm{\partial^\alphartialartial_t \eta}_{W^{5/2}_\delta} \norm{\partial^\alphartialartial_t \eta}_{3/2} \lesssim \norm{w}_{1} \norm{\partial^\alphartialartial_t \eta}_{W^{5/2}_\delta} \norm{\partial^\alphartialartial_t \eta}_{3/2} \lesssim \norm{w}_1 \sqrt{\mathcal{D}} \sqrt{\mathcal{E}} . \end{multline} \textbf{TERM: $\partial^\alphartial_1 \partial^\alphartialartial_t[ \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta)] \partial^\alphartialartial_t \mathcal{N}$.} To begin we expand the term via \begin{multline} \partial^\alphartial_1 \partial^\alphartialartial_t[ \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta)] = \partial^\alphartial_1 [\partial^\alphartial_z \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta) \partial^\alphartial_1 \partial^\alphartialartial_t \eta] = \frac{\partial^\alphartial_z \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta)}{\partial^\alphartial_1 \eta} \partial^\alphartial_1 \eta \partial^\alphartial_1^2 \partial^\alphartialartial_t \eta + \partial^\alphartial_z^2 \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta) \partial^\alphartial_1^2 \eta \partial^\alphartial_1 \partial^\alphartialartial_t \eta \\ + \frac{ \partial^\alphartial_z \partial^\alphartial_y \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta)}{\partial^\alphartial_1 \eta} \partial^\alphartial_1 \eta \partial^\alphartial_1^2 \zeta_0 \partial^\alphartial_1 \partial^\alphartialartial_t \eta. 
\end{multline} This allows us to estimate \begin{multline} \abs{ \int_{-\ell}^\ell w \cdot \partial^\alphartial_1 [\partial^\alphartialartial_t[ \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta)] ] \partial^\alphartialartial_t \mathcal{N} } \\ \lesssim \norm{w}_{L^{r}(\Sigma)} \norm{\partial^\alphartial_1 \partial^\alphartialartial_t \eta}_{L^{r}} \left( \norm{\partial^\alphartial_1 \eta}_{L^{r}} \norm{\partial^\alphartial_1^2 \partial^\alphartialartial_t \eta}_{L^{p}} + \norm{\partial^\alphartial_1 \partial^\alphartialartial_t \eta}_{L^{r}}\norm{\partial^\alphartial_1^2 \eta}_{L^{p}} + \norm{\partial^\alphartial_1 \eta}_{L^{r}}\norm{\partial^\alphartial_1 \partial^\alphartialartial_t \eta}_{L^{p}} \right) \\ \lesssim \norm{w}_{H^{1/2}(\Sigma)} \norm{\partial^\alphartial_1 \partial^\alphartialartial_t \eta}_{1/2} \left( \norm{\partial^\alphartial_1 \eta}_{1/2} \norm{ \partial^\alphartialartial_t \eta}_{W^{5/2}_\delta} + \norm{\partial^\alphartial_1 \partial^\alphartialartial_t \eta}_{1/2}\norm{\eta}_{W^{5/2}_\delta} + \norm{\partial^\alphartial_1 \eta}_{1/2}\norm{\partial^\alphartial_1 \partial^\alphartialartial_t \eta}_{1/2} \right)\\ \lesssim \norm{w}_{1} \norm{\partial^\alphartialartial_t \eta}_{3/2} \left( \norm{\eta}_{3/2} \norm{\partial^\alphartialartial_t \eta}_{W^{5/2}_\delta} + \norm{\partial^\alphartialartial_t \eta}_{3/2}\norm{\eta}_{W^{5/2}_\delta} + \norm{\eta}_{3/2}\norm{\partial^\alphartialartial_t \eta}_{3/2} \right) \\ \lesssim \norm{w}_1 \sqrt{\mathcal{E}} \left(\sqrt{\mathcal{E}} \sqrt{\mathcal{D}} + \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} + \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} \right) \lesssim \norm{w}_1 \mathcal{E} \sqrt{\mathcal{D}}. \end{multline} \textbf{TERM: $ -2 S_{\mathcal{A}}(\partial^\alphartialartial_t p,\partial^\alphartialartial_t u) \partial^\alphartialartial_t \mathcal{N}$.} We estimate \begin{multline} \abs{ \int_{-\ell}^\ell -2 w \cdot S_{\mathcal{A}}(\partial^\alphartialartial_t p,\partial^\alphartialartial_t u) \partial^\alphartialartial_t \mathcal{N} } \lesssim \norm{w}_{L^q(\Sigma)} \norm{\mathcal{A}}_{L^\infty} \left(\norm{\partial^\alphartialartial_t p}_{L^p(\Sigma)} + \norm{\nabla \partial^\alphartialartial_t u}_{L^p(\Sigma)} \right) \norm{\partial^\alphartialartial_t \partial^\alphartial_1 \eta}_{L^q} \\ \lesssim \norm{w}_{H^{1/2}(\Sigma)} \left( \norm{\partial^\alphartialartial_t p}_{\mathring{W}^1_\delta} + \norm{\partial^\alphartialartial_t u}_{W^2_\delta} \right) \norm{\partial^\alphartialartial_t \partial^\alphartial_1 \eta}_{1/2} \lesssim \norm{w}_1 \left( \norm{\partial^\alphartialartial_t p}_{\mathring{W}^1_\delta} + \norm{\partial^\alphartialartial_t u}_{W^2_\delta} \right) \norm{\partial^\alphartialartial_t \eta}_{3/2} \\ \lesssim \norm{w}_1 \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}. 
\end{multline} \textbf{TERM: $\left[ g\eta - \sigma \partial^\alphartial_1 \left(\frac{\partial^\alphartial_1 \eta}{(1+\abs{\partial^\alphartial_1 \zeta_0}^2)^{3/2}} \right) \right] \partial^\alphartialartial_t^2 \mathcal{N}$.} We estimate \begin{multline} \abs{ \int_{-\ell}^\ell w \cdot \left[ g\eta - \sigma \partial^\alphartial_1 \left(\frac{\partial^\alphartial_1 \eta}{(1+\abs{\partial^\alphartial_1 \zeta_0}^2)^{3/2}} \right) \right] \partial^\alphartialartial_t^2 \mathcal{N} } \lesssim \norm{w}_{L^{q}(\Sigma)} \norm{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta}_{L^{q}} \left(\norm{\eta}_{L^p} + \norm{\partial^\alphartial_1^2 \eta}_{L^p} \right) \\ \lesssim \norm{w}_{H^{1/2}(\Sigma)} \norm{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta}_{1/2} \norm{\eta}_{W^{5/2}_\delta} \lesssim \norm{w}_{1} \norm{\partial^\alphartialartial_t^2 \eta}_{3/2} \norm{\eta}_{W^{5/2}_\delta} \lesssim \norm{w}_1 \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}. \end{multline} \textbf{TERM: $ \partial^\alphartial_1[ \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta) ] \partial^\alphartialartial_t^2 \mathcal{N}$.} We expand \begin{equation} \partial^\alphartial_1 [ \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta)] = \frac{\partial^\alphartial_z \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta)}{\partial^\alphartial_1 \eta} \partial^\alphartial_1 \eta \partial^\alphartial_1^2 \eta + \frac{ \partial^\alphartial_y \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta)}{(\partial^\alphartial_1 \eta)^2}\partial^\alphartial_1^2 \zeta_0 (\partial^\alphartial_1 \eta)^2 . \end{equation} This allows us to estimate \begin{multline} \abs{ \int_{-\ell}^\ell w \cdot \partial^\alphartial_1[ \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta) ] \partial^\alphartialartial_t^2 \mathcal{N} } \lesssim \norm{w}_{L^{r}(\Sigma)} \norm{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta}_{L^{r}} \left(\norm{\partial^\alphartial_1 \eta}_{L^r} \norm{\partial^\alphartial_1^2 \eta }_{L^p} + \norm{\partial^\alphartial_1 \eta}_{L^r} \norm{\partial^\alphartial_1 \eta }_{L^p} \right) \\ \lesssim \norm{w}_{1} \norm{\partial^\alphartialartial_t^2 \eta}_{3/2} \left(\norm{\eta}_{3/2} \norm{\eta }_{W^{5/2}_\delta} + \ns{\eta}_{3/2} \right) \lesssim \norm{w}_1 \sqrt{\mathcal{D}} (\sqrt{\mathcal{E}} \sqrt{\mathcal{E}} + \sqrt{\mathcal{E}} \sqrt{\mathcal{E}}) \lesssim \norm{w}_1 \mathcal{E} \sqrt{\mathcal{D}}. \end{multline} \textbf{TERM: $- S_{\mathcal{A}}(p,u) \partial^\alphartialartial_t^2 \mathcal{N}$.} We estimate \begin{multline} \abs{ - \int_{-\ell}^\ell w \cdot S_{\mathcal{A}}(p,u) \partial^\alphartialartial_t^2 \mathcal{N} } \lesssim \norm{w}_{L^{q}(\Sigma)} \norm{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta}_{L^{q}} \left( \norm{p}_{L^p(\Sigma)} + \norm{\mathcal{A}}_{L^\infty} \norm{\nabla u}_{L^p(\Sigma)} \right) \\ \lesssim \norm{w}_{1} \norm{\partial^\alphartialartial_t^2 \eta}_{3/2} \left( \norm{p}_{W^1_\delta} + \norm{u}_{W^2_\delta} \right) \lesssim \norm{w}_1 \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}. \end{multline} \end{proof} Now we handle the $F^5$ term. \begin{prop}\label{ee_f5} Let $F^5$ be given by \eqref{dt1_f5} or \eqref{dt2_f5}. Then we have the estimate \begin{equation}\label{ee_f5_0} \abs{ \int_{\Sigma_s} J(w \cdot \tau)F^5 } \lesssim \norm{w}_1 \sqrt{\mathcal{E}}\sqrt{\mathcal{D}} \end{equation} for every $w \in H^1(\Omega)$. 
\end{prop}
\begin{proof}
Once more we will only prove the result in the harder case, when
\begin{equation}
F^5 = 2 \mu \mathbb{D}_{\partial_t \mathcal{A}} \partial_t u \nu \cdot \tau + \mu \mathbb{D}_{\partial_t^2 \mathcal{A}} u \nu \cdot \tau.
\end{equation}
The simpler case of \eqref{dt1_f5} follows from a similar argument. Let $p$ and $q$ be given by
\begin{equation}
p = \frac{3+\delta}{2+2\delta} \text{ and } q = \frac{6+2\delta}{1-\delta},
\end{equation}
which implies that
\begin{equation}
\frac{1}{p} + \frac{2}{q} = 1.
\end{equation}
To handle the first term in $F^5$ we bound
\begin{multline}
\abs{ - \int_{\Sigma_s} J(w \cdot \tau)(2 \mu \mathbb{D}_{\partial_t \mathcal{A}} \partial_t u \nu \cdot \tau) } \lesssim \norm{w}_{L^q(\Sigma_s)} \norm{\partial_t \mathcal{A}}_{L^p(\Sigma_s)} \norm{\nabla \partial_t u}_{L^p(\Sigma_s)} \norm{J}_{L^\infty} \\
\lesssim \norm{w}_{H^{1/2}(\Sigma_s)} \norm{\partial_t \bar{\eta}}_{H^{3/2}(\Sigma_s)} \norm{\partial_t u}_{W^2_\delta} \lesssim \norm{w}_{1} \norm{\partial_t \eta}_{3/2} \norm{\partial_t u}_{W^2_\delta} \lesssim \norm{w}_{1} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}.
\end{multline}
For the second we estimate
\begin{multline}
\abs{ - \int_{\Sigma_s} J(w \cdot \tau)( \mu \mathbb{D}_{\partial_t^2 \mathcal{A}} u \nu \cdot \tau) } \lesssim \norm{J}_{L^\infty(\Sigma_s)} \norm{w}_{L^q(\Sigma_s)} \norm{\partial_t^2 \mathcal{A}}_{L^q(\Sigma_s)} \norm{\nabla u}_{L^p(\Sigma_s)} \\
\lesssim \norm{w}_{H^{1/2}(\Sigma_s)} \norm{\partial_t^2 \eta}_{3/2} \norm{u}_{W^2_\delta} \\
\lesssim \norm{w}_{1} \norm{\partial_t^2 \eta}_{3/2} \norm{u}_{W^2_\delta} \lesssim \norm{w}_1 \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}.
\end{multline}
The estimate \eqref{ee_f5_0} follows immediately from these two bounds.
\end{proof}
We now combine the previous analysis into a single result.
\begin{thm}\label{ee_v_est}
Define the functional $H^1(\Omega) \ni w\mapsto \br{\mathcal{F},w} \in \mathbb{R}$ via
\begin{equation}
\br{\mathcal{F},w} = \int_\Omega F^1 \cdot w J - \int_{-\ell}^\ell F^4 \cdot w - \int_{\Sigma_s} J (w \cdot \tau)F^5,
\end{equation}
where $F^1,F^4,F^5$ are defined either by \eqref{dt1_f1}, \eqref{dt1_f4}, and \eqref{dt1_f5} or else by \eqref{dt2_f1}, \eqref{dt2_f4}, and \eqref{dt2_f5}. Then
\begin{equation}
\abs{\br{\mathcal{F},w}} \lesssim \norm{w}_1 (\mathcal{E}+ \sqrt{\mathcal{E}})\sqrt{\mathcal{D}}
\end{equation}
for all $w \in H^1(\Omega)$.
\end{thm}
\begin{proof}
We simply combine Propositions \ref{ee_f1}, \ref{ee_f4}, and \ref{ee_f5}.
\end{proof}
We will need the following variant to use in conjunction with Theorem \ref{pressure_est}.
\begin{thm}\label{ee_v_est_pressure}
Define the functional $H^1(\Omega) \ni w\mapsto \br{\mathcal{F},w} \in \mathbb{R}$ via
\begin{equation}
\br{\mathcal{F},w} = \int_\Omega F^1 \cdot w J - \int_{\Sigma_s} J (w \cdot \tau)F^5,
\end{equation}
where $F^1,F^4,F^5$ are defined either by \eqref{dt1_f1}, \eqref{dt1_f4}, and \eqref{dt1_f5} or else by \eqref{dt2_f1}, \eqref{dt2_f4}, and \eqref{dt2_f5}. Then
\begin{equation}
\abs{\br{\mathcal{F},w}} \lesssim \norm{w}_1 (\mathcal{E}+ \sqrt{\mathcal{E}})\sqrt{\mathcal{D}}
\end{equation}
for all $w \in H^1(\Omega)$.
\end{thm}
\begin{proof}
We simply combine Propositions \ref{ee_f1} and \ref{ee_f5}.
\end{proof}
\subsection{Generic nonlinear estimates: $F^3$ term}
Theorem \ref{ee_v_est} will be useful for applying Theorem \ref{linear_energy}, but it will also play a role in the estimates of Theorem \ref{xi_est}. We now record a quick estimate involving $F^3$ that will also be useful there.
\begin{thm}\label{ee_f3_half}
Let
\begin{equation}
F^3 = \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1 \partial_t^2 \eta + \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2.
\end{equation}
Then
\begin{equation}
\norm{F^3}_{1/2} \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}.
\end{equation}
\end{thm}
\begin{proof}
Since $s-1/2 >1/2$ we may estimate
\begin{equation}
\norm{\partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1 \partial_t^2 \eta }_{1/2} \lesssim \norm{\partial_1 \eta}_{s-1/2} \norm{\partial_1 \partial_t^2 \eta}_{1/2} \lesssim \norm{\eta}_{s+1/2} \norm{\partial_t^2 \eta}_{3/2} \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}.
\end{equation}
Similarly,
\begin{equation}
\norm{\partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2}_{1/2} \lesssim \norm{\partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2}_{s-1/2} \lesssim \ns{\partial_t \eta}_{3/2} \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}.
\end{equation}
\end{proof}
\subsection{Generic functional estimates: pressure term}
On the right side of \eqref{linear_energy_0} we find an interaction term of the form
\begin{equation}
\int_\Omega J \psi F^2
\end{equation}
for $\psi \in L^2(\Omega)$. We now provide an estimate for this functional.
\begin{thm}\label{ee_p_est}
Let $F^2$ be given by either \eqref{dt1_f2} or \eqref{dt2_f2}. Then
\begin{equation}\label{ee_p_est_0}
\abs{ \int_\Omega J \psi F^2} \lesssim \norm{\psi}_{L^2} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}
\end{equation}
for every $\psi \in L^2(\Omega)$.
\end{thm}
\begin{proof}
We will only prove the result in the harder case \eqref{dt2_f2}, i.e. when
\begin{equation}\label{ee_p_est_1}
F^2 = -\diverge_{\partial_t^2 \mathcal{A}} u - 2\diverge_{\partial_t \mathcal{A}}\partial_t u.
\end{equation}
The easier case \eqref{dt1_f2} follows from a simpler argument. To handle the first term in \eqref{ee_p_est_1} we choose $2 < r < 2/\delta$ and $q\in[1,\infty)$ such that $2/q + 1/r =1/2$.
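Such a pair of exponents is always available because $\delta \in (0,1)$; for instance (this particular choice is made here only for illustration), one may take
\begin{equation}
r = \frac{4}{1+\delta} \in \left(2, \frac{2}{\delta}\right) \quad \text{and} \quad q = \frac{8}{1-\delta} \in [1,\infty), \quad \text{so that} \quad \frac{2}{q} + \frac{1}{r} = \frac{1-\delta}{4} + \frac{1+\delta}{4} = \frac{1}{2}.
\end{equation}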
This allows us to estimate
\begin{multline}
\abs{\int_\Omega J \psi (-\diverge_{\partial_t^2 \mathcal{A}} u )}\lesssim \int_\Omega \abs{\psi} \abs{\nabla \partial_t^2 \bar{\eta}} \abs{\nabla u} \lesssim \norm{\psi}_{L^2} \norm{\nabla \partial_t^2 \bar{\eta}}_{L^q} \norm{d^{-\delta}}_{L^r} \norm{d^\delta \nabla u}_{L^q} \\
\lesssim \norm{\psi}_{L^2} \norm{\nabla \partial_t^2 \bar{\eta}}_{1} \norm{\nabla u}_{W^1_\delta} \lesssim \norm{\psi}_{L^2} \norm{\partial_t^2 \eta}_{3/2} \norm{u}_{W^2_\delta} \lesssim \norm{\psi}_{L^2} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}.
\end{multline}
To handle the second term in \eqref{ee_p_est_1} we choose $2 < r < 2/\delta$ and $q\in[1,\infty)$ such that $2/q + 1/r =1/2$. Then
\begin{multline}
\abs{\int_\Omega J \psi(- 2\diverge_{\partial_t \mathcal{A}}\partial_t u)} \lesssim \int_\Omega \abs{\psi} \abs{\nabla \partial_t \bar{\eta}} \abs{\nabla \partial_t u} \lesssim \norm{\psi}_{L^2} \norm{\nabla \partial_t \bar{\eta}}_{L^q} \norm{d^{-\delta}}_{L^r} \norm{d^\delta \nabla \partial_t u}_{L^q} \\
\lesssim \norm{\psi}_{L^2} \norm{\nabla \partial_t \bar{\eta}}_1 \norm{\nabla \partial_t u}_{W^1_\delta} \lesssim \norm{\psi}_{L^2} \norm{\partial_t \eta}_{3/2} \norm{\partial_t u}_{W^2_\delta} \lesssim \norm{\psi}_{L^2} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}.
\end{multline}
The estimate \eqref{ee_p_est_0} then follows by combining these.
\end{proof}
\subsection{Special functional estimates: velocity term}
On the right side of \eqref{linear_energy_0} we encounter the terms
\begin{equation}
- \int_{-\ell}^\ell \sigma F^3 \partial_1 (\partial_t^2 u \cdot \mathcal{N}) \text{ and } - \int_{-\ell}^\ell \sigma F^3 \partial_1 (\partial_t u \cdot \mathcal{N})
\end{equation}
with $F^3$ defined by \eqref{dt2_f3} in the first case and \eqref{dt1_f3} in the second case. We do not have the luxury of estimating these as generic functionals and instead must exploit the special structure shared between $F^3$ and $\partial_t^j u \cdot \mathcal{N}$. We will only present the analysis for the first term, which is harder to control due to the second-order temporal derivative. The analysis for the second term follows from similar, easier estimates. In the second-order case we have that
\begin{equation}
F^3 = \partial_t^2 [ \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)] = \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1 \partial_t^2 \eta + \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2.
\end{equation}
For the purposes of these estimates we will write
\begin{equation}
\partial_t^2 u \cdot \mathcal{N} = \partial_t^3 \eta - F^6.
\end{equation}
We may then decompose
\begin{multline}\label{ee_f3_decomp}
- \int_{-\ell}^\ell \sigma F^3 \partial_1 (\partial_t^2 u \cdot \mathcal{N}) = - \int_{-\ell}^\ell \sigma \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1 \partial_t^2 \eta \partial_1 \partial_t^3 \eta - \int_{-\ell}^\ell \sigma \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1 \partial_t^2 \eta \partial_1 F^6 \\
- \int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 \partial_1 \partial_t^3 \eta - \int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 \partial_1 F^6 \\
= I + II + III + IV.
\end{multline}
We will estimate each of these terms separately. We begin with $I$.
\begin{prop}\label{ee_f3_I}
Let $I$ be as given in \eqref{ee_f3_decomp}. Then
\begin{equation}
\abs{I + \frac{d}{dt} \int_{-\ell}^\ell \sigma \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \frac{\abs{\partial_1 \partial_t^2 \eta}^2}{2} } \lesssim \sqrt{\mathcal{E}} \mathcal{D}.
\end{equation}
Moreover,
\begin{equation}
\abs{ \int_{-\ell}^\ell \sigma \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \frac{\abs{\partial_1 \partial_t^2 \eta}^2}{2} } \lesssim \sqrt{\mathcal{E}} \mathcal{E}b.
\end{equation}
\end{prop}
\begin{proof}
We compute
\begin{multline}
I = - \int_{-\ell}^\ell \sigma \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_t \frac{\abs{\partial_1 \partial_t^2 \eta}^2}{2} = - \frac{d}{dt} \int_{-\ell}^\ell \sigma \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \frac{\abs{\partial_1 \partial_t^2 \eta}^2}{2} \\
+ \int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1 \partial_t \eta \frac{\abs{\partial_1 \partial_t^2 \eta}^2}{2}.
\end{multline}
We then estimate
\begin{multline}
\abs{\int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1 \partial_t \eta \frac{\abs{\partial_1 \partial_t^2 \eta}^2}{2}} \lesssim \norm{\partial_1 \partial_t\eta}_{L^3} \ns{\partial_1 \partial_t^2 \eta}_{L^3} \lesssim \norm{\partial_1 \partial_t\eta}_{1/2} \ns{\partial_1 \partial_t^2 \eta}_{1/2} \\
\lesssim \norm{ \partial_t\eta}_{3/2} \ns{ \partial_t^2 \eta}_{3/2} \lesssim \sqrt{\mathcal{E}} \mathcal{D}.
\end{multline}
We can also estimate the term in the time derivative:
\begin{multline}
\abs{ \int_{-\ell}^\ell \sigma \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \frac{\abs{\partial_1 \partial_t^2 \eta}^2}{2} } \lesssim \int_{-\ell}^\ell \abs{\partial_1 \eta} \frac{\abs{\partial_1 \partial_t^2 \eta}^2}{2} \\
\lesssim \norm{\partial_1 \eta}_{L^\infty} \ns{\partial_1 \partial_t^2 \eta}_{0} \lesssim \norm{\eta}_{s+1/2} \ns{ \partial_t^2 \eta}_{1} \lesssim \sqrt{\mathcal{E}}\mathcal{E}b.
\end{multline}
\end{proof}
Next we handle $II$.
\begin{prop}\label{ee_f3_II}
Let $II$ be as given in \eqref{ee_f3_decomp}. Then
\begin{equation}\label{ee_f3_II_0}
\abs{II} \lesssim \mathcal{E} \mathcal{D}.
\end{equation}
\end{prop}
\begin{proof}
We may write $F^6$ as
\begin{equation}
F^6 = -2 \partial_t u_1 \partial_1 \partial_t \eta - u_1 \partial_1 \partial_t^2 \eta.
\end{equation}
Using this, we may rewrite
\begin{multline}
II = \int_{-\ell}^\ell \sigma \frac{\partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}{\partial_1 \eta} \partial_1 \eta \partial_1 \partial_t^2 \eta 2 \partial_1 \partial_t u_1 \partial_1 \partial_t \eta + \int_{-\ell}^\ell \sigma \frac{\partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}{\partial_1 \eta} \partial_1 \eta \partial_1 \partial_t^2 \eta 2 \partial_t u_1 \partial_1^2 \partial_t \eta \\
+ \int_{-\ell}^\ell \sigma \frac{\partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}{\partial_1 \eta} \partial_1 \eta \partial_1 \partial_t^2 \eta \partial_1 u_1 \partial_1 \partial_t^2 \eta + \int_{-\ell}^\ell \sigma \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1 \partial_t^2 \eta u_1 \partial_1^2 \partial_t^2 \eta \\
=: II_1 + II_2 + II_3 + II_4.
\end{multline}
Notice that Proposition \ref{weighted_embed} and the usual trace theory in $W^{1,p}(\Omega)$ allow us to estimate
\begin{equation}
\norm{\nabla \partial_t u}_{L^p(\partial \Omega)} \lesssim \norm{\nabla \partial_t u}_{W^{1,p}(\Omega)} \lesssim \norm{\nabla \partial_t u}_{W^1_\delta} \lesssim \norm{\partial_t u}_{W^2_\delta} \lesssim \sqrt{\mathcal{D}}
\end{equation}
for any $1 \le p < 2/(1+\delta)$.
In particular, we may choose $p = (3+\delta)/(2+2\delta) \in [1,2/(1+\delta))$ and $q = (6+2\delta)/(1-\delta)$, which satisfy $1/p + 2/q =1$, in order to estimate
\begin{multline}\label{ee_f3_II_1}
\abs{II_1} \lesssim \norm{\partial_1 \eta}_{L^\infty} \norm{\partial_1 \partial_t^2 \eta }_{L^{q}} \norm{\partial_1 \partial_t u}_{L^{p}(\Sigma)} \norm{\partial_1 \partial_t \eta}_{L^{q}} \lesssim \norm{\partial_1 \eta}_{s-1/2} \norm{\partial_1 \partial_t^2 \eta }_{1/2} \norm{\partial_t u}_{W^2_\delta} \norm{\partial_1 \partial_t \eta}_{1/2} \\
\lesssim \norm{\eta}_{s+1/2} \norm{\partial_t^2 \eta }_{3/2} \norm{\partial_t u}_{W^2_\delta} \norm{\partial_t \eta}_{3/2} \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}} \lesssim \mathcal{E} \mathcal{D}.
\end{multline}
For $II_2$ we use Proposition \ref{weighted_trace} to bound
\begin{equation}
\norm{\partial_1^2 \partial_t \eta}_{L^p} \lesssim \norm{\partial_1^2 \partial_t \eta}_{W^{1/2}_\delta} \lesssim \norm{\partial_t \eta}_{W^{5/2}_\delta} \lesssim \sqrt{\mathcal{D}}
\end{equation}
for any $1 \le p < 2/(1+\delta)$. Choosing the same $p$ and $q$ as for $II_1$ above, we then estimate
\begin{multline}\label{ee_f3_II_2}
\abs{II_2} \lesssim \norm{\partial_1 \eta}_{L^\infty} \norm{\partial_1 \partial_t^2 \eta }_{L^{q}} \norm{\partial_t u_1}_{L^q(\Sigma)} \norm{\partial_1^2 \partial_t \eta}_{L^p} \lesssim \norm{\partial_1 \eta}_{s-1/2} \norm{\partial_1 \partial_t^2 \eta }_{1/2} \norm{\partial_t u}_{H^{1/2}(\Sigma)} \norm{ \partial_t \eta}_{W^{5/2}_\delta} \\
\lesssim \norm{\eta}_{s+1/2} \norm{\partial_t^2 \eta }_{3/2} \norm{\partial_t u}_1 \norm{ \partial_t \eta}_{W^{5/2}_\delta} \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} = \mathcal{E} \mathcal{D}.
\end{multline}
Again using Proposition \ref{weighted_embed} and trace theory we may bound
\begin{equation}
\norm{\nabla u}_{L^p(\partial \Omega)} \lesssim \norm{\nabla u}_{W^{1,p}(\Omega)} \lesssim \norm{\nabla u}_{W^1_\delta} \lesssim \norm{u}_{W^2_\delta} \lesssim \sqrt{\mathcal{D}}
\end{equation}
for $1 \le p < 2/(1+\delta)$. Arguing as with $II_1$ for the same $p,q$ we then may estimate
\begin{equation}\label{ee_f3_II_3}
\abs{II_3} \lesssim \norm{\partial_1 \eta}_{L^\infty} \norm{\partial_1 u_1}_{L^{p}} \ns{\partial_1 \partial_t^2 \eta}_{L^{q}} \lesssim \norm{\eta}_{s+1/2} \norm{u}_{W^2_\delta} \ns{ \partial_t^2 \eta}_{3/2} \lesssim \mathcal{E} \mathcal{D}.
\end{equation}
Since $u_1 =0$ at the endpoints we may similarly estimate
\begin{multline}\label{ee_f3_II_4}
\abs{II_4} = \abs{\int_{-\ell}^\ell \sigma \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) u_1 \partial_1 \frac{\abs{ \partial_1 \partial_t^2 \eta}^2}{2} } = \abs{- \int_{-\ell}^\ell \sigma \partial_1 \left[ \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) u_1 \right] \frac{\abs{ \partial_1 \partial_t^2 \eta}^2}{2} } \\
= \abs{ \int_{-\ell}^\ell \sigma \left[ \frac{\partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}{\partial_1 \eta} \partial_1 \eta \partial_1 u_1 + \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1^2 \eta u_1 + \frac{\partial_y \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)\partial_1^2 \zeta_0}{\partial_1\eta} \partial_1 \eta u_1\right] \frac{\abs{ \partial_1 \partial_t^2 \eta}^2}{2} } \\
\lesssim \left( \norm{\partial_1 \eta}_{L^\infty} \norm{\partial_1 u_1}_{L^{p}(\Sigma)} + \norm{\partial_1^2 \eta}_{L^{p}} \norm{u_1}_{L^\infty} + \norm{\partial_1 \eta}_{L^{p}} \norm{u_1}_{L^\infty} \right) \ns{\partial_1 \partial_t^2 \eta}_{L^{q}} \\
\lesssim \left( \norm{\eta}_{s+1/2} \norm{u}_{W^2_\delta} + \norm{\partial_1^2 \eta}_{W^{1/2}_\delta} \norm{u_1}_{s} + \norm{\eta}_{s+1/2} \norm{u_1}_{s} \right) \ns{\partial_t^2 \eta}_{3/2} \\
\lesssim (\sqrt{\mathcal{E}} \sqrt{\mathcal{E}} + \sqrt{\mathcal{E}} \sqrt{\mathcal{E}} + \sqrt{\mathcal{E}} \sqrt{\mathcal{E}} )\mathcal{D} \lesssim \mathcal{E} \mathcal{D}.
\end{multline}
The estimate \eqref{ee_f3_II_0} then follows by combining \eqref{ee_f3_II_1}, \eqref{ee_f3_II_2}, \eqref{ee_f3_II_3}, and \eqref{ee_f3_II_4}.
\end{proof}
Next we handle $III$.
\begin{prop}\label{ee_f3_III}
Let $III$ be as given in \eqref{ee_f3_decomp}. Then
\begin{equation}
\abs{III + \frac{d}{dt} \int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 \partial_1 \partial_t^2 \eta} \lesssim (\sqrt{\mathcal{E}} + \mathcal{E})\mathcal{D}
\end{equation}
and
\begin{equation}
\abs{\int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 \partial_1 \partial_t^2 \eta } \lesssim \sqrt{\mathcal{E}}\mathcal{E}b.
\end{equation}
\end{prop}
\begin{proof}
To handle $III$ we cannot get away with integrating by parts spatially (the resulting term needs too many dissipation terms at the endpoints since $\partial_1 \partial_t \eta$ is only in $H^{1/2}$ in the energy).
Instead we pull out a time derivative:
\begin{multline}
III = - \int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 \partial_1 \partial_t^3 \eta = - \frac{d}{dt} \int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 \partial_1 \partial_t^2 \eta \\
+ \int_{-\ell}^\ell \sigma \partial_z^3 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^3 \partial_1 \partial_t^2 \eta + \int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) 2 \partial_1 \partial_t \eta \abs{\partial_1 \partial_t^2 \eta}^2.
\end{multline}
We then estimate
\begin{multline}
\abs{ \int_{-\ell}^\ell \sigma \partial_z^3 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^3 \partial_1 \partial_t^2 \eta } \lesssim \norm{\partial_1 \partial_t \eta}_{L^4}^3 \norm{\partial_1 \partial_t^2 \eta}_{L^4} \lesssim \norm{\partial_1 \partial_t \eta}_{1/2}^3 \norm{\partial_1 \partial_t^2 \eta}_{1/2} \\
\lesssim \norm{ \partial_t \eta}_{3/2}^3 \norm{ \partial_t^2 \eta}_{3/2} \lesssim \mathcal{E} \sqrt{\mathcal{D}} \sqrt{\mathcal{D}} = \mathcal{E} \mathcal{D}
\end{multline}
and
\begin{equation}
\abs{\int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) 2 \partial_1 \partial_t \eta \abs{\partial_1 \partial_t^2 \eta}^2} \lesssim \norm{\partial_1 \partial_t \eta}_{L^3} \norm{\partial_1 \partial_t^2 \eta}_{L^3}^2 \lesssim \norm{\partial_t \eta}_{3/2} \ns{\partial_t^2 \eta}_{3/2} \lesssim \sqrt{\mathcal{E}} \mathcal{D}.
\end{equation}
Finally, we want to show that the term with the time derivative is actually controlled only by the energy. To this end we first note that the Sobolev embeddings and interpolation imply that if $\psi \in H^{3/2}((-\ell,\ell))$ then
\begin{equation}
\norm{\partial_1 \psi}_{L^4} \lesssim \norm{\partial_1 \psi}_{H^{1/4}} \lesssim \norm{\psi}_{H^{5/4}} \lesssim \norm{\psi}_{H^1}^{1/2} \norm{\psi}_{H^{3/2}}^{1/2}.
\end{equation}
Using this, we may estimate
\begin{equation}
\abs{\int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 \partial_1 \partial_t^2 \eta } \lesssim \ns{\partial_1 \partial_t \eta}_{L^4} \norm{\partial_1 \partial_t^2 \eta}_{L^2} \lesssim \norm{\partial_t \eta}_{3/2} \norm{\partial_t \eta}_{1} \norm{\partial_t^2 \eta}_{1} \lesssim \sqrt{\mathcal{E}} \mathcal{E}b.
\end{equation}
\end{proof}
Finally, we handle the term $IV$.
\begin{prop}\label{ee_f3_IV}
Let $IV$ be as given in \eqref{ee_f3_decomp}. Then
\begin{equation}\label{ee_f3_IV_0}
\abs{IV} \lesssim (\mathcal{E} + \mathcal{E}^{3/2})\mathcal{D}.
\end{equation}
\end{prop}
\begin{proof}
We now use the expression $F^6= -2 \partial_t u_1 \partial_1 \partial_t \eta - u_1 \partial_1 \partial_t^2 \eta$ to write
\begin{multline}
IV = \int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 \partial_1 (2 \partial_t u_1 \partial_1 \partial_t \eta) \\
+ \int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 \partial_1(u_1 \partial_1 \partial_t^2 \eta) = IV_1 + IV_2.
\end{multline}
We first argue as with $II_1$ and $II_2$ in Proposition \ref{ee_f3_II} (using the same $p,q$) to estimate
\begin{multline}\label{ee_f3_IV_1}
\abs{IV_1} = \abs{ \int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 2 \partial_1 \partial_t u_1 \partial_1 \partial_t \eta + \int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 2 \partial_t u_1 \partial_1^2 \partial_t \eta } \\
\lesssim \norm{\partial_1 \partial_t \eta}_{L^\infty} \norm{\partial_1 \partial_t \eta }_{L^{q}} \left( \norm{\partial_1 \partial_t u}_{L^{p}(\Sigma)} \norm{\partial_1 \partial_t \eta}_{L^{q}} + \norm{\partial_t u}_{L^{q}(\Sigma)} \norm{\partial_1^2 \partial_t \eta}_{L^{p}} \right) \\
\lesssim \norm{\partial_1 \partial_t \eta}_{s-1/2} \norm{\partial_1 \partial_t \eta }_{1/2} \left( \norm{\partial_t u}_{W^2_\delta} \norm{\partial_1 \partial_t \eta}_{1/2} + \norm{\partial_t u}_{H^{1/2}(\Sigma)} \norm{\partial_t \eta}_{W^{5/2}_\delta} \right) \\
\lesssim \norm{\partial_t \eta}_{s+1/2} \norm{ \partial_t \eta }_{3/2}\left( \norm{\partial_t u}_{W^2_\delta}
\norm{\partial_t \eta}_{3/2} + \norm{\partial_t u}_{1} \norm{\partial_t \eta}_{W^{5/2}_\delta} \right) \\
\lesssim \sqrt{\mathcal{D}} \sqrt{\mathcal{E}} \left(\sqrt{\mathcal{D}} \sqrt{\mathcal{E}} + \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} \right) \lesssim \mathcal{E} \mathcal{D}.
\end{multline}
Next we expand
\begin{multline}
IV_2 = \int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 \partial_1 u_1 \partial_1 \partial_t^2 \eta + \int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 u_1 \partial_1^2 \partial_t^2 \eta =: IV_{3} + IV_4.
\end{multline}
To handle the term $IV_3$ we let $p=(3+\delta)/(2+2\delta)$ as above but now choose $r = (9+3\delta)/(1-\delta)$ so that $1/p + 3/r =1$, which allows us to estimate
\begin{equation}\label{ee_f3_IV_3}
\abs{IV_3} \lesssim \ns{\partial_1 \partial_t \eta }_{L^{r}} \norm{\partial_1 u}_{L^{p}(\Sigma)} \norm{\partial_1 \partial_t^2 \eta}_{L^{r}} \lesssim \ns{ \partial_t \eta }_{3/2} \norm{u}_{W^2_\delta} \norm{\partial_t^2 \eta}_{3/2} \lesssim \mathcal{E} \sqrt{\mathcal{D}} \sqrt{\mathcal{D}} = \mathcal{E} \mathcal{D}.
\end{equation}
For the term $IV_4$ we first use the fact that $u_1 =0$ at the endpoints to write
\begin{multline}
IV_4 = -\int_{-\ell}^\ell \sigma \left[ \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 \partial_1 u_1 + 2 \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1 \partial_t \eta \partial_1^2 \partial_t \eta u_1 \right] \partial_1 \partial_t^2 \eta \\
-\int_{-\ell}^\ell \sigma \left[ \partial_z^3 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1^2 \eta + \partial_y \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1^2 \zeta_0 \right] (\partial_1 \partial_t \eta)^2 u_1 \partial_1 \partial_t^2 \eta =: IV_5 + IV_6.
\end{multline}
Then with $p$ and $r$ as used above for $IV_3$ we bound
\begin{multline}\label{ee_f3_IV_5}
\abs{IV_5} \lesssim \left( \ns{\partial_1 \partial_t \eta}_{L^r} \norm{\partial_1 u_1}_{L^p(\Sigma)} + \norm{\partial_1 \partial_t \eta}_{L^r} \norm{\partial_1^2 \partial_t \eta}_{L^p} \norm{u}_{L^r(\Sigma)} \right) \norm{\partial_1 \partial_t^2 \eta}_{L^r} \\
\lesssim \left( \ns{\partial_t \eta}_{3/2} \norm{u}_{W^2_\delta} + \norm{\partial_t \eta}_{3/2} \norm{\partial_t \eta}_{W^{5/2}_\delta} \norm{u}_{s} \right) \norm{\partial_t^2 \eta}_{3/2} \lesssim \left(\mathcal{E} \sqrt{\mathcal{D}} + \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}} \right) \sqrt{\mathcal{D}} \lesssim \mathcal{E} \mathcal{D},
\end{multline}
and
\begin{multline}\label{ee_f3_IV_6}
\abs{IV_6} \lesssim \left(\norm{\partial_1^2 \eta}_{L^p} + 1 \right) \ns{\partial_1 \partial_t \eta}_{L^r} \norm{u}_{L^\infty} \norm{\partial_1 \partial_t^2 \eta}_{L^r} \lesssim \left(\norm{ \eta}_{W^{5/2}_\delta} + 1 \right) \ns{ \partial_t \eta}_{3/2} \norm{u}_{s} \norm{ \partial_t^2 \eta}_{3/2} \\
\lesssim \sqrt{\mathcal{D}} \mathcal{E} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} + \mathcal{E} \sqrt{\mathcal{D}} \sqrt{\mathcal{D}} = (\mathcal{E} + \mathcal{E}^{3/2}) \mathcal{D}.
\end{multline}
The estimate \eqref{ee_f3_IV_0} then follows by combining \eqref{ee_f3_IV_1}, \eqref{ee_f3_IV_3}, \eqref{ee_f3_IV_5}, and \eqref{ee_f3_IV_6}.
\end{proof}
Now that we have controlled $I$--$IV$ in \eqref{ee_f3_decomp} we can record a unified estimate.
\begin{thm}\label{ee_f3_dt2}
Let $F^3$ be given by \eqref{dt2_f3}. Then
\begin{multline}
\abs{ - \int_{-\ell}^\ell \sigma F^3 \partial_1 (\partial_t^2 u \cdot \mathcal{N}) + \frac{d}{dt} \int_{-\ell}^\ell \left[ \sigma \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \frac{\abs{\partial_1 \partial_t^2 \eta}^2}{2} + \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 \partial_1 \partial_t^2 \eta \right] } \\
\lesssim (\sqrt{\mathcal{E}} + \mathcal{E} + \mathcal{E}^{3/2})\mathcal{D}.
\end{multline}
Moreover,
\begin{equation}
\abs{\int_{-\ell}^\ell \left[ \sigma \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \frac{\abs{\partial_1 \partial_t^2 \eta}^2}{2} + \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 \partial_1 \partial_t^2 \eta \right]} \lesssim \sqrt{\mathcal{E}} \mathcal{E}b.
\end{equation}
\end{thm}
\begin{proof}
We simply combine \eqref{ee_f3_decomp} with Propositions \ref{ee_f3_I}, \ref{ee_f3_II}, \ref{ee_f3_III}, and \ref{ee_f3_IV}.
\end{proof}
A similar, but simpler, result holds for the once time-differentiated problem. We will record it without proof.
\begin{thm}\label{ee_f3_dt}
Let $F^3$ be given by \eqref{dt1_f3}.
Then
\begin{equation}
\abs{ - \int_{-\ell}^\ell \sigma F^3 \partial_1 (\partial_t u \cdot \mathcal{N}) } \lesssim (\sqrt{\mathcal{E}} + \mathcal{E})\mathcal{D}.
\end{equation}
\end{thm}
\subsection{Special functional estimates: free surface term}
On the right side of \eqref{linear_energy_0} we encounter the terms
\begin{equation}
\int_{-\ell}^\ell g \partial_t^2 \eta F^6 + \sigma \frac{\partial_1 \partial_t^2 \eta \partial_1 F^6}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \text{ and } \int_{-\ell}^\ell g \partial_t \eta F^6 + \sigma \frac{\partial_1 \partial_t \eta \partial_1 F^6}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}}
\end{equation}
where $F^6$ is given by \eqref{dt2_f6} for the first term and \eqref{dt1_f6} for the second term. We have the following estimate.
\begin{thm}\label{ee_f6}
We have the estimate
\begin{equation}\label{ee_f6_01}
\abs{ \int_{-\ell}^\ell g \partial_t^2 \eta F^6 + \sigma \frac{\partial_1 \partial_t^2 \eta \partial_1 F^6}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}}} \lesssim \sqrt{\mathcal{E}} \mathcal{D}
\end{equation}
when $F^6$ is given by \eqref{dt2_f6}, and we have the estimate
\begin{equation}\label{ee_f6_02}
\abs{ \int_{-\ell}^\ell g \partial_t \eta F^6 + \sigma \frac{\partial_1 \partial_t \eta \partial_1 F^6}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}}} \lesssim \sqrt{\mathcal{E}} \mathcal{D}
\end{equation}
when $F^6$ is given by \eqref{dt1_f6}.
\end{thm}
\begin{proof}
We will again only prove the result in the more difficult case, which corresponds to \eqref{ee_f6_01}. The estimate \eqref{ee_f6_02} follows from a similar argument. In the first case we may write
\begin{equation}
F^6 = -2 \partial_t u_1 \partial_1 \partial_t \eta - u_1 \partial_1 \partial_t^2 \eta
\end{equation}
and
\begin{equation}
\int_{-\ell}^\ell g \partial_t^2 \eta F^6 + \sigma \frac{\partial_1 \partial_t^2 \eta \partial_1 F^6}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} = I + II.
\end{equation}
We will estimate $I$ and $II$ separately; combining these then leads to the bound \eqref{ee_f6_01}. We estimate the term $I$ via
\begin{multline}
\abs{I} \lesssim \norm{\partial_t^2 \eta}_{L^3} \left( \norm{\partial_t u_1}_{L^3(\Sigma)} \norm{\partial_1 \partial_t \eta}_{L^3} + \norm{u_1}_{L^3(\Sigma)} \norm{\partial_1 \partial_t^2 \eta}_{L^3} \right) \\
\lesssim \norm{\partial_t^2 \eta}_{1/2} \left( \norm{\partial_t u_1}_{H^{1/2}(\Sigma)} \norm{\partial_1 \partial_t \eta}_{1/2} + \norm{u_1}_{H^{1/2}(\Sigma)} \norm{\partial_1 \partial_t^2 \eta}_{1/2} \right) \\
\lesssim \norm{\partial_t^2 \eta}_{1/2} \left( \norm{\partial_t u_1}_{1} \norm{\partial_t \eta}_{3/2} + \norm{u_1}_{1} \norm{\partial_t^2 \eta}_{3/2} \right) \lesssim \sqrt{\mathcal{E}}(\sqrt{\mathcal{D}} \sqrt{\mathcal{D}} + \sqrt{\mathcal{D}} \sqrt{\mathcal{D}}) = \sqrt{\mathcal{E}} \mathcal{D}.
\end{multline}
To estimate the term $II$ we let $p$ and $q$ be given by
\begin{equation}
p = \frac{3+\delta}{2+2\delta} \text{ and } q = \frac{6+2\delta}{1-\delta},
\end{equation}
which implies that
\begin{equation}
\frac{1}{p} + \frac{2}{q} = 1.
\end{equation}
We then expand
\begin{equation}
II = -\int_{-\ell}^\ell \sigma \frac{\partial_1 \partial_t^2 \eta }{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} 2\partial_1 \partial_t u_1 \partial_1 \partial_t \eta - \int_{-\ell}^\ell \sigma \frac{\partial_1 \partial_t^2 \eta }{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} u_1 \partial_1^2 \partial_t^2 \eta = II_1 + II_2.
\end{equation}
We estimate $II_1$ via
\begin{multline}
\abs{II_1} \lesssim \norm{\partial_1 \partial_t^2 \eta}_{L^q} \norm{\nabla \partial_t u}_{L^p(\Sigma)} \norm{\partial_1 \partial_t \eta}_{L^q} \lesssim \norm{\partial_t^2 \eta}_{3/2} \norm{\partial_t u}_{W^2_\delta} \norm{\partial_t \eta}_{3/2} \lesssim \sqrt{\mathcal{D}} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}} = \sqrt{\mathcal{E}} \mathcal{D}.
\end{multline}
Then since $u_1$ vanishes at the endpoints we have that
\begin{equation}
II_2 = - \int_{-\ell}^\ell \sigma \frac{u_1 }{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \partial_1 \frac{\abs{ \partial_1 \partial_t^2 \eta }^2}{2} = \int_{-\ell}^\ell \sigma \partial_1 \left(\frac{u_1 }{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \right) \frac{\abs{ \partial_1 \partial_t^2 \eta }^2}{2}
\end{equation}
and so
\begin{equation}
\abs{II_2}\lesssim \left(\norm{u}_{L^{p}(\Sigma)} + \norm{\nabla u}_{L^{p}(\Sigma)} \right) \ns{\partial_1 \partial_t^2 \eta}_{L^{q}} \lesssim \norm{u}_{W^2_\delta} \ns{\partial_1 \partial_t^2 \eta}_{1/2} \lesssim \norm{u}_{W^2_\delta} \ns{\partial_t^2 \eta}_{3/2} \lesssim \sqrt{\mathcal{E}} \mathcal{D}.
\end{equation}
\end{proof}
\subsection{Special functional estimates: $F^7$ term}
On the right side of \eqref{linear_energy_0} we also encounter the term $ [v\cdot \mathcal{N},F^7]_\ell.$ We estimate this now.
\begin{thm}\label{ee_f7}
We have the estimate
\begin{equation}\label{ee_f7_01}
\abs{ [\partial_t^2 u\cdot \mathcal{N},F^7]_\ell} \lesssim \sqrt{\mathcal{E}} \mathcal{D}
\end{equation}
when $F^7$ is given by \eqref{dt2_f7}, and we have the estimate
\begin{equation}\label{ee_f7_02}
\abs{ [\partial_t u \cdot \mathcal{N},F^7]_\ell} \lesssim \sqrt{\mathcal{E}} \mathcal{D}
\end{equation}
when $F^7$ is given by \eqref{dt1_f7}.
\end{thm}
\begin{proof}
We will only prove \eqref{ee_f7_01}. In this case we bound
\begin{equation}
\abs{F^7} \lesssim \abs{\mathscr{W}h'(\partial_t \eta)} \abs{\partial_t^3 \eta} + \abs{\mathscr{W}h''(\partial_t \eta)} \abs{\partial_t^2 \eta}^2.
\end{equation}
Since $S = \norm{\partial_t \eta}_{C^0} \lesssim \norm{\partial_t \eta}_1 \lesssim \mathcal{E} \lesssim 1$, we may estimate
\begin{equation}
\abs{\mathscr{W}h'(z)} = \frac{1}{\kappa}\abs{ \int_0^z \mathscr{W}''(r)dr}\lesssim \abs{z} \text{ for } z \in [-S,S].
\end{equation}
Then we may use the equations for $\partial_t^j \eta$ and trace theory to bound
\begin{equation}
\abs{F^7} \lesssim \abs{\partial_t \eta} \abs{\partial_t^3 \eta} + \abs{ \partial_t^2 \eta}^2 \lesssim \norm{\partial_t \eta}_1 \abs{\partial_t^2 u \cdot \mathcal{N}} + \norm{\partial_t^2 \eta}_1 \abs{\partial_t u \cdot \mathcal{N}}.
\end{equation}
Consequently,
\begin{equation}
\abs{ [\partial_t^2 u\cdot \mathcal{N},F^7]_\ell } \lesssim [\partial_t^2 u \cdot \mathcal{N}]_\ell \left(\norm{\partial_t \eta}_1 \abs{\partial_t^2 u \cdot \mathcal{N}} + \norm{\partial_t^2 \eta}_1 \abs{\partial_t u \cdot \mathcal{N}} \right) \lesssim \sqrt{\mathcal{E}} \mathcal{D}.
\end{equation}
\end{proof}
\subsection{Special zeroth order terms}
Here we record a couple of simple estimates that we will use in conjunction with Corollary \ref{basic_energy}. We begin with the $\mathcal{Q}$ term.
\begin{thm}\label{ee_Q}
Let $\mathcal{Q}(y,z)$ be the smooth function defined by \eqref{Q_def}. Then
\begin{equation}
\abs{\int_{-\ell}^\ell \sigma \mathcal{Q}(\partial_1 \zeta_0, \partial_1 \eta) } \lesssim \norm{\eta}_{s+1/2} \ns{\eta}_1 \lesssim \sqrt{\mathcal{E}} \ns{\eta}_1.
\end{equation}
\end{thm}
\begin{proof}
According to Proposition \ref{R_prop} we have that
\begin{equation}
\abs{\int_{-\ell}^\ell \sigma \mathcal{Q}(\partial_1 \zeta_0, \partial_1 \eta) } \lesssim \int_{-\ell}^\ell \abs{\partial_1 \eta}^3 \lesssim \norm{\partial_1 \eta}_{L^\infty} \ns{\eta}_1 \lesssim \norm{\partial_1 \eta}_{s-1/2} \ns{\eta}_1 \lesssim \norm{\eta}_{s+1/2} \ns{\eta}_1.
\end{equation}
This implies the desired estimate.
\end{proof}
Next we handle the $\mathscr{W}$ term appearing in Corollary \ref{basic_energy}.
\begin{thm}\label{ee_W}
We have that
\begin{equation}\label{ee_W_0}
\abs{ [u\cdot \mathcal{N},\mathscr{W}h(\partial_t \eta)]_\ell } \lesssim \norm{\partial_t \eta}_{1} \bs{u\cdot \mathcal{N}}.
\end{equation}
\end{thm}
\begin{proof}
The definition of $\mathscr{W}h \in C^2$ in \eqref{V_pert} easily shows that $\abs{\mathscr{W}h(z)} \lesssim z^2$. Since $\partial_t \eta = u\cdot \mathcal{N}$ at $\pm \ell$ we have that
\begin{equation}
\abs{ [u\cdot \mathcal{N},\mathscr{W}h(\partial_t \eta)]_\ell } \lesssim \sum_{a=\pm 1} \kappa \abs{u\cdot \mathcal{N}(a\ell,t)}^2 \abs{\partial_t \eta(a \ell,t)}.
\end{equation}
The estimate \eqref{ee_W_0} then follows from this and the one-dimensional trace estimate $\abs{\partial_t \eta (\pm \ell,t)} \lesssim \norm{\partial_t \eta}_1$.
\end{proof}
\section{Terms in the elliptic estimates}\label{sec_nlin_ell}
Our scheme of a priori estimates will employ the elliptic estimates of Theorem \ref{A_stokes_stress_solve}. In order for this to be useful we must estimate the various terms appearing on the right side of \eqref{A_stokes_stress_0} when the $G^i$ terms are determined by the once temporally-differentiated problem and by the non-differentiated problem. The former is far more delicate, and so we will focus our efforts on these.
The latter can be handled with similar and simpler arguments and are thus omitted. Throughout this entire section we will assume that $\omega \in (0,\pi)$ is the angle formed by $\zeta_0$ at the corners of $\Omega$, $\delta_\omega \in [0,1)$ is given by \eqref{crit_wt}, and $\delta \in (\delta_\omega,1)$. This determines explicitly the choice of $\delta$ appearing in the definitions of $\mathcal{E}$ and $\mathcal{D}$ in \eqref{ed_def_1}--\eqref{ed_def_5}. We will assume throughout the entirety of this section that $\eta$ is given and satisfies
\begin{equation}
\sup_{0 \le t \le T} \left( \mathcal{E}b(t) + \ns{\eta(t)}_{W^{5/2}_\delta(\Omega)} + \ns{\partial_t \eta(t)}_{H^{3/2}((-\ell,\ell))} \right) \le \gamma < 1,
\end{equation}
where $\gamma \in (0,1)$ is as in Lemma \ref{eta_small}. For the sake of brevity we will not explicitly state this in each result's hypotheses.
\subsection{The time differentiated problem}
We want to apply Theorem \ref{A_stokes_stress_solve} to the time differentiated problem. In this case we have
\begin{equation}
G^1 = F^1, G^2 = F^2, G^3_- = 0, G^3_+ = \partial_t^2 \eta - F^6,
\end{equation}
and
\begin{equation}
G^4_- = F^5, G^4_+ = F^4 \cdot \mathcal{T} /\abs{\mathcal{T}}^2, G^5 = F^4 \cdot \mathcal{N} /\abs{\mathcal{N}}^2, G^6 = F^3, G^7 = \kappa (\partial_t^2 \eta + F^7) \pm \sigma F^3,
\end{equation}
where the $F^i$ terms are given in Appendix \ref{fi_dt1}. Theorem \ref{A_stokes_stress_solve} then dictates that we must control
\begin{multline}
\ns{F^1}_{W^0_{\delta} } + \ns{F^2}_{W^1_{\delta}} + \ns{\partial_t^2 \eta - F^6}_{W^{3/2}_{\delta} } + \ns{F^5}_{W^{1/2}_{\delta}} \\
+ \ns{F^4 \cdot \mathcal{T} /\abs{\mathcal{T}}^2}_{W^{1/2}_{\delta}} + \ns{F^4 \cdot \mathcal{N} /\abs{\mathcal{N}}^2}_{W^{1/2}_{\delta}} + \ns{\partial_1 F^3}_{W^{1/2}_{\delta}} + [ \kappa \partial_t^2 \eta + \kappa F^7 \pm \sigma F^3]_\ell^2.
\end{multline}
We begin by estimating the $F^1$ term.
\begin{prop}\label{we_f1}
Let $F^1$ be given by \eqref{dt1_f1}. We have the estimate
\begin{equation}
\ns{F^1}_{W^0_{\delta}} \lesssim \mathcal{D}( \mathcal{E} + \mathcal{E}^2).
\end{equation}
\end{prop}
\begin{proof}
We have that
\begin{equation}
F^1 = - \diverge_{\partial_t \mathcal{A}} S_\mathcal{A}(p,u) + \mu \diverge_{\mathcal{A}} \mathbb{D}_{\partial_t \mathcal{A}} u,
\end{equation}
and we will estimate each term separately. For the first we choose $q \in [1,\infty)$ such that $2/q + (2-s)/2 =1/2$ in order to estimate
\begin{multline}
\ns{- \diverge_{\partial_t \mathcal{A}} S_\mathcal{A}(p,u)}_{W^0_{\delta}} \lesssim \ns{\partial_t \mathcal{A}}_{L^\infty} \ns{\nabla p}_{W^0_{\delta}} + \ns{\partial_t \mathcal{A}}_{L^\infty} \ns{\mathcal{A}}_{L^\infty} \ns{u}_{W^2_{\delta}} \\
+ \ns{\partial_t \mathcal{A}}_{L^{q} } \ns{\nabla \mathcal{A}}_{L^{2/(2-s)}} \ns{d^\delta \nabla u}_{L^{q}} \lesssim \ns{\partial_t \eta}_{s+1/2} \ns{p}_{W^1_{\delta}} + \ns{\partial_t \eta}_{s+1/2} \ns{u}_{W^2_{\delta}} \\
+ \ns{\partial_t \eta}_{3/2} \ns{\eta}_{s+1/2} \ns{u}_{W^2_{\delta}} \lesssim \mathcal{D} \mathcal{E} + \mathcal{D} \mathcal{E} + \mathcal{D} \mathcal{E}^2 \lesssim \mathcal{D}(\mathcal{E} + \mathcal{E}^2).
\end{multline}
For the second we estimate
\begin{multline}
\ns{\mu \diverge_{\mathcal{A}} \mathbb{D}_{\partial_t \mathcal{A}} u}_{W^0_{\delta}} \lesssim \ns{\partial_t \nabla \mathcal{A}}_{L^{2/(2-s)}} \ns{d^\delta \nabla u}_{L^{2/(s-1)}} + \ns{\partial_t \mathcal{A}}_{L^\infty} \ns{u}_{W^2_{\delta}} \\
\lesssim \ns{\partial_t \eta}_{s+1/2} \ns{u}_{W^2_\delta} + \ns{\partial_t \eta}_{s+1/2} \ns{u}_{W^2_{\delta}} \lesssim \mathcal{D} \mathcal{E} + \mathcal{D} \mathcal{E} \lesssim \mathcal{D} \mathcal{E}.
\end{multline}
\end{proof}
Next we estimate the $F^2$ term.
\begin{prop}\label{we_f2}
Let $F^2$ be given by \eqref{dt1_f2}. We have that
\begin{equation}
\ns{F^2}_{W^1_{\delta}} \lesssim \mathcal{E} \mathcal{D}.
\end{equation}
\end{prop}
\begin{proof}
Since $ F^2 = -\diverge_{\partial_t \mathcal{A}} u$ we only have one term to estimate. We bound
\begin{multline}
\ns{-\diverge_{\partial_t \mathcal{A}} u}_{W^1_{\delta}} \lesssim \ns{\partial_t \mathcal{A}}_{L^4} \ns{d^\delta \nabla u}_{L^4} + \ns{\partial_t \nabla \mathcal{A}}_{L^{2/(2-s)}} \ns{d^\delta \nabla u}_{L^{2/(s-1)}} + \ns{\partial_t \mathcal{A}}_{L^\infty} \ns{\nabla^2 u}_{W^0_{\delta}} \\
\lesssim \ns{\partial_t \eta}_{3/2} \ns{u}_{W^2_\delta} + \ns{\partial_t \eta}_{s+1/2} \ns{u}_{W^2_\delta} + \ns{\partial_t \eta}_{s+1/2} \ns{u}_{W^2_{\delta}} \lesssim \mathcal{D} \mathcal{E} + \mathcal{D} \mathcal{E} + \mathcal{D} \mathcal{E} \lesssim \mathcal{E} \mathcal{D}.
\end{multline}
\end{proof}
Next we estimate the first $F^3$ term.
\begin{prop}\label{we_f3_top}
Let $F^3$ be given by \eqref{dt1_f3}. We have that
\begin{equation}
\ns{\partial_1 F^3}_{W^{1/2}_{\delta}} \lesssim \mathcal{D} \mathcal{E}.
\end{equation}
\end{prop}
\begin{proof}
We have that
\begin{equation}
\partial_1 F^3 = \partial_y \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1^2 \zeta_0 \partial_1 \partial_t \eta + \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1^2 \eta \partial_1 \partial_t \eta + \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1^2 \partial_t \eta =: I + II + III.
\end{equation}
To control these terms we will use Proposition \ref{weight_prod_half}, which is possible because $s-1/2 > 1/2$.
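For the reader's convenience we record the schematic form in which this product estimate is applied below, with $f$ denoting the weighted factor and $g$ the smoother factor (this display only summarizes how the estimates are grouped; the precise hypotheses are those of Proposition \ref{weight_prod_half}):
\begin{equation}
\ns{fg}_{W^{1/2}_{\delta}} \lesssim \ns{f}_{W^{1/2}_{\delta}} \ns{g}_{s-1/2}.
\end{equation}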
We estimate
\begin{equation}
\ns{I}_{W^{1/2}_{\delta}} \lesssim \ns{\partial_1 \partial_t \eta}_{W^{1/2}_{\delta}} \ns{\partial_y \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1^2 \zeta_0}_{s-1/2} \lesssim \mathcal{D} \mathcal{E},
\end{equation}
\begin{equation}
\ns{II}_{W^{1/2}_{\delta}} \lesssim \ns{\partial_1^2 \eta}_{W^{1/2}_{\delta}} \ns{\partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1 \partial_t \eta}_{s-1/2} \lesssim \mathcal{E} \mathcal{D},
\end{equation}
and
\begin{equation}
\ns{III}_{W^{1/2}_{\delta}} \lesssim \ns{\partial_1^2 \partial_t \eta }_{W^{1/2}_{\delta}} \ns{\partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}_{s-3/2} \lesssim \mathcal{D} \mathcal{E}.
\end{equation}
\end{proof}
Next we estimate the term with $F^3$ and $F^7$ at the endpoints.
\begin{prop}\label{we_f3_end}
Let $F^3$ be given by \eqref{dt1_f3} and $F^7$ be given by \eqref{dt1_f7}. We have the estimate
\begin{equation}
[\kappa \partial_t^2 \eta + \kappa F^7 \pm \sigma F^3]_\ell^2 \lesssim \mathcal{D}b + \mathcal{D} \mathcal{E}.
\end{equation}
\end{prop}
\begin{proof}
We automatically have
\begin{equation}
\bs{\partial_t^2 \eta} = \bs{\partial_t u \cdot \mathcal{N}} \le \mathcal{D}b.
\end{equation}
Next we control
\begin{equation}
F^3 = \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_t \partial_1 \eta
\end{equation}
at $\pm \ell$. Proposition \ref{R_prop} implies that $\abs{\partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)} \lesssim \abs{\partial_1 \eta}$, so we reduce to controlling $\abs{\partial_1 \eta \partial_t \partial_1 \eta}$ at the endpoints. We estimate
\begin{equation}
\abs{\partial_1 \eta(\pm \ell,t)}^2 \lesssim \ns{\partial_1 \eta}_{s-1/2} \lesssim \mathcal{E} \text{ and } \abs{\partial_t \partial_1 \eta(\pm \ell,t)}^2 \lesssim \ns{\partial_t \partial_1 \eta}_{s-1/2} \lesssim \mathcal{D}.
\end{equation}
Thus
\begin{equation}
\bs{\pm \sigma F^3} \lesssim \mathcal{E} \mathcal{D}.
\end{equation}
Finally, we turn to the $F^7$ term. In this case we may argue as in Theorem \ref{ee_f7} to bound $\abs{F^7} \lesssim \abs{\partial_t \eta} \abs{\partial_t^2 \eta}$, from which we deduce that
\begin{equation}
\bs{F^7} \lesssim \ns{\partial_t \eta}_1 \bs{\partial_t u \cdot \mathcal{N}} \lesssim \mathcal{E} \mathcal{D}.
\end{equation}
\end{proof}
Next we handle the $F^4$ term.
\begin{prop}\label{we_f4}
Let $F^4$ be given by \eqref{dt1_f4}. We have that
\begin{equation}
\ns{F^4}_{W^{1/2}_{\delta}} \lesssim \mathcal{D}(\mathcal{E} + \mathcal{E}^2).
\end{equation}
\end{prop}
\begin{proof}
We have that
\begin{multline}
\ns{F^4 \cdot \mathcal{T} /\abs{\mathcal{T}}^2}_{W^{1/2}_{\delta}} + \ns{F^4 \cdot \mathcal{N} /\abs{\mathcal{N}}^2}_{W^{1/2}_{\delta}} \lesssim \ns{F^4}_{W^{1/2}_{\delta}}(1 + \ns{\eta}_{C^1}) \\
\lesssim \ns{F^4}_{W^{1/2}_{\delta}}(1 + \ns{\eta}_{s+1/2}) \lesssim \ns{F^4}_{W^{1/2}_{\delta}}(1+ \mathcal{E}),
\end{multline}
and so it suffices to estimate $\ns{F^4}_{W^{1/2}_{\delta}}$. It is given by
\begin{equation}
F^4 = \mu \mathbb{D}_{\partial_t \mathcal{A}} u \mathcal{N} +\left[ g\eta - \sigma \partial_1 \left(\frac{\partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \right) - S_{\mathcal{A}}(p,u) \right] \partial_t \mathcal{N}.
\end{equation}
We will handle each term separately. For the first we estimate
\begin{multline}
\ns{\mu \mathbb{D}_{\partial_t \mathcal{A}} u \mathcal{N}}_{W^{1/2}_{\delta}} \lesssim \ns{\mathbb{D}_{\partial_t \mathcal{A}} u (e_2 - e_1 \partial_1 \bar{\eta})}_{W^{1}_{\delta}} \lesssim \ns{u}_{W^{2}_{\delta}} \ns{\partial_t \mathcal{A}}_{L^\infty} \ns{e_2 - e_1 \partial_1 \bar{\eta}}_{L^\infty} \\
+ \ns{d^\delta \nabla u}_{L^{2/(s-1)}} \left( \ns{\partial_t \nabla \mathcal{A}}_{L^{2/(2-s)}} \ns{e_2 - e_1 \partial_1 \bar{\eta}}_{L^\infty} + \ns{\partial_t \mathcal{A}}_{L^\infty} \ns{\nabla^2 \bar{\eta}}_{L^{2/(2-s)}} \right) \\
\lesssim \ns{u}_{W^{2}_{\delta}} \ns{\partial_t \eta}_{2} + \ns{u}_{W^{2}_{\delta}} \left( \ns{\partial_t \eta}_{s+1/2} + \ns{\partial_t \eta}_{s+1/2} \ns{\eta}_{s+1/2} \right) \lesssim \mathcal{E} \mathcal{D} + \mathcal{E}(\mathcal{D} + \mathcal{D} \mathcal{E}) \lesssim \mathcal{D}(\mathcal{E} + \mathcal{E}^2).
\end{multline}
For the second we estimate
\begin{multline}
\ns{-S_{\mathcal{A}}(p,u) \partial_t \mathcal{N}}_{W^{1/2}_{\delta}} \lesssim \ns{S_{\mathcal{A}}(p,u) \partial_t \partial_1 \bar{\eta}}_{W^{1}_{\delta}(\Omega)} \lesssim \left( \ns{p}_{W^1_{\delta}} + \ns{u}_{W^2_{\delta}} \right) \ns{\partial_t \partial_1 \bar{\eta} }_{L^\infty} \\
+ \left( \ns{d^\delta p}_{L^{2/(s-1)} } + \ns{d^\delta \nabla u}_{L^{2/(s-1)} } \right) \left( \ns{ \partial_t \nabla^2 \bar{\eta}}_{L^{2/(2-s)}} + \ns{\nabla \mathcal{A}}_{L^{2/(2-s)}} \ns{\partial_t \partial_1 \bar{\eta}}_{L^\infty} \right) \\
\lesssim \mathcal{E} \ns{\partial_t \eta}_{s+1/2} + \mathcal{E}\left( \ns{\partial_t \eta}_{s+1/2} + \ns{\eta}_{s+1/2} \ns{\partial_t \eta}_{s+1/2} \right) \lesssim \mathcal{E} \mathcal{D} + \mathcal{E}( \mathcal{D} + \mathcal{E} \mathcal{D}) \lesssim \mathcal{D}(\mathcal{E} + \mathcal{E}^2).
\end{multline} For the remaining terms we use Proposition \ref{weight_prod_half} to bound \begin{multline} \ns{ \left[ g\eta - \sigma \partial_1 \left(\frac{\partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \right)\right] \partial_t \mathcal{N}}_{W^{1/2}_{\delta}} \lesssim \ns{\eta}_{W^{5/2}_{\delta}} \ns{\partial_t \partial_1 \eta}_{s-1/2} \\ \lesssim \ns{\eta}_{W^{5/2}_{\delta}} \ns{\partial_t \eta}_{s+1/2} \lesssim \mathcal{E} \mathcal{D}, \end{multline} which is possible because $s-1/2 > 1/2$. \end{proof} The $F^5$ term is handled next. \begin{prop}\label{we_f5} Let $F^5$ be given by \eqref{dt1_f5}. We have that \begin{equation} \ns{F^5}_{W^{1/2}_{\delta}} \lesssim \mathcal{D}\mathcal{E}. \end{equation} \end{prop} \begin{proof} The fact that $1 < s$ allows us to use Proposition \ref{weight_prod_half} to estimate \begin{equation} \ns{ \mu \mathbb{D}_{\partial_t \mathcal{A}} u \nu \cdot \tau}_{W^{1/2}_{\delta}} \lesssim \ns{ \nabla u}_{W^{1/2}_{\delta}} \ns{ \partial_t \mathcal{A}}_{s-1/2} \lesssim \ns{u}_{W^2_\delta} \ns{\partial_t \eta}_{s+1/2} \lesssim \mathcal{E} \mathcal{D}. \end{equation} \end{proof} Next we consider the $F^6$ term. \begin{prop}\label{we_f6} Let $F^6$ be given by \eqref{dt1_f6}. We have that \begin{equation} \ns{\partial_t^2 \eta - F^6}_{W^{3/2}_{\delta}} \lesssim \ns{\partial_t^2 \eta}_{3/2} + \mathcal{E}\mathcal{D}. \end{equation} \end{prop} \begin{proof} We have that $F^6 = -u_1 \partial_1 \partial_t \eta$. We then use Proposition \ref{weight_prod_half} to estimate \begin{equation} \ns{\partial_t^2 \eta - F^6}_{W^{3/2}_{\delta}} \lesssim \ns{\partial_t^2 \eta}_{3/2} + \ns{u_1 \partial_1 \partial_t \eta}_{W^{3/2}_{\delta}} \lesssim \ns{\partial_t^2 \eta}_{3/2} + \ns{u}_{W^2_{\delta}} \ns{\partial_t \eta}_{W^{5/2}_{\delta}} \lesssim \ns{\partial_t^2 \eta}_{3/2} + \mathcal{E} \mathcal{D}. \end{equation} \end{proof} Finally, we combine the above propositions into a single estimate. \begin{thm}\label{elliptic_est_dt} Let $\omega \in (0,\pi)$ be the angle formed by $\zeta_0$ at the corners of $\Omega$, $\delta_\omega \in [0,1)$ be given by \eqref{crit_wt}, and $\delta \in (\delta_\omega,1)$. Suppose that $\ns{\eta}_{W^{5/2}_{\delta}} < \gamma$, where $\gamma$ is as in Theorem \ref{A_stokes_om_iso}. Then we have the inclusions $(\partial_t u,\partial_t p,\partial_t \eta) \in W^2_{\delta}(\Omega) \times \mathring{W}^1_{\delta}(\Omega) \times W^{5/2}_{\delta}$ as well as the estimate \begin{equation} \ns{\partial_t u}_{W^2_{\delta}} + \ns{\partial_t p}_{\mathring{W}^1_{\delta}} + \ns{\partial_t \eta}_{W^{5/2}_{\delta}} \lesssim \mathcal{D}b + \mathcal{D}(\mathcal{E} + \mathcal{E}^2). \end{equation} \end{thm} \begin{proof} Propositions \ref{we_f1}--\ref{we_f6} guarantee that we may estimate all the terms appearing on the right side of \eqref{A_stokes_stress_0} by $\mathcal{D}b + \mathcal{D}(\mathcal{E} +\mathcal{E}^2)$. The result then follows from Theorem \ref{A_stokes_stress_solve}.
\end{proof} \subsection{The problem without time derivatives} We now want to apply Theorem \ref{A_stokes_stress_solve} to the basic problem without time derivatives. In this case we have $G^1 =0$, $G^2 =0$, $G^3_-=0$, $G^3_+ = \partial_t \eta$, $G^4_\pm =0$, $G^5 =0$, $G^6 = \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)$, and $G^7_\pm = \kappa \partial_t \eta(\pm \ell,t) + \kappa \mathscr{W}h(\partial_t \eta(\pm \ell,t)) \pm \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)$. Consequently we must estimate \begin{equation} \ns{\partial_t \eta}_{W^{3/2}_{\delta}} + \ns{\partial_1 [\mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)] }_{W^{1/2}_{\delta}} + [\kappa \partial_t \eta + \kappa \mathscr{W}h(\partial_t \eta) \pm \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)]_\ell^2 \lesssim \mathcal{D}b + \mathcal{D}( \mathcal{E} + \mathcal{E}^2). \end{equation} The estimates of these terms follow from arguments similar to those given for the time-differentiated problem and are thus omitted. \begin{thm}\label{elliptic_est_no_dt} Let $\omega \in (0,\pi)$ be the angle formed by $\zeta_0$ at the corners of $\Omega$, $\delta_\omega \in [0,1)$ be given by \eqref{crit_wt}, and $\delta \in (\delta_\omega,1)$. Suppose that $\ns{\eta}_{W^{5/2}_{\delta}} < \gamma$, where $\gamma$ is as in Theorem \ref{A_stokes_om_iso}. Then we have the inclusions $(u,p,\eta) \in W^2_{\delta}(\Omega) \times \mathring{W}^1_{\delta}(\Omega) \times W^{5/2}_{\delta}$ as well as the estimate \begin{equation} \ns{u}_{W^2_{\delta}} + \ns{ p}_{\mathring{W}^1_{\delta}} + \ns{\eta}_{W^{5/2}_{\delta}} \lesssim \mathcal{D}b + \mathcal{D}(\mathcal{E} + \mathcal{E}^2). \end{equation} \end{thm} \section{Main results}\label{sec_apriori} Here we record the main results of the paper: the a priori estimates for solutions to \eqref{geometric}, and our small data global well-posedness and decay result. \subsection{A priori estimates} In order to deduce our a priori estimates we must first introduce some variants of the energy and dissipation. We define \begin{equation}\label{fed_def_1} \mathfrak{E} = \sum_{j=0}^2 \int_{-\ell}^\ell \frac{g}{2} \abs{\partial_t^j \eta}^2 + \frac{\sigma}{2(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \abs{\partial_1 \partial_t^j \eta}^2, \end{equation} \begin{equation}\label{fed_def_2} \mathfrak{D} = \sum_{j=0}^2 \left( \frac{\mu}{2} \int_\Omega \abs{\mathbb{D}_{\mathcal{A}} \partial_t^j u}^2 J +\int_{\Sigma_s} \beta J \abs{\partial_t^j u \cdot \tau}^2 + \bs{\partial_t^j u\cdot \mathcal{N}} \right) \end{equation} and \begin{equation}\label{fed_def_3} \mathfrak{F} = \int_{-\ell}^\ell \left[\sigma \mathcal{Q}(\partial_1 \zeta_0,\partial_1 \eta) + \sigma \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \frac{\abs{\partial_1 \partial_t^2 \eta}^2}{2} + \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 \partial_1 \partial_t^2 \eta \right].
\end{equation} These terms are clearly related to certain energy and dissipation terms that we have previously defined. We state these relations now. \begin{prop}\label{ap_fed_ests} Let $\mathfrak{E},$ $\mathfrak{D},$ and $\mathfrak{F}$ be as defined in \eqref{fed_def_1}--\eqref{fed_def_3}. There exists a universal constant $\gamma >0$ such that if \begin{equation} \sup_{0 \le t \le T} \mathcal{E}(t) \le \gamma, \end{equation} then \begin{equation}\label{ap_fed_ests_01} \mathfrak{E} \lesssim \mathcal{E}b \lesssim \mathfrak{E} \text{ and } \mathfrak{D} \lesssim \mathcal{D}bb \lesssim \mathfrak{D}, \end{equation} where $\mathcal{E}b$ and $\mathcal{D}bb$ are defined in \eqref{ed_def_1}--\eqref{ed_def_5}, and also \begin{equation}\label{ap_fed_ests_02} \abs{ \mathfrak{F} } \le \frac{1}{2} \mathfrak{E}. \end{equation} \end{prop} \begin{proof} The estimates in \eqref{ap_fed_ests_01} follow easily from Lemma \ref{eta_small} if we assume that $\gamma$ is as small as stated there. Theorems \ref{ee_f3_dt2} and \ref{ee_Q} guarantee that \begin{equation} \abs{ \mathfrak{F} } \lesssim \sqrt{\mathcal{E}} \mathcal{E}b \lesssim \sqrt{\mathcal{E}} \mathfrak{E}. \end{equation} Consequently, if we further restrict $\gamma$ then we must have that $\abs{\mathfrak{F}} \le \frac{1}{2} \mathfrak{E}$, which is \eqref{ap_fed_ests_02}. \end{proof} The reason we have introduced $\mathfrak{E}$, $\mathfrak{D}$, and $\mathfrak{F}$ is that they appear naturally in an energy-dissipation inequality that we may derive from Theorem \ref{linear_energy}. This inequality, which we now state, forms the core of our a priori estimates. \begin{thm}\label{ap_diff_ineq} Let $\omega \in (0,\pi)$ be the angle formed by $\zeta_0$ at the corners of $\Omega$, $\delta_\omega \in [0,1)$ be given by \eqref{crit_wt}, and $\delta \in (\delta_\omega,1)$. There exists a universal constant $\gamma >0$ such that if \begin{equation}\label{ap_diff_ineq_00} \sup_{0 \le t \le T} \mathcal{E}(t) + \int_0^T \mathcal{D}(t) dt \le \gamma, \end{equation} then there exists a universal constant $C >0$ such that \begin{equation}\label{ap_diff_ineq_0} \frac{d}{dt} \left( \mathfrak{E} + \mathfrak{F}\right) + C \mathcal{D} \le 0. \end{equation} \end{thm} \begin{proof} Let $0<\gamma < 1$ be as small as in Proposition \ref{ap_fed_ests}, and hence as small as in Lemma \ref{eta_small}. To begin we apply Theorem \ref{linear_energy} twice: to the once and twice time-differentiated problems. This is possible thanks to \eqref{ap_diff_ineq_00}. We sum the resulting inequalities with the result of Corollary \ref{basic_energy} and then apply the estimates of Theorems \ref{ee_v_est}, \ref{ee_p_est}, \ref{ee_f3_dt2}, \ref{ee_f3_dt}, \ref{ee_f6}, \ref{ee_f7}, and \ref{ee_W} to deduce that \begin{equation}\label{ap_diff_ineq_1} \frac{d}{dt} \left( \mathfrak{E} + \mathfrak{F} \right) + \mathfrak{D} \lesssim \sqrt{\mathcal{E}} \mathcal{D}, \end{equation} where $\mathfrak{E}$, $\mathfrak{D}$, and $\mathfrak{F}$ are as defined by \eqref{fed_def_1}--\eqref{fed_def_3}. We know from Proposition \ref{ap_fed_ests} that \begin{equation}\label{ap_diff_ineq_2} \mathfrak{D} \lesssim \mathcal{D}bb \lesssim \mathfrak{D}. \end{equation} Next we combine Theorem \ref{pressure_est} with the estimate of Theorem \ref{ee_v_est_pressure} to see that \begin{equation} \ns{p}_0 + \ns{\partial_t p}_0 +\ns{\partial_t^2 p}_0 \lesssim \mathcal{D}bb + \sqrt{\mathcal{E}} \mathcal{D}.
\end{equation} Similarly, Theorem \ref{xi_est} and the estimates of Theorems \ref{ee_v_est} and \ref{ee_f3_half} show that \begin{equation} \ns{\eta}_{3/2} + \ns{\partial_t \eta}_{3/2} +\ns{\partial_t^2 \eta}_{3/2} \lesssim \mathcal{D}bb + \sqrt{\mathcal{E}} \mathcal{D}. \end{equation} Consequently, we may combine with \eqref{ap_diff_ineq_2} to see that \begin{equation}\label{ap_diff_ineq_4} \mathcal{D}b \lesssim \mathfrak{D} + \sqrt{\mathcal{E}} \mathcal{D}. \end{equation} Theorems \ref{elliptic_est_dt} and \ref{elliptic_est_no_dt} then imply that if $\gamma \le \hat{\gamma}$ (where here we write $\hat{\gamma}$ for the smallness parameter used in the theorems) then \begin{equation} \ns{u}_{W^2_{\delta}} + \ns{p}_{\mathring{W}^1_{\delta}} + \ns{\eta}_{W^{5/2}_{\delta}} + \ns{\partial_t u}_{W^2_{\delta}} + \ns{\partial_t p}_{\mathring{W}^1_{\delta}} + \ns{\partial_t \eta}_{W^{5/2}_{\delta}} \lesssim \mathcal{D}b + \mathcal{D}(\mathcal{E} + \mathcal{E}^2). \end{equation} Since $\partial_t^3 \eta = \partial_t^2 (u \cdot \mathcal{N})$ we then also have that \begin{equation} \ns{\partial_t^3 \eta}_{W^{1/2}_\delta} \lesssim \mathcal{D}b + \mathcal{D}(\mathcal{E} + \mathcal{E}^2). \end{equation} Combining these with \eqref{ap_diff_ineq_4} then shows that \begin{equation}\label{ap_diff_ineq_5} \mathcal{D} \lesssim \mathfrak{D} + \sqrt{\mathcal{E}} \mathcal{D}. \end{equation} Plugging \eqref{ap_diff_ineq_5} into \eqref{ap_diff_ineq_1} then shows that \begin{equation} \frac{d}{dt} \left( \mathfrak{E} + \mathfrak{F}\right) + 2 C \mathcal{D} \lesssim \sqrt{\mathcal{E}} \mathcal{D} \end{equation} for some universal constant $C>0$. By further restricting $\gamma$ we may absorb the term on the right onto the left, which yields the estimate \begin{equation} \frac{d}{dt} \left( \mathfrak{E} + \mathfrak{F}\right) + C \mathcal{D} \le 0. \end{equation} This is \eqref{ap_diff_ineq_0}. \end{proof} With Theorem \ref{ap_diff_ineq} in hand, we can now complete the proof of our a priori estimates. These will consist of two estimates: a decay estimate and a higher-order bound. We begin with the proof of the decay estimate. \begin{thm}\label{ap_decay} Let $\omega \in (0,\pi)$ be the angle formed by $\zeta_0$ at the corners of $\Omega$, $\delta_\omega \in [0,1)$ be given by \eqref{crit_wt}, and $\delta \in (\delta_\omega,1)$. There exists a universal constant $\gamma >0$ such that if \begin{equation} \sup_{0 \le t \le T} \mathcal{E}(t) + \int_0^T \mathcal{D}(t) dt \le \gamma, \end{equation} then there exists a universal constant $\lambda >0$ such that \begin{equation}\label{ap_decay_0} \sup_{ 0 \le t \le T} e^{\lambda t} \left[ \mathcal{E}b(t) + \ns{u(t)}_{1} + \ns{u(t)\cdot \tau}_{L^2(\Sigma_s)} + \bs{u\cdot \mathcal{N}(t)} +\ns{p(t)}_0\right] \lesssim \mathcal{E}b(0). \end{equation} \end{thm} \begin{proof} Let $\gamma$ be as small as in Theorem \ref{ap_diff_ineq}. The theorem then provides for the existence of a universal constant $C>0$ such that \begin{equation}\label{ap_decay_1} \frac{d}{dt} \left( \mathfrak{E} + \mathfrak{F}\right) + C \mathcal{D} \le 0. \end{equation} We know from Proposition \ref{ap_fed_ests} that \begin{equation}\label{ap_decay_2} \mathfrak{E} \lesssim \mathcal{E}b \lesssim \mathfrak{E} \text{ and } 0 \le \frac{1}{2} \mathfrak{E} \le \mathfrak{E} + \mathfrak{F} \le \frac{3}{2} \mathfrak{E}.
\end{equation} On the other hand, it's clear that \begin{equation}\label{ap_decay_3} \mathfrak{E} \lesssim \mathcal{D}. \end{equation} We may thus combine \eqref{ap_decay_1}, \eqref{ap_decay_2}, and \eqref{ap_decay_3} to deduce that there exists a universal constant $\lambda >0$ such that \begin{equation} \frac{d}{dt} \left( \mathfrak{E} + \mathfrak{F}\right) + \lambda \left( \mathfrak{E} + \mathfrak{F}\right) \le 0. \end{equation} Upon integrating this differential inequality we find that \begin{equation} \frac{1}{2} \mathfrak{E}(t) \le \mathfrak{E}(t) + \mathfrak{F}(t) \le e^{-\lambda t} \left(\mathfrak{E}(0) + \mathfrak{F}(0)\right) \le \frac{3}{2} e^{-\lambda t} \mathfrak{E}(0) \end{equation} for all $t \in [0,T].$ Thus \eqref{ap_decay_2} tells us that \begin{equation}\label{ap_decay_6} \sup_{ 0 \le t \le T} e^{\lambda t} \mathcal{E}b(t) \lesssim \mathcal{E}b(0). \end{equation} To complete the proof of \eqref{ap_decay_0} we first use Lemma \ref{geometric_evolution} on \eqref{geometric}, which means that $F^i=0$ except for $i=3,7$, in which case $F^3 = \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)$ and $F^7 = \mathscr{W}h(\partial_t \eta)$. The lemma tells us that \begin{equation}\label{ap_decay_10} \frac{\mu}{2} \int_\Omega \abs{\mathbb{D}_{\mathcal{A}} u}^2 J +\int_{\Sigma_s} \beta J \abs{ u \cdot \tau}^2 + \bs{ u\cdot \mathcal{N}} = -\ip{\eta}{\partial_t \eta}_{1,\sigma} - \int_{-\ell}^\ell \sigma \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1 \partial_t \eta - [u\cdot \mathcal{N}, \mathscr{W}h(\partial_t \eta)]_\ell. \end{equation} We may write $\mathscr{W}h(z) = \frac{1}{\kappa} \int_0^z (z-r) \mathscr{W}''(r) dr$, which allows us to argue as in the proof of Theorem \ref{ee_W} to deduce that $\abs{\mathscr{W}h(\partial_t \eta)} \lesssim \abs{\partial_t \eta}^2$. From this and \eqref{ap_decay_10} we immediately deduce, using Proposition \ref{R_prop} and the fact that $\partial_t \eta = u\cdot \mathcal{N}$ at the endpoints, that \begin{equation}\label{ap_decay_7} \ns{u}_{1} + \ns{u\cdot \tau}_{L^2(\Sigma_s)} + \bs{u\cdot \mathcal{N}} \lesssim (1 + \sqrt{\mathcal{E}}) \norm{\eta}_{1} \norm{\partial_t \eta}_1 + \sqrt{\mathcal{E}} \ns{\partial_t \eta}_1 \lesssim \mathcal{E}b. \end{equation} Theorem \ref{pressure_est} provides the estimate \begin{equation}\label{ap_decay_8} \ns{p}_0 \lesssim \ns{u}_1 \lesssim \mathcal{E}b. \end{equation} Then \eqref{ap_decay_0} follows by combining \eqref{ap_decay_6}, \eqref{ap_decay_7}, and \eqref{ap_decay_8}. \end{proof} Next we complete our a priori estimates by proving the higher-order bound. \begin{thm}\label{ap_bound} Let $\omega \in (0,\pi)$ be the angle formed by $\zeta_0$ at the corners of $\Omega$, $\delta_\omega \in [0,1)$ be given by \eqref{crit_wt}, and $\delta \in (\delta_\omega,1)$. There exists a universal constant $\gamma >0$ such that if \begin{equation} \sup_{0 \le t \le T} \mathcal{E}(t) + \int_0^T \mathcal{D}(t) dt \le \gamma, \end{equation} then \begin{equation}\label{ap_bound_0} \sup_{0 \le t \le T} \mathcal{E}(t) + \int_0^T \mathcal{D}(t) dt \lesssim \mathcal{E}(0). \end{equation} \end{thm} \begin{proof} Let $\gamma < 1$ be as small as in Theorem \ref{ap_diff_ineq}.
Then there is a universal constant $C>0$ such that \begin{equation}\label{ap_bound_1} \frac{d}{dt} \left( \mathfrak{E} + \mathfrak{F}\right) + C \mathcal{D} \le 0. \end{equation} Again, we know from Proposition \ref{ap_fed_ests} that \begin{equation} \mathfrak{E} \lesssim \mathcal{E}b \lesssim \mathfrak{E} \text{ and } 0 \le \frac{1}{2} \mathfrak{E} \le \mathfrak{E} + \mathfrak{F} \le \frac{3}{2} \mathfrak{E}. \end{equation} This allows us to integrate \eqref{ap_bound_1} to deduce that \begin{equation} \frac{1}{2} \mathfrak{E}(t) + C \int_0^t \mathcal{D}(s) ds \le \mathfrak{E}(0) + \mathfrak{F}(0) \le \frac{3}{2} \mathfrak{E}(0), \end{equation} and hence that \begin{equation}\label{ap_bound_2} \mathcal{E}b(t) + \int_0^t \mathcal{D}(s) ds \lesssim \mathcal{E}b(0) \end{equation} for all $t\in [0,T]$. Now, if $X$ is a real Hilbert space and $f \in H^1([0,T];X)$ then \begin{equation} \frac{d}{dt} \ns{f(t)}_X = 2(f(t),\partial_t f(t))_X, \end{equation} and upon integrating and applying Cauchy-Schwarz we find that \begin{equation} \ns{f(t)}_X \le \ns{f(0)}_X + \int_0^t \left(\ns{f(s)}_X + \ns{\partial_t f(s)}_X\right) ds. \end{equation} We use this estimate to bound \begin{multline}\label{ap_bound_3} \ns{\eta(t)}_{W^{5/2}_\delta} + \ns{\partial_t \eta(t)}_{3/2} + \ns{u(t)}_{W^{2}_\delta} + \ns{\partial_t u(t)}_{W^{2}_\delta} + \ns{p(t)}_{W^{1}_\delta} + \ns{\partial_t p(t)}_{W^{1}_\delta} \\ \le \ns{\eta(0)}_{W^{5/2}_\delta} + \ns{\partial_t \eta(0)}_{3/2} + \ns{u(0)}_{W^{2}_\delta} + \ns{\partial_t u(0)}_{W^{2}_\delta} \\ + \ns{p(0)}_{W^{1}_\delta} + \ns{\partial_t p(0)}_{W^{1}_\delta} + \int_0^t \mathcal{D}(s) ds. \end{multline} Then we combine \eqref{ap_bound_2} and \eqref{ap_bound_3} to deduce that \begin{equation} \mathcal{E}(t) + \int_0^t \mathcal{D}(s) ds \lesssim \mathcal{E}(0) \end{equation} for all $t\in [0,T]$, which then easily implies \eqref{ap_bound_0}. \end{proof} \subsection{Global existence and decay} We now state our main result on the global existence and decay of solutions. \begin{thm}\label{gwp} Let $\omega \in (0,\pi)$ be the angle formed by $\zeta_0$ at the corners of $\Omega$, $\delta_\omega \in [0,1)$ be given by \eqref{crit_wt}, and $\delta \in (\delta_\omega,1)$. There exists a universal smallness parameter $\gamma >0$ such that if \begin{equation} \mathcal{E}(0) \le \gamma, \end{equation} then there exists a unique global solution triple $(u,p,\eta)$ such that \begin{equation} \sup_{t \ge 0} \left( \mathcal{E}(t) + e^{\lambda t} \left[ \mathcal{E}b(t) + \ns{u(t)}_{1} + \ns{u(t)\cdot \tau}_{L^2(\Sigma_s)} + \bs{u\cdot \mathcal{N}(t)} + \ns{p(t)}_0 \right] \right) + \int_0^\infty \mathcal{D}(t) dt \le C \mathcal{E}(0), \end{equation} where $\lambda,C >0$ are universal constants. \end{thm} \begin{proof} The result follows from coupling Theorems \ref{ap_bound} and \ref{ap_decay} with the local existence theory and a standard continuation argument. \end{proof} \appendix \section{Recording the nonlinearities}\label{app_nonlin_form} The governing equations for $(u,p,\eta)$ are \eqref{geometric}, where $\mathcal{R}$ is defined by \eqref{R_def}.
When we apply $\partial_t$ and $\partial_t^2$ to this system we get \begin{equation}\label{time_diff} \begin{cases} \diverge_{\mathcal{A}}S_{\mathcal{A}}(q,v) =F^1 & \text{in } \Omega \\ \diverge_{\mathcal{A}} v = F^2 & \text{in }\Omega \\ S_{\mathcal{A}}(q,v) \mathcal{N} = g\xi \mathcal{N} -\sigma \partial_1\left( \frac{\partial_1 \xi}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + F^3\right) \mathcal{N} + F^4& \text{on } \Sigma \\ (S_{\mathcal{A}}(q,v)\nu - \beta v)\cdot \tau =F^5 &\text{on }\Sigma_s \\ v\cdot \nu =0 &\text{on }\Sigma_s \\ \partial_t \xi = v \cdot \mathcal{N} + F^6& \text{on } \Sigma \\ \kappa \partial_t \xi(\pm \ell,t) = \mp \sigma \left(\frac{\partial_1 \xi}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + F^3 \right)(\pm \ell,t) - \kappa F^7(\pm\ell,t) \end{cases} \end{equation} for $v = \partial_t^j u$, $\xi = \partial_t^j \eta$, and $q = \partial_t^j p$. We now identify the form of the forcing terms that appear in these equations. \subsection{Terms when $\partial_t$ is applied}\label{fi_dt1} First, we record the forcing terms appearing in \eqref{time_diff} when $\partial_t$ is applied. \begin{equation}\label{dt1_f1} F^1 = - \diverge_{\partial_t \mathcal{A}} S_\mathcal{A}(p,u) + \mu \diverge_{\mathcal{A}} \mathbb{D}_{\partial_t \mathcal{A}} u \end{equation} \begin{equation}\label{dt1_f2} F^2 = -\diverge_{\partial_t \mathcal{A}} u \end{equation} \begin{equation}\label{dt1_f3} F^3 = \partial_t [ \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)] \end{equation} \begin{equation}\label{dt1_f4} F^4 = \mu \mathbb{D}_{\partial_t \mathcal{A}} u \mathcal{N} +\left[ g\eta - \sigma \partial_1 \left(\frac{\partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \right) - S_{\mathcal{A}}(p,u) \right] \partial_t \mathcal{N} \end{equation} \begin{equation}\label{dt1_f5} F^5 = \mu \mathbb{D}_{\partial_t \mathcal{A}} u \nu \cdot \tau \end{equation} \begin{equation}\label{dt1_f6} F^6 = u \cdot \partial_t \mathcal{N} = -u_1 \partial_1 \partial_t \eta. \end{equation} \begin{equation}\label{dt1_f7} F^7 = \mathscr{W}h'(\partial_t \eta) \partial_t^2 \eta. \end{equation} Note that a key feature of $F^6$ is that it vanishes at $\pm \ell$ since $u_1$ vanishes there. \subsection{Terms when $\partial_t^2$ is applied}\label{fi_dt2} Here we record the forcing terms appearing in \eqref{time_diff} when $\partial_t^2$ is applied.
\begin{multline}\label{dt2_f1} F^1 = - 2\diverge_{\partial_t \mathcal{A}} S_\mathcal{A}(\partial_t p,\partial_t u) + 2\mu \diverge_{\mathcal{A}} \mathbb{D}_{\partial_t \mathcal{A}} \partial_t u \\ - \diverge_{\partial_t^2 \mathcal{A}} S_\mathcal{A}(p,u) + 2 \mu \diverge_{\partial_t \mathcal{A}} \mathbb{D}_{\partial_t \mathcal{A}} u + \mu \diverge_{\mathcal{A}} \mathbb{D}_{\partial_t^2 \mathcal{A}} u \end{multline} \begin{equation}\label{dt2_f2} F^2 = -\diverge_{\partial_t^2 \mathcal{A}} u - 2\diverge_{\partial_t \mathcal{A}}\partial_t u \end{equation} \begin{equation}\label{dt2_f3} F^3 = \partial_t^2 [ \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)] \end{equation} \begin{multline}\label{dt2_f4} F^4 = 2\mu \mathbb{D}_{\partial_t \mathcal{A}} \partial_t u \mathcal{N} + \mu \mathbb{D}_{\partial_t^2 \mathcal{A}} u \mathcal{N} + \mu \mathbb{D}_{\partial_t \mathcal{A}} u \partial_t \mathcal{N}\\ +\left[ 2g \partial_t \eta - 2\sigma \partial_1 \left(\frac{\partial_1 \partial_t \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + \partial_t[ \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)] \right) -2 S_{\mathcal{A}}(\partial_t p,\partial_t u) \right] \partial_t \mathcal{N} \\ + \left[ g\eta - \sigma \partial_1 \left(\frac{\partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \right) - S_{\mathcal{A}}(p,u) \right] \partial_t^2 \mathcal{N} \end{multline} \begin{equation}\label{dt2_f5} F^5 = 2 \mu \mathbb{D}_{\partial_t \mathcal{A}} \partial_t u \nu \cdot \tau + \mu \mathbb{D}_{\partial_t^2 \mathcal{A}} u \nu \cdot \tau \end{equation} \begin{equation}\label{dt2_f6} F^6 = 2 \partial_t u \cdot \partial_t \mathcal{N} + u \cdot \partial_t^2 \mathcal{N} = -2 \partial_t u_1 \partial_1 \partial_t \eta - u_1 \partial_1 \partial_t^2 \eta. \end{equation} \begin{equation}\label{dt2_f7} F^7 = \mathscr{W}h'(\partial_t \eta) \partial_t^3 \eta + \mathscr{W}h''(\partial_t \eta) (\partial_t^2 \eta)^2. \end{equation} Again, a key feature of $F^6$ is that it vanishes at $\pm \ell$ since $u_1$ and $\partial_t u_1$ vanish there. \section{Estimates for $\mathcal{R}$}\label{app_r} Recall that $\mathcal{R}$ is given by \eqref{R_def}. The following result records the essential estimates of $\mathcal{R}$. We omit the proof for the sake of brevity. \begin{prop}\label{R_prop} The mapping $\mathcal{R} \in C^\infty(\mathbb{R}^2)$ defined by \eqref{R_def} obeys the following estimates.
\begin{multline} \sup_{(y,z) \in \mathbb{R}^2} \left[ \abs{\frac{1}{z^3}\int_0^z \mathcal{R}(y,s)ds} + \abs{\frac{\mathcal{R}(y,z)}{z^2}} + \abs{\frac{\partial_z \mathcal{R}(y,z)}{z}} + \abs{\frac{\partial_y \mathcal{R}(y,z)}{z^2}} \right. \\ \left. + \abs{ \partial_z^2 \mathcal{R}(y,z)} + \abs{\frac{\partial_y^2 \mathcal{R}(y,z)}{z^2}} + \abs{\frac{\partial_z \partial_y \mathcal{R}(y,z)}{z}} + \abs{ \partial_z^3 \mathcal{R}(y,z)} + \abs{ \partial_z^2 \partial_y \mathcal{R}(y,z)} \right] < \infty. \end{multline} \end{prop} \section{Weighted Sobolev spaces}\label{app_weight} We begin our discussion of weighted Sobolev spaces by recalling Hardy's inequality. \begin{lem}[Hardy's inequality]\label{hardy_ineq} Assume that $\alpha >1$ and $p \in [1,\infty)$. Then \begin{equation}\label{hardy_ineq_02} \left(\int_0^\infty y^{p(\alpha-1) -1} \left(\int_y^\infty \abs{\varphi(z)} dz \right)^{p} dy \right)^{1/p} \le \frac{1}{ (\alpha-1)} \left(\int_0^\infty \abs{\varphi(z)}^p z^{\alpha p-1} dz \right)^{1/p}. \end{equation} \end{lem} \begin{proof} This is one of the well-known Hardy inequalities. A proof may be found, for instance, in \cite{h_l_p}. \end{proof} Next we record a useful application of Hardy's inequality. \begin{prop}\label{hardy_sobolev} Let $K \subset \mathbb{R}^n$ be an open cone and suppose that $\delta > 1 - n/2$. Then there exists a constant $C(n,\delta) >0$ such that \begin{equation}\label{hardy_sobolev_01} \left(\int_K \abs{x}^{2(\delta-1)} \abs{\psi(x)}^2 dx \right)^{1/2} \le C(n,\delta) \left( \int_K \abs{x}^{2\delta} \abs{\nabla \psi(x)}^2 dx \right)^{1/2} \end{equation} for all $\psi \in C^1_c(\bar{K})$. \end{prop} \begin{proof} Writing $x = (s,\theta)$ in polar coordinates, with $s>0$ and $\theta \in K \cap \mathbb{S}^{n-1}$, we have that \begin{equation} \int_K \abs{x}^{2(\delta-1)} \abs{\psi(x)}^2 dx = \int_{ K \cap \mathbb{S}^{n-1}} \int_0^\infty s^{2(\delta-1) + n -1} \abs{\psi(s,\theta)}^2 ds d\theta \end{equation} and \begin{equation} \int_K \abs{x}^{2\delta} \abs{\nabla \psi(x)}^2 dx \ge \int_K \abs{x}^{2\delta} \abs{x \abs{x}^{-1} \cdot \nabla \psi(x)}^2 dx = \int_{ K \cap \mathbb{S}^{n-1}} \int_0^\infty s^{2\delta + n -1} \abs{\partial_s \psi(s,\theta)}^2 ds d\theta. \end{equation} To prove \eqref{hardy_sobolev_01} it thus suffices to show that \begin{equation}\label{hardy_sobolev_1} \int_0^\infty s^{2(\delta-1) + n -1} \abs{\psi(s,\theta)}^2 ds \le \int_0^\infty s^{2\delta + n -1} \abs{\partial_s \psi(s,\theta)}^2 ds \end{equation} for each $\theta \in K \cap \mathbb{S}^{n-1}$. To prove this we set $\varphi(s,\theta) = \partial_s \psi(s,\theta)$ and note that \begin{equation} \abs{\psi(s,\theta)} = \abs{ \int_s^\infty \partial_s \psi(z,\theta)dz} \le \int_s^\infty \abs{\varphi(z,\theta)}dz, \end{equation} which in turn means that \begin{equation} \int_0^\infty s^{2(\delta-1) + n -1} \abs{\psi(s,\theta)}^2 ds \le \int_0^\infty y^{2(\alpha-1) -1} \left(\int_y^\infty \abs{\varphi(z,\theta)} dz \right)^{2} dy.
\end{equation} Applying Lemma \ref{hardy_ineq} with $\alpha = \delta + n/2 > 1$ and $p=2$ to estimate the right side, we then find that \eqref{hardy_sobolev_1} holds. \end{proof} Next we define various weighted Sobolev spaces on the equilibrium domain $\Omega$. We write \begin{equation}\label{corner_def} M = \{(-\ell,\zeta_0(\ell)), (\ell, \zeta_0(\ell))\} \end{equation} for the pair of corner points of $\Omega$. For $0 < \delta < 1$ and $k \in \mathbb{N}$ we let $W^k_\delta(\Omega)$ denote the space of functions such that $\ns{f}_{W^k_\delta} < \infty$, where \begin{equation} \ns{f}_{W^k_\delta} = \sum_{\abs{\alpha}\le k} \int_\Omega \dist(x,M)^{2\delta} \abs{\partial^\alpha f(x)}^2 dx. \end{equation} A consequence of Hardy's inequality (see for example Lemma 7.1.1 in \cite{kmr_1}) is that we have the continuous embeddings \begin{equation}\label{hardy_embed} W^1_\delta(\Omega) \hookrightarrow H^0(\Omega), W^2_\delta(\Omega) \hookrightarrow H^1(\Omega), \text{ and } H^1(\Omega) \hookrightarrow W^0_{-\delta}(\Omega) \end{equation} when $0 < \delta < 1$. We define the trace spaces $W^{k-1/2}_\delta(\partial \Omega)$ in the obvious way. It can be shown (see for example Section 9.1.2 of \cite{kmr_3}) that \begin{equation}\label{hardy_embed_2} \abs{\int_{\partial \Omega} f (v\cdot \tau) } \lesssim \norm{f}_{W^{1/2}_\delta(\partial \Omega)} \norm{v}_{H^1(\Omega)} \end{equation} for all $f \in W^{1/2}_\delta(\partial \Omega)$ and $v \in H^1(\Omega)$ when $0 < \delta < 1$. Finally, we will also need the spaces \begin{equation} \mathring{W}^k_\delta(\Omega) = \{u \in W^k_\delta(\Omega) \;\vert\; \int_\Omega u =0\} \end{equation} for $k \ge 1$. Next we record a useful embedding. \begin{prop}\label{weigted_nest} Let $k \in \mathbb{N}$ and $\delta_1,\delta_2 \in \mathbb{R}$ with $\delta_1 < \delta_2$. Then we have that \begin{equation} W^k_{\delta_1}(\Omega) \hookrightarrow W^k_{\delta_2}(\Omega). \end{equation} \end{prop} \begin{proof} This follows from the trivial estimate $\dist(x,M)^{2\delta_2} \lesssim \dist(x,M)^{2\delta_1}$ in $\Omega$. \end{proof} We will also need the following embedding result. \begin{prop}\label{weighted_embed} If $k \in \mathbb{N}$ and $0 < \delta < 1$, then \begin{equation}\label{weighted_embed_01} W^k_\delta(\Omega) \hookrightarrow W^{k,q}(\Omega), \end{equation} where \begin{equation}\label{weighted_embed_02} 1 \le q < \frac{2}{1+\delta}. \end{equation} In particular, \begin{equation}\label{weighted_embed_03} W^1_\delta(\Omega) \hookrightarrow L^p(\Omega) \end{equation} when \begin{equation} 1 \le p < \frac{2}{\delta}. \end{equation} \end{prop} \begin{proof} By H\"older's inequality, for $1 \le q < 2$ we have that \begin{equation} \int_{\Omega} \abs{f}^q \le \left( \int_\Omega \abs{f}^2 \dist(\cdot,M)^{2\delta} \right)^{q/2} \left(\int_\Omega \frac{1}{\dist(\cdot,M)^{2\delta q/(2-q) } }\right)^{1-q/2} \end{equation} and the latter integral is finite if and only if \begin{equation} \frac{2 \delta q}{2-q} -1 < 1, \end{equation} which is equivalent to \eqref{weighted_embed_02}. Thus \begin{equation} \norm{f}_{W^{k,q}} \lesssim \norm{f}_{W^k_\delta} \end{equation} for every $f \in W^k_\delta(\Omega)$, and so we have \eqref{weighted_embed_01}. The embedding \eqref{weighted_embed_03} follows from using \eqref{weighted_embed_01} with $k=1$ and the usual Sobolev embedding $W^{1,q}(\Omega) \hookrightarrow L^p(\Omega)$. \end{proof} Next we show a boundary embedding. \begin{prop}\label{weighted_trace} Suppose that $0 < \delta < 1$.
Then $W^{1/2}_\delta(\partial \Omega) \hookrightarrow L^q(\partial\Omega)$, where \begin{equation} 1 \le q < \frac{2}{1+\delta}. \end{equation} \end{prop} \begin{proof} Suppose that $f \in W^{1/2}_\delta(\partial \Omega)$. Then there exists $F \in W^{1}_\delta(\Omega)$ such that $F = f$ on $\partial \Omega$ and $\norm{F}_{W^1_\delta} \le 2 \norm{f}_{W^{1/2}_\delta}.$ According to Proposition \ref{weighted_embed} we have that $F \in W^{1,q}(\Omega)$. The usual trace theory then implies that \begin{equation} \norm{f}_{L^q(\partial\Omega)} = \norm{F \vert_{\partial \Omega}}_{L^q(\partial \Omega)} \lesssim \norm{F}_{W^{1,q}(\Omega)} \lesssim \norm{F}_{W^1_\delta(\Omega)} \lesssim \norm{f}_{W^{1/2}_\delta}. \end{equation} This yields the desired embedding. \end{proof} Next we show another consequence of Hardy's inequality. \begin{prop}\label{weighted_sobolev_embed} If $0 < \delta < 1$, then \begin{equation} W^1_\delta(\Omega) \hookrightarrow W^{0}_{\delta-1}(\Omega). \end{equation} \end{prop} \begin{proof} This follows from the Hardy inequality. A proof may be found in Lemma 7.1.1 of \cite{kmr_1}, for instance. \end{proof} Next we study some other weighted estimates. \begin{thm}\label{weighted_sobolev_thm} Suppose that $0 < \delta < 1$. Let $M$ be the corner set given by \eqref{corner_def}. Then for each $q \in [1,\infty)$ we have that \begin{equation}\label{weighted_sobolev_thm_0} \norm{\dist(\cdot,M)^\delta f}_{L^q} \lesssim \norm{f}_{W^1_\delta} \end{equation} for all $f \in W^1_\delta(\Omega)$. \end{thm} \begin{proof} Note first that $\dist(\cdot,M)$ is Lipschitz, differentiable a.e., and satisfies $\abs{\nabla \dist(\cdot,M)} =1$ a.e. For $f \in W^1_\delta(\Omega)$ we compute \begin{equation} \nabla(\dist(\cdot,M)^\delta f) = \dist(\cdot,M)^\delta \nabla f + \delta \dist(\cdot,M)^{\delta-1} \nabla \dist(\cdot, M) f \end{equation} and hence Proposition \ref{weighted_sobolev_embed} allows us to estimate \begin{equation} \norm{\nabla (\dist(\cdot,M)^\delta f)}_{L^2} \lesssim \norm{\dist(\cdot,M)^\delta \nabla f}_{L^2} + \norm{ \dist(\cdot,M)^{\delta-1} f}_{L^2} \lesssim \norm{f}_{W^1_\delta}. \end{equation} Since we automatically have $\norm{\dist(\cdot,M)^\delta f}_{L^2} \le \norm{f}_{W^1_\delta}$, we then conclude that \begin{equation} \norm{\dist(\cdot,M)^\delta f}_{H^1} \lesssim \norm{f}_{W^1_\delta}. \end{equation} Consequently, the usual Sobolev embedding in $\Omega \subset \mathbb{R}^2$ implies \eqref{weighted_sobolev_thm_0}. \end{proof} \section{Product estimates}\label{app_prods} We now prove a product estimate. \begin{lem}\label{int_prod} Let $\Omega \subset \mathbb{R}^2$. Suppose that $f \in H^r(\Omega)$ for $r \in (0,1)$ and $g \in H^1(\Omega)$. Then $fg \in H^\sigma$ for every $\sigma \in (0,r)$, and \begin{equation} \norm{fg}_{\sigma} \le C(r,\sigma) \norm{f}_r \norm{g}_1. \end{equation} \end{lem} \begin{proof} Define the linear operator $T$ via $Tf = fg$. Then by H\"{o}lder's inequality and the Sobolev embedding, we have \begin{equation} \norm{fg}_{L^p} \le \norm{f}_{L^2} \norm{g}_{L^q} \lesssim \norm{f}_{H^0} \norm{g}_{H^1} \end{equation} for \begin{equation}\label{inp_1} \frac{1}{p} = \frac{1}{2} + \frac{1}{q} \text{ and } 2 \le q < \infty. \end{equation} From this we see that $T: H^0 \to L^p$ is a bounded linear operator for any $1 \le p < 2$ and \begin{equation} \norm{T}_{\mathcal{L}(H^0,L^p)} \lesssim \norm{g}_1.
\end{equation} Similarly, \begin{multline} \norm{Tf}_{W^{1,p}} \lesssim \norm{fg}_{L^p} + \norm{f \nabla g}_{L^p} + \norm{\nabla f g}_{L^p} \\ \lesssim \norm{f}_{H^0} \norm{g}_{H^1} + \norm{f}_{L^q} \norm{\nabla g}_{L^2} + \norm{\nabla f}_{L^2} \norm{g}_{L^q} \\ \lesssim \norm{f}_{H^0} \norm{g}_{H^1} + \norm{f}_{H^1} \norm{ g}_{H^1} + \norm{ f}_{H^1} \norm{g}_{H^1} \lesssim \norm{f}_{H^1} \norm{g}_{H^1}. \end{multline} This means that $T: H^1 \to W^{1,p}$ is a bounded linear operator for any $1 \le p < 2$ and \begin{equation} \norm{T}_{\mathcal{L}(H^1,W^{1,p})} \lesssim \norm{g}_1. \end{equation} Now, the usual interpolation theory implies that \begin{equation} T : [H^0,H^1]_r \to [L^p, W^{1,p}]_r, \end{equation} which means that (see Adams and Fournier \cite{adams_fournier}) \begin{equation} T : H^r \to W^{r,p} \end{equation} is a bounded linear operator with $\norm{T f}_{W^{r,p}} \lesssim \norm{f}_{H^r} \norm{g}_{H^1}$. Now we use the embedding \begin{equation} W^{r,p} \hookrightarrow H^{r + 1 - 2/p} \end{equation} to deduce that \begin{equation} \norm{fg}_{r+1-2/p} \lesssim \norm{f}_{H^r} \norm{g}_{H^1}. \end{equation} Returning to \eqref{inp_1}, we have that \begin{equation} \sigma := r + 1 - \frac{2}{p} = r + 1 - \left( 1 + \frac{2}{q}\right) = r - \frac{2}{q} \end{equation} can take on any value in $(0,r)$ by choosing appropriate $q \in (2/r ,\infty)$. \end{proof} \begin{remark} If $\Omega \subset \mathbb{R}^3$ then the same argument works with $\sigma = r - (n-2)/n$. In dimension $n\ge 4$ we fail to get an embedding into $H^\sigma$ for $\sigma \ge 0$. \end{remark} Next we state a product estimate in weighted spaces on $\Sigma$. \begin{prop}\label{weight_prod_1} Suppose that $f \in W^1_\delta(\Omega)$ for $0 < \delta < 1$ and that $g \in H^{1+\kappa}(\Omega)$ for $0 < \kappa < 1$. Then $f g \in W^1_\delta(\Omega)$ and \begin{equation} \norm{fg}_{W^1_\delta} \lesssim \norm{f}_{W^1_\delta} \norm{g}_{1+\kappa}. \end{equation} \end{prop} \begin{proof} Since $\Omega \subset \mathbb{R}^2$ we have the Sobolev embedding $H^{1+\kappa}(\Omega) \hookrightarrow L^\infty(\Omega)$, and so we have that \begin{equation} \int_\Omega \dist(\cdot,M)^{2\delta} \left[ \abs{f}^2 \abs{g}^2 + \abs{\nabla f}^2 \abs{g}^2 \right] \lesssim \ns{f}_{W^1_\delta} \ns{g}_{L^\infty} \lesssim \ns{f}_{W^1_\delta} \ns{g}_{1+\kappa}. \end{equation} Consequently, in order to prove the desired estimate it suffices to prove that \begin{equation}\label{weight_prod_1_1} \int_\Omega \dist(\cdot,M)^{2\delta} \abs{f}^2 \abs{\nabla g}^2 \lesssim \ns{f}_{W^1_\delta} \ns{g}_{1+\kappa}. \end{equation} First note that $\nabla g \in H^\kappa(\Omega) \hookrightarrow L^{2/(1-\kappa)}(\Omega)$. We may then use H\"older's inequality to estimate \begin{multline} \int_\Omega \dist(\cdot,M)^{2\delta} \abs{f}^2 \abs{\nabla g}^2 \le \left(\int_\Omega \dist(\cdot,M)^{2\delta/\kappa} \abs{f}^{2/\kappa} \right)^\kappa \left(\int_\Omega \abs{\nabla g}^{2/(1-\kappa)} \right)^{1-\kappa} \\ \lesssim \ns{\dist(\cdot,M)^\delta f}_{L^{2/\kappa}} \ns{g}_{1+\kappa}. \end{multline} Now, $2/\kappa \in (2,\infty)$ and $0 < \delta <1$, so Theorem \ref{weighted_sobolev_thm} implies that $\norm{\dist(\cdot,M)^\delta f}_{L^{2/\kappa}} \lesssim \norm{f}_{W^1_\delta}$, and thus we deduce that \eqref{weight_prod_1_1} holds. \end{proof} Next we record a boundary result. \begin{prop}\label{weight_prod_half} Let $f\in W^{1/2}_\delta(\Sigma)$ for $0 < \delta < 1$, and let $g \in H^{1/2 + \kappa}(\Sigma)$ for $\kappa \in (0,1)$.
Then $fg \in W^{1/2}_\delta(\Sigma)$ and \begin{equation} \norm{fg}_{W^{1/2}_\delta} \lesssim \norm{f}_{W^{1/2}_\delta} \norm{g}_{1/2 + \kappa}. \end{equation} \end{prop} \begin{proof} Using trace theory, we may find $F \in W^1_\delta(\Omega)$ and $G \in H^{1+\kappa}(\Omega)$ such that $F = f$ and $G = g$ on $\Sigma$ and \begin{equation} \norm{F}_{W^1_\delta} \lesssim \norm{f}_{W^{1/2}_\delta} \text{ and } \norm{G}_{1+\kappa} \lesssim \norm{g}_{1/2+\kappa}. \end{equation} Applying Proposition \ref{weight_prod_1} to $F$ and $G$ shows that $FG \in W^1_\delta(\Omega)$ and that $\norm{FG}_{W^1_\delta} \lesssim \norm{F}_{W^1_\delta} \norm{G}_{1+\kappa}$. Thus \begin{equation} \norm{fg}_{W^{1/2}_\delta} \le \norm{FG}_{W^1_\delta} \lesssim \norm{F}_{W^1_\delta} \norm{G}_{1+\kappa} \lesssim \norm{f}_{W^{1/2}_\delta} \norm{g}_{1/2+\kappa}. \end{equation} \end{proof} \subsection{Coefficient estimates}\label{app_coeff} Here we are concerned with how the size of $\eta$ can control the ``geometric'' terms that appear in the equations. \begin{lem}\label{eta_small} Let $0 < \delta < 1$. There exists a universal $0 < \gamma < 1$ so that if $\ns{\eta}_{W^{5/2}_{\delta}(\Sigma)} \le \gamma$, then \begin{equation}\label{es_01} \begin{split} & \norm{J-1}_{L^\infty(\Omega)} +\norm{A}_{L^\infty(\Omega)} \le \frac{1}{2}, \\ & \norm{\mathcal{N}-1}_{L^\infty(\Gamma)} + \norm{K-1}_{L^\infty(\Gamma)} \le \frac{1}{2}, \text{ and } \\ & \norm{K}_{L^\infty(\Omega)} + \norm{\mathcal{A}}_{L^\infty(\Omega)} \lesssim 1. \end{split} \end{equation} Also, the map $\Phi$ defined by \eqref{mapping_def} is a diffeomorphism. \end{lem} \begin{proof} These follow from the weighted embeddings proved in Appendix \ref{app_weight} and the usual Sobolev product estimates. We refer to Lemma 2.4 of \cite{gt_hor} for more details in the case of a $3D$ domain. \end{proof} \section{Equilibrium surface}\label{app_surf} In this appendix we collect some well-known facts about the equilibrium capillary surface problem \eqref{zeta0_eqn}. \subsection{Uniqueness} We begin by proving that there is at most one solution to \eqref{zeta0_eqn}. This is proven in greater generality in Theorem 5.1 of \cite{finn}, but we record the simple one-dimensional proof here for the reader's convenience. \begin{thm}\label{eq_surf_unique} There exists at most one solution to the problem \begin{equation} \begin{cases} g \zeta - \sigma \mathcal{H}(\zeta) = P & \text{in }(-\ell,\ell) \\ \frac{\zeta'}{ \sqrt{1+(\zeta')^2} }(\pm \ell) = \pm \frac{\jump{\gamma}}{\sigma}. \end{cases} \end{equation} \end{thm} \begin{proof} Suppose that $\zeta_1,\zeta_2$ are two solutions. We subtract the equations for $\zeta_2$ from the equations for $\zeta_1$, multiply by $\zeta_1 -\zeta_2$, and integrate by parts over $(-\ell,\ell)$ to deduce that \begin{multline}\label{eq_surf_unique_1} \int_{-\ell}^\ell g\abs{\zeta_1 - \zeta_2}^2 + \sigma \left(\frac{\zeta_1'}{ \sqrt{1+(\zeta_1')^2} } - \frac{\zeta_2'}{ \sqrt{1+(\zeta_2')^2} } \right) (\zeta_1' - \zeta_2') \\ = \left.\sigma \left(\frac{\zeta_1'}{ \sqrt{1+(\zeta_1')^2} } - \frac{\zeta_2'}{ \sqrt{1+(\zeta_2')^2} } \right) (\zeta_1 - \zeta_2) \right\vert_{- \ell}^\ell =0. \end{multline} Set $f(z) = z/\sqrt{1+z^2}$ and $\varphi(t) = (x-y)(f(y+t(x-y)) - f(y))$ for a fixed pair $x,y \in \mathbb{R}$.
Then \begin{equation} \varphi(0) = 0 \text{ and } \varphi'(t) = \frac{\abs{x-y}^2}{(1 + \abs{y + t(x-y)}^2)^{3/2}} \ge 0, \end{equation} and so \begin{equation}\label{eq_surf_unique_pos} (x-y)(f(x)-f(y)) = \varphi(1) = \varphi(0) + \int_0^1 \varphi'(t) dt \ge 0 \end{equation} for any $x,y \in \mathbb{R}$. Applying this inequality in \eqref{eq_surf_unique_1} then shows that \begin{equation} \int_{-\ell}^\ell g \abs{\zeta_1 -\zeta_2}^2 \le 0 \end{equation} and hence $\zeta_1 = \zeta_2$. \end{proof} \subsection{Existence} We now prove the existence of solutions to \eqref{zeta0_eqn}. Consider the function $h:(1,\infty) \to (0,\infty)$ given by \begin{equation} h(r) := \int_{0}^{\arcsin(\abs{\jump{\gamma}}/\sigma) } \frac{\cos(\psi)}{\sqrt{r-\cos(\psi)}}d\psi. \end{equation} It's easy to show that the mapping $h$ is decreasing and satisfies \begin{equation} \lim_{r \to 1} h(r) = \infty \text{ and } \lim_{r \to \infty} h(r) = 0. \end{equation} From this we deduce that there exists a unique \begin{equation} C = C(g,\sigma,\abs{\jump{\gamma}},\ell) \in (1,\infty) \end{equation} such that $h(C) = \ell \sqrt{\frac{2g}{\sigma}} \in (0,\infty)$, which is equivalent to \begin{equation}\label{C_cond} \ell = \sqrt{\frac{\sigma}{2g}} \int_{0}^{\arcsin(\abs{\jump{\gamma}}/\sigma) } \frac{\cos(\psi)}{\sqrt{C-\cos(\psi)}}d\psi. \end{equation} With the constant $C = C(g,\sigma,\abs{\jump{\gamma}},\ell) \in (1,\infty)$ determined by \eqref{C_cond} we define the mapping $\Xi :[\arcsin(-\abs{\jump{\gamma}}/\sigma),\arcsin(\abs{\jump{\gamma}}/\sigma)] \to [-\ell,\ell]$ via \begin{equation}\label{surf_xi_def} \Xi(z) = \sqrt{\frac{\sigma}{2g}} \int_{0}^{z} \frac{\cos(\psi)}{\sqrt{C-\cos(\psi)}}d\psi. \end{equation} It's easy to see that $\Xi$ is a smooth increasing diffeomorphism and an odd function. We use $\Xi$ to construct the equilibrium capillary surface. \begin{thm}\label{chi_thm} Suppose that $ \jump{\gamma}/\sigma \in (-1,1).$ Define $\chi:[-\ell,\ell]\to \mathbb{R}$ by \begin{equation} \chi(x) = \operatorname{sgn}(\jump{\gamma}) \sqrt{\frac{2\sigma}{g}} \sqrt{C(g,\sigma,\abs{\jump{\gamma}},\ell) - \cos(\Xi^{-1}(x))} \end{equation} with the understanding that $\chi$ is simply $0$ when $\operatorname{sgn}(\jump{\gamma}) = \jump{\gamma} =0$ (in this case we cannot evaluate $\Xi$, but we don't need to). Then $\chi$ is smooth on $[-\ell,\ell]$ and an even function. Moreover, $\chi$ is the unique solution to \begin{equation} \begin{cases} g \chi - \sigma \mathcal{H}(\chi) = 0& \text{in }(-\ell,\ell) \\ \frac{\chi'}{ \sqrt{1+(\chi')^2} }(\pm \ell) = \pm \frac{\jump{\gamma}}{\sigma}. \end{cases} \end{equation} \end{thm} \begin{proof} It's obvious that $\chi$ is smooth and even. A direct computation shows that $\chi$ satisfies \begin{equation}\label{chi_thm_1} \frac{\chi'}{\sqrt{1+(\chi')^2}} = \operatorname{sgn}(\jump{\gamma}) \sin(\Xi^{-1}(x)) \text{ and } \frac{1}{\sqrt{1+(\chi')^2}} = \cos(\Xi^{-1}(x)) = C(g,\sigma,\abs{\jump{\gamma}},\ell) - \frac{g}{2\sigma} \chi^2.
\end{equation} Upon evaluating the first equation in \eqref{chi_thm_1} at $x = \pm \ell$ we find that \begin{equation} \frac{\chi'}{ \sqrt{1+(\chi')^2} }(\pm \ell) = \operatorname{sgn}(\jump{\gamma}) \sin(\arcsin(\pm \abs{\jump{\gamma}}/\sigma)) = \pm \frac{\jump{\gamma}}{\sigma}. \end{equation} The second equation in \eqref{chi_thm_1} shows that \begin{equation} g \chi \chi' + \sigma\left(\frac{1}{\sqrt{1+(\chi')^2}} \right)' =0 \text{ and hence } g \chi - \sigma \left(\frac{\chi'}{\sqrt{1+(\chi')^2}} \right)'=0. \end{equation} Uniqueness follows from Theorem \ref{eq_surf_unique}. \end{proof} Theorem \ref{chi_thm} only constructs a special equilibrium capillary surface for which the pressure vanishes. However, we can use it to recover arbitrary solutions. \begin{thm}\label{eq_surf_thm} Suppose that $\jump{\gamma}/\sigma \in (-1,1)$. Let $\chi: [-\ell,\ell] \to \mathbb{R}$ be the function given in Theorem \ref{chi_thm}, and set \begin{equation} M_{min} = \int_{-\ell}^\ell \left( \chi(x) - \min_{(-\ell,\ell)} \chi\right) dx \ge 0. \end{equation} Then for each $M_{top} > M_{min}$ there exists a unique, smooth, even function $\zeta_0 :[-\ell,\ell] \to (0,\infty)$ and a constant $P_0 \in \mathbb{R}$ such that \begin{equation} \begin{cases} g \zeta_0 - \sigma \mathcal{H}(\zeta_0) = P_0 & \text{in }(-\ell,\ell) \\ \frac{\zeta_0'}{ \sqrt{1+(\zeta_0')^2} }(\pm \ell) = \pm \frac{\jump{\gamma}}{\sigma}. \end{cases} \end{equation} Moreover, \begin{equation} M_{top} = \int_{-\ell}^\ell \zeta_0(x) dx \text{ and } P_0 = \frac{g M_{top} -2 \jump{\gamma}}{2\ell}. \end{equation} \end{thm} \begin{proof} We simply set $\zeta_0 = \chi - \min_{(-\ell,\ell)} \chi + h$ for $h \in (0,\infty)$ determined by $h = (M_{top}- M_{min})/(2\ell).$ The stated results then follow from Theorem \ref{chi_thm} and simple calculations. \end{proof} \subsection{Variational characterization} Although we have not used the calculus of variations to construct $\zeta_0$ we can still show that $\zeta_0$ satisfies a minimization principle. \begin{thm}\label{surf_min} Let $\mathscr{I}$ be the energy functional defined by \eqref{zeta0_energy}. Let $\zeta_0$, $M_{top}$, and $P_0$ be as in Theorem \ref{eq_surf_thm}. If $\eta \in W^{1,1}((-\ell,\ell))$ is such that $ \int_{-\ell}^\ell \eta = M_{top}$, then $\mathscr{I}(\zeta_0) \le \mathscr{I}(\eta)$. \end{thm} \begin{proof} Let $\psi \in W^{1,1}((-\ell,\ell)) $ be such that $\int_{-\ell}^\ell \psi =0$. Then \begin{equation} \int_{-\ell}^\ell (\zeta_0 + t \psi) = M_{top} \text{ for all }t \in \mathbb{R}. \end{equation} We then compute \begin{multline} \frac{d}{dt} \mathscr{I}(\zeta_0 + t \psi) = \int_{-\ell}^\ell g \zeta_0 \psi + \sigma \frac{\zeta_0' \psi'}{\sqrt{1+(\zeta_0')^2}} -\jump{\gamma}(\psi(\ell) + \psi(-\ell) ) \\ + \sigma \int_{-\ell}^\ell \left(\frac{\zeta_0' + t \psi'}{\sqrt{1+\abs{\zeta_0' + t \psi'}^2}} - \frac{\zeta_0' }{\sqrt{1+(\zeta_0')^2}} \right) \frac{(\zeta_0' + t \psi' - \zeta_0')}{t}.
\end{multline} The ODE satisfied by $\zeta_0$ allows us to integrate by parts to deduce that \begin{equation} \int_{-\ell}^\ell g \zeta_0 \psi + \sigma \frac{\zeta_0' \psi'}{\sqrt{1+(\zeta_0')^2}} -\jump{\gamma}(\psi(\ell) + \psi(-\ell) ) = \int_{-\ell}^\ell P_0 \psi =0. \end{equation} On the other hand, \eqref{eq_surf_unique_pos} tells us that \begin{equation} \sigma \int_{-\ell}^\ell \left(\frac{\zeta_0' + t \psi'}{\sqrt{1+\abs{\zeta_0' + t \psi'}^2}} - \frac{\zeta_0' }{\sqrt{1+(\zeta_0')^2}} \right) \frac{(\zeta_0' + t \psi' - \zeta_0')}{t} \ge 0 \text{ for all }t \in \mathbb{R}. \end{equation} Thus \begin{equation} \frac{d}{dt} \mathscr{I}(\zeta_0 + t \psi) \ge 0 \text{ for all }t \in \mathbb{R}, \end{equation} which in particular means that \begin{equation}\label{surf_min_1} \mathscr{I}(\zeta_0 + \psi) = \mathscr{I}(\zeta_0) + \int_0^1 \frac{d}{dt} \mathscr{I}(\zeta_0 + t \psi) dt \ge \mathscr{I}(\zeta_0). \end{equation} Now, if $\eta \in W^{1,1}((-\ell,\ell))$ is such that $ \int_{-\ell}^\ell \eta = M_{top}$, then we may apply \eqref{surf_min_1} to $\psi = \eta - \zeta_0$ to deduce that \begin{equation} \mathscr{I}(\eta) = \mathscr{I}(\zeta_0 +\psi) \ge \mathscr{I}(\zeta_0). \end{equation} \end{proof} \textbf{Acknowledgments:} We would like to thank Weinan E for bringing this problem to our attention and for fruitful discussions. We would also like to extend our thanks to the Beijing International Center for Mathematical Research for hosting us. \end{document}
\begin{document} \section{Introduction} Clustering aims to partition data $y_1,\ldots, y_n$ into disjoint groups. There is a large literature ranging from various algorithms such as K-means and DBSCAN \citep{macqueen1967classification,10.5555/3001460.3001507,frey2007clustering} to mixture model-based approaches [reviewed by \cite{fraley2002model}]. In the Bayesian community, model-based approaches are especially popular. To roughly summarize the idea, we view each $y_i$ as generated from a distribution $\mathcal K(\cdot\mid \theta_i)$, where $(\theta_1,\ldots,\theta_n)$ are drawn from a discrete distribution $ \sum_{k=1}^K w_k \delta_{\theta^*_k}(\cdot)$, with $w_k$ as the probability weight, and $\delta_{\theta^*_k}$ as a point mass at $\theta^*_k$. With prior distributions, we could estimate all the unknown parameters ($\theta^*_k$'s, $w_k$'s, and $K$) from the posterior. Model-based clustering has two important advantages. First, it allows important uncertainty quantification such as the probability for the cluster assignment $c_i$, $\text{Pr}(c_i=k\mid y_i)$, as a probabilistic estimate that $y_i$ comes from the $k$th cluster ($c_i=k \Leftrightarrow \theta_i =\theta^*_k$). Unlike commonly seen asymptotic results in statistical estimation, the clustering uncertainty does not always vanish even as $n\to \infty$. For example, in a two-component Gaussian mixture model with equal covariance, for a point $y_i$ at nearly equal distances to the two cluster centers, we would have both $\text{Pr}(c_i=1\mid y_i)$ and $\text{Pr}(c_i=2\mid y_i)$ close to $50\%$ even as $n\to \infty$. For a recent discussion on this topic as well as how to quantify the partition uncertainty, see \cite{wade2018bayesian} and the references therein. Second, model-based clustering can be easily extended to handle more complicated modeling tasks. Specifically, since there is a probabilistic process associated with the clustering, it is straightforward to modify it to include useful dependency structures. We list a few examples from a rich literature: \cite{ng2006mixture} used a mixture model with random effects to cluster correlated gene-expression data, \cite{muller2010random,park2010bayesian,ren2011logistic} allowed the partition to vary according to some covariates, and \cite{guha2016nonparametric} simultaneously clustered the predictors and used them in high-dimensional regression. On the other hand, model-based clustering has its limitations. Primarily, one needs to carefully specify the density/mass function $\mathcal K$; otherwise, it will lead to unwanted results and difficult interpretation. For example, \cite{coretto2016robust} demonstrated the sensitivity of the Gaussian mixture model to non-Gaussian contaminants, while \cite{miller2018robust} and \cite{cai2021finite} showed that when the distribution family of $\mathcal K$ is misspecified, the number of clusters would be severely overestimated. It is natural to think of using a more flexible parameterization for $\mathcal K$, in order to mitigate the risk of model misspecification. This has motivated many interesting works, such as modeling $\mathcal K$ via skewed distributions \citep{fruhwirth2010bayesian,lee2016finite}, unimodal distributions \citep{rodriguez2014univariate}, copulas \citep{kosmidis2016model}, and mixtures of mixtures \citep{malsiner2017identifying}, among others. Nevertheless, as the flexibility of $\mathcal K$ increases, the modeling and computational burdens also increase dramatically.
In parallel to the above advancements in model-based clustering, spectral clustering has become very popular in machine learning and statistics. \cite{von2007tutorial} provided a useful tutorial on the algorithms and a review of recent works. For clustering point estimation, spectral clustering has shown good empirical performance in separating non-Gaussian and/or manifold data, without the need to directly specify the distribution for each cluster. Instead, one calculates a matrix of similarity scores between each pair of data points, then uses a simple algorithm to find a partition that approximately minimizes the total loss of similarity scores across clusters (adjusted with respect to cluster sizes). This point estimate is found to be not very sensitive to the choice of similarity score, and empirical solutions have been proposed for tuning the similarity and choosing the number of clusters \citep{zelnik2005self,shi2009data}. There is a rapidly growing literature of frequentist methods on further improving the point estimate [\cite{chi2007evolutionary,rohe2011spectral,NIPS2011_31839b03,lei2015consistency,han2021eigen,lei2022bias}; among others], although, in this article, we focus on the Bayesian perspective and aim to characterize the probability distribution. Due to its algorithmic nature, spectral clustering cannot be directly used in model-based extensions, nor can it produce uncertainty quantification. This has motivated a large Bayesian literature. There have been several works trying to quantify the uncertainty around the spectral clustering point estimate. For example, since the spectral clustering algorithm can be used to estimate the community memberships in a stochastic block model, one could transform the data into a similarity matrix, then treat it as if generated from a Bayesian stochastic block model \citep{snijders1997estimation,nowicki2001estimation,mcdaid2013improved,geng2019probabilistic}. Similarly, one could take the Laplacian matrix (a transform of the similarity used in spectral clustering) or its spectral decomposition, and model it in a probabilistic framework \citep{socher2011spectral,duan2019spiked}. Broadly speaking, we can view these works as following the recent trend of robust Bayesian methodology, in conditioning the parameter of interest (clustering) on an insufficient statistic (pairwise summary statistics) of the data. See \cite{lewis2021bayesian} for recent discussions. Pertaining to Bayesian robust clustering, one gains model robustness by avoiding any parametric assumption on the within-cluster distribution $\mathcal K(\cdot \mid \theta^*_k)$; instead, one models the pairwise information, which often has an arguably simple distribution. Recent works include the distance-based P\'olya urn process \citep{blei2011distance,socher2011spectral}, the Dirichlet process mixture model on Laplacian eigenmaps \citep{banerjee2015bayesian}, Bayesian distance clustering \citep{duan2021dist}, and the generalized Bayes extension of the product partition model \citep{rigon2020generalized}. This article follows this trend. Instead of modeling the $y_i$'s as conditionally independent (or jointly dependent) draws from a certain within-cluster distribution $\mathcal K(\cdot \mid \theta_k^*)$, we choose to model $y_i$ as dependent on another point $y_j$ that is close by, provided $y_i$ and $y_j$ are from the same cluster. This leads to a Markov graphical model based on a spanning forest, a graph consisting of multiple disjoint spanning trees (each tree being a connected subgraph without cycles).
The spanning forest itself is not new to statistics. There has been a large literature on using spanning trees and forests for graph estimation, such as \cite{meila2000learning,meilua2006tractable,edwards2010selecting,byrne2015structural,duan2021bayesian,luo2021bayesian}. Nevertheless, a key difference between graph estimation and graph-based clustering is that --- the former aims to recover both the node partition and the edges characterizing dependencies, while the latter only focuses on estimating the node partition alone (equivalent to clustering). Therefore, a distinction of our study is that we will treat the edges as a nuisance parameter/latent variable, while we will characterize the node partition in the marginal distribution. Importantly, we formally show that by marginalizing the randomness of edges, the point estimate on the node partition is provably close to the one from the normalized spectral clustering algorithm. As the result, the spanning forest model can serve as the probabilistic model for the spectral clustering algorithm --- this relationship is analogous to the one between the Gaussian mixture model and the K-means algorithm \citep{macqueen1967classification}. Further, we show that treating the spanning forest as random, as opposed to a fixed parameter (that is unknown), leads to much less sensitivity in clustering performance, compared to cutting the minimum spanning tree algorithm \citep{gower1969minimum}. On the distribution specification on the node and edges, we take a Bayesian non-parametric approach by considering the forest model as realized from a ``forest process'' --- each cluster is initiated with a point from a root distribution, then gradually grown with new points from a leaf distribution. We characterize the key differences in the partition distribution between the forest and classic P\'olya urn processes. This difference also reveals that extra care should be exerted during model specification when using graphical models for clustering. Lastly, by establishing the probabilistic model counterpart for spectral clustering, we show how such models can be easily extended to incorporate other dependency structures. We demonstrate several extensions, including a multi-subject clustering of the brain networks, and a high-dimensional clustering of photo images. \vspace*{-1cm} \section{Method} \vspace*{-1cm} \subsection{Background on Spectral Clustering Algorithms} \vspace*{-0.5cm} We first provide a brief review of spectral clustering algorithms. For data $y_1,\ldots, y_n$, let $A_{i,j}\ge 0$ be a similarity score between $y_i$ and $y_j$, and denote the degree $D_{i,i}= \sum_{j\neq i} A_{i,j}$. To partition the data index $(1,\ldots, n)$ into $K$ sets, $\mathcal V= (V_1,\ldots, V_K)$, we want to solve the following problem: \bel\label{eq:normalized_graph_cut} \min_{\mathcal V} \sum_{k=1}^K \frac{\sum_{i\in V_k, j \not \in V_k} A_{i,j}}{ \sum_{i\in V_k} D_{i,i} }. \eel This is known as the minimum normalized cut loss. The numerator above represents the across-cluster similarity due to cutting $V_k$ off from the others; and the denominator prevents trivial solutions of forming tiny clusters with small $ \sum_{i\in V_k} D_{i,i}$. This optimization problem is a combinatorial problem, hence has motivated approximate solutions such as spectral clustering. 
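To make the objective in \eqref{eq:normalized_graph_cut} concrete, the normalized cut loss can be evaluated directly for any candidate partition. The following Python sketch is ours (not part of the paper's implementation); the Gaussian similarity and the toy data are illustrative assumptions only.
\begin{verbatim}
import numpy as np

def normalized_cut_loss(A, labels):
    """Normalized cut loss: sum_k cut(V_k, V_k^c) / vol(V_k)."""
    D = A.sum(axis=1)                      # degrees D_{i,i} = sum_{j != i} A_{i,j}
    loss = 0.0
    for k in np.unique(labels):
        in_k = (labels == k)
        cut = A[np.ix_(in_k, ~in_k)].sum() # similarity cut off from V_k
        vol = D[in_k].sum()                # sum of degrees within V_k
        loss += cut / vol
    return loss

# toy illustration with a Gaussian similarity (an assumption, not a prescription)
rng = np.random.default_rng(0)
y = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
d2 = ((y[:, None, :] - y[None, :, :]) ** 2).sum(-1)
A = np.exp(-d2 / 2.0)
np.fill_diagonal(A, 0.0)
print(normalized_cut_loss(A, labels=np.repeat([0, 1], 50)))
\end{verbatim}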
To start, using the Laplacian matrix $L=D-A$ with $D$ the diagonal matrix of the $D_{i,i}$'s, and the normalized Laplacian $N=D^{-1/2} L D^{-1/2}$, we can equivalently solve the above problem via: \be \min_{\mathcal V} \text{tr} (Z'_{\mathcal V} N Z_{\mathcal V}), \ee where $Z_{{\mathcal V}: i,k} = 1(i\in V_k) \sqrt{D_{i,i}} / \sqrt{\sum_{i'\in V_k}D_{i',i'}}$. It is not hard to verify that $Z'_{\mathcal V}Z_{\mathcal V}=I_K$. We can obtain a relaxed minimizer over $\{Z: Z'Z=I_K\}$ by simply taking $\hat Z$ as the bottom $K$ eigenvectors of $N$ (with the minimum loss equal to the sum of the smallest $K$ eigenvalues). Afterward, we cluster the rows of $\hat Z$ into $K$ groups (using algorithms such as K-means), hence producing an approximate solution to \eqref{eq:normalized_graph_cut}. To clarify, there is more than one version of the spectral clustering algorithm. An alternative to \eqref{eq:normalized_graph_cut} is the ``minimum ratio cut'', which replaces the denominator $\sum_{i\in V_k} D_{i,i}$ by the cluster size $|V_k|$. Similarly, a continuous-relaxation approximation can be obtained by following the same procedure as above, except that one clusters the eigenvectors of the unnormalized $L$. Details comparing those two versions can be found in \cite{von2007tutorial}. In this article, we focus on the version based on \eqref{eq:normalized_graph_cut} and the normalized Laplacian matrix $N$, commonly referred to as ``normalized spectral clustering''. \vspace*{-0.5cm} \subsection{Probabilistic Model via Bayesian Spanning Forest} \vspace*{-0.5cm} The next question is whether there is a partition-based generative model for $y$ whose maximum likelihood estimate (or posterior mode, in the Bayesian framework) is almost the same as the point estimate from normalized spectral clustering. We found such a near-equivalence in the spanning forest model. A spanning forest model is a special Bayesian network that describes the conditional dependencies among $y_1,\ldots, y_n$. Given a partition $\mathcal V=(V_1,\ldots, V_K)$ of the data index $(1,\ldots,n)$, consider a forest graph $\mathcal F_{\mathcal V}= (T_1,\ldots, T_K)$, with each $T_k=(V_k, E_k)$ a component tree (a connected subgraph without cycles), $V_k$ the set of nodes and $E_k$ the set of edges among $V_k$. Using $\mathcal F_{\mathcal V}$ and a set of root nodes $\mathcal R_{\mathcal V}=(1^*,\ldots, K^*)$ with $k^*\in V_k$, we can form a graphical model with a conditional likelihood given the forest: \bel\label{eq:dag_likelihood} \mathcal L(y ; \mathcal V, \mathcal F_{\mathcal V}, \mathcal R_{\mathcal V}, \theta) = \prod_{k=1}^{K} \bigg [ r(y_{k^*} ; \theta) \prod_{(i,j) \in T_k }f(y_i \mid y_j; \theta) \bigg ], \eel where we refer to $r( \cdot ; \theta)$ as a ``root'' distribution and $f(\cdot\mid y_j;\theta)$ as a ``leaf'' distribution; $\theta$ denotes the other parameters; and we use the simplified notation $(i,j)\in G$ to mean that $(i,j)$ is an edge of the graph $G$. \begin{figure} \caption{Three examples of clusters that can be represented by a spanning forest.}\label{fig:illustration} \end{figure} \vspace*{-1cm} \noindent Figure \ref{fig:illustration} illustrates the high flexibility of a spanning forest in representing clusters. It shows the sampled $\mathcal F$ based on three clustering benchmark datasets. Note that some clusters are not elliptical or convex in shape. Rather, each cluster can be imagined as if it were formed by connecting a point to another nearby.
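As a minimal sketch (ours, not the authors' implementation), the conditional likelihood \eqref{eq:dag_likelihood} can be evaluated for a given partition, forest, and set of roots; the root and leaf densities are passed in as generic log-density functions, so any valid choices of $r$ and $f$ can be plugged in.
\begin{verbatim}
def forest_log_likelihood(y, trees, roots, log_r, log_f):
    """log L(y; V, F_V, R_V, theta) from the displayed equation above.

    trees : list of (nodes, edges) pairs, one per component tree.
    roots : list of root indices, one per component tree.
    log_r(y_i) and log_f(y_i, y_j) are root and leaf log-densities.
    """
    total = 0.0
    for (nodes, edges), root in zip(trees, roots):
        total += log_r(y[root])                           # one root term per tree
        total += sum(log_f(y[i], y[j]) for (i, j) in edges)
    return total
\end{verbatim}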
\vspace*{-0.2cm} \begin{remark} To clarify, point estimation of a spanning forest (as a fixed but unknown graph) has been studied \citep{gower1969minimum}. However, a distinction here is that we consider $\mathcal V$ as the parameter of interest, while the edges and roots ($\mathcal F_{\mathcal V},\mathcal R_{\mathcal V}$) are latent variables. The performance differences are shown in the Supplementary Materials S4.6. \end{remark} The stochastic view of $(\mathcal F_\mathcal V,\mathcal R_\mathcal V)$ is important, as it allows us to incorporate the uncertainty of the edges and avoid the sensitivity of a point graph estimate. Equivalently, our clustering model is based on the marginal likelihood that varies with the node partition $\mathcal V$: \vspace*{-0.5cm} \bel\label{eq:marginal_lik} \mathcal L(y ; \mathcal V, \theta) =\sum_{\mathcal F_{\mathcal V}, \mathcal R_{\mathcal V}}\mathcal L(y ; \mathcal V, \mathcal F_{\mathcal V}, \mathcal R_{\mathcal V}, \theta)\Pi(\mathcal F_\mathcal V, \mathcal R_{\mathcal V}\mid \mathcal V), \eel \vspace*{-0.3cm} \noindent where $\Pi(\mathcal F_\mathcal V, \mathcal R_{\mathcal V}\mid \mathcal V)$ is the latent variable distribution that we will specify in the next section. We can quantify the marginal connecting probability for each potential edge $(i,j)$: \vspace*{-0.5cm} \bel\label{eq:marginal_connecting} M_{i,j}:=\text{Pr}[\mathcal F_\mathcal V \ni (i,j)] \propto \sum_{\mathcal V}\sum_{\mathcal F_{\mathcal V}, \mathcal R_{\mathcal V}} 1[(i,j)\in \mathcal F_\mathcal V] \mathcal L(y ; \mathcal V, \mathcal F_{\mathcal V}, \mathcal R_{\mathcal V}, \theta)\Pi(\mathcal F_\mathcal V, \mathcal R_{\mathcal V}\mid \mathcal V). \eel Similar to the normalized graph cut, there is no closed-form solution for directly maximizing \eqref{eq:marginal_lik}. However, a closed form does exist for \eqref{eq:marginal_connecting} (see Section 4). Therefore, an approximate maximizer of \eqref{eq:marginal_lik}, $\hat {\mathcal V}$, can be obtained by computing the matrix $M$ and searching for $K$ diagonal blocks (after row and column index permutation) that contain the highest total values of the $M_{i,j}$'s. Specifically, we can extract the top leading eigenvectors of $M$ and cluster the rows into $K$ groups. \begin{figure} \caption{Comparing the eigenvectors of the marginal connecting probability matrix $M$ and those of the normalized Laplacian $N$.}\label{fig:spectral_equiv} \end{figure} \vspace*{-0.5cm} This approximate marginal likelihood maximizer produces almost the same estimate as normalized spectral clustering does, because the two sets of eigenvectors are almost the same. Further, it is important to clarify that such closeness does not depend on how the data are really generated. To provide some numerical evidence, for simplicity, we generate $y_i$ from a simple three-component Gaussian mixture in $\mathbb{R}^2$ with means $(0,0)$, $(2,2)$, $(4,4)$ and all variances equal to $I_2$. Figure \ref{fig:spectral_equiv} compares the eigenvectors of the matrix $M$ and of the normalized Laplacian $N$ (which uses $f$ and $r$ to specify $A$, with details provided in Section 4). Clearly, the two are almost identical in value. Due to this connection, the clustering estimate from spectral clustering can be viewed as an approximation to $\hat{\mathcal V}$ in \eqref{eq:marginal_lik}. We now fully specify the Bayesian forest model. For simplicity, we focus on continuous $y_i \in \mathbb{R}^p$.
For ease of computation, we recommend choosing $f$ as a symmetric function, $f(y_i\mid y_j; \theta) = f(y_j\mid y_i;\theta)$, so that the likelihood is invariant to the direction of each edge, and choosing $r$ as a diffuse density, so that the likelihood is less sensitive to the choice of a node as root. In this article, we choose a Gaussian density for $f$ and a Cauchy density for $r$: \bel\label{eq:choice_density} & f(y_i \mid y_j;\theta) = {(2\pi\sigma_{i,j})^{-p/2}} \exp \left \{ - \frac {\|y_i-y_j\|_2^2}{2\sigma_{i,j}} \right\},\\ &r(y_i; \theta) = \frac{\Gamma[(1+p)/2]}{\gamma^p \pi^{(1+p)/2}} \frac{1}{ (1+ \|y_{i}-\mu\|_2^2/\gamma^2)^{(1+p)/2}}, \eel where $\sigma_{i,j} >0$ and $\gamma>0$ are scale parameters. As the magnitudes of distances between neighboring points may differ significantly from cluster to cluster, we use a local parameterization $\sigma_{i,j} =\tilde \sigma_i \tilde \sigma_j$, and will regularize $(\tilde \sigma_1,\ldots, \tilde \sigma_n)$ via a hyper-prior. \begin{remark} In \eqref{eq:choice_density}, we effectively use Euclidean distances $\|y_i-y_j\|_2$. We focus on the Euclidean distance in the main text, for simplicity of presentation and to allow a complete specification of priors. One can replace the Euclidean distance with others, such as the Mahalanobis distance and the geodesic distance. We present a case of high-dimensional clustering based on the geodesic distance on the unit sphere in the Supplementary Materials S1.1. \end{remark} \vspace*{-1cm} \subsection{Forest Process and Product Partition Prior} \vspace*{-0.5cm} To simplify notation as well as to facilitate computation, we now introduce an auxiliary node $0$ that connects to all roots $(1^*,\ldots, K^*)$. As a result, the model can be equivalently represented by a spanning tree rooted at $0$: \be & \mathcal T=(V_{\mathcal T}, E_{\mathcal T}), \\ & V_{\mathcal T}=\{0\} \cup V_1 \cup \ldots \cup V_K, \; E_{\mathcal T}= \{ (0,1^*),\ldots, (0,K^*)\} \cup E_1\cup\ldots \cup E_K. \ee In this section, we focus on the distribution specification for $\mathcal T$. The distribution, denoted by $\Pi(\mathcal T)$, can be factorized according to the following hierarchy: picking the number of clusters $K$, partitioning the nodes into $(V_1,\ldots, V_K)$, forming the edges $E_k$, and picking one root $k^*$ for each $V_k$. To be clear on the nomenclature, we refer to $\Pi(\mathcal F_{\mathcal V}, \mathcal R_{\mathcal V} \mid \mathcal V)$ as the ``latent variable distribution'' and to $\Pi_0(\mathcal V)$ as the ``partition prior''. \bel\label{eq:hierarchical} \Pi(\mathcal T)= \underbrace{ \bigg\{ \Pi_0(K)\Pi_0(V_1,\ldots, V_K \mid K) \bigg\}}_{\Pi_0(\mathcal V) } \underbrace{\prod_{k=1}^K \bigg \{\Pi(E_k\mid V_k) \Pi( k^* \mid E_k, V_k) \bigg\}}_{ \Pi(\mathcal F_{\mathcal V}, \mathcal R_{\mathcal V} \mid \mathcal V)}. \eel \begin{remark} In the Bayesian non-parametric literature, $\Pi_0(K)\Pi_0(V_1,\ldots, V_K \mid K)$ is known as the partition probability function, which plays the key role in controlling cluster sizes and the cluster number in model-based clustering. However, when it comes to graphical model-based clustering (such as our forest model), it is important to note the difference --- for each partition $V_k$, there is an additional probability $\Pi(E_k, k^* \mid V_k)$ due to the multiplicity of all possible subgraphs formed between the nodes in $V_k$. \end{remark} For simplicity, we will use a discrete uniform distribution for $\Pi(E_k, k^* \mid V_k)$.
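For concreteness, the leaf and root log-densities in \eqref{eq:choice_density}, together with the local parameterization $\sigma_{i,j}=\tilde\sigma_i\tilde\sigma_j$, can be transcribed directly; the sketch below is ours and simply restates the two formulas (parameter values are placeholders to be supplied).
\begin{verbatim}
import numpy as np
from math import lgamma, log, pi

def log_leaf(y_i, y_j, sig_i, sig_j):
    """log f(y_i | y_j): Gaussian with variance sigma_{i,j} = sig_i * sig_j."""
    p = len(y_i)
    sig = sig_i * sig_j
    return -0.5 * p * log(2 * pi * sig) - np.sum((y_i - y_j) ** 2) / (2 * sig)

def log_root(y_i, mu, gamma):
    """log r(y_i): multivariate Cauchy with location mu and scale gamma."""
    p = len(y_i)
    const = lgamma((1 + p) / 2) - p * log(gamma) - (1 + p) / 2 * log(pi)
    return const - (1 + p) / 2 * np.log(1 + np.sum((y_i - mu) ** 2) / gamma ** 2)
\end{verbatim}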
Since there are $n_k^{(n_k-2)_+}$ possible spanning trees on $n_k$ nodes [$(x)_+=x$ if $x>0$, otherwise $0$] and $n_k$ possible choices of root, we have $\Pi(E_k, k^* \mid V_k) = n_k^{-(n_k-1)}$. We now discuss two different ways to complete the distribution specification. We first take a ``ground-up'' approach by viewing $\mathcal T$ as arising from a stochastic process in which the node number $n$ could grow indefinitely. Starting from the first edge $e_1=(0,1)$, we sequentially draw new edges and add them to $\mathcal T$, according to \bel\label{eq:fp} & e_i \mid e_1,\ldots, e_{i-1} \sim \sum_{j=1}^{i-1} \pi^{[i]}_j \delta_{(j,i)}(\cdot) + \pi^{[i]}_i \delta_{(0,i)}(\cdot),\\ & y_i \mid (j,i) \sim 1(j \ge 1) f(\cdot \mid y_j)+ 1(j=0) r(\cdot), \eel with some probability vector $(\pi^{[i]}_1,\ldots, \pi^{[i]}_i)$ that adds up to one. We refer to \eqref{eq:fp} as a forest process. The forest process is a generalization of the P\'olya urn process \citep{blackwell1973ferguson}. For the latter, $e_i=(j,i)$ would make node $i$ take the same value as node $j$, $y_i =y_j$ [although in model-based clustering, one would use the notation $\theta_i =\theta_j$, and $y_i\sim \mathcal K(\cdot \mid \theta_i)$]; $e_i=(0,i)$ would make node $i$ draw a new value for $y_i$ from the base distribution. Due to this relationship, we can borrow popular parameterizations for $\pi^{[i]}_j$ from the urn process literature. For example, we can use the Chinese restaurant process parameterization $\pi^{[i]}_j= 1/(i-1+\alpha)$ for $j=1,\ldots, (i-1)$, and $\pi^{[i]}_i= \alpha/(i-1+\alpha)$ with some chosen $\alpha>0$. After marginalizing over the order of $i$ and the partition index [see \cite{miller2019elementary} for a simplified proof of the partition function], we obtain: \bel\label{eq:crp} \Pi(\mathcal T)=\frac{\alpha^{K} \Gamma(\alpha)}{\Gamma(\alpha+n)} \prod_{k=1}^K\Gamma(n_k) n_k^{-(n_k-1)}. \eel Compared to the partition probability prior in the Chinese restaurant process, we have an additional $n_k^{-(n_k-1)}$ term that corresponds to the conditional prior weight of each possible $(k^*,E_k)$ given a partition $V_k$. To help understand the effect of this additional term on the posterior, we can imagine two extreme possibilities in the conditional likelihood given a $V_k$. If the conditional $\mathcal L(y_i: i\in V_k \mid k^*,E_k)$ is skewed toward one particular choice of tree $(\hat k^*,\hat E_k)$ [that is, $\mathcal L(y_i: i\in V_k \mid k^*,E_k)$ is large when $(k^*,E_k)= (\hat k^*,\hat E_k)$, but is close to zero for other values of $(k^*,E_k)$], then $n_k^{-(n_k-1)}$ acts as a penalty for a lack of diversity in trees. On the other hand, if $\mathcal L(y_i: i\in V_k \mid k^*,E_k)$ is equal for all possible $(k^*,E_k)$'s, then we can simply marginalize over $(k^*, E_k)$ and not be subject to this penalty [since $\sum_{(k^*, E_k)} n_k^{-(n_k-1)}=1$]. Therefore, we can form an intuition by interpolating those two extremes: if a set of data points (of size $n_k$) is ``well-knit'', in the sense that the points can be connected via many possible spanning trees (each with a high conditional likelihood), then it would have a higher posterior probability of being clustered together, compared to some other points (of the same size $n_k$) that have only a few trees with high conditional likelihood. While the ``ground-up'' construction is useful for understanding the difference from the classic urn process, the distribution \eqref{eq:crp} itself is not very convenient for posterior computation.
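To illustrate the generative view in \eqref{eq:fp}, the following sketch (ours) simulates a forest process with the Chinese restaurant parameterization of $\pi^{[i]}_j$; the Gaussian leaf draw, the independent-Cauchy stand-in for the multivariate Cauchy root draw, and the values of $\alpha$, the scales, and the dimension are illustrative assumptions.
\begin{verbatim}
import numpy as np

def simulate_forest_process(n, alpha=1.0, p=2, gamma=3.0, sigma=0.1, seed=1):
    """Draw (edges, y) from the forest process with CRP edge weights."""
    rng = np.random.default_rng(seed)
    y = np.zeros((n, p))
    edges = []                          # edges (j, i); j = 0 denotes the root node
    for i in range(1, n + 1):           # nodes are labeled 1..n, as in the text
        if i == 1:
            j = 0
        else:
            # pi_j = 1/(i-1+alpha) for existing nodes, pi_i = alpha/(i-1+alpha)
            probs = np.full(i, 1.0 / (i - 1 + alpha))
            probs[-1] = alpha / (i - 1 + alpha)
            pick = rng.choice(np.arange(1, i + 1), p=probs)
            j = 0 if pick == i else int(pick)
        edges.append((j, i))
        if j == 0:                      # new cluster: draw from the (stand-in) root
            y[i - 1] = gamma * rng.standard_cauchy(p)
        else:                           # grow cluster: Gaussian leaf around y_j
            y[i - 1] = y[j - 1] + np.sqrt(sigma) * rng.standard_normal(p)
    return edges, y

edges, y = simulate_forest_process(200)
print("number of clusters:", sum(1 for j, _ in edges if j == 0))
\end{verbatim}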
Therefore, we also explore the alternative of a ``top-down'' approach. This is based on directly assigning a product partition probability \citep{hartigan1990partition,barry1993bayesian,crowley1997product,quintana2003bayesian} as \bel\label{eq:simple_prior} & \Pi_0(V_1,\ldots, V_K \mid K) = \frac{\prod_{k=1}^K n_k^{(n_k-1)}}{ \sum_{\text{all }(V^*_1,\ldots, V^*_K) }\prod_{k=1}^K |V^*_k|^{(|V^*_k|-1)} }, \eel where the cohesion function $n_k^{(n_k-1)}$ effectively cancels out the probability for each $(k^*,E_k)$. To assign a prior for $K$, we assign a probability $$\Pi_0(K) \propto \lambda^K \sum_{\text{all }(V^*_1,\ldots, V^*_K) }\prod_{k=1}^K |V^*_k|^{(|V^*_k|-1)},$$ supported on $K\in \{1,\ldots,n\}$ with $\lambda>0$. Combined with $\Pi(E_k, k^* \mid V_k) = n_k^{-(n_k-1)}$, multiplying the terms according to \eqref{eq:hierarchical} leads to \bel\label{eq:trunc_poisson} \Pi(\mathcal T) \propto \lambda^K, \eel which is similar to a truncated geometric distribution and is easy to handle in posterior computation; we will use this prior from now on. In this article, we set $\lambda=0.5$. \begin{remark} We now discuss the exchangeability of the sequence of random variables generated from the above forest process. Exchangeability is defined as the invariance of the distribution, $\Pi( X_1=x_1,\ldots, X_n=x_n ) = \Pi( X_1= x_{\tilde\pi_1},\ldots, X_n=x_{\tilde\pi_n})$, under any permutation $(\tilde\pi_1,\ldots,\tilde\pi_n)$ \citep{diaconis1977finite}. For simplicity, we focus on the joint distribution with $\theta$ marginalized out. There are three categories of random variables associated with each node index $i$: the first drawn edge $(j,i)$ that points to a new node $i$ (whose sequence forms $\mathcal T=(\mathcal V, \{E_k, k^*\}_{k=1}^K)$), the cluster assignment of a node $c_i$ (whose sequence forms $\mathcal V$), and the data point $y_i$. It is not hard to see that, since each component tree encodes an order among $\{i: c_i=k\}$, the joint distribution of the data and the forest $\Pi(y_1,\ldots, y_n, \mathcal T)$ is not exchangeable. Nevertheless, as we marginalize out each $(E_k,k^*)$ to form the clustering likelihood $\mathcal L(y ; \mathcal V)$ as in \eqref{eq:marginal_lik}, and all priors $\Pi_0(\mathcal V)$ presented in this section only depend on the number and sizes of clusters, the joint distribution of the data and cluster labels $\Pi \{ (y_1,c_1),\ldots, (y_n,c_n)\} = \mathcal L( y; \mathcal V ) \Pi_0( \mathcal V)$ is exchangeable. Further, we could marginalize over $\mathcal V$ and see that $\Pi(y_1,\ldots, y_n)$ is exchangeable. \end{remark} \vspace*{-1cm} \subsection{Hyper-priors for the Other Parameters} \vspace*{-0.5cm} We now specify the hyper-priors for the parameters in the root and leaf densities. To avoid model sensitivity to scaling and shifting of the data, we assume that the data have been appropriately scaled and centered (for example, via standardization), so that, marginally, $\mathbb E y \approx 0$ and $\mathbb E \|y_{.,j}- \mathbb E y_{.,j}\|_2^2 \approx 1$ for $j=1,\ldots,p$. To make the root density $r(\cdot)$ close to a small constant on the support of the data, we set $\mu=0$ and $\gamma^2 \sim \text{Inverse-Gamma}(2, 1)$. For $\sigma_{i,j}$ in the leaf density $f(y_i \mid y_j; \sigma_{i,j})$, in order to likely pick an edge $(i,j)$ with $j$ a close neighbor of $i$ (that is, $(i,j)$ with small $\|y_i-y_j\|_2$), we want most of the $\sigma_{i,j}=\tilde\sigma_i\tilde\sigma_j$ to be small.
We use the following hierarchical inverse-gamma prior that shrinks each $\tilde\sigma_i$, while using a common scale hyper-parameter $\beta_\sigma$ to borrow strengths among $\tilde\sigma_i$'s, \[ \begin{aligned} & \beta_\sigma \sim \text{Exp}(\eta_\sigma), \qquad \eta_\sigma \sim \text{Inverse-Gamma}(a_\sigma, \xi_\sigma),\\ & \tilde\sigma_i \stackrel{iid}\sim \text{Inverse-Gamma}(b_\sigma, \beta_\sigma) \text{ for } i=1,\ldots,n, \end{aligned} \] where $\eta_\sigma$ is the scale parameter for the exponential. To induce a shrinkage effect {\em a priori}, we use $a_\sigma=100$ and $\xi_\sigma=1$ for a likely small $\eta_\sigma$ hence a small $\beta_\sigma$. Further, we note that the coefficient of variation $\sqrt{ \text{Var}(\tilde \sigma_i \mid \beta_\sigma)}/{\mathbb{E}(\tilde \sigma_i \mid \beta_\sigma)} = 1/\sqrt{b_\sigma-2}$; therefore, we set $b_\sigma=10$ to have most of $\tilde\sigma_i$ near $\mathbb{E}(\tilde \sigma_i \mid \beta_\sigma)= \beta_\sigma/(b_\sigma-1)$ in the prior. We use these hyper-prior settings in all the examples presented in this article. In addition, \cite{zelnik2005self} show good empirical performance in spectral clustering, based on a heuristic of setting $\tilde\sigma_{i}$ to a low order statistic of the distances to $y_i$. We develop a model-based formalization that achieves similar effects. Since the model is more involved than a simple Bayesian spanning forest model, we defer the details to the Supplementary Materials S5. \vspace*{-0.5cm} \subsection{Model-based Extensions} \vspace*{-0.5cm} Compared to algorithms, a major advantage of probabilistic models is the ease of building useful model-based extensions. We demonstrate three directions for extending the Bayesian forest model. Due to the page constraint, we defer the details and numeric results of the first two extensions in the Supplementary Materials S1.1 and S1.2. \noindent\textbf{Latent Forest Model:} First, one could use the realization of the forest process as latent variables in another model $\mathcal M$ for data $(y_1,\ldots,y_n)$, \be z_1,\ldots, z_n \sim \t{Forest Model} (\mathcal T;\theta_z),\qquad y_1,\ldots, y_n \sim \mathcal M(z_1,\ldots, z_n;\theta_y), \ee where $\theta_z$ and $\theta_y$ denote the other needed parameters. For example, for clustering high-dimensional data such as images, it is often necessary to represent each high-dimensional observation $y_i$ by a low-dimensional coordinate $z_i$ \citep{wu2014spectral,chandra2020escaping}. In the Supplementary Materials, we present a high-dimensional clustering model, using an autoregressive matrix Gaussian for $\mathcal M$ and a sparse von Mises-Fisher for the forest model. \noindent\textbf{Informative Prior--Latent Variable Distribution:} Second, in applications it is sometimes desirable to have the clustering dependent on some external information $x$, such as covariates \citep{muller2011product} or an existing partition \citep{paganin2021centered}. From a Bayesian view, this can be achieved via taking an $x$-informative distribution: \vspace*{-0.3cm} \be \mathcal T \sim \Pi(\cdot\mid x), \qquad y_1,\ldots, y_n \sim \t{Forest Model} (\mathcal T;\theta). \ee In the Supplementary Materials, we illustrate an extension with a covariate-dependent product partition model [PPMx, \cite{muller2011product}] into the distribution of $\mathcal T$. \noindent\textbf{Hierarchical Multi-view Clustering:} Third, for multi-subject data $(y^{(s)}_1,\ldots, y_n^{(s)})$ for $s=1,\ldots, S$, we want to find a clustering for every $s$. 
At the same time, we can borrow strength among subjects by letting subjects share some similar partition structure on a subset of nodes (while differing on the other nodes). This is known as multi-view clustering. On the other hand, a challenge is that a forest is a discrete object subject to combinatorial constraints, hence it would be difficult to partition the nodes freely while accommodating the tree structure. To circumvent this issue, we propose a latent coordinate-based distribution that gives a continuous representation for $\mathcal T^{(s)}$. Considering a latent $z^{(s)}_i\in \mathbb{R}^d$ for each node $i=1,\ldots,n$, we assign a joint prior--latent variable distribution for $z^{(s)}$ and $\mathcal T^{(s)}$: \bel\label{eq:np_forest_model} &\Pi [z^{(s)}, \mathcal{T}^{(s)}] \propto \\ &\lambda^{K[\mathcal{T}^{(s)}]} \bigg[\prod_{(i,j)\in \mathcal{T}^{(s)}:i\ge 1,j\ge 1} \exp(- \frac{\|z_i^{(s)}- z_j^{(s)}\|_2^2}{2\rho }) \bigg] \bigg[\prod_{i=1}^n \bigg\{\sum_{k=1}^{\tilde\kappa} v_{i,k} \exp ( - \frac{\|z_i^{(s)}- \eta^*_k\|_2^2}{2 \sigma_z^2})\bigg\} \bigg], \\ & (v_{i,1},\ldots, v_{i,\tilde\kappa}) \sim \t{Dir}(1/\tilde\kappa,\ldots, 1/\tilde\kappa) \qquad \text{ for }i=1,\ldots, n,\\ & \{ y_{1}^{(s)}, \ldots, y_{n}^{(s)}\} \sim \t{Forest Model} (\mathcal T^{{(s)}}) \qquad \text{ for }s=1,\ldots, S, \eel where $v_{i,1},\ldots, v_{i,\tilde\kappa}$ are weights that vary with $i$ and satisfy $\sum_{k=1}^{\tilde \kappa} v_{i,k}=1$, $\rho>0$, and $z^{(s)}\in \mathbb{R}^{n\times d}$ is the matrix form. Equivalently, the above assigns each node a location parameter $\eta_i^{(s)}$, drawn from a hierarchical Dirichlet distribution with shared atoms $\{\eta_1^*,\ldots,\eta_{\tilde\kappa}^*\}$ and probabilities $(v_{.,1},\ldots, v_{.,\tilde\kappa})$ \citep{doi:10.1198/016214506000000302}. Further, one could let $\eta_k^*$ vary over nodes according to some functional, using a hybrid Dirichlet distribution \citep{petrone2009hybrid}. Using a Gaussian mixture kernel on $z_i^{(s)}$, we can now separate the $z^{(s)}_i$'s into several groups that are far apart. To make the parameters identifiable and have large separations between groups, we fix the $\eta^{*}_k$'s on the $d$-dimensional integer lattice $\{0,1,2\}^d$ with $d=2$ (hence $\tilde\kappa=9$); and we use $\sigma^2_z=0.01$ and $\rho=0.001$ in this article. \begin{remark} To clarify, our goal is to induce between-subject similarity in the \underline {node partition}, not the tree structure. For example, for two subjects $s$ and $s'$, when $z_i^{(s)}$ and $z_i^{(s')}$ are both near $\eta^*_k$ for all $i\in C$, both the spanning forests $\mathcal T^{(s)}$ and $\mathcal T^{(s')}$ will likely cluster the nodes in $C$ together, even though the trees $ T^{(s)}_k$ and $ T^{(s')}_k$ associated with $V_k\supset C$ may be different. \end{remark} The posterior can be sampled efficiently using a Gibbs sampling algorithm. We provide the posterior sampling algorithm in the Supplementary Materials S1.3, and illustrate this model in Section 6 on modeling brain regions for multiple subjects. \vspace*{-0.5cm} \section{Posterior Computation} \vspace*{-0.5cm} \subsection{Gibbs Sampling Algorithm} We now describe the Markov chain Monte Carlo (MCMC) algorithm. For ease of notation, we use an $(n+1)\times (n+1)$ matrix $S$, with $S_{i,j}= \log f (y_i \mid y_j ; \theta)$, $S_{0,i}=S_{i,0}= \log r (y_i ; \theta) +\log \lambda$ (for convenience, we use $0$ to index the last row/column), $S_{i,i}=0$, and $A_{\mathcal T}$ to represent the adjacency matrix of $\mathcal T$.
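For clarity, the matrix $S$ can be assembled directly from the leaf and root log-densities; the sketch below is ours and reuses the log-density functions written after \eqref{eq:choice_density} (here, $\lambda$, $\mu$, $\gamma$, and the $\tilde\sigma_i$'s are assumed to be given).
\begin{verbatim}
import numpy as np
from math import log

def build_S(y, sigma_tilde, mu, gamma, lam):
    """(n+1) x (n+1) matrix S; as in the text, node 0 is stored last."""
    n = len(y)
    S = np.zeros((n + 1, n + 1))
    for i in range(n):
        S[n, i] = S[i, n] = log_root(y[i], mu, gamma) + log(lam)
        for j in range(i + 1, n):
            S[i, j] = S[j, i] = log_leaf(y[i], y[j],
                                         sigma_tilde[i], sigma_tilde[j])
    return S
\end{verbatim}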
We have the posterior distribution \bel\label{eq:comp_posterior} \Pi( \mathcal T, \theta \mid y) \propto \exp \big\{ \t{tr}[S(\theta) A_{\mathcal T}]/2\big\}\Pi_0(\theta). \eel Note that the above form conveniently includes the prior term for the number of clusters, $\lambda^K$, via the number of edges adjacent to node $0$. Our MCMC algorithm alternates between updating $\mathcal T$ and $\theta$, hence is a Gibbs sampling algorithm. To sample $\mathcal T$ given $\theta$, we take the random-walk covering algorithm for weighted spanning trees \citep{mosbah1999non}, an extension of the Aldous--Broder algorithm for sampling a uniform spanning tree \citep{broder1989generating,aldous1990random}. For this article to be self-contained, we describe the algorithm below. The algorithm produces a random sample $\mathcal T$ following the full conditional $\Pi(\mathcal T \mid \theta,y)$ proportional to \eqref{eq:comp_posterior}. It has an expected running time of $O(n\log n)$. Although some faster algorithms have been developed \citep{schild2018almost}, we choose to present the random-walk covering algorithm for its simplicity. \begin{algorithm} \begin{algorithmic} \State Start with $V_{\mathcal T}=\{0\}$ and $E_{\mathcal T}=\varnothing$, and set $i\leftarrow 0$. \While{$|V_{\mathcal T}|\neq n+1$} \State Take a random walk step from $i$ to $j$ with probability $\text{Pr}(j \mid i) = \frac{\exp[S_{i,j} (\theta)]}{\sum_{j':j'\neq i} \exp[S_{i,j'} (\theta)]}.$ \If {$j\not \in V_{\mathcal T}$ } \State Add $j$ to $V_{\mathcal T}$. Add $(i,j)$ to $E_{\mathcal T}$. \EndIf \State Update $i \leftarrow j$. \EndWhile \end{algorithmic} \caption{Random-walk covering algorithm for sampling the augmented tree $\mathcal T$.} \end{algorithm} We sample $\tilde \sigma_i$ using the following steps: \be & (\eta_\sigma \mid .) \sim \t{Inverse-Gamma}\big(1+ a_\sigma, \beta_\sigma + \xi_\sigma \big ),\\ & (\beta_\sigma \mid .) \sim \t{Gamma}\bigg\{ 1+ n b_\sigma, \bigg(\sum_{i=1}^n \frac{1}{\tilde\sigma_i}+\frac{1}{\eta_\sigma}\bigg)^{-1} \bigg\},\\ & (\tilde \sigma_i \mid .) \sim \t{Inverse-Gamma} \bigg [ \frac{p\sum_{j} 1\{(i,j)\in \mathcal T\} }{2} + b_\sigma , \sum_{j:(i,j)\in \mathcal T} \frac{\|y_i-y_j\|^2_2}{ 2\tilde\sigma_j } + {\beta_\sigma } \bigg]. \ee To update $\gamma$, we use the representation of the multivariate Cauchy as a scale mixture of $\text{N}(\mu, \gamma^2 u_{\gamma,i} I_p )$ over $u_{\gamma,i} \sim \text{Inverse-Gamma}(1/2,1/2)$. We can update via \be & u_{\gamma,i} \sim \text{Inverse-Gamma}\bigg( \frac{1+p}{2}, \frac{1}{2} +\frac{\|y_i-\mu\|_2^2}{2\gamma^2}\bigg),\\ & \gamma^2 \sim \text{Inverse-Gamma}\bigg(2 + \frac{Kp}{2}, \hat\sigma^2_y + \sum_{i:(0,i)\in \mathcal T}\frac{\|y_i-\mu\|_2^2}{2 u_{\gamma,i}}\bigg). \ee We run the MCMC algorithm for many iterations and discard the first half as burn-in. \begin{remark} We want to emphasize that the Aldous--Broder random-walk covering algorithm \citep{broder1989generating,aldous1990random,mosbah1999non} is an exact algorithm for sampling a spanning tree $\mathcal T$. That is, if $\theta$ were fixed, each run of this algorithm would produce an {\em independent} Monte Carlo sample $\mathcal T \sim \Pi(\mathcal T \mid \theta,y)$. Removing the auxiliary node 0 from $\mathcal T$ will produce $K$ disjoint spanning trees. This augmented graph technique is inspired by \cite{boykov2001fast}.
In our algorithm, since the scale parameters in $\theta$ are unknown, we use Markov chain Monte Carlo that updates two sets of parameters, (i) $(\theta_{[t+1]} \mid \mathcal T_{[t]})$ and (ii) $( \mathcal T_{[t+1]} \mid \theta_{[t+1]})$ from iteration $[t]$ to $[t+1]$. Therefore, rigorously speaking, there is a Markov chain dependency between $\mathcal T_{[t]}$ and $\mathcal T_{[t+1]}$ induced by $\theta_{[t+1]}$. Nevertheless, since we draw $\mathcal T$ in a block via the random-walk covering algorithm, we empirically find that $\mathcal T_{[t+1]}$ and $\mathcal T_{[t]}$ are substantially different. In the Supplementary Materials S4.4, we quantify the iteration-to-iteration graph changes, and provide diagnostics with multiple start points of $(\mathcal T_{[0]},\theta_{[0]})$. \end{remark} \vspace*{-1cm} \subsection{Posterior Point Estimate on Clustering} \vspace*{-0.5cm} In the field of Bayesian clustering, for producing point estimate on the partition, it had been a long-time practice to simply track $\text{pr}(c_i=k \mid y)$, then take the element-wise posterior mode over $k$ as the point estimate for $\hat c_i$. Nevertheless, this was shown to be sub-optimal due to that: (i) label switching issue causes unreliable estimates on $\text{pr}(c_i=k \mid y)$; (ii) the element-wise mode can be unrepresentative of the center of distribution for $(c_1,\ldots, c_n)$ \citep{wade2018bayesian}. These weaknesses have motivated new methods of obtaining point estimate of clustering, that transform an $n\times n$ pairwise co-assignment matrix $\{\text{pr}(c_i=c_j\mid y)\}_{\text{all }(i,j)}$ into an $n\times K$ assignment matrix \citep{medvedovic2002bayesian,rasmussen2008modeling,molitor2010bayesian,wade2018bayesian}. More broadly speaking, minimizing a loss function based on the posterior sample (via some estimator or algorithm) is common for producing a point estimate under some decision theory criterion. For example, the posterior mean comes as the minimizer of the squared error loss; in Bayesian factor modeling, an orthogonal Procrustes-based loss function is used for producing the posterior summary of the loading matrix from the generated MCMC samples \citep{assmann2016bayesian}. We follow this strategy. There have been many algorithms that one could use. For a recent survey, see \cite{dahl2022search}. In this article, we use a simple solution of first finding the mode of $K$ from the posterior sample, then doing a $\hat K$-rank symmetric matrix factorization on $\{\text{pr}(c_i=c_j\mid y)\}_{\text{all }(i,j)}$ and clustering into $\hat K$ groups, provided by \texttt{RcppML} package \citep{debruine2021fast}. \vspace*{-1cm} \section{Theoretical Properties} \vspace*{-0.5cm} \subsection{ Convergence of Eigenvectors \label{sec:spectral}} \vspace*{-0.5cm} We now formalize the closeness of the eigenvectors of matrices $N$ and $M$ (shown in Section 2.2), by establishing the convergence of the two sets of eigenvectors as $n$ increases. To be specific, we focus on the normalized spectral clustering algorithm using the similarity $A_{i,j}=\exp(S_{i,j})$, with $S_{i,j}= \log f (y_i \mid y_j ; \theta)$, $S_{0,i}=S_{i,0}= \log r (y_i ; \theta) +\log \lambda$. On the other hand, for the specific form, $f(y_i \mid y_j)$ can be any density satisfying $f(y_i\mid y_j, \theta) = f(y_j\mid y_i, \theta)$, $r(y_i; \theta)$ can be any density satisfying $r(y_i; \theta)>0$. 
For the associated normalized Laplacian $N$, we denote the bottom $K$ eigenvectors by $\phi_1,\ldots, \phi_K$, which correspond to the smallest $K$ eigenvalues. Let $M$ be the matrix with $M_{i,j}=\t{Pr}[\mathcal T \ni (i,j) \mid y, \theta ]$ for $i\neq j$ and $M_{i,i}=0$. Kirchhoff's matrix-tree theorem \citep{chaiken1978matrix} gives an enumeration of all $\mathcal T\in \mathbb{T}$, \bel\label{eq:matrix_tree} \sum_{\mathcal T\in \mathbb{T}} \prod_{(i,j)\in \mathcal T} \exp(S_{i,j}) = (n+1)^{-1}\prod_{h=2}^{n+1}\lambda_{(h)} (L), \eel where $L$ is the Laplacian matrix transform of the similarity matrix $A$, and $\lambda_{(h)}$ denotes the $h$th smallest eigenvalue. Differentiating the logarithm of \eqref{eq:matrix_tree} with respect to $S_{i,j}$ gives \be M_{i,j} & = \t{Pr}[ \mathcal T \ni (i,j)\mid y ] = \frac{\sum_{\mathcal T\in \mathbb{T},(i,j)\in \mathcal T} \prod_{(i',j')\in \mathcal T} \exp(S_{i',j'})}{\sum_{\mathcal T\in \mathbb{T}} \prod_{(i',j')\in \mathcal T} \exp(S_{i',j'})} = \frac{\partial \sum_{h=2}^{n+1} \log\lambda_{(h)} (L)}{\partial S_{i,j}}. \ee Let $\Psi_1, \ldots, \Psi_K$ be the top $K$ eigenvectors of $M$, associated with eigenvalues $\xi_1\ge \xi_2 \ge \ldots \ge \xi_K$, and $\xi_K > \xi_{K+1}\ge \xi_{K+2}\ge \ldots \ge \xi_{n+1}$. We compare them with the $K$ leading eigenvectors of $(-N)\in \mathbb{R}^{(n+1)\times (n+1)}$, $\phi_1,\ldots, \phi_K$. Using $\Psi_{1:K}$ and $\phi_{1:K}$ to denote the two $(n+1)\times K$ matrices, we now show that they are close to each other. \begin{theorem} There exist an orthonormal matrix $R\in\mathbb{R}^{K\times K}$ and a finite constant $\epsilon>0$ such that \be \|\Psi_{1:K}-\phi_{1:K} R \|_F \le \frac{ 40 \sqrt{K (n+1)} }{\xi_{K} -\xi_{K+1}} \max_{i,j} \left \{ (1+\epsilon)(D^{-1/2}_i-D^{-1/2}_j)^2 A_{i,j} \right\}, \ee with probability at least $1- \exp(-n).$ \end{theorem} \begin{remark} To make the right-hand side go to zero, a sufficient condition is to have all $ A_{i,j}/D_{i,i}= O(n^{-\kappa})$ with $\kappa>1/2$. We provide a detailed definition of the bound constant $\epsilon$ in the Supplementary Materials S2. To explain the intuition behind this theorem, our starting point is the close relationship between Laplacian and spanning tree models --- multiplying both sides of Equation \eqref{eq:matrix_tree} by $(n+1)^{-(n-1)}$ shows that the non-zero eigenvalue product of the graph Laplacian $L$ is proportional to the marginal probability of $n$ data points from a spanning forest mixture model. Starting from this equality, we can write the marginal inclusion probability matrix of $\mathcal T$ as a mildly perturbed form of the normalized Laplacian matrix. Intuitively, when two matrices are close, their eigenvectors will be close as well \citep{yu2015useful}. \end{remark} Therefore, under mild conditions, as $n\to \infty$, the two sets of leading eigenvectors converge. In the Supplementary Materials S4.7, we show that the convergence is very fast, with the two sets of leading eigenvectors becoming almost indistinguishable starting around $n\ge 50$. Besides the eigenvector convergence, we can examine the marginal posterior \[ \Pi(\mathcal V\mid \theta, y) \propto \Pi_0(K,V_1,\ldots,V_K) \bigg\{ \prod_{k=1}^K \bigg[\sum_{i\in V_k}r(y_i)\bigg] \bigg\}\prod_{k=1}^K \bigg\{ n_k^{-1}\prod_{h=2}^{n_k}\lambda_{(h)} (L_k) \bigg\}, \] where $L_k$ is the unnormalized Laplacian matrix associated with the matrix $\{A_{i,j}\}_{i\in V_k,j\in V_k}$.
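The closed form above can be checked numerically. The sketch below is ours and purely illustrative: it omits the auxiliary root node, uses unit scales so that $S_{i,j}\propto -\|y_i-y_j\|_2^2/2$, computes $M$ by central finite differences of $\sum_{h\ge 2}\log\lambda_{(h)}(L)$ with respect to $S_{i,j}$ rather than an analytic gradient, and compares the leading eigenvectors of $M$ and $-N$ through the cosines of their principal angles, mirroring Figure \ref{fig:spectral_equiv}.
\begin{verbatim}
import numpy as np

def log_tree_sum(S):
    """log of sum over trees of prod exp(S_ij), up to an additive constant."""
    A = np.exp(S); np.fill_diagonal(A, 0.0)
    L = np.diag(A.sum(axis=1)) - A
    ev = np.sort(np.linalg.eigvalsh(L))
    return np.sum(np.log(ev[1:]))            # drop the zero eigenvalue

def edge_inclusion_matrix(S, delta=1e-5):
    """M_ij = d/dS_ij of sum_{h>=2} log lambda_(h)(L), by central differences."""
    m = S.shape[0]; M = np.zeros_like(S)
    for i in range(m):
        for j in range(i + 1, m):
            Sp, Sm = S.copy(), S.copy()
            Sp[i, j] = Sp[j, i] = S[i, j] + delta
            Sm[i, j] = Sm[j, i] = S[i, j] - delta
            M[i, j] = M[j, i] = (log_tree_sum(Sp) - log_tree_sum(Sm)) / (2 * delta)
    return M

# toy data: three Gaussian clusters in R^2, as in Section 2.2 (seed is arbitrary)
rng = np.random.default_rng(0)
y = np.vstack([rng.normal(c, 1.0, (20, 2)) for c in (0.0, 2.0, 4.0)])
d2 = ((y[:, None, :] - y[None, :, :]) ** 2).sum(-1)
S = -d2 / 2.0
M = edge_inclusion_matrix(S)

A = np.exp(S); np.fill_diagonal(A, 0.0)
dih = 1.0 / np.sqrt(A.sum(axis=1))
N = np.eye(len(A)) - dih[:, None] * A * dih[None, :]
K = 3
Psi = np.linalg.eigh(M)[1][:, -K:]           # top-K eigenvectors of M
Phi = np.linalg.eigh(-N)[1][:, -K:]          # top-K eigenvectors of -N
print(np.linalg.svd(Psi.T @ Phi, compute_uv=False))  # cosines of principal angles
\end{verbatim}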
Imagine that if we put all indices in one partition $V_1=(1,\ldots,n)$, then $\Pi(\mathcal V\mid \theta, y)$ would be very small due to those close-to-zero eigenvalues. Applying this deduction recursively on subsets of data, it is not hard to see that a high-valued $\Pi(\mathcal V\mid \theta,y)$ would correspond to a partition, wherein each $V_k$ has $\lambda_{(h)} (L_k)$ away from $0$ for any $h\ge 2$. \vspace*{-0.5cm} \subsection{Consistent Clustering of Separable Sets} \label{sec:clustcon} \vspace*{-0.5cm} We show that clustering consistency is possible, under some separability assumptions when the data-generating distribution follows a forest process. Specifically, we establish posterior ratio consistency, as the ratio between the maximum posterior probability assigned to other possible clustering assignments to the posterior probability assigned to the true clustering assignments converges to zero almost surely under the true model \citep{cao2019posterior}. To formalize the above, we denote the true cluster label for generating $y_i$ by $c^0_i$ (subject to label permutation among clusters), and we define the enclosing region for all possible $y_i:c^0_i=k$ as $R_k^0$ for $k=1,\ldots, K_0$ for some true finite $K_0$. And we refer to $R^0=(R_1^0,\ldots, R_{K_0}^0)$ as the ``null partition''. By separability, we mean the scenario that $(R_1^0,\ldots, R_{K_0}^0)$ are disjoint and there is a lower-bounded distance between each pair of sets. As alternatives, regions $R=(R_1,\ldots, R_K)$ could be induced by $\{c_1,\ldots,c_n\}$ from the posterior estimate of $\mathcal T$. For simplicity, we assume the scale parameter in $f$ is known and all equal $\sigma_{i,j}=\sigma^{0,n}$. {\noindent \underline{Number of clusters is known.}} We first start with a simple case when we have fixed $K=K_0$. For regularities, we consider data as supported in a compact region $\mathcal{X}$, and satisfying the following assumptions: \begin{itemize} \item (A1, diminishing scale) $\sigma^{0,n}=C'(1/\log n)^{1+\iota}$ for some $\iota>0$ and $C'>0$. \item (A2, minimum separation) $\inf_{x\in R_k^0,y\in R_{k'}^0}\|x-y\|_2>M_n$, for all $k\neq k'$ with some positive constant $M_n>0$ such that $M_n^2/\sigma^{0,n}=8\tilde m_0\log(n)$ for all $(i,j)$ and is known for some constant $\tilde m_{0}>p/2+2$. \item (A3, near-flatness of root density) For any $n$, $\epsilon_1<r(y)<\epsilon_2$ for all $y\in\mathcal{X}$. \end{itemize} Under the null partition, $\Pi(\mathcal{T}|y)$ is maximized at $\mathcal T=\mathcal{T}_{\textrm{MST},R^0}$, which contains $K_0$ trees with each $T_k$ being the minimum spanning tree (denoted by subscript ``MST'') within region $R_k^0$. Similarly, for any alternative $R$, $\Pi(\mathcal{T}|y)$ is maximized at the $\mathcal{T}=\mathcal{T}_{\textrm{MST},R}$. \begin{theorem} Under (A1,A2,A3), we have ${\Pi(\mathcal{T}_{\textrm{MST},R}|y)}/{\Pi(\mathcal{T}_{\textrm{MST},R^0}|y)}\rightarrow 0$ almost surely, unless $R_{i}^0\subseteq R_{\xi(i)}$ for some permutation map $\xi(\cdot)$. \label{knownclust} \end{theorem} {\noindent\underline{Number of clusters is unknown:}} Next, we relax the condition by having a $K$ not necessarily equal to $K_0$. We show the consistency in two parts for 1)$K<K_0$, and 2) $K>K_0$ separately. In order to show posterior ratio consistency in the second part, we need some finer control on $r(y)$: \begin{itemize} \item (A3') The root density satisfies $\tilde m_1e^{-M/2\sigma^{0,n}}\leq r(y)\leq \tilde m_2 e^{-M/2\sigma^{0,n}}$ for some $\tilde m_1<\tilde m_2$. 
\end{itemize} In this assumption, we essentially assume the root distribution to be flatter for larger $n$. Then we have the following results. \begin{theorem} 1) If $K<K_0$, under the assumptions (A1,A2,A3), we have \\${\Pi(\mathcal{T}_{\textrm{MST},R}|y)}/{\Pi(\mathcal{T}_{\textrm{MST},R^0}|y)}\rightarrow 0$ almost surely. 2) If $K>K_0$, under the assumptions (A1,A2,A3'), we have ${\Pi(\mathcal{T}_{\textrm{MST},R}|y)}/{\Pi(\mathcal{T}_{\textrm{MST},R^0}|y)}\rightarrow 0$ almost surely. \label{unknownclust} \end{theorem} {The above results show posterior ratio consistency. Furthermore, when the true number of clusters is known, the ratio consistency result can be further extended to show clustering consistency, which is proved in the Supplementary Materials S3.} \vspace*{-1cm} \section{Numerical Experiments} \vspace*{-0.5cm} \subsection{Clustering Near-Manifold Data} \vspace*{-0.5cm} To illustrate the capability of uncertainty quantification, we carry out clustering tasks on near-manifold data commonly used for benchmarking clustering algorithms. \begin{figure} \caption{Uncertainty quantification in clustering data generated near three manifolds; Panels a, c, and e show posterior point estimates, Panel g shows one posterior sample, and Panels b, d, f, and h show the co-assignment probabilities $\pr(c_i=c_j\mid y)$. When data are close to the manifolds (Panels a,e), there is very little uncertainty on clustering, with low $\pr(c_i=c_j\mid y)$ between points from different clusters (Panels b,f). As data deviate more from the manifolds (Panels c,g), the uncertainty increases (Panels d,h). In Panel g, the point estimate shows a two-cluster partitioning, while there is about $20\%$ probability for a three-cluster partitioning.}\label{fig:uq_3rings} \end{figure} \vspace*{-0.5cm} In the first simulation, we start with $300$ points drawn from three rings of radii $0.2$, $1$ and $2$, with $100$ points from each ring. Then we add some Gaussian noise to each point to create a coordinate near a ring manifold. We present two experiments, one with noise from $\t{N}(0, 0.05^2 I_2)$ and one with noise from $\t{N}(0, 0.1^2 I_2)$. As shown in Figure \ref{fig:uq_3rings}, when these data are well separated (Panel a, showing the posterior point estimate), there is very little uncertainty on the clustering (Panel b), with the posterior co-assignment $\pr(c_i=c_j\mid y)$ close to zero for any two data points near different rings. As the noise increases, these data become more difficult to separate. There is a considerable amount of uncertainty for the red and blue points: these two sets of points are assigned to one cluster with a probability close to $40\%$ (Panel d). We conduct another simulation based on an arc manifold and two point clouds (Panels e-h), and find similar results. Additional experiments are described in the Supplementary Materials S4.2. \vspace*{-0.5cm} \subsection{Uncertainty Quantification for Data from Mixture Models} In the Supplementary Materials S4.1 and S4.3, we present some uncertainty quantification results for clustering data that are generated from mixture models. We compare the estimates with those from Gaussian mixture models, which can correspond to correctly or erroneously specified component distributions.
Empirically, we find that the uncertainty estimates of $\text{Pr}(c_i=c_j\mid y)$ and $\text{Pr}(K\mid y)$ from the forest model are close to the ones based on the true data-generating distribution, whereas the Gaussian mixture models suffer from sensitivity to the model specification, especially when $K$ is not known. \vspace*{-0.5cm} \section{Application: Clustering in Multi-subject Functional Magnetic Resonance Imaging Data} \vspace*{-0.5cm} In this application, we conduct a neuroscience study for finding connected brain regions under a varying degree of impact from Alzheimer's disease. The source dataset is resting-state functional magnetic resonance imaging (rs-fMRI) scan data, collected from $S=166$ subjects at different stages of Alzheimer's disease. Each subject has scans over $n=116$ regions of interest using the Automated Anatomical Labeling (AAL) atlas \citep{rolls2020automated,shi2021application} and over $p=120$ time points. We denote the observation for the $s$th subject in the $i$th region by $y_{i}^{(s)}\in \mathbb{R}^{p}$. The rs-fMRI data are known for their high variability, often characterized by a low intraclass correlation coefficient (ICC), $(1-\hat\sigma^{2}_{\text{within--group}}/\hat\sigma^{2}_{\text{total}})$, which estimates the proportion of total variance that can be attributed to variability between groups \citep{noble2021guide}. Therefore, our goal is to use multi-view clustering to divide the regions of interest for each subject, while improving our understanding of the source of the high variability. \begin{figure} \caption{Results of brain region clustering (lateral view) for four subjects taken from the healthy and diseased groups: (a) one healthy subject, (b) another healthy subject, (c) one diseased subject, and (d) another diseased subject. The multi-view clustering model allows subjects to have similar partition structures on a subset of nodes, while having subtle differences on the others (Panels a and b, Panels c and d). At the same time, the healthy subjects show a lesser degree of variability in the brain clustering than the diseased subjects.}\label{fig:application2} \end{figure} \vspace*{-0.5cm} We fit the multi-view clustering model to the data by running MCMC for $5,000$ iterations and discarding the first $2,500$ as burn-in. As shown in Figure \ref{fig:application2}, the hierarchical Dirichlet distribution on the latent coordinates induces similarity between the clusterings of brain regions among subjects on a subset of nodes, while showing subtle differences on the other nodes. On the other hand, some major differences can be seen in the clusterings between the healthy and diseased subjects. Using the latent coordinates (at the posterior mean), we quantify the distances between $z^{(s)}$ and $z^{(s')}$ for each pair of subjects $s\neq s'$. As shown in Figure \ref{fig:application}(a), there is a clear two-group structure in the pairwise distance matrix formed by $\|z^{(s)}-z^{(s')}\|_F$, and the separation corresponds to the first 64 subjects being healthy (denoted by $s\in g_1$) and the remaining 102 being diseased (denoted by $s\in g_2$).
Next, we compute the within--group variances for these two groups, using $\sum_{s\in g_l}\| z_i^{(s)}- (\sum_{s\in g_l} z^{(s)}_i / |g_l|)\|_F^2/|g_l|$ for $l=1$ and $2$, and plot the variance for each region of interest $i$ on the spatial coordinates of the atlas. Figure \ref{fig:application}(b) and (c) show that, although both groups show some degree of variability, the diseased group shows clearly higher variances in some regions of the brain. Specifically, the paracentral lobule (PCL) and superior parietal gyrus (SPG), dorsolateral superior frontal gyrus (SFGdor), and supplementary motor area (SMA) in the frontal lobe show the highest amount of variability. Indeed, those regions are also associated with very low ICC scores [Figure \ref{fig:application}(e)] calculated based on the variance of $z^{(s)}_i$, with pooled estimates $\hat\sigma^{2}_{\text{total},i}= \sum_{s}\| z_i^{(s)}- (\sum_{s} z^{(s)}_i /S)\|_F^2/S$ and $\hat\sigma^{2}_{\text{within--group},i} = \sum_{l=1}^2\sum_{s\in g_l}\| z_i^{(s)}- (\sum_{s\in g_l} z^{(s)}_i /|g_l|)\|_F^2/S$. On the other hand, some regions such as the hippocampus (HIP), parahippocampal gyrus (PHG), and superior occipital gyrus (SOG) show relatively lower variances within each group, hence higher ICC scores. To show more details on the heterogeneity, we plot the latent coordinates associated with those ROIs using boxplots. Since each $z_i^{(s)}$ is in two-dimensional space, we plot the linear transform $\tilde z_i^{(s)} = z_{i,1}^{(s)} +z_{i,2}^{(s)}$. Interestingly, those 8 ROIs with high variability still seem quite informative for distinguishing the two groups [Figure \ref{fig:application}(f)]. To verify, we concatenate those latent coordinates to form an $S\times 16$ matrix, and fit a logistic regression model for classifying the healthy versus diseased states. The Area Under the Curve (AUC) of the Receiver Operating Characteristic is 86.6\%. On the other hand, when we instead fit the 6 ROIs with low variability in the logistic regression, the AUC increases to 96.1\%. \begin{figure} \caption{Using the latent coordinates to characterize the heterogeneity within the subjects: (a) pairwise distances of the latent coordinates $\|z^{(s)}-z^{(s')}\|_F$; (b) and (c) within-group variances of $z^{(\cdot)}$; (d) the regions of interest (ROIs) colored by the associated lobe names, under the automated anatomical labeling atlas; (e) intraclass correlation coefficients for the regions of interest; (f) boxplot visualization of the latent coordinates for the regions with high variability in the diseased group; (g) boxplot visualization of the latent coordinates for the regions with low variability in the diseased group.}\label{fig:application} \end{figure} An explanation for the above results is that Alzheimer's disease causes different degrees of damage in the frontal and parietal lobes [see the two distinct clusterings in Figure \ref{fig:application2} (c) and (d)], and the severity of the damage can vary from person to person. On the other hand, the hippocampus region (HIP and PHG), important for memory consolidation, is known to be commonly affected by Alzheimer's disease \citep{braak1991neuropathological,klimova2015alzheimer}, which explains the low heterogeneity in the diseased group.
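The pooled variance estimates and the ICC defined above can be computed directly from the posterior-mean latent coordinates. The sketch below is ours; the array \texttt{z} of shape $(S, n, d)$ and the boolean vector \texttt{healthy} marking group membership are hypothetical placeholders for the quantities described in the text.
\begin{verbatim}
import numpy as np

def icc_per_region(z, healthy):
    """ICC_i = 1 - sigma^2_within,i / sigma^2_total,i, per the estimators above."""
    S, n, d = z.shape
    total = ((z - z.mean(axis=0, keepdims=True)) ** 2).sum(axis=(0, 2)) / S
    within = np.zeros(n)
    for g in (healthy, ~healthy):
        zg = z[g]
        within += ((zg - zg.mean(axis=0, keepdims=True)) ** 2).sum(axis=(0, 2))
    within /= S
    return 1.0 - within / total
\end{verbatim}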
Further, to our best knowledge, the high discriminability of the superior occipital gyrus (SOG) is a new quantitative finding that could be meaningful for further clinical study. For validation, without using any group information, we concatenate the $z_i^{(s)}$'s over all $i=1,\ldots,116$ to form an $S\times 232$ matrix and use lasso logistic regression to classify the two groups. When $12$ predictors are selected (as a similar-size model to the one above using 6 ROIs), the AUC is 96.4\%. Since the $z_i^{(s)}$'s are obtained in an unsupervised way, this validation result shows that the multi-view clustering model produces a meaningful representation of the nodes in this Alzheimer's disease dataset. We provide further details on the clusterings, including the number of clusters and the posterior co-assignment probability matrices, in the Supplementary Materials S4.5. \vspace*{-1cm} \section{Discussion} \vspace*{-0.5cm} In this article, we present our discovery of a probabilistic model for popular spectral clustering algorithms. This enables straightforward uncertainty quantification and model-based extensions through the Bayesian framework. There are several directions worth exploring. First, our consistency theory is established under the condition of separable sets, similar to \cite{ascolani2022clustering}. For general cases with non-separable sets, clustering consistency (especially on estimating $K$) is challenging to achieve; to our best knowledge, existing consistency theory only applies to data generated independently from a mixture model \citep{miller2018mixture,zeng2020quasi}. For data generated dependently via a graph, this is still an unsolved problem. Second, in all of our forest models, we have been careful in choosing densities with tractable normalizing constants. One could relax this constraint by using densities $f(y_i\mid y_j;\theta)= \alpha_f g_f(y_i\mid y_j;\theta)$ and $r(y_i;\theta)= \alpha_r g_r(y_i;\theta)$, with $g_f$ and $g_r$ some similarity functions, and $(\alpha_f,\alpha_r)$ potentially intractable. In these cases, the forest posterior becomes $\Pi(\mathcal T\mid .)\propto (\lambda\alpha_r/\alpha_f)^K \prod_{(0,i)\in \mathcal T} g_r(y_i;\theta) \prod_{(i,j)\in \mathcal T} g_f(y_i\mid y_j;\theta)$. Therefore, one could choose an appropriate $\tilde \lambda=\lambda\alpha_r/\alpha_f$ (equivalent to choosing some value of $\lambda$), without knowing the value of $\alpha_f$ or $\alpha_r$; nevertheless, how to calibrate $\tilde \lambda$ still requires further study. Third, a related idea is the Dirichlet Diffusion Tree \citep{neal2003density}, which considers a particle starting at the origin, following the path of previous particles, and diverging at a random time. The data are collected as the locations of the particles at the end of a time period. Compared to the forest process, the diffusion tree process has a conditional likelihood (given the tree) that is invariant to the ordering of the data index, which is a stronger property than the marginal exchangeability of the data points. Therefore, it is interesting to further explore the relationship between those two processes. \end{document}
\begin{document} \title{Fidelity decay in trapped Bose-Einstein condensates} \author{G. Manfredi} \email{[email protected]} \author{P.-A. Hervieux} \affiliation{Institut de Physique et Chimie des Mat{\'e}riaux, CNRS and Universit{\'e} Louis Pasteur, BP 43, F-67034 Strasbourg, France} \date{\today} \begin{abstract} The quantum coherence of a Bose-Einstein condensate is studied using the concept of quantum fidelity (Loschmidt echo). The condensate is confined in an elongated anharmonic trap and subjected to a small random potential such as that created by a laser speckle. Numerical experiments show that the quantum fidelity stays constant until a critical time, after which it drops abruptly over a single trap oscillation period. The critical time depends logarithmically on the number of condensed atoms and on the perturbation amplitude. This behavior may be observable by measuring the interference fringes of two condensates evolving in slightly different potentials. \end{abstract} \pacs{03.75.Gg, 05.45.Mt} \maketitle {\it Introduction}.--- Ultracold atomic gases have been at the center of intensive investigations since the first realization of an atomic Bose-Einstein condensate (BEC) in the mid-1990s. Applications of BECs include the possibility of revisiting standard problems of condensed-matter physics \cite{Anglin}, by making use of periodic optical lattices that mimic the ionic lattice in solid-state systems. More recently, BECs have been used to study quantum transport in disordered systems (another long-standing problem in condensed-matter physics), by using a laser speckle to create a disordered potential. For instance, it was shown that the free expansion of a BEC is restrained, or even completely suppressed, in the presence of a disordered potential \cite{Fort,Clement} (see also \cite{Modugno} for a theoretical analysis), an effect akin to Anderson localization in solids \cite{Paul}. These problems are often approached by studying the interference pattern of two or more BECs that are released from the optical trap, expand freely, and eventually interact with each other \cite{Andrews,Kasevich,Greiner,Shin}. Experiments show high-contrast matter-wave interference fringes, thus revealing the coherent nature of Bose-Einstein condensates. Surprisingly, recent experiments have shown high-contrast fringes even for well separated BECs, whose phases are totally uncorrelated \cite{Hadzi}. Because the interference pattern depends on the phases of the condensates, the fringes should be sensitive to perturbations that strongly affect the phase, but weakly affect the motion. Therefore, when two condensates are subjected to a random potential (such as that generated by a laser speckle \cite{Lye, Clement2}) before interfering, we expect a reduction in the contrast of the interference fringes, which should depend both on the amplitude of the random potential and on the time during which it has been in contact with the condensates. The purpose of this Letter is to propose a theoretical procedure to quantify this loss of coherence and to suggest a possible experimental realization. In order to estimate the coherence and stability of a quantum system \cite{peres}, one can compare the evolution of the same initial condition under two slightly different Hamiltonians, $H_1=H_0+\delta H_1$ and $H_2=H_0+\delta H_2$, where $H_0$ is the unperturbed Hamiltonian, and $\delta H_{1,2}$ are small perturbations characterized by the same amplitude and the same wavelength spectrum, but different phases.
The quantum fidelity at time $t$ is then defined as the square of the scalar product of the wavefunctions evolving with $H_1$ and $H_2$ respectively: $F(t) = \left| \langle \psi_{H_1}(t) \,|\, \psi_{H_2}(t) \rangle \right|^2$. This procedure is sometimes referred to as the `Loschmidt echo', as it is equivalent to evolving the system forward in time with $H_1$, then backward with $H_2$, and using the fidelity to check the accuracy of the time-reversal. Virtually all theoretical investigations of the Loschmidt echo consider one-particle systems evolving in a given (usually chaotic) Hamiltonian. Several regimes have been described in the past. For perturbations that are classically weak but quantum-mechanically strong, the fidelity decay is exponential, with a rate independent of the perturbation and given by the classical Lyapunov exponent of the unperturbed system \cite{jalabert}. This behavior has been confirmed by numerical simulations \cite{jacquod,cucchietti}. For weaker perturbations, the decay rate is still exponential, but perturbation-dependent (Fermi golden rule regime). For still weaker perturbations, the decay is Gaussian (perturbative regime) \cite{jacquod}. For integrable systems, other types of decay (notably algebraic) have been observed \cite{benenti}. Finally, a perturbation-independent regime, though with Gaussian decay, was also observed in experiments \cite{Pastawski}. In a previous work \cite{Manfredi}, we applied the concept of the Loschmidt echo to a system of many electrons interacting through their self-consistent electric field. The numerical results showed that the quantum fidelity remains equal to unity until a critical time, then drops suddenly to much lower values. A similar result was also obtained for a classical system of colliding hard spheres \cite{Pinto}. This effect is probably related to the nonlinearity introduced by the interactions between particles. Therefore, BECs should constitute an ideal arena to determine whether such behavior is typical of many-body quantum systems. {\it Model}.---The dynamics of a BEC is accurately described, in the mean-field approximation, by the Gross-Pitaevskii equation (GPE). We consider a cigar-shaped condensate, where the transverse frequency of the confining potential is much larger than the longitudinal frequency, $\omega_\perp \gg \omega_z$. In this case, a one-dimensional (1D) approximation can be used, and the GPE reads: \begin{equation} i\hbar\frac{\partial\psi}{\partial t} = - \,\frac{\hbar^2}{2m}\frac{\partial^{2}\psi}{\partial z^2} + V(z)\psi + g_{1D}N_A|\psi|^2\psi \equiv H_0 \psi \, . \label{GP} \end{equation} Here, $\int_{-\infty}^{\infty} |\psi|^2 dz=1$, $N_A$ is the number of condensed atoms, $g_{1D} = 2a\hbar\omega_\perp$ is the 1D effective coupling constant, and $a$ is the 3D scattering length. The confining potential contains a small quartic component, which can be realized optically \cite{Bretin}, $V(z) = \frac{1}{2}m\omega_z^2( z^2 + Kz^4/L_{\rm ho}^2)$, where $L_{\rm ho}=(\hbar/m\omega_z)^{1/2}$ is the harmonic oscillator length. We choose the parameters of the experiment described in Ref. \cite{Fort}, where $N_A = 10^5$ atoms of $^{87}\rm Rb$ ($a = 5.7~\rm nm$) are confined in a quasi-1D trap with $\omega_z/2\pi = 24.7~\rm Hz$ and $\omega_\perp/2\pi = 293~\rm Hz$. In the simulations, we normalize time to $\omega_z^{-1}$, space to $L_{\rm ho} = 2.16~\rm \mu m$, and energies to $\hbar\omega_z$.
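As a quick numerical check of this unit conversion (a sketch added here for illustration; it is not part of the original simulations, and the only inputs are standard physical constants and the experimental parameters quoted above), one can recompute the harmonic oscillator length and the dimensionless coupling constant:

\begin{verbatim}
# Recompute L_ho and the dimensionless 1D coupling constant from the quoted
# experimental parameters (87Rb, a = 5.7 nm, omega_z/2pi = 24.7 Hz,
# omega_perp/2pi = 293 Hz). The physical constants are standard values.
import numpy as np

hbar = 1.054571817e-34           # reduced Planck constant, J s
m = 86.909 * 1.66053907e-27      # mass of a 87Rb atom, kg
a = 5.7e-9                       # 3D scattering length, m
omega_z = 2 * np.pi * 24.7       # longitudinal trap frequency, rad/s
omega_perp = 2 * np.pi * 293.0   # transverse trap frequency, rad/s

L_ho = np.sqrt(hbar / (m * omega_z))      # harmonic oscillator length
g1d = 2 * a * hbar * omega_perp           # 1D effective coupling constant, J m
g1d_hat = g1d / (L_ho * hbar * omega_z)   # dimensionless coupling constant

print(f"L_ho    = {L_ho * 1e6:.2f} um")   # ~2.17 um (2.16 um is quoted in the text)
print(f"g1d_hat = {g1d_hat:.3f}")         # ~0.062, close to the quoted 0.063
\end{verbatim}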
The dimensionless 1D coupling constant is then: $\hat{g}_{1D} = g_{1D}/(L_{\rm ho}\hbar\omega_z)=0.063$. The quartic coefficient, which will be crucial for exciting a sufficiently complex nonlinear dynamics, is taken to be $K=0.05$. With the above parameters, the half-length of the condensate is roughly $24~\rm \mu m$. The random perturbation $\delta H$ can be realized in practice using a laser speckle \cite{Lye,Clement2}. In our simulations, we model the random potential as the sum of a large number of uncorrelated waves: $\delta H/\hbar\omega_z = \epsilon \sum_{j=N_{\rm min}}^{N_{\rm max}} \cos(2\pi z/\lambda_j +\alpha_j)$, where $\epsilon$ is the amplitude of the perturbation, the $\lambda_j$'s are the wavelengths, and the $\alpha_j$'s are random phases. The smallest wavelength present in the random potential is $\lambda_{\rm min} = 2L_{\rm ho} = 4.32~\rm \mu m$, which is consistent with the experimental correlation length, $\sigma_z = 5~\rm \mu m$ \cite{Fort}. The wavelength spectrum of the perturbation (i.e. the values of $N_{\rm min}$ and $N_{\rm max}$) only weakly affects the behavior of the fidelity; therefore, we focus our analysis on the dependence of the fidelity on the amplitude $\epsilon$. In order to compute the quantum fidelity, we proceed as follows: (i) first, we prepare the condensate in its ground state without perturbation; (ii) then, we suddenly displace the anharmonic trap by a distance $\Delta z$ (a few micrometers); (iii) finally, we solve numerically the time-dependent GPE (\ref{GP}) with the perturbed Hamiltonian $H_0+\delta H$. Step (iii) is performed for $N=11$ uncorrelated realizations of the random potential, thus yielding $N$ evolutions of the wavefunction, $\psi_j(t)$. We then use all possible pairs to compute the partial fidelities $F_{ij}(t) = \left| \langle \psi_{i}(t) \,|\, \psi_{j}(t) \rangle \right|^2$. There are of course $M=N!/[(N-2)!\,2!]=55$ independent pairs, which are finally averaged to obtain the quantum fidelity, $F(t)=\frac{1}{M}\sum_{i<j} F_{ij}(t)$. This averaging procedure considerably reduces the level of statistical fluctuations. \begin{figure} \caption{\label{fig:fig1} Quantum fidelity $F(t)$ as a function of time (original caption not recovered).} \end{figure} {\it Results}.--- Our numerical results showed an unusual behavior for the quantum fidelity, which stays equal to unity until a critical time $\tau_C$, and then drops rapidly to small values (Fig. 1). The critical time is defined as the time at which the fidelity has dropped to 60\% of its maximum value, i.e. $F(\tau_C)=0.6$. Interestingly, the fidelity can be nicely fit by a Fermi-like curve \begin{equation} f(t) = (1-f_\infty)\left[1+\exp\left(\frac{t-\tau_C}{T}\right)\right]^{-1} + f_\infty.\label{eq:fit} \end{equation} Equation (\ref{eq:fit}) reveals the presence of two distinct time scales: (i) the critical time $\tau_C$ and (ii) $T \ll \tau_C$, which measures the rapidity of the fidelity decay. The parameter $f_\infty$ simply reflects the fact that the fidelity cannot decay to zero, because the system is confined in a finite region in space. In Fig. 1, $\omega_z T = 4.86$ is the same for all three cases and is equal to the oscillation period of a particle trapped in the anharmonic potential $V(z)$, for an initial condition $z(0) = \Delta z$, $\dot z(0)=0$. This means that {\em the fidelity decay occurs over one single oscillation period}.
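As an illustration of the procedure (i)--(iii) described above, the following sketch propagates the dimensionless 1D GPE with a standard split-step Fourier scheme for several realizations of the random potential and averages the pairwise fidelities at a fixed observation time. It is not the code used for the simulations reported here; the grid size, box length, time step, observation time, trap displacement and wavelength set are assumed values chosen only to make the example self-contained.

\begin{verbatim}
# Minimal sketch of the fidelity computation: split-step Fourier propagation
# of the dimensionless 1D GPE for several realizations of the random potential,
# followed by averaging of the pairwise fidelities F_ij = |<psi_i|psi_j>|^2.
# Units: time in 1/omega_z, length in L_ho, energy in hbar*omega_z.
import numpy as np
from itertools import combinations

L, M = 40.0, 2048                       # box half-length and grid size (assumed)
z = np.linspace(-L, L, M, endpoint=False)
dz = z[1] - z[0]
k = 2 * np.pi * np.fft.fftfreq(M, d=dz)

gN = 0.063 * 1.0e5                      # \hat g_{1D} * N_A, nonlinear coefficient
K = 0.05                                # quartic coefficient of the trap
V_trap = 0.5 * (z**2 + K * z**4)

def random_potential(eps, lams, rng):
    """Speckle-like perturbation: sum of uncorrelated cosines."""
    phases = rng.uniform(0.0, 2.0 * np.pi, len(lams))
    return eps * sum(np.cos(2 * np.pi * z / lam + a)
                     for lam, a in zip(lams, phases))

def split_step(psi, V, dt, steps, imaginary=False):
    """Second-order split-step integration of the 1D GPE."""
    fac = dt if imaginary else 1j * dt
    for _ in range(steps):
        psi = psi * np.exp(-0.5 * fac * (V + gN * np.abs(psi)**2))
        psi = np.fft.ifft(np.exp(-0.5 * fac * k**2) * np.fft.fft(psi))
        psi = psi * np.exp(-0.5 * fac * (V + gN * np.abs(psi)**2))
        if imaginary:                    # keep the norm fixed during relaxation
            psi /= np.sqrt(np.sum(np.abs(psi)**2) * dz)
    return psi

rng = np.random.default_rng(0)
psi0 = split_step(np.exp(-z**2 / 2).astype(complex), V_trap,
                  dt=1e-3, steps=20000, imaginary=True)    # ground state

dZ = 2.0                                # sudden trap displacement (assumed)
V_shifted = 0.5 * ((z - dZ)**2 + K * (z - dZ)**4)
lams = np.linspace(2.0, 10.0, 8)        # wavelengths >= lambda_min = 2 L_ho (assumed)

psis = [split_step(psi0, V_shifted + random_potential(0.01, lams, rng),
                   dt=1e-3, steps=5000) for _ in range(11)]
F = np.mean([np.abs(np.sum(np.conj(a) * b) * dz)**2
             for a, b in combinations(psis, 2)])            # average over 55 pairs
print("averaged fidelity at t = 5/omega_z:", F)
\end{verbatim}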
\begin{figure} \caption{\label{fig:fig2} Quantum fidelity for different values of the initial displacement $\Delta z$ (original caption not recovered).} \end{figure} This behavior was confirmed by investigating the dependence of the quantum fidelity on the initial displacement $\Delta z$ (Fig. 2). For each value of $\Delta z$, the parameter $T$ appearing in Eq. (\ref{eq:fit}) is taken to be equal to the corresponding oscillation period in the anharmonic potential $V(z)$. The oscillation period decreases with increasing energy (and thus with increasing $\Delta z$), and indeed the fidelity drop becomes steeper for larger displacements. The critical time also slightly increases with decreasing displacement and goes to infinity for $\Delta z \to 0$. This presumably happens because, for vanishing displacement, the condensate only probes the harmonic part of the trap, so that its dynamics becomes too regular. \begin{figure} \caption{\label{fig:fig3} Critical time $\tau_C$ versus perturbation amplitude $\epsilon$ (original caption not recovered).} \end{figure} Figure 3 shows that $\tau_C$ depends logarithmically on the perturbation amplitude, i.e. $\tau_C \sim -t_0 \ln \epsilon$, with $\omega_z t_0 \simeq 3.3$ (this is the straight line depicted in Fig. 3). This is similar to what was obtained for another self-consistent model \cite{Manfredi}, suggesting that such behavior is generic for $N$-body systems, at least in the mean-field approximation. The critical time also depends on the number of condensed atoms $N_A$. By decreasing $N_A$, $\tau_C$ becomes considerably longer (Fig. 4), and for $N_A \to 0$ (i.e. for the linear Schr{\"o}dinger equation) we have $\tau_C \to \infty$. For sufficiently large condensates ($N_A \ge 2\times 10^4$ for the case of Fig. 4), the critical time depends logarithmically on the number of atoms. {\correct Finally, the collapse of the quantum fidelity is clearly linked to the phases of the wavefunctions. Indeed, by defining an `amplitude fidelity' $F_a(t) = \left(\int|\psi_{H_1}\psi_{H_2}|\,dz \right)^2$ (which neglects information on the phases), we have verified that $F_a(t)$ shows no sign of a sudden collapse when the ordinary fidelity drops.} \begin{figure} \caption{Critical time $\tau_C$ (in units of $\omega_z^{-1}$) versus the number of condensed atoms $N_A$ (original caption truncated).\label{fig:fig4}} \end{figure} {\it Discussion}.---{\correct We have shown that the evolutions of two BECs in slightly different Hamiltonians diverge suddenly after a critical time $\tau_C$. The interaction between the atoms is obviously a vital ingredient, as $\tau_C \to \infty$ when the coupling constant vanishes, i.e. for the linear Schr{\"o}dinger equation. The crucial point is that, for the GPE, the unperturbed Hamiltonian $H_0$ depends on the wave function.} When the perturbation induces a small change in $\psi$, $H_0$ is itself modified, which in turn affects $\psi$, and so on. Thanks to such a nonlinear loop, the perturbed and unperturbed solutions can diverge very fast. In contrast, for the single-particle dynamics $H_0$ is fixed, and the solutions only diverge because of the perturbation $\delta H$. Changes in $\psi$ add incrementally to each other, but cannot trigger the nonlinear loop observed in the GP simulations \cite{foot}. {\correct This behavior is clearly linked to the phases of the wavefunctions and could be tested experimentally by studying the effect of a random potential on the interference pattern of two condensates \cite{Andrews,Kasevich,Greiner,Shin}.} A possible experiment could be performed as follows (see Fig. 5). First, a BEC is created in a single-well trap; then, the trap is deformed into a double-well potential \cite{Shin,Hansel}, with the barrier between the wells sufficiently high that the two condensates cannot tunnel through it.
The condensates are left in the double trap for a time long enough to reach their ground state. A laser speckle is then used to create a small random potential of amplitude $\epsilon$ and correlation length $\sigma_z$. If $\sigma_z \ll d$, where $d$ is the distance between the two BECs, each condensate is subjected to a different random potential with the same statistical properties. In order to excite the dynamics, the total double-well trap is suddenly shifted by a distance $\Delta z$ of the order of a few micrometers. The BECs evolve in their perturbed trap for a certain time $t$, after which both the trap and the random potential are switched off, so that the condensates can overlap and interfere. We predict that the contrast of the interference fringes will depend on the time $t$ and on the perturbation $\epsilon$, in a manner analogous to the quantum fidelity: if $t<\tau_C(\epsilon)$, the contrast should be large, whereas it should drop significantly for times larger than $\tau_C$. Performing several experiments with different perturbation amplitudes and different numbers of condensed atoms should allow one to reproduce qualitatively the logarithmic scalings of Figs. 3 and 4. Accurate time-resolved measurements might even reproduce the fidelity drop time $T$. \begin{figure} \caption{Two BECs ($\psi_{H_1}$, \ldots): sketch of the proposed double-well experiment (original caption truncated).\label{fig:fig5}} \end{figure} These results, together with those obtained for an electron gas at low temperature \cite{Manfredi}, suggest that many-particle systems display a generic sudden decay of the quantum fidelity. Thanks to the ease with which ultracold atom gases can be created and manipulated in the laboratory, BECs should constitute an ideal arena to test these predictions experimentally. We thank C. Fort for providing the details of the experiment described in Ref. \cite{Fort}. We also thank R. Jalabert, J. L{\'e}onard, and H. Pastawski for several useful suggestions. \begin{references} \bibitem{Anglin} J. R. Anglin and W. Ketterle, Nature (London) {\bf 416}, 211 (2002). \bibitem{Fort} C. Fort, L. Fallani, V. Guarrera, J. E. Lye, M. Modugno, D. S. Wiersma, and M. Inguscio, Phys. Rev. Lett. {\bf 95}, 170410 (2005). \bibitem{Clement} D. Cl{\'e}ment, A. F. Varon, M. Hugbart, J. A. Retter, P. Bouyer, L. Sanchez-Palencia, D. M. Gangardt, G. V. Shlyapnikov, and A. Aspect, Phys. Rev. Lett. {\bf 95}, 170409 (2005). \bibitem{Modugno} M. Modugno, Phys. Rev. A {\bf 73}, 013606 (2006). \bibitem{Paul} T. Paul, P. Schlagheck, P. Leboeuf, and N. Pavloff, Phys. Rev. Lett. {\bf 98}, 210602 (2007). \bibitem{Andrews} M. R. Andrews, C. G. Townsend, H.-J. Miesner, D. S. Durfee, D. M. Kurn, and W. Ketterle, Science {\bf 275}, 637 (1997). \bibitem{Kasevich} Mark A. Kasevich, Science {\bf 298}, 1363 (2002). \bibitem{Greiner} M. Greiner, I. Bloch, O. Mandel, Th. W. H{\"a}nsch, and T. Esslinger, Phys. Rev. Lett. {\bf 87}, 160405 (2001). \bibitem{Shin} Y. Shin, M. Saba, T. A. Pasquini, W. Ketterle, D. E. Pritchard, and A. E. Leanhardt, Phys. Rev. Lett. {\bf 92}, 050405 (2004). \bibitem{Hadzi} Z. Hadzibabic, S. Stock, B. Battelier, V. Bretin, and J. Dalibard, Phys. Rev. Lett. {\bf 93}, 180403 (2004). \bibitem{Lye} J. E. Lye, L. Fallani, M. Modugno, D. S. Wiersma, C. Fort, and M. Inguscio, Phys. Rev. Lett. {\bf 95}, 070401 (2005). \bibitem{Clement2} D. Cl{\'e}ment, A. F. Varon, J. A. Retter, L. Sanchez-Palencia, A. Aspect, and P. Bouyer, New J. Phys. {\bf 8}, 165 (2006). \bibitem{peres} A. Peres, Phys. Rev. A {\bf 30}, 1610 (1984). \bibitem{jalabert} R. A. Jalabert and H. M. Pastawski, Phys. Rev.
Lett. {\bf 86}, 2490 (2001). \bibitem{jacquod} Ph. Jacquod, P. G. Silvestrov, and C. W. J. Beenakker, Phys. Rev. E {\bf 64}, 055203(R) (2001). \bibitem{cucchietti} F. M. Cucchietti, C. H. Lewenkopf, E. R. Mucciolo, H. M. Pastawski, and R. O. Vallejos, Phys. Rev. E {\bf 65}, 046209 (2002). \bibitem{benenti} G. Benenti, G. Casati, and G. Veble, Phys. Rev. E {\bf 68}, 036212 (2003). \bibitem{Pastawski} H. M. Pastawski, P. R. Levstein, G. Usaj, J. Raya, and J. Hirschinger, Physica A {\bf 283}, 166 (2000). \bibitem{Manfredi} G. Manfredi and P.-A. Hervieux, Phys. Rev. Lett. {\bf 97}, 190404 (2006). \bibitem{Pinto} R. Pinto, E. Medina, and H. M. Pastawski (private communication). \bibitem{Bretin} V. Bretin, S. Stock, Y. Seurin, and J. Dalibard, Phys. Rev. Lett. {\bf 92}, 050403 (2004). \bibitem{foot} As the GPE is intrinsically nonlinear, one could, in principle, merely perturb the initial quantum state (without doing anything to the Hamiltonian) in order to see some nontrivial decay of the corresponding overlap function. It will be interesting to investigate this alternative approach in future studies. \bibitem{Hansel} W. H{\"a}nsel, J. Reichel, P. Hommelhoff, and T. W. H{\"a}nsch, Phys. Rev. A {\bf 64}, 063607 (2001). \end{references} \end{document}
\begin{document} \title[Stochastic Fractional Conservation Laws] {Stochastic fractional conservation laws } \author{Abhishek Chaudhary} \date{\today} \maketitle \centerline{ Centre for Applicable Mathematics, Tata Institute of Fundamental Research} \centerline{P.O. Box 6503, GKVK Post Office, Bangalore 560065, India}\centerline{[email protected]} \begin{abstract} In this paper, we consider the Cauchy problem for the nonlinear fractional conservation laws driven by a multiplicative noise. In particular, we are concerned with the well-posedness theory and the study of the long-time behavior of solutions for such equations. We show the existence of desired kinetic solution by using the vanishing viscosity method. In fact, we establish strong convergence of the approximate viscous solutions to a kinetic solution. Moreover, under a nonlinearity-diffusivity condition, we prove the existence of an invariant measure using the well-known Krylov-Bogoliubov theorem. Finally, we show the uniqueness and ergodicity of the invariant measure. {\text{e}}nd{abstract} {\textbf{Keywords:} Fractional conservation Laws; Young measures; Existence; Uniqueness; Kinetic solution; Invariant measure; Multiplicative noise; Brownian noise} \section{Introduction} Nonlinear nonlocal integro-PDEs have great demand in mathematics due to their applications in several different areas such as mathematical finance \cite{mathmatical finance}, flow in porous media \cite{porous media}, radiation hydrodynamics \cite{hyd}, and overdriven gas detonations \cite{overdriven}. The addition of a Brownian noise to this type of physical model is completely natural, as it represents exterior perturbations or a lack of knowledge of certain substantial parameters. We are interested in the well-posedness theory and long-time behavior of solutions for the fractional conservation laws driven by Brownian noise. In this article, we consider the following stochastic fractional conservation laws \begin{equation}\label{1.1} \begin{cases} d u(x,t)+\mbox{div}(F(u(x,t)))d t +(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha[A(u(x,t))]dt=\Psi(x,u(x,t))\,dB(t),\,\,\, &x \in \mathbb{T}^N,\,\, t \in(0,T),\\ u(x,0)=u_0(x),\,&\,x\in\mathbb{T}^N {\text{e}}nd{cases} {\text{e}}nd{equation} where $N\ge 1$, $B$ is a cylinderical Wiener process, $u_0$ is the given initial function, $F:\mathbb{R}\mapsto\mathbb{R}^N,$ $A:\mathbb{R}\mapsto\mathbb{R}$ are given (sufficiently smooth) functions (see Section \ref{section 2} for the complete list of assumptions). Here $(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha$ denotes the fractional Laplacian operator of order $\alpha\in(0,1)$, defined pointwise as follows \begin{align}\label{2.4} (-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^{\alpha}[\kappa](x):=-P.V.\int_{\mathbb{R}^N}(\kappa(x+z)-\kappa(x))\lambda(z)\,dz,\,\,\, \vec{\mathbf{f}} orall\,\,x\in\mathbb{T}^N, {\text{e}}nd{align} where $\lambda(z)=\mathcal{L}_{\lambda}ac{1}{|z|^{N+2\alpha}},\,\, z\,\ne\,0$, $\lambda(0)=0$, and $\kappa$ is a sufficiently regular function. Finally, the function $ \Psi : L^2(\mathbb{T}^N )\to L_2(\mathfrak{O}; L^2(\mathbb{T}^N ))$ is a $L_2(\mathfrak{O}; L^2(\mathbb{T}^N ))$-valued function, where $L_2(\mathfrak{O};L^2(\mathbb{T}^N))$ represents the collection of Hilbert-Schmidt operators from $\mathfrak{O}$ to $L^2(\mathbb{T}^N )$. 
\subsection{Earlier works} Over the last decade, there have been many contributions to the larger area of stochastic partial differential equations that are driven by Brownian noise. The equation {\text{e}}qref{1.1} could be viewed as a stochastic perturbation of the nonlocal hyperbolic equation. In the absence of nonlocal term (fractional Laplacian) along with $\Psi=0$, the equation {\text{e}}qref{1.1} becomes the well-known conservation laws. Kruzhkov \cite{Kruzhkov} first introduced the concept of entropy solution for conservation laws, and established well-posedness theory for the same in the $L^\infty$ framework. The entropy formulation of the elliptic-parabolic-hyperbolic problem has been developed by Carrillo \cite{Carrillo}. We mention also the work by Bendahmane $\&$ Karlsen \cite{Karlsen} for the anisotropic diffusion case. On the other hand, we refer to the works of Alibaud \cite{Alibaud}, Cifani et.al. \cite{Cifani, Alibaud 2} for deterministic fractional conservation laws and deterministic fractional degenerate convection-diffusion equations respectively. The concept of the kinetic solution was first introduced in the paper of Lions et. al \cite{lions} for the conservation laws. Then Chen $\&$ Perthame \cite{chen} developed a well-posedness theory for general degenerate parabolic-hyperbolic equations with non-isotropic nonlinearity. Moreover, the well-posedness theory for the nonlocal conservation laws was recently studied by Alibaud et al. in \cite{N-2}. In the stochastic set-up, the work of Kim \cite{Kim} first established the concept of entropy solutions for stochastic conservation laws and developed the well-posedness theory for one-dimensional conservation laws driven by additive Brownian noise. We refer to the work of Vallet $\&$ Wittbold \cite{Vallet} for the multi-dimensional Dirichlet problem. However, when the noise is multiplicative in nature, then situation differs significantly from that of additive noise case, and in this direction we mention the work of Feng $\&$ Nualart \cite{Feng} for one-dimensional balance laws. For multi-dimensional case, Debussche and Vovelle \cite{deb} extended the well-posedness theory for the stochastic scalar conservation laws. Moreover, Dotti and Vovelle \cite{sylvain} showed convergence of approximations to stochastic scalar conservation laws. For the more delicate case of degenerate parabolic equations, we refer to the works of Hofmanova \cite{hofmanova}, Debussche et al. \cite{vovelle}. For more works in this direction, we refer to \cite{BaVaWitParab,BisKoleyMaj,KMV,KMV1,ujjwal,koley2013multilevel,Koley3}. Furthermore, well-posedness theory for stochastic fractional degenerate problem has recently been studied in \cite{neeraj,Neeraj2}. On the other hand, the analysis of the long-time behavior of global solutions is the second pillar in the theory of SPDEs, after the analysis of the well-posedness. The first paper in this direction is by E, Khanin, Mazel, Sinani \cite{EKAS} for the analysis of the invariant measure related to the periodic inviscid Burgers equation with stochastic forcing in space dimension one. We also mention the work by Bakhtin \cite{Bak}, which deals with the scalar conservation laws with Poisson random forcing on the whole line. Moreover, Debussche and Vovelle \cite{vovelle 2} established the existence of an invariant measure for a scalar first-order conservation laws with stochastic forcing under a hypothesis of non-degeneracy on the flux. 
Furthermore, Chen and Pang \cite{chen.2} also have proved existence of an invariant measure for the stochastic anisotropic parabolic-hyperbolic equation. A number of authors have contributed since then, and we mention the works of \cite{B-2,B-3}. \subsection{Aim and outline of this paper} Equation {\text{e}}qref{1.1} is a fractional convection-diffusion equation, and this class of equations have received considerable interest recently thanks to the wide variety of applications. Due to the nonlinearity involves in equation {\text{e}}qref{1.1}, classical solutions to {\text{e}}qref{1.1} are hard to find, and weak solutions must be sought. We adapt the notion of kinetic formulation for solutions to {\text{e}}qref{1.1}, which is only weak in space variable but strong in time variable, as proposed in \cite{sylvain}. To sum up, we aim at developing the following results related to {\text{e}}qref{1.1}: \begin{itemize} \item[(i)] Our first aim is to establish the well-posedness theory for solutions to the Cauchy problem {\text{e}}qref{1.1}. The proof of well-posedness theory is rather classical but still quite technical, the hardest part is probably the uniqueness part (see step 3 in the proof of Theorem \ref{comparison}). The proof of existence is based on the vanishing viscosity argument, while the uniqueness proof is settled by extending Kruzkov’s doubling of variable technique in the presence of multiplicative noise. Note that the noise coefficient $\Psi$ has an explicit dependency on the spatial variable $x$ as well. In this case, the proof of contraction principle (see Theorem \ref{th3.6}) requires a change in hierarchy while computing limits with respect to various parameters involved (for details, see {\text{e}}qref{final1}). Indeed, this technical hurdle forces us to analyze the general equation with values of $\alpha$ in $(0,\mathcal{L}_{\lambda}ac{1}{2})$. However, when the noise coefficient $\Psi=\Psi(u)$, we can extend well-posedness theory for $\alpha\in (0, 1)$ (see Theorem \ref{main result}). \item[(ii)] The second main contribution of this paper is the analysis of the long-time behavior of the kinetic solution. In fact, we obtain existence of an invariant measure through the Krylov-Bogoliubov theorem under a nonlinearity-diffusivity condition {\text{e}}qref{non} (see Theorem \ref{Invariant measure}). The proof of the existence of invariant measure itself is much more delicate than in the hyperbolic case \cite{vovelle 2}. Moreover, we show that the law of kinetic solution $u(t)$ converges to an unique invariant measure as $t\to\,\infty$ (see {\text{e}}qref{convergence of law}). We also establish a well-posedness theory for initial data in $L^1(\mathbb{T}^N)$ which is needed for the framework of invariant measure (see Theorem \ref{main result 2}). {\text{e}}nd{itemize} The paper is organized as follows. In Section \ref{section 2}, we give the details of assumptions and introduce the basic setting, define the notion of a kinetic solution, and state our main results, Theorem \ref{main result}, Theorem \ref{main result 2}, Theorem \ref{Invariant measure}. Section \ref{section 3} is devoted to well-posedness theory in $L^p$-setting. Subsection \ref{subsection 3.1} is devoted to the proof of uniqueness together with the proof of $L^1$-contraction principle. In Subsection \ref{subsection 3.2}, we prove the existence part of Theorem \ref{main result} which is divided into two parts. First, we prove the existence for the smooth initial condition. 
Second, we relax the hypothesis upon initial data and prove the existence for general initial data. In Section \ref{section 4}, we prove the existence and uniqueness of invariant measure, Theorem \ref{Invariant measure}. In Appendix \ref{A}, we briefly derive the kinetic formulation of equation {\text{e}}qref{1.1}. In Appendix \ref{B}, we give the proof of Theorem \ref{main result 2}. \section{Technical framework and statement of main result}\label{section 2} \subsection{Hypothesis} In this subsection, we give the precise assumptions on each of the term appearing in the equation {\text{e}}qref{1.1}. We work on a finite-time interval $[0,T]$ for the well-posedness theory and consider periodic boundary conditions: $x\in\mathbb{T}^N$, where $\mathbb{T}^N $ is the N-dimensional torus. Suppose that $(\Omega,\mathcal{F},\mathbb{P},({\mathcal{F}}_t),(\beta_k(t))_{k\ge1})$ is a stochastic basis with a complete, right continuous filtration. Let $\mathcal{P}$ indicates the predictable $\sigma$-algebra on $\Omega\times[0,T]$ associated to $(\mathcal{F}_t)_{t\ge0}$. $B$ is a cylindrical Wiener process defined as $B=\sum_{n\ge1}\beta_n l_n$ , where the coefficients $\beta_n$ are independent Brownian processes and $(l_n)_{n\ge1}$ is a complete orthonormal basis in a Hilbert space $\mathfrak{O}$. Here, we introduce the canonical space $\mathfrak{O}\subset\mathfrak{O}_0$ via $$\mathfrak{O}_0=\bigg\{v=\sum_{n\ge1}\theta_n \mathfrak{\gamma}_n;\, \sum_{n\ge1}\mathcal{L}_{\lambda}ac{\theta_n^2}{n^2}\,\textless\,\infty\bigg\}$$ with the norm $$\| v\|_{\mathfrak{O}_0}^2=\sum_{n\ge1}\mathcal{L}_{\lambda}ac{\theta_n^2}{n^2},\,\,\, v=\sum_{n\ge1}\theta_n \mathfrak{\gamma}_n .$$ Notice that the embedding $\mathfrak{O}\hookrightarrow \mathfrak{O}_0$ is Hilbert-Schmidt. Moreover, $B$ has $\mathbb{P}$-almost surely trajectories in $C([0,T];\mathfrak{O}_0)$. For all $w\in L^2(\mathbb{T}^N)$ we consider a mapping $\Psi: \mathfrak{O}\to L^2(\mathbb{T}^N)$ intruduced by $\Psi(w)\mathfrak{\gamma}_n = h_n(\cdot, w(\cdot))$. Then we define $$\Psi(x,w)=\sum_{n\ge1} h_n(x,w)\mathfrak{\gamma}_n ,$$ For details related to the It\^o integral $\int_0^t\Psi(x,u){\rm dB}(t)$, we refer to \cite{prato}. Now we separate the rest of the assumptions in two parts; first part for the well-posedness theory and second part for the existence and uniqueness of invariant measure. \subsection*{Assumptions for the well-posedness theory} Here, we list the assumptions which are necessary to state the Theorems \ref{main result} \& \ref{main result 2}. \begin{Assumptions} \item\label{A1} $F$ is a $C^2(\mathbb{R};\mathbb{R}^N)$-function with a polynomial growth of its derivative, in the following sense: there exists $r\,\ge\,1$ such that \begin{align}\label{F.2} \sup_{|\zeta|\,\le\,\delta}|F'(\xi)-F'(\xi+\zeta)|\,\le\,C_F (1+|\xi|^{r-1})\,\delta. {\text{e}}nd{align} \item\label{A2} $A:\mathbb{R}\to\mathbb{R}$ is a non-decreasing Lipschitz continuous function. \item\label{A3} $h_k\in C(\mathbb{T}^N \times\mathbb{R})$ with the following bounds: \begin{align}\label{2.2} H^2(x,w)=\sum_{k\ge1}|h_k(x,w)|^2 \le C_0 (1+|w|^2), {\text{e}}nd{align} \begin{align}\label{2.3} \sum_{k\ge1}|h_k(x,w)-h_k(y,z)|^2\le C_{\Psi}(|x-y|^2+|w-z|g(|w-z|)), {\text{e}}nd{align} where $x,y\in\mathbb{T}^N$ and $w,z\in\mathbb{R}$ and $g$ is a continuous non-decreasing function on $\mathbb{R}_+$ satisfying, $g(0)=0$, and we assume also $0\le g(\mathfrak{\zeta})\le1$ for all $\mathfrak{\zeta}\in\mathbb{R}_+$. 
\item \label{A4} If noise is additive in nature, then $h_k\in C(\mathbb{T}^N)$, with the following bounds \begin{align}\label{noise1} H^2(x)=\sum_{k\ge1}|h_k(x)|^2,\,\,\sum_{k\ge\,1} \|h_k\|_{C(\mathbb{T}^N)}^2 \le C_0, {\text{e}}nd{align} \begin{align}\label{noise2} \sum_{k\ge1}|h_k(x)-h_k(y)|^2\,\le\,C_{\Psi}(|x-y|^2), {\text{e}}nd{align} where $x,y\in\mathbb{T}^N$. {\text{e}}nd{Assumptions} \subsection*{Assumptions for the proof of invariant measure} Here, we list the assumptions which are necessary to state the Theorem \ref{Invariant measure}. \begin{Assumptions2} \item\label{H1} $A\in C^2(\mathbb{R};\mathbb{R})$ is a non-decreasing Lipschitz continuous function with the following bound: \begin{align} |A''(\mathfrak{\zeta})|\,\le\,C\,(|\mathfrak{\zeta}|+1)\,\qquad\, \vec{\mathbf{f}} orall\mathfrak{\zeta}\in\mathbb{R}. {\text{e}}nd{align} \item \label{H2} $F\in C^2(\mathbb{R};\mathbb{R}^N)$ with the following bound: \begin{align}\label{flux existance} |F''(\mathfrak{\zeta})|\,\le\,C\,(|\mathfrak{\zeta}|+1).\,\,\,\qquad\, {\text{e}}nd{align} \item\label{H3} The noise is additive in nature, and $h_k\in C^2(\mathbb{T}^N)$, with the following bounds \begin{align}\label{noise1} H^2(x)=\sum_{k\ge1}|h_k(x)|^2,\,\,\sum_{k\ge\,1} \|h_k\|_{C^2(\mathbb{T}^N)}^2 \le C_0, {\text{e}}nd{align} \begin{align}\label{noise2} \sum_{k\ge1}|h_k(x)-h_k(y)|^2\,\le\,C_{\Psi}(|x-y|^2), {\text{e}}nd{align} where $x,y\in\mathbb{T}^N$. \item\label{H4} The functions $h_k$ satisfies the cancellation condition \begin{align}\label{cancellation} \int_{\mathbb{T}^N}h_k(x)dx=0. {\text{e}}nd{align} {\text{e}}nd{Assumptions2} \subsection{Definitions for $L^p$-setting:} \label{section 3.1} Here, we introduce the formulation of kinetic solution to {\text{e}}qref{1.1} as well as the basic definitions concerning the notion of kinetic solution. Let $\mathcal{M}^+\big([0,T]\times\mathbb{T}^N\times\mathbb{R}\big)$ be the collection of non-negative Radon measures over $[0,T]\times\mathbb{T}^N\times\mathbb{R}$. \begin{definition}[\textbf{Kinetic measure}]\label{kinetic measure 1} A mapping $\mathfrak{m}$ from $\Omega$ to $\mathcal{M}^+\big([0,T]\times\mathbb{T}^N\times\mathbb{R}\big)$ is said to be kinetic measure provided the following conditions hold: \begin{enumerate} \item[(i)] $\mathfrak{m}$ is measurable in the following sense: for each $\psi\in C_0(\mathbb{T}^N\times[0,T]\times\mathbb{R})$ the mapping $\mathfrak{m}(\psi):\Omega\to\mathbb{R}$ is measurable, \item[(ii)] if $\mathbb{B}_{L}^c=\{\mathfrak{\zeta}\in\mathbb{R}; |\mathfrak{\zeta}|\ge L\}$, then $\mathfrak{m}$ vanishes for large $\mathfrak{\zeta}$ in the sense: $$\lim_{L\to\infty}\mathbb{E} \mathfrak{m}(\mathbb{T}^N\times[0,T]\times \mathbb{B}_{L}^c)=0.$$ {\text{e}}nd{enumerate} {\text{e}}nd{definition} \begin{definition}[\textbf{Kinetic solution}]\label{kinetic solution in lp setting} Let $u_0\in L^p(\Omega\times\mathbb{T}^N)$ for all $p\in[1,+\infty)$. A $L^1(\mathbb{T}^N)$- valued stochastic process $(u(t))_{t\in[0,T]}$ is said to be a solution to {\text{e}}qref{1.1} with initial datum $u_0$, if $(u(t))_{t\in[0,T]}$ and $f(t):=\mathbbm{1}_{u(t)\textgreater\mathfrak{\zeta}}$ have the following properties: \begin{enumerate} \item[1.] $u\in L_{\mathcal{P}}^p(\mathbb{T}^N\times[0,T]\times\Omega),\,\,\,\, \vec{\mathbf{f}} orall\,\, p\in[1,+\infty)$ , \item[2.] 
for all $\varphi\in C_c(\mathbb{T}^N\times\mathbb{R}),$ $\mathbb{P}$-almost surely, $t\to \langle f(t),\varphi\rangle$ is c\'adl\'ag, \item[3.]for all $p\in[1,+\infty),$ there exists $C_p\ge0$ such that \begin{align}\label{2.5} \mathbb{E}(\sup_{0\le t\le T}\| u(t)\|_{L^p(\mathbb{T}^N)}^p)\le C_p, {\text{e}}nd{align} \item[4.] Let $\mathfrak{\mathfrak{{\text{e}}ta}}_{1}:\Omega\to\,\mathcal{M}^{+}\big(\mathbb{T}^N\times[0,T]\times\mathbb{R}\big)$ be defined as follows: $$\mathfrak{{\text{e}}ta}_1(x,t,\xi)=\int_{\mathbb{R}^N}|A(u(x+z,t))-A(\mathfrak{\zeta})|\mathbbm{1}_{\mbox{Conv}\{u(x,t),u(x+z,t)\}}(\mathfrak{\zeta})\lambda(z)dz.$$ There exists a random kinetic measure $\mathfrak{m}$ in sense of Definition \ref{kinetic measure 1} such that $\mathbb{P}$-almost surely, $\mathfrak{m}\ge\mathfrak{\mathfrak{{\text{e}}ta}}_1$, and the pair $(f,\mathfrak{m})$ satisfies the following formulation: for all $\varphi \in C_c^2(\mathbb{T}^N\times\mathbb{R})$, $t\in[0,T]$, \begin{align}\label{2.6} \langle f(t),\varphi \rangle &= \langle f_0, \varphi \rangle + \int_0^t\langle f(s),F'(\mathfrak{\zeta})\cdot\nabla\varphi\rangle ds -\int_0^t\langle f(s), \, A'(\mathfrak{\zeta})\,(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha[\varphi]\rangle ds\notag\\ &\qquad+\sum_{k=1}^\infty \int_0^t \int_{\mathbb{T}^N} h_k(x,u(x,s))\varphi(x,u(x,s) dx d\beta_k(s)\notag\\ &\qquad+\mathcal{L}_{\lambda}ac{1}{2}\int_0^t\int_{\mathbb{T}^N}\partial_{\mathfrak{\zeta}}\varphi(x,u(x,s)) H^2 (x,u(x,s))dxds -\mathfrak{m}(\partial_{\mathfrak{\zeta}}\varphi)([0,t]), {\text{e}}nd{align} $\mathbb{P}$-almost surely, where, $f_0(x,\xi)=\mathbbm{1}_{u_0\textgreater\xi}$, $H^2(x, \xi) := \sum_{k\ge1} |h_k(x,\xi)|^2$. {\text{e}}nd{enumerate} {\text{e}}nd{definition} \noindent Here, we use the brackets $\langle.,.\rangle$, to indicate the duality between $C_c^\infty(\mathbb{T}^N \times\mathbb{R})$ and the space of distributions over $\mathbb{T}^N\times\mathbb{R}$, we use $\mathfrak{m}(\varphi)$ to denote the Borel measure on $[0,T]$ defined by $$\mathfrak{m}(\varphi):B\mapsto\int_{\mathbb{T}^N\times B\times \mathbb{R}}\varphi(x,\mathfrak{\zeta})d\mathfrak{m}(x,t,\mathfrak{\zeta}),\,\,\, \varphi \in C_b(\mathbb{T}^N\times\mathbb{R})$$ for all $B$ Borel subset of $[0,T]$, and for all $c,d \in {\text{e}}nsuremath{\mathbb{R}}$ $$\text{Conv}\{c, d\} := (\text{min}\{c, d\},\text{max}\{c, d\}).$$ \subsection{Definitions for $L^1$-setting:} Here, we recall the definition of kinetic solution as well as the related definitions used for $L^1$-setting. This is a generalization of the concept of kinetic solution as defined in Definition \ref{kinetic solution in lp setting}. In this context, the corresponding kinetic measure is not finite and one can show only suitable decay at infinity. 
\begin{definition}\textbf{(Kinetic measure)}\label{definition kinetic measure}\label{kinetic measure 2} A mapping $\mathfrak{m}_1$ from $\Omega$ to $\mathcal{M}^+\big([0,T]\times\mathbb{T}^N\times\mathbb{R}\big)$ is said to be a kinetic measure provided the following conditions hold: \begin{enumerate} \item[(i)] $\mathfrak{m}_1$ is measurable in the following sense: for each $\kappa\in C_0(\mathbb{T}^N\times[0,T]\times\mathbb{R})$ the mapping $\mathfrak{m}_1(\kappa):\Omega\to\mathbb{R}$ is measurable, \item[(ii)] if $\widetilde{\mathbb{B}}_{L}=\{\xi\in\mathbb{R}; (L+1)\,\ge\,|\mathfrak{\zeta}|\ge L\}$, then $$\lim_{L\to\infty}\mathbb{E} \mathfrak{m}_1(\mathbb{T}^N\times[0,T]\times\widetilde{\mathbb{B}}_{L})=0.$$ {\text{e}}nd{enumerate} {\text{e}}nd{definition} \begin{definition}\textbf{$\big($Kinetic solution$).$}\label{definition kinetic solution in l1 setting} Let $u_0\,\in\,L^1(\mathbb{T}^N).$ A $L^1(\mathbb{T}^N)$- valued stochastic process $(u(t))_{t\in[0,T]}$ is said to be a solution to {\text{e}}qref{1.1} with initial datum $u_0$, if the following conditions are satisfied, \begin{enumerate} \item[1.]$u\in L_{\mathcal{P}}^1(\mathbb{T}^N\times[0,T]\times\Omega)$ and satisfying $$\mathbb{E}\bigg(\sup_{t\in[0,T]}\|u(t)\|_{L^1(\mathbb{T}^N)}\bigg)\,\textless\,+\infty,$$ \item[2.] for all $\varphi\in C_c^2(\mathbb{T}^N\times\mathbb{R}),$ $\mathbb{P}$-almost surely, $t\to \langle f(t),\varphi\rangle$ is c\'adl\'ag, \item[3.] let $\mathfrak{\mathfrak{{\text{e}}ta}}_{1}:\Omega\to\,\mathcal{M}^{+}\big(\mathbb{T}^N\times[0,T]\times\mathbb{R}\big)$ be defined as follows: $$\mathfrak{\mathfrak{{\text{e}}ta}}_1(x,t,\xi)=\int_{\mathbb{R}^N}|A(u(x+z,t))-A(\mathfrak{\zeta})|\mathbbm{1}_{\mbox{Conv}\{u(x,t),u(x+z,t)\}}(\mathfrak{\zeta})\lambda(z)dz.$$ There exists a kinetic measure ${\mathfrak{m}}$ in sense of Definition \ref{kinetic measure 2} such that $\mathfrak{m}_1\,\ge\,\mathfrak{\mathfrak{{\text{e}}ta}}_1$, $\mathbb{P}$-almost surely, and the pair $\big(f,\, \mathfrak{m}_1 \big)$ satisfies the following formulation: for all $\varphi\,\in\,C_c^2(\mathbb{T}^N\times\mathbb{R})$, for all $t\in[0,T],$ \begin{align}\label{formulation} \langle {f}(t),\varphi \rangle &= \langle f_0, \varphi \rangle + \int_0^t\langle f(s),F'(\mathfrak{\zeta})\cdot\nabla\varphi\rangle ds -\int_0^t\langle f(s), A'(\mathfrak{\zeta})\,(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha[\varphi]\rangle ds\notag\\ &\qquad+\sum_{k=1}^\infty \int_0^t \int_{\mathbb{T}^N} h_k(x)\varphi(x,u(x,s) dx d\beta_k(s)\notag\\ &\qquad+\mathcal{L}_{\lambda}ac{1}{2}\int_0^t\int_{\mathbb{T}^N}\partial_{\mathfrak{\zeta}}\varphi(x,u(x,s)) H^2 (x)dxds -\mathfrak{m}_1(\partial_{\mathfrak{\zeta}}\varphi)([0,t]), {\text{e}}nd{align} $\mathbb{P}$-almost surely, where, $f_0(x,\mathfrak{\zeta})=\mathbbm{1}_{u_0\textgreater\mathfrak{\zeta}}$, $H^2(x) := \sum_{k\ge1} |h_k(x)|^2$. {\text{e}}nd{enumerate} {\text{e}}nd{definition} \subsection{Invariant measure} Suppose that noise is only additive in nature. Let $u(t,0,u_0)$ denote the solution at time $t$ from starting at time $0$ (for existence of solution, see Theorem \ref{main result 2}). Then, we can define the operator $\mathcal{Q}_t:B_b(L^1(\mathbb{T}^N))\to B_b(L^1(\mathbb{T}^N))$ by $$\big(\mathcal{Q}_t\kappa\big)(v)=\mathbb{E}[ \kappa(u(t,0,v)) ]$$ Suppose that $u(t,t_0,v)$ denotes the solution to {\text{e}}qref{1.1} starting at time $t_0$ from an $\mathcal{F}_{t_0}$-measurable initial condition $v$. 
By contraction principal (see Theorem \ref{main result 2}), it holds true that $\mathbb{P}$-almost surely, $$\|u(s,t,u_1)-u(s,t,u_2)\|_{L^1(\mathbb{T}^N)}\,\le\,\|u_1-u_2\|_{L^1(\mathbb{T}^N)}$$ It implies that $\mathcal{Q}_t\kappa\,\in\,C_b(L^1(\mathbb{T}^N))$ for all $\kappa\,\in\,C_b(L^1(\mathbb{T}^N))$. The equation {\text{e}}qref{1.1} defines a Markov process in the following sense: $$\mathbb{E}[\kappa(u(t+s,0,v))|\mathcal{F}_t]=\mathcal{Q}_s\kappa\big(u(t,0,v)\big),\,\,\, \vec{\mathbf{f}} orall\,\,\kappa\,\in \,C_b(L^1(\mathbb{T}^N))\,\,\, \vec{\mathbf{f}} orall\,t,s\,\textgreater\,0,\,\,\,\mathbb{P}-\text{almost surely},$$ then the semigroup property $\mathcal{Q}_{t+s}=\mathcal{Q}_t\circ \mathcal{Q}_s$ holds. The equation ${\text{e}}qref{1.1}$ defines a Feller Markov process. The semigroup $(\mathcal{Q}_t)_{t\,\ge\,0}$ is called Feller. We now denote the law of $u(t,0,v)$ by $\lambda_{t,v}$, then $$\mathcal{Q}_t\kappa(v)=\mathbb{E}[\kappa(u(t,0,v))]=\int_{L^1({\mathbb{T}^N})}\kappa(y)\lambda_{t,v}(dy),$$ and $\langle\cdot,\cdot\rangle$ denotes the duality product between bounded Borel functions and probability measures, we obtain $$\mathcal{Q}_t\kappa(v)=\langle \kappa, \lambda_{t,v}\rangle=\langle \mathcal{Q}_t\kappa,\delta_v\rangle.$$ It follows that $\lambda_{t,v}=\mathcal{Q}_t^*\delta_v.$ More generally, if we consider a solution to {\text{e}}qref{1.1} with initial condition $u_0$ having the initial law $\lambda$, we have $\lambda_{t,u_0}=\mathcal{Q}_t^*\lambda$. \begin{definition}\textbf{(Invariant measure)} We say that a probability measure $\lambda$ on $L^1(\mathbb{T}^N)$ is an invariant measure if $$\mathcal{Q}_t^*\lambda=\lambda,\,\,\,\,\,\,\text{for all}\,t\,\ge\,0.$$ {\text{e}}nd{definition} Now, we have all in hand to formulate the Krylov-Bogoliubov Theorem \cite[Proposition 11.3]{prato}. \begin{thm} Suppose that the semigroup $(\mathcal{Q}_t)$ is Feller. Suppose that there exists a random variable $u_0$, a sequence $(T_n)$ increasing to $\infty$ and a probability measure $\lambda$ such that $$\mathcal{L}_{\lambda}ac{1}{T_n}\int_0^{T_n}\lambda_{t,u_0} dt\,\to\,\lambda\,\,\text{weak-*}. $$ Then $\lambda$ is an invariant measure for $(\mathcal{Q}_t)$. {\text{e}}nd{thm} \subsection{Nonlinearity-Diffusivity condition and recent developments}\label{non-linearity diffusivity condition} Here, we will discuss some recent developments in the analysis of long-time behavior of solutions of nonlinear deterministic, stochastic PDEs and some comments about assumptions on flux function and non-linearity. Let us introduce the nonlinearity-diffusivity condition on which we will work on as follows: suppose that there exist $s\,\in\,(0,1)$, and $C \textgreater\,0$, independent of $\gamma$, such that \begin{align}\label{non} \sup_{\tau\in\mathbb{R}, k\in\,Z^n}\int_{\mathbb{R}}\mathcal{L}_{\lambda}ac{\gamma\big(A'(\mathfrak{\zeta})|k|^{2\alpha-1}+\gamma\big)}{(\gamma+A'(\mathfrak{\zeta})|k|^{2\alpha-1})^2+|F'(\mathfrak{\zeta})\cdot\mathcal{L}_{\lambda}ac{k}{|k|}+\tau|^2}d\mathfrak{\zeta} =:\mathfrak{\mathfrak{{\text{e}}ta}}(\gamma)\,\le\,C\,\gamma^s\,\,\to\,0\,\,\text{as}\,\gamma\,\to\,0. {\text{e}}nd{align} This condition implies that no interval of $\mathfrak{\zeta}$ on which, flux function $F(\mathfrak{\zeta})$ is affine and nonlinear diffusive function $A(\mathfrak{\zeta})$ is degenerate. 
We can write this condition in the simple and more standard setting: for any $\tau\in\mathbb{R}\,\,\hat{k}\,\in\,\mathbb{S}^{N-1}$, we have $$\mathcal{L}\{\mathfrak{\zeta}\in\mathbb{R};\,F'(\mathfrak{\zeta})\cdot\hat{k}+\tau=0,\, A'(\mathfrak{\zeta})=0\}=0.$$ We mention some results under similar type of conditions on flux functions: hyperbolic conservation laws \cite{vovelle2,vovelle 2}, and diffusion matrix in anisotropic degenerate parabolic-hyperbolic equations \cite{chen.2,chen3}. These results can summarized as follows: Consider \begin{equation}\label{hyperbolic} \begin{cases} d u(x,t)+\mbox{div}(F(u(x,t)))dt =\Psi(x)\,dB(t),\,\,\,& x \in \mathbb{T}^N,\,\, t \in(0,T),\\ u(x,0)=u_0(x) & x\in\mathbb{T}^N. {\text{e}}nd{cases} {\text{e}}nd{equation} The purely hyperbolic conservation laws is special case of {\text{e}}qref{1.1} with $A(\mathfrak{\zeta})=0$ and $\Psi=0$. If put $A(\mathfrak{\zeta})=0$ in condition {\text{e}}qref{non}, it gives that for any $\tau\in\mathbb{R},\,\,\hat{k}\,\in\,\mathbb{S}^{N-1}$, \begin{align}\label{non_degeneracy}\mathcal{L}\{\mathfrak{\zeta}\in\mathbb{R};\,F'(\mathfrak{\zeta})\cdot\hat{k}+\tau=0\}=0, {\text{e}}nd{align} \noindent then all solutions of the deterministic scalar conservation laws converges to $\bar{u}_0=\int_{\mathbb{T}^N}u_0(x)dx$ (see\,\cite{vovelle2}).\\ There is also an article by Chen and Perthame \cite{chen3}, where authors studied the large time behaviour of periodic solutions in $L^\infty$ to the deterministic nonlinear anisotropic degenerate parabolic-hyperbolic equations of second order. \begin{align}\label{aniso} \begin{cases} \partial _t u(x,t)+\mbox{div}_x(F(u(x,t))) =\nabla_x(A(u)\cdot\nabla_x u),\,\,\, &x \in \mathbb{T}^N,\,\, t \in(0,T),\\ u(x,0)=u_0 &x\in\mathbb{T}^N. {\text{e}}nd{cases} {\text{e}}nd{align} \begin{thm}[\textbf{Anisotropic degenerate parabolic-hyperbolic equation}] Let $u\,\in\,L^\infty([0,\infty)\times\mathbb{R}^N)$ be the unique periodic entropy solution to {\text{e}}qref{aniso}. Suppose that the flux function $F$ and the diffusion matrix $A$ satisfy the following non-linearity diffusivity condition: for any $\delta\,\textgreater\,0$ \begin{align} \sup_{|\tau|+|k|\,\ge\,\delta}\int_{|\mathfrak{\zeta}|\,\le\,\|u_0\|_{\infty}}\mathcal{L}_{\lambda}ac{\gamma}{\gamma+|F'(\mathfrak{\zeta})\cdot k+\tau|^2+(k^TA(\mathfrak{\zeta})k)^2}d\mathfrak{\zeta}:=\omega_\delta(\gamma)\,\to\,0\,\,\text{as}\,\,\gamma\,\to\,0. {\text{e}}nd{align} Then we have $$\|u(t)-\bar{ u}_0\|_{\infty}\,\to0\,\,\, \text{as}\,\,t\,\to\,\infty,$$ where $$\bar{u}_0=\int_{\mathbb{T}^N}u_0(x)dx.$$ {\text{e}}nd{thm} However, in stochastic setting, Debussche et. al \cite{deb} used the following non-degeneracy condition on the flux function, \begin{align}\label{non-degeneracy flux} i(\varepsilon)=\,\sup_{\tau\in\mathbb{R}\,\hat{k}\in\mathbb{S}^{N-1}}|\{\mathfrak{\zeta}\in\,\mathbb{R};|\tau+\hat{k}\cdot F'(\mathfrak{\zeta})|\,\textless\,\varepsilon\}|\,\le\,C_1\,\varepsilon^b {\text{e}}nd{align} for some $C_1\,\textgreater\,0$ and $b\,\textgreater\,0$. \begin{thm}[\textbf{The stochastic conservation laws}] Assumptions (H.2)-(H.4) hold. Then there exists an invariant measure for {\text{e}}qref{hyperbolic} in $L^1(\mathbb{T}^N)$. If the condition {\text{e}}qref{H2} is strengthened into the hypothesis that $F$ is sub-quadratic in the following sense: \begin{align} |F''(\mathfrak{\zeta})|\,\le\,C,\,\, \vec{\mathbf{f}} orall\,\,\,\mathfrak{\zeta}\in\mathbb{R}, {\text{e}}nd{align} then the invariant measure is unique. 
{\text{e}}nd{thm} The existence of invariant measure is also established for the stochastic anisotropic parabolic-hyperbolic equation in \cite{chen.2}. It is clear from the above discussion that our assumption {\text{e}}qref{non} on the flux $F$ and the diffusive function $A$ is in line with the assumptions in recently developed works. \subsection{Some basic facts about Young measure}\label{section 3.3} Roughly speaking a Young measure is a parametrized family of probability measures where the parameters are drawn from a finite measure space. Let $\mathcal{P}(\mathbb{R})$ be the space of probability measure on $\mathbb{R}$. \begin{definition}[\textbf{Young measure}] Suppose that $(\mathcal{X},\lambda)$ is a finite measure space. A mapping $\mathfrak{\mathcal{V}}$ from $\mathcal{X}$ to $\mathcal{P}(\mathbb{R})$ is said to be a Young measure if, for all $\kappa \in C_b(\mathbb{R})$, the map $w\to\mathfrak{\mathcal{V}}_w(\kappa)$ from $\mathcal{X}$ into $\mathbb{R}$ is measurable. We say that a Young measure $\mathfrak{\mathcal{V}}$ vanishes at infinity if , for all $p\ge1$, $$\int_\mathcal{X} \int_{\mathbb{R}}|\mathfrak{\zeta}|^p d\mathfrak{\mathcal{V}}_w(\mathfrak{\zeta})d\lambda(w)\textless \infty.$$ {\text{e}}nd{definition} \begin{definition}[\textbf{Kinetic function}]Suppose that $(\mathcal{X}$,$\lambda$) is a finite measure space. A measurable function $f:\mathcal{X}\times\mathbb{R}\to[0,1]$ is said to be a kinetic function if there exists a Young measure $\mathfrak{\mathcal{V}}$ on $\mathcal{X}$ vanishing at infinity such that, for $\lambda$-a.e. $w\in \mathcal{X}$, for all $\mathfrak{\zeta}\in\mathbb{R}$, $$f(w,\mathfrak{\zeta})=\mathfrak{\mathcal{V}}_w(\mathfrak{\zeta},\infty).$$ {\text{e}}nd{definition} \begin{definition}[\textbf{Equilibrium }] A measurable function $f:\mathcal{X}\times\mathbb{R}\to[0,1]$ is said to be an equilibrium if there exists a measurable function $v:\mathcal{X}\to\mathbb{R}$ such that $f(w,\mathfrak{\zeta})=\mathbbm{1}_{v(w)\textgreater\mathfrak{\zeta}}$ almost every $w\in \mathcal{X}$. {\text{e}}nd{definition} \begin{definition}[\textbf{Convergence of Young measure}] Suppose that $(\mathcal{X},\lambda)$ is a finite measure space. A sequence of Young measures $\mathcal{V}^n$ on $\mathcal{X}$ said to conveges to a Young measure $\mathcal{V}$ on $\mathcal{X}$ provided the following convergence holds: for all $h\in L^1(\mathcal{X})$, for all $g\in C_b(\mathbb{R})$, \begin{align}\label{2.12} \lim_{n\to +\infty}\int_{\mathcal{X}} h(y)\int_{\mathbb{R}}g(\zeta)d\mathcal{V}_z^n(\zeta)d\lambda(y)&=\int_{\mathcal{X}} h(y)\int_{\mathbb{R}}g(\zeta)d\mathcal{V}_y(\zeta)d\lambda(y). {\text{e}}nd{align} {\text{e}}nd{definition} \begin{remark}[\textbf{Kinetic formulation in terms of Young measure}]Suppose that u is a kinetic solution of {\text{e}}qref{1.1}. 
If we define $f(x,t,\zeta)=\mathbbm{1}_{u(x,t)\textgreater\mathfrak{\zeta}}$, then we have $\partial_{\mathfrak{\zeta}}f(x,t,\zeta)=-\delta_{u(x,t)=\mathfrak{\zeta}}$, where $\mathfrak{\mathcal{V}}=\delta_{u=\mathfrak{\zeta}}$ is a Young measure on $\Omega\times[0,T]\times\mathbb{T}^N $, therefore we can write {\text{e}}qref{2.6} as follows: for all $\varphi \in C_c^2(\mathbb{T}^N\times\mathbb{R})$, $\mathbb{P}$-almost surely, for all $t\in[0,T]$ \begin{align}\label{2.10} \langle f(t),\varphi \rangle &= \langle f_0, \varphi \rangle + \int_0^t\langle f(s),F'(\mathfrak{\zeta})\cdot\nabla\varphi\rangle ds -\int_0^t\langle f(s),A'(\mathfrak{\zeta}) (-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha[\varphi]\rangle ds\notag\\ &\qquad+\sum_{k=1}^\infty \int_0^t \int_{\mathbb{T}^N}\int_{\mathbb{R}} h_k(x,\mathfrak{\zeta})\varphi(x,\mathfrak{\zeta})d\mathfrak{\mathcal{V}}_{s,x}(\mathfrak{\zeta})dx d\beta_k(s)\notag\\ &\qquad+\mathcal{L}_{\lambda}ac{1}{2}\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}\partial_{\mathfrak{\zeta}}\varphi(x,\mathfrak{\zeta}) H^2 (x,\mathfrak{\zeta})d\mathfrak{\mathcal{V}}_{s,x}(\mathfrak{\zeta})dxds -\mathfrak{m}(\partial_{\mathfrak{\zeta}}\varphi)([0,t]). {\text{e}}nd{align} {\text{e}}nd{remark} \subsection{The main results.}\label{section 3.4} To conclude this section, we state our main results. \begin{thm}[\textbf{Well-posedness in $L^p$-setting}]\label{th2.10}\label{main result}\label{main result 1} Let the assumptions {\text{e}}qref{A1}-{\text{e}}qref{A3} be true. Let $\alpha\in (0, \mathcal{L}_{\lambda}ac{1}{2})$ and $u_0\in L^p(\Omega\times\mathbb{T}^N)$ for all $p\in[1,\infty)$. Then there exists a unique kinetic solution to {\text{e}}qref{1.1} with initial data in the sense of Definition \ref{kinetic solution in lp setting} and it has almost surely continuous trajectories in $L^p(\mathbb{T}^N)$, for all $p\in[1,\infty)$. Let $u_1, u_2$ be kinetic solutions to {\text{e}}qref{1.1} with initial data $u_{1,0}$ and $u_{2,0}$, respectively, then for all $t\in[0,T]$, \begin{align} \mathbb{E}\left\Vert u_1(t)-u_2(t)\right\Vert_{L^1(\mathbb{T}^N)}\le\mathbb{E}\left\Vert u_{1,0}-u_{2,0}\right\Vert_{L^1(\mathbb{T}^N)}.\notag {\text{e}}nd{align} Moreover, if assume that noise is only multiplicative, $\Psi=\Psi(u)$, then the same result also holds for $\alpha\in (0, 1)$. {\text{e}}nd{thm} In the next theorem, we also establish well-posedness theory for initial data in $L^1(\mathbb{T}^N)$ which is required for the framework of invariant measure. \begin{thm}[\textbf{Well-posedness in $L^1$-setting}]\label{existance and uniqueness}\label{main result 2} Let the assumptions {\text{e}}qref{A1}-{\text{e}}qref{A2} $\&$ {\text{e}}qref{A4} be true. Let $\alpha\in (0, \mathcal{L}_{\lambda}ac{1}{2})$ and $u_0\in L^1(\mathbb{T}^N)$. Then there exists a solution $u$ to {\text{e}}qref{1.1} with initial data $u_0$ in the sense of Definiton \ref{definition kinetic solution in l1 setting}. Besides, u has almost surely continuous trajectories in $L^1(\mathbb{T}^N)$. Suppose that $F$ is sub-quadratic in the following sense: $|F''(\mathfrak{\zeta})|\,\le\,C,\, \vec{\mathbf{f}} orall\mathfrak{\zeta}\in\mathbb{R}.$ Let $u_1, u_2$ be kinetic solutions to {\text{e}}qref{1.1} with initial data $u_0^1$ and $u_0^2\,\in\,L^1(\mathbb{T}^N)$, respectively. Then the following holds: $\mathbb{P}$-almost surely, for all $t\in[0,T]$ \begin{align}\label{contraction principal} \|u_1(t)-u_2(t)\|_{L^1(\mathbb{T}^N)}\,\le\,\|u_0^1-u_0^2\|_{L^1(\mathbb{T}^N)}. 
{\text{e}}nd{align} {\text{e}}nd{thm} \begin{thm}[\textbf{Invariant measure}]\label{Invariant measure} Let initial data $u_0\in L^1(\mathbb{T}^N)$ such that $\int_{\mathbb{T}^N}u_0(x)dx=0$. Let the assumptions {\text{e}}qref{H1}-{\text{e}}qref{H4} be true. Let $\alpha\in (0, \mathcal{L}_{\lambda}ac{1}{2})$. Then there exists an invariant measure $\lambda$ for {\text{e}}qref{1.1} in $L^1(\mathbb{T}^N)$. Let $\lambda_{t,u_0}$ be law of solution $u(t)$ with initial data $u_0$. Suppose that, if the Asumptions {\text{e}}qref{H1}-{\text{e}}qref{H2} are strengthened into the hypothesis that $F$ and $A$ are sub-quadratic in the following sense: \begin{align}\label{flux uniqueness} |F''(\mathfrak{\zeta})|\,\le\,C,\,\,\,\qquad\,\,|A''(\mathfrak{\zeta})|\,\le\,C,\,\,\,\,\qquad \vec{\mathbf{f}} orall\mathfrak{\zeta}\in\mathbb{R}, {\text{e}}nd{align} then the invariant measure $\lambda$ is unique and $\lambda_{t,u_0}$ converges to an unique invariant measure $\lambda$ as $t\to\infty$. {\text{e}}nd{thm} \textbf{Comment:} In Section \ref{section 3}, one remembers that we will work with two cases. In first case, we have general noise and $\alpha\in (0, \mathcal{L}_{\lambda}ac{1}{2})$. In the second case, we have only multiplicative noise and $\alpha \in (0,1)$. A large amount of proof are same for both the cases. However, we mention and provide separate proofs when it is different. \section{Well-posednees in $L^p$-setting: Proof of Theorem \ref{main result 1}}\label{section 3} \subsection{Comparison principle:}\label{subsection 3.1} Here, we follow the approach of \cite{sylvain} and obtain a property of kinetic solution, which will help in the proof of continuity of trajectories of solutions. In the next proposition, we see that $\mathbb{P}$-almost surely c\'adl\'ag property is independent of test function $\varphi$ and that the limit from the left at any time $t\textgreater\,0$ is defined by a kinetic function. \begin{proposition}\label{Proposition 3.1} Suppose that if $(u(t))_{t\in[0,T]}$ is a solution to {\text{e}}qref{1.1} with initial data $u_0$, then we have the following two properties, \begin{enumerate} \item[1.] there exists a measurable subset $\Omega_1\subset\Omega$ of full probability such that, for all $\omega\in\Omega_1$, for all $\varphi\in C_c^2(\mathbb{T}^N\times\mathbb{R})$, $t\mapsto\langle f(\omega,t),\varphi\rangle$ is c\'adl\'ag. \item[2.] there exists an $L^\infty(\mathbb{T}^N\times\mathbb{R};[0,1])$-valued process $(f^{-}(t))_{t\in(0,T]}$ such that: for each $t\in(0,T]$, for all $\omega\in\Omega_1$, for all $\varphi\in C_c^2(\mathbb{T}^N\times\mathbb{R})$, $f^{-}(t)$ is a kinetic function on $\mathbb{T}^N$ which defines the left limit of $s\mapsto\langle f(s),\varphi \rangle$ at t as follows: \begin{align}\label{3.1} \langle f^{-}(t),\varphi\rangle=\lim_{s_n\to t^{-}}\langle f(s_n),\varphi\rangle . {\text{e}}nd{align} {\text{e}}nd{enumerate} \begin{proof} For a proof, we refer to \cite[Proposition 2.10]{sylvain}. {\text{e}}nd{proof} {\text{e}}nd{proposition} \begin{remark}[Left and right limits] In Proposition \ref{Proposition 3.1} we prove something more than what in state. Indeed, for $\omega\in{\Omega}_1$, we have $f(s_n)\to f^-(t)$ in $L^\infty(\mathbb{T}^N\times\mathbb{R})$ for the weak-* topology, when $s_n\uparrow t$, which implies {\text{e}}qref{3.1}. 
By similar arguments, we can show that $f(s_n)\to f(t)$ in $L^\infty(\mathbb{T}^N\times\mathbb{R})$ weak-* when $s_n\downarrow t.$ {\text{e}}nd{remark} \begin{remark}By using Fatou's lemma we obtain the following bounds: for all $\omega\in\Omega_1$, \begin{align}\label{3.3}\sup_{t\in[0,T]}\int_{\mathbb{T}^N}\int_{\mathbb{R}}|\mathfrak{\zeta}|^p d\mathfrak{\mathcal{V}}_{x,t}^{-}(\mathfrak{\zeta})dx\le C_p(\omega),\,\,\,\mathbb{E}(\sup_{t\in[0,T]}\int_{\mathbb{T}^N}\int_{\mathbb{R}}|\mathfrak{\zeta}|^p d\mathfrak{\mathcal{V}}_{x,t}^{-}(\mathfrak{\zeta})dx)\le C_p. {\text{e}}nd{align} {\text{e}}nd{remark} \begin{remark}[Equation for $f^{-}$] Passing to the limit in {\text{e}}qref{2.6} for an increasing sequence $t_n$ to $t$, we obtain the following equation on $f^{-}$: \begin{align}\label{3.4} \langle f^{-}(t),\varphi \rangle &= \langle f(0),\varphi\rangle +\int_0^t\langle f(s),F'(\mathfrak{\zeta})\cdot\nabla_x\varphi\rangle ds-\int_0^t\langle f(s),A'(\mathfrak{\zeta}) (-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha[\varphi]\rangle ds\notag\\ &\qquad+\sum_{k\ge 1}\int_0^t \int_{\mathbb{T}^N}\int_{\mathbb{R}}h_k(x,\mathfrak{\zeta})\varphi(x,\mathfrak{\zeta})d\mathfrak{\mathcal{V}}_{x,s}(\mathfrak{\zeta})dxd\beta_k(s)\notag\\ &\qquad+\mathcal{L}_{\lambda}ac{1}{2}\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}H^2(x,\mathfrak{\zeta})\partial_{\mathfrak{\zeta}}\varphi(x,\mathfrak{\zeta})d\mathfrak{\mathcal{V}}_{x,s}(\mathfrak{\zeta})dx ds-\mathfrak{m}(\partial_{\mathfrak{\zeta}}\varphi)([0,t)). {\text{e}}nd{align} In particular, we obtain \begin{align}\label{3.5}\langle f(t)-f^{-}(t), \varphi\rangle=-\mathfrak{m}(\partial_\mathfrak{\zeta} \varphi)(\{t\}). {\text{e}}nd{align} Outside the set of atomic points (at most countable) of $A\mapsto \mathfrak{m}(\partial_{\mathfrak{\zeta}}\varphi)(A)$, we have$\langle f(t),\varphi\rangle=\langle f^{-}(t),\varphi\rangle.$ It implies that $\mathbb{P}$ almost surely, $f(t)=f^{-}(t)$ almost every $t\in[0,T]$. It is clear that equation {\text{e}}qref{3.4} gives us the following equation for $f^{-}(t)$: $\mathbb{P}$-almost surely, for all $t\in[0,T]$ \begin{align}\label{3.6} \langle f^{-}(t),\varphi \rangle &= \langle f(0),\varphi\rangle +\int_0^t\langle f^{-}(s),F'(\mathfrak{\zeta})\cdot\nabla_x\varphi\rangle ds-\int_0^t\langle f^-(s),A'(\mathfrak{\zeta})(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha[\varphi]\rangle ds\notag\\&\qquad+\sum_{k\ge 1}\int_0^t \int_{\mathbb{T}^N}\int_{\mathbb{R}}h_k(x,\mathfrak{\zeta})\varphi(x,\mathfrak{\zeta})d\mathfrak{\mathcal{V}}_{x,s}^{-}(\mathfrak{\zeta})dxd\beta_k(s)\notag\\ &\qquad+\mathcal{L}_{\lambda}ac{1}{2}\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}H^2(x,\mathfrak{\zeta})\partial_{\mathfrak{\zeta}}\varphi(x,\mathfrak{\zeta})d\mathfrak{\mathcal{V}}_{x,s}^{-}(\mathfrak{\zeta})dx ds-\mathfrak{m}(\partial_{\mathfrak{\zeta}}\varphi)([0,t)). {\text{e}}nd{align} {\text{e}}nd{remark} \begin{remark} We will use the following notation: If $f:X\times\mathbb{R}\to[0,1]$ is kinetic function, we define the conjugate function $\bar{f}$ of $f$ as $\bar{f}(x,t,\zeta)=1-f(x,t,\zeta)$. We also define $f^+$ by $f^+:=f$. We can take any of them in integral with respect to time or in a stochastic integral. {\text{e}}nd{remark} \noindent \textbf{Doubling of variables}: The proof of the contraction principle follows a line argument that suitably Kruzkov's method of doubling the variables to the stochastic case. 
In this context, we approximate $\|\big(u_1(t)-u_2(t)\big)^+\|_{L^1(\mathbb{T}^N)}$ by $\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\Upsilon(x-y)\kappa(\xi-\mathfrak{\zeta})f_1(x,t,\xi)\bar {f}_2(y,t,\mathfrak{\zeta})d\xi d\mathfrak{\zeta} dx dy$, where the test functions $\Upsilon$ and $\kappa$ form a suitable smooth approximation of the identity. The main idea of the proof is to analyse $\|u_1(t)-u_2(t)\|_{L^1(\mathbb{T}^N)}$ as a random variable and to conclude that $\mathbb{E}\|u_1(t)-u_2(t)\|_{L^1(\mathbb{T}^N)}$ is a non-increasing function of time. For the proof of the following proposition we follow lines similar to the proof of \cite[Proposition 3.1]{sylvain}; we give the details for the sake of completeness. \begin{proposition}[\textbf{Doubling of variables}]\label{th3.5} Let $(u_1(t))_{t\in[0,T]}$ and $(u_2(t))_{t\in[0,T]}$ be solutions to {\text{e}}qref{1.1} with initial data $u_{1,0}$ and $u_{2,0}$, respectively. Let us denote $f_1(t)=\mathbbm{1}_{u_1(t)\textgreater\,\xi}$ and $f_2(t)= \mathbbm{1}_{u_2(t)\,\textgreater\,\xi}$. Then, for all $ 0\le t\leq T$ and all non-negative test functions $\Upsilon\in{C}^\infty(\mathbb{T}^N)$, $\kappa\in{C_{c}^\infty(\mathbb{R})}$, we have \begin{align}\label{tech} \mathbb{E}\bigg[\int_{(\mathbb{T}^N)^2}&\int_{\mathbb{R}^2}\Upsilon(x-y)\kappa(\xi-\mathfrak{\zeta})f_1^{\pm}(x,t,\xi)\bar {f}_2^{\pm}(y,t,\mathfrak{\zeta})d\xi d\mathfrak{\zeta} dx dy\bigg]\notag\\ &\le\mathbb{E}\bigg[ \int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\Upsilon(x-y)\kappa(\xi-\mathfrak{\zeta}) f_{1,0} (x,\xi)\bar f_{2,0} (y,\mathfrak{\zeta})d\xi d\mathfrak{\zeta} dx dy+ \mathcal{E}_{\Upsilon}+ \mathcal{E}_{\kappa}+J\bigg], {\text{e}}nd{align} where \begin{align} \mathcal{E}_{\Upsilon}\notag=\int_0^t\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}f_1(x,s,\xi)\bar{f}_2(y,s,\mathfrak{\zeta})(F'(\xi)-F'(\mathfrak{\zeta}))&\kappa(\xi-\mathfrak{\zeta})d\xi d\mathfrak{\zeta}\cdot\nabla\Upsilon(x-y)dx dy ds,\notag {\text{e}}nd{align} \begin{align} \mathcal{E}_{\kappa}\notag=\mathcal{L}_{\lambda}ac{1}{2}\int_{(\mathbb{T}^N)^2}\Upsilon(x-y)\int_0^t\int_{\mathbb{R}^2}\kappa(\xi-\mathfrak{\zeta})\sum_{k\ge1}|h_k(x,\xi)-&h_k(y,\mathfrak{\zeta})|^2d\mathfrak{\mathcal{V}}_{x,s}^{1}\oplus\mathfrak{\mathcal{V}}_{y,s}^{2}(\xi,\mathfrak{\zeta})dx dy ds,\notag {\text{e}}nd{align} and \begin{align} J&=-\int_0^t\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}f_1 (x,s,\xi)\bar {f}_2(y,s,\mathfrak{\zeta})\kappa(\xi-\mathfrak{\zeta})A'(\xi)(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)_x^\alpha(\Upsilon(x-y))d\xi d\mathfrak{\zeta} dx dy ds\notag\\ &\qquad-\int_0^t\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}f_1 (x,s,\xi)\bar {f}_2(y,s,\mathfrak{\zeta})\kappa(\xi-\mathfrak{\zeta})A'(\mathfrak{\zeta})(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)_y^\alpha(\Upsilon(x-y))d\xi d\mathfrak{\zeta} dx dy ds\notag\\ &\qquad-\int_0^t\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}f_1(x,s,\xi)\partial_{\xi}\kappa(\xi-\mathfrak{\zeta})\Upsilon(x-y)d\mathfrak{{\text{e}}ta}_{2,2}(y,s,\mathfrak{\zeta})dxd\xi \notag\\ &\qquad+\int_0^t\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\bar{f}_2(y,s,\mathfrak{\zeta})\partial_{\mathfrak{\zeta}}\kappa(\xi-\mathfrak{\zeta})\Upsilon(x-y)d\mathfrak{{\text{e}}ta}_{1,2}(x,s,\xi)dy d\mathfrak{\zeta}.\notag {\text{e}}nd{align} {\text{e}}nd{proposition} \begin{remark} Here, we fix some notation for the kinetic measures corresponding to the kinetic solutions $(u_1(t))_{t\in[0,T]}$ and $(u_2(t))_{t\in[0,T]}$.
Suppose that $\mathfrak{m}_1$ and $\mathfrak{m}_2$ are kinetic measures for $(u_1(t))_{t\in[0,T]}$ and $(u_2(t))_{t\in[0,T]}$ respectively, satisfying, $\mathbb{P}$-almost surely, $\mathfrak{m}_1 \ge \mathfrak{{\text{e}}ta}_{1,2}$ and $\mathfrak{m}_2\ge \mathfrak{{\text{e}}ta}_{2,2}$, where $$\mathfrak{{\text{e}}ta}_{1,2}(x,t,\xi)=\int_{\mathbb{R}^N}|A(u_1(x+z))-A(\xi)|\mathbbm{1}_{\mbox{Conv}\{u_1(x,t),u_1(x+z)\}}(\xi)\mu(z)dz$$ and $$\mathfrak{{\text{e}}ta}_{2,2}(y,t,\mathfrak{\zeta})=\int_{\mathbb{R}^N}|A(u_2(y+z))-A(\mathfrak{\zeta})|\mathbbm{1}_{\mbox{Conv}\{u_2(y,t),u_2(y+z)\}}(\mathfrak{\zeta})\mu(z)dz.$$ These measures can be written as $\mathfrak{m}_1=\mathfrak{m}_{1,1}+\mathfrak{{\text{e}}ta}_{1,2}$ and $\mathfrak{m}_2=\mathfrak{m}_{2,1}+\mathfrak{{\text{e}}ta}_{2,2}$ for some nonnegative measures $\mathfrak{m}_{1,1}$ and $\mathfrak{m}_{2,1}$, respectively. {\text{e}}nd{remark} \begin{proof} Let us define $H_1^2(x,\xi)= \sum_{k\ge1}|h_k(x,\xi)|^2$ and $H_2^2(y,\mathfrak{\zeta})=\sum_{k\ge1}|h_k(y,\mathfrak{\zeta})|^2$. Let $\varphi_1 \in C_c^\infty(\mathbb{T}_x^N\times\mathbb{R}_{\xi})$ and $\varphi_2\in C_c^\infty(\mathbb{T}_y^N\times\mathbb{R}_{\mathfrak{\zeta}})$. By equation {\text{e}}qref{2.10} for $f_1=f_1^+$ we have $$\langle f_1^+(t),\varphi_1 \rangle = \langle \Lambda_1^*, \partial_\xi \varphi_1 \rangle([0,t])+\mathcal{M}_1(t)$$ with $$\mathcal{M}_1(t)=\sum_{k\ge1} \int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}} h_k(x,\xi)\varphi_1(x,\xi) d\mathfrak{\mathcal{V}}_{x,s}^1(\xi) dx d\beta_k (s)$$ and \begin{align*} \langle \Lambda_1^*, \varphi _1 \rangle ([0,t])& =\langle f_{1,0},\varphi_1 \rangle \delta_0([0,t])+\int_0^t \langle f_1, F'\cdot\nabla \varphi_1 \rangle ds-\int_0^t \langle f_1 , A'(\xi)(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)_x^\alpha[\varphi_1]\rangle ds\\ &\qquad+\mathcal{L}_{\lambda}ac{1}{2} \int_0^t \int_{\mathbb{T}^N}\int_{\mathbb{R}} \partial_{\xi} \varphi_1 H_1^2(x,\xi) d\mathfrak{\mathcal{V}}_{x,s}^1(\xi) dx ds -\mathfrak{m}_1(\partial_{\xi} \varphi_1)([0,t]). {\text{e}}nd{align*} Notice that $\mathfrak{m}_1(\partial_{\xi}\varphi_1)(\{0\})=0$ and the value of $\langle \Lambda_1^* ,\partial_{\xi} \varphi_1\rangle (\{0\})$ is $\langle f_{1,0}, \varphi_1 \rangle.$ Similarly $$\langle \bar{f}_2^+(t),\varphi_2 \rangle=\langle \bar{\Lambda}_2^*, \partial_{\mathfrak{\zeta}} \varphi_2 \rangle([0,t])+\bar{\mathcal{M}}_2(t)$$ with $$\bar{\mathcal{M}}_2(t)=\sum_{k\ge1} \int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}} h_k(y,\mathfrak{\zeta})\varphi_2(y,\mathfrak{\zeta}) d\mathfrak{\mathcal{V}}_{y,s}^2(\mathfrak{\zeta}) dy d\beta_k (s)$$ and \begin{align*} \langle \bar{\Lambda}_2^*, \varphi _2 \rangle ([0,t])& =\langle \bar{f}_{2,0},\varphi_2 \rangle \delta_0([0,t])+\int_0^t \langle\bar{ f}_2, F'\cdot\nabla \varphi_2 \rangle ds-\int_0^t \langle \bar{f}_2 , A'(\mathfrak{\zeta}) (-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)_y^\alpha[\varphi_2]\rangle ds\\ &\qquad-\mathcal{L}_{\lambda}ac{1}{2} \int_0^t \int_{\mathbb{T}^N}\int_{\mathbb{R}} \partial_{\mathfrak{\zeta}} \varphi_2 H_2^2 d\mathfrak{\mathcal{V}}_{y,s}^2(\mathfrak{\zeta}) dy ds +\mathfrak{m}_2(\partial_{\mathfrak{\zeta}} \varphi_2)([0,t]), {\text{e}}nd{align*} where $\langle \bar{\Lambda}_2^* ,\partial_{\mathfrak{\zeta}}\varphi_2 \rangle (\{0\})=\langle \bar{f}_{2,0}, \varphi_2 \rangle$.
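In the next step we combine these two equations by means of the It\^o product rule together with the classical integration by parts formula for functions of finite variation, which, in one standard form (see \cite[Chapter 0]{Rev}), reads as follows: if $a, b\colon[0,T]\to\mathbb{R}$ are right-continuous functions of finite variation, then for every $t\in[0,T]$, $$a(t)b(t)=a(0)b(0)+\int_{(0,t]}a(s-)\,db(s)+\int_{(0,t]}b(s)\,da(s).$$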
Let $\theta(x,\xi, y, \mathfrak{\zeta})= \varphi_1(x,\xi)\varphi_2(y,\mathfrak{\zeta}).$ Making use of It\^o formula for $\mathcal{M}_1(t)\bar{\mathcal{M}}_2(t)$, and integration by parts formula for functions of finite variation, $\langle \Lambda_1^* , \partial_{\xi} \varphi_1 \rangle\,\langle \bar{\Lambda}_2^*,\partial_{\mathfrak{\zeta}} \varphi_2 \rangle ([0,t])$, (see \cite[Chapter 0]{Rev}), we conclude that \begin{align*} \langle \Lambda_1^*, \partial_{\xi} \varphi_1 ([0,t])\rangle\,&\langle \bar{\Lambda}_2^* ,\partial_{\mathfrak{\zeta}}\varphi_2 \rangle ([0,t])=\langle \Lambda_1^* , \partial_{\xi} \varphi_1 \rangle (\{0\})\,\langle \bar{\Lambda}_2^*, \partial_{\mathfrak{\zeta}} \varphi_2 \rangle (\{0\})\\&+\int_{(0,t]} \langle \mathfrak{m}_1^* ,\partial_{\xi} \varphi_1 \rangle([0,s)) d \langle \bar{\Lambda}_2^*, \partial_{\mathfrak{\zeta}}\varphi_2 \rangle (s) +\int_{(0,t]}\langle \bar{\Lambda}_2^* , \partial_{\mathfrak{\zeta}} \varphi_2 ([0,s]) d\langle \Lambda_1^*, \partial_{\xi} \varphi_1 \rangle (s). {\text{e}}nd{align*} We also have the following identity \begin{align*} \langle \Lambda_1^* , \partial_{\xi} \varphi_1 \rangle ([0,t]) \bar{\mathcal{M}}_2(t)=\int_0^t \langle \Lambda_1^*, \partial_{\xi} \varphi_1\rangle ([0,s]) d\bar{\mathcal{M}}_2(s) + \int_0^t \bar{\mathcal{M}}_2(s) \langle \Lambda_1^* , \partial_{\xi} \varphi_1 \rangle (ds). {\text{e}}nd{align*} It is easy to obtain since $\bar{\mathcal{M}}_2$ is continuous and a similar formula for $\langle \bar{\Lambda}_2^* ,\partial_{\mathfrak{\zeta}}\varphi_2 \rangle {\mathcal{M}}_1(t)$. These identities give $$\langle f_1^+(t), \varphi_1 \rangle\,\langle \bar{f}_2^+(t),\varphi_2 \rangle =\langle \langle f_1^{+}(t)\,\bar{f}_2^{+}(t), \alpha \rangle \rangle$$ It implies that \begin{align}\label{un} \mathbb{E}\langle \langle f_1^+(t)\,\bar{f}_2^+(t), \theta \rangle\rangle&= \mathbb{E}\langle \langle f_{1,0} \bar{f}_{2,0}, \theta\rangle\rangle\notag\\ &+\mathbb{E}\int_0^t\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2} f_1 \bar{f}_2 (F'(\xi)\cdot\nabla_x + F'(\mathfrak{\zeta})\cdot\nabla_y)\theta d\xi d\mathfrak{\zeta} dx dy ds\notag\\ &-\mathbb{E}\int_0^t \int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2} f_1 \bar{f}_2(A'(\xi)(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)_x^\alpha+A'(\mathfrak{\zeta}) (-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)_y^\alpha) [\theta]\, d\xi d\mathfrak{\zeta} dx dy ds\notag\\ &+\mathcal{L}_{\lambda}ac{1}{2}\mathbb{E}\int_0^t\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\partial_{\xi} \theta \bar{f}_2(s) H_1^2\,d\mathfrak{\mathcal{V}}_{x,s}^1(\xi) d\mathfrak{\zeta} dx dy ds\notag\\ &-\mathcal{L}_{\lambda}ac{1}{2} \mathbb{E} \int_0^t \int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\partial_{\mathfrak{\zeta}} \theta f_1(s) H_2^2\, d\mathfrak{\mathcal{V}}_{y,s}^2(\mathfrak{\zeta})\,d\xi dy dx ds\notag\\ &-\mathbb{E}\int_0^t \int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2} H_{1,2}\theta\,d\mathfrak{\mathcal{V}}_{x,s}^1(\xi)\,d\mathfrak{\mathcal{V}}_{y,s}^2(\mathfrak{\zeta}) dx dy ds\notag\\ &-\mathbb{E}\int_{(0,t]}\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2} \bar{ f}_2^{+}(s)\partial_{\xi} \theta d\mathfrak{m}_1(x,s,\xi) d\mathfrak{\zeta} dy\notag\\ &+\mathbb{E}\int_{(0,t]}\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}f_1^-(s) \partial_{\mathfrak{\zeta}} \theta d\mathfrak{m}_2(y,s,\mathfrak{\zeta}) d\xi dx {\text{e}}nd{align} where $H_{1,2}(x,y;\xi,\mathfrak{\zeta}):=\sum_{k\ge1 } h_k(x,\xi)h_k(y,\mathfrak{\zeta})$ and $\langle\langle\cdot,\cdot \rangle\rangle$ refer to the duality distribution over 
$\mathbb{T}_x^N\times\mathbb{R}_{\xi}\times\mathbb{T}_y^N\times\mathbb{R}_{\mathfrak{\zeta}}$. By a density argument of approximation, we can easily prove that {\text{e}}qref{un} is valid for any test function $\theta \in C_c^\infty(\mathbb{T}_x^N\times\mathbb{R}_{\xi}\times\mathbb{T}_y^N\times\mathbb{R}_{\mathfrak{\zeta}})$. The assumption that $\theta$ is compactly supported can be relaxed by the help of the condition at infinity on $\mathfrak{m}_j$ and $\mathfrak{\mathcal{V}}^j$, $j=1,2$. By help of truncation and approximation argument for $\theta$, we can prove that {\text{e}}qref{un} is also valid if $\theta \in C_b^\infty (\mathbb{T}_x^N\times\mathbb{R}_{\xi}\times\mathbb{T}_y^N\times\mathbb{R}_{\mathfrak{\zeta}})$ is compactly supported in a neighbourhood of the set $\big\{(x,\mathfrak{\zeta},x,\mathfrak{\zeta}); x\in\mathbb{T}^N, \mathfrak{\zeta} \in\mathbb{R}\big\}$, then we can take $\theta(x,\xi, y,\mathfrak{\zeta})=\Upsilon(x-y) \kappa(\xi-\mathfrak{\zeta})$ where $\Upsilon\in C^{\infty}(\mathbb{T}^N), \kappa\in C^{\infty}_c(\mathbb{R})$. the following identities, $(\nabla_x+\nabla_y)\theta=0,\,\,\,\,\, (\partial_{\xi}+\partial_\mathfrak{\zeta})\theta=0,$ gives that \begin{align*} &\mathbbm{E}\bigg[\int_{(\mathbbm{T}^N)^2}\int_{\mathbb{R}^2}\Upsilon(x-y)\kappa(\xi-\mathfrak{\zeta})f_1^{+}(x,s,\xi)\bar{f}_2^{+}(y,t,\mathfrak{\zeta})d\xi d\mathfrak{\zeta} dx dy\bigg]\\ &=\mathbb{E}\bigg[\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\Upsilon(x-y)\kappa(\xi-\mathfrak{\zeta}) f_{1,0}(x,\xi)\bar{f}_{2,0}(y,\mathfrak{\zeta}) d\xi d\mathfrak{\zeta} dx dy+J+K+\mathcal{E}_{\Upsilon}+\mathcal{E}_{\kappa}\bigg], {\text{e}}nd{align*} where \begin{align*} K&=\int_{(0,t]}\int_{\mathbb{T}^N}\int_{\mathbb{R}}f_1^{-}(x,s,\xi)\partial_\mathfrak{\zeta}\theta\,\,\, dm_{2,1}(y,s,\mathfrak{\zeta})-\int_{(0,t]}\int_{\mathbb{T}^N}\int_{\mathbb{R}}\bar{f}_2^{+}(y,s,\mathfrak{\zeta})\,\,\partial_\xi \theta\,\,\, d\mathfrak{m}_{1,1}(x,s,\xi)\\ &=-\int_{(0,t]}\int_{\mathbb{T}^N}\int_{\mathbb{R}}f^{-}_1(x,s,\xi)\partial_{\xi} \theta d\mathfrak{m}_{2,1}(y,s,\mathfrak{\zeta})d\xi dx-\int_{(0,t]}\int_{\mathbb{T}^N}\int_{\mathbb{R}}f_2(y,s,\mathfrak{\zeta})\partial_{\mathfrak{\zeta}} \theta d\mathfrak{m}_{1,1}(x,s,\xi)dy d\mathfrak{\zeta}\\ &=-\int_{(0,t]}\int_{\mathbb{T}^N}\int_{\mathbb{R}}\theta d\mathfrak{\mathcal{V}}_{x,s}^{1,-}(\xi) d\mathfrak{m}_{2,1}(y,s,\mathfrak{\zeta}) dx-\int_{(0,t]}\int_{\mathbb{T}^N}\int_{\mathbb{R}}\theta d\mathfrak{\mathcal{V}}_{y,s}^{2,+}(\mathfrak{\zeta}) d\mathfrak{m}_{1,1}(x,s,\xi)dy\\ &\le\,0. {\text{e}}nd{align*} Consequently we deduce the following estimate, $\mathbb{P}$-almost surely, for all $t\in[0,T]$ \begin{align} \mathbb{E}\bigg[\int_{(\mathbb{T}^N)^2}\int_{(\mathbb{R})^2}&\Upsilon(x-y)\kappa(\xi-\mathfrak{\zeta})f_1^{\pm}(x,t,\xi)\bar {f}_2^{\pm}(y,t,\mathfrak{\zeta})d\xi d\mathfrak{\zeta} dx dy\bigg]\notag\\ &\qquad\leq\mathbb{E}\bigg[\int_{(\mathbb{T}^N)}\int_{(\mathbb{R}^2)}\Upsilon(x-y)\kappa(\xi-\mathfrak{\zeta})f_{1,0}(x,\xi)\bar f_{2,0} (y,\mathfrak{\zeta})d\xi d\mathfrak{\zeta} dx dy+ \mathcal{E}_{\Upsilon}+\mathcal{E}_{\kappa}+J\bigg]. 
{\text{e}}nd{align} {\text{e}}nd{proof} \begin{remark} We can easily conclude that, if $f_1^{\pm}=f_2^{\pm}$ (that is, if the two solutions coincide), then inequality {\text{e}}qref{tech} holds pathwise, i.e., \begin{align}\label{app inequality1} \begin{aligned} \int_{(\mathbb{T}^N)^2}&\int_{\mathbb{R}^2}\Upsilon(x-y)\kappa(\xi-\mathfrak{\zeta})f_1^{\pm}(x,t,\xi)\bar {f}_2^{\pm}(y,t,\mathfrak{\zeta})d\xi d\mathfrak{\zeta} dx dy\\ &\le\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\Upsilon(x-y)\kappa(\xi-\mathfrak{\zeta}) f_{1,0} (x,\xi)\bar f_{2,0} (y,\mathfrak{\zeta})d\xi d\mathfrak{\zeta} dx dy+ \mathcal{E}_{\Upsilon}+ \mathcal{E}_{\kappa}+J, {\text{e}}nd{aligned} {\text{e}}nd{align} for all $t\in[0,T]$, $\mathbb{P}$-almost surely, where $\mathcal{E}_{\Upsilon}, \mathcal{E}_{\kappa},$ and $J$ are introduced as in Proposition \ref{th3.5}. For its proof, there is no need to take expectations after the use of the It\^o and integration by parts formulas in the proof of Proposition \ref{th3.5}, since, after the approximation of the test functions $\Upsilon(x-y)$ and $\kappa(\xi-\mathfrak{\zeta})$, the corresponding martingale terms cancel each other. {\text{e}}nd{remark} \begin{thm}[\textbf{Contraction principle}]\label{th3.6}\label{comparison} Let $(u(t))_{t\in[0,T]}$ be a kinetic solution to {\text{e}}qref{1.1}. Then there exists an $L^1(\mathbb{T}^N)$-valued process $(u^{-}(t))_{t\in[0,T]}$ such that, $\mathbb{P}$-almost surely, for all $t\in[0,T]$, $f^{-}(t)=\mathbbm{1}_{u^{-}(t)\textgreater\xi}$. Moreover, if $(u_1(t))_{t\in[0,T]}$,\,\,$(u_2(t))_{t\in[0,T]}$\,are kinetic solutions to {\text{e}}qref{1.1} with initial data $u_{1,0} $ and $u_{2,0}$ respectively, then we have for all $t\in[0,T]$, \begin{align}\label{contraction}\mathbb{E}\left\Vert u_1(t)-u_2(t)\right\Vert_{L^{1}(\mathbb{T}^N)}\le\mathbb{E}\left\Vert u_{1,0}-u_{2,0}\right\Vert_{L^{1}(\mathbb{T}^N)}. {\text{e}}nd{align} {\text{e}}nd{thm} \begin{proof} We first prove the contraction principle {\text{e}}qref{contraction} in several steps. \noindent \textbf{Step 1:} Let $(\Upsilon_{\varepsilon})$ and $(\kappa_{\delta})$ be approximations to the identity on $\mathbb{T}^N$ and $\mathbb{R}$, respectively; that is, let $\Upsilon\in C^\infty(\mathbb{T}^N)$ and $\kappa\in C_c^\infty (\mathbb{R})$ be symmetric nonnegative functions such that $\int_{\mathbb{T}^N} \Upsilon(x) dx = 1$, $\int_{\mathbb{R}} \kappa(\xi) d\xi =1$ and $\mbox{supp}\,\kappa\subset(-1,1)$, and set $\Upsilon_{\varepsilon}(x)=\mathcal{L}_{\lambda}ac{1}{\varepsilon^N}\Upsilon(\mathcal{L}_{\lambda}ac{x}{\varepsilon})$ and $\kappa_{\delta}(\xi)=\mathcal{L}_{\lambda}ac{1}{\delta}\kappa(\mathcal{L}_{\lambda}ac{\xi}{\delta})$. Let us take $\kappa=\kappa_{\delta}$ and $\Upsilon= \Upsilon_\varepsilon$ in inequality {\text{e}}qref{tech}. Then, following \cite[Equation 3.16]{sylvain}, we conclude that for all $t\in[0,T]$ \begin{align} &\mathbb{E}\int_{\mathbb{T}^N}\int_{\mathbb{R}}f_1(x,t,\xi)\bar{f}_2(x,t,\xi)d\xi dx=\mathbb{E}\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\Upsilon_{\varepsilon}(x-y)\kappa_{\delta}(\xi-\mathfrak{\zeta})f_1(x,t,\xi)\bar{f}_2(y,t,\mathfrak{\zeta})d\xi d\mathfrak{\zeta} dxdy+\mathfrak{{\text{e}}ta}_{t}(\varepsilon,\delta) {\text{e}}nd{align} where $ \lim_{\varepsilon,\delta\to 0}\mathfrak{{\text{e}}ta}_t(\varepsilon,\delta)=0.$\\ \textbf{Step 2:} We follow \cite{deb} to estimate $\mathcal{E}_\Upsilon$ and $\mathcal{E}_\kappa$.
We have $\mathbb{P}$-almost surely, for all $t\in[0,T]$ \begin{align} |\mathcal{E}_{\kappa}|&\le C_{\Psi}\,t\,(\delta^{-1}\varepsilon^2 + g(\delta))\label{estimate 1}\\ |\mathcal{E}_\Upsilon|&\leq C(\omega) \varepsilon^{-1}\delta,\label{estimate 2} {\text{e}}nd{align} \textbf{Step 3:} Let $\mathfrak{{\text{e}}ta}_\delta(\xi)=\delta\,\,\mathfrak{{\text{e}}ta}_(\mathcal{L}_{\lambda}ac{x}{\delta})$, be convex approximation of absolute value function, where $\mathfrak{{\text{e}}ta}$ be a $C^\infty(R)$ function satisfying, $\mathfrak{{\text{e}}ta}(0)=0$,\,\,\,\,\, $\mathfrak{{\text{e}}ta}(r)=\mathfrak{{\text{e}}ta}(-r)$,\,\,\,\, $\mathfrak{{\text{e}}ta}'(r)=-\mathfrak{{\text{e}}ta}'(-r)$, $\mathfrak{{\text{e}}ta}''_\delta=\kappa_\delta,$ and \[\mathfrak{{\text{e}}ta}'(r)=\begin{cases} -1, & r\, \le\,-1\\ \in[-1,1] ,& |r|\,\textless\,1\\ 1, & r\,\ge\,1 {\text{e}}nd{cases} \] In order to estimate $J$, we proceed as follow \begin{align*} J&=-\int_0^t\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}f_1(x,s,\xi)\bar{ f}_2(y,s,\mathfrak{\zeta})\kappa_{\delta}(\xi-\mathfrak{\zeta})A'(\xi)(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)_x^\alpha(\Upsilon_{\varepsilon}(x-y)))d\xi d\mathfrak{\zeta} dx dy \notag\\ &\quad-\int_0^t\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}f_1(x,s,\xi)\bar{f}_2(y,s,\mathfrak{\zeta})\kappa_{\delta}(\xi-\mathfrak{\zeta})A'(\mathfrak{\zeta})(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)_y^\alpha(\Upsilon_{\varepsilon}(x-y)))d\xi d\mathfrak{\zeta} dx dy ds\notag\\ &\quad+\int_0^t\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\partial_\mathfrak{\zeta}\kappa_{\delta}(\xi-\mathfrak{\zeta})\Upsilon_{\varepsilon}(x-y)f_1(x,s,\xi)d\mathfrak{{\text{e}}ta}_{2}(y,s,\mathfrak{\zeta})dx d\xi \notag\\ &\quad-\int_0^t\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\partial_\xi\kappa_{\delta}(\xi-\mathfrak{\zeta})\Upsilon_{\varepsilon}(x-y)\bar{f}_2(y,s,\mathfrak{\zeta})d\mathfrak{{\text{e}}ta}_{1}(x,s,\xi)dy d\mathfrak{\zeta}\\ &:=J_1+J_2, {\text{e}}nd{align*} where \noindent \begin{align*} J_1&=-\int_0^t\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}f_1(x,s,\xi)\bar{ f}_2(y,s,\mathfrak{\zeta})\kappa_{\delta}(\xi-\mathfrak{\zeta})A'(\xi)(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)_x^\alpha(\Upsilon_{\varepsilon}(x-y)))d\xi d\mathfrak{\zeta} dx dy \notag\\ &\quad-\int_0^t\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}f_1(x,s,\xi)\bar{ f}_2(y,s,\mathfrak{\zeta})\kappa_{\delta}(\xi-\mathfrak{\zeta})A'(\mathfrak{\zeta})(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)_y^\alpha(\Upsilon_{\varepsilon}(x-y)))d\xi d\mathfrak{\zeta} dx dy ds,\notag\\ J_2&=\int_0^t\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\partial_\mathfrak{\zeta}\kappa_{\delta}(\xi-\mathfrak{\zeta})\Upsilon_{\varepsilon}(x-y)f_1(x,s,\xi)d\mathfrak{{\text{e}}ta}_{2}(y,s,\mathfrak{\zeta})dx d\xi\\ &\quad-\int_0^t\int_{(\mathbb{R}^N)^2}\int_{\mathbb{R}^2}\partial_\xi\kappa_{\delta}(\xi-\mathfrak{\zeta})\Upsilon_{\varepsilon}(x-y)\bar{f}_2(y,s,\mathfrak{\zeta})d\mathfrak{{\text{e}}ta}_{1}(x,s,\xi)dy d\mathfrak{\zeta}. 
{\text{e}}nd{align*} Now we will try to write $J_2$ in terms of $-J_1$ and plus some small term as follows: \begin{align*} &\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{\mathbb{R}^2}\partial_\xi\kappa_{\delta}(\xi-\mathfrak{\zeta})\Upsilon_{\varepsilon}(x-y)\bar{f}_2(y,s,\mathfrak{\zeta})d\mathfrak{{\text{e}}ta}_{1}(x,s,\xi)dy d\mathfrak{\zeta}\\ &\quad=\int_0^t\int\limits_{({\mathbb{T}^N})^2}\int\limits_{\mathbb{R}^2}\int\limits_{\mathbb{R}^N}|\tau_z A(u_1(x,s))-A(\xi)|\mathbbm{1}_{Con\{u_1(x,s),\tau_z u_1(x,s)\}}(\xi) \partial_{\xi} \kappa_\delta(\xi-\mathfrak{\zeta}) \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\Upsilon_{\varepsilon}(x-y)\bar{f}_2(y,s,\mathfrak{\zeta}) \mu(z) dz d\xi d\mathfrak{\zeta} dx dy ds\\ &\quad=\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{\mathbb{R}^{N+1}} \bigg\{\int\limits_{u_1(x,s)}^{\tau_z u_1(x,s)}(\tau_z A (u_1(x,s))-A(\xi)) \partial_{\xi} \kappa_\delta(\xi-\mathfrak{\zeta}) d\xi \\ &\qquad+\int\limits_{\tau_z u_1(x,s)}^{u_1(x,s)}(A(\xi)-\tau_zA(u_1(x,s)))\partial_{\xi} \kappa_\delta(\xi-\mathfrak{\zeta}) d\xi \bigg\}\Upsilon_\varepsilon(x-y)\bar{f}_2(y,s,\mathfrak{\zeta}) \mu(z) dz d\mathfrak{\zeta} dx dy ds\\ &\quad=\int\limits_ 0^t\int\limits_{\mathbb{R}^{N+1}}\int\limits_{\mathbb{T}^N}\Bigg\{ \int\limits_{\mathbb{T}^N\,\cap\,\{u_1(x,s)\,\le\,\tau_z u_1(x,s)\} }\bigg\{\big[(\tau_z A (u_1(x,s))- A(\xi))\kappa_\delta(\xi-\mathfrak{\zeta})\big]_{u_1(x,s)}^{\tau_z u_1(x,s)}\\&\qquad+\int\limits_{u_1(x,s)}^{\tau_z u_1(x,s)}\kappa_\delta(\xi-\mathfrak{\zeta})A'(\xi) d\xi \bigg\}+\int\limits_{\mathbb{T}^N\cap\{\tau_zu_1(x,s)\le u_1(x,s)\}}\bigg\{\big[A(\xi)-\tau_zA(u_1(x,s)) \kappa_\delta(\xi-\mathfrak{\zeta})\big]_{\tau_z u_1(x,s)}^{u_1(x,s)}\\&\qquad-\int\limits_{\tau_z u_1(x,s)}^{u_1(x,s)} \kappa_\delta (\xi-\mathfrak{\zeta})A'(\xi) d\xi \bigg\}\Bigg\} \Upsilon_\varepsilon(x-y)\bar{f}_2(y,s,\mathfrak{\zeta}) \mu(z) dz d\mathfrak{\zeta} dx dy ds\\ &\quad=-\int\limits_0^t\int\limits_{({\mathbb{T}^N})^2}\int\limits_{\mathbb{R}^{N+1}}\bigg\{(\tau_z A(u_1(x,s))- A(u_1(x,s)))\kappa_{\delta}(u_1(x,s)-\mathfrak{\zeta})+\int\limits_{u_1(x,s)}^{\tau_z u_1(x,s)}\kappa_{\delta}(\xi-\mathfrak{\zeta})A'(\xi) d\xi \bigg\}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\Upsilon_\varepsilon(x-y)\bar{f}_2(y,s,\mathfrak{\zeta}) \mu(z) dz d\mathfrak{\zeta} dx dy ds\\ &\quad=-\int\limits_0^t\int_{(\mathbb{T}^N)^2}\int\limits_{\mathbb{R}^{N+1}}(\tau_z A(u_1(x,s))-A(u_1(x,s)))\kappa_{\delta}(u_1(x,s)-\mathfrak{\zeta})\Upsilon_\varepsilon(x-y)\bar{ f}_2(y,s,\mathfrak{\zeta})\mu(z) dz d\mathfrak{\zeta} dx dy ds\\ &\quad\quad+\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{\mathbb{R}^{N+1}}\int\limits_{\mathbb{R}}\bigg\{\int\limits_{-\infty}^{\tau_z u_1(x,s)}\kappa_{\delta}(\xi-\mathfrak{\zeta})A'(\xi)d\xi -\int_{-\infty}^{u_1(x,s)}\kappa_{\delta}(\xi-\mathfrak{\zeta})A'(\xi) d\xi \bigg\}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\Upsilon_\varepsilon(x-y)\bar{ f}_2(y,s,\mathfrak{\zeta}) \mu(z) dz d\mathfrak{\zeta} dx dy ds\\ &\quad=-\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{\mathbb{R}^N}\int\limits_{\mathbb{R}}(\tau_z A(u_1(x,s))-A(u_1(x,s)))\kappa_{\delta}(u_1(x,s)-\mathfrak{\zeta})\Upsilon_\varepsilon(x-y)\bar{ f}_2(y,s,\mathfrak{\zeta})\mu(z) dz d\mathfrak{\zeta} dx dy ds\\ &\qquad+\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{\mathbb{R}^N}\int\limits_{\mathbb{R}}\bigg(\int_{-\infty}^{ 
u_1(x,s)}\kappa_{\delta}(\xi-\mathfrak{\zeta})A'(\xi)d\xi\bigg) (\tau_z(\Upsilon_\varepsilon(x-y)-\Upsilon_\varepsilon(x-y)\bar{ f}_2(y,s,\mathfrak{\zeta}) \mu(z) dz d\mathfrak{\zeta} dx dy ds\\ &\quad=-\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{\mathbb{R}^N}\int\limits_{\mathbb{R}}(\tau_z A(u_1(x,s))-A(u_1(x,s)))\kappa_{\delta}(u_1(x,s)-\mathfrak{\zeta})\Upsilon_\varepsilon(x-y)\bar{ f}_2(y,s,\mathfrak{\zeta})\mu(z) dz d\mathfrak{\zeta} dx dy ds\\ &\qquad-\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{\mathbb{R}^2} f_1(x,s,\xi)\kappa_{\delta}(\xi-\mathfrak{\zeta})A'(\xi) (-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)_x^\alpha(\Upsilon_\varepsilon)(x-y) \bar{ f}_2(y,s,\mathfrak{\zeta})\mu(z) dz d\xi d\mathfrak{\zeta} dx dy ds. {\text{e}}nd{align*} Similarly, we can compute for remaining part of $J_2$ as follows: \begin{align*} &\int\limits_0^t\int\limits_{({\mathbb{T}^N})^2}\int\limits_{\mathbb{R}^2} \partial_{\mathfrak{\zeta}} \kappa_{\delta}(\xi-\mathfrak{\zeta}) \Upsilon_\varepsilon(x-y)f_1(x,s,\xi) d\mathfrak{{\text{e}}ta}_{2}(y,s,\mathfrak{\zeta}) dx d\xi\\ &\qquad=-\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{\mathbb{R}^N}\int\limits_{\mathbb{R}}(\tau_z A(u_2(y,s))-A(u_2(y,s)))\kappa_{\delta}(\xi-u_2(x,s))\Upsilon_\varepsilon(x-y) f_1(x,s,\xi)\mu(z) dz d\xi dx dy ds\\ &\qquad\quad+\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{\mathbb{R}^2} f_1(x,s,\xi)\kappa_{\delta}(\xi-\mathfrak{\zeta}) (-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)_x^\alpha(\Upsilon_\varepsilon(x-y)) \bar{ f}_2(y,s,\mathfrak{\zeta}) \mu(z) dz d\xi d\mathfrak{\zeta} dx dy ds. {\text{e}}nd{align*} \begin{align*} &J_2=-\int_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}^N}\big(\tau_z A(u_2(y,s))-A(u_2(y,s)))\big)(\kappa_{\delta} (\xi-u_2(y,s))\Upsilon_{\varepsilon}(x-y)) f_1(x,s,\xi)\mu(z)dz dx dy d\xi ds\notag\\ &\qquad\qquad+\int_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{\mathbb{R}^2}\bar{f}_2(y,s,\mathfrak{\zeta}) \kappa_{\delta}(\xi-\mathfrak{\zeta})(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)_x^\alpha(\Upsilon_{\varepsilon}(x-y))f_1(x,s,\xi)\mu(z)dz dx dy d\mathfrak{\zeta} d\xi ds \\&\qquad\qquad+\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}^N}\big(\tau_z A(u_1(x,t))-A(u_1(x,s)))\big)\kappa_{\delta} (u_1(x,s)-\mathfrak{\zeta})\Upsilon_{\varepsilon}(x-y)\bar{f}_2(y,t,\mathfrak{\zeta})\mu(z)dz dx dy d\mathfrak{\zeta}\notag\\ &\qquad\qquad+\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}^N}f_1(x,s,\xi)\kappa_{\delta}(\xi-\mathfrak{\zeta})A'(\mathfrak{\zeta}) (-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)_y^\alpha(\Upsilon_\varepsilon(x-y))\bar{f}_2(y,s,\mathfrak{\zeta})\mu(z)dz dx dy d\mathfrak{\zeta} d\xi ds \\ &:=-J_1+I_1 {\text{e}}nd{align*} where \begin{align*} I_1&=-\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}^N}\big(\tau_z A(u_2(y,s))-A(u_2(y,s)))\big)(\kappa_{\delta} (\xi-u_2(y,s))\Upsilon_{\varepsilon}(x-y)) f_1(x,s,\xi)\mu(z)dz dx dy d\xi ds\notag\\ &+\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}^N}\big(\tau_z A(u_1(x,t))-A(u_1(x,s)))\big)\kappa_{\delta} (u_1(x,s)-\mathfrak{\zeta})\Upsilon_{\varepsilon}(x-y)\bar{f}_2(y,t,\mathfrak{\zeta})\mu(z)dz dx dy d\mathfrak{\zeta} ds\notag\\ 
&=\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{\mathbb{R}}\int_{\mathbb{R}^N}(\tau_z A(u_1(x,s))-A(u_1(x,s)))\kappa_{\delta}(u_1(x,s)-\mathfrak{\zeta})\Upsilon_{\varepsilon}(x-y)\bar{f}_2(y,s,\mathfrak{\zeta})\mu(z)dz d\mathfrak{\zeta} dx dy ds\notag\\ &-\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}^N}(\tau_z A(u_2(y,s))-A(u_2(y,s)))\kappa_{\delta}(\xi-u_2(y,s))\Upsilon_{\varepsilon}(x-y f_1(x,s,\xi)\mu(z) dz d\xi dx dy ds\notag\\ &=\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{\mathbb{R}^N}(\int\limits_{u_2(y,s)}^{+\infty}\kappa_{\delta}(u_1(x,s)-\mathfrak{\zeta})d\mathfrak{\zeta})(\tau_z A(u_1(x,s))-A(u_1(x,s)))\Upsilon_{\varepsilon}(x-y)\mu(z)dz dx dy ds\notag\\ &-\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{\mathbb{R}^N}(\int\limits_{-\infty}^{u_1(x,s)}\kappa_{\delta}(\xi-u_2(y,s))d\xi)(\tau_z A(u_2(y,s))-A(u_2(y,s)))\Upsilon_{\varepsilon}(x-y)\mu(z) dz dx dy ds\notag\\ &=\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{\mathbb{R}^N}\mathfrak{{\text{e}}ta}_{\delta}'(u_1(x,s)-u_2(y,s))[(\tau_z(A(u_1(x,s))\\ &\qquad\qquad\qquad\qquad\qquad\qquad-A(u_2(y,s)))-(A(u_1(x,s))-A(u_2(y,s)))]\Upsilon_{\varepsilon}(x-y)\mu(z)dz dx dy ds\notag\\ &=:I_1^r+I_r^1 {\text{e}}nd{align*} where \begin{align*} I_1^r&=\int_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{|z|\textgreater\,r}\mathfrak{{\text{e}}ta}_{\delta}'(u_1(x,s)-u_2(y,s))[(\tau_z(A(u_1(x,s))\\ &\qquad\qquad\qquad\qquad\qquad-A(u_2(y,s)))-(A(u_1(x,s))-A(u_2(y,s)))]\Upsilon_{\varepsilon}(x-y)\mu(z)dz dx dy ds,\\ I_r^1&=\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{|z|\le\,r}\mathfrak{{\text{e}}ta}_{\delta}'(u_1(x,s)-u_2(y,s))[(\tau_z(A(u_1(x,s))\\ &\qquad\qquad\qquad\qquad\qquad\qquad-A(u_2(y,s)))-(A(u_1(x,s))-A(u_2(y,s)))]\Upsilon_{\varepsilon}(x-y)\mu(z)dz dx dy ds,\\ {\text{e}}nd{align*} We will estimate $I_1^r$ by the help of the Lipschitz continuity property of nonlinear function $A$ as follows: \begin{align*} I_1^r&=-\mathcal{L}_{\lambda}ac{1}{2}\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{|z|\textgreater\,r}\big[\mathfrak{{\text{e}}ta}_\delta'(\tau_z(u_1(x,s)-u_2(y,s)))-\mathfrak{{\text{e}}ta}_\delta'(u_1(x,s)-u_2(y,s))\big]\\ &\qquad\qquad\times\big[\tau_z(A(u_1(x,s))-A(u_2(y,s)))-(A(u_1(x,s))-A(u_2(y,s)))\big]\Upsilon_\varepsilon(x-y)\mu(z)dz dx dy ds. {\text{e}}nd{align*}We observe that \begin{align} H_1&=\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{|z|\textgreater\,r}\mathfrak{{\text{e}}ta}_\delta'(u_1(x,s)-u_2(y,s))[(\tau_z(A(u_1(x,s))-A(u_2(y,s)))\notag\\ &\qquad\qquad\qquad-sgn(u_1(x,s)-u_2(y,s))(A(u_1(x,s))-A(u_2(y,s)))]\Upsilon_{\varepsilon}(x-y)\mu(z)dz dx dy\notag\\ &\le\,\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{|z|\,\textgreater\,r}\bigg(|\tau_z(A(u_1(x,s))-A(u_2(y,s)))|-|A(u_1(x,s))-A(u_2(y,s))|\bigg)\Upsilon_\varepsilon(x-y)\mu(z)dz dx dy ds\notag\\ &=0,\notag\\ {\text{e}}nd{align} With the help of above inequality we conclude that \begin{align} H_2&=I_1^r-H_1\notag\\ &=\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{|z|\le\,r}[sgn(u_1(x,s)-u_2(y,s))-\mathfrak{{\text{e}}ta}_{\delta}'(u_1(x,s)-u_2(y,s))](A(u_1(x,s))-A(u_2(y,s))\Upsilon_{\varepsilon}(x-y)\notag\\&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\mu(z)dz dx dy ds\notag\\ &\le\,C(\omega)\,r^{-2\alpha}\,\delta.\label{estimate 3} {\text{e}}nd{align} We will estimate $I_r^r$ by the help of non-decreasing property of nonlinear function $A$. 
Since $A$ is non decreasing Lipschitz continuous function, so we have the following inequality, for $a, b, k \in \mathbb{R}$, \begin{align*} \mathfrak{{\text{e}}ta}_{\delta}'(b-k)[A(a)-A(b)]\,\le\,\int_{k}^a\mathfrak{{\text{e}}ta}_{\delta}'(\sigma-k)A'(\sigma)d\sigma-\int_{k}^b\mathfrak{{\text{e}}ta}_{\delta}'(\sigma-k) A'(\sigma) d\sigma. {\text{e}}nd{align*} By using above inequality, we estimate $I_r^1$ as follows: \begin{align*} &I_r^1\\&\le\,\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{|z|\le\,r}\Bigg\{\int\limits_{u_2(y,s)}^{u_1(x+z)}\mathfrak{{\text{e}}ta}_\delta'(\sigma-u_2(y,s))A'(\sigma) d\sigma -\int\limits_{u_2(y)}^{u_1(x)}\mathfrak{{\text{e}}ta}_\delta' (\sigma -u_2(y,s))A'(\sigma) d\sigma\Bigg\}\Upsilon_\varepsilon(x-y)\mu(z) dz dx dy \\&+\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{|z|\le\,r}\bigg\{ \int\limits_{u_1(x)}^{u_2(y+z)} \mathfrak{{\text{e}}ta}_\delta'(\sigma-u_1(x,s))A'(\sigma)d\sigma-\int\limits_{u_1(x,s)}^{u_2(y,s)}\mathfrak{{\text{e}}ta}_\delta'(\sigma-u_1(x,s))A'(\sigma) d\sigma\bigg\}\Upsilon_\varepsilon(x-y)\mu(z)dz dx dy \\ &:=L_1+L_2 {\text{e}}nd{align*} where \begin{align} L_1&=\int\limits_0^t\int_{(\mathbb{T}^N)^2}\int\limits_{|z|\le\,r}\bigg(\int\limits_{u_2(y,s)}^{u_1(x,s)}\mathfrak{{\text{e}}ta}_\delta'(\sigma-u_2(y,s))A'(\sigma)d\sigma\bigg)\bigg(\Upsilon_\varepsilon(x+z-y)-\Upsilon_\varepsilon(x-y)\bigg)\mu(z)dz dx dy ds\notag\\ &\,\le\, \int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{|z|\le\,r}|u_1(x,s)-u_2(y,s)|\int\limits_0^1(1-\tau)\,D^2\,(\Upsilon_{\varepsilon}(x-y+\tau (z))) z^2d\tau|\mu(z)dz dx dy ds\notag\\ &\le\,C(\omega)\,\big(\mathcal{L}_{\lambda}ac{r^{2-2\alpha}}{\varepsilon^2}\big),\label{estimate 4}\\ L_2&=\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int_{|z|\le\,r}\bigg(\int_{u_1(x,s)}^{u_2(y,s)}\mathfrak{{\text{e}}ta}_\delta'(\sigma-u_1(x,s))A'(\sigma)d\sigma\bigg)\big(\Upsilon_\varepsilon(x-y+z)-\Upsilon_{\varepsilon}(x)\big)\mu(z)dz dy dx ds\notag\\ &\le\,C(\omega)\int\limits_0^t\int\limits_{(\mathbb{T}^N)^2}\int\limits_{|z|\le\,r}|u_1(x,s)-u_2(y,s)|\,|\int\limits_0^1(1-\tau)D^2(\Upsilon_\varepsilon(x-y+\tau z)) z^2 d\tau\,|\mu(z)dz dx dy ds\notag\\ &\le\,C(\omega)\,\mathcal{L}_{\lambda}ac{r^{2-2\alpha}}{\varepsilon^2}.\label{estimate 5} {\text{e}}nd{align} \noindent \textbf{Step 4:} As a consequence of previous estimates, we deduce for all $t\in[0,T]$, \begin{align}\label{final1} \mathbb{E}\int_{\mathbb{T}^N}\int_{\mathbb{R}}f_{1}(x,t,\xi)f_2(x,t,\xi)d\xi dx&\le\mathbb{E}\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\Upsilon_{\varepsilon}(x-y)\kappa_{\delta}(\xi-\mathfrak{\zeta})f_{1,0}\bar{f}_{2,0}(y,\mathfrak{\zeta})d\xi d\mathfrak{\zeta} dx dy\notag\\ &\qquad +C_T\big(\delta\,\varepsilon^{-1}+ \varepsilon^2\,\delta^{-1}+g(\delta) + r^{-2\alpha}\delta+ r^{2-2\alpha}\varepsilon^{-2}+\mathfrak{{\text{e}}ta}_t(\varepsilon,\delta)\big). {\text{e}}nd{align} \noindent \textbf{Case 1:} If noise is genral, $\Psi=\Psi(x,u)$ and $\alpha\in(0,\mathcal{L}_{\lambda}ac{1}{2})$. 
We set $\varepsilon= \delta^\beta,$ $r=\delta^{\gamma}$, and choose $\beta,\,\gamma $ such that $$\min\big\{\mathcal{L}_{\lambda}ac{1-\alpha}{2\alpha},1 \big\}\,\textgreater\beta\,\textgreater\, \mathcal{L}_{\lambda}ac{1}{2},\,\,$$ $$\mathcal{L}_{\lambda}ac{1}{2\alpha}\,\textgreater\,\gamma\,\textgreater\,\mathcal{L}_{\lambda}ac{\beta}{1-\alpha}.$$ With this choice, $\delta\,\varepsilon^{-1}=\delta^{1-\beta}$, $\varepsilon^2\,\delta^{-1}=\delta^{2\beta-1}$, $r^{-2\alpha}\delta=\delta^{1-2\alpha\gamma}$ and $r^{2-2\alpha}\varepsilon^{-2}=\delta^{(2-2\alpha)\gamma-2\beta}$, and all these exponents are positive, so every error term in {\text{e}}qref{final1} vanishes as $\delta\to0$. Hence, after taking $\delta\to0$, we have for all $t\in[0,T]$ $$\mathbb{E}\int_{\mathbb{T}^N}\int_{\mathbb{R}}f_1(t)\bar{f}_2(t)d\xi dx\le\mathbb{E}\int_{\mathbb{T}^N}\int_{\mathbb{R}}f_{1,0}\bar{f}_{2,0}d\xi dx.$$ \textbf{Case 2:} If the noise is purely multiplicative, $\Psi=\Psi(u)$, and $\alpha\in(0,1)$, then the term $\varepsilon^2\delta^{-1}$ is not present on the right hand side of inequality {\text{e}}qref{final1}. First letting $\delta\,\to\,0$ and then taking $r\,\to\,0$ and $\varepsilon\,\to\,0$ yields $$\mathbb{E}\int_{\mathbb{T}^N}\int_{\mathbb{R}}f_1(t)\bar{f}_2(t)d\xi dx\le\mathbb{E}\int_{\mathbb{T}^N}\int_{\mathbb{R}}f_{1,0}\bar{f}_{2,0}d\xi dx.$$ From both cases it is clear that for all $t\in[0,T]$ $$\mathbb{E}\|u_1(t)-u_2(t)\|_{L^1(\mathbb{T}^N)}\le\,\mathbb{E}\|u_{1,0}-u_{2,0}\|_{L^1(\mathbb{T}^N)}.$$ Let us now prove the remaining part of this result. Making use of inequality {\text{e}}qref{app inequality1} and the pathwise estimates {\text{e}}qref{estimate 1}-{\text{e}}qref{estimate 5}, we can similarly conclude that, $\mathbb{P}$-almost surely, for all $t\in[0,T]$, \begin{align}\label{contraction principle 2} \int_{\mathbb{T}^N}\int_{\mathbb{R}}f^{-}(x,t,\xi)\bar{f}^{-}(x,t,\xi)d\xi dx\,\le\,\int_{\mathbb{T}^N}\int_{\mathbb{R}}f_{0}(x,\xi)\bar{f}_{0}(x,\xi)d\xi dx. {\text{e}}nd{align} Since $f_0(x,\xi)\bar{f}_0(x,\xi)=0$, it follows that, $\mathbb{P}$-almost surely, for all $t\in[0,T]$, $f^{-}(x,t,\xi)\big(1-f^{-}(x,t,\xi)\big)=0$ for almost every $(x,\xi)\in \mathbb{T}^N\times\mathbb{R}$. Fubini's theorem then implies that, $\mathbb{P}$-almost surely, for all $t\in[0,T]$, there exists a set $S\subset\mathbb{T}^N$ of full measure such that, for $x\in S$, $f^{-}(x,t,\xi)\in\{0,1\}$ for almost every $\xi\in\mathbb{R}$. Therefore, using the fact that $f^{-}$ is a kinetic function, we conclude that for all $t\in [0,T]$ there exists $u^{-}(t):\Omega\to L^1(\mathbb{T}^N)$ such that, $\mathbb{P}$-almost surely, for all $t\in[0,T]$, $f^{-}(t)=\mathbbm{1}_{u^{-}(t)\textgreater\xi}$. {\text{e}}nd{proof} \begin{cor}\label{th3.7} Let $u_0\in L^p(\Omega;L^p(\mathbb{T}^N))$. Then, for all $p\in[1,+\infty)$, the solution $u$ to {\text{e}}qref{1.1} has almost surely continuous trajectories in $L^p(\mathbb{T}^N)$. {\text{e}}nd{cor} \begin{proof} For a proof, we refer to \cite[Corollary 3.3]{sylvain}. {\text{e}}nd{proof} \subsection{Existence:}\label{subsection 3.2} In this section, we show the existence part of the kinetic solution. We divide the proof of existence into two parts. First, we show the existence of a solution for smooth initial data. Then, with the help of the contraction principle, we show the existence of a kinetic solution for general initial data. \subsection*{Existence for smooth initial data} We prove the existence part of Theorem \ref{th2.10} for the initial data $u_0\in L^p(\Omega;C^\infty (\mathbb{T}^N)).$ Here we use the vanishing viscosity method. Consider a truncation $(\mathcal{X}_\tau)$ on $\mathbb{R}$ and approximations $(\kappa_\tau)$, $(\psi_\tau)$ to the identity on $\mathbb{R}$.
Then smooth approximations of $F$ and $A$ are defined as follows: $$F_k^\tau(\mathfrak{\zeta})= \big((F_k * \kappa_\tau)\mathcal{X}_\tau\big)(\mathfrak{\zeta}),\,\,\,\, k=1,...,N,$$ $$A^\tau(\mathfrak{\zeta})=(A*\psi_\tau)(\mathfrak{\zeta}).$$ Consequently, we define $F^\tau=(F_1^\tau,...,F_N^\tau)$. Note that $F^\tau$ is of class $C^\infty(\mathbb{R};\mathbb{R}^N)$ and compactly supported, and is therefore Lipschitz continuous. Clearly, the polynomial growth condition on $F$ also remains valid for $F^\tau$ and holds uniformly in $\tau$. We consider the following equation approximating the original equation {\text{e}}qref{1.1}: \begin{align}\label{4.1} d\mathfrak{u^\tau}(x,t)+\mbox{div}(F^\tau(\mathfrak{u^\tau}(x,t)))d t +(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha[A^\tau(\mathfrak{u^\tau}(x,t))]dt &=\tau{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta \mathfrak{u^\tau}\,dt+ \Psi(x,\mathfrak{u^\tau}(x,t))dW(t),\,\, x \in \mathbb{T}^N,\,\, t \in(0,T),\notag\\ \mathfrak{u^\tau}(0)&=u_0. {\text{e}}nd{align} There exists a unique weak solution $\mathfrak{u^\tau}$ to {\text{e}}qref{4.1} such that $$\mathfrak{u^\tau}\in L^2(\Omega;C([0,T];L^2(\mathbb{T}^N)))\cap L^2(\Omega;L^2(0,T;H^1(\mathbb{T}^N))).$$ For the existence of viscous solutions, we refer to \cite[Section 6]{Neeraj2}. The viscous equations {\text{e}}qref{4.1} thus have weak solutions, and the subsequent passage to the limit proves the existence of a kinetic solution to {\text{e}}qref{1.1}. Nevertheless, this approximation method is quite technical and has to be carried out in several steps. \subsection*{Strong convergence}\label{section 5.2} Our aim in this subsection is to show that strong convergence holds. Towards this end, we make use of the ideas developed in Subsection \ref{subsection 3.1}. \begin{thm}\label{L1 convergence}\label{convergence} Let $\mathfrak{u^{\tau}}$ be the viscous solution to {\text{e}}qref{4.1} with initial data $u_0$. Then there exists $u\in L^1_{\mathcal{P}}(\Omega\times[0,T]; L^1(\mathbb{T}^N))$ such that $$\mathfrak{u^\tau}\,\to\,u\,\,\,\text{in}\,\,\,L^1_{\mathcal{P}}(\Omega\times[0,T]; L^1(\mathbb{T}^N)).$$ \begin{proof} The proof of this result is based on the computations of Theorem \ref{comparison}.
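In the computations below we also repeatedly use the elementary kinetic identities: for arbitrary real numbers $a$ and $b$, $$(a-b)^+=\int_{\mathbb{R}}\mathbbm{1}_{a\textgreater\,\mathfrak{\zeta}}\big(1-\mathbbm{1}_{b\textgreater\,\mathfrak{\zeta}}\big)d\mathfrak{\zeta},\qquad |a-b|=\int_{\mathbb{R}}\big|\mathbbm{1}_{a\textgreater\,\mathfrak{\zeta}}-\mathbbm{1}_{b\textgreater\,\mathfrak{\zeta}}\big|d\mathfrak{\zeta},$$ so that, for instance, $\mathbb{E}\int_{\mathbb{T}^N}\big(\mathfrak{u^\tau}(t)-\mathfrak{u^\sigma}(t)\big)^+dx=\mathbb{E}\int_{\mathbb{T}^N}\int_{\mathbb{R}}f^\tau(x,t,\mathfrak{\zeta})\bar{f}^\sigma(x,t,\mathfrak{\zeta})\,d\mathfrak{\zeta}\,dx$ whenever $f^\tau=\mathbbm{1}_{\mathfrak{u^\tau}\textgreater\,\mathfrak{\zeta}}$ and $f^\sigma=\mathbbm{1}_{\mathfrak{u^\sigma}\textgreater\,\mathfrak{\zeta}}$.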
\noindent \textbf{Step 1:} Based on the proof of Theorem \ref{comparison}, we can easily conclude that for all $t\in[0,T]$ \begin{align*} &\mathbb{E}\,\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}}\Upsilon_\varepsilon(x-y) f^\tau(x,t,\mathfrak{\zeta})\bar{f}^\tau(y,t,\mathfrak{\zeta})d\mathfrak{\zeta} dx dy\\ &\qquad\le\,\mathbb{E}\int_{(\mathbb{T}^N)^2}\int_{(\mathbb{R})^2}\Upsilon_\varepsilon(x-y)\kappa_\delta(\mathfrak{\zeta}-\mathfrak{\zeta})f^\tau (x,t,\mathfrak{\zeta})\bar{f}^\tau(y,t,\mathfrak{\zeta}) d\mathfrak{\zeta} d\mathfrak{\zeta} dx dy + \delta\\ &\qquad\le\,\mathbb{E}\int_{(\mathbb{T}^N)^2}\int_{(\mathbb{R}^2)}\Upsilon_\varepsilon(x-y)\kappa_\delta(\mathfrak{\zeta}-\mathfrak{\zeta}) f_0(x,\mathfrak{\zeta}) \bar{ f}_0(y,\mathfrak{\zeta})d\mathfrak{\zeta} d\mathfrak{\zeta} dx dy + \delta + \mathcal{E}_\Upsilon+ \mathcal{E}_\kappa+ \mathcal{J}^\tau+J\\ &\qquad\le\,\mathbb{E}\int_{(\mathbb{T}^N)^2}\int_{(\mathbb{R})}\Upsilon_\varepsilon(x-y) f_0(x,\mathfrak{\zeta})\bar{ f}_0(y,\mathfrak{\zeta}) d\mathfrak{\zeta} dx dy+ 2\delta+ \mathcal{E}_\Upsilon + \mathcal{E}_\kappa+ \mathcal{J}^\tau+J, {\text{e}}nd{align*} where $\mathcal{E}_\Upsilon, \mathcal{E}_{\kappa}, J$ are introduced similarly to Theorem \ref{th3.6}, $\mathcal{J}^\tau$ is regarding to the second order term $\tau{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta \mathfrak{u^\tau}$: \begin{align*} \mathcal{J}^\tau&=2\tau\mathbb{E}\int_0^t\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2} f^\tau \bar{ f}^\tau {{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta_x\Upsilon_\varepsilon(x-y)\kappa_\delta(\mathfrak{\zeta}-\mathfrak{\zeta})d\mathfrak{\zeta} d\mathfrak{\zeta} dx dy ds\\ &\qquad-\mathbb{E}\int_0^t\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\Upsilon_\varepsilon(x-y)\kappa_\delta(\mathfrak{\zeta}-\mathfrak{\zeta}) d\mathfrak{\mathcal{V}}_{x,s}^\tau(\mathfrak{\zeta}) dx d\mathfrak{{\text{e}}ta}_2^\tau(y,s,\mathfrak{\zeta})\\ &\qquad-\mathbb{E}\int_0^t\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\Upsilon_\varepsilon(x-y) \kappa_\delta(\mathfrak{\zeta}-\mathfrak{\zeta})d\mathfrak{\mathcal{V}}_{y,s}^\tau(\mathfrak{\zeta}) dy d\mathfrak{{\text{e}}ta}_2^\tau(x,s,\mathfrak{\zeta})\\ &=-\tau\,\mathbb{E}\int_0^t\int_{(\mathbb{T}^N)^2}\Upsilon_\varepsilon(x-y)\kappa_\delta(\mathfrak{u^\tau}(x)-\mathfrak{u^\tau}(y))|\nabla_x \mathfrak{u^\tau}-\nabla_y \mathfrak{u^\tau}|^2 dx dy\,\le\,0. 
{\text{e}}nd{align*} It shows that for all $t\in[0,T]$ \begin{align}\label{final21} \mathbb{E}\int_{(\mathbb{T}^N)^2}\Upsilon_\varepsilon(x-y)|\mathfrak{u^\tau}(x,t)-\mathfrak{u^\tau}(y,t)|dx dy &\le\,\mathbb{E}\int_{(\mathbb{T}^N)^2}\Upsilon_\varepsilon(x-y)|u_0(x)-u_0(y)|dx dy\,\notag\\&\qquad+\,C_T( \delta+\delta\,\varepsilon^{-1}+\varepsilon^2\delta^{-1} + g(\delta)+r^{-2\alpha}\delta+r^{2-2\alpha}\varepsilon^{-2}) {\text{e}}nd{align} \textbf{Step 2:} By similar techniques as in the proofs of Theorem \ref{comparison}, we obtain for any two viscous solution $\mathfrak{u^\tau},\,\,\mathfrak{u^\sigma}$ \begin{align}\label{heading back} &\mathbb{E}\,\int_{\mathbb{T}}\big(\mathfrak{u^\tau}(t)-\mathfrak{u^\sigma}(t)\big)^+dx=\mathbb{E}\int_{\mathbb{T}^N}\int_{\mathbb{R}}f^\tau(x, t, \mathfrak{\zeta}) \bar{ f}^\sigma(x,t,\mathfrak{\zeta}) d\mathfrak{\zeta} dx\notag\\ &\qquad=\mathbb{E}\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\Upsilon_\varepsilon(x-y)\kappa_{\delta}(\mathfrak{\zeta}-\mathfrak{\zeta}) f^\tau(x,t,\mathfrak{\zeta})\bar{ f}^\sigma(y,t,\mathfrak{\zeta}) d\mathfrak{\zeta} d\mathfrak{\zeta} dx dy +\mathfrak{{\text{e}}ta}_{1}(\tau, \sigma,\varepsilon,\delta). {\text{e}}nd{align} We want to show that the error term $\mathfrak{{\text{e}}ta}_{t}(\tau,\sigma, \varepsilon, \delta)$ is independent of $\tau,\sigma$ as follows: \begin{align*} \mathfrak{{\text{e}}ta}_t(\tau,\sigma,\varepsilon,\delta)&=\mathbb{E}\int_{\mathbb{T}^N}\int_{\mathbb{R}}f^\tau(x,t,\mathfrak{\zeta})\bar{ f}^\sigma(x,t,\mathfrak{\zeta}) d\mathfrak{\zeta} dx\\&\qquad\qquad\qquad\qquad-\mathbb{E}\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\Upsilon_{\varepsilon}(x-y)\kappa_{\delta}(\mathfrak{\zeta}-\mathfrak{\zeta}) f^\tau(x,t,\mathfrak{\zeta}) \bar{ f}^\sigma(y,t,\mathfrak{\zeta}) d\mathfrak{\zeta} d\mathfrak{\zeta} dx dy\\ &=\bigg(\mathbb{E}\int_{\mathbb{T}^N}\int_{\mathbb{R}}f^\tau(x,t,\mathfrak{\zeta})\bar{ f}^\sigma(x,t,\mathfrak{\zeta})d\mathfrak{\zeta} dx-\mathbb{E}\int_{\int_{()\mathbb{T}^N)^2}}\int_\mathbb{R}\Upsilon_\varepsilon(x-y)f^\tau(x,t,\mathfrak{\zeta})\bar{ f}^\sigma(y,t,\mathfrak{\zeta}) d\mathfrak{\zeta} dx dy\bigg)\\ &\qquad+\bigg(\mathbb{E}\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}}\Upsilon_\varepsilon(x-y)f^\tau(x,t,\mathfrak{\zeta})\bar{ f}^\sigma(y,t,\mathfrak{\zeta}) d\mathfrak{\zeta} dx dy\\&\qquad\qquad\qquad\qquad-\mathbb{E}\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\Upsilon_\varepsilon(x-y)\kappa_\delta(\mathfrak{\zeta}-\mathfrak{\zeta}) f^\tau(x,t,\mathfrak{\zeta})\bar{ f}^\sigma(y,t,\mathfrak{\zeta})d\mathfrak{\zeta} d\mathfrak{\zeta} dx dy\bigg)\\ =H_1+H_2, {\text{e}}nd{align*} where \begin{align*} |H_1|&=\bigg|\mathbb{E}\int_{(\mathbb{T}^N)^2}\Upsilon_\varepsilon(x-y)\int_{\mathbb{R}}\mathbbm{1}_{\mathfrak{u^\tau}(x)\,\textgreater\,\mathfrak{\zeta}}\big[\mathbbm{1}_{\mathfrak{u^\sigma}(x)\,\le\,\mathfrak{\zeta}}-\mathbbm{1}_{\mathfrak{u^\sigma}(y)\,\le\,\mathfrak{\zeta}}\big]d\mathfrak{\zeta} dx dy\bigg|\\ &=\bigg|\mathbb{E}\int_{(\mathbb{T}^N)^2}\Upsilon_\varepsilon(x-y)\big(\mathfrak{u^\sigma}(y)-\mathfrak{u^\sigma}(x)\big)dx dy\bigg|\\ &\le\,\mathbb{E}\int_{(\mathbb{T}^N)^2}\Upsilon_\varepsilon(x-y)|u_0(x)-u_0(y)|dx dy\,+\,C_T( \delta + \delta\,\varepsilon^{-1}+\varepsilon^2\delta^{-1}\notag\\ &\qquad+g(\delta)+r^{-2\alpha}\delta+r^{2-2\alpha}\varepsilon^{-2}) {\text{e}}nd{align*} due to {\text{e}}qref{final21} and $|H_2|\,\le\,\delta$. 
Come back to {\text{e}}qref{heading back} and using the same computations as in Theorem \ref{comparison}, we obtain \begin{align*}\mathbb{E}\int_{\mathbb{T}^N}\bigg(\mathfrak{u^\tau}(t)-\mathfrak{u^\sigma}(t)\bigg)^+dx&\le\,\mathbb{E}\int_{(\mathbb{T}^N)^2}\Upsilon_\varepsilon(x-y)|u_0(x)-u_0(y)|dx dy\,+\,C_T( \delta+\delta\,\varepsilon^{-1}+\varepsilon^2\delta^{-1}\notag\\ &\qquad+g(\delta)+r^{-2\alpha}\delta+r^{2-2\alpha}\varepsilon^{-2})+2\delta+ \mathcal{E}_\Upsilon+ \mathcal{E}_\kappa+ J^\#+J. {\text{e}}nd{align*} The terms $\mathcal{E}_\Upsilon, \mathcal{E}_\kappa$, and $J$ are defined and can be dealt with similarly as in Theorem \ref{comparison}. The term $J^\#$ is defined as \begin{align*} J^\#&=(\tau+\sigma) \mathbb{E}\int_0^t\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2} f^\tau \bar{ f}^\sigma {{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta_x\Upsilon_{\varepsilon}(x-y)\kappa_{\delta}(\mathfrak{\zeta}-\mathfrak{\zeta}) d\mathfrak{\zeta} d\mathfrak{\zeta} dx dy ds\\ &\qquad-\mathbb{E}\int_0^t\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\Upsilon_\varepsilon(x-y)\kappa_{\delta}(\mathfrak{\zeta}-\mathfrak{\zeta}) d\mathfrak{\mathcal{V}}_{x,s}^\tau(\mathfrak{\zeta}) dx d\mathfrak{{\text{e}}ta}_{2}^\sigma(y,s,\mathfrak{\zeta}) \\ &\qquad-\mathbb{E}\int_0^t\int_{(\mathbb{T}^N)^2}\int_{\mathbb{R}^2}\Upsilon_{\varepsilon}(x-y)\kappa_{\delta}(\mathfrak{\zeta}-\mathfrak{\zeta}) d\mathfrak{\mathcal{V}}_{y,s}^\sigma(\mathfrak{\zeta}) dy d\mathfrak{{\text{e}}ta}_2(x,s,\mathfrak{\zeta}). {\text{e}}nd{align*} We follow \cite[Section 6]{vovelle} to estimate $J^\#$ as follows: \begin{align*} |J^\#|&\,\le\,C\,(\sqrt{\tau}-\sqrt{\sigma})^2\,\varepsilon^{-2}, {\text{e}}nd{align*} Consequently, we see that for all $t\in[0,T]$ \begin{align}\label{final31} \mathbb{E}\int_{\mathbb{T}^N} \big(\mathfrak{u^\tau}(t)-\mathfrak{u^\sigma}(t))\big)^+dx dt\,&\le\,\bigg(\mathbb{E}\int_{(\mathbb{T}^N)^2}\Upsilon_\varepsilon(x-y)|u_0(x)-u_0(y)|dx dy\,+C_T\big(\varepsilon^\mathfrak{\zeta}+ 2\delta\notag\\&\qquad+ \varepsilon^2\delta^{-1}+g(\delta)+r^{-2\alpha}\delta+r^{2-2\alpha}\varepsilon^{-2}\big)\bigg)+C_T\big(\tau+\sigma\big)\varepsilon^{-2} {\text{e}}nd{align} \textbf{Case 1:} If noise is general, $\Psi=\Psi(x,u)$ and $0\,\textless\,\alpha\,\textless\mathcal{L}_{\lambda}ac{1}{2}$, and we set $\varepsilon= \delta^\beta, r=\delta^{\gamma}$, where $\beta,\,\gamma $ such that $$\min\big\{\mathcal{L}_{\lambda}ac{1-\alpha}{2\alpha},1 \big\}\,\textgreater\beta\,\textgreater\, \mathcal{L}_{\lambda}ac{1}{2},\,\,$$ $$\mathcal{L}_{\lambda}ac{1}{2\alpha}\,\textgreater\,\gamma\,\textgreater\,\mathcal{L}_{\lambda}ac{\beta}{1-\alpha}.$$ Then, given $\varpropto\,\textgreater\,0$ We can fix $\delta$ small enough so that the first term on the right hand side is bounded by $\mathcal{L}_{\lambda}ac{\varpropto}{2}$ and then we can find $\varkappa\,\textgreater\,0$ such that the second term is also bounded by $\mathcal{L}_{\lambda}ac{\varpropto}{2}$ for any $\tau, \sigma\,\textless\,\varkappa$. It shows that the sequence of viscous solutions $\{\mathfrak{u^\tau}\}$ is Cauchy sequence in $L^1_{\mathcal{P}}(\Omega\times[0,T] ; L^1(\mathbb{T}^N))$, as $\tau\,\to\,0$. \noindent \textbf{Case 2:} If noise is only multiplicative, $\Psi=\Psi(u)$ and $0\,\textless\,\alpha\,\textless\,1$, then $\varepsilon^2\,\delta^{-1}$ will not present in {\text{e}}qref{final31}. 
Letting $\delta\,\to\,0$, we obtain for all $t\in[0,T]$ \begin{align}\label{final3} \mathbb{E}\int_{\mathbb{T}^N} \big(\mathfrak{u^\tau}(t)-\mathfrak{u^\sigma}(t)\big)^+dx dt\le\mathbb{E}\int_{(\mathbb{T}^N)^2}\Upsilon_\varepsilon(x-y)|u_0(x)-u_0(y)|dx dy+ C_T\big(\varepsilon^\mathfrak{\zeta}+r^{2-2\alpha}\varepsilon^{-2} + \tau+\sigma\big)\varepsilon^{-2}. {\text{e}}nd{align} Set $r=\varepsilon^a$ with $a\textgreater\mathcal{L}_{\lambda}ac{2}{2-2\alpha}$. Then, as in the previous case, we conclude that the sequence of viscous solutions $\{\mathfrak{u^{\tau}}\}$ is a Cauchy sequence in $L^1_{\mathcal{P}}(\Omega\times[0,T] ; L^1(\mathbb{T}^N))$ as $\tau\,\to\,0$. It implies that there exists $u\in L^1_{\mathcal{P}}(\Omega\times[0,T]; L^1(\mathbb{T}^N))$ such that $\mathfrak{u^{\tau}}\,\to\,u\,\,\,\text{in}\,\,\,L^1_{\mathcal{P}}(\Omega\times[0,T]; L^1(\mathbb{T}^N))$ as $\tau\,\to\,0.$ {\text{e}}nd{proof} {\text{e}}nd{thm} \subsection*{Uniform $L^p$-estimate} In the following proposition we show that the viscous solutions $\mathfrak{u^\tau}$ are uniformly bounded in $L^p(\Omega\times[0,T];L^p(\mathbb{T}^N))$ and that the kinetic measures $\mathfrak{m}^\tau$ corresponding to these viscous solutions have a uniform decay in $\mathfrak{\zeta}$. \begin{proposition}\label{p estimate} For all $p\in [2,\infty)$, the viscous solutions $\mathfrak{u^\tau}$ to {\text{e}}qref{4.1} satisfy the following estimates: \begin{align}\label{lp estimate} \mathbb{E}\sup_{0\le t\le T}\|\mathfrak{u^\tau}(t)\|_{L^p(\mathbb{T}^N)}^p\le C (1+\mathbb{E}\|u_0\|_{L^p(\mathbb{T}^N)}^p), {\text{e}}nd{align} \begin{align}\label{4.7} \mathbb{E}\sup_{0\le t\le T}\|\mathfrak{u^\tau}(t)\|_{L^p(\mathbb{T}^N)}^p+\mathbb{E}\int_0^T\int_{\mathbb{T}^N}\int_{\mathbb{R}} |\mathfrak{\zeta}|^{p-2} &d\mathfrak{m}^{\tau}(x,s,\mathfrak{\zeta})\le C (1+\mathbb{E}\|u_0\|_{L^p(\mathbb{T}^N)}^p). {\text{e}}nd{align} Here, the constant $C$ does not depend on $\tau$. {\text{e}}nd{proposition} \begin{proof}The proof of this proposition is an application of the generalized It\^o formula \cite[Appendix A]{vovelle}. We cannot apply the generalized It\^o formula directly to $\kappa(\mathfrak{\zeta})=|\mathfrak{\zeta}|^p$, $p\in [2,\infty)$, and $\varphi(x)=1$. Instead, we define functions $\kappa_k\in C^2(\mathbb{R})$ that approximate $\kappa$ and have quadratic growth at infinity, as required to apply the generalized It\^o formula. We define $\kappa_k$ as \[\kappa_k(\mathfrak{\zeta})=\begin{cases} |\mathfrak{\zeta}|^p, & |\mathfrak{\zeta}|\,\le\, k,\\ k^{p-2}\bigg[\mathcal{L}_{\lambda}ac{p(p-1)}{2}\mathfrak{\zeta}^2-p(p-2)k|\mathfrak{\zeta}|+\mathcal{L}_{\lambda}ac{(p-1)(p-2)}{2}k^2\bigg], & |\mathfrak{\zeta}|\,\textgreater \,k. {\text{e}}nd{cases} \] It is clear that the inequalities \begin{align*}|\mathfrak{\zeta} \kappa_k'(\mathfrak{\zeta})| \,&\le \,p\, \kappa_k (\mathfrak{\zeta}),\\ |\kappa_k'(\mathfrak{\zeta})| \, &\le p(1+\kappa_k(\mathfrak{\zeta})),\\ |\kappa_k'(\mathfrak{\zeta})| \, &\le |\mathfrak{\zeta}|\kappa_k''(\mathfrak{\zeta}),\\ \mathfrak{\zeta}^2 \kappa_k ''(\mathfrak{\zeta})\, &\le p(p-1) \kappa_k (\mathfrak{\zeta}),\\ \kappa_k''(\mathfrak{\zeta}) \,&\le p(p-1)(1+\kappa_k(\mathfrak{\zeta})) {\text{e}}nd{align*} hold true for all $\mathfrak{\zeta}\in\mathbb{R}$, $k\in\mathbb{N}$, $p\in[2,\infty)$.
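As an illustration of how these bounds are obtained, note that on the region $|\mathfrak{\zeta}|\le k$ we have $\kappa_k'(\mathfrak{\zeta})=p|\mathfrak{\zeta}|^{p-2}\mathfrak{\zeta}$ and $\kappa_k''(\mathfrak{\zeta})=p(p-1)|\mathfrak{\zeta}|^{p-2}$, so that $$\mathfrak{\zeta}^2\kappa_k''(\mathfrak{\zeta})=p(p-1)|\mathfrak{\zeta}|^{p}=p(p-1)\kappa_k(\mathfrak{\zeta})\qquad\text{and}\qquad |\kappa_k'(\mathfrak{\zeta})|=p|\mathfrak{\zeta}|^{p-1}\le|\mathfrak{\zeta}|\,\kappa_k''(\mathfrak{\zeta}),$$ the last inequality holding because $p\ge 2$; for $|\mathfrak{\zeta}|\ge k$ the corresponding bounds follow from analogous elementary computations with the quadratic expression defining $\kappa_k$.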
Then, by generalized It\^o formula \cite[Appendix A]{vovelle}, $\mathbb{P}$-almost surely, for all $t\in[0,T]$ \begin{align*} \int_{\mathbb{T}^N} \kappa_k(\mathfrak{u^\tau}(t)) dx &= \int_{\mathbb{T}^N} \kappa_k(u_0) dx -\int_0^t \big\langle \kappa_k'(\mathfrak{u^\tau}), \mbox{div}(F^\tau(\mathfrak{u^\tau}))\big\rangle ds-\int_0^t\big\langle \kappa_k'(\mathfrak{u^\tau}), (-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)_x^\alpha(A^\tau(\mathfrak{u^{\tau}}))\big\rangle ds\\ &\qquad+\int_0^t\big\langle \kappa_k'(\mathfrak{u^{\tau}}), \tau {{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta \mathfrak{u^{\tau}}\big\rangle ds+\sum_{k\ge1}\int_0^t\big\langle \kappa_k'(\mathfrak{u^{\tau}}), h_k(x,\mathfrak{u^{\tau}})\big\rangle d\beta_k(s)\\ &\qquad+\mathcal{L}_{\lambda}ac{1}{2}\int_0^t\big\langle\kappa_k''(\mathfrak{u^{\tau}}), H^2(\mathfrak{u^{\tau}})\big\rangle ds. {\text{e}}nd{align*} If we define $H^\tau(\mathfrak{\zeta})=\int_0^\mathfrak{\zeta} \kappa_k''(\mathfrak{\zeta}) F^\tau(\mathfrak{\zeta}) d\mathfrak{\zeta}$, then it can be seen that the second term on the right-hand side vanishes due to the boundary conditions. The third and fourth terms are nonpositive as follows: $$-\int_0^t\big\langle\kappa_k'(\mathfrak{u^{\tau}}), (-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)_x^\alpha(A^\tau(\mathfrak{u^{\tau}}))\big\rangle ds=-\lim_{\delta\to0}\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}\kappa_k''(\mathfrak{\zeta})\mathfrak{{\text{e}}ta}_\delta^\tau(x,\mathfrak{\zeta},s)$$ $$\int_0^t\big\langle\kappa_k'(\mathfrak{u^{\tau}}), \tau {{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta \mathfrak{u^{\tau}} \big\rangle ds= -\int_0^t\int_{\mathbb{T}^N}\kappa_k''(\mathfrak{u^{\tau}}) \tau |\nabla \mathfrak{u^{\tau}}|^2 dx ds,$$ where $\mathfrak{{\text{e}}ta}^\delta=\int_{\mathbb{R}^N} |A^\tau(u^\delta(x+z,t))-A^\tau(\xi)|\mathbbm{1}_{\mbox{Conv}\{\mathfrak{u}^{\tau,\delta}(x+z,t),\mathfrak{u}^{\tau,\delta}(x,t)\}}(\xi)\mu(z)dz$ and $\mathfrak{u}^{\tau,\delta}$ is pathwise molification of $\mathfrak{u}^\tau$ in space variable. For discussion on this argument we refer to Appendix \ref{A}. After this, we can follow the proof of \cite[Proposition 4.3]{vovelle} to conclude the result. {\text{e}}nd{proof} \begin{remark}[\textbf{Approximate solutions}] Here, we use the computations of Appendix \ref{A} to derive the kinetic formulation to {\text{e}}qref{4.1} that satisfied by $f^\tau(t)=\mathbbm{1}_{\mathfrak{u^{\tau}}(t)\,\textgreater\,\mathfrak{\zeta}}$ in the sense of $D'(\mathbb{T}^N\times\mathbb{R})$. 
We read as follows: $\mathbb{P}$-almost surely, for all $t\in[0,T]$ \begin{align*} df^\tau(t)+{F^{\,\tau}}'\cdot\nabla f^\tau(t) dt - \tau {{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta f^\tau(t) dt+{A^\tau}'(\mathfrak{\zeta}) (-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)_x^\alpha(f^\tau(t))dt&= \delta_{\mathfrak{u^{\tau}}(t)=\mathfrak{\zeta}} \varphi\, dB(t) + \partial_{\mathfrak{\zeta}}(\mathfrak{m}^\tau -\mathcal{L}_{\lambda}ac{1}{2}H^{2} \delta_{\mathfrak{u^{\tau}}(t)=\mathfrak{\zeta}}) dt, {\text{e}}nd{align*} where $d\mathfrak{m}^\tau(x,t,\mathfrak{\zeta})\ge d\mathfrak{m}_1^\tau(x,t,\mathfrak{\zeta})+d\mathfrak{{\text{e}}ta}_1^\tau(x,t,\mathfrak{\zeta}),$ $d\mathfrak{m}_1^\tau(x,t,\mathfrak{\zeta})=\tau |\nabla \mathfrak{u^{\tau}}|^2\delta_{\mathfrak{u^{\tau}}=\mathfrak{\zeta}}dx dt,$ and $$d\mathfrak{{\text{e}}ta}_1^\tau(x,t,\mathfrak{\zeta})=\int_{\mathbb{R}^N}|A^\tau(\mathfrak{u^{\tau}}(x+z))-A^\tau(\mathfrak{\zeta})|\mathbbm{1}_{\mbox{Conv}\{\mathfrak{u^{\tau}}(x),\mathfrak{u^{\tau}}(x+z)\}}\mu(z)dz.$$ It shows that for all $\varphi\in C_c^2(\mathbb{T}^N\times\mathbb{R})$, $\mathbb{P}$-almost surely, for all $t\in[0,T]$, \begin{align}\label{approximate formulation} \langle f^\tau(t),\varphi\rangle &= \langle f_0^\tau,\varphi \rangle +\int_0^t \langle f^\tau(s), {F^{\,\tau}}'(\mathfrak{\zeta})\cdot\nabla_x \varphi \rangle ds -\int_0^t \langle f^\tau(s), {A^\tau}'(\mathfrak{\zeta}) (-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha(\varphi)\rangle ds\notag\\ &\qquad+ \int_0^t \int_{\mathbb{T}^N}\int_{\mathbb{R}} h_k(x,\mathfrak{\zeta}) \varphi(x,\mathfrak{\zeta}) d\mathfrak{\mathcal{V}}_{x,s}^\tau(\mathfrak{\zeta}) dx d\beta_k(s)+ \tau\int_0^t \langle f^\tau(s),{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta \varphi \rangle ds.\notag \\ & \qquad+\mathcal{L}_{\lambda}ac{1}{2}\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}} H^{2}(x,\mathfrak{\zeta}) \partial_{\mathfrak{\zeta}}\varphi(x,\mathfrak{\zeta}) d\mathfrak{\mathcal{V}}_{x,s}^\tau(\mathfrak{\zeta})dx ds- \mathfrak{m}^\tau(\partial_{\mathfrak{\zeta}}\varphi)([0,t]). {\text{e}}nd{align} {\text{e}}nd{remark} \subsection*{Convergence of approximate kinetic functions}\label{section 5.3} Here, we will use the following notations: we say that a sequence $(\mathfrak{\mathcal{V}}^{\mathfrak{\tau_n}})$ of Young measures converges to $\mathfrak{\mathcal{V}}$ in $\mathfrak{\mathcal{Y}}^1$ if {\text{e}}qref{2.12} is satisfied. By definition, a random Young measure is a $\mathfrak{\mathcal{Y}}^1$- valued random variable. We define Young measures on $\mathbb{T}^N\times[0,T]$ as $\mathfrak{\mathcal{V}}^{\mathfrak{\tau_n}}=\delta_{\mathfrak{u^{\mathfrak{\tau_n}}}=\mathfrak{\zeta}},\,\, \& \,\,{\mathfrak{\mathcal{V}}}=\delta_{{u}=\mathfrak{\zeta}}$ and following uniform bound holds, \begin{align}\label{4.10}\mathbb{E}\Bigg[\sup_{t\in[0,T]}\int_{\mathbb{T}^N}\int_{\mathbb{R}}|\mathfrak{\zeta}|^p d\mathfrak{\mathcal{V}}_{x,t}^{\mathfrak{\tau_n}}(\mathfrak{\zeta}) dx\Bigg]\le\,C_p {\text{e}}nd{align} \begin{proposition}\label{D} It holds true $($up to subsequences$)$ \begin{enumerate} \item[1.] ${\mathbb{P}}$-almost surely, ${\mathfrak{\mathcal{V}}}^{\mathfrak{\tau_n}}$ converges ${\mathfrak{\mathcal{V}}}$ in $\mathfrak{\mathcal{Y}}^1$. \item[2.] 
${\mathfrak{\mathcal{V}}}$ satisfies the following bound: \begin{align}\label{4.11} {\mathbb{E}}\Bigg(\sup_{I\subset[0,T]}\mathcal{L}_{\lambda}ac{1}{|I|}\int_{I}\int_{\mathbb{T}^N}\int_{\mathbb{R}}|\mathfrak{\zeta}|^p d{\mathfrak{\mathcal{V}}}_{x,t}(\mathfrak{\zeta})dx dt\Bigg)&\le\, C_p. {\text{e}}nd{align} Here, the supremum in {\text{e}}qref{4.11} is a supremum over all open intervals $I\subset[0,T]$ with rational end points, a countable family. \item[3.] Suppose that ${f}^{\mathfrak{\tau_n}}, {f}:\Omega\times\mathbb{T}^N\times[0,T]\times\mathbb{R}\to [0,1]$ are defined by $${f}^{\mathfrak{\tau_n}}(x,t,\mathfrak{\zeta})={\mathfrak{\mathcal{V}}}_{x,t}^{\mathfrak{\tau_n}}(\mathfrak{\zeta},+\infty),\,\,\,\, {f}(x,t,\mathfrak{\zeta})={\mathfrak{\mathcal{V}}}_{x,t}(\mathfrak{\zeta},+\infty);$$ then ${f}^{\mathfrak{\tau_n}}\to{f}$ in $L^{\infty}(\mathbb{T}^N\times[0,T]\times\mathbb{R})$-weak-* ${\mathbb{P}}$-almost surely. \item[4.] There exists a full measure subset $\mathcal{S}$ of $[0,T]$ containing $0$ such that for all $t\in \mathcal{S}$ $${f}^{\mathfrak{\tau_n}}\to {f}\,\,\, in\,\,\,\,\,\,L^\infty (\Omega\times\mathbb{T}^N\times\mathbb{R})-weak^*.$$ {\text{e}}nd{enumerate} {\text{e}}nd{proposition} \begin{proof} We refer to \cite[Proposition 5.3]{Chaudhary} for a proof. {\text{e}}nd{proof} \subsection*{Compactness of the Random Measures}\label{section 5.4} We know that the following duality holds for conjugate exponents $q$, $q^*\in(1,\infty)$: $$L_w^q({\Omega};\mathcal{M}_b(\mathbb{T}^N\times[0,T]\times\mathbb{R}))\simeq(L^{q^*}({\Omega};C_0(\mathbb{T}^N\times[0,T]\times\mathbb{R})))^*,$$ where $\mathcal{M}_b(\mathbb{T}^N\times[0,T]\times\mathbb{R})$ denotes the space of bounded Borel measures on $\mathbb{T}^N\times[0,T]\times\mathbb{R}$, whose norm is given by the total variation of measures. It is the dual space to $C_0(\mathbb{T}^N\times[0,T]\times\mathbb{R})$, the collection of all continuous functions vanishing at infinity, equipped with the supremum norm. $L_w^q({\Omega};\mathcal{M}_b(\mathbb{T}^N\times[0,T]\times\mathbb{R}))$ contains all $weak^*$-measurable functions $\mathfrak{{\text{e}}ta}:{\Omega}\to\mathcal{M}_b(\mathbb{T}^N\times[0,T]\times\mathbb{R})$ such that $${\mathbb{E}}\left\Vert \mathfrak{{\text{e}}ta}\right\Vert_{\mathcal{M}_b}^q\textless\,\infty.$$ Let us define the measures $$d{\mathfrak{m}}_1^{\mathfrak{\tau_n}}(x,t,\mathfrak{\zeta})=\mathfrak{\tau_n}|\nabla \mathfrak{u^{{\tau_n}}}|^2d\delta_{\mathfrak{u^{\tau_n}}=\mathfrak{\zeta}}(\mathfrak{\zeta})\,dx\,dt,$$ $$d{\mathfrak{{\text{e}}ta}}_1^{\tau_n}(x,t,\mathfrak{\zeta})=\int_{\mathbb{R}^{N}}|A^{\tau_n}(\mathfrak{u^{\tau_n}}(x+z))-A^{\tau_n}(\mathfrak{\zeta})|\mathbbm{1}_{\mbox{Conv}\{{\mathfrak{u^{\tau_n}}}(x+z),{\mathfrak{u^{\tau_n}}}(x)\}}(\mathfrak{\zeta})\mu(z)dz\,d\mathfrak{\zeta}\,dx\,dt.$$ To obtain the convergence of these measures to a kinetic measure, we prove the following lemma.
\begin{lem}\label{m} There exists a kinetic measure ${\mathfrak{m}}$ such that \[ {\mathfrak{m}}^{\tau_n}\overset{w^*}{\to}{\mathfrak{m}}\,\,\qquad in\,\,\qquad\qquad L_w^2({\Omega}; \mathcal{M}_b(\mathbb{T}^N\times[0,T]\times\mathbb{R}))-weak^*.\] Here, ${\mathfrak{m}}$ can be written as ${\mathfrak{{\text{e}}ta}}_1+{\mathfrak{m}}_1$, where $$d{\mathfrak{{\text{e}}ta}}_1(x,t,\mathfrak{\zeta})=\int_{\mathbb{R}^N}|A({u}(x+z,t))-A(\mathfrak{\zeta})|\mathbbm{1}_{\mbox{Conv}\{{u}(x+z,t),{u}(x,t)\}}\mu(z)dz dx d\mathfrak{\zeta} dt$$ and $\mathbb{P}$-almost surely, $\mathfrak{m}_1$ is a nonnegative measure over $\mathbb{T}^N\times[0,T]\times\mathbb{R}$. {\text{e}}nd{lem} \begin{proof} Due to the computations used in the proof of Proposition \ref{p estimate}, we can conclude that $\mathbb{P}$-almost surely, for all $t\in[0,T]$ \begin{align*} \int_0^T\int_{\mathbb{T}^N}\int_{\mathbb{R}}\mathfrak{m}^{\tau_n}(x,t,\mathfrak{\zeta})\,d\mathfrak{\zeta}\,dx\,dt\le\,&\left\Vert u_0\right\Vert_{L^2(\mathbb{T}^N)}^2 +\sum_{k\ge1}\int_0^T\int_{\mathbb{T}^N}\,\mathfrak{u^{\tau_n}}\,h_k(\mathfrak{u^{\tau_n}})\,dx\,d\beta_k(t)\\&\qquad+\mathcal{L}_{\lambda}ac{1}{2}\int_0^T\int_{\mathbb{T}^N}\, H^2(\mathfrak{u^{\tau_n}})\,dx ds. {\text{e}}nd{align*} Taking squares and expectations, and using the It\^o isometry, we get that \begin{align*} \mathbb{E}|\mathfrak{m^{\tau_n}}([0,T]\times\mathbb{T}^N\times\mathbb{R})|^2 \le\,C. {\text{e}}nd{align*} Hence, the sequence $\mathfrak{m}^{\tau_n}$ is bounded in $L_w^2({\Omega};\mathcal{M}_b(\mathbb{T}^N\times[0,T]\times\mathbb{R}))$, and as a consequence of the Banach-Alaoglu theorem, there exists a $weak^*$ convergent subsequence, still denoted by $\{{\mathfrak{m}}^{\tau_n}:\;n\in\mathbb{N}\}$, that is, $\mathfrak{m}^{\tau_n}$ converges to ${\mathfrak{m}}$ in $L_w^2({\Omega};\mathcal{M}_b(\mathbb{T}^N\times[0,T]\times\mathbb{R}))$-weak-*. It only remains to show that the $weak^*$ limit ${\mathfrak{m}}$ is a kinetic measure. The first point of Definition \ref{kinetic measure 1} of a kinetic measure is direct. To prove the behavior for large $\mathfrak{\zeta}$, we need the $L^p$-estimate. From {\text{e}}qref{4.7} we conclude that \begin{align}\label{4.13} \mathbb{E}\sup_{t\in[0,T]}\left\Vert \mathfrak{u^{\tau_n}}\right\Vert_{L^p(\mathbb{T}^N)}^p+\mathbb{E}\int_0^T\int_{\mathbb{T}^N}\int_{\mathbb{R}} |\mathfrak{\zeta}|^{p-2}&d\mathfrak{m}^{\tau_n}(x,t,\mathfrak{\zeta})\le\,C(1+\mathbb{E}\left\Vert u_0\right\Vert_{L^p(\mathbb{T}^N)}^p). {\text{e}}nd{align} Let $(\chi_{\delta})$ be a truncation on $\mathbb{R}$; then, for $p\in[2,\infty),$ \begin{align*} &\mathbb{E}\int_{[0,T]\times\mathbb{T}^N\times\mathbb{R}}|\mathfrak{\zeta}|^{p-2}d{\mathfrak{m}}(x,t,\mathfrak{\zeta})\le\,\liminf_{\delta\to0}\mathbb{E}\int_{[0,T]\times\mathbb{T}^N\times\mathbb{R}}|\mathfrak{\zeta}|^{p-2}\chi_{\delta}(\mathfrak{\zeta}) d{\mathfrak{m}}(t,x,\mathfrak{\zeta})\\ &=\liminf_{\delta\to 0}\liminf_{\tau_n\to\,0}\mathbb{E}\int_{[0,T]\times\mathbb{T}^N\times\mathbb{R}}|\mathfrak{\zeta}|^{p-2}\chi_{\delta}(\mathfrak{\zeta})d{\mathfrak{m}}^{\tau_n}(t,x,\mathfrak{\zeta})\le\,C, {\text{e}}nd{align*} where the last inequality holds from {\text{e}}qref{4.13}. This shows that ${\mathfrak{m}}$ vanishes for large $\mathfrak{\zeta}$.
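Here $(\chi_{\delta})$ denotes a family of truncations on $\mathbb{R}$; one concrete admissible choice (an illustration only, since any family with the properties below works) is $$\chi_{\delta}\in C_c^\infty(\mathbb{R}),\qquad 0\,\le\,\chi_{\delta}\,\le\,1,\qquad \chi_{\delta}(\mathfrak{\zeta})=1\,\,\,\text{for}\,\,|\mathfrak{\zeta}|\,\le\,\delta^{-1},\qquad \chi_{\delta}(\mathfrak{\zeta})=0\,\,\,\text{for}\,\,|\mathfrak{\zeta}|\,\ge\,2\delta^{-1},$$ so that $\chi_{\delta}\to1$ pointwise as $\delta\to0$, which is exactly what the passage to the limit above requires.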
Finally, we can deduce that there exist kinetic measures $\lambda_1$ and $\lambda_2$ such that $${\mathfrak{m}}_1^{\tau_n}\overset{w^*}\to \lambda_1,\,\,\,\,\,{\mathfrak{{\text{e}}ta}}_1^{\tau_n}\overset{w^*}\to\,\lambda_2\,\,\,\,\, in\,\,L_w^2({\Omega};\mathcal{M}_b(\mathbb{T}^N\times[0,T]\times\mathbb{R}))-weak^*.$$ We observe that $\mathbb{P}$-almost surely, there exists a negligible set $\mathcal{S}\subset\,\mathbb{T}^N\times[0,T]$ such that, up to a subsequence, $\mathfrak{u^{\tau_n}}(x,t)\to{u}(x,t)$ for all $(t,x)\notin \mathcal{S}$. If we fix $z\in\mathbb{R}^N$, then we also have $\mathfrak{u^{\tau_n}}(x+z,t)\to{u}(x+z, t)$ for any $(x,t)$ outside some negligible set $\mathcal{S}_z$. Hence we have $$|A^{\tau_n}(\mathfrak{u^{\tau_n}}(x+z,t))-A^{\tau_n}(\mathfrak{\zeta})|\mathbbm{1}_{\mbox{Conv}\{\mathfrak{u^{\tau_n}}(x,t),\mathfrak{u^{\tau_n}}(x+z,t)\}}(\mathfrak{\zeta})\to|A({u}(x+z,t))-A(\mathfrak{\zeta})|\mathbbm{1}_{\mbox{Conv}\{{u}(x,t),{u}(x+z,t)\}}(\mathfrak{\zeta})\,\,\, $$ as $n\to \infty$, for any $(x,t,\mathfrak{\zeta})\in[0,T]\times\mathbb{T}^N\times\mathbb{R}$ such that $(t,x)\notin \mathcal{S}\cup \mathcal{S}_z$ and $\mathfrak{\zeta}\ne {u}(t,x)$. We can then use Fatou's lemma to conclude that $\mathbb{P}$-almost surely, ${\mathfrak{{\text{e}}ta}}_1\,\le\,\lambda_2$. Hence, $\mathbb{P}$-almost surely, ${\mathfrak{m}}\ge\lambda_1+\lambda_2\ge \,{\mathfrak{{\text{e}}ta}}_1.$ {\text{e}}nd{proof} \subsection*{Kinetic Solution}\label{section 5.5} As a direct consequence of the convergence results stated in the previous subsections and \cite[Lemma 2.1, Proposition 4.9]{sylvain}, we can pass to the limit in all terms of {\text{e}}qref{approximate formulation}. The convergence of the It\^o integral can easily be verified using the uniform $L^p$-bound and the strong convergence of $\mathfrak{u^{\tau}}$.\\ For all $\varphi\in C_c^2(\mathbb{T}^N\times\mathbb{R})$, $\mathbb{P}$-almost surely there exists a subset $\mathcal{N}_0\subset[0,T]$ of measure zero such that for all $ t\in[0,T]\backslash\mathcal{N}_0$, \begin{align}\label{4.17} \langle {f}(t),\varphi \rangle &= \langle {f}_0, \varphi \rangle + \int _0^t \langle {f}(s), F'(\mathfrak{\zeta})\cdot\nabla\varphi \rangle ds - \int_0^t \langle {f}(s), A'(\mathfrak{\zeta}) (-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha[\varphi] \rangle ds\notag\\ &\qquad+\sum_{k=1}^\infty \int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}} h_k(x,\mathfrak{\zeta}) \varphi (x,\mathfrak{\zeta}) d{\mathfrak{\mathcal{V}}}_{x,s}(\mathfrak{\zeta}) dx d\beta_k(s)\notag\\ &\qquad +\mathcal{L}_{\lambda}ac{1}{2} \int_0^t \int_{\mathbb{T}^N}\int_{\mathbb{R}} \partial_{\mathfrak{\zeta}} \varphi( x,\mathfrak{\zeta}) H^2(x,\mathfrak{\zeta}) d{\mathfrak{\mathcal{V}}}_{x,s}(\mathfrak{\zeta}) dx ds - {m}(\partial_{\mathfrak{\zeta}}\varphi)([0,t]). {\text{e}}nd{align} We state the following proposition, which helps to extend the kinetic formulation to all $t\in[0,T]$. \begin{proposition}\label{th4.15} There exists a measurable subset $\Omega_2$ of ${\Omega}$ of probability 1, and a random Young measure ${\mathfrak{\mathcal{V}}}^+$ on $\mathbb{T}^N\times(0, T)$ such that the following properties hold: \begin{enumerate} \item[1.] for all ${\omega}\in\Omega_2$, for almost every $(x,t)\in\mathbb{T}^N\times(0,T)$, the probability measures ${\mathfrak{\mathcal{V}}}_{x,t}^+$ and ${\mathfrak{\mathcal{V}}}_{x,t}$ are equal. \item[2.]
the kinetic function ${f}^+(x,t,\mathfrak{\zeta}):= {\mathfrak{\mathcal{V}}}_{x,t}^+(\mathfrak{\zeta},+\infty)$ satisfies: for all ${\omega}\in\Omega_2$, for all $\varphi\in C_c(\mathbb{T}^N\times(0,T))$, $t\mapsto \langle {f}^+(t), \varphi \rangle$ is c\`adl\`ag. \item[3.] The random Young measure ${\mathfrak{\mathcal{V}}}^+ $ satisfies $${\mathbb{E}}\sup_{t\in[0,T]}\int_{\mathbb{T}^N}\int_{\mathbb{R}}|\mathfrak{\zeta}|^p d{\mathfrak{\mathcal{V}}}_{x,t}^+(\mathfrak{\zeta}) dx\,\,\le\,\, C_p.$$ {\text{e}}nd{enumerate} \begin{proof} For a proof, we refer to \cite{sylvain}; estimate {\text{e}}qref{4.11} is used to obtain this result. {\text{e}}nd{proof} {\text{e}}nd{proposition} \noindent From now on we consider only the c\`adl\`ag version. Replacing ${\mathfrak{\mathcal{V}}}$ by ${\mathfrak{\mathcal{V}}}^+$ and ${f}$ by ${f}^+$ in {\text{e}}qref{4.17} (with the help of the It\^o isometry for the It\^o integral), we conclude that the kinetic formulation holds for all $t$. That is, $\mathbb{P}$-almost surely, for all $t\in[0,T]$, for all $\varphi\in C_c^2(\mathbb{T}^N\times\mathbb{R})$, \begin{align}\label{4.26} \langle {f}^+(t),\varphi \rangle &= \langle {f}_0, \varphi \rangle + \int _0^t \langle {f}^+(s), F'(\mathfrak{\zeta})\cdot\nabla\varphi \rangle ds - \int_0^t \langle {f}^+(s), A'(\mathfrak{\zeta}) (-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha[\varphi] \rangle ds\notag\\ &\qquad +\sum_{k=1}^\infty \int_0^t\int_{\mathbb{R}}\int_{\mathbb{T}^N} h_k(x,\mathfrak{\zeta}) \varphi (x,\mathfrak{\zeta}) d{\mathfrak{\mathcal{V}}}_{x,s}^+(\mathfrak{\zeta}) dx d\beta_k(s)\notag\\ &\qquad +\mathcal{L}_{\lambda}ac{1}{2} \int_0^t\int_{\mathbb{R}} \int_{\mathbb{T}^N} \partial_{\mathfrak{\zeta}} \varphi( x,\mathfrak{\zeta} ) H^2(x,\mathfrak{\zeta}) d{\mathfrak{\mathcal{V}}}_{x,s}^+(\mathfrak{\zeta}) dx ds - {m}(\partial_{\mathfrak{\zeta}}\varphi)([0,t]). {\text{e}}nd{align} \noindent At this stage, ${f}^+$ is only known to be a kinetic function; we still need it to be at equilibrium for all $t\in[0,T]$. So far we only know that, $\mathbb{P}$-almost surely, ${f}^+(x,t,\mathfrak{\zeta})=\mathbbm{1}_{{u}(x,t)\textgreater\mathfrak{\zeta}}$ for all $t\in[0,T]\backslash \mathcal{N}_0$ and almost every $(x,\mathfrak{\zeta})\in\mathbb{T}^N\times\mathbb{R}$. To show that ${f}^+$ has the equilibrium form for all $t\in[0,T]$, we can repeat the proof of Theorem \ref{th3.6} with ${f}^+$ and the kinetic measure ${\mathfrak{m}}$. Then we get, ${\mathbb{P}}$-almost surely, for all $t\in[0,T]$, $${f}^+(x,t,\mathfrak{\zeta})=\mathbbm{1}_{\tilde{u}(x,t)\textgreater\mathfrak{\zeta}},$$ where $$\tilde{u}(x,t)=\int_{\mathbb{R}} ({f}^+(x,t,\mathfrak{\zeta})-\mathbbm{1}_{0\textgreater\mathfrak{\zeta}})\,\,d\mathfrak{\zeta}.$$ Since ${\mathbb{P}}$-almost surely $u(t)=\tilde{u}(t)$ for almost every $t\in [0,T]$, we can modify $u$ so that $\mathbb{P}$-almost surely ${u}(t)=\tilde{u}(t)$ for all $t\in[0,T]$. Finally, ${(u(t))_{t\in[0,T]}}$ and $f(t)=\mathbbm{1}_{u(t)\textgreater\mathfrak{\zeta}}$ (after this redefinition) satisfy all the points of the definition of a kinetic solution. This shows that ${(u(t))_{t\in[0,T]}}$ is a kinetic solution. \subsection*{Existence for general initial data}\label{section 6} Here, we provide an existence proof in the general case $u_0\in L^p(\Omega; L^p(\mathbb{T}^N))$, $p\in[1,+\infty)$. Let $u^n_0$ be a sequence in $L^p(\Omega; C^\infty(\mathbb{T}^N))$ such that $u^n_0$ converges to $u_0$ in $L^1(\Omega\times\mathbb{T}^N)$.
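For definiteness, one admissible construction (a sketch only; the particular mollifier below is an illustrative choice, and any approximating sequence with the stated properties works equally well) is the pathwise convolution with a smooth approximate identity on the torus, $$u_0^n(\omega,x)=\big(u_0(\omega,\cdot)\ast\rho_n\big)(x)=\int_{\mathbb{T}^N}\rho_n(x-y)\,u_0(\omega,y)\,dy,$$ where $\rho_n\in C^\infty(\mathbb{T}^N)$ is nonnegative, has unit mass and concentrates at the origin as $n\to\infty$. Since convolution with a probability density is an averaging operation, Jensen's inequality gives $\|u_0^n(\omega,\cdot)\|_{L^p(\mathbb{T}^N)}\le\|u_0(\omega,\cdot)\|_{L^p(\mathbb{T}^N)}$ for each $\omega$, and $u_0^n\to u_0$ in $L^1(\Omega\times\mathbb{T}^N)$ by the standard properties of mollifiers together with dominated convergence.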
Indeed, the initial conditions $u_0^n$ can be defined as pathwise mollifications of $u_0$, as sketched above; then the following bound holds: \begin{align}\label{4.27} \|u_0^n\|_{L^p(\Omega; L^p(\mathbb{T}^N))}\,\le\,\|u_0\|_{L^p(\Omega; L^p(\mathbb{T}^N))}. {\text{e}}nd{align} It is clear from the previous case that for each $n\in\mathbb{N}$, there exists a kinetic solution $u^n$ to {\text{e}}qref{1.1} with initial data $u_0^n$. From the contraction principle, we have for all $t\in[0,T],$ \[\mathbb{E}\|u^{n_1}(t)-u^{n_2}(t)\|_{L^1(\mathbb{T}^N)}\,\le\,\mathbb{E}\|u_0^{n_1}-u_0^{n_2}\|_{L^1(\mathbb{T}^N)},\,\,\text{for all}\,n_1, n_2 \in\mathbb{N}.\] By {\text{e}}qref{4.27} and {\text{e}}qref{lp estimate}, we have the following uniform estimates for $p\in[1,+\infty)$: \begin{align}\label{4.28} \mathbb{E}\,\big[\sup_{0\le t\le T}\|u^n(t)\|_{L^p(\mathbb{T}^N)}^p\big]\,\le\,C_{T}\mathbb{E}\big[\|u_0\|_{L^p(\mathbb{T}^N)}^p\big] {\text{e}}nd{align} and \begin{align*} \mathbb{E}|m^n(\mathbb{T}^N\times[0,T]\times\mathbb{R})|^2\,\le\,C_{T,u_0}. {\text{e}}nd{align*} Thus, from the observations of the previous subsections, we can show that there exists a subsequence $u^{n_k}$ such that \begin{enumerate} \item[A.] there exists $u\in L^p_{\mathcal{P}}(\Omega\times[0,T];L^1(\mathbb{T}^N))$ such that $u^{n_k}\,\to u$ in $ L^p_{\mathcal{P}}(\Omega\times[0,T];L^1(\mathbb{T}^N))$ as $n_k\,\to\,\infty$. \item[B.] $f^{n_k}\,\overset{w^*}{\to}f=\mathbbm{1}_{u \textgreater\mathfrak{\zeta}}$\,\,in\,\, $L^\infty(\Omega\times\mathbb{T}^N\times[0,T]\times\mathbb{R})$-weak-star. \item[C.] there exists a kinetic measure $\mathfrak{m}$ such that \[m^{n_k}\overset{w^*}{\to}\,\mathfrak{m}\qquad\,in\qquad L_w^2(\Omega; \mathcal{M}_b(\mathbb{T}^N\times[0,T]\times\mathbb{R}))-weak^*\] and $\mathfrak{m}\ge\mathfrak{{\text{e}}ta}_1$, $\mathbb{P}$-almost surely. {\text{e}}nd{enumerate} With this information, we can pass to the limit in {\text{e}}qref{2.6} and conclude that $u$ is the kinetic solution to {\text{e}}qref{1.1}. This finishes the proof of Theorem \ref{th2.10}. \section{Invariant measure: Proof of Theorem \ref{Invariant measure}}\label{section 4} In this section, we prove the existence and uniqueness of an invariant measure. For both existence and uniqueness, we restrict ourselves to additive noise and $\alpha\in (0, \mathcal{L}_{\lambda}ac{1}{2})$. \subsection{Existence of invariant measure} \label{subsection 4.1} In this subsection, we prove the existence of an invariant measure as an application of the Krylov-Bogoliubov theorem. \noindent \textbf{Existence:} Let $u_0\in L^3(\mathbb{T}^N)$ and let $u$ be the solution to {\text{e}}qref{1.1} starting from $u_0$. Let $\beta,\,\,\delta,\,\,\theta\,\,\textgreater\,0$ be positive parameters.
On $L^2(\mathbb{T}^N)$, we introduce a regularization of the operators. Consider first the kinetic formulation of equation {\text{e}}qref{1.1}: $\mathbb{P}$-almost surely, \begin{align}\label{fomulation in distribution sense} \partial_t f +\,\big(F'(\mathfrak{\zeta})\cdot\nabla + A'(\mathfrak{\zeta})\big(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta\big)^\alpha\big)f =\partial_{\mathfrak{\zeta}}\big(\mathfrak{m}-\mathcal{L}_{\lambda}ac{1}{2}H^2 \delta_{u=\mathfrak{\zeta}}\big)+\varphi(x)\delta_{u=\mathfrak{\zeta}} \partial_t W {\text{e}}nd{align} where $f=\mathbbm{1}_{u\textgreater\mathfrak{\zeta}}-\mathbbm{1}_{0\textgreater\,\mathfrak{\zeta}}$. In order to handle the two measures $\mu_1:=\mathfrak{m}-\mathcal{L}_{\lambda}ac{1}{2}H^2\delta_{u=\mathfrak{\zeta}}$ and $\mu_2:= \varphi(x)\delta_{u=\mathfrak{\zeta}}\,\partial_t{W}$, we need to regularize the operators. We define $$A_\theta:= -F'(\mathfrak{\zeta})\cdot\nabla-B_\theta,\,\,\,A_{\theta,\delta}=A_\theta-\delta Id,\,\,\,\,B_\theta=C_\theta+ A'(\mathfrak{\zeta})\big(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta\big)^\alpha,\,\,\, C_\theta=\theta\big(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta\big)^\beta.$$ Let $S_{A_\theta}(t)$ and $S_{A_{\theta,\delta}}(t)$ denote the associated semigroups on $L^2(\mathbb{T}^N)$: $$S_{A_\theta}(t)\kappa(x)=\big({\text{e}} ^{-tB_{\theta}}\kappa\big)(x-F'(\mathfrak{\zeta})t),\,\,\,\,\,\,\,S_{A_{\theta,\delta}}(t)\kappa(x)= {\text{e}}^{-\delta t}\big({\text{e}}^{-t\,B_\theta}\kappa\big)(x-F'(\mathfrak{\zeta})t).$$ Adding $\big(\theta(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\beta+\delta Id\big)f$ to both sides of {\text{e}}qref{fomulation in distribution sense}, we get \begin{align} (\partial_t - A_{\theta,\delta})f =\big(C_\theta+\delta Id\big)f+\partial_{\mathfrak{\zeta}}\big(\mu_1\big)+\mu_2. {\text{e}}nd{align} Here, we adopt the semigroup approach (cf.\ \cite[Proposition 6.3]{prato}). We can express the kinetic formulation in mild form: $\mathbb{P}$-almost surely, \begin{align} f(x,t,\mathfrak{\zeta})=&S_{A_{\theta,\delta}}(t) f_0(x,\mathfrak{\zeta})+\int_0^t S_{A_{\theta,\delta}}(s)\big(C_\theta f+\delta f \big)(x,\mathfrak{\zeta},t-s )ds\notag\\&\qquad+\int_0^t S_{A_{\theta,\delta}}(t-s)\partial_{\mathfrak{\zeta}}\mu_1(x,s,\mathfrak{\zeta}) ds +\int_0^t S_{A_{\theta,\delta}}(t-s)\varphi(x) \delta_{u=\mathfrak{\zeta}} \,dW(s). {\text{e}}nd{align} We deduce the following decomposition of the solution: \begin{align}\label{u} u=u^0+u^b+P+Q {\text{e}}nd{align} where \begin{align} u^0(x,t)&=\int_{\mathbb{R}} S_{A_{\theta,\delta}}(t)f(\mathfrak{\zeta},0)d\mathfrak{\zeta},\\ u^b(x,t)&=\int_{\mathbb{R}}\int_0^t S_{A_{\theta,\delta}}(s)(C_\theta f+\delta f)(\mathfrak{\zeta},t-s) ds d\mathfrak{\zeta},\\ \langle P(t),\varphi\rangle&=\sum_{k\ge\,1}\int_{\mathbb{T}^N}\int_0^t \bigg(S^*_{A_{\theta,\delta}}(t-s)\varphi\bigg)(x,u(x,s))h_k(x)d\beta_k(s)dx,\\ &\text{and}\\ \langle Q(t), \varphi \rangle&=\int_0^t\int_{\mathbb{T}^N}\langle \partial_\mathfrak{\zeta} \mu_1(x,\cdot,t-s),S^*_{A_{\theta,\delta}}(s)\varphi\rangle_{\mathfrak{\zeta}}dx ds, {\text{e}}nd{align} where $\varphi\in C(\mathbb{T}^N)$ and $S^*_{A_{\theta,\delta}}(t)$ is the dual operator of the semigroup operator $S_{A_{\theta,\delta}}(t).$ We will estimate each term separately.
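For the estimates below, it is convenient to record how these semigroups act on the spatial Fourier modes (a direct consequence of the definitions above, written here as a sketch with the same normalization of the Fourier transform as in the next subsection): for every $j\in\mathbb{Z}^N$, $$\widehat{S_{A_{\theta,\delta}}(t)\kappa}(j)\,=\,{\text{e}}^{-\big(iF'(\mathfrak{\zeta})\cdot j+A'(\mathfrak{\zeta})|j|^{2\alpha}+\theta|j|^{2\beta}+\delta\big)t}\,\hat{\kappa}(j),$$ since the translation $x\mapsto x-F'(\mathfrak{\zeta})t$ contributes the oscillating factor while $B_\theta+\delta\, Id$ is a Fourier multiplier with symbol $A'(\mathfrak{\zeta})|j|^{2\alpha}+\theta|j|^{2\beta}+\delta$. This identity is the starting point of the computations in the next subsection.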
\subsection*{Estimate on $u^0$:} Here we use the Fourier transform with respect to $x\in\mathbb{T}^N$, that is, $$\hat{v}(j)=\int_{\mathbb{T}^N}v(x)\,e^{-2\pi i j\cdot x}dx,\,\,\,\,\,\,j\in\mathbb{Z}^N,\,\,\,\,v\in L^1(\mathbb{T}^N).$$ Taking the Fourier transform, for all $j\in\mathbb{Z}^N,\,\,j\neq0,$ we obtain \begin{align*}\hat{u}^0(j,t)&=\int_{\mathbb{R}}{\text{e}}^{-\big(iF'(\mathfrak{\zeta})\cdot j+A'(\mathfrak{\zeta})|j|^{2\alpha}+\theta|j|^{2\beta}+\delta\big)t} \hat{f}_0(j,\mathfrak{\zeta})\, d\mathfrak{\zeta}\\ &=\int_{\mathbb{R}}{\text{e}}^{-\big(iF'(\mathfrak{\zeta})\cdot \hat{j}\,|j|+A'(\mathfrak{\zeta})|j|^{2\alpha-1}|j|+\omega_j|j|\big)t}\hat{f}_0(j,\mathfrak{\zeta})\,d\mathfrak{\zeta}, {\text{e}}nd{align*} where $\omega_j=\theta|j|^{2\beta-1}+\delta|j|^{-1}$ and $\hat{j}=\mathcal{L}_{\lambda}ac{j}{|j|}$.\\ Then, for $T\,\ge\,0,\,\,\,j\neq0,$ we get \begin{align*} \int_0^T|\hat{u}^0(j,t)|^2 dt&=\int_0^T \bigg|\int_{\mathbb{R}}{\text{e}}^{-\big(iF'(\mathfrak{\zeta})\cdot \hat{j}\,|j|+A'(\mathfrak{\zeta})|j|^{2\alpha-1}|j|+\omega_j|j|\big)t}\hat{f}_0(j,\mathfrak{\zeta})\,d\mathfrak{\zeta}\bigg|^2 dt\\ &\le\int_{\mathbb{R}^+} \bigg|\int_{\mathbb{R}}{\text{e}}^{-\big(iF'(\mathfrak{\zeta})\cdot \hat{j}\,|j|+A'(\mathfrak{\zeta})|j|^{2\alpha-1}|j|+\omega_j|j|\big)t}\hat{f}_0(j,\mathfrak{\zeta})\,d\mathfrak{\zeta}\bigg|^2 dt\\ &\le\mathcal{L}_{\lambda}ac{1}{|j|}\int_{\mathbb{R}^+} \bigg|\int_{\mathbb{R}}{\text{e}}^{-\big(iF'(\mathfrak{\zeta})\cdot \hat{j}\,+A'(\mathfrak{\zeta})|j|^{2\alpha-1}+\omega_j\big)s}\hat{f}_0(j,\mathfrak{\zeta})\,d\mathfrak{\zeta}\bigg|^2 ds. {\text{e}}nd{align*} Here we used the change of variables $s=|j|t$ in the last inequality. To estimate the right-hand side, we use the Fourier transform in the time variable as follows: \begin{align} \int_0^T|\hat{u}^0(j,t)|^2 dt&\le\mathcal{L}_{\lambda}ac{1}{|j|}\int_{\mathbb{R}^{+}} \bigg|\int_{\mathbb{R}}{\text{e}}^{-iF'(\mathfrak{\zeta})\cdot \hat{j}s}\,{\text{e}}^{-\big(A'(\mathfrak{\zeta})|j|^{2\alpha-1}+\omega_j\big)|s|}\hat{f}_0(j,\mathfrak{\zeta})\,d\mathfrak{\zeta}\bigg|^2 ds\notag\\ &\le\mathcal{L}_{\lambda}ac{1}{|j|}\int_{\mathbb{R}}\bigg|\int_{\mathbb{R}}{\text{e}}^{-iF'(\mathfrak{\zeta})\cdot \hat{j}s}\,{\text{e}}^{-\big(A'(\mathfrak{\zeta})|j|^{2\alpha-1}+\omega_j\big)|s|}\hat{f}_0(j,\mathfrak{\zeta})\,d\mathfrak{\zeta}\bigg|^2 ds\notag\\ &=\mathcal{L}_{\lambda}ac{1}{|j|}\int_{\mathbb{R}}\bigg|\mathcal{F}\bigg\{\int_{\mathbb{R}}{\text{e}}^{-iF'(\mathfrak{\zeta})\cdot \hat{j}s}\,{\text{e}}^{-\big(A'(\mathfrak{\zeta})|j|^{2\alpha-1}+\omega_j\big)|s|}\hat{f}_0(j,\mathfrak{\zeta})\,d\mathfrak{\zeta}\bigg\}(r)\bigg|^2dr. {\text{e}}nd{align} We also know that \begin{align*}\mathcal{F}\bigg\{{\text{e}}^{-iF'(\mathfrak{\zeta})\cdot \hat{j}s}\,{\text{e}}^{-\big(A'(\mathfrak{\zeta})|j|^{2\alpha-1}+\omega_j\big)|s|}\bigg\}(r)&=\int_{\mathbb{R}}{\text{e}}^{-iF'(\mathfrak{\zeta})\cdot \hat{j}s}\,{\text{e}}^{-\big(A'(\mathfrak{\zeta})|j|^{2\alpha-1}+\omega_j\big)|s|}{\text{e}}^{-irs}ds\\ &=\int_{\mathbb{R}}{\text{e}}^{-\big(A'(\mathfrak{\zeta})|j|^{2\alpha-1}+\omega_j\big)|s|}{\text{e}}^{-i\big(F'(\mathfrak{\zeta})\cdot\hat{j}+r\big)s}ds\\ &=\mathcal{F}\bigg\{e^{-\big(A'(\mathfrak{\zeta})|j|^{2\alpha-1}+\omega_j\big)|s|}\bigg\}(F'(\mathfrak{\zeta})\cdot\hat{j}+r)\\ &=\mathcal{L}_{\lambda}ac{2\big(A'(\mathfrak{\zeta})|j|^{2\alpha-1}+\omega_j\big)}{\big(A'(\mathfrak{\zeta})|j|^{2\alpha-1}+\omega_j\big)^2+|F'(\mathfrak{\zeta})\cdot\hat{j}+r|^2}.
{\text{e}}nd{align*} We use the above identity to conclude that \begin{align*} \int_0^T|\hat{u}^0(j,t)|^2 dt&\le\mathcal{L}_{\lambda}ac{1}{|j|}\int_{\mathbb{R}}\bigg|\int_{\mathbb{R}}\mathcal{L}_{\lambda}ac{2\big(A'(\mathfrak{\zeta})|j|^{2\alpha-1}+\omega_j\big)}{\big(A'(\mathfrak{\zeta})|j|^{2\alpha-1}+\omega_j\big)^2+|F'(\mathfrak{\zeta})\cdot\hat{j}+r|^2} \hat{f}_0(\mathfrak{\zeta},j)d\mathfrak{\zeta}\bigg|^2 dr\\ &\le\,\mathcal{L}_{\lambda}ac{4}{|j|}\int_{\mathbb{R}}\bigg(\int_{\mathbb{R}_\mathfrak{\zeta}}|\hat{f}_0(\mathfrak{\zeta},j)|^2\mathcal{L}_{\lambda}ac{\big(A'(\mathfrak{\zeta})|j|^{2\alpha-1}+\omega_j\big)}{\big(A'(\mathfrak{\zeta})|j|^{2\alpha-1}+\omega_j\big)^2+|F'(\mathfrak{\zeta})\cdot\hat{j}+r|^2}d\mathfrak{\zeta}\bigg)\\ &\qquad\qquad\qquad\qquad\qquad\times\bigg(\int_{\mathbb{R}_\mathfrak{\zeta}}\mathcal{L}_{\lambda}ac{\big(A'(\mathfrak{\zeta})|j|^{2\alpha-1}+\omega_j\big)}{\big(A'(\mathfrak{\zeta})|j|^{2\alpha-1}+\omega_j\big)^2+|F'(\mathfrak{\zeta})\cdot\hat{j}+r|^2}d\mathfrak{\zeta}\bigg)dr\\ &\le\mathcal{L}_{\lambda}ac{4}{|j|\omega_j}\sup_r\int_\mathbb{R}\mathcal{L}_{\lambda}ac{\omega_j (A'(\mathfrak{\zeta})|j|^{2\alpha-1}+\omega_j)}{(A'(\mathfrak{\zeta})|j|^{2\alpha-1}+\omega_j)^2+|F'(\mathfrak{\zeta})\cdot\hat{j}+r|^2}\,d\mathfrak{\zeta} \\ &\qquad\qquad\times\int_{\mathbb{R}}|\hat{f}_0(\mathfrak{\zeta},j)|^2\bigg(\int_\mathbb{R}\mathcal{L}_{\lambda}ac{(A'(\mathfrak{\zeta})|j|^{2\alpha-1}+\omega_j)}{(A'(\mathfrak{\zeta})|j|^{2\alpha-1}+\omega_j)^2+|F'(\mathfrak{\zeta})\cdot\hat{j}+r|^2}dr\bigg)d\mathfrak{\zeta}\\ &\le\,\mathcal{L}_{\lambda}ac{4\mathfrak{{\text{e}}ta}(\omega_j)}{|j|\omega_j}\times\pi\times\int_{\mathbb{R}}|\hat{f}_0(\mathfrak{\zeta},j)|^2 d\mathfrak{\zeta}\\ &\le\,\mathcal{L}_{\lambda}ac{4\pi}{|j|\omega_j^{1-s}}\int_{\mathbb{R}}|\hat{f}_0(\mathfrak{\zeta},j)|^2 d\mathfrak{\zeta}, {\text{e}}nd{align*} where second last inequality due to {\text{e}}qref{non}. It implies that \begin{align*} |j|\omega_j^{1-s}\int_0^T|\hat{u}^0(j,t)|^2 dt&\le 4\pi \int_{\mathbb{R}}|\hat{f}_0(\mathfrak{\zeta},j)|^2 d\mathfrak{\zeta}. {\text{e}}nd{align*} It is easy to see that $$|j|\omega_j^{1-s}\,\ge\,\theta^{1-s}|j|^{(2\beta-1)(1-s)+1}$$ It implies that \begin{align}\label{summing} \int_0^T|j|^{2\{(\beta-\mathcal{L}_{\lambda}ac{1}{2})(1-s)+\mathcal{L}_{\lambda}ac{1}{2}\}}|\hat{u}^0(j,t)|^2 dt \le 4\pi \theta^{s-1}\int_\mathbb{R}|\hat{f}_0(\mathfrak{\zeta},k)|^2 d\mathfrak{\zeta} {\text{e}}nd{align} Observe that \begin{align*} \hat{u}^0(0,t)&=\int_{\mathbb{T}^N} u^0(t,x)dx=\int_{\mathbb{R}}\int_{\mathbb{T}^N}{\text{e}}^{-\delta t}f(0,x,\mathfrak{\zeta})d\mathfrak{\zeta} dx=\int_{\mathbb{T}^N}{\text{e}}^{-\delta t}u_0(x) dx=0, {\text{e}}nd{align*} Summing over $j\in\mathbb{Z}^N$ in {\text{e}}qref{summing}, we have \begin{align} \int_0^T\|u^0(t)\|_{H^{\beta+\big(\mathcal{L}_{\lambda}ac{1}{2}-\beta\big)s}}dt\,\le\,4\pi\,\theta^{s-1}\|f_0\|_{L_{x,\mathfrak{\zeta}}^2}^2=\,4\pi\theta^{s-1}\|u_0\|_{L_x^1}. 
{\text{e}}nd{align} \subsection*{Estimate on $u^b$:} Recall that $$u^b(x,t)=\int_{\mathbb{R}}\int_0^t S_{A_{\theta,\delta}}(s)(C_\theta f+\delta f)(\mathfrak{\zeta},t-s) ds d\mathfrak{\zeta}$$ Here, we use the same argument as above to get estimate on the Fourier transform of $u^b$ for $j\neq0$ as follows: \begin{align*} \hat{u}^b(j,t)&=\int_0^t\int_{\mathbb{R}}\big(\theta|j|^{2\alpha}+\delta\big){\text{e}}^{-s\big(iF'(\mathfrak{\zeta})\cdot \hat{j}\,|j|+A'(\mathfrak{\zeta})|j|^{2\alpha-1}|j|+\omega_j|j|\big)}\hat{f}(j,\mathfrak{\zeta},t-s)d\mathfrak{\zeta} ds\\ &=\int_0^t\int_{\mathbb{R}}|j|\omega_j {\text{e}}^{-s|j|\big(iF'(\mathfrak{\zeta})\cdot\hat{j}+A'(\mathfrak{\zeta})|j|^{2\alpha-1} +\,\omega_j\big)}\hat{f}(x,\mathfrak{\zeta},t-s) d\mathfrak{\zeta} ds. {\text{e}}nd{align*} It implies that \begin{align*} \int_0^T|\hat{u}^b(x&,j)^2 dt=\int\limits_0^T\bigg|\int\limits_0^t\int\limits_{\mathbb{R}}|j|\omega_j {\text{e}}^{-s|j|\big(iF'(\mathfrak{\zeta})\cdot\hat{j}+A'(\mathfrak{\zeta})|j|^{2\alpha-1} +\,\omega_j\big)}\hat{f}(x,\mathfrak{\zeta},t-s) d\mathfrak{\zeta} ds\bigg|^2dt\\ &=\,\int\limits_0^T\bigg|\int\limits_0^t\int\limits_{R}\big(\sqrt{|j|\omega_j}{\text{e}}^{\mathcal{L}_{\lambda}ac{-s|j|\omega_j}{2}}\big)\big(\sqrt{|j|\omega_j}{\text{e}}^{-s|j|\big(iF'(\mathfrak{\zeta})\cdot\hat{j}+A'(\mathfrak{\zeta})|j|^{2\alpha-1} +\,\mathcal{L}_{\lambda}ac{\omega_j}{2}\big)}\big)\hat{f}(x,\mathfrak{\zeta},t-s)d\mathfrak{\zeta} ds\bigg|^2 dt\\ &=\,\int\limits_0^T\bigg|\int\limits_0^t\big(\sqrt{|j|\omega_j}{\text{e}}^{\mathcal{L}_{\lambda}ac{-s|j|\omega_j}{2}}\big)\bigg(\int_{R}\sqrt{|j|\omega_j}{\text{e}}^{-s|j|\big(iF'(\mathfrak{\zeta})\cdot\hat{j}+A'(\mathfrak{\zeta})|j|^{2\alpha-1} +\,\mathcal{L}_{\lambda}ac{\omega_j}{2}\big)}\hat{f}(x,\mathfrak{\zeta},t-s)d\mathfrak{\zeta}\bigg) ds\bigg|^2 dt\\ &\le\,\int\limits_0^t|j|\omega_j e^{-s|j|\omega_js} ds\int\limits_0^T\int_0^t\bigg|\int\limits_{R}\sqrt{|j|\omega_j}{\text{e}}^{-s|j|\big(iF'(\mathfrak{\zeta})\cdot\hat{j}+A'(\mathfrak{\zeta})|j|^{2\alpha-1} +\,\mathcal{L}_{\lambda}ac{\omega_j}{2}\big)}d\mathfrak{\zeta}\bigg)\hat{f}(x,\mathfrak{\zeta}, t-s) \bigg|^2 dsdt\\ &=\,\int\limits_0^\infty|j|\omega_j e^{-s|j|\omega_js} ds\int\limits_0^T\int\limits_0^T\bigg|\int_{R}\sqrt{|j|\omega_j}{\text{e}}^{-s|j|\big(iF'(\mathfrak{\zeta})\cdot\hat{j}+A'(\mathfrak{\zeta})|j|^{2\alpha-1} +\,\mathcal{L}_{\lambda}ac{\omega_j}{2}\big)}d\mathfrak{\zeta}\bigg)\hat{f}(x,\mathfrak{\zeta},t) \bigg|^2 dsdt {\text{e}}nd{align*} Now, $$\int_0^\infty|j|\omega_j e^{-s|j|\omega_j} ds=\int_0^\infty{\text{e}}^{-s}ds=1,$$ and by similar computations as the previous subsection, we obtain \begin{align*} \int_0^T\bigg|\int_{R}\sqrt{|j|\omega_j}{\text{e}}^{-s|j|\big(iF'(\mathfrak{\zeta})\cdot\hat{j}+A'(\mathfrak{\zeta})|j|^{2\alpha-1} +\,\mathcal{L}_{\lambda}ac{\omega_j}{2}\big)}d\mathfrak{\zeta}\bigg)\hat{f}(x,\mathfrak{\zeta},t) \bigg|^2 ds\le\mathcal{L}_{\lambda}ac{4\pi \omega_j^s}{2^s}\|u(t)\|_{L_x^1} {\text{e}}nd{align*} \begin{align} \int_0^T\omega_j^s|\hat{u}^b(x,j)|^2 dt\le\mathcal{L}_{\lambda}ac{4\pi }{2^s}\int_0^T\|u(t)\|_{L_x^1}\notag {\text{e}}nd{align} \begin{align} \int_0^T |j|^{(1-2\beta)s}|\hat{u}^b(x,j)|^2 dt\le\mathcal{L}_{\lambda}ac{4\pi \theta^s}{2^s}\int_0^T\|u(t)\|_{L_x^1}\notag {\text{e}}nd{align} \begin{align} \int_0^T\|u^b(t)\|^2_{H^{\big(\mathcal{L}_{\lambda}ac{1}{2}-\beta\big)s}}dt\le\mathcal{L}_{\lambda}ac{4\pi\theta^s}{2^s}\int_0^T\|u(t)\|_{L_x^1}dt. 
{\text{e}}nd{align} \subsection*{Estimate on P:} We recall that $P$ is random measure on $\mathbb{T}^N$ given by: $$\langle P(t),\varphi\rangle=\sum_{k\ge\,1}\int_{\mathbb{T}^N}\int_0^t \bigg(S^*_{A_{\theta,\delta}}(t-s)\varphi\bigg)(x,u(x,s))h_k(x)d\beta_k(s)dx,\,\,\,\,\,\varphi\in C(\mathbb{T}^N).$$ Note that, for each $k\in\mathbb{N}$, \begin{align*} \int_{\mathbb{T}^N}\int_0^t \bigg(S^*_{A_{\theta,\delta}}(t-s)&\varphi\bigg)(x,u(x,s))h_k(x)d\beta_k(s)dx\\ &=\int_{\mathbb{T}^N}\int_0^t\, {\text{e}}^{-\delta(t-s)}\big({\text{e}}^{-B_\theta(t-s)}\varphi(x)\big)(x+F'(u(x,s)))h_k(x) d\beta_k(s) dx.\\ &=\int_{\mathbb{T}^N} \int_0^t{\text{e}}^{-\delta(t-s)}\varphi(x){\text{e}}^{-B_\theta(t-s)}h_k(x-F'(u(x,s)))d\beta_k(s)dx {\text{e}}nd{align*} Let set $p(x,\mathfrak{\zeta})=\delta_{u=\mathfrak{\zeta}} dx,$ then we define the measure \begin{align*} \langle P'(s), \varphi\rangle&=\int_{\mathbb{T}^N\times{\mathbb{R}}}{\text{e}}^{-\delta(t-s)}\varphi(x){\text{e}}^{-tB_\theta(t-s)}h_k(x-F'(\mathfrak{\zeta})) dp(x,\mathfrak{\zeta}) {\text{e}}nd{align*} where $\varphi\in L^2(\mathbb{T}^N)$. By {\text{e}}qref{noise1}, the mapping $x\mapsto h_k(x)$ is a bounded function. therefore by \cite[Lemma 9]{vovelle 2}, we can write for $\lambda\,\ge\,0$: \begin{align*} \|\big(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta\big)^\mathcal{L}_{\lambda}ac{\lambda}{2} e^{-tB_\theta(t-s)}h_k(\cdot-F'(\mathfrak{\zeta}))\|_{L^\infty(\mathbb{T}^N)}&=\|\big(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta\big)^{\mathcal{L}_{\lambda}ac{\lambda}{2}}{\text{e}}^{-tC_\theta-tA'(\mathfrak{\zeta})\big(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta\big)^\alpha }\big(h_k(\cdot-F'(\mathfrak{\zeta})(t-s))\big)\|_{L^\infty(\mathbb{T}^N)}\\ &=\|\big(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta\big)^{\mathcal{L}_{\lambda}ac{\lambda}{2}}{\text{e}}^{-tC_\theta}\bigg( {\text{e}}^{-tA'(\mathfrak{\zeta})\big(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta\big)^\alpha }\big(h_k(\cdot-F'(\mathfrak{\zeta})(t-s))\big)\bigg)\|\\ &\le\,c\big(\theta(t-s)\big)^{\mathcal{L}_{\lambda}ac{-\lambda}{2\beta}}\|{\text{e}}^{-tA'(\mathfrak{\zeta})\big(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta\big)^\alpha }\big(h_k(\cdot-F'(\mathfrak{\zeta})(t-s))\big)\|_{L^\infty(\mathbb{T}^N)}\\ &\le\,c\big(\theta(t-s)\big)^{\mathcal{L}_{\lambda}ac{-\lambda}{2\beta}}\times C_1 \|h_k\|_{C(\mathbb{T}^N)}\\ &\le\,C\big(\theta(t-s)\big)^{\mathcal{L}_{\lambda}ac{-\lambda}{2\beta}}{\text{e}}^{-\delta(t-s)}\|h_k\|_{C(\mathbb{T}^N)}. {\text{e}}nd{align*} \begin{align*} \langle \big(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta\big)^{\lambda}P'(s),\varphi\rangle&=\int_{\mathbb{T}^N\times{\mathbb{R}}}{\text{e}}^{-\delta(t-s)}\varphi(x)\big(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta\big)^\lambda e^{-tB_\theta(t-s)}h_k(x-F'(\mathfrak{\zeta})) dp(x,\mathfrak{\zeta})\\ &\le\,{\text{e}}^{-\delta(t-s)}\bigg(\int_{\mathbb{T}^N\times\mathbb{R}}|\big(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta\big)^\lambda e^{-tB_\theta(t-s)}h_k(x-F'(\mathfrak{\zeta}))|^2 dp(x,\mathfrak{\zeta})\bigg)^\mathcal{L}_{\lambda}ac{1}{2}\,\|\varphi\|_{L^2(\mathbb{T}^N)},\\ &\le\,C\big(\theta(t-s)\big)^{\mathcal{L}_{\lambda}ac{-\lambda}{2\beta}}{\text{e}}^{-\delta(t-s)}\|h_k\|_{C(\mathbb{T}^N)}\,\|\varphi\|_{L^2(\mathbb{T}^N)}. 
{\text{e}}nd{align*} This implies that \begin{align*} \|P'(s)\|_{H^\lambda(\mathbb{T}^N)}\le\,C\,\big(\theta(t-s)\big)^{\mathcal{L}_{\lambda}ac{-\lambda}{2\beta}}{\text{e}}^{-\delta(t-s)}\|h_k\|_{C(\mathbb{T}^N)}. {\text{e}}nd{align*} We deduce that, for $\lambda\in[0,\beta)$, this defines a function in $H^\lambda(\mathbb{T}^N)$ and \begin{align*} \mathbb{E}\bigg(\bigg\|\int_0^t {\text{e}}^{-\delta(t-s)}&{\text{e}}^{-tB_\theta(t-s)}\big(h_k(\cdot-F'(u(\cdot,s)))d\beta_k(s)\big)\bigg\|_{H^\lambda(\mathbb{T}^N)}^2\bigg)\\ &=\mathbb{E}\int_0^t \|P'(s)\|_{H^\lambda(\mathbb{T}^N)}^2 ds\\ &\le\,C\int_0^t{\text{e}}^{-2\delta(t-s)}(\theta(t-s))^{\mathcal{L}_{\lambda}ac{-\lambda}{\beta}}\|h_k\|_{C(\mathbb{T}^N)}^2ds\\ &\le\,C\,\theta^{-\mathcal{L}_{\lambda}ac{\lambda}{\beta}}\delta^{\mathcal{L}_{\lambda}ac{\lambda}{\beta}-1}\bigg(\int_{\mathbb{R}^+}{\text{e}}^{-2s}\,s^{-\mathcal{L}_{\lambda}ac{\lambda}{\beta}}ds\bigg)\,\|h_k\|_{C(\mathbb{T}^N)}^2. {\text{e}}nd{align*} Since the sum over $k$ of the right-hand side is finite, we conclude \begin{align*} \mathbb{E}\bigg(\bigg\|&\sum_{k\ge1}\int_0^t{\text{e}}^{B_\theta (t-s)}\big(h_k(\cdot-F'(u(\cdot,s))(t-s))\big)d\beta_k (s)\bigg\|_{H^\lambda(\mathbb{T}^N)}^2\bigg)\\ &\le C\theta^{-\mathcal{L}_{\lambda}ac{\lambda}{\alpha}-1}(\int_0^{+\infty}{\text{e}}^{-2s} s^{-\mathcal{L}_{\lambda}ac{\lambda}{\alpha}} ds) C_0. {\text{e}}nd{align*} In fact, this shows that $P(t)$ is more regular than a measure. It lies in the space $L^2(\Omega;H^\lambda(\mathbb{T}^N))$ for $\lambda\,\textless\,\alpha$ and satisfies the following estimate \begin{align} \mathbb{E}\big(\|P(t)\|_{H^\lambda(\mathbb{T}^N)}^2\big)\,\le\,C\,\theta^{\mathcal{L}_{\lambda}ac{-\lambda}{\alpha}}\delta^{\mathcal{L}_{\lambda}ac{\lambda}{\alpha}-1}\big(\int_0^{+\infty}{\text{e}}^{-2s}\,s^{\mathcal{L}_{\lambda}ac{-\lambda}{\alpha}}ds\big)C_0. {\text{e}}nd{align} \subsection*{Bound on the Kinetic measure:} \begin{lem} Let $u:\mathbb{T}^N\times[0,T]\times\Omega\to\mathbb{R}$ be the solution to {\text{e}}qref{1.1} with initial data $u_0$. Then the measure $\mu_1$ satisfies \begin{align} \mathbb{E} \int_{\mathbb{T}^N\times[0,T]\times\mathbb{R}} \theta_1(\mathfrak{\zeta}) d|\mu_1|(x,t,\mathfrak{\zeta})\,\le\, C_0\, \mathbb{E}\|\theta_1(u)\|_{L^1(\mathbb{T}^N\times[0,T])}+\mathbb{E}\int_{\mathbb{T}^N}{\text{e}}nsuremath{\mathcal{T}}heta(u_0(x)) dx {\text{e}}nd{align} for all non-negative $\theta_1\in C_c(\mathbb{R})$, where ${\text{e}}nsuremath{\mathcal{T}}heta(s)=\int_0^s\int_0^\sigma\theta_1(r) dr d\sigma.$ {\text{e}}nd{lem} \begin{proof} One can follow similar lines as proposed in \cite[Lemma 11]{vovelle 2} for a proof. {\text{e}}nd{proof} \subsection*{Estimate on $Q$:} Recall that \begin{align*} \langle Q(t), \varphi \rangle&=\int_0^t\int_{\mathbb{T}^N}\langle \partial_\mathfrak{\zeta} \mu_1(x,\cdot,t-s),S^*_{A_{\theta,\delta}}(s)\varphi\rangle_{\mathfrak{\zeta}}dx ds.
{\text{e}}nd{align*} By using the following identity \begin{align*} \partial_{\mathfrak{\zeta}} \big(S_{A_{\theta,\delta}}^*(t-s)\varphi(x)\big)=-(t-s)F''(\mathfrak{\zeta})\cdot\nabla\big(S_{A_{\theta,\delta}}^*(t-s)\varphi\big)-(t-s)A''(\mathfrak{\zeta})\big(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta\big)^\alpha\big(S_{A_{\theta,\delta}}(t-s)\varphi\big), {\text{e}}nd{align*} we have \begin{align*} \langle Q(t), \varphi \rangle&=-\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}\partial_\mathfrak{\zeta}\big( S^*_{A_{\theta,\delta}}(t-s)\varphi\big) d\mu_1(x,\mathfrak{\zeta},s)\\ &=\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}(t-s)F''(\mathfrak{\zeta})\cdot\nabla\big(S_{A_{\theta,\delta}}^*(t-s)\varphi\big)d\mu_1(x,s,\mathfrak{\zeta})\\&\qquad\qquad\qquad\qquad\qquad+\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}A''(\mathfrak{\zeta})\big(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta\big)^\alpha\big(S_{A_{\theta,\delta}}(t-s)\varphi\big) d\mu_1(x,s,\mathfrak{\zeta})\\ &:=\langle Q_1, \varphi\rangle+\langle Q_2, \varphi \rangle, {\text{e}}nd{align*} where \begin{align*} \langle Q_1, \varphi\rangle&=\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}(t-s)F''(\mathfrak{\zeta})\cdot\nabla\big(S_{A_{\theta,\delta}}^*(t-s)\varphi\big)d\mu_1(x,s,\mathfrak{\zeta}),\\ \langle Q_2, \varphi \rangle&=\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}A''(\mathfrak{\zeta})\big(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta\big)^\alpha\big(S_{A_{\theta,\delta}}(t-s)\varphi\big) d\mu_1(x,s,\mathfrak{\zeta}). {\text{e}}nd{align*} From \cite[Section 3.6]{vovelle 2}, we can deduce the following estimate on $Q_1:$ \begin{align}\label{Q-1} \mathbb{E}\|Q_1\|_{L^1((0,T);W^{\lambda,p}(\mathbb{T}^N))}\,\le\,C_{N,\beta,\lambda,p}\theta^{-2}\big(\mathcal{L}_{\lambda}ac{\theta}{\delta}\big)^{\kappa}\bigg(D\,\mathbb{E}\|F''(u)\|_{L^1(\mathbb{T}^N\times(0,T))}+\mathbb{E}\int_{\mathbb{T}^N}{\text{e}}nsuremath{\mathcal{T}}heta(u_0)dx\bigg), {\text{e}}nd{align} where ${\text{e}}nsuremath{\mathcal{T}}heta(s)=\int_0^s\int_0^\sigma|F'(r)|drd\sigma,$ provided $\kappa=2-\mathcal{L}_{\lambda}ac{N+\lambda+1}{2\beta}+\mathcal{L}_{\lambda}ac{N}{2\beta}\,\textgreater\,0.$\\ By a similar calculation as in \cite[Section 3.6]{vovelle 2}, we can get the following estimate on $Q_2:$ \begin{align}\label{Q-2} \mathbb{E}\|Q_2\|_{L^1((0,T);W^{\lambda,p}(\mathbb{T}^N))}\,\le\,C_{N,\beta,\lambda, p}\,\theta^{-2}\,\big(\mathcal{L}_{\lambda}ac{\theta}{\delta}\big)^{\kappa_1}\bigg(D\,\mathbb{E}\|A''(u)\|_{L^1(\mathbb{T}^N\times(0,T))}+\mathbb{E}\int_{\mathbb{T}^N}{\text{e}}nsuremath{\mathcal{T}}heta(u_0)dx\bigg), {\text{e}}nd{align} where $\kappa_1=2-\mathcal{L}_{\lambda}ac{N+\lambda+2\alpha}{2\beta}+\mathcal{L}_{\lambda}ac{N}{2\beta}$. \subsection*{$W^{s,q}$-regularity bound on $u$:} Here, we first collect all the estimates together.
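We will also use the following Sobolev embedding on the torus (a standard fact, recalled here for convenience): for $0\,\le\,r\,\le\,a$ and $1\,\le\,q\,\textless\,\infty$, $$H^{a}(\mathbb{T}^N)\hookrightarrow W^{r,q}(\mathbb{T}^N)\qquad\text{whenever}\qquad a-\mathcal{L}_{\lambda}ac{N}{2}\,\ge\,r-\mathcal{L}_{\lambda}ac{N}{q}.$$ With $a=\big(\mathcal{L}_{\lambda}ac{1}{2}-\beta\big)s$ this condition reads $\mathcal{L}_{\lambda}ac{1}{q}\,\ge\,\mathcal{L}_{\lambda}ac{r}{N}+\mathcal{L}_{\lambda}ac{1}{2}-\mathcal{L}_{\lambda}ac{(1-2\beta)s}{2N}$, which is exactly the restriction on $r$ and $q$ imposed below.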
For $\lambda\,\textless\,\beta$ and $\beta\,\textless\,\mathcal{L}_{\lambda}ac{1}{2}$, we have the following estimates: \begin{align} \int_0^T\|u^0(t)\|_{H^{\beta+\big(\mathcal{L}_{\lambda}ac{1}{2}-\beta\big)s}}dt\,&\le\,\,4\pi\theta^{s-1}\|u_0\|_{L_x^1},\label{u-1}\\ \int_0^T\|u^b(t)\|^2_{H^{\big(\mathcal{L}_{\lambda}ac{1}{2}-\beta\big)s}}dt&\le\mathcal{L}_{\lambda}ac{4\pi\theta^s}{2^s}\int_0^T\|u(t)\|_{L_x^1}dt,\label{u-2}\\ \mathbb{E}\big(\|P(t)\|_{H^\lambda(\mathbb{T}^N)}^2\big)\,&\le\,C(\theta,\delta, D), {\text{e}}nd{align} and \begin{align}\label{u-4} \mathbb{E}\|Q\|_{L^1((0,T);W^{\lambda,p}(\mathbb{T}^N))}\,\le\,&C_{N,\beta,\lambda,p}\theta^{-2}\bigg(\big(\mathcal{L}_{\lambda}ac{\theta}{\delta}\big)^{\kappa}+\big(\mathcal{L}_{\lambda}ac{\theta}{\delta}\big)^{\kappa_1}\bigg)\notag\\&\qquad\bigg(\,\mathbb{E}\|F''(u)\|_{L^1(\mathbb{T}^N\times(0,T))}+\mathbb{E}\|A''(u)\|_{L^1(\mathbb{T}^N)}+\mathbb{E}\int_{\mathbb{T}^N}{\text{e}}nsuremath{\mathcal{T}}heta(u_0)dx\bigg). {\text{e}}nd{align} Since $H^{\beta+\big(\mathcal{L}_{\lambda}ac{1}{2}-\beta\big)s}(\mathbb{T}^N)\hookrightarrow H^{\big(\mathcal{L}_{\lambda}ac{1}{2}-\beta\big)s}(\mathbb{T}^N),$ from {\text{e}}qref{u-1} we have \begin{align}\label{u-5} \int_0^T\|u^0(t)\|_{H^{\big(\mathcal{L}_{\lambda}ac{1}{2}-\beta\big)s}}dt\,\le\,\,C\,\|u_0\|_{L_x^1}. {\text{e}}nd{align} Let $r\,\textgreater\,0$ and $q\,\ge\,1$ be such that $r\,\le\,(\mathcal{L}_{\lambda}ac{1}{2}-\beta)s$ and $\mathcal{L}_{\lambda}ac{1}{q}\,\ge\,\mathcal{L}_{\lambda}ac{r}{N}+\mathcal{L}_{\lambda}ac{1}{2}-\mathcal{L}_{\lambda}ac{(1-2\beta)s}{2N}$; then $$H^{\big(\mathcal{L}_{\lambda}ac{1}{2}-\beta\big)s}(\mathbb{T}^N)\hookrightarrow W^{r,q}(\mathbb{T}^N).$$ This implies that \begin{align} \mathbb{E}\big(\|u^0+u^b+P\|_{L^2((0,T);W^{r,q})}^2\big)\le\,C(\theta,\delta, C_0)\big(1+\mathbb{E}\|u\|_{L^1(\mathbb{T}^N\times(0,T))}+\mathbb{E}\|u_0\|_{L^1(\mathbb{T}^N)}+T\big). {\text{e}}nd{align} By the H\"older inequality, we have \begin{align}\label{w1} \mathbb{E}\big(\|u^0+u^b+P\|_{L^1((0,T);W^{r,q}(\mathbb{T}^N))}^2\big)\le\,C(\theta,\delta, C_0)\,T\,\big(1+\mathbb{E}\|u\|_{L^1(\mathbb{T}^N\times(0,T))}+\mathbb{E}\|u_0\|_{L^1(\mathbb{T}^N)}+T\big), {\text{e}}nd{align} and by Jensen's inequality, \begin{align}\label{w2} \mathbb{E}\big(\|u^0+u^b+P\|_{L^1((0,T);W^{r,q}(\mathbb{T}^N))}\big)\le\,C(\theta,\delta, C_0)\,T^{1/2}\,\big(1+\mathbb{E}\|u\|_{L^1(\mathbb{T}^N\times(0,T))}+\mathbb{E}\|u_0\|_{L^1(\mathbb{T}^N)}+T\big)^{1/2}. {\text{e}}nd{align} Now, by Young's inequality, we obtain \begin{align}\label{w3} T\big(1+\mathbb{E}\|u_0\|_{L^1(\mathbb{T}^N)}\big)\,\le\,T^2+\big(1+\mathbb{E}\|u_0\|_{L^1(\mathbb{T}^N)}\big)^2/2, {\text{e}}nd{align} and \begin{align}\label{w4} T\mathbb{E}\|u\|_{L^1((0,T)\times\mathbb{T}^N)}\le\,4\tau T^2+\mathcal{L}_{\lambda}ac{\mathbb{E}^2\|u\|_{L^1((0,T)\times\mathbb{T}^N)}}{16\tau}\,\,\text{for some}\,\,\tau\,\textgreater\,0. {\text{e}}nd{align} From {\text{e}}qref{w1}-{\text{e}}qref{w4}, choosing $\tau$ suitably, we obtain \begin{align}\label{uu} \mathbb{E}\big(\|u^0+u^b+P\|_{L^1((0,T);W^{r,q}(\mathbb{T}^N))}\big)\le\,C(\theta,\delta, C_0)\,\big(1+\mathbb{E}\|u_0\|_{L^1(\mathbb{T}^N)}+T\big) +\mathcal{L}_{\lambda}ac{1}{4}\mathbb{E}\|u\|_{L^1((0,T)\times\mathbb{T}^N)}.
{\text{e}}nd{align} Under the growth hypotheses (sub-linearity of $A''$ and $F''$), the bound {\text{e}}qref{u-4} gives \begin{align} \mathbb{E}\|Q\|_{L^1((0,T);W^{\lambda,p}(\mathbb{T}^N))}\,\le\,&C_{N,\beta,\lambda,p}\theta^{-2}\bigg(\big(\mathcal{L}_{\lambda}ac{\theta}{\delta}\big)^{\kappa}+\big(\mathcal{L}_{\lambda}ac{\theta}{\delta}\big)^{\kappa_1}\bigg)\notag\bigg(\,1+\mathbb{E}\|u\|_{L^1(\mathbb{T}^N\times(0,T))}+\mathbb{E}\|u_0\|_{L^3(\mathbb{T}^N)}^3\bigg). {\text{e}}nd{align} If we choose $\theta,\delta\,\textgreater\,0$ such that $$C_{N,\beta,\lambda,p}\theta^{-2}\bigg(\big(\mathcal{L}_{\lambda}ac{\theta}{\delta}\big)^{\kappa}+\big(\mathcal{L}_{\lambda}ac{\theta}{\delta}\big)^{\kappa_1}\bigg)\,\le\,\mathcal{L}_{\lambda}ac{1}{4},$$ then we obtain \begin{align}\label{uuu} \mathbb{E}\|Q\|_{L^1((0,T);W^{\lambda,p}(\mathbb{T}^N))}\,\le\,&\mathcal{L}_{\lambda}ac{1}{4}\mathbb{E}\|u\|_{L^1(\mathbb{T}^N\times(0,T))}+C(\theta,\delta, D)\big(1+\mathbb{E}\|u_0\|_{L^3(\mathbb{T}^N)}^3\big). {\text{e}}nd{align} Choose $r\,\textgreater\,0$ and $q\,\textgreater\,1$ such that $\min\{\lambda,\mathcal{L}_{\lambda}ac{1}{2}-\beta\}\,\ge\,r\,$ and $\mathcal{L}_{\lambda}ac{1}{q}\,\ge\,\max\{\mathcal{L}_{\lambda}ac{r}{N}+\mathcal{L}_{\lambda}ac{1}{p}-\mathcal{L}_{\lambda}ac{\lambda}{N}\,,\,\mathcal{L}_{\lambda}ac{r}{N}+\mathcal{L}_{\lambda}ac{1}{2}-\mathcal{L}_{\lambda}ac{(1-2\beta)s}{2N}\}$; from {\text{e}}qref{u}, {\text{e}}qref{uu}, {\text{e}}qref{uuu} we deduce the following estimate \begin{align*} \mathbb{E}\|u\|_{L^1(0,T; W^{r,q}(\mathbb{T}^N))}\,\le\,C\big(1+T+\mathbb{E}\|u_0\|_{L^3(\mathbb{T}^N)}^3\big)+\mathcal{L}_{\lambda}ac{1}{2}\mathbb{E}\|u\|_{L^1((0,T)\times\mathbb{T}^N)}, {\text{e}}nd{align*} and thus we conclude \begin{align}\label{final} \mathbb{E}\|u\|_{L^1(0,T; W^{r,q}(\mathbb{T}^N))}\,\le\,C(\delta,\theta,D)\big(1+T+\mathbb{E}\|u_0\|_{L^3(\mathbb{T}^N)}^3\big). {\text{e}}nd{align} Since $W^{r,q}(\mathbb{T}^N)$ is compactly embedded in $L^1(\mathbb{T}^N)$, the Krylov-Bogoliubov theorem gives the existence of an invariant measure. Indeed, let $\lambda_{t,u_0}$ be the law of the solution $u(t)$ and define $$\mathfrak{\mathcal{V}}_T:=\mathcal{L}_{\lambda}ac{1}{T}\int_0^T\lambda_{s,u_0}ds.$$ By Markov's inequality, $$\mathbb{P}\bigg(\mathcal{L}_{\lambda}ac{1}{T}\int_0^T\|u(s)\|_{W^{r,q}(\mathbb{T}^N)}ds\,\ge\,R\bigg)\,\le\,\mathcal{L}_{\lambda}ac{\mathbb{E}\|u\|_{L^1([0,T];W^{r,q}(\mathbb{T}^N))}}{R\,T}\,\le\,C\,\mathcal{L}_{\lambda}ac{(\mathbb{E}\|u_0\|_{L^3(\mathbb{T}^N)}^3+1)}{R}.$$ This shows that the family $(\mathfrak{\mathcal{V}}_T)_{T\textgreater0}$ is tight. By Prokhorov's theorem, there exists a sequence $(t_n)$ increasing to $\infty$ such that $\mathfrak{\mathcal{V}}_{t_n}$ converges weak-* to a measure $\lambda$. The Krylov-Bogoliubov theorem then ensures that $\lambda$ is an invariant measure. \subsection{Uniqueness of invariant measure}\label{section 5} In this subsection, we prove the uniqueness of the invariant measure. \noindent \textbf{Strategy:} In the first step, we will prove that, if the initial data lies in a fixed ball and the noise is small in $W^{2,\infty}(\mathbb{T}^N)$ on some time interval, then the time average of the solution is small. In the second step, we will show that the solution enters a ball of some fixed radius in a finite time almost surely. With the help of this property, we will construct an increasing sequence of stopping times. Finally, in the third step, we will analyze the behavior of the solution at large times with the help of the constructed sequence of stopping times and the strong Markov property.
In this subsection, for technical purposes, we refer to \cite[Section 4]{vovelle 2}. \subsection*{Smallness of solution:} \begin{proposition}\label{smallness} Let $u_0\in L^1(\mathbb{T}^N)$ and let $u$ be the solution to {\text{e}}qref{1.1} with initial datum $u_0$ in the sense of Definition \ref{definition kinetic solution in l1 setting}. Suppose that $F$ and $A$ satisfy {\text{e}}qref{flux uniqueness}. Then, for any $\varepsilon\,\textgreater\,0$, there exist $T\,\textgreater\,0$ and $\mathfrak{{\text{e}}ta}\,\textgreater\,0$ such that $$\mathcal{L}_{\lambda}ac{1}{T}\int_0^T\|u(s)\|_{L^1(\mathbb{T}^N)}ds\,\le\,\mathcal{L}_{\lambda}ac{\varepsilon}{2},$$ whenever $\|u(0)\|_{L^1(\mathbb{T}^N)}\,\le\,2\kappa_1\,\text{ and }\,\sup_{t\in[0,T]}\|\Psi W(t)\|_{W^{2,\infty}(\mathbb{T}^N)}\,\le\,\mathfrak{{\text{e}}ta}. $ {\text{e}}nd{proposition} \begin{proof} We first take $\tilde{u}_0\in L^2(\mathbb{T}^N)$ such that $$\|u_0-\tilde{u}_0\|_{L^1(\mathbb{T}^N)}\,\le\,\mathcal{L}_{\lambda}ac{\varepsilon}{8}\,\,\,\text{and}\,\,\,\|\tilde{u}_0\|_{L^2(\mathbb{T}^N)}\,\le\,C\,\kappa_1\varepsilon^{-N/2}.$$ Let $\tilde{u}$ be the kinetic solution starting from $\tilde{u}_0$. By {\text{e}}qref{contraction principal}, we know that $$\|u(t)-\tilde{u}(t)\|_{L^1(\mathbb{T}^N)}\,\le\,\mathcal{L}_{\lambda}ac{\varepsilon}{8},\,\,t\,\ge\,0.$$ If we define $v=\tilde{u}-\Psi W$, then $v$ is a kinetic solution to the following equation \begin{align*} \partial_t v + \mbox{div}F(v+\Psi W)+(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha(A(v+\Psi W))=0 {\text{e}}nd{align*} and $g(x,\mathfrak{\zeta},t)=\mathbbm{1}_{v(x,t)\,\textgreater\,\mathfrak{\zeta}}-\mathbbm{1}_{0\,\textgreater\,\mathfrak{\zeta}}$,\,$f=\mathbbm{1}_{\tilde{u}\,\textgreater\,\mathfrak{\zeta}}-\mathbbm{1}_{0\,\textgreater\,\mathfrak{\zeta}}$ satisfy the following kinetic formulation in $\mathcal{D}'(\mathbb{T}^N\times[0,T]\times\mathbb{R})$ (see Appendix \ref{A}): \begin{align}\label{kinetic formulation for v} \partial_t g(x,\mathfrak{\zeta},t)&+ F'(\mathfrak{\zeta})\cdot \nabla g(x,\mathfrak{\zeta},t) +A'(\mathfrak{\zeta})(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha g(x,\mathfrak{\zeta},t)\notag\\&=(F'(\mathfrak{\zeta})-F'(\mathfrak{\zeta}+\Psi W))\cdot\nabla g(x,\mathfrak{\zeta},t)-F'(\mathfrak{\zeta}+\Psi W)\cdot \nabla \big(\Psi W\big)\delta_{v=\mathfrak{\zeta}}\notag\\ &\qquad\qquad+\big(A'(\mathfrak{\zeta})(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha g(x,\mathfrak{\zeta},t)-A'(\mathfrak{\zeta}+\Psi W)\big((-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha f\big)(x,\mathfrak{\zeta}+\Psi(x)W,t)\big)+\partial_\mathfrak{\zeta} p(x,\mathfrak{\zeta},t), {\text{e}}nd{align} where the kinetic measure satisfies $$p(x,\mathfrak{\zeta},t)\,\ge\,\int_{\mathbb{R}^N}{|A(\tilde{u}(x+z))-A(\Psi(x)W(x)+\mathfrak{\zeta})|}\mathbbm{1}_{\mbox{Conv}\{\tilde{u}(x+z)-\Psi(x)W(x),v(x)\}}(\mathfrak{\zeta})d\mu(z).$$ Since both $u$ and $h_k$ have zero spatial average, so does $v$. We again use $B_\theta$ and $A_{\theta,\delta}$ defined as before.
Then we can rewrite {\text{e}}qref{kinetic formulation for v} in $\mathcal{D}'(\mathbb{T}^N\times[0,T]\times\mathbb{R})$ as follows: \begin{align}\label{2} \partial_t g + A_{\theta, \delta} g& =(B_\theta + \delta Id) g +(F'(\mathfrak{\zeta})-F'(\mathfrak{\zeta}+\Psi W))\cdot\nabla g(x,\mathfrak{\zeta},t)-F'(\mathfrak{\zeta}+\Psi W)\cdot \nabla\big(\Psi W\big)\delta_{v=\mathfrak{\zeta}}\notag\\ &\qquad\qquad+\big(A'(\mathfrak{\zeta})(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha g(x,\mathfrak{\zeta},t)-A'(\mathfrak{\zeta}+\Psi W)\big((-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha f\big)(x,\mathfrak{\zeta}+\Psi(x)W,t)\big)+\partial_\mathfrak{\zeta} p(x,\mathfrak{\zeta},t). {\text{e}}nd{align} By using semi group approach (cf. \cite[Proposition 6.3]{prato}), we can get the following decomposition of $v$: $$v=v^0+v^b+v^\#+P^W+P^A+P^\alpha+P^\mathfrak{m},$$ where \begin{align*} v^0(t):&=\int_{\mathbb{R}} S_{A_{\theta,\delta}}(t) g(0,\mathfrak{\zeta}) d\mathfrak{\zeta},\\ v^b(t):&=\int_{\mathbb{R}}\int_0^t S_{A_{\theta,\delta}}(s)(B_{\theta}+\delta Id) g(\mathfrak{\zeta}, t-s) ds d\mathfrak{\zeta},\\ v^\#(t):&=\int_{\mathbb{R}}\int_0^t S_{A_{\theta,\delta}}(s)\big(F'(\mathfrak{\zeta})-F'(\mathfrak{\zeta}+\Psi W)\big)\cdot \nabla g(\mathfrak{\zeta}, t-s) ds d\mathfrak{\zeta},\\ \langle P^W(t),\varphi\rangle:&=-\int_{\mathbb{T}^N\times[0,t]}\big(F'(v+\Psi W)\cdot \nabla\big(\Psi W\big)\big)(x,s) \big(S^*(t-s)\varphi\big)(x,v(x,s)) ds dx,\\ \langle P^A(t),\varphi\rangle:&=\int_{\mathbb{T}^N}\int_{\mathbb{R}}\int_0^t A'(\mathfrak{\zeta})(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha S_{A_{\theta,\delta}}^*(t-s)\varphi(x) g(x,\mathfrak{\zeta},s) ds d\mathfrak{\zeta} dx,\\ \langle P^\alpha(t),\varphi \rangle:&=\int_{\mathbb{T}^N\times\mathbb{R}\times[0,t]} A'(\mathfrak{\zeta})\big((-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha S_{A_{\theta,\delta}}^*(t-s)\varphi\big)(x,\mathfrak{\zeta}-\Psi W) f(x,\mathfrak{\zeta},s) ds d\mathfrak{\zeta} dx,\\ \langle P^\mathfrak{m}(t),\varphi\rangle:&=-\int_{\mathbb{T}^N\times\mathbb{R}\times[0,t]}\partial_{\mathfrak{\zeta}}\big(S_{A_{\theta,\delta}}^*(t-s)\varphi\big) dp(x,\mathfrak{\zeta},s). {\text{e}}nd{align*} We estimate above terms in the following steps. \noindent $\textbf{Step 1:}$ \textbf{Estimates on $v_0$ and $v^b$:} Reproducing similar argument of previous subsections, we can deduce the following estimates: $$\int_0^T\|v^0(t)\|_{H^\alpha(\mathbb{T}^N})^2\,\le\,C\,\theta^{s-1}\|u_0\|_{L^1(\mathbb{T}^N)} $$ and $$\int_0^T\|v^b(t)\|_{L^2(\mathbb{T}^N)}^2 dt\,\le\,C\,\theta^s\int_0^T\|v(t)\|_{L^1(\mathbb{T}^N)} dt.$$ These imply \begin{align} \int_0^T\|v^0(t)\|_{L^1(\mathbb{T}^N)}dt\,\le\,T^{1/2}\bigg(\int_0^T\|v_0\|_{L^2(\mathbb{T}^N)})ds\bigg)^{1/2}\,\le\,C\,T^{1/2}\big( \theta^{s-1}\kappa_1\big)^{1/2}, {\text{e}}nd{align} and \begin{align} \int_0^T\|v^b(t)\|_{L^1(\mathbb{T}^N)}dt\,\le\,C\,T^{1/2}\bigg(\theta^{s}\,\int_0^T\|v(t)\|_{L^1(\mathbb{T}^N)}\bigg)^{1/2}\le\,C\,T\,\theta^{s} + \mathcal{L}_{\lambda}ac{1}{12}\int_0^T\|v(t)\|_{L^1(\mathbb{T}^N)}dt. 
{\text{e}}nd{align} \noindent $\textbf{Step 2:}$ \textbf{Estimate on $v^\#$:} \begin{align*} v^\#(t):&=\int_{\mathbb{R}}\int_0^t S_{A_{\theta,\delta}}(s)\big(F'(\mathfrak{\zeta})-F'(\mathfrak{\zeta}+\Psi W)\big)\cdot \nabla g(\mathfrak{\zeta},t-s) ds d\mathfrak{\zeta}, {\text{e}}nd{align*} By using chain rule, we have \begin{align*} (F'(\mathfrak{\zeta})-F'(\mathfrak{\zeta}+\Psi(x)W))\cdot\nabla g(x,\mathfrak{\zeta},s)=\nabla\cdot ((F'(\mathfrak{\zeta})-F'(\mathfrak{\zeta}+\Psi(x)W))g(x,\mathfrak{\zeta},s))\\\qquad-(F''(\mathfrak{\zeta}+\Psi(x)W)\cdot\Psi(x)W)g(x,\mathfrak{\zeta},s), {\text{e}}nd{align*} then it implies that \begin{align*} \langle v^\#(t),\varphi\rangle&=\int_{\mathbb{T}^N}\int_{\mathbb{R}}\int_0^t \varphi(x) S_{A_{\theta,\delta}}(t-s) \nabla\cdot ((F'(\mathfrak{\zeta})-F'(\mathfrak{\zeta}+\Psi(x)W))g(x,\mathfrak{\zeta},s))\,ds d\mathfrak{\zeta} dx\\ &\qquad-\int_{\mathbb{T}^N}\int_{\mathbb{R}}\int_0^t\varphi(x) S_{A_{\theta,\delta}}(t-s)((F''(\mathfrak{\zeta}+\Psi(x)W)\cdot\Psi(x)W)g(x,\mathfrak{\zeta},s))ds d\mathfrak{\zeta} dx,\\ &=\int_{\mathbb{T}^N}\int_{\mathbb{R}}\int_0^t\nabla (S_{A_{\theta,\delta}}^*(t-s)\varphi(x)) \cdot ((F'(\mathfrak{\zeta})-F'(\mathfrak{\zeta}+\Psi(x)W))g(x,\mathfrak{\zeta},s))\,ds d\mathfrak{\zeta} dx\\ &\qquad-\int_{\mathbb{T}^N}\int_{\mathbb{R}}\int_0^t S_{A_{\theta,\delta}}^*(t-s)\varphi(x)((F''(\mathfrak{\zeta}+\Psi(x)W)\cdot\nabla(\Psi(x)W))g(x,\mathfrak{\zeta},s))ds d\mathfrak{\zeta} dx\\ &=:\langle w_1(t),\varphi\rangle +\langle w_2(t), \varphi \rangle. {\text{e}}nd{align*} We have, by using \cite[Lemma 9]{vovelle 2} \begin{align*} |\langle w_1(t) , \varphi \rangle|&\le\,\int_{\mathbb{T}^N}\int_{\mathbb{R}}\int_0^t|\nabla (S_{A_{\theta,\delta}}^*(t-s)\varphi(x)) \cdot ((F'(\mathfrak{\zeta})-F'(\mathfrak{\zeta}+\Psi(x)W))g(x,\mathfrak{\zeta},s))|dsd\mathfrak{\zeta} dx\\ &\le\,C\,\mathfrak{{\text{e}}ta} \|\varphi\|_{L^\infty(\mathbb{T}^N)}\,\int_{\mathbb{T}^N}\int_{\mathbb{R}}\int_0^t (\theta(t-s))^{-\mathcal{L}_{\lambda}ac{1}{2\beta}}{\text{e}}^{-\delta (t-s)}|g(x,\mathfrak{\zeta},s)|ds d\mathfrak{\zeta} dx. {\text{e}}nd{align*} It shows that \begin{align} \int_0^T\|w_1(t)\|_{L^1(\mathbb{T}^N)}dt\,&\le\,C\,\mathfrak{{\text{e}}ta}\,\sup_{s\in[0,T]}\big(\int_0^T{\text{e}}^{-\delta(t-s)}(\theta(t-s))^{-\mathcal{L}_{\lambda}ac{1}{2\beta}}dt\big)\int_0^T\|v(s)\|_{L^1(\mathbb{T}^N)}\,ds\notag\\ &\le\,C\,\mathfrak{{\text{e}}ta}\,\theta^{-\mathcal{L}_{\lambda}ac{1}{2\beta}}\delta^{\mathcal{L}_{\lambda}ac{1}{2\beta}-1}\,\int_0^\infty {\text{e}}^{-t} t^{-\mathcal{L}_{\lambda}ac{1}{2\beta}}dt\,\int_0^T\|v(s)\|_{L^1(\mathbb{T}^N)}ds. {\text{e}}nd{align} Similarly we can get \begin{align} \int_0^T\|w_2(t)\|_{L^1(\mathbb{T}^N)}dt\,\le\,C\,\mathfrak{{\text{e}}ta}\,\int_0^T\|v(t)\|_{L^1(\mathbb{T}^N)}ds. {\text{e}}nd{align} \noindent $\textbf{Step 3:}$ \textbf{Estimate on $P^W$:} \begin{align*} \langle P^W(t), \varphi \rangle\,&\le\,C\,\int_0^t\big(1+\|v(s)\|_{L^1(\mathbb{T}^N)}+\|\Psi W(s)\|_{L^1(\mathbb{T}^N)}\big)\|\nabla \Psi W(s)\|_{L^\infty(\mathbb{T}^N)}\|S^*_{A_{\theta,\delta}}(t-s)\varphi\|_{L^\infty(\mathbb{T}^N)}ds\\ &\le\,C\,\|\varphi\|_{L^\infty(\mathbb{T}^N)}\mathfrak{{\text{e}}ta}\,\int_0^t(1+\|v(s)\|_{L^1(\mathbb{T}^N)})\,{\text{e}}^{-\delta(t-s)} ds. {\text{e}}nd{align*} Therefore $P^W $is more regular than a measure. In particular, we have \begin{align} \int_0^T\|P^W(t)\|_{L^1(\mathbb{T}^N)}dt\,\le\,C\,\mathcal{L}_{\lambda}ac{\mathfrak{{\text{e}}ta}}{\delta}\int_0^T\big(1+\|v(s)\|_{L^1(\mathbb{T}^N)}\big) ds. 
{\text{e}}nd{align} \noindent $\textbf{Step 4:}$ \textbf{Estimate on $P^A$ and $P^\alpha$}: \begin{align*} \langle P^A(t),\varphi\rangle&\le\,C\,\int_{\mathbb{T}^N}\int_{\mathbb{R}}\int_0^t \|(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha S_{A_{\theta,\delta}}^*(t-s)\varphi(x)\|_{L^\infty(\mathbb{T}^N\times\mathbb{R)}} |g(x,\mathfrak{\zeta},s)| ds d\mathfrak{\zeta} dx\\ &\le\,C\int_{\mathbb{T}^N}\int_{\mathbb{R}}\int_0^t \big(\theta(t-s)\big)^{-\mathcal{L}_{\lambda}ac{\alpha}{\beta}}{\text{e}}^{-\delta(t-s)}\|\varphi\|_{L^\infty(\mathbb{T}^N)}|g(x,\mathfrak{\zeta},s)|dsd\mathfrak{\zeta} dx,\\ {\text{e}}nd{align*} It implies that \begin{align*} \int_0^T\langle P^A(t),\varphi\rangle dt&\le\,C\,\|\varphi\|_{L^\infty(\mathbb{T}^N)}\int_0^T\|v(t)\|_{L^1(\mathbb{T}^N)}dt\, \sup_{s\in[0,T]}\int_0^T\big(\theta(t-s)\big)^{-\mathcal{L}_{\lambda}ac{\alpha}{\beta}}{\text{e}}^{-\delta(t-s)}dt\\ &\le\,C\theta^{-\mathcal{L}_{\lambda}ac{\alpha}{\beta}}\delta^{\mathcal{L}_{\lambda}ac{\alpha}{\beta}}\|\varphi\|_{L^\infty(\mathbb{T}^N)}\,\bigg(\int_0^\infty t^{-\mathcal{L}_{\lambda}ac{\alpha}{\beta}}{\text{e}}^{-t}dt\bigg)\int_0^T\|v(t)\|_{L^1(\mathbb{T}^N)}dt, {\text{e}}nd{align*} It shows that \begin{align} \int_0^T\|P^A(t)\|_{L^1(\mathbb{T}^N)}dt&\le\,C\theta^{-\mathcal{L}_{\lambda}ac{\alpha}{\beta}}\delta^{\mathcal{L}_{\lambda}ac{\alpha}{\beta}}\,\bigg(\int_0^\infty t^{-\mathcal{L}_{\lambda}ac{\alpha}{\beta}}{\text{e}}^{-t}dt\bigg)\,\int_0^T\|v(t)\|_{L^1(\mathbb{T}^N)}dt. {\text{e}}nd{align} Similarly we can conclude that \begin{align} \int_0^T\|P^\alpha(t)\|_{L^1(\mathbb{T}^N)}dt&\le\,C\theta^{-\mathcal{L}_{\lambda}ac{\alpha}{\beta}}\delta^{\mathcal{L}_{\lambda}ac{\alpha}{\beta}}\,\bigg(\int_0^\infty t^{-\mathcal{L}_{\lambda}ac{\alpha}{\beta}}{\text{e}}^{-t}dt\bigg)\,\int_0^T\|\tilde{u}(t)\|_{L^1(\mathbb{T}^N)}dt. 
{\text{e}}nd{align} \noindent $\textbf{Step 5:}$ \textbf{Bound on kinetic measure in $L^2(\mathbb{T}^N)$ setting}: We test equation {\text{e}}qref{kinetic formulation for v} against $\mathfrak{\zeta}$ (justified by standard argument of smooth approximation), we get \begin{align*} \langle g, \mathfrak{\zeta} \rangle&=\langle g, \mathfrak{\zeta} \rangle-\langle F'(\mathfrak{\zeta}+\Psi W)\cdot\nabla g(x,\mathfrak{\zeta},t),\mathfrak{\zeta}\rangle-\langle F'(\mathfrak{\zeta}+\Psi W)\nabla (\Psi(x)W(x)) \delta_{v=\mathfrak{\zeta}},\mathfrak{\zeta}\rangle\\&\qquad-\langle A'(\mathfrak{\zeta}+\Psi W)\big((-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha f\big)(x,\mathfrak{\zeta}+\Psi W,t),\mathfrak{\zeta}\rangle+\langle \partial_{\mathfrak{\zeta}}p(x,\mathfrak{\zeta},s),\mathfrak{\zeta}\rangle,\\ \\ \mathcal{L}_{\lambda}ac{1}{2}\|v(t)\|^2_{L^2(\mathbb{T}^N)}&=\mathcal{L}_{\lambda}ac{1}{2}\|\tilde{u}_0\|_{L^2(\mathbb{T}^N)}^2+\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}\mathfrak{\zeta}\,F''(\mathfrak{\zeta}+\Psi W)\cdot\nabla(\Psi(x)W)g(x,\mathfrak{\zeta},s) d\mathfrak{\zeta} dx ds\\&\qquad-\int_0^t\int_{\mathbb{T}^N}\int_\mathbb{R}(\mathfrak{\zeta}-\Psi(x)W)A'(\mathfrak{\zeta})(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha f(x,\mathfrak{\zeta},s)d\mathfrak{\zeta} dx ds\\&\qquad+\int_0^t\int_{\mathbb{T}^N}vF'(v+\Psi(x)W)\cdot\nabla (\Psi(x)W)dxds+\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}dp(x,\mathfrak{\zeta},s), {\text{e}}nd{align*} It implies that \begin{align*} \mathcal{L}_{\lambda}ac{1}{2}\|v(t)\|^2_{L^2(\mathbb{T}^N)}+\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}dp(x,\mathfrak{\zeta},s)&=\mathcal{L}_{\lambda}ac{1}{2}\|\tilde{u}_0\|_{L^2(\mathbb{T}^N)}^2\\&\qquad+\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}\mathfrak{\zeta}\,F''(\mathfrak{\zeta}+\Psi W)\cdot\nabla(\Psi(x)W)g(x,\mathfrak{\zeta},s) d\mathfrak{\zeta} dx ds\\&\qquad+\int_0^t\int_{\mathbb{T}^N}vF'(v+\Psi(x)W)\cdot\nabla (\Psi(x)W)dxds\\&\qquad+\int_0^t\int_{\mathbb{T}^N}\int_\mathbb{R}A'(\mathfrak{\zeta})(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha(\Psi(x)W)f(x,\mathfrak{\zeta},s)d\mathfrak{\zeta} dx ds. {\text{e}}nd{align*} We conclude that \begin{align*} \mathcal{L}_{\lambda}ac{1}{2}\|v(t)\|^2_{L^2(\mathbb{T}^N)}&+|p|([0,t]\times\mathbb{T}^N\times\mathbb{R})\le\,\mathcal{L}_{\lambda}ac{1}{2}\|\tilde{u}_0\|_{L^2(\mathbb{T}^N)}^2+C\mathfrak{{\text{e}}ta}\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}|\mathfrak{\zeta}||g(x,\mathfrak{\zeta},s)|d\mathfrak{\zeta} dx ds\\ &\qquad+\mathfrak{{\text{e}}ta}\,\int_0^t\int_{\mathbb{T}^N} |v(x)|\big(1+|v(x)|+|\Psi(x)W|\big)dxds+C\,\mathfrak{{\text{e}}ta}\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}|f(x,\mathfrak{\zeta},s)|dx d\mathfrak{\zeta} ds, {\text{e}}nd{align*} In last term, we used that $\|(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha \Psi W(t)\|_{L^{\infty}(\mathbb{T}^N)}\,\,\le\,C\,\sup_{t\in[0,T]}\|\Psi W\|_{W^{2,\infty}(\mathbb{T}^N)}$. It proves that \begin{align*} \mathcal{L}_{\lambda}ac{1}{2}\|v(t)\|^2_{L^2(\mathbb{T}^N)}&+|p|([0,t]\times\mathbb{T}^N\times\mathbb{R})\le\mathcal{L}_{\lambda}ac{1}{2}\|\tilde{u}_0\|_{L^2(\mathbb{T}^N)}^2+C\mathfrak{{\text{e}}ta}\int_0^t\big(1+\|v(s)\|_{L^2(\mathbb{T}^N)}^2\big)ds. 
{\text{e}}nd{align*} The Gronwall's inequality implies $$\|v(t)\|_{L^2(\mathbb{T}^N)}^2\le\,C\,{\text{e}}^{\mathfrak{{\text{e}}ta} C t}\big(\|\tilde{u}_0\|_{L^2(\mathbb{T}^N)}^2+1\big).$$ Hence $$|p|([0,T]\times\mathbb{T}^N\times\mathbb{R})\le\,C\,{\text{e}}^{\mathfrak{{\text{e}}ta} C T}\big(\|\tilde{u}_0\|_{L^2(\mathbb{T}^N)}^2+1\big).$$ Since $\|\tilde{u}_0\|_L^2(\mathbb{T}^N)\,\le\,C\,k_0\,\varepsilon^{-\mathcal{L}_{\lambda}ac{N}{2}}$, it follows that \begin{align}|p|\big([0,T]\times\mathbb{T}^N\times\mathbb{R}\big)\,\le\,C\,{\text{e}}^{\mathfrak{{\text{e}}ta} C T}\big(k_0^2\varepsilon^{-N}+1\big). {\text{e}}nd{align} \noindent $\textbf{Step 6:}$ \textbf{Estimate on $P^m$:} It is easy to know that \begin{align*} \partial_{\mathfrak{\zeta}}\big(S_{A_{\theta,\delta}}^*(t-s)\varphi(x)\big)=-\big((t-s)F''(\mathfrak{\zeta})\cdot\nabla(S_{A_{\theta,\delta}}^*(t-s)\varphi(x))+(t-s)A''(\mathfrak{\zeta})(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha(S_{A_{\theta,\delta}}^*(t-s)\varphi(x))\big). {\text{e}}nd{align*} Using above identity we have \begin{align*} \langle P^\mathfrak{m}(t),\varphi\rangle&=-\int\limits_{\mathbb{T}^N\times\mathbb{R}\times[0,t]}\big((t-s)F''(\mathfrak{\zeta})\cdot\nabla(S_{A_{\theta,\delta}}^*(t-s)\varphi(x))+A''(\mathfrak{\zeta})(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha(S_{A_{\theta,\delta}}(t-s)\varphi(x))\big)dp(x,\mathfrak{\zeta},s)\\ &=:\langle I_1(t),\varphi\rangle+\langle I_2(t),\varphi\rangle. {\text{e}}nd{align*} We conclude that \begin{align*} |\langle I_1(t),\varphi \rangle|\,&\le\,C\,\|\varphi\|_{L^\infty(\mathbb{T}^N)}\int_{\mathbb{T}^N\times\mathbb{R}\times[0,t]}(t-s)^{1-\mathcal{L}_{\lambda}ac{1}{2\beta}}{\text{e}}^{-\delta(t-s)}\,\theta^{-\mathcal{L}_{\lambda}ac{1}{2\beta}}d|p|(x,\mathfrak{\zeta},s). {\text{e}}nd{align*} It implies that \begin{align} \int_0^T\|I_1(t)\|_{L^1(\mathbb{T}^N)}dt\,&\le C\,\theta^{-\mathcal{L}_{\lambda}ac{1}{2\beta}}\delta^{2-\mathcal{L}_{\lambda}ac{1}{2\beta}}\bigg(\int_0^\infty t^{1-\mathcal{L}_{\lambda}ac{1}{2\beta}}{\text{e}}^{-t}dt\bigg)\int_{\mathbb{T}^N\times\mathbb{R}\times[0,T]}d|p|(x,\mathfrak{\zeta},s),\notag\\ &\le\,C\,e^{\mathfrak{{\text{e}}ta} C T}\big(k_0^2\varepsilon^{-N}+1\big)\theta^{-\mathcal{L}_{\lambda}ac{1}{2\beta}}\delta^{2-\mathcal{L}_{\lambda}ac{1}{2\beta}}\bigg(\int_0^\infty t^{1-\mathcal{L}_{\lambda}ac{1}{2\beta}}{\text{e}}^{-t}dt\bigg). {\text{e}}nd{align} Similarly we can conclude that \begin{align} \int_0^T\|I_2(t)\|_{L^1(\mathbb{T}^N)}dt\,&\le C\,\theta^{-\mathcal{L}_{\lambda}ac{\alpha}{\beta}}\delta^{2-\mathcal{L}_{\lambda}ac{\alpha}{\beta}}\bigg(\int_0^\infty t^{1-\mathcal{L}_{\lambda}ac{\alpha}{\beta}}{\text{e}}^{-t}dt\bigg)\int_{\mathbb{T}^N\times\mathbb{R}\times[0,T]}d|p|(x,\mathfrak{\zeta},s),\notag\\ &\le\,C\,e^{\mathfrak{{\text{e}}ta} C T}\big(k_0^2\varepsilon^{-N}+1\big)\theta^{-\mathcal{L}_{\lambda}ac{\alpha}{\beta}}\delta^{2-\mathcal{L}_{\lambda}ac{\alpha}{\beta}}\bigg(\int_0^\infty t^{1-\mathcal{L}_{\lambda}ac{\alpha}{\beta}}{\text{e}}^{-t}dt\bigg). 
{\text{e}}nd{align} \textbf{Step 7:} \textbf{Conclusion:} We collect all the estimates obtained in previous steps and choose $\alpha\,\le\,\beta$ then we deduce: \begin{align*} \int_0^T\|v(t)\|_{L^1(\mathbb{T}^N)}dt\,&\le\,C\,\bigg(T^{1/2}\big( \theta^{s-1}\kappa_1\big)^{1/2}+T\,\big(\theta^{s} + \mathcal{L}_{\lambda}ac{\mathfrak{{\text{e}}ta}}{\delta}\big) +\mathfrak{{\text{e}}ta}\,\theta^{-\mathcal{L}_{\lambda}ac{\alpha}{\beta}}\,\delta^{\mathcal{L}_{\lambda}ac{\alpha}{\beta}}+ \mathcal{L}_{\lambda}ac{1}{12}\int_0^T\|v(t)\|_{L^1(\mathbb{T}^N)}dt\\ &\qquad+\big(\mathfrak{{\text{e}}ta}\,\theta^{-\mathcal{L}_{\lambda}ac{1}{2\beta}}\delta^{\mathcal{L}_{\lambda}ac{1}{2\beta}-1}\,+\mathcal{L}_{\lambda}ac{\mathfrak{{\text{e}}ta}}{\delta}+\mathfrak{{\text{e}}ta}+\theta^{-\mathcal{L}_{\lambda}ac{\alpha}{\beta}}\delta^{\mathcal{L}_{\lambda}ac{\alpha}{\beta}}\big)\int_0^T\|v(s)\|_{L^1(\mathbb{T}^N)}ds\\ &\qquad+e^{\mathfrak{{\text{e}}ta} C T}\big(k_0^2\varepsilon^{-N}+1\big)(\theta^{-\mathcal{L}_{\lambda}ac{\alpha}{\beta}}\delta^{2-\mathcal{L}_{\lambda}ac{\alpha}{\beta}}+\theta^{-\mathcal{L}_{\lambda}ac{1}{2\beta}}\delta^{2-\mathcal{L}_{\lambda}ac{1}{2\beta}})\bigg), {\text{e}}nd{align*} We choose $\theta,\delta,\,\mathfrak{{\text{e}}ta}\,\textgreater\,0$ such that $$\theta^s\,\le\,\mathcal{L}_{\lambda}ac{\varepsilon}{32\,C},\qquad\,\,\,\,\,\,\theta^{-\mathcal{L}_{\lambda}ac{1}{2\beta}}\delta^{\mathcal{L}_{\lambda}ac{1}{2\beta}-1} + \theta^{-\mathcal{L}_{\lambda}ac{\alpha}{\beta}}\delta^{\mathcal{L}_{\lambda}ac{\alpha}{\beta}}+e^{ C T}\big(k_0^2\varepsilon^{-N}+1\big)(\theta^{-\mathcal{L}_{\lambda}ac{\alpha}{\beta}}\delta^{2-\mathcal{L}_{\lambda}ac{\alpha}{\beta}}+\theta^{-\mathcal{L}_{\lambda}ac{1}{2\beta}}\delta^{2-\mathcal{L}_{\lambda}ac{1}{2\beta}})\le\,\min\bigg\{\mathcal{L}_{\lambda}ac{\varepsilon}{32\,C},\,\mathcal{L}_{\lambda}ac{1}{3}\bigg\},$$ and $$\mathfrak{{\text{e}}ta}+\mathcal{L}_{\lambda}ac{\mathfrak{{\text{e}}ta}}{\delta}\le\,\min\big\{\mathcal{L}_{\lambda}ac{\varepsilon}{32\,C},\mathcal{L}_{\lambda}ac{1}{3},\,\,\mathcal{L}_{\lambda}ac{\varepsilon}{8\delta}\big\}.$$ Choose T sufficiently large such that $$T^{-1/2}\big( \theta^{s-1}\kappa_1\big)^{1/2}\,\le\,\mathcal{L}_{\lambda}ac{\varepsilon}{32C}.$$ Then $$\mathcal{L}_{\lambda}ac{1}{T}\int_0^T\|v(t)\|_{L^1(\mathbb{T}^N)}dt\,\le\,\mathcal{L}_{\lambda}ac{\varepsilon}{4},\,\,\qquad\mathcal{L}_{\lambda}ac{1}{T}\int_0^T\|\tilde{u}\|_{L^1(\mathbb{T}^N)}dt\,\le\,\mathcal{L}_{\lambda}ac{3\varepsilon}{8},$$ and $$\mathcal{L}_{\lambda}ac{1}{T}\int_0^T\|u(t)\|_{L^1(\mathbb{T}^N)}dt\,\textless\,\varepsilon.$$ This finishes the proof. {\text{e}}nd{proof} \subsection{Sequence of increasing stopping times:} In this subsection, first, we prove that estimate {\text{e}}qref{final} implies that any solution enters a ball of some fixed radius at a finite time. With help of this property, we construct a sequence of increasing stopping times. Let $\tilde{u}_0^1,\,\tilde{u}_0^2$ in $L^3(\mathbb{T}^N)$ and denote by $\tilde{u}^1, \tilde{u}^2$ the corresponding solutions. 
We can easily generalize \eqref{final} to two solutions on the interval $[t,t+T]$ for $t,T\,\ge0.$ Repeating a similar procedure for the conditional expectations, we have \begin{align} \mathbb{E}\bigg(\int_t^{t+T}\big(\|\tilde{u}^1(s)\|_{L^1(\mathbb{T}^N)}+\|\tilde{u}^2(s)\|_{L^1(\mathbb{T}^N)}\big)ds\,\big|\,\mathcal{F}_t\bigg)\le\,k\,\big(\|\tilde{u}^1(t)\|_{L^3(\mathbb{T}^N)}^3+\|\tilde{u}^2(t)\|_{L^3(\mathbb{T}^N)}^3+1+T\big). \end{align} The following property can be shown in the same way as in \cite[Section 4]{vovelle 2}, via a Borel--Cantelli argument. We introduce the sequences of deterministic times $(t_l)_{l\ge0}$ and $(r_l)_{l\ge0}$ by $$t_0=0,\qquad t_{l+1}=t_l+r_l,$$ where $(r_l)_{l\ge0}$ is chosen so that the following inequality holds for all $l\,\ge\,0:$ $$\frac{1}{2r_l}\big(\|\tilde{u}_0^1\|_{L^3(\mathbb{T}^N)}^3+\|\tilde{u}_0^2\|_{L^3(\mathbb{T}^N)}^3+1\big)\,\le\,\frac{1}{8}.$$ Then $$k_0=\inf\big\{l\ge 0\,\big|\,\inf_{s\in[t_l,t_{l+1}]}\|\tilde{u}^1(s)\|_{L^1(\mathbb{T}^N)}+\|\tilde{u}^2(s)\|_{L^1(\mathbb{T}^N)}\,\le\,2k_1\big\}$$ is almost surely finite (for details see \cite[Section 4]{vovelle 2}). We then define the stopping time $$\tau^{\tilde{u}_0^1, \tilde{u}_0^2}=\inf\big\{t\ge0\,\big|\, \|\tilde{u}^1(t)\|_{L^1(\mathbb{T}^N)}+\|\tilde{u}^2(t)\|_{L^1(\mathbb{T}^N)}\,\le\,2k_1\big\}.$$ Since $\tau^{\tilde{u}_0^1, \tilde{u}_0^2}\,\le\,t_{k_0+1}$, it follows that $\tau^{\tilde{u}_0^1, \tilde{u}_0^2}\,<\,\infty$ almost surely. Consequently, the following stopping times are also almost surely finite: for any $T\,>\,0$, $$\tau_l=\inf\big\{t\,\ge\,\tau_{l-1}+T\,\big|\, \|\tilde{u}^1(t)\|_{L^1(\mathbb{T}^N)}+\|\tilde{u}^2(t)\|_{L^1(\mathbb{T}^N)}\,\le\,2k_1\big\},\qquad\tau_0=0.$$ \subsection{Conclusion of uniqueness} Let $\varepsilon\,>\,0$. Let $u_0^1,\, u_0^2$ be in $L^1(\mathbb{T}^N)$. We take $\tilde{u}_0^1,\,\tilde{u}_0^2$ in $L^3(\mathbb{T}^N)$ such that $\|u_0^j-\tilde{u}_0^j\|_{L^1(\mathbb{T}^N)}\,\le\,\frac{\varepsilon}{4}$ for $j=1,2$. By Proposition \ref{smallness}, there exist $T\,>\,0$ and $\eta\,>\,0$ such that $$\frac{1}{T}\int_{\tau_l}^{\tau_l+T}\|\tilde{u}^1(s)-\tilde{u}^2(s)\|_{L^1(\mathbb{T}^N)}\,ds\,\le\,\frac{\varepsilon}{2}$$ whenever $$\sup_{t\in[\tau_l,\tau_l+T]}\|\Psi W(t)-\Psi W(\tau_l)\|_{W^{2,\infty}(\mathbb{T}^N)}\,\le\,\eta.$$ We define $$A_{W,l}=\big\{\omega\in\Omega:\,\sup_{t\in[\tau_l,\tau_l+T]}\|\Psi W(t)-\Psi W(\tau_l)\|_{W^{2,\infty}(\mathbb{T}^N)}\,\le\,\eta\big\},$$ $$\tilde{A}_l=\big\{\omega\in\Omega:\, \frac{1}{T}\int_{\tau_l}^{\tau_l+T}\|\tilde{u}^1(s)-\tilde{u}^2(s)\|_{L^1(\mathbb{T}^N)}\,ds\,\le\,\frac{\varepsilon}{2}\big\},$$ and $$A_l=\big\{\omega\in\Omega:\,\frac{1}{T}\int_{\tau_l}^{\tau_l+T}\|u^1(s)-u^2(s)\|_{L^1(\mathbb{T}^N)}\,ds\,\le\,\varepsilon\big\}.$$ By the $L^1$-contraction \eqref{contraction principal} and the choice of $\tilde{u}_0^1,\tilde{u}_0^2$, we have $A_{W,l}\subset\tilde{A}_l\subset A_l$, hence \begin{align*} \mathbb{E}\big(\mathbbm{1}_{A_{W,l}}\,\big|\,\mathcal{F}_{\tau_l}\big)\,\le\,\mathbb{E}\big(\mathbbm{1}_{\tilde{A}_l}\,\big|\,\mathcal{F}_{\tau_l}\big)\,\le\,\mathbb{E}\big(\mathbbm{1}_{A_l}\,\big|\,\mathcal{F}_{\tau_l}\big).
\end{align*} By the strong Markov property, $\mathbb{E}\big(\mathbbm{1}_{A_{W,l}}\,\big|\,\mathcal{F}_{\tau_l}\big)$ is a positive constant $c$, and therefore \begin{align*} \mathbb{E}\bigg[\mathbb{E}\big[\mathbbm{1}_{A_l^c\cap A_{l+1}^c}\,\big|\,\mathcal{F}_{\tau_{l+1}}\big]\,\bigg|\,\mathcal{F}_{\tau_l}\bigg]=\mathbb{E}\bigg[\mathbbm{1}_{A_l^c}\,\mathbb{E}\big[ \mathbbm{1}_{A_{l+1}^c}\,\big|\,\mathcal{F}_{\tau_{l+1}}\big]\,\bigg|\,\mathcal{F}_{\tau_l}\bigg]\,<\,(1-c)\,\mathbb{E}\big[\mathbbm{1}_{A_l^c}\,\big|\,\mathcal{F}_{\tau_l}\big]\,<\,(1-c)^2. \end{align*} It implies that \begin{align*} \mathbb{P}\big(A_l^c\cap A_{l+1}^c\big)\,<\,(1-c)^2. \end{align*} Inductively, for any $k\in \mathbb{N}$ and any indices $l_0<l_1<\cdots<l_k$, we have \begin{align*} \mathbb{P}\bigg(\bigcap_{j=0}^k A_{{l_j}}^c\bigg)\,<\,(1-c)^k, \end{align*} and therefore \begin{align}\label{limit} \mathbb{P}\bigg(\bigcap_{j=0}^{\infty}A_{j}^c\bigg)=0. \end{align} Notice that the limit $\lim_{t\to\infty}\|u^1(t)-u^2(t)\|_{L^1(\mathbb{T}^N)}$ exists: by \eqref{contraction principal}, the map $t\mapsto\|u^1(t)-u^2(t)\|_{L^1(\mathbb{T}^N)}$ is almost surely non-increasing. This latter property is again used to conclude from \eqref{limit} that \begin{align*} \mathbb{P}\big(\lim_{t\to\infty}\|u^1(t)-u^2(t)\|_{L^1(\mathbb{T}^N)}\,>\,\varepsilon\big)=0. \end{align*} Since $\varepsilon>0$ was arbitrary, $\mathbb{P}$-almost surely, $$\lim_{t\to\infty}\|u^1(t)-u^2(t)\|_{L^1(\mathbb{T}^N)}=0.$$ It implies that $$\mathbb{P}\big(u^1(t)\neq u^2(t)\big)\,\to\,0\,\,\,\qquad\text{as}\,\qquad t\to\infty.$$ Let $\mu_{t,u_0^1}$ and $\mu_{t,u_0^2}$ be the laws of the solutions $u^1(t)$ and $u^2(t)$, respectively. Then $$\|\mu_{t,u_0^1}-\mu_{t,u_0^2}\|_{TV}\,\le\,2\,\mathbb{P}\big(u^1(t)\neq u^2(t)\big)$$ (indeed, $|\mu_{t,u_0^1}(B)-\mu_{t,u_0^2}(B)|\le\mathbb{P}(u^1(t)\neq u^2(t))$ for every Borel set $B\subset L^1(\mathbb{T}^N)$). It follows that $$\lim_{t\to\infty}\langle \mu_{t,u_0^1},\kappa \rangle=\lim_{t\to\infty}\langle\mu_{t,u_0^2},\kappa\rangle=c_{\kappa}\qquad \forall\,\kappa\in C_b(L^1(\mathbb{T}^N)).$$ In particular, for all $u_0\in L^1(\mathbb{T}^N)$, \begin{align*} \lim_{t\to\infty}\mathcal{Q}_t\kappa(u_0)=\lim_{t\to\infty}\langle \mu_{t,u_0},\kappa\rangle= c_{\kappa}. \end{align*} Let $\lambda$ be any invariant measure and let $\kappa\in C_b(L^1(\mathbb{T}^N))$. Since $\lambda$ is invariant, $\langle \mathcal{Q}_t^*\lambda, \kappa\rangle=\langle\lambda,\kappa\rangle$ for all $t$, while the dominated convergence theorem gives $\langle\lambda,\mathcal{Q}_t\kappa\rangle\to c_\kappa$ as $t\to\infty$; hence $c_\kappa=\langle\lambda,\kappa\rangle$ and \begin{align}\label{convergence of law} \lim_{t\to\infty}\langle \mu_{t,u_0},\kappa\rangle=\langle \lambda, \kappa\rangle \qquad \forall\,\,u_0\in L^1(\mathbb{T}^N). \end{align} This shows that the invariant measure $\lambda$ is unique \cite[Proposition 11.4]{prato} and that the law $\mu_{t,u_0}$ of the solution $u(t)$ with initial datum $u_0$ converges to the unique invariant measure $\lambda$ as $t\to\infty$. \appendix \section{Derivation of the kinetic formulation}\label{A} In this appendix, we give brief details about the derivation of the kinetic formulation of equation \eqref{1.1}. Here we assume that the flux $F$ and the diffusion function $A$ are smooth and Lipschitz continuous, since we only work with approximations of equation \eqref{1.1}.
We prove that if u is a weak solution to {\text{e}}qref{1.1} such that $$ u\in L^2(\Omega ; C([0,T];L^2(\mathbb{T}^N)))\cap L^2(\Omega;L^2(0,T; H^1(\mathbb{T}^N)),\text{and}\,A(u)\in L^2(\Omega;L^2(0,T; H^\alpha(\mathbb{T}^N))$$then $f(t)=\mathbbm{1}_{u(t)\textgreater\mathfrak{\zeta}}$ satisfies $$df(t)+ F'\cdot\nabla f(t) dt + A'(\mathfrak{\zeta})(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha( f(t)) dt = \delta_{u(t)=\mathfrak{\zeta}} \varphi dW(t) + \partial_{\mathfrak{\zeta}} (\mathfrak{{\text{e}}ta}-\mathcal{L}_{\lambda}ac{1}{2} H^2 \delta_{u(t)=\mathfrak{\zeta}}) dt$$ in the sense of $\mathcal{D}'(\mathbb{T}^N\times\mathbb{R})$ where ${\text{e}}ta\,\ge\,{\text{e}}ta_1$ with $$d\mathfrak{{\text{e}}ta}_1(x,t,\mathfrak{\zeta})=\int_{\mathbb{R}^N}|A(u(x+z))-A(\mathfrak{\zeta})|\mathbbm{1}_{\mbox{Conv}\{u(x),u(x+z)\}}(\mathfrak{\zeta}) \mu(z) dz d\mathfrak{\zeta} dx dt .$$ Indeed, it follows from generalized It\^o formula \cite[AppendixA]{vovelle}, for $\varphi\in C_b^2(\mathbb{R})$ with $\varphi(-\infty)=0$, $\kappa\in C^2(\mathbb{T}^N),$ $\mathbb{P}$-almost surely, \begin{align*} \langle \varphi(u(t)), \kappa \rangle &= \langle \varphi(u_0), \kappa \rangle -\int_0^t \langle \varphi'(u) \mbox{div} (F(u)), \kappa \rangle ds-\int_0^t \langle \varphi'(u)(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha(A(u)),\kappa \rangle ds\\ &\qquad+\sum_{k \ge 1} \int_0^t \langle \varphi'(u) h_k(x,u),\kappa \rangle d\beta_k(s)+\mathcal{L}_{\lambda}ac{1}{2}\int_0^t\langle \varphi''(u)H^2(x,u),\kappa \rangle ds. {\text{e}}nd{align*} Since $(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^{\alpha/2}$ is continuous linear operator from $H^1(\mathbb{T}^N)$ to $L^2(\mathbb{T}^N)$, it implies that \begin{align*}\langle \phi'(u)) (-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha\big(A(u)\big),\kappa \rangle&=\langle (-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^{\alpha/2}\big(\phi'(u)\kappa\big),(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^{\alpha/2}\big(A(u)\big)\rangle\\&=\lim_{\delta\to 0}\langle (-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^{\alpha/2}(\phi'(u^\delta)\kappa),(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^{\alpha/2}\big(A(u^\delta)\big)\rangle\\&=\lim_{\delta\to 0}\langle \phi'(u^\delta)) (-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha\big(A(u^\delta)\big),\kappa \rangle, {\text{e}}nd{align*} Here $u^\delta$ is pathwise molification of $u$ in space variable. 
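Before carrying out the computation below, we recall the L\'evy-type (singular integral) representation of the fractional Laplacian on which it rests; this is a standard fact and is stated here only as a reminder, with $\mu$ denoting the same kernel as in the definition of $\eta_1$ above (for the operator $(-\Delta)^\alpha$ this kernel is, up to a normalizing constant $c_{N,\alpha}$, given by $|z|^{-(N+2\alpha)}$). For sufficiently smooth $g$ one has \begin{align*} (-\Delta)^\alpha g(x) = -\,\mathrm{P.V.}\int_{\mathbb{R}^N}\big(g(x+z)-g(x)\big)\,\mu(z)\,dz, \end{align*} where the principal value is not needed when $2\alpha<1$. It is this identity, applied to $g=A(u^\delta(\cdot,t))$, that is used in the next display.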
By using it, we have \begin{align*}\langle \varphi'(u^\delta(x,t)) (-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha&(A(u^\delta(x,t))),\kappa(x) \rangle \\ &= -\langle \varphi'(u^\delta(x,t)) \int_{\mathbb{R}^N}(A(u^\delta(x+z,t))-A(u^\delta(x,t))) \mu(z) dz ,\kappa(x) \rangle, {\text{e}}nd{align*} By making use of the Taylor's identity \cite[identity (17)]{N-2}, we get \begin{align*} &\langle \varphi'(u^\delta(x,t))\int_{\mathbb{R}^N}(A(u^\delta(x+z,t))-A(u^\delta(x,t)))\mu(z) dz ,\kappa \rangle \\ &\qquad\qquad= \int_{\mathbb{T}^N}\int_{\mathbb{R}^N} \varphi'(u^\delta(x,t))(A(u^\delta(x+z,t))-A(u^\delta(x,t)))\kappa(x) \mu(z) dz dx\\ &\qquad\qquad=\int_{\mathbb{T}^N}\int_{\mathbb{R}^N}\kappa(x)\bigg(\int_{\mathbb{R}}(\varphi'(\mathfrak{\zeta})A'(\mathfrak{\zeta})\mathbbm{1}_{u^\delta(x+z,t)\textgreater\mathfrak{\zeta}}-\varphi'(\mathfrak{\zeta})A'(\mathfrak{\zeta})\mathbbm{1}_{u^\delta(x,t)\textgreater\mathfrak{\zeta}})d\mathfrak{\zeta}\\ &\qquad\qquad\qquad-\int_{\mathbb{R}}\varphi''(\mathfrak{\zeta})|A(u^\delta(x+z,t))-A(\mathfrak{\zeta})|\mathbbm{1}_{\mbox{Conv}\{u^\delta(x+z,t),u^\delta(x,t)\}} \bigg)\mu(z)dz dx\\ &\qquad\qquad=\int_{\mathbb{T}^N}\int_{\mathbb{R}^N}\int_{\mathbb{R}}A'(\mathfrak{\zeta}) \varphi(\mathfrak{\zeta}) \mathbbm{1}_{u^\delta(x,t)\textgreater\mathfrak{\zeta}}(\kappa(x+z)-\kappa(x)))\mu(z)\,d\mathfrak{\zeta}\,dz dx\\ &\qquad\qquad\qquad-\int_{\mathbb{T}^N} \int_{\mathbb{R}}\kappa(x) \varphi''(\mathfrak{\zeta})\int_{\mathbb{R}^N} |A(u^\delta(x+z,t))-A(\mathfrak{\zeta})|\mathbbm{1}_{\mbox{Conv}\{u^\delta(x+z,t),u^\delta(x,t)\}}(\mathfrak{\zeta})\mu(z)dz d\mathfrak{\zeta} dx, {\text{e}}nd{align*} It implies that \begin{align*} &\int_0^t\langle \varphi'(u(x,s)) (-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha(A(u))(x,s),\kappa \rangle ds\\ &=-\lim_{\delta\to0}\int_0^t\int_{\mathbb{T}^N} \int_{\mathbb{R}}A'(\mathfrak{\zeta}) \varphi'(\mathfrak{\zeta}) \mathbbm{1}_{u^\delta(x,s)\textgreater\mathfrak{\zeta}}\int_{\mathbb{R}^N}(\kappa(x+z)-\kappa(x))\mu(z)d\mathfrak{\zeta} dz dx ds\\ &\quad+\lim_{\delta\to0}\int_0^t\int_{\mathbb{T}^N} \int_{\mathbb{R}}\kappa(x) \varphi''(\mathfrak{\zeta})\int_{\mathbb{R}^N} |A(u^\delta(x+z,s))-A(\mathfrak{\zeta})|\mathbbm{1}_{\mbox{Conv}\{u^\delta(x+z,s),u^\delta(x,s)\}}(\mathfrak{\zeta})\mu(z)dz d\mathfrak{\zeta} dx ds\\ &=-\int_{\mathbb{T}^N} \int_{\mathbb{R}}A'(\mathfrak{\zeta}) \varphi'(\mathfrak{\zeta}) \mathbbm{1}_{u(x,s)\textgreater\mathfrak{\zeta}}\int_{\mathbb{R}^N}(\kappa(x+z)-\kappa(x))\mu(z) dz d\mathfrak{\zeta}dx ds +\lim_{\delta\to0}\int_0^t\int_{\mathbb{T}^N} \int_{\mathbb{R}}\kappa(x) \varphi''(\mathfrak{\zeta})d{\text{e}}ta^\delta\\ &=\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}A'(\mathfrak{\zeta}) \varphi'(\mathfrak{\zeta}) \mathbbm{1}_{u(x,s)\textgreater\mathfrak{\zeta}}(-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha(\kappa)(x)dxd\mathfrak{\zeta} +\int_0^t\int_{\mathbb{T}^N}\int_{\mathbb{R}}\kappa(x)\varphi''(\mathfrak{\zeta})d{\text{e}}ta(x,\zeta,s)\\ &=\int_0^t\langle \mathbbm{1}_{u(x,s)\textgreater \mathfrak{\zeta} } \varphi'(\mathfrak{\zeta}), A'(\mathfrak{\zeta}) (-{{\text{e}}nsuremath{{\text{e}}nsuremath{\mathbb{R}}^d}}elta)^\alpha(\kappa)(x)\rangle_{x,\mathfrak{\zeta} } -\langle \kappa(x) \varphi'(\mathfrak{\zeta}),\partial_{\mathfrak{\zeta}}\mathfrak{{\text{e}}ta}\rangle[0,t] {\text{e}}nd{align*} where ${\text{e}}ta$ is weak-* limit of ${\text{e}}ta^\delta=\int_{\mathbb{R}^N} 
|A(u^\delta(x+z,t))-\xi|\mathbbm{1}_{\mbox{Conv}\{u^\delta(x+z,t),u^\delta(x,t)\}}(\xi)\mu(z)dz$ in $\mathcal{M}^+(\mathbb{T}^N\times[0,T]\times\mathbb{R})$. Note that above convergence holds for all $t\in[0,T]$ due to $u\in L^2(\Omega;C([0,T];L^2(\mathbb{T}^N))$. As an application of Fatou's lemma we have $\mathbb{P}$-almost surely, ${\text{e}}ta\,\ge\,{\text{e}}ta_1$. Making use of the chain rule for functions from Sobolev spaces, we obtain the following equalities that hold true in $\mathcal{D}'(\mathbb{T}^N)$. \begin{align*}\langle \mathbbm{1}_{u(x,t)\textgreater\mathfrak{\zeta}} , \varphi'\rangle_\mathfrak{\zeta}&= \int_{\mathbb{R}}\mathbbm{1}_{u(x,t)\textgreater\mathfrak{\zeta}}\varphi'(\mathfrak{\zeta})d\mathfrak{\zeta}= \varphi(u(x,t))\\ \varphi'(u(x,t))\mbox{div}(F(u(x,t)))&=\varphi'(u(x,t))F'(u(x,t))\cdot\nabla u(x,t)\notag\\ &=\mbox{div}(\int_{-\infty}^{u(x,t)}F'(\mathfrak{\zeta})\varphi'(\mathfrak{\zeta}))= \mbox{div}(\langle F'\mathbbm{1}_{u(x,t)\textgreater\mathfrak{\zeta}},\varphi'\rangle_\mathfrak{\zeta})\\ \varphi'(u(x,t))h_k(x,u(x,t))&=\langle h_k \delta_{u(x,t)=\mathfrak{\zeta}} ,\varphi'\rangle_{\mathfrak{\zeta}}\\ \varphi''(u(x,t))H^2(x,u(x,t))&=\langle H^2 \delta_{u(x,t)=\mathfrak{\zeta}} , \varphi''\rangle_{\mathfrak{\zeta}}=-\langle \partial_{\mathfrak{\zeta}}(H^2\delta_{u(x,t)=\mathfrak{\zeta}}), \varphi' \rangle_{\mathfrak{\zeta}}. {\text{e}}nd{align*} Therefore we define $\varphi=\int_{-\infty}^\mathfrak{\zeta} \Upsilon(\mathfrak{\zeta}) d\mathfrak{\zeta}$ for some $\Upsilon\in C_c^\infty(\mathbb{R})$ to deduce the kinetic formulation. \section*{Derivation of fomulation {\text{e}}qref{kinetic formulation for v}}\label{app} Here, we are giving only details about fractional term. For other terms, derivation is exactly same as in previous formulations. Let $\varphi\in C_c^2(\mathbb{R})$. By making use of the Taylor's identity \cite[identity (17)]{N-2}, we get \begin{align*} \varphi'(v)(A(\tilde{u}(x+z))&-A(\tilde{u}(x)))\\&=\int\limits_{R} \big(A'(\mathfrak{\zeta})\varphi'(\mathfrak{\zeta}+\Psi(x)W(x))\mathbbm{1}_{\tilde{u}(x+z)\,\textgreater\,\mathfrak{\zeta}}-A'(\mathfrak{\zeta})\varphi'(\mathfrak{\zeta}+\Psi(x)W(x))\mathbbm{1}_{\tilde{u}(x)\,\textgreater\,\mathfrak{\zeta}}\big)\d\mathfrak{\zeta}\\ &\qquad-\int_{\mathbb{R}}\varphi''(\mathfrak{\zeta}+\Psi(x)W(x))|A(\tilde{u}(x+z))-A(\mathfrak{\zeta})|\mathbbm{1}_{\mbox{Conv}\{\tilde{u}(x),\tilde{u}(x+z)\}}(\mathfrak{\zeta})d\mathfrak{\zeta}\\ &=\int_{\mathbb{R}}A'(\mathfrak{\zeta}+\Psi(x)W(x))\varphi'(\mathfrak{\zeta})\big(\mathbbm{1}_{\tilde{u}(x+z)\,\textgreater\,\mathfrak{\zeta}+\Psi(x)W(x)}-\mathbbm{1}_{\tilde{u}(x)\,\textgreater\,\mathfrak{\zeta}+\Psi(x)W(x)}\big)d\mathfrak{\zeta}\\ &\qquad-\int_{\mathbb{R}} \varphi''(\mathfrak{\zeta})|A(\tilde{u}(x+z))-A(\mathfrak{\zeta})|\mathbbm{1}_{\mbox{Conv}\{v,\tilde{u}(x+z)-\Psi(x)W(x)\}}d\mathfrak{\zeta}, {\text{e}}nd{align*} This identity is enough to derive the kinetic formulation {\text{e}}qref{kinetic formulation for v}. \label{appB} \section{Well-posedness for $L^1$-setting: Proof of Theorem \ref{main result 2}}\label{B} In this section we present the proof of the main well-posedness result in $L^1$-setting, Theorem \ref{existance and uniqueness}. Here we closely follow the approach of \cite{vovelle 2}. The existence part of Theorem \ref{existance and uniqueness} will be discussed in subsection \ref{Section 3.1}, the uniqueness in subsection \ref{Section 3.2}. \subsection{Existence}\label{Section 3.1} In this subsection we prove the existence of kinetic solution. 
The proof of existence is based on an approximation procedure.\\ \textbf{Existence of the predictable process $u$:} First, we replace the initial condition $u_0$ by bounded approximations $u_0^k\in L^\infty(\mathbb{T}^N)$ such that $u_0^k\,\to\,u_0\,$ in $L^1(\mathbb{T}^N)$. This defines a sequence of solutions $(u^k)$ and kinetic measures $(m^k)$ satisfying, $\mathbb{P}$-almost surely, for all $t\in[0,T]$, \begin{align}\label{formulation-1} \langle f^k(t),\varphi \rangle &= \langle f_0^k, \varphi \rangle + \int_0^t\langle f^k(s),F'(\zeta)\cdot\nabla\varphi\rangle ds -\int_0^t\langle f^k(s), A'(\zeta)\,(-\Delta)^\alpha[\varphi]\rangle ds\notag\\ &\qquad+\sum_{j=1}^\infty \int_0^t \int_{\mathbb{T}^N} h_j(x)\varphi(x,u^k(x,s))\, dx\, d\beta_j(s)\notag\\ &\qquad+\frac{1}{2}\int_0^t\int_{\mathbb{T}^N}\partial_{\zeta}\varphi(x,u^k(x,s))\, H^2 (x)\,dx\,ds -m^k(\partial_{\zeta}\varphi)([0,t]), \end{align} where $f^k(s)=\mathbbm{1}_{u^k(s)\,>\,\zeta}$ and $m^k\,\ge\,\eta^k$ $\mathbb{P}$-almost surely, with $$\eta^k(x,t,\zeta)=\int_{\mathbb{R}^N}|A(u^k(x+z,t))-A(\zeta)|\mathbbm{1}_{\mathrm{Conv}\{u^k(x,t),u^k(x+z,t)\}}(\zeta)\mu(z)dz.$$ By the $L^1$-contraction property, the sequence $(u^k)$ is a Cauchy sequence in $L^1_{\mathcal{P}}(\Omega\times\,\mathbb{T}^N\times[0,T])$; hence $u^k$ converges to some $u$ in $L_{\mathcal{P}}^1(\Omega\times\mathbb{T}^N\times[0,T])$. \noindent \textbf{Bound on the approximate kinetic measures:} Here $m^k$ is the kinetic measure associated with $u^k$. We now want to pass to the limit in $m^k$ in a suitable topological space. To this end, we establish a bound that is uniform in $k$ in the following lemma. \begin{lem}\label{bound on measure} Let $u^k$ be a kinetic solution to \eqref{1.1} with initial condition $u_0^k$. Then, for all $R\,>\,0$, $$\mathbb{E}|m^k([0,T]\times\mathbb{T}^N\times[-R,R])|^2\,\le\,C(T,R,\|u_0\|_{L^1(\mathbb{T}^N)}^2),$$ and $$\mathbb{E}\big[\sup_{t\in[0,T]}\|u^k(t)\|_{L^1(\mathbb{T}^N)}\big]\le\,C,$$ for some $C\,>\,0$ independent of $k$. \end{lem} \begin{proof} For $R\,>\,0,$ set $$\kappa_R''(u)=\mathbbm{1}_{[-R,R]}(u),\,\,\,\,\,\,\kappa_R(u)=\int_{-R}^u\int_{-R}^r\kappa_R''(s)\,ds\, dr.$$ We take $\varphi(x,\zeta)=\kappa_R'(\zeta)$ in \eqref{formulation} to obtain \begin{align} \int_{\mathbb{T}^N}\kappa_R(u^k(x,t))\, dx&=\int_{\mathbb{T}^N}\kappa_R(u_0^k(x))dx + \sum_{j\ge1}\int_0^t\int_{\mathbb{T}^N}h_j(x)\kappa_R'(u^k(x,s))\, dx\, d\beta_j(s)\notag\\ &\qquad+\frac{1}{2}\int_0^t\int_{\mathbb{T}^N} H^2(x)\kappa_R(u^k(x,s))\, dx\, ds-\int_{\mathcal{B}_R}d\mathfrak{m}^k(x,s,t), \end{align} where $\mathcal{B}_R=\mathbb{T}^N\times[0,t]\times\{\zeta\in\mathbb{R}:\,R\,\le\,\zeta\,\le\,R+1\}$.
\begin{align} &\mathbb{E}\bigg|\int_{\mathbb{T}^N}\kappa_R(u^k(x,t)) dx\bigg|^2+\mathbb{E}\bigg|\int_{\mathcal{B}_R}d\mathfrak{m}^k(x,t,\mathfrak{\zeta})\bigg|^2\le\mathbb{E}\bigg|\int_{\mathbb{T}^N}\kappa_R(u_0^k))dx\bigg|^2\notag\\&\qquad+\mathcal{L}_{\lambda}ac{1}{2}\mathbb{E}\bigg|\int_0^t\int_{\mathbb{T}^N}H^2(x) \kappa_R(u^k(x,s))dx\bigg|^2 +\mathbb{E}\bigg|\sum_{k\ge1}\int_0^t\int_{\mathbb{T}^N}h_k(x)\kappa_R'(u^k(x,s))dx d\beta_j(s)\bigg|^2\notag \\ {\text{e}}nd{align} Since $0\,\le\,\kappa_R(u)\,\le\,2R(R+|u|)$ and $H^2(x)\kappa_R(u^k(x,s))\,\le\,D$, we have $$\mathbb{E}\bigg|\int_{\mathbb{T}^N}\kappa_R(u_0^k))dx\bigg|^2 +\mathcal{L}_{\lambda}ac{1}{2}\mathbb{E}\bigg|\int_0^t\int_{\mathbb{T}^N}H^2(x) \kappa_R(u^k(x,s))dx\bigg|^2\,\le\,C(R^2,T^2,D,\|u_0\|_{L^1(\mathbb{T}^N)}^2)$$ Since $0\,\le\,\kappa_R'(u)\,\le\,2R$, we estimate the stochastic integral using It\^o isometry as follows \begin{align*} \mathbb{E}\bigg|\sum_{k\ge1}\int_0^t\int_{\mathbb{T}^N} h_k(x) \kappa_R'(u^k(x,s))dx d\beta_j(s) \bigg|^2&=\mathbb{E}\bigg(\int_0^t\sum_{k\ge1}\bigg(\int_{\mathbb{T}^N}h_k(x)\kappa_R'(u^k(x,s))dx\bigg)^2ds\bigg)\\ &\,\le\,4R^2\mathbb{E}\bigg(\int_0^T\bigg(\int_{\mathbb{T}^N}\big(\sum_{k\ge1}|h_k(x)|^2\big)^{\mathcal{L}_{\lambda}ac{1}{2}}dx\bigg)^2\bigg)\\ &\,\le\,4R^2TC_0, {\text{e}}nd{align*} This finishes proof of first part. For second, we can take constant $\varphi(\mathfrak{\zeta})=sign(\mathfrak{\zeta})$ as test function in {\text{e}}qref{formulation} (justified by standard approximation), then we can easily prove second part as previous calculation. {\text{e}}nd{proof} \noindent \textbf{Limit of kinetic measures $m^k$:} Suppose that $\mathcal{M}(\mathcal{B}_R)$ is the collection of bounded Borel measure over $\mathcal{B}_R$ with norm given by the total variation of measures. It is dual space of $C(\mathcal{B}_R)$, the collection of continuous functions on $\mathcal{B}_R$. Since $\mathcal{M}(\mathcal{B}_R)$ is separable, the space $L^2(\Omega; \mathcal{M}(\mathcal{B}_R))$ is the dual space of $L^2(\Omega; C(\mathcal{B}_R))$. Let us discuss convergence of $m^k$. By Lemma \ref{bound on measure}, we have for $R\in\mathbb{N}$ \begin{align}\label{uniform bound on measure} \sup_k\mathbb{E}|m^k(\mathcal{B}_R)|^2\,\le\,C_R, {\text{e}}nd{align} where $\mathcal{B}_R=\mathbb{T}^N\times[0,T]\times[-R,R]$. Bound {\text{e}}qref{uniform bound on measure} gives a uniform bound on $(m^k)$ in $L^2(\Omega,\mathcal{M}(\mathcal{B}_R))$. There exists $m_R\in L^2(\Omega;\mathcal{M}(\mathcal{B}_R))$ such that up to subsequence, $m^k\,\to\,m_R$ in $L^2(\Omega;\mathcal{M}(\mathcal{B}_R))$-weak star. By a diagonal process, we get, for $R\in\mathbb{N}$, $m_R=m_{R+1}$ in $L^2(\Omega; \mathcal{M}(\mathcal{B}_R))$ and the convergence of a single subsequence still denoted by $(m^k)$ in all the spaces $L^2(\Omega;\mathcal{M}(\mathcal{B}_R))$- weak-*. Let us define $\mathbb{P}$-almost surely, $\mathfrak{m}=m_R$ on $\mathcal{B}_R$. The condition (i) of Definition \ref{kinetic measure 2} is direct consequence of weak convergence, therefore satisfied by ${\mathfrak{m}}$. Now, condition (ii) of Definition \ref{kinetic measure 2} remains to prove. 
\noindent \textbf{Kinetic solution:} We now have everything at hand to apply the method developed in Subsection \ref{subsection 3.2} and pass to the limit in \eqref{formulation-1}. We obtain an $L^1(\mathbb{T}^N)$-valued process $(\tilde{u}(t))_{t\in[0,T]}$ such that $(\tilde{u}(t))_{t\in[0,T]}$ and $f(t)=\mathbbm{1}_{\tilde{u}(t)>\zeta}$ satisfy, $\mathbb{P}$-almost surely, for all $t\in[0,T]$, \begin{align}\label{formulation-2} \langle f(t),\varphi \rangle &= \langle f_0, \varphi \rangle + \int_0^t\langle f(s),F'(\zeta)\cdot\nabla\varphi\rangle ds -\int_0^t\langle f(s), A'(\zeta)\,(-\Delta)^\alpha[\varphi]\rangle ds\notag\\ &\qquad+\sum_{k=1}^\infty \int_0^t \int_{\mathbb{T}^N} h_k(x)\varphi(x,\tilde{u}(x,s))\, dx\, d\beta_k(s)\notag\\ &\qquad+\frac{1}{2}\int_0^t\int_{\mathbb{T}^N}\partial_{\zeta}\varphi(x,\tilde{u}(x,s))\, H^2 (x)\,dx\,ds -\mathfrak{m}(\partial_{\zeta}\varphi)([0,t]), \end{align} where $\mathfrak{m}\,\ge\,\eta_1$, $\mathbb{P}$-almost surely, with $\eta_1$ defined as in Definition \ref{definition kinetic solution in l1 setting}. For all $\varphi\in C_c^2(\mathbb{T}^N\times\mathbb{R})$, $\mathbb{P}$-almost surely, the map $t\mapsto \langle f(t),\varphi\rangle$ is c\`adl\`ag. \noindent \textbf{Decay of the kinetic measure:} To complete the existence part of the result, it remains to prove the decay of the kinetic measure and the continuity of the solution. To this end, we prove the following proposition. \begin{proposition}\label{22} Let $u_0\in L^1(\mathbb{T}^N)$, let a measurable function $u:\mathbb{T}^N\times[0,T]\times\Omega\,\to\,\mathbb{R}$ be a solution to \eqref{1.1}, and let $T\,>\,0$. There exists a decreasing function $\varepsilon:\mathbb{R}_+\,\to\,\mathbb{R}_+$ with $\lim_{n\to\infty}\varepsilon(n)=0$, depending only on $T$ and on the functions $$n\mapsto\|(u_0-n)^+\|_{L^1(\mathbb{T}^N)},\,\,\,\,\,\,\, n\mapsto\|(u_0-n)^-\|_{L^1(\mathbb{T}^N)},$$ such that, for all $n\,> 1$, \begin{align}\label{tightness} \mathbb{E}\big(\sup_{t\in[0,T]}\|(u(t)-n)^{\pm}\|_{L^1(\mathbb{T}^N)}\big) + \mathbb{E}\,\mathfrak{m}(\mathcal{B}_n)\,\le\,C(T,C_0,\|u_0\|_{L^1(\mathbb{T}^N)})(\varepsilon(n-1)+\varepsilon^{1/2}(n)), \end{align} where $\mathcal{B}_n=\mathbb{T}^N\times[0,T]\times\{\zeta\in\mathbb{R}:\,n\,\le\,|\zeta|\,\le n+1\}$.
{\text{e}}nd{proposition} \begin{proof} For $R\,\textgreater\,0$, set $$\kappa_R(u)=\mathbbm{1}_{R\,\textless\,u\,\textless R+1},\,\,\,\,\,\,\,\kappa_R(u)=\int_0^u\int_0^r\kappa_R(s)ds dr.$$ We take $$\varphi(x,\mathfrak{\zeta})=\kappa_R'(\mathfrak{\zeta})$$ in {\text{e}}qref{formulation} to obtain,$\mathbb{P}$-almost surely, for all $t\in[0,T]$ \begin{align} \int_{\mathbb{T}^N}\kappa_R(u(x,t)) dx&=\int_{\mathbb{T}^N}\kappa_R(u_0(x))dx + \sum_{j\ge1}\int_0^t\int_{\mathbb{T}^N}g_j(x)\kappa_R'(u(x,s)) dx d\beta_j(s)\notag\\ &\qquad+\mathcal{L}_{\lambda}ac{1}{2}\int_0^t\int_{\mathbb{T}^N} H^2(x)\kappa_R(u(x,s)) dx ds-\int_{\mathcal{B}_R}d\mathfrak{m}(x,s,t), {\text{e}}nd{align} Taking then expectation, we have, for all $t\in[0,T]$ \begin{align}\label{m} \mathbb{E}\int_{\mathbb{T}^N}\kappa_R(u(x,t)) dx&=\int_{\mathbb{T}^N}\kappa_R(u_0(x))dx +\mathcal{L}_{\lambda}ac{1}{2}\mathbb{E}\int_0^t\int_{\mathbb{T}^N} H^2(x)\kappa_R(u(x,s)) dx ds-\mathbb{E}\int_{\mathcal{B}_R}d\mathfrak{m}(x,s,t), {\text{e}}nd{align} \begin{align}\label{r} \mathbb{E}\int_{\mathbb{T}^N}\kappa_R(u(x,t)) dx&\,\le\,\int_{\mathbb{T}^N}\kappa_R(u_0(x))dx +\mathcal{L}_{\lambda}ac{1}{2}\mathbb{E}\int_0^t\int_{\mathbb{T}^N} H^2(x)\kappa_R(u(x,s)) dx ds {\text{e}}nd{align} Note that $$(u-(R+1))^+\,\le\,\kappa_R(u)\,\le\,(u-R)^+,$$ for all $R\,\ge\,0,\,u\in\mathbb{R}$. Note also \begin{align}\label{inequality} \kappa_R(u)\,\le\,\mathbbm{1}_{R\,\le\,u}\,\le\,\mathcal{L}_{\lambda}ac{(u-R+r)^+}{r},\,\,\,\,\,\,\, r\,\textgreater\,0. {\text{e}}nd{align} {\text{e}}qref{r} gives \begin{align*} \mathbb{E}\int_{\mathbb{T}^N}(u(x,t)-(R+1))^+dx\,&\le\,\mathcal{L}_{\lambda}ac{C_0}{2r}\mathbb{E}\int_0^t\int_{\mathbb{T}^N}(u(x,t)-R+r)^+ dx + \int_{\mathbb{T}^N}(u_0(x)-R)^+ dx.\\ &\le\,\mathcal{L}_{\lambda}ac{C_0}{2r}\mathbb{E}\int_0^t\int_{\mathbb{T}^N}(u(x,t)-R+r)^+ dx+\int_{\mathbb{T}^N}(u_0(x)-R+r)^+ dx. {\text{e}}nd{align*} Choose $r=C_0+1$,then \begin{align*} \mathbb{E}\int_{\mathbb{T}^N}(u(x,t)-(R+1))^+dx\,&\le\,\mathcal{L}_{\lambda}ac{1}{2}\mathbb{E}\int_0^t\int_{\mathbb{T}^N}(u(x,t)-R+C_0+1)^+ dx+\int_{\mathbb{T}^N}(u_0(x)-R+C_0+1)^+ dx, {\text{e}}nd{align*} we take $R=nC_0-1$,\,\, then we have following inequality, \begin{align*} \mathbb{E}\int_{\mathbb{T}^N}(u(x,t)-nC_0)^+ dx\,\le\,\mathcal{L}_{\lambda}ac{1}{2}\mathbb{E}\int_0^t\int_{\mathbb{T}^N}(u(x,t)-(n-1)C_0)^+ dx+\int_{\mathbb{T}^N}(u_0(x)-(n-1)C_0)^+ dx. {\text{e}}nd{align*} Denote by $h_n(t)$ the function $$h_n(t)=\mathbb{E}\int_{\mathbb{T}^N}(u(x,t)-nC_0)^+ dx,\,\,\, \vec{\mathbf{f}} orall t\in[0,T],$$ then, we finally obtain $$h_n(t)\,\le\,\mathcal{L}_{\lambda}ac{1}{2}\int_0^t h_{n-1}(s)ds + h_{n-1}(0).$$ Since $u^+\,\le\,(u-C_0)^+ + C_0,$ we have $$h_0(t)=\mathbb{E}\int_{\mathbb{T}^N}u^+(x)dx\,\le\,\mathbb{E}\int_{\mathbb{T}^N}(u(x,t)-C_0)^+ dx + C_0= h_1(t)+C_0$$ After this, we can follow the proof of \cite[Proposition 15]{vovelle 2} to conclude the result. {\text{e}}nd{proof} \noindent \textbf{Kinetic measure:} By estimate {\text{e}}qref{tightness}, it is clear that ${\mathfrak{m}}$ satiesfies decay codition $(ii)$ of Definition \ref{definition kinetic measure}. So ${\mathfrak{m}}$ is kinetic measure. \subsection{Uniqueness:}\label{Section 3.2} Note that both Propositions \ref{Proposition 3.1}, \& \ref{th3.5} also hold in present $L^1$-setting. Here we state only the statement of the contraction principle. We are not giving proof of result, because proof of the uniqueness result is similar to the proof of Theorem \ref{th3.6}. 
\begin{thm}\label{new} Let $u_0\in L^1(\mathbb{T}^N)$. Let u be a kinetic solution to {\text{e}}qref{1.1}, then there exists $L^1(\mathbb{T}^N)$-valued process $(u^{-}(t))_{t\in[0,T]}$ such that $\mathbb{P}$-almost surely, for all $t\in[0,T]$, $f^{-}(t)=\mathbbm{1}_{u^{-}(t)\textgreater\xi}$. Moreover, if $u_1$,\,\,$u_2$\, are kinetic solutions to {\text{e}}qref{1.1} with initial data $u_{1,0} $ and $u_{2,0}$ respectively, then for all $t\in[0,T]$, we have $\mathbb{P}$-almost surely, \begin{align} \|u^1(t)-u^2(t)\|_{L^1(\mathbb{T}^N)}\,\le\,\|u_{1,0}-u_{2,0}\|_{L^1(\mathbb{T}^N)}, {\text{e}}nd{align} {\text{e}}nd{thm} \noindent \textbf{Continuity in time:} Based on the equi-integrability estimate {\text{e}}qref{tightness} we can deduce the kinetic solutions have almost surely continuous paths in $L^1(\mathbb{T}^N)$. \begin{cor} Let $u_0\in L^1(\mathbb{T}^N)$ and let $u$ be a kinetic solution to ${\text{e}}qref{1.1}$. Then $u$ has almost surely continuous trajectories in $L^1(\mathbb{T}^N).$ {\text{e}}nd{cor} \begin{proof} Based on Proposition \ref{Proposition 3.1}, Theorem \ref{th3.6} and Proposition \ref{22}, we are in a position to apply \cite[Lemma 17]{vovelle 2}, which implies the continuity in $L^1(\mathbb{T}^N)$. Indeed, let us first show that $\mathbb{P}$-almost surely, $u$ is right continuous in $L^1(\mathbb{T}^N)$. From the Proposition \ref{Proposition 3.1} , we know that $f(t+\delta_n)\to f(t)$ in $L^\infty(\mathbb{T}^N\times\mathbb{R})$ weak-* $\mathbb{P}$-almost surely as $\delta_n\to0$. Finally, {\text{e}}qref{tightness} implies $$\lim_{n\to\infty}\mathbb{E}\sup_{t\in[0,T]}\|(u(t)-n)^{\pm}\|_{L^1(\mathbb{T}^N)}=0.$$ Therefore, there exists a subsequence which converges $\mathbb{P}$-almost surely, that is, $\mathbb{P}$-almost surely, $$\lim_{n\to\infty} \sup_{t\in[0,T]}\|(u(t)-n)^{\pm}\|_{L^1(\mathbb{T}^N)}=0.$$ Consequently, \cite[Lemma 17]{vovelle 2} applies and gives the following convergence $\mathbb{P}$-almost surely, $$u(t+\delta_n)\,\to\,u(t)\,\,\,\,\,\text{in}\,\,\,\,L^1(\mathbb{T}^N)\,\,\,\,\text{as}\,\,\,\,\,\delta_n\,\to\,0.$$ By the same arguments we can show that $\mathbb{P}$-almost surely, $u^{-}$, constructed in Theorem \ref{new}, is left-continuous in $L^1(\mathbb{T}^N)$. {\text{e}}nd{proof} \subsection*{Acknowledgments} The author wishes to thank Ujjwal Koley for many stimulating discussions and valuable suggestions. The author also thanks Martina Hofmanov\'a for enlightening discussions on this topic during her visit to TIFR-CAM. \begin{thebibliography}{9} \bibitem{Alibaud} N. Alibaud; \textit{ Entropy formulation for fractal conservation laws. J. Evol. Equ., 7(1), 145-175, 2007.} \bibitem{N-2} N. Alibaud, B. Andreianov, A. Ouedraogo; \textit{Nonlocal dissipation measure and $L^1$ kinetic theory for fractional conservation laws, Comm. Partial Differential Equations 45 (2020), no. 9, 1213–1251, 35R11 (35B30 35K59 35L65)} \bibitem{Alibaud 2} N. Alibaud, S. Cifani, and E. R. Jakobsen; \textit{ Continuous dependence estimate for nonlinear fractional convection-diffusion equations. SIAM. J. Math. Anal., 44(2), 603-632, 2012.} \bibitem{Bak} Y. Bakhtin; \textit{The burgers equation with poisson random forcing, Ann. Probab. 41 (2013), no. 4, 2961–2989} \bibitem{BaVaWitParab} C. Bauzet, G. Vallet and P. Wittbold. \newblock A degenerate parabolic-hyperbolic Cauchy problem with a stochastic force. \newblock{{\text{e}}m Journal of Hyperbolic Differential Equations}, 12(3) (2015) 501-533. \bibitem{Karlsen} M. Bendahmane and K.H. 
Karlsen; \textit{Renormalized entropy solutions for quasilinear anisotropic degenerate parabolic equations. SIAM J. Math. Anal. 36(2):405–422, 2004.} \bibitem{neeraj} N. Bhauryal; U. Koley; G. Vallet; \textit{ The Cauchy problem for a fractional conservation laws driven by Lévy noise, Stochastic Processes and their applications, 130(9), 5310-5365, 2020. https://doi.org/10.1016/j.spa.2020.03.009.} \bibitem{Neeraj2} N. Bhauryal, U. Koley, G. Vallet; \textit{A fractional degenerate parabolic-hyperbolic Cauchy problem with noise. J. Differential Equations 284 (2021), 433–521.} \bibitem{BisKoleyMaj} I. H. Biswas, U.~ Koley, and A.~ K. Majee. \newblock Continuous dependence estimate for conservation laws with L\'{e}vy noise. \newblock{{\text{e}}m J. Differ. Equ.}, 259 (2015), 4683-4706. \bibitem{Carrillo} J. Carrilllo; \textit{Entropy solutions for nonlinear degenerate problems. Arch. Rational Mech. Anal. 147 (1999) 269-361.} \bibitem{Chaudhary} A. Chaudhary; \textit{Stochastic Degenerate Fractional Conservation Laws; https://arxiv.org/abs/2109.11889.} \bibitem{Cifani} S. Cifani, and E. R. Jakobsen; \textit{ Entropy solution theory for fractional degenerate convection-diffusion equations. Ann. I. H. Poincare, 28(3), 413-441, 2011.} \bibitem{chen} G. Q. Chen; Q. Ding; K.H. Karlsen; \textit{ On nonlinear stochastic balance laws. Arch. Ration. Mech. Anal. 204 (2012), no. 3, 707–743. 35R60 (35B25 35B30 35L45 35L60 60H15 76B03 76M35)} \bibitem{chen3} G. Q. Chen and B. Perthame; \textit{Large-time behaviour of periodic entropy solutions to anisotropic degenerate parabolic-hyperbolic equations, Proc. Amer. Math. Soc. 137(2009), no.9, 30033011.} \bibitem{chen.2} G. Q. Chen, H.C. Pang; \textit{Invariant measures for nonlinear conservation laws driven by stochastic forcing, Chin. Ann. Math. Ser. B 40 (2019), no. 6, 967–1004. 60H15 (35K65 35Q53 37A50 37C40 58J70).} \bibitem{overdriven} P. Clavin; \textit{Instabilities and nonlinear patterns of overdriven detonations in gases. Nonlinear PDE’s in Condensed Matter and Reactive Flows. Kluwer, 49–97, 2002.} \bibitem{mathmatical finance} R. Cont and P. Tankov; \textit{Financial modelling with jump processes. Chapman \& Hall/CRC Financial Mathematics Series, Chapman \& Hall/CRC, Boca Raton (FL), 2004.} \bibitem{vovelle 2} A. Debussche, J. Vovelle; \textit{Invariant measure of scalar first-order conservation laws with stochastic forcing, Probability Theory and Related Fields: Volume 163, Issue 3 (2015), Page 575-611.} \bibitem{vovelle} A. Debussche; M. Hofmanová; J. Vovelle; \textit{Degenerate parabolic stochastic partial differential equations: quasilinear case. Ann. Probab. 44 (2016), no. 3, 1916–1955. 60H15 (35K65 35R60)} \bibitem{deb} A. Debussche; J. Vovelle; \textit{ Scalar conservation laws with stochastic forcing. J. Funct. Anal. 259 (2010), no. 4, 1014–1042. 60H15 (35L65 35R60)} \bibitem{vovelle2} A. Debussche and J. Vovelle; \textit{Long-time behaviour in scalar conservation laws, Differential Integral Equations 22 (2009), no. 3- 4, 225–238.} \bibitem{sylvain} S. Dotti, J. Vovelle; \textit{Convergence of Approximations to Stochastic Scalar Conservation Laws, Archive for Rational Mechanics and Analysis, Springer Verlag, 2018, 230 (2), pp.539-591.} \bibitem{Feng} J. Feng and D. Nualart; \textit{Stochastic scalar conservation laws. J. Funct. Anal., 255(2): 313-373, 2008.} \bibitem{hofmanova} M. Hofmanov\'a; \textit{ Degenerate parabolic stochastic partial differential equations. Stochastic Process. Appl. 
123 4294-4336, MR3096355} \bibitem{ujjwal} K.~H.~Karlsen, U.~Koley, and N.~H.~Risebro \newblock An error estimate for the finite difference approximation to degenerate convection-diffusion equations. \newblock {{\text{e}}m Numer. Math.}, 121(2): 367-395, 2012. \bibitem{Kim} J. U. Kim; \textit{On a stochastic scalar conservation law. Indiana Univ. Math. J., 52 (1), 227-256, 2003.} \bibitem{KMV} U.~Koley, A.~K.~Majee, and G.~Vallet. \newblock Continuous dependence estimate for a degenerate parabolic-hyperbolic equation with L\'{e}vy noise. \newblock{{\text{e}}m Stochastic Partial Differential Equations: Analysis and Computations}, DOI: 10.1007/s40072-016-0084-z. \bibitem{KMV1} U.~Koley, A.~K.~Majee, and G.~Vallet. \newblock A finite difference scheme for conservation laws driven by L\'{e}vy noise. \newblock{ {\text{e}}m IMA Journal of Numerical Analysis}, 38(2), 998-1050, 2018 \bibitem{koley2013multilevel} U. Koley, N. H. Risebro, C. Schwab and F. Weber. \newblock{A multilevel Monte Carlo finite difference method for random scalar degenerate convection-diffusion equations.} \newblock{{\text{e}}m J. Hyperbolic Differ. Equ.}, 14(3), 415-454, 2017. \bibitem{Koley3} U. Koley, D. Ray, and T. Sarkar. \newblock{Multi-level Monte Carlo finite difference methods for fractional conservation laws with random data.,} \newblock{{\text{e}}m SIAM/ASA J. Uncertain. Quantif.}, 9(1), 65–105, 2021. \bibitem{Kruzhkov} S. N. Kruzhkov; \textit{First order quasilinear equations with several independent variables. Math. Sb. (N.S.) 81(123):228–255, 1970.} \bibitem{lions} P.L. Lions, B.Perthame, E. Tadmor; \textit{A kinetic formulation of multidimenional scalar conservation laws and related equations, J. Amer. Math. Soc. 7 (1) (1994) 169-191.} \bibitem{porous media} A. de Pablo, F. Quiros, A. Rodriguez and and J. L. Vazquez; \textit{A fractional porous medium equation, Adv. Math. 226 (2011), no. 2, 1378–1409. 35R11 (35A01 35A02 35B65 35K15 35K57 76S05).} \bibitem{B-2} B. Perthame; \textit{ Uniqueness and error estimates in first order quasilinear conservation laws via the kinetic entropy defect measure, J. Math. Pures et Appl. 77 (1998) 1055-1064.} \bibitem{B-3} B. Perthame; \textit{Kinetic Formulation of Conservation Laws, Oxford Lecture Ser. Math. Appl., vol. 21, Oxford University Press, Oxford, 2002.} \bibitem{prato} G. Da Prato; J. Zabczyk; \textit{Stochastic Equations in Infinite Dimensions, second edition 2014, Encyclopedia of Mathematics and Its Applications 44. Cambridge Univ. Press, Cambridge. MR12} \bibitem{Rev} D. Revuz, M. Yor; \textit{ Continuous Martingales and Brownian Motion, third ed., Grundlehren Math. Wiss. (FundamentalPrinciples of Mathematical Sciences), vol. 293, Springer-Verlag, Berlin, 1999} \bibitem{hyd} C. Rohde and W.-A. Yong; \textit{The nonrelativistic limit in radiation hydrodynamics. I. Weak entropy solutions for a model problem. J. Differential Equations, 234(1):91–109, 2007.} \bibitem{Vallet} G. Vallet. and P. Wittbold; \textit{On a stochastic first-order hyperbolic equation in a bounded domain. Infin. Dimens. Anal. Quantum Probab. Relat. Top., 12(4), 613–651, 2009.} \bibitem{EKAS} E. Weinan, K. Khanin, A. Mazel, and Ya. Sinai; \textit{Invariant measures for Burgers equation with stochastic forcing, Ann. of Math. (2) 151 (2000), no. 3, 877–960.} {\text{e}}nd{thebibliography} {\text{e}}nd{document}
\begin{document} \title[modular symbols in positive characteristic]{The Borel-Moore homology of an arithmetic quotient of the Bruhat-Tits building of PGL of a non-archimedean local field in positive characteristic and modular symbols} \author{Satoshi Kondo, Seidai Yasuda} \maketitle \begin{abstract}We study the homology and the Borel-Moore homology with coefficients in $\mathbb{Q}$ of a quotient (called an arithmetic quotient) of the Bruhat-Tits building of $\mathrm{PGL}$ of a non-archimedean local field of positive characteristic by an arithmetic subgroup (a special case of the general definition in Harder's article (Invent.\ Math.\ 42, 135-175 (1977))). We define an analogue of modular symbols in this context and show that the image of the canonical map from homology to Borel-Moore homology is contained in the sub-$\mathbb{Q}$-vector space generated by the modular symbols. By definition, the limit of the Borel-Moore homology as the arithmetic group becomes small is isomorphic to the space of $\mathbb{Q}$-valued automorphic forms that satisfy certain conditions at a distinguished (fixed) place (namely, invariance under the Iwahori subgroup and the center at that place). We show that the limit of the homology with $\mathbb{C}$-coefficients is identified with the subspace consisting of cusp forms. We also describe an irreducible subquotient of the limit of the Borel-Moore homology as an induced representation in a precise manner and give a multiplicity one type result. \keywords{modular symbol \and arithmetic group \and Borel-Moore homology \and Bruhat-Tits building \and automorphic forms} \end{abstract} \section{Introduction} Let us state our result slightly more precisely than in the abstract. We first give the setup. We let $F$ denote a global field of positive characteristic. Let $C$ be a proper smooth curve over a finite field whose function field is $F$. Let $\infty$ be a place of $F$ and let $K=F_\infty$ denote the local field at $\infty$. Let $\mathcal{B}T_\bullet$ be the Bruhat-Tits building for $\mathrm{PGL}_d$ of $K$ for a positive integer $d$. It is a simplicial complex of dimension $d-1$. Let $\Gamma \subset \mathrm{GL}_d(K)$ be an arithmetic subgroup (see the definition in Section~\ref{sec:4.1.1}). We consider the homology, the Borel-Moore homology, and the canonical map from homology to Borel-Moore homology of the quotient $\Gamma \backslash \mathcal{B}T_\bullet$. A building is made of (in particular, is a union of) subsimplicial complexes called apartments. These are labeled (not one-to-one) by the set of bases of $K^{\oplus d}$. In the Borel-Moore homology of an apartment, a fundamental class is defined. When the apartment corresponds to an $F$-basis, using the pushforward map for Borel-Moore homology, we obtain a class in the Borel-Moore homology of the quotient $\Gamma \backslash \mathcal{B}T_\bullet$. We regard this class as an analogue of a modular symbol. The first of our main results in its rough form may be stated as follows. See Theorem~\ref{lem:apartment} for the precise form. \begin{thm} \label{thm:intro} The image of the canonical injective homomorphism \[ H_{d-1}(\Gamma\backslash \mathcal{B}T_\bullet, \mathbb{Q}) \to H_{d-1}^\mathrm{BM}(\Gamma\backslash \mathcal{B}T_\bullet, \mathbb{Q}) \] is contained in the subspace generated by the classes of the apartments that correspond to $F$-bases. \end{thm} We refer the reader to the nice introduction in \cite{AR} for a short exposition on classical modular symbols.
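For the reader's orientation we recall, in rough form only, the classical picture in the simplest case: for a congruence subgroup $\Gamma \subset \mathrm{SL}_2(\mathbb{Z})$ acting on the upper half plane and for cusps $\alpha,\beta \in \mathbb{P}^1(\mathbb{Q})$, the image of the geodesic from $\alpha$ to $\beta$ in the compactified modular curve defines a class $\{\alpha,\beta\}$ in the homology of that curve relative to the cusps, and by a theorem of Manin these modular symbols span the relative homology group with $\mathbb{Q}$-coefficients; see \cite{AR} for precise statements and for the generalization to higher rank.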
In the paper \cite{AR}, they consider (the analogue of) the modular symbols for the quotient by some congruence subgroup of $\mathrm{SL}_d(\mathbb{R})$. Our result may be regarded as a non-archimedean analogue of a part of their result, especially Proposition 3.2, p.246. Let us remark on the difference from the archimedean case and the difficulty in our case. In \cite{AR}, they use Borel-Serre bordification (compactification), on which the group $\Gamma$ acts freely. In the non-archimedean case, there exist several compactifications of (the geometric realization of) the Bruhat-Tits building of $\mathrm{PGL}$ (the reader is referred to the introduction in \cite{We2}), but the action is not free. We therefore work with the equivariant (co)homology groups. There is a reason to use Werner's compactification and not other compactifications. The key in the proof of our result is the explicit construction of the map (4) of Section~\ref{sec:4 and 5} at the level of chain complexes in the indicated direction. We do not have an analogous construction for other compactifications, since for the construction we use the continuous map $s(v_1,\dots, v_d)$ (see Lemma~\ref{lem:cont s}) which uses the interpretation of the geometric points of the compactification of the Bruhat-Tits building as the set of semi-norms. A further advantage is that the map $s(v_1,\dots, v_d)\times [g_0,\dots, g_{d-1}]$ readily defines a class in the equivariant homology. The modular symbols considered here have an application in our other paper \cite{KY:Zeta elements}. The setup and the statements in Sections~\ref{sec:2} and~\ref{sec:BT} of this article already appeared there. We reproduce them here for convenience. That paper uses the results Lemma~\ref{lem:arithmetic} and Theorem~\ref{lem:apartment}. We turn to our second result. We let $A=H^0(C \setminus \{\infty\}, \mathcal{O}_C)$. Here we identified a closed point of $C$ and a place of $F$. We write $\wh{A}=\varprojlim_I A/I$, where the limit is taken over the nonzero ideals of $A$. We let $\mathbb{A}^\infty=\wh{A} \otimes_A F$ denote the ring of finite adeles. For an open compact subgroup $\mathbb{K} \subset \mathrm{GL}_d(\mathbb{A}^\infty)$, let $X_{\mathbb{K}, \bullet}= \mathrm{GL}_d(F) \backslash (\mathrm{GL}_d(\mathbb{A}^\infty)/\mathbb{K} \times \mathcal{B}T_\bullet)$ (see Section~\ref{subsec:X_K} for the precise definition). It is the finite disjoint union of spaces of the form $\Gamma \backslash \mathcal{B}T_\bullet$ for some arithmetic subgroup $\Gamma$. Then we have the following result. See Proposition~\ref{prop:66_3} and Theorem~\ref{7_prop1} for the relevant notation. \begin{thm} \label{thm:intro2} \begin{enumerate} \item We have \[ \varinjlim_\mathbb{K} H_{d-1}(X_{\mathbb{K}, \bullet}, \mathbb{C}) \cong \bigoplus_\pi \pi^\infty\] as representations of $\mathrm{GL}_d(\mathbb{A})$ where $\pi=\pi^\infty \otimes \pi_\infty$ runs over the irreducible cuspidal automorphic representations of $\mathrm{GL}_d(\mathbb{A})$ such that $\pi_\infty$ is isomorphic to the Steinberg representation of $\mathrm{GL}_d(K)$. \item Let $\pi = \pi^\infty \otimes \pi_\infty$ be an irreducible smooth representation of $\mathrm{GL}_d(\mathbb{A}^\infty)$ such that $\pi^\infty$ appears as a subquotient of $\varinjlim_\mathbb{K} H_{d-1}^\mathrm{BM}(X_{\mathbb{K},\bullet},\mathbb{C})$. 
Then there exist an integer $r \ge 1$, a partition $d=d_1 + \cdots + d_r$ of $d$, and irreducible cuspidal automorphic representations $\pi_i$ of $\mathrm{GL}_{d_i}(\mathbb{A})$ for $i=1,\ldots,r$ which satisfy the following properties: \begin{enumerate} \item For each $i$ with $1 \le i \le r$, the component $\pi_{i,\infty}$ at $\infty$ of $\pi_i$ is isomorphic to the Steinberg representation of $\mathrm{GL}_{d_i}(F_\infty)$. \item Let us write $\pi_i = \pi_i^\infty \otimes \pi_{i,\infty}$. Let $P \subset \mathrm{GL}_d$ denote the standard parabolic subgroup corresponding to the partition $d=d_1 + \cdots + d_r$. Then $\pi^\infty$ is isomorphic to a subquotient of the unnormalized parabolic induction $\mathrm{Ind}_{P(\mathbb{A}^\infty)}^{\mathrm{GL}_d(\mathbb{A}^\infty)} \pi_1^\infty \otimes \cdots \otimes \pi_r^\infty$. \end{enumerate} Moreover, for any subquotient $H$ of $\varinjlim_\mathbb{K} H_{d-1}^\mathrm{BM}(X_{\mathbb{K},\bullet},\mathbb{C})$ which is of finite length as a representation of $\mathrm{GL}_d(\mathbb{A}^\infty)$, the multiplicity of $\pi$ in $H$ is at most one. \end{enumerate} \end{thm} Theorem~\ref{thm:intro2} (1) follows from a result of Harder \cite{Harder} and the classification theorem due to Moeglin and Waldspurger \cite{MW} (see Remark~\ref{rmk:HMW} for a sketch). We give another proof which does not rely on Harder's result. Theorem~\ref{thm:intro2} (2) without Condition (a) follows from a general theorem of Langlands. In a forthcoming paper by the second author, an application of this result (using Condition (a) and the multiplicity one statement) will be given. The multiplicity result seems new in that, although we have the restriction that $\pi_\infty$ is isomorphic to the Steinberg representation, a subquotient of the Borel-Moore homology may not be contained in the discrete part of the space of $L^2$-automorphic forms. For the proof of Theorem~\ref{thm:intro2}(2), we study the quotient of the building using its interpretation as the moduli of vector bundles with certain structures on $C$. The tools that appear are the same as in \cite{Gra}, but we actually study the quotient space, while \cite{Gra} considers only certain orbit spaces, since there only the finite generation of the cohomology groups is needed. One technical problem, which was already addressed by Prasad in the paper by Harder \cite[p.140, Bemerkung]{Harder}, is that the quotient of the Bruhat-Tits building may not be a simplicial complex. In Section~\ref{sec:2} we give a generalization of the notion of simplicial complexes so as to include those quotients. We warn that our use of the term Borel-Moore homology is not a common one. We give some justification in Section~\ref{sec:cellular}. The paper is organized as follows. In Section~\ref{sec:simplicial complex}, we consider simplicial complexes. Actually, we generalize the definition of a simplicial complex in the usual sense (which we call a strict simplicial complex). One reason for doing so is that, while the Bruhat-Tits building itself is canonically a (strict) simplicial complex, the quotient is not canonically so. We also redefine (co)homology in an orientation-free manner. This is because the Bruhat-Tits building is not canonically oriented. In Section~\ref{sec:BT}, we recall the definitions of the Bruhat-Tits building and the apartments. Besides the recollections, we give a construction of the fundamental class of an apartment in the Borel-Moore homology group. This serves as an analogue of a cycle (from $0$ to $i\infty$ in the upper half plane, for example) in the classical case.
In Section~\ref{sec:71}, we give the definition and some properties of an arithmetic group. The first of our main results (Theorem~\ref{lem:apartment}) is stated in this section. Section~\ref{Modular Symbols} is devoted to the proof of Theorem~\ref{lem:apartment}. The key to the proof is the construction of the maps (4) and (5) in Section~\ref{sec:4 and 5}. Sections~\ref{section8} and~\ref{sec:BMquot} are devoted to Theorem~\ref{thm:intro2}, and are independent of Section~\ref{Modular Symbols}. We give the definition of the simplicial complex $X_{\mathbb{K}, \bullet}$ and make precise the relation between the limit of the (Borel-Moore) homology of $X_{\mathbb{K},\bullet}$ as $\mathbb{K}$ becomes small and the space of ($\mathbb{Q}$-valued) automorphic forms. The main result of Section~\ref{section8} is Proposition~\ref{7_prop2}. In Section~\ref{sec:BMquot}, we prove Theorem~\ref{thm:intro2}(2), or Theorem~\ref{7_prop1}. The contents of Sections~\ref{sec:locfree} and~\ref{sec:HN} are reformulations of \cite{Gra}. The aim of Section~\ref{sec:props} is to state Propositions~\ref{7_prop1b} and~\ref{7_prop2}. The proofs are given in Section~\ref{sec:pf1b} and in Section~\ref{sec:pf2}, respectively. The proof of Theorem~\ref{7_prop1} using Propositions~\ref{7_prop1b} and~\ref{7_prop2} is given in Section~\ref{sec:pfthm}. \section{Simplicial complexes and their (co)homology} \label{sec:2} The material of this section (except for the remark in Section~\ref{sec:cellular}) appeared in Sections 3 and 5 of \cite{KY:Zeta elements}. We collect it here for the convenience of the reader. \subsection{Simplicial complexes} \label{sec:simplicial complex} \subsubsection{} Let us recall the notion of an (abstract) simplicial complex. A simplicial complex is a pair $(Y_0,\Delta)$ of a set $Y_0$ and a set $\Delta$ of finite subsets of $Y_0$ which satisfies the following conditions: \begin{itemize} \item If $S \in \Delta$ and $T\subset S$, then $T \in \Delta$. \item If $v \in Y_0$, then $\{v \} \in \Delta$. \end{itemize} In this paper we call a simplicial complex in the sense above a strict simplicial complex, and we use the terminology ``simplicial complex" in a slightly broader sense, since we will treat as simplicial complexes some arithmetic quotients of the Bruhat-Tits building, in which two different simplices may have the same set of vertices. The Bruhat-Tits building itself is a strict simplicial complex. Our primary example of a (nonstrict) simplicial complex is $\Gamma \backslash \mathcal{B}T_\bullet$ for an arithmetic group $\Gamma$ (to be defined in Section~\ref{sec:def Gamma}).
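As a toy example of this phenomenon (included here only for illustration), consider the boundary of a square, that is, the strict simplicial complex with the four vertices $1,2,3,4$ and the four edges $\{1,2\},\{2,3\},\{3,4\},\{4,1\}$, together with the simplicial action of the rotation through $\pi$, which interchanges $1$ with $3$ and $2$ with $4$. The quotient has two vertices $\bar{1},\bar{2}$ and two distinct edges (the images of $\{1,2\}$ and of $\{2,3\}$), and both edges have the same set of vertices $\{\bar{1},\bar{2}\}$. Hence the quotient is not a strict simplicial complex, although it is a simplicial complex in the sense adopted below; the arithmetic quotients of the Bruhat-Tits building exhibit the same phenomenon in higher dimensions.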
We adopt the following definition of a simplicial complex: a simplicial complex is a collection $Y_\bullet = (Y_i)_{i \ge 0}$ of sets indexed by the non-negative integers, equipped with the following additional data \begin{itemize} \item a subset $V(\sigma) \subset Y_0$ with cardinality $i+1$, for each $i \ge 0$ and for each $\sigma \in Y_i$ (we call $V(\sigma)$ the set of vertices of $\sigma$), and \item an element in $Y_j$, for each $i \ge j \ge 0$, for each $\sigma \in Y_i$, and for each subset $V' \subset V(\sigma)$ with cardinality $j+1$ (we denote this element in $Y_j$ by the symbol $\sigma \times_{V(\sigma)} V'$ and call it the face of $\sigma$ corresponding to $V'$) \end{itemize} which satisfy the following conditions: \begin{itemize} \item For each $\sigma \in Y_0$, the equality $V(\sigma) = \{\sigma\}$ holds, \item For each $i \ge 0$, for each $\sigma \in Y_i$, and for each non-empty subset $V' \subset V(\sigma)$, the equality $V(\sigma \times_{V(\sigma)} V') = V'$ holds, \item For each $i \ge 0$ and for each $\sigma \in Y_i$, the equality $\sigma \times_{V(\sigma)} V(\sigma) = \sigma$ holds, and \item For each $i \ge 0$, for each $\sigma \in Y_i$, and for all non-empty subsets $V', V'' \subset V(\sigma)$ with $V'' \subset V'$, the equality $(\sigma \times_{V(\sigma)} V')\times_{V'} V'' = \sigma \times_{V(\sigma)} V''$ holds. \end{itemize} We call the element $\sigma\times_{V(\sigma)} V'$, for $j$ and $V'$ as above, the $j$-dimensional face of $\sigma$ corresponding to $V'$. We remark here that the symbol $\times_{V(\sigma)}$ does not mean a fiber product in any way. Any strict simplicial complex gives a simplicial complex in the sense above in the following way. Let $(Y_0,\Delta)$ be a strict simplicial complex. We identify $Y_0$ with the set of subsets of $Y_0$ with cardinality $1$. For $i \ge 1$ let $Y_i$ denote the set of the elements in $\Delta$ which have cardinality $i+1$ as subsets of $Y_0$. For $i \ge 1$ and for $\sigma \in Y_i$, we set $V(\sigma)= \sigma$ regarded as a subset of $Y_0$. For a non-empty subset $V \subset V(\sigma)$, of cardinality $i'+1$, we set $\sigma \times_{V(\sigma)} V = V$ regarded as an element of $Y_{i'}$. Then it is easily checked that the collection $Y_\bullet = (Y_i)_{i \ge 0}$ together with the assignments $\sigma \mapsto V(\sigma)$ and $(\sigma, V) \mapsto \sigma \times_{V(\sigma)} V$ forms a simplicial complex. Let $Y_\bullet$ and $Z_\bullet$ be simplicial complexes. We define a map from $Y_\bullet$ to $Z_\bullet$ to be a collection $f=(f_i)_{i \ge 0}$ of maps $f_i : Y_i \to Z_i$ of sets which satisfies the following conditions: \begin{itemize} \item for any $i \ge 0 $ and for any $\sigma \in Y_i$, the restriction of $f_0$ to $V(\sigma)$ is injective and the image of $f_0|_{V(\sigma)}$ is equal to the set $V(f_i(\sigma))$, and \item for any $i \ge j \ge 0$, for any $\sigma \in Y_i$, and for any non-empty subset $V' \subset V(\sigma)$ with cardinality $j+1$ we have $f_j(\sigma \times_{V(\sigma)} V') = f_i(\sigma) \times_{V(f_i(\sigma))} f_0(V')$. \end{itemize} \subsubsection{} There is an alternative, less complicated, equivalent definition of a simplicial complex in the sense above, which we will describe in this paragraph. As it will not be used in this article, the reader may skip this paragraph. For a set $S$, let $\mathcal{P}^\mathrm{fin}(S)$ denote the category whose objects are the non-empty finite subsets of $S$ and whose morphisms are the inclusions.
Then giving a simplicial complex in our sense is equivalent to giving a pair $(Y_0,F)$ of a set $Y_0$ and a presheaf $F$ of sets on $\mathcal{P}^\mathrm{fin}(Y_0)$ such that $F(\{\sigma \}) = \{\sigma\}$ holds for every $\sigma \in Y_0$. This equivalence is explicitly described as follows: given a simplicial complex $Y_\bullet$, the corresponding $F$ is the presheaf which associates, to a non-empty finite subset $V \subset Y_0$ with cardinality $i+1$, the set of elements $\sigma \in Y_i$ satisfying $V(\sigma)=V$. This alternative definition of a simplicial complex is cleaner; nevertheless we have adopted the former definition since it is closer to the definition of a simplicial complex in the usual sense. \subsection{Homology and cohomology} \label{sec:BM homology} Usually the homology groups of $Y_\bullet$ are defined to be the homology groups of a complex $C_\bullet$ whose component in degree $i$ is the free abelian group generated by the $i$-simplices of $Y_\bullet$. For a precise definition of the boundary homomorphism of the complex $C_\bullet$, we need to choose an orientation of each simplex. In this paper we adopt an alternative, equivalent definition of homology groups which does not require any choice of orientations. The latter definition may seem a little complicated at first glance; however, it will soon turn out to be better suited for describing the (co)homology of the arithmetic quotients of the Bruhat-Tits building, which seem to admit no canonical, good choice of orientations. \subsubsection{Orientation} \label{sec:orientation} We recall in Sections~\ref{sec:orientation} and~\ref{sec:def homology} the precise definitions of the (co)homology, the cohomology with compact support and the Borel-Moore homology of a simplicial complex. When computing (co)homology, one usually fixes an orientation of each simplex once and for all, but we do not. This results in an apparently different definition, but it agrees with the usual one. We introduce the notion of orientation of a simplex. Let $Y_\bullet$ be a simplicial complex and let $i \ge 0$ be a non-negative integer. For an $i$-simplex $\sigma \in Y_i$, we let $T(\sigma)$ denote the set of all bijections from the finite set $\{1,\ldots, i+1 \}$ of cardinality $i+1$ to the set $V(\sigma)$ of vertices of $\sigma$. The symmetric group $S_{i+1}$ acts on the set $\{1,\ldots, i+1 \}$ from the left and hence on the set $T(\sigma)$ from the right. Through this action the set $T(\sigma)$ is a right $S_{i+1}$-torsor. We define the set $O(\sigma)$ of orientations of $\sigma$ to be the ${\{\pm 1\}}$-torsor $O(\sigma) = T(\sigma) \times_{S_{i+1},\mathrm{sgn}} {\{\pm 1\}}$ which is the push-forward of the $S_{i+1}$-torsor $T(\sigma)$ with respect to the signature character $\mathrm{sgn}: S_{i+1} \to {\{\pm 1\}}$. When $i \ge 1$, the ${\{\pm 1\}}$-torsor $O(\sigma)$ is isomorphic, as a set, to the quotient $T(\sigma)/A_{i+1}$ of $T(\sigma)$ by the action of the alternating group $A_{i+1} = \mathrm{Ker}\, \mathrm{sgn} \subset S_{i+1}$. When $i=0$, the ${\{\pm 1\}}$-torsor $O(\sigma)$ is isomorphic to the product $O(\sigma) = T(\sigma) \times {\{\pm 1\}}$, on which the group ${\{\pm 1\}}$ acts via its natural action on the second factor. Let $i \ge 1$ and let $\sigma \in Y_i$. For $v \in V(\sigma)$ let $\sigma_v$ denote the $(i-1)$-simplex $\sigma_v = \sigma \times_{V(\sigma)} (V(\sigma) \setminus \{v\})$. There is a canonical map $s_v : O(\sigma) \to O(\sigma_v)$ of ${\{\pm 1\}}$-torsors defined as follows.
Let $\nu \in O(\sigma)$ and take a lift $\wt{\nu}:\{1,\ldots,i+1\} \xto{\cong} V(\sigma)$ of $\nu$ in $T(\sigma)$. Let $\wt{\iota}_v : \{1,\ldots,i\} \hookrightarrow \{1,\ldots,i+1\}$ denote the unique order-preserving injection whose image is equal to $\{1,\ldots,i+1\} \setminus \{\wt{\nu}^{-1}(v)\}$. It follows from the definition of $\wt{\iota}_v$ that the composite $\wt{\nu} \circ \wt{\iota}_v: \{1,\ldots,i\} \to V(\sigma)$ induces a bijection $\wt{\nu}_v : \{1,\ldots,i\} \xto{\cong} V(\sigma) \setminus \{v\} = V(\sigma_v)$. We regard $\wt{\nu}_v$ as an element in $T(\sigma_v)$. We define $s_v : O(\sigma) \to O(\sigma_v)$ to be the map which sends $\nu \in O(\sigma)$ to $(-1)^{\wt{\nu}^{-1}(v)}$ times the class of $\wt{\nu}_v$. It is easy to check that the map $s_v$ is well-defined. Let $i \ge 2$ and $\sigma \in Y_i$. Let $v, v' \in V(\sigma)$ with $v \neq v'$. We have $(\sigma_v)_{v'} = (\sigma_{v'})_v$. Let us consider the two composites $s_{v'} \circ s_v : O(\sigma) \to O((\sigma_v)_{v'})$ and $s_v \circ s_{v'} : O(\sigma) \to O((\sigma_{v'})_v)$. It is easy to check that the equality \begin{equation} \label{formula1} s_{v'} \circ s_v (\nu) = (-1) \cdot s_v \circ s_{v'} (\nu) \end{equation} holds for every $\nu \in O(\sigma)$. \subsubsection{Cohomology and homology} \label{sec:def homology} We say that a simplicial complex $Y_\bullet$ is locally finite if for any $i \ge 0$ and for any $\tau \in Y_i$, there exist only finitely many $\sigma \in Y_{i+1}$ such that $\tau$ is a face of $\sigma$. We recall the four notions of homology or cohomology for a locally finite simplicial complex. Let $Y_\bullet$ be a simplicial complex (resp.\ a locally finite simplicial complex). For an integer $i\ge 0$, we let $Y_i'=\coprod_{\sigma \in Y_i} O(\sigma)$ and regard it as a ${\{\pm 1\}}$-set. Given an abelian group $M$, we regard the abelian groups $\bigoplus_{\nu \in Y_i'} M$ and $\prod_{\nu \in Y_i'}M$ as ${\{\pm 1\}}$-modules in such a way that the component at $\epsilon\cdot \nu$ of $\epsilon \cdot (m_\nu)$ is equal to $\epsilon m_\nu$ for $\epsilon \in {\{\pm 1\}}$ and for $\nu \in Y_i'$. For $i \ge 1$, we let $\wt{\partial}_{i,\oplus}: \bigoplus_{\nu \in Y_i'}M \to \bigoplus_{\nu \in Y_{i-1}'}M$ (resp.\ $\wt{\partial}_{i,\prod}: \prod_{\nu \in Y_i'}M \to \prod_{\nu \in Y_{i-1}'}M$) denote the homomorphism of abelian groups which sends $m = (m_\nu)_{\nu \in Y_i'}$ to the element $\wt{\partial}_i(m)$ whose coordinate at $\nu' \in O(\sigma') \subset Y_{i-1}'$ is equal to \begin{eqnarray}\label{eqn:boundary} \wt{\partial}_i(m)_{\nu'} = \sum_{(v,\sigma,\nu)} m_\nu \end{eqnarray} where in the sum on the right hand side $(v,\sigma,\nu)$ runs over the triples consisting of an element $v \in Y_0 \setminus V(\sigma')$, a simplex $\sigma \in Y_i$, and an orientation $\nu \in O(\sigma)$ which satisfy $V(\sigma) = V(\sigma') \amalg \{v\}$ and $s_v(\nu) = \nu'$. Note that the sum on the right hand side is a finite sum for $\wt{\partial}_{i,\oplus}$ by definition. One can also see that the sum is a finite sum in the case of $\wt{\partial}_{i,\prod}$ using the local finiteness of $Y_\bullet$. Each of $\wt{\partial}_{i,\oplus}$ and $\wt{\partial}_{i,\prod}$ is a homomorphism of ${\{\pm 1\}}$-modules. Hence it induces a homomorphism $\partial_{i,\oplus} : (\bigoplus_{\nu \in Y_i'}M)_{\{\pm 1\}} \to (\bigoplus_{\nu \in Y_{i-1}'}M)_{\{\pm 1\}}$ (resp.\ $\partial_{i,\prod} : (\prod_{\nu \in Y_i'}M)_{\{\pm 1\}} \to (\prod_{\nu \in Y_{i-1}'}M)_{\{\pm 1\}}$) of abelian groups, where the subscript ${\{\pm 1\}}$ means the coinvariants.
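Before proceeding, let us illustrate these conventions in the smallest non-trivial case; the following computation is ours and serves only as an illustration. Let $\sigma \in Y_1$ be a $1$-simplex with $V(\sigma)=\{a,b\}$, and let $\nu \in O(\sigma)$ be the orientation represented by the bijection $\wt{\nu}$ with $\wt{\nu}(1)=a$ and $\wt{\nu}(2)=b$. Then $s_a(\nu)$ is $(-1)^{1}=-1$ times the class of the bijection $1 \mapsto b$ in $T(\sigma_a)$, and $s_b(\nu)$ is $(-1)^{2}=+1$ times the class of the bijection $1 \mapsto a$ in $T(\sigma_b)$. Consequently, for $M=\mathbb{Z}$, the image under $\partial_{1,\oplus}$ of the class of the chain supported at $\nu$ with value $1$ is $[a]-[b]$ in the coinvariants, where $[v]$ denotes the class of the positively oriented vertex $v$. This differs from the common convention $\partial[a,b]=[b]-[a]$ only by a sign, coming from the fact that positions are indexed here by $\{1,\ldots,i+1\}$ rather than by $\{0,\ldots,i\}$; such an overall sign has no effect on the homology groups.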
It follows from the formula (\ref{formula1}) and the definition of $\partial_{i,\oplus}$ and $\partial_{i,\prod}$ that the family of abelian groups $((\bigoplus_{\nu \in Y_i'}M)_{\{\pm 1\}})_{i\ge 0}$ (resp. $((\prod_{\nu \in Y_i'}M)_{\{\pm 1\}})_{i\ge 0}$) indexed by the integer $i \ge 0$, together with the homomorphisms $\partial_{i,\oplus}$ (resp. $\partial_{i,\prod}$) for $i \ge 1$, forms a complex of abelian groups. The homology groups of this complex are the homology groups $H_*(Y_\bullet, M)$ (resp. the Borel-Moore homology groups $H_*^\mathrm{BM}(Y_\bullet, M)$) of $Y_\bullet$ with coefficients in $M$. We note that there is a canonical map $H_*(Y_\bullet, M) \to H_*^\mathrm{BM}(Y_\bullet, M)$ from homology to Borel-Moore homology induced by the map of complexes $((\bigoplus_{\nu \in Y_i'}M)_{\{\pm 1\}})_{i\ge 0} \to ((\prod_{\nu \in Y_i'}M)_{\{\pm 1\}})_{i\ge 0}$ given by inclusion at each degree. The family of abelian groups $(\mathrm{Map}_{{\{\pm 1\}}}(Y_i', M))_{i\ge 0}$ (resp. $(\mathrm{Map}^{\mathrm{fin}}_{{\{\pm 1\}}}(Y_i', M))_{i\ge 0}$ where the superscript $\mathrm{fin}$ means finite support) of the ${\{\pm 1\}}$-equivariant maps from $Y_i'$ to $M$ forms a complex of abelian groups in a similar manner. (One uses the local finiteness of $Y_\bullet$ for the latter.) The cohomology groups of this complex are the cohomology groups $H^*(Y_\bullet, M)$ (resp. the cohomology groups with compact support $H_c^*(Y_\bullet, M)$) of $Y_\bullet$ with coefficients in $M$. \subsubsection{Universal coefficient theorem} \label{univ_coeff} It follows from the definition that the following universal coefficient theorem holds. That is, for a simplicial complex $Y_\bullet$, there exist canonical short exact sequences $$ 0 \to H_i(Y_\bullet, \mathbb{Z}) \otimes M \to H_i(Y_\bullet, M) \to \mathrm{Tor}_1^\mathbb{Z} (H_{i-1}(Y_\bullet, \mathbb{Z}),M) \to 0 $$ and $$ 0 \to \mathrm{Ext}^1_\mathbb{Z}(H_{i-1}(Y_\bullet, \mathbb{Z}),M) \to H^i(Y_\bullet, M) \to \mathrm{Hom}_\mathbb{Z} (H_i(Y_\bullet, \mathbb{Z}),M) \to 0 $$ for any abelian group $M$. Similarly, for a locally finite simplicial complex $Y_\bullet$, we have short exact sequences $$ 0 \to \mathrm{Ext}^1_\mathbb{Z}(H_c^{i+1}(Y_\bullet, \mathbb{Z}),M) \to H^\mathrm{BM}_i(Y_\bullet,M) \to \mathrm{Hom}_\mathbb{Z} (H_c^i(Y_\bullet, \mathbb{Z}),M) \to 0 $$ and $$ 0 \to H^i_c (Y_\bullet, \mathbb{Z}) \otimes M \to H_c^i(Y_\bullet, M) \to \mathrm{Tor}_1^\mathbb{Z} (H_c^{i+1}(Y_\bullet, \mathbb{Z}),M) \to 0 $$ for any abelian group $M$. The canonical inclusions $$ \begin{array}{c} \left( \bigoplus_{\nu \in Y_i'}M \right)_{\{\pm 1\}} \hookrightarrow \left( \prod_{\nu \in Y_i'}M \right)_{\{\pm 1\}} \quad\text{and} \\ \mathrm{Map}^{\mathrm{fin}}_{{\{\pm 1\}}}(Y_i', M) \hookrightarrow \mathrm{Map}_{{\{\pm 1\}}}(Y_i', M) \end{array} $$ for $i \ge 0$ induce homomorphisms $H_i(Y_\bullet, M) \to H^\mathrm{BM}_i(Y_\bullet, M)$ and $H_c^i(Y_\bullet, M) \to H^i(Y_\bullet, M)$ of abelian groups, respectively. \subsubsection{} \label{sec:quasifinite} Let $f=(f_i)_{i \ge 0}: Y_\bullet \to Z_\bullet$ be a map of simplicial complexes. For each integer $i \ge 0$ and for each abelian group $M$, the map $f$ canonically induces homomorphisms $f_* : H_i(Y_\bullet, M) \to H_i(Z_\bullet,M)$ and $f^* : H^i(Z_\bullet,M) \to H^i(Y_\bullet,M)$. We say that the map $f$ is finite if the subset $f_i^{-1}(\sigma)$ of $Y_i$ is finite for any $i \ge 0$ and for any $\sigma \in Z_i$.
If $Y_\bullet$ and $Z_\bullet$ are locally finite, and if $f$ is finite, then $f$ canonically induces the pushforward homomorphism $f_* : H^\mathrm{BM}_i(Y_\bullet, M) \to H^\mathrm{BM}_i(Z_\bullet,M)$ and the pullback homomorphism $f^*: H_c^i(Z_\bullet,M) \to H_c^i(Y_\bullet,M)$. \subsubsection{} \label{sec:CWcomplex} Let $Y_\bullet$ be a simplicial complex. We associate to it a CW complex $|Y_\bullet|$ which we call the geometric realization of $Y_\bullet$. Let $I(Y_\bullet)$ denote the disjoint union $\coprod_{i \ge 0} Y_i$. We define a partial order on the set $I(Y_\bullet)$ by saying that $\tau \le \sigma$ if and only if $\tau$ is a face of $\sigma$. For $\sigma \in I(Y_\bullet)$, we let $\Delta_\sigma$ denote the set of maps $f:V(\sigma) \to \mathbb{R}_{\ge 0}$ satisfying $\sum_{v \in V(\sigma)} f(v) =1$. We regard $\Delta_\sigma$ as a topological space whose topology is induced from that of the real vector space $\mathrm{Map}(V(\sigma),\mathbb{R})$. If $\tau$ is a face of $\sigma$, we regard the space $\Delta_\tau$ as the closed subspace of $\Delta_\sigma$ which consists of the maps $V(\sigma) \to \mathbb{R}_{\ge 0}$ whose support is contained in the subset $V(\tau) \subset V(\sigma)$. We let $|Y_\bullet|$ denote the colimit $\varinjlim_{\sigma \in I(Y_\bullet)} \Delta_\sigma$ in the category of topological spaces and call it the geometric realization of $Y_\bullet$. It follows from the definition that the geometric realization $|Y_\bullet|$ has a canonical structure of CW-complex. \subsubsection{Cellular versus singular} \label{sec:cellular} We give a remark on the use of the term ``Borel-Moore homology'' in this paragraph. Given a strict simplicial complex, its cohomology, homology and cohomology with compact support (for a locally finite strict simplicial complex) are usually defined as above, and called cellular (co)homology. See for example \cite{Hatcher}. On the other hand, there are also the singular (co)homology and the singular cohomology with compact support, which are defined using the singular (co)chain complex. It is well-known that the cellular (co)homology groups (with compact support) are isomorphic to the singular (co)homology groups (with compact support) of the geometric realization. The same proof applies to the generalized simplicial complexes and gives an isomorphism between the cellular and the singular theories. For the Borel-Moore homology, we did not find a cellular definition as above, except in Hattori \cite{Hattori}, where he does not call it the Borel-Moore homology. He also gives a definition using singular chains and shows that the two homology groups are isomorphic. There are several definitions of Borel-Moore homology that may be associated to a (strict) simplicial complex. The definition of the Borel-Moore homology for PL manifolds is found in Haefliger \cite{Haefliger}. There is also a sheaf theoretic definition in Iversen \cite{Iversen}. More importantly, there is the general definition which appears in the theory of intersection homology. However, we did not find a statement in the literature, nor did we check ourselves, that the cellular definition in Hattori (which is the same as the one given in this article) gives groups isomorphic to the other Borel-Moore homology theories. \section{The Bruhat-Tits building and apartments} \label{sec:BT} For the general theory of Bruhat-Tits buildings and apartments, the reader is referred to the book \cite{Ab-Br}. \subsection{The Bruhat-Tits building of $\mathrm{PGL}_d$} In the following paragraphs, we recall the definition of the Bruhat-Tits building of $\mathrm{PGL}_d$ over $K$.
We recall that it is a strict simplicial complex. \subsubsection{Notation} Let $K$ be a nonarchimedean local field. We let $\mathcal{O} \subset K$ denote the ring of integers. We fix a uniformizer $\varpi \in \mathcal{O}$. Let $d \ge 1$ be an integer. Let $V=K^{\oplus d}$. We regard it as the set of row vectors so that $\mathrm{GL}_d(K)$ acts from the right by multiplication. \subsubsection{Bruhat-Tits building (\cite{BT})} \label{Bruhat-Tits} An $\mathcal{O}$-lattice in $V$ is a free $\mathcal{O}$-submodule of $V$ of rank $d$. We denote by $\mathrm{Lat}_{\mathcal{O}}(V)$ the set of $\mathcal{O}$-lattices in $V$. We regard the set $\mathrm{Lat}_{\mathcal{O}}(V)$ as a partially ordered set whose elements are ordered by the inclusions of $\mathcal{O}$-lattices. Two $\mathcal{O}$-lattices $L$, $L'$ of $V$ are called homothetic if $L = \varpi^j L'$ for some $j \in \mathbb{Z}$. Let $\overline{\mathrm{Lat}}_{\mathcal{O}}(V)$ denote the set of homothety classes of $\mathcal{O}$-lattices in $V$. We denote by $\mathrm{cl}$ the canonical surjection $\mathrm{cl}: \mathrm{Lat}_{\mathcal{O}}(V) \to \overline{\mathrm{Lat}}_{\mathcal{O}}(V)$. We say that a subset $S$ of $\overline{\mathrm{Lat}}_{\mathcal{O}}(V)$ is totally ordered if $\mathrm{cl}^{-1}(S)$ is a totally ordered subset of $\mathrm{Lat}_{\mathcal{O}}(V)$. The pair $(\overline{\mathrm{Lat}}_{\mathcal{O}}(V), \Delta)$ of the set $\overline{\mathrm{Lat}}_{\mathcal{O}}(V)$ and the set $\Delta$ of totally ordered finite subsets of $\overline{\mathrm{Lat}}_{\mathcal{O}}(V)$ forms a strict simplicial complex. The Bruhat-Tits building of $\mathrm{PGL}_d$ over $K$ is a simplicial complex $\mathcal{B}T_\bullet$ which is isomorphic to the simplicial complex associated to this strict simplicial complex. In the next paragraphs we explicitly describe the simplicial complex $\mathcal{B}T_\bullet$. \subsubsection{} For an integer $i \ge 0$, let $\wt{\mathcal{B}T}_i$ be the set of sequences $(L_j)_{j \in \mathbb{Z}}$ of $\mathcal{O}$-lattices in $V$ indexed by $j \in \mathbb{Z}$ such that $L_j \supsetneqq L_{j+1}$ and $\varpi L_j=L_{j+i+1}$ hold for all $j\in \mathbb{Z}$. In particular, if $(L_j)_{j \in \mathbb{Z}}$ is an element in $\wt{\mathcal{B}T}_0$, then $L_j = \varpi^j L_0$ for all $j \in \mathbb{Z}$. We identify the set $\wt{\mathcal{B}T}_0$ with the set $\mathrm{Lat}_{\mathcal{O}}(V)$ by associating the $\mathcal{O}$-lattice $L_0$ to an element $(L_j)_{j \in \mathbb{Z}}$ in $\wt{\mathcal{B}T}_0$. We say that two elements $(L_j)_{j \in \mathbb{Z}}$ and $(L'_j)_{j \in \mathbb{Z}}$ in $\wt{\mathcal{B}T}_i$ are equivalent if there exists an integer $\ell$ satisfying $L'_j=L_{j+\ell}$ for all $j \in \mathbb{Z}$. We denote by $\mathcal{B}T_i$ the set of the equivalence classes in $\wt{\mathcal{B}T}_i$. For $i=0$, the identification $\wt{\mathcal{B}T}_0 \cong \mathrm{Lat}_{\mathcal{O}}(V)$ gives an identification $\mathcal{B}T_0 \cong \overline{\mathrm{Lat}}_{\mathcal{O}}(V)$. Let $\sigma \in \mathcal{B}T_i$ and take a representative $(L_j)_{j \in \mathbb{Z}}$ of $\sigma$. For $j \in \mathbb{Z}$, let us consider the class $\mathrm{cl}(L_j)$ in $\overline{\mathrm{Lat}}_{\mathcal{O}}(V)$. Since $\varpi L_j = L_{j+i+1}$, we have $\mathrm{cl}(L_j) = \mathrm{cl}(L_{j+i+1})$. Since $L_j \supsetneqq L_k \supsetneqq \varpi L_j$ for $0 \le j < k \le i$, the elements $\mathrm{cl}(L_0), \ldots, \mathrm{cl}(L_i) \in \overline{\mathrm{Lat}}_{\mathcal{O}}(V)$ are distinct.
Hence the subset $V(\sigma) = \{ \mathrm{cl}(L_j)\ |\ j \in \mathbb{Z} \} \subset \mathcal{B}T_0$ has cardinality $i+1$ and does not depend on the choice of $(L_j)_{j \in \mathbb{Z}}$. It is easy to check that the map from $\mathcal{B}T_i$ to the set of finite subsets of $\overline{\mathrm{Lat}}_{\mathcal{O}}(V)$ which sends $\sigma \in \mathcal{B}T_i$ to $V(\sigma)$ is injective and that the set $\{V(\sigma)\ |\ \sigma \in \mathcal{B}T_i\}$ is equal to the set of totally ordered subsets of $\overline{\mathrm{Lat}}_{\mathcal{O}}(V)$ with cardinality $i+1$. In particular, for any $j \in \{0,\ldots,i\}$ and for any subset $V' \subset V(\sigma)$ of cardinality $j+1$, there exists a unique element in $\mathcal{B}T_j$, which we denote by $\sigma \times_{V(\sigma)} V'$, such that $V(\sigma \times_{V(\sigma)} V')$ is equal to $V'$. Thus the collection $\mathcal{B}T_\bullet = \coprod_{i \ge 0} \mathcal{B}T_i$ together with the data $V(\sigma)$ and $\sigma \times_{V(\sigma)} V'$ forms a simplicial complex which is canonically isomorphic to the simplicial complex associated to the strict simplicial complex $(\overline{\mathrm{Lat}}_{\mathcal{O}}(V), \Delta)$ which we introduced in the first paragraph of Section~\ref{Bruhat-Tits}. We call the simplicial complex $\mathcal{B}T_\bullet$ the Bruhat-Tits building of $\mathrm{PGL}_d$ over $K$. \subsubsection{} \label{sec:dimension} The simplicial complex $\mathcal{B}T_\bullet$ is of dimension at most $d-1$, by which we mean that $\mathcal{B}T_i$ is an empty set for $i > d-1$. This follows from the fact that $\wt{\mathcal{B}T}_i$ is an empty set for $i > d-1$, which we can check as follows. Let $i > d-1$ and assume that there exists an element $(L_j)_{j \in \mathbb{Z}}$ in $\wt{\mathcal{B}T}_i$. Then for $j=0,\ldots,i+1$, the quotient $L_j/L_{i+1}$ is a subspace of the $d$-dimensional $(\mathcal{O}/\varpi \mathcal{O})$-vector space $L_0/L_{i+1}=L_0/\varpi L_0$. These subspaces must satisfy $L_0/L_{i+1} \supsetneqq L_1/L_{i+1} \supsetneqq \cdots \supsetneqq L_{i+1}/L_{i+1}$. This is impossible since $i+1 > d$. \subsection{Apartments} \label{sec:apartments} Here we recall the definition of the apartments, which are simplicial subcomplexes of the Bruhat-Tits building. We then associate to each apartment a class in the Borel-Moore homology of a quotient of the Bruhat-Tits building. This class is an analogue of a modular symbol. \subsubsection{} \label{sec:def apartment} We give an explicit description of the simplicial complex $A_\bullet$ below without making use of the theory of root systems. For the viewpoint of the general theory of root systems, we refer the reader to \cite[p. 523, 10.1.7 Example]{Ab-Br}. We set $A_0 = \mathbb{Z}^{\oplus d}/\mathbb{Z}(1,\ldots,1)$. For two elements $x=(x_j), y=(y_j) \in \mathbb{Z}^{\oplus d}$, we write $x \le y$ when $x_j \le y_j$ for all $1 \le j \le d$. We say that a subset $\wt{\sigma} \subset \mathbb{Z}^{\oplus d}$ is small if for any two elements $x, y \in \wt{\sigma}$ we have either $x \le y \lneq x+(1,\ldots,1)$ or $y \le x \lneq y+(1,\ldots,1)$. Explicitly, this means that $\wt{\sigma}$ is a finite set and is of the form $\wt{\sigma} = \{x_0,\ldots,x_i \}$ for some elements $x_0, \ldots, x_i$ satisfying $x_0 \lneq \cdots \lneq x_i \lneq x_{i+1} = x_0 + (1,\ldots,1)$.
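To make the notion of smallness concrete, here is a quick example of ours for $d=3$. The subset $\{(0,0,0),(1,0,0),(1,1,0)\}\subset\mathbb{Z}^{\oplus 3}$ is small: for instance $(0,0,0)\le(1,1,0)\lneq(1,1,1)=(0,0,0)+(1,1,1)$, and the other pairs are checked in the same way. On the other hand, the subset $\{(0,0,0),(2,0,0)\}$ is not small, since $(0,0,0)\le(2,0,0)$ but $(2,0,0)\not\le(0,0,0)+(1,1,1)$, and $(2,0,0)\le(0,0,0)$ fails as well.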
We say that a finite subset $\sigma \subset A_0$ has a small lift to $\mathbb{Z}^{\oplus d}$ if there exists a small subset $\wt{\sigma} \subset \mathbb{Z}^{\oplus d}$ which maps bijectively onto $\sigma$ under the canonical surjection $\mathbb{Z}^{\oplus d} \twoheadrightarrow A_0$. For $i \ge 0$, we let $A_i$ denote the set of the subsets $\sigma \subset A_0$ with cardinality $i+1$ which have a small lift to $\mathbb{Z}^{\oplus d}$. It is clear that the pair $(A_0, \coprod_{i\ge 0} A_i)$ forms a strict simplicial complex and the collection $A_\bullet =(A_i)_{i \ge 0}$ is the simplicial complex associated to the strict simplicial complex $(A_0, \coprod_{i\ge 0} A_i)$. We note that $A_i$ is an empty set for $i \ge d$, since by definition there is no small subset of $\mathbb{Z}^{\oplus d}$ with cardinality larger than $d$. \subsubsection{} \label{sec:521} Let $v_1, \dots, v_d$ be a basis of $V =K^{\oplus d}$. We define a map $\iota_{v_1,\ldots,v_d} : A_\bullet \to \mathcal{B}T_\bullet$ of simplicial complexes. Let $\wt{\iota}_{v_1,\ldots,v_d}: \mathbb{Z}^{\oplus d} \to \wt{\mathcal{B}T}_0$ denote the map which sends the element $(n_1,\ldots,n_d) \in \mathbb{Z}^{\oplus d}$ to the $\mathcal{O}$-lattice $\mathcal{O}\varpi^{n_1} v_1 \oplus \mathcal{O} \varpi^{n_2} v_2 \oplus \dots \oplus \mathcal{O} \varpi^{n_d} v_d$. Let $i \ge 0$ be an integer and let $\sigma \in A_i$. Take a small subset $\wt{\sigma} \subset \mathbb{Z}^{\oplus d}$ with cardinality $i+1$ which maps bijectively onto $\sigma$ under the surjection $\mathbb{Z}^{\oplus d} \to \mathbb{Z}^{\oplus d}/ \mathbb{Z} (1,\ldots,1) = A_0$. By definition the set $\wt{\sigma}$ is of the form $\wt{\sigma} = \{x_0, \ldots, x_i\}$ where $x_0 ,\ldots, x_i \in \mathbb{Z}^{\oplus d}$ satisfy $x_0 \lneq \cdots \lneq x_i \lneq x_{i+1}$ where we have set $x_{i+1} = x_0 + (1,\ldots,1)$. For each integer $j\in \mathbb{Z}$ we write $j$ in the form $j=m(i+1) + r$ with $m \in \mathbb{Z}$ and $r \in \{0,\ldots, i\}$, and set $x_j = x_r + m(1,\ldots,1)$ and $L_j = \wt{\iota}_{v_1,\ldots,v_d}(x_j)$. The sequence $(L_j)_{j \in \mathbb{Z}}$ of $\mathcal{O}$-lattices gives an element $\wt{\iota}_{v_1,\ldots,v_d,i}(\wt{\sigma})$ in $\wt{\mathcal{B}T}_i$. We denote by $\iota_{v_1,\ldots,v_d,i}(\sigma)$ the class of $\wt{\iota}_{v_1,\ldots,v_d,i}(\wt{\sigma})$ in $\mathcal{B}T_i$. \begin{lem} \label{lem:iota} The class $\iota_{v_1,\ldots,v_d,i}(\sigma)$ does not depend on the choice of a small lift~$\wt{\sigma}$. \end{lem} \begin{proof} The inverse image of $\sigma$ under the canonical surjection $\mathbb{Z}^{\oplus d} \to \mathbb{Z}^{\oplus d}/\mathbb{Z} (1,\ldots,1)$ is equal to $\{ x_j \ |\ j \in \mathbb{Z}\}$. Since $x_j \lneq x_{j'}$ for $j < j'$ and $x_{j+i+1} = x_j + (1,\ldots,1)$, any small subset $\wt{\sigma}'$ of $\mathbb{Z}^{\oplus d}$ with cardinality $i+1$ which maps bijectively onto $\sigma$ is of the form $\wt{\sigma}' = \{x_{l}, x_{l +1}, \ldots, x_{l +i} \}$ for some $l \in \mathbb{Z}$. The element $\wt{\iota}_{v_1,\ldots,v_d,i}(\wt{\sigma}')$ is the sequence $(L'_j)_{j\in \mathbb{Z}}$, where $L'_j = L_{j+l}$. Hence the two elements $\wt{\iota}_{v_1,\ldots,v_d,i}(\wt{\sigma})$ and $\wt{\iota}_{v_1,\ldots,v_d,i}(\wt{\sigma}')$ give the same element in $\mathcal{B}T_i$. \end{proof} It is easy to check that the map $\iota_{v_1,\ldots,v_d,i} : A_i \to \mathcal{B}T_i$ is injective for every $i \ge 0$ and that the collection of the maps $\iota_{v_1,\ldots,v_d,i}$ forms a map $\iota_{v_1,\ldots,v_d}:A_\bullet \to \mathcal{B}T_\bullet$ of simplicial complexes.
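As an illustration of the construction (this example is ours), consider the case $d=2$. Then $A_0 = \mathbb{Z}^{\oplus 2}/\mathbb{Z}(1,1)$ is identified with $\mathbb{Z}$ via $(n_1,n_2)\mapsto n_1-n_2$, a subset of $A_0$ has a small lift if and only if it is of the form $\{m\}$ or $\{m,m+1\}$ for some $m\in\mathbb{Z}$, and the simplicial complex $A_\bullet$ is an infinite path. For a basis $v_1,v_2$ of $K^{\oplus 2}$, the map $\iota_{v_1,v_2}$ sends the vertex corresponding to the class of $(n,0)$ to the homothety class of the lattice $\mathcal{O}\varpi^{n}v_1\oplus\mathcal{O}v_2$, so the apartment $A_{v_1,v_2,\bullet}$ defined next is the bi-infinite path in $\mathcal{B}T_\bullet$ formed by these classes.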
We define a simplicial subcomplex $A_{v_1, \dots, v_d , \bullet}$ of $\mathcal{B}T_\bullet$ to be the image of the map $\iota_{v_1,\ldots,v_d}$ so that $A_{v_1, \dots, v_d , i}$ is the image of the map $\iota_{v_1,\ldots,v_d,i}$ for each $i \ge 0$. We call the subcomplex $A_{v_1, \dots, v_d , \bullet}$ of $\mathcal{B}T_\bullet$ the apartment in $\mathcal{B}T_\bullet$ corresponding to the basis $v_1,\ldots,v_d$. Since the map $\iota_{v_1,\ldots,v_d,i}$ is injective for every $i \ge 0$, the map $\iota_{v_1,\ldots,v_d}$ induces an isomorphism $A_\bullet \xto{\cong} A_{v_1, \dots, v_d , \bullet}$ of simplicial complexes. \subsubsection{} \label{sec:fundamental class} We introduce a special element $\beta$ in the group $H^\mathrm{BM}_{d-1}(A_\bullet,\mathbb{Z})$, which is an analogue of the fundamental class. Let $\sigma \in A_{d-1}$ and take a small lift $\wt{\sigma} \subset \mathbb{Z}^{\oplus d}$ of $\sigma$. By definition the set $\wt{\sigma}$ is of the form $\wt{\sigma} = \{ x_1, \ldots, x_d\}$ with $x_0 \lneq x_1 \lneq \cdots \lneq x_d$ where we have set $x_0 = x_d -(1,\ldots,1)$. It follows from this property that for each integer $i$ with $1 \le i \le d$ there exists a unique integer $w(i)$ with $1 \le w(i) \le d$ such that the $w(i)$-th coordinate of $x_i - x_{i-1}$ is equal to $1$ and the other coordinates of $x_i - x_{i-1}$ are equal to zero. Since we have $\sum_{i=1}^d (x_i - x_{i-1}) = x_d -x_0 = (1,\ldots,1)$, the map $w:\{1,\ldots,d\} \to \{1,\ldots,d\}$ is injective. Hence it defines an element $w$ in the symmetric group $S_d$. The map $\{1,\ldots,d\} \to A_0 = \mathbb{Z}^{\oplus d} / \mathbb{Z}(1,\ldots,1)$ which sends $i$ to the class of $x_{w^{-1}(i)}$ in $A_0$ gives an element $[\wt{\sigma}]$ in $T(\sigma)$. \begin{lem} The element $[\wt{\sigma}] \in T(\sigma)$ does not depend on the choice of a small lift $\wt{\sigma}$. \end{lem} \begin{proof} For each integer $j\in \mathbb{Z}$ we write $j$ in the form $j=md + r$ with $m \in \mathbb{Z}$ and $r \in \{0,\ldots, d-1\}$ and set $x_j = x_r + m(1,\ldots,1)$. As we have mentioned in the proof of Lemma~\ref{lem:iota}, the inverse image of $\sigma$ under the canonical surjection $\mathbb{Z}^{\oplus d} \to \mathbb{Z}^{\oplus d}/\mathbb{Z} (1,\ldots,1)$ is equal to $\{ x_j \ |\ j \in \mathbb{Z}\}$ and any small lift $\wt{\sigma}'$ of $\sigma$ to $\mathbb{Z}^{\oplus d}$ is of the form $\wt{\sigma}' = \{x_{l}, x_{l +1}, \ldots, x_{l +d-1} \}$ for some $l \in \mathbb{Z}$. For each $i \in \{1,\ldots,d\}$, the unique integer $j \in \{l,l+1,\ldots, l+d-1 \}$ such that the $i$-th coordinate of $x_j - x_{j-1}$ is equal to $1$ and the other coordinates of $x_j - x_{j-1}$ are equal to zero is congruent to $w^{-1}(i)$ modulo $d$. Hence the class of $x_j$ in $A_0$ does not depend on the choice of a small lift $\wt{\sigma}'$. This proves the claim. \end{proof} We denote by $[\sigma]$ the class of $[\wt{\sigma}]$ in $O(\sigma)$. We let $\wt{\beta}$ denote the element $\wt{\beta} = (\beta_{\nu})_{\nu \in A_{d-1}'}$ in $\prod_{\nu \in A_{d-1}'} \mathbb{Z}$ where $\beta_\nu =1$ if $\nu = [\sigma]$ for some $\sigma \in A_{d-1}$ and $\beta_\nu=0$ otherwise. We denote by $\beta$ the class of $\wt{\beta}$ in $(\prod_{\nu \in A_{d-1}'} \mathbb{Z})_{\{\pm 1\}}$. \begin{prop} The element $\beta \in (\prod_{\nu \in A_{d-1}'} \mathbb{Z})_{\{\pm 1\}}$ is a $(d-1)$-cycle in the chain complex which computes the Borel-Moore homology of $A_\bullet$.
\end{prop} \begin{proof} The assertion is clear for $d=1$ since the $(d-2)$-nd component of the complex is zero. Suppose that $d \ge 2$. Let $\tau$ be an element in $A_{d-2}$. Take a small lift $\wt{\tau} \subset \mathbb{Z}^{\oplus d}$ of $\tau$. By definition the set $\wt{\tau}$ is of the form $\wt{\tau} = \{ x_1, \ldots, x_{d-1}\}$ with $x_0 \lneq x_1 \lneq \cdots \lneq x_{d-1}$, where we have set $x_0 = x_{d-1} -(1,\ldots,1)$. There is a unique $i \in \{1,\ldots,d-1\}$ such that $x_{i} - x_{i -1}$ has two non-zero coordinates. There are exactly two elements in $\mathbb{Z}^{\oplus d}$ which are larger than $x_{i-1}$ and smaller than $x_i$. We denote these two elements by $y_1$ and $y_2$. We set $\wt{\sigma}_j = \wt{\tau} \cup \{y_j\}$ for $j=1,2$. The sets $\wt{\sigma}_1$, $\wt{\sigma}_2$ are small subsets of $\mathbb{Z}^{\oplus d}$ of cardinality $d$ and their images $\sigma_1$, $\sigma_2$ under the surjection $\mathbb{Z}^{\oplus d} \twoheadrightarrow \mathbb{Z}^{\oplus d}/\mathbb{Z}(1,\ldots,1)$ are elements in $A_{d-1}$. For $j=1,2$, let $w_j$ denote the element $w$ in the symmetric group $S_d$ which appeared in the first paragraph of Section~\ref{sec:fundamental class} for $\sigma = \sigma_j$. It follows from the definition of $\sigma_j$ that we have $w_1 = w_2 (i,i+1)$, where $(i,i+1)$ denotes the transposition of $i$ and $i+1$. It is easily checked that the set of the elements in $A_{d-1}$ which have $\tau$ as a face is equal to $\{\sigma_1,\sigma_2\}$. Since we have $\mathrm{sgn}(w_1) = - \mathrm{sgn}(w_2)$, it follows that the component in $(\prod_{\nu \in O(\tau)} \mathbb{Z})_{\{\pm 1\}}$ of the image of $\beta$ under the boundary map $(\prod_{\nu \in A_{d-1}'} \mathbb{Z})_{\{\pm 1\}} \to (\prod_{\nu' \in A_{d-2}'} \mathbb{Z})_{\{\pm 1\}}$ is equal to zero. This proves the claim. \end{proof} \section{Arithmetic groups and modular symbols} \subsection{Arithmetic groups}\label{sec:71} \subsubsection{An arithmetic group} \label{sec:4.1.1} We give here the definition of our main object of study, an arithmetic group $\Gamma$. Let us give the setup. We let $F$ denote a global field of positive characteristic. Let $C$ be a proper smooth curve over a finite field whose function field is $F$. Let $\infty$ be a place of $F$ and let $K=F_\infty$ denote the local field at $\infty$. We let $A=H^0(C \setminus \{\infty\}, \mathcal{O}_C)$. Here we identify closed points of $C$ with places of $F$. We write $\wh{A}=\varprojlim_I A/I$, where the limit is taken over the nonzero ideals of $A$. We let $\mathbb{A}^\infty=\wh{A} \otimes_A F$ denote the ring of finite adeles. Let $\mathbb{K}^\infty \subset \mathrm{GL}_d(\mathbb{A}^\infty)$ be a compact open subgroup. We set $\Gamma=\mathrm{GL}_d(F) \cap \mathbb{K}^\infty$ and regard it as a subgroup of $\mathrm{GL}_d(K)$. We refer to a group of this form, for some $\mathbb{K}^\infty$, as an arithmetic (sub)group of $\mathrm{GL}_d(K)$ (contained in $\mathrm{GL}_d(F)$). We give a remark. Let $\Gamma$ be an arithmetic group. Then $\Gamma \cap \mathrm{SL}_d(F)=\Gamma \cap \mathrm{SL}_d(K)$ is a subgroup of $\Gamma$ of finite index, and is an $S$-arithmetic group of $\mathrm{SL}_d$ over $F$ for $S=\{\infty\}$ dealt with in the paper of Harder \cite{Harder}. \subsubsection{} \label{sec:def Gamma} Let $\Gamma \subset \mathrm{GL}_d(K)$ be a subgroup. We consider the following Conditions (1) to (5) on $\Gamma$.
\begin{enumerate} \item $\Gamma \subset \mathrm{GL}_d(K)$ is a discrete subgroup, \item $\{\det(\gamma) \,|\, \gamma \in \Gamma \} \subset O_\infty^\times$ where $O_\infty$ is the ring of integers of $K$, \item $\Gamma \cap Z(\mathrm{GL}_d(K))$ is finite. \end{enumerate} Let $A_\bullet=A_{v_1,\dots,v_d,\bullet}$ denote the apartment corresponding to a basis $v_1,\dots, v_d\in K^{\oplus d}$ (defined in Section~\ref{sec:521}). \begin{enumerate} \setcounter{enumi}{3} \item For any apartment $A_\bullet=A_{v_1,\dots, v_d,\bullet}$ with $v_1, \dots, v_d \in F^{\oplus d}$, the composition $A_\bullet \hookrightarrow \mathcal{B}\mathcal{T}_\bullet \to \Gamma \backslash \mathcal{B}\mathcal{T}_\bullet$ is quasi-finite, that is, the inverse image of any simplex by this map is a finite set. \item The cohomology group $H^{d-1}(\Gamma, \mathbb{Q})$ is a finite dimensional $\mathbb{Q}$-vector space. \end{enumerate} The condition (1) will be used in Lemma \ref{AF}. The condition (2) implies that each element in the isotropy group of a simplex fixes the vertices of the simplex. Under the condition (1), the condition (3) implies that the stabilizer of a simplex is finite. This implies that the $\mathbb{Q}$-coefficient group homology of $\Gamma$ and the homology of $\Gamma \backslash |\mathcal{B}\mathcal{T}_\bullet|$ are isomorphic. The condition (4) will be used to define a class in the Borel-Moore homology of $\Gamma\backslash\mathcal{B}\mathcal{T}_\bullet$ starting from an apartment (Section \ref{sec:def modular symbol}). The condition (5) will be used in the proof of Lemma~\ref{lem:5.6}. Let us show that all five conditions of Section~\ref{sec:def Gamma} are satisfied when $\Gamma$ is an arithmetic subgroup. The condition (1) holds trivially. We note that there exists an element $g \in \mathrm{GL}_d(\mathbb{A}^\infty)$ such that $g \mathbb{K}^\infty g^{-1} \subset \mathrm{GL}_d(\wh{A})$. Since $\det(\gamma) \in F^\times \cap \wh{A}^\times \subset O_\infty^\times$ for $\gamma \in \Gamma$, (2) holds. Because $F^\times \cap \mathrm{GL}_d(\wh{A})$ is finite, (3) holds. \begin{lem} \label{lem:arithmetic} Let $\Gamma$ be an arithmetic subgroup. Then (4) holds. \end{lem} \begin{proof} We show that the inverse image of each simplex of $\Gamma\backslash \mathcal{B}\mathcal{T}_\bullet$ under the map in (4) is finite. The set of simplices of $\mathcal{B}\mathcal{T}_\bullet$ of a fixed dimension is identified (see Section~\ref{sec:6.1.2} for the identification) with the coset space $\mathrm{GL}_d(K)/\wt{\mathbb{K}}_\infty$ for an open subgroup $\wt{\mathbb{K}}_\infty \subset \mathrm{GL}_d(K)$ which contains $K^\times \mathbb{K}_\infty$ as a subgroup of finite index for some compact open subgroup $\mathbb{K}_\infty \subset \mathrm{GL}_d(K)$. Let $T \subset \mathrm{GL}_d$ denote the diagonal maximal torus. The set of simplices of $A_\bullet$ of a fixed dimension is identified with the image of the map \[ \coprod_{w \in S_d} gwT(K) \to \mathrm{GL}_d(K)/\wt{\mathbb{K}}_\infty \] for some $g \in \mathrm{GL}_d(F)$. Since $S_d$ is a finite group, it then suffices to show that for any $w \in S_d$, the map $$ \mathrm{Image}[gwT(K) \to \mathrm{GL}_d(K)/K^\times \mathbb{K}_\infty ] \to \Gamma \backslash \mathrm{GL}_d(K) /K^\times \mathbb{K}_\infty $$ is quasi-finite.
The inverse image under the last map of the image of $gwt \in gwT(K)$ is isomorphic to the set \[ \begin{array}{ll} \{\gamma \in\Gamma\,|\, \gamma gwt \in gwT(K) K^\times \mathbb{K}_\infty\} &=\Gamma \cap gwT(K) K^\times \mathbb{K}_\infty (gwt)^{-1}\\ &=\Gamma \cap (gw)T(K) t \mathbb{K}_\infty t^{-1} (gw)^{-1}. \end{array} \] Hence, if we let $g'=gw$ and $\mathbb{K}'_\infty =t\mathbb{K}_\infty t^{-1}$, this set equals \[ \begin{array}{ll} \Gamma\cap g'T(K) \mathbb{K}'_\infty g'^{-1} & =\mathrm{GL}_d(F) \cap (\mathbb{K}^\infty \times g'T(K) \mathbb{K}'_\infty g'^{-1}) \\ &= g'\left(\mathrm{GL}_d(F) \cap (g'^{-1} \mathbb{K}^\infty g' \times T(K) \mathbb{K}'_\infty)\right) g'^{-1}. \end{array} \] The finiteness is proved in the following lemma. \end{proof} \begin{lem} For any compact open subgroup $\mathbb{K} \subset \mathrm{GL}_d(\mathbb{A})$, the set $\mathrm{GL}_d(F) \cap T(K) \mathbb{K}$ is finite. \end{lem} \begin{proof} Let $U=T(K) \cap \mathbb{K}$. Then $U \subset T(O_\infty)$ and $U$ is of finite index in $T(O_\infty)$. Note that there exist a non-zero ideal $I\subset A$ and an integer $N$ such that $\mathbb{K} \subset I^{-1}\mathrm{Mat}_d(\wh{A})\times \varpi_\infty^{-N}\mathrm{Mat}_d(O_\infty)$ where $\varpi_\infty$ is a uniformizer in $O_\infty$. Let $\alpha:T(K)/U \to T(K)/T(O_\infty) \cong \mathbb{Z}^{\oplus d}$ be the (quasi-finite) map induced by the inclusion $U \subset T(O_\infty)$. For $h \in T(K)$, we write $(h_1,\dots, h_d)=\alpha(h)$. Then for $i=1,\ldots,d$, the $i$-th row of $h \mathbb{K}$ is contained in $(I^{-1}\wh{A} \times \varpi_\infty^{-N}\varpi_\infty^{h_i}O_\infty)^{\oplus d}$. Hence, for sufficiently large $h_i$, the intersection $h \mathbb{K} \cap \mathrm{GL}_d(F)$ is empty. We then have, for sufficiently large $N'$, \begin{equation}\label{lem q-finite} \mathrm{GL}_d(F)\cap T(K) \mathbb{K} =\coprod_{h\in T(K)/U, \atop h_1, \ldots, h_d \le N'} \mathrm{GL}_d(F) \cap h\mathbb{K}. \end{equation} The adelic norm of the determinant of an element in $\mathrm{GL}_d(F)$ is 1, while that of an element in $h \mathbb{K}$ is $|\det h\,|_\infty = q_\infty^{-\sum_{i=1}^d h_i}$. So (\ref{lem q-finite}) equals \[\displaystyle \coprod_{h\in T(K)/U, h_i \le N', \sum h_i=0} \mathrm{GL}_d(F) \cap h \mathbb{K}. \] The index set of the disjoint union above is finite since $\alpha$ is quasi-finite, and $\mathrm{GL}_d(F) \cap h \mathbb{K}$ is finite since $\mathrm{GL}_d(F)$ is discrete and $h \mathbb{K}$ is compact. The claim follows. \end{proof} \begin{lem} \label{lem:Harder} Let $\Gamma$ be an arithmetic subgroup. Then (5) holds. \end{lem} \begin{proof} This follows from \cite[p.136, Satz 2]{Harder}. \end{proof} \subsection{Arithmetic quotient of the Bruhat-Tits building} \label{sec:explicit beta2} Let us define the simplicial complex $\Gamma \backslash \mathcal{B}T_\bullet$ for an arithmetic subgroup $\Gamma$ in this section. We need a lemma. \begin{lem} \label{lem:stabilizer} Let $i \ge 0$ be an integer, let $\sigma \in \mathcal{B}T_i$ and let $v,v' \in V(\sigma)$ be two vertices with $v \neq v'$. Suppose that an element $g \in \mathrm{GL}_d(K)$ satisfies $|\det\, g|_\infty =1$. Then we have $gv \neq v'$. \end{lem} \begin{proof} Let $\wt{\sigma}$ be an element $(L_j)_{j \in \mathbb{Z}}$ in $\wt{\mathcal{B}T}_i$ such that the class of $\wt{\sigma}$ in $\mathcal{B}T_i$ is equal to $\sigma$. There exist two integers $j,j' \in \mathbb{Z}$ such that $v$, $v'$ is the class of $L_j$, $L_{j'}$, respectively. Assume that $g v =v'$.
Then there exists an integer $k \in \mathbb{Z}$ such that $L_j g^{-1} = \varpi_\infty^{k} L_{j'} = L_{j' + (i+1) k}$. Let us fix a Haar measure $d\mu$ of the $K$-vector space $V_\infty=K^{\oplus d}$. As is well-known, the push-forward of $d\mu$ with respect to the automorphism $V_\infty \to V_\infty$ given by the right multiplication by $\gamma$ is equal to $|\det\, \gamma|_\infty^{-1} d\mu$ for every $\gamma \in \mathrm{GL}_d(K)$. Since $|\det\, g|_\infty =1$, it follows from the equality $L_j g^{-1} = L_{j'+(i+1)k}$ that the two $\mathcal{O}_\infty$-lattices $L_j$ and $L_{j'+(i+1)k}$ have the same volume with respect to $d\mu$. Hence we have $j=j'+(i+1)k$, which implies $L_j = \varpi_\infty^k L_{j'}$. It follows that the class of $L_j$ in $\mathcal{B}T_0$ is equal to the class of $L_{j'}$, which contradicts the assumption $v \neq v'$. \end{proof} Let $\Gamma \subset \mathrm{GL}_d(K)$ be an arithmetic subgroup. It follows from Lemma~\ref{lem:stabilizer} (using Condition (2) of Section~\ref{sec:def Gamma}) that for each $i \ge 0$ and for each $\sigma \in \mathcal{B}T_i$, the image of $V(\sigma)$ under the surjection $\mathcal{B}T_0 \twoheadrightarrow \Gamma \backslash \mathcal{B}T_0$ is a subset of $\Gamma \backslash \mathcal{B}T_0$ with cardinality $i+1$. We denote this subset by $V(\mathrm{cl}(\sigma))$, since it is easily checked that it depends only on the class $\mathrm{cl}(\sigma)$ of $\sigma$ in $\Gamma \backslash \mathcal{B}T_i$. Thus the collection $\Gamma \backslash \mathcal{B}T_\bullet =(\Gamma \backslash \mathcal{B}T_i)_{i \ge 0}$ has a canonical structure of a simplicial complex such that the collection of the canonical surjections $\mathcal{B}T_i \twoheadrightarrow \Gamma \backslash \mathcal{B}T_i$ forms a map of simplicial complexes $\mathcal{B}T_\bullet \twoheadrightarrow \Gamma \backslash \mathcal{B}T_\bullet$. \subsection{Modular symbols} \label{sec:def modular symbol} Let $v_1,\dots, v_d$ be an $F$-basis (that is, a basis of $F^{\oplus d}$ regarded as a basis of $K^{\oplus d}$). We consider the composite \begin{equation} \label{quasifinite} A_\bullet \xto{\iota_{v_1,\ldots,v_d}} \mathcal{B}T_\bullet \to \Gamma \backslash \mathcal{B}T_\bullet. \end{equation} Condition (4) implies that the map (\ref{quasifinite}) is a finite map of simplicial complexes in the sense of Section~\ref{sec:quasifinite}. It follows that the map (\ref{quasifinite}) induces a homomorphism $$ H^\mathrm{BM}_{d-1}(A_\bullet, \mathbb{Z}) \to H^\mathrm{BM}_{d-1}(\Gamma \backslash \mathcal{B}T_\bullet, \mathbb{Z}). $$ We let $\beta_{v_1,\ldots,v_d} \in H^\mathrm{BM}_{d-1}(\Gamma \backslash \mathcal{B}T_\bullet, \mathbb{Z})$ denote the image under this homomorphism of the element $\beta \in H^\mathrm{BM}_{d-1}(A_\bullet, \mathbb{Z})$ introduced in Section~\ref{sec:fundamental class}. We call this the class of the apartment $A_{v_1,\dots, v_d,\bullet}$. \subsection{Main Theorem} We are ready to state our theorem. \begin{thm}\label{lem:apartment} Let $\Gamma \subset \mathrm{GL}_d(K)$ be an arithmetic subgroup. The image of the canonical map (see Section~\ref{sec:def homology} for the definition) \[ H_{d-1}(\Gamma \backslash \mathcal{B}T_\bullet, \mathbb{Q}) \to H^\mathrm{BM}_{d-1}(\Gamma \backslash \mathcal{B}T_\bullet, \mathbb{Q}) \] is contained in the sub $\mathbb{Q}$-vector space generated by the classes of apartments associated to $F$-bases. \end{thm} \section{Proof of Theorem~\ref{lem:apartment}}\label{Modular Symbols} The purpose of this section is to prove Theorem~\ref{lem:apartment}.
It gives the description of the homology of certain arithmetic groups in terms of the subspace generated by the classes of modular symbols inside the Borel-Moore homology of the quotient of the Bruhat-Tits building. The treatment of the modular symbols differs from the archimedean case (see \cite{AR}) in that the group does not act freely on the compactification and in that an apartment is contractible as a subspace of the building. To compare, we use the equivariant homology (Section \ref{Equivariant homology}) of Werner's compactification (Section \ref{Werner's compactification}) as an intermediary object. \subsection{Equivariant homology}\label{Equivariant homology} Let $\Gamma \subset \mathrm{GL}_d(K)$ be an arithmetic subgroup. We define the simplicial set (not a simplicial complex) $E\Gamma_\bullet$ as follows. We define $E\Gamma_n=\Gamma^{n+1}$ to be the $(n+1)$-fold direct product of $\Gamma$ for $n\ge 0$. The set $\Gamma^{n+1}$ is naturally regarded as the set of maps of sets $\mathrm{Map}(\{0,\dots, n\}, \Gamma)$ and from this one obtains naturally the structure of a simplicial set. We let $|E\Gamma_\bullet|$ denote the geometric realization of $E\Gamma_\bullet$. Then $|E\Gamma_\bullet|$ is contractible. We let $\Gamma$ act diagonally on each $E\Gamma_n$ $(n\ge 0)$. The induced action on $|E\Gamma_\bullet|$ is free. Let $M$ be a topological space on which $\Gamma$ acts. The diagonal action of $\Gamma$ on $M\times |E\Gamma_\bullet|$ is free. We let $H_*^\Gamma(M, B)= H_*(\Gamma\backslash(M \times |E\Gamma_\bullet|), B)$ where $B$ is a coefficient ring, and call it the equivariant homology of $M$ with coefficients in $B$. We also use the relative version, and define equivariant cohomology in a similar manner. \subsection{Werner's compactification}\label{Werner's compactification} In this section, we briefly recall the results of Werner (\cite{We2}, \cite{We1}). \subsubsection{Semi-norms} Let $W$ be a $K$-vector space. We call a function $\gamma:W \to \mathbb{R}_{\ge 0}$ a semi-norm if the following conditions are satisfied: \begin{enumerate} \item $\gamma(\lambda w)=|\lambda| \gamma(w)$ for $\lambda \in K, w\in W$, \item $\gamma(w_1+w_2) \le \sup\{\gamma(w_1), \gamma(w_2)\}$ for $w_1,w_2 \in W$, \item There exists an element $w \in W$ satisfying $\gamma(w) \neq 0$. \end{enumerate} We say that two semi-norms are equivalent if and only if one is a non-zero constant multiple of the other. Let $V^* = \mathrm{Hom}_{K}(V,K)$ be the dual vector space of $V$. We endow the set $S'$ of semi-norms on $V^*$ with the topology of pointwise convergence. We give the set $S$ of equivalence classes of semi-norms the quotient topology. \subsubsection{} We write $\overline{|\cB\cT_\bullet|}$ for the compactification of $|\cB\cT_\bullet|$ of Werner in \cite{We2} (which uses lattices of smaller rank), and let $\partial\overline{|\cB\cT_\bullet|}=\overline{|\cB\cT_\bullet|}\setminus |\cB\cT_\bullet|$. The topological space $\overline{|\cB\cT_\bullet|}$ is compact and contractible (\cite[p.519, Theorem 4.1]{We2}). The action of $\mathrm{GL}(V)$ extends to $\overline{|\cB\cT_\bullet|}$ (\cite[Theorem 4.2]{We1}). By a theorem of Goldman-Iwahori (see \cite[Theorem 2.2]{De-Hu}), the set of equivalence classes of norms on $V^*$ is isomorphic to the set of points of the geometric realization of the Bruhat-Tits building for $\mathrm{PGL}(V^*)$.
In the paper of Werner \cite[Theorem 5.1]{We1}, this isomorphism is extended to a canonical homeomorphism $S \cong \overline{|\mathcal{B}\mathcal{T}_{V^*,\bullet}|}'$ where $\overline{|\mathcal{B}\mathcal{T}_{V^*,\bullet}|}'$ is the compactification of $|\mathcal{B}\mathcal{T}_{V^*,\bullet}|$ using semi-norms. We use the homeomorphism $\overline{|\mathcal{B}\mathcal{T}_{V^*,\bullet}|}' \cong \overline{|\mathcal{B}\mathcal{T}_{V,\bullet}|}$ of Werner (\cite{We1} p.518), and obtain a homeomorphism $S \cong \overline{|\mathcal{B}\mathcal{T}_{V,\bullet}|}$. \subsection{} Let us give an outline of the proof of Theorem~\ref{lem:apartment} in this section. We construct the following commutative diagram in Sections~\ref{sec:4 and 5} and~\ref{sec:2 and 3}: \begin{equation} \label{eqn:diagram0} \xymatrix{ H_{d-1}(\Gamma \backslash \cB\cT_\bullet,\mathbb{Q}) \ar[r]^{(1)} & H_{d-1}^\mathrm{BM}(\Gamma \backslash \cB\cT_\bullet,\mathbb{Q}) \\ H_{d-1}^\Gamma(\overline{|\mathcal{BT}_\bullet|},\mathbb{Q}) \ar[u]^{(2)}_\cong \ar[r]^{(5)\phantom{a;lskj}} & H_{d-1}^\Gamma (\overline{|\mathcal{BT}_\bullet|}, \partial \overline{|\mathcal{BT}_\bullet|}; \mathbb{Q}) \ar[u]^{(3)} \\ H_{d-1}(\Gamma,\mathbb{Q}) \ar[u]^{(4)}_{\cong\phantom{;lkasdjf;lk}} } \end{equation} Here the map (1) is the map that appeared in the statement of Theorem~\ref{lem:apartment}. The map (5) is the pushforward map of homology. The other maps will be constructed later. It is easy to see that the groups $H_{d-1}(\Gamma,\mathbb{Q})$ and $H_{d-1}(\Gamma\backslash(\overline{|\mathcal{BT}_\bullet|} \times |E\Gamma_\bullet|),\mathbb{Q})$ are isomorphic since $\overline{|\mathcal{BT}_\bullet|} \times |E \Gamma_\bullet|$ is contractible and $\Gamma$ acts freely. However, the key here is to construct (4) explicitly at the level of chain complexes in the direction indicated by the arrow above, so that we are able to compute explicitly the image of the composite $(5)\circ (4)$. The construction of the square is elementary, but there is one problem, caused by the fact that the isomorphism between the Borel-Moore homology as defined in this paper and the Borel-Moore homology of a topological space is, in general, not found in the literature. We resort to the well-known cases of the isomorphisms for homology and for cohomology to circumvent this problem. In order to do so, we use Condition (5) on the arithmetic group (Section~\ref{sec:def Gamma}) and take the dual twice. \subsection{On the maps (4) and (5)} \label{sec:4 and 5} We construct the map (4) and show that it is an isomorphism. This is done very explicitly, so that we are able to compute the image of the composite map $(5)\circ(4)$ (Lemma~\ref{AF}). \subsubsection{} We let $C_\bullet$ denote the complex of $\mathbb{Z}[\Gamma]$-modules defined by $C_n=\mathbb{Z}[\Gamma^{n+1}] \,\,(n\ge 0)$ and the usual boundary homomorphisms. It is a free resolution of the trivial $\mathbb{Z}[\Gamma]$-module $\mathbb{Z}$. The homology group of the $\Gamma$-coinvariants $C_{\Gamma,\bullet}$ of $C_\bullet$ is the group homology $H_*(\Gamma,\mathbb{Z})$. Let $D_n=\mathbb{Z}[\mathrm{Map}_{\mathrm{cont}}(\Delta_n, \overline{|\mathcal{B}\mathcal{T}_\bullet|}\times |E\Gamma_\bullet|)]$. The usual boundary map turns $D_\bullet$ into a complex of $\mathbb{Z}[\Gamma]$-modules with $\Gamma$ acting on $\overline{|\cB\cT_\bullet|}\times |E\Gamma_\bullet|$ diagonally. It is a free resolution of the trivial $\mathbb{Z}[\Gamma]$-module $\mathbb{Z}$, since the action of $\Gamma$ on $\overline{|\cB\cT_\bullet|}\times |E\Gamma_\bullet|$ is free and this space is contractible.
The complex of $\Gamma$-coinvariants, denoted $D_{\Gamma,\bullet}$, is canonically isomorphic in degree $n$ to the module \[ \mathbb{Z}[\mathrm{Map}_{\mathrm{cont}}(\Delta_n, \Gamma \backslash (\overline{|\cB\cT_\bullet|}\times |E\Gamma_\bullet|))]. \] Hence the homology groups of the complex $D_{\Gamma,\bullet}$ are $H_*^\Gamma(\overline{|\mathcal{B}\mathcal{T}_\bullet|},\mathbb{Z})$. Let $r \ge 0$ be an integer. Let $\Delta_r=\{(t_0,\dots, t_r) \in \mathbb{R}^{r+1}| \sum t_i=1, 0\le t_i \le 1\}$ be the (geometric) $r$-simplex. Given $v_0,\dots, v_r \in V\setminus \{0\}$, we construct a map $s(v_0,\dots, v_r):\Delta_r \to S \cong \overline{|\mathcal{B}\mathcal{T}_\bullet|}$ as follows. For $(t_0,\dots, t_r)\in \Delta_r$ and $f \in V^*$, we set \[ s(v_0,\dots, v_r)(t_0,\dots, t_r)(f)= \sup_{0 \le i \le r} |f(v_i)| q_\infty^{-1/t_i}. \] Here, we set $q_\infty^{-1/t_i}=0$ if $t_i=0$. It is easy to check that $s(v_0,\dots, v_r)(t_0,\dots, t_r)$ is a semi-norm on $V^*$ for each $(t_0,\dots, t_r)\in \Delta_r$. \begin{lem} \label{lem:cont s} The map $s(v_0,\dots, v_r)$ is continuous. \end{lem} \begin{proof} This is immediate from the definition of the topology on $S'$ and on $S$, since for each $f \in V^*$, the map $\Delta_r \to \mathbb{R}_{\ge 0}$, $(t_0,\dots, t_r) \mapsto s(v_0,\dots, v_r)(t_0,\dots, t_r)(f)$, is continuous. \end{proof} \subsubsection{} Let $r \ge 0$ be an integer. Given $v\in V\setminus\{0\}$ and $[g_0,\dots,g_r]\in \mathbb{Z}[\Gamma^{r+1}]=C_r$, we set $\eta_v([g_0,\dots, g_r])= s(g_0 v,\dots,g_r v) \times [g_0,\dots,g_r] : \Delta_r \to \overline{|\mathcal{B}\mathcal{T}_\bullet|} \times |E\Gamma_{\bullet}|$. Here $[g_0,\dots,g_r]$ on the right hand side is regarded as the canonical map $\Delta_r \to |E\Gamma_\bullet|$ associated to the $r$-simplex $(g_0,\dots, g_r)$ of $E\Gamma_\bullet$. Extending this by linearity, we obtain a map of complexes $\eta_v:C_\bullet \to D_\bullet$. \begin{lem} The map $\eta_v$ is a quasi-isomorphism. \end{lem} \begin{proof} We have seen that both $C_\bullet$ and $D_\bullet$ are free $\mathbb{Z}[\Gamma]$-resolutions of the trivial $\mathbb{Z}[\Gamma]$-module $\mathbb{Z}$. So we only need to check in degree 0, that is, the commutativity of the following diagram: \[ \begin{CD} C_0 @>>{\eta_v}> D_0 \\ @VVV @VVV \\ \mathbb{Z} @>>{\mathrm{id}}> \mathbb{Z} \end{CD} \] where the vertical homomorphisms are the augmentations. This is clear. \end{proof} Taking $\Gamma$-coinvariants, we obtain a map of complexes $C_{\Gamma,\bullet} \to D_{\Gamma,\bullet}$. It induces a map of homology $H_*(\Gamma,\mathbb{Z}) \to H_*^\Gamma(\overline{|\mathcal{B}\mathcal{T}_\bullet|},\mathbb{Z})$. This is an isomorphism since both $C_\bullet$ and $D_\bullet$ are free $\mathbb{Z}[\Gamma]$-resolutions of $\mathbb{Z}$. We define the map (4) to be this map tensored by $\mathbb{Q}$. \subsubsection{} For $n\ge 0$, let $$ \wt{D}_{\Gamma,n}= \mathbb{Z}[\mathrm{Map}_{\mathrm{cont}}(\Delta_n,\Gamma \backslash (\overline{|\cB\cT_\bullet|}\times |E\Gamma_\bullet|))] /\mathbb{Z}[\mathrm{Map}_{\mathrm{cont}} (\Delta_n,\Gamma \backslash (\partial \overline{|\cB\cT_\bullet|}\times |E\Gamma_\bullet|))]. $$ Then $H_*^\Gamma(\overline{|\mathcal{B}\mathcal{T}_\bullet|},\partial\overline{|\mathcal{B}\mathcal{T}_\bullet|};\mathbb{Z})$ is the homology group of the complex $\wt{D}_{\Gamma,\bullet}$.
The canonical surjection in each degree induces a map of complexes $D_{\Gamma,\bullet}\to \wt{D}_{\Gamma,\bullet}$, and a homomorphism $H_*^\Gamma(\overline{|\mathcal{B}\mathcal{T}_\bullet|},\mathbb{Z}) \to H_*^\Gamma(\overline{|\mathcal{B}\mathcal{T}_\bullet|}, \partial\overline{|\mathcal{B}\mathcal{T}_\bullet|};\mathbb{Z})$. The map (5) of the diagram~\eqref{eqn:diagram0} is this map tensored by $\mathbb{Q}$. Let $v_1,\dots, v_d\in V$ be a basis and let $g_0,\dots, g_{d-1}\in \Gamma$. By construction, the images of the proper faces of $\Delta_{d-1}$ under the continuous map \[ s(v_1,\dots, v_d) \times [g_0,\dots, g_{d-1}]:\Delta_{d-1} \to \overline{|\cB\cT_\bullet|}\times |E\Gamma_\bullet| \] are contained in $\partial\overline{|\cB\cT_\bullet|}\times |E\Gamma_\bullet|$. We let $A_{v_1,\dots,v_d;g_0,\dots,g_{d-1}}$ denote the class of this continuous function in $\wt{D}_{\Gamma,d-1}$, which is a cycle by the remark above, and the resulting class in $H_{d-1}^\Gamma(\overline{|\mathcal{B}\mathcal{T}_\bullet|}, \partial \overline{|\mathcal{B}\mathcal{T}_\bullet|};\mathbb{Z})$. We let $\mathcal{A}_F^\mathrm{rel}$ denote the submodule of $H^\Gamma_{d-1}(\overline{|\mathcal{B}\mathcal{T}_\bullet|}, \partial\overline{|\mathcal{B}\mathcal{T}_\bullet|};\mathbb{Z})$ generated by elements of the form $A_{v_1,\dots, v_d;g_0,\dots, g_{d-1}}$ with $g_i \in \Gamma \,\, (0\le i \le d-1)$ and $v_1,\dots, v_d \in V_F=F^{\oplus d} \subset K^{\oplus d}=V$ an $F$-basis. \begin{lem}\label{AF} The image of \[ H_{d-1}(\Gamma, \mathbb{Q}) \xto{(4)} H_{d-1}^\Gamma(\overline{|\mathcal{B}\mathcal{T}_\bullet|}, \mathbb{Q}) \xto{(5)} H_{d-1}^\Gamma(\overline{|\mathcal{B}\mathcal{T}_\bullet|}, \partial\overline{|\mathcal{B}\mathcal{T}_\bullet|}; \mathbb{Q}) \] is contained in the sub $\mathbb{Q}$-vector space generated by $\mathcal{A}_F^\mathrm{rel}$. \end{lem} \begin{proof} Take $v \in V_F \setminus \{0\} \subset V\setminus \{0\}$. Consider the map of complexes $C_{\Gamma,\bullet} \to D_{\Gamma, \bullet} \to \wt{D}_{\Gamma,\bullet}$, where the first map is induced by $\eta_v$ and the second map is the canonical map. The image of $C_{\Gamma,d-1}$ is generated by the classes of maps of the form \[ s(g_0v,\dots, g_{d-1}v)\times [g_0,\dots,g_{d-1}] \] with $g_0, \dots, g_{d-1} \in \Gamma$. Since $v \in F^{\oplus d}$ and $g_0,\dots, g_{d-1} \in \Gamma \subset \mathrm{GL}_d(F)$ by the condition (1) on $\Gamma$, the vectors $g_0 v, \dots, g_{d-1}v$ are $F$-vectors. If $g_0v,\dots, g_{d-1}v$ do not form a basis, then the element above is zero in $H_{d-1}^\Gamma(\overline{|\cB\cT_\bullet|},\partial\overline{|\cB\cT_\bullet|};\mathbb{Q})$, because by the construction of $s$ the image of the map above is contained in $\Gamma \backslash (\partial\overline{|\cB\cT_\bullet|}\times |E\Gamma_\bullet|)$. If $g_0v,\dots, g_{d-1}v$ do form a basis, then the element above is by definition the class $A_{g_0v,\dots, g_{d-1}v;\,g_0,\dots,g_{d-1}}$, which lies in $\mathcal{A}_F^\mathrm{rel}$. The claim follows. \end{proof} \subsection{The maps (2) and (3)} \label{sec:2 and 3} Given a $\mathbb{Q}$-vector space $A$, let $A^*=\mathrm{Hom}(A, \mathbb{Q})$ denote the dual. In what follows, the coefficient ring is $\mathbb{Q}$ unless otherwise specified. \subsubsection{} Consider the following diagram. \begin{equation} \label{eqn:diagram1} \xymatrix{ H_{d-1}(\Gamma \backslash \cB\cT_\bullet) \ar[d]^{(6)}_\cong \ar[r]^{(1)} & H_{d-1}^\mathrm{BM}(\Gamma \backslash \cB\cT_\bullet) \ar[dd]_\cong^{(7)} \\ H_{d-1}(\Gamma \backslash \cB\cT_\bullet)^{**} \ar[d]_=^{(8)} \\ H^{d-1}(\Gamma \backslash \cB\cT_\bullet)^* \ar[r]^{(9)} & H_c^{d-1}(\Gamma \backslash \cB\cT_\bullet)^* } \end{equation} The map (1) is the canonical map from homology to Borel-Moore homology. The map (6) is the canonical map $A\to A^{**}$ for $A=H_{d-1}(\Gamma \backslash \cB\cT_\bullet)$.
We will see later (Corollary~\ref{cor:5.7}) that (6) is an isomorphism. The map (7) is the isomorphism given by the map in the universal coefficient theorem (see Section~\ref{univ_coeff}). The map (8) is the dual of the fact that cohomology is the dual of homology. The map (9) is the dual of the canonical map from cohomology with compact support to cohomology. It follows from the definitions that the diagram is commutative. \subsubsection{} Consider the diagram \begin{equation} \label{eqn:diagram2} \xymatrix{ H^{d-1}(\Gamma \backslash \cB\cT_\bullet) & H_c^{d-1}(\Gamma \backslash \cB\cT_\bullet) \ar[l]^{(9)'} \ar[d]^{(11)}_\cong \\ & \varinjlim_L H^{d-1}(\Gamma \backslash \cB\cT_\bullet, L) \\ H^{d-1}(\Gamma\backslash|\cB\cT_\bullet|) \ar[uu]^\cong_{(10)} & \varinjlim_L H^{d-1}(\Gamma\backslash|\cB\cT_\bullet|, |L|), \ar[l]^{(13)\phantom{;al;lkj}} \ar[u]_{(12)}^\cong } \end{equation} where $L$ runs over the subsimplicial complexes of $\Gamma \backslash \cB\cT_\bullet$ such that the complement $|\Gamma \backslash \cB\cT_\bullet| \setminus |L|$ is covered by a finite number of simplices. The map (9)' is the forget support map, whose dual is the map (9) in diagram \eqref{eqn:diagram1}. The map (10) is the canonical map from singular cohomology to cellular cohomology (see Section~\ref{sec:cellular}) The map (11) is obtained from the definition. The map (12) at each stage is the canonical map from singular cohomology to cellular cohomology. The map (13) is the limit of the pullback maps. It is easy to check that the diagram is commutative. \begin{lem} The map (11) is an isomorphism. \end{lem} \begin{proof} It suffices to show that, given a finite set $B$ of simplices of $\Gamma \backslash \cB\cT_\bullet$, there exists a subsimplicial complex $L \subset \Gamma \backslash \cB\cT_\bullet$ such that \begin{enumerate} \item the cardinality of the set of simplices not contained in $L$ is finite, that is, the cardinality of $((\Gamma \backslash \cB\cT_\bullet) \setminus L)=\cup_{i \ge 0} ((\Gamma \backslash \cB\cT_\bullet)_i \setminus L_i)$ is finite, and \item $B \subset ((\Gamma \backslash \cB\cT_\bullet) \setminus L)$. \end{enumerate} Let us construct such an $L$. Let $\overline{B}$ denote the set of simplices $\sigma$ of $\Gamma \backslash \cB\cT_\bullet$ such that \begin{itemize} \item there exists a simplex $\tau \in B$ such that $\sigma$ and $\tau$ has a face in common. \end{itemize} Now we set $L$ to be the set of simplices $\sigma \in \Gamma \backslash \cB\cT_\bullet$ such that \begin{itemize} \item $\sigma \notin \overline{B}$, or \item there exists a simplex $\tau \in ((\Gamma \backslash \cB\cT_\bullet) \setminus \overline{B})$ such that $\sigma$ is a face of $\tau$. \end{itemize} Then $L$ has a structure of a subsimplicial complex. It is easy to see that \[ B \subset (\Gamma \backslash \cB\cT_\bullet \setminus L) \subset \overline{B}, \] which implies (2) above. Since $\Gamma \backslash \cB\cT_\bullet$ is locally finite, $\overline{B}$ is a finite set, which implies (1). 
\end{proof} \subsubsection{} \label{sec:Gamma isom} Consider the following diagram: \begin{equation} \label{eqn:diagram3} \xymatrix{ H^{d-1}(\Gamma\backslash|\cB\cT_\bullet|) \ar[d]_\cong^\alpha & \displaystyle\varinjlim_L H^{d-1}(\Gamma\backslash|\cB\cT_\bullet|, |L|) \ar[l]^\beta \ar[d]^\alpha \\ H^{d-1}_\Gamma(|\mathcal{BT}_\bullet|) & \displaystyle\varinjlim_L H^{d-1} (\Gamma\backslash|\cB\cT_\bullet|EG, \wt{|L|} ) \ar[l]^{\beta\phantom{;lkj;;}} \\ H^{d-1}_\Gamma(\overline{|\mathcal{BT}_\bullet|}) \ar[u]_\alpha^\cong & \displaystyle\varinjlim_L H^{d-1} (\Gamma\backslash\overline{|\mathcal{BT}_\bullet|} \times |E\Gamma_\bullet|, \wt{|L|} \cup \Gamma \backslash \partial \overline{|\mathcal{BT}_\bullet|} \times |E\Gamma_\bullet| ) \ar[l]^{\beta\phantom{LLLLLLLLLLLLLL}} \ar[d]^\beta \ar[u]_\alpha^\cong \\ & H^{d-1} (\Gamma\backslash\overline{|\mathcal{BT}_\bullet|} \times |E\Gamma_\bullet|, \Gamma \backslash \partial \overline{|\mathcal{BT}_\bullet|} \times |E\Gamma_\bullet|) \ar[lu]^\beta. } \end{equation} Here $L$ runs over the subsimplicial complexes as in diagram \eqref{eqn:diagram2}, and $\wt{|L|}$ is the inverse image of $|L|$ by the projection $\Gamma\backslash|\cB\cT_\bullet|EG \to \Gamma\backslash|\cB\cT_\bullet|$. The maps labeled by $\alpha$ are (induced by) pullbacks. The maps $\beta$ are (induced by) the forget support maps. The diagram is commutative. The second vertical arrow on the right hand column is an isomorphism by the excision property of cohomology. \begin{lem} \label{lem:5.5} The maps on the left column of the diagram \eqref{eqn:diagram3} are isomorphisms. \end{lem} \begin{proof} As $|\cB\cT_\bullet|$ is contractible and $\Gamma$ satisfies (3) of Section~\ref{sec:def Gamma} (hence the stabilizer group of a simplex is a finite group as discussed there), the group $H^{d-1}(\Gamma \backslash |\cB\cT_\bullet|)$ is isomorphic to $H^{d-1}(\Gamma)$. As $|\cB\cT_\bullet|\times |E\Gamma_\bullet|$ and $|\cB\cT_\bullet|bar\times |E\Gamma_\bullet|$ are contractible and $\Gamma$ acts freely, the groups $H^{d-1}(\Gamma\backslash |\cB\cT_\bullet|\times |E\Gamma_\bullet|)$ and $H^{d-1}(\Gamma\backslash |\cB\cT_\bullet|bar\times |E\Gamma_\bullet|)$ are also isomorphic to $H^{d-1}(\Gamma)$. (These statements can be proved using spectral sequences, which are compatible with pullbacks.) This implies that the left vertical arrows are isomorphisms. \end{proof} \subsubsection{} Consider the following diagram. \begin{equation} \label{eqn:diagram4} \xymatrix{ H^{d-1}(\Gamma\backslash|\cB\cT_\bullet|EG)^* \ar[r] & H^{d-1}(\Gamma\backslash|\cB\cT_\bullet|EG, \Gamma \backslash \partial \overline{|\mathcal{BT}_\bullet|} \times |E\Gamma_\bullet|)^* \\ H_{d-1}(\Gamma\backslash|\cB\cT_\bullet|EG) \ar[u]^\cong \ar[r] & H_{d-1}(\Gamma\backslash|\cB\cT_\bullet|EG, \Gamma \backslash \partial \overline{|\mathcal{BT}_\bullet|} \times |E\Gamma_\bullet|) \ar[u] } \end{equation} Each of the two vertical arrows in the square is the canonical map of the form $A \to A^{**}$. The lower horizontal arrow is the pushforward map of homology. The top horizontal arrow is the twice dual of the lower horizontal arrow, and is the dual of the forget support map of cohomology. \begin{lem} \label{lem:5.6} The left vertical map in the diagram \eqref{eqn:diagram4} is an isomorphism. \end{lem} \begin{proof} As was remarked in the proof of Lemma~\ref{lem:5.5}, the group $H^{d-1}(\Gamma\backslash|\cB\cT_\bullet|EG)$ is isomorphic to $H^{d-1}(\Gamma)$. 
From property (5) in Section~\ref{sec:def Gamma}, we know that $H^{d-1}(\Gamma)$ is a finite dimensional $\mathbb{Q}$-vector space. This implies the claim. \end{proof} \begin{cor} \label{cor:5.7} The map (6) in diagram \eqref{eqn:diagram1} is an isomorphism. \end{cor} \begin{proof} As remarked in the proof of the previous lemma, $H^{d-1}(\Gamma\backslash|\cB\cT_\bullet|EG)$ is finite dimensional. Using the isomorphisms (8) (10) and the isomorphisms in diagrams \eqref{eqn:diagram3} and \eqref{eqn:diagram3}, we see that $H_{d-1}(\Gamma \backslash \cB\cT_\bullet)^{**}$ is also finite dimensional. The claim follows from this. \end{proof} \subsubsection{} We define the map (2) in diagram \eqref{eqn:diagram0} to be the composite \[ \begin{array}{l} H_{d-1}(\Gamma\backslash|\cB\cT_\bullet|EG) \xto{\cong} H^{d-1}(\Gamma\backslash|\cB\cT_\bullet|EG)^* \xleftarrow{\cong} H^{d-1}(\Gamma\backslash|\cB\cT_\bullet|)^* \\ \xleftarrow{\cong} H^{d-1}(\Gamma \backslash \cB\cT_\bullet)^* \xleftarrow{\cong} H_{d-1}(\Gamma \backslash \cB\cT_\bullet) \end{array} \] where the maps are the left vertical arrow in the diagram \eqref{eqn:diagram4}, the dual of the composite of the left vertical arrows in \eqref{eqn:diagram3}, the dual of (10), and the dual of the composite (8)(6). We define the map (3) in diagram \eqref{eqn:diagram0} to be the composite \[ \begin{array}{l} H_{d-1}(\Gamma\backslash\overline{|\mathcal{BT}_\bullet|} \times |E\Gamma_\bullet|, \Gamma \backslash \partial \overline{|\mathcal{BT}_\bullet|} \times |E\Gamma_\bullet|) \to H^{d-1}(\Gamma\backslash\overline{|\mathcal{BT}_\bullet|} \times |E\Gamma_\bullet|, \Gamma \backslash \partial \overline{|\mathcal{BT}_\bullet|} \times |E\Gamma_\bullet|)^* \\ \to \varinjlim_L H^{d-1}(\Gamma\backslash|\cB\cT_\bullet|, |L|) \to H_c^{d-1}(\Gamma \backslash \cB\cT_\bullet)^* \to H_{d-1}^\mathrm{BM}(\Gamma \backslash \cB\cT_\bullet) \end{array} \] where the maps are the right vertical arrow in the diagram \eqref{eqn:diagram4}, the dual of the vertical arrows in the diagram \eqref{eqn:diagram3}, the dual of the composite $(11)^{-1}(12)$ and the inverse of (7). The diagram \eqref{eqn:diagram0} is then commutative since each of the diagrams \eqref{eqn:diagram1}, \eqref{eqn:diagram2}, \eqref{eqn:diagram3}, \eqref{eqn:diagram4} is commutative. \subsection{} Given a basis $v_1,\dots, v_d$ of $V$, we may regard $A_\bullet$ as a subsimplicial complex of $\mathcal{B}T_\bullet$ using the map $\iota_{v_1,\dots, v_d}$. This simplicial complex is denoted by $A_{v, \bullet}=A_{v_1,\dots, v_d, \bullet}$. Let $\overline{|A_{v,\bullet}|}$ denote the closure of $|A_{v, \bullet}|$ in $|\cB\cT_\bullet|bar$ and set $\partial\overline{|A_{v,\bullet}|} =\overline{|A_{v,\bullet}|} \setminus |A_{v, \bullet}|$. Let $\Delta_{d-1}'$ denote the interior of $\Delta_{d-1}$. Let $\varphi: \Delta_{d-1}' \xto{\cong} \mathbb{R}^{d}/\mathbb{R}(1,\dots,1)$ be the homeomorphism given by $(t_0,\dots, t_{d-1}) \mapsto (1/t_0,\dots, 1/t_{d-1})$. Let $n>0$ be an integer. We set $\wt{K_n}=\prod_{i=0}^d[0,n] \subseteq \mathbb{R}^d$, and let $K_n$ denote the image of $\wt{K_n}$ in $\mathbb{R}^d/\mathbb{R}(1,\dots, 1)$. Recall (Section~\ref{sec:def apartment}) that the simplices of $A_\bullet$ are defined using the set of vertices $A_0=\mathbb{Z}^{\oplus d}/ \mathbb{Z}(1,\dots,1)$. We regard $A_0 \subset \mathbb{R}^{\oplus d}/ \mathbb{R}(1,\dots,1)$ using the natural inclusion $\mathbb{Z} \subset \mathbb{R}$. 
The set of those simplices of $A_\bullet$ whose support is contained in the complement of the interior of $K_n$ naturally forms a subsimplicial complex of $A_\bullet$. We call this simplicial complex $K_{n,\bullet}^c$. It is easy to see that $\cap_n K_{n,\bullet}^c =\emptyset$. Consider the following map \begin{eqnarray} \label{eqn:sub diagram}\\ \nonumber H_{d-1}(\overline{|A_{v, \bullet}|}, \partial\overline{|A_{v,\bullet}|}) \rightarrow (H^{d-1}(\overline{|A_{v, \bullet}|}, \partial\overline{|A_{v, \bullet}|}))^* \rightarrow (\displaystyle\varinjlim_n H^{d-1}(\overline{|A_{v,\bullet}|}, \partial\overline{|A_{v, \bullet}|} \cup |K_{n,\bullet}^c|))^* \\ \nonumber \xleftarrow{\cong} (\displaystyle\varinjlim_n H^{d-1}(|A_{v, \bullet}|, |K_{n,\bullet}^c|))^* \xleftarrow{\cong} (\displaystyle\varinjlim_n H^{d-1}(A_{v, \bullet}, K_{n,\bullet}^c))^* \xleftarrow{\cong} H_c^{d-1}(A_{v,\bullet})^* \xleftarrow{\cong} H_{d-1}^\mathrm{BM}(A_{v,\bullet}) \end{eqnarray} where the limit is over the nonnegative integers in each case. The first map is the canonical map $A \to A^{**}$ for $A=H_{d-1}(\overline{|A_{v, \bullet}|}, \partial\overline{|A_{v,\bullet}|})$. The second map is the dual of the limit of the pullback map at each stage. The third map is the dual of the limit of the excision isomorphism of cohomology. The fourth map is the dual of the limit of the isomorphisms between cellular cohomology and singular cohomology (see Section~\ref{sec:cellular}). The fifth map is the map obtained from the definitions in Section~\ref{sec:def homology}. It is an isomorphism since $\cap_n K_{n,\bullet}^c =\emptyset$. The sixth map is the duality isomorphism in the universal coefficient theorem (see Section~\ref{univ_coeff}). Note that the image of the continuous map $s(v_1, \dots, v_d)$ of Section~\ref{sec:4 and 5} is contained in $\overline{|A_{v, \bullet}|}$ and defines a class $[s(v_1,\dots, v_d)]$ in $H_{d-1}( \overline{|A_{v, \bullet}|} , \partial \overline{|A_{v, \bullet}|} )$. \begin{lem} \label{lem:geom apartment} The following two elements in $H^\mathrm{BM}_{d-1} (A_\bullet)$ coincide: \begin{enumerate} \item The image of the class of $s(v_1,\dots, v_d)$ by \[ H_{d-1}(\overline{|A_{v,\bullet}|}, \partial\overline{|A_{v,\bullet}|}) \to H_{d-1}^\mathrm{BM}(A_{v, \bullet}) \xto{\iota_{v_1,\dots,v_d}^{-1}} H_{d-1}^\mathrm{BM}(A_\bullet), \] where the first map is the map in the diagram \eqref{eqn:sub diagram}. \item The class of $\beta$ of Section~\ref{sec:fundamental class} in $H_{d-1}^\mathrm{BM}(A_\bullet)$. \end{enumerate} \end{lem} \begin{proof} Let us describe the image of the class $[s(v_1,\dots, v_d)]$ in $[\varinjlim_n H^{d-1} (A_\bullet, K_{n,\bullet}^c)]^*$. Take an element $h \in \varinjlim_n H^{d-1} (A_\bullet, K_{n,\bullet}^c).$ We may suppose it is represented by an element $h_m$ of $H^{d-1} (A_\bullet, K_{m,\bullet}^c)$ for some $m$. Consider the map \[ H_{d-1}(\overline{|A_\bullet|}, \partial \overline{|A_\bullet|}) \to H_{d-1}(\overline{|A_\bullet|}, \partial \overline{|A_\bullet|} \cup |K_{m, \bullet}^c|) \xleftarrow{\cong} H_{d-1}(|A_\bullet|, |K_{m,\bullet}^c|) \xleftarrow{\cong} H_{d-1}(A_\bullet, K_{m,\bullet}^c). \] Let $t_m$ denote the image of $[s(v_1,\dots, v_d)]$ via this map. Then the pairing of $[s(v_1,\dots, v_d)]$ with the element $h$ is the pairing of $h_m$ and $t_m$ under the canonical pairing \[ H_{d-1}(A_\bullet, K_{m,\bullet}^c) \times H^{d-1}(A_\bullet, K_{m,\bullet}^c) \to \mathbb{Q} \] between homology and cohomology. Let us compute $t_m$. 
Given $s=(s_0,\dots, s_{d-1}) \in K_m$, take a representative $\wt{s}=(\wt{s_0},\dots, \wt{s_{d-1}}) \in \mathbb{R}^d$ such that $-m \le s_i \le 0$ $(0\le i \le d-1), \min_i s_i=-m$. We define a map $g_m:K_m \to \Delta_{d-1}$ by $s \mapsto (\wt{s_0}/(\wt{s_0}+\cdots+\wt{s_{d-1}}), \dots, \wt{s_{d-1}}/(\wt{s_0}+\cdots+\wt{s_{d-1}}))$. It is well-defined and is a homeomorphism. Let $f_m: \Delta_{d-1} \to \Delta_{d-1}$ be the composite $\Delta_{d-1} \xto{g_n^{-1}} K_m \subset \mathbb{R}^d/\mathbb{R}(1,\dots,1) \xleftarrow{\cong} \Delta_{d-1}' \subset \Delta_{d-1}$. From the following lemma, it follows that the class of $s(v_1,\dots, v_d)$ equals the class of $s(v_1,\dots,v_d)\circ f_m$ in $H_{d-1}(\overline{|A_\bullet|}, \partial\overline{|A_\bullet|} \cup |K_{m,\bullet}^c|).$ Note that $s(v_1,\dots, v_d) \circ f_m$ defines a class in $H_{d-1}(|A_\bullet|,|K_{m,\bullet}^c|)$, hence this is the image of $[s(v_1,\dots, s_d)]$. It is easy to check that the class of $s(v_1,\dots,v_d)\circ f_m$ is then represented by the chain $(\gamma_\nu)\in \prod_{\nu \in A_{d-1}'}\mathbb{Z}$ where $\gamma_\nu=\beta_\nu$ for $\nu \in K_m'$ ($\beta_\nu$ was defined in Section~\ref{sec:fundamental class}), and $\gamma_\nu=0$ for $\nu \notin K_m'$. \end{proof} \begin{lem} Let $X$ be a topological space and $Y$ a subspace. Let $n \ge 1$. Let $\alpha:\Delta_r \to X$ be a continuous map such that $\alpha(\overline{(\Delta_r\setminus \mathrm{Im} f_n)})\subset Y$. Then $\alpha$ and $\alpha\circ f_n: \Delta_r \to X$ both define the same element in $H_r(X,Y;\mathbb{Z})$. \end{lem} \begin{proof} Omitted. \end{proof} \subsection{Proof of Theorem \ref{lem:apartment}} \begin{proof} By Lemma~\ref{lem:geom apartment}, it suffices to show that \begin{enumerate} \item the image of the class of $[A_{v_1,\dots, v_d;g_0,\dots, g_d}]$ in $H^\mathrm{BM}_{d-1}(\Gamma \backslash \mathcal{B}T_\bullet)$, and \item the image of the class of $s(v_1,\dots, v_d)$ via the composite map \[ H_{d-1}(\overline{|A_{v,\bullet}|}, \partial \overline{|A_{v,\bullet}|}) \to H_{d-1}^\mathrm{BM}(A_{v,\bullet}) \to H_{d-1}^\mathrm{BM} (\Gamma \backslash \mathcal{B}T_\bullet), \] where the first map is the map in the diagram \eqref{eqn:sub diagram}, \end{enumerate} coincide. The argument is similar to that in the proof of Lemma~\ref{lem:geom apartment}. We compare the two classes in \[ [\varinjlim_L H^{d-1}(\Gamma\backslash|\cB\cT_\bullet|,|L|)]^* \] which appeared in the definition of the map (3) in diagram \eqref{eqn:diagram0}. Let $h$ be an element in $H^{d-1}(\Gamma\backslash|\cB\cT_\bullet|, |L|)$. We need to compute the images of the two classes in \[ H_{d-1}(\Gamma\backslash|\cB\cT_\bullet|, |L|) \] Since the class (1) is represented by the chain $s(v_1, \dots, v_d) \times [g_0, \dots, g_{d-1}]$, using the argument as in the proof of Lemma~\ref{lem:geom apartment}, we see that it is represented by the chain $s(v_1, \dots, v_d)\circ f_n$ for sufficiently large $n$ in $H_{d-1}(\Gamma\backslash|\cB\cT_\bullet|, |L|)$. Again, as we have seen in the proof of Lemma~\ref{lem:geom apartment}, this is nothing but the class of (2). \end{proof} \section{The homology of an arithmetic quotient} \label{section8} In this section, we compute the homology groups and the Borel-Moore homology groups of some arithmetic quotients of the Bruhat-Tits building and relate them to the space of automorphic forms. The aim of this section is to prove Proposition \ref{7_prop1} below. 
\subsection{Identification of homology groups and the space of automorphic forms} \label{subsec:X_K} \ For an open compact subgroup $\mathbb{K} \subset \mathrm{GL}_d(\mathbb{A}^\infty)$, we let $\wt{X}_{\mathrm{GL}_d,\mathbb{K},\bullet}$ denote the disjoint union $\wt{X}_{\mathrm{GL}_d,\mathbb{K},\bullet}=(\mathrm{GL}_d(\mathbb{A}^\infty)/\mathbb{K}) \times \mathcal{B}T_{\bullet}$ of copies of the Bruhat-Tits building $\mathcal{B}T_{\bullet}$ indexed by $\mathrm{GL}_d(\mathbb{A}^\infty)/\mathbb{K}$. We often omit the subscript $\mathrm{GL}_d$ on $\wt{X}_{\mathrm{GL}_d,\mathbb{K},\bullet}$ when there is no fear of confusion. The group $\mathrm{GL}_d(\mathbb{A})$ acts on the simplicial complex $\wt{X}_{\mathbb{K},\bullet}$ from the left. We study the quotient $\mathrm{GL}_d(F) \backslash \wt{X}_{\mathbb{K},\bullet}$ of $\wt{X}_{\mathbb{K},\bullet}$ by the subgroup $\mathrm{GL}_d(F) \subset \mathrm{GL}_d(\mathbb{A})$. For $0\le i \le d-1$, we let $X_{\mathbb{K},i}= X_{\mathrm{GL}_d,\mathbb{K},i}$ denote the quotient $X_{\mathbb{K},i} = \mathrm{GL}_d(F) \backslash \wt{X}_{\mathrm{GL}_d,\mathbb{K},i}$. We set $J_\mathbb{K} =\mathrm{GL}_d(F) \backslash \mathrm{GL}_d(\mathbb{A}^\infty)/\mathbb{K}$. For each $j \in J_\mathbb{K}$, we choose an element $g_j \in \mathrm{GL}_d(\mathbb{A}^\infty)$ in the double coset $j$ and set $\Gamma_j = \mathrm{GL}_d(F) \cap g_j \mathbb{K} g_j^{-1}$. Then the set $X_{\mathbb{K},i}$ is isomorphic to the disjoint union $\coprod_j \Gamma_j \backslash \mathcal{B}T_i$. For each $j$, the group $\Gamma_j \subset \mathrm{GL}_d(F)$ is an arithmetic group as defined in Section~\ref{sec:71}. It follows that the tuple $X_{\mathbb{K},\bullet}=(X_{\mathbb{K},i})_{0\le i \le d-1}$ forms a simplicial complex which is isomorphic to the disjoint union $\coprod_{j\in J_\mathbb{K}} \Gamma_j \backslash \mathcal{B}T_\bullet$. Since the simplicial complex $\wt{X}_{\mathrm{GL}_d,\mathbb{K},\bullet}$ is locally finite, it follows that the simplicial complex $X_{\mathbb{K},\bullet}$ is locally finite. Hence for an abelian group $M$, we may consider the cohomology groups with compact support $H_c^{*}(X_{\mathbb{K},\bullet},M)$ (resp.\ the Borel-Moore homology groups $H_*^\mathrm{BM}(X_{\mathbb{K},\bullet},M)$) of the simplicial complex $X_{\mathbb{K},\bullet}$. Since the simplicial complex $X_{\mathbb{K},\bullet}$ has no $i$-simplex for $i \ge d$ as was remarked in Section~\ref{sec:dimension}, it follows that the map $$ H_{d-1}(X_{\mathbb{K},\bullet},M) \to H_{d-1}^\mathrm{BM}(X_{\mathbb{K},\bullet},M) $$ is injective for any abelian group $M$. We regard $H_{d-1}(X_{\mathbb{K},\bullet},M)$ as a subgroup of $H_{d-1}^\mathrm{BM}(X_{\mathbb{K},\bullet},M)$. \subsubsection{} Let $\mathrm{St}_d$ denote the Steinberg representation as defined, for example, in \cite[p.193]{Laumon1}. It is defined with coefficients in $\mathbb{C}$, but it can also be defined with coefficients in $\mathbb{Q}$ in a similar manner. We let $\mathrm{St}_d$ denote the corresponding representation. \begin{lem}\label{lem:Steinberg} For a $\mathbb{Q}$-vector space $M$, there is a canonical, $\mathrm{GL}_d(F_\infty)$-equivariant isomorphism between the module of $M$-valued harmonic $(d-1)$-cochains and the module $\mathrm{Hom}_{\mathbb{Q}}(\mathrm{St}_d,M)$. \end{lem} \begin{proof} By definition, the module of $M$-valued harmonic $(d-1)$-cochains is identified with $\mathrm{Hom}(H^{d-1}_c(\mathcal{B}T_\bullet,\mathbb{Q}),M)$. 
It is shown in~\cite[6.2,6.4]{Borel} that $\mathrm{St}_d$ (with $\mathbb{C}$-coefficient) is canonically isomorphic to $H^{d-1}_c(\mathcal{B}T_\bullet,\mathbb{C})$ as a representation of $\mathrm{GL}_d(F_\infty)$. One can check that this map is defined over $\mathbb{Q}$. This proves the claim. \end{proof} \subsubsection{} \label{sec:6.1.2} We let $\mathcal{B}T_{j,*}$ denote the quotient $\wt{\mathcal{B}T}_j/F_{\infty}^{\times}$. This set is identified with the set of pairs $(\sigma, v)$ with $\sigma \in \mathcal{B}T_j$ and $v \in \mathcal{B}T_0$ a vertex of $\sigma$, which we call a pointed $j$-simplex. Here the element $(L_i)_{i\in \mathbb{Z}} \mod K^\times$ of $\wt{\mathcal{B}T_j}/K^\times$ corresponds to the pair $((L_i)_{i\in \mathbb{Z}}, L_0)$ via this identification. We identify the set $\wt{\mathcal{B}T}_{0}$ with the coset $\mathrm{GL}_d(K)/\mathrm{GL}_d(\mathcal{O})$ by associating to an element $g \in \mathrm{GL}_d(K)/\mathrm{GL}_d(\mathcal{O})$ the lattice $\mathcal{O}_{V}g^{-1}$. Let $\mathcal{I}=\{(a_{ij})\in \mathrm{GL}_d(\mathcal{O}) \,|\, a_{ij}\,\mathrm{mod}\, \varpi =0 \ \text{if}\ i>j\}$ be the Iwahori subgroup. Similarly, we identify the set $\wt{\mathcal{B}T}_{d-1}$ with the coset $\mathrm{GL}_d(K)/\mathcal{I}$ by associating to an element $g\in \mathrm{GL}_d(K)/\mathcal{I}$ the chain of lattices $(L_i)_{i \in \mathbb{Z}}$ characterized by $L_i=\mathcal{O}_{V}\Pi_ig^{-1}$ for $i=0,\dots,d$. Here, for $i=0,\dots,d$, we let $\Pi_i$ denote the diagonal $d\times d$ matrix $\Pi_i=\mathrm{diag}(\varpi,\ldots,\varpi,1,\ldots,1)$ with $\varpi$ appearing $i$ times and $1$ appearing $d-i$ times. Let $M$ be a $\mathbb{Q}$-vector space. Let $\mathcal{C}^\mathbb{K}(M)$ denote the ($\mathbb{Q}$-vector) space of locally constant $M$-valued functions on $\mathrm{GL}_d(F)\backslash \mathrm{GL}_d(\mathbb{A})/(\mathbb{K} \times F_\infty^\times)$. Let $\mathcal{C}_c^\mathbb{K}(M) \subset C^\mathbb{K}(M)$ denote the subspace of compactly supported functions. \begin{lem}\label{lem:Steinberg8} \begin{enumerate} \item There is a canonical isomorphism $$ H_{d-1}^\mathrm{BM}(X_{\mathbb{K},\bullet},M) \cong \mathrm{Hom}_{\mathrm{GL}_d(F_\infty)} (\mathrm{St}_d, \mathcal{C}^\mathbb{K}(M)), $$ where $\mathcal{C}^\mathbb{K}(M)$ denotes the space of locally constant $M$-valued functions on $\mathrm{GL}_d(F)\backslash \mathrm{GL}_d(\mathbb{A})/(\mathbb{K}\times F_\infty^\times)$. \item Let $v \in \mathrm{St}_d^\mathcal{I}$ be a non-zero Iwahori-spherical vector. Then the image of the evaluation map $$ \begin{array}{l} \mathrm{Hom}_{\mathrm{GL}_d(F_\infty)}(\mathrm{St}_d, \mathcal{C}^\mathbb{K}(M)) \\ \to \mathrm{Map}(\mathrm{GL}_d(F)\backslash \mathrm{GL}_d(\mathbb{A})/(\mathbb{K} \times F_\infty^\times \mathcal{I}),M) \end{array} $$ at $v$ is identified with the image of the map $$ \begin{array}{rl} H_{d-1}^\mathrm{BM}(X_{\mathbb{K},\bullet},M) & \to \mathrm{Map}(\mathrm{GL}_d(F)\backslash (\mathrm{GL}_d(\mathbb{A}^\infty)/\mathbb{K} \times \mathcal{B}T_{d-1,*}),M) \\ & \cong \mathrm{Map}(\mathrm{GL}_d(F)\backslash \mathrm{GL}_d(\mathbb{A})/(\mathbb{K} \times F_\infty^\times \mathcal{I}),M). \end{array} $$ \end{enumerate} \end{lem} \begin{proof} For a $\mathbb{C}$-vector space $M$, (1) is proved in \cite[Section 5.2.3]{KY:Zeta elements}, and (2) is \cite[Corollary 5.7]{KY:Zeta elements}. The proofs and the argument in loc. cit. work for a $\mathbb{Q}$-vector space $M$ as well. 
\end{proof} \begin{cor} \label{cor:Steinberg8} Under the isomorphism in (1), the subspace $$ H_{d-1}(X_{\mathbb{K},\bullet},M) \subset H_{d-1}^\mathrm{BM}(X_{\mathbb{K},\bullet},M) $$ corresponds to the subspace $$ \mathrm{Hom}_{\mathrm{GL}_d(F_\infty)}(\mathrm{St}_d, \mathcal{C}_c^\mathbb{K}(M)) \subset \mathrm{Hom}_{\mathrm{GL}_d(F_\infty)}(\mathrm{St}_d, \mathcal{C}^\mathbb{K}(M)). $$ \end{cor} \begin{proof} This follows from Lemma~\ref{lem:Steinberg8} (2) and the definition of the homology group $H_{d-1}(X_{\mathbb{K}, \bullet}, M)$. \end{proof} \subsection{Pull-back maps for homology groups} \ Let $\mathbb{K},\mathbb{K}' \subset \mathrm{GL}_d(\mathbb{A}^\infty)$ be open compact subgroups with $\mathbb{K}' \subset \mathbb{K}$. We denote by $f_{\mathbb{K}',\mathbb{K}}$ the natural projection map $X_{\mathbb{K}',i} \to X_{\mathbb{K},i}$. Since $\mathbb{K}'$ is a subgroup of $\mathbb{K}$ of finite index, it follows that for any $i$ with $0 \le i \le d-1$ and for any $i$-simplex $\sigma \in X_{\mathbb{K},i}$, the inverse image of $\sigma$ under the map $f_{\mathbb{K}',\mathbb{K}}$ is a finite set. Let $i$ be an integer with $0 \le i \le d-1$ and let $\sigma' \in X_{\mathbb{K}',i}$. Let $\sigma$ denote the image of $\sigma'$ under the map $f_{\mathbb{K}',\mathbb{K}}$. Let us choose an $i$-simplex $\wt{\sigma}'$ of $\wt{X}_{\mathbb{K}',\bullet}$ which is sent to $\sigma'$ under the projection map $\wt{X}_{\mathbb{K}',\bullet} \to X_{\mathbb{K}',\bullet}$. Let $\wt{\sigma}$ denote the image of $\wt{\sigma}'$ under the map $\wt{X}_{\mathbb{K}',i} \to \wt{X}_{\mathbb{K},i}$. We let $$ \Gamma_{\wt{\sigma}'} = \{ \gamma \in \mathrm{GL}_d(F)\ |\ \gamma \wt{\sigma}' =\wt{\sigma}' \} $$ and $$ \Gamma_{\wt{\sigma}} = \{ \gamma \in \mathrm{GL}_d(F)\ |\ \gamma \wt{\sigma} =\wt{\sigma} \} $$ denote the isotropy group of $\wt{\sigma}'$ and $\wt{\sigma}$, respectively. The following lemma can be checked easily. \begin{lem} Let the notation be as above. \begin{enumerate} \item The group $\Gamma_{\wt{\sigma}}$ is a finite group and the group $\Gamma_{\wt{\sigma}'}$ is a subgroup of $\Gamma_{\wt{\sigma}}$. \item The isomorphism class of the group $\Gamma_{\wt{\sigma}'}$ (resp.\ $\Gamma_{\wt{\sigma}}$) depends only on $\sigma'$ (resp.\ $\sigma$) and does not depends on the choice of $\wt{\sigma}'$. \end{enumerate} \end{lem} The lemma above shows in particular that the index $[\Gamma_{\wt{\sigma}} : \Gamma_{\wt{\sigma}'}]$ is finite and depends only on $\sigma'$ and $f_{\mathbb{K}',\mathbb{K}}$. We denote this index by $e_{\mathbb{K}',\mathbb{K}}(\sigma')$ and call it the ramification index of $f_{\mathbb{K}',\mathbb{K}}$ at $\sigma'$. Let $M$ be an abelian group. Let $i$ be an integer with $0 \le i \le d$. We set $X'_{\mathbb{K},i}= \coprod_{\sigma \in X_{\mathbb{K},i}} O(\sigma)$. The map $f_{\mathbb{K}',\mathbb{K}} : X_{\mathbb{K}',\bullet} \to X_{\mathbb{K},\bullet}$ induces a map $X'_{\mathbb{K}',i} \to X'_{\mathbb{K},i}$ which we denote also by $f_{\mathbb{K}'.\mathbb{K}}$. Let $m = (m_{\nu})_{\nu \in X'_{\mathbb{K},i}}$ be an element of the $\{\pm 1\}$-module $\prod_{\nu \in X'_{\mathbb{K},i}} M$. We define the element $f^*_{\mathbb{K}',\mathbb{K}}(m)$ in $\prod_{\nu \in X'_{\mathbb{K}',i}} M$ to be $$ f^*_{\mathbb{K}',\mathbb{K}}(m) = (m'_{\nu'})_{\nu' \in X'_{\mathbb{K}',i}} $$ where for $\nu' \in O(\sigma') \subset X'_{\mathbb{K}',i}$, the element $m'_{\nu'} \in M$ is given by $m'_{\nu'} = e_{\mathbb{K}',\mathbb{K}}(\sigma') m_{f_{\mathbb{K}',\mathbb{K}}(\nu')}$. 
The following lemma can be checked easily. \begin{lem} Let the notation be as above. \begin{enumerate} \item The map $f^*_{\mathbb{K}',\mathbb{K}} : \prod_{\nu \in X'_{\mathbb{K},i}} M \to \prod_{\nu' \in X'_{\mathbb{K}',i}} M$ is a homomorphism of $\{\pm 1\}$-modules. \item The map $f^*_{\mathbb{K}',\mathbb{K}} : \prod_{\nu \in X'_{\mathbb{K},i}} M \to \prod_{\nu' \in X'_{\mathbb{K}',i}} M$ sends an element in the subgroup $\bigoplus_{\nu \in X'_{\mathbb{K},i}} M \subset \prod_{\nu \in X'_{\mathbb{K},i}} M$ to an element in $\bigoplus_{\nu \in X'_{\mathbb{K}',i}} M$. \item For $1 \le i \le d-1$, the diagrams $$ \begin{CD} \prod_{\nu \in X'_{\mathbb{K},i}} M @>{\wt{\partial}_{i,\prod}}>> \prod_{\nu \in X'_{\mathbb{K},i-1}} M \\ @V{f_{\mathbb{K}',\mathbb{K}}^*}VV @V{f_{\mathbb{K}',\mathbb{K}}^*}VV \\ \prod_{\nu' \in X'_{\mathbb{K}',i}} M @>{\wt{\partial}_{i,\prod}}>> \prod_{\nu' \in X'_{\mathbb{K}',i-1}} M \end{CD} $$ and $$ \begin{CD} \bigoplus_{\nu \in X'_{\mathbb{K},i}} M @>{\wt{\partial}_{i,\oplus}}>> \bigoplus_{\nu \in X'_{\mathbb{K},i-1}} M \\ @V{f_{\mathbb{K}',\mathbb{K}}^*}VV @V{f_{\mathbb{K}',\mathbb{K}}^*}VV \\ \bigoplus_{\nu' \in X'_{\mathbb{K}',i}} M @>{\wt{\partial}_{i,\oplus}}>> \bigoplus_{\nu' \in X'_{\mathbb{K}',i-1}} M \end{CD} $$ are commutative. \end{enumerate} \end{lem} The lemma above shows that the map $f^*_{\mathbb{K}',\mathbb{K}}$ induces homomorphisms $H_*(X_{\mathbb{K},\bullet},M) \to H_*(X_{\mathbb{K}',\bullet},M)$ and $H^\mathrm{BM}_*(X_{\mathbb{K},\bullet},M) \to H^\mathrm{BM}_*(X_{\mathbb{K}',\bullet},M)$ of abelian groups. We denote these homomorphisms also by $f_{\mathbb{K}',\mathbb{K}}^*$. We remark here that in \cite[p.561, 5.3.3]{KY:Zeta elements}, we implicitly use these pullback maps for the Borel-Moore homology. The proof of the following lemma is straightforward and is left to the reader. \begin{lem} Let the notation be as above. \begin{enumerate} \item Suppose that $\mathbb{K}'$ is a normal subgroup of $\mathbb{K}$. Then the homomorphism $f^*_{\mathbb{K}',\mathbb{K}}$ induces an isomorphism $H^\mathrm{BM}_*(X_{\mathbb{K},\bullet},M) \cong H^\mathrm{BM}_*(X_{\mathbb{K}',\bullet},M)^{\mathbb{K}/\mathbb{K}'}$ and a similar statement holds for $H_*$. \item Let $M$ be a $\mathbb{Q}$-vector space. Then the diagrams $$ \begin{CD} H^\mathrm{BM}_{d-1} (X_{\mathbb{K},\bullet}, M) @>{\cong}>> Hom_{\mathrm{GL}_d(F_\infty)}(\mathrm{St}_d, \mathcal{C}^\mathbb{K}(M)) \\ @V{f^*_{\mathbb{K}',\mathbb{K}}}VV @VVV \\ H^\mathrm{BM}_{d-1} (X_{\mathbb{K}',\bullet}, M) @>{\cong}>> Hom_{\mathrm{GL}_d(F_\infty)}(\mathrm{St}_d, \mathcal{C}^{\mathbb{K}'}(M)) \end{CD} $$ and $$ \begin{CD} H_{d-1} (X_{\mathbb{K},\bullet}, M) @>{\cong}>> Hom_{\mathrm{GL}_d(F_\infty)}(\mathrm{St}_d, \mathcal{C}_c^\mathbb{K}(M)) \\ @V{f^*_{\mathbb{K}',\mathbb{K}}}VV @VVV \\ H_{d-1} (X_{\mathbb{K}',\bullet}, M) @>{\cong}>> Hom_{\mathrm{GL}_d(F_\infty)}(\mathrm{St}_d, \mathcal{C}_c^{\mathbb{K}'}(M)) \end{CD} $$ are commutative. Here the horizontal arrows are the isomorphisms given in Lemma~\ref{lem:Steinberg8} and Corollary~\ref{cor:Steinberg8}, and the right vertical arrows are the map induced by the quotient map $\mathrm{GL}_d(F) \backslash \mathrm{GL}_d(\mathbb{A}) /(\mathbb{K}' \times F_\infty^\times) \to \mathrm{GL}_d(F) \backslash \mathrm{GL}_d(\mathbb{A}) /(\mathbb{K}\times F_\infty^\times)$. 
\end{enumerate} \end{lem} \subsection{The action of $\mathrm{GL}_d(\mathbb{A}^\infty)$ and admissibility} For $g \in \mathrm{GL}_d(\mathbb{A}^\infty)$, we let $\wt{\xi}_g : \wt{X}_{\mathbb{K},\bullet} \xto{\cong} \wt{X}_{g^{-1} \mathbb{K} g,\bullet}$ denote the isomorphism of simplicial complexes induced by the isomorphism $\mathrm{GL}_d(\mathbb{A}^\infty)/\mathbb{K} \xto{\cong} \mathrm{GL}_d(\mathbb{A}^\infty)/g^{-1}\mathbb{K} g$ which sends a coset $h \mathbb{K}$ to the coset $hg\cdot g^{-1} \mathbb{K} g$ and by the identity on $\mathcal{B}T_{\bullet}$. The isomorphism $\wt{\xi}_g$ induces an isomorphism $\xi_{g} :X_{\mathbb{K},\bullet} \xto{\cong} X_{g^{-1} \mathbb{K} g,\bullet}$ of simplicial complexes. For two elements $g, g' \in \mathrm{GL}_d(\mathbb{A}^\infty)$, we have $\xi_{gg'}=\xi_{g'}\circ \xi_g$. For an abelian group $M$, we let $H_*(X_{\lim,\bullet},M)=H_*(X_{\mathrm{GL}_d,\lim,\bullet},M)$ and $H_*^\mathrm{BM}(X_{\lim,\bullet},M)=H_*^\mathrm{BM}(X_{\mathrm{GL}_d,\lim,\bullet},M)$ denote the inductive limits $\varinjlim_{\mathbb{K}}H_*(X_{\mathbb{K},\bullet},M)$ and $\varinjlim_{\mathbb{K}}H_*^\mathrm{BM}(X_{\mathbb{K},\bullet},M)$, respectively. Here the transition maps in the inductive limits are given by $f^*_{\mathbb{K}',\mathbb{K}}$. The isomorphisms $\xi_g$ for $g\in \mathrm{GL}_d(\mathbb{A}^\infty)$ gives rise to a smooth action of the group $\mathrm{GL}_d(\mathbb{A}^\infty)$ on these inductive limits. If $M$ is a torsion free abelian group, then for each compact open subgroup $\mathbb{K} \subset \mathrm{GL}_d(\mathbb{A}^\infty)$, the homomorphism $H_*(X_{\mathbb{K},\bullet},M) \to H_*(X_{\lim,\bullet},M)$ is injective and its image is equal to the $\mathbb{K}$-invariant part $H_*(X_{\lim,\bullet},M)^{\mathbb{K}}$ of $H_*(X_{\lim,\bullet},M)$. Similar statement holds for $H_*^\mathrm{BM}$. \begin{lem}\label{7_lem1} \begin{enumerate} \item For any open compact subgroup $\mathbb{K} \subset \mathrm{GL}_d(\mathbb{A}^\infty)$, both $H_{d-1}^\mathrm{BM}(X_{\mathbb{K},\bullet},\mathbb{Q})$ and $H_{d-1}(X_{\mathbb{K},\bullet},\mathbb{Q})$ are finite dimensional. \item The inductive limits $H_{d-1}(X_{\lim,\bullet},\mathbb{Q})$ and $H_{d-1}^\mathrm{BM}(X_{\lim,\bullet},\mathbb{Q})$ are admissible $\mathrm{GL}_d(\mathbb{A}^\infty)$-modules. \end{enumerate} \end{lem} \begin{proof} It follows from Lemma~\ref{lem:Steinberg8} that the $\mathbb{Q}$-vector space $H_{d-1}^\mathrm{BM}(X_{\mathbb{K},\bullet},\mathbb{Q})$ is isomorphic to $\mathrm{Hom}_{\mathrm{GL}_d(F_\infty)}(\mathrm{St}_d,\mathcal{C}^\mathbb{K}(\mathbb{Q}))$. Let $\mathcal{H}_\infty$ denote the convolution algebra of locally constant, compactly supported $\mathbb{Q}$-valued functions on $\mathrm{GL}_d(F_\infty)$, with respect to a Haar measure on $\mathrm{GL}_d(F_\infty)$ such that the volume of $\mathrm{GL}_d(\mathcal{O}_\infty)$ is a rational number. We regard $\mathrm{St}_d$ as a left $\mathcal{H}_\infty$-module. Let $K_\infty \subset \mathrm{GL}_d(F_\infty)$ be a compact open subgroup such that the $K_\infty$-invariant part $\mathrm{St}_d^{K_\infty}$ is non-zero. Let us fix a non-zero vector $v \in \mathrm{St}_d^{K_\infty}$ and let $J \subset\mathcal{H}_\infty$ denote the set of elements $f \in \mathcal{H}_\infty$ such that $fv =0$. Then $J$ is an admissible left ideal of $\mathcal{H}_\infty$ in the sense of \cite[5.5, p.199]{BJ}. 
The map $\mathrm{Hom}_{\mathrm{GL}_d(F_\infty)}(\mathrm{St}_d,\mathcal{C}^\mathbb{K}(\mathbb{Q})) \to \mathcal{C}^\mathbb{K}(\mathbb{Q})$ which sends $\varphi: \mathrm{St}_d \to \mathcal{C}^\mathbb{K}(\mathbb{Q})$ to $\varphi(v)$ gives an isomorphism from the space $\mathrm{Hom}_{\mathrm{GL}_d(F_\infty)}(\mathrm{St}_d,\mathcal{C}^\mathbb{K}(\mathbb{Q}))$ to the space of $\mathbb{Q}$-valued functions on $\mathrm{GL}_d(F) \backslash \mathrm{GL}_d(\mathbb{A})$ which is right invariant under $\mathbb{K} \times K_\infty$ and is annihilated by $J$. Hence it follows from \cite[5.6.\ THEOREM, p.199]{BJ} that the space $\mathrm{Hom}_{\mathrm{GL}_d(F_\infty)}(\mathrm{St}_d,\mathcal{C}^\mathbb{K}(\mathbb{Q}))$ is finite dimensional. Therefore $H_{d-1}^\mathrm{BM}(X_{\mathbb{K},\bullet},\mathbb{Q})$ is finite dimensional. The space $H_{d-1}(X_{\mathbb{K},\bullet},\mathbb{Q})$ is finite dimensional since it is a subspace of $H_{d-1}^\mathrm{BM}(X_{\mathbb{K},\bullet},\mathbb{Q})$. This proves the claim (1). For each compact open subgroup $\mathbb{K} \subset \mathrm{GL}_d(\mathbb{A}^\infty)$, the maps $H_{d-1}^\mathrm{BM}(X_{\mathbb{K},\bullet},\mathbb{Q}) \to H_{d-1}^\mathrm{BM}(X_{\lim,\bullet},\mathbb{Q})^\mathbb{K}$ and $H_{d-1}(X_{\mathbb{K},\bullet},\mathbb{Q}) \to H_{d-1}(X_{\lim,\bullet},\mathbb{Q})^\mathbb{K}$ are isomorphisms. Hence the claim (2) follows from the claim (1). \end{proof} \subsection{The homology} \begin{prop} \label{prop:66_3} As a representation of $\mathrm{GL}_d(\mathbb{A}^\infty)$, the inductive limit $H_{d-1}(X_{\lim,\bullet},\mathbb{C})$ is isomorphic to the direct sum $$ H_{d-1}(X_{\lim,\bullet},\mathbb{C}) \cong \bigoplus_\pi \pi^\infty, $$ where $\pi = \pi^\infty \otimes \pi_\infty$ runs over (the isomorphism classes of) the irreducible cuspidal automorphic representations of $\mathrm{GL}_d(\mathbb{A})$ such that $\pi_\infty$ is isomorphic to the Steinberg representation of $\mathrm{GL}_d(F_\infty)$. \end{prop} \begin{proof} Let $\mathcal{C}(\mathbb{C})$ be the space of locally constant, compactly supported $\mathcal{C}$-valued functions on $\mathrm{GL}_d(F) \backslash \mathrm{GL}_d(\mathbb{A}) /F_\infty^\times$. It follows from Corollary~\ref{cor:Steinberg8} that the isomorphism in Lemma~\ref{lem:Steinberg8} (1) induces a $\mathrm{GL}_d(\mathbb{A}) = \mathrm{GL}_d(\mathbb{A}^\infty) \times \mathrm{GL}_d(F_\infty)$-equivariant homomorphism $$ \iota : H_{d-1}(X_{\lim,\bullet},\mathbb{C}) \otimes_\mathbb{Q} \mathrm{St}_d \to \mathcal{C}(\mathbb{C}). $$ We denote by $\mathcal{A}$ the image of the homomorphism $\iota$. It follows from Corollary~\ref{cor:Steinberg8} that the map $H_{d-1}(X_{\lim,\bullet},\mathbb{C}) \otimes_\mathbb{Q} \mathrm{St}_d^\mathcal{I} \to \mathcal{A}^\mathcal{I}$ is an isomorphism. We prove that $\mathcal{A}^\mathcal{I}$ is isomorphic to the right hand side of the desired isomorphism. Since $\mathcal{C}(\mathbb{C})$ consists of compactly supported functions, $\mathcal{C}(\mathbb{C})$ can be regarded as a subspace of $L^2(\mathrm{GL}_d(F) \backslash \mathrm{GL}_d(\mathbb{A})/F_\infty^\times)$. It follows from Lemma~\ref{7_lem1} (2) that $H_{d-1}(X_{\lim,\bullet},\mathbb{C}) \otimes_\mathbb{Q} \mathrm{St}_d$ is an admissible representation of $\mathrm{GL}_d(\mathbb{A})$. Hence $\mathcal{A}$ is also an admissible representation of $\mathrm{GL}_d(\mathbb{A})$. 
Since $\mathcal{A}$ is an admissible subrepresentation of $L^2(\mathrm{GL}_d(F) \backslash \mathrm{GL}_d(\mathbb{A})/F_\infty^\times)$, it follows that $\mathcal{A}$ is contained in a discrete spectrum of $L^2(\mathrm{GL}_d(F) \backslash \mathrm{GL}_d(\mathbb{A})/F_\infty^\times)$ and is a direct sum of irreducible admissible representations. Let $\pi \subset \mathcal{A}$ be an irreducible subrepresentation. It follows from the construction of $\mathcal{A}$ that the component $\pi_\infty$ at $\infty$ of $\pi$ is isomorphic to the Steinberg representation $\mathrm{St}_d$. It follows from the classification (\cite[p.606, Th\'eor\`eme]{MW}) of the discrete spectrum of $L^2(\mathrm{GL}_d(F) \backslash \mathrm{GL}_d(\mathbb{A})/F_\infty^\times)$ that $\pi$ does not belong to the residual spectrum. Hence $\pi$ is an irreducible cuspidal automorphic representation. It follows from the multiplicity one theorem that $\mathcal{A}$ is isomorphic to the direct sum of the irreducible subrepresentations of $\mathcal{A}$. Hence to prove the claim, it suffices to show that any cuspidal irreducible automorphic subrepresentation $\pi$ of $L^2(\mathrm{GL}_d(F) \backslash \mathrm{GL}_d(\mathbb{A})/F_\infty^\times)$ whose component $\pi_\infty$ at $\infty$ is isomorphic to $\mathrm{St}_d$ is contained in $\mathcal{A}$. It is essentially proved in \cite[Theorem 1.2.1]{Harder} (cf. \cite[p.16]{Laumon2}) that the support of a cusp form on $\mathrm{GL}_d(\mathbb{A})$ is compact modulo center. It follows that $\pi$ is a subspace of $\mathcal{C}(\mathbb{C})$. Let us write $\pi = \pi^\infty \otimes \pi_\infty$. Since $\pi_\infty$ is isomorphic to $\mathrm{St}_d$, there exists, for any vector $v \in \pi^\infty$, a $\mathrm{GL}_d(F_\infty)$-equivariant homomorphism $\mathrm{St}_d \to \mathcal{C}(\mathbb{C})$ whose image contains $\mathbb{Q} v \otimes \mathrm{St}_d$. Hence it follows from Corollary~\ref{cor:Steinberg8} that $\mathbb{Q} v \otimes \mathrm{St}_d$ is contained in $\mathcal{A}$. Therefore $\pi$ is a subrepresentation of $\mathcal{A}$. This proves the claim. \end{proof} \begin{remark} \label{rmk:HMW} We can prove Proposition~\ref{prop:66_3} also by using the argument in \cite{Harder}: it follows that $H_{d-1}(X_{\lim,\bullet},\mathbb{C})$ is isomorphic to the subspace $H$ of $L^2(\mathrm{GL}_d(F) F_\infty^\times \backslash \mathrm{GL}_d(\mathbb{A}))$ spanned by the subrepresentations whose component at $\infty$ is isomorphic to the Steinberg representation. Then the classification \cite[p.606, Th\'eor\`eme]{MW} shows that any constituent of $H$ is an irreducible cuspidal automorphic representation. \end{remark} \section{The Borel-Moore homology of an arithmetic quotient} \label{sec:BMquot} The goal of this section is to prove the following theorem. \begin{thm}\label{7_prop1} Let $\pi = \pi^\infty \otimes \pi_\infty$ be an irreducible smooth representation of $\mathrm{GL}_d(\mathbb{A})$ such that $\pi^\infty$ appears as a subquotient of $H_{d-1}^\mathrm{BM}(X_{\lim,\bullet},\mathbb{C})$. Then there exist an integer $r \ge 1$, a partition $d=d_1 + \cdots + d_r$ of $d$, and irreducible cuspidal automorphic representations $\pi_i$ of $\mathrm{GL}_{d_i}(\mathbb{A})$ for $i=1,\ldots,d$ which satisfy the following properties: \begin{itemize} \item For each $i$ with $0 \le i \le r$, the component $\pi_{i,\infty}$ at $\infty$ of $\pi_i$ is isomorphic to the Steinberg representation of $\mathrm{GL}_{d_i}(F_\infty)$. \item Let us write $\pi_i = \pi_i^\infty \otimes \pi_{i,\infty}$. 
Let $P \subset \mathrm{GL}_d$ denote the standard parabolic subgroup corresponding to the partition $d=d_1 + \cdots + d_r$. Then $\pi^\infty$ is isomorphic to a subquotient of the unnormalized parabolic induction $\mathrm{Ind}_{P(\mathbb{A}^\infty)}^{\mathrm{GL}_d(\mathbb{A}^\infty)} \pi_1^\infty \otimes \cdots \otimes \pi_r^\infty$. \end{itemize} Moreover for any subquotient $H$ of $H_{d-1}^\mathrm{BM}(X_{\lim,\bullet},\mathbb{C})$ which is of finite length as a representation of $\mathrm{GL}_d(\mathbb{A}^\infty)$, the multiplicity of $\pi$ in $H$ is at most one. \end{thm} \begin{remark} Any open compact subgroup of $\mathrm{GL}_d(\mathbb{A}^\infty)$ is conjugate to an open subgroup of $\mathrm{GL}_d(\wh{A})$. The set of the open subgroups of $\mathrm{GL}_d(\wh{A})$ is cofinal in the inductive system of all open compact subgroups of $\mathrm{GL}_d(\mathbb{A}^\infty)$. Therefore, to prove Theorem~\ref{7_prop1}, we may without loss of generality assume that the group $\mathbb{K}$ is contained in $\mathrm{GL}_d(\wh{A})$, and we may replace the inductive limit $\varinjlim_{\mathbb{K}}$ in the definition of $H_{d-1}^\mathrm{BM}(X_{\lim,\bullet},M)$ and $H_{d-1}(X_{\lim,\bullet},M)$ with the inductive limit $\varinjlim_{\mathbb{K}\subset \mathrm{GL}_d(\wh{A})}$. \end{remark} From now on until the end of this section, we exclusively deal with the subgroups $\mathbb{K} \subset \mathrm{GL}_d(\mathbb{A}^\infty)$ contained in $\mathrm{GL}_d(\wh{A})$. The notation $\varinjlim_{\mathbb{K}}$ henceforth means the inductive limit $\varinjlim_{\mathbb{K}\subset \mathrm{GL}_d(\wh{A})}$. \subsection{Chains of locally free $\mathcal{O}_C$-modules} \label{sec:locfree} \ In this paragraph, we introduce some terminology for locally free $\mathcal{O}_C$-modules of rank $d$ and then describe the sets of simplices of $X_{\mathbb{K},\bullet}$ in terms of chains of locally free $\mathcal{O}_C$-modules of rank $d$. The terminology here will be used in our proof of Theorem~\ref{7_prop1}. Let $\eta : \mathrm{Spec}\, F \to C$ denote the generic point of $C$. For each $g \in \mathrm{GL}_d(\mathbb{A}^\infty)$ and an $\mathcal{O}_\infty$-lattice $L_\infty \subset \mathcal{O}_\infty^{\oplus d}$, we denote by $\mathcal{F}[g,L_{\infty}]$ the $\mathcal{O}_C$-submodule of $\eta_* F^{\oplus d}$ characterized by the following properties: \begin{itemize} \item $\mathcal{F}[g,L_{\infty}]$ is a locally free $\mathcal{O}_C$-module of rank $d$. \item $\Gamma(\mathrm{Spec}\, A, \mathcal{F}[g,L_{\infty}])$ is equal to the $A$-submodule $\wh{A}^{\oplus d} g^{-1} \cap F^{\oplus d}$ of $F^{\oplus d} = \Gamma(\mathrm{Spec}\, A,\eta_* F^{\oplus d})$. \item Let $\iota_\infty$ denote the morphism $\mathrm{Spec}\, \mathcal{O}_\infty \to C$. Then $\Gamma(\mathrm{Spec}\, \mathcal{O}_\infty, \iota_\infty^* \mathcal{F}[g,L_{\infty}])$ is equal to the $\mathcal{O}_\infty$-submodule $L_{\infty}$ of $F_\infty^{\oplus d} = \Gamma(\mathrm{Spec}\, \mathcal{O}_\infty, \iota_\infty^* \eta_* F^{\oplus d})$. \end{itemize} Let $\mathcal{F}$ be a locally free $\mathcal{O}_C$-modules of rank $d$. Let $I \subset A$ be a non-zero ideal. We regard the $A$-module $A/I$ as a coherent $\mathcal{O}_C$-module of finite length. A level $I$-structure on $\mathcal{F}$ is a surjective homomorphism $\mathcal{F} \to (A/I)^{\oplus d}$ of $\mathcal{O}_C$-modules. Let $\mathbb{K}^\infty_I \subset \mathrm{GL}_d(\wh{A})$ be the kernel of the homomorphism $\mathrm{GL}_d(\wh{A}) \to \mathrm{GL}_d(A/I)$. 
The group $\mathrm{GL}_d(A/I) \cong \mathrm{GL}_d(\wh{A})/\mathbb{K}^\infty_I$ acts from the left on the set of level $I$-structures on $\mathcal{F}$, via its left action on $(A/I)^{\oplus d}$. (We regard $(A/I)^{\oplus d}$ as an $A$-module of row vectors. The left action of $\mathrm{GL}_d(A/I)$ on $(A/I)^{\oplus d}$ is described as $g\cdot b = b g^{-1}$ for $g\in \mathrm{GL}_d(A/I)$, $b\in (A/I)^{\oplus d}$.) For a subgroup $\mathbb{K} \subset \mathrm{GL}_d(\wh{A})$ containing $\mathbb{K}_I^\infty$, a level $\mathbb{K}$-structure on $\mathcal{F}$ is a $\mathbb{K}/\mathbb{K}^\infty_I$-orbit of level $I$-structures on $\mathcal{F}$. For an open subgroup $\mathbb{K} \subset \mathrm{GL}_d(\wh{A})$, the set of level $\mathbb{K}$-structures on $\mathcal{F}$ does not depend, up to canonical isomorphisms, on the choice of an ideal $I$ with $\mathbb{K}_I^\infty \subset \mathbb{K}$. Let $\mathbb{K} \subset \mathrm{GL}_d(\wh{A})$ be an open subgroup. Let $(g,\sigma)$ be an $i$-simplex of $\wt{X}_{\mathbb{K},\bullet}$. Take a chain $\cdots \supsetneqq L_{-1} \supsetneqq L_0 \supsetneqq L_1 \supsetneqq \cdots$ of $\mathcal{O}_{\infty}$-lattices of $F_{\infty}^{\oplus d}$ which represents $\sigma$. To $(g,\sigma)$ we associate the chain $\cdots \supsetneqq \mathcal{F}[g,L_{-1}] \supsetneqq \mathcal{F}[g,L_0] \supsetneqq \mathcal{F}[g,L_1] \supsetneqq \cdots$ of $\mathcal{O}_C$-submodules of $\eta_* F^{\oplus d}$. Then the set of $i$-simplices in $\wt{X}_{\mathbb{K},\bullet}$ is identified with the set of the equivalences classes of chains $\cdots \supsetneqq \mathcal{F}_{-1} \supsetneqq \mathcal{F}_0 \supsetneqq \mathcal{F}_1 \supsetneqq \cdots$ of $\mathcal{O}_{\infty}$-lattices of locally free $\mathcal{O}_C$-submodules of rank $d$ of $\eta_* \eta^* \mathcal{O}_C^{\oplus d}$ with a level $\mathbb{K}$-structure such that $\mathcal{F}_{j-i-1}$ equals the twist $\mathcal{F}_{j}(\infty)$ as an $\mathcal{O}_C$-submodule of $\eta_* F^{\oplus d}$ with a level $\mathbb{K}$-structure for every $j\in \mathbb{Z}$. Two chains $\cdots \supsetneqq \mathcal{F}_{-1} \supsetneqq \mathcal{F}_0 \supsetneqq \mathcal{F}_1 \supsetneqq \cdots$ and $\cdots \supsetneqq \mathcal{F}'_{-1} \supsetneqq \mathcal{F}'_0 \supsetneqq \mathcal{F}'_1 \supsetneqq \cdots$ are equivalent if and only if there exists an integer $l$ such that $\mathcal{F}_{j} = \mathcal{F}'_{j+l}$ as an $\mathcal{O}_C$-submodule of $\eta_* F^{\oplus d}$ with a level structure for every $j\in \mathbb{Z}$. Let $g\in \mathrm{GL}_d(\mathbb{A}^\infty)$ and let $L_\infty$ be an $\mathcal{O}_\infty$-lattice of $F_\infty^{\oplus d}$. For $\gamma \in \mathrm{GL}_d(F)$, the two $\mathcal{O}_C$-submodules $\mathcal{F}[g,L_\infty]$ and $\mathcal{F}[\gamma g,\gamma L_\infty]$ are isomorphic as $\mathcal{O}_C$-modules. The set of $i$-simplices in $X_{\mathbb{K},\bullet}$ is identified with the set of the equivalence classes of chains $\cdots \hookrightarrow \mathcal{F}_{1} \hookrightarrow \mathcal{F}_0 \hookrightarrow \mathcal{F}_{-1} \hookrightarrow \cdots$ of injective non-isomorphisms of locally free $\mathcal{O}_C$-modules of rank $d$ with a level $\mathbb{K}$-structure such that the image of $\mathcal{F}_{j+i+1}\to \mathcal{F}_j$ equals the image of the canonical injection $\mathcal{F}_{j}(-\infty)\hookrightarrow \mathcal{F}_j$ for every $j\in \mathbb{Z}$. 
Two chains $\cdots \hookrightarrow \mathcal{F}_{1} \hookrightarrow \mathcal{F}_0 \hookrightarrow \mathcal{F}_{-1} \hookrightarrow \cdots$ and $\cdots \hookrightarrow \mathcal{F}'_{1} \hookrightarrow \mathcal{F}'_0 \hookrightarrow \mathcal{F}'_{-1} \hookrightarrow \cdots$ are equivalent if and only if there exists an integer $l$ and an isomorphism $\mathcal{F}_{j} \cong \mathcal{F}'_{j+l}$ of $\mathcal{O}_C$-modules with level structures for every $j\in \mathbb{Z}$ such that the diagram $$ \begin{CD} \cdots @>>> \mathcal{F}_{1} @>>> \mathcal{F}_0 @>>> \mathcal{F}_{-1} @>>> \cdots \\ @. @V{\cong}VV @V{\cong}VV @V{\cong}VV @. \\ \cdots @>>> \mathcal{F}'_{l+1} @>>> \mathcal{F}'_l @>>> \mathcal{F}'_{l-1} @>>> \cdots \end{CD} $$ is commutative. \subsection{Harder-Narasimhan polygons} \label{sec:HN} Let $\mathcal{F}$ be a locally free $\mathcal{O}_C$-module of rank $r$. For an $\mathcal{O}_C$-submodule $\mathcal{F}' \subset \mathcal{F}$ (note that $\mathcal{F}'$ is automatically locally free), we set $z_{\mathcal{F}}(\mathcal{F}') = (\mathrm{rank}(\mathcal{F}'),\deg(\mathcal{F}')) \in \mathbb{Q}^2$. It is known that there exists a unique convex, piecewise affine, affine on $[i-1,i]$ for $i=1,\ldots, r$, continuous function $p_{\mathcal{F}} :[0,r] \to \mathbb{R}$ on the interval $[0,r]$ such that the convex hull of the set $\{z_{\mathcal{F}}(\mathcal{F}')\ |\ \mathcal{F}' \subset \mathcal{F}\}$ in $\mathbb{R}^2$ equals $\{(x,y)\ |\ 0\le x\le r,\, y\le p_{\mathcal{F}}(x) \}$. We define the function $\Delta p_{\mathcal{F}}: \{1,\ldots,d-1\} \to \mathbb{R}$ as $\Delta p_{\mathcal{F}}(i) = 2 p_{\mathcal{F}}(i) - p_{\mathcal{F}}(i-1) - p_{\mathcal{F}}(i+1)$. Then $\Delta p_{\mathcal{F}}(i) \ge 0$ for all $i$. We note that for an invertible $\mathcal{O}_C$-module $\mathcal{L}$, $\Delta p_{\mathcal{F} \otimes \mathcal{L}}$ equals $\Delta p_{\mathcal{F}}$. The theory of Harder-Narasimhan filtration (\cite{HN}) implies that, if $i \in \mathrm{Supp}\,(\Delta p_{\mathcal{F}}) = \{ i\ |\ \Delta p_{\mathcal{F}}(i)>0 \}$, then there exists a unique $\mathcal{O}_C$-submodule $\mathcal{F}' \subset \mathcal{F}$ satisfying $z_{\mathcal{F}}(\mathcal{F}')=(i,p_{\mathcal{F}}(i))$. We denote this $\mathcal{O}_C$-submodule $\mathcal{F}'$ by $\mathcal{F}_{(i)}$. The submodule $\mathcal{F}_{(i)}$ has the following properties. \begin{itemize} \item If $i, j \in \mathrm{Supp}\,(\Delta p_{\mathcal{F}})$ with $i\le j$, then $\mathcal{F}_{(i)}\subset \mathcal{F}_{(j)}$ and $\mathcal{F}_{(j)}/\mathcal{F}_{(i)}$ is locally free. \item If $i \in \mathrm{Supp}\,(\Delta p_{\mathcal{F}})$, then $p_{\mathcal{F}_{(i)}}(x) = p_{\mathcal{F}}(x)$ for $x \in [0,i]$ and $p_{\mathcal{F}/\mathcal{F}_{(i)}}(x-i) =p_{\mathcal{F}}(x) - \deg(\mathcal{F}_{(i)})$ for $x \in [i,r]$. \end{itemize} \begin{lem}\label{7_diffdeg} Let $\mathcal{F}$ be a locally free $\mathcal{O}_C$-module of finite rank, and let $\mathcal{F}'\subset \mathcal{F}$ be a $\mathcal{O}_C$-submodule of the same rank. Then we have $0 \le p_{\mathcal{F}}(i) -p_{\mathcal{F}'}(i) \le \deg(\mathcal{F})-\deg(\mathcal{F}')$ for $i =1,\ldots,\mathrm{rank}(\mathcal{F})-1$. \end{lem} \begin{proof} Immediate from the definition of $p_{\mathcal{F}}$. \end{proof} \subsection{} \label{sec:props} In this paragraph, we state two propositions (Propositions~\ref{7_prop1b} and~\ref{7_prop2}). The proofs are given in Sections~\ref{7_prop1} and~\ref{7_prop1b}. 
Given a subset $\mathcal{D} \subset \{1,\ldots,d-1\}$ and a real number $\alpha > 0$, we define the simplicial subcomplex $X_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}}$ of $X_{\mathbb{K},\bullet}$ as follows: A simplex of $X_{\mathbb{K},\bullet}$ belongs to $X_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}}$ if and only if each of its vertices is represented by a locally free $\mathcal{O}_C$-module $\mathcal{F}$ of rank $d$ with a level $\mathbb{K}$-structure such that $\Delta p_{\mathcal{F}}(i) \ge \alpha$ holds for every $i \in \mathcal{D}$. Let $X_{\mathbb{K},\bullet}^{(\alpha)}$ denote the union $X_{\mathbb{K},\bullet}^{(\alpha)} = \bigcup_{\mathcal{D} \neq \emptyset} X_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}}$. \begin{lem}\label{7_lem:finiteness} For any $\alpha >0$, the set of the simplices in $X_{\mathbb{K},\bullet}$ not belonging to $X_{\mathbb{K},\bullet}^{(\alpha)}$ is finite. \end{lem} \begin{proof} Let $\mathcal{P}$ denote the set of continuous, convex functions $p':[0,d]\to \mathbb{R}$ with $p'(0)=0$ such that $p'(i)\in \mathbb{Z}$ and $p'$ is affine on $[i-1,i]$ for $i=1,\ldots,d$. It is known that for any $r \ge 1$ and $f \in \mathbb{Z}$, there are only a finite number of isomorphism classes of semi-stable locally free $\mathcal{O}_C$-modules of rank $r$ with degree $f$. Hence by the theory of Harder-Narasimhan filtration, for any $p' \in \mathcal{P}$, the set of the isomorphism classes of locally free $\mathcal{O}_C$-modules $\mathcal{F}$ with $p_{\mathcal{F}} = p'$ is finite. Let us give an action of the group $\mathbb{Z}$ on the set $\mathcal{P}$, by setting $(a\cdot p')(x)=p'(x)+ a\deg(\infty)x$ for $a \in \mathbb{Z}$ and for $p' \in \mathcal{P}$. Then $p_{\mathcal{F}(a \infty)}= a\cdot p_{\mathcal{F}}$ for any $a \in \mathbb{Z}$ and for any locally free $\mathcal{O}_C$-module $\mathcal{F}$ of rank $d$. For $\alpha >0$ let $\mathcal{P}^{(\alpha)} \subset \mathcal{P}$ denote the set of functions $p' \in \mathcal{P}$ with $2p'(i)- p'(i-1)-p'(i+1) \le \alpha$ for each $i \in \{1,\ldots, d-1\}$. An elementary argument shows that the quotient $\mathcal{P}^{(\alpha)}/\mathbb{Z}$ is a finite set, whence the claim follows. \end{proof} Lemma~\ref{7_lem:finiteness} implies that $H_{d-1}^\mathrm{BM}(X_{\mathbb{K},\bullet},\mathbb{Q})$ is canonically isomorphic to the projective limit $\varprojlim_{\alpha >0} H_{d-1}(X_{\mathbb{K},\bullet},X_{\mathbb{K},\bullet}^{(\alpha)}; \mathbb{Q})$ and $H_c^{d-1}(X_{\mathbb{K},\bullet},\mathbb{Q})$ is canonically isomorphic to the inductive limit $\varinjlim_{\alpha >0} H^{d-1}(X_{\mathbb{K},\bullet},X_{\mathbb{K},\bullet}^{(\alpha)}; \mathbb{Q})$. Thus we have an exact sequence \begin{equation}\label{7_cohseq} \varinjlim_{\alpha >0} H^{d-2}(X^{(\alpha)}_{\mathbb{K},\bullet},\mathbb{Q}) \to H_c^{d-1}(X_{\mathbb{K},\bullet},\mathbb{Q}) \to H^{d-1}(X_{\mathbb{K},\bullet},\mathbb{Q}) \to \varinjlim_{\alpha >0} H^{d-1}(X^{(\alpha)}_{\mathbb{K},\bullet},\mathbb{Q}). \end{equation} \begin{prop}\label{7_prop1b} For $\alpha' \ge \alpha >(d-1)\deg(\infty)$, the homomorphisms $H^*(X^{(\alpha)}_{\mathbb{K},\bullet},\mathbb{Q}) \to H^*(X^{(\alpha')}_{\mathbb{K},\bullet},\mathbb{Q})$ are isomorphisms. \end{prop} We give a proof of Proposition~\ref{7_prop1b} in Section~\ref{sec:pf1b}. 
\begin{lem}\label{7_betag} For every $g \in \mathrm{GL}_d(\mathbb{A}^\infty)$ satisfying $g^{-1}\mathbb{K} g \subset \mathrm{GL}_d(\wh{A})$, there exists a real number $\beta_g \ge 0$ such that the isomorphism $\xi_g:X_{\mathbb{K},\bullet} \xto{\cong} X_{g^{-1}\mathbb{K} g,\bullet}$ sends $X_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}}$ to $X_{g^{-1} \mathbb{K} g,\bullet}^{(\alpha -\beta_g),\mathcal{D}} \subset X_{g^{-1}\mathbb{K} g,\bullet}$ for all $\alpha > \beta_g$, for all open compact $\mathbb{K} \subset \mathrm{GL}_d(\mathbb{A}^\infty)$, and for all nonempty subset $\mathcal{D} \subset \{1,\ldots,d-1 \}$. \end{lem} \begin{proof} Take two elements $a,b \in \mathbb{A}^{\infty \times} \cap \wh{A}$ such that both $ag$ and $bg^{-1}$ lie in $\mathrm{GL}_d(\mathbb{A}^\infty)\cap \mathrm{Mat}_d(\wh{A})$. Then for any $h \in \mathrm{GL}_d(\mathbb{A}^\infty)$ we have $a \wh{A}^{\oplus d}h^{-1} \subset \wh{A}^{\oplus d}g^{-1} h^{-1} \subset b^{-1} \wh{A}^{\oplus d}h^{-1}$. This implies that, for any vertex $x \in X_{\mathbb{K},0}$, if we take suitable representatives $\mathcal{F}_x$, $\mathcal{F}_{\xi_{g}(x)}$ of the equivalence classes of locally free $\mathcal{O}_C$-modules corresponding to $x$, $\xi_{g}(x)$, then there exists a sequence of injections $\mathcal{F}_x(-\mathrm{div}(a)) \hookrightarrow \mathcal{F}_{\xi_g(x)} \hookrightarrow \mathcal{F}_x(\mathrm{div}(b))$. Applying Lemma~\ref{7_diffdeg}, we see that there exists a positive real number $m_g>0$ not depending on $x$ such that $|p_{\mathcal{F}_x}(i) - p_{\mathcal{F}_{\xi_{g}(x)}}(i)| < m_g$ for all $i$. Hence the claim follows. \end{proof} Thus the group $\mathrm{GL}_d(\mathbb{A}^\infty)$ acts on $\varinjlim_{\mathbb{K}} \varinjlim_{\alpha > 0} H^{*}(X^{(\alpha),\mathcal{D}}_{\mathbb{K},\bullet},\mathbb{Q})$ in such a way that the exact sequence (\ref{7_cohseq}) is $\mathrm{GL}_d(\mathbb{A}^\infty)$-equivariant. We use a covering spectral sequence \begin{equation}\label{7_specseq} E_1^{p,q} = \bigoplus_{\sharp \mathcal{D} = p+1} H^q(X^{(\alpha),\mathcal{D}}_{\mathbb{K},\bullet},\mathbb{Q}) \mathbb{R}ightarrow H^{p+q}(X^{(\alpha)}_{\mathbb{K},\bullet},\mathbb{Q}) \end{equation} with respect to the covering $X_{\mathbb{K},\bullet}^{(\alpha)} = \bigcup_{1\le i \le d-1} X_{\mathbb{K},\bullet}^{(\alpha),\{i\}}$ of $X_{\mathbb{K},\bullet}^{(\alpha)}$. For $\alpha' \ge \alpha>0$, the inclusion $X_{\mathbb{K},\bullet}^{(\alpha,\mathcal{D})} \to X_{\mathbb{K},\bullet}^{(\alpha',\mathcal{D})}$ induces a morphism of spectral sequences. Taking the inductive limit, we obtain the spectral sequence $$ E^{p,q}_1 = \bigoplus_{\sharp \mathcal{D} = p+1} \varinjlim_{\alpha} H^q(X^{(\alpha),\mathcal{D}}_{\mathbb{K},\bullet},\mathbb{Q}) \mathbb{R}ightarrow \varinjlim_{\alpha} H^{p+q}(X^{(\alpha)}_{\mathbb{K},\bullet},\mathbb{Q}). $$ For $g \in \mathrm{GL}_d(\mathbb{A}^\infty)$ satisfying $g^{-1}\mathbb{K} g \subset \mathrm{GL}_d(\wh{A})$, let $\beta_g$ be as in Lemma~\ref{7_betag}. Then for $\alpha >\beta_g$ the isomorphism $\xi_g :X_{\mathbb{K},\bullet}\xto{\cong} X_{g\mathbb{K} g^{-1},\bullet}$ induces a homomorphism from the spectral sequence (\ref{7_specseq}) for $X_{\mathbb{K},\bullet}^{(\alpha)}$ to that for $X_{\mathbb{K},\bullet}^{(\alpha -\beta_g)}$. 
Passing to the inductive limit with respect to $\alpha$ and then passing to the inductive limit with respect to $\mathbb{K}$, we obtain a left action of the group $\mathrm{GL}_d(\mathbb{A}^\infty)$ on the spectral sequence \begin{equation}\label{7_limspecseq} E^{p,q}_1 = \bigoplus_{\sharp \mathcal{D} = p+1} \varinjlim_{\mathbb{K}} \varinjlim_{\alpha} H^q(X^{(\alpha),\mathcal{D}}_{\mathbb{K},\bullet},\mathbb{Q}) \Rightarrow \varinjlim_{\mathbb{K}} \varinjlim_{\alpha} H^{p+q}(X^{(\alpha)}_{\mathbb{K},\bullet},\mathbb{Q}). \end{equation} For a subset $\mathcal{D}$ of $\{1,\ldots,d-1\}$, we define the algebraic groups $P_{\mathcal{D}}$, $N_{\mathcal{D}}$ and $M_{\mathcal{D}}$ as follows. We write $\mathcal{D}= \{i_1,\ldots,i_{r-1} \}$, with $i_0=0 < i_1 < \cdots < i_{r-1} <i_r=d$ and set $d_j = i_j -i_{j-1}$ for $j=1,\ldots,r$. We define $P_{\mathcal{D}}$, $N_{\mathcal{D}}$ and $M_{\mathcal{D}}$ as the standard parabolic subgroup of $\mathrm{GL}_d$ of type $(d_1,\ldots,d_r)$, the unipotent radical of $P_{\mathcal{D}}$, and the quotient group $P_{\mathcal{D}}/N_{\mathcal{D}}$, respectively. We identify the group $M_{\mathcal{D}}$ with $\mathrm{GL}_{d_1}\times\cdots \times \mathrm{GL}_{d_r}$. \begin{prop}\label{7_prop2} Let the notation be as above. Then as a smooth $\mathrm{GL}_d(\mathbb{A}^\infty)$-module, $\varinjlim_{\mathbb{K}} \varinjlim_{\alpha >0} H^*(X_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}},\mathbb{Q})$ is isomorphic to $$ \mathrm{Ind}_{P_{\mathcal{D}}(\mathbb{A}^\infty)}^{\mathrm{GL}_d(\mathbb{A}^\infty)} \bigotimes_{j=1}^r \varinjlim_{\mathbb{K}_j \subset \mathrm{GL}_{d_j}(\wh{A})} H^*(X_{\mathrm{GL}_{d_j},\mathbb{K}_j,\bullet},\mathbb{Q}), $$ where the group $P_{\mathcal{D}}(\mathbb{A}^\infty)$ acts on ${\displaystyle \bigotimes_{j=1}^r \varinjlim_{\mathbb{K}_j\subset \mathrm{GL}_{d_j}(\wh{A})}} H^*(X_{\mathrm{GL}_{d_j},\mathbb{K}_j,\bullet},\mathbb{Q})$ via the quotient $P_{\mathcal{D}}(\mathbb{A}^\infty) \to M_{\mathcal{D}}(\mathbb{A}^\infty) = \prod_j \mathrm{GL}_{d_j}(\mathbb{A}^\infty)$, and $\mathrm{Ind}_{P_{\mathcal{D}}(\mathbb{A}^\infty)}^{\mathrm{GL}_d(\mathbb{A}^\infty)}$ denotes the parabolic induction unnormalized by the modulus function. \end{prop} The proof will be given in Section~\ref{sec:pf2}. \subsection{Proof of Theorem~\ref{7_prop1}} \label{sec:pfthm} Here we give a proof of Theorem~\ref{7_prop1} assuming Propositions~\ref{7_prop1b} and~\ref{7_prop2}. \begin{proof}[Proof of Theorem~\ref{7_prop1}] Let $\mathbb{K}, \mathbb{K}' \subset \mathrm{GL}_d(\mathbb{A}^\infty)$ be two compact open subgroups with $\mathbb{K}' \subset \mathbb{K}$. The pull-back morphism from the cochain complex of $X_{\mathbb{K},\bullet}$ to that of $X_{\mathbb{K}',\bullet}$ preserves the cochains with finite support. Thus we have pull-back homomorphisms $H^*_c(X_{\mathbb{K},\bullet},\mathbb{Q}) \to H^*_c(X_{\mathbb{K}',\bullet},\mathbb{Q})$ which are compatible with the usual pull-back homomorphisms $H^*(X_{\mathbb{K},\bullet},\mathbb{Q}) \to H^*(X_{\mathbb{K}',\bullet},\mathbb{Q})$. For an abelian group $M$, we let $H^*(X_{\lim,\bullet},M)=H^*(X_{\mathrm{GL}_d,\lim,\bullet},M)$ and $H_c^*(X_{\lim,\bullet},M)=H_c^*(X_{\mathrm{GL}_d,\lim,\bullet},M)$ denote the inductive limits $\varinjlim_{\mathbb{K}}H^*(X_{\mathbb{K},\bullet},M)$ and $\varinjlim_{\mathbb{K}}H_c^*(X_{\mathbb{K},\bullet},M)$, respectively.
If $M$ is a $\mathbb{Q}$-vector space, then for each compact open subgroup $\mathbb{K} \subset \mathrm{GL}_d(\mathbb{A}^\infty)$, the homomorphism $H^*(X_{\mathbb{K},\bullet},M) \to H^*(X_{\lim,\bullet},M)$ is injective and its image is equal to the $\mathbb{K}$-invariant part $H^*(X_{\lim,\bullet},M)^{\mathbb{K}}$ of $H^*(X_{\lim,\bullet},M)$. A similar statement holds for $H_c^*$. It follows from Lemma~\ref{7_lem1} that the inductive limits $H^{d-1}(X_{\lim,\bullet},\mathbb{Q})$ and $H_c^{d-1}(X_{\lim,\bullet},\mathbb{Q})$ are admissible $\mathrm{GL}_d(\mathbb{A}^\infty)$-modules, and are isomorphic to the contragredients of $H_{d-1}(X_{\lim,\bullet},\mathbb{Q})$ and $H_{d-1}^\mathrm{BM}(X_{\lim,\bullet},\mathbb{Q})$, respectively. Since $\mathrm{St}_d$ is self-contragredient, it follows from the compatibility of the normalized parabolic inductions with taking contragredients that it suffices to prove that any irreducible subquotient of $H^{d-1}_c(X_{\lim,\bullet},\mathbb{C})$ satisfies the properties in the statement of Theorem~\ref{7_prop1}. Let $\pi$ be an irreducible subquotient of $H^{d-1}_c(X_{\lim,\bullet},\mathbb{C})$. Then Proposition~\ref{7_prop2} combined with the spectral sequence (\ref{7_limspecseq}) shows that there exists a subset $\mathcal{D} \subset \{1,\ldots,d-1\}$ such that $\pi^\infty$ is isomorphic to a subquotient of $\mathrm{Ind}_{P_{\mathcal{D}}(\mathbb{A}^\infty)}^{\mathrm{GL}_d(\mathbb{A}^\infty)} \bigotimes_{j=1}^r \varinjlim_{\mathbb{K}_j} H^{d_j -1}(X_{\mathrm{GL}_{d_j},\mathbb{K}_j,\bullet},\mathbb{C})$. Here $r=\sharp \mathcal{D} +1$, and $d_1,\ldots,d_{r} \ge 1$ are the integers satisfying $\mathcal{D}=\{d_1,d_1+d_2,\ldots, d_1+\cdots+d_{r-1} \}$ and $d_1 +\cdots + d_{r} =d$. By Proposition~\ref{prop:66_3}, $\pi^\infty$ is isomorphic to a subquotient of the non-$\infty$-component of the induced representation from $P_{\mathcal{D}}(\mathbb{A})$ to $\mathrm{GL}_d(\mathbb{A})$ of an irreducible cuspidal automorphic representation $\pi_1 \otimes \cdots \otimes \pi_r$ of $M_{\mathcal{D}}(\mathbb{A})$ whose component at $\infty$ is isomorphic to the tensor product of the Steinberg representations. It remains to prove the claim concerning the multiplicity. The Ramanujan-Petersson conjecture proved by Lafforgue shows that, for each place $v$ of $F$, the representation $\pi_{i,v}$ is tempered. Hence for almost all places $v$ of $F$, the representation $\pi_v$ of $\mathrm{GL}_d(F_v)$ is unramified and its associated Satake parameters $\alpha_{v,1},\cdots ,\alpha_{v,d}$ have the following property: for each $i$ with $1 \le i \le r$, exactly $d_i$ parameters of $\alpha_{v,1},\cdots ,\alpha_{v,d}$ have the complex absolute value $q_v^{a_i/2}$ where $q_v$ denotes the cardinality of the residue field at $v$ and $a_i = \sum_{i<j\le r} d_j - \sum_{1 \le j <i} d_j$. This shows that the subset $\mathcal{D}$ is uniquely determined by $\pi$. It follows from the multiplicity one theorem and the strong multiplicity one theorem that the cuspidal automorphic representation $\pi_1 \otimes \cdots \otimes \pi_r$ of $M_{\mathcal{D}}(\mathbb{A})$ is also uniquely determined by $\pi$. Hence it suffices to show that the representation $\mathrm{Ind}_{P_{\mathcal{D}}(F_v)}^{\mathrm{GL}_d(F_v)} \pi_{1,v} \otimes \cdots \otimes \pi_{r,v}$ of $\mathrm{GL}_d(F_v)$ is multiplicity free for every place $v$ of $F$. For $1 \le i \le r$, let $\Delta_i$ denote the multiset of segments corresponding to the representation $\pi_{i,v}\otimes |\det(\ )|_v^{a_i/2}$ in the sense of \cite{Zelevinsky}.
We denote by $\Delta_i^t$ the Zelevinsky dual of $\Delta_i$. Let $i_1, i_2$ be integers with $1 \le i_1 < i_2 \le r$ and suppose that there exist a segment in $\Delta_{i_1}^t$ and a segment in $\Delta_{i_2}^t$ which are linked. Since $\pi_{i_1,v}$ and $\pi_{i_2,v}$ are tempered, it follows that $i_2 = i_1 +1$ and that there exists a character $\chi$ of $F_v^\times$ such that both $\pi_{i_1,v} \otimes \chi$ and $\pi_{i_2,v} \otimes \chi$ are the Steinberg representations. In this case the multiset $\Delta_{i_j}^t$ consists of a single segment for $j=1,2$ and the unique segment in $\Delta_{i_1}^t$ and the unique segment in $\Delta_{i_2}^t$ are juxtaposed. Thus the claim is obtained by applying the formula in \cite[9.13, Proposition, p.201]{Zelevinsky}. \end{proof} \subsection{Proof of Proposition~\ref{7_prop1b}} \label{sec:pf1b} We need some preparation. \begin{lem}\label{7_HNdiff} Let $\mathcal{F}$ be a locally free $\mathcal{O}_C$-module of rank $d$. Let $\mathcal{F}' \subset \mathcal{F}$ be an $\mathcal{O}_C$-submodule of the same rank. Suppose that $\Delta p_{\mathcal{F}}(i) > \deg(\mathcal{F}) -\deg(\mathcal{F}')$. Then we have $\mathcal{F}'_{(i)} = \mathcal{F}_{(i)} \cap \mathcal{F}'$. \end{lem} \begin{proof} It suffices to prove that $\mathcal{F}'_{(i)}\subset \mathcal{F}_{(i)}$. Assume otherwise. Let us consider the short exact sequence $$ 0\to \mathcal{F}'_{(i)}\cap \mathcal{F}_{(i)} \to \mathcal{F}'_{(i)} \to \mathcal{F}'_{(i)}/(\mathcal{F}'_{(i)}\cap \mathcal{F}_{(i)}) \to 0. $$ Let $r$ denote the rank of $\mathcal{F}'_{(i)}\cap \mathcal{F}_{(i)}$. By assumption, $r$ is strictly smaller than $i$. Hence $$ \begin{array}{rl} \deg(\mathcal{F}'_{(i)}) & = \deg(\mathcal{F}'_{(i)}\cap \mathcal{F}_{(i)}) + \deg(\mathcal{F}'_{(i)}/(\mathcal{F}'_{(i)}\cap \mathcal{F}_{(i)})) \\ & \le p_{\mathcal{F}}(r) + p_{\mathcal{F}/\mathcal{F}_{(i)}}(i-r) \\ & \le p_{\mathcal{F}}(i) - (i-r)(p_{\mathcal{F}}(i)-p_{\mathcal{F}}(i-1)) + (i-r)(p_{\mathcal{F}}(i+1) -p_{\mathcal{F}}(i)) \\ & = \deg(\mathcal{F}_{(i)}) - (i-r) \Delta p_{\mathcal{F}}(i) \\ & < \deg(\mathcal{F}_{(i)}) - (\deg(\mathcal{F})-\deg(\mathcal{F}')). \end{array} $$ On the other hand, Lemma~\ref{7_diffdeg} shows that $\deg(\mathcal{F}_{(i)}\cap \mathcal{F}')\ge \deg(\mathcal{F}_{(i)}) - (\deg(\mathcal{F})-\deg(\mathcal{F}'))$. This is a contradiction. \end{proof} Let $\mathrm{Flag}_{\mathcal{D}}$ denote the set $$ \mathrm{Flag}_{\mathcal{D}} = \{ f= [0 \subset V_1 \subset \cdots \subset V_{r-1} \subset F^{\oplus d}] \ |\ \dim(V_j)=i_j \} $$ of flags in $F^{\oplus d}$. Let $\wt{X}_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}}$ denote the inverse image of $X_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}}$ by the morphism $\wt{X}_{\mathbb{K},\bullet} \to X_{\mathbb{K},\bullet}$. For $f = [0 \subset V_1 \subset \cdots \subset V_{r-1} \subset F^{\oplus d}]\in \mathrm{Flag}_{\mathcal{D}}$, let $\wt{X}_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D},f}$ denote the simplicial subcomplex of $\wt{X}_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}}$ consisting of the simplices in $\wt{X}_{\mathbb{K},\bullet}$ whose representative $\cdots \supsetneqq \mathcal{F}_{-1} \supsetneqq \mathcal{F}_0 \supsetneqq \mathcal{F}_1 \supsetneqq \cdots$ satisfies $\mathcal{F}_{l,(i_j)} = \mathcal{F}_l \cap \eta_* V_{i_j}$ for every $l\in\mathbb{Z}$, $j=1,\ldots,r-1$.
Lemma~\ref{7_HNdiff} implies that, for $\alpha > (d-1)\deg(\infty)$, $\wt{X}_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}}$ is decomposed into a disjoint union $\wt{X}_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}} = \coprod_{f \in \mathrm{Flag}_{\mathcal{D}}} \wt{X}_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D},f}$. An argument similar to that in the proof of Lemma~\ref{7_betag} shows that, for each $g \in \mathrm{GL}_d(\mathbb{A}^\infty)$ satisfying $g^{-1}\mathbb{K} g \subset \mathrm{GL}_d(\wh{A})$, there exists a real number $\beta'_g > \beta_g$ such that the isomorphism $\wt{\xi}_g$ sends $\wt{X}_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D},f}$ to $\wt{X}_{g^{-1} \mathbb{K} g,\bullet}^{(\alpha-\beta_g),\mathcal{D},f} \subset \wt{X}_{g^{-1} \mathbb{K} g,\bullet}$ for $\alpha > \beta'_g$ and for any $f \in \mathrm{Flag}_{\mathcal{D}}$. For $\gamma \in \mathrm{GL}_d(F)$, the action of $\gamma$ on $\wt{X}_{\mathbb{K},\bullet}$ sends $\wt{X}_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D},f}$ bijectively to $\wt{X}_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D},\gamma f}$. Let $f_0 = [0 \subset F^{\oplus i_1}\oplus \{0\}^{\oplus d-i_1} \subset \cdots \subset F^{\oplus i_{r-1}} \oplus \{0\}^{\oplus d -i_{r-1}} \subset F^{\oplus d}]\in \mathrm{Flag}_{\mathcal{D}}$ be the standard flag. The group $\mathrm{GL}_d(F)$ acts transitively on $\mathrm{Flag}_{\mathcal{D}}$ and its stabilizer at $f_0$ equals $P_{\mathcal{D}}(F)$. Hence for $\alpha > (d-1)\deg(\infty)$, $X_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}}$ is isomorphic to the quotient $P_{\mathcal{D}}(F)\backslash \wt{X}_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D},f_0}$. For $g \in \mathrm{GL}_d(\mathbb{A}^\infty)$, we set $$ \wt{Y}_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D},g} = \wt{X}_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D},f_0} \cap (P_{\mathcal{D}}(\mathbb{A}^\infty)g/(g^{-1}P_{\mathcal{D}}(\mathbb{A}^\infty)g \cap \mathbb{K}) \times \mathcal{B}T_{\bullet}) $$ and $Y_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D},g} = P_{\mathcal{D}}(F)\backslash \wt{Y}_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D},g}$. We omit the superscript $g$ on $\wt{Y}_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D},g}$ and $Y_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D},g}$ if $g=1$. If we take a complete set $T \subset \mathrm{GL}_d(\mathbb{A}^\infty)$ of representatives of $P_{\mathcal{D}}(\mathbb{A}^\infty)\backslash \mathrm{GL}_d(\mathbb{A}^\infty)$, then we have $\wt{X}_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D},f_0} = \coprod_{g \in T} \wt{Y}_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D},g}$. For each $g \in \mathrm{GL}_d(\mathbb{A}^\infty)$ satisfying $g^{-1}\mathbb{K} g \subset \mathrm{GL}_d(\wh{A})$, there exists a real number $\beta'_g > \beta_g$ such that the isomorphism $\wt{\xi}_g$ sends $\wt{Y}_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D},g'}$ to $\wt{Y}_{g^{-1} \mathbb{K} g,\bullet}^{(\alpha-\beta_g),\mathcal{D},g'g} \subset \wt{X}_{g^{-1} \mathbb{K} g,\bullet}$ for $\alpha > \beta'_g$ and for any $g' \in \mathrm{GL}_d(\mathbb{A}^\infty)$. Hence, as a smooth $\mathrm{GL}_d(\mathbb{A}^\infty)$-module, $\varinjlim_{\mathbb{K}} \varinjlim_{\alpha} H^*(X_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}},\mathbb{Q})$ is isomorphic to $\mathrm{Ind}_{P_{\mathcal{D}}(\mathbb{A}^\infty)}^{\mathrm{GL}_d(\mathbb{A}^\infty)} \varinjlim_{\mathbb{K}} \varinjlim_{\alpha} H^* (Y_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}},\mathbb{Q})$.
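For instance (purely as an illustration), for $d=3$ and $\mathcal{D}=\{1\}$ one has $r=2$ and $(d_1,d_2)=(1,2)$; the set $\mathrm{Flag}_{\mathcal{D}}$ is the set of lines in $F^{\oplus 3}$, the standard flag $f_0$ is the line spanned by the first standard basis vector, and $P_{\mathcal{D}}(F)$ is its stabilizer, the standard parabolic subgroup of type $(1,2)$.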
\begin{lem}\label{7_contract} For any $g \in \mathrm{GL}_d(\mathbb{A}^\infty)$, the simplicial complex $\wt{X}_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D},f_0} \cap (\{ g\mathbb{K} \}\times \mathcal{B}T_{\bullet})$ is non-empty and contractible. \end{lem} \begin{proof} Since $\wt{X}_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D},f_0} \cap (\{ g\mathbb{K} \}\times \mathcal{B}T_{\bullet})$ is isomorphic to $\wt{X}_{\mathrm{GL}_d(\wh{A}),\bullet}^{(\alpha),\mathcal{D},f_0} \cap (\{ g\mathrm{GL}_d(\wh{A}) \}\times \mathcal{B}T_{\bullet})$, we may assume that $\mathbb{K} = \mathrm{GL}_d(\wh{A})$. We set $X=\wt{X}_{\mathrm{GL}_d,\mathrm{GL}_d(\wh{A}),\bullet}^{(\alpha),\mathcal{D},f_0} \cap (\{ g\mathrm{GL}_d(\wh{A}) \}\times \mathcal{B}T_{\mathrm{GL}_d,\bullet})$. We proceed by induction on $d$, in a manner similar to that in the proof of Theorem~4.1 of \cite{Gra}. Let $i\in \mathcal{D}$ be the minimal element and set $d'=d-i$. We define the subset $\mathcal{D}' \subset \{1,\ldots,d'-1 \}$ as $\mathcal{D}' = \{ i' -i\ |\ i' \in \mathcal{D}, i'\neq i \}$. We define $f'_0 \in \mathrm{Flag}_{\mathcal{D}'}$ as the image of the flag $f_0$ in $F^{\oplus d}$ with respect to the projection $F^{\oplus d} \twoheadrightarrow F^{\oplus d}/(F^{\oplus i}\oplus \{0\}^{\oplus d'}) \cong F^{\oplus d'}$. Take an element $g' \in \mathrm{GL}_{d'}(\mathbb{A}^\infty)$ such that the quotient $\wh{A}^{\oplus d} g^{-1}/ (\wh{A}^{\oplus d}g^{-1}\cap (\mathbb{A}^{\infty \oplus i} \oplus \{0\}^{\oplus d'}))$ equals $\wh{A}^{\oplus d'}g^{\prime -1}$ as an $\wh{A}$-lattice of $\mathbb{A}^{\infty \oplus d'}$. We set $X'=\wt{X}_{\mathrm{GL}_{d'}, \mathrm{GL}_{d'}(\wh{A}),\bullet}^{(\alpha),\mathcal{D}',f'_0} \cap (\{ g' \mathrm{GL}_{d'}(\wh{A}) \}\times \mathcal{B}T_{\mathrm{GL}_{d'},\bullet})$ if $\mathcal{D}'$ is non-empty. Otherwise we set $X' = \wt{X}_{\mathrm{GL}_{d'}, \mathrm{GL}_{d'}(\wh{A}),\bullet} \cap (\{ g' \mathrm{GL}_{d'}(\wh{A}) \}\times \mathcal{B}T_{\mathrm{GL}_{d'},\bullet})$. By the induction hypothesis, $|X'|$ is contractible. There is a canonical morphism $h:X \to X'$ which sends an $\mathcal{O}_C$-submodule $\mathcal{F}[g,L_\infty]$ of $\eta_* F^{\oplus d}$ to the $\mathcal{O}_C$-submodule $\mathcal{F}[g,L_\infty]/\mathcal{F}[g,L_\infty]_{(i)}$ of $\eta_* F^{\oplus d'}$. Let $\epsilon : \mathrm{Vert}(X) \to \mathbb{Z}$ and $\epsilon' : \mathrm{Vert}(X') \to \mathbb{Z}$ denote the maps which send a locally free $\mathcal{O}_C$-module $\mathcal{F}$ to the integer $[p_{\mathcal{F}}(1)/\deg(\infty)]$. We fix an $\mathcal{O}_C$-submodule $\mathcal{F}_0$ of $\eta_* F^{\oplus d}$ whose equivalence class belongs to $X$. By twisting $\mathcal{F}_0$ by some power of $\mathcal{O}_C(\infty)$ if necessary, we may assume that $p_{\mathcal{F}_0}(i) - p_{\mathcal{F}_0}(i-1) > \alpha$. We fix a splitting $\mathcal{F}_0 = \mathcal{F}_{0,(i)}\oplus \mathcal{F}'_0$. This splitting induces an isomorphism $\varphi : \eta_* \eta^* \mathcal{F}'_0 \cong \eta_* F^{\oplus d'}$. Let $h' : X' \to X$ denote the morphism which sends an $\mathcal{O}_C$-submodule $\mathcal{F}'$ of $\eta_* \eta^* F^{\oplus d'}$ to the $\mathcal{O}_C$-submodule $\mathcal{F}_{0,(i)}(\epsilon'(\mathcal{F}')\infty) \oplus \varphi^{-1}(\mathcal{F}')$ of $\eta_* F^{\oplus d}$. For each $n \in \mathbb{Z}$, define a morphism $G_n : X \to X$ by sending an $\mathcal{O}_C$-submodule $\mathcal{F}$ of $\eta_* \eta^* F^{\oplus d}$ to the $\mathcal{O}_C$-submodule $\mathcal{F}_{0,(i)}((n+\epsilon(\mathcal{F}))\infty) + \mathcal{F}$ of $\eta_* F^{\oplus d}$.
Then the argument in~\cite[p. 85--86]{Gra} shows that $f$ and $|h'|\circ|h| \circ f$ are homotopic for any map $f:Z \to |X|$ from a compact space $Z$ to $|X|$. Since the map $|h'|\circ|h| \circ f$ factors through the contractible space $|X'|$, $f$ is null-homotopic. Hence $|X|$ is contractible. \end{proof} \begin{proof}[Proof of Proposition~\ref{7_prop1b}] For any simplex $\sigma$ in $\wt{X}_{\mathbb{K},\bullet}$, the isotropy group $\Gamma_\sigma \subset \mathrm{GL}_d(F)$ is finite, as remarked in Section~\ref{sec:71}. Hence by Lemma~\ref{7_contract}, both $H^*(Y_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D},g},\mathbb{Q})$ and $H^*(Y_{\mathbb{K},\bullet}^{(\alpha'),\mathcal{D},g},\mathbb{Q})$ are canonically isomorphic to the same group $H^*(P_{\mathcal{D}}(F), \mathrm{Map}(P_{\mathcal{D}}(\mathbb{A}^\infty)g/(g^{-1}P_{\mathcal{D}}(\mathbb{A}^\infty)g \cap \mathbb{K}),\mathbb{Q}))$ for any non-empty subset $\mathcal{D} \subset \{1,\ldots, d-1\}$ and for $g \in \mathrm{GL}_d(\mathbb{A}^\infty)$. This shows that $H^*(X_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}},\mathbb{Q}) \to H^*(X_{\mathbb{K},\bullet}^{(\alpha'),\mathcal{D}},\mathbb{Q})$ is an isomorphism. Since the homomorphisms between the $E_1$-terms of the spectral sequences (\ref{7_specseq}) for $\alpha$ and for $\alpha'$ are isomorphisms, $H^*(X_{\mathbb{K},\bullet}^{(\alpha)},\mathbb{Q}) \to H^*(X_{\mathbb{K},\bullet}^{(\alpha')},\mathbb{Q})$ is an isomorphism. \end{proof} \subsection{Proof of Proposition \ref{7_prop2}} \label{sec:pf2} For $j=1,\ldots,r$, let $\mathbb{K}_j \subset \mathrm{GL}_{d_j}(\mathbb{A}^\infty)$ denote the image of $\mathbb{K} \cap P_{\mathcal{D}}(\mathbb{A}^\infty)$ by the composition $P_{\mathcal{D}}(\mathbb{A}^\infty) \to M_{\mathcal{D}}(\mathbb{A}^\infty) \to \mathrm{GL}_{d_j}(\mathbb{A}^\infty)$. We define the continuous map $\wt{\pi}_{\mathcal{D},j}: |\wt{Y}_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}}| \to |\wt{X}_{\mathrm{GL}_{d_j},\mathbb{K}_j,\bullet}|$ of topological spaces in the following way. Let $\sigma$ be an $i$-simplex in $\wt{Y}_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}}$. Take a chain $\cdots \supsetneqq \mathcal{F}_{-1} \supsetneqq \mathcal{F}_0 \supsetneqq \mathcal{F}_1 \supsetneqq \cdots$ of $\mathcal{O}_C$-modules representing $\sigma$. For $l\in \mathbb{Z}$ we set $\mathcal{F}_{l,j} = \mathcal{F}_{l,(i_j)}/\mathcal{F}_{l,(i_{j-1})}$, which is an $\mathcal{O}_C$-submodule of $\eta_* F^{\oplus d_j}$. We set $S_j = \{ l \in \mathbb{Z} \ |\ \mathcal{F}_{l,j}\neq \mathcal{F}_{l+1,j} \}$. Define the map $\psi_j : \mathbb{Z} \to S_j$ as $\psi_j(l) = \min \{ l'\ge l\ |\ l' \in S_j \}$. Take an order-preserving bijection $\varphi_j :S_j \xto{\cong} \mathbb{Z}$. For $l \in\mathbb{Z}$ set $\mathcal{F}'_{l} =\mathcal{F}_{\varphi_j^{-1}(l), j}$. Then the chain $\cdots \supsetneqq \mathcal{F}'_{-1} \supsetneqq \mathcal{F}'_0 \supsetneqq \mathcal{F}'_1 \supsetneqq \cdots$ defines a simplex $\sigma'$ in $\wt{X}_{\mathrm{GL}_{d_j},\mathbb{K}_j,\bullet}$. We define a continuous map $|\sigma| \to |\sigma'|$ as the affine map sending the vertex of $\sigma$ corresponding to $\mathcal{F}_l$ to the vertex of $\sigma'$ corresponding to $\mathcal{F}'_{\varphi_j \circ \psi_j(l)}$. Gluing these maps, we obtain a continuous map $\wt{\pi}_{\mathcal{D},j}: |\wt{Y}_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}}| \to |\wt{X}_{\mathrm{GL}_{d_j},\mathbb{K}_j,\bullet}|$.
We set $\wt{\pi}_{\mathcal{D}} = (\wt{\pi}_{\mathcal{D},1},\ldots,\wt{\pi}_{\mathcal{D},r}) :|\wt{Y}_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}}| \to \prod_{j=1}^{r} |\wt{X}_{\mathrm{GL}_{d_j},\mathbb{K}_j,\bullet}|$. This continuous map descends to the continuous map $\pi_{\mathcal{D}}: |Y_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}}| \to \prod_{j=1}^{r} |X_{\mathrm{GL}_{d_j},\mathbb{K}_j,\bullet}|$. If $g \in P_{\mathcal{D}}(\mathbb{A}^\infty)$ and $g^{-1}\mathbb{K} g \subset \mathrm{GL}_d(\wh{A})$, then the isomorphism $\xi_g :X_{\mathbb{K},\bullet} \xto{\cong} X_{g^{-1}\mathbb{K} g,\bullet}$ sends $Y_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}}$ inside $Y_{g^{-1}\mathbb{K} g,\bullet}^{(\alpha -\beta_g),\mathcal{D}}$. If we denote by $(g_1,\ldots,g_r)$ the image of $g$ in $M_{\mathcal{D}}(\mathbb{A}^\infty) = \prod_{j=1}^r \mathrm{GL}_{d_j}(\mathbb{A}^\infty)$, then the diagram $$ \begin{CD} |Y_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}}| @>{\xi_g}>> |Y_{g^{-1}\mathbb{K} g,\bullet}^{(\alpha-\beta_g),\mathcal{D}}| \\ @V{\pi_{\mathcal{D}}}VV @V{\pi_{\mathcal{D}}}VV \\ \prod_{j=1}^r |X_{\mathrm{GL}_{d_j},\mathbb{K}_j,\bullet}| @>{(\xi_{g_1},\ldots,\xi_{g_r})}>> \prod_{j=1}^r |X_{\mathrm{GL}_{d_j},g_j^{-1} \mathbb{K}_j g_j,\bullet}| \end{CD} $$ is commutative. With the notations as above, suppose that the open compact subgroup $\mathbb{K} \subset \mathrm{GL}_d(\mathbb{A}^\infty)$ has the following property. \begin{equation} \label{7_property} \text{the homomorphism } P_{\mathcal{D}}(\mathbb{A}^\infty)\cap \mathbb{K} \to \mathbb{K}_1 \times \cdots \times \mathbb{K}_r \text{ is surjective.} \end{equation} For a simplicial complex $X$, we set $I_X = \mathrm{Map}(\pi_0(X),\mathbb{Q})$, where $\pi_0(X)$ is the set of the connected components of $X$. Let us consider the following commutative diagram. \begin{equation}\label{7_CD} \begin{CD} H^*(M_{\mathcal{D}}(F), \mathrm{Map}(\prod_{j=1}^r \pi_0(X_{\mathrm{GL}_{d_j},\mathbb{K}_j,\bullet}),\mathbb{Q})) @>>> H^*(P_{\mathcal{D}}(F), I_{Y_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}}}) \\ @VVV @VVV \\ H^*_{M_{\mathcal{D}}(F)} (\prod_{j=1}^r |\wt{X}_{\mathrm{GL}_{d_j},\mathbb{K}_j,\bullet}|,\mathbb{Q}) @>>> H^*_{P_{\mathcal{D}}(F)}(|Y_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}}|,\mathbb{Q})\\ @AAA @AAA \\ H^*(\prod_{j=1}^r |X_{\mathrm{GL}_{d_j},\mathbb{K}_j,\bullet}|,\mathbb{Q}) @>>> H^*(|Y_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}}|,\mathbb{Q}). \end{CD} \end{equation} Here $H^*_{M_{\mathcal{D}}(F)}$ and $H^*_{P_{\mathcal{D}}(F)}$ denote the equivariant cohomology groups. \begin{prop}\label{7_prop3} All homomorphisms in the above diagram (\ref{7_CD}) are isomorphisms. \end{prop} \begin{proof} We prove that the upper horizontal arrow and the four vertical arrows are isomorphisms. First we consider the upper horizontal arrow. \begin{lem}\label{thislemma} For $q\ge 1$, the group $H^q (N_{\mathcal{D}}(F), I_{Y_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}}})$ is zero. \end{lem} \begin{proof}[Proof of Lemma \ref{thislemma}] For each $x \in N_{\mathcal{D}}(F) \backslash \pi_0(Y_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}})$, take a lift $\wt{x} \in Y_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}}$ of $x$ and let $N_x \subset N_{\mathcal{D}}(F)$ denote the stabilizer of $\wt{x}$. 
Then the group $H^*(N_{\mathcal{D}}(F), I_{Y_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}}})$ is isomorphic to the direct product $$\prod_{x \in N_{\mathcal{D}}(F) \backslash \pi_0(Y_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}})} H^*(N_x ,\mathbb{Q}).$$ We note that the group $N_{\mathcal{D}}(F)$ is a union $N_{\mathcal{D}}(F) = \bigcup_{i} U_i$ of finite subgroups of $p$-power order where $p$ is the characteristic of $F$. (This follows easily from \cite[p.2, 1.A.2 Lemma]{KeWe} or from \cite[p.60, 1.L.1 Theorem]{KeWe}.) Hence $N_x = \bigcup_{i} (U_i \cap N_x)$. Therefore, for an $N_x$-module $M$, $H^{j}(N_x,M) =0$ for $j\ge 1$ if the projective system $(H^0(U_i \cap N_x ,M))_i$ satisfies the Mittag-Leffler condition. In particular we have $H^{j}(N_x,\mathbb{Q}) =0$ for $j\ge 1$. Hence the claim follows. \end{proof} We note that $\pi_0(X_{\mathrm{GL}_{d_j},\mathbb{K}_j,\bullet})$ is canonically isomorphic to $\mathrm{GL}_{d_j}(\mathbb{A}^\infty)/\mathbb{K}_j$ for $j=1,\ldots,r$, and Lemma~\ref{7_contract} implies that $I_{Y_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}}}$ is canonically isomorphic to $\mathrm{Map}(P_{\mathcal{D}}(\mathbb{A}^\infty)/ (P_{\mathcal{D}}(\mathbb{A}^\infty) \cap \mathbb{K}),\mathbb{Q})$. Since $N_{\mathcal{D}}(F)$ is dense in $N_{\mathcal{D}}(\mathbb{A}^\infty)$, the group $H^0(N_{\mathcal{D}}(F), I_{Y_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}}})$ is canonically isomorphic to the group $\mathrm{Map}(M_{\mathcal{D}}(\mathbb{A}^\infty)/\prod_{j=1}^r \mathbb{K}_j,\mathbb{Q})$. Hence the upper horizontal arrow of the diagram (\ref{7_CD}) is an isomorphism. Next we consider the four vertical arrows. Each connected component of $\wt{X}_{\mathrm{GL}_{d_j},g_j^{-1} \mathbb{K}_j g_j,\bullet}$ is contractible since it is isomorphic to the Bruhat-Tits building for $\mathrm{GL}_{d_j}$. Recall that the simplicial complex $X_{\mathrm{GL}_{d_j},g_j^{-1} \mathbb{K}_j g_j,\bullet}$ is the quotient of $\wt{X}_{\mathrm{GL}_{d_j},g_j^{-1} \mathbb{K}_j g_j,\bullet}$ by the action of $\mathrm{GL}_{d_j}(F)$. For any simplex $\sigma$ in $\wt{X}_{\mathrm{GL}_{d_j},g_j^{-1} \mathbb{K}_j g_j,\bullet}$, the isotropy group $\Gamma_\sigma \subset \mathrm{GL}_{d_j}(F)$ of $\sigma$ is finite, as remarked in Section~\ref{sec:71}. Hence the left two vertical arrows in the diagram (\ref{7_CD}) are isomorphisms. Similarly, bijectivity of the two right vertical arrows in the diagram (\ref{7_CD}) follows from Lemma~\ref{7_contract}. This completes the proof of Proposition~\ref{7_prop3}. \end{proof} \begin{proof}[Proof of Proposition~\ref{7_prop2}] Let us consider the lower horizontal arrow in the diagram (\ref{7_CD}). By Proposition~\ref{7_prop3} it is an isomorphism. We note that the compact open subgroups $\mathbb{K} \subset \mathrm{GL}_d(\wh{A})$ with property (\ref{7_property}) form a cofinal subsystem of the inductive system of all open compact subgroups of $\mathrm{GL}_d(\mathbb{A}^\infty)$. Therefore, passing to the inductive limits with respect to $\alpha$ and $\mathbb{K}$ with property (\ref{7_property}), we have $\varinjlim_\mathbb{K} \varinjlim_\alpha H^*(Y_{\mathbb{K},\bullet}^{(\alpha),\mathcal{D}},\mathbb{Q}) \cong \bigotimes_{j=1}^r \varinjlim_{\mathbb{K}_j} H^*(X_{\mathrm{GL}_{d_j},\mathbb{K}_j,\bullet},\mathbb{Q})$ as desired.
\end{proof} Satoshi Kondo \\ Kavli Institute for the Physics and Mathematics of the Universe \\ The University of Tokyo \\ 5-1-5 Kashiwanoha, Kashiwa, 277-8583, Japan \\ Tel.: +81-4-7136-4940\\ Fax: +81-4-7136-4941\\ [email protected] \\ \noindent Seidai Yasuda \\ Department of Mathematics, Graduate School of Science, Osaka University \\ [email protected] \end{document}
\begin{document} \begin{abstract} We show that the class of representable substitution algebras is characterized by a set of universal first order sentences. In addition, it is shown that a necessary and sufficient condition for a substitution algebra to be representable is that it is embeddable in a substitution algebra in which elements are distinguished. Furthermore, conditions in terms of neat embeddings are shown to be equivalent to representability. \end{abstract} \title{FUNCTIONAL REPRESENTATION OF SUBSTITUTION ALGEBRAS} \section{Introduction} In [1], the problem of axiomatizing polynomial substitution algebras was considered. It was shown that a set of first order sentences and a non-first order condition of local finiteness characterizes the class of isomorphs of these algebras. From Theorem 3.1 in [1], it follows that every locally finite substitution algebra is isomorphic to a function substitution algebra and hence, is representable. In this section we recall some of the definitions from [1]. Let $\alpha$ be an ordinal and let $F_\alpha(U)$ be the set of functions from $U^{\alpha}$, the set of $\alpha$-sequences of members of $U$, to $U$, a non-empty set. \begin{Definition} For $s \in U^{\alpha}$, let $s\langle \kappa, x\rangle \in U^{\alpha}$ be defined by $s\langle \kappa, x\rangle_\lambda = s_\lambda$ if $\lambda \neq \kappa$ and $s\langle \kappa, x\rangle_\lambda = x$ if $ \lambda = \kappa$. \end{Definition} \begin{Definition} For $\kappa<\alpha$, define a binary operation on $F_\alpha(U)$ by \[ (f*_\kappa g)(s)=g(s\langle \kappa,f(s)\rangle) \] for $f, g\in F_\alpha(U)$. \end{Definition} Let $V_\kappa\in F_\alpha(U)$ be defined by $V_\kappa\left(s\right)=s_\kappa$ for $s\in U^{\alpha}$. \begin{Definition} An algebra of the form $\mathfrak{A}=\langle A, *_\kappa, V_\kappa\rangle_{\kappa<\alpha}$ where $A\subseteq F_\alpha(U)$, $A$ is closed under $*_\kappa$, and $V_\kappa\in A$ for $\kappa<\alpha$, is a function substitution algebra of dimension $\alpha$ ($FSA_\alpha$). If $A=F_\alpha(U)$, then $\mathfrak{A}$ is said to be the full function substitution algebra of dimension $\alpha$ with base $U$. \end{Definition} \begin{Definition} An algebra $\mathfrak{A}=\langle A, *_\kappa, v_\kappa\rangle_{\kappa<\alpha}$ satisfying the following first order axiom schema is called a substitution algebra of dimension $\alpha$ ($SA_\alpha$): (1) For all $x$, $x*_\kappa v_\kappa=x$. (2) For all $x$, $x*_\kappa v_\lambda=v_\lambda$ for $\kappa \neq \lambda$. (3) For all $x$, $v_\kappa*_\kappa x=x$. (4) For all $x$ and $z$, if $w*_\lambda x=x$, for all $w$, then $x*_\lambda \left(v_\lambda *_\kappa z \right)=x*_\lambda \left(x *_\kappa z \right)$. (5) For all $x$, $y$, and $z$, $\left(x *_\kappa y \right)*_\kappa z=x *_\kappa \left(y*_\kappa z \right)$. (6) For all $x$, $y$, and $z$, if $w*_\lambda x=x$, for all $w$, then \[ x *_\kappa \left(y*_\lambda z \right) = \left(x *_\kappa y \right)*_\lambda \left(x*_\kappa z \right) \textrm{, for } \kappa \neq \lambda. \] \end{Definition} It is easily shown that every $FSA_\alpha$ is an $SA_\alpha$. In [1] it was shown that the class of $SA_\alpha$'s can be axiomatized by a set of universal sentences and hence, a subalgebra of an $SA_\alpha$ is an $SA_\alpha$. \begin{Definition} Let $\mathfrak{A}=\langle A, *_\kappa, v_\kappa\rangle_{\kappa<\alpha}$ be an $SA_\alpha$. For $x \in A$, let \[ \Delta _\mathfrak{A}x=\{\kappa:a*_\kappa x\neq x, \textrm{ for some } a\in A \}. \] $\Delta _\mathfrak{A}x$ is called the dimension set of $x$.
\end{Definition} We usually drop the subscript and write $\Delta x$ for $\Delta _\mathfrak{A}x$ when there is no ambiguity. \begin{Definition} A substitution algebra $\mathfrak{A}=\langle A, *_\kappa, v_\kappa\rangle_{\kappa<\alpha}$ is locally finite if $\Delta x$ is finite for all $x \in A$. \end{Definition} For an arbitrary algebra $\mathfrak{B}$, the polynomial substitution algebra over $\mathfrak{B}$ of dimension $\alpha$ is a locally finite $SA_\alpha$. It is, in fact, an $FSA_\alpha$. In [1], it was shown that every locally finite $SA_\alpha$ is isomorphic to some polynomial substitution algebra. The question arises: Is every $SA_\alpha$ isomorphic to an $FSA_\alpha$? Call such an algebra a representable substitution algebra ($RSA_\alpha$). In section 3, we show that the class of $RSA_\alpha$'s can be characterized by a set of first order universal sentences; that is, the class $RSA_\alpha$ is a $UC_\Delta$. \begin{Definition} Elements are distinguished in an $SA_\alpha$ if for all $\kappa < \alpha$, $x\neq y$ implies $c*_\kappa x \neq c*_\kappa y$ for some $c$ with $\Delta c=0$. \end{Definition} In section 4, we show that an $SA_\alpha$ is an $RSA_\alpha$ iff it is embeddable in an $SA_\alpha$ in which elements are distinguished. At this time, we do not know if there is an $SA_\alpha$ which is not an $RSA_\alpha$. Unless otherwise stated, the universe of a substitution algebra will be denoted by the Roman letter corresponding to the Gothic letter used to denote the algebra. For example, the universes of $\mathfrak{A}$, $\mathfrak{B}_i$, and $\mathfrak{C}'$ are $A$, $B_i$, and $C'$ respectively. Filters will be used in several places. Let $\mathcal{F}$ be a filter on a set $I$ and let $X_i$ be a set for $i\in I$. Let $\thicksim$ be the equivalence relation on the product $W=\prod \langle X_i:i\in I \rangle$ defined by $x \thicksim y$ iff $\{i:i \in I \textrm{ and } x_i = y_i\} \in \mathcal{F}$ for $x, y \in W$. Throughout the paper the equivalence class containing $x$ is denoted by $[x]$. \section{$\Gamma$-homomorphisms} \begin{Definition} Let $\mathfrak{A}$ be an $SA_\alpha$. For a subset $\Gamma$ of $\alpha$, let \[ Z(\Gamma)=\{x:x \in A \textrm{ and } \Delta x \cap \Gamma=0\}. \] \end{Definition} It follows that $Z(\alpha)=\{x:x \in A \textrm{ and } \Delta x=0\}$. In this case, let $Z=Z(\alpha)$. In [1], the notion of generalized substitutions was defined by \[ s*_{(\Gamma)}a=s_{\kappa_{n-1}} *_{\kappa_{n-1}}(\cdots(s_{\kappa_1} *_{\kappa_1}(s_{\kappa_0}*_{\kappa_0}a))\cdots) \] where $s\in Z^\alpha$, $a\in A$ and $\Gamma=\{\kappa_0,\cdots,\kappa_{n-1}\}$ with $\kappa_0<\cdots<\kappa_{n-1}$. We can let $s\in A^\alpha$ in this definition. With suitable restrictions on $s$, we obtain theorems analogous to Theorems 4.1 and 4.2 of [1]. We state below the parts of these theorems needed for this paper. The proofs are almost identical to their counterparts in [1]. The following theorem shows that a generalized substitution $s*_{(\Gamma)}a$ is independent of the order of the elements in $\Gamma$. \begin{Theorem} If $s\in Z(\Gamma)^\alpha$, $a\in A$, and $\Gamma=\{\lambda_0,\cdots ,\lambda_{n-1}\}$, then \[ s*_{(\Gamma)}a=s_{\lambda_{n-1}} *_{\lambda_{n-1}}(\cdots(s_{\lambda_1} *_{\lambda_1}(s_{\lambda_0}*_{\lambda_0}a))\cdots). \] \end{Theorem} \begin{Theorem} Let $\Gamma$ and $\Sigma$ be finite subsets of $\alpha$. (i) If $s\in Z(\Gamma)^\alpha$ and $\kappa \in \Gamma$, then $s*_{(\Gamma)}a=s*_{(\Gamma-\{\kappa\})}(s_\kappa*_\kappa a)$.
(ii) If $s\in Z(\Gamma)^\alpha$, then $\Delta (s*_{(\Gamma)} a)\subseteq \bigcup\{\Delta s_\kappa: \kappa \in\Gamma\}\cup(\Delta a-\Gamma)$. (iii) If $s\in Z(\Gamma)^\alpha$, $x \in Z(\Gamma)$, and $\kappa \in \Gamma$, then $s\langle \kappa,x\rangle*_{(\Gamma)}a=x*_\kappa(s*_{(\Gamma-\{\kappa\})}a)$. (iv) If $s\in Z(\Gamma\cup\Sigma\cup\{\kappa\})^\alpha$, and $\kappa \notin \Sigma$, then \[ (s*_{(\Gamma)}a)*_\kappa(s*_{(\Sigma)}b)=s*_{(\Gamma \cap \Sigma)}((s*_{(\Gamma-\Sigma)}a)*_\kappa (s*_{(\Sigma-\Gamma)}b)). \] \end{Theorem} \begin{Definition} Let $\mathfrak{A}=\langle A,*_\kappa,v_\kappa \rangle_{\kappa<\alpha}$ and $\mathfrak{A}'=\langle A',*'_\kappa,v'_\kappa \rangle_{\kappa<\alpha}$ be $SA_\alpha$'s, and let $\Gamma\subseteq \alpha$. $\phi$ is a $\Gamma$-homomorphism from $\mathfrak{A}$ into $\mathfrak{A}'$ if $\phi:A\to A'$ and for all $\kappa \in \Gamma$ and all $a,b \in A$, $\phi(a*_\kappa b)=\phi(a)*'_\kappa \phi(b)$ and $\phi(v_\kappa)=v_\kappa'$. \end{Definition} \begin{Theorem} Let $\mathfrak{A}$ be an $SA_\alpha$ and let $\Gamma$ be a finite subset of $\alpha$. Define a function $\phi_\Gamma:A\to F_\alpha(Z(\Gamma))$ by $\phi_\Gamma(a)(s)=s*_{(\Gamma)}a$. Then $\phi_\Gamma$ is a $\Gamma$-homomorphism from $\mathfrak{A}$ to the full $FSA_\alpha$ with base $Z(\Gamma)$. \begin{proof} First note that Theorem 2.3 (ii) implies that $s*_{(\Gamma)} a \in Z(\Gamma)$ for $s \in Z(\Gamma)^\alpha$ and $a \in A$. It is easily checked that $\phi_\Gamma(v_\kappa)=V_\kappa$ for $\kappa \in \Gamma$ where $V_\kappa(s)=s_\kappa$ for $s\in Z(\Gamma)^\alpha$. Let $\kappa \in \Gamma$, $a, b \in A$, and $s \in Z(\Gamma)^\alpha$. We then have \begin{eqnarray*} (\phi_\Gamma(a)*_\kappa \phi_\Gamma(b))(s) & = & \phi_\Gamma(b)(s\langle \kappa, \phi_\Gamma(a)(s)\rangle)\\ & = & s\langle \kappa,s*_{(\Gamma)}a\rangle*_{(\Gamma)}b\\ & = & (s*_{(\Gamma)}a)*_\kappa(s*_{(\Gamma-\{\kappa\})}b) \textrm{ by Theorem 2.3 (ii), (iii)}\\ & = & s*_{(\Gamma-\{\kappa\})}((s_\kappa *_\kappa a)*_\kappa b) \textrm{ by Theorem 2.3 (iv)}\\ & = & s*_{(\Gamma-\{\kappa\})}(s_\kappa *_\kappa(a*_\kappa b))\\ & = & s*_{(\Gamma)}(a*_\kappa b) \textrm{ by Theorem 2.3 (i)}\\ & = & \phi_\Gamma(a*_\kappa b)(s). \end{eqnarray*} \end{proof} \end{Theorem} \section{A representation theorem and ultraproducts} \begin{Definition} Let $\mathfrak{A}$ be an $SA_\alpha$. Elements are strongly distinguished in $\mathfrak{A}$ if for all $a, b \in A$ the following holds: if for every $s\in Z^\alpha$ there is a finite subset $\Sigma$ of $\alpha$ such that $s*_{(\Sigma)}a=s*_{(\Sigma)}b$, then $a=b$. \end{Definition} \begin{Theorem} Every $SA_\alpha$ in which elements are strongly distinguished is an $RSA_\alpha$. \begin{proof} Let $\mathfrak{A}$ be an $SA_\alpha$ in which elements are strongly distinguished. Let $J$ be the collection of all finite subsets of $\alpha$ and let \[ P_\Sigma=\{\Gamma:\Gamma \in J \textrm{ and } \Gamma \supseteq\Sigma\} \] for $\Sigma \in J$. Since $P_\Sigma \cap P_\Gamma=P_{\Sigma \cup \Gamma}$, it follows that \[ \mathcal{F}=\{Q:Q \subseteq J \textrm{ and for some } \Sigma \in J, Q\supseteq P_\Sigma\} \] is a filter on $J$. Let $W=\prod\langle Z(\Gamma):\Gamma \in J\rangle$, and let $U$ be the collection of all equivalence classes $[f]$ for $f \in W$. $U$ is the base for the $FSA_\alpha$ we seek. For $X \in U$, let \[ K(X)=\{f:f \in X \textrm{ and for some } x\in Z, f_\Gamma=x \textrm{ for all } \Gamma \in J\}. \] It is easily seen that $K(X)$ has at most one member.
Let $c:U\to W$ be a choice function such that for all $X \in U$, $c(X) \in K(X)$ if $K(X) \ne \emptyset$ and $c(X) \in X$ if $K(X)=\emptyset$. For each $s \in U^\alpha$ and $\Gamma \in J$, define $s^\Gamma \in Z(\Gamma)^\alpha$ by $(s^\Gamma)_\lambda=c(s_\lambda)_\Gamma$ for $\lambda <\alpha$. For $\Gamma \in J$, let $\phi_\Gamma$ be the $\Gamma$-homomorphism of Theorem 2.5. Define a function $ \phi: A \to F_\alpha(U)$ by $\phi(a)(s)=[\langle \phi_\Gamma(a)(s^\Gamma):\Gamma \in J \rangle]$ where $a \in A$ and $s \in U^\alpha$. We shall show that $\phi$ is an isomorphism from $\mathfrak{A}$ to the full $FSA_\alpha$ with base $U$. The result follows immediately from this. First we show that $\phi$ is one-one. Suppose that $\phi(a)=\phi(b)$ where $a,b \in A$. Therefore, for all $s\in U^\alpha$, $\phi(a)(s)=\phi(b)(s)$ and hence, \[ \{\Gamma:\Gamma \in J \textrm{ and } s^\Gamma*_{(\Gamma)}a=s^\Gamma*_{(\Gamma)}b\} \in \mathcal{F}. \] From the definition of $\mathcal{F}$, it follows that \begin{equation} \textrm{ for all } s \in U^\alpha, \textrm{ there is a } \Sigma \in J \textrm{ such that } s^\Sigma*_{(\Sigma)}a =s^\Sigma*_{(\Sigma)}b. \end{equation} Since elements are strongly distinguished in $\mathfrak{A}$, to show that $a=b$, it suffices to show that \begin{equation} \textrm{for all }t \in Z^\alpha, \textrm{ there is a } \Sigma \in J \textrm{ such that }t*_{(\Sigma)}a=t*_{(\Sigma)}b. \end{equation} To this end, let $t \in Z^\alpha$ and define $s\in U^\alpha$ by $s_\lambda=[\langle t_\lambda: \Gamma \in J\rangle]$ for $\lambda<\alpha$. By (1), there is a $\Sigma \in J$ such that $s^\Sigma*_{(\Sigma)}a=s^\Sigma*_{(\Sigma)}b$. Since $\langle t_\lambda:\Gamma \in J\rangle \in K(s_\lambda)$ we have $c(s_\lambda)=\langle t_\lambda:\Gamma \in J\rangle$. Therefore, for all $\lambda<\alpha$, $(s^\Sigma)_\lambda=c(s_\lambda)_\Sigma=t_\lambda$. Hence, $s^\Sigma=t$ and $t*_{(\Sigma)}a=t*_{(\Sigma)}b$. We therefore have (2). Next we show that $\phi$ is a homomorphism from $\mathfrak{A}$ to the full $FSA_\alpha$ with base $U$. We need the following easily verified fact: \begin{equation} \textrm{For all } s \in U^\alpha \textrm{ and } X\in U,\ s\langle \kappa,X\rangle^\Gamma=s^\Gamma\langle \kappa,c(X)_\Gamma\rangle. \end{equation} Let $a,b \in A$ and $s \in U^\alpha$. Let \[ M=\{\Gamma:\Gamma \in J \textrm{ and } \phi_\Gamma(a*_\kappa b)(s^\Gamma)=(\phi_\Gamma(a)*_\kappa\phi_\Gamma(b))(s^\Gamma)\}. \] Since $\phi_\Gamma$ is a $\Gamma$-homomorphism, $M\supseteq P_{\{\kappa\}}$ and hence, $M \in \mathcal{F}$. We then have \begin{eqnarray*} \phi(a*_\kappa b)(s) & = & [\langle \phi_\Gamma(a*_\kappa b)(s^\Gamma):\Gamma \in J \rangle] \\ & = & [\langle (\phi_\Gamma(a)*_\kappa \phi_\Gamma(b))(s^\Gamma):\Gamma \in J \rangle] \textrm{ since } M \in \mathcal{F} \\ & = & [\langle \phi_\Gamma(b)(s^\Gamma\langle \kappa,\phi_\Gamma(a)(s^\Gamma)\rangle):\Gamma \in J\rangle]. \end{eqnarray*} On the other hand, \begin{eqnarray*} (\phi(a)*_\kappa\phi(b))(s) & = & \phi(b)(s\langle \kappa,\phi(a)(s)\rangle)\\ & = & [\langle\phi_\Gamma(b)(s\langle \kappa,\phi(a)(s)\rangle^\Gamma):\Gamma \in J\rangle]\\ & = & [\langle \phi_\Gamma(b)(s^\Gamma\langle \kappa,c(\phi(a)(s))_\Gamma\rangle):\Gamma \in J\rangle]. \end{eqnarray*} Let \[ R=\{\Gamma:\Gamma \in J \textrm{ and } \phi_\Gamma(a)(s^\Gamma)=c(\phi(a)(s))_\Gamma\}. \] It is easily seen that $R \in \mathcal{F}$. It follows that $\phi(a*_\kappa b)(s)=(\phi(a)*_\kappa \phi(b))(s)$.
By a similar type of argument, it can be shown that $\phi(v_\kappa)=V_\kappa$ where $V_\kappa(s)=s_\kappa$ for all $s \in U^\alpha$. \end{proof} \end{Theorem} \begin{Theorem} Elements are strongly distinguished in every full $FSA_\alpha$. \begin{proof} Let $\mathfrak{A}$ be a full $FSA_\alpha$ with base $U$. Suppose that $f, g \in F_\alpha(U)$ are such that $f \ne g$. Therefore, $f(s) \ne g(s)$ for some $s \in U^\alpha$. Define $t \in Z^\alpha$ by $t_\lambda(r) = s_\lambda$ for $r \in U^\alpha$. A direct computation shows that for all finite subsets $\Sigma$ of $\alpha$, $(t*_{(\Sigma)}f)(s) = f(s)$ and $(t*_{(\Sigma)}g)(s) = g(s)$ and hence, $t*_{(\Sigma)}g \ne t*_{(\Sigma)}f$. Therefore, we have shown that if $f \ne g$, then there is a $t \in Z^\alpha$ such that $t*_{(\Sigma)}g \ne t*_{(\Sigma)}f$ for all finite subsets $\Sigma$ of $\alpha$. \end{proof} \end{Theorem} \begin{Theorem} An $SA_\alpha$ is representable iff it is embeddable in an $SA_\alpha$ in which elements are strongly distinguished. \begin{proof} If an $SA_\alpha$ is representable, then it is isomorphic to an $FSA_\alpha$ and, since every $FSA_\alpha$ is a subalgebra of a full $FSA_\alpha$, by Theorem 3.3, it is embeddable in an $SA_\alpha$ in which elements are strongly distinguished. The other implication follows from Theorem 3.2. \end{proof} \end{Theorem} We use the next theorem to show that an ultraproduct of $FSA_\alpha$'s is representable. \begin{Theorem} Let $\mathcal{F}$ be an ultrafilter on a set $I$ and for all $i\in I$, let $\mathfrak{A_i}$ be an $SA_\alpha$ in which elements are strongly distinguished. Then elements are strongly distinguished in $\mathfrak{B}=\prod \langle \mathfrak{A_i}:i\in I \rangle/\mathcal{F}$, the ultraproduct of the $\mathfrak{A_i}$. \begin{proof} Let $W=\prod\langle A_i:i \in I\rangle$. The universe $B$ of $\mathfrak{B}$ is the collection of equivalence classes $[a]$ where $a \in W$. Let $a,b \in W$ be such that $[a] \ne [b]$. Let $P=\{i:i\in I \textrm{ and } a_i \ne b_i\}$. Since $\mathcal{F}$ is an ultrafilter, $P\in \mathcal{F}$. Let $Z_i = \{x:x\in A_i \textrm{ and } \Delta x=0\}$ and $Z = \{x:x \in B \textrm{ and } \Delta x = 0\}$. For all $i \in I$, elements are strongly distinguished in $\mathfrak{A_i}$. Therefore, for all $ i \in P$, there is a $t^i \in Z^\alpha_i$ such that $t^i*_{(\Sigma)}a_i \ne t^i*_{(\Sigma)}b_i$ for all finite subsets $ \Sigma$ of $\alpha$. For $i \in I-P$, let $t^i$ be any member of $Z^\alpha_i$. Define $s \in Z^\alpha$ by letting $s_\lambda = [\langle (t^i)_\lambda:i \in I \rangle]$ for $\lambda < \alpha$. Let $\Sigma$ be a finite subset of $\alpha$. A direct computation shows that for all $x \in W$, $s*_{(\Sigma)}[x]=[\langle t^i*_{(\Sigma)}x_i:i \in I\rangle]$. Let $Q = \{i:i \in I \textrm{ and } t^i*_{(\Sigma)}a_i \ne t^i*_{(\Sigma)} b_i\}$. Since $P \in \mathcal{F}$ and $Q\supseteq P$, we have $Q \in \mathcal{F}$. Therefore, \begin{eqnarray*} s*_{(\Sigma)}[a] & = & [\langle t^i *_{(\Sigma)} a_i:i \in I\rangle]\\ & \ne & [\langle t^i *_{(\Sigma)}b_i:i\in I\rangle] \\ & = & s*_{(\Sigma)}[b]. \end{eqnarray*} \end{proof} \end{Theorem} \begin{Theorem} An ultraproduct of $RSA_\alpha$'s is an $RSA_\alpha$. \begin{proof} For $i \in I$ let $\mathfrak{A_i}$ be an $RSA_\alpha$ and let $\mathcal{F}$ be an ultrafilter on $I$. Each $\mathfrak{A_i}$ is isomorphic to some $FSA_\alpha$ $\mathfrak{A_i}'$ with base $U_i$. $\mathfrak{A_i}'$ is a subalgebra of the full $FSA_\alpha$ $ \mathfrak{B_i}$ with base $U_i$.
$\mathfrak{A}=\prod \langle \mathfrak{A_i}:i \in I\rangle/\mathcal{F}$ is isomorphic to a subalgebra of $\mathfrak{B}=\prod \langle \mathfrak{B_i}:i \in I\rangle/\mathcal{F}$. By Theorems 3.3, 3.5, and 3.2, $\mathfrak{B}$ is an $RSA_\alpha$ and therefore, $\mathfrak{A}$ is an $RSA_\alpha$. \end{proof} \end{Theorem} \begin{Theorem} The class of $RSA_\alpha$'s is a $UC_\Delta$. \begin{proof} The class of $RSA_\alpha$'s is closed under isomorphism, subalgebra, and by Theorem 3.6, ultraproducts. By a theorem of \L o\'s [4], the class of $RSA_\alpha$'s is a $UC_\Delta$. \end{proof} \end{Theorem} \section{A condition equivalent to representability} It is easily seen that if elements are strongly distinguished in an $SA_\alpha$ $\mathfrak{A}$, then elements are distinguished in $\mathfrak{A}$. Therefore, by Theorem 3.3, elements are distinguished in a full $FSA_\alpha$. \begin{Theorem} An $SA_\alpha$ is an $RSA_\alpha$ iff it is embeddable in an $SA_\alpha$ in which elements are distinguished. \begin{proof} Every $FSA_\alpha$ is a subalgebra of a full $FSA_\alpha$ and elements are distinguished in every full $FSA_\alpha$. Therefore, every representable $SA_\alpha$ is embeddable in an $SA_\alpha$ in which elements are distinguished. For the converse, let $\mathfrak{A}'$ be an $SA_\alpha$ which is embeddable in an $SA_\alpha$ $\mathfrak{A}$ in which elements are distinguished. To show that $\mathfrak{A}'$ is representable, it suffices to show that $\mathfrak{A}$ is representable. Let $J$ be the collection of finite subsets of $\alpha$. For $\Gamma \in J$, let $\mathfrak{A}_\Gamma$ be the full $FSA_\alpha$ with base $Z(\Gamma)$. For all $\Gamma \in J$, $\phi_\Gamma$, the function defined in Theorem 2.5, is a $\Gamma$-homomorphism from $\mathfrak{A}$ into $\mathfrak{A}_\Gamma$. We show that $\phi_\Gamma$ is one-one. Suppose that $\phi_\Gamma(a)=\phi_\Gamma(b)$ where $a, b \in A$. Hence, for all $s \in Z(\Gamma)^\alpha$, $\phi_\Gamma(a)(s)=\phi_\Gamma(b)(s)$; that is, $s*_{(\Gamma)}a=s*_{(\Gamma)}b$. Since $Z \subseteq Z(\Gamma)$, $s*_{(\Gamma)}a=s*_{(\Gamma)}b$ for all $s \in Z^\alpha$. By Theorem 5.5 of [1], $a=b$ since elements are distinguished in $\mathfrak{A}$. For $\lambda < \alpha$, let $R_\lambda=\{\Gamma:\Gamma \in J \textrm{ and } \lambda \in \Gamma\}$. The collection of all $R_\lambda$ for $\lambda < \alpha$ has the finite intersection property and hence, there is an ultrafilter $\mathcal{F}$ on $J$ such that $R_\lambda \in \mathcal{F}$ for all $\lambda < \alpha$. Let $\mathfrak{B}=\prod\langle \mathfrak{A_\Gamma}:\Gamma \in J\rangle/\mathcal{F}$. Define a map $\psi:A \to B$ as follows: $\psi(a)=[\langle \phi_\Gamma(a):\Gamma \in J\rangle]$ for $a \in A$. A straightforward verification shows that $\psi$ is an isomorphism from $\mathfrak{A}$ into $\mathfrak{B}$. By Theorem 3.6, $\mathfrak{B}$ is representable and hence, $\mathfrak{A}$ is representable. \end{proof} \end{Theorem} As in the theory of cylindric algebras [2], a dimension-complemented $SA_\alpha$ is defined as follows: \begin{Definition} An $SA_\alpha$ $\mathfrak{A}$ is dimension-complemented if for all finite $X\subseteq A$, \[\alpha-\bigcup\{\Delta x:x \in X\}\] is infinite. \end{Definition} \begin{Theorem} Every dimension-complemented $SA_\alpha$ is embeddable in a dimension-complemented $SA_\alpha$ in which elements are distinguished.
\begin{proof} The proof is obtained from the proofs of Theorems 5.3 and 5.4 of [1] if we replace ``locally finite'' wherever it occurs by ``dimension-complemented.'' \end{proof} \end{Theorem} In the theory of cylindric algebras, every dimension-complemented cylindric algebra is representable [3]. The following is a theorem analogous to this. \begin{Theorem} Every dimension-complemented $SA_\alpha$ is representable. \begin{proof} This follows immediately from Theorems 4.3 and 4.1. \end{proof} \end{Theorem} \section{Neat embeddings} \begin{Definition} Let $\mathfrak{A}=\langle A, *_\kappa,v_\kappa\rangle_{\kappa <\alpha}$ be an $SA_\alpha$ and $\beta \leq \alpha$. The $\beta$-reduct of $\mathfrak{A}$ is the algebra $\mathfrak{A}/\beta=\langle A, *_\kappa,v_\kappa\rangle_{\kappa <\beta}$. \end{Definition} Clearly, $\mathfrak{A}/\beta$ is an $SA_\beta$ if $\mathfrak{A}$ is an $SA_\alpha$. \begin{Theorem} If $\mathfrak{A}$ is an $RSA_\alpha$ and $\beta \leq \alpha$, then $\mathfrak{A}/\beta$ is an $RSA_\beta$. \begin{proof} Let $\mathfrak{A}$ be an $RSA_\alpha$. Then $\mathfrak{A}$ is isomorphic to a subalgebra of an $FSA_\alpha$ $\mathfrak{A'}$. $\mathfrak{A'}$ is a subalgebra of $\mathfrak{B}$, the full $FSA_\alpha$ with base $U$. Therefore, $\mathfrak{A'}/\beta$ is embeddable in $\mathfrak{B}/\beta$. Theorem 3.2 can be easily modified to show that elements are distinguished in $\mathfrak{B}/\beta$ and hence, by Theorem 4.1, $\mathfrak{A'}/\beta$ is an $RSA_\beta$. $\mathfrak{A}/\beta$ is isomorphic to a subalgebra of $\mathfrak{A'}/\beta$. Therefore, $\mathfrak{A}/\beta$ is an $RSA_\beta$. \end{proof} \end{Theorem} \begin{Definition} Let $\mathfrak{A}$ be an $SA_\alpha$ and $\beta \leq \alpha$. Let $\mathfrak{A}^{(\beta)}=\langle A',*_\kappa,v_\kappa \rangle_{\kappa < \beta}$ where $A'=\{x:x \in A \textrm{ and } \Delta x \subseteq \beta\}$. An $SA_\alpha$ $\mathfrak{B}$ is $\beta$-neatly embeddable in $\mathfrak{A}$ if $\mathfrak{B}$ is isomorphic to a subalgebra of $\mathfrak{A}^{(\beta)}$. \end{Definition} As in the theory of cylindric algebras, necessary and sufficient conditions for representability of $SA_\alpha$'s can be given in terms of neat embeddings [3]. \begin{Theorem} Let $\mathfrak{A}$ be an $SA_\alpha$. The following statements are equivalent: (i) $\mathfrak{A}$ is an $RSA_\alpha$. (ii) For all $\kappa < \omega$, $\mathfrak{A}$ is $\alpha$-neatly embeddable in some $SA_{\alpha+\kappa}$. (iii) $\mathfrak{A}$ is $\alpha$-neatly embeddable in some $SA_{\alpha+\omega}$. \begin{proof} To show that (i) implies (ii), it suffices to show that if $\mathfrak{A}$ is an $FSA_\alpha$ and $\kappa < \omega$, then $\mathfrak{A}$ is $\alpha$-neatly embeddable in some $FSA_{\alpha+\kappa}$. Let $\mathfrak{A}$ be an $FSA_\alpha$ with base $U$ and let $\kappa < \omega$. Define a function $\phi:A \to U^{U^{\alpha+\kappa}}$ by $\phi(f)(s)=f(s/\alpha)$ where $f \in A$, $s \in U^{\alpha + \kappa}$ and $s/\alpha$ is the restriction of $s$ to $\alpha$. It is easy to verify that $\phi$ maps $\mathfrak{A}$ isomorphically onto a subalgebra of $\mathfrak{B}^{(\alpha)}$ where $\mathfrak{B}$ is the full $FSA_{\alpha+\kappa}$ with base $U$. Assume (ii). For $\kappa < \omega$, let $\phi_\kappa$ map $\mathfrak{A}$ isomorphically onto a subalgebra of $\mathfrak{B}_\kappa^{(\alpha)}$ where $\mathfrak{B}_\kappa=\langle B_\kappa,*_\lambda^\kappa,v_\lambda^\kappa\rangle_{\lambda<\alpha+\kappa}$ is an $SA_{\alpha+\kappa}$.
Define algebras $\mathfrak{C_\kappa}= \langle B_\kappa,*_\lambda^\kappa,v_\lambda^\kappa\rangle_{\lambda<\alpha+\omega}$ where $x*_\lambda ^\kappa y=y$ for $x, y \in B_\kappa$ and $\alpha+\kappa \leq \lambda<\alpha+\omega$ and $v_\lambda^\kappa$ is any member of $B_\kappa$ for $\alpha+\kappa \leq \lambda<\alpha+\omega$. Let $\mathcal{F}$ be a non-principal ultrafilter on $\omega$ and let $\mathfrak{C}=\prod \langle \mathfrak{C}_\kappa:\kappa<\omega\rangle/\mathcal{F}$. We show that $\mathfrak{C}$ is an $SA_{\alpha+\omega}$. If an instance of an axiom for $SA_{\alpha+\omega}$'s has maximum subscript $\gamma$, then for all $\kappa$ with $\gamma<\alpha+\kappa$, this axiom holds in $\mathfrak{B}_\kappa$ and hence in $\mathfrak{C}_\kappa$. Hence, it holds in all but finitely many of the $\mathfrak{C}_\kappa$ and therefore, in $\mathfrak{C}$. Define a function $\phi: A \to C$ by $\phi(a)=[\langle \phi_\kappa(a):\kappa<\omega\rangle]$ where $a \in A$. It is easily verified that $\phi$ is an isomorphism from $\mathfrak{A}$ into a subalgebra of $\mathfrak{C}^{(\alpha)}$. Hence, we have (iii). To show that (iii) implies (i), assume that $\mathfrak{A}$ is mapped isomorphically by $\phi$ onto a subalgebra of $\mathfrak{B}^{(\alpha)}$ where $\mathfrak{B}$ is an $SA_{\alpha+\omega}$. Using $\Delta(x*_\kappa y)\subseteq \Delta x \cup (\Delta y-\{\kappa\})$ [1], it can be shown that the subalgebra $\mathfrak{A}'$ generated by $ \{\phi(a):a \in A\}$ in $\mathfrak{B}$ is a dimension-complemented $SA_{\alpha+\omega}$. By Theorem 4.4, $\mathfrak{A'}$ is representable. $\mathfrak{A}$ is isomorphic to a subalgebra of $\mathfrak{A'}/\alpha$. By Theorem 5.2, $\mathfrak{A'}/\alpha$ is representable and hence, $\mathfrak{A}$ is representable. \end{proof} \end{Theorem} \begin{center} References \end{center} [1] N. \textsc{Feldman}, \textit{Representation of polynomial substitution algebras}, \textit{\textbf{Journal of Symbolic Logic}}, vol. 47 (1982), pp. 481-492. [2] L. \textsc{Henkin}, J.D. \textsc{Monk}, and A. \textsc{Tarski}, \textit{\textbf{Cylindric algebras, part I}}, North-Holland Publishing Company, Amsterdam, 1971. [3] L. \textsc{Henkin}, J.D. \textsc{Monk}, and A. \textsc{Tarski}, \textit{\textbf{Cylindric algebras, part II}}, North-Holland Publishing Company, Amsterdam, 1985. [4] J. \L \textsc{o\'s}, \textit{Quelques remarques, th\'eor\`emes et probl\`emes sur les classes d\'efinissables d'alg\`ebres}, \textit{\textbf{Mathematical interpretations of formal systems}}, North-Holland Publishing Company, Amsterdam, 1955, pp. 98-113. \end{document}
\begin{document} \title{Cofinal types below $\aleph_\omega$ (DRAFT)} \author{Roy Shalev} \address{Department of Mathematics, Bar-Ilan University, Ramat-Gan 5290002, Israel.} \urladdr{https://roy-shalev.github.io/} \begin{abstract} It is proved that for every positive integer $n$, the number of non-Tukey-equivalent directed sets of cardinality $\leq \aleph_n$ is at least $c_{n+2}$, the $(n+2)$-Catalan number. Moreover, the Tukey class $\mathcal D_{\aleph_n} $ of directed sets of cardinality $\leq \aleph_n$ contains an isomorphic copy of the poset of Dyck $(n+2)$-paths. Furthermore, we give a complete description whether two successive elements in the copy contain another directed set in between or not. \end{abstract} \maketitle \subseteqection{Intoduction} Motivated by problems in general topology, Birkhoff \cite{MR1503323}, Tukey \cite{MR0002515}, and Day \cite{MR9970} studied some natural classes of directed sets. Later, Isbell \cite{MR201316,MR294185} and Schmidt \cite{MR76836} investigated uncountable directed sets under the Tukey order $<_T$. In \cite{MR792822}, Todor\v{c}evi\'{c} showed that under $\pfa$ there are only five cofinal types in the class $\mathcal D_{\aleph_1}$ of all cofinal types of size $\leq\aleph_1$ under the Tukey order, namely $\{1,\omega,\omega_1,\omega\times \omega_1,[\omega_1]^{<\omega}\}$. In the other direction, Todor\v{c}evi\'{c} showed that under $\ch$ there are $2^{\mathfrak c}$ many non-equivalent cofinal types in this class. Later in \cite{MR1407459} this was extended to all transitive relations on $\omega_1$. Recently, Kuzeljević and Todor\v{c}evi\'{c} \cite{kuzeljevic2021cofinal} initiated the study of the class $\mathcal D_{\aleph_2}$. They showed in $\zfc$ that this class contains at least fourteen different cofinal types which can be constructed from two basic types of directed sets and their products: $(\kappa,\in )$ and $([\kappa]^{<\theta},\subsetequbseteq)$, where $\kappa\in\{\omega,\omega_1,\omega_2\}$ and $\theta\in\{\omega,\omega_1\}$. In this paper, we extend the work of Todor\v{c}evi\'{c} and his collaborators and uncover a connection between the classes of the $\mathcal D_{\aleph_n}$'s and the Catalan numbers. Denote $V_k:=\{1, \omega_k, [\omega_k]^{<\omega_m} \mid 0\leq m<k\}$, $\mathcal F_n:=\bigcup_{k\leq n} V_k$ and finally let $\mathcal S_n$ be the set of all finite products of elements of $\mathcal F_n$. Recall (see Section~\ref{Section - Catalan}) that the $n$-Catalan number is equal to the cardinality of the set of all Dyck $n$-paths. The set $\mathcal K_n$ of all Dyck $n$-paths admits a natural ordering $\vartriangleleft$, and the connection we uncover is as follows. \begin{thma} The posets $(\mathcal S_n\subseteqlash{\equiv_T}, <_T)$ and $(\mathcal K_{n+2},\vartriangleleft)$ are isomorphic. In particular, the class $\mathcal D_{\aleph_n}$ has size at least the $(n+2)$-Catalan number. \end{thma} A natural question which arise is whether an interval determined by two successive elements of $\left(\mathcal S_n\subseteqlash{\equiv_T}, {<_T}\right)$ form an empty interval in $(\mathcal D_{ \aleph_n},<_T)$. In \cite{kuzeljevic2021cofinal}, the authors identified two intervals of $\mathcal S_2$ that are indeed empty in $\mathcal D_{\aleph_2}$, and they also showed that consistently, under $\gch$ and the existence of a non-reflecting stationary subset of $E^{\omega_2}_\omega$, two intervals of $\mathcal S_2$ that are nonempty in $\mathcal D_{\aleph_2}$. 
In this paper, we prove: \begin{thmb} Assuming $\gch$, for every positive integer $n$, we identify all the intervals of $\mathcal S_n$ that form an empty interval in $\mathcal D_{\aleph_n}$, and construct counterexamples in the remaining cases. \end{thmb} \subsection{Organization of this paper} In Section~\ref{Section - Tools} we analyze the Tukey order of directed sets using characteristics of the ideal of bounded subsets. In Section~\ref{Section - Catalan} we consider the poset $(\mathcal S_n\slash{\equiv_T}, <_T )$ and show it is isomorphic to the poset of good $(n+2)$-paths (Dyck paths) with the natural order. As a corollary we get that the cardinality of $\mathcal D_{\aleph_n}$ is at least the Catalan number $c_{n+2}$. Furthermore, we address the basic question of whether a specific interval in the poset $(\mathcal S_n\slash{\equiv_T}, <_T )$ is empty, i.e., given an element $C$ and a successor $E$ of it, is there a directed set $D$ such that $C<_T D <_T E$? We answer this question in Theorem~\ref{Theorem - Intervals} using results from the next two sections. In Section~\ref{Section - Gaps} we present sufficient conditions on an interval of the poset $(\mathcal S_n\slash{\equiv_T}, <_T )$ which enable us to prove that there is no directed set inside. In Section~\ref{Section - No Gaps} we present cardinal arithmetic assumptions sufficient to construct a directed set inside specific intervals of the poset $(\mathcal S_n\slash{\equiv_T}, <_T )$. In Section~\ref{Section - 6} we finish with a remark about future research. In the Appendix, diagrams of the posets $(\mathcal S_2\slash{\equiv_T}, <_T )$ and $(\mathcal S_3\slash{\equiv_T}, <_T )$ are presented. \subsection{Notation}\label{notationsubsection} For a set of ordinals $C$, we write $\acc(C):=\{\alpha <\sup(C) \mid \sup(C\cap \alpha)=\alpha >0\}$. For ordinals $\alpha<\gamma$, denote $E^\gamma_\alpha:=\{ \beta<\gamma\mid \cf(\beta)=\alpha\}$. The set of all infinite (resp. infinite and regular) cardinals below $\kappa$ is denoted by $\card(\kappa)$ (resp. $\reg(\kappa)$). For a cardinal $\kappa$ we denote by $\kappa^+$ the successor cardinal of $\kappa$, and by $\kappa^{+n}$ the $n^{\mathrm{th}}$ successor cardinal. For a function $f:X\rightarrow Y$ and a set $A\subseteq X$, we denote $f"A:=\{f(x)\mid x\in A\}$. For a set $A$ and a cardinal $\theta$, we write $[A]^{\theta}:=\{X\subseteq A \mid |X|=\theta\}$ and define $[A]^{\leq\theta}$ and $[A]^{<\theta}$ similarly. For a sequence of sets $\langle D_i\mid i\in I \rangle$, let $\prod_{i\in I} D_i:=\{f:I\rightarrow \bigcup_{i\in I} D_i \mid \forall i\in I[f(i)\in D_i] \}$. For two functions $f,g\in {}^{\omega}\omega$, we define the order $<^*$ by $f<^*g$ iff the set $\{n<\omega \mid f(n)\geq g(n)\}$ is finite. Furthermore, by $f<g$ we mean that $f(n)<g(n)$ whenever $n<\omega$. \subsection{Preliminaries} A partially ordered set $(D,\leq_D)$ is \emph{directed} iff for every $x,y\in D$ there is $z\in D$ such that $x\leq_D z$ and $y\leq_D z$. We say that a subset $X$ of a directed set $D$ is \emph{bounded} if there is some $d\in D$ such that $x\leq d$ for each $x\in X$. Otherwise, $X$ is \emph{unbounded} in $D$. We say that a subset $X$ of a directed set $D$ is \emph{cofinal} if for every $d\in D$ there exists some $x\in X$ such that $d\leq_D x$. Let $\cf(D)$ denote the minimal cardinality of a cofinal subset of $D$. 
If $D$ and $E$ are two directed sets, we say that $f:D\rightarrow E$ is a \emph{Tukey function} if $f"X:=\{f(x)\mid x\in X\}$ is unbounded in $E$ whenever $X$ is unbounded in $D$. If such a Tukey function exists we say that $D$ is \emph{Tukey reducible} to $E$, and write $D\leq_T E$. If $D\leq_T E$ and $E\not\leq_T D$, we write $D<_T E$. A function $g:E\rightarrow D$ is called a \emph{convergent/cofinal map} from $E$ to $D$ if for every $d\in D$ there is an $e_d\in E$ such that for every $c\geq e_d$ we have $g(c)\geq d$. There is a convergent map $g:E\rightarrow D$ iff $D\leq_T E$. Note that for a convergent map $g:E\rightarrow D$ and a cofinal subset $Y\subseteq E$, the set $g"Y$ is cofinal in $D$. We say that two directed sets $D$ and $E$ are \emph{cofinally/Tukey equivalent} and write $D\equiv_T E$ iff $D\leq_T E$ and $D\geq_T E$. Formally, a \emph{cofinal type} is an equivalence class under the Tukey order; we abuse notation and call every representative of the class a cofinal type. Notice that a directed set $D$ is cofinally equivalent to any cofinal subset of $D$. In \cite{MR0002515}, Tukey proved that $D\equiv_T E$ iff there is a directed set $(X, \leq_X)$ such that both $D$ and $E$ are isomorphic to a cofinal subset of $X$. We denote by $\mathcal D_\kappa$ the set of all cofinal types of directed sets of cofinality $\leq \kappa$. Given a sequence of directed sets $\langle D_i \mid i\in I \rangle$, we define a new directed set on the set $\prod_{i\in I} D_i$, equipped with the relation $\leq$ defined as follows: for $d, e \in\prod_{i\in I} D_i$ we let $d \leq e$ if and only if $d(i)\leq_{D_i} e(i)$ for each $i\in I$. For $i\in I$, let $\pi_{D_i}$ be the projection to the $i$-th coordinate. A simple observation {\cite[Proposition~2]{MR792822}} is that if $n$ is finite, then $D_1\times \dots \times D_n$ is the least upper bound of $D_1,\dots,D_n$ in the Tukey order. Similarly, we define a $\theta$-support product $\prod^{\leq\theta}_{i\in I}D_i$; for each $i\in I$, we fix some element $0_{D_i}\in D_i$ (usually minimal). Every element $v\in \prod^{\leq\theta}_{i\in I}D_i$ is such that $|\supp(v)|\leq \theta$, where $\supp(v):=\{ i\in I \mid v(i)\not = 0_{D_i}\}$. The order is coordinatewise. \section{Characteristics of directed sets}\label{Section - Tools} We commence this section with the following two lemmas which will be used throughout the paper. \begin{lemma}[Pouzet, {\cite{Pouzet}}]\label{Lemma - cofinal set with no large bounded set} Suppose $D$ is a directed set such that $\cf(D)=\kappa$ is infinite. Then there exists a cofinal directed set $P\subseteq D$ of size $\kappa$ such that every subset of $P$ of size $\kappa$ is unbounded. \end{lemma} \begin{proof} Let $X\subseteq D$ be a cofinal subset of cardinality $\kappa$ and let $\{x_\alpha \mid \alpha<\kappa\}$ be an enumeration of $X$. Let $P:=\{ x_\alpha \mid \forall \beta<\alpha[x_\alpha\not <_D x_\beta] \}$. We claim that $P$ is cofinal. Fix $d\in D$. As $X$ is cofinal in $D$, fix a minimal $\alpha<\kappa$ such that $d<_D x_\alpha$. If $x_\alpha\in P$, then we are done. If not, then fix some $\beta<\alpha$ minimal such that $x_\alpha<_D x_\beta$. As $\beta$ is minimal, there is no $\gamma<\beta$ such that $x_\beta<x_\gamma$, hence $x_\beta\in P$. Note that as $P$ is cofinal in $D$, $\cf(D)=\kappa$, $P\subseteq X$ and $|X|=\kappa$, we get that $|P|=\kappa$. Finally, let us show that every subset of $P$ of size $\kappa$ is unbounded. 
Suppose on the contrary that $X\subseteq P$ is a bounded subset of $P$ of size $\kappa$. Fix some $x_\beta\in P$ above $X$ and then $\beta<\alpha<\kappa$ such that $x_\alpha \in X$; but this is absurd, as $x_\alpha<x_\beta$ and $x_\alpha\in P$. \end{proof} \begin{fact}[Kuzeljević-Todor\v{c}evi\'{c}, {\cite[Lemma~2.3]{kuzeljevic2021cofinal}}]\label{KT2021 - Lemma 2.3} Let $\lambda\geq \omega$ be a regular cardinal and $n<\omega$ be positive. The directed set $[\lambda^{+n}]^{\leq \lambda}$ contains a cofinal subset $\mathfrak D_{[\lambda^{+n}]^{\leq \lambda}}$ of size $\lambda^{+n}$ with the property that every subset of $\mathfrak D_{[\lambda^{+n}]^{\leq \lambda}}$ of size $>\lambda$ is unbounded in $[\lambda^{+n}]^{\leq \lambda}$. In particular, $[\lambda^{+n}]^{\leq \lambda}$ belongs to $\mathcal D_{\lambda^{+n}}$, i.e. $\cf([\lambda^{+n}]^{\leq \lambda}) \leq \lambda^{+n}$. \end{fact} Recall that any directed set is Tukey equivalent to any of its cofinal subsets, hence $\mathfrak D_{[\lambda^{+n}]^{\leq \lambda}} \equiv_T [\lambda^{+n}]^{\leq \lambda}$. As part of our analysis of the class $\mathcal D_{\aleph_n}$, we would like to find certain traits of directed sets which distinguish them from one another in the Tukey order. This was done previously, in \cite{MR76836}, \cite{MR201316} and \cite{MR1407459}. We use the language of cardinal functions of ideals. \begin{definition}For a set $D$ and an ideal $\mathcal I$ over $D$, consider the following cardinal characteristics of $\mathcal I$: \begin{itemize} \item $\add(\mathcal I):=\min\{\kappa \mid \mathcal A \subseteq \mathcal I,~ |\mathcal A|=\kappa, ~\bigcup \mathcal A\notin \mathcal I\}$; \item $ \non(\mathcal I):=\min \{|X|\mid X\subseteq D \wedge X\notin \mathcal I \}$; \item $ \out(\mathcal I):=\min\{ \theta\leq|D|^+\mid \mathcal I\cap[D]^{\theta}=\emptyset\}$; \item $\Inner(\mathcal I, \kappa) := \{ \theta\leq \kappa \mid \forall X\in [D]^\kappa~\exists Y\in [X]^\theta \cap \mathcal I \}$. \end{itemize} \end{definition} Notice that $\add(\mathcal I)\leq \non(\mathcal I) \leq \out(\mathcal I)$. \begin{definition} For a directed set $D$, denote by $\mathcal I_{\bd}(D) $ the ideal of bounded subsets of $D$. \end{definition} \begin{prop} Let $D$ be a directed set. Then: \begin{enumerate} \item $\non(\mathcal I_{\bd}(D))$ is the minimal size of an unbounded subset of $D$, so every subset of size less than $\non(\mathcal I_{\bd}(D))$ is bounded; \item If $\theta<\out(\mathcal I_{\bd}(D))$, then there exists a bounded subset of $D$ of size $\theta$; \item If $\theta \geq \out(\mathcal I_{\bd}(D))$, then every subset $X$ of size $\theta$ is unbounded in $D$; \item If $\theta\in \Inner(\mathcal I_{\bd}(D),\kappa)$, then for every $X\in [D]^\kappa$ there exists some bounded $B\in [X]^{\theta}$; \item For every $\theta<\add(\mathcal I_{\bd}(D))$ and a family $\mathcal A$ of size $\theta$ of bounded subsets of $D$, the subset $\bigcup \mathcal A$ is also bounded in $D$. \qed \end{enumerate} \end{prop} Let us consider another intuitive feature of a directed set, containing information about the cardinalities of hereditarily unbounded subsets; this was considered previously by Isbell \cite{MR201316}. \begin{definition}[Hereditarily unbounded sets] For a directed set $D$, set $$ \hu(D):=\{ \kappa\in \card(|D|^+)\mid \exists X\in [D]^{\kappa}\,\forall Y\in [X]^\kappa\,[Y\text{ is unbounded}] \}.$$ \end{definition} \begin{prop} Let $D$ be a directed set. 
Then: \begin{itemize} \item For every $\kappa\in \hu(D)$, we have that $\kappa \leq_T D$; \item If $\cf(D)$ is an infinite cardinal, then $\cf(D)\in \hu(D)$; \item If $\out(\mathcal I_{\bd}(D))\leq\kappa \leq |D|$, then $ \kappa \in \hu(D)$; \item For an infinite cardinal $\kappa$ we have that $\non(\mathcal I_{\bd}(\kappa))=\cf(\kappa)$, $\out(\mathcal I_{\bd}(\kappa))=\kappa$ and $\hu(\kappa)=\{\lambda\in \card(\kappa^+) \mid \lambda =\cf(\kappa)\}$; \item If $\kappa=\cf(D)=\non(\mathcal I_{\bd}(D))$, then $D\equiv_T\kappa$; \item For two infinite cardinals $\kappa>\theta$ we have that $\non(\mathcal I_{\bd}([\kappa]^{<\theta}))=\cf(\theta)$; \item For a regular cardinal $\kappa$ and a positive $n<\omega$, $\out(\mathcal I_{\bd}(\mathfrak D_{[\kappa^{+n}]^{\leq \kappa}}))>\kappa$ and $\hu(\mathfrak D_{[\kappa^{+n}]^{\leq \kappa}}) = \{ \kappa^{+(m+1)}\mid m< n\}$; \item If $\kappa=\cf(D)$ is regular, $\theta=\out(\mathcal I_{\bd}(D))=\non(\mathcal I_{\bd}(D))$ and $\theta^{+n}=\kappa$ for some $n<\omega$, then $D\equiv_T [\kappa]^{<\theta}$.\qed \end{itemize} \end{prop} In the rest of this section we consider various scenarios in which the traits of a certain directed set give us information about its position in the poset $(\mathcal D_{\kappa}, <_T )$. \begin{lemma}\label{Lemma - existence of full-unbounded kappa set} Suppose $D$ is a directed set, $\kappa$ is regular and $X\subseteq D$ is an unbounded subset of size $\kappa$ such that every subset of $X$ of size $<\kappa$ is bounded. Then $\kappa\in \hu(D)$. \end{lemma} \begin{proof} Enumerate $X:=\{x_\alpha \mid \alpha<\kappa\}$. By the assumption, for every $\alpha<\kappa$ we may fix some $z_\alpha\in D$ above the bounded initial segment $\{x_\beta\mid \beta<\alpha\}$. We show that $Z:=\{z_\alpha\mid \alpha<\kappa\}$ witnesses $\kappa\in \hu(D)$. Suppose on the contrary that $Z$ is of cardinality $< \kappa$. Then for some $\alpha<\kappa$, the element $z_\alpha$ is above the subset $X$, hence $X$ is bounded, which is absurd. We claim that every subset of $Z$ of cardinality $\kappa$ is also unbounded. Suppose not, and let us fix some $W\in [Z]^\kappa$ bounded by some $d\in D$; but then $d$ is above $X$, contradicting the fact that $X$ is unbounded. \end{proof} \begin{lemma}\label{Lemma - kappa leq_T D} Suppose $D$ is a directed set and $\kappa$ is a regular cardinal in $\hu(D)$. Then $\kappa \leq_T D$. \end{lemma} \begin{proof} Fix $X\subseteq D$ of cardinality $\kappa$ such that every subset of $X$ of size $\kappa$ is unbounded, and a one-to-one function $f:\kappa\rightarrow X$; notice that $f$ is a Tukey function from $\kappa$ to $D$, as sought. \end{proof} \begin{cor}\label{Cor - existence of full-unbounded kappa set} Suppose $D$ is a directed set, $\kappa$ is regular and $X\subseteq D$ is an unbounded subset of size $\kappa$ such that every subset of $X$ of size $< \kappa$ is bounded; then $\kappa\leq_T D$.\qed \end{cor} The reader may check the following: \begin{itemize} \item For any two infinite cardinals $\lambda$ and $\kappa$ of the same cofinality, we have $\lambda \equiv_T \kappa$; \item For an infinite regular cardinal $\kappa$, we have $\kappa\equiv_T [\kappa]^{<\kappa}$; \item $\hu(\prod^{<\omega}_{n<\omega}\omega_{n+1}) = \{\omega_n \mid n<\omega\}$. \end{itemize} \begin{comment}\begin{lemma} Suppose $\lambda$ and $\kappa$ are two infinite cardinals such that $\cf(\lambda)= \cf(\kappa)$, then $\lambda \equiv_T \kappa$. 
\end{lemma} \begin{proof} Let $\mu=\cf(\lambda)=\cf(\kappa)$ and fix two cofinal sequences $\langle \alpha_\nu \mid \nu<\mu \rangle$ and $\langle \beta_\nu \mid \nu<\mu \rangle$ in $\lambda $ and $\kappa$ respectively such that $\alpha_0=0$. We define a function $f:\lambda \rightarrow \kappa$ as follows: for every $\gamma<\lambda$, let $\nu<\mu$ be such that $\alpha_\nu \leq \gamma<\alpha_{\nu+1}$. Let $f(\gamma):=\beta_\nu$. Clearly every unbounded subset $X\subseteq \lambda$ is such that $f"X$ is unbounded in $\kappa$, so $\lambda\leq_T \kappa$; similarly we can also show that $\kappa \leq_T \lambda$ as sought. \end{proof} For example $\omega \equiv_T \omega_\omega$. \end{comment} \begin{lemma} Suppose $D$ and $E$ are two directed sets such that for some regular $\theta\in \hu(D)$ we have $\theta>\cf(E)$. Then $D\not\leq_T E$. \end{lemma} \begin{proof} By passing to a cofinal subset, we may assume that $|E|=\cf(E)$. Fix a regular $\theta\in \hu(D)$ such that $\cf(E)<\theta$ and $X\in [D]^\theta$ witnessing $\theta\in \hu(D)$, i.e. every subset of $X$ of size $\theta$ is unbounded. Suppose on the contrary that there exists a Tukey function $f:D\rightarrow E$. By the pigeonhole principle, there exists some $Z\in [X]^{\theta}$ and $e\in E$ such that $f"Z=\{e\}$. As $f$ is Tukey and the subset $Z\subseteq X$ is unbounded, $f"Z$ is unbounded in $E$, which is absurd. \end{proof} Notice that for every directed set $D$, if $\cf(D)>1$, then $\cf(D)$ is an infinite cardinal. As a corollary of the previous lemma, $\lambda \not \leq_T \kappa$ for any two regular cardinals $\lambda>\kappa$ where $\lambda$ is infinite. Furthermore, the reader can check that $\lambda\not\leq_T\kappa$ whenever $\lambda<\kappa$ are infinite regular cardinals. \begin{lemma}\label{Lemma - C leq_T D imply cf(C) leq cf(D)} Suppose $C$ and $D$ are directed sets such that $C\leq_T D$. Then $\cf(C)\leq \cf(D)$. \end{lemma} \begin{proof} We may assume that $|D|=\cf(D)$; let $f:C\rightarrow D$ be a Tukey function. As $f$ is Tukey, for every $d\in D$ the set $f^{-1}\{d\}$ is bounded in $C$ by some $c_d\in C$. Note that for every $x\in C$, we have $x\leq_C c_{f(x)}$, hence the set $\{c_d \mid d\in D\}$ is cofinal in $C$. So $\cf(C)\leq |D|=\cf(D)$ as sought. \end{proof} \begin{lemma}\label{Lemma - below kappa theta} Let $\kappa$ and $\theta$ be two cardinals such that $\theta<\kappa=\cf(\kappa)$. Suppose $D$ is a directed set such that $\cf(D)\leq\kappa$ and $\non(\mathcal I_{\bd}(D))\geq\theta$; then $D\leq_T [\kappa]^{<\theta}$. Furthermore, if $\theta\in \Inner(\mathcal I_{\bd}(D),\kappa)$, then $D<_T [\kappa]^{<\theta}$. \end{lemma} \begin{proof} First we show that there exists a Tukey function $f:D \rightarrow [\kappa]^{<\theta}$. Let us fix a cofinal subset $X\subseteq D$ of cardinality $\leq\kappa$ such that every subset of $X$ of cardinality $< \theta$ is bounded. As $|X|\leq \kappa$ we may fix an injection $f:X\rightarrow [\kappa]^{1}$; we will show that $f$ is a Tukey function. Let $Y\subseteq X$ be an unbounded subset; this implies $|Y|\geq\theta$. As $f$ is an injection, the set $\bigcup f"Y$ is of cardinality $\geq \theta$, hence $f"Y$ is an unbounded subset of $[\kappa]^{<\theta}$ as sought. We are left to show that $[\kappa]^{<\theta}\not\leq_T D$. Suppose on the contrary that $g:[\kappa]^{<\theta}\rightarrow D$ is a Tukey function. We split into two cases: $\blacktriangleright$ Suppose $| g" [\kappa]^1| < \kappa$. 
As $\kappa$ is regular, by the pigeonhole principle there exists a set $X\subseteq [\kappa]^1$ of cardinality $\kappa$, and $d\in D$ such that $g(x)=d$ for each $x\in X$. Notice that $g"X$ is a bounded subset of $D$. As $X\subseteq [\kappa]^1$ is of cardinality $\kappa$ and $\kappa>\theta$, it is unbounded in $[\kappa]^{<\theta}$. Since $g$ is a Tukey function, we get that $g"X$ is unbounded, which is absurd. $\blacktriangleright$ Suppose $|g"[\kappa]^1| = \kappa$. Let $X:=g"[\kappa]^1$; by our assumption on $D$, there exists a bounded subset $B\in[X]^{\theta}$. Since $B$ is of size $\theta$, we get that $(g^{-1} [B])\cap [\kappa]^1$ is of cardinality $\geq\theta$, hence unbounded in $[\kappa]^{<\theta}$, contradicting the assumption that $g$ is Tukey. \end{proof} \begin{remark} For any two directed sets $D$ and $E$ such that $\non(\mathcal I_{\bd}(D) )<\non(\mathcal I_{\bd}(E) )$, we have $D\not \leq_T E$. For example, $\theta\not\leq_T [\kappa]^{\leq\theta}$. \end{remark} \begin{lemma}\label{Lemma - kappa, theta no tukey map} Let $\kappa$ be a regular infinite cardinal. Suppose $D$ and $E$ are two directed sets such that $\cf(D)=\kappa$ and $\out(\mathcal I_{\bd}(D))\in \Inner(\mathcal I_{\bd}(E),\kappa)$. Then $D\not\leq_T E$. \end{lemma} \begin{proof} Let $\theta:=\out(\mathcal I_{\bd}(D))$, so every subset of $D$ of size $\geq\theta$ is unbounded and every subset of $E$ of size $\kappa$ contains a bounded subset of size $\theta$. We may assume that $D$ is a directed set of size $\kappa$. Suppose on the contrary that there exists a Tukey function $f:D\rightarrow E$. We split into two cases: $\blacktriangleright$ Suppose $|f"D|<\kappa$. Then by the pigeonhole principle there exists some $X\in [D]^\kappa$ and $e\in E$ such that $f"X=\{e\}$. As $|X|=\kappa\geq \theta$, we know that $X$ is unbounded in $D$, but $f"X$ is bounded in $E$, which is absurd as $f$ is a Tukey function. $\blacktriangleright$ Suppose $|f"D|=\kappa$. By the assumption there exists a bounded subset $Y\in [f"D]^\theta$. Notice that $X:=f^{-1}Y$ is a subset of $D$ of size $\geq \theta$, hence unbounded. So $X$ is an unbounded subset of $D$ such that $f"X=Y$ is bounded, contradicting the fact that $f$ is a Tukey function. \end{proof} \begin{lemma}\label{Lemma - out(, kappa)} Suppose $\kappa$ is a regular uncountable cardinal, $\theta$ is a cardinal, and $C$ and $\langle D_m \mid m<n\rangle$ are directed sets such that $|C|<\kappa\leq \cf(D_m)$ and $\non(\mathcal I_{\bd}(D_m))>\theta$ for every $m<n$. Then $\theta\in \Inner(\mathcal I_{\bd} (C\times \prod_{m<n}{D_m}),\kappa) $. \end{lemma} \begin{proof} Suppose $X\subseteq C\times \prod_{m<n}{D_m}$ is of size $\kappa$; we show that $X$ contains a bounded subset of size $\theta$. As $|C|<\kappa$, by the pigeonhole principle we can fix some $Y\in [X]^\kappa$ and $c\in C$ such that $\pi_C " Y=\{c\}$. Suppose on the contrary that some subset $Z\subseteq Y$ of size $\theta$ is unbounded. Then for some $m<n$ the set $\pi_{D_m}"Z$ is unbounded in $D_m$, but this is absurd as $\non(\mathcal I_{\bd}(D_m)) >\theta$ and $|\pi_{D_m}"Z|\leq \theta$. \end{proof} \begin{lemma}\label{Lemma - D not<= C x E} Suppose $\kappa$ is a cardinal and $C,D$ and $E$ are directed sets such that: \begin{itemize} \item for every partition $D=\bigcup_{\gamma<\kappa} D_\gamma$, there exists an ordinal $\gamma<\kappa$, and an unbounded $X\subseteq D_\gamma$ of size $\kappa$; \item $|C|\leq \kappa$; \item $\non(\mathcal I_{\bd}(E))> \kappa$. \end{itemize} Then $D\not \leq_T C\times E$. 
\end{lemma} \begin{proof} Suppose on the contrary that there exists a Tukey function $h:D \rightarrow C \times E$. For $c\in C$, let $D_c:=\{x\in D \mid \exists e\in E [h(x)=(c,e)]\}$. Since $h$ is a function, $D=\bigcup_{c\in C} D_c$ is a partition into $\leq\kappa$ many sets. By the assumption, there exists a $c\in C$ and an unbounded subset $X\subseteq D_c$ of cardinality $\kappa$. Enumerate $X=\{x_\xi \mid \xi<\kappa\}$ and let $e_\xi\in E$ be such that $h(x_\xi) =(c,e_\xi)$, for each $\xi<\kappa$. As $\non(\mathcal I_{\bd}(E))> \kappa$, there exists some upper bound $e\in E$ to the set $\{e_\xi \mid \xi<\kappa\}$. Since $X$ is unbounded and $h$ is Tukey, $h"X=\{(c,e_\xi )\mid \xi<\kappa\}$ must be unbounded, which is absurd as $(c,e)$ bounds it. \end{proof} \section{The Catalan structure}\label{Section - Catalan} The sequence of Catalan numbers $\langle c_n \mid n<\omega \rangle=\langle 1,1,2,5,14,42,\dots\rangle$ is a ubiquitous sequence of integers with many characterizations; for a comprehensive review of the subject, we refer the reader to Stanley's book \cite{MR3467982}. One of the many representations of $c_n$ is as the number of good $n$-paths (Dyck paths), where a \emph{good $n$-path} is a monotonic lattice path along the edges of a grid with $n\times n$ square cells which does not pass above the diagonal. A \emph{monotonic path} is one which starts in the lower left corner, finishes in the upper right corner, and consists entirely of edges pointing rightwards or upwards. An equivalent representation of a good $n$-path, which we will consider from now on, is a vector $\vec p$ of the columns' heights of the path (ignoring the first trivial column), i.e. a vector $\vec p=\langle p_0,\dots ,p_{n-2}\rangle$ of length $n-1$ of $\leq$-increasing numbers satisfying $0\leq p_k\leq k+1$ for every $0\leq k\leq n-2$. We consider the poset $(\mathcal K_n,\vartriangleleft)$ where $\mathcal K_n$ is the set of all good $n$-paths and the relation $\vartriangleleft$ is defined such that $\vec a \vartriangleleft \vec b$ if and only if the two paths are distinct and for every $k$ with $0\leq k\leq n-2$ we have $b_k\leq a_k$; in other words, the path $\vec b$ is below the path $\vec a$ (allowing overlaps). Notice that for two good $n$-paths $\vec a$ and $\vec b$, either $\vec a \not \vartriangleleft \vec b$ or $\vec b \not \vartriangleleft \vec a$. A good $n$-path $\vec b$ is an immediate successor of a good $n$-path $\vec a$ if $\vec a\vartriangleleft \vec b$ and $\|\vec b -\vec a\| =1$, that is, $\vec a$ and $\vec b$ differ by $1$ in exactly one coordinate. Suppose $\vec a$ and $\vec b$ are two good $n$-paths where $\vec b$ is an immediate successor of $\vec a$. Let $i\leq n-2$ be the unique coordinate on which $\vec a$ and $\vec b$ are different and $a_i$ be the value of $\vec a$ on this coordinate, i.e. $b_i+1 = a_i$. We say that the pair $(\vec a, \vec b)$ is on the $k$-diagonal if and only if $|i+1-a_i|=k$ and $\vec b$ is an immediate successor of $\vec a$. \begin{figure} \caption{The good $4$-path $\langle 1,1,3\rangle$.} \end{figure} In this section we show the connection between the Catalan numbers and cofinal types. Let us fix $n<\omega$. Recall that for every $k<\omega$, we set $V_k:=\{1, \omega_k, [\omega_k]^{<\omega_m} \mid 0\leq m<k\}$, $\mathcal F_n:=\bigcup_{k\leq n} V_k$ and let $\mathcal S_n$ be the set of all finite products of elements in $\mathcal F_n$. Our goal is to construct a coding which gives rise to an order-isomorphism between $(\mathcal S_n\slash{\equiv_T}, <_T)$ and $(\mathcal K_{n+2},\vartriangleleft)$. 
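The combinatorial side of this correspondence can be checked mechanically. The following Python sketch is an illustration only and is not part of the formal development; all function names are ours, and the representation of good $n$-paths as height vectors follows the convention above. It enumerates the good $n$-paths, verifies that their number is $c_n$, and computes the relation $\vartriangleleft$, immediate successors, and the diagonal of an immediate-successor pair.
\begin{verbatim}
from itertools import product

def good_paths(n):
    # Height vectors <p_0,...,p_{n-2}>: non-decreasing, with 0 <= p_k <= k+1.
    vecs = product(*(range(k + 2) for k in range(n - 1)))
    return [p for p in vecs if all(p[k] <= p[k + 1] for k in range(n - 2))]

def below(a, b):
    # "a is related to b" in the sense of the order: distinct, b weakly below a.
    return a != b and all(b[k] <= a[k] for k in range(len(a)))

def immediate_successor(a, b):
    # b is an immediate successor of a: below(a, b) and the total drop is 1.
    return below(a, b) and sum(a[k] - b[k] for k in range(len(a))) == 1

def diagonal(a, b):
    # The l such that the immediate-successor pair (a, b) is on the l-diagonal.
    i = next(k for k in range(len(a)) if a[k] != b[k])
    return abs(i + 1 - a[i])

def catalan(n):
    c = 1
    for k in range(n):
        c = c * 2 * (2 * k + 1) // (k + 2)
    return c

if __name__ == "__main__":
    for n in range(1, 8):
        assert len(good_paths(n)) == catalan(n)
    a, b = (1, 1, 3), (1, 1, 2)   # the good 4-path of the figure and a successor
    assert immediate_successor(a, b) and diagonal(a, b) == 0
\end{verbatim}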
To construct the coding, we first consider a ``canonical form'' of directed sets in $\mathcal S_n$. By Lemma~\ref{Lemma - below kappa theta} the following holds: \begin{enumerate}[label=(\alph*)] \item\label{Clause - Catalan 1} for all $0\leq l<m<k<\omega$ we have $1<_T \omega_k<_T [\omega_k]^{<\omega_m} <_T [\omega_k]^{<\omega_l} $. \item\label{Clause - Catalan 2} for all $0\leq l\leq t<m\leq k<\omega$ with $(l,k)\neq(t,m)$ we have $[\omega_m]^{<\omega_t} <_T [\omega_k]^{<\omega_l}$ and $\omega_m <_T [\omega_k]^{<\omega_l}$. \end{enumerate} Notice that \ref{Clause - Catalan 1} implies that $(V_k,<_T)$ is linearly ordered. A basic fact is that for two directed sets $C$ and $D$ such that $C\leq_T D$, we have $C\times D \equiv_T D$. Hence, for every $D\in \mathcal S_n$ we can find a sequence of elements $\langle D^k \mid k\leq n \rangle$, where $D^k\in V_k$ for every $k\leq n$, such that $D\equiv_T \prod_{k\leq n}D^k$. As we are analyzing the class $\mathcal D_{\aleph_n}$ under the Tukey relation $<_T$, two directed sets which are of the same $\equiv_T$-equivalence class are indistinguishable, so from now on we consider only elements of this form in $\mathcal S_n$. We define a function $\mathfrak F:\mathcal S_n \rightarrow \mathcal S_n$ as follows: Fix $D\in \mathcal S_n$ where $D=\prod_{k\leq n} D^k$. Next we construct a sequence $\langle D_k \mid k\leq n \rangle$ by reverse recursion on $k\leq n$. At the top case, set $D_n:=D^n$. Next, suppose $0\leq k< n$. If, by \ref{Clause - Catalan 2}, we have $D^{k} <_T D^{m}$ for some $k<m\leq n$, then set $D_{k}:=1$; else, let $D_{k}:=D^{k}$. Finally, let $\mathfrak F(D) := \prod_{k\leq n} D_{k}$. Notice that we constructed $\mathfrak F(D)$ such that $\mathfrak F(D) \equiv_T D$. We define $\mathcal T_n:=\im(\mathfrak F)$. \noindent\textbf{The coding.} We encode each product $D\in \mathcal T_n$ by a good $(n+2)$-path $\vec v_D:=\langle d_0,\dots, d_{n}\rangle$. Recall that $D=\prod_{k\leq n} D_k$, where $D_k\in V_k$ for every $k\leq n$. We define, by reverse recursion on $0\leq k \leq n$, the elements of the vector $\vec v_D$, which will satisfy $d_k\leq k+1$, as follows: If one of the elements of $\langle [\omega_k]^{<\omega},\dots, [\omega_k]^{<\omega_{k-1}} ,\omega_k \rangle$ is equal to $D_k$, then let $d_{k}$ be its coordinate (starting from $0$). If this is not the case, then if $k=n$ we let $d_{k}:=n+1$, and otherwise $d_{k}:=\min \{ d_{k+1}, k+1\}$. Notice that by \ref{Clause - Catalan 2}, if $0\leq i<j\leq n$, then $d_i \leq d_j$. Hence, every element $D\in \mathcal T_n$ is encoded as a good $(n+2)$-path. To see that the coding is one-to-one, suppose $C,D\in \mathcal T_n$ are distinct. Let $k:=\max \{ i\leq n \mid C_i\neq D_i\}$. We split into two cases: $\blacktriangleright$ Suppose both $C_k$ and $D_k$ are not equal to $1$; then clearly the column heights of $\vec v_C$ and $\vec v_D$ differ at coordinate $k$. $\blacktriangleright$ Suppose one of them is equal to $1$, say $C_k$; then $D_k\neq 1$. Let $\vec v_C:=\langle v^C_0,\dots v^C_n\rangle$ and $\vec v_D:=\langle v^D_0,\dots v^D_n\rangle$. Suppose $k=n$; then clearly $v^D_n < v^C_n$. Suppose $k<n$; then $v^D_i = v^C_i$ for $k<i\leq n$. By the coding, $v^D_k<k+1$ and by \ref{Clause - Catalan 2} $v^D_k<v^D_{k+1}=v^C_{k+1}$, but $v^C_k=\min\{ k+1, v^C_{k+1}\}$. Hence $v^D_k < v^C_k$ as sought. To see that the coding is onto, let us fix a good $(n+2)$-path $\vec v:=\langle v_0, \dots, v_n \rangle$. We construct $\langle D_k \mid k\leq n \rangle$ by reverse recursion on $k\leq n$. 
At the top case, let $D_n$ be the $v_n$-th element of the vector $\langle [\omega_n]^{<\omega},\dots, [\omega_n]^{<\omega_{n-1}} ,\omega_n, 1\rangle$. For $k<n$: if $v_k=v_{k+1}$, let $D_{k}:=1$; else, let $D_k$ be the $v_k$-th element of the vector $\langle [\omega_k]^{<\omega},\dots, [\omega_k]^{<\omega_{k-1}} ,\omega_k, 1\rangle$. Let $D=\prod_{k\leq n} D_k$; notice that, as $\vec v$ represents a good $(n+2)$-path, we have $D=\mathfrak F(D)$, hence $D \in \mathcal T_n$. Furthermore, $\vec v_D =\vec v$, hence the coding is onto as sought. As a corollary we get that $|\mathcal T_n|=c_{n+2}$. In Figure~\ref{Figure - good 4-paths} we present all good $4$-paths and the corresponding types in $\mathcal T_2$ they encode. \begin{lemma}\label{Lemma - path < implies Tukey <} Suppose $C,D\in \mathcal T_n$ and $\vec v_D \vartriangleleft \vec v_C$. Then $D\leq_T C$. \end{lemma} \begin{proof} Let $D = \prod_{k\leq n} D_k$ and $C=\prod_{k\leq n} C_k$. Note that if $D_k\leq_T C$ for every $k\leq n$, then $D\leq_T C$ as sought. Fix $k\leq n$; if $D_k=1$, then clearly $D_k\leq_T C$. Suppose $D_k\neq 1$; we split into two cases: $\blacktriangleright$ Suppose $C_k\neq 1$. As $v^C_k \leq v^D_k$, by \ref{Clause - Catalan 1} we have $D_k \leq_T C_k\leq_T C$ as sought. $\blacktriangleright$ Suppose $C_k=1$; let $m:=\max\{i\leq n \mid k< i,~v^C_i= v^C_k\}$. As $v^C_i \leq i+1$, by the coding $m$ is well-defined and $v^C_m= v^C_k\leq k<m$. Notice that $C_m=[\omega_m]^{<\omega_p}$ where $p=v^C_m$, and $D_k\leq_T [\omega_k]^{<\omega_p}$. So by~\ref{Clause - Catalan 2}, $D_k\leq_T C_m\leq_T C$ as sought. \end{proof} \begin{lemma}\label{Lemma - path not < implies Tukey not <} Suppose $C,D\in \mathcal T_n$ are distinct and $\vec v_D \not \vartriangleleft \vec v_C$. Then $D\not \leq_T C$. \end{lemma} \begin{proof} Let $D=\prod_{k\leq n} D_k$, $C=\prod_{k\leq n} C_k$, $\vec v_C:=\langle v_0^C,\dots, v_n^C\rangle$ and $\vec v_D:=\langle v_0^D,\dots, v_n^D\rangle$. As $\vec v_D \not \vartriangleleft \vec v_C$ and $\vec v_D\neq\vec v_C$, we can define $i=\min\{k\leq n \mid v^C_k>v^D_k\}$. Let $p:=v^D_i$ and $r:=\max\{k\leq n \mid v_i^D = v_k^D\}$; notice that $p\leq i$. We define a directed set $F$ such that $F\leq_T D$. $\blacktriangleright$ Suppose $p=i$ and let $F=\omega_i$. If $r=i$, then clearly $F=D_i$ and $F\leq_T D$ as sought. Else, by the coding $D_r=[\omega_r]^{<\omega_p}$. By Lemma~\ref{Lemma - below kappa theta}, we have $F\leq_T D$ as sought. $\blacktriangleright$ Suppose $p<i$ and let $F=\mathfrak D_{[\omega_i]^{<\omega_{p}}}$. By the coding $D_r=[\omega_r]^{<\omega_p}$ and by Clause~\ref{Clause - Catalan 2}, we have $F\leq_T D$ as sought. Notice that $\out(\mathcal I_{\bd}(F)) = \omega_p$ and $\cf(F)=\omega_i$. As $F\leq_T D$, it is enough to verify that $F\not\leq_T C$. As $\vec v_C$ is a good $(n+2)$-path, we know that $v_k^C>p$ for every $k\geq i$. Consider $A:=\{i\leq k\leq n \mid C_k \neq 1 \}$. We split into two cases: $\blacktriangleright$ Suppose $A=\emptyset$. Then $\cf(\prod_{k\leq n} C_k) <\omega_i$. As $\cf(F)=\omega_i$, by Lemma~\ref{Lemma - C leq_T D imply cf(C) leq cf(D)} we have that $F\not \leq_T \prod_{k\leq n} C_k$ as sought. $\blacktriangleright$ Suppose $A\neq \emptyset$. Let $E:=\prod_{k<i} C_k \times \prod_{k\in A} C_k$. Notice that $\cf(\prod_{k<i} C_k)<\omega_i$, $\prod_{i\leq k \leq n} C_k \equiv_T \prod_{k\in A} C_k$ and $C\equiv_T E$. Furthermore, for each $k\in A$, we have $\non(\mathcal I_{\bd}(C_k))>\omega_p$. By Lemma~\ref{Lemma - out(, kappa)}, we have $\omega_p\in \Inner(\mathcal I_{\bd}(E),\omega_i)$. Recall $\out(\mathcal I_{\bd}(F)) = \omega_p$. 
By Lemma~\ref{Lemma - kappa, theta no tukey map}, we get that $F\not\leq_T E$, hence $F\not \leq_T C$ as sought. \end{proof} \begin{figure} \caption{All good $4$-paths and the corresponding types in $\mathcal T_2$ they encode: $1$, $\omega$, $\omega_1$, $\omega_2$, $\omega\times\omega_1$, $\omega\times\omega_2$, $\omega_1\times\omega_2$, $[\omega_2]^{<\omega_1}$, $\omega\times\omega_1\times\omega_2$, $[\omega_1]^{<\omega}$, $\omega\times [\omega_2]^{<\omega_1}$, $[\omega_1]^{<\omega}\times\omega_2$, $[\omega_1]^{<\omega}\times[\omega_2]^{<\omega_1}$, $[\omega_2]^{<\omega}$.} \label{Figure - good 4-paths} \end{figure} \begin{theorem} The posets $(\mathcal T_n, <_T)$ and $(\mathcal K_{n+2},\vartriangleleft)$ are isomorphic. \end{theorem} \begin{proof} Define $f$ from $(\mathcal T_n,<_T)$ to $(\mathcal K_{n+2},\vartriangleleft)$, where for $C\in \mathcal T_n$, we let $f(C):=\vec v_C$. By Lemmas~\ref{Lemma - path < implies Tukey <} and~\ref{Lemma - path not < implies Tukey not <}, this is indeed an isomorphism of posets. Furthermore, we claim that $\mathcal T_n$ contains exactly one representative from each equivalence class of $(\mathcal S_n,\equiv_T)$. Recall that the function $\mathfrak F$ preserves Tukey equivalence classes. Consider two distinct $C,D\in \mathcal T_n$. As the coding is a bijection, $\vec v_C$ and $\vec v_D$ are different. Notice that either $\vec v_C \not \vartriangleleft \vec v_D$ or $\vec v_D \not \vartriangleleft \vec v_C$, hence by Lemma~\ref{Lemma - path not < implies Tukey not <}, $C\not \equiv_T D$ as sought. \end{proof} Consider the poset $(\mathcal T_n, <_T)$: clearly $1$ is a minimal element and, by Lemma~\ref{Lemma - below kappa theta}, $[\omega_n]^{<\omega}$ is a maximal element. By the previous theorem, the set of immediate successors of an element $D$ in the poset $(\mathcal T_n, <_T)$ is the set of all directed sets $C\in \mathcal T_n$ such that $\vec v_C$ is a $\vartriangleleft$-immediate successor of $\vec v_D$.
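To make the coding concrete, the following Python sketch decodes every good $(n+2)$-path into the canonical product it encodes, following the reverse recursion of the ``onto'' argument above; for $n=2$ it reproduces the fourteen types of Figure~\ref{Figure - good 4-paths}. As before, this is an informal illustration rather than part of the formal development: all names are ours, and the output uses plain-text labels for the factors, with \texttt{w\_k} standing for $\omega_k$.
\begin{verbatim}
from itertools import product

def good_paths(n):
    # Good n-paths as non-decreasing height vectors <p_0,...,p_{n-2}>, p_k <= k+1.
    vecs = product(*(range(k + 2) for k in range(n - 1)))
    return [p for p in vecs if all(p[k] <= p[k + 1] for k in range(n - 2))]

def factor(k, v):
    # The v-th entry of the vector <[w_k]^{<w_0}, ..., [w_k]^{<w_{k-1}}, w_k, 1>.
    if v == k + 1:
        return "1"
    if v == k:
        return f"w_{k}"
    return f"[w_{k}]^{{<w_{v}}}"

def decode(vec):
    # The canonical product encoded by the good (n+2)-path vec = <v_0,...,v_n>.
    n = len(vec) - 1
    D = [factor(n, vec[n])]
    for k in range(n - 1, -1, -1):
        D.insert(0, "1" if vec[k] == vec[k + 1] else factor(k, vec[k]))
    nontrivial = [d for d in D if d != "1"]
    return " x ".join(nontrivial) if nontrivial else "1"

if __name__ == "__main__":
    n = 2
    types = [decode(v) for v in good_paths(n + 2)]
    # The coding is one-to-one, and |T_2| equals the Catalan number c_4 = 14.
    assert len(types) == len(set(types)) == 14
    for v, t in zip(good_paths(n + 2), types):
        print(v, "->", t)
\end{verbatim}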
\begin{lemma} Suppose $G,H\in \mathcal T_n$, $H$ is an immediate successor of $G$ in the poset $(\mathcal T_n, <_T)$ and the pair $(\vec v_G, \vec v_H)$ is on the $l$-diagonal. Then there are directed sets $C, E, M, N$ such that: \begin{itemize} \item $G\equiv_T C\times M\times E$ and $H\equiv_T C\times N \times E$; \item for some $k<\omega$, $|C|<\omega_k$ and either $E\equiv_T 1$ or $\non(\mathcal I_{\bd}(E))>\omega_{k-l}$. \end{itemize} Furthermore, \begin{itemize} \item If $l=0$, then $M=1$ and $N=\omega_k$. \item If $l=1$, then $k\geq 1$ and $M=\omega_k$ and $N=[\omega_k]^{<\omega_{k-1}}$. \item If $l>1$, then $k>l$ and $M=[\omega_k]^{<\omega_{k-l+1}}$ and $N=[\omega_k]^{<\omega_{k-l}}$. \end{itemize} \end{lemma} \begin{proof} As $H$ is an immediate successor of $G$ in the poset $(\mathcal T_n, <_T)$, we have $\| \vec v_G - \vec v_H \|=1$. Let $k$ be the unique $k\leq n$ such that $v^G_k = v^H_k +1$. Let $\vec v_G:=\langle v^G_0, \dots, v^G_n \rangle$ be the good $(n+2)$-path coding $G$. We construct $\langle M_i \mid i\leq n \rangle$ by letting $M_i$ be the $v^G_i$-th element of the vector $\langle [\omega_i]^{<\omega},\dots, [\omega_i]^{<\omega_{i-1}} ,\omega_i, 1\rangle$ for every $i\leq n$. Notice that $G\equiv_T \prod_{i\leq n} M_i$. Similarly, we may construct $\langle N_i \mid i\leq n \rangle$ such that $H\equiv_T \prod_{i\leq n} N_i$. Clearly, $M_i=N_i$ for every $i\neq k$. Let $C:=\prod_{i<k} M_i$ and $E:=\prod_{i>k} M_i$. Notice that $|C|=\cf(C)<\omega_k$ and either $E\equiv_T 1$ or $\non(\mathcal I_{\bd}(E))>\omega_{k-l}$. Moreover, $G\equiv_T C\times M_k \times E$ and $H\equiv_T C\times N_k \times E$. We split into cases: \begin{itemize} \item If $l=0$, then $v^G_k=k+1$, hence $M_k=1$ and $N_k=\omega_k$; \item If $l=1$, then $v^G_k=k$, hence $M_k=\omega_k$ and $N_k=[\omega_k]^{<\omega_{k-1}}$; \item If $l>1$, then $v^G_k=k-l+1$, hence $M_k=[\omega_k]^{<\omega_{k-l+1}}$ and $N_k=[\omega_k]^{<\omega_{k-l}}$. \end{itemize} \end{proof} \begin{theorem}\label{Theorem - Intervals} Suppose $G,H\in \mathcal T_n$, $H$ is an immediate successor of $G$ in the poset $(\mathcal T_n, <_T)$ and the pair $(\vec v_G, \vec v_H)$ is on the $l$-diagonal. \begin{itemize} \item If $l=0$, then there is no directed set $D$ such that $G<_T D<_T H$; \item If $l>0$, then consistently there exists a directed set $D$ such that $G<_T D<_T H$. \end{itemize} \end{theorem} \begin{proof} Let $C,E, M, N$ be as in the previous lemma, so $G\equiv_T C\times M\times E$, $H\equiv_T C\times N \times E$ and for some $k\leq n$, $|C|<\omega_k$ and either $E\equiv_T 1$ or $\non(\mathcal I_{\bd}(E))>\omega_{k-l}$. We split into three cases: \begin{itemize} \item Suppose $l=0$; then $G\equiv_T C\times E$ and $H\equiv_T C\times \omega_k \times E$, so by Theorem~\ref{Theorem - Gap} there is no directed set $D$ such that $G<_T D <_T H$. \item Suppose $l=1$; then $k\geq 1$, $N=[\omega_k]^{<\omega_{k-1}}$ and $M=\omega_k$. \begin{itemize} \item Suppose $k=1$; then under the assumption $\mathfrak b =\omega_1$, by Theorem~\ref{Theorem - omega x omega_1 < D < [omega_1]^{<omega}} there exists a directed set $D$ such that $G<_T D <_T H$. \item Suppose $k>1$; then under the assumption $2^{\omega_{k-2}} = \omega_{k-1}$ and $2^{\omega_{k-1}}= \omega_k$, by Corollary~\ref{Cor - theta^+ x theta^++ < D < [theta^++]^{<theta+}} there exists a directed set $D$ such that $G<_T D <_T H$. \end{itemize} \item Suppose $l>1$; then $k\geq 2$. Let $\theta=\omega_{k-l}$. Notice $M=[\omega_k]^{\leq\theta}$ and $N=[\omega_k]^{<\theta}$. In Corollary~\ref{Corollary - directed set D_{mathcal C}} below, we shall show that under the assumption $\lambda^{\theta}<\kappa$ and $ \clubsuit^{\omega_{k-1}}_{J}(S,1) $ for some stationary set $S\subseteq E^{\omega_k}_{\theta}$, there exists a directed set $D$ such that $G<_T D <_T H$.\qedhere \end{itemize} \end{proof} \section{Empty intervals in $\mathcal D_{\aleph_n}$}\label{Section - Gaps} Given two successive directed sets in the poset $(\mathcal T_n , <_T)$, we can ask whether there exists some other directed set in between them in the Tukey order. The following theorem gives us a scenario in which there is no such directed set. \begin{theorem}\label{Theorem - Gap} Let $\kappa$ be a regular cardinal. Suppose $C$ and $E$ are two directed sets such that $\cf(C)<\kappa$, $\cf(E)$ is regular and either $\kappa \in \Inner(\mathcal I_{\bd}(E),\kappa)$ or $E\equiv_T 1$. Then there is no directed set $D$ such that $C\times E <_T D <_T C\times \kappa\times E$. \end{theorem} \begin{proof} By the upcoming Lemmas \ref{lemma41} and \ref{Lemma - 2nd gap}. \end{proof} \begin{lemma}\label{lemma41} Let $\kappa$ be a regular cardinal. Suppose $C$ is a directed set such that $\cf(C)<\kappa$ is regular. Then there is no directed set $D$ such that $C <_T D <_T C \times \kappa$. \end{lemma} \begin{proof} Suppose $D$ is a directed set such that $C <_T D <_T C \times \kappa$. 
By Lemma~\ref{Lemma - cofinal set with no large bounded set}, we may assume that $D$ is a directed set of size $\cf(D)$ such that every subset of $D$ of size $\cf(D)$ is unbounded in $D$. By Lemma~\ref{Lemma - C leq_T D imply cf(C) leq cf(D)} we get that $\cf(C) \leq \cf(D)\leq \kappa$. We split into two cases: $\blacktriangleright$ Suppose $\cf(C)\leq \cf(D) <\kappa$. Let $g:D\rightarrow C\times \kappa$ be a Tukey function. As $\cf(D)<\kappa$ and $\kappa$ is regular, there exists some $\alpha<\kappa$ such that $g"D \subseteq C\times \alpha$. We claim that $\pi_C \circ g$ is a Tukey function from $D$ to $C$, hence $D\leq_T C$, which is absurd. Suppose $X\subseteq D$ is unbounded in $D$; as $g$ is a Tukey function, we know that $g"X$ is unbounded. But as $\pi_{\kappa}\circ g"X$ is bounded by $\alpha$, we get that $(\pi_{C}\circ g)" X$ is unbounded in $C$ as sought. $\blacktriangleright$ Suppose $\cf(D)=\kappa$. Notice that $\kappa \in \hu(D)$ and $\kappa$ is regular, so by Lemma~\ref{Lemma - kappa leq_T D} we get that $\kappa\leq_T D$. We also know that $C\leq_T D$, thus $\kappa\times C \leq_T D$, which is absurd. \end{proof} Note that $\non(\mathcal I_{\bd}(E)) > \kappa$ implies that $\kappa \in \Inner(\mathcal I_{\bd}(E),\kappa)$. \begin{lemma}\label{Lemma - 2nd gap} Let $\kappa$ be a regular cardinal. Suppose $C$ and $E$ are two directed sets such that $\cf(C)<\kappa$ and $\kappa \in \Inner(\mathcal I_{\bd}(E),\kappa)$. Then there is no directed set $D$ such that $C\times E <_T D <_T C\times \kappa\times E$. \end{lemma} \begin{proof} Suppose $D$ is a directed set such that $C\times E \leq_T D \leq_T C\times \kappa\times E$; we will show that either $D\equiv_T C\times E$ or $D\equiv_T C\times \kappa \times E$. We may assume that every subset of $D$ of size $\cf(D)$ is unbounded and that $|C|=\cf(C)$. By Lemma~\ref{Lemma - C leq_T D imply cf(C) leq cf(D)}, we have that $\cf(E)=\cf(D)$. Suppose first that there exists some unbounded subset $X\in [D]^{\kappa}$ such that every subset $Y\in [X]^{<\kappa}$ is bounded. By Corollary~\ref{Cor - existence of full-unbounded kappa set}, this implies that $\kappa\leq_T D$. But as $C\times E\leq_T D$ and $D \leq_T C\times \kappa\times E$, this implies that $C\times \kappa \times E \equiv_T D$ as sought. Hereafter, suppose that for every unbounded subset $X\in [D]^{\kappa}$ there exists some unbounded subset $Y\in [X]^{<\kappa}$. Let $g:D\rightarrow C\times \kappa\times E$ be a Tukey function. Define $h:=\pi_{C\times E} \circ g$. Now, there are two main cases to consider: \begin{itemize} \item[$\blacktriangleright$] Suppose that every unbounded subset $X\subseteq D$ of size $\mu>\kappa$ which contains no unbounded subset of smaller cardinality is such that $h"X$ is unbounded in $C\times E$. We show that $h$ is Tukey; it is enough to verify that for every cardinal $\omega\leq \mu\leq \kappa$, every unbounded subset $X\subseteq D$ of size $\mu$ which contains no unbounded subset of smaller cardinality is such that $h"X$ is unbounded in $C\times E$. As $g$ is Tukey, the set $g"X$ is unbounded in $C\times \kappa \times E$. Notice that if the set $\pi_{C\times E}\circ g" X$ is unbounded, then we are done. Assume that $\pi_{C\times E}\circ g" X$ is bounded; then $\pi_{\kappa} \circ g"X$ is unbounded. $\blacktriangleright\blacktriangleright$ Suppose $|X|<\kappa$. As $|g"X|<\kappa$, we have that $\pi_{\kappa} \circ g"X$ is bounded, which is absurd. $\blacktriangleright\blacktriangleright$ Suppose $|X|=\kappa$; by the case assumption there exists some unbounded $Y \in [X]^{<\kappa}$. 
But this is absurd, as the assumption on $X$ was that $X$ contains no unbounded subset of size smaller than $\mu$. $\blacktriangleright\blacktriangleright$ Suppose $|X|>\kappa$; by the case assumption, $h"X$ is unbounded in $C\times E$, as sought. \item[$\blacktriangleright$] Suppose that some unbounded subset $X\subseteq D$ of size $\mu>\kappa$ which contains no unbounded subset of smaller cardinality is such that $h"X$ is bounded in $C\times E$. As $g$ is Tukey, $\pi_{\kappa}\circ g"X$ is unbounded. Let $X_\alpha := X\cap g^{-1} (C\times \{\alpha\}\times E )$ and $U_\alpha:=\bigcup_{\beta\leq \alpha} X_\beta$ for every $\alpha<\kappa$. As $g$ is Tukey and $g"U_\alpha$ is bounded, we get that $U_\alpha$ is also bounded by some $y_\alpha\in D$. Let $Y:=\{y_\alpha \mid \alpha<\kappa\}$. We claim that $Y$ is of cardinality $\kappa$: if it were not, then by the pigeonhole principle, as $\kappa$ is regular, there would be some $y_\alpha$ which bounds the set $X$, and that is absurd. Similarly, as $X$ is unbounded, the set $Y$, and also every subset of it of size $\kappa$, must be unbounded. Next, we aim to get $Z\in [Y]^\kappa$ such that $\pi_{C\times E}\circ g"Z$ is bounded by some $(c,e)\in C\times E$. This can be done as follows: As $|C|<\kappa$ and $\kappa$ is regular, by the pigeonhole principle, there exists some $Z_0\in [Y]^\kappa$ and $c\in C$ such that $g"Z_0 \subseteq \{c\}\times \kappa \times E$. Similarly, if $|\pi_{E}" Z_0|<\kappa$, by the pigeonhole principle, there exists some $Z\in [Z_0]^\kappa$ and $e\in E$ such that $g"Z \subseteq \{c\}\times \kappa \times \{e\}$. Else, if $|\pi_{E}" Z_0|=\kappa$, then as $\kappa \in\Inner(\mathcal I_{\bd}(E),\kappa )$, for some $B\in [\pi_{E}" Z_0]^\kappa$ and $e\in E$, the set $B$ is bounded in $E$ by $e$. Fix some $Z\in [Z_0]^\kappa$ such that $g"Z\subseteq \{c\}\times \kappa \times B$. Note that $Z$ is a subset of $Y$ of size $\kappa$, hence unbounded. By the case assumption, there exists some unbounded subset $W\in [Z]^{<\kappa}$. Note that as $\kappa$ is regular, for some $\alpha<\kappa$, $\pi_{\kappa} \circ g"W\subseteq \alpha$. As $g$ is Tukey, the subset $g"W\subseteq \{c\}\times \kappa \times E$ is unbounded in $C\times \kappa \times E$, but this is absurd as $g"W$ is bounded by $(c,\alpha,e)$. \qedhere \end{itemize} \end{proof} \section{Non-empty intervals}\label{Section - No Gaps} In this section we consider three types of intervals in the poset $(\mathcal T_n, <_T)$ and show that each one can consistently have a directed set inside. \subsection{Directed set between $ \theta^+ \times \theta^{++}$ and $[\theta^{++}]^{\leq \theta}$} In {\cite[Theorem~1.1]{kuzeljevic2021cofinal}}, the authors constructed a directed set between $\omega_1\times \omega_2$ and $[\omega_2]^{\leq \omega}$ under the assumption $2^{\aleph_0}=\aleph_1$, $2^{\aleph_1}=\aleph_2$ and the existence of an $\aleph_2$-Souslin tree. In this subsection we generalize this result while removing the assumption concerning the Souslin tree. The main corollary of this subsection is: \begin{cor}\label{Cor - theta^+ x theta^++ < D < [theta^++]^{<theta+}} Assume $\theta$ is an infinite cardinal such that $2^\theta = \theta^+$, $2^{\theta^+}= \theta^{++}$. Suppose $C$ and $E$ are directed sets such that $\cf(C)\leq \theta^+$ and either $\non(\mathcal I_{\bd} (E))>\theta^+$ or $E\equiv_T 1$. 
Then there exists a directed set $D$ such that $ C\times \theta^+ \times \theta^{++}\times E <_T C\times D\times E <_T C\times[\theta^{++}]^{\leq \theta}\times E.$ \end{cor} The result follows immediately from Theorems~\ref{theorem - D between theta x times+ and [theta++]<=theta} and \ref{Theorem - directed set from coloring}. First we prove the following required lemma. \begin{lemma}\label{Lemma - not <_T cofinal unbounded >theta and >theta bounded} Suppose $\theta$ is an infinite cardinal and $D,J,E$ are three directed sets such that: \begin{itemize} \item $\cf(D)=\cf(J)=\theta^{++}$; \item $\theta^+ \in \Inner(\mathcal I_{\bd}(D), \theta^{++})$ and $\out(\mathcal I_{\bd}(J))\leq\theta^+$; \item $\non(\mathcal I_{\bd}(E))>\theta^+$ or $E\equiv_T 1$; \item $D\times E\leq_T J\times E$. \end{itemize} Then $J\times E\not \leq_T D\times E$. \end{lemma} \begin{proof} Notice that $D$ is a directed set such that every subset of size $\theta^{++}$ contains a bounded subset of size $\theta^+$. Let us fix a cofinal subset $A\subseteq J$ of size $\theta^{++}$ such that every subset of $A$ of size $>\theta$ is unbounded in $J$. Suppose on the contrary that $J\times E\leq_T D\times E$. As $D\times E\leq_T J\times E$ we get that $D\times E\equiv_T J\times E$, hence there exists some directed set $X$ such that both $D\times E$ and $J\times E$ are cofinal subsets of $X$. We may assume that $D$ has an enumeration $D:=\{d_\alpha \mid \alpha<\theta^{++}\}$ such that for every $\beta<\alpha<\theta^{++}$ we have $d_\alpha \not <d_\beta$. Fix some $e\in E$. Now, for each $a\in A$ take a unique $x_a\in D$ and some $e_a\in E$ such that $(a,e)\leq_X (x_a,e_a)$. To do that, enumerate $A=\{a_\alpha \mid \alpha<\theta^{++}\}$. Suppose we have already constructed the increasing sequence $\langle \nu_\beta \mid \beta<\alpha \rangle$ of elements of $\theta^{++}$. Pick some $\xi<\theta^{++}$ above $\{ \nu_\beta \mid \beta<\alpha \}$. As $D\times E$ is a directed set cofinal in $X$, we may fix some $(x_{a_\alpha},e_{a_\alpha}):=(d_{\nu_\alpha},e_{a_\alpha})\in D\times E$ above both $(a_\alpha,e)$ and $(d_\xi,e)$. Set $T=\{x_a\mid a\in A\}$; since $A\times E$ is cofinal in $X$, the set $T\times E$ is also cofinal in $X$ and in $D\times E$. As $|T|=\theta^{++}$, we get that there exists some subset $B\in [T]^{\theta^+}$ bounded in $D$. Let $c\in D$ be such that $b\leq c$ for each $b\in B$. Consider the set $K=\{a\in A \mid x_a \in B \}$. So $P:=\{(x_a, e_a)\mid a\in K\}$ is bounded in $X$, as $\{e_a\mid a\in K\}$ is of size $\leq\theta^+$ and hence bounded by some $\tilde e\in E$. Since $B$ is of size $>\theta$, the set $K$ is also of size $>\theta$. Thus, by the assumption on $A$, the set $K\times\{e\}$ is unbounded in $J\times E$, but also in $X$, because $J\times E$ is a cofinal subset of $X$. Then, for each $a\in K$ we have $(a,e)\leq_X (x_a,e_a) \leq_X (c,\tilde e)$, contradicting the unboundedness of $K\times \{e\}$ in $X$. \end{proof} \begin{theorem}\label{theorem - D between theta x times+ and [theta++]<=theta} Suppose $\theta$ is an infinite cardinal and $C,D,E$ are directed sets such that: \begin{enumerate}[label=(\arabic*)] \item $\cf(D)=\theta^{++}$; 
\item\label{Clause - every maximal size subset if unbounded} Every subset of $D$ of size $\theta^{++}$ is unbounded; \item\label{Clause - unbounded set of size theta} For every partition $D=\bigcup_{\gamma<\theta^+} D_\gamma$, there is an ordinal $\gamma<\theta^+$ and an unbounded $K\subseteq D_\gamma$ of size $\theta^+$; \item\label{Clause - bounded lambda set} $\theta^+ \in \Inner(\mathcal I_{\bd}(D), \theta^{++})$ and $\non(\mathcal I_{\bd}(D))=\theta^+$; \item $\non(\mathcal I_{\bd}(E))>\theta^+$ or $E\equiv_T 1$; \item $C$ is a directed set such that $\cf(C)\leq\theta^+$. \end{enumerate} Then $ C\times \theta^+ \times \theta^{++}\times E <_T C\times D\times E <_T C\times[\theta^{++}]^{\leq \theta}\times E.$ \end{theorem} \begin{proof} We open with a claim. \begin{claim} $\theta^+\times \theta^{++}\leq_T D$. \end{claim} \begin{subproof} As every subset of $D$ of size $\theta^{++}$ is unbounded, we get by Lemma~\ref{Lemma - kappa leq_T D} that $\theta^{++}\leq_T D$. Let $K$ be an unbounded subset of $D$ of size $\theta^+$; as every subset of size $\leq\theta$ is bounded, by Corollary~\ref{Cor - existence of full-unbounded kappa set} we get that $\theta^+\leq_T D$. Finally, we get that $\theta^+\times \theta^{++} \leq_T D$ as sought. \end{subproof} \begin{claim} $D \leq_T [\theta^{++}]^{\leq \theta}$. \end{claim} \begin{subproof} By Lemma~\ref{Lemma - below kappa theta}. \end{subproof} Notice that this implies that $ C\times \theta^+ \times \theta^{++}\times E \leq_T C\times D\times E \leq_T C\times [\theta^{++}]^{\leq \theta}\times E.$ By Lemma~\ref{Lemma - D not<= C x E}, as $|C\times \theta^+|\leq \theta^+$, $\non(\mathcal I_{\bd}(\theta^{++}\times E))>\theta^+$, and Clause~\ref{Clause - unbounded set of size theta} holds, we get that $D\not\leq_T C\times \theta^+\times \theta^{++}\times E$. \begin{claim} $C\times [\theta^{++}]^{\leq \theta}\times E \not \leq_T C\times D\times E$. \end{claim} \begin{subproof} Recall that $\mathfrak D_{ [\theta^{++}]^{\leq \theta}}\equiv_T [\theta^{++}]^{\leq \theta}$. Notice the following: \begin{itemize} \item $\cf(C\times D)=\cf(C\times \mathfrak D_{[\theta^{++}]^{\leq\theta}})=\theta^{++}$; \item By Clause~\ref{Clause - bounded lambda set} we have that $\theta^+ \in \Inner(\mathcal I_{\bd}(C\times D), \theta^{++})$ and $\out(\mathcal I_{\bd}(C\times \mathfrak D_{ [\theta^{++}]^{\leq \theta}}))\leq \theta^{+}$; \item $\non(\mathcal I_{\bd}(E))>\theta^+$ or $E\equiv_T 1$; \item $C\times D\times E\leq_T C\times \mathfrak D_{[\theta^{++}]^{\leq\theta}}\times E$. \end{itemize} So by Lemma~\ref{Lemma - not <_T cofinal unbounded >theta and >theta bounded} we are done.\qedhere \end{subproof}\qedhere \end{proof} We are left with proving the following theorem, in which we define a directed set $D_c$ using a coloring $c$. \begin{theorem}\label{Theorem - directed set from coloring} Suppose $\theta$ is an infinite cardinal such that $2^\theta= \theta^+$ and $2^{\theta^+}=\theta^{++}$. Then there exists a directed set $D$ such that: \begin{enumerate}[label=(\arabic*)] \item $\cf(D)=\theta^{++}$; \item Every subset of $D$ of size $\theta^{++}$ is unbounded; \item For every partition $D=\bigcup_{\gamma<\theta^+} D_\gamma$, there is an ordinal $\gamma<\theta^+$ and an unbounded $K\subseteq D_\gamma$ of size $\theta^+$; \item $\theta^+ \in \Inner(\mathcal I_{\bd}(D), \theta^{++})$ and $\non(\mathcal I_{\bd}(D))=\theta^+$. \end{enumerate} \end{theorem} The rest of this subsection is dedicated to proving Theorem~\ref{Theorem - directed set from coloring}. 
The arithmetic hypothesis will only play a role later on. Let $\theta$ be an infinite cardinal. For two sets of ordinals $A$ and $B$, we denote $A\circledast B:= \{(\alpha,\beta )\in A\times B\mid \alpha<\beta\}$. Recall that by \cite[Corollary~7.3]{paper47}, $\onto(\mathcal S,J^{\bd}[\theta^{++}],\theta^+)$ holds for $\mathcal S:=[\theta^{++}]^{\theta^{++}}$. This means that we may fix a coloring $c:[\theta^{++}]^2\rightarrow \theta^+$ such that for every $S\in\mathcal S$ and unbounded $B\subseteq\theta^{++}$, there exists $\delta\in S$ such that $c"(\{\delta\}\circledast B)=\theta^+$. We fix some $S\in\mathcal S$. For our purpose, it will suffice to assume that $S$ is nothing but the whole of $\theta^{++}$. Let $$D_{c}:=\{ X\in [\theta^{++}]^{\leq \theta^{++}} \mid \forall \delta\in S[ \{c(\delta,\beta) \mid \beta\in X\setminus(\delta+1)\}\in \ns_{\theta^+} ]\}.$$ Consider $D_{c}$ ordered by inclusion, and notice that $D_{c}$ is a directed set since $\ns_{\theta^+}$ is an ideal. \begin{prop} The following hold: \begin{itemize} \item $[\theta^{++}]^{\leq\theta} \subseteq D_{c} \subseteq [\theta^{++}]^{\leq \theta^+}$. \item $\non(\mathcal I_{\bd}(D_c))\geq\add(\mathcal I_{\bd}(D_c))\geq\theta^+$, i.e. the union of any family of fewer than $\theta^+$ bounded subsets of $D_{c}$ is bounded. \item If $2^{\theta^+}=\theta^{++}$, then $|D_c|=\theta^{++}$, and hence $D_c\in\mathcal D_{\theta^{++}}$.\qed \end{itemize} \end{prop} \begin{lemma}\label{D_C - Lemma 5.3} For every partition $D_{c}=\bigcup_{\gamma<\theta^+} D_\gamma$, there is an ordinal $\gamma<\theta^+$ and an unbounded $E\subseteq D_\gamma$ of size $\theta^+$. \end{lemma} \begin{proof} As $[\theta^{++}]^1$ is a subset of $D_{c}$, the family $\{D_{\gamma }\mid \gamma<\theta^+\}$ induces a partition of the set $[\theta^{++}]^1$ into $\theta^+$ many sets. As $\theta^+<\theta^{++}=\cf(\theta^{++})$, by the pigeonhole principle we get that for some $\gamma<\theta^+$ and $b\in [\theta^{++}]^{\theta^{++}}$, we have $[b]^1 \subseteq D_{\gamma}$. Notice that by the assumption on the coloring $c$, there exist some $\delta\in S$ and $b'\in [b]^{\theta^+}$ with $\min(b')>\delta$ such that $c"(\{\delta\} \circledast b')=\theta^+$. Clearly the set $E:=[b']^1$ is a subset of $ D_{\gamma}$ of size $\theta^+$ which is unbounded in $D_{c}$. \end{proof} \begin{lemma}\label{D_C - Lemma 5.4} Suppose $2^{\theta}=\theta^+$. Then $\theta^+ \in \Inner(\mathcal I_{\bd}(D_{c}),\theta^{++})$. \end{lemma} \begin{proof} We follow the proof of \cite[Lemma~5.4]{kuzeljevic2021cofinal}. Let $D'$ be a subset of $D_{c}$ of size $\theta^{++}$; we will show that it contains a bounded subset of size $\theta^+$. Let us enumerate it as $\{T_\gamma \mid \gamma <\theta^{++}\}$. For each $X\in D_{c}$ and $\gamma\in S$, let $N^X_\gamma$ denote the non-stationary set $\{c(\gamma,\beta)\mid \beta\in X\setminus(\gamma+1)\}$, and let $G^X_\gamma$ denote a club in $\theta^+$ disjoint from $N^X_\gamma$. As $2^{\theta}=\theta^+$ we may fix a sufficiently large regular cardinal $\mu$, and an elementary submodel $M\prec H_{\mu}$ of cardinality $\theta^+$ containing all the relevant objects and such that $M^{\theta} \subseteq M$. Denote $\delta:=M\cap \theta^{++}$ and notice that $\delta \in E^{\theta^{++}}_{\theta^+}$. Fix an increasing sequence $\langle \gamma_\xi \mid \xi <\theta^+\rangle$ in $\delta$ such that $\sup\{\gamma_\xi \mid \xi<\theta^+\}=\delta$. Enumerate $\delta\cap S=\{s_\xi \mid \xi<\theta^+\}$. 
In order to simplify notation, let $G^\gamma_\xi$ denote the set $G^{T_\gamma}_{s_\xi}$ for each $\gamma<\theta^{++}$ and $\xi<\theta^+$. We construct by recursion on $\xi<\theta^+$ three sequences $\langle \delta_\xi \mid \xi<\theta^+\rangle$, $\langle \Gamma_\xi \mid \xi<\theta^+\rangle$ and $\langle \eta_\xi \mid \xi<\theta^+\rangle$ with the following properties: \begin{enumerate}[label=(\arabic*)] \item $\langle \delta_\xi \mid \xi<\theta^+\rangle$ is an increasing sequence converging to $\delta$; \item $\langle \Gamma_\xi\mid \xi<\theta^+ \rangle$ is a decreasing $\subseteq$-chain of stationary subsets of $\theta^{++}$, each one containing $\delta$ and definable in $M$; \item $\langle \eta_\xi \mid \xi<\theta^+ \rangle$ is an increasing sequence of ordinals below $\theta^+$; \item $G^\delta_{\zeta} \cap \eta_{\mu} = G^{\delta_{\mu}}_{\zeta}\cap \eta_{\mu} $ for $\zeta\leq \mu<\theta^+$. \end{enumerate} $\blacktriangleright$ Base case: Let $\eta_0$ be the first limit point of $G^\delta_0$. Notice that $G^\delta_0\cap \eta_0$ is an infinite set of size $\leq\theta$ below $\delta$, hence it is an element of $M$. Let $$\Gamma_0:=\{\gamma<\theta^{++} \mid G^\delta_0\cap \eta_0 = G^\gamma_0\cap \eta_0\}.$$ Since $\delta\in \Gamma_0$, the set $\Gamma_0$ is stationary in $\theta^{++}$. Let $\delta_0:=\min(\Gamma_0)$. $\blacktriangleright$ Suppose $\xi_0<\theta^+$, and that $\delta_\xi$, $\Gamma_\xi$ and $\eta_\xi$ have been constructed for each $\xi<\xi_0$. Let $\eta_{\xi_0}$ be the first limit point of $G^\delta_{\xi_0}\setminus \sup\{\eta_\xi \mid \xi<\xi_0\}$. Consider the set $$ \Gamma_{\xi_0}:=\{\gamma \in \bigcap_{\xi<\xi_0}\Gamma_\xi \mid \forall \xi\leq \xi_0 [G^\delta_\xi\cap \eta_{\xi_0} = G^\gamma_{\xi}\cap \eta_{\xi_0}]\}.$$ Since $\Gamma_{\xi_0}$ belongs to $M$, and since $\delta\in \Gamma_{\xi_0}$, it must be that $\Gamma_{\xi_0}$ is stationary in $\theta^{++}$. Since $\Gamma_{\xi_0}$ is cofinal in $\theta^{++}$ and belongs to $M$, the set $\delta\cap \Gamma_{\xi_0}$ is cofinal in $\delta$. Define $\delta_{\xi_0}$ to be the minimal ordinal in $\delta\cap \Gamma_{\xi_0}$ greater than both $\sup\{\delta_\xi \mid \xi<\xi_0\}$ and $\gamma_{\xi_0}$. It is clear from the construction that conditions (1)--(4) are satisfied. The following claim gives us the wanted result. \begin{claim} The set $\{T_{\delta_\xi}\mid \xi<\theta^+\}$ is a subset of $D'$ of size $\theta^+$ which is bounded in $D_{c}$. \end{claim} \begin{subproof} As the order on $D_{c}$ is $\subseteq$, it suffices to prove that the union $T=\bigcup_{\xi<\theta^+} T_{\delta_\xi}$ belongs to $D_{c}$. Since, for each $\xi<\theta^+$, both $\delta_\xi$ and $\langle T_\gamma \mid \gamma<\theta^{++} \rangle$ belong to $M$, it must be that $T_{\delta_\xi}\in M$. Since $\theta^+\in M$ and $M\models |T_{\delta_\xi}|\leq \theta^+$, we have $T_{\delta_\xi}\subseteq M$. Thus $T\subseteq M$ and furthermore $T\subseteq \delta$. This means that, in order to prove that $T\in D_{c}$, it is enough to prove that for each $t \in S\cap \delta$, the set $\{c(t,\beta)\mid \beta\in T\setminus(t+1)\}$ is non-stationary in $\theta^+$. Fix some $t\in S\cap \delta$. Let $\zeta<\theta^+$ be such that $s_\zeta = t$. 
Define $$ G:=G^\delta_\zeta \cap (\bigcap_{\xi\leq \zeta} G^{\delta_\xi}_\zeta)\cap (\triangle_{\xi<\theta^+} G^{\delta_\xi}_\zeta).$$ Since the intersection of $< \theta^+$-many clubs in $\theta^+$ is a club, and since diagonal intersection of $\theta^+$ many clubs is a club, we know that $G$ is a club in $\theta^+$. We will prove that $G\cap \{c(t,\beta)\mid \beta\in T\subseteqetminus(t+1)\}=\emptyset$. Suppose $\alpha<\theta^+$ is such that $\alpha\in G \cap \{c(t,\beta)\mid \beta\in T\subseteqetminus(t+1)\}$. This means that $\alpha\in G$ and that for some $\mu<\theta^+$ and $\beta\in T_{\delta_\mu}\subseteqetminus(t+1)$ we have $\alpha = c(t,\beta)$. So $\alpha \in N^{T_{\delta_\mu}}_{t}$. Note that this implies that $\alpha\notin G^{\delta_\mu}_\zeta$. Let us split to three cases: $\blacktriangleright$ Suppose $\mu\leq \zeta$, then since $\alpha\in \bigcap_{\xi\leq \zeta} G^{\delta_\xi}_\zeta$, we have that $\alpha\in G^{\delta_\mu}_\zeta$ which is clearly contradicting $\alpha\notin G^{\delta_\mu}_\zeta$. $\blacktriangleright$ Suppose $\mu>\zeta$ and $\alpha<\eta_\mu$. Then by $(4)$, we have that $G^\delta_\zeta \cap \eta_\mu =G^{\delta_\mu}_\zeta \cap \eta_\mu$. As $\alpha \not \in G^{\delta_\mu}_\zeta$ and $\alpha<\eta_\mu$, it must be that $\alpha\notin G^\delta_\zeta$. Recall that $\alpha \in G$, but this is absurd as $G\subsetequbseteq G^\delta_\zeta$ and $\alpha\notin G^\delta_\zeta$. $\blacktriangleright$ Suppose $\mu>\zeta$ and $\alpha\geq \eta_\mu\geq\mu$. As $\alpha\in G$, we have that $\alpha\in \triangle_{\xi<\theta^+} G^{\delta_\xi}_\zeta$. As $\alpha>\mu$, we get that $\alpha\in G^{\delta_\mu}_\zeta$ which is clearly contradicting $\alpha\notin G^{\delta_\mu}_\zeta$.\qedhere \end{subproof}\qedhere \end{proof} \subsetequbsection{Directed set between $\omega\times\omega_1$ and $[\omega_1]^{<\omega}$} As mentioned in \cite{MR3247063}, by the results of Todor\v{c}evi\'{c} \cite{MR980949}, it follows that under the assumption $\mathfrak b=\omega_1$ there exists a directed set of size $\omega_1$ between the directed sets $\omega\times\omega_1$ and $[\omega_1]^{<\omega}$. In this subsection we spell out the details of this construction. \begin{fact}[Todor\v{c}evi\'{c}, {\cite[Theorem~1.1]{MR980949}}]\label{Fact - b=omega_1} Suppose $A$ is an uncountable sequence of ${}^{\omega}\omega$ of increasing functions which are $<^*$-increasing and unbounded, then there are $f,g\in A$ such that $f<g$. \end{fact} \begin{theorem}\label{Theorem - omega x omega_1 < D < [omega_1]^{<omega}} Assume $\mathfrak b =\omega_1$. Suppose $E$ is a directed set such that $\non(\mathcal I_{\bd} (E))>\omega$, then $\omega\times E <_T D\times E <_T [\omega_1]^{<\omega}\times E$. \end{theorem} \begin{proof} Let $\mathcal F:=\langle f_\alpha \mid \alpha<\omega_1 \rangle\subsetequbseteq {}^{\omega}\omega$ witness $\mathfrak b=\omega_1$. Recall $\mathcal F$ is a $<^*$-increasing and unbounded sequence, i.e. for every $g\in {}^{\omega}\omega$, there exists some $\alpha<\omega_1$ such that $f_\beta\not\leq^* g$, whenever $\alpha<\beta<\omega_1$. For a finite set of functions $F\subsetequbseteq {}^{\omega}\omega$, we define a function $h:=\max(F)$ which is $<$-above every function in $F$ by letting $h(n):=\max\{f(n)\mid f\in F\}$. We consider the directed set $D:=\{ \max(F)\mid F\subsetequbseteq \mathcal F,~ |F|<\aleph_0 \}$, ordered by the relation $<$, clearly $D$ is a directed set. 
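Indeed, for any two finite $F_1,F_2\subseteq\mathcal F$ and every $n<\omega$ we have $$\max(F_1)(n),\ \max(F_2)(n)\ \leq\ \max\{f(n)\mid f\in F_1\cup F_2\}=\max(F_1\cup F_2)(n),$$ so any two elements of $D$ are dominated by the single element $\max(F_1\cup F_2)\in D$.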
\begin{claim}\label{Claim - D_b every uncountable set contains infinite unbounded subset} Every uncountable subset $X\subsetequbseteq D$ contains a countable unbounded subset $B\subsetequbset X$. \end{claim} \begin{subproof} Let $X$ be an uncountable subset of $D$. As $\mathcal F$ is a $<^*$-increasing and unbounded, also $X$ contains an uncountable $<^*$-unbounded subset $Y\subsetequbseteq X$. As no function $g:\omega\rightarrow \omega$ is $<^*$-bounding the set $Y$, we can find an infinite countable subset $B\subsetequbseteq Y$ and $n<\omega$ such that $\{f(n)\mid f\in B \}$ is infinite. Clearly $B$ is $<$-unbounded in $D$ as sought. \end{subproof} \begin{claim}\label{Claim - D_b every uncountable set contains infinite bounded subset} $\omega \in \Inner(\mathcal I_{\bd}(D),\omega_1) $. \end{claim} \begin{subproof} We show that every uncountable subset of $D$ contains a countable infinite bounded subset. Let $A\subsetequbseteq D$ be an uncountable set, we may refine $A$ and assume that it is $<^*$-increasing and unbounded. We enumerate $A:=\{g_\alpha\mid \alpha<\omega_1\}$ and define a coloring $c:[\omega_1]^2\rightarrow 2$, letting $c(\alpha,\beta)=1$ iff $g_\alpha<g_\beta$. Recall that Erd\"os and Rado showed that $\omega_1\rightarrow (\omega_1,\omega+1)^2$, so either there is an uncountable homogeneous set of color $0$ or there exists an homogeneous set of color $1$ of order-type $\omega+1$. Notice that Fact~\ref{Fact - b=omega_1} contradicts the first alternative, so the second one must hold. Let $X\subsetequbseteq \omega_1$ be a set such that $\otp(X)=\omega+1$ and $c"[X]^2=\{1\}$, notice that $\{g_\alpha \mid \alpha\in X\}$ is an infinite countable subset of $A$ which is $<$-bounded by the function $g_{\max(X)}\in A$ as sought.\qedhere \end{subproof} Clearly as $\cf(D)=\omega_1$, we have that $D\times E \leq_T [\omega_1]^{<\omega}\times E$. \begin{claim} $\omega\times \omega_1\times E\leq_T D\times E$. \end{claim} \begin{subproof} As every subset of $D$ of size $\omega_1$ is unbounded, we get by Lemma~\ref{Lemma - kappa leq_T D} that $\omega_1\leq_T D$. As $D$ is a directed set, every finite subset of $D$ is bounded. By Claim~\ref{Claim - D_b every uncountable set contains infinite unbounded subset}, $D$ contains an infinite countable unbounded subset, so by Corollary~\ref{Cor - existence of full-unbounded kappa set} we have $\omega \leq_T D$. Finally, $\omega\times \omega_1 \leq_T D$ as sought. \end{subproof} \begin{claim} $D\not \leq_T\omega \times E$. \end{claim} \begin{subproof} By Claim~\ref{Claim - D_b every uncountable set contains infinite unbounded subset} and Lemma~\ref{Lemma - D not<= C x E}. \end{subproof} \begin{claim} $[\omega_1]^{<\omega} \not \leq_T D\times E$. \end{claim} \begin{subproof} By Claim~\ref{Claim - D_b every uncountable set contains infinite bounded subset}, every uncountable subset of $D$ contains an infinite countable bounded subset and every countable subset of $E $ is bounded, we get that $\omega \in \Inner(\mathcal I_{\bd}(D\times E),\omega_1)$. As $\out(\mathcal I_{\bd}([\omega_1]^{<\omega}))=\omega$ by Lemma~\ref{Lemma - kappa, theta no tukey map} we get that $[\omega_1]^{<\omega} \not \leq_T D\times E$ as sought. 
\end{subproof}\qedhere \end{proof} \subsetequbsection{Directed set between $[\lambda]^{<\theta}\times [\lambda^+]^{\leq \theta}$ and $[\lambda^+]^{< \theta}$} In {\cite[Theorem~1.2]{kuzeljevic2021cofinal}}, the authors constructed a directed set between $[\omega_1]^{<\omega}\times [\omega_2]^{\leq \omega}$ and $[\omega_2]^{< \omega}$ under the assumption $2^{\aleph_0}=\aleph_1$, $2^{\aleph_1}=\aleph_2$ and the existence of a non-reflecting stationary subset of $E^{\omega_2}_\omega$. In this subsection we generalize this result while waiving the assumption concerning the stationary set. We commence by recalling some classic guessing principles and introducing a weak one, named $ \clubsuit^\mu_{J}(S,1) $, which will be useful for our construction. \begin{definition}\label{principles} For a stationary subset $ S\subsetequbseteq \kappa $: \begin{enumerate} \item $ \diamondsuit(S) $ asserts the existence of a sequence $ \langle C_\alpha \mid \alpha\in S \rangle $ such that: \begin{itemize} \item for all $ \alpha\in S $, $ C_\alpha \subsetequbseteq \alpha $; \item for every $B\subseteq \kappa$, the set $\{\alpha\in S\mid B\cap\alpha=C_\alpha \}$ is stationary. \end{itemize} \item $ \clubsuit(S) $ asserts the existence of a sequence $ \langle C_\alpha\mid \alpha\in S \rangle $ such that: \begin{itemize} \item\label{Definiton clubsuit - Clause A_alpha} for all $ \alpha\in S\cap \acc(\kappa) $, $ C_\alpha $ is a cofinal subset of $\alpha$ of order type $\cf(\alpha)$; \item\label{Definiton clubsuit - Clause guess} for every cofinal subset $ B\subsetequbseteq \kappa$, the set $\{\alpha\in S \mid C_\alpha \subsetequbseteq B \}$ is stationary. \end{itemize} \item \label{clubsuit^w(S)} $ \clubsuit^\mu_{J}(S,1) $ asserts the existence of a sequence $ \langle C_\alpha\mid \alpha\in S \rangle $ such that: \begin{itemize} \item for all $ \alpha\in S\cap \acc(\kappa) $, $ C_\alpha $ is a cofinal subset of $\alpha$ of order type $\cf(\alpha)$; \item\label{clubsuits_J_unboundedsubset} for every partition $\langle A_\beta \mid \beta<\mu \rangle$ of $\kappa$ there exists some $\beta<\mu$ such that the set $\{\alpha\in S \mid \subsetequp(C_\alpha \cap A_\beta)=\alpha \}$ is stationary. \end{itemize} \end{enumerate} \end{definition} Recall that by a Theorem of Shelah \cite{Sh_922}, for every uncountable cardinal $\lambda$ which satisfy $2^\lambda = \lambda^+$ and every stationary $S\subsetequbseteq E^{\lambda^+}_{\neq \cf(\lambda)}$, $\Diamond(S)$ holds. It is clear that $\Diamond(S) \Rightarrow \clubsuit(S) \Rightarrow \clubsuit^\lambda_{J}(S,1) $. The main corollary of this subsection is: \begin{cor}\label{Corollary - directed set D_{mathcal C}} Let $\theta<\lambda$ be two regular cardinals. Assume $\lambda^{\theta}<\lambda^+$ and $\clubsuit^\lambda_{J}(S,1) $ holds for some $S\subsetequbseteq E^{\lambda^+}_\theta$. Suppose $C$ and $E$ are two directed sets such that $\cf(C)<\lambda^+$ and $\non(\mathcal I_{\bd} (E))>\theta$ or $E\equiv_T 1$. Then there exists a directed set $D_{\mathcal C}$ such that: $$C\times [\lambda]^{<\theta}\times [\lambda^+]^{\leq \theta}\times E<_T C\times [\lambda]^{<\theta}\times D_{\mathcal C} \times E <_T C\times [\lambda^+]^{< \theta} \times E.$$ \end{cor} In the rest of this subsection we prove this result. Suppose $\mathcal C:=\langle C_\alpha \mid \alpha \in S \rangle$ is a $C$-sequence for some stationary set $S\subsetequbseteq E^{\lambda^+}_\theta$, i.e. $C_\alpha$ is a cofinal subset of $\alpha$ of order-type $\theta$, whenever $\alpha\in S$. 
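Note that a $\clubsuit^\lambda_{J}(S,1)$-sequence is in particular a $C$-sequence of this form. Let us also record why the implication $\clubsuit(S)\Rightarrow\clubsuit^\lambda_{J}(S,1)$ noted above holds in the case relevant here, namely $\kappa=\lambda^+$ and $\mu=\lambda$: given a partition $\langle A_\beta \mid \beta<\lambda \rangle$ of $\lambda^+$, since $\lambda<\cf(\lambda^+)$ some piece $A_\beta$ must be cofinal in $\lambda^+$; if $\langle C_\alpha \mid \alpha\in S \rangle$ is a $\clubsuit(S)$-sequence, then $\{\alpha\in S \mid C_\alpha\subseteq A_\beta\}$ is stationary, and every such $\alpha\in\acc(\lambda^+)$ satisfies $\sup(C_\alpha\cap A_\beta)=\sup(C_\alpha)=\alpha$, as required.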
We define the directed set $D_{\mathcal C} :=\{ Y\in [\lambda^+]^{\leq \theta}\mid \forall \alpha\in S [|Y\cap C_\alpha|<\theta] \}$ ordered by $\subsetequbseteq$. Notice that $\non(\mathcal I_{\bd}(D_{\mathcal C}))= \theta$ and $[\lambda^+]^{< \theta}\subsetequbseteq D_{\mathcal C}$. Recall that by Hausdorff's formula $(\lambda^+)^\theta = \max\{\lambda^+,\lambda^\theta\}$, so if $\lambda^{\theta}<\lambda^+$, then $(\lambda^+)^\theta=\lambda^+$. So we may assume $|D_{\mathcal C}|=\lambda^+$. \begin{claim}\label{Claim - D_C from below} Suppose $|D_{\mathcal C}|=\lambda^+$, then $[\lambda^+]^{\leq \theta} \leq_T D_{\mathcal C}$. \end{claim} \begin{proof} Fix a bijection $\phi:D_{\mathcal C}\rightarrow \lambda^+$. Denote $X:=\{x\cup\{\phi(x)\}\mid x\in D_{\mathcal C}\}$, clearly $X$ is cofinal subset of $D_{\mathcal C}$. Notice that for every $Y\subsetequbseteq [X]^{\theta^+}$, there exists $Z\in [\lambda^+]^{\theta^+}$ such that $Z\subsetequbseteq \bigcup Y$. Since every set in $D_{\mathcal C}$ is of size $\leq\theta$, we get that every subset of $X$ of size $>\theta$ is unbounded in $D_{\mathcal C}$. Let us fix some injective function $g:[\lambda^+]^{\leq \theta} \rightarrow X$, then $g$ witnesses that $[\lambda^+]^{\leq \theta} \leq_T D_{\mathcal C}$. \end{proof} Notice that by Lemma~\ref{Lemma - below kappa theta} and Claim~\ref{Claim - D_C from below}, as $(\lambda^+)^\theta=\lambda^+$ we have $ [\lambda]^{<\theta}\times [\lambda^+]^{\leq \theta} \leq _T [\lambda]^{<\theta} \times D_{\mathcal C} \leq _T [\lambda^+]^{< \theta}.$ Hence, $C\times [\lambda]^{<\theta}\times [\lambda^+]^{\leq \theta}\times E\leq _T C\times [\lambda]^{<\theta} \times D_{\mathcal C}\times E \leq _T C\times [\lambda^+]^{< \theta} \times E.$ \begin{claim} Suppose $\mathcal C$ is a $ \clubsuit^\lambda_{J}(S,1) $-sequence and: \begin{enumerate}[label=(\roman*)] \item $C$ is a directed set such that $|C|<\lambda^+$; \item $E$ is a directed set such that $\non(\mathcal I_{\bd}(E)) >\theta$ and $\cf(E)\geq \lambda^+$. \end{enumerate} Then $ C\times D_{\mathcal C} \not\leq_T C\times E$. \end{claim} \begin{subproof} Suppose that $f: C\times D_{\mathcal C} \rightarrow C\times E$ is a Tukey function. Fix some $o\in C$ and for each $\xi<\lambda^+$, denote $(c_\xi, x_\xi) := f(o,\{\xi\})$. Consider the set $\{(c_\xi,x_\xi) \mid \xi<\lambda^+\}$. For every $c\in C$, we define $A_c:=\{\xi <\lambda^+ \mid c_\xi =c\}$, clearly $\langle A_c \mid c\in C \rangle$ is a partition of $\lambda^+$ to less than $\lambda^+$ many sets. As $\mathcal C$ is a $ \clubsuit^\lambda_{J}(S,1) $-sequence, there exists some $c\in C$ and $\alpha\in S$ such that $|C_\alpha\cap A_c|=\theta$. Let us fix some $B\in [C_\alpha \cap A_c]^{\theta}$. Notice that the set $G:=\{(o,\{\xi\})\mid \xi\in B\}$ is unbounded in $ C\times D_{\mathcal C}$, hence as $g$ is Tukey, $g"G$ is unbounded in $C\times E$. The subset $\{x_\xi \mid \xi\in B\}$ of $E$ is of size $\theta$, hence bounded by some $e$. Note that $f"G=\{( c,x_\xi)\mid \xi\in B\}$ is bounded by $(c,e)$ in $ C\times E$ which is absurd. \end{subproof} By the previous Claim, as $\lambda^\theta<\lambda^+$, we get that $ C\times D_{\mathcal C}\times [\lambda]^{<\theta} \times E \not \leq_T C\times [\lambda]^{<\theta}\times [\lambda^+]^{\leq \theta}\times E $. The following Claim gives a negative answer to the question of whether there is a $C$-sequence $\mathcal C$ such that $D_{\mathcal C} \equiv_T [\lambda^+]^{<\theta}$. 
In the following claim we use the fact that the sets in the sequence $\mathcal C$ are of a bounded cofinality. \begin{claim}\label{gch imply no D_C equivalent to omega_2 finite} Assume $\lambda^\theta<\lambda^+$. Then for every $C$-sequence $\mathcal C$, we have $D_{\mathcal C}\not \geq_T [\lambda^+]^{<\theta}$. \end{claim} \begin{subproof} Let $S\subseteq E^{\lambda^+}_\theta$ and $\mathcal C:=\langle C_\alpha \mid \alpha\in S \rangle$ be a $C$-sequence. Suppose we have $ [\lambda^+]^{< \theta} \leq_T D_{\mathcal C} $; let $f:[\lambda^+]^{< \theta} \rightarrow D_{\mathcal C}$ be a Tukey function and $Y:=f"[\lambda^+]^{< \theta}$. Let us split into two cases: $\blacktriangleright$ Suppose $|Y|<\lambda^+$. By the pigeonhole principle, we can find a subset $Q\subseteq [\lambda^+]^{1}$ of size $\theta$ such that $f"Q=\{x\}$ for some $x\in D_{\mathcal C}$. As $f$ is Tukey and $Q$ is unbounded in $[\lambda^+]^{< \theta}$, the set $f"Q$ is unbounded, which is absurd. $\blacktriangleright$ Suppose $|Y|=\lambda^+$. As $f$ is Tukey, every subset of $Y$ of size $\theta$ is unbounded, contradicting the following claim. \begin{subclaim}\label{good claim} There is no subset $Y\subseteq D_{\mathcal C}$ of size $\lambda^+$ such that every subset of $Y$ of size $\theta$ is unbounded. \end{subclaim} \begin{subproof} As $\lambda^{\theta}<\lambda^+$, we may refine $Y$ and assume that $Y=\{y_\alpha \mid \alpha<\lambda^+\}$ is a $\Delta$-system with a root $R$ separated by a club $C\subseteq \lambda^+$, i.e. such that for every $\alpha<\beta<\lambda^+$, $y_\alpha\setminus R <\eta < y_\beta \setminus R$ for some $\eta\in C$. We define an increasing sequence of ordinals $\langle \beta_\nu\mid \nu\leq\theta^2\rangle$ by letting, for each $\nu\leq\theta^2$, $\beta_\nu:=\sup\bigcup\{y_\xi \mid \xi<\nu\}$. As $C$ is a club, we get that $\beta_{\theta\cdot \nu}\in C$ for each $\nu <\theta$. We aim to construct a subset $X=\{x_\nu \mid \nu <\theta\}$ of $Y$; we split into two cases. Suppose $\beta_{\theta^2}\in S$. Recall that $\otp (C_{\beta_{\theta^2}})=\theta$ and $\sup(C_{\beta_{\theta^2}})=\beta_{\theta^2}$, so for every $\nu <\theta$ the interval $[\beta_{\theta\cdot \nu},\beta_{\theta\cdot (\nu+1)})$ contains fewer than $\theta$ many elements of the ladder $C_{\beta_{\theta^2}}$; let us fix some $x_\nu\in Y$ such that $x_\nu\setminus R\subset [\beta_{\theta\cdot \nu},\beta_{\theta\cdot (\nu+1)})$ and $x_\nu\setminus R$ is disjoint from $C_{\beta_{\theta^2}}$. If $\beta_{\theta^2}\notin S$, define $X:=\{x_\nu \mid \nu<\theta\}$ where $x_\nu:= y_{\theta\cdot \nu}$. Let us show that $X=\{x_\nu \mid \nu <\theta\}$ is a bounded subset of $Y$, contradicting the assumption. It is enough to show that for every $\alpha\in S$, we have $|(\bigcup X)\cap C_\alpha|<\theta$. Let $\alpha\in S$. $\blacktriangleright$ Suppose $\alpha>\beta_{\theta^2}$. As $C_\alpha$ is a cofinal subset of $\alpha$ of order-type $\theta$ and $\bigcup X$ is bounded by $\beta_{\theta^2}$, it is clear that $|(\bigcup X)\cap C_\alpha|<\theta$. $\blacktriangleright$ Suppose $\alpha<\beta_{\theta^2}$. As $C_\alpha$ is cofinal in $\alpha$ and of order-type $\theta$, there exists some $\nu <\theta$ such that for all $\nu <\rho<\theta$, we have $(x_\rho\setminus R)\cap C_\alpha =\emptyset$. As $x_\rho\in D_{\mathcal C}$ for every $\rho<\theta$, we get that $|(\bigcup X)\cap C_\alpha |<\theta$ as sought. $\blacktriangleright$ Suppose $\alpha = \beta_{\theta^2}$.
Notice this implies that we are in the first case construction of the set $X$. Recall that the $\Delta$-system $\{x_\nu \mid \nu <\theta\}$ is such that $(x_\nu\subseteqetminus R )\cap C_\alpha =\emptyset$, hence $(\bigcup X)\cap C_{\alpha} = R \cap C_\alpha $. Recall that as $x_0\in D_{\mathcal C}$, we get that $R \cap C_\alpha$ is of size $<\theta$, hence also $(\bigcup X)\cap C_{\alpha}$ as sought. \end{subproof}\qedhere \end{subproof} \begin{claim}Assume $\lambda^\theta<\lambda^+$. Suppose $C$ and $E$ are two directed sets such that $|C|<\lambda^+$ and either $\non(\mathcal I_{\bd} (E))>\theta$ or $E\equiv_T 1$. Then for every $C$-sequence $\mathcal C$ of minimal order-type on a stationary $S\subsetequbseteq E^{\lambda^+}_\theta$, $C\times [\lambda^+]^{<\theta} \times E\not\leq_T C\times D_{\mathcal C} \times E$. \end{claim} \begin{subproof} Suppose $\mathcal C:=\langle C_\alpha \mid \alpha\in S \rangle $ is a $C$-sequence where $S\subsetequbseteq E^{\lambda^+}_\theta$. Suppose on the contrary that there exists a Tukey function $f:[\lambda^+]^{<\theta}\rightarrow C\times D_{\mathcal C}\times E$. Consider $X=[\lambda^+]^1$. By pigeonhole principle, there exists some $c\in C$ and some set $Z\subsetequbseteq X$ of size $\lambda^+$ such that $f"Z \subsetequbseteq \{ c\}\times D_{\mathcal C}\times E$. Let $Y:=\pi_{D_{\mathcal C}} (f"Z)$. Let us split to two cases: $\blacktriangleright$ Suppose $|Y|< \lambda^+$. By the pigeonhole principle, we can find a subset $Q\subsetequbseteq Z$ of size $\theta$ such that $f"Q=\{c\}\times \{x\}\times E$ for some $x\in D_{\mathcal C}$. As $f$ is Tukey and $Q$ is unbounded, we must have that $f"Q$ is unbounded, but this is absurd as $\non(\mathcal I_{\bd} (E))>\theta$. $\blacktriangleright$ Suppose $|Y|=\lambda^+$. As $f$ is Tukey and either $\non(\mathcal I_{\bd} (E))>\theta$ or $E=1$, every subset of $Y$ of size $\theta$ is unbounded which is absurd to Claim~\ref{good claim}. \end{subproof} \subsetequbsection{Structure of $D_{\mathcal C}$} In \cite[Lemmas~1,2]{MR792822}, Todor\v{c}evi\'{c} defined for every $\kappa$ regular and $S\subsetequbseteq \kappa$ the direced set $D(S):=\{C\subsetequbseteq [S]^{\leq\omega} \mid \forall \alpha<\omega_1[\subsetequp(C\cap \alpha)\in C]\}$ ordered by inclusion and studied the structure of such directed sets. In this section we follow this line of study but for directed sets of the form $D_{\mathcal C}$, constructing a large $<_T$-antichain and chain of directed sets using $\theta$-support product. \subsetequbsubsection{Antichain} \begin{theorem}\label{antichain of D_C} Suppose $2^\lambda=\lambda^+$, $\lambda^\theta<\lambda^+$, then there exists a family $\mathcal F$ of size $2^{\lambda^+}$ of directed sets of the form $D_{\mathcal C}$ such that every two of them are Tukey incomparable. \end{theorem} \begin{proof} As $2^\lambda=\lambda^+$ holds, by Shelah Theorem we get that $\diamondsuit(S)$ holds for every $S\subsetequbseteq E^{\lambda^+}_\theta$ stationary subset. Let us fix some stationary subset $S\subsetequbseteq E^{\lambda^+}_\theta$ and a partition of $S$ into $\lambda^+$-many stationary subsets $\langle S_\alpha \mid \alpha<\lambda^+ \rangle$. For each $S_\alpha$ we fix a $\clubsuit(S_\alpha)$ sequence $\langle C_\beta \mid \beta\in S_\alpha \rangle$. Let us fix a family $\mathcal F$ of size $2^{\lambda^+}$ of subsets of $S $ such that for every two $R,T\in \mathcal F$ there exists some $S_\alpha$ such that $R\subseteqetminus T \subsetequpseteq S_\alpha$. 
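Such a family can be obtained, for instance, as follows: fix an independent family $\{X_i \mid i<2^{\lambda^+}\}$ of subsets of $\lambda^+$ (such a family exists in $\zfc$ by a classical theorem of Hausdorff) and let $$\mathcal F:=\Big\{ \textstyle\bigcup_{\alpha\in X_i} S_\alpha \ \Big|\ i<2^{\lambda^+} \Big\}.$$ For distinct $i,j<2^{\lambda^+}$, independence gives some $\alpha\in X_i\setminus X_j$, and since the sets $S_\alpha$ are pairwise disjoint, this $\alpha$ witnesses $\bigcup_{\alpha'\in X_i}S_{\alpha'}\setminus \bigcup_{\alpha'\in X_j}S_{\alpha'}\supseteq S_\alpha$; in particular distinct indices yield distinct members of $\mathcal F$, so $|\mathcal F|=2^{\lambda^+}$.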
For each $T\in \mathcal F$ let us define a $C$-sequence $\mathcal C_T:= \langle C_\alpha \mid \alpha\in T \rangle$. Clearly the following Lemma shows the family $\{ D_{\mathcal C_T} \mid T\in \mathcal F \}$ is as sought. \begin{claim}\label{Usefull lemma} Suppose $\mathcal C_T:=\langle C_\beta \mid \beta \in T \rangle$ and $\mathcal C_R:=\langle C_\beta \mid \beta \in R \rangle$ are two $C$-sequences such that $T,R\subsetequbseteq E^{\lambda^+}_\theta$ are stationary subsets. Then if $\langle C_\beta \mid \beta \in T\subseteqetminus R \rangle$ is a $\clubsuit$-sequence, then $D_{\mathcal C_T}\not \leq_T D_{\mathcal C_R}$. \end{claim} \begin{subproof} Suppose $f:D_{\mathcal C_T} \rightarrow D_{\mathcal C_R}$ is a Tukey function. Fix a subset $W\subsetequbseteq [\lambda^+]^1\subsetequbseteq D_{\mathcal C_T}$ of size $\lambda^+$, we split to two cases. $\blacktriangleright$ Suppose $f"W \subsetequbseteq [\alpha]^{\theta}$ for some $\alpha<\lambda^+$. As $\lambda^\theta<\lambda^+$, by the pigeonhole principle we can find a subset $X\subsetequbseteq W $ of size $\lambda^+$ such that $f"X = \{z\}$ for some $z\in D_{\mathcal C_R}$. As $\langle C_\beta \mid \beta \in T\subseteqetminus R \rangle$ is a $\clubsuit$-sequence and $\bigcup X \in [\lambda^+]^{\lambda^+}$, there exists some $\beta \in T\subseteqetminus R$ such that $C_\beta \subsetequbseteq \bigcup X$. So $X$ is an unbounded subset of $\mathcal C_T$ such that $f"X$ is bounded in $\mathcal C_R$ which is absurd. $\blacktriangleright$ As $|f"W|=\lambda^+$, using $\lambda^\theta<\lambda^+$ we may fix a subset $Y=\{y_\beta \mid \beta <\lambda^+\}\subsetequbseteq f"W$ which forms a $\Delta$-system with a root $R_1$. In other words, for $\alpha<\beta<\lambda^+$ we have $y_\alpha \subseteqetminus R_1 < y_\beta\subseteqetminus R_1 $ and $y_\alpha\cap y_\beta =R_1$. For each $\alpha<\lambda^+$, we fix $x_\alpha \in W$ such that $f(x_\alpha)=y_\alpha$. Finally, we may use the $\Delta$-system Lemma again and refine our set $Y$ to get that there exists a club $E\subsetequbseteq \lambda^+$ such that, for all $\alpha<\beta<\lambda^+$ we have: \begin{itemize} \item $x_\alpha \cap x_\beta = R_0 $; \item $y_\alpha\cap y_\beta = R_1 $; \item there exists some $\gamma \in E$ such that $x_\alpha\subseteqetminus R_0 < \gamma < x_\beta \subseteqetminus R_0 $ and $y_\alpha\subseteqetminus R_1 < \gamma < y_\beta \subseteqetminus R_1 $; \item $f(x_\alpha) = y_\alpha$. \end{itemize} Furthermore we may assume that between any two elements of $\xi<\eta$ in $E$ there exists some $\alpha<\lambda^+$ such that $\xi< (x_\alpha\subseteqetminus R_0)\cup (y_\alpha\subseteqetminus R_1)<\eta$. As $\langle C_\beta \mid \beta \in T\subseteqetminus R \rangle$ is a $\clubsuit$-sequence, there exists some $\beta \in (T\subseteqetminus R)\cap \acc(E)$ such that $C_\beta \subsetequbseteq \bigcup \{x_\alpha \mid \alpha<\lambda^+\}$. Construct by recursion an increasing sequence $\langle \beta_\nu \mid \nu<\theta \rangle\subsetequbseteq C_\beta$ and a sequence $\langle z_\nu \mid \nu<\theta \rangle \subsetequbseteq \{x_\alpha \mid \alpha<\lambda^+\}$ such that $\beta_\nu\in z_\nu<\beta$. Clearly, $\{z_\nu\mid \nu<\theta\}$ is unbounded in $ D_{\mathcal C_T}$, so the following Claim proves $f$ is not a Tukey function. \begin{subclaim} The subset $\{ f(z_\nu)\mid \nu<\theta \}$ is bounded in $D_{\mathcal C_R}$. 
\end{subclaim} \begin{subproof} Let $Y:=\bigcup_{\nu<\theta} f(z_\nu)$ and write $\mathcal C_R:=\langle C'_\beta \mid \beta \in R \rangle$; we will show that for every $\alpha\in R$, we have $|Y\cap C'_\alpha |<\theta$. By the refinement carried out previously, it is clear that $\{f(z_\nu)\setminus R_1\mid \nu<\theta\}$ is a pairwise disjoint sequence, where for each $\nu<\theta$ there is some element $\gamma_\nu \in E$ such that $f(z_\nu)\setminus R_1 < \gamma_\nu < f(z_{\nu+1}) \setminus R_1<\beta $. Let $\alpha\in R$. $\blacktriangleright$ Suppose $\alpha>\beta$. As $C'_\alpha$ is cofinal in $\alpha$ and of order-type $\theta$, we get $|Y\cap C'_\alpha |<\theta$. $\blacktriangleright$ Suppose $\alpha<\beta$. As $C'_\alpha$ is cofinal in $\alpha$ and of order-type $\theta$, there exists some $\nu<\theta$ such that for all $\nu<\rho<\theta$, we have $(f(z_\rho)\setminus R_1)\cap C'_\alpha =\emptyset$. As $f(z_\rho)\in D_{\mathcal C_R}$ for every $\rho<\theta$, we get that $|Y\cap C'_\alpha |<\theta$ as sought. As $\beta \notin R$, there are no more cases to consider.\qedhere \end{subproof}\qedhere \end{subproof}\qedhere \end{proof} \begin{cor} Suppose $2^\lambda=\lambda^+$, $\lambda^\theta<\lambda^+$ and $S\subseteq E^{\lambda^+}_\theta$ is a stationary subset. Then there exists a family $\mathcal F$ of directed sets of the form $D_{\mathcal C}\times[\lambda]^{<\theta}$ of size $2^{\lambda^+}$ such that every two of them are Tukey incomparable. \end{cor} \begin{proof} By the same arguments as in the proof of Theorem~\ref{antichain of D_C}, the following claim suffices to yield the desired result. \begin{claim} Suppose $\mathcal C_T:=\langle C_\beta \mid \beta \in T \rangle$ and $\mathcal C_R:=\langle C_\beta \mid \beta \in R \rangle$ are two $C$-sequences such that $T,R\subseteq E^{\lambda^+}_\theta$ are stationary subsets. If $\langle C_\beta \mid \beta \in T\setminus R \rangle$ is a $\clubsuit$-sequence, then $ D_{\mathcal C_T}\times [\lambda]^{<\theta }\not \leq_T D_{\mathcal C_R}\times [\lambda]^{<\theta }$. \end{claim} \begin{subproof} Suppose $f: D_{\mathcal C_T} \times [\lambda]^{<\theta }\rightarrow D_{\mathcal C_R}\times [\lambda]^{<\theta }$ is a Tukey function. Consider $Q:=f"([\lambda^+]^1\times \{\emptyset\} )$; let us split into two cases. $\blacktriangleright$ If $|Q|<\lambda^+$, then by the pigeonhole principle, there exist $x\in D_{\mathcal C_R}$, $F\in [\lambda]^{<\theta }$ and a set $W\subseteq [\lambda^+]^1$ of size $\lambda^+$ such that $f"( W\times \{\emptyset\}) =\{(x,F)\}$. As $\langle C_\beta \mid \beta \in T\setminus R \rangle$ is a $\clubsuit$-sequence and $\bigcup W \in [\lambda^+]^{\lambda^+}$, we may fix some $\beta\in T\setminus R$ such that $C_\beta \subseteq \bigcup W$. Hence $W\times \{\emptyset\}$ is unbounded in $D_{\mathcal C_T}\times [\lambda]^{<\theta }$ but $f"(W\times \{\emptyset\})$ is bounded in $D_{\mathcal C_R}\times [\lambda]^{<\theta }$, which is absurd as $f$ is Tukey. $\blacktriangleright$ If $|Q|=\lambda^+$, then by the pigeonhole principle there exist some $F\in [\lambda]^{<\theta }$ and a set $W\subseteq D_{\mathcal C_T}$ of size $\lambda^+$ such that $f"(W\times \{\emptyset\}) \subseteq D_{\mathcal C_R}\times\{F\} $. Let $Y:=\pi_0(f"(W\times \{\emptyset\}))$. From here we may continue exactly as in the proof of Claim~\ref{Usefull lemma}.
\end{subproof} \end{proof} \subsetequbsubsection{Chain} \begin{theorem}\label{Theorem - D_C chain} Suppose $2^\lambda=\lambda^+$, $\lambda^\theta<\lambda^+$. Then there exists a family $\mathcal F=\{D_{C_\xi} \mid \xi<\lambda^+\}$ of Tukey incomparable directed sets of the form $D_{\mathcal C}$ such that $\langle \prod_{\zeta <\xi} D_\zeta \mid \xi<\lambda^+\rangle$ is a $<_T$-increasing chain. \end{theorem} \begin{proof} As in Theorem~\ref{antichain of D_C}, we fix a partition $\langle S_\zeta \mid \zeta<\lambda^+\rangle$ of $E^{\lambda^+}_\theta$ to stationary subsets such that there exists a $\clubsuit(S_\zeta) $-sequence $\mathcal C_\zeta$ for $\zeta<\lambda^+$. Note that for every $A\in [\lambda^+]^{<\lambda^+}$, we have $|\prod^{\leq \theta}_{\zeta\in A} D_{\mathcal C_\zeta}|=\lambda^+$. Note that for every $A,B \in [\lambda^+]^{<\lambda^+}$ such that $A\subsetequbset B$, we have $\prod^{\leq \theta}_{\zeta\in A} D_{\mathcal C_\zeta} \leq_T \prod^{\leq \theta}_{\zeta\in B} D_{\mathcal C_\zeta}$. The following Claim gives us the wanted result. \begin{claim} Suppose $A\in [\lambda^+]^{<\lambda^+}$ and $\xi\in \lambda^+\subseteqetminus A$, then $ D_{\mathcal C_\xi} \not \leq_T \prod^{\leq \theta}_{\zeta\in A} D_{\mathcal C_\zeta} $. In particular, $\prod^{\leq \theta}_{\zeta\in A} D_{\mathcal C_\zeta} <_T \prod^{\leq \theta}_{\zeta\in A} D_{\mathcal C_\zeta}\times D_{\mathcal C_\xi}$. \end{claim} \begin{subproof} Let $D:=D_{\mathcal C_\xi}$ and $E:=\prod^{\leq \theta}_{\zeta\in A} D_{\mathcal C_\zeta}$. Note that as $2^\lambda=\lambda^+$, then $(\lambda^+)^\lambda= \lambda^+$, so $|E|=\lambda^+$. Suppose $f:D \rightarrow E$ is a Tukey function. Consider $Q=f"[\lambda^+]^1$, let us split to cases: $\blacktriangleright$ Suppose $|Q|< \lambda^+$, then by pigeonhole principle, there exists $e\in E$ and a subset $X\subsetequbseteq D$ of size $\lambda^+$ such that $f"X =\{e\}$. As $\langle C_\beta \mid \beta \in S_\xi \rangle$ is a $\clubsuit$-sequence, there exists some $\beta \in S_\xi$ such that $C_\beta \subsetequbseteq \bigcup X$. So $X$ is an unbounded subset of $D$ such that $f"X$ is bounded in $E$ which is absurd. $\blacktriangleright$ Suppose $|Q|=\lambda^+$. Let us enumerate $Q:=\{q_\alpha \mid \alpha<\lambda^+\}$. Recall that for every $\zeta\in A$, $D_{\mathcal C_{\zeta}}\subsetequbseteq [\lambda^+]^{\leq \theta}$. Let $z_\alpha:= \bigcup \{ q_\alpha(\zeta) \times \{\zeta\} \mid \zeta\in A, ~q_\alpha(\zeta)\neq 0_{D_{\mathcal C_\zeta}}\}$, notice that $z_\alpha \in [\lambda^+ \times A]^{\leq\theta}$. We fix a bijection $\phi:\lambda^+\times A \rightarrow \lambda^+$. As $\{\phi"z_\alpha\mid \alpha<\lambda^+\}$ is a subset of $[\lambda^+]^{\leq \theta}$ of size $\lambda^+$ and $\lambda^\theta<\lambda^+$, by the $\Delta$-system Lemma, we may refine our sequence $Q$ and re-index such that $\{\phi "z_\alpha \mid \alpha<\lambda^+\}$ will be a $\Delta$-system with root $R'$. For each $\alpha<\lambda^+$ and $\zeta\in A$, let $y_{\alpha,\zeta}:=\{ \beta<\lambda^+ \mid \beta\in q_\alpha(\zeta)\}$. We claim that for each $\zeta\in A$, the set sequence $\{y_{\alpha,\zeta}\mid \zeta \in A\}$ is a $\Delta$-system with root $R_{\zeta}:=\{\beta<\lambda^+\mid (\beta,\zeta)\in \phi^{-1}[R'] \}$. Let us show that whenever $\alpha<\beta<\lambda^+$, we have $y_{\alpha,\zeta}\cap y_{\beta,\zeta} = R_{\zeta} $. 
Notice that $\delta\in y_{\alpha,\zeta}\cap y_{\beta,\zeta} \iff \delta\in q_\alpha(\zeta)\cap q_\beta(\zeta) \iff (\delta,\zeta)\in z_\alpha\cap z_\beta \iff \phi(\delta,\zeta) \in \phi"(z_\alpha\cap z_\beta)=\phi"z_\alpha\cap \phi"z_\beta = R' \iff (\delta,\zeta)\in \phi^{-1}R' \iff \delta\in R_{\zeta}$. For each $\alpha<\lambda^+$, we fix $x_\alpha \in W$ such that $f(x_\alpha)=q_\alpha$. We use the $\Delta$-system Lemma again and refine our sequence such that there exist a club $C\subsetequbseteq \lambda^+$ and for all $\alpha<\beta<\lambda^+$ we have: \begin{enumerate} \item for every $\zeta\in A$, we have $y_{\alpha,\zeta}\cap y_{\beta,\zeta} = R_{\zeta} $; \item $x_\alpha \cap x_\beta= R$; \item there exists some $\gamma \in C$ such that $(x_\alpha \subseteqetminus R)\cup(\bigcup_{\zeta\in A} (y_{\alpha,\zeta}\subseteqetminus R_\zeta ))< \gamma < (x_\beta \subseteqetminus R)\cup(\bigcup_{\zeta\in A } (y_{\beta,\zeta}\subseteqetminus R_\zeta) )$. \end{enumerate} Furthermore, we may assume that between any two elements of $\gamma<\delta$ in $C$ there exists some $\alpha<\lambda^+$ such that $\gamma< (x_\alpha \subseteqetminus R)\cup(\bigcup_{\zeta\in A } (y_{\alpha,\zeta}\subseteqetminus R_\zeta ))<\delta$. We continue in the spirit of Claim~\ref{good claim}. As $\langle C_\beta \mid \beta \in S_\xi \rangle$ is a $\clubsuit$-sequence, there exists some $\beta \in S_\xi\cap \acc(C)$ such that $C_\beta \subsetequbseteq \bigcup \{x_\alpha \mid \alpha<\lambda^+\}$. Construct by recursion an increasing sequence $\langle \beta_\nu\mid \nu<\omega \rangle\subsetequbseteq C_\beta$ and a sequence $\langle w_\nu \mid \nu<\theta \rangle \subsetequbseteq \{x_\alpha \mid \alpha<\lambda^+\}$ such that $\beta_\nu\in w_\nu<\beta$. Clearly, $\{w_\nu\mid \nu<\theta\}$ is unbounded in $ D_{\mathcal C_\xi}$, so the following Claim proves $f$ is not a Tukey function. \begin{subclaim} The subset $\{ f(w_\nu)\mid \nu<\theta \}$ is bounded in $E$. \end{subclaim} \begin{subproof} For each $\zeta\in A$, let $W_\zeta:= \bigcup_{\nu<\theta} f(w_\nu)(\zeta)$. We will show that $W_\zeta \in D_{\mathcal C_\zeta}$, notice this will imply that $f(w_\nu)\leq_E \prod^{\leq \theta}_{\zeta \in A} W_\zeta$ for every $\nu<\theta$, so the set $\{ f(w_\nu)\mid \nu<\theta \}$ is bounded in $E$ as sought. Let $\mathcal C_{\zeta}:=\langle C_\alpha \mid \alpha \in S_\zeta \rangle$, we will show that for every $\alpha\in S_\zeta$, we have $|W_\zeta \cap C_\alpha |<\theta$. By the refinement we did previously it is clear that $\{f(w_\nu)(\zeta)\subseteqetminus R_\zeta\mid \nu<\theta\}$ is a pairwise disjoint sequence, where for each $\nu<\theta$ we have some element $\gamma_\nu \in C$ such that $f(w_\nu)(\zeta)\subseteqetminus R_\zeta < \gamma_\nu < f(w_{\nu+1})(\zeta)\subseteqetminus R_\zeta $. Furthermore, $f(w_\nu)(\zeta)\subsetequbseteq \beta$ for every $\nu<\theta$. Let $\alpha\in S_\zeta$. $\blacktriangleright$ Suppose $\alpha>\beta$. As $C_\alpha$ if cofinal in $\alpha$ and of order-type $\theta$, then $|W_\zeta\cap C_\alpha |<\theta$. $\blacktriangleright$ Suppose $\alpha<\beta$. As $C_\alpha$ if cofinal in $\alpha$ and of order-type $\theta$, there exists some $\nu<\theta$ such that for all $\nu<\rho<\theta$, we have $(f(w_\rho)(\zeta)\subseteqetminus R_\zeta )\cap C_\alpha =\emptyset$. As $f(w_\rho)(\zeta)\in D_{\mathcal C_\zeta}$ for every $\rho<\theta$, we get that $|W_\zeta\cap C_\alpha |<\theta$ as sought. 
As $\beta \notin S_\zeta$ there are no more cases to consider.\qedhere \end{subproof}\qedhere \end{subproof} \end{proof} \subseteqection{Concluding remarks}\label{Section - 6} A natural continuation of this line of research is analysing the class $\mathcal D_{\kappa}$ for cardinals $\kappa\geq \aleph_\omega$. As a preliminary finding we notices that the poset $(\mathcal P(\omega),\subsetequbset)$ can be embdedded by a function $\mathfrak F$ into the class $\mathcal D_{\aleph_\omega}$ under the Tukey order. Furthermore, for every two successive elements $A,B$ in the poset $(\mathcal P(\omega),\subsetequbset)$, i.e. $A\subsetequbset B $ and $|B\subseteqetminus A|=1$, there is no directed set $D$ such that $\mathfrak F(A)<_T D <_T \mathfrak F(B)$. The embedding is defined via $\mathfrak F(A):= \prod^{<\omega}_{n\in A} \omega_{n+1}$, and the furthermore part can be proved by Lemma~\ref{Lemma - 2nd gap}. As a corollary, we get that in $\zfc$ the cardinality of $\mathcal D_{\aleph_\omega}$ is at least $2^{\aleph_0}$. \begin{comment} In this section we study $\mathcal D_{\aleph_\omega}$. We define an embedding $\mathfrak F$ from the poset $(\mathcal P(\omega),\subsetequbset)$ to $(\mathcal D_{\aleph_\omega}, <_T)$. For each $A\in \mathcal P(\omega)$, set $\mathfrak F(A):= \prod^{<\omega}_{n\in A} \omega_{n+1}$. \begin{theorem} $\mathfrak F$ is an isomorphism between $(\mathcal P(\omega),\subsetequbset)$ and $(\im(\mathfrak F(A)), <_T)$. Moreover, for every $A,B\in \mathcal P(\omega)$ such that $A\subsetequbset B$ and $|B\subseteqetminus A|=1$ there is no directed set $D$ such that $\mathfrak F (A)<_T D<_T \mathfrak F(B)$. \end{theorem} \begin{proof} Let $A,B\in P(\omega)$, we will show that $A\subsetequbset B$ if and only if $\mathfrak F (A) <_T \mathfrak F(B)$. Suppose $A\subsetequbset B$, we define a Tukey function $f:\prod^{<\omega}_{n\in A} \omega_{n+1}\rightarrow \prod^{<\omega}_{n\in B} \omega_{n+1}$. For every $x\in \prod^{<\omega}_{n\in A}$, recall $x:A\rightarrow \bigcup_{n\in A} \omega_{n+1}$ with finite support such that $x(n)\in \omega_{n+1}$. We define $f(x)$ to be a function from $B$ to $\bigcup_{n\in B} \omega_{n+1}$ such that $f(x)(n)=x(n)$ for each $n\in B$, and $f(x)(n)=0$ for every $n\in B\subseteqetminus A$. Suppose $X\subsetequbseteq \prod^{<\omega}_{n\in A} \omega_{n+1}$ is unbounded. If $\{ \subsetequpp(x)\mid x\in X\}$ is infinite, then clearly $\{\subsetequpp(f(x))\mid x\in X\}$ is also infinite, hence $f"X$ is unbounded in $\prod^{<\omega}_{n\in B} \omega_{n+1}$. If $\{ \subsetequpp(x)\mid x\in X\}$ is finite, then for some $n\in A$ the set $\{x(n)\mid x\in X\}=\{f(x)(n)\mid x\in X\}$ is unbounded in $\omega_{n+1}$, so $f"X$ is unbounded in $\prod^{<\omega}_{n\in B} \omega_{n+1}$ as sought. Suppose $A\not \subsetequbseteq B$, and fix some $n\in A\subseteqetminus B$. The following Lemma gives us the wanted result. \begin{lemma}\label{Lemma - product of cardinals} Suppose $C,D$ are two sets of regular cardinals, $\kappa\in C\subseteqetminus D$ is an uncountable successor cardinal and $\max\{|C|,|D|\}<\kappa$, then $\prod^{<\omega}C \not\leq_T \prod^{<\omega} D$. \end{lemma} \begin{subproof} Suppose on the contrary that $\prod^{<\omega}C \leq_T \prod^{<\omega} D$. As $\kappa\leq_T \prod^{<\omega}C$, this implies that $\kappa \leq_T \prod^{<\omega}D$. Let $f:\kappa \rightarrow\prod^{<\omega} D$ be a Tukey function witnessing that. Let $Y:=f"\kappa$. 
We split to two cases: $\blacktriangleright$ Suppose $|Y|<\kappa$, then by the pigeonhole principle there exists some $X\in [\kappa]^{\kappa}$ and $y\in D_D$ such that $f"X=\{y\}$. As $X$ is unbounded and $f$ is Tukey, we get that $f"X$ is also unbounded which is absurd. $\blacktriangleright$ Suppose $|Y|=\kappa$. Let $E:=\prod^{<\omega}(D\cap \kappa)$, notice $|E|<\kappa$. By the pigeonhole principle there exists some $X\in [\kappa]^{\kappa}$ and $y\in E$ such that $\pi_E \circ f"X$ is constant. As $\kappa$ is uncountable, we may further refine $X$ and assume that the support of each element in $f"X$ is the same. Then, as $D$ is a set of regular cardinals, $f"X$ is bounded in $\prod^{<\omega} D$. But this is absurd as $X$ is unbounded in $\kappa$ and $f$ is Tukey function. \end{subproof} We prove the moreover part, suppose $A,B\in \mathcal P(\omega)$, $n\in B\setminus A$ and $|B\setminus A|=1$. Define $C:=\prod_{m\in A\cap n}\omega_{m+1}$ and $E:=\prod^{<\omega}_{m\in A\setminus(n+1) }\omega_{m+1}$. Note that $\mathfrak F(A)=C\times \omega_{n+1} \times E$, $\mathfrak F(B)=C\times E$, $|C|<\omega_{n+1}$. By Lemma~\ref{Lemma - 2nd gap} we are left to verify that $\omega_{n+1}\in \Inner(\mathcal I_{\bd}(E),\omega_{n+1})$. Let us fix some $X\in [E]^{\omega_{n+1}}$, as $E$ is a finite support product, by the pigeonhole principle, for some $Y\in [X]^{\omega_{n+1}}$ we have that $|\{\supp(y)\mid y\in Y\}|=1$. Let us define $e:A\setminus(n+1) \rightarrow \bigcup_{m\in A\setminus(n+1)}\omega_{m+1}$ by $e(m)=\sup\{y(m)\mid y\in Y\}$. Clearly, $e\in E$ and $Y$ is bounded by $e$. \end{proof} \begin{cor}\label{Cor - D_aleph_omega >= 2^aleph_0} $|\mathcal D_{\aleph_\omega}|\geq 2^{\aleph_0}$. \end{cor} \end{comment} \section*{Acknowledgments} This paper presents several results from the author’s PhD research at Bar-Ilan University under the supervision of Assaf Rinot, to whom he wishes to express his deep appreciation. The author is supported by the European Research Council (grant agreement ERC-2018-StG 802756). Our thanks go to Tanmay Inamdar for many illuminating discussions. \renewcommand{A}{A} \section{Appendix: Tukey ordering of simple elements of the class $\mathcal D_{\aleph_2}$ and $\mathcal D_{\aleph_3}$} We present each of the posets $(\mathcal T_{2},<_T)$ and $(\mathcal T_{3},<_T)$ in a diagram. In both diagrams below, for any two directed sets $A$ and $B$, an arrow $A\rightarrow B$ represents the fact that $A<_T B$. If the color of the arrow is red, then there is no directed set in between $A$ and $B$. If the color of the arrow is blue, then under $\gch$ there exists a directed set in between. Every two directed sets $A$ and $B$ such that there is no directed path (in the obvious sense) from $A$ to $B$ are such that $A\not\leq_T B$. Note that this implies that any two directed sets on the same horizontal level are incomparable in the Tukey order. \begin{figure} \caption{Tukey ordering of $(\mathcal T_{2},<_T)$} \label{Figure T_2} \end{figure} \begin{landscape} \begin{figure} \caption{Tukey ordering of $(\mathcal T_{3},<_T)$} \label{Figure T_3} \end{figure} \end{landscape} \end{document}
\betagin{document} \betagin{abstract} We prove that the geodesic equations of all Sobolev metrics of fractional order one and higher on spaces of diffeomorphisms and, more generally, immersions are locally well posed. This result builds on the recently established real analytic dependence of fractional Laplacians on the underlying Riemannian metric. It extends several previous results and applies to a wide range of variational partial differential equations, including the well-known Euler--Arnold equations on diffeomorphism groups as well as the geodesic equations on spaces of manifold-valued curves and surfaces. \end{abstract} \title{Fractional Sobolev metrics on spaces of immersions} \setcounter{tocdepth}{1} \taubleofcontents \section{Introduction} \lambdabel{sec:introduction} \subsection*{Background} Many prominent partial differential equations (PDEs) in hydrodynamics admit variational formulations as geodesic equations on an infinite-dimensional manifold of mappings. These include the incompressible Euler \cite{Ar1966}, Burgers \cite{khesin2008geometry}, modified Constantin--Lax--Majda \cite{constantin1985simple, wunsch2010geodesic, bauer2016geometric}, Camassa--Holm \cite{camassa1993integrable, misiolek1998shallow, kouranbaeva1999camassa}, Hunter--Saxton \cite{hunter1991dynamics, lenells2007hunter}, surface quasi-geostrophic \cite{constantin1994formation, Was2016} and Korteweg--de Vries \cite{ovsienko1987korteweg} equations of fluid dynamics as well as the governing equation of ideal magneto-hydrodynamics \cite{vishik1978analogs, marsden1984semidirect}. This serves as a strong motivation for the study of Riemannian geometry on mapping space. An additional motivation stems from the field of mathematical shape analysis, which is intimately connected to diffeomorphisms groups and other infinite-dimensional mapping spaces via Grenander's pattern theory \cite{grenander1998computational, younes2010shapes} and elasticity theory \cite{srivastava-klassen-book:2016, bauer2014overview}. The variational formulations allow one to study analytical properties of the PDEs in relation to geometric properties of the underlying infinite-dimensional Riemannian manifold \cite{shnirel1987geometry, misiolek2010fredholm, bauer2012vanishing, bauer2013geodesic, bauer2018vanishing, jerrard2019vanishing}. Most importantly, local well-posedness of the PDE, including smooth dependence on initial conditions, is closely related to smoothness of the geodesic spray on Sobolev completions of the configuration space~\cite{EM1970}. This has been used to show local well-posedness of PDEs in many specific examples, cf.\@ the recent overview article \cite{Kol2017}. An extension of this successful methodology to wider classes of PDEs requires an in-depth study of smoothness properties of partial and pseudo differential operators with non-smooth coefficients such as those appearing in the geodesic spray or, more generally, in the Euler--Lagrange equations. This is the topic of the present paper. \subsection*{Contribution} This article establishes local well-posedness of the geodesic equation for fractional order Sobolev metrics on spaces of diffeomorphisms and, more generally, immersions. 
A simplified version of our main result reads as follows: \betagin{theorem*} On the space of immersions of a closed manifold $M$ into a Riemannian manifold $(N,\bar{g})$, the geodesic equation of the fractional-order Sobolev metric \betagin{align*} G_f(h,k) = ^{-1}nt_M \bar{g}\big((1+\Delta^{f^*\bar{g}})^ph,k\big)\on{vol}^{f^*\bar{g}}, \qquad h,k ^{-1}n T_f\on{Imm}(M,N), \end{align*} is locally well-posed in the sense of Hadamard for any $p^{-1}n[1,^{-1}nfty)$. \end{theorem*} This follows from Theorems~\ref{thm:wellposed} and~\ref{thm:satisfies}. The result unifies and extends several previously known results: \betagin{itemize} ^{-1}tem For integer-order metrics, local well-posedness on the space of immersions from $M$ to $N$ has been shown in \cite{Bauer2011b}. However, the proof contained a gap, which was closed in \cite{Mueller2017} for $N=\mathbb R^n$, and which is closed in the present article for general $N$. The strategy of proof, which goes back to Ebin and Marsden \cite{EM1970}, is to show that the geodesic spray extends smoothly to certain Sobolev completions of the space. Our generalization to fractional-order metrics builds on recent results about the smoothness of the functional calculus of sectorial operators \cite{bauer2018smooth}. ^{-1}tem For $N=\mathbb R^n$, the set of $N$-valued immersions becomes a vector space, which simplifies the formulation of the geodesic equation; see \autoref{cor:flat}. The treatment of general manifolds $N$ requires a theory of Sobolev mappings between manifolds, which is developed in \autoref{sec:sobolev}. Moreover, in the absence of global coordinate systems for these mapping spaces, we recast the geodesic equation using an auxiliary covariant derivative following \cite{Bauer2011b}; see \autoref{lem:connection} and \autoref{thm:geodesic}. ^{-1}tem For $M=N$ our result specializes to the diffeomorphism group $\on{Diff}(M)$, which is an open subset of $\on{Imm}(M,M)$. On $\on{Diff}(M)$ we obtain local well-posedness of the geodesic equation for Sobolev metrics of order $p^{-1}n [1/2,^{-1}nfty)$; see \autoref{cor:diffeos}. Analogous results have been obtained by different methods (smoothness of right-trivializations) for inertia operators that are defined as abstract pseudo-differential operators \cite{escher2014right, bauer2015local, BBCEK2019}. ^{-1}tem For $M=S^1$, our result specializes to the space of immersed loops in $N$. For loops in $N=\mathbb R^d$, local well-posedness has been shown by different methods (reparameterization to arc length) in \cite{bauer2018fractional}. Our analysis extends this result to manifold-valued loops and also to higher-dimensional and more general base manifolds $M$. \end{itemize} \section{Sobolev mappings} \lambdabel{sec:preliminaries} \subsection{Setting} \lambdabel{sec:setting} We use the notation of \cite{Bauer2011b} and write $\mathbb N$ for the natural numbers including zero. Smooth will mean $C^^{-1}nfty$ and real analytic $C^\omega$. Sobolev regularity is denoted by $H^r$, and Sobolev spaces $H^s_{H^r}$ of mixed order $r$ in the foot point and $s$ in the fiber are introduced in \autoref{thm:sobolev_mappings}. Throughout this paper, without any further mention, we fix a real analytic connected closed manifold $M$ of dimension $\on{dim}(M)$ and a real analytic manifold $N$ of dimension $\on{dim}(N)\bar{g}eq \on{dim}(M)$. 
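To fix ideas, we spell out the metric $G_f$ from the Introduction in the simplest case $M=S^1$ and $N=\mathbb R^d$ with the Euclidean metric, which is the setting of \cite{bauer2018fractional}. For an immersed loop $c$, write $D_s=|c'|^{-1}\partial_\theta$ for the arc-length derivative and $ds=|c'|\,d\theta$ for the induced volume form; the (non-negative) Bochner Laplacian of $c^*\bar{g}$ acts on a tangent vector $h\in C^\infty(S^1,\mathbb R^d)$ as $\Delta^{c^*\bar{g}}h=-D_s^2h$, so the metric becomes
\begin{equation*}
G_c(h,k)=\int_{S^1}\big\langle (1-D_s^2)^p h,\, k\big\rangle\, ds .
\end{equation*}
If, say, $c$ is parametrized with constant speed $|c'|\equiv\ell/2\pi$, where $\ell$ denotes the length of $c$, then $(1-D_s^2)^p$ acts on the Fourier mode $e^{in\theta}$ by multiplication with $\big(1+(2\pi n/\ell)^2\big)^p$, which makes the fractional order $p$ of the metric explicit.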
\subsection{Sobolev sections of vector bundles} \lambdabel{sec:sobolev} \cite[Section~2.3]{bauer2018smooth} We write $H^s(\mathbb R^m,\mathbb R^n)$ for the Sobolev space of order $s^{-1}n\mathbb R$ of $\mathbb R^n$-valued functions on $\mathbb R^m$. We will now generalize these spaces to sections of vector bundles. Let $E$ be a vector bundle of rank $n^{-1}n\mathbb N_{>0}$ over $M$. We choose a finite vector bundle atlas and a subordinate partition of unity in the following way. Let $(u_i\colon U_i \to u_i(U_i)\subseteq \mathbb R^m)_{i^{-1}n I}$ be a finite atlas for $M$, let $(\varphi_i)_{i^{-1}n I}$ be a smooth partition of unity subordinated to $(U_i)_{i ^{-1}n I}$, and let $\partialsi_i\colon E|U_i \to U_i\times \mathbb R^n$ be vector bundle charts. Note that we can choose open sets $U_i^\circ$ such that $\on{supp}(\partialsi_i)\subset U_i^\circ \subset \overline{U_i^\circ}\subset U_i$ and each $u_i(U_i^\circ)$ is an open set in $\mathbb R^m$ with Lipschitz boundary (cf.\@ \cite[Appendix~H3]{behzadan2017certain}). Then we define for each $s ^{-1}n \mathbb R$ and $f ^{-1}n \Gamma(E)$ \betagin{equation*} \|f\|_{\Gamma_{H^s}(E)}^2 := \sum_{i ^{-1}n I} \|\on{pr}_cf{\mathbb R^n}\circ\, \partialsi_i\circ (\varphi_i \cdot f)\circ u_i^{-1} \|_{H^s(\mathbb R^m,\mathbb R^n)}^2. \end{equation*} Then $\|\cdot\|_{\Gamma_{H^s}(E)}$ is a norm, which comes from a scalar product, and we write $\Gamma_{H^s}(E)$ for the Hilbert completion of $\Gamma(E)$ under the norm. It turns out that $\Gamma_{H^s}(E)$ is independent of the choice of atlas and partition of unity, up to equivalence of norms. We refer to \cite[Section~7]{triebel1992theory2} and \cite[Section~6.2]{grosse2013sobolev} for further details. The following theorem describes module properties of Sobolev sections of vector bundles, which will be used repeatedly throughout the paper. \betagin{theorem}[Module properties] \lambdabel{thm:module} \cite[Theorem~2.4]{bauer2018smooth} Let $E_1,E_2$ be vector bundles over $M$ and let $s_1,s_2,s^{-1}n\mathbb R$ satisfy $$s_1+s_2\bar{g}eq 0,\; \min(s_1,s_2)\bar{g}eq s, \text{ and } s_1+s_2-s>\on{dim}(M)/ 2.$$ Then the tensor product of smooth sections extends to a bounded bilinear mapping \betagin{equation*} \Gamma_{H^{s_1}}(E_1)\times \Gamma_{H^{s_2}}(E_2) \to \Gamma_{H^{s}}(E_1\otimes E_2). \end{equation*} \end{theorem} The following theorem describes the manifold structure of Sobolev mappings between finite-dimensional manifolds. It is an elaboration of \cite[5.2 and 5.4]{Michor20} and an extension to the Sobolev case of parts of \cite[Section 42]{KM97}. \betagin{theorem}[Sobolev mappings between manifolds] \lambdabel{thm:sobolev_mappings} The following statements hold for any $r^{-1}n(\on{dim}(M)/2,^{-1}nfty)$ and $s,s_1,s_2 ^{-1}n [-r,r]$: \betagin{enumerate} ^{-1}tem\lambdabel{thm:sobolev_mappings:a} The space $H^{r}(M,N)$ is a $C^{^{-1}nfty}$ and a real analytic manifold. Its tangent space satisfies in a natural (i.e., functorial) way \betagin{equation*} T H^{r}(M,N) = H^r(M,TN) \timesrightarrow[\partiali_{H^{r}(M,N)}]{(\partiali_N)_*} H^{r}(M,N) \end{equation*} with foot point projection given by $\partiali_{H^{r}}(M,N)=(\partiali_N)_*\colon h \mapsto\partiali_N\circ h$. ^{-1}tem The space $H^s_{H^r}(M,TN)$ of `$H^s$ mappings $M\to TN$ with foot point in $H^{r}(M,N)$' is a real analytic manifold and a real analytic vector bundle over $H^{r}(M,N)$. Similarly, spaces such as $L(H^{s_1}_{H^r}(M,TN),H^{s_2}_{H^r}(M,TN))$ are real analytic vector bundles over $H^{r}(M,N)$. 
^{-1}tem The space $\operatorname{Met}_{H^{r}}(M)$ of all Riemannian metrics of Sobolev regularity $H^r$ is an open subset of the Hilbert space $\Gamma_{H^r}(S^2T^*M)$, and thus a real analytic manifold. \end{enumerate} \end{theorem} \betagin{proof} \betagin{enumerate}[wide] ^{-1}tem Let us recall the chart construction: we use an auxiliary real analytic Riemannian metric $ \hat g$ on $N$ and its exponential mapping $\exp^{ \hat g}$; some of its properties are summarized in the following diagram: $$\timesymatrix { & 0_N \ar@{_{(}->}[d] \ar@(l,dl)[dl]_-{\text{zero section\quad }} & & N \ar@{^{(}->}[d] \ar@(r,dr)[rd]^-{\text{ diagonal}} & \\ TN & V^{TN} \ar@{_{(}->}[l]^{\text{ open }} \ar[rr]^{ (\partiali_N,\exp^{ \hat g}) }_{\cong} & & V^{N\times N} \ar@{^{(}->}[r]_{\text{ open }} & N\times N }$$ Without loss we may assume that $V^{N\times N}$ is symmetric: $$(y_1,y_2)^{-1}n V^{N\times N} ^{-1}ff (y_2,y_1)^{-1}n V^{N\times N}.$$ A chart, centered at a real analytic $f^{-1}n C^\omega(M,N)$, is: \betagin{align*} {H^r}(M,N)\supset U_f &=\{g: (f,g)(M)\subset V^{N\times N}\} \timesrightarrow{u_f}{} \tilde U_f \subset \Gamma_{H^r}( f^*TN) \\ u_f(g) = (\partiali_N,&\exp^{ \hat g})^{-1} \circ (f,g),\quad u_f(g)(x) = (\exp^{ \hat g}_{f(x)})^{-1}(g(x)) \\ (u_f)^{-1}(s) &= \exp^{ \hat g}_f\circ s, \qquad (u_f)^{-1}(s)(x) = \exp^{ \hat g}_{f(x)}(s(x)) \end{align*} Note that $\tilde U_f$ is open in $\Gamma(f^*TN)$. The charts $U_f$ for $f^{-1}n C^\omega(M,N)$ cover $H^r(M,N)$: since $C^\omega(M,N)$ is dense in $H^r(M,N)$ by \cite[42.7]{KM97} and since $H^r(M,N)$ is continuously embedded in $C^{0}(M,N)$, a suitable $C^0$-norm neighborhood of $g^{-1}n H^r(M,N)$ contains a real analytic $f ^{-1}n C^\omega(M,N)$, thus $f^{-1}n U_g$, and by symmetry of $V^{N\times N}$ we have $g^{-1}n U_f$. The chart changes, $$ \Gamma_{H^r}( f_1^*TN)\supset \tilde U_{f_1}\ni s \mapsto (u_{f_2,f_1})_*(s) := (\exp^{ \hat g}_{f_2})^{-1}\circ \exp^{ \hat g}_{f_1}\circ s^{-1}n \tilde U_{f_2}\subset \Gamma_{H^r}( f_2^*TN), $$ for charts centered on real analytic $f_1,f_2^{-1}n C^\omega(M,N)$ are real analytic by Lemma \ref{lem:pushforward} since $r>\on{dim}(M)/2$. The tangent bundle $TH^r(M,N)$ is canonically glued from the following vector bundle chart changes, which are real analytic by Lemma \ref{lem:pushforward} again: \betagin{multline}\lambdabel{eq:Tchartchanges} \tilde U_{f_1}\times \Gamma_{H^r}( f_1^*TN) \ni (s, h) \mapsto (T (u_{f_2,f_1})_*)(s,h) = \\ =\big((u_{f_2,f_1})_*(s), (d_{\text{fiber}} u_{f_2,f_1})_*(s,h)\big)^{-1}n \tilde U_{f_2}\times \Gamma_{H^r}( f_2^*TN) \end{multline} It has the canonical charts $$ TH^r(M,N) \supset T\tilde U_f \timesrightarrow[(T(\exp_f^{ \hat g})^{-1}_*)]{Tu_f} \tilde U_f \times \Gamma_{H^r}( f^*TN). $$ These identify $TH^r(M,N)$ canonically with $H^r(M,TN)$ since $$Tu_f^{-1}(s,s') = T(\exp_f^{ \hat g})\circ\on{vl}\circ (s,s')\colon M\to TN\,,$$ where we used the vertical lift $\on{vl}\colon TN\times_N TN \to TTN$ which is given by $\on{vl}(u_x,v_x)= \partial_t|_{t=0}(u_x+t.v_x)$; see \cite[8.12 or 8.13]{Michor08}. 
The corresponding foot-point projection is then $$\partiali_{H^s(M,N)}(T(\exp_f^{ \hat g})\circ \on{vl}\circ (s,s'))= \exp_f^{ \hat g}\circ s = \partiali_N\circ T(\exp_f^{ \hat g})\circ (s,s').$$ ^{-1}tem The canonical chart changes (\ref{eq:Tchartchanges}) for $TH^r(M,N)$ extend to \betagin{multline*} \tilde U_{f_1}\times \Gamma_{H^s}( f_1^*TN) \ni (s, h) \mapsto (T u_{f_2,f_1})_*(s,h) = \\ =\big((u_{f_2,f_1})_*(s), (d_{\text{fiber}} u_{f_2,f_1}\circ s)_*(h)\big)^{-1}n \tilde U_{f_2}\times \Gamma_{H^s}( f_2^*TN), \end{multline*} since $d_{\text{fiber}}u_{f_2,f_1}\colon f_1^*TN\times_M f_1^*TN = f_1^* (TN\times_N TN)\to f_2^* TN$ is fiber respecting real analytic by the module properties \ref{thm:module}. Note that $d_{\text{fiber}}u_{f_2,f_1}\circ s$ is then an $H^r$-section of the bundle $L(f_1^*TN,f_1^*TN)\to M$, which may be applied to the $H^s$-section $h$ by the module properties \ref{thm:module}. These extended chart changes then glue the vector bundle $$H^s_{H^r}(M,TN)\timesrightarrow{(\partiali_N)_*} H^r(M,TN). $$ ^{-1}tem The space $\Gammamma_{H^r}(S^2T^*M)$ is continuously embedded in $\Gamma_{C^1}(S^2T^*M)$ because $r>\on{dim}(M)/2+1$. Thus, the space of metrics is open. \qedhere \end{enumerate} \end{proof} \subsection{Connections, connectors, and sprays} \lambdabel{sec:connection} This sections reviews some relations between connections, connectors, and sprays. It holds for general convenient manifolds $N$, including infinite-dimensional manifolds of mappings, and will be used in this generality in the sequel (see e.g.\@ the proofs of Theorems \ref{thm:geodesic} and \ref{thm:wellposed}). \betagin{enumerate} ^{-1}tem \textbf{Connectors.} \lambdabel{sec:connection:a} \cite[22.8--9]{Michor08} Any connection $\nablabla$ on $TN$ is given in terms of a connector $K\colon TTN\to TN$ as follows: For any manifold $M$ and function $h\colon M\to TN$, one has $\nablabla h = K \circ Th\colon TM\to TN$. In the subsequent points we fix such a connection and connector on $N$. ^{-1}tem \textbf{Pull-backs.} \lambdabel{sec:connection:b} \cite[(22.9.6)]{Michor08} For any manifold $Q$, smooth mapping $g\colon Q\to M$ and $Z_y^{-1}n T_yQ$, one has $\nablabla_{Tg.Z_y}s = \nablabla_{Z_y}(s\circ g)$. Thus, for $g$-related vector fields $Z^{-1}n \mathfrak X(Q)$ and $X^{-1}n \mathfrak X(M)$, one has $\nablabla_Z(s\circ g) = (\nablabla_Xs)\circ g$, as summarized in the following diagram: \betagin{equation*} \timesymatrix@C=4em @R=.3em{ & & T^2N \ar[dd]^{K} \\ & & \\ TQ \ar[r]_{Tg} \ar[rruu]^{T(s\circ g)} & TM \ar[ruu]_{Ts} & TN \\ & & TN \ar[dd]^{\partiali_N} \\ Q \ar[r]^{g} \ar[uu]^{Z} & M \ar[uu]^{X} \ar[ruu]^{\nablabla_Xs} & \\ Q \ar[r]^{g} & M \ar[ruu]^{s} \ar[r]^{f} & N. \\ }\end{equation*} ^{-1}tem \textbf{Torsion.} \lambdabel{sec:connection:c} \cite[(22.10.4)]{Michor08} For any smooth mapping $f\colon M\to N$ and vector fields $X,Y^{-1}n \mathfrak X(M)$ we have \betagin{align*} \on{Tor}(Tf.X,Tf.Y) &=\nablabla_X(Tf\circ Y) - \nablabla_Y(Tf\circ X) - Tf\circ [X,Y] \\& = (K \circ \kappa_M - K) \circ TTf\circ TX \circ Y. \end{align*} ^{-1}tem \textbf{Sprays.} \lambdabel{sec:connection:d} \cite[22.7]{Michor08} Any connection $\nablabla$ induces a one-to-one correspondence between fiber-wise quadratic $C^\alpha$ mappings $\Phi\colon TN\to TN$ and $C^\alpha$ sprays $S\colon TN\to TTN$. Here $\nablabla_{\partial_t}c_t = \Phi(c_t)$ corresponds to $c_{tt}=S(c_t)$ for curves $c$ in $N$. 
Equivalently, in terms of the connector $K$, the relation between $\Phii$ and $S$ is as follows: $$\timesymatrix@R=0.5em@C=1em{ & TTN \ar[dl]_{T(\partiali_N)} \ar[dr]^{\partiali_{TN}} \\ TN \ar[dr]_{\partiali_N}& & TN \ar[dl]^{\partiali_N} \\ &N & } \hspace{4em} \timesymatrix@R=0.5em@C=1em{ & TTN \ar[dl]_{T(\partiali_N)} \ar[dr]^{K} \\ TN& & TN \\ &TN \ar@{=}[ul] \ar[ur]_{\Phi} \ar@{-->}[uu]^{S\!}& } $$ The diagram on the left introduces the projections $T(\partiali_N)$ and $\partiali_{TN}$, which define the two vector bundle structures on $TTN$. The diagram on the right shows that $\Phii$ and $S$ are related by $\Phii=K\circ S$. \end{enumerate} The following lemma describes how any connection on $TN$ induces via a product-preserving functor from finite to infinite-dimensional manifolds \cite{KMS93, KM96} a connection on the mapping space $H^s_{H^r}(M,TN)$. The induced connection will be used as an auxiliary tool for expressing the geodesic equation; see \autoref{thm:geodesic}. \betagin{lemma}[Induced connection on mapping spaces] \lambdabel{lem:connection} Let $r ^{-1}n (\on{dim}(M)/2,^{-1}nfty)$, $s ^{-1}n [-r,r]$, and $\alpha ^{-1}n \{^{-1}nfty,\omega\}$. Then any $C^\alpha$ connection on $TN$ induces in a natural (i.e., functorial) way a $C^\alpha$ connection on $H^s_{H^r}(M,TN)$. \end{lemma} \betagin{proof} Note that $TN\mapsto H^s_{H^r}(M,TN)$ is a product-preserving functor from finite-dimensional manifolds to infinite-dimensional manifolds as described in \cite{KM96} and \cite[Section 31]{KM97}. Furthermore, note that $TH^s_{H^r}(M,TN)=H^{r,s,r,s}(M,TTN)$, where $(r,s,r,s)$ denotes the Sobolev regularity of the individual components in any local trivialization $TTN\supset TTU \timesrightarrow{TTu} u(U)\times (\mathbb R^n)^3\subset (\mathbb R^n)^4$ induced by a chart $N\supset U\timesrightarrow{u}u(U)\subset \mathbb R^n$; cf.~the proof of \autoref{thm:sobolev_mappings}. Applying the functor $H^s_{H^r}(M,\cdot)$ to the connector $K\colon TTN\to TN$ gives the induced connector \betagin{equation*} K_*=H^s_{H^r}(M,K)\colon TH^s_{H^r}(M,TN)\to H^s_{H^r}(M,TN), \quad h\mapsto K\circ h. \end{equation*} The induced connector is $C^\alpha$ by \autoref{lem:pushforward}. \end{proof} \section{Sobolev immersions} This section collects some results about the differential geometry of immersions with Sobolev regularity. More specifically, it describes the Sobolev regularity of the induced metric, volume form, normal and tangential projections, and fractional Laplacian, as well as variations of these objects with respect to the immersion. Here the term fractional Laplacian is understood as a $p$-th power of the operator $1+\Deltalta$, where $\Deltalta$ is the Bochner Laplacian and $p ^{-1}n \mathbb R$; see \cite[Section~3]{bauer2018smooth}. \betagin{lemma}[Geometry of Sobolev immersions] \lambdabel{lem:dependence} The following statements hold for any $r ^{-1}n (\operatorname{dim}(M)/2+1,^{-1}nfty)$ and any smooth Riemannian metric $\bar{g}$ on $N$: \betagin{enumerate} ^{-1}tem \lambdabel{lem:dependence:a} The space $\on{Imm}_{H^{r}}(M,N)$ of all immersions $f\colon M\to N$ of Sobolev class $H^{r}$ is an open subset of the real analytic manifold $H^{r}(M,N)$. ^{-1}tem \lambdabel{lem:dependence:b} The pull-back metric is well defined and real analytic as a mapping \betagin{equation*} \on{Imm}^r(M,N) \ni f \mapsto f^*\bar{g} ^{-1}n \operatorname{Met}_{H^{r-1}}(M) := \Gamma_{H^{r-1}}(S^2_+T^*M). 
\end{equation*} ^{-1}tem \lambdabel{lem:dependence:c} The Riemannian volume form is well defined and real analytic as a mapping \betagin{equation*} \on{Imm}^r(M,N) \ni f \mapsto \on{vol}^{f^*\bar{g}} ^{-1}n \Gamma_{H^{r-1}}(\on{Vol} M). \end{equation*} ^{-1}tem \lambdabel{lem:dependence:d} The tangential projection $\top\colon T\on{Imm}(M,N)\to\mathfrak X(M)$ and the normal projection $\bot\colon T\on{Imm}(M,N)\to T\on{Imm}(M,N)$ are defined for smooth $h^{-1}n T_f\on{Imm}(M,N)$ via the relation $h = Tf.h^\top+h^\bot$, where $\bar{g}(h^\bot(x),T_xf(T_xM))=0$ for all $x^{-1}n M$. They extend real analytically for any real number $s^{-1}n [1-r,r-1]$ to \betagin{align*} &\bot ^{-1}n \Gammamma_{C^{\omegaega}}\Big(L(H^s_{\on{Imm}^r}(M,TN),H^s_{\on{Imm}^r}(M,TN))\Big),\\ &\top^{-1}n C^{\omegaega}\Big(H^s_{\on{Imm}^r}(M,TN), \mathfrak X_{H^s}(M)\Big), \end{align*} where $H^s_{\on{Imm}^r}(M,TN)$ is the space of `$H^s$ mappings $M\to TN$ with foot point in $\on{Imm}^r(M,N)$' described in \autoref{thm:sobolev_mappings}. ^{-1}tem \lambdabel{lem:dependence:e} For any real numbers $s,p$ with $s, s-2p ^{-1}n [1-r,r]$, the fractional Laplacian \betagin{equation*} f\mapsto (1+\Delta^{f^*\bar{g}})^p \end{equation*} is a real analytic section of the bundle \betagin{equation*} GL(H^s_{\on{Imm}^r}(M,TN),H^{s-2p}_{\on{Imm}^r}(M,TN)). \end{equation*} \end{enumerate} \end{lemma} \betagin{proof} \betagin{enumerate}[wide] ^{-1}tem The space $H^r(M,N)$ is continuously embedded in $C^1(M,N)$ because $r>\on{dim}(M)/2+1$. Thus, the space of immersions is open. ^{-1}tem follows from the formula $f^*\bar{g}=\bar{g}(Tf,Tf)$. ^{-1}tem follows from \ref{lem:dependence:b} and the real analyticity of $g \mapsto \on{vol}^g$; see \cite[Lemma~3.3]{bauer2018smooth}. ^{-1}tem Let $U$ be an open subset of $M$ which carries a local frame $X ^{-1}n \Gamma(GL(\mathbb R^m,TU))$. For any $f ^{-1}n \on{Imm}^{r}(M,N)$, the Gram-Schmidt algorithm transforms $X$ into an $(f^*\bar{g})$-ortho\-normal frame $Y_f^{-1}n \Gamma_{H^{r-1}}(GL(\mathbb R^m,TU))$, which is given by \betagin{align*} \forall j ^{-1}n \{1,\dots,m\}: \qquad Y_f^j=\frac{X^j-\sum_{k=1}^{j-1} (f^*\bar{g})(Y_f^k,X^j) Y_f^k}{\left\|X^j-\sum_{k=1}^{j-1} (f^*\bar{g})(Y_f^k,X^j) Y_f^k\right\|_{f^*\bar{g}}}. \end{align*} This defines a real analytic map \betagin{align*} Y\colon \on{Imm}^r(M,N) \to \Gamma_{H^{r-1}}(GL(\mathbb R^m,TU)). \end{align*} We write $TN$ as a sub-bundle of a trivial bundle $N\times V$ and denote the corresponding inclusion and projection mappings by \betagin{equation*} i\colon TN \to N \times V, \qquad \partiali\colon N\times V \to TN. \end{equation*} This allows one to define a projection from $N\times V$ onto $TN$ and further onto the normal bundle of $f$, which is real analytic as a map \betagin{align*} p\colon \on{Imm}^r(M,N) &\to H^{r-1}(U,L(V,V)), \\ p_f(x)(v) &:= v-\sum_{i=1}^m \bar{g}\big(\partiali(f(x),v), T_xf.Y_f^i(x)\big)\, i\big(T_xf.Y_f^i(x)\big). \end{align*} This construction can be repeated for any open set $\tilde U$ such that $T\tilde U$ is parallelizable, and the resulting projections $p_f$ coincide on $U\cap \tilde U$. Thus, one obtains a real analytic map \betagin{align*} p\colon \on{Imm}^r(M,N) \to H^{r-1}(M,L(V,V)). \end{align*} By the module properties \ref{thm:module}, this induces a real analytic map \betagin{gather*} \tilde p\colon \on{Imm}^r(M,N) \times H^s(M,V) \to H^s(M,V), \quad \tilde p(f,h) := p_f.h.
\end{gather*} These maps fit into the commutative diagrams \betagin{equation*} \timesymatrix{ T_{f(x)}N \ar[r]^\bot \ar@{^(->}[d]^i & T_{f(x)}N \ar@{<<-}[d]^\partiali \\ V \ar[r]^{p_f(x)} & V } \hspace{0.5em} \timesymatrix{ H^s_{\on{Imm}^r}(M,TN) \ar[r]^\bot \ar@{^(->}[d]^{i_*} & H^s_{\on{Imm}^r}(M,TN) \ar@{<<-}[d]^{\partiali_*} \\ \on{Imm}^r(M,N)\times H^s(M,V) \ar[r]^{\tilde p} & \on{Imm}^r(M,N)\times H^s(M,V) } \end{equation*} The maps $i_*$ and $\partiali_*$ are real analytic, as shown in part \ref{lem:pushforward:b'} of the proof of \autoref{lem:pushforward}. Therefore, the map $\bot=\partiali_*\circ \tilde p \circ i_*$ is real analytic. The tangential projection $h^\top = Tf^{-1}(h-h^\bot)$ is then also real analytic. ^{-1}tem There is a bundle $E$ over $N$ such that $TN\oplus E$ is a trivial bundle, i.e., $TN\oplus E \cong N\times V$ for some vector space $V$. We endow the bundle $E$ with a smooth connection and the bundle $N\times V \cong TN\oplus E$ with the product connection. By construction, the inclusion $i\colon TN\to N\times V$ and projection $\partiali\colon N\times V \to TN$ respect the connection. At the level of Sobolev sections of these bundles, this means that the natural inclusion and projection mappings fit into the following commutative diagram with $p=1$: \betagin{equation*} \timesymatrix@C6em{ H^s_{\on{Imm}^r}(M,TN) \ar[r]^{(1+\Delta)^p} \ar@{^{(}->}[d]^{i_*} & H^{s-2p}_{\on{Imm}^r}(M,TN) \ar@{<<-}[d]^{\partiali_*} \\ \on{Imm}^r(M,N)\times H^s(M,V) \ar[r]^{(\on{Id},(1+\Delta)^p)} & \on{Imm}^r(M,N)\times H^{s-2p}(M,V) } \end{equation*} As the functional calculus preserves commutation relations, this extends to all $p$. Thus, we have reduced the situation to the bottom row of the diagram, where the fractional Laplacian acts on $H^s(M,V)$. In this case real analytic dependence of the fractional Laplacian on the metric has been shown in \cite[Theorem 5.4]{bauer2018smooth}. Now the claim follows from the chain rule and \ref{lem:dependence:b}. \qedhere \end{enumerate} \end{proof} The following lemma describes the first variation of the metric and fractional Laplacian. The key point is that the variation in normal directions is more regular than the variation in tangential directions. This will be of importance in \autoref{thm:satisfies}. The lemma is formulated using an auxiliary connection $\hat\nablabla$ on $N$, e.g., the Levi-Civita connection of a Riemannian metric $\bar{g}$ on $N$. \betagin{lemma}[First variation formulas] \lambdabel{lem:variation} Let $\bar{g}$ be a smooth Riemannian metric on $N$, and let $\hat \nablabla$ be a $C^\alpha$ connection on $N$ for $\alpha^{-1}n\{^{-1}nfty,\omega\}$. \betagin{enumerate} ^{-1}tem \lambdabel{lem:variation:a} For any $r ^{-1}n (\operatorname{dim}(M)/2+1,^{-1}nfty)$ and $s ^{-1}n [2-r,r]$, the variation of the pull-back metric extends to a real analytic map \betagin{equation*} H^s_{\on{Imm}^r}(M,TN) \ni m \mapsto D_{f,m}(f^*\bar{g}) ^{-1}n \Gamma_{H^{s-1}}(S^2T^*M). \end{equation*} ^{-1}tem \lambdabel{lem:variation:b} For any $r ^{-1}n (\operatorname{dim}(M)/2+2,^{-1}nfty)$ and $s ^{-1}n [2-r,r-2]$, the variation of the pull-back metric in normal directions extends to a real analytic map \betagin{equation*} H^s_{\on{Imm}^r}(M,TN) \ni m \mapsto D_{f,m^\bot}(f^*\bar{g}) ^{-1}n \Gamma_{H^{s}}(S^2T^*M). 
\end{equation*} ^{-1}tem \lambdabel{lem:variation:c} For any $r>\operatorname{dim}(M)/2+2$ and $p ^{-1}n [1,r-1]$, the variation of the fractional Laplacian in normal directions extends to a $C^\alpha$ map \betagin{equation*} H^{2p-r}_{\on{Imm}^r}(M,TN) \ni m \mapsto \hat\nablabla_{m^\bot}(1+\Deltalta^{f^*\bar g})^p ^{-1}n L(H^r_{\on{Imm}^r}(M,TN),H^{1-r}_{\on{Imm}^r}(M,TN)), \end{equation*} where $\hat\nablabla$ is the induced connection on $GL(H^r_{\on{Imm}^r}(M,TN),H^{1-r}_{\on{Imm}^r}(M,TN))$ described in \autoref{lem:connection}, and $L(H^r_{\on{Imm}^r}(M,TN),H^{1-r}_{\on{Imm}^r}(M,TN))$ is the vector bundle over $\on{Imm}^r(M,N)$ described in \autoref{thm:sobolev_mappings}. \end{enumerate} \end{lemma} \betagin{proof} We will repeatedly use the module properties \ref{thm:module}. \betagin{enumerate}[wide] ^{-1}tem follows from the following formula for the first variation of the pull-back metric \cite[Lemma~5.5]{Bauer2011b}: \betagin{equation*} D_{f,m}(f^*\bar{g}) = \bar{g}(\nablabla m,Tf)+\bar{g}(Tf,\nablabla m) \end{equation*} ^{-1}tem Splitting the above formula into tangential and normal parts of $m$ yields \betagin{equation*} D_{f,m}(f^*\bar{g}) = -2\bar{g}(m^\bot,\nablabla Tf)+g(\nablabla m^\top,\cdot)+g(\cdot,\nablabla m^\top). \end{equation*} Now the claim follows from the real analyticity of the projection $\bot$ in \autoref{lem:dependence}. ^{-1}tem[\textbf{(c')}] \makeatletter \partialrotected@edef\@currentlabel{\textbf{(c')}} \makeatother \lambdabel{lem:variation:c'} We claim for any bundle $E$ over $M$ with fixed fiber metric and fixed connection (i.e., not depending on $g$) that the following map is real analytic: \betagin{equation*} \operatorname{Met}_{H^{r-1}}(M) \times \Gamma_{H^{s}}(S^2T^*M)\ni (g,m) \mapsto D_{g,m}\Deltalta^g ^{-1}n L(\Gamma_{H^{q}}(E),\Gamma_{H^{s+q-r-1}}(E)), \end{equation*} where $s^{-1}n [2-r,r-1]$ and $q^{-1}n [2-s,r]$. To prove the claim we proceed similarly to \cite[Lemma~3.8]{bauer2018smooth}. As the connection on $E$ does not depend on the metric $g$, \betagin{align*} D_{g,m}\Delta^gh &= -D_{g,m}(\on{Tr}^{g^{-1}}\nablabla^g\nablabla h) = -(D_{g,m}\on{Tr}^{g^{-1}})\nablabla^g\nablabla h -\on{Tr}^{g^{-1}}(D_{g,m}\nablabla^g)\nablabla h. \end{align*} Here $\nablabla^g$ is the covariant derivative on $T^*M\otimes E$. The proof of \cite[Lemma~3.8]{bauer2018smooth} and some multi-linear algebra show that $D_{g,m}\nablabla^g$ is tensorial and real analytic as a map \betagin{multline*} \operatorname{Met}_{H^{r-1}}(M)\times \Gamma_{H^{s}}(S^2T^*M) \ni (g,m) \\ \mapsto D_{g,m}\nablabla^g ^{-1}n \Gamma_{H^{s-1}}(T^*M\otimes L(T^*M\otimes E,T^*M\otimes E)). \end{multline*} Moreover, the following maps are real analytic by \cite[Lemmas~3.2 and~3.5]{bauer2018smooth}: \betagin{align*} \operatorname{Met}_{H^{r-1}}(M) \ni g &\mapsto g^{-1} ^{-1}n \Gamma_{H^{r-1}}(S^2TM), \\ \operatorname{Met}_{H^{r-1}}(M) \ni g &\mapsto \nablabla^g ^{-1}n L(\Gamma_{H^{q-1}}(T^*M\otimes E),\Gamma_{H^{q-2}}(T^*M\otimes T^*M\otimes E)). \end{align*} Together with the module properties~\ref{thm:module} this establishes \ref{lem:variation:c'}. ^{-1}tem [\textbf{(c'')}] \makeatletter \partialrotected@edef\@currentlabel{\textbf{(c'')}} \makeatother \lambdabel{lem:variation:c''} Using \ref{lem:variation:c'} we will now study the smooth dependence of fractional Laplacians.
In particular, we claim for any bundle $E$ over $M$ with fixed fiber metric and fixed connection and any $p ^{-1}n (1,r-1]$ that the following map is real analytic: \betagin{equation*} \operatorname{Met}_{H^{r-1}}(M) \times \Gamma_{H^{2p-r}}(S^2T^*M)\ni (g,m) \mapsto D_{g,m}(1+\Deltalta^g)^p ^{-1}n L(\Gamma_{H^{r}}(E),\Gamma_{H^{1-r}}(E)). \end{equation*} The claim is a generalization of \cite[Lemma~5.5]{bauer2018smooth} to perturbations $m$ with even lower Sobolev regularity and uses the fact that the connection on $E$ does not depend on the metric $g$. Let $X,Y,Z$ be the spaces of operators given by \betagin{align*} X&=L(\Gamma_{H^{r}}(E),\Gamma_{H^{r-2}}(E))\cap L(\Gamma_{H^{3-r}}(E),\Gamma_{H^{1-r}}(E)), \\ Y&=L(\Gamma_{H^{r}}(E),\Gamma_{H^{-r+2p-1}}(E))\cap L(\Gamma_{H^{r-2p+2}}(E),\Gamma_{H^{1-r}}(E)), \\ Z&=L(\Gamma_{H^{r}}(E),\Gamma_{H^{r-2}}(E))\cap L(\Gamma_{H^{r-2p+2}}(E),\Gamma_{H^{r-2p}}(E)). \end{align*} Note that the conditions $r>2$ and $p>1$ ensure that $X$, $Y$, and $Z$ are intersections of operator spaces on distinct Sobolev scales, as required in \cite[Theorem 4.5]{bauer2018smooth}. Moreover, let $U\subseteq X$ be an open neighborhood of $1+\Delta^g$ with $g ^{-1}n \operatorname{Met}_{H^{r-1}}(M)$ such that the holomorphic functional calculus is well-defined and holomorphic on $U$ in the sense of \cite[Theorem 4.5]{bauer2018smooth}. Then the desired map is the composition of the following two maps: \betagin{gather*} \operatorname{Met}_{H^{r-1}}(M)\times \Gamma_{H^{2p-r}}(S^2T^*M) \ni (g,m)\mapsto (1+\Delta^g,D_{g,m}\Delta^g) ^{-1}n (X,Y), \\ (U,Y) \ni (A,B) \mapsto D_{A,B}A^p ^{-1}n L(\Gamma_{H^{r}}(E),\Gamma_{H^{1-r}}(E)). \end{gather*} The first map is real analytic by \autoref{lem:dependence}.\ref{lem:dependence:e} and \ref{lem:variation:c'}. The second map has to be interpreted via the following identity, which is shown in the proof of \cite[Lemma~5.5]{bauer2018smooth} using the resolvent representation of the functional calculus: \betagin{equation*} \forall A ^{-1}n U, \forall B ^{-1}n Y\cap Z: \qquad D_{A,B}A^p = A^{r-1-p}D_{A,A^{p-r+1}B}A^p. \end{equation*} The right-hand side above is the composition of the following maps, which are again real analytic by \cite[Theorem 4.5]{bauer2018smooth}: \betagin{gather*} (U,Y) \ni (A,B) \mapsto (A,A^{p-r+1/2}B) ^{-1}n (U,Z), \\ (U,Z) \ni (A,B) \mapsto (A,D_{A,B}A^p) ^{-1}n U\times L(\Gamma_{H^{r}}(E),\Gamma_{H^{r-2p}}(E)), \\ U\times L(\Gamma_{H^{r}}(E),\Gamma_{H^{r-2p}}(E)) \ni (A,B) \mapsto A^{r-p-1/2}B ^{-1}n L(\Gamma_{H^{r}}(E),\Gamma_{H^{1-r}}(E)). \end{gather*} This proves \ref{lem:variation:c''}. Note that \ref{lem:variation:c''} extends to $p=1$ thanks to \ref{lem:variation:c'}. ^{-1}tem As in the proof of \autoref{lem:dependence}.\ref{lem:dependence:e}, we write $i$ and $\partiali$ for the inclusion and projection mappings of $TN$, seen as a sub-bundle of a trivial bundle $TN\oplus E\cong N\times V$ with $C^\alpha$ product connection.
If we consider $i_*$ and $\partiali_*$ as real analytic sections of operator bundles, \betagin{align*} i_* ^{-1}n \Gammamma_{C^\omega}(L(H^r_{\on{Imm}^r}(M,TN),H^r_{\on{Imm}^r}(M,N\times V)), \\ \partiali_* ^{-1}n \Gammamma_{C^\omega}(L(H^{r-2p}_{\on{Imm}^r}(M,TN),H^{r-2p}_{\on{Imm}^r}(M,N\times V)), \end{align*} then the covariant derivative of the fractional Laplacian can be expressed as follows: \betagin{multline*} \hat\nablabla_{m^\bot}(1+\Deltalta^{f^*\bar{g}})^p = (\hat\nablabla_{m^\bot}\partiali_*)(\on{Id},(1+\Deltalta^{f^*\bar{g}})^p)i_* \\ + \partiali_*\big(\hat\nablabla_{m^\bot}(\on{Id},(1+\Deltalta^{f^*\bar{g}})^p)\big)i_* + \partiali_*(\on{Id},(1+\Deltalta^{f^*\bar{g}})^p)(\hat\nablabla_{m^\bot}i_*). \end{multline*} The maps $i_*$ and $\partiali_*$ are real analytic, and consequently their covariant derivatives are $C^\alpha$. According to \autoref{lem:connection}, the canonical connection $D$ on the vector space $V$ induces a real analytic connection on the bundle of bounded linear operators $L(H^r_{\on{Imm}^r}(M,N\times V), H^{1-r}_{\on{Imm}^r}(M,N\times V))$. By general principles, this connection differs from $\hat\nablabla$ by a $C^\alpha$ tensor field, often called the Christoffel symbol. Thus, it suffices to show that the following map is $C^\alpha$: \betagin{multline*} H^{2p-r}_{\on{Imm}^r}(M,TN) \ni m \mapsto D_{f,m^\bot}(\on{Id},(1+\Deltalta^{f^*\bar{g}})^p) \\ ^{-1}n L(H^r_{\on{Imm}^r}(M,N\times V),H^{1-r}_{\on{Imm}^r}(M,N\times V)). \end{multline*} As $D$ is the canonical connection, this is equivalent to the following map being $C^\alpha$: \betagin{equation*} H^{2p-r}_{\on{Imm}^r}(M,TN) \ni m \mapsto D_{f,m^\bot} (1+\Deltalta^{f^*\bar{g}})^p ^{-1}n L(H^r(M,V),H^{1-r}(M,V)). \end{equation*} By \ref{lem:variation:b} with $s=2p-r$, the variation of the pull-back metric in normal directions is real analytic as a map \betagin{align*} H^{2p-r}_{\on{Imm}^r}(M,TN) \ni m \mapsto D_{f,m^\bot}(f^*\bar{g}) ^{-1}n \Gamma_{H^{2p-r}}(S^2T^*M). \end{align*} Thus, \ref{lem:variation:c} follows from \ref{lem:variation:c''} and the chain rule. \qedhere \end{enumerate} \end{proof} \section{Weak Riemannian metrics on spaces of immersions} \lambdabel{sec:metrics} The main result of this section is that the geodesic equation of Sobolev-type metrics is locally well posed under certain conditions on the operator governing the metric. The setting is general and encompasses several examples, including in particular fractional Laplace operators. \subsection{Sobolev-type metrics} \lambdabel{sec:conditions} Within the setup of \autoref{sec:setting}, we consider Sobolev-type Riemannian metrics on the space of immersions $f\colon M\to N$ of the form \betagin{align*} G^P_f(h,k)&=^{-1}nt_M \bar{g}(P_fh,k) \on{vol}(f^*\bar{g}), \qquad h,k ^{-1}n T_f\on{Imm}(M,N), \end{align*} where $\bar{g}$ is a $C^\alpha$ Riemannian metric on $N$ for $\alpha^{-1}n\{^{-1}nfty,\omega\}$, and where $P$ is an operator field which satisfies the following conditions for some $p ^{-1}n [0,^{-1}nfty)$, some $r_0 ^{-1}n (\on{dim}(M)/2+1,^{-1}nfty)$, and all $r ^{-1}n [r_0,^{-1}nfty)$: \betagin{enumerate} ^{-1}tem \lambdabel{sec:conditions:a} Assume that $P$ is a $C^\alpha$ section of the bundle \betagin{equation*} GL(H^r_{\on{Imm}^r}(M,TN),H^{r-2p}_{\on{Imm}^r}(M,TN)) \to \on{Imm}^r(M,N), \end{equation*} where $GL$ denotes bounded linear operators with bounded inverse. 
^{-1}tem \lambdabel{sec:conditions:b} Assume that $P$ is $\on{Diff}(M)$-equivariant in the sense that one has for all $\varphi^{-1}n\on{Diff}(M)$, $f ^{-1}n \on{Imm}^r(M,N)$, and $h^{-1}n T_f\on{Imm}^r(M,N)$ that \betagin{equation*} (P_fh)\circ\varphi = P_{f\circ\varphi}(h\circ\varphi). \end{equation*} ^{-1}tem \lambdabel{sec:conditions:c} Assume for each $f^{-1}n \on{Imm}^r(M,N)$ that the operator $P_f$ is nonnegative and symmetric with respect to the $H^0(g)$ inner product on $T_f\on{Imm}^r(M,N)$, i.e., for all $h,k ^{-1}n T_f\on{Imm}^r(M,N)$: \betagin{equation*} ^{-1}nt_M \bar{g}(P_f h,k)\on{vol}(g) = ^{-1}nt_M \bar{g}(h,P_f k)\on{vol}(g), \qquad ^{-1}nt_M \bar{g}(P_f h,h)\on{vol}(g) \bar{g}eq 0. \end{equation*} ^{-1}tem \lambdabel{sec:conditions:d} Assume that the normal part of the adjoint $\on{Adj}(\nablabla P)^\bot$, defined by \betagin{equation*} ^{-1}nt_M \bar{g}((\nablabla_{m^\bot}P)h,k) \on{vol}(g) = ^{-1}nt_M \bar{g}(m,\on{Adj}(\nablabla P)^\bot(h,k)) \on{vol}(g) \end{equation*} for all $f ^{-1}n \on{Imm}(M,N)$ and $m,h,k ^{-1}n T_f\on{Imm}$, exists and is a $C^\alpha$ section of the bundle of bilinear maps \betagin{equation*} L^2(H^r_{\on{Imm}^r}(M,TN),H^r_{\on{Imm}^r}(M,TN);H^{r-2p}_{\on{Imm}^r}(M,TN)). \end{equation*} Here $\nablabla$ denotes the induced connection (see \autoref{lem:connection}) of the Levi-Civita connection of $\bar{g}$. \end{enumerate} \betagin{remark} In \cite[Section~6.6]{Bauer2011b} we had more complicated conditions, and we implicitly claimed that they imply the conditions in \autoref{sec:conditions} above. There was, however, a significant gap in the argumentation of the main result. Namely, we did not show the smoothness of the extended mappings on Sobolev completions. This article closes this gap and extends the analysis to the larger class of fractional order metrics. \end{remark} We now derive the geodesic equation of Sobolev-type metrics. Recall that the usual form of the geodesic equation is $f_{tt}=\Gammamma_f(f_t,f_t)$, where the time derivatives $f_t$ and $f_{tt}$ as well as the Christoffel symbols $\Gammamma$ are expressed in a chart. This raises the problem that the space $\on{Imm}(M,N)$ lacks canonical charts, unless $N$ admits a global chart. However, $\on{Imm}(M,N)$ carries a canonical connection, namely, the one induced by the metric $\bar{g}$ on $N$, which has been described in \autoref{lem:connection}. This auxiliary connection, which will be denoted by $\nablabla$, allows one to write the geodesic equation as $\nablabla_{\partialartial_t}f_t=\Gammamma_f(f_t,f_t)$, where $\Gammamma$ is a difference between two connections and therefore tensorial. In the special case where $N$ is an open subset of Euclidean space, this coincides with the usual derivative $\nablabla_{\partialartial_t}f_t=f_{tt}$; cf.~\autoref{cor:flat}. \betagin{theorem}[Geodesic equation] \lambdabel{thm:geodesic} \cite[Theorem~6.4]{Bauer2011b} Assume the conditions of \autoref{sec:conditions}. Then a smooth curve $f\colon [0,1]\to \on{Imm}(M,N)$ is a critical point of the energy functional \betagin{equation*} E(f) = \frac12 ^{-1}nt_0^1 ^{-1}nt_M \bar{g}(P_f f_t, f_t)\on{vol}^g dt \end{equation*} if and only if it satisfies the geodesic equation \betagin{align*} \nablabla_{\partial_t} f_t= &\frac12P_f^{-1}\Big(\on{Adj}(\nablabla P)_f(f_t,f_t)^\bot-2Tf\,\bar{g}(P_ff_t,\nablabla f_t)^\sharp -\bar{g}(P_f f_t,f_t)\,\on{Tr}^g(\nablabla Tf)\Big) \\& -P_f^{-1}\Big((\nablabla_{f_t}P)f_t+\on{Tr}^g\big(\bar{g}(\nablabla f_t,Tf)\big) P_ff_t\Big). 
\end{align*} This also holds for smooth curves in $\on{Imm}^r(M,N)$ for any $r\bar{g}eq r_0$. \end{theorem} Here we used the following notation: $g=f^*\bar{g}$ is the pull-back metric and $\sharp=g^{-1}$ its associated musical isomorphism, the operator $P$ is seen as a map $P\colon \on{Imm} \to GL( T\on{Imm},T\on{Imm})$, its composition with $f$ is denoted by $P_f\colon\mathbb R\to GL( T\on{Imm},T\on{Imm})$, its covariant derivative with respect to the connection on $L(T\on{Imm},T\on{Imm})$ induced by $\nablabla$ is denoted by $\nablabla P\colon T\on{Imm}\to L(T\on{Imm},T\on{Imm})$, the canonical vector field on $\mathbb R$ is denoted by $\partial_t\colon \mathbb R \to T\mathbb R$, the time derivative $f_t=\partial_t f$ is viewed as a map $f_t\colon \mathbb R\times M\to TN$ in the expression $\nablabla f_t\colon \mathbb R\times M \to T^*M\otimes f^*TN$ and as a map $f_t\colon \mathbb R\to T\on{Imm}$ elsewhere, the spatial derivative $Tf$ is viewed as a map $Tf\colon \mathbb R\times M \to T^*M\otimes f^*TN$, and the map $\nablabla Tf\colon \mathbb R\times M \to T^*M\otimes T^*M\otimes f^*TN$ is the second fundamental form. \betagin{proof} We will consider variations of the curve energy functional along one-parameter families $f\colon (-\varepsilon,\varepsilon) \times [0,1] \times M \to N$ of curves of immersions with fixed endpoints. The variational parameter will be denoted by $s ^{-1}n (-\varepsilon,\varepsilon)$, the time-parameter by $t ^{-1}n [0,1]$. Then the first variation of the energy $E(f)$ can be calculated as follows: \betagin{equation*} \partial_s E(f) = \partial_s \frac12 ^{-1}nt_0^1 ^{-1}nt_M \bar{g}(P_f f_t, f_t)\on{vol}^g dt. \end{equation*} As the connection respects $\bar{g}$ and is a derivation of tensor products, and as the operator $P_f$ is symmetric, we have \betagin{align*} \partial_s E(f) = \frac12^{-1}nt_0^1^{-1}nt_M \bar{g}\left( (\nablabla_{\partial_s} P_f)f_t+2P_f\nablabla_{\partial_s} f_t+ \frac{\partialartial_s \on{vol}^g}{\on{vol}^g} P_f f_t, f_t \right) \on{vol}^gdt. \end{align*} We will treat each of the three summands above separately, making extensive use of properties \ref{sec:connection} of the (induced) connection $\nablabla$: \betagin{enumerate}[wide] ^{-1}tem \lambdabel{thm:geodesic:a} For the first summand we have by the definition of the adjoint that \betagin{multline*} \frac12^{-1}nt_0^1^{-1}nt_M \bar{g}( (\nablabla_{\partial_s} P_f)f_t,f_t) \on{vol}^gdt = \frac12^{-1}nt_0^1^{-1}nt_M \bar{g}( (\nablabla_{f_s} P)f_t,f_t) \on{vol}^gdt \\= \frac12^{-1}nt_0^1^{-1}nt_M \bar{g}\Big( f_s,\on{Adj}(\nablabla P)(f_t,f_t)^{\bot}+\on{Adj}(\nablabla P)(f_t,f_t)^{\top}\Big) \on{vol}^gdt. 
\end{multline*} To calculate the tangential part of the adjoint, thereby establishing its existence, we need the following formula for the tangential variation of $P$, which holds for any vector field $X$ on $M$: \betagin{align*} (\nablabla_{Tf.X} P)(h) &= (\nablabla_{\partial_t|_0} P_{f\circ \operatorname{Fl}_t^X})(h \circ \operatorname{Fl}_0^X) \\&= \nablabla_{\partial_t|_0}\big(P_{f\circ \operatorname{Fl}_t^X}(h \circ \operatorname{Fl}_t^X)\big) - P_{f\circ \operatorname{Fl}_0^X} \big( \nablabla_{\partial_t|_0}(h \circ \operatorname{Fl}_t^X)\big) \\&= \nablabla_{\partial_t|_0}\big(P_f(h) \circ \operatorname{Fl}_t^X\big) - P_f \big( \nablabla_{\partial_t|_0}(h \circ \operatorname{Fl}_t^X)\big) \\&= \nablabla_X\big(P_f(h)\big) - P_f \big( \nablabla_X h\big), \end{align*} where $\operatorname{Fl}_t^X$ denotes the flow of the vector field $X$ at time $t$ and where we used the equivariance of $P$ in the step from the second to the third line. Using this and the symmetry of $P$ we get, pointwise in $t$, \betagin{align*} & ^{-1}nt_M \bar{g}\Big(f_s,\on{Adj}(\nablabla P)(f_t,f_t)^{\top}\Big) \on{vol}(g)= ^{-1}nt_M \bar{g}\big((\nablabla_{Tf.f_s^\top} P)f_t,f_t\big) \on{vol}(g) \\&\qquad= ^{-1}nt_M \bar{g}\big(\nablabla_{f_s^\top}(P_ff_t)-P_f(\nablabla_{f_s^\top}f_t),f_t\big) \on{vol}(g) \\&\qquad= ^{-1}nt_M \big(\bar{g}(\nablabla_{f_s^\top}(P_ff_t),f_t)-\bar{g}(\nablabla_{f_s^\top}f_t,P_ff_t)\big) \on{vol}(g) \\&\qquad= ^{-1}nt_M \bar{g}\Big(Tf .f_s^\top,Tf.\big(\bar{g}(\nablabla (P_ff_t),f_t)-\bar{g}(\nablabla f_t,P_ff_t)\big)^\sharp\Big) \on{vol}(g)\\&\qquad= ^{-1}nt_M \bar{g}\Big(Tf .f_s^\top,Tf.\big(\nablabla \bar{g}( P_f f_t,f_t)-2\bar{g}(\nablabla f_t,P_f f_t)\big)^\sharp\Big) \on{vol}(g)\\&\qquad= ^{-1}nt_M \bar{g}\Big(f_s,Tf.\big(\nablabla \bar{g}( P_f f_t,f_t)-2\bar{g}(\nablabla f_t,P_f f_t)\big)^\sharp\Big) \on{vol}(g). \end{align*} Thus we obtain the following formula for the first summand of the variation of $E$: \betagin{multline*} \frac12^{-1}nt_0^1^{-1}nt_M \bar{g}( (\nablabla_{\partial_s} P_f)f_t,f_t) \on{vol}^gdt\\= \frac12^{-1}nt_0^1^{-1}nt_M \bar{g}\Big( f_s,\on{Adj}(\nablabla P)(f_t,f_t)^{\bot}+Tf.\big(\nablabla\bar{g}( Pf_t,f_t)-2\bar{g}(\nablabla f_t,Pf_t)\big)^\sharp\Big) \on{vol}^gdt. \end{multline*} ^{-1}tem \lambdabel{thm:geodesic:b} As $P_f$ is symmetric and the covariant derivative on $\on{Imm}(M,N)$ is torsion-free (see \autoref{sec:connection}), i.e., \betagin{equation*} \nablabla_{\partial_t}f_s - \nablabla_{\partial_s}f_t=Tf.[\partial_t,\partial_s]+\on{Tor}(f_t,f_s) = 0, \end{equation*} we get for the second summand \betagin{equation*} ^{-1}nt_0^1^{-1}nt_M \bar{g}\left( P_f\nablabla_{\partial_s} f_t, f_t \right) \on{vol}^gdt= ^{-1}nt_0^1^{-1}nt_M \bar{g}\left( \nablabla_{\partial_t} f_s, P_f f_t \right) \on{vol}^gdt.
\end{equation*} Integration by parts for $\partial_t$ yields \betagin{multline*} ^{-1}nt_0^1^{-1}nt_M \bar{g}\left( \nablabla_{\partial_t} f_s, P_f f_t \right) \on{vol}^gdt\\= ^{-1}nt_0^1^{-1}nt_M \bigg( \bar{g}\left( f_s, - (\nablabla_{f_t}P)f_t-P_f(\nablabla_{f_t}f_t) \right)- \frac{{\partial_t} \on{vol}^g}{\on{vol}^g}\, \bar{g}\left( f_s, P_f f_t \right) \bigg)\on{vol}^gdt. \end{multline*} To further expand the last term we use the following formula for the variation of the volume form \cite[Lemma~5.7]{Bauer2011b}: \betagin{equation*} \frac{\partial_t \on{vol}^g}{\on{vol}^g} = \on{Tr}^g\big(\bar{g}(\nablabla f_t,Tf)\big) = -\nablabla^*( \bar{g}(f_t,Tf))-\bar{g}\big(f_t,\on{Tr}^g(\nablabla Tf)\big), \end{equation*} where $\nablabla Tf$ is the second fundamental form and where $\nablabla^*$ denotes the adjoint of the covariant derivative. Using the first of the above formulas we obtain for the second summand: \betagin{multline*} ^{-1}nt_0^1^{-1}nt_M \bar{g}\left( \nablabla_{\partial_t} f_s, P_f f_t \right) \on{vol}^gdt\\= ^{-1}nt_0^1^{-1}nt_M \bar{g}\Big( f_s, - (\nablabla_{f_t}P)f_t-P_f(\nablabla_{f_t}f_t) -\on{Tr}^g\big(\bar{g}(\nablabla f_t,Tf)\big) P_f f_t \Big)\on{vol}^gdt. \end{multline*} ^{-1}tem \lambdabel{thm:geodesic:c} Using the second version of the variational formula for the volume in the third summand in the variation of the energy yields \betagin{align*} &\frac12^{-1}nt_0^1^{-1}nt_M \frac{\partialartial_s \on{vol}^g}{\on{vol}^g} \bar{g}\left(P_f f_t, f_t \right) \on{vol}^gdt\\ &\qquad =- \frac12^{-1}nt_0^1^{-1}nt_M \Big(\nablabla^*( \bar{g}(f_s,Tf))+\bar{g}\big(f_s,\on{Tr}^g(\nablabla Tf)\big) \Big) \bar{g}\left(P_f f_t, f_t \right) \on{vol}^gdt\\ &\qquad=-\frac12^{-1}nt_0^1^{-1}nt_M \bar{g}\bigg(f_s, Tf.(\nablabla \bar{g}\left(P_f f_t, f_t \right))^{\sharp}+\on{Tr}^g(\nablabla Tf) \bar{g}\left(P_f f_t, f_t \right)\bigg) \on{vol}^gdt. \end{align*} \end{enumerate} Taken together, the calculations of \ref{thm:geodesic:a}--\ref{thm:geodesic:c} yield \betagin{multline*} \partial_s E(f) = \frac12^{-1}nt_0^1^{-1}nt_M \bar{g}\bigg(f_s,\on{Adj}(\nablabla P)(f_t,f_t)^{\bot}-2Tf.\bar{g}(\nablabla f_t,Pf_t)^\sharp-2(\nablabla_{f_t}P)f_t \\-2P_f(\nablabla_{f_t}f_t)-2\on{Tr}^g\big(\bar{g}(\nablabla f_t,Tf)\big) P_f f_t -\on{Tr}^g(\nablabla Tf) \bar{g}\left(P_f f_t, f_t \right) \bigg) \on{vol}^gdt. \end{multline*} Setting $\partialartial_s E(f)=0$ for arbitrary perturbations $f_s$ yields the geodesic equation on the space $\on{Imm}(M,N)$ of smooth immersions. This statement extends to the space $\on{Imm}^r(M,N)$ of Sobolev immersions because the right-hand side of the geodesic equation is continuous in $f ^{-1}n C^^{-1}nfty([0,1],\on{Imm}^r(M,N))$, as shown in part \ref{thm:wellposed:a} of the proof of \autoref{thm:wellposed}. \qedhere \end{proof} We next show well-posedness of the geodesic equation using the Ebin--Marsden approach \cite{EM1970} of extending the geodesic spray to a smooth vector field on $T\on{Imm}^r$ for sufficiently high $r$ and showing that the solutions exist on a time interval which is independent of $r$. \betagin{theorem}[Local well-posedness of the geodesic equation] \lambdabel{thm:wellposed} Assume the conditions of \autoref{sec:conditions} with $p\bar{g}eq 1$. Then the following statements hold for all $r ^{-1}n [r_0,^{-1}nfty)$: \betagin{enumerate} ^{-1}tem \lambdabel{thm:wellposed:a} The initial value problem for geodesics has unique local solutions in $\on{Imm}^r(M,N)$. The solutions depend $C^\alpha$ on $t$ and on the initial condition $f_t(0)^{-1}n T\on{Imm}^r(M,N)$.
^{-1}tem \lambdabel{thm:wellposed:b} The Riemannian exponential map $\exp^{P}$ exists and is $C^\alpha$ on a neighborhood of the zero section in $T\on{Imm}_{H^r}$, and $(\partiali,\exp^{P})$ is a diffeomorphism from a (smaller) neighborhood of the zero section to a neighborhood of the diagonal in $\on{Imm}^r(M,N)\times \on{Imm}^r(M,N)$. ^{-1}tem \lambdabel{thm:wellposed:c} The neighborhoods in \ref{thm:wellposed:a}--\ref{thm:wellposed:b} are uniform in $r$ and can be chosen open in the $H^{r_0}$ topology. Thus, \ref{thm:wellposed:a}--\ref{thm:wellposed:b} continue to hold for $r=^{-1}nfty$, i.e., on the Fr\'echet manifold $\on{Imm}(M,N)$ of smooth immersions. \end{enumerate} \end{theorem} \betagin{proof} \betagin{enumerate}[wide] ^{-1}tem This can be shown as in \cite[Theorem~6.6]{Bauer2011b}. Let $\Phi(f_t)$ denote the right-hand side of the geodesic equation, i.e., \betagin{multline*} \Phi(f_t) = \frac12P^{-1}\Big(\on{Adj}(\nablabla P)(f_t,f_t)^\bot-2Tf\,\bar{g}(Pf_t,\nablabla f_t)^\sharp -\bar{g}(Pf_t,f_t)\,\on{Tr}^g(\nablabla Tf)\Big) \\ -P^{-1}\Big((\nablabla_{f_t} P)f_t+\on{Tr}^g\big(\bar{g}(\nablabla f_t,Tf)\big) Pf_t\Big). \end{multline*} A term-by-term investigation using the conditions \ref{sec:conditions} and the module properties~\ref{thm:module} shows that $\Phii$ is a fiber-wise quadratic $C^\alpha$ map \betagin{align*} \Phii\colon T\on{Imm}^r(M,N)\to T\on{Imm}^r(M,N). \end{align*} Here the condition $p\bar{g}eq 1$ is needed to ensure that the term $P^{-1}\big(\bar{g}(Ph,h)\,\on{Tr}^g(\nablabla Tf)\big)$ is again of regularity $H^r$. The map $\Phii$ corresponds uniquely to a $C^\alpha$ spray $S$ via the induced connection described in \autoref{lem:connection}. In more detail: The right-hand side diagram in the proof of \autoref{sec:connection}.\ref{sec:connection:d} holds for any manifold $N$ with connector $K$. Thus, replacing $(N,K)$ by $(\on{Imm}^r(M,N),K_*)$, one obtains the diagram $$ \timesymatrix@R=2mm{ &TT\on{Imm}^r (M,N) \ar[dl]_{T(\partiali_N)_*} \ar[dr]^{K_*} \\ T\on{Imm}^r(M,N)& & T\on{Imm}^r(M,N) \\ &T\on{Imm}^r(M,N) \ar@{=}[ul] \ar[ur]_{\Phi} \ar@{-->}[uu]^{S}& } $$ The spray $S$ is $C^\alpha$ because the connection $K_*$ and the map $\Phi$ are $C^\alpha$. Therefore, by the theorem of Picard-Lindel\"of, $S$ admits a $C^\alpha$ flow \betagin{equation*} \on{Fl}^S\colon U \to T\on{Imm}^{r}(M,N) \end{equation*} for a maximal open neighborhood $U$ of $\{0\}\times T\on{Imm}^r(M,N)$ in $\mathbb R\times T\on{Imm}^r(M,N)$. The neighborhood $U$ is $\on{Diff}(M)$-invariant thanks to the $\on{Diff}(M)$-equivariance of $S$. ^{-1}tem follows from \ref{thm:wellposed:a} as in \cite[Theorem~6.6]{Bauer2011b}, and \ref{thm:wellposed:c} follows from \autoref{lem:nolossnogain} by writing $\on{Imm}(M,N)$ as the intersection of all $\on{Imm}^{r_0+k}(M,N)$ with $k ^{-1}n \mathbb N_{\bar{g}eq 0}$. \qedhere \end{enumerate} \end{proof} \betagin{corollary} \autoref{thm:wellposed} with $\alpha=\omega$ remains valid if the assumptions in \autoref{sec:conditions} are modified as follows: the metric $\bar{g}$ is only $C^^{-1}nfty$, and the connection $\nablabla$ in condition~\ref{sec:conditions:d} is replaced by an auxiliary connection $\hat\nablabla$, which is induced by a torsion-free $C^\omega$ connection on $N$, as described in \autoref{lem:connection}. \end{corollary} \betagin{proof} In the proof of \autoref{thm:geodesic}, the geodesic equation is derived by expressing the first variation $\partial_s E$ of the energy functional using the Levi-Civita connection of $\bar{g}$. 
If the auxiliary connection $\hat\nablabla$ is used instead, then the following additional terms appear in the formula for $\partial_s E$: \betagin{multline*} ^{-1}nt_0^1^{-1}nt_M \bigg( -\frac12 (\hat\nablabla_{Tf.f_s^\top}\bar{g})(P_ff_t,f_t) -(\hat\nablabla_{f_t}\bar{g})(f_s,P_ff_t) +\frac12(\hat\nablabla_{f_s}\bar{g})(P_ff_t,f_t)\bigg)\on{vol}^gdt \end{multline*} Accordingly, letting $\Psii$ denote the right-hand side of the original geodesic equation with $\nablabla P$ replaced by $\hat\nablabla P$, i.e., \betagin{multline*} \Psii(f_t) = \frac12P^{-1}\Big(\on{Adj}(\hat\nablabla P)(f_t,f_t)^\bot-2Tf\,\bar{g}(Pf_t,\nablabla f_t)^\sharp -\bar{g}(Pf_t,f_t)\,\on{Tr}^g(\nablabla Tf)\Big) \\ -P^{-1}\Big((\hat\nablabla_{f_t} P)f_t+\on{Tr}^g\big(\bar{g}(\nablabla f_t,Tf)\big) Pf_t\Big), \end{multline*} the geodesic equation becomes \betagin{multline*} \hat\nablabla_{\partial_t}f_t = \Psii(f_t) -\frac12 Tf.\big(\bar{g}^{-1}(\hat\nablabla\bar{g})(P_ff_t,f_t)\big)^\top -\bar{g}^{-1}(\hat\nablabla_{f_t}\bar{g})(\cdot,P_ff_t) \\ +\frac12 \bar{g}^{-1}(\hat\nablabla\bar{g})(P_ff_t,f_t). \end{multline*} One verifies as in the proof of \autoref{thm:wellposed} that the right-hand side, seen as a function of $f_t$, is a fiber-wise quadratic real analytic map $T\on{Imm}^r(M,N)\to T\on{Imm}^r(M,N)$. As the auxiliary connection $\hat\nablabla$ is real analytic, this implies that the corresponding spray is real analytic, as well; see \autoref{sec:connection}. Since the spray is independent of the auxiliary connection $\hat\nablabla$, one may proceed as in the proof of \autoref{thm:wellposed}. \end{proof} The following theorem shows that (scale-invariant) fractional-order Sobolev metrics satisfy the conditions in \autoref{sec:conditions}. This implies local well-posedness of their geodesic equations by \autoref{thm:wellposed}. Further metrics considered in the literature include curvature weighted metrics and the so-called general elastic metric \cite{jermyn2017}, which can also be formulated in the present framework \cite{bauer2012sobolev}. The proof takes advantage of the fact that the adjoint in the geodesic equation \ref{thm:geodesic} has been split into normal and tangential parts. The normal part has the correct Sobolev regularity thanks to \autoref{lem:variation}. The tangential part incurs a loss of derivatives, but the bad terms cancel out with some other terms in the geodesic equation as shown in part \ref{thm:geodesic:a} of the proof of \autoref{thm:geodesic}. \betagin{theorem} \lambdabel{thm:satisfies} The following operators satisfy the conditions in \autoref{sec:conditions} with $\alpha=\omega$ for any $p ^{-1}n [1,^{-1}nfty)$ and $r_0 ^{-1}n (\operatorname{dim}(M)/2+2,^{-1}nfty)\cap[p+1,^{-1}nfty)$: \betagin{align*} P_f:=\big(1+\Delta^{f^*\bar{g}}\big)^p, \qquad \text{and} \qquad P_f:=\Big(\on{Vol}^{-1-\tfrac{2}{\on{dim}M}}+\on{Vol}^{-1}\Delta^{f^*\bar{g}}\Big)^p. \end{align*} Thus, the geodesic equations of these metrics are well posed in the sense of \autoref{thm:wellposed}. \end{theorem} \betagin{proof} We will prove this result only for the first field of operators because the proof for the second one is analogous. We shall check conditions \ref{sec:conditions:a}--\ref{sec:conditions:d} of \autoref{sec:conditions}. \betagin{enumerate}[wide] ^{-1}tem follows from \autoref{lem:dependence}. 
^{-1}tem $\on{Diff}(M)$-equivariance of $(1+\Delta^{f^*\bar{g}})$ is well-known for smooth $f$ and follows in the general case by approximation, noting that the pull-back along a smooth diffeomorphism is a bounded linear map between Sobolev spaces of the same order of regularity \cite[Theorem~B.2]{inci2013regularity}. As the functional calculus preserves commutation relations, this implies the $\on{Diff}(M)$-equivariance of $(1+\Delta^g)^p$. ^{-1}tem is well-known for smooth $f,h,k$ and follows in the general case by approximation using the continuity of $f\mapsto\lambdangle\cdot,\cdot\rangle_{H^0(f^*\bar{g})}$ established in \cite[Lemma 3.3]{bauer2018smooth} and the continuity of $f\mapsto P_f$. ^{-1}tem Recall from \autoref{lem:variation} that the derivative of $P_f$ in normal directions extends to a real analytic map \betagin{multline*} H^{2p-r}_{\on{Imm}^r}(M,TN) \ni m \mapsto \big(h\mapsto\hat\nablabla_{m^\bot}P_fh\big) ^{-1}n L(H^r_{\on{Imm}^r}(M,TN),H^{1-r}_{\on{Imm}^r}(M,TN)). \end{multline*} Equivalently, the following map is real analytic: \betagin{multline*} H^{r}_{\on{Imm}^r}(M,TN) = T\on{Imm}^r(M,N) \ni h \mapsto \big(m\mapsto\hat\nablabla_{m^\bot} P_f h\big) \\ ^{-1}n L(H^{2p-r}_{\on{Imm}^r}(M,TN),H^{1-r}_{\on{Imm}^r}(M,N)). \end{multline*} Dualization using the $H^0(g)$ duality shows that the adjoint is real analytic \betagin{equation*} T\on{Imm}^r(M,N) \ni h \mapsto \on{Adj}(\hat\nablabla P)(h,\cdot)^\bot ^{-1}n L(H^{r-1}_{\on{Imm}^r}(M,TN),H^{r-2p}_{\on{Imm}^r}(M,TN)). \end{equation*} In particular, the adjoint is real analytic \betagin{equation*} T\on{Imm}^r(M,N) \ni h \mapsto \on{Adj}(\hat\nablabla P)(h,\cdot)^\bot ^{-1}n L(H^r_{\on{Imm}^r}(M,TN),H^{r-2p}_{\on{Imm}^r}(M,TN)). \qedhere \end{equation*} \end{enumerate} \end{proof} \betagin{remark} For Sobolev metrics of integer order $p ^{-1}n \mathbb N_{>0}$, condition~\ref{sec:conditions:d} of \autoref{sec:conditions} can be verified directly by a term-by-term investigation of the following explicit formula for the normal part of the adjoint \cite[Section~8.2]{Bauer2011b}, assuming that $\hat\nablabla=\nablabla$ is the Levi-Civita connection of $\bar{g}$: \betagin{align*} \on{Adj}(\nablabla P)(h,k)^\bot&= 2\sum_{i=0}^{p-1}\on{Tr}\big(g^{-1} \nabla Tf g^{-1} \bar{g}(\nablabla(1+\Deltalta)^{p-i-1}h,\nablabla(1+\Deltalta)^{i}k ) \big) \\&\qquad +\sum_{i=0}^{p-1} \big(\nablabla^*\bar{g}(\nablabla(1+\Deltalta)^{p-i-1}h,(1+\Deltalta)^{i}k) \big) \on{Tr}^g(\nabla Tf)\\ &\qquad+\sum_{i=0}^{p-1}\on{Tr}^g\big(R^{\bar{g}}((1+\Deltalta)^{p-i-1}h,\nablabla(1+\Deltalta)^{i}k)Tf \big)\\ &\qquad-\sum_{i=0}^{p-1}\on{Tr}^g\big(R^{\bar{g}}(\nablabla(1+\Deltalta)^{p-i-1}h,(1+\Deltalta)^{i}k)Tf \big). \end{align*} Here $g=f^*\bar{g}$, $\Deltalta=\Deltalta^{g}$, $\nablabla=\nablabla^g$, and $R^{\bar{g}}$ denotes the curvature on $(N,\bar{g})$. This direct calculation is consistent with the more general argument of \autoref{thm:satisfies}. \end{remark} \section{Special cases} \lambdabel{sec:special} This section describes several applications of the general well-posedness result, \autoref{thm:wellposed}. First, we consider the geodesic equation of right-invariant Sobolev metrics on the diffeomorphism group $\on{Diff}(M)$. In Eulerian coordinates, this equation is called Euler--Arnold \cite{Ar1966} or EPDiff \cite{HoMa2005} equation and reads as \betagin{equation*} m_t +\nablabla_u m+\bar{g}(\nablabla u,m)+(\on{div} u) m=0, \qquad m:= P_{\on{Id}} u, \qquad u:=\varphi_t\circ\varphi^{-1}. 
\end{equation*} In Lagrangian coordinates, the equation takes the form shown in the following corollary. The conditions for local well-posedness in this corollary agree with the ones in \cite{BBCEK2019}, where metrics governed by a general class of pseudo-differential operators are investigated. The proof is an application of \autoref{thm:wellposed} to $\on{Diff}(M)$, seen as an open subset of $\on{Imm}(M,M)$. Moreover, the proof extends \autoref{thm:wellposed} to lower Sobolev regularity using some cancellations which are due to the vanishing normal bundle. The notation is as in \autoref{thm:wellposed}. \betagin{corollary}[Diffeomorphisms] \lambdabel{cor:diffeos} A smooth curve $\varphi \colon [0,1]\to \on{Diff}(M)$ is a critical point of the energy functional \betagin{equation*} E(\varphi) = \frac12 ^{-1}nt_0^1 ^{-1}nt_M \bar{g}(P_\varphi \varphi_t, \varphi_t)\on{vol}^g dt \end{equation*} if and only if it satisfies the geodesic equation \betagin{align*} \nablabla_{\partial_t} \varphi_t= &P_\varphi^{-1}\Big(-T\varphi\,\bar{g}(P_\varphi \varphi_t,\nablabla \varphi_t)^\sharp - (\nablabla_{\varphi_t}P)\varphi_t -\on{Tr}^g\big(\bar{g}(\nablabla \varphi_t,T\varphi)\big) P_\varphi \varphi_t\Big). \end{align*} The geodesic equation is well-posed in the sense of~\autoref{thm:wellposed} if $P$ satisfies conditions \ref{sec:conditions:a}--\ref{sec:conditions:c} of \autoref{sec:conditions} for some $p ^{-1}n [1/2,^{-1}nfty)$ and all $r^{-1}n [r_0,^{-1}nfty)$ with $r_0 ^{-1}n (\on{dim}(M)/2+1,^{-1}nfty)$. In particular, this is the case if $P=(1+\Deltalta)^p$ with \betagin{itemize} ^{-1}tem $p ^{-1}n [1,^{-1}nfty)$ and $r ^{-1}n (\operatorname{dim}(M)/2+1,^{-1}nfty)\cap[p+1,^{-1}nfty)$; or ^{-1}tem $p ^{-1}n [1/2,1)$ and $r ^{-1}n (\operatorname{dim}(M)/2+1,^{-1}nfty)\cap[p+3/2,^{-1}nfty)$. \end{itemize} \end{corollary} \betagin{proof} The formula for the geodesic equation follows from \autoref{thm:geodesic} because the terms $\on{Adj}(\nablabla P)^\bot$ and $\nablabla Tf=(\nablabla Tf)^\bot$ vanish. To show well-posedness of the geodesic equation, note that condition~\ref{sec:conditions:d} of \autoref{sec:conditions} is trivially satisfied because $\on{Adj}(\nablabla P)^\bot$ vanishes. Moreover, note that the condition $p ^{-1}n [1,^{-1}nfty)$ in \autoref{thm:wellposed} can be replaced by the weaker condition $p ^{-1}n [1/2,^{-1}nfty)$ because the term $\nablabla Tf$, which is of second order in $f$, vanishes. This can be seen by a term-by-term investigation of the right-hand side of the geodesic equation as in the proof of \autoref{thm:wellposed}. Therefore, the geodesic equation is well-posed for any operator field $P$ satisfying conditions \ref{sec:conditions:a}--\ref{sec:conditions:c} of \autoref{sec:conditions} for some $p ^{-1}n [1/2,^{-1}nfty)$ and all $r^{-1}n [r_0,^{-1}nfty)$ with $r_0 ^{-1}n (\on{dim}(M)/2+1,^{-1}nfty)$, as claimed. It remains to verify these conditions for the specific operator $P=(1+\Deltalta)^p$. Condition~\ref{sec:conditions:a} for $p\bar{g}eq 1$ follows from \autoref{lem:dependence}, and condition~\ref{sec:conditions:a} for $p^{-1}n[1/2,1)$ is verified as follows. 
We split the operator $P_\varphi$ in two components, $$P_\varphi=(1+\Delta^{\varphi^*\bar{g}})^{-1}(1+\Delta^{\varphi^*\bar{g}})^{1+p}.$$ As $1+p\bar{g}eq 1$, \autoref{lem:dependence} shows that the operator $(1+\Delta^{\varphi^*\bar{g}})^{1+p}$ is a real analytic section of the bundle $$GL(H^r_{\on{Diff}^r}(M,TM),H^{r-2p-2}_{\on{Diff}^r}(M,TM)) \to \on{Diff}^r(M)$$ for any $r$ such that $r-2p-2\bar{g}eq 1-r$, i.e., $r\bar{g}eq p+3/2$. Similarly, under even weaker conditions, the operator $(1+\Delta^{\varphi^*\bar{g}})^{-1}$ is a real analytic section of the bundle $$GL(H^{r-2p-2}_{\on{Diff}^r}(M,TM),H^{r-2p}_{\on{Diff}^r}(M,TM)) \to \on{Diff}^r(M).$$ By the chain rule, the operator $P_\varphi$ is real analytic as required in condition~\ref{sec:conditions:a}. Conditions~\ref{sec:conditions:b} and \ref{sec:conditions:c} can be verified as in the proof of~\autoref{thm:satisfies}. \end{proof} Next we consider reparametrization-invariant Sobolev metrics on spaces of immersed curves, i.e., we consider the special case $M=S^1$. Our interest in these spaces stems from their fundamental role in the field of mathematical shape analysis; see e.g.~\cite{bauer2014overview,younes1998computable,klassen2004analysis,sundaramoorthi2011new,bauer2017numerical} for $\mathbb R^n$-valued curves and \cite{le2017computing,su2014statistical,celledoni2016shape,su2018comparing} for manifold-valued curves. For curves in $\mathbb R^n$ local well-posedness of the geodesic equation for integer-order metrics has been shown in \cite{michor2007overview}. This has recently been extended to fractional-order metrics in \cite{bauer2018fractional}. The following corollary of our main result further generalizes this to fractional-order metrics on spaces of manifold-valued curves: \betagin{corollary}[Curves] \lambdabel{cor:curves} A smooth curve $c \colon [0,1]\to \on{Imm}(S^1,N)$ is a critical point of the energy functional \betagin{equation*} E(c) = \frac12 ^{-1}nt_0^1 ^{-1}nt_M \bar{g}(P_c c_t, c_t)|\partialartial_{\thetaeta} c| d\thetaeta dt \end{equation*} if and only if it satisfies the geodesic equation \betagin{align*} \nablabla_{\partial_t} c_t= &\frac12P_c^{-1}\Big(\on{Adj}(\nablabla P)_c(c_t,c_t)^\bot-2\,\bar{g}(P_c c_t,\nablabla_{\partialartial_s} c_t) v_c -\bar{g}(P_c c_t,c_t)\,H_c \Big) \\& -P_c^{-1}\Big((\nablabla_{c_t}P)c_t+\big(\bar{g}(\nablabla_{\partialartial_s} c_{t},v_c)\big) P_c c_t\Big)\;, \end{align*} where $\partialartial_s=|c_{\thetaeta}|^{-1}\partialartial_{\thetaeta}$ denotes the normalization of the coordinate vector field $\partialartial_{\thetaeta}$, $v_c={\partialartial_s}c$ the unit-length tangent vector, and $H_c=(\nablabla_{\partialartial_s} v_c)^{\bot}$ the vector-valued curvature of $c$. If the operator $P$ satisfies the conditions of \autoref{sec:conditions} for some $p ^{-1}n [1,^{-1}nfty)$ and all $r^{-1}n [r_0,^{-1}nfty)$ with $r_0 ^{-1}n (\on{dim}(M)/2+1,^{-1}nfty)$, then the geodesic equation is well-posed in the sense of~\autoref{thm:wellposed}. This is in particular the case for the operator $P=(1-\nablabla_{\partialartial_s}\nablabla_{\partialartial_s})^p$ if $p ^{-1}n [1,^{-1}nfty)$ and $r ^{-1}n (\operatorname{dim}(M)/2+1,^{-1}nfty)\cap[p+1,^{-1}nfty)$. \end{corollary} \betagin{proof} This follows directly from~\autoref{thm:geodesic}, \autoref{thm:satisfies} and \autoref{thm:wellposed}. \end{proof} The last special case to be discussed in this section is $N=\mathbb R^n$, which includes in particular the space of surfaces in $\mathbb R^3$. 
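Purely as an illustration of the preceding formulas, the following sketch shows one possible (and deliberately naive) discretization of the metric $G^P_c$ of \autoref{cor:curves} for plane curves: the arclength Laplacian is approximated by periodic finite differences, its fractional power is computed by an eigendecomposition after symmetrizing with respect to the induced quadrature weights, and the metric is evaluated as a weighted sum. The grid, the symmetrization, and all parameter values are ad hoc choices made only for this sketch; they are neither part of the results above nor the discretizations used in the cited references.
\betagin{verbatim}
# Illustrative sketch only: a naive finite-difference discretization of the
# metric G^P_c(h, k) = \int <P_c h, k> ds with P_c = (1 - d_s^2)^p on closed
# plane curves.  The grid, the symmetrization, and the parameters are ad hoc
# choices for this sketch, not the discretization used in the references.
import numpy as np

def fractional_operator(c, p):
    """Matrix of (1 - d_s^2)^p acting on samples along the closed curve c
    (an (n, 2) array), together with the arclength quadrature weights w."""
    n = len(c)
    e = np.linalg.norm(np.roll(c, -1, axis=0) - c, axis=1)  # edge lengths |c_{i+1}-c_i|
    w = 0.5 * (e + np.roll(e, 1))                           # dual cell lengths ~ ds at node i
    L = np.zeros((n, n))                                    # discrete d_s^2 (periodic)
    for i in range(n):
        ip, im = (i + 1) % n, (i - 1) % n
        L[i, ip] += 1.0 / (e[i] * w[i])
        L[i, im] += 1.0 / (e[im] * w[i])
        L[i, i] -= 1.0 / (e[i] * w[i]) + 1.0 / (e[im] * w[i])
    A = np.eye(n) - L                                       # 1 - d_s^2, positive definite
    sw = np.sqrt(w)
    B = (A * sw[:, None]) / sw[None, :]                     # W^{1/2} A W^{-1/2} is symmetric
    lam, Q = np.linalg.eigh(0.5 * (B + B.T))                # symmetric eigendecomposition
    Ap = ((Q * lam**p) @ Q.T / sw[:, None]) * sw[None, :]   # A^p = W^{-1/2} B^p W^{1/2}
    return Ap, w

def metric(c, h, k, p=0.5):
    """Discretized G^P_c(h, k) for tangent vectors h, k sampled as (n, 2) arrays."""
    Ap, w = fractional_operator(c, p)
    Ph = Ap @ h                                             # P acts componentwise on R^2 values
    return float(np.sum(w * np.sum(Ph * k, axis=1)))

if __name__ == "__main__":
    th = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
    circle = np.stack([np.cos(th), np.sin(th)], axis=1)
    h = np.stack([np.zeros_like(th), np.sin(3.0 * th)], axis=1)
    print(metric(circle, h, h, p=0.5))                      # squared norm of the variation h
\end{verbatim}
Conjugating with the square root of the quadrature weights keeps the discretized operator symmetric and positive definite, so that its real fractional powers are unambiguously defined; any other self-adjoint discretization of $1-\partialartial_s^2$ would serve the same illustrative purpose.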
In the article~\cite{Bauer2011b} we proved a local well-posedness result for integer-order metrics. The proof given there had a gap, which has been corrected in the article~\cite{Mueller2017}. The following corollary of our main result extends this to fractional order metrics: \betagin{corollary}[Flat ambient space] \lambdabel{cor:flat} A smooth curve $f \colon [0,1]\to \on{Imm}(M,\mathbb R^n)$ is a critical point of the energy functional \betagin{equation*} E(f) = \frac12 ^{-1}nt_0^1 ^{-1}nt_M \lambdangle P_f f_t, f_t\rangle \on{vol}^g dt \end{equation*} if and only if it satisfies the geodesic equation \betagin{align*} f_{tt}= &\frac12P_f^{-1}\Big(\on{Adj}(dP)_f(f_t,f_t)^\bot-2df\,\lambdangle P_f f_t, df_t\rangle^\sharp -\lambdangle P_f f_t,f_t\rangle\,H_f\Big) \\& -P_f^{-1}\Big((\nablabla_{f_t}P)f_t+\on{Tr}^g\big(\lambdangle d f_t,df\rangle\big) P_ff_t\Big), \end{align*} where $\lambdangle\cdot,\cdot\rangle$ denotes the Euclidean scalar product on $\mathbb R^n$, $g=f^*\lambdangle\cdot,\cdot\rangle$ the induced pullback metric on $M$, and $H_f=\on{Tr}^g(d^2f)^{\bot}$ the vector-valued mean curvature of $f$. If the operator $P$ satisfies the conditions of \autoref{sec:conditions} for some $p ^{-1}n [1,^{-1}nfty)$ and all $r^{-1}n [r_0,^{-1}nfty)$ with $r_0 ^{-1}n (\on{dim}(M)/2+1,^{-1}nfty)$, then the geodesic equation is well-posed in the sense of~\autoref{thm:wellposed}. This is in particular the case for the operator $P=(1+\Deltalta)^p$ with $p ^{-1}n [1,^{-1}nfty)$ and $r ^{-1}n (\operatorname{dim}(M)/2+1,^{-1}nfty)\cap[p+1,^{-1}nfty)$. \end{corollary} \betagin{proof} This follows from Theorems~\ref{thm:geodesic}, \ref{thm:wellposed}, and \ref{thm:satisfies} with $N=\mathbb R^n$, noting that the covariant derivative on $\mathbb R^n$ and the induced covariant derivative on $\on{Imm}^r(M,\mathbb R^n)$ coincide with ordinary derivatives. \end{proof} \appendix \section{The push-forward operator on Sobolev spaces} \betagin{theorem}[Smooth curves in convenient vector spaces] {\rm \cite[~4.1.19]{FK88}} \lambdabel{thm:FK} Let $c\colon\mathbb R\to E$ be a curve in a convenient vector space $E$. Let $\mathcal{V}\subset E'$ be a subset of bounded linear functionals such that the bornology of $E$ has a basis of $\sigmagma(E,\mathcal{V})$-closed sets. Then the following are equivalent: \betagin{enumerate} ^{-1}tem $c$ is smooth ^{-1}tem For each $k^{-1}n\mathbb N$ there exists a locally bounded curve $c^{k}\colon\mathbb R\to E$ such that for each $\ell^{-1}n\mathcal V$ the function $\ell\circ c$ is smooth $\mathbb R\to \mathbb R$ with $(\ell\circ c)^{(k)}=\ell\circ c^{k}$. \end{enumerate} If $E$ is reflexive, then for any point separating subset $\mathcal{V}\subset E'$ the bornology of $E$ has a basis of $\sigma(E,\mathcal{V})$-closed subsets, by {\rm \cite[~4.1.23]{FK88}}. \end{theorem} This theorem is surprisingly strong: Note that $\mathcal V$ does not need to recognize bounded sets. We shall use the theorem in situations where $\mathcal V$ is just the set of all point evaluations on suitable Sobolev spaces. \betagin{lemma}[Smooth curves in Sobolev spaces of sections] \lambdabel{lem:curvesSobolev} Let $E$ be a vector bundle over $M$, and let $\nablabla$ be a connection on $E$. 
Then it holds for each $r^{-1}n(\on{dim}(M)/2,^{-1}nfty)$ that the space $C^^{-1}nfty(\mathbb R,\Gamma_{H^r}(E))$ of smooth curves in $\Gamma_{H^r}(E)$ consists of all continuous mappings $c\colon\mathbb R\times M \to E$ with $p\circ c = \on{pr}_2\colon\mathbb R\times M\to M$ such that: \betagin{itemize} ^{-1}tem For each $x^{-1}n M$ the curve $t\mapsto c(t,x)^{-1}n E_x$ is smooth; let $(\partial^p_t c)(t,x) = \partial_t^p(c(t,x))$, and ^{-1}tem For each $p^{-1}n \mathbb N_{\bar{g}e0}$, the curve $\partial_t^pc$ has values in $\Gamma_{H^r}(E)$ so that $\partial_t^pc \colon\mathbb R\to \Gamma_{H^r}(E)$, and $t \mapsto \|\partial_t c(t,\quad)\|_{H^r}$ is bounded, locally in $t$. \end{itemize} \end{lemma} \betagin{proof} To see this we first choose a second vector bundle $F\to M$ such that $E\oplus_M F$ is a trivial bundle, i.e., isomorphic to $M\times \mathbb R^n$ for some $n^{-1}n\mathbb N$. Then $\Gamma_{H^r}(E)$ is a direct summand in $H^r(M,\mathbb R^n)$, so that we may assume without loss that $E$ is a trivial bundle, and then, that it is 1-dimensional. So we have to identify $C^^{-1}nfty(\mathbb R,H^r(M,\mathbb R))$. But in this situation we can just apply Theorem \ref{thm:FK} for the set $\mathcal V\subset H^s(M,\mathbb R)'$ consisting just of all point evaluations $\on{ev}_x\colon H^r(M,\mathbb R)\to \mathbb R$. \end{proof} \betagin{lemma}[Function spaces of mixed smoothness] \lambdabel{lem:mixed} Let $U$ be an open subset of a finite-dimensional vector space, let $r ^{-1}n (\on{dim}(M)/2,^{-1}nfty)$, let $\alpha^{-1}n\{^{-1}nfty,\omega\}$, and let $C^\alpha(U)=\varprojlim_p E_p$ be the representation of the complete locally convex space $C^\alpha(U)$ as a projective limit of Banach spaces $E_p$. Then \betagin{equation*} H^rC^\alpha(M\times U) := C^\alpha(U,H^r(M)) = H^r(M)\hat\otimes C^\alpha(U) = H^r(M,C^\alpha(U)), \end{equation*} where $\hat\otimes$ is the injective, projective, or bornological tensor product, or any tensor product in-between, and where $H^r(M,C^\alpha(U))$ is defined as the projective limit $\varprojlim_p H^r(M,E_p)$. \end{lemma} The lemma justifies the following notation, which shall be used in \autoref{lem:pushforward} below. If $E_1$ and $E_2$ are vector bundles over $M$, and $U \subseteq E_1$ is an open neighborhood of the image of an $H^r$ section, then we write $\Gammamma_{H^r}(C^\alpha(U,E_2))$ for the set of all fiber-preserving functions $F\colon U \to E_2$ which have regularity $H^rC^\alpha$ in every $C^\alpha$ vector bundle chart of $E_1$. Loosely speaking, these are sections of regularity $H^r$ in the foot point and regularity $C^\alpha$ in the fibers. \betagin{proof} The space $C^^{-1}nfty(U)$ is nuclear by \cite[Corollary to Theorem~51.4]{treves1967topological}, and the space $C^\omega(U)$ is nuclear as a countable inductive limit of nuclear spaces of holomorphic functions \cite[Theorem~30.11]{KM97}. Let $\otimes_\varepsilon$, $\otimes_\partiali$, and $\otimes_\beta$ be the injective, projective, and bornological completed tensor products, respectively. Then \betagin{align*} C^\alpha(U) \otimes_\varepsilon H^r(M) = C^\alpha(U) \otimes_\partiali H^r(M) = C^\alpha(U) \otimes_\beta H^r(M), \end{align*} where the first equality holds because $C^\alpha(U)$ is nuclear, and the second equality holds by \cite[Proposition~5.8]{KM97} using that $H^r(M)$ is a normed space, and $C^\omegaega(V)$ is an (LF)-space and therefore bornological. Thus, all tensor spaces $C^\alpha(U)\hat\otimes H^r(M)$ are equal. 
Moreover, \betagin{align*} C^^{-1}nfty(U,H^r(M)) = C^^{-1}nfty(U) \otimes_\varepsilon H^r(M) \end{align*} by \cite[Theorem~44.1]{treves1967topological}, and \betagin{align*} C^\omega(U,H^r(M)) = \varprojlim_{\tilde U} \mathcal H(\tilde U,H^r(M)) = \varprojlim_{\tilde U} \mathcal H(\tilde U)\hat\otimes H^r(M) = C^\omega(U) \hat\otimes H^r(M) \end{align*} by \cite[Corollary~16.7.5]{jarchow2012locally}, where $\mathcal H$ denotes holomorphic functions and $\tilde U$ are open neighborhoods of $U$ in the complexification of the underlying vector space. Let $\Delta_2$ be the natural norm on $L^2$ functions \cite[7.1]{defant1992tensor}. Then \betagin{equation*} H^r(M) \hat\otimes C^\alpha(U) = H^r(M) \hat\otimes_{\Delta_2} C^\alpha(U) = \varprojlim_p H^r(M) \hat\otimes_{\Delta_2} E_p = \varprojlim_p H^r(M, E_p), \end{equation*} where the first equality holds because $\varepsilon\le\Delta_2\le\partiali$ \cite[7.1]{defant1992tensor}, the second one by the definition of tensor products of locally convex spaces \cite[35.2]{defant1992tensor}, and the third one because the fractional Laplacian $(1+\Delta^g)\colon H^r(M)\to L^2(M)$ with respect to any auxiliary Riemannian metric $g ^{-1}n \operatorname{Met}(M)$ is an isometry and because $L^2(M,E_p)=L^2(M)\otimes_{\Delta_2}E_p$ by the definition of $\Delta_2$ \cite[7.2]{defant1992tensor}. \end{proof} \betagin{lemma}[Push-forward of functions] \lambdabel{lem:omega} Let $U$ be an open subset of $\mathbb R$, and let $r ^{-1}n (\on{dim}(M)/2,^{-1}nfty)$. Then $H^r(M,U)$ is open in $H^r(M,\mathbb R)$, and the following statements hold. \betagin{enumerate} ^{-1}tem \lambdabel{lem:omega:a} The following map is smooth: \betagin{align*} H^rC^^{-1}nfty(M\times U) \times H^r(M,U) \ni (F,h) \mapsto F\circ (\on{Id}_M,h) ^{-1}n H^r(M). \end{align*} ^{-1}tem \lambdabel{lem:omega:b} The following map is real analytic: \betagin{align*} H^rC^\omegaega(M\times U) \times H^r(M,U) \ni (F,h) \mapsto F\circ (\on{Id}_M,h) ^{-1}n H^r(M). \end{align*} \end{enumerate} \end{lemma} \betagin{proof} The set $\Gamma_{H^r}(U)$ is open in $\Gamma_{H^r}(E_1)$ because $\Gamma_{H^r}(E_1)$ is continuously included in $\Gamma_{C}(E_1)$ thanks to the Sobolev embedding theorem. \betagin{enumerate}[wide] ^{-1}tem follows from the more general statement \autoref{lem:pushforward}.\ref{lem:pushforward:a}. ^{-1}tem[\textbf{(b')}] \makeatletter \partialrotected@edef\@currentlabel{\textbf{(b')}} \makeatother \lambdabel{lem:omega:b'} As an intermediate step, we claim that the following map is real analytic: \betagin{align*} C^\omega(U) \times H^r(M,U) \ni (f,h) \mapsto f\circ h ^{-1}n H^r(M). \end{align*} For any $f ^{-1}n C^\omega(U)$ and $h ^{-1}n H^r(M,U)$, the composition $f\circ h$ coincides with the Riesz functional calculus $f(h)$, which is defined as follows \cite[Theorem~4.7]{conway2013course}. As the spectrum $\sigmagma(h)$ equals the range of $h$, which is a compact subset of $U$, there is a set of positively oriented curves $\Gammamma=\{\bar{g}ammamma_1,\dots,\bar{g}ammamma_n\}$ in $U \setminus \sigmagma(h)$ such that $\sigmagma(h)$ is inside of $\Gammamma$, and $\mathbb C\setminus U$ is outside of $\Gammamma$ \cite[Proposition~4.4]{conway2013course}. Then one defines $f(h)$ as the following Bochner integral over the resolvent of $h$: \betagin{align*} f(h) = \frac{-1}{2\partiali \mathrm{i} }^{-1}nt_\Gammamma f(\lambda) (h-\lambda)^{-1} d\lambda \end{align*} For any fixed $\Gammamma$, this integral is well-defined and real analytic as claimed. 
\item The following map is real analytic thanks to \autoref{lem:omega:b'} and the boundedness of multiplication $H^r(M)\times H^r(M)\to H^r(M)$:
\begin{align*}
H^r(M) \times C^\omega(U) \times H^r(M,U) \ni (a,f,h) \mapsto (a\otimes f)\circ (\on{Id}_M,h) \in H^r(M),
\end{align*}
where $(a\otimes f)\circ(\on{Id}_M,h)$ denotes the map $x \mapsto a(x)f(h(x))$. Equivalently, by the real analytic exponential law \cite[11.18]{KM97}, the following map is real analytic:
\begin{align*}
H^r(M) \times C^\omega(U) \ni (a,f) \mapsto \big(h \mapsto (a\otimes f)\circ(\on{Id}_M, h)\big) \in C^\omega(H^r(M,U),H^r(M)).
\end{align*}
This map is bilinear and real analytic, and therefore bounded. By the universal property of the bornological tensor product $\otimes_\beta$ \cite[5.7]{KM97}, it descends to a bounded linear map
\begin{align*}
H^r(M) \otimes_\beta C^\omega(U) \ni F \mapsto \big(h \mapsto F\circ (\on{Id}_M,h)\big) \in C^\omega(H^r(M,U),H^r(M)).
\end{align*}
The domain of this map equals $H^rC^\omega(M\times U)$ by \autoref{lem:mixed}. \qedhere
\end{enumerate}
\end{proof}
\begin{lemma}[Push-forward of sections] \label{lem:pushforward}
Let $E_1,E_2$ be vector bundles over $M$, let $U\subset E_1$ be an open neighborhood of the image of a smooth section, let $F\colon U\to E_2$ be a fiber-preserving function, and let $r \in (\on{dim}(M)/2,\infty)$. Then $\Gamma_{H^r}(U)$ is open in $\Gamma_{H^r}(E_1)$, and the following statements hold:
\begin{enumerate}
\item \label{lem:pushforward:a} If $F$ is smooth or belongs to $\Gamma_{H^r}(C^\infty(U,E_2))$, then the push-forward $F_*$ is smooth:
\begin{equation*}
F_*\colon\Gamma_{H^r}(U) \to \Gamma_{H^r}(E_2),\quad h\mapsto F\circ h.
\end{equation*}
\item \label{lem:pushforward:b} If $F$ is real analytic or belongs to $\Gamma_{H^r}(C^\omega(U,E_2))$, then the push-forward $F_*$ is real analytic.
\end{enumerate}
The notation $\Gamma_{H^r}(C^\infty(U,E_2))$ and $\Gamma_{H^r}(C^\omega(U,E_2))$ is explained after \autoref{lem:mixed}.
\end{lemma}
\begin{proof}
\begin{enumerate}[wide]
\item Let $c\colon\mathbb R\ni t\mapsto c(t,\cdot)\in \Gamma_{H^r}(U)$ be a smooth curve. As $r>\on{dim}(M)/2$, it holds for each $x\in M$ that the mapping $\mathbb R\ni t \mapsto F_x(c(t,x))\in (E_2)_x$ is smooth. By the Fa\`a di Bruno formula (see \cite{FaadiBruno1855} for the 1-dimensional version, preceded in \cite{Arbogast1800} by 55 years), we have for each $p\in\mathbb N_{>0}$, $t \in \mathbb R$, and $x \in M$ that
\begin{align*}
\frac{\partial_t^p}{p!} F_x (c(t,x)) = \sum_{j\in\mathbb N_{>0}} \sum_{\substack{\alpha\in \mathbb N_{>0}^j\\ \alpha_1+\dots+\alpha_j =p}} \frac{1}{j!}d^j (F_x) (c(t,x))\Big( \frac{\partial_t^{(\alpha_1)}c(t,x)}{\alpha_1!},\dots, \frac{\partial_t^{(\alpha_j)}c(t,x)}{\alpha_j!}\Big)\,.
\end{align*}
For each $x\in M$ and $\alpha_x\in (E_2)_x^*$ the mapping $s\mapsto \langle s(x),\alpha_x\rangle$ is a continuous linear functional on the Hilbert space $\Gamma_{H^r}(E_2)$. The set $\mathcal V_2$ of all of these functionals separates points and therefore satisfies the condition of Theorem~\ref{thm:FK}. We also have for each $p\in\mathbb N_{>0}$, $t \in \mathbb R$, and $x \in M$ that
\begin{align*}
\partial_t^p\langle F_x (c(t,x)),\alpha_x\rangle &= \langle\partial_t^p F_x (c(t,x)),\alpha_x\rangle.
\end{align*}
Using the explicit expressions for $\partial_t^p F_x (c(t,x))$ from above we may apply Lemma~\ref{lem:curvesSobolev} to conclude that $t\mapsto F(c(t,\;))$ is a smooth curve $\mathbb R\to \Gamma_{H^r}(E_2)$. Thus, $F_*$ is a smooth mapping, and we have shown \ref{lem:pushforward:a}.
\item[\textbf{(b')}] \makeatletter \protected@edef\@currentlabel{\textbf{(b')}} \makeatother \label{lem:pushforward:b'} We claim that \ref{lem:pushforward:b} holds when $F$ is fiber-wise linear. Then $F$ can be identified with a map $\check F \in \Gamma_{H^r}(L(E_1,E_2))$. For any $h \in \Gamma_{H^r}(E_1)$, the composition $F\circ h$ equals the trace $\check F.h$, which is real analytic in $h$ by the module properties \ref{thm:module}.
\item To prove the general case, we write $E_1$ and $E_2$ as sub-bundles of a trivial bundle $M\times V$. The corresponding inclusion and projection mappings are real analytic mappings of vector bundles and are denoted by
\begin{align*}
i_1&\colon E_1 \to M\times V, & i_2&\colon E_2 \to M\times V, & \pi_1&\colon M\times V\to E_1, & \pi_2&\colon M\times V\to E_2.
\end{align*}
Then the set $\tilde U := \pi_1^{-1}(U)\subseteq M\times V$ and the map $\tilde F:=i_2\circ F\circ\pi_1$ fit into the following commutative diagrams:
\begin{equation*}
\xymatrix@C=3em{
U \ar[r]^F \ar@{^(->}[d]^{i_1} & E_2 \ar@{<<-}[d]^{\pi_2} \\
\tilde U \ar[r]^{\tilde F} & M\times V
}
\hspace{4em}
\xymatrix@C=3em{
\Gamma_{H^r}(U) \ar[r]^{F_*} \ar@{^(->}[d]^{(i_1)_*} & \Gamma_{H^r}(E_2) \ar@{<<-}[d]^{(\pi_2)_*} \\
\Gamma_{H^r}(\tilde U) \ar[r]^{\tilde F_*} & \Gamma_{H^r}(M\times V)
}
\end{equation*}
All maps in the diagram on the left are real analytic by definition. The map $(\tilde F)_*$ is real analytic by \autoref{lem:omega}.\ref{lem:omega:b} applied component-wise to the trivial bundle $M\times V$, and the maps $(i_1)_*$ and $(\pi_2)_*$ are real analytic by \ref{lem:pushforward:b'}. Therefore, $F_*=(\pi_2)_*\circ (\tilde F)_*\circ (i_1)_*$ is real analytic, which proves \ref{lem:pushforward:b}. \qedhere
\end{enumerate}
\end{proof}

\section{A real analytic no-loss no-gain result}

The following lemma is a variant of the no-loss-no-gain theorem of Ebin and Marsden \cite{EM1970}, adapted to the real analytic sprays on spaces of immersions as in the setting of \autoref{thm:wellposed}. The proof is a minor adaptation of the proof in \cite{EM1970}; see also \cite{bruveris2017regularity}.

\begin{lemma}[Real analytic no-loss no-gain] \label{lem:nolossnogain}
Let $r_0>\on{dim}(M)/2+1$ and let $\alpha\in\{\infty,\omega\}$. For each $r\ge r_0$, let $S^r$ be a $\on{Diff}(M)$-invariant $C^\alpha$ vector field on $T\on{Imm}^r(M,N)$ such that $Ti_{r,s} \circ S^r = S^s \circ i_{r,s}$, where $i_{r,s}\colon T\on{Imm}^r(M,N)\to T\on{Imm}^s(M,N)$ is the $C^\alpha$-embedding for $r_0\le s<r$. By the theorem of Picard--Lindel\"of each $S^r$ has a maximal $C^\alpha$-flow $\on{Fl}^{S^r}\colon U^r \to T\on{Imm}^r(M,N)$ for an open neighborhood $U^r$ of $\{0\}\times T\on{Imm}^r(M,N)$ in $\mathbb R\times T\on{Imm}^r(M,N)$. Then $U^r = U^s\cap( \mathbb R\times T\on{Imm}^r(M,N))$ for all $r\ge r_0+1$ and $r_0\le s\le r$. Thus, there is no loss or gain in regularity during the evolution along any $S^r$ for $r\ge r_0+1$.
\end{lemma}
\begin{proof}
\begin{enumerate}[wide]
\item\label{lem:nolossnogain:result} We shall use the following result \cite[Lemma~12.2]{EM1970}: \emph{Any $h\in H^r(M,TN)$ such that $Th\circ X\in H^r(M,TTN)$ for all $X\in \mathfrak X(M)$ satisfies $h\in H^{r+1}(M,TN)$.}
\item\label{lem:nolossnogain:J} For $h\in T\on{Imm}^r(M,N)$ let $J^r_h$ be the open interval such that $U^r\cap (\mathbb R\times \{h\}) = J^r_h\times \{h\}$, i.e., $J^r_h$ is the maximal domain of the integral curve of $S^r$ through $h$ in $T\on{Imm}^r(M,N)$; see \cite[32.14]{KM97}. Since $i_{r,s}\circ \on{Fl}^{S^r}_t = \on{Fl}^{S^s}_t\circ i_{r,s}$ (see \cite[32.16]{KM97}), for $h\in T\on{Imm}^r(M,N)$ we have $J^r_h\subseteq J^s_h$ for $r_0\le s<r$.
\item\label{lem:nolossnogain:claim} {\bf Claim.} \emph{For $h\in T\on{Imm}^{r+1}(M,N)$ we have $J^r_h = J^{r+1}_h$.} \\
Since $S^r$ is invariant under the pullback action of $\on{Diff}(M)$, we have for $h\in T\on{Imm}^{r+1}(M,N)$ and any $X\in \mathfrak X(M)$ that
$$ \on{Fl}^{S^r}_t(h\circ \on{Fl}^X_u) = \on{Fl}^{S^r}_t(h) \circ \on{Fl}^X_u\,. $$
Differentiating both sides we get
\begin{align*}
T(\on{Fl}^{S^r}_t(h))\circ X &= \partial_u|_0 ( \on{Fl}^{S^r}_t(h) \circ \on{Fl}^X_u) = \partial_u|_0 (\on{Fl}^{S^r}_t(h\circ \on{Fl}^X_u)) \\
&= T(\on{Fl}^{S^r}_t)(\partial_u|_0(h\circ \on{Fl}^X_u)) = T(\on{Fl}^{S^r}_t)(Th\circ X).
\end{align*}
Since $Th\circ X \in H^r(M,TTN)$ we see that $T(\on{Fl}^{S^r}_t(h))\circ X \in H^r(M,TTN)$. By result \ref{lem:nolossnogain:result} we get $\on{Fl}^{S^r}_t(h)\in T\on{Imm}^{r+1}(M,N)$, and thus $J^{r}_h\supseteq J^{r+1}_h$. The converse inclusion is \ref{lem:nolossnogain:J}.
\item Let $r_0+1\le s < r<s+1$ and let $h\in T\on{Imm}^{r}(M,N)$. Then
$$J^r_h \subseteq J^s_h\subseteq J^{r-1}_h= J^r_h,$$
where the inclusions follow from \ref{lem:nolossnogain:J}, \ref{lem:nolossnogain:J}, and \ref{lem:nolossnogain:claim}, respectively. Thus we have $J^r_h=J^s_h=J^{r-1}_h$. \qedhere
\end{enumerate}
\end{proof}

\begin{thebibliography}{10}
\bibitem{Arbogast1800} L.~F.~A. Arbogast. \newblock {\em Du calcul des d\'erivations}. \newblock Levrault, Strasbourg, 1800.
\bibitem{Ar1966} V.~I. Arnold. \newblock Sur la g\'eom\'etrie diff\'erentielle des groupes de {L}ie de dimension infinie et ses applications \`a l'hydrodynamique des fluides parfaits. \newblock {\em Ann. Inst. Fourier (Grenoble)}, 16(fasc. 1):319--361, 1966.
\bibitem{BBCEK2019} M.~Bauer, M.~Bruveris, E.~Cismas, J.~Escher, and B.~Kolev. \newblock Well-posedness of the {EPDiff} equation with a pseudo-differential inertia operator. \newblock To appear in Journal of Differential Equations, 2020.
\bibitem{bauer2012vanishing} M.~Bauer, M.~Bruveris, P.~Harms, and P.~W. Michor. \newblock Vanishing geodesic distance for the {R}iemannian metric with geodesic equation the {K}d{V}-equation. \newblock {\em Annals of Global Analysis and Geometry}, 41(4):461--472, 2012.
\bibitem{bauer2013geodesic} M.~Bauer, M.~Bruveris, P.~Harms, and P.~W. Michor. \newblock Geodesic distance for right invariant {S}obolev metrics of fractional order on the diffeomorphism group. \newblock {\em Annals of Global Analysis and Geometry}, 44(1):5--21, 2013.
\bibitem{bauer2018smooth} M.~Bauer, M.~Bruveris, P.~Harms, and P.~W. Michor. \newblock Smooth perturbations of the functional calculus and applications to {R}iemannian geometry on spaces of metrics. \newblock {\em {\normalfont\textrm{arXiv:}\texttt{1810.03169}}}, 2018.
\bibitem{bauer2017numerical} M.~Bauer, M.~Bruveris, P.~Harms, and J.~M{\o}ller-Andersen. \newblock A numerical framework for {Sobolev} metrics on the space of curves. \newblock {\em SIAM Journal on Imaging Sciences}, 10(1):47--73, 2017. \bibitem{bauer2018fractional} M.~Bauer, M.~Bruveris, and B.~Kolev. \newblock Fractional {Sobolev} metrics on spaces of immersed curves. \newblock {\em Calculus of Variations and Partial Differential Equations}, 57(1):27, 2018. \bibitem{bauer2014overview} M.~Bauer, M.~Bruveris, and P.~W. Michor. \newblock Overview of the geometries of shape spaces and diffeomorphism groups. \newblock {\em Journal of Mathematical Imaging and Vision}, 50(1-2):60--97, 2014. \bibitem{bauer2015local} M.~Bauer, J.~Escher, and B.~Kolev. \newblock Local and global well-posedness of the fractional order {EPD}iff equation on {$\mathbb R^d$}. \newblock {\em Journal of Differential Equations}, 258(6):2010--2053, 2015. \bibitem{Bauer2011b} M.~Bauer, P.~Harms, and P.~W. Michor. \newblock Sobolev metrics on shape space of surfaces. \newblock {\em J. Geom. Mech.}, 3(4):389--438, 2011. \bibitem{bauer2012sobolev} M.~Bauer, P.~Harms, and P.~W. Michor. \newblock Sobolev metrics on shape space, {II}: weighted {S}obolev metrics and almost local metrics. \newblock {\em Journal of Geometric Mechanics}, 4(4), 2012. \bibitem{bauer2018vanishing} M.~Bauer, P.~Harms, and S.~C. Preston. \newblock Vanishing distance phenomena and the geometric approach to {SQG}. \newblock {\em Archive for Rational Mechanics and Analysis}, 2019. \bibitem{bauer2016geometric} M.~Bauer, B.~Kolev, and S.~C. Preston. \newblock Geometric investigations of a vorticity model equation. \newblock {\em Journal of Differential Equations}, 260(1):478--516, 2016. \bibitem{behzadan2017certain} A.~Behzadan and M.~Holst. \newblock On certain geometric operators between {S}obolev spaces of sections of tensor bundles on compact manifolds equipped with rough metrics. \newblock {\em {\normalfont\textrm{arXiv:}\texttt{1704.07930}}}, 2017. \bibitem{bruveris2017regularity} M.~Bruveris. \newblock Regularity of maps between {Sobolev} spaces. \newblock {\em Annals of Global Analysis and Geometry}, 52(1):11--24, 2017. \bibitem{camassa1993integrable} R.~Camassa and D.~D. Holm. \newblock An integrable shallow water equation with peaked solitons. \newblock {\em Physical Review Letters}, 71(11):1661, 1993. \bibitem{celledoni2016shape} E.~Celledoni, S.~Eidnes, and A.~Schmeding. \newblock Shape analysis on homogeneous spaces: a generalised {SRVT} framework. \newblock In {\em The Abel Symposium}, pages 187--220. Springer, 2016. \bibitem{constantin1985simple} P.~Constantin, P.~D. Lax, and A.~Majda. \newblock A simple one-dimensional model for the three-dimensional vorticity equation. \newblock {\em Communications on pure and applied mathematics}, 38(6):715--724, 1985. \bibitem{constantin1994formation} P.~Constantin, A.~J. Majda, and E.~Tabak. \newblock Formation of strong fronts in the {2-D} quasigeostrophic thermal active scalar. \newblock {\em Nonlinearity}, 7(6):1495, 1994. \bibitem{conway2013course} J.~B. Conway. \newblock {\em A course in functional analysis}, volume~96. \newblock Springer Science \& Business Media, 2013. \bibitem{defant1992tensor} A.~Defant and K.~Floret. \newblock {\em Tensor norms and operator ideals}, volume 176. \newblock Elsevier, 1992. \bibitem{EM1970} D.~G. Ebin and J.~E. Marsden. \newblock {G}roups of diffeomorphisms and the motion of an incompressible fluid. \newblock {\em Ann. of Math.}, 92:102--163, 1970. 
\bibitem{escher2014right} J.~Escher and B.~Kolev. \newblock Right-invariant {S}obolev metrics of fractional order on the diffeomorphism group of the circle. \newblock {\em Journal of Geometric Mechanics}, 6(3):335--372, 2014. \bibitem{FaadiBruno1855} C.~F. Fa\`a~di Bruno. \newblock Note {s}ur {u}ne {n}ouvelle {f}ormule {d}u {c}alcul {d}iff\'erentielle. \newblock {\em Quart. J. Math.}, 1:359--360, 1855. \bibitem{FK88} A.~Fr{\"o}licher and A.~Kriegl. \newblock {\em Linear spaces and differentiation theory}. \newblock Pure and Applied Mathematics (New York). John Wiley \& Sons Ltd., Chichester, 1988. \bibitem{grenander1998computational} U.~Grenander and M.~I. Miller. \newblock Computational anatomy: An emerging discipline. \newblock {\em Quarterly of applied mathematics}, 56(4):617--694, 1998. \bibitem{grosse2013sobolev} N.~Gro{\ss}e and C.~Schneider. \newblock Sobolev spaces on {R}iemannian manifolds with bounded geometry: {G}eneral coordinates and traces. \newblock {\em Mathematische Nachrichten}, 286(16):1586--1613, 2013. \bibitem{HoMa2005} D.~D. Holm and J.~E. Marsden. \newblock Momentum maps and measure-valued solutions (peakons, filaments, and sheets) for the {EPD}iff equation. \newblock In {\em The breadth of symplectic and {P}oisson geometry}, volume 232 of {\em Progr. Math.}, pages 203--235. Birkh\"auser Boston, Boston, MA, 2005. \bibitem{hunter1991dynamics} J.~K. Hunter and R.~Saxton. \newblock Dynamics of director fields. \newblock {\em SIAM Journal on Applied Mathematics}, 51(6):1498--1521, 1991. \bibitem{inci2013regularity} H.~Inci, T.~Kappeler, and P.~Topalov. \newblock On the regularity of the composition of diffeomorphisms. \newblock {\em Mem. Amer. Math. Soc.}, 226(1062):vi+60, 2013. \bibitem{jarchow2012locally} H.~Jarchow. \newblock {\em Locally convex spaces}. \newblock Springer Science \& Business Media, 2012. \bibitem{jermyn2017} I.~Jermyn, S.~Kurtek, H.~Laga, and A.~Srivastava. \newblock Elastic shape analysis of three-dimensional objects. \newblock {\em Synthesis Lectures on Computer Vision}, 7:1--185, 09 2017. \bibitem{jerrard2019vanishing} R.~L. Jerrard and C.~Maor. \newblock Vanishing geodesic distance for right-invariant {S}obolev metrics on diffeomorphism groups. \newblock {\em Annals of Global Analysis and Geometry}, 55(4):631--656, 2019. \bibitem{khesin2008geometry} B.~Khesin and R.~Wendt. \newblock {\em The geometry of infinite-dimensional groups}, volume~51. \newblock Springer Science \& Business Media, 2008. \bibitem{klassen2004analysis} E.~Klassen, A.~Srivastava, M.~Mio, and S.~H. Joshi. \newblock Analysis of planar shapes using geodesic paths on shape spaces. \newblock {\em IEEE transactions on pattern analysis and machine intelligence}, 26(3):372--383, 2004. \bibitem{KMS93} I.~Kol{\'a}{\v{r}}, P.~W. Michor, and J.~Slov{\'a}k. \newblock {\em Natural operations in differential geometry}. \newblock Springer-Verlag, Berlin, 1993. \bibitem{Kol2017} B.~Kolev. \newblock Local well-posedness of the {EPD}iff equation: a survey. \newblock {\em J. Geom. Mech.}, 9(2):167--189, 2017. \bibitem{kouranbaeva1999camassa} S.~Kouranbaeva. \newblock The {C}amassa--{H}olm equation as a geodesic flow on the diffeomorphism group. \newblock {\em Journal of Mathematical Physics}, 40(2):857--868, 1999. \bibitem{KM96} A.~Kriegl and P.~W. Michor. \newblock Product preserving functors of infinite-dimensional manifolds. \newblock {\em Arch. Math. (Brno)}, 32(4):289--306, 1996. \bibitem{KM97} A.~Kriegl and P.~W. Michor. 
\newblock {\em The convenient setting of global analysis}, volume~53 of {\em Mathematical Surveys and Monographs}. \newblock American Mathematical Society, Providence, RI, 1997. \bibitem{le2017computing} A.~Le~Brigant. \newblock Computing distances and geodesics between manifold-valued curves in the {SRV} framework. \newblock {\em Journal of Geometric Mechanics}, 9(2):131--156, 2017. \bibitem{lenells2007hunter} J.~Lenells. \newblock The {H}unter--{S}axton equation describes the geodesic flow on a sphere. \newblock {\em Journal of Geometry and Physics}, 57(10):2049--2064, 2007. \bibitem{marsden1984semidirect} J.~E. Marsden, T.~Ratiu, and A.~Weinstein. \newblock Semidirect products and reduction in mechanics. \newblock {\em Transactions of the american mathematical society}, 281(1):147--177, 1984. \bibitem{Michor08} P.~W. Michor. \newblock {\em Topics in differential geometry}, volume~93 of {\em Graduate Studies in Mathematics}. \newblock American Mathematical Society, Providence, RI, 2008. \bibitem{Michor20} P.~W. Michor. \newblock Manifolds of mappings for continuum mechanics. \newblock In {\em Geometric {C}ontinuum {M}echanics -- an {O}verview}, pages 1-- 60. Birkhauser, 2020. \bibitem{michor2007overview} P.~W. Michor and D.~Mumford. \newblock An overview of the {R}iemannian metrics on spaces of curves using the {H}amiltonian approach. \newblock {\em Appl. Comput. Harmon. Anal.}, 23(1):74--113, 2007. \bibitem{misiolek1998shallow} G.~Misio{\l}ek. \newblock A shallow water equation as a geodesic flow on the {B}ott-{V}irasoro group. \newblock {\em Journal of Geometry and Physics}, 24(3):203--208, 1998. \bibitem{misiolek2010fredholm} G.~Misio{\l}ek and S.~C. Preston. \newblock Fredholm properties of {R}iemannian exponential maps on diffeomorphism groups. \newblock {\em Inventiones mathematicae}, 179(1):191, 2010. \bibitem{Mueller2017} O.~M\"uller. \newblock Applying the index theorem to non-smooth operators. \newblock {\em Journal of Geometry and Physics}, 116:140--145, 2017. \bibitem{ovsienko1987korteweg} V.~Y. Ovsienko and B.~A. Khesin. \newblock {K}orteweg--de {V}ries superequation as an {E}uler equation. \newblock {\em Functional Analysis and Its Applications}, 21(4):329--331, 1987. \bibitem{shnirel1987geometry} A.~I. Shnirel'man. \newblock On the geometry of the group of diffeomorphisms and the dynamics of an ideal incompressible fluid. \newblock {\em Mathematics of the USSR-Sbornik}, 56(1):79, 1987. \bibitem{srivastava-klassen-book:2016} A.~Srivastava and E.~Klassen. \newblock {\em Functional and Shape Data Analysis}. \newblock Springer Series in Statistics, 2016. \bibitem{su2014statistical} J.~Su, S.~Kurtek, E.~Klassen, A.~Srivastava, et~al. \newblock Statistical analysis of trajectories on {R}iemannian manifolds: bird migration, hurricane tracking and video surveillance. \newblock {\em The Annals of Applied Statistics}, 8(1):530--552, 2014. \bibitem{su2018comparing} Z.~Su, E.~Klassen, and M.~Bauer. \newblock Comparing curves in homogeneous spaces. \newblock {\em Differential Geometry and its Applications}, 60:9--32, 2018. \bibitem{sundaramoorthi2011new} G.~Sundaramoorthi, A.~Mennucci, S.~Soatto, and A.~Yezzi. \newblock A new geometric metric in the space of curves, and applications to tracking deforming objects by prediction and filtering. \newblock {\em SIAM Journal on Imaging Sciences}, 4(1):109--145, 2011. \bibitem{treves1967topological} F.~Treves. \newblock {\em Topological Vector Spaces, Distributions and Kernels: Pure and Applied Mathematics}. \newblock Academic Press, 1967. 
\bibitem{triebel1992theory2} H.~Triebel. \newblock {\em Theory of function spaces {II}}, volume~84 of {\em Monographs in Mathematics}. \newblock Birkh\"auser, 1992. \bibitem{vishik1978analogs} S.~Vishik and F.~Dolzhanskii. \newblock Analogs of the {E}uler--{L}agrange equations and magnetohydrodynamics equations related to {L}ie groups. \newblock In {\em Sov. Math. Doklady}, volume~19, pages 149--153, 1978. \bibitem{Was2016} P.~Washabaugh. \newblock The {SQG} equation as a geodesic equation. \newblock {\em Archive for Rational Mechanics and Analysis}, 222(3):1269--1284, 2016. \bibitem{wunsch2010geodesic} M.~Wunsch. \newblock On the geodesic flow on the group of diffeomorphisms of the circle with a fractional {S}obolev right-invariant metric. \newblock {\em Journal of Nonlinear Mathematical Physics}, 17(1):7--11, 2010. \bibitem{younes1998computable} L.~Younes. \newblock Computable elastic distances between shapes. \newblock {\em SIAM Journal on Applied Mathematics}, 58(2):565--586, 1998. \bibitem{younes2010shapes} L.~Younes. \newblock {\em Shapes and diffeomorphisms}, volume 171. \newblock Springer, 2010. \end{thebibliography} \end{document}
\begin{document} \title{A note about the invariance of the basic reproduction number for stochastically perturbed SIS models} \begin{abstract} We try to justify rigorously, using a Wong-Zakai approximation argument, the susceptible-infected-susceptible (SIS) stochastic differential equation proposed in \cite{Mao 2011}. We discover that according to this approach the \emph{right} stochastic model to be considered should be the Stratonovich version of the It\^o equation analyzed in \cite{Mao 2011}. Surprisingly, this alternative model presents the following feature: the threshold value characterizing the two different asymptotic regimes of the solution coincides with the one describing the classical SIS deterministic equation. \end{abstract} Key words and phrases: SIS epidemic model, It\^o and Stratonovich stochastic differential equations, Wong-Zakai approximation, extinction, persistence. \\ AMS 2000 classification: 60H10, 60H30, 92D30. \allowdisplaybreaks \section{Introduction}\label{intro} The susceptible-infected-susceptible (SIS) model is a simple mathematical model that describes, under suitable assumptions, the spread of diseases with no permanent immunity (see e.g. \cite{Brauer},\cite{HY}). In such models an individual starts being susceptible to a disease, at some point of time gets infected and then recovers after some other time interval, becoming susceptible again. If $S(t)$ and $I(t)$ denote the number of susceptibles and infecteds at time $t$, respectively, then the differential equations describing the spread of the disease are \begin{align}\label{SIS deterministic} \begin{cases} \frac{dS(t)}{dt}=\mu N-\beta S(t)I(t)+\gamma I(t)-\mu S(t),& S(0)=s_0>0;\\ \frac{dI(t)}{dt}=\beta S(t)I(t)-(\mu+\gamma) I(t),& I(0)=i_0>0. \end{cases} \end{align} Here, $N:=s_0+i_0$ is the initial size of the population amongst whom the disease is spreading, $\mu$ denotes the per capita death rate, $\gamma$ is the rate at which infected individuals become cured and $\beta$ stands for the disease transmission coefficient. Note that \begin{align*} \frac{d}{dt}(S(t)+I(t))=\mu(N-(S(t)+I(t))), \quad S(0)+I(0)=N, \end{align*} and hence \begin{align*} S(t)+I(t)=S(0)+I(0)=N,\quad\mbox{ for all $t\geq 0$}. \end{align*} Therefore, system (\ref{SIS deterministic}) reduces to the differential equation \begin{align}\label{SIS one} \frac{dI(t)}{dt}=\beta I(t)(N-I(t))-(\mu+\gamma) I(t),\quad I(0)=i_0\in ]0,N[, \end{align} with $S(t):=N-I(t)$, for $t\geq 0$. Equation (\ref{SIS one}) can be solved explicitly as \begin{align}\label{explicit SIS} I(t)=\frac{i_0e^{[\beta N-(\mu+\gamma)]t}}{1+\beta\int_0^ti_0e^{[\beta N-(\mu+\gamma)]s}ds},\quad t\geq 0, \end{align} and one finds that \begin{align*} \lim_{t\to+\infty}I(t)= \begin{cases} 0,&\mbox{ if $R_0\leq 1$};\\ N(1-1/R_0),&\mbox{ if $R_0>1$}, \end{cases} \end{align*} where \begin{align*} R_0:=\frac{\beta N}{\mu+\gamma}. \end{align*} This ratio is known as \emph{basic reproduction number} of the infection and determines whether the disease will become extinct, i.e. $I(t)$ will tend to zero as $t$ goes to infinity, or will be persistent, i.e. $I(t)$ will tend to a positive limit as $t$ increases. \subsection{The stochastic model} With the aim of examining the effect of environmental stochasticity, Gray et al. \cite{Mao 2011} have proposed a stochastic version of (\ref{SIS one}) which is obtained via a suitable perturbation of the parameter $\beta$. 
More precisely, they write equation (\ref{SIS one}) in the differential form
\begin{align}\label{SIS differential}
dI(t)=\beta I(t)(N-I(t))dt-(\mu+\gamma) I(t)dt,\quad I(0)=i_0\in ]0,N[,
\end{align}
and \emph{formally} replace the infinitesimal increment $\beta dt$ with $\beta dt+\sigma dB(t)$, where $\sigma$ is a new positive parameter and $\{B(t)\}_{t\geq 0}$ denotes a standard one-dimensional Brownian motion. This perturbation transforms the deterministic differential equation (\ref{SIS one}) into the stochastic differential equation
\begin{align}\label{SDE Mao}
dI(t)=[\beta I(t)(N-I(t))-(\mu+\gamma) I(t)]dt+\sigma I(t)(N-I(t))dB(t),
\end{align}
which the authors interpret in the It\^o sense. Equation (\ref{SDE Mao}) is then investigated and the authors prove the existence of a unique global strong solution living in the interval $]0,N[$ with probability one for all $t\geq 0$. Moreover, they identify a \emph{stochastic reproduction number}
\begin{align*}
R_0^S:=R_0-\frac{\sigma^2N^2}{2(\mu+\gamma)},
\end{align*}
which characterizes the following asymptotic behaviour:
\begin{itemize}
\item if $R_0^S<1$ and $\sigma^2<\frac{\beta}{N}$, or if $\sigma^2>\max\{\frac{\beta}{N},\frac{\beta^2}{2(\mu+\gamma)}\}$, then the \emph{disease will become extinct}, i.e.
\begin{align*}
\lim_{t\to+\infty}I(t)=0;
\end{align*}
\item if $R_0^S>1$, then \emph{the disease will be persistent}, i.e.
\begin{align*}
\liminf_{t\to+\infty}I(t)\leq \xi\leq \limsup_{t\to+\infty}I(t),
\end{align*}
where $\xi:=\frac{1}{\sigma^2}\left(\sqrt{\beta^2-2\sigma^2(\mu+\gamma)}-(\beta-\sigma^2N)\right)$.
\end{itemize}
It is worth mentioning that Xu \cite{Xu} refined the above description as follows:
\begin{itemize}
\item if $R_0^S<1$, then $I(t)$ tends to zero, as $t$ tends to infinity, almost surely;
\item if $R_0^S\geq 1$, then $I(t)$ is recurrent on $]0,N[$.
\end{itemize}
\subsection{The stochastic model revised}
We already mentioned that the It\^o equation
\begin{align*}
dI(t)=[\beta I(t)(N-I(t))-(\mu+\gamma) I(t)]dt+\sigma I(t)(N-I(t))dB(t),
\end{align*}
proposed in \cite{Mao 2011} is derived from
\begin{align}\label{SIS one 1}
\frac{dI(t)}{dt}=\beta I(t)(N-I(t))-(\mu+\gamma) I(t)
\end{align}
via the formal substitution
\begin{align*}
\beta dt\mapsto \beta dt+\sigma dB(t)
\end{align*}
in
\begin{align*}
dI(t)=\beta I(t)(N-I(t))dt-(\mu+\gamma) I(t)dt.
\end{align*}
It is important to remark that the non-differentiability of the Brownian paths prevents the implementation of the otherwise rigorous transformation
\begin{align}\label{beta perturbation 2}
\beta \mapsto \beta +\sigma \frac{dB(t)}{dt}
\end{align}
in equation (\ref{SIS one 1}). We now start from this simple observation and try to make this procedure rigorous. \\
Fix $T>0$ and, for a partition $\pi$ of the interval $[0,T]$, let $\{B^{\pi}(t)\}_{t\in [0,T]}$ be the polygonal approximation of the Brownian motion $\{B(t)\}_{t\in [0,T]}$, relative to the partition $\pi$. This means that $\{B^{\pi}(t)\}_{t\in [0,T]}$ is a continuous piecewise linear random function converging to $\{B(t)\}_{t\in [0,T]}$ almost surely and uniformly on $[0,T]$, as the mesh of the partition tends to zero.
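As an illustration of this approximation (and of the limiting equation derived below), the following short Python sketch may be helpful; it is not part of the analysis, and the parameter values, step sizes and random seed are arbitrary choices made only for the purpose of the experiment. It samples a Brownian path on a fine grid, builds its polygonal interpolation $B^{\pi}$ through the points of a coarser partition $\pi$, solves the resulting random ordinary differential equation by the explicit Euler scheme, and compares the outcome with an Euler--Maruyama discretization of the It\^o form of the Stratonovich limit driven by the same Brownian increments.
\begin{verbatim}
import numpy as np

# Illustrative parameter values (assumptions, not taken from the text).
N, beta, mu_gamma, sigma, i0, T = 100.0, 0.02, 1.0, 0.005, 10.0, 10.0

rng = np.random.default_rng(0)
n_coarse, refine = 200, 50           # coarse partition pi, refinement factor
n_fine = n_coarse * refine
t = np.linspace(0.0, T, n_fine + 1)
dt = t[1] - t[0]

# Brownian path on the fine grid and its polygonal interpolation B^pi
# through the points of the coarse partition pi.
dB = rng.normal(0.0, np.sqrt(dt), n_fine)
B = np.concatenate(([0.0], np.cumsum(dB)))
B_pi = np.interp(t, t[::refine], B[::refine])
dBpi_dt = np.diff(B_pi) / dt         # piecewise constant derivative of B^pi

def drift(I):
    return beta * I * (N - I) - mu_gamma * I

# Random ODE driven by dB^pi/dt (explicit Euler on the fine grid).
I_pi = np.empty(n_fine + 1); I_pi[0] = i0
for k in range(n_fine):
    I = I_pi[k]
    I_pi[k + 1] = I + dt * (drift(I) + sigma * I * (N - I) * dBpi_dt[k])

# Euler-Maruyama for the Ito form of the Stratonovich limit, driven by
# the true increments of the same Brownian path.
I_st = np.empty(n_fine + 1); I_st[0] = i0
for k in range(n_fine):
    I = I_st[k]
    corr = 0.5 * sigma**2 * I * (N - I) * (N - 2.0 * I)  # Ito-Stratonovich term
    I_st[k + 1] = I + dt * (drift(I) + corr) + sigma * I * (N - I) * dB[k]

print(abs(I_pi[-1] - I_st[-1]))      # small when the mesh of pi is fine
\end{verbatim}
Refining the partition (increasing \texttt{n\_coarse}) makes the two trajectories agree pathwise, whereas dropping the correction term \texttt{corr} yields instead an approximation of the It\^o model (\ref{SDE Mao}).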
Now, substituting $\{B(t)\}_{t\in [0,T]}$ with $\{B^{\pi}(t)\}_{t\in [0,T]}$ in (\ref{beta perturbation 2}) we get a well-defined transformation
\begin{align*}
\beta \mapsto \beta +\sigma \frac{dB^{\pi}(t)}{dt},
\end{align*}
which in connection with (\ref{SIS one 1}) leads to the random ordinary differential equation
\begin{align*}
\frac{dI^{\pi}(t)}{dt}=[\beta I^{\pi}(t)(N-I^{\pi}(t))-(\mu+\gamma) I^{\pi}(t)]+\sigma I^{\pi}(t)(N-I^{\pi}(t))\frac{dB^{\pi}(t)}{dt}.
\end{align*}
According to the celebrated Wong-Zakai theorem \cite{WZ}, the solution of the previous equation converges, as the mesh of $\pi$ tends to zero, to the solution $\{\mathtt{I}(t)\}_{t\in [0,T]}$ of the Stratonovich-type stochastic differential equation
\begin{align}\label{Mao Stratonovich}
d\mathtt{I}(t)=[\beta \mathtt{I}(t)(N-\mathtt{I}(t))-(\mu+\gamma) \mathtt{I}(t)]dt+\sigma \mathtt{I}(t)(N-\mathtt{I}(t))\circ dB(t),
\end{align}
which is equivalent to the It\^o-type equation
\begin{align}\label{Mao Stratonovich 2}
d\mathtt{I}(t)=&\left[\beta \mathtt{I}(t)(N-\mathtt{I}(t))-(\mu+\gamma) \mathtt{I}(t)+\frac{\sigma^2}{2}\mathtt{I}(t)(N-\mathtt{I}(t))(N-2\mathtt{I}(t))\right]dt\nonumber\\
&+\sigma \mathtt{I}(t)(N-\mathtt{I}(t))dB(t)
\end{align}
(see e.g. \cite{KS} for the definition of Stratonovich integral and It\^o-Stratonovich correction term); indeed, for the diffusion coefficient $g(x)=\sigma x(N-x)$ the correction term $\frac{1}{2}g(x)g'(x)$ equals $\frac{\sigma^2}{2}x(N-x)(N-2x)$. Therefore, the model equation obtained via this procedure differs from the one proposed in \cite{Mao 2011} by the presence in the drift coefficient of the additional term
\begin{align*}
\frac{\sigma^2}{2}\mathtt{I}(t)(N-\mathtt{I}(t))(N-2\mathtt{I}(t)).
\end{align*}
Surprisingly, the \emph{stochastic reproduction number} for the corrected model (\ref{Mao Stratonovich 2}) coincides with $R_0=\frac{\beta N}{\mu+\gamma}$. In other words, the stochastic perturbation of $\beta$ does not affect the basic reproduction number.
\begin{theorem}\label{main theorem}
Equation (\ref{Mao Stratonovich 2}) possesses a unique global strong solution $\{\mathtt{I}(t)\}_{t\geq 0}$ which lives in the interval $]0,N[$ for all $t\geq 0$ with probability one. This solution can be explicitly represented as
\begin{align*}
\mathtt{I}(t)=\frac{i_0\mathcal{E}(t)}{1+\frac{i_0}{N}(\mathcal{E}(t)-1)+i_0\frac{\mu+\gamma}{N}\int_0^t\mathcal{E}(s)ds},\quad t\geq 0,
\end{align*}
where
\begin{align*}
\mathcal{E}(t):=e^{(\beta N-(\mu+\gamma)) t+N \sigma B(t)}.
\end{align*}
Moreover,
\begin{itemize}
\item if $R_0<1$, then $\mathtt{I}(t)$ tends to zero, as $t$ tends to infinity, almost surely;
\item if $R_0\geq 1$, then $\mathtt{I}(t)$ is recurrent on $]0,N[$.
\end{itemize}
\end{theorem}
The paper is organized as follows: in Section 2 we develop a general framework to study existence, uniqueness and sufficient conditions for extinction and persistence for a large class of equations which encompasses the model equation (\ref{SDE Mao}) and its revised version (\ref{Mao Stratonovich 2}); Section 3 contains the proof of Theorem \ref{main theorem}.
\section{A general approach}\label{general method}
The aim of the present section is to propose a general method for studying existence and uniqueness of global strong solutions, as well as conditions for their extinction or persistence, for a large class of equations, which includes (\ref{SDE Mao}) and (\ref{Mao Stratonovich 2}) as particular cases.
Namely, we consider stochastic differential equations of the form \begin{align}\label{SDE intro} \begin{cases} dX(t)=[f(X(t))-h(X(t))]dt+\sum_{i=1}^mg_i(X(t))dB_i(t),& t>0;\\ X(0)=x_0\in ]0,N[,& \end{cases} \end{align} where the coefficients satisfy only those fairly general assumptions needed to derive the desired properties (see Theorem \ref{general existence theorem} below for the detailed assumptions). Our method allows for a great flexibility in the choice of the coefficients while preserving the essential features of (\ref{SDE Mao}) and (\ref{Mao Stratonovich 2}). In particular, we allow the diffusion coefficients to vanish on arbitrary intervals, thus ruling out the techniques based on Feller's test for explosions (see for instance Chapter 5 in \cite{KS}). Also the method based on the Lyapunov function, which is successfully applied in \cite{Mao 2011} doesn't seem to be appropriate for the great generality considered here. Our approach relies instead on two general theorems of the theory of stochastic differential equations, which we now restate for the readers' convenience at the beginning of the next section (see Theorem \ref{boundary} and Theorem \ref{comparison} below). Let $(\Omega,\mathcal{F},\mathbb{P})$ be a complete probability space endowed with an $m$-dimensional standard Brownian motion $\{(B_1(t),...,B_m(t))\}_{t\geq 0}$ and denote by $\{\mathcal{F}_t^B\}_{t\geq 0}$ its augmented natural filtration. In the sequel we will be working with one dimensional It\^o's type stochastic differential equations driven by the $m$-dimensional Brownian motion $\{(B_1(t),...,B_m(t))\}_{t\geq 0}$. \begin{theorem}\label{boundary} Let $\{X(t)\}_{t\geq 0}$ be the unique global strong solution of the stochastic differential equation \begin{align*} \begin{cases} dX(t)=\mu(X(t))dt+\sum_{i=1}^m\sigma_i(X(t))dB_i(t),& t>0;\\ X(0)=x_0\in \mathbb{R},& \end{cases} \end{align*} where the coefficients $\mu,\sigma_1,...,\sigma_m:\mathbb{R}\to\mathbb{R}$ are assumed to be globally Lipschitz continuous. If we set \begin{align*} \Lambda:=\{x\in\mathbb{R}:\mu(x)=\sigma_1(x)=\cdot\cdot\cdot=\sigma_m(x)=0\} \end{align*} and assume $x_0\notin\Lambda$, then \begin{align*} \mathbb{P}\left(X(t)\notin\Lambda,\mbox{ for all }t\geq 0\right)=1. \end{align*} \end{theorem} \begin{proof} See the theorem in \cite{L}. \end{proof} \begin{theorem}\label{comparison} Let $\{X(t)\}_{t\geq 0}$ be the unique global strong solution of the stochastic differential equation \begin{align*} \begin{cases} dX(t)=\mu_1(X(t))dt+\sum_{i=1}^m\sigma_i(X(t))dB_i(t),& t>0;\\ X(0)=z\in \mathbb{R},& \end{cases} \end{align*} and $\{Y(t)\}_{t\geq 0}$ be the unique global strong solution of the stochastic differential equation \begin{align*} \begin{cases} dY(t)=\mu_2(Y(t))dt+\sum_{i=1}^m\sigma_i(Y(t))dB_i(t),& t>0;\\ Y(0)=z\in \mathbb{R},& \end{cases} \end{align*} where the coefficients $\mu_1,\mu_2,\sigma_1,...,\sigma_m:\mathbb{R}\to\mathbb{R}$ are assumed to be globally Lipschitz continuous. If $\mu_1(z)\leq\mu_2(z)$, for all $z\in\mathbb{R}$, then \begin{align*} \mathbb{P}\left(X(t)\leq Y(t),\mbox{ for all }t\geq 0\right)=1. \end{align*} \end{theorem} \begin{proof} See Proposition 2.18, Chapter 5 in \cite{KS}, where the proof is given for $m=1$. The extension to several Brownian motions is immediate. See also Theorem 1.1, Chapter VI in \cite{Ikeda Watanabe}. \end{proof} \subsection{Existence, uniqueness and support} We are now ready to state our existence and uniqueness result. 
\begin{theorem}\label{general existence theorem} For $i\in\{1,...,m\}$, let $f,g_i,h:\mathbb{R}\to\mathbb{R}$ be locally Lipschitz-continuous functions such that \begin{enumerate} \item $f(0)=g_i(0)=0$ and $f(N)=g_i(N)=0$, for some $N>0$; \item $h(0)=0$ and $h(x)>0$, when $x>0$. \end{enumerate} Then, the stochastic differential equation \begin{align} \begin{cases}\label{SDE many BM} dX(t)=[f(X(t))-h(X(t))]dt+\sum_{i=1}^mg_i(X(t))dB_i(t),& t>0;\\ X(0)=x_0\in ]0,N[,& \end{cases} \end{align} admits a unique global strong solution, which satisfies $\mathbb{P}(0<X(t)<N)=1$, for all $t\geq 0$. \end{theorem} \begin{remark} It is immediate to verify that equations (\ref{SDE Mao}) and (\ref{Mao Stratonovich 2}) fulfill the assumptions of Theorem \ref{general existence theorem}. \end{remark} \begin{proof} The local Lipschitz-continuity of the coefficients entails pathwise uniqueness for equation (\ref{SDE many BM}), see for instance Theorem 2.5, Chapter 5 in \cite{KS}. Now, we consider the modified equation \begin{align} \begin{cases}\label{SDE 3} d\mathcal{X}(t)=[\bar{f}(\mathcal{X}(t))-\hat{h}(\mathcal{X}(t))]dt+\sum_{i=1}^m\bar{g}_i(\mathcal{X}(t))dB_i(t),& t>0;\\ \mathcal{X}(0)=x_0\in ]0,N[,& \end{cases} \end{align} where \begin{align*} \bar{f}(x)= \begin{cases} f(x),&\mbox{if $x\in [0,N]$};\\ 0,&\mbox{if $x\notin [0,N]$}, \end{cases} \quad \mbox{ and }\quad \bar{g}_i(x)= \begin{cases} g_i(x),&\mbox{if $x\in [0,N]$};\\ 0,&\mbox{if $x\notin [0,N]$}, \end{cases} \end{align*} while \begin{align*} \hat{h}(x)= \begin{cases} 0,&\mbox{if $x<0$};\\ h(x),&\mbox{if $x\in [0,N]$};\\ h(N),&\mbox{if $x>N$}. \end{cases} \end{align*} The coefficients of equation (\ref{SDE 3}) are bounded and globally Lipschitz-continuous; this implies the existence of a unique global strong solution $\{\mathcal{X}(t)\}_{t\geq 0}$ for (\ref{SDE 3}). Moreover, the drift and diffusion coefficients vanish at $x=0$. Therefore, according to Theorem \ref{boundary}, the solution never visits the origin, unless it starts from there. Since $\mathcal{X}(0)=x_0\in ]0,N[$, we deduce that $\mathcal{X}(t)> 0$, for all $t\geq 0$, almost surely. Recalling the assumption $h(x)>0$ for $x>0$, we can rewrite equation (\ref{SDE 3}) as \begin{align} \begin{cases}\label{SDE 4} d\mathcal{X}(t)=[\bar{f}(\mathcal{X}(t))-\hat{h}(\mathcal{X}(t))^+]dt+\sum_{i=1}^m\bar{g}_i(\mathcal{X}(t))dB_i(t),& t>0;\\ \mathcal{X}(0)=x_0\in ]0,N[,& \end{cases} \end{align} where $x^+:=\max\{x,0\}$. We now compare the solution of the previous equation with the one of \begin{align} \begin{cases}\label{SDE 5} d\mathcal{Y}(t)=\bar{f}(\mathcal{Y}(t))dt+\sum_{i=1}^m\bar{g}_i(\mathcal{Y}(t))dB_i(t),& t>0;\\ \mathcal{Y}(0)=x_0\in ]0,N[,& \end{cases} \end{align} which also possesses a unique global strong solution $\{\mathcal{Y}(t)\}_{t\geq 0}$. Systems (\ref{SDE 4}) and (\ref{SDE 5}) have the same initial condition and diffusion coefficients; moreover, the drift in (\ref{SDE 5}) is greater than the drift in (\ref{SDE 4}). By Theorem \ref{comparison} we conclude that \begin{align*} \mathcal{X}(t)\leq \mathcal{Y}(t),\quad\mbox{ for all $t\geq 0$}, \end{align*} almost surely. Moreover, both the drift and diffusion coefficients in (\ref{SDE 5}) vanish at $x=N$. Therefore, invoking once more Theorem \ref{boundary}, the solution never visits $N$, unless it starts from there. Since $\mathcal{Y}(0)=x_0\in ]0,N[$, we deduce that $\mathcal{Y}(t)< N$, for all $t\geq 0$, almost surely. 
Combining all these facts, we conclude that
\begin{align*}
0<\mathcal{X}(t)< N,\quad\mbox{ for all $t\geq 0$},
\end{align*}
almost surely. This in turn implies
\begin{align*}
\bar{f}(\mathcal{X}(t))=f(\mathcal{X}(t)),\quad \bar{g}_i(\mathcal{X}(t))=g_i(\mathcal{X}(t)),\quad \hat{h}(\mathcal{X}(t))=h(\mathcal{X}(t)),
\end{align*}
and that $\{\mathcal{X}(t)\}_{t\geq 0}$ solves equation
\begin{align*}
\begin{cases}
d\mathcal{X}(t)=[f(\mathcal{X}(t))-h(\mathcal{X}(t))]dt+\sum_{i=1}^mg_i(\mathcal{X}(t))dB_i(t),& t>0;\\
\mathcal{X}(0)=x_0\in ]0,N[,&
\end{cases}
\end{align*}
which coincides with (\ref{SDE many BM}). The uniqueness of the solution completes the proof.
\end{proof}
\subsection{Extinction}
We now investigate the asymptotic behaviour of the solution of (\ref{SDE many BM}); here we are interested in sufficient conditions for extinction.
\begin{theorem}\label{theorem extintion}
Under the same assumptions as in Theorem \ref{general existence theorem}, suppose in addition that
\begin{align}\label{sup}
\sup_{x\in ]0,N[}\left\{\frac{f(x)-h(x)}{x}-\frac{1}{2}\sum_{i=1}^m\frac{g^2_i(x)}{x^2}\right\}<0.
\end{align}
Then the solution $\{X(t)\}_{t\geq 0}$ of equation (\ref{SDE many BM}) tends to zero exponentially, as $t$ tends to infinity, almost surely. More precisely,
\begin{align*}
\limsup_{t\to+\infty}\frac{\ln(X(t))}{t}\leq\sup_{x\in ]0,N[}\left\{\frac{f(x)-h(x)}{x}-\frac{1}{2}\sum_{i=1}^m\frac{g^2_i(x)}{x^2}\right\}<0, \quad\mbox{ almost surely}.
\end{align*}
\end{theorem}
\begin{proof}
We follow the proof of Theorem 4.1 in \cite{Mao 2011}. First of all, we observe that the local Lipschitz-continuity of $f$ implies the existence of a constant $L_N$ such that
\begin{align*}
|f(x)-f(0)|\leq L_N|x-0|,\quad \mbox{ for all $x\in [0,N]$}.
\end{align*}
In particular, using the equality $f(0)=0$, we can rewrite the previous condition as
\begin{align*}
\left|\frac{f(x)}{x}\right|\leq L_N,\quad \mbox{ for all $x\in ]0,N]$}.
\end{align*}
Since the same reasoning applies also to $h$ and $g_i$, for $i\in\{1,...,m\}$, we deduce that the supremum in (\ref{sup}) is always finite. \\
Now, let $\{X(t)\}_{t\geq 0}$ be the unique global strong solution of equation (\ref{SDE many BM}). An application of the It\^o formula gives
\begin{align}\label{ito}
\ln(X(t))=&\ln(x_0)+\int_0^t\left[\frac{f(X(s))-h(X(s))}{X(s)}-\frac{1}{2}\sum_{i=1}^m\frac{g^2_i(X(s))}{X(s)^2}\right]ds\\
&+\sum_{i=1}^m\int_0^t\frac{g_i(X(s))}{X(s)}dB_i(s)\nonumber.
\end{align}
Note that the boundedness of the function $x\mapsto \frac{g_i(x)}{x}$ on $]0,N[$ mentioned above entails that the stochastic process
\begin{align*}
t\mapsto \sum_{i=1}^m\int_0^t\frac{g_i(X(s))}{X(s)}dB_i(s),\quad t\geq 0,
\end{align*}
is an $(\{\mathcal{F}_t^B\}_{t\geq 0},\mathbb{P})$-martingale. Therefore, from the strong law of large numbers for martingales (see e.g. Theorem 3.4, Chapter 1 in \cite{Mao book}) we conclude that
\begin{align*}
\lim_{t\to +\infty}\sum_{i=1}^m\frac{1}{t}\int_0^t\frac{g_i(X(s))}{X(s)}dB_i(s)=0,
\end{align*}
almost surely. This fact, combined with (\ref{ito}) gives
\begin{align*}
\limsup_{t\to+\infty}\frac{\ln(X(t))}{t}\leq&\limsup_{t\to+\infty}\frac{1}{t}\int_0^t\left[\frac{f(X(s))-h(X(s))}{X(s)}-\frac{1}{2}\sum_{i=1}^m\frac{g^2_i(X(s))}{X(s)^2}\right]ds\\
\leq&\sup_{x\in ]0,N[}\left\{\frac{f(x)-h(x)}{x}-\frac{1}{2}\sum_{i=1}^m\frac{g^2_i(x)}{x^2}\right\}<0,
\end{align*}
almost surely. The proof is complete.
\end{proof}
\subsection{Persistence}
We now search for conditions ensuring the persistence of the solution $\{X(t)\}_{t\geq 0}$ of (\ref{SDE many BM}).
\begin{theorem}\label{persistence}
Under the same assumptions as in Theorem \ref{general existence theorem}, if the inequality
\begin{align}\label{sup 2}
\sup_{x\in ]0,N[}\left\{\frac{f(x)-h(x)}{x}-\frac{1}{2}\sum_{i=1}^m\frac{g^2_i(x)}{x^2}\right\}>0
\end{align}
holds and moreover the function
\begin{align}\label{function}
x\mapsto\frac{f(x)-h(x)}{x}-\frac{1}{2}\sum_{i=1}^m\frac{g^2_i(x)}{x^2}
\end{align}
is strictly decreasing on the interval $]0,N[$, then the solution $\{X(t)\}_{t\geq 0}$ of the stochastic differential equation (\ref{SDE many BM}) verifies
\begin{align}\label{thesis}
\limsup_{t\to+\infty}X(t)\geq\xi\quad\mbox{ and }\quad\liminf_{t\to+\infty}X(t)\leq\xi,
\end{align}
almost surely. Here, $\xi$ is the unique zero of the function (\ref{function}) in the interval $[0,N]$.
\end{theorem}
\begin{proof}
We follow the proof of Theorem 5.1 in \cite{Mao 2011}. To ease the notation we set
\begin{align*}
\eta(x):=\frac{f(x)-h(x)}{x}-\frac{1}{2}\sum_{i=1}^m\frac{g^2_i(x)}{x^2},\quad x\in [0,N].
\end{align*}
First of all, we note that $\eta(N)=-\frac{h(N)}{N}<0$; this gives, in combination with (\ref{sup 2}) and the strict monotonicity of $\eta$, the existence and uniqueness of $\xi$. Now, assume the first inequality in (\ref{thesis}) to be false. This implies the existence of $\varepsilon>0$ such that
\begin{align}\label{A}
\mathbb{P}\left(\limsup_{t\to+\infty}X(t)\leq\xi-2\varepsilon\right)>\varepsilon.
\end{align}
In particular, for any $\omega\in A:=\{\limsup_{t\to+\infty}X(t)\leq\xi-2\varepsilon\}$, there exists $T(\omega)$ such that
\begin{align*}
X(t,\omega)\leq \xi-\varepsilon,\quad\mbox{ for all }t\geq T(\omega),
\end{align*}
which implies
\begin{align*}
\eta(X(t,\omega))\geq \eta(\xi-\varepsilon)>0,\quad\mbox{ for all }\omega\in A\mbox{ and } t\geq T(\omega).
\end{align*}
Therefore, for $\omega\in A$ and $t> T(\omega)$ we can write
\begin{align*}
\frac{\ln(X(t))}{t}=&\frac{\ln(x_0)}{t}+\frac{1}{t}\int_0^t\eta(X(s))ds+\sum_{i=1}^m\frac{1}{t}\int_0^t\frac{g_i(X(s))}{X(s)}dB_i(s)\\
=&\frac{\ln(x_0)}{t}+\frac{1}{t}\int_0^{T(\omega)}\eta(X(s))ds+\frac{1}{t}\int_{T(\omega)}^t\eta(X(s))ds\\
&+\sum_{i=1}^m\frac{1}{t}\int_0^t\frac{g_i(X(s))}{X(s)}dB_i(s)\\
\geq&\frac{\ln(x_0)}{t}+\frac{1}{t}\int_0^{T(\omega)}\eta(X(s))ds+\frac{t-T(\omega)}{t}\eta(\xi-\varepsilon)\\
&+\sum_{i=1}^m\frac{1}{t}\int_0^t\frac{g_i(X(s))}{X(s)}dB_i(s).
\end{align*}
Hence, recalling that the strong law of large numbers for martingales gives
\begin{align*}
\lim_{t\to +\infty}\sum_{i=1}^m\frac{1}{t}\int_0^t\frac{g_i(X(s))}{X(s)}dB_i(s)=0\quad\mbox{almost surely},
\end{align*}
we conclude that
\begin{align*}
\liminf_{t\to +\infty}\frac{\ln(X(t))}{t}\geq\eta(\xi-\varepsilon)>0,\quad\mbox{on the set $A$},
\end{align*}
which implies
\begin{align*}
\lim_{t\to+\infty}X(t)=+\infty,\quad\mbox{on the set $A$}.
\end{align*}
This contradicts (\ref{A}) and hence proves the first inequality in (\ref{thesis}). \\
The second inequality in (\ref{thesis}) is proven similarly; if it did not hold, then
\begin{align}\label{B}
\mathbb{P}\left(\liminf_{t\to+\infty}X(t)\geq\xi+2\varepsilon\right)>\varepsilon
\end{align}
for some positive $\varepsilon$.
In particular, for any $\omega\in B:=\{\liminf_{t\to+\infty}X(t)\geq\xi+2\varepsilon\}$, there exists $S(\omega)$ such that
\begin{align*}
X(t,\omega)\geq \xi+\varepsilon,\quad\mbox{ for all }t\geq S(\omega),
\end{align*}
which implies
\begin{align*}
\eta(X(t,\omega))\leq \eta(\xi+\varepsilon)<0,\quad\mbox{ for all }t\geq S(\omega).
\end{align*}
Therefore, for $\omega\in B$ and $t> S(\omega)$ we can write
\begin{align*}
\frac{\ln(X(t))}{t}=&\frac{\ln(x_0)}{t}+\frac{1}{t}\int_0^t\eta(X(s))ds+\sum_{i=1}^m\frac{1}{t}\int_0^t\frac{g_i(X(s))}{X(s)}dB_i(s)\\
=&\frac{\ln(x_0)}{t}+\frac{1}{t}\int_0^{S(\omega)}\eta(X(s))ds+\frac{1}{t}\int_{S(\omega)}^t\eta(X(s))ds\\
&+\sum_{i=1}^m\frac{1}{t}\int_0^t\frac{g_i(X(s))}{X(s)}dB_i(s)\\
\leq&\frac{\ln(x_0)}{t}+\frac{1}{t}\int_0^{S(\omega)}\eta(X(s))ds+\frac{t-S(\omega)}{t}\eta(\xi+\varepsilon)\\
&+\sum_{i=1}^m\frac{1}{t}\int_0^t\frac{g_i(X(s))}{X(s)}dB_i(s).
\end{align*}
Therefore,
\begin{align*}
\limsup_{t\to +\infty}\frac{\ln(X(t))}{t}\leq\eta(\xi+\varepsilon)<0,\quad\mbox{on the set $B$},
\end{align*}
which implies
\begin{align*}
\lim_{t\to+\infty}X(t)=0,\quad\mbox{on the set $B$}.
\end{align*}
This contradicts (\ref{B}) and hence proves the second inequality in (\ref{thesis}).
\end{proof}
\section{Proof of Theorem \ref{main theorem}}
We are now ready to prove our main theorem.
\subsection{Existence, uniqueness, extinction and persistence}
It is immediate to verify that the SDE
\begin{align}\label{Mao Stratonovich 3}
d\mathtt{I}(t)=&\left[\beta \mathtt{I}(t)(N-\mathtt{I}(t))-(\mu+\gamma) \mathtt{I}(t)+\frac{\sigma^2}{2}\mathtt{I}(t)(N-\mathtt{I}(t))(N-2\mathtt{I}(t))\right]dt\nonumber\\
&+\sigma \mathtt{I}(t)(N-\mathtt{I}(t))dB(t),
\end{align}
with initial condition $\mathtt{I}(0)=i_0\in ]0,N[$ fulfills the assumptions of Theorem \ref{general existence theorem} if we set $m=1$,
\begin{align*}
f(x):=\beta x(N-x)+\frac{\sigma^2}{2}x(N-x)(N-2x),\quad h(x)=(\mu+\gamma)x,\quad g(x):=\sigma x(N-x).
\end{align*}
Therefore, equation (\ref{Mao Stratonovich 3}) possesses a unique global strong solution which lives in the interval $]0,N[$ for all $t\geq 0$ with probability one.\\
Let us now observe that
\begin{align*}
\eta(x)&:=\frac{f(x)-h(x)}{x}-\frac{1}{2}\frac{g^2(x)}{x^2}\\
&=\beta (N-x)+\frac{\sigma^2}{2}(N-x)(N-2x)-(\mu+\gamma)-\frac{1}{2}\sigma^2 (N-x)^2\\
&=\left(\frac{\sigma^2}{2}x-\beta\right)(x-N)-(\mu+\gamma),
\end{align*}
and hence
\begin{align*}
\eta(0)=\beta N-(\mu+\gamma)\quad\mbox{ and }\quad \eta(N)=-(\mu+\gamma).
\end{align*}
This gives:
\begin{itemize}
\item if $\beta N-(\mu+\gamma)<0$, that is $\frac{\beta N}{\mu+\gamma}<1$, then the assumptions of Theorem \ref{theorem extintion} are fulfilled ($\eta$ is a convex second order polynomial which takes negative values at both endpoints of $[0,N]$); therefore, $\mathtt{I}(t)$ will become extinct as $t$ tends to infinity;
\item if $\beta N-(\mu+\gamma)>0$, that is $\frac{\beta N}{\mu+\gamma}>1$, then the assumptions of Theorem \ref{persistence} are fulfilled ($\eta$ is a convex second order polynomial which takes a positive value at $0$ and a negative value at $N$); therefore, $\mathtt{I}(t)$ will be persistent as $t$ tends to infinity.
\end{itemize} \subsection{Explicit representation of the solution} We observe that the solution of the deterministic equation \begin{align}\label{a} \frac{dI(t)}{dt}=\beta(t) I(t)(N-I(t))-(\mu+\gamma) I(t),\quad I(0)=i_0\in ]0,N[, \end{align} where $t\mapsto\beta(t)$ is now a continuous function of $t$, can be written as \begin{align}\label{b} I(t)=\frac{i_0e^{\int_0^tN\beta(s)ds-(\mu+\gamma)t}}{1+\int_0^t\beta(s)i_0e^{\int_0^sN\beta(r)dr-(\mu+\gamma)s}ds},\quad t\geq 0. \end{align} If we set $\beta(t):=\beta+\sigma\dot{B}^{\pi}(t)$, where $\dot{B}^{\pi}(t)$ stands for $\frac{d}{dt}B^{\pi}(t)$, then equation (\ref{a}) and formula (\ref{b}) become respectively \begin{align}\label{a1} \frac{dI^{\pi}(t)}{dt}=\beta I^{\pi}(t)(N-I^{\pi}(t))-(\mu+\gamma) I^{\pi}(t)+\sigma I^{\pi}(t)(N-I^{\pi}(t))\dot{B}^{\pi}(t), \end{align} with initial condition $I^{\pi}(0)=i_0\in ]0,N[$, and \begin{align}\label{b1} I^{\pi}(t)=\frac{i_0e^{\int_0^tN\left(\beta+\sigma\dot{B}^{\pi}(s)\right)ds-(\mu+\gamma)t}}{1+\int_0^t(\beta+\sigma\dot{B}^{\pi}(s))i_0e^{\int_0^sN(\beta+\sigma\dot{B}^{\pi}(r))dr-(\mu+\gamma)s}ds}. \end{align} We recall that according to the Wong-Zakai theorem the stochastic process $\{I^{\pi}(t)\}_{t\geq 0}$ converges, as the mesh of the partition $\pi$ tends to zero, to the unique strong solution of the Stratonovich SDE \begin{align*} d\mathtt{I}(t)=[\beta \mathtt{I}(t)(N-\mathtt{I}(t))-(\mu+\gamma) \mathtt{I}(t)]dt+\mathtt{I}(t)\sigma (N-\mathtt{I}(t))\circ dB(t),\quad \mathtt{I}(0)=i_0\in ]0,N[, \end{align*} which is equivalent to the It\^o-type equation \begin{align}\label{c} d\mathtt{I}(t)=&\left[\beta \mathtt{I}(t)(N-\mathtt{I}(t))-(\mu+\gamma) \mathtt{I}(t)+\frac{\sigma^2}{2}\mathtt{I}(t)(N-\mathtt{I}(t))(N-2\mathtt{I}(t))\right]dt\nonumber\\ &+\sigma \mathtt{I}(t)(N-\mathtt{I}(t))dB(t),\quad \mathtt{I}(0)=i_0\in ]0,N[. \end{align} We now simplify the expression in (\ref{b1}) and compute its limit as the mesh of the partition $\pi$ tends to zero: this will give us an explicit representation for the solution of (\ref{c}). To ease the notation we set \begin{align*} \mathcal{E}^{\pi}(t):=e^{\int_0^tN\left(\beta+\sigma\dot{B}^{\pi}(s)\right)ds-(\mu+\gamma)t}=e^{N\beta t+N \sigma B^{\pi}(t)-(\mu+\gamma)t}=e^{\delta t+N \sigma B^{\pi}(t)}, \end{align*} where $\delta:=N\beta-(\mu+\gamma)$, and rewrite (\ref{b1}) as \begin{align}\label{d} I^{\pi}(t)&=\frac{i_0\mathcal{E}^{\pi}(t)}{1+i_0\int_0^t(\beta+\sigma\dot{B}^{\pi}(s))\mathcal{E}^{\pi}(s)ds}\nonumber\\ &=\frac{i_0\mathcal{E}^{\pi}(t)}{1+i_0\beta\int_0^t\mathcal{E}^{\pi}(s)ds+ i_0\sigma\int_0^t\dot{B}^{\pi}(s)\mathcal{E}^{\pi}(s)ds}. \end{align} Note that $\delta\geq 0$ if and only if $R_0=\frac{\beta N}{\mu+\gamma}\geq 1$. Now, consider the second integral in the denominator above: an integration by parts gives \begin{align*} \int_0^t\dot{B}^{\pi}(s)\mathcal{E}^{\pi}(s)ds&=\int_0^t\dot{B}^{\pi}(s)e^{\delta s+N \sigma B^{\pi}(s)}ds\\ &=\int_0^t\dot{B}^{\pi}(s)e^{N\sigma B^{\pi}(s)}e^{\delta s}ds\\ &=\frac{1}{N\sigma}\left(e^{N \sigma B^{\pi}(t)}e^{\delta t}-1\right)-\frac{\delta}{N\sigma}\int_0^te^{N\sigma B^{\pi}(s)}e^{\delta s}ds\\ &=\frac{1}{N\sigma}(\mathcal{E}^{\pi}(t)-1)-\frac{\delta}{N\sigma}\int_0^t\mathcal{E}^{\pi}(s)ds. 
\end{align*} Therefore, inserting the last expression in (\ref{d}) we get \begin{align*} I^{\pi}(t)&=\frac{i_0\mathcal{E}^{\pi}(t)}{1+i_0\beta\int_0^t\mathcal{E}^{\pi}(s)ds+ i_0\sigma\int_0^t\dot{B}^{\pi}(s)\mathcal{E}^{\pi}(s)ds}\\ &=\frac{i_0\mathcal{E}^{\pi}(t)}{1+i_0\beta\int_0^t\mathcal{E}^{\pi}(s)ds+\frac{i_0}{N}(\mathcal{E}^{\pi}(t)-1)-\frac{i_0\delta}{N}\int_0^t\mathcal{E}^{\pi}(s)ds}\\ &=\frac{i_0\mathcal{E}^{\pi}(t)}{1+\frac{i_0}{N}(\mathcal{E}^{\pi}(t)-1)+i_0\left(\beta-\frac{\delta}{N}\right)\int_0^t\mathcal{E}^{\pi}(s)ds}\\ &=\frac{i_0\mathcal{E}^{\pi}(t)}{1+\frac{i_0}{N}(\mathcal{E}^{\pi}(t)-1)+i_0\frac{\mu+\gamma}{N}\int_0^t\mathcal{E}^{\pi}(s)ds}. \end{align*} We can now let the mesh of the partition $\pi$ tend to zero and get \begin{align*} \mathtt{I}(t)&=\lim_{|\pi|\to 0}I^{\pi}(t)=\lim_{|\pi|\to 0}\frac{i_0\mathcal{E}^{\pi}(t)}{1+\frac{i_0}{N}(\mathcal{E}^{\pi}(t)-1)+i_0\frac{\mu+\gamma}{N}\int_0^t\mathcal{E}^{\pi}(s)ds}\\ &=\frac{i_0\mathcal{E}(t)}{1+\frac{i_0}{N}(\mathcal{E}(t)-1)+i_0\frac{\mu+\gamma}{N}\int_0^t\mathcal{E}(s)ds}, \end{align*} with \begin{align*} \mathcal{E}(t):=e^{\delta t+N \sigma B(t)}. \end{align*} \subsection{Recurrence} To prove the recurrence of $\mathtt{I}(t)$ in the case $R_0\geq 1$, we need to exploit the specific structure of equation (\ref{Mao Stratonovich 3}). In particular, we will follow the approach utilized in \cite{Xu} which is based on the Feller's test for explosion (see for instance Chapter 5.5 C in \cite{KS}).\\ Let $\varphi(x):=\ln\left(\frac{x}{N-x}\right)$ and apply the It\^o formula to $\varphi(\mathtt{I}(t))$; this gives \begin{align*} d\varphi(\mathtt{I}(t))&=\left(\beta N-(\mu+\gamma)-(\mu+\gamma)\frac{\mathtt{I}(t)}{N-\mathtt{I}(t)}\right)dt+\sigma NdB(t)\\ &=\left(\beta N-(\mu+\gamma)-(\mu+\gamma)e^{\varphi(\mathtt{I}(t))}\right)dt+\sigma NdB(t), \end{align*} and, setting $\mathtt{J}(t):=\varphi(\mathtt{I}(t))$, we can write \begin{align*} d\mathtt{J}(t)=\left(\beta N-(\mu+\gamma)-(\mu+\gamma)e^{\mathtt{J}(t)}\right)dt+\sigma NdB(t). \end{align*} Now, the scale function for this process is \begin{align*} \psi(x)=\int_0^x\theta(y)dy \end{align*} where \begin{align*} \theta(y)&=\exp\left\{-\frac{2}{\sigma^2N^2}\int_0^y\beta N-(\mu+\gamma)-(\mu+\gamma)e^{z}dz\right\}\\ &=\exp\left\{-\frac{2(\beta N-(\mu+\gamma))}{\sigma^2N^2}y+\frac{2(\mu+\gamma)}{\sigma^2N^2}(e^y-1)\right\}. \end{align*} It is clear that $\psi(+\infty)=+\infty$; moreover, for $\beta N-(\mu+\gamma)\geq 0$, that means $R_0\geq 1$, we get $\psi(-\infty)=-\infty$. These two facts together with Proposition 5.22, Chapter 5 in \cite{KS} imply that $\{\mathtt{J}(t)\}_{t\geq 0}$ is recurrent on $]-\infty, +\infty[$ and hence that $\{\mathtt{I}(t)\}_{t\geq 0}$ is recurrent on $]0,N[$. \end{document}
\begin{document}
\begin{abstract}
This paper presents a new hybridizable discontinuous Galerkin (HDG) method for linear elasticity on general polyhedral meshes, based on a strong symmetric stress formulation. The key feature of this new HDG method is the use of a special form of the numerical trace of the stresses, which makes the error analysis different from the projection-based error analyses used for most other HDG methods. For arbitrary polyhedral elements, we approximate the stress by using polynomials of degree $k\ge 1$ and the displacement by using polynomials of degree $k+1$. In contrast, to approximate the numerical trace of the displacement on the faces, we use polynomials of degree $k$ only. This allows for a very efficient implementation of the method, since the numerical trace of the displacement is the only globally-coupled unknown, but does not degrade the convergence properties of the method. Indeed, we prove optimal orders of convergence for both the stresses and displacements on the elements. In the almost incompressible case, we show that the error of the stress is also optimal in the standard $L^2$-norm. These optimal results are possible thanks to a special superconvergence property of the numerical traces of the displacement, and thanks to the use of a crucial elementwise Korn's inequality. Several numerical results are presented at the end to support our theoretical findings.
\end{abstract}
\keywords{hybridizable; discontinuous Galerkin; superconvergence; linear elasticity}
\subjclass[2000]{65N30, 65L12}
\title{An HDG method for linear elasticity with strong symmetric stresses}
\section{Introduction}
In this paper, we introduce a new hybridizable discontinuous Galerkin (HDG) method for the system of linear elasticity
\begin{subequations}\label{elasticity_equation}
\begin{alignat}{2}
\label{elasticity_1} \mathcal{A} \underline{\boldsymbol{\sigma}} - \underline{\boldsymbol{\epsilon}}(\boldsymbol{u}) & = 0 && \text{in $\Omega\subset\mathbb{R}^3$,}\\
\label{elasticity_2} \nabla \cdot \underline{\boldsymbol{\sigma}} & = \boldsymbol{f} \quad && \text{in $\Omega$,}\\
\label{elasticity_3} \boldsymbol{u} &= \boldsymbol{g} && \text{on $\partial \Omega$}.
\end{alignat}
\end{subequations}
Here, the displacement is denoted by the vector field $\boldsymbol{u}:\Omega \rightarrow \mathbb{R}^{3}$. The strain tensor is represented by $\underline{\boldsymbol{\epsilon}}(\boldsymbol{u}): = \frac{1}{2}(\nabla \boldsymbol{u} + (\nabla \boldsymbol{u})^{\top})$. The stress tensor is represented by $\underline{\boldsymbol{\sigma}}: \Omega \rightarrow \boldsymbol{S}$, where $\boldsymbol{S}$ denotes the set of all symmetric matrices in $\mathbb{R}^{3\times 3}$. The compliance tensor $\mathcal{A}$ is assumed to be a bounded, symmetric, positive definite tensor over $\boldsymbol{S}$. The body force $\boldsymbol{f}$ lies in $\boldsymbol{L}^2(\Omega)$, the displacement of the boundary $\boldsymbol{g}$ is a function in $\boldsymbol{H}^{1/2}(\partial\Omega)$ and $\Omega$ is a polyhedral domain. In general, there are two approaches to designing mixed finite element methods for linear elasticity. The first approach is to enforce the symmetry of the stress tensor weakly (\cite{ArnoldBrezziDouglas84,ArnoldFalkWinther07,BoffiBrezziFortin09, CockburnGopalakrishnanGuzman10,GopalakrishnanGuzman10,Guzman10,QiuDemko:2009:MME1,QiuDemko11,Stenberg88}). This category includes the HDG method considered in \cite{CockburnShi13_elas}.
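Before turning to the second approach, let us record one standard instance of the compliance tensor introduced above; it is given here only for orientation and is not assumed in our analysis. For a homogeneous isotropic material with Lam\'e constants $\mu>0$ and $\lambda\geq 0$, the constitutive law $\underline{\boldsymbol{\sigma}}=2\mu\,\underline{\boldsymbol{\epsilon}}(\boldsymbol{u})+\lambda\,\operatorname{tr}(\underline{\boldsymbol{\epsilon}}(\boldsymbol{u}))\,\underline{\boldsymbol{I}}$ is inverted by
\begin{equation*}
\mathcal{A}\,\underline{\boldsymbol{\tau}}=\frac{1}{2\mu}\left(\underline{\boldsymbol{\tau}}-\frac{\lambda}{2\mu+3\lambda}\operatorname{tr}(\underline{\boldsymbol{\tau}})\,\underline{\boldsymbol{I}}\right),\qquad \underline{\boldsymbol{\tau}}\in\boldsymbol{S},
\end{equation*}
where $\underline{\boldsymbol{I}}$ denotes the $3\times 3$ identity matrix; the almost incompressible regime mentioned in the abstract corresponds to letting $\lambda\to\infty$.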
The other approach is to exactly enforce the symmetry of the approximate stresses. The methods considered in \cite{CockburnSchoetzauWang06,AdamsCockburn05,ArnoldAwanouWinther08,ArnoldAwanouWinther14,ArnoldWintherNC, Awanou09,GopalakrishnanGuzman11,ManHuShi09,SoonCockburnStolarski09,Yi05,Yi06} belong to the second category, and so does the contribution of this paper. In general, the methods in the first category are easier to implement. On the other hand, the methods in the second category preserve the balance of angular momentum strongly and have fewer degrees of freedom. Next, we compare our HDG method with several methods of the second category. In \cite{CockburnSchoetzauWang06}, an LDG method using strongly symmetric stresses (for isotropic linear elasticity) was introduced and proved to yield convergence properties that remain unchanged when the material becomes incompressible; simplexes and polynomial approximations of degree $k$ in all variables were used. However, as for all LDG methods for second-order elliptic problems, although the displacement converges with order $k+1$, the strain and pressure converge sub-optimally with order $k$. Also, the method cannot be hybridized. Stress finite elements satisfying both strong symmetry and $H(\text{div})$-conformity are introduced in \cite{AdamsCockburn05,ArnoldAwanouWinther08}. The main drawback of these methods is that their stress elements have too many degrees of freedom and hybridization is not available for them (see the detailed description in \cite{Guzman10}). In \cite{ArnoldAwanouWinther14,ArnoldWintherNC, Awanou09,GopalakrishnanGuzman11,ManHuShi09,SoonCockburnStolarski09,Yi05,Yi06}, non-conforming methods using symmetric stress elements are introduced. However, the methods in \cite{ArnoldAwanouWinther14,ArnoldWintherNC, Awanou09,ManHuShi09,Yi05,Yi06} use low order finite element spaces only (most of them are restricted to rectangular or cubical meshes, except \cite{ArnoldAwanouWinther14,ArnoldWintherNC}). In \cite{GopalakrishnanGuzman11}, a family of simplicial elements (one for each $k \geq 1$) is developed in both two and three dimensions. (The degrees of freedom of $P_{k+1}(\boldsymbol{S},K)$ were studied in \cite{GopalakrishnanGuzman11} and then used to design the projection operator $\Pi^{(\text{div},\boldsymbol{S})}$ in \cite{GopaQiu:PracticalDPG}.) However, the convergence rate of the stress is suboptimal. The first HDG method for linear and nonlinear elasticity was introduced in \cite{SoonThesis08,SoonCockburnStolarski09}; see also the related HDG method proposed in \cite{NguyenPeraire2012}. These methods also use simplexes and polynomial approximations of degree $k$ in all variables. For general polyhedral elements, this method was recently analyzed in \cite{FuCockburnStolarski}, where it was shown that the method converges optimally in the displacement with order $k+1$, but with the suboptimal order of $k+1/2$ for the pressure and the stress. For $k=1$, these orders of convergence were numerically shown to be sharp for triangular elements. In this paper, we prove that by enriching the local displacement space to polynomials of degree no more than $k+1$, and by using a modified numerical trace, we are able to obtain optimal orders of convergence for all unknowns. In addition, this analysis is valid for general polyhedral meshes. To the best of our knowledge, this is so far the only result with optimal accuracy on general polyhedral triangulations for linear elasticity problems.
Like many hybrid methods, our HDG method provides approximations to the stress and displacement in each element, and to the trace of the displacement along the mesh interfaces. In general, the corresponding finite element spaces are $\underline{\boldsymbol{V}}_h,\boldsymbol{W}_h,\boldsymbol{M}_h$, which are defined to be \begin{alignat*}{3} \underline{\boldsymbol{V}}_h=&\;\{\underline{\boldsymbol{v}} \in \underline{\boldsymbol{L}}^2(\Omega): &&\quad\underline{\boldsymbol{v}}|_K\in \underline{\boldsymbol{V}}(K) &&\quad\forall\; K\in\mathcal{T}_h\}, \\ \boldsymbol{W}_h=&\;\{\boldsymbol{\omega}\in \boldsymbol{L}^2(\Omega): &&\quad \boldsymbol{\omega}|_K \in \boldsymbol{W}(K) &&\quad\forall\; K\in\mathcal{T}_h\}, \\ \boldsymbol{M}_h=&\;\{\boldsymbol{\mu}\in \boldsymbol{L}^2(\mathcal{E}_h): &&\quad \boldsymbol{\mu}|_F\in \boldsymbol{M}(F) &&\quad\forall\; F\in\mathcal{E}_h\}. \end{alignat*} Here $\mathcal{T}_h$ denotes a triangulation of the domain $\Omega$ and $\mathcal{E}_h$ is the set of all faces $F$ of all elements $K \in \mathcal{T}_h$. The spaces $\underline{\boldsymbol{V}}(K), \boldsymbol{W}(K), \boldsymbol{M}(F)$ are called the {\em local spaces}; they are defined on each element/face. In Table \ref{table:error-order} we list several choices of local spaces for different methods. In this paper, our choice of the local spaces is defined as: \[ \underline{\boldsymbol{V}}(K) = \underline{\boldsymbol{P}}_k(\boldsymbol{S}, K), \quad \boldsymbol{W}(K) = \boldsymbol{P}_{k+1}(K), \quad \boldsymbol{M}(F) =\boldsymbol{P}_k(F). \] Here, the space of vector-valued functions defined on $D$ whose entries are polynomials of total degree at most $k$ is denoted by $\boldsymbol{P}_k(D)$ ($k\geq 1$). Similarly, $\underline{\boldsymbol{P}}_k(\boldsymbol{S}, K)$ denotes the space of symmetric matrix-valued functions defined on $K$ whose entries are polynomials of total degree at most $k$. In addition, our method allows $\mathcal{T}_{h}$ to be any conforming polyhedral triangulation of $\Omega$. Note that the fact that the only globally-coupled degrees of freedom are those of the numerical trace of the displacement along $\mathcal{E}_h$ renders the method efficiently implementable. However, the fact that the polynomial degree of the approximate numerical traces of the displacement is one {\em less} than that of the approximate displacement inside the elements might cause a degradation in the approximation properties of the displacement. Fortunately, this unpleasant situation is avoided altogether by taking a special form of the numerical trace of the stresses inspired by the choice taken in \cite{Lehrenfeld10} in the framework of diffusion problems. This choice allows for a special superconvergence of part of the numerical traces of the stresses which, in turn, guarantees that, for $k\ge1$, the $L^{2}$-order of convergence for the stress is $k+1$ and that of the displacement is $k+2$. So, we obtain optimal convergence for both stress and displacement for general polyhedral elements. Let us mention that the error analysis of our HDG method differs from the traditional projection-based error analysis in \cite{CockburnGopalakrishnanSayas10,CockburnQiuShi11,CockburnShi13_elas} in three aspects. First, we use simple $L^{2}$-projections, not the numerical trace-tailored projections typically used for the analysis of other HDG methods. Second, we take the stabilization parameter to be of order $1/h$ instead of order one.
And finally, we use an elementwise Korn's inequality (Lemma~\ref{symmetric_grad}) to deal with the symmetry of the stresses. We notice that the mixed methods in \cite{CockburnGopalakrishnanGuzman10,GopalakrishnanGuzman10} and the HDG methods in \cite{CockburnShi13_elas} also achieve optimal convergence for the stress and superconvergence for the displacement by post-processing. However, there are two disadvantages regarding implementation. First, these methods enforce the stress symmetry weakly, which means that they have a much larger space for the stress. In addition, these methods usually need to add matrix bubble functions ($\underline{\boldsymbol{\delta V}}$ in \cite{CockburnGopalakrishnanGuzman10}) into their stress elements in order to obtain optimal approximations. In fact, the construction of such bubbles on general polyhedral elements is still an open problem. In contrast, our method avoids using matrix bubble functions and only uses simple polynomial spaces of degrees $k$ and $k+1$. In Table~\ref{table:error-order}, we compare methods which use $\boldsymbol{M}_{h}$ for approximating the trace of the displacement $\widehat{\boldsymbol{u}}_{h}$ on $\mathcal{E}_h$. There, $\boldsymbol{u}^{\star}_h$ is a post-processed numerical solution of the displacement. \begin{table}[ht] \caption{Orders of convergence for methods for which {$\widehat{\boldsymbol{u}}_{h}\in \boldsymbol{M}(F)=\boldsymbol{P}_k(F), k\ge1,$} and $K$ is a tetrahedron.} \centering \begin{tabular}{c c c c c c} \hline \noalign{\smallskip} method &$ \underline{\boldsymbol{V}}(K)$ &\hskip-.5truecm$\boldsymbol{W}(K)$ &\hskip-.3truecm$\|\underline{\boldsymbol{\sigma}} - \underline{\boldsymbol{\sigma}}_h\|_{\mathcal{T}_h}$ &\hskip-.3truecm$\|\boldsymbol{u} - \boldsymbol{u}_h\|_{\mathcal{T}_h}$ &\hskip-.3truecm$\|\boldsymbol{u} - \boldsymbol{u}^{\star}_h\|_{\mathcal{T}_h}$ \\ \noalign{\smallskip} \hline\hline \noalign{\smallskip} AFW\cite{ArnoldFalkWinther07} & $\underline{\boldsymbol{P}}_{k}(\mathbb{R}^{3\times 3}, K)$ &\hskip-.2truecm $\boldsymbol{P}_{k-1}(K)$& \hskip-.5truecm $k$ &\hskip-.5truecm $k$ & \hskip-.5truecm - \\ CGG\cite{CockburnGopalakrishnanGuzman10}& $\underline{\mathbf{RT}}_{k}(K)+\underline{\boldsymbol{\delta V}}$ &\hskip-.2truecm $\boldsymbol{P}_{k}(K)$& \hskip-.5truecm $k+1$ &\hskip-.5truecm $k+1$ & \hskip-.5truecm$k+2$ \\ GG\cite{GopalakrishnanGuzman10} & $\underline{\boldsymbol{P}}_{k}(\mathbb{R}^{3\times 3},K)+\underline{\boldsymbol{\delta V}}$ &\hskip-.2truecm $\boldsymbol{P}_{k-1}(K)$ & \hskip-.5truecm $k+1$ &\hskip-.5truecm $k$ &\hskip-.5truecm $k+1$ \\ CS\cite{CockburnShi13_elas} & $\underline{\boldsymbol{P}}_{k}(\mathbb{R}^{3\times 3},K)+\underline{\boldsymbol{\delta V}}$ &\hskip-.2truecm $\boldsymbol{P}_{k}(K)$ & \hskip-.5truecm $k+1$ &\hskip-.5truecm $k+1$ &\hskip-.5truecm $k+2$ \\ GG\cite{GopalakrishnanGuzman11} & $\underline{\boldsymbol{P}}_{k+1}(\boldsymbol{S},K)$&\hskip-.2truecm $\boldsymbol{P}_{k}(K)$& \hskip-.5truecm $k$ & \hskip-.5truecm$k+1$ &\hskip-.5truecm - \\ HDG-S & $\underline{\boldsymbol{P}}_{k}(\boldsymbol{S},K)$ &\hskip-.2truecm $\boldsymbol{P}_{k+1}(K)$ & \hskip-.5truecm $k+1$ &\hskip-.5truecm $k+2$ &\hskip-.5truecm - \\ \noalign{\smallskip} \hline \end{tabular} \label{table:error-order} \end{table} The remainder of this paper is organized as follows. In Section $2$, we introduce our HDG method and present our a priori error estimates. In Section $3$, we give a characterization of the HDG method and show that the global matrix is symmetric and positive definite.
In Section $4$, we give elementwise Korn's inequality in Lemma~\ref{symmetric_grad}, then provide a detailed proof of the a priori error estimates. In Section $5$, we present several numerical examples in order to illustrate and test our method. \section{Main results} In this section we first present the method in details and then show the main results for the error estimates. \subsection{The HDG formulation with strong symmetry} Let us begin by introducing some notations and conventions. We adapt to our setting the notation used in \cite{CockburnQiuShi11}. Let $\mathcal{T}_h$ denote a conforming triangulation of $\Omega$ made of shape-regular polyhedral elements $K$. We recall that $\partial \mathcal{T}_h := \{\partial K : K \in \mathcal{T}_h \}$, and $\mathcal{E}_h$ denotes the set of all faces $F$ of all elements. We denote by $\mathcal{F}(K)$ the set of all faces $F$ of the element $K$. We also use the standard notation to denote scalar, vector and tensor spaces. Thus, if $D(K)$ denotes a space of scalar-valued functions defined on $K$, the corresponding space of vector-valued functions is $\boldsymbol{D}(K) := [D(K)]^d$ and the corresponding space of matrix-valued functions is $\underline{\boldsymbol{D}}(K) := [D(K)]^{d\times d}$. Finally, $\underline{\boldsymbol{D}}(\boldsymbol{S}, K)$ denotes the symmetric subspace of $\underline{\boldsymbol{D}}(K)$. The methods we consider seek an approximation $(\underline{\boldsymbol{\sigma}}_h, \boldsymbol{u}_h, \widehat{\boldsymbol{u}}_h)$ to the exact solution $(\underline{\boldsymbol{\sigma}}, \boldsymbol{u}, \boldsymbol{u}|_{\mathcal{E}_h})$ in the finite dimensional space $\underline{\boldsymbol{V}}_h \times \boldsymbol{W}_h \times \boldsymbol{M}_h \subset \underline{\boldsymbol{L}}^2(\boldsymbol{S}, \Omega) \times \boldsymbol{L}^2(\Omega) \times \boldsymbol{L}^2(\mathcal{E}_h)$ given by \begin{subequations} \label{spaces} \begin{alignat}{3} \label{spaces-V} \underline{\boldsymbol{V}}_h=&\;\{\underline{\boldsymbol{v}} \in \underline{\boldsymbol{L}}^2(\boldsymbol{S}, \Omega): &&\quad\underline{\boldsymbol{v}}|_K\in \underline{\boldsymbol{P}}_k(\boldsymbol{S}, K) &&\quad\forall\; K\in\mathcal{T}_h\}, \\ \label{spaces-W} \boldsymbol{W}_h=&\;\{\boldsymbol{\omega}\in \boldsymbol{L}^2(\Omega): &&\quad \boldsymbol{\omega}|_K \in \boldsymbol{P}_{k+1}(K) &&\quad\forall\; K\in\mathcal{T}_h\}, \\ \label{spaces-M} \boldsymbol{M}_h=&\;\{\boldsymbol{\mu}\in \boldsymbol{L}^2(\mathcal{E}_h): &&\quad \boldsymbol{\mu}|_F\in \boldsymbol{P}_k(F) &&\quad\forall\; F\in\mathcal{E}_h\}. \end{alignat} \end{subequations} Here $P_k(D)$ denotes the standard space of polynomials of degree no more than $k$ on $D$. Here we require $k \ge 1$. 
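As a purely illustrative count, assume $K$ is a tetrahedron and take $k=1$: since $\dim P_1(K)=4$, $\dim P_2(K)=10$ and $\dim P_1(F)=3$, the local spaces contain $6\cdot 4=24$ stress unknowns and $3\cdot 10=30$ displacement unknowns per element, and $3\cdot 3=9$ trace unknowns per face. After the elementwise elimination described in Section~\ref{sec:hybrid}, only the $9$ trace unknowns per interior face remain globally coupled.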
The numerical approximation $(\underline{\boldsymbol{\sigma}}_h, \boldsymbol{u}_h, \widehat{\boldsymbol{u}}_h)$ can now be defined as the solution of the following system: \begin{subequations}\label{HDG_formulation} \begin{alignat}{1} \bint{\mathcal{A} \underline{\boldsymbol{\sigma}}_h}{\underline{\boldsymbol{v}}} + \bint{\boldsymbol{u}_h}{\nabla \cdot \underline{\boldsymbol{v}}} - \bintEh{\widehat{\boldsymbol{u}}_h}{\underline{\boldsymbol{v}} \boldsymbol{n}} &= 0, \\ \bint{\underline{\boldsymbol{\sigma}}_h}{\nabla \boldsymbol{\omega}} - \bintEh{\widehat{\underline{\boldsymbol{\sigma}}}_h \boldsymbol{n}}{\boldsymbol{\omega}} & = -\bint{\boldsymbol{f}}{\boldsymbol{\omega}},\\ \label{strong_cont} \bintEhi{\widehat{\underline{\boldsymbol{\sigma}}}_h \boldsymbol{n}}{\boldsymbol{\mu}} & = 0,\\ \bintEhb{\widehat{\boldsymbol{u}}_h}{\boldsymbol{\mu}} & = \bintEhb{{\boldsymbol{g}}}{\boldsymbol{\mu}}, \intertext{for all $(\underline{\boldsymbol{v}}, \boldsymbol{\omega}, \boldsymbol{\mu}) \in \underline{\boldsymbol{V}}_h \times \boldsymbol{W}_h \times \boldsymbol{M}_h$, where} \label{numerical_trace} \widehat{\underline{\boldsymbol{\sigma}}}_h \boldsymbol{n} = \underline{\boldsymbol{\sigma}}_h \boldsymbol{n} - \tau (\boldsymbol{P_M} \boldsymbol{u}_h - \widehat{\boldsymbol{u}}_h) \quad \text{on $\partial \mathcal{T}_h$.} \end{alignat} \end{subequations} In fact, in Christoph Lehrenfeld's thesis, the author defines the numerical flux in this way for diffusion problems (see Remark $1.2.4$ in \cite{Lehrenfeld10}). This method was then analyzed for diffusion recently in \cite{Oikawa2015}. Here, $\boldsymbol{P_M}$ denotes the standard $L^2$-orthogonal projection from $\boldsymbol{L}^2(\mathcal{E}_h)$ onto $\boldsymbol{M}_h$. We write $\bint{\underline{\boldsymbol{\eta}}}{\underline{\boldsymbol{\zeta}}}$ $ : = \sum^n_{i,j = 1} \bint{\underline{\boldsymbol{\eta}}_{i,j}}{\underline{\boldsymbol{\zeta}}_{i,j}}$, $\bint{\boldsymbol{\eta}}{\boldsymbol{\zeta}} : = \sum^n_{i = 1} \bint{\eta_i}{\zeta_i}$, and $ \bint{\eta}{\zeta} := \sum_{K \in \mathcal{T}_h} (\eta, \zeta)_K, $ where $(\eta,\zeta)_D$ denotes the integral of $\eta\zeta$ over $D \subset \mathbb{R}^n$. Similarly, we write $\bintEh{\boldsymbol{\eta}}{\boldsymbol{\zeta}}:= \sum^n_{i=1} \bintEh{\eta_i}{\zeta_i}$ and $\bintEh{\eta}{\zeta}:= \sum_{K \in \mathcal{T}_h} \langle \eta \,,\,\zeta \rangle_{\partial K}$, where $\langle \eta \,,\,\zeta \rangle_{D}$ denotes the integral of $\eta \zeta$ over $D \subset \mathbb{R}^{n-1}$. The parameter $\tau$ in \eqref{numerical_trace} is called the {\em{stabilization parameter}}. In this paper, we assume it is a fixed positive number on all faces. It is worth mentioning that the numerical trace \eqref{numerical_trace} is defined slightly differently from the usual HDG setting; see \cite{CockburnQiuShi11}. Namely, in the definition, we use $\boldsymbol{P_M} \boldsymbol{u}_h$ instead of $\boldsymbol{u}_h$. Indeed, this is a crucial modification in order to obtain the error estimates. An intuitive explanation is that we want to preserve the strong continuity of the numerical flux across the interfaces: without the projection $\boldsymbol{P_M}$, equation \eqref{strong_cont} only implies that the normal component of $\widehat{\underline{\boldsymbol{\sigma}}}_h$ is weakly continuous across the interfaces. \subsection{A priori error estimates} To state our main result, we need to introduce some notation.
We define \begin{equation*} \Vert \underline{\boldsymbol{v}}\Vert_{L^{2}(\mathcal{A},\Omega)} = \sqrt{(\mathcal{A}\underline{\boldsymbol{v}}, \underline{\boldsymbol{v}})_{\Omega}}, \quad \forall \underline{\boldsymbol{v}} \, \in \underline{\boldsymbol{L}}^{2}(\boldsymbol{S},\Omega). \end{equation*} We use $\|\cdot\|_{s, D}, |\cdot|_{s, D}$ to denote the usual norm and semi-norm on the Sobolev space $H^s(D)$. We discard the first index $s$ if $s=0$. A differential operator with a sub-index $h$ means it is defined on each element $K \in \mathcal{T}_h$. Similarly, the norm $\|\cdot\|_{s, \mathcal{T}_h}$ is the discrete norm defined as $\|\cdot\|_{s, \mathcal{T}_h}:= \sum_{K \in \mathcal{T}_h}\|\cdot\|_{s, K}$. Finally, we need an elliptic regularity assumption stated as follows. Let $(\boldsymbol{\phi}, \underline{\boldsymbol{\psi}}) \in \boldsymbol{H}^2(\Omega) \times \underline{\boldsymbol{H}}^1(\Omega)$ be the solution of the adjoint problem: \begin{subequations}\label{dual_problem} \begin{alignat}{2} \label{dual_problem_1} \mathcal{A}\underline{\boldsymbol{\psi}} - \underline{\epsilon} (\boldsymbol{\phi}) & = 0 && \text{in $\Omega$},\\ \label{dual_problem_2} \boldsymbol{n}abla \cdot \underline{\boldsymbol{\psi}} & = \boldsymbol{e_u} \quad && \text{in $\Omega$},\\ \label{dual_problem_3} \boldsymbol{\phi} & = 0 && \text{on $\partial \Omega$}. \end{alignat} \end{subequations} We assume the solution $(\boldsymbol{\phi}, \underline{\boldsymbol{\psi}})$ has the following elliptic regularity property: \begin{equation}\label{regularity} \|\underline{\boldsymbol{\psi}}\|_{1, \Omega} + \|\boldsymbol{\phi}\|_{2, \Omega} \le C_{reg} \|\boldsymbol{e_u}\|_{\Omega}, \end{equation} The assumption holds in the case of planar elasticity with scalar coefficients on a convex domain, see \cite{BacutaBramble03}. We are now ready to state our main result. \begin{theorem}\label{Main_result} If the meshes are quasi-uniform and $\tau = \mathcal{O}(\frac{1}{h})$, then we have \begin{equation}\label{error_estimate_sigma} \|\underline{\boldsymbol{\sigma}} - \underline{\boldsymbol{\sigma}}_h\|_{L^{2}(\mathcal{A},\Omega)} \le C h^s (\|\boldsymbol{u}\|_{s+1, \Omega} + \|\underline{\boldsymbol{\sigma}}\|_{s, \Omega}), \end{equation} for all $1 \le s \le k+1$. Moreover, if the elliptic regularity property \eqref{regularity} holds, then we have \begin{equation} \label{error_estimate_u} \| \boldsymbol{u} - \boldsymbol{u}_h\|_{\Omega} \le C h^{s+1}(\|\boldsymbol{u}\|_{s+1, \Omega} + \|\underline{\boldsymbol{\sigma}}\|_{s, \Omega}), \end{equation} for all $1 \le s \le k+1$. Here the constant $C$ depends on the upper bound of compliance tensor $\mathcal{A}$ but it is independent of the mesh size $h$. \end{theorem} This result shows that the numerical errors for both unknowns $(\boldsymbol{u}, \underline{\boldsymbol{\sigma}})$ are optimal. In addition, since the only globally-coupled unknown, $\widehat{\boldsymbol{u}}_{h}$, stays in $\boldsymbol{P}_{k}(\mathcal{E}_h)$, the order of convergence for the displacement remains optimal only because of a key superconvergence property, see the remark right after Corollary \ref{estimate_sigma}. In addition, we restrict our result on quasi-uniform meshes to make the proof simple and clear. This result holds for shape-regular meshes also. \subsection{Numerical approximation for nearly incompressible materials} Here, we consider the numerical approximation of stress for isotropic nearly incompressible materials. 
We define isotropic materials to be those whose compliance tensor satisfies the following Assumption~\ref{ass_isotropic}. \begin{assumption} \label{ass_isotropic} \begin{align} \label{ass_isotropic_def} \mathcal{A}\underline{\boldsymbol{\tau}} &= P_{D}\underline{\boldsymbol{\tau}}_{D} + P_{T}\frac{\text{tr}(\underline{\boldsymbol{\tau}})}{3}I_{3}\\ \nonumber \text{where } \quad \underline{\boldsymbol{\tau}}_{D} & = \underline{\boldsymbol{\tau}} -\frac{\text{tr}(\underline{\boldsymbol{\tau}})}{3}I_{3}, \end{align} for any $\underline{\boldsymbol{\tau}}$ in $\mathbb{R}^{3\times 3}$, and $P_{D}$ and $P_{T}$ are two positive constants. An isotropic material is nearly incompressible if $P_{T}$ is close to zero. \end{assumption} \begin{theorem} \label{thm_locking_free} If the material is isotropic (i.e., its compliance tensor satisfies Assumption~\ref{ass_isotropic}), $P_{T}$ is positive, the boundary data $\boldsymbol{g}=0$, the meshes are quasi-uniform and $\tau = \mathcal{O}(\frac{1}{h})$, then we have \begin{equation} \label{conv_locking_free} \|\underline{\boldsymbol{\sigma}} - \underline{\boldsymbol{\sigma}}_h\|_{L^{2}(\Omega)} \le C h^{s} (\|\boldsymbol{u}\|_{s+1, \Omega} + \|\underline{\boldsymbol{\sigma}}\|_{s, \Omega}), \end{equation} for all $1 \le s \le k+1$. Here, the constant $C$ is independent of $P_{T}^{-1}$. \end{theorem} This result shows that the HDG method (\ref{HDG_formulation}) is locking-free for nearly incompressible materials. We emphasize that the convergence rate of the stress for nearly incompressible materials is one order higher than in \cite{ArnoldFalkWinther07,GopalakrishnanGuzman11}, which use the same finite element space for the numerical trace of the displacement. \section{A characterization of the HDG method} \label{sec:hybrid} In this section we show how to eliminate elementwise the unknowns $\underline{\boldsymbol{\sigma}}_h$ and $\boldsymbol{u}_h$ from the equations \eqref{HDG_formulation} and rewrite the original system solely in terms of the unknown $\widehat{\boldsymbol{u}}_{h}$; see also \cite{SoonCockburnStolarski09}. Via this elimination, we do not have to deal with the large indefinite linear system generated by \eqref{HDG_formulation}, but only with the inversion of a sparser symmetric positive definite matrix of remarkably smaller size.
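To make the algebraic structure of this elimination concrete, the following minimal sketch (in Python/NumPy, with hypothetical block names \texttt{A\_K}, \texttt{B\_K}, \texttt{C\_K} that are not part of the paper's notation) assembles the condensed system for the trace unknowns from element-local blocks and then recovers the local unknowns element by element; it only illustrates the generic Schur-complement mechanism, not the specific assembly of \eqref{HDG_formulation}.
\begin{verbatim}
import numpy as np

# Schematic static condensation for a hybridized method (illustrative only).
# Per element K we assume the following blocks are given:
#   A_K : couples the element-local unknowns (sigma_h, u_h) on K
#   B_K : couples the element-local unknowns to the trace unknowns on
#         the faces of K
#   C_K : direct trace-trace coupling on the faces of K (stabilization)
#   f_K, g_K : local right-hand sides
#   idx : global indices of the trace dof on the faces of K
def condense_and_solve(elements, n_trace_dof):
    S = np.zeros((n_trace_dof, n_trace_dof))
    rhs = np.zeros(n_trace_dof)
    store = []
    for A_K, B_K, C_K, f_K, g_K, idx in elements:
        AinvB = np.linalg.solve(A_K, B_K)   # A_K^{-1} B_K
        Ainvf = np.linalg.solve(A_K, f_K)   # A_K^{-1} f_K
        # local Schur complement: eliminate (sigma_h, u_h) in favour of u_hat
        S[np.ix_(idx, idx)] += C_K - B_K.T @ AinvB
        rhs[idx] += g_K - B_K.T @ Ainvf
        store.append((AinvB, Ainvf, idx))
    u_hat = np.linalg.solve(S, rhs)         # the only globally coupled unknown
    # element-by-element recovery of the local unknowns
    local = [Ainvf - AinvB @ u_hat[idx] for AinvB, Ainvf, idx in store]
    return u_hat, local
\end{verbatim}
In the method considered here, the role of the condensed matrix is played by the bilinear form $a_{h}$ introduced below, which is shown to be symmetric positive definite.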
\subsection{The local problems} The above-mentioned elimination can be described using additional ``local'' operators defined as follows. On each element $K$, for any $\boldsymbol{\lambda}\in \boldsymbol{M}_h |_{\partial K}$, we denote by $(\underline{\boldsymbol{Q}} \boldsymbol{\lambda}, \boldsymbol{U} \boldsymbol{\lambda}) \in \underline{\boldsymbol{V}}(K) \times \boldsymbol{W}(K)$ the unique solution of the local problem: \begin{subequations} \label{local_solvers} \begin{align} \label{local_solvers_eq1} (\mathcal{A} \underline{\boldsymbol{Q}}\boldsymbol{\lambda}, \underline{\boldsymbol{v}})_{K} +(\boldsymbol{U}\boldsymbol{\lambda}, \nabla\cdot \underline{\boldsymbol{v}})_{K} &= \langle \boldsymbol{\lambda}, \underline{\boldsymbol{v}}\cdot \boldsymbol{n}\rangle_{\partial K},\\ \label{local_solvers_eq2} - (\nabla\cdot \underline{\boldsymbol{Q}}\boldsymbol{\lambda}, \boldsymbol{\omega})_{K} + \langle \tau \boldsymbol{P_M} \boldsymbol{U}\boldsymbol{\lambda}, \boldsymbol{\omega}\rangle_{\partial K} &= \langle \tau \boldsymbol{\lambda}, \boldsymbol{\omega}\rangle_{\partial K}, \end{align} \end{subequations} for all $(\underline{\boldsymbol{v}}, \boldsymbol{\omega})\in \underline{\boldsymbol{V}}(K)\times \boldsymbol{W}(K)$. On each element $K$, we also denote by $(\underline{\boldsymbol{Q}}_S \boldsymbol{f}, \boldsymbol{U}_S \boldsymbol{f}) \in \underline{\boldsymbol{V}}(K) \times \boldsymbol{W}(K)$ the unique solution of the local problem: \begin{subequations} \label{local_solvers_source} \begin{align} \label{local_solvers_source_eq1} (\mathcal{A} \underline{\boldsymbol{Q}}_{S}\boldsymbol{f}, \underline{\boldsymbol{v}})_{K} +(\boldsymbol{U}_{S}\boldsymbol{f}, \nabla\cdot \underline{\boldsymbol{v}})_{K} &= 0,\\ \label{local_solvers_source_eq2} - (\nabla\cdot \underline{\boldsymbol{Q}}_{S}\boldsymbol{f}, \boldsymbol{\omega})_{K} + \langle \tau \boldsymbol{P_M} \boldsymbol{U}_{S}\boldsymbol{f}, \boldsymbol{\omega}\rangle_{\partial K} &= - (\boldsymbol{f}, \boldsymbol{\omega})_{K}, \end{align} \end{subequations} for all $(\underline{\boldsymbol{v}}, \boldsymbol{\omega})\in \underline{\boldsymbol{V}}(K)\times \boldsymbol{W}(K)$. It is easy to show that the two local problems are well-posed. In addition, due to the linearity of the global system \eqref{HDG_formulation}, the numerical solution $(\underline{\boldsymbol{\sigma}}_h, \boldsymbol{u}_h, \widehat{\boldsymbol{u}}_h)$ satisfies \begin{equation} \label{local_to_global} \underline{\boldsymbol{\sigma}}_h = \underline{\boldsymbol{Q}}\widehat{\boldsymbol{u}}_h +\underline{\boldsymbol{Q}}_{S}\boldsymbol{f}, \quad \boldsymbol{u}_h = \boldsymbol{U}\widehat{\boldsymbol{u}}_h+\boldsymbol{U}_{S}\boldsymbol{f}. \end{equation} \subsection{The global problem} For the sake of simplicity, we assume the boundary data $\boldsymbol{g} = 0$.
Then, the HDG method (\ref{HDG_formulation}) is to find $(\underline{\boldsymbol{\sigma}}_h, \boldsymbol{u}_h, \widehat{\boldsymbol{u}}_h) \in \underline{\boldsymbol{V}}_h \times \boldsymbol{W}_h \times\boldsymbol{M}^0_h$ satisfying \begin{subequations} \label{HDG_formulation_zero} \begin{alignat}{1} \label{strong_cont_zero} \bint{\mathcal{A} \underline{\boldsymbol{\sigma}}_h}{\underline{\boldsymbol{v}}} + \bint{\boldsymbol{u}_h}{\boldsymbol{n}abla \cdot \underline{\boldsymbol{v}}}-\bintEh{\widehat{\boldsymbol{u}}_h}{\underline{\boldsymbol{v}} \boldsymbol{n}} &= 0, \\ -\bint{\boldsymbol{n}abla\cdot\underline{\boldsymbol{\sigma}}_h}{\boldsymbol{\omega}} + \bintEh{\tau (\boldsymbol{P_M} \boldsymbol{u}_h - \widehat{\boldsymbol{u}}_h)}{\boldsymbol{\omega}} & = -\bint{\boldsymbol{f}}{\boldsymbol{\omega}},\\ \label{HDG_formulation_zero_c} \bintEhi{\underline{\boldsymbol{\sigma}}_h \boldsymbol{n} - \tau (\boldsymbol{P_M} \boldsymbol{u}_h - \widehat{\boldsymbol{u}}_h)}{\boldsymbol{\mu}} & = 0, \end{alignat} \end{subequations} for all $(\underline{\boldsymbol{v}}, \boldsymbol{\omega}, \boldsymbol{\mu}) \in \underline{\boldsymbol{V}}_h \times \boldsymbol{W}_h \times \boldsymbol{M}^0_h$, where $\boldsymbol{M}^0_{h} = \{\boldsymbol{\mu}\in \boldsymbol{M}_{h}: \boldsymbol{\mu}|_{\partial \Omega}=0 \}$. Combining (\ref{HDG_formulation_zero_c}) with (\ref{local_to_global}), we have that for all $\boldsymbol{\mu}\in \boldsymbol{M}^0_{h}$, \begin{equation} \label{reduced_system_primitive} \langle (\underline{\boldsymbol{Q}}\widehat{\boldsymbol{u}}_h) \boldsymbol{n} - \tau (\boldsymbol{P_M} \boldsymbol{U}\widehat{\boldsymbol{u}}_h - \widehat{\boldsymbol{u}}_h ), \boldsymbol{\mu}\rangle_{\partial \mathcal{T}_{h}} = \langle (\underline{\boldsymbol{Q}}_{S}\boldsymbol{f}) \boldsymbol{n} - \tau \boldsymbol{P_M} \boldsymbol{U}_{S}\boldsymbol{f}, \boldsymbol{\mu}\rangle_{\partial \mathcal{T}_{h}}. \end{equation} Up to now we can see that we only need to solve the reduced global linear system (\ref{reduced_system_primitive}) first, then recover $(\underline{\boldsymbol{\sigma}}_h, \boldsymbol{u}_h)$ by (\ref{local_to_global}) element by element. Next we show that the global system \eqref{reduced_system_primitive} is in fact symmetric positive definite. \subsection{A characterization of the approximate solution} The above results suggest the following characterization of the numerical solution of the HDG method. \begin{theorem} \label{thm_reduced_system} The numerical solution of the HDG method (\ref{HDG_formulation}) satisfies \begin{equation*} \underline{\boldsymbol{\sigma}}_h = \underline{\boldsymbol{Q}}\widehat{\boldsymbol{u}}_h +\underline{\boldsymbol{Q}}_{S}\boldsymbol{f}, \quad \boldsymbol{u}_h = \boldsymbol{U}\widehat{\boldsymbol{u}}_h+\boldsymbol{U}_{S}\boldsymbol{f}. 
\end{equation*} If we assume the boundary data $\boldsymbol{g}=0$, then $\widehat{\boldsymbol{u}}_h\in \boldsymbol{M}^0_{h}$ is the solution of \begin{equation} \label{reduced_system} a_{h} (\widehat{\boldsymbol{u}}_h, \boldsymbol{\mu}) = \langle (\underline{\boldsymbol{Q}}_{S}\boldsymbol{f}) \boldsymbol{n} - \tau \boldsymbol{P_M} \boldsymbol{U}_{S}\boldsymbol{f}, \boldsymbol{\mu}\rangle_{\partial \mathcal{T}_{h}},\quad \forall \boldsymbol{\mu}\in \boldsymbol{M}^0_{h}, \end{equation} where \[ a_{h}(\widehat{\boldsymbol{u}}_h, \boldsymbol{\mu}) = (\mathcal{A}\underline{\boldsymbol{Q}}\widehat{\boldsymbol{u}}_h, \underline{\boldsymbol{Q}}\boldsymbol{\mu})_{\mathcal{T}_{h}} + \langle \tau(\boldsymbol{P_M} \boldsymbol{U}\widehat{\boldsymbol{u}}_h - \widehat{\boldsymbol{u}}_h ), \boldsymbol{P_M} \boldsymbol{U}\boldsymbol{\mu} - \boldsymbol{\mu} \rangle_{\partial \mathcal{T}_{h}}. \] In addition, the bilinear operator $a_{h}(\boldsymbol{\lambda}, \boldsymbol{\lambda})$ is positive definite. \end{theorem} \begin{proof} In order to show (\ref{reduced_system}) is true, we only need to show that for all $\boldsymbol{\lambda}, \boldsymbol{\mu}\in \boldsymbol{M}^0_h$, then \begin{equation*} a_{h}(\boldsymbol{\lambda}, \boldsymbol{\mu}) = \langle (\underline{\boldsymbol{Q}}\boldsymbol{\lambda}) \boldsymbol{n} - \tau (\boldsymbol{P_M} \boldsymbol{U}\boldsymbol{\lambda} - \boldsymbol{\lambda}), \boldsymbol{\mu}\rangle_{\partial \mathcal{T}_{h}}. \end{equation*} According to (\ref{local_solvers}), we have \begin{subequations} \label{local_solvers_derivation} \begin{align} \label{local_solvers_derivation_eq1} & (\mathcal{A} \underline{\boldsymbol{Q}}\boldsymbol{m}, \underline{\boldsymbol{v}})_{\mathcal{T}_{h}} +(\boldsymbol{U}\boldsymbol{m}, \boldsymbol{n}abla\cdot \underline{\boldsymbol{v}})_{\mathcal{T}_{h}} = \langle \boldsymbol{m}, \underline{\boldsymbol{v}}\cdot \boldsymbol{n}\rangle_{\partial \mathcal{T}_{h}},\\ \label{local_solvers_derivation_eq2} & (\boldsymbol{n}abla\cdot \underline{\boldsymbol{Q}}\boldsymbol{m}, \boldsymbol{\omega})_{\mathcal{T}_{h}} = \langle\tau(\boldsymbol{P_M} \boldsymbol{U}\boldsymbol{m}-\boldsymbol{m}),\boldsymbol{\omega}\rangle_{\partial \mathcal{T}_{h}}, \end{align} \end{subequations} for all $(\underline{\boldsymbol{v}}, \boldsymbol{\omega})\in \underline{\boldsymbol{V}}_{h}\times \boldsymbol{W}_{h}$, $\boldsymbol{m}\in \boldsymbol{M}^0_h$. 
Then, we have \begin{align*} & \langle (\underline{\boldsymbol{Q}}\boldsymbol{\lambda}) \boldsymbol{n} - \tau (\boldsymbol{P_M} \boldsymbol{U}\boldsymbol{\lambda} - \boldsymbol{\lambda}), \boldsymbol{\mu}\rangle_{\partial \mathcal{T}_{h}}\\ = & \langle \boldsymbol{\mu}, (\underline{\boldsymbol{Q}}\boldsymbol{\lambda}) \boldsymbol{n}\rangle_{\partial\mathcal{T}_{h}} - \langle \tau (\boldsymbol{P_M} \boldsymbol{U}\boldsymbol{\lambda} - \boldsymbol{\lambda}), \boldsymbol{\mu}\rangle_{\partial \mathcal{T}_{h}}\\ = & (\mathcal{A} \underline{\boldsymbol{Q}}\boldsymbol{\mu}, \underline{\boldsymbol{Q}}\boldsymbol{\lambda})_{\mathcal{T}_{h}} +(\boldsymbol{U}\boldsymbol{\mu}, \boldsymbol{n}abla\cdot \underline{\boldsymbol{Q}}\boldsymbol{\lambda})_{\mathcal{T}_{h}} - \langle \tau (\boldsymbol{P_M} \boldsymbol{U}\boldsymbol{\lambda} - \boldsymbol{\lambda}), \boldsymbol{\mu}\rangle_{\partial \mathcal{T}_{h}}\quad \text{ by }(\ref{local_solvers_derivation_eq1})\\ = & (\mathcal{A} \underline{\boldsymbol{Q}}\boldsymbol{\mu}, \underline{\boldsymbol{Q}}\boldsymbol{\lambda})_{\mathcal{T}_{h}} +(\boldsymbol{n}abla\cdot \underline{\boldsymbol{Q}}\boldsymbol{\lambda}, \boldsymbol{U}\boldsymbol{\mu})_{\mathcal{T}_{h}} - \langle \tau (\boldsymbol{P_M} \boldsymbol{U}\boldsymbol{\lambda} - \boldsymbol{\lambda}), \boldsymbol{\mu}\rangle_{\partial \mathcal{T}_{h}}\\ = & (\mathcal{A} \underline{\boldsymbol{Q}}\boldsymbol{\mu}, \underline{\boldsymbol{Q}}\boldsymbol{\lambda})_{\mathcal{T}_{h}} + \langle \tau (\boldsymbol{P_M} \boldsymbol{U}\boldsymbol{\lambda} -\boldsymbol{\lambda}), \boldsymbol{U}\boldsymbol{\mu}-\boldsymbol{\mu}\rangle_{\partial \mathcal{T}_{h}}\quad \text{ by } (\ref{local_solvers_derivation_eq2})\\ = & (\mathcal{A} \underline{\boldsymbol{Q}}\boldsymbol{\mu}, \underline{\boldsymbol{Q}}\boldsymbol{\lambda})_{\mathcal{T}_{h}} + \langle \tau (\boldsymbol{P_M} \boldsymbol{U}\boldsymbol{\lambda} -\boldsymbol{\lambda}), \boldsymbol{P_M}\boldsymbol{U}\boldsymbol{\mu}-\boldsymbol{\mu}\rangle_{\partial \mathcal{T}_{h}}\\ = & a_{h}(\boldsymbol{\lambda}, \boldsymbol{\mu}). \end{align*} So, we can conclude that (\ref{reduced_system}) holds. We end the proof by showing the bilinear operator $a_h(\cdot, \cdot)$ is positive definite. If $a_{h}(\boldsymbol{\lambda}, \boldsymbol{\lambda}) = 0$ for some $\boldsymbol{\lambda}\in \boldsymbol{M}^0_h$, from the previous result we have \[ \underline{\boldsymbol{Q}}\boldsymbol{\lambda} = 0, \quad \boldsymbol{P_M}\boldsymbol{U}\boldsymbol{\lambda} - \boldsymbol{\lambda}|_{\partial\mathcal{T}_{h}}=0. \] We apply integration by parts on (\ref{local_solvers_eq1}), we have \[ \bintK{\underline{\epsilon} (\boldsymbol{U}\boldsymbol{\lambda})}{\underline{\boldsymbol{v}}} = 0, \quad \forall \, \underline{\boldsymbol{v}} \in \underline{\boldsymbol{V}}(K). \] This implies that $\underline{\epsilon}(\boldsymbol{U}\boldsymbol{\lambda})|_{K} = 0$ for all $K\in\mathcal{T}_{h}$. So, for any $K\in\mathcal{T}_{h}$, there are $\boldsymbol{a}_{K}, \boldsymbol{b}_{K}\in\mathbb{R}^{3}$ such that $\boldsymbol{U}\boldsymbol{\lambda}|_{K} = \boldsymbol{a}_{K}\times\boldsymbol{x}+\boldsymbol{b}_{K}$. Since $k \ge 1$, we have $\boldsymbol{P}_M \boldsymbol{U} \boldsymbol{\lambda} = \boldsymbol{U} \boldsymbol{\lambda}$. 
Combining this result with the fact that $\boldsymbol{P_M}\boldsymbol{U}\boldsymbol{\lambda} - \boldsymbol{\lambda}|_{\partial\mathcal{T}_{h}}=0$ and $\boldsymbol{\lambda}|_{\partial \Omega}=0$, we can conclude that $\boldsymbol{U}\boldsymbol{\lambda} \in \boldsymbol{C}^0(\Omega)$ and $\boldsymbol{U}\boldsymbol{\lambda}|_{\partial \Omega} = 0$. Finally, let us consider two adjacent elements $K_1, K_2$ with the interface $F = \bar{K_1} \cap \bar{K_2}$. In addition, we assume that on $K_i$, $\boldsymbol{U}\boldsymbol{\lambda}$ can be expressed as \[ \boldsymbol{U}\boldsymbol{\lambda} = \boldsymbol{a}_i \times \boldsymbol{x} + \boldsymbol{b}_i, \quad i = 1, 2. \] We claim that $\boldsymbol{a}_1 = \boldsymbol{a}_2$ and $\boldsymbol{b}_1 = \boldsymbol{b}_2$. This fact can be shown by considering the continuity of $\boldsymbol{U}\boldsymbol{\lambda}$ across the interface $F$. We omit the detailed proof since it only involves elementary linear algebra. From this result we conclude that there exist $\boldsymbol{a}, \boldsymbol{b}\in\mathbb{R}^{3}$ such that $\boldsymbol{U \lambda} = \boldsymbol{a} \times \boldsymbol{x} + \boldsymbol{b}$ in $\Omega$. By the fact that $\boldsymbol{U \lambda}|_{\partial \Omega} = 0$, we can conclude that $\boldsymbol{U \lambda} = 0$, hence $\boldsymbol{\lambda} = 0$. This completes the proof. \end{proof} \begin{remark} In Theorem~\ref{thm_reduced_system}, we assume the boundary data $\boldsymbol{g}=0$. Actually, if $\boldsymbol{g}$ is not zero, we can still obtain the same linear system as $a_{h}$ in Theorem~\ref{thm_reduced_system} by the same treatment of the boundary data as in \cite{CockburnDuboisGopalakrishnanTan2013}. \end{remark} \section{Error Analysis} In this section we provide detailed proofs of our a priori error estimates, Theorem~\ref{Main_result} and Theorem~\ref{thm_locking_free}. We use an elementwise Korn's inequality (Lemma~\ref{symmetric_grad}), which is novel and crucial in the error analysis. We use $\underline{\boldsymbol{\Pi_V}}, \boldsymbol{\Pi_W}$ to denote the standard $L^2$-orthogonal projections onto $\underline{\boldsymbol{V}}_h, \boldsymbol{W}_h$, respectively.
In addition, we denote \[ \underline{\boldsymbol{e_{\sigma}}} = \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}} - \underline{\boldsymbol{\sigma}}_h, \quad \boldsymbol{e_u} = \boldsymbol{\Pi_W} \boldsymbol{u} - \boldsymbol{u}_h, \quad \boldsymbol{e_u}hat = \boldsymbol{P_M} \boldsymbol{u} - \widehat{\boldsymbol{u}}_h. \] In the analysis, we are going to use the following classical results: \begin{subequations}\label{classical_ineq} \begin{alignat}{2} \label{classical_ineq_1} \|\boldsymbol{u} - \boldsymbol{\Pi_W} \boldsymbol{u}\|_{\Omega} & \le C h^s \|\boldsymbol{u}\|_{s, \Omega} \quad && 0 \le s \le k+2, \\ \label{classical_ineq_2} \|\underline{\boldsymbol{\sigma}} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}}\|_{\Omega} & \le C h^t \|\underline{\boldsymbol{\sigma}}\|_{t, \Omega} && 0 \le t \le k+1, \\ \label{classical_ineq_3} \|\boldsymbol{u} - \boldsymbol{P_M} \boldsymbol{u}\|_{\mathcal{E}_h} &\le C h^{s-\frac12} \|\boldsymbol{u}\|_{s, \Omega}, \quad && 1\le s \le k+1,\\ \label{classical_ineq_4} \|\boldsymbol{u} - \boldsymbol{\Pi_W} \boldsymbol{u}\|_{\partial K} &\le C h^{s-\frac12} \|\boldsymbol{u}\|_{s, K}, \quad && 1 \le s \le k+2, \\ \label{classical_ineq_5} \|\underline{\boldsymbol{\sigma}}\boldsymbol{n} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}}\boldsymbol{n}\|_{\partial K} &\le C h^{t-\frac12} \|\underline{\boldsymbol{\sigma}}\|_{t, K}, \quad && 1 \le t \le k+1, \\ \label{classical_ineq_6} \|\boldsymbol{v}\|_{\partial K} &\le C h^{-\frac12} \|\boldsymbol{v}\|_K, \quad &&\forall \; \boldsymbol{v} \in \boldsymbol{P}_s(K), \\ \label{classical_ineq_7} \|\underline{\boldsymbol{\sigma}}\boldsymbol{n} - \boldsymbol{P_M} (\underline{\boldsymbol{\sigma}}\boldsymbol{n})\|_{\partial K} &\le C h^{t-\frac12} \|\underline{\boldsymbol{\sigma}}\|_{t, K}, \quad && 1 \le t \le k+1. \end{alignat} \end{subequations} The above results follow from standard polynomial approximation theory and the trace inequality. Let $\underline{\boldsymbol{\epsilon}}_{h}$ denote the discrete symmetric gradient operator, such that for any $K\in\mathcal{T}_h$, $\underline{\boldsymbol{\epsilon}}_{h}|_{K}= \underline{\boldsymbol{\epsilon}}|_{K}$. It is well known (see Theorem $2.2$ in \cite{Ciarlet2010}) that the {\em{kernel}} of the operator $\underline{\boldsymbol{\epsilon}}_{h} (\cdot)$ is \[ \ker \underline{\boldsymbol{\epsilon}}_{h} = \Upsilon_h := \{\boldsymbol{\Lambda} \in \boldsymbol{L}^2(\Omega), \; \boldsymbol{\Lambda}|_{K} = \underline{\boldsymbol{B}}_{K} \boldsymbol{x} + \boldsymbol{b}_{K}, \; \underline{\boldsymbol{B}}_{K}\in \underline{\boldsymbol{\mathcal{A}}}, \boldsymbol{b}_{K} \in \mathbb{R}^3,K\in\mathcal{T}_h \}. \] Here, $\underline{\boldsymbol{\mathcal{A}}}$ denotes the set of all anti-symmetric matrices in $\mathbb{R}^{3\times 3}$. In the analysis, we need the following elementwise Korn's inequality: \begin{lemma}\label{symmetric_grad} Let $K \in \mathcal{T}_h$ be a generic element with size $h_K$ and $\Upsilon(K):= \Upsilon_h|_K$. Then for any function $\boldsymbol{v} \in \boldsymbol{W}(K)$, we have \[ \inf_{\boldsymbol{\Lambda} \in \Upsilon(K)} \|\nabla(\boldsymbol{v} + \boldsymbol{\Lambda})\|_K \le C \|\underline{\boldsymbol{\epsilon}}(\boldsymbol{v})\|_K. \] Here $C$ is independent of the size $h_K$. In addition, if $K$ is a tetrahedron, the above inequality holds for any $\boldsymbol{v}\in \boldsymbol{H}^1(K)$.
\end{lemma} \begin{proof} Let $\widehat{K}$ denote the reference tetrahedron element and $\boldsymbol{v} \in \boldsymbol{H}^1(K)$. The mapping from $\widehat{K}$ to $K$ is $\boldsymbol{x} = \underline{\boldsymbol{A}}_K \widehat{\boldsymbol{x}}+\boldsymbol{c}_{K}$ where $\underline{\boldsymbol{A}}_K$ is a non-singular matrix and $\boldsymbol{c}_{K}\in\mathbb{R}^{3}$. We define $\widehat{\boldsymbol{v}}$, which is the pull back of $\boldsymbol{v}$ on $\widehat{K}$, by \begin{align*} \underline{\boldsymbol{A}}_K^{-\top} \widehat{\boldsymbol{v}}(\widehat{\boldsymbol{x}}) = \boldsymbol{v}(\boldsymbol{x}) \quad \forall \widehat{\boldsymbol{x}} \in \widehat{K}. \end{align*} So, we have \begin{align*} \boldsymbol{n}abla \boldsymbol{v} (\boldsymbol{x}) = \boldsymbol{n}abla (\underline{\boldsymbol{A}}_K^{-\top} \widehat{\boldsymbol{v}})(\boldsymbol{x}) = \underline{\boldsymbol{A}}_K^{-\top} (\boldsymbol{n}abla\widehat{\boldsymbol{v}})(\boldsymbol{x}). \end{align*} The last equality above is due to the fact that every component of $\underline{\boldsymbol{A}}_K^{-\top} $ is constant. It is easy to see that \begin{align*} (\boldsymbol{n}abla\widehat{\boldsymbol{v}})(\boldsymbol{x}) = \widehat{\boldsymbol{n}abla} \widehat{\boldsymbol{v}}(\widehat{\boldsymbol{x}})\underline{\boldsymbol{A}}_K^{-1}. \end{align*} So, we have \begin{align*} \underline{\boldsymbol{A}}_K^{-\top}\widehat{\boldsymbol{n}abla} \widehat{\boldsymbol{v}}(\widehat{\boldsymbol{x}}) \underline{\boldsymbol{A}}_K^{-1} = \boldsymbol{n}abla \boldsymbol{v}(\boldsymbol{x}). \end{align*} By taking the symmetric part of both sides of the above equation, we have \begin{align} \label{tensor_transform} \underline{\boldsymbol{A}}_K^{-\top}\widehat{\underline{\boldsymbol{\epsilon}}} \widehat{\boldsymbol{v}}(\widehat{\boldsymbol{x}}) \underline{\boldsymbol{A}}_K^{-1} = \underline{\boldsymbol{\epsilon}} \boldsymbol{v}(\boldsymbol{x}). \end{align} According to Theorem $2.3$ in \cite{Ciarlet2010}, the following inequality holds: \begin{align*} \inf_{\widehat{\boldsymbol{\Lambda}} \in \Upsilon(\widehat{K})} \|\widehat{\boldsymbol{v}} +\widehat{\boldsymbol{\Lambda}}\|_{1, \widehat{K}} \le C \|\widehat{\underline{\boldsymbol{\epsilon}}} (\widehat{\boldsymbol{v}})\|_{0, \widehat{K}}. \end{align*} So, there is $\widehat{\boldsymbol{\Lambda}} = \underline{\boldsymbol{B}}_{\widehat{K}} \widehat{\boldsymbol{x}} + \boldsymbol{b}_{\widehat{K}}$ with $\underline{\boldsymbol{B}}_{\widehat{K}}\in \underline{\boldsymbol{\mathcal{A}}}$ and $\boldsymbol{b}_{\widehat{K}} \in \mathbb{R}^3$, such that \begin{align} \label{reference_ele} \|\widehat{\boldsymbol{n}abla} (\widehat{\boldsymbol{v}} +\widehat{\boldsymbol{\Lambda}}) \|_{0, \widehat{K}} \le C \|\widehat{\underline{\boldsymbol{\epsilon}}}(\widehat{\boldsymbol{v}})\|_{0, \widehat{K}}. \end{align} We define \begin{align*} \boldsymbol{\Lambda} (\boldsymbol{x}) = \underline{\boldsymbol{A}}_K^{-\top} \widehat{\boldsymbol{\Lambda}}(\widehat{\boldsymbol{x}})\quad \forall \boldsymbol{x} \in K. \end{align*} It is easy to see that \begin{align*} \boldsymbol{n}abla \boldsymbol{\Lambda} = \underline{\boldsymbol{A}}_K^{-\top}\widehat{\boldsymbol{n}abla} \widehat{\boldsymbol{\Lambda}} \underline{\boldsymbol{A}}_K^{-1} = \underline{\boldsymbol{A}}_K^{-\top}\underline{\boldsymbol{B}}_{\widehat{K}} \underline{\boldsymbol{A}}_K^{-1}\in \underline{\boldsymbol{\mathcal{A}}}. \end{align*} So, $\boldsymbol{\Lambda} \in \Upsilon(K)$. 
Then, by standard scaling argument with (\ref{tensor_transform}, \ref{reference_ele}) and the shape regularity of the meshes, we can conclude that the proof for arbitrary tetrahedron element is complete. Now, we consider the case of arbitrary shape regular element $K$, which can be hexahedron, prism or pyramid. Let $\boldsymbol{v}= (v_{1}, v_{2}, v_{3})^{\top}\in \boldsymbol{W}_{h}|_{K}$. It is well known that for any $1\leq i,j,k\leq 3$, \begin{align*} \partial_{j} (\partial_{k} v_{i}) = \partial_{j} (\epsilon_{ik}(\boldsymbol{v})) + \partial_{k} (\epsilon_{ij}(\boldsymbol{v})) - \partial_{i} (\epsilon_{jk}(\boldsymbol{v})). \end{align*} Here, $\epsilon_{ik}(\boldsymbol{v}) = \left(\underline{\boldsymbol{\epsilon}}(\boldsymbol{v})\right)_{ik}$. Consequently, we have \begin{align*} \Vert \boldsymbol{n}abla (\partial_{j} v_{i} - \partial_{i} v_{j}) \Vert_{0,K} \leq C \Vert \boldsymbol{n}abla \underline{\boldsymbol{\epsilon}} (\boldsymbol{v}) \Vert_{0,K} \leq C h_{K}^{-1}\Vert \underline{\boldsymbol{\epsilon}} (\boldsymbol{v}) \Vert_{0,K}. \end{align*} We define an anti-symmetric matrix $\underline{\boldsymbol{B}}_{K}$ by \begin{align*} \left( \underline{\boldsymbol{B}}_{K} \right)_{ij} = \dfrac{1}{2\vert K\vert}\int_{K} (\partial_{j} v_{i} - \partial_{i} v_{j}) d\boldsymbol{x} \quad 1\leq i,j\leq 3. \end{align*} We take $\boldsymbol{\Lambda} = \underline{\boldsymbol{B}}_{K}\boldsymbol{x}$, which is obviously in $\Upsilon({K})$. Then, we have \begin{align*} \int_{K} \left( \boldsymbol{n}abla (\boldsymbol{v}-\boldsymbol{\Lambda}) - \underline{\boldsymbol{\epsilon}}(\boldsymbol{v}) \right) d\boldsymbol{x}= \int_K (\boldsymbol{n}abla \boldsymbol{v} - \underline{\boldsymbol{\epsilon}} (\boldsymbol{v}) ) \, d \boldsymbol{x} - \underline{\boldsymbol{B}}_K \int_K 1 \, d \boldsymbol{x} = 0. \end{align*} By the Poincar\'{e} inequality, we have \begin{align*} \Vert \boldsymbol{n}abla (\boldsymbol{v}-\boldsymbol{\Lambda}) - \underline{\boldsymbol{\epsilon}}(\boldsymbol{v})\Vert_{0, K} \leq C h_{K} \sum_{1\leq i,j\leq 3}\Vert \boldsymbol{n}abla (\partial_{j} v_{i} - \partial_{i} v_{j}) \Vert_{0,K} \leq C \Vert \underline{\boldsymbol{\epsilon}} (\boldsymbol{v}) \Vert_{0,K}. \end{align*} We immediately have that \begin{align*} \Vert \boldsymbol{n}abla (\boldsymbol{v} - \boldsymbol{\Lambda})\Vert_{0, K} \leq C \Vert \underline{\boldsymbol{\epsilon}}(\boldsymbol{v})\Vert_{0, K}. \end{align*} This completes the proof. \end{proof} {\textbf{Step 1: The error equation.}} We first present the error equation for the analysis. 
\begin{lemma} Let $(\boldsymbol{u}, \underline{\boldsymbol{\sigma}}), (\boldsymbol{u}_h, \underline{\boldsymbol{\sigma}}_h, \widehat{\boldsymbol{u}}_h)$ solve \eqref{elasticity_equation} and \eqref{HDG_formulation} respectively, we have \begin{subequations}\label{error_equation} \begin{alignat}{1} \label{error_equation_1} \bint{\mathcal{A} \underline{\boldsymbol{e_{\sigma}}}}{\underline{\boldsymbol{v}}} + \bint{\boldsymbol{e_u}}{\boldsymbol{n}abla \cdot \underline{\boldsymbol{v}}} - \bintEh{\boldsymbol{e_u}hat}{\underline{\boldsymbol{v}} \boldsymbol{n}} &= \bint{\mathcal{A} (\underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}} - \underline{\boldsymbol{\sigma}}) }{\underline{\boldsymbol{v}}},\\ \label{error_equation_2} \bint{\underline{\boldsymbol{e_{\sigma}}}}{\boldsymbol{n}abla \boldsymbol{\omega}} - \bintEh{\underline{\boldsymbol{\sigma}} \boldsymbol{n} - \widehat{\underline{\boldsymbol{\sigma}}}_h \boldsymbol{n}}{\boldsymbol{\omega}} & = 0,\\ \label{error_equation_3} \bintEhi{\underline{\boldsymbol{\sigma}} \boldsymbol{n} - \widehat{\underline{\boldsymbol{\sigma}}}_h \boldsymbol{n}}{\boldsymbol{\mu}} & = 0,\\ \label{error_equation_4} \bintEhb{\boldsymbol{e_u}hat}{\boldsymbol{\mu}} & = 0, \end{alignat} \end{subequations} for all $(\underline{\boldsymbol{v}}, \boldsymbol{\omega}, \boldsymbol{\mu}) \in \underline{\boldsymbol{V}}_h \times \boldsymbol{W}_h \times \boldsymbol{M}_h$. \end{lemma} \begin{proof} We notice that the exact solution $(\boldsymbol{u}, \underline{\boldsymbol{\sigma}}, \boldsymbol{u}|_{\mathcal{E}_h})$ also satisfies the equation \eqref{HDG_formulation}. Hence, {after simple algebraic manipulations, we get that} \begin{alignat*}{1} \bint{\mathcal{A} \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}}}{\underline{\boldsymbol{v}}} + \bint{\boldsymbol{\Pi_W} \boldsymbol{u}}{\boldsymbol{n}abla \cdot \underline{\boldsymbol{v}}} - \bintEh{\boldsymbol{P_M} \boldsymbol{u}}{\underline{\boldsymbol{v}} \boldsymbol{n}} &= \\ -\bint{\mathcal{A}(\underline{\boldsymbol{\sigma}} - \Piv \underline{\boldsymbol{\sigma}})}{\underline{\boldsymbol{v}}} + \bintEh{\boldsymbol{u} - \boldsymbol{P_M} \boldsymbol{u}}{\underline{\boldsymbol{v}}\boldsymbol{n}} &- \bint{\boldsymbol{u} - \Piw \boldsymbol{u}}{\boldsymbol{n}abla \cdot \underline{\boldsymbol{v}}} ,\\ \bint{\underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}}}{\boldsymbol{n}abla \boldsymbol{\omega}} - \bintEh{\underline{\boldsymbol{\sigma}} \boldsymbol{n}}{\boldsymbol{\omega}} = - \bint{\boldsymbol{f}}{\boldsymbol{\omega}} - &\bint{\underline{\boldsymbol{\sigma}} - \Piv \underline{\boldsymbol{\sigma}}}{\boldsymbol{n}abla \boldsymbol{\omega}} \\ \bintEhi{\underline{\boldsymbol{\sigma}} \boldsymbol{n}}{\boldsymbol{\mu}} & = 0,\\ \bintEhb{\boldsymbol{P_M} \boldsymbol{u}}{\boldsymbol{\mu}} & = - \bintEhb{\boldsymbol{u} - \boldsymbol{P_M} \boldsymbol{u}}{\boldsymbol{\mu}}, \end{alignat*} {for all $(\underline{\boldsymbol{v}}, \boldsymbol{w}, \boldsymbol{\mu}) \in \underline{\boldsymbol{V}}_h \times \boldsymbol{W}_h \times \boldsymbol{M}_h$.} Notice that the local spaces satisfy the following inclusion property: \[ \boldsymbol{n}abla \cdot \underline{\boldsymbol{V}}(K) \subset \boldsymbol{W}(K), \quad \underline{\boldsymbol{\epsilon}} (\boldsymbol{W}(K)) \subset \underline{\boldsymbol{V}}(K), \quad \underline{\boldsymbol{V}}(K) \boldsymbol{n} |_{F} \subset \boldsymbol{M}(F). 
\] Hence by the property of the $L^2$-projection, the above system can be simplified as: \begin{alignat*}{1} \bint{\mathcal{A} \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}}}{\underline{\boldsymbol{v}}} + \bint{\boldsymbol{\Pi_W} \boldsymbol{u}}{\boldsymbol{n}abla \cdot \underline{\boldsymbol{v}}} - \bintEh{\boldsymbol{P_M} \boldsymbol{u}}{\underline{\boldsymbol{v}} \boldsymbol{n}} &= -\bint{\mathcal{A}(\underline{\boldsymbol{\sigma}} - \Piv \underline{\boldsymbol{\sigma}})}{\underline{\boldsymbol{v}}}, \\ \bint{\underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}}}{\boldsymbol{n}abla \boldsymbol{\omega}} - \bintEh{\underline{\boldsymbol{\sigma}} \boldsymbol{n}}{\boldsymbol{\omega}} &= - \bint{\boldsymbol{f}}{\boldsymbol{\omega}}, \\ \bintEhi{\underline{\boldsymbol{\sigma}} \boldsymbol{n}}{\boldsymbol{\mu}} & = 0,\\ \bintEhb{\boldsymbol{P_M} \boldsymbol{u}}{\boldsymbol{\mu}} & = 0, \end{alignat*} for all $(\underline{\boldsymbol{v}}, \boldsymbol{w}, \boldsymbol{\mu}) \in \underline{\boldsymbol{V}}_h \times \boldsymbol{W}_h \times \boldsymbol{M}_h$. Here we applied the fact that $\bint{\underline{\boldsymbol{\sigma}} - \Piv \underline{\boldsymbol{\sigma}}}{\boldsymbol{n}abla \boldsymbol{\omega}} = \bint{\underline{\boldsymbol{\sigma}} - \Piv \underline{\boldsymbol{\sigma}}}{{\underline{\boldsymbol{\epsilon}} (\boldsymbol{\omega}}) } = 0$. If we now subtract the equations \eqref{HDG_formulation}, we obtain the result. This completes the proof. \end{proof} {\textbf{Step 2: Estimate of $\underline{\boldsymbol{e_{\sigma}}}$.}} We are now ready to obtain our first estimate. \begin{proposition}\label{energy_argument} We have \[ \bint{\mathcal{A} \underline{\boldsymbol{e_{\sigma}}}}{\underline{\boldsymbol{e_{\sigma}}}} + \bintEh{\tau (\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)}{\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat} = -\bint{\mathcal{A}(\underline{\boldsymbol{\sigma}} - \Piv \underline{\boldsymbol{\sigma}})}{\underline{\boldsymbol{e_{\sigma}}}}+ T_1 - T_2, \] where $T_1, T_2$ are defined as: \begin{align*} T_1 & := \bintEh{\boldsymbol{e_u} - \boldsymbol{e_u}hat}{\underline{\boldsymbol{\sigma}} \boldsymbol{n} - (\underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}}) \boldsymbol{n}}, \\ T_2 & := \bintEh{\boldsymbol{e_u} - \boldsymbol{e_u}hat}{\tau (\boldsymbol{P_M} (\boldsymbol{u} - \Piw \boldsymbol{u}))}. \end{align*} \end{proposition} \begin{proof} By the error equation \eqref{error_equation_4} we know that $\boldsymbol{e_u}hat = 0$ on $\partial \Omega$. This implies that \[ \bintEhb{\boldsymbol{e_u}hat}{\underline{\boldsymbol{\sigma}} \boldsymbol{n} - \widehat{\underline{\boldsymbol{\sigma}}}_h \boldsymbol{n}} = 0. \] Now taking $(\underline{\boldsymbol{v}}, \boldsymbol{w}, \boldsymbol{\mu}) = (\underline{\boldsymbol{e_{\sigma}}}, \boldsymbol{e_u}, \boldsymbol{e_u}hat)$ in error equations \eqref{error_equation_1} - \eqref{error_equation_3} and adding these equations together with the above identity, we obtain, after some algebraic manipulation, \begin{equation}\label{energy_1} \bint{\mathcal{A} \underline{\boldsymbol{e_{\sigma}}}}{\underline{\boldsymbol{e_{\sigma}}}} + \bintEh{\boldsymbol{e_u} - \boldsymbol{e_u}hat}{\underline{\boldsymbol{e_{\sigma}}} \boldsymbol{n} - (\underline{\boldsymbol{\sigma}} \boldsymbol{n} - \widehat{\underline{\boldsymbol{\sigma}}}_h \boldsymbol{n})} = - \bint{\mathcal{A}(\underline{\boldsymbol{\sigma}} - \Piv \underline{\boldsymbol{\sigma}})}{\underline{\boldsymbol{e_{\sigma}}}}. 
\end{equation} Now we work with the second term on the left hand side, \begin{align*} \underline{\boldsymbol{e_{\sigma}}} \boldsymbol{n} - (\underline{\boldsymbol{\sigma}} \boldsymbol{n} - \widehat{\underline{\boldsymbol{\sigma}}}_h \boldsymbol{n}) &= \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}} \boldsymbol{n} - \underline{\boldsymbol{\sigma}}_h \boldsymbol{n} - \underline{\boldsymbol{\sigma}} \boldsymbol{n} + \widehat{\underline{\boldsymbol{\sigma}}}_h \boldsymbol{n} \intertext{by the definition of the numerical trace \eqref{numerical_trace}, } & = -(\underline{\boldsymbol{\sigma}} - \Piv \underline{\boldsymbol{\sigma}}) \boldsymbol{n} - \tau(\boldsymbol{P_M} \boldsymbol{u}_h - \widehat{\boldsymbol{u}}_h), \\ & = -(\underline{\boldsymbol{\sigma}} - \Piv \underline{\boldsymbol{\sigma}}) \boldsymbol{n} + \tau (\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat) - \tau(\boldsymbol{P_M}( \boldsymbol{\Pi_W} \boldsymbol{u} - \boldsymbol{u})), \end{align*} the last step is by the definition of $\boldsymbol{e_u}, \boldsymbol{e_u}hat$. Inserting the above identity into \eqref{energy_1}, moving terms around, we have \[ \bint{\mathcal{A} \underline{\boldsymbol{e_{\sigma}}}}{\underline{\boldsymbol{e_{\sigma}}}} + \bintEh{\boldsymbol{e_u} - \boldsymbol{e_u}hat}{\tau(\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)} = -\bint{\mathcal{A}(\underline{\boldsymbol{\sigma}} - \Piv \underline{\boldsymbol{\sigma}})}{\underline{\boldsymbol{e_{\sigma}}}}+ T_1 - T_2. \] Finally, notice that on each $F \in \partial \mathcal{T}_h$, $\tau(\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)|_F \in \boldsymbol{M}(F)$, so we have \[ \bintEh{\boldsymbol{e_u} - \boldsymbol{e_u}hat}{\tau(\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)} = \bintEh{\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat}{\tau(\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)}. \] This completes the proof. \end{proof} From the above energy argument we can see that we need to bound $T_1, T_2$ in order to have an estimate for $\underline{\boldsymbol{e_{\sigma}}}$. Next we present the estimates for these two terms: \begin{lemma}\label{Two_terms} If the parameter $\tau = \mathcal{O}(h^{-1})$, we have \begin{align*} T_1 & \le C h^{t} \|\underline{\boldsymbol{\sigma}}\|_{t, \Omega} \, (\|\tau^{\frac12}(\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)\|_{\partial \mathcal{T}_h} + \|\underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u})\|_{\mathcal{T}_h}) \\ T_2 & \le C h^{s-1} \|\boldsymbol{u}\|_{s, \Omega} \, \|\tau^{\frac12}(\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)\|_{\partial \mathcal{T}_h} , \end{align*} for all $1 \le t \le k+1, 1 \le s \le k+2$. \end{lemma} \begin{proof} We first bound $T_2$. 
We have \begin{align*} T_2 &= \bintEh{\boldsymbol{e_u} - \boldsymbol{e_u}hat}{\tau (\boldsymbol{P_M} (\boldsymbol{u} - \Piw \boldsymbol{u}))} = \bintEh{\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat}{\tau (\boldsymbol{P_M} (\boldsymbol{u} - \Piw \boldsymbol{u}))} \\ & = \bintEh{\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat}{\tau (\boldsymbol{u} - \Piw \boldsymbol{u})} \\ &\le \|\tau^{\frac12}(\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)\|_{\partial \mathcal{T}_h} \tau^{\frac12} \|\boldsymbol{u} - \boldsymbol{\Pi_W} \boldsymbol{u}\|_{\partial \mathcal{T}_h}\\ & \le C h^{s} (\tau^{\frac12} h^{-\frac12}) \|\tau^{\frac12}(\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)\|_{\partial \mathcal{T}_h} \|\boldsymbol{u}\|_{s, \Omega}, \end{align*} for all $1 \le s \le k+2$. The last step we applied the inequality \eqref{classical_ineq_4}. The estimate for $T_1$ is much more sophisticated. We first split $T_1$ into two parts: \[T_1 = T_{11} + T_{12},\] where \begin{align*} T_{11} & := \bintEh{\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat}{\underline{\boldsymbol{\sigma}} \boldsymbol{n} - (\underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}}) \boldsymbol{n}}, \\ T_{12} &:= \bintEh{\boldsymbol{e_u} - \boldsymbol{P_M} \boldsymbol{e_u}}{\underline{\boldsymbol{\sigma}} \boldsymbol{n} - (\underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}}) \boldsymbol{n}}. \end{align*} For $T_{11}$, we simply apply the Cauchy-Schwarz inequality, \begin{align*} T_{11} &\le \|\tau^{\frac12}(\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)\|_{\partial \mathcal{T}_h} \, \tau^{-\frac12}\|\underline{\boldsymbol{\sigma}} \boldsymbol{n} - (\underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}}) \boldsymbol{n}\|_{\partial \mathcal{T}_h}\\ &\le C h^{t} (\tau^{-\frac12}h^{-\frac12}) \|\underline{\boldsymbol{\sigma}}\|_{t, \Omega} \|\tau^{\frac12}(\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)\|_{\partial \mathcal{T}_h}, \end{align*} for all $1 \le t \le k+1$. Here we used the inequality \eqref{classical_ineq_5}. Now we work on $T_{12}$. Using the $L^2$-orthogonal property of the projection $\boldsymbol{P_M}$, we can write \begin{align*} T_{12} &= \bintEh{\boldsymbol{e_u} - \boldsymbol{P_M} \boldsymbol{e_u}}{\underline{\boldsymbol{\sigma}} \boldsymbol{n} - (\underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}}) \boldsymbol{n}} \\ &= \bintEh{\boldsymbol{e_u} - \boldsymbol{P_M} \boldsymbol{e_u}}{\underline{\boldsymbol{\sigma}} \boldsymbol{n} } \\ & = \bintEh{\boldsymbol{e_u} - \boldsymbol{P_M} \boldsymbol{e_u}}{\underline{\boldsymbol{\sigma}} \boldsymbol{n} - \boldsymbol{P_M} (\underline{\boldsymbol{\sigma}} \boldsymbol{n})}, \intertext{by the fact $\underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}} \boldsymbol{n}|_{F}, \boldsymbol{P_M} (\underline{\boldsymbol{\sigma}} \boldsymbol{n})|_{F} \in \boldsymbol{M}(F)$ for all $F \in \partial \mathcal{T}_h$,} T_{12} & =\bintEh{\boldsymbol{e_u}}{\underline{\boldsymbol{\sigma}} \boldsymbol{n} - \boldsymbol{P_M} (\underline{\boldsymbol{\sigma}} \boldsymbol{n})}, \quad \text{since $\boldsymbol{P_M} \boldsymbol{e_u}|_F \in \boldsymbol{M}(F), \forall F \in \partial \mathcal{T}_h$,} \\ & = \bintEh{\boldsymbol{e_u} + \boldsymbol{\Lambda}}{\underline{\boldsymbol{\sigma}} \boldsymbol{n} - \boldsymbol{P_M} (\underline{\boldsymbol{\sigma}} \boldsymbol{n})}, \end{align*} where $\boldsymbol{\Lambda} \in \boldsymbol{L}^2(\Omega)$ is any vector-valued function in $\Upsilon_h$. 
Notice that the last step holds only if $\Upsilon_h|_F \subset \boldsymbol{M}(F)$ for all $F \in \partial \mathcal{T}_h$, which is true if $k \ge 1$. Next, on each $K \in \mathcal{T}_h$, if we denote by $\overline{\boldsymbol{v}}$ the average of a function $\boldsymbol{v}$ over $K$, then we have \begin{align*} \bintK{\boldsymbol{e_u} + \boldsymbol{\Lambda}}{\underline{\boldsymbol{\sigma}} \boldsymbol{n} - \boldsymbol{P_M} (\underline{\boldsymbol{\sigma}} \boldsymbol{n})} &= \bintK{\boldsymbol{e_u} + \boldsymbol{\Lambda} - \overline{(\boldsymbol{e_u} + \boldsymbol{\Lambda})}}{\underline{\boldsymbol{\sigma}} \boldsymbol{n} - \boldsymbol{P_M} (\underline{\boldsymbol{\sigma}} \boldsymbol{n})}\\ & \le \|\boldsymbol{e_u} + \boldsymbol{\Lambda} - \overline{(\boldsymbol{e_u} + \boldsymbol{\Lambda})}\|_{\partial K} \|\underline{\boldsymbol{\sigma}} \boldsymbol{n} - \boldsymbol{P_M} (\underline{\boldsymbol{\sigma}} \boldsymbol{n})\|_{\partial K}, \intertext{by the standard inequalities \eqref{classical_ineq_6}, \eqref{classical_ineq_7},} \bintK{\boldsymbol{e_u} + \boldsymbol{\Lambda}}{\underline{\boldsymbol{\sigma}} \boldsymbol{n} - \boldsymbol{P_M} (\underline{\boldsymbol{\sigma}} \boldsymbol{n})} &\le C h^{t-1} \|\underline{\boldsymbol{\sigma}}\|_{t, K} \|\boldsymbol{e_u} + \boldsymbol{\Lambda} - \overline{(\boldsymbol{e_u} + \boldsymbol{\Lambda})}\|_{K}\\ &\le C h^t \|\underline{\boldsymbol{\sigma}}\|_{t, K} \|\nabla (\boldsymbol{e_u} + \boldsymbol{\Lambda})\|_K, \intertext{for all $1 \le t \le k+1$. The last step follows from the Poincar\'{e} inequality. Notice that the constant $C$ in the above inequality is independent of $\boldsymbol{\Lambda}\in \Upsilon_h$. Applying Lemma \ref{symmetric_grad} now yields } \bintK{\boldsymbol{e_u} + \boldsymbol{\Lambda}}{\underline{\boldsymbol{\sigma}} \boldsymbol{n} - \boldsymbol{P_M} (\underline{\boldsymbol{\sigma}} \boldsymbol{n})} & \le C h^t \|\underline{\boldsymbol{\sigma}}\|_{t, K} \|\underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u})\|_K. \end{align*} Summing over all $K \in \mathcal{T}_h$, we have \[ T_{12} \le C h^t \|\underline{\boldsymbol{\sigma}}\|_{t, \Omega} \|\underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u})\|_{\mathcal{T}_h}, \] for all $1 \le t \le k+1$. We complete the proof by combining the estimates for $T_2, T_{11}, T_{12}$. \end{proof} Combining Lemma \ref{Two_terms} and Proposition \ref{energy_argument}, we obtain the following estimate. \begin{corollary}\label{estimate_1} If the parameter $\tau = \mathcal{O}(h^{-1})$, then we have \begin{align*} \|\underline{\boldsymbol{e_{\sigma}}}\|^2_{L^{2}(\mathcal{A},\Omega)} &+ \|\tau^{\frac12}(\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)\|^2_{\partial \mathcal{T}_h} \\ &\le C \left( h^{2t} \|\underline{\boldsymbol{\sigma}}\|^2_{t, \Omega} + h^{2(s-1)} \|\boldsymbol{u}\|^2_{s, \Omega} + h^t \|\underline{\boldsymbol{\sigma}}\|_{t, \Omega} \|\underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u})\|_{\mathcal{T}_h}\right), \end{align*} for all $1 \le s \le k+2$, $1 \le t \le k+1$, where the constant $C$ is independent of $h$ and of the exact solution. \end{corollary} The proof is omitted. One can obtain the above result by the Cauchy-Schwarz inequality and the weighted Young's inequality.
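For the reader's convenience, we sketch the omitted manipulation. The weighted Young's inequality is used in the form $ab \le \epsilon a^2 + \frac{1}{4\epsilon} b^2$, valid for any $\epsilon>0$; for instance,
\[
C h^{t} \|\underline{\boldsymbol{\sigma}}\|_{t, \Omega} \, \|\tau^{\frac12}(\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)\|_{\partial \mathcal{T}_h} \le \frac12 \|\tau^{\frac12}(\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)\|^2_{\partial \mathcal{T}_h} + \frac{C^2}{2}\, h^{2t} \|\underline{\boldsymbol{\sigma}}\|^2_{t, \Omega}.
\]
In this way, the terms of Lemma \ref{Two_terms} involving $\|\tau^{\frac12}(\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)\|_{\partial \mathcal{T}_h}$, as well as the term $\bint{\mathcal{A}(\underline{\boldsymbol{\sigma}} - \Piv \underline{\boldsymbol{\sigma}})}{\underline{\boldsymbol{e_{\sigma}}}}$, are absorbed into the left hand side of the identity in Proposition \ref{energy_argument}, while the term $h^t \|\underline{\boldsymbol{\sigma}}\|_{t, \Omega} \|\underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u})\|_{\mathcal{T}_h}$ is kept as it is.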
Finally, we can finish the estimate for $\underline{\boldsymbol{e_{\sigma}}}$ by the following estimate for $\underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u})$: \begin{lemma}\label{estimate_gradu} Under the same assumptions as in Corollary \ref{estimate_1}, we have \[ \|\underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u})\|_{\mathcal{T}_h} \le C \left(h^t \|\underline{\boldsymbol{\sigma}}\|_{t, \Omega} + \|\underline{\boldsymbol{e_{\sigma}}}\|_{L^{2}(\mathcal{A},\Omega)} + \|\tau^{\frac12}(\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)\|_{\partial \mathcal{T}_h}\right), \] for all $0 \le t \le k+1$. \end{lemma} \begin{proof} Notice that $\underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u}) \in \underline{\boldsymbol{V}}_h$, so we can take $\underline{\boldsymbol{v}}=\underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u})$ in the error equation \eqref{error_equation_1}; after integrating by parts, we have \[ \bint{\mathcal{A} \underline{\boldsymbol{e_{\sigma}}}}{\underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u})} - \bint{ \nabla \boldsymbol{e_u}}{ \underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u})} + \bintEh{\boldsymbol{e_u} - \boldsymbol{e_u}hat}{\underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u}) \boldsymbol{n}} = \bint{\mathcal{A} (\underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}} - \underline{\boldsymbol{\sigma}}) }{\underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u})}. \] Since $\underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u}) \in \underline{\boldsymbol{V}}_h$ is symmetric, we have \[ \bint{ \nabla \boldsymbol{e_u}}{ \underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u})} = \| \underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u})\|^2_{\mathcal{T}_h}, \quad \bintEh{\boldsymbol{e_u} - \boldsymbol{e_u}hat}{\underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u}) \boldsymbol{n}} = \bintEh{\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat}{\underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u}) \boldsymbol{n}}.
\] Inserting these two identities into the first equation, we have \begin{align*} \| \underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u})\|^2_{\mathcal{T}_h} &= \bint{\mathcal{A} \underline{\boldsymbol{e_{\sigma}}}}{\underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u})} + \bintEh{\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat}{\underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u}) \boldsymbol{n}}\\ & \quad + \bint{\mathcal{A} (\underline{\boldsymbol{\sigma}} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}}) }{\underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u})}\\ & \le C \|\underline{\boldsymbol{e_{\sigma}}}\|_{L^{2}(\mathcal{A},\Omega)} \| \underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u})\|_{\mathcal{T}_h} + C \tau^{-\frac12} \|\tau^{\frac12}(\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)\|_{\partial \mathcal{T}_h} \|\underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u})\boldsymbol{n}\|_{\partial \mathcal{T}_h} \\ & \quad + C h^t \|\underline{\boldsymbol{\sigma}}\|_{t, \Omega}\|\underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u})\|_{\mathcal{T}_h} \\ & \le C \|\underline{\boldsymbol{e_{\sigma}}}\|_{L^{2}(\mathcal{A},\Omega)} \| \underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u})\|_{\mathcal{T}_h} + C \tau^{-\frac12} h^{-\frac12} \|\tau^{\frac12}(\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)\|_{\partial \mathcal{T}_h} \|\underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u})\|_{\mathcal{T}_h} \\ & \quad + C h^t \|\underline{\boldsymbol{\sigma}}\|_{t, \Omega}\|\underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u})\|_{\mathcal{T}_h} \qquad \text{by the inverse inequality \eqref{classical_ineq_6}.} \end{align*} The proof is completed by using the assumption $\tau = \mathcal{O}(h^{-1})$. \end{proof} Finally, combining Lemma \ref{estimate_gradu} and Corollary \ref{estimate_1}, substituting the bound for $\|\underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u})\|_{\mathcal{T}_h}$ into the right hand side and absorbing terms by the weighted Young's inequality, we have our first error estimate: \begin{corollary}\label{estimate_sigma} Under the same assumptions as in Corollary \ref{estimate_1}, we have \begin{equation*} \|\underline{\boldsymbol{e_{\sigma}}}\|_{L^{2}(\mathcal{A},\Omega)} + \|\tau^{\frac12}(\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)\|_{\partial \mathcal{T}_h} + \| \underline{\boldsymbol{\epsilon}} (\boldsymbol{e_u})\|_{\mathcal{T}_h} \le C( h^t \|\underline{\boldsymbol{\sigma}}\|_{t, \Omega} + h^{s-1} \|\boldsymbol{u}\|_{s, \Omega}), \end{equation*} for all $1 \le t \le k+1$, $1 \le s \le k+2$, where the constant $C$ is independent of $h$ and of the exact solution. \end{corollary} One can see that, by taking $t=k+1$ and $s=k+2$, both errors $\underline{\boldsymbol{e_{\sigma}}}$ and $\underline{\boldsymbol{\epsilon}}(\boldsymbol{e_u})$ achieve the optimal convergence rate. Moreover, if we take $\tau=1/h$, we readily obtain the superconvergence property \[ \|h^{\frac12}(\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)\|_{\partial \mathcal{T}_h}\le C\, h^{k+2}, \] for smooth solutions. It is this superconvergence property that allows us to obtain optimal convergence in the stress and, as we are going to see next, in the displacement. {\textbf{Step 3: Estimate of $\boldsymbol{e_u}$}.} Next we use a standard duality argument to get an estimate for $\boldsymbol{e_u}$. First we present an important identity.
\begin{proposition}\label{duality_argument} Assume that $(\boldsymbol{\phi}, \underline{\boldsymbol{\psi}}) \in \boldsymbol{H}^2(\Omega) \times \underline{\boldsymbol{H}}^1(\Omega)$ is the solution of the adjoint problem \eqref{dual_problem_1}, we have \begin{alignat*}{1} \|\boldsymbol{e_u}\|^2_{\Omega} &= \bint{\mathcal{A} \underline{\boldsymbol{e_{\sigma}}}}{\underline{\boldsymbol{\psi}} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\psi}}} - \bint{\mathcal{A} (\underline{\boldsymbol{\sigma}} - \Piv \underline{\boldsymbol{\sigma}})}{\underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\psi}}}\\ & \quad - \bintEh{\underline{\boldsymbol{e_{\sigma}}} \boldsymbol{n} - (\underline{\boldsymbol{\sigma}} \boldsymbol{n} - \widehat{\underline{\boldsymbol{\sigma}}}_h \boldsymbol{n})}{\boldsymbol{\phi} - \boldsymbol{\Pi_W} \boldsymbol{\phi}} + \bintEh{\boldsymbol{e_u} - \boldsymbol{e_u}hat}{(\underline{\boldsymbol{\psi}} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\psi}})\boldsymbol{n}}. \end{alignat*} \end{proposition} \begin{proof} By the dual equation \eqref{dual_problem}, we can write \begin{alignat*}{1} \|\boldsymbol{e_u}\|^2_{\Omega} & = \bint{\boldsymbol{e_u}}{\boldsymbol{n}abla \cdot \underline{\boldsymbol{\psi}}} + \bint{\underline{\boldsymbol{e_{\sigma}}}}{\mathcal{A} \underline{\boldsymbol{\psi}} - \underline{\boldsymbol{\epsilon}} (\boldsymbol{\phi})} \\ & = \bint{\boldsymbol{e_u}}{\boldsymbol{n}abla \cdot \underline{\boldsymbol{\psi}}} + \bint{\mathcal{A} \underline{\boldsymbol{e_{\sigma}}}}{\underline{\boldsymbol{\psi}}} - \bint{\underline{\boldsymbol{e_{\sigma}}}}{\boldsymbol{n}abla \boldsymbol{\phi}} \\ & = \bint{\boldsymbol{e_u}}{\boldsymbol{n}abla \cdot \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\psi}}} + \bint{\mathcal{A} \underline{\boldsymbol{e_{\sigma}}}}{\underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\psi}}} - \bint{\underline{\boldsymbol{e_{\sigma}}}}{\boldsymbol{n}abla \boldsymbol{\boldsymbol{\Pi_W} \phi}} \\ &+ \bint{\mathcal{A} \underline{\boldsymbol{e_{\sigma}}}}{\underline{\boldsymbol{\psi}} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\psi}}}+ \bint{\boldsymbol{e_u}}{\boldsymbol{n}abla \cdot (\underline{\boldsymbol{\psi}} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\psi}})} - \bint{\underline{\boldsymbol{e_{\sigma}}}}{\boldsymbol{n}abla (\boldsymbol{\phi} - \boldsymbol{\Pi_W} \boldsymbol{\phi})}, \intertext{integrating by parts for the last two terms, applying the property of the $L^2$-projections, yields,} \|\boldsymbol{e_u}\|^2_{\Omega}& = \bint{\boldsymbol{e_u}}{\boldsymbol{n}abla \cdot \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\psi}}} + \bint{\mathcal{A} \underline{\boldsymbol{e_{\sigma}}}}{\underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\psi}}} - \bint{\underline{\boldsymbol{e_{\sigma}}}}{\boldsymbol{n}abla \boldsymbol{\boldsymbol{\Pi_W} \phi}} \\ &+ \bint{\mathcal{A} \underline{\boldsymbol{e_{\sigma}}}}{\underline{\boldsymbol{\psi}} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\psi}}}+ \bintEh{\boldsymbol{e_u}}{ (\underline{\boldsymbol{\psi}} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\psi}})\boldsymbol{n}} - \bintEh{\underline{\boldsymbol{e_{\sigma}}} \boldsymbol{n}}{\boldsymbol{\phi} - \boldsymbol{\Pi_W} \boldsymbol{\phi}}. 
\intertext{Taking $\underline{\boldsymbol{v}} := \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\psi}}$ and $\boldsymbol{\omega} := \boldsymbol{\Pi_W} \boldsymbol{\phi}$ in the error equations \eqref{error_equation_1} and \eqref{error_equation_2}, respectively, inserting these two equations into above identity, we obtain} \|\boldsymbol{e_u}\|^2_{\Omega}& = \bintEh{\boldsymbol{e_u}hat}{\underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\psi}} \boldsymbol{n}} - \bint{\mathcal{A} (\underline{\boldsymbol{\sigma}} - \Piv \underline{\boldsymbol{\sigma}})}{\underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\psi}}} - \bintEh{\underline{\boldsymbol{\sigma}}\boldsymbol{n} - \widehat{\underline{\boldsymbol{\sigma}}}_h \boldsymbol{n}}{ \boldsymbol{\boldsymbol{\Pi_W} \phi}} \\ &+ \bint{\mathcal{A} \underline{\boldsymbol{e_{\sigma}}}}{\underline{\boldsymbol{\psi}} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\psi}}}+ \bintEh{\boldsymbol{e_u}}{ (\underline{\boldsymbol{\psi}} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\psi}})\boldsymbol{n}} - \bintEh{\underline{\boldsymbol{e_{\sigma}}} \boldsymbol{n}}{\boldsymbol{\phi} - \boldsymbol{\Pi_W} \boldsymbol{\phi}}. \end{alignat*} Next, note that by the regularity assumption, $(\underline{\boldsymbol{\psi}}, \boldsymbol{\phi}) \in \underline{\boldsymbol{H}}^2(\Omega) \times \boldsymbol{H}^1(\Omega)$, so the normal component of $\underline{\boldsymbol{\psi}}$ and $\boldsymbol{\phi}$ are continuous across each face $F \in \mathcal{E}_h$. By the equation \eqref{strong_cont}, the normal component of $\widehat{\underline{\boldsymbol{\sigma}}}_h$ is also strongly continuous across each face $F \in \mathcal{E}_h$. This implies that \begin{alignat*}{2} -\bintEh{\boldsymbol{e_u}hat}{\underline{\boldsymbol{\psi}} \boldsymbol{n}} & = -\bintEhb{\boldsymbol{e_u}hat}{\underline{\boldsymbol{\psi}} \boldsymbol{n}} = 0, \quad &&\text{by \eqref{error_equation_4}}, \\ \bintEh{\underline{\boldsymbol{\sigma}}\boldsymbol{n} - \widehat{\underline{\boldsymbol{\sigma}}}_h \boldsymbol{n}}{\boldsymbol{\phi}} & = \bintEhb{\underline{\boldsymbol{\sigma}}\boldsymbol{n} - \widehat{\underline{\boldsymbol{\sigma}}}_h \boldsymbol{n}}{\boldsymbol{\phi}}= 0 \quad &&\text{by \eqref{dual_problem_3}}. \end{alignat*} Adding these two zero terms into the previous equation, rearranging the terms, we obtain the expression as presented in the proposition. \end{proof} As a consequence of the result just proved, we can obtain our estimate of $\boldsymbol{e_u}$. \begin{corollary}\label{estimate_u} Under the same assumption as in Theorem \ref{estimate_1}, in addition, if the elliptic regularity property \eqref{regularity} holds, then we have \[ \|\boldsymbol{e_u}\|_{\Omega} \le C( h^{t+1} \|\underline{\boldsymbol{\sigma}}\|_{t, \Omega} + h^{s} \|\boldsymbol{u}\|_{s, \Omega}), \] for $1 \le t \le k+1, 1 \le s \le k+2$. \end{corollary} \begin{proof} We will estimate each of the terms on the right hand side of the identity in Proposition \ref{duality_argument}. \begin{align*} \bint{\mathcal{A} \underline{\boldsymbol{e_{\sigma}}}}{\underline{\boldsymbol{\psi}} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\psi}}} \le C h \|\underline{\boldsymbol{e_{\sigma}}}\|_{L^{2}(\mathcal{A},\Omega)} \|\underline{\boldsymbol{\psi}}\|_{1, \Omega} \le C h \|\underline{\boldsymbol{e_{\sigma}}}\|_{L^{2}(\mathcal{A},\Omega)} \|\boldsymbol{e_u}\|_{\Omega}, \end{align*} by the projection property \eqref{classical_ineq_2} and the regularity assumption \eqref{regularity}. 
\begin{align*} \bint{\mathcal{A} (\underline{\boldsymbol{\sigma}} - \Piv \underline{\boldsymbol{\sigma}})}{\underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\psi}}} &= \bint{\mathcal{A} (\underline{\boldsymbol{\sigma}} - \Piv \underline{\boldsymbol{\sigma}})}{\underline{\boldsymbol{\psi}}} - \bint{\mathcal{A} (\underline{\boldsymbol{\sigma}} - \Piv \underline{\boldsymbol{\sigma}})}{ \underline{\boldsymbol{\psi}} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\psi}}} \\ & \hspace{-0.5cm}= \bint{\underline{\boldsymbol{\sigma}} - \Piv \underline{\boldsymbol{\sigma}}}{\mathcal{A}\underline{\boldsymbol{\psi}} - \overline{\mathcal{A}\underline{\boldsymbol{\psi}}}} - \bint{\mathcal{A} (\underline{\boldsymbol{\sigma}} - \Piv \underline{\boldsymbol{\sigma}})}{ \underline{\boldsymbol{\psi}} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\psi}}} \\ & \le C h \|\underline{\boldsymbol{\sigma}} - \Piv \underline{\boldsymbol{\sigma}}\|_{\Omega} \|\underline{\boldsymbol{\psi}}\|_{1, \Omega}\\ &\le C h \|\underline{\boldsymbol{\sigma}} - \Piv \underline{\boldsymbol{\sigma}}\|_{\Omega} \|\boldsymbol{e_u}\|_{\Omega} \\ &\le C h^{t+1} \|\underline{\boldsymbol{\sigma}}\|_{t, \Omega} \|\boldsymbol{e_u}\|_{\Omega}, \end{align*} for all $0 \le t \le k+1$. Here we applied the Galerkin orthogonal property of the local $L^2$-projection $\underline{\boldsymbol{\Pi_V}}$ and the regularity assumption \eqref{regularity}. For the third term, by the definition of the numerical trace \eqref{numerical_trace}, we have \begin{align*} \bintEh{\underline{\boldsymbol{e_{\sigma}}} \boldsymbol{n} - (\underline{\boldsymbol{\sigma}} \boldsymbol{n} - \widehat{\underline{\boldsymbol{\sigma}}}_h \boldsymbol{n})}{\boldsymbol{\phi} - \boldsymbol{\Pi_W} \boldsymbol{\phi}} & = - \bintEh{(\underline{\boldsymbol{\sigma}} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}}) \boldsymbol{n})}{\boldsymbol{\phi} - \boldsymbol{\Pi_W} \boldsymbol{\phi}} \\ & \quad + \bintEh{\widehat{\underline{\boldsymbol{\sigma}}}_h \boldsymbol{n} - \underline{\boldsymbol{\sigma}}_h \boldsymbol{n}}{\boldsymbol{\phi} - \boldsymbol{\Pi_W} \boldsymbol{\phi}} \\ & = - \bintEh{(\underline{\boldsymbol{\sigma}} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}}) \boldsymbol{n})}{\boldsymbol{\phi} - \boldsymbol{\Pi_W} \boldsymbol{\phi}} \\ & \quad - \bintEh{\tau (\boldsymbol{P_M} \boldsymbol{u}_h - \widehat{\boldsymbol{u}}_h)}{\boldsymbol{\phi} - \boldsymbol{\Pi_W} \boldsymbol{\phi}} \\ & = - \bintEh{(\underline{\boldsymbol{\sigma}} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}}) \boldsymbol{n})}{\boldsymbol{\phi} - \boldsymbol{\Pi_W} \boldsymbol{\phi}} \\ & \quad + \bintEh{\tau (\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)}{\boldsymbol{\phi} - \boldsymbol{\Pi_W} \boldsymbol{\phi}} \\ & \quad + \bintEh{\tau (\boldsymbol{P_M} (\boldsymbol{u} - \boldsymbol{\Pi_W} \boldsymbol{u})}{\boldsymbol{\phi} - \boldsymbol{\Pi_W} \boldsymbol{\phi}}. \end{align*} To bound the third term, we only need to bound the above three terms individually. 
\begin{align*} \bintEh{(\underline{\boldsymbol{\sigma}} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}}) \boldsymbol{n})}{\boldsymbol{\phi} - \boldsymbol{\Pi_W} \boldsymbol{\phi}} & \le \|(\underline{\boldsymbol{\sigma}} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}}) \boldsymbol{n})\|_{\partial \mathcal{T}_h}\|\boldsymbol{\phi} - \boldsymbol{\Pi_W} \boldsymbol{\phi}\|_{\partial \mathcal{T}_h} \\ & \le C h^{t-\frac12} \|\underline{\boldsymbol{\sigma}}\|_{t, \Omega} h^{\frac32}\|\boldsymbol{\phi}\|_{2, \Omega}, \intertext{by the standard inequalities , \eqref{classical_ineq_4}, \eqref{classical_ineq_5},} & \le C h^{t+1}\|\underline{\boldsymbol{\sigma}}\|_{t, \Omega} \|\boldsymbol{e_u}\|_{\Omega}, \end{align*} for all $1 \le t \le k+1$. The last step is due to the regularity assumption \eqref{regularity}. Similarly, we apply the Cauchy-Schwarz inequality and \eqref{classical_ineq_4} for the other two terms: \begin{align*} \bintEh{\tau (\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)}{\boldsymbol{\phi} - \boldsymbol{\Pi_W} \boldsymbol{\phi}} & \le C \tau^{\frac12} \|\tau^{\frac12} (\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)\|_{\partial \mathcal{T}_h} \|\boldsymbol{\phi} - \boldsymbol{\Pi_W} \boldsymbol{\phi}\|_{\partial \mathcal{T}_h} \\ & \le C \tau^{\frac12} h^{\frac32} \|\tau^{\frac12} (\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)\|_{\partial \mathcal{T}_h} \|\boldsymbol{\phi}\|_{2, \Omega} \\ & \le C \tau^{\frac12} h^{\frac32} \|\tau^{\frac12} (\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)\|_{\partial \mathcal{T}_h} \|\boldsymbol{e_u}\|_{\Omega}. \end{align*} \begin{align*} \bintEh{\tau (\boldsymbol{P_M} (\boldsymbol{u} - \boldsymbol{\Pi_W} \boldsymbol{u})}{\boldsymbol{\phi} - \boldsymbol{\Pi_W} \boldsymbol{\phi}} & \le \tau \|\boldsymbol{P_M} (\boldsymbol{u} - \boldsymbol{\Pi_W} \boldsymbol{u})\|_{\partial \mathcal{T}_h} \|\boldsymbol{\phi} - \boldsymbol{\Pi_W} \boldsymbol{\phi}\|_{\partial \mathcal{T}_h} \\ & \le \tau \|\boldsymbol{u} - \boldsymbol{\Pi_W} \boldsymbol{u}\|_{\partial \mathcal{T}_h} \|\boldsymbol{\phi} - \boldsymbol{\Pi_W} \boldsymbol{\phi}\|_{\partial \mathcal{T}_h} \\ & \le C \tau h^{s-\frac12} \|\boldsymbol{u}\|_{s, \Omega} h^{\frac32}\|\boldsymbol{\phi}\|_{2, \Omega} \\ & \le C \tau h^{s+1} \|\boldsymbol{u}\|_{s, \Omega}\|\boldsymbol{e_u}\|_{\Omega}, \end{align*} for all $1 \le s \le k+2$. Finally, for the last term in Proposition \ref{duality_argument}, we can write: \begin{align*} \bintEh{\boldsymbol{e_u} - \boldsymbol{e_u}hat}{(\underline{\boldsymbol{\psi}} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\psi}}) \boldsymbol{n}} & = \bintEh{\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat}{(\underline{\boldsymbol{\psi}} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\psi}})\boldsymbol{n}} \\ &\quad + \bintEh{\boldsymbol{e_u} - \boldsymbol{P_M} \boldsymbol{e_u}}{(\underline{\boldsymbol{\psi}} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\psi}}) \boldsymbol{n}}. \end{align*} For the first term, we can apply a similar argument as in the previous steps to obtain: \[ \bintEh{\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat}{(\underline{\boldsymbol{\psi}} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\psi}})\boldsymbol{n}} \le C \tau^{-\frac12} h^{\frac12} \|\tau^{\frac12}(\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)\|_{\partial \mathcal{T}_h} \|\boldsymbol{e_u}\|_{\Omega}. 
\] For the second term, we apply the same argument as in the estimate of $T_{12}$ in the proof of Lemma \ref{Two_terms} and obtain: \begin{align*} \bintEh{\boldsymbol{e_u} - \boldsymbol{P_M} \boldsymbol{e_u}}{(\underline{\boldsymbol{\psi}} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\psi}}) \boldsymbol{n}} &\le C h \|\underline{\boldsymbol{\psi}}\|_{1, \Omega} \|\underline{\boldsymbol{\epsilon}}(\boldsymbol{e_u})\|_{\mathcal{T}_h} \\ & \le C h \|\underline{\boldsymbol{\epsilon}}(\boldsymbol{e_u})\|_{\mathcal{T}_h} \|\boldsymbol{e_u}\|_{\Omega}. \end{align*} Finally, taking $\tau = \mathcal{O}(h^{-1})$ and combining all the above estimates with Corollary \ref{estimate_sigma}, we obtain the estimate in Corollary \ref{estimate_u}. \end{proof} As a consequence of Corollaries \ref{estimate_sigma} and \ref{estimate_u}, we can obtain Theorem \ref{Main_result} by a simple triangle inequality and the approximation properties \eqref{classical_ineq_1}, \eqref{classical_ineq_2} of the projections $\boldsymbol{\Pi_W}, \underline{\boldsymbol{\Pi_V}}$. {\bf Step 4: Proof of locking-free result.} We can now give the proof of Theorem~\ref{thm_locking_free}. \begin{proof}[Proof of Theorem~\ref{thm_locking_free}] In what follows, we assume that $s$ is some arbitrary real number in $[1, k+1]$ and $C$ is a positive constant independent of $P_{T}$ and $s$. We recall that $\underline{\boldsymbol{e_{\sigma}}} = \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}} - \underline{\boldsymbol{\sigma}}_h$, $\boldsymbol{e_u} = \boldsymbol{\Pi_W} \boldsymbol{u} - \boldsymbol{u}_h$, $\boldsymbol{e_u}hat = \boldsymbol{P_M} \boldsymbol{u} - \widehat{\boldsymbol{u}}_h$. For any $B\in \mathbb{R}^{3\times 3}$, we denote $B^{D}:=B - \frac{1}{3}\text{tr}B \, I_{3}$. So, we have \begin{align*} \underline{\boldsymbol{e_{\sigma}}} = \underline{\boldsymbol{e_{\sigma}}}^{D} + \frac{1}{3}\text{tr}\underline{\boldsymbol{e_{\sigma}}}\, I_{3}. \end{align*} By Assumption~\ref{ass_isotropic} and Theorem~\ref{Main_result}, we have \begin{equation} \label{esigmaD} \Vert P_D^{\frac12}\underline{\boldsymbol{e_{\sigma}}}^{D}\Vert_{L^{2}(\Omega)}\leq \|\underline{\boldsymbol{e_{\sigma}}} \|_{L^{2}(\mathcal{A}, \Omega)} \leq C h^{s} (\|\boldsymbol{u}\|_{s+1, \Omega} + \|\underline{\boldsymbol{\sigma}}\|_{s, \Omega}). \end{equation} In order to bound $\Vert \text{tr}\underline{\boldsymbol{e_{\sigma}}}\Vert_{L^{2}(\Omega)}$ independently of $P_{T}^{-1}$, we would like to use the well-known result \cite{BrezziFortin91, BBF-book} that for any $q\in L^{2}(\Omega)$ with $\int_{\Omega}q \, dx = 0$, we have \begin{align} \label{brezzifortin_ineq} \Vert q\Vert_{L^{2}(\Omega)} \leq C_{0} \sup_{\boldsymbol{\eta}\in \boldsymbol{H}_{0}^{1}(\Omega)} \dfrac{(q,\nabla\cdot\boldsymbol{\eta})_{\Omega}}{\Vert \boldsymbol{\eta}\Vert_{H^{1}(\Omega)}}, \end{align} where $C_0$ depends solely on the domain $\Omega$. By the assumption $\boldsymbol{g}=0$, taking $\underline{\boldsymbol{v}} = I_{3}$ in (\ref{error_equation_1}), we have that $\int_{\Omega} \text{tr}(\mathcal{A}\underline{\boldsymbol{e_{\sigma}}})dx = 0$. According to Assumption~\ref{ass_isotropic} and the fact that $P_{T}>0$, we have $\int_{\Omega} \text{tr}\underline{\boldsymbol{e_{\sigma}}} \, dx = 0$.
For any $\boldsymbol{\eta}\in \boldsymbol{H}_{0}^{1}(\Omega)$, we have \begin{align*} (\frac{1}{3}\text{tr}\underline{\boldsymbol{e_{\sigma}}}, \boldsymbol{n}abla\cdot\boldsymbol{\eta})_{\Omega} = & -(\boldsymbol{n}abla (\frac{1}{3}\text{tr}\underline{\boldsymbol{e_{\sigma}}}), \boldsymbol{\eta})_{\mathcal{T}_{h}} + \langle (\frac{1}{3}\text{tr}\underline{\boldsymbol{e_{\sigma}}})\boldsymbol{n}, \boldsymbol{\eta}\rangle_{\partial\mathcal{T}_{h}}\\ = & -(\boldsymbol{n}abla (\frac{1}{3}\text{tr}\underline{\boldsymbol{e_{\sigma}}}), \boldsymbol{\Pi_W}\boldsymbol{\eta})_{\mathcal{T}_{h}} + \langle (\frac{1}{3}\text{tr}\underline{\boldsymbol{e_{\sigma}}})\boldsymbol{n}, \boldsymbol{\eta}\rangle_{\partial \mathcal{T}_{h}}\\ = & (\frac{1}{3}\text{tr}\underline{\boldsymbol{e_{\sigma}}}, \boldsymbol{n}abla\cdot \boldsymbol{\Pi_W}\boldsymbol{\eta})_{\mathcal{T}_{h}} -\langle (\frac{1}{3}\text{tr}\underline{\boldsymbol{e_{\sigma}}})\boldsymbol{n}, \boldsymbol{\Pi_W} \boldsymbol{\eta} - \boldsymbol{\eta} \rangle_{\partial \mathcal{T}_{h}}\\ = & (\underline{\boldsymbol{e_{\sigma}}} - \underline{\boldsymbol{e_{\sigma}}}^{D}, \boldsymbol{n}abla\boldsymbol{\Pi_W}\boldsymbol{\eta})_{\mathcal{T}_{h}} - \langle (\underline{\boldsymbol{e_{\sigma}}} - \underline{\boldsymbol{e_{\sigma}}}^{D})\boldsymbol{n}, \boldsymbol{\Pi_W}\boldsymbol{\eta} - \boldsymbol{\eta} \rangle_{\partial\mathcal{T}_{h}}. \end{align*} By (\ref{error_equation_2}) with $\boldsymbol{\omega} = \boldsymbol{\Pi_W} \boldsymbol{\eta}$, we have \begin{align*} & (\frac{1}{3}\text{tr}\underline{\boldsymbol{e_{\sigma}}}, \boldsymbol{n}abla\cdot\boldsymbol{\eta})_{\Omega} \\ = & \bintEh{\underline{\boldsymbol{\sigma}} \boldsymbol{n} - \widehat{\underline{\boldsymbol{\sigma}}}_h \boldsymbol{n}}{\boldsymbol{\Pi_W} \boldsymbol{\eta}} -(\underline{\boldsymbol{e_{\sigma}}}^{D}, \boldsymbol{n}abla\boldsymbol{\Pi_W}\boldsymbol{\eta})_{\mathcal{T}_{h}} - \langle (\underline{\boldsymbol{e_{\sigma}}} - \underline{\boldsymbol{e_{\sigma}}}^{D})\boldsymbol{n}, \boldsymbol{\Pi_W}\boldsymbol{\eta} - \boldsymbol{\eta} \rangle_{\partial\mathcal{T}_{h}}\\ = & T_{1} + T_{2}, \end{align*} where \begin{align*} T_{1} := & \bintEh{\underline{\boldsymbol{\sigma}} \boldsymbol{n} - \widehat{\underline{\boldsymbol{\sigma}}}_h \boldsymbol{n}}{\boldsymbol{\Pi_W} \boldsymbol{\eta}} - \langle \underline{\boldsymbol{e_{\sigma}}} \boldsymbol{n}, \boldsymbol{\Pi_W}\boldsymbol{\eta} - \boldsymbol{\eta} \rangle_{\partial\mathcal{T}_{h}},\\ T_{2} := & -(\underline{\boldsymbol{e_{\sigma}}}^{D}, \boldsymbol{n}abla\boldsymbol{\Pi_W}\boldsymbol{\eta})_{\mathcal{T}_{h}} + \langle \underline{\boldsymbol{e_{\sigma}}}^{D}\boldsymbol{n}, \boldsymbol{\Pi_W}\boldsymbol{\eta} - \boldsymbol{\eta} \rangle_{\partial\mathcal{T}_{h}}. 
\end{align*} For the bound of $T_{1}$, by (\ref{strong_cont}) and the fact that $\boldsymbol{\eta}=0$ on $\partial\Omega$, we have \begin{align*} & \bintEh{\underline{\boldsymbol{\sigma}} \boldsymbol{n} - \widehat{\underline{\boldsymbol{\sigma}}}_h \boldsymbol{n}}{\boldsymbol{\Pi_W} \boldsymbol{\eta}} \\ = & \bintEh{\underline{\boldsymbol{\sigma}} \boldsymbol{n}}{\boldsymbol{\Pi_W} \boldsymbol{\eta} - \boldsymbol{\eta}} -\bintEh{\widehat{\underline{\boldsymbol{\sigma}}}_h \boldsymbol{n}} {\boldsymbol{\Pi_W} \boldsymbol{\eta} - \boldsymbol{P_M}\boldsymbol{\eta}} \\ = & \bintEh{\underline{\boldsymbol{\sigma}} \boldsymbol{n} - \widehat{\underline{\boldsymbol{\sigma}}}_h \boldsymbol{n}} {\boldsymbol{\Pi_W} \boldsymbol{\eta} - \boldsymbol{\eta}} \\ = & \langle (\underline{\boldsymbol{\sigma}} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}})\boldsymbol{n}, \boldsymbol{\Pi_W}\boldsymbol{\eta} - \boldsymbol{\eta} \rangle_{\partial\mathcal{T}_{h}} +\langle \underline{\boldsymbol{e_{\sigma}}}\boldsymbol{n}, \boldsymbol{\Pi_W}\boldsymbol{\eta} - \boldsymbol{\eta} \rangle_{\partial\mathcal{T}_{h}} +\langle \tau(\boldsymbol{P_M} \boldsymbol{u}_{h} - \widehat{\boldsymbol{u}}_{h}), \boldsymbol{\Pi_W}\boldsymbol{\eta} - \boldsymbol{\eta} \rangle_{\partial\mathcal{T}_{h}}. \end{align*} So, we have \begin{align} \label{T1_final_form_locking_free} T_{1} = \langle (\underline{\boldsymbol{\sigma}} - \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}})\boldsymbol{n}, \boldsymbol{\Pi_W}\boldsymbol{\eta} - \boldsymbol{\eta} \rangle_{\partial\mathcal{T}_{h}} +\langle \tau(\boldsymbol{P_M} \boldsymbol{u}_{h} - \widehat{\boldsymbol{u}}_{h}), \boldsymbol{\Pi_W}\boldsymbol{\eta} - \boldsymbol{\eta} \rangle_{\partial\mathcal{T}_{h}}. \end{align} According to Corollary~\ref{estimate_sigma}, we have \begin{equation}\label{trace_error} \|\tau^{\frac12}(\boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)\|_{\partial \mathcal{T}_h} \leq C h^{s} (\|\boldsymbol{u}\|_{s+1, \Omega} + \|\underline{\boldsymbol{\sigma}}\|_{s, \Omega}). \end{equation} By the definition of $\boldsymbol{e_u}hat$ and $\boldsymbol{e_u}$, we have \begin{align*} \|\tau^{\frac12}(\boldsymbol{P_M} \boldsymbol{u}_{h} - \widehat{\boldsymbol{u}}_{h})\|^2_{\partial \mathcal{T}_h} & \leq 2\|\tau^{\frac12}( \boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)\|^2_{\partial \mathcal{T}_h} +2\|\tau^{\frac12} \boldsymbol{P_M} (\boldsymbol{u} - \boldsymbol{\Pi_W} \boldsymbol{u})\|^2_{\partial \mathcal{T}_h} \\ & \le 2\|\tau^{\frac12}( \boldsymbol{P_M} \boldsymbol{e_u} - \boldsymbol{e_u}hat)\|^2_{\partial \mathcal{T}_h} +2 \|\tau^{\frac12} (\boldsymbol{u} - \boldsymbol{\Pi_W} \boldsymbol{u})\|^2_{\partial \mathcal{T}_h}. \end{align*} Now applying Young's inequality and \eqref{trace_error}, \eqref{classical_ineq_1} we obtain: \begin{equation} \label{jump_difference_ineq1} \|\tau^{\frac12}(\boldsymbol{P_M} \boldsymbol{u}_{h} - \widehat{\boldsymbol{u}}_{h})\|_{\partial \mathcal{T}_h} \leq C h^{s} (\|\boldsymbol{u}\|_{s+1, \Omega} + \|\underline{\boldsymbol{\sigma}}\|_{s, \Omega}). \end{equation} According to \eqref{T1_final_form_locking_free}, \eqref{jump_difference_ineq1}, we have \begin{align} \label{T1_bound_locking_free} T_{1} \leq C h^{s} (\|\boldsymbol{u}\|_{s+1, \Omega} + \|\underline{\boldsymbol{\sigma}}\|_{s, \Omega}) \Vert \boldsymbol{\eta}\Vert_{H^{1}(\Omega)}. 
\end{align} For the bound of $T_{2}$, we have \begin{align*} T_{2} = & -(\underline{\boldsymbol{e_{\sigma}}}^{D}, \nabla\boldsymbol{\Pi_W}\boldsymbol{\eta})_{\mathcal{T}_{h}} + \langle \underline{\boldsymbol{e_{\sigma}}}^{D}\boldsymbol{n}, \boldsymbol{\Pi_W}\boldsymbol{\eta} - \boldsymbol{\eta} \rangle_{\partial\mathcal{T}_{h}}\\ = & -(\underline{\boldsymbol{e_{\sigma}}}^{D}, \nabla(\boldsymbol{\Pi_W}\boldsymbol{\eta}-\boldsymbol{\eta}))_{\mathcal{T}_{h}} + \langle \underline{\boldsymbol{e_{\sigma}}}^{D}\boldsymbol{n}, \boldsymbol{\Pi_W}\boldsymbol{\eta} - \boldsymbol{\eta} \rangle_{\partial\mathcal{T}_{h}} -(\underline{\boldsymbol{e_{\sigma}}}^{D}, \nabla \boldsymbol{\eta})_{\mathcal{T}_{h}}\\ = & (\nabla\cdot\underline{\boldsymbol{e_{\sigma}}}^{D}, \boldsymbol{\Pi_W}\boldsymbol{\eta}-\boldsymbol{\eta})_{\mathcal{T}_{h}} -(\underline{\boldsymbol{e_{\sigma}}}^{D}, \nabla \boldsymbol{\eta})_{\mathcal{T}_{h}}\\ = & -(\underline{\boldsymbol{e_{\sigma}}}^{D}, \nabla \boldsymbol{\eta})_{\mathcal{T}_{h}}. \end{align*} By (\ref{esigmaD}), we have \begin{align} \label{T2_bound_locking_free} T_{2} \leq C h^{s} (\|\boldsymbol{u}\|_{s+1, \Omega} + \|\underline{\boldsymbol{\sigma}}\|_{s, \Omega}) \Vert \boldsymbol{\eta}\Vert_{H^{1}(\Omega)}. \end{align} Finally, combining the estimates \eqref{T1_bound_locking_free} and \eqref{T2_bound_locking_free}, dividing by $\Vert \boldsymbol{\eta}\Vert_{H^{1}(\Omega)}$ and taking the supremum over $\boldsymbol{\eta}\in \boldsymbol{H}_{0}^{1}(\Omega)$ in \eqref{brezzifortin_ineq}, and then using \eqref{esigmaD}, we have \begin{align*} \Vert \underline{\boldsymbol{e_{\sigma}}}\Vert_{L^{2}(\Omega)} \leq C_{1} h^{s} (\|\boldsymbol{u}\|_{s+1, \Omega} + \|\underline{\boldsymbol{\sigma}}\|_{s, \Omega}). \end{align*} Here the constant $C_{1}$ is independent of $P_{T}^{-1}$. \end{proof} \section{Numerical Experiment} \label{numerical} In this section, we display numerical experiments in 2D to verify the error estimates provided in Theorem \ref{Main_result}. We also display numerical results showing that our method does not exhibit volumetric locking when the material tends to be incompressible. In addition, our numerical results suggest that the error estimates provided in Theorem \ref{thm_locking_free} for the incompressible limit case are \textit{sharp}. We carry out the numerical experiments on the domain $\Omega = (0,1) \times (0,1)$ and monitor the errors $\| \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}} - \underline{\boldsymbol{\sigma}}_h \Vert_{L^{2}(\Omega)}$ and $\| \boldsymbol{\Pi_W} \boldsymbol{u} - \boldsymbol{u}_h\Vert_{L^{2}(\Omega)}$. To explore the dependence of the convergence properties of our method on the type of mesh, we consider two types of meshes, as shown in FIGURE \ref{fig:Mesh}.
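The orders of convergence reported in the tables below are computed, as is standard, from the errors on two consecutive meshes: if $e_h$ denotes one of the two monitored errors on the mesh of size $h$, then
\[
\text{order} = \log_{2}\left(\frac{e_{h}}{e_{h/2}}\right).
\]
In view of Corollaries \ref{estimate_sigma} and \ref{estimate_u}, we expect this quantity to approach $k+1$ for the stress error and $k+2$ for the projected displacement error as the mesh is refined.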
\begin{figure} \caption{An example of Mesh-$1$(left) and Mesh-$2$(right) with $h = 0.354$} \label{fig:Mesh} \end{figure} \begin{table}[h] \begin{center} \scriptsize \begin{tabular}{ c c c c c c c c c c } \hline & &\multicolumn{2}{c}{$\| \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}} - \underline{\boldsymbol{\sigma}}_h \Vert_{L^{2}(\Omega)}$} & \multicolumn{2}{c}{$\|\boldsymbol{\Pi_W} \boldsymbol{u} - \boldsymbol{u}_h\Vert_{L^{2}(\Omega)}$} & \multicolumn{2}{c}{$\| \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}} - \underline{\boldsymbol{\sigma}}_h \Vert_{L^{2}(\Omega)}$} & \multicolumn{2}{c}{$\| \boldsymbol{\Pi_W} \boldsymbol{u} - \boldsymbol{u}_h\Vert_{L^{2}(\Omega)}$} \\\hline & &\multicolumn{4}{c}{Mesh-$1$} & \multicolumn{4}{c}{Mesh-$2$} \\\hline $k$ & Mesh & Error & Order & Error & Order & Error & Order & Error & Order \\ \hline \multirow{3}{*}{0} & $h$ & 9.81E-02 & - & 3.74E-03 & - & 4.20E-01 & - &8.28E+12 & -\\ &$h/2$ & 9.50E-02 & 0.05 & 3.69E-02 & 0.02 & 2.14E-02 & 0.97 &1.05E+12 &2.98 \\ &$h/4$ & 9.42E-02 & 0.01 & 3.68E-03 & 0.00 & 1.05E-02 & 1.02 &1.41E+11&2.90 \\ &$h/8$ & 9.41E-02 & 0.00 & 3.68E-03 & 0.00 & 5.22E-03 & 1.01 &1.79E+10&2.98 \\ &$h/16$ & 9.41E-02 & 0.00 & 3.68E-03 & 0.00 & 8.17E-01 & -3.97 &1.39E+12&-6.28 \\ \hline \multirow{3}{*}{1}& $h$ & 2.26E-03 & - & 1.88E-03 & - & 2.04E-03 & - & 9.41E-04 & -\\ &$h/2$ & 7.24E-03 & 1.65 & 3.57E-04 & 2.40 & 5.90E-03 & 1.79 & 1.45E-04 & 2.70 \\ &$h/4$ & 2.09E-03 & 1.79 & 5.51E-05 & 2.69 & 1.58E-03 & 1.92 & 2.00E-05 & 2.86\\ &$h/8$ & 5.60E-04 & 1.90 & 7.60E-06 & 2.86 & 4.08E-04 & 1.95 & 2.62E-06 & 2.93\\ &$h/16$ & 1.45E-04 & 1.95 & 9.93E-07 & 2.94 & 7.01E-06 & 2.00 & 3.35E-07 & 2.97\\ \hline \multirow{3}{*}{2}& $h$ &1.24E-03 & - &5.52E-05 & - &1.23E-03 & - &3.53E-05 & -\\ &$h/2$ &1.57E-04 &2.98 &3.74E-06 &3.88 &1.57E-04 &2.97 &2.25E-06 &3.97 \\ &$h/4$ &1.97E-05 &2.99 &2.43E-07 &3.95 &1.97E-05 &2.99 &1.42E-07 &3.99 \\ &$h/8$ &2.46E-06 &3.00 &1.54E-08 &3.97 &2.47E-06 &3.00 &8.90E-09 &3.99 \\ &$h/16$ &3.08E-07 &3.00 &9.73E-10 &3.99 &3.10E-07 &3.00 &5.58E-10 &4.00 \\ \hline \multirow{3}{*}{3}& $h$ &5.26E-05 & - &1.45E-06 & - &5.33E-05 & - &1.27E-06 & -\\ &$h/2$ &3.51E-06 &3.90 &4.90E-08 &4.89 &3.54E-06 &3.91 &4.36E-08 &4.86 \\ &$h/4$ &2.26E-07 &3.96 &1.59E-09 &4.95 &2.29E-07 &3.95 &1.43E-09 &4.93 \\ &$h/8$ &1.42E-08 &3.98 &5.12E-11 &4.96 &1.45E-08 &3.98 &4.59E-11 &4.96 \\ \hline \end{tabular} \end{center} \caption{History of convergence for the exact solution (\ref{eq:Test_1}) where $h = 0.177$}\label{table:smooth} \end{table} \begin{figure} \caption{Convergence sequence of the displacement on Mesh-2 for $k=1$. Left: $u_h^1$(quadratic), Right: $\widehat{u} \label{fig:convseq} \end{figure} \subsection{Order of convergence of our HDG method} In this section, we consider an isotropic material in 2D with plain stress condition and take the Poisson Ratio $\boldsymbol{n}u = 0.3$ and the Young's Modulus $E = 1$: \begin{align} \mathcal{A}\underline{\boldsymbol{\sigma}} &= \frac{1+\boldsymbol{n}u}{E}\underline{\boldsymbol{\sigma}} - \frac{\boldsymbol{n}u}{E}\text{tr}(\underline{\boldsymbol{\sigma}})I_{2}. \end{align} In particular, we test our HDG method on a smooth solution $\boldsymbol{u} = (u_1,u_2)$ in \cite{SoonCockburnStolarski09}, such that: \begin{eqnarray} \label{eq:Test_1} u_1 = 10 \sin(\pi x) (1-x)(y-y^2)(1-0.5y), \ \ \ \ u_2 = 0. \end{eqnarray} We set $\boldsymbol{f}$ and $\boldsymbol{g}$ to satisfy the above exact solution (\ref{eq:Test_1}). 
To explore the convergence properties of our method, we conduct numerical experiments for $k=0,1,2,3$ and take $\tau = \mathcal{O}(\frac{1}{h})$. The history of convergence is displayed in Table \ref{table:smooth}. We observe that when $k \geq 1$, our method converges with order $k+1$ in the stress and order $k+2$ in the displacement for both Mesh-$1$ and Mesh-$2$. In addition, the numerical results suggest that our method does not converge to the exact solution when $k=0$. To aid visualization, we also plot the convergence sequence of the displacement in FIGURE \ref{fig:convseq}. \begin{table} \begin{center} \scriptsize \begin{tabular}{ c c c c c c c c c c } \hline & &\multicolumn{2}{c}{$\| \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}} - \underline{\boldsymbol{\sigma}}_h \Vert_{L^{2}(\Omega)}$} & \multicolumn{2}{c}{$\|\boldsymbol{\Pi_W} \boldsymbol{u} - \boldsymbol{u}_h\Vert_{L^{2}(\Omega)}$} & \multicolumn{2}{c}{$\| \underline{\boldsymbol{\Pi_V}} \underline{\boldsymbol{\sigma}} - \underline{\boldsymbol{\sigma}}_h \Vert_{L^{2}(\Omega)}$} & \multicolumn{2}{c}{$\| \boldsymbol{\Pi_W} \boldsymbol{u} - \boldsymbol{u}_h\Vert_{L^{2}(\Omega)}$} \\\hline & &\multicolumn{8}{c}{$\boldsymbol{n}u = 0.49$} \\\hline & &\multicolumn{4}{c}{Mesh-$1$} & \multicolumn{4}{c}{Mesh-$2$} \\\hline $k$ & Mesh & Error & Order & Error & Order & Error & Order & Error & Order \\ \hline \multirow{3}{*}{1} & $h$ & 4.12E-03 & - & 1.14E-04 & - & 4.12E-03 & - &9.15E-04 & -\\ &$h/2$ & 1.22E-03 & 1.75 & 2.53E-05 & 2.17 & 1.27E-03 & 1.70 &1.47E-05 & 2.64\\ &$h/4$ & 3.32E-04 & 1.88 & 4.76E-06 & 2.41 & 3.40E-04 & 1.90 &2.00E-06 &2.87 \\ &$h/8$ & 8.69E-05 & 1.93 & 8.17E-07 & 2.54 & 8.64E-05 & 1.98 &2.58E-07&2.96 \\ &$h/16$ & 2.22E-05 & 1.97 & 1.23E-07 & 2.73 & 2.17E-05 & 1.99 &3.27E-08&2.98 \\ \hline \multirow{3}{*}{2}& $h$ & 9.33E-04 & - & 2.00E-05 & - & 9.37E-04 & - & 1.24E-05 & -\\ &$h/2$ & 1.29E-04 & 2.85 & 1.65E-06 & 3.60 & 1.32E-04 & 2.83 & 9.42E-07 & 3.71 \\ &$h/4$ & 1.65E-05 & 2.97 & 1.17E-07 & 3.82 & 1.64E-05 & 3.00 & 6.01E-08 & 3.97\\ &$h/8$ & 2.07E-06 & 3.00 & 7.76E-09 & 3.92 & 2.05E-06 & 3.00 & 3.77E-09 & 3.99\\ &$h/16$ & 2.58E-07 & 3.00 & 4.98E-10 & 3.96 & 2.56E-07 & 3.00 & 2.36E-10 & 4.00\\ \hline \multirow{3}{*}{3}& $h$ &1.44E-04 & - &1.65E-06 & - &1.57E-04 & - &1.53E-06 & -\\ &$h/2$ &9.78E-06 &3.88 &6.18E-08 &4.74 &9.87E-06 &3.99 &5.11E-08 &4.90 \\ &$h/4$ &6.27E-07 &3.96 &2.09E-09 &4.89 &6.19E-07 &3.99 &1.68E-09 &4.93 \\ &$h/8$ &3.95E-08 &3.99 &6.77E-11 &4.95 &3.89E-08 &3.99 &5.43E-11 &4.95 \\ &$h/16$ &2.49E-09 &3.99 &2.21E-12 &4.94 &2.44E-09 &4.00 &1.74E-12 &4.97 \\ \hline & &\multicolumn{8}{c}{$\boldsymbol{n}u = 0.4999$} \\\hline & &\multicolumn{4}{c}{Mesh-$1$} & \multicolumn{4}{c}{Mesh-$2$} \\\hline $k$ & Mesh & Error & Order & Error & Order & Error & Order & Error & Order \\ \hline \multirow{3}{*}{1} & $h$ & 4.12E-03 & - & 1.13E-04 & - & 4.13E-03 & - &9.05E-04 & -\\ &$h/2$ & 1.22E-03 & 1.76 & 2.52E-05 & 2.17 & 1.26E-03 & 1.71 &1.45E-05 & 2.64\\ &$h/4$ & 3.31E-04 & 1.88 & 4.72E-06 & 2.41 & 3.39E-04 & 1.90 &1.98E-06 &2.87 \\ &$h/8$ & 8.66E-05 & 1.93 & 8.11E-07 & 2.54 & 8.61E-05 & 1.98 &2.55E-07&2.96 \\ &$h/16$ & 2.21E-05 & 1.97 & 1.22E-07 & 2.73 & 2.16E-05 & 1.99 &3.23E-08&2.98 \\ \hline \multirow{3}{*}{2}& $h$ & 9.32E-04 & - & 1.98E-05 & - & 9.34E-04 & - & 1.22E-05 & -\\ &$h/2$ & 1.29E-04 & 2.86 & 1.64E-06 & 3.60 & 1.32E-04 & 2.83 & 9.31E-07 & 3.72 \\ &$h/4$ & 1.64E-05 & 2.97 & 1.16E-07 & 3.82 & 1.64E-05 & 3.00 & 5.94E-08 & 3.97\\ &$h/8$ & 2.06E-06 & 3.00 & 7.70E-09 & 3.92 & 2.04E-06 & 3.00 & 
3.73E-09 & 3.99\\ &$h/16$ & 2.57E-07 & 3.00 & 4.95E-10 & 3.96 & 2.55E-07 & 3.00 & 2.33E-10 & 4.00\\ \hline \multirow{3}{*}{3}& $h$ &1.44E-04 & - &1.63E-06 & - &1.57E-04 & - &1.51E-06 & -\\ &$h/2$ &9.75E-06 &3.88 &6.09E-08 &4.74 &9.83E-06 &3.99 &5.03E-08 &4.90 \\ &$h/4$ &6.25E-07 &3.96 &2.06E-09 &4.89 &6.17E-07 &3.99 &1.66E-09 &4.93 \\ &$h/8$ &3.94E-08 &3.99 &6.72E-11 &4.95 &3.87E-08 &3.99 &5.39E-11 &4.94 \\ &$h/16$ &2.48E-09 &3.99 &2.20E-12 &4.93 &2.43E-09 &3.99 &1.73E-12 &4.96 \\ \hline & &\multicolumn{8}{c}{$\boldsymbol{n}u = 0.49999$} \\\hline & &\multicolumn{4}{c}{Mesh-$1$} & \multicolumn{4}{c}{Mesh-$2$} \\\hline $k$ & Mesh & Error & Order & Error & Order & Error & Order & Error & Order \\ \hline \multirow{3}{*}{1} & $h$ & 4.12E-03 & - & 1.13E-04 & - & 4.13E-03 & - &9.05E-04 & -\\ &$h/2$ & 1.22E-03 & 1.76 & 2.52E-05 & 2.17 & 1.26E-03 & 1.71 &1.45E-05 & 2.64\\ &$h/4$ & 3.31E-04 & 1.88 & 4.72E-06 & 2.41 & 3.39E-04 & 1.90 &1.98E-06 &2.87 \\ &$h/8$ & 8.66E-05 & 1.93 & 8.11E-07 & 2.54 & 8.61E-05 & 1.98 &2.55E-07&2.96 \\ &$h/16$ & 2.21E-05 & 1.97 & 1.22E-07 & 2.73 & 2.16E-05 & 1.99 &3.23E-08&2.98 \\ \hline \multirow{3}{*}{2}& $h$ & 9.32E-04 & - & 1.98E-05 & - & 9.34E-04 & - & 1.22E-05 & -\\ &$h/2$ & 1.29E-04 & 2.86 & 1.64E-06 & 3.60 & 1.32E-04 & 2.83 & 9.31E-07 & 3.72 \\ &$h/4$ & 1.64E-05 & 2.97 & 1.16E-07 & 3.82 & 1.64E-05 & 3.00 & 5.94E-08 & 3.97\\ &$h/8$ & 2.06E-06 & 3.00 & 7.70E-09 & 3.92 & 2.04E-06 & 3.00 & 3.73E-09 & 3.99\\ &$h/16$ & 2.57E-07 & 3.00 & 4.95E-10 & 3.96 & 2.55E-07 & 3.00 & 2.33E-10 & 4.00\\ \hline \multirow{3}{*}{3}& $h$ &1.44E-04 & - &1.63E-06 & - &1.57E-04 & - &1.50E-06 & -\\ &$h/2$ &9.75E-06 &3.88 &6.09E-08 &4.74 &9.83E-06 &3.99 &5.03E-08 &4.90 \\ &$h/4$ &6.25E-07 &3.96 &2.06E-09 &4.89 &6.17E-07 &3.99 &1.66E-09 &4.93 \\ &$h/8$ &3.94E-08 &3.99 &6.72E-11 &4.95 &3.86E-08 &3.99 &5.39E-11 &4.94 \\ &$h/16$ &2.48E-09 &3.99 &2.20E-12 &4.93 &2.44E-09 &3.98 &1.74E-12 &4.95 \\ \hline \end{tabular} \end{center} \caption{History of convergence for the exact solution (\ref{eq:Test_2}) where $h = 0.354$}\label{table:lockingtest} \end{table} \begin{figure} \caption{ Convergence sequence of the stress and the displacement on Mesh-1 for $k=1$ and $\boldsymbol{n} \label{fig:convseq2} \end{figure} \subsection{Locking experiments} In this section, we consider an isotropic material in 2D with plane-strain condition: \begin{align} \mathcal{A}\underline{\boldsymbol{\sigma}} &= \frac{1+\boldsymbol{n}u}{E}\underline{\boldsymbol{\sigma}} - \frac{(1+\boldsymbol{n}u)\boldsymbol{n}u}{E}\text{tr}(\underline{\boldsymbol{\sigma}})I_{2} \end{align} where $\boldsymbol{n}u$ is the Poisson Ratio and $E$ is the Young's Modulus. This example satisfies the Assumption 2.1 with $P_D = \frac{1+\boldsymbol{n}u}{E}$ and $P_T = \frac{(1+\boldsymbol{n}u)}{E}(1 -2\boldsymbol{n}u)$. By sending $\boldsymbol{n}u \to 0.5$, this material is nearly incompressible. We consider an example in \cite{BercovierLivne79, SoonCockburnStolarski09} by setting $\boldsymbol{f}$ and $\boldsymbol{g}$ to satisfy the exact solution: \begin{eqnarray} \label{eq:Test_2} u_1 &=& -x^2(x-1)^2y(y-1)(2y-1) \\ u_2 &=& \ \ y^2(y-1)^2x(x-1)(2x-1) \end{eqnarray} with $E = 3$. We conduct numerical experiments for this problem for $k=1,2,3$ with $\tau = \mathcal{O}(\frac{1}{h})$. The history of convergence is displayed in Table \ref{table:lockingtest} and the convergence sequence of the stress and the displacement is plotted in FIGURE \ref{fig:convseq2}. 
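To put the size of the compliance parameters in perspective, note that for the plane-strain law above with $E=3$ we have, for instance,
\[
P_T = \frac{(1+\nu)(1-2\nu)}{E} \approx 1.0\times 10^{-5} \quad \text{for } \nu = 0.49999,
\]
so that $P_T^{-1}$ is of order $10^{5}$, while $P_D = \frac{1+\nu}{E} \approx 0.5$ remains of order one. This is precisely the regime in which error bounds depending on $P_T^{-1}$ would degenerate, and in which the locking-free estimate of Theorem \ref{thm_locking_free} becomes relevant.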
By increasing $\nu$ from $0.49$ to $0.49999$, we observe the same orders of convergence, which are optimal in both the stress and the displacement. In addition, our numerical results demonstrate that the convergence properties of our method do not depend on the type of mesh. Altogether, this observation aligns exactly with the error estimates provided in Theorem~\ref{thm_locking_free} and confirms that our HDG method is free from volumetric locking. {\bf Acknowledgements}. The work of the first author was partially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. CityU 11302014). As a convention, the names of the authors are ordered alphabetically. All authors contributed equally to this article. Finally, the authors would like to thank Guosheng Fu at the University of Minnesota for fruitful discussions. \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} \end{document}
\begin{document} \title{Some geometric inequalities related to Liouville equation} \author{Changfeng Gui} \address{Changfeng Gui, The University of Texas at San Antonio, TX, USA. } \email{[email protected]} \author{Qinfeng Li} \address{Qinfeng Li, Hunan University, Changsha, Hunan, China. } \email{[email protected]} \begin{abstract} In this paper, we prove that if $u$ is a solution to the Liouville equation \begin{align} \label{scalliouville} \Delta u+e^{2u} =0 \quad \mbox{in $\mathbb{R}^2$,} \end{align} then the diameter of $\mathbb{R}^2$ under the conformal metric $g=e^{2u}\delta$ is bounded below by $\pi$. Here $\delta$ is the Euclidean metric in $\mathbb{R}^2$. Moreover, we explicitly construct a family of solutions to \eqref{scalliouville} such that the corresponding diameters of $\mathbb{R}^2$ range over $[\pi,2\pi)$. We also discuss supersolutions to \eqref{scalliouville}. We show that if $u$ is a supersolution to \eqref{scalliouville} and $\int_{\mathbb{R}^2} e^{2u} dx<\infty$, then the diameter of $\mathbb{R}^2$ under the metric $e^{2u}\delta$ is less than or equal to $2\pi$. For radial supersolutions to \eqref{scalliouville}, we use both analytical and geometric approaches to prove some inequalities involving conformal lengths and areas of disks in $\mathbb{R}^2$. We also discuss the connection of the above results with the sphere covering inequality in the case of Gaussian curvature bounded below by $1$. Higher dimensional generalizations are also discussed. \end{abstract} \maketitle \vskip0.2in {\bf Key words}: Isoperimetric inequality, Liouville equation, Gaussian curvature, Conformal metrics {\bf 2010 AMS subject classification}: 35J15, 35J60, 53A05, 30C55 \section{Introduction} In this paper, we study some geometric properties of the conformally flat Riemannian manifold $(M,g)=(\mathbb{R}^2, e^{2u}\delta)$, where the function $u$ under consideration is either a solution or a supersolution to \eqref{scalliouville} and $\delta$ denotes the Euclidean metric. By $u$ being a supersolution to \eqref{scalliouville}, we mean that \begin{align} \label{ricci1} \Delta u+e^{2u} \le 0 \quad \mbox{in $\mathbb{R}^2$.} \end{align} The research in this paper consists of the following parts. \subsection{Diameter estimates and examples for solutions to \eqref{scalliouville}} It is well known that if $u$ is a solution to \eqref{scalliouville}, then either $vol_g(\mathbb{R}^2)=4\pi$ or $vol_g(\mathbb{R}^2)=\infty$, where $g=e^{2u}\delta$ and $vol_g$ denotes the volume function under the metric $g$. In the former case, $u$ must be a radial solution up to translations. This was first proved in the seminal paper by Chen-Li \cite{CL91} using the moving plane method, and later by Chou-Wan \cite{CW94} using Liouville's formula (see \cite{Liouville}): any entire solution $u$ to \eqref{scalliouville} must be of the form \begin{align} \label{complexrepresentation} u(x,y)=\ln \frac{2|f'(z)|}{1+|f(z)|^2}, \end{align} where $f$ is a meromorphic function in the complex plane, $f$ has only simple poles, and $f'$ is nonvanishing. Such $f$ is called the \textit{developing function} of the solution $u$ to \eqref{scalliouville}.
In particular, if $f(z)=\lambda z$ for some positive constant $\lambda$, i.e., \begin{align} \label{bubble} u(x,y)=\ln \frac{2 \lambda}{1+\lambda^2 |z|^2}, \end{align} then we call these the \textit{standard bubble solutions}, for which $g= e^{2u} \delta$ represents the scaled metric of the standard metric on the unit sphere after identifying the sphere with the complex plane compactified at infinity. Motivated by the above results, we are interested in studying the diameter of $\mathbb{R}^2$ under the conformal metric $g=e^{2u}\delta$ for $u$ a solution to \eqref{scalliouville}. Throughout this paper we use $diam_g(\mathbb{R}^2)$ to denote the diameter of $\mathbb{R}^2$ under the metric $g$. We first prove the following diameter lower bound estimate in Section 2. \begin{theorem} \label{lowerbounddiameterfor2D} Let $u$ be a solution to \eqref{scalliouville} in $\mathbb{R}^2$. Then $diam_g(\mathbb{R}^2) \ge \pi$ under the metric $g=e^{2u}\delta$. \end{theorem} We also prove a stronger version, Proposition \ref{manypoints} in the section below, which shows that there are uncountably many pairs of points such that the conformal distance between each pair is greater than or equal to $\pi$. Next, we construct a family of solutions to \eqref{scalliouville} such that the corresponding diameters attain every value in $[\pi,2\pi)$. More precisely, we show: \begin{proposition} \label{utdiamter} Let $u_t$ be the family of functions given by \begin{align} \label{u_t} u_t(x,y)=\ln \frac{2e^x}{1+t^2+2te^x \cos y+e^{2x}}. \end{align} Then, for each $t \ge 0$, $u_t(x,y)$ solves \eqref{scalliouville}. Moreover, $diam_g(\mathbb{R}^2) = \pi+2\tan^{-1}(t)$, where $g=e^{2u_t}\delta$. \end{proposition} Note that $u_t$ is of the form \eqref{complexrepresentation} with developing function $f(z)=e^z+t$, which has no poles and satisfies $f'(z)=e^z\neq 0$; this shows in particular that $u_t$ indeed solves \eqref{scalliouville}. One can see from the above proposition that if $t$ ranges over $[0,\infty)$, then the diameters of $\mathbb{R}^2$ corresponding to $u_t$ range over $[\pi,2\pi)$. This is somewhat interesting: while the range of the conformal volume of solutions to \eqref{scalliouville} is discrete, the range of the conformal diameter contains an interval. \begin{remark} In our forthcoming paper \cite{EGLX}, we show that if a solution $u$ is bounded from above, then, up to translation, rotation and scaling, either $u$ is radial or $u$ is given by \eqref{u_t}. Hence we have essentially proved that, when $u$ has an upper bound, the range of the diameter of $\mathbb{R}^2$ under $e^{2u}\delta$ is $[\pi,2\pi)$. \end{remark} We also construct a solution in Example \ref{2pi}, where we show that the corresponding conformal diameter can be greater than or equal to $2\pi$. The difficulty of exactly computing the conformal diameters lies in two aspects. First, given two points in $\mathbb{R}^2$ and a solution $u$, there is in general no standard way to compute the conformal distance between the two points under the metric $g=e^{2u}\delta$, since there are infinitely many paths connecting them. Second, it is generally not clear how to choose pairs of points whose conformal distances approximate the conformal diameter of $\mathbb{R}^2$. The method we use to prove Proposition \ref{utdiamter} and to illustrate Example \ref{2pi} is to find the link between the conformal metrics associated with solutions to \eqref{scalliouville} and the standard metric on the sphere; we also employ some ideas from complex analysis to carry out the argument carefully. \\ \subsection{Diameter estimates for general supersolutions} All the above results are obtained using complex function theory.
However, from a geometric point of view, \eqref{scalliouville} is equivalent to $K =1$, where $K$ is the Gaussian curvature of $(\mathbb{R}^2,g)$ with $g=e^{2u}\delta$. Similarly, when considering supersolutions, \eqref{ricci1} is equivalent to $K \ge 1$. Note that \eqref{ricci1} is also equivalent to $Ric_g \ge g$, where $Ric_g$ is the Ricci curvature of $(\mathbb{R}^2, g)$. This naturally reminds us of Myers' Theorem in Riemannian geometry. Recall that Myers' Theorem says that if $(M,g)$ is a complete $n$-dimensional manifold such that $Ric_g \ge (n-1)g$, then $diam_g(M) \le \pi$. This is not true for incomplete manifolds, and Proposition \ref{utdiamter} actually serves as a counterexample, since in that case $Ric_g=(2-1)g$ while $diam_g(\mathbb{R}^2)>\pi$ if $t>0$. Even though nothing can be said in general on diameter bounds for incomplete Riemannian manifolds, surprisingly, we can prove that if $g$ is a globally conformally flat metric in $\mathbb{R}^2$ with $Ric_g \ge g$ and $vol_g(\mathbb{R}^2)<\infty$, then $diam_g(\mathbb{R}^2) \le 2\pi$. We state this result in the following theorem in PDE language: \begin{theorem} \label{weakbounddiamter} Let $u$ satisfy \eqref{ricci1}. If we also assume that $\int_{\mathbb{R}^2}e^{2u} dx<\infty$, then $diam_g(\mathbb{R}^2) \le 2\pi$, where $g=e^{2u}\delta$. \end{theorem} This theorem will be proved in section 4; the argument of our proof does apply Myers' Theorem in some situations, while borrowing ideas from the proof of the Hopf-Rinow Theorem. A key observation is Proposition \ref{diametercontrolinfty}, where no finiteness-of-volume assumption is needed. We note that, under the assumptions in Theorem \ref{weakbounddiamter}, we do not know whether or not the $2\pi$ upper bound for the conformal diameter is sharp. It is interesting to find a supersolution $u$ satisfying all assumptions in Theorem \ref{weakbounddiamter} such that the conformal diameter is strictly between $\pi$ and $2\pi$, or to prove that the upper bound should be $\pi$. So far we do not have an answer. \\ \subsection{Geometric inequalities related to radial supersolutions} We also study radial supersolutions to \eqref{scalliouville}. This is essentially an ordinary differential inequality problem. Let us define the conformal perimeter and area of balls in $\mathbb{R}^2$ by $l(r)=\int_{\partial B_r}e^{u}ds$ and $A(r)=\int_{B_r}e^{2u} dx$, where $B_r$ is the ball of radius $r$ in $\mathbb{R}^2$ centered at the origin and $ds$ is the length element. It turns out that many inequalities involving $l(r)$ and $A(r)$ can be derived. We prove: \begin{theorem} \label{maintheoremonradialcase} Let $u$ be a radially symmetric function satisfying \eqref{ricci1}, and let $g=e^{2u}\delta$, where $\delta$ is the Euclidean metric. Then we have \begin{align} \label{volumeradial} vol_g(\mathbb{R}^2) \le 4\pi \end{align} and \begin{align} \label{diamradial} diam_g(\mathbb{R}^2) \le \pi. \end{align} Furthermore, there exists $r_0>0$ such that $l$ is increasing when $r<r_0$ and $l$ is decreasing when $r>r_0$. Moreover, \begin{align} \label{yanxing3} \max_{r>0}l(r)=l(r_0)\le 2\pi, \end{align} and \begin{align} \label{jiandanyouyong} \lim_{r\rightarrow \infty} l(r)=0. \end{align} Also, for any $r>0$, \begin{align} \label{leibi} A(A_{\infty}-A) \le l^2 \le 4\pi A -A^2, \end{align} where $A_{\infty}=\int_{\mathbb{R}^2} e^{2u} dx$.
In addition, let $R=R(r)=\int_0^re^{u(\rho)}d\rho$; then \begin{align} \label{boundsforr0} R(r_0) \le \frac{\pi}{2}, \end{align} \begin{align} \label{busanhuang} A_{\infty}(1-\cos R) \le 2A, \end{align} and \begin{align} \label{hmm} \frac{A}{l}\ge \frac{1-\cos R}{\sin R}. \end{align} Moreover, if $r \le r_0$, we also have \begin{align} \label{jiefaqian} \frac{A}{l} \le \sin R. \end{align} In particular, if $r \le r_0$, then \begin{align} \label{jiefa} A(r) \le l(r), \end{align} and if $R \ge \frac{\pi}{2}$, then \begin{align} \label{bushanhuangtuichu} A \ge \frac{A_{\infty}}{2}. \end{align} \end{theorem} Note that when $u$ is a radial solution of the equality case \eqref{scalliouville}, then $(\mathbb{R}^2, e^{2u}\delta)$ becomes the standard unit sphere minus a point, $l(r)$ then corresponds to the length of the latitudes, and $A(r)$ corresponds to the area of the spherical caps. One can then see geometrically that all the inequalities \eqref{volumeradial}-\eqref{hmm} become equalities; an explicit computation is given below. Let us make some comments on the proofs of Theorem \ref{maintheoremonradialcase}. \eqref{volumeradial} can be derived from either of the two inequalities in \eqref{leibi}. \eqref{diamradial} is proved using the idea of the proof of Myers' Theorem and Theorem \ref{weakbounddiamter}. \eqref{yanxing3}-\eqref{jiandanyouyong}, together with the second inequality in \eqref{leibi}, are proved analytically, while the rest of the inequalities are proved using geometric arguments. In particular, the first inequality in \eqref{leibi}, \eqref{boundsforr0} and \eqref{jiefaqian} are all obtained by exploiting the Heintze-Karcher inequality (see \cite{HK78}), which gives control of the Jacobian of the exponential map starting from the boundary of a domain in a Riemannian manifold with a strictly positive lower bound on the Ricci curvature. \eqref{hmm} is derived from the Bishop-Gromov inequality, while \eqref{busanhuang} is a consequence of \eqref{leibi} and \eqref{hmm}. \eqref{jiefa} is a consequence of \eqref{jiefaqian}, and it can also be derived from the Alexandrov inequality. All of these are presented in section 5. We also discuss some higher dimensional generalizations for radial cases in section 6. \\
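For the equality case, identify the punctured sphere with $\mathbb{R}^2$ so that $R$ is the geodesic distance to the pole; then $l=2\pi\sin R$, $A=2\pi(1-\cos R)$ and $A_{\infty}=4\pi$, and hence
\[
A(A_{\infty}-A)=4\pi^{2}(1-\cos R)(1+\cos R)=4\pi^{2}\sin^{2}R=l^{2}=4\pi A-A^{2}, \qquad \frac{A}{l}=\frac{1-\cos R}{\sin R},
\]
so that \eqref{leibi} and \eqref{hmm} indeed hold with equality, while $R(r_0)=\frac{\pi}{2}$ corresponds to the equator, in accordance with \eqref{boundsforr0}.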
A simple version of the sphere covering inequality states as follows: \begin{theorem}(\cite[Theorem 1.1]{GHM}) \label{spherecovering} Let $u_1$ be a smooth function defined in a simply connected domain ${\mathbf O}mega_0$ such that \begin{align} \label{subsolution} \mathbb{D}elta u_1+e^{2u_1} \ge 0\, \mbox{in ${\mathbf O}mega_0$},\quad \int_{{\mathbf O}mega_0}e^{2u_1} \le 4\pi. \end{align}Let $u_2$ be another smooth function defined in ${\mathbf O}mega_0$ and suppose that in a subdomain ${\mathbf O}mega\subset {\mathbf O}mega_0$ such that \begin{align} \mathbb{D}elta u_2+e^{2u_2}\ge \mathbb{D}elta u_1+e^{2u_1} \, \mbox{in ${\mathbf O}mega$}, \, u_2>u_1 \, \mbox{in ${\mathbf O}mega$ and} \, u_2=u_1 \, \mbox{on $\partial {\mathbf O}mega$.} \end{align}Then \begin{align} \int_{{\mathbf O}mega}e^{2u_1}dx +\int_{{\mathbf O}mega}e^{2u_2}dx \ge 4\pi. \end{align} \end{theorem} This Theorem was first proved by Gui-Morodifam\cite{GM}, and later proved by Gui-Hang-Morodifam\cite{GHM} in a simpler but more intrinsic way. However, both proofs use the crucial assumption \eqref{subsolution}, since \eqref{subsolution} means that $({\mathbf O}mega, e^{2u_1}\delta)$ is a Riemannain surface with $K\le 1$, where $K$ is the Gaussian curvature. Only with this assumption, the following Alexander-Bol inequality can be applied: \begin{align} \label{bol} l^2(\partial {\mathbf O}mega) \ge 4\pi A({\mathbf O}mega)-A^2({\mathbf O}mega), \end{align}where $l(\partial {\mathbf O}mega)$ is the conformal length of $\partial {\mathbf O}mega$ under the metric $e^{2u_1}\delta$, and $A({\mathbf O}mega)$ is the conformal area. The inequality \eqref{bol} is essential in both proofs of Theorem \ref{spherecovering} in \cite{GM} and \cite{GHM}. The main step in proving Theorem \ref{spherecovering} in \cite{GHM} is the following theorem: \begin{theorem} (\cite[Theorem 1.4]{GHM}) \label{ghm1.4} Let $(M,g)$ be a simply connected Riemannian surface with $\mu(M) \le 4\pi$ and $K \le 1$, where $\mu$ is the measure of $(M,g)$ and $K$ is the Gaussian curvature. Let ${\mathbf O}mega$ be a domain with compact closure and nonempty boundary, and $\lambda$ is a constant. Then if $u \in C^2(\overline{{\mathbf O}mega})$ satisfying \begin{align} \label{1.4} \begin{cases} -\mathbb{D}elta_g u+1 \le \lambda e^{2u}, \quad u>0 \quad &\mbox{in ${\mathbf O}mega$}\\ u=0 \quad &\mbox{on $\partial {\mathbf O}mega$} \end{cases} \end{align} Then \begin{align} \label{xinren1} 4\pi \int_{{\mathbf O}mega} e^{2u} d\mu -\lambda \left(\int_{{\mathbf O}mega} e^{2u} d\mu \mathbb{R}ght)^2 \le 4 \pi \mu({\mathbf O}mega)-\mu^2({\mathbf O}mega). \end{align} In particular, if $\lambda \in (0,1]$, then \begin{align} \label{linshiruji} \int_{{\mathbf O}mega} e^{2u} d\mu+\mu({\mathbf O}mega) \ge \frac{4\pi}{\lambda}. \end{align} \end{theorem} For the case when Gaussian curvature $K \ge 1$, the above result is no longer true. Actually the Alexander-Bol inequality goes in the complete opposite direction, because of the second inequality in \eqref{leibi}. However, we can still say something. The following are the results we proved: First, we prove the following counterpart of Theorem \ref{ghm1.4} in the case when $M$ is a closed surface and $K\ge 1$. In fact, we prove a slightly more general version. \begin{theorem} \label{genghm} Let $(M,g)$ be a closed Riemannian surface with $K \ge 1$, where $K$ is the Gaussian curvature. Let $\mu$ be the volume measure of $(M,g)$ and ${\mathbf O}mega$ be a domain with nonempty boundary. 
If $u$ satisfies \begin{align} \label{rubin2''} \begin{cases} -\mathbb{D}elta_g u+1 \le h(u), \quad u>0 \quad &\mbox{in ${\mathbf O}mega$}\\ u=0 \quad &\mbox{on $\partial {\mathbf O}mega$} \end{cases} \end{align}where $h(t)$ is a nonnnegative function such that \begin{align} \label{mc1''} 0 \le h'(t) \le 2 h(t), \end{align} then \begin{align} \label{guijia1'} \mu(M) \int_{{\mathbf O}mega}h(u) d\mu -\left(\int_{{\mathbf O}mega} h(u) d\mu\mathbb{R}ght)^2 \le h(0) \mu({\mathbf O}mega) \mu (M\setminus {\mathbf O}mega). \end{align} Moreover, if $h_0 \in (0,1]$, then \begin{align} \label{guijia2'} \int_{{\mathbf O}mega}h(u) d\mu +h(0)\mu({\mathbf O}mega) \ge \mu(M). \end{align} \end{theorem}In particular, if $h(u)=\lambda e^{2u}$, then \eqref{guijia1'} and \eqref{guijia2'} become the following forms similar to \eqref{xinren1} and \eqref{linshiruji}: \begin{align} \label{guijia1''} \mu(M) \int_{{\mathbf O}mega}e^{2u} d\mu -\lambda \left(\int_{{\mathbf O}mega} e^{2u} d\mu\mathbb{R}ght)^2 \le \mu(M)\mu({\mathbf O}mega)- \mu^2({\mathbf O}mega), \end{align}and \begin{align} \label{guijia2''} \int_{{\mathbf O}mega}e^{2u} d\mu +\mu({\mathbf O}mega) \ge \frac{\mu(M)}{\lambda}. \end{align} Note that when $K=1$, then $\mu(M)=4\pi$ and hence \eqref{guijia2''} recovers \eqref{linshiruji}. If $K >1$, then $\int_{{\mathbf O}mega}e^{2u} d\mu +\mu({\mathbf O}mega)$ has a smaller lower bound, since $\mu(M)<4\pi$. \\ The proof of Theorem \ref{genghm} is motivated by that of Theorem \ref{ghm1.4}, and the new ingredient is the application of L\'evy-Gromov inequality instead of Alexander-Bol ineqaulity. Similarly as in \cite{GHM}, we also prove the dual form of Theorem \ref{genghm}, see Theorem \ref{genghm'} in section 7. The closedness assumption of $M$ is crucial in Theorem \ref{genghm}, since it is crucial in either of the two proofs of L\'evy-Gromov inequality so far we have known, see \cite{Gromov} for the orginal proof by Gromov and \cite{Bayle} by Bayle for a different proof. As one can see from example \ref{noncomplete} and the comment right after, even for radial supersolution $u$, one cannot expect that $(\mathbb{R}^2, e^{2u}\delta)$ can be completed as a closed Riemannian surface. Hence a sphere covering inequality for the case $K\ge 1$ cannot be directly obtained from Theorem \ref{genghm}. However, by the first inequality of \eqref{leibi} obtained in Theorem \ref{maintheoremonradialcase} and by a similar argument in the proof of Theorem \ref{genghm}, the following counterpart to Theorem \ref{spherecovering} for radial functions is obtained. \begin{theorem} \label{radialspherecovering} Let $u_1$ be a radially symmetric function such that\begin{align*} \mathbb{D}elta u_1+e^{2u_1} \le 0\quad \mbox{in $\mathbb{R}^2$.} \end{align*}Let $u_2$ be another radially symmetric function defined in $\mathbb{R}^2$. If for some disk $B_r$ we have \begin{align} \mathbb{D}elta u_2+e^{2u_2}\ge \mathbb{D}elta u_1+e^{2u_1} \, \mbox{in $B_r$}, \, u_2>u_1 \, \mbox{in $B_r$ and} \, u_2=u_1 \, \mbox{on $\partial B_r$,} \end{align}then \begin{align} \int_{B_r} e^{2u_1}+\int_{B_r}e^{2u_2} \ge \int_{\mathbb{R}^2} e^{2u_1} dx. \end{align}The equality holds if and only if $(B_r, e^{2u_1}\delta)$ and $(B_r,e^{2u_2}\delta)$ are two complementary spherical caps on the unit sphere. \end{theorem} Similarly, we state the dual form for Theorem \ref{radialspherecovering}. 
\begin{theorem} \label{radialspherecovering'} Let $u_1$ be a radially symmetric function such that\begin{align*} \mathbb{D}elta u_1+e^{2u_1} \le 0\quad \mbox{in $\mathbb{R}^2$.} \end{align*}Let $u_2$ be another radially symmetric function defined in $\mathbb{R}^2$. If for some disk $B_r$ we have \begin{align} \mathbb{D}elta u_2+e^{2u_2}\le \mathbb{D}elta u_1+e^{2u_1} \, \mbox{in $B_r$}, \, u_2<u_1 \, \mbox{in $B_r$ and} \, u_2=u_1 \, \mbox{on $\partial B_r$,} \end{align}then \begin{align} \int_{B_r} e^{2u_1}+\int_{B_r}e^{2u_2} \ge \int_{\mathbb{R}^2} e^{2u_1} dx. \end{align}Moreover, the equality holds if and only if $(B_r, e^{2u_1}\delta)$ and $(B_r,e^{2u_2}\delta)$ are two complementary spherical caps on the unit sphere. \end{theorem} These results are proved in section 7. We conjecture that the above theorem holds for nonradial solutions and for general smooth domains, but so far we cannot validate this conjecture. In section 8, we list a series of important unsolved problems related to the results in this paper for future research. To the end, we remark that one of the essentials in the proof of L\'evy-Gromov isoperimetric inequality requires the diameter bounded above by $\pi$, which is guaranteed by Myer's Theorem on complete Riemannian manifolds with $Ric_g \ge (n-1)g$. For $n=2$, this is equivalent to $K \ge 1$. Hence in order to develop new sphere covering inequality related to incomplete globally conformally flat surface with $K \ge 1$, the first step is to try to prove some diameter bound. This has been another motivation for us to study diameter estimates for solutions and supersolutions to the Liouville equation \eqref{scalliouville}. \section{Diameter lower bound for solutions to \eqref{scalliouville}} In this section, we first prove Theorem \ref{lowerbounddiameterfor2D}. Before proving this, we state the following well-known lemma which is essentially proved in \cite{CW94}. Here we include a proof for readers' convenience. \begin{lemma} Let $f(z)$ be the developing function for the solution $u$ to \eqref{scalliouville}, then $f$ is either a Mobi\"us transform or transcendental meromorphic. For the former case, $u$ is radially symmetric up to translations. \end{lemma} \begin{proof} If $f$ is rational, then write $f=\frac{P}{Q}$ where both $P$ and $Q$ are polynomials over $\mathbb{C}$. We assume $P$ and $Q$ have no common factors. Since $f$ has at most simple poles, $Q$ has distinct simple roots. We claim that \begin{align} \label{claim} P'Q-PQ'\equiv C \ne 0. \end{align} Indeed, $f'=\frac{P'Q-PQ'}{Q^2}$. If $P'Q-PQ'$ and $Q^2$ have common factors, say $z-z_0$, then since $Q(z_0)=0$, $(P'Q-PQ')(z_0)=0$, and thus $P(z_0)Q'(z_0)=0$. Since $Q$ has simple roots, $Q'(z_0)\ne 0$, and thus $P(z_0)=0$. This contradicts our assumption that $P$ and $Q$ have no common factors. Therefore, $P'Q-PQ'$ and $Q^2$ have no common factors, and since $f'$ is nonvanishing, it is necessary that $P'Q-PQ'$ is a constant. Take one more derivative of \eqref{claim}, we have \begin{align} \label{claim'} P^{''}Q=PQ^{''}. \end{align} Note that $f'$ nonvanishing also implies that $P$ has only simple roots. Let $z_0$ be any root of $Q$, then from \eqref{claim'} and since $P(z_0)$ cannot be zero, $Q^{''}(z_0)=0$. Hence the number of roots of $Q^{''}$ is less than or equal to the number of roots of $Q$. Since $Q$ is polynomial, it is necessary that $Q^{''}$ must be zero, and thus $Q$ is linear. Similarly, $P^{''}=0$ and thus $P$ is linear. Therefore, $f$ is a Mobi\"us transform. 
In this case, by \eqref{complexrepresentation}, direct computation implies that $u$ is a radial solution up to translation. If $f$ is not rational, then by Liouville's result, it is transcendental meromorphic. This finishes the proof. \end{proof} Now we are ready to prove Theorem \ref{lowerbounddiameterfor2D}. \begin{proof}[Proof of Theorem \ref{lowerbounddiameterfor2D}] Any solution $u$ to \eqref{scalliouville} can be written in the form \begin{align} u=\ln \left(\frac{2|f'(z)|}{1+|f(z)|^2}\mathbb{R}ght), \end{align}where $f$ is a meromorphic function such that $f'(z) \ne 0$ and $f$ has simple poles. As discussed before, if $f$ is rational, then $f$ must be a Mobi\"us transformation and hence $u$ is a bubble solution, and thus $g$ is the standard sphere metric. Hence $diam(\mathbb{R}^2)=\pi$ under such metric. If $f$ is transcendental meromorphic, then by value distribution theory, $f$ takes the value in the complex plane infinitely many times except for two points. In particular, for any ${\varepsilonepsilon}silon>0$ we can choose $z_1=(x_1,y_1)$ and $z_2=(x_2,y_2)$ such that $|f(z_1)|<{\varepsilonepsilon}silon$ and $|f(z_2)|>1/{\varepsilonepsilon}silon$. Let $\gamma(t)=z(t);\, a \le t \le b$ be a curve starting at $z_1$ and ending at $z_2$, and hence the length of $\gamma$ under the Euclidean metric is larger than $1/{\varepsilonepsilon}silon-{\varepsilonepsilon}silon$. Let $ \mathscr{H}^0(S)$ denote the 0-Hausdorff measure of a set $S\in \mathbb{R}^2$, i.e. the number of points in $S$ if it is a finite set. Then the length of $\gamma$ under the conformal metric $e^{2u}\delta$ is given by \begin{align*} \int_a^b e^{u(z(t))}|z'(t)|dt =&\int_a^b \frac{2|f'(z(t))|}{1+|f(z(t))|^2}|z'(t)|dt\\ =& \int_{f(\gamma)}\frac{2}{1+|w|^2}\mathscr{H}^0\left(f^{-1}(w)\cap \gamma\mathbb{R}ght)|dw|\\ \ge &\int_0^{1/{\varepsilonepsilon}silon-{\varepsilonepsilon}silon}\frac{2}{1+|w(s)|^2}ds, \quad \mbox{where $s$ is the arc length parameter}\\ \ge & \int_{0}^{1/{\varepsilonepsilon}silon-{\varepsilonepsilon}silon}\frac{2}{1+({\varepsilonepsilon}silon+s)^2}ds, \quad \mbox{since $|w(s)|\le |w(0)|+|w(s)-w(0)|\le {\varepsilonepsilon}silon+s$}\\ =&2\left( \tan^{-1}(1/{\varepsilonepsilon}silon)-\tan^{-1}({\varepsilonepsilon}silon)\mathbb{R}ght)\\ \mathbb{R}ghtarrow & \pi \quad \mbox{as ${\varepsilonepsilon}silon \mathbb{R}ghtarrow 0$.} \end{align*}Hence $diam_g(\mathbb{R}^2) \ge \pi$ under the metric $g=e^{2u}\delta$. \end{proof} The proof above does not quite tell whether or not there exist two point $P,Q \in \mathbb{R}^2$ such that their conformal distance under the metric $g=e^{2u}\delta$ is bigger than or equal to $\pi$. Actually we have the following even stronger conclusion. \begin{proposition} \label{manypoints} Let $u$ be a solution to \eqref{scalliouville} and $f(z)$ be its developing function. Then there exists a set $S$ such that $\mathscr{H}^0(\mathbb{C} \setminus S) \le 5$ and that for any point $A \in S$, we can find point $P \in f^{-1}(\{A\})$ and $Q \in \mathbb{C}$ such that $d_g(P,Q) \ge \pi$, where $d_g$ is the distance function under the conformal metric $g=e^{2u}\delta$. \end{proposition} \begin{proof} Let $\Pi$ be the stereographic projection map from the north pole of unit sphere to the extended complex plane $\mathbb{C}\cup \{\infty\}$. 
Let $$X_1=\{A \in \mathbb{C}:f^{-1}\circ \Pi(\mbox{the antipodal of $\Pi^{-1}(A)) \ne \emptyset$} \},$$ $$X_2=\{A \in \mathbb{C}:\mbox{ the antipodal of $\Pi^{-1}(A)$ is not the north pole } \},$$ and $$X=X_1\cap X_2.$$ Clearly $\mathscr{H}^0 (\mathbb{C} \setminus X) \le 2$ if $f$ is a Mobi\"us transform, and $\mathscr{H}^0 (\mathbb{C} \setminus X) \le 3$ if $f$ is transcendental meromorphic, by value distribution theory. Let $S=f(\mathbb{C}) \cap X$, and thus $\mathscr{H}^0(\mathbb{C} \setminus S) \le 5$. For any $A \in S$, we can find $P \in f^{-1}(A)$. Moreover, by definition of $S$ we can find $$Q \in f^{-1}\circ \Pi(\mbox{the antipodal of $\Pi^{-1}(A)$}).$$ Let $\gamma$ be any curve in $\mathbb{C}$ from $P$ to $Q$, then the length of $\gamma$ is given by \begin{align*} \int_{\gamma} \frac{2|f'(z)|}{1+|f(z)|^2} ds =& \int_{f(\gamma)}\frac{2}{1+|\omega|^2}\mathscr{H}^0\left(f^{-1}(\omega)\cap \gamma\mathbb{R}ght)|d\omega|\\ \ge& \int_{f(\gamma)}\frac{2}{1+|\omega|^2}|d\omega|\\ =&l_{S^2}\left(\Pi^{-1}(f(\gamma))\mathbb{R}ght)\\ \ge & d_{S^2}(\Pi^{-1}\circ f(P),\Pi^{-1}\circ f(Q))\\ =&d_{S^2}(\Pi^{-1}(A), \mbox{the antipodal of $\Pi^{-1}(A)$ })=\pi. \end{align*}In the above $l_{S^2}$ is the length function on the unit sphere $S^2$ and $d_{S^2}$ is the sphere distance function. From the above estimate we immediately conclude that $d_g(P,Q) \ge \pi$. \end{proof} \begin{remark} If $u$ is a radial solution to \eqref{scalliouville}, then $diam_g(\mathbb{R}^2)=\pi$, since $(\mathbb{R}^2, e^{2u}\delta)$ is the standard unit sphere minus a point. One can also check that if $u$ is a $1$D solution to \eqref{scalliouville}, say $u(x,y)=\ln(\sech x)$, then under the corresponding conformal metric, $diam_g(\mathbb{R}^2)=\pi$, as we will see in the examples in next section. \end{remark} We make one more remark on the $1D$ solutions: \begin{remark} \label{1dremark} It is easy to prove that if $u$ is a $1$D solution, then its developing function $f(z)$ must have the form $f(z)=\frac{pe^{cz}-\bar{q}}{qe^{cz}+p}$, where $p,q\in \mathbb{C}$ and $|p|^2+|q|^2=1$. The converse is also true. \end{remark} To the end of this section, we also remark that the diameter of $\mathbb{R}^2$ does not change under translation, rotation and scaling of solutions to the Liouville equation \eqref{scalliouville}, that is, if $g=e^{2u}\delta$ and $g_1=e^{2u_{\lambda, c,\omega}}\delta$ where $\lambda>0, c\in \mathbb{R}^2$, $\omega$ is a unit vector in $\mathbb{R}^2$ and \begin{align*} u_{\lambda,c,\omega}=u(\lambda(\omega \cdot ((x,y)-c)))+\ln \lambda, \end{align*}then $diam_{g}(\mathbb{R}^2)=diam_{g_1}(\mathbb{R}^2)$. \section{Examples and Proof of Proposition \ref{utdiamter}} In this section, we first show that for the family of solutions given by \eqref{u_t}, the diameters of $\mathbb{R}^2$ under the corresponding metrics can take all numbers in the interval $[\pi, 2\pi)$. Note that $u_t$ corresponds to the developing function $t+e^z$, and when $t=0$, $u_t$ is $1$D. \begin{proof}[Proof of Proposition \ref{utdiamter}] Since $u_t$ corresponds to the developing function $f(z)=t+e^z$, $u_t$ solves \eqref{scalliouville}. To prove the diameter equality, first we note that \begin{align*} \int_{-\infty}^{\infty}e^{u_t(x,y)}dx=&\frac{2}{\sqrt{1+t^2\sin^2 y}}\tan^{-1}(\frac{t\cos y+e^x}{\sqrt{1+t^2 \sin^2 y}})\Big|_{-\infty}^{\infty}\\ =&\frac{2}{\sqrt{1+t^2\sin^2 y}}\left(\frac{\pi}{2}-\tan^{-1}(\frac{t\cos y}{\sqrt{1+t^2 \sin^2 y}})\mathbb{R}ght). 
\end{align*} Hence \begin{align} \label{sup'} \sup_{y \in \mathbb{R}}\int_{-\infty}^{\infty}e^{u_t(x,y)}dx=\pi+2\tan^{-1}(t). \end{align} Given arbitrary two points $P_1, P_2 \in \mathbb{R}^2$, for each point $P_i\, (i=1,2)$, we let $\gamma_R^i$ be the horizontal line segment passing through $P_i$ with $Q_{-R}^i$ and $Q_R^i$ as the left and right end points of $\gamma_R^i$, such that $Q_{-R}^1 Q_R^1 Q_R^2Q_{-R}^2$ is a rectangle with length $2R$. Let $\Gamma_{R}$ be the vertical line segments connecting $Q_R^1$ and $Q_R^2$, and $\Gamma_{-R}$ be the vertical line segment connecting $Q_{-R}^1$ and $Q_{-R}^2$. By \eqref{sup'}, $l(\gamma_{R}^i) \le \pi+2\tan^{-1}(t)$ for $i=1,2$. Also, it is easy to see that $\lim_{R \mathbb{R}ghtarrow \pm\infty} l(\Gamma_{\pm R})=0$. Now that \begin{align*} 2d_g(P_1,P_2) \le \sum_{i=1}^2l (\gamma_{R}^i)+ l(\Gamma_R)+l(\Gamma_{-R}), \end{align*}by letting $R \mathbb{R}ghtarrow \infty$, we have that $d_g(P_1,P_2) \le \pi+2\tan^{-1}(t)$. Since $P_1$ and $P_2$ are arbitary, we have that $diam_g(\mathbb{R}^2) \le \pi+2\tan^{-1}(t)$. To show that the equality holds, we choose $P_1=(a,\pi)$ and $P_2=(a,-\pi)$, where $a$ satisfies \begin{align} \label{newa} e^a-t=\tan(\frac{\pi}{4}-\frac{1}{2}\tan^{-1}(t)). \end{align} Such $a$ exists since $t+\tan(\frac{\pi}{4}-\frac{1}{2}\tan^{-1}(t)) \in [1,\infty)$ if $t \in [0,\infty)$. Let $\Pi$ be the stereographic projection map from the north pole of the unit sphere. Since $f(P_1)=f(P_2)=(t-e^a,0)$, by \eqref{newa} and double angle formula, for $i=1,2$ we have \begin{align*} \Pi^{-1}(f(P_i))=(\frac{2(t-e^a)}{1+(t-e^a)^2},0,\frac{(t-e^a)^2-1}{(t-e^a)^2+1})=(-\sin(\frac{\pi}{2}-\tan^{-1}(t)),0,-\cos(\frac{\pi}{2}-\tan^{-1}(t))). \end{align*}Let $Alpha=\tan^{-1}(t)$, hence $Alpha \in [0,\frac{\pi}{2})$ and \begin{align} \label{P_1} \Pi^{-1}(f(P_1))=\Pi^{-1}(f(P_2))=(-\cos Alpha,0,-\sinAlpha). \end{align} Let $\gamma$ be a curve connecting $P_1$ and $P_2$, then $\gamma$ must intersect the $x$-axis. Let us assume that $\gamma$ passes through $P_3=(b,0)$, and thus $f(b)=t+e^b>t$. Let $f(b)=\tan \beta$ for some $\beta \in (0,\frac{\pi}{2})$. So $\beta >Alpha$. and \begin{align} \label{P_3} \Pi^{-1}(f(P_3))=(\frac{2\tan\beta}{1+\tan^2\beta},0,\frac{\tan^2\beta-1}{1+\tan^2\beta})=(\sin(2\beta),0,-\cos(2\beta)). \end{align} Let $\theta \in (0,\pi)$ be the angle between $\Pi^{-1}(f(P_1))$ and $\Pi^{-1}(f(P_3))$. Then by \eqref{P_1} and \eqref{P_3}, \begin{align} \label{costheta} \cos \theta=-\cos Alpha \sin(2\beta)+\sin Alpha \cos(2\beta)=\cos(\frac{\pi}{2}+2\beta-Alpha). \end{align} If $\pi/2+2\beta-Alpha<\pi$, then by \eqref{costheta} we know $\theta=\pi/2+2\beta-Alpha>\pi/2+Alpha$. If $\pi/2+2\beta-Alpha>\pi$, then since $\beta <\frac{\pi}{2}$, we still have \begin{align*} \theta=2\pi- (\pi/2+2\beta-Alpha)=\frac{3\pi}{2}+Alpha-2\beta>\frac{\pi}{2}+Alpha. \end{align*} Therefore, \begin{align} \label{chonglai} d_{g_{S^2}}(\Pi^{-1}(f(P_1)),\Pi^{-1}(f(P_3)))\ge \frac{\pi}{2}+Alpha. \end{align} By \eqref{complexrepresentation}, we have \begin{align} \label{huanyuan} l(\gamma)=\int_{f(\gamma)}\frac{2}{1+|\omega|^2}\mathscr{H}^{0}(f^{-1}(\omega)\cap \gamma)|d\omega| \ge\int_{f(\gamma)}\frac{2}{1+|\omega|^2}|d\omega|. \end{align} Note that the metric of the unit sphere is given by $g_{S^2}=\frac{4}{(1+|\omega|^2)^2}\delta$, where $\omega$ denotes the coordinate of point on sphere obtained using stereographic projection map from the north pole. 
Hence from \eqref{huanyuan}, \begin{align} \label{chonglai2} l(\gamma)\ge l_{g_{S^2}}(\Pi^{-1}(f(\gamma)), \end{align} where $l_{g_{S^2}}$ is the length function on the unit sphere. Since $\Pi^{-1}\circ f (\gamma)$ is a curve in $S^2$ such that it starts from $\Pi^{-1}(f(P_1))$, goes through $\Pi^{-1}(f(P_3))$ and then goes back to $\Pi^{-1}(f(P_2))=\Pi^{-1}(f(P_1))$, by \eqref{chonglai} and \eqref{chonglai2} we know that the length of $\gamma$ satisfies \begin{align*} l(\gamma) \ge 2d_{g_{S^2}}(\Pi^{-1}(f(P_1)),\Pi^{-1}(f(P_3)))\ge \pi+2Alpha. \end{align*} That is, $l(\gamma)\ge \pi+2\tan^{-1}(t)$. Hence $diam_g(\mathbb{R}^2) \ge \pi+2\tan^{-1}(t)$. Therefore, we have shown that $diam_g(\mathbb{R}^2) = \pi+2\tan^{-1}(t)$. \end{proof} Here we make a remark. At the beginning of this project, we only knew that there exist a family of solutions given by \eqref{u_t}, and by just naively looking at the horizontal integrals, we figured out the upper bound for the corresponding diameters. It is quite straightforward up to this step. However, it took us quite a while to find out a way of exactly computing the conformal diameters corresponding to these functions $u_t$. Let us use the following example to geometrically illustrate the case $t=1$. In fact, this example is so important to us, that only after understanding it did we observe Proposition \ref{manypoints} and have a better understanding of the conformal diameters corresponding to solutions to Liouville equation \eqref{scalliouville}. So we will present the full details including some numerical computations as motivations in the example. \begin{example} \label{bifurcationof1dsolution} Let \begin{align} \label{1+e^z} u(x,y)=u_1(x,y)=\ln\left(\frac{2e^x}{2+2e^x\cos y+e^{2x}}\mathbb{R}ght). \end{align}Then clearly $u$ solves \eqref{scalliouville}. In the following, we will gradually show that $diam_g(\mathbb{R}^2)=\frac{3\pi}{2}$ if $g=e^{2u}\delta$ for such $u$. First, note that \begin{align} \label{jisuan1} \int_{-\infty}^{\infty} e^{u(x,y)}dx=\frac{2}{\sqrt{1+\sin^2 y}}\tan^{-1}\left(\frac{\cos y+e^x}{\sqrt{1+\sin^2 y}}\mathbb{R}ght)+C \end{align}and that \begin{align} \label{jisuan2} \int_0^{\pi} e^{u(x,y)}dy =\frac{\pi}{\sqrt{(e^{-x}+\frac{e^x}{2})^2-1}}. \end{align} By \eqref{jisuan1}, we have that \begin{align} \label{sup} \sup_{y \in \mathbb{R}} \int_{-\infty}^{\infty} e^{u(x,y)} dx=\frac{3\pi}{2}, \end{align} and hence by exact argument in the proof of Proposition \ref{utdiamter}, we have that $diam_g(\mathbb{R}^2) \le \frac{3\pi}{2}$. So the question is, can the equality be attained? Let us consider somehow the worst case. By \eqref{jisuan1}, the supremum of \eqref{sup} is attained at $y=(2k+1)\pi, \, k \in \mathbb{Z}$. Let us choose two points $P_1=(a,\pi)$ and $P_2=(a,-\pi)$, where $a \in \mathbb{R}$ such that \begin{align} \label{intchoiceofa} \int_{-\infty}^a e^{u(x,\pm\pi)}dx=\int_a^{\infty} e^{u(x,\pm\pi)}dx=\frac{3\pi}{4}. \end{align}One can see that $P_1$ is chosen to lie in the ``middle" way of the horizontal line from $(-\infty,\pi)$ to $(\infty,\pi)$, and similarly for $P_2$. From the choices, we can expect that the distance between $P_1=(a,\pi)$ and $P_2=(a,-\pi)$ is equal to $\frac{3\pi}{2}$ under the metric $g=e^{2u}\delta$. This guess can be supported by computing the length of ellipses connecting the two points: Let $C_s\, (s>0)$ be the right half of the ellipses to connect $(a,\pi)$ and $(a,-\pi)$, and thus the equation of $C_s$ is given by: \begin{align*} \frac{(x-a)^2}{s^2}+y^2=\pi^2. 
\end{align*} By setting $x=a+\pi \cos \theta$ and $y=\pi \sin \theta$, the length of $C_s$ under metric $g$ is given by \begin{align} \label{lcs} l(C_s)=\int_{-\pi/2}^{\pi/2}\frac{2\pi e^{a+s\pi \cos \theta}}{2+2e^{a+s\pi \cos \theta}\cos(\pi \sin \theta)+e^{2a+2s\pi \cos \theta}} \sqrt{s^2\sin^2\theta+\cos^2\theta}\, d\theta. \end{align} By \eqref{jisuan1} and \eqref{intchoiceofa}, we have \begin{align} \label{defofa} tan^{-1}(e^a-1)=\frac{\pi}{8}. \end{align} The following is the graph obtained by Mathematica of the function $l(C_s)-\frac{3\pi}{2}$ for $s \in [0,10]$.\\ \includegraphics[scale=1]{l3.png}\\ The graph indicates that $l(C_s) \ge \frac{3\pi}{2}, \, \forall s>0$, and $\lim_{s \mathbb{R}ghtarrow \infty}l(C_s)=\frac{3\pi}{2}$. This suggests that $d_g(P_1, P_2) = \frac{3\pi}{2}$. To rigorously prove it, we proceed with the exact same proof as in that of Proposition \ref{utdiamter}, for $t=1$. \begin{proof} Let $\gamma$ be a curve starting at $P_1$ and ending at $P_2$, then $\gamma$ must pass through some point on $x$-axis, which is given by $P_3=(b,0)$. Let $P_i'=f(P_i)$, where $f(z)=1+e^z$, which is the developing function for the solution $u$. By \eqref{defofa}, $P_1'=P_2'=(-\tan(\pi/8),0)$. Also, $P_3'=(b',0)$, where $b'=1+e^b>1$. Then as in the proof of Proposition \ref{utdiamter}, we have $l(\gamma)\ge l_{g_{S^2}}(\Pi^{-1}(f(\gamma))$. By direct computation, $\Pi^{-1}(P_1')=\Pi^{-1}(P_2')=(-\frac{\sqrt{2}}{2},0,-\frac{\sqrt{{2}}}{2})$, and we denote the point by $A$. Hence $\Pi^{-1}(f(\gamma))$ must be a closed curve on the unit sphere such that it starts and ends at the point $A$, and it must passes through the point $B=\Pi^{-1}(P_3')$, which lies in the circle connecting $(1,0,0)$ and the north pole in the northern hemisphere. Therefore, $\Pi^{-1}(\gamma) \ge 2dist_{g_{S^2}}(A,B) \ge \frac{3\pi}{2}$. Hence $diam_g(\mathbb{R}^2)=\frac{3\pi}{2}$. \end{proof} An interesting fact is that, from the proof above, we can now rigorously prove that $$\lim_{s \mathbb{R}ghtarrow \infty}l(C_s)=\frac{3\pi}{2}.$$ Indeed, when $s$ goes to $+\infty$, $\Pi^{-1}(f(C_s))$ is getting closer and closer to the curve that starts from $(-\frac{\sqrt{2}}{2},0,-\frac{\sqrt{{2}}}{2})$, goes along the great circle to the north pole, and then goes back to $(-\frac{\sqrt{2}}{2},0,-\frac{\sqrt{{2}}}{2})$. Hence the total length is closer and closer to $\frac{3\pi}{2}$. Without using the geometry on unit sphere, it seems very difficult to handle the ``monster" integral \eqref{lcs}. This finishes the discussion of Example 3.1. \end{example} Next, we construct a solution to \eqref{scalliouville} such that the corresponding conformal diameter can be greater than or equal to $2\pi$. \begin{example} \label{2pi} Let \begin{align} \label{infinitydiameter} u(x,y)=\ln \left(\frac{2e^{x+e^x \cos y}}{1+e^{2e^x \cos y}}\mathbb{R}ght). \end{align}We will show that $u$ solves \eqref{scalliouville}, and that $diam_g(\mathbb{R}^2)\ge 2\pi$, where $g=e^{2u}\delta$. \begin{proof} First, such $u$ given by \eqref{infinitydiameter} corresponds to the developing function \begin{align*} f(z)=e^{e^z}. \end{align*} Hence $u$ is a solution to \eqref{scalliouville}. Next, note that \begin{align*} f(z)=e^{e^x\cos y}\left(\cos (e^x \sin y)+i \sin (e^x \sin y)\mathbb{R}ght). \end{align*}We consider the distance between the point $P=(\ln \pi,\frac{\pi}{2})$ and point $Q=(\ln \pi, -\frac{3\pi}{2})$. 
We have $f(P)=f(Q)=-1$ and $P':=\Pi^{-1}(f(P))=\Pi^{-1}(f(Q))=(-1,0,0)$, where $\Pi$ is the stereographic projection map from the north pole. Let $\gamma$ be a curve starting from $P$ to $Q$, then $\gamma$ must pass through a point $P_1=(b,0)$ and another point $P_2=(c,-\pi)$. Since $f(P_1)=e^{e^b}>1$, $P_1':=\Pi^{-1}\circ f(P_1)$ lies between the arc from $(1,0,0)$ to the north pole. Since $f(P_2)=e^{-e^c} \in (0,1)$, $P_2':=\Pi^{-1}\circ f(P_2)$ lies between the arc from $(1,0,0)$ to the south pole. Let $d_g$ be the distance function under the metric $g=e^{2u}\delta$, $d_{S^2}$ be the distance function on the unit sphere $S^2$, and $l_{S^2}$ be the length function on the unit sphere. By the proof of Proposition \ref{manypoints} or Proposition \ref{utdiamter}, and since $\Pi^{-1}\circ f(\gamma)$ is a curve on the unit sphere starting from $P'$, passing through $P_1', P_2'$ and ending at $P'$, we have \begin{align*} d_g(P,Q) \ge l_{S^2}(\Pi^{-1} \circ f (\gamma)) \ge d_{S^2}(P', P_1')+d_{S^2}(P_1',P_2')+d_{S^2}(P_2',P'). \end{align*}By the location of $P', P_1'$ and $P_2'$, we have \begin{align*} d_g(P,Q) \ge 2\pi. \end{align*}Hence $diam_g(\mathbb{R}^2 ) \ge 2\pi$. \end{proof} \end{example} \section{Diameter upper bound for supersolutions to \eqref{scalliouville}} In this section, our main goal is prove Theorem \ref{weakbounddiamter}. In the following, we will constantly use $d_g(x,y)$ to denote the distance between two points $x$ and $y$ under the metric $g$, that is, the infimum of the lengths of piecewise smooth curves starting at $x$ and ending at $y$. Unlike previous sections, in this section $x$ and $y$ are denoted as points in Euclidean spaces, not coordinate components. First, for the noncompact Riemannian manifold $(\mathbb{R}^n, g)$, it is convenient to introduce the following definition. \begin{definition} For any $p \in \mathbb{R}^n$, $d_g(p,\infty)$ is defined as \begin{align*} \inf\{l(\gamma): \mbox{$\gamma$ is a piecewise smooth curve from [0,1) to $\mathbb{R}^n$ with $\gamma(0)=p$ and $|\gamma(1^-)|=\infty$}\}, \end{align*}where $l$ is the length function under metric $g$ and $|\cdot|$ is the Euclidean norm. \end{definition} Before stating certain geometric results related to $(\mathbb{R}^2, e^{2u}\delta)$, we first prove the following key observation, which does not rely on finiteness assumption on conformal volume. Actually the statement can be made in $\mathbb{R}^n$ as follows: \begin{proposition} \label{diametercontrolinfty} Let $g$ be a Riemannian metric on $\mathbb{R}^n$ with $Ric_g \ge (n-1)g$, then we have $d_g(x, \infty)\le \pi$ for all $x \in \mathbb{R}^n$. \end{proposition} Proposition \ref{diametercontrolinfty} is related to Myer's Theorem. Recall that Myer's Theorem states that if $(M,g)$ is a geodesically complete manifold with $Ric \ge (n-1)$, then $diam(M) \le \pi$. Moreover, if the diameter is equal to $\pi$, then by Cheng's rigidity result in \cite{Cheng75} , $M$ must be isometric to sphere. However, the proof of Myer's theorem requires that for any $x, y \in M$, there is a minimizing geodesic connecting $x$ and $y$. For incomplete manifolds, generally two points cannot be connected by a length minimizing curve. So in order to prove Proposition \ref{diametercontrolinfty}, we need to somehow overcome the non-completeness issue. The proof is motivated by the idea in the proof of Hopf-Rinow theorem. \begin{proof}[Proof of Proposition \ref{diametercontrolinfty}] Fix $x \in \mathbb{R}^n$ and $r>0$, and $B_r(x)$ be the Euclidean $r$-neighborhood of $x$. 
Let $y \in \partial B_r(x)$ such that $d_g(x,y)=d_g(x,\partial B_r(x))=T$. Let $\gamma:[0,b]$ be the unit speed curve starting at $x$. We say $\gamma|_{[0,b]}$ \textit{aims at} $y$ if $\gamma$ starts at $x$ and that for any $0<t<b$, $l(\gamma|_{[0,t]})+d_g(\gamma(t),y)=d_g(x,y)$. Clearly $b \le T$. We claim that there exists such a curve starting at $x$ and aiming at $y$ with length equal to $T$. We prove this claim by modifying the proof of Hopf-Rinow Theorem. Let $$S=\{b \in [0,T]: \mbox{there exists a unit speed curve $\gamma$ starting at $x$ such that $\gamma|_{[0,b]}$ aims at $y$}\}$$ and let $T_0=\sup S$. First, note that $S$ is not empty. This is because we can always choose a small geodesic ${\varepsilonepsilon}silon$-neighborhood of $x$, denoted $B'_{{\varepsilonepsilon}silon}(x)$. Since $d_g(\cdot,y)$ is continuous, there is $z \in \partial B'_{{\varepsilonepsilon}silon}(x)$ such that $d_g(z, y)= d_g(y,\partial B'_{{\varepsilonepsilon}silon})$. Let $\gamma$ be the minimizing geodesic connecting $x$ and $z$, and thus $\gamma|_{[0,{\varepsilonepsilon}silon]}$ aims at $y$. If $T_0<T$, then we claim that $\gamma|_{[0,T_0]}$ lies in a compact subset of $B_r(x)$. Indeed, if this is not the case, then there exists $w\in \partial B_r(x)$ such that $\gamma(t)=w$ for some $t \le T_0$. Then $d_g(x,w)\le T_0<T=d_g(x,y)$, which is a contradiction for the choice of $y$ above. Therefore, $\gamma|_{[0,T_0]} \Subset B_r(x)$. Hence again we can choose a small geodesic neighborhood of $\gamma(T_0)$ and this extends the length of the curve starting at $x$ aiming at $y$. Therefore, eventually we conclude that $T_0=T$. Note that such $\gamma$ is actually a length minimizing geodesic connecting $x$ and $y$. Therefore, by the proof of Myer's Theorem, see for example \cite{Lee}, we have $l(\gamma) \le \pi$. Hence $d_g(x, \partial B_r(x)) \le \pi$. Since $r$ arbitrary, let $r \mathbb{R}ghtarrow\infty$ we have $d_g(x,\infty) \le \pi$. \end{proof} \begin{remark} \label{keyobservation} The essential observation in the proof of of Proposition \ref{diametercontrolinfty} is that for any $x \in \mathbb{R}^n$ and $r>0$, we can find a point $y \in \partial B_r(x)$ such that there is a length minimizing curve connecting $x$ and $y$. We will use this fact often times in this paper. Note that this is generally not true for every $y \in \partial B_r(x)$. \end{remark} \begin{remark} \label{chengshi} Also from the proof above we know for any $x \in \mathbb{R}^n$, there exists a unit speed curve $\gamma$ starting at $x$ and aiming at $\infty$, that is, for any $0<t<d_g(x,\infty)$, $l(\gamma_{[0,t]})+d_g(\gamma(t),\infty)=d_g(x,\infty)$. \end{remark} \begin{remark} The other triangle inequality $d_g(x,\infty)+d_g(y,\infty)\ge d_g(x,y)$ generally does not hold. This can be seen in $(\mathbb{R}^2,e^{2u}\delta)$, where $u(t)=\ln(\sech t)$ is the $1D$ solution to \eqref{scalliouville}. Let $x=(-R,0)$ and $y=(R,0)$, then $d_g(x,y)\mathbb{R}ghtarrow \pi$ as $R \mathbb{R}ghtarrow \infty$ while both $d_g(x,\infty)$ and $d_g(y,\infty)$ converge to $0$. \end{remark} Applying Proposition \ref{diametercontrolinfty} and assuming the finiteness of volume of $(\mathbb{R}^2,g)$, we now prove Theorem \ref{weakbounddiamter}. \begin{proof}[Proof of Theorem \ref{weakbounddiamter}] We first show there exists a sequence $r_k \mathbb{R}ghtarrow \infty$ such that $\int_{\partial B_{r_k}} e^{u} ds \mathbb{R}ghtarrow 0$. Suppose this is not the case, then there is $c>0$ such that for any $r>0$, $\int_{\partial B_{r}} e^{u} ds>c$. 
Hence by H\"older's inequality, \begin{align*} 2\pi r\int_{\partial B_r} e^{2u} ds \ge \left(\int_{\partial B_{r}} e^{u} ds\mathbb{R}ght)^2 \ge c^2. \end{align*} Hence $\int_{\partial B_r}e^{2u} ds \ge \frac{c^2}{2\pi r}$, and this would imply $\int_{\mathbb{R}^2} e^{2u} dx=\infty$. Thus we get a contradiction. For any $x,y \in \mathbb{R}^2$, by Proposition \ref{diametercontrolinfty} and Remark \ref{chengshi}, we can choose curves $\gamma_1$ and $\gamma_2$ starting at $x$ and $y$ respectively aiming at $\infty$ with lengths less than or equal to $\pi$. Let $z_{k_i}=\partial B_{r_k} \cap \gamma_i,\, i=1,2$, where $r_k$ is the sequence above. Hence we have \begin{align*} dist_g(x,y)\le dist(x,z_{k_1})+dist(y,z_{k_2}) +dist_g(z_{k_1},z_{k_2})\le 2\pi+\int_{\partial B_{r_k}} e^{u} \mathbb{R}ghtarrow 2\pi \end{align*}as $k \mathbb{R}ghtarrow \infty$. Therefore, $diam(\mathbb{R}^2)\le 2\pi$. \end{proof} \section{geometric inequalities for radial supersolutions to \eqref{scalliouville}} We first prove the following ordinary differential inequality results. \begin{proposition} \label{aiya} Let $u=u(r)$ be a function satisfying \begin{align} \label{2dradialcase} u^{''}+\frac{u'}{r}+e^{2u}\le 0, \end{align} then \begin{align} \label{radlength} \int_0^{\infty} e^{u(r)}dr \le \pi. \end{align} \end{proposition} Unfortunately so far we cannot give a analytic proof of this result, so we use geometric argument to prove this proposition. \begin{proof} At any $x \in \mathbb{R}^2$, we consider $g_x=e^{2u(|x|)}\delta_x$,where $u$ is a solution to \eqref{2dradialcase}. This gives a metric $g$ in $\mathbb{R}^2$ with $Ric_g \ge g$. First note that all the rays starting from the origin are geodesics. This is because, given any curve $\gamma$, the geodesic curvature of $\gamma$ is given by $\kappa_g=e^{-u}(\kappa+\partial u/\partial \nu)$, where $\kappa$ is the curvature of $\gamma$ in the Euclidean metric, and $\nu$ is the unit normal to $\gamma$. So if $\gamma$ is a ray, then since $u$ is radial, $\partial u/\partial \nu$ is zero, and of course $\kappa$ is also zero, so $k_g$ is zero. Hence rays starting from origin are geodesics. By local uniqueness of geodesics, all geodesics starting from the origin must be the rays. Next, we claim that for any $x \in \mathbb{R}^2$, there is a minimizing geodesic connecting $0$ and $x$. Indeed, by Remark \ref{keyobservation}, there is a minimizing geodesic connecting $0$ and $\partial B_{|x|}$. Since $u(|\cdot|)$ is a radial function, by symmetry there is a minimizing geodesic connecting $0$ and $x$. Since all the geodesics from $0$ are rays and no two rays intersect at other points except $0$, such minimizing geodesic must be part of the rays starting from $0$ and passing through $x$. Therefore, by the proof of Myer's Theorem, $\int_0^{|x|}e^{u(r)}dr \le \pi$. Sending $|x| \mathbb{R}ghtarrow \infty$, we have \eqref{radlength}. \end{proof} \\ Recall that $l(r):=\int_{\partial B_r} e^{u} ds$, which is the conformal length of $\partial B_r$, and that $A(r):=\int_{B_r} e^{2u} dx$, which is the conformal area of $B_r$. Now we prove Theorem \ref{maintheoremonradialcase}. \begin{proof}[Proof of Theorem \ref{maintheoremonradialcase}] We first prove the second inequality in \eqref{leibi}: Since $u$ is radial, we may write $u(x)=u(r)$, where $r=|x|$. By \eqref{ricci1} and integration by part, we have \begin{align} \label{yanxing1} A \le -u' 2\pi r. 
\end{align} Since $A'=2\pi re^{2u}$, by \eqref{yanxing1} we have \begin{align*} 2A A'\le -2u'e^{2u}(2\pi r)^2 =&-(e^{2u})'4\pi^2 r^2\\ =& -(e^{2u}4\pi^2 r^2)'+e^{2u}8\pi^2 r\\ =& -(e^{2u}4\pi^2r^2)'+4\pi A'. \end{align*} Hence \begin{align*} \left(4\pi A -A^2\mathbb{R}ght)'\ge \left(e^{2u}4\pi^2 r^2\mathbb{R}ght)'. \end{align*}Integrating the above from $0$ to $r$, we have \begin{align} \label{yanxing2} 4\pi A(r) -A^2(r) \ge \left(2\pi r e^{u(r)}\mathbb{R}ght)^2=l^2(r). \end{align}This proves the second inequality of \eqref{leibi}. \\ Next, we prove \eqref{volumeradial}. This is actually from the above inquality, since it immediately implies that $A(r) \le 4\pi$ for any $r>0$. Let $r\mathbb{R}ghtarrow \infty$, we conclude that $vol_g(\mathbb{R}^2) =\int_{\mathbb{R}^2} e^{2u} \le 4\pi$. \\ Next, we prove the results related to properties of $l(r)$: First, \eqref{yanxing3} also follows from \eqref{yanxing2}, since $4\pi A-A^2 \le \left(\frac{4\pi-A+A}{2}\mathbb{R}ght)^2 =4\pi^2$. Hence $l(r) \le 2\pi$ for any $r>0$. Note that $l'(r)=2\pi e^{u(r)}(1+ru'(r))$ and that $(1+ru'(r))'\le -re^{2u}<0$. If $1+ru'(r)\ge 0$ for all $r>0$, then $l'(r) \ge 0$ and hence $l(r)$ is increasing. This cannot happen since by the proof of Theorem \ref{weakbounddiamter} we can choose a sequence $r_k$ such that $l(r_k) \mathbb{R}ghtarrow 0$. Hence there exists $r_0>0$ such that when $r\le r_0$, $1+ru'(r) \ge 0$ and when $r>r_0$, $1+ru'(r)\le 0$. Hence $l(r)$ is increasing when $r<r_0$, reaches its maximum at $r_0$ and then decreasing when $r>r_0$. Since $l(r_k) \mathbb{R}ghtarrow 0$ as $k \mathbb{R}ghtarrow \infty$ and $l(r)$ is decreasing when $r>r_0$, \eqref{jiandanyouyong} is also proved. \\ Next, we prove \eqref{boundsforr0} by applying the Heintze-Karcher inequality, which is originally obtained in \cite{HK78}, see also \cite{GLH}[Theorem 4.21] : For any $0<r \le r_0$, we have \begin{align} \label{zhuanzhu1} l(r) \le \int_{\partial B_{r_0}}\Big(\cos (R(r_0)-R(r))-\eta(r_0) \sin (R(r_0)-R(r)) \Big)e^{u(r_0)} ds, \end{align}where the function $\eta(\rho)$ is the mean curvature of $\partial B_{\rho}$ under the metric $g=e^{2u}\delta$. Note that even though the statement of Heintze-Karhcer inequality is for domains in complete manifolds, the proof only requires that any point inside the domain can be connected to the boundary along exponential map. This is true in our case since as shown in the proof Proposition \ref{aiya}, any line segment belonging to the ray starting from the origin must be a minimizing geodesic. From \eqref{zhuanzhu1} we have that for any $0<r\le r_0$, \begin{align} \cos (R(r_0)-R(r))-\eta(r_0) \sin (R(r_0)-R(r)) \ge 0. \end{align} Since $\eta=e^{-u}(u'+\frac{1}{r})$ and $l'(r_0)=0$, $\eta(r_0)=0$. Hence $\cos (R(r_0)-R(r)) \ge 0$ for any $0<r \le r_0$. Therefore, $R(r_0) \le \frac{\pi}{2}$. This is \eqref{boundsforr0}. \\ Now let us prove the first inequality of \eqref{leibi}, which is also a consequence of Heintze-Karcher inequality. We prove as follows. For any $r>0$, applying Heintze-Karcher inequality on $(B_r, e^{2u}\delta)$ and integrating from $0$ to $r$, we have \begin{align} \label{zhujidan} A(r) \le l(r) \int_0^R (\cos t-\eta(r) \sin t)dt, \end{align}where $R=R(r)=\int_0^r e^{u(\rho)}d\rho$. Hence $\eta(r) \le \cot r$ for $r \le R$. Let $\eta(r)=\cot \xi$, where $\xi\in [0,\pi]$, and thus $\xi \ge R$ and when $0\le t \le \xi$, $\cos t-\eta(r) \sin t \ge 0$. Hence from \eqref{zhujidan} we have \begin{align} \label{zhujidan'} A(r) \le l(r) \int_0^{\xi} (\cos t-\eta(r) \sin t)dt. 
\end{align} Since all the radial rays starting $\partial B_r$ exhausts $\mathbb{R}^2 \setminus B_r$, we can also apply Heintze-Karcher inequality on $\mathbb{R}^2 \setminus B_r$ to get \begin{align} \label{zhujidan1} A_{\infty}-A(r) \le l(r) \int_0^{R_{\infty}-R}(\cos t+\eta \sin t)dt, \end{align}where $R_{\infty}=\int_0^{\infty} e^{u(\rho)}d\rho$. By Proposition \ref{aiya}, $R_{\infty} \le \pi$. Suppose that $t \in [0,\pi]$, then $\cos t+\eta(r) \sin t =\sin t (\cot t+\cot \xi) \ge 0$ if and only if $0 \le t \le \pi-\xi$. Note also that from \eqref{zhujidan1} we have that \begin{align} \label{faxian} \cos t-\eta(r) \sin t \ge 0, \quad \forall 0\le t \le R_{\infty}-R. \end{align} Then since $\eta(r)=\cot \xi$ and the integrand is nonnegative, we have \begin{align} \label{Rinftyestimate} R_{\infty}-R \le \pi-\xi. \end{align}Therefore, we have \begin{align} \label{zhujidan1'} A_{\infty}-A(r) \le l(r) \int_0^{\pi-\xi}(\cos t+\eta \sin t)dt. \end{align} Multiplying \eqref{zhujidan'} and \eqref{zhujidan1'}, we have \begin{align*} A(A_{\infty}-A) \le & l^2 \left( \int_0^{\xi} (\cos t-\eta \sin t)dt\mathbb{R}ght) \left(\int_0^{\pi-\xi} (\cos t+\eta \sin t) dt \mathbb{R}ght)\\ =& l^2 \left(\sin \xi +\cot \xi (\cos \xi -1)\mathbb{R}ght)\left(\sin \xi +\cot \xi (\cos \xi +1)\mathbb{R}ght)\\ =& l^2 \frac{1-\cos \xi}{\sin \xi } \frac{1+\cos \xi}{\sin \xi}=l^2. \end{align*}This proves the first inequality of \eqref{leibi}. \\ Next, we prove \eqref{hmm}. It is actually a consequence of Bishop-Gromov inequality: Let $B_R'$ denote the geodesic ball of radius $R$ centered at the origin, where $R=R(r)=\int_0^r e^{u(\rho)}dr$, then \begin{align*} f(r):=\frac{vol_g(B_R')}{vol_{S^2}(B_R')}=\frac{\int_{B_r}e^{2u}dx}{2\pi (1-\cos R(r))} \end{align*}is a non-increasing function. Note that even if $(\mathbb{R}^n, e^{2u}\delta)$ is not complete, we can still apply this inequality because the proof only requires that for any point on $\partial B_R$, there is a minimizing geodesic connecting the origin and the point. This is true since $u$ is radial. We have also used the fact that the line segment starting from the origin to any point on $\partial B_r$ is a minimizing geodesic, as shown in the proof of proposition \ref{aiya}. Now that $f(r)$ is non-increasing, $f'(r)\le 0$. By directly computing $f'(r)$, we have \begin{align} \label{jinzhipaixu} 2\pi re^{2u(r)}2\pi(1-\cos R) -\left(\int_{B_r}e^{2u}dx\mathbb{R}ght)2\pi (\sin R) e^{u(r)} \le 0 \end{align} Since $A(r)=\int_{B_r}e^{2u}dx$ and $l(r)=2\pi r e^{u(r)}$, \eqref{jinzhipaixu} therefore implies \eqref{hmm}. \\ Next, we prove \eqref{jiefaqian} and \eqref{jiefa}. Recall that by the Heintze-Karcher inequality, we have \begin{align} \label{radialHeintzKarcher} A(r) \le \int_0^R \int_{\partial B_r} (\cos t-\eta \sin t)ds dt, \end{align}where $R=R(r)=\int_0^r e^{u(\rho)}d\rho$ and $\eta$ is the geodesic curvature of $\partial B_r$, which is equal to $e^{-u}(\frac{1}{r}+u'(r))$. Hence by \eqref{radialHeintzKarcher} we have \begin{align} A \le & l \sin R-\eta l (1-\cos R)\\ =&l \sin R -e^{-u}(1/r+u'(r))2\pi r e^{u(r)}(1-\cos R)\\ =\label{withoutusingbishop} & l\sin R-e^{-u} l' (1-\cos R). \end{align} Since $l' \ge 0$ when $r \le r_0$, hence $A \le l \sin R$, and this proves \eqref{jiefaqian}. \eqref{jiefa} is a direct consequence of \eqref{jiefaqian}. \\ Next, we prove \eqref{busanhuang} and \eqref{bushanhuangtuichu}. 
By \eqref{hmm} and the first inequality of \eqref{leibi}, we have \begin{align*} \left(\frac{1-\cos R}{\sin R}\mathbb{R}ght)^2 \le \frac{A^2}{l^2} \le \frac{A^2}{A(A_{\infty}-A)}. \end{align*}Hence \begin{align*} \frac{1-\cos R}{1+\cos R} \le \frac{A}{A_{\infty}-A}. \end{align*}After simplification, we obtain \eqref{busanhuang}, and \eqref{bushanhuangtuichu} is just a direct consequence of \eqref{busanhuang}. \\ It remains to show \eqref{diamradial}. This can be proved by using argument similar to that used in the proof of Proposition \ref{diametercontrolinfty}: For any $x, y \in \mathbb{R}^n$, We let $\gamma_1, \, \gamma_2$ be the rays staring from $0$ and passing through $x$ and $y$ respectively. Let $\gamma_x$ be the line segment connecting $0$ and $x$, and let $\gamma^x=\gamma_1 \setminus \gamma_x$. Similarly, let $\gamma_y$ be the line segment connecting $0$ and $y$, and let $\gamma^y=\gamma_2 \setminus \gamma_y$. We proceed similarly as the proof of Theorem \ref{maintheoremonradialcase}: If $l(\gamma_x)+l(\gamma_y) \le \pi$, then this already implies $d_g(x,y) \le \pi$ by triangular inequality. If $l(\gamma_x)+l(\gamma_y) \ge \pi$, then since $l(\gamma_i) \le \pi, \, i=1,2$ as we proved in Proposition \ref{aiya}, we have \begin{align} \label{moshenyihao3'} l(\gamma^x)+l(\gamma^y) \le \pi. \end{align} Also, by \eqref{jiandanyouyong} which we already proved, we know that for any ${\varepsilonepsilon}silon>0$, we can choose $x_R=\gamma^x \cap \partial B_R$ and $y_R=\gamma^y \cap \partial B_R$ such that $d_g(x_R, y_R)\le \int_{\partial B_R} e^u ds<{\varepsilonepsilon}silon$. Therefore, by \eqref{moshenyihao3'}, \begin{align*} d_g(x,y)\le d_g(x,x_R)+d_g(x_R,y_R)+d_g(y_R,y) \le \pi+{\varepsilonepsilon}silon. \end{align*} Let ${\varepsilonepsilon}silon \mathbb{R}ghtarrow 0$, we have $d_g(x,y)\le \pi$. \end{proof} \begin{remark} \label{bushang} From the proof, one can see that if either of the inequalities in \eqref{leibi} becomes equalities for some value $r>0$, then $u$ must be a solution to \eqref{scalliouville} and then $(\mathbb{R}^2, e^{2u}\delta)$ is a sphere minus a point. \end{remark} \subsection*{Some alternative proofs of some of the inequalities in Theorem \ref{maintheoremonradialcase}} \begin{itemize} \item \eqref{volumeradial} is proved as a consequence of the second inequality of \eqref{leibi}. Actually the first inequality of \eqref{leibi} also implies \eqref{volumeradial}, since \begin{align*} A_{\infty}-A\le \frac{l^2}{A}. \end{align*}Let $r\mathbb{R}ghtarrow 0$ on both sides of the above inequality, we have $A_{\infty} \le 4\pi$, which is exactly \eqref{volumeradial}. \item \eqref{jiefa} can also be proved by applying Alexandrov inequality. \begin{proof}[Second proof of \eqref{jiefa}] To show that $l(r) \ge A(r)$ for $r \le r_0$, we first exploit the following Alexandrov inequality (see \cite{Top99} for a proof): For any $K_0 \in \mathbb{R}$, \begin{align} \label{alexandrov'} 4\pi A \le L^2 +K_0A^2+2A\int_{{\mathbf O}mega}(K-K_0)_+ dvol_g. \end{align}Let $K_0=1$, and we apply \eqref{alexandrov'} to $(B_r, g)$. Note that \begin{align*} (K-1)_+=(-(\mathbb{D}elta u) e^{-2u}-1)_+=\left(e^{-2u}(-\mathbb{D}elta u-e^{2u})\mathbb{R}ght)_+=e^{-2u}(-\mathbb{D}elta u-e^{2u}), \end{align*}hence we have \begin{align*} 4\pi A-A^2 \le & l^2+2A\int_{B_r} (-\mathbb{D}elta u-e^{2u}) dx\\ =&l^2+2A\left(2\pi\int_0^r(-\rho u'(\rho))'d\rho-A\mathbb{R}ght)\\ =&l^2+2A\left(-2\pi ru'(r)-A\mathbb{R}ght). 
\end{align*}Hence \begin{align*} l^2 \ge& 4\pi A(1+ru'(r)) +A^2\\ \ge& A^2, \quad \mbox{when $r\le r_0$} \end{align*}since $ru'(r) \ge -1$ when $r \le r_0$. \end{proof} \item \begin{proof}[Alternative proof of \eqref{yanxing3}:] If $l(r)$ achieves its maximum at $r_0$, then $l'(r_0)=0$, and thus $1+r_0e^{u(r_0)} =0$. Hence $\partial B_{r_0}$ is a geodesic under the metric $g=e^{2u}$. Applying Toponogov's Theorem, we have $l(r_0) \le l_{S^2}(\partial B_{R(r_0)}')$, where $R(r)$ is the function defined above. Therefore, \begin{align*} l(r_0) \le 2\pi \sin R(r_0) \le 2\pi. \end{align*} \end{proof} \end{itemize} In the end of this section, we remark that if $u$ satisfies \eqref{ricci1} and $(\mathbb{R}^2, e^{2u}\delta)$ can be completed as a closed Riemannian surface, then the first inequality of \eqref{leibi} is exactly the L\'evy-Gromov isoperimetric inequality, while the second inequality is equivalent to that of \cite[Corollary 3.2]{NiWang}, provided one can show that any minimizer to the functional \begin{align*} h_{\beta}({\mathbf O}mega):=\Big\{\frac{\int_{\partial {\mathbf O}mega}e^{u} ds}{\int_{\mathbb{R}^2}e^{2u}dx}:\, {\mathbf O}mega \subset \mathbb{R}^2,\quad \frac{\int_{{\mathbf O}mega} e^{2u}dx}{\int_{\mathbb{R}^2}e^{2u}dx}=\beta\Big\} \end{align*}must be a ball centered at the origin. \section{Higher Dimensional Results for radial solutions to generalized Liouville equation} There are a few directions to extend the study of solutions or supersolutions to Liouville equation \eqref{scalliouville} in $\mathbb{R}^2$ to higher dimensional spaces. For example, \eqref{scalliouville} can be viewed as the equation describing globally conformally flat Einstein manifolds with $Ric_g=(n-1)g$ where $n=2$ is the dimension of the manifold. In higher dimensional case, it is natural to consider corresponding solutions to equation $Ric_g=(n-1)g$, $g=e^{2u}\delta$ where $\delta$ is the classical metric in $\mathbb{R}^n$. By \cite[Page 58]{Besse}, we have the formula for Ricci tensor \begin{align} \label{ricciconformalchangeformula} Ric_g=(2-n)(\nabla d u-du \otimes du)+(-\mathbb{D}elta u-(n-2)|\nabla u|^2) \delta. \end{align}Therefore, the higher dimensional Liouville equation reads \begin{align} \label{higherdimensionalLiouville} (2-n)(\nabla d u-du \otimes du)+(-\mathbb{D}elta u-(n-2)|\nabla u|^2) \delta=(n-1)e^{2u}\delta. \end{align} Let $u$ be a radial solution to $Ric_g \ge (n-1)g$. We first recall the equation when $u$ is radially symmetric. Let $h$ be the unit round metric on $S^{n-1}$, then \begin{align*} \delta=dr^2+r^2h \end{align*}and thus \begin{align} \label{change} \delta_{ij}-\frac{x_ix_j}{r^2}=r^2h_{ij} \end{align}where $h_{ij}=h(\partial_i,\partial_j)$. Hence if $u$ is radially symmetric, then we have \begin{align*} u_iu_j=u_r^2\frac{x_ix_j}{r^2}=u_r^2dr^2(\partial_i,\partial_j) \end{align*}and \begin{align*} u_{ij}=&u_{rr}\frac{x_j}{r}\frac{x_i}{r}+\frac{u_r}{r}\left(\delta_{ij}-\frac{x_ix_j}{r^2}\mathbb{R}ght)\\ =&[u_{rr}dr^2+ru_rh](\partial_i,\partial_j), \quad \mbox{by \eqref{change}}. \end{align*}Since $du=\sum_iu_i dx_i$ and $\nabla du=\sum_{i,j}u_{ij}dx_i \otimes dx_j$, we have \begin{align} \label{hessiansphericalcoordinates} \nabla du=u_{rr}dr^2+ru_rh, \end{align}and \begin{align} du\otimes du=u_r^2dr^2, \end{align} and thus \eqref{ricciconformalchangeformula} reads \begin{align*} Ric_g=-(n-1)(u^{''}+\frac{u'}{r})dr^2-\left(u^{''}+(2n-3)\frac{u'}{r}+(n-2)(u')^2\mathbb{R}ght)r^2h. 
\end{align*}Hence $Ric_g \ge (n-1)g$ is equivalent to \begin{align} \label{higherdimensionalradiallycase} \begin{cases} u^{''}+\frac{u'}{r}+e^{2u}\le 0\\ u^{''}+(2n-3)\frac{u'}{r}+(n-2)(u')^2+(n-1)e^{2u}\le 0. \end{cases} \end{align} Motivated by Theorem \ref{maintheoremonradialcase}, it is natural to ask: If $(M,g)=(\mathbb{R}^n, e^{2u}\delta)$ where $u$ is a radial function satisfying \eqref{higherdimensionalradiallycase}, then is it true that $diam_g(\mathbb{R}^2) \le \pi$ and $vol_g(\mathbb{R}^n) \le vol_{g_{S^n}}(S^n)$? The answer is yes. Actually, even assuming the weaker assumption \eqref{2dradialcase}, we still have the same diameter upper bound $\pi$. The proof of this is almost identical to the proof of \eqref{diamradial}. \begin{proposition} \label{zaoshuizaoqi} If $u$ satisfies \eqref{2dradialcase}, and let $g=e^{2u(|\cdot|)}\delta$ where $\delta$ is the classical metric in $\mathbb{R}^n$, then $diam_g(\mathbb{R}^n) \le \pi$. \end{proposition} \begin{proof} Let $x$ and $y$ be two points in $\mathbb{R}^n$, and let $\gamma_1, \, \gamma_2$ be the rays staring from $0$ and passing through $x$ and $y$ respectively. Let $\gamma_x$ be the line segment connecting $0$ and $x$, and let $\gamma^x=\gamma \setminus \gamma_x$. Similarly, let $\gamma_y$ be the line segment connecting $0$ and $y$, and let $\gamma^y=\gamma \setminus \gamma_y$. If $l(\gamma_x)+l(\gamma_y) \le \pi$, then this already implies $d_g(x,y) \le \pi$ by triangular inequality. If $l(\gamma_x)+l(\gamma_y) \ge \pi$, then by Proposition \ref{aiya}, \begin{align} \label{moshenyihao3} l(\gamma^x)+l(\gamma^y) \le \pi. \end{align} Since $\lim_{r \mathbb{R}ghtarrow \infty} 2\pi r e^{u(r)}=0$ as proved in Theorem \ref{maintheoremonradialcase}, we know that for any ${\varepsilonepsilon}silon>0$, we can choose $x_R=\gamma^x \cap \partial B_R$ and $y_R=\gamma^y \cap \partial B_R$ such that $d_g(x_R, y_R)\le 2\pi R e^{u(R)}<{\varepsilonepsilon}silon$. Therefore, by \eqref{moshenyihao3}, \begin{align*} d_g(x,y)\le d_g(x,x_R)+d_g(x_R,y_R)+d_g(y_R,y) \le \pi+{\varepsilonepsilon}silon. \end{align*} Let ${\varepsilonepsilon}silon \mathbb{R}ghtarrow 0$, we have $d_g(x,y)\le \pi$. \end{proof} Now let us state the higher dimensional result for radially symmetric solutions. \begin{proposition} Let $u$ be the function satisfying \eqref{higherdimensionalradiallycase}, then under the metric $g=e^{2u(|\cdot|)}\delta$, where $\delta$ is the Euclidean metric in $\mathbb{R}^n$, then \begin{align} \label{ndimdiameter} diam_g(\mathbb{R}^n) \le \pi \end{align} and \begin{align} \label{ndimvolume} vol_g(\mathbb{R}^n) \le vol(S^n). \end{align} \end{proposition} \begin{proof} \eqref{ndimdiameter} is proved in Proposition \ref{zaoshuizaoqi}. \eqref{ndimvolume} follows from Bishop-Gromov Theorem. Indeed, as discussed in the proof of \eqref{hmm}, Bishop-Gromov Theorem can be applied in $(\mathbb{R}^n, g)$ for $g$ to be a radially symmetric conformally flat metric. \end{proof} At the end of the section, we remark that any solution to \eqref{higherdimensionalLiouville} must be radially symmetric about a point. This is because any solution to \eqref{higherdimensionalLiouville} is also a solution to the Yamabe equation, and hence by \cite{CL91}, such solution is radially symmetric about a point. \section{Connection with sphere covering inequality for the case $K \ge 1$} In this section, we prove Theorem \ref{genghm} and Theorem \ref{radialspherecovering}. 
First, let us state an equivalent version of L\'evy-Gromov isoperimetric inequality in closed Riemannian manifold of dimension $2$, which gives a simple algebraic relation between length and area. \begin{proposition} \label{xulie} Let $M$ be a closed Riemannian manifold of dimension 2 and the Guassian curvature on $M$ is bounded below by $1$, then for any smooth domain ${\mathbf O}mega$ in $M$, we have \begin{align} \label{consequenceofgromov} l^2(\partial {\mathbf O}mega) \ge vol({\mathbf O}mega) vol(M \setminus {\mathbf O}mega). \end{align}The equality holds if and only if $M$ is a unit sphere and ${\mathbf O}mega$ is a spherical cap (geodesic disk). \end{proposition} \begin{proof} Let $\beta=\frac{vol({\mathbf O}mega)}{vol(M)}$, then by L\'evy-Gromov isoperimetric inequality, we have \begin{align} \label{levygromov} \frac{l(\partial {\mathbf O}mega)}{vol(M)}\ge \frac{l(\partial B_r)}{vol(S^2)}, \end{align} where $S^2$ is the unit 2-sphere and $B_r$ is the geodesic ball in $S^2$ such that $\frac{vol(B_r)}{vol(S^2)}=\beta$. Hence $vol(B_r)=4\pi \beta$ and thus $l(\partial B_r)=4\pi \sqrt{\beta-\beta^2}$. Therefore, \eqref{levygromov} becomes \begin{align*} \frac{l^2(\partial {\mathbf O}mega)}{vol^2(M)}\ge \beta(1-\beta)=\frac{vol({\mathbf O}mega)}{vol(M)}\left(1-\frac{vol({\mathbf O}mega)}{vol(M)}\mathbb{R}ght). \end{align*}This implies \eqref{consequenceofgromov}. Since from the original proof of \eqref{levygromov}, it becomes equality for some $r>0$ if and only if $M$ is a sphere, we prove the equality case of \eqref{consequenceofgromov}. \end{proof} Now we prove Theorem \ref{genghm}. \begin{proof}[Proof of Theorem \ref{genghm}] Let $\lambda(t)=h(t)e^{-2t}$, hence \eqref{mc1''} implies $\lambda'(t) \le 0$. Set $Alpha(t)=\int_{\{u>t\}}\lambda(u)e^{2u}d\mu$ and $\beta(t)=\mu\left(\{u>t\}\mathbb{R}ght)$. Hence \begin{align*} \beta'(t)=-\int_{\{u=t\}}\frac{1}{|\nabla u|} ds \quad \mbox{and $Alpha'(t)=\lambda(t)e^{2t}\beta'(t)$.} \end{align*} We integrate \eqref{rubin2''} over $\{u>t\}$, and by divergence theorem we have \begin{align*} \int_{\{u=t\}}|\nabla u|ds +\beta \le Alpha. \end{align*} Hence \begin{align*} -\beta'(Alpha-\beta) \ge \left(\int_{\{u=t\}}|\nabla u|ds\mathbb{R}ght)\left(\int_{\{u=t\}}\frac{1}{|\nabla u|}ds\mathbb{R}ght) \ge l^2(\partial \{u>t\}), \end{align*}where $l$ is the length function on $M$. We multiply the above by $\lambda(t) e^{2t}$, and using $Alpha'=\lambda e^{2t} \beta'$ and applying \eqref{consequenceofgromov}, we have \begin{align*} -Alpha Alpha' +Alpha' \beta \ge& \lambda e^{2t} (\mu(M) \beta-\beta^2)\\ =&\lambda e^{2t}\mu(M) \beta-\frac{(\lambda e^{2t}\beta^2)'-\lambda'e^{2t}\beta^2-2\beta Alpha'}{2}. \end{align*}Hence \begin{align} \label{1cru} -2Alpha Alpha'\ge 2\mu(M) \lambda e^{2t}\beta -(\lambda e^{2t}\beta^2)'+\lambda'e^{2t} \beta^2. \end{align} Note that \begin{align*} 2\lambda e^{2t}\beta=(\lambda e^{2t}\beta)'-\lambda' e^{2t} \beta -Alpha', \end{align*}where we have again used that $Alpha'=\lambda e^{2t}\beta'$. Hence \eqref{1cru} becomes \begin{align} \label{2cru} -2Alpha Alpha' \ge \left(\lambda e^{2t}\beta (\mu(M)-\beta)\mathbb{R}ght)'-\mu(M)Alpha'+\lambda'e^{2t}(\beta^2-\mu(M)\beta). \end{align} Since $\lambda'\le 0$ and $\beta^2-\mu(M) \beta \le 0$, we have \begin{align} \label{jianhua} -2Alpha Alpha' \ge \left(\lambda e^{2t}\beta (\mu(M)-\beta)\mathbb{R}ght)'-\mu(M)Alpha'. 
\end{align} Then we integrate \eqref{jianhua} from $t=0$ to $\infty$; using the fact that $\lim_{t \rightarrow \infty}\alpha(t)=0$ and $\lim_{t \rightarrow \infty} \lambda e^{2t}\beta (\mu(M)-\beta) \le \mu(M) \lim_{t\rightarrow \infty}\alpha(t)=0$, we obtain \begin{align*} \mu(M) \alpha(0) -\alpha(0)^2 \le \lambda(0)\beta(0)\left(\mu(M)-\beta(0)\right). \end{align*} That is, \begin{align*} \mu(M)\int_{\Omega}h(u) d\mu-\left(\int_{\Omega} h(u) d\mu\right)^2 \le h(0)\left( \mu(M) \mu(\Omega)-\mu^2(\Omega)\right). \end{align*} In particular, if $h(0)\le 1$, then \begin{align*} \mu(M) \int_{\Omega}h(u) d\mu-\left(\int_{\Omega} h(u) d\mu\right)^2 \le \mu(M) h(0) \mu(\Omega)-h^2(0)\mu^2(\Omega). \end{align*} Hence \begin{align*} \mu(M) \left(\int_{\Omega} h(u) d\mu -h(0) \mu(\Omega)\right) \le \left(\int_{\Omega} h(u) d\mu -h(0) \mu(\Omega)\right) \left(\int_{\Omega} h(u) d\mu +h(0) \mu(\Omega)\right). \end{align*} Hence \begin{align*} \int_{\Omega}h(u) d\mu +h(0)\mu(\Omega) \ge \mu(M). \end{align*} The proof is complete. \end{proof}
Theorem \ref{genghm} also has its dual form:
\begin{theorem} \label{genghm'} Let $(M,g)$ be a closed Riemannian surface with $K \ge 1$, where $\mu$ is the measure of $(M,g)$ and $K$ is the Gaussian curvature. Let $\Omega$ be a domain with compact closure and nonempty boundary. If $u$ satisfies \begin{align} \label{rubin2'} \begin{cases} -\Delta_g u+1 \ge h(u), \quad u<0 \quad &\mbox{in $\Omega$}\\ u=0 \quad &\mbox{on $\partial \Omega$} \end{cases} \end{align} where $h(t)$ is a nonnegative function satisfying \eqref{mc1''}, i.e., \begin{align} \label{mc1-new} 0 \le h'(t) \le 2 h(t), \end{align} then \begin{align} \label{guijia1} \mu(M) \int_{\Omega}h(u) d\mu -\left(\int_{\Omega} h(u) d\mu\right)^2 \ge h(0) \mu(\Omega) \mu (M\setminus \Omega). \end{align} Moreover, if $h(0) \ge 1$, then \begin{align} \label{guijia2} \int_{\Omega}h(u) d\mu +h(0)\mu(\Omega) \ge \mu(M). \end{align} \end{theorem}
\begin{proof} We still let $\lambda(t)=h(t)e^{-2t}$, $\alpha(t)=\int_{\{u<t\}}\lambda(u)e^{2u}d\mu$ and $\beta(t)=\int_{\{u<t\}}d\mu$. Then $\beta'(t)=\int_{\{u=t\}}\frac{1}{|\nabla u|}\,ds$ and $\alpha'(t)=\lambda(t)e^{2t}\beta'(t)$. We integrate \eqref{rubin2'} over $\{u<t\}$; by the divergence theorem, we have \begin{align*} \int_{\{u=t\}}|\nabla u|\, ds -\beta \le -\alpha. \end{align*} Hence \begin{align*} \beta'(\beta-\alpha)\ge\left( \int_{\{u=t\}}|\nabla u|\,ds \right)\left(\int_{\{u=t\}}\frac{1}{|\nabla u|}\,ds\right)\ge l^2(\partial \{u<t\}). \end{align*} Multiplying the above by $\lambda(t)e^{2t}$, using $\alpha'(t)=\lambda(t)e^{2t}\beta'(t)$ and the isoperimetric inequality \eqref{consequenceofgromov}, we have \begin{align*} \alpha'(\beta-\alpha) \ge \lambda e^{2t} (\mu(M) \beta -\beta^2). \end{align*} Then, arguing similarly to the proof of Theorem \ref{genghm}, we still get \eqref{jianhua}, that is, \begin{align*} -2\alpha \alpha' \ge \left(\lambda e^{2t}\beta (\mu(M)-\beta)\right)'-\mu(M)\alpha'. \end{align*} Then we integrate the above inequality from $-\infty$ to $0$ and thus obtain \begin{align*} \mu(M) \alpha(0)-\alpha^2(0) \ge \lambda(0)\beta(0)\left(\mu(M)-\beta(0)\right).
\end{align*} That is, \begin{align*} \mu(M) \int_{\Omega}h(u) d\mu -\left(\int_{\Omega} h(u) d\mu\right)^2 \ge h(0)\left(\mu(M) \mu(\Omega)-\mu^2(\Omega)\right). \end{align*} If $h(0)\ge 1$, then \begin{align*} \mu(M) \int_{\Omega}h(u) d\mu-\left(\int_{\Omega} h(u) d\mu\right)^2 \ge \mu(M) h(0) \mu(\Omega)-h^2(0)\mu^2(\Omega). \end{align*} Hence \begin{align*} \mu(M) \left(\int_{\Omega} h(u) d\mu -h(0) \mu(\Omega)\right) \ge \left(\int_{\Omega} h(u) d\mu -h(0) \mu(\Omega)\right) \left(\int_{\Omega} h(u) d\mu +h(0) \mu(\Omega)\right). \end{align*} Since $u<0$ in $\Omega$, $h(u)\le h(0)$ and thus $\int_{\Omega} h(u)d\mu -h(0)\mu(\Omega)\le 0$. Therefore, \begin{align*} \int_{\Omega}h(u) d\mu +h(0)\mu(\Omega) \ge \mu(M). \end{align*} This completes the proof. \end{proof}
\begin{remark} If the Gaussian curvature $K$ of $(M, g)$ satisfies $a^2\le K\le 1$ for some positive constant $a<1$, and $u$ satisfies the conditions of Theorem \ref{genghm} with $h(0) \le a^2$, then the conclusions of Theorem \ref{genghm} still hold with $\mu, h(u), \lambda$ replaced by $\tilde \mu= a^2 \mu$, $\tilde h (u) := h(u)/a^2$, $\tilde \lambda=\lambda/a^2$ respectively, after applying Theorem \ref{genghm} with a proper scaling of the metric $g$ by $ag$. In particular, if $\lambda\le a^2$, then \eqref{guijia2''} becomes \begin{align} \label{guijia2-new} \int_{\Omega}e^{2u} d\mu +\mu(\Omega) \ge \frac{a^2\mu(M)}{\lambda}. \end{align} It is interesting to compare this with the lower bound obtained from \eqref{linshiruji} of Theorem \ref{ghm1.4}, where $\lambda$ is allowed to be in $(0,1]$. Note that $a^2 \mu(M) \le 4 \pi$ by the Gauss--Bonnet theorem. It turns out that under the same curvature conditions $a^2\le K\le 1$, \eqref{linshiruji} in Theorem \ref{ghm1.4} requires fewer constraints and gives a better lower bound. Nevertheless, Theorem \ref{genghm} gives a similar lower bound under a completely opposite curvature condition. A similar remark can also be made for Theorem \ref{genghm'}, the dual form of Theorem \ref{genghm}. \end{remark}
Next, we prove Theorem \ref{radialspherecovering}.
\begin{proof}[Proof of Theorem \ref{radialspherecovering}] First, let $u=u_2-u_1$; hence $u>0$ in $B_r$ and $u=0$ on $\partial B_r$. Note that for any $t>0$, $\{u>t\}$ is a radially symmetric domain. We claim that if $\omega$ is a radially symmetric domain, then \begin{align} \label{liantongargument} P^2(\omega) \ge A(\omega)(A_{\infty}-A(\omega)), \end{align} where $P(\omega)=\int_{\partial \omega}e^{u_1}\,ds$, $A(\omega)=\int_{\omega}e^{2u_1}dx$ and $A_{\infty}=\int_{\mathbb{R}^2}e^{2u_1}dx$. Indeed, if $\omega$ is connected, then $\omega$ is either a disk or an annulus centered at the origin. For the disk case, \eqref{liantongargument} is exactly the first inequality of \eqref{leibi}. If $\omega$ is an annulus, then $\omega=\omega_1 \setminus \omega_2$, where $\omega_1 \Supset \omega_2$ are two disks. Then by \eqref{leibi}, we have \begin{align*} P^2(\omega) - A(\omega)(A_{\infty}-A(\omega))\ge& P^2(\omega_1)+P^2(\omega_2)-\left(A(\omega_1)-A(\omega_2)\right)\left(A_{\infty}-A(\omega_1)+A(\omega_2)\right)\\ \ge&\sum_{i=1}^2A(\omega_i)(A_{\infty}-A(\omega_i))-A_{\infty}\left(A(\omega_1)-A(\omega_2)\right)+(A(\omega_1)-A(\omega_2))^2\\ =&2A_{\infty}A(\omega_2)-2A(\omega_1)A(\omega_2)\ge 0.
\end{align*} Hence we have proved \eqref{liantongargument} for connected radially symmetric domains. If $\omega$ has more than one component, write $\omega=\cup_i \omega_i$, where the $\omega_i$ are its disjoint connected components. Since each $\omega_i$ is connected and radially symmetric, it satisfies \eqref{liantongargument}. We have \begin{align*} P^2(\omega) - A(\omega)(A_{\infty}-A(\omega))\ge & \sum_i P^2(\omega_i)-\left(\sum_i A(\omega_i)\right)(A_{\infty}-\sum_iA(\omega_i))\\ \ge& \sum_iA(\omega_i)(A_{\infty}-A(\omega_i))-\left(\sum_i A(\omega_i)\right)(A_{\infty}-\sum_iA(\omega_i))\\ =&\left(\sum_i A(\omega_i)\right)^2-\sum_iA^2(\omega_i)\ge 0. \end{align*} Hence \eqref{liantongargument} holds for general radially symmetric domains. Since $u$ satisfies \eqref{rubin2''} for $g=e^{2u_1}\delta$, applying \eqref{liantongargument} to the radially symmetric domain $\{u>t\}$ instead of \eqref{consequenceofgromov} in the proof of Theorem \ref{genghm}, and proceeding with the same argument, we obtain \begin{align*} \int_{B_r}e^{2u}d\mu+\mu(B_r)\ge \mu(\mathbb{R}^2). \end{align*} Since $\mu=e^{2u_1}dx$ and $u_2=u_1+u$, the above inequality becomes \begin{align*} \int_{B_r}e^{2u_2} dx+\int_{B_r}e^{2u_1} dx\ge \int_{\mathbb{R}^2}e^{2u_1}dx. \end{align*} When equality holds, then by tracing the equality cases, in particular by Remark \ref{bushang}, we conclude that $(\mathbb{R}^2, e^{2u_1}\delta)$ is a punctured sphere. In this case, we have \begin{align*} \Delta u_2+e^{2u_2}= \Delta u_1+e^{2u_1}=0 \, \mbox{in $B_r$}, \quad u_2>u_1 \, \mbox{in $B_r$}, \quad u_2=u_1 \, \mbox{on $\partial B_r$.} \end{align*} Hence, as shown in \cite{GM}, $(B_r, e^{2u_1}\delta)$ and $(B_r,e^{2u_2}\delta)$ are two complementary spherical caps on the unit sphere. \end{proof}
The proof of Theorem \ref{radialspherecovering'} is similar, relying on Theorem \ref{genghm'} instead, so we omit it.
\section{Some unsolved problems and remarks}
In this section, we present some unsolved problems related to the results proved in this paper. Much further research can be done as a continuation of this paper. The following question concerns the conformal diameter corresponding to solutions of \eqref{scalliouville}.
\begin{question} Let $u$ be a solution to \eqref{scalliouville} and $g=e^{2u}\delta$, where $\delta$ is the Euclidean metric. Is it true that there exists a universal constant $C\ge 20$ such that \begin{align*} \pi \le diam_g(\mathbb{R}^2) \le C\pi? \end{align*} Moreover, is it true that $diam_g(\mathbb{R}^2)=\pi$ if and only if $u$ is radial up to translation, or $u$ is a $1$D solution to \eqref{scalliouville}? \end{question}
We remark that in our forthcoming paper \cite{EGLX}, we can prove that if $u$ has an upper bound, then under the metric $e^{2u}\delta$ the diameter of $\mathbb{R}^2$ equals $\pi$ if and only if $u$ is either radial about a point or one-dimensional. The following question concerns the conformal diameter corresponding to supersolutions of \eqref{scalliouville}.
\begin{question} On general supersolutions, let $u$ satisfy \eqref{ricci1} and $g=e^{2u}\delta$, where $\delta$ is the Euclidean metric. Suppose that $\int_{\mathbb{R}^2} e^{2u} dx <\infty$; is it true that $diam_g(\mathbb{R}^2) \le \pi$? Recall that in Proposition \ref{weakbounddiamter} we could only prove the $2\pi$ upper bound. \end{question}
The following question concerns the conformal volume estimate corresponding to supersolutions of \eqref{scalliouville}.
\begin{question} \label{volume4pibound} Let $u$ satisfy \eqref{ricci1} and $g=e^{2u}\delta$, where $\delta$ is the Euclidean metric.
Suppose that $\int_{\mathbb{R}^2} e^{2u} dx <\infty$; is it then true that \begin{align} \label{buzifa} \int_{\mathbb{R}^2} e^{2u}dx \le 4\pi? \end{align} \end{question}
Note that if the completion of $(\mathbb{R}^2, e^{2u}\delta)$ is a closed surface, then by the Gauss--Bonnet theorem \eqref{buzifa} is true. Let $K(x)$ be the Gaussian curvature at $x$; then a necessary condition for the completion to be a closed surface is that \begin{align} \label{totalgausscurvure} \int_{\mathbb{R}^2}K(x)e^{2u(x)}\,dx=4\pi, \end{align} where $K=-e^{-2u}\Delta u$. However, this generally fails even for radial supersolutions to \eqref{scalliouville}, even if the metric can be smoothly extended at $\infty$, as we shall see in the next example:
\begin{example} \label{noncomplete} Let $u$ be the radial function $u=-r^2$; then $(\mathbb{R}^2, e^{2u}\delta)$ is a Riemannian manifold with Gaussian curvature $K=-e^{-2u}\Delta u=4e^{2r^2} \ge 1$. Using the conformal change of variable $z\mapsto \frac{1}{z}$, the metric at $\infty$ is equivalent to $h(r):=e^{2u(1/r)}\frac{1}{r^4}\,|dz|^2$ at $0$. Clearly, $h(r)$ is smooth at $0$, but \begin{align*} \int_{\mathbb{R}^2} K\, dvol_g=\int_{\mathbb{R}^2}(-\Delta u)\,dx=\infty \ne 4\pi. \end{align*} \end{example}
From the example above, we can see that generally one cannot directly apply geometric results on closed Riemannian manifolds to study supersolutions to the Liouville equation, even in the radial case. Recall that our proof of \eqref{buzifa} is based on the geometric inequality \eqref{leibi}. So far we do not know the answer to Question \ref{volume4pibound} except in the radial case; however, if the inequality \eqref{ricci1} goes in the other direction, we do have the following counterpart in the general case:
\begin{proposition} \label{Kle1volumeestimate} If \begin{align} \label{Kle1liouvile} \Delta u+e^{2u}\ge 0, \end{align} then \begin{align} \label{inversedirection} \int_{\mathbb{R}^2} e^{2u} dx \ge 4\pi. \end{align} \end{proposition}
The proof of the proposition follows exactly the proof of \cite[Lemma 1.1]{CL91}. For the reader's convenience, we include it here.
\begin{proof} It suffices to prove \eqref{inversedirection} assuming $\int_{\mathbb{R}^2} e^{2u} dx<\infty$. Let $\Omega_t=\{u>t\}$ be the superlevel set of $u$. Since $|\Omega_t|<\infty$ by Chebyshev's inequality, we have \begin{align} \int_{\Omega_t}e^{2u} dx \ge -\int_{\Omega_t}\Delta u\, dx=\int_{\partial \Omega_t}|\nabla u|\, ds \end{align} and \begin{align} \label{jing} -\frac{d}{dt}|\Omega_t|=\int_{\partial \Omega_t} \frac{1}{|\nabla u|}\,ds. \end{align} Using \begin{align*} \left(\int_{\partial \Omega_t}\frac{1}{|\nabla u|}\,ds\right) \left(\int_{\partial \Omega_t} |\nabla u|\, ds \right) \ge |\partial \Omega_t|^2 \ge 4\pi|\Omega_t|, \end{align*} we have \begin{align*} -\frac{d}{dt}|\Omega_t| \int_{\Omega_t}e^{2u} dx \ge 4\pi |\Omega_t|. \end{align*} Hence \begin{align*} \frac{d}{dt}\left( \int_{\Omega_t} e^{2u} dx \right)^2=2e^{2t} \left(\frac{d}{dt}|\Omega_t| \right)\int_{\Omega_t} e^{2u}dx \le -8\pi e^{2t} |\Omega_t|. \end{align*} Integrating from $-\infty$ to $\infty$, and since \begin{align*} \int_{-\infty}^{\infty}2e^{2t}|\Omega_t|dt=\int_{\mathbb{R}^2}e^{2u} dx, \end{align*} we obtain \eqref{inversedirection}.
\end{proof}
\begin{question} On radial supersolutions, recall that we have applied several geometric arguments to prove the inequalities in Theorem \ref{maintheoremonradialcase}. Can we give a proof from a purely analytic point of view? \end{question}
The following question is on the generalization of Proposition \ref{xulie} to the case $M=(\mathbb{R}^2, e^{2u}\delta)$ such that $u$ is a supersolution to \eqref{scalliouville} and $vol_g(M)<\infty$.
\begin{question} If $u$ satisfies \eqref{ricci1} and $\int_{\mathbb{R}^2}e^{2u} dx <\infty$, then for any smooth domain $\Omega \subset \mathbb{R}^2$, is it true that \begin{align} \label{conformalversionoflevygromov} \left(\int_{\partial \Omega} e^{u} ds\right)^2 \ge \left(\int_{\Omega} e^{2u} dx\right) \left(\int_{\mathbb{R}^2\setminus \Omega} e^{2u} dx\right)? \end{align} \end{question}
We note that \eqref{conformalversionoflevygromov} already implies \eqref{buzifa}. Also, if the answer to this question is positive, then by an argument similar to the proof of Theorem \ref{radialspherecovering}, we can extend Theorem \ref{radialspherecovering} to the case where $u_1, u_2$ are not necessarily radial and $\Omega$ is a general smooth domain.
\vskip0.5in
$\mathbf{Acknowledgement}$ This research is partially supported by NSF grants DMS-1601885 and DMS-1901914 and Simons Foundation Award 617072.
\end{document}
\begin{document} \title{Some desultory remarks concerning algebraic cycles and Calabi--Yau threefolds} \author{Robert Laterveer} \institute{CNRS - IRMA, Universit\'e de Strasbourg \at 7 rue Ren\'e Descartes \\ 67084 Strasbourg cedex\\ France\\ \email{[email protected]} } \date{Received: date / Accepted: date} \maketitle \begin{abstract} We study some conjectures about Chow groups of varieties of geometric genus one. Some examples are given of Calabi--Yau threefolds where these conjectures can be verified, using the theory of finite--dimensional motives. \end{abstract} \keywords{Algebraic cycles \and Chow groups \and motives \and finite--dimensional motives \and Calabi--Yau threefolds} \subclass{Primary 14C15, 14C25, 14C30, 14J32. Secondary 14F42, 19E15.} \section{Introduction} This note is about some specific questions concerning Chow groups $A^\ast X$ of complex varieties. In this field, the following relative version of Bloch's conjecture occupies a central position: \begin{conjecture}[Bloch \cite{B}]\label{relbloch} Let $X$ be a smooth projective variety over $\mathbb{C}$ of dimension $n$. Let $\Gamma\in A^n(X\times X)$ be a correspondence such that \[ \Gamma_\ast=\hbox{id}\colon\ \ H^{j,0}(X)\ \to\ H^{j,0}(X)\ \ \hbox{for\ all\ } j=2,\ldots, n.\] Then \[ \Gamma_\ast=\hbox{id}\colon\ \ A^n_{AJ}(X)\ \to\ A^n_{AJ}(X)\ .\] (Here $A^n_{AJ}(X)$ denotes the subgroup of $0$--cycles in the kernel of the Albanese map.) \end{conjecture} This conjecture is open in most interesting cases. A second conjecture adressed in this note is specific to varieties with $p_g=1$: \begin{conjecture}\label{indecomp} Let $X$ be a smooth projective variety over $\mathbb{C}$ of dimension $n$, and $p_g(X)=1$. Then there exists a ``transcendental motive'' $t(X)\in\mathcal M_{\rm rat}$, responsible for $H^{n,0}(X)$, which is indecomposable: any submotive of $t(X)$ is either $0$ or $t(X)$. \end{conjecture} This is motivated by results of Voisin \cite{V8} and Pedrini \cite{Ped}, who prove that for certain $K3$ surfaces, the transcendental motive $t_2(X)$ (in the sense of \cite{KMP}) is indecomposable. A third conjecture adressed in this note is the following conjecture made by Voisin concerning self--products of varieties of geometric genus one. (For simplicity, we only state the conjecture in the case of odd dimension.) \begin{conjecture}[Voisin \cite{V9}]\label{conjvois} Let $X$ be a smooth projective variety of odd dimension $n$, with $p_g(X)=1$ and $h^{j,0}(X)=0$ for $0<j<n$. For any $k\ge 2$, let the symmetric group $S_k$ act on $X^k$ by permutation of the factors. Let $pr_k\colon X^k\to X^{k-1}$ denote the projection obtained by omitting one of the factors. Then the induced map \[ (pr_k)_\ast\colon\ \ A_j(X^k)^{S_k}\ \to\ A_j(X^{k-1})\] is injective for $j\le k-2$. \end{conjecture} In this note, using elementary arguments, some examples are given of Calabi--Yau threefolds where these conjectures are verified. The following is a sample of this (slightly more general statements can be found below): \begin{nonumbering}[=Theorems \ref{main2}, \ref{main3} and \ref{main1}] Let $X$ be a Calabi--Yau threefold which is rationally dominated by a product of elliptic curves. Then conjecture \ref{indecomp} and a weak form of conjecture \ref{relbloch} are true for $X$. If in addition $h^{2,1}(X)=0$, then conjecture \ref{conjvois} is true for $X$. \end{nonumbering} One example where this applies is Beauville's threefold \cite{Beau0}; other examples are given below. 
The main tool used in this note is the theory of finite--dimensional motives of Kimura and O'Sullivan \cite{Kim}. \vskip0.4cm \begin{convention} All varieties will be projective irreducible varieties over $\mathbb{C}$. For smooth $X$, we will denote by $A^j(X)$ the Chow group $CH^j(X)\otimes{\mathbb{Q}}$ of codimension $j$ cycles under rational equivalence. The notations $A^j_{hom}(X)$ and $A^j_{AJ}(X)$ will denote the subgroup of homologically trivial and Abel--Jacobi trivial cycles respectively. $\mathcal M_{\rm rat}$ will denote the (contravariant) category of Chow motives with $\mathbb{Q}$--coefficients over $\mathbb{C}$. For a smooth projective variety over $\mathbb{C}$, $h(X)=(X,\mathbb{D}elta_X,0)$ will denote its motive in $\mathcal M_{\rm rat}$. $H^\ast(X)$ will denote singular cohomology with $\mathbb{Q}$--coefficients. \end{convention} \section{Some Calabi--Yau threefolds} \label{examples} This section presents some examples of Calabi--Yau threefolds to which our arguments apply. \begin{definition}[Calabi--Yau]\label{CY} In this note, a smooth projective variety $X$ of dimension $3$ is called a {\em Calabi--Yau threefold\/} if $h^{3,0}(X)=1$ and $h^{1,0}(X)=h^{2,0}(X)=0$. \end{definition} \begin{remark} Definition \ref{CY} is non--standard; usually, one requires that the canonical bundle is trivial. For the purposes of the present note, however, definition \ref{CY} suffices. \end{remark} \subsection{Rigid examples} \label{rigid} \begin{example}[Beauville \cite{Beau0}, Strominger--Witten \cite{SW}] Let $E$ be the Fermat elliptic curve, and let $\varphi\colon E\to E$ be the automorphism given by $(x,y,z)\mapsto (x,y, \zeta z)$, where $\zeta$ is a primitive third root of unity. Let \[ \varphi_3=\varphi\times \varphi\times \varphi\colon\ \ E^3\ \to\ E^3\] be the automorphism acting as $\varphi$ on each factor. Let $\widetilde{E^3}\to E^3$ denote the blow--up of the 27 fixed points of $\varphi^3$, and let \[ \widetilde{\varphi_3}\colon\ \ \widetilde{E^3}\ \to\ \widetilde{E^3}\] denote the automorphism induced by $\varphi^3$. The quotient \[ Z:= \widetilde{E^3}/ \widetilde{\varphi_3}\] is a smooth Calabi--Yau threefold, which is rigid (i.e. $h^{2,1}(Z)=0$). \end{example} \begin{remark} The threefold $Z$ is relevant in string theory. Indeed, as explained in the nice article \cite{FG} (where $Z$ is studied in great detail), the rigidity of $Z$ posed a conundrum to physicists: the mirror of $Z$ cannot be a projective threefold ! This is discussed in \cite{CHSW}, and led to the subsequent development of a theory of generalized mirror symmetry \cite{CDP}. \end{remark} \begin{example}[\cite{GG2}, \cite{R}] The group $G=(\mathbb{Z}_3)^2=\langle\zeta\times\zeta\times\zeta, \zeta\times\zeta^2\times 1\rangle$ acts on $E^3$, and there exists a desingularization \[ Z_2\ \to\ E^3/G\] which is Calabi--Yau. The variety $Z_2$ is rigid \cite{R}. 
\end{example} \subsection{More (not necessarily rigid) examples} \begin{example}[Oguiso--Sakurai \cite{OS}] The varieties $X_{3,1}$ and $X_{3,2}$ constructed in \cite[Theorem 3.4]{OS} are Calabi--Yau threefolds, obtained as crepant resolutions of quotients $E^3/G$, where $E$ is an elliptic curve and $G\subset\hbox{Aut}(E^3)$ a certain group.\footnote{The definition of Calabi--Yau variety in \cite{OS} is different from ours, as it is not required that $h^{2,0}=0$; however (as noted in \cite[Section 4.1]{FG}, the varieties $X_{3,1}$ and $X_{3,2}$ do have $h^{2,0}=0$.} \end{example} \begin{example}[Borcea--Voisin] Let $S$ be a $K3$ surface admitting a non--symplectic involution $\alpha$ which fixes $k= 10$ rational curves. Let $E$ be an elliptic curve, and let $\iota\colon E\to E$ be the involution $z\mapsto -z$. There exists a desingularization \[ X\ \to\ (S\times E)/(\alpha\times\iota)\ \] which is Calabi--Yau; it has $h^{2,1}(X)=11-k$ \cite{V20}, \cite{Bo}. To be sure, the Borcea--Voisin construction exists more generally for any $k\le 10$ \cite{V20}, \cite{Bo}; in this note, however, we only consider the extremal case $k=10$. In this easy case of the Borcea--Voisin construction, the $K3$ surface $S$ is rationally dominated by a product of elliptic curves. Also (as explained in \cite[2.4]{GG}), $X$ is birational to a double cover of $\mathbb{P}^3$ branched along $8$ planes. \end{example} \begin{remark}\label{more} In \cite{CG}, the Borcea--Voisin construction is generalized, to include quotients of higher--order automorphisms of $S\times E$. In some cases, e.g. \cite[Table 2 lines 18 and 19]{CG}, the resulting Calabi--Yau threefold is rationally dominated by curves, and is rigid (cf. \cite[Remarks 6.3 and 6.5]{CG}). \end{remark} \section{Preliminaries} \subsection{Standard conjecture $B(X)$} Let $X$ be a smooth projective variety of dimension $n$, and $h\in H^2(X,\mathbb{Q})$ the class of an ample line bundle. The hard Lefschetz theorem asserts that the map \[ L^{n-i}\colon H^i(X,\mathbb{Q})\to H^{2n-i}(X,\mathbb{Q})\] obtained by cupping with $h^{n-i}$ is an isomorphism, for any $i< n$. One of the standard conjectures asserts that the inverse isomorphism is algebraic: \begin{definition} Given a variety $X$, we say that $B(X)$ holds if for all ample $h$, and all $i<n$ the isomorphism \[ (L^{n-i})^{-1}\colon H^{2n-i}(X,\mathbb{Q})\stackrel{\cong}{\rightarrow} H^i(X,\mathbb{Q})\] is induced by a correspondence. \end{definition} \begin{remark} It is known that $B(X)$ holds for the following varieties: curves, surfaces, abelian varieties \cite{K0}, \cite{K}, threefolds not of general type \cite{Tan}, hyperk\"ahler varieties of $K3^{[n]}$--type \cite{ChM}, $n$--dimensional varieties $X$ which have $A_i(X)_{}$ supported on a subvariety of dimension $i+2$ for all $i\le{n-3\over 2}$ \cite[Theorem 7.1]{V}, $n$--dimensional varieties $X$ which have $H_i(X)=N^{\llcorner {i\over 2}\lrcorner}H_i(X)$ for all $i>n$ \cite[Theorem 4.2]{V2}, products and hyperplane sections of any of these \cite{K0}, \cite{K}. For smooth projective varieties over $\mathbb{C}$, the standard conjecture $B(X)$ implies the standard conjecture $D(X)$, i.e homological and numerical equivalence coincide on $X$ and $X\times X$ \cite{K0}, \cite{K}. \end{remark} \subsection{Coniveau and niveau filtration} \begin{definition}[Coniveau filtration \cite{BO}]\label{con} Let $X$ be a quasi--projective variety. 
The {\em coniveau filtration\/} on cohomology and on homology is defined as \[\begin{split} N^c H^i(X,\mathbb{Q})&= \sum \operatorname{i}a\bigl( H^i_Y(X,\mathbb{Q})\to H^i(X,\mathbb{Q})\bigr)\ ;\\ N^c H_i(X,\mathbb{Q})&=\sum \operatorname{i}a \bigl( H_i(Z,\mathbb{Q})\to H_i(X,\mathbb{Q})\bigr)\ ,\\ \end{split}\] where $Y$ runs over codimension $\ge c$ subvarieties of $X$, and $Z$ over dimension $\le i-c$ subvarieties. \end{definition} Vial introduced the following variant of the coniveau filtration: \begin{definition}[Niveau filtration \cite{V4}] Let $X$ be a smooth projective variety. The {\em niveau filtration} on homology is defined as \[ \widetilde{N}^j H_i(X)=\sum_{\Gamma\in A_{i-j}(Z\times X)_{}} \operatorname{i}a\bigl( H_{i-2j}(Z)\to H_i(X)\bigr)\ ,\] where the union runs over all smooth projective varieties $Z$ of dimension $i-2j$, and all correspondences $\Gamma\in A_{i-j}(Z\times X)_{}$. The niveau filtration on cohomology is defined as \[ \widetilde{N}^c H^iX:= \widetilde{N}^{c-i+n} H_{2n-i}X\ .\] \end{definition} \begin{remark}\label{is} The niveau filtration is included in the coniveau filtration: \[ \widetilde{N}^j H^i(X)\subset N^j H^i(X)\ .\] These two filtrations are expected to coincide; indeed, Vial shows this is true if and only if the Lefschetz standard conjecture is true for all varieties \cite[Proposition 1.1]{V4}. Using the truth of the Lefschetz standard conjecture in degree $\le 1$, it can be checked \cite[page 6 "Properties"]{V4} that the two filtrations coincide in a certain range: \[ \widetilde{N}^j H^i(X)= N^j H^iX\ \ \ \hbox{for\ all\ }j\ge {i-1\over 2} \ .\] \end{remark} \subsection{Finite--dimensional motives} We refer to \cite{Kim}, \cite{An}, \cite{J4}, \cite{MNP} for basics on finite--dimensional motives. A crucial property is the nilpotence theorem, which allows to lift relations between cycles from homological to rational equivalence: \begin{theorem}[Kimura \cite{Kim}]\label{nilp} Let $X$ be a smooth projective variety of dimension $n$ with finite--dimensional motive. Let $\Gamma\in A^n(X\times X)_{}$ be a correspondence which is numerically trivial. Then there is $N\in\mathbb{N}$ such that \[ \Gamma^{\circ N}=0\ \ \ \ \in A^n(X\times X)_{}\ .\] \end{theorem} Conjecturally, any variety has finite--dimensional motive \cite{Kim}. We are still far from knowing this, but at least there are quite a few non--trivial examples: \begin{remark} The following varieties have finite--dimensional motive: abelian varieties, varieties dominated by products of curves \cite{Kim}, $K3$ surfaces with Picard number $19$ or $20$ \cite{P}, surfaces not of general type with $p_g=0$ \cite[Theorem 2.11]{GP}, many examples of surfaces of general type with $p_g=0$ \cite{PW}, \cite{V8}, generalized Kummer varieties \cite[Remark 2.9(\romannumeral2)]{Xu}, 3--folds and 4--folds with nef tangent bundle \cite{Iy}, \cite{Iy2}, varieties of dimension $\le 3$ rationally dominated by products of curves \cite[Example 3.15]{V3}, varieties $X$ with $A^i_{AJ}X_{}=0$ for all $i$ \cite[Theorem 4]{V2} (in particular, Fano 3--folds \cite{GoGu}), products of varieties with finite--dimensional motive \cite{Kim}. \end{remark} \begin{remark} It is worth pointing out that up till now, all examples of finite-dimensional motives happen to be in the tensor subcategory generated by Chow motives of curves. On the other hand, ``many'' motives are known to lie outside this subcategory, e.g. the motive of a general hypersurface in $\mathbb{P}^3$ \cite[Remark 2.34]{Ay}. 
\end{remark} \section{Bloch conjecture for some Calabi--Yau threefolds} \begin{definition} Let $X$ be a Calabi--Yau threefold. A correspondence $\Gamma\in A^3(X\times X)$ is called {\em symplectic\/} if \[ \Gamma_\ast=\hbox{id}\colon\ \ H^{0,3}(X)\ \to\ H^{0,3}(X)\ .\] \end{definition} \begin{theorem}\label{main1} Let $X$ be a Calabi--Yau threefold. Assume moreover \noindent {(\romannumeral1)} $X$ has finite--dimensional motive; \noindent (\romannumeral2) $B(X)$ is true; \noindent (\romannumeral3) the generalized Hodge conjecture is true for $H^3(X)$. Let $\Gamma\in A^3(X\times X)$ be a symplectic correspondence. Then \[ \Gamma_\ast\colon\ \ A^3_{hom}X\ \to\ A^3_{hom}X\ \] is an isomorphism. \end{theorem} \begin{remark} In case $X$ is not of general type (i.e., if we adhere to the usual definition of Calabi--Yau varieties), hypothesis (\romannumeral2) is always fulfilled \cite{Tan}. \end{remark} \begin{proof} Hypotheses (\romannumeral1) and (\romannumeral2) ensure the existence of a refined Chow--K\"unneth decomposition $\Pi_{i,j}$ as in \cite{V4}. There is a splitting \[ H^3(X)= H^3_{\rm tr}(X)\oplus \widetilde{N}^1 H^3(X)\ ,\] where the ``transcendental cohomology'' $H^3_{\rm tr}(X)$ is defined as \[ H^3_{\rm tr}(X):=(\Pi_{3,0})_\ast H^3(X)\ \subset\ H^3(X)\ .\] Hypothesis (\romannumeral3) implies that \[ \bigl((\Gamma-\mathbb{D}elta)\circ \Pi_{3,0}\bigr)_\ast H^3(X)=0\ ,\] in view of lemma \ref{trans} below. This means that \[ \Gamma -\mathbb{D}elta = (\Gamma-\mathbb{D}elta)\circ ({\displaystyle \sum_{(i,j)\not=(0,3)} \Pi_{i,j}})\ \ \hbox{in\ } H^6(X\times X)\ .\] By construction of the $\Pi_{i,j}$, this implies \[ \Gamma-\mathbb{D}elta= R_0+R_1+R_2\ \ \hbox{in\ } H^6(X\times X)\ ,\] where $R_0, R_1, R_2$ are cycles supported on $(\hbox{point})\times X$, resp. on $(\hbox{divisor})\times (\hbox{divisor})$, resp. on $X\times(\hbox{point})$. That is, the cycle \[ \Gamma-\mathbb{D}elta-R_0-R_1-R_2\ \ \in A^3(X\times X)\] is homologically trivial. Applying the nilpotence theorem, and noting that the $R_\ell$ do not act on $A^3_{hom}X$, it follows that there exists $N\in\mathbb{N}$ such that \[ (\Gamma^{\circ N})_\ast=\hbox{id}\colon\ \ A^3_{hom}X\ \to\ A^3_{hom}X\ .\] In particular, \[ \Gamma_\ast\colon\ \ A^3_{hom}X\ \to\ A^3_{hom}X\ \] is injective and surjective. \begin{lemma}\label{trans} Let $X$ be a Calabi--Yau threefold, and assume the generalized Hodge conjecture is true for $H^3(X)$. Let $\Gamma \in A^3(X\times X)$ be a symplectic correspondence. Then \[ \Gamma_\ast=\hbox{id}\colon\ \ H^3_{\rm tr}(X)\ \to\ H^3_{\rm tr}(X)\ .\] \end{lemma} \begin{proof} The intersection pairing on $H^3(X)$ respects the decomposition \[ H^3(X)= H^3_{\rm tr}(X)\oplus \widetilde{N}^1 H^3(X)\ ,\] i.e. restriction induces a non--degenerate pairing \[ \begin{split} H^3_{\rm tr}(X)\otimes H^3_{\rm tr}(X) \ &\to\ H^6(X)\ ,\\ \end{split}\] and hence $H^3_{\rm tr}(X)$ and $\widetilde{N}^1 H^3(X)$ are orthogonal with respect to the intersection pairing. Let $\omega\in H^{0,3}(X)$ be a generator. By the truth of the generalized Hodge conjecture and remark \ref{is}, we have \[ \widetilde{N}^1 H^3(X)=N^1 H^3(X)=\bigl\{ a\in H^3(X)\ \vert\ a_{\mathbb{C}} \cdot \omega=0\bigr\}\ . 
\] Let $K\subset H^3(X)$ denote the kernel \[K:= \ker \Bigl( (\Gamma-\mathbb{D}elta)_\ast\colon H^3(X)\ \to\ H^3(X)\Bigr)\ .\] Since the correspondence $\Gamma$ is symplectic, we have (by definition) \[ H^{0,3}(X)\ \subset\ K_{\mathbb{C}}:=\ker \Bigl( (\Gamma-\mathbb{D}elta)_\ast\colon H^3(X,\mathbb{C})\ \to\ H^3(X,\mathbb{C})\Bigr)\ .\] But then, \[ K^\perp \ \subset\ \bigl\{ a\in H^3(X)\ \vert\ a_{\mathbb{C}} \cdot \omega=0\bigr\}=\widetilde{N}^1 H^3(X)\ \] (here ${}^\perp$ denotes the orthogonal complement with respect to the intersection pairing on $H^3(X)$). This implies \[ K\ \supset\ \widetilde{N}^1 H^3(X)^\perp=H^3_{\rm tr}(X)\ .\] \end{proof} \end{proof} \begin{remark} Lemma \ref{trans} is inspired by the analogous result for $K3$ surfaces, which can be found in \cite[Proof of Corollary 3.11]{V8} or \cite[Lemma 2.5]{Ped}. \end{remark} \begin{remark} As for examples which satisfy the hypotheses of theorem \ref{main1}, all the examples of section \ref{examples} will do. Indeed, all examples in section \ref{examples} are rationally dominated by products of elliptic curves. As such, they have finite--dimensional motive and $B(X)$ is true. The generalized Hodge conjecture is true for products of elliptic curves \cite[Theorem 6.1]{Ab} (NB: for products of Fermat curves, it suffices to refer to \cite{Sh}); any blow--up of $E_1\times E_2\times E_3$ still satisfies the generalized Hodge conjecture in degree $3$, hence so do the Calabi--Yau varieties of section \ref{examples}, as they are dominated by such a blow--up. \end{remark} \section{Indecomposability} \begin{definition} Let $X$ be a smooth projective variety of dimension $n\le 5$. Assume $B(X)$ holds and $X$ has finite--dimensional motive. Then we define the ``transcendental motive'' $t(X)$ as \[ t(X):=(X,\Pi_{n,0},0)\ \ \in \mathcal M_{\rm rat}\ ,\] where $\Pi_{n,0}$ is the refined Chow--K\"unneth projector constructed by Vial \cite[Theorem 2]{V4}. \end{definition} \begin{remark} The fact that $t(X)$ is well--defined up to isomorphism follows from \cite[Theorem 7.7.3]{KMP} and \cite[Proposition 1.8]{V4}. In case $n=2$, $t(X)$ coincides with the ``transcendental part'' $t_2(X)$ constructed for any surface in \cite{KMP}. \end{remark} \begin{theorem}\label{main2} Let $X$ be a Calabi--Yau threefold. Assume moreover \noindent {(\romannumeral1)} $X$ has finite--dimensional motive; \noindent (\romannumeral2) $B(X)$ is true; \noindent (\romannumeral3) the generalized Hodge conjecture is true for $H^3(X)$. Then $t(X)$ is indecomposable: any non--zero submotive of $t(X)$ coincides with $t(X)$. \end{theorem} \begin{proof} Suppose $V=(X,v,0)\subset t(X)$ is a submotive which is not the whole motive $t(X)$. Then in particular, \[ H^3(V)\ \subsetneqq\ H^3(t(X))=H^3_{\rm tr}(X)\ .\] (Indeed, suppose we have equality. Then $V=t(X)$ in $\mathcal M_{\rm hom}$, and using finite--dimensionality this implies $V=t(X)$ in $\mathcal M_{\rm rat}$, contradiction.) But $H^3_{\rm tr}(X)$ does not have non--trivial sub-Hodge structures: indeed, suppose $H^3(V,\mathbb{C})$ contains $H^{3,0}(X)$. Then \[ (v-c\mathbb{D}elta)_\ast H^{3,0}(X)=0\ ,\] for some non--zero $c\in \mathbb{Q}$. But then, as in the proof of lemma \ref{trans}, \[ \Bigl(\ker\bigl( (v-c\mathbb{D}elta)\vert_{H^3}\bigr)\Bigr)^\perp\ \subset\ \widetilde{N}^1 H^3(X)\ ,\] whence \[ \ker\bigl( (v-c\mathbb{D}elta)\vert_{H^3}\bigr)\ \supset\ \widetilde{N}^1 H^3(X)^\perp= H^3_{\rm tr}(X)\ ;\] this is absurd as it contradicts the fact that $H^3(V)\not=H^3_{\rm tr}(X)$. 
Suppose next that $H^3(V,\mathbb{C})$ does {\em not\/} contain $H^{3,0}(X)$, i.e. $v_\ast H^{3,0}(X)=0$. Then, again as in the proof of lemma \ref{trans}, we find that \[ \Bigl( \ker\bigl( v\vert_{H^3}\bigr)\Bigr)^\perp\ \subset\ \widetilde{N}^1 H^3(X)\ ,\] whence \[ \ker\bigl( v\vert_{H^3}\bigr)\ \supset\ \widetilde{N}^1 H^3(X)^\perp= H^3_{\rm tr}(X)\ ;\] it follows that $H^\ast(V)=0$ and so (using finite--dimensionality) $V=0$ in $\mathcal M_{\rm rat}$. \end{proof} \begin{corollary}\label{aut} Let $X$ be as in theorem \ref{main2}. Let $G\subset\hbox{Aut}(X)$ be a finite group of finite order automorphisms. \noindent (\romannumeral1) If $g\in G$ is symplectic, then \[ A^3_{hom}(X)=A^3_{hom}(Y)\ ,\] where $Y$ denotes a resolution of singularities of the quotient $X/G$. \noindent (\romannumeral2) If $g\in G$ is not symplectic, then \[ A^3_{hom}(Y)=0\ .\] \end{corollary} \begin{proof} \noindent (\romannumeral1) After blowing up $X$ (which doesn't change $A^3$), we may assume the rational map $p\colon X\to Y$ is a morphism, i.e. $Y=X/G$. The morphism $p$ induces a map of motives \[ p\colon\ \ t(X)\ \to\ t(Y)\ \ \hbox{in}\ \mathcal M_{\rm rat}\ .\] Since \[ p_\ast p^\ast=s\cdot\hbox{id}\colon\ \ A^3_{hom}(Y)\ \to\ A^3_{hom}(Y)\ \] (where $s$ is the number of elements of $G$), this map of motives has a right--inverse (given by ${1/s}$ times the transpose of the graph of $p$). By general properties of pseudo--abelian categories, this means \cite[Remark 1.7]{Sch} that $t(Y)$ is (non--canonically) a direct summand of $t(X)$, i.e. we can write \[ t(X)=T_0\oplus T_1\ \ \hbox{in}\ \mathcal M_{\rm rat}\ ,\] such that $p$ induces an isomorphism $T_0\cong t(Y)$. The motive $T_0$ cannot be $0$ (if it were $0$, then a fortiori $t(Y)\in\mathcal M_{\rm hom}$ would be $0$ and hence $H^{3,0}(X)=H^{3,0}(Y)=0$, which is absurd). Applying theorem \ref{main2}, it follows that $T_0=t(X)$ and so \[ p\colon\ \ t(X)\ \xrightarrow{\cong}\ t(Y)\ \ \hbox{in}\ \mathcal M_{\rm rat}\ .\] \noindent (\romannumeral2) As in the proof of (\romannumeral1), we have a splitting \[ t(X)=T_0\oplus T_1\ \ \hbox{in}\ \mathcal M_{\rm rat}\ ,\] such that $p$ restricts to an isomorphism $T_0\cong t(Y)$. The motive $T_0$ cannot be all of $t(X)$ (if it were, then also $p\colon t(X)\cong t(Y)$ in $\mathcal M_{\rm hom}$ and hence $H^{3,0}(X)\cong H^{3,0}(Y)$. But this is absurd, for the projector ${1\over s}\sum_{g\in G} \Gamma_g$ acts as $0$ on $H^{3,0}(X)$). It follows that $T_0=0$ and so \[ t(Y)=0\ \ \hbox{in}\ \mathcal M_{\rm rat}\ .\] \end{proof} \section{Voisin's conjecture} \begin{conjecture}[Voisin \cite{V9}]\label{voisin} Let $X$ be a Calabi--Yau threefold. For any $k\ge 2$, let the symmetric group $S_k$ act on $X^k$ by permutation of the factors. Let $pr_k\colon X^k\to X^{k-1}$ denote the projection obtained by omitting one of the factors. The induced map \[ (pr_k)_\ast\colon\ \ A_j(X^k)^{S_k}\ \to\ A_j(X^{k-1})\] is injective for $j\le k-2$. \end{conjecture} \begin{remark} Suppose $X$ has a Chow--K\"unneth decomposition $h(X)=\sum_i h^i(X)$ in $\mathcal M_{\rm rat}$. Then conjecture \ref{voisin} is equivalent to the following: for any $k\ge 2$, the Chow motive $\hbox{Sym}^k h^3(X)$ satisfies \[ A_j \bigl( \hbox{Sym}^k h^3(X)\bigr)=0\ \ \hbox{for\ all\ }j\le k-2\ .\] In case $k=2$, conjecture \ref{voisin} predicts the following concrete statement about $0$--cycles: let $a,a^\prime\in A^3_{hom}(X)$ be two $0$--cycles of degree $0$. 
Then \[ a\times a^\prime= - a^\prime\times a\ \ \hbox{in}\ A^6(X\times X)\ .\] \end{remark} \begin{remark} A conjecture similar to conjecture \ref{voisin} can be formulated for varieties of geometric genus $1$ in any dimension. We refer to \cite{V9} and \cite[Conjecture 4.37 and Example 4.40]{Vo} for precise statements, and verifications in certain cases. \end{remark} \begin{theorem}\label{main3} Let $X$ be a Calabi--Yau threefold. Assume moreover \noindent (\romannumeral1) $X$ has finite--dimensional motive; \noindent (\romannumeral2) $B(X)$ is true; \noindent (\romannumeral3) $X$ is {\em rigid\/}, i.e. $h^{2,1}(X)=0$. Then conjecture \ref{voisin} is true for $X$. \end{theorem} \begin{proof} Hypotheses (\romannumeral1) and (\romannumeral2) ensure the existence of a Chow--K\"unneth decomposition $\Pi_i$, i.e. \[ h(X)= h^0(X)\oplus \cdots \oplus h^6(X)\ \ \hbox{in}\ \mathcal M_{\rm rat}\ ,\] where $h^i(X)=(X,\Pi_i,0)$. Let \[ \Lambda_k:= {1\over k!}\Bigl(\displaystyle \sum_{\sigma\in S_k} \Gamma_\sigma\Bigr)\circ (\Pi_3^{\otimes k})\ \ \in A^{3k}(X^k\times X^k)\ .\] On the level of cohomology, the correspondence $\Lambda_k$ is a projector on $\hbox{Sym}^k H^3(X)\subset H^{3k}(X^k)$; on Chow--theoretical level $\Lambda_k$ is idempotent and defines the Chow motive $\hbox{Sym}^k h^3(X)$ in the language of \cite{Kim}. Hypothesis (\romannumeral3) implies that $\dim H^3(X)=2$, hence for $k\ge 3$ one has \[ \Lambda_k=0\ \ \hbox{in}\ H^{6k}(X^k\times X^k)\ .\] Using the nilpotence theorem, it follows that \[ \Lambda_k=0\ \ \hbox{in}\ A^{3k}(X^k\times X^k)\ .\] It only remains to check the case $k=2$. Note that $\hbox{Sym}^2 H^3(X)$ has dimension $1$, and \[ \hbox{Sym}^2 H^3(X)\ \subset\ H^6(X^2)\cap F^3\ .\] What's more, the Hodge conjecture is true for this subspace, since \[ \hbox{Sym}^2 H^3(X)=\mathbb{Q}\cdot \Pi_3 \ \subset\ H^6(X^2)\ .\] It follows that \[ \Lambda_2= \Pi_3\times \Pi_3\ \ \hbox{in}\ H^{12}(X^2\times X^2)\ ,\] and hence (using the nilpotence theorem) \[ \Lambda_2= \Pi_3\times \Pi_3\ \ \hbox{in}\ A^{6}(X^2\times X^2)\ .\] It follows that \[ (\Lambda_2)_\ast \bigl(A_j(X^2)\bigr) =0\ \ \hbox{for\ all}\ j\le 2\ ,\] i.e. a strong form of Voisin's conjecture is true. \end{proof} \begin{remark} Theorem \ref{main3} applies to the examples in subsection \ref{rigid}, and also to the two examples of remark \ref{more}. \end{remark} \begin{remark} In the proof of theorem \ref{main3}, we have used the condition $\dim H^3(X)=2$, which is a consequence of hypothesis (\romannumeral3). By replacing in the proof the correspondence $\Pi_3$ by $\Pi_{3,0}$ (i.e., replacing the motive $h^3(X)$ by $t(X)$), it is enough to assume \[ \dim H^3_{tr}(X)=2\ ,\] a condition a priori weaker than (\romannumeral3). \end{remark} \end{document}
\mathit{beg}in{document} \title{Optimal searching of gapped repeats in a word} \mathit{beg}in{abstract} Following (Kolpakov et al., 2013; Gawrychowski and Manea, 2015), we continue the study of {\em $\alpha$-gapped repeats} in strings, defined as factors $uvu$ with $|uv|\leq \alpha |u|$. Our main result is the $O(\alpha n)$ bound on the number of {\em maximal} $\alpha$-gapped repeats in a string of length $n$, previously proved to be $O(\alpha^2 n)$ in (Kolpakov et al., 2013). For a closely related notion of maximal $\delta$-subrepetition (maximal factors of exponent between $1+\delta$ and $2$), our result implies the $O(n/\delta)$ bound on their number, which improves the bound of (Kolpakov et al., 2010) by a $\log n$ factor. We also prove an algorithmic time bound $O(\alpha n+S)$ ($S$ size of the output) for computing all maximal $\alpha$-gapped repeats. Our solution, inspired by (Gawrychowski and Manea, 2015), is different from the recently published proof by (Tanimura et al., 2015) of the same bound. Together with our bound on $S$, this implies an $O(\alpha n)$-time algorithm for computing all maximal $\alpha$-gapped repeats. \end{abstract} \section{Introduction} \paragraph{Notation and basic definitions.} Let $w=w[1]w[2]\ldots w[n]=w[1 \mathinner{\ldotp\ldotp} n]$ be an arbitrary word. The length $n$ of~$w$ is denoted by $|w|$. For any $1\le i\le j\le n$, word $w[i]\ldots w[j]$ is called a {\it factor} of~$w$ and is denoted by $w[i \mathinner{\ldotp\ldotp} j]$. Note that notation $w[i \mathinner{\ldotp\ldotp} j]$ denotes two entities: a word and its occurrence starting at position $i$ in $w$. To underline the second meaning, we will sometimes use the term {\it segment}. Speaking about the equality between factors can also be ambiguous, as it may mean that the factors are identical words or identical segments. If two factors $u, v$ are identical words, we call them {\it equal} and denote this by $u=v$. To express that $u$ and $v$ are the same segment, we use the notation $u\equiv v$. For any $i=1\ldots n$, factor $w[1 \mathinner{\ldotp\ldotp} i]$ (resp. $w[i \mathinner{\ldotp\ldotp} n]$) is a {\it prefix} (resp. {\it suffix}) of~$w$. By {\em positions} on~$w$ we mean indices $1, 2,\ldots ,n$ of letters in $w$. For any factor~$v\equiv w[i \mathinner{\ldotp\ldotp} j]$ of~$w$, positions $i$ and $j$ are called respectively {\it start position} and {\it end position} of~$v$ and denoted by $\mathit{beg}(v)$ and $\mathit{end}(v)$ respectively. Let $u, v$ be two factors of~$w$. Factor $u$ {\it is contained} in~$v$ iff $\mathit{beg}(v)\le \mathit{beg}(u)$ and $\mathit{end}(u)\le \mathit{end}(v)$. Letter $w[i]$ {\it is contained} in~$v$ iff $\mathit{beg}(v)\le i\le \mathit{end}(v)$. A positive integer $p$ is called a {\it period} of~$w$ if $w[i]=w[i+p]$ for each $i=1,\ldots ,n-p$. We denote by $\mathit{}{per}(w)$ the {\em smallest period} of~$w$ and define the {\it exponent} of~$w$ as $\mathit{exp}(w)=|w|/\mathit{}{per}(w)$. A word is called {\it periodic} if its exponent is at least~2. Occurrences of periodic words are called {\it repetitions}. \paragraph{Repetitions, squares, runs.} Patterns in strings formed by repeated factors are of primary importance in word combinatorics~\cite{Lothaire83} as well as in various applications such as string matching algorithms~\cite{GaliSeiferas83,CrochRytter95}, molecular biology~\cite{Gusfield97}, or text compression~\cite{Storer88}. The simplest and best known example of such patterns is a factor of the form $uu$, where $u$ is a nonempty word. 
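To fix the notions introduced above, the following minimal Python sketch (ours and purely illustrative; the function names are not taken from the literature) computes $\mathit{per}(w)$ and $\mathit{exp}(w)$ directly from the definitions; for instance $\mathit{per}(\texttt{ababa})=2$, so $\mathit{exp}(\texttt{ababa})=5/2$ and $\texttt{ababa}$ is periodic, while $\mathit{per}(\texttt{abaab})=3$ and $\mathit{exp}(\texttt{abaab})=5/3<2$.
\begin{verbatim}
from fractions import Fraction

def per(w: str) -> int:
    """Smallest period of a nonempty word w, directly from the definition."""
    return next(p for p in range(1, len(w) + 1)
                if all(w[i] == w[i + p] for i in range(len(w) - p)))

def exponent(w: str) -> Fraction:
    """exp(w) = |w| / per(w)."""
    return Fraction(len(w), per(w))

assert per("ababa") == 2 and exponent("ababa") == Fraction(5, 2)   # periodic
assert per("abaab") == 3 and exponent("abaab") == Fraction(5, 3)   # exponent < 2
\end{verbatim}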
Such repetitions are called {\it squares}. Squares have been extensively studied. While the number of all square occurrences can be quadratic (consider word $\texttt{a}^n$), it is known that the number of {\em primitively-rooted} squares is $O(n\log n)$ \cite{CrochRytter95}, where a square $uu$ is primitively-rooted if the exponent of $u$ is not an integer greater than~$1$. An optimal $O(n\log n)$-time algorithm for finding all primitively-rooted squares was proposed in~\cite{Crochemor81}. Repetitions can be seen as a natural generalization of squares. A repetition in a given word is called {\it maximal} if it cannot be extended by at least one letter to the left nor to the right without changing (increasing) its minimal period. More precisely, a repetition $r\equiv w[i \mathinner{\ldotp\ldotp} j]$ in~$w$ is called {\it maximal} if it satisfies the following conditions: \mathit{beg}in{enumerate} \item $w[i-1]\neq w[i-1+\mathit{}{per}(r)]$ if $i>1$, \item $w[j+1-\mathit{}{per}(r)]\neq w[j+1]$ if $j<n$. \end{enumerate} For example, word $\texttt{cababaaa}$ has two maximal repetitions: $\texttt{ababa}$ and $\texttt{aaa}$. Maximal repetitions are usually called {\it runs} in the literature. Since any repetition is contained in some run, the set of all runs can be considered as a compact encoding of all repetitions in the word. This set has many useful applications, see, e.g., \cite{Crochetal1}. For any word~$w$, we denote by ${\cal R}(w)$ the number of maximal repetitions in~$w$ and by ${\cal E}(w)$ the sum of exponents of all maximal repetitions in~$w$. The following statements are proved in~\cite{KK00}. \mathit{beg}in{theorem} $\max_{|w|=n}{\cal E}(w)=O(n)$. \label{sumexp} \end{theorem} \mathit{beg}in{corollary} $\max_{|w|=n}{\cal R}(w)=O(n)$. \label{onmaxrun} \end{corollary} A series of papers (e.g.,~\cite{CrochIlieTinta, Crochetal11}) focused on more precise upper bounds on ${\cal E}(w)$ and ${\cal R}(w)$ trying to obtain the best possible constant factor behind the $O$-notation. A breakthrough in this direction was recently made in~\cite{RunsTheor} where the so-called ``runs conjecture'' ${\cal R}(w[1..n])<n$ was proved. To the best of our knowledge, the currently best upper bound ${\cal R}(w[1..n])\le\frac{22}{23}n$ on ${\cal R}(w)$ is shown in~\cite{BeyRunsTheor}. On the algorithmic side, an $O(n)$-time algorithm for finding all runs in a word of length~$n$ was proposed in~\cite{KK00} for the case of constant-size alphabet. Another $O(n)$-time algorithm, based on a different approach, has been proposed in~\cite{RunsTheor}. The $O(n)$ time bound holds for the (polynomially-bounded) integer alphabet as well, see, e.g., \cite{RunsTheor}. However, for the case of unbounded-size alphabet where characters can only be tested for equality, the lower bound $\Omega(n\log n)$ on computing all runs has been known for a long time \cite{MainLorentz85}. It is an interesting open question (raised over 20 years ago in \cite{BreslauerPhD92}) whether the $O(n)$ bound holds for an unbounded linearly-ordered alphabet. Some results related to this question have recently been obtained in \cite{KosolobovSTACS15}. \paragraph{Gapped repeats and subrepetitions.} Another natural generalization of squares are factors of the form $uvu$ where $u$ and $v$ are nonempty words. We call such factors {\it gapped repeats}. For a gapped repeat $uvu$, the left (resp. right) occurrence of~$u$ is called the {\it left} (resp. {\em right}) {\em copy}, and $v$ is called the {\it gap}. 
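Before turning to the properties of gapped repeats, we note that maximal repetitions are easy to check by brute force on small words; the following Python sketch (ours and purely illustrative, not one of the linear-time algorithms cited above) enumerates them directly from the definition and confirms the example given above: $\texttt{cababaaa}$ has exactly the two maximal repetitions $\texttt{ababa}$ and $\texttt{aaa}$.
\begin{verbatim}
def runs(w: str):
    """All maximal repetitions of w, as triples (start, end, period), 0-indexed."""
    def per(v: str) -> int:                     # smallest period of v
        return next(p for p in range(1, len(v) + 1)
                    if all(v[k] == v[k + p] for k in range(len(v) - p)))
    n, out = len(w), []
    for i in range(n):
        for j in range(i + 1, n):               # candidate segment w[i..j]
            p = per(w[i:j + 1])
            if j - i + 1 < 2 * p:               # exponent < 2: not a repetition
                continue
            left_ok = i == 0 or w[i - 1] != w[i - 1 + p]
            right_ok = j == n - 1 or w[j + 1 - p] != w[j + 1]
            if left_ok and right_ok:
                out.append((i, j, p))
    return out

w = "cababaaa"
assert [w[i:j + 1] for (i, j, _) in runs(w)] == ["ababa", "aaa"]
\end{verbatim}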
The {\it period} of this gapped repeat is $|u|+|v|$. For a gapped repeat~$\pi{}$, we denote the length of copies of~$\pi{}$ by $c(\pi{})$ and the period of~$\pi{}$ by $p(\pi{})$. Note that a gapped repeat $\pi{}=uvu$ may have different periods, and $\mathit{}{per}(\pi{})\leq p(\pi{})$. For example, in string $\texttt{cabacaabaa}$, segment $\texttt{abacaaba}$ corresponds to two gapped repeats having copies $\texttt{a}$ and $\texttt{aba}$ and periods $7$ and $5$ respectively. Gapped repeats forming the same segment but having different periods are considered distinct. This means that to specify a gapped repeat it is generally not sufficient to specify its segment. If $u',u''$ are equal non-overlapping factors and $u'$ occurs to the left of $u''$, then by $(u',u'')$ we denote the gapped repeat with left copy~$u'$ and right copy~$u''$. For a given gapped repeat $(u', u'')$, equal factors $u'[i \mathinner{\ldotp\ldotp} j]$ and $u''[i \mathinner{\ldotp\ldotp} j]$, for $1\le i\le j\le |u'|$, of the copies $u'$, $u''$ are called {\it corresponding factors} of repeat $(u', u'')$. For any real $\alpha>1$, a gapped repeat~$\pi{}$ is called {\it $\alpha$-gapped} if $p(\pi{})\le\alpha c(\pi{})$. Maximality of gapped repeats is defined similarly to repetitions. A gapped repeat $(w[i' \mathinner{\ldotp\ldotp} j'], w[i'' \mathinner{\ldotp\ldotp} j''])$ in~$w$ is called {\it maximal} if it satisfies the following conditions: \mathit{beg}in{enumerate} \item $w[i'-1]\neq w[i''-1]$ if $i'>1$, \item $w[j'+1]\neq w[j''+1]$ if $j''<n$. \end{enumerate} In other words, a gapped repeat $\pi{}$ is maximal if its copies cannot be extended to the left nor to the right by at least one letter without breaking its period $p(\pi{})$. As observed in~\cite{KPPHr13}, any $\alpha$-gapped repeat is contained either in a (unique) maximal $\alpha$-gapped repeat with the same period, or in a (unique) maximal repetition with a period which is a divisor of the repeat's period. For example, in the above string $\texttt{cabacaabaa}$, gapped repeat $(\texttt{ab})\texttt{aca}(\texttt{ab})$ is contained in maximal repeat $(\texttt{aba})\texttt{ca}(\texttt{aba})$ with the same period $5$. In string $\texttt{cabaaabaaa}$, gapped repeat $(\texttt{ab})\texttt{aa}(\texttt{ab})$ with period $4$ is contained in maximal repetition $\texttt{abaaabaaa}$ with period $4$. Since all maximal repetitions can be computed efficiently in $O(n)$ time (see above), the problem of computing all $\alpha$-gapped repeats in a word can be reduced to the problem of finding all maximal $\alpha$-gapped repeats. Several variants of the problem of computing gapped repeats have been studied earlier. In~\cite{Brodal00}, it was shown that all maximal gapped repeats with a gap length belonging to a specified interval can be found in time $O(n\log n+S)$, where $n$ is the word length and $S$ is output size. In \cite{KK00a}, an algorithm was proposed for finding all gapped repeats with a fixed gap length $d$ running in time $O(n\log d+S)$. In~\cite{KPPHr13}, it was proved that the number of maximal $\alpha$-gapped repeats in a word of length~$n$ is bounded by $O(\alpha^2 n)$ and all maximal $\alpha$-gapped repeats can be found in $O(\alpha^2 n)$ time for the case of integer alphabet. A new approach to computing gapped repeats was recently proposed in~\cite{GabrMan, DumiMan}. In particular, in~\cite{GabrMan} it is shown that the longest $\alpha$-gapped repeat in a word of length~$n$ over an integer alphabet can be found in $O(\alpha n)$ time. 
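The same kind of brute-force check works for maximal $\alpha$-gapped repeats. The following Python sketch (ours and purely illustrative; it runs in cubic time and is unrelated to the efficient algorithms discussed in this paper) enumerates them directly from the two maximality conditions above and recovers the example of the word $\texttt{cabacaabaa}$: the segment $\texttt{abacaaba}$ carries maximal gapped repeats with copies $\texttt{a}$ (period $7$) and $\texttt{aba}$ (period $5$).
\begin{verbatim}
def maximal_gapped_repeats(w: str, alpha: float):
    """Maximal alpha-gapped repeats of w as triples (i, j, c), 0-indexed:
    the copies are w[i:i+c] and w[j:j+c], the gap is nonempty, the period is j-i."""
    n, out = len(w), []
    for c in range(1, n // 2 + 1):                  # copy length
        for i in range(n - c + 1):                  # start of the left copy
            for j in range(i + c + 1, n - c + 1):   # start of the right copy
                p = j - i
                if w[i:i + c] != w[j:j + c] or p > alpha * c:
                    continue
                left_ok = i == 0 or w[i - 1] != w[j - 1]
                right_ok = j + c == n or w[i + c] != w[j + c]
                if left_ok and right_ok:
                    out.append((i, j, c))
    return out

reps = maximal_gapped_repeats("cabacaabaa", alpha=7)
assert (1, 8, 1) in reps and (1, 6, 3) in reps      # copies "a" and "aba"
\end{verbatim}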
Finally, in a recent paper~\cite{Tanimuraetal}, an algorithm is proposed for finding all maximal $\alpha$-gapped repeats in $O(\alpha n+S)$ time where $S$ is the output size, for a constant-size alphabet. The algorithm uses an approach previously introduced in \cite{BadkobehCrochToop12}. Recall that repetitions are segments with exponent at least $2$. Another way to approach gapped repeats is to consider segments with exponent smaller than $2$, but strictly greater than $1$. Clearly, such a segment corresponds to a gapped repeat $\pi{}=uvu$ with $\mathit{}{per}(\pi{})=p(\pi{})=|u|+|v|$. We will call such factors (segments) {\em subrepetitions}. More precisely, for any~$\delta$, $0<\delta<1$, by a $\delta$-subrepetition we mean a factor~$v$ that satisfies $1+\delta\le \mathit{exp}(v)<2$. Again, the notion of maximality straightforwardly applies to subrepetitions as well: maximal subrepetitions are defined exactly in the same way as maximal repetitions. The relationship between maximal subrepetitions and maximal gapped repeats was clarified in \cite{KPPHr13}. Directly from the definitions, a maximal subrepetition $\pi{}$ in a string $w$ corresponds to a maximal gapped repeat with $p(\pi{})=\mathit{}{per}(\pi{})$. Futhermore, a maximal $\delta$-subrepetition corresponds to a maximal $\frac{1}{\delta}$-gapped repeat. However, there may be more maximal $\frac{1}{\delta}$-gapped repeats than maximal $\delta$-subrepetitions, as not every maximal $\frac{1}{\delta}$-gapped repeat corresponds to a maximal $\delta$-subrepetition. Some combinatorial results on the number of maximal subrepetitions in a string were obtained in~\cite{KKOch}. In particular, it was proved that the number of maximal $\delta$-subrepetitions in a word of length~$n$ is bounded by $O(\frac{n}{\delta}\log n)$. In~\cite{KPPHr13}, an $O(n/\delta^2)$ bound on the number of maximal $\delta$-subrepetitions in a word of length~$n$ was obtained. Moreover, in~\cite{KPPHr13}, two algorithms were proposed for finding all maximal $\delta$-subrepetitions in the word running respectively in $O(\frac{n\log\log n}{\delta^2})$ time and in $O(n\log n+\frac{n}{\delta^2}\log\frac{1}{\delta})$ expected time, over the integer alphabet. In \cite{BadkobehCrochToop12}, it is shown that all subrepetitions with the largest exponent (over all subrepetitions) can be found in an overlap-free string in time $O(n)$, for a constant-size alphabet. \paragraph{Our results.} In the present work we improve the results of~\cite{KPPHr13} on maximal gapped repeats: we prove an asymptotically tight bound of $O(\alpha n)$ on the number of maximal $\alpha$-gapped repeats in a word of length~$n$ (Section~\ref{counting}). From our bound, we also derive a $O(n/\delta)$ bound on the number of maximal $\delta$-subrepetitions occurring in the word, which improves the bound of \cite{KKOch} by a $\log n$ factor. Then, based on the algorithm of~\cite{GabrMan}, we obtain an asymptotically optimal $O(\alpha n)$ time bound for computing all maximal $\alpha$-gapped repeats in a string (Section~\ref{algorithm}). Note that this bound follows from the recently published paper \cite{Tanimuraetal} that presents an $O(\alpha n+S)$ algorithm for computing all maximal $\alpha$-gapped repeats. Here we present an alternative algorithm with the same bound that we obtained independently. \section{Preliminaries} In this Section we state a few propositions that will be used later in the paper. The following fact is well-known (see, e.g.,~\cite[Proposition~2]{Kolpakov12}). 
\begin{proposition} Any period~$p$ of a word~$v$ such that $|v|\ge 2p$ is divisible by $\mathit{per}(v)$, the smallest period of $v$. \label{divminper} \end{proposition}
Let $\Delta$ be some natural number. A period~$p$ of a word~$v$ is called a {\it $\Delta$-period} if $p$ is divisible by~$\Delta$. The minimal $\Delta$-period of~$v$, if it exists, is denoted by $p_{\Delta}(v)$. The word~$v$ is called $\Delta$-periodic if $|v|\ge 2p_{\Delta}(v)$. It is obvious that any $\Delta$-periodic word is also periodic. Proposition~\ref{divminper} can be generalized in the following way.
\begin{proposition} Any $\Delta$-period~$p$ of a word~$v$ such that $|v|\ge 2p$ is divisible by $p_{\Delta}(v)$. \label{minDeltaper} \end{proposition}
\begin{proof} By Proposition~\ref{divminper}, the period~$p$ is divisible by $\mathit{per}(v)$, so $p$ is divisible by $LCM(\mathit{per}(v),\Delta)$. On the other hand, $LCM(\mathit{per}(v),\Delta)$ is a $\Delta$-period of~$v$. Thus, $p_{\Delta}(v)=LCM(\mathit{per}(v),\Delta)$, and $p$ is divisible by $p_{\Delta}(v)$. \end{proof}
Consider an arbitrary word $w=w[1 \mathinner{\ldotp\ldotp} n]$ of length~$n$. Recall that any repetition~$y$ in~$w$ is extended to a unique maximal repetition $r$ with the same minimal period. We call $r$ the {\it extension} of~$y$. Let $r$ be a repetition in the word~$w$. We call any factor of~$w$ of length $\mathit{per}(r)$ which is contained in~$r$ a {\it cyclic root} of~$r$. For cyclic roots we have the following property proved, e.g., in~\cite[Proposition~2]{KPPHr13}.
\begin{proposition} Two cyclic roots $u'$, $u''$ of a repetition~$r$ are equal if and only if $\mathit{beg}(u')\equiv \mathit{beg}(u'')\pmod{\mathit{per}(r)}$. \label{oncycroot} \end{proposition}
\section{Number of maximal repeats and subrepetitions}
\label{counting}
In this section, we obtain an improved upper bound on the number of maximal gapped repeats and subrepetitions in a string $w$. Following the general approach of \cite{KPPHr13}, we split all maximal gapped repeats into three categories according to the periodicity properties of the repeat's copies: periodic, semiperiodic and ordinary repeats. Bounds for periodic and semiperiodic repeats are directly borrowed from \cite{KPPHr13}, while for ordinary repeats we obtain a better bound.
\paragraph{\it Periodic repeats.} We say that a maximal gapped repeat is {\it periodic} if its copies are periodic strings (i.e. of exponent at least $2$). The set of all periodic maximal $\alpha$-gapped repeats in $w$ is denoted by ${\cal PP}_{\alpha}$. The following bound on the size of ${\cal PP}_{\alpha}$ was obtained in \cite[Corollary~6]{KPPHr13}.
\begin{lemma} $|{\cal PP}_k|=O(kn)$ for any natural $k>1$. \label{onPP} \end{lemma}
\paragraph{\it Semiperiodic repeats.} A maximal gapped repeat is called {\it prefix} ({\it suffix}) {\it semi\-periodic} if the copies of this repeat are not periodic, but have a prefix (suffix) which is periodic and whose length is at least half of the copy length. A maximal gapped repeat is {\it semiperiodic} if it is either prefix or suffix semiperiodic. The set of all semiperiodic $\alpha$-gapped maximal repeats is denoted by~${\cal SP}_{\alpha}$. In~\cite[Corollary~8]{KPPHr13}, the following bound was obtained on the number of semiperiodic maximal $\alpha$-gapped repeats.
\begin{lemma}[\cite{KPPHr13}] $|{\cal SP}_k|=O(kn)$ for any natural $k>1$.
\label{onSP} \end{lemma} \paragraph{\it Ordinary repeats.} Maximal gapped repeats which are neither periodic nor semiperiodic are called {\it ordinary}. The set of all ordinary maximal $\alpha$-gapped repeats in the word~$w$ is denoted by~${\cal OP}_{\alpha}$. In the rest of this section, we prove that the cardinality of ${\cal OP}_{\alpha}$ is $O(\alpha n)$. For simplicity, assume that $\alpha$ is an integer number $k$. To estimate the number of ordinary maximal $k$-gapped repeats, we use the following idea from~\cite{Kolpakov12}. We represent a maximal repeat $\pi{}\equiv (u', u'')$ from ${\cal OP}_k$ by a triple $(i, j, c)$ where $i=\mathit{beg}(u')$, $j=\mathit{beg}(u'')$ and $c=c(\pi{})=|u'|=|u''|$. Such triples will be called {\em points}. Obviously, $\pi{}$ is uniquely defined by the values $i$, $j$ and~$c$, therefore two different repeats from ${\cal OP}_k$ cannot be represented by the same point. For any two points $(i', j', c')$, $(i'', j'', c'')$ we say that point $(i', j', c')$ {\it covers} point $(i'', j'', c'')$ if $i'\le i''\le i'+c'/6$, $j'\le j''\le j'+c'/6$, $c'\ge c'' \ge \frac{2c'}{3}$. A point is {\it covered} by a repeat $\pi{}$ if it is covered by the point representing~$\pi{}$. By $V[\pi{} ]$ we denote the set of all points covered by a repeat~$\pi{}$. We show that no point can be covered by two different repeats from ${\cal OP}_k$. \begin{lemma} Two different repeats from ${\cal OP}_k$ cannot cover the same point. \label{keylemma} \end{lemma} \begin{proof} Let $\pi{}_1\equiv (u'_1, u''_1)$, $\pi{}_2\equiv (u'_2, u''_2)$ be two different repeats from ${\cal OP}_k$ covering the same point $(i, j, c)$. Denote $c_1=c(\pi{}_1)$, $c_2=c(\pi{}_2)$, $p_1=\mathit{per}(\pi{}_1)$, $p_2=\mathit{per}(\pi{}_2)$. Without loss of generality we assume $c_1\ge c_2$. From $c_1\ge c\ge \frac{2c_1}{3}$, $c_2\ge c\ge \frac{2c_2}{3}$ we have $c_1\ge c_2\ge \frac{2c_1}{3}$, i.e. $c_2\le c_1\le\frac{3c_2}{2}$. Note that $w[i]$ is contained in both left copies $u'_1, u'_2$, i.e. these copies overlap. If $p_1=p_2$, then the repeats $\pi{}_1$ and $\pi{}_2$ must coincide due to the maximality of these repeats. Thus, $p_1\neq p_2$. Denote $\Delta =|p_1-p_2|>0$. From $\mathit{beg}(u'_1)\le i\le \mathit{beg}(u'_1)+c_1/6$ and $\mathit{beg}(u''_1)\le j\le \mathit{beg}(u''_1)+c_1/6$ we have $$ (j-i)-c_1/6\le p_1\le (j-i)+c_1/6. $$ Analogously, we have $$ (j-i)-c_2/6\le p_2\le (j-i)+c_2/6. $$ Thus $\Delta\le (c_1+c_2)/6$ which, together with the inequality $c_1\le\frac{3c_2}{2}$, implies $\Delta\le\frac{5c_2}{12}$. First consider the case when one of the copies $u'_1, u'_2$ is contained in the other; since $c_1\ge c_2$, this means that $u'_2$ is contained in $u'_1$. In this case, $u''_1$ contains some factor $\widehat u''_2$ corresponding to the factor $u'_2$ in $u'_1$. Since $\mathit{beg}(u''_2)-\mathit{beg}(u'_2)=p_2$, $\mathit{beg}(\widehat u''_2)-\mathit{beg}(u'_2)=p_1$ and $u''_2=\widehat u''_2=u'_2$, we have $$ |\mathit{beg}(u''_2)-\mathit{beg}(\widehat u''_2)|=\Delta, $$ so $\Delta$ is a period of $u''_2$ such that $\Delta\le\frac{5}{12}c_2=\frac{5}{12}|u''_2|$. Thus, $u''_2$ is periodic, which contradicts the fact that $\pi{}_2$ is ordinary and hence not periodic. Now consider the case when $u'_1, u'_2$ are not contained in one another. Denote by $z'$ the overlap of $u'_1$ and $u'_2$. Let $z'$ be a suffix of $u'_s$ and a prefix of $u'_t$, where $\{s,t\}=\{1, 2\}$. Then $u''_s$ contains a suffix $z''$ corresponding to the suffix $z'$ in $u'_s$, and $u''_t$ contains a prefix $\widehat z''$ corresponding to the prefix $z'$ in $u'_t$.
Since $\mathit{beg}(z'')-\mathit{beg}(z')=p_s$ and $\mathit{beg}(\widehat z'')-\mathit{beg}(z')=p_t$ and $z''=\widehat z''=z'$, we have $$ |\mathit{beg}(z'')-\mathit{beg}(\widehat z'')|=|p_s-p_t|=\Delta, $$ therefore $\Delta$ is a period of $z'$. Note that in this case $$ \mathit{beg}(u'_s)<\mathit{beg}(u'_t)\le i \le \mathit{beg}(u'_s)+c_s/6, $$ therefore $0<\mathit{beg}(u'_t)-\mathit{beg}(u'_s)\le c_s/6$. Thus $$ |z'|=c_s-(\mathit{beg}(u'_t)-\mathit{beg}(u'_s))\ge\frac{5}{6}c_s\ge\frac{5}{6}c_2. $$ From $\Delta\le\frac{5}{12}c_2$ and $c_2\le\frac{6}{5}|z'|$ we obtain $\Delta\le |z'|/2$. Thus, $z'$ is a periodic suffix of $u'_s$ such that $|z'|\ge \frac{5}{6}|u'_s|$, i.e. $\pi{}_s$ is either suffix semiperiodic or periodic, which contradicts $\pi{}_s\in {\cal OP}_k$. \end{proof} Denote by ${\cal Q}_k$ the set of all points $(i, j, c)$ such that $1\le i, j, c\le n$ and $i<j\le i+(\frac{3}{2}k+\frac{1}{4})c$. \begin{lemma} Any point covered by a repeat from ${\cal OP}_k$ belongs to ${\cal Q}_k$. \label{keylemma1} \end{lemma} \begin{proof} Let a point $(i, j, c)$ be covered by some repeat $\pi{}\equiv (u', u'')$ from ${\cal OP}_k$. Denote $c'=c(\pi{} )$. Note that $w[i]$ and $w[j]$ are contained respectively in $u'$ and $u''$ and $n>c'\ge c\ge \frac{2c'}{3}>0$, so the inequalities $1\le i, j, c\le n$ and $i<j$ are obvious. Note also that $$ j\le \mathit{beg}(u'')+c'/6=\mathit{beg}(u')+\mathit{per}(\pi{} )+c'/6\le i+kc'+c'/6, $$ therefore, taking into account $c'\le \frac{3c}{2}$, we have $j\le i+(\frac{3}{2}k+\frac{1}{4})c$. \end{proof} From Lemmas~\ref{keylemma} and~\ref{keylemma1}, we obtain \begin{lemma} $|{\cal OP}_k|=O(nk)$. \label{OPklemma} \end{lemma} \begin{proof} Assign to each point $(i, j, c)$ the weight $\rho (i, j, c)=1/c^3$. For any finite set~$A$ of points, we define $$ \rho (A)=\sum_{(i, j, c)\in A} \rho (i, j, c)=\sum_{(i, j, c)\in A}\frac{1}{c^3}. $$ Let $\pi{}$ be an arbitrary repeat from ${\cal OP}_k$ represented by a point $(i', j', c')$. Then \begin{eqnarray*} \rho (V[\pi{} ])&=&\sum_{i'\le i\le i'+c'/6}\;\sum_{j'\le j\le j'+c'/6}\;\sum_{2c'/3\le c\le c'}\frac{1}{c^3}\\ &>&\frac{c'^2}{36}\sum_{2c'/3\le c\le c'}\frac{1}{c^3}. \end{eqnarray*} Using a standard estimation of sums by integrals, one can deduce that $\sum_{2c'/3\le c\le c'}\frac{1}{c^3}\ge \frac{5}{32}\frac{1}{c'^2}$ for any~$c'$. Thus, for any $\pi{}$ from ${\cal OP}_k$ $$ \rho (V[\pi{} ])>\frac{1}{36}\cdot\frac{5}{32}=\Omega (1). $$ Therefore, \begin{equation} \sum_{\pi{}\in {\cal OP}_k}\rho (V[\pi{} ])=\Omega (|{\cal OP}_k|). \label{lowbndOP} \end{equation} Note also that \begin{eqnarray*} \rho ({\cal Q}_k)&\le &\sum_{c=1}^n\;\sum_{i=1}^n\;\sum_{i<j\le i+(\frac{3}{2}k+\frac{1}{4})c}\frac{1}{c^3}\\ &\le&n\sum_{c=1}^n\left(\frac{3}{2}k+\frac{1}{4}\right)c\cdot\frac{1}{c^3}<2nk\sum_{c=1}^n\frac{1}{c^2} <2nk\sum_{c=1}^{\infty}\frac{1}{c^2}=\frac{nk\pi^2}{3}. \end{eqnarray*} Thus, \begin{equation} \rho ({\cal Q}_k)=O(nk). \label{upbndQ} \end{equation} By Lemma~\ref{keylemma1}, any point covered by a repeat from ${\cal OP}_k$ belongs to ${\cal Q}_k$. On the other hand, by Lemma~\ref{keylemma}, each point of ${\cal Q}_k$ cannot be covered by two different repeats from ${\cal OP}_k$. Therefore, $$ \sum_{\pi{}\in {\cal OP}_k}\rho (V[\pi{} ])\le \rho ({\cal Q}_k). $$ Thus, using (\ref{lowbndOP}) and~(\ref{upbndQ}), we conclude that $|{\cal OP}_k|=O(nk)$.
\end{proof} Putting together Lemma~\ref{onPP}, Lemma~\ref{onSP}, and Lemma~\ref{OPklemma}, we obtain that for any integer $k\ge 2$, the number of maximal $k$-gapped repeats in~$w$ is $O(nk)$. The bound straightforwardly generalizes to the case of real~$\alpha >1$. Thus, we conclude with \begin{theorem} For any $\alpha >1$, the number of maximal $\alpha$-gapped repeats in~$w$ is $O(\alpha n)$. \label{onPk} \end{theorem} Note that the bound of Theorem~\ref{onPk} is asymptotically tight. To see this, it is enough to consider the word $w_k=(0110)^k$. It is easy to check that for a big enough~$\alpha$ and $k=\Omega (\alpha)$, $w_k$ contains $\Theta (\alpha |w_k|)$ maximal $\alpha$-gapped repeats whose copies are single-letter words. We now use Theorem~\ref{onPk} to obtain an upper bound on the number of maximal $\delta$-subrepetitions. The following proposition, shown in \cite[Proposition~3]{KPPHr13}, follows from the fact that each maximal $\delta$-subrepetition defines at least one maximal $1/\delta$-gapped repeat (cf. Introduction). \begin{proposition}[\cite{KPPHr13}] For $0<\delta <1$, the number of maximal $\delta$-subrepetitions in a string is no more than the number of maximal $1/\delta$-gapped repeats. \label{relforep} \end{proposition} Theorem~\ref{onPk} combined with Proposition~\ref{relforep} immediately implies the following upper bound for maximal $\delta$-subrepetitions, which improves the bound of \cite{KKOch} by a $\log n$ factor. \begin{theorem} For $0<\delta <1$, the number of maximal $\delta$-subrepetitions in~$w$ is $O(n/\delta)$. \label{ondelrep} \end{theorem} The $O(n/\delta)$ bound on the number of maximal $\delta$-subrepetitions is asymptotically tight, at least on an unbounded alphabet: the word $\texttt{ab}_1\texttt{ab}_2\ldots \texttt{ab}_k$ contains $\Omega (n/\delta)$ maximal $\delta$-subrepetitions for $\delta\le 1/2$. \section{Computing all maximal $\alpha$-gapped repeats} \label{algorithm} In this section, we present an $O(\alpha n+S)$ algorithm for computing all maximal $\alpha$-gapped repeats in a word $w$. This bound has recently been announced in \cite{Tanimuraetal}; here we present a different solution. Together with the $O(\alpha n)$ bound of Theorem~\ref{onPk}, this implies an $O(\alpha n)$-time algorithm. \subsection{Computing PR-repeats} Some maximal $\alpha$-gapped repeats are located within maximal repetitions (runs) in the specific way defined below. For example, the word $\texttt{cabababababaa}$ contains the maximal gapped repeats $(\texttt{a})\texttt{babababab}(\texttt{a})$, $(\texttt{aba})\texttt{babab}(\texttt{aba})$ and $(\texttt{ababa})\texttt{b}(\texttt{ababa})$ within the run $\texttt{abababababa}=(\texttt{ab})^{11/2}$. In this section, we describe the structure of such repeats, and in particular those of them which are periodic (see Section~\ref{counting}), like the repeat $(\texttt{ababa})\texttt{b}(\texttt{ababa})$ above. We show how those maximal $\alpha$-gapped repeats can be extracted from the runs. Repeats which are located within runs but are not periodic will be found separately, together with repeats (periodic or not) which are not located within runs. This part will be described in the next section. Let $\pi{}\equiv (u', u'')$ be a periodic gapped repeat. If the extensions of $u'$ and $u''$ are the same repetition~$r$, then we say that $r$ {\it generates} $\pi{}$ and we call $\pi{}$ a {\it PR-repeat} (short for {\em Periodic Run-generated}). Gapped repeats which are not PR-repeats are called {\it non-PR repeats}.
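For experimentation it is convenient to have a trivial reference implementation against which the algorithm of this section can be checked on small inputs. The following brute-force sketch (our own illustration in Python, with $0$-based indexing; it is {\em not} the algorithm described in this section) enumerates maximal $\alpha$-gapped repeats directly from the definitions; running it on the words $w_k=(0110)^k$ also illustrates the lower-bound example of Section~\ref{counting}.
\begin{verbatim}
# Brute-force enumeration of maximal alpha-gapped repeats (testing aid only).
# A repeat is reported as (beg(u'), beg(u''), c) with 0-based positions.
def maximal_alpha_gapped_repeats(w, alpha):
    n = len(w)
    result = []
    for p in range(1, n):              # p = per(pi) = beg(u'') - beg(u')
        for i in range(n - p):         # candidate start of the left copy
            if i > 0 and w[i - 1] == w[i + p - 1]:
                continue               # not left-maximal for this period
            c = 0                      # extend the copies to the right
            while i + p + c < n and w[i + c] == w[i + p + c]:
                c += 1
            # the copies of length c are now left- and right-maximal; the pair
            # is a gapped repeat iff the gap is nonempty (p > c), and it is
            # alpha-gapped iff p <= alpha * c
            if 0 < c < p <= alpha * c:
                result.append((i, i + p, c))
    return result

if __name__ == "__main__":
    print(len(maximal_alpha_gapped_repeats("0110" * 8, alpha=4)))
\end{verbatim}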
We will use the following fact. \mathit{beg}in{proposition} Let $\pi{}\equiv (u', u'')$ be a maximal gapped repeat such that its copies $u'$ and $u''$ contain a pair of corresponding factors having the same extension~$r$. Then $\pi{}$ is generated by~$r$. \label{samext} \end{proposition} \mathit{beg}in{proof} Observe that to prove the proposition, it is enough to show that both copies $u'$ and $u''$ are contained in~$r$, i.e. $\mathit{beg}(r)\le \mathit{beg}(u')$ and $\mathit{end}(r)\ge \mathit{end}(u'')$. Let $\mathit{beg}(r)>\mathit{beg}(u')$. Then both letters $w[\mathit{beg}(r)-1]$ and $w[\mathit{beg}(r)-1+\mathit{}{per}(r)]$ are contained in $u'$. Let these letters be respectively $j$-th and ($j+\mathit{}{per}(r)$)-th letters of $u'$. Then we have $u''[j]=u'[j]\neq u'[j+\mathit{}{per}(r)]=u''[j+\mathit{}{per}(r)]$, i.e. $u''[j]\neq u''[j+\mathit{}{per}(r)]$, which is a contradiction to the fact that both letters $u''[j]$ and $u''[j+\mathit{}{per}(r)]$ are contained in~$r$. Relation $\mathit{end}(r)\ge \mathit{end}(u'')$ is proved analogously. \end{proof} All maximal PR-repeats can be easily computed according to the following lemma. \mathit{beg}in{lemma} A maximal gapped periodic repeat $\pi{}\equiv (u', u'')$ is generated by a maximal repetition~$r$ if and only if $p(\pi{})$ is divisible by $\mathit{}{per}(r)$ and $$ \mathit{beg}in{array}{c} |r|/2<p(\pi{})\le |r|-2\,\mathit{}{per}(r),\\ u'\equiv w[\mathit{beg}(r) \mathinner{\ldotp\ldotp} \mathit{end}(r)-p(\pi{})],\\ u''\equiv w[\mathit{beg}(r)+\mathit{}{per}(r) \mathinner{\ldotp\ldotp} \mathit{end}(r)]. \label{compgenreps} \end{array} $$ \end{lemma} \mathit{beg}in{proof} Let $\pi{}$ be generated by~$r$. Consider prefixes of $u'$ and $u''$ of length $\mathit{}{per}(r)$. These prefixes are equal cyclic roots of~$r$, and by Proposition~\ref{oncycroot} the difference $\mathit{beg}(u'')-\mathit{beg}(u')=p(\pi{})$ is divisible by~$\mathit{}{per}(r)$. Inequalities $|r|/2<p(\pi{})\le |r|-2\mathit{}{per}(r)$ follow immediately from the definition of a repeat generated by a repetition. To prove the last two conditions of the lemma, it is sufficient to prove $\mathit{beg}(u')=\mathit{beg}(r)$ and $\mathit{end}(u'')=\mathit{end}(r)$. Let $\mathit{beg}(u')\neq \mathit{beg}(r)$, i.e. $\mathit{beg}(u')>\mathit{beg}(r)$. Then both letters $w[\mathit{beg}(u')-1]$ and $w[\mathit{beg}(u'')-1]$ are contained in~$r$. Thus, since the difference $(\mathit{beg}(u'')-1)-(\mathit{beg}(u')-1)=p(\pi{})$ is divisible by $\mathit{}{per}(r)$, we have $w[\mathit{beg}(u')-1]=w[\mathit{beg}(u'')-1]$ which contradicts the maximality of~$\pi{}$. The relation $\mathit{end}(u'')=\mathit{end}(r)$ is proved analogously. Thus, all the conditions of the lemma are proved. On the other hand, if $\pi{}$ satisfies all the conditions of the lemma then $\pi{}$ is obviously generated by~$r$. \end{proof} \mathit{beg}in{corollary} A maximal repetition~$r$ generates no more than $\mathit{exp}(r)/2$ maximal PR-repeats, and all these repeats can be computed from~$r$ in $O(\mathit{exp}(r))$ time. \label{corrgenreps} \end{corollary} To find all maximal $\alpha$-gapped PR-repeats in a string $w$, we first compute all maximal repetitions in~$w$ in $O(n)$ time (see Introduction). Then, for each maximal repetition~$r$, we output all maximal $\alpha$-gapped repeats generated by~$r$. Using Corollary~\ref{corrgenreps}, this can be done in $O(\mathit{exp}(r))$ time. Thus the total time of processing all maximal repetitions is $O({\cal E}(w))$. 
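As an illustration of the procedure just described, the following sketch (our own Python code with $0$-based positions, not part of the analysis) lists the maximal $\alpha$-gapped PR-repeats generated by a single run, exactly as prescribed by the lemma above.
\begin{verbatim}
# A run is given by its 0-based half-open interval [beg, end) and its minimal
# period per.  It generates the PR-repeats whose period p(pi) is a multiple of
# per with |r|/2 < p(pi) <= |r| - 2*per, the copies spanning the whole run.
def pr_repeats_from_run(beg, end, per, alpha):
    length = end - beg
    repeats = []
    p = per * (length // (2 * per) + 1)   # smallest multiple of per above |r|/2
    while p <= length - 2 * per:
        c = length - p                    # copy length
        if p <= alpha * c:                # p > |r|/2 already guarantees p > c
            repeats.append((beg, beg + p, c))
        p += per
    return repeats

# the run 'abababababa' = (ab)^{11/2}: the only PR-repeat generated is
# (ababa)b(ababa), reported as (0, 6, 5)
print(pr_repeats_from_run(0, 11, 2, alpha=4))
\end{verbatim}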
Since ${\rm E}(w)=O(n)$ by Theorem~\ref{sumexp}, all maximal $\alpha$-gapped PR-repeats in~$w$ can be computed in $O(n)$ time. \subsection{Computing non-PR repeats} We now turn to the computation of maximal non-PR $\alpha$-gapped repeats. Recall that non-PR repeats are those which are either non-periodic, or periodic but not located within a single run. Our goal is to show that all maximal non-PR $\alpha$-gapped repeats can be found in $O(\alpha n)$ time. Observe that there exists a trivial algorithm for computing all maximal $\alpha$-gapped repeats in $O(n^2)$ time that proceeds as follows: for each period~$p\le n$, find all maximal $\alpha$-gapped repeats with period $p$ in $O(n)$ time by consecutively comparing symbols $w[i]$ and $w[i+p]$ for $i=1, 2,\ldots , n-p$. From the results of \cite{Brodal00}, it follows that all maximal $\alpha$-gapped repeats can be found in time $O(n\log n +S)$. This, together with Theorem~\ref{onPk}, implies an $O(\alpha n)$-time algorithm for the case $\alpha \ge \log n$. Therefore, we only have to consider the case $\alpha <\log n$. \subsubsection*{(i) Preliminaries} Assume that $\alpha <\log n$. For this case, we proceed with a modification of the algorithm of~\cite{GabrMan}. We compute all maximal $\alpha$-gapped non-PR repeats $\pi{}$ in~$w$ such that $c(\pi{})\ge\log n$. To do this, we divide $w$ into {\it blocks} of $\Delta=(\log n)/4$ consecutive symbols of~$w$. Without loss of generality, we assume that $n=2^k\Delta$, i.e. $w$ contains exactly $2^k$ blocks. A word~$x$ of length~$2^l\Delta$ where $0\le l\le k-1$ is called a {\it basic factor} of~$w$ if $x= w[i\Delta +1 \mathinner{\ldotp\ldotp} (i+2^l)\Delta]$ for some~$i$. Such an occurrence $w[i\Delta +1 \mathinner{\ldotp\ldotp} (i+2^l)\Delta]$ of~$x$ starting at a block frontier will be called {\it aligned}. A basic factor $x$ of length ~$2^l\Delta$, where $1\le l\le k-1$, is called {\it superbasic} if $x= w[i2^l\Delta +1 \mathinner{\ldotp\ldotp} (i+1)2^l\Delta]$ for some~$i$. Note that $w$ contains $O(n)$ aligned occurrences of basic factors and $O(\frac{n}{\log n})$ aligned occurrences of superbasic factors. Let $z\equiv w[q2^l\Delta +1 \mathinner{\ldotp\ldotp} (q+1)2^l\Delta]$ be an aligned occurrence of superbasic factor of length $2^l$ in~$w$. For $\tau =0,1,\ldots \Delta -1$, an occurrence $w[q2^l\Delta +1 +\tau \mathinner{\ldotp\ldotp} (q2^l+2^{l-1})\Delta +\tau ]$ of a basic factor of length $2^{l-1}\Delta$ is called {\it $\tau$-associated} (or simply {\em associated}) with $z$. Note that any basic factor occurrence $\tau$-associated with $z$ is entirely contained in~$z$ and is uniquely defined by~$z$ and~$\tau$. Thus, $z$ has no more than $\Delta$ associated occurrences of basic factors. To continue, we need one more definition : for $1\le i, j\le n$, denote by $LCP(i, j)$ the length of the longest common prefix of $w[i \mathinner{\ldotp\ldotp} n]$ and $w[j \mathinner{\ldotp\ldotp} n]$, and by $LCS(i, j)$ the length of the longest common suffix of $w[1 \mathinner{\ldotp\ldotp} i]$ and $w[1 \mathinner{\ldotp\ldotp} j]$. Let $\pi{}\equiv (u', u'')$ be a maximal gapped repeat in~$w$ such that $c(\pi{} )\ge\log n= 4\Delta$. Note that in this case, the left copy $u'$ contains at least one aligned occurrence of superbasic factors. Consider aligned occurrences of superbasic factors of maximal length contained in~$u'$. Note that $u'$ can contain either one or two adjacent such occurrences. Let $z$ be the leftmost of them. 
Note that in this case, we have the following restrictions imposed on $u'$: \begin{equation} \begin{array}{c} \mathit{beg} (z)-|z|<\mathit{beg} (u')\le\mathit{beg} (z),\\ \mathit{end} (z)\le\mathit{end} (u')<\mathit{end} (z)+2|z|. \end{array} \label{leftcond} \end{equation} Thus, $c(\pi{} )<4|z|$. Consider the factor~$z''$ of~$u''$ corresponding to~$z$ in~$u'$. Note that $z''$ can be non-aligned. Consider in~$z''$ the leftmost aligned basic factor $y''$ of length~$|z''|/2$. Observe that $\mathit{beg} (z'')\le \mathit{beg} (y'')<\mathit{beg} (z'')+\Delta$ and $y''$ is entirely contained in $z''$. Let $y'$ be the factor of~$z$ corresponding to the factor $y''$ in $z''$. It is easily seen that $y'$ is an occurrence of a basic factor associated with $z$, and $\pi{}$ is uniquely defined by $z$, $y'$ and $y''$. Thus, any maximal gapped repeat $\pi{}$ such that $c(\pi{} )\ge \log n$ is uniquely defined by a triple $(z, y', y'')$, where $z$ is an aligned occurrence of some superbasic factor, $y'$ is an occurrence of some basic factor associated with~$z$, and $y''$ is an aligned occurrence of the same basic factor. From now on, we will say in such a case that $\pi{}$ {\em is defined} by the triple $(z, y', y'')$. Observe that $\pi{}\equiv (u', u'')$ can be retrieved from $(z, y', y'')$ using the $LCP$ and $LCS$ functions. \begin{equation} \begin{array}{c} \mathit{beg} (u')=\mathit{beg} (y')-LCS(\mathit{beg} (y')-1, \mathit{beg} (y'')-1),\\ \mathit{end} (u')=\mathit{end} (y')+LCP(\mathit{end} (y')+1, \mathit{end} (y'')+1),\\ \mathit{beg} (u'')=\mathit{beg} (y'')-LCS(\mathit{beg} (y')-1, \mathit{beg} (y'')-1),\\ \mathit{end} (u'')=\mathit{end} (y'')+LCP(\mathit{end} (y')+1, \mathit{end} (y'')+1). \end{array} \label{compgenrep} \end{equation} Assume additionally that $\pi{}$ is an $\alpha$-gapped repeat for $\alpha >1$. Then, taking into account inequalities~(\ref{leftcond}) and $c(\pi{} )<4|z|$, we have \begin{eqnarray*} \mathit{end} (y'')&\le &\mathit{end} (u'')=\mathit{end} (u')+\mathit{per}(\pi{} )<\mathit{end} (z)+2|z|+\alpha c(\pi{} )\\ &<&\mathit{end} (z)+2|z|+4\alpha |z|<\mathit{end} (z) +6\alpha |z|=\mathit{end} (z) +12\alpha |y''|. \end{eqnarray*} On the other hand, $\mathit{beg} (y'')\ge \mathit{beg} (u'')>\mathit{end} (u')\ge\mathit{end} (z)$. Thus, for any triple $(z, y', y'')$ defining a maximal $\alpha$-gapped repeat in~$w$, the occurrence $y''$ is contained in the segment $w[\mathit{end} (z)+1 \mathinner{\ldotp\ldotp} \mathit{end} (z) +12\alpha |y''|]$ of length $12\alpha |y''|$ to the right of~$z$. We will denote this segment by ${\cal I}(z)$. The main idea of the algorithm is to consider all triples $(z, y', y'')$ which can define maximal $\alpha$-gapped non-PR repeats and, for each such triple, to check if it actually defines one, which is then computed and output. All the triples $(z, y', y'')$ are considered in a natural way: for each aligned occurrence~$z$ of a superbasic factor and each occurrence $y'$ of a basic factor associated with~$z$, we consider all aligned occurrences $y''$ of the same basic factor in the segment ${\cal I}(z)$. \subsubsection*{(ii) Naming basic factors on a suffix tree and computing their associated occurrences} We now describe how this computation is implemented. First we construct a suffix tree for the input string $w$. The suffix tree is a classical data structure of size $O(n)$ which can be constructed in $O(n)$ time for a word over a constant-size alphabet, see e.g. \cite{Gusfield97}.
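Purely as a testing fallback for small inputs, the queries $LCP(i,j)$ and $LCS(i,j)$ defined above can also be computed naively as in the following sketch (our own code, with the same $1$-based conventions as in the text; each call then costs linear time instead of the constant time assumed by the algorithm).
\begin{verbatim}
# Naive LCP/LCS queries following the definitions in the text (1-based i, j).
# Testing fallback only: the algorithm assumes O(1)-time queries after an
# O(n)-time suffix-tree preprocessing.
def lcp(w, i, j):
    n, t = len(w), 0
    while i + t <= n and j + t <= n and w[i + t - 1] == w[j + t - 1]:
        t += 1
    return t

def lcs(w, i, j):
    t = 0
    while i - t >= 1 and j - t >= 1 and w[i - t - 1] == w[j - t - 1]:
        t += 1
    return t
\end{verbatim}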
Using the suffix tree, we can make in $O(n)$-time preprocessing which allows to retrieve $LCP(i, j)$ for any $i, j$ in constant time, see e.g. \cite{Gusfield97}. Similarly, we precompute~$w$ to support $LCS(i, j)$ for any $i, j$ in constant time. Then we compute all basic factors of~$w$. This computation is performed by naming all the basic factors, i.e. assigning to each aligned occurrence of a basic factor a name of this factor. The most convenient way to name basic factors is to assign to a basic factor~$y$ of length~$2^l$ a pair $(l, i)$, where $i$ is the start position of the leftmost aligned occurrence of~$y$ in~$w$. Note that since we have only $n/\Delta$ distinct start positions~$i$, the size of the two-dimensional array required for working with these pairs is $O(n)$. To perform the required computation, we first mark in the suffix tree each node labeled by a basic factor by the name of this factor (in the case when this node is implicit we make it explicit). To this end, for each node~$v$ of the suffix tree we compute the value ${ \mathit{minleaf}} (v)$ which is the smallest leaf number divisible by $\Delta$ in the subtree rooted in~$v$ if such a number exists. This can be easily done in $O(n)$ time by a bottom-up traversal of the tree. Then, each suffix tree edge $(u, v)$ such that the string depth of~$u$ is less than $2^l$, the string depth of~$v$ is not less than $2^l$, and ${ \mathit{minleaf}} (v)$ is defined is treated in the following way: if the string depth of~$v$ is $2^l$, node $v$ is marked by name $(l, { \mathit{minleaf}} (v))$, otherwise a new node of string depth $2^l$ is created within edge $(u, v)$ and marked by name $(l, { \mathit{minleaf}} (v))$. The obtained tree will be called {\it marked suffix tree}. Since we have $O(n)$ distinct basic factors, the marked suffix tree contains no more than $O(n)$ additionally inserted nodes. Thus, this tree has $O(n)$ size and is constructed in $O(n)$ time. To assign to each aligned occurrence $w[i \mathinner{\ldotp\ldotp} i+2^l-1]$ of a basic factor the name of this factor, we perform a depth-first top-down traversal of the marked suffix tree. During the traversal we maintain an auxiliary array ${ basancestor}$: at the first visit of a node marked by a name $(l, m)$ we set ${ basancestor} [l]$ to $m$, and at the second visit of this node we reset ${ basancestor} [l]$ to undefined. While during the traversal we get to a leaf $i$ divisible by $\Delta$, for each $l=0, 1,\ldots , k-1$ we identify $w[i \mathinner{\ldotp\ldotp} i+2^l-1]$ as an occurrence of the basic factor named by $(l, { basancestor}[l])$. Note that this traversal is performed in $O(n)$ time. Then, we compute all occurrences of basic factors associated with aligned occurrences of superbasic factors. This is done again by a depth-first top-down traversal of the marked suffix tree. During the traversal, we maintain the same auxiliary array ${ basancestor}$. Assume that during the traversal we get to a leaf labelled by a position $q2^p\Delta +1+\tau$, where $q$ is odd and $0\le\tau <\Delta$. Then for each $l=0, 1,\ldots , p-1$ such that ${ basancestor} [l]$ is defined, we identify $w[q2^p\Delta +1+\tau \mathinner{\ldotp\ldotp} (q2^p+2^l)\Delta +\tau]$ as an occurence of the basic factor named $(l, { basancestor} [l])$, which is $\tau$-associated with the superbasic factor occurrence $w[q2^p\Delta +1 \mathinner{\ldotp\ldotp} (q2^p+2^{l+1})\Delta ]$. Observe that this traversal is performed in $O(n)$ time as well. 
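To visualize what the names computed in this subsection represent, the following small sketch (our own code, using a plain dictionary instead of the marked suffix tree and therefore quadratic rather than linear, with $0$-based positions) assigns to every aligned occurrence of a basic factor the name $(l,\textrm{start of the leftmost aligned occurrence})$.
\begin{verbatim}
# Illustrative (quadratic) naming of basic factors: every aligned occurrence
# of a basic factor of length 2^l * Delta receives the name
# (l, leftmost aligned start).  A dictionary stands in for the marked suffix
# tree of the text; positions are 0-based.
def name_basic_factors(w, delta):
    n = len(w)
    blocks = n // delta
    names = {}                                  # (l, block index) -> name
    l, size = 0, delta
    while size <= n // 2:                       # lengths 2^l * Delta, l <= k-1
        first_seen = {}
        for i in range(blocks - size // delta + 1):
            start = i * delta                   # aligned occurrence w[start:start+size]
            factor = w[start:start + size]
            first_seen.setdefault(factor, start)
            names[(l, i)] = (l, first_seen[factor])
        l, size = l + 1, 2 * size
    return names
\end{verbatim}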
\subsubsection*{(iii) Computing lists of aligned occurrences of basic factors} Let $y$ be a $\Delta$-periodic basic factor (cf. Introduction). Note that $y$ is also periodic, so any occurrence of~$y$ in~$w$ is a repetition. By Proposition~\ref{divminper}, the period~$\mathit{per}(y)$ is a divisor of $p_{\Delta}(y)$. Given the value $p_{\Delta}(y)$, we can compute in constant time the extension~$r$ of any occurrence~$y'$ of a $\Delta$-periodic basic factor~$y$ as follows: \begin{eqnarray*} \mathit{beg} (r)&=&\mathit{beg} (y')-LCS(\mathit{beg} (y')-1, \mathit{beg} (y')+p_{\Delta}(y)-1),\\ \mathit{end} (r)&=&\mathit{end} (y')+LCP(\mathit{end} (y')+1, \mathit{end} (y')-p_{\Delta}(y)+1). \end{eqnarray*} Using Proposition~\ref{minDeltaper}, it is easy to show that the set of all aligned occurrences of~$y$ having a given extension is a sequence of occurrences in which the difference between the start positions of any two consecutive occurrences is equal to $p_{\Delta}(y)$, i.e. the start positions of all these occurrences form a finite arithmetic progression with common difference $p_{\Delta}(y)$. We will call these sets {\it runs of occurrences}. The following fact can be easily proved. \begin{proposition} Let $y'$, $y''$ be two consecutive aligned occurrences of a basic factor~$y$ in~$w$. Then $|\mathit{beg} (y')-\mathit{beg} (y'')|\le |y|/2$ if and only if $y$ is $\Delta$-periodic, $y'$ and $y''$ are contained in the same run of occurrences, and, moreover, $|\mathit{beg} (y')-\mathit{beg} (y'')|=p_{\Delta}(y)$. \label{occurrun} \end{proposition} At the next step of the algorithm, in order to effectively select appropriate occurrences~$y''$ in the checked triples $(z, y', y'')$, for each basic factor~$y$ we construct a linked list ${\mathit{alignocc}} (y)$ of all aligned occurrences of~$y$ in left-to-right order in~$w$. If $y$ is not $\Delta$-periodic, each item of ${\mathit{alignocc}} (y)$ consists of only one aligned occurrence of~$y$ defined, for example, by its start position (we will call such items {\it ordinary}). If $y$ is $\Delta$-periodic, each item of ${\mathit{alignocc}} (y)$ contains a run of aligned occurrences of~$y$. If a run of aligned occurrences of~$y$ consists of only one occurrence, we will consider the item of ${\mathit{alignocc}} (y)$ for this run as ordinary; otherwise, if a run of aligned occurrences of~$y$ consists of at least two occurrences, the item of ${\mathit{alignocc}} (y)$ for this run will be defined, for example, by the start positions of the leftmost and rightmost occurrences in the run and the value $p_{\Delta}(y)$ (such an item will be called a {\it runitem}). The following fact follows from Proposition~\ref{occurrun}. \begin{proposition} Let $y'$, $y''$ be two consecutive aligned occurrences of a basic factor~$y$ in~$w$. Then $|\mathit{beg} (y')-\mathit{beg} (y'')|\le |y|/2$ if and only if $y'$ and $y''$ are contained in the same runitem of ${\mathit{alignocc}} (y)$ and, moreover, $|\mathit{beg} (y')-\mathit{beg} (y'')|=p_{\Delta}(y)$. \label{occuritem} \end{proposition} Proposition~\ref{occuritem} implies that if two aligned occurrences $y'$, $y''$ of a basic factor~$y$ are contained in distinct items of ${\mathit{alignocc}} (y)$ then $|\mathit{beg} (y')-\mathit{beg} (y'')|>|y|/2$. Therefore, we have the following consequence of the proposition. \begin{corollary} Let $y$ be a basic factor of~$w$.
Then for any segment~$v$ in~$w$, the list ${\mathit{alignocc}} (y)$ contains $O(|v|/|y|)$ items having at least one occurrence of~$y$ contained in~$v$. \label{itemsnumber} \end{corollary} To construct the lists ${\mathit{alignocc}}$, for each $i=1, 2, \ldots, n$ and each $l=0, 1,\ldots , k-1$, we insert consecutively the occurrence $y'\equiv w[i \mathinner{\ldotp\ldotp} i+2^l-1]$ of some basic factor~$y$ to the appropriate list ${\mathit{alignocc}} (y)$ as follows. Consider the last item in the current list ${\mathit{alignocc}} (y)$. Let it be an ordinary item consisting of an occurrence $y''$ of~$y$ starting at position~$j$. Denote $\delta =i-j$. Consider the following two cases for~$\delta$. Let $\delta >|y|/2$. Then, by Proposition~\ref{occuritem}, $y''$ and $y'$ are contained in distinct items of ${\mathit{alignocc}} (y)$, and in this case we insert $y'$ to ${\mathit{alignocc}} (y)$ as a new ordinary item. Now let $\delta\le|y|/2$. In this case, by Proposition~\ref{occuritem}, $y''$ and $y'$ are the first two occurrences of the same run of occurrences of~$y$ and, moreover, $\delta =p_{\Delta}(y)$. Let $r$ be the extension of the occurrences of this run. It is easy to see that $$ \mathit{end} (r) = \mathit{end} (y') + LCP(\mathit{end} (y'') +1, \mathit{end} (y') +1), $$ i.e. $\mathit{end} (r)$ can be computed in constant time. From the values ${\rm beg} (y'')$, $\mathit{end} (r)$ and $p_{\Delta}(y)$ we can compute in constant time the start position of the last occurrence of~$y$ in the considered run of occurrences and thereby identify completely this run. Thus, in this case we replace the last item of ${\mathit{alignocc}} (y)$ by the identified run of occurrences of~$y$. Now let the last item in ${\mathit{alignocc}} (y)$ be a run of occurrences. Then, if $y'$ is not contained in this run, we insert $y'$ to ${\mathit{alignocc}} (y)$ as a new ordinary item. Thus, each occurrence of a basic factor in~$w$ is processed in constant time, and the total time for construction of lists ${\mathit{alignocc}}$ is $O(n)$. Furthermore, in order to optimize the selection of appropriate occurrences $y''$ in the checked triples $(z, y', y'')$, for each pair $(z, y')$ where $z$ is an aligned occurrence of a superbasic factor and $y'$ is an occurrence of some basic factor $y$ associated with~$z$, we compute a pointer ${\mathit{firstocc}} (z, y')$ to the first item in ${\mathit{alignocc}} (y)$ containing at least one occurrence of~$y$ to the right of~$z$. For these purposes, we use auxiliary lists ${\mathit{factends}} (i)$ defined for each position~$i$ in~$w$. Lists ${\mathit{factends}} (i)$ consist of pairs $(z, y')$ and are constructed at the stage computation of occurrences associated with aligned occurrences of superbasic factors: each time we find a new occurrence~$y'$ associated with an aligned occurrence~$z$ of a superbasic factor, we insert the pair $(z, y')$ into the list ${\mathit{factends}} ({\rm end} (z)+1)$. After construction of lists ${\mathit{alignocc}}$, we compute consecutively for each $i=1, 2,\ldots , n$ pointers ${\mathit{firstocc}} (z, y')$ for all pairs $(z, y')$ from the list ${\mathit{factends}} (i)$. During the computation, we save in each list ${\mathit{alignocc}} (y)$ the last item pointed before (this item is denoted by ${lastpnt} (y)$). 
To compute ${\mathit{firstocc}} (z, y')$, we go through the list ${\mathit{alignocc}} (y)$ from ${lastpnt} (y)$ (or from the beginning of ${\mathit{alignocc}} (y)$ if ${lastpnt} (y)$ does not exist) until we find the first item containing at least one occurrence of~$y$ to the right of the position~$i$. The found item is pointed by ${\mathit{firstocc}} (z, y')$ and becomes a new item ${lastpnt} (y)$. Since the total size of lists ${\mathit{alignocc}}$ and ${\mathit{factends}}$ is $O(n)$, the total time of computing ${\mathit{firstocc}} (z, y')$ is also $O(n)$. \subsubsection*{(iv) Main step: computing large repeats} At the main stage of the algorithm, in order to process each pair $(z, y')$, note that all appropriate for $(z, y')$ occurrences~$y''$ contained in ${\cal I}(z)$ are located in the fragment of ${\mathit{alignocc}} (y)$ consisting of all items having at least one occurrence of~$y$ contained in~${\cal I}(z)$. We will call this fragment {\it checked fragment}. Thus, we consider all items of the checked fragment by going through this fragment from the first item which can be found in constant time by the value ${\mathit{firstocc}} (z, y')$. For each considered item, we check triples $(z, y', y'')$ for all occurrences $y''$ from this item as follows. Let the considered item be an ordinary item consisting of only one occurrence~$y''$. Recall that gapped repeat $(u', u'')$ defined by the triple $(z, y', y'')$ can be computed in constant time by formulas~(\ref{compgenrep}). Thus, if $(u', u'')$ is an $\alpha$-gapped repeat satisfying conditions~(\ref{leftcond}), we output it. Now let the item considered in the checked fragment be a runitem. This implies that basic factor~$y$ is $\Delta$-periodic, i.e $y$ is $\Delta$-periodic. Moreover, from the runitem we can derive the value $p_{\Delta}(y)$. Therefore we can compute in constant time extensions $r'$ and~$r''$ of occurrences $y'$ and~$y''$ respectively. Denote by~$\rho$ the run of occurrences contained in the runitem. Recall that our goal is to compute effectively all $\alpha$-gapped repeats defined by triples $(z, y', y'')$ such that $y''\in\rho$. Note that, if $r'$ and~$r''$ are the same repetition, then by Proposition~\ref{samext} all such repeats are PR-repeats, therefore we can assume that $r'$ and~$r''$ are distinct repetitions. Let $(u', u'')$ be an $\alpha$-gapped repeat defined by a triple $(z, y', y'')$ where $y''\in\rho$. First, consider the case when $u'$ is not contained in $r'$, i.e. either ${\rm beg} (u')<\mathit{beg} (r')$ or $\mathit{end} (u')>\mathit{end} (r')$. \mathit{beg}in{proposition} If $\mathit{beg} (u')<\mathit{beg} (r')$, then $\mathit{beg} (r')-\mathit{beg} (u')=\mathit{beg} (r'')-\mathit{beg} (u'')$. \label{begdif} \end{proposition} \mathit{beg}in{proof} Define $\gamma'=\mathit{beg} (r')-\mathit{beg} (u')$, $\gamma''=\mathit{beg} (r'')-\mathit{beg} (u'')$. Let $\gamma' >\gamma''$. Then $u'[\gamma' +\mathit{}{per}(y)]\neq u'[\gamma']=u''[\gamma']=u''[\gamma' +\mathit{}{per}(y)]$, i.e. we have a contradiction $u'[\gamma' +\mathit{}{per}(y)]\neq u''[\gamma' +\mathit{}{per}(y)]$. Similarly, we obtain a contradiction $u'[\gamma'' +\mathit{}{per}(y)]\neq u''[\gamma'' +\mathit{}{per}(y)]$ in the case $\gamma' <\gamma''$. \end{proof} The following proposition can be proved analogously. \mathit{beg}in{proposition} If $\mathit{end} (u')>\mathit{end} (r')$, then $\mathit{end} (u')-\mathit{end} (r')=\mathit{end} (u'')-\mathit{end} (r'')$. 
\label{enddif} \end{proposition} Define \mathit{beg}in{eqnarray*} s_\mathit{left}&=&\mathit{beg} (y')+(\mathit{beg} (r'')-\mathit{beg} (r')),\\ s_\mathit{right}&=&\mathit{beg} (y')+(\mathit{end} (r'')-\mathit{end} (r')). \end{eqnarray*} From Propositions~\ref{begdif} and~\ref{enddif}, we derive the following fact. \mathit{beg}in{corollary} If $\mathit{beg} (u')<\mathit{beg} (r')$ then $\mathit{beg} (y'')=s_\mathit{left}$. If $\mathit{end} (u')>\mathit{end} (r')$ then $\mathit{beg} (y'')=s_\mathit{right}$. \end{corollary} Thus, for computing $\alpha$-gapped repeats $(u', u'')$ such that $u'$ is not contained in $r'$, it is enough to consider in $\rho$ only occurrences $y''_\mathit{left}$ and $y''_\mathit{right}$ with start positions $s_\mathit{left}$ and $s_\mathit{right}$ respectively, provided that these occurrences exist. We check the occurrences $y''_\mathit{left}$ and $y''_\mathit{right}$ in the same way as we did for occurrence $y''$ in the case of ordinary item. Then, it remains to check all occurrences from $\rho$ except for possible occurrences $y''_\mathit{left}$ and $y''_\mathit{right}$. Denote by $\rho'=\rho\setminus\{y''_\mathit{left}, y''_\mathit{right}\}$ the set of all such occurrences. Assume that $|r'|\le |r''|$, i.e. $s_\mathit{left}\le s_\mathit{right}$ (the case $|r'|>|r''|$ is similar). In order to check all occurrences from $\rho'$, we consider the following subsets of $\rho'$ separately: subset $\rho'_1$ of all occurrences $y''$ such that $\mathit{beg} (y'')<s_\mathit{left}$, subset $\rho'_2$ of all occurrences $y''$ such that $s_\mathit{left}<\mathit{beg} (y'')<s_\mathit{right}$, and subset $\rho'_3$ of all occurrences $y''$ such that $s_\mathit{right}<\mathit{beg} (y'')$. Note that start positions of all occurrences in each of these subsets form a finite arithmetic progression with common difference $p_{\Delta}(y)$. Thus, we unambiguously denote all occurrences in each of the subsets $\rho'_i$, $i=1, 2, 3$, by $y''_0, y''_1,\ldots , y''_k$ where $y''_0$ is the leftmost occurrence in the subset $\rho'_i$ and $\mathit{beg} (y''_j)= {\mathit{beg}} (y''_0)+jp_{\Delta}(y)$ for $j=1,\ldots , k$. Note that values ${\mathit{beg}} (y''_0)$ and~$k$ for each subset $\rho'_i$ can be computed in constant time. First, consider an occurrence~$y''_j$ from $\rho'_1$. Let $\pi{}\equiv (u', u'')$ be the repeat defined by triple $(z, y', y''_j)$. Note that \mathit{beg}in{equation} \mathit{}{per}(\pi{} )=\mathit{beg} (y''_j)-\mathit{beg} (y')=q+jp_{\Delta}(y), \label{eqvforp} \end{equation} where $q=\mathit{beg} (y''_0)-\mathit{beg} (y')$. Taking into account that $y'$ and $y''_j$ are contained in maximal repetitions $r'$ and $r''$ respectively, it is easy to verify that $$ \mathit{beg}in{array}{c} LCS(\mathit{beg} (y')-1, \mathit{beg} (y''_j)-1)=\mathit{beg} (y''_j)-\mathit{beg} (r''),\\ LCP(\mathit{end} (y')+1, \mathit{end} (y''_j)+1)=\mathit{end} (r')-\mathit{end} (y'). \end{array} $$ Therefore, $\mathit{beg} (u')=\mathit{beg} (r'')-\mathit{}{per}(\pi{} )=q'-jp_{\Delta}(y)$, where $q'=\mathit{beg} (r'')-q$, and $\mathit{end} (u')=\mathit{end} (r')$. It follows that $$ c(\pi{} )=|u'|=\mathit{end} (u')-\mathit{beg} (u')+1=q''+jp_{\Delta}(y), $$ where $q''=\mathit{end} (r')+1-q'$. Recall that for any $\alpha$-gapped repeat $\pi{}$, we have $c(\pi{})<\mathit{}{per}(\pi{})\le\alpha c(\pi{})$. Thus, $\pi{}$ is an $\alpha$-gapped repeat if and only if \mathit{beg}in{equation} q''<q\le\alpha q''+(\alpha -1)jp_{\Delta}(y). 
\label{alphacond1} \end{equation} Moreover, $u'$ has to satisfy conditions~(\ref{leftcond}). Thus, the triple $(z, y', y''_j)$ defines an $\alpha$-gapped repeat if and only if conditions (\ref{alphacond1}) and~(\ref{leftcond}) are verified for~$j$. Note that all these conditions are linear inequalities in~$j$, and hence can be resolved in constant time. Thus, we output all $\alpha$-gapped repeats defined by triples $(z, y', y'')$ such that $y''\in\rho'_1$ in time $O(1+S)$, where $S$ is the size of the output. Now consider an occurrence~$y''_j$ from $\rho'_2$. Let $\pi{}\equiv (u', u'')$ be the repeat defined by the triple $(z, y', y''_j)$. Note that in this case, $\mathit{per}(\pi{})$ also satisfies relation~(\ref{eqvforp}). Analogously to the previous case of the set $\rho'_1$, we obtain that $\mathit{beg} (u')=\mathit{beg} (r')$ and $\mathit{end} (u')=\mathit{end} (r')$, and then $c(\pi{})=|r'|$. Therefore, $\pi{}$ is an $\alpha$-gapped repeat if and only if \begin{equation} |r'|<q+jp_{\Delta}(y)\le \alpha |r'|. \label{alphacond2} \end{equation} Thus, in this case, we output all $\alpha$-gapped repeats defined by triples $(z, y', y''_j)$ such that $j$ satisfies conditions (\ref{alphacond2}) and~(\ref{leftcond}). Since all these conditions can be resolved for~$j$ in constant time, all these repeats can be output in time $O(1+S)$ where $S$ is the size of the output. Finally, consider an occurrence~$y''_j$ from $\rho'_3$. Let $\pi{}\equiv (u', u'')$ be the repeat defined by the triple $(z, y', y''_j)$. In this case, $\mathit{per}(\pi{})$ also satisfies relation~(\ref{eqvforp}). Analogously to the case of the set $\rho'_1$, we obtain that $\mathit{beg} (u')=\mathit{beg} (r')$ and $\mathit{end} (u')=\mathit{end} (r'')-\mathit{per}(\pi{} )= \widehat q'-jp_{\Delta}(y)$, where $\widehat q'=\mathit{end} (r'')-q$, and then $$ c(\pi{})=\mathit{end} (u')-\mathit{beg} (u')+1=\widehat q''-jp_{\Delta}(y), $$ where $\widehat q''=\widehat q'-\mathit{beg} (r')+1$. Therefore, $\pi{}$ is an $\alpha$-gapped repeat if and only if \begin{equation} \widehat q''-jp_{\Delta}(y)<q+jp_{\Delta}(y)\le \alpha (\widehat q''-jp_{\Delta}(y)). \label{alphacond3} \end{equation} Thus, in this case, we output all $\alpha$-gapped repeats defined by triples $(z, y', y''_j)$ such that $j$ satisfies conditions (\ref{alphacond3}) and~(\ref{leftcond}). As in the previous cases, this can be done in time $O(1+S)$, where $S$ is the size of the output. Putting together all the considered cases, we conclude that all $\alpha$-gapped repeats defined by triples $(z, y', y'')$ such that $y''\in\rho$ can be computed in time $O(1+S)$ where $S$ is the size of the output. Thus, in $O(1+S)$ time we can process each item of the checked fragment. Therefore, since by Corollary~\ref{itemsnumber} the checked fragment has $O(\alpha)$ items, the total time for processing the pair $(z, y')$ is $O(\alpha +S)$ where $S$ is the total number of $\alpha$-gapped repeats defined by triples $(z, y', y'')$. Since each occurrence~$z$ has no more than $\Delta$ associated occurrences $y'$, the total number of processed pairs $(z, y')$ is $O(n)$. Thus the time complexity of the main stage of the algorithm is $O(\alpha n +S)$, where $S$ is the size of the output. Taking into account that $S=O(\alpha n)$ by Theorem~\ref{onPk}, we conclude that the time complexity of the main stage is $O(\alpha n)$. Thus, all maximal $\alpha$-gapped non-PR repeats $\pi{}$ in~$w$ such that $c(\pi{})\ge\log n$ can be computed in $O(\alpha n)$ time.
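The computational core of the runitem case above is that both $\mathit{per}(\pi{})$ and $c(\pi{})$ are affine functions of the progression index~$j$, so that conditions such as (\ref{alphacond1})--(\ref{alphacond3}) reduce to linear inequalities in~$j$ which are resolved in constant time. The following schematic helper (our own sketch; the coefficients are placeholders for the quantities $q$, $q''$, $p_{\Delta}(y)$, etc.\ of the text) shows this resolution for the test $c(\pi{})<\mathit{per}(\pi{})\le\alpha\, c(\pi{})$; for instance, in the case of $\rho'_1$ one would take $\mathit{per}(\pi{})=q+jp_{\Delta}(y)$ and $c(\pi{})=q''+jp_{\Delta}(y)$.
\begin{verbatim}
# Resolve in O(1) the indices j in [0, j_max] with c(j) < per(j) <= alpha*c(j),
# where per(j) = a_per + b_per*j and c(j) = a_c + b_c*j.  The remaining window
# constraints on u' are linear in j as well and are handled in the same way.
import math

def alpha_gapped_indices(a_per, b_per, a_c, b_c, j_max, alpha):
    lo, hi = 0, j_max
    # c(j) < per(j)   <=>   (b_c - b_per)*j < a_per - a_c
    d, r = b_c - b_per, a_per - a_c
    if d > 0:
        hi = min(hi, math.ceil(r / d) - 1)
    elif d < 0:
        lo = max(lo, math.floor(r / d) + 1)
    elif r <= 0:
        return range(0)
    # per(j) <= alpha*c(j)   <=>   (b_per - alpha*b_c)*j <= alpha*a_c - a_per
    d, r = b_per - alpha * b_c, alpha * a_c - a_per
    if d > 0:
        hi = min(hi, math.floor(r / d))
    elif d < 0:
        lo = max(lo, math.ceil(r / d))
    elif r < 0:
        return range(0)
    return range(lo, hi + 1)
\end{verbatim}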
\subsubsection*{(v) Computing small repeats} To compute all remaining maximal $\alpha$-gapped non-PR repeats in~$w$, note that the length of any such repeat~$\pi{}$ is not greater than $$ (1+\alpha)c(\pi{})<(1+\log n)\log n<2\log^2 n. $$ Thus, setting $\Delta'=\lfloor 2\log^2 n\rfloor$, any such repeat is contained in at least one of segments ${\cal I}'_i\equiv w[i\Delta'+1 \mathinner{\ldotp\ldotp} (i+2)\Delta']$ for $0\le i< n/\Delta'$. Therefore, all the remaining $\alpha$-gapped repeats can be found by searching separately in segments ${\cal I}'_i$. The procedure of searching for repeats in ${\cal I}'_i$ is similar to the algorithm described above. If $\alpha\ge \log\log n$, searching for repeats in ${\cal I}'_i$ can be done by the algorithm proposed in~\cite{Brodal00}. The $O(|{\cal I}'_i|\log |{\cal I}'_i| +S)$ time complexity implied by this algorithm, where by Theorem~\ref{onPk} the output size $S$ is $O(\alpha |{\cal I}'_i|)$, can be bounded here by $O(\alpha \Delta')$. Thus, the total time complexity of the search in all segments ${\cal I}'_i$ is $O(\alpha n)$. In the case of $\alpha <\log\log n$, we search in each segment ${\cal I}'_i$ for all remaining maximal $\alpha$-gapped non-PR repeats $\pi{}$ in~$w$ such that $c(\pi{} )\ge \log |{\cal I}'_i|$ in time $O(\alpha \Delta')$, in the same way as we described above for the word~$w$. The total time of the search in all segments ${\cal I}'_i$ is $O(\alpha n)$. Then, it remains to compute all maximal $\alpha$-gapped non-PR repeats~$\pi{}$ in~$w$ such that $c(\pi{})<\log |{\cal I}'_i|\le 3\log\log n$. Note that the length of any such repeat is not greater than $$ (1+\alpha)3\log\log n<(1+\log\log n)3\log\log n\le 6\log^2\log n. $$ Thus, setting $\Delta''=\lfloor 6\log^2\log n\rfloor$, any such repeat is contained in at least one of the segments ${\cal I}''_i\equiv w[i\Delta''+1 \mathinner{\ldotp\ldotp} (i+2)\Delta'']$ for $0\le i< n/\Delta''$. Note that these segments are words of length $2\Delta''$ over an alphabet of size~$\sigma$, therefore the total number of distinct segments ${\cal I}''_i$ is not greater than $\sigma^{2\Delta''}\le\sigma^{12\log^2\log n}$. In each of the distinct segments ${\cal I}''_i$, all maximal $\alpha$-gapped repeats can be found by the trivial algorithm described above in $O({\Delta''}^2)=O(\log^4\log n)$ time. Thus, maximal $\alpha$-gapped repeats in all distinct segments ${\cal I}''_i$ can be found in $O(\sigma^{12\log^2\log n}\log^4\log n)=o(n)$ time. We conclude that all remaining maximal $\alpha$-gapped repeats in~$w$ can be found in $O(n+S)$ time where $S$ is the total number of maximal $\alpha$-gapped repeats contained in all segments ${\cal I}''_i$. According to Theorem~\ref{onPk}, this number can be bounded by $O(\alpha n)$, and the time for finding all the remaining maximal $\alpha$-gapped repeats can be bounded by $O(\alpha n)$ as well. This leads to the final result. \mathit{beg}in{theorem} For a fixed~$\alpha>1$, all maximal $\alpha$-gapped repeats in a word of length~$n$ over a constant alphabet can be found in $O(\alpha n)$ time. \label{algteorem} \end{theorem} Note that since, as mentioned earlier, a word can contain $\Theta (\alpha n)$ maximal $\alpha$-gapped repeats, the $O(\alpha n)$ time bound stated in Theorem~\ref{algteorem} is asymptotically optimal. \section{Conclusions} Besides gapped repeats we can also consider gapped palindromes which are factors of the form $uvu^R$ where $u$ and $v$ are nonempty words and $u^R$ is the reversal of~$u$ \cite{KK09}. 
A gapped palindrome $uvu^R$ in a word~$w$ is called {\it maximal} if $w[\mathit{end} (u)+1]\neq w[\mathit{beg} (u^R)-1]$ and $w[\mathit{beg} (u)-1]\neq w[\mathit{end} (u^R)+1]$, where the latter inequality is required only when $\mathit{beg} (u)>1$ and $\mathit{end} (u^R)<|w|$. A maximal gapped palindrome $uvu^R$ is $\alpha$-gapped if $|u|+|v|\le\alpha |u|$ \cite{GabrMan}. It can be shown analogously to the results of this paper that for $\alpha >1$ the number of maximal $\alpha$-gapped palindromes in a word of length~$n$ is bounded by $O(\alpha n)$ and, for the case of a constant-size alphabet, all these palindromes can be found in $O(\alpha n)$ time\footnote{Note that in \cite{GabrMan}, the number of maximal $\alpha$-gapped palindromes was conjectured to be $O(\alpha^2 n)$.}. In this paper we consider maximal $\alpha$-gapped repeats with $\alpha >1$. However, this notion can be formally generalized to the case of $\alpha\le 1$. In particular, maximal $1$-gapped repeats are maximal repeats whose copies are adjacent or overlapping. It is easy to see that such repeats form runs whose minimal periods are divisors of the periods of these repeats. Moreover, each run in a word is formed by at least one maximal $1$-gapped repeat, therefore the number of runs in a word is not greater than the number of maximal $1$-gapped repeats. More precisely, each run $r$ is formed by $\lfloor \mathit{exp}(r)/2\rfloor$ distinct maximal $1$-gapped repeats. Thus, if a word contains runs with exponent greater than or equal to~4 then the number of maximal $1$-gapped repeats is strictly greater than the number of runs. However, using an easy modification of the proof of the ``runs conjecture'' from~\cite{RunsTheor}, it can also be proved that the number of maximal $1$-gapped repeats in a word is strictly less than the length of the word. Moreover, denoting by ${\cal R}(n)$ (respectively, ${\cal R}_1(n)$) the maximal possible number of runs (respectively, the maximal possible number of maximal $1$-gapped repeats) in words of length~$n$, we conjecture that ${\cal R}(n)={\cal R}_1(n)$, since known words with a relatively large number of runs have no runs with big exponents. We can also consider the case of $\alpha <1$ for repeats with overlapping copies, in particular, the case of maximal $1/k$-gapped repeats where $k$ is an integer greater than~1. It is easy to see that such repeats form runs with exponents greater than or equal to $k+1$. It is known from~\cite[Theorem~11]{RunsTheor} that the number of such runs in a word of length~$n$ is less than $n/k$, and it seems possible to modify the proof of this fact to show that the number of maximal $1/k$-gapped repeats in the word is also less than $n/k=\alpha n$. These observations, together with the results of computer experiments for the case of $\alpha >1$, lead to the conjecture that for any $\alpha >0$, the number of maximal $\alpha$-gapped repeats in a word of length~$n$ is actually less than $\alpha n$. This generalization of the ``runs conjecture'' constitutes an interesting open problem. Another interesting open question is whether the obtained $O(n/\delta)$ bound on the number of maximal $\delta$-subrepetitions is asymptotically tight for the case of a constant-size alphabet. \paragraph{Acknowledgments.} This work was partially supported by the Russian Foundation for Fundamental Research (Grant 15-07-03102). \end{document}
\begin{document} \title{Graph Sanitation with Application to Node Classification} \author{Zhe Xu} \email{[email protected]} \affiliation{ \institution{University of Illinois at Urbana-Champaign} } \author{Boxin Du} \email{[email protected]} \affiliation{ \institution{University of Illinois at Urbana-Champaign} } \author{Hanghang Tong} \email{[email protected]} \affiliation{ \institution{University of Illinois at Urbana-Champaign} } \input{Files/00Abstract.tex} \maketitle \input{Files/01Introduction.tex} \input{Files/02ProblemDefinition.tex} \input{Files/03Methods.tex} \input{Files/05Experiment.tex} \input{Files/06RelatedWork.tex} \input{Files/07Conclusion.tex} \appendix \input{Files/09Appendix.tex} \end{document}
\begin{document} \maketitle \begin{abstract} Given a rational map of the Riemann sphere and a subset $A$ of its Julia set, we study the $A$-exceptional set, that is, the set of points whose orbit does not accumulate at $A$. We prove that if the topological entropy of $A$ is less than the topological entropy of the full system then the $A$-exceptional set has full topological entropy. Furthermore, if the Hausdorff dimension of $A$ is smaller than the dynamical dimension of the system then the Hausdorff dimension of the $A$-exceptional set is larger than or equal to the dynamical dimension, with equality in the particular case when the dynamical dimension and the Hausdorff dimension coincide. We discuss also the case of a general conformal $C^{1+\alpha}$ dynamical system and, in particular, certain multimodal interval maps on their Julia set. \end{abstract} \Rightarrowction{Introduction} Consider a compact metric space $(X,d)$ and a continuous transformation $f\colon X \to X$. Let $W\subset X$ be $f$-invariant, that is, $f(W) = W$. Given $A\subset W$, the \emph{$A$-exceptional set} in $W$ (with respect to $f|_W$) is defined to be the set \[ E^+_{f|W}(A) := \{x\in W\colon \overline{\mathcal{O}_f(x)}\cap A = \emptyset\}, \] where $\mathcal{O}_f(x):= \{f^k(x)\colon k\in \mathbb{N} \cup \{0\}\}$ denotes the forward orbit of $x$ by $f$. In other words, $E^+_{f|W}(A)$ is the set of points in $W$ whose forward orbit does not accumulate at $A$. In this paper we study the ``size'' of exceptional sets in terms of their topological entropy and their Hausdorff dimension. We will consider as dynamical systems rational functions of the Riemann sphere which include those with parabolic points and critical points. The following is our first main result stated in terms of topological entropy (we recall its definition in Section~\ref{Ent}). \begin{teorema}\label{teoentropy} Let $f\colon \overline{\mathbb{C}} \to \overline{\mathbb{C}}$ be a rational function of degree $d\ge2$ on the Riemann sphere and let $J=J(f)$ be its Julia set. If $A\subset J$ satisfies $h(f|_J,A) < h(f|_J)=\log d$, then $$ h( f|_J,E^+_{f|J}(A)) = h(f|_J)=\log d. $$ \end{teorema} The above result will be a consequence of a corresponding statement for entropy of a continuous shift-equivalent transformation (see Proposition~\ref{proph}). The second result in terms of the Hausdorff dimension $\dim_{\rm H}$ uses canonical concepts which we briefly recall (see Section~\ref{HD} for more details). Given a $f$-invariant probability measure $\mu$, the {\it Hausdorff dimension of $\mu$} is defined by $$ \dim_{\rm H}\mu := \inf \{\dim_{\rm H}Y\colon Y\subset X \text{ and } \mu(Y) = 1\}. $$ The {\it dynamical dimension} of $f$ is defined by \begin{equation}\label{def:DD} \mathbb{D}D(f|_X) := \sup_\mu\dim_{\rm H}\mu, \end{equation} where the supremum is taken over all ergodic measures $\mu$ with positive entropy. We will consider only maps where such measures do exist and where hence $\mathbb{D}D$ is well defined. Note that clearly we have $\mathbb{D}D(f|_X)\le \dim_{\rm H}X$. The following relation was established in the context of a general rational function $f$ of degree $\ge2$ of the Riemann sphere and $X=J(f)$ its Julia set (see~\cite[Chapter 12.3]{PU}) and will be fundamental for our approach. 
We have \begin{equation}\label{eq:dyndimdims} \mathbb{D}D(f|_{J(f)}) = \hD(f|_{J(f)}), \quad\text{ where }\quad \hD(f|_{J(f)}) :=\sup_Y\dim_{\rm H}Y, \end{equation} where the supremum is taken over all conformal expanding repellers $Y\subset J(f)$ (we recall its definition in Section~\ref{chirepellers}), the latter number is also called the \emph{hyperbolic dimension} of $J(f)$. \begin{teorema}\label{teoprinc} Let $f\colon \overline{\mathbb{C}} \to \overline{\mathbb{C}}$ be a rational function of degree $\ge2$ on the Riemann sphere and let $J=J(f)$ be its Julia set. If $A\subset J$ satisfies $\dim_{\rm H} A < \mathbb{D}D(f|_{J})$, then $$ \dim_{\rm H} E_{f|J}^+(A) \geq \mathbb{D}D(f|_{J}). $$ \end{teorema} Theorem~\ref{teoprinc} immediately implies the following. \footnote{Note that until recently it was unknown whether there exist a map for which $\hD(f|_J)<\dim_{\rm H}J$. Avila and Lyubich in~\cite[Theorem D]{AviLyu:07} show that for so-called Feigenbaum maps with periodic combinatorics whose Julia set has positive area one has $\hD(f|_J)<\dim_{\rm H}J=2$. They provide examples in~\cite{AviLyu:}. } \begin{cor}\label{teoprinc2new} Let $f\colon \overline{\mathbb{C}} \to \overline{\mathbb{C}}$ be a rational function of degree $\ge2$ on the Riemann sphere and let $J=J(f)$ be its Julia set. Assume that we have \begin{equation}\label{eq:equalityAvila} \mathbb{D}D(f|_J)= \dim_{\rm H}J. \end{equation} If $A\subset J$ satisfies $\dim_{\rm H} A < \dim_{\rm H}{J}$ then $$ \dim_{\rm H} E_{f|J}^+(A) = \dim_{\rm H} J. $$ \end{cor} We obtain an immediate conclusion in the particular case of an expansive map. For that recall that a continuous map $f\colon X \to X$ is {\it expansive} if there exists $\delta > 0$ such that for each pair of distinct points $x,y\in X$ there is $n\geq 1$ such that $d(f^n(x),f^n(y))\ge\delta$. By the Bowen-Manning-McCluskey formula, in the case of a rational function $f\colon J(f) \to J(f)$ which is expansive equality~\eqref{eq:equalityAvila} holds true (see~\cite[Theorem 3.4]{U}). Recall that by~\cite[Theorem 4]{DenUrb:91} a rational function of degree $\ge2$ on its Julia set $J(f)$ is expansive (and hence~\eqref{eq:equalityAvila} is true) if, and only if, $J(f)$ does not contain critical points. Recent work by Rivera-Letelier and Shen~\cite{RivShe:} establishes~\eqref{eq:equalityAvila} for a much wider class of maps. In particular they show that for a rational map of degree $\ge2$ without neutral periodic points, and such that for each critical value $v$ of $f$ in $J(f)$ one has \[ \sum_{n=1}^\infty\frac{1}{\lvert (f^n)'(v)\rvert}<\infty \] equalities~\eqref{eq:equalityAvila} hold true (see~\cite[Theorem II and Section 2.1]{RivShe:}) and Corollary~\ref{teoprinc2new} applies. Note that, in particular, this is true for Collet-Eckmann maps. Let us compare the main results with other previously known ones. Results of this sort have already a long history which starts with the Jarnik-Besicovitch theorem (see~\cite{Jar:29}) which states that the set of badly approximable numbers\footnote{Recall that a real number $x$ is \emph{badly approximable} if there is a constant $C(x)$ such that for any reduced rational $p/q$ we have $\lvert p/q -x\rvert>C(x)/q^2$.} in the interval $[0,1]$ is $1$. Observe that $x\in[0,1)$ is \emph{badly approximable} if, and only if, the forward orbit of $x$ relative to the Gauss continued fraction map $f\colon[0,1)\to[0,1)$ does not accumulate at $0$, that is, if $x$ does not belong to the $\{0\}$-exceptional set of points. 
Here $f$ is defined by $f(x):=1/x-[1/x]$ if $x>0$, where $[y]$ denotes the integer part of $y$, and $f(0)=0$. This result is then an immediate consequence of the fact that for an expanding Markov map of the interval and any point $x_0$ the $\{x_0\}$-exceptional set has full Hausdorff dimension $1$. In analogy, in the case of $f$ being an expanding $C^2$ map of a Riemannian manifold $X$, it is known that $f$ preserves a probability measure which is equivalent to the Liouville measure~\cite{KrzSzl:69} and hence the set of points whose forward orbit is not dense has zero measure. In particular, for every $x\in X$ the $\{x\}$-exceptional set has zero measure. However, by a result by Urba\'nski~\cite{Urb:91}, this set is large in terms of Hausdorff dimension. Tseng~\cite{T} strengthens this result by showing that in fact this set is a \emph{winning set} in the sense of so-called Schmidt games and hence has full Hausdorff dimension (he also considers the case of a countable set of points $A$). Abercrombie and Nair \cite{AN2} proved that for a rational map on the Riemann sphere which is uniformly expanding on its Julia set for a given \emph{finite} set of points $A$ satisfying some additional properties the $A$-exceptional set has full Hausdorff dimension (see also~\cite{AN1} for a precursor of this work in the case of a Markov map on the interval as well as~\cite{AN3} in a more abstract setting but also requiring uniform expansion of the dynamics and finiteness of the set $A$). Their method of proof (which is similarly used by Dolgopyat~\cite{D} to show Theorem~\ref{Dolgoteoshift} stated below) is based on constructing a certain Borel measure which is supported on the set of points whose forward orbits miss certain neighborhoods of $A$ and then use of a mass distribution principle to estimate dimension. Theorems~\ref{teoentropy} and~\ref{teoprinc} and Corollary~\ref{teoprinc2new} generalize these results by Abercrombie and Nair in the sense that we can consider more general sets $A$ and in the sense that we can consider rational maps which are not uniformly expanding. They are analogues to \cite[Theorems 1 and 2]{D} by Dolgopyat which allows for more general set $A$ but requires $f$ to be a piecewise uniformly expanding map of the interval. To the best of our knowledge, our results are the first which apply also in a nonhyperbolic setting. Finally, note that there exists a wide range of work on so-called shrinking target problems which are somehow similar -- considering instead of orbits which do not accumulate on a fixed set those orbits which do not hit a neighborhood of a given size which shrinks with the iteration length (see, for example, Hill and Velani~\cite{HV1,HV2,HV3}). Let us briefly sketch the content of this paper and the idea of the proofs of Theorems \ref{teoentropy} and \ref{teoprinc} (see Section \ref{prova}). We will choose a sequence of subsets of $J(f)$ (certain repellers) such that the dynamics inside them is expanding with all their Lyapunov exponents being close to a given number and their entropy being close to the entropy of the original system. Such repellers are provided by a construction following ideas of Katok (see Theorem \ref{Katrin1} in Section~\ref{katoksec}). They have the property that their Lyapunov exponents and their entropies are close to the ones of an ergodic measure and their Hausdorff dimension is close to the dynamical dimension of the Julia set of whole system. Here we will also invoke the fact~\eqref{eq:dyndimdims}. 
Then we will use that (for some iterate of the map) these repellers are conjugate to a subshift of finite type and we will use the following abstract result by Dolgopyat~\cite{D} on shift spaces. \begin{teorema}[{\cite[Theorem 1]{D}}]\label{Dolgoteoshift} Let $\sigma\colon \Sigma^+_M \to \Sigma^+_M$ be a subshift of finite type. If $B\subset \Sigma^+_M$ satisfies $h(\sigma,B)< h(\sigma)$, then $h(\sigma,E_{\sigma|\Sigma^+_M}^+(B)) = h(\sigma)$. \end{teorema} \noindent Therefore, Theorem \ref{Dolgoteoshift} guarantees that the entropy on a certain conjugate exceptional set in the subshift coincides with the entropy of the subshift (see Section~\ref{Aset}, where general relations for exceptional sets on subsystems are derived). To conclude the proof, it is necessary to show a relationship between topological entropy and Hausdorff dimension inside the sub-repellers, which is proved in Section \ref{sec:dimenttt}. \begin{remark*}{\rm We remark that the methods in this paper extend to more general conformal $C^{1+\alpha}$ maps $f$ of a Riemannian manifold $X$ and a compact invariant subset $W\subset X$, studying exceptional sets in $W$ (relative to the dynamics of $f|_W$). We point out that one essential ingredient is the equality\footnote{Note that in such a context we always have $\hD(f|_W)\le \mathbb{D}D(f|_W)\le \dim_{\rm H}W$. Indeed, it suffices to observe that for each conformal expanding repeller $Y$ there exists an ergodic measure $\mu$ of maximal dimension $\dim_{\rm H}\mu=\dim_{\rm H}Y$ (e.g.~\cite[Theorem 1]{GP}). This implies the first inequality; the second one is immediate.} between the hyperbolic dimension and the dynamical dimension of $f|_W$ (as in~\eqref{eq:dyndimdims}). Another one is the possibility of approximating any ergodic measure with positive entropy and positive Lyapunov exponent by a certain repeller (see Theorem~\ref{Katrin1}). Then a key point is to guarantee that such repellers are contained in $W$. Whenever these facts hold true, our proofs extend to such a map and Theorems~\ref{teoentropy} and~\ref{teoprinc} (and Corollary~\ref{teoprinc2new} in case one has equality between dynamical and Hausdorff dimension as in~\eqref{eq:equalityAvila}) continue to hold true with the Julia set replaced by $W$. For example, in~\cite{RivShe:} the authors consider the Julia set of a certain $C^3$ multimodal interval map with nonflat critical points and without neutral periodic points. We refrain from giving all the details and refer to~\cite{RivShe:} for the precise definitions. Under additional conditions, in particular on the critical points, they establish the corresponding equality~\eqref{eq:equalityAvila} for such maps. The above results apply in this context (see also~\cite{Cam:15}). } \end{remark*} \section{Dimension and entropy of a $(\chi,\epsilon)$-repeller} \label{sec:dimenttt} In this section we will derive a relationship between the Hausdorff dimension and the topological entropy for a specific type of repellers that we call $(\chi,\epsilon)$-repellers. First, we briefly recall dimension and entropy and some of their properties. \subsection{Hausdorff Dimension}\label{HD} Let $X$ be a metric space.
Given a set $Y\subset X$ and a nonnegative number $d \in\mathbb{R}$, we denote the {\it $d$-dimensional Hausdorff measure} of $Y$ by $$ \mathcal{H}^d(Y):= \lim_{r \to 0}\mathcal{H}_{r}^d(Y), $$ where $$ {\mathcal H}^d_r(Y) := \inf\left\{\displaystyle\sum_{i=1}^{\infty}(\diam U_i)^d\colon Y \subset \bigcup_{i=1}^\infty U_i, \diam U_i <r\right\} , $$ where $\diam U_i$ denotes the diameter of $U_i$. Observe that $\mathcal{H}^d(Y)$ is monotone nonincreasing in $d$. Furthermore, if $d\in(a,b)$ and $0<\mathcal{H}^d(Y)<\infty$ then $\mathcal{H}^b(Y)=0$ and $\mathcal{H}^a(Y)=\infty.$ The unique value at which $d\mapsto \mathcal{H}^d(Y)$ jumps from $\infty$ to $0$ is the {\it Hausdorff dimension} of $Y$, that is, $$ \dim_{\rm H} Y=\inf\{d\geq 0 \colon {\mathcal H}^d(Y)=0\}= \sup\{d\geq 0 \colon {\mathcal H}^d(Y)=\infty\}. $$ We recall some of its properties: \begin{itemize} \item [(H1)] Hausdorff dimension is monotone: if $Y_1\subset Y_2\subset X$ then $\dim_{\rm H} Y_1\leq\dim_{\rm H} Y_2$. \item [(H2)] Hausdorff dimension is countably stable: $\dim_{\rm H}\bigcup_{i=1}^\infty B_i=\sup_i\dim_{\rm H}B_i$. \end{itemize} \subsection{Topological Entropy}\label{Ent} Let us now define topological entropy. We will follow the more general approach by Bowen \cite{B}, considering the topological entropy of a general (i.e., not necessarily compact and invariant) set. Let $X$ be a compact metric space. Consider a continuous map $f\colon X\to X$, a set $Y\subset X$, and a finite open cover $\mathscr{A} = \{A_1, A_2,\ldots, A_n\}$ of $X$. Given $U\subset X$ we write $U \prec \mathscr{A}$ if there is an index $j$ so that $U\subset A_j$, and $U\nprec\mathscr{A}$ otherwise. Taking $U\subset X$ we define $$ n_{f,\mathscr{A}}(U) := \begin{cases} 0&\text{ if } U \nprec \mathscr{A},\\ \infty &\text{ if } f^k(U)\prec \mathscr{A}\,\,\forall k\in\mathbb{N},\\ \ell&\text{ if } f^k(U)\prec \mathscr{A}\,\, \forall k\in \{0, \dots, \ell-1\} \text{ and } f^\ell(U)\nprec\mathscr{A}. \end{cases} $$ If $\mathcal U$ is a countable collection of open sets, given $d>0$ let \[ m(\mathscr A,d,\mathcal U) := \sum_{U\in\mathcal U}e^{-d \,n_{f,\mathscr{A}}(U)}. \] Given a set $Y\subset X$, let $$ m_{\mathscr{A}, d} (Y) := \lim_{\rho \to 0}\inf \Big\{m(\mathscr A,d,\mathcal U)\colon Y \subset\displaystyle\bigcup_{U\in\mathcal U} U, e^{-n_{f,\mathscr{A}}(U)}<\rho \text{ for every } U\in\mathcal U \Big\}. $$ Analogously to the Hausdorff measure, $d\mapsto m_{\mathscr{A},d}(Y)$ jumps from $\infty$ to $0$ at a unique critical point and we define $$ h_{\mathscr{A}}(f,Y) := \inf\{d\colon m_{\mathscr{A}, d}(Y)=0\} = \sup\{d\colon m_{\mathscr{A}, d}(Y)=\infty\}. $$ The \emph{topological entropy} of $f$ on the set $Y$ is defined by $$ h(f,Y) := \sup_{\mathscr{A}} h_{\mathscr{A}}(f,Y). $$ Observe that for any finite open cover $\mathscr{A}$ of $Y$ there exists another finite open cover $\mathscr{A}'$ of $Y$ with smaller diameter such that $h_{\mathscr{A}'}(f,Y) \geq h_{\mathscr{A}}(f,Y)$, which means that, in fact, for any $R>0$ $$ h(f,Y) = \sup\{h_{\mathscr{A}}(f,Y)\colon \mathscr{A} \textrm{ finite open cover of } Y,\diam{\mathscr{A}}< R\}. $$ When $Y=X$, we simply write $h(f) = h(f,X)$. To avoid confusion, we sometimes explicitly write $h(f|_X,Y)=h(f,Y)$. In~\cite[Proposition 1]{B}, it is shown that in the case of a compact set $Y$ this definition is equivalent to the canonical definition of topological entropy (see, for example, \cite[Chapter 7]{W}). We recall some of its properties which are relevant in our context (see~\cite{B}).
\begin{itemize} \item[(E1)] Conjugation preserves entropy: If $f\colon X\to X$ and $g\colon Z \to Z$ are topologically conjugate, that is, there is a homeomorphism $\pi\colon X \to Z$ with $\pi \circ f = g\circ \pi$, then $h(f,Y) = h(g,\pi(Y))$ for every $Y\subset X$. \item[(E2)] Entropy is invariant under iteration: $h(f,f(Y)) = h(f,Y)$. \item[(E3)] Entropy is countably stable: $h(f,\bigcup_{i=1}^\infty B_i ) = \sup_i h(f,B_i).$ \item[(E4)] $h(f^m,Y) = m\cdot h(f,Y)$ for all $m\in\mathbb{N}$. \item[(E5)] Entropy is monotone: if $Y\subset Z\subset X$ then $h(f,Y)\le h(f,Z)$. \end{itemize} \subsection{$(\chi,\epsilon)$-repellers} \label{chirepellers} In this section let $X$ be a Riemannian manifold and $f\colon X \to X$ be a differentiable map. We call $f$ {\it conformal} if for each $x\in X$ we have $D_xf = a(x)\cdot\Isom_x$, where $a(x)$ is a positive scalar and $\Isom_x\colon T_xX\to T_{f(x)}X$ is an isometry; in this case we simply write $ a(x) = \lvert f'(x)\rvert$. We say that a set $W\subset X$ is \emph{forward invariant} if $f(W)= W$. A compact set $W\subset X$ is said to be \emph{isolated} if there is an open neighborhood $V$ of $W$ such that $f^n(x)\in V$ for all $n\ge0$ implies $x\in W$. Given an $f$-forward invariant subset $W\subset X$, we call $f|_W$ {\it expanding} if there exists $n\geq 1$ such that for all $x \in W$ we have $$ |(f^n)'(x)|>1. $$ \begin{defi} A compact $f$-forward invariant isolated expanding set $W\subset X$ of a conformal map $f$ is said to be a \emph{conformal expanding repeller}. Given numbers $\chi>0$ and $\epsilon\in(0,\chi)$, we call a conformal expanding repeller $W\subset X$ a \emph{$(\chi, \epsilon)$-repeller} if for every $x\in W$ we have \begin{equation} \label{proprichirepeller} \limsup_{n\to \infty} \Big\lvert\frac{1}{n} \log \,\lvert (f^n)'(x)\rvert - \chi\Big\rvert < \epsilon. \end{equation} \end{defi} In the following, we will collect some important estimates between the Hausdorff dimension and the topological entropy of $(\chi, \epsilon)$-repellers. The following estimate is of a similar spirit as~\cite[Lemma 2]{D}. The method of proof is partially inspired by~\cite{M} and \cite[proof of Theorem 1.2]{BG}. See also~\cite{MaWu:10} for similar results. We will first prove a general result and then consider the particular case of $(\chi, \epsilon)$-repellers. \begin{prop}\label{proplema 2.0.1} Consider a Riemannian manifold $X$ and $f\colon X \to X$ a conformal $C^{1+\alpha}$ map. Let $W \subset X$ be a conformal expanding repeller. Let $Z\subset W$ and let $\chi>0$ and $\epsilon\in(0,\chi)$ be numbers such that for every $x\in Z$ we have~\eqref{proprichirepeller}. Then we have $$ \frac{h(f|_W,Z)}{\chi + \epsilon} \leq \dim_{\rm H}Z \leq \frac{h(f|_W,Z)}{\chi - \epsilon} . $$ \end{prop} \begin{proof} In what follows, in order to simplify notation we avoid conceptually unnecessary use of coordinate charts. Given $N\in\mathbb N$, we define the following level sets: $$ Z_N := \Big\{ x \in Z\colon \Big\lvert \frac{1}{n}\log\,\lvert(f^n)'(x)\rvert - \chi\Big\rvert<\epsilon \text{ for all } n\geq N\Big\}. $$ By hypothesis on $Z$, we have \begin{equation}\label{Zunion} Z=\displaystyle\bigcup_{N\in\mathbb{N}} Z_N. \end{equation} Observe that $Z_N\subset Z_{N'}$ for $N<N'$. Given $N\in\mathbb{N}$, for all $x\in Z_N$ and all $k\geq N$ we have \begin{equation}\label{lema2cotader} e^{k(\chi - \epsilon)} < |(f^{k})'(x)| < e^{k(\chi + \epsilon)} .
\end{equation} On a sufficiently small neighborhood $V$ of $W$ we have $|f'|\ne0$ and hence for $\theta >0$ there exists $R =R(\theta)> 0$ such that if $z_1, z_2 \in V$ and $d(z_1, z_2) < R$ then \begin{equation}\label{lemdesigeqlog1} \big\lvert\log\,\lvert f'(z_1)\rvert - \log\,\lvert f'(z_2)\rvert\big\rvert < \theta. \end{equation} \noindent\textbf{Step 1:} We start by showing \begin{equation}\label{eq:dimonesid} h(f|_W,Z)\le (\chi+\epsilon)\dim_{\rm H}Z. \end{equation} Fix $N\in\mathbb{N}$. Fix some $\theta>0$ and let $R=R(\theta)$ as above. We start by estimating the entropy on $Z_N$. For that we choose some finite open cover $\mathscr{A}$ of $W$ with $\diam\mathscr{A} \leq R$. Let $\ell=\ell(\mathscr A)$ denote a Lebesgue number of $\mathscr A$. Let \[ r_0=r_0(N) :=\ell\min_{0\leq k \leq N} \min_{x\in \overline{V}}\,\lvert(f^k)'(x)\rvert^{-1}. \] We prove the following intermediate result. \begin{claim}\label{claimmmm} For every $\gamma>(\chi+\epsilon+\theta)\dim_{\rm H}Z_N$, we have $m_{\mathscr A, \gamma} (Z_N) =0$. \end{claim} \begin{proof} Let $D:=\gamma/(\chi +\epsilon+\theta)$. Let $c=\log(\ell/2)/(\chi+\epsilon+\theta)$. Let $\zeta>0$. As $D>\dim_{\rm H} Z_N$, there is $\rho_0>0$ such that for all $r\in(0,\rho_0)$ we have that \[ \inf\Big\{\sum_ir_i^D\colon Z_N\subset\bigcup_iB(x_i,r_i),r_i<r\Big\}< \zeta e^{c\gamma}. \] Let $\rho_1:=\min\{r_0, \rho_0\}$. Then, for every $\rho \in (0,\rho_1)$ there is $r\in (0, \rho)$ also satisfying $$ r < (e^c \rho)^{\chi +\epsilon+\theta} $$ and a cover $\mathcal U=\{U_i\}$ of $Z_N$ by open balls $U_i=B(x_i,r_i)$, $r_i<r$, so that \begin{equation}\label{eq:22222} \sum_ir_i^D< \zeta e^{c\gamma}. \end{equation} For every $U_i\in\mathcal{U}$, for any $z_1,z_2 \in U$ for all $j \in\{0,\ldots,n_{f,\mathscr{A}}(U_i) - 1\}$ we have \[ d(f^j(z_1), f^j(z_2)) <\diam\mathscr A\le R. \] From~\eqref{lemdesigeqlog1} it follows that for every $k=1,\ldots,n_{f,\mathscr{A}}(U_i)$ we have \[ \big|\log|(f^k)'(z_1)| - \log|(f^k)'(z_2)|\big| \leq \sum_{j=0}^{k-1}\big |\log|f'(f^j (z_1))| - \log|f'(f^j(z_2))|\big| \leq k \theta \] and hence \begin{equation}\label{lemadeslogeq2} e^{-k\theta} \leq \displaystyle\frac{|(f^k)'(z_1)|}{|(f^k)'(z_2)|} \leq e^{k\theta} . \end{equation} Given $i$, for $x\in U_i\in\mathcal U$ let $F(x)=f^{n_{f,\mathscr{A}}(U_i)}(x)$. By definition of $n_{f,\mathscr{A}}(U_i)$ and of the Lebesgue number $\ell$, for every $U_i\in\mathcal U$ it follows that $\ell \leq \diam f^{n_{f,\mathscr{A}}(U_i)}(U_i)=\diam F(U_i)$. Consider $x,y\in \overline{U_i}$ such that $\diam F(U_i) = d(F(x), F(y))$. Consider the shortest path $\gamma\colon [0,1] \to X$ linking $x$ to $y$, which is completely contained in $\overline{U_i}$ since $\mathcal U$ is a cover by balls. Thus $$ \ell \leq d(F(x) , F(y)) \leq \int^1_0 |(F\circ \gamma)'(t)|\,dt = \int^1_0 |F'(\gamma(t))||\gamma ' (t)|\,dt. $$ Observe that $r_i < r_0$ implies that $n_{f, \mathcal{A}}(U_i) >N$. Considering $z\in U_i\cap Z_N$, with $k = n_{f,\mathscr{A}}(U_i)>N$ in~\eqref{lemadeslogeq2} and~\eqref{lema2cotader} we conclude \[\begin{split} \ell & \leq \int^1_0 \displaystyle\frac{|F'(\gamma(t))|}{|F'(z)|} |F'(z)|\,|\gamma ' (t)|\,dt\\ \text{by~\eqref{lemadeslogeq2}}\quad &\leq e^{ n_{f,\mathscr{A}}(U_i)\theta}\int^1_0 |(f^{n_{f,\mathscr{A}}(U_i)})'(z)|\,|\gamma ' (t)|\,dt\\ \text{by~\eqref{lema2cotader}}\quad & < e^{n_{f,\mathscr{A}}(U_i)\theta} e^{n_{f,\mathscr{A}}(U_i)(\chi + \epsilon)} \diam U_i. 
\end{split}\] Recalling the definition of $c$ we obtain \begin{equation}\label{lemainteqp1} e^{- n_{f,\mathscr{A}}(U_i)} < \big(\ell^{-1}\diam U_i\big)^{1/(\chi+\epsilon+\theta)} = e^{-c} (\frac12\diam U_i)^{1/(\chi + \epsilon+\theta)}. \end{equation} Since $\diam{U_i} < 2r < 2(e^c \rho)^{\chi +\epsilon+\theta}$ we have $ e^{-n_{f, \mathcal{A}}(U_i)} < \rho. $ Then, we have \[\begin{split} m(\mathscr A,\gamma,\mathcal U) &= \sum_{U_i\in\mathcal U}e^{-\gamma\, n_{f,\mathscr A}(U_i)}\\ \text{by~\eqref{lemainteqp1} }\quad &\le e^{-c\gamma}\sum_{U_i\in\mathcal U} (\frac12\diam U_i)^{\gamma/(\chi + \epsilon+\theta)} = e^{-c\gamma}\sum_{U_i\in\mathcal U} r_i^D \\ \text{by~\eqref{eq:22222}}\quad &< e^{-c\gamma} \zeta e^{c\gamma} = \zeta. \end{split}\] Summarizing, for arbitrary $\zeta>0$, there exists $\rho_1>0$ such that for any $\rho\in( 0,\rho_1)$ we can cover $Z_N$ by a family of balls $U_i$ satisfying $e^{-n_{f, \mathcal{A}}(U_i)} < \rho$ and $\sum_{U_i\in \mathcal{U}} e^{-\gamma n_{f,\mathcal{A}}(U_i)} < \zeta$. Thus $m_{\mathscr{A}, \gamma}(Z_N) =0$ as claimed. \end{proof} By Claim~\ref{claimmmm}, for every $\gamma>(\chi+\epsilon+\theta)\dim_{\rm H}Z_N$, we have $m_{\mathscr A,\gamma}(Z_N)=0$, which implies $h_{\mathscr{A}}(f,Z_N)\leq\gamma$. Since $\gamma>(\chi+\epsilon+\theta)\dim_{\rm H}Z_N$ is arbitrary, therefore $$ h_{\mathscr A}(f,Z_N)\leq (\chi+\epsilon+\theta)\dim_{\rm H} Z_N. $$ Thus, as $\mathscr A$ was arbitrary (but sufficiently small) \[ h(f|_W,Z_N)\le (\chi+\epsilon+\theta)\dim_{\rm H} Z_N. \] Since $\theta$ was arbitrary, we obtain \[ h(f|_W,Z_N)\le (\chi+\epsilon)\dim_{\rm H}Z_N. \] Now recall that $N\ge1$ was arbitrary. With~\eqref{Zunion} and countable stabilities (H2) of Hausdorff dimension and (E3) of entropy we conclude~\eqref{eq:dimonesid} from $$ \dim_{\rm H} Z = \sup_N \dim_{\rm H} Z_N \geq \sup_N \displaystyle\frac{h(f|_W, Z_N)}{\chi+\epsilon} = \frac{1}{\chi+\epsilon}\sup_Nh(f|_W,Z_N) = \frac{1}{\chi+\epsilon}h(f|_W, Z). $$ This concludes Step 1. \noindent \textbf{Step 2:} We now show \begin{equation}\label{eq:upperboudim} \dim_{\rm H}Z \le \frac{h(f|_W,Z)}{\chi-\epsilon}. \end{equation} Fix some $N\in\mathbb N$. Fix some $\theta\in(0,\chi-\epsilon)$ and let $R=R(\theta)$ as above. We start by estimating the dimension of $Z_N$. Fix some $\tau>0$ and denote $D:=(h(f|_W,Z_N)+\tau)/(\chi-\epsilon-\theta)$. Observe that \[ (\chi - \epsilon -\theta)D = h(f|_W,Z_N)+\tau > h(f|_W,Z_N) = \sup_{\mathscr{A}} h_{\mathscr{A}} (Z_N). \] Hence, for any finite open cover $\mathscr{A}$ of $W$ we have $m_{\mathscr A,(\chi - \epsilon -\theta)D}(Z_N) = 0$. Choose some cover $\mathscr{A}$ with $\diam\mathscr A\le R$. Given some $U\prec \mathscr{A}$ with $n=n_{f,\mathscr A}(U)<\infty$, fix some point $x\in U\cap Z_N$ and consider the sequence $x_k=f^k(x)$, $k=0,\ldots,n-1$. So for each $k$ there is some $A_k\in\mathscr A$ with $x_k\in f^k(U)\subset A_k$. Denote by $f^{-k}_{x_{n-1-k}}$ the inverse branch $g$ of $f^k$ so that $(g\circ f^k)(x_{n-1-k})=x_{n-1-k}$. We observe the following preliminary fact. \begin{claim}\label{cla:fuenf} For every $k=0,\ldots,n-1$ for every $x\in U$ we have \[ \diam f^{-k}_{x_{n-1-k}} (f^{n-1}(U)) \le \lvert (f^k)'(x_{n-1-k})\rvert^{-1} e^{k\theta}\cdot R. \] \end{claim} \begin{proof} The proof is by induction. For $k=0$ we have $f^{n-1}(U)\subset A_{n-1}\in\mathscr A$ and hence $\diam f^{n-1}(U)\le R$. For $k\in \{1,\ldots,n-1\}$, suppose the claim holds for $k-1=j$. 
Let $V_{j+1}:=f^{-(j+1)}_{x_{n-1-(j+1)}}(f^{n-1}(U))= f^{-1}_{x_{n-1-(j+1)}}(V_j)$ and observe that, in particular, $V_{j+1}\subset A_{j+1}\in\mathscr A$. Since for every $y,z\in A_{j+1}$, using~\eqref{lemdesigeqlog1}, we have that $\lvert f'(y)\rvert/\lvert f'(z)\rvert\le e^\theta$, we can conclude \[ \diam V_{j+1} \le \sup_{y\in V_{j+1}}\lvert f'(y)\rvert^{-1}\diam V_j \le \lvert f'(x_j)\rvert^{-1}e^\theta\diam V_j. \] Invoking the induction hypothesis for $k =j$, we obtain \[\begin{split} \diam V_{j+1} &\le \lvert f'(x_j)\rvert^{-1}e^\theta \cdot\lvert (f^j)'(x_{n-1-j})\rvert^{-1}e^{j\theta} \cdot R\\ &= \lvert (f^{j+1})'(x_{n-1-(j+1)})\rvert^{-1}e^{(j+1)\theta}\cdot R, \end{split}\] proving the assertion for $j+1$. This proves the claim. \end{proof} \begin{claim}\label{cla:clavier} $\mathcal{H}^D(Z_N) = 0$. \end{claim} \begin{proof} Let $\eta>0$. Observe that $m_{\mathscr A,(\chi - \epsilon -\theta)D}(Z_N)=0$ implies that there is $\rho_0>0$ such that for every $\rho \in(0,\rho_0)$ we have that \[ \inf\Big\{ m(\mathscr A,(\chi-\epsilon-\theta)D,\mathcal U) \colon Z_N\subset\bigcup_{U\in\mathcal U}U, e^{-n_{f,\mathscr A}(U)} < \rho\Big\} < \eta e^{-(\chi - \epsilon -\theta)D} R^{-D}. \] Consider $r_1 < \min\{\rho_0, e^{-(N+1)}\}$. Then, for every $r \in (0,r_1)$ there is $\rho \in (0,r)$ also satisfying \begin{equation}\label{eq:definitionr11} e^{\chi-\epsilon-\theta}R\cdot \rho^{\chi-\epsilon-\theta} < r. \end{equation} Hence, there exists a cover $\mathcal{U}=\{U_i\}$ of $Z_N$ satisfying $e^{-n_{f, \mathcal{A}}(U_i)} < \rho$ and \begin{equation}\label{eq:bedingungentr2} m(\mathscr A, (\chi - \epsilon -\theta)D,\mathcal U) < \eta e^{-(\chi - \epsilon -\theta)D} R^{-D}. \end{equation} Note that $\rho< e^{-(N+1)}$ implies $n_{f,\mathscr A}(U_i)>N+1$. Also recall that $f^k(U_i)$ lies inside an element of $\mathscr A$ for every $k=0,\ldots,n_{f,\mathscr A}(U_i)-1$. Consequently, with Claim~\ref{cla:fuenf} for $k=n_{f,\mathscr A}(U_i)-1$ and $x\in Z_N \cap U_i$ we obtain \[ \diam U_i \le \lvert (f^{n_{f,\mathscr A}(U_i)-1})'(x)\rvert^{-1} e^{(n_{f,\mathscr A}(U_i)-1)\theta}\cdot R < e^{-(n_{f,\mathscr A}(U_i)-1)(\chi-\epsilon-\theta)}\cdot R. \] Thus, since $e^{-n_{f,\mathscr A}(U_i)}< \rho$, we have that \[\begin{split} \diam U_i &< e^{\chi-\epsilon-\theta} R\cdot e^{-n_{f,\mathscr A}(U_i)(\chi-\epsilon-\theta)} <e^{\chi-\epsilon-\theta} R\cdot\rho^{\chi-\epsilon-\theta}\\ \text{by~\eqref{eq:definitionr11}}\quad &<r. \end{split}\] By~\eqref{eq:bedingungentr2} and above inequality, \[\begin{split} \sum_{U_i\in\mathcal U}(\diam U_i)^D &\le \sum_i\left(e^{\chi-\epsilon-\theta}R\cdot e^{-n_{f,\mathscr A}(U_i)(\chi-\epsilon-\theta)}\right)^{D}\\ &= e^{(\chi-\epsilon-\theta)D}R^D \cdot m(\mathscr A,D(\chi-\epsilon-\theta),\mathcal U) < \eta. \end{split}\] Summarizing, for arbitrary $\eta>0$, there exists $r_1>0$ such that for every $r\in(0,r_1)$ we can cover $Z_N$ by $\mathcal{U}$ such that $\diam U_i< r$ for every $U_i\in\mathcal U$ and $\sum_{U_i\in \mathcal{U}} (\diam U_i)^D < \eta$. Thus, $\mathcal H^D(Z_N)=0$, proving the claim. \end{proof} Claim~\ref{cla:clavier} now implies immediately \[ \dim_{\rm H}Z_N\le\frac{h(f|_W,Z_N)+\tau}{\chi-\epsilon-\theta}. \] As $\tau>0$ and $\theta\in(0,\chi-\epsilon)$ were arbitrary, we conclude $$ \dim_{\rm H} Z_N \leq \displaystyle\frac{h(f|_W, Z_N)}{\chi - \epsilon}. 
$$ Finally, recall that $N$ was arbitrary; by~\eqref{Zunion}, (E3), and (H2), we obtain $$ \dim_{\rm H} Z = \sup_N \dim_{\rm H} Z_N \leq \sup_N \frac{h(f|_W, Z_N)}{\chi - \epsilon} = \frac{h(f|_W,Z)}{\chi - \epsilon}. $$ This shows~\eqref{eq:upperboudim} and finishes the proof of the proposition. \end{proof} The following is now an immediate consequence of Proposition~\ref{proplema 2.0.1}. \begin{cor}\label{proplema 2.0cor} Consider a Riemannian manifold $X$ and $f\colon X \to X$ a conformal $C^{1+\alpha}$ map. Let $W \subset X$ be a $(\chi,\epsilon)$-repeller. Then for every $Z\subset W$ we have $$ \frac{h(f|_W,Z)}{\chi + \epsilon} \leq \dim_{\rm H}Z \leq \frac{h(f|_W,Z)}{\chi - \epsilon} . $$ \end{cor} Finally, we provide some further consequences which we will need in the sequel. Given $N\in \mathbb{N}$, let $R\subset W$ be a compact set satisfying \begin{equation}\label{eq:W} f^N(R)=R \quad\text{ and }\quad W=\bigcup_{i=0}^{N-1}f^i(R). \end{equation} \begin{lema}\label{lem:simple} $h(f|_W)=\frac1Nh(f^N|_R)$. \end{lema} \begin{proof} By (E3), (E2), (E4) and the $f^N$-invariance of $R$ we have \[ h(f|_W) =\max_ih(f|_W,f^i(R)) =h(f|_W,R) =\frac1Nh(f^N|_W,R) =\frac1Nh(f^N|_R). \] This proves the lemma.\end{proof} \begin{lema}\label{afirm2.1.1} Consider a Riemannian manifold $X$ and $f\colon X \to X$ a conformal $C^{1+\alpha}$ map. Suppose that $W \subset X$ is a $(\chi,\epsilon)$-repeller of positive entropy and $R\subset W$ a compact set satisfying $f^N(R)=R$ and $W=\bigcup_{i=0}^{N-1}f^i(R)$ for some $N\ge1$. Then for every $Y\subset R$ we have \[ \dim_{\rm H} Y \geq \frac{h(f|_{W},Y)}{h(f|_{W})} \frac{(\chi - \epsilon)}{(\chi + \epsilon)} \dim_{\rm H} W = \frac{h(f^N|_{R},Y)}{h(f^N|_{R})} \frac{(\chi - \epsilon)}{(\chi + \epsilon)} \dim_{\rm H} W. \] \end{lema} \begin{proof} Applying Corollary~\ref{proplema 2.0cor} we have \[ \frac{1}{h(f|_W)}(\chi-\epsilon)\dim_{\rm H} W \le 1. \] Given $Y\subset R\subset W$, we also have \[ \frac{h(f|_W,Y)}{\chi+\epsilon}\le \dim_{\rm H} Y. \] Multiplying both inequalities, we obtain the first inequality. The equality is a consequence of Lemma \ref{lem:simple}, property (E4), and the $f^N$-invariance of $R$. \end{proof} \section{Expanding repellers for nonuniformly expanding maps}\label{katoksec} In order to approximate ergodic quantifiers of the map, which in general is not uniformly expanding, we follow an idea of Katok and construct suitable repellers. For a proof of the following theorem see~\cite[Chapter 11.6]{PU} and~\cite[Theorems 1 and 3]{G}. \begin{teorema}\label{Katrin1} Consider a Riemannian manifold $X$ and $f\colon X \to X$ a conformal $C^{1+\alpha}$ map. Let $\mu$ be an $f$-invariant ergodic measure with positive entropy $h_\mu(f)$ and positive Lyapunov exponent \[ \chi(\mu) := \int\log\,\lvert f'\rvert\,d\mu. \] Then for all $\epsilon>0$ there is a compact set $W_{\epsilon} \subset X$ such that $f|_{W_{\epsilon}}$ is a conformal expanding repeller satisfying: \begin{itemize} \item[(a)] $h_{\mu} (f) +\epsilon\geq h(f|_{W_{\epsilon}}) \geq h_{\mu} (f) -\epsilon$,\\[-0.4cm] \item[(b)] For every $f$-invariant ergodic measure $\nu$ supported in $W_\epsilon$ we have \[ \big\lvert \chi(\nu)-\chi(\mu)\big\rvert <\epsilon. \] \end{itemize} In particular, $W_\epsilon$ is a $(\chi(\mu),\epsilon)$-repeller.
Moreover, there is a compact set $R_\epsilon\subset W_\epsilon$ and some $N=N(\epsilon)\in\mathbb{N}$ such that $f^N(R_\epsilon) = R_\epsilon$, $f^N|_{R_{\epsilon}}$ is expanding and topologically conjugate to a topologically mixing subshift of finite type, and we have $$ W_\epsilon = \displaystyle\bigcup_{i=0}^{N-1} f^i(R_\epsilon). $$ \end{teorema} These repellers $W_\epsilon$ have good dimension properties as we shall see below. In particular, we can apply Corollary~\ref{proplema 2.0cor} to them. For the following result recall the definition of the dynamical dimension in~\eqref{def:DD}. \begin{lema}\label{lemaqeaprox} Let $f\colon \overline{\mathbb{C}} \to \overline{\mathbb{C}}$ be a rational function of degree $\ge2$ on the Riemann sphere and let $J=J(f)$ be its Julia set. Then there exist a sequence of probability measures $(\mu_n)_n$ and a sequence of positive numbers $(\epsilon_n)_n$ with $\lim_{n\to \infty}\epsilon_n = 0$ and $\epsilon_n<\chi(\mu_n)/n$ such that there are $(\chi(\mu_n), \epsilon_n)$-repellers $W_n = W_n(\mu_n)\subset J$ satisfying \[ \lim_{n\to \infty} \dim_{\rm H} W_n = \mathbb{D}D(f|_{J}). \] \end{lema} \begin{proof} First note that for an $f$-invariant ergodic probability measure $\mu$ of a rational function with positive Lyapunov exponent $\chi(\mu)$ we have \begin{equation}\label{eq:Mane} \dim_{\rm H}\mu = \frac{h_{\mu}(f)}{\chi(\mu)} \end{equation} (\cite{Man:88}, see also~\cite[Chapters 8--10]{PU}). Given $n\in \mathbb{N}$, by definition of the dynamical dimension, there is an ergodic $f$-invariant probability measure $\mu_n$ with positive entropy (and hence positive Lyapunov exponent) such that \begin{equation}\label{lemadihqe} \dim_{\rm H}\mu_n \geq \mathbb{D}D(f|_{J}) - \frac{1}{n} . \end{equation} Choose $\epsilon_n>0$ satisfying $\epsilon_n<\chi(\mu_n)/n$. Let $W_n$ be a $(\chi(\mu_n), \epsilon_n)$-repeller provided by Theorem \ref{Katrin1} applied to $\mu_n$ and recall that there are $N=N(\epsilon_n)$ and $R_n\subset W_n$ such that $f^N|_{R_n}$ is expanding and conjugate to a mixing subshift of finite type. Observe that $\dim_{\rm H}W_n=\dim_{\rm H}R_n$. Also observe that $W_n\subset J$. Applying Bowen's formula (see~\cite{GP}) for $f^N|_{R_n}$, with $s_n=\dim_{\rm H}R_n$ we have \[ 0=\sup_\nu\big(h_\nu(f^N)- s_nN\chi(\nu)\big), \] where the supremum is taken over all $f^N$-invariant measures $\nu$ supported in $R_n$. Recall that for every invariant measure $\nu$ for $f^N\colon R_n\to R_n$ we get an invariant measure $\hat\nu$ for $f\colon W_n\to W_n$ by defining $\hat\nu:=\frac1N(\nu+f_\ast\nu+\ldots+f^{N-1}_\ast\nu)$ and observe that $h_\nu(f^N)=Nh_{\hat\nu}(f)$. Further, $h(f^N|_{R_n})=Nh(f|_{W_n})$ (Lemma~\ref{lem:simple}). By the variational principle for topological entropy (see e.g. \cite[Chapter 9]{PU}), we can take $\nu$ such that $h_\nu(f^N) \ge Nh(f|_{W_n}) -N\epsilon_n$, which implies $$ 0\ge h(f|_{W_n}) - \epsilon_n -s_n\chi(\nu). $$ From Theorem~\ref{Katrin1} we obtain \[ 0 \ge h_{\mu_n}(f) -2\epsilon_n - s_n (\chi(\mu_n)+\epsilon_n), \] which implies \[ s_n\ge \frac{h_{\mu_n}(f) -2\epsilon_n}{\chi(\mu_n)+\epsilon_n}. \] Hence, by~\eqref{eq:Mane}, we conclude \begin{equation} \label{dimmu} s_n \ge \dim_{\rm H}\mu_n\left(\frac{\chi(\mu_n)}{\chi(\mu_n) + \epsilon_n}\right) - \frac{2\epsilon_n}{\chi(\mu_n) + \epsilon_n}. \end{equation} As we required $0<\epsilon_n < \chi(\mu_n)/n$, inequalities (\ref{lemadihqe}) and (\ref{dimmu}) show that \[ \left(\mathbb{D}D(f|_{J}) - \frac{1}{n}\right) \frac{1}{1+1/n} - \frac{2}{n+1} \leq s_n=\dim_{\rm H} W_n.
\] Finally, it follows from the definition of the hyperbolic dimension and~\eqref{eq:dyndimdims} that \[ \dim_{\rm H} W_n \le \hD(f|_{J}) = \mathbb{D}D(f|_{J}). \] Taking the limit as $n\to\infty$, we obtain $$ \displaystyle\lim_{n\to \infty} \dim_{\rm H} W_n = \mathbb{D}D(f|_{J}). $$ This proves the lemma. \end{proof} \section{General properties of exceptional sets}\label{Aset} In this section we will derive some general properties of exceptional sets. We first show that being exceptional is preserved by conjugation. \begin{lema}\label{lempropriE1} If $f\colon X \to X$ and $g\colon Y \to Y$ are topologically conjugate by a homeomorphism $\pi \colon X \to Y$ with $g\circ \pi = \pi \circ f$, then for every $A \subset Y$ we have $$ \pi (E^+_{f|X}(\pi^{-1}(A))) = E^+_{g|Y}(A). $$ \end{lema} \begin{proof} Given $y\in \pi(E^+_{f|X}(\pi^{-1}(A)))$, suppose that $y\notin E^+_{g|Y}(A)$. Then there is a subsequence $(n_k)_k$ and $y_0\in A$ such that $g^{n_k}(y)$ converges to $y_0$. By conjugation, $f^{n_k}(\pi^{-1}(y))$ converges to $\pi^{-1}(y_0)\in\pi^{-1}(A)$, which is a contradiction. Thus, $\pi (E^+_{f|X}(\pi^{-1}(A))) \subset E^+_{g|Y}(A)$. The other inclusion follows analogously, by conjugation. \end{proof} We require the following simple fact which we state without proof. \begin{lema}\label{lem:invsubset} Let $W\subset X$ be a compact set such that $f(W)=W$. If $A\subset X$ then $\displaystyle E^+_{f|W}(A\cap W) \subset E^+_{f|X}(A) $. \end{lema} In order to see how exceptional sets behave with respect to iterates, for a given $A\subset X$ and $N \in\mathbb{N}$ let us denote \begin{equation}\label{defAm} A_N := \bigcup_{j=0}^{N-1}f^{-j}(A). \end{equation} \begin{lema} \label{lem:invsubsetneeew} Let $W\subset X$ be a compact set such that $f(W)=W$. If $A\subset W$ then $E^+_{f|W}(A) = E^+_{f^N|W}(A_N\cap W)$. \end{lema} \begin{proof} Let $x \in E^+_{f|W}(A)$. Suppose that there is $y\in \overline{\mathcal{O}_{f^N}(x)} \cap A_N\cap W$. Then, there are $j_0 \in \{0,\ldots, N-1\}$ such that $y\in f^{-j_0}(A)$ and a sequence $(n_k)_{k=0}^\infty$ such that $\lim_{k\to \infty}f^{Nn_k}(x) = y$. By continuity of $f$, we have that $\lim_{k\to \infty} f^{Nn_k+j_0}(x) = f^{j_0}(y) \in A$ and hence $\overline{\mathcal{O}_{f}(x)} \cap A \neq \emptyset $, which is a contradiction. This proves that $E^+_{f|W}(A) \subset E^+_{f^N|W}(A_N\cap W)$. Consider now $x\in E^+_{f^N|W}(A_N\cap W)$. Suppose that there exists $y\in \overline{\mathcal{O}_f(x)} \cap A$. Thus, there is a subsequence $(n_k)_{k=0}^\infty$ such that $\lim_{k\to\infty} f^{n_k}(x) = y \in A$. We can write $n_k = N s_k + r_k$ where $0\leq r_k\leq N-1$. Then there exists $r \in \{0, \ldots, N-1\}$ such that, along a subsequence which we do not relabel, $r_k=r$ and hence $\lim_{k\to\infty} f^{Ns_k + r}(x) = y\in A$. By compactness of $W$ and because $f^{Ns_k}(x) \in W$ for all $k$, there exists a further subsequence $(\ell_k)_{k=0}^\infty$ of $(s_k)_{k=0}^\infty$ such that $\lim_{k\to\infty}f^{N\ell_k}(x) = v \in W$. By continuity of $f$ we have that $$ f^r(v) = f^r\big(\lim_{k\to\infty}f^{N\ell_k}(x)\big) = \lim_{k\to\infty}f^{N\ell_k+r}(x) = y. $$ Thus, $\lim_{k\to\infty}f^{N\ell_k}(x) = v \in f^{-r}(y)\subset f^{-r}(A)$, and hence $v\in A_N\cap W$, which is a contradiction. This proves the other inclusion. \end{proof} For the remaining results in this section, let $W\subset X$ be a compact set such that $f(W)=W$, let $N\in \mathbb{N}$, and let $R\subset W$ be a compact set satisfying \[ f^N(R)=R \quad\text{ and }\quad W=\bigcup_{i=0}^{N-1}f^i(R). \] Moreover, let $A\subset W$ and let $A_N$ be defined as in~\eqref{defAm}.
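For instance, these assumptions are satisfied by the doubling map $f(x)=2x \pmod 1$ on the circle $W=\mathbb{R}/\mathbb{Z}$ with $R=W$, $N=2$, and $A=\{0\}$; in this case $A_2=A\cup f^{-1}(A)=\{0,1/2\}$ and, by Lemma~\ref{lem:invsubsetneeew}, $E^+_{f|W}(\{0\})=E^+_{f^2|W}(\{0,1/2\})$, that is, a point avoids accumulating on $\{0\}$ under $f$ if, and only if, its $f^2$-orbit avoids accumulating on $\{0,1/2\}$.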
\begin{lema}\label{lempropriE1.2} For all $i\in\{0, \ldots, N-1\}$, we have \[ f^i\big(E^+_{f^N|R}(A_N\cap R)\big) \subset E^+_{f^N|f^i(R)}\big(A_N\cap f^i(R)\big). \] \end{lema} \begin{proof} Let $y\in f^i(E^+_{f^N|R}(A_N\cap R))$. Then there is $x\in E^+_{f^N|R}(A_N\cap R)$ such that $f^i(x)=y$. Suppose, by contradiction, that there is $z\in \overline{\mathcal{O}_{f^N|f^i(R)}(y)}\cap A_N$. Then, there are $j\in\{0, \ldots, N-1\}$ and a sequence $(n_k)_{k=1}^\infty$ such that $\lim_{k\to \infty} f^{Nn_k}(y) = z \in f^{-j}(A)$. By compactness and $f^N$-invariance of $R$, there are $\tilde{x}\in R$ and a subsequence $(n_\ell)_{\ell=0}^{\infty}$ of $(n_k)_{k=0}^{\infty}$ such that $\lim_{\ell\to \infty}f^{Nn_\ell}(x) = \tilde{x}$. Note that, by continuity of $f^i$, it follows that $$ z = \lim_{\ell\to\infty}f^{Nn_\ell}(y) = f^i(\lim_{\ell\to\infty}f^{Nn_\ell}(x)) = f^i(\tilde{x}). $$ In this case, $\tilde{x} \in f^{-i}(z)$ and, since $z\in f^{-j}(A)$, we have that $\tilde{x}\in f^{-(i+j)}(A)$. If $i+j \in\{0, \ldots, N-1\}$, then $\tilde{x}\in A_N\cap R$, which is a contradiction. If $i+j\geq N$, then there are $s, \iota\in \mathbb{N}$ such that $\iota \in \{0, \ldots, N-1\}$ and $i+j = sN+\iota$. Thus, by continuity of $f^{sN}$ and $f^N$-invariance of $R$, it follows that $$ \lim_{\ell\to\infty}f^{N(s+n_\ell)}(x) = f^{Ns}(\tilde{x}) \in f^{-\iota}(A)\cap R. $$ This is again a contradiction. \end{proof} \begin{lema} \label{lempropriE2} $\displaystyle E^+_{f^N|W}(A_N\cap W) = \displaystyle\bigcup_{i=0}^{N-1}E^+_{f^N|{f^i(R)}}(A_N\cap f^i(R)). $ \end{lema} \begin{proof} Observe that $f^N(f^i(R)) = f^i(f^N(R)) = f^i(R)$. If $x\in E^+_{f^N|W}(A_N\cap W)$, then $x\in f^i(R)$ for some $i\in\{0,\ldots,N-1\}$ and by $f^N$-invariance of $f^i(R)$ we have $$ \overline{\mathcal{O}_{f^N|{f^i(R)}}(x)} \cap (A_N \cap f^i(R)) = \overline{\mathcal{O}_{f^N|{f^i(R)}}(x)} \cap A_N = \overline{\mathcal{O}_{f^N}(x)} \cap A_N = \emptyset. $$ Hence, $$ E^+_{f^N|W}(A_N\cap W) \subset \displaystyle\bigcup_{i=0}^{N-1}E^+_{f^N|{f^i(R)}}(A_N\cap f^i(R)). $$ On the other hand, let $x$ be a point in the set on the right hand side, that is, let $x \in E^+_{f^N|{f^i(R)}}(A_N\cap f^i(R))$ for some $i\in\{0,\ldots,N-1\}$ and, in particular, $x \in f^i(R)$. Again, by $f^N$-invariance of $f^i(R)$, we have $$ \overline{\mathcal{O}_{f^N}(x)}\cap A_N = \overline{\mathcal{O}_{f^N|{f^i(R)}}(x)}\cap A_N = \overline{\mathcal{O}_{f^N|{f^i(R)}}(x)}\cap (A_N \cap f^i(R)) = \emptyset. $$ This finishes the proof. \end{proof} Finally, in this section we give a relation for the entropy of the sets $A_N$, which we will need right afterwards. \begin{lema}\label{lem:propriE4} If $h( f|_W,A) < h(f|_W)$ then $h(f^N|_R,A_N\cap R) < h(f^N|_R)$. \end{lema} \begin{proof} Starting from our hypothesis, \begin{eqnarray*} h(f|_W) & > & h(f|_W,A)\\ \text{ by (E2), (E3), (E5) and~\eqref{eq:W} }\quad & = & h( f|_W,A_N\cap W) = h\Big( f|_W, A_N\cap\bigcup_{i=0}^{N-1}f^i(R)\Big) \\ \text{ by (E5) }\quad & \ge & h( f|_W,A_N\cap R)\\ \text{ by (E4) and~\eqref{eq:W} }\quad & = & \displaystyle\frac{1}{N} h( f^N|_W,A_N \cap R) = \frac{1}{N}h(f^N|_R,A_N\cap R). \end{eqnarray*} Hence, applying Lemma~\ref{lem:simple} we obtain the claimed property. \end{proof} \section{Proof of the main results}\label{prova} We first establish a preparatory result for the entropy of a continuous transformation that can be decomposed into finitely many pieces on which some iterate of the map is conjugate to a subshift of finite type.
\begin{prop}\label{proph} Let $(W,d)$ be a compact metric space and $f\colon W\to W$ a continuous transformation. Let $R\subset W$ be a compact set satisfying $f^N(R)=R$ and $W=\bigcup_{i=0}^{N-1}f^i(R)$ for some $N\ge1$ and suppose that $f^N\colon R\to R$ is conjugate to a subshift of finite type. Then for every compact set $A\subset W$ satisfying $h(f|_W,A)<h(f|_W)$ we have \[ h\big(f|_W,E^+_{f|W}(A)\big) = h(f|_W). \] \end{prop} \begin{proof} By hypothesis, there is a subshift of finite type $\sigma\colon\Sigma_M^+\to\Sigma_M^+$ and a homeomorphism $\pi\colon \Sigma_M^+\to R$ satisfying $\pi\circ f^N=\sigma\circ\pi$. By hypothesis and Lemma~\ref{lem:propriE4} we have $h( f^N|_R,A_N\cap R) < h(f^N|_R).$ By the conjugation property (E1) of entropy we have $ h( \sigma,\pi^{-1}(A_N\cap R)) < h(\sigma). $ By Theorem \ref{Dolgoteoshift}, we have that \[ h\big( \sigma,E^+_{\sigma|\Sigma^+_M}(\pi^{-1}(A_N\cap R))\big) = h(\sigma). \] From Lemma \ref{lempropriE1} and property (E1), we conclude \[ h\big( f^N|_R,E^+_{f^N|R}(A_N\cap R)\big) = h(f^N|_R). \] By $f^N$-invariance of $R$, properties (E2) and (E5) of entropy, Lemma \ref{lempropriE1.2}, Lemma \ref{lempropriE2}, and Lemma \ref{lem:simple} \[\begin{split} h(f^N|_R) & = h\big( f^N|_W,E^+_{f^N|R}(A_N\cap R)\big) \\ & = h\big( f^N|_W,f^i(E^+_{f^N|R}(A_N\cap R))\big) \\ & \leq h\big( f^N|_W,E^+_{f^N|f^i(R)}(A_N\cap f^i(R))\big)\\ & \leq h(f^N|_R). \end{split}\] Thus, by Lemma \ref{lempropriE2}, $f^N$-invariance of $R$, (E3), and Lemma \ref{lem:simple}, it follows \[\begin{split} h\big( f^N|_W,E^+_{f^N|W}(A_N\cap W)\big) & = \max_{0\leq i \leq N-1} h\big( f^N|_{f^i(R)},E^+_{f^N|{f^i(R)}}(A_N \cap f^i(R))\big)\\ & = h(f^N|_{R})\\ & = N h(f|_W). \end{split}\] Then, by Lemma~\ref{lem:invsubsetneeew} and property (E4), it follows that \[\begin{split} h(f|_W,E^+_{f|W}(A)) & = h\big( f|_W,E^+_{f^N|W}(A_N\cap W)\big) \\ & = \frac{1}{N} h\big( f^N|_W,E^+_{f^N|W}(A_N\cap W)\big) \\ & = h(f|_W), \end{split}\] and this finishes the proof of the proposition. \end{proof} Now we can give the proofs of Theorem \ref{teoentropy} and Theorem \ref{teoprinc}. \begin{proof}[Proof of Theorem \ref{teoentropy}] By hypothesis, $h(f|_J)>0$. By the variational principle and Ruelle's inequality, for every $\epsilon>0$ there is an ergodic measure $\mu$ satisfying $\chi(\mu)>0$ and $h_\mu(f) \ge h(f|_J) - \epsilon$. By Theorem \ref{Katrin1}, there is a compact set $W_\epsilon\subset J$ such that $h(f|_{W_\epsilon})\ge h_\mu(f) - \epsilon$. If $\epsilon$ was sufficiently small, this and our hypothesis $h(f|_J, A) < h(f|_J)$ together imply $h(f|_{W_\epsilon}, A\cap W_\epsilon) < h(f|_{W_\epsilon})$. Then, by Proposition~\ref{proph}, the above inequalities, and observing that $E^+_{f|{W_\epsilon}}(A\cap W_\epsilon)\subset E^+_{f|J}(A)$, we have \[\begin{split} h(f|_J) & \leq h_{\mu}(f) + \epsilon \le h(f|_{W_\epsilon})+2\epsilon\\ & = h( f|_{W_\epsilon},E^+_{f|W_\epsilon}(A\cap W_\epsilon)) + 2\epsilon\\ & \leq h( f|_J,E^+_{f|J}(A)) + 2\epsilon. \end{split}\] Since $\epsilon$ was arbitrary, this implies the claim. \end{proof} \begin{proof}[Proof of Theorem \ref{teoprinc}] Consider the sequences $(\mu_n)_n$, $(\epsilon_n)_n$ and $(W_{n})_n$ from Lemma \ref{lemaqeaprox}. In particular, $\epsilon_n<\chi(\mu_n)/n$ and thus \[ \lim_{n\to\infty}\frac{\chi(\mu_n)-\epsilon_n} {\chi(\mu_n)+\epsilon_n} =1. \] By hypothesis we have \[ \dim_{\rm H}A<\mathbb{D}D(f|_J). 
\] Hence, for $n$ sufficiently large we have (the first inequality is simple) \[ \dim_{\rm H}(A\cap W_{n}) \le \dim_{\rm H}A <\frac{\chi(\mu_n)-\epsilon_n}{\chi(\mu_n)+\epsilon_n}\dim_{\rm H}W_{n} \leq \mathbb{D}D (f|_J). \] Applying Corollary~\ref{proplema 2.0cor}, the above inequality, and again Corollary~\ref{proplema 2.0cor}, we obtain \[\begin{split} h( f|_{W_{n}},A\cap W_{n}) & \le (\chi(\mu_n) + \epsilon_n)\dim_{\rm H} (A\cap W_{n})\\ & < (\chi(\mu_n) - \epsilon_n)\dim_{\rm H} W_{n}\\ &\leq h(f|_{W_{n}}). \end{split}\] Hence, we can apply Proposition~\ref{proph} and obtain \[ h\big( f|_{W_{n}},E^+_{f|W_{n}}(A\cap W_{n})\big) = h(f|_{W_{n}}). \] Together with Lemma \ref{afirm2.1.1} applied to $W=W_{n}$ and $Y= E^+_{f|W_{n}}(A\cap W_{n})$ this implies \[\begin{split} \dim_{\rm H} E^+_{f|W_{n}}&(A\cap W_{n})\\ &\ge \frac{h\big( f|_{W_{n}}, E^+_{f|W_n}(A\cap W_{n})\big)} {h(f|_{W_{n}})}\frac{(\chi(\mu_n) -\epsilon_n)} {(\chi(\mu_n) +\epsilon_n)}\dim_{\rm H}W_{n}\\ &=\frac{(\chi(\mu_n) -\epsilon_n)} {(\chi(\mu_n) +\epsilon_n)}\dim_{\rm H}W_{n} . \end{split}\] Together with Lemma \ref{lemaqeaprox} this proves that \[ \liminf_{n\to\infty} \dim_{\rm H} E^+_{f|W_{n}}(A\cap W_{n}) \geq \mathbb{D}D(f|_J). \] Observe that, by Lemma~\ref{lem:invsubset}, it follows that \[ E^+_{f|W_{n}}(A\cap W_{n}) \subset E^+_{f|J}(A). \] Now property (H1) of the Hausdorff dimension implies $$ \dim_{\rm H} E^+_{f|J}(A) \geq \mathbb{D}D(f|_J), $$ which proves the theorem. \end{proof} \end{document}
\begin{document} \title{\LARGE \bf Strong Structural Controllability of Signed Networks } \thispagestyle{empty} \pagestyle{empty} \begin{abstract} In this paper, we discuss the controllability of a family of linear time-invariant (LTI) networks defined on a signed graph. In this direction, we introduce the notion of positive and negative signed zero forcing sets for the controllability analysis of positive and negative eigenvalues of system matrices with the same sign pattern. A sufficient combinatorial condition that ensures the strong structural controllability of signed networks is then proposed. Moreover, an upper bound on the maximum multiplicity of positive and negative eigenvalues associated with a signed graph is provided. \end{abstract} \section{Introduction} Thanks to the ubiquity and wide recent applications of networks, there has been a surge of interest in studying networked dynamical systems and their control. One of the fundamental problems pertinent to the control of networks is their controllability~\cite{tanner2004controllability}. In most cases, the exact values of the entries of the system matrices, that is, the connection weights of a network, are unknown or highly uncertain. Accordingly, finding alternative means of system analysis based on topological features of the underlying graph is of importance; these features are also instrumental in network design problems~\cite{egerstedt2012interacting,shima}. There are different works in the literature, adopting diverse points of view towards the controllability analysis of networks. In some works, the controllability of a particular dynamics, e.g., Laplacian dynamics, has been considered \cite{tanner2004controllability,mousavi2018controllability, mousavi2018laplacian}, while in other works, instead of a specific dynamics, a family of dynamical networks, all of which are defined on the same structure, has been studied. The second approach leads to the structural controllability analysis of networked systems, which is of interest in this work. In the structural controllability framework, the network is viewed in terms of the zero-nonzero pattern of the system matrices. In this direction, strong structural controllability results provide conditions ensuring controllability for \emph{all} systems with the same zero-nonzero pattern \cite{mayeda1979strong}. In the systems and control literature, different interpretations of strong structural controllability have been presented in terms of spanning cycles \cite{jarczyk2011strong}, constrained $t$-matchings \cite{chapman2013strong}, and zero forcing sets \cite{monshizadeh2014zero,trefois2015zero,mousavi2018structural, mousavi2017robust,mousavi2018null, mousavi2019strong}. Signed networks have recently attracted a lot of attention in the systems community; the controllability of this class of networks with a particular Laplacian dynamics has also been examined in a few works \cite{sun2017controllability,she2018controllability}. In fact, signed networks can represent a wide range of scenarios of practical interest, such as social networks and fault tolerant networks~\cite{altafini2013consensus,wasserman1994social}. In a signed network, the graph admits both positive and negative edges that indicate, respectively, cooperative or adversarial interactions among the nodes. As such, by considering a \emph{sign pattern} instead of a zero-nonzero pattern, not only can we capture the network structure, but we can also define a more restrictive family of networks that represents distinct qualitative features.
The notion of \emph{sign controllability}, which is strong structural controllability of networks with the same sign pattern, was first introduced in \cite{johnson1993sign} and examined for the special case of single-input systems with all nonzero entries as positive. These results were later extended to the multi-input case in \cite{tsatsomeros1998sign}, where signed networks are examined in the context of the so-called \emph{strict linear control systems} with some restrictive properties; for example, the diagonal entries of the system matrices should be nonzero and have the same sign. In \cite{tsatsomeros1998sign}, sufficient algebraic conditions for sign controllability of a network as well as necessary and sufficient conditions for the sign controllability of {strict linear control systems} have been presented; however, recognition of these algebraic conditions was proven to be NP-hard. More recently, in \cite{hartung2013characterization}, sign controllability of another family of networks has been analyzed, and an algebraic condition has been provided for systems whose sign pattern admits only real eigenvalues. However, the verification of these conditions is also NP-hard. The notion of zero forcing game, played on a graph to change the color of the nodes based on a coloring rule, was defined in \cite{work2008zero} for the minimum rank problem. Later, other variants of the zero forcing sets were introduced in~\cite{brimkov2019computational}. For example, in \cite{goldberg2014zero}, in order to study the minimum rank problem for symmetric matrices with the same sign pattern (with an undirected graph), a signed zero forcing set was defined. In this paper, we introduce the new notion of positive and negative signed zero forcing sets for a directed signed graph that can be utilized in providing an upper bound on the maximum geometric multiplicity of positive and negative eigenvalues of matrices with the same sign pattern. As the main contribution of this work, using the notion of signed zero forcing sets, strong structural controllability conditions for the zero, positive, and negative eigenvalues of matrices with the same sign pattern are provided. Furthermore, we present a sufficient condition for the strong structural controllability of signed networks, whose sign patterns admits only real eigenvalues. However, there is no restriction on the sign of diagonal entries, and we allow the self-loops of the signed networks not to have any specified signs. In \cite{eschenbach1991sign}, a complete characterization of such networks has been provided. For example, one can mention undirected networks with symmetric pattern matrices. A few examples are used throughout the paper to better illustrate the results. \begin{comment} \color{black}The notion of zero forcing set, \color{black}{related to a particular coloring of nodes of a graph}\color{black}, was first introduced in \cite{work2008zero} to study the minimum rank problem. This problem was motivated by the inverse eigenvalue problem of a graph whose goal is to obtain information about the possible eigenvalues of a family of patterned matrices; the first step to this aim is finding the maximum multiplicity of an arbitrary eigenvalue among all matrices in this family. \color{black}{With this in mind, using the notion of zero forcing sets, \cite{work2008zero} and \cite{barioli2009minimum} presented upper bounds on the maximum multiplicity of the \emph{zero eigenvalue} for loop-free undirected and loop directed graphs. 
\end{comment} \section{Preliminaries} We denote the set of real numbers by $\mathbb{R}$. For a vector $v$, $v_i$ is its $i$th entry; for a matrix $M$, $M_{ij}$ is the entry in row $i$ and column $j$. A subvector $v_X$ is comprised of the entries $v_i$, for $i\in X$, ordered lexicographically. We denote the transpose of the matrix $M$ by $M^T$. The $n\times n$ identity matrix is denoted by $I_n$, and its $j$th column is designated by $e_j$. We designate by $|S|$ the cardinality of the set $S$.
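As a brief illustration of this notation, if $v=\begin{bmatrix}5 & -1 & 2 \end{bmatrix}^T$ and $X=\{1,3\}$, then $v_X=\begin{bmatrix}5 & 2\end{bmatrix}^T$.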
The sign function $\mathrm{sign}(.):\mathbb{R}\rightarrow \{+,-,0\}$ returns the sign of a nonzero scalar, and we have $\mathrm{sign}(a)=0$ if and only if $a=0$. We also define the sign inversion function as $\mathrm{inv}(+)=-$ and $\mathrm{inv}(-)=+$. A \emph{zero-nonzero pattern} $P\in \{\times,0,?\}^{n\times n}$ is a matrix whose off-diagonal entries can be zero or nonzero, respectively denoted by $0$ or $\times$, and whose diagonal entries are chosen from the set $\{\times,0, ?\}$. The zero-nonzero pattern of a matrix $A$ is a matrix $P$ such that for $i\neq j$, $P_{ij}= 0$ if and only if $A_{ij}= 0$. Note that if $P_{ii}=?$, $A_{ii}$ can be either zero or nonzero. A \emph{sign pattern} $P_s\in \{+,-,0,?\}^{n\times n}$ is a matrix whose off-diagonal entries are from the set $\{+,-,0\}$, and whose diagonal entries belong to $\{+,-,0,?\}$. The sign pattern of a matrix $A$ is some $P_s$ such that for $i\neq j$, $(P_{s})_{ij}=\mathrm{sign}(A_{ij})$; also, $(P_s)_{ii}=\mathrm{sign}(A_{ii})$ whenever $(P_s)_{ii}\neq ?$. If $(P_s)_{ii}=?$, $A_{ii}$ can be zero or nonzero with either a positive or a negative sign. A \textit{graph} is denoted by $G=(V,E, P)$, where $V=\{1,\ldots,n\}$ is the vertex set and $E\subseteq V\times V$ is the edge set of the graph. We write $(i,j)\in E$ when there is an edge from the node $i$ to the node $j$. $P$ is a zero-nonzero pattern such that $(i,j)\in E$ whenever $P_{ji}\neq 0$. Note that in our setup, a graph $G$ can contain self-loops $(i,i)$ for some $i\in V(G)$; if we have $P_{ii}=?$ for some $1\leq i\leq n$, we assign the label $?$ to the self-loop $(i,i)\in E$, implying that $(i,i)$ may or may not appear in $G$. For $(i,j)\in E$, node $j$ (resp., node $i$) is an out-neighbor (resp., in-neighbor) of node $i$ (resp., node $j$). We denote by $N_{out}(i)$ the set of out-neighbors of node $i$. An undirected graph is a graph such that $(i, j)\in E(G)$ if and only if $(j, i) \in E(G)$; in this case, we write $\{i, j\} \in E(G)$, and node $j$ is referred to as a neighbor of node $i$. The matrix $P$ is symmetric for an undirected graph. A \textit{signed graph} $G_s$ is denoted by $G_s=(V,E, P_s)$, where $P_s$ is an $n\times n$ sign pattern such that $(i,j)\in E$ whenever $(P_s)_{ji}\neq 0$. Then, to every edge $(i,j)\in E$, where $i\neq j$, we can assign a label $+$ or $-$, which indicates whether the weight of the connection between the nodes $i$ and $j$ is positive or negative. Moreover, a self-loop $(i,i)$, $i\in V$, can be labeled with $+$, $-$, or $?$. This implies that the self-loop of node $i$ has a weight that can be positive, negative, or unspecified (including zero). We can also define an undirected signed graph associated with a symmetric $P_s$.
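As a simple illustration of these definitions, the matrix $A=\begin{bmatrix} 1 & -2\\ 0 & 3 \end{bmatrix}$ admits the sign pattern $\begin{bmatrix} + & -\\ 0 & + \end{bmatrix}$ as well as $\begin{bmatrix} ? & -\\ 0 & + \end{bmatrix}$, since a diagonal entry equal to $?$ leaves the corresponding entry of $A$ unrestricted.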
A \textit{looped graph} is obtained from a graph by putting a self-loop on every node $v\in V$ that does not already have one. Before precisely defining a looped graph, let us consider the indices $s$ and $r$ chosen respectively from the sets $\{\times, +,-,0,?\}$ and $\{+,-\}$. Now, we define the \emph{sign equations} as $?+s=?$, $0+s=s$, $r+\mathrm{inv}(r)=?$, $r+r=r$, and $\times+\times=?$. One can verify these equations by considering different scalars with the same pattern denoted by some $s\in \{\times, +,-,0,?\}$ and checking the pattern of the result. For example, by adding two positive (resp., negative) numbers, a positive (resp., negative) number is obtained, leading to $r+r=r$. On the other hand, by adding a positive and a negative number, the result may be positive, negative, or zero, implying that $r+\mathrm{inv}(r)=?$. Now, let $\mathcal{D}(\times)$, $\mathcal{D}(+)$, and $\mathcal{D}(-)$ be $n\times n$ diagonal pattern matrices whose diagonal entries are, respectively, $\times$, $+$, and $-$. Using the sign equations, for a given zero-nonzero pattern $P$ and a sign pattern $P_s$, let us define $P^{\times}=P+\mathcal{D}(\times)$, $P_s^+=P_s+ \mathcal{D}(+)$, and $P_s^-=P_s+ \mathcal{D}(-)$. Then, for a graph $G=(V,E,P)$, we define the \emph{looped graph} $G^{\times}=(V,E,P^{\times})$. Moreover, for a signed graph $G_s=(V,E,P_s)$, one has the \emph{positive looped graph} $G_s^+=(V,E,P_s^+)$ and the \emph{negative looped graph} $G_s^-=(V,E,P_s^-)$. \begin{ex} For the zero-nonzero pattern $P$ and the sign pattern $P_s$ defined as $$ P=\begin{bmatrix} ? & 0 & \times\\ 0 & \times & 0 \\ 0 & \times & 0 \end{bmatrix}, \:\:\: P_s=\begin{bmatrix} ? & - & 0 & 0\\ 0 & - & + & 0\\ 0 & 0 & + & 0\\0 & 0 & + & 0 \end{bmatrix},$$ the graphs $G$ and $G_s$ in Figs. \ref{G} (a) and \ref{Gs} (a) can be represented. Also, the looped graph $G^{\times}$ is shown in Fig. \ref{G} (b), and the positive and the negative looped graphs $G_s^+$ and $G_s^-$ are depicted in Figs. \ref{Gs} (b) and (c), respectively. \begin{figure} \caption{a) Graph $G$, b) looped graph $G^{\times}.} \label{G} \end{figure} \begin{figure} \caption{a) Graph $G_s$, b) positive looped graph $G_s^{+}$, c) negative looped graph $G_s^{-}.} \label{Gs} \end{figure} \end{ex} For an (undirected) graph $G=(V,E,P)$, the \emph{qualitative class}, denoted by $\mathcal{Q}(G)$, is defined as the set of all (symmetric) matrices in $\mathbb{R}^{n\times n}$ whose zero-nonzero pattern is $P$. Similarly, for an (undirected) signed graph $G_s=(V,E,P_s)$, the qualitative class $\mathcal{Q}_s(G_s)$ is the set of all (symmetric) matrices in $\mathbb{R}^{n\times n}$ whose sign pattern is $P_s$. We denote by $\Lambda(A)$ the set of eigenvalues of the matrix $A$. For an eigenvalue $\lambda\in \Lambda(A)$, the dimension of the subspace $\mathcal{S}_{A}(\lambda)=\{\nu\in \mathbb{R}^n \,|\, \nu^TA=\lambda \nu^T \}$ is called the geometric multiplicity of $\lambda$ and is denoted by $\psi_{A}(\lambda)$. For some $\mathcal{M}\subseteq \Lambda(A)$, we also define the maximum geometric multiplicity of eigenvalues of $A$ belonging to $\mathcal{M}$ as $\Psi_{\mathcal{M}}(A)=\max\{\psi_{A}(\lambda)\,|\, \lambda\in \mathcal{M}\}$.
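As a simple illustration, consider a signed graph $G_s$ with sign pattern $P_s=\begin{bmatrix} 0 & +\\ + & 0 \end{bmatrix}$. The matrices in $\mathcal{Q}_s(G_s)$ are exactly those of the form $\begin{bmatrix} 0 & a\\ b & 0 \end{bmatrix}$ with $a,b>0$ (with $a=b$ in the undirected case), and every such matrix has the two simple real eigenvalues $\pm\sqrt{ab}$, so that $\psi_{A}(\lambda)=1$ for each $\lambda\in\Lambda(A)$ and $\Psi_{\Lambda(A)}(A)=1$.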
\subsection{Problem Formulation} Consider an LTI network with the dynamics \begin{align} \dot{x}=Ax+Bu, \label{e1} \end{align} where $x\in \mathbb{R}^n$ is the state vector of the nodes, and $u\in \mathbb{R}^m$ is the control input; we refer to the matrices $A\in \mathbb{R}^{n\times n}$ and $B\in \mathbb{R}^{n\times m}$ as the system and input matrices, respectively. We let $A\in\mathcal{Q}_s(G_s)$ for some signed graph $G_s=(V,E,P_s)$; moreover, $B$ is defined as \begin{align} B=[e_{j_1},\ldots,e_{j_m}], \label{e2} \end{align} where the nodes $j_k$, $k=1,\dots,m$, are called \textit{control nodes}, and $V_C=\{j_1,\ldots,j_m\}$ is the set of control nodes. If, with a suitable choice of the input, we can transfer the state of the nodes from any initial state to any final state within a finite time, then we say that the network with the pair $(A,B)$ is controllable. As controllability is preserved under a similarity transformation, when the LTI system (\ref{e1}) is uncontrollable, there exists a nonsingular matrix $T\in \mathbb{R}^{n\times n}$ such that, for some $q<n$, \begin{equation} \Scale[.87]{T^{-1}AT=\begin{bmatrix}\hat{A}_{11} & \hat{A}_{12}\\ 0 & \hat{A}_{22}\end{bmatrix}, \; T^{-1}B=\begin{bmatrix}\hat{B}_1\\0\end{bmatrix}}, \label{decom} \end{equation} where $(\hat{A}_{11},\hat{B}_{1})$ is controllable, with $\hat{A}_{11}\in\mathbb{R}^{q\times q}$ and $\hat{B}_{1}\in\mathbb{R}^{q\times m}$. When $\lambda\in \Lambda(A)$ and $\lambda\notin \Lambda(\hat{A}_{22})$, it is called a \textit{controllable eigenvalue} of the system (\ref{e1}). On the other hand, we define $\lambda$ as an \textit{uncontrollable eigenvalue} if $\lambda\notin \Lambda(\hat{A}_{11})$. In this case, the input of the system cannot have any influence on $\lambda$. We can use the Popov-Belevitch-Hautus (PBH) test for checking the controllability of eigenvalues. \begin{pro}[\cite{sontag2013mathematical}] The eigenvalue $\lambda$ of $A$ in a system with dynamics (\ref{e1}) is controllable if and only if for every nonzero $w$ for which $w^{T}A=\lambda w^T$, we have $w^{T}B\neq0$. \end{pro} An eigenvalue $\lambda$ is called strongly structurally controllable if it is a controllable eigenvalue for all $A\in \mathcal{Q}_s(G_s)$ for which $\lambda\in \Lambda(A)$. Along the same lines, a \emph{signed} network with dynamics (\ref{e1}) defined on a signed graph $G_s=(V,E,P_s)$ is strongly structurally controllable if every $\lambda\in\Lambda(A)$ (for all $A\in\mathcal{Q}_s(G_s)$) is controllable. With a slight abuse of notation, in this case, we say that $(G_s, V_C)$ is controllable. Also, given a graph $G=(V,E,P)$, we say that $(G,V_C)$ is controllable if every $\lambda\in\Lambda(A)$ (for all $A\in\mathcal{Q}(G)$) is controllable. Our focus is on combinatorial characterizations of the strong structural controllability of positive, negative, and zero eigenvalues of a network; we then provide a sufficient condition for the strong structural controllability of signed networks whose sign patterns admit only real eigenvalues \cite{eschenbach1991sign}. \section{Zero Forcing Games} In this section, we first review the classical coloring rule and zero forcing sets for a graph $G$ \cite{barioli2009minimum}.
Then the notions of the \emph{signing and coloring rule} and \emph{signed zero forcing sets} introduced in \cite{goldberg2014zero} are presented. Finally, the new notions of \emph{positive} and \emph{negative signed zero forcing sets} are discussed. The following definitions can be utilized for undirected graphs by interpreting out-neighbors simply as neighbors. \subsection{Classical Zero Forcing Sets} Consider a graph $G=(V,E,P)$, and assume that some of its nodes are black, while the other nodes are white. The classical coloring rule is defined as follows. \emph{\textbf{Classical coloring rule:}} Let $v\in V$ be either white with $P_{vv}\neq ?$ or black. If $v$ has exactly one white out-neighbor $u$, change the color of $u$ to black. Next, we define classical and strong zero forcing sets. \begin{deff} Assume that $Z\subset V$ is the set of initially black nodes in the graph $G=(V,E,P)$. The set $Z$ is a \emph{classical zero forcing set} of $G$ if, by repetitively applying the classical coloring rule in $G$, all the nodes become black. \end{deff} \begin{deff} Consider the looped graph $G^{\times}=(V,E,P^{\times})$ associated with the graph $G=(V,E,P)$. We refer to a set $Z\subset V$ as a \emph{strong zero forcing set} of $G$ if, by repetitively applying the classical coloring rule in $G^{\times}$, all of its nodes become black. \end{deff} \begin{ex} Consider the graph $G$ in Fig. \ref{zfs} (a) with the initial set of black nodes $Z=\{2,4,5\}$. First, node 2 forces its only white out-neighbor, node 3, to become black, and then node 1 is forced by node 3 to become black. Since all nodes eventually become black through successive applications of the classical coloring rule, $Z$ is a classical zero forcing set of $G$. In addition, the looped graph $G^{\times}$ of the graph $G$ in Fig. \ref{nzfs} (a) is depicted in Fig. \ref{nzfs} (b). By performing the same chain of forces, one can see that $Z$ is a classical zero forcing set of $G^{\times}$, or equivalently, it is a strong zero forcing set of $G$. \begin{figure} \caption{An example for the classical coloring rule.} \label{zfs} \end{figure} \begin{figure} \caption{a) Graph $G$, b) the associated looped graph $G^{\times}$.} \label{nzfs} \end{figure} \end{ex} \subsection{Signed Zero Forcing Sets} The \emph{signed zero forcing game} is a signing and coloring game played on the nodes of a signed graph. At the start of this game, we assume that some nodes of the signed graph $G_s=(V,E,P_s)$ are colored black, and the others are white. Recall that a signed graph is a graph whose edges are labeled with a positive or negative sign. In this game, we also aim to assign a sign to the nodes of the graph. For a node $u\in V$, let $m(u)$ denote its sign. If a node is assigned zero, its color is changed to black. Otherwise, if $u$ is white and is marked with $+$ or $-$, we have $m(u)=+$ or $m(u)=-$. If a node is not marked and its sign is undetermined, then we write $m(u)=*$. Thus, the goal of the game is to blacken the nodes and to find the signs of the white nodes when possible. Note that before starting the game, we only have some black nodes in the graph, and none of the nodes are marked with a sign. At this initial step, we can simply take one white node, mark it with $+$, and proceed based on the coloring rule. Before stating the game rule, let us introduce some new notation. The letter $s$ is an index taking values from $\{+,-\}$. If $s=+$, then $\mathrm{inv}(s)=-$, and vice versa. For a node $v\in V$, let $W(v)=\{u\in N_{out}(v): u \mbox{ is white}\}$.
Then, $W(v)$ is the set of all white out-neighbors of the node $v$. Now, define $W_{+}(v)=\{u\in W(v): m(u)= (P_s)_{uv}\}$ and $W_{-}(v)=\{u\in W(v): m(u)=\mathrm{inv}( (P_s)_{uv})\}$. Accordingly, $W_{+}(v)$ (resp., $W_{-}(v)$) is the set of any white out-neighbor of the node $v$ which is marked, and its sign is the same as (resp., the opposite of) the sign of the edge connecting $v$ to it. Also, let $W_*(v)=\{ u \in W(v): m(u)= *\}$. Then, $W_*(v)$ is the set of white out-neighbors of $v$ that has not yet been marked. Now, consider a signed graph $G_s$ with all nodes colored either black or white, and some node of $G_s$ may be marked with $+$ or $-$. The rule of the game is stated as follows. \emph{\thetaextbf{Signing and coloring rule:}} Let $v\in V$ be either a black node or a white node with $(P_s)_{vv}\neq ?$ (then if $v$ is white, it has either no self-loops or a self-loop labeled with $+$ or $-$). \begin{enumerate} \item If $v$ has exactly one white out-neighbor $u$ (i.e. $W(v)=\{u\}$), then the color of $u$ is changed to black (note that $u$ and $v$ may be the same). \item If either $W_{+}(v)=W(v)$ or $W_{-}(v)=W(v)$, then all nodes in $W(v)$ become black. \item If all white out-neighbors of $v$ except one node $w$ are marked such that $W_{s}(v)\neq \emptyset$, $W_{\mathrm{inv}(s)}(v)=\emptyset$, and $W_*(v)=\{w\}$, then the unmarked node $w$ is marked with $P_{wv}.\mathrm{inv}(s)$. \item If there is no white node in $G_s$ that is marked, and $u\in V$ is white, then $u$ is marked with $+$. \end{enumerate} Note that the first clause of the rule is the same as the classical coloring rule. In what follows, for a signed graph, the definitions of a signed zero forcing set, a positive signed zero forcing set, and a negative signed zero forcing set are proposed. \begin{deff} Let $Z\subset V$ be a set of initially black nodes in the signed graph $G_s$. Apply the signing and coloring rules as many times as possible. The derived set of colored nodes of $Z$, denoted by $\mathcal{D}_c(Z)$, is defined as the final set of black nodes in $G_s$. Also, the derived set of marked nodes $\mathcal{D}_m(Z)$ is the set of any node $v$ with $m(v)=+\:\mathrm{or}\:-$ at the termination of the game. For an initial set of black nodes $Z$, if $\mathcal{D}_c(Z)=V$, then $Z $ is called a \emph{signed zero forcing set} of $G_s$. \end{deff} \begin{deff} For a signed graph $G_s=(V,E,P_s)$, consider the negative (resp., positive) looped graph $G^-_s=(V,E,P^-_s)$ (resp., $G^+_s=(V,E,P^+_s)$), and let $Z\subset V$ be the set of initially black nodes. Now, perform the signing and coloring rule in $G^-_s$ (resp., $G^+_s$) as many times as possible. The set of all nodes of $G_s$ that eventually become black in $G^-_s$ (resp., $G^+_s$) at the final stage of the game is called the positive (resp., negative) derived set of colored nodes of $Z$ and is denoted by $\mathcal{D}_c^+(Z)$ (resp., $\mathcal{D}_c^-(Z)$). Also, we denote the set of marked nodes at the termination of the game by $\mathcal{D}_m^+(Z)$ (resp., $\mathcal{D}_m^-(Z)$) and refer to it as the positive (resp., negative) derived set of marked nodes of $Z$. Now, given an initial set of black nodes $Z$, if $\mathcal{D}_c^+(Z)=V$ (resp., $\mathcal{D}_c^-(Z)=V$), $Z$ is called a \emph{positive signed zero forcing set} (resp., \emph{negative signed zero forcing set}) of $G_s$. 
The cardinality of a positive (resp., negative) signed zero forcing set of $G_s$ is called the positive (resp., negative) signed zero forcing number, and is denoted by ${\mathcal{Z}_s^+}$ (resp., $\mathcal{Z}_s^-$). \end{deff} \begin{ex} Consider the signed graph $G_s$ shown in Fig. \ref{szfs1} (a) where the nodes 4 and 5 are initially colored black. The different steps of applying the signing and coloring rule are shown in this figure. As shown in Fig. \ref{szfs1} (b), the 4th clause of the rule is applied, and node 1 is marked with $+$. Then, in Fig. \ref{szfs1} (c), we apply the 3rd clause of the rule for node 4 and mark its unmarked out-neighbor (node 2) with $-$. Next, the 2nd clause of the rule is performed for node 5, forcing nodes 1 and 2 to become black. Finally, the 1st clause of the rule is applied for node 2, and it forces its white out-neighbor to be black. Thus, since all nodes of the graph are finally black, the set $\{4,5\}$ is a signed zero forcing set of $G_s$. \begin{figure} \caption{An example of the signing and coloring rule. } \end{figure} Moreover, in Figs. \ref{npszfs} (a) and (b), the negative looped graph $G_s^-$ and the positive looped graph $G_s^+$ are respectively shown and through the application of a similar sequence of clauses of signing and coloring rule, we see that set $\{4,5\}$ is a signed zero forcing set of $G_s^-$ and $G_s^+$, and thus it is both a positive and a negative signed zero forcing set of $G_s$. \begin{figure} \caption{a) Negative looped graph $G_s^-$, b) positive looped graph $G_s^+$ (associated with $G_s$ in Fig. \ref{szfs1} \end{figure} \end{ex} \section{Strong Structural Controllability} In this section, we derive combinatorial conditions, ensuring strong structural controllability of a signed network. The next theorem provides a necessary and sufficient condition for controllability of $(G,V_C)$ in terms of classical zero forcing sets, where $G=(V,E,P)$. \begin{theo}[\cite{trefois2015zero}] Given a network with dynamics (\ref{e1}), defined on the graph $G$, $(G,V_C)$ is controllable if and only if $V_C$ is both a classical and a strong zero forcing set of $G$. \end{theo} \begin{ex} For a network with dynamics (\ref{e1}) with the graph $G$ in Fig. \ref{zfs} (a), if $V_C=\{2,4,5\}$, $(G,V_C)$ is controllable, since we have shown that $V_C$ is both a classical and a strong zero forcing set of $G$. Note that set $V'_C=\{4,5\}$ cannot render the network strongly structurally controllable. This is due to the fact that although $V'_C$ is a classical zero forcing set of $G$, it is not a strong zero forcing set. Indeed, for this network, the minimum number of control nodes ensuring the controllability of $(G, V_C)$ is 3. \end{ex} Now, we consider a signed network and characterize the controllability of its positive, negative, and zero eigenvalues in terms of the corresponding signed zero forcing sets. However, for the sake of brevity, we prove the results only for the positive eigenvalues; the proofs for the other cases are analogous. \begin{lem} Let $G_s$ be a signed graph with a set of initially black nodes $Z$ . Let $A\in\mathcal{Q}_s(G_s)$, and $\nu\in \mathbb{R}^n$ be a left eigenvector of $A$ associated with $\lambda ambda > 0$. If $\nu_i=0$ for all $i\in Z$, then $\nu_i=0$ for all $i\in \mathcal{D}_c^{+}(Z)$. Moreover, if for some nodes $i\in \mathcal{D}_m^{+}(Z)$ with $\nu_i\neq 0$, one has $m(i)=\mathrm{sign}(\nu_i)$, then for any $k \in \mathcal{D}_m^{+}(Z)$ with $\nu_k\neq 0$, $m(k)=\mathrm{sign}(\nu_k)$. 
\label{l2} \end{lem} \textit{Proof.} Assume that the signed zero forcing game on the negative looped graph $G_s^-$ can be performed in $K$ steps, in each of which exactly one clause of the signing and coloring rule is applied. In the first step, the nodes of $Z$ are colored black. Let $C_j$ and $M_j$ denote, respectively, the set of black nodes and the set of marked nodes after step $j$. Assume that $\nu_Z=0$. Note that for $j=1$, $C_j=Z$ and $M_j=\emptyset$. Also, $C_K=\mathcal{D}_c^{+}(Z)$ and $M_K=\mathcal{D}_m^{+}(Z)$. We claim that the statement of the lemma holds not only for $C_K$ and $M_K$, but for every $C_j$ and $M_j$ with $1\leq j\leq K$. The proof follows by a strong induction on $j$. It is clear that for $j=1$ the claim is true. Now, assuming that the result holds up to some step $j\geq 1$, let us show its validity for step $j+1$. Consider the $i$th column of the matrix equation $\nu^TA=\lambda \nu^T$, that is, $\sum_{k\in N_{out}(i)}\nu_k A_{ki}= \lambda \nu_i$. This equation implies that \begin{equation} \nu_i(A_{ii}-\lambda)+\sum_{k\in N_{out}(i)\setminus\{i\}}\nu_k A_{ki}= 0, \label{eq3} \end{equation} where $\lambda>0$. Thus, if $A_{ii}\leq 0$, we have $A_{ii}-\lambda<0$, and otherwise the sign of $A_{ii}-\lambda$ can be positive, negative, or zero. This can be represented in the graph by a self-loop on node $i$ labeled with $-$ or $?$. Accordingly, we assume that $v\in N_{out}(v)$ for every node $v\in V$, and the matrix $A$ is replaced with the matrix $A^-:=A-\lambda I$, whose sign pattern is compatible with $P_s^-$; this is exactly the setting of the game played on $G_s^-$. Now, assume that the first clause is applied in step $j+1$. In other words, there is a node $v$, either black or white with $(P^-_s)_{vv}\neq ?$, that has exactly one white out-neighbor $u$. Then, $C_{j+1}=C_{j}\cup \{u\}$. Also, since the entries of $\nu$ corresponding to black nodes vanish by the induction hypothesis, equation (\ref{eq3}) reduces to $\nu_u A^-_{uv}=0$; as $A^-_{uv}\neq 0$, we have $\nu_u=0$, which shows the validity of the claim in this case. Regarding the 4th clause of the rule, note that when no white node is marked, we can arbitrarily mark some white node $u$ with $+$; this is justified by the fact that $\nu$ and $-\nu$ are both left eigenvectors, and either of them can be chosen in this case. For the 2nd clause of the rule, equation (\ref{eq3}) simplifies to $\sum_{u\in W_+(v)}\nu_u A^-_{uv}=0$. Now, analogous to the proof of Theorem 3.2 in \cite{goldberg2014zero}, we claim that all summands in this sum have the same sign. Indeed, based on the definition, for all $u\in W_+(v)$ we have $m(u) A^-_{uv}>0$. Moreover, according to the induction hypothesis and without loss of generality, let us assume that $m(u)=\mathrm{sign}(\nu_u)$ for any $u\in W_+(v)$ with $\nu_u\neq 0$. Then all nonzero summands $\nu_u A^-_{uv}$ are positive; since they add up to zero, each of them must vanish, so $\nu_u=0$ for every $u\in W_+(v)=W(v)$, and the claim follows. Now, consider the case when the 3rd clause is applied. Equation (\ref{eq3}) in this case leads to $\sum_{u\in W_s(v)}\nu_u A^-_{uv} + \nu_w A^-_{wv}=0 $. Similar to the second case, we can prove that all summands of $\sum_{u\in W_s(v)}\nu_u A^-_{uv}$ have the same sign. Without loss of generality, assume that $m(i)=\mathrm{sign}(\nu_i)$ for all $i\in W_s(v)$ with $\nu_i\neq 0$. Based on equation (\ref{eq3}), if $\nu_w\neq 0$ we must have $\mathrm{sign}(\nu_w)=P_{wv}\,\mathrm{inv}(s)$. Moreover, based on the 3rd clause of the rule, $m(w)=P_{wv}\,\mathrm{inv}(s)$, and hence $m(w)=\mathrm{sign}(\nu_w)$. Hence, the claim remains valid in this case, completing the proof.
\carre The next theorem is one of the main results of the paper, providing a sufficient condition for strong structural controllability of positive (resp., negative/zero) eigenvalues of a signed network. \begin{theo} In an LTI network with a signed graph $G_s$, every positive (resp., negative/zero) eigenvalue of all $A\in \mathcal{Q}_s(G_s)$ is controllable if $V_C$ is a positive signed zero forcing set (resp., negative signed zero forcing set/signed zero forcing set) of $G_s$. \lambda abel{TH} \end{theo} \thetaextit{Proof.} We only provide the proof for controllability of positive eigenvalues for brevity. Suppose $V_C$ is a positive signed zero forcing set of $G_s$, but there is some $A\in\mathcal{P}_s(G_s)$ with an uncontrollable positive eigenvalue $\lambda ambda$. Then, there is a nonzero left eigenvector $\nu$ associated with $\lambda ambda>0$ such that $\nu^TB=0$, or equivalently, $\nu_i=0$, for all $i\in V_C$. Since $\mathcal{D}^{+}_c(V_C)=V$, from Lemma \ref{l2}, we have $\nu=0$; which is a contradiction. \carre The next example is mentioned in \cite{tsatsomeros1998sign}. \begin{ex} Consider a signed network with dynamics (\ref{e1}) with the signed graph $G_s$ in Fig. \ref{szfs1} (a), and let $V_C=\{4,5\}$. Since $V_C$ is a signed, a positive signed, and a negative signed zero forcing set of $G_s$, then based on Theorem \ref{TH}, zero, positive, and negative eigenvalues of the network are strongly structurally controllable. \end{ex} Theorem 2 leads to the next result, a sufficient condition for strong structural controllability of signed networks. \begin{theo} Consider an LTI network with a signed graph $G_s$ whose sign pattern admits only real eigenvalues. Then, $(G_s,V_C)$ is controllable if $V_C$ is a signed, a positive signed, and a negative signed zero forcing set of $G_s$. \lambda abel{th33} \end{theo} \begin{ex} For the undirected network shown in Fig. \ref{ex}(a), one can verify that $V_C$ is a signed, positive signed, and negative signed zero forcing set of the graph; as such based on Theorem \ref{th33}, we can deduce that the signed network is strongly structurally controllable. To demonstrate that $V_C$ is a signed zero forcing set, the steps of the signing and coloring rule are shown in Fig. \ref{ex}. \begin{figure} \caption{An example of the signing and coloring rule. } \end{figure} \end{ex} \begin{comment} \begin{ex} In Fig. \ref{ex} (a), a network is shown whose sign controllability has been studied in \cite{hartung2013characterization}. It was proven that by choosing $V_C=\{3\}$, a necessary condition for the strong structural controllability of this network is satisfied. However, since the corresponding sign pattern admits also complex eigenvalues, no statement regarding the sign controllability of the system could be made based on the results of that work. Now, one can verify that $V_C$ is a signed, positive signed, and negative signed zero forcing set of the graph; as such based on Theorem \ref{th33}, we can deduce that the signed network is strongly structurally controllable. To demonstrate that $V_C$ is a signed zero forcing set, the steps of the signing and coloring rule are shown in Fig. \ref{ex}. \begin{figure} \caption{An example of the signing and coloring rule. } \end{figure} \end{ex} \end{comment} In \cite{goldberg2014zero}, an upper bound on the maximum nullity of matrices with a symmetric sign pattern has been obtained. 
In the same direction, by using the notions of positive and negative signed zero forcing sets, we can provide an upper bound on the maximum geometric multiplicity of positive and negative eigenvalues of matrices with the same sign pattern. Let $\Lambda_+(A)$ (resp., $\Lambda_-(A)$) denote the set of positive (resp., negative) eigenvalues of the matrix $A$. \begin{pro} Consider a signed graph $G_s=(V,E,P_s)$ with the positive (resp., negative) signed zero forcing number $\mathcal{Z}_s^+$ (resp., $\mathcal{Z}_s^-$). Then, for all $A\in \mathcal{Q}_s(G_s)$, we have $\Psi_{\Lambda_{+}(A)}(A)\lambda eq~\mathcal{Z}_s^+$ (resp., $\Psi_{\Lambda_{-}(A)}(A)\lambda eq~\mathcal{Z}_s^-$). \lambda abel{p8} \end{pro} \thetaextit{Proof:} Here we state the proof for positive eigenvalues only. Assume that for some $\lambda ambda>0$, $\psi_A(\lambda ambda)=k$. Then, similar to the proof of Proposition 2.2 of \cite{work2008zero}, we can say that for every subset of nodes $X$ whose cardinality is $k-1$, there exists a nonzero $\nu\in \mathcal{S}_{A}(\lambda ambda)=\{\nu\in \mathbb{R}^n | \nu^TA=\lambda ambda \nu^T \}$ such that $\nu_i=0$, for every $i\in X$. Now, suppose that there exists a positive eigenvalue $\beta$ of some $A\in\mathcal{Q}_s(G)$ such that $\psi_A(\beta)> \mathcal{Z}_s^+$. Let $Z$ be a positive signed zero forcing set of $G_s$ with the cardinality $\mathcal{Z}^+_s$. Then, there is a nonzero eigenvector $\nu\in \mathcal{S}_A(\beta)$ for which we have $\nu_i=0$, for all $i\in Z$. Additionally, we know from Lemma \ref{l2} that with the positive signed zero forcing set $Z$, if $\nu_Z=0$, then $\nu=0$, as $\mathcal{D}^+_c(Z)=V$. Thus, we reach a contradiction. \carre \begin{comment} Before presenting the next result, let us introduce a new notation. Let $G_s=(V,E,P_s)$ be a signed graph. Denote by $G_s^?$ a signed graph with sign pattern $P_s+\mathcal{D}(?)$. Then, by putting a self-loop with label $?$ on each node of $G_s$, $G_s^?$ is obtained. Moreover, we say that all self-loops of a signed graph $G_s$ are uni-signed if for all $1\lambda eq i\lambda eq n$, $(P_s)_{ii}\in\{?,t\}$, where $t=+$ or $t=-$. \begin{pro} Let $G_s$ be a signed graph whose all self-loops are uni-signed and labeled with $+$ or $?$ (resp., $-$ or $?$). The set $Z\subset V$ is a positive (resp., negative) signed zero forcing set of $G_s$ if and only if $Z$ is a signed zero forcing set of $G_s^0$. \lambda abel{P2} \end{pro} \emph{Proof:} If $Z$ is a positive (resp., negative) signed zero forcing set of $G_s$, it is a signed zero forcing set of the graph $G_s^-$ (resp., $G_s^+$) with the signed pattern $P_s^-=P_s+\mathcal{D}(-)$ (resp., $P_s^+=P_s+\mathcal{D}(+)$). Then, one can verify that for all $1\lambda eq i\lambda eq n$, $(P_s^-)_{ii}=?$ (resp., $(P_s^+)_{ii}=?$). Note that when applying the signing and coloring rule, one can put a self-loop labeled with $?$ on any node without a self-loop, or remove any self-loop labeled with $?$ and obtain a graph with the same sequence of forces. Hence, we can conclude that $Z$ is a signed zero forcing set of $G_s^0$ if and only if it is a signed zero forcing set of $G_s^-$. \carre Now, let us consider a self-damped network. In such a system, the dynamics of any state is influenced by itself. Therefore, the underlying graph has a self-loop on each of its nodes. \begin{theo} Consider a signed self-damped network with dynamics (\ref{e1}) and the signed graph $G_s=(V,E,P_s)$ whose all self-loops are uni-signed. 
Then, $(G_s, V_C)$ is controllable if $V_C$ is a signed zero forcing set of $G_s^?$. \end{theo} \emph{Proof:} Without loss of generality, suppose that all self-loops of $G_s$ are labeled with $?$ or $+$. We first claim that the set $Z\subset V$ is a positive signed zero forcing set of $G_s$ if and only if $Z$ is a signed zero forcing set of $G_s^?$. If $Z$ is a positive signed zero forcing set of $G_s$, it is a signed zero forcing set of the graph $G_s^-$ with the signed pattern $P_s^-=P_s+\mathcal{D}(-)$. Then, one can verify that for all $1\lambda eq i\lambda eq n$, $(P_s^-)_{ii}=?$, meaning that $G_s^-=G_s^?$. \end{comment} \balance \section{Conclusion} Strong structural controllability of positive, negative and zero eigenvalues of LTI systems defined on the same signed graphs has been examined in this work. We introduced the notions of positive and negative signed zero forcing sets; these notions can be used to provide a set of control nodes for ensuring strong structural controllability of positive and negative eigenvalues of signed networks. Moreover, we have shown that a signed zero forcing set, as a set of control nodes, renders the zero eigenvalues strongly structurally controllable. Finally, an upper bound on the maximum multiplicity of the positive and negative eigenvalues of matrices with the same sign pattern has been presented. \end{document}
\begin{document} \title{On a theorem of Livsic} \author[A.~Aleman]{Alexandru Aleman} \address{Department of Mathematics, Lund University, P.O. Box 118, S-221 00 Lund, Sweden} \email{[email protected]} \author{R. T. W. Martin} \address{Department of Mathematics and Applied Mathematics, University of Cape Town, Cape Town, South Africa} \email{[email protected]} \author[W.T.~Ross]{William T. Ross} \address{Department of Mathematics and Computer Science, University of Richmond, Richmond, VA 23173, USA} \email{[email protected]} \begin{abstract} The theory of symmetric, non-selfadjoint operators has several deep applications to the complex function theory of certain reproducing kernel Hilbert spaces of analytic functions, as well as to the study of ordinary differential operators such as Schrodinger operators in mathematical physics. Examples of simple symmetric operators include multiplication operators on various spaces of analytic functions such as model subspaces of Hardy spaces, deBranges-Rovnyak spaces and Herglotz spaces, ordinary differential operators (including Schrodinger operators from quantum mechanics), Toeplitz operators, and infinite Jacobi matrices. In this paper we develop a general representation theory of simple symmetric operators with equal deficiency indices, and obtain a collection of results which refine and extend classical works of Krein and Livsic. In particular we provide an alternative proof of a theorem of Livsic which characterizes when two simple symmetric operators with equal deficiency indices are unitarily equivalent, and we provide a new, more easily computable formula for the Livsic characteristic function of a simple symmetric operator with equal deficiency indices. \end{abstract} \maketitle \section{Introduction} For $n \in \mathbb{N} \cup \{\infty\}$ let $\mathcal{S}_{n}(\mathcal{H})$ denote the set of simple, closed, symmetric, densely defined linear transformations $T: \mathscr{D}(T) \subset \mathcal{H} \to \mathcal{H}$ with deficiency indices $(n, n)$. By this we mean that $T$ is a linear transformation defined on a dense domain $\mathscr{D}(T)$ in a complex separable Hilbert space $\mathcal{H}$ which satisfies the properties: \begin{equation} \label{symmetric-defn} \langle T x, y \rangle = \langle x, T y \rangle, \quad \forall x, y \in \mathscr{D}(T), \quad \mbox{($T$ is \emph{symmetric})}; \end{equation} \begin{equation} \label{simple-defn} \bigcap_{\Im \lambda \not = 0} \mbox{Rng}(T - \lambda I) = \{0\}, \quad \mbox{($T$ is \emph{simple})}; \end{equation} \begin{equation} \mbox{$\{(x, T x): x \in \mathscr{D}(T)\}$ is a closed subset of $\mathcal{H} \oplus \mathcal{H}$}, \quad \mbox{($T$ is \emph{closed})}; \end{equation} \begin{equation} \dim \mbox{Rng}(T - i I)^{\perp} = \dim \mbox{Rng}(T + i I)^{\perp} = n \quad \mbox{(\emph{$T$ has equal deficiency indices})}. \end{equation} Condition \eqref{simple-defn} ($T$ is simple) can be restated equivalently as: $T$ is simple if there does not exist a (non-trivial) subspace invariant under $T$ such that the restriction of $T$ to this subspace is self-adjoint \cite{A-G}. We also point out that $$n = \dim \mbox{Rng}(T - w I)^{\perp} = \dim \mbox{Rng}(T - z I)^{\perp}, \quad \Im w > 0, \Im z < 0,$$ that is to say, the deficiency indices $ \dim \mbox{Rng}(T - w I)$ are constant for $w$ in the upper $\mathbb{C}_{+} := \{\Im z > 0\}$ and lower $\mathbb{C}_{-} := \{\Im z < 0\}$ half planes \cite[Section 78]{A-G}. 
As we will discuss later in this paper, examples of operators which satisfy the above properties include certain classes of Sturm-Liouville operators, Schr\"{o}dinger operators, unbounded Toeplitz operators on the Hardy space, and multiplication operators on various spaces of analytic functions on $\mathbb{C}_{+}$ and $\mathbb{C} \setminus \mathbb{R}$. The purpose of this paper is to rediscover and improve upon a theorem of Livsic \cite{Livsic-1, Livsic-2} (see Theorem \ref{intro-Livsic} below) which characterizes when $T_1 \in \mathcal{S}_n(\mathcal{H}_{1})$ and $T_2 \in \mathcal{S}_{n}(\mathcal{H}_2)$ are unitarily equivalent, written $T_{1} \cong T_2$. Let us review Livsic's theorem when $n < \infty$. For $T \in \mathcal{S}_n(\mathcal{H})$ we know, since $T$ has equal deficiency indices, that $T$ has (canonical) self-adjoint extensions $T': \mathscr{D}(T') \subset \mathcal{H} \to \mathcal{H}$. If $\{u_j\}_{j = 1}^{n}$ is an orthonormal basis for $\mbox{Rng}(T + i I)^{\perp} = \mbox{Ker}(T^* -i I)$, define the matrix-valued function $w_{T}$ on $\mathbb{C}_{+}$ by \begin{equation} \label{Livsic-char} w_T (z) := b(z) B(z) ^{-1} A(z), \quad z \in \mathbb{C}_{+}, \end{equation} where \begin{equation} \label{Blaschke-b} b(z) := \frac{z-i}{z+i}, \end{equation} is the single Blaschke factor defined on $\mathbb{C}_{+}$ with zero at $z = i$, $$B(z) := \left[ \langle \left( I + (z - i) (T' - z I)^{-1} \right) u_j, u_k\rangle \right]_{1 \leqslant j, k \leqslant n}, $$ and $$ A (z) := \left[ \langle \left( I + (z + i) (T' - z)^{-1} \right) u_j, u_k \rangle \right]_{1 \leqslant j, k \leqslant n}. $$ The function $w_T$ in \eqref{Livsic-char}, called the \emph{Livsic characteristic function} for $T$, is a contractive matrix-valued analytic function on $\mathbb{C}_{+}$. Moreover, given any contractive, matrix-valued analytic function $w$ on $\mathbb{C}_{+}$ with $w(i) = 0$ there is a closed, simple, symmetric, linear transformation $T$ with $w = w_{T}$ (however this symmetric linear transformation $T$ is not necessarily densely defined). The characteristic function $w_T$ of $T$ is essentially independent of the choice of self-adjoint extension $T'$ and the choice of orthonormal basis $\{ u_j \}_{j = 1}^{n}$, i.e., if $T' _k, k = 1, 2, $ are two self-adjoint extensions of $T$, $\{ u _j ^{(k)} \}_{j = 1}^{n}, k = 1, 2,$ are orthonormal bases of $\mbox{Ker}(T^* -i I)$, and $w_k, k = 1, 2,$ are the characteristic functions of $T$ constructed using the $T' _k$ and $\{ u_j ^{(k)} \}_{j = 1}^{n}$, then there exists two constant unitary matrices $Q$ and $R$ such that \begin{equation} \label{RQ-equiv} w_1(z) = R w_2(z) Q, \quad z \in \mathbb{C}_{+}. \end{equation} For this reason we say that two characteristic functions $w_1, w_2$ are \emph{equivalent} if condition \eqref{RQ-equiv} for some constant unitary matrices $Q$ and $R$. Livsic's theorem is the following: \begin{Theorem}[Livsic \cite{Livsic-1, Livsic-2}] \label{intro-Livsic} The operators $T_1 \in \mathcal{S}_n(\mathcal{H}_1)$ and $T_2 \in \mathcal{S}_n(\mathcal{H}_2)$ are unitarily equivalent if and only if $w_{T_1}$ and $w_{T_2}$ are equivalent. \end{Theorem} Lisvic's original proof of this result uses the spectral theorem for self-adjoint operators \cite{A-G}. Here is a brief sketch: One direction of the proof is straightforward. 
Indeed if $T_1$ and $T_2$ are unitarily equivalent, one can find self-adjoint extensions $T_i ', i = 1, 2,$ of the $T_i$ which are unitarily equivalent, and, if one uses these extensions to construct the characteristic functions $w_{T_i}$ as in \eqref{Livsic-char}, it follows that these characteristic functions will be equivalent in the sense of \eqref{RQ-equiv}. To prove the converse, if $w_{T_1}$ and $w_{T_2}$ are equivalent, one can, without loss of generality, assume they are equal (they can be made equal by choosing the orthonormal bases $\{ u _j ^{(1)} \}_{j = 1}^{n}, \{ u _j ^{(2)} \}_{j = 1}^{n}$ for $\mbox{Ker}(T_1 ^* -i I )$ and $\mbox{Ker}(T_{2}^{*} - i I)$ respectively, and the self-adjoint extensions $T_{1}^{'}, T_{2}^{'}$ used to construct the characteristic functions $w_{T_1 }, w_{T_2}$ appropriately). A calculation using \eqref{Livsic-char} shows that $w_{T_1} = w _{T_2}$ implies that $\Omega _1 = \Omega _2$ where \begin{equation} \label{Her-rep-intro} \Omega _k (z) := \frac{ I + w_{T_k} (z) }{ I - w_{T_k} (z)} = \frac{1}{i\pi} \int _{-\infty} ^\infty \frac{1}{t-z} \Lambda_k (dt), \end{equation} and $\Lambda_p$ are $n\times n$ unital positive matrix-valued measures such that $$\Lambda_p (\Delta ) = 4\pi ^2 (1+t^2) \left[ \langle \chi _\Delta (T' _p ) u_j ^{(p)} , u _k ^{(p)} \rangle \right]_{1 \leqslant j, k \leqslant n}, \quad p = 1, 2,$$ for any Borel subset $\Delta \subset \mathbb{R}$. Here $\chi _\Delta $ denotes the characteristic function of the Borel set $\Delta$, and $\chi _\Delta (T ' _p )$ defines a unital projection-valued measure using the functional calculus for self-adjoint operators. The uniqueness of the Herglotz representation in \eqref{Her-rep-intro}, along with the fact that $\Omega _1 = \Omega _2$, implies that $\Lambda_1 = \Lambda_2$. Since the $\{ u _j ^{(p)} \}_{j = 1}^{n}, p = 1, 2, $ are generating bases for the unitary operators $b(T'_{p})$ (this follows from the simplicity of the $T_p$), it follows that the $T'_{p}, p = 1, 2,$ and hence the $T_p$ are unitarily equivalent. In this paper we give an alternate proof of Livsic's theorem (Theorem \ref{intro-Livsic}) using reproducing kernel Hilbert spaces of analytic functions. In particular, in Theorem \ref{T-main-vector-kernel} below, we factor these reproducing kernels in a particular way which yields the Livsic characteristic function. By doing this we accomplish several things. First, our factorization of reproducing kernels technique gives us further insight into what makes Livsic's theorem work and lets us see the characteristic function in a broader context. Second, our alternate proof is more abstract and thus gives us more latitude in computing the characteristic function since computing $w_T$, as it is defined by \eqref{Livsic-char}, involves a self adjoint extension of $T$, which can be difficult to compute, as well as a resolvent, which is also difficult to compute. Third, by associating, in certain circumstances, $T \in \mathcal{S}_{n}(\mathcal{H})$ with multiplication by the independent variable on a deBranges-Rovnyak space, we can gain further information about some function theory properties of the associated Livsic function. Fourth, our proof handles the $n = \infty$ case for which $w_T$ becomes an contractive operator-valued analytic function on $\mathbb{C}_{+}$. 
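We pause to illustrate the scalar case of \eqref{Her-rep-intro} with a small worked example (included only as a sanity check and not used in what follows): take $n = 1$ and $w = b$, the single Blaschke factor from \eqref{Blaschke-b}, which is a contractive analytic function on $\mathbb{C}_{+}$ vanishing at $z = i$. Then $$\frac{1 + b(z)}{1 - b(z)} = \frac{(z+i) + (z-i)}{(z+i) - (z-i)} = \frac{2z}{2i} = -iz, \qquad \Re(-iz) = \Im z > 0, \quad z \in \mathbb{C}_{+},$$ so this quotient has positive real part throughout $\mathbb{C}_{+}$, exactly as it must for any function admitting a representation of the form \eqref{Her-rep-intro}, since $$\Re\left(\frac{1}{i\pi} \, \frac{1}{t - z}\right) = \frac{1}{\pi} \, \frac{\Im z}{|t - z|^{2}} \geqslant 0, \qquad t \in \mathbb{R}, \ z \in \mathbb{C}_{+}.$$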
The main results of this paper will be to (i) associate any $T \in \mathcal{S}_{n}(\mathcal{H})$ with a vector-valued reproducing kernel Hilbert space of analytic functions on $\mathbb{C} \setminus \mathbb{R}$ (Propositions \ref{H-Gamma} and \ref{H-Gamma2}); (ii) associate the kernel function for this space with the Livsic characteristic function (Theorem \ref{T-main-vector-kernel}); (iii) compute the Livsic function for the operators of differentiation and double differentiation, Sturm-Liouville operators, unbounded symmetric Toeplitz operators, and symmetric operators which act as multiplication by the independent variable in Lebesgue spaces, Herglotz spaces, and deBranges-Rovnyak spaces; (iv) show that $T$ is unitarily equivalent to multiplication by the independent variable on a Herglotz space (Theorem \ref{Mz-Herglotz}); (v) show that when $n < \infty$ and the Livsic characteristic function for $T$ is an extreme point of the unit ball of the $n\times n$ matrix-valued bounded analytic functions on $\mathbb{C}_{+}$, then $T$ is unitarily equivalent to multiplication by the independent variable on an associated vector-valued deBranges-Rovnyak space (Corollary \ref{Mz-DR}); (vi) use this equivalence to show, when the Livsic function $V$ for $T$ is an extreme point, that the angular derivative of $(V \circ b^{-1}) \vec{k}$ at $z = 1$ does not exist for any $\vec{k} \in \mathbb{C}^{n}$ (Corollary \ref{Mz-DR}). Finally we mention that some of the results we prove here, like Livsic's theorem and the fact that every $T \in \mathcal{S}_{n}(\mathcal{H})$ can be realized as multiplication by the independent variable on some Lebesgue space, are known (and we will certainly point out the original sources), but the main wrinkle here is that they can be obtained via reproducing kernel Hilbert spaces and factorization of kernel functions for these spaces. Moreover, via deBranges-Rovnyak spaces, we gain some additional information about the Livsic function. As demonstrated above with the sketch of the proof of Livsic's theorem, the original proofs used the spectral theorem, which certainly adds efficiency and utility (and even elegance) but not computability. We would be remiss if we did not point out a paper of Poltoratski and Makarov \cite{MR2215727} which uses a different model than ours to associate operators with inner functions and classical model spaces of the upper-half plane. In particular they use these results to solve specific problems associated with Schr\"{o}dinger operators. \section{A model operator} The main idea, going back to Krein \cite{MR1466698, MR0011170, MR0011533, MR0048704}, and used many times before \cite{MR1784638, Langer, Martin1}, in examining symmetric operators is to associate a vector-valued reproducing kernel Hilbert space of analytic functions with the symmetric operator.
For $T \in \mathcal{S}_n(\mathcal{H})$, $n \in \mathbb{N} \cup \{\infty\}$, let $\mathcal{K}$ be any complex separable Hilbert space whose dimension is $$n = \dim \mbox{Rng}(T+ i I)^{\perp} = \dim \mbox{Rng}(T - i I)^{\perp}.$$ When $n \in \mathbb{N}$, one usually takes $\mathcal{K}$ to be $\mathbb{C}^{n}$, with the standard inner product $$\langle \vec{z}, \vec{w}\rangle_{\mathbb{C}^n} := \sum_{j = 1}^{n} z_{j} \overline{w_j}.$$ \subsection{The model} If $\mathcal{B}(\mathcal{K}, \mathcal{H})$ is the space of bounded linear operators from $\mathcal{K}$ to $\mathcal{H}$, we say that $\Gamma: \mathbb{C} \setminus \mathbb{R} \to \mathcal{B}(\mathcal{K}, \mathcal{H})$ is a \emph{model} for $T$ if $\Gamma$ satisfies the following conditions: \begin{equation} \label{I} \Gamma: \mathbb{C} \setminus \mathbb{R} \to \mathcal{B}(\mathcal{K}, \mathcal{H}) \quad \mbox{is co-analytic}; \end{equation} \begin{equation} \Gamma(\lambda): \mathcal{K} \to \mbox{Rng}(T - \lambda I)^{\perp} \quad \mbox{is invertible for each $\lambda \in \mathbb{C} \setminus \mathbb{R}$}; \end{equation} \begin{equation} \Gamma(z)^{*} \Gamma(\lambda): \mathcal{K} \to \mathcal{K} \quad \mbox{is invertible for all $\lambda, z \in \mathbb{C}_{+}$ or $\lambda, z \in \mathbb{C}_{-}$}; \label{equation:condition} \end{equation} \begin{equation} \bigvee_{\Im \lambda \not = 0} \mbox{Rng} \Gamma(\lambda) = \mathcal{H}, \end{equation} where $\bigvee$ denotes the closed linear span. \begin{Proposition} Every $T \in \mathcal{S}_{n}(\mathcal{H})$ has a model. \label{prop:model} \end{Proposition} The proof of this proposition needs a little setup. Given a closed densely defined operator $T$ with domain $\mathscr{D} (T) \subset \mathcal{H}$, a point $z \in \mathbb{C}$ is called a \emph{regular point} of $T$ if $T -z I$ is bounded below on $\mathscr{D} (T)$, i.e., $\|(T - z I) x\| \geqslant c_{z} \|x\|$ for all $x \in \mathscr{D}(T)$. Let $\Omega $ denote the set of all regular points of $T$. If $T \in \mathcal{S} _n (\mathcal{H})$, then since $T$ is symmetric we have $\mathbb{C} \setminus \mathbb{R} \subset \Omega \subset \mathbb{C}$. $T$ is called \emph{regular} if $\Omega = \mathbb{C}$. For any $w \in \Omega$, let $$\mathscr{D} _w := \mathscr{D} (T) + \mbox{Ker}(T^* -w I ),$$ and define $T_w := \overline{T^* | \mathscr{D} _w}$, the closure of $T^*|\mathscr{D}_{w}$. \begin{lemma} Suppose that $w \in \Omega$ is a regular point of $T \in \mathcal{S} _n (\mathcal{H})$. The spectrum of $T_w$ is contained in $\overline{\mathbb{C} _{+}} $, $\mathbb{R}$ or $\overline{\mathbb{C} _{-}}$ when $w \in \mathbb{C} _+ , \ \mathbb{R} \cap \Omega $ or $\mathbb{C} _-$, respectively. \end{lemma} \begin{proof} We will prove the lemma when $w \in \mathbb{C} _+$. The proofs of the other two cases are analogous. Note that when $w \in \mathbb{R} \cap \Omega$ a proof that $T_w$ is in fact self-adjoint can be found in \cite[Section 83]{A-G}. First we show, for any $u \in \mathscr{D} _w$, that $\Im \left( \langle T_w u , u \rangle \right) \geqslant 0$ (assuming $w \in \mathbb{C} _+$). Since $\mathscr{D} _w$ is by definition a core for $T_w$, this will prove that $T_w -z I $ is bounded below for any $z \in \mathbb{C} _-$. Any $u \in \mathscr{D} _w$ can be written as $u = v + \psi _w $, where $v \in \mathscr{D}(T)$ and $\psi _w \in \mbox{Ker} (T^* -w I)$.
It follows that \begin{eqnarray} \langle T_w u , u \rangle & = & \langle T v + w \psi _w , v + \psi _w \rangle \nonumber \\ & = & \langle T v , v \rangle + (\overlineerline{w} \langle v , \psi _w \rangle + w \langle \psi _w , v \rangle ) + w \| \psi _w \| ^2 , \end{eqnarray} and so \[ \Im \langle T_w u , u \rangle = \Im(w) \| \psi _w \| ^2 \geqslant 0. \] To show that the spectrum of $T_w$ is contained in $\overlineerline{\mathbb{C}_{+}}$, it remains to verify that $T_w - z I $ is onto for any $z \in \mathbb{C} _-$. First we show that $T_w - \overlineerline{w} I$ is onto. If $\phi \in \mathcal{H}$ and $\phi \perp \mbox{Rng} (T_w - \overlineerline{w} I )$ then $\phi \perp \mbox{Rng} (T - \overlineerline{w} I)$, so that $\phi \in \mbox{Ker}(T^* -w I)$, and \[0 = \langle (T_w -\overlineerline{w} I ) \phi , \phi \rangle = (w -\overlineerline{w} ) \| \phi \| ^2. \] Since $w \notin \mathbb{R}$ this shows that $\phi =0$ and proves that $T_w - \overlineerline{w} I$ is onto. The fact that $\Im \langle T_w u , u \rangle \geqslant 0$ for any $u \in \mathscr{D} _w$ implies that every $z \in \mathbb{C} _-$ is regular for $T_w$. By \cite[Section 78]{A-G} the dimension of $\mbox{Rng} ( T_w - z I ) ^\perp $ is constant for $z$ in any connected component of the set of regular points of $T_w$. It follows that $T_w -z I$ is onto for all $z \in \mathbb{C} _-$. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:model}] This proof is adapted from a resolvent formula from \cite[Sec.~84]{A-G}. Let $T'$ be any (canonical) self-adjoint extension of $T$ and note that for each fixed $\lambda \not \in \mathbb{R}$, the operator $T' - \lambda I: \mathscr{D}(T') \to \mathcal{H}$ is onto. Define \begin{equation} \label{K-form} U_{\lambda} := (T' - i I) (T' - \lambda I)^{-1} = I + (\lambda - i) (T' - \lambda I)^{-1} \end{equation} and observe that $U_{\lambda} : \mathcal{H} \to \mathcal{H}$ is one-to-one and onto. Moreover, for any $x \in \mathcal{H}$, the map $\lambda \mapsto U_{\lambda} x$ is an $\mathcal{H}$-valued analytic function on $\mathbb{C} \setminus \mathbb{R}$. By \cite[Sec.~84]{A-G} $U_{\lambda}$ is a bounded invertible operator from $\mbox{Rng}(T + iI)^{\perp}$ onto $\mbox{Rng}(T - \overlineerline{\lambda} I)^{\perp}$. Indeed, the inverse of $U_{\lambda}$ is $$(T' - \lambda) (T' - i I)^{-1}.$$ Recall that $\mathcal{K}$ is any complex separable Hilbert space whose dimension is equal to $$\dim \mbox{Rng}(T - i I)^{\perp} = \dim \mbox{Rng}(T + i I)^{\perp} = n.$$ Let $j$ be any bounded isomorphism from $\mathcal{K}$ onto $\mbox{Rng}(T + i I)^{\perp}$. Finally define \begin{equation} \label{Krein-trick} \Gamma(\lambda) := U_{\overlineerline{\lambda}} j. \end{equation} With the exception of condition (\ref{equation:condition}), one can easily check that $\Gamma$ satisfies the conditions of a model for $T$. We will now show that $\Gamma( z ) ^* \Gamma (\lambda)$ is invertible whenever $z,\lambda $ are both in $\mathbb{C} _+$ or both in $\mathbb{C} _-$. In particular, this will prove that $\Gamma$ obeys condition (\ref{equation:condition}). Without loss of generality assume that $\mathcal{K} := \mathbb{C} ^n$ with canonical orthonormal basis $\{ e_k \}_{k = 1}^{n}$. When $n=\infty$, we define $\mathbb{C} ^\infty := \ell^2(\mathbb{N})$, the Hilbert space of square-summable sequences of complex numbers. Since $j$ is a bounded isomorphism, we see that the set $\{ \gamma _k (i) \} _{k=1} ^n$ where $\gamma _k (i) := j e_k$ is a basis (in fact a Riesz basis) for $\mbox{Ker} (T^* -i I)$. 
For any $\lambda \in \mathbb{C} _+$, the set $\{\gamma _k (\lambda)\}_{k = 1}^{n}$ where $\gamma _k (\lambda) := U_\lambda \gamma _k (i)$ is a basis (in general non-orthonormal) for $\mbox{Ker}(T^* -\lambda I)$. It follows that we have the matrix representation $$ \Gamma (\overlineerline{z} ) ^* \Gamma ( \overlineerline{\lambda} ) = \left[ \langle \gamma _j (\lambda ) , \gamma _k (z) \rangle \right]_{1 \leqslant j, k \leqslant n} \label{matrixrep} $$ for $z, \lambda \in \mathbb{C} \setminus \mathbb{R}$. We will need the fact that \{$\gamma _j (z )\}_{j = 1}^{n}$ is actually a Riesz basis for $\mbox{Ker}(T^* - z I )$, \emph{i.e.}, the image of an orthonormal basis under a bounded, invertible operator. This implies there are constants $0 < c \leqslant C$ such that for any $\psi \in \mbox{Ker}(T^* -z I ) $, \begin{equation} c \| \psi \| ^2 \leqslant \sum_{j = 1}^{n} | \langle \gamma _j (z) , \psi \rangle | ^2 \leqslant C \| \psi \| ^2 . \label{Riesz} \end{equation} To see this, choose an orthonormal basis $\{ \delta _j (z ) \}_{j = 1}^{n}$ for $\mbox{Ker}(T^* - z I )$ and let $U : \mbox{Ker}(T^* - z I) \rightarrow \mathbb{C} ^n$ be the isometry defined by $U \delta _j (z) = e_j $. It follows that the linear map $$V : \mbox{Ker}(T^* - z I ) \rightarrow \mbox{Ker}(T^* - z I), \quad V := U_z j U $$ is invertible. Hence for any $\psi \in \mbox{Ker}(T^* -z I) $, $$ \sum_{j = 1}^{n} | \langle \gamma _j (z) , \psi \rangle | ^2 = \sum_{j = 1}^{n} | \langle \delta _j (z) , V^* \psi \rangle | ^2 = \| V^* \psi \| ^2, $$ and since $V^*$ is bounded above and below (because $V$ is invertible), equation (\ref{Riesz}) follows. Observe that if $\vec{c} \in \mathbb{C} ^n$ we have \[ \Gamma (\overlineerline{z}) ^* \Gamma (\overlineerline{\lambda}) \vec{c} = ( \langle \gamma _k (\lambda ) , \psi _{\vec{c}} (z) \rangle ) _{k=1} ^n, \] where \[ \psi _{\vec{c}} (z) = \sum _{k=1} ^n \overlineerline{ c_k } \gamma _k (z) = \Gamma (\overlineerline{\lambda} ) \vec{c} _{c} . \] Here $\vec{c} _c$ denotes the component-wise complex-conjugate of the vector $\vec{c}$. If $\vec{c}$ has unit norm, then since $\Gamma (\lambda ) : \mathbb{C} ^n \rightarrow \mbox{Ker}(T^* -\overlineerline{\lambda} I )$ is bounded and invertible, it follows that there are constants $c(z), C(z) > 0$ such that $$c(z) \leqslant \| \psi _{\vec{c}} (z) \| \leqslant C(z), \quad \forall \vec{c} \in \mathbb{C}^{n}, \|\vec{c}\| = 1.$$ Hence in order to prove that $\Gamma (\overlineerline{z} ) ^* \Gamma (\overlineerline{\lambda})$ is bounded below, it suffices to show, for any unit norm $\psi ( z) \in \mbox{Ker}(T^* - z I)$, that the sequence $$\vec{b} := ( \langle \gamma _k (\lambda) , \psi (z) \rangle )_{k = 1}^{n} $$ is bounded below in the norm of $\mathbb{C} ^n$. Now suppose that both $z$ and $\lambda$ belong to $\mathbb{C} _+$ or both belong to $\mathbb{C} _-$ and assume that $\Gamma(\overlineerline{z}) ^* \Gamma (\overlineerline{\lambda} )$ is not bounded below. Then, by the discussion above, there exists a sequence of unit norm vectors $\psi _k (z) \in \mbox{Ker}(T^* -z I)$, such that \[ \vec{b} _k := ( \langle \gamma _j (\lambda) , \psi _k (z) \rangle ) _{j=1} ^n \rightarrow 0 \] in $\mathbb{C} ^{n}$ norm as $k \to \infty$. Let $\phi _k = P \psi _k (z)$ be the projection of $\psi _k (z) $ onto $\mbox{Rng} (T - \overlineerline{\lambda} I) = \mbox{Ker}(T^* -\lambda I)^\perp$, where $I-P$ is the projection onto $\mbox{Ker}(T^* -\lambda I)$. 
Since we assume that the $\psi _k (z)$ all have unit norm, we see that $\| \phi _k \| = \| P \psi _k (z) \| \leqslant 1$ for all $k$. Since $\{ \gamma _j (\lambda) \}_{j = 1}^{n}$ is actually a Riesz basis for $\mbox{Ker}(T^* -\lambda I)$, it can be shown that $\psi _k (z) - \phi_k \rightarrow 0$ in norm as $k \rightarrow \infty$ so that $\| \phi _k \| \rightarrow 1$ . To see this last fact note that by equation (\ref{Riesz}) there is a constant $c >0$ such that \begin{eqnarray} \| \psi _k (z) - \phi _k \| ^2 & = & \| (I-P ) \psi _k (z) \| ^2\\ & \leqslant & \frac{1}{c} \sum _j | \langle \gamma _j (\lambda) , (I-P ) \psi _k (z) \rangle | ^2 \nonumber \\ & = & \frac{1}{c} \sum _j | \langle \gamma _j (\lambda) , \psi _k (z) \rangle | ^2 \nonumber\\ & = & \frac{1}{c} \| \vec{b} _k \| ^2 _{\mathbb{C} ^n}, \end{eqnarray} which vanishes as $k \rightarrow \infty$ by assumption. Since $\phi _k \in \mbox{Rng} (T- \overlineerline{\lambda} I ) $, it follows that $\phi _k = (T- \overlineerline{\lambda} I ) \varphi _k$ for some sequence $\varphi _k \in \mathscr{D} (T)$. Now consider \begin{eqnarray} (T-z I) \varphi _k & = &( T - \overlineerline{\lambda} I) \varphi _k - (z -\overlineerline{\lambda}) \varphi _k \nonumber \\ & = & \phi _k - (z -\overlineerline{\lambda}) \varphi _k. \label{eq:bndbelow} \end{eqnarray} We want to show that this vanishes as $k \rightarrow \infty$ and that $\| \varphi _k \| $ is uniformly bounded below in norm. This will show that $T-z I$ is not bounded below, contradicting the fact that $z \in \mathbb{C} _\pm $ is a regular point for $T$. Since $\overlineerline{\lambda} \in \mathbb{C} _\mp$ is not in the spectrum of $T_z$, and each eigenvector $\psi _k (z)$ is an eigenvector of $T_z$ corresponding to the eigenvalue $z$, we can write \begin{eqnarray} (z -\overlineerline{\lambda } ) ^{-1} \phi _k -\varphi _k & = & \left( (z -\overlineerline{\lambda } ) ^{-1} - (T_z -\overlineerline{\lambda}I ) ^{-1} \right) \phi _k \nonumber \\ & =& \left( (z -\overlineerline{\lambda } ) ^{-1} - (T_z -\overlineerline{\lambda} I) ^{-1} \right) (\phi _k - \psi _k (z) ). \end{eqnarray} This vanishes as $k \to \infty$ since $\psi _k (z) - \phi _k \rightarrow 0$ in norm. Moreover, by equation (\ref{eq:bndbelow}), $$ | z - \overlineerline{\lambda} | \| \varphi _k \| \geqslant \| \phi _k \| - \| (T-z I) \varphi _k \| \rightarrow 1, $$ which shows that the $\| \varphi _k \| $ are uniformly bounded below in norm. This implies $T-z I$ is not bounded below, contradicting the assumption that $z \in \mathbb{C} \setminus \mathbb{R}$. The above proves that $\Gamma(z)^* \Gamma (\lambda )$ is bounded below whenever $z, \lambda \in \mathbb{C} _\pm$. Since the adjoint of $\Gamma (z) ^* \Gamma (\lambda)$ is $\Gamma (\lambda )^* \Gamma (z)$, and $\Gamma (z) ^* \Gamma (\lambda)$ is not onto if and only if its adjoint has non-zero kernel, this actually proves that $\Gamma (z) ^* \Gamma (\lambda )$ is invertible whenever $z, \lambda \in \mathbb{C} _+$ or $z,\lambda \in \mathbb{C} _-$. \end{proof} \begin{Remark} The key point of this, perhaps overly formal, approach is that one is free to choose the model and is not restricted to the one given by the above Krein construction. We will give many examples, and take advantage, of this freedom below. \end{Remark} \subsection{The model space} For a model $\Gamma$ we now define an associated vector-valued Hilbert space of analytic functions $\mathcal{H}(\Gamma)$ associated with our underlying Hilbert space $\mathcal{H}$ on which $T$ acts. 
For $f \in \mathcal{H}$ define $$\widehat{f}: \mathbb{C} \setminus \mathbb{R} \to \mathcal{K}, \quad \widehat{f}(\lambda) := \Gamma(\lambda)^{*} f,$$ and $$\mathcal{H}(\Gamma) := \left\{\widehat{f}: f \in \mathcal{H}\right\}.$$ \begin{Proposition} \label{H-Gamma} With an inner product on $\mathcal{H}(\Gamma)$ defined by $$\langle \widehat{f}, \widehat{g} \rangle_{\mathcal{H}(\Gamma)} := \langle f, g \rangle_{\mathcal{H}},$$ $\mathcal{H}(\Gamma)$ is a vector-valued reproducing kernel Hilbert space of analytic functions on $\mathbb{C} \setminus \mathbb{R}$. Moreover, the reproducing kernel function for $\mathcal{H}(\Gamma)$ is $$K_{\lambda}(z) = \Gamma(z)^{*} \Gamma(\lambda),$$ i.e., for any $a \in \mathcal{K}$ and $\widehat{f} \in \mathcal{H}(\Gamma)$, \begin{equation} \label{RKHS} \langle \widehat{f} (\lambda ) , a \rangle_{\mathcal{K}} = \langle \widehat{f}, K_{\lambda}(\cdot) a \rangle_{\mathcal{H}(\Gamma)}. \end{equation} \end{Proposition} \begin{proof} The only significant things to check here are that (i) $\|\widehat{f}\|_{\mathcal{H}(\Gamma)} = 0$ if and only if $\widehat{f}(\lambda) = 0$ for all $\lambda \in \mathbb{C} \setminus \mathbb{R}$; and (ii) the reproducing kernel formula from \eqref{RKHS}. Fact (i) follows from the fact that $T$ is simple. Indeed, for each $\lambda \in \mathbb{C} \setminus \mathbb{R}$, $$\widehat{f}(\lambda) := \Gamma(\lambda)^{*} f = 0_{\mathcal{K}} \Leftrightarrow f \in \mbox{Ker} (\Gamma(\lambda)^{*} ) = \mbox{Rng}(T - \lambda I).$$ Thus if $\widehat{f}(\lambda) = 0$ for all $\lambda \in \mathbb{C} \setminus \mathbb{R}$, we use \eqref{simple-defn} to see that $f = 0_{\mathcal{H}}$. To prove (ii) let $\widehat{f} \in \mathcal{H}(\Gamma)$ and $a \in \mathcal{K}$. Then \begin{align*} \langle \widehat{f}, K_{\lambda}(\cdot) a\rangle_{\mathcal{H}(\Gamma)} & = \langle \Gamma(\cdot)^{*} f, \Gamma(\cdot)^{*} \Gamma(\lambda) a \rangle_{\mathcal{H}(\Gamma)}\\ & = \langle f, \Gamma(\lambda) a \rangle_{\mathcal{H}}\\ & = \langle \Gamma(\lambda)^{*} f, a \rangle_{\mathcal{K}}\\ & = \langle \widehat{f}(\lambda), a \rangle_{\mathcal{K}}, \end{align*} which proves the reproducing kernel formula in \eqref{RKHS}. \end{proof} \begin{Proposition} \label{H-Gamma2} For $T \in \mathcal{S}_{n}(\mathcal{H})$ with model $\Gamma$, the operator $M_{\Gamma}$ on defined on $$\mathscr{D}(M_{\Gamma}) := \left\{\widehat{f} \in \mathcal{H}(\Gamma): z \widehat{f} \in \mathcal{H}(\Gamma)\right\}$$ by $M_{\Gamma} \widehat{f} = z \widehat{f}$ is densely defined and belongs to $\mathcal{S}_{n}(\mathcal{H}(\Gamma))$. Moreover, $M_{\Gamma}$ is unitarily equivalent to $T$. \end{Proposition} \begin{proof} Let $$U: \mathcal{H} \to \mathcal{H}(\Gamma), \quad (U f)(z) := \widehat{f}(z) = \Gamma(z)^{*} f$$ and note by Proposition \ref{H-Gamma} that $U$ is an isometric isomorphism. We need to show that if $$\mathscr{D}(M_{\Gamma}) := \left\{\widehat{f} \in \mathcal{H}(\Gamma): z \widehat{f} \in \mathcal{H}(\Gamma)\right\},$$ then \begin{equation} \label{UTdomains} U \mathscr{D}(T) = \mathscr{D}(M_{\Gamma}) \end{equation} and \begin{equation} \label{intertwine} U T = M_{\Gamma} U. \end{equation} Let us first show the $\supset$ containment in \eqref{UTdomains}. Indeed let $U f \in \mathscr{D}(M_{\Gamma})$, i.e., $z U f \in \mathcal{H}(\Gamma)$. 
Then for any $\lambda \in \mathbb{C} \setminus \mathbb{R}$ the function $$z \mapsto (z - \lambda) U f$$ is a function in $\mathcal{H}(\Gamma)$ which is zero at $\lambda$ and so, using the fact from the proof of the previous proposition that \begin{equation} \label{Gamma-star-0} \widehat{f}(z) = \Gamma(z)^{\ast} f = 0 \Leftrightarrow f \in \mbox{Rng}(T - z I), \end{equation} we see that $$(z - \lambda) U f = U f_{\lambda}$$ for some $f_{\lambda} \in \mbox{Rng}(T - \lambda I)$. Thus $f_{\lambda} = (T - \lambda I) g_{\lambda}$, where $g_{\lambda} \in \mathscr{D}(T)$. We now need to prove that $f = g_{\lambda}$. Indeed, \begin{align*} (z - \lambda) \Gamma(z)^{*} f & = (z - \lambda) (U f)(z)\\ & = (U f_{\lambda})(z)\\ & = (U (T - \lambda I) g_{\lambda})(z)\\ & = \Gamma(z)^{*} (T - \lambda I) g_{\lambda}\\ & = \Gamma(z)^{*} ((T - z I) g_{\lambda} + (z - \lambda) g_{\lambda})\\ & = \Gamma(z)^{*} (T - z I) g_{\lambda} + (z - \lambda)\Gamma(z)^{*} g_{\lambda}\\ & = 0 + (z - \lambda) \Gamma(z)^{*} g_{\lambda}. \end{align*} The last equality follows from \eqref{Gamma-star-0}. This means that $\Gamma(z)^{*} f = \Gamma(z)^{*} g_{\lambda}$ for all $z \in \mathbb{C} \setminus \mathbb{R}$. From \eqref{Gamma-star-0} and the fact that $T$ is simple (see \eqref{simple-defn}) it follows that $f = g_{\lambda}$. Thus we have shown the $\supset$ containment in \eqref{UTdomains}. For the $\subset$ containment in \eqref{UTdomains}, let $f \in \mathscr{D}(T)$. Then for any $z \in \mathbb{C} \setminus \mathbb{R}$, \begin{align*} (U T f)(z) & = \Gamma(z)^{*} T f\\ & = \Gamma(z)^{*} ((T - z I) f + z f)\\ & = 0 + z \Gamma(z)^{*} f\\ & = M_{\Gamma} (U f)(z). \end{align*} This proves the $\subset$ containment in \eqref{UTdomains} along with the intertwining identity in \eqref{intertwine}. \end{proof} \begin{Remark} As mentioned earlier, we are not constrained by the Krein trick \eqref{Krein-trick} in selecting our model $\Gamma$ for $T$. There are other methods of constructing a model. For example, when $n < \infty$, we can use Grauert's theorem, as was used to prove a related result for bounded operators in \cite{Cow-Doug}, to find an analytic vector-valued function $$\gamma(\lambda) := (\gamma(\lambda)_1, \cdots, \gamma(\lambda)_{n}),$$ where $\{\gamma(\lambda)_1, \cdots, \gamma(\lambda)_{n}\}$ is a basis for $\mbox{Rng}(T - \overline{\lambda} I)^{\perp}$. Then, if $\{e_j\}_{j = 1}^{n}$ is the standard basis for $\mathbb{C}^n$, we can define our model for $T$ to be \begin{equation} \label{model-anything} \Gamma(\lambda) := \sum_{j = 1}^{n} \gamma(\overline{\lambda})_{j} \otimes e_j. \end{equation} From here it is not difficult to compute the matrix representation for $K_\lambda (z)$ in the standard basis $\{ e_j \}_{j = 1}^{n}$ for $\mathbb{C} ^n$ as \begin{equation} K_\lambda (z) = \left[ \langle \gamma _i (\overline{\lambda}) , \gamma _j (\overline{z}) \rangle \right]_{1 \leqslant i, j \leqslant n}. \label{equation:kernel} \end{equation} The alert reader might be worried about the verification of property \eqref{equation:condition}, the invertibility of $K_{\lambda}(z)$ for $\lambda, z \in \mathbb{C}_{+}$ or $\lambda, z \in \mathbb{C}_{-}$. Any model $\Gamma$, the Krein model in particular, will take the form in \eqref{model-anything}.
Any other model $\widetilde{\Gamma}$ must then take the form $$\widetilde{\Gamma}(\lambda) := \sum_{j = 1}^{n} \widetilde{\gamma}(\overlineerline{\lambda})_{j} \otimes e_j,$$ where $$\widetilde{\gamma}(\overlineerline{\lambda})_{j} = \sum_{k = 1}^{n} c_{k, j}(\lambda) \gamma(\overlineerline{\lambda})_{k}.$$ and the matrix $C_{\lambda} := (c_{k, j}(\lambda))_{k, j}$ is invertible. One can now check that $$\widetilde{K}_{\lambda}(z) = C_z K_{\lambda}(z) C_{\lambda}^{*}.$$ So, if, with the Krein model we have $K_{\lambda}(z)$ is invertible for $\lambda, z \in \mathbb{C}_{+}$ or $\lambda, z \in \mathbb{C}_{-}$, then with any other model will also satisfy this property. This Grauert's trick will help us avoid dealing with the self-adjoint extensions and the resolvents in Krein's formula \eqref{K-form}, which, as mentioned earlier, can be difficult to compute. \end{Remark} \section{Examples} \subsection{Differentiation} \label{diff-ex} Consider the simple differential operator $T f = i f'$ defined densely on $L^2[-\pi, \pi]$ with domain $\mathscr{D}(T)$ the Sobolev space of absolutely continuous functions $f$ on $[-\pi, \pi]$ with $f' \in L^2[-\pi, \pi]$ and $f(-\pi) = f(\pi) = 0$. Simple integration by parts will show that $T$ is symmetric and closed. Furthermore, $\mathscr{D}(T^{*})$ consists of the absolutely continuous functions $f$ on $[-\pi, \pi]$ with $f' \in L^2[-\pi, \pi]$. This is quite standard and can be found in many functional analysis books. Observe, for any $\lambda \in \mathbb{C} \setminus \mathbb{R}$, that $$\mbox{Ker} (T^{*} - \lambda I) = \{f\in \mathscr{D}(T^{*}): i f' = \lambda f\} = \mathbb{C} e^{-i \lambda t}$$ and so the deficiency indices are both equal to one. Moreover, $T$ satisfies the simplicity condition \eqref{simple-defn} since $$\bigvee_{\lambda \not \in \mathbb{R}} \mbox{Ker} (T^{*} - \lambda I) = \bigvee_{\lambda \not \in \mathbb{R}} e^{-i \lambda t} = L^2[-\pi, \pi]$$ via the Stone-Weierstrass theorem. Thus $T \in \mathcal{S}_{1}(L^2[-\pi, \pi])$. Define $$\gamma(\lambda) := e^{-i \lambda t}, \quad \Gamma(\lambda) := \gamma(\overlineerline{\lambda}) \otimes 1.$$ For $f \in L^2[-\pi, \pi]$ we have $$\widehat{f}(\lambda) = \Gamma(\lambda)^{*} f = \int_{-\pi}^{\pi} f(t) e^{i \lambda t} dt.$$ Thus our Hilbert space of analytic functions $\mathcal{H}(\Gamma)$ is one of the classical Paley-Wiener spaces \cite{DB}. From here it follows that $T$ is unitarily equivalent to multiplication by the independent variable on the Paley-Wiener space. A computation will show that $$K_{\lambda}(z) = \int_{-\pi}^{\pi} e^{-i z t} e^{i \overlineerline{\lambda} t} d t = \frac{2 \sin(\pi (z - \overlineerline{\lambda}))}{z - \overlineerline{\lambda}}.$$ \subsection{Double differentiation} \label{double-diff} Now consider the double differentiation operator $T f = -f''$ initially defined on the set $C_{0}^{\infty}(0, \infty)$ (smooth functions with compact support in $(0, \infty)$) and extend the domain of $T$ to the closure of $C_{0}^{\infty}(0, \infty)$ in the norm $\|f''\|_{L^2}$ -- which will be some Sobolev space. This domain $\mathscr{D}(T)$ is clearly dense in $L^2(0, \infty)$ and a simple computation with integration by parts will show that $T$ is symmetric and closed. 
Note that $\mathscr{D}(T^{*})$ contains $C^{\infty}(0, \infty) \cap L^2(0, \infty)$ and moreover, $T$ has deficiency indices equal to one since $$\mbox{Ker} (T^{*} - \lambda) = \{f \in \mathscr{D}(T^{*}): -f'' = \lambda f\} = \mathbb{C} e^{\pm i \sqrt{\lambda} t}.$$ Note that, depending on whether $\Im \lambda > 0$ or $\Im \lambda < 0$, only \textit{one} of the solutions $e^{\pm i \sqrt{\lambda} t}$ will belong to $L^2(0, \infty)$. Denote this solution by $$e^{i \epsilon_{\lambda} \sqrt{\lambda} t},$$ where $\epsilon_{\lambda} = 1$ if $\Im \lambda > 0$ and $\epsilon_{\lambda} = -1$ if $\Im \lambda < 0$. Furthermore, one can check, using duality, that $$\bigvee_{\lambda \not \in \mathbb{R}} \mbox{Ker} (T^{*} - \lambda) = L^2(0, \infty)$$ and so $T \in \mathcal{S}_{1}(L^2(0, \infty))$. As in the previous example, we can define $$\gamma(\lambda) := e^{i \epsilon_{\lambda} \sqrt{\lambda} t}, \quad \Gamma(\lambda) := \gamma(\overlineerline{\lambda}) \otimes 1.$$ From here, for $f \in L^2(0, \infty)$, $$\widehat{f}(\lambda) = \int_{0}^{\infty} f(t) e^{-i \epsilon_{\lambda} \sqrt{\lambda} t} dt.$$ The kernel function is $$K_{\lambda}(z) = \int_{0}^{\infty} e^{- i \epsilon_{z} \sqrt{z} t} e^{i \epsilon_{\lambda} (\overlineerline{\lambda} ) ^{\frac{1}{2}} t} dt = \frac{- i}{\epsilon_{z} \sqrt{z} - \epsilon_{\lambda} (\overlineerline{\lambda} ) ^{\frac{1}{2}}}.$$ \subsection{Sturm-Liouville operators} \label{SL-ex} In this example we will consider second-order Sturm-Liouville differential operators on intervals $I = [a,b] \subset \mathbb{R}$. A good reference for this is \cite{Naimark}. In particular, this will include Schr\"{o}dinger operators as a special case. Suppose that $p\geqslant 0$, and $q$ are real-valued functions on $I$ such that $ 1/p$ and $q$ are locally $L ^1$ functions on $(a,b)$, i.e., they belong to $L ^1$ of any compact subset of $(a,b)$. Define the dense domain \[ \mathscr{D} ( H (p, q, I) ^* ) := \left\{ f \in L^2 (I) \ | \ f , pf' \in L^1 _{loc} (I) \ \mbox{and} \ -(pf')' + qf \in L^2 [a,b] \right\}, \] and then define $$H (p,q ,I ) ^* f = -(pf')' + qf, \quad f \in \mathcal{D} (H (p,q,I) ^* ).$$ The theory of \cite[Section 17]{Naimark} shows that $$H (p,q, I) := (H (p,q,I) ^* ) ^*$$ belongs to $\mathcal{S}_{n}(L^2[a, b])$, where $n$ is either $0, 1$, or $2$, depending on the properties of $p$ and $q$. Furthermore, $ H (p,q,I) ^* $ is its adjoint. Although it is non-trivial, it can be shown that $H(p,q,I)$ is simple whenever it has indices $(1,1)$ or $(2,2)$ \cite{Gil1, Gil2}, and hence $H(p,q,I)$ is either simple or self-adjoint. Recall here that any closed symmetric linear operator with indices $(0,0)$ is self-adjoint. Fix an interior point $x_0 \in I$ and given any $z\in \mathbb{C}$ let $u_z$ and $v_z$ be solutions to the ordinary differential equation, $$ -(pf')' + qf = z f, $$ which satisfy $$ \left( \begin{array}{cc} u_z (x_0 ) & p(x_0) u_z ' (x_0) \\ v_z (x_0) & p(x_0) v_z ' (x_0 ) \end{array} \right) = \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right). $$ Using the method of Picard iterates, typically used to prove the existence-uniqueness theorem for ordinary differential equations, it is not difficult to show that for any fixed $x \in (a,b)$, the solutions $u_z (x), v_z (x)$ are entire functions of $z$ (see for example \cite[Section 2.3]{Hille} or \cite[pgs. 51-56]{Naimark}). It follows that whenever $u_z, v_z$ belong to $L^2 (I)$, they are entire $L^2 (I)$-valued functions. Now suppose that $H(p,q,I)$ has deficiency indices $(2,2)$. 
This happens, for example, if both $a,b$ are finite and $q, 1/p \in L^1 [a,b]$ \cite[Section 17]{Naimark}. In this case both $u_\lambda , v_\lambda$ belong to $L^2 (I)$ for all $\lambda \in \mathbb{C} $ (when $\lambda \in \mathbb{R}$ this is not obvious, but still true \cite[Theorem 4, Section 19.4]{Naimark}) and it follows that for any $z \in \mathbb{C} $, $$\mbox{Ker} (H(p,q,I) ^* - z I ) = \bigvee \{ u_z, v_z \},$$ and so we can define $\gamma _1 (z) = u_z$, $\gamma _2 (z) = v_z$, and then for each $z \in \mathbb{C} \setminus \mathbb{R}$, if $\{ e_i \} _{i=1} ^2$ is the standard orthonormal basis of $\mathbb{C} ^2$, $\Gamma(z) : \mathbb{C} ^2 \rightarrow \mbox{Ker} (H (p,q , I) ^* - \overlineerline{z} I )$ defined by \[ \Gamma (z) := \gamma_1 (\overlineerline{z} ) \otimes e_1 + \gamma _2 (\overlineerline{z} ) \otimes e_2, \] is a valid choice of model for $H(p,q,I)$. Moreover, it follows from equation (\ref{equation:kernel}) that $\mathcal{H} (\Gamma)$ has reproducing kernel: $$ K ^I _\lambda (z) = \left( \begin{array}{cc} \int _I u_{\overlineerline{\lambda}} (x) \overlineerline{ u _{\overlineerline{z}} (x) } dx & \int _I u _{\overlineerline{\lambda}} (x) \overlineerline{ v_{\overlineerline{z}} (x)} dx \\ \int _I v_{\overlineerline{\lambda}} (x) \overlineerline{ u _{\overlineerline{z}} (x) } dx & \int _I v _{\overlineerline{\lambda}} (x) \overlineerline{ v_{\overlineerline{z}} (x) } dx \end{array} \right). $$ For a concrete example consider $p=1$ and $q(t) = V(t) = \frac{1}{2t^2}$. Then $$H _V := H (1, V , [0,b] )$$ is a symmetric operator acting on its appropriate domain in $L^2 [0, b]$. We will consider the two cases where (i) $b < \infty$ and (ii) $b = \infty$. For the first case, as discussed above, the theory of \cite{Naimark} implies that $H_V $ has deficiency indices $(2,2)$ and the above calculations apply. We will, however, compute the deficiency indices and subspaces directly for this example, and obtain a model for $H_V $ with explicit formulas in terms of Hankel and Bessel functions. Choosing $\lambda \in \mathbb{C} \setminus \mathbb{R}$, two linearly independent solutions to the differential equation \[ -f'' (t) +\frac{1}{2t^2} f(t) = \lambda f(t) \] are $$u _\lambda (t) = \sqrt{t} H _{\sqrt{3}/4} ^{(1)} (\sqrt{\lambda} t )$$ and $$v_\lambda (t) = \sqrt{t} H _{\sqrt{3}/4} ^{(2)} (\sqrt{\lambda} t ),$$ where the $H ^{(i)}, i = 1, 2,$ are Hankel functions of the first and second kind \cite{MR0167642}, and $\sqrt{\lambda}$ is chosen to be such that $ 0 < \mbox{arg} (\sqrt{\lambda}) < \frac{\pi}{2}$ when $\lambda \in \mathbb{C} _+$ and $ \pi < \mbox{arg} (\sqrt{\lambda}) < \frac{5\pi}{2}$ when $\lambda \in \mathbb{C} _-$. One can verify \cite{MR1349110} that the Bessel-$J$ and Bessel-$Y$ functions behave asymptotically as $$ J_\nu (t) \sim \frac{1}{\Gamma (\nu +1)} \left( \frac{t}{2} \right) ^\nu \ \ \ \ Y_\nu (t) \sim \frac{-\Gamma (\nu )}{\pi } \left( \frac{2}{t} \right) ^\nu, $$ for small $t$. Here $\Gamma$ denotes the Euler gamma function. Since the Hankel functions satisfy $H_\nu ^{(j)} = J_\nu + (-1) ^{j +1} i Y _\nu$, it follows that in the case (i) where $b< \infty$, both solutions $v_\lambda , u_\lambda$ belong to $L^2 [0, b]$, and hence they both belong to $\mbox{Ker} (H_V ^* - \lambda I )$. We conclude that the deficiency indices of $H_V$ are $(2,2)$. Both solutions $u_\lambda$ and $v_\lambda$ are analytic as functions of $\lambda$ in $\mathbb{C} \setminus \mathbb{R}$, and so we can choose $\gamma _1 (\lambda) = u_{\lambda}$ and $\gamma _2 (\lambda) = v_{\lambda}$. 
Then if $\{ e_k \}_{k = 1}^{2}$ denotes the standard basis for $\mathbb{C} ^2$, $$ \Gamma (\lambda ) = \gamma _1 (\overlineerline{\lambda}) \otimes e_1 + \gamma _2 (\overlineerline{\lambda}) \otimes e_2, $$ is a valid choice of model for $H_V$. By equation (\ref{equation:kernel}), the reproducing kernel for $\mathcal{H} (\Gamma )$ is $$ K_\lambda (z) = \left( \begin{array}{cc} \int _0 ^b t H_{\frac{\sqrt{3}}{4}} ^{(1)} \left( (\overlineerline{\lambda} ) ^{\frac{1}{2}} t \right) \overlineerline{ H_{\frac{\sqrt{3}}{4}} ^{(1)} \left( (\overlineerline{z} ) ^{\frac{1}{2}} t \right)} dt & \int _0 ^b t H_{\frac{\sqrt{3}}{4}} ^{(1)} \left( (\overlineerline{\lambda} ) ^{\frac{1}{2}} t \right) \overlineerline{ H_{\frac{\sqrt{3}}{4}} ^{(2)} \left( (\overlineerline{z} ) ^{\frac{1}{2}} t \right)} dt \\ \int _0 ^b t H_{\frac{\sqrt{3}}{4}} ^{(2)} \left( (\overlineerline{\lambda} ) ^{\frac{1}{2}} t \right) \overlineerline{ H_{\frac{\sqrt{3}}{4}} ^{(1)} \left( ( \overlineerline{z} ) ^{\frac{1}{2}} t \right)} dt & \int _0 ^b t H_{\frac{\sqrt{3}}{4}} ^{(2)} \left( (\overlineerline{\lambda} ) ^{\frac{1}{2}} t \right) \overlineerline{ H_{\frac{\sqrt{3}}{4}} ^{(2)} \left( (\overlineerline{z} ) ^{\frac{1}{2}} t \right) } dt \end{array}\right). $$ Now consider the second case where $b = \infty$. The Hankel functions have the asymptotic behavior $$ H_\nu ^{(j)} (\sqrt{\lambda t} ) \sim \sqrt{\frac{2}{\pi \sqrt{\lambda} t }} e^{(-1)^{j+1} i \left( \sqrt{\lambda t} - \frac{\pi \nu}{2} - \frac{\pi}{4} \right)}$$ as $t \rightarrow \infty$ \cite{MR0167642}. It follows that for any $\lambda \in \mathbb{C} \setminus \mathbb{R}$, one of the solutions $u_\lambda , v_\lambda$ is square integrable on $[0 , \infty)$ and one is not. More precisely, $u_\lambda $ is square integrable when $\lambda \in \mathbb{C} _+$ while $v_\lambda $ is square integrable if $\lambda \in \mathbb{C} _-$. This shows that in this case $H_V$ has deficiency indices $(1,1)$, and so if we define $$ \gamma (\lambda ) = \left\{ \begin{array}{cc} u_\lambda & \lambda \in \mathbb{C} _+ \\ v_\lambda & \lambda \in \mathbb{C} _- \end{array} \right. 
, $$ and $\Gamma (\lambda ) := \gamma (\overlineerline{\lambda} ) \otimes e_1$, then $\Gamma$ is a model for $H_V$, and the reproducing kernel for $\mathcal{H} (\Gamma)$ is $$ K _\lambda (z) = \left\{ \begin{array}{cc} \int _0 ^\infty t H_{\frac{\sqrt{3}}{4}} ^{(1)} \left( (\overlineerline{\lambda} ) ^{\frac{1}{2}} t \right) \overlineerline{ H_{\frac{\sqrt{3}}{4}} ^{(1)} \left( (\overlineerline{z} ) ^{\frac{1}{2}} t \right)} dt & \lambda, z \in \mathbb{C} _+ \\ \int _0 ^\infty t H_{\frac{\sqrt{3}}{4}} ^{(1)} \left( (\overlineerline{\lambda} ) ^{\frac{1}{2}} t \right) \overlineerline{ H_{\frac{\sqrt{3}}{4}} ^{(2)} \left( (\overlineerline{z} ) ^{\frac{1}{2}} t \right)} dt & \lambda \in \mathbb{C} _+ \ \ z \in \mathbb{C} _- \\ \int _0 ^\infty t H_{\frac{\sqrt{3}}{4}} ^{(2)} \left( (\overlineerline{\lambda} ) ^{\frac{1}{2}} t \right) \overlineerline{ H_{\frac{\sqrt{3}}{4}} ^{(1)} \left((\overlineerline{z} ) ^{\frac{1}{2}} t \right)} dt & \lambda \in \mathbb{C} _- \ \ z \in \mathbb{C} _+ \\ \int _0 ^\infty t H_{\frac{\sqrt{3}}{4}} ^{(2)} \left( (\overlineerline{\lambda} ) ^{\frac{1}{2}} t \right) \overlineerline{ H_{\frac{\sqrt{3}}{4}} ^{(2)} \left((\overlineerline{z} ) ^{\frac{1}{2}} t \right)} dt & \lambda, z \in \mathbb{C} _- \\ \end{array} \right.$$ \subsection{Unbounded Toeplitz operators} \label{Toeplitz-ex} Let $H^2$ denote the classical Hardy space of the open unit disk $\mathbb{D} := \{|z| < 1\}$ with inner product \begin{equation} \label{H2-inner} \langle f, g \rangle := \frac{1}{2 \pi} \int_{0}^{2 \pi} f(e^{i \theta}) \overlineerline{g(e^{i \theta})} d \theta. \end{equation} Note how we equate, as is customary, a Hardy space function (analytic on $\mathbb{D}$) with its almost everywhere define $L^2(\partial \mathbb{D})$ radial boundary values on the unit circle $\partial \mathbb{D}$. Note that $H^2$ is a reproducing kernel Hilbert space with Cauchy kernel $$k_z(\zeta) = \frac{1}{1 - \overlineerline{z} \zeta},$$ i.e., $$f(z) = \langle f, k_{z} \rangle_{H^2}, \quad \forall f \in H^2, z \in \mathbb{D}.$$ Let $H^{\infty}$ denote the bounded analytic functions on $\mathbb{D}$ and $N^{+}$ denote the \emph{Smirnov functions}, i.e., the algebra of analytic functions $f$ on $\mathbb{D}$ which can be written as $f = g/h$, where $g, h \in H^{\infty}$ and $h$ is an outer function. We refer the reader to the well-known texts \cite{Duren, Garnett} for a reference on all of this. By a result of Sarason \cite{Sarason}, one can write each $g \in N^{+}$ as \begin{equation} \label{gisba} g = \frac{b}{a}; \quad a, b \in H^{\infty}, a(0) > 0, \ \mbox{$a$ outer}, \ |a(e^{i \theta})|^2 + |b(e^{i \theta})|^2 = 1 \; \; \mbox{a.e.-$\theta$}. \end{equation} If $g$ is a rational function then so are $a$ and $b$. Since $a$ is an outer function, the set $a H^2$ is dense in $H^2$ and one can define a Toeplitz operator $T_g$ on $\mathscr{D}(T_g) = a H^2$ by $$T_{g} f = g f.$$ In \cite{Sarason} it is shown that $T_g$ is a densely defined closed operator. If $g$ is also real-valued almost everywhere on $\partial \mathbb{D}$, then, by the definition of the inner product on $H^2$ from \eqref{H2-inner}, $T_{g}$ is a closed symmetric operator. Let $N^{+}_{\mathbb{R}}$ denote the Smirnov functions which are real valued on the unit circle. Helson \cite{Helson, Helson2} shows that $g \in N^{+}_{\mathbb{R}}$ if and only if there are inner functions $p, q$ with $p - q$ outer such that \begin{equation} \label{g-Helson} g = i \frac{q + p}{q - p}. 
\end{equation} The inner functions $p$ and $q$ in the above representation are unique up to constant factors. Furthermore, the deficiency indices for $T_{g}$, $g \in N^{+}_{\mathbb{R}}$, are finite if and only if $g$ is a rational function, and Helson was able to construct examples of $g$ for which the defect indices are any pair $(m, n)$, $m, n \in \mathbb{N} \cup\{\infty\}$. Indeed, let $\nu$ be a real signed atomic measure on $\partial \mathbb{D}$ with $m$ point masses which are positive and $n$ which are negative. The function $$g(z) := i \int \frac{\zeta + z}{\zeta - z} d \nu(\zeta)$$ belongs to $N^{+}_{\mathbb{R}}$ and the deficiency indices of $T_{g}$ are $(m, n)$. If $g$ takes the form \eqref{g-Helson} then $$g - i = p \frac{2 i}{q - p}, \quad g + i = q \frac{2 i}{q - p}$$ and so $p$ is the inner factor of $g - i$ while $q$ is the inner factor of $g + i$. Thus $$\mbox{Rng}(T_{g} - i I)^{\perp} = (p H^2)^{\perp}, \quad \mbox{Rng}(T_{g} + i I)^{\perp} = (q H^2)^{\perp}.$$ It is well-known that these spaces have dimension equal to the order of the inner functions $p$ and $q$. Hence, $T_{g} \in \mathcal{S}_{n}(H^2)$ when $g \in N^{+}_{\mathbb{R}}$ and $p$ and $q$ are inner functions of equal order $n$. \begin{Remark} This brings up the interesting question as to when, for two inner functions $p$ and $q$ of equal order, the difference $p - q$ is outer. Certainly when $p$ and $q$ are Blaschke products of order $n$ the condition that $p - q$ is outer implies that $p$ and $q$ do not share any zeros. One might be tempted to believe the converse. Unfortunately this is not true. Take two different $n$-th roots of unity $\zeta, \xi$ and $0 < a < 1$. The inner functions $$p(z) = \left(\frac{z - a \zeta}{1 - \overlineerline{a \zeta} z}\right)^n, \quad q(z) = \left(\frac{z - a \xi}{1 - \overlineerline{a \xi} z}\right)^n$$ have distinct zeros $a \zeta \not = a \xi$. But $p(0) - q(0) = 0$ and so $p - q$ is not outer. \end{Remark} From the well-known identity $$T_{g}^{*} k_{z} = \overlineerline{g(z)} k_{z},$$ we see that if $T_{g} \in \mathcal{S}_{1}(H^2)$ then $g$ must be univalent. Moreover, we have $$\mbox{Ker} (T_{g}^{*} - \lambda I) = \mathbb{C} k_{g^{-1}(\overlineerline{\lambda})}.$$ Thus, as in our previous examples, $$\gamma(\lambda) = \frac{1}{1 - g^{-1}(\overlineerline{\lambda}) z}, \quad \Gamma(\lambda) = \gamma(\overlineerline{\lambda}) \otimes 1, \quad K_{\lambda}(z) = \frac{1}{1 - \overlineerline{g^{-1}(\lambda)} g^{-1}(z)}.$$ In this case the corresponding $H^{2}(\Gamma)$ space is the set of functions of the form $$\widehat{f}(\lambda) = \langle f, k_{g^{-1}(\lambda)} \rangle = f(g^{-1}(\lambda)), \quad f \in H^2(\mathbb{D}),$$ and so $H^2(\Gamma)$ is the Hardy space $H^{2}(g(\mathbb{D}))$, which, since $g$ is real on the circle, will be the Hardy space of a certain slit domain. The norming point of $H^2(g(\mathbb{D}))$ will be $g(0)$. See \cite{A-F-R} for more on Hardy spaces of slit domains. For a vector-valued example, consider the Toeplitz operator $T_{g^2}$, where $g \in N^{+}_{\mathbb{R}}$ and $g$ is univalent. 
Then $$\mbox{Ker} (T^{*}_{g^2} - \lambda I) = \bigvee \left\{k_{g^{-1}((\overlineerline{\lambda} ) ^{\frac{1}{2}})}, k_{g^{-1}(-(\overlineerline{\lambda} ) ^{\frac{1}{2}})}\right\}.$$ Here $$\gamma_1(\lambda) := \frac{1}{1 - g^{-1}((\overlineerline{\lambda} ) ^{\frac{1}{2}}) z}, \quad \gamma_2(\lambda) := \frac{1}{1 - g^{-1}(-(\overlineerline{\lambda} ) ^{\frac{1}{2}}) z}$$ and if $e_1 = (1, 0), e_2 = (0, 1)$ are the standard basis vectors for $\mathbb{C}^2$, then $$\Gamma(\lambda) = \gamma_{1}(\overlineerline{\lambda}) \otimes e_1 + \gamma_{2}(\overlineerline{\lambda}) \otimes e_2.$$ In this case the kernel function turns out to be $$K_{\lambda}(z) = \left( \begin{array}{cc} \frac{1}{1 - g^{-1}((\overlineerline{\lambda} ) ^{\frac{1}{2}}) g^{-1}(\sqrt{z})} & \frac{1}{1 - g^{-1}((\overlineerline{\lambda} ) ^{\frac{1}{2}}) g^{-1}(-\sqrt{z})} \\ \frac{1}{1 - g^{-1}(-(\overlineerline{\lambda} ) ^{\frac{1}{2}}) g^{-1}(\sqrt{z})} & \frac{1}{1 - g^{-1}(\sqrt{-\overlineerline{\lambda}}) g^{-1}(-\sqrt{z})}\\ \end{array} \right),$$ and the corresponding $H^2(\Gamma)$ space is the set of functions of the form $$\widehat{f}(\lambda) = (f(g^{-1}(\sqrt{\lambda})), f(g^{-1}(-\sqrt{\lambda}))), \quad f \in H^2.$$ \subsection{Multiplication by the independent variable on a Lebesgue space} Suppose $\mu$ is a positive Borel measure on $\mathbb{R}$ such that $$\mu(\mathbb{R}) = \infty \quad \mbox{and} \quad \int \frac{1}{1 + x^2} d \mu(x) < \infty.$$ It is well-known that the operator $(M^{\mu}f)(x) = x f(x)$, defined densely on $$\mathscr{D}(M^{\mu}) := \{f \in L^2(\mu) : x f \in L^2(\mu)\},$$ is self adjoint. Consider the operator $M_{\mu}$ defined as $M^{\mu}$ restricted to $$\mathscr{D}(M_{\mu}) := \left\{f \in L^2(\mu): x f \in L^2(\mu), \int f d\mu = 0\right\}.$$ Notice the difference between $M^{\mu}$, which is self-adjoint, and $M_{\mu}$, which is symmetric but not self-adjoint. \begin{Proposition} \label{Mmu} For a positive Borel measure $\mu$ on $\mathbb{R}$ with $\int \frac{1}{1 + x^2} d \mu(x) < \infty$, the operator $M_{\mu}$ is densely defined if and only if $\mu(\mathbb{R}) = \infty$. \end{Proposition} We start with the following technical lemma from functional analysis. \begin{Lemma} \label{TLmu} Suppose $B$ is a dense linear manifold in $L^2(\mu)$ and $\ell$ is a linear functional defined on $B$ such that there exists a sequence of unit vectors $\{f_n\}_{n \geqslant 1}$ in $B$ such that $\ell(f_n) \not = 0$ and $\ell(f_n) \to \infty$ as $n \to \infty$. Then the linear manifold $$\{f \in B: \ell(f) = 0\}$$ is dense in $L^2(\mu)$. \end{Lemma} \begin{proof} Suppose $\lambda$ is a \emph{bounded} linear functional on $L^2(\mu)$ with $$\lambda(\{f: \ell(f) = 0\}) = 0.$$ Then $$\lambda\left(f - \frac{\ell(f)}{\ell(f_n)} f_n \right) = 0 \quad \forall f \in B.$$ As $n \to \infty$ we use the fact that $\|f_n\| = 1$ and $\ell(f_n) \to \infty$ to see that $\lambda(f) = 0$ for all $B$ and so, by and Hahn-Banach theorem, $\lambda \equiv 0$. \end{proof} \begin{proof}[Proof of Proposition \ref{Mmu}]Let us show that $$\mathscr{D}(M_{\mu}) = \left \{f \in L^2(\mu): t f \in L^2(\mu), \int f(t) d \mu(t) = 0\right\} $$ is dense in $L^2(\mu)$ if and only if $\mu(\mathbb{R}) = \infty$. Suppose that $\mu(\mathbb{R}) < \infty$. Then $\mathbb{C} \subset L^2(\mu)$ and $\mathscr{D}(M_{\mu}) \subset \mathbb{C}^{\perp}$ and so $\mathscr{D}(M_{\mu})$ is not dense in $L^2(\mu)$. Now suppose that $\mu(\mathbb{R}) = \infty$. 
If $f \in L^2(\mu)$ and $t f \in L^2(\mu)$ then \begin{align*} \int |f(t)| d\mu(t) & = \int |f(t)| (1 + |t|) \frac{1}{1 + |t|} d \mu(t)\\ & \leqslant \left( \int |f(t)|^2 (1 + t^2) d \mu(t)\right)^{1/2} \left(\int \frac{1}{1 + t^2} d \mu(t)\right)^{1/2}\\ & < \infty. \end{align*} Thus the linear functional $$f \mapsto \int f(t) d \mu(t)$$ is defined on $\{f \in L^2(\mu): t f \in L^2(\mu)\}$. This last set is dense in $L^2(\mu)$ since it contains the smooth functions with compact support. But since $\mu(\mathbb{R}) = \infty$, this linear functional satisfies the hypothesis of Lemma \ref{TLmu} (Indeed just take $f_{n} \in C_{0}^{\infty}(\mathbb{R})$, $f_n \geq 0$ with $f_{n} = 1$ on $[-N, N]$ in Lemma \ref{TLmu}) and so $\mathscr{D}(M_{\mu})$ is dense in $L^2(\mu)$. \end{proof} For $g \in \mathscr{D}(M_{\mu})$ observe that $$0 = \int g d\mu = \int g(x) (x - \lambda) \overlineerline{\frac{1}{x - \overlineerline{\lambda}}} d \mu(x)$$ and so $\frac{1}{x - \overlineerline{\lambda}} \in \mbox{Rng}(M_{\mu} - \lambda I)^{\perp}$. A little exercise will show that indeed $$\mbox{Rng}(M_{\mu} - \lambda I)^{\perp} = \mathbb{C} \frac{1}{x - \overlineerline{\lambda}}$$ and thus $M_{\mu} \in \mathcal{S}_{1}(L^2(\mu))$. Notice that $$\gamma(\lambda) = \frac{1}{x - \overlineerline{\lambda}}, \quad \Gamma(\lambda) = \gamma(\overlineerline{\lambda}) \otimes 1$$ and thus $$K_{\lambda}(z) = \int \frac{1}{(x - z)(x - \overlineerline{\lambda})} d \mu(x)$$ and $$\widehat{f}(\lambda) = \int \frac{f(x)}{x - \lambda} d \mu(x), \quad f \in L^2(\mu).$$ Thus $M_{\mu}$ is unitarily equivalent to $M_{\Gamma}$ (multiplication by the independent variable) on the space of Cauchy transforms of $L^2(\mu)$ functions. We point out that there is a version of all this when $\mu$ is a positive matrix-valued measure on $\mathbb{R}$ which will be explored later on. \subsection{Multiplication by the independent variable on a deBranges-Rovnyak space} Let $H^2 _\mathcal{K}$ denote the Hardy space of analytic $\mathcal{K}-$valued functions on the upper half-plane $\mathbb{C} _+$. These are the analytic functions $f: \mathbb{C}_{+} \to \mathcal{K}$ such that $$\sup_{y > 0} \int_{-\infty}^{\infty} \|f(x + i y)\|_{\mathcal{K}}^{2} dx < \infty.$$ It is well known that these functions have non-tangential boundary values almost everywhere on $\mathbb{R}$ such that $$\int_{-\infty}^{\infty} \|f(x)\|_{\mathcal{K}}^2 dx < \infty.$$ The above quantity determines an inner product $\langle \cdot , \cdot \rangle$ on $H^2_{\mathcal{K}}$. Given any contractive analytic $B (\mathcal{K} )-$valued function $\Theta$, one can define the de Branges-Rovnyak space $\mathscr{K}(\Theta)$ as the range of the operator \begin{equation}\label{Rtheta} R_\Theta := (I - T _\Theta T_\Theta^{*})^{1/2}, \end{equation} where the inner product $ \langle \cdot , \cdot \rangle _\Theta$ on $\mathscr{K}(\Theta)$ is defined so that $R_\Theta$ acts as a co-isometry of $H^2 _\mathcal{K} $ onto $\mathscr{K}(\Theta)$. 
In other words, if at least one of $f , g \in H^2 _\mathcal{K}$ is orthogonal to $\mbox{Ker} (R _\Theta )$ then $$\langle R_\Theta f , R_\Theta g \rangle _\Theta = \langle f , g \rangle.$$ In \eqref{Rtheta}, $$T_\Theta := P _{H^2 _\mathcal{K}} M _\Theta |H^2 _\mathcal{K}$$ is a Toeplitz operator, $M_\Theta$ is multiplication by $\Theta$ acting on $L^2 _\mathcal{K}$, the Hilbert space of $\mathcal{K}$-valued functions which are square integrable with respect to Lebesgue measure on $\mathbb{R}$, and $P_{H^2 _\mathcal{K}}$ is the orthogonal projection from $L^2 _\mathcal{K}$ onto $H^2 _\mathcal{K}$. Note that $T_{\Theta}$ is a contraction and so $R_{\Theta}$ makes sense (and is a contraction). Note also that when $\Theta$ is inner, i.e., $\Theta(x)$ is unitary for almost every $x \in \mathbb{R}$, then the deBranges-Rovnyak space $\mathscr{K}(\Theta)$ becomes the classical model space $H^{2}_{\mathcal{K}} \ominus \Theta H^2_{\mathcal{K}}$ in the upper half plane \cite{Nik, Nik2, Niktr}. If $\{ e_k \}_{k = 1}^{n}$ is an orthonormal basis for $\mathcal{K}$, then it follows that finite linear combinations of the reproducing kernel vectors $$\delta _w ^{(j)} (z) := \frac{i}{2\pi} \frac{1}{z-\overlineerline{w}} e_j, \quad 1 \leqslant j \leqslant n, $$ form a dense set in $H^2 _\mathcal{K}$. Since $\mathscr{K}(\Theta)$ is contractively contained in $H^2 _\mathcal{K}$ (this follows because the operator $R_\Theta$ is a contraction), it follows that given any $1 \leqslant j \leqslant n$ and $z \in \mathbb{C} _+$, the point evaluation linear functional $l ^{(j)} _z $ defined by \[ l ^{(j)} _z (f) = \langle f(z) , e_j \rangle _{\mathcal{K}} ,\] is well-defined and bounded on $\mathscr{K}(\Theta)$. From the Riesz representation theorem, there is a point evaluation or reproducing kernel vector $\sigma ^{(j)} _z \in \mathscr{K}(\Theta)$ such that for any $h \in \mathscr{K}(\Theta)$, \[ \langle h(z) , e_j \rangle _{\mathcal{K}} = l _z ^{(j)} (h) = \langle h , \sigma ^{(j)} _z \rangle _\Theta . \] To compute $\sigma ^{(j)} _z$, consider the fact that if $h = R_\Theta f \in \mathscr{K}(\Theta)$ for some $f \in H^2 _\mathcal{K}$, then \begin{eqnarray} \langle h(z) , e_j \rangle _{\mathcal{K}} & = & \langle h , \delta ^{(j)} _z \rangle = \langle R_\Theta f , \delta ^{(j)} _z \rangle \nonumber \\ & = & \langle f , R_\Theta ^* \delta _z ^{(j)} \rangle = \langle h , R_\Theta R_\Theta ^* \delta _z ^{(j)} \rangle _\Theta.\end{eqnarray} This computation shows that \[ \sigma _z ^{(j)} = R_\Theta R_\Theta ^* \delta _z ^{(j)} = (I - T_\Theta T_{\Theta } ^* ) \delta _z ^{(j)}.\] An easy calculation shows $T_\Theta ^* \delta _z ^{(j)} = \Theta (z) ^* \delta _z ^{(j)} $, and so it follows that \[ \sigma _\lambda ^{(j)} (z) = \frac{i}{2\pi} \frac{ I - \Theta (z) \Theta ^* (\lambda)}{z-\overlineerline{\lambda}} e_j, \] and hence the reproducing kernel operator on $\mathscr{K}(\Theta)$ is \begin{equation} \label{dbrkernel} \Delta _w ^\Theta (z) := \frac{i}{2\pi} \frac{I - \Theta(z) \Theta (w) ^*}{z -\overlineerline{w} }; \quad w,z \in \mathbb{C} _+ . \end{equation} Moreover, finite linear combinations of $$\sigma _w ^{(j)} = \Delta _w ^\Theta e_j, \quad w \in \mathbb{C} _+, 1 \leqslant j \leqslant n,$$ are dense in $\mathscr{K}(\Theta)$. 
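As a quick consistency check of \eqref{dbrkernel}, consider the degenerate choice $\Theta \equiv 0$, which is certainly contractive; then $T_{\Theta} = 0$, $R_{\Theta} = I$ and $\mathscr{K}(\Theta) = H^{2}_{\mathcal{K}}$ with its usual inner product, and the formula collapses to $$\Delta _w ^{0} (z) = \frac{i}{2\pi} \frac{I}{z - \overline{w}}, \qquad \Delta _w ^{0} e_j = \delta _w ^{(j)},$$ recovering the Cauchy kernel of $H^{2}_{\mathcal{K}}$, as it must.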
Let $H ^\infty _\mathcal{K}$ be the Banach space of all bounded analytic $B (\mathcal{K})$-valued functions on $\mathbb{C} _+$ (with the supremum norm), and $$\mathscr{B}_{\mathcal{K}} := \{f \in H^{\infty}_{\mathcal{K}}: \|f\|_{\infty} \leqslant 1\}$$ be the closed unit ball in $H^{\infty}_{\mathcal{K}}$. A function $f \in \mathscr{B}_{\mathcal{K}}$ is an \emph{extreme point} of $\mathscr{B}_{\mathcal{K}}$ if $f$ does not belong to the interior of a line segment lying in $\mathscr{B}_{\mathcal{K}}$. Equivalently, $f \in \mathscr{B}_{\mathcal{K}}$ is extreme if $f \pm g \in \mathscr{B}_{\mathcal{K}}$, where $g \in H^{\infty}_{\mathcal{K}}$, implies $g \equiv 0$. If $\theta$ is a contractive matrix-valued analytic function on the open unit disk $\mathbb{D}$ and $\vec{x} \in \mathbb{C}^{n}$, we say that $\theta \vec{x}$ has an \emph{angular derivative in the sense of Carath\'{e}odory} at $\zeta \in \partial \mathbb{D}$ if $\theta \vec{x}$ has a non-tangential limit $\theta(\zeta) \vec{x}$ at $\zeta$, $\|\theta(\zeta) \vec{x}\| = \|\vec{x}\|$, and the non-tangential limit of $\theta' \vec{x}$ exists at $\zeta$. Existence of angular derivatives relates to non-tangential limits of functions in model and deBrances-Rovnyak spaces \cite{AC, Martin-uni, Poltoratskii, Sarason-dB}. From \cite{Martin-uni} we know the following. \begin{Theorem} \label{Martin-Z} If $\Theta \in \mathscr{B}_{\mathbb{C}^n}$ is an extreme point then $Z_{\Theta} f := z f$, defined on $$\mathscr{D}(Z_{\Theta}) := \{f \in \mathscr{K}(\Theta): z f \in \mathscr{K}(\Theta)\},$$ is a closed symmetric operator with deficiency indices $(n, n)$. Moreover, $Z_{\Theta}$ is densely defined if and only if $(\Theta \circ b^{-1})\vec{k}$ does not have a finite angular derivative at $z = 1$ for any $\vec{k} \in \mathbb{C}^n$. In this case $Z_{\Theta} \in \mathcal{S}_{n}(\mathscr{K}(\Theta))$. \end{Theorem} For the sake of simplicity, let us discuss, as in our previous examples, the model space for $Z_{\Theta}$ when $Z_{\Theta} \in \mathcal{S}_{1}(\mathscr{K}(\Theta))$, i.e., $\Theta$ is scalar valued. In this case the kernel functions for $\mathscr{K}(\Theta)$ are $$\Delta^{\Theta}_{\lambda}(z) = \frac{i}{2 \pi} \frac{1 - \Theta(z) \overlineerline{\Theta(\lambda)}}{z - \overlineerline{\lambda}}, \quad z, \lambda \in \mathbb{C}_{+}.$$ Notice that \begin{equation} \label{e-values-Z} Z_{\Theta}^{*} \Delta^{\Theta}_{\lambda} = \overlineerline{\lambda} \Delta^{\Theta}_{\lambda}, \quad \lambda \in \mathbb{C}_{+}. \end{equation} Thus $$\mbox{Rng}(Z_{\Theta} - \lambda I)^{\perp} = \mathbb{C} \Delta^{\Theta}_{\lambda}, \quad \lambda \in \mathbb{C}_{+}.$$ To complete the picture we need to compute $\mbox{Rng}(Z_{\Theta} - \lambda I)^{\perp}$ when $\lambda \in \mathbb{C}_{-}$. Notice the complication here in using the identity in \eqref{e-values-Z} since $\Delta^{\Theta}_{\lambda}$ is not defined when $\lambda \in \mathbb{C}_{-}$. By results from \cite{Martin1} there exists a conjugation $C_{\Theta}: \mathscr{K}(\Theta) \to \mathscr{K}(\Theta)$, i.e., $C _\Theta$ is an involutive, isometric and conjugate linear operator defined by $C_\Theta f = \Theta \circ *$. Here $* : H^2 (\mathbb{C} _+ ) \rightarrow H^2 (\mathbb{C} _- )$ is the involutive, onto, isometric and anti-linear map defined by $* f(z) =: f^* (z)$, where $f^* (z) =: \overlineerline{f (\overlineerline{z} )}$ is defined to be the function in $H^2 (\mathbb{C} _-)$ whose non-tangential boundary values are given by $\overlineerline{f (x) }$. 
Hence $$ (C_\Theta f) (z) = \Theta (z) \overlineerline{f(\overlineerline{z})} ,$$ and again this has to be interpreted as the function in $H^2 (\mathbb{C} _+ )$ whose non-tangential boundary values are equal to $\Theta (x) \overline{f(x)}$ almost everywhere. Moreover, $C_{\Theta}$ maps $\mathscr{D}(Z_{\Theta})$ to itself, commutes with $Z_{\Theta}$, and satisfies $$C_{\Theta} \mbox{Rng}(Z_{\Theta} - \lambda I)^{\perp} = \mbox{Rng}(Z_{\Theta} - \overlineerline{\lambda} I)^{\perp}, \quad \lambda \in \mathbb{C} \setminus \mathbb{R},$$ as well as $$C_{\Theta} \Delta^{\Theta}_{\lambda} = \frac{1}{2 \pi i} \frac{\Theta(z) - \Theta(\lambda)}{z - \lambda}.$$ Therefore, $$\mbox{Rng}(Z_{\Theta} - \lambda I)^{\perp} = \mathbb{C} \{ C_{\Theta} \Delta^{\Theta}_{\overlineerline{\lambda}} \}, \quad \lambda \in \mathbb{C}_{-}.$$ As in our previous examples, the model space for $Z_{\Theta}$ will be the space of functions of the form $$\widehat{f}(\lambda) = \begin{cases} \langle f, \Delta^{\Theta}_{\lambda}\rangle_{\Theta} &\mbox{if } \lambda \in \mathbb{C}_{+} \\ \langle f, C_{\Theta} \Delta^{\Theta}_{\overlineerline{\lambda}} \rangle_{\Theta} & \mbox{if } \lambda \in \mathbb{C}_{-} \end{cases} = \begin{cases} f(\lambda) &\mbox{if } \lambda \in \mathbb{C}_{+} \\ \overlineerline{(C_{\Theta} f)(\overlineerline{\lambda})} & \mbox{if } \lambda \in \mathbb{C}_{-}. \end{cases}$$ When $\Theta$ is an inner function, we can unpack this a bit further. In this case, as mentioned earlier, the deBranges-Rovnyak space $\mathscr{K}(\Theta)$ is the classical model space $(\Theta H^2(\mathbb{C}_{+}))^{\perp}$. Moreover, the inner product is the usual $L^2(\mathbb{R})$ inner product. For $f \in (\Theta H^2(\mathbb{C}_{+}))^{\perp}$ we have $\widehat{f}(\lambda) = f(\lambda)$, $\lambda \in \mathbb{C}_{+}$. For $\lambda \in \mathbb{C}_{-}$ we have \begin{align*} \widehat{f}(\lambda) & = \langle f, C_{\Theta} \Delta^{\Theta}_{\overlineerline{\lambda}} \rangle\\ & = \frac{1}{2 \pi i} \int_{-\infty}^{\infty} f(x) \overlineerline{\left(\frac{\Theta(x)}{x - \overlineerline{\lambda}}\right)}dx - \frac{1}{2 \pi i}\overlineerline{\Theta(\overlineerline{\lambda})} \int_{-\infty}^{\infty} f(x) \overlineerline{\left(\frac{1}{x - \overlineerline{\lambda}}\right)} dx\\ & = \frac{1}{2 \pi i} \int_{-\infty}^{\infty} \frac{f(x)\overlineerline{\Theta(x)}}{x - \lambda} dx. \end{align*} Using Fatou's jump theorem and a similar computation as used in \cite[p.~85]{CR}, one can show that the non-tangential limits of $f/\Theta$ (from $\mathbb{C}_{+}$) are equal to the non-tangential limits of $\widehat{f}$ (from $\mathbb{C}_{-}$) almost everywhere on $\mathbb{R}$. Thus $\widehat{f}$ is a pseudo-continuation of $f/\Theta$ to $\mathbb{C}_{-}$. See also \cite{RS}. We include $Z_{\Theta}$ in our list of examples since this operator will be closely related to the model operator on the Herglotz space we discuss later on -- and will also help us gain some additional information about the Livsic function. \section{The main results} For a fixed $T \in \mathcal{S}_{n}(H)$ modeled by $\Gamma$, we have an associated $\mathcal{K}$-valued reproducing kernel Hilbert space $\mathcal{H}(\Gamma)$ of analytic functions on $\mathbb{C} \setminus \mathbb{R}$ such that $T$ is unitarily equivalent to $M_{\Gamma}$ (multiplication by the independent variable) on $\mathcal{H}(\Gamma)$. We also have a formula for the reproducing kernel $K_{\lambda}(z) = \Gamma(z)^{*} \Gamma(\lambda)$ for $\mathcal{H}(\Gamma)$. 
Our first theorem says that the reproducing kernel can be factored in a particular way, which, as we will see momentarily, involves the Livsic characteristic function. \begin{Theorem} \label{T-main-vector-kernel} Using the above notation we have $$K_{\lambda}(z) = \Phi(z) \left(\frac{I - V(z) V(\lambda)^{*}}{1 - \overlineerline{b(\lambda)} b(z)}\right) \Phi(\lambda)^{*},$$ where $$ V(z) := b(z) \Phi (z) ^{-1} \Psi (z), $$ $$b(z) = \frac{z-i}{z+i}, \quad \Phi (z) := K_i (z) K_i (i) ^{-1/2}, \quad\Psi (z) := K_{-i} (z) K _{-i} (-i) ^{-1/2},$$ and $\Phi$ and $V$ satisfy the following: \begin{enumerate} \item The function $z \mapsto \Phi(z)$ is a $\mathcal{B}(\mathcal{K})$-valued meromorphic function on $\mathbb{C} \setminus \mathbb{R}$ which is analytic on $\mathbb{C}_{+}$. \item The function $z \mapsto \Psi(z)$ is a $\mathcal{B}(\mathcal{K})$-valued meromorphic function on $\mathbb{C} \setminus \mathbb{R}$ which is analytic on $\mathbb{C}_{-}$. \item The function $z \mapsto V(z)$ is a $\mathcal{B}(\mathcal{K})$-valued meromorphic function on $\mathbb{C} \setminus \mathbb{R}$ such that $\|V(z)\| < 1$ for all $z \in \mathbb{C}_{+}$. \end{enumerate} \end{Theorem} The skeptical reader might be wondering why factoring the reproducing kernel in this particular way is important. This is answered by the following corollary. \begin{Corollary}\label{Cor-main-Liv-ue} For $T_1 \in \mathcal{S}_{n}(\mathcal{H}_1)$ and $T_2 \in \mathcal{S}_{n}(\mathcal{H}_2)$ let $V_1$ and $V_2$ be the corresponding operators from Theorem \ref{T-main-vector-kernel}. Then $T_1$ is unitarily equivalent to $T_2$ if and only if there are constant unitary operators $R, Q$ on $\mathcal{K}$ so that $$V_1(z) = R V_2(z) Q, \quad z \in \mathbb{C}_{+}.$$ \end{Corollary} What does all this have to do with Livsic's theorem? \begin{Corollary} \label{VisW} If $n < \infty$ and $T \in \mathcal{S}_{n}(\mathcal{H})$ then there are constant unitary matrices $R, Q$ so that $$w_T(z) = R V(z) Q, \quad z \in \mathbb{C}_{+},$$ i.e., $w_T (z)$ and $V(z)$ are equivalent in terms of \eqref{RQ-equiv}. \end{Corollary} As to be expected, the proof will require some preliminary technical results. \begin{Remark}Let us assume that we are working at the $\mathcal{H}(\Gamma)$ level and thus equate $f \in \mathcal{H}$ with $\widehat{f} \in \mathcal{H}(\Gamma)$. This will avoid the cumbersome $\widehat{f}$ notation in all of our calculations. With this understanding, we also note that we are now determining when $M_{\Gamma_1} \cong M_{\Gamma_2}$, where $\Gamma_j$ is a model for $T_j \in \mathcal{S}_{n}(\mathcal{H}_j)$ and $M_{\Gamma_j}$ is multiplication by the independent variable from Proposition \ref{H-Gamma2}.\end{Remark} As we head towards our formula for $K_{\lambda}(z)$, let $P_{\pm i}$ be the orthogonal projections of $\mathcal{H}(\Gamma)$ onto $$\{f \in \mathcal{H}(\Gamma): f(\pm i) = 0\}^{\perp}.$$ By the definition of reproducing kernel notice that \begin{equation} \label{PH} \{f \in \mathcal{H}(\Gamma): f(\pm i) = 0\}^{\perp} = \bigvee\{K_{\pm i}(z) a: a \in \mathcal{K}\}. \end{equation} Define $$L, R: \mathcal{H}(\Gamma) \to \mathcal{H}(\Gamma)$$ by the formulas $$L = \frac{1}{b} (I - P_i), \quad R = b (I - P_{-i}).$$ The alert reader might wonder why these operators $L$ and $R$ are actually defined on $\mathcal{H}(\Gamma)$. 
Since $I - P_{\pm}$ is the orthogonal projection of $\mathcal{H}(\Gamma)$ onto $\{f \in \mathcal{H}(\Gamma): f(\pm i) = 0\}$ and $$b(z) = 1 - \frac{2 i}{z + i}, \quad \frac{1}{b(z)} = 1 + \frac{2 i}{z - i},$$ the operators $R$ and $L$ become well-defined, and, by the closed graph theorem, bounded on $\mathcal{H}(\Gamma)$ once we verify the following lemma. \begin{Lemma} If $f \in \mathcal{H} (\Gamma)$ and $f (w) =0$ for some $w\in \mathbb{C} \setminus \mathbb{R}$, then $\frac{f }{z-w} \in \mathcal{H} (\Gamma)$. \end{Lemma} \begin{proof} If $f (w) =0$ then $\Gamma (w) ^* f =0$ which implies $f \in \mbox{Rng} (T-w I)$. Thus $f = (T-w)g$ for some $g \in \mathscr{D} (T)$. Now, \begin{eqnarray} f (z) & = & \Gamma (z) ^* f \nonumber\\ &= &\Gamma (z) ^* (T-w) g \nonumber \\ & = & \Gamma (z) ^* \left( (T-z) + (z-w) \right) g \nonumber \\ & = & (z-w) g (z), \end{eqnarray} which demonstrates that $\frac{f }{z-w} = g \in \mathcal{H} (\Gamma)$. \end{proof} Since $M_{\Gamma} f = z f$ on $\mathcal{H}(\Gamma)$ is symmetric with equal deficiency indices, its \emph{Cayley transform} \cite{A-G} $$C_{\Gamma} := (M_{\Gamma} - i I)(M_{\Gamma} + i I)^{-1}$$ is a partial isometry with initial space $(P_{-i} \mathcal{H}(\Gamma))^{\perp}$ and final space $(P_{i} \mathcal{H}(\Gamma))^{\perp}$. Moreover $$C_{\Gamma} f = b f, \quad f \in (P_{-i} \mathcal{H}(\Gamma))^{\perp}.$$ \begin{Lemma}\label{RLnn} $L^{*} = R$. \end{Lemma} \begin{proof} Note that $(L f)(-i) = 0 = (R f)(i)$ and so $$\langle L f, P_{-i} g \rangle_{\mathcal{H}(\Gamma)} = 0 = \langle P_{i} f, R g \rangle_{\mathcal{H}(\Gamma)}.$$ Also note that, via the above Cayley transform discussion, the map $f \mapsto b f$ is a partial isometry with initial space $(P_{-i} \mathcal{H}(\Gamma))^{\perp}$ and final space $(P_{i} \mathcal{H}(\Gamma))^{\perp}$. Putting this all together we get \begin{align*} \langle Lf, g \rangle_{\mathcal{H}(\Gamma)} & = \langle L f, g - P_{-i} g \rangle_{\mathcal{H}(\Gamma)}\\ & = \langle b L f, b(g - P_{-i} g)\rangle_{\mathcal{H}(\Gamma)}\\ & = \langle f - P_{i} f, R g \rangle_{\mathcal{H}(\Gamma)}\\ & = \langle f, R g\rangle_{\mathcal{H}(\Gamma)}, \end{align*} which proves $L^{*} = R$. \end{proof} \begin{Lemma} For each $a \in \mathcal{K}$ and $\lambda, z \in \mathbb{C} \setminus \mathbb{R}$ we have $$K_{\lambda}(z) a = \frac{(P_{i} K_{\lambda}(\cdot) a)(z) - \overlineerline{b(\lambda)} b(z) (P_{-i} K_{\lambda}(\cdot) a)(z)}{1 - \overlineerline{b(\lambda)} b(z)}.$$ \end{Lemma} \begin{proof} First let us compute $(L^{*} K_{\lambda}(\cdot) a)(z)$ and $(R K_{\lambda}(\cdot) a)(z)$. Indeed, for any $f$, \begin{align*} \langle f, L^{*} K_{\lambda}(\cdot) a\rangle_{\mathcal{H}(\Gamma)} & = \langle L f, K_{\lambda}(\cdot) a\rangle_{\mathcal{H}(\Gamma)}\\ & = \left\langle \frac{1}{b} (f - P_{i} f), K_{\lambda}(\cdot) a\right\rangle_{\mathcal{H}(\Gamma)}\\ & = \left\langle \frac{f(\lambda) - (P_{i} f)(\lambda)}{b(\lambda)}, a \right\rangle_{\mathcal{K}}\\ & = \frac{1}{b(\lambda)} \left\langle f, K_{\lambda}(\cdot) a - P_{i} K_{\lambda}(\cdot) a\right\rangle_{\mathcal{H}(\Gamma)}. \end{align*} This means that $$(L^{*} K_{\lambda}(\cdot) a)(z) = \frac{1}{\overlineerline{b(\lambda)}} (K_{\lambda}(z) a - (P_{i} K_{\lambda}(\cdot) a)(z)).$$ A similar computation yields $$(R K_{\lambda}(\cdot) a)(z) = b(z) (K_{\lambda}(z) a - (P_{-i} K_{\lambda}(\cdot) a)(z)).$$ Using Lemma \ref{RLnn} we equate the two previous formulas for $(L^{*} K_{\lambda}(\cdot) a)(z)$ and $(R K_{\lambda}(\cdot) a)(z)$ and then work the algebra to get the result. 
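Explicitly, equating them gives $$\frac{1}{\overline{b(\lambda)}} \left( K_{\lambda}(z) a - (P_{i} K_{\lambda}(\cdot) a)(z) \right) = b(z) \left( K_{\lambda}(z) a - (P_{-i} K_{\lambda}(\cdot) a)(z) \right),$$ and multiplying through by $\overline{b(\lambda)}$ and solving for $K_{\lambda}(z) a$ yields $$\left(1 - \overline{b(\lambda)} b(z)\right) K_{\lambda}(z) a = (P_{i} K_{\lambda}(\cdot) a)(z) - \overline{b(\lambda)} b(z) (P_{-i} K_{\lambda}(\cdot) a)(z),$$ which is the stated formula.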
\end{proof} \begin{Remark}\label{epl} Let us pause for a moment to remember, since this will be important for what follows, that by the definition of a model $\Gamma$ and the formula for the reproducing kernel $K_{\lambda}(z) = \Gamma(z)^{*} \Gamma(\lambda)$, the operator $K_{\lambda}(z): \mathcal{K} \to \mathcal{K}$ is invertible whenever $\lambda, z \in \mathbb{C}_{+}$ or $\lambda, z\in \mathbb{C}_{-}$. \end{Remark} \begin{Lemma} \label{choice-phi} If $$\Phi(z) := K_{i}(z) K_{i}(i)^{-1/2}, \quad \Psi(z) := K_{-i}(z) K_{-i}(-i)^{-1/2},$$ then for all $a \in \mathcal{K}$, $$(P_i K_{\lambda}(\cdot) a)(z) = \Phi(z) \Phi(\lambda)^{*} a, \quad (P_{-i} K_{\lambda}(\cdot) a)(z) = \Psi(z) \Psi(\lambda)^{*} a.$$ \end{Lemma} \begin{proof} For $z, \lambda \in \mathbb{C} \setminus \mathbb{R}$ and $a \in \mathcal{K}$ notice that \begin{align*} \Phi(z) \Phi(\lambda)^{*} a & = K_{i}(z) K_{i}(i)^{-1/2} K_{i}(i)^{-1/2} K_{\lambda}(i) a\\ & = K_{i}(z) K_{i}(i)^{-1} K_{\lambda}(i) a \end{align*} which, by \eqref{PH}, belongs to $P_i \mathcal{H}(\Gamma)$. To finish the proof we need to show that $$\langle K_{\lambda}(\cdot) a - \Phi(\cdot) \Phi(\lambda)^{*} a, \Phi(\cdot) \Phi(\lambda)^{*} b\rangle_{\mathcal{H}(\Gamma)} = 0, \quad \forall b \in \mathcal{K}.$$ This can be routinely verified with the definition of the reproducing kernel as well as the fact that $K_{i}(i)$ is self-adjoint and invertible. \end{proof} \begin{Lemma} \label{L-inv} For each $z \in \mathbb{C}_{+}$, the operator $V(z): \mathcal{K} \to \mathcal{K}$ defined by $$V(z) := b(z) \Phi(z)^{-1} \Psi(z)$$ is a strict contraction. \end{Lemma} \begin{proof} It suffices to prove that $\|V(z)^{*}\| < 1$ for each fixed $z$. To this end, note that the above technical lemmas show that $$K_{\lambda}(\lambda) = \frac{1}{1 - |b(\lambda)|^2} (\Phi(\lambda) \Phi(\lambda)^{*} - |b(\lambda)|^2 \Psi(\lambda) \Psi(\lambda)^{*}), \quad \lambda \in \mathbb{C}_{+}.$$ Use this formula along with the estimate $$\langle K_{\lambda}(\lambda) a, a\rangle_{\mathcal{K}} \geqslant \epsilon(\lambda) \|a\|^2$$ for some $\epsilon(\lambda) > 0$ (which follows from Remark \ref{epl}) to get, after re-arranging some terms, $$\epsilon(\lambda) (1 - |b(\lambda)|^2) \|a\|^2 \leqslant \|\Phi(\lambda)^{*} a\|^2 - |b(\lambda)|^2 \|\Psi(\lambda)^{*} a\|^2.$$ Re-arrange the terms from the previous line to see that $$|b(\lambda)|^2 \|\Psi(\lambda)^{*} a\|^2 \leqslant \|\Phi(\lambda)^{*} a\|^2 - \epsilon(\lambda) (1 - |b(\lambda)|^2) \|a\|^2.$$ Insert $$a = (\Phi(\lambda)^{*})^{-1} x$$ into the previous inequality, along with the definition of $V$, to obtain $$\|V(\lambda)^{*} x\|^2 \leqslant \|x\|^2 - \epsilon(\lambda) (1 - |b(\lambda)|^2) \|(\Phi(\lambda)^{*})^{-1} x\|^2.$$ From here we see that $\|V(\lambda)^{*} x\| < \|x\|$ for all $x \neq 0$. Suppose $\|x_{n}\| = 1$ with $\|V(\lambda)^{*} x_{n}\| \to 1$. The previous inequality will show that $$ \|(\Phi(\lambda)^{*})^{-1} x_n\| \to 0$$ which will contradict the fact that $(\Phi(\lambda)^{*})^{-1}$ is an invertible operator and hence must be bounded below. Thus $\|V(\lambda)^{*}\| < 1$. \end{proof} \begin{Remark} Using the fact that $|b(z)| > 1$ when $z \in \mathbb{C}_{-}$, one can run the above proof again to show that $\|V(z)\| > 1$ when $z \in \mathbb{C}_{-}$. Note that $V(z)$ might have a pole when $z \in \mathbb{C}_{-}$. In fact if $V(z) = 0$ for some $z \in \mathbb{C}_{+}$ then $V$ will have a pole at $\overlineerline{z}$ of the same order. See Remark \ref{zero-pole-V} below for more on this. 
\end{Remark} \begin{proof}[Proof of Theorem \ref{T-main-vector-kernel}] Since the statements of the theorem are contained in the above technical Lemmas, we just need to prove the formula for the kernel function. Indeed, \begin{align*} K_{\lambda}(z) & = \frac{\Phi(z) \Phi(\lambda)^{*} - b(z) \overlineerline{b(\lambda)} \Psi(z) \Psi(\lambda)^{*}}{1 - b(z) \overlineerline{b(\lambda)}}\\ & = \Phi(z) \left( \frac{\Phi(\lambda)^{*} - b(z) \overlineerline{b(\lambda)} \Phi(z)^{-1} \Psi(z) \Psi(\lambda)^{*}}{1 - b(z) \overlineerline{b(\lambda)}}\right)\\ & = \Phi(z) \left( \frac{I - b(z) \overlineerline{b(\lambda)} \Phi(z)^{-1} \Psi(z) \Psi(\lambda)^{*}(\Phi(\lambda)^{*})^{-1}}{1 - b(z) \overlineerline{b(\lambda)}}\right) \Phi(\lambda)^{*}\\ & = \Phi(z) \left( \frac{I - (b(z) \Phi(z)^{-1} \Psi(z)) (\overlineerline{b(\lambda)}\Psi(\lambda)^{*}(\Phi(\lambda)^{*})^{-1})}{1 - b(z) \overlineerline{b(\lambda)}}\right) \Phi(\lambda)^{*}\\ & = \Phi(z) \left(\frac{I - V(z) V(\lambda)^{*}}{1 - b(z) \overlineerline{b(\lambda)}}\right) \Phi(\lambda)^{*}. \end{align*} This completes the proof. \end{proof} The proof of Corollary \ref{Cor-main-Liv-ue} requires another technical lemma. \begin{Lemma} \label{multiplier-matrx} Suppose $M_{\Gamma_1} \cong M_{\Gamma_2}$ via the unitary operator $U: \mathcal{H}_1(\Gamma_1) \to \mathcal{H}_2(\Gamma_2)$. Then there is an analytic operator-valued function $W$ on $\mathbb{C} \setminus \mathbb{R}$ such that $$(U f)(\lambda) = W(\lambda) f(\lambda), \quad f \in \mathcal{H}_1(\Gamma_1), \quad \lambda \in \mathbb{C} \setminus \mathbb{R}.$$ \end{Lemma} \begin{proof} For any $f \in \mathscr{D}(M_1)$ and $g \in \mathcal{H}_1(\Gamma_1)$ we have $$\langle (M_{\Gamma_1} - \lambda I) f, g \rangle_{\mathcal{H}_1(\Gamma_1)} = \langle (M_{\Gamma_2} - \lambda) U f, U g \rangle_{\mathcal{H}_{2}(\Gamma_2)}. \label{ker-equation}$$ Thus $g \in \mbox{Rng}(M_{\Gamma_1} - \lambda I)^{\perp} \Leftrightarrow U g \in \mbox{Rng}(M_{\Gamma_2} - \lambda I)^{\perp}$ and so $U$ maps $ \mbox{Rng}(M_{\Gamma_1} - \lambda I)^{\perp}$ onto $\mbox{Rng}(M_{\Gamma_2} - \lambda I)^{\perp}$. By the identity $$\bigvee \{K^{j}_{\lambda}(\cdot) a: a \in \mathcal{K}\} = \mbox{Rng}(M_{\Gamma_j} - \lambda I)^{\perp}, \quad j = 1, 2, $$ we see, for every $a \in \mathcal{K}$, that $U K^{1}_{\lambda}(z) a \in \mbox{Rng}(M_2 - \lambda I)^{\perp}$ and so there exists an invertible operator $J(\lambda): \mathcal{K} \to \mathcal{K}$ with $U K^{1}_{\lambda}(\cdot) a = K^{2}_{\lambda}(\cdot) J(\lambda) a$. Then for any $f \in \mathcal{H}_1(\Gamma_1), a \in \mathcal{K}, \lambda \in \mathbb{C} \setminus \mathbb{R}$ we have \begin{align*} \langle f(\lambda), a\rangle_{\mathcal{K}} & = \langle f, K^{1}_{\lambda}(\cdot) a\rangle_{\mathcal{H}_1(\Gamma_1)}\\ & = \langle U f, U K^{1}_{\lambda}(\cdot) a\rangle_{\mathcal{H}_2(\Gamma_2)}\\ & = \langle U f, K^{2}_{\lambda}(\cdot) J(\lambda) a\rangle_{\mathcal{H}_{2}(\Gamma_2)}. \end{align*} Now let $a = J(\lambda)^{-1} b$ to get \begin{align*} \langle (U f)(\lambda), a\rangle_{\mathcal{K}} & = \langle f(\lambda), J(\lambda)^{-1}(b)\rangle_{\mathcal{K}}\\ & = \langle (J(\lambda)^{-1})^{*} f(\lambda), b\rangle_{\mathcal{K}}. \end{align*} If we now set $W(\lambda) = (J(\lambda)^{-1})^{*}$ then $$(Uf)(\lambda) = W(\lambda) f(\lambda),$$ which completes our proof. 
\end{proof} \begin{Remark} \label{zero-pole-V} Observe that the denominator in the above formula for $K_{\lambda}(z)$ in Theorem \ref{T-main-vector-kernel} vanishes when $z = \overlineerline{\lambda}$ and thus the numerator must also vanish. This shows $$V(z) V(\overlineerline{z})^{*} = I, \quad z \in \mathbb{C} \setminus \mathbb{R}.$$ When $V$ has a zero or pole at $z$ the above formula must be interpreted in the usual way (poles cancel out the zeros). \end{Remark} \begin{proof}[Proof of Corollary \ref{Cor-main-Liv-ue}] Suppose there are constant unitary matrices $Q$ and $R$ so that $V_1(z) = R V_{2}(z) Q$ for all $z \in \mathbb{C}_{+}$. Using the fact that $V(z) V(\overlineerline{z})^{*} = I$ for all $z \in \mathbb{C} \setminus \mathbb{R}$ we see that $V_1(z) = R V_{2}(z) Q$ for all $z \in \mathbb{C} \setminus \mathbb{R}$. First let us relate the two kernel functions $K^{1}_{\lambda}(z)$ and $K^{2}_{\lambda}(z)$ for the spaces $\mathcal{H}_1(\Gamma_1)$ and $\mathcal{H}_2(\Gamma_2)$. Indeed, \begin{align*} K^{1}_{\lambda}(z) & = \Phi_{1}(z) \left( \frac{I - V_{1}(z) V_{1}(\lambda)^{*}}{1 - b(z) \overlineerline{b(\lambda)}}\right) \Phi_{1}(\lambda)^{*}\\ & = \Phi_{1}(z) \left( \frac{I - R V_2(z) Q Q^{*} V_{2}(\lambda)^{*} R^{*}}{1 - b(z) \overlineerline{b(\lambda)}}\right) \Phi_1(\lambda)^{*}\\ & = \Phi_{1}(z) \left( \frac{R R^{*} - R V_2(z) V_{2}(\lambda)^{*} R^{*}}{1 - b(z) \overlineerline{b(\lambda)}}\right) \Phi_1(\lambda)^{*}\\ & = \Phi_1(z) R \left( \frac{I - V_2(z) V_{2}(\lambda)^{*}}{1 - b(z) \overlineerline{b(\lambda)}}\right) R^{*} \Phi_{1}(z)^{*}\\ & = (\Phi_1(z) R) \left( \frac{I - V_2(z) V_{2}(\lambda)^{*}}{1 - b(z) \overlineerline{b(\lambda)}}\right) (\Phi_{1}(\lambda) R)^{*}. \end{align*} Work with the identity $$b\Phi_{1}^{-1} \Psi_1 = R b \Phi_{2}^{-1} \Psi_{2} Q$$ to get $$\Phi_{1} R = \Psi_{1} Q^{*} \Psi_{2}^{-1} \Phi_2.$$ Plug this into the above calculation for $K^{1}_{\lambda}(z)$ to see that \begin{align*} K^{1}_{\lambda}(z) & = (\Phi_{1}(z) R) \left( \frac{I - V_2(z) V_{2}(\lambda)^{*}}{1 - b(z) \overlineerline{b(\lambda)}}\right) (\Phi_{1}(\lambda) R)^{*}\\ & = (\Psi_{1}(z) Q^{*} \Psi_{2}(z)^{-1} \Phi_2(z)) \left( \frac{I - V_2(z) V_{2}(\lambda)^{*}}{1 - b(z) \overlineerline{b(\lambda)}}\right) (\Psi_{1}(\lambda) Q^{*} \Psi_{2}(\lambda)^{-1} \Phi_2(\lambda))^{*}\\ & = (\Psi_{1}(z) Q^{*} \Psi_{2}^{-1}(z)) K^{2}_{\lambda}(z) (\Psi_{1}(\lambda) Q^{*} \Psi_{2}^{-1}(\lambda))^{*}\\ & = G(z) K^{2}_{\lambda}(z) G(\lambda)^{*}, \end{align*} where $$G(z) = \Psi_{1}(z) Q^{*} \Psi_{2}^{-1}(z).$$ Notice that \begin{equation} \bigvee\{K^{j}_{\lambda}(\cdot) a: a \in \mathcal{K}\} = \mbox{Rng}(M_{\Gamma_j} - \lambda I)^{\perp} = \mbox{Ker} (M_{\Gamma_j}^{*} - \overlineerline{\lambda} I) \end{equation} and $$\bigvee\{\mbox{Ker} (M_{\Gamma_j}^{*} - \overlineerline{\lambda} I) : \lambda \in \mathbb{C} \setminus \mathbb{R}\} = \mathcal{H}_j(\Gamma_j).$$ Thus we can define the operator $U: \mathcal{H}_{1}(\Gamma_1) \to \mathcal{H}_2(\Gamma_2)$ first as $$U K^{1}_{\lambda}(\cdot) a = K^{2}_{\lambda}(\cdot) G(\lambda)^{*} a, \quad a \in \mathcal{K}, \lambda \in \mathbb{C} \setminus \mathbb{R},$$ and then extend linearly. 
We have the following computation: \begin{align*} \langle U K^{1}_{\lambda}(\cdot) a, U K^{1}_{\eta}(\cdot) b\rangle_{\mathcal{H}_2(\Gamma_2)} & = \langle K^{2}_{\lambda}(\cdot) G(\lambda)^{*} a, K^{2}_{\eta}(\cdot) G(\eta)^{*} b\rangle_{\mathcal{H}_2(\Gamma_2)}\\ & = \langle K^{2}_{\lambda}(\eta) G(\lambda)^{*} a, G(\eta)^{*} b\rangle_{\mathcal{K}}\\ & = \langle G(\eta) K^{2}_{\lambda}(\eta) G(\lambda)^{*} a, b\rangle_{\mathcal{K}}\\ & = \langle K^{1}_{\lambda}(\eta) a, b\rangle_{\mathcal{K}}\\ & = \langle K^{1}_{\lambda}(\cdot) a, K^{1}_{\eta}(\cdot) b\rangle_{\mathcal{H}_{1}(\Gamma_1)}. \end{align*} This says that $U$ is a unitary operator. For $f \in \mathcal{H}_{2}(\Gamma_2)$ we have \begin{align*} \langle (U^{*} f)(\lambda), a\rangle_{\mathcal{K}} & = \langle U^{*} f, K^{1}_{\lambda}(\cdot) a\rangle_{\mathcal{H}_1(\Gamma_1)}\\ & = \langle f, U K^{1}_{\lambda}(\cdot) a \rangle_{\mathcal{H}_{2}(\Gamma_2)}\\ & = \langle f(\lambda), G(\lambda)^{*} a\rangle_{\mathcal{K}}\\ & = \langle G(\lambda) f(\lambda), a\rangle_{\mathcal{K}}. \end{align*} Thus $(U^{*} f)(\lambda) = G(\lambda) f(\lambda)$ and $M_{\Gamma_1}$ is unitarily equivalent to $M_{\Gamma_2}$ via the unitary $U$. We have just shown that $V_1 = R V_2 Q$ implies $M_{\Gamma_1} \cong M_{\Gamma_2}$. So now suppose that $M_{\Gamma_1} \cong M_{\Gamma_2}$ via a unitary operator $U: \mathcal{H}_{1}(\Gamma_1) \to \mathcal{H}_{2}(\Gamma_2)$. Then by Lemma \ref{multiplier-matrx} there is an analytic operator-valued function $W$ so that $U f = W f$. Furthermore since $M_{W}$ (multiplication by $W$) takes, for each fixed $\lambda \in \mathbb{C} \setminus \mathbb{R}$, $\mbox{Rng}(M_{\Gamma_1} - \lambda I)^{\perp}$ onto $\mbox{Rng}(M_{\Gamma_2} - \lambda I)^{\perp}$, we get that for each $a \in \mathcal{K}$, $$M_{W} K^{1}_{\lambda}(\cdot) K^{1}_{\lambda}(\lambda)^{-1/2} a = K^{2}_{\lambda}(\cdot) K^{2}_{\lambda}(\lambda)^{-1/2} R(\lambda)a$$ for some invertible linear operator $R(\lambda): \mathcal{K} \to \mathcal{K}$. Observe that for any $a, b \in \mathcal{K}$, \begin{align*} & \langle M_{W} K^{1}_{\lambda}(\cdot) K^{1}_{\lambda}(\lambda)^{-1/2} a, M_{W} K^{1}_{\lambda}(\cdot) K^{1}_{\lambda}(\lambda)^{-1/2} b\rangle_{\mathcal{H}_{2}(\Gamma_2)}\\ & = \langle K^{1}_{\lambda}(\cdot) K^{1}_{\lambda}(\lambda)^{-1/2} a, K^{1}_{\lambda}(\cdot) K^{1}_{\lambda}(\lambda)^{-1/2} b\rangle_{\mathcal{H}_{1}(\Gamma_1)}\\ & = \langle K^{1}_{\lambda}(\lambda) K^{1}_{\lambda}(\lambda)^{-1/2} a, K^{1}_{\lambda}(\lambda)^{-1/2} b\rangle_{\mathcal{K}}\\ & = \langle a, b \rangle_{\mathcal{K}}. \end{align*} On the other hand, \begin{align*} & \langle M_{W} K^{1}_{\lambda}(\cdot) K^{1}_{\lambda}(\lambda)^{-1/2} a, M_{W} K^{1}_{\lambda}(\cdot) K^{1}_{\lambda}(\lambda)^{-1/2} b\rangle_{\mathcal{H}_{2}(\Gamma_2)}\\ & = \langle K^{2}_{\lambda}(\cdot) K^{2}_{\lambda}(\lambda)^{-1/2} R(\lambda)a, K^{2}_{\lambda}(\cdot) K^{2}_{\lambda}(\lambda)^{-1/2} R(\lambda)b \rangle_{\mathcal{H}_{2}(\Gamma_2)}\\ & = \langle R(\lambda) a, R(\lambda) b \rangle_{\mathcal{K}}. \end{align*} This implies that $R(\lambda): \mathcal{K} \to \mathcal{K}$ is unitary for each $\lambda \in \mathbb{C} \setminus \mathbb{R}$. We leave it to the reader to check that the following two identities $$W(z) K^{1}_{i}(z) K^{1}_{i}(i)^{-1/2} = K^{2}_{i}(z) K^{2}_{i}(i)^{-1/2} R(i)$$ $$W(z) K^{1}_{-i}(z) K^{1}_{-i}(-i)^{-1/2} = K^{2}_{-i}(z) K^{2}_{-i}(-i)^{-1/2} R(-i)$$ yield $$R(i) V_1(z) = V_2(z) R(-i),$$ which completes the proof. 
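In brief, that check goes as follows: the two identities say that $W(z) \Phi_1(z) = \Phi_2(z) R(i)$ and $W(z) \Psi_1(z) = \Psi_2(z) R(-i)$, where $\Phi_j(z) = K^{j}_{i}(z) K^{j}_{i}(i)^{-1/2}$ and $\Psi_j(z) = K^{j}_{-i}(z) K^{j}_{-i}(-i)^{-1/2}$ as before, and hence $$V_2(z) R(-i) = b(z) \Phi_2(z)^{-1} \Psi_2(z) R(-i) = b(z) \Phi_2(z)^{-1} W(z) \Psi_1(z) = R(i)\, b(z) \Phi_1(z)^{-1} \Psi_1(z) = R(i) V_1(z),$$ where the third equality uses $\Phi_2(z)^{-1} W(z) = R(i) \Phi_1(z)^{-1}$, a rearrangement of the first identity.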
\end{proof} We can now prove that the invariant $V$ from Lemma \ref{L-inv} and Livsic's $w_{T}$ from \eqref{intro-Livsic} are indeed equivalent. \begin{proof}[Proof of Corollary \ref{VisW}] Let $\Gamma$ be a model for $T$, and consider the model space $\mathcal{H}(\Gamma)$. Hence there is an $n-$dimensional Hilbert space $\mathcal{K}$ such that $\Gamma (z) : \mathcal{K} \rightarrow \mbox{Rng}(T- z I ) ^\perp $ is bounded and invertible for each $z \in \mathbb{C} \setminus \mathbb{R}$. Suppose that $\{ e_j \}_{j = 1}^{n}$ is a fixed orthonormal basis for $\mathcal{K}$. From our earlier work, $T$ is unitarily equivalent to $M := M_{\Gamma}$ which acts (densely) as multiplication by $z$ on $\mathcal{H} (\Gamma )$, and so, without loss of generality, we assume that $T = M$ and $\mathcal{H} = \mathcal{H} (\Gamma )$. Then the Lisvic characteristic function for $T$ is $$ w_T (z) := b(z) B(z) ^{-1} A(z),$$ where $$B(z) = \left[ \langle (M' -i I)(M' -z I)^{-1} u_j, u_k \rangle \right] ,$$ $$A(z) = \left[ \langle (M' + i I)(M' -z I)^{-1} u_j, u_k \rangle \right] ,$$ $\{ u_k \}_{k = 1}^{n}$ is any orthonormal basis for $\mbox{Ker}(M^* - i I)$ and $M'$ is some fixed (canonical) self-adjoint extension of $M$. Now since $\{ e_j \}_{j = 1}^{n}$ is orthonormal, we see that $$u_j := K _{-i} (\cdot) K_{-i} (-i) ^{-1/2} e_j, \quad 1 \leqslant j \leqslant n,$$ forms an orthonormal basis for $\mbox{Ker}(M^* -i I)$. Indeed, \begin{align*} \langle u_j, u_k \rangle_{\mathcal{H}} & = \langle K_{-i}(\cdot) K_{-i}(-i)^{-1/2} e_j, K_{-i}(\cdot) K_{-i}(-i)^{-1/2} e_k\rangle_{\mathcal{H}}\\ & = \langle K_{-i}(-i) K_{-i}(-i)^{-1/2} e_j, K_{-i}(-i)^{-1/2} e_k \rangle_{\mathcal{K}}\\ & = \langle e_j, e_k \rangle_{\mathcal{K}}. \end{align*} Thus we can assume, with at most creating an equivalent Livsic characteristic function, the $u_j$ have this form. From our Krein trick in \eqref{K-form} we also know, for any $z \in \mathbb{C} \setminus \mathbb{R}$, that $$(M' -i I ) (M' -z I) ^{-1} : \mbox{Ker}(M^* -i I) \rightarrow \mbox{Ker}(M^* -z I)$$ is bounded and invertible, so that we can find a bounded invertible operator $V_z : \mathcal{K} \rightarrow \mathcal{K} $ such that $$(M' -i I ) (M' - \overline{z} I) ^{-1} u_j = K _{z} (\cdot ) V_z e_j.$$ Here we are using the fact that $\mbox{Ker}(M^* -\overline{z} I )$ is spanned by the vectors $$K _z (\cdot) e_j, \quad 1 \leqslant j \leqslant n.$$ Actually they form a Riesz basis which ensures that $V_z$ is bounded and invertible when $n=\infty$. We can now compute $A(z)$ as \begin{align*} A(z) = & \left[ \langle (M' + i I)(M' -z I)^{-1} u_j, u_k \rangle \right] \nonumber \\ = & \left[ \langle u_j, (M' - i I)(M' -\overline{z} I )^{-1} u_k \rangle \right] \nonumber \\ = & \left[ \langle K_{-i} (\cdot) K_{-i} (-i) ^{-1/2} e_j , K_z (\cdot) V_{z} e_k \rangle \right] , \end{align*} and it follows that $$A(z) = V_{z} ^* K_{-i} (z) K _{-i} (-i) ^{-1/2} = V_z ^* \Psi (z) .$$ Similarly, \begin{align*} B(z) & = \left[ \langle (M' - i I)(M' -z I )^{-1} u_j, u_k \rangle \right] \nonumber \\ & = \left[ \langle (M' -iI) (M' +iI) ^{-1} u_j, (M' -i I ) (M' - \overline{z} I ) ^{-1} u_k \rangle \right] \nonumber \\ & = \left[ \langle K_i (\cdot) K_i (i) ^{-1/2} U e_j , K _z (\cdot ) V_z e_k \rangle \right]. \end{align*} Here we have that $U : \mathcal{K} \rightarrow \mathcal{K}$ is some fixed unitary operator. 
The existence of $U$ follows from the facts that $(M' -i I) (M' +i I) ^{-1}$ is unitary and that for any orthonormal basis $\{ b_j \}_{j = 1}^{n}$ of $\mathcal{K}$, $K_i (\cdot )K_i (i) ^{-1/2} b_j$ is an orthonormal basis of $\mbox{Ker} (M^* +iI)$. This shows that $B(z) = V_z ^* K_i (z) K _i (i) ^{-1/2} U = V_z ^* \Phi (z) U$. Hence $$ w_T (z) = b(z) B(z) ^{-1} A(z) = U^* b(z) \Phi (z) ^{-1} ( V_z ^* ) ^{-1} V_z ^* \Psi (z) = U^* V_T (z) .$$ Since $U$ is a constant unitary matrix we conclude that $V_T$ and $w_T$ are equivalent. \end{proof} \section{Computing the characteristic function} Let us compute the characteristic functions $V$ for the examples mentioned earlier. After discussing Herglotz spaces we will use these computations to make some interesting connections between these operators and vector-valued deBranges-Rovnyak spaces. \subsection{Differentiation} Let us return to the differentiation example $Tf = i f'$ on $L^2[-\pi, \pi]$ with domain $\{f \in L^2[-\pi, \pi]: f(-\pi) = f(\pi) = 0\}$ from Example \ref{diff-ex}. We saw that the corresponding Hilbert space of analytic functions on $\mathbb{C} \setminus \mathbb{R}$ was the Paley-Wiener (type) space with reproducing kernel $$K_{\lambda}(z) = 2 \frac{\sin \pi(z - \overlineerline{\lambda})}{z - \overlineerline{\lambda}}.$$ A computation will show that $$K_{i}(i) = K_{-i}(-i) = \sinh 2\pi$$ and thus, since the factor $b(z) = \frac{z-i}{z+i}$ cancels against the quotient of the two kernels, the Livsic function $V(z)$ is $$V(z) = \frac{\sin \pi (z - i)}{\sin\pi (z + i)}, \quad z \in \mathbb{C}_{+}.$$ Notice how $|V(z)| < 1$ on $\mathbb{C}_{+}$ with zeros $\{i + n: n \in \mathbb{Z}\}$ and $|V(z)| > 1$ on $\mathbb{C}_{-}$ with poles $\{-i + n: n \in \mathbb{Z}\}$. A computation will show that $|V(x)| = 1$ for $x \in \mathbb{R}$ and so $V$ is inner on $\mathbb{C}_{+}$. This will be important later on. \subsection{Double differentiation} For the operator $T f = -f''$ discussed in Example \ref{double-diff}, the kernel function $K_{\lambda}(z)$ was computed to be $$K_{\lambda}(z) = \frac{-i}{\epsilon_{z} \sqrt{z} - \epsilon_{\lambda} (\overlineerline{\lambda} ) ^{\frac{1}{2}}},$$ where $\epsilon_{\lambda} = 1$ if $\Im \lambda > 0$ and $\epsilon_{\lambda} = -1$ if $\Im \lambda < 0$. In this example, $$K_{i}(i) = K_{-i}(-i) = \frac{1}{\sqrt{2}}$$ and so $$V(z) =\left( \frac{z - i}{z + i}\right) \frac{\sqrt{z} - \frac{1-i}{\sqrt{2}}}{\sqrt{z} + \frac{1+i}{\sqrt{2}}}, \quad z \in \mathbb{C}_{+}.$$ One needs to be careful when computing $V(z)$ for $\Im z < 0$ since $\epsilon_z = -1$ and so $\sqrt{z}$ changes to a $-\sqrt{z}$ in the above formula for $V(z)$. A computation will show that $|V(x)| = 1$ for $x < 0$ and so, although $V$ is not inner on $\mathbb{C}_{+}$, it is an extreme function for the unit ball of $H^{\infty}(\mathbb{C}_{+})$ -- which will become important later when we discuss deBranges-Rovnyak spaces. \subsection{Sturm-Liouville operators} This example is a continuation of Example \ref{SL-ex}. We will assume that $I$ is a closed finite interval such that $1/p , q \in L^1 (I)$. Recall that in this case the operators $H(p,q,I)$, which act as \[ H(p,q,I) f = -(pf')' + qf \] for all $f \in \mathscr{D} (H (p,q,I)) \subset L^2 (I)$, are closed, simple, symmetric, densely defined operators with indices $(2,2)$. In this case where $I$ is finite and $1/p, q \in L^1 (I)$, $H (p,q,I)$ is called a regular second-order Sturm-Liouville differential operator, and it is known that $H(p,q,I) -xI$ is bounded below for any $x \in \mathbb{R}$, so that every $x \in \mathbb{R}$ is a regular value of $H(p,q,I)$. 
Recall that any symmetric operator with this property is called regular. Here is a brief sketch of a proof that $H(p,q,I)$ is regular: It can be proven that the domain of $H(p,q,I)$ is the set of all $f \in \mathscr{D} ( H(p,q,I) ^* )$ such that both $f(a) = 0 = f(b)$ and $p(a) f'(a) = 0 = p(b) f'(b)$ \cite[Lemma 1, Section 17.3]{Naimark}. Hence if $x \in \mathbb{R}$ were an eigenvalue of $H(p,q,I)$ with corresponding eigenfunction $f \in \mathscr{D} (H(p,q,I))$, $f$ would be a solution to the ordinary differential equation \[ -(pf')' +qf = x f, \] which obeys the boundary conditions $f(a) = 0 $ and $p(a) f'(a) =0$. The existence-uniqueness theorem for ordinary differential equations \cite[Theorem 2, Section 16.2]{Naimark} would then imply that $f=0$. This contradiction proves that $H(p,q,I)$ has no eigenvalues. Now by \cite[Theorem 1, Section 19.2]{Naimark}, the resolvent $ (H - z I) ^{-1}$, where $z \in \mathbb{C} \setminus \mathbb{R}$ and $H$ is any fixed self-adjoint extension of $H(p,q,I)$, is a compact Hilbert-Schmidt integral operator. It follows that the spectrum of any self-adjoint extension $H$ of $H(p,q,I)$ is a discrete sequence of eigenvalues with no finite accumulation point, and $H$ has no finite essential spectrum. If for some $x \in \mathbb{R}$, $H(p,q,I) -x I$ were not bounded below, then since $x$ cannot be an eigenvalue, it would have to belong to the essential spectrum of $H(p,q,I)$. It follows from \cite[Theorem 1, Section 83]{A-G} that $x$ would have to belong to the essential spectrum of every self-adjoint extension $H$ of $H(p,q,I)$, and this contradicts the fact that the essential spectrum of any such self-adjoint extension is empty. Note here that the Cayley transforms $b(H)$ and $b(H(p,q,I))$ of $H$ and $H(p,q,I)$ differ by a finite rank perturbation, so this also follows from the fact that any two bounded operators which differ by a compact perturbation have the same essential spectrum. In conclusion, $H(p,q,I) -xI$ is bounded below for any $x \in \mathbb{R}$, and $H(p,q,I)$ is regular. Also note that any regular symmetric operator $T$ must also be simple: if $T$ had a self-adjoint restriction $T_0$, then $T_0$ would have non-empty real spectrum, so that $T_0 -xI$, and hence $T-xI$, would not be bounded below for some $x \in \mathbb{R}$. This provides another proof that $H(p,q,I)$ is simple in this case. Let $V_I$ denote the characteristic function of $H(p,q,I)$. By a result of Livsic, \cite[Theorem 4]{Livsic-2}, since every $x \in \mathbb{R}$ is a regular point of $H(p,q, I)$, it follows that $V_I$ is a $2 \times 2$ matrix-valued inner function which has an analytic extension to a neighborhood of $\mathbb{R}$. To actually compute this inner characteristic function $V_I$, for any $z \in \mathbb{C}$, let $u_z , v_z$ be the entire $L^2 (I)$-valued functions spanning $\mbox{Ker} (H(p,q,I)^* -z I ) $ discussed in Example \ref{SL-ex}.
As in Example \ref{SL-ex}, if we define $$\gamma _1 (z) = u_z, \quad \gamma _2 (z) = v_z$$ and $$\Gamma _I (z) = \gamma_1 (\overline{z} ) \otimes e_1 + \gamma _2 (\overline{z} ) \otimes e_2,$$ then $\Gamma _I : \mathbb{C} \setminus \mathbb{R} \rightarrow \mbox{Ker} (H (p,q,I) ^* -\overline{z} I)$ is a model for $H (p,q,I)$ and $\mathcal{H} (\Gamma _I)$ has reproducing kernel: \[ K ^I _\lambda (z) = \left( \begin{array}{cc} \int _I u_{\overline{\lambda}} (x) \overline{ u _{\overline{z}} (x) } dx & \int _I u _{\overline{\lambda}} (x) \overline{ v_{\overline{z}} (x)} dx \\ \int _I v_{\overline{\lambda}} (x) \overline{ u _{\overline{z}} (x) } dx & \int _I v _{\overline{\lambda}} (x) \overline{ v_{\overline{z}} (x) } dx \end{array} \right). \] From this one can compute the characteristic function $V_I$ as $$V_I (z) = b(z) \Phi ^I (z) ^{-1} \Psi ^I (z)$$ where $$b(z) = \frac{z-i}{z+i},$$ $$\Phi ^I (z) = K_i ^I (z) K_i ^I (i) ^{-1/2},$$ and $$\Psi ^I (z) = K_{-i} ^I (z) K_{-i} ^I (-i) ^{-1/2}.$$ Note that both $\Phi ^I (z)$ and $\Psi ^I (z)$ are entire matrix functions of $z \in \mathbb{C}$. Now consider a larger interval $J \supset I$, and repeat the above arguments for the operator $H (p,q , J )$ acting on its dense domain in $L^2 (J)$. Note that $H(p,q,J) \supset H(p,q,I)$, \emph{i.e.} $$\mathscr{D} (H(p,q,J) ) \supset \mathscr{D} (H(p,q,I)), \quad H(p,q,J) |\mathscr{D} (H(p,q,I)) = H(p,q,I).$$ Observe that if $U_J : L^2 (J) \rightarrow \mathcal{H} (\Gamma _J ) $ is the isometry defined by $$ (U_J f ) (z) = \Gamma _J (z) ^* f = \left( \langle f , u_{\overline{z}} \rangle _J , \langle f , v_{\overline{z}} \rangle _J \right),$$ then $U_J |L^2 (I) = U_I$, where $U_I$ is the corresponding isometry of $L^2 (I) $ onto $\mathcal{H} (\Gamma _I)$ which takes $H(p,q,I)$ onto $M _I = U_I H(p,q,I) U_I ^*$, the symmetric operator of multiplication by $z$ in $\mathcal{H} (\Gamma _I )$ (a short justification of this restriction identity is sketched below). This shows that $\mathcal{H} (\Gamma _I )$ is a closed subspace of $\mathcal{H} (\Gamma _J )$, and that if $M _J = U_J H(p,q, J) U_J ^*$ is the corresponding operator of multiplication by $z$ in $\mathcal{H} (\Gamma _J )$ then $M_I \subset M _J$. Since the characteristic functions $V_I$ and $V_J$ are inner, it will further follow from Theorem \ref{multiplier} of the last section of this paper that multiplication by $\frac{\Phi _I }{\Phi _J}$, where $\Phi _I (z) = K ^I _i (z) K^I _i (i) ^{-1/2}$, is an isometric multiplier from the model subspace $\mathscr{K} _I := H^2 _{\mathbb{C} ^2} \ominus V_I H^2 _{\mathbb{C} ^2}$ into $\mathscr{K} _J$. These observations seem to be connected to the results of \cite{Remling}, and although we will not pursue this further here, it would be interesting to investigate this in a future paper.
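For the reader's convenience, here is a sketch of the restriction identity $U_J |L^2 (I) = U_I$ used above; we assume, as can be arranged by fixing the initial conditions of the solutions $u_z, v_z$ at a point of $I$, that the solutions defining $\Gamma_J$ restrict on $I$ to the solutions defining $\Gamma_I$. Viewing $f \in L^2(I)$ as an element of $L^2(J)$ by extending it by zero, we have for every $z \in \mathbb{C} \setminus \mathbb{R}$ $$ (U_J f)(z) = \left( \int_J f(x) \overline{u_{\overline{z}}(x)}\, dx , \ \int_J f(x) \overline{v_{\overline{z}}(x)}\, dx \right) = \left( \int_I f(x) \overline{u_{\overline{z}}(x)}\, dx , \ \int_I f(x) \overline{v_{\overline{z}}(x)}\, dx \right) = (U_I f)(z), $$ since $f$ vanishes off $I$. In particular $\mathcal{H}(\Gamma_I) = U_I L^2(I) = U_J L^2(I)$ sits isometrically inside $\mathcal{H}(\Gamma_J) = U_J L^2(J)$.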
\subsection{Multiplication by the independent variable} For the example of $M_{\mu}$, the restriction of $M^{\mu}$, multiplication by the independent variable on $L^2(\mu)$, to $$\left\{f \in L^2(\mu): x f \in L^2(\mu), \int f d \mu = 0\right\},$$ recall that the reproducing kernel for the corresponding Hilbert space of analytic functions on $\mathbb{C} \setminus \mathbb{R}$ (the Cauchy transforms of $L^2(\mu)$ functions) is $$K_{\lambda}(z) = \int \frac{1}{(x - z)(x - \overline{\lambda})} d \mu(x).$$ Notice that $$K_{i}(i) = K_{-i}(-i) = \int \frac{1}{1 + x^2} d \mu(x)$$ and so the Livsic characteristic function is $$V(z) = \frac{z - i}{z + i} \frac{\int \frac{d \mu(t)}{(t - i)(t - z)}}{\int \frac{d \mu(t)}{(t + i)(t - z)}}, \quad z \in \mathbb{C}_{+}.$$ One can use the Poisson integral theory to show that $V$ is inner on $\mathbb{C}_{+}$ if and only if $\mu$ is singular with respect to Lebesgue measure on $\mathbb{R}$. If the support of $\mu$ omits an interval $I \subset \mathbb{R}$, then, since $|V(x + i y)| < 1$ for $x \in I, y > 0$ while $|V(x + i y)| > 1$ for $x \in I, y < 0$, and $V$ has an obvious analytic continuation across $I$, we see that $|V(x)| = 1$ on $I$. Though $V$, in this case where the support of $\mu$ omits an interval, may not be inner (unless $\mu$ is singular with respect to Lebesgue measure), it is an extreme function (see the definition of extreme functions in the last section). \subsection{Toeplitz operators} Recall the Toeplitz operator example $T_g, g \in N^{+}_{\mathbb{R}}$ from Example \ref{Toeplitz-ex}. Note that $T_{g} \in \mathcal{S}_{1}(H^2)$ precisely when $$g = i \frac{p + q}{p - q},$$ where $p, q$ are order one Blaschke products such that $p - q$ is outer. One can easily check that $$p(z) = z, \quad q(z) = \frac{z- a}{1 - a z}, \quad 0 < a < 1,$$ work; indeed, $p(z) - q(z) = \frac{a(1 - z^2)}{1 - a z}$ has no zeros in $\mathbb{D}$ and is therefore outer. One can show that when $a = 1/2$, $g$ maps $\mathbb{D}$ onto $\mathbb{C} \setminus ((-\infty, -\sqrt{3}] \cup [\sqrt{3}, \infty))$ and $$g^{-1}(z) = \frac{\sqrt{z^2 - 3}-2 i}{z-i}.$$ As worked out in Example \ref{Toeplitz-ex} we saw that \begin{equation} \label{K-Toep} K_{\lambda}(z) = \frac{1}{1 - \overline{g^{-1}(\lambda)} g^{-1}(z)} \end{equation} and a computation will show that the corresponding Livsic function $V$ is $$V(z) = -\frac{\sqrt{z^2 - 3}-2 z}{\sqrt{3} (z+i)}, \quad z \in \mathbb{C}_{+}.$$ Another computation will show that $|V(x)| = 1$ on $[-\sqrt{3}, \sqrt{3}]$ and so $V$ is extreme. Let us show how the formula \eqref{K-Toep} can be used to prove a theorem about unitary equivalence of symmetric Toeplitz operators. \begin{Theorem} \label{Toeplitz-1-1} Let $g, h \in N^{+}_{\mathbb{R}}$ be such that $T_{g}, T_{h} \in \mathcal{S}_{1}(H^2)$. Then $T_{g} \cong T_{h}$ if and only if $g = h \circ w$ where $w$ is a disk automorphism. \end{Theorem} \begin{proof} If $g = h \circ w$ then the unitary operator $U: H^2 \to H^2$, $U f = \sqrt{w'} (f \circ w)$ satisfies $U T_{h} = T_{g} U$ and so $T_{g} \cong T_{h}$. For the other direction, assume $T_{g} \cong T_{h}$. By composing with disk automorphisms, which will not change the unitary equivalence of $T_g$ and $T_h$, we can assume that $g(0) = h(0) = -i$.
Recall that the kernels $K^{g}$ and $K^{h}$ for the associated spaces corresponding to $T_{g}$ and $T_{h}$ are given by $$K^{g}_{\lambda}(z) = \frac{1}{1 - \overline{g^{-1}(\lambda)} g^{-1}(z)}, \quad K^{h}_{\lambda}(z) = \frac{1}{1 - \overline{h^{-1}(\lambda)} h^{-1}(z)}.$$ Since $T_{g} \cong T_{h}$ we have $$\frac{K^{g}_{-i}(z)/\|\cdot\|}{K^{g}_{i}(z)/\|\cdot\|} = \zeta \frac{K^{h}_{-i}(z)/\|\cdot\|}{K^{h}_{i}(z)/\|\cdot\|}$$ for some $|\zeta| = 1$ (here $K/\|\cdot\|$ denotes the kernel normalized to have unit norm). This reduces to the identity $$\frac{1 - g^{-1}(z) \overline{g^{-1}(i)}}{\sqrt{1 - |g^{-1}(i)|^2}} = \zeta \frac{1 - h^{-1}(z) \overline{h^{-1}(i)}}{\sqrt{1 - |h^{-1}(i)|^2}}.$$ Plugging $z = i$ into the above identity shows that $\zeta = 1$ and $|g^{-1}(i)| = |h^{-1}(i)|$. A little algebra will now show that $$g^{-1}(z) = \frac{\overline{h^{-1}(i)}}{\overline{g^{-1}(i)}} h^{-1}(z)$$ and moreover, $$\frac{\overline{h^{-1}(i)}}{\overline{g^{-1}(i)}} = \alpha$$ is unimodular. Letting $z = h(t)$ for some $|t| < 1$ we see that $$g^{-1}(h(t)) = \alpha t$$ and so $g^{-1} \circ h$ is a disk automorphism. \end{proof} \begin{Question} For $g, h \in N^{+}_{\mathbb{R}}$ with $T_{g}, T_{h} \in \mathcal{S}_{n}(H^2)$, when is $T_g \cong T_h$? \end{Question} For the general case, the answer is unknown but we can make a few general remarks. \begin{Proposition} If $g \in N_{\mathbb{R}}^{+}$ and $T_{g} \in \mathcal{S}_n(H^2)$, $n < \infty$, then the point spectrum $\sigma_{p}(T_{g}^{*})$ of $T_{g}^{*}$ satisfies $$\sigma_{p}(T_{g}^{*}) = \left\{\overline{g(z)}: z \in \mathbb{D}\right\}.$$ \end{Proposition} \begin{proof} Since $T_{g}^{*} k_{\lambda} = \overline{g(\lambda)} k_{\lambda}$ we have $$\left\{\overline{g(z)}: z \in \mathbb{D}\right\} \subset \sigma_{p}(T_{g}^{*}).$$ For the other direction, suppose $T_{g}^{*} f = \eta f$ for some $f \in \mathscr{D}(T_{g}^{*}) \setminus \{0\}$. Then $f \perp \mbox{Rng}(T_{g} - \overline{\eta}I )$ or equivalently $$\langle f, (T_{g} - \overline{\eta}I) h\rangle = 0, \quad \forall h \in \mathscr{D}(T_{g}).$$ But writing $g = b/a$ in the Sarason decomposition from \eqref{gisba}, we see that $\mathscr{D}(T_{g}) = a H^2$ and so $$\langle f, (b/a - \overline{\eta}) a w\rangle = 0, \quad \forall w \in H^2,$$ which implies $$\langle f, (b - \overline{\eta} a) w \rangle = 0, \quad \forall w \in H^2.$$ However, since $f \not \equiv 0$, it must be the case that $b - \overline{\eta} a$ has an inner factor. But since $a$ and $b$ are rational functions (Sarason proves that if $g$ is rational then so are $a$ and $b$) we see that this inner factor is a finite Blaschke product and so $b - \overline{\eta} a$ must vanish for some $z \in \mathbb{D}$, i.e., $g(z) = \overline{\eta}$. Thus we have the inclusion $$\sigma_{p}(T_{g}^{*}) \subset \left\{\overline{g(z)}: z \in \mathbb{D}\right\},$$ which completes the proof. \end{proof} \begin{Corollary} Suppose $g_1, g_2 \in N_{\mathbb{R}}^{+}$ with $T_{g_1}, T_{g_2} \in \mathcal{S}_{n}(H^2), n \in \mathbb{N}$. If $T_{g_1} \cong T_{g_2}$, then $g_1(\mathbb{D}) = g_2(\mathbb{D})$. \end{Corollary} \begin{proof} If $U T_{g_1} = T_{g_2} U$, where $U: H^2 \to H^2$ is unitary with $U \mathscr{D}(T_{g_1}) = \mathscr{D}(T_{g_2})$, then $U(T_{g_1} - \lambda I) = (T_{g_2} - \lambda I) U$ for all $\lambda \in \mathbb{C}$.
So if $g \in H^2$ and $f \in \mathscr{D}(T_{g_1})$ with $$\langle (T_{g_1} - \lambda I) f, g \rangle = 0,$$ then $$\langle (T_{g_2} - \lambda I) U f, U g \rangle = 0.$$ This means that $$g \in \mbox{Rng}(T_{g_1} - \lambda I)^{\perp} \Leftrightarrow U g \in \mbox{Rng}(T_{g_2} - \lambda I)^{\perp}$$ and so $$\mbox{Ker}(T_{g_1}^{*} - \overline{\lambda} I) \not = \{0\} \Leftrightarrow \mbox{Ker}(T_{g_2}^{*} - \overline{\lambda} I) \not = \{0\}.$$ This means that $\sigma_{p}(T_{g_1}^{*}) = \sigma_{p}(T_{g_2}^{*})$. By the previous proposition we conclude that $g_1(\mathbb{D}) = g_2(\mathbb{D})$. \end{proof} \begin{Remark} Notice how the previous corollary gives us a proof of Theorem \ref{Toeplitz-1-1} which comes from general principles and does not involve the Livsic characteristic function. \end{Remark} Suppose $T_{g} \in \mathcal{S}_n(H^2)$ and we want to compute the Livsic characteristic function. In this case $$\mbox{Ker}(T^{*}_{g} - \overline{\lambda} I) = \bigvee\{k_{z_{j}(\lambda)}: 1 \leqslant j \leqslant n\},$$ where $z_{1}(\lambda), \cdots, z_{n}(\lambda)$ are the solutions to $g(z) = \lambda$. If, and this is not always the case, $g$ is such that the $z_j(\lambda)$ can be chosen so that $\lambda \mapsto z_{j}(\lambda)$ is analytic on $\mathbb{C} \setminus \mathbb{R}$, then we can use our model discussed earlier and define $$\gamma(\lambda) = (k_{z_{1}(\lambda)}, \cdots, k_{z_{n}(\lambda)}).$$ The Hilbert space $\mathcal{H}(\Gamma)$ is then $$\{(f(z_{1}(\lambda)), \cdots, f(z_{n}(\lambda))): f \in H^2\}$$ with inner product $$\langle (f_1(z_{1}(\lambda)), \cdots, f_1(z_{n}(\lambda))), (f_2(z_{1}(\lambda)), \cdots, f_2(z_{n}(\lambda)))\rangle_{\mathcal{H}(\Gamma)} = \langle f_1, f_2 \rangle_{H^2}.$$ By our earlier discussion, the reproducing kernel is $$K_{\lambda}(z) = [k_{z_i(\lambda)}(z_{j}(z))]_{1 \leqslant i, j \leqslant n}$$ and the Livsic characteristic function can be computed from here. Can the above situation actually happen? Yes. Consider the case where $g \in N^{+}_{\mathbb{R}}$ and $T_g \in \mathcal{S}_1(H^2)$ (and consequently $g$ will be univalent). Then $T_{g^2} \in \mathcal{S}_{2}(H^2)$ and to solve $g(z)^2 = \lambda$ we must solve $g(z) = \pm \sqrt{\lambda}$, which, at the end of the day (and since $g$ is invertible) will yield $z_1(\lambda)$ and $z_2(\lambda)$ analytic on $\mathbb{C} \setminus \mathbb{R}$. Note how the kernel function in this case was computed earlier; a concrete form of this kernel is sketched below. So what does this all mean? From our version of Livsic's theorem we know that $T_{g_1}$ is unitarily equivalent to $T_{g_2}$ if and only if $V_1(\lambda) = R V_{2}(\lambda) Q$, where $V_1$ and $V_2$ are created from the above expression for $K_{\lambda}(z)$. Is it possible to translate this into a more workable condition -- as in the $(1, 1)$ case, where $T_{g_1}$ is unitarily equivalent to $T_{g_2}$ if and only if $g_1 = g_2 \circ h$ where $h$ is a disk automorphism? The more likely situation is when the functions $\lambda \mapsto z_j(\lambda)$ are only locally analytic -- away from the points where $g' = 0$. In this case, by Grauert's construction (or really the Krein construction) we have $$\gamma(\lambda) = (\gamma_{1}(\lambda), \cdots, \gamma_n(\lambda)),$$ where $$\gamma_{i}(\lambda) = \sum_{j = 1}^{n} \overline{a_{i, j}(\lambda)} k_{z_{j}(\lambda)}.$$ The functions $\lambda \mapsto a_{i, j}(\lambda)$ are locally analytic -- avoiding the zeros of $g'$. But somehow, amazingly, the $\gamma_{j}$ are co-analytic on $\mathbb{C} \setminus \mathbb{R}$.
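For instance, in the $T_{g^2}$ example just described one can make the kernel completely explicit; the following is a sketch, using the Szego kernel $k_{w}(v) = (1 - \overline{w}v)^{-1}$ of $H^2$ and a fixed branch of the square root analytic off $[0,\infty)$. Taking $z_{1}(\lambda) = g^{-1}(\sqrt{\lambda})$ and $z_{2}(\lambda) = g^{-1}(-\sqrt{\lambda})$ for $\lambda \in \mathbb{C} \setminus \mathbb{R}$, the $2 \times 2$ kernel above becomes $$K_{\lambda}(z) = \left[ \frac{1}{1 - \overline{z_{i}(\lambda)}\, z_{j}(z)} \right]_{1 \leqslant i, j \leqslant 2},$$ the $n = 2$ analogue of \eqref{K-Toep}, from which the corresponding Livsic characteristic function can be computed as before.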
When looking at \emph{bounded} Toeplitz operators on $H^2$, there is this result of Cowen \cite{Cowen} (see also \cite{Steph}). \begin{Theorem}[Cowen] Suppose that $\phi_1$ and $\phi_2$ are bounded rational functions on $\mathbb{D}$. Then the following are equivalent: \begin{enumerate} \item $T_{\phi_1}$ is similar to $T_{\phi_2}$. \item $T_{\phi_1}$ is unitarily equivalent to $T_{\phi_2}$. \item There is a bounded function $h$ on $\mathbb{D}$ and Blaschke products $b_1$ and $b_2$ of equal order such that $\phi_1 = h \circ b_1$ and $\phi_2 = h \circ b_2$. \end{enumerate} \end{Theorem} Can we get a similar result for our unbounded Toeplitz operators? We think the answer is yes, and we can prove the following result, which is analogous to one direction of Cowen's result \cite{Cowen} for bounded Toeplitz operators, in fact with nearly the same proof. \begin{Proposition} If $g \in N^{+}_{\mathbb{R}}$ and $B$ is a finite Blaschke product of order $n$, then $$T_{g \circ B} \cong \oplus_{n} T_{g}.$$ \end{Proposition} \begin{proof} Let $\{w_1, \ldots, w_n\}$ be an orthonormal basis for $(B H^2)^{\perp}$ (which is $n$-dimensional since $B$ has order $n$). Then $$\{w_{j} B^{k}: 1 \leqslant j \leqslant n, k \geqslant 0\}$$ is an orthonormal basis for $H^2$. This allows us to define the unitary operator $$U: \oplus_{n} H^2 \to H^2, \quad U(\oplus_{j = 1}^{n} f_j) = w_1 (f_1 \circ B) + \cdots + w_{n} (f_n \circ B).$$ If $g = b/a$ is the canonical representation of $g$ then, as discussed before, the domain of $T_g$ is $a H^2$, the domain of $T_{g \circ B}$ is $(a \circ B) H^2$, and the domain of $\oplus_{n} T_{g}$ is $\oplus_{n} aH^2$. One easily checks from the definition of $U$ that $U(\oplus_{n} a H^2) = (a \circ B)H^2$ and that $$U(\oplus_{n} T_{g}) = T_{g \circ B} U.$$ Thus $T_{g \circ B} \cong \oplus_{n} T_{g}.$ \end{proof} \begin{Corollary} If $g \in N^{+}_{\mathbb{R}}$ and $B_1, B_2$ are Blaschke products of order $n$, then $T_{g \circ B_1} \cong T_{g \circ B_2}$. \end{Corollary} \section{Herglotz spaces} \label{section:Herglotz} There are many ways one can create a model space $\mathcal{H}(\Gamma)$ for a given $T \in \mathcal{S}_{n}(\mathcal{H})$, i.e., a Hilbert space of vector-valued analytic functions on $\mathbb{C} \setminus \mathbb{R}$ for which multiplication by the independent variable is unitarily equivalent to $T$. Indeed, if $\mathcal{H}_1$ is a model space for $T$ and $W(z): \mathcal{K} \to \mathcal{K}$ is invertible for each $z \in \mathbb{C} \setminus \mathbb{R}$ and analytic on $\mathbb{C} \setminus \mathbb{R}$, then $\mathcal{H}_2 := W \mathcal{H}_1$ (endowed with the norm $\|W f\|_{\mathcal{H}_2} := \|f\|_{\mathcal{H}_1}$) is also a model space for $T$. That is to say, the map $f \mapsto W f$ is an isometric multiplier from $\mathcal{H}_1$ onto $\mathcal{H}_2$ (a brief verification is sketched below). Furthermore, as seen by the proof of Corollary \ref{Cor-main-Liv-ue}, we know that if $K^1, K^2$ are the corresponding kernel functions for model spaces $\mathcal{H}_1, \mathcal{H}_2$ then $$K^1_{\lambda}(z) = W(z) K^2_{\lambda}(z) W(\lambda)^{*}$$ if and only if $\mathcal{H}_1 = W \mathcal{H}_{2}$. We know from our earlier work that, up to unitary operators (matrices), the Livsic function determines unitary equivalence for operators in $\mathcal{S}_{n}(\mathcal{H})$. It turns out that one can parameterize these model spaces in terms of the Livsic characteristic function and a certain Herglotz space. This will be the focus of this section.
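Here is the brief verification promised above that $\mathcal{H}_2 := W \mathcal{H}_1$ is again a model space for $T$; in this paragraph only we write $\mathcal{U}$ for the map $f \mapsto W f$ and $M_1$, $M_2$ for multiplication by the independent variable in $\mathcal{H}_1$ and $\mathcal{H}_2$. By the definition of the norm on $\mathcal{H}_2$, $\mathcal{U}$ is a unitary operator from $\mathcal{H}_1$ onto $\mathcal{H}_2$, and if we set $\mathscr{D}(M_2) := \mathcal{U} \mathscr{D}(M_1)$, then for every $f \in \mathscr{D}(M_1)$ and $z \in \mathbb{C} \setminus \mathbb{R}$ $$ (\mathcal{U} M_1 f)(z) = W(z) \left( z f(z) \right) = z \left( W(z) f(z) \right) = (M_2 \mathcal{U} f)(z), $$ so that $M_2 = \mathcal{U} M_1 \mathcal{U}^{*}$ is unitarily equivalent to $M_1$, and hence to $T$.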
So far we know that for our given $T \in \mathcal{S}_{n}(\mathcal{H})$ and model $\Gamma$, the kernel function $K_{\lambda}(z)$ can be factored as $$K_{\lambda}(z) = \Phi(z) \left( \frac{I - V(z) V(\lambda)^{*}}{1 - b(z) \overline{b(\lambda)}} \right) \Phi(\lambda)^{*},$$ where $$\Phi(z) = K_{i}(z) K_{i}(i)^{-1/2}, \quad \Psi(z) = K_{-i}(z) K_{-i}(-i)^{-1/2}, \quad V(z) = b(z) \Phi(z)^{-1} \Psi(z),$$ and $V$ is, up to unitary operators, the Livsic characteristic function for $T$. Moreover, $V$ is contractive on $\mathbb{C}_{+}$ and $V(i) = 0$. Also recall that $V$ is a meromorphic operator-valued function on $\mathbb{C}_{-}$. As observed earlier in Remark \ref{zero-pole-V}, but worth recalling here, the denominator in the above formula for $K_{\lambda}(z)$ vanishes when $z = \overline{\lambda}$ and thus the numerator must also vanish. This shows $$V(z) V(\overline{z})^{*} = I, \quad z \in \mathbb{C} \setminus \mathbb{R}.$$ When $V$ has a zero or pole at $z$ the above formula must be interpreted in the usual way (poles cancel out the zeros). This means that we can use the identity $V(z) V(\overline{z})^{*} = I$, along with the fact that $\|V(z)\| < 1$ for all $z \in \mathbb{C}_{+}$ and $\|V(z)\| > 1$ for all $z \in \mathbb{C}_{-}$, to see that \begin{equation} \label{Omega} \Omega(z) := (I + i V(z))(I - i V(z))^{-1} \end{equation} is well defined on $\mathbb{C} \setminus \mathbb{R}$. Moreover, one can check that \begin{enumerate} \item $z \mapsto \Omega(z)$ is an analytic operator-valued function on $\mathbb{C} \setminus \mathbb{R}$. \item $\mathrm{Re}\, \Omega(z) := \frac{1}{2}(\Omega(z) + \Omega(z)^{*}) \geqslant 0$ on $\mathbb{C}_{+}$. \item $\Omega(z) = - \Omega(\overline{z})^{*}$. \end{enumerate} Such $\Omega$ satisfying the three properties listed above are called Herglotz functions and there is a very large theory of such functions \cite{MR0154132, DB, MR1784638, Langer}. The literature on this can be a bit confusing at times since Herglotz functions are often defined in slightly different ways or given different names, but they are essentially the same and have the same properties. A computation will show the following. \begin{Theorem} If $$W(z) := \sqrt{\pi}(z + i) \Phi(z) (\Omega(z) + I)^{-1}$$ then $$K_{\lambda}(z) = W(z) \left(\frac{\Omega(z) + \Omega(\lambda)^{*}}{\pi i (\overline{\lambda} - z)}\right) W(\lambda)^{*}.$$ \end{Theorem} The function $$K^{V}_{\lambda}(z) = \frac{\Omega(z) + \Omega(\lambda)^{*}}{\pi i (\overline{\lambda} - z)}$$ is a positive definite kernel function on $\mathbb{C} \setminus \mathbb{R}$ and, by general theory \cite{Paulsen}, is the reproducing kernel for a unique vector-valued reproducing kernel Hilbert space $\mathscr{H}(V)$, often called a \emph{Herglotz space}, and was discussed by L.~deBranges \cite{MR0154132, DB}. This gives us the following. \begin{Theorem} \label{Mz-Herglotz} Any $T \in \mathcal{S}_{n}$, $n \in \mathbb{N} \cup \{\infty\}$, is unitarily equivalent to $M^{V}$, multiplication by the independent variable on a Herglotz space $\mathscr{H}(V)$, where $V$ is the Livsic function corresponding to $T$. Furthermore, the Livsic function for $M^{V}$ is $V$. \end{Theorem} When $n < \infty$, we can use deBranges' results \cite{MR0154132, DB} further to identify the Herglotz space $\mathscr{H}(V)$ as a space of vector-valued Cauchy transforms.
Indeed, by a vector-valued analog of the classical Herglotz theorem (every positive harmonic function on $\mathbb{C}_{+}$ is the Poisson integral of a measure \cite{Duren}) there exists a positive matrix-valued measure $\mu$ on $\mathbb{R}$ satisfying (i) $\mu(E) \in M_{n \times n}(\mathbb{C})$, $E \subset \mathbb{R}$, Borel; (ii) $\mu(E) \geqslant 0$ for all $E$; (iii) $\mu(\cup_j E_j) = \sum_{j} \mu(E_j)$, disjoint $E_j$; (iv) $$\int \frac{d\langle \mu(t) a, a\rangle_{\mathbb{C}^n}}{1 + t^2} < \infty, \quad \forall a \in \mathbb{C}^n;$$ (v) $$K^{V}_{\lambda}(z) = \frac{1}{\pi^2} \int \frac{d \mu(t)}{(t - \overline{\lambda})(t - z)},$$ i.e., $$\langle K^{V}_{\lambda}(z) a, b\rangle_{\mathbb{C}^n} = \frac{1}{\pi^2} \int \frac{d\langle \mu(t) a, b\rangle}{(t - \overline{\lambda})(t - z)}, \quad \forall a, b \in \mathbb{C}^n;$$ (vi) $$\mathscr{H}(V) = \left\{\frac{1}{\pi i} \int \frac{d \mu(t) f(t)}{t - z}: f \in L^2_{\mathbb{C}^n}(\mu)\right\}$$ and the Cauchy transform takes $L_{\mathbb{C}^n}^{2}(\mu) \to \mathscr{H}(V)$ in a unitary way. What is vector-valued $L^{2}_{\mathbb{C}^n}(\mu)$? If $$f = \sum_{j} c_j \chi_{E_{j}}, \quad c_{j} \in \mathbb{C}^n,$$ is a simple function, where $\chi_{E_j}$ is a scalar-valued characteristic function on $\mathbb{R}$, define $$\langle f, f \rangle_{L^{2}_{\mathbb{C}^n}(\mu)} := \sum_{j} \langle \mu(E_j) c_j, c_j\rangle_{\mathbb{C}^n}.$$ Now complete this to get an $L^2_{\mathbb{C}^n}(\mu)$ space. See \cite{MR0190760} for more on matrix and operator-valued measures. In summary, we have the following: \begin{Corollary} Suppose $T \in \mathcal{S}_{n}(\mathcal{H})$, $n \in \mathbb{N}$. Then there is a positive matrix-valued measure $\mu$ on $\mathbb{R}$ satisfying the conditions above and such that $T$ is unitarily equivalent to $M_{\mu}$, multiplication by the independent variable with domain $$\mathscr{D}(M_{\mu}) = \left\{f \in L^2_{\mathbb{C}^n}(\mu): x f \in L^2_{\mathbb{C}^n}(\mu), \int d\mu(t) f(t) = 0\right\}.$$ \end{Corollary} \begin{Remark} \begin{enumerate} \item One can also prove this corollary by using the spectral theorem for a self-adjoint extension of $T$. See \cite{MR1821917} for details. \item When $n = \infty$, identifying $\mathscr{H}(V)$ as a vector-valued $L^2$-type space becomes more difficult due to some convergence issues. However, in certain circumstances, e.g., when $\mu(E)$ is a trace-class operator for every Borel set $E$, one can identify $\mathscr{H}(V)$ as an $L^2$-type space. This is worked out carefully in \cite{MR0154132}. \end{enumerate} \end{Remark} The function $\Omega$ in \eqref{Omega} can be replaced by $$(I + A V(z))(I - A V(z))^{-1},$$ where $A \in U(n)$, the $n \times n$ unitary matrices, and an analogous result holds but with a positive $M_{n \times n}$-valued measure $\mu_{A}$. That is to say, $T$ is unitarily equivalent to $M_{\mu_{A}}$, the densely defined multiplication by the independent variable on $L^2_{\mathbb{C}^n}(\mu_A)$. The family of measures $\{\mu_{A}: A \in U(n)\}$ is often called the family of \emph{Clark measures} \cite{CRM, Elliott, Martin-uni, Poltoratskii, Sarason-dB} corresponding to the function $V$ and has many fascinating properties. We will not go into the details here but one can show the following. \begin{Theorem} For $T_1 \in \mathcal{S}_{n}(\mathcal{H}_1), T_2 \in \mathcal{S}_n(\mathcal{H}_2)$ with corresponding Livsic functions $V_1, V_2$, we have that $T_1 \cong T_2$ if and only if the associated families of Clark measures are the same.
\end{Theorem} For a positive $M_{n \times n}$-valued measure $\mu$ satisfying the properties discussed above along with $\mu(\mathbb{R}) = \infty$, one can use the Stieltjes inversion formula \cite{DB} to produce a $V$ in the closed unit ball of $H^{\infty}_{\mathbb{C}^n}(\mathbb{C}_{+})$ ($\mathbb{C}^n$-valued bounded analytic functions on $\mathbb{C}_{+}$) such that $\mu$ belongs to the Clark family of measures corresponding to $V$. Moreover $V$ will be the Livsic function corresponding to $M_{\mu}$. This tells us the following: \begin{Corollary} For positive $M_{n \times n}$-valued measures $\mu, \nu$ above we have that $M_{\mu} \cong M_{\nu}$ if and only if $\mu, \nu$ belong to the same Clark family corresponding to some $V$ in the unit ball of $H^{\infty}_{\mathbb{C}^n}(\mathbb{C}_{+})$. \end{Corollary} We will point out that determining when $\mu, \nu$ belong to the same Clark family seems to be a difficult problem. \section{deBranges-Rovnyak spaces} \label{section:dBR} In this final section, we will show that when $V$, the Livsic function for $T \in \mathcal{S}_{n}(\mathcal{H})$, $n < \infty$, is an extreme function for $\mathscr{B}_{\mathbb{C}^n}$, the closed unit ball in $H^{\infty}_{\mathbb{C}^n}(\mathbb{C}_{+})$, then $M^{V}$ on the Herglotz space $\mathscr{H}(V)$ is unitarily equivalent to multiplication by the independent variable on a vector-valued deBranges-Rovnyak space. In the examples we covered, differentiation operators, Sturm-Liouville operators, Toeplitz operators, etc., we will, through the Livsic functions we computed earlier, connect these operators to multiplication operators on these deBranges-Rovnyak spaces. Along the way, we will show an interesting property of the Livsic function. Compare the formula \eqref{dbrkernel} for the reproducing kernels of the de Branges-Rovnyak space $\mathscr{K}(V)$ with the formulas for the reproducing kernels of the representation space $\mathcal{H} (\Gamma)$ as given in Theorem \ref{T-main-vector-kernel}, \[ K_w (z) = \Phi (z) \left( \frac{ I - V(z) V(w ) ^* }{1 - \overline{b(w)}b(z)} \right) \Phi (w) ^* \] for any $w, z \in \mathbb{C} \setminus \mathbb{R}$, where \[ \Phi (z) = K_i (z) K_i (i) ^{-1/2 }, \] and $K_i (z) = \Gamma (z) ^* \Gamma ( i)$. Now let $$\mathcal{H} (\Gamma) _+ := \bigvee _{\lambda \in \mathbb{C} _+ } K _\lambda \mathbb{C}^n \subset \mathcal{H} (\Gamma ).$$ Similarly let $$\mathscr{H} (V) _+ := \bigvee_{\lambda \in \mathbb{C}_{+}} K_{\lambda}^{V} \mathbb{C}^n \subset \mathscr{H}(V).$$ It follows from Section \ref{section:Herglotz} that multiplication by $$W(z) := \sqrt{\pi} (z+i) \Phi (z) ( \Omega(z) + I ) ^{-1}$$ is an isometry of $\mathscr{H} (V)$ onto $\mathcal{H} (\Gamma )$ which takes $\mathscr{H} (V) _+$ onto $\mathcal{H} (\Gamma) _+$. This next theorem shows that there is also a natural isometric multiplier from $\mathscr{H} (V) _+$ onto $\mathscr{K}(V)$. \begin{Theorem} Multiplication by $ U(z) = \sqrt{\pi} (z+i) \Phi (z)$ is an isometry of $\mathscr{K}(V)$ onto $\mathcal{H} (\Gamma) _+$, and hence $Q := \frac{1}{2} (I - V)$ is an isometric multiplier of $\mathscr{H} (V ) _+ $ onto $\mathscr{K}(V)$.
\label{multiplier} \end{Theorem} \begin{proof} Since $$ (I + \Omega) ^{-1} = \frac{I +V}{2} $$ we see that if we can show that $U$ is an isometric multiplier of $\mathscr{K}(V)$ onto $\mathcal{H} (\Gamma ) _+$, then since $$W(z) = \sqrt{\pi} (z+i) \Phi (z) (\Omega(z) + I) ^{-1}$$ is an isometric multiplier of $\mathscr{H} (V) _+ $ onto $\mathcal{H} (\Gamma ) _+,$ it will follow that $$W U^{-1} = \frac{I +V}{2} = Q $$ is an isometric multiplier of $\mathscr{H} (V) _+$ onto $\mathscr{K}(V)$. To see that $U$ is an isometric multiplier from $\mathscr{K}(V)$ onto $\mathcal{H} (\Gamma ) _+$, it suffices to verify, as discussed in Section \ref{section:Herglotz}, that $$K _\lambda (z) = U(z) \Delta ^V _\lambda (z) U (\lambda) ^*,$$ where $\Delta _\lambda ^V $ and $K_\lambda$ are as above. It is indeed easy to check that \begin{eqnarray} K_\lambda (z) & = & \Phi (z) \left( \frac{ I - V(z) V(\lambda) ^*}{1-\overline{b(\lambda)} b(z)} \right) \Phi (\lambda) ^* \nonumber \\ & = & \sqrt{\pi} (z+i) \Phi (z) \left( \frac{i}{2\pi} \frac{ I - V(z) V(\lambda) ^*}{z -\overline{\lambda}} \right) \left( \sqrt{\pi} (\lambda +i ) \Phi (\lambda) \right) ^* \nonumber \\ & =& U(z) \Delta _\lambda ^V (z) U(\lambda) ^* .\end{eqnarray} This proves the claim. \end{proof} It can be shown \cite{Martin-uni} that if $V$ (the Livsic characteristic function) is an extreme point of $\mathscr{B}_{\mathbb{C}^n}$, then $$\mathscr{H} (V) _+ = \mathscr{H} (V),$$ so that $Q$ is an isometric multiplier from $\mathscr{H} (V) $ onto the de Branges-Rovnyak space $\mathscr{K}(V)$. More precisely, as was discussed in \cite[Section 4.3]{Martin-uni}, the Helson-Lowdenslager generalization of Szego's theorem \cite[Theorem 8]{HelsonIII} allows us to characterize the extreme points of $\mathscr{B}_{\mathbb{C}^n}$ as follows. \begin{Theorem} Given $V \in \mathscr{B}_{\mathbb{C}^n}$, the following are equivalent: \begin{enumerate} \item $V$ is an extreme point. \item $$ \int _{-\infty} ^\infty \mathrm{tr} \left( \log (I - | V (x) | ) \right) \frac{1}{1+x^2} dx = - \infty.$$ \item $\mathscr{H} (V) _+ = \mathscr{H} (V)$. \end{enumerate} \end{Theorem} \begin{Remark} \begin{enumerate} \item The theorem above is actually a translation of the results of \cite[Section 4.3]{Martin-uni}, which were originally stated for contractive matrix analytic functions on the unit disc, to the setting of the upper half-plane. \item It follows that if $n < \infty $ and $V$ is an extreme point, then $Q = \frac{1}{2} (I - V)$ is an isometric multiplier of the Herglotz space $\mathscr{H} (V ) = \mathscr{H} (V) _+ $ onto the de Branges-Rovnyak space $\mathscr{K}(V)$. While this fact may still hold in the case where $n=\infty$, our only known proof of the implication $(1) \Rightarrow (3)$ in the above theorem uses the condition $(2)$, and it is not clear how to formulate $(2)$ in the case where $n=\infty$. Moreover the proof that $(2) \Rightarrow (3)$ uses the Helson-Lowdenslager generalization of Szego's theorem, and it is not immediately clear whether there is an analogue of this theorem in the case where $n=\infty$, or whether there is a way to directly prove the equivalence $(1) \Leftrightarrow (3)$. \end{enumerate} \end{Remark} There is a nice corollary to this result, combined with Theorem \ref{Martin-Z}, which applies, in particular, to the operators mentioned throughout this paper: differentiation, double differentiation, Sturm-Liouville, Toeplitz, etc. For these operators we have the following.
\begin{Corollary} \label{Mz-DR} If $T \in \mathcal{S}_{n}(\mathcal{H}), n < \infty$, and its Livsic characteristic function $V$ is an extreme point of $\mathscr{B}_{\mathbb{C}^n}$, then $T$ is unitarily equivalent to $Z_{V}$, multiplication by the independent variable in the deBranges-Rovnyak space $\mathscr{K}(V)$. Furthermore, $(V \circ b^{-1}) \vec{k}$ does not have an angular derivative at $z = 1$ for any $\vec{k} \in \mathbb{C}^n$. \end{Corollary} \end{document}
\begin{document} \title{Large deviations for the interchange process on the interval and incompressible flows} \begin{abstract} We use the framework of permuton processes to show that large deviations of the interchange process are controlled by the Dirichlet energy. This establishes a rigorous connection between processes of permutations and one-dimensional incompressible Euler equations. While our large deviation upper bound is valid in general, the lower bound applies to processes corresponding to incompressible flows, studied in this context by Brenier. These results imply the Archimedean limit for relaxed sorting networks and allow us to asymptotically count such networks. \end{abstract} \tableofcontents \begin{section}{Introduction} In this paper we investigate the large deviation principle for a model of random permutations called the one-dimensional \emph{interchange process}. The process can be roughly described as follows. We put $N$ particles, labelled from $1$ to $N$, on a line $\{1,\ldots, N\}$ and at each time step perform the following procedure: an edge is chosen at random and adjacent particles are swapped. By comparing the particles' initial positions with their positions after given time $t$ we obtain a random permutation from the symmetric group $\mathcal{S}_N$ on $N$ elements. The interchange process on the interval (whose discrete time analog is known as the adjacent transposition shuffle) and on more general graphs has attracted considerable attention in probability theory, for example with regard to the analysis of mixing times. It is natural to ask whether, after proper rescaling and as $N \to \infty$, the permutations obtained in the interchange process converge in distribution to an appropriately defined limiting process. Such limits have been recently studied (\cite{limits}, \cite{mustazee}) under the name of permutons and permuton processes. These notions have been inspired by the theory of graph limits (\cite{lovasz}), where the analogous notion of a graphon as a limit of dense graphs appears. A \emph{permuton} is a Borel probability measure on $[0,1]^2$ with uniform marginals on each coordinate. A sequence of permutations $\sigma^N \in \mathcal{S}_N$ is said to converge to a permuton $\mu$ as $N \to \infty$ if the corresponding empirical measures \[ \frac{1}{N}\sum_{i=1}^N \delta_{\left(\frac{i}{N},\frac{\sigma^N(i)}{N}\right)} \] converge weakly to $\mu$. A \emph{permuton process} is a stochastic process $X = (X_{t}, 0 \leq t \leq T)$ taking values in $[0,1]$, with continuous sample paths and having uniform marginals at each time $t \in [0,T]$. A permutation-valued path, such as a sample from the interchange process, is said to converge to $X$ if the trajectory of a randomly chosen particle converges in distribution to $X$. Depending on the time scale considered, one observes different asymptotic structure in the permutations arising from the interchange process. If the average number of all swaps is greater than $\sim N^3 \log N$, the process will be close to its stationary distribution (\cite{aldous}, \cite{lacoin}), which is the uniform distribution on $\mathcal{S}_N$. For $\sim N^3$ swaps each particle has displacement of order $N$ and the whole process converges, in the sense of permuton processes, to a Brownian motion on $[0,1]$ (\cite{mustazee2}). Here we will be interested in yet shorter time scales, corresponding to $\sim N^{2+\varepsilon}$ swaps for fixed $\varepsilon \in (0,1)$. 
In this scaling each particle has displacement $\ll N$, so the resulting permutations will be close to the identity permutation. Nevertheless, in the spirit of large deviation theory one can still ask questions about rare events, for example ``what is the probability that starting from the identity permutation we are close to a fixed permuton after time $t$?'' or, more generally, ``what is the probability that the interchange process behaves like a given permuton process $X$?''. We expect such probabilities to decay exponentially in $N^\gamma$ for some $\gamma > 0$, with the decay rate given by a \emph{rate function} on the space of permuton processes. The large deviation principle we obtain in this paper can be informally summarized as follows: for a class of permuton processes solving a natural energy minimization problem, the probability $\mathbb{P}(A)$ that the interchange process is close in distribution to a process $X$ satisfies asymptotically \begin{equation}\label{eq:informal-main-result} \frac{1}{N^{\gamma}} \log \mathbb{P}(A) \approx - I(X), \end{equation} where $\gamma = 2 - \varepsilon$ and $I(X)$ is the \emph{energy} of $X$, defined as the expected Dirichlet energy of a path sampled from $X$. Apart from a purely probabilistic interest, the result is relevant to two other seemingly unrelated subjects, namely the study of Euler equations in fluid dynamics and the study of sorting networks in combinatorics. Let us first state the energy minimization problem in question, which is as follows -- given a permuton $\mu$, find \begin{equation}\label{eq:energy-min} \inf\limits_{(X_0, X_T) \sim \mu} I(X), \end{equation} where the infimum is over all permuton processes $X$ such that $(X_0, X_T)$ has distribution $\mu$. As it happens, such energy-minimizing processes have been considered in fluid dynamics in the study of \emph{incompressible Euler equations}, under the name of \emph{generalized incompressible flows}. This connection is discussed in more detail in Section \ref{sec:incompressible-flows}. Very roughly speaking, Euler equations in a domain $D \subseteq \mathbb{R}^d$ describe motion of fluid particles whose trajectories satisfy the equation \begin{equation}\label{eq:euler-informal} x''(t) = - \nabla p (t, x) \end{equation} for some function $p$ called the \emph{pressure}. The incompressibility constraint means that the flow defined by the equation has to be volume-preserving. Classical, smooth solutions to Euler equations correspond to flows which are diffeomorphisms of $D$. Generalized incompressible flows are a stochastic variant of such solutions in which each particle can choose its initial velocity independently from a given probability distribution. It turns out that, under additional regularity assumptions, such generalized solutions to Euler equations \eqref{eq:euler-informal} for $D = [0,1]$ correspond exactly to permuton processes solving the energy minimization problem \eqref{eq:energy-min} for some permuton $\mu$. Our large deviation result \eqref{eq:informal-main-result} is valid precisely for such energy-minimizing permuton processes (again, under certain regularity assumptions). As it happens, the original motivation for our work came from a different direction, namely from the study of \emph{sorting networks} in combinatorics. This connection is explained in more detail below. Using our large deviation principle \eqref{eq:informal-main-result}, we are able to prove novel results on a variant of the model we call \emph{relaxed sorting networks}. 
Thus the large deviation principle presented in this paper provides a rather unexpected link between problems in combinatorics (sorting networks) and fluid dynamics (incompressible Euler equations), along with a quite general framework for analyzing permuton processes which we hope will find further applications. \paragraph{Main results.} Let us now state our main results more formally, still with complete definitions and discussion of assumptions deferred until Sections \ref{sec:stationarity} and \ref{sec:ode-part}. Let $\mathcal{D} = \mathcal{D}([0,T], [0,1])$ be the space of c\`adl\`ag paths from $[0,T]$ to $[0,1]$ and let $\mathcal{M}(\mathcal{D})$ be the space of Borel probability measures on $\mathcal{D}$. Let $\mathcal{P} \subseteq \mathcal{M}(\mathcal{D})$ denote the space of permuton processes and their approximations by permutation-valued processes. For $\pi \in \mathcal{M}(\mathcal{D})$ by $I(\pi)$ we will denote the expected Dirichlet energy of the process $X$ whose distribution is $\pi$. Let $\eta^N$ denote the interchange process in continuous time on the interval $\{1, \ldots, N\}$, speeded up by $N^{\alpha}$ for some $\alpha \in (1,2)$. Let $\gamma = 3 - \alpha$. We have the following large deviation principle \begin{theoremalph}[Large deviation lower bound]\label{th:theorem-main-lower} Let $\mathbb{P}^{N}$ be the law of the interchange process $\eta^N$ and let $\mu^{\eta^{N}} \in \mathcal{M}(\mathcal{D})$ be the empirical distribution of its trajectories. Let $\pi$ be a permuton process which is a generalized solution to Euler equations \eqref{eq:gen-euler}. Provided $\pi$ satisfies Assumptions \eqref{as:main-assumptions}, for any open set $\mathcal{O} \subseteq \mathcal{P}$ such that $\pi \in \mathcal{O}$ we have \[ \liminf_{N \to \infty} N^{-\gamma} \log \mathbb{P}^{N}\left(\mu^{\eta^{N}} \in \mathcal{O}\right) \geq - I(\pi). \] \end{theoremalph} \begin{theoremalph}[Large deviation upper bound]\label{th:theorem-main-upper} Let $\mathbb{P}^{N}$ be the law of the interchange process $\eta^{N}$ and let $\mu^{\eta^{N}} \in \mathcal{M}(\mathcal{D})$ be the empirical distribution of its trajectories. For any closed set $\mathcal{C} \subseteq \mathcal{P}$ we have \[ \limsup_{N \to \infty} N^{-\gamma} \log \mathbb{P}^{N}\left(\mu^{\eta^{N}} \in \mathcal{C} \right) \leq - \inf\limits_{\pi \in \mathcal{C}} I(\pi). \] \end{theoremalph} The results are referred to as respectively Theorem \ref{thm:lower-bound-for-minimizers} and Theorem \ref{th:upper-bound} in the following sections. Here the large deviation upper bound is valid for \emph{all} permuton processes, without any additional assumptions. On the other hand, in the proof of the lower bound we exploit rather heavily the special structure possessed by generalized solutions to Euler equations. We expect the lower bound to hold for arbitrary permuton processes as well, since one can locally approximate any permuton process by energy minimizers. However, for our techniques to apply one would need to understand in more detail regularity of the associated velocity distributions and pressure functions, which falls outside the scope of our work. The reader may notice that the rate function, which is the energy $I(\pi)$, is similar to the one appearing in the analysis of large deviations for independent random walks. In fact, the crux of our proofs lies in proving that particles in the interchange process and its perturbations are in a certain sense almost independent. 
The main techniques used here come from the field of interacting particle systems. A comprehensive introduction to the subject can be found in \cite{kipnis}. The novelty in our approach is in applying tools usually used to study hydrodynamic limits to a setting which is in some respects more involved, since the limiting objects we consider, permuton processes, are stochastic processes instead of deterministic objects like solutions of PDEs appearing, for example, for exclusion processes. \paragraph{Sorting networks and the sine curve process.} The large deviation bounds can be applied to obtain results on a model related to sorting networks. A \emph{sorting network} on $N$ elements is a sequence of $M = \binom{N}{2}$ transpositions $(\tau_{1}$, $\tau_{2}$, $\ldots$, $\tau_{M})$ such that each $\tau_{i}$ is a transposition of adjacent elements and $\tau_{M} \circ \ldots \circ \tau_{1} = \mathrm{rev}_{N}$, where $\mathrm{rev}_{N} = (N \,\ldots \,2 \, 1)$ denotes the \emph{reverse permutation}. It is easy to see that any sequence of adjacent transpositions giving the reverse permutation must have length at least $\binom{N}{2}$, hence sorting networks can be thought of as shortest paths joining the identity permutation and the reverse permutation in the Cayley graph of $\mathcal{S}_{N}$ generated by adjacent transpositions. A \emph{random sorting network} is obtained by sampling a sorting network uniformly at random among all sorting networks on $N$ elements. Let us work in continuous time, assuming each transposition $\tau_i$ happens at time $\frac{i}{M+1}$. It was conjectured in \cite{sorting} and recently proved in \cite{duncan} that the trajectory of a randomly chosen particle in a random sorting network has a remarkable limiting behavior as $N \to \infty$, namely it converges in the sense of permuton processes to a deterministic limit, which is the sine curve process described below. Here it will be more natural to consider the square $[-1,1]^2$ and processes with values in $[-1,1]$ instead of $[0,1]$ (with the obvious changes in the notion of a permuton and a permuton process which we leave implicit). The \emph{Archimedean law} is the measure on $[-1,1]^2$ obtained by projecting the normalized surface area of a $2$-dimensional half-sphere to the plane or, equivalently, the measure supported inside the unit disk $\{x^2 + y^2 \leq 1\}$ whose density is given by $1 /(2\pi \sqrt{1 - x^2 - y^2}) \, dx \, dy$. Observe that thanks to the well-known plank property each strip $[a,b] \times [-1,1]$ has measure proportional to $b-a$, hence the Archimedean law defines a permuton. The \emph{sine curve process} is the permuton process $\mathcal{A} = (\mathcal{A}_{t}, 0 \leq t \leq 1)$ with the following distribution -- we sample $(X,Y)$ from the Archimedean law and then follow the path \[ \mathcal{A}_t = X \cos \pi t + Y \sin \pi t. \] One can directly check that $\mathcal{A}_t$ has uniform distribution on $[-1,1]$ at each time $t$, hence $\mathcal{A}_t$ indeed defines a permuton process. Observe that $(\mathcal{A}_{0}, \mathcal{A}_{0}) = (X,X)$ and $(\mathcal{A}_{0}, \mathcal{A}_{1}) = (X, -X)$, thus the sine curve process defines a path between the identity permuton and the \emph{reverse permuton}. An equivalent way of describing the sine curve process consists of choosing a pair $(R, \theta)$ at random, where the angle $\theta$ is uniform on $[0,2\pi]$ and $R$ has density $r / \sqrt{1-r^2} \, dr$ on $[0,1]$, and following the path $\mathcal{A}_t = R \cos(\pi t + \theta)$.
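To check that the two descriptions agree (a short computation, using only the angle addition formula), note that \[ R \cos (\pi t + \theta) = (R \cos \theta) \cos \pi t + (- R \sin \theta) \sin \pi t, \] so the second description reduces to the first with $(X, Y) = (R \cos \theta, - R \sin \theta)$; in particular, the initial velocity of such a path equals $-\pi R \sin \theta = \pi Y$.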
Thus the trajectories of this process are sine curves with random initial phase and amplitude -- the path of a random particle is determined by its initial position $X$ and velocity $V$, given by $(X,V) = (R \cos \theta, -\pi R \sin \theta)$. Recall now the energy minimization problem \eqref{eq:minimize-permuton}. The sine curve process is the unique minimizer of energy among all permuton processes joining the identity to the reverse permuton (\cite{brenier}, see also \cite{mustazee}), with the minimal energy equal to $I(\mathcal{A}) = \frac{\pi^2}{6}$. It is one of the few examples where the solution to the problem \eqref{eq:minimize-permuton} can be explicitly calculated for a target permuton $\mu$. It also seems to play a special role in constructing generalized incompressible flows which are non-unique solutions to the energy minimization problem in dimensions greater than one, see, e.g., \cite{bernot-figalli-santambrogio}. The sine curve process is a generalized solution to Euler equations with the pressure function $p(x) = \frac{x^2}{2}$, which unsurprisingly leads to each particle satisfying the harmonic oscillator equation $x'' = -x$. The reader may check that the sine curve process satisfies the Assumptions \eqref{as:main-assumptions} (with the velocity distribution being time-independent), thus providing a non-trivial and explicit example for which our large deviation bounds hold. To the best of our knowledge the connection between sorting networks on the one hand and Euler equations on the other hand was first observed in the literature in \cite{duncan}. Let us now describe the results on relaxed sorting networks. Fix $\delta > 0$ and $N \geq 1$. We define a $\delta$-\emph{relaxed sorting network} of length $M$ on $N$ elements to be a sequence of $M$ adjacent transpositions $(\tau_1, \ldots, \tau_M)$ such that the permutation $\sigma_M = \tau_M \circ \ldots \circ \tau_1$ is $\delta$-close to the reverse permutation $\mathrm{rev} = (N \, \ldots \, 2 \, 1)$ in the Wasserstein distance on the space $\mathcal{M}([0,1]^2)$ of Borel probability measures on $[0,1]^2$ (see Section \ref{sec:stationarity} for the definition). For fixed $\kappa \in (0,1)$ we define a \emph{random} $\delta$-\emph{relaxed sorting network} on $N$ elements by choosing $M$ from a Poisson distribution with mean $\lfloor \frac{1}{2} N^{1 + \kappa}(N-1) \rfloor$ and then sampling a $\delta$-relaxed sorting network of length $M$ uniformly at random. Our first result is that the analog of the sorting network conjecture holds for relaxed sorting networks, that is, in a random relaxed sorting network the trajectory of a random particle is with high probability close in distribution to the sine curve process. Precisely, we have the following \begin{theorem}\label{th:lln-for-relaxed-networks} Fix $\kappa \in (0,1)$ and let $\pi^{N}_{\delta}$ denote the empirical distribution of the permutation process (as defined in \eqref{eq:empirical-eta}) associated to a random $\delta$-relaxed sorting network on $N$ elements. Let $\pi_{\mathcal{A}}$ denote the distribution of the sine curve process. Given any $\varepsilon > 0$ we have for all sufficiently small $\delta > 0$ \[ \lim\limits_{N \to \infty} \mathbb{P}^N \left( \pi^{N}_{\delta} \in B(\pi_{\mathcal{A}}, \varepsilon) \right) = 1, \] where $B(\pi_{\mathcal{A}}, \varepsilon)$ is the $\varepsilon$-ball in the Wasserstein distance on $\mathcal{P}$.
\end{theorem} Here for consistency of notation we assume that the sine curve process is rescaled so that it is supported on $[0,1]$ rather than $[-1,1]$. The second result is more combinatorial and concerns the problem of enumerating sorting networks. A remarkable formula due to Stanley (\cite{stanley}) says that the number of all sorting networks on $N$ elements is equal to \[ \frac{\binom{N}{2}!}{1^{N-1} 3^{N-2} \ldots (2N-3)^{1}}, \] which is asymptotic to $\exp\left\{\frac{N^2}{2}\log N + (\frac{1}{4} - \log 2)N^2 + O(N \log N) \right\}$. For relaxed sorting networks we have the following asymptotic estimate \begin{theorem}\label{th:asymptotics-relaxed} For any $\kappa \in (0,1)$ let $\mathcal{S}^{N}_{\kappa, \delta}$ be the number of $\delta$-relaxed sorting networks on $N$ elements of length $M = \lfloor\frac{1}{2} N^{1+\kappa}(N-1)\rfloor$. We have \[ \mathcal{S}^{N}_{\kappa,\delta} = \exp \left\{ \frac{1}{2} N^{1 + \kappa} (N-1)\log (N-1) - \left(\frac{\pi^2}{6} + \varepsilon^{N}_{\delta}\right)N^{2 - \kappa}\right\}, \] where $\varepsilon^{N}_{\delta}$ satisfies $\lim\limits_{\delta \to 0} \lim\limits_{N \to \infty} \varepsilon^{N}_{\delta} = 0$. \end{theorem} The asymptotics is analogous to that of Stanley's formula -- the first term in the exponent corresponds simply to the number of all paths of required length, and, crucially, the factor $\frac{\pi^2}{6}$ corresponds to the energy of the sine curve process. The proofs of Theorem \ref{th:lln-for-relaxed-networks} and Theorem \ref{th:asymptotics-relaxed} are given in Section \ref{sec:asymptotics}. It would be an interesting problem to obtain analogous results for relaxed sorting networks reaching \emph{exactly} the reverse permutation, not only being $\delta$-close in the permuton topology. This case is not covered by the results of this paper, since the set of permuton processes reaching exactly the reverse permuton is not open, hence the lower bound of Theorem \ref{th:theorem-main-lower} does not apply. \paragraph{Acknowledgments.} We would like to thank Maxim Arnold, Duncan Dauvergne, Boris Khesin and Mustazee Rahman for interesting discussions. MK was supported by the National Science Centre, Poland, grant no. 2019/32/C/ST1/00525. BV was supported by the Canada Research Chair program and the NSERC Discovery Accelerator grant. \end{section} \begin{section}{Preliminaries}\label{sec:preliminaries} \begin{subsection}{Permutons and stochastic processes}\label{sec:stationarity} \paragraph{Permutons.} Consider the space $\mathcal{M}([0,1]^2)$ of all Borel probability measures on the unit square $[0,1]^2$, endowed with the weak topology. A \emph{permuton} is a probability measure $\mu \in \mathcal{M}([0,1]^2)$ with uniform marginals. In other words, $\mu$ is the joint distribution of a pair of random variables $(X,Y)$, with $X$, $Y$ taking values in $[0,1]$ and having marginal distribution $X, Y \sim \mathcal{U}[0,1]$. We will sometimes call the pair $(X,Y)$ itself a permuton if there is no risk of ambiguity. A few simple examples of permutons are the \emph{identity permuton} $(X,X)$, the \emph{uniform permuton} (the distribution of two independent copies of $X$, which is the uniform measure on the square) or the \emph{reverse permuton} $(X, 1 - X)$. Permutons can be thought of as continuous limits of permutations in the following sense. Let $\mathcal{S}_{N}$ be the symmetric group on $N$ elements and let $\sigma \in \mathcal{S}_{N}$. 
We associate to $\sigma$ its \emph{empirical measure} \begin{equation}\label{eq:permutation-empirical} \mu_{\sigma} = \frac{1}{N} \sum\limits_{i=1}^{N} \delta_{\left(\frac{i}{N}, \, \frac{\sigma(i)}{N}\right)}, \end{equation} which is an element of $\mathcal{M}([0,1]^2)$. By a slight abuse of terminology we will sometimes identify $\sigma$ with $\mu_\sigma$. Since every such measure has uniform marginals on $\left\{\frac{1}{N}, \frac{2}{N}, \ldots, 1 \right\}$, it is not difficult to see that if a sequence of empirical measures converges weakly, the limiting measure will be a permuton. Conversely, every permuton can be realized as a limit of finite permutations, in the sense of weak convergence of empirical measures (see \cite{limits}). We will consider $\mathcal{M}([0,1]^2)$ endowed with the Wasserstein distance corresponding to the Euclidean metric on $[0,1]^2$, under which the distance of measures $\mu$ and $\nu$ is given by \[ d_{\mathcal{W}}(\mu, \nu) = \inf\limits_{\left\{(X,Y), (X',Y')\right\}} \mathbb{E} \left[ \sqrt{(X - X')^2 + (Y-Y')^2} \right], \] where the infimum is over all couplings of $(X,Y)$ and $(X',Y')$ such that $(X,Y) \sim \mu$, $(X',Y') \sim \nu$. \paragraph{The path space $\mathcal{D}$ and stochastic processes.} A natural setting for analyzing trajectories of particles in random permutation sequences is to consider $\mathcal{D} = \mathcal{D}([0,T], [0,1])$, the space of all c\`adl\`ag paths from $[0,T]$ to $[0,1]$. We endow it with the standard Skorokhod topology, metrized by a metric $\rho$ under which $\mathcal{D}$ is separable and complete. By $\mathcal{M}(\mathcal{D})$ we will denote the space of all Borel probability measures on $\mathcal{D}$, endowed with the weak topology. It will be convenient to metrize $\mathcal{M}(\mathcal{D})$ by the Wasserstein distance, under which the distance between measures $\mu$ and $\nu$ is given by \[ d_{\mathcal{W}}(\mu, \nu) = \inf\limits_{(X,Y)} \mathbb{E} \left[ \rho(X,Y) \right], \] where the infimum is over all couplings $(X,Y)$ such that $X \sim \mu$, $Y \sim \nu$. We will also make use of the Wasserstein distance associated to the supremum norm, given by \[ d_{\mathcal{W}}^{sup}(\mu, \nu) = \inf\limits_{(X,Y)} \mathbb{E} \left[ \norm{X - Y}_{sup} \right], \] where $\norm{\cdot}_{sup}$ is the supremum norm on $\mathcal{D}$ and again the infimum is over all couplings $(X,Y)$ as above. Given two times $0 \leq s \leq t \leq T$ and a stochastic process $X = (X_{t}, 0 \leq t \leq T)$ with distribution $\mu \in \mathcal{M}(\mathcal{D})$, by $\mu_{s, t} \in \mathcal{M}([0,1]^2)$ we will denote the distribution of the marginal $(X_{s}, X_{t})$. Note that the projection $\mu \mapsto \mu_{s, t}$ is continuous as a map from $\mathcal{M}(\mathcal{D})$ to $\mathcal{M}([0,1]^2)$ as long as paths $X \sim \mu$ sampled from $\mu$ have almost surely no jumps at times $s$ and $t$. We will sometimes implicitly identify the stochastic process with its distribution when there is no risk of misunderstanding. \paragraph{Permutation processes and permuton processes.} Consider a permutation-valued path $\eta^{N} = (\eta^{N}_{t}, 0 \leq t \leq T)$, with $\eta^{N}_{t}$ taking values in the symmetric group $\mathcal{S}_{N}$. We will always assume that $\eta^N$ is c\`{a}dl\`{a}g as a map from $[0,T]$ to $\mathcal{S}_N$. Let $\eta^{N}(i) = \left( \eta^{N}_{t}(i), 0 \leq t \leq T \right)$ be the trajectory of $i$ under $\eta^{N}$ and let $X^{\eta^{N}}(i) = \frac{1}{N}\eta^{N}(i)$ be the rescaled trajectory. 
We define the empirical measure \begin{equation}\label{eq:empirical-eta} \mu^{\eta^{N}} = \frac{1}{N} \sum\limits_{i=1}^{N} \delta_{X^{\eta^{N}}(i)}, \end{equation} where $\delta_{X^{\eta^{N}}(i)}$ is the delta measure concentrated on the trajectory $X^{\eta^{N}}(i)$. The associated {\it permutation process} $X^{\eta^{N}} = (X^{\eta^{N}}_{t}, 0 \leq t \leq T)$ is obtained by choosing $i = 1, \ldots, N$ uniformly at random and following the path $X^{\eta^{N}}(i)$. In other words, $X^{\eta^{N}}$ is a random path with values in $[0,1]$ whose distribution is $\mu^{\eta^{N}} \in \mathcal{M}(\mathcal{D})$. If $\eta^N$ is fixed, the only randomness here comes from the random choice of the particle $i$. Note that at each time $t$ the marginal distribution of $X_{t}^{\eta^{N}}$ is uniform on $\left\{ \frac{1}{N}, \frac{2}{N}, \ldots, 1 \right\}$. A \emph{permuton process} is a stochastic process $X = (X_{t}, 0 \leq t \leq T)$ taking values in $[0,1]$, with continuous sample paths and such that for every $t \in [0,T]$ the marginal $X_{t}$ is uniformly distributed on $[0,1]$. The name is justified by observing that if $\pi$ is the distribution of $X$, then for any fixed $s,t \in [0,T]$ the joint distribution $\pi_{s,t} \in \mathcal{M}([0,1]^2)$ of $(X_{s}, X_{t})$ defines a permuton. As explained in the next subsection, permuton processes arise naturally as limits of permutation processes defined above. Since every permutation process has marginals uniform on $\left\{ \frac{1}{N}, \frac{2}{N}, \ldots, 1 \right\}$, we will call it an \emph{approximate permuton process}. By $\mathcal{P}$ we will denote the space of all permuton processes and approximate permuton processes, treated as a subspace of $\mathcal{M}(\mathcal{D})$ (with the same topology and the metric $d_{\mathcal{W}}$). \paragraph{Random permutation and permuton processes.} A \emph{random permuton process} is a permuton process chosen from some probability distribution on the space of all permuton processes, i.e., a random variable $X$, defined on a probability space $\Omega$, such that $X(\omega)$ is a permuton process for $\omega \in \Omega$. By identifying the random variable with its distribution we can also think of a random permuton process as an element of $\mathcal{M}(\mathcal{P})$. In this setting, with the weak topology on $\mathcal{M}(\mathcal{P})$, one can consider convergence in distribution of random permuton processes $X_{n}$ to a (possibly also random) permuton process $X$. One can prove (see \cite{mustazee}) that if a sequence of random permutation processes $X^{\eta^N}$ converges in distribution, then the limit is a permuton process (in general also random). Of particular interest will be sequences of random permutation-valued paths $\eta^{N}$ (coming for example from the interchange process) such that the corresponding permutation processes $X^{\eta^N}$ converge in distribution to a \emph{deterministic} permuton process (for example the sine curve process described below). For any random permuton process $X$ we define its associated \emph{random particle process} $\bar{X} = \mathbb{E}_{\omega} X(\omega)$, which is a process with a deterministic distribution, obtained by first sampling a permuton process $X(\omega)$ and then sampling a random path according to $X(\omega)$. To elucidate the difference between random and deterministic permuton processes, consider a random permuton process $X$ and its associated random particle process $\bar{X}$.
If we sample an outcome $X(\omega)$ and then a path from $X(\omega)$, then obviously the distribution of paths will be the same as for $\bar{X}$. However, consider now sampling an outcome $X(\omega)$ and then sampling independently two paths from $X(\omega)$. The distribution of a pair of paths obtained in this way will not in general be the same as the distribution of two independent copies sampled from $\bar{X}$, since the paths might be correlated within the outcome $X(\omega)$. The following general lemma will be useful later for showing that limits of certain random permutation processes are in fact deterministic (\cite[Lemma 3]{mustazee2}): \begin{lemma}\label{lm:deterministic} Let $K$ be a compact metric space and let $\mu$ be a random probability measure on $K$, i.e., a random variable with values in $\mathcal{M}(K)$. Let $X$ and $Y$ be two independent samples from an outcome of $\mu$ and let $Z$ be a sample from an outcome of an independent copy of $\mu$. If $(X,Y)$, as a $K^2$-valued random variable, has the same distribution as $(X,Z)$, then $\mu$ is in fact deterministic, i.e., there exists $\nu \in \mathcal{M}(K)$ such that $\mu = \nu$ almost surely. \end{lemma} \paragraph{Energy.} Here we introduce several related notions of energy for paths, permutations, permutons and permuton processes. Given a path $\gamma : [0,T] \to [0,1]$ and a finite partition $\Pi = \{ 0 = t_{0} < t_{1} < \ldots < t_{k} = T \}$ we define the \emph{energy of $\gamma$ with respect to $\Pi$} as \begin{equation}\label{eq:energy-path-def} \mathcal{E}^{\Pi}(\gamma) = \frac{1}{2} \sum\limits_{i=1}^{k} \frac{| \gamma(t_{i}) - \gamma(t_{i-1}) |^2}{t_{i} - t_{i-1}}, \end{equation} and the \emph{energy of $\gamma$} as \begin{equation}\label{eq:energy-path-full} \mathcal{E}(\gamma) = \sup_{\Pi} \mathcal{E}^{\Pi}(\gamma), \end{equation} where the supremum is over all finite partitions $\Pi = \{ 0 = t_{0} < t_{1} < \ldots < t_{k} = T \}$. For a path which is not absolutely continuous the supremum is equal to $+\infty$. If a path $\gamma$ is differentiable, its energy is equal to \[ \frac{1}{2} \int\limits_{0}^{T} \dot{\gamma}(s)^2 \, ds. \] For a permutation $\sigma \in \mathcal{S}_N$ we define its energy as \begin{equation}\label{eq:permutation-energy} I(\sigma) = \frac{1}{2} \left( \frac{1}{N} \sum\limits_{i=1}^{N} \left( \frac{\sigma(i) - i}{N} \right)^2 \right). \end{equation} Likewise, for a permuton $\mu \in \mathcal{M}([0,1]^2)$ its energy is defined by \begin{equation}\label{eq:permuton-energy-def} I(\mu) = \frac{1}{2} \mathbb{E} |X-Y|^2, \end{equation} where the pair $(X, Y)$ has distribution $\mu$. If $\mu = \mu_{\sigma}$ is the empirical measure of a permutation $\sigma \in \mathcal{S}_N$, defined by \eqref{eq:permutation-empirical}, then we have $I(\mu_{\sigma}) = I(\sigma)$. Note also that $I = I(\mu)$ is a continuous function of $\mu$ in the weak topology on $\mathcal{M}([0,1]^2)$. Finally, we define the energy of a permuton process $\pi$ as \begin{equation}\label{eq:process-energy} I(\pi) = \mathbb{E}_{\gamma \sim \pi} \mathcal{E}(\gamma), \end{equation} where the expectation is over paths $\gamma$ sampled from $\pi$. We can extend this definition to any process $\pi \in \mathcal{M}(\mathcal{D})$ by adopting the convention that $I(\pi) = + \infty$ if paths sampled from $\pi$ are not absolutely continuous almost surely. The function $I$ will turn out to correspond to the rate function in large deviation bounds for random permuton processes.
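As a purely illustrative aside, the discrete energies above are straightforward to evaluate numerically. The following short Python sketch (ours; the function names are ad hoc and not used elsewhere in the paper) computes $\mathcal{E}^{\Pi}(\gamma)$ and $I(\sigma)$ and checks two simple facts: the reverse permutation has energy close to $1/6$, the energy of the reverse permuton $(X, 1-X)$, and dyadic refinements increase the discrete energy of a smooth path towards $\frac{1}{2}\int_0^T \dot{\gamma}(s)^2 \, ds$.
\begin{verbatim}
import numpy as np

def path_energy(gamma, partition):
    # E^Pi(gamma) = 1/2 * sum_i |gamma(t_i) - gamma(t_{i-1})|^2 / (t_i - t_{i-1})
    t = np.asarray(partition, dtype=float)
    g = np.array([gamma(s) for s in t])
    return 0.5 * np.sum(np.diff(g) ** 2 / np.diff(t))

def permutation_energy(sigma):
    # I(sigma) = 1/(2N) * sum_i ((sigma(i) - i) / N)^2 for sigma in S_N
    sigma = np.asarray(sigma, dtype=float)
    N = len(sigma)
    i = np.arange(1, N + 1)
    return 0.5 * np.mean(((sigma - i) / N) ** 2)

# The reverse permutation: I(sigma) is close to 1/6, the energy of (X, 1 - X).
print(permutation_energy(np.arange(1000, 0, -1)))
# Dyadic partitions: E^{Pi_n}(gamma) increases towards E(gamma) = pi^2/4
# for gamma(t) = sin(pi t) on [0, 1].
gamma = lambda s: np.sin(np.pi * s)
for n in range(1, 6):
    print(path_energy(gamma, np.linspace(0.0, 1.0, 2 ** n + 1)))
\end{verbatim}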
It can be checked that $I$ is lower semicontinuous (in the weak topology on $\mathcal{P}$) and its level sets $\{ \pi \in \mathcal{P} : I(\pi) \leq C\}$ are compact. We will also use the notation \begin{equation}\label{eq:def-energy-fin-dim} I^{\Pi}(\pi) = \mathbb{E}_{\gamma \sim \pi} \mathcal{E}^{\Pi}(\gamma) \end{equation} to denote the approximation of energy of $\pi$ associated to the finite partition $\Pi$. The following lemma will be useful in characterizing the large deviation rate function in terms of these approximations. \begin{lemma}\label{lm:approximate-energy} For any process $\pi \in \mathcal{M}(\mathcal{D})$ we have \[ I(\pi) = \sup\limits_{\Pi} I^{\Pi}(\pi), \] where the supremum is taken over all finite partitions $\Pi = \{ 0 = t_{0} < t_{1} < \ldots < t_{k} = T \}$. \end{lemma} \begin{proof} Let $\Pi_n = \left\{0 < \frac{T}{2^n} < \frac{2T}{2^n} < \ldots < T\right\}$, $n=0,1,2,\ldots$, be the sequence of dyadic partitions of $[0,T]$. It is elementary to show that if a path $\gamma$ is continuous, then $\mathcal{E}(\gamma) = \lim\limits_{n \to \infty} \mathcal{E}^{\Pi_n}(\gamma)$. Note that if $\Pi'$ is a refinement of $\Pi$, then we have $\mathcal{E}^{\Pi}(\gamma) \leq \mathcal{E}^{\Pi'}(\gamma)$, thus $\mathcal{E}^{\Pi_n}(\gamma) \to \mathcal{E}(\gamma)$ monotonically as $n \to \infty$. Now we apply the monotone convergence theorem to get the same convergence for the expectations $\mathbb{E}_{\gamma \sim \pi}\mathcal{E}^{\Pi_n}(\gamma)$. \end{proof} \paragraph{The interchange process.} The \emph{interchange process} on the interval $\{1, \ldots, N\}$ is a Markov process in continuous time defined in the following way. Consider particles labelled from $1$ to $N$ on a line with $N$ vertices. Each edge has an independent exponential clock that rings at rate $1$. Whenever a clock rings, the particles at the endpoints of the corresponding edge swap places. By comparing the initial position of each particle with its position after time $t$ we obtain a random permutation of $\{ 1, \ldots,N\}$. Formally, we define the state space of the process as consisting of permutations $\eta \in \mathcal{S}_N$, with the notation $ \eta = (x_1, \ldots, x_N)$ indicating that the particle with label $i$ is at the position $x_{i}$, or in other words, $x_i = \eta(i)$. The dynamics is given by the generator \begin{equation}\label{eq:unbiased-generator} (\mathcal{L} f)(\eta) = \frac{1}{2} N^{\alpha} \sum\limits_{x=1}^{N-1} \left( f(\eta^{x,x+1}) - f(\eta) \right), \end{equation} where $\eta^{x, x+1}$ is the configuration $\eta$ with particles at locations $x$ and $x+1$ swapped and $\alpha \in (1,2)$ is a fixed parameter (introduced so that we will be able to consider the limit $N \to \infty$). Since we will also be considering variants of this process with modified rates, we will often refer to the process with generator $\mathcal{L}$ as the \emph{unbiased interchange process}. The interchange process defines a probability distribution on permutation-valued paths $\eta^N = (\eta^N_t, 0 \leq t \leq T)$ for any $T \geq 0$. Consider now the permutation process $X^{\eta^N}$ associated to $\eta^N$, that is, sample $\eta^N$ according to the interchange process, pick a particle uniformly at random and follow its trajectory in $\eta^N$. The distribution $\mu^{\eta^N}$ of $X^{\eta^N}$, defined by \eqref{eq:empirical-eta}, is then a random element of $\mathcal{M}(\mathcal{D})$.
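To make the dynamics concrete, here is a minimal Gillespie-type simulation sketch of the unbiased interchange process (ours, purely illustrative; variable names are ad hoc). Since all edges carry the same rate $\frac{1}{2}N^{\alpha}$, one can sample exponential waiting times with total rate $\frac{1}{2}N^{\alpha}(N-1)$ and pick a uniformly random edge at each jump; following one fixed label then yields a rescaled trajectory, an element of $\mathcal{D}$.
\begin{verbatim}
import numpy as np

def simulate_interchange(N, T, alpha, seed=0):
    rng = np.random.default_rng(seed)
    eta = np.arange(N)        # eta[x] = label of the particle at site x (0-based)
    pos = np.arange(N)        # pos[i] = current site of the particle with label i
    total_rate = 0.5 * N ** alpha * (N - 1)   # each of the N-1 edges at rate N^alpha / 2
    t, times, traj = 0.0, [0.0], [pos[0]]
    while True:
        t += rng.exponential(1.0 / total_rate)
        if t > T:
            break
        x = rng.integers(N - 1)               # uniformly random edge (x, x+1)
        i, j = eta[x], eta[x + 1]
        eta[x], eta[x + 1] = j, i
        pos[i], pos[j] = x + 1, x
        times.append(t)
        traj.append(pos[0])                   # track the particle with label 0
    return np.array(times), (np.array(traj) + 1.0) / N   # rescaled positions in (0, 1]

# Choosing the tracked label uniformly at random instead of label 0 gives a sample
# from the permutation process X^{eta^N}.
times, X = simulate_interchange(N=200, T=1.0, alpha=1.5)
\end{verbatim}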
The position of a random particle in the interchange process will be distributed as the stationary simple random walk (in continuous time) on the line $\{1, \ldots, N\}$. If we look at timescales much shorter than $N^2$, typically each particle will have distance $o(N)$ from its origin, so the permutation obtained at time $t$ such that $tN^{\alpha} \ll N^{2}$ will be close (in the sense of permutons) to the identity permutation. As mentioned in the introduction, we will be interested in large deviation bounds for rare events such as seeing a nontrivial permutation after a short time. \end{subsection} \begin{subsection}{Euler equations and generalized incompressible flows}\label{sec:incompressible-flows} Let us now discuss the connection to fluid dynamics and incompressible flows (the discussion here follows \cite{ambrosio-figalli} and \cite{bernot-figalli-santambrogio}). The Euler equations describe the motion of an incompressible fluid in a domain $D \subseteq \mathbb{R}^d$ in terms of its \emph{velocity field} $u(t,x)$, which is assumed to be divergence-free. The evolution of $u$ is given in terms of the \emph{pressure field} $p$ \[ \begin{cases} \partial_t u + (u \cdot \nabla)u = - \nabla p \quad & \mbox{in $[0,T] \times D$}, \\ \mathrm{div} \, u = 0 \quad & \mbox{in $[0,T] \times D$},\\ u \cdot n = 0 \quad & \mbox{on $[0,T] \times \partial D$}, \end{cases} \] where the second equation encodes the incompressibility constraint and the third equation means that $u$ is parallel to the boundary $\partial D$. Assuming $u$ is smooth, the trajectory $g(t,x)$ of a fluid particle initially at position $x$ is obtained by solving the equation \[ \begin{cases} \dot{g}(t,x) = u(t, g(t,x)), \\ g(0,x) = x.\\ \end{cases} \] Since $u$ is assumed to be divergence-free, the flow map $\Phi^{t}_{g} : D \to D$ given by $\Phi^{t}_{g}(x) = g(t,x)$ is a measure-preserving diffeomorphism of $D$ for each $t \in [0,T]$. This means that $(\Phi^{t}_{g})_{\ast} \mu_D = \mu_D$, where from now on by $f_{\ast}$ we denote the pushforward map on measures, associated to $f$, and $\mu_D$ is the Lebesgue measure inside $D$. Denoting by $\mathrm{SDiff}(D)$ the space of all measure-preserving diffeomorphisms of $D$, we can rewrite the Euler equations in terms of $g$ \begin{equation}\label{eq:euler-g} \begin{cases} \ddot{g}(t,x) = - \nabla p (t, g(t,x)) \quad & \mbox{in $[0,T] \times D$}, \\ g(0,x) = x & \mbox{in $D$},\\ g(t, \cdot) \in \mathrm{SDiff}(D) & \mbox{for each $t \in [0,T]$}. \end{cases} \end{equation} Arnold proposed an interpretation according to which the equation above can be viewed as a geodesic equation on $\mathrm{SDiff}(D)$. Thus one can look for solutions to \eqref{eq:euler-g} by considering the variational problem \begin{equation}\label{eq:arnold} \mbox{minimize} \quad \frac{1}{2}\int\limits_{0}^{T} \int\limits_{D} |\dot{g}(t,x)|^2 \, d\mu_{D}(x) \, dt \end{equation} among all paths $g(t, \cdot) : [0,T] \to \mathrm{SDiff}(D)$ such that $g(0, \cdot) = f$, $g(T, \cdot) = h$ for some prescribed $f, h \in \mathrm{SDiff}(D)$ (by right invariance without loss of generality $f$ can be assumed to be the identity). The pressure $p$ then arises as a Lagrange multiplier coming from the incompressibility constraint. Shnirelman proved (\cite{shnirelman}) that in dimensions $d \geq 3$ the infimum in this minimization problem is not attained in general and in dimension $d=2$ there exist diffeomorphisms $h = g(T, \cdot)$ which cannot be connected to the identity map by a path with finite action.
This motivated Brenier (\cite{brenier}) to consider the following relaxation of this problem. With $C(D)$ denoting the space of continuous paths from $[0,T]$ to $D$ and $\mathcal{M}(C(D))$ the set of probability measures on $C(D)$, the variational problem is \begin{equation}\label{eq:minimize-brenier} \mbox{minimize} \quad \int\limits_{C(D)} \left(\frac{1}{2}\int\limits_{0}^{T} |\dot{\gamma}(t)|^2 \, dt \right) d\pi(\gamma) \end{equation} over all $\pi \in \mathcal{M}(C(D))$ satisfying the constraints \begin{equation}\label{eq:constraints} \begin{cases} \pi_{0,T} = (id, h)_{\ast} \mu_D, \\ \pi_t = \mu_D \quad \mbox{for each $t \in [0,T]$}, \end{cases} \end{equation} where $\pi_{0,T}$, $\pi_t$ denote the marginals of $\pi$ at times respectively $0,T$ and at time $t$. Following Brenier, a probability measure $\pi \in \mathcal{M}(C(D))$ satisfying constraints \eqref{eq:constraints} is called a \emph{generalized incompressible flow} between the identity $id$ and $h$. To see that indeed \eqref{eq:minimize-brenier} is a relaxation of \eqref{eq:arnold}, note that any sufficiently regular path $g(t, \cdot) : [0,T] \to \mathrm{SDiff}(D)$, for example corresponding to a solution of \eqref{eq:euler-g}, induces a generalized incompressible flow given by $\pi = (\Phi_g)_{\ast} \mu_D$, where as before $\Phi_g(x) = g(\cdot, x)$. As evidenced by the sine curve process mentioned in the introduction, the converse is false -- trajectories of particles sampled from a generalized flow can cross each other or split at a later time when starting from the same position, which is not possible for classical, smooth flows. We refer the reader to \cite{brenier-physica} for an interesting discussion of physical relevance of this phenomenon. The problem admits a natural further relaxation in which the target map is ``non-deterministic'', in the sense that we have $\pi_{0,T} = \mu$ with $\mu$ being an arbitrary probability measure supported on $D \times D$ and having uniform marginals on each coordinate, not necessarily of the form $\mu = (id, h)_{\ast}\mu_{D}$ for some map $h$. From now on whenever we refer to problem \eqref{eq:minimize-brenier} or generalized incompressible flows we will be always considering this more general variant. The connection between the generalized problem \eqref{eq:minimize-brenier} and the original Euler equations \eqref{eq:euler-g} is provided by a theorem due to Ambrosio and Figalli (\cite{ambrosio-figalli}), with earlier weaker results by Brenier (\cite{brenier-distribution}). Roughly speaking, they showed that given a measure $\mu$ with uniform marginals there exists a pressure function $p(t,x)$ such that the following holds -- one can replace the problem of minimizing the functional \eqref{eq:minimize-brenier} over incompressible flows satisfying $\pi_{0,T} = \mu$ by an easier problem in which the incompressibility constraint is dropped, provided one adds to the functional a Lagrange multiplier given by $p$. We refer the reader to \cite[Section 6]{ambrosio-figalli} for a precise formulation and further results on regularity of $p$. In particular, if $\pi$ is optimal for \eqref{eq:minimize-brenier} and the corresponding pressure $p$ is smooth enough, their result implies that almost every path $\gamma$ sampled from $\pi$ minimizes the functional \begin{equation}\label{eq:functional-with-p} \gamma \mapsto \int\limits_{0}^{T} \left( \frac{1}{2} |\dot{\gamma}(t)|^2 - p(t, \gamma(t)) \right) dt. 
\end{equation} In that case the equation $\ddot{g}(t,x) = - \nabla p(t, g(t,x))$ from \eqref{eq:euler-g} is nothing but the Euler-Lagrange equation for extremal points of the functional \eqref{eq:functional-with-p}. We can therefore, at least under some regularity assumptions on $p$, think of generalized incompressible flows as solutions to \eqref{eq:euler-g} in which instead of having a diffeomorphism we assume random initial conditions for each particle. From now on let us restrict the discussion to $D = [0,1]$, which will be most directly relevant to the results of this paper. In this case the original problem \eqref{eq:arnold} is somewhat uninteresting, since the only measure-preserving diffeomorphisms of $[0,1]$ are $f(x) = x$ and $f(x) = 1 - x$. However, the relaxed problem \eqref{eq:minimize-brenier} is non-trivial and indeed for the target map $h(x) = 1 - x$ and $T = 1$ the unique optimal solution is given by the sine curve process. In this setting, the reader may recognize that generalized incompressible flows are in fact the same objects as permuton processes. The term \emph{measure-preserving plans} is used in \cite{ambrosio-figalli} for what we call permutons. The functional minimized in \eqref{eq:minimize-brenier} is the energy $I(\pi)$ of a permuton process, defined in \eqref{eq:process-energy}. In this language the optimization problem we are interested in can be rephrased as follows: \begin{equation}\label{eq:minimize-permuton} \mbox{find} \quad \inf\limits_{\substack{\pi \in \mathcal{P} \\ \pi_{0,T} = \mu}} I(\pi), \end{equation} where the infimum is over all permuton processes $\pi \in \mathcal{P}$ satisfying $\pi_{0,T} = \mu$ for a given permuton $\mu \in \mathcal{M}([0,1]^2)$. \paragraph{Generalized solutions to Euler equations.} We will say that a permuton process $\pi$ is a \emph{generalized solution to Euler equations} if there exists a function $p : [0,T] \times [0,1] \to \mathbb{R}$, differentiable in the second variable, such that almost every path $x : [0,T] \to [0,1]$ sampled from $\pi$ satisfies the equation \begin{equation}\label{eq:gen-euler} \begin{cases} x'(t) = v(t) \\ v'(t) = - \partial_x p(t, x(t)) \\ \end{cases} \end{equation} for $t \in [0,T]$. This is of course equivalent to $x''(t) = - \partial_x p (t, x(t))$. By the remarks above, if $\pi$ minimizes the energy in \eqref{eq:minimize-permuton} and the associated pressure $p$ is smooth enough, then $\pi$ is always a generalized solution to Euler equations. However, this is only a necessary condition -- for a discussion of corresponding sufficient conditions see \cite{bernot-figalli-santambrogio}. \end{subsection} \begin{subsection}{Proof outline and structure of the paper}\label{sec:main-results} Let us now give a brief outline of the proof strategy for Theorem \ref{th:theorem-main-lower} and Theorem \ref{th:theorem-main-upper}. For the lower bound, given a process $X$ we construct a perturbation of the interchange process (defined by introducing asymmetric jump rates based on \eqref{eq:gen-euler}) for which a law of large numbers holds, namely, the distribution of the path of a random particle converges to a deterministic limit (which is the distribution of $X$). The large deviation principle is then proved by estimating the Radon-Nikodym derivative between the biased process and the original one.
The key property which makes this construction possible is that the process $X$ satisfies a second order ODE given by \eqref{eq:gen-euler}, so its trajectories are fully specified by the particle's position and velocity (the latter chosen initially from a mean zero distribution). The biased process is then constructed by endowing each particle with an additional parameter keeping track of its velocity, but we perform an additional change of variables, working instead of velocity with a variable we call \emph{color}. The advantage of this is that the uniform distribution of colors is stationary when the jump rates are properly chosen, which will greatly facilitate the analysis. An additional technical difficulty arises if the velocity distribution of $X$ is time-dependent or not regular enough near the boundary, in which case we first approximate $X$ by a process with a sufficiently regular and piecewise time-homogeneous velocity distribution. To prove the law of large numbers we need to show that in the biased interchange process particles' trajectories behave approximately like independent samples from $X$. This requires proving that their velocities remain uncorrelated when averaged over time and is accomplished by means of a local mixing result called the \emph{one block estimate}. It is here that we rely on stationarity of the uniform distribution of colors in the biased process and the fact that $X$ has velocity zero on average. The strategy for proving the upper bound is somewhat simpler. We consider a family of exponential martingales similar to the one employed in analyzing independent random walks and use the one block estimate to show that the particles' velocities are typically nonnegatively correlated. This enables us to prove the large deviation upper bound for compact sets and the extension to closed sets is done by proving exponential tightness. \paragraph{Structure of the paper.} The rest of the paper is structured as follows. In Section \ref{sec:ode-part} we introduce the change of variables needed to define the process with colors and prove the approximation result for $X$ mentioned above (Proposition \ref{prop:approximation-epsilon-delta}). In Section \ref{sec:interchange} we define the biased interchange process and derive the conditions on its rates which guarantee stationarity. Section \ref{sec:lln} contains the proof of the law of large numbers for the biased interchange process (Theorem \ref{th:lln}). In Section \ref{sec:one-block} we prove two variants of the one block estimate -- one needed for the large deviation upper bound (Lemma \ref{lm:one-block-superexponential-probability}) and a more involved one needed for the proof of the law of large numbers (Lemma \ref{lm:one-block-superexponential-biased}). In Section \ref{sec:lower-bound} these pieces are then used to prove the large deviation lower bound (Theorem \ref{thm:lower-bound-for-minimizers}). Section \ref{sec:upper-bound} is devoted to the proof of the large deviation upper bound (Theorem \ref{th:upper-bound}) and is independent of the previous sections (apart from the use of Lemma \ref{lm:one-block-superexponential-probability}). Finally, in Section \ref{sec:asymptotics} we prove Theorem \ref{th:lln-for-relaxed-networks} and Theorem \ref{th:asymptotics-relaxed} on relaxed sorting networks.
\end{subsection} \end{section} \begin{section}{ODEs and generalized solutions to Euler equations}\label{sec:ode-part} \paragraph{Regularity assumptions and properties of generalized solutions.} Suppose $\pi$ is a generalized solution to Euler equations \eqref{eq:gen-euler} and let $X$ be a process with distribution $\pi$. For the proof of the large deviation lower bound we will need to impose additional regularity assumptions on $\pi$. For $t \in [0,T]$ let $\mu_t$ denote the joint distribution of $(x(t), x'(t))$ when $x$ is sampled according to $\pi$. In particular, $\mu_0$ is the joint distribution of the initial conditions of the ODE \eqref{eq:gen-euler}. If $\Phi^{t,s}(x,v)$ denotes the solution $x(s)$ of \eqref{eq:gen-euler} satisfying $(x(t), v(t)) = (x, v)$, then $\mu_t = \Phi^{0,t}_{\ast}\mu_0$. We will assume that each $\mu_t$ has a density $\rho_t(x,v)$ with respect to the Lebesgue measure on $[0,1] \times \mathbb{R}$. For $x \in [0,1]$ and $t \in [0,T]$ let $\mu_{t,x}$ denote the conditional distribution of $v$, given $x$, at time $t$. In addition we assume that for $x=0$ or $1$ the distribution $\mu_{t,x}$ is a delta mass at $0$, as otherwise the process $X$ cannot stay confined to $[0,1]$ and have mean velocity zero everywhere (see the discussion of incompressibility below). Let $F_{t,x}$ denote the cumulative distribution function of $\mu_{t,x}$ and let $V_{t}(x, \cdot) : [0,1] \to \mathbb{R}$ be the quantile function of $\mu_{t,x}$, defined for $x \in [0,1]$ and $\phi \in (0,1]$ by \[ V_{t}(x,\phi) = \inf\left\{ v \in \mathbb{R} \, | \, F_{t,x}(v) \geq \phi \right\} \] and $V_t(x, 0) = \inf\left\{ v \in \mathbb{R} \, | \, F_{t,x}(v) > 0 \right\}.$ In particular for $x=0,1$ we have $V_{t}(x,\phi) = 0$. \begin{assumption}\label{as:main-assumptions} Throughout the paper, we will assume that for a generalized solution to Euler equations $\pi$ the following properties are satisfied: \begin{enumerate}[(1)] \item the pressure function $(t,x) \mapsto p(t,x)$ in \eqref{eq:gen-euler} is measurable in $t$ and differentiable in $x$, with the derivative $\partial_x p(t,x)$ Lipschitz continuous in $x$ (with the Lipschitz constant uniform in $t$) \item there exists a compact set $K \subseteq [0,1] \times \mathbb{R}$ such that for each $t \in [0,T]$ the density $\rho_t$ is supported in $K$ \item for $t \in [0,T], x \in [0,1]$ the support of $\mu_{t,x}$ is a connected interval in $\mathbb{R}$ \item the density $\rho_t$ is continuously differentiable in $t$, $x$ and $v$ for each $t \in [0,T]$ and $x,v$ in the interior of the support of $\rho_t$ \end{enumerate} \end{assumption} Let us comment on the relevance of these assumptions. Assumption (1) will guarantee uniqueness of solutions to \eqref{eq:gen-euler}. Assumption (2) implies that the velocity of a particle moving along a path sampled from $\pi$ stays uniformly bounded in time. Assumption (3) implies that for any $x \in (0,1)$ and $\phi \in [0,1]$ we have $F_{t,x}(V_{t}(x, \phi)) = \phi$, i.e., $V_t(x, \cdot)$ is the inverse function of $F_{t,x}$. Assumptions (3) and (4) imply that $V_{t}(x,\phi)$ is a continuous function of $t$, $x$, $\phi$ and it is continuously differentiable in all variables for $x \in (0,1)$. Note that for $V_t(x,\phi)$ to be differentiable at $\phi = 0,1$, the distribution function $F_{t,x}$ necessarily has to be non-differentiable at corresponding $v$ such that $F_{t,x}(v) = \phi$. This is why we can require the density $\rho_t$ to be smooth only in the interior of its support and not at the boundary.
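To fix ideas, the relation between $F_{t,x}$ and $V_t(x,\cdot)$ can be checked on a toy conditional law. The following sketch (ours, with a made-up choice of $\mu_{t,x}$ for one fixed $(t,x)$, namely the uniform distribution on $[-c,c]$) verifies numerically that $V_t(x,\cdot)$ inverts $F_{t,x}$ on the support and that uniform variables are mapped by $V_t(x,\cdot)$ to samples of $\mu_{t,x}$.
\begin{verbatim}
import numpy as np

c = 0.3                                        # half-width of the toy support
F = lambda v: np.clip((v + c) / (2 * c), 0.0, 1.0)   # cdf of the uniform law on [-c, c]
V = lambda phi: -c + 2 * c * phi                      # its quantile function

v = np.linspace(-c, c, 11)
assert np.allclose(V(F(v)), v)                 # V_t(x, F_{t,x}(v)) = v on the support

rng = np.random.default_rng(1)
phi = rng.uniform(size=100_000)
samples = V(phi)                               # distributed (approximately) as mu_{t,x}
print(samples.mean(), samples.var(), c ** 2 / 3)   # mean ~ 0, variance ~ c^2/3
\end{verbatim}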
From now on we assume that $\pi$ is a fixed generalized solution to Euler equations, satisfying Assumptions \eqref{as:main-assumptions}. Almost every path $x : [0,T] \to [0,1]$ sampled from $\pi$ satisfies the ODE \begin{equation}\label{eq:true-process} \begin{cases} x'(t) = v(t) \\ v'(t) = - \partial_x p(t, x(t)). \\ \end{cases} \end{equation} Note that since $\pi$ is a permuton process, each measure $\mu_t$ satisfies the \emph{incompressibility} condition, meaning that its projection onto the first coordinate is equal to the uniform measure on $[0,1]$. This is equivalent to the property that for any test function $f : [0,1] \to \mathbb{R}$ we have \[ \int\limits f(x) \, d\mu_t(x,v) = \int\limits_{0}^{1} f(x) \, dx. \] An important consequence of the incompressibility assumption is that under $\mu_t$ the velocity has mean zero at each $x$, that is, we have the following \begin{lemma}\label{lm:velocity-mean-zero} For any $t \in [0,T]$ and $x \in [0,1]$ we have \[ \int v \, d\mu_{t,x}(v) = 0. \] \end{lemma} \begin{proof} Consider any test function $f : [0,1] \to \mathbb{R}$ and write \[ \int\limits f(x) \, d\mu_{t+s}(x,v) = \int\limits f(x) \, d(\Phi^{t,t+s}_{\ast}\mu_t) (x,v) = \int\limits f(\Phi^{t,t+s}(x,v)) \, d\mu_{t}(x,v). \] By incompressibility the integral above is always equal to $\int\limits_{0}^{1} f(x) \, dx $, in particular does not depend on time. On the other hand its derivative with respect to $s$ is \[ \frac{d}{ds} \int\limits f(x) \, d\mu_{t+s}(x,v) = \frac{d}{ds} \int\limits f(\Phi^{t,t+s}(x,v)) \, d\mu_{t}(x,v) = \int\limits f'(\Phi^{t,t+s}(x,v)) \frac{d\Phi^{t,t+s}}{ds}(x,v) \, d\mu_{t}(x,v). \] Since $\Phi^{t,t+s}(x,v)\vert_{s=0} = x$ and $\frac{d\Phi^{t,t+s}}{ds}(x,v)\vert_{s=0} = v$, by evaluating the derivative at $s = 0$ we arrive at $\int\limits f'(x) v \, d\mu_{t}(x,v) = 0$. Since $\int g(x,v) \, d\mu_t(x,v) = \int g(x,v) \, d\mu_{t,x}(v) dx$ for any measurable $g$ and $f$ was an arbitrary test function, the claim of the lemma holds for almost every $x$. Since we have assumed that $\mu_{t}$ has a continuous density, the claim in fact holds for all $x$, which ends the proof. \end{proof} We will also make use of an explicit evolution equation that the densities $\rho_t$ have to satisfy. This is the content of the following lemma. \begin{lemma}\label{lm:evolution-equation} For any $t \in [0,T]$ and $x, v$ in the interior of the support of $\rho_t$ we have \[ \frac{\partial \rho_t}{\partial t} (x,v) = - v \frac{\partial \rho_t}{\partial x} (x,v) + \partial_x p(t, x) \frac{\partial \rho_t}{\partial v} (x,v). \] \end{lemma} \begin{proof} Let $f : [0,1] \times \mathbb{R} \to \mathbb{R}$ be any test function and consider the integral \[ I_{t+s} = \int f(x,v) \, d\mu_{t+s}(x,v). \] On the one hand, its derivative with respect to $s$ is equal to \begin{align*} \frac{d}{ds}I_{t+s} & = \frac{d}{ds} \int f(x,v) \, d\mu_{t+s}(x,v) =\frac{d}{ds} \int f\left(\Phi^{t,t+s}(x,v),\frac{d\Phi^{t,t+s}}{ds}(x,v)\right) \rho_t (x,v) \, dx \, dv = \\ & = \int \bigg[ \frac{\partial f}{\partial x}\left(\Phi^{t,t+s}(x,v),\frac{d\Phi^{t,t+s}}{ds}(x,v)\right)\frac{d\Phi^{t,t+s}}{ds}(x,v) + \\ & + \frac{\partial f}{\partial v}\left(\Phi^{t,t+s}(x,v),\frac{d\Phi^{t,t+s}}{ds}(x,v)\right)\frac{d^2\Phi^{t,t+s}}{ds^2}(x,v) \bigg] \rho_t (x,v) \, dx \, dv. 
\end{align*} Since $\Phi^{t,t+s}(x,v)$ is a solution to \eqref{eq:true-process}, we have $\frac{d\Phi^{t,t+s}}{ds}(x,v)\big\vert_{s=0} = v$ and $\frac{d^2\Phi^{t,t+s}}{ds^2}(x,v)\big\vert_{s=0} = -\partial_x p(t, x)$, which gives us \[ \frac{d}{ds}I_{t+s}\Big\vert_{s=0} = \int \left( \frac{\partial f}{\partial x}(x,v)v - \frac{\partial f}{\partial v}(x,v)\partial_x p(t, x) \right) \rho_t (x,v) \, dx \, dv. \] Performing integration by parts with respect to $x$ for the first term and with respect to $v$ for the second term gives (noting that $f$ has compact support so the boundary terms vanish) \[ \frac{d}{ds}I_{t+s}\Big\vert_{s=0} = -\int f(x,v)v \frac{\partial \rho_t}{\partial x} (x,v) \, dx \, dv + \int f(x,v)\partial_x p(t, x) \frac{\partial \rho_t}{\partial v} (x,v) \, dx \, dv. \] On the other hand, we have \[ \frac{d}{ds}I_{t+s} = \frac{d}{ds} \int f(x,v) \, d\mu_{t+s}(x,v) = \frac{d}{ds} \int f(x,v) \rho_{t+s} (x,v) \, dx \, dv = \int f(x,v) \frac{\partial \rho_{t+s}}{\partial s} (x,v) \, dx \, dv, \] so \[ \frac{d}{ds}I_{t+s}\Big\vert_{s=0} = \int f(x,v) \frac{\partial \rho_{t}}{\partial t} (x,v) \, dx \, dv \] and thus \[ \int f(x,v) \left(- v \frac{\partial \rho_t}{\partial x} (x,v) + \partial_x p(t, x) \frac{\partial \rho_t}{\partial v} (x,v) - \frac{\partial \rho_{t}}{\partial t} (x,v) \right) dx \, dv = 0. \] Since the test function $f$ was arbitrary, the equation from the statement of the lemma must hold for every $t$, $x$, $v$ as assumed. \end{proof} \paragraph{The colored trajectory process.} Let $X = (X_t, 0 \leq t \leq T)$ be the permuton process with distribution $\pi$. For the large deviation lower bound we will need to construct a suitable interacting particle system in which the behavior of a random particle mimics that of the permuton process $X$. A crucial ingredient will be a property analogous to Lemma \ref{lm:velocity-mean-zero}, i.e., having velocity distribution whose mean is locally zero. Instead of working with velocity $v$, whose distribution $\rho_t(x,v)$ at a given site $x$ may change in time, it will be more convenient to perform a change of variables and use another variable $\phi$, which we call \emph{color}, whose distribution will be invariant in time. Recall that under Assumptions \eqref{as:main-assumptions} the distribution function $F_{t,x}(\cdot)$ and the quantile function $V_{t}(x, \cdot)$ are related by \begin{equation}\label{eq:cdf} \begin{cases} F_{t,x}(V_{t}(x, \phi)) = \phi \\ V_t(x, F_{t,x}(v)) = v \end{cases} \end{equation} for any $t \in [0,T]$, $x \in (0,1)$, $\phi \in [0,1]$, $v \in \mathrm{supp} \, \mu_{t,x}$. The reason for introducing the variable $\phi$ is the following elementary property -- if $\phi$ is sampled from the uniform distribution on $[0,1]$, then $V_t(x, \phi)$ is distributed according to $\mu_{t,x}$. Thus instead of working with $(x,v)$ variables in the ODE \eqref{eq:true-process}, where the distribution of $v$ evolves in time, we can set up an ODE for $x$ and $\phi$ such that the joint distribution of $(x, \phi)$ will be uniform on $[0,1]^2$ at each time. The velocity $v$ and its distribution can then be recovered via the equation $v = V_t(x, \phi)$. Let $(x(t), v(t))$ be a solution to \eqref{eq:true-process} such that $x(t) \neq 0,1$ and let \[ \phi(t) = F_{t, x(t)}(v(t)). \] Let us derive the ODE that $(x(t), \phi(t))$ satisfies.
Since $(x(t), v(t))$ is a solution of \eqref{eq:true-process}, we have \begin{align*} \phi'(t) = & \frac{\partial F_{t, x(t)}}{\partial t}(v(t)) + \frac{\partial F_{t, x(t)}}{\partial x}(v(t))x'(t) + \frac{\partial F_{t, x(t)}}{\partial v}(v(t)) v'(t) = \\ = &\frac{\partial F_{t, x(t)}}{\partial t}(v(t)) + \frac{\partial F_{t, x(t)}}{\partial x}(v(t))v(t) + \rho_t(x(t), v(t)) \left[ -\partial_x p (t, x(t)) \right]. \end{align*} Lemma \ref{lm:evolution-equation} implies that \[ \frac{\partial F_{t, x(t)}}{\partial t} (x(t), v(t)) = - \int\limits_{-\infty}^{v(t)} w \frac{\partial \rho_t}{\partial x} (x(t),w) \, dw + \left[ \partial_x p(t, x(t)) \right] \rho_t (x(t),v(t)), \] which gives \[ \phi'(t) = \frac{\partial F_{t, x(t)}}{\partial x}(v(t))v(t) - \int\limits_{-\infty}^{v(t)} w \frac{\partial \rho_t}{\partial x} (x(t),w) \, dw \] and upon integrating by parts in the last integral we obtain \begin{equation}\label{eq:phi-prime} \phi'(t) = \int\limits_{-\infty}^{v(t)} \frac{\partial F_{t,x(t)}}{\partial x}(x(t),w) \, dw. \end{equation} Now, differentiating \eqref{eq:cdf} with respect to $x$ and $\phi$ gives \[ \begin{cases} \frac{\partial F_{t,x}}{\partial x}(V_t(x, \phi)) + \rho_t(x,\phi) \frac{\partial V_t}{\partial x}(x, \phi) = 0 \\ \rho_t(x, \phi) \frac{\partial V_t}{\partial \phi}(x, \phi) = 1. \end{cases} \] Also by \eqref{eq:cdf} we have $v(t) = V_t(x(t), \phi(t))$, so a change of variables $w = V_t(x(t), \psi)$ in \eqref{eq:phi-prime} yields \[ \phi'(t) = R_t(x(t), \phi(t)), \] where $R_t(x,\phi) = - \int\limits_{0}^{\phi} \frac{\partial V_t}{\partial x}(x, \psi) \, d\psi$. Thus we have shown that $(x(t), \phi(t))$ satisfies the ODE \begin{equation}\label{eq:process-with-color} \begin{cases} x'(t) = V_t(x(t), \phi(t)) \\ \phi'(t) = R_t(x(t), \phi(t)). \\ \end{cases} \end{equation} If $x(t) \neq 0,1$, this equation is equivalent to \eqref{eq:true-process}, i.e., $(x(t), \phi(t))$ is a solution of \eqref{eq:process-with-color} with initial conditions $(x(0), \phi(0)) = (x_0, \phi_0)$ if and only if $(x(t),v(t))$ is a solution of \eqref{eq:true-process} with initial conditions $(x(0), v(0))= (x_0, V_0(x_0, \phi_0))$. We also note that Lemma \ref{lm:velocity-mean-zero} expressed in terms of $(x, \phi)$ variables states that for each $t \in [0,T]$ and $x \in [0,1]$ we have \begin{equation}\label{eq:mean-zero-color} \int\limits_{0}^{1} V_t(x, \psi) \, d\psi = 0. \end{equation} From now on we work exclusively with \eqref{eq:process-with-color}. We will need to make two approximations necessary for the interacting particle system analysis later on. One is necessitated by the fact that the function $V_t(x, \phi)$ might not be smooth with respect to $x$ at the boundaries $x = 0,1$ (this happens, for example, for the sine curve process). We will therefore replace the function by its smooth approximation in a $\beta$-neighborhood of the boundary and in the end take $\beta \to 0$. The other approximation consists in dividing the time interval $[0,T]$ into intervals of length $\delta$ and approximating $V_t(x, \phi)$ for given $x, \phi$ with a piecewise-constant function of $t$. This will enable us to give a simple stationarity condition for the corresponding interacting particle system and in the end take $\delta \to 0$ as well.
Let $\beta \in (0,\frac{1}{4})$ and let $V_{t}^{\beta}(x, \phi)$ be a function with the following properties \begin{enumerate}[(a)] \item $V_{t}^{\beta}(x, \phi)$ is continuously differentiable for every $t \in [0,T]$, $x \in [0,1]$, $\phi \in [0,1]$, \item $V_{t}^{\beta}(x, \phi) = V_{t}(x, \phi)$ for $x \in \left[\beta, 1 - \beta\right]$ and $V_{t}^{\beta}(0,\phi) = V_{t}^{\beta}(1,\phi) = 0$, \item for each $x \in [0,1]$ we have $\int\limits_{0}^{1} V_{t}^{\beta}(x, \psi) \, d\psi = 0$, \item $|V_{t}^{\beta}(x, \phi)| \leq |V_{t}(x, \phi)| + 1$, \item we have $\lim\limits_{\beta \to 0} \int\limits_{0}^{1}\int\limits_{0}^{1} |V_{t}^{\beta}(x, \phi)|^2 \, dx \, d\phi = \int\limits_{0}^{1}\int\limits_{0}^{1} |V_{t}(x, \phi)|^2 \, dx \, d\phi$. \end{enumerate} The existence of such a function $V_{t}^{\beta}$ is proved at the end of this section. By $(x^{\beta}(t), \phi^{\beta}(t))$ we will denote the solution to the ODE \begin{equation}\label{eq:process-with-delta} \begin{cases} x'(t) = V^{\beta}_{t}(x(t), \phi(t)) \\ \phi'(t) = R^{\beta}_{t}(x(t), \phi(t)). \\ \end{cases} \end{equation} Take any $\delta > 0$ (to simplify notation we will assume that $T$ is an integer multiple of $\delta$, this will not influence the argument in any substantial way) and consider a partition $0 = t_0 < t_1 < \ldots < t_M = T$ of $[0,T]$ into $M = \frac{T}{\delta}$ intervals of length $\delta$, with $t_k = k \delta$. Let $V^{\beta, \delta}(t, x,\phi)$ be the piecewise-constant in time approximation of $V^{\beta}_{t}(x, \phi)$, defined by \begin{equation}\label{eq:def-of-piecewise-s} V^{\beta, \delta}(t, x,\phi) = V^{\beta}_{t_k}(x,\phi) \quad \mbox{for $t \in [t_k, t_{k+1})$}, \, \, \mbox{$k=0,1,\ldots,M - 1$}. \end{equation} We can now define the piecewise-stationary process which will be our main tool in subsequent arguments. Consider the ODE \begin{equation}\label{eq:invariant-process} \begin{cases} y'(t) = V^{\beta, \delta}(t, y(t),\phi(t)) \\ \phi'(t) = R^{\beta, \delta}(t, y(t), \phi(t)), \\ \end{cases} \end{equation} where \[ R^{\beta, \delta}(t, y, \phi) = - \int\limits_{0}^{\phi} \frac{\partial V^{\beta, \delta}}{\partial y}(t, y, \psi) \, d\psi. \] Solutions to \eqref{eq:invariant-process} exist and are unique as usual for any initial conditions, provided we interpret $(y'(t), \phi'(t))$ above as right-handed derivatives at $t = 0, t_1, t_2, \ldots, t_{M-1}$ (we adopt this convention from now on). Let $P^{\beta, \delta} = \left( (X^{\beta, \delta}_{t}, \Phi^{\beta, \delta}_{t}), 0 \leq t \leq T \right)$ be the stochastic process with values in $[0,1]^2$ with the following distribution: choose $(X^{\beta, \delta}_{0}, \Phi^{\beta, \delta}_{0})$ uniformly at random from $[0,1]^2$ and then take $(X^{\beta, \delta}_{t}, \Phi^{\beta, \delta}_{t}) = (y(t), \phi(t))$, where $(y, \phi)$ is the solution of the system \eqref{eq:invariant-process} with initial conditions given by $(y(0), \phi(0)) = (X^{\beta, \delta}_{0}, \Phi^{\beta, \delta}_{0})$. We will call this process the \emph{colored trajectory process} associated to \eqref{eq:invariant-process}. We also define the process $P^{\beta} = \left( (X^{\beta}_{t}, \Phi^{\beta}_{t}), 0 \leq t \leq T \right)$, which is obtained in the same way as $P^{\beta, \delta}$ except that we follow solutions to \eqref{eq:process-with-delta} instead of \eqref{eq:invariant-process}, i.e., make no piecewise approximation in time of $V_{t}^{\beta}$. 
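Before stating the key stationarity property, it may help to see the mechanism on a toy example. In the sketch below (ours; the specific field is made up and taken time-homogeneous for simplicity) we use $V(x,\phi)=\sin(\pi x)(\phi-\tfrac{1}{2})$, which vanishes at $x=0,1$ and has zero $\phi$-average, set $R(x,\phi)=-\int_0^{\phi}\partial_x V(x,\psi)\,d\psi=\tfrac{\pi}{2}\cos(\pi x)\,\phi(1-\phi)$, and check numerically that the resulting flow keeps a uniform cloud of initial conditions on $[0,1]^2$ approximately uniform, in line with the lemma that follows.
\begin{verbatim}
import numpy as np

V = lambda x, phi: np.sin(np.pi * x) * (phi - 0.5)
R = lambda x, phi: 0.5 * np.pi * np.cos(np.pi * x) * phi * (1.0 - phi)

def flow(x, phi, T=1.0, steps=2000):
    # explicit Euler integration of x' = V(x, phi), phi' = R(x, phi)
    dt = T / steps
    for _ in range(steps):
        x, phi = x + dt * V(x, phi), phi + dt * R(x, phi)
    return x, phi

rng = np.random.default_rng(2)
x0, phi0 = rng.uniform(size=100_000), rng.uniform(size=100_000)
xT, phiT = flow(x0, phi0)
# moments of the uniform distribution on [0,1]^2 are (approximately) preserved
print(xT.mean(), (xT ** 2).mean(), (xT * phiT).mean())   # ~ 0.5, 0.333, 0.25
\end{verbatim}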
The key property of the process $P^{\beta, \delta}$ is the following \begin{lemma}\label{lm:p-is-stationary} For each $t \in [0,T]$ the distribution of $(X^{\beta, \delta}_t, \Phi^{\beta, \delta}_t)$ is uniform on $[0,1]^2$. \end{lemma} \begin{proof} First we show that the process stays confined to $[0,1]^2$. Because of uniqueness of solutions to \eqref{eq:invariant-process} it is enough to show that if a solution starts in the interior of $[0,1]^2$, it never reaches the boundary, or, equivalently, that if a solution is at the boundary at some $t$, it is actually at the boundary for all $s \in [0,T]$. If $y(t) = 0$ or $1$ for any $t$, then $y'(t) = 0$, since $V^{\beta, \delta}(t, 0,\phi) = V^{\beta, \delta}(t, 1,\phi) = 0$ for any $\phi$. By uniqueness of solutions we then have $y(t) \equiv 0$ or $1$. If $\phi(t) = 0$ for any $t$, then $R^{\beta, \delta}(t, y, 0) = 0$ regardless of $y$, so as before $\phi'(t) = 0$ and $\phi(t) \equiv 0$. Finally, if $\phi(t) = 1$, then using the property (c) of the function $V^{\beta}_{t}(x, \phi)$ we have \[ R^{\beta, \delta}(t,y,1) = - \int\limits_{0}^{1} \frac{\partial V^{\beta, \delta}}{\partial y}(t, y, \psi) \, d\psi = - \frac{\partial}{\partial y} \int\limits_{0}^{1} V^{\beta, \delta} (t, y, \psi) \, d\psi = 0, \] so as before $\phi'(t) = 0$ and $\phi(t) \equiv 1$. Now we observe that the form of $V^{\beta, \delta}$ and $R^{\beta, \delta}$ in \eqref{eq:invariant-process} implies that the vector field $(V^{\beta, \delta}(t,\cdot,\cdot),R^{\beta, \delta}(t,\cdot,\cdot))$ is divergence-free at each $t$, so by Liouville's theorem the uniform measure on $[0,1]^2$ is invariant for the corresponding flow map. \end{proof} In particular, the process $X^{\beta, \delta} = (X^{\beta, \delta}_t, 0 \leq t \leq T)$ is a permuton process. Crucially, we can couple it to the process $X$ in a natural way. Consider $(x_0, \phi_0)$ chosen uniformly at random from $[0,1]^2$ and take $(x(0), v(0)) = (x_0, V_{0}(x_0, \phi_0))$, resp. $(y(0), \phi(0)) = (x_0, \phi_0)$, as initial conditions for \eqref{eq:process-with-color}, resp. \eqref{eq:invariant-process}. By definition of $V_0(x, \phi)$, the pair $(x(0), v(0))$ has distribution given by $\mu_0$, so indeed the pair of solutions $(x(t), y(t))$ corresponding to the initial conditions above defines a coupling of $X$ and $X^{\beta, \delta}$. From now on $X$ and $X^{\beta, \delta}$ are always assumed to be coupled in this way. It is readily seen that the statements above also hold for $P^{\beta}$ instead of $P^{\beta, \delta}$, hence with a slight abuse of notation we can allow $\delta = 0$ and write $P^{\beta, 0} = P^{\beta}$, $X^{\beta,0} = X^{\beta}$ etc. Our goal in the remainder of this section is to show that, as $\beta, \delta \to 0$, the processes $X$ and $X^{\beta, \delta}$ typically stay close to each other and have approximately the same Dirichlet energy, so in the probabilistic part of the arguments it will be enough to work with the process $(X^{\beta, \delta}, \Phi^{\beta, \delta})$, which is more convenient thanks to piecewise stationarity. First we prove a simple lemma, showing that $X^{\beta}$ is unlikely to ever be close to the boundary (so that approximation of $X$ with $X^{\beta}$ is meaningful as $\beta \to 0$). \begin{lemma}\label{lm:no-visits-to-boundary} Let $\mathbb{P}$ denote the law of the process $X^{\beta}$. Let \[ B^{\beta} = \left\{ \exists t \in [0,T] \, X^{\beta}_{t} \notin [\beta, 1 - \beta] \right\}. \] We have \[ \mathbb{P}\left( B^{\beta} \right) \xrightarrow{\beta \to 0} 0.
\] \end{lemma} \begin{proof} We will prove that $X^{\beta}_{t} \notin [0, \beta]$ with high probability as $\beta \to 0$ (the proof for $[1-\beta, 1]$ is analogous). Suppose that $y$ is a solution of \eqref{eq:invariant-process} with initial condition $y(0) \notin [0, 2\beta]$ and that $y(t) \in [0, \beta]$ for some $t \in [0,T]$. Then there exists a time interval $[s,s']$ such that $y(s) = 2\beta$, $y(s') = \beta$ and $y(u) \in [\beta, 2 \beta]$ for every $u \in [s,s']$. Without loss of generality we can assume that $[s,s'] \subseteq [t_k, t_{k+1})$ for some $k$ (the other case is easily dealt with by further subdividing $[\beta, 2 \beta]$ into two equal subintervals and repeating the argument for each of them). By the mean value theorem \[ |y(s) - y(s')| = (s' - s) |y'(w)| \] for some $w \in [s,s']$. For $x \in [\beta, 2\beta]$ we have $V^{\beta, \delta}(w,x,\phi) = V_{t_k}(x,\phi)$, so $y'(w) = V_{t_k}(y(w), \phi(w))$. Since $|y(w)| \leq 2\beta$ and $V_{t_k}(x,\phi)$ is continuous at $x=0$, we have $|y'(w)| \leq f(\beta)$ for some function $f$ (depending only on $V$) satisfying $\lim\limits_{\beta \to 0} f(\beta) = 0$. As $|y(s) - y(s')| = \beta$, altogether this implies that $s'- s \geq \frac{\beta}{f(\beta)}$, i.e., if the process $X^{\beta}$ starts outside $[0, 2\beta]$, it has to spend time at least $\frac{\beta}{f(\beta)}$ before it reaches $[0, \beta]$. Thus \[ \int\limits_{0}^{T} \mathbbm{1}_{\{ X^{\beta}_{s} \in [0, \beta] \}} \, ds \geq \frac{\beta}{f(\beta)} \mathbbm{1}_{\{\exists t \in [0,T] \, X^{\beta}_{t} \in [0,\beta]\}} \mathbbm{1}_{\{X^{\beta}_{0} \notin [0,2\beta]\}}. \] Taking expectation yields \[ \mathbb{E} \int\limits_{0}^{T} \mathbbm{1}_{\{ X^{\beta}_{s} \in [0, \beta] \}} \, ds \geq \frac{\beta}{f(\beta)} \mathbb{P} \left( \{\exists t \in [0,T] \, X^{\beta}_{t} \in [0,\beta]\} \cap \{X^{\beta}_{0} \notin [0,2\beta]\} \right). \] Since $X^{\beta}$ is a permuton process, $X^{\beta}_s$ has uniform distribution for each $s$, which gives \[ \mathbb{E} \int\limits_{0}^{T} \mathbbm{1}_{\{ X^{\beta}_{s} \in [0, \beta] \}} \, ds = \int\limits_{0}^{T} \mathbb{E} \mathbbm{1}_{\{ X^{\beta}_{s} \in [0, \beta] \}} \, ds = \int\limits_{0}^{T} \mathbb{P} \left( X^{\beta}_{s} \in [0, \beta] \right) ds = T \beta. \] Together with the inequality above this implies \[ T \beta \geq \frac{\beta}{f(\beta)} \left( \mathbb{P} \left( \exists t \in [0,T] \, X^{\beta}_{t} \in [0,\beta] \right) - \mathbb{P}(X^{\beta}_{0} \in [0, 2\beta])\right). \] Since $X^{\beta}_{0}$ has uniform distribution, we have $\mathbb{P}(X^{\beta}_{0} \in [0, 2\beta]) = 2\beta$. Thus \[ \mathbb{P} \left( \exists t \in [0,T] \, X^{\beta}_{t} \in [0,\beta] \right) \leq 2 \beta + T f(\beta). \] Since $f(\beta) \to 0$ as $\beta \to 0$, the claim is proved. \end{proof} \begin{proposition}\label{prop:x-and-y-are-close} Fix $\beta \in (0, \frac{1}{4})$ and $(x_0, \phi_0) \in [0,1]^2$. Let $(x^{\beta}(t), \phi^{\beta}(t))$, resp. $(x^{\beta, \delta}(t), \phi^{\beta, \delta}(t))$, be the solution to \eqref{eq:process-with-delta}, resp. \eqref{eq:invariant-process}, with initial conditions $(x_0, \phi_0)$. We have \begin{align*} & \sup\limits_{t \in [0,T]} | x^{\beta, \delta}(t) - x^{\beta}(t)| \xrightarrow{\delta \to 0} 0, \\ & \sup\limits_{t \in [0,T]} | \phi^{\beta, \delta}(t) - \phi^{\beta}(t)| \xrightarrow{\delta \to 0} 0. \end{align*} \end{proposition} \begin{proof} The statement follows from continuous dependence of solutions to an ODE on parameters, see e.g., \cite[Theorem 4.2]{ode-book}.
Denoting $V^{\beta, 0}(t, y, \phi) = V^{\beta}_t(y, \phi)$, $R^{\beta, 0}(t, y, \phi) = R^{\beta}_t(y, \phi)$, we only need to check that for $f(t, y, \phi, \delta) = V^{\beta, \delta}(t,y,\phi)$, $g(t, y, \phi, \delta) = R^{\beta, \delta}(t, y, \phi)$ we have \begin{enumerate}[(1)] \item $f(\cdot, y, \phi, \delta)$ and $g(\cdot, y, \phi, \delta)$ are measurable on $[0,T]$, \item for any fixed $t \in [0,T]$ and $\delta > 0$ $f(t,\cdot,\cdot,\delta)$ and $g(t,\cdot,\cdot,\delta)$ are continuous in $(y, \phi)$, \item for any fixed $t \in [0,T]$ $f(t,\cdot,\cdot,\cdot)$ and $g(t,\cdot,\cdot,\cdot)$ are continuous in $(y,\phi,\delta)$ at $\delta = 0$, \item $f(t,y,\phi,\delta)$, $g(t,y,\phi,\delta)$ are uniformly bounded. \end{enumerate} Properties 1), 2) and 4) follow directly from our regularity assumptions about $V^{\beta, \delta}(t,y,\phi)$ (in case of $R^{\beta, \delta}(t,y,\phi)$ we use continuity of $\frac{\partial V^{\beta, \delta}}{\partial y}(t, y, \phi)$). Property 3) follows from pointwise convergence $f(t,y,\phi,\delta) \xrightarrow{\delta \to 0} f(t,y,\phi,0)$ and equicontinuity of $\{f(t,y,\phi,\delta)\}_{\delta \geq 0}$ in $(y, \phi)$, which in turn follows from uniform continuity of $V^{\beta}_{t}(y, \phi)$ in $t, y$ and $\phi$. The argument for $g(t,y,\phi, \delta)$ is analogous (again, using uniform continuity of $\frac{\partial V^{\beta, \delta}}{\partial y}(t, y, \phi)$). \end{proof} Now we can prove the main result of this section, which states that the trajectories of the process $X$ and its energy can be approximated by those of the process $X^{\beta, \delta}$. \begin{proposition}\label{prop:approximation-epsilon-delta} Let $\pi \in \mathcal{M}(\mathcal{D})$ be the distribution of the process $X$ and let $\pi^{\beta, \delta} \in \mathcal{M}(\mathcal{D})$ be the distribution of the process $X^{\beta, \delta}$. Then we have \[ \lim\limits_{\beta \to 0} \lim\limits_{\delta \to 0} d_{\mathcal{W}}^{sup}(\pi, \pi^{\beta, \delta}) = 0, \] where $d_{\mathcal{W}}^{sup}$ is the Wasserstein distance associated to the supremum norm on $\mathcal{D}$. Furthermore, \[ \lim\limits_{\beta \to 0} \lim\limits_{\delta \to 0} I(\pi^{\beta, \delta}) = I(\pi), \] where $I(\mu)$ is the energy of the process $\mu$ defined in \eqref{eq:process-energy}. \end{proposition} \begin{proof} For the first convergence it is enough to show that $\mathbb{E} \norm{X - X^{\beta, \delta}}_{sup} \to 0$ in the coupling between $X$ and $X^{\beta, \delta}$ considered before. We have \[ \mathbb{E} \norm{X - X^{\beta, \delta}}_{sup} \leq \mathbb{E} \norm{X - X^{\beta}}_{sup} + \mathbb{E} \norm{X^{\beta} - X^{\beta, \delta}}_{sup}. \] Let $B^{\beta}$ be the event from the statement of Lemma \ref{lm:no-visits-to-boundary}. Since the supremum norm is bounded by $1$, we have \[ \mathbb{E} \norm{X - X^{\beta}}_{sup} \leq \mathbb{P}\left( B^{\beta} \right) + \mathbb{E} \left[\norm{X - X^{\beta}}_{sup} \mathbbm{1}_{(B^{\beta})^c} \right]. \] By Lemma \ref{lm:no-visits-to-boundary} the first term is $o(1)$ as $\beta \to 0$. Since $V^{\beta}_{t}(x, \phi) = V_{t}(x, \phi)$ if $x \in [\beta, 1-\beta]$, on the event $(B^{\beta})^c$ we have $X^{\beta} = X$, so the second term above is equal to $0$. 
As for $\mathbb{E}\norm{X^{\beta} - X^{\beta, \delta}}_{sup}$, by Proposition \ref{prop:x-and-y-are-close} for fixed $\beta > 0$ we have with probability one $\norm{X^{\beta} - X^{\beta, \delta}}_{sup} \to 0$ as $\delta \to 0$, which together with the estimate on $\mathbb{E} \norm{X - X^\beta}_{sup}$ proves the first claim of the proposition. As for the energy, let $\pi^\beta$ denote the distribution of the process $X^\beta$, with $X$, $X^\beta$ and $X^{\beta, \delta}$ coupled as before. Since \[ |I(\pi) - I(\pi^{\beta, \delta})| \leq |I(\pi) - I(\pi^{\beta})| + |I(\pi^{\beta}) - I(\pi^{\beta, \delta})| \] it is enough to show that $\lim\limits_{\delta \to 0} I(\pi^{\beta, \delta}) = I(\pi^{\beta})$ and $\lim\limits_{\beta \to 0} I(\pi^{\beta}) = I(\pi)$. We have \[ I(\pi^{\beta, \delta}) = \frac{1}{2} \, \mathbb{E} \int\limits_{0}^{T} |\dot{X}^{\beta, \delta}(t)|^2 \, dt = \frac{1}{2} \, \mathbb{E} \int\limits_{0}^{T} V^{\beta, \delta}(t, X^{\beta, \delta}(t), \Phi^{\beta, \delta}(t))^2 \, dt. \] For fixed $t \in [0,T]$ by Lemma \ref{lm:p-is-stationary} $(X^{\beta, \delta}(t), \Phi^{\beta, \delta}(t))$ has uniform distribution on $[0,1] \times [0,1]$ and moving the expectation inside the integral we obtain \[ I(\pi^{\beta, \delta}) = \frac{1}{2} \int\limits_{0}^{T} \mathbb{E} \left[ V^{\beta, \delta}(t, X^{\beta, \delta}(t), \Phi^{\beta, \delta}(t))^2 \right] dt = \frac{1}{2} \int\limits_{0}^{T} \left( \int\limits_{0}^{1} \int\limits_{0}^{1} V^{\beta, \delta}(t, x, \phi)^2 \, dx \, d\phi\right) dt. \] The analogous formula is valid for $I(\pi)$ as well. Now, for fixed $\beta > 0$ we have $V^{\beta, \delta}(t,x,\phi) \xrightarrow{\delta \to 0} V^{\beta}_t(x,\phi)$ and $V^{\beta, \delta}(t,x,\phi)$ is uniformly bounded in $t, x$ and $\phi$, independently of $\delta$, which by dominated convergence implies the convergence of the integrals above as well. Thus $\lim\limits_{\delta \to 0} I(\pi^{\beta, \delta}) = I(\pi^{\beta})$. The convergence $\lim\limits_{\beta \to 0} I(\pi^{\beta}) = I(\pi)$ follows directly from properties (d) and (e) of $V^{\beta}_{t}(x, \phi)$ and dominated convergence. \end{proof} \paragraph{Construction of $V_{t}^{\beta}$.} We will construct the desired modification of $V_t(x, \phi)$ for $x \in [0,\beta]$; the construction for $x \in [1-\beta,1]$ is analogous. Fix $t \in [0,T]$. Let \[ L_t(x, \phi) = \frac{\partial V_t}{\partial x}(\beta, \phi) (x - \beta) + V_{t}(\beta, \phi). \] Consider $\beta' < \beta / 2$ to be fixed later and let $f$ be a smooth approximation of a step function which has values in $[0,1]$, is equal to $0$ on $[0, \beta - 2 \beta']$, equal to $1$ on $[\beta - \beta', \beta]$ and is increasing on $[\beta - 2\beta', \beta - \beta']$. In particular we have $f(0) = 0$, $f(\beta) = 1$ and $f'(\beta) = 0$. Let us now take \[ \widetilde{V}_t(x, \phi) = f(x)L_t(x, \phi) \] and \[ V^{\beta}_{t}(x, \phi) = \begin{cases} \widetilde{V}_t(x, \phi) \quad \mbox{for $x \in [0, \beta]$}, \\ V_{t}(x, \phi) \quad \mbox{otherwise}. \end{cases} \] We will check that $V^{\beta}_{t}(x, \phi)$ indeed satisfies the desired properties. Let us first check that the property (c) is satisfied for $x \in [0, \beta]$.
We have \begin{align*} & \int\limits_{0}^{1} \widetilde{V}_t(x, \psi) \, d\psi = \int\limits_{0}^{1} f(x)L_t(x, \psi) \, d\psi = f(x) \int\limits_{0}^{1} \left( \frac{\partial V_t}{\partial x}(\beta, \psi) (x - \beta) + V_{t}(\beta, \psi) \right) d\psi = \\ & = f(x)(x - \beta) \int\limits_{0}^{1} \frac{\partial V_t}{\partial x}(\beta, \psi) \, d\psi + f(x) \int\limits_{0}^{1} V_{t}(\beta, \psi) \, d\psi = \\ & = f(x)(x - \beta) \frac{d}{d x}\Big\vert_{x=\beta} \left( \int\limits_{0}^{1} V_t(x, \psi) \, d\psi \right) + f(x) \int\limits_{0}^{1} V_{t}(\beta, \psi) \, d\psi = 0, \end{align*} thanks to \eqref{eq:mean-zero-color}. Property (b) follows directly from $f(0) = 0$. As for (a), for $x \in [0,\beta)$ continuous differentiability of $V^{\beta}_t(x, \phi)$ follows from continuous differentiability of $f(x)$. At $x = \beta$ we have \[ \widetilde{V}_t(\beta, \phi) = f(\beta) L_t (\beta, \phi) = f(\beta) V_t(\beta, \phi) \] and $f(\beta) = 1$, so $\widetilde{V}_t(x, \phi)$ is continuous at $x=\beta$. Likewise, \[ \frac{\partial \widetilde{V}_t}{\partial x}(x, \phi) = f'(x) L_t (x, \phi) + f(x) \frac{\partial L_t}{\partial x}(x, \phi) = f'(x) L_t (x, \phi) + f(x) \frac{\partial V_t}{\partial x} (x, \phi). \] Since $f(\beta) = 1$ and $f'(\beta) = 0$, we have $\frac{\partial \widetilde{V}_t}{\partial x}(\beta, \phi) = \frac{\partial V_t}{\partial x} (\beta, \phi)$. As the functions in the formula above are continuously differentiable at $x=\beta$, $V^{\beta}_t(x, \phi)$ is continuously differentiable at $x=\beta$ as well. To see that property (d) is satisfied, we note that by continuity of $V_t(x, \phi)$ and $\frac{\partial V_t}{\partial x}(x, \phi)$ for $x \neq 0,1$ we can take $\beta'$ in the definition of $f(x)$ above to be arbitrarily small (depending on $V_t$, $\frac{\partial V_t}{\partial x}$ and $\beta$) so that on $[\beta - 2 \beta', \beta]$ the function $\widetilde{V}_t(x, \phi)$ is less than $|V_t(\beta, \phi)| + 1$ in absolute value. Since on $[0, \beta - 2\beta']$ we have $V_{t}^{\beta}(x, \phi) = 0$, the desired bound on $|V_{t}^{\beta}(x, \phi)|$ follows. Finally, to prove that property (e) holds it is enough to show that \[ \int\limits_{0}^{1} \int\limits_{0}^{\beta} |V_{t}^{\beta}(x, \phi)|^2 \, dx \, d\phi \to 0 \] as $\beta \to 0$, since $V_{t}^{\beta}(x, \phi) = V_{t}(x, \phi)$ for $x \notin [0,\beta]$. The claim follows immediately from property (d), since the integrand is bounded independently of $\beta$. \end{section} \begin{section}{The biased interchange process and stationarity}\label{sec:interchange} \paragraph{The biased interchange process.} For the sake of proving a large deviation lower bound, we will need to perturb the interchange process to obtain dynamics which typically exhibits (otherwise rare) behavior of a fixed permuton process. Let us introduce the \emph{biased interchange process}. Its configuration space $E$ consists of sequences $\eta = \left( (x_{i}, \phi_{i})\right)_{i=1}^{N}$, where as before $(x_1, \ldots, x_N)$ is a permutation of $\{1, \ldots, N\}$ and $\phi_{i}$ has $N$ possible values, $1, \ldots, N$. Here $x_{i}$ will be the \emph{position} of the particle with label $i$ and $\phi_{i}$ will be its \emph{color}. By a slight abuse of notation we will write $\eta^{-1}(x)$ to denote the label (number) of the particle at position $x$ in configuration $\eta$ (so that $\eta^{-1}(x_{i}) = i$). 
For a position $x$ we will often write $\phi_{x}$ as a shorthand for $\phi_{\eta^{-1}(x)}$ (the positions will always be denoted by $x$ or $y$ and labels by $i$, so there is no risk of ambiguity). In this way we can treat any configuration $\eta$ as a function which assigns to each site $x$ a pair $(\eta^{-1}(x), \phi_x)$, the label and the color of the particle present at $x$. The configuration at time $t$ will be denoted by $\eta^{N}_{t}$ (or simply $\eta_{t}$), and likewise by $x_{i}(\eta^{N}_{t})$ and $\phi_{i}(\eta^{N}_{t})$ we denote the position and the color of the particle number $i$ at time $t$. We will use the notation $X_{i}(\eta^{N}_{t}) = \frac{1}{N}x_{i}(\eta^{N}_{t})$, $\Phi_{i}(\eta^{N}_{t}) = \frac{1}{N} \phi_{i}(\eta^{N}_{t})$ for the rescaled positions and colors. By the same convention as above $\Phi_x(\eta^{N}_{t})$ will denote the rescaled color of the particle at site $x$ at time $t$. Let $\varepsilon = N^{1-\alpha}$, with the same $\alpha \in (1,2)$ as in \eqref{eq:unbiased-generator}. Suppose we are given functions $v, r : [0,T] \times \{1, \ldots, N\} \times \{1, \ldots, N\} \to \mathbb{R}$. The dynamics of the corresponding biased interchange process is defined by the (time-inhomogeneous) generator \begin{align}\label{eq:biased-generator} & (\widetilde{\mathcal{L}}_t f)(\eta) = \frac{1}{2} N^{\alpha} \sum\limits_{x=1}^{N-1} \big( 1 + \varepsilon \left[v(t, x, \phi_{x}(\eta)) - v(t,x+1, \phi_{x+1}(\eta))\right] \big) (f(\eta^{x, x+1}) - f(\eta)) + \\ & + \frac{1}{2} N^{\alpha} \sum\limits_{x=1}^{N} \Big[ \big( 1 + \varepsilon r(t,x, \phi_{x}(\eta)) \big) (f(\eta^{x,+}) - f(\eta)) + \big( 1 - \varepsilon r (t,x, \phi_{x}(\eta)) \big) (f(\eta^{x,-}) - f(\eta)) \Big]. \nonumber \end{align} Here $\eta^{x, x+1}$ is the configuration $\eta$ with particles at locations $x$ and $x+1$ swapped, and $\eta^{y, \pm}$ is the configuration $\eta$ with $\phi_{y}$ changed by $\pm 1$ (with the convention that $\eta^{y,+} = \eta$ if $\phi_y = N$ and likewise $\eta^{y,-} = \eta$ if $\phi_y = 1$). We will often use the abbreviated notation $v_{x}(t,\eta) = v(t,x, \phi_{x}(\eta))$ (with the convention $v_{0}(t,\eta) = v_{N+1}(t,\eta) = 0$). In other words, up to the overall factor $\frac{1}{2}N^{\alpha}$, neighboring particles make a swap at rate close to $1$, with bias proportional to the difference of their \emph{velocities} $v(t,x,\phi_{x})$, and each particle independently changes its color by $\pm 1$, also at rate close to $1$, with bias proportional to $\pm r(t,x, \phi_{x})$. The parameter $\varepsilon$ has been chosen so that we expect particles to have displacement of order $N$ at macroscopic times. Since the interchange process is a pure jump Markov process, for each particle its rescaled position $X_{i}(\eta^{N})$ and color $\Phi_{i}(\eta^{N})$ will be c\`adl\`ag paths from $[0,T]$ to $[0,1]$ and thus elements of $\mathcal{D}$. In the same way we can consider the joint trajectory $P_{i}(\eta^{N}) = (X_{i}(\eta^{N}), \Phi_{i}(\eta^{N}))$ as an element of $\mathcal{D}D = \mathcal{D}([0,T], [0,1]^2)$, the space of c\`adl\`ag paths from $[0,T]$ to $[0,1]^2$ (equipped with the Skorokhod topology). By $\mathcal{M}(\mathcal{D}D)$ we will denote the space of Borel probability measures on $\mathcal{D}D$, endowed with the weak topology, and by a slight abuse of notation the corresponding Wasserstein distance will be denoted by $d_{\mathcal{W}}$, as for $\mathcal{M}(\mathcal{D})$.
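For concreteness (and only as an illustration -- nothing in the proofs relies on it), the dynamics generated by \eqref{eq:biased-generator} can be simulated directly. The sketch below uses the standard Gillespie algorithm; the function names are our own, and the time argument of $v$ and $r$ is suppressed, which matches the piecewise-constant-in-time rates used later.
\begin{verbatim}
import numpy as np

def simulate_biased_interchange(N, T, alpha, v, r, seed=0):
    # v, r: functions (x, phi) -> float with x, phi in {1, ..., N}
    # (rates frozen in time; the rates used later are piecewise constant in t)
    rng = np.random.default_rng(seed)
    eps = N ** (1.0 - alpha)
    labels = rng.permutation(N) + 1           # uniformly random labelling
    colors = rng.integers(1, N + 1, size=N)   # i.i.d. uniform colors
    t = 0.0
    while True:
        vx = np.array([v(x + 1, colors[x]) for x in range(N)])
        rx = np.array([r(x + 1, colors[x]) for x in range(N)])
        # biased rates; nonnegative for N large enough, since eps -> 0
        swap = 0.5 * N ** alpha * (1.0 + eps * (vx[:-1] - vx[1:]))  # edges (x, x+1)
        up   = 0.5 * N ** alpha * (1.0 + eps * rx)                  # color change +1
        down = 0.5 * N ** alpha * (1.0 - eps * rx)                  # color change -1
        rates = np.concatenate([swap, up, down])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        if t > T:
            return labels, colors
        k = rng.choice(rates.size, p=rates / total)
        if k < N - 1:                         # swap the particles at sites k+1, k+2
            labels[[k, k + 1]] = labels[[k + 1, k]]
            colors[[k, k + 1]] = colors[[k + 1, k]]
        elif k < 2 * N - 1:                   # color +1, reflected at N
            x = k - (N - 1)
            colors[x] = min(colors[x] + 1, N)
        else:                                 # color -1, reflected at 1
            x = k - (2 * N - 1)
            colors[x] = max(colors[x] - 1, 1)
\end{verbatim}
Sampling a uniformly random label and following its trajectory in such a simulation gives one realization of the colored permutation process introduced next.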
If $\eta^{N}$ is the trajectory of the biased interchange process, then by analogy with the permutation process $X^{\eta^{N}}$ we can define the \emph{colored permutation process} $P^{\eta^{N}} = (X^{\eta^{N}}, \Phi^{\eta^{N}})$, obtained by choosing a particle $i$ at random and following the path $(X_{i}(\eta^{N}_{t}), \Phi_{i}(\eta^{N}_{t}))$. Thus we keep track of both the position and the color of a random particle. Since $\eta^N$ is random, the distribution $\nu^{\eta^N}$ of $P^{\eta^{N}}$, given by \[ \nu^{\eta^N} = \frac{1}{N} \sum\limits_{i=1}^{N} \delta_{P^{\eta^{N}}_{i}}, \] is a random element of $\mathcal{M}(\mathcal{D}D)$. \paragraph{Stationarity conditions.} Let us now connect the discussion of the interchange process with deterministic permuton processes and generalized solutions to Euler equations considered in Section \ref{sec:ode-part}. Recall the colored trajectory process $P^{\beta, \delta} = (X^{\beta, \delta}, \Phi^{\beta, \delta})$ defined in Section \ref{sec:ode-part}. From now on we consider $\beta \in (0, \frac{1}{4})$ and $\delta > 0$ to be fixed and we suppress them in the notation, writing $X = X^{\beta, \delta}$, $\Phi = \Phi^{\beta, \delta}$, $V(t, x, \phi) = V^{\beta, \delta}(t, x, \phi)$, $R(t, x, \phi) = R^{\beta, \delta}(t, x, \phi)$. Note that this should not be confused with the actual generalized solution to Euler equations, which was also denoted by $X$, but does not appear in this and the following sections except in Theorem \ref{thm:lower-bound-for-minimizers}. Our goal is to set up a biased interchange process so that typically trajectories of particles will behave like trajectories of the process $X$. We would also like to preserve the stationarity of the uniform distribution of colors, which will greatly facilitate parts of the argument. To find the correct rates $v(t, x, \phi)$ and $r(t, x, \phi)$ in \eqref{eq:biased-generator}, recall that by definition the trajectories of the colored trajectory process $P = (X, \Phi)$ satisfy the equation \begin{equation}\label{eq:ode-general} \begin{cases} \frac{dX}{dt}(t) = V(t, X(t), \Phi(t)) \\ \frac{d\Phi}{dt}(t) = R(t, X(t), \Phi(t)), \\ \end{cases} \end{equation} with the functions $V$ and $R$ satisfying \begin{align}\label{eq:ode-rates} \begin{cases} V(t,X, \Phi) = \frac{\partial F}{\partial \Phi}(t,X, \Phi) \\ R(t,X, \Phi) = -\frac{\partial F}{\partial X}(t,X, \Phi) \end{cases} \end{align} for $F(t,X,\Phi) = \int\limits_{0}^{\Phi} V(t,X,\psi) \, d\psi$. Note that $F(t,X,0) = 0$ and $F(t,X,1)=0$, where the latter equality follows from property (c) of $V^{\beta}_{t}(x,\phi)$ (and thus of $V = V^{\beta,\delta}$). It is clear that $v$ and $r$ should be chosen so that approximately we have $v(t,x, \phi) \approx V\left(t,\frac{x}{N} , \frac{\phi}{N}\right)$, $r(t,x, \phi) \approx R\left(t,\frac{x}{N} , \frac{\phi}{N}\right)$. To analyze the stationarity condition, consider the uniform distribution on configurations of the biased interchange process, i.e., a distribution in which the labelling of particles is a uniformly random permutation and each particle has a uniformly random color, chosen independently from $\{1, \ldots, N\}$ for each of them. We want to find a condition on the rates $v(t,x, \phi)$ and $r(t,x, \phi)$ such that this measure will be invariant for the dynamics of $\widetilde{\mathcal{L}}_t$.
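Let us first record the continuum heuristic behind the computation that follows (it is not used in the proofs). By \eqref{eq:ode-rates} the field $(V, R)$ is a Hamiltonian vector field with Hamiltonian $F$ in the variables $(X, \Phi)$, and hence, at least formally (using that $V$ is continuously differentiable in $x$, as guaranteed by property (a)), it is divergence-free: since $R(t,X,\Phi) = -\int_{0}^{\Phi} \frac{\partial V}{\partial X}(t,X,\psi) \, d\psi$, we have \[ \frac{\partial V}{\partial X}(t,X,\Phi) + \frac{\partial R}{\partial \Phi}(t,X,\Phi) = \frac{\partial V}{\partial X}(t,X,\Phi) - \frac{\partial V}{\partial X}(t,X,\Phi) = 0, \] so the flow of \eqref{eq:ode-general} preserves the Lebesgue measure on $[0,1]^2$. The stationarity of the uniform distribution of positions and colors, derived below, is the particle system counterpart of this observation.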
Note that since $V(t,X,\Phi)$, $R(t,X,\Phi)$ are piecewise-constant as functions of $t$, the dynamics induced by $\widetilde{\mathcal{L}}_t$ is time-homogeneous on each interval $[t_k, t_{k+1})$ from the definition \eqref{eq:def-of-piecewise-s} of $V$. Thus the stationarity condition for the uniform measure is that for each state (i.e., each configuration $\eta$) the sums of outgoing and incoming jump rates have to be equal. We write down this condition as follows. For any given configuration $\eta$, with particle at location $x$ having color $\phi_{x} = \phi_{x}(\eta)$, there are the following possible outgoing jumps: \begin{itemize} \item for some $x \in \{ 1, \ldots, N - 1\}$ the particles at locations $x$ and $x+1$ swap, at rate $1 + \varepsilon \left[ v (t,x, \phi_{x}) - v (t,x+1, \phi_{x+1}) \right]$ \item for some $x \in \{1, \ldots, N\}$ the particle at $x$ changes its color from $\phi_x$ to $\phi_x \pm 1$, at rate $1 \pm \varepsilon r (t,x, \phi_{x})$ \end{itemize} and incoming jumps: \begin{itemize} \item for some $x \in \{ 1, \ldots, N - 1\}$ the particles at locations $x$ and $x+1$ swap, at rate $1 + \varepsilon \left[ v (t,x, \phi_{x+1}) - v (t,x+1, \phi_{x}) \right]$ \item for some $x \in \{1, \ldots, N\}$ the particle at $x$ changes its color from $\phi_x \pm 1$ to $\phi_x$, at rate $1 \mp \varepsilon r (t,x, \phi_{x} \pm 1)$ \end{itemize} Thus the condition on the sums of jump rates is \begin{align*} & \sum\limits_{x=1}^{N-1} \left( v (t,x, \phi_{x}) - v (t,x+1, \phi_{x+1}) \right) = \\ & = \sum\limits_{x=1}^{N-1} \left( v (t,x, \phi_{x+1}) - v (t,x+1, \phi_{x}) \right) + \sum\limits_{x=1}^{N} \left( r (t,x, \phi_{x} - 1) - r (t,x, \phi_{x} + 1) \right), \end{align*} where we adopt the convention $r(t,x,0) = r(t,x,N+1) = 0$. This implies \begin{align*} & \sum\limits_{x=2}^{N-1} \big( v (t,x-1, \phi_{x}) - v (t,x+1, \phi_{x}) + r(t,x,\phi_{x} - 1) - r(t,x,\phi_{x} + 1) \big) + \\ & + v(t,N-1, \phi_{N}) + v(t,N, \phi_{N}) - v(t,1, \phi_{1}) - v(t,2, \phi_{1}) + \\ & + \left[ r(t,1,\phi_{1} - 1) - r(t,1,\phi_{1} + 1) \right] + \left[ r(t,N,\phi_{N} - 1) - r(t,N,\phi_{N} + 1) \right] = 0. \end{align*} Since we would like this equation to be satisfied for any configuration, regardless of the choice of $\phi_x$ for each $x$, we want each term in the sum and each of the boundary terms to vanish. This gives us a set of equations \begin{align}\label{eq:stationarity-eqs} \begin{cases} v(t,1, \phi) + v(t,2, \phi) = r(t,1,\phi - 1) - r(t,1,\phi + 1) \\ v (t,x+1, \phi) - v (t,x-1, \phi) = r(t,x,\phi - 1) - r(t,x,\phi + 1), \, \, \, x = 2, \ldots, N-1 \\ v(t,N-1, \phi) + v(t,N, \phi) = r(t,N,\phi + 1) - r(t,N,\phi - 1) \end{cases} \end{align} which have to be satisfied for every $\phi = 1, \ldots, N$. Let us consider the function $f(t,x, \phi)$ defined for $x \in \{0, \ldots, N+1\}$, $\phi \in \{0, \ldots, N+1\}$ by \begin{align*} f (t,x, \phi) = \begin{cases} F\left( t,\frac{x}{N}, \frac{\phi}{N+1} \right), \quad & x = 2, \ldots, N-1, \\ 0, \quad & x = 0, 1, N, N + 1, \end{cases} \end{align*} where $F$ is the function appearing in \eqref{eq:ode-rates}. It is straightforward to check that the rates given by \begin{equation}\label{eq:rates-s-r} \begin{cases} v(t,x, \phi) = \frac{N}{2} \left( f(t,x, \phi + 1) - f(t,x, \phi - 1) \right) \\ r(t,x, \phi) = \frac{N}{2} \left( f(t,x-1, \phi) - f(t,x+1,\phi) \right) \end{cases} \end{equation} solve the equations for stationarity, given by \eqref{eq:stationarity-eqs}, for any $x, \phi \in \{1, \ldots, N\}$.
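This verification is elementary but slightly tedious, and it can also be confirmed numerically. The following sketch is purely illustrative (the test function $F$ below is an arbitrary smooth choice vanishing at $\Phi = 0, 1$, and the grid size is ours); it checks \eqref{eq:stationarity-eqs} directly for the rates \eqref{eq:rates-s-r}.
\begin{verbatim}
import numpy as np

def F(X, Phi):
    # arbitrary smooth test function with F(X, 0) = F(X, 1) = 0
    return np.sin(np.pi * Phi) * np.cos(2.0 * np.pi * X)

N = 40
f = np.zeros((N + 2, N + 2))           # f[x, phi] for x, phi = 0, ..., N+1
for x in range(2, N):                   # f vanishes for x = 0, 1, N, N+1
    for phi in range(N + 2):
        f[x, phi] = F(x / N, phi / (N + 1))

v = np.zeros((N + 2, N + 2))            # filled for x, phi = 1, ..., N;
r = np.zeros((N + 2, N + 2))            # zero entries encode r(x, 0) = r(x, N+1) = 0
for x in range(1, N + 1):
    for phi in range(1, N + 1):
        v[x, phi] = 0.5 * N * (f[x, phi + 1] - f[x, phi - 1])
        r[x, phi] = 0.5 * N * (f[x - 1, phi] - f[x + 1, phi])

err = 0.0
for phi in range(1, N + 1):
    err = max(err, abs(v[1, phi] + v[2, phi] - (r[1, phi - 1] - r[1, phi + 1])))
    err = max(err, abs(v[N - 1, phi] + v[N, phi] - (r[N, phi + 1] - r[N, phi - 1])))
    for x in range(2, N):
        err = max(err, abs(v[x + 1, phi] - v[x - 1, phi]
                           - (r[x, phi - 1] - r[x, phi + 1])))
print(err)   # numerically zero (of the order of rounding errors)
\end{verbatim}
The same check passes for any $F$ vanishing at $\Phi = 0$ and $\Phi = 1$.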
Note that with this choice of rates we have for any $x, \phi \in \{1, \ldots, N\}$ \begin{equation}\label{eq:s-r-approx} \begin{cases} v(t,x, \phi) = V \left(t, \frac{x}{N}, \frac{\phi}{N} \right) + O\left( \frac{1}{N} \right) \\ r(t,x, \phi) = R \left(t, \frac{x}{N}, \frac{\phi}{N} \right) + O\left( \frac{1}{N} \right) \end{cases} \end{equation} uniformly in $x$, $\phi$ and $t$, because of smoothness of $F(t, X, \Phi)$ in the $X$ and $\Phi$ variables. In particular the rates $v$ and $r$ are uniformly bounded for all $N$. From now on we will always assume that the biased interchange process has rates $v(t,x, \phi)$ and $r(t,x, \phi)$ given by \eqref{eq:rates-s-r} and is started from the uniform distribution (which by the discussion above is stationary). The properties of $v$ and $r$ which will be relevant to our analysis are that they are bounded, approximately equal to the smooth functions $V$, $R$, that the corresponding dynamics has the uniform measure as the stationary distribution and, crucially, that in stationarity the velocities are independent and have mean zero. This last property, which should be thought of as the particle system analog of Lemma \ref{lm:velocity-mean-zero}, is conveniently summarized in the following proposition. \begin{proposition}\label{prop:speeds} Let $\phi_{x}$, $x=1, \ldots, N$, be independent and uniformly distributed on $\{1, \ldots, N\}$. Then for each $t \in [0,T]$ the random variables $v(t,x, \phi_x)$, $x=1, \ldots, N$, are independent and for each $x$ we have \[ \mathbb{E} \, v(t,x, \phi_x) = 0. \] \end{proposition} \begin{proof} Under the uniform distribution of $\phi_x$ we have \[ \mathbb{E} \, v(t,x, \phi_x) = \frac{1}{N}\sum\limits_{\phi=1}^{N} v(t,x,\phi), \] which by definition of $v$ is equal to \begin{align*} \frac{1}{N}\sum\limits_{\phi=1}^{N} \frac{N}{2} \left( f(t,x, \phi + 1) - f(t,x, \phi - 1) \right) = \frac{1}{2} \left( F\left(t, \frac{x}{N}, 1\right) - F\left(t,\frac{x}{N}, 0\right)\right). \end{align*} Recalling the definition of $F$ below \eqref{eq:ode-rates}, the right-hand side is equal to $0$. \end{proof} \end{section} \begin{section}{Law of large numbers}\label{sec:lln} Throughout this section $\widetilde{\mathbb{P}}^{N}$ will denote the probability law of the biased interchange process on $N$ particles, started in stationarity, associated to the equation \eqref{eq:ode-general} (with all the assumptions from the previous section). To simplify notation we will usually write $\eta = \eta^{N}$. Whenever we use $o(\cdot)$ or $O(\cdot)$ asymptotic notation the implicit constants will depend only on the rates $v$, $r$ and possibly on $T$. Let $P = (X, \Phi)$ be the colored trajectory process associated to the equation \eqref{eq:ode-general} and let $P^{\eta^{N}}$ be the colored permutation process defined in Section \ref{sec:interchange}. Let us denote the distributions of $P$ and $P^{\eta^{N}}$ respectively by $\nu$ and $\nu^{\eta^{N}}$, with $\nu, \nu^{\eta^{N}} \in \mathcal{M}(\mathcal{D}D)$. We will prove the following theorem. \begin{theorem}\label{th:lln} Let $\eta^{N}$ be the trajectory of the biased interchange process. The measures $\nu^{\eta^{N}}$ converge in distribution, as random elements of $\mathcal{M}(\mathcal{D}D)$, to the deterministic measure $\nu$ as $N \to \infty$. \end{theorem} In other words, the random processes $P^{\eta^{N}}$ converge in distribution to the process $P$ whose distribution is deterministic.
The theorem above can be thought of as a law of large numbers for random permuton processes and it will be useful for establishing the large deviation lower bound. \begin{remark}\label{rm:lln} Since the limiting measure $\nu$ is deterministic and supported on continuous trajectories, Theorem \ref{th:lln} implies that the convergence $\nu^{\eta^N} \to \nu$ in fact holds in a stronger sense, namely in probability when $\mathcal{M}(\mathcal{D}D)$ is endowed with the Wasserstein distance $d_{\mathcal{W}}^{sup}$ associated to the supremum norm on $\mathcal{D}D$. \end{remark} To prove Theorem \ref{th:lln}, we will show that typically trajectories of most particles approximately follow the same ODE \eqref{eq:ode-general} as trajectories of the limiting process. In other words, if a given particle is at site $x$, it should locally move according to its velocity $v(t, x, \phi_{x})$. However, because of swaps between particles the actual jump rates of the particle will be influenced by velocities of its neighbors. Nevertheless, since velocity at each site has mean $0$ in stationarity, we will be able to show that the contribution from velocities of the particle's neighbors cancels out when averaged over time -- this will be the content of the \emph{one block estimate} proved in the next section. Note that to prove that the random processes converge indeed to a deterministic process, it is not enough to look only at single path distributions, as explained in Section \ref{sec:stationarity}. Nevertheless, we will show that in the interchange process typically any two particles (in fact almost all of them) behave like independent random walks, which by Lemma \ref{lm:deterministic} will be enough to establish a deterministic limit. Throughout this and the following sections we will make extensive use of martingales associated to Markov processes (see \cite{kipnis} for a comprehensive treatment of such techniques applied to interacting particle systems). For any Markov process with generator $\mathcal{L}$ and a bounded function $F : E \to \mathbb{R}$, where $E$ is the configuration space of the process, the following processes are mean zero martingales (\cite[Lemma A1.5.1]{kipnis}) \begin{align} & M_t = F(\eta_t) - F(\eta_0) - \int\limits_{0}^{t} \mathcal{L} F(\eta_s) \, ds, \label{eq:martingale1}\\ & N_t = M_{t}^{2} - \int\limits_{0}^{t} \left(\mathcal{L} F(\eta_s)^2 - 2F(\eta_s)\mathcal{L} F(\eta_s)\right) ds \label{eq:martingale2}. \end{align} Furthermore, for any $F$ as above the following process is a mean one positive martingale (see discussion following \cite[Lemma A1.7.1]{kipnis}) \begin{equation}\label{eq:martingale-exp} \mathbb{M}_t = \exp\left\{ F(\eta_t) - F(\eta_0) - \int\limits_{0}^{t} e^{-F(\eta_s)} \mathcal{L} e^{F(\eta_s)} \, ds \right\}. \end{equation} In the following sections we will also consider the case when $F$ is not necessarily bounded, in which case $M_t$, $N_t$, $\mathbb{M}_t$ are only local martingales. Our first goal is to prove that with high probability almost all particles move according to their local velocity $v(t,x_{i}, \phi_i)$. Recall that \[ X_{i}(\eta_{t}) = \frac{1}{N} x_{i}(\eta_{t}), \quad \Phi_{i}(\eta_{t}) = \frac{1}{N} \phi_{i}(\eta_{t}) \] are respectively the rescaled position and color of the particle with label $i$. 
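Before turning to the proof, let us record an elementary numerical illustration of the martingale identity \eqref{eq:martingale1}; it plays no role in the arguments. For a rate-$\lambda$ Poisson process one has $\mathcal{L}F(n) = \lambda\left(F(n+1) - F(n)\right)$, and the sketch below checks by Monte Carlo that the resulting $M_T$ has mean close to zero. The choice of process, the function $F(n) = n^2$ and all numerical values are our own.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
lam, T = 2.0, 3.0
F = lambda n: n ** 2
samples = []
for _ in range(100_000):
    t, n, integral = 0.0, 0, 0.0
    while True:
        w = rng.exponential(1.0 / lam)
        if t + w > T:
            integral += (T - t) * lam * (F(n + 1) - F(n))  # L F = lam*(F(n+1)-F(n))
            break
        integral += w * lam * (F(n + 1) - F(n))
        t, n = t + w, n + 1
    samples.append(F(n) - F(0) - integral)                  # M_T as in (eq:martingale1)
print(np.mean(samples))   # close to 0, up to Monte Carlo error
\end{verbatim}
The empirical mean is small compared with the typical size of $M_T$, in line with $\mathbb{E} M_T = 0$.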
Our first step is to prove the following \begin{proposition}\label{prop:second-moment} For any fixed $t \in [0,T]$ and $\varepsilon > 0$ we have in the biased interchange process \begin{align*} & \widetilde{\mathbb{P}}^{N} \left( \frac{1}{N} \sum\limits_{i=1}^{N}\left| X_{i}(\eta_{t}) - X_{i}(\eta_{0}) - \int\limits_{0}^{t} v(s,x_{i}(\eta_{s}), \phi_{i}(\eta_{s})) \, ds \right| > \varepsilon \right)\to 0 \\ & \widetilde{\mathbb{P}}^{N} \left( \frac{1}{N} \sum\limits_{i=1}^{N} \left| \Phi_{i}(\eta_{t}) - \Phi_{i}(\eta_{0}) - \int\limits_{0}^{t} r(s,x_{i}(\eta_{s}), \phi_{i}(\eta_{s})) \, ds \right| > \varepsilon \right )\to 0 \end{align*} as $N \to \infty$. \end{proposition} As a starting point let us rewrite $X_i(\eta_t)$ in a more useful form. Recall from \eqref{eq:biased-generator} that $\widetilde{\mathcal{L}}$ denotes the generator of the biased interchange process. By the formula \eqref{eq:martingale1} applied to $F(\eta_s) = X_{i}(\eta_s)$ we have \[ X_{i}(\eta_{t}) - X_{i}(\eta_{0}) = M_{t}^{i} + \int\limits_{0}^{t} \widetilde{\mathcal{L}} X_{i}(\eta_{s}) \, ds, \] where $M_{t}^{i}$ is a mean zero martingale with respect to $\widetilde{\mathbb{P}}^{N}$. Recall that $v_x(t,\eta) = v(t,x, \phi_{x}(\eta))$ denotes the velocity of the particle at site $x$ in configuration $\eta$ at time $t$. For simplicity we will also write $v_{x_{i}}(t,\eta) = v(t,x_{i}(\eta), \phi_{i}(\eta))$ for the velocity of the particle with label $i$. We have \begin{align*} & \widetilde{\mathcal{L}} X_{i}(\eta_{s}) = \frac{1}{N} \widetilde{\mathcal{L}} (x_{i}(\eta_{s})) = \frac{1}{2} N^{\alpha-1} \sum\limits_{x=1}^{N-1} \left( 1 + \varepsilon \left[v_{x}(s,\eta_{s}) - v_{x+1}(s,\eta_{s})\right] \right) (x_{i}(\eta^{x, x+1}_{s}) - x_{i}(\eta_{s})) = \\ & = \frac{1}{2} N^{\alpha-1}\varepsilon \Big[ - \left[ v_{x_{i}-1}(s,\eta_{s}) - v_{x_{i}}(s,\eta_{s}) \right] + \left[ v_{x_{i}}(s,\eta_{s}) - v_{x_{i}+1}(s,\eta_{s}) \right]\Big] = \\ & = \frac{1}{2} \left(2 v_{x_{i}}(s,\eta_{s}) - v_{x_{i}-1}(s,\eta_{s}) - v_{x_{i}+1}(s,\eta_{s})\right), \end{align*} since the position of the particle $i$ changes by $\pm 1$ depending on whether it makes a swap with its left or right neighbor. Thus we obtain \begin{align*} & X_{i}(\eta_{t}) - X_{i}(\eta_{0}) = M_{t}^{i} + \int\limits_{0}^{t} v_{x_{i}}(s,\eta_{s}) \, ds - \frac{1}{2} \int\limits_{0}^{t} \left( v_{x_{i}-1}(s,\eta_{s}) + v_{x_{i}+1}(s,\eta_{s}) \right) ds, \end{align*} or in other words \begin{align}\label{eq:xt-x0} & X_{i}(\eta_{t}) - X_{i}(\eta_{0}) - \int\limits_{0}^{t} v_{x_{i}}(s,\eta_{s}) \, ds = M_{t}^{i} - \frac{1}{2} \int\limits_{0}^{t} \left( v_{x_{i}-1}(s,\eta_{s}) + v_{x_{i}+1}(s,\eta_{s}) \right) \, ds. \end{align} For the sake of proving the first part of Proposition \ref{prop:second-moment} it will be enough (by the Cauchy--Schwarz and Markov inequalities) to show that \begin{equation}\label{eq:expectation-to-zero} \frac{1}{N} \sum\limits_{i=1}^{N} \mathbb{E} \Big( X_{i}(\eta_{t}) - X_{i}(\eta_{0}) - \int\limits_{0}^{t} v_{x_{i}}(s,\eta_{s}) \, ds \Big)^2 \to 0 \end{equation} as $N \to \infty$. First we prove that for most particles the martingale term $M_{t}^{i}$ will be small with high probability. Let us define \[ Q_{s}^{i} = \widetilde{\mathcal{L}} X_{i}(\eta_{s})^2 - 2 X_{i}(\eta_{s}) \widetilde{\mathcal{L}} X_{i}(\eta_{s}). \] By the martingale formula \eqref{eq:martingale2} we have that \begin{equation}\label{eq:quadratic} N_{t}^{i} = (M_{t}^{i})^2 - \int\limits_{0}^{t} Q_{s}^{i} \, ds \end{equation} is a mean zero martingale.
A quick calculation gives \begin{align*} & \widetilde{\mathcal{L}} X_{i}(\eta_{s})^2 = \\ & = \frac{1}{2} \Big[ \left(v_{x_{i}-1}(s,\eta_{s}) - v_{x_{i}}(s,\eta_{s})\right) \left( \frac{-2 x_{i}(\eta_{s})+1}{N} \right) + \left(v_{x_{i}}(s,\eta_{s}) - v_{x_{i}+1}(s,\eta_{s})\right) \left( \frac{2 x_{i}(\eta_{s})+1}{N} \right)\Big] + N^{\alpha - 2} \end{align*} and \begin{align*} & 2 X_{i}(\eta_{s}) \widetilde{\mathcal{L}} X_{i}(\eta_{s}) = \frac{x_{i}(\eta_{s})}{N} \left( 2 v_{x_{i}}(s,\eta_{s}) - v_{x_{i}-1}(s,\eta_{s}) - v_{x_{i}+1}(s,\eta_{s}) \right), \end{align*} so these two quantities are the same up to terms of order $o(1)$. Thus $Q_{s}^{i} = o(1)$ (uniformly in $s$ and $i$) and, since $\mathbb{E} N_{t}^{i} = 0$, we obtain from \eqref{eq:quadratic} that $\mathbb{E} (M_{t}^{i})^2 = o(1)$ as well. Incidentally, a similar calculation (only simpler, since it does not involve correlations between adjacent particles) and the martingale argument gives us that for $\Phi_{i}(\eta_{t}) = \frac{1}{N}\phi_{i}(\eta_{t})$ we have \[ \Phi_{i}(\eta_{t}) - \Phi_{i}(\eta_{0}) - \int\limits_{0}^{t} r(s,x_{i}(\eta_{s}), \phi_{i}(\eta_{s})) \, ds = o(1) \] for any fixed particle $i$. This proves the second part of Proposition \ref{prop:second-moment}. Recalling \eqref{eq:xt-x0} and \eqref{eq:expectation-to-zero}, to finish the proof of the first part of Proposition \ref{prop:second-moment} we only need to show that \[ \frac{1}{N} \sum\limits_{i=1}^{N} \mathbb{E} \left(Y_{i}^{t}\right)^2 \to 0 \] as $N \to \infty$, where \[ Y_{i}^{t} = \int\limits_{0}^{t} \left( v_{x_{i}-1}(s,\eta_{s}) + v_{x_{i}+1}(s,\eta_{s}) \right) ds. \] Recall from \eqref{eq:def-of-piecewise-s} that $V^{\beta,\delta}(s,x,\phi)$ was defined in terms of a partition $0 = t_0 < t_1 < \ldots < t_M = T$. We would like to take advantage of the fact that on each interval the dynamics of the biased interchange process is time-homogeneous. Suppose that $t \in [t_l, t_{l+1})$ for some $l \leq M - 1$ and let us write \[ Y_{i}^{t} = \sum\limits_{k=0}^{l-1} \int\limits_{t_k}^{t_{k+1}} \left( v_{x_{i}-1}(s,\eta_{s}) + v_{x_{i}+1}(s,\eta_{s}) \right) ds + \int\limits_{t_l}^{t} \left( v_{x_{i}-1}(s,\eta_{s}) + v_{x_{i}+1}(s,\eta_{s}) \right) ds. \] For any $t \geq 0$ let \[ Y_{i}^{t,k} = \int\limits_{t_k}^{t_k + t} \left( v_{x_{i}-1}(s,\eta_{s}) + v_{x_{i}+1}(s,\eta_{s}) \right) ds. \] Since $M$ is fixed, it is enough to show that for any fixed $k \leq M - 1$ and $t \in [0, t_{k+1} - t_{k}]$ we have \[ \frac{1}{N} \sum\limits_{i=1}^{N} \mathbb{E} \left(Y_{i}^{t,k}\right)^2 \to 0 \] as $N \to \infty$. To keep the notation simple we will prove the desired statement just for $k=0$, with the general case being exactly analogous. Recall that $t_0 = 0$. By definition of the piecewise-constant in time approximation of $V^{\beta,\delta}$, for $s \in [0, t_{1})$ we have $v_{x}(s, \eta_s) = v_{x}(0, \eta_s)$. Let us define $v_{x}(\eta) = v_{x}(0, \eta)$. Fix any $t \in [0, t_{1}] $ and let us look at \[ \left(Y_{i}^{t,0}\right)^2 = \left( \int\limits_{0}^{t} \left( v_{x_{i}-1}(\eta_{s}) + v_{x_{i}+1}(\eta_{s}) \right) ds \right)^2. \] We will have four cross-terms here, it is enough to show that each of them is small in expectation. The argument will be similar in all cases, so we will only present the proof for one of them. 
Let us focus on \begin{align*} & \mathbb{E} \left[\left( \int\limits_{0}^{t} v_{x_{i}-1}(\eta_{s}) \, ds \right) \left( \int\limits_{0}^{t} v_{x_{i}-1}(\eta_{s}) \, ds \right) \right] = \mathbb{E} \int\limits_{0}^{t} \int\limits_{0}^{t} v_{x_{i}-1}(\eta_{u_{1}}) v_{x_{i}-1}(\eta_{u_{2}}) \, du_{1} \, du_{2}. \end{align*} For each particle $i$ we are looking at the correlation of the velocity of its left neighbor at time $u_1$ with the velocity of its left neighbor at time $u_{2}$. By averaging over particles $i = 1, \ldots, N$ and using the symmetry between $u_1$ and $u_2$ we can write the contribution to the second moment of $Y_{i}^{t,0}$ as \begin{align*} & \frac{2}{N} \sum\limits_{i=1}^{N} \mathbb{E} \int\limits_{0}^{t} \int\limits_{u_{1}}^{t} v_{x_{i}-1}(\eta_{u_{1}}) v_{x_{i}-1}(\eta_{u_{2}}) \, du_{2} \, du_{1} = 2 \int\limits_{0}^{t} \, du_{1} \left( \frac{1}{N} \sum\limits_{i=1}^{N} \mathbb{E} \int\limits_{u_{1}}^{t} v_{x_{i}-1}(\eta_{u_{1}}) v_{x_{i}-1}(\eta_{u_{2}}) \, du_{2} \right). \end{align*} Since the rates $v$ are bounded, it is enough to show that for each fixed $u_{1} \in [0,t]$ the expression inside the bracket is close to $0$ as $N \to \infty$. Let us look at \[ \frac{1}{N} \sum\limits_{i=1}^{N} \mathbb{E} \int\limits_{u_{1}}^{t} v_{x_{i}-1}(\eta_{u_{1}}) v_{x_{i}-1}(\eta_{u_{2}}) \, du_{2}. \] Since the average here depends only on the configuration at time $u_{1}$ and its evolution from that point on (and not otherwise on the trajectory of the process before time $u_{1}$), by stationarity of the biased interchange process it will be the same as \begin{equation}\label{eq:decorrelation} \frac{1}{N} \sum\limits_{i=1}^{N} \mathbb{E} \int\limits_{0}^{t - u_1} v_{x_{i}-1}(\eta_{0}) v_{x_{i}-1}(\eta_{s}) \, ds , \end{equation} since the dynamics of the process is time-homogeneous on $[0, t_{1})$. Thus we have to prove that for a random particle the velocity of its initial left neighbor is uncorrelated (when averaged over time) with the velocity of its current left neighbor. Let us introduce the following setup -- we can rewrite the average above in terms of a sum over sites (for $y = x_{i}(\eta_{s})$) instead over particles \begin{equation}\label{eq:average-sites} \frac{1}{N} \sum\limits_{y=1}^{N} \mathbb{E} \int\limits_{0}^{t - u_1} v_{x_{\eta^{-1}_{s}(y)}(\eta_{0})-1}(\eta_{0}) v_{y-1}(\eta_{s}) \, ds \end{equation} To analyze this average we introduce the following extension of the biased interchange process. Consider the extended configuration space $\widetilde{E}$ consisting of sequences $\left( (x_i, \phi_i, L_i) \right)_{i=1}^{N}$, with $L_i \in \{1, \ldots, N\}$. Here each particle, in addition to its color $\phi_{i}$, also has an additional color $L_{i}$ in which we keep information about the velocity of its left neighbor at time $0$, that is \[ L_{i} = v_{x_{i}(\eta_{0})-1}(\eta_{0}). \] The dynamics is given by the same generator \eqref{eq:biased-generator} as before, i.e., labels (together with their corresponding colors $\phi_i$ and $L_i$) are exchanged by swaps of adjacent particles, each $\phi_i$ has its own evolution and $L_i$ does not evolve. For a site $x$ let $L_{x}(\eta)$ be the additional color at site $x$ in configuration $\eta$, i.e., $L_x(\eta) = L_{\eta^{-1}(x)}$. We can now treat $\eta$ as a function which assigns to each site $x$ a triple $(\eta^{-1}(x), \phi_{x}, L_{x})$ or simply a pair $(\phi_{x}, L_{x})$, since we are not interested in particles' labels at this point, only in the distribution of colors. 
In this setup the average \eqref{eq:average-sites} can be written as \begin{equation}\label{eq:time-average-with-f} \frac{1}{N} \sum\limits_{y=1}^{N} \mathbb{E} \int\limits_{0}^{t - u_1} f_{y}(\eta_s) \, ds, \end{equation} where $f_{y}(\eta) = L_{y}(\eta) v_{y-1}(\eta)$. Let \[ \Lambda_{x, l} = \{x-l, x-l + 1, \ldots, x+l\}, \] denote a box of size $l$ around $x$ (with the convention that the box is truncated if the endpoints $x-l$ or $x+l$ exceed $1$ or $N$, but this will not influence the argument in any substantial way) and let $\widehat{\mu}_{x,l}^{\eta}$ be the empirical distribution of colors in $\Lambda_{x,l}$ in configuration $\eta$, given for any $(L, \phi)$ by \[ \widehat{\mu}_{x, l}^{\eta} \left(L, \phi \right) = \frac{1}{|\Lambda_{x,l}|} \# \{ z \in \Lambda_{x,l} \, | \, \left(L_z(\eta), \phi_z(\eta) \right) = (L, \phi )\}. \] Consider the associated i.i.d. distribution on configurations restricted to $\Lambda_{x,l}$, given by \[ \mu_{x, l}^{\eta} \left((L_y, \phi_y)_{y=x-l}^{x+l} \right) = \prod\limits_{y=x-l}^{x+l} \widehat{\mu}_{x, l}^{\eta} \left(L_y, \phi_y \right). \] In other words, under the measure $\mu_{x, l}^{\eta}$ the probability of seeing a color pair $(L, \phi)$ at site $y \in \Lambda_{x,l}$ is proportional to the number of sites in $\Lambda_{x,l}$ with the color pair $(L, \phi)$, independently for each site. The \emph{superexponential one block estimate} says that on an event of high probability we can replace $f_{y}(\eta_{s})$ in the time average \eqref{eq:time-average-with-f} by its average $\mathbb{E}_{\mu_{y,l}^{\eta_{s}}} (f)$ with respect to the local i.i.d. distribution over a sufficiently large box. In other words, due to local mixing the distribution of colors in a microscopic box can be approximated by an i.i.d. distribution for large $l$. \begin{lemma}\label{lm:one-block-superexponential-biased} Let $U_{x,l}(\eta) = | f_{x}(\eta) - \mathbb{E}_{\mu_{x,l}^{\eta}} (f) |$. For any $t \in [0,t_{1}]$ and $\delta > 0$ we have \[ \limsup\limits_{l \to \infty} \limsup\limits_{N \to \infty} N^{-\gamma} \log \widetilde{\mathbb{P}}^{N} \left( \int\limits_{0}^{t} \frac{1}{N} \sum\limits_{x=1}^{N} U_{x,l}(\eta_{s}) \, ds > \delta \right) = - \infty, \] where $\gamma = 3-\alpha$. \end{lemma} The lemma is proved in the next section. Let us see how it enables us to finish the proof of Proposition \ref{prop:second-moment}. By the one block estimate, in \eqref{eq:time-average-with-f} we can replace \[ \frac{1}{N} \sum\limits_{y=1}^{N} \int\limits_{0}^{t - u_1} f_{y} (\eta_s) \, ds \] by \begin{equation}\label{eq:time-average-after-one-block} \frac{1}{N} \sum\limits_{y=1}^{N} \int\limits_{0}^{t - u_1} \mathbb{E}_{\mu_{y,l}^{\eta_{s}}} f_{y} (\eta_s) \, ds, \end{equation} with the difference going to $0$ in expectation as first $N \to \infty$ and then $l \to \infty$, so we only need to show that the latter expression goes to $0$ in the same limit. Observe that in $f_{y}(\eta) = L_y(\eta)v_{y-1}(\eta) = L_y(\eta)v(y-1, \phi_{y-1}(\eta))$ the colors $\phi_{y-1}$ and $L_y$ depend on different sites, so they are independent under $\mu_{y,l}^{\eta_{s}}$, since the measure is product. 
Thus in the average above we can simply write \[ \mathbb{E}_{\mu_{y,l}^{\eta_{s}}} (f_{y}) = \mathbb{E}_{\sigma \sim \mu_{y,l}^{\eta_{s}}} \left[ L_y(\sigma)v_{y-1}(\sigma) \right] = \left( \mathbb{E}_{\sigma \sim \mu_{y,l}^{\eta_{s}}} L_{y}(\sigma) \right) \left( \mathbb{E}_{\sigma \sim \mu_{y,l}^{\eta_{s}}} v_{y-1}(\sigma) \right), \] where by a slight abuse of notation we have denoted by $\sigma$ the local configuration of colors in a box $\Lambda_{y,l}$ and considered $L_{y}$, $v_{y-1}$ as functions of $\sigma$. The average \eqref{eq:time-average-after-one-block} now becomes \[ \frac{1}{N} \sum\limits_{y=1}^{N} \int\limits_{0}^{t - u_1} \left( \mathbb{E}_{\sigma \sim \mu_{y,l}^{\eta_{s}}} L_{y}(\sigma) \right) \left( \mathbb{E}_{\sigma \sim \mu_{y,l}^{\eta_{s}}} v_{y-1}(\sigma) \right) ds. \] Since the distribution of $\eta_{s}$ in the biased interchange process without the additional colors $L_i$ is stationary, the distribution of the average $\mathbb{E}_{\sigma \sim \mu_{y,l}^{\eta_{s}}} v_{y-1}(\sigma)$ does not depend on $s$. So we only need to show that $\mathbb{E}_{\sigma \sim \mu_{y,l}^{\eta_{0}}} v_{y-1}(\sigma)$ is small, since $L_{y}$ is bounded. Recall that in stationarity $\phi_{y}$ has uniform distribution, so for any $y$ the expectation of $v_{y-1}(\sigma) = v(0, y-1, \phi_{y-1}(\sigma))$ with respect to $\mu_{y,l}^{\eta_{0}}$ is simply equal to \[ \frac{1}{2l+1} \sum\limits_{j=1}^{2l+1} v(0, y-1, \phi_{j}), \] where $\phi_{j}$ are independent and uniformly distributed on $\{1, \ldots, N\}$. As for each $x$ the random variables $v(0, x, \phi_{j})$ are independent, bounded and have mean $0$ (see Proposition \ref{prop:speeds}), an easy application of Hoeffding's inequality gives that for fixed $y$ the sum above goes to $0$ in probability as $l \to \infty$ (if $|v| \leq C$, the probability that the average exceeds $\varepsilon$ in absolute value is at most $2 e^{-(2l+1)\varepsilon^{2}/(2C^{2})}$). This finishes the proof of Proposition \ref{prop:second-moment}. We can now prove the law of large numbers. \begin{proof}[Proof of Theorem \ref{th:lln}] Consider the random particle process $\bar{P}^{N} = (\bar{X}^{N}, \bar{\Phi}^{N})$, obtained by first sampling $\eta = \eta^N$ and then following the trajectory $P_{i}(\eta_t) = (X_i(\eta_t), \Phi_{i}(\eta_t))$ of a randomly chosen particle $i$. We will first show that the (deterministic) distribution $\bar{\nu}^{N}$ of $\bar{P}^{N}$ converges to $\nu$, the distribution of $P$ (in the metric $d_{\mathcal{W}}^{sup}$). Let us start by proving that the estimate from Proposition \ref{prop:second-moment} holds not only at each time $t$, but also with the supremum over all times $t \leq T$ under the sum over particles. Consider the process $(A^{N}, B^{N})$ defined as \begin{align*} & A_{t}^{N} = X_{i}(\eta_{t}) - X_{i}(\eta_{0}) - \int\limits_{0}^{t} v(s, x_{i}(\eta_{s}), \phi_{i}(\eta_{s})) \, ds, \\ & B_{t}^{N} = \Phi_{i}(\eta_{t}) - \Phi_{i}(\eta_{0}) - \int\limits_{0}^{t} r(s, x_{i}(\eta_{s}), \phi_{i}(\eta_{s})) \, ds, \end{align*} where $i$ is a random particle and $\eta = \eta^{N}$ comes from the biased interchange process. Proposition \ref{prop:second-moment} implies that all finite-dimensional marginals of $(A^{N},B^{N})$ converge to $0$. To obtain convergence to $0$ for the whole process in the supremum norm we only need to check tightness in the Skorokhod topology (which will imply convergence in the supremum norm, since the limiting process is continuous). We will use the following stopping time criterion (\cite[Proposition 4.1.6]{kipnis}).
Let $Y^{N}$ be a family of stochastic processes with sample paths in $\mathcal{D}D$ such that for each time $t \in [0,T]$ the marginal distribution of $Y^{N}_{t}$ is tight. If for every $\varepsilon > 0$ we have \begin{equation}\label{eq:tightness-aldous} \lim_{\gamma \to 0} \limsup_{N \to \infty} \, \sup_{\substack{\tau \\ \theta \leq \gamma}} \mathbb{P}\left( \norm{Y^{N}_{\tau + \theta} - Y^{N}_{\tau}} > \varepsilon\right) = 0, \end{equation} where the supremum is over all stopping times $\tau$ bounded by $T$, then the family $Y^{N}$ is tight. Here $\norm{\cdot}$ denotes the Euclidean distance on $[0,1]^2$ and for simplicity we write $\tau + \theta$ instead of $(\tau + \theta) \wedge T$. Let $\tau$ be any stopping time bounded by $T$. We have from formula \eqref{eq:xt-x0} \[ A_{\tau + \theta}^{N} - A_{\tau}^{N} = M^{i}_{\tau + \theta} - M^{i}_{\tau} - \frac{1}{2} \int\limits_{\tau}^{\tau + \theta} \left[ v(s, x_{i}(\eta_{s})-1, \phi_{x_{i}(\eta_{s}) - 1}(\eta_{s})) + v(s, x_{i}(\eta_{s})+1, \phi_{x_{i}(\eta_{s}) + 1}(\eta_{s})) \right] ds. \] Since $v(\cdot, \cdot,\cdot)$ is bounded, the integral is bounded by $C \theta$ for some constant $C > 0$, regardless of $\tau$, so it goes to $0$ as $\theta \to 0$ (deterministically and for every $i$). Thus it only remains to bound the martingale term. As $\tau$ is a stopping time, by formula \eqref{eq:quadratic} we have for each $i$ \[ \mathbb{E} \left[ \left(M^{i}_{\tau + \theta}\right)^2 - \left(M^{i}_{\tau}\right)^2\right] = \mathbb{E} \int\limits_{\tau}^{\tau + \theta} Q_{s}^{i} \, ds. \] As in the calculation of $\mathbb{E} (M^{i}_{t})^2$ following \eqref{eq:quadratic} we have that for fixed $\theta$ the right-hand side is $o(1)$ as $N \to \infty$. Since $M^{i}_{t}$ is bounded, we obtain $ \mathbb{E} \left| M^{i}_{\tau + \theta} - M^{i}_{\tau}\right| \to 0$ as $N \to \infty$, for any $\theta$ and $i$ (independently of $\tau$). The calculation for $B^{N}$ is analogous. This shows that the family $(A^{N}, B^{N})$ satisfies the tightness criterion \eqref{eq:tightness-aldous}. In particular it converges to $0$ in the supremum norm as $N \to \infty$. Thus for any $\varepsilon > 0$ we have \begin{align} & \widetilde{\mathbb{P}}^{N} \left( \frac{1}{N} \sum\limits_{i=1}^{N} \sup_{0 \leq t \leq T} \left| X_{i}(\eta_{t}) - X_{i}(\eta_{0}) - \int\limits_{0}^{t} v(s, x_{i}(\eta_{s}), \phi_{i}(\eta_{s})) \, ds \right| > \varepsilon \right)\to 0,\label{eq:sup-convergence1} \\ & \widetilde{\mathbb{P}}^{N} \left( \frac{1}{N} \sum\limits_{i=1}^{N} \sup_{0 \leq t \leq T} \left| \Phi_{i}(\eta_{t}) - \Phi_{i}(\eta_{0}) - \int\limits_{0}^{t} r(s,x_{i}(\eta_{s}), \phi_{i}(\eta_{s})) \, ds \right| > \varepsilon \right )\to 0 \label{eq:sup-convergence2} \end{align} as $N \to \infty$. Now we can prove that $\bar{\nu}^N$ converges to $\nu$. Recalling the definition of the Wasserstein distance $d_{\mathcal{W}}^{sup}$, it is enough to construct for each $N$ a coupling $(\bar{P}^{N}, P)$ such that \[ \mathbb{E} \norm{ \bar{P}^{N} - P }_{sup} \to 0 \] as $N \to \infty$. Let us couple these two processes in the following way: first we let $\bar{P}^{N} = \left( \left(\bar{X}^{N}_{t}, \bar{\Phi}^{N}_{t}\right), 0 \leq t \leq T\right)$ be a path sampled according to $\bar{\nu}^{N}$, starting at $(\bar{X}^{N}_{0}, \bar{\Phi}^{N}_{0})$ (whose distribution is uniform on $\left\{\frac{1}{N}, \ldots, 1\right\} \times \left\{\frac{1}{N}, \ldots, 1\right\}$).
We then take $P(t) = (X(t), \Phi(t))$ to be the solution of the ODE \eqref{eq:ode-general} started from an initial condition $(X(0), \Phi(0))$ chosen uniformly at random from $\left[\bar{X}^{N}_{0} - \frac{1}{N}, \bar{X}^{N}_{0} \right] \times \left[\bar{\Phi}^{N}_{0} - \frac{1}{N}, \bar{\Phi}^{N}_{0} \right]$ (so the two processes start close to each other). Because the initial condition is distributed uniformly on $[0,1]^2$, the path $P = \left( P(t), 0 \leq t \leq T \right)$ will be distributed according to $\nu$. Since $P(t) = (X(t), \Phi(t))$ is the solution of \eqref{eq:ode-general}, we have at each time $t \leq T$ \begin{align*} & X(t) - X(0) = \int\limits_{0}^{t} V(s,X(s), \Phi(s)) \, ds, \\ & \Phi(t) - \Phi(0) = \int\limits_{0}^{t} R(s,X(s), \Phi(s)) \, ds. \end{align*} Bounds \eqref{eq:sup-convergence1}, \eqref{eq:sup-convergence2} imply that for all times $t \leq T$ we have \begin{align*} & \bar{X}^{N}(t) - \bar{X}^{N}(0) = \int\limits_{0}^{t} v\left(s, N \bar{X}^{N}(s), N \bar{\Phi}^{N}(s) \right) \, ds + \varepsilon_{t}^{1}, \\ & \bar{\Phi}^{N}(t) - \bar{\Phi}^{N}(0) = \int\limits_{0}^{t} r\left(s, N \bar{X}^{N}(s), N \bar{\Phi}^{N}(s) \right) \, ds + \varepsilon_{t}^{2}, \end{align*} with $\varepsilon_{t}^{1}$, $\varepsilon_{t}^{2}$ satisfying $\sup\limits_{ 0 \leq t \leq T} |\varepsilon_{t}^{i}| \to 0$ in probability as $N \to \infty$. Recalling from \eqref{eq:s-r-approx} that $v(\cdot, \cdot, \cdot), r(\cdot, \cdot, \cdot)$ are approximately equal to $V(\cdot, \cdot, \cdot), R(\cdot, \cdot, \cdot)$ after rescaling of the arguments, we obtain \begin{align*} & \bar{X}^{N}(t) - \bar{X}^{N}(0) = \int\limits_{0}^{t} V\left( s, \bar{X}^{N}(s), \bar{\Phi}^{N}(s) \right) ds + o(1), \\ & \bar{\Phi}^{N}(t) - \bar{\Phi}^{N}(0) = \int\limits_{0}^{t} R\left( s, \bar{X}^{N}(s), \bar{\Phi}^{N}(s) \right) ds + o(1), \end{align*} with the $o(1)$ terms going to $0$ in probability (in the supremum norm over $t$) as $N \to \infty$. Thus $(\bar{X}^{N}, \bar{\Phi}^{N})$ approximately satisfies the same ODE as $(X, \Phi)$ and an application of Gr\"onwall's inequality gives that for any $\varepsilon > 0$ with probability approaching $1$ as $N \to \infty$ we have \[ \norm{ \bar{P}^{N} - P}_{sup} \leq C \max\{ |\bar{X}^{N}(0) - X(0)| + \varepsilon, |\bar{\Phi}^{N}(0) - \Phi(0)| + \varepsilon \} e^{KT} \] for some $C>0$, where $K>0$ depends only on the Lipschitz constants of $V$ and $R$. By definition of the processes $\bar{P}^{N}$ and $P$ the initial conditions $\bar{X}^{N}(0)$, $X(0)$ and $\bar{\Phi}^{N}(0)$, $\Phi(0)$ differ by at most $\frac{1}{N}$, which implies that $\mathbb{E} \norm{\bar{P}^{N} - P}_{sup} \to 0$ as $N \to \infty$. Thus the distribution $\bar{\nu}^{N}$ of the random particle process $\bar{P}^{N}$ converges to $\nu$ in the $d_{\mathcal{W}}^{sup}$ metric as desired. Now we can show that the random measures $\nu^{\eta^{N}}$ converge in distribution to the deterministic measure $\nu$. By the characterization of tightness for random measures (see, e.g., \cite[Theorem 23.15]{kallenberg}) the family $\nu^{\eta^{N}}$ will be tight, as a family of $\mathcal{M}(\mathcal{D}D)$-valued random variables, if for any $\varepsilon > 0$ there exists a compact set $K \subseteq \mathcal{D}D$ such that $\limsup\limits_{N \to \infty} \mathbb{E} \left( \nu^{\eta^{N}}(K) \right) \geq 1 - \varepsilon$, or, more simply put, $\limsup\limits_{N \to \infty} \widetilde{\mathbb{P}}^{N} \left( P^{\eta^{N}} \in K \right) \geq 1 - \varepsilon$.
Exactly the same calculation as for the processes $(A^N, B^N)$ before shows that the processes $P^{\eta^{N}}$ satisfy the tightness criterion \eqref{eq:tightness-aldous}, which guarantees the existence of the desired compact sets $K$ and in turn tightness of $\nu^{\eta^{N}}$. Now to finish the proof we only need to show uniqueness of subsequential limits for the family $\nu^{\eta^{N}}$. Since any such (possibly random) limit must have the associated random particle process distributed according to $\nu$, it is enough to show that the limit is deterministic. Consider an outcome of $\nu^{\eta^{N}}$, which is a measure from $\mathcal{M}(\mathcal{D}D)$, and sample independently two paths $P_{1}^{N}, P_{2}^{N}$ from it. This corresponds to sampling $\eta^{N}$ according to the biased interchange process, then choosing uniformly at random a pair of particles $i,j$ (possibly with $i=j$, but this event has vanishing probability) and following their trajectories in $\eta^{N}$. By the already established convergence $\bar{\nu}^{N} \to \nu$ in $\mathcal{M}(\mathcal{D}D)$, each path $P_{1}^{N}$ and $P_{2}^{N}$ separately has distribution converging to $\nu$. Moreover, due to stationarity of $\eta^{N}$ the initial colors $\phi_i(\eta^{N}_{0})$, $\phi_j(\eta^{N}_{0})$ of any two particles $i,j$ are chosen uniformly at random, in particular they are independent for $i \neq j$. Thus the joint distribution of $(P_{1}^{N},P_{2}^{N})$ converges to the distribution of two independent paths sampled from $\nu$, as a path $P$ sampled from $\nu$ is uniquely determined by its initial conditions. Since we already have tightness, applying Lemma \ref{lm:deterministic} gives that any limit of a subsequence has to be deterministic, which finishes the proof. \end{proof} \end{section} \begin{section}{One block estimate}\label{sec:one-block} In this section we prove the one block estimate of Lemma \ref{lm:one-block-superexponential-biased}, needed for the proof of Theorem \ref{th:lln}. Since another, simpler variant of this estimate will also be needed for the proof of the large deviation upper bound (Lemma \ref{lm:upper-bound-one-block}), we prove the result in generality suited for both of these applications. Let us fix a continuous function $w : [0,1]^2 \to \mathbb{R}$ and let $I^N_{w} = \left\{ w\left( \frac{i}{N}, \frac{j}{N} \right) \right\}_{i,j=1}^{N}$. Let $I^N = \left\{ \frac{1}{N}, \ldots, 1 \right\}$. Consider the interchange process on an extended configuration space $E'$ in which each particle in addition to its label $i$ has two colors $(a_i, \phi_i)$, with $a_i \in I^N_{w}$, $\phi_i \in I^N$. The dynamics is given by the usual generator $\mathcal{L}$ -- adjacent particles are making swaps at rate $\frac{1}{2}N^{\alpha}$ and the colors $a_i, \phi_i$ of the particle $i$ do not evolve in time. Since the one block estimate concerns only the distribution of colors, from now on we ignore the labels of the particles altogether. As before, we use the notation $a_x = a_x (\eta), \phi_x = \phi_x (\eta)$ to denote the colors of the particle at site $x$ in configuration $\eta$. The configuration at time $s$ is denoted by $\eta_s$. Consider a continuous function $g : [0,1] \to [-1,1]$ and for $\eta \in E'$ let $h_x(\eta) = a_x(\eta) b_{x-1}(\eta)$, where $b_x(\eta) = g(\phi_{x}(\eta))$ or $b_x (\eta) = a_x (\eta)$.
As in the previous section let $\Lambda_{x, l} = \{x-l, x-l + 1, \ldots, x+l\}$ denote the box of size $l$ around $x$ (with an appropriate truncation if the endpoints $x-l$ or $x+l$ exceed $1$ or $N$, which we neglect in the notation from now on) and let $\widehat{\mu}_{x, l}^{\eta}$ be the empirical distribution of colors in $\Lambda_{x,l}$ in configuration $\eta$, given for any $(\alpha, \varphi) \in I^N_{w} \times I^N$ by \[ \widehat{\mu}_{x, l}^{\eta} \left(\alpha, \varphi \right) = \frac{1}{|\Lambda_{x,l}|} \# \{ z \in \Lambda_{x,l} \, | \, \left(a_z(\eta), \phi_z(\eta) \right) = (\alpha, \varphi )\}. \] Consider the associated i.i.d. distribution on configurations restricted to $\Lambda_{x,l}$, given for $(\alpha_y, \varphi_y)_{y=x-l}^{x+l} \in \left(I^N_{w} \times I^N \right)^{2l+1}$ by \[ \mu_{x, l}^{\eta} \left((\alpha_y, \varphi_y)_{y=x-l}^{x+l} \right) = \prod\limits_{y=x-l}^{x+l} \widehat{\mu}_{x, l}^{\eta} \left(\alpha_y, \varphi_y \right). \] Since $h_x$ depends on $\eta$ only through the colors at $x$ and $x-1$, we will slightly abuse notation by writing $\mathbb{E}_{\mu_{x,l}^{\eta}} (h_x)$ for the expectation of $h_x$ with respect to $\mu_{x,l}^{\eta}$. Let $\psi : [0,1] \to \mathbb{R}$ be a continuous function and let $U^{N}_{x,l}(\eta) = \psi(x) \left( h_{x}(\eta) - \mathbb{E}_{\mu_{x,l}^{\eta}} (h_x) \right)$. We define \[ U^{N}_{l}(\eta) = \frac{1}{N} \sum\limits_{x=1}^{N} |U^{N}_{x,l}(\eta)|. \] Let $\mu$ denote the uniform distribution on $E'$. Note that the dynamics given by $\mathcal{L}$ is reversible with respect to $\mu$ and the associated Dirichlet form is given by \[ D^{N}(f) = \frac{1}{4} N^{\alpha} \int \sum\limits_{x=1}^{N-1} \left(\sqrt{f(\eta^{x, x+1})} - \sqrt{f(\eta)}\right)^2 \, d\mu(\eta) \] for any $f : E' \to [0, \infty)$. \begin{lemma}\label{lm:one-block-superexponential-general} With $\mu$ denoting the uniform distribution on $E'$, we have for any $C_0 > 0$ \[ \limsup\limits_{l \to \infty} \limsup\limits_{N \to \infty} \sup\limits_{\substack{f \\ D^{N}(f) \leq C_0 N^\gamma}} \int U^{N}_{l}(\eta)f(\eta) \, d\mu(\eta) = 0, \] where $\gamma = 3-\alpha$ and the supremum is over all densities $f$ with respect to $\mu$ such that $D^{N}(f) \leq C_0 N^\gamma$. \end{lemma} \begin{proof} Let us decompose $a_x = a_x(\eta)$ and $b_x = b_x(\eta)$ into their positive and negative parts, $a_x = a_x^{+} - a_x^{-}$, $b_x = b_x^{+} - b_x^{-}$. Since \[ h_x = a_x b_{x-1} = a_x^{+} b_{x-1}^{+} - a_x^{+} b_{x-1}^{-} - a_x^{-} b_{x-1}^{+} + a_x^{-} b_{x-1}^{-}, \] by the triangle inequality it is enough to prove the lemma with $h_x$ replaced by one of the terms in the sum above, say, $a_x^{+} b_{x-1}^{+}$. Let $K = \max \{ 1, \norm{w}_{\infty} \}$ and let us write \begin{align*} & a_x^{+}(\eta) = \int\limits_{0}^{K} \mathbbm{1}_{\{ a_x(\eta) > \lambda \}} \, d\lambda, \\ & b_x^{+}(\eta) = \int\limits_{0}^{K} \mathbbm{1}_{\{ b_x(\eta) > \theta \}} \, d\theta. 
\end{align*} We have \begin{align*} & \frac{1}{N} \sum\limits_{x=1}^{N}\left| \psi(x) \left( a_x^{+}(\eta) b_{x-1}^{+}(\eta) - \mathbb{E}_{\mu_{x,l}^{\eta}} \left[ a_x^{+}(\eta) b_{x-1}^{+}(\eta) \right] \right) \right| \leq \\ & \leq \int\limits_{0}^{K} \int\limits_{0}^{K} \frac{1}{N} \sum\limits_{x=1}^{N} \left| \psi(x) \left( \mathbbm{1}_{\{ a_x(\eta) > \lambda \}} \mathbbm{1}_{\{ b_{x-1}(\eta) > \theta \}} - \mathbb{E}_{\mu_{x,l}^{\eta}} \left[ \mathbbm{1}_{\{ a_x(\eta) > \lambda \}} \mathbbm{1}_{\{ b_{x-1}(\eta) > \theta \}} \right] \right)\right| d\lambda \, d\theta, \end{align*} where the inequality comes from pulling the integrals over $\lambda$ and $\theta$ outside the absolute value. Let us denote the expression under the integrals on the right hand side by $U^{N}_{l, \lambda, \theta}$. Since it is nonnegative and bounded, we can write \[ \sup\limits_{\substack{f}} \int \left( \int\limits_{0}^{K} \int\limits_{0}^{K} U^{N}_{l, \lambda, \theta}(\eta) \, d\lambda \, d\theta \right) f(\eta) \, d\mu(\eta) \leq \int\limits_{0}^{K} \int\limits_{0}^{K} \left(\sup\limits_{\substack{f \\ }} \int U^{N}_{l, \lambda, \theta}(\eta) f(\eta) \, d\mu(\eta) \right)\, d\lambda \, d\theta, \] where the supremum is over all densities $f$ satisfying $D^{N}(f) \leq C_0 N^\gamma$. By the same token, when taking the $\limsup$ first over $N$ and then over $l$, we can bound the resulting limit from above by one with the integral over $\lambda$ and $\theta$ outside the $\limsup$. Thus we see that it is enough to prove for fixed $\lambda, \theta \in [0, K]$ \[ \limsup\limits_{l \to \infty} \limsup\limits_{N \to \infty} \sup\limits_{\substack{f \\ D^{N}(f) \leq C_0 N^\gamma}} \int U^{N}_{l,\lambda, \theta}(\eta)f(\eta) \, d\mu(\eta) = 0. \] Since \[ U^{N}_{l,\lambda, \theta}(\eta) = \frac{1}{N} \sum\limits_{x=1}^{N} \left| \psi(x) \left( \mathbbm{1}_{\{ a_x(\eta) > \lambda \}} \mathbbm{1}_{\{ b_{x-1}(\eta) > \theta \}} - \mathbb{E}_{\mu_{x,l}^{\eta}} \left[ \mathbbm{1}_{\{ a_x(\eta) > \lambda \}} \mathbbm{1}_{\{ b_{x-1}(\eta) > \theta \}} \right] \right)\right|, \] we have reduced the problem to proving the one block estimate for the interchange process in which each particle has only four possible colors, corresponding to the possible values of the pair $(\mathbbm{1}_{\{ a_i(\eta) > \lambda \}}, \mathbbm{1}_{\{ b_i(\eta) > \theta \}})$. This in turn follows by essentially the same argument as for the simple exclusion process, which can be thought of as interchange process with just two colors (see e.g., \cite[Lemma 5.3.1]{kipnis}). Since the argument is by now standard and used in several places in the literature (see e.g., \cite{Fritz2004} for the case of three possible colors), let us only explain that the bound on the Dirichlet form under the supremum is of the right order. The argument for the simple exclusion process goes through (see the remark following the proof of \cite[Lemma 5.4.2]{kipnis}) if we assume that the Dirichlet form corresponding to the generator without time scaling is $o(N)$ and the process is speeded up by $N^2$. In our case the generator $\mathcal{L}$ has a scaling factor of $N^{\alpha}$, so if $N^{-\alpha} D^{N}(f)$ is the Dirichlet form corresponding to the process without time scaling, then our bound on this Dirichlet form is $\leq C_0 N^{\gamma - \alpha} = C_{0} N^{3 - 2\alpha}$. Since $\alpha \in (1,2)$, this is $o(N)$, which agrees with the assumptions for the simple exclusion process. 
\end{proof} \begin{lemma}\label{lm:one-block-superexponential-probability} Let $\mathbb{P}^N$ denote the law of the interchange process on $E'$ with an arbitrary initial distribution. With the notation as above we have for any $t \geq 0$ and $\delta > 0$ \[ \limsup\limits_{l \to \infty} \limsup\limits_{N \to \infty} N^{-\gamma} \log \mathbb{P}^{N} \left( \int\limits_{0}^{t} U^{N}_{l}(\eta_s) \, ds > \delta \right) = - \infty. \] \end{lemma} \begin{proof} Let $\mu_{0}$ be an arbitrary initial distribution. Let $\mathbb{P}_{0, \mu}^{N}$, resp. $\mathbb{P}_{0, \mu_{0}}^{N}$, denote the distribution of the process started from $\mu$, resp.\ $\mu_{0}$, and let $\mathbb{E}_{\mu}$, resp. $\mathbb{E}_{\mu_{0}}$, denote the corresponding expectation. By Chebyshev's inequality we have for any $c > 0$ \begin{equation}\label{eq:chebyshev} \mathbb{P}_{0, \mu_{0}}^{N} \left( \int\limits_{0}^{t} U^{N}_{l}(\eta_s) \, ds > \delta \right) \leq e^{-c\delta N^{\gamma}} \mathbb{E}_{\mu_{0}} \exp\left\{c N^{\gamma} \int\limits_{0}^{t} U^{N}_{l}(\eta_s) \, ds\right\}. \end{equation} We also have \begin{align*} \mathbb{E}_{\mu_{0}} \exp\left\{c N^{\gamma} \int\limits_{0}^{t} U^{N}_{l}(\eta_s) \, ds\right\} & = \mathbb{E}_{\mu} \left[ \frac{\mathrm{d} \mathbb{P}_{0,\mu_{0}}^{N}}{\mathrm{d} \mathbb{P}_{0,\mu}^{N}} (t) \exp\left\{c N^{\gamma} \int\limits_{0}^{t} U^{N}_{l}(\eta_s) \, ds\right\} \right] \\ & \leq \norm{\frac{\mathrm{d} \mathbb{P}_{0,\mu_{0}}^{N}}{\mathrm{d} \mathbb{P}_{0,\mu}^{N}}}_{\infty} \mathbb{E}_{\mu} \exp\left\{c N^{\gamma} \int\limits_{0}^{t} U^{N}_{l}(\eta_s) \, ds\right\}. \end{align*} Let $M = |I^{N}_{w}|$. Since $M \leq N^2$ and under $\mu$ each initial configuration has probability at least $(MN)^{-N} = e^{-o(N^{\gamma})}$, the supremum norm of the Radon-Nikodym derivative above is $e^{o(N^\gamma)}$, so in view of \eqref{eq:chebyshev} it is in fact enough to show that for any $c > 0$ \begin{equation}\label{eq:exp-moment} \limsup\limits_{l \to \infty} \limsup\limits_{N \to \infty} N^{-\gamma} \log \mathbb{E}_{\mu} \exp\left\{c N^{\gamma} \int\limits_{0}^{t} U^{N}_{l}(\eta_s) \, ds\right\} \leq 0 \end{equation} and then take $c \to \infty$ (indeed, together with \eqref{eq:chebyshev} this gives $\limsup_{N \to \infty} N^{-\gamma} \log \mathbb{P}_{0, \mu_{0}}^{N} \left( \int_{0}^{t} U^{N}_{l}(\eta_s) \, ds > \delta \right) \leq -c\delta$ for every $c > 0$). An application of Feynman-Kac formula to the semigroup generated by $\mathcal{L}$ shows (see e.g., \cite[Theorem 10.3.1 and Section A1.7]{kipnis}) that to obtain \eqref{eq:exp-moment} it is sufficient to prove for any $c > 0$ \[ \limsup\limits_{l \to \infty} \limsup\limits_{N \to \infty} \sup\limits_{f} \left\{ \int c U^{N}_{l}(\eta)f(\eta) \, d\mu(\eta) - N^{-\gamma} D^{N}(f) \right\} \leq 0, \] where the supremum is taken over all densities with respect to $\mu$. Since $U^{N}_{l}$ is bounded by a constant $C > 0$ depending only on $\psi$, $g$ and $w$, the expression under the supremum becomes negative if $D^{N}(f) > c C N^{\gamma}$. Thus it is enough to show that for any constant $C_0 > 0$ we have \[ \limsup\limits_{l \to \infty} \limsup\limits_{N \to \infty} \sup\limits_{\substack{f \\ D^{N}(f) \leq C_0 N^\gamma}} \int U^{N}_{l}(\eta)f(\eta) \, d\mu(\eta) \leq 0, \] which is exactly the statement of Lemma \ref{lm:one-block-superexponential-general}. \end{proof} This estimate will be enough for application in the proof of Lemma \ref{lm:upper-bound-one-block}.
As for the proof of Lemma \ref{lm:one-block-superexponential-biased}, we will first show that the one block estimate holds for the unbiased process with color evolution, but with all rates equal to $1$, i.e., the process with state space $E'$ and the generator \begin{align*} & (\mathcal{L}_{0} f)(\eta) = \frac{1}{2} N^{\alpha} \sum\limits_{x=1}^{N-1} (f(\eta^{x, x+1}) - f(\eta)) + \frac{1}{2} N^{\alpha} \sum\limits_{x=1}^{N} \left[ (f(\eta^{x,+}) - f(\eta)) + (f(\eta^{x,-}) - f(\eta)) \right]. \end{align*} Here as usual $\eta^{x, \pm}$ denotes the configuration obtained from $\eta$ by changing the color $\phi_x$ of the particle at site $x$ to $\phi_x \pm 1$ (note that the colors $a_i$ do not evolve in time here). We will then transfer the result to the biased process by estimating its Radon-Nikodym derivative. \begin{lemma}\label{lm:one-block-superexponential-unbiased} Let $\mathbb{P}_{0}^{N}$ be the law of the unbiased process with rates $1$ described above (with an arbitrary initial distribution). With the notation from Lemma \ref{lm:one-block-superexponential-probability}, we have for any $t \geq 0$ and $\delta > 0$ \[ \limsup\limits_{l \to \infty} \limsup\limits_{N \to \infty} N^{-\gamma} \log \mathbb{P}_{0}^{N} \left( \int\limits_{0}^{t} U^{N}_{l}(\eta_s) \, ds > \delta \right) = - \infty. \] \end{lemma} \begin{proof} Let us write $\mathcal{L}_{0} = \mathcal{L} + \mathcal{L}_{c}$, where $\mathcal{L}$ is the first term in the definition of $\mathcal{L}_{0}$ and $\mathcal{L}_{c}$ is the second term. The dynamics induced by $\mathcal{L}$ and by $\mathcal{L}_{c}$ is reversible with respect to $\mu$, so the Dirichlet forms associated respectively to $\mathcal{L}_{c}$ and $\mathcal{L}_{0}$ can be written as \begin{align*} & D^{N}_{c}(f) = \frac{1}{4} N^{\alpha} \int \sum\limits_{x=1}^{N} \left[ \left( \sqrt{f(\eta^{x,+})} - \sqrt{f(\eta)} \right)^2 + \left( \sqrt{f(\eta^{x,-})} - \sqrt{f(\eta)} \right)^2 \right]\, d\mu(\eta), \\ & D^{N}_{0}(f) = D^{N}(f) + D^{N}_{c}(f). \end{align*} By repeating the argument from the proof of Lemma \ref{lm:one-block-superexponential-probability} with the generator $\mathcal{L}_0$ instead of $\mathcal{L}$ we obtain that it is enough to prove that for any $c > 0$ \[ \limsup\limits_{l \to \infty} \limsup\limits_{N \to \infty} \sup\limits_{f} \left\{ \int c U^{N}_{l}(\eta)f(\eta) \, d\mu(\eta) - N^{-\gamma} D^{N}_{0}(f) \right\} \leq 0, \] where the supremum is taken over all densities with respect to $\mu$. Now observe that since $D^{N}_{c}(f) \geq 0$ for any nonnegative $f$, it is in fact enough to prove the statement above with $D^{N}_{0}(f)$ replaced by $D^{N}(f)$. Thus we have eliminated color evolution and the conclusion follows as in the proof of Lemma \ref{lm:one-block-superexponential-probability}. \end{proof} We can now prove the superexponential estimate for the biased process. \begin{proof}[Proof of Lemma \ref{lm:one-block-superexponential-biased}] Recall that $f_x(\eta) = L_{x}(\eta) v_{x-1}(\eta)$. Since we can uniformly approximate $v(0,x,\phi)$ by finite sums of terms which are product in $x$ and $\phi$, by using the triangle inequality we can without loss of generality assume that $v_{x}(\eta) = \psi(x)g(\phi_x)$ for some continuous functions $\psi : [0,1] \to \mathbb{R}, g : [0,1] \to [-1,1]$. Applying Lemma \ref{lm:one-block-superexponential-unbiased} with $w(x, \phi)= v(0,x,\phi)$, $a_i = L_i$ and $h_x = L_x g(\phi_{x-1})$ provides us with the superexponential estimate for the process $\mathbb{P}_{0}^{N}$. 
To transfer the estimate to the biased process $\widetilde{\mathbb{P}}^{N}$ we will need to estimate the Radon-Nikodym derivative of one process with respect to the other. If $\mathbb{P}$ is the law of a Markov jump process with rates $\lambda(x)p(x,y)$ and $\widetilde{\mathbb{P}}$ is the law of another process on the same state space with rates $\widetilde{\lambda}(x) \widetilde{p}(x,y)$, the Radon-Nikodym derivative up to time $t$ is given by (see, e.g., \cite[Proposition A1.2.6]{kipnis}) \begin{equation}\label{eq:radon-nikodym-sum-rates} \frac{\mathrm{d} \widetilde{\mathbb{P}}}{\mathrm{d} \mathbb{P}} (t) = \exp \left\{ - \int\limits_{0}^{t} \left( \widetilde{\lambda}(X_{s}) - \lambda(X_{s}) \right) ds + \sum\limits_{s \leq t}\log \frac{\widetilde{\lambda}(X_{s-})\widetilde{p}(X_{s-},X_{s})}{\lambda(X_{s-})p(X_{s-},X_{s})}\right\}, \end{equation} where the sum is over jump times $s \leq t$. Let us look at $\frac{\mathrm{d} \widetilde{\mathbb{P}}^{N}}{\mathrm{d} \mathbb{P}_{0}^{N}}$. By the form \eqref{eq:biased-generator} of the generator of $\widetilde{\mathbb{P}}^{N}$ the sum of outgoing rates for any $\eta$ is equal to \[ \frac{1}{2}N^{\alpha} \left(\sum\limits_{x=1}^{N-1} \left[ 1 + \varepsilon \left(v_{x}(\eta) - v_{x+1}(\eta) \right) \right] + \sum\limits_{x=1}^{N} \left[ 1 + \varepsilon r_{x}(\eta) \right] + \sum\limits_{x=1}^{N} \left[ 1 - \varepsilon r_{x}(\eta) \right]\right). \] Since the sum of $\varepsilon(v_{x} - v_{x+1})$ telescopes and the velocities $v_x$ vanish at the boundary sites $x=1, N$, while the rates $\pm \varepsilon r_x$ for the color changes cancel out in pairs, the total intensities $\widetilde{\lambda}$ and $\lambda$ are equal, so the integral term in \eqref{eq:radon-nikodym-sum-rates} vanishes. The Radon-Nikodym derivative thus takes the form \begin{align}\label{eq:radon-nikodym-sum-jumps} \frac{\mathrm{d} \widetilde{\mathbb{P}}^{N}}{\mathrm{d} \mathbb{P}_{0}^{N}}(t) = & \exp \Big\{ \sum\limits_{s \leq t} \log \left( 1 + \varepsilon \left[v(x_{j_{s}}, \phi_{x_{j_{s}}}(\eta_{s})) - v(x_{j_{s}}+1, \phi_{x_{j_{s}}+1}(\eta_{s}))\right] \right) + \\ & + \sum\limits_{s_{+} \leq t} \log \left( 1 + \varepsilon \left[r(x_{j_{s_{+}}}, \phi_{x_{j_{s_{+}}}}(\eta_{s_{+}}))\right] \right) + \sum\limits_{s_{-} \leq t} \log \left( 1 - \varepsilon \left[r(x_{j_{s_{-}}}, \phi_{x_{j_{s_{-}}}}(\eta_{s_{-}}))\right] \right) \Big\},\nonumber \end{align} where $j_{s}$ is the label of the particle which makes a swap at time $s$ and $j_{s_{\pm}}$ is the label of the particle that changes its color by $\pm 1$ at time $s_{\pm}$. To simplify this formula we will use the fact that empirical currents across edges can be approximated by their averages, modulo a small martingale. More precisely, let us denote for simplicity \[ (\nabla_{x} v)(\eta) = v_x (\eta) - v_{x+1} (\eta). \] We will sometimes use this notation with $x=N$, in which case we assume $(\nabla_{N} v)(\eta) = 0$. For brevity of notation whenever sums involving both $r_x$ and $-r_x$ appear, we will write them as one term with a $\pm$ sign, that is, with $\sum\limits_{x}(1 \pm \varepsilon r_x)$ serving as a shorthand for $\sum\limits_{x}(1 + \varepsilon r_x) + \sum\limits_{x}(1 - \varepsilon r_x)$ and so on.
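For concreteness, here is the computation behind the equality of the total intensities (it uses only the telescoping of the sum and the fact, noted above, that the velocities vanish at the boundary sites $x = 1, N$): for every configuration $\eta$, \[ \widetilde{\lambda}(\eta) - \lambda(\eta) = \frac{1}{2} N^{\alpha} \varepsilon \left( \sum\limits_{x=1}^{N-1} \left[ v_{x}(\eta) - v_{x+1}(\eta) \right] + \sum\limits_{x=1}^{N} r_{x}(\eta) - \sum\limits_{x=1}^{N} r_{x}(\eta) \right) = \frac{1}{2} N^{\alpha} \varepsilon \left( v_{1}(\eta) - v_{N}(\eta) \right) = 0, \] so the integral term in \eqref{eq:radon-nikodym-sum-rates} gives no contribution and only the sum over jump times survives.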
We introduce the following extension of the dynamics under $\mathbb{P}_{0}^{N}$ -- for any functions $h(x,\eta)$, $h^{\pm}(x,\eta)$, $x \in \{1, \ldots, N\}$ consider the extended state space $E' \times \mathbb{R}$, consisting of pairs $(\eta, J)$, $J \in \mathbb{R}$, and the generator $\mathcal{L}'$ acting by \begin{align*} (\mathcal{L}' f)(\eta, J) = & \frac{1}{2} N^{\alpha} \Big[ \sum\limits_{x=1}^{N-1} \left( f(\eta^{x,x+1}, J + h(x, \eta)) - f(\eta, J) \right) +\\ & + \sum\limits_{x=1}^{N} \left( f(\eta^{x,+}, J + h^{+}(x, \eta)) - f(\eta, J) \right) + \sum\limits_{x=1}^{N} \left( f(\eta^{x,-}, J + h^{-}(x, \eta)) - f(\eta, J) \right) \Big]. \end{align*} In other words, in the evolution of the extended configuration $(\eta_t, J_t)$ each time the process makes a jump, $J_t$ is increased by $h(x, \eta_t)$, $h^{+}(x, \eta_t)$ or $h^{-}(x, \eta_t)$, depending on the type of the jump (swap or color change). Now if we take \begin{align*} & h(x,\eta) = \log \left[ 1 + \varepsilon (\nabla_{x} v)(\eta)\right], \\ & h^{\pm}(x,\eta) = \log \left[ 1 \pm \varepsilon r_{x}(\eta)\right], \end{align*} we see that $J_t$ is simply equal to the sum over jumps appearing in the exponent in \eqref{eq:radon-nikodym-sum-jumps}. Thus to bound the Radon-Nikodym derivative we only need to bound $J_t$. This is done by the use of an exponential martingale -- for any $\lambda > 0$ the following process \[ Z_{t} = \exp\left\{ \lambda J_{t}- \int\limits_{0}^{t} e^{-\lambda J_{s}} \mathcal{L}' e^{\lambda J_{s}} \, ds \right\} \] is a local martingale with respect to $\mathbb{P}_{0}^{N}$. We will actually only need to consider $\lambda = 2$. Writing out the action of $\mathcal{L}'$ on the function $g(\eta, J) = e^{2 J}$ we obtain \[ Z_{t} = \exp\left\{ 2 J_{t}- \frac{1}{2} N^{\alpha} \int\limits_{0}^{t} \sum\limits_{x=1}^{N} \left[ \left( e^{2 \log(1 + \varepsilon (\nabla_{x} v)(\eta_{s}))} - 1\right) + \left( e^{2 \log(1 \pm \varepsilon r_x (\eta_{s}))} - 1 \right) \right] ds \right\}. \] Now we have \begin{align*} & e^{2 \log(1 + \varepsilon (\nabla_{x} v)(\eta_{s}))} - 1 = (1 + \varepsilon (\nabla_{x} v)(\eta_{s}))^2 - 1 = 2 \varepsilon (\nabla_{x} v)(\eta_{s}) + \varepsilon^2 \left[(\nabla_{x} v)(\eta_{s})\right]^2, \\ & e^{2 \log(1 \pm \varepsilon r_{x}(\eta_{s}))} - 1 = (1 \pm \varepsilon r_{x}(\eta_{s}))^2 - 1 = \pm 2 \varepsilon r_{x}(\eta_{s}) + \varepsilon^2 r_{x}(\eta_{s})^2. \end{align*} The sum of terms linear in $\varepsilon$ vanishes -- the rates $r$ for $\pm 1$ color change have opposite sign and the sum involving $\nabla_{x}v$ telescopes. Recalling that $\varepsilon = N^{1-\alpha}$ and $\gamma = 3 - \alpha$, so $N^{\alpha+1}\varepsilon^2 = N^\gamma$, we can then write \[ Z_{t} = \exp\left\{ 2 J_{t} - \frac{1}{2} N^{\gamma} \int\limits_{0}^{t} \frac{1}{N}\sum\limits_{x=1}^{N} \left[ (\nabla_{x} v)(\eta_{s})^2 + r_x(\eta_{s})^2\right] \, ds \right\}. \] Since the rates $v$ and $r$ are bounded, we have $Z_t = e^{ 2 J_t - N^{\gamma} X_t}$, where $|X_t| \leq C$ for some constant $C > 0$ depending only on $v$, $r$ and $T$. In particular we get \[ \mathbb{E} e^{2 J_t} = \mathbb{E} \left( e^{2 J_t - N^{\gamma} X_t} e^{N^{\gamma}X_t} \right) = \mathbb{E} \left( Z_t e^{N^{\gamma}X_t} \right) \leq e^{C N^{\gamma}} \mathbb{E} Z_t. \] Since $Z_t$ is a local martingale bounded from below, it is a supermartingale, so we have $\mathbb{E} Z_t \leq \mathbb{E} Z_0 = 1$ and thus \begin{equation}\label{eq:exponential-zt} \mathbb{E} e^{2 J_t} \leq e^{C N^\gamma}.
\end{equation} Now we can transfer the superexponential bound of Lemma \ref{lm:one-block-superexponential-unbiased} from $\mathbb{P}_{0}^{N}$ to $\widetilde{\mathbb{P}}^{N}$. Let $\mathcal{O}_{N,l}$ be the event from the statement of the lemma and let us write simply $\frac{\mathrm{d} \widetilde{\mathbb{P}}^{N}}{\mathrm{d} \mathbb{P}_{0}^{N}} = \frac{\mathrm{d} \widetilde{\mathbb{P}}^{N}}{\mathrm{d} \mathbb{P}_{0}^{N}}(T)$. Denoting by $\widetilde{\mathbb{E}}$ the expectation with respect to $\widetilde{\mathbb{P}}^{N}$ we have \[ \widetilde{\mathbb{P}}^{N}\left( \mathcal{O}_{N,l} \right) = \widetilde{\mathbb{E}} \left( \mathbbm{1}_{\mathcal{O}_{N,l}} \right) = \mathbb{E} \left( \frac{\mathrm{d} \widetilde{\mathbb{P}}^{N}}{\mathrm{d} \mathbb{P}_{0}^{N}} \mathbbm{1}_{\mathcal{O}_{N,l}} \right). \] Applying the Cauchy-Schwarz inequality gives \[ \widetilde{\mathbb{P}}^{N}\left( \mathcal{O}_{N,l} \right) \leq \left[ \mathbb{E} \left(\frac{\mathrm{d} \widetilde{\mathbb{P}}^{N}}{\mathrm{d} \mathbb{P}_{0}^{N}}\right)^2 \right]^{1/2} \cdot \mathbb{P}_{0}^{N}\left(\mathcal{O}_{N,l}\right)^{1/2}. \] Recalling that $\frac{\mathrm{d} \widetilde{\mathbb{P}}^{N}}{\mathrm{d} \mathbb{P}_{0}^{N}} = e^{J_T}$ and applying the bound \eqref{eq:exponential-zt} we obtain \[ \widetilde{\mathbb{P}}^{N}\left( \mathcal{O}_{N,l} \right) \leq e^{c N^{\gamma}} \mathbb{P}_{0}^{N}\left(\mathcal{O}_{N,l}\right)^{1/2} \] with $c = \frac{C}{2}$. Thus \[ \limsup\limits_{N \to \infty} N^{-\gamma} \log \widetilde{\mathbb{P}}^{N}\left( \mathcal{O}_{N,l} \right) \leq c + \frac{1}{2}\limsup\limits_{N \to \infty} N^{-\gamma} \log \mathbb{P}_{0}^{N}\left( \mathcal{O}_{N,l} \right) \] and taking $\limsup$ as $l \to \infty$ together with an application of Lemma \ref{lm:one-block-superexponential-unbiased} finishes the proof. \end{proof} \end{section} \begin{section}{Large deviation lower bound}\label{sec:lower-bound} In this section we prove the large deviation lower bound of Theorem \ref{th:theorem-main-lower}. Let us assume that the permuton process $X$ satisfies equations \ref{eq:ode-general}. Since we already know how to construct a biased interchange process that will typically display the behavior of $X$, to bound the probability that the trajectory of a random particle in the interchange process is close in distribution to $X$ we only need to compare the unbiased process with the biased one by means of calculating their Radon-Nikodym derivative. Since these two processes have different configuration spaces, for convenience we introduce the \emph{unbiased interchange process with colors}, which has the same configuration space as the biased process associated to \ref{eq:ode-general} and the generator $\mathcal{L}^{u}$ obtained by putting all velocities $v$ to $0$ \begin{align}\label{eq:unbiased-with-colors} & (\mathcal{L}^{u}_{t} f)(\eta) = \frac{1}{2} N^{\alpha} \sum\limits_{y=1}^{N-1} (f(\eta^{y, y+1}) - f(\eta)) + \frac{1}{2} N^{\alpha} \sum\limits_{x=1}^{N} \left[ 1 \pm \varepsilon r(t, x, \phi_{x}(\eta)) \right] (f(\eta^{x,\pm}) - f(\eta)). \end{align} Since here the colors do not influence the dynamics of swaps, the corresponding permutation process $X^{\eta^{N}}$ will be the same as for the ordinary unbiased interchange process (and we will never be interested in the distribution of $\Phi^{\eta^{N}}$ for the unbiased process with colors). Let us start by deriving the formula for the Radon-Nikodym derivative of the unbiased process with colors with respect to the biased one. 
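Before carrying out the computation, let us sketch how the comparison will be used; this is the standard tilting argument and is made rigorous in Theorem \ref{th:lower-bound} below. Writing $\widetilde{\mathbb{E}}$ for the expectation with respect to $\widetilde{\mathbb{P}}^{N}$, for any event $A$ and any auxiliary event $U$ of high $\widetilde{\mathbb{P}}^{N}$-probability we have \[ \mathbb{P}_{u}^{N}(A) = \widetilde{\mathbb{E}} \left[ \frac{\mathrm{d} \mathbb{P}_{u}^{N}}{\mathrm{d} \widetilde{\mathbb{P}}^{N}} \mathbbm{1}_{A} \right] \geq \widetilde{\mathbb{P}}^{N}(A \cap U) \inf_{A \cap U} \frac{\mathrm{d} \mathbb{P}_{u}^{N}}{\mathrm{d} \widetilde{\mathbb{P}}^{N}}, \] so a law of large numbers for the biased process combined with a pointwise lower bound on the Radon-Nikodym derivative yields the desired lower bound for the unbiased process with colors.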
Recall that $v_x (s, \eta_{s}) = v(s,x,\phi_{x}(\eta_s))$ denotes the velocity at time $s$ of the particle at site $x$. Let $\mathbb{P}^{N}_{u}$ denote the law of the unbiased process with colors. We will prove the following statement \begin{lemma}\label{lm:radon-nikodym-lower-bound} We have \[ \frac{\mathrm{d} \mathbb{P}_{u}^{N}}{\mathrm{d} \widetilde{\mathbb{P}}^{N}}(T) = \exp \Bigg\{ - \frac{1}{2} N^{\gamma} \Bigg[ \int\limits_{0}^{T} \frac{1}{N} \sum\limits_{x=1}^{N} v_x (s, \eta_{s})^2 \, ds + o(1) \Bigg]\Bigg\}, \] where the $o(1)$ term goes to $0$ in probability as $N \to \infty$. \end{lemma} \begin{proof} The calculation is similar as in the proof of Lemma \ref{lm:one-block-superexponential-biased}, with the difference that we are using generator $\widetilde{\mathcal{L}}$ instead of $\mathcal{L}_0$. By the analog of formula \eqref{eq:radon-nikodym-sum-rates} for time-inhomogeneous processes we have \[ \frac{\mathrm{d} \mathbb{P}_{u}^{N}}{\mathrm{d} \widetilde{\mathbb{P}}^{N}}(t) = \exp \left\{ -\sum\limits_{s \leq t} \log \left( 1 + \varepsilon \left[v(s,x_{j_{s}}, \phi_{x_{j_{s}}}(\eta_{s})) - v(s,x_{j_{s}}+1, \phi_{x_{j_{s}}+1}(\eta_{s}))\right] \right) \right\}, \] where the sum is over jump times $s \leq t$. Denoting the sum in the exponent by $J_{t}$, we obtain by \eqref{eq:martingale1} (by considering as before the generator $\widetilde{\mathcal{L}}$ acting on an extended configuration space) that \[ J_{t} = M_{t} + \frac{1}{2} N^{\alpha}\sum\limits_{x=1}^{N-1} \int\limits_{0}^{t} \left[ 1 + \varepsilon (\nabla_{x} v)(s, \eta_{s})\right] \log \left[ 1 + \varepsilon (\nabla_{x} v)(s, \eta_{s})\right] ds, \] where $M_{t}$ is a local martingale with respect to $\widetilde{\mathbb{P}}^{N}$. Expanding all terms up to order $\varepsilon^2$ allows us to write \[ \frac{\mathrm{d} \mathbb{P}_{u}^{N}}{\mathrm{d} \widetilde{\mathbb{P}}^{N}}(t) = \exp \left\{- M_{t} - \frac{1}{2} N^{\alpha} \int\limits_{0}^{t} \sum\limits_{x=1}^{N-1} \left[ \varepsilon (\nabla_{x} v)(s,\eta_{s}) + \frac{\varepsilon^2}{2} \left[ (\nabla_{x} v)(s,\eta_{s}) \right]^2 \right] ds + O(N^{\alpha+1}\varepsilon^3)\right\}. \] As before the term linear in $\varepsilon$ vanishes. Recalling that $\varepsilon = N^{1-\alpha}$ and $\gamma = 3 - \alpha$ we have \[ \frac{\mathrm{d} \mathbb{P}_{u}^{N}}{\mathrm{d} \widetilde{\mathbb{P}}^{N}}(t) = \exp \left\{ - M_{t} - \frac{1}{4} N^{\gamma} \int\limits_{0}^{t} \frac{1}{N} \sum\limits_{x=1}^{N-1} \left[ (\nabla_{x} v)(s,\eta_{s}) \right]^2 \, ds + o(N^{\gamma})\right\}. \] Expanding $\left[(\nabla_{x} v)(s,\eta_s)\right]^2 = (v_x(s,\eta_s) - v_{x+1}(s,\eta_s))^2$ leads us to \begin{align}\label{eq:expansion-of-radon-nikodym} & \frac{\mathrm{d} \mathbb{P}_{u}^{N}}{\mathrm{d} \widetilde{\mathbb{P}}^{N}}(t) = \exp \Bigg\{ - M_{t} - \frac{1}{2} N^{\gamma} \int\limits_{0}^{t} \frac{1}{N} \sum\limits_{x=1}^{N} v_x(s,\eta_{s})^2 \, ds + \frac{1}{2} N^{\gamma} \int\limits_{0}^{t} \frac{1}{N} \sum\limits_{x=1}^{N} v_x(s,\eta_{s})v_{x+1}(s,\eta_{s}) \, ds + o(N^{\gamma})\Bigg\}. \end{align} The martingale term will be typically $o(N^{\gamma})$. To see this, we use formula \eqref{eq:martingale2} -- by performing a calculation similar to the one above we get that \[ N_{t} = M_{t}^2 - \frac{1}{2} N^{\alpha} \int\limits_{0}^{t} \sum\limits_{x=1}^{N} \left[ 1 + \varepsilon (\nabla_{x} v)(s,\eta_{s})\right] \left[\log \left( 1 + \varepsilon (\nabla_{x} v)(s,\eta_{s})\right) \right]^2 \, ds \] is a local martingale with respect to $\widetilde{\mathbb{P}}^{N}$. 
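For the reader's convenience, the expansions in $\varepsilon$ used above (and again for the martingale $N_{t}$ in the next step) are the elementary ones \[ (1 + \varepsilon u)\log(1 + \varepsilon u) = \varepsilon u + \frac{\varepsilon^2}{2} u^2 + O(\varepsilon^3), \qquad (1 + \varepsilon u)\left[\log(1 + \varepsilon u)\right]^2 = \varepsilon^2 u^2 + O(\varepsilon^3), \] valid uniformly for $u$ in a bounded set; they are applied with $u = (\nabla_{x} v)(s, \eta_{s})$, which is bounded since the velocities are.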
By expanding the $\log$ terms up to $\varepsilon^2$ we see that the second term above is bounded by $C N^{\alpha+1} \varepsilon^2 = C N^{\gamma}$ for some $C > 0$. In particular $N_{t}$ is bounded from below, so it is a supermartingale. Thus $\mathbb{E} N_T \leq \mathbb{E} N_0 = 0$ and $\mathbb{E} M_T^2 \leq C N^\gamma$, so Chebyshev's inequality implies that $M_T = o(N^\gamma)$ with high probability. The second sum in the exponent in \eqref{eq:expansion-of-radon-nikodym} will be small by invariance of the uniform distribution of colors in the biased process. More precisely, at fixed time $s$ for each $x$ the correlation term $v_{x}(s,\eta_{s})v_{x+1}(s,\eta_{s})$ has mean $0$, since $\eta_{s}$ has stationary distribution and by Proposition \ref{prop:speeds} in stationarity velocities at different sites are independent with mean $0$. Moreover, for the same reason these terms are uncorrelated for different $x$, so by the weak law of large numbers we get that for any $s \leq T$ and $\delta > 0$ \[ \widetilde{\mathbb{P}}^{N} \left( \left| \frac{1}{N} \sum\limits_{x=1}^{N} v_{x}(s,\eta_{s})v_{x+1}(s,\eta_{s}) \right| > \delta \right) \to 0 \] as $N \to \infty$. Since this holds for any fixed $s$ and the random variables are bounded, we also have \[ \widetilde{\mathbb{P}}^{N} \left( \left| \int\limits_{0}^{T} \frac{1}{N} \sum\limits_{x=1}^{N} v_{x}(s,\eta_{s})v_{x+1}(s,\eta_{s}) \, ds \right| > \delta \right) \to 0 \] as $N \to \infty$, which proves that the correlation term is $o(N^{\gamma})$ with high probability. Together with the bound on $M_t$ this proves the desired formula for the Radon-Nikodym derivative. \end{proof} We can now use Lemma \ref{lm:radon-nikodym-lower-bound} and the law of large numbers established in Theorem \ref{th:lln} to prove a large deviation lower bound for the interchange process. As the formula from the lemma suggests, the large deviation rate function will be related to the energy of the process to which the biased interchange process converges. Recall from \eqref{eq:process-energy} that for any process $\pi \in \mathcal{P}$ its energy was defined by \[ I(\pi) = \mathbb{E}_{\gamma \sim \pi} \mathcal{E}(\gamma), \] where $\mathcal{E}(\gamma)$ is the Dirichlet energy of the path $\gamma$ defined by \eqref{eq:energy-path-full}. We have the following large deviation lower bound \begin{theorem}\label{th:lower-bound} Let $\mathbb{P}^{N}$ be the law of the unbiased interchange process $\eta^N$ and let $\mu^{\eta^{N}}$ be the (random) distribution of the corresponding permutation process $X^{\eta^N}$. Let $P = (X, \Phi)$ be the colored trajectory process associated to the equation \eqref{eq:ode-general} and let $\mu$ denote the distribution of $X$. For any open set $\mathcal{O} \subseteq \mathcal{P}$ such that $\mu \in \mathcal{O}$ we have \[ \liminf_{N \to \infty} N^{-\gamma} \log \mathbb{P}^{N}\left(\mu^{\eta^{N}} \in \mathcal{O}\right) \geq - I(\mu). \] \end{theorem} \begin{proof} It will be enough to show the bound above for $\mathcal{O}$ being any open ball $B(\mu, \varepsilon)$ in $\mathcal{P}$ around $\mu$. Let $\mathbb{P}_{u}^{N}$ be the distribution of the unbiased process with colors, $\nu^{\eta^{N}}$ the distribution of the colored permutation process $P^{\eta^N} = (X^{\eta^N}, \Phi^{\eta^N})$ associated to $\eta^N$. Let $\nu$ denote the distribution of $P = (X, \Phi)$ and $\widetilde{B}(\nu, \varepsilon)$ an open ball around $\nu$ in $\mathcal{M}(\mathcal{D}D)$. 
Since the projection $(X, \Phi) \mapsto X$ is continuous as a map from $\mathcal{D}D$ to $\mathcal{D}$, the corresponding projection from $\mathcal{M}(\mathcal{D}D)$ to $\mathcal{M}(\mathcal{D})$ is also continuous. As $\mu^{\eta^N}$ has the same law under $\mathbb{P}^N$ and $\mathbb{P}_{u}^N$ (remember that in the latter process the colors do not influence the dynamics of swaps), we have that for any $\varepsilon > 0$ there exists $\varepsilon' > 0$ such that $\mathbb{P}^{N}\left(\mu^{\eta^{N}} \in B(\mu, \varepsilon) \right) \geq \mathbb{P}_{u}^{N}\left(\nu^{\eta^{N}} \in \widetilde{B}(\nu, \varepsilon')\right)$. Thus to prove the large deviation bound it is sufficient to prove the local lower bound \begin{equation}\label{eq:lower-bound-nu-epsilon} \liminf\limits_{\varepsilon \to 0}\liminf_{N \to \infty} N^{-\gamma} \log \mathbb{P}_{u}^{N}\left(\nu^{\eta^{N}} \in \widetilde{B}(\nu, \varepsilon)\right) \geq - I(\mu). \end{equation} Recall that $\widetilde{\mathbb{P}}^{N}$ denotes the distribution of the biased process associated to \eqref{eq:ode-general} and consider the Radon-Nikodym derivative $\frac{\mathrm{d} \mathbb{P}_{u}^{N}}{\mathrm{d} \widetilde{\mathbb{P}}^{N}}(t)$. By Lemma \ref{lm:radon-nikodym-lower-bound} we have \begin{equation}\label{eq:radon-nikodym-in-lower-bound} \frac{\mathrm{d} \mathbb{P}_{u}^{N}}{\mathrm{d} \widetilde{\mathbb{P}}^{N}}(T) = \exp \Bigg\{ -\frac{1}{2} N^{\gamma} \Bigg[ \int\limits_{0}^{T} \frac{1}{N} \sum\limits_{x=1}^{N} v_{x}(\eta_{s})^2 \, ds + Y_N\Bigg]\Bigg\}, \end{equation} where $Y_N$ goes to $0$ in probability as $N \to \infty$. Now by the law of large numbers from Theorem \ref{th:lln} and Remark \ref{rm:lln} the distributions $\nu^{\eta^{N}}$ converge in probability in the $d_{\mathcal{W}}^{sup}$ metric to $\nu$ when $\eta^{N}$ is sampled according to $\widetilde{\mathbb{P}}^{N}$. Thus for any $\varepsilon > 0$ and an open ball $\widetilde{B}_{\varepsilon} = \{ \zeta \in \mathcal{M}(\mathcal{D}D) \, | \, d_{\mathcal{W}}^{sup}(\zeta, \nu) < \varepsilon \}$ around $\nu$ in the $d_{\mathcal{W}}^{sup}$ metric we have $\lim\limits_{N \to \infty} \widetilde{\mathbb{P}}^{N}\left(\nu^{\eta^{N}} \in \widetilde{B}_{\varepsilon}\right) = 1$. Since convergence in $d_{\mathcal{W}}^{sup}$ implies convergence in $d_{\mathcal{W}}$, to prove \eqref{eq:lower-bound-nu-epsilon} it is enough to analyze the probability $\mathbb{P}^N_{u}\left(\nu^{\eta^{N}} \in \widetilde{B}_{\varepsilon}\right)$. Fix arbitrary $\delta > 0$ and let $U_{N} = \{ |Y_N| \leq \delta \}$. Let $V_{N,\varepsilon} = U_{N} \cap \{ \nu^{\eta^{N}} \in \widetilde{B}_{\varepsilon}\}$ and $\frac{\mathrm{d} \mathbb{P}_{u}^{N}}{\mathrm{d} \widetilde{\mathbb{P}}^{N}} = \frac{\mathrm{d} \mathbb{P}_{u}^{N}}{\mathrm{d} \widetilde{\mathbb{P}}^{N}}(T)$. With $\mathbb{E}$ denoting the expectation with respect to $\mathbb{P}_{u}^{N}$ and $\widetilde{\mathbb{E}}$ with respect to $\widetilde{\mathbb{P}}^{N}$ we have for any $\varepsilon > 0$ and sufficiently large $N$ \[ \mathbb{P}^{N}_{u}\left(\nu^{\eta^{N}} \in \widetilde{B}_{\varepsilon}\right) = \mathbb{E} \left(\mathbbm{1}_{\{ \nu^{\eta^{N}} \in \widetilde{B}_{\varepsilon} \}}\right) \geq \mathbb{E} (\mathbbm{1}_{V_{N,\varepsilon}}) = \widetilde{\mathbb{E}} \left( \frac{\mathrm{d} \mathbb{P}_{u}^{N}}{\mathrm{d} \widetilde{\mathbb{P}}^{N}} \mathbbm{1}_{V_{N,\varepsilon}} \right) \geq \widetilde{\mathbb{P}}^{N}(V_{N,\varepsilon}) \left(\inf_{ V_{N,\varepsilon}}\frac{\mathrm{d} \mathbb{P}_{u}^{N}}{\mathrm{d} \widetilde{\mathbb{P}}^{N}} \right). 
\] We have $\lim\limits_{N \to \infty} \widetilde{\mathbb{P}}^{N}(V_{N,\varepsilon}) = 1 $ and on the event $U_N$ we have \[ \frac{\mathrm{d} \mathbb{P}_{u}^{N}}{\mathrm{d} \widetilde{\mathbb{P}}^{N}}(T) \geq \exp \Bigg\{ -\frac{1}{2} N^{\gamma} \Bigg[ \int\limits_{0}^{T} \frac{1}{N} \sum\limits_{x=1}^{N} v_{x}(\eta_{s})^2 \, ds + \delta\Bigg]\Bigg\}. \] This implies \begin{equation}\label{eq:lower-bound-with-in} N^{-\gamma} \log \mathbb{P}_{u}^{N}\left(\nu^{\eta^{N}} \in \widetilde{B}_{\varepsilon}\right) \geq - \inf_{\eta \in V_{N,\varepsilon}} I_N (\eta) - \delta, \end{equation} where \[ I_{N}(\eta) = \frac{1}{2} \left(\frac{1}{N} \sum\limits_{x=1}^{N} \int\limits_{0}^{T} v_x(s,\eta_{s})^2 \, ds \right). \] Now it is not difficult to see that the infimum on the right hand side of \eqref{eq:lower-bound-with-in} converges to $I(\mu)$ as $N \to \infty$ and then $\varepsilon \to 0$. When $(X, \Phi)$ is sampled from $\nu$, $X$ is the solution of \eqref{eq:ode-general} with a uniformly random initial condition, so the energy $I(\mu)$ is simply equal to \[ \mathbb{E} \int\limits_{0}^{T} V(s, X(s),\Phi(s))^2 \, ds, \] where the expectation is with respect to the choice of $(X(0), \Phi(0))$. Recall the notation $X_{i}(\eta^{N}_{t}) = \frac{1}{N} x_{i}(\eta^{N}_{t})$, $\Phi_{i}(\eta^{N}_{t}) = \frac{1}{N} \phi_{i}(\eta^{N}_{t})$. In light of \eqref{eq:s-r-approx} what we need to show is that \begin{equation}\label{eq:energy} \inf\limits_{\eta \in V_{N,\varepsilon}} \left( \frac{1}{N} \sum\limits_{i=1}^{N} \int\limits_{0}^{T} V(s, X_{i}(\eta^{N}_{s}),\Phi_{i}(\eta^{N}_{s}))^2 \, ds \right) \to \mathbb{E} \int\limits_{0}^{T} V(s, X(s),\Phi(s))^2 \, ds \end{equation} as $N \to \infty$ and then $\varepsilon \to 0$. Consider the trajectory $\eta^N$ and for any particle $i$ let $(X_{i}(t), \Phi_{i}(t))$ denote the solution of \eqref{eq:ode-general} corresponding to the initial condition $(X_{i}(\eta^{N}_{0}), \Phi_{i}(\eta^{N}_{0}))$. Since the velocities $V$ are bounded, we can write \begin{align} & \left| \int\limits_{0}^{T} \left[ V(s,X_{i}(\eta^{N}_{s}),\Phi_{i}(\eta^{N}_{s}))^2 - V(s,X_{i}(s),\Phi_{i}(s))^2 \right] ds \right| \leq \nonumber \\ & \leq C \int\limits_{0}^{T} \big| V(s,X_{i}(\eta^{N}_{s}),\Phi_{i}(\eta^{N}_{s})) - V(s,X_{i}(s),\Phi_{i}(s)) \big| \, ds \label{eq:speeds-squared} \leq \\ & \leq K T \max \left\{\sup_{t \leq T} \left| X_{i}(\eta^{N}_{t}) - X_{i}(t)\right|, \sup_{t \leq T} \left| \Phi_{i}(\eta^{N}_{t}) - \Phi_{i}(t)\right| \right\} \nonumber \end{align} for some $C, K > 0$ depending on the bound on $V$ and the Lipschitz constant of $V$. Now note that if $\nu^{\eta^N} \in \widetilde{B}_{\varepsilon}$, then by considering the same coupling as in the proof of Theorem \ref{th:lln} we have \begin{equation} \limsup\limits_{N \to \infty} \frac{1}{N} \sum\limits_{i=1}^{N} \left| \max \left\{\sup_{t \leq T} \left| X_{i}(\eta^{N}_{t}) - X_{i}(t)\right|, \sup_{t \leq T} \left| \Phi_{i}(\eta^{N}_{t}) - \Phi_{i}(t)\right| \right\} \right| \leq \varepsilon' \end{equation} for some $\varepsilon' > 0$ satisfying $\varepsilon' \to 0$ as $\varepsilon \to 0$. Since $\{ \nu^{\eta^{N}} \in \widetilde{B}_{\varepsilon}\} \subseteq V_{N,\varepsilon}$, combining this with \eqref{eq:speeds-squared} we obtain that the left hand side of \eqref{eq:energy} converges to \[ \frac{1}{N} \sum\limits_{i=1}^{N} \int\limits_{0}^{T} V(s,X_{i}(s),\Phi_{i}(s))^2 \, ds \] as $N \to \infty$ and then $\varepsilon \to 0$. 
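Explicitly, averaging \eqref{eq:speeds-squared} over the particles and using the preceding display, on the event $\{ \nu^{\eta^{N}} \in \widetilde{B}_{\varepsilon}\}$ we have, for all sufficiently large $N$, \[ \left| \frac{1}{N} \sum\limits_{i=1}^{N} \int\limits_{0}^{T} \left[ V(s, X_{i}(\eta^{N}_{s}),\Phi_{i}(\eta^{N}_{s}))^2 - V(s, X_{i}(s),\Phi_{i}(s))^2 \right] ds \right| \leq \frac{K T}{N} \sum\limits_{i=1}^{N} \max \left\{\sup_{t \leq T} \left| X_{i}(\eta^{N}_{t}) - X_{i}(t)\right|, \sup_{t \leq T} \left| \Phi_{i}(\eta^{N}_{t}) - \Phi_{i}(t)\right| \right\} \leq 2 K T \varepsilon', \] which quantifies the convergence just stated.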
Since $(X_{i}(t), \Phi_{i}(t))$ is a solution of \eqref{eq:ode-general} and $V(s, X_{i}(s), \Phi_{i}(s)) = \dot{X}_{i}(s)$, the integral depends only on the path $X_{i}(t)$, namely it equals $\int_{0}^{T} \dot{X}_{i}(s)^2 \, ds$. Since for each $i$ the initial condition $\Phi_{i}(\eta^{N}_{0})$ has uniform distribution on $\left \{\frac{1}{N}, \ldots, 1 \right\}$, independently for all $i$, it follows easily that this expression converges with high probability to the right hand side of \eqref{eq:energy}. This implies $\inf_{\eta \in V_{N,\varepsilon}} I_N (\eta) \to I(\mu)$ as $N \to \infty$ and then $\varepsilon \to 0$. Since in \eqref{eq:lower-bound-with-in} we can take $\delta$ to be arbitrarily small, this proves \eqref{eq:lower-bound-nu-epsilon} and finishes the proof of the lower bound. \end{proof} With this theorem the large deviation lower bound for generalized solutions to Euler equations, announced as Theorem \ref{th:theorem-main-lower} in the introduction, is now an easy corollary. \begin{theorem}\label{thm:lower-bound-for-minimizers} Let $\mathbb{P}^{N}$ be the law of the interchange process $\eta^N$ and let $\mu^{\eta^{N}}$ be the (random) distribution of the corresponding permutation process $X^{\eta^N}$. Let $\pi$ be a permuton process which is a generalized solution to the Euler equations \eqref{eq:gen-euler}. Provided $\pi$ satisfies Assumptions \eqref{as:main-assumptions}, for any open set $\mathcal{O} \subseteq \mathcal{P}$ such that $\pi \in \mathcal{O}$ we have \[ \liminf_{N \to \infty} N^{-\gamma} \log \mathbb{P}^{N}\left(\mu^{\eta^{N}} \in \mathcal{O}\right) \geq - I(\pi). \] \end{theorem} \begin{proof} Let $\pi^{\beta, \delta}$ be the distribution of the process $X^{\beta, \delta}$ defined in Section \ref{sec:ode-part}. By the first part of Proposition \ref{prop:approximation-epsilon-delta} we have $d_{\mathcal{W}}^{sup}(\pi, \pi^{\beta, \delta}) \to 0$ as first $\delta \to 0$ and then $\beta \to 0$, in particular for small enough $\delta$ and $\beta$ we have $\pi^{\beta, \delta} \in \mathcal{O}$. Then Theorem \ref{th:lower-bound} implies that \[ \liminf_{N \to \infty} N^{-\gamma} \log \mathbb{P}^{N}\left(\mu^{\eta^{N}} \in \mathcal{O}\right) \geq - I(\pi^{\beta, \delta}). \] Since by the second part of Proposition \ref{prop:approximation-epsilon-delta} we have $\lim\limits_{\beta \to 0} \lim\limits_{\delta \to 0} I(\pi^{\beta, \delta}) = I(\pi)$, the lower bound is proved. \end{proof} \end{section} \begin{section}{Large deviation upper bound}\label{sec:upper-bound} In this section we prove Theorem \ref{th:theorem-main-upper}, a large deviation upper bound for the distribution of the interchange process (we will drop the term ``unbiased'' from now on). As a first step we will bound the probability that after a (possibly short) time $t > 0$ we see a fixed permutation in the interchange process. This is summarized in the following \begin{proposition}\label{prop:one-slice-bound} Let $\mathbb{P}^{N}$ be the law of the interchange process, with $\eta = \eta^N$ denoting the trajectory of the process. Let $\sigma^N \in \mathcal{S}_N$ be a sequence of permutations. For any $t > 0$ we have \[ \limsup\limits_{N \to \infty} N^{-\gamma}\log \mathbb{P}^{N}(\eta_{0}^{-1}\eta_{t} = \sigma^N) \leq - \frac{1}{t} \left(\liminf\limits_{N \to \infty} I(\sigma^N) \right), \] where $I(\sigma)$ is the energy of the permutation $\sigma$ defined in \eqref{eq:permutation-energy}.
\end{proposition} In other words, the large deviation rate of seeing a permutation $\sigma$ at time $t$ in the interchange process is asymptotically bounded from above by $\frac{1}{t}$ times the energy of the permutation $\sigma$. To prove Proposition \ref{prop:one-slice-bound} we will employ exponential martingales. The idea is as follows -- if $M_{S}(\eta)$ is a function of the process (depending on some set of parameters $S$) which is a positive mean one martingale, then for any permutation $\sigma \in \mathcal{S}_N$ we can write \begin{align}\label{eq:upper-bound-ej} & \mathbb{P}^{N}(\eta_{0}^{-1}\eta_{t} = \sigma) = \mathbb{E} (\mathbbm{1}_{\{\eta_{0}^{-1}\eta_{t} = \sigma\}} )= \mathbb{E} \left( M_{S}(\eta) M_{S}(\eta)^{-1} \mathbbm{1}_{\{\eta_{0}^{-1}\eta_{t} = \sigma\}}\right) \leq \\ & \sup_{ \{\chi_{0}^{-1}\chi_{t} = \sigma\}} M_{S}(\chi)^{-1} \mathbb{E} \left( M_{S}(\eta) \mathbbm{1}_{\{\eta_{0}^{-1}\eta_{t} = \sigma\}}\right) \leq \sup_{\{\chi_{0}^{-1}\chi_{t} = \sigma\}} M_{S}(\chi)^{-1}, \nonumber \end{align} where the supremum is over all deterministic permutation-valued paths $\chi = (\chi_s, 0 \leq s \leq T)$ satisfying $\chi_{0}^{-1}\chi_{t} = \sigma$ and the last inequality comes from the fact that $M_{S}(\chi)$ is a positive mean one martingale. If $M_{S}$ depends only on the increment $\chi_{0}^{-1}\chi_{t}$, we obtain a particularly simple expression \[ \mathbb{P}^{N}(\eta_{0}^{-1}\eta_{t} = \sigma) \leq M_{S}(\sigma)^{-1}. \] We can then optimize over the set of parameters $S$ to obtain a large deviation upper bound. The family of martingales we will use is similar to the one used in analyzing large deviations for a simple random walk. Fix $t > 0$ and a sequence $S = (s_{1}, \ldots, s_{N})$, with $s_{i} \in \left\{ \frac{-1 + \frac{1}{N}}{t}, \frac{-1 + \frac{2}{N}}{t}, \ldots, \frac{1 - \frac{2}{N}}{t}, \frac{1 - \frac{1}{N}}{t} \right\}$. We will think of $s_i$ as ``velocity'' assigned to the particle $i$. Consider the function \[ F_{S}(\eta_{t}) = \varepsilon \sum\limits_{i=1}^{N} s_{i} x_{i}(\eta_{t}), \] where $x_{i}(\eta_{t})$ is the position of the particle $i$ in the configuration $\eta_{t}$. If $\mathcal{L}$ is the generator of the interchange process, given by \eqref{eq:unbiased-generator}, then by the formula \eqref{eq:martingale-exp} for exponential martingales we obtain that \[ M_{t}^{S} = \exp \left\{ F_{S}(\eta_{t}) - F_{S}(\eta_{0}) - \int\limits_{0}^{t} e^{-F_{S}(\eta_{s})} \mathcal{L} e^{F_{S}(\eta_{s})} \, ds \right\} \] is a mean one positive martingale with respect to $\mathbb{P}^{N}$. For simplicity we will use the same notation $s_{x}(\eta) = s_{\eta^{-1}(x)}$ as for velocities $v_x$ of particles in the previous sections (with the convention that $i$ denotes labels of particles and $x$ denotes the positions), although bear in mind that now $s_x$ are just parameters, not related in any way to the the biased interchange process considered in the preceding sections. We have \[ \mathcal{L} e^{F_{S}(\eta)} = \frac{1}{2} N^{\alpha} \sum\limits_{x=1}^{N-1} \left( e^{F_{S}(\eta^{x,x+1})} - e^{F_{S}(\eta)} \right) = \frac{1}{2} N^{\alpha} \sum\limits_{x=1}^{N-1} \left( e^{F_{S}(\eta) + \varepsilon \left[s_{x}(\eta) - s_{x+1}(\eta)\right]} - e^{F_{S}(\eta)} \right), \] so \[ M_{t}^{S} = \exp \left\{ \varepsilon \sum\limits_{i=1}^{N} s_{i} \left( x_{i}(\eta_{t}) - x_{i}(\eta_{0})\right) - \frac{1}{2} N^{\alpha} \int\limits_{0}^{t} \sum\limits_{x=1}^{N-1} \left( e^{\varepsilon [s_{x}(\eta_{s}) - s_{x+1}(\eta_{s})]} -1\right) ds \right\}. 
\] Expanding up to order $\varepsilon^2$ we get \begin{align}\label{eq:exponential} M_{t}^{S} = & \exp \Bigg\{ \varepsilon \sum\limits_{i=1}^{N} s_{i} \left( x_{i}(\eta_{t}) - x_{i}(\eta_{0})\right) - \frac{1}{2} N^{\alpha} \varepsilon \int\limits_{0}^{t} \sum\limits_{x=1}^{N-1} [s_{x}(\eta_{s}) - s_{x+1}(\eta_{s})] \, ds - \\ & - \frac{1}{4} N^{\alpha}\varepsilon^2 \int\limits_{0}^{t} \sum\limits_{x=1}^{N-1} \left(s_{x}(\eta_{s}) - s_{x+1}(\eta_{s})\right)^2 \, ds + O(N^{\alpha+1} \varepsilon^3 ) \Bigg\} \nonumber, \end{align} where the constants in the $O(\cdot)$ notation depend on $t$ (which is fixed). Observe that the sum of $s_{x} - s_{x+1}$ telescopes, leaving only terms with $s_{1}$ and $s_{N}$, which are $O(N^{\alpha} \varepsilon) = o(N^{\gamma})$. Rescaling by appropriate powers of $N$ and expressing the exponents in terms of the large deviation exponent $\gamma$ we get \begin{equation*} M_{t}^{S} = \exp \Bigg\{ N^{\gamma} \left[ \frac{1}{N}\sum\limits_{i=1}^{N} s_{i} \left( \frac{x_{i}(\eta_{t}) - x_{i}(\eta_{0})}{N} \right) - \frac{1}{4} \int\limits_{0}^{t} \frac{1}{N}\sum\limits_{x=1}^{N-1} \left(s_{x}(\eta_{s}) - s_{x+1}(\eta_{s})\right)^2 \, ds \right]+ o(N^{\gamma}) \Bigg\}. \end{equation*} Expanding $(s_x - s_{x+1})^2$ we obtain (after adding and subtracting the boundary terms $s_1^2$, $s_N^2$ which are only $o(1)$ after rescaling) twice the sum of the $s_{x}^{2}$ minus twice the sum of the mixed terms $s_x s_{x+1}$. Since $\sum\limits_{x=1}^N s_x^2 = \sum\limits_{i=1}^N s_i^2 $ does not depend on time, we can write \begin{align*} M_{t}^{S} = & \exp \Bigg\{ N^{\gamma} \Bigg[ \frac{1}{N}\sum\limits_{i=1}^{N} s_{i} \left( \frac{x_{i}(\eta_{t}) - x_{i}(\eta_{0})}{N} \right) - \frac{t}{2} \left( \frac{1}{N}\sum\limits_{i=1}^{N} s_{i}^{2} \right) + \\ & + \frac{1}{2} \int\limits_{0}^{t} \frac{1}{N}\sum\limits_{x=1}^{N-1} s_{x}(\eta_{s}) s_{x+1}(\eta_{s}) \, ds + o(1)\Bigg] \Bigg\}. \end{align*} As in the proof of the law of large numbers we want to use the one block estimate to get rid of the sum involving correlations between $s_{x}$ for adjacent $x$. This time the correlation term might not be small, since the $s_{i}$ are arbitrary, but typically it will be nonnegative, so we can neglect it for the sake of the upper bound. More precisely, we have the following \begin{lemma}\label{lm:upper-bound-one-block} Let $\mathbb{P}^{N}$ be the law of the interchange process. Fix $t > 0$ and let \\ $s_{i} \in \left\{ \frac{-1}{t}, \frac{-1 + \frac{1}{N}}{t}, \ldots, \frac{1 - \frac{1}{N}}{t}, \frac{1}{t} \right\}$. Then, with notation as above, we have for any $\delta > 0$ \[ \limsup\limits_{N \to \infty} N^{-\gamma} \log \mathbb{P}^{N} \left( \int\limits_{0}^{t} \frac{1}{N}\sum\limits_{x=1}^{N-1} s_{x}(\eta_{s}) s_{x+1}(\eta_{s}) \, ds \leq - \delta \right) = - \infty. \] \end{lemma} \begin{proof} We employ Lemma \ref{lm:one-block-superexponential-probability} with $w(x,\phi) = \frac{2x-1}{t}$, $a_i = s_i$ and $b_{x}(\eta) = a_{x}(\eta)$, in particular $h_{x}(\eta) = s_x(\eta) s_{x-1}(\eta)$. As in the lemma consider $\mathbb{E}_{\mu_{x,l}^{\eta_s}} \left(s_{x}(\eta) s_{x+1}(\eta)\right)$, where $\mu_{x,l}^{\eta_s}$ is the empirical distribution of $a_i$ in a box $\Lambda_{x,l}$.
Let us write \begin{align}\label{eq:sxsx+1} & \int\limits_{0}^{t} \frac{1}{N}\sum\limits_{x=1}^{N-1} s_{x}(\eta_s) s_{x+1}(\eta_s) \, ds = \\ & = \int\limits_{0}^{t} \frac{1}{N}\sum\limits_{x=1}^{N-1} \left( s_{x}(\eta_s) s_{x+1}(\eta_s) - \mathbb{E}_{\mu_{x,l}^{\eta_s}} \left[s_{x}(\eta) s_{x+1}(\eta)\right] \right)\, ds + \int\limits_{0}^{t} \frac{1}{N}\sum\limits_{x=1}^{N-1} \mathbb{E}_{\mu_{x,l}^{\eta_s}} \left[s_{x}(\eta) s_{x+1}(\eta) \right]\, ds. \nonumber \end{align} Since under $\mu_{x,l}^{\eta_s}$ the colors are i.i.d. random variables, we have $\mathbb{E}_{\mu_{x,l}^{\eta_s}} \left[s_{x}(\eta) s_{x+1}(\eta)\right] = \left( \mathbb{E}_{\mu_{x,l}^{\eta_{s}}} s_{x}(\eta)\right )^2 \geq 0$, so the second term on the right hand side of \eqref{eq:sxsx+1} is nonnegative for every $l$. Lemma \ref{lm:one-block-superexponential-probability} guarantees that for any $\delta > 0$ \[ \limsup\limits_{l \to \infty} \limsup\limits_{N \to \infty} N^{-\gamma} \log \mathbb{P}^{N} \left( \int\limits_{0}^{t} \frac{1}{N}\sum\limits_{x=1}^{N-1} \left( s_{x}(\eta_s) s_{x+1}(\eta_s) - \mathbb{E}_{\mu_{x,l}^{\eta_s}} \left[s_{x}(\eta) s_{x+1}(\eta)\right] \right)\, ds \leq - \delta \right) = - \infty. \] Since the left hand side of \eqref{eq:sxsx+1} does not depend on $l$, this finishes the proof. \end{proof} With this lemma the proof of Proposition \ref{prop:one-slice-bound} is rather straightforward. \begin{proof}[Proof of Proposition \ref{prop:one-slice-bound}] Lemma \ref{lm:upper-bound-one-block} implies that for any $a > 0$ there exist sets $\mathcal{O}_{N,a}$ such that on $\mathcal{O}_{N,a}$ we have \begin{equation}\label{eq:upper-bound-mt} M_{t}^{S}(\eta) \geq \exp \Bigg\{ N^{\gamma} \Bigg[ \frac{1}{N}\sum\limits_{i=1}^{N} s_{i} \left( \frac{x_{i}(\eta_{t}) - x_{i}(\eta_{0})}{N} \right) - \frac{t}{2} \left( \frac{1}{N}\sum\limits_{i=1}^{N} s_{i}^{2} \right) - a + o(1)\Bigg] \Bigg\}, \end{equation} with the $o(1)$ term depending on $t$, and $\mathbb{P}^{N}(\mathcal{O}_{N,a}^{c}) \to 0 $ as $N \to \infty$ superexponentially fast \[ \limsup_{N \to \infty} N^{-\gamma} \log \mathbb{P}^{N}(\mathcal{O}_{N,a}^{c}) = - \infty. \] Now we can use the strategy outlined earlier with the positive mean one martingale $M_{t}^{S}(\eta)$. We write \begin{align*} & \mathbb{P}^{N}(\eta_{0}^{-1}\eta_{t} = \sigma^N) = \mathbb{E} \left(\mathbbm{1}_{\{\eta_{0}^{-1}\eta_{t} = \sigma^N\}} \right)= \mathbb{E} \left( M_{t}^{S}(\eta)^{-1} M_{t}^{S}(\eta) \mathbbm{1}_{\{\eta_{0}^{-1}\eta_{t} = \sigma^N\}}\right) = \\ & \mathbb{E} \left( M_{t}^{S}(\eta)^{-1} M_{t}^{S}(\eta) \mathbbm{1}_{\{\eta_{0}^{-1}\eta_{t} = \sigma^N \}}\mathbbm{1}_{\mathcal{O}_{N,a}} \right) + \mathbb{E} \left( \mathbbm{1}_{\{\eta_{0}^{-1}\eta_{t} = \sigma^N \}}\mathbbm{1}_{\mathcal{O}_{N,a}^{c}} \right). \end{align*} On $\mathcal{O}_{N,a}$ we can use the bound \eqref{eq:upper-bound-mt} obtained above. Note also that on the event $\{\eta_{0}^{-1}\eta_{t} = \sigma^N\}$ we have $x_{i}(\eta_{t}) - x_{i}(\eta_{0}) = \sigma^N(i) - i$, which together with \eqref{eq:upper-bound-ej} leads us to \begin{equation}\label{eq:one-slice-intermediate} \mathbb{P}^{N}(\eta_{0}^{-1}\eta_{t} = \sigma^N) \leq e^{-N^{\gamma} \left(I_{S}(\sigma^N) - a + o(1)\right)} + \mathbb{P}^{N}(\mathcal{O}_{N,a}^{c}), \end{equation} where \[ I_{S}(\sigma^N) = \frac{1}{N}\sum\limits_{i=1}^{N} s_{i} \left( \frac{\sigma^N(i) - i}{N} \right) - \frac{t}{2} \left( \frac{1}{N}\sum\limits_{i=1}^{N} s_{i}^{2} \right). 
\] To optimize over the choice of $S = (s_1, \ldots, s_N)$, observe that $I_{S}(\sigma^N)$ is quadratic in $s_{i}$, so an easy calculation shows that the optimal choice is \[ s_{i} = \frac{\sigma^N(i) - i}{t N}, \] which is valid, since we assumed $s_{i} \in \left\{ \frac{-1 + \frac{1}{N}}{t}, \frac{-1 + \frac{2}{N}}{t}, \ldots, \frac{1 - \frac{1}{N}}{t}\right\}$. This gives the maximal value of $I_{S}(\sigma^N)$ equal to \[ \frac{1}{2} \left( \frac{1}{N}\sum\limits_{i=1}^{N} \frac{1}{t}\left( \frac{\sigma^N(i) - i}{N} \right)^2 \right), \] which is exactly $\frac{1}{t} I(\sigma^N)$, the energy of $\sigma^N$ divided by $t$. Inserting this into \eqref{eq:one-slice-intermediate} gives us \[ \mathbb{P}^{N}(\eta_{0}^{-1}\eta_{t} = \sigma^N) \leq e^{-N^{\gamma} \left(\frac{1}{t} I(\sigma^N) - a + o(1)\right)} + \mathbb{P}^{N}(\mathcal{O}_{N,a}^{c}). \] Since $\limsup\limits_{n \to \infty} \frac{1}{n} \log(a_n + b_n) = \max\{\limsup\limits_{n \to \infty} \frac{1}{n} \log a_n, \limsup\limits_{n \to \infty} \frac{1}{n} \log b_n\}$, we obtain \[ \limsup\limits_{N \to \infty} N^{-\gamma} \log \mathbb{P}^{N}(\eta_{0}^{-1}\eta_{t} = \sigma^N) \leq \max\left\{ - \frac{1}{t} \liminf\limits_{N \to \infty} I(\sigma^N) + a, \limsup\limits_{N \to \infty} N^{-\gamma} \log \mathbb{P}^{N} (\mathcal{O}_{N,a}^{c}) \right\}. \] The second lim sup is $-\infty$ and by taking $a \to 0$ we arrive at \[ \limsup\limits_{N \to \infty} N^{-\gamma} \log \mathbb{P}^{N}(\eta_{0}^{-1}\eta_{t} = \sigma^N) \leq - \frac{1}{t} \liminf\limits_{N \to \infty} I(\sigma^N) \] as desired. \end{proof} We can readily extend the bound from Proposition \ref{prop:one-slice-bound} to all finite-dimensional distributions of the interchange process as follows. Fix a finite set of times $0 \leq t_0 < t_1 < \ldots < t_k \leq T$ and for clarity of notation let us write $\vec{\eta}_{t_0, \ldots, t_k} = (\eta_{t_{0}}^{-1}\eta_{t_1}, \ldots, \eta_{t_{k-1}}^{-1}\eta_{t_k})$ for the corresponding sequence of increments of $\eta$. Suppose we want to bound the probability $\mathbb{P}^{N}(\vec{\eta}_{t_0, \ldots, t_k} = (\sigma^N_{1}, \ldots, \sigma^N_{k}))$, where $(\sigma^N_{1}, \ldots, \sigma^N_{k})$ is a fixed sequence of permutations for each $N$, $\sigma^N_j \in \mathcal{S}_N$. Recall that the interchange process has independent increments, i.e., the permutations $\left(\eta_{t_{j}}\right)^{-1} \eta_{t_{j+1}}$ for any family of non-overlapping intervals $[t_{j}, t_{j+1})$ are independent. Therefore we can write \[ \mathbb{P}^{N}(\vec{\eta}_{t_0, \ldots, t_k} = (\sigma^N_{1}, \ldots, \sigma^N_{k})) = \prod\limits_{j=1}^{k} \mathbb{P}^{N}(\eta_{t_{j-1}}^{-1} \eta_{t_j} = \sigma^N_{j}). \] As the interchange process is stationary, we have $\mathbb{P}^{N}(\eta_{t_{j-1}}^{-1} \eta_{t_j} = \sigma^N_{j}) = \mathbb{P}^{N}(\eta_{0}^{-1} \eta_{t_{j} - t_{j-1}} = \sigma^N_{j})$. Thus by applying Proposition \ref{prop:one-slice-bound} we obtain \begin{equation}\label{eq:many-slices} \limsup\limits_{N \to \infty} N^{-\gamma}\log \mathbb{P}^{N}(\vec{\eta}_{t_0, \ldots, t_k} = (\sigma^N_{1}, \ldots, \sigma^N_{k})) \leq - \liminf\limits_{N \to \infty} \sum\limits_{j=1}^{k} \frac{1}{t_j - t_{j-1}}I(\sigma^N_j). \end{equation} Recall that $\mu^{\eta^N}$ denotes the distribution of the random permutation process associated to $\eta^N$ (defined by \eqref{eq:empirical-eta}) and for a finite partition $\Pi$ by $I^{\Pi}(\mu^{\eta^N})$ we denote the approximation of the energy of $\mu^{\eta^N}$ associated with $\Pi$ (defined by \eqref{eq:def-energy-fin-dim}).
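Unwinding the definitions (and using, as in the proof of Proposition \ref{prop:one-slice-bound}, that the displacement of each particle over $[t_{j-1}, t_{j}]$ is determined by the increment $\eta_{t_{j-1}}^{-1}\eta_{t_{j}}$), one checks that for the partition $\Pi = \{ 0 = t_0 < t_1 < \ldots < t_k = T\}$ \[ I^{\Pi}(\mu^{\eta^{N}}) = \sum\limits_{j=1}^{k} \frac{1}{t_{j} - t_{j-1}} I\left( \eta_{t_{j-1}}^{-1}\eta_{t_{j}} \right), \] so controlling $I^{\Pi}(\mu^{\eta^{N}})$ amounts to controlling the energies of the increments; this is the form in which \eqref{eq:many-slices} is applied in the corollary below.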
From equation \eqref{eq:many-slices} we obtain the following corollary, which will be useful later. \begin{corollary}\label{cor:many-slices-energy} For any $C > 0$ and any finite partition $\Pi = \{ 0 = t_0 < t_1 < \ldots < t_k = T\}$ we have \[ \limsup\limits_{N \to \infty} N^{-\gamma}\log \mathbb{P}^{N}(I^{\Pi}(\mu^{\eta^{N}}) \geq C) \leq - C. \] \end{corollary} \begin{proof} Consider the set $A_{C}^{N}$ of all sequences of permutations $(\sigma^N_1, \ldots, \sigma^N_k)$, $\sigma^N_j \in \mathcal{S}_N$, such that $\sum\limits_{j=1}^{k} \frac{1}{t_j - t_{j-1}}I(\sigma^N_j) \geq C$. By performing a union bound over all such sequences we get \[ \mathbb{P}^{N}(I^{\Pi}(\mu^{\eta^{N}}) \geq C) \leq N!^k \sup\limits_{A_{C}^{N}} \mathbb{P}^{N}(\vec{\eta}_{t_0, \ldots, t_k} = (\sigma^N_{1}, \ldots, \sigma^N_{k})). \] Now it is enough to observe that for fixed $k$ we have $\log (N!^k) = o(N^\gamma)$ and apply \eqref{eq:many-slices}. \end{proof} Now we can proceed to prove a general large deviation upper bound, announced as Theorem \ref{th:theorem-main-upper} in the introduction. Recall that $\mathcal{P} \subseteq \mathcal{M}(\mathcal{D})$ denotes the space of all permuton and approximate permuton processes. \begin{theorem}\label{th:upper-bound} Let $\mathbb{P}^{N}$ be the law of the interchange process $\eta^{N}$ and let $\mu^{\eta^{N}}$ be the (random) distribution of the corresponding random permutation process $X^{\eta^N}$. For any closed set $\mathcal{C} \subseteq \mathcal{P}$ we have \[ \limsup_{N \to \infty} N^{-\gamma} \log \mathbb{P}^{N}\left(\mu^{\eta^{N}} \in \mathcal{C} \right) \leq - \inf\limits_{\pi \in \mathcal{C}} I(\pi), \] where $I(\pi)$ is the energy of the process $\pi$ defined by \eqref{eq:process-energy}. \end{theorem} \begin{proof} It is standard (see, e.g., \cite[Lemma 2.3]{varadhan}) that the large deviation upper bound for closed sets follows from a local upper bound for open balls and exponential tightness of the sequence $\mu^{\eta^{N}}$. The exponential tightness part will be proved in Proposition \ref{prop:exp-tightness} below, so here we focus on the first part, that is, we will prove that for any $\pi \in \mathcal{P}$ we have \begin{equation}\label{eq:ld-ball} \limsup_{\varepsilon \to 0} \limsup_{N \to \infty} N^{-\gamma} \log \mathbb{P}^{N}\left(\mu^{\eta^{N}} \in B(\pi, \varepsilon) \right) \leq - I(\pi), \end{equation} where $B(\pi, \varepsilon)$ denotes the open $\varepsilon$-ball around $\pi$ in the Wasserstein distance $d_{\mathcal{W}}$ on $\mathcal{P}$. Fix a finite set of times $0 = t_{0} < t_{1} < \ldots < t_{k} = T$. Since almost surely the interchange process does not make jumps at any of the prescribed times $t_0, t_1, \ldots, t_k$, by continuity of projections for any $\varepsilon > 0$ there exists $\varepsilon' > 0$ such that \begin{equation}\label{eq:upper-bound-projection} \mathbb{P}^{N}\left( d_{\mathcal{W}}(\mu^{\eta^{N}}, \pi) < \varepsilon' \right) \leq \mathbb{P}^{N} \left(d(\mu^{\eta^{N}}_{t_{0}, t_{1}}, \pi_{t_0, t_1}) < \varepsilon \wedge \ldots \wedge d(\mu^{\eta^{N}}_{t_{k-1}, t_{k}},\pi_{t_{k-1}, t_{k}}) < \varepsilon \right), \end{equation} where $d$ denotes the Wasserstein distance on $\mathcal{M}([0,1]^2)$. Furthermore, note that the permutation process with distribution $\mu^{\eta^{N}}$ has independent increments, i.e., the permutations $\left(\eta_{t_{j}}^{N}\right)^{-1} \eta_{t_{j+1}}^{N}$ for any family of non-overlapping intervals $[t_{j}, t_{j+1})$ are independent.
Thus we can write \begin{align}\label{eq:upper-bound-product} \mathbb{P}^{N} \left(d(\mu^{\eta^{N}}_{t_{0}, t_{1}}, \pi_{t_0, t_1}) < \varepsilon \wedge \ldots \wedge d(\mu^{\eta^{N}}_{t_{k-1}, t_{k}},\pi_{t_{k-1}, t_{k}}) < \varepsilon \right) = \prod\limits_{i=0}^{k-1} \mathbb{P}^{N} \left( d(\mu^{\eta^{N}}_{t_{i}, t_{i+1}}, \pi_{t_{i}, t_{i+1}}) < \varepsilon\right). \end{align} In this way we have reduced the problem to bounding the probability that the random measure $\mu^{\eta^{N}}_{t_{i}, t_{i+1}}$ is close to a fixed permuton $\pi_{t_{i}, t_{i+1}}$. Fix $i$ and consider all permutations $\sigma \in \mathcal{S}_{N}$ such that the empirical measure $\mu_{\sigma}$ satisfies $d(\mu_{\sigma}, \pi_{t_{i}, t_{i+1}}) < \varepsilon$. As there are at most $N!$ such permutations, by performing a union bound over this set we obtain \[ \mathbb{P}^{N} \left( d(\mu^{\eta^{N}}_{t_{i}, t_{i+1}}, \pi_{t_{i}, t_{i+1}}) < \varepsilon\right) \leq N! \sup_{ \substack{\sigma \in \mathcal{S}_{N} \\ d(\mu_{\sigma}, \pi_{t_{i}, t_{i+1}}) < \varepsilon}} \mathbb{P}^{N} \left(\mu^{\eta^{N}}_{t_{i}, t_{i+1}} = \mu_{\sigma}\right), \] where on the right hand side we have the probability that the random measure $\mu_{t_i, t_{i+1}}^{\eta^{N}}$ is equal to $\mu_{\sigma}$. This probability is simply equal to $\mathbb{P}^{N} \left( \left(\eta_{t_i}^{N}\right)^{-1} \eta_{t_{i+1}}^{N} = \sigma\right)$ and by stationarity of the interchange process this is the same as $\mathbb{P}^{N} \left( \left(\eta_{0}^{N}\right)^{-1} \eta_{t_{i+1} - t_i}^{N} = \sigma\right)$. By employing Proposition \ref{prop:one-slice-bound}, with $\sigma^N \in \mathcal{S}_N$ being any permutation attaining the supremum above, and noticing that $\log N! = o(N^{\gamma})$ we get \[ \limsup_{N \to \infty} N^{-\gamma} \log \mathbb{P}^{N} \left( d(\mu^{\eta^{N}}_{t_i, t_{i+1}}, \pi_{t_{i}, t_{i+1}}) < \varepsilon\right) \leq \limsup_{N \to \infty} \left(- \frac{1}{t_{i+1} - t_i} I(\sigma^N)\right). \] Now observe that for any $\sigma$ such that $d(\mu_{\sigma}, \pi_{t_{i}, t_{i+1}}) < \varepsilon$ the energy $I(\sigma) = I(\mu_{\sigma})$ has to be close to $I(\pi_{t_{i}, t_{i+1}})$, the energy of the permuton $\pi_{t_i, t_{i+1}}$ (recall \eqref{eq:permuton-energy-def}), since $I$ is continuous in the weak topology on $\mathcal{M}([0,1]^2)$. Thus upon taking $\varepsilon \to 0$ we obtain \[ \limsup_{\varepsilon \to 0} \limsup_{N \to \infty} N^{-\gamma} \log \mathbb{P}^{N} \left(d(\mu^{\eta^{N}}_{t_i, t_{i+1}}, \pi_{t_{i}, t_{i+1}}) < \varepsilon \right) \leq -\frac{1}{t_{i+1} - t_{i}} I(\pi_{t_{i}, t_{i+1}}). \] Applying this estimate to the product in \eqref{eq:upper-bound-product} and observing that in \eqref{eq:upper-bound-projection} without loss of generality we can assume $\varepsilon' \leq \varepsilon$, we arrive at the following bound \begin{align*} & \limsup_{\varepsilon \to 0} \limsup_{N \to \infty} N^{-\gamma} \log \mathbb{P}^{N}\left(d_{\mathcal{W}}(\mu^{\eta^{N}}, \pi) < \varepsilon \right) \leq -\sum\limits_{i=0}^{k-1} \frac{1}{t_{i+1} - t_{i}} I(\pi_{t_{i}, t_{i+1}}). \end{align*} Since $t_0, t_1, \ldots, t_k$ were arbitrary, by optimizing over all finite partitions $\Pi = \{ 0 = t_{0} < t_{1} < \ldots < t_{k} = T \}$ we obtain \[ \limsup_{\varepsilon \to 0} \limsup_{N \to \infty} N^{-\gamma} \log \mathbb{P}^{N}\left(d_{\mathcal{W}}(\mu^{\eta^{N}}, \pi) < \varepsilon \right) \leq - \sup_{\Pi} \sum\limits_{i=0}^{k-1} \frac{1}{t_{i+1} - t_{i}} I(\pi_{t_{i}, t_{i+1}}).
\] Recalling the definitions \eqref{eq:energy-path-def}, \eqref{eq:process-energy} and \eqref{eq:def-energy-fin-dim}, to prove \eqref{eq:ld-ball} it remains to show that we have $I(\pi) = \sup\limits_{\Pi} I^{\Pi}(\pi)$, which is exactly the statement of Lemma \ref{lm:approximate-energy}. \end{proof} \begin{proposition}\label{prop:exp-tightness} The family of measures $\mu^{\eta^{N}}$ is exponentially tight, that is, there exists a sequence of compact sets $K_{m} \subseteq \mathcal{P}$ such that \[ \limsup_{N \to \infty} N^{-\gamma} \log \mathbb{P}^{N} (\mu^{\eta^{N}} \notin K_{m}) \leq - m. \] \end{proposition} \begin{proof} The idea of the proof is to show that having many particles whose trajectories have poor modulus of continuity (which would spoil compactness) necessarily implies that the process has high energy, which by Corollary \ref{cor:many-slices-energy} is unlikely. Recall that $\mathcal{D} = \mathcal{D}([0,T], [0,1])$ is the space of all c\`adl\`ag paths from $[0,T]$ to $[0,1]$. It will be convenient to work with the following notion of c\`adl\`ag modulus of continuity -- for a path $f \in \mathcal{D}$ we define \[ w_{\delta}''(f) = \sup_{\substack{t_1 \leq t \leq t_2 \\ t_2 - t_1 \leq \delta}} \left\{ |f(t) - f(t_1)| \wedge |f(t_2) - f(t)| \right\}. \] By a characterization of compactness in the Skorokhod space (\cite[Theorem 12.4]{billingsley}) a set $A \subseteq \mathcal{D}$ has compact closure if and only if the following conditions hold: \[ \begin{cases} \sup\limits_{f \in A} \sup\limits_{t \in [0,T]} |f(t)| < \infty, \\ \lim\limits_{\delta \to 0} \sup\limits_{f \in A} w_{\delta}''(f) = 0, \\ \lim\limits_{\delta \to 0} \sup\limits_{f \in A} |f(\delta) - f(0)| = 0, \\ \lim\limits_{\delta \to 0} \sup\limits_{f \in A} |f(T-) - f(T-\delta)| = 0. \end{cases} \] In our setting the first condition is trivially satisfied. To exploit the other conditions let us introduce for any $m,r \geq 1$ the following sets \begin{align*} & K^{w}_{m,r} = \bigcap\limits_{k \geq 1}\left\{ f \in \mathcal{D} \, \big\vert \, w_{\delta_{k}(m,r)}''(f) \leq \varepsilon_{k} \right\}, \\ & K^{0}_{m,r} = \bigcap\limits_{k \geq 1}\left\{ f \in \mathcal{D} \, \big\vert \, |f(\delta_{k}(m,r)) - f(0)| \leq \varepsilon_{k} \right\}, \\ & K^{T}_{m,r} = \bigcap\limits_{k \geq 1}\left\{ f \in \mathcal{D} \, \big\vert \, |f(T-) - f(T - \delta_{k}(m,r))| \leq \varepsilon_{k} \right\}, \end{align*} and \[ K_{m,r} = K^{w}_{m,r} \cap K^{0}_{m,r} \cap K^{T}_{m,r}, \] where $\varepsilon_{k} = 4^{-k}$ and $\delta_{k}(m,r)$ will be appropriately chosen later. We will assume that for fixed $m,r$ we have $\lim\limits_{k \to \infty} \delta_{k}(m,r) = 0$ and that for any $k \geq 1$ both $\frac{T}{\delta_k(m,r)}$ and $\frac{\delta_{k}(m,r)}{\delta_{k+1}(m,r)}$ are integers (the latter assumption is for simplicity of notation only). Note that by the aforementioned compactness conditions each set $K_{m,r}$ has compact closure in $\mathcal{D}$. Let \[ K_m = \bigcap\limits_{r \geq 1} \left\{ \mu \in \mathcal{M}(\mathcal{D}) \, \bigg\vert \, \mu(K_{m,r}) \geq 1 - \frac{1}{r} \right\}. \] We claim that $K_m$ has compact closure in $\mathcal{M}(\mathcal{D})$. Indeed, by Prokhorov's theorem it is enough to prove that $K_m$ is tight. If $\mu \in K_m$, then for any $r \geq 1$ we have $\mu(K_{m,r}^{c}) \leq \frac{1}{r}$, so the closures $\overline{K_{m,r}}$ form the family of compact sets needed for tightness of $K_m$. The sets $K_m$ (possibly after taking their closures) will form the family of compact sets needed for exponential tightness.
Thus our goal is to bound $\mathbb{P}^{N}(\mu^{\eta^{N}} \notin K_{m})$. Let us write \[ \mathbb{P}^{N}(\mu^{\eta^{N}} \notin K_{m}) = \mathbb{P}^{N}\left(\exists_{r \geq 1} \, \, \mu^{\eta^{N}} (K_{m,r}^{c}) \geq \frac{1}{r} \right) \leq \sum\limits_{r \geq 1} \mathbb{P}^{N}\left(\mu^{\eta^{N}} (K_{m,r}^{c}) \geq \frac{1}{r}\right). \] It is enough to show that for any $m,r \geq 1$ and any $N \geq 1$ we have \begin{equation}\label{eq:exp-tightness-m} \mathbb{P}^{N}\left(\mu^{\eta^{N}} (K_{m,r}^{c}) \geq \frac{1}{r} \right) \leq C e^{-mr N^{\gamma}}, \end{equation} where $C > 0$ is some global constant. For any given $m$ and $r$, observe that $\mu^{\eta^{N}} (K_{m,r}^{c}) \geq \frac{1}{r}$ means that in $\eta^N$ we have at least $\frac{N}{r}$ particles with paths $f \notin K_{m,r}$. Since $K_{m,r} = K^{w}_{m,r} \cap K^{0}_{m,r} \cap K^{T}_{m,r}$, clearly it is enough to estimate separately the probabilities that at least $\frac{N}{3r}$ particles have paths respectively not in $K^{w}_{m,r}$, $K^{0}_{m,r}$ or $K^{T}_{m,r}$. The argument for $K^{0}_{m,r}$ and $K^{T}_{m,r}$ is much simpler, so we skip it and concentrate only on the case of $K^{w}_{m,r}$. For simplicity we will write $\alpha(r) = \frac{1}{3r}$. For fixed $m$ and $r$ we will call a path $f$ \emph{bad} if $w_{\delta_{k}(m,r)}''(f) > \varepsilon_{k}$ for some $k \geq 1$. We will call $f$ \emph{bad exactly at scale $k$} if $w_{\delta_{k}(m,r)}''(f) > \varepsilon_{k}$, but $w_{\delta_{j}(m,r)}''(f) \leq \varepsilon_{j}$ for all $j \geq k+1$. Recalling the definition of the set $K^{w}_{m,r}$, the event whose probability we would like to bound is \[ A_{N}^{m,r} = \left\{ \mbox{there exist} \geq \alpha(r) N \mbox{ particles with bad paths} \right\}. \] Consider now the events \[ B_{N}^{m,r,k} = \left\{ \mbox{there exist} \geq \frac{\alpha(r)}{2^k} N \mbox{ particles whose paths are bad exactly at scale $k$} \right\}. \] Note that if $f$ is a bad path with jumps of fixed size $\frac{1}{N}$, then there exists $k \geq 1$ such that $f$ is bad exactly at scale $k$ (since all paths we are considering are c\`{a}dl\`{a}g). Thus we have $A_{N}^{m,r} \subseteq \bigcup\limits_{k \geq 1} B_{N}^{m,r,k}$, so \[ \mathbb{P}^{N}\left(\mu^{\eta^{N}} ((K_{m,r}^{w})^c) \geq \alpha(r)\right) = \mathbb{P}^{N}\left(A_{N}^{m,r}\right) \leq \sum\limits_{k \geq 1} \mathbb{P}^{N}(B_{N}^{m,r,k}). \] Thus it is enough to show that for any $m,r,k \geq 1$ and any $N \geq 1$ we have \begin{equation}\label{eq:bound-on-b} \mathbb{P}^{N}(B_{N}^{m,r,k}) \leq e^{-mrk N^\gamma}. \end{equation} From now on we fix $m,r,k$ and $N$. All paths we are considering are assumed to come from the interchange process $\eta^N$, in particular they have jumps of fixed size $\frac{1}{N}$. For the sake of brevity we will simply write $\delta_k = \delta_k(m,r)$. Let us divide the interval $[0,T]$ into $J = \frac{T}{\delta_k}$ intervals of the form $[j \delta_k, (j+1)\delta_k]$, $j=0, \ldots, J-1$. Consider any path $f$ which is bad exactly at scale $k$. The condition $w_{\delta_{k}}''(f) > \varepsilon_{k}$ implies that for some $t, t_1, t_2$ such that $t, t_2 \in [t_1, t_1 + \delta_k]$ we have $|f(t) - f(t_1)| > \varepsilon_k$ and $|f(t_2) - f(t)| > \varepsilon_k$. A simple application of the triangle inequality implies that there exist $j \in \{0, \ldots, J-1 \}$ and $t' \in [j \delta_k, (j+1)\delta_k]$ such that $|f(j\delta_k) - f(t')| > \frac{\varepsilon_k}{2}$. Let us consider the interval $[s, s'] = [j \delta_k, (j+1)\delta_k]$ obtained above.
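For completeness, the triangle inequality step can be spelled out as follows. Since $t_2 - t_1 \leq \delta_k$, the point $t$ lies in the same interval $[j\delta_k, (j+1)\delta_k]$ as $t_1$ or as $t_2$; say it shares an interval with $t_1$ (the other case is symmetric). Then \[ \varepsilon_{k} < |f(t) - f(t_{1})| \leq |f(t) - f(j\delta_{k})| + |f(j\delta_{k}) - f(t_{1})|, \] so at least one of the two terms on the right exceeds $\frac{\varepsilon_{k}}{2}$ and we may take $t'$ to be $t$ or $t_{1}$ accordingly. We now return to the interval $[s, s'] = [j\delta_k, (j+1)\delta_k]$.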
Subdivide it into $L = \frac{\delta_k}{\delta_{k+1}}$ intervals of the form $[s_{\ell}, s_{\ell+1}]$, $\ell = 0, \ldots, L-1$, where $s_{\ell} = s + \ell \delta_{k+1}$. For $\ell=0, \ldots, L-1$ let $\Delta_{\ell}(f) = |f(s_{\ell}) - f(s_{\ell+1})|$. The crucial observation is that $\sum\limits_{\ell=0}^{L-1} \Delta_{\ell}(f) > \frac{\varepsilon_k}{4}$. To see this, consider $t'$ such that $|f(s) - f(t')| > \frac{\varepsilon_k}{2}$, obtained above, and let $\tilde{\ell}$ be such that $t' \in (s_{\tilde{\ell}}, s_{\tilde{\ell} + 1}]$. By the triangle inequality we have \[ \frac{\varepsilon_{k}}{2} < |f(s) - f(t')| \leq \sum\limits_{\ell=0}^{\tilde{\ell} - 1} \Delta_{\ell}(f) + |f(s_{\tilde{\ell}}) - f(t')|. \] Since $f$ is bad exactly at scale $k$, we have $w_{\delta_{k+1}}''(f) \leq \varepsilon_{k+1}$, which together with $|s_{\tilde{\ell}} - t'| \leq \delta_{k+1}$ implies $|f(s_{\tilde{\ell}}) - f(t')| \leq \varepsilon_{k+1} = 4^{-(k+1)} = \frac{\varepsilon_{k}}{4}$. Thus necessarily $\sum\limits_{\ell=0}^{\tilde{\ell} - 1} \Delta_{\ell}(f) > \frac{\varepsilon_k}{4}$. From this we obtain \begin{equation}\label{eq:sum-of-delta} \frac{\varepsilon_k^2}{16} < \left( \sum\limits_{\ell=0}^{L-1} \Delta_{\ell}(f) \right)^2 \leq L \sum\limits_{\ell=0}^{L-1} \Delta_{\ell}(f)^2, \end{equation} where the right-hand side estimate follows from the Cauchy-Schwarz inequality. Now let us suppose that the event $B_{N}^{m,r,k}$ holds. Then there exist at least $\frac{\alpha(r)}{2^k}N$ paths $f_i$ for which the estimate \eqref{eq:sum-of-delta} holds. Consider the partition $\Pi = \{0 = t_0 < t_1 < \ldots < t_n = T \}$ where $n = \frac{T}{\delta_{k+1}}$, $t_j = j \delta_{k+1}$ for $j=0, \ldots, n$. Recalling that $f_i = \frac{1}{N}\eta^{N}(i)$, the definition of $\Delta_{\ell}(f)$ and the definition \eqref{eq:def-energy-fin-dim} of the energy $I^{\Pi}(\mu^{\eta^{N}})$ we obtain that on $B_{N}^{m,r,k}$ we have \begin{align*} I^{\Pi}(\mu^{\eta^{N}}) = &\frac{1}{N} \sum\limits_{i=1}^{N} \left( \frac{1}{2} \sum\limits_{j=1}^{n} \frac{| f_{i}(t_{j}) - f_{i}(t_{j-1}) |^2}{t_{j} - t_{j-1}} \right) = \\ & \frac{1}{N} \sum\limits_{i=1}^{N} \left(\frac{1}{2\delta_{k+1}} \sum\limits_{j=1}^{n} | f_{i}(t_{j}) - f_{i}(t_{j-1}) |^2 \right) > \frac{1}{N} \cdot \frac{\alpha(r)}{2^k}N \cdot \left( \frac{1}{2\delta_{k+1}} \frac{\varepsilon_k^2}{16 L} \right) = \\ & \frac{\alpha(r)}{2^{k+5}} \frac{1}{\delta_{k+1}} \frac{\varepsilon_k^2}{ \frac{\delta_k}{\delta_{k+1}}} = \frac{\varepsilon_k^2}{\delta_k} \frac{\alpha(r)}{2^{k+5}}. \end{align*} Writing again $\delta_k = \delta_k(m,r)$, we have thus obtained the bound \[ \mathbb{P}^{N}(B_{N}^{m,r,k}) \leq \mathbb{P}^{N}\left( I^{\Pi}(\mu^{\eta^{N}}) \geq \frac{\varepsilon_k^2}{\delta_k(m,r)} \frac{\alpha(r)}{2^{k+5}}\right). \] Recalling $\varepsilon_k = 4^{-k}$, $\alpha(r) = \frac{1}{3r}$, we see that to prove \eqref{eq:bound-on-b} it is sufficient to take $\delta_k(m,r)$ small enough so that \[ \frac{4^{-2k}}{\delta_k(m,r)} \frac{1}{3r 2^{k+5}} \geq 2 mrk. \] By applying Corollary \ref{cor:many-slices-energy} we obtain that \[ \mathbb{P}^{N}(B_{N}^{m,r,k}) \leq \mathbb{P}^{N}\left( I^{\Pi}(\mu^{\eta^{N}}) \geq 2mrk \right)\leq e^{-mrkN^{\gamma}} \] for $N$ large enough. By taking $\delta_k(m,r)$ even smaller if necessary we can make this estimate true for all values of $N \geq 1$, which proves \eqref{eq:bound-on-b} and finishes the proof of exponential tightness.
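For completeness, let us indicate how \eqref{eq:bound-on-b} yields \eqref{eq:exp-tightness-m} and hence the proposition. Summing \eqref{eq:bound-on-b} over $k \geq 1$ gives, for every $N \geq 1$, \[ \mathbb{P}^{N}\left(\mu^{\eta^{N}} ((K_{m,r}^{w})^c) \geq \alpha(r)\right) \leq \sum\limits_{k \geq 1} e^{-mrk N^\gamma} = \frac{e^{-mr N^{\gamma}}}{1 - e^{-mr N^{\gamma}}} \leq 2 e^{-mr N^{\gamma}}, \] and together with the analogous (simpler) bounds for $K^{0}_{m,r}$ and $K^{T}_{m,r}$ this gives \eqref{eq:exp-tightness-m}; summing \eqref{eq:exp-tightness-m} over $r \geq 1$ in the same way shows that $\limsup\limits_{N \to \infty} N^{-\gamma} \log \mathbb{P}^{N}(\mu^{\eta^{N}} \notin K_{m}) \leq -m$, as required.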
\end{proof} \end{section} \begin{section}{Asymptotics of relaxed sorting networks}\label{sec:asymptotics} In this section we prove the limiting behavior of random relaxed sorting networks, given by Theorem \ref{th:lln-for-relaxed-networks}, and the asymptotic counting formula of Theorem \ref{th:asymptotics-relaxed}. With the large deviation bounds obtained in the preceding sections both of the proofs are now rather straightforward. \begin{proof}[Proof of Theorem \ref{th:lln-for-relaxed-networks}] Let $\mathcal{R} \subseteq \mathcal{P}$ be the set of permuton processes $X$ reaching exactly the reverse permuton at time $1$, i.e., such that $(X_0, X_1) \sim (X, 1 - X)$, and likewise let $\mathcal{R}_N$ be the set of permutation processes on $N$ elements reaching exactly the reverse permutation $\mathrm{rev}_N = (N \, \ldots \, 2 \, 1)$ at time $1$. Let $\mathcal{R}_{\delta}$ denote the $\delta$-neighborhood in the Wasserstein distance on $\mathcal{P}$ of the set $\mathcal{R} \cup \bigcup\limits_{N \geq 1} \mathcal{R}_N$. Let $\eta^N$ be the interchange process with $\alpha = 1 + \kappa \in (1,2)$ and let $\mu^{\eta^N}$ be the distribution of the corresponding permutation process. By definition of a random relaxed sorting network, for any given $\delta > 0$ we have for sufficiently large $N$ \[ \mathbb{P}^N \left( \pi^{N}_{\delta} \in B(\pi_{\mathcal{A}}, \varepsilon) \right) = \mathbb{P}^N \left( \mu^{\eta^N} \in B(\pi_{\mathcal{A}}, \varepsilon) \big\vert \mu^{\eta^N} \in \mathcal{R}_{\delta} \right). \] Now, we have \[ \mathbb{P}^N \left( \mu^{\eta^N} \notin B(\pi_{\mathcal{A}}, \varepsilon) \big\vert \mu^{\eta^N} \in \mathcal{R}_{\delta} \right) = \frac{1}{\mathbb{P}^N\left( \mu^{\eta^N} \in \mathcal{R}_{\delta} \right)}\mathbb{P}^N \left( \left\{ \mu^{\eta^N} \notin B(\pi_{\mathcal{A}}, \varepsilon) \right\} \cap \left\{ \mu^{\eta^N} \in \mathcal{R}_{\delta} \right\} \right). \] By the large deviation lower bound of Theorem \ref{thm:lower-bound-for-minimizers} we have \[ \mathbb{P}^N\left( \mu^{\eta^N} \in \mathcal{R}_{\delta} \right) \geq \exp \left\{-N^\gamma \left( I(\pi_{\mathcal{A}}) + o(1) \right) \right\}, \] where $\gamma = 3 - \alpha$. Let $\mathcal{C}_{\varepsilon, \delta} = B(\pi_{\mathcal{A}}, \varepsilon)^{c} \cap \overline{\mathcal{R}_{\delta}}$. By the large deviation upper bound of Theorem \ref{th:upper-bound} we have \[ \mathbb{P}^N \left( \mu^{\eta^N} \in \mathcal{C}_{\varepsilon, \delta} \right) \leq \exp \left\{ -N^\gamma \left( \inf\limits_{\mu \in \mathcal{C}_{\varepsilon, \delta}} I(\mu) + o(1)\right) \right\}. \] Since $\pi_{\mathcal{A}}$ is the unique minimizer of energy on $\mathcal{R}$ (\cite{brenier}, \cite{mustazee}), given $\varepsilon > 0$ there exists $\beta = \beta(\varepsilon) > 0$ such that \begin{equation}\label{eq:infimum-sine} \inf\limits_{\mu \in B(\pi_{\mathcal{A}}, \varepsilon)^c \cap \mathcal{R}} I(\mu) \geq I(\pi_{\mathcal{A}}) + \beta. \end{equation} Since $I(\cdot)$ is lower semi-continuous, by \eqref{eq:infimum-sine} we obtain (possibly after adjusting $\delta$ to replace $\overline{\mathcal{R}_{\delta}}$ with $\mathcal{R}_{\delta}$) that for all sufficiently small $\delta$ we have \[ \inf\limits_{\mu \in B(\pi_{\mathcal{A}}, \varepsilon)^c \cap \mathcal{R}_{\delta}} I(\mu) \geq I(\pi_{\mathcal{A}}) + \frac{\beta}{2}. 
\] Altogether we obtain that for all sufficiently small $\delta$ (depending on $\varepsilon$ only) \[ \mathbb{P}^N \left( \mu^{\eta^N} \notin B(\pi_{\mathcal{A}}, \varepsilon) \big\vert \mu^{\eta^N} \in \mathcal{R}_{\delta} \right) \leq e^{N^{\gamma} \left( I(\pi_{\mathcal{A}}) + o(1) \right)} e^{- N^{\gamma} \left( I(\pi_{\mathcal{A}}) + \frac{\beta}{2} + o(1) \right)} = e^{-N^{\gamma} \left( \frac{\beta}{2} + o(1) \right)} \] and the right-hand side goes to $0$ as $N \to \infty$. \end{proof} \begin{proof}[Proof of Theorem \ref{th:asymptotics-relaxed}] Let $\mathbb{P}^N$ denote the law of the interchange process with $\alpha = 1 + \kappa$. Let $J$ be the number of all particle swaps in the process and let $M = \left\lfloor \frac{1}{2} N^{\alpha}(N-1) \right\rfloor$. Let $\mathcal{S}_{\delta}$ denote the $\delta$-neighborhood of the reverse permuton in the Wasserstein distance on $\mathcal{M}([0,1]^2)$. Observe that for sufficiently large $N$ we have for any $k \leq M$ \begin{equation}\label{eq:comparison-with-k} \mathbb{P}^N \left( \mu^{\eta^N}_{0,T} \in \mathcal{S}_{\delta} \big\vert J = k\right) \leq \mathbb{P}^N \left( \mu^{\eta^N}_{0,T} \in \mathcal{S}_{\delta} \big\vert J = M\right)(1 + o(1)). \end{equation} This is because if the process has done $k$ swaps up to time $T_k$ and $\mu^{\eta^N}_{0,T_k} \in \mathcal{S}_{\delta}$, then with high probability $\mu^{\eta^N}_{0,T_M} \in \mathcal{S}_{\delta}$ as well, since $\mathcal{S}_{\delta}$ is an open set in $\mathcal{M}([0,1]^2)$ and the additional number of steps done between $T_k$ and $T_M$ is $\leq \frac{1}{2} N^{\alpha}(N-1) = o(N^3)$, so typically almost all particles have negligible displacement. On the other hand, since in the interchange process each sequence of swaps of given length is equally likely, we have \[ \mathbb{P}^N \left( \mu^{\eta^N}_{0,T} \in \mathcal{S}_{\delta} \big\vert J = M\right) = \frac{|\mathcal{S}^{N}_{\kappa, \delta}|}{|\mathcal{P}^{N}_{M}|}, \] where $\mathcal{P}^{N}_{M}$ is the set of all sequences of adjacent transpositions of length $M$. Summing \eqref{eq:comparison-with-k} over $k \leq M$ we obtain \[ \mathbb{P}^N \left( \mu^{\eta^N}_{0,T} \in \mathcal{S}_{\delta} \cap \{ J \leq M \}\right) \leq \mathbb{P}^N \left( J \leq M \right) \frac{|\mathcal{S}^{N}_{\kappa, \delta}|}{|\mathcal{P}^{N}_{M}|}(1 + o(1)). \] Since under $\mathbb{P}^N$ the number of swaps $J$ has a Poisson distribution with mean $\frac{1}{2}N^{\alpha}(N-1)$, we have $\mathbb{P}^N \left( J \leq M \right) \to 1/2$ as $N \to \infty$. To estimate the left-hand side, let $\widetilde{\mathbb{P}}^N$ be the law of the biased interchange process corresponding to the sine curve process $\pi_{\mathcal{A}}$. Recall Lemma \ref{lm:radon-nikodym-lower-bound} and for fixed $\varepsilon > 0$ let $A$ be the event that the $o(1)$ term in the formula for $\frac{\mathrm{d} \mathbb{P}_{u}^{N}}{\mathrm{d} \widetilde{\mathbb{P}}^{N}}(T)$ is at most $\varepsilon$.
Let us write \[ \mathbb{P}^N \left( \mu^{\eta^N}_{0,T} \in \mathcal{S}_{\delta} \cap \{ J \leq M \} \cap A\right) \leq \mathbb{P}^N \left( \mu^{\eta^N}_{0,T} \in \mathcal{S}_{\delta} \cap \{ J \leq M \}\right) \] and \begin{align*} & \mathbb{P}^N \left( \mu^{\eta^N}_{0,T} \in \mathcal{S}_{\delta} \cap \{ J \leq M \} \cap A \right) = \\ & = \widetilde{\mathbb{P}}^N \left( \mu^{\eta^N}_{0,T} \in \mathcal{S}_{\delta} \cap \{ J \leq M \} \cap A \right) \frac{\mathbb{P}^N \left( \mu^{\eta^N}_{0,T} \in \mathcal{S}_{\delta} \cap \{ J \leq M \}\cap A \right)}{\widetilde{\mathbb{P}}^N \left( \mu^{\eta^N}_{0,T} \in \mathcal{S}_{\delta} \cap \{ J \leq M \} \cap A \right)}. \end{align*} By Theorem \ref{th:lln} the event $\mu^{\eta^N}_{0,T} \in \mathcal{S}_{\delta}$ holds with high probability under $\widetilde{\mathbb{P}}^N$ and, since the particle swap rates for the biased process sum up to $\frac{1}{2}N^{\alpha}(N-1)$ (recall \eqref{eq:biased-generator}), we have, similarly as for the unbiased process, $\widetilde{\mathbb{P}}^N \left( J \leq M \right) \to 1/2$ as $N \to \infty$. By Lemma \ref{lm:radon-nikodym-lower-bound} $A$ is a high probability event under $\widetilde{\mathbb{P}}^N$ as well. To estimate the remaining probabilities, we employ the formula for the Radon-Nikodym derivative from Lemma \ref{lm:radon-nikodym-lower-bound}. Since in the biased process with high probability the energy term in the derivative is close to $I(\pi_{\mathcal{A}}) = \frac{\pi^2}{6}$, we obtain \[ \frac{\mathbb{P}^N \left( \mu^{\eta^N}_{0,T} \in \mathcal{S}_{\delta} \cap \{ J \leq M \}\cap A \right)}{\widetilde{\mathbb{P}}^N \left( \mu^{\eta^N}_{0,T} \in \mathcal{S}_{\delta} \cap \{ J \leq M \} \cap A \right)} \geq e^{-N^{\gamma} \left( \frac{\pi^2}{6} + \varepsilon \right) + o(N^\gamma)}, \] where $\gamma = 3 - \alpha$. Altogether we obtain \[ |\mathcal{S}^{N}_{\kappa, \delta}| \geq |\mathcal{P}^{N}_{M}| e^{-N^{\gamma} \left( \frac{\pi^2}{6} + \varepsilon \right) + o(N^\gamma)}. \] Since $|\mathcal{P}^{N}_{M}| = (N-1)^M = e^{\lfloor \frac{1}{2} N^{\alpha}(N-1) \rfloor \log(N-1)}$ and $\varepsilon$ was arbitrary, we obtain the asymptotic lower bound on $|\mathcal{S}^{N}_{\kappa, \delta}|$ as claimed. For the upper bound, let $\mathcal{R}_{\delta}$ be as in the previous theorem. By the large deviation upper bound of Theorem \ref{th:upper-bound} we have \[ \mathbb{P}^N\left( \mu^{\eta^N}_{0,T} \in \mathcal{R}_{\delta} \right) \leq \exp \left\{ -N^{\gamma} \left( \inf\limits_{\mu \in \overline{\mathcal{R}_{\delta}}} I(\mu) + o(1) \right) \right\}. \] Since $I$ is lower semi-continuous, given any $\varepsilon > 0$ we have for all sufficiently small $\delta > 0$ \[ \inf\limits_{\mu \in \overline{\mathcal{R}_{\delta}}} I(\mu) \geq \inf\limits_{\mu \in \mathcal{R}} I(\mu) - \varepsilon = I(\pi_{\mathcal{A}}) - \varepsilon, \] where again we have used the energy minimization property of $\pi_{\mathcal{A}}$. Since $I(\pi_{\mathcal{A}}) = \frac{\pi^2}{6}$, this implies that for any $\varepsilon > 0$ and sufficiently small $\delta > 0$ \[ \mathbb{P}^N\left( \mu^{\eta^N} \in \mathcal{R}_{\delta} \right) \leq e^{-N^{\gamma} \left( I(\pi_{\mathcal{A}}) - \varepsilon + o(1) \right)}.
\] Now we estimate \[ \mathbb{P}^N\left( \mu^{\eta^N} \in \mathcal{R}_{\delta} \right) \geq \mathbb{P}^N\left( \mu^{\eta^N} \in \mathcal{R}_{\delta} \big\vert J = M\right) \mathbb{P}^N \left( J = M \right) = \frac{|\mathcal{S}^{N}_{\kappa, \delta}|}{|\mathcal{P}^{N}_{M}|}\mathbb{P}^N \left( J = M \right) \] and use the same asymptotic estimate for $|\mathcal{P}^{N}_{M}|$ as in the lower bound. Since $J$ is Poisson with mean $\frac{1}{2}N^\alpha (N-1)$ under $\mathbb{P}^N$, the second term on the right-hand side is $e^{O(\log N)}$. Altogether we obtain \[ |\mathcal{S}^{N}_{\kappa, \delta}| \leq \exp\left\{ \frac{1}{2}N^{1 + \kappa} (N-1) \log(N-1)- N^{2 - \kappa} \left( I(\pi_{\mathcal{A}}) - \varepsilon \right) + o(N^{2 - \kappa})\right\}, \] which proves the desired asymptotic upper bound on $|\mathcal{S}^{N}_{\kappa, \delta}|$. \end{proof} \end{section} {} \noindent Michał Kotowski \\ Institute of Mathematics of the Polish Academy of Sciences \\ ul. Śniadeckich 8 00-656 Warszawa, Poland\\ \noindent {E-mail:} {\tt [email protected]} \\ \noindent B\'{a}lint Vir\'{a}g \\ Department of Mathematics, University of Toronto \\ 40 St George St. Toronto, ON, M5S 2E4, Canada \\ \noindent {E-mail:} {\tt [email protected]} \\ \end{document}
\begin{document} \begin{abstract} We exhibit an infinite family of knots with isomorphic knot Heegaard Floer homology. Each knot in this infinite family admits a nontrivial genus two mutant which shares the same total dimension in both knot Floer homology and Khovanov homology. Each knot is distinguished from its genus two mutant by both knot Floer homology and Khovanov homology as bigraded groups. Additionally, for both knot Heegaard Floer homology and Khovanov homology, the genus two mutation interchanges the groups in $\delta$-gradings $k$ and $-k$. \end{abstract} \title{Genus two mutant knots with the same dimension in knot Floer and Khovanov homologies} \section{Introduction} \label{sec:Introduction} Genus two mutation is an operation on a three-manifold $M$ in which an embedded, genus two surface $F$ is cut from $M$ and reglued via the hyperelliptic involution $\tau$. The resulting manifold is denoted $M^\tau$. When $M$ is the three-sphere, the genus two mutant manifold $(S^3)^\tau$ is homeomorphic to $S^3$ (see Section~\ref{sec:Mutation}). If $K\subset S^3$ is a knot disjoint from $F$, then the knot that results from performing a genus two mutation of $S^3$ along $F$ is denoted $K^\tau$ and is called a {\it genus two mutant of the knot} $K$. The related operation of Conway mutation in a knot diagram can be realized as a genus two mutation or a composition of two genus two mutations (see Section~\ref{sec:Mutation}). In~\cite{OS:GenusBounds}, Ozsv\'ath and Szab\'o demonstrate that as a bigraded object, knot Heegaard Floer homology can detect Conway mutation. However, it can be observed that in all known examples~\cite{BG:Computations}, the rank of $\widehat{\operatorname{HFK}}(K)$ as an ungraded object remains invariant under Conway mutation. The question of whether the rank of knot Floer homology is unchanged under Conway mutation, or more generally, genus two mutation, remains an interesting open problem. Moreover, while it is known that Khovanov homology with $\mathbb{F}_2=\mathbb{Z}/2\mathbb{Z}$ coefficients is invariant under Conway mutation~\cite{Bloom},\cite{Wehrli:Mutation}, the corresponding statement for $\mathbb{Z}$ coefficients is likewise unknown. The invariance of the rank of Khovanov homology under genus two mutation constitutes a natural generalization of the question. Recently, Baldwin and Levine have conjectured~\cite{BL:Spanning} that the $\delta$-graded knot Floer homology groups \[ \widehat{\operatorname{HFK}}_{\delta}(L) = \bigoplus_{\delta=a-m}\widehat{\operatorname{HFK}}_m(L,a) \] are unchanged by Conway mutation, which would imply, amongst other things, that their total ranks are preserved. A parallel conjecture can be made for $\delta$-graded Khovanov homology, where the $\delta$-graded Khovanov homology groups are given by \[ \operatorname{Kh}_{\delta}(L) = \bigoplus_{\delta=q-2i}\operatorname{Kh}_q^i(L). \] In this note, we offer an example of an infinite family of knots with isomorphic knot Floer homology, all of which admit a genus two mutant of the same dimension in both $\widehat{\operatorname{HFK}}$ and $\operatorname{Kh}$, though each pair is distinguished by both $\widehat{\operatorname{HFK}}$ and $\operatorname{Kh}$ as bigraded vector spaces.\footnote{Because we compute $\widehat{\operatorname{HFK}}$ and $\operatorname{Kh}$ as graded vector spaces over $\mathbb{Z}/2\mathbb{Z}$ or $\mathbb{Q}$, the theorem has been formulated in terms of dimension rather than rank.
} Additionally, we show that both the $\delta$-graded $\widehat{\operatorname{HFK}}$ and $\operatorname{Kh}$ groups distinguish the genus two mutant pairs. Here, knot Floer homology computations are done with $\mathbb{F}_2$-coefficients, and Khovanov homology computations are done with $\mathbb{Q}$-coefficients. \begin{theorem} \label{thm:main} There exists an infinite family of genus two mutant pairs $(K_n, K_n^\tau)$, $n\in\mathbb{Z}^+$, in which \begin{enumerate} \item each infinite family has isomorphic knot Floer homology groups, \begin{eqnarray*} \widehat{\operatorname{HFK}}_m(K_n,a) &\cong& \widehat{\operatorname{HFK}}_m(K_0,a), \;\text{for all } m,a \\ \widehat{\operatorname{HFK}}_m(K_n^\tau,a) &\cong& \widehat{\operatorname{HFK}}_m(K_0^\tau,a), \; \text{for all } m,a, \end{eqnarray*} \item each genus two mutant pair shares the same total dimension in $\widehat{\operatorname{HFK}}$ and $\operatorname{Kh}$, \begin{eqnarray*} \sum_{m,a} \dim_{\mathbb{F}_2}\;\widehat{\operatorname{HFK}}_m(K_n,a) &=& \sum_{m,a} \dim_{\mathbb{F}_2}\;\widehat{\operatorname{HFK}}_m(K_n^\tau,a) \\ \sum_{i,q} \dim_{\mathbb{Q}}\;\operatorname{Kh}^i_q (K_n) &=& \sum_{i,q} \dim_{\mathbb{Q}}\;\operatorname{Kh}^i_q (K_n^\tau), \end{eqnarray*} \item each genus two mutant pair is distinguished by $\widehat{\operatorname{HFK}}$ and $\operatorname{Kh}$ as bigraded groups, \begin{eqnarray*} \widehat{\operatorname{HFK}}_m(K_n,a) &\not\cong& \widehat{\operatorname{HFK}}_m(K_n^\tau,a) \; \text{for some }m,a\\ \operatorname{Kh}^i_q(K_n) &\not\cong& \operatorname{Kh}_q^i(K_n^\tau) \; \text{for some }i,q, \end{eqnarray*} \item each genus two mutant pair is distinguished by $\delta$-graded $\widehat{\operatorname{HFK}}$ and $\delta$-graded $\operatorname{Kh}$, and moreover \begin{eqnarray*} \widehat{\operatorname{HFK}}_\delta(K_n) &\cong& \widehat{\operatorname{HFK}}_{-\delta}(K_n^\tau) \; \text{for all } \delta\\ \operatorname{Kh}_\delta(K_n) & \cong& \operatorname{Kh}_{-\delta}(K_n^\tau) \; \text{for all } \delta. \\ \end{eqnarray*} \end{enumerate} \end{theorem} This example suggests that having invariant dimension of knot Floer homology or Khovanov homology is a property shared not only by Conway mutants, but by genus two mutant knots as well, offering positive evidence towards all the above open questions about total rank. \subsection{Organization} In Section~\ref{sec:Mutation} we review genus two mutation and describe the infinite family of genus two mutant pairs. In Section~\ref{sec:Floer} we show that within each infinite family $\{K_n\}$ and $\{K^\tau_n\}$, the knots have isomorphic knot Heegaard Floer homology and that the two families share the same total dimension. In Section~\ref{sec:Khovanov} we show that each family also shares the same dimension of Khovanov homology. In Section~\ref{sec:Observation} we mention a few observations. \section{Genus Two Mutation} \label{sec:Mutation} \begin{figure} \caption{The genus two surface $F$ and hyperelliptic involution $\tau$.} \label{fig:genus2surface} \end{figure} Let $F$ be an embedded, genus two surface in a compact, orientable three-manifold $M$, equipped with the hyperelliptic involution $\tau$. A genus two mutant of $M$, denoted $M^\tau$, is obtained by cutting $M$ along $F$ and regluing the two copies of $F$ via $\tau$~\cite{DGST:Behavior}. The involution $\tau$ has the property that an unoriented simple closed curve $\gamma$ on $F$ is isotopic to its image $\tau(\gamma)$. When $M=S^3$, any closed surface $F\subset S^3$ is compressible.
This implies by the Loop Theorem that $(S^3)^\tau$ is homeomorphic to $S^3$~\cite{DGST:Behavior}. Therefore, if $S^3$ contains a knot $K$ disjoint from $F$, mutation along $F$ is a well-defined homeomorphism of $S^3$ taking a knot $K$ to a potentially different knot $K^\tau$~\cite{DGST:Behavior}. In this note, we restrict our attention to surfaces of mutation which bound a handlebody containing $K$ in its interior. These mutations are called \emph{handlebody mutations}. A Conway mutant of a knot $K\subset S^3$ is similarly obtained by an operation under which a Conway sphere $S$ intersects $K$ in four points and bounds a ball containing a tangle. The ball containing the tangle is replaced by its image under a rotation by $\pi$ about a coordinate axis. In fact, Conway mutation of a knot can be realized as a special case of genus two mutation. Since $S$ separates $K$ into two tangles, i.e. \[ K = T_1 \cup_S T_2, \] a genus two surface $F$ is formed by taking $S$ and tubing along either $T_1$ or $T_2$. The Conway mutation is then achieved by performing at most two such genus two mutations~\cite{DGST:Behavior}. Like Conway mutants, genus two mutants are difficult to detect and are indistinguishable by many knot invariants~\cite{DGST:Behavior}. \begin{theorem}~\cite{CL:Mutations},~\cite{MT:Jones} \label{thm:AlexanderJones} The Alexander polynomial and colored Jones polynomials for all colors of a knot in $S^3$ are invariant under genus two mutation. Generalized signature is invariant under genus two handlebody mutation. \end{theorem} \begin{theorem}~\cite[Theorem 1.3]{Ruberman:Volumes} \label{thm:Volume} Let $K^\tau$ be a genus two mutation of the hyperbolic knot $K$. Then $K^\tau$ is also hyperbolic, and the volumes of their complements are the same. \end{theorem} Theorem~\ref{thm:Volume} is a special case of a more general theorem which shows that the Gromov norm is preserved under mutation along any of several symmetric surfaces, including the genus two surface on which we are focused here. Ruberman also shows that cyclic branched coverings and Dehn surgeries along a Conway mutant knot pair yield manifolds of the same Gromov norm. Moreover, it is well-known that Conway mutation preserves the homeomorphism type of the branched double covering. In light of this, it is natural to ask whether $\Sigma_2(K)$ is homeomorphic to $\Sigma_2(K^\tau)$; however, this is not the case in general. We verify this by investigating the pair of genus two mutant knots in Figure~\ref{fig:mutantpair}, which we call $K_0$ and $K_0^\tau$ and which are known as $14^n_{22185}$ and $14^n_{22589}$ in Knotscape notation. \begin{figure} \caption{The genus two mutant pair $K_0=14^n_{22185}$ and $K_0^\tau=14^n_{22589}$.} \label{fig:mutantpair} \end{figure} \begin{proposition} \label{prop:nothomeomorphic} The branched double covers of $K_0$ and $K_0^\tau$ are not homeomorphic. \end{proposition} \begin{proof} This is a fact which can be checked by computing the geodesic length spectra of $\Sigma_2(K_0)$ and $\Sigma_2(K_0^\tau)$ in SnapPy~\cite{SnapPy} with the following code snippet.\\ \centerline{ \fbox{ \begin{scriptsize} \lstinputlisting{arclengths.txt} \end{scriptsize} } } The complex length spectrum of a compact hyperbolic three-orbifold $M$ is the collection of all complex lengths of closed geodesics in $M$ counted with their multiplicities (Chapter 12 of~\cite{Alan:Arithmetic}).
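For orientation, the following minimal SnapPy session sketches the kind of comparison involved; it is not the listing \texttt{arclengths.txt} above, and the triangulation file names appearing in it are hypothetical placeholders for triangulations of $\Sigma_2(K_0)$ and $\Sigma_2(K_0^\tau)$.
\begin{verbatim}
# Illustrative sketch only (not the arclengths.txt listing); the .tri file
# names are hypothetical placeholders for the two branched double covers.
import snappy
M  = snappy.Manifold('Sigma2_K0.tri')
Mt = snappy.Manifold('Sigma2_K0tau.tri')
# Closed geodesics, with their complex lengths, up to a chosen cutoff length:
print(M.length_spectrum(2.0))
print(Mt.length_spectrum(2.0))
\end{verbatim}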
SnapPy demonstrates that the complex length spectra of $\Sigma_2(K_0)$ and $\Sigma_2(K_0^\tau)$, computed up to a common upper bound on length, are different; therefore these manifolds are not isospectral, and hence not isometric. Mostow rigidity says that the hyperbolic structure on a finite-volume hyperbolic three-manifold is unique, so $\Sigma_2(K_0)$ and $\Sigma_2(K_0^\tau)$ are not homeomorphic. \end{proof} \begin{corollary} \label{cor:notmutants} The genus two mutant pair $K_0$ and $K_0^\tau$ are not Conway mutants. \end{corollary} \begin{proof} Since Conway mutants have homeomorphic branched double covers, this follows directly from Proposition~\ref{prop:nothomeomorphic}. \end{proof} We will continue to explore the pair $14^n_{22185}$ and $14^n_{22589}$. As genus two mutants, they share all of the properties mentioned in Theorems~\ref{thm:AlexanderJones} and~\ref{thm:Volume}. Moreover, $14^n_{22185}$ and $14^n_{22589}$ are also shown in~\cite{DGST:Behavior} to have the same HOMFLY-PT and Kauffman polynomials, although in general these polynomials are known to distinguish larger examples of genus two mutant knots~\cite{DGST:Behavior}. Just as a subtler hyperbolic invariant was required to distinguish their branched double covers, we require a subtler quantum invariant to distinguish the knot pair. The categorified invariants $\widehat{\operatorname{HFK}}$ and $\operatorname{Kh}$ do the trick. \begin{theorem} \label{thm:differenthomologies} The genus two mutant knots $K_0$ and $K_0^\tau$ are distinguished by their knot Heegaard Floer homology and Khovanov homology, as well as by their $\delta$-graded versions. \end{theorem} See Table~\ref{table:hfkandkh}. Khovanov homology with $\mathbb{Z}$ coefficients was computed in~\cite{DGST:Behavior} using KhoHo~\cite{Shumakovitch:Program}. Here, we include Khovanov homology with rational coefficients computed with the Mathematica program JavaKH-v2~\cite{GM:Program}. Since $\widehat{\operatorname{HFK}}$ is known to detect Conway mutation \cite{OS:GenusBounds}, it is not surprising that knot Floer homology can distinguish genus two mutant pairs. Nonetheless, the knot Floer groups $\widehat{\operatorname{HFK}}(K_0)$ and $\widehat{\operatorname{HFK}}(K^\tau_0)$ have been computed using the Python program of Droz~\cite{Droz:Program}. The key observation is that although both knot Floer homology and Khovanov homology distinguish the genus two mutants as bigraded vector spaces, in both cases the pairs are indistinguishable as ungraded objects.
\begin{center} \begin{table} \begin{tabular}{|c|} \hline \begin{tabular}{cc} & \\ $\bgroup\arraycolsep=3pt \begin{array}{|r|rrrrr|}\hline \multicolumn{6}{|c|}{\widehat{\operatorname{HFK}}(K_0) } \\ \hline &-2&-1&0&1&2 \\ \hline 3& & & & & \mathbb{F} \\ 2& & & & \mathbb{F}^2& \mathbb{F} \\ 1& & &\mathbb{F}^2 & \mathbb{F}^2& \\ 0& & \mathbb{F}^2&\mathbb{F}^3 & & \\ -1&\mathbb{F} & \mathbb{F}^2& & & \\ -2& \mathbb{F}& & & & \\ \hline \multicolumn{6}{|c|}{\dim =17}\\ \hline \end{array} \egroup$ & \hspace{0.5cm} $\bgroup\arraycolsep=3pt \begin{array}{|r|rrr|}\hline \multicolumn{4}{|c|}{\widehat{\operatorname{HFK}}(K_0^\tau) } \\ \hline & -1& 0& 1 \\ \hline 1& & & \mathbb{F}^2\\ 0& & \mathbb{F}^5& \mathbb{F}^2\\ -1& \mathbb{F}^2&\mathbb{F}^4 & \\ -2& \mathbb{F}^2& & \\ \hline \multicolumn{4}{|c|}{\dim =17}\\ \hline \end{array} \egroup$ \\ \end{tabular}\\ \begin{tabular}{cc} & \\ $\bgroup\arraycolsep=3pt \begin{array}{|r|rrrrr|r|}\hline \multicolumn{7}{|c|}{\delta-\text{graded } \widehat{\operatorname{HFK}}(K_0) } \\ \hline &-2&-1&0&1&2 & \dim \\ \hline a-m=-1&\mathbb{F} & \mathbb{F}^2& \mathbb{F}^2& \mathbb{F}^2& \mathbb{F}& 8 \\ a-m=0&\mathbb{F} & \mathbb{F}^2& \mathbb{F}^3& \mathbb{F}^2& \mathbb{F}& 9 \\ \hline \multicolumn{7}{|c|}{\dim =17}\\ \hline \end{array} \egroup$ & \hspace{0.5cm} $\bgroup\arraycolsep=3pt \begin{array}{|r|rrr|r|}\hline \multicolumn{5}{|c|}{\delta-\text{graded }\widehat{\operatorname{HFK}}(K_0^\tau) } \\ \hline & -1& 0& 1 & \dim \\ \hline a-m=0& \mathbb{F}^2& \mathbb{F}^5& \mathbb{F}^2 & 9\\ a-m=+1& \mathbb{F}^2&\mathbb{F}^4 & \mathbb{F}^2 & 8\\ \hline \multicolumn{5}{|c|}{\dim =17}\\ \hline \end{array} \egroup$ \\ & \end{tabular} \\ \begin{tabular}{|cc|c|} \hline $\operatorname{Kh}(K_0;\mathbb{Q}) = $ & $1_{\underline{13}}^{\underline{7}}\, 1_{\underline{9}}^{\underline{6}}\, 1_{\underline{7}}^{\underline{4}}\, 1_{\underline{7}}^{\underline{3}}\, 1_{\underline{3}}^{\underline{3}}\, 1_{\underline{5}}^{\underline{2}}\, 1_{\underline{3}}^{\underline{2}}\, 1_{\underline{3}}^{\underline{1}}\, 1_{\underline{1}}^{\underline{1}}\, 1_{\underline{3}}^{0}\, 2_{\underline{1}}^{0}\, 2_{1}^{0}\, 2_{1}^{1}\, 1_{3}^{1}\, 1_{1}^{2}\, 1_{3}^{2}\, 1_{5}^{2}\, 1_{3}^{3}\, 1_{5}^{3}\, 1_{7}^{3}\, 1_{7}^{4}\, 1_{7}^{5}\, 1_{11}^{6}$ & $\dim =26$\\ \hline $\operatorname{Kh}(K^{\tau}_0;\mathbb{Q}) =$ & $1_{\underline{13}}^{\underline{7}}\, 1_{\underline{9}}^{\underline{6}}\, 1_{\underline{9}}^{\underline{5}}\, 1_{\underline{9}}^{\underline{4}}\, 1_{\underline{7}}^{\underline{4}}\, 1_{\underline{5}}^{\underline{4}}\, 1_{\underline{7}}^{\underline{3}}\, 1_{\underline{5}}^{\underline{3}}\, 1_{\underline{3}}^{\underline{3}}\, 1_{\underline{5}}^{\underline{2}}\, 2_{\underline{3}}^{\underline{2}}\, 1_{\underline{3}}^{\underline{1}}\, 1_{\underline{1}}^{\underline{1}}\, 1_{1}^{\underline{1}}\, 2_{\underline{1}}^{0}\, 2_{1}^{0}\, 1_{1}^{1}\, 1_{3}^{1}\, 1_{1}^{2}\, 1_{5}^{2}\, 1_{5}^{3}\, 1_{7}^{5}\, 1_{11}^{6}$ & $\dim=26$\\ \hline \end{tabular} \\ \begin{tabular}{cc} & \\ $\bgroup\arraycolsep=3pt \begin{array}{|l|l|}\hline \multicolumn{2}{|l|}{\delta-\text{graded } \operatorname{Kh}(K_0) } \\ \hline q-2i=-3& 4 \\ q-2i=-1& 11 \\ q-2i=1& 9 \\ q-2i=3& 2 \\ \hline \end{array} \egroup$ & \hspace{0.5cm} $\bgroup\arraycolsep=3pt \begin{array}{|l|l|}\hline \multicolumn{2}{|l|}{\delta-\text{graded } Kh(K_0^\tau) } \\ \hline q-2i=-3& 2\\ q-2i=-1& 9\\ q-2i=1& 11 \\ q-2i=3& 4 \\ \hline \end{array} \egroup$ \\ \end{tabular} \\ \\ \hline \end{tabular} \caption{Knot Floer groups are displayed with Maslov grading 
on the vertical axis and Alexander grading on the horizontal axis. Computation~\cite{Droz:Program} also confirms that $\widehat{\operatorname{HFK}}(K_0)\cong\widehat{\operatorname{HFK}}(K_1)$ and $\widehat{\operatorname{HFK}}(K^\tau_0)\cong\widehat{\operatorname{HFK}}(K^\tau_1)$. For Khovanov homology, $\mathbf{R}^i_j$ denotes Khovanov groups in homological grading $i$ and quantum grading $j$ with dimension $\mathbf{R}$. The underline denotes negative gradings. This notation originated in \cite{BarNatan:OnKhovanov}.}\label{table:hfkandkh} \end{table} \end{center} \begin{figure} \caption{The surface of mutation for all $K_n$. Note the surface bounds a handlebody.} \label{fig:mutationbody} \end{figure} \begin{figure} \caption{Oriented and unoriented skein triples.} \label{fig:orientedskein} \label{fig:unorientedskein} \label{fig:skeintriples} \end{figure} We will derive an infinite family of knots from the pair $14^n_{22185}$ and $14^n_{22589}$. Notice that each of these can be formed as the band sum of a two-component unlink. Let us call $14^n_{22185}$ and $14^n_{22589}$ by $K_0$ and $K^\tau_0$, respectively. By adding $n$ half-twists with positive crossings to the bands of $K_0$ and $K^\tau_0$, as in Figure~\ref{fig:skeintriples}, we obtain knots $K_n$ and $K^\tau_n$. It is visibly clear that $K^\tau_n$ is the genus two mutant of $K_n$ by the same surface of mutation relating $K_0$ and $K^\tau_0$, illustrated in Figure~\ref{fig:mutationbody}. Observe that by resolving a crossing in the twisted band, $K_n$ and $K_{n-2}$ fit into an oriented skein triple $(L_+,L_-,L_0)$ with $L_0$ equal to the two-component unlink $\mathcal{U}$ for all integers $n>1$. Moreover, $K_n$ and $K_{n-1}$ fit into an unoriented skein triple, again with third term the unlink. $K^\tau_n, K^\tau_{n-1} ,K^\tau_{n-2}$ and $\mathcal{U}$ fit into these same oriented and unoriented skein triples. \begin{figure} \caption{A smooth cobordism illustrating that $K_n$ is slice. } \label{fig:knotcobordism} \end{figure} \begin{lemma} \label{lemma:tauands} The Ozsv\'ath and Szab\'o~$\tau$ invariant and Rasmussen $s$ invariant vanish for all $K_n$ and $K_n^\tau$. \end{lemma} \begin{proof} The knots $K_n$ and $K_n^\tau$ are formed from the band sum of a two-component unlink. In general, if $K$ is any such knot, then $K$ is smoothly slice. This is a standard fact (see for example~\cite[p. 86]{Lickorish}), and the slicing disk is illustrated in Figure~\ref{fig:knotcobordism}. Ozsv\'ath and Szab\'o~define the smooth concordance invariant $\tau(K)\in \mathbb{Z}$ in~\cite[Corollary 1.3]{OS:FourBallGenus} and Rasmussen defines a smooth concordance invariant $s(K)\in2\mathbb{Z}$ in~\cite[Theorem 1]{Rasmussen:Khovanov}. Both $\tau(K)$ and $s(K)$ provide lower bounds on the four-ball genus. \[ |\tau(K)| \leq g_*(K) \text{ \;\;\; and \;\;\; } |s(K)| \leq 2g_*(K). \] Since all of our knots are slice, we immediately obtain $\tau=s=0$. \end{proof} \section{Knot Floer Homology} \label{sec:Floer} Knot Floer homology is a powerful invariant of oriented knots and links in an oriented three manifold $Y$, developed independently by Ozsv\'ath and Szab\'o~\cite{OS:KnotInvariants} and Rasmussen~\cite{Rasmussen:Thesis}. We tersely paraphrase~Ozsv\'ath and Szab\'o's construction of the invariant for knots from~\cite{OS:KnotInvariants}, and refer the reader to~\cite{OS:KnotInvariants} for details of the construction. 
\subsection{Background from knot Floer homology} To a knot $K\subset S^3$ is associated a doubly pointed Heegaard diagram $(\Sigma, \boldsymbol\alpha,\boldsymbol\beta,z,w)$. The data of the Heegaard diagram gives rise to chain complexes $(\operatorname{CFK}^-(K), \partial^-)$ and $(\widehat{\operatorname{CFK}}(K), \widehat{\partial})$. These complexes come equipped with a bigrading $(M,A)$, where $M$ denotes Maslov grading and $A$ denotes Alexander grading. $\operatorname{CFK}^-(K)$ is an $\mathbb{F}_2[U]$-module, where the action of $U$ reduces $A$ by one and $M$ by two. The differentials $\partial^-$ and $\widehat{\partial}$ preserve $A$ and reduce $M$ by one. The homology groups $\operatorname{HFK}^-(K)$ and $\widehat{\operatorname{HFK}}(K)$ are invariants of $K$. We will require the following theorem of Ozsv\'ath and Szab\'o, specialized to the case where $L_+$ and $L_-$ are knots, which we state without proof. \begin{theorem}~\cite[Theorem 1.1]{OS:Squence} \label{thm:skein} Let $L_+$, $L_-$ and $L_0$ be three oriented links, which differ at a single crossing as indicated by the notation. Then, if $L_+$ and $L_-$ are knots, there is a long exact sequence \[ \cdots\longrightarrow \operatorname{HFK}^-_m(L_+,a)\stackrel{f^-}{\longrightarrow}\operatorname{HFK}^-_{m}(L_-,a) \stackrel{g^-}{\longrightarrow} H_{m-1} \left( \frac{\operatorname{CFL}^-(L_0)}{U_1-U_2},a \right) \stackrel{h^-}{\longrightarrow}\operatorname{HFK}^-_{m-1}(L_+,a)\longrightarrow \cdots \] \end{theorem} We remark that the skein exact sequence of Theorem~\ref{thm:skein} is derived from a mapping cone construction. Indeed, Ozsv\'ath and Szab\'o show in~\cite[Theorem 3.1]{OS:Squence} that there is a chain map $f:\operatorname{CFK}^-(L_+)\rightarrow \operatorname{CFK}^-(L_-)$ whose mapping cone is quasi-isomorphic to the mapping cone of the chain map $U_1 - U_2 : \operatorname{CFL}^-(L_0)\rightarrow \operatorname{CFL}^-(L_0)$, which is in turn quasi-isomorphic to the complex $\operatorname{CFL}^-(L_0)/(U_1 - U_2)$. The maps in the diagram appearing in~\cite[Section 3.1]{OS:Squence} which determine the quasi-isomorphism from the cone of $f$ to the cone of $U_1 - U_2$ are $U$-equivariant. The map $f^-$ appearing in the sequence above is the map induced on homology by $f$. The maps $g^-$ and $h^-$ are induced by inclusions and projections of the mapping cone of $f$ along with the quasi-isomorphism. Therefore the long exact sequence is $U$-equivariant. \begin{lemma} \label{lemma:unlink} Let $\mathcal{U}$ be the two-component unlink in $S^3$. $\mathcal{U}$ corresponds with the unknot $\widetilde{\mathcal{U}}\subset S^2\times S^1$, whose knot Floer homology is \begin{eqnarray} \widehat{\operatorname{HFK}}(S^3,\mathcal{U}) \cong {\mathbb{F}_2}_{\begin{tiny}\begin{array}{l}m=0 \\ a=0\end{array}\end{tiny} }\oplus {\mathbb{F}_2}_{\begin{tiny}\begin{array}{l}m=-1 \\ a=0\end{array}\end{tiny}} \\ H_*\left(\frac{\operatorname{CFL}^-(\mathcal{U})}{U_1-U_2} \right) \cong \widehat{\operatorname{HFK}}(S^3,\mathcal{U})\otimes_{\mathbb{F}_2} \mathbb{F}_2[U] \end{eqnarray} where in the module $\mathbb{F}_2[U]$, the action of $U$ drops the Maslov grading by two and the Alexander grading by one.
\end{lemma} \begin{proof} A Heegaard diagram for $\widetilde{\mathcal{U}}\subset S^2\times S^1$ can be constructed by taking a genus one splitting of $S^2\times S^1$ with two curves, $\alpha$ and $\beta$, intersecting in two points $\mathbf{x}$ and $\mathbf{y}$. Place basepoints $z$ and $w$ inside the annular region such that $\mathbf{x}$ is connected to $\mathbf{y}$ by two disks. Since it is a genus one splitting we count only $\phi$ corresponding to domains that are disks. As an application of the Riemann mapping theorem, $\#\widehat{\mathcal{M}}(\phi)=\pm1$ for each such $\phi$. Therefore the differential is zero in both $\widehat{\operatorname{CFK}}(S^2\times S^1, \widetilde{\mathcal{U}})$ and $\operatorname{CFK}^-(S^2\times S^1, \widetilde{\mathcal{U}})$. The relative grading difference is evident from the diagram and pinned down by the observation that $\mathcal{U}\subset S^3$ fits into a skein exact sequence (Theorem~\ref{thm:skein}) with the unknot. \end{proof} \subsection{Knot Floer homology proof} The main objective of this section is to show that each knot in the family $\{K_n\}$ has knot Floer homology isomorphic to $\widehat{\operatorname{HFK}}(K_0)$, and that each knot in the family $\{K_n^\tau\}$ has knot Floer homology isomorphic to $\widehat{\operatorname{HFK}}(K^\tau_0)$. Similar computations generating knots with isomorphic knot homologies occur in the work of the second author~\cite{Starkston}, Watson~\cite{Watson:IdenticalKH} and Greene and Watson~\cite{GW:Turaev}, to name a few. Theorem~\ref{thm:floergroups} is a special case of an observation originally due to Hedden. It will soon appear as part of a more general result of Hedden and Watson in~\cite{Hedden:Botany}. We include a proof only for the sake of completeness and the benefit of the reader. \begin{theorem}~\cite{Hedden:Botany} \label{thm:floergroups} Let $K$ be a knot in $S^3$ formed from the band sum of a two-component unlink, and let $\{K_n\}$ denote the family of knots obtained by adding $n$ half-twists with positive crossings to the band. For all $m, a \in \mathbb{Z}$ and all integers $n\geq 2$, $\operatorname{HFK}^-_m(K_{n},a) \cong \operatorname{HFK}^-_m(K_{n-2},a)$. \end{theorem} \begin{proof} The proof is by induction on $n$. Just as with the specific families of knots described above, $K_n$ fits into the skein triple $(K_n, K_{n-2}, \mathcal{U})$. Theorem~\ref{thm:skein} applied to the skein triple gives a long exact sequence \[ \cdots\rightarrow \operatorname{HFK}^-_m(K_n,a)\stackrel{f^-}{\longrightarrow}\operatorname{HFK}^-_{m}(K_{n-2},a) \stackrel{g^-}{\longrightarrow}H_{m-1} \left( \frac{\operatorname{CFL}^-(\mathcal{U})}{U_1-U_2},a \right) \stackrel{h^-}{\longrightarrow}\operatorname{HFK}^-_{m-1}(K_n,a)\rightarrow\cdots . \] We will use this sequence in conjunction with information coming from the $\tau$ invariant. By Lemma~\ref{lemma:tauands}, $\tau(K_n)=0$ for all $n$. Because we are working with $\operatorname{HFK}^-(K)$, we will use the definition of $\tau$ appearing in~\cite[Appendix]{OS:Legendrian}, where $m(K)$ denotes the mirror of $K$. \[ \tau(m(K)) = \max \{a\;|\;\exists \; \xi\in\operatorname{HFK}^-(K,a) \text{ such that } U^d\xi\neq0\text{ for all integers } d\geq 0 \}.
\] Moreover, for a homogeneous element $\xi\in\operatorname{HFK}^-(K,\tau(m(K)))$ such that $U^d\xi\neq0\;\forall d\geq 0$, the Maslov grading of $\xi$ is given by $m=2\tau(m(K))$. This fact can be verified by following the argument given in~\cite[Appendix]{OS:Legendrian}, keeping careful track of the bigrading shifts at each step. Since $\tau(K_n)=0$, we have the additional fact that $\tau(K_n)=\tau(m(K_n))$. The non-torsion summand of $\operatorname{HFK}^-(K_n)$ is generated by an element $\xi_n$ with maximal bigrading $(2\tau(m(K)), \tau(m(K)))$, which in this case is $(0,0)$. The third term $H_* \left( \frac{\operatorname{CFL}^-(L_0)}{U_1-U_2},0 \right)$ of the skein triple corresponds with the two-component unlink and is freely generated over $\mathbb{F}_2[U]$ by elements $z$ and $z'$ in bigradings $(0,0)$ and $(-1,0)$. Since $\operatorname{HFK}^-(\mathcal{U})$ is supported entirely in bigradings $(-2d,-d)$ and $(-2d-1,-d)$, the long exact sequence immediately supplies isomorphisms $\operatorname{HFK}^-_m(K_n,a)\cong \operatorname{HFK}^-_m(K_{n-2},a)$ whenever $a=-d\leq0$ and $|m-2a| > 1$ or when $a>0$. The $U$-equivariant long exact sequence for the remaining case is displayed below, parameterized by $d\geq0$.\\ \centerline{ \xymatrix @R-1.5pc @C-1.25pc{ 0 \ar[r] & \operatorname{HFK}^-_{1-2d}(K_n,-d) \ar[r]^{f^-} & \operatorname{HFK}^-_{1-2d}(K_{n-2},-d) \ar[r]^>>>>>{g^-} & {\mathbb{F}_2}_{\{-2d,-d\}} \ar[r]^{h^-} \ar@{} [d] |{\rin}& \operatorname{HFK}^-_{-2d}(K_n,-d) \ar[r]^<<<<<{i^-} \ar@{} [d] |{\rin} & \\ & & & U^d\cdot z \ar@{|->}[r] &U^d\cdot \xi_n +\eta &&& \\ & \operatorname{HFK}^-_{-2d}(K_{n-2},-d) \ar[r]^>>>>{j^-} \ar@{} [d] |{\rin} & {\mathbb{F}_2}_{\{-1-2d,-d\}} \ar[r]^>>>>{k^-} \ar@{} [d] |{\rin} & \operatorname{HFK}^-_{-1-2d}(K_n,-d) \ar[r]^{\ell^-} & \operatorname{HFK}^-_{-1-2d}(K_{n-2},-d) \ar[r] & 0 \\ & U^d\cdot \xi_{n-2} \ar@{|->}[r] & U^d\cdot z' } } In the diagram above, equivariance of the long exact sequence with respect to the action of $U$ implies that $U^d\cdot z$ cannot be in the image of any $\mathbb{F}_2[U]$-torsion element. Since $\operatorname{HFK}^-_{1-2d}(K_{n-2},-d)$ is torsion, $U^d\cdot z$ is not in the image of $g^-$, and the map $g^-=0$. Exactness implies that $f^-$ is an isomorphism, and also that $h^-$ is an injection. Since the map $h^-$ is degree preserving, $U^d\cdot z$ maps to a non-torsion element $U^d\cdot \xi_n +\eta \in \operatorname{HFK}^-_{-2d}(K_n,-d)$, where $\eta$ is $\mathbb{F}_2[U]$-torsion. By exactness, $U^d\cdot \xi_n+\eta \in\Ker i^-$. Because the non-torsion summand gets mapped to zero by $i^-$, $U^d\cdot \xi_{n-2}$, which is also non-torsion, is not in the image of $i^-$. By exactness, $U^d \cdot \xi_{n-2}\not\in\Ker j^-$ and $U^d\cdot \xi_{n-2}$ must map to $U^d\cdot z'$. Exactness implies that $k^-=0$ and $\ell^-$ is an isomorphism. What remains is an isomorphism of torsion submodules at $i^-$. Hence, for all $(m,a)$, $\operatorname{HFK}^-_m(K_n,a)\cong \operatorname{HFK}^-_m(K_{n-2},a)$. \end{proof} \begin{corollary} Let $\{K_n\}$ and $\{K_n^\tau\}$ denote the infinite families of knots derived from $14^n_{22185}$ and $14^n_{22589}$.
Then \begin{eqnarray*} \widehat{\operatorname{HFK}}_m(K_n,a) & \cong & \widehat{\operatorname{HFK}}_m(K_0,a)\\ \widehat{\operatorname{HFK}}_m(K^\tau_n,a) & \cong & \widehat{\operatorname{HFK}}_m(K^\tau_0,a). \end{eqnarray*} \end{corollary} \begin{proof} Once a suitable base case has been established, the result follows from relating $\operatorname{HFK}^-(K_n)$, $\operatorname{HFK}^-(K_{n-2})$, $\widehat{\operatorname{HFK}}(K_n)$ and $\widehat{\operatorname{HFK}}(K_{n-2})$ by the five lemma. There are four distinct families in our investigation, with base cases $K_0, K_1, K^\tau_0$ and $K_1^\tau$, for even and odd values of $n$. The hat-version $\widehat{\operatorname{HFK}}$ of each has been verified computationally with the program of Droz~\cite{Droz:Program}. $\widehat{\operatorname{HFK}}(K_1)$ and $\widehat{\operatorname{HFK}}(K^\tau_1)$ have been found to be isomorphic with $\widehat{\operatorname{HFK}}(K_0)$ and $\widehat{\operatorname{HFK}}(K_0^\tau)$, respectively (see Table~\ref{table:hfkandkh}). \end{proof} This verifies that $\{K_n\}$, $n\in\mathbb{Z}^+$, is an infinite family of knots admitting a distinct genus two mutant of the same total dimension in knot Floer homology. \section{Khovanov Homology} \label{sec:Khovanov} Khovanov homology is a bigraded homology knot invariant introduced in \cite{Khovanov:Categorification}. The chain complex and differential of the homology theory are computed combinatorially from a knot diagram using the cube of smooth resolutions of the crossings. See \cite{BarNatan:OnKhovanov} for an introduction to the theory. Here, we compute the Khovanov homology of $K_n$ and $K_n^{\tau}$ over rational coefficients. While our computation of Heegaard Floer homology was over coefficients in $\mathbb{F}_2$, we need to work over $\mathbb{Q}$ to obtain the corresponding results in Khovanov homology. This is for two reasons. First, Rasmussen's invariant and Lee's spectral sequence are only applicable to Khovanov homology with rational coefficients, and we require these tools for the computation. Furthermore, Khovanov homology over $\mathbb{F}_2$ coefficients is significantly weaker at distinguishing mutants in the following sense. Bloom and Wehrli independently proved that Khovanov homology over $\mathbb{F}_2$ is invariant under Conway mutation in \cite{Bloom}, \cite{Wehrli:Mutation}. While these pairs are not Conway mutants, we can compute that $K_0$ and $K_0^{\tau}$ have the same $\mathbb{F}_2$-Khovanov homology (though we have not proven this for the infinite family). The goal of this section is to provide an infinite family of genus $2$ mutants where the bigraded rational Khovanov homology distinguishes between the knot and its mutant, whereas the total dimension of the Khovanov homology is invariant under the mutation. Our main result in this section is the following theorem. \begin{theorem} The Khovanov homology with rational coefficients of $K_n$, respectively $K_n^{\tau}$, for $n\geq 8$, is described by the following sequences of numbers. Here $\mathbf{R}^i_j$ denotes that the Khovanov homology in homological grading $i$ and quantum grading $j$ has dimension $\mathbf{R}$. (This notation originated in \cite{BarNatan:OnKhovanov}.)
\begin{eqnarray*} Kh(K_n)&=& \mathbf{1}^0_{-1} \mathbf{1}^0_1\, \mathbf{1}^{n-7}_{2n-13} \mathbf{1}^{n-6}_{2n-9} \mathbf{1}^{n-4}_{2n-7} \mathbf{1}^{n-3}_{2n-7} \mathbf{1}^{n-3}_{2n-3} \mathbf{1}^{n-2}_{2n-5} \mathbf{1}^{n-2}_{2n-3} \mathbf{1}^{n-1}_{2n-3} \mathbf{1}^{n-1}_{2n-1}\mathbf{1}^{n}_{2n-3}\mathbf{1}^{n}_{2n-1} \mathbf{1}^{n}_{2n+1}\\ &&\mathbf{2}^{n+1}_{2n+1} \mathbf{1}^{n+1}_{2n+3} \mathbf{1}^{n+2}_{2n+1} \mathbf{1}^{n+2}_{2n+3} \mathbf{1}^{n+2}_{2n+5} \mathbf{1}^{n+3}_{2n+3} \mathbf{1}^{n+3}_{2n+5} \mathbf{1}^{n+3}_{2n+7} \mathbf{1}^{n+4}_{2n+7} \mathbf{1}^{n+5}_{2n+7} \mathbf{1}^{n+6}_{2n+11} \end{eqnarray*} \begin{eqnarray*} Kh(K_n^{\tau})&=& \mathbf{1}^0_{-1} \mathbf{1}^0_1\, \mathbf{1}^{n-7}_{2n-13} \mathbf{1}^{n-6}_{2n-9} \mathbf{1}^{n-5}_{2n-9} \mathbf{1}^{n-4}_{2n-9} \mathbf{1}^{n-4}_{2n-7} \mathbf{1}^{n-4}_{2n-5} \mathbf{1}^{n-3}_{2n-7} \mathbf{1}^{n-3}_{2n-5} \mathbf{1}^{n-3}_{2n-3}\mathbf{1}^{n-2}_{2n-5}\mathbf{2}^{n-2}_{2n-3} \mathbf{1}^{n-1}_{2n-3}\\ && \mathbf{1}^{n-1}_{2n-1} \mathbf{1}^{n-1}_{2n+1} \mathbf{1}^{n}_{2n-1} \mathbf{1}^{n}_{2n+1} \mathbf{1}^{n+1}_{2n+1} \mathbf{1}^{n+1}_{2n+3} \mathbf{1}^{n+2}_{2n+1} \mathbf{1}^{n+2}_{2n+5} \mathbf{1}^{n+3}_{2n+5} \mathbf{1}^{n+5}_{2n+7} \mathbf{1}^{n+6}_{2n+11} \end{eqnarray*} \label{KhovanovComputation} \end{theorem} The key aspect of this computation to note for the proof is that as $n$ increases by $1$, in all but the first two terms the homological grading increases by $1$ and the quantum grading increases by $2$. The first part of the proof will justify the computation for all but the first two terms. The second part of the proof justifies the computation of the first two terms. Before we give the proof of the computation, the following corollary highlights the relevant conclusions. \begin{corollary} For all $n\geq 0$, $$Kh(K_n)\not\cong Kh(K_n^{\tau})$$ as bigraded groups, and $$Kh^{\delta}(K_n)\not\cong Kh^{\delta}(K_n^{\tau}),$$ however $$\dim(Kh(K_n))=\dim(Kh(K_n^{\tau}))=26.$$ \end{corollary} \begin{proof}[Proof of corollary] For $n\geq 8$ it is clear from the theorem that the bigraded Khovanov homologies over $\mathbb{Q}$ of $K_n$ and $K_n^{\tau}$ differ. For example $K_n$ has dimension zero in homological grading $n-5$, quantum grading $2n-9$, while $K_n^{\tau}$ has dimension $1$ in that grading. The $\delta$-graded groups can be easily computed from the theorem. The $\delta$-gradings are supported in $\delta=-3,-1,1,3$. For any value of $n$, $Kh^{\delta}(K_n)$ agrees with $Kh^{\delta}(K_0)$ and $Kh^{\delta}(K_n^\tau)$ agrees with $Kh^{\delta}(K_0^\tau)$, as given in Table~\ref{table:hfkandkh}. In particular $Kh^{\delta}$ distinguishes $K_n$ from $K_n^\tau$. The total dimension of the Khovanov homology in each case is $26$, and can be computed by summing the dimensions over all bidegrees. For the finitely many cases where $0\leq n\leq 7$ this result has been computationally verified using Green and Morrison's program JavaKh-v2 \cite{GM:Program}. \end{proof} \begin{proof}[Proof of Theorem \ref{KhovanovComputation}] The method of computing Khovanov homology we use here was previously used in \cite{Starkston} to find the Khovanov homology of $(p,-p,q)$ pretzel knots. The reader may refer to that paper or the above cited sources for further background and detail. There is no difference in the proof for $K_n$ versus $K_n^{\tau}$. We will write $K_n$ throughout the proof, but all statements in the proof hold for $K_n^{\tau}$ as well.
There is a long exact sequence whose terms are given by the unnormalized Khovanov homology of a knot diagram and its $0$ and $1$ resolutions at a particular crossing. The unnormalized Khovanov homology is an invariant of a specific diagram, not of a particular knot. It is given by taking the homology of the appropriate direct sum in the cube of resolutions before making the overall grading shifts. Let $n_+$ denote the number of positive crossings in a diagram and $n_-$ the number of negative crossings. Let $[ \cdot ]$ denote a shift in the homological grading and $\{ \cdot \}$ denote a shift in the quantum grading such that $\mathbb{Q}_{q}\{k\}=\mathbb{Q}_{q+k}$ and such that $Kh(K)[k]$ has an isomorphic copy of $Kh^i(K)$ in homological grading $i+k$ for each $i$.\footnote{There is some discrepancy in the literature regarding the notation for grading shifts. The notation in this paper agrees with that of Bar-Natan's introduction \cite{BarNatan:OnKhovanov}, though it is the opposite of that used in Khovanov's original paper \cite{Khovanov:Categorification}. Negating all signs relating to grading shifts will give Khovanov's original notation.} Let $\widehat{\operatorname{Kh}}(D)$ denote the unnormalized Khovanov homology of a knot diagram $D$. Then $$Kh(D)=\widehat{\operatorname{Kh}}(D)[-n_-]\{n_+-2n_-\}.$$ If $D$ is a diagram of a knot, $D_0$ is the diagram where one crossing is replaced by its $0$-resolution and $D_1$ is the diagram where that crossing is replaced by its $1$-resolution. Then, we have the following long exact sequence (whose maps preserve the $q$-grading) \begin{equation} \cdots \rightarrow \widehat{\operatorname{Kh}}^{i-1}(D_1)\{1\} \rightarrow \widehat{\operatorname{Kh}}^{i}(D) \rightarrow \widehat{\operatorname{Kh}}^i(D_0) \rightarrow \widehat{\operatorname{Kh}}^i(D_1)\{1\} \rightarrow \cdots. \label{LES} \end{equation} Let $D,D_0$ and $D_1$ be the diagrams for $K_n$ and its resolutions $\mathcal{U}$ and $K_{n-1}$ as shown in Figure \ref{fig:unorientedskein}. Observe that $D_0$ is a diagram for the two-component unlink $\mathcal{U}$ with $6+n$ positive crossings and $7$ negative crossings. $D_1$ is a diagram for $K_{n-1}$ with $6+n$ positive crossings and $7$ negative crossings and $D$ is a diagram for $K_n$ with $7+n$ positive crossings and $7$ negative crossings. Therefore we have the following identifications \begin{eqnarray*} \widehat{\operatorname{Kh}}(D_1)[-7]\{n-8\} &=& Kh(K_{n-1})\\ \widehat{\operatorname{Kh}}(D_0)[-7]\{n-8\} &=& Kh(\mathcal{U})\\ \widehat{\operatorname{Kh}}(D)[-7]\{n-7\}&=&Kh(K_n). \end{eqnarray*} Note that the Khovanov homology of the two-component unlink is $Kh^0(\mathcal{U})=\mathbb{Q}_{-2}\oplus \mathbb{Q}_{0}^2\oplus \mathbb{Q}_{2}$ and $Kh^i(\mathcal{U})=0$ for $i\neq 0$. After applying appropriate shifts we obtain $\widehat{\operatorname{Kh}}(D_0)$. We will inductively assume the computation in the theorem holds for $K_{n-1}$. The base case is established by computing $Kh(K_8)$ using the JavaKh-v2 program \cite{GM:Program}. Applying the appropriate shifts from above we thus get the value for $\widehat{\operatorname{Kh}}(D_1)$.
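To make the bookkeeping explicit (a routine substitution, recorded here for convenience), the identifications above rewrite the terms of the sequence (\ref{LES}) as
\[ \widehat{\operatorname{Kh}}^{i-1}(D_1)\{1\} \cong Kh^{i-8}(K_{n-1})\{8-n\}\{1\}, \quad \widehat{\operatorname{Kh}}^{i}(D) \cong Kh^{i-7}(K_{n})\{7-n\}, \quad \widehat{\operatorname{Kh}}^{i}(D_0) \cong Kh^{i-7}(\mathcal{U})\{8-n\}, \]
and the last group vanishes unless $i=7$, since $Kh(\mathcal{U})$ is supported in homological grading $0$.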
Plugging this into the long exact sequence (\ref{LES}) gives the following exact sequences \begin{equation} 0\rightarrow Kh^{i-8}(K_{n-1})\{8-n\}\{1\} \rightarrow Kh^{i-7}(K_n)\{7-n\} \rightarrow 0 \label{isomexact} \end{equation} for $i\neq 7,8$, and \begin{eqnarray*} 0\rightarrow Kh^{-1}(K_{n-1})\{9-n\} \rightarrow Kh^0(K_n)\{7-n\} \rightarrow \mathbb{Q}_{6-n}\oplus \mathbb{Q}_{8-n}^2\oplus \mathbb{Q}_{10-n} \rightarrow \\ \rightarrow Kh^0(K_{n-1})\{9-n\} \rightarrow Kh^1(K_n)\{7-n\} \rightarrow 0 \end{eqnarray*} which by the inductive hypothesis is the same as \begin{subequations} \begin{align} 0\rightarrow 0 \rightarrow Kh^0(K_n)\{7-n\} \rightarrow \mathbb{Q}_{6-n}\oplus \mathbb{Q}_{8-n}^2\oplus \mathbb{Q}_{10-n} \rightarrow \mathbb{Q}_{8-n}\oplus \mathbb{Q}_{10-n} \rightarrow \tag{7} \label{exactsequence} \\ \rightarrow Kh^1(K_n)\{7-n\}\rightarrow 0. \notag \end{align} \end{subequations} Exactness of line (\ref{isomexact}) yields isomorphisms $$Kh^{j-1}(K_{n-1})\{2\} \cong Kh^j(K_n)$$ for all $j\neq 0,1$. Inspecting the way the formula for $Kh(K_n)$ in the theorem depends on $n$, one can see that the inductive hypothesis verifies the computation for $Kh^j(K_n)$ for $j\neq 0,1$. Exactness of line (\ref{exactsequence}) gives a few possibilities. Analyzing the sequence we must have \begin{eqnarray*} Kh^0(K_n)&=&\mathbb{Q}_{-1}\oplus \mathbb{Q}_{1}^{1+a} \oplus \mathbb{Q}_{3}^b\\ Kh^1(K_n)&=&\mathbb{Q}_{1}^a\oplus \mathbb{Q}_{3}^b\\ \text{ where } a,b\in \{0,1\}. \end{eqnarray*} Now we use the fact that $s(K_n)$ vanishes by Lemma \ref{lemma:tauands}. Since $s(K_n)=0$, the spectral sequence given by Lee in \cite{Lee:Endomorphism} converges to two copies of $\mathbb{Q}$, each in homological grading $0$, with one in quantum grading $-1$ and the other in quantum grading $1$, as proven by Rasmussen in \cite{Rasmussen:Khovanov}. Note that the $r^{th}$ differential goes up $1$ and over $r$, because of an indexing that differs from the standard indexing for a spectral sequence induced by a filtration. (See the note in Section 3.1 of \cite{Starkston} for further explanation). Let $d_r^{p,q}$ denote the differential on the $r^{th}$ page from $E_r^{p,q}$ to $E_r^{p+1,q+r}$ in Lee's spectral sequence. Here $p$ is the coordinate for the homological grading shown on the vertical axis and $q$ is the coordinate for the quantum grading shown on the horizontal axis. \begin{table} \resizebox{6.2in}{!}{ \begin{tabular}{|r||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline
n+6&&&&&&&&&&&&&&&&&1\\\hline
n+5&&&&&&&&&&&&&&&1&&\\\hline
n+4&&&&&&&&&&&&&&&1&&\\\hline
n+3&&&&&&&&&&&&&1&1&1&&\\\hline
n+2&&&&&&&&&&&&1&1&1&&&\\\hline
n+1&&&&&&&&&&&&2&1&&&&\\\hline
n&&&&&&&&&&1&1&1&&&&&\\\hline
n-1&&&&&&&&&&1&1&&&&&&\\\hline
n-2&&&&&&&&&1&1&&&&&&&\\\hline
n-3&&&&&&&&1&&1&&&&&&&\\\hline
n-4&&&&&&&&1&&&&&&&&&\\\hline
n-5&&&&&&&&&&&&&&&&&\\\hline
n-6&&&&&&&1&&&&&&&&&&\\\hline
n-7&&&&&1&&&&&&&&&&&&\\\hline
\vdots&&&&&&&&&&&&&&&&&\\\hline
1&&a&b&&&&&&&&&&&&&&\\\hline
0&1&1+a&b&&&&&&&&&&&&&&\\\hline
&-1&1&3&$\cdots$&m-13&m-11&m-9&m-7&m-5&m-3&m-1&m+1&m+3&m+5&m+7&m+9&m+11\\\hline
\end{tabular} } \caption{Here $m=2n$. When $a=b=0$ this table gives the $\mathbb{Q}$-dimensions of the Khovanov homology of $K_n$ with homological grading on the vertical axis and quantum grading on the horizontal axis.
This is the $E_1$ page of Lee's spectral sequence.} \label{table:knotKhovanov} \end{table} \begin{table} \resizebox{6.2in}{!}{ \begin{tabular}{|r||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline
n+6&&&&&&&&&&&&&&&&&1\\\hline
n+5&&&&&&&&&&&&&&&1&&\\\hline
n+4&&&&&&&&&&&&&&&&&\\\hline
n+3&&&&&&&&&&&&&&1&&&\\\hline
n+2&&&&&&&&&&&&1&&1&&&\\\hline
n+1&&&&&&&&&&&&1&1&&&&\\\hline
n&&&&&&&&&&&1&1&&&&&\\\hline
n-1&&&&&&&&&&1&1&1&&&&&\\\hline
n-2&&&&&&&&&1&2&&&&&&&\\\hline
n-3&&&&&&&&1&1&1&&&&&&&\\\hline
n-4&&&&&&&1&1&1&&&&&&&&\\\hline
n-5&&&&&&&1&&&&&&&&&&\\\hline
n-6&&&&&&&1&&&&&&&&&&\\\hline
n-7&&&&&1&&&&&&&&&&&&\\\hline
\vdots&&&&&&&&&&&&&&&&&\\\hline
1&&a&b&&&&&&&&&&&&&&\\\hline
0&1&1+a&b&&&&&&&&&&&&&&\\\hline
&-1&1&3&$\cdots$&m-13&m-11&m-9&m-7&m-5&m-3&m-1&m+1&m+3&m+5&m+7&m+9&m+11\\\hline
\end{tabular} } \caption{Here $m=2n$. When $a=b=0$ this table gives the $\mathbb{Q}$-dimensions of the Khovanov homology of $K_n^{\tau}$ with homological grading on the vertical axis and quantum grading on the horizontal axis. This is the $E_1$ page of Lee's spectral sequence.} \label{table:knotmKhovanov} \end{table} See Tables \ref{table:knotKhovanov} and \ref{table:knotmKhovanov} for the $E_1$ page on which the following analysis is carried out. In order to preserve one copy of $\mathbb{Q}_{-1}$ and one copy of $\mathbb{Q}_{1}$ in the $0^{th}$ homological grading we must have $d_r^{0,-1} = 0$ and $d_r^{0,1}$ acting trivially on one copy of $\mathbb{Q}$ for every $r$. We may computationally verify another base case where $n=9$ and then assume $n\geq10$. By the above inductive results, we know that $Kh^2(K_n)=0$ when $n\geq 10$. Therefore, $d_r^{1,1}=0$ for all $r\geq 1$. Thus, if $a\neq 0$, an additional copy of $\mathbb{Q}$ will survive in $E_{\infty}^{1,1}$ since it cannot be in the image of any $d_r$ for $r>0$. This contradicts Lee's result that there can only be two copies of $\mathbb{Q}$ on the $E_{\infty}$ page. Therefore $a=0$ and $d_r^{0,1}=0$ for all $r\geq 1$. Because the row corresponding to the first homological grading has zeros in quantum gradings greater than $3$, $d_r^{0,3}=0$ for all $r\geq 1$. Therefore, if $b\neq 0$, an additional copy of $\mathbb{Q}$ will survive in $E_{\infty}^{0,3}$, again contradicting Lee's result. Therefore $a=b=0$, and the Khovanov homology of $K_n$ and $K_n^{\tau}$ is as stated in the theorem. \end{proof} \section{Observation And Speculation} \label{sec:Observation} The families of knots which we have employed in this paper are all non-alternating slice knots, and in particular, are formed from the band sum of a two-component unlink. There are other infinite families of slice knots for which these computational techniques using skein exact sequences and concordance invariants work. For example, Hedden and Watson~\cite{Hedden:Botany} prove that there are infinitely many knots with isomorphic Floer groups in any given concordance class, whereas Greene and Watson~\cite{GW:Turaev} have worked with the Kanenobu knots. Certain pretzel knots (see~\cite{Starkston}) also share this property. Nor is the non-alternating status of these knots a coincidence; in fact there can only be finitely many alternating knots of a given knot Heegaard Floer homology type. \begin{proposition} Let $K$ be an alternating knot. There are only finitely many other alternating knots with knot Floer homology isomorphic to $\widehat{\operatorname{HFK}}(K)$ as bigraded groups.
\end{proposition} \begin{proof} Suppose to the contrary that $K$ belongs to an infinite family $\{K_n\}_{n\in\mathbb{Z}}$ of pairwise distinct alternating knots sharing the same knot Floer groups. Since $\widehat{\operatorname{HFK}}(K_n)\cong\widehat{\operatorname{HFK}}(K)$ and knot Floer homology categorifies the Alexander polynomial, \[ \det(K_n) = |\Delta_{K_n}(-1)| = |\Delta_{K}(-1)| = \det(K) \] for all $n$. Each knot $K_n$ admits a reduced alternating diagram $D_n$ with crossing number $c(D_n)$. The Bankwitz Theorem implies that $c(K_n) \leq \det(K_n)$. However, there are only finitely many knots with any given crossing number, so the crossing numbers of the infinitely many distinct knots $K_n$ must grow arbitrarily large with $n$, contradicting the bound $c(K_n) \leq \det(K_n) = \det(K)$. \end{proof} This fact leads to the interesting open question of whether there are infinitely many quasi-alternating knots of a given knot Floer type. Greene formulates an even stronger conjecture in~\cite{Greene:Homologically}, and proves the cases where $\det(L)=1,2$ or $3$. \begin{conjecture}[Conjecture 3.1 of~\cite{Greene:Homologically}] There exist only finitely many quasi-alternating links with a given determinant. \end{conjecture} In Section~\ref{sec:Khovanov}, we mention that $K_0$ and $K_0^\tau$ have the same Khovanov homology with $\mathbb{F}_2$ coefficients. In fact, $(K_0,K_0^\tau)$ is one of five pairs of genus two mutants appearing in~\cite{DGST:Behavior}, none of which can be distinguished by Khovanov homology over $\mathbb{F}_2$. Bloom and Wehrli~\cite{Bloom},\cite{Wehrli:Mutation} have shown that Khovanov homology with $\mathbb{F}_2$ coefficients is invariant under component-preserving Conway mutation. This leads to another unanswered question. \begin{question} \label{question:z2Khovanov} Is Khovanov homology with $\mathbb{F}_2$ coefficients invariant under genus two mutation? \end{question} Because there is a spectral sequence relating the reduced Khovanov homology of $L$ over $\mathbb{F}_2$ to the Heegaard Floer homology of the branched double cover of $-L$, this raises another natural question. \begin{question} If $K$ and $K^\tau$ are genus two mutant knots, is $\rank \widehat{\operatorname{HF}}(\Sigma_2(K)) = \rank \widehat{\operatorname{HF}}(\Sigma_2(K^\tau))$? \end{question} Genus two mutation provides a method for producing closely related knots and links, but more generally it is an operation on three-manifolds. This suggests yet another conjecture: \begin{conjecture} Let $M$ be a closed, oriented three-manifold with an embedded genus two surface $F$. If $M^\tau$ is the genus two mutant of $M$, then \[ \rank \widehat{\operatorname{HF}}(M) = \rank \widehat{\operatorname{HF}}(M^\tau). \] \end{conjecture} The question of whether the total rank is preserved under Conway mutation remains an interesting problem. The evidence that we offer above suggests that the total ranks of knot Floer homology and Khovanov homology are also preserved by genus two mutation. Because genus two mutation along a surface which does not bound a handlebody does not correspond in an obvious way to an operation on a knot diagram, a combinatorial proof of this general statement may be difficult to obtain. \section{Acknowledgments} We would like to thank our advisors, Cameron Gordon and Robert Gompf, for their guidance and support. We would also like to thank John Luecke, Matthew Hedden, and Cagri Karakurt for helpful conversations. We are also grateful to Adam Levine for pointing out an error in a previous version, and to the anonymous referee who made numerous helpful comments.
The first author was partially supported by the NSF RTG under grant no. DMS-0636643. The second author was supported by the NSF Graduate Research Fellowship under grant no. DGE-1110007. \end{document}
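The determinant identity used in the proof of the proposition above is elementary to evaluate in practice. The following sketch (Python; an added illustration rather than a computation from the paper, with the Alexander polynomial of the figure-eight knot as an arbitrary test case) computes $\det(K)=|\Delta_K(-1)|$ and records the resulting Bankwitz-type bound on the crossing number of a reduced alternating diagram.

\begin{verbatim}
# Evaluate det(K) = |Delta_K(-1)| from Alexander polynomial coefficients and
# note the Bankwitz bound c(K) <= det(K) for reduced alternating diagrams.
# The polynomial below (figure-eight knot, -t^{-1} + 3 - t) is illustrative.

def alexander_at_minus_one(coeffs, min_exp):
    """Delta(-1) for Delta(t) = sum_j coeffs[j] * t**(min_exp + j)."""
    return sum(c * (-1) ** (min_exp + j) for j, c in enumerate(coeffs))

det_K = abs(alexander_at_minus_one([-1, 3, -1], min_exp=-1))
print("det(K) =", det_K)   # 5.0: an alternating knot with this determinant
                           # has at most 5 crossings, so only finitely many
                           # alternating knots can share this knot Floer type.
\end{verbatim}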
\begin{document} \title{Lower bounds on the complexity of simulating quantum gates} \author{Andrew M. Childs} \email[]{[email protected]} \affiliation{Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \author{Henry L. Haselgrove} \email[]{[email protected]} \affiliation{School of Physical Sciences, The University of Queensland, Brisbane QLD 4072, Australia} \affiliation{Information Sciences Laboratory, Defence Science and Technology Organisation, Edinburgh 5111, Australia} \author{Michael A. Nielsen} \email[]{[email protected]} \homepage[]{www.qinfo.org/people/nielsen} \affiliation{School of Physical Sciences, The University of Queensland, Brisbane QLD 4072, Australia} \affiliation{School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane QLD 4072, Australia} \affiliation{Institute for Quantum Information, California Institute of Technology, Pasadena CA 91125, USA} \date[]{25 July 2003} \begin{abstract} We give a simple proof of a formula for the minimal time required to simulate a two-qubit unitary operation using a fixed two-qubit Hamiltonian together with fast local unitaries. We also note that a related lower bound holds for arbitrary $n$-qubit gates. \end{abstract} \maketitle \section{Introduction} Understanding quantum dynamics is at the heart of quantum physics. Recent ideas from quantum computation have stimulated interest in studying the physical resources needed to implement quantum operations. In addition to a qualitative understanding of what resources are necessary, we would like to quantify the resource requirements for universal quantum computation and other information processing tasks. Ultimately, we would like to understand the minimal resources that are necessary and sufficient to implement particular quantum dynamics. As a first step towards answering these questions, it has been shown that there is a sense in which all entangling dynamics are qualitatively equivalent. In particular, it has been shown that any $n$-qudit two-body Hamiltonian capable of creating entanglement between any pair of qudits is, in principle, universal for quantum computation, when assisted by arbitrary single-qudit unitaries \cite{Jones99a,Leung00a,Dodd02a,Wocjan02a,Dur01a,Bennett01a,Nielsen02f,Vidal01b}. Thus, any particular entangling two-qudit Hamiltonian can be used to simulate any other, provided local unitaries are available. This suggests that such dynamics are a fungible physical resource. Having established the qualitative equivalence of all entangling dynamics, we would like to quantify their information processing power. In particular, it is interesting to consider the minimal time required to implement a unitary operation, $U$, on a two-qubit system, using a fixed Hamiltonian, $H$, and the ability to intersperse fast local unitary operations on the two qubits. This problem was studied by Khaneja, Brockett and Glaser~\cite{Khaneja01a}, who found a solution using the theory of Lie groups. Their results, although giving a solution in principle, are neither explicit about the form of the minimal time, nor do they explain how to construct all elements of the time-optimal simulation. Further work by Vidal, Hammerer, and Cirac \cite{Vidal02a}, from a different point of view, resulted in an explicit formula for the minimal time, and gave a constructive procedure for minimizing that time (see also~\cite{Hammerer02a}, where an alternate proof is given by the same authors).
The purpose of the present paper is to give a simplified proof that the formula of Vidal, Hammerer and Cirac is, in fact, a lower bound on the simulation time. Note that the difficult part of~\cite{Vidal02a,Hammerer02a} was proving the lower bound; finding a protocol to meet the lower bound was comparatively easy. The main advantages of our proof are its simplicity and conceptual clarity, as compared to the ingenious, but rather complex, arguments in~\cite{Khaneja01a,Vidal02a,Hammerer02a}. This simplicity is achieved by making use of a powerful result from linear algebra, Thompson's theorem. We expect that Thompson's theorem might be useful for many other problems in quantum information theory. A second advantage of using Thompson's theorem is that it does not rely on special properties of two-qubit unitary operators. Therefore, essentially the same arguments give a lower bound on the time required to implement an $n$-qubit unitary operation using a fixed $n$-qubit interaction Hamiltonian, and fast local unitary operations. Our approach to the proof of the lower bound has its roots in the framework of \emph{dynamic strength measures} for quantum operations~\cite{Nielsen03a}. The dynamic strength framework is an attempt to develop a quantitative theory of the power of dynamical operations for information processing. The idea is to associate with a quantum dynamical operation, such as a unitary operation $U$, a quantitative measure of its ``strength.'' In~\cite{Nielsen03a} it was shown that such strength measures can be used to analyze the minimal time required for the implementation of a quantum operation. The present paper takes a similar approach, but instead of using a single real number to quantify dynamic strength, we use a vector-valued measure. This can also be compared to the analysis of optimal simulation of Hamiltonian dynamics using a set of several strength measures \cite{Childs03b}. Our paper is structured as follows. Section~\ref{sec:background} reviews some background material on majorization, Thompson's theorem, and the structure of the two-qubit unitary matrices. The main result, the lower bound on optimal simulation, is proved in Section~\ref{sec:two-qubit}. We conclude in Section~\ref{sec:conc} by presenting our generalization of the lower bound to $n$ qubits and suggesting some directions for future work. In addition, an appendix gives a procedure for calculating a canonical decomposition of two-qubit unitary gates. \section{Background} \label{sec:background} This section reviews the relevant background needed for our proof. Section~\ref{subsec:majorization} reviews the basic notions of majorization, introduces Thompson's theorem, and explains how to use Thompson's theorem and majorization to relate properties of a product of unitary operators to properties of the individual unitaries. Section~\ref{subsec:canonical} introduces the \emph{canonical decomposition}, a useful representation theorem for two-qubit unitary operators, and Section~\ref{subsec:canonicalham} presents an analogous decomposition for Hamiltonians. \subsection{Majorization and Thompson's theorem} \label{subsec:majorization} Our analysis uses the theory of majorization together with Thompson's theorem. More detailed introductions to majorization may be found in \cite{Nielsen01b}, Chapter~2 and~3 of~\cite{Bhatia97a}, and in~\cite{Marshall79a,Alberti82a}. Suppose $x = (x_1,\ldots,x_D)$ and $y = (y_1,\ldots,y_D)$ are two $D$-dimensional real vectors. 
The relation \emph{$x$ is majorized by $y$}, written $x \prec y$, is intended to capture the intuitive notion that $x$ is \emph{less ordered} (i.e., more disordered) than $y$. To make the formal definition we introduce the notation $\downarrow$ to denote the components of a vector rearranged into non-increasing order, so $x^{\downarrow} = (x^{\downarrow}_1,\ldots,x^{\downarrow}_D)$, where $x^{\downarrow}_1 \geq x^{\downarrow}_2 \geq \ldots \geq x^{\downarrow}_D$. Then $x$ is majorized by $y$, that is, $x \prec y$, if \begin{equation} \sum_{j=1}^k x^{\downarrow}_j \leq \sum_{j=1}^k y^{\downarrow}_j \end{equation} for $k = 1,\ldots,D-1$, and the inequality holds with equality when $k = D$. To connect majorization to Hamiltonian simulation, we use a result of Thompson relating a product of two unitary operators to the individual unitary operators. Recall that an arbitrary pair of unitary operators can be written in the form $e^{iH}$ and $e^{iK}$, for some Hermitian $H$ and $K$. Thompson's theorem provides a representation for the product $e^{iH}e^{iK}$ in terms of $H$ and $K$: \begin{theorem}[Thompson~\cite{Thompson86a}]\label{thm:thompson} Let $H,K$ be Hermitian matrices. Then there exist unitary matrices $U,V$ such that \begin{equation} e^{i H} e^{i K} = e^{i(U H U^\dag + V K V^\dag)}. \end{equation} \end{theorem} The proof of Thompson's theorem in~\cite{Thompson86a} depends on a result conjectured earlier by Horn~\cite{Horn62a}. A proof of this conjecture had been announced and outlined by Lidskii~\cite{Lidskii82a} at the time of Thompson's paper. However, remarks in~\cite{Thompson86a} suggest that~\cite{Lidskii82a} did not contain enough detail to be considered a fully rigorous proof. Fortunately, a proof of Horn's conjecture has recently been fully completed and published. See, for example,~\cite{Fulton00a,Knutson00a} for reviews and references. Thompson's theorem may be related to majorization using the following theorem of Ky Fan: \begin{theorem}[Ky Fan \cite{Fan49a,Bhatia97a}]\label{thm:fan} Let $H,K$ be Hermitian matrices. Then $\lambda(H+K) \prec \lambda(H)+\lambda(K)$, where $\lambda(A)$ denotes the vector whose entries are the eigenvalues of the Hermitian matrix $A$, arranged into non-increasing order. \end{theorem} Combining the results of Ky Fan and Thompson, we have the following: \begin{cor} \label{cor:Thompson-Fan} Let $H,K$ be Hermitian matrices. Then there exists a Hermitian matrix $L$ such that \begin{equation} e^{i H} e^{i K} = e^{iL}; \,\,\,\, \lambda(L) \prec \lambda(H)+\lambda(K). \end{equation} \end{cor} \noindent We will not apply this corollary directly, but we have included it here because it captures the spirit of our later argument, combining the Thompson and Ky Fan theorems to relate the properties of a product of unitaries to the individual unitaries themselves. Corollary~\ref{cor:Thompson-Fan} can be regarded as a vector-valued analogue of the chaining property for dynamic strength measures used in \cite{Nielsen03a} to establish lower bounds on computational complexity. \subsection{The canonical decomposition of a two-qubit gate} \label{subsec:canonical} The \emph{canonical decomposition} is a useful representation theorem characterizing the non-local properties of a two-qubit unitary operator. It was proved by Khaneja, Brockett, and Glaser~\cite{Khaneja01a} using ideas from Lie theory.
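As an aside, the corollary above is easy to probe numerically. The following sketch (Python with numpy and scipy; an added illustration, in which the random matrices are rescaled so that the principal matrix logarithm returns a Hermitian $L$ with $e^{iH}e^{iK}=e^{iL}$) checks the majorization relation $\lambda(L)\prec\lambda(H)+\lambda(K)$.

\begin{verbatim}
# Numerical illustration of the Thompson + Ky Fan corollary: for Hermitian H, K
# there is a Hermitian L with e^{iH} e^{iK} = e^{iL} and lambda(L) majorized
# by lambda(H) + lambda(K).  Small norms keep the principal log on the branch
# containing that L.
import numpy as np
from scipy.linalg import expm, logm

def majorized(x, y, tol=1e-9):
    """True if the real vector x is majorized by y."""
    xs, ys = np.sort(x)[::-1], np.sort(y)[::-1]
    return (np.all(np.cumsum(xs)[:-1] <= np.cumsum(ys)[:-1] + tol)
            and abs(xs.sum() - ys.sum()) < tol)

def random_hermitian(d, scale, rng):
    A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    H = (A + A.conj().T) / 2
    return scale * H / np.linalg.norm(H, 2)        # spectral norm = scale

rng = np.random.default_rng(0)
H, K = (random_hermitian(4, 0.6, rng) for _ in range(2))
L = -1j * logm(expm(1j * H) @ expm(1j * K))        # principal logarithm
lam = lambda M: np.sort(np.linalg.eigvalsh(M))[::-1]
print(majorized(lam(L), lam(H) + lam(K)))          # expect True
\end{verbatim}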
Kraus and Cirac~\cite{Kraus01a} have given a constructive proof using elementary notions, while Zhang \emph{et al.}~\cite{Zhang02b} have discussed the decomposition in detail from the point of view of Lie theory. The decomposition states that any two-qubit unitary $U$ may be written in the form \begin{equation} \label{eq:can-decomp} U=(A_1\otimes B_1)e^{i(\theta_x X\otimes X+\theta_y Y\otimes Y+\theta_z Z\otimes Z)} (A_2\otimes B_2), \end{equation} where $A_1$, $A_2$, $B_1$, $B_2$ are single-qubit unitaries, and the three parameters, $\theta_x$, $\theta_y$, and $\theta_z$ characterize the non-local properties of $U$.\footnote{Prior to~\cite{Khaneja01a}, Makhlin~\cite{Makhlin00a} gave a proof that the non-local properties of $U$ are completely characterized by $\theta_x$, $\theta_y$ and $\theta_z$, but did not write down the canonical decomposition explicitly.} Without loss of generality, we may choose the local unitaries to ensure that \begin{equation} \frac{\pi}{4} \ge \theta_x \ge \theta_y \ge |\theta_z|, \label{eq:order} \end{equation} and we refer to the set of parameters chosen in this way as the \emph{canonical parameters} for $U$. We will see below that these parameters are unique. We define the {\em canonical form} of $U$ to be \begin{equation} U_c := (A_1^\dag\otimes B_1^\dag)U(A_2^\dag\otimes B_2^\dag); \end{equation} up to local unitaries, $U_c$ is equivalent to $U$. It will be convenient to assume through the remainder of this section that $U$ has unit determinant. This is equivalent to requiring that $A_1, A_2, B_1, B_2$ can all be chosen to have unit determinant. The canonical parameters turn out to be crucial to results about simulation of two-qubit gates. If \begin{equation} U_c = e^{i(\theta_x X\otimes X + \theta_y Y\otimes Y + \theta_z Z\otimes Z)} \end{equation} is the canonical form of $U$, then we define the \emph{non-local content}, $\phi(U)$, of $U$ by $\phi(U) := \lambda(H_U)$, where \begin{equation} H_U := \theta_x \, X\otimes X + \theta_y \, Y\otimes Y + \theta_z \, Z\otimes Z. \end{equation} Explicitly, the components of $\phi(U)$ are \begin{eqnarray} \phi_1 &=& \theta_x+\theta_y-\theta_z \label{eq:phi1} \\ \phi_2 &=& \theta_x-\theta_y+\theta_z \label{eq:phi2} \\ \phi_3 &=& -\theta_x+\theta_y+\theta_z \label{eq:phi3} \\ \phi_4 &=& -\theta_x-\theta_y-\theta_z. \label{eq:phi4} \end{eqnarray} We now outline a simple procedure to determine the canonical parameters of a two-qubit unitary operator. Our explanation initially follows~\cite{Hammerer02a} and~\cite{Leifer02a}. However, as explained below, there is an ambiguity in the procedure described in those papers, related to the fact that the logarithm function has many branches. Our procedure resolves this ambiguity. To explain the procedure, we need to introduce a piece of notation, and explain a simple observation about single-qubit unitary matrices. The \emph{spin flip} operation on an arbitrary two-qubit operator is defined as \begin{equation} \tilde M := (Y \otimes Y) M^T (Y \otimes Y), \end{equation} where $Y$ is the Pauli sigma $y$ matrix, and the transpose operation is taken with respect to the computational basis.
Note that the spin flip operation may also be written as $\tilde M = M^T$, where the transpose is taken with respect to a different basis, the \emph{magic basis}~\cite{Hill97a}, \begin{eqnarray} \frac{|00\rangle+|11\rangle}{\sqrt{2}}; & & i\frac{|00\rangle-|11\rangle}{\sqrt{2}}; \nonumber \\ i\frac{|01\rangle+|10\rangle}{\sqrt{2}}; & & \frac{|01\rangle-|10\rangle}{\sqrt{2}}. \end{eqnarray} The observation about single-qubit unitary matrices that we need is the following. Let $U$ be any single-qubit unitary matrix with unit determinant. Then \begin{eqnarray} \label{eq:single-qubit-identity} U Y U^T = Y, \end{eqnarray} where the transpose is taken in the computational basis. This simple identity is easily verified. Now suppose $U$ is an arbitrary two-qubit unitary with unit determinant. By definition of the spin flip, and substituting the canonical decomposition, we have \begin{eqnarray} U \tilde U & = & (A_1\otimes B_1) U_c (A_2\otimes B_2) (Y \otimes Y) \nonumber\\ && \times (A_2^T \otimes B_2^T) U_c (A_1^T \otimes B_1^T) (Y \otimes Y). \end{eqnarray} By the identity Eq.~(\ref{eq:single-qubit-identity}) we see that \begin{eqnarray} U \tilde U = (A_1\otimes B_1) U_c (Y \otimes Y) U_c (A_1^T \otimes B_1^T) (Y \otimes Y). \end{eqnarray} Using the fact that $Y \otimes Y$ commutes with $X \otimes X$, $Y \otimes Y$, and $Z \otimes Z$, we see that $Y \otimes Y$ commutes with $U_c$, and thus \begin{eqnarray} U \tilde U = (A_1\otimes B_1) U_c^2 (Y \otimes Y)(A_1^T \otimes B_1^T) (Y \otimes Y). \end{eqnarray} Finally, applying Eq.~(\ref{eq:single-qubit-identity}) again gives \begin{eqnarray} \label{eq:almost-can-decomp} U \tilde U = (A_1\otimes B_1) U_c^2 (A_1^\dagger \otimes B_1^\dagger). \end{eqnarray} Eq.~(\ref{eq:almost-can-decomp}) suggests a procedure to determine the canonical parameters for $U$, based on the observation that \begin{equation} \label{eq:can-eig-decomp} \lambda(U \tilde U) = \lambda(U_c^2) = (e^{2i \phi_1}, e^{2i \phi_2}, e^{2i\phi_3},e^{2i\phi_4}), \end{equation} where the $\phi_j$ are related to the canonical parameters $\theta_x,\theta_y$ and $\theta_z$ by Eqs.~(\ref{eq:phi1})--(\ref{eq:phi4}). It is tempting to conclude that one can determine $\theta_x,\theta_y,\theta_z$ from the eigenvalues of $U \tilde U$, simply by taking logarithms and inverting the resulting linear equations. Indeed, such a conclusion is reached in~\cite{Hammerer02a} and~\cite{Leifer02a}, using arguments similar to those just described. Unfortunately, determining the canonical parameters is not quite as simple as this, because $z \rightarrow e^{iz}$ is not a uniquely invertible function. In particular, $e^{iz} = e^{i(z+2\pi m)}$, where $m$ is any integer, so there is some ambiguity about which branch of the logarithm function to use in calculating the canonical parameters. In fact, we prove later that no one branch of the logarithm function can be used. However, these considerations do allow us to reach the following conclusion: \begin{lemma} \label{lemma:can-formula} Let $U$ be a two-qubit unitary. Then there exists a Hermitian $H$ such that \begin{eqnarray} U \tilde U = e^{2iH}, \,\,\,\, \lambda(H) = \phi(U). \end{eqnarray} Moreover, if $H$ is any Hermitian matrix such that $\lambda(U \tilde U) = \lambda(e^{2iH})$ then it follows that $\lambda(H) = \phi(U) + \pi \vec m$, where $\vec m$ is some vector of integers.
\end{lemma} Although this lemma is sufficient to prove our later results, there is in fact a simple method for exactly calculating the canonical parameters. Because there are many applications of the canonical decomposition, we describe this method in the appendix. The method will not be needed elsewhere in the paper. \subsection{The canonical form of a two-qubit Hamiltonian} \label{subsec:canonicalham} Finally, we introduce one additional concept, the \emph{canonical form} of a two-qubit Hamiltonian, $H$ \cite{Dur01a}. Any two-qubit Hamiltonian $H$ can be expanded as \begin{equation} H = \sum_{j,k=0}^3 h_{jk} \, \sigma_j \otimes \sigma_k. \end{equation} Then let \begin{equation} H' := {H+\tilde H \over 2} = \sum_{j,k \neq 0} h_{jk} \, \sigma_j \otimes \sigma_k. \end{equation} That is, $H'$ is just the Hamiltonian that results when the local terms in $H$ are removed. It is not difficult to show that $H$ and $H'$ are interchangeable resources for simulation in the sense that, given fast local unitaries, evolution according to $H$ for a time $t$ can be simulated by evolution according to $H'$ for a time $t$, and vice versa. Furthermore, by doing appropriate local unitaries, it can be shown~\cite{Dur01a} that simulating $H'$ (and thus $H$) is equivalent to simulating the canonical form of $H$, \begin{eqnarray} H_c = h_x \, X\otimes X + h_y \, Y\otimes Y + h_z \, Z\otimes Z, \end{eqnarray} where $h_x \geq h_y \geq |h_z|$. Once again, $H$ and $H_c$ are interchangeable resources for simulation. Note that the three parameters $h_x,h_y,h_z$ are completely characterized by the three degrees of freedom in $\lambda(H_c)=\lambda(H+\tilde H)/2$, just as the three canonical parameters $\theta_x,\theta_y,\theta_z$ are completely characterized by the three degrees of freedom in $\lambda(U_c^2)=\lambda(U \tilde U)$. \section{Simulation of two-qubit gates} \label{sec:two-qubit} We now return to the main purpose of the paper, proving results about the time to simulate a unitary gate using entangling Hamiltonians and fast local gates. We aim to prove the following result: \begin{theorem}[Vidal, Hammerer, Cirac \cite{Vidal02a,Hammerer02a}, cf. Khaneja, Brockett and Glaser~\cite{Khaneja01a}] \label{thm:VHC} Let $U$ be a two-qubit unitary operator, and let $H$ be a two-qubit entangling Hamiltonian. Then the minimal time required to simulate $U$ using $H$ and fast local unitaries is the minimal value of $t$ such that there exists a vector of integers $\vec m$ satisfying \begin{eqnarray} \label{eq:2-constraint} \phi(U) + \pi \vec m \prec \frac{\lambda(H+\tilde H)}{2} \, t. \end{eqnarray} \end{theorem} \noindent Note further that only two vectors of integers need to be checked, $\vec m = (0,0,0,0)$ and $\vec m = (1,1,-1,-1)$, since all the other possibilities give rise to weaker constraints on the minimal time, $t$ \cite{Vidal02a,Hammerer02a}. The difficult part of the proof of Theorem~\ref{thm:VHC} is the proof that Eq.~(\ref{eq:2-constraint}) is a lower bound on the simulation time, $t$, and it is this part of the proof that we focus on simplifying. The proof that this lower bound may be achieved follows from standard results on majorization, and we refer the interested reader to~\cite{Vidal02a,Hammerer02a} for details. To prove that Eq.~(\ref{eq:2-constraint}) constrains the minimal time for simulation, we begin by characterizing the canonical decomposition of a product of unitary matrices.
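Before doing so, we note that the bound of Theorem~\ref{thm:VHC} is straightforward to evaluate numerically. The sketch below (Python with numpy; an added illustration whose inputs, a gate with canonical parameters $(\pi/4,0,0)$ such as CNOT up to local unitaries and the Ising coupling $H=X\otimes X$, are assumptions chosen only as a test case) checks the majorization condition for the two candidate integer vectors and returns $t=\pi/4$.

\begin{verbatim}
# Minimal-time bound: smallest t with  phi(U) + pi*m  majorized by
# lambda(H_c)*t  for m = (0,0,0,0) or (1,1,-1,-1).  Both sides sum to zero,
# so only the three strict partial-sum inequalities matter.
import numpy as np

def eig4(tx, ty, tz):
    """Eigenvalues of tx X(x)X + ty Y(x)Y + tz Z(x)Z, cf. Eqs. (phi1)-(phi4)."""
    return np.array([tx + ty - tz, tx - ty + tz, -tx + ty + tz, -tx - ty - tz])

def minimal_time(phi_u, lam_h):
    lam_sums = np.cumsum(np.sort(lam_h)[::-1])[:3]   # positive for entangling H
    best = np.inf
    for m in (np.zeros(4), np.array([1.0, 1.0, -1.0, -1.0])):
        target = np.cumsum(np.sort(phi_u + np.pi * m)[::-1])[:3]
        best = min(best, max(0.0, np.max(target / lam_sums)))
    return best

phi_cnot = eig4(np.pi / 4, 0.0, 0.0)        # canonical parameters of CNOT
lam_ising = eig4(1.0, 0.0, 0.0)             # lambda(H_c) for H = X (x) X
print(minimal_time(phi_cnot, lam_ising), np.pi / 4)   # both ~0.7853981...
\end{verbatim}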
Let $\Lambda(U) := \lambda(U\tilde U)$, and define the equivalence relation $A \sim B$ for Hermitian matrices $A$ and $B$ iff $\lambda(A) = \lambda(B)$. Then we have: \begin{lemma} \label{lemma:2-proof} Let $U_j$ be unitary matrices, and let $H_j$ be Hermitian matrices such that $U_j \tilde U_j = e^{2iH_j}$. Then there exist Hermitian matrices $K_j$ such that $H_j \sim K_j$, and \begin{equation} \Lambda(U_N \ldots U_1) = \lambda(e^{2i(K_1+\cdots+K_N)}). \label{eq:2-proof} \end{equation} \end{lemma} \begin{proof} We induct on $N$. The result is trivial for $N = 1$, so we need only consider the inductive step. Using the fact that $\lambda(AB) = \lambda(BA)$, we have \begin{equation} \Lambda(U_{N+1}\ldots U_1) = \lambda(\tilde U_{N+1} U_{N+1} \, U_N \ldots U_1 \tilde U_1 \ldots \tilde U_N). \end{equation} By the inductive hypothesis there exist Hermitian $K_j'$ such that $H_j \sim K_j'$ and \begin{equation} \lambda(U_N \ldots U_1 \tilde U_1 \ldots \tilde U_N) = \lambda(e^{2i(K_1'+\cdots+K_N')}). \end{equation} Therefore, $U_N \ldots U_1 \tilde U_1 \ldots \tilde U_N = e^{2i(K_1''+\cdots+K_N'')}$, for some $K_j'' \sim H_j$. Observe also that \begin{equation} \tilde U_{N+1}U_{N+1} \sim U_{N+1}\tilde U_{N+1} = e^{2iH_{N+1}}, \end{equation} and thus $\tilde U_{N+1}U_{N+1} = e^{2iK_{N+1}''}$ for some $K_{N+1}'' \sim H_{N+1}$. It follows by substitution that \begin{equation} \Lambda(U_{N+1}\ldots U_1) = \lambda(e^{2i K_{N+1}''} e^{2i(K_1''+\cdots+K_N'')}). \end{equation} Applying Thompson's theorem gives \begin{equation} \Lambda(U_{N+1}\ldots U_1) = \lambda(e^{2i (K_1+\cdots+K_{N+1})}) \end{equation} for some $K_j \sim K_j'' \sim H_j$, which completes the inductive step of the proof. \end{proof} Given this result, it is straightforward to complete the proof of Eq.~(\ref{eq:2-constraint}). \begin{proof} Write $U$ in the form \begin{eqnarray} U = e^{-i H t_1} V_1 e^{-i H t_2} V_2 \ldots V_{k-1} e^{-iH t_k}, \end{eqnarray} where $t_1,\ldots,t_k$ are times of evolution, $t = t_1 + \ldots + t_k$ is the total time for simulation, and $V_j$ are local unitaries. Without loss of generality, we may assume $H$ is in canonical form. Applying Lemma~\ref{lemma:2-proof}, we obtain \begin{eqnarray} \Lambda(U) = \lambda(e^{2i(H_1 t_1+\ldots + H_k t_k)}) \label{eq:Lambdau} \end{eqnarray} where $H_j \sim H$ for each $j$. Here we have used the observation $V_j \tilde V_j = 1$, so all the contributions from local unitaries vanish. It follows from Lemma~\ref{lemma:can-formula} that \begin{eqnarray} \phi(U)+\pi \vec m = \lambda(H_1 t_1+\ldots + H_k t_k) \end{eqnarray} for some vector of integers $\vec m$, and using Ky Fan's theorem gives \begin{eqnarray} \phi(U)+\pi \vec m \prec \lambda(H)(t_1+\ldots +t_k), \end{eqnarray} which is Eq.~(\ref{eq:2-constraint}), as desired. \end{proof} \section{Discussion} \label{sec:conc} In this paper, we have provided a simplified proof of a lower bound on the time required to simulate a two-qubit unitary gate using a given two-qubit interaction Hamiltonian and local unitaries. The bound follows easily from standard results on majorization together with Thompson's theorem on products of unitary operators. Although we have described canonical decompositions of two-qubit gates in some detail, we note that our proof does not actually require properties of the decomposition unique to two qubits. In fact, it is straightforward to prove an analogue of Eq.~(\ref{eq:2-constraint}) for an $n$-qubit system.
For an $n$-qubit operator $M$, suppose we define a generalized spin flip $M \rightarrow \tilde M$, where $\tilde M$ is the transpose of $M$ in a basis chosen so that, whenever $M$ is local, $M$ is orthogonal, i.e., $M\tilde M = I$. It is not difficult to construct examples of such bases, at least when $n$ is even. An example is the basis obtained by rotating the computational basis using the transformation $(I-iY^{\otimes n})/\sqrt 2$, for $n$ even. This basis change gives $\tilde M = Y^{\otimes n} M^T Y^{\otimes n}$, where the transpose is taken in the computational basis, and thus this operation generalizes the transpose in the magic basis. In this general setting the following lower bound on the time required to implement an $n$-qubit gate holds: \begin{cor} Let $U$ be an $n$-qubit unitary operator, and let $H$ be an $n$-qubit Hamiltonian. Then the time required to simulate $U$ using $H$ and fast local unitaries satisfies \begin{eqnarray} {1 \over 2} \arg \lambda(U \tilde U) + \pi \vec m \prec \frac{\lambda(H+\tilde H)}{2} \, t \label{eq:n-constraint} \end{eqnarray} for some vector of integers $\vec m$. \end{cor} \noindent The proof follows simply by taking the arguments of both sides of Eq.~(\ref{eq:Lambdau}) and applying Ky Fan's theorem. All steps leading up to Eq.~(\ref{eq:Lambdau}) remain valid for $n$-qubit systems using the above definition of the generalized spin flip. Unfortunately, we have not found any interesting examples with $n>2$ for which Eq.~(\ref{eq:n-constraint}) provides a nontrivial lower bound on the time required to implement some quantum gate. It would be interesting to construct cases where Eq.~(\ref{eq:n-constraint}) (or some similar condition) does give a nontrivial constraint on multipartite gate simulation. One might imagine that such techniques could be used to prove circuit lower bounds on certain quantum computations, although it does not seem likely that such bounds would be especially strong, given the well-known difficulty of this problem. \acknowledgments We thank Aram Harrow and Tobias Osborne for helpful discussions, and Andrew Doherty for an informative seminar on related problems. AMC received support from the Fannie and John Hertz Foundation, and thanks the University of Queensland node of the Centre for Quantum Computer Technology for its hospitality. AMC was also supported in part by the Cambridge--MIT Institute, by the Department of Energy under cooperative research agreement DE-FC02-94ER40818, and by the National Security Agency and Advanced Research and Development Activity under Army Research Office contract DAAD19-01-1-0656. Finally, we acknowledge the hospitality of the Caltech Institute for Quantum Information, where this work was completed. This work was supported in part by the National Science Foundation under grant EIA-0086038. \appendix \section*{Appendix: A method for computing the canonical parameters of a two-qubit unitary gate} In this appendix, we describe a method for computing the canonical parameters of a two-qubit unitary, based on the discussion in Section~\ref{subsec:canonical}. The key is to take logarithms in just the right way. From Eqs.~(\ref{eq:order}) and~(\ref{eq:phi1})--(\ref{eq:phi4}), we see that \begin{equation} \frac{3\pi}{2} \ge 2\phi_1 \ge 2\phi_2 \ge 2\phi_3 \ge 2\phi_4 \ge -\frac{3\pi}{2}.
\label{eq:phiorder} \end{equation} It is not difficult to find examples where the first or last inequality is saturated, so no single fixed branch of the logarithm function can be used to determine the $\phi_j$. One might hope instead that there exists a method for choosing a different branch for each particular $U$, so that the corresponding $2\phi_j$ lie within that branch. However, even this is not possible in general. To understand this, note that \begin{equation} 2\phi_1-2\phi_4=4(\theta_x+\theta_y). \label{eq:thetaxthetay} \end{equation} In cases where $\theta_x=\theta_y=\pi/4$, we have $2\phi_1-2\phi_4=2\pi$, in which case the values $2\phi_j$ do not lie in {\em any} one branch. We now show how to compute the $\phi_j$. The idea is that we can first take the argument of the eigenvalues in Eq.~(\ref{eq:can-eig-decomp}) over some fixed branch. Then we can systematically determine which of the resulting values have been shifted by $2\pi$ from the value $2\phi_j$ (due to an incorrect branch) and correct these values accordingly. Let $S_j$, $j=1,\dots,4$ be defined as follows: \begin{equation} 2S_j=\arg (e^{2i\phi_j}). \end{equation} That is, $2S_j$ are the arguments of the eigenvalues of $U\tilde U$, where we take the argument over the branch $(-\frac{\pi}{2}, \frac{3\pi}{2}]$, so that the $S_j$ are contained in the interval $(-\frac{\pi}{4}, \frac{3\pi}{4}]$. Considering the range of values that $\phi_j$ may take, from Eq.~(\ref{eq:phiorder}), and the particular branch we are using, it is clear that: \begin{equation} S_j= \left\{ \begin{array}{lcl} \phi_j+\pi &\hspace{0.5cm}& \mbox{if } \phi_j\le-\frac{\pi}{4} \\ \phi_j & & \mbox{otherwise.} \end{array} \right. \label{eq:sj} \end{equation} {}From Eqs.~(\ref{eq:phi1})--(\ref{eq:phi4}) we have \begin{equation} \phi_1+\phi_2+\phi_3+\phi_4=0. \label{eq:sumphi} \end{equation} Combining Eqs.~(\ref{eq:sj}) and~(\ref{eq:sumphi}), we see that \begin{equation} S_1+S_2+S_3+S_4=\pi n, \end{equation} where $n$ is the number of $\phi_j$ that are less than or equal to $-\frac{\pi}{4}$. Possible values for $n$ are $0, 1, 2$ and $3$ (all four $\phi_j$ cannot simultaneously be $\le-\frac{\pi}{4}$, since that would contradict Eq.~(\ref{eq:sumphi})). Since the $\phi_j$ obey the ordering in Eq.~(\ref{eq:phiorder}), the $n$ values of $\phi_j$ that are less than or equal to $-\frac{\pi}{4}$ are $\phi_4,\dots,\phi_{4-n+1}$, and the remaining $4-n$ values greater than $-\frac{\pi}{4}$ are $\phi_1,\dots,\phi_{4-n}$. Thus, using Eq.~(\ref{eq:sj}), we see that the set of values $S_j$ consists of $n$ ``shifted'' $\phi_j$ values \begin{equation} \phi_4+\pi,\dots,\phi_{4-n+1}+\pi, \label{eq:shifted} \end{equation} and $4-n$ ``non-shifted'' values of $\phi_j$ \begin{equation} \phi_1,\dots,\phi_{4-n}. \label{eq:nonshifted} \end{equation} Furthermore, all of the shifted values in (\ref{eq:shifted}) are no less than any of the non-shifted values in (\ref{eq:nonshifted}). This is shown by combining Eq.~(\ref{eq:order}) with Eq.~(\ref{eq:thetaxthetay}), giving $\phi_1-\phi_4 \le \pi$, which when combined with Eq.~(\ref{eq:phiorder}) implies that $\phi_j \le \phi_k + \pi$ for all $j,k$, as required. Therefore, the largest $n$ values of $S_j$ are guaranteed to be the values in (\ref{eq:shifted}).
Thus subtracting $\pi$ from the largest $n$ values of $S_j$ gives us $\phi_4,\dots,\phi_{4-n+1}$, and the remaining $4-n$ values of $S_j$ give us $\phi_1,\dots,\phi_{4-n}$. In summary, the nonlocal parameters $\theta_x, \theta_y$ and $\theta_z$ may be computed as follows. Find the arguments of the eigenvalues of $U\tilde U$ over the branch $(-\frac{\pi}{2}, \frac{3\pi}{2}]$. Call these values $2S_j$. Calculate $n=(S_1+S_2+S_3+S_4)/\pi$. Replace the $n$ largest values of $S_j$ by those values minus $\pi$. The resulting values, when placed in nonincreasing order, are equal to $(\phi_1,\phi_2,\phi_3,\phi_4)$. The parameters $\theta_x,\theta_y$ and $\theta_z$ are then found by inverting Eqs.~(\ref{eq:phi1})--(\ref{eq:phi4}). \end{document}
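The summary above translates directly into a few lines of code. The following sketch (Python with numpy and scipy; an added illustration in which the test parameters $(0.5,0.3,-0.1)$ and the random local unitaries are arbitrary choices) rebuilds a gate with known canonical parameters and recovers them from the eigenvalues of $U\tilde U$.

\begin{verbatim}
# Canonical parameters of a two-qubit gate via the appendix procedure.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
YY = np.kron(Y, Y)

def spin_flip(M):
    return YY @ M.T @ YY

def canonical_parameters(U):
    U = U / np.linalg.det(U) ** 0.25                  # enforce unit determinant
    two_S = np.angle(np.linalg.eigvals(U @ spin_flip(U)))  # branch (-pi, pi]
    two_S[two_S <= -np.pi / 2] += 2 * np.pi           # move to (-pi/2, 3pi/2]
    S = two_S / 2
    n = int(round(S.sum() / np.pi))                   # number of shifted values
    S[np.argsort(S)[::-1][:n]] -= np.pi               # unshift the n largest
    p1, p2, p3, _ = np.sort(S)[::-1]                  # phi_1 >= ... >= phi_4
    return (p1 + p2) / 2, (p1 + p3) / 2, (p2 + p3) / 2  # theta_x, theta_y, theta_z

def random_su2(rng):
    a, b, c = rng.standard_normal(3)
    return expm(1j * (a * X + b * Y + c * Z))         # unit determinant

rng = np.random.default_rng(1)
tx, ty, tz = 0.5, 0.3, -0.1
Uc = expm(1j * (tx * np.kron(X, X) + ty * np.kron(Y, Y) + tz * np.kron(Z, Z)))
U = (np.kron(random_su2(rng), random_su2(rng)) @ Uc
     @ np.kron(random_su2(rng), random_su2(rng)))
print(canonical_parameters(U))                        # ~ (0.5, 0.3, -0.1)
\end{verbatim}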
\begin{document} \title{ Examples of de Branges-Rovnyak spaces generated by nonextreme functions} \author{Bartosz {\L}anucha, Maria T. Nowak} \address{ Bartosz {\L}anucha, \newline Institute of Mathematics, \newline Maria Curie-Sk{\l}odowska University, \newline pl. M. Curie-Sk{\l}odowskiej 1, \newline 20-031 Lublin, Poland} \email{[email protected]} \address{ Maria T. Nowak, \newline Institute of Mathematics, \newline Maria Curie-Sk{\l}odowska University, \newline pl. M. Curie-Sk{\l}odowskiej 1, \newline 20-031 Lublin, Poland} \email{[email protected]} \subjclass[2010]{47B32, 30H10, 30H15} \keywords{Hardy space, de Branges-Rovnyak space, Smirnov class, rigid function} \begin{abstract} We describe de Branges-Rovnyak spaces $\mathcal H (b_{\alpha})$, $\alpha>0$, where the function $b_{\alpha}$ is not extreme in the unit ball of $H^{\infty}$ on the unit disk $\mathbb D $, defined by the equality $b_{\alpha}(z)/a_{\alpha}(z)= (1-z)^{-\alpha}$, $z\in\mathbb D$, where $a_{\alpha}$ is the outer function such that $a_{\alpha}(0)>0$ and $|a_{\alpha}|^2+|b_{\alpha}|^2= 1$ a.e. on $\partial \mathbb D$. \end{abstract} \maketitle \section{Introduction} Let $H^2$ denote the standard Hardy space in the open unit disk $\mathbb D$ and let $\mathbb T=\partial \mathbb D$. For $\chi\in L^{\infty}(\mathbb{T})$ let $T_{\chi}$ denote the bounded Toeplitz operator on $H^2$, that is, $T_{\chi}f=P_{+}(\chi f)$, where $P_{+}$ is the orthogonal projection of $L^{2}(\mathbb{T})$ onto $H^2$. In particular, $S=T_{e^{it}}$ is called the shift operator. We will denote by $\mathcal{M}(\chi)$ the range of $T_{\chi}$ equipped with the range norm, that is, the norm that makes the operator $T_{\chi}$ a coisometry of $H^2$ onto $\mathcal{M}(\chi)$. Given a function $b$ in the unit ball of $H^{\infty}$, the \textit{de Branges-Rovnyak space $\mathcal{H}(b)$} is the image of $H^2$ under the operator $(I-T_bT_{\overline{b}})^{1/2}$ with the corresponding range norm $\|\cdot\|_b$. It is known that $\mathcal{H}(b)$ is a Hilbert space with reproducing kernel $$k_w^b(z)=\frac{1-\overline{b(w)}b(z)}{1-\overline{w}z}\quad(z,w\in\mathbb{D}).$$ Here we are interested in the case when the function $b$ is not an extreme point of the unit ball of $H^{\infty}$. Then there exists an outer function $a\in H^{\infty}$ for which $|a|^2+|b|^2=1$ a.e. on $\mathbb{T}$. Moreover, if we suppose that $a(0)>0$, then $a$ is uniquely determined, and, following Sarason, we say that $(b,a)$ is a \emph{pair}. The function $a$ is sometimes called the \emph{Pythagorean mate} associated with $b$. It is known that both $\mathcal{M}(a)$ and $\mathcal{M}(\overline{a})$ are contained contractively in $\mathcal{H}(b)$ (see \cite[p. 25]{sarason}). Moreover, if $(b,a)$ is a corona pair, that is, $|a|+|b|$ is bounded away from $0$ in $\mathbb{D}$, then $\mathcal{H}(b)=\mathcal{M}(\overline{a})$ (see e.g. \cite[p. 62]{sarason}). Let us recall that the Smirnov class $\mathcal{N}^{+}$ consists of those holomorphic functions in $\mathbb{D}$ that are quotients of functions in $H^{\infty}$ in which the denominators are outer functions. If $(b,a)$ is a pair, then the quotient $\varphi=b/a$ is in $\mathcal{N}^{+}$, and conversely, for every nonzero function $\varphi\in\mathcal{N}^{+}$ there exists a unique pair $(b,a)$ such that $\varphi=b/a$ (\cite{sarason2}). Many properties of $\mathcal{H}(b)$ can be expressed in terms of the function $\varphi=b/a$ in the Smirnov class $\mathcal{N}^{+}$. 
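The notion of a pair is easy to illustrate numerically. In the sketch below (Python with numpy; an added illustration using the classical example $b(z)=(1+z)/2$, whose Pythagorean mate is $a(z)=(1-z)/2$), the mate is recovered from the outer-function formula $a(z)=\exp\left(\frac{1}{4\pi}\int_{0}^{2\pi}\frac{e^{it}+z}{e^{it}-z}\log\left(1-|b(e^{it})|^{2}\right)dt\right)$ by a midpoint-rule discretization.

\begin{verbatim}
# Recover the Pythagorean mate a of a non-extreme b from the outer-function
# integral and compare with the known mate.  Example: b = (1+z)/2, a = (1-z)/2.
import numpy as np

def mate(z, b_on_circle, N=200_000):
    """a(z) = exp( (1/4pi) int (e^{it}+z)/(e^{it}-z) log(1-|b|^2) dt )."""
    t = 2 * np.pi * (np.arange(N) + 0.5) / N     # midpoints avoid t = 0
    e = np.exp(1j * t)
    w = np.log(1.0 - np.abs(b_on_circle(e)) ** 2)
    return np.exp(np.mean((e + z) / (e - z) * w) / 2)

b = lambda z: (1 + z) / 2
for z in (0.0, 0.3 + 0.2j, -0.5j):
    print(z, mate(z, b), (1 - z) / 2)            # last two columns agree
\end{verbatim}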
It is worth noting here that if $\varphi$ is rational, then the functions $a$ and $b$ in the representation of $\varphi$ are also rational (see \cite{sarason2}) and in such a case $ (b,a)$ is called a rational pair. Recently, spaces $\mathcal H(b)$ for rational pairs have been studied in \cite{ransford2}, \cite{ross} and \cite{LN}. In \cite{ross} the authors also described the spaces $\mathcal H (b^r)$, where $b$ is a rational outer function in the closed unit ball of $H^{\infty}$ and $r$ is a positive number. Here we describe the de Branges-Rovnyak spaces $\mathcal{H}(b_{\alpha})$, $\alpha>0$, where $(b_{\alpha},a_{\alpha})$ is the pair such that $$\varphi_{\alpha}(z)= \frac{b_{\alpha}(z)}{a_{\alpha}(z)}= \frac1{(1-z)^{\alpha}}$$ (principal branch). For a function $\varphi$ that is holomorphic on $\mathbb{D}$ we define $T_{\varphi}$ to be the operator of multiplication by $\varphi$ on the domain $\mathcal{D}(T_{\varphi})=\{f\in H^2\colon\ \varphi f\in H^2\}$. It is well known that $T_{\varphi}$ is bounded on $H^2$ if and only if $\varphi\in H^{\infty}$. Moreover, it was proved in \cite{sarason2} that the domain $\mathcal{D}(T_{\varphi})$ is dense in $H^2$ if and only if $\varphi\in \mathcal{N}^{+}$. More precisely, if $\varphi$ is a nonzero function in $\mathcal{N}^{+}$ with canonical representation $\varphi={b}/{a}$, then $\mathcal{D}(T_{\varphi})=aH^2$. In this case $T_{\varphi}$ has a unique, densely defined adjoint $T_{\varphi}^{*}$. In what follows we denote $T_{\overline{\varphi}}=T_{\varphi}^{*}$ (see \cite[p. 286]{sarason2} for more details). The next theorem says that the domain of $T_{\overline{\varphi}}$ coincides with the de Branges-Rovnyak space $\mathcal{H}(b)$. \begin{thm}[\cite{sarason2}]\label{sarsar} Let $(b,a)$ be a pair and let $\varphi=b/a$. Then the domain of $T_{\overline{\varphi}}$ is $\mathcal{H}(b)$ and for $f\in\mathcal{H}(b)$, $$\|f\|_{b}^2=\|f\|_{2}^2+\|T_{\overline{\varphi}}f\|_{2}^2.$$ \end{thm} The next proposition was also proved in \cite{sarason2}. \begin{prop}[\cite{sarason2}]\label{propek} If $\varphi$ is in $\mathcal{N}^{+}$, $\psi$ is in $H^{\infty}$, and $f$ is in $\mathcal{D}(T_{\overline{\varphi}})$, then $$T_{\overline{\varphi}}T_{\overline{\psi}}f=T_{\overline{\varphi}\overline{\psi}}f=T_{\overline{\psi}}T_{\overline{\varphi}}f.$$ \end{prop} \begin{cor}\label{korek} Let $\varphi_1,\varphi_2\in\mathcal{N}^{+}$ have canonical representations $\varphi_{i}=b_{i}/a_{i}$, $i=1,2$. If $\varphi_2/\varphi_1\in H^{\infty}$, then $\mathcal{H}(b_1)\subset\mathcal{H}(b_2)$. \end{cor} \begin{proof} Put $\psi = \varphi_2/\varphi_1$. It follows from Proposition \ref{propek} that $\mathcal{D}(T_{\overline{\varphi}_1})\subset\mathcal{D}(T_{\overline{\varphi}_1\overline{\psi}})$, and so $$\mathcal{H}(b_1)=\mathcal{D}(T_{\overline{\varphi}_1})\subset\mathcal{D}(T_{\overline{\varphi}_1\overline{\psi}})=\mathcal{D}(T_{\overline{\varphi}_2})=\mathcal{H}(b_2).$$ \end{proof} In the proof of our main theorem we will use the following description of invertible Toeplitz operators with unimodular symbols. \begin{thDWT}[\cite{nik}, p. 250]\label{animal} Let $\psi\in L^{\infty}(\partial\mathbb{D})$ be such that $|\psi|=1$ a.e. on $\partial\mathbb{D}$. The following are equivalent. \begin{itemize} \item[(a)] $T_{\psi}$ is invertible. \item[(b)] $\mathrm{dist}(\psi,H^{\infty})<1$ and $\mathrm{dist}(\overline{\psi},H^{\infty})<1$. \item[(c)] There exists an outer function $h\in H^{\infty}$ such that $\|\psi-h\|_{\infty}<1$.
\item[(d)] There exist real valued bounded functions $u$, $v$ and a constant $c\in\mathbb{R}$ such that $\psi=e^{i(u+\tilde{v}+c)}$ and $\|u\|_{\infty}<\frac{\pi}2$, where $\tilde{v}$ denotes the conjugate function of $v$. \end{itemize} \end{thDWT} We will need also the notion of a rigid function in $H^1$. A function in $H^1$ is called rigid if no other functions in $H^1$, except for positive scalar multiples of itself, have the same argument as it almost everywhere on $\partial \mathbb{ D}$. As observed in \cite{sarasonk}, every rigid function is outer. It is known that the function $(1-z)^{\alpha}$ is rigid if $0<\alpha\leq 1$ and is not rigid if $\alpha>1$ (see e.g. \cite[Section 6.8]{fm}). The next theorem shows a close connection between kernels of Toeplitz operators and rigid functions in $H^1$ (\cite[p. 70]{sarason}). \begin{thm}\label{jdr} If $f$ is an outer function in $H^2$, then $f^2$ is rigid if and only if the operator $T_{\overline{f}/f}$ has a trivial kernel. \end{thm} Moreover, for a pair $(b,a)$ the following sufficient condition for density of $\mathcal{M}(a)$ in $\mathcal{H}(b)$ is known (\cite[p. 72]{sarason}, \cite[vol. 2, p. 496]{fm}). \begin{thm}\label{rigid} If the function $a^2$ is rigid, then $\mathcal{M}(a)$ is dense in $\mathcal{H}(b)$. \end{thm} \section{The spaces $\mathcal{H}(b_{\alpha})$, $\alpha>0$} Recall that for $\alpha>0$ we define the pair $(b_{\alpha},a_{\alpha})$ by $$\varphi_{\alpha}(z)=\frac{b_{\alpha}(z)}{a_{\alpha}(z)}=\frac1{(1-z)^{\alpha}}.$$ Consequently, the outer function $a_{\alpha}$ is given by \begin{equation}\label{a} a_{\alpha}(z)=\exp{\left\{\frac1{4\pi}\int_{0}^{2\pi}\frac{e^{it}+z}{e^{it}-z}\log{\frac{|1-e^{it}|^{2\alpha}}{1+|1-e^{it}|^{2\alpha}}}dt\right\}}. \end{equation} Since both $a_{\alpha}$ and $(1-z)^{\alpha}$ are outer functions, the equality $(1-z)^{\alpha}b_{\alpha}(z)=a_{\alpha}(z)$ implies that $b_{\alpha}$ is also outer. Hence \begin{equation}\label{b} b_{\alpha}(z)=a_{\alpha}(z)\varphi_{\alpha}(z)=\exp{\left\{\frac1{4\pi}\int_{0}^{2\pi}\frac{e^{it}+z}{e^{it}-z}\log{\frac{1}{1+|1-e^{it}|^{2\alpha}}}dt\right\}}. \end{equation} This formula shows that $\log{|b_{\alpha}(z)|}$ is a function harmonic in ${\mathbb{D}}$ and continuous in $\overline{\mathbb{D}}$. Moreover, $|b_{\alpha}(1)|=1$. We now prove that actually $b_{\alpha}(1)=1$. To this end, it is enough to note that $\mathrm{arg}b_{\alpha}(r)=0$ for all $0<r<1$. Indeed, \begin{displaymath} \begin{split} \mathrm{arg}b_{\alpha}(r)&=\frac1{4\pi}\int_{0}^{2\pi}\mathrm{Im}\left(\frac{e^{it}+r}{e^{it}-r}\right)\log{\frac{1}{1+|1-e^{it}|^{2\alpha}}}dt\\ &=-\frac1{4\pi}\int_{-\pi}^{\pi}\frac{2r\sin t }{|e^{it}-r|^2}\log{\frac{1}{1+|1-e^{it}|^{2\alpha}}}dt=0, \end{split} \end{displaymath} because the integrand is an odd function. The following proposition says for which $\alpha$ a nontangential limit at 1 of each function (and its derivatives up to a given order) from $\mathcal{H}(b_{\alpha})$ exists. \begin{prop}\label{deri} Let $n\in\mathbb{N}$. Every $f\in\mathcal{H}(b_{\alpha})$ along with its derivatives up to order $n-1$ has a nontangential limit at the point $1$ if and only if $\alpha>n-1/2$. 
\end{prop} This is a consequence of Theorem 3.2 from \cite{fric} (see also \cite{sarason} and \cite{ross}), which states that the following two conditions are equivalent: \begin{itemize} \item[(i)] for every $f\in\mathcal{H}(b_{\alpha})$ the functions $f(z), f'(z),\ldots , f^{(n-1)}(z)$ have finite limits as $z$ tends nontangentially to $1$; \item[(ii)] $$\int_{0}^{2\pi}\frac{|\log|b_{\alpha}(e^{it})||}{|1-e^{it}|^{2n}}dt<+\infty.$$ \end{itemize} Since \begin{displaymath} \log{|b_{\alpha}(e^{it})|^2}=\log{\frac{1}{1+|1-e^{it}|^{2\alpha}}}=\log{\left(1-\frac{|1-e^{it}|^{2\alpha}}{1+|1-e^{it}|^{2\alpha}}\right)} \end{displaymath} and $|\log{(1-x)}|\approx |x|$ for $x$ sufficiently close to zero, we have \begin{displaymath} \log{|b_{\alpha}(e^{it})|}\approx\frac{|1-e^{it}|^{2\alpha}}{1+|1-e^{it}|^{2\alpha}}\approx |1-e^{it}|^{2\alpha} \end{displaymath} whenever $t$ is sufficiently close to $0$ or $2\pi$. This implies that \begin{displaymath} \int_0^{2\pi}\frac{|\log{|b_{\alpha}(e^{it})|}|}{|1-e^{it}|^{2n}}dt<\infty \end{displaymath} if and only if \begin{displaymath} \int_0^{2\pi}\frac{1}{|1-e^{it}|^{2n-2\alpha}}dt<\infty, \end{displaymath} which holds only when $\alpha>n-1/2$. In particular, we see that every $f\in\mathcal{H}(b_{\alpha})$ has a nontangential limit at $1$ if and only if $\alpha>1/2$. The next proposition is an immediate consequence of Corollary \ref{korek}. \begin{prop} For every $0<\alpha\leq\beta<\infty$, $$\mathcal{H}(b_{\beta})\subset \mathcal{H}(b_{\alpha}).$$ \end{prop} Finally, we observe that $$|b_{\alpha}(z)|\geq \sqrt{\frac1{1+4^{\alpha}}},$$ which implies that $(b_{\alpha},a_{\alpha})$ is a corona pair for $\alpha>0$. \begin{cor}\label{opiec} For $\alpha>0$, $$\mathcal{M}({a}_{\alpha})=\mathcal{M}((1-z)^{\alpha})\quad\mathrm{and}\quad\mathcal{H}(b_{\alpha})=\mathcal{M}(\overline{a}_{\alpha})=\mathcal{M}(\overline{(1-z)^{\alpha}})$$ with equivalence of norms. \end{cor} \begin{proof} The equality of $\mathcal{H}(b_{\alpha})$ and $\mathcal{M}(\overline{a}_{\alpha})$ follows from the fact that $(b_{\alpha},a_{\alpha})$ is a corona pair, which in turn is a consequence of the fact that $b_{\alpha}$ is bounded below. The latter implies that $1/b_{\alpha}\in H^{\infty}$ and so $T_{b_{\alpha}}$ and $T_{\overline{b}_{\alpha}}$ are invertible. Hence $$\mathcal{M}((1-z)^{\alpha})=T_{\frac{{a}_{\alpha}}{{b}_{\alpha}}}H^2=T_{{a}_{\alpha}}H^2$$ and $$\mathcal{M}(\overline{(1-z)^{\alpha}})=T_{\frac{\overline{a}_{\alpha}}{\overline{b}_{\alpha}}}H^2=T_{\overline{a}_{\alpha}}H^2.$$ Both $\mathcal{M}({a}_{\alpha})$ and $\mathcal{M}((1-z)^{\alpha})$ are boundedly contained in $H^2$. Hence, the Closed Graph Theorem implies equivalence of their norms. Similarly, one obtains the equivalence of norms in $\mathcal{M}(\overline{a}_{\alpha})$ and $\mathcal{M}(\overline{(1-z)^{\alpha}})$. \end{proof} \section{Main results} We start with the following. \begin{thm}\label{lemBB} For any $n\in\mathbb{N}$ and $n-1/2<\alpha<n+1/2$ we have \begin{equation*} \mathcal{M}(\overline{(1-z)^{\alpha}})=\mathcal{M}((1-z)^{\alpha})+\mathrm{span}\{S^{*}(1-z)^{\alpha},\ldots, S^{*n}(1-z)^{\alpha}\}. 
\end{equation*} \end{thm} \begin{proof} Let $$Q(z)=\frac{1-z}{\overline{1-z}},\quad z\in\mathbb{D}.$$ Then $Q$ has a continuous extension to $\overline{\mathbb{D}}\setminus\{1\}$ and $$Q(e^{it})=e^{(t-\pi)i},\quad t\in (0,2\pi),$$ which implies that $$T_{Q^n}=(-1)^nS^n\quad \mathrm{for}\ n\geq1.$$ Moreover, we observe that for $n-1/2<\alpha<n+1/2$, $n\geq 1$, we have $$T_{Q^{\alpha}}=T_{Q^{\alpha-n}Q^n}=(-1)^nT_{Q^{\alpha-n}}S^n.$$ Consequently, \begin{equation}\label{krop} T_{(1-z)^{\alpha}}=T_{\overline{(1-z)^{\alpha}}Q^{\alpha}}=(-1)^nT_{\overline{(1-z)^{\alpha}}}T_{Q^{\alpha-n}}S^n. \end{equation} Observe now that the operator $T_{Q^{\alpha-n}}$ is invertible. This is an immediate consequence of the Devinatz-Widom Theorem. Let $f\in \mathcal{M}(\overline{(1-z)^{\alpha}})$ and $f=T_{\overline{(1-z)^{\alpha}}}g$ for a function $g\in H^2$. Since $T_{Q^{\alpha-n}}$ is invertible, there exists $g_0\in H^2$ such that $(-1)^ng= T_{Q^{\alpha-n}}g_0$. Hence, using \eqref{krop}, we obtain \begin{displaymath} \begin{split} f=T_{\overline{(1-z)^{\alpha}}}g&=(-1)^nT_{\overline{(1-z)^{\alpha}}}T_{Q^{\alpha-n}}g_0\\ &= (-1)^nT_{\overline{(1-z)^{\alpha}}}T_{Q^{\alpha-n}}\left(S^nS^{*n}g_0+\sum_{k=0}^{n-1}\langle g_0,z^k\rangle z^k\right)\\ &= T_{(1-z)^{\alpha}}S^{*n}g_0+(-1)^n\sum_{k=0}^{n-1}\langle g_0,z^k\rangle T_{\overline{(1-z)^{\alpha}}}T_{Q^{\alpha-n}}z^k. \end{split} \end{displaymath} Since for $0\leq k\leq n-1$, \begin{equation}\label{krop2} \begin{split} (-1)^nT_{\overline{(1-z)^{\alpha}}}T_{Q^{\alpha-n}}z^k&=(-1)^nT_{\overline{Q}^n(1-z)^{\alpha}}S^k1\\&=S^{*(n-k)}T_{(1-z)^{\alpha}}1=S^{*(n-k)}(1-z)^{\alpha}, \end{split} \end{equation} we get \begin{displaymath} \begin{split} f=& (1-z)^{\alpha}S^{*n}g_0+\sum_{k=0}^{n-1}\langle g_0,z^k\rangle S^{*(n-k)}(1-z)^{\alpha}\\ &\in \mathcal{M}((1-z)^{\alpha})+\mathrm{span}\{S^{*}(1-z)^{\alpha},\ldots, S^{*n}(1-z)^{\alpha}\}. \end{split} \end{displaymath} On the other hand, if $$f= (1-z)^{\alpha}h+\sum_{k=1}^{n}c_k S^{*k}(1-z)^{\alpha},\quad h\in H^2,$$ then, by \eqref{krop} and \eqref{krop2}, \begin{displaymath} \begin{split} f=& T_{(1-z)^{\alpha}}h+\sum_{k=0}^{n-1}c_{n-k} S^{*(n-k)}(1-z)^{\alpha} \\ =&(-1)^nT_{\overline{(1-z)^{\alpha}}}T_{Q^{\alpha-n}}S^nh+(-1)^n\sum_{k=0}^{n-1}c_{n-k} T_{\overline{(1-z)^{\alpha}}}T_{Q^{\alpha-n}}z^k\\ =&T_{\overline{(1-z)^{\alpha}}}\left((-1)^nT_{Q^{\alpha-n}}S^nh+(-1)^n\sum_{k=0}^{n-1}c_{n-k} T_{Q^{\alpha-n}}z^k\right)\in \mathcal{M}(\overline{(1-z)^{\alpha}}). \end{split} \end{displaymath} \end{proof} Now we prove our main result. \begin{thm}\label{mejn} Let $0<\alpha<\infty$ and let $(b_{\alpha}, a_{\alpha})$ be a pair, with the functions $b_{\alpha}$ and $a_{\alpha}$ given by \eqref{b} and \eqref{a}, respectively. 
Then \begin{enumerate}[(i)] \item for $0<\alpha<1/2$, $$\mathcal{H}(b_{\alpha})=\mathcal{M}(a_{\alpha})=(1-z)^{\alpha}H^2,$$ \item for $n-1/2<\alpha<n+1/2$, $n=1,2,\ldots$, $$\mathcal{H}(b_{\alpha})=\mathcal{M}(a_{\alpha})+\mathcal{P}_n=(1-z)^{\alpha}H^2+\mathcal{P}_n,$$ where $\mathcal{P}_n$ is the set of all polynomials of degree at most $n-1$, \item $$\mathcal{H}(b_{1/2})=\overline{\mathcal{M}(a_{1/2})}=\overline{(1-z)^{1/2}H^2},$$ where the closure is taken with respect to the $\mathcal{H}(b_{1/2})$-norm, \item for $\alpha=n+1/2$, $n=1,2,\ldots$, $$\mathcal{H}(b_{\alpha})=\overline{\mathcal{M}(a_{\alpha})}+\mathcal{A}_n,$$ where the closure is taken with respect to the $\mathcal{H}(b_{\alpha})$-norm and $\mathcal{A}_n$ is the $n$-dimensional subspace of $\mathcal{H}(b_{\alpha})$ defined by $$\mathcal{A}_n=\left\{ p_n\cdot P_+\left(\overline{(1-z)^{\alpha}}{(1-z)}^{1/2}\right)+P_+\left(p_nP_-\left(\overline{(1-z)^{\alpha}}{(1-z)}^{1/2}\right)\right)\ \colon \ p_n\in \mathcal{P}_n \right\},$$ where $P_-=I-P_+$. \end{enumerate} \end{thm} \begin{proof}(i) We know from Corollary \ref{opiec} that for $\alpha>0$, $$\mathcal{H}(b_{\alpha})=\mathcal{M}(\overline{a}_{\alpha})=\mathcal{M}(\overline{(1-z)^{\alpha}}).$$ We first observe that for $0<\alpha<1/2$ the operator $T_{(1-z)^{\alpha}/\overline{(1-z)^{\alpha}}}$ is invertible. This follows from $$\frac{(1-e^{it})^{\alpha}}{\overline{(1-e^{it})}^{\alpha}}=e^{i\alpha(t-\pi)},\quad t\in(0,2\pi),$$ and the Devinatz-Widom Theorem. Consequently, $$\mathcal{M}(\overline{(1-z)^{\alpha}})=T_{\overline{(1-z)^{\alpha}}}H^2=T_{\overline{(1-z)^{\alpha}}}T_{\frac{(1-z)^{\alpha}}{\overline{(1-z)^{\alpha}}}}H^2=(1-z)^{\alpha}H^2.$$ (ii) Since $\mathcal{H}(b_{\alpha})$ contains $\mathcal{M}(a_{\alpha})=\mathcal{M}((1-z)^{\alpha})$ and all polynomials (see e.g. \cite[p. 25]{sarason}), to prove (ii) it is enough to show that $$\mathcal{H}(b_{\alpha})\subset\mathcal{P}_n+\mathcal{M}((1-z)^{\alpha}).$$ By Theorem \ref{lemBB} we have $$\mathcal{H}(b_{\alpha})=\mathcal{M}(\overline{(1-z)^{\alpha}})=\mathcal{M}((1-z)^{\alpha})+\mathrm{span}\{S^{*}(1-z)^{\alpha},\ldots, S^{*n}(1-z)^{\alpha}\}.$$ Therefore, we only need to show that \begin{displaymath} \mathrm{span}\{S^{*}(1-z)^{\alpha},\ldots, S^{*n}(1-z)^{\alpha}\}\subset\mathcal{P}_n+\mathcal{M}((1-z)^{\alpha}). \end{displaymath} Clearly, \begin{displaymath} \begin{split} S^{*}(1-z)^{\alpha}&=\frac{(1-z)^{\alpha}-1}{z}=\frac{(1-z)^{\alpha}-(1-z)^n+(1-z)^n-1}{z}\\ &=S^{*}(1-z)^{n}-(1-z)^{\alpha}S^{*}(1-z)^{n-\alpha}\in \mathcal{P}_n+\mathcal{M}((1-z)^{\alpha}) \end{split} \end{displaymath} ($(1-z)^{n-\alpha}\in H^2$ since $n-\alpha>-1/2$). Now assume that for any $1\leq k<n$, $$S^{*k}(1-z)^{\alpha}\in \mathcal{P}_n+\mathcal{M}((1-z)^{\alpha}),$$ or, in other words, $$S^{*k}(1-z)^{\alpha}=p_n+(1-z)^{\alpha}h_k\text{ for some }p_n\in \mathcal{P}_n\ \mathrm{and}\ h_k\in H^2.$$ Then \begin{displaymath} \begin{split} S^{*(k+1)}(1-z)^{\alpha}&=S^{*}(S^{*k}(1-z)^{\alpha})=\frac{ p_n+(1-z)^{\alpha}h_k-p_n(0)-h_k(0)}{z}\\ &=\frac{ p_n+(1-z)^{\alpha}h_k-(1-z)^{\alpha}h_k(0)+(1-z)^{\alpha}h_k(0)-p_n(0)-h_k(0)}{z}\\ &= S^{*}p_n+h_k(0)S^{*}(1-z)^{\alpha} +(1-z)^{\alpha}S^{*}h_k\in \mathcal{P}_n+\mathcal{M}((1-z)^{\alpha}). \end{split} \end{displaymath} This completes the proof of (ii). (iii) In view of Theorem \ref{rigid}, to prove (iii) it is enough to show that $a_{1/2}^2$ is a rigid function. We actually prove that $a_{\alpha}^2$ is rigid for every $0<\alpha\leq1/2$. 
To this end, we observe that for $\alpha>0 $, \begin{equation}\label{inequal} \frac{1}{\sqrt{1+4^{\alpha}}}|1-z|^{\alpha} \leq |a_{\alpha}(z)|\leq |1-z|^{\alpha},\quad z\in\mathbb D.\end{equation} This follows from \eqref{a} and the representation of the outer function $$(1-z)^{\alpha}= \exp\left\{ \frac{\alpha}{2\pi }\int_0^{2\pi} \frac{e^{it}+z}{e^{it}-z}\log|1-e^{it}|dt\right\}.$$ Thus we have $$\frac{|a_{\alpha}(z)|}{|1-z|^{\alpha}}= \exp\left\{ \frac{1}{2\pi }\int_0^{2\pi} \frac{1-|z|^2}{|1-ze^{-it}|^2}\log\frac 1{\sqrt{1+|1-e^{it}|^{2\alpha}}}dt\right\}$$ which implies inequalities \eqref {inequal}. Now we use a reasoning analogous to that in \cite[(X--5)]{sarason}. If $a_{\alpha}^2$ is not rigid for some $0<\alpha\leq1/2$, then by Theorem \ref{jdr} there is a nonzero function $g$ in the kernel of $T_{\overline{a}_{\alpha}/a_{\alpha}}$. Then \begin{displaymath} T_{\frac{\overline{(1-z)^{\alpha}}}{(1-z)^{\alpha}}}\left(\tfrac{(1-z)^{\alpha}g}{a_{\alpha}}\right)=P_+\left(\tfrac{\overline{(1-z)^{\alpha}}g}{a_{\alpha}}\right)=P_+\left(\tfrac{\overline{(1-z)^{\alpha}}g}{a_{\alpha}}\cdot \tfrac{\overline{a}_{\alpha}}{\overline{a}_{\alpha}}\right)=T_{\frac{\overline{(1-z)^{\alpha}}}{\overline{a}_{\alpha}}}T_{\frac{\overline{a}_{\alpha}}{{a}_{\alpha}}}g=0, \end{displaymath} which means that $(1-z)^{\alpha}g/a_{\alpha}$ is a nonzero function in the kernel of $T_{\overline{(1-z)^{\alpha}}/(1-z)^{\alpha}}$, contrary to the fact that $(1-z)^{2\alpha}$ is rigid for $0<\alpha\leq1/2$ (see, e.g., \cite[Section 6.8]{fm}). (iv) We know that for every $\alpha>0$, $$\mathcal{H}(b_{\alpha})=\mathcal{M}(\overline{a}_{\alpha})=\mathcal{M}(\overline{(1-z)^{\alpha}})=T_{\overline{(1-z)^{\alpha}}}H^2$$ and $\mathcal{M}(a_{\alpha})=\mathcal{M}((1-z)^{\alpha})$ is the image under $T_{\overline{(1-z)^{\alpha}}}$ of the range of $T_{(1-z)^{\alpha}/\overline{(1-z)^{\alpha}}}$, that is, $$\mathcal{M}((1-z)^{\alpha})=T_{\overline{(1-z)^{\alpha}}}T_{\frac {(1-z)^{\alpha}}{\overline{(1-z)^{\alpha}}}}H^2. $$ It follows that the orthogonal complement of $\mathcal{M}((1-z)^{\alpha})$ in the space $\mathcal{M}(\overline{(1-z)^{\alpha}})$ is the image under $T_{\overline{(1-z)^{\alpha}}}$ of $\ker T_{\overline{(1-z)^{\alpha}}/(1-z)^{\alpha}}$. We now observe that for $\alpha=n+1/2$, $$\ker T_{\frac {\overline{(1-z)^{\alpha}}}{(1-z)^{\alpha}}}=\ker T_{\overline{z^n}}T_{\frac {\overline{(1-z)^{1/2}}}{(1-z)^{1/2}}}=(1-z)^{1/2}\mathcal{P}_n,$$ where $\mathcal{P}_n$ is the set of all polynomials of degree at most $n-1$. Finally, note that if $p_n$ is in $\mathcal{P}_n$, then \begin{displaymath} \begin{split} T_{\overline{(1-z)^{\alpha}}}\left((1-z)^{1/2}p_n \right)&=P_+\left(\overline{(1-z)^{\alpha}}(1-z)^{1/2}p_n \right)=\\ &=P_+\left(\overline{(1-z)^{\alpha}}(1-z)^{1/2} \right)p_n+P_+\left(P_{-}\left(\overline{(1-z)^{\alpha}}(1-z)^{1/2}\right)p_n \right). \end{split} \end{displaymath} Our claim follows. \end{proof} The following corollary is just another statement of (ii) in Theorem \ref{mejn}. \begin{cor} For any $n\in\mathbb{N}$ and $n-1/2<\alpha<n+1/2$ we have \begin{equation*} \mathcal{H}(b_{\alpha})=\mathcal{M}(a_{\alpha})+\mathcal{P}_n=\mathcal{M}(a_{\alpha})+\mathrm{span}\{T_{\overline{a}_{\alpha}}1,\ldots, T_{\overline{a}_{\alpha}}z^{n-1}\}. \end{equation*} \end{cor} \begin{rem} We observe that since $a_{\alpha}^2$ is rigid for all $0<\alpha\leq 1/2$, Theorem \ref{rigid} implies that the space $\mathcal{M}(a_{\alpha})$ is dense in $\mathcal{H}(b_{\alpha})$ for all such $\alpha$. 
However, for $0<\alpha< 1/2$ we have $\mathcal{M}(a_{\alpha})=\mathcal{H}(b_{\alpha})$, while $\mathcal{M}(a_{1/2})\subsetneq\mathcal{H}(b_{1/2})$. The latter follows from the fact that every $h\in H^2$ satisfies $|h(z)|=o\left((1-|z|)^{-1/2}\right)$ as $|z|\rightarrow 1^{-}$. Thus if $f\in\mathcal{M}(a_{1/2})$, then $f(z)=(1-z)^{1/2}h(z)$, $h\in H^2$, and $$|f(z)|=|1-z|^{\frac12}|h(z)|=\left(\frac{|1-z|}{1-|z|}\right)^{\frac12}|h(z)|(1-|z|)^{\frac12}.$$ This shows that the nontangential limit of $f$ at $1$ is $0$. On the other hand, $\mathcal{H}(b_{1/2})$ contains nonzero constant functions, so $\mathcal{M}(a_{1/2})$ cannot be equal to $\mathcal{H}(b_{1/2})$. \end{rem} \begin{cor} If $n-1/2<\alpha <n+1/2$, $n\in\mathbb{N}$, and $f\in\mathcal{H}(b_{\alpha})$, then there is a function $h$ in $H^2$ such that $$f(z)=f(1)+f'(1)(z-1)+\ldots+\frac{f^{(n-1)}(1)}{(n-1)!}(z-1)^{n-1}+(1-z)^{\alpha}h(z).$$ \end{cor} \begin{proof} It follows from Proposition \ref{deri} that $f$ and its derivatives of order up to $n-1$ have nontangential limits at $1$, say $f(1),f'(1),\ldots,f^{(n-1)}(1)$. By Theorem \ref{mejn}(ii), $f$ can be written as $$f(z)=p_n(z)+(1-z)^{\alpha}h(z)=\sum_{k=0}^{n-1}a_k(z-1)^k+(1-z)^{\alpha}h(z),\quad h\in H^2.$$ Since every $h$ in $H^2$ satisfies $$|h^{(k)}(z)|\leq \frac{c_k}{(1-|z|)^{k+\frac12}},$$ we find that $$a_k=\frac{p_n^{(k)}(1)}{k!}=\frac{f^{(k)}(1)}{k!}\quad\text{for }k=0,1,\ldots,n-1.$$ \end{proof} The next theorem describes the space $\mathcal{H}(\tilde {b}_{\alpha})$, where $\tilde {b}_{\alpha}$ is an outer function from the unit ball of $H^{\infty}$ whose Pythagorean mate is $\left(\frac {1-z}2\right)^{\alpha}$, $\alpha>0$. \begin{thm} For $\alpha>0$ let $\tilde {a}_{\alpha}(z)=\left(\frac {1-z}2\right)^{\alpha}$ and let $\tilde {b}_{\alpha}$ be the outer function such that $(\tilde {b}_{\alpha},\tilde {a}_{\alpha})$ is a pair. Then $$ \mathcal{H}(\tilde {b}_{\alpha})=\mathcal{H}(b_{\alpha}).$$ \end{thm} \begin{proof} It is enough to show that $(\tilde {b}_{\alpha}, \tilde {a}_{\alpha})$ is a corona pair. The function $\tilde {a}_{\alpha}$ is continuous on $\overline{\mathbb D}$ and vanishes only at $1$. Since $|\tilde {b}_{\alpha}(1)|=\tilde {a}_{\alpha}(-1)=1,$ there exists $\delta>0$ such that $|\tilde {b}_{\alpha}(z)|>1/2\ $ on $D_1=\overline{\mathbb D}\cap\{z: |z-1|<\delta\}$ and $|\tilde {a}_{\alpha}(z)|>1/2$ on $ D_2=\overline{\mathbb D}\cap\{z: |z+1|<\delta\}$. Then the continuous function $|\tilde {b}_{\alpha}|^2+|\tilde {a}_{\alpha}|^2$ is positive on the compact set $\overline{\mathbb D}\setminus (D_1\cup D_2)$, so it is bounded from below there by a strictly positive number $\varepsilon>0$. Together with the estimates on $D_1$ and $D_2$, this shows that $|\tilde {b}_{\alpha}|^2+|\tilde {a}_{\alpha}|^2$ is bounded away from zero on the whole of $\overline{\mathbb D}$, i.e. $(\tilde {b}_{\alpha}, \tilde {a}_{\alpha})$ is a corona pair. \end{proof} \begin{rem} Since $\tfrac{1-z}2$ is the Pythagorean mate for $\tfrac{1+z}2$, we remark that it follows from \cite{ross} that for $\alpha>0$, $$\mathcal{H}\left(\left(\tfrac {1+z}2\right)^{\alpha}\right)=\mathcal{H}\left(\tfrac {1+z}2\right)=c+(1-z)H^2$$ as sets. \end{rem} Finally, we remark that if $u$ is a finite Blaschke product and $b_{\alpha}$ is given by \eqref{b}, then \begin{equation}\label{bal} \mathcal{H}(ub_{\alpha})=\mathcal{H}(b_{\alpha}). \end{equation} Since every function in $\mathcal{H}(u)$ is holomorphic in $\overline{\mathbb D}$ (see, e.g. \cite[Sec. 14.2]{fm}) and $\mathcal{H}(b_{\alpha})$ is invariant under multiplication by functions holomorphic in $\overline{\mathbb D}$ (see, e.g.
\cite[ (IV-6)]{sarason}), \eqref{bal} follows from the equality $$ \mathcal{H}(ub_{\alpha})=\mathcal{H}(u)+u\mathcal{H}(b_{\alpha}).$$ \begin{qe} Can one characterize all inner functions $u$ for which equality \eqref{bal} holds? \end{qe} \end{document}
\begin{document} \setcounter{page}{1} \title[The Refined Sobolev Scale, Interpolation, and Elliptic Problems] {\large The Refined Sobolev Scale,\\ Interpolation, and Elliptic Problems} \author[V.A. Mikhailets, A.A. Murach] {Vladimir A. Mikhailets and Aleksandr A. Murach} \address{Institute of Mathematics, National Academy of Sciences of Ukraine, 3, Tere\-shch\-en\-kiv\-ska Str, 01601 Kyiv-4, Ukraine.} \email{[email protected], [email protected]} \subjclass[2000]{Primary 46E35; Secondary 46B70, 35J30, 35J40.} \keywords{Sobolev scale, H\"ormander spaces, interpolation with function parameter, elliptic operator, elliptic boundary-value problem, Fredholm property, local regularity of solutions, spectral expansions, almost everywhere convergence.} \thanks{The authors were partly supported by grant no. 01-01-02 of National Academy of Sciences of Ukraine (under the joint Ukrainian--Russian project of NAS of Ukraine and Russian Foundation of Basic Research)} \begin{abstract} The paper gives a detailed survey of recent results on elliptic problems in Hilbert spaces of generalized smoothness. The latter are the isotropic H\"ormander spaces $H^{s,\varphi}:=B_{2,\mu}$, with $\mu(\xi)=\langle\xi\rangle^{s}\varphi(\langle\xi\rangle)$ for $\xi\in\mathbb{R}^{n}$. They are parametrized by both the real number $s$ and the positive function $\varphi$ varying slowly at $+\infty$ in the Karamata sense. These spaces form the refined Sobolev scale, which is much finer than the Sobolev scale $\{H^{s}\}\equiv\{H^{s,1}\}$ and is closed with respect to the interpolation with a function parameter. The Fredholm property of elliptic operators and elliptic boundary-value problems is preserved for this new scale. Theorems of various type about a solvability of elliptic problems are given. A~local refined smoothness is investigated for solutions to elliptic equations. New sufficient conditions for the solutions to have continuous derivatives are found. Some applications to the spectral theory of elliptic operators are given. \end{abstract} \maketitle \section{Introduction}\label{sec1} \noindent In the theory of partial differential equations, the questions concerning the existence, uniqueness, and regularity of solutions are in the focus of investigations. Note that the regularity properties are usually formulated in terms of the belonging of solutions to some standard classes of function spaces. Thus, the finer a used scale of spaces is calibrated, the sharper and more informative results will be. In contrast to the ordinary differential equations with smooth coefficients, the above questions are complicated enough. Indeed, some linear partial differential equations with smooth coefficients and right-hand sides are known to have no solutions in a neighbourhood of a given point, even in the class of distributions \cite{Lewy57}, \cite[Sec.~6.0 and 7.3]{Hermander63}, \cite[Sec.~13.3]{Hermander83}. Next, certain homogeneous equations (specifically, of elliptic type) with smooth but not analytic coefficients have nontrivial solutions supported on a compact set \cite{Plis54}, \cite[Theorem 13.6.15]{Hermander83}. Hence, the nontrivial null-space of this equation cannot be removed by any homogeneous boundary-value conditions; i.e., the operator of an arbitrary boundary-value problem is not injective. Finally, the question about the regularity of solutions is not simple either. For example, it is known \cite[Ch. 
4, Notes]{GilbargTrudinger98} that $$ \triangle u=f\in\,C(\Omega)\nRightarrow\,u\in C^{\,2}(\Omega), $$ with $\triangle$ being the Laplace operator, and $\Omega$ being an arbitrary Euclidean domain. These questions have been investigated most completely for the elliptic equations, systems, and boundary-value problems. This was done in the 1950s and 1960s by S.~Agmon, A.~Douglis, L.~Nirenberg, M.S.~Agranovich, A.C.~Dynin, Yu.M.~Berezansky, S.G.~Krein, Ya.A.~Roitberg, F.~Browder, L.~H\"ormander, J.-L.~Lions, E.~Magenes, M.~Schechter, L.N.~Slobodetsky, V.A.~Solonnikov, L.R.~Vo\-le\-vich and some others (see, e.g., M.S.~Agranovich's surveys \cite{Agranovich94, Agranovich97} and the references given therein). Note that the elliptic equations and problems have been investigated in the classical scales of H\"older spaces (of noninteger order) and Sobolev spaces (both of positive and negative orders). The fundamental result of the theory of elliptic equations consists in the fact that they generate bounded Fredholm operators (i.e., operators with finite index) between appropriate function spaces. For instance, let $Au=f$ be an elliptic linear differential equation of order $m$ given on a closed smooth manifold $\Gamma$. Then the operator $$ A:\,H^{s+m}(\Gamma)\rightarrow H^{s}(\Gamma),\quad s\in\mathbb{R}, $$ is bounded and Fredholm. Moreover, the finite-dimensional spaces formed by solutions to the homogeneous equations $Au=0$ and $A^{+}v=0$ both lie in $C^{\infty}(\Gamma)$. Here $A^{+}$ is the formally adjoint operator to $A$, whereas $H^{s+m}(\Gamma)$ and $H^{s}(\Gamma)$ are inner product Sobolev spaces over $\Gamma$ of the orders $s+m$ and $s$ respectively. It follows from this that the solution $u$ has an important regularity property on the Sobolev scale, namely \begin{equation}\label{eq1.1} (f\in H^{s}(\Gamma)\;\;\mbox{for some}\;\;s\in\mathbb{R})\;\Rightarrow\; u\in H^{s+m}(\Gamma). \end{equation} If the manifold has a boundary, then the Fredholm operator is generated by an elliptic boundary-value problem for the equation $Au=f$, specifically, by the Dirichlet problem. Some of these results were extended by H.~Triebel \cite{Triebel95, Triebel83} and by the second author of this survey \cite{94UMJ12, 94Dop12} to finer scales of function spaces, namely the Nikolsky--Besov, Zygmund, and Lizorkin--Triebel scales. The results mentioned above have various applications in the theory of differential equations, mathematical physics, and the spectral theory of differential operators; see M.S.~Agranovich's surveys \cite{Agranovich94, Agranovich97} and the references therein. As for applications, especially to the spectral theory, the case of Hilbert spaces is of the greatest interest. Until recently, the Sobolev scale had been the only scale of Hilbert spaces in which the properties of elliptic operators were investigated systematically. However, it turns out that this scale is not fine enough for a number of important problems. We will give two representative examples. The first of them concerns the smoothness properties of solutions to the elliptic equation $Au=f$ on the manifold $\Gamma$. According to Sobolev's Imbedding Theorem, we have \begin{equation}\label{eq1.2} H^{\sigma}(\Gamma)\subset C^{r}(\Gamma)\;\;\Leftrightarrow\;\;\sigma>r+n/2, \end{equation} where the integer $r\geq0$ and $n:=\dim\Gamma$. This result and property \eqref{eq1.1} allow us to investigate the classical regularity of the solution $u$.
Indeed, if $f\in H^{s}(\Gamma)$ for some $s>r-m+n/2$, then $u\in H^{s+m}(\Gamma)\subset C^{r}(\Gamma)$. However, this is not true for $s=r-m+n/2$; i.e., the Sobolev scale cannot be used to express unimprovable sufficient conditions for the solution $u$ to belong to the class $C^{r}(\Gamma)$. An analogous situation occurs in the theory of elliptic boundary-value problems too. The second demonstrative example is related to the spectral theory. Suppose that the differential operator $A$ is of order $m>0$, elliptic on $\Gamma$, and self-adjoint on the space $L_{2}(\Gamma)$. Given a function $f\in\nobreak L_{2}(\Gamma)$, consider the spectral expansion \begin{equation}\label{eq1.3} f=\sum_{j=1}^{\infty}\;c_{j}(f)\,h_{j}, \end{equation} where $(h_{j})_{j=1}^{\infty}$ is a complete orthonormal system of eigenfunctions of $A$, and $c_{j}(f)$ is the Fourier coefficient of $f$ with respect to $h_{j}$. The eigenfunctions are enumerated so that the absolute values of the corresponding eigenvalues form a (nonstrictly) increasing sequence. According to the Menshov--Rademacher theorem, which is valid for general orthonormal series as well, the expansion \eqref{eq1.3} converges almost everywhere on $\Gamma$ provided that \begin{equation}\label{eq1.4} \sum_{j=1}^{\infty}\,|c_{j}(f)|^{2}\,\log^{2}(j+1)<\infty. \end{equation} This hypothesis cannot be reformulated in an equivalent manner in terms of the belonging of $f$ to Sobolev spaces because $$ \|f\|_{H^{s}(\Gamma)}^{2}\,\asymp\,\sum_{j=1}^{\infty}\,|c_{j}(f)|^{2}\,j^{2s} $$ for every $s>0$. We may state only that the condition “$f\in H^{s}(\Gamma)$ for some $s>0$” implies the convergence of the series \eqref{eq1.3} almost everywhere on $\Gamma$. This condition does not adequately express the hypothesis \eqref{eq1.4} of the Menshov--Rademacher theorem. In 1963 L.~H\"ormander \cite[Sec. 2.2]{Hermander63} proposed a broad and informative generalization of the Sobolev spaces in the category of Hilbert spaces (also see \cite[Sec. 10.1]{Hermander83}). He introduced spaces that are parametrized by a general enough weight function, which serves as an analog of the differentiation order or smoothness index used for the Sobolev spaces. In particular, H\"ormander considered the following Hilbert spaces \begin{gather}\label{eq1.5} B_{2,\mu}(\mathbb{R}^{n}):= \bigl\{u:\,\mu\,\mathcal{F}u\in L_{2}(\mathbb{R}^{n})\bigr\}, \\ \|u\|_{B_{2,\mu}(\mathbb{R}^{n})}:=\|\mu\,\mathcal{F}u\|_{L_{2}(\mathbb{R}^{n})}. \notag \end{gather} Here $\mathcal{F}u$ is the Fourier transform of a tempered distribution $u$ given on $\mathbb{R}^{n}$, and $\mu$ is a weight function of $n$ arguments. In the case where $$ \mu(\xi)=\langle\xi\rangle^{s},\quad \langle\xi\rangle:=(1+|\xi|^{2})^{1/2},\quad\xi\in\mathbb{R}^{n},\quad s\in\mathbb{R}, $$ we have the Sobolev space $B_{2,\mu}(\mathbb{R}^{n})=H^{s}(\mathbb{R}^{n})$ of differentiation order $s$. The H\"ormander spaces occupy a central position among the spaces of generalized smoothness, in which the smoothness is characterized by a function parameter rather than a number. These spaces have been investigated intensively; a good deal of this work was done in the last decades.
We refer to G.A.~Kalyabin and P.I.~Lizorkin's survey \cite{KalyabinLizorkin87}, H.~Triebel's monograph \cite[Sec.~22]{Triebel01}, the recent papers by A.M.~Caetano and H.-G.~Leopold \cite{CaetanoLeopold06}, W.~Farkas, N.~Jacob, and R.L.~Schilling \cite{FarkasJacobScilling01b}, W.~Farkas and H.-G.~Leopold \cite{FarkasLeopold06}, P.~Gurka and B.~Opic \cite{GurkaOpic07}, D.D.~Haroske and S.D.~Moura \cite{HaroskeMoura04, HaroskeMoura08}, S.D.~Moura \cite{Moura01}, B.~Opic and W.~Trebels \cite{OpicTrebels00}, and references given therein. Various classes of spaces of generalized smoothness appear naturally in embedding theorems for function spaces, the theory of interpolation of function spaces, approximation theory, the theory of differential and pseudodifferential operators, and the theory of stochastic processes; see the monographs by D.D.~Haroske \cite{Haroske07}, N.~Jacob \cite{Jacob010205}, V.G.~Maz'ya and T.O.~Shaposhnikova \cite[Sec.~16]{MazyaShaposhnikova09}, F.~Nicola and L.~Rodino \cite{NicolaRodino10}, B.P.~Paneah \cite{Paneah00}, A.I.~Stepanets \cite[Ch.~I, \S~7]{Stepanets87}, \cite[Part~I, Ch.~3, Sec. 7.1]{Stepanets05}, and also the papers by F.~Cobos and D.L.~Fernandez \cite{CobosFernandez88}, C.~Merucci \cite{Merucci84}, M.~Schechter \cite{Schechter67} devoted to the interpolation of function spaces, and the papers by D.E.~Edmunds and H.~Triebel \cite{EdmundsTriebel98, EdmundsTriebel99}, V.A.~Mikhailets and V.M.~Molyboga \cite{MikhailetsMolyboga09, MikhailetsMolyboga11, MikhailetsMolyboga12} on the spectral theory of some elliptic operators appearing in mathematical physics. Already in 1963 L.~H\"ormander applied the spaces \eqref{eq1.5} and more general Banach spaces $B_{p,\mu}(\mathbb{R}^{n})$, with $1\leq p\leq\infty$, to an investigation of regularity properties of solutions to partial differential equations with constant coefficients and to some classes of equations with varying coefficients. However, in contrast to the Sobolev spaces, the H\"ormander spaces have not found broad application to general elliptic equations on manifolds and to elliptic boundary-value problems. This is due to the lack of a reasonable definition of the H\"ormander spaces on smooth manifolds (the definition should be independent of a choice of local charts covering the manifold) and to the absence of analytic tools suited to using these spaces effectively. Such a tool exists in the case of the Sobolev spaces; this is the interpolation of spaces. Namely, an arbitrary fractional order Sobolev space can be obtained by the interpolation of a certain couple of integer order Sobolev spaces. This fact essentially facilitates both the investigation of these spaces and the proofs of various theorems of the theory of elliptic equations because the boundedness and the Fredholm property (provided the defect is invariant) of linear operators are preserved under the interpolation. Therefore it seems reasonable to distinguish the H\"ormander spaces that are obtained by the interpolation (with a function parameter) of couples of Sobolev spaces; we will consider only inner product spaces. For this purpose we introduce the following class of isotropic spaces \begin{equation}\label{eq1.6} H^{s,\varphi}(\mathbb{R}^{n}):=B_{2,\mu}(\mathbb{R}^{n})\quad\mbox{for}\quad \mu(\xi)={\langle\xi\rangle^{s}\varphi(\langle\xi\rangle)}. \end{equation} Here the number parameter $s$ is real, whereas the positive function parameter $\varphi$ varies slowly at $+\infty$ in the Karamata sense \cite{BinghamGoldieTeugels89, Seneta76}.
(We may assume that $\varphi$ is constant outside of a neighbourhood of $+\infty$.) For example, $\varphi$ is admitted to be the logarithmic function, its arbitrary iteration, their real power, and a product of these functions. The class of spaces \eqref{eq1.6} contains the Sobolev Hilbert scale $\{H^{s}\}\equiv\{H^{s,1}\}$, is attached to it by the number parameter, but is calibrated much finer than the Sobolev scale. Indeed, $$ H^{s+\varepsilon}(\mathbb{R}^{n})\subset H^{s,\varphi}(\mathbb{R}^{n})\subset H^{s-\varepsilon}(\mathbb{R}^{n})\quad\mbox{for every}\quad\varepsilon>0. $$ Therefore the number parameter $s$ defines the main (power) smoothness, whereas the function parameter $\varphi$ determines an additional (subpower) smoothness on the class of spaces \eqref{eq1.6}. Specifically, if $\varphi(t)\rightarrow\infty$ (or $\varphi(t)\rightarrow0$) as $t\rightarrow\infty$, then $\varphi$ determines an additional positive (or negative) smoothness. Thus, the parameter $\varphi$ refines the main smoothness $s$. Therefore the class of spaces \eqref{eq1.6} is naturally called the refined Sobolev scale. This scale possesses the following important property: every space $H^{s,\varphi}(\mathbb{R}^{n})$ is a result of the interpolation, with an appropriate function parameter, of the couple of Sobolev spaces $H^{s-\varepsilon}(\mathbb{R}^{n})$ and $H^{s+\delta}(\mathbb{R}^{n})$, with $\varepsilon,\delta>0$. The parameter of the interpolation is a function that varies regularly (in the Karamata sense) of index $\theta\in(0,\,1)$ at $+\infty$; namely $\theta:=\varepsilon/(\varepsilon+\delta)$. Moreover, the refined Sobolev scale proves to be closed with respect to this interpolation. Thus, every H\"ormander space $H^{s,\varphi}(\mathbb{R}^{n})$ possesses the interpolation property with respect to the Sobolev Hilbert scale. This means that each linear operator bounded on both the spaces $H^{s-\varepsilon}(\mathbb{R}^{n})$ and $H^{s+\delta}(\mathbb{R}^{n})$ is also bounded on $H^{s,\varphi}(\mathbb{R}^{n})$. The interpolation property plays a decisive role here; namely, it permits us to establish some important properties of the refined Sobolev scale. They enable this scale to be applied in the theory of elliptic equations. Thus, we can prove with the help of the interpolation that each space $H^{s,\varphi}(\mathbb{R}^{n})$, as the Sobolev spaces, is invariant with respect to diffeomorphic transformations of $\mathbb{R}^{n}$. This permits the space $H^{s,\varphi}(\Gamma)$ to be well defined over a smooth closed manifold $\Gamma$ because the set of distributions and the topology in this space does not depend on a choice of local charts covering~$\Gamma$. The spaces $H^{s,\varphi}(\mathbb{R}^{n})$ and $H^{s,\varphi}(\Gamma)$ are useful in the theory of elliptic operators on manifolds and in the theory of elliptic boundary-value problems; these spaces are present implicitly in a number of problems appearing in calculus. Let us dwell on some results that demonstrate advantages of the introduced scale as compared with the Sobolev scale. These results deal with the examples considered above. As before, let $A$ be an elliptic differential operator given on $\Gamma$, with $m:=\mathrm{ord}\,A$. Then $A$ sets the bounded and Fredholm operators $$ A:\,H^{s+m,\varphi}(\Gamma)\rightarrow H^{s,\varphi}(\Gamma)\quad\mbox{for all}\quad s\in\mathbb{R},\;\;\varphi\in\mathcal{M}. $$ Here $\mathcal{M}$ is the class of slowly varying function parameters $\varphi$ used in \eqref{eq1.6}. 
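For instance, taking for $A$ the Beltrami--Laplace operator $\Delta_{\Gamma}$ of a Riemannian metric on $\Gamma$ (an elliptic differential operator of order $m=2$), we obtain from this statement the bounded Fredholm operators $$ \Delta_{\Gamma}:\,H^{s+2,\varphi}(\Gamma)\rightarrow H^{s,\varphi}(\Gamma)\quad\mbox{for all}\quad s\in\mathbb{R},\;\;\varphi\in\mathcal{M}. $$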
Note that the differential operator $A$ leaves invariant the function parameter $\varphi$, which refines the main smoothness $s$. Besides, we have the following regularity property of a solution to the elliptic equation $Au=f$: $$ (f\in H^{s,\varphi}(\Gamma)\;\;\mbox{for some}\;\;s\in\mathbb{R},\;\varphi\in\mathcal{M})\;\Rightarrow\; u\in H^{s+m,\varphi}(\Gamma). $$ For the refined Sobolev scale, we have the following sharpening of Sobolev's Imbedding Theorem: given an integer $r\geq0$ and a function $\varphi\in\mathcal{M}$, the embedding $H^{r+n/2,\varphi}(\Gamma)\subset C^{r}(\Gamma)$ is equivalent to the condition \begin{equation}\label{eq1.7} \int\limits_{1}^{\infty}\frac{dt}{t\,\varphi^{2}(t)}<\infty. \end{equation} Therefore, if $f\in H^{r-m+n/2,\varphi}(\Gamma)$ for some parameter $\varphi\in\nobreak\mathcal{M}$ satisfying \eqref{eq1.7}, then the solution $u\in C^{r}(\Gamma)$. Similar results are also valid for elliptic systems and elliptic boundary-value problems. Now let us pass to the analysis of the convergence of the spectral expansion \eqref{eq1.3}. We additionally suppose that the operator $A$ is of order $m>0$ and is unbounded and self-adjoint on the space $L_{2}(\Gamma)$. Condition \eqref{eq1.4} for the convergence of \eqref{eq1.3} almost everywhere on $\Gamma$ is equivalent to the inclusion $$ f\in H^{0,\varphi}(\Gamma),\quad\mbox{with}\quad\varphi(t):=\max\{1,\log t\}. $$ The latter is much wider than the condition “$f\in H^{s}(\Gamma)$ for some $s>\nobreak0$”. We can also similarly represent conditions for unconditional convergence almost everywhere or convergence in the H\"older space $C^{r}(\Gamma)$, with integer $r\geq0$. The above and some other results show that the refined Sobolev scale is helpful and convenient. This scale can be used in various topics of modern analysis as well; see, e.g., the articles by M.~Hegland \cite{Hegland95, Hegland10}, P.~Math\'e and U.~Tautenhahn \cite{MatheTautenhahn06}. This paper is a detailed survey of our recent articles [78--94, 101--108], which are summed up in the monograph \cite{MikhailetsMurach10} published in Russian in 2010. In them, we have built a theory of general elliptic (both scalar and matrix) operators and elliptic boundary-value problems on the refined Sobolev scales of function spaces. Let us describe the survey contents in greater detail. The paper consists of 13 sections. Section~\ref{sec1} is the present Introduction. Section~\ref{sec2} is preliminary and contains the necessary information about regularly varying functions and about the interpolation with a function parameter. Here we single out the important Theorem \ref{th2.5}, which gives a description of all interpolation parameters for the category of separable Hilbert spaces. In Section~\ref{sec3}, we consider the H\"ormander spaces, give a definition of the refined Sobolev scale, and study its properties. Among them, we especially note the interpolation properties of this scale, formulated as Theorems \ref{th3.4} and \ref{th3.5}. They are very important for applications. Section~\ref{sec4} deals with uniformly elliptic pseudodifferential operators that are studied on the refined Sobolev scale over $\mathbb{R}^{n}$. We get an a priori estimate for a solution of the elliptic equation and investigate the interior smoothness of the solution. As an application, we obtain a sufficient condition for the existence of continuous bounded derivatives of the solution.
Next, in Section~\ref{sec5}, we define a class of H\"ormander spaces, the refined Sobolev scale, over a smooth closed manifold. We give three equivalent definitions of these spaces: local (in terms of local properties of distributions), interpolational (by means of the interpolation of Sobolev spaces with an appropriate function parameter), and operational (via the completion of the set of infinitely smooth functions with respect to the norm generated by a certain function of the Beltrami--Laplace operator). These definitions are similar to those used for the Sobolev spaces. We study properties of the refined Sobolev scale over the closed manifold. Important applications of these results are given in Sections \ref{sec6} and \ref{sec7}. Section~\ref{sec6} deals with elliptic pseudodifferential operators on a closed manifold. We show that they are Fredholm (i.e. have a finite index) on appropriate couples of H\"ormander spaces. As in Section~\ref{sec4}, a priori estimates for solutions of the elliptic equations are obtained, and the regularity of the solutions is investigated. Using elliptic operators, we give equivalent norms on H\"ormander spaces over the manifold. In Section~\ref{sec7}, we investigate the convergence of spectral expansions corresponding to elliptic normal operators given on the closed manifold. We find sufficient conditions for the following types of convergence: almost everywhere, unconditionally almost everywhere, and in the space $C^{k}$, with integer $k\geq0$. These conditions are formulated in constructive terms of the convergence on some function classes, which are H\"ormander spaces. Section~\ref{sec8} deals with the classes of H\"ormander spaces that relate to the refined Sobolev scale and are given over open or closed Euclidean domains. For these classes, we study interpolation properties, embeddings, traces, and riggings of the space of square integrable functions with H\"ormander spaces. The results of this section are applied in the subsequent Sections \ref{sec9}--\ref{sec12}, where a regular elliptic boundary-value problem is investigated in appropriate H\"ormander spaces. In Section~\ref{sec9}, this problem is studied on the one-sided refined Sobolev scale. We show that the problem generates a Fredholm operator on this scale. We investigate some properties of the problem; namely, a priori estimates for solutions and local regularity are given. Moreover, a sufficient condition for the weak solution to be classical is found in terms of H\"ormander spaces. Section~\ref{sec10} deals with semihomogeneous elliptic boundary-value problems. They are considered on H\"ormander spaces which form an appropriate two-sided refined Sobolev scale. We show that the operator corresponding to the problem is bounded and Fredholm on this scale. In Sections \ref{sec11}--\ref{sec12}, we give various theorems on the solvability of nonhomogeneous regular elliptic boundary-value problems in H\"ormander spaces of an arbitrary real main smoothness. Developing the methods suggested by Ya.A.~Roitberg \cite{Roitberg96} and J.-L.~Lions, E.~Magenes \cite{LionsMagenes72}, we establish a certain generic theorem and a wide class of individual theorems on the solvability. The generic theorem is distinguished by the fact that the domain of the elliptic operator does not depend on the coefficients of the elliptic equation and is common for all boundary-value problems of the same order.
Conversely, the individual theorems are characterized by that the domain depends essentially on the coefficients, even of the lower order derivatives. In Section~\ref{sec11}, we elaborate on Roitberg's approach in connection with H\"ormander spaces and then deduce the generic theorem about the solvability of elliptic boundary-value problems on the two-sided refined Sobolev scale modified in the Roitberg sense. Section~\ref{sec12} is devoted to J.-L.~Lions and E.~Magenes' approach, which we develop for various Hilbert scales consisting of Sobolev or H\"ormander spaces. For the space of right-hand sides of an elliptic equation, we find a sufficiently general condition under which the operator of the problem is bounded and Fredholm (see key Theorems \ref{th12.1} and \ref{th12.4}). As a consequence, we obtain new various individual theorems on the solvability of elliptic boundary-value problems considered in Sobolev or H\"ormander spaces, both nonweighted and weighted. In final Section~\ref{sec13}, we indicate application of H\"ormander spaces to other important classes of elliptic problems. They are nonregular boundary-value problems, parameter-elliptic problems, certain mixed elliptic problems, elliptic systems and corresponding boundary-value problems. It is necessary to note that some results given in the survey are new even for the Sobolev spaces. These results are Theorem \ref{th10.1} in the case of half-integer $s$ and individual Theorems \ref{th12.1}, \ref{th12.2}, and \ref{th12.3}. In addition, note that we have also investigated a certain class of H\"ormander spaces, which is wider than the refined Sobolev scale. Interpolation properties of this class are studied and then applied to elliptic operators \cite{08Collection1, 09Dop3, MikhailetsMurach10, arXiv:1106.2049, 09UMJ3, arXiv:1202.6156}. It is remarkable that this class consists of all the Hilbert spaces which possess the interpolation property with respect to the Sobolev Hilbert scale. These results fall beyond the limits of our survey. \section{Preliminaries}\label{sec2} In this section we recall some important results concerning the regularly varying functions and the interpolation with a function parameter of couples of Hilbert spaces. These results will be necessary for us in the sequel. \subsection{Regularly varying functions}\label{sec2.1} We recall the following notion. \begin{definition}\label{def2.1} A positive function $\psi$ defined on a semiaxis $[b,+\infty)$ is said to be regularly varying of index $\theta\in\mathbb{R}$ at $+\infty$ if $\psi$ is Borel measurable on $[b_{0},+\infty)$ for some number $b_{0}\geq b$ and $$ \lim_{t\rightarrow+\infty}\;\frac{\psi(\lambda\,t)}{\psi(t)}= \lambda^{\theta}\quad\mbox{for each}\quad \lambda>0. $$ A function regularly varying of the index $\theta=0$ at $+\infty$ is called slowly varying at $+\infty$. \end{definition} The theory of regularly varying functions was founded by Jovan Karamata \cite{Karamata30a, Karamata30b, Karamata33} in the 1930s. These functions are closely related to the power functions and have numerous applications, mainly due to their special role in Tauberian-type theorems (see the monographs \cite{BinghamGoldieTeugels89, GelukHaan87, Haan70, Maric00, Resnick87, Seneta76} and references therein). 
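For instance, the function $\psi(t):=t^{\theta}\log t$ (for $t\geq2$) is regularly varying of index $\theta$ at $+\infty$, since for each $\lambda>0$ $$ \frac{\psi(\lambda t)}{\psi(t)}=\lambda^{\theta}\,\frac{\log\lambda+\log t}{\log t}\rightarrow\lambda^{\theta} \quad\mbox{as}\quad t\rightarrow+\infty. $$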
\begin{example}\label{ex2.1} The well-known standard case of functions regularly varying of the index $\theta$ at $+\infty$ is \begin{equation}\label{eq2.1} \psi(t):=t^{\theta}\,(\log t)^{r_{1}}\,(\log\log t)^{r_{2}} \ldots (\log\ldots\log t)^{r_{k}}\quad\mbox{for}\quad t\gg1 \end{equation} with arbitrary parameters $k\in\mathbb{Z}_{+}$ and $r_{1}, r_{2},\ldots,r_{k}\in\mathbb{R}$. In the case where $\theta=0$ these functions form the logarithmic multiscale, which has a number of applications in the theory of function spaces. \end{example} We denote by $\mathrm{SV}$ the set of all functions slowly varying at $+\infty$. It is evident that $\psi$ is a function regularly varying at $+\infty$ of index $\theta$ if and only if $\psi(t)=t^{\theta}\varphi(t)$, $t\gg1$, for some function $\varphi\in\mathrm{SV}$. Thus, the investigation of regularly varying functions is reduced to the case of slowly varying functions. The study and application of regularly varying functions are based on two fundamental theorems: the Uniform Convergence Theorem and Representation Theorem. They were proved by Karamata \cite{Karamata30a} in the case of continuous functions and, in general, by a number of authors later (see the monographs cited above). \begin{theorem}[Uniform Convergence Theorem]\label{th2.1} Let $\varphi\in\mathrm{SV}$; then $\varphi(\lambda t)/\varphi(t)\rightarrow\nobreak1$ as $t\rightarrow+\infty$ uniformly on each compact $\lambda$-set in $(0,\infty)$. \end{theorem} \begin{theorem}[Representation Theorem]\label{th2.2} A function $\varphi$ belongs to $\mathrm{SV}$ if and only if it can be written in the form \begin{equation}\label{eq2.2} \varphi(t)=\exp\Biggl(\beta(t)+\int\limits_{b}^{\:t}\frac{\alpha(\tau)}{\tau}\,d\tau\Biggr), \quad t\geq b, \end{equation} for some number $b>0$, continuous function $\alpha:[b,\infty)\rightarrow\mathbb{R}$ approaching zero at $\infty$, and Borel measurable bounded function $\beta:\nobreak[b,\infty)\rightarrow\mathbb{R}$ that has the finite limit at $\infty$. \end{theorem} The Representation Theorem implies the following sufficient condition for a function to be slowly varying at infinity \cite[Sec. 1.2]{Seneta76}. \begin{theorem}\label{th2.3} Suppose that a function $\varphi:(b,\infty)\rightarrow(0,\infty)$ has a continuous derivative and satisfies the condition $t\varphi\,'(t)/\varphi(t)\rightarrow0$ as $t\rightarrow \infty$. Then $\varphi\in\mathrm{SV}$. \end{theorem} Using Theorem~\ref{th2.3} one can give many interesting examples of slowly varying functions. Among them we mention the following. \begin{example}\label{ex2.2} Let $\varphi(t):=\exp\psi(t)$, with $\psi$ being defined according to \eqref{eq2.1}, where $\theta=0$ and $r_{1}<1$. Then $\varphi\in\mathrm{SV}$. \end{example} \begin{example}\label{ex2.3} Let $\alpha,\beta,\gamma\in\mathbb{R}$, $\beta\neq0$, and $0<\gamma<1$. We set $\omega(t):=\alpha+\beta\sin\,\log^{\gamma}t$ and $\varphi(t):=(\log t)^{\omega(t)}$ for $t>1$. Then $\varphi\in\mathrm{SV}$. \end{example} \begin{example}\label{ex2.4} Let $\alpha,\beta,\gamma\in\mathbb{R}$, $\alpha\neq0$, $0<\gamma<\beta<1$, and $$ \varphi(t):=\exp(\alpha(\log t)^{1-\beta}\,\sin\log^{\gamma}t)\quad\mbox{for} \quad t>1. $$ Then $\varphi\in\mathrm{SV}$. \end{example} The last two examples show that a function $\varphi$ varying slowly at $+\infty$ may exhibit infinite oscillation, that is $$ \liminf_{t\rightarrow+\infty}\,\varphi(t)=0\quad\mbox{and}\quad \limsup_{t\rightarrow+\infty}\,\varphi(t)=+\infty. 
$$ We will use regularly varying functions as parameters when we define certain Hilbert spaces. If the function parameters are equivalent in a neighbourhood of $+\infty$, we get the same space up to equivalence of norms. Therefore it is useful to introduce the following notion \cite[p.~90]{08MFAT1}. \begin{definition}\label{def2.2} We say that a positive function $\psi$ defined on a semiaxis $[b,+\infty)$ is quasiregularly varying of index $\theta\in\mathbb{R}$ at $+\infty$ if there exist a number $b_{1}\geq b$ and a function $\psi_{1}:[b_{1},+\infty)\rightarrow (0,+\infty)$ regularly varying of the same index $\theta\in\mathbb{R}$ at $+\infty$ such that $\psi\asymp\psi_{1}$ on $[b_{1},+\infty)$. A function quasiregularly varying of the index $\theta=0$ at $+\infty$ is called quasislowly varying at $+\infty$. \end{definition} As usual, the notation $\psi\asymp\psi_{1}$ on $[b_{1},+\infty)$ means that the functions $\psi$ and $\psi_{1}$ are equivalent there, that is both the functions $\psi/\psi_{1}$ and $\psi_{1}/\psi$ are bounded on $[b_{1},+\infty)$. We denote by $\mathrm{QSV}$ the set of all functions varying quasislowly at $+\infty$. It is evident that $\psi$ is quasiregularly varying of the index $\theta$ at $+\infty$ if and only if $\psi(t)=t^{\theta}\varphi(t)$, $t\gg1$, for some function $\varphi\in\mathrm{QSV}$. We note the following properties of the class $\mathrm{QSV}$. \begin{theorem}\label{th2.4} Let $\varphi,\chi\in\mathrm{QSV}$. The following assertions are true: \begin{enumerate} \item[i)] There is a function $\varphi_{1}\in C^{\infty}((0;+\infty))\cap\mathrm{SV}$ such that $\varphi\asymp\varphi_{1}$ in a neighbourhood of $+\infty$. \item[ii)] If $\theta>0$, then both $t^{-\theta}\varphi(t)\rightarrow0$ and $t^{\theta}\varphi(t)\rightarrow+\infty$ as $t\rightarrow+\infty$. \item[iii)] All the functions $\varphi+\chi$, $\varphi\,\chi$, $\varphi/\chi$ and $\varphi^{\sigma}$, with $\sigma\in\mathbb{R}$, belong to $\mathrm{QSV}$. \item[iv)] Let $\theta\geq0$, and in the case where $\theta=0$ suppose that $\varphi(t)\rightarrow+\infty$ as $t\rightarrow+\infty$. Then the composite function $\chi(t^{\theta}\varphi(t))$ of $t$ belongs to $\mathrm{QSV}$. \end{enumerate} \end{theorem} Theorem~\ref{th2.4} is known for slowly varying functions, even with the strong equivalence $\varphi(t)\sim\varphi_{1}(t)$ as $t\rightarrow+\infty$ in assertion i); see, e.g., \cite[Sec. 1.3]{BinghamGoldieTeugels89} and \cite[Sec. 1.5]{Seneta76}. This implies the case where $\varphi,\chi\in\mathrm{QSV}$ \cite[p.~91]{08MFAT1}. \subsection{The interpolation with a function parameter of Hilbert spaces}\label{sec2.2} This interpolation is a natural generalization of the classical interpolation method by J.-L.~Lions and S.G.~Krein (see, e.g., \cite[Ch.~IV, \S~9]{FunctionalAnalysis72} and \cite[Ch.~1, Sec. 2 and 5]{LionsMagenes72}) to the case when a general enough function is used as an interpolation parameter instead of a number parameter. The generalization appeared in the paper by C.~Foia\c{s} and J.-L.~Lions \cite[p.~278]{FoiasLions61} and then was studied by W.F.~Donoghue \cite{Donoghue67}, E.I.~Pustyl`nik \cite{Pustylnik82}, V.I.~Ovchinnikov \cite[Sec. 11.4]{Ovchinnikov84}, and the authors \cite{08MFAT1}. We recall the definition of this interpolation. For our purposes, it is sufficient to restrict ourselves to the case of separable Hilbert spaces.
Let an ordered couple $X:=[X_{0},X_{1}]$ of complex Hilbert spaces $X_{0}$ and $X_{1}$ be such that these spaces are separable and that the continuous dense embedding $X_{1}\hookrightarrow X_{0}$ holds true. We call this couple admissible. For the couple $X$ there exists an isometric isomorphism $J:X_{1}\leftrightarrow X_{0}$ such that $J$ is a self-adjoint positive operator on the space $X_{0}$ with the domain $X_{1}$ (see \cite[Ch.~1, Sec. 2.1]{LionsMagenes72} and \cite[Ch.~IV, Sec. 9.1]{FunctionalAnalysis72}). The operator $J$ is said to be generating for the couple $X$ and is uniquely determined by $X$. We denote by $\mathcal{B}$ the set of all functions $\psi:(0,\infty)\rightarrow(0,\infty)$ such that: \begin{enumerate} \item[a)] $\psi$ is Borel measurable on the semiaxis $(0,+\infty)$; \item[b)] $\psi$ is bounded on each compact interval $[a,b]$ with $0<a<b<+\infty$; \item[c)] $1/\psi$ is bounded on each set $[r,+\infty)$ with $r>0$. \end{enumerate} Let $\psi\in\mathcal{B}$. The operator $\psi(J)$, which is generally unbounded, is defined in the space $X_{0}$ as a function of $J$. We denote by $[X_{0},X_{1}]_{\psi}$ or simply by $X_{\psi}$ the domain of the operator $\psi(J)$ endowed with the inner product $(u,v)_{X_{\psi}}:=(\psi(J)u,\psi(J)v)_{X_{0}}$ and the corresponding norm $\|u\|_{X_{\psi}}:=(u,u)_{X_{\psi}}^{1/2}$. The space $X_{\psi}$ is Hilbert and separable. \begin{definition}\label{def2.3} We say that a function $\psi\in\mathcal{B}$ is an interpolation parameter if the following property is fulfilled for all admissible couples $X=[X_{0},X_{1}]$, $Y=[Y_{0},Y_{1}]$ of Hilbert spaces and an arbitrary linear mapping $T$ given on $X_{0}$. If the restriction of the mapping $T$ to the space $X_{j}$ is a bounded operator $T:X_{j}\rightarrow Y_{j}$ for each $j=0,\,1$, then the restriction of the mapping $T$ to the space $X_{\psi}$ is also a bounded operator $T:X_{\psi}\rightarrow Y_{\psi}$. \end{definition} In other words, $\psi$ is an interpolation parameter if and only if the mapping $X\mapsto X_{\psi}$ is an interpolation functor given on the category of all admissible couples $X$ of Hilbert spaces. (For the notion of interpolation functor, see, e.g., \cite[Sec. 2.4]{BerghLefstrem76} and \cite[Sec. 1.2.2]{Triebel95}.) In the case where $\psi$ is an interpolation parameter, we say that the space $X_{\psi}$ is obtained by the interpolation with the function parameter $\psi$ of the admissible couple $X$. Then the continuous dense embeddings $X_{1}\hookrightarrow X_{\psi}\hookrightarrow X_{0}$ are fulfilled. The classical result by J.-L. Lions and S.G.~Krein consists in the fact that the power function $\psi(t):=t^{\theta}$ is an interpolation parameter whenever $0<\theta<1$; see \cite[Ch.~IV, \S~9, Sec.~3]{FunctionalAnalysis72} and \cite[Ch.~1, Sec. 5.1]{LionsMagenes72}. We have the following criterion for a function to be an interpolation parameter. \begin{theorem}\label{th2.5} A function $\psi\in\mathcal{B}$ is an interpolation parameter if and only if $\psi$ is pseudoconcave in a neighbourhood of $+\infty$, i.e. $\psi\asymp\psi_{1}$ for some concave positive function $\psi_{1}$. \end{theorem} This theorem follows from Peetre's results \cite{Peetre68} on interpolation functions (see also the monograph \cite[Sec. 5.4]{BerghLefstrem76}). The corresponding proof is given in \cite[Sec. 2.7]{08MFAT1}. The following consequence of Theorem \ref{th2.5} is important for us.
\begin{corollary}\label{cor2.1} Suppose a function $\psi\in\mathcal{B}$ to be quasiregularly varying of index $\theta$ at $+\infty$, with $0<\theta<1$. Then $\psi$ is an interpolation parameter. \end{corollary} The direct proof of this assertion is given in \cite[Sec. 2]{06UMJ2}. \section{H\"ormander spaces}\label{sec3} In 1963 Lars H\"ormander \cite[Sec. 2.2]{Hermander63} introduced the spaces $B_{p,\mu}(\mathbb{R}^{n})$, which consist of distributions in $\mathbb{R}^{n}$ and are parametrized by a number $p\in[1,\infty]$ and a general enough weight function $\mu$ of argument $\xi\in\mathbb{R}^{n}$; see also \cite[Sec. 10.1]{Hermander83}. The number parameter $p$ characterizes integrability properties of the distributions, whereas the function parameter $\mu$ describes their smoothness properties. In this section, we recall the definition of the spaces $B_{p,\mu}(\mathbb{R}^{n})$, some of their properties, and an application to constant-coefficient partial differential equations. Further, we consider the important case where the H\"ormander space $B_{p,\mu}(\mathbb{R}^{n})$ is Hilbert, i.e. $p=2$, and $\mu$ is a quasiregularly varying function of $(1+|\xi|^{2})^{1/2}$ at infinity. \subsection{The spaces $B_{p,\mu}(\mathbb{R}^{n})$}\label{sec3.1} Let $n\geq1$ be an integer and let $p\in[1,\infty]$. We use the following conventional notation, where $\Omega$ is a nonempty open set in $\mathbb{R}^{n}$, in particular $\Omega=\mathbb{R}^{n}$: \begin{enumerate} \item [a)] $L_{p}(\Omega):=L_{p}(\Omega,d\xi)$ is the Banach space of complex-valued functions $f(\xi)$ of $\xi\in\Omega$ such that $|f|^{p}$ is integrable over $\Omega$ (if $p=\infty$, then $f$ is essentially bounded in $\Omega$); \item [b)] $C^{k}_{\mathrm{b}}(\Omega)$ is the Banach space of functions $u:\Omega\rightarrow\nobreak\mathbb{C}$ having continuous and bounded derivatives of order $\leq k$ on $\Omega$; \item [c)] $C^{\infty}_{0}(\Omega)$ is the linear topological space of infinitely differentiable functions $u:\mathbb{R}^{n}\rightarrow\mathbb{C}$ such that their supports are compact and belong to $\Omega$; we will identify functions from $C^{\infty}_{0}(\Omega)$ with their restrictions to $\Omega$; \item [d)] $\mathcal{D}'(\Omega)$ is the linear topological space of all distributions given in $\Omega$; we always suppose that distributions are antilinear complex-valued functionals; \item [e)] $\mathcal{S}'(\mathbb{R}^{n})$ is the linear topological Schwartz space of tempered distributions given in $\mathbb{R}^{n}$; \item [f)] $\widehat{u}:=\mathcal{F}u$ is the Fourier transform of a distribution $u\in\mathcal{S}'(\mathbb{R}^{n})$; $\mathcal{F}^{-1}f$ is the inverse Fourier transform of $f\in\mathcal{S}'(\mathbb{R}^{n})$; \item [g)] $\langle\xi\rangle:=(1+|\xi|^{2})^{1/2}$ is a smoothed modulus of $\xi\in\mathbb{R}^{n}$. \end{enumerate} Suppose a continuous function $\mu:\mathbb{R}^{n}\rightarrow(0,\infty)$ to be such that, for some numbers $c\geq1$ and $l>0$, we have \begin{equation}\label{eq3.1} \frac{\mu(\xi)}{\mu(\eta)}\leq c\,(1+|\xi-\eta|)^{l}\quad\mbox{for all}\quad\xi,\eta\in\mathbb{R}^{n}. \end{equation} The function $\mu$ is called a weight function. \begin{definition}\label{def3.1} The H\"ormander space $B_{p,\mu}(\mathbb{R}^{n})$ is the linear space of all the distributions $u\in\mathcal{S}'(\mathbb{R}^{n})$ such that the Fourier transform $\widehat{u}$ is locally Lebesgue integrable on $\mathbb{R}^{n}$ and, moreover, $\mu\,\widehat{u}\in L_{p}(\mathbb{R}^{n})$.
The space $B_{p,\mu}(\mathbb{R}^{n})$ is endowed with the norm $\|u\|_{B_{p,\mu}(\mathbb{R}^{n})}:=\|\mu\,\widehat{u}\|_{L_{p}(\mathbb{R}^{n})}$. \end{definition} The space $B_{p,\mu}(\mathbb{R}^{n})$ is complete and continuously embedded in $\mathcal{S}'(\mathbb{R}^{n})$. If $1\leq p<\infty$, then this space is separable, and the set $C^{\infty}_{0}(\mathbb{R}^{n})$ is dense in it \cite[Sec. 2.2]{Hermander63}. Of special interest is the $p=2$ case, when $B_{p,\mu}(\mathbb{R}^{n})$ becomes a Hilbert space. \begin{remark}\label{rem3.1} H\"ormander initially assumes that $\mu$ satisfies a stronger condition than \eqref{eq3.1}; namely, there exist some positive numbers $c$ and $l$ such that \begin{equation}\label{eq3.2} \frac{\mu(\xi)}{\mu(\eta)}\leq(1+c\,|\xi-\eta|)^{l}\quad\mbox{for all}\quad\xi,\eta\in\mathbb{R}^{n}. \end{equation} But he notices that the two sets of functions satisfying \eqref{eq3.1} or \eqref{eq3.2}, respectively, lead to the same class of spaces $B_{p,\mu}(\mathbb{R}^{n})$ \cite[the remark at the end of Sec. 2.1]{Hermander63}. \end{remark} The term `H\"ormander space' was suggested by H.~Triebel in \cite[Sec. 4.11.4]{Triebel95}. The following theorem of H\"ormander establishes an important relation between the spaces $B_{p,\mu}(\mathbb{R}^{n})$ and $C^{k}_{\mathrm{b}}(\mathbb{R}^{n})$ \cite[Sec. 2.2, Theorem 2.2.7]{Hermander63}. \begin{theorem}[H\"ormander's Embedding Theorem]\label{th3.1} Let $p,q\in[1,\infty]$ with $1/p+1/q=\nobreak1$, and let $k\geq0$ be an integer. Then the condition \begin{equation}\label{eq3.3} \langle\xi\rangle^{k}\,\mu^{-1}(\xi)\in L_{q}(\mathbb{R}^{n},d\xi) \end{equation} entails the continuous embedding $B_{p,\mu}(\mathbb{R}^{n})\hookrightarrow C^{k}_{\mathrm{b}}(\mathbb{R}^{n})$. Conversely, if $$ \{u\in B_{p,\mu}(\mathbb{R}^{n}):\mathrm{supp}\,u\subset V\}\subset C^{k}(\mathbb{R}^{n}) $$ for some nonempty open set $V\subseteq\mathbb{R}^{n}$, then \eqref{eq3.3} is valid. \end{theorem} The spaces $B_{p,\mu}(\mathbb{R}^{n})$ were applied by H\"ormander to the investigation of regularity properties of solutions to some partial differential equations (see \cite[Ch.~IV, VII]{Hermander63} and \cite[Ch.~11, 13]{Hermander83}). We state one of his results relating to elliptic equations \cite[Sec. 7.4]{Hermander63}. Let $\Omega$ be a nonempty open set in $\mathbb{R}^{n}$. In $\Omega$, consider a partial differential equation $P(x,D)u=f$ of order $r$ with coefficients belonging to $C^{\infty}(\Omega)$. Introduce the local H\"ormander space over $\Omega$: $$ B_{p,\mu}^{\mathrm{loc}}(\Omega):=\{f\in\mathcal{D}'(\Omega):\,\chi f\in B_{p,\mu}(\Omega)\;\;\forall\;\;\chi\in C^{\infty}_{0}(\Omega)\}. $$ Here $B_{p,\mu}(\Omega)$ is the space of restrictions of all the distributions $u\in B_{p,\mu}(\mathbb{R}^{n})$ to~$\Omega$. \begin{theorem}[H\"ormander's Regularity Theorem]\label{th3.2} Let the operator $P(x,D)$ be elliptic in $\Omega$, and $u\in\mathcal{D}'(\Omega)$. If $P(x,D)u\in B_{p,\mu}^{\mathrm{loc}}(\Omega)$ for some $p\in[1,\infty]$ and weight function $\mu$, then $u\in B_{p,\mu_{r}}^{\mathrm{loc}}(\Omega)$ with $\mu_{r}(\xi):=\langle\xi\rangle^{r}\mu(\xi)$. \end{theorem} For applications of the spaces $B_{p,\mu}(\mathbb{R}^{n})$, the Hilbert case $p=2$ is the most interesting. This case was investigated by B.~Malgrange \cite{Malgrange57} and L.R.~Volevich, B.P.~Paneah \cite{VolevichPaneah65} (see also Paneah's monograph \cite[Sec. 1.4]{Paneah00}).
Specifically, if $\mu(\xi)=\langle\xi\rangle^{s}$ for all $\xi\in\mathbb{R}^{n}$ with some $s\in\mathbb{R}$, then $B_{2,\mu}(\mathbb{R}^{n})$ becomes the Sobolev inner product space $H^{s}(\mathbb{R}^{n})$ of order $s$. In what follows we will consider the isotropic H\"ormander inner product spaces $B_{2,\mu}(\mathbb{R}^{n})$, with $\mu(\xi)$ being a radial function, i.e. depending only on $\langle\xi\rangle$. \subsection{The refined Sobolev scale}\label{sec3.2} It is useful to have a class of H\"ormander inner product spaces $B_{2,\mu}(\mathbb{R}^{n})$ that are close to the Sobolev spaces $H^{s}(\mathbb{R}^{n})$ with $s\in\mathbb{R}$. For this purpose we choose $\mu(\xi):=\langle\xi\rangle^{s}\varphi(\langle\xi\rangle)$ for some function $\varphi\in\mathrm{QSV}$; then $\mu$ is a quasiregularly varying function of $\langle\xi\rangle$ at infinity of index $s$. In this case it is natural to denote the H\"ormander space $B_{2,\mu}(\mathbb{R}^{n})$ by $H^{s,\varphi}(\mathbb{R}^{n})$. Let us formulate the corresponding definitions. First we introduce the following set $\mathcal{M}\subset\mathrm{QSV}$ of function parameters $\varphi$. By $\mathcal{M}$ we denote the set of all functions $\varphi:[1;+\infty)\rightarrow(0;+\infty)$ such that: \begin{enumerate} \item [a)] $\varphi$ is Borel measurable on $[1;+\infty)$; \item [b)] $\varphi$ and $1/\varphi$ are bounded on every compact interval $[1;b]$, where $1<b<+\infty$; \item [c)] $\varphi\in\mathrm{QSV}$. \end{enumerate} It follows from Theorem \ref{th2.2} that $\varphi\in\mathcal{M}$ if and only if $\varphi$ can be written in the form \eqref{eq2.2} with $b=1$ for some continuous function $\alpha:[1,\infty)\rightarrow\mathbb{R}$ approaching zero at $+\infty$ and Borel measurable bounded function $\beta:\nobreak[1,\infty)\rightarrow\mathbb{R}$. Let $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$. \begin{definition}\label{def3.2} The space $H^{s,\varphi}(\mathbb{R}^{n})$ is the H\"ormander inner product space $B_{2,\mu}(\mathbb{R}^{n})$ with $\mu(\xi):=\langle\xi\rangle^{s}\varphi(\langle\xi\rangle)$ for $\xi\in\mathbb{R}^{n}$. \end{definition} Thus $H^{s,\varphi}(\mathbb{R}^{n})$ consists of the distributions $u\in\mathcal{S}'(\mathbb{R}^{n})$ such that the Fourier transform $\widehat{u}$ is a function locally Lebesgue integrable on $\mathbb{R}^{n}$ and $$ \int\limits_{\mathbb{R}^{n}} \langle\xi\rangle^{2s}\varphi^{2}(\langle\xi\rangle)\,|\widehat{u}(\xi)|^{2}\, d\xi<\infty. $$ The inner product in the space $H^{s,\varphi}(\mathbb{R}^{n})$ is defined by the formula $$ (u_{1},u_{2})_{H^{s,\varphi}(\mathbb{R}^{n})}:= \int\limits_{\mathbb{R}^{n}}\langle\xi\rangle^{2s}\varphi^{2}(\langle\xi\rangle) \,\widehat{u_{1}}(\xi)\,\overline{\widehat{u_{2}}(\xi)}\,d\xi $$ and induces the norm in the usual way, $H^{s,\varphi}(\mathbb{R}^{n})$ being a Hilbert space. The fact that the function $\mu$ used in Definition \ref{def3.2} is a weight function follows from the integral representation of the set $\mathcal{M}$ given above. We consider Borel measurable weight functions $\mu$, rather than continuous ones as H\"ormander does. By Theorem \ref{th2.4} i), we do not obtain spaces different from those considered by H\"ormander. In the simplest case where $\varphi(\cdot)\equiv1$, the space $H^{s,\varphi}(\mathbb{R}^{n})=H^{s,1}(\mathbb{R}^{n})$ coincides with the Sobolev space $H^{s}(\mathbb{R}^{n})$.
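For instance, for the function parameter $\varphi(t):=\max\{1,\log t\}$, which belongs to $\mathcal{M}$, the space $H^{s,\varphi}(\mathbb{R}^{n})$ consists of all $u\in\mathcal{S}'(\mathbb{R}^{n})$ with locally Lebesgue integrable $\widehat{u}$ such that $$ \int\limits_{\mathbb{R}^{n}}\langle\xi\rangle^{2s}\bigl(\max\{1,\log\langle\xi\rangle\}\bigr)^{2}\,|\widehat{u}(\xi)|^{2}\,d\xi<\infty, $$ so that $H^{s+\varepsilon}(\mathbb{R}^{n})\subset H^{s,\varphi}(\mathbb{R}^{n})\subset H^{s}(\mathbb{R}^{n})$ for every $\varepsilon>0$.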
By Theorem \ref{th2.4} (ii), for each $\varepsilon>0$ there exists a number $c_{\varepsilon}\geq1$ such that $$ c_{\varepsilon}^{-1}t^{-\varepsilon}\leq\varphi(t)\leq c_{\varepsilon}t^{\varepsilon}\quad\mbox{for all}\quad t\geq1. $$ This implies the inclusions \begin{equation}\label{eq3.4} \bigcup_{\varepsilon>0}H^{s+\varepsilon}(\mathbb{R}^{n})=:H^{s+}(\mathbb{R}^{n}) \subset H^{s,\varphi}(\mathbb{R}^{n})\subset H^{s-}(\mathbb{R}^{n}):=\bigcap_{\varepsilon>0}H^{s-\varepsilon}(\mathbb{R}^{n}). \end{equation} They show that in the class of spaces \begin{equation}\label{eq3.5} \bigl\{H^{s,\varphi}(\mathbb{R}^{n}):\,s\in\mathbb{R},\,\varphi\in\mathcal{M}\,\bigr\} \end{equation} the function parameter $\varphi$ defines a supplementary (subpower) smoothness to the basic (power) $s$-smoothness. If $\varphi(t)\rightarrow\infty$ [$\varphi(t)\rightarrow0$] as $t\rightarrow\infty$, then $\varphi$ defines a positive [negative] supplementary smoothness. In other words, $\varphi$ \textit{refines} the power smoothness $s$. Therefore, it is natural to give \begin{definition}\label{def3.3} The class of spaces \eqref{eq3.5} is called the refined Sobolev scale over~$\mathbb{R}^{n}$. \end{definition} Obviously, the scale \eqref{eq3.5} is much finer than the Hilbert scale of Sobolev spaces. The scale \eqref{eq3.5} was considered by the authors in \cite{05UMJ5, 06UMJ3, 08MFAT1}. Let us formulate some important properties of it. \begin{theorem}\label{th3.3} Let $s\in\mathbb{R}$ and $\varphi,\varphi_{1}\in\mathcal{M}$. The following assertions are true: \begin{itemize} \item[i)] The dense continuous embedding $H^{s+\varepsilon,\varphi_{1}}(\mathbb{R}^{n})\hookrightarrow H^{s,\varphi}(\mathbb{R}^{n})$ is valid for each $\varepsilon>0$. \item[ii)] The function $\varphi/\varphi_{1}$ is bounded in a neighbourhood of $+\infty$ if and only if $H^{s,\varphi_{1}}(\mathbb{R}^{n})\hookrightarrow H^{s,\varphi}(\mathbb{R}^{n})$. This embedding is continuous and dense. \item[iii)] Let an integer $k\geq0$ be given. The inequality \begin{equation}\label{eq3.6} \int\limits_{1}^{\infty}\frac{dt}{t\,\varphi^{\,2}(t)}<\infty \end{equation} is equivalent to the embedding \begin{equation}\label{eq3.7} H^{k+n/2,\varphi}(\mathbb{R}^{n})\hookrightarrow C^{k}_{\mathrm{b}}(\mathbb{R}^{n}). \end{equation} The embedding is continuous. \item[iv)] The spaces $H^{s,\varphi}(\mathbb{R}^{n})$ and $H^{-s,1/\varphi}(\mathbb{R}^{n})$ are mutually dual with respect to the inner product in $L_{2}(\mathbb{R}^{n})$. \end{itemize} \end{theorem} Assertion i) of this theorem follows from \eqref{eq3.4}, whereas assertions ii) -- iv) are inherited from the properties of the H\"ormander spaces \cite[Sec. 2.2]{Hermander63}; in particular, iii) follows from Theorem \ref{th3.1}. Note that $\varphi\in\mathcal{M}\Leftrightarrow1/\varphi\in\mathcal{M}$, so the space $H^{-s,1/\varphi}(\mathbb{R}^{n})$ in assertion iv) is defined as an element of the refined Sobolev scale. The refined Sobolev scale possesses the interpolation property with respect to the Sobolev scale because every space $H^{s,\varphi}(\mathbb{R}^{n})$ is obtained by the interpolation, with an appropriate function parameter, of a couple of inner product Sobolev spaces. \begin{theorem}\label{th3.4} Let a function $\varphi\in\mathcal{M}$ and positive numbers $\varepsilon,\delta$ be given. We set \begin{equation}\label{eq3.8} \psi(t):= \begin{cases} \;t^{\,\varepsilon/(\varepsilon+\delta)}\, \varphi(t^{1/(\varepsilon+\delta)}) & \text{for\;\;\;$t\geq1$}, \\ \;\varphi(1) & \text{for\;\;\;$0<t<1$}.
\end{cases} \end{equation} Then the following assertions are true: \begin{itemize} \item[i)] The function $\psi$ belongs to the set $\mathcal{B}$ and is an interpolation parameter. \item[ii)] For an arbitrary $s\in\mathbb{R}$, we have \begin{equation}\label{eq3.9} [H^{s-\varepsilon}(\mathbb{R}^{n}),H^{s+\delta}(\mathbb{R}^{n})]_{\psi} =H^{s,\varphi}(\mathbb{R}^{n}) \end{equation} with equality of norms in the spaces. \end{itemize} \end{theorem} Assertion i) holds true by Corollary \ref{cor2.1} because the function \eqref{eq3.8} is quasiregularly varying of index $\theta:=\varepsilon/(\varepsilon+\delta)\in(0,\,1)$ at $+\infty$. Assertion ii) is directly verified if we note that the operator $J:u\mapsto\mathcal{F}^{-1}(\langle\xi\rangle^{\varepsilon+\delta}\,\widehat{u}(\xi))$ is generating for the couple on the left of \eqref{eq3.9}. Then the operator $\psi(J):u\mapsto \mathcal{F}^{-1}(\langle\xi\rangle^{\varepsilon}\varphi(\langle\xi\rangle)\,\widehat{u}(\xi))$ maps $H^{s,\varphi}(\mathbb{R}^{n})$ onto $H^{s-\varepsilon}(\mathbb{R}^{n})$, which means \eqref{eq3.9}; for details, see \cite[Sec.~3]{06UMJ3} or \cite[Sec. 3.2]{08MFAT1}. The refined Sobolev scale is closed with respect to the interpolation with function parameters that are quasiregularly varying at $+\infty$. \begin{theorem}\label{th3.5} Let $s_{0},s_{1}\in\mathbb{R}$, $s_{0}\leq s_{1}$, and $\varphi_{0},\varphi_{1}\in\mathcal{M}$. In the case where $s_{0}=s_{1}$ we suppose that the function $\varphi_{0}/\varphi_{1}$ is bounded in a neighbourhood of $\infty$. Let $\psi\in\mathcal{B}$ be a quasiregularly varying function of index $\theta\in(0,\,1)$ at $\infty$. We represent $\psi(t)=t^{\theta}\chi(t)$ with $\chi\in\mathrm{QSV}$ and set $s:=(1-\theta)s_{0}+\theta s_{1}$, $$ \varphi(t):=\varphi_{0}^{1-\theta}(t)\,\varphi_{1}^{\theta}(t)\, \chi\Bigl(t^{s_{1}-s_{0}}\,\frac{\varphi_{1}(t)}{\varphi_{0}(t)}\Bigr)\quad\mbox{for}\quad t\geq1. $$ Then $\varphi\in\mathcal{M}$, and \begin{equation}\label{eq3.10} [H^{s_{0},\varphi_{0}}(\mathbb{R}^{n}), H^{s_{1},\varphi_{1}}(\mathbb{R}^{n})]_{\psi}= H^{s,\varphi}(\mathbb{R}^{n}) \end{equation} with equality of norms in the spaces. \end{theorem} This theorem can be proved by means of repeated application of Theorem \ref{th3.4} if we employ the reiteration formula $[X_{f},X_{g}]_{\psi}=X_{\omega}$, where $X$ is an admissible couple of Hilbert spaces, $f,g,\psi\in\mathcal{B}$, $f/g$ is bounded in a neighbourhood of $\infty$, and $\omega(t):=f(t)\,\psi(g(t)/f(t))$ for $t>0$; see \cite[Sec. 2.3]{08MFAT1}. Besides, it is possible to give a direct proof, which is similar to that used for Theorem~\ref{th3.4}. \begin{remark}\label{rem3.2} The interpolation of the H\"ormander spaces $B_{p,\mu}(\mathbb{R}^{n})$, with $1\leq p\leq\infty$, was studied by M.~Schechter \cite{Schechter67} with the help of the complex method of interpolation. C.~Merucci \cite{Merucci84} and F.~Cobos, D.L.~Fernandez \cite{CobosFernandez88} considered the interpolation of various Banach spaces of generalized smoothness by means of the real method involving a function parameter. \end{remark} \section{Elliptic operators in $\mathbb{R}^{n}$}\label{sec4} In this section we consider an arbitrary uniformly elliptic classical pseudodifferential operator (PsDO) $A$ on the scale \eqref{eq3.5}. We establish an a priori estimate for a solution to the equation $Au=f$ and investigate the smoothness of the solution in this scale. Our results refine the classical theorems on elliptic operators on the Sobolev scale; see, e.g., \cite[Sec.
1.8]{Agranovich94} or \cite[Sec. 18.1]{Hermander85}. Following \cite[Sec. 1.1]{Agranovich94}, we denote by $\Psi^{r}(\mathbb{R}^{n})$ with $r\in\mathbb{R}$ the class of all the PsDOs $A$ in $\mathbb{R}^{n}$ (generally, not classical) such that their symbols $a(x,\xi)$ are complex-valued infinitely smooth functions satisfying the following condition. For arbitrary multi-indexes $\alpha$ and $\beta$, there exist a number $c_{\alpha,\beta}>0$ such that $$ |\,\partial_{x}^{\alpha}\,\partial_{\xi}^{\beta}\,a(x,\xi)\,| \leq\,c_{\alpha,\beta}\,\langle\xi\rangle^{r-|\beta|}\quad\mbox{for every}\quad x,\xi\in\mathbb{R}^{n}. $$ \begin{lemma}\label{lem4.1} Let $A\in\Psi^{r}(\mathbb{R}^{n})$ with $r\in\mathbb{R}$. Then the restriction of the mapping $u\mapsto Au$, $u\in\mathcal{S}'(\mathbb{R}^{n})$, to the space $H^{s,\varphi}(\mathbb{R}^{n})$ is a bounded linear operator $$ A:H^{s,\varphi}(\mathbb{R}^{n})\rightarrow H^{s-r,\,\varphi}(\mathbb{R}^{n}) $$ for each $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$. \end{lemma} This lemma follows from the Sobolev $\varphi\equiv1$ case \cite[Sec. 1.1, Theorem 1.1.2]{Agranovich94} by the interpolation formula \eqref{eq3.9}. By $\Psi^{r}_{\mathrm{ph}}(\mathbb{R}^{n})$ we denote the subset in $\Psi^{r}(\mathbb{R}^{n})$ that consists of all the classical (polyhomogeneous) PsDOs of the order $r$; see \cite[Sec. 1.5]{Agranovich94}. An important example of PsDO from $\Psi^{r}_{\mathrm{ph}}(\mathbb{R}^{n})$ is given by a partial differential operator of order $r$ with coefficients belonging to $C^{\infty}_{\mathrm{b}}(\mathbb{R}^{n})$. \begin{definition}\label{def4.1} A PsDO $A\in\Psi^{r}_{\mathrm{ph}}(\mathbb{R}^{n})$ is called uniformly elliptic in $\mathbb{R}^{n}$ if there exists a number $c>0$ such that $|a_{0}(x,\xi)|\geq c$ for each $x,\xi\in\mathbb{R}^{n}$ with $|\xi|=1$. Here $a_{0}(x,\xi)$ is the principal symbol of $A$. \end{definition} Let $r\in\mathbb{R}$. Suppose a PsDO $A\in\Psi^{r}_{\mathrm{ph}}(\mathbb{R}^{n})$ to be uniformly elliptic in $\mathbb{R}^{n}$. \begin{theorem}\label{th4.1} Let $s\in\mathbb{R}$, $\varphi\in\mathcal{M}$, and $\sigma<s$. The following a~priori estimate holds true: \begin{equation}\label{eq4.1} \|u\|_{H^{s,\varphi}(\mathbb{R}^{n})}\leq c\,\bigr(\,\|Au\|_{H^{s-r,\varphi}(\mathbb{R}^{n})}+ \|u\|_{H^{\sigma,\varphi}(\mathbb{R}^{n})}\,\bigl)\quad\mbox{for all}\quad u\in H^{s,\varphi}(\mathbb{R}^{n}). \end{equation} Here $c=c(s,\varphi,\sigma)$ is a positive number not depending on $u$. \end{theorem} We prove this theorem with the help of the left parametrix of $A$ if we apply Lemma \ref{lem4.1}. As knows \cite[Sec. 1.8, Theorem 1.8.3]{Agranovich94} there exists a PsDO $B\in\Psi^{-r}_{\mathrm{ph}}(\mathbb{R}^{n})$ such that $BA=I+T$, where $I$ is identical operator and $T\in\Psi^{-\infty}:=\bigcap_{m\in\mathbb{R}}\, \Psi^{m}(\mathbb{R}^{n})$. The operator $B$ is called the left parametrix of $A$. Writing $u=BAu-Tu$, we easily get \eqref{eq4.1} by Lemma \ref{lem4.1}. Let $\Omega$ be an arbitrary nonempty open subset in $\mathbb{R}^{n}$. We study an interior smoothness of a solution to the equation $Au=f$ in $\Omega$. Let us introduce some relevant spaces. By $H^{-\infty}(\mathbb{R}^{n})$ we denote the union of all the spaces $H^{s,\varphi}(\mathbb{R}^{n})$ with $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$. The linear space $H^{-\infty}(\mathbb{R}^{n})$ is endowed with the inductive limit topology. 
We set \begin{gather}\notag H^{s,\varphi}_{\mathrm{int}}(\Omega):=\bigl\{f\in H^{-\infty}(\mathbb{R}^{n}): \,\chi\,f\in H^{s,\varphi}(\mathbb{R}^{n})\\ \mbox{for all}\;\;\chi\in C^{\infty}_{\mathrm{b}}(\mathbb{R}^{n}),\;\mathrm{supp}\,\chi\subset \Omega,\; \mathrm{dist}(\mathrm{supp}\,\chi,\partial \Omega)>0\bigr\}.\label{eq4.2} \end{gather} A topology in $H^{s,\varphi}_{\mathrm{int}}(\Omega)$ is defined by the seminorms $f\mapsto\|\chi\,f\|_{H^{s,\varphi}(\mathbb{R}^{n})}$ with $\chi$ being the same as in \eqref{eq4.2}. \begin{theorem}\label{th4.2} Let $u\in H^{-\infty}(\mathbb{R}^{n})$ be a solution to the equation $Au=f$ in $\Omega$ with $f\in H^{s,\varphi}_{\mathrm{int}}(\Omega)$ for some $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$. Then $u\in H^{s+r,\varphi}_{\mathrm{int}}(\Omega)$. \end{theorem} The special case when $\Omega=\mathbb{R}^{n}$ (global smoothness) follows at once from the equality $u=Bf-Tu$, with $B$ being the left parametrix, and Lemma \ref{lem4.1}. In general, we deduce Theorem \ref{th4.2} from this case if we rearrange $A$ and the operator of multiplication by a function $\chi$ satisfying \eqref{eq4.2}. Then we write \begin{equation}\label{eq4.3} A\chi u=A\chi\eta u=\chi\,A\eta u+A'\eta u=\chi f+\chi\,A(\eta-1)u+A'\eta u, \end{equation} where $A'\in\Psi^{r-1}(\mathbb{R}^{n})$, and the function $\eta$ has the same properties as $\chi$ and is equal to 1 in a neighbourhood of $\mathrm{supp}\,\chi$. Now, if $u\in H^{s+r-k,\varphi}_{\mathrm{int}}(\Omega)$ for some integer $k\geq1$, then the right-hand side of \eqref{eq4.3} belongs to $H^{s-k+1,\varphi}(\mathbb{R}^{n})$ that implies $\chi u\in H^{s+r-k+1,\varphi}(\mathbb{R}^{n})$, i.e. $u\in H^{s+r-k+1,\varphi}_{\mathrm{int}}(\Omega)$. By induction in $k$ we have $u\in H^{s+r,\varphi}_{\mathrm{int}}(\Omega)$. It is useful to compare Theorem \ref{th4.2} with H\"ormander's Regularity Theorem. If $A$ is a partial \emph{differential} operator, and $\Omega$ is bounded, then Theorem \ref{th4.2} is a consequence of the H\"ormander theorem. Applying Theorems \ref{th4.2} and \ref{th3.3} iii) we get the following sufficient condition for the solution $u$ to have continuous and bounded derivatives of the prescribed order. \begin{theorem}\label{th4.3} Let $u\in H^{-\infty}(\mathbb{R}^{n})$ be a solution to the equation $Au=f$ in $\Omega$, with $f\in H^{k-r+n/2,\varphi}_{\mathrm{int}}(\Omega)$ for some integer $k\geq0$ and function parameter $\varphi\in\mathcal{M}$. Suppose that $\varphi$ satisfies \eqref{eq3.6}. Then $u$ has the continuous partial derivatives on $\Omega$ up to the order $k$, and they are bounded on every set $\Omega_{0}\subset \Omega$ with $\mathrm{dist}(\Omega_{0},\partial \Omega)>0$. In particular, if $\Omega=\mathbb{R}^{n}$, then $u\in C^{k}_{\mathrm{b}}(\mathbb{R}^{n})$. \end{theorem} This theorem shows an advantage of the refined Sobolev scale over the Sobolev scale when a classical smoothness of a solution is under investigation. Indeed, if we restrict ourselves to the Sobolev case of $\varphi\equiv1$, then we have to replace the condition $f\in H^{k-r+n/2,\varphi}_{\mathrm{int}}(\Omega)$ with the condition $f\in H^{k-r+\varepsilon+n/2,1}_{\mathrm{int}}(\Omega)$ for some $\varepsilon>0$. The last condition is far stronger than previous one. Note that the condition \eqref{eq3.6} not only is sufficient in Theorem 3.3 but also is necessary on the class of all the considered solutions $u$. 
Namely, \eqref{eq3.6} is equivalent to the implication \begin{equation}\label{eq4.4} \bigl(\,u\in H^{-\infty}(\mathbb{R}^{n}),\,\;\;f:=Au\in H^{k-r+n/2,\varphi}_{\mathrm{int}}(\Omega)\,\bigr)\;\;\Rightarrow\;\;u\in C^{k}(\Omega). \end{equation} Indeed, if $u\in H^{k+n/2,\varphi}_{\mathrm{int}}(\Omega)$, then $f=Au\in H^{k-r+n/2,\varphi}_{\mathrm{int}}(\Omega)$, whence $u\in C^{k}(\Omega)$ if \eqref{eq4.4} holds. Thus \eqref{eq4.4} entails \eqref{eq3.6} in view of H\"ormander's Theorem \ref{th3.1}. The analogs of Theorems \ref{th4.1}--\ref{th4.3} were proved in \cite{08UMB3} for uniformly elliptic matrix PsDOs. \section{H\"ormander spaces over a closed manifold}\label{sec5} In this section we introduce a certain class of H\"ormander spaces over a closed (compact) smooth manifold. Namely, using the spaces $H^{s,\varphi}(\mathbb{R}^{n})$ with $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$ we construct their analogs for the manifold. We give three equivalent definitions of the analogs; these definitions are similar to those used for the Sobolev spaces (see, e.g., \cite[Ch.~1, Sec.~5]{Taylor81}). \subsection{The equivalent definitions}\label{sec5.1} In what follows except Subsection \ref{sec7.1}, $\Gamma$ is a closed (i.e. compact and without a boundary) infinitely smooth oriented manifold of an arbitrary dimension $n\geq1$. We suppose that a certain $C^{\infty}$-density $dx$ is defined on $\Gamma$. As usual, $\mathcal{D}'(\Gamma)$ denotes the linear topological space of all distributions on $\Gamma$. The space $\mathcal{D}'(\Gamma)$ is antidual to the space $C^{\infty}(\Gamma)$ with respect to the natural extension of the scalar product in $L_{2}(\Gamma):=L_{2}(\Gamma,dx)$ by continuity. This extension is denoted by $(f,w)_{\Gamma}$ for $f\in\mathcal{D}'(\Gamma)$ and $w\in C^{\infty}(\Gamma)$. Let $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$. We give the following three equivalent definitions of the H\"ormander space $H^{s,\varphi}(\Gamma)$. The first definition exhibits the local properties of those distributions $f\in\mathcal{D}'(\Gamma)$ that form $H^{s,\varphi}(\Gamma)$. From the $C^{\infty}$-structure on $\Gamma$, we arbitrarily choose a finite collection of the local charts $\alpha_{j}:\mathbb{R}^{n}\leftrightarrow\Gamma_{j}$, $j=1,\ldots,\varkappa$, such that the open sets $\Gamma_{j}$ form the finite covering of $\Gamma$. Let functions $\chi_{j}\in C^{\infty}(\Gamma)$, $j=1,\ldots,\varkappa$, form a partition of unity on $\Gamma$ satisfying the condition $\mathrm{supp}\,\chi_{j}\subset\Gamma_{j}$. \begin{definition}\label{def5.1} The linear space $H^{s,\varphi}(\Gamma)$ is defined by the formula $$ H^{s,\varphi}(\Gamma):=\bigl\{f\in\mathcal{D}'(\Gamma):\; (\chi_{j}f)\circ\alpha_{j}\in H^{s,\varphi}(\mathbb{R}^{n})\;\;\forall\;j=1,\ldots\varkappa\bigr\}. $$ Here $(\chi_{j}f)\circ\alpha_{j}$ is the representation of the distribution $\chi_{j}f$ in the local chart $\alpha_{j}$. The inner product in the space $H^{s,\varphi}(\Gamma)$ is introduced by the formula $$ (f_{1},f_{2})_{H^{s,\varphi}(\Gamma)}:=\sum_{j=1}^{\varkappa}\,((\chi_{j}f_{1})\circ\alpha_{j}, (\chi_{j}\,f_{2})\circ\alpha_{j})_{H^{s,\varphi}(\mathbb{R}^{n})} $$ and induces the norm in the usual way. \end{definition} In the special case where $\varphi\equiv1$ the space $H^{s,\varphi}(\Gamma)$ coincides with the inner product Sobolev space $H^{s}(\Gamma)$ of order $s$. The Sobolev spaces on $\Gamma$ are known to be complete and independent (up to equivalence of norms) of the choice of the local charts and the partition of unity. 
The second definition connects the space $H^{s,\varphi}(\Gamma)$ with the Sobolev scale by means of the interpolation. \begin{definition}\label{def5.2} Let two integers $k_{0}$ and $k_{1}$ be such that $k_{0}<s<k_{1}$. We define \begin{equation}\label{eq5.1} H^{s,\varphi}(\Gamma):=[H^{k_{0}}(\Gamma),H^{k_{1}}(\Gamma)]_{\psi}, \end{equation} where the interpolation parameter $\psi$ is given by the formula \eqref{eq3.8} with $\varepsilon:=s-k_{0}$ and $\delta:=k_{1}-s$. \end{definition} It is useful in the spectral theory to have the third definition of $H^{s,\varphi}(\Gamma)$ that connects the norm in $H^{s,\varphi}(\Gamma)$ with a certain function of $1-\Delta_{\Gamma}$. As usual, $\Delta_{\Gamma}$ is the Beltrami-Laplace operator on the manifold $\Gamma$ endowed with the Riemannian metric that induces the density $dx$; see, e.g., \cite[Sec. 22.1]{Shubin01}. \begin{definition}\label{def5.3} The space $H^{s,\varphi}(\Gamma)$ is defined to be the completion of $C^{\infty}(\Gamma)$ with respect to the Hilbert norm \begin{equation}\label{eq5.2} f\,\mapsto\, \|(1-\Delta_{\Gamma})^{s/2}\varphi((1-\Delta_{\Gamma})^{1/2})\,f\|_{L_{2}(\Gamma)},\quad f\in C^{\infty}(\Gamma). \end{equation} \end{definition} \begin{theorem}\label{th5.1} Definitions $\ref{def5.1}$, $\ref{def5.2}$, and $\ref{def5.3}$ are mutually equivalent, that is they define the same Hilbert space $H^{s,\varphi}(\Gamma)$ up to equivalence of norms. \end{theorem} Let us explain how to prove this fundamental theorem. The equivalence of Definitions \ref{def5.1} and \ref{def5.2}. We use Definition \ref{def5.1} as a starting point and show that the equality \eqref{eq5.1} holds true up to equivalence of norms. We apply the $\mathbb{R}^{n}$-analog of \eqref{eq5.1}, due to Theorem \ref{th3.4}, and pass to local coordinates on~$\Gamma$. Namely, let the mapping $T$ take each $f\in\mathcal{D}'(\Gamma)$ to the vector with components $(\chi_{j}f)\circ\alpha_{j}$, $j=1,\ldots,\varkappa$. We get the bounded linear operator \begin{equation}\label{eq5.3} T:\,H^{s,\varphi}(\Gamma)\rightarrow(H^{s,\varphi}(\mathbb{R}^{n}))^{\varkappa}. \end{equation} It has the right inverse bounded linear operator \begin{equation}\label{eq5.4} K:\,(H^{s,\varphi}(\mathbb{R}^{n}))^{\varkappa}\rightarrow H^{s,\varphi}(\Gamma), \end{equation} where the mapping $K$ can be constructed with the help of the local charts and is independent of parameters $s$ and $\varphi$. If we consider these operators in the Sobolev case of $\varphi\equiv1$ and use the $\mathbb{R}^{n}$-analog of \eqref{eq5.1}, then we get the bounded operators \begin{gather}\label{eq5.5} T:\,[H^{k_{0}}(\Gamma),H^{k_{1}}(\Gamma)]_{\psi} \rightarrow(H^{s,\varphi}(\mathbb{R}^{n}))^{\varkappa}, \\ K:\,(H^{s,\varphi}(\mathbb{R}^{n}))^{\varkappa} \rightarrow[H^{k_{0}}(\Gamma),H^{k_{1}}(\Gamma)]_{\psi}. \label{eq5.6} \end{gather} Now it follows from \eqref{eq5.3} and \eqref{eq5.6} that the identity mapping $KT$ establishes the continuous embedding of $H^{s,\varphi}(\Gamma)$ into the interpolation space $[H^{k_{0}}(\Gamma),H^{k_{1}}(\Gamma)]_{\psi}$, whereas the inverse continuous embedding is valid by \eqref{eq5.4} and \eqref{eq5.5}. Thus the equality \eqref{eq5.1} holds true up to equivalence of norms; for details, see \cite[Sec.~3]{06UMJ3} or \cite[Sec. 3.3]{08MFAT1}. By equivalence of Definitions \ref{def5.1} and \ref{def5.2}, the space $H^{s,\varphi}(\Gamma)$ is complete and independent (up to equivalence of norms) of the choice of the local charts and the partition of unity on $\Gamma$. 
Moreover, the set $C^{\infty}(\Gamma)$ is dense in $H^{s,\varphi}(\Gamma)$. The equivalence of Definitions \ref{def5.2} and \ref{def5.3}. Let us use Definition \ref{def5.2} as a starting point. If $s>0$, then we choose $k_{0}:=0$ and $k_{1}:=2k>s$ for some integer $k$ in \eqref{eq5.1}. By $\Lambda_{k}(\Gamma)$ we denote the Sobolev space $H^{2k}(\Gamma)$ endowed with the equivalent norm $\|(1-\Delta_{\Gamma})^{k}f\|_{L_{2}(\Gamma)}$ of $f$. We have \begin{eqnarray*} \|f\|_{H^{s,\varphi}(\Gamma)}&=&\|f\|_{[H^{0}(\Gamma),H^{2k}(\Gamma)]_{\psi}}\asymp \|f\|_{[L_{2}(\Gamma),\Lambda_{k}(\Gamma)]_{\psi}}= \|\psi((1-\Delta_{\Gamma})^{k})f\|_{L_{2}(\Gamma)}\\ &=&\|(1-\Delta_{\Gamma})^{s/2}\varphi((1-\Delta_{\Gamma})^{1/2})\,f\|_{L_{2}(\Gamma)}, \end{eqnarray*} with $f\in C^{\infty}(\Gamma)$, because $(1-\Delta_{\Gamma})^{k}$ is the generating operator for the couple $[L_{2}(\Gamma),\Lambda_{k}(\Gamma)]$. Thus the norm \eqref{eq5.2} is equivalent to the norm in $H^{s,\varphi}(\Gamma)$ on the dense set $C^{\infty}(\Gamma)$ if $s>0$. The case of $s\leq0$ can be reduced to the previous one with the help of the homeomorphism $$ (1-\Delta_{\Gamma})^{k}:\,H^{s+2k,\varphi}(\Gamma)\leftrightarrow H^{s,\varphi}(\Gamma), $$ with $s+2k>0$ for some integer $k\geq1$. This homeomorphism follows from the Sobolev case of $\varphi\equiv1$ by the interpolation formula \eqref{eq5.1} (for details, see \cite[Sec. 3.4]{08MFAT1}). Now we can give \begin{definition}\label{def5.4} The class of Hilbert spaces \begin{equation}\label{eq5.7} \bigl\{H^{s,\varphi}(\Gamma):\,s\in\mathbb{R},\,\varphi\in\mathcal{M}\,\bigr\} \end{equation} is called the refined Sobolev scale over the manifold~$\Gamma$. \end{definition} \subsection{The properties}\label{sec5.2} We consider some important properties of the scale \eqref{eq5.7}. They are inherited from the refined Sobolev scale over $\mathbb{R}^{n}$. \begin{theorem}\label{th5.2} Let $s\in\mathbb{R}$ and $\varphi,\varphi_{1}\in\mathcal{M}$. The following assertions are true: \begin{enumerate} \item[i)] The dense compact embedding $H^{s+\varepsilon,\varphi_{1}}(\Gamma)\hookrightarrow H^{s,\varphi}(\Gamma)$ is valid for each $\varepsilon>0$. \item[ii)] The function $\varphi/\varphi_{1}$ is bounded in a neighbourhood of $+\infty$ if and only if $H^{s,\varphi_{1}}(\Gamma)\hookrightarrow H^{s,\varphi}(\Gamma)$. This embedding is continuous and dense. It is compact if and only if $\varphi(t)/\varphi_{1}(t)\rightarrow0$ as $t\rightarrow+\infty$. \item[iii)] Let integer $k\geq0$ be given. Inequality \eqref{eq3.6} is equivalent to the embedding $H^{k+n/2,\varphi}(\Gamma)\hookrightarrow C^{k}(\Gamma)$. The embedding is compact. \item[iv)] The spaces $H^{s,\varphi}(\Gamma)$ and $H^{-s,1/\varphi}(\Gamma)$ are mutually dual (up to equivalence of norms) with respect to the inner product in $L_{2}(\mathbb{R}^{n})$. \end{enumerate} \end{theorem} This theorem except the statements on compactness follows from Theorem \ref{th3.3} in view of Definition \ref{def5.1}. The compactness of the embeddings is a consequence of the compactness of~$\Gamma$. Namely, the statements in assertions i) and ii) follow from the next proposition. For each number $r>0$, the embedding $$ \{u\in H^{s,\varphi_{1}}(\mathbb{R}^{n}):\mathrm{dist}(0,\mathrm{supp}\,u)\leq r\}\hookrightarrow H^{s,\varphi}(\mathbb{R}^{n}) $$ is compact if and only if $\varphi(t)/\varphi_{1}(t)\rightarrow0$ as $t\rightarrow\infty$. This proposition is a special case of H\"ormander's result \cite[Sec.~2, Theorem 2.2.3]{Hermander63}. 
Now we get the compactness of the embedding in assertion iii) if we write $$ H^{k+n/2,\varphi}(\Gamma)\hookrightarrow H^{k+n/2,\varphi_{1}}(\Gamma)\hookrightarrow C^{k}(\Gamma) $$ for a function $\varphi_{1}$ such that the first embedding is compact. \begin{theorem}\label{th5.3} Theorems $\ref{th3.4}$ and $\ref{th3.5}$ remain true if we replace the designation $\mathbb{R}^{n}$ with $\Gamma$, and the phrase `equality of norms' with `equivalence of norms'. \end{theorem} Theorem \ref{th5.3} can be proved by means of a repeated application of the interpolation formula \eqref{eq5.1}. We can also deduce this theorem from its $\mathbb{R}^{n}$-analogs with the help of operators $T$ and $K$ used above. \section{Elliptic operators on a closed manifold}\label{sec6} Recall that $\Gamma$ is a closed infinitely smooth oriented manifold. In this section we study an arbitrary elliptic classical PsDO $A$ on the refined Sobolev scale over $\Gamma$. We prove that $A$ is a bounded and Fredholm operator on the respective pairs of H\"ormander spaces, and investigate the smoothness of a solution to the equation $Au=f$. Our results \cite[Sec. 4 and 5]{07UMJ6} refine the classical theorems on elliptic operators on the Sobolev scale over a closed smooth manifold (see, e.g., \cite[Sec.~2]{Agranovich94}, \cite[Sec.~19]{Hermander85}, and \cite[\S~8]{Shubin01}). We also use some elliptic PsDOs to get an important class of equivalent norms in $H^{s,\varphi}(\Gamma)$ \cite[Sec. 3.4]{08MFAT1}. \subsection{The main properties}\label{sec6.1} By $\Psi^{r}(\Gamma)$ with $r\in\mathbb{R}$ we denote the class of all the PsDOs $A$ on $\Gamma$ (generally, not classical) such that the image of $A$ in each local chart on $\Gamma$ belongs to $\Psi^{r}(\mathbb{R}^{n})$; see \cite[Sec. 2.1]{Agranovich94}. \begin{lemma}\label{lem6.1} Let $A\in\Psi^{r}(\Gamma)$, with $r\in\mathbb{R}$. Then the restriction of the mapping $u\mapsto Au$, $u\in\mathcal{D}'(\Gamma)$, to the space $H^{s,\varphi}(\Gamma)$ is the bounded linear operator \begin{equation}\label{eq6.1} A:H^{s,\varphi}(\Gamma)\rightarrow H^{s-r,\varphi}(\Gamma) \end{equation} for each $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$. \end{lemma} This lemma follows from the Sobolev $\varphi\equiv1$ case \cite[Sec. 2.1, Theorem 2.1.2]{Agranovich94} by the interpolation in view of Theorem \ref{th5.3}. By $\Psi^{r}_{\mathrm{ph}}(\Gamma)$ we denote the subset in $\Psi^{r}(\Gamma)$ that consists of all classical PsDOs of the order $r$; see \cite[Sec. 2.1]{Agranovich94}. The image of PsDO $A\in\Psi^{r}_{\mathrm{ph}}(\Gamma)$ in every local chart on $\Gamma$ belongs to $\Psi^{r}_{\mathrm{ph}}(\mathbb{R}^{n})$. \begin{definition}\label{def6.1} A PsDO $A\in\Psi^{r}_{\mathrm{ph}}(\mathbb{R}^{n})$ is called elliptic on $\Gamma$ if $a_{0}(x,\xi)\neq0$ for each point $x\in\Gamma$ and covector $\xi\in T^{\ast}_{x}\Gamma\setminus\{0\}$. Here $a_{0}(x,\xi)$ is the principal symbol of $A$, and $T^{\ast}_{x}\Gamma$ is the cotangent space to $\Gamma$ at $x$. \end{definition} Let $r\in\mathbb{R}$. Suppose a PsDO $A\in\Psi^{r}_{\mathrm{ph}}(\Gamma)$ to be elliptic on $\Gamma$. By $A^{+}$ we denote the PsDO formally adjoint to $A$ with respect to the sesquilinear form $(\cdot,\cdot)_{\Gamma}$. Since both $A$ and $A^{+}$ are elliptic on $\Gamma$, both the spaces \begin{gather*} N:=\{\,u\in C^{\infty}(\Gamma):\,Au=0\;\;\mbox{on}\;\;\Gamma\,\},\\ N^{+}:=\{\,v\in C^{\infty}(\Gamma):\,A^{+}v=0\;\;\mbox{on}\;\;\Gamma\,\} \end{gather*} are finite-dimensional \cite[Sec. 8.2, Theorem 8.1]{Shubin01}. 
Recall the following definition. \begin{definition}\label{def6.2} Let $X$ and $Y$ be Banach spaces. The bounded linear operator $T:X\rightarrow Y$ is said to be Fredholm if its kernel $\ker T$ and co-kernel $\mathrm{coker}\,T:=Y/\,T(X)$ are finite-dimensional. The number $\mathrm{ind}\,T:=\dim\ker T-\dim\mathrm{coker}\,T$ is called the index of the Fredholm operator~$T$. \end{definition} If the operator $T:X\rightarrow Y$ is Fredholm, then its range $T(X)$ is closed in $Y$; see, e.g., \cite[Sec. 19.1, Lemma 19.1.1]{Hermander85}. \begin{theorem}\label{th6.1} Let $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$. For the elliptic PsDO $A$, the operator \eqref{eq6.1} is Fredholm, has the kernel $N$ and the range \begin{equation}\label{eq6.2} A(H^{s,\varphi}(\Gamma))=\bigl\{f\in H^{s-r,\varphi}(\Gamma):\,(f,v)_{\Gamma}=0\;\;\mbox{for all}\;\;v\in N^{+}\bigr\}. \end{equation} The index of the operator \eqref{eq6.1} is equal to $\dim N-\dim N^{+}$ and independent of $s$ and $\varphi$. \end{theorem} Theorem \ref{th6.1} is well known in the Sobolev case of $\varphi\equiv1$; see, e.g., \cite[Sec. 19.2, Theorem 19.2.1]{Hermander85}. For an arbitrary $\varphi\in\mathcal{M}$, we deduce this theorem from the Sobolev case with the help of the interpolation. Indeed, consider the bounded Fredholm operators $A:H^{s\mp1}(\Gamma)\rightarrow H^{s\mp1-r}(\Gamma)$. By applying Theorem \ref{th5.3}, we have the bounded operator $$ A:\,H^{s,\varphi}(\Gamma)=[H^{s-1}(\Gamma),H^{s+1}(\Gamma)]_{\psi}\rightarrow [H^{s-1-r}(\Gamma),H^{s+1-r}(\Gamma)]_{\psi}=H^{s-r,\varphi}(\Gamma). $$ Here $\psi$ is the interpolation parameter defined by the formula \eqref{eq3.8} with $\varepsilon=\delta=\nobreak1$. This operator has the properties stated in Theorem \ref{th6.1} because of the next proposition. \begin{proposition}\label{prop6.1} Let $X=[X_{0},X_{1}]$ and $Y=[Y_{0},Y_{1}]$ be admissible couples of Hilbert spaces, and a linear mapping $T$ be given on $X_{0}$. Suppose we have the Fredholm bounded operators $T:X_{j}\rightarrow Y_{j}$, with $j=0,\,1$, that possess the common kernel and the common index. Then, for an arbitrary interpolation parameter $\psi\in\mathcal{B}$, the bounded operator $T:X_{\psi}\rightarrow Y_{\psi}$ is Fredholm, has the same kernel and the same index, moreover its range $T(X_{\psi})=Y_{\psi}\cap T(X_{\,0})$. \end{proposition} This proposition was proved by G.~Geymonat \cite[p.~280, Proposition 5.2]{Geymonat65} for arbitrary interpolation functors given on the category of couples of Banach spaces. The proof for Hilbert spaces is analogous. If both the spaces $N$ and $N^{+}$ are trivial, then the operator \eqref{eq6.1} is a homeomorphism. Generally, the index of \eqref{eq6.1} is equal to zero provided that $\dim\Gamma\geq2$ (see \cite{AtiyahSinger63} and \cite[Sec. 2.3~f]{Agranovich94}). In the case where $\dim\Gamma=1$, the index can be nonzero. If $A$ is a differential operator, then the index is always zero. The Fredholm property of $A$ implies the following a priory estimate. \begin{theorem}\label{th6.2} Let $s\in\mathbb{R}$, $\varphi\in\mathcal{M}$, and $\sigma<s$. Then $$ \|u\|_{H^{s,\varphi}(\Gamma)}\leq c\,\bigr(\,\|Au\|_{H^{s-r,\varphi}(\Gamma)}+ \|u\|_{H^{\sigma,\varphi}(\Gamma)}\,\bigl)\quad\mbox{for all}\quad u\in H^{s,\varphi}(\Gamma); $$ here the number $c>0$ is independent of $u$. 
\end{theorem} We obtain the above implication if we use the compactness of the embedding $H^{s,\varphi}(\Gamma)\hookrightarrow H^{\sigma,\varphi}(\Gamma)$ for $\sigma<s$ and apply the following proposition \cite[Sec. 2.3, Theorem 2.3.4]{Agranovich94}. \begin{proposition}\label{prop6.2} Let $X$, $Y$, and $Z$ be Banach spaces. Suppose that the compact embedding $X\hookrightarrow Z$ is valid, and a bounded linear operator $T:X\rightarrow Y$ is given. Then $\ker T$ is finite-dimensional and $T(X)$ is closed in $Y$ if and only if there exists a number $c>0$ such that $$ \|u\|_{X}\leq c\,(\,\|Tu\|_{Y}+\|u\|_{Z}\,)\quad\mbox{for all}\quad u\in X. $$ \end{proposition} Now we study a local smoothness of a solution to the elliptic equation $Au=f$. Let $\Gamma_{0}$ be an nonempty open set on the manifold $\Gamma$, and define \begin{equation}\label{eq6.3} H^{s,\varphi}_{\mathrm{loc}}(\Gamma_{0}) :=\bigl\{f\in\mathcal{D}'(\Gamma):\,\chi\,f\in H^{s,\varphi}(\Gamma),\;\;\forall\;\chi\in C^{\infty}(\Gamma),\;\mathrm{supp}\,\chi\subseteq \Gamma_{0}\bigr\}. \end{equation} \begin{theorem}\label{th6.3} Let $u\in\mathcal{D}'(\Gamma)$ be a solution to the equation $Au=f$ on $\Gamma_{0}$ with $f\in H^{s,\varphi}_{\mathrm{loc}}(\Gamma_{0})$ for some $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$. Then $u\in H^{s+r,\varphi}_{\mathrm{loc}}(\Gamma_{0})$. \end{theorem} The special case when $\Gamma_{0}=\Gamma$ (global smoothness) follows from Theorem \ref{th6.1}. Indeed, using \eqref{eq6.2} we can write $f=Av$ for some $v\in H^{s+r,\varphi}(\Gamma)$, whence $u-v\in N$ and $u\in H^{s+r,\varphi}(\Gamma)$. In general, we deduce Theorem \ref{th6.3} from this case reasoning similar to the proof of Theorem~\ref{th4.2}. If we bring Theorems \ref{th6.3} and \ref{th5.2} iii) together, then we get the following sufficient condition for the solution $u$ to have continuous derivatives of the prescribed order on $\Gamma_{0}$. Recall that $n:=\dim\Gamma$. \begin{theorem}\label{th6.4} Let $u\in\mathcal{D}'(\Gamma)$ be a solution to the equation $Au=f$ on $\Gamma_{0}$, with $f\in H^{k-r+n/2,\varphi}_{\mathrm{loc}}(\Gamma_{0})$ for some integer $k\geq0$ and function parameter $\varphi\in\mathcal{M}$. If $\varphi$ satisfies \eqref{eq3.6}, then $u\in C^{k}(\Gamma_{0})$. \end{theorem} Here it is important that the condition \eqref{eq3.6} not only is sufficient for $u$ to belong to $C^{k}(\Gamma_{0})$ but also is necessary on the class of all the considered solutions $u$. \subsection{The equivalent norms induced by elliptic operators}\label{sec6.2} Let $r>0$, and a PsDO $A\in\Psi^{r}_{\mathrm{ph}}(\Gamma)$ be elliptic on $\Gamma$. We may consider $A$ as a closed unbounded operator on $L_{2}(\Gamma)$ with the domain $H^{r}(\Gamma)$; see, e.g., \cite[Sec. 2.3, Theorem 2.3.5]{Agranovich94}. Suppose the operator $A$ to be positive in $L_{2}(\Gamma)$. Then $A$ is self-adjoint in $L_{2}(\Gamma)$ \cite[Sec. 2.3, Theorem 2.3.7]{Agranovich94}. For $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$, we set $$ \varphi_{s,r}(t):=t^{s/r}\varphi(t^{1/r})\;\;\mbox{for}\;\;t\geq1,\quad\mbox{and} \quad\varphi_{s,r}(t):=\varphi(1)\;\;\mbox{for}\;\;0<t<1. $$ The operator $\varphi_{s,r}(A)$ is regarded as the Borel function $\varphi_{s,r}$ of the positive self-adjoint operator~$A$ in $L_{2}(\Gamma)$. Consider the norm \begin{equation}\label{eq6.4} f\,\mapsto\,\|\varphi_{s,r}(A)f\|_{L_{2}(\Gamma)},\quad f\in C^{\infty}(\Gamma). \end{equation} \begin{theorem}\label{th6.5} Let $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$. 
The norm in the space $H^{s,\varphi}(\Gamma)$ is equivalent to the norm \eqref{eq6.4} on the set $C^{\infty}(\Gamma)$. Thus $H^{s,\varphi}(\Gamma)$ is the completion of $C^{\infty}(\Gamma)$ with respect to the norm \eqref{eq6.4}. \end{theorem} The proof of this theorem is quite similar to the reasoning we did to demonstrate the equivalence of Definitions \ref{def5.2} and \ref{def5.3}. If $H^{s,\varphi}(\Gamma)\hookrightarrow L_{2}(\Gamma)$, then Theorem \ref{th6.5} entails the following. \begin{corollary}\label{cor6.1} Let $s\geq0$ and $\varphi\in\mathcal{M}$. In the case where $s=0$ we suppose that the function $1/\varphi$ is bounded in a neighbourhood of $\infty$. Then the space $H^{s,\varphi}(\Gamma)$ coincides with the domain of the operator $\varphi_{s,r}(A)$, and the norm in the space $H^{s,\varphi}(\Gamma)$ is equivalent to the graphics norm of $\varphi_{s,r}(A)$. \end{corollary} It is useful to have an analog of Theorem \ref{th6.5} formulated in terms of sequences. For this purpose, we recall some spectral properties of the operator $A$; see, e.g., \cite[Sec. 6.1]{Agranovich94} or \cite[Sec. 8.3 and 15.2]{Shubin01}. There is an orthonormal basis $(h_{j})_{j=1}^{\infty}$ of $L_{2}(\Gamma)$ formed by eigenfunctions $h_{j}\in C^{\infty}(\Gamma)$ of the operator~$A$. Let $\lambda_{j}>0$ is the eigenvalue corresponding to $h_{j}$; the enumeration is such that $\lambda_{j}\leq\lambda_{j+1}$. Then the spectrum of $A$ coincides with the set $\{\lambda_{1},\lambda_{2},\lambda_{3},\ldots\}$ of all eigenvalues of $A$, and the asymptotics formula holds: $\lambda_{j}\sim c\,j^{\,r/n}$ as $j\rightarrow\infty$. Each distribution $f\in\mathcal{D}'(\Gamma)$ is expanded into the Fourier series \begin{equation}\label{eq6.5} f=\sum_{j=1}^{\infty}\;c_{j}(f)\,h_{j} \end{equation} convergent in $\mathcal{D}'(\Gamma)$; here $c_{j}(f):=(f,h_{j})_{\Gamma}$ are the Fourier coefficients of $f$. \begin{theorem}\label{th6.6} Let $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$. Then the next equality of spaces with equivalence of norms in them holds: \begin{gather}\label{eq6.6} H^{s,\varphi}(\Gamma)=\Bigl\{f\in\mathcal{D}'(\Gamma):\, \sum_{j=1}^{\infty}\;j^{\,2s/n}\,\varphi^{2}(j^{\,1/n})\,|c_{j}(f)|^{2}<\infty\Bigr\},\\ \|f\|_{H^{s,\varphi}(\Gamma)}\asymp\Bigl(\;\sum_{j=1}^{\infty}\; j^{\,2s/n}\,\varphi^{2}(j^{\,1/n})\,|c_{j}(f)|^{2}\;\Bigr)^{1/2}.\label{eq6.7} \end{gather} \end{theorem} This theorem follows from Theorem \ref{th6.5} since $$ \varphi_{s,r}(A)\,f=\sum_{j=1}^{\infty}\,\varphi_{s,r}(\lambda_{j})\,c_{j}(f)\,h_{j} \quad\mbox{(convergence in $L_{2}(\Gamma)$)} $$ for each function $f$ from the domain of the operator $\varphi_{s,r}(A)$. Here $\varphi_{s,r}(\lambda_{j})\asymp j^{\,s/n}\varphi(j^{\,1/n})$ with integers $j\geq1$ in view of the asymptotics formula mentioned above. By applying Parseval's equality we get \eqref{eq6.7} and then can deduce \eqref{eq6.6}. \begin{corollary}\label{cor6.2} Suppose $f\in H^{s,\varphi}(\Gamma)$, then the series \eqref{eq6.5} converges to $f$ in the space $H^{s,\varphi}(\Gamma)$. \end{corollary} This is a simple consequence of \eqref{eq6.7}. \begin{example}\label{ex6.1} Let $\Gamma$ be a unit circle and $A:=1-d^{2}/dt^{2}$, where $t$ sets the natural parametrization on $\Gamma$. The eigenfunctions $h_{j}(t):=(2\pi)^{-1}\mathrm{e}^{ijt}$, $j\in\mathbb{Z}$, of $A$ form an orthonormal basis in $L_{2}(\Gamma)$, with $\lambda_{j}:=1+j^{\,2}$ being the corresponding eigenvalues. 
Therefore the equivalence \eqref{eq6.7} becomes $$ \|f\|_{H^{s,\varphi}(\Gamma)}\asymp\|\varphi_{s,2}(A)f\|_{L_{2}(\Gamma)}= \Bigl(\;\sum_{j=-\infty}^{\infty}\;(1+j^{\,2})^{s} \,\varphi^{2}((1+j^{\,2})^{1/2})\,|c_{j}(f)|^{2}\;\Bigr)^{1/2}. $$ Note that we can chose the basis formed by the real-valued eigenfunctions $h_{0}(t):=(2\pi)^{-1}$, $h_{j}(t):=\pi^{-1}\cos jt$, and $h_{-j}(t):=\pi^{-1}\sin jt$, with integral $j\geq\nobreak1$. Then $$ \|f\|_{H^{s,\varphi}(\Gamma)}^{2}\asymp|a_{0}(f)|^{2}+ \sum_{j=1}^{\infty}\;j^{\,2s}\,\varphi^{2}(j)\,\bigl(|a_{j}(f)|^{2}+|b_{j}(f)|^{2}\bigr), $$ where $a_{0}(f)$, $a_{j}(f)$, and $b_{j}(f)$ are the Fourier coefficients of $f$ with respect to these eigenfunctions. In the considered case, $H^{s,\varphi}(\Gamma)$ is closely related to the spaces of periodic functions considered by A.I.~Stepanets \cite[Ch.~I, \S~7]{Stepanets87}, \cite[Part~I, Ch.~3, Sec. 7.1]{Stepanets05}. \end{example} \section{Applications to spectral expansions}\label{sec7} In this section, we investigate the convergence of expansions in eigenfunctions of normal (in particular, self-adjoint) elliptic PsDOs given on a compact smooth manifold. Using the refined Sobolev scale, we find new sufficient conditions for the convergence almost everywhere; they are expressed in constructive terms of regularity of functions. We also give a criterion for convergence in the metrics of $C^{k}$ on the classes being H\"ormander spaces. Beforehand let us recall some classical results concerning the convergence almost everywhere of arbitrary orthogonal series. The results will be used below. \subsection{The classical results}\label{sec7.1} In this subsection, $\Gamma$ is an arbitrary set with a finite measure~$\mu$. Suppose that $(h_{j})_{j=1}^{\infty}$ is an orthonormal system of functions in $L_{2}(\Gamma):=L_{2}(\Gamma,d\mu)$, generally, complex-valued. The following proposition is a general version of the well-known Menshov-Rademacher convergence theorem. \begin{theorem}\label{th7.1} Let a sequence of complex numbers $(a_{j})_{j=1}^{\infty}$ be such that \begin{equation}\label{eq7.1} \sum_{j=2}^{\infty}\;|a_{j}|^{2}\,\log^{2}j<\infty. \end{equation} Then the series \begin{equation}\label{eq7.2} \sum_{j=1}^{\infty}\;a_{j}\,h_{j}(x) \end{equation} converges almost everywhere on $\Gamma$. \end{theorem} This theorem was proved independently by D.E.~Menshov \cite{Menschoff23} and H.~Rad\-e\-ma\-cher \cite{Rademacher22} in the classical case where $\Gamma$ is a bounded interval on $\mathbb{R}$, $\mu$ is the Lebesgue measure, and the functions $h_{j}$ are real-valued. The proof of the Menshov--Rademacher theorem given in \cite[Ch.~8, \S~1]{KashinSaakyan89} remains valid in the general situation that we consider (apparently, the most general case is treated in \cite{11MFAT4}). It is important that the Menshov--Rademacher theorem is precise. Menshov \cite{Menschoff23} gave an example of an orthonormal system $(h_{j})_{j=1}^{\infty}$ in $L_{2}((0,1))$ for which Theorem \ref{th7.1} will not be true if one replaces, in \eqref{eq7.1}, the sequence $(\log^{2}j)_{j=1}^{\infty}$ by any increasing sequence of positive numbers $\omega_{j}=o(\log^{2}j)$ with $j\rightarrow\infty$. This result is set forth, e.g., in the monograph \cite[Ch.~8, \S~1, Theorem~2]{KashinSaakyan89}. Note that, for series \eqref{eq7.2} with coefficients subject to \eqref{eq7.1}, the convergence almost everywhere need not be unconditional; see, e.g., \cite[Ch.~8, \S~2]{KashinSaakyan89}. 
Recall that a series of functions is said to be unconditionally convergent almost everywhere on a set if it remains convergent almost everywhere on the set after arbitrary permutation of its terms (the null measure set of divergence may vary.) The following proposition is a general version of the Orlicz theorem on unconditional convergence of orthogonal series of functions. \begin{theorem}\label{th7.2} Let a sequence of complex numbers $(a_{j})_{j=1}^{\infty}$ and increasing sequence of positive numbers $(\omega_{j})_{j=1}^{\infty}$ satisfy the conditions $$ \sum_{j=2}^{\infty}\;|a_{j}|^{2}\,(\log^{2}j)\,\omega_{j}<\infty,\quad \sum_{j=2}^{\infty}\;\frac{1}{j\,(\log j)\,\omega_{j}}<\infty. $$ Then the series \eqref{eq7.2} converges unconditionally almost everywhere on $\Gamma$. \end{theorem} This equivalent statement of W.~Orlicz' theorem \cite{Orlicz27} was given by P.L.~Uly\-a\-nov \cite[\S~4]{Uljanov63}; they considered the classical case mentioned above. In our (more general) case, Theorem 6.2 follows from K.~Tandori's theorem \cite{Tandori62}, which remains valid for arbitrary measure space \cite{11UMJ11, 11MFAT4}. As Tandori proved \cite{Tandori62}, the Orlicz theorem is the best possible in the sense that its condition on the sequence $(\omega_{j})_{j=1}^{\infty}$ cannot be weaken. \subsection{The convergence of spectral expansions}\label{sec7.2} Further, $\Gamma$ is a closed infinitely smooth oriented manifold, and $n=\dim\Gamma$. A $C^{\infty}$-density $dx$ is given on $\Gamma$ and defines the finite measure there. Let a PsDO $A\in\Psi^{r}_{\mathrm{ph}}(\Gamma)$, with $r>0$, be elliptic on~$\Gamma$. Suppose that $A$ is a normal (unbounded) operator on $L_{2}(\Gamma)=L_{2}(\Gamma,dx)$. Let $(h_{j})_{j=1}^{\infty}$ be a complete orthonormal system of eigenfunctions of this operator. They are enumerated so that $|\lambda_{j}|\leq|\lambda_{j+1}|$ for $j=1,2,3,\ldots$, where $Ah_{j}=\lambda_{j}h_{j}$. For an arbitrary function $f\in L_{2}(\Gamma)$, we consider its expansion into the Fourier series \eqref{eq6.5} with respect to the system $(h_{j})_{j=1}^{\infty}$. We say that the series \eqref{eq6.5} converges on a function class $X(\Gamma)$ in the indicated sense if, for every function $f\in X(\Gamma)$, the series converges to $f$ in the indicated manner. We investigate the convergence almost everywhere of the spectral expansion \eqref{eq6.5} with the help of Theorems \ref{th6.6}, \ref{th7.1}, and \ref{th7.2}. Let $\log^{\ast}t:=\max\{1,\log t\}$ for $t\geq1$. \begin{theorem}\label{th7.3} The series \eqref{eq6.5} converges almost everywhere on $\Gamma$ on the class $H^{0,\log^{\ast}}(\Gamma)$. \end{theorem} Indeed, if $A$ is a positive operator on $L_{2}(\Gamma)$, then by Theorem \ref{th6.6} we have $$ |c_{1}(f)|^{2}+ \sum_{j=2}^{\infty}\;|c_{j}(f)|^{2}\,\log^{2}j\;\asymp\; \|f\|_{H^{0,\log^{\ast}}(\Gamma)}^{2}<\infty\quad\mbox{for}\quad f\in H^{0,\log^{\ast}}(\Gamma). $$ This and Theorem \ref{th7.1} yields Theorem \ref{th7.3}. In general, if $A$ is a normal operator, we should exchange $A$ for the positive elliptic PsDO $B:=1+A^{\ast}A$ in our consideration and use the fact that $(h_{j})_{j=1}^{\infty}$ is a complete system of eigenfunctions of $B$. Similarly, if we bring together Theorems \ref{th6.6} and \ref{th7.2}, we will get the following result. \begin{theorem}\label{th7.4} Let an increasing function $\varphi\in\mathcal{M}$ be such that $$ \int\limits_{2}^{\infty}\frac{dt}{t\,(\log t)\,\varphi^{2}(t)}<\infty. 
$$ Then the series \eqref{eq6.5} converges unconditionally almost everywhere on $\Gamma$ on the class $H^{0,\varphi\log^{\ast}}(\Gamma)$. \end{theorem} Note that the applying of H\"ormander spaces permits us to use the conditions of Theorems \ref{th7.1} and \ref{th7.2} in an exhaustive manner. If we remain in the framework of the Sobolev scale, we deduce that the series \eqref{eq6.5} converges (unconditionally) almost everywhere on $\Gamma$ on the class $H^{0+}(\Gamma):= \bigcup_{\varepsilon>0}H^{\varepsilon}(\Gamma)$. The result is far rougher than those formulated in Theorems \ref{th7.3} and \ref{th7.4}. In the special case of $A=\Delta_{\Gamma}$, this result was proved by C.~Meaney \cite{Meaney82}. (As Meaney noted, it has "all the qualities of a folk theorem".) To compete this section we give a criterion for the convergence of the spectral expansions in the metrics of $C^{k}(\Gamma)$ on the classes $H^{s,\varphi}(\Gamma)$. \begin{theorem}\label{th7.5} Let an integer $k\geq0$ and function $\varphi\in\mathcal{M}$ be given. The series \eqref{eq6.5} converges in $C^{k}(\Gamma)$ on the class $H^{k+n/2,\varphi}(\Gamma)$ if and only if $\varphi$ satisfies \eqref{eq3.6}. \end{theorem} This theorem results from Corollary \ref{cor6.2} and Theorem \ref{th5.2} iii). Note that the convergence conditions in Theorems \ref{th7.3}, \ref{th7.4}, and \ref{th7.5} are given in constructive terms of regularity of functions. The regularity properties can be determined locally on $\Gamma$ according to Definition \ref{def5.1}. \section{H\"ormander spaces over Euclidean domains}\label{sec8} In the next sections, we will consider some applications of H\"ormander spaces to elliptic boundary problems in a bounded domain $\Omega\subset\mathbb{R}^{n}$. For this purpose we need the H\"ormander spaces that consists of distributions given in~$\Omega$. The spaces of distributions supported on the closure $\overline{\Omega}$ of the domain $\Omega$ is also of use. These spaces are constructed from the H\"ormander spaces over $\mathbb{R}^{n}$ in the standard way \cite[Ch.~1, \S~3]{VolevichPaneah65}. We are interested in the H\"ormander spaces that form the refined Sobolev scales over $\Omega$ and $\overline{\Omega}$. In this section, we give the definitions of these spaces and consider their properties, among them the interpolation properties being of great importance for us. We also study a connection between the refined Sobolev scales over $\Omega$ and its boundary (the trace theorems) and introduce riggings of $L_{2}(\Omega)$ with some H\"ormander spaces. \subsection{The definitions}\label{sec8.1} Let $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$. \begin{definition}\label{def8.1} Suppose that $Q$ is a nonempty closed set in $\mathbb{R}^{n}$. The linear space $H^{s,\varphi}_{Q}(\mathbb{R}^{n})$ is defined to consist of the distributions $u\in H^{s,\varphi}(\mathbb{R}^{n})$ such that $\mathrm{supp}\,u\subseteq Q$. The space $H^{s,\varphi}_{Q}(\mathbb{R}^{n})$ is endowed with the inner product and norm from $H^{s,\varphi}(\mathbb{R}^{n})$. \end{definition} The space $H^{s,\varphi}_{Q}(\mathbb{R}^{n})$ is complete (i.e., Hilbert) because of the continuous embedding of the Hilbert space $H^{s,\varphi}(\mathbb{R}^{n})$ into $\mathcal{S}'(\mathbb{R}^{n})$. \begin{definition}\label{def8.2} Suppose that $\Omega$ is a nonempty open set in $\mathbb{R}^{n}$. 
The linear space $H^{s,\varphi}(\Omega)$ is defined to consist of the restrictions $v=u\!\upharpoonright\!\Omega$ of all the distributions $u\in H^{s,\varphi}(\mathbb{R}^{n})$ to $\Omega$. The space $H^{s,\varphi}(\Omega)$ is endowed with the norm \begin{equation}\label{eq8.1} \|v\|_{H^{s,\varphi}(\Omega)}:= \inf\,\bigl\{\,\|u\|_{H^{s,\varphi}(\mathbb{R}^{n})}:\,u\in H^{s,\varphi}(\mathbb{R}^{n}),\;\;v=u\;\;\mbox{in}\;\;\Omega\,\bigr\}. \end{equation} \end{definition} By Definition \ref{def8.2}, $H^{s,\varphi}(\Omega)$ is a factor space $H^{s,\varphi}(\mathbb{R}^{n})/H^{s,\varphi}_{\widehat{\Omega}}(\mathbb{R}^{n})$, where $\widehat{\Omega}:=\mathbb{R}^{n}\setminus\Omega$. Hence, the space $H^{s,\varphi}(\Omega)$ is Hilbert; the norm \ref{def8.1} is induced by the inner product $$ \bigl(v_{1},v_{2}\bigr)_{H^{s,\varphi}(\Omega)}:= \bigl(u_{1}-\Pi u_{1},u_{2}-\Pi u_{2}\bigr)_{H^{s,\varphi}(\mathbb{R}^{n})}. $$ Here $u_{j}\in H^{s,\varphi}(\mathbb{R}^{n})$, $u_{j}=v_{j}$ in $\Omega$ for $j=1,\,2$, and $\Pi$ is the orthogonal projector of the space $H^{s,\varphi}(\mathbb{R}^{n})$ onto the subspace $H^{s,\varphi}_{\widehat{\Omega}}(\mathbb{R}^{n})$. Both the spaces $H^{s,\varphi}_{Q}(\mathbb{R}^{n})$ and $H^{s,\varphi}(\Omega)$ are separable. In the Sobolev case of $\varphi\equiv1$ we will omit the index $\varphi$ in the designations of these and other $H^{s,\varphi}$-type spaces. In what follows, we suppose that $\Omega$ is a bounded domain in $\mathbb{R}^{n}$ with $n\geq2$, and its boundary $\partial\Omega$ is an infinitely smooth closed manifold of the dimension $n-1$. (Note that domains are defined to be an open and connected sets.) Consider the classes of H\"ormander spaces \begin{equation}\label{eq8.2} \bigl\{H^{s,\varphi}(\Omega):\,s\in\mathbb{R},\,\varphi\in\mathcal{M}\,\bigr\}\quad \mbox{and}\quad \bigl\{H^{s,\varphi}_{\overline{\Omega}}(\mathbb{R}^{n}): \,s\in\mathbb{R},\,\varphi\in\mathcal{M}\,\bigr\}. \end{equation} The space $H^{s,\varphi}(\Omega)$ consists of distributions given in $\Omega$, whereas the space $H^{s,\varphi}_{\overline{\Omega}}(\mathbb{R}^{n})$ consists of distributions supported on $\overline{\Omega}$. \begin{definition}\label{def8.3} The classes appearing in \eqref{eq8.2} are called the refined Sobolev scales over $\Omega$ and $\overline{\Omega}$ respectively. \end{definition} \subsection{The interpolation properties}\label{sec8.2} The scales \eqref{eq8.2} have the interpolation properties analogous to those the refined Sobolev scale over $\mathbb{R}^{n}$ possesses. \begin{theorem}\label{th8.1} Let a function $\varphi\in\mathcal{M}$ and positive numbers $\varepsilon,\delta$ be given, and let the interpolation parameter $\psi\in\mathcal{B}$ be defined by \eqref{eq3.8}. Then, for each $s\in\mathbb{R}$, the following equalities of spaces with equivalence of norms in them hold: \begin{gather}\label{eq8.3} \bigl[H^{s-\varepsilon}(\Omega),H^{s+\delta}(\Omega)\bigl]_{\psi}=H^{s,\varphi}(\Omega)\\ \bigl[H^{s-\varepsilon}_{\overline{\Omega}}(\mathbb{R}^{n}), H^{s+\delta}_{\overline{\Omega}}(\mathbb{R}^{n})\bigr]_{\psi} =H^{s,\varphi}_{\overline{\Omega}}(\mathbb{R}^{n}).\label{eq8.4} \end{gather} \end{theorem} We will deduce this theorem from Theorem \ref{th3.4} with the help of the following result concerning the interpolation of subspaces and factor spaces. \begin{proposition}\label{prop8.1} Let $X=[X_{0},X_{1}]$ be an admissible couple of Hilbert spaces, and $Y_{0}$ be a subspace in $X_{0}$. Then $Y_{1}:=X_{1}\cap Y_{0}$ is a subspace in $X_{1}$. 
Suppose that there exists a linear mapping $P$ which is a projector of $X_{j}$ onto $Y_{j}$ for $j=0,\,1$. Then the couples $[Y_{0},Y_{1}]$ and $[X_{0}/Y_{0},X_{1}/Y_{1}]$ are admissible, and, for each interpolation parameter $\psi\in\mathcal{B}$, the following equalities of spaces up to equivalence of norms in them hold: $$ [Y_{0},Y_{1}]_{\psi}=X_{\psi}\cap Y_{0}, \quad [X_{0}/Y_{0},X_{1}/Y_{1}]_{\psi}=X_{\psi}/(X_{\psi}\cap Y_{0}). $$ Here $X_{\psi}\cap Y_{0}$ is a subspace in $X_{\psi}$. \end{proposition} Recall that, by definition, subspaces of a Hilbert space are closed, and projectors on subspaces are, generally, nonorthogonal. Proposition \ref{prop8.1} was proved in H.~Triebel's monograph \cite[Sec. 1.17]{Triebel95} for arbitrary interpolation functors given on the category of couples of Banach spaces. The proof for Hilbert spaces is quite similar. Let us explain how to prove Theorem \ref{th8.1}. It is known \cite[Sec. 2.10.4, Theorem~2]{Triebel95} that, for each integer $k>0$, there exists a linear mapping $P_{k,Q}$ which is a projector of every space $H^{\sigma}(\mathbb{R}^{n})$, with $|\sigma|<k$, onto the subspace $H^{\sigma}_{Q}(\mathbb{R}^{n})$, where $Q$ is a closed half-space in $\mathbb{R}^{n}$. Using the local coordinates methods and $P_{k,Q}$, we can construct a linear mapping that projects $H^{\sigma}(\mathbb{R}^{n})$ onto $H^{\sigma}_{\widehat{\Omega}}(\mathbb{R}^{n})$ (or onto $H^{\sigma}_{\overline{\Omega}}(\mathbb{R}^{n})$) with $|\sigma|<k$. Now Theorem \ref{th8.1} follows from Theorem \ref{th3.4} and Proposition~\ref{prop8.1}. Reasoning as above we can also deduce analogs of Theorem \ref{th3.5} (on interpolation) for the refined Sobolev scale given over $\Omega$ or $\overline{\Omega}$. We will not formulate them. \subsection{Embeddings and other properties}\label{sec8.3} Let us consider some other important properties of scales \eqref{eq8.2}. Among these properties, there are the following embeddings. \begin{theorem}\label{th8.2} Let $s\in\mathbb{R}$ and $\varphi,\varphi_{1}\in\mathcal{M}$. The next assertions are true: \begin{enumerate} \item[i)] The set $C^{\infty}(\,\overline{\Omega}\,)$ is dense in $H^{s,\varphi}(\Omega)$, whereas the set $C^{\infty}_{0}(\Omega)$ is dense in $H^{s,\varphi}_{\overline{\Omega}}(\mathbb{R}^{n})$. \item[ii)] For each $\varepsilon>0$, the dense compact embeddings hold: \begin{equation}\label{eq8.5} H^{s+\varepsilon,\,\varphi_{1}}(\Omega)\hookrightarrow H^{s,\varphi}(\Omega), \quad H^{s+\varepsilon,\,\varphi_{1}}_{\overline{\Omega}}(\mathbb{R}^{n})\hookrightarrow H^{s,\varphi}_{\overline{\Omega}}(\mathbb{R}^{n}). \end{equation} \item[iii)] The function $\varphi/\varphi_{1}$ is bounded in a neighbourhood of $+\infty$ if and only if the embeddings \eqref{eq8.5} are valid for $\varepsilon=0$. The embeddings are continuous and dense. They are compact if and only if $\varphi(t)/\varphi_{1}(t)\rightarrow0$ as $t\rightarrow+\infty$. \item[iv)] For every fixed integer $k\geq0$, the inequality \eqref{eq3.6} is equivalent to the embedding $H^{k+n/2,\varphi}(\Omega)\hookrightarrow C^{k}(\,\overline{\Omega}\,)$. This embedding is compact. \end{enumerate} \end{theorem} Assertion i) can be deduced directly from the Sobolev case of $\varphi\equiv1$ with the help of the interpolation Theorem \ref{th8.1}. Assertions ii)--iv) follow from Theorem \ref{th3.3} with the exception of the statements on compactness. 
The compactness of the embeddings is a consequences of the boundedness of $\Omega$ and can be proved similarly to the argument of Theorem \ref{th5.2}. The general analogs of assertions i)--iv) for the H\"ormander inner product spaces parametrized by arbitrary weight functions were proved by L.R.~Volevich and B.P.~Paneach in \cite[\S~3, 7, and 8]{VolevichPaneah65}. Further we examine the properties that exhibit a relation between the refined Sobolev scales over $\Omega$ and $\overline{\Omega}$. Denote by $H^{s,\varphi}_{0}(\Omega)$ the closure of $C^{\infty}_{0}(\Omega)$ in $H^{s,\varphi}(\Omega)$. We consider $H^{s,\varphi}_{0}(\Omega)$ as a Hilbert space with respect to the inner product in $H^{s,\varphi}(\Omega)$. \begin{theorem}\label{th8.3} Let $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$. The following assertions are true: \begin{enumerate} \item[i)] If $s<1/2$, then $C^{\infty}_{0}(\Omega)$ is dense in $H^{s,\varphi}(\Omega)$, and therefore $H^{s,\varphi}(\Omega)=H^{s,\varphi}_{0}(\Omega)$. \item[ii)] If $s>-1/2$ and $s+1/2\notin\mathbb{Z}$, then the restriction mapping $u\rightarrow u\!\upharpoonright\!\Omega$, $u\in\mathcal{D}'(\mathbb{R}^{n})$, establishes a homeomorphism of $H^{s,\varphi}_{\overline{\Omega}}(\mathbb{R}^{n})$ onto $H^{s,\varphi}_{0}(\Omega)$. \item[iii)] The spaces $H^{s,\varphi}(\Omega)$ and $H^{-s,1/\varphi}_{\overline{\Omega}}(\mathbb{R}^{n})$ are mutually dual with respect to the inner product in $L_{2}(\Omega)$. \item[iv)] Suppose that $s<1/2$ and $s-1/2\notin\mathbb{Z}$. Then the spaces $H^{s,\varphi}(\Omega)$ and $H^{-s,1/\varphi}_{0}(\Omega)$ are mutually dual, up to equivalence of norms, with respect to the inner product in $L_{2}(\Omega)$. Therefore the space $H^{s,\varphi}(\Omega)$ coincides, up to equivalence of norms, with the factor space $H^{s,\varphi}_{\overline{\Omega}}(\mathbb{R}^{n})/ H^{s,\varphi}_{\partial\Omega}(\mathbb{R}^{n})$ dual to $H^{-s,1/\varphi}_{0}(\Omega)$; i.e., $H^{s,\varphi}(\Omega)=\{u\!\upharpoonright\!\Omega:\,u\in H^{s,\varphi}_{\overline{\Omega}}(\mathbb{R}^{n})\}$. \end{enumerate} \end{theorem} This theorem is known in the Sobolev case of $\varphi\equiv1$; see, e.g., \cite[Sec. 4.3.2 and 4.8]{Triebel95}. In general, assertion i) follows from the Sobolev case in view of Theorem \ref{th8.2}~ii), where $\varphi_{1}\equiv1$; assertion ii) is deduced with the help of the interpolation Theorem \ref{th8.1}; assertion iii) results from Theorem \ref{th3.3} iv); finally, assertion iv) follows from ii) and iii). To deduce assertion ii) we need, besides \eqref{eq8.4}, the interpolation formula \begin{equation}\label{eq8.6} \bigl[H^{s-\varepsilon}_{0}(\Omega),H^{s+\delta}_{0}(\Omega)\bigl]_{\psi}= H^{s,\varphi}(\Omega)\cap H^{s-\varepsilon}_{0}(\Omega)=H^{s,\varphi}_{0}(\Omega). \end{equation} Here $\varepsilon$, $\delta$ are positive numbers such that $[s-\varepsilon,s+\delta]$ is disjoint from $\mathbb{Z}-1/2$, and $\psi$ is the interpolation parameter defined by \eqref{eq3.8}. Formula \eqref{eq8.6} follows from \eqref{eq8.3} and Proposition \ref{prop8.1} (interpolation of subspaces) because there exist a linear mapping that projects $H^{\sigma}(\Omega)$ onto $H^{\sigma}_{0}(\Omega)$ if $\sigma$ runs over $[s-\varepsilon,s+\delta]$. The mapping is constructed in \cite[Lemma 5.4.4 with regard for Theorem 4.7.1]{Triebel95}. Note that if $s$ is half-integer, then assertions ii) and iv) fail to hold at least for $\varphi\equiv1$ \cite[Sec. 4.3.2, Remark~2]{Triebel95}. 
\begin{remark}\label{rem8.1} In the literature, there are three different definitions of the Sobolev space of negative order $s$ over $\Omega$. The first of them coincides with Definition \ref{def8.2} for $\varphi\equiv1$ (\cite[Sec. A.4]{Grubb96} and \cite[Sec. 4.2.1]{Triebel95}). The second defines this space as the dual of $H^{-s}_{0}(\Omega)$ (\cite[Ch.~II, \S~1, Sec.~5]{FunctionalAnalysis72} and \cite[Ch.~1, Sec. 12.1]{LionsMagenes72}), whereas the third defines it as the dual of $H^{-s}(\Omega)$ (\cite[Ch. XIV, \S~3]{BerezanskySheftelUs96b} and \cite[Sec. 1.10]{Roitberg96}), the both duality being with respect to the inner product in $L_{2}(\Omega)$. By Theorem \ref{th8.3} iii) and iv), the first and second definitions are tantamount if (and only if) $s-1/2\notin\mathbb{Z}$, but the third gives $H^{s}_{\overline{\Omega}}(\mathbb{R}^{n})$ and, therefore, are not equivalent to them for $s<-1/2$. If $-1/2<s<0$, then all the three definitions are tantamount in view of Theorem \ref{th8.3} i) and ii). They are suitable in various situations appearing in the theory of elliptic boundary problems. The situations will occur below when we will investigate these problems in the refined Sobolev scale. We chose Definition \ref{def8.2} to introduce the H\"ormander spaces over $\Omega$ because it is universal; i.e., it allows us to define the space $X(\Omega)\hookrightarrow\mathcal{D}'(\Omega)$ if we have an arbitrary function Banach space $X(\mathbb{R}^{n})\hookrightarrow\mathcal{D}'(\mathbb{R}^{n})$ instead of $H^{s,\varphi}(\mathbb{R}^{n})$ (embeddings being continuous). \end{remark} \subsection{Traces}\label{sec8.4} We now study the traces of functions $f\in H^{s,\varphi}(\Omega)$ and their normal derivatives on the boundary $\partial\Omega$. The traces belong to certain spaces from the refined Sobolev scale on $\partial\Omega$. This scale was defined in Section \ref{sec5.1} because $\partial\Omega$ is a closed infinitely smooth oriented manifold (of dimension $n-1$). Further we use the notation $D_{\nu}:=i\,\partial/\partial\nu$, where $\nu$ is the field of unit vectors of inner normals to the boundary $\partial\Omega$; this field is given in a neighbourhood of $\partial\Omega$. (For us it will be more suitable to use $D_{\nu}$ instead of $\partial/\partial\nu$; see Sec. \ref{sec11} below.) \begin{theorem}\label{th8.4} Let an integer $r\geq1$, real number $s>r-1/2$, and function $\varphi\in\mathcal{M}$ be given. Then the mapping \begin{equation}\label{eq8.7} R_{r}:u\mapsto\bigl((D_{\nu}^{k-1}u)\!\upharpoonright\!\partial\Omega:\, k=1,\ldots,r\bigr),\quad u\in C^{\infty}(\,\overline{\Omega}), \end{equation} extends uniquely to a continuous linear operator $$ R_{r}:\,H^{s,\varphi}(\Omega)\rightarrow \bigoplus_{k=1}^{r}\,H^{s-k+1/2,\,\varphi}(\partial\Omega)=: \mathcal{H}_{s,\varphi}^{r}(\partial\Omega). $$ The operator \eqref{eq8.7} has a right inverse continuous linear operator $\Upsilon_{r}:\mathcal{H}_{s,\varphi}^{r}(\partial\Omega)\rightarrow H^{s,\varphi}(\Omega)$ such that the mapping $\Upsilon_{r}$ does not depend on $s$ and $\varphi$. \end{theorem} Theorem \ref{th8.4} is known in the Sobolev case of $\varphi\equiv1$; see, e.g., \cite[Ch.~1, Sec. 9.2]{LionsMagenes72} or \cite[Sec. 4.7.1]{Triebel95}. For arbitrary $\varphi\in\mathcal{M}$, the theorem follows from this case by the interpolation Theorems \ref{th5.3} and \ref{th8.1}. 
It useful to note that \begin{equation}\label{eq8.8} H^{s,\varphi}_{0}(\Omega)=\bigl\{u\in H^{s,\varphi}(\Omega):\,R_{r}u=0\bigr\}\quad\mbox{if}\quad r-1/2<s<r+1/2; \end{equation} here the integer $r\geq1$. This formula is known in the $\varphi\equiv1$ case, the equality $r+1/2=s$ being possible; see, e.g., \cite[Ch.~1, Sec. 11.4]{LionsMagenes72} or \cite[Sec. 4.7.1]{Triebel95}. In general, \eqref{eq8.8} follows from the Sobolev case by \eqref{eq8.6}. If $s>1/2$ and $\varphi\in\mathcal{M}$, then, by Theorem \ref{th8.4}, a trace $u\!\upharpoonright\!\partial\Omega:=R_{1}u$ of each function $u\in H^{s,\varphi}(\Omega)$ on the boundary $\partial\Omega$ exists and belongs to the space $H^{s-1/2,\varphi}(\partial\Omega)$. Moreover, we get the following description of this space in terms of traces. Put $\sigma:=s-1/2$. \begin{corollary}\label{cor8.1} Let $\sigma>0$ and $\varphi\in\mathcal{M}$. Then \begin{gather*} H^{\sigma,\varphi}(\partial\Omega)=\{g:=u\!\upharpoonright\!\partial\Omega:\, u\in H^{\sigma+1/2,\varphi}(\Omega)\},\\ \|g\|_{H^{\sigma,\varphi}(\partial\Omega)}\asymp \inf\,\bigl\{\|u\|_{H^{\sigma+1/2,\varphi}(\Omega)}: \,u\!\upharpoonright\!\partial\Omega=g\bigr\}. \end{gather*} \end{corollary} If $s<1/2$ and $\varphi\in\mathcal{M}$, then the trace mapping \begin{equation}\label{eq8.9} R_{1}:u\mapsto u\!\upharpoonright\!\partial\Omega,\quad u\in C^{\infty}(\,\overline{\Omega}), \end{equation} has not a continuous extension $R_{1}:H^{s,\varphi}(\Omega)\rightarrow\mathcal{D}'(\partial\Omega)$. Indeed, if this extension existed, we would get, by Theorem \ref{th8.3} i), the equality $R_{1}u=0$ on $\partial\Omega$ for each $u\in H^{s,\varphi}(\Omega)$. But this equality fails to hold, e.g., for the function $u\equiv1$. In the limiting case of $s=1/2$, we have the next criterion for the trace operator $R_{1}$ to be well defined on $H^{1/2,\varphi}(\Omega)$. \begin{theorem}\label{th8.5} Let $\varphi\in\mathcal{M}$. The following assertions are true: \begin{enumerate} \item[i)] The function $\varphi$ satisfies \eqref{eq3.6} if an only if the mapping \eqref{eq8.9} is a continuous operator from the space $C^{\infty}(\,\overline{\Omega})$ endowed with the topology of $H^{1/2,\varphi}(\Omega)$ to the space $\mathcal{D}'(\partial\Omega)$. \item[ii)] Moreover, if $\varphi$ satisfies \eqref{eq3.6}, then the mapping \eqref{eq8.9} extends uniquely to a continuous linear operator $R_{1}:H^{1/2,\varphi}(\Omega)\rightarrow H^{0,\varphi_{0}}(\partial\Omega)$, where $\varphi_{0}\in\mathcal{M}$ is given by the formula \begin{equation}\label{eq8.10} \varphi_{0}(\tau):= \biggl(\:\int\limits_{\tau}^{\infty}\frac{d\,t}{t\,\varphi^{2}(t)}\:\biggr)^{-1/2} \quad\mbox{for}\quad\tau\geq1. \end{equation} This operator has a right inverse continuous linear operator $$ \Upsilon_{1,\varphi}:H^{0,\varphi_{0}}(\partial\Omega)\rightarrow H^{1/2,\varphi}(\Omega), $$ the map $\Upsilon_{1,\varphi}$ depending on $\varphi$. \end{enumerate} \end{theorem} Theorem \ref{th8.5} follows from the trace theorems proved by L.~H\"ormander \cite[Sec. 2.2, Theorem 2.2.8]{Hermander63} and L.R.~Volevich, B.P.~Paneah \cite[\S~6, Theorems 6.1 and 6.2]{VolevichPaneah65}. Indeed, consider a H\"ormander space $B_{p,\mu}(\mathbb{R}^{n})$ for some weight function $\mu$, and let $U$ be a neighbourhood of the origin in $\mathbb{R}^{n}$. We write points $x\in\mathbb{R}^{n}$ as $x=(x',x_{n})$ with $x'\in\mathbb{R}^{n-1}$ and $x_{n}\in\mathbb{R}$. 
According to the trace theorems, the condition \begin{equation}\label{eq8.11} \nu^{-2}(\xi\,'):= \int\limits_{-\infty}^{\infty}\mu^{-2}(\xi',\xi_{n})\,d\xi_{n} <\infty\quad\mbox{for all}\quad\xi'\in\mathbb{R}^{n-1} \end{equation} holds true if and only if the mapping $u(x',x_{n})\mapsto u(x',0)$ is a continuous operator from the space $C^{\infty}_{0}(U)$ endowed with the topology of $B_{2,\mu}(\mathbb{R}^{n})$ to the space $\mathcal{D}'(\mathbb{R}^{n-1})$. (In \eqref{eq8.11} the phrase `for all' can be replaced with `for a certain'.) Moreover, if \eqref{eq8.11} holds and $U=\mathbb{R}^{n}$, then this mapping extends by continuity to a bounded operator from $B_{2,\mu}(\mathbb{R}^{n})$ to $B_{2,\nu}(\mathbb{R}^{n-1})$; this operator has a bounded linear right inverse. Hence, setting $\mu(\xi):=\langle\xi\rangle^{1/2}\varphi(\langle\xi\rangle)$ and observing that \eqref{eq8.11} $\Leftrightarrow$ \eqref{eq3.6} with $\nu(\xi')\asymp\varphi_{0}(\langle\xi'\rangle)$, we can deduce Theorem \ref{th8.5} with the help of the local coordinates method. Let us note that the domain $\Omega$ is a special case of an infinitely smooth compact manifold with boundary. The refined Sobolev scale over such a manifold was introduced and investigated by the authors in \cite[Sec.~3]{06UMJ3}. Specifically, the interpolation Theorem \ref{th8.1} and the trace Theorems \ref{th8.4} (for $r=1$) and \ref{th8.5} were proved there. \subsection{Riggings} We recall the important notion of a Hilbert rigging, which has various applications, specifically, in the theory of elliptic operators; see, e.g., \cite[Ch.~I, \S~1]{Berezansky68} and \cite[Ch.~XIV, \S~1]{BerezanskySheftelUs96b}. Let $H$ and $H_{+}$ be Hilbert spaces such that the dense continuous embedding $H_{+}\hookrightarrow H$ holds. Denote by $H_{-}$ the completion of $H$ with respect to the norm $$ \|f\|_{H_{-}}:=\sup\,\biggl\{\,\frac{|(f,u)_{H}|}{\|u\|_{H_{+}}}:\,u\in H_{+}\,\biggr\},\quad f\in H. $$ The following facts are known: this norm is Hilbert, and so is the space $H_{-}$; the spaces $H_{+}$ and $H_{-}$ are mutually dual with respect to the inner product in $H$; and the dense continuous embeddings $H_{+}\hookrightarrow H\hookrightarrow H_{-}$ hold. \begin{definition}\label{def8.4} The chain $H_{-}\hookleftarrow H\hookleftarrow H_{+}$ is said to be a (Hilbert) rigging of $H$ with $H_{+}$ and $H_{-}$. In this rigging, $H_{-}$, $H$ and $H_{+}$ are called the negative, zero, and positive spaces respectively. \end{definition} According to Theorem \ref{th8.3} iii) we have the following rigging of $L_{2}(\Omega)$ with some H\"ormander spaces: \begin{equation}\label{eq8.12} H^{-s,1/\varphi}_{\overline{\Omega}}(\mathbb{R}^{n})\hookleftarrow L_{2}(\Omega)\hookleftarrow H^{s,\varphi}(\Omega),\quad s>0,\;\varphi\in\mathcal{M}. \end{equation} Here we naturally identify $L_{2}(\Omega)$ with $H^{0}_{\overline{\Omega}}(\mathbb{R}^{n})$ (extending the functions $f\in L_{2}(\Omega)$ by zero). In some applications to elliptic problems, it is useful to have a scale consisting of both the negative and the positive spaces in \eqref{eq8.12}. For this purpose we introduce the uniform notation \begin{equation}\label{eq8.13} H^{s,\varphi,(0)}(\Omega):= \begin{cases} \;H^{s,\varphi}(\Omega)\;\; & \text{for}\;\;s\geq0, \\ \;H^{s,\varphi}_{\overline{\Omega}}(\mathbb{R}^{n}) & \text{for}\;\;s<0, \end{cases} \end{equation} with $\varphi\in\mathcal{M}$, and form the scale of Hilbert spaces \begin{equation}\label{eq8.14} \bigl\{H^{s,\varphi,(0)}(\Omega):\,s\in\mathbb{R},\,\varphi\in\mathcal{M}\,\bigr\}.
\end{equation} In the Sobolev case of $\varphi\equiv1$ the rigging \eqref{eq8.12} and the scale of spaces $H^{s,(0)}(\Omega):=H^{s,1,(0)}(\Omega)$, $s\in\mathbb{R}$, were used by Yu.M.~Berezansky, S.G.~Krein, Ya.A.~Roitberg \cite{Berezansky68, BerezanskyKreinRoitberg63, BerezanskySheftelUs96b, Roitberg64} and M.~Schechter \cite{Schechter63} in the elliptic theory. (They also considered the Banach $L_{p}$-analogs of these spaces with $1<p<\infty$ and denoted the negative spaces in the same manner as the positive ones but with a negative index $s$, e.g. $H^{s}(\Omega)$; see Remark \ref{rem8.1} above.) Properties of the scale \eqref{eq8.14} are inherited from the refined Sobolev scales over $\Omega$ and $\overline{\Omega}$. Now we dwell on the properties that link the negative and positive spaces to each other. When dealing with the scale \eqref{eq8.14}, it is convenient to identify each function $f\in C^{\infty}(\,\overline{\Omega}\,)$ with its extension by zero \begin{equation}\label{eq8.15} \mathcal{O}f(x):= \begin{cases} \;f(x) &\;\; \text{for}\;\; x\in\overline{\Omega}, \\ \;0 &\;\; \text{for}\;\; x\in\mathbb{R}^{n}\setminus\overline{\Omega}. \end{cases} \end{equation} This extension is a regular distribution belonging to $H^{s,\varphi}_{\overline{\Omega}}(\mathbb{R}^{n})$ for $s<0$. Moreover, the set $C^{\infty}(\,\overline{\Omega}\,)$ is dense in every space $H^{s,\varphi,(0)}(\Omega)$ from \eqref{eq8.14} in view of Theorem \ref{th8.2} i). This allows us to consider the continuous embeddings of spaces pertaining to \eqref{eq8.14} and viewed as the completions of the same linear manifold, $C^{\infty}(\,\overline{\Omega}\,)$, with respect to different norms. (The general theory of such embeddings is presented in \cite[Ch. XIV, \S~7]{BerezanskySheftelUs96b}.) So, by Theorem \ref{th8.2} ii) and formula \eqref{eq8.17} given below, we have the dense compact embeddings in the scale \eqref{eq8.14}: \begin{equation}\label{eq8.16} H^{s_{1},\varphi_{1},(0)}(\Omega)\hookrightarrow H^{s,\varphi,(0)}(\Omega),\quad-\infty<s<s_{1}<\infty\;\;\mbox{and} \;\;\varphi,\varphi_{1}\in\mathcal{M}. \end{equation} Note that in the $|s|<1/2$ case the spaces $H^{s,\varphi}(\Omega)$ and $H^{s,\varphi}_{\overline{\Omega}}(\mathbb{R}^{n})$ can be considered as completions of $C^{\infty}_{0}(\Omega)$ with respect to equivalent norms in view of Theorem \ref{th8.3} i) and ii). Hence, up to equivalence of norms, \begin{equation}\label{eq8.17} H^{s,\varphi}_{\overline{\Omega}}(\mathbb{R}^{n})=H^{s,\varphi,(0)}(\Omega)= H^{s,\varphi}(\Omega)\quad\mbox{for}\;\;|s|<1/2,\;\;\varphi\in\mathcal{M}. \end{equation} It follows from this result and Theorem \ref{th8.3} iii) that the spaces $H^{s,\varphi,(0)}(\Omega)$ and $H^{-s,1/\varphi,(0)}(\Omega)$ are mutually dual with respect to the inner product in $L_{2}(\Omega)$ for every $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$, the duality being up to equivalence of norms if $s=0$. The scale \eqref{eq8.14} has an interpolation property analogous to that stated in Theorem \ref{th8.1}. \begin{theorem}\label{th8.6} Under the conditions of Theorem $\ref{th8.1}$ we have \begin{equation}\label{eq8.18} \bigl[H^{s-\varepsilon,(0)}(\Omega),H^{s+\delta,(0)}(\Omega)\bigr]_{\psi}= H^{s,\varphi,(0)}(\Omega)\quad\mbox{for all}\;\;s\in\mathbb{R} \end{equation} up to equivalence of norms in the spaces. \end{theorem} If $s-\varepsilon>-1/2$ or $s+\delta<1/2$, then \eqref{eq8.18} holds by Theorem \ref{th8.1} and \eqref{eq8.17}. If $\varphi\equiv1$, then \eqref{eq8.18} is proved in \cite[Ch.~1, Sec. 12.5, Theorem 12.5]{LionsMagenes72}.
The general case can be reduced to the previous ones by reiterated interpolation. \section{Elliptic boundary-value problems on the one-sided scale}\label{sec9} In Sections \ref{sec9}--\ref{sec12}, we will investigate regular elliptic boundary-value problems on various scales of H\"ormander spaces. We begin with the one-sided refined Sobolev scale consisting of the spaces $H^{s,\varphi}(\Omega)$ for sufficiently large~$s$. \subsection{The statement of the boundary-value problem}\label{sec9.1} Recall that $\Omega$ is a boun\-ded domain in $\mathbb{R}^{n}$ with $n\geq2$, and its boundary $\partial\Omega$ is an infinitely smooth closed manifold of dimension $n-1$. Let $\nu(x)$ denote the unit vector of the inner normal to $\partial\Omega$ at a point $x\in\partial\Omega$. We consider the following boundary-value problem in $\Omega$: \begin{gather}\label{eq9.1} L\,u\equiv\sum_{|\mu|\leq2q}\,l_{\mu}(x)\,D^{\mu}u=f\quad\mbox{in}\;\;\Omega,\\ B_{j}\,u\equiv\sum_{|\mu|\leq m_{j}}\,b_{j,\mu}(x)\,D^{\mu}u= g_{j}\quad\mbox{on}\;\;\partial\Omega,\quad j=1,\ldots,q. \label{eq9.2} \end{gather} Here $L=L(x,D)$, $x\in\overline{\Omega}$, and $B_{j}=B_{j}(x,D)$, $x\in\partial\Omega$, are linear partial differential expressions with complex-valued coefficients $l_{\mu}\in C^{\infty}(\,\overline{\Omega}\,)$ and $b_{j,\mu}\in C^{\infty}(\partial\Omega)$. We suppose that $\mathrm{ord}\,L=2q$ is an even positive number and $\mathrm{ord}\,B_{j}=m_{j}\leq2q-1$ for all $j=1,\ldots,q$. Set $B:=(B_{1},\ldots,B_{q})$ and $m:=\max\,\{m_{1},\ldots,m_{q}\}$. Note that we use the standard notation in \eqref{eq9.1} and \eqref{eq9.2}; namely, for a multi-index $\mu=(\mu_{1},\ldots,\mu_{n})$ we let $|\mu|:=\mu_{1}+\ldots+\mu_{n}$ and $D^{\mu}:=D_{1}^{\mu_{1}}\ldots D_{n}^{\mu_{n}}$, with $D_{k}:=i\,\partial/\partial x_{k}$ for $k=1,\ldots,n$ and $x=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}$. \begin{lemma}\label{lem9.1} Let $s>m+1/2$ and $\varphi\in\mathcal{M}$. Then the mapping \begin{equation}\label{eq9.3} (L,B):\,u\mapsto(Lu,B_{1}u,\ldots,B_{q}u),\quad u\in C^{\infty}(\,\overline{\Omega}\,), \end{equation} extends uniquely to a continuous linear operator \begin{equation}\label{eq9.4} (L,B):\,H^{s,\varphi}(\Omega)\rightarrow H^{s-2q,\,\varphi}(\Omega)\oplus\bigoplus_{j=1}^{q}\, H^{s-m_{j}-1/2,\,\varphi}(\partial\Omega)=: \mathcal{H}_{s,\varphi}(\Omega,\partial\Omega). \end{equation} \end{lemma} Note that the differential operator $L$ maps $H^{s,\varphi}(\Omega)$ continuously into $H^{s-2q,\varphi}(\Omega)$ for each real~$s$, whereas the boundary differential operator $B_{j}$ maps $H^{s,\varphi}(\Omega)$ continuously into $H^{s-m_{j}-1/2,\varphi}(\partial\Omega)$ provided that $s>m_{j}+1/2$. This is well known in the Sobolev case of $\varphi\equiv1$ (see, e.g., \cite[Sec. B.2]{Hermander85}), whence the case of an arbitrary $\varphi\in\mathcal{M}$ is obtained by interpolation in view of Theorems \ref{th5.3} and \ref{th8.1}. The mentioned continuity of $B_{j}$ also results from the trace Theorem \ref{th8.4} (the $r=1$ case). If $s\leq m+1/2$, then Lemma \ref{lem9.1} fails to hold (see Section \ref{sec8.4}). In the limiting case of $s=m+1/2$ an analog of the lemma is valid provided that the function $\varphi$ satisfies \eqref{eq3.6} and the space $\mathcal{H}_{s,\varphi}(\Omega,\partial\Omega)$ is replaced with another one (see Section \ref{sec9.3} below). We will investigate properties of the operator \eqref{eq9.4} under the assumption that the boundary-value problem \eqref{eq9.1}, \eqref{eq9.2} is regular elliptic in $\Omega$.
Recall some relevant notions; see, e.g., \cite[Ch.~III, \S~6]{FunctionalAnalysis72} or \cite[Ch.~2, Sec. 1 and 2]{LionsMagenes72}. The principal symbols of the partial differential expressions $L(x,D)$, with $x\in\overline{\Omega}$, and $B_{j}(x,D)$, with $x\in\partial\Omega$, are defined as follows: $$ L^{(0)}(x,\xi):=\sum_{|\mu|=2q}l_{\mu}(x)\,\xi^{\mu},\quad\quad B_{j}^{(0)}(x,\xi):=\sum_{|\mu|=\,m_{j}}b_{j,\mu}(x)\,\xi^{\mu}. $$ They are homogeneous polynomials in $\xi=(\xi_{1},\ldots,\xi_{n})\in\mathbb{C}^n$; here, as usual, $\xi^{\mu}:=\xi_{1}^{\mu_{1}}\ldots\xi_{n}^{\mu_{n}}$. \begin{definition}\label{def9.1} The boundary-value problem \eqref{eq9.1}, \eqref{eq9.2} is said to be regular (or normal) elliptic in $\Omega$ if the following conditions are satisfied: \begin{enumerate} \item[i)] The expression $L$ is properly elliptic on $\overline{\Omega}$; i.e., for each point $x\in\overline{\Omega}$ and for all linearly independent vectors $\xi',\xi''\in\mathbb{R}^{n}$, the polynomial $L^{(0)}(x,\xi'+\tau\xi'')$ in $\tau\in\mathbb{C}$ has $q$ roots $\tau^{+}_{j}(x;\xi',\xi'')$, $j=1,\ldots,q$, with positive imaginary part and $q$ roots with negative imaginary part, each root being counted according to its multiplicity. \item[ii)] The system $\{B_{1},\ldots,B_{q}\}$ satisfies the Lopatinsky condition with respect to $L$ on $\partial\Omega$; i.e., for an arbitrary point $x\in\partial\Omega$ and for each vector $\xi\neq0$ tangent to $\partial\Omega$ at $x$, the polynomials $B_{j}^{(0)}(x,\xi+\tau\nu(x))$, $j=1,\ldots,q$, in $\tau$ are linearly independent modulo $\prod_{j=1}^{q}\bigl(\tau-\tau^{+}_{j}(x;\xi,\nu(x))\bigr)$. \item[iii)] The system $\{B_{1},\ldots,B_{q}\}$ is normal; i.e., the orders $m_{j}$ are pairwise distinct, and $B_{j}^{(0)}(x,\nu(x))\neq\nobreak0$ for each $x\in\partial\Omega$. \end{enumerate} \end{definition} \begin{remark}\label{rem9.1} It follows from condition i) that $L^{(0)}(x,\xi)\neq0$ for each point $x\in\overline{\Omega}$ and every nonzero vector $\xi\in\mathbb{R}^{n}$, i.e., $L$ is elliptic on $\overline{\Omega}$. If $n\geq3$, then the ellipticity condition is equivalent to i). The equivalence also holds if $n=2$ and all coefficients of $L$ are real-valued. Note also that there are various equivalent statements of the Lopatinsky condition; see \cite[Sec. 1.2 and 1.3]{Agranovich97}. \end{remark} \begin{example}\label{ex9.1} Let $L$ satisfy condition i), and let $B_{j}u:=\partial^{k+j-1}u/\partial\nu^{k+j-1}$, with $j=1,\ldots,q$, for some $k\in\mathbb{Z}$, $0\leq k\leq q$. Then the boundary-value problem \eqref{eq9.1}, \eqref{eq9.2} is regular elliptic. If $k=0$, we have the Dirichlet boundary-value problem. \end{example} In what follows the boundary-value problem \eqref{eq9.1}, \eqref{eq9.2} is supposed to be regular elliptic in $\Omega$. To describe the range of the operator \eqref{eq9.4} we consider the boundary-value problem \begin{gather}\label{eq9.5} L^{+}v\equiv\sum_{|\mu|\leq\,2q}D^{\mu}(\overline{l_{\mu}(x)}\,v)=\omega \quad\mbox{in}\;\;\Omega,\\ B^{+}_{j}v=h_{j}\quad\mbox{on}\;\;\partial\Omega,\quad j=1,\ldots,q, \label{eq9.6} \end{gather} which is formally adjoint to the problem \eqref{eq9.1}, \eqref{eq9.2} with respect to the Green formula \begin{equation}\label{eq9.7} (Lu,v)_{\Omega}+\sum_{j=1}^{q}\;(B_{j}u,\,C_{j}^{+}v)_{\partial\Omega} =(u,L^{+}v)_{\Omega}+\sum_{j=1}^{q}\;(C_{j}u,\,B_{j}^{+}v)_{\partial\Omega}, \end{equation} where $u,v\in C^{\infty}(\,\overline{\Omega}\,)$.
Here $\{B^{+}_{j}\}$, $\{C_{j}\}$, $\{C^{+}_{j}\}$ are some normal systems of linear partial differential boundary expressions with coefficients from $C^{\infty}(\partial\Omega)$; the orders of the expressions $B_{j}^{\pm}$, $C_{j}^{\pm}$, $j=1,\ldots,q$, run over the set $\{0,1,\ldots,2q-1\}$ and satisfy the equality $$ \mathrm{ord}\,B_{j}+\mathrm{ord}\,C^{+}_{j}= \mathrm{ord}\,C_{j}+\mathrm{ord}\,B^{+}_{j}=2q-1. $$ We denote $m_{j}^{+}:=\mathrm{ord}\,B_{j}^{+}$. In \eqref{eq9.7} and below, the notations $(\cdot,\cdot)_{\Omega}$ and $(\cdot,\cdot)_{\partial\Omega}$ stand for the inner products in the spaces $L_{2}(\Omega)$ and $L_{2}(\partial\Omega)$ respectively, as well as for the extensions of these products by continuity. The expression $L^{+}$ is said to be formally adjoint to $L$, whereas the system $\{B^{+}_{j}\}$ is said to be adjoint to $\{B_{j}\}$ with respect to $L$. Note that $\{B^{+}_{j}\}$ is not uniquely defined. We set \begin{gather*} \mathcal{N}:=\{u\in C^{\infty}(\,\overline{\Omega}\,):\,Lu=0\;\,\mbox{in}\;\,\Omega,\; B_{j}u=0\;\,\mbox{on}\;\,\partial\Omega\;\,\mbox{for all}\;\,j=1,\ldots,q\},\\ \mathcal{N}^{+}:=\{v\in C^{\infty}(\,\overline{\Omega}\,):\,L^{+}v=0\;\,\mbox{in}\;\,\Omega,\; B^{+}_{j}v=0\;\,\mbox{on}\;\,\partial\Omega\;\,\mbox{for all}\;\,j=1,\ldots,q\}. \end{gather*} Since both the problems \eqref{eq9.1}, \eqref{eq9.2} and \eqref{eq9.5}, \eqref{eq9.6} are regular elliptic, both the spaces $\mathcal{N}$ and $\mathcal{N}^{+}$ are finite-dimensional. Note that the space $\mathcal{N}^{+}$ does not depend on the choice of the system $\{B^{+}_{j}\}$ adjoint to $\{B_{j}\}$. \begin{example}\label{ex9.2} If the boundary-value problem \eqref{eq9.1}, \eqref{eq9.2} is Dirichlet, then the formally adjoint problem is also Dirichlet, with $\dim\mathcal{N}=\dim\mathcal{N}^{+}$. \end{example} \subsection{The operator properties}\label{sec9.2} Now we investigate properties of the operator \eqref{eq9.4} corresponding to the regular elliptic boundary-value problem \eqref{eq9.1}, \eqref{eq9.2}. \begin{theorem}\label{th9.1} Let $s>m+1/2$ and $\varphi\in\mathcal{M}$. Then the bounded linear operator \eqref{eq9.4} is Fredholm, its kernel coincides with $\mathcal{N}$, and its range consists of all the vectors $(f,g_{1},\ldots,g_{q})\in\mathcal{H}_{s,\varphi}(\Omega,\partial\Omega)$ such that \begin{equation}\label{eq9.8} (f,v)_{\Omega}+\sum_{j=1}^{q}\,(g_{j},C^{+}_{j}v)_{\partial\Omega}=0\quad\mbox{for all}\quad v\in \mathcal{N}^{+}. \end{equation} The index of \eqref{eq9.4} is $\dim\mathcal{N}-\dim\mathcal{N}^{+}$ and does not depend on $s$ and $\varphi$. \end{theorem} In the Sobolev case of $\varphi\equiv1$ Theorem \ref{th9.1} is a classical result if $s\geq2q$; see, e.g., \cite[Ch.~III, \S~6, Subsec.~4]{FunctionalAnalysis72} or \cite[Ch.~2, Sec.~5.4]{LionsMagenes72}. If $m+1/2<s<2q$, then this theorem is also true \cite[Ch.~III, Sec. 2.2]{Egorov86}. For an arbitrary $\varphi\in\mathcal{M}$ the theorem can be deduced from the Sobolev case by the interpolation with a function parameter if we apply Proposition \ref{prop6.1} and Theorems \ref{th5.3} and \ref{th8.1}. \begin{remark}\label{rem9.2} G.~Slenzak \cite{Slenzak74} proved an analog of Theorem \ref{th9.1} for a different scale of H\"ormander inner product spaces. These spaces are not attached to the Sobolev scale by means of a number parameter, and the class of the weight functions used by Slenzak is not described constructively. \end{remark} \begin{theorem}\label{th9.2} Let $s>m+1/2$, $\varphi\in\mathcal{M}$, and $\sigma<s$.
Then the following a~priori estimate holds: $$ \|u\|_{H^{s,\varphi}(\Omega)}\leq c\,\bigl(\,\|(L,B)u\|_{\mathcal{H}_{s,\varphi}(\Omega,\partial\Omega)}+ \|u\|_{H^{\sigma,\varphi}(\Omega)}\,\bigr)\quad\mbox{for all}\quad u\in H^{s,\varphi}(\Omega); $$ here the number $c>0$ is independent of $u$. \end{theorem} This theorem follows from the Fredholm property of the operator \eqref{eq9.4} in view of Proposition \ref{prop6.2} and the compactness of the embedding $H^{s,\varphi}(\Omega)\hookrightarrow H^{\sigma,\varphi}(\Omega)$ for $\sigma<s$. Now we study the local smoothness of a solution $u$ to the boundary-value problem \eqref{eq9.1}, \eqref{eq9.2} in the refined Sobolev scale. The relevant property will be formulated as a theorem on the local increase in smoothness. Let $U$ be an open set in $\mathbb{R}^{n}$; we put $\Omega_{0}:=U\cap\Omega\neq\varnothing$ and $\Gamma_{0}:=U\cap\partial\Omega$ (the case where $\Gamma_{0}=\varnothing$ is possible). We introduce the following local analog of the space $H^{s,\varphi}(\Omega)$ with $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$: \begin{gather*} H^{s,\varphi}_{\mathrm{loc}}(\Omega_{0},\Gamma_{0}):= \bigl\{u\in\mathcal{D}'(\Omega):\chi\,u\in H^{s,\varphi}(\Omega)\\ \mbox{for all}\;\;\chi\in C^{\infty}(\overline{\Omega})\;\;\mbox{with}\;\; \mathrm{supp}\,\chi\subseteq\Omega_{0}\cup\Gamma_{0}\bigr\}. \end{gather*} The other local space that we need, $H^{s,\varphi}_{\mathrm{loc}}(\Gamma_{0})$, was defined in \eqref{eq6.3}. \begin{theorem}\label{th9.3} Let $s>m+1/2$ and $\eta\in\mathcal{M}$. Suppose that the distribution $u\nobreak\in H^{s,\eta}(\Omega)$ is a solution to the boundary-value problem \eqref{eq9.1}, \eqref{eq9.2}, with $$ f\in H^{s-2q+\varepsilon,\varphi}_{\mathrm{loc}}(\Omega_{0},\Gamma_{0}) \quad\mbox{and}\quad g_{j}\in H^{s-m_{j}-1/2+\varepsilon,\varphi}_{\mathrm{loc}}(\Gamma_{0}), \;\;j=1,\ldots,q, $$ for some $\varepsilon\geq0$ and $\varphi\in\mathcal{M}$. Then $u\in H^{s+\varepsilon,\varphi}_{\mathrm{loc}}(\Omega_{0},\Gamma_{0})$. \end{theorem} In the special case where $\Omega_{0}=\Omega$ and $\Gamma_{0}=\partial\Omega$ we have the global increase in smoothness (i.e., the increase in the whole domain $\Omega$ up to its boundary). This case follows from Theorem \ref{th9.1}. Indeed, since the vector $(f,g)\in\mathcal{H}_{s+\varepsilon,\varphi}(\Omega,\partial\Omega)$ satisfies \eqref{eq9.8}, we can write $(L,B)v=(f,g)$ for some $v\in H^{s+\varepsilon,\varphi}(\Omega)$, whence $u-v\in\mathcal{N}$ and $u\in H^{s+\varepsilon,\varphi}(\Omega)$; here $g:=(g_{1},\ldots,g_{q})$. In general, we can deduce Theorem \ref{th9.3} from the above case by reasoning similar to the proof of Theorem~\ref{th4.2}. Note that if $\Gamma_{0}=\varnothing$, then we get an interior increase in smoothness (in neighbourhoods of interior points of $\Omega$). Theorem \ref{th9.3} specifies, with regard to the refined Sobolev scale, the classical results on the local smoothness of solutions to elliptic boundary-value problems \cite[Ch. 3, Sec.~4]{Berezansky68}, \cite{Browder56, Nirenberg55, Schechter61}. Theorems \ref{th9.3} and \ref{th8.2} iv) imply the following sufficient condition for the solution $u$ to be classical. \begin{theorem}\label{th9.4} Let $s>m+1/2$ and $\eta\in\mathcal{M}$.
Suppose that the distribution $u\nobreak\in H^{s,\eta}(\Omega)$ is a solution to the boundary-value problem \eqref{eq9.1}, \eqref{eq9.2}, where \begin{gather*} f\in H^{n/2,\,\varphi}_{\mathrm{loc}}(\Omega,\varnothing)\cap H^{m-2q+n/2,\,\varphi}(\Omega),\\ g_{j}\in H^{m-m_{j}+(n-1)/2,\,\varphi}(\partial\Omega),\;\;j=1,\ldots,q, \end{gather*} for some $\varphi\in\mathcal{M}$. If $\varphi$ satisfies \eqref{eq3.6}, then the solution $u$ is classical, i.e. $u\in C^{2q}(\Omega)\cap C^{m}(\,\overline{\Omega}\,)$. \end{theorem} Note that the condition \eqref{eq3.6} is not only sufficient for $u$ to be a classical solution but also necessary on the class of all the solutions $u$ under consideration. This follows from Theorem \ref{th3.1}. \subsection{The limiting case}\label{sec9.3} This case is $s=m+1/2$. We examine it by the example of the Dirichlet problem for the Laplace equation: $$ \Delta\,u=f\;\;\mbox{in}\;\;\Omega,\quad R_{1}u:=u\!\upharpoonright\!\partial\Omega=g. $$ This problem is regular elliptic, with $m=0$. Let $\varphi\in\mathcal{M}$. By Theorem \ref{th8.5}, the mapping $u\mapsto(\Delta\,u,R_{1}u)$, $u\in C^{\infty}(\,\overline{\Omega}\,)$, extends uniquely to a continuous linear operator from $H^{1/2,\varphi}(\Omega)$ to $H^{-3/2,\varphi}(\Omega)\times\mathcal{D}'(\partial\Omega)$ if and only if $\varphi$ satisfies \eqref{eq3.6}. Suppose that the inequality \eqref{eq3.6} is fulfilled, and let $\varphi_{0}\in\mathcal{M}$ be defined by \eqref{eq8.10}. Then we get the bounded linear operator \begin{equation}\label{eq9.9} (\Delta,R_{1}):\,H^{1/2,\varphi}(\Omega)\rightarrow H^{-3/2,\varphi}(\Omega)\oplus H^{0,\varphi_{0}}(\partial\Omega)=:\mathcal{H}(\Omega,\partial\Omega), \end{equation} with $R_{1}(H^{1/2,\varphi}(\Omega))$ being equal to $H^{0,\varphi_{0}}(\partial\Omega)$. It is reasonable to ask whether this operator is Fredholm or not. The answer is no, because the range of \eqref{eq9.9} is not closed in $\mathcal{H}(\Omega,\partial\Omega)$. To prove this, let us suppose the contrary, i.e., that the range of \eqref{eq9.9} is closed in $\mathcal{H}(\Omega,\partial\Omega)$. Then the restriction of \eqref{eq9.9} to the subspace $$ K_{\Delta}^{1/2,\varphi}(\Omega):=\bigl\{\,u\in H^{1/2,\varphi}(\Omega):\,\Delta\,u=0\;\;\mbox{in}\;\;\Omega\,\bigr\} $$ has a closed range in $H^{0,\varphi_{0}}(\partial\Omega)$. But, according to Theorem \ref{th10.1} given below in Section \ref{sec10.1}, this restriction establishes a homeomorphism of $K_{\Delta}^{1/2,\varphi}(\Omega)$ onto $H^{0,\varphi}(\partial\Omega)$. Hence, $H^{0,\varphi}(\partial\Omega)$ is a closed subspace of $H^{0,\varphi_{0}}(\partial\Omega)$; since it contains the set $C^{\infty}(\partial\Omega)$, which is dense in $H^{0,\varphi_{0}}(\partial\Omega)$, we conclude that $H^{0,\varphi}(\partial\Omega)=H^{0,\varphi_{0}}(\partial\Omega)$. We arrive at a contradiction if we note that $\varphi_{0}(t)/\varphi(t)\rightarrow0$ as $t\rightarrow+\infty$ and use Theorem \ref{th5.2} ii). Thus our hypothesis is false. For a general regular elliptic boundary-value problem \eqref{eq9.1}, \eqref{eq9.2}, the reasoning is similar. If $s=m+1/2$, $\varphi$ satisfies \eqref{eq3.6}, and $\varphi_{0}$ is defined by \eqref{eq8.10}, then we get the bounded linear operator \eqref{eq9.4} provided that the space $H^{s-m_{j}-1/2,\varphi}(\partial\Omega)=H^{0,\varphi}(\partial\Omega)$ is replaced by $H^{0,\varphi_{0}}(\partial\Omega)$ for those $j$ with $m_{j}=m$. This operator has a nonclosed range and therefore is not Fredholm. \section{Semihomogeneous elliptic problems}\label{sec10} As we have mentioned, the results of the previous section are not valid for $s\leq m+1/2$.
But if the boundary-value problem \eqref{eq9.1}, \eqref{eq9.2} is semihomogeneous (i.e., $f\equiv0$ or all $g_{j}\equiv0$), then it gives rise to a bounded Fredholm operator on the two-sided refined Sobolev scale, in which the number parameter $s$ runs over the whole real axis. In this section we separately consider the case of the homogeneous elliptic equation \eqref{eq9.1} and the case of the homogeneous boundary conditions \eqref{eq9.2}. In what follows we focus our attention on analogs of Theorem \ref{th9.1} on the Fredholm property of $(L,B)$. Counterparts of Theorems \ref{th9.2}--\ref{th9.4} can be derived from these analogs by reasoning similar to that outlined in Section~\ref{sec9} (for details, see \cite{05UMJ5, 06UMJ11, 06UMB4}). \subsection{A boundary-value problem for a homogeneous elliptic equation}\label{sec10.1} Let us consider the regular elliptic boundary-value problem \eqref{eq9.1}, \eqref{eq9.2} provided that $f\equiv0$, namely \begin{equation}\label{eq10.1} Lu=0\;\;\mbox{in}\;\;\Omega,\quad B_{j}u=g_{j}\;\;\mbox{on}\;\;\partial\Omega,\;\;j=1,\ldots,q. \end{equation} We associate the following linear spaces with this problem: \begin{gather*} K_{L}^{\infty}(\Omega):=\bigl\{\,u\in C^{\infty}(\,\overline{\Omega}\,):\,L\,u=0\;\;\mbox{in}\;\;\Omega\,\bigr\}, \\ K_{L}^{s,\varphi}(\Omega):=\bigl\{\,u\in H^{s,\varphi}(\Omega):\,L\,u=0\;\;\mbox{in}\;\;\Omega\,\bigr\} \end{gather*} for $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$. Here the equality $L\,u=0$ is understood in the sense of distribution theory. It follows from the continuity of the embedding $H^{s,\varphi}(\Omega)\hookrightarrow\mathcal{D}'(\Omega)$ that $K_{L}^{s,\varphi}(\Omega)$ is a (closed) subspace of $H^{s,\varphi}(\Omega)$. We consider $K_{L}^{s,\varphi}(\Omega)$ as a Hilbert space with respect to the inner product in $H^{s,\varphi}(\Omega)$. \begin{theorem}\label{th10.1} Let $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$. Then the set $K_{L}^{\infty}(\Omega)$ is dense in the space $K_{L}^{s,\varphi}(\Omega)$, and the mapping $$ u\mapsto Bu=(B_{1}u,\ldots,B_{q}u),\quad u\in K_{L}^{\infty}(\Omega), $$ extends uniquely to a continuous linear operator \begin{equation}\label{eq10.2} B:\,K_{L}^{s,\varphi}(\Omega)\rightarrow \bigoplus_{j=1}^{q}\,H^{s-m_{j}-1/2,\,\varphi}(\partial\Omega)=: \mathcal{H}_{s,\varphi}(\partial\Omega). \end{equation} This operator is Fredholm. Its kernel coincides with $\mathcal{N}$, whereas its range consists of all the vectors $(g_{1},\ldots,g_{q})\in\mathcal{H}_{s,\varphi}(\partial\Omega)$ such that $$ \sum_{j=1}^{q}\,(g_{j},C^{+}_{j}v)_{\partial\Omega}=0\quad\mbox{for all}\quad v\in \mathcal{N}^{+}. $$ The index of the operator \eqref{eq10.2} is equal to $\dim\mathcal{N}-\dim\mathcal{G}^{+}$, with $$ \mathcal{G}^{+}:=\bigl\{\,(C_{1}^{+}v,\ldots,C_{q}^{+}v):\,v\in \mathcal{N}^{+}\,\bigr\}, $$ and does not depend on $s$ and $\varphi$. \end{theorem} Theorem \ref{th10.1} was proved in \cite[Sec. 6]{05UMJ5}. In the $s>m+1/2$ case the theorem follows readily from Lemma \ref{lem9.1} and Theorem \ref{th9.1}. If $s\leq m+1/2$, then the ellipticity condition is essential for the continuity of the operator \eqref{eq10.2}. Note that $\dim\mathcal{G}^{+}\leq\dim\mathcal{N}^{+}$, the strict inequality being possible \cite[Theorem 13.6.15]{Hermander83}. Theorem \ref{th10.1} can be regarded as a certain analog of the Harnack theorem on the convergence of sequences of harmonic functions (see, e.g., \cite[Ch. 11, \S~9]{Mikhlin68}); however, we use the metric of $H^{s,\varphi}(\Omega)$ instead of the uniform metric.
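For instance, for the Dirichlet problem for the Laplace equation (used above in Section \ref{sec9.3}) we have $q=1$, $B_{1}u=u\!\upharpoonright\!\partial\Omega$, $m_{1}=0$, and $\mathcal{N}=\mathcal{N}^{+}=\{0\}$ by the classical uniqueness theorem. Hence Theorem \ref{th10.1} yields, for every $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$, the homeomorphism
$$
R_{1}:\,K_{\Delta}^{s,\varphi}(\Omega)\leftrightarrow H^{s-1/2,\,\varphi}(\partial\Omega);
$$
that is, a harmonic function $u\in H^{s,\varphi}(\Omega)$ of arbitrarily low (refined) smoothness is uniquely and continuously determined by its generalized boundary value. The $s=1/2$ instance of this homeomorphism is precisely the one used in Section \ref{sec9.3}.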
Here it is relevant to mention R.~Seeley's investigation \cite{Seeley66} of the Cauchy data of solutions to a homogeneous elliptic equation in the two-sided Sobolev scale; see also the survey \cite[Sec. 5.4~b]{Agranovich97}. Let us outline the proof of Theorem \ref{th10.1}. For the sake of simplicity, we suppose that both $\mathcal{N}$ and $\mathcal{N}^{+}$ are trivial. Let $s<2q$ and $\varphi\in\mathcal{M}$. Choose an integer $r\geq1$ such that $2q(1-r)<s<2q$. We need the following Hilbert space: \begin{gather*} D^{s,\varphi}_{L}(\Omega):=\bigl\{u\in H^{s,\varphi}(\Omega):\,L\,u\in L_{2}(\Omega)\bigr\},\\ (u_{1},u_{2})_{D^{s,\varphi}_{L}(\Omega)}:=(u_{1},u_{2})_{H^{s,\varphi}(\Omega)}+ (L\,u_{1},L\,u_{2})_{L_{2}(\Omega)}. \end{gather*} The mapping \eqref{eq9.3} extends uniquely to the homeomorphisms \begin{gather*} (L,B):\,D^{2q(1-r)}_{L}(\Omega)\leftrightarrow L_{2}(\Omega)\oplus\mathcal{H}_{2q(1-r)}(\partial\Omega),\\ (L,B):\,H^{2q}(\Omega)\leftrightarrow L_{2}(\Omega)\oplus\mathcal{H}_{2q}(\partial\Omega). \end{gather*} The first of them follows from the Lions--Magenes theorems \cite{LionsMagenes72} stated below in Section \ref{sec12.1}, whereas the second is a special case of Theorem \ref{th9.1}. (Recall that we omit $\varphi$ in the notations if $\varphi\equiv1$.) Applying the interpolation with the function parameter $\psi$ defined by \eqref{eq3.8} with $\varepsilon:=s-2q(1-r)$ and $\delta:=2q-s$, we get another homeomorphism \begin{equation}\label{eq10.3} (L,B):\,\bigl[D^{2q(1-r)}_{L}(\Omega),H^{2q}(\Omega)\bigr]_{\psi}\leftrightarrow L_{2}(\Omega)\oplus\mathcal{H}_{s,\varphi}(\partial\Omega). \end{equation} Now, if we prove that $Z_{\psi}:=\bigl[D^{2q(1-r)}_{L}(\Omega),H^{2q}(\Omega)\bigr]_{\psi}$ coincides with $D^{s,\varphi}_{L}(\Omega)$ up to equivalence of norms, then the restriction of \eqref{eq10.3} to $K_{L}^{s,\varphi}(\Omega)$ will give the homeomorphism \eqref{eq10.2}. The continuous embedding $Z_{\psi}\hookrightarrow D^{s,\varphi}_{L}(\Omega)$ is evident. The inverse embedding can be proved by the following modification of the reasoning used by Lions and Magenes \cite[Ch.~2, Sec. 7.2]{LionsMagenes72} for $r=1$ and a power parameter~$\psi$. In view of Theorem \ref{th9.1} we have the homeomorphism $$ L^{r}L^{r+}+I:\,\bigl\{u\in H^{\sigma}(\Omega):(D_{\nu}^{j-1}u)\upharpoonright\partial\Omega=0\;\;\forall\; j=1,\ldots,2qr\bigr\}\leftrightarrow H^{\sigma-4qr}(\Omega) $$ for each $\sigma\geq2qr$. Here $L^{r}$ is the $r$-th iteration of $L$, $L^{r+}$ is the expression formally adjoint to $L^{r}$, and $I$ is the identity operator. We regard the domain of $L^{r}L^{r+}+I$ as a subspace of $H^{\sigma}(\Omega)$. Consider the bounded linear inverse operators $$ (L^{r}L^{r+}+I)^{-1}:\,H^{\sigma}(\Omega)\rightarrow H^{\sigma+4qr}(\Omega),\quad \sigma\geq-2qr. $$ Set $R:=L^{r-1}L^{r+}(L^{r}L^{r+}+I)^{-1}$ and $P:=-RL+I$. Since $$ LPu=(L^{r}L^{r+}+I)^{-1}Lu\in L_{2}(\Omega)\quad\mbox{for each}\quad u\in H^{2q(1-r)}(\Omega) $$ (indeed, $LPu=Lu-LRLu=\bigl(I-L^{r}L^{r+}(L^{r}L^{r+}+I)^{-1}\bigr)Lu$), the operator $P$ maps continuously $H^{\sigma}(\Omega)\rightarrow D^{\sigma}_{L}(\Omega)$ with $\sigma\geq2q(1-r)$. Therefore, by the interpolation, we get the bounded operator $$ P:\,H^{s,\varphi}(\Omega)=\bigl[H^{2q(1-r)}(\Omega),H^{2q}(\Omega)\bigr]_{\psi}\rightarrow \bigl[D^{2q(1-r)}_{L}(\Omega),H^{2q}(\Omega)\bigr]_{\psi}=Z_{\psi}. $$ Now, for each $u\in D^{s,\varphi}_{L}(\Omega)$, we have $u=Pu+RLu$, with $Pu\in Z_{\psi}$ and $RLu\in H^{2q}(\Omega)\subset Z_{\psi}$. So $D^{s,\varphi}_{L}(\Omega)\subseteq Z_{\psi}$, and our reasoning is complete.
Note that in the Sobolev case of $\varphi\equiv1$ Theorem \ref{th10.1} is a consequence of the above-mentioned Lions--Magenes theorems provided that $s$ is negative and not half-integer. If a negative $s$ is half-integer, then Theorem \ref{th10.1} is new even in the Sobolev case. \subsection{An elliptic problem with homogeneous boundary conditions}\label{sec10.2} Now we consider the regular elliptic boundary-value problem \eqref{eq9.1}, \eqref{eq9.2} provided that all $g_{j}\equiv0$, namely \begin{equation}\label{eq10.4} Lu=f\;\;\mbox{in}\;\;\Omega,\quad B_{j}u=0\;\;\mbox{on}\;\;\partial\Omega,\;\;j=1,\ldots,q. \end{equation} Let us introduce some function spaces related to the boundary-value problem \eqref{eq10.4}. For the sake of brevity, we denote by $(\mathrm{b.c.})$ the homogeneous boundary conditions in \eqref{eq10.4}. In addition, we denote by $(\mathrm{b.c.})^{+}$ the homogeneous adjoint boundary conditions \eqref{eq9.6}: $$ B^{+}_{j}v=0\;\;\mbox{on}\;\;\partial\Omega,\;\;j=1,\ldots,q. $$ We set \begin{gather*} C^{\infty}(\mathrm{b.c.}):=\bigl\{u\in C^{\infty}(\,\overline{\Omega}\,):\,B_{j}u=0\;\;\mbox{on}\;\;\partial\Omega\;\; \forall\;\;j=1,\ldots,q\bigr\}, \\ C^{\infty}(\mathrm{b.c.})^{+}:=\bigl\{v\in C^{\infty}(\,\overline{\Omega}\,):\,B^{+}_{j}v=0\;\;\mbox{on}\;\;\partial\Omega \;\;\forall\;\;j=1,\ldots,q\bigr\}. \end{gather*} Let $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$. We introduce the separable Hilbert spaces $H^{s,\varphi}(\mathrm{b.c.})$ and $H^{s,\varphi}(\mathrm{b.c.})^{+}$ formed by distributions satisfying the homogeneous boundary conditions $(\mathrm{b.c.})$ and $(\mathrm{b.c.})^{+}$ respectively. \begin{definition}\label{def10.1} If $s\notin\{m_{j}+1/2:j=1,\ldots,q\}$, then $H^{s,\varphi}(\mathrm{b.c.})$ is defined to be the closure of $C^{\infty}(\mathrm{b.c.})$ in $H^{s,\varphi,(0)}(\Omega)$, the space $H^{s,\varphi}(\mathrm{b.c.})$ being regarded as a subspace of $H^{s,\varphi,(0)}(\Omega)$. If $s\in\{m_{j}+1/2:j=1,\ldots,q\}$, then the space $H^{s,\varphi}(\mathrm{b.c.})$ is defined by means of the interpolation with the power parameter $\psi(t)=t^{1/2}$: \begin{equation}\label{eq10.5} H^{s,\varphi}(\mathrm{b.c.}):=\bigl[H^{s-1/2,\,\varphi}(\mathrm{b.c.}), H^{s+1/2,\,\varphi}(\mathrm{b.c.})\bigr]_{t^{1/2}}. \end{equation} Replacing $(\mathrm{b.c.})$ with $(\mathrm{b.c.})^{+}$ and $m_{j}$ with $m_{j}^{+}$ in the last two sentences, we obtain the definition of the space $H^{s,\varphi}(\mathrm{b.c.})^{+}$. \end{definition} The space $C^{\infty}(\mathrm{b.c.})^{+}$, and therefore $H^{s,\varphi}(\mathrm{b.c.})^{+}$, is independent of the choice of the system $\{B^{+}_{j}\}$ adjoint to $\{B_{j}\}$; see, e.g., \cite[Ch.~2, Sec. 2.5]{LionsMagenes72}. Note that the case of $s\in\{m_{j}+1/2:j=1,\ldots,q\}$ is special in the definition of $H^{s,\varphi}(\mathrm{b.c.})$. We have to resort to the interpolation formula \eqref{eq10.5} to get the spaces for which the main result of the subsection, Theorem \ref{th10.3}, will be valid. In this case the norm in $H^{s,\varphi}(\mathrm{b.c.})$ is not equivalent to the norm induced by $H^{s,\varphi,(0)}(\Omega)$. The analogous fact is true for $H^{s,\varphi}(\mathrm{b.c.})^{+}$. For $\varphi\equiv1$ this was proved in \cite{Grisvard67, Seeley72} (see also \cite[Sec. 4.3.3]{Triebel95}). The spaces just introduced admit the following constructive description. \begin{theorem}\label{th10.2} Let $s\in\mathbb{R}$, $s\neq m_{j}+1/2$ for all $j=1,\ldots,q$, and $\varphi\in\mathcal{M}$.
If $s>0$, then the space $H^{s,\varphi}(\mathrm{b.c.})$ consists of the functions $u\in H^{s,\varphi}(\Omega)$ such that $B_{j}u=0$ on $\partial\Omega$ for all indices $j=1,\ldots,q$ satisfying $s>m_{j}+1/2$. If $s<1/2$, then $H^{s,\varphi}(\mathrm{b.c.})=H^{s,\varphi,(0)}(\Omega)$. This proposition remains true if we replace $m_{j}$ with $m_{j}^{+}$, $(\mathrm{b.c.})$ with $(\mathrm{b.c.})^{+}$, and $B_{j}$ with $B_{j}^{+}$. \end{theorem} Theorem \ref{th10.2} is known in the Sobolev case of $\varphi\equiv1$ \cite[Sec. 5.5.2]{Roitberg96}. In general, we can deduce it by means of the interpolation with a function parameter. Here we only need to treat the case where $m_{k}+1/2<s<m_{k+1}+1/2$ for some $k=1,\ldots,q$, with $m_{1}<m_{2}<\ldots<m_{q}$ and $m_{q+1}:=\infty$. Choose $\varepsilon>0$ such that $m_{k}+1/2<s\mp\varepsilon<m_{k+1}+1/2$. Then the space $H^{s\mp\varepsilon}(\mathrm{b.c.})$ consists of the functions $u\in H^{s\mp\varepsilon}(\Omega)$ satisfying the condition $B_{j}u=0$ on $\partial\Omega$ for all $j=1,\ldots,k$. So there exists a projector $P_{k}$ of $H^{s\mp\varepsilon}(\Omega)$ onto $H^{s\mp\varepsilon}(\mathrm{b.c.})$; it is constructed in \cite[the proof of Lemma 5.4.4]{Triebel95}. Hence, by Proposition \ref{prop8.1} and Theorem \ref{th8.1} with $\varepsilon=\delta$, we get that $Y_{\psi}:=[H^{s-\varepsilon}(\mathrm{b.c.}),H^{s+\varepsilon}(\mathrm{b.c.})]_{\psi}$ is the subspace $H^{s,\varphi}(\Omega)\cap H^{s-\varepsilon}(\mathrm{b.c.})$ of $H^{s,\varphi}(\Omega)$. Now, since $C^{\infty}(\mathrm{b.c.})$ is dense in $Y_{\psi}$, we have \begin{equation}\label{eq10.6} \bigl[H^{s-\varepsilon}(\mathrm{b.c.}),H^{s+\varepsilon}(\mathrm{b.c.})\bigr]_{\psi}= H^{s,\varphi}(\mathrm{b.c.}) \end{equation} with equivalence of norms, so that $H^{s,\varphi}(\mathrm{b.c.})$ admits the description stated in Theorem \ref{th10.2}. The following theorem is about the Fredholm property of the boundary-value problem \eqref{eq10.4} in the two-sided refined Sobolev scale. \begin{theorem}\label{th10.3} Let $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$. Then the mapping $u\mapsto Lu$, with $u\in C^{\infty}(\mathrm{b.c.})$, extends uniquely to a continuous linear operator \begin{equation}\label{eq10.7} L:H^{s,\varphi}(\mathrm{b.c.})\rightarrow (H^{2q-s,1/\varphi}(\mathrm{b.c.})^{+})'. \end{equation} Here $Lu$ is interpreted as the functional $(Lu,\,\cdot\,)_{\Omega}$, and $(H^{2q-s,\,1/\varphi}(\mathrm{b.c.})^{+})'$ denotes the antidual space to $H^{2q-s,1/\varphi}(\mathrm{b.c.})^{+}$ with respect to the inner product in $L_{2}(\Omega)$. The operator \eqref{eq10.7} is Fredholm. Its kernel coincides with $\mathcal{N}$, whereas its range consists of all the functionals $f\in(H^{2q-s,\,1/\varphi}(\mathrm{b.c.})^{+})'$ such that $(f,v)_{\Omega}=0$ for all $v\in\mathcal{N}^{+}$. The index of \eqref{eq10.7} is $\dim\mathcal{N}-\dim\mathcal{N}^{+}$ and does not depend on $s$ and~$\varphi$. \end{theorem} For the Sobolev scale, where $\varphi\equiv1$, this theorem was proved by Yu.~M.~Berezansky, S.G.~Krein, and Ya.A.~Roitberg (\cite{BerezanskyKreinRoitberg63} and \cite[Ch. III, \S~6, Sec. 10]{Berezansky68}) in the case of integer $s$ and by Roitberg \cite[Sec. 5.5.2]{Roitberg96} for all real $s$; see also the textbook \cite[Ch. XVI, \S~1]{BerezanskySheftelUs96b} and the survey \cite[Sec. 7.9~c]{Agranovich97}. They formulated the theorem in the equivalent form of a homeomorphism theorem. Note that if $s\leq m+1/2$, then the ellipticity condition is essential for the continuity of the operator \eqref{eq10.7}.
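As a simple illustration of Theorem \ref{th10.3}, consider once more the Dirichlet problem for the Laplace equation, for which $q=1$, $m_{1}=0$, both $(\mathrm{b.c.})$ and $(\mathrm{b.c.})^{+}$ are the Dirichlet condition $v=0$ on $\partial\Omega$ (see Example \ref{ex9.2}), and $\mathcal{N}=\mathcal{N}^{+}=\{0\}$. In this case the operator \eqref{eq10.7} becomes the homeomorphism
$$
\Delta:\,H^{s,\varphi}(\mathrm{b.c.})\leftrightarrow \bigl(H^{2-s,\,1/\varphi}(\mathrm{b.c.})\bigr)',\quad s\in\mathbb{R},\;\varphi\in\mathcal{M},
$$
where, by Theorem \ref{th10.2}, $H^{s,\varphi}(\mathrm{b.c.})=\{u\in H^{s,\varphi}(\Omega):u=0\;\mbox{on}\;\partial\Omega\}$ for $s>1/2$, whereas $H^{s,\varphi}(\mathrm{b.c.})=H^{s,\varphi,(0)}(\Omega)$ for $s<1/2$.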
For arbitrary $\varphi\in\mathcal{M}$, Theorem \ref{th10.3} follows from the Sobolev case by Proposition \ref{prop6.1} if we apply the interpolation formulas \eqref{eq10.6}, \eqref{eq10.5} and their counterparts for $H^{2q-s,1/\varphi}(\mathrm{b.c.})^{+}$. First we use \eqref{eq10.6} for $s\notin\{j-1/2:j=1,\ldots,2q\}$ and a sufficiently small $\varepsilon>0$, and then apply \eqref{eq10.5} for the remaining values of $s$. Moreover, we have to resort to the interpolation duality formula $[X_{1}',X_{0}']_{\psi}=[X_{0},X_{1}]_{\chi}'$, where $X:=[X_{0},X_{1}]$ is an admissible couple of Hilbert spaces and $\chi(t):=t/\psi(t)$ for $t>0$. The formula follows directly from the definition of $X_{\psi}$; see, e.g., \cite[Sec. 2.4]{08MFAT1}. \subsection{On a connection between nonhomogeneous and semihomogeneous elliptic problems}\label{sec10.3} Here, for the sake of simplicity, we suppose that $\mathcal{N}=\mathcal{N}^{+}=\{0\}$. Let $s>m+1/2$ and $\varphi\in\mathcal{M}$. It follows from Theorems \ref{th9.1} and \ref{th10.2} that the space $H^{s,\varphi}(\Omega)$ is the direct sum of the subspaces $K^{s,\varphi}_{L}(\Omega)$ and $H^{s,\varphi}(\mathrm{b.c.})$. Therefore Theorem \ref{th9.1} is equivalent to Theorems \ref{th10.1} and \ref{th10.3} taken together; note that the antidual space $(H^{2q-s,1/\varphi}(\mathrm{b.c.})^{+})'$ coincides with $H^{s-2q,\varphi}(\Omega)$. Thus the nonhomogeneous problem \eqref{eq9.1}, \eqref{eq9.2} can be reduced immediately to the semihomogeneous problems \eqref{eq10.1} and \eqref{eq10.4} provided that $s>m+1/2$. This reduction fails for $s<m+1/2$. Indeed, if $0\leq s<m+1/2$, then the operator $(L,B)$ cannot be well defined on $K_{L}^{s,\varphi}(\Omega)\cup H^{s,\varphi}(\mathrm{b.c.})$ because $K_{L}^{s,\varphi}(\Omega)\cap H^{s,\varphi}(\mathrm{b.c.})\neq\{0\}$. This inequality follows from Theorems \ref{th10.1} and \ref{th10.2} if we note that the boundary-value problem \eqref{eq10.1}, with $g_{q}\equiv1$ and $g_{j}\equiv0$ for $j<q$, has a nonzero solution $u\in K_{L}^{\infty}(\Omega)$ belonging to $H^{s,\varphi}(\mathrm{b.c.})$. Here we may suppose that $m_{q}=m$. All the more, the above reduction is impossible for negative $s$. Note that if $s<-1/2$, then solutions to the semihomogeneous problems pertain to spaces of distributions of different nature. Namely, the solutions to the problem \eqref{eq10.1} belong to $K_{L}^{s,\varphi}(\Omega)\subset H^{s,\varphi}(\Omega)$ and are distributions given in the open domain $\Omega$, whereas the solutions to the problem \eqref{eq10.4} belong to $H^{s,\varphi}(\mathrm{b.c.})\subset H^{s,\varphi}_{\overline{\Omega}}(\mathbb{R}^{n})$ and are distributions supported in the closed domain $\overline{\Omega}$. The same conclusions are valid in general, for nontrivial $\mathcal{N}$ and/or $\mathcal{N}^{+}$. \section{Generic theorems for elliptic problems in two-sided scales}\label{sec11} Let us return to the nonhomogeneous regular elliptic boundary-value problem \eqref{eq9.1}, \eqref{eq9.2}. We aim to prove analogs of Theorem \ref{th9.1} for \emph{arbitrary} real $s$. To get the bounded operator $(L,B)$ for such $s$ we have to choose another space instead of $H^{s,\varphi}(\Omega)$ as the domain of the operator. Two essentially different ways to construct the domain are known. They were suggested by Ya.A.~Roitberg \cite{Roitberg64, Roitberg65, Roitberg96} and J.-L.~Lions, E.~Magenes \cite{LionsMagenes62, LionsMagenes63, LionsMagenes72, Magenes65} in the Sobolev case.
These ways lead to different kinds of theorems on the Fredholm property of $(L,B)$; we call them generic and individual theorems. In generic theorems, the domain of $(L,B)$ does not depend on the coefficients of the elliptic expression $L$ and is common to all boundary-value problems of the same order. Note that Theorem \ref{th9.1} is generic. In individual theorems, the domain depends on the coefficients of $L$, even on the coefficients of the lower-order derivatives. In this section we realize Roitberg's approach with regard to the refined Sobolev scale; namely, we modify this scale in the sense of Roitberg and prove a generic theorem about the Fredholm property of $(L,B)$ on the two-sided modified scale. The results of the section were obtained by the authors in \cite{08UMJ4}. Lions and Magenes's approach, which leads to individual theorems, will be considered below in Section \ref{sec12}. \subsection{The modification of the refined Sobolev scale}\label{sec11.1} Let $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$, and let $r>0$ be an integer. We set $E_{r}:=\{k-1/2:k=1,\ldots,r\}$. Recall that $D_{\nu}:=i\,\partial/\partial\nu$, where $\nu$ is the field of unit vectors of inner normals to $\partial\Omega$. Let us define the separable Hilbert spaces $H^{s,\varphi,(r)}(\Omega)$, which form the modified scale. \begin{definition}\label{def11.1} If $s\in\mathbb{R}\setminus E_{r}$, then the space $H^{s,\varphi,(r)}(\Omega)$ is defined to be the completion of $C^{\infty}(\,\overline{\Omega}\,)$ with respect to the Hilbert norm \begin{equation}\label{eq11.1} \|u\|_{H^{s,\varphi,(r)}(\Omega)}:= \biggl(\,\|u\|_{H^{s,\varphi,(0)}(\Omega)}^{2}+ \sum_{k=1}^{r}\;\bigl\|(D_{\nu}^{k-1}u)\upharpoonright\partial\Omega\,\bigr\| _{H^{s-k+1/2,\varphi}(\partial\Omega)}^{2}\,\biggr)^{1/2}. \end{equation} If $s\in E_{r}$, then the space $H^{s,\varphi,(r)}(\Omega)$ is defined by means of the interpolation with the power parameter $\psi(t)=t^{1/2}$, namely \begin{equation}\label{eq11.2} H^{s,\varphi,(r)}(\Omega):=\bigl[\,H^{s-1/2,\varphi,(r)}(\Omega), H^{s+1/2,\varphi,(r)}(\Omega)\,\bigr]_{t^{1/2}}. \end{equation} \end{definition} In the Sobolev case of $\varphi\equiv1$ the space $H^{s,\varphi,(r)}(\Omega)$ was introduced and investigated by Ya.A.~Roitberg; see \cite{Roitberg64, Roitberg65} and \cite[Ch.~2]{Roitberg96}. As usual, we put $H^{s,(r)}(\Omega):=H^{s,1,(r)}(\Omega)$. Note that the case of $s\in E_{r}$ is special in Definition \ref{def11.1} because the norm in $H^{s,\varphi,(r)}(\Omega)$ is then defined by the interpolation formula \eqref{eq11.2} instead of \eqref{eq11.1}; these formulas give nonequivalent norms. As in Subsection \ref{sec10.2}, we have to resort to the interpolation in the mentioned case to get the spaces for which the main result of this section, Theorem \ref{th11.2}, will be true. \begin{definition}\label{def11.2} The class of Hilbert spaces \begin{equation}\label{eq11.3} \{H^{s,\varphi,(r)}(\Omega):\,s\in\mathbb{R},\,\varphi\in\mathcal{M}\} \end{equation} is called the refined Sobolev scale modified by Roitberg. The number $r$ is called the order of the modification. \end{definition} The scale \eqref{eq11.3} has proved fruitful in the theory of boundary-value problems because the trace mapping \eqref{eq8.7} extends uniquely to a continuous operator $R_{r}:H^{s,\varphi,(r)}(\Omega)\rightarrow\mathcal{H}^{r}_{s,\varphi}(\partial\Omega)$ for every real $s$. It is useful to compare this fact with Theorem \ref{th8.4}, in which the condition $s>r-1/2$ cannot be dropped.
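To see what the modification amounts to in the simplest case, let $r=1$ and $s<1/2$ (so that $s\notin E_{1}=\{1/2\}$). Then the norm \eqref{eq11.1} reads
$$
\|u\|_{H^{s,\varphi,(1)}(\Omega)}=
\Bigl(\|u\|_{H^{s,\varphi,(0)}(\Omega)}^{2}
+\|u\!\upharpoonright\!\partial\Omega\|_{H^{s-1/2,\varphi}(\partial\Omega)}^{2}\Bigr)^{1/2},
\quad u\in C^{\infty}(\,\overline{\Omega}\,),
$$
and, after passing to the completion, the two components become independent: by Theorem \ref{th11.1} below, an element of $H^{s,\varphi,(1)}(\Omega)$ is identified with a pair $(u_{0},u_{1})\in H^{s,\varphi,(0)}(\Omega)\oplus H^{s-1/2,\varphi}(\partial\Omega)$ in which the boundary component $u_{1}$ is no longer determined by $u_{0}$. It is precisely this effect that makes the extended trace mapping $R_{1}$ continuous for every real $s$.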
Note that \begin{equation}\label{eq11.4} H^{s,\varphi,(r)}(\Omega)=H^{s,\varphi}(\Omega)\quad\mbox{if}\quad s>r-1/2 \end{equation} because the spaces in \eqref{eq11.4} are completions of $C^{\infty}(\,\overline{\Omega}\,)$ with respect to equivalent norms owing to Theorem~\ref{th8.4}. The spaces $H^{s,\varphi,(r)}(\Omega)$ admit the following isometric representation. We denote by $K_{s,\varphi,(r)}(\Omega,\partial\Omega)$ the linear space of all vectors \begin{equation}\label{eq11.5} (u_{0},u_{1},\ldots,u_{r})\in H^{s,\varphi,(0)}(\Omega) \oplus\bigoplus_{k=1}^{r}\,H^{s-k+1/2,\,\varphi}(\partial\Omega)=: \Pi_{s,\varphi,(r)}(\Omega,\partial\Omega) \end{equation} such that $u_{k}=(D_{\nu}^{k-1}u_{0})\!\upharpoonright\!\partial\Omega$ for each integer $k=1,\ldots,r$ satisfying $s>k-1/2$. By Theorem \ref{th8.4} we may regard $K_{s,\varphi,(r)}(\Omega,\partial\Omega)$ as a (closed) subspace of $\Pi_{s,\varphi,(r)}(\Omega,\partial\Omega)$. \begin{theorem}\label{th11.1} The mapping $$ T_{r}:u\mapsto\bigl(\,u,u\!\upharpoonright\!\partial\Omega,\ldots, (D_{\nu}^{r-1}u)\!\upharpoonright\!\partial\Omega\,\bigr),\quad u\in C^{\infty}(\,\overline{\Omega}\,), $$ extends uniquely to a continuous linear operator \begin{equation}\label{eq11.6} T_{r}:\,H^{s,\varphi,(r)}(\Omega)\rightarrow K_{s,\varphi,(r)}(\Omega,\partial\Omega) \end{equation} for all $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$. This operator is injective. Moreover, if $s\notin E_{r}$, then \eqref{eq11.6} is an isometric isomorphism. \end{theorem} We only need to verify that \eqref{eq11.6} is surjective if $s\notin E_{r}$. For $\varphi\equiv1$ this property was proved by Ya.A.~Roitberg; see, e.g., \cite[Sec.~2.2]{Roitberg96}. In the general case, the proof is quite similar provided that we apply Theorem \ref{th8.4} and \eqref{eq8.8}. Note that we have the following dense compact embeddings in the modified scale \eqref{eq11.3}: \begin{equation}\label{eq11.7} H^{s_{1},\varphi_{1},(r)}(\Omega)\hookrightarrow H^{s,\varphi,(r)}(\Omega),\quad-\infty<s<s_{1}<\infty\;\;\mbox{and} \;\;\varphi,\varphi_{1}\in\mathcal{M}. \end{equation} They result from \eqref{eq8.16} and Theorem \ref{th5.2} i) by Theorem \ref{th11.1} and are understood as embeddings of spaces which are completions of the same set, $C^{\infty}(\,\overline{\Omega}\,)$, with respect to different norms. If $s=s_{1}$, then the continuous embedding \eqref{eq11.7} holds if and only if $\varphi/\varphi_{1}$ is bounded in a neighbourhood of~$+\infty$; the embedding is compact if and only if $\varphi(t)/\varphi_{1}(t)\rightarrow0$ as $t\rightarrow\nobreak+\infty$. This follows from the relevant properties of the refined Sobolev scales over $\Omega$ and $\partial\Omega$. \subsection{A generic theorem of Roitberg type}\label{sec11.2} The main result of this section is the following generic theorem about properties of the operator $(L,B)$ on the two-sided scale \eqref{eq11.3} with $r=2q$. \begin{theorem}\label{th11.2} Let $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$. The mapping \eqref{eq9.3} extends uniquely to a continuous linear operator \begin{gather}\label{eq11.8} (L,B):\,H^{s,\varphi,(2q)}(\Omega)\rightarrow H^{s-2q,\varphi,(0)}(\Omega)\oplus\bigoplus_{j=1}^{q}\, H^{s-m_{j}-1/2,\varphi}(\partial\Omega)\\ =:\mathcal{H}_{s,\varphi,(0)}(\Omega,\partial\Omega).\notag \end{gather} This operator is Fredholm. Its kernel coincides with $\mathcal{N}$, and its range consists of all the vectors $(f,g_{1},\ldots,g_{q})\in\mathcal{H}_{s,\varphi,(0)}(\Omega,\partial\Omega)$ that satisfy \eqref{eq9.8}.
The index of \eqref{eq11.8} is $\dim\mathcal{N}-\dim\mathcal{N}^{+}$ and does not depend on $s$ and $\varphi$. \end{theorem} Note that Theorem \ref{th11.2} is generic because the domain of the operator \eqref{eq11.8}, the space $H^{s,\varphi,(2q)}(\Omega)$, is independent of $L$ due to Definition \ref{def11.1}. If $s>2q-1/2$, then the generic Theorems \ref{th9.1} and \ref{th11.2} are equivalent in view of \eqref{eq11.4} and \eqref{eq8.17}. For the modified Sobolev scale (the $\varphi\equiv1$ case) Theorem \ref{th11.2} was proved by Ya.A.~Roi\-tberg \cite{Roitberg64, Roitberg65}, \cite[Ch.~4 and Sec. 5.3]{Roitberg96}; see also the monograph \cite[Ch.~3, Sec.~6, Theorem 6.9]{Berezansky68}, the handbook \cite[Ch.~III, \S~6, Sec. 5]{FunctionalAnalysis72}, and the survey \cite[Sec. 7.9]{Agranovich97}. For arbitrary $\varphi\in\mathcal{M}$ we can deduce Theorem \ref{th11.2} from the $\varphi\equiv1$ case with the help of the interpolation in the following way. First assume that $s\notin E_{2q}$ and let $\varepsilon>0$. We have the Fredholm bounded operators on the modified Sobolev scale \begin{equation}\label{eq11.9} (L,B):\,H^{s\mp\varepsilon,(2q)}(\Omega)\rightarrow \mathcal{H}_{s\mp\varepsilon,(0)}(\Omega,\partial\Omega). \end{equation} They possess the common kernel $\mathcal{N}$ and the common index $\varkappa:=\dim\mathcal{N}-\dim\mathcal{N}^{+}$. Applying the interpolation with the function parameter $\psi$ defined by \eqref{eq3.8} for $\varepsilon=\delta$, we get by Proposition \ref{prop6.1} and Theorems \ref{th5.3}, \ref{th8.6} that \eqref{eq11.9} implies the boundedness and the Fredholm property of the operator $$ (L,B):\,\bigl[H^{s-\varepsilon,(2q)}(\Omega),H^{s+\varepsilon,(2q)}(\Omega)\bigr]_{\psi} \rightarrow\mathcal{H}_{s,\varphi,(0)}(\Omega,\partial\Omega). $$ It remains to prove the interpolation formula \begin{equation}\label{eq11.10} \bigl[H^{s-\varepsilon,(2q)}(\Omega),H^{s+\varepsilon,(2q)}(\Omega)\bigr]_{\psi}= H^{s,\varphi,(2q)}(\Omega), \end{equation} where the equality of spaces is up to equivalence of norms. Let an index $p$ be such that $s\in\alpha_{p}$, where $\alpha_{0}:=(-\infty,1/2)$, $\alpha_{k}:=(k-1/2,\,k+1/2)$ with $k=1,\ldots,2q-1$, and $\alpha_{2q}:=(2q-1/2,\infty)$. We choose $\varepsilon>0$ satisfying $s\mp\varepsilon\in\alpha_{p}$. By Theorem \ref{th11.1}, the mapping $$ T_{2q,p}:\,u\mapsto \left(\,u,\,\{(D_{\nu}^{k-1}u)\!\upharpoonright\!\partial\Omega:\,p+1\leq k\leq 2q\}\,\right) $$ establishes the homeomorphisms \begin{gather} \label{eq11.11} T_{2q,p}:\,H^{s,\varphi,(2q)}(\Omega)\leftrightarrow H^{s,\varphi,(0)}(\Omega) \oplus\bigoplus_{p+1\leq k\leq2q} H^{s-k+1/2,\varphi}(\partial\Omega)=:K_{s,\varphi,(2q)}^{p}(\Omega,\partial\Omega),\\ T_{2q,p}:\,H^{s\mp\varepsilon,(2q)}(\Omega)\leftrightarrow K_{s\mp\varepsilon,(2q)}^{p}(\Omega,\partial\Omega). \label{eq11.12} \end{gather} Applying the interpolation with $\psi$, we deduce another homeomorphism from \eqref{eq11.12}: \begin{equation}\label{eq11.13} T_{2q,p}:\,\bigl[H^{s-\varepsilon,(2q)}(\Omega),H^{s+\varepsilon,(2q)}(\Omega)\bigr]_{\psi} \leftrightarrow K_{s,\varphi,(2q)}^{p}(\Omega,\partial\Omega). \end{equation} Now \eqref{eq11.11} and \eqref{eq11.13} imply the required formula \eqref{eq11.10}. In the remaining case of $s\in E_{2q}$, we deduce Theorem \ref{th11.2} from the $s\notin E_{2q}$ case by the interpolation with the power parameter $\psi(t)=t^{1/2}$ if we apply \eqref{eq11.2} and the counterparts of Theorem \ref{th3.5} for the refined Sobolev scales over $\Omega$ and $\partial\Omega$.
Note that the continuity of the operator \eqref{eq11.8} holds without the assumption that the boundary-value problem \eqref{eq9.1}, \eqref{eq9.2} is regular elliptic. \subsection{Roitberg's interpretation of generalized solutions}\label{sec11.3} Using Theorem \ref{th11.1}, we can give the following interpretation of a solution $u\in H^{s,\varphi,(2q)}(\Omega)$ to the boun\-dary-value problem \eqref{eq9.1}, \eqref{eq9.2} in the framework of the distribution theory. Let us write down the differential expressions $L$ and $B_{j}$ in a neighbourhood of $\partial\Omega$ in the form \begin{equation}\label{eq11.14} L=\sum_{k=0}^{2q}\;L_{k}\,D_{\nu}^{k},\quad B_{j}=\sum_{k=0}^{m_{j}}\;B_{j,k}\,D_{\nu}^{k}. \end{equation} Here $L_{k}$ and $B_{j,k}$ are certain tangential differential expressions. Integrating by parts, we arrive at the (special) Green formula $$ (Lu,v)_{\Omega}=(u,L^{+}v)_{\Omega}- i\sum_{k=1}^{2q}\;(D_{\nu}^{k-1}u,L^{(k)}v)_{\partial\Omega},\quad u,v\in C^{\infty}(\,\overline{\Omega}\,). $$ Here $L^{(k)}:=\sum_{r=k}^{2q}D_{\nu}^{r-k}L_{r}^{+}$, where $L_{r}^{+}$ is the tangential differential expression formally adjoint to $L_{r}$. By passing to the limit and using the notation \begin{equation}\label{eq11.15} (u_{0},u_{1},\ldots,u_{2q}):=T_{2q}u\in K_{s,\varphi,(2q)}(\Omega,\partial\Omega), \end{equation} we get the following equality for $u\in H^{s,\varphi,(2q)}(\Omega)$: \begin{equation}\label{eq11.16} (Lu,v)_{\Omega}=(u_{0},L^{+}v)_{\Omega}- i\sum_{k=1}^{2q}\;(u_{k},L^{(k)}v)_{\partial\Omega},\quad v\in C^{\infty}(\,\overline{\Omega}\,). \end{equation} Now it follows from \eqref{eq11.14} and \eqref{eq11.16} that the element $u\in H^{s,\varphi,(2q)}(\Omega)$ is a solution to the boundary-value problem \eqref{eq9.1}, \eqref{eq9.2} with $f\in H^{s-2q,\varphi,(0)}(\Omega)$, $g_{j}\in H^{s-m_{j}-1/2,\,\varphi}(\partial\Omega)$ if and only if \begin{gather} \label{eq11.17} (u_{0},L^{+}v)_{\Omega}- i\sum_{k=1}^{2q}\;(u_{k},L^{(k)}v)_{\partial\Omega}= (f,v)_{\Omega}\quad\mbox{for all}\quad v\in C^{\infty}(\,\overline{\Omega}\,),\\ \sum_{k=0}^{m_{j}}\;B_{j,k}\,u_{k+1}=g_{j}\;\;\mbox{on}\;\;\partial\Omega,\quad j=1,\ldots,q. \label{eq11.18} \end{gather} Note that these equalities make sense for arbitrary distributions \begin{gather} \label{eq11.19} u_{0}\in\mathcal{D}'(\mathbb{R}^{n}),\;\; \mathrm{supp}\,u_{0}\subseteq\overline{\Omega},\quad u_{1},\ldots,u_{2q}\in\mathcal{D}'(\partial\Omega), \\ f\in\mathcal{D}'(\mathbb{R}^{n}),\;\; \mathrm{supp}\,f\subseteq\overline{\Omega},\quad g_{1},\ldots,g_{q}\in\mathcal{D}'(\partial\Omega). \label{eq11.20} \end{gather} Therefore it is useful to introduce the following notion. \begin{definition}\label{def11.3} Suppose that \eqref{eq11.19} and \eqref{eq11.20} are fulfilled. Then the vector $u:=(u_{0},u_{1},\ldots,u_{2q})$ is called a generalized solution in Roitberg's sense to the boundary-value problem \eqref{eq9.1}, \eqref{eq9.2} if the conditions \eqref{eq11.17} and \eqref{eq11.18} are valid. \end{definition} This interpretation of a generalized solution was suggested by Roitberg; see, e.g., his monograph \cite[Sec. 2.4]{Roitberg96}. Thus, Theorem \ref{th11.2} can be regarded as a statement about the solvability of the boundary-value problem \eqref{eq9.1}, \eqref{eq9.2} in the class of generalized solutions in Roitberg's sense provided that we identify solutions $u\in H^{s,\varphi,(2q)}(\Omega)$ with the vectors \eqref{eq11.15}.
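For orientation, let us write out the expressions $L^{(k)}$ in the simplest model situation, namely, for $L=-\Delta$ (so that $q=1$) considered in the half-space $\Omega=\{x\in\mathbb{R}^{n}:x_{n}>0\}$, where $\nu$ coincides with the coordinate direction $x_{n}$ and no curvature terms appear. Then $-\Delta=D_{\nu}^{2}+\sum_{k<n}D_{k}^{2}$, so that $L_{2}=1$, $L_{1}=0$, and hence $L^{(2)}=1$ and $L^{(1)}=D_{\nu}$. The special Green formula becomes
$$
(-\Delta u,v)_{\Omega}=(u,-\Delta v)_{\Omega}
-i\,(u,D_{\nu}v)_{\partial\Omega}-i\,(D_{\nu}u,v)_{\partial\Omega}
=(u,-\Delta v)_{\Omega}+(\partial_{\nu}u,v)_{\partial\Omega}-(u,\partial_{\nu}v)_{\partial\Omega},
$$
which is the classical second Green identity written for the inner normal derivative $\partial_{\nu}=\partial/\partial x_{n}$. (On a general domain $\Omega$ the principal terms are the same, but lower-order tangential expressions enter $L_{1}$ and hence $L^{(1)}$.)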
Roitberg's interpretation of a generalized solution and the relevant Theorem \ref{th11.2} have proved fruitful in the theory of elliptic boundary-value problems. Analogs of this theorem were proved by Roitberg for nonregular elliptic boundary-value problems and for general elliptic systems of differential equations; the modified scale of $L_{p}$-type Sobolev spaces with $1<p<\infty$ was used there. In the literature \cite{FunctionalAnalysis72, Roitberg96, Roitberg99}, Theorem \ref{th11.2} and its analogs are known as theorems on a complete collection of homeomorphisms. They have various applications; among them are theorems on the increase in smoothness of solutions up to the boundary, applications to the investigation of Green functions of elliptic boundary-value problems, to elliptic problems with power singularities, to the transmission problem, to the Odhnoff problem, and others. The investigations of Ya.A.~Roitberg, Z.G.~Sheftel' and their disciples into this subject are summed up in Roitberg's monograph \cite{Roitberg96}. Note that, in the most general form, the theorem on a complete collection of homeomorphisms was proved by A.~Kozhevnikov \cite{Kozhevnikov01} for general elliptic pseudodifferential boundary-value problems. Analogs of Theorem \ref{th11.2} were obtained in \cite{94UMJ12, 94Dop12} for some non-Sobolev Banach spaces parametrized by collections of numbers; the case of a scalar elliptic equation was treated therein. We also mention applications of the concept of a generalized solution and of the relevant modified two-sided scale to the theory of elliptic boundary-value problems in nonsmooth domains \cite{KozlovMazyaRossmann97} and to the theory of parabolic \cite{EidelmanZhitarashu98} and hyperbolic \cite{Roitberg99} equations. \section{Individual theorems for elliptic problems}\label{sec12} In this section, we generalize J.-L.~Lions and E.~Magenes's method \cite{LionsMagenes62, LionsMagenes63, LionsMagenes72, Magenes65} for constructing the domain of the operator $(L,B)$. We prove new theorems on the Fredholm property of this operator on scales of Sobolev inner product spaces and some H\"ormander spaces. These theorems have an individual character because the domain of $(L,B)$ depends on the coefficients of the elliptic expression $L$, in contrast to the generic Theorems \ref{th9.1} and \ref{th11.2}. Moreover, in the individual theorems the operator $(L,B)$ acts on spaces consisting of distributions given in the domain $\Omega$, so that we do not need to modify the refined Sobolev scale as was done for Theorem \ref{th11.2}. The section is organized in the following manner. First, for the sake of the reader's convenience, we recall Lions and Magenes's theorems about elliptic boun\-da\-ry-value problems. Then we prove a certain general form of the Lions--Magenes theorems; we call it the key theorem. Namely, we find a general condition on the space of right-hand sides of the elliptic equation $Lu=f$ under which the operator $(L,B)$ is bounded and Fredholm on the corresponding pairs of Sobolev inner product spaces of negative order. Extensive classes of spaces satisfying this condition will be constructed; they contain the spaces used by Lions and Magenes and many other spaces. These results motivate the statements and proofs of individual theorems on the Fredholm property of the operator $(L,B)$ on some Hilbert H\"ormander spaces.
\subsection{The Lions--Magenes theorems}\label{sec12.1} As we have mentioned in Remark \ref{rem8.1}, J.-L.~Lions and E.~Magenes used a definition of the Sobolev space of negative order $s$ over $\Omega$ which is different from our Definition \ref{def8.2} for $\varphi\equiv1$. Namely, they defined this space as the dual of $H^{-s}_{0}(\Omega)$ with respect to the inner product in $L_{2}(\Omega)$. We use this definition throughout Section \ref{sec12}. To distinguish the Sobolev spaces $H^{s}(\Omega)$ introduced above by Definition \ref{def8.2} from the ones used here, we resort to the somewhat different notation $\mathrm{H}^{s}(\Omega)$, where the letter H is not slanted. Thus we put $$ \mathrm{H}^{s}(\Omega):= \begin{cases} \;H^{s}(\Omega)&\;\text{for}\;s\geq0, \\ \;(H^{-s}_{0}(\Omega))'&\;\text{for}\;s<0. \end{cases} $$ Here $(H^{-s}_{0}(\Omega))'$ denotes the Hilbert space antidual to $H^{-s}_{0}(\Omega)$ with respect to the inner product in $L_{2}(\Omega)$. The continuous antilinear functionals from $\mathrm{H}^{s}(\Omega)$ with $s<0$ are uniquely determined by their values on the functions in $C^{\infty}_{0}(\Omega)$. Therefore it is reasonable to identify these functionals with distributions given in $\Omega$. In so doing, we have \cite[Ch.~1, Remark 12.5]{LionsMagenes72} \begin{equation}\label{eq12.1} \mathrm{H}^{s}(\Omega)=H^{s}_{\overline{\Omega}}(\mathbb{R}^{n})/ H^{s}_{\partial\Omega}(\mathbb{R}^{n})=\bigl\{w\!\upharpoonright\!\Omega:\,w\in H^{s}_{\overline{\Omega}}(\mathbb{R}^{n})\bigr\}\quad\mbox{for}\quad s<0. \end{equation} It is remarkable that the spaces $\mathrm{H}^{s}(\Omega)$ and $H^{s}(\Omega)$, with $s<0$, coincide up to equivalence of norms provided $s+1/2\notin\mathbb{Z}$; see, e.g., \cite[Sec. 4.8.2]{Triebel95}. If $s$ is half-integer, then $\mathrm{H}^{s}(\Omega)$ is narrower than $H^{s}(\Omega)$. Note also that \begin{equation}\label{eq12.2} -1/2\leq s<0\;\Rightarrow\;\mathrm{H}^{s}(\Omega)=H^{s,(0)}(\Omega)\;\;\mbox{with equality of norms}. \end{equation} This fact follows, by the duality, from the equality $H^{-s}_{0}(\Omega)=H^{-s}(\Omega)$; see, e.g., \cite[Sec. 4.7.1]{Triebel95}. Lions and Magenes considered the operator \begin{equation}\label{eq12.3} (L,B):\,D^{\sigma+2q}_{L,X}(\Omega)\rightarrow X^{\sigma}(\Omega)\oplus\bigoplus_{j=1}^{q}\,H^{\sigma+2q-m_{j}-1/2}(\partial\Omega) =:\mathbf{X}_{\sigma}(\Omega,\partial\Omega), \end{equation} with $\sigma\in\mathbb{R}$. Here $X^{\sigma}(\Omega)$ is a certain Hilbert space consisting of distributions in $\Omega$ and embedded continuously in $\mathcal{D}'(\Omega)$. The domain of the operator \eqref{eq12.3} is the Hilbert space $$ D^{\sigma+2q}_{L,X}(\Omega):=\bigl\{u\in \mathrm{H}^{\sigma+2q}(\Omega):\, Lu\in X^{\sigma}(\Omega)\bigr\} $$ endowed with the graph inner product $$ (u_{1},u_{2})_{D^{\sigma+2q}_{L,X}(\Omega)}:= (u_{1},u_{2})_{\mathrm{H}^{\sigma+2q}(\Omega)}+(Lu_{1},Lu_{2})_{X^{\sigma}(\Omega)}. $$ In the case where $s:=\sigma+2q>m+1/2$ we may set $X^{\sigma}(\Omega):=H^{\sigma}(\Omega)$, which leads us to Theorem \ref{th9.1} for $\varphi\equiv1$. But in the case where $s\leq m+1/2$ we cannot do so if we want the operator \eqref{eq12.3} to be well defined. The space $X^{\sigma}(\Omega)$ must be narrower than $H^{\sigma}(\Omega)$. Lions and Magenes found some important spaces $X^{\sigma}(\Omega)$ with $\sigma<0$ such that the operator \eqref{eq12.3} is bounded and Fredholm; see \cite{LionsMagenes62, LionsMagenes63} and \cite[Ch.~2, Sec. 6.3]{LionsMagenes72}.
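For orientation, we specialize \eqref{eq12.3} (this explicit instance is ours) to the Dirichlet problem for the Laplace operator: $L:=\Delta$, $q=1$, $B_{1}u:=u\!\upharpoonright\!\partial\Omega$, and $m_{1}=0$. Then the operator \eqref{eq12.3} becomes $$ (\Delta,B_{1}):\,\bigl\{u\in\mathrm{H}^{\sigma+2}(\Omega):\,\Delta u\in X^{\sigma}(\Omega)\bigr\}\rightarrow X^{\sigma}(\Omega)\oplus H^{\sigma+3/2}(\partial\Omega), $$ and the point of the Lions--Magenes theory is to choose $X^{\sigma}(\Omega)$ so that the trace $B_{1}u$ remains meaningful and the operator is bounded and Fredholm even when $\sigma+2\leq m+1/2=1/2$, where the naive choice $X^{\sigma}(\Omega):=H^{\sigma}(\Omega)$ is no longer admissible.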
We state their results in the form of two individual theorems on elliptic boundary-value problems. \begin{theorem}[the first Lions--Magenes theorem \cite{LionsMagenes62, LionsMagenes63}] \label{thLM1} Let $\sigma<0$ and $X^{\sigma}(\Omega):=L_{2}(\Omega)$. Then the mapping \eqref{eq9.3} extends uniquely to the continuous linear operator \eqref{eq12.3}. This operator is Fredholm. Its kernel coincides with $\mathcal{N}$, and its range consists of all the vectors $(f,g_{1},\ldots,g_{q})\in\mathbf{X}_{\sigma}(\Omega,\partial\Omega)$ satisfying \eqref{eq9.8}. The index of \eqref{eq12.3} is $\dim\mathcal{N}-\dim\mathcal{N}^{+}$ and does not depend on $\sigma$. \end{theorem} \begin{remark} Here, the $\sigma=-2q$ case is important in the spectral theory of elliptic operators with general boundary conditions \cite{Grubb68, Grubb96, Mikhailets82, Mikhailets89}; see also the survey \cite[Sec. 7.7 and 9.6]{Agranovich97}. Then the space $D^{0}_{L,L_{2}}(\Omega)=\{u\in L_{2}(\Omega):Lu\in L_{2}(\Omega)\}$ is the domain of the maximal operator $L_{\mathrm{max}}$ corresponding to the differential expression~$L$. Even when all coefficients of $L$ are constant, this space depends essentially on each of them \cite[Sec. 3.1, Theorem 3.1]{Hermander55}. \end{remark} To formulate the second Lions--Magenes theorem, we need the following weighted space $$ \varrho\mathrm{H}^{\sigma}(\Omega):=\{f=\varrho v:\,v\in\mathrm{H}^{\sigma}(\Omega)\,\},\quad (f_{1},f_{2})_{\varrho \mathrm{H}^{\sigma}(\Omega)}:= (\varrho^{-1}f_{1},\varrho^{-1}f_{2})_{\mathrm{H}^{\sigma}(\Omega)}, $$ with $\sigma<0$ and a positive function $\varrho\in C^{\infty}(\Omega)$. The space $\varrho\mathrm{H}^{\sigma}(\Omega)$ is Hilbert and imbedded continuously in $\mathcal{D}'(\Omega)$. Consider a weight function $\varrho:=\varrho_{1}^{-\sigma}$ such that \begin{equation}\label{eq12.4} \varrho_{1}\in C^{\infty}(\,\overline{\Omega}\,),\;\;\varrho_{1}>0\;\;\mbox{in $\Omega$},\;\;\varrho_{1}(x)=\mathrm{dist}(x,\partial\Omega)\;\;\mbox{near $\partial\Omega$}. \end{equation} \begin{theorem}[the second Lions--Magenes theorem \cite{LionsMagenes72}]\label{thLM2} Let $\sigma<0$ and \begin{equation}\label{eq12.5} X^{\sigma}(\Omega):= \begin{cases} \;\varrho_{1}^{-\sigma}\mathrm{H}^{\sigma}(\Omega)&\mbox{if}\;\;\sigma+1/2\notin\mathbb{Z},\\ \;\bigl[\,\varrho_{1}^{-\sigma+1/2}\,\mathrm{H}^{\sigma-1/2}(\Omega),\, \varrho_{1}^{-\sigma-1/2}\,\mathrm{H}^{\sigma+1/2}(\Omega)\bigr]_{t^{1/2}}& \mbox{if}\;\;\sigma+1/2\in\mathbb{Z}. \end{cases} \end{equation} Then the conclusion of Theorem $\ref{thLM1}$ remains true. \end{theorem} \begin{remark}\label{rem12.2} In the cited monograph \cite[Ch.~2, Sec. 6.3]{LionsMagenes72}, Lions and Magenes introduced the space $X^{\sigma}(\Omega)$ in a way different from \eqref{eq12.5} and designated $X^{\sigma}(\Omega)$ as $\Xi^{\sigma}(\Omega)$. Namely, for an integer $\sigma\geq0$, the space $\Xi^{\sigma}(\Omega)$ is defined to consist of all $f\in\mathcal{D}'(\Omega)$ such that $\varrho_{1}^{|\mu|}D^{\mu}f\in L_{2}(\Omega)$ for each multi-index $\mu$ with $|\mu|\leq\sigma$, and $\Xi^{\sigma}(\Omega)$ is endowed with the Hilbert norm $\bigl(\sum_{|\mu|\leq\sigma}\|\varrho_{1}^{|\mu|}D^{\mu}f\|_{L_{2}(\Omega)}^{2}\bigr)^{1/2}$. Then, $\Xi^{\sigma}(\Omega):= [\Xi^{[\sigma]}(\Omega),\Xi^{[\sigma]+1}(\Omega)]_{t^{\{\sigma\}}}$ for fractional $\sigma>0$, with $\sigma=[\sigma]+\{\sigma\}$ and $[\sigma]$ being the integral part of $\sigma$.
Finally, $\Xi^{\sigma}(\Omega):=(\Xi^{-\sigma}(\Omega))'$ for $\sigma<0$, the duality being with respect to the inner product in $L_{2}(\Omega)$. It follows from the result of Lions and Magenes \cite[Ch.~2, Sec. 7.1, Corollary 7.4]{LionsMagenes72} that, for every $\sigma<0$, the space $\Xi^{\sigma}(\Omega)$ coincides with \eqref{eq12.5} up to equivalence of norms. \end{remark} \subsection{An extension of the Lions--Magenes theorems}\label{sec12.2} The results presented here were obtained by the second author in \cite{09MFAT2}. First, we establish the key theorem, which is a certain generalization of the Lions--Magenes theorems stated above. The key theorem asserts that the operator \eqref{eq12.3} is well defined, bounded, and Fredholm for $\sigma<0$ provided that a Hilbert space $X^{\sigma}(\Omega)\hookrightarrow\mathcal{D}'(\Omega)$ satisfies the following condition. \begin{condition}[we name it I$_{\sigma}$]\label{cond12.1} The set $X^{\infty}(\Omega):=X^{\sigma}(\Omega)\cap C^{\infty}(\,\overline{\Omega}\,)$ is dense in $X^{\sigma}(\Omega)$, and there exists a number $c>0$ such that $\|\mathcal{O}f\|_{H^{\sigma}(\mathbb{R}^{n})}\leq c\,\|f\|_{X^{\sigma}(\Omega)}$ for all $f\in X^{\infty}(\Omega)$, where $\mathcal{O}f$ is defined by \eqref{eq8.15}. \end{condition} Note that the smaller $\sigma$ is, the weaker Condition \ref{cond12.1} (I$_{\sigma}$) will be for the same space $X^{\sigma}(\Omega)$. The spaces $X^{\sigma}(\Omega)$ appearing in Theorems \ref{thLM1} and \ref{thLM2} satisfy Condition \ref{cond12.1}. This is evident for the first theorem, whereas, for the second one, this follows from the dense continuous imbedding $\mathrm{H}^{-\sigma}(\Omega)\hookrightarrow\Xi^{-\sigma}(\Omega)$ by the duality in view of Theorem \ref{th8.3} iii) and Remark \ref{rem12.2}. Our key theorem is the following. \begin{theorem}\label{th12.1} Let $\sigma<0$ and $X^{\sigma}(\Omega)$ be an arbitrary Hilbert space imbedded continuously in $\mathcal{D}'(\Omega)$ and satisfying Condition $\ref{cond12.1}$ ($\mathrm{I}_{\sigma}$). Then: \begin{enumerate} \item[i)] The set $D^{\infty}_{L,X}(\Omega):=\{u\in C^{\infty}(\,\overline{\Omega}\,):Lu\in X^{\sigma}(\Omega)\}$ is dense in $D^{\sigma+2q}_{L,X}(\Omega)$. \item[ii)] The mapping $u\rightarrow(Lu,Bu)$, with $u\in D^{\infty}_{L,X}(\Omega)$, extends uniquely to the continuous linear operator \eqref{eq12.3}. \item[iii)] The operator \eqref{eq12.3} is Fredholm. Its kernel is $\mathcal{N}$, and its range consists of all the vectors $(f,g_{1},\ldots,g_{q})\in\mathbf{X}_{\sigma}(\Omega,\partial\Omega)$ that satisfy \eqref{eq9.8}. \item[iv)] If $\mathcal{O}(X^{\infty}(\Omega))$ is dense in $H^{\sigma}_{\overline{\Omega}}(\mathbb{R}^{n})$, then the index of \eqref{eq12.3} is $\dim\mathcal{N}-\dim\mathcal{N}^{+}$. \end{enumerate} \end{theorem} Let us outline the proof of Theorem \ref{th12.1}. The main idea is to derive this theorem from Roitberg's generic theorem, i.e., from Theorem \ref{th11.2} considered in the $\varphi\equiv1$ case. For the sake of simplicity, suppose that $\mathcal{N}=\mathcal{N}^{+}=\{0\}$. We get from Condition \ref{cond12.1} ($\mathrm{I}_{\sigma}$) that the mapping $f\mapsto\mathcal{O}f$, $f\in X^{\infty}(\Omega)$, extends by continuity to a bounded linear injective operator $\mathcal{O}:X^{\sigma}(\Omega)\rightarrow H^{\sigma}_{\overline{\Omega}}(\mathbb{R}^{n})$. This operator defines the continuous imbedding $X^{\sigma}(\Omega)\hookrightarrow H^{\sigma,(0)}(\Omega)$.
Hence, by Theorem \ref{th11.2}, a restriction of the operator \eqref{eq11.8} establishes a homeomorphism \begin{equation}\label{eq12.6} (L,B):D^{\sigma+2q,(2q)}_{L,X}(\Omega)\leftrightarrow \mathbf{X}_{\sigma}(\Omega,\partial\Omega). \end{equation} Its domain is the Hilbert space \begin{gather*} D^{\sigma+2q,(2q)}_{L,X}(\Omega):=\{u\in H^{\sigma+2q,(2q)}(\Omega):Lu\in X^{\sigma}(\Omega)\},\\ \|u\|_{D^{\sigma+2q,(2q)}_{L,X}(\Omega)}^{2}:=\|u\|_{H^{\sigma+2q,(2q)}(\Omega)}^{2}+ \|Lu\|_{X^{\sigma}(\Omega)}^{2}. \end{gather*} It follows from \eqref{eq12.6} that $D^{\infty}_{L,X}(\Omega)$ is dense in $D^{\sigma+2q,(2q)}_{L,X}(\Omega)$. According to Ya.A.~Roitberg \cite[Sec. 6.1, Theorem 6.1.1]{Roitberg96} we have the equivalence of norms \begin{equation}\label{eq12.7} \|u\|_{H^{\sigma+2q,(2q)}(\Omega)}\asymp\bigl(\,\|u\|_{H^{\sigma+2q,(0)}(\Omega)}^{2}+ \|Lu\|_{H^{\sigma,(0)}(\Omega)}^{2}\,\bigr)^{1/2},\quad u\in C^{\infty}(\,\overline{\Omega}\,). \end{equation} This result and the continuous imbedding $X^{\sigma}(\Omega)\hookrightarrow H^{\sigma,(0)}(\Omega)$ imply \begin{equation}\label{eq12.8} \|u\|_{D^{\sigma+2q,(2q)}_{L,X}(\Omega)}\asymp \bigl(\,\|u\|_{H^{\sigma+2q,(0)}(\Omega)}^{2}+ \|Lu\|_{X^{\sigma}(\Omega)}^{2}\,\bigr)^{1/2},\quad u\in C^{\infty}(\,\overline{\Omega}\,). \end{equation} Thus, $D^{\sigma+2q,(2q)}_{L,X}(\Omega)$ is the completion of $D^{\infty}_{L,X}(\Omega)$ with respect to the norm given by the right-hand side of \eqref{eq12.8}. Consider the mapping $u\mapsto u_{0}$ that takes each $u\in D^{\sigma+2q,(2q)}_{L,X}(\Omega)$ to the initial component $u_{0}\in H^{\sigma+2q,(0)}(\Omega)$ of the vector $T_{2q}u$. Here $T_{2q}$ is the operator from Theorem \ref{th11.1} for $r=2q$. If $-2q-1/2\leq\sigma<0$, then $H^{\sigma+2q,(0)}(\Omega)=\mathrm{H}^{\sigma+2q}(\Omega)$ by \eqref{eq12.2}. Now, we may assert that the mapping $u\mapsto u_{0}$ establishes a homeomorphism of $D^{\sigma+2q,(2q)}_{L,X}(\Omega)$ onto $D^{\sigma+2q}_{L,X}(\Omega)$. Hence, \eqref{eq12.6} implies the required homeomorphism \begin{equation}\label{eq12.9} (L,B):D^{\sigma+2q}_{L,X}(\Omega)\leftrightarrow \mathbf{X}_{\sigma}(\Omega,\partial\Omega). \end{equation} Further, if $\sigma<-2q-1/2$, then $H^{\sigma+2q,(0)}(\Omega)=H^{\sigma+2q}_{\overline{\Omega}}(\mathbb{R}^{n})$. Then using \eqref{eq12.1} and Roitberg's result \cite[Sec. 6.2, Theorem 6.2]{Roitberg96} we can prove that the mapping $u\mapsto u_{0}\!\upharpoonright\!\Omega$ establishes a homeomorphism of $D^{\sigma+2q,(2q)}_{L,X}(\Omega)$ onto $D^{\sigma+2q}_{L,X}(\Omega)$. Hence, \eqref{eq12.6} implies \eqref{eq12.9} in this case as well. See \cite[Sec. 4]{09MFAT2} for more details. \begin{remark}\label{rem12.3} A proposition similar to Theorem \ref{th12.1} was proved in Magenes's survey \cite[Sec.~6.10]{Magenes65} for non-half-integer $\sigma\leq-2q$ and for the Dirichlet problem, the space $X^{\sigma}(\Omega)$ obeying certain other conditions depending on the problem. Our Condition \ref{cond12.1} (I$_{\sigma}$) does not depend on the problem. \end{remark} \begin{remark}\label{rem12.4} Ya.A.~Roitberg \cite[Sec.~2.4]{Roitberg71} considered a condition on the space $X^{\sigma}(\Omega)$, which was somewhat stronger than Condition \ref{cond12.1} (I$_{\sigma}$). He required additionally that $C^{\infty}(\,\overline{\Omega}\,)\subset X^{\sigma}(\Omega)$. Under this stronger condition, Roitberg \cite[Sec.~2.4]{Roitberg71}, \cite[Sec. 6.2, p. 190]{Roitberg96} proved the boundedness of the operator \eqref{eq12.3} for all $\sigma<0$.
A homeomorphism theorem for this operator was formulated in the survey \cite[Sec. 7.9, p.~85]{Agranovich97} provided that $-2q\leq s\leq0$ and $\mathcal{N}=\mathcal{N}^{+}=\{0\}$. We also mention the analogs of Theorem \ref{th12.1} proved by Yu.V.~Kostarchuk and Ya.A.~Roitberg \cite[Theorem~4]{KostarchukRoitberg73}, \cite[Sec.~1.3.8]{Roitberg99}. In these analogs, Roitberg's condition is used, but solutions of an elliptic boundary-value problem are considered in $H^{\sigma+2q,(2q)}(\Omega)$. Note that Roitberg's condition does not include the important case where $X^{\sigma}(\Omega)=\{0\}$ and does not cover some weighted spaces $X^{\sigma}(\Omega)=\varrho\mathrm{H}^{\sigma}(\Omega)$, which we consider. \end{remark} Let us consider some applications of Theorem \ref{th12.1} corresponding to particular choices of the space $X^{\sigma}(\Omega)$. Obviously, the space $X^{\sigma}(\Omega):=\{0\}$ satisfies Condition \ref{cond12.1} (I$_{\sigma}$). In this case, Theorem \ref{th12.1} coincides with Theorem \ref{th10.1} for $s:=\sigma+2q<2q$. It is remarkable that, despite the fact that $\mathrm{H}^{s}(\Omega)\neq H^{s}(\Omega)$ for half-integer $s<0$, we have \begin{equation}\label{eq12.10} \{u\in\mathrm{H}^{s}(\Omega):\,Lu=0\;\;\mbox{in}\;\;\Omega\}=\{u\in H^{s}(\Omega):\,Lu=0\;\;\mbox{in}\;\;\Omega\}, \end{equation} the norms in $\mathrm{H}^{s}(\Omega)$ and $H^{s}(\Omega)$ being equivalent on the distributions $u$ appearing in \eqref{eq12.10}. It is also evident that the space $X^{\sigma}(\Omega):=L_{2}(\Omega)$ satisfies Condition \ref{cond12.1} (I$_{\sigma}$) for every $\sigma<0$. In this important case, Theorem \ref{th12.1} coincides with Theorem \ref{thLM1}. We can describe all the Sobolev inner product spaces satisfying Condition \ref{cond12.1}. \begin{lemma}\label{lem12.1} Let $\sigma<0$ and $\lambda\in\mathbb{R}$. The space $X^{\sigma}(\Omega):=\mathrm{H}^{\lambda}(\Omega)$ satisfies Condition $\ref{cond12.1}$ ($\mathrm{I}_{\sigma}$) if and only if \,$\lambda\geq\max\,\{\sigma,-1/2\}$. \end{lemma} Indeed, we can restrict ourselves to the $\lambda<0$ case. Then the space $X^{\sigma}(\Omega):=\mathrm{H}^{\lambda}(\Omega)$ satisfies Condition \ref{cond12.1} (I$_{\sigma}$) if and only if the mapping $\mathcal{O}$ establishes the dense continuous embedding $\mathrm{H}^{\lambda}(\Omega)\hookrightarrow H^{\sigma}_{\overline{\Omega}}(\mathbb{R}^{n})$. By the duality, this embedding is equivalent to the dense continuous embedding $H^{-\sigma}(\Omega)\hookrightarrow H^{-\lambda}_{0}(\Omega)$, which is valid if and only if $-\sigma\geq-\lambda$ and $H^{-\lambda}_{0}(\Omega)=H^{-\lambda}(\Omega)$. Since the latter equality holds if and only if $-\lambda\leq1/2$, the lemma is proved. The next individual theorem results from Theorem \ref{th12.1} and Lemma \ref{lem12.1}. \begin{theorem}\label{th12.2} Let $\sigma<0$ and $\lambda\geq\max\,\{\sigma,-1/2\}$. Then the mapping $u\mapsto(Lu,Bu)$, with $u\in C^{\infty}(\,\overline{\Omega}\,)$, extends uniquely to a continuous linear operator \begin{equation}\label{eq12.11} (L,B):\,\{u\in\mathrm{H}^{\sigma+2q}(\Omega):Lu\in \mathrm{H}^{\lambda}(\Omega)\}\rightarrow \mathrm{H}^{\lambda}(\Omega)\oplus\bigoplus_{j=1}^{q}\,H^{\sigma+2q-m_{j}-1/2}(\partial\Omega) \end{equation} provided that its domain is endowed with the norm $$ \bigl(\,\|u\|_{\mathrm{H}^{\sigma+2q}(\Omega)}^{2}+ \|Lu\|_{\mathrm{H}^{\lambda}(\Omega)}^{2}\bigr)^{1/2}. $$ The domain is a Hilbert space with respect to this norm.
Moreover, the operator \eqref{eq12.11} is Fredholm, and its index is $\dim\mathcal{N}-\dim\mathcal{N}^{+}$. \end{theorem} Here, it is useful to discuss the special case where $\lambda=\sigma$. If $-1/2<\lambda=\sigma<0$, then the domain of \eqref{eq12.11} coincides with $H^{\sigma+2q}(\Omega)$ and we arrive at Theorem \ref{th9.1} for $s=\sigma+2q$ and $\varphi\equiv1$. If $\lambda=\sigma=-1/2$, then the domain is narrower than $\mathrm{H}^{2q-1/2}(\Omega)$ and is equal to $H^{2q-1/2,(2q)}(\Omega)$ in view of \eqref{eq12.8} and \eqref{eq12.2}, so that we get Theorem \ref{th11.2} for $s=2q-1/2$ and $\varphi\equiv1$. In Theorem \ref{th12.2}, we always have $X^{\sigma}(\Omega)\subseteq\mathrm{H}^{-1/2}(\Omega)$. But we can get a space $X^{\sigma}(\Omega)$ containing an extensive class of distributions $f\notin\mathrm{H}^{-1/2}(\Omega)$ and satisfying Condition \ref{cond12.1} (I$_{\sigma}$) if we use certain weighted spaces $\varrho\mathrm{H}^{\sigma}(\Omega)$. In this connection, recall the following. \begin{definition}\label{def12.1} Let $X(\Omega)$ be a Banach space lying in $\mathcal{D}'(\Omega)$. A function $\varrho$ given in $\Omega$ is called a multiplier in $X(\Omega)$ if the operator of multiplication by $\varrho$ is defined and bounded on $X(\Omega)$. \end{definition} Let $\sigma<-1/2$ and consider the following condition. \begin{condition}[we name it II$_{\sigma}$]\label{cond12.2} The function $\varrho$ is a multiplier in $H^{-\sigma}(\Omega)$, and \begin{equation}\label{eq12.12} D_{\nu}^{j}\,\varrho=0\;\;\mbox{on}\;\;\partial\Omega\;\;\mbox{for every}\;\; j\in\mathbb{Z}\;\;\mbox{such that}\;\;0\leq j<-\sigma-1/2. \end{equation} \end{condition} Note that if $\varrho$ is a multiplier in $H^{-\sigma}(\Omega)$, then evidently $\varrho\in H^{-\sigma}(\Omega)$, so that, by Theorem \ref{th8.4}, the trace of $D_{\nu}^{j}\varrho$ on $\partial\Omega$ in \eqref{eq12.12} is well defined. A description of the set of all multipliers in $H^{-\sigma}(\Omega)$ is given in \cite[Sec. 9.3.3]{MazyaShaposhnikova09}. Using Condition \ref{cond12.2} (II$_{\sigma}$), we can describe the class of all weighted Sobolev inner product spaces of order $\sigma$ that satisfy Condition \ref{cond12.1} (I$_{\sigma}$). \begin{lemma}\label{lem12.2} Let $\sigma<-1/2$, and let a function $\varrho\in C^{\infty}(\Omega)$ be positive. The space $X^{\sigma}(\Omega):=\varrho\mathrm{H}^{\sigma}(\Omega)$ satisfies Condition $\ref{cond12.1}$ ($\mathrm{I}_{\sigma}$) if and only if $\varrho$ meets Condition $\ref{cond12.2}$ ($\mathrm{II}_{\sigma}$). \end{lemma} Indeed, using the intrinsic description of $H^{-\sigma}_{0}(\Omega)$ mentioned in Subsection \ref{sec8.4}, we can prove that $\varrho$ satisfies Condition \ref{cond12.2} ($\mathrm{II}_{\sigma}$) if and only if the multiplication by $\varrho$ is a bounded operator $M_{\varrho}:H^{-\sigma}(\Omega)\rightarrow H^{-\sigma}_{0}(\Omega)$. The latter is equivalent, by the duality, to the boundedness of the operator $M_{\varrho}:\mathrm{H}^{\sigma}(\Omega)\rightarrow H^{\sigma}_{\overline{\Omega}}(\mathbb{R}^{n})$. Note that the mapping $f\mapsto\varrho^{-1}f$ establishes the homeomorphism $M_{\varrho^{-1}}: \varrho\mathrm{H}^{\sigma}(\Omega)\leftrightarrow\mathrm{H}^{\sigma}(\Omega)$. Therefore, we conclude that $\varrho$ satisfies Condition \ref{cond12.2} ($\mathrm{II}_{\sigma}$) if and only if the identity operator $M_{\varrho}\,M_{\varrho^{-1}}$ establishes a continuous embedding $\mathcal{O}:\varrho\mathrm{H}^{\sigma}(\Omega)\rightarrow H^{\sigma}_{\overline{\Omega}}(\mathbb{R}^{n})$.
The embedding means that the space $X^{\sigma}(\Omega)=\varrho\mathrm{H}^{\sigma}(\Omega)$ satisfies Condition \ref{cond12.1} ($\mathrm{I}_{\sigma}$). The next individual theorem results from Theorem \ref{th12.1} and Lemma \ref{lem12.2}. \begin{theorem}\label{th12.3} Let $\sigma<-1/2$, and let a positive function $\varrho\in C^{\infty}(\Omega)$ satisfy Condition $\ref{cond12.2}$ ($\mathrm{II}_{\sigma}$). Then the mapping $u\rightarrow(Lu,Bu)$, with $u\in C^{\infty}(\,\overline{\Omega}\,)$, $Lu\in\varrho\mathrm{H}^{\sigma}(\Omega)$, extends uniquely to a continuous linear operator \begin{equation}\label{eq12.13} (L,B):\,\bigl\{u\in\mathrm{H}^{\sigma+2q}(\Omega): Lu\in\varrho\mathrm{H}^{\sigma}(\Omega)\bigr\}\rightarrow \varrho\mathrm{H}^{\sigma}(\Omega) \oplus\bigoplus_{j=1}^{q}\,H^{\sigma+2q-m_{j}-1/2}(\partial\Omega) \end{equation} provided that its domain is endowed with the norm $$ \bigl(\,\|u\|_{\mathrm{H}^{\sigma+2q}(\Omega)}^{2}+ \|\varrho^{-1}Lu\|_{\mathrm{H}^{\sigma}(\Omega)}^{2}\bigr)^{1/2}. $$ The domain is a Hilbert space with respect to this norm. Moreover, the operator \eqref{eq12.13} is Fredholm, and its index is $\dim\mathcal{N}-\dim\mathcal{N}^{+}$. \end{theorem} We obtain an important example of a function $\varrho$ satisfying Condition \ref{cond12.2} (II$_{\sigma}$) for fixed $\sigma<-1/2$ if we set $\varrho:=\varrho_{1}^{\delta}$, where $\varrho_{1}$ meets \eqref{eq12.4} and where either $\delta\geq-\sigma-1/2$ in the case $-\sigma-1/2\in\mathbb{Z}$, or $\delta>-\sigma-1/2$ in the case $-\sigma-1/2\notin\mathbb{Z}$. It is useful to compare Theorem \ref{thLM2} (the second Lions--Magenes theorem) with Theorems \ref{th12.2} and \ref{th12.3}. For non-half-integer $\sigma<-1/2$, Theorem \ref{thLM2} is the special case of Theorem \ref{th12.3}, where $\varrho:=\varrho_{1}^{-\sigma}$. For the half-integer values of $\sigma<-1/2$, Theorem \ref{thLM2} follows from this case by the interpolation with the power parameter $t^{1/2}$. Finally, if $-1/2\leq\sigma<0$, then Theorem \ref{thLM2} is a consequence of Theorem \ref{th12.2}, in which we can take the space $X^{\sigma}(\Omega):=\mathrm{H}^{\sigma}(\Omega)$ containing $\varrho_{1}^{-\sigma}\mathrm{H}^{\sigma}(\Omega)$. \subsection{Individual theorems on classes of H\"ormander spaces}\label{sec12.3} Here we give analogs of Theorems \ref{th12.1}, \ref{th12.2}, and \ref{th12.3} for some classes of H\"ormander spaces. The proofs of the analogs are similar to those outlined in the previous subsection. First, we state the key theorem, an analog of Theorem \ref{th12.1}. Let $\sigma<0$ and $\varphi\in\mathcal{M}$. Suppose that a Hilbert space $X^{\sigma,\varphi}(\Omega)$ is embedded continuously in $\mathcal{D}'(\Omega)$. Consider the following analog of Condition \ref{cond12.1} (I$_{\sigma}$). \begin{condition}[we name it I$_{\sigma,\varphi}$]\label{cond12.3} The set $X^{\infty}(\Omega):=X^{\sigma,\varphi}(\Omega)\cap C^{\infty}(\,\overline{\Omega}\,)$ is dense in $X^{\sigma,\varphi}(\Omega)$, and there exists a number $c>0$ such that $\|\mathcal{O}f\|_{H^{\sigma,\varphi}(\mathbb{R}^{n})}\leq c\,\|f\|_{X^{\sigma,\varphi}(\Omega)}$ for all $f\in X^{\infty}(\Omega)$, where $\mathcal{O}f$ is defined by \eqref{eq8.15}. \end{condition} The domain of $(L,B)$ is defined by the formula $$ D^{\sigma+2q,\varphi}_{L,X}(\Omega):=\{u\in H^{\sigma+2q,\varphi}(\Omega):\,Lu\in X^{\sigma,\varphi}(\Omega)\} $$ and endowed with the graph inner product $$ (u_{1},u_{2})_{D^{\sigma+2q,\varphi}_{L,X}(\Omega)}:= (u_{1},u_{2})_{H^{\sigma+2q,\varphi}(\Omega)}+(Lu_{1},Lu_{2})_{X^{\sigma,\varphi}(\Omega)}.
$$ The space $D^{\sigma+2q,\varphi}_{L,X}(\Omega)$ is Hilbert. Our key theorem on classes of H\"ormander spaces is the following. \begin{theorem}\label{th12.4} Let $\varphi\in\mathcal{M}$, and let a number $\sigma<0$ be such that \begin{equation}\label{eq12.14} \sigma+2q\neq1/2-k\quad\mbox{for every integer}\quad k\geq1. \end{equation} Suppose that $X^{\sigma,\varphi}(\Omega)$ is an arbitrary Hilbert space imbedded continuously in $\mathcal{D}'(\Omega)$ and satisfying Condition $\ref{cond12.3}$ ($\mathrm{I}_{\sigma,\varphi}$). Then: \begin{enumerate} \item[i)] The set $D^{\infty}_{L,X}(\Omega):=\{u\in C^{\infty}(\,\overline{\Omega}\,):Lu\in X^{\sigma,\varphi}(\Omega)\}$ is dense in $D^{\sigma+2q,\varphi}_{L,X}(\Omega)$. \item[ii)] The mapping $u\rightarrow(Lu,Bu)$, with $u\in D^{\infty}_{L,X}(\Omega)$, extends uniquely to a continuous linear operator \begin{equation}\label{eq12.15} (L,B):\,D^{\sigma+2q,\varphi}_{L,X}(\Omega)\rightarrow X^{\sigma,\varphi}(\Omega)\oplus \bigoplus_{j=1}^{q}\,H^{\sigma+2q-m_{j}-1/2,\varphi}(\partial\Omega) =:\mathbf{X}_{\sigma,\varphi}(\Omega,\partial\Omega). \end{equation} \item[iii)] The operator $\eqref{eq12.15}$ is Fredholm. Its kernel is $\mathcal{N}$, and its range consists of all the vectors $(f,g_{1},\ldots,g_{q})\in\mathbf{X}_{\sigma,\varphi}(\Omega,\partial\Omega)$ that satisfy \eqref{eq9.8}. \item[iv)] If $\mathcal{O}(X^{\infty}(\Omega))$ is dense in $H^{\sigma,\varphi}_{\overline{\Omega}}(\mathbb{R}^{n})$, then the index of \eqref{eq12.15} is $\dim\mathcal{N}-\dim\mathcal{N}^{+}$. \end{enumerate} \end{theorem} Note that the condition \eqref{eq12.14} is caused by the fact that, in the definition of $D^{\sigma+2q,\varphi}_{L,X}(\Omega)$, we use the space $H^{\sigma+2q,\varphi}(\Omega)$ rather than an appropriate analog of $\mathrm{H}^{\sigma+2q}(\Omega)$, which is different from $H^{\sigma+2q,\varphi}(\Omega)$ if $\sigma+2q$ is negative and half-integer. The following two individual theorems result from the key theorem. The first of them is for nonweighted H\"ormander spaces $X^{\sigma,\varphi}(\Omega):=H^{\lambda,\eta}(\Omega)$. In view of Theorem \ref{th9.1}, we can confine ourselves to the $\sigma<-1/2$ case. \begin{theorem}\label{th12.5} Let $\sigma<-1/2$, the condition \eqref{eq12.14} be fulfilled, $\lambda>-1/2$, and $\varphi,\eta\in\mathcal{M}$. Then the mapping $u\mapsto(Lu,Bu)$, with $u\in C^{\infty}(\,\overline{\Omega}\,)$, extends uniquely to a continuous linear operator \begin{equation}\label{eq12.16} (L,B):\,\{u\in H^{\sigma+2q,\varphi}(\Omega):Lu\in H^{\lambda,\eta}(\Omega)\}\rightarrow H^{\lambda,\eta}(\Omega)\oplus \bigoplus_{j=1}^{q}\,H^{\sigma+2q-m_{j}-1/2,\varphi}(\partial\Omega) \end{equation} provided that its domain is endowed with the norm $$ \bigl(\,\|u\|_{H^{\sigma+2q,\varphi}(\Omega)}^{2}+ \|Lu\|_{H^{\lambda,\eta}(\Omega)}^{2}\bigr)^{1/2}. $$ The domain is a Hilbert space with respect to this norm. Moreover, the operator \eqref{eq12.16} is Fredholm, and its index is $\dim\mathcal{N}-\dim\mathcal{N}^{+}$. \end{theorem} It is remarkable that, in this individual theorem, the solution and right-hand side of the elliptic equation $Lu=f$ can be of different supplementary smoothness, given by $\varphi$ and~$\eta$, respectively.
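We mention a particular case of Theorem \ref{th12.5}, which follows at once from its statement (this observation is ours): choosing $\lambda:=0$ and $\eta\equiv1$ gives $H^{\lambda,\eta}(\Omega)=L_{2}(\Omega)$, and we obtain an analog of the first Lions--Magenes theorem (Theorem \ref{thLM1}) in which the solution $u$ is taken in the H\"ormander space $H^{\sigma+2q,\varphi}(\Omega)$, whereas the right-hand side $f=Lu$ is merely square integrable.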
The second individual theorem is for weighted H\"ormander spaces $X^{\sigma,\varphi}(\Omega):=\varrho H^{\sigma,\varphi}(\Omega)$, namely \begin{gather*} \varrho H^{\sigma,\varphi}(\Omega):=\{f=\varrho v:\,v\in H^{\sigma,\varphi}(\Omega)\,\},\\ (f_{1},f_{2})_{\varrho H^{\sigma,\varphi}(\Omega)}:= (\varrho^{-1}f_{1},\varrho^{-1}f_{2})_{H^{\sigma,\varphi}(\Omega)}. \end{gather*} Here $\sigma<-1/2$, $\varphi\in\mathcal{M}$, and the function $\varrho\in C^{\infty}(\Omega)$ is positive. The space $\varrho H^{\sigma,\varphi}(\Omega)$ is Hilbert. \begin{theorem}\label{th12.6} Let $\sigma<-1/2$, the condition \eqref{eq12.14} be valid, and $\varphi\in\mathcal{M}$. Suppose that a positive function $\varrho\in C^{\infty}(\Omega)$ is a multiplier in $H^{-\sigma,1/\varphi}(\Omega)$ and satisfies \eqref{eq12.12}. Then the mapping $u\rightarrow(Lu,Bu)$, with $u\in C^{\infty}(\,\overline{\Omega}\,)$, $Lu\in\varrho H^{\sigma,\varphi}(\Omega)$, extends uniquely to a continuous linear operator \begin{gather}\label{eq12.17} (L,B):\bigl\{u\in H^{\sigma+2q,\varphi}(\Omega): Lu\in\varrho H^{\sigma,\varphi}(\Omega)\bigr\}\rightarrow\notag \\ \varrho H^{\sigma,\varphi}(\Omega) \oplus\bigoplus_{j=1}^{q}H^{\sigma+2q-m_{j}-1/2,\varphi}(\partial\Omega) \end{gather} provided that its domain is endowed with the norm $$ \bigl(\,\|u\|_{H^{\sigma+2q,\varphi}(\Omega)}^{2}+ \|\varrho^{-1}Lu\|_{H^{\sigma,\varphi}(\Omega)}^{2}\bigr)^{1/2}. $$ The domain is a Hilbert space with respect to this norm. Moreover, the operator \eqref{eq12.17} is Fredholm, and its index is $\dim\mathcal{N}-\dim\mathcal{N}^{+}$. \end{theorem} We obtain a sufficiently wide class of weight functions $\varrho$ satisfying the condition of this theorem if we set $\varrho:=\varrho_{1}^{\delta}$, where $\varrho_{1}$ is subject to \eqref{eq12.4} and $\delta>-\sigma-1/2$. \section{Other results}\label{sec13} In this section, we outline applications of H\"ormander spaces to other classes of elliptic problems, namely to nonregular elliptic boundary-value problems, par\-a\-me\-ter-elliptic problems, mixed elliptic problems, and elliptic systems. We recall the statements of these problems and formulate theorems about properties of the corresponding operators. As in the case of Sobolev spaces, the Fredholm property and its implications are preserved for some classes of H\"ormander spaces. The theorems stated below are deduced from the Sobolev case with the help of the interpolation with an appropriate function parameter. We will not sketch the proofs but only refer to the authors' relevant papers. \subsection{Nonregular elliptic boundary-value problems}\label{sec13.1} Here we suppose that the boundary-value problem \eqref{eq9.1}, \eqref{eq9.2} is elliptic in $\Omega$ but can be nonregular. This means that it satisfies conditions i) and ii) of Definition \ref{def9.1} but need not meet condition iii). Theorems \ref{th9.1}--\ref{th9.4} remain valid for this boundary-value problem except for the description of the operator range and the index formula given in Theorem \ref{th9.1}. The exception is caused by the fact that the boundary-value problem need not have a formally adjoint boundary-value problem within the class of differential equations. A~version of Theorem \ref{th9.1} in this situation is the following. \begin{theorem}\label{th13.1} Let $s>m+1/2$ and $\varphi\in\mathcal{M}$. Then the bounded linear operator \eqref{eq9.4} is Fredholm.
Its kernel coincides with $\mathcal{N}$, whereas its range consists of all the vectors $(f,g_{1},\ldots,g_{q})\in\mathcal{H}_{s,\varphi}(\Omega,\partial\Omega)$ such that the equality in \eqref{eq9.8} is fulfilled for each $v\in W$. Here $W$ is a certain finite-dimensional space that lies in $C^{\infty}(\,\overline{\Omega}\,)\times(C^{\infty}(\partial\Omega))^{q}$ and does not depend on $s$ and $\varphi$. The index of \eqref{eq9.4} is $\dim\mathcal{N}-\dim W$ and is also independent of $s$, $\varphi$. \end{theorem} The proof is given in \cite[Sec. 4]{06UMJ3}. Recall that if the boundary-value problem \eqref{eq9.1}, \eqref{eq9.2} is regular elliptic, then $W=\mathcal{N}^{+}$. \begin{example}\label{ex13.1} The oblique derivative problem for the Laplace equation: \begin{equation}\label{eq13.1} \Delta u=f\;\;\mbox{in}\;\;\Omega,\quad\quad\frac{\partial u}{\partial\eta}=g\;\;\mbox{on}\;\;\partial\Omega. \end{equation} Here $\eta$ is an infinitely smooth field of unit vectors $\eta(x)$, $x\in\partial\Omega$. Suppose that $\dim\Omega=2$; then the boundary-value problem \eqref{eq13.1} is elliptic in $\Omega$, but it is nonregular provided that $\partial\Omega_{\eta}\neq\varnothing$. Here $\partial\Omega_{\eta}$ denotes the set of all $x\in\partial\Omega$ such that $\eta(x)$ is tangent to $\partial\Omega$. If $\overline{\Omega}$ is a disk, then the corresponding operator has index $2-\delta(\eta)/\pi$, where $\delta(\eta)$ is the increment of the angle between $i:=(1,\,0)$ and $\eta(x)$ when $x$ goes counterclockwise around $\partial\Omega$; see, e.g., \cite[Ch. 19, \S~4]{Mikhlin68}. Note that if $\dim\Omega\geq3$ and $\partial\Omega_{\eta}\neq\varnothing$, then the boundary-value problem \eqref{eq13.1} is not elliptic at all. \end{example} Other examples of nonregular elliptic boundary-value problems are given in \cite[Sec.~4]{Roitberg69}. At the end of this subsection, we recall the following important result concerning an arbitrary boundary-value problem \eqref{eq9.1}, \eqref{eq9.2} (see, e.g., \cite[Sec. 2.4]{Agranovich97}). If the corresponding operator \eqref{eq9.4} is Fredholm for some $s\geq2q$ with $\varphi\equiv1$, then this problem is elliptic in $\Omega$, i.e., the above-mentioned conditions i) and ii) are satisfied. \subsection{Parameter-elliptic problems}\label{sec13.2} Such problems were singled out by S. Agmon and L.~Nirenberg \cite{Agmon62, AgmonNirenberg63}, M.S.~Agranovich and M.I.~Vishik \cite{AgranovichVishik64} as a class of elliptic boundary-value problems that depend on a complex-valued parameter, say $\lambda$, and possess the following remarkable property. Provided that $|\lambda|\gg1$, the operator corresponding to the problem establishes a homeomorphism between appropriate pairs of Sobolev spaces, and moreover the operator norm admits a two-sided a~priori estimate with constants independent of $\lambda$. Parameter-elliptic problems have been applied to the spectral theory of elliptic operators and to parabolic equations. Some wider classes of parameter-elliptic operators and boundary-value problems were investigated by M.S.~Agranovich \cite{Agranovich90, Agranovich92}, R.~Denk, R.~Mennicken and L.R.~Vol\-e\-vich \cite{DenkMennickenVolevich98, DenkMennickenVolevich01}, G.~Grubb \cite[Ch.~2]{Grubb96}, A.N.~Kozhevnikov \cite{Kozhevnikov73, Kozhevnikov96, Kozhevnikov97} (see also the surveys \cite{Agranovich94, Agranovich97}). In this subsection, we give an application of H\"ormander spaces to the parameter-elliptic boundary-value problems considered by Agmon and Nirenberg and by Agranovich and Vishik.
Namely, we state a homeomorphism theorem on a class of H\"ormander spaces and give the corresponding two-sided a~priori estimate for the operator norm. Recall the definition of the parameter-elliptic boundary-value problem. We consider the nonhomogeneous boundary-value problem \begin{equation}\label{eq13.2} L(\lambda)\,u=f\quad\mbox{in}\quad\Omega,\quad\quad B_{j}(\lambda)\,u=g_{j}\quad\mbox{on}\quad\partial\Omega,\quad j=1,\ldots,q, \end{equation} that depends on the parameter $\lambda\in\mathbb{C}$ as follows: \begin{equation}\label{eq13.3} L(\lambda):=\sum_{r=0}^{2q}\,\lambda^{2q-r}L_{r},\quad\quad B_{j}(\lambda):=\sum_{r=0}^{m_{j}}\,\lambda^{m_{j}-r}B_{j,r}. \end{equation} Here $L_{r}=L_{r}(x,D)$, $x\in\overline{\Omega}$, and $B_{j,r}=B_{j,r}(x,D)$, $x\in\partial\Omega$, are linear partial differential expressions of order $\leq r$ with complex-valued infinitely smooth coefficients. As above, the integers $q$ and $m_{j}$ satisfy the inequalities $q\geq1$ and $0\leq m_{j}\leq 2q-1$. Note that $L(0)=L_{2q}$ and $B_{j}(0)=B_{j,m_{j}}$. We associate certain homogeneous polynomials in $(\xi,\lambda)\in\mathbb{C}^{n+1}$ with the partial differential expressions \eqref{eq13.3}. Namely, we set $$ L^{(0)}(x;\xi,\lambda):=\sum_{r=0}^{2q}\,\lambda^{2q-r}L^{(0)}_{r}(x,\xi), \quad\mbox{with}\quad x\in\overline{\Omega},\;\xi\in\mathbb{C}^{n},\;\lambda\in\mathbb{C}. $$ Here $L^{(0)}_{r}(x,\xi)$ is the principal symbol of $L_{r}(x,D)$ provided $\mathrm{ord}\,L_{r}=r$, or $L^{(0)}_{r}(x,\xi)\equiv0$ if $\mathrm{ord}\,L_{r}<r$. Similarly, for $j=1,\ldots,q$, we put $$ B^{(0)}_{j}(x;\xi,\lambda):=\sum_{r=0}^{m_{j}}\,\lambda^{m_{j}-r}B^{(0)}_{j,r}(x,\xi), \quad\mbox{with}\quad x\in\partial\Omega,\;\xi\in \mathbb{C}^{n},\;\lambda\in\mathbb{C}. $$ Here $B^{(0)}_{j,r}(x,\xi)$ is the principal symbol of $B_{j,r}(x,D)$ provided $\mathrm{ord}\,B_{j,r}=r$, or $B^{(0)}_{j,r}(x,\xi)\equiv0$ if $\mathrm{ord}\,B_{j,r}<r$. Note that $L^{(0)}(x;\xi,\lambda)$ and $B^{(0)}_{j}(x;\xi,\lambda)$ are homogeneous polynomials in $(\xi,\lambda)$ of orders $2q$ and $m_{j}$, respectively. Let $K$ be a fixed closed angle on the complex plane with vertex at the origin; here we admit the case where $K$ degenerates into a ray. \begin{definition}\label{def13.1} The boundary-value problem \eqref{eq13.2} is called parameter-elliptic in the angle $K$ if the following conditions are satisfied: \begin{enumerate} \item[i)] $L^{(0)}(x;\xi,\lambda)\neq0$ for each $x\in\overline{\Omega}$, $\xi\in\mathbb{R}^{n}$, and $\lambda\in K$ whenever $|\xi|+|\lambda|\neq0$. \item[ii)] Let $x\in\partial\Omega$, $\xi\in\mathbb{R}^{n}$, and $\lambda\in K$ be such that $\xi$ is tangent to $\partial\Omega$ at $x$ and that $|\xi|+|\lambda|\neq0$. Then the polynomials $B^{(0)}_{j}(x;\xi+\tau\nu(x),\lambda)$ in $\tau$, $j=1,\ldots,q$, are linearly independent modulo $\prod_{j=1}^{q}(\tau-\tau^{+}_{j}(x;\xi,\lambda))$. Here $\tau^{+}_{1}(x;\xi,\lambda),\ldots,\tau^{+}_{q}(x;\xi,\lambda)$ are all the $\tau$-roots of $L^{(0)}(x;\xi+\tau\nu(x),\lambda)$ with $\mathrm{Im}\,\tau>0$, each root being taken the number of times equal to its multiplicity. \end{enumerate} \end{definition} \begin{remark}\label{rem13.1} Condition ii) of Definition \ref{def13.1} is correctly stated in the sense that, for the polynomial $L^{(0)}(x;\xi+\tau\nu(x),\lambda)$, the numbers of the $\tau$-roots with $\mathrm{Im}\,\tau>0$ and with $\mathrm{Im}\,\tau<0$ are the same and equal to $q$ if the roots are counted with their multiplicities.
Indeed, it follows from condition i) that the partial differential expression $$ L(x;D,D_{t}):=\sum_{r=0}^{2q}\,D_{t}^{2q-r}L_{r}(x,D),\quad x\in\overline{\Omega}, $$ is elliptic. Since the expression involves differentiation with respect to $n+1\geq3$ real arguments $x_{1},\ldots,x_{n},t$, its ellipticity is equivalent to the proper ellipticity condition (see Remark \ref{rem9.1}). So, the $\tau$-roots of $L^{(0)}(x;\xi+\tau\nu(x),\lambda)$ have the indicated property. \end{remark} Let us give some instances of parameter-elliptic boundary-value problems \cite[Sec. 3.1 b)]{Agranovich97}. \begin{example}\label{ex13.2} Let the differential expression $L(\lambda)$ satisfy condition i) of Definition \ref{def13.1}. Then the Dirichlet boundary-value problem for the equation $L(\lambda)u=f$ is parameter-elliptic in the angle $K$. Here the boundary conditions do not depend on the parameter $\lambda$. \end{example} \begin{example}\label{ex13.3} The boundary-value problem $$ \Delta u+\lambda^{2}u=f\;\;\mbox{in}\;\;\Omega,\quad\quad\frac{\partial u}{\partial\nu}-\lambda u=g\;\;\mbox{on}\;\;\partial\Omega $$ is parameter-elliptic in each angle $K_{\varepsilon}:= \{\lambda\in\mathbb{C}:\,\varepsilon\leq|\arg\lambda|\leq\pi-\varepsilon\}$, with $0<\varepsilon<\pi/2$, if the complex plane is slit along the negative semiaxis. \end{example} In the rest of this subsection, the boundary-value problem \eqref{eq13.2} is supposed to be parameter-elliptic in the angle $K$. It follows from Definition \ref{def13.1} in view of Remark \ref{rem13.1} that the boundary-value problem \eqref{eq13.2} is elliptic in $\Omega$ (and need not be regular) for $\lambda=0$. Since $\lambda$ is contained only in the lower-order terms of the differential expressions $L(\lambda)$ and $B_{j}(\lambda)$, the problem is elliptic in $\Omega$ for every $\lambda\in\mathbb{C}$. So, by Theorem \ref{th9.1}, we have the Fredholm bounded operator \begin{equation}\label{eq13.4} (L(\lambda),B(\lambda)):\,H^{s,\varphi}(\Omega)\rightarrow \mathcal{H}_{s,\varphi}(\Omega,\partial\Omega) \end{equation} for each $s>m+1/2$, $\varphi\in\mathcal{M}$, and $\lambda\in\mathbb{C}$. The operator index does not depend on $s$, $\varphi$, and $\lambda$ because $\lambda$ affects only the lower-order terms; see, e.g., \cite[Sec. 20.1, Theorem 20.1.8]{Hermander85}. Moreover, since the boundary-value problem \eqref{eq13.2} is parameter-elliptic in $K$, the operator \eqref{eq13.4} possesses the following additional properties. \begin{theorem}\label{th13.2} \begin{enumerate} \item[i)] There exists a number $\lambda_{0}>0$ such that for each $\lambda\in K$ with $|\lambda|\geq\nobreak\lambda_{0}$ and for any $s>m+1/2$, $\varphi\in\mathcal{M}$, the operator \eqref{eq13.4} is a homeomorphism of $H^{s,\varphi}(\Omega)$ onto $\mathcal{H}_{s,\varphi}(\Omega,\partial\Omega)$.
\item[ii)] Suppose that $s>2q$ and $\varphi\in\mathcal{M}$; then there is a number $c=c(s,\varphi)\geq\nobreak1$ such that, for each $\lambda\in K$, with $|\lambda|\geq\max\{\lambda_{0},1\}$, and for every $u\in H^{s,\varphi}(\Omega)$, we have the following two-sided estimate \begin{eqnarray}\label{eq13.5} &&c^{-1}\bigl(\,\|u\|_{H^{s,\varphi}(\Omega)}+ |\lambda|^{s}\varphi(|\lambda|)\,\|u\|_{L_{2}(\Omega)}\,\bigr)\notag\\ &\leq&\|L(\lambda)u\|_{H^{s-2q,\varphi}(\Omega)}+ |\lambda|^{s-2q}\varphi(|\lambda|)\,\|L(\lambda)u\|_{L_{2}(\Omega)}\notag\\ &&+\,\sum_{j=1}^{q}\, \bigl(\,\|B_{j}(\lambda)u\|_{H^{s-m_{j}-1/2,\varphi}(\partial\Omega)}\notag\\ &&+\,|\lambda|^{s-m_{j}-1/2}\varphi(|\lambda|)\, \|B_{j}(\lambda)u\|_{L_{2}(\partial\Omega)}\,\bigr)\notag\\ &\leq& c\,\bigl(\,\|u\|_{H^{s,\varphi}(\Omega)}+ |\lambda|^{s}\varphi(|\lambda|)\,\|u\|_{L_{2}(\Omega)}\,\bigr). \end{eqnarray} Here $c$ does not depend on $u$ and $\lambda$. \end{enumerate} \end{theorem} Let us comment on assertion ii) of this theorem. For fixed $\lambda$, the estimate \eqref{eq13.5} is written in terms of (non-Hilbert) norms that are equivalent to $\|u\|_{H^{s,\varphi}(\Omega)}$ and $\|(L(\lambda),B(\lambda))u\|_{\mathcal{H}_{s,\varphi}(\Omega,\partial\Omega)}$, respectively. These non-Hilbert norms are used to avoid cumbersome expressions. To ensure that the norm $\|L(\lambda)u\|_{L_{2}(\Omega)}$ in \eqref{eq13.5} is finite, we assume $s>2q$ instead of the condition $s>m+1/2$ used in assertion i). Finally, the supplementary condition $|\lambda|\geq1$ is caused by the fact that the function $\varphi(t)$ is defined for $t\geq1$. Note that the estimate \eqref{eq13.5} is of interest only for $|\lambda|\gg1$. In the Sobolev case where $s\geq2q$ and $\varphi\equiv1$, Theorem \ref{th13.2} was proved by M.S.~Agranovich and M.I.~Vishik \cite[\S~4 and 5]{AgranovichVishik64}; see also \cite[Sec. 3.2]{Agranovich97}. In general, the theorem is proved in \cite[Sec.~7]{07UMJ5}. Note that the right-hand inequality in the estimate \eqref{eq13.5} holds without the assumption that \eqref{eq13.2} is parameter-elliptic. Analogs of Theorem \ref{th13.2} for parameter-elliptic operators, scalar or matrix, are proved in \cite{07Dop5, 07UMJ6, 08MFAT2}. We note an important consequence of Theorem \ref{th13.2} i). Suppose that the boun\-dary-value problem \eqref{eq13.2} is parameter-elliptic on a certain ray $K:=\{\lambda\in\mathbb{C}:\arg\lambda =\mathrm{const}\}$. Then the operator \eqref{eq13.4} has zero index for each $s>m+1/2$, $\varphi\in\mathcal{M}$, and $\lambda\in\mathbb{C}$. \subsection{Mixed elliptic problems}\label{sec13.3} Here we consider a certain class of elliptic boun\-dary-value problems in multiply connected bounded domains. In contrast to the above, we allow the orders of the boundary differential expressions to be distinct on different connected components of the boundary. For instance, studying the Laplace equation in a ring, one may set the Dirichlet condition on a chosen connected component of the ring boundary and the Neumann condition on the other component. The problems under consideration relate to the mixed elliptic boundary-value problems \cite{Peetre61, Schechter60, Simanca87, VishikEskin69}. They have not been investigated as completely as the unmixed elliptic problems. This is connected with certain difficulties that appear when one reduces a mixed problem to a pseudodifferential operator on the boundary; see, e.g., \cite{Simanca87}.
In the problems we consider, the portions of the boundary on which the boundary expressions have distinct orders do not adjoin each other. These problems are called formally mixed. They can be reduced locally to a model elliptic problem in the half-space \cite{07Dop4}. In this subsection, we suppose that the boundary of $\Omega$ consists of $r\geq2$ nonempty connected components $\Gamma_{1},\ldots,\Gamma_{r}$. Fix an integer $q\geq1$ and consider a formally mixed boundary-value problem \begin{equation}\label{eq13.6} L\,u=f\quad\text{in}\;\;\Omega,\quad B^{(k)}_{j}u=g_{k,j}\;\;\text{on}\;\;\Gamma_{k},\;\;j=1,\ldots,q,\;\;k=1,\ldots,r. \end{equation} Here the partial differential expression $L=L(x,D)$, $x\in\overline{\Omega}$, of order $2q$, is the same as in Section \ref{sec9}, whereas $B^{(k)}:=\{B^{(k)}_{j}:j=1,\ldots,q\}$ is a system of boundary linear partial differential expressions given on the component $\Gamma_{k}$. Suppose that the coefficients of the expressions $B^{(k)}_{j}=B^{(k)}_{j}(x,D)$, $x\in\Gamma_{k}$, are infinitely smooth complex-valued functions and that all $m^{(k)}_{j}:=\mathrm{ord}\,B^{(k)}_{j}\leq2q-1$. We denote \begin{gather*} \Lambda:=(L,B^{(1)}_{1},\ldots,B^{(1)}_{q},\ldots,B^{(r)}_{1},\ldots,B^{(r)}_{q}), \\ \mathcal{N}_{\Lambda}:=\{u\in C^{\infty}(\,\overline{\Omega}\,):\,\Lambda u=0\}, \\ m:=\max\,\{\mathrm{ord}\,B^{(k)}_{j}:\,j=1,\ldots,q,\;\;k=1,\ldots,r\}. \end{gather*} The mapping $u\mapsto\Lambda u$, $u\in C^{\infty}(\,\overline{\Omega}\,)$, extends uniquely to a bounded linear operator \begin{gather}\label{eq13.7} \Lambda:\,H^{s,\varphi}(\Omega)\rightarrow H^{s-2q,\varphi}(\Omega)\oplus\bigoplus_{k=1}^{r}\bigoplus_{j=1}^{q} H^{s-m^{(k)}_{j}-1/2,\varphi}(\Gamma_{k})\\ =:\mathcal{H}_{s,\varphi}(\Omega,\Gamma_{1},\ldots,\Gamma_{r}) \notag \end{gather} for each $s>m+1/2$ and $\varphi\in\mathcal{M}$. \begin{definition}\label{def13.2} The formally mixed boundary-value problem \eqref{eq13.6} is called elliptic in the multiply connected domain $\Omega$ if $L$ is proper elliptic on $\overline{\Omega}$ and if, for each $k=1,\ldots,r$, the system $B^{(k)}$ satisfies the Lopatinsky condition with respect to $L$ on $\Gamma_{k}$. \end{definition} Suppose that the formally mixed boundary-value problem \eqref{eq13.6} is elliptic in $\Omega$. Then it has the following properties \cite{07Dop4}. \begin{theorem}\label{th13.3} Let $s>m+1/2$ and $\varphi\in\mathcal{M}$. Then the bounded linear operator \eqref{eq13.7} is Fredholm. Its kernel coincides with $\mathcal{N}_{\Lambda}$, whereas its range consists of all the vectors $$ (f,g_{1,1},\ldots,g_{1,q},\ldots,g_{r,1},\ldots,g_{r,q})\in \mathcal{H}_{s,\varphi}(\Omega,\Gamma_{1},\ldots,\Gamma_{r}) $$ such that \begin{equation}\label{eq13.8} (f,w_{0})_{\Omega}+\sum_{k=1}^{r}\,\sum_{j=1}^{q}\; (g_{k,j},w_{k,j})_{\Gamma_{k}}=0 \end{equation} for each vector-valued function $$ (w_{0},w_{1,1},\ldots,w_{1,q},\ldots,w_{r,1},\ldots,w_{r,q})\in W_{\Lambda}. $$ Here $W_{\Lambda}$ is a certain finite-dimensional space that lies in $$ C^{\infty}(\,\overline{\Omega}\,)\times\prod_{j=1}^{r}\,(C^{\infty}(\Gamma_{j}))^{q} $$ and does not depend on $s$ and $\varphi$. The index of \eqref{eq13.7} is $\dim\mathcal{N}_{\Lambda}-\dim W_{\Lambda}$ and is also independent of $s$,~$\varphi$. \end{theorem} Of course, in \eqref{eq13.8}, the notation $(\cdot,\cdot)_{\Gamma_{k}}$ stands for the inner product in $L_{2}(\Gamma_{k})$.
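For orientation, let us write the example mentioned at the beginning of this subsection in the notation \eqref{eq13.6} and \eqref{eq13.7} (this explicit instantiation is ours). Let $\Omega$ be a planar ring whose boundary components are $\Gamma_{1}$ and $\Gamma_{2}$, and consider the problem $$ \Delta u=f\;\;\mbox{in}\;\;\Omega,\qquad u=g_{1,1}\;\;\mbox{on}\;\;\Gamma_{1},\qquad \frac{\partial u}{\partial\nu}=g_{2,1}\;\;\mbox{on}\;\;\Gamma_{2}. $$ Here $r=2$, $q=1$, $m^{(1)}_{1}=0$, and $m^{(2)}_{1}=1$, so that $m=1$ and the operator \eqref{eq13.7} acts continuously from $H^{s,\varphi}(\Omega)$ to $H^{s-2,\varphi}(\Omega)\oplus H^{s-1/2,\varphi}(\Gamma_{1})\oplus H^{s-3/2,\varphi}(\Gamma_{2})$ for each $s>3/2$ and $\varphi\in\mathcal{M}$. Since the Dirichlet and the Neumann boundary conditions each satisfy the Lopatinsky condition with respect to $\Delta$, this formally mixed problem is elliptic in the sense of Definition \ref{def13.2}, and Theorem \ref{th13.3} applies to it.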
\subsection{Elliptic systems}\label{sec13.4} Extensive classes of elliptic systems of linear partial differential equations were introduced and investigated by I.G.~Petrovskii \cite{Petrovskii39} and A.~Douglis, L.~Nirenberg \cite{DouglisNirenberg55}. For pseudodifferential equations, general elliptic systems were studied by L.~H\"{o}rmander \cite[Sec. 1.0]{Hermander67}. He proved a priori estimates for solutions of these systems in appropriate pairs of Sobolev inner product spaces of arbitrary real orders. If the system is given on a closed smooth manifold, then the estimate is equivalent to the Fredholm property of the corresponding elliptic matrix PsDO; see, e.g., the monograph \cite[Ch. 19]{Hermander85}, and the survey \cite[Sec. 3.2]{Agranovich94}. This fact is of great importance in the theory of elliptic boundary-value problems because each of these problems can be reduced to an elliptic system of pseudodifferential equations on the boundary of the domain; see, e.g., \cite[Ch.~20]{Hermander85} and \cite[Part~IV]{WlokaRowleyLawruk95}. In this subsection, we examine the Petrovskii elliptic systems on the refined Sobolev scale over a closed smooth manifold $\Gamma$ and generalize the results of Subsection \ref{sec6.1} to these systems. Let us consider a system of $p\geq2$ linear equations \begin{equation}\label{eq13.9} \sum_{k=1}^{p}\:A_{j,k}\,u_{k}=f_{j}\quad\mbox{on}\quad\Gamma,\quad j=1,\ldots,p. \end{equation} Here $A_{j,k}$, $j,k=1,\ldots,p$, are scalar classical pseudodifferential operators of arbitrary real orders defined on $\Gamma$. We consider equations \eqref{eq13.9} in the sense of distribution theory, so that $u_{k},\,f_{j}\in\mathcal{D}'(\Gamma)$. Put $m_{k}:=\max\{\mathrm{ord}\,A_{1,k},\ldots,\mathrm{ord}\,A_{p,k}\}$. Let us rewrite the system \eqref{eq13.9} in the matrix form: $Au=f$ on $\Gamma$, where $A:=(A_{j,k})$ is a square matrix of order $p$, and $u=\mathrm{col}\,(u_{1},\ldots,u_{p})$, $f=\mathrm{col}\,(f_{1},\ldots,f_{p})$ are function columns. The mapping $u\mapsto Au$ is a linear continuous operator on the space $(\mathcal{D}'(\Gamma))^{p}$. By Lemma \ref{lem6.1}, a restriction of this mapping defines a bounded linear operator \begin{equation}\label{eq13.10} A:\,\bigoplus_{k=1}^{p}\,H^{s+m_{k},\,\varphi}(\Gamma)\rightarrow (H^{s,\varphi}(\Gamma))^{p} \end{equation} for each $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$. \begin{definition}\label{def13.3} The system \eqref{eq13.9} and the matrix PsDO $A$ are called Petrovskii elliptic on $\Gamma$ if $\det\bigl(a^{(0)}_{j,k}(x,\xi)\bigr)_{j,k=1}^{p}\neq0$ for each point $x\in\Gamma$ and covector $\xi\in T^{\ast}_{x}\Gamma\setminus\{0\}$. Here $a_{j,k}^{(0)}(x,\xi)$ is the principal symbol of $A_{j,k}$ provided $\mathrm{ord}\,A_{j,k}=m_{k}$; otherwise $a_{j,k}^{(0)}(x,\xi)\equiv0$. \end{definition} We suppose that the system $Au=f$ is Petrovskii elliptic on $\Gamma$. Then both the spaces \begin{gather*} N:=\bigl\{\,u\in(C^{\infty}(\Gamma))^{p}: \,Au=0\;\;\mbox{on}\;\;\Gamma\,\bigr\},\\ N^{+}:=\bigl\{v\in(C^{\infty}(\Gamma))^{p}:\,A^{+}v=0 \;\;\mbox{on}\;\;\Gamma\,\bigr\} \end{gather*} are finite-dimensional \cite[Sec. 3.2]{Agranovich94}. Here $A^{+}$ is the matrix pseudodifferential operator formally adjoint to $A$ with respect to the inner product in $(L_{2}(\Gamma))^{p}$. \begin{theorem}\label{th13.4} The operator \eqref{eq13.10} corresponding to the elliptic system is Fredholm for each $s\in\mathbb{R}$ and $\varphi\in\mathcal{M}$.
Its kernel coincides with $N$, whereas its range consists of all the vectors $f\in(H^{s,\varphi}(\Gamma))^{p}$ such that $\sum_{j=1}^{p}\,(f_{j},v_{j})_{\Gamma}=0$ for each $(v_{1},\ldots,v_{p})\in N^{+}$. The index of \eqref{eq13.10} is equal to $\dim N-\dim N^{+}$ and independent of $s$ and $\varphi$. \end{theorem} This theorem is proved in \cite{08BPAS3} together with other properties of the system \eqref{eq13.9}. They are similar to those given in Subsection \ref{sec6.1}, where the scalar case is treated. We also refer to the second author's papers \cite{07Dop5, 08MFAT2, 08UMB3, 09UMJ3} devoted to various classes of elliptic systems in H\"ormander spaces. \subsection{Boundary-value problems for elliptic systems}\label{sec13.5} Boundary-value problems for various classes of elliptic systems of linear partial differential equations were investigated by S.~Agmon, A.~Douglis, and L.~Nirenberg, M.S.~Agranovich and A.S.~Dynin, L.~H\"ormander, L.N.~Slobodetskii, V.A.~Solonnikov, L.R.~Volevich; see the monograph \cite{WlokaRowleyLawruk95}, up to now the only one devoted especially to these problems, the survey \cite[\S~6]{Agranovich97}, and the references given therein. It was proved that the operator corresponding to the problem is Fredholm on appropriate pairs of positive-order Sobolev spaces. For boundary-value problems for Petrovskii elliptic systems, we extend this result to the one-sided refined Sobolev scale. Let us consider a system of $p\geq2$ partial differential equations \begin{equation}\label{eq13.11} \sum_{k=1}^{p}\,L_{j,k}\,u_{k}=f_{j}\quad\mbox{in}\quad\Omega,\quad j=1,\ldots,p. \end{equation} Here $L_{j,k}=L_{j,k}(x,D)$, $x\in\overline{\Omega}$, $j,k=1,\ldots,p$, are scalar linear partial differential expressions given on $\overline{\Omega}$. The expression $L_{j,k}$ is of an arbitrary finite order, and its coefficients are supposed to be complex-valued and infinitely smooth on $\overline{\Omega}$. Put $m_{k}:=\max\{\mathrm{ord}\,L_{1,k},\ldots,\mathrm{ord}\,L_{p,k}\}$ so that $m_{k}$ is the maximal order of the derivatives of the unknown function $u_{k}$. Suppose that all $m_{k}\geq1$ and that $\sum_{k=1}^{p}\,m_{k}$ is even, say equal to $2q$. We consider the solutions of \eqref{eq13.11} that satisfy the boundary conditions \begin{equation}\label{eq13.12} \sum_{k=1}^{p}\,B_{j,k}\,u_{k}=g_{j}\quad\mbox{on}\quad\partial\Omega,\quad j=1,\ldots,q. \end{equation} Here $B_{j,k}=B_{j,k}(x,D)$, with $x\in\partial\Omega$, $j=1,\ldots,q$, and $k=1,\ldots,p$, are boundary linear partial differential expressions with infinitely smooth coefficients. We suppose that $\mathrm{ord}\,B_{j,k}\leq m_{k}-1$ and set $r_{j}:=\min\,\{m_{k}-\mathrm{ord}\,B_{j,k}:\,k=1,\ldots,p\}$, with the convention that $\mathrm{ord}\,B_{j,k}:=-\infty$ if $B_{j,k}\equiv0$; thus $\mathrm{ord}\,B_{j,k}\leq m_{k}-r_{j}$. Let us write the boundary-value problem \eqref{eq13.11}, \eqref{eq13.12} in the matrix form $$ Lu=f\;\;\mbox{in}\;\;\Omega,\quad Bu=g\;\;\mbox{on}\;\;\partial\Omega. $$ Here $L:=(L_{j,k})_{j,k=1}^{p}$ and $B:=(B_{j,k})_{\substack{j=1,\ldots,q \\ k=1,\ldots,p}}$ are matrix differential expressions, whereas $u:=\mathrm{col}\,(u_{1},\ldots,u_{p})$, $f:=\mathrm{col}\,(f_{1},\ldots,f_{p})$, and $g:=\mathrm{col}\,(g_{1},\ldots,g_{q})$ are function columns.
It follows from Lemma \ref{eq9.1} that the mapping $u\mapsto(Lu,Bu)$, $u\in(C^{\infty}(\,\overline{\Omega}\,))^{p}$, extends uniquely to a continuous linear operator \begin{gather}\label{eq13.13} (L,B):\,\bigoplus_{k=1}^{p}H^{s+m_{k},\varphi}(\Omega)\rightarrow (H^{s,\varphi}(\Omega))^{p}\oplus\bigoplus_{j=1}^{q} H^{s+r_{j}-1/2,\varphi}(\partial\Omega)\\ =:\mathbf{H}_{s,\varphi}(\Omega,\partial\Omega) \notag \end{gather} for each $s>-r+1/2$ and $\varphi\in\mathcal{M}$, with $r:=\min\{r_{1},\ldots,r_{q}\}\geq1$. We are interested in properties of this operator provided that the boundary-value problem is elliptic in the Petrovskii sense. Recall the ellipticity definition. With $L$ and $B$ we associate the matrices of homogeneous polynomials $$ L^{(0)}(x,\xi):=\bigl(L_{j,k}^{(0)}(x,\xi)\bigr)_{j,k=1}^{p},\quad B^{(0)}(x,\xi):= \bigl(B_{j,k}^{(0)}(x,\xi)\bigr)_{\substack{j=1,\ldots,q \\ k=1,\ldots,p}}. $$ Here $L_{j,k}^{(0)}(x,\xi)$, $x\in\overline{\Omega}$, $\xi\in\mathbb{C}^{n}$, is the principal symbol of $L_{j,k}(x,D)$ provided $\mathrm{ord}\,L_{j,k}=m_{k}$; otherwise $L_{j,k}^{(0)}(x,\xi)\equiv0$. Similarly, $B_{j,k}^{(0)}(x,\xi)$, $x\in\partial\Omega$, $\xi\in\mathbb{C}^{n}$, is the principal symbol of $B_{j,k}(x,D)$ provided $\mathrm{ord}\,B_{j,k}=m_{k}-r_{j}$; otherwise $B_{j,k}^{(0)}(x,\xi)\equiv0$. \begin{definition}\label{def13.4} The boundary-value problem \eqref{eq13.11}, \eqref{eq13.12} is called Petrovskii elliptic in $\Omega$ if the following conditions are satisfied: \begin{enumerate} \item[i)] The system \eqref{eq13.11} is proper elliptic on $\overline{\Omega}$; i.e., condition i) of Definition \ref{def9.1} is fulfilled, with $\det L^{(0)}(x,\xi'+\tau\xi'')$ in place of $L^{(0)}(x,\xi'+\tau\xi'')$. \item[ii)] The boundary conditions \eqref{eq13.12} satisfy the Lopatinsky condition with respect to \eqref{eq13.11} on $\partial\Omega$; i.e., for an arbitrary point $x\in\partial\Omega$ and for each vector $\xi\neq0$ tangent to $\partial\Omega$ at $x$, the rows of the matrix $B^{(0)}(x,\xi+\tau\nu(x))\times L^{(0)}_{\mathrm{c}}(x,\xi+\tau\nu(x))$ are linearly independent polynomials in $\tau$ modulo $\prod_{j=1}^{q}\bigl(\tau-\tau^{+}_{j}(x;\xi,\nu(x))\bigr)$. Here $L^{(0)}_{\mathrm{c}}(x,\xi)$ is the transpose of the matrix composed of the cofactors of the elements of the matrix $L^{(0)}(x,\xi)$. \end{enumerate} \end{definition} Note that if condition i) is satisfied, then the system \eqref{eq13.11} is Petrovskii elliptic on $\overline{\Omega}$, i.e., $\det L^{(0)}(x,\xi)\neq0$ for each $x\in\overline{\Omega}$ and $\xi\in\mathbb{R}^{n}\setminus\{0\}$. The converse is true provided that $\dim\Omega\geq3$; see \cite[Sec. 6.1~a)]{Agranovich97}. \begin{example}\label{ex13.4} The elliptic boundary-value problem for the Cauchy-Riemann system: \begin{gather*} \frac{\partial u_{1}}{\partial x_{1}}-\frac{\partial u_{2}}{\partial x_{2}}=f_{1},\quad \frac{\partial u_{1}}{\partial x_{2}}+\frac{\partial u_{2}}{\partial x_{1}}=f_{2}\quad\mbox{in}\quad\Omega,\\ u_{1}+u_{2}=g\quad\mbox{on}\quad\partial\Omega. \end{gather*} Here $n=p=2$ and $m_{1}=m_{2}=1$, so that $q=1$. The Cauchy-Riemann system is an instance of homogeneous elliptic systems, which satisfy Definition \ref{def13.4} with $m_{1}=\ldots=m_{p}$.
\end{example} \begin{example}\label{ex13.5} The Petrovskii elliptic boundary-value problem \begin{gather*} \frac{\partial u_{1}}{\partial x_{1}}-\frac{\partial^{3}u_{2}}{\partial x_{2}^{3}}=f_{1},\quad \frac{\partial u_{1}}{\partial x_{2}}+\frac{\partial^{3} u_{2}}{\partial x_{1}^{3}}=f_{2}\quad\mbox{in}\quad\Omega, \\ u_{1}=g_{1},\quad u_{2}\,\biggl(\mbox{or}\;\frac{\partial u_{2}}{\partial\nu},\;\mbox{or}\;\frac{\partial^{2} u_{2}}{\partial\nu^{2}}\biggr)=g_{2}\quad\mbox{on}\quad\partial\Omega. \end{gather*} Here $n=p=2$, $m_{1}=1$, and $m_{2}=3$ so that $q=2$. This system is not homo\-ge\-ne\-ous elliptic. \end{example} Other examples of elliptic systems, of various kinds, are given in \cite[\S~6.2]{Agranovich97}. Suppose the boundary-value problem \eqref{eq13.11}, \eqref{eq13.12} is Petrovskii elliptic in $\Omega$. Then it has the following properties \cite{07Dop6}. \begin{theorem}\label{th13.5} Let $s>-r+1/2$ and $\varphi\in\mathcal{M}$. Then the bounded linear operator \eqref{eq13.13} is Fredholm. The kernel $\mathcal{N}$ of \eqref{eq13.13} lies in $(C^{\infty}(\,\overline{\Omega}\,))^{p}$ and does not depend on $s$ and $\varphi$. The range of \eqref{eq13.13} consists of all the vectors $(f_{1},\ldots,f_{p};g_{1},\ldots,g_{q})\in \mathbf{H}_{s,\varphi}(\Omega,\partial\Omega)$ such that $$ \sum_{j=1}^{p}\,(f_{j},w_{j})_{\Omega}+\sum_{j=1}^{q}\, (g_{j},h_{j})_{\partial\Omega}=0 $$ for each vector-valued function $(w_{1},\ldots,w_{p};\,h_{1},\ldots,h_{q})\in W$. Here $W$ is a certain finite-dimensional space that lies in $(C^{\infty}(\,\overline{\Omega}\,))^{p}\times (C^{\infty}(\Gamma))^{q}$. The index of the operator \eqref{eq13.13} is $\dim\mathcal{N}-\dim W$ and independent of $s$,~$\varphi$. \end{theorem} \end{document}
\begin{document} \title{Lorentz spaces with variable exponents} \begin{abstract} We introduce Lorentz spaces $L_{p(\cdot),q}(\mathbb{R}^n)$ and $L_{p(\cdot),q(\cdot)}(\mathbb{R}^n)$ with variable exponents. We prove several basic properties of these spaces, including embeddings and the identity $L_{p(\cdot),p(\cdot)}(\mathbb{R}^n)=L_{p(\cdot)}(\mathbb{R}^n)$. We also show that these spaces arise through real interpolation between $L_{{p(\cdot)}}(\mathbb{R}^n)$ and $L_\infty(\mathbb{R}^n)$. Furthermore, we answer in the negative the question posed in \cite{DHN} of whether the Marcinkiewicz interpolation theorem holds in the frame of Lebesgue spaces with variable integrability. \end{abstract} \section{Introduction} Lorentz spaces were introduced in \cite{Lor1, Lor2} as a generalization of classical Lebesgue spaces and have become a standard tool in mathematical analysis, cf. \cite{AM,CPSS, BS, Grafaneu}. For an introduction to Lorentz spaces we refer e.g. to \cite[Chapter V]{SW}, \cite[Chapter 4]{BS} or \cite[Chapter 1]{Grafaneu}. One of the main ingredients of the theory of Lorentz spaces is the celebrated Marcinkiewicz interpolation theorem, which states that, under certain conditions, one can deduce the strong boundedness of a sublinear operator $T$ on the interpolation spaces from its weak boundedness at the endpoints of the interpolation pair. This approach was used for example in the classical book of Stein \cite{S} to prove the boundedness of the Hardy-Littlewood maximal operator on $L_p(\mathbb{R}^n)$ for $1<p\le \infty$. Another classical topic we shall touch upon in our work is that of the Lebesgue spaces $L_{{p(\cdot)}}(\mathbb{R}^n)$ of variable integrability. The study of this class of function spaces goes back to Orlicz \cite{Orlicz}. After the survey paper of Kov\'a\v{c}ik and R\'akosn\'\i k \cite{KoRa}, there has been an enormous interest in these spaces (and in the Sobolev spaces $W^1_{{p(\cdot)}}(\Omega)$ built on $L_{{p(\cdot)}}(\Omega)$), especially in connection with applications to the modeling of electrorheological fluids \cite{Ruz1}. Moreover, the spaces $L_{{p(\cdot)}}(\mathbb{R}^n)$ have interesting applications in the theory of PDEs, the calculus of variations, financial mathematics and image processing. A recent overview of this rapidly growing field is given in \cite{DHHR}. A fundamental breakthrough concerning spaces of variable integrability was the observation that, under certain regularity assumptions on ${p(\cdot)}$, the Hardy-Littlewood maximal operator is also bounded on $L_{{p(\cdot)}}(\mathbb{R}^n)$, see \cite{Max1}. This result has been generalized to wider classes of exponents ${p(\cdot)}$ in \cite{CruzUribe03}, \cite{Max3} and \cite{Max4}. Unfortunately, it turned out that the standard proof of Stein \cite{S} for spaces with constant indices breaks down and completely different methods had to be used to achieve this result, see \cite{DHHR}. The main aim of this paper is to return to this topic and to study the validity of the Marcinkiewicz interpolation theorem in the frame of Lebesgue spaces with variable integrability. For this reason, we first explore the possibility of extending the definition of Lorentz spaces to the setting of variable integrability exponents. We show that there is a natural way to define the Lorentz spaces $L_{{p(\cdot)},{q(\cdot)}}(\mathbb{R}^n)$ which extends the scale of Lebesgue spaces with variable exponents, i.e. $L_{{p(\cdot)},{p(\cdot)}}(\mathbb{R}^n)=L_{{p(\cdot)}}(\mathbb{R}^n)$ for ${p(\cdot)}={q(\cdot)}$.
Later on, we study the interpolation properties of this new scale of spaces. A special case (see Remark \ref{rem:4}) of Theorem \ref{thm:interpol} shows that $$ (L_{{p(\cdot)}}(\mathbb{R}^n),L_\infty(\mathbb{R}^n))_{\theta,q}=L_{\tilde p(\cdot),q}(\mathbb{R}^n) $$ for $0<\theta<1$ and $$ \frac{1}{\tilde p(\cdot)}=\frac{1-\theta}{{p(\cdot)}}. $$ Finally, we discuss the validity of the Marcinkiewicz interpolation theorem in the context of this new scale of function spaces, an open question posed in \cite{DHN}. It turns out that the answer is negative and we provide a detailed counterexample. The structure of the paper is as follows. Section 2 collects the classical definitions of Lorentz spaces with constant indices and of Lebesgue spaces with variable integrability. Furthermore, the definition of Lorentz spaces with variable integrability is given. After collecting some basic properties of this new scale of function spaces in Section 3, we study the real interpolation properties of this scale in Section 4. Section 5 is devoted to Marcinkiewicz interpolation and contains the counterexample to \cite[Question 2.8]{DHN}. Finally, Section 6 collects some possible research directions and open problems. At the end of this introduction we would like to mention that another definition of Lorentz spaces $\mathcal{L}^{{p(\cdot)},{q(\cdot)}}(\mathbb{R}^n)$ with variable exponents was recently given in \cite{EphreKoki}, with \cite{KokiSamko} and \cite{IsraKoki} as forerunners. Their definition works with the non-increasing rearrangement and two variable exponents ${p(\cdot)},{q(\cdot)}:[0,\infty)\to [1,\infty]$. As a consequence, the important and natural identity $\mathcal{L}^{{p(\cdot)},{p(\cdot)}}(\mathbb{R}^n)=L_{{p(\cdot)}}(\mathbb{R}^n)$ does not hold in this scale of variable Lorentz spaces. This is a consequence of the definition of $L_{{p(\cdot)}}(\mathbb{R}^n)$, in which the variable exponent ${p(\cdot)}$ is defined on $\mathbb{R}^n$ and not on $[0,\infty)$. We return to this topic in detail in Remark \ref{rem:Samko}. \section{Old and new definitions} In this section we collect the well-known definitions of classical Lorentz spaces (Section \ref{def_sec:Lorentz}) and Lebesgue spaces of variable exponents (Section \ref{def_sec:Lebesgue}). Finally, in Section \ref{def_sec:Lorentz_var}, we provide the definition of Lorentz spaces with variable exponents. For simplicity, we start in Definition \ref{dfn2} with the more intuitive spaces $L_{{p(\cdot)},q}(\mathbb{R}^n)$. The Lorentz spaces $L_{{p(\cdot)},{q(\cdot)}}(\mathbb{R}^n)$ with both exponents variable are introduced shortly after in Definition \ref{dfn3}. \subsection{Lebesgue spaces with variable exponents}\label{def_sec:Lebesgue} Let us now recall the definition of the variable Lebesgue spaces $L_{{p(\cdot)}}(\mathbb{R}^n)$. A measurable function $p:\mathbb{R}^n\to(0,\infty]$ is called a variable exponent function if it is bounded away from zero. For a set $A\subset\mathbb{R}^n$ we denote $p_A^+=\operatornamewithlimits{ess\,sup}_{x\in A}p(x)$ and $p_A^-=\operatornamewithlimits{ess\,inf}_{x\in A}p(x)$; we use the abbreviations $p^+=p_{\mathbb{R}^n}^+$ and $p^-=p_{\mathbb{R}^n}^-$.
The variable exponent Lebesgue space $L_{p(\cdot)}(\mathbb{R}^n)$ consists of all measurable functions $f$ such that there exists a $\lambda>0$ for which the modular \begin{align*} \varrho_{L_{p(\cdot)}(\mathbb{R}^n)}(f/\lambda)=\int_{\mathbb{R}^n}\varphi_{p(x)}\left(\frac{|f(x)|}{\lambda}\right)dx \end{align*} is finite, where \begin{align} \varphi_p(t)=\begin{cases}t^p&\text{if}\ p\in(0,\infty),\\ 0&\text{if}\ p=\infty\ \text{and}\ t\le 1,\\\label{Formelphi} \infty&\text{if}\ p=\infty\ \text{and}\ t>1. \end{cases} \end{align} This definition is nowadays standard and was used also in \cite[Section 2.2]{AlHa10} and \cite[Definition 3.2.1]{DHHR}. If we define $\mathbb{R}^n_\infty=\{x\in\mathbb{R}^n:p(x)=\infty\}$ and $\mathbb{R}^n_0=\mathbb{R}^n\setminus\mathbb{R}^n_\infty$, then the Luxemburg norm of a function $f\in L_{p(\cdot)}(\mathbb{R}^n)$ is given by \begin{align*} \norm{f}{L_{p(\cdot)}(\mathbb{R}^n)}&=\inf\{\lambda>0:\varrho_{L_{p(\cdot)}(\mathbb{R}^n)}(f/\lambda)\leq1\}\\ &=\inf\left\{\lambda>0:\int_{\mathbb{R}^n_0}\!\!\!\left(\frac{|f(x)|}{\lambda}\right)^{p(x)}\!\!\!\!\!\!dx\leq1\ \text{and}\ |f(x)|\le \lambda \ \text{for a.e.}\ x\in\mathbb{R}^n_\infty\right\}. \end{align*} It is a norm if $p(\cdot)\geq1$, and it is always at least a quasi-norm if $p^->0$. Furthermore, the spaces $L_{p(\cdot)}(\mathbb{R}^n)$ are complete, hence they are (quasi-)Banach spaces if $p^->0$; see \cite{KoRa} for details and further properties. We denote the class of all measurable functions $p:\mathbb{R}^n\to(0,\infty]$ such that $p^->0$ by $\mathcal{P}(\mathbb{R}^n)$, and the corresponding modular is denoted by $\varrho_{p(\cdot)}$ instead of $\varrho_{L_{p(\cdot)}(\mathbb{R}^n)}$. \subsection{Classical Lorentz spaces}\label{def_sec:Lorentz} Next, we recall the definition of classical Lorentz spaces as it can be found in \cite{BS} or \cite{Grafaneu}. This definition makes use of the so-called \emph{non-increasing rearrangement} $f^*$ of a function $f$. For a measurable function $f:\mathbb{R}^n\to\mathbb{C}$, we first define the \emph{distribution function} $\mu_f:[0,\infty)\to[0,\infty]$ by \begin{align*} \mu_f(s)=\mu\{x\in\mathbb{R}^n:|f(x)|>s\}, \quad s\ge 0, \end{align*} where $\mu$ denotes the Lebesgue measure on $\mathbb{R}^n$. The distribution function $\mu_f$ provides information about the size of $f$ but not about the local behavior of $f$. The (generalized) inverse of the distribution function is called the \emph{non-increasing rearrangement} $f^*:[0,\infty)\to[0,\infty]$ and is defined by \begin{align*} f^*(t)=\inf\{s>0:\mu_f(s)\leq t\}. \end{align*} Equipped with these tools, we are now ready to give the definition of the classical Lorentz spaces with constant indices. \begin{dfn}\label{dfn1} Given a measurable function $f$ on $\mathbb{R}^n$ and real parameters $0<p,q\leq\infty$, we define \begin{align*} \norm{f}{L_{p,q}(\mathbb{R}^n)}=\begin{cases}\displaystyle \left(\int_0^\infty\left(t^{1/p}f^*(t)\right)^q\frac{dt}{t}\right)^{1/q},& \text{if }q<\infty\\ \displaystyle\sup_{t>0}t^{1/p}f^*(t),&\text{if }q=\infty. \end{cases} \end{align*} The space of all measurable $f:\mathbb{R}^n\to\mathbb{C}$ with $\norm{f}{L_{p,q}(\mathbb{R}^n)}<\infty$ is denoted by $L_{p,q}(\mathbb{R}^n)$. The spaces are complete, and they are normable for $1<p<\infty$ and $1\leq q\leq\infty$; see \cite[Theorem 1.4.11 and Exercise 1.4.3]{Grafaneu}.
\end{dfn} The use of the non-increasing rearrangement makes it rather difficult to extend Definition \ref{dfn1} to variable exponents ${p(\cdot)},{q(\cdot)}:\mathbb{R}^n\to(0,\infty]$. It is very well known that the spaces $L_{p(\cdot)}(\mathbb{R}^n)$ are not translation invariant (see Proposition 3.6.1 in \cite{DHHR}) and therefore the membership of $f$ in $L_{p(\cdot)}(\mathbb{R}^n)$ cannot be characterized by any condition on $f^*$ only.\\ To avoid this obstacle, we look for an equivalent characterization of the Lorentz spaces $L_{p,q}(\mathbb{R}^n)$ which does not make use of the notion of non-increasing rearrangement. Therefore we calculate for $p,q<\infty$, using Fubini's theorem and the substitution $s^{p/q}:=t$ (cf. Proposition 1.4.9 in \cite{Grafaneu}), \begin{align} \norm{f}{L_{p,q}(\mathbb{R}^n)}&=\left(\int_0^\infty\left(t^{1/p}f^*(t)\right)^q\frac{dt}{t}\right)^{1/q}=\left(\int_0^\infty\frac pq f^*(s^{p/q})^{q}{ds}\right)^{1/q}\notag\\ &=\left(\frac pq\right)^{1/q}\left(\int_0^\infty\mu\{s\geq0:f^*(s^{p/q})^{q}> t\}dt\right)^{1/q}\notag\\ &=\left(\frac pq\right)^{1/q}\left(\int_0^\infty\mu\{s\geq0:f^*(s^{p/q})> t^{1/q}\}dt\right)^{1/q}\notag\\ &=p^{1/q}\left(\int_0^\infty\lambda^q \mu\{s\geq0:f^*(s^{p/q})> \lambda\}\frac{d\lambda}{\lambda}\right)^{1/q}\notag\\ &=p^{1/q}\left(\int_0^\infty\lambda^q\mu\{s\geq0:f^*(s)> \lambda\}^{q/p}\frac{d\lambda}{\lambda}\right)^{1/q}\notag\\ &=p^{1/q}\left(\int_0^\infty\lambda^q \norm{\chi_{\{x\in\mathbb{R}^n:|f(x)|> \lambda\}}}{L_p(\mathbb{R}^n)}^{q}\frac{d\lambda}{\lambda}\right)^{1/q}\label{Formel1}. \end{align} Here, $\chi_{\{x\in\mathbb{R}^n:|f(x)|> \lambda\}}$ stands for the characteristic function of the set $\{x\in\mathbb{R}^n:|f(x)|> \lambda\}$. If no confusion is possible, this will also be denoted by $\chi_{\{|f|>\lambda\}}$. The equation \eqref{Formel1} can be discretized and we derive \begin{align} \norm{f}{L_{p,q}(\mathbb{R}^n)} &\sim p^{1/q}\left(\sum_{k=-\infty}^\infty 2^{kq}\norm{\chi_{\{x\in\mathbb{R}^n:|f(x)|>2^k\}}}{L_p(\mathbb{R}^n)}^q\right)^{1/q}.\label{Formel2} \end{align} \subsection{Lorentz spaces with variable exponents}\label{def_sec:Lorentz_var} The expression \eqref{Formel1} for the norm can be generalized quite easily to a variable exponent ${p(\cdot)}$ with $q$ constant. Surprisingly enough, even $q$ can be considered variable when we use the spaces ${\ell_{q(\cdot)}(L_{p(\cdot)})}$ of Almeida and H\"ast\"o \cite{AlHa10} and the discretized equation \eqref{Formel2}. Furthermore, we do not destroy the local properties of the function $f$, since it is not rearranged and the exponents are defined on $\mathbb{R}^n$. \begin{dfn}\label{dfn2} Let $p\in\mathcal{P}(\mathbb{R}^n)$ be a variable exponent with $0<p^-\leq p^+\leq\infty$ and let $0<q\leq\infty$. Then $L_{p(\cdot),q}(\mathbb{R}^n)$ is the collection of all measurable functions $f:\mathbb{R}^n\to\mathbb{C}$ such that \begin{align}\label{LorentzNormqconst} \norm{f}{L_{{p(\cdot)},q}(\mathbb{R}^n)}= \begin{cases}\displaystyle \left(\int_0^\infty \lambda^q\norm{\chi_{\{x\in\mathbb{R}^n:|f(x)|>\lambda\}}}{L_{p(\cdot)}(\mathbb{R}^n)}^q\frac{d\lambda}{\lambda}\right)^{1/q},&\text{ if }q<\infty\\ \displaystyle\sup_{\lambda>0}\lambda\norm{\chi_{\{x\in\mathbb{R}^n:|f(x)|>\lambda\}}}{L_{p(\cdot)}(\mathbb{R}^n)},&\text{ if }q=\infty \end{cases} \end{align} is finite. \end{dfn} Using the ${\ell_{q(\cdot)}(L_{p(\cdot)})}$ spaces introduced recently in \cite{AlHa10}, we may even consider the situation where also $q$ is variable. Let us first give a small numerical illustration of Definition \ref{dfn2} and then recall their approach.
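The following sketch in Python is our own, purely illustrative addition: the test function, the exponent, the grid and the truncation of the dyadic sum in \eqref{Formel2} are all ad hoc choices, and we make no attempt to control the discretization error. It approximates the Luxemburg quasi-norm of $L_{p(\cdot)}$ by bisection on the modular and then evaluates a truncated version of the dyadic sum in \eqref{Formel2}. For a constant exponent with $q=p$ the two quantities printed in the last line should be of comparable size, in rough accordance with the identity $L_{{p(\cdot)},{p(\cdot)}}(\mathbb{R}^n)=L_{{p(\cdot)}}(\mathbb{R}^n)$ proved below (note that the discretization and the omitted factor $p^{1/q}$ from \eqref{Formel1} only allow a comparison up to constants).
\begin{verbatim}
# Illustrative sketch only: crude 1-d discretization of the Luxemburg
# quasi-norm of L_{p(.)} and of the Lorentz quasi-norm of the definition
# above via the dyadic sum; all choices below (test function, exponent,
# grid, truncation of the dyadic sum) are ad hoc.
import numpy as np

def luxemburg_norm(f, p, dx, tol=1e-10):
    # inf{ lam > 0 : int |f(x)/lam|^{p(x)} dx <= 1 }, found by bisection
    if not np.any(f):
        return 0.0
    modular = lambda lam: np.sum((np.abs(f) / lam) ** p) * dx
    hi = max(np.max(np.abs(f)), 1.0)
    while modular(hi) > 1.0:              # bracket the norm from above
        hi *= 2.0
    lo = 0.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if modular(mid) > 1.0 else (lo, mid)
    return hi

def lorentz_norm(f, p, q, dx, kmin=-40, kmax=40):
    # truncated ( sum_k 2^{kq} || chi_{|f| > 2^k} ||_{p(.)}^q )^{1/q}
    total = 0.0
    for k in range(kmin, kmax + 1):
        chi = (np.abs(f) > 2.0 ** k).astype(float)
        total += (2.0 ** k * luxemburg_norm(chi, p, dx)) ** q
    return total ** (1.0 / q)

x = np.linspace(1e-4, 1.0, 20000)          # grid on (0,1)
dx = x[1] - x[0]
f = x ** (-0.25)                           # unbounded but integrable test function
p = 2.0 + np.sin(2.0 * np.pi * x)          # a bounded variable exponent

print(luxemburg_norm(f, p, dx))            # quasi-norm of f in L_{p(.)}
print(lorentz_norm(f, p, 3.0, dx))         # quasi-norm of f in L_{p(.),3}
p3 = np.full_like(x, 3.0)                  # constant exponent: compare L_{3,3} and L_3
print(lorentz_norm(f, p3, 3.0, dx), luxemburg_norm(f, p3, dx))
\end{verbatim}
We now recall the approach of \cite{AlHa10}.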
For a sequence $(f_k)_{k\in\mathbb{Z}}$ we define the modular \begin{align*} \varrho_{\ell_{q(\cdot)}(L_{p(\cdot)})}((f_k)_k)={s(\cdot)}um_{k\in\mathbb{Z}}\inf\left\{\lambda_k>0:\varrho_{p(\cdot)}\left(\frac{f_k}{\lambda_k^{1/{q(\cdot)}}}\right)\leq1\right\}, \end{align*} with the convention $\lambda^{1/\infty}=1$. If $q^+<\infty$ or if ${q(\cdot)}\leq{p(\cdot)}$ we can replace this by a simpler expression \begin{align}\label{EasyModular} \varrho_{\ell_{q(\cdot)}(L_{p(\cdot)})}((f_k)_k)={s(\cdot)}um_{k\in\mathbb{Z}}\norm{\varphi_{q(\cdot)}(|f_k(\cdot)|)}{L_{\frac{p(\cdot)}{q(\cdot)}}(\mathbb{R}n)}, \end{align} which is much more intuitive. Here $\varphi_q(t)$ equals basically $t^q$, see \eqref{Formelphi}. The norm in these spaces gets defined as usual as the Luxemburg norm \begin{align*} \norm{(f_k)_k}{{\ell_{q(\cdot)}(L_{p(\cdot)})}}=\inf\{\mu>0:\varrho_{{\ell_{q(\cdot)}(L_{p(\cdot)})}}\left({f_k}/{\mu}\right)\leq1\}. \end{align*} Up to now, it is not completely clear under which conditions on ${p(\cdot)}$ and ${q(\cdot)}$ the expression above becomes a norm. It was shown in \cite{AlHa10} that it always constitutes a quasi-norm if $p^-,q^->0$. Further it is known (see \cite{KV11}) that it is a norm if either $\frac1{p(\cdot)}+\frac1{q(\cdot)}\leq1$ holds pointwise for all $x\in\mathbb{R}n$ or if $1\leq{q(\cdot)}\leq{p(\cdot)}\leq\infty$ holds pointwise. Also in this work there is given an example where $\min({p(\cdot)},{q(\cdot)})\geq1$ but the triangle inequality does not hold. Let us mention that it is an open question if there exists an equivalent norm on ${\ell_{q(\cdot)}(L_{p(\cdot)})}$ whenever $\min({p(\cdot)},{q(\cdot)})\geq1$. Nevertheless, since our exponents are between $(0,\infty]$ we generally work with quasi-norms and there are no obstacles with that.\\ We use now the modular $\varrho_{\ell_{q(\cdot)}(L_{p(\cdot)})}$ and \eqref{Formel2} to define the variable Lorentz spaces $L_\p(\Rn)q$. \begin{dfn}\label{dfn3} Let $p,q\in\mathcal{P}(\Rn)$ be two variable exponents with range $0<p^-\leq p^+\leq\infty$ and $0<q^-\leq q^+\leq\infty$. Then $L_\p(\Rn)q$ is the collection of all measurable functions $f:\mathbb{R}n\to\mathbb{C}$ such that \begin{align}\label{def:norm1} \norm{f}{L_\p(\Rn)q}=\inf\Bigl\{\lambda>0:\varrho_{\ell_{q(\cdot)}(L_{p(\cdot)})}\Bigl(2^k\chi_{\{x\in\mathbb{R}n:|f(x)/\lambda|>2^k\}}\Bigr)\le 1\Bigr\}<\infty. \end{align} \end{dfn} Before we discuss the properties of these new function spaces we derive an equivalent expression for $\norm{f}{L_\p(\Rn)q}$. \begin{lem}\label{lem:equiv}Let $p,q\in\mathcal{P}(\Rn)$ be two variable exponents with range $0<p^-\leq p^+\leq\infty$ and $0<q^-\leq q^+\leq\infty$. Then \begin{equation}\label{def:norm2} \norm{f}{L_\p(\Rn)q}\approx \norm{\left(2^k\chi_{\{x\in\mathbb{R}n:|f(x)|>2^k\}}\right)_{k=-\infty}^\infty}{{\ell_{q(\cdot)}(L_{p(\cdot)})}}. \end{equation} \end{lem} \begin{proof} If $\lambda=2^j$ for some $j\in\mathbb{Z}$, we obtain $$ \varrho_{\ell_{q(\cdot)}(L_{p(\cdot)})}\Bigl(2^k\chi_{\{x\in\mathbb{R}n:|f(x)/\lambda|>2^k\}}\Bigr)= \varrho_{\ell_{q(\cdot)}(L_{p(\cdot)})}\biggl(\frac{2^k\chi_{\{x\in\mathbb{R}n:|f(x)|>2^k\}}}{\lambda}\biggr). $$ The rest of the proof then follows by simple monotonicity arguments. \end{proof} \begin{rem} The somehow more complicated definition of the quasi-norm in Definition \ref{dfn3} was necessary. 
Only the expression in \eqref{def:norm1} is homogeneous; i.e.~ \begin{align*} \norm{\lambda f}{L_\p(\Rn)q}=|\lambda|\cdot \norm{f}{L_\p(\Rn)q}{q(\cdot)}uad\text{for all $\lambda\in\mathbb{R}$.} \end{align*} Easy examples show that the right-hand side of \eqref{def:norm2} fails to have this property. Nevertheless due to Lemma \ref{lem:equiv}, both expressions are equivalent and therefore define the same spaces $L_\p(\Rn)q$. In majority of our considerations, we shall work with the somehow simpler expression \eqref{def:norm2}. \end{rem} If $q(\cdot)=q$ is a constant function, then the Proposition 3.3 in \cite{AlHa10} shows that $\ell_{q(\cdot)}(L_{p(\cdot)})$ is really an iterated space, i.e. $$ \|(f_k)_{k\in\mathbb{Z}}|\ell_{q}(L_{p(\cdot)})\|=\Bigl({s(\cdot)}um_{k\in\mathbb{Z}}\|f_k|L_{{p(\cdot)}}(\mathbb{R}^n)\|^q\Bigr)^{1/q} $$ (with an appropriate modification if $q=\infty$). By \eqref{Formel2} and \eqref{def:norm2}, we obtain that Definitions \ref{dfn2} and \ref{dfn3} are equivalent. Moreover, we observe by \eqref{Formel1} and \eqref{Formel2} that for constant functions $p(x)=p$ and $q(x)=q$ we get an equivalent norm for the usual Lorentz spaces $L_{p,q}(\mathbb{R}n)$, whenever $p,q<\infty$. A similar calculation justifies this fact also if $p<q=\infty$. If $p=\infty$, then the usual Lorentz spaces $L_{\infty,q}(\mathbb{R}n)$ defined by Definition \ref{dfn1} consist only of the zero function whenever $0<q<\infty$, see Section 1.4.2 in \cite{Grafaneu}. It is easy to see, that Definition \ref{dfn2} applied to $p(\cdot)=\infty$ and $q<\infty$ gives $L_{\infty,q}(\mathbb{R}n)=L_\infty(\mathbb{R}n)$. Nevertheless, we will show that $L_{{p(\cdot)},{p(\cdot)}}(\mathbb{R}n)=L_\p(\Rn)$ and therefore the case $p=q=\infty$ is also included for $p=\infty$.\\ Summarizing, our spaces $L_\p(\Rn)q$ are equivalent to the usual Lorentz spaces $L_{p,q}(\mathbb{R}n)$ if ${p(\cdot)}=p$ and ${q(\cdot)}=q$ are constant functions. The only exception is the case if $p=\infty$ and $0<q<\infty$, see Theorem \ref{thm:embedding}. \begin{rem}\label{rem:Samko} Another approach to generalize this definition to variable exponents was given in \cite{EphreKoki}, with forerunners \cite{KokiSamko} and \cite{IsraKoki}. They introduced the spaces ${\mathcal L}^{{p(\cdot)},{q(\cdot)}}(\Omega)$ by the corresponding quasi-norm $$ \|f|{\mathcal L}^{{p(\cdot)},{q(\cdot)}}(\Omega)\|=\|t^{\frac{1}{p(t)}-\frac{1}{q(t)}}f^*(t)|L_{{q(\cdot)}}(0,|\Omega|)\|, $$ where $\Omega{s(\cdot)}ubset \mathbb{R}n$ is a measurable set, $|\Omega|$ is its Lebesgue measure and $p,q:(0,|\Omega|)\to (0,\infty)$ are variable exponents. The spaces ${\mathcal L}^{{p(\cdot)},{q(\cdot)}}(\Omega)$ coincide with usual Lorentz spaces $L_{p,q}(\Omega)$ if ${p(\cdot)}=p$ and ${q(\cdot)}=q$ are constant. On the other hand, in this scale there is no hope for the identity ${\mathcal L}^{{p(\cdot)},{p(\cdot)}}(\Omega)=L_{p(\cdot)}(\Omega)$ to hold, since in the definition of the Lebesgue spaces the variable exponent ${p(\cdot)}$ is defined on whole $\Omega$. \end{rem} {s(\cdot)}ection{Basic properties} In this section, we prove several basic properties of the new scale of function spaces. \begin{thm} Let $p,q\in \mathcal{P}(\Rn)$. Then $L_\p(\Rn)q$ are quasi-Banach spaces. \end{thm} \begin{proof} To prove that \eqref{def:norm1} defines a quasi-norm, we only have to show the quasi-triangle inequality. 
We use the estimate \begin{align*} \{x\in\mathbb{R}n:|f(x)+g(x)|>2^k\}&{s(\cdot)}ubset\{x\in\mathbb{R}n:|f(x)|+|g(x)|>2^k\}\\ &{s(\cdot)}ubset\{x\in\mathbb{R}n:|f(x)|>2^{k-1}\}\cup\{x\in\mathbb{R}n:|g(x)|>2^{k-1}\} \end{align*} to obtain \begin{align*} \chi_{\{x\in\mathbb{R}n:|f(x)+g(x)|>2^k\}}&\leq \chi_{\{x\in\mathbb{R}n:|f(x)|>2^{k-1}\}}+\chi_{\{x\in\mathbb{R}n:|g(x)|>2^{k-1}\}}. \end{align*} This in turn implies \begin{align*} &\norm{f+g}{L_\p(\Rn)q}\approx\norm{\left(2^k\chi_{\{x\in\mathbb{R}n:|f(x)+g(x)|> 2^k\}}\right)_{k=-\infty}^\infty}{{\ell_{q(\cdot)}(L_{p(\cdot)})}}\\ &{q(\cdot)}quad\le \norm{\left(2^k\chi_{\{x\in\mathbb{R}n:|f(x)|>2^{k-1}\}}\right)_{k=-\infty}^\infty+\left(2^k\chi_{\{x\in\mathbb{R}n:|g(x)|>2^{k-1}\}}\right)_{k=-\infty}^\infty}{{\ell_{q(\cdot)}(L_{p(\cdot)})}}\\ &{q(\cdot)}quad\le c\biggl\{ \norm{\left(2^k\chi_{\{x\in\mathbb{R}n:|f(x)|>2^{k-1}\}}\right)_{k=-\infty}^\infty}{{\ell_{q(\cdot)}(L_{p(\cdot)})}}+\norm{\left(2^k\chi_{\{x\in\mathbb{R}n:|g(x)|>2^{k-1}\}}\right)_{k=-\infty}^\infty}{{\ell_{q(\cdot)}(L_{p(\cdot)})}}\biggr\}\\ &{q(\cdot)}quad\lesssim \norm{f}{L_\p(\Rn)q}+\norm{g}{L_\p(\Rn)q}, \end{align*} where $c$ is the constant from the quasi-triangle inequality of ${\ell_{q(\cdot)}(L_{p(\cdot)})}$. To show that the spaces $L_\p(\Rn)q$ are complete we take a Cauchy sequence $(f_l)_{l\in\mathbb{N}}{s(\cdot)}ubsetL_\p(\Rn)q$. We chose a subsequence (which we denote by $(f_l)_{l\in\mathbb{N}}$ again) with $$ \|f_{l+1}-f_l|L_{{p(\cdot)},{q(\cdot)}}\|\le \frac{1}{2^{2l}},{q(\cdot)}uad l\in\mathbb{N}. $$ For notational reasons, we put $f_0=0.$ We consider the function $$ g(t):={s(\cdot)}um_{l=0}^\infty |f_{l+1}(t)-f_l(t)|. $$ As $\chi_{\{g>\lambda\}}(x)\le {s(\cdot)}um_{l=0}^\infty \chi_{\{|f_{l+1}-f_l|>\lambda/2^{l+1}\}}(x)$, we obtain \begin{align*} \|\chi_{\{g>\lambda\}}|L_\p(\Rn)\|^r&\le {s(\cdot)}um_{l=0}^\infty \|\chi_{\{|f_{l+1}-f_l|>\lambda/2^{l+1}\}}|L_\p(\Rn)\|^r\\ &\le {s(\cdot)}um_{l=0}^\infty \frac{2^{(l+1)r}}{\lambda^r}\cdot \|f_{l+1}-f_l|L_{{p(\cdot)},\infty}\|^r \lesssim {s(\cdot)}um_{l=0}^\infty \frac{2^{(l+1)r}}{\lambda^r} 2^{-2lr} \end{align*} where $r=\min(p^-,1)$ and we have used the embedding $L_\p(\Rn)q\hookrightarrow L_{{p(\cdot)},\infty}(\mathbb{R}n)$, see Theorem \ref{thm:embedding}. As the last sum converges, we get $\|\chi_{\{g>\lambda\}}|L_\p(\Rn)\|\to 0$ for $\lambda\to \infty$ and $g$ is finite almost everywhere. Therefore, the series $$ f(x)={s(\cdot)}um_{l=0}^\infty f_{l+1}(x)-f_l(x){q(\cdot)}uad \text{and}{q(\cdot)}uad \tilde f(x)={s(\cdot)}um_{l=1}^\infty f_{l+1}(x)-f_l(x)=f(x)-f_1(x), {q(\cdot)}uad x\in\mathbb{R}^n $$ converge also almost everywhere. It remains to show that $f\inL_\p(\Rn)q$ and $f_l\to f$ in $L_\p(\Rn)q.$ The estimate $\displaystyle 2^k \chi_{\{|\tilde f|>2^k\}}\le {s(\cdot)}um_{l=1}^\infty 2^k\chi_{\{|f_{l+1}-f_l|>2^{k-l}\}}$ implies that \begin{align*} \|2^k \chi_{\{|\tilde f|>2^k\}}|{\ell_{q(\cdot)}(L_{p(\cdot)})}\|^\varrho&\lesssim {s(\cdot)}um_{l=1}^\infty \|2^k\chi_{\{|f_{l+1}-f_l|>2^{k-l}\}}|{\ell_{q(\cdot)}(L_{p(\cdot)})}\|^\varrho\\ &= {s(\cdot)}um_{l=1}^\infty 2^{l\varrho}\|2^{k-l}\chi_{\{|f_{l+1}-f_l|>2^{k-l}\}}|{\ell_{q(\cdot)}(L_{p(\cdot)})}\|^\varrho\lesssim{s(\cdot)}um_{l=1}^\infty 2^{l\varrho}\cdot 2^{-2l\varrho}<\infty, \end{align*} where $\varrho>0$ is chosen small enough, cf. \cite[Theorem 3.8]{AlHa10}. Therefore, $\tilde f\inL_\p(\Rn)q$ and $f=\tilde f+f_1\in L_\p(\Rn)q.$ Finally, for $l\in\mathbb{N}$ fixed, we consider $$ f-f_l={s(\cdot)}um_{m=l}^\infty (f_{m+1}-f_m). 
$$ The estimate $\chi_{\{|f-f_l|>2^k\}}\le {s(\cdot)}um_{m=l}^\infty \chi_{\{|f_{m+1}-f_m|>2^{k-(m-l+1)}\}}$ implies the convergence $\|f-f_l|L_\p(\Rn)q\| \to0$ for $l\to\infty$ in a similar manner as above. \end{proof} We continue with a theorem showing that the scale of variable Lorentz spaces includes the scale of Lebesgue spaces with variable exponent. We would like to emphasize, that the identity $L_{{p(\cdot)},{p(\cdot)}}(\mathbb{R}n)=L_\p(\Rn)$ holds without any restrictions on ${p(\cdot)}$ with $0<p^-\leq p^+\leq\infty$. \begin{thm} If $p\in\mathcal{P}(\Rn)$, then it holds $L_{{p(\cdot)},{p(\cdot)}}(\mathbb{R}n)=L_\p(\Rn)$. \end{thm} \begin{proof} We want to show \begin{align}\label{ModulareInequality} \varrho_{p(\cdot)}(f/2)\leq\varrho_{\ell_{p(\cdot)}(L_{p(\cdot)})}\left((2^k\chi_{\{x\in\mathbb{R}^n:|f(x)|>2^k\}})_{k=-\infty}^\infty\right)\leq\varrho_{p(\cdot)}(cf), \end{align} where $c=(1-2^{-p^-})^{-1/p^-}$. From the inequalities above we conclude easily \begin{align*} \frac12\norm{f}{L_\p(\Rn)}\lesssim\norm{f}{L_{{p(\cdot)},{p(\cdot)}}(\mathbb{R}n)}\lesssim c\norm{f}{L_\p(\Rn)}, \end{align*} which proves the theorem. We first treat the case $|\mathbb{R}n_{\!\!\!\infty}|=|\{x\in\mathbb{R}n:p(x)=\infty\}|=0$. At the end we comment on the case $|\mathbb{R}n_{\!\!\!\infty}|>0$. Since ${p(\cdot)}={q(\cdot)}$ we can use the easy expression \eqref{EasyModular} for the modular and then the first inequality in \eqref{ModulareInequality} follows from \begin{align*} \varrho_{\ell_{p(\cdot)}(L_{p(\cdot)})}\left((2^k\chi_{\{|f|>2^k\}})_{k=-\infty}^\infty\right)&={s(\cdot)}um_{k=-\infty}^\infty\norm{|2^k\chi_{\{|f|>2^k\}}|^{p(x)}}{L_1(\mathbb{R}n)}\\ &=\int_\mathbb{R}n{s(\cdot)}um_{\{k\in \mathbb{Z}:2^k<|f(x)|\}}2^{kp(x)}dx. \end{align*} For fixed $x\in\mathbb{R}n$ with $|f(x)|>0$ we choose the unique $k_x\in\mathbb{Z}$ with $2^{k_xp(x)}<|f(x)|^{p(x)}\leq2^{(k_x+1)p(x)}$ and obtain \begin{align}\label{fkZero} {s(\cdot)}um_{\{k\in\mathbb{Z}:2^k<|f(x)|\}}2^{kp(x)}={s(\cdot)}um_{k=-\infty}^{k_x}\left(2^{-p(x)}\right)^{-k}=2^{k_xp(x)}\frac{1}{1-2^{-p(x)}}. \end{align} Using $1\leq\frac{1}{1-2^{-p(x)}}$ we get from \eqref{fkZero} \begin{align*} \varrho_{\ell_{p(\cdot)}(L_{p(\cdot)})}\left((2^k\chi_{\{|f(x)|>2^k\}})_{k=-\infty}^\infty\right)&=\int_\mathbb{R}n{s(\cdot)}um_{\{k\in\mathbb{Z}:2^k<|f(x)|\}}2^{kp(x)}dx\\ &\geq\int_\mathbb{R}n2^{k_xp(x)}dx=\int_\mathbb{R}n2^{(k_x+1)p(x)}2^{-p(x)}dx\\ &\ge\int_\mathbb{R}n|\frac12f(x)|^{p(x)}dx=\varrho_{p(\cdot)}(f/2). \end{align*} The converse inequality uses again \eqref{fkZero} with $\frac{1}{1-2^{-p(x)}}\leq\frac{1}{1-2^{-p^-}}$ and follows in a similar way \begin{align*} \varrho_{\ell_{p(\cdot)}(L_{p(\cdot)})}\left((2^k\chi_{\{|f(x)|>2^k\}})_{k=-\infty}^\infty\right)&=\int_\mathbb{R}n{s(\cdot)}um_{\{k\in\mathbb{Z}:2^k<|f(x)|\}}2^{kp(x)}dx\\ &\hspace{-5em}=\int_\mathbb{R}n2^{k_xp(x)}\frac{1}{1-2^{-p(x)}}dx\leq\int_\mathbb{R}n2^{k_xp(x)}\frac{1}{1-2^{-p^-}}dx\\ &\hspace{-5em}\leq\int_\mathbb{R}n|f(x)|^{p(x)}\left(\frac{1}{1-2^{-p^-}}\right)^{\frac{p(x)}{p^-}}dx=\varrho_{p(\cdot)}(cf), \end{align*} with $c=(1-2^{-p^-})^{-1/p^-}$. Now, we come back to the case, where $|\mathbb{R}n_{\!\!\!\infty}|>0$. First, we split our function $f=f_0+f_\infty:=f\cdot\chi_{{\mathbb{R}n_{0}}}+f\cdot\chi_{\mathbb{R}n_{\!\!\!\infty}}$. 
Then we use the considerations above and \begin{align*} \norm{f_0}{L_\p(\Rn)}+\norm{f_\infty}{L_\p(\Rn)}&\leq 2\norm{f_0+f_\infty}{L_\p(\Rn)}, \intertext{see \cite[Remark 3.2.3]{DHHR}, and} \norm{f_0}{L_{{p(\cdot)},{p(\cdot)}}(\mathbb{R}n)}+\norm{f_\infty}{L_{{p(\cdot)},{p(\cdot)}}(\mathbb{R}n)}&\leq 2\norm{f_0+f_\infty}{L_{{p(\cdot)},{p(\cdot)}}(\mathbb{R}n)}, \end{align*} which is implied by \begin{align*} \varrho_{\ell_{p(\cdot)}(L_{p(\cdot)})}((2^k\chi_{\{|f_0+f_\infty|>2^k\}})_{k=-\infty}^\infty)&=\varrho_{\ell_{p(\cdot)}(L_{p(\cdot)})}((2^k\chi_{\{|f_0|>2^k\}})_{k=-\infty}^\infty)\\ &+\varrho_{\ell_{p(\cdot)}(L_{p(\cdot)})}((2^k\chi_{\{|f_\infty|>2^k\}})_{k=-\infty}^\infty). \end{align*} \end{proof} Next we show that the new scale of function spaces satisfies some elementary embeddings, which are very well know in the case of constant exponents. \begin{thm}\label{thm:embedding} \begin{itemize} \item[(i)] Let $p,q_0,q_1\in \mathcal{P}(\Rn)$ with $q_0(\cdot)\leq q_1(\cdot)$ pointwise. Then $L_{{p(\cdot)},q_0(\cdot)}(\mathbb{R}n)\hookrightarrow L_{{p(\cdot)},q_1(\cdot)}(\mathbb{R}n)$. \item[(ii)] Let $q\in\mathcal{P}(\Rn)$. Then $L_{\infty,q(\cdot)}(\mathbb{R}^n)=L_{\infty}(\mathbb{R}^n)$. \item[(iii)] Let $p_0,p_1,q_0,q_1\in \mathcal{P}(\Rn)$ with $p_0^+<\infty$ and $\alpha:=(p_1/p_0)^->1$. Then the inequality \begin{equation}\label{eq:bddsupp1} \|f|L_{p_0(\cdot),q_0(\cdot)}\| \le c \|f|L_{p_1(\cdot),q_1(\cdot)}\| \end{equation} holds for all measurable $f$ with ${\rm supp\ } f{s(\cdot)}ubset [0,1]^n$ with $c$ independent of $f$. \item[(iv)] Let $p_0,p_1,q\in\mathcal{P}(\Rn)$ with $p_0(\cdot)\le p_1(\cdot)$ pointwise. Then the inequality \begin{equation}\label{eq:bddsupp3} \|f|L_{p_0(\cdot),q(\cdot)}\| \le c \|f|L_{p_1(\cdot),q(\cdot)}\| \end{equation} holds for all measurable $f$ with ${\rm supp\ } f{s(\cdot)}ubset [0,1]^n$ with $c$ independent of $f$. \end{itemize} \end{thm} \begin{proof} The first statement follows from Lemma \ref{lem:equiv} and the embedding $\ell_{q_0(\cdot)}(L_{p(\cdot)})\hookrightarrow\ell_{q_1(\cdot)}(L_{p(\cdot)})$ which has been proven in \cite{AlHa10}.\\ To prove the second part of the theorem, it is enough to use the above embedding in the form \begin{align*} L_{\infty,q^-}(\mathbb{R}n)\hookrightarrow L_{\infty,{q(\cdot)}}(\mathbb{R}n)\hookrightarrow L_{\infty,\infty}(\mathbb{R}n)=L_\infty(\mathbb{R}n) \end{align*} and the simple embedding $L_\infty(\mathbb{R}n)\hookrightarrow L_{\infty,q^-}(\mathbb{R}n)$, which follows directly from Definition \ref{dfn3} and Lemma \ref{lem:equiv}.\\ The proof of the third statement is based on the following simple fact. For every $A{s(\cdot)}ubset\mathbb{R}^n$ with $\mu(A)\le 1$ the following inequality holds \begin{equation}\label{eq:bddsupp2} \|\chi_A|L_{p_0(\cdot)}\|\le \|\chi_A|L_{p_1(\cdot)}\|^\alpha, \end{equation} where again $\alpha=(p_1/p_0)^->1.$ To show \eqref{eq:bddsupp1}, it is enough to assume that $q_1(\cdot)=\infty$ and \begin{equation*} \|f|L_{p_1(\cdot),\infty}(\mathbb{R}^n)\|\approx {s(\cdot)}up_{k\in\mathbb{Z}} 2^k\|\chi_{\{|f|>2^k\}}|L_{p_1(\cdot)}(\mathbb{R}^n)\|=1. 
\end{equation*} Using \eqref{eq:bddsupp2}, we obtain \begin{align*} \|f|L_{p_0(\cdot),q_0(\cdot)}\|^{q_0^-}&\lesssim \|f|L_{p_0(\cdot),q_0^-}\|^{q_0^-}\le {s(\cdot)}um_{k=-\infty}^\infty 2^{kq_0^-}\|\chi_{\{|f|>2^k\}}|L_{p_0(\cdot)}(\mathbb{R}^n)\|^{q_0^-}\\ &\le {s(\cdot)}um_{k=-\infty}^0 2^{kq_0^-}+{s(\cdot)}um_{k=1}^\infty 2^{kq_0^-}\|\chi_{\{|f|>2^k\}}|L_{p_1(\cdot)}(\mathbb{R}^n)\|^{\alpha q_0^-}\\ &\le c+{s(\cdot)}um_{k=1}^\infty 2^{kq_0^-}(2^{-k}\|f|L_{p_1(\cdot),\infty}(\mathbb{R}^n)\|)^{\alpha q_0^-}\le c' \end{align*} with an obvious modification if $q_0^-=\infty.$ This justifies \eqref{eq:bddsupp1}. The proof of the forth statement follows a similar pattern. We start with $f$ such that $$ \|f|L_{p_1(\cdot),q(\cdot)}(\mathbb{R}^n)\| \approx \norm{\left(2^k\chi_{\{x\in\mathbb{R}n:|f(x)|>2^k\}}\right)_{k=-\infty}^\infty}{\ell_{q(\cdot)}(L_{p_1(\cdot)})}=1. $$ We use again the splitting into two parts, namely with $k\le 0$ and $k\ge 1$, respectively. In the first case, we use the bounded support of $f$ to obtain. \begin{align*} \|(2^k\chi_{\{|f|>2^k\}})_{k=-\infty}^0|\ell_{q(\cdot)}(L_{p_0(\cdot)})\|&\le \|(2^k\chi_{[0,1]^n})_{k=-\infty}^0|\ell_{q(\cdot)}(L_{p_0(\cdot)})\|\\ &\lesssim \|(2^k\chi_{[0,1]^n})_{k=-\infty}^0|\ell_{q^-}(L_{p_0(\cdot)})\|\lesssim 1 \end{align*} The second part with $k\in \mathbb{N}$ may be estimated directly as \begin{align*} \|(2^k\chi_{\{|f|>2^k\}})_{k=1}^\infty|\ell_{q(\cdot)}(L_{p_0(\cdot)})\|\le \|(2^k\chi_{\{|f|>2^k\}})_{k=1}^\infty|\ell_{q(\cdot)}(L_{p_1(\cdot)})\|\lesssim 1, \end{align*} which finishes the proof of \eqref{eq:bddsupp3}. \end{proof} \begin{rem} The second part of this theorem is in contrast to \cite[Section 1.4.2]{Grafaneu}, where $L_{\infty,q}(\mathbb{R}n)=\{0\}$ is stated. But this is also not surprising since we did not take the extra factor $p^{1/q}$ appearing in \eqref{Formel1} and \eqref{Formel2} into our Definitions \ref{dfn2} and \ref{dfn3} of our variable Lorentz spaces. \end{rem} {s(\cdot)}ection{Interpolation} We stated already in the introduction that the main importance of Lorentz spaces lies in their connection with (real) interpolation theory. In this section, we shall explore the interpolation properties of Lorentz spaces with variable exponents. But before we come to this, we recall some basics of the interpolation theory, as they may be found for example in the classical monographs \cite{BerghL} and \cite{Triebel}. We shall touch only the two most important interpolation methods - the real interpolation and the complex interpolation. Complex interpolation of variable exponent spaces has already been treated in \cite{DHN}. It turned out that the expected result \begin{align*} [L_{p_0(\cdot)}(\mathbb{R}n),L_{p_1(\cdot)}(\mathbb{R}n)]_{\theta}=L_{p_\theta(\cdot)}(\mathbb{R}n) \end{align*} does hold for all $0<\theta<1$ and all $\frac1{p_\theta(\cdot)}=\frac{1-\theta}{p_0(\cdot)}+\frac{\theta}{p_1(\cdot)}$ with $p_0^-,p_1^-\geq1$.\\ This complex interpolation result has been complemented in \cite{Kop09} by showing \begin{align*} [L_{p(\cdot)}(\mathbb{R}n), BMO(\mathbb{R}n)]_\theta=L_{p_\theta(\cdot)}(\mathbb{R}n){q(\cdot)}uad\text{and}{q(\cdot)}uad[L_{p(\cdot)}(\mathbb{R}n), H_1(\mathbb{R}n)]_\theta=L_{p_\theta(\cdot)}(\mathbb{R}n) \end{align*} under some regularity conditions on ${p(\cdot)}$. We shall therefore concentrate on the real interpolation method (the so-called \emph{K-method}). Let $X_0$ and $X_1$ be two (quasi-)Banach spaces, which are both embedded into a topological vector space $Y$. 
Then the spaces $X_0+X_1$ is defined as the set of all $x\in Y$, which may be written as $x=x_0+x_1$ with $x_0\in X_0$ and $x_1\in X_1.$ For any $x\in X_0+X_1$ and any $0<t<\infty$, the so-called \emph{Peetre K-functional} is defined by \begin{equation}\label{eq:K} K(x,t,X_0,X_1)=\inf\{\|x_0|X_0\|+t\|x_1|X_1\|:x=x_0+x_1, x_0\in X_0, x_1\in X_1\}. \end{equation} If the spaces $X_0$ and $X_1$ are fixed and no confusion is possible, then we abbreviate this to $K(x,t)$. If $0<\theta<1$ and $0<q\le\infty$, then the \emph{real interpolation space} $(X_0,X_1)_{\theta,q}$ is defined as the set of all $x\in X_0+X_1$, such that \begin{align*} \|x|(X_0,X_1)_{\theta,q}\|=\begin{cases} \displaystyle \biggl(\int_0^\infty t^{-\theta q} K(x,t)^q \frac{dt}{t}\biggr)^{1/q},{q(\cdot)}uad &\text{if\ }q<\infty,\\ \displaystyle {s(\cdot)}up_{t>0}t^{-\theta}K(x,t),&\text{if\ }q=\infty \end{cases} \end{align*} is finite. \begin{thm}\label{thm:interpol} Let $p,q_0\in\mathcal{P}(\Rn)$ with $p^+<\infty$. Let $0<q\le\infty$ and $0<\theta<1$ and put $$ \frac{1}{\tilde p(\cdot)}=\frac{1-\theta}{p(\cdot)}. $$ Then \begin{equation}\label{eq:interpol1} (L_{{p(\cdot)},q_0(\cdot)}(\mathbb{R}^n),L_\infty(\mathbb{R}^n))_{\theta,q}=L_{\tilde p(\cdot),q}(\mathbb{R}^n) \end{equation} in the sense of equivalent quasi-norms. \end{thm} \begin{proof} To prove \eqref{eq:interpol1}, we will justify the following chain of embeddings. \begin{align} \notag L_{\tilde p(\cdot),q}(\mathbb{R}^n)&\hookrightarrow (L_{{p(\cdot)},q_0^-}(\mathbb{R}^n),L_\infty(\mathbb{R}^n))_{\theta,q}\hookrightarrow (L_{{p(\cdot)},q_0(\cdot)}(\mathbb{R}^n),L_\infty(\mathbb{R}^n))_{\theta,q}\\ \label{eq:review1}&\hookrightarrow (L_{{p(\cdot)},\infty}(\mathbb{R}^n),L_\infty(\mathbb{R}^n))_{\theta,q}\hookrightarrow L_{\tilde p(\cdot),q}(\mathbb{R}^n). \end{align} The second and third embedding in \eqref{eq:review1} follow by monotonicity, cf. Theorem \ref{thm:embedding}. The last embedding in \eqref{eq:review1}, namely \begin{equation}\label{eq:interpol2} (L_{{p(\cdot)},\infty}(\mathbb{R}^n),L_\infty(\mathbb{R}^n))_{\theta,q}\hookrightarrow L_{\tilde p(\cdot),q}(\mathbb{R}^n), \end{equation} will be proven in Step 1. Finally, in Step 2 we shall prove that $$ L_{\tilde p(\cdot),q}(\mathbb{R}^n)\hookrightarrow (L_{{p(\cdot)}}(\mathbb{R}^n),L_\infty(\mathbb{R}^n))_{\theta,q}. $$ The proof of this embedding only works with norms of characteristic functions, which do not depend on the second parameter of the Lorentz space. This is very well known for classical Lorentz spaces, follows from Definition \ref{dfn2} for Lorentz spaces with variable $p(\cdot)$ and $q$ constant, and finally follows by monotonicity also for Lorentz spaces with both indices variable. Therefore, the proof given in Step 2 also justifies the first embedding in \eqref{eq:review1}. \emph{Step 1.} We shall prove \eqref{eq:interpol2}, i.e. that \begin{equation}\label{eq:interpol3} \int_0^\infty \lambda^q\norm{\chi_{\{x\in\mathbb{R}n:|f(x)|> \lambda\}}}{L_{\tilde p(\cdot)}(\mathbb{R}^n)}^q\frac{d\lambda}{\lambda}\lesssim \int_0^\infty t^{-\theta q} K(f,t)^q \frac{dt}{t}. \end{equation} We shall use that \begin{align*} K(f,t)&=\inf\{\|f^0\|_{{p(\cdot)},\infty}+t\|f^1\|_\infty:f=f^0+f^1\}\\ &=\inf_{\mu>0}\{\|(|f(x)|-\mu)_+\|_{p(\cdot),\infty}+t\|\min(|f(x)|,\mu)\|_\infty\}\\ &\ge \inf_{\mu>0}\{\mu \|\chi_{\{x\in\mathbb{R}n:|f(x)|\ge 2\mu\}}\|_{p(\cdot)}+t\|\min(f(x),\mu)\|_\infty\} \end{align*} for every fixed $t>0$. 
We denote $h(\lambda)=\|\chi_{\{x\in\mathbb{R}n:|f(x)|\ge \lambda\}}\|_{{p(\cdot)}}$ for $\lambda>0$ and $f_*(t)={s(\cdot)}up\{\lambda>0:h(\lambda)\ge t\}$ its generalized inverse function. Using the assumption $p^+<\infty$, we obtain that $h(f_*(t))\ge t$ for all $t>0$. Then we choose $\mu$ by $\mu=f_*(t)/2$. This leads to $K(f,t)\ge f_*(t)h(f_*(t))/2\ge tf_*(t)/2.$ The proof is then a consequence of the following two estimates on the left and right hand side of \eqref{eq:interpol3} \begin{align*} LHS\eqref{eq:interpol3}&=\int_0^\infty \lambda^q h(\lambda)^{(1-\theta)q}\frac{d\lambda}{\lambda}\approx {s(\cdot)}um_{k=-\infty}^\infty \int_{\lambda:2^k< h(\lambda)\le2^{k+1}}\lambda^q h(\lambda)^{(1-\theta)q}\frac{d\lambda}{\lambda}\\ &\lesssim {s(\cdot)}um_{k=-\infty}^\infty 2^{k(1-\theta)q}\int_{\lambda:2^k\le h(\lambda)}\lambda^q \frac{d\lambda}{\lambda} \le {s(\cdot)}um_{k=-\infty}^\infty 2^{k(1-\theta)q}\int_{0}^{f_*(2^k)}\lambda^q \frac{d\lambda}{\lambda}\lesssim {s(\cdot)}um_{k=-\infty}^\infty 2^{k(1-\theta)q}f_*(2^k)^q \end{align*} and \begin{align*} RHS\eqref{eq:interpol3}&\ge {s(\cdot)}um_{k=-\infty}^\infty \int_{2^k}^{2^{k+1}}t^{(1-\theta)q}f_*(t)^q\frac{dt}{t}\gtrsim {s(\cdot)}um_{k=-\infty}^\infty 2^{k(1-\theta)q}\int_{2^k}^{2^{k+1}}f_*(t)^q\frac{dt}{t}\\ &\gtrsim{s(\cdot)}um_{k=-\infty}^\infty 2^{(k+1)(1-\theta)q}f_*(2^{k+1})^q ={s(\cdot)}um_{k=-\infty}^\infty 2^{k(1-\theta)q}f_*(2^{k})^q. \end{align*} If $q=\infty$, only notational modifications are necessary. \emph{Step 2.} Next, we prove that \begin{equation}\label{eq:interpol10} L_{\tilde p(\cdot),q}(\mathbb{R}^n)\hookrightarrow (L_{{p(\cdot)}}(\mathbb{R}^n),L_\infty(\mathbb{R}^n))_{\theta,q}, \end{equation} i.e. \begin{equation}\label{eq:interpol11} \int_0^\infty t^{-\theta q} K(f,t)^q \frac{dt}{t}\lesssim\int_0^\infty \lambda^q\norm{\chi_{\{x\in\mathbb{R}n:|f(x)|> \lambda\}}}{L_{\tilde p(\cdot)}(\mathbb{R}^n)}^q\frac{d\lambda}{\lambda}. \end{equation} We start again with a reformulation of $K(f,t)$. \begin{align*} K(f,t)&=\inf\{\|f^0\|_{{p(\cdot)}}+t\|f^1\|_\infty:f=f^0+f^1\}\\ &=\inf_{\mu>0}\{\|(|f(x)|-\mu)_+\|_{p(\cdot)}+t\|\min(|f(x)|,\mu)\|_\infty\}\\ &\le \inf_{\mu>0}\{\|f\chi_{\{x\in\mathbb{R}n:|f(x)|> \mu\}}\|_{p(\cdot)}+t\mu\}\\ &\lesssim \inf_{\mu>0}\{\|{s(\cdot)}um_{j=0}^\infty 2^j\mu\chi_{\{x\in\mathbb{R}n:|f(x)|> 2^j\mu\}}\|_{p(\cdot)}+t\mu\}\\ &\le \inf_{\mu>0}\{{s(\cdot)}um_{j=0}^\infty 2^j\mu\|\chi_{\{x\in\mathbb{R}n:|f(x)|> 2^j\mu\}}\|_{p(\cdot)}+t\mu\}\\ &= \inf_{\mu>0}\{{s(\cdot)}um_{j=0}^\infty 2^j\mu h(2^j\mu)+t\mu\}, \end{align*} where we denoted $h(\lambda):=\|\chi_{\{x\in\mathbb{R}n:|f(x)|> \lambda\}}\|_{p(\cdot)}$. Let us remark, that we have assumed $p^-\ge 1$ in the calculation above to be able to use the triangle inequality without additional powers. The modification in the case $p^-<1$ is straightforward and left to the reader. For fixed $t>0$, we choose $\mu=\mu(t)$ by $$ \mu(t):=\inf\{\mu>0:{s(\cdot)}um_{j=0}^\infty 2^j h(2^j\mu)\le t\}. $$ As the function $h$ is right-continuous, we obtain immediately ${s(\cdot)}um_{j=0}^\infty 2^j h(2^j\mu(t))\le t$. We first estimate the right-hand side of \eqref{eq:interpol11} as \begin{align*} RHS\eqref{eq:interpol11} \gtrsim {s(\cdot)}um_{k=-\infty}^\infty 2^{kq}h(2^k)^{(1-\theta)q}. 
\end{align*} Furthermore, we discretize the left-hand side of \eqref{eq:interpol11} as \begin{align*} LHS\eqref{eq:interpol11}&\le\int_0^\infty t^{-\theta q} t^q\mu(t)^q \frac{dt}{t} =\sum_{k=-\infty}^\infty 2^{kq}\int_{t:2^k< \mu(t)\le 2^{k+1}}t^{(1-\theta)q}\frac{dt}{t}\\ &\le \sum_{k=-\infty}^\infty 2^{kq}\int_{t:2^k< \mu(t)}t^{(1-\theta)q}\frac{dt}{t}. \end{align*} If $\mu(t)>2^k$, we obtain $$ t\le \sum_{j=0}^\infty 2^j h(2^{j+k}). $$ Therefore, we continue \begin{align*} LHS\eqref{eq:interpol11}&\lesssim \sum_{k=-\infty}^\infty 2^{kq}\int_{0}^{\sum_{j=0}^\infty 2^j h(2^{j+k})}t^{(1-\theta)q}\frac{dt}{t} \lesssim \sum_{k=-\infty}^\infty 2^{kq} \left(\sum_{j=0}^\infty 2^j h(2^{j+k})\right)^{(1-\theta)q}. \end{align*} If $(1-\theta)q\le 1$, we may write ($l=j+k$) \begin{align*} LHS\eqref{eq:interpol11}&\lesssim \sum_{k=-\infty}^\infty 2^{kq} \sum_{j=0}^\infty 2^{j(1-\theta)q} h(2^{j+k})^{(1-\theta)q}\\ &=\sum_{l=-\infty}^\infty \sum_{j=0}^\infty 2^{(l-j)q} 2^{j(1-\theta)q}h(2^l)^{(1-\theta)q}\\ &=\sum_{l=-\infty}^\infty 2^{lq}h(2^l)^{(1-\theta)q}\sum_{j=0}^\infty 2^{-j\theta q}\lesssim \sum_{l=-\infty}^\infty 2^{lq}h(2^l)^{(1-\theta)q}. \end{align*} If $(1-\theta)q>1$, we use a similar approach combined with H\"older's inequality: \begin{align*} LHS\eqref{eq:interpol11}&\lesssim \sum_{k=-\infty}^\infty 2^{kq} \sum_{j=0}^\infty 2^{j(1+\varepsilon)(1-\theta)q} h(2^{j+k})^{(1-\theta)q}\\ &=\sum_{l=-\infty}^\infty \sum_{j=0}^\infty 2^{(l-j)q} 2^{j(1+\varepsilon)(1-\theta)q}h(2^l)^{(1-\theta)q}\\ &=\sum_{l=-\infty}^\infty 2^{lq}h(2^l)^{(1-\theta)q}\sum_{j=0}^\infty 2^{jq(-1 + (1+\varepsilon)(1-\theta))}\lesssim \sum_{l=-\infty}^\infty 2^{lq}h(2^l)^{(1-\theta)q}, \end{align*} where we have assumed that $\varepsilon>0$ is small enough to ensure $-1 + (1+\varepsilon)(1-\theta)=-\theta+\varepsilon(1-\theta)<0$. This finishes the proof. \end{proof} \begin{rem}\label{rem:4} \begin{itemize} \item[(i)] Let us point out that the assumption $p^+<\infty$ is forced mainly by the technique of generalized inverse functions used in the proof of Theorem \ref{thm:interpol}. We leave it as an open problem whether this assumption can be removed. \item[(ii)] Of course, taking $q_0(\cdot)=p(\cdot)$ in Theorem \ref{thm:interpol} leads immediately to the following special case \begin{equation*} (L_{{p(\cdot)}}(\mathbb{R}^n),L_\infty(\mathbb{R}^n))_{\theta,q}=L_{\tilde p(\cdot),q}(\mathbb{R}^n). \end{equation*} Therefore the spaces $L_{\tilde p(\cdot),q}(\mathbb{R}^n)$ from Definition \ref{dfn2} arise naturally by real interpolation between $L_{{p(\cdot)}}(\mathbb{R}^n)$ and $L_\infty(\mathbb{R}^n)$. \end{itemize} \end{rem} \section{Marcinkiewicz interpolation theorem}\label{sec:Marcinkiewicz} Let $T$ be an operator defined on measurable functions on $\mathbb{R}^n$. We say that $T$ is sublinear if $$ |T(f+g)(x)|\le |Tf(x)|+|Tg(x)| $$ holds for (almost) every $x\in\mathbb{R}^n$. One of the most important tools in the analysis of (sub-)linear operators is the Marcinkiewicz interpolation theorem. Let us recall its statement as it may be found for example in \cite[Corollary 1.4.21]{Grafaneu}.
\begin{thm}\label{Marcin} Let $\Omega\subset\mathbb{R}^n$ be a measurable set and let $T$ be a sublinear operator which maps $L_{p_0}(\Omega)$ to $L_{q_0,\infty}(\Omega)$ and $L_{p_1}(\Omega)$ to $L_{q_1,\infty}(\Omega)$, where $0<p_0\neq p_1\le\infty$ and $0<q_0\neq q_1\le\infty$. Let $0<\theta<1$ and put $$ \frac{1}{p}:=\frac{1-\theta}{p_0}+\frac{\theta}{p_1},\quad \frac{1}{q}:=\frac{1-\theta}{q_0}+\frac{\theta}{q_1}. $$ If \begin{equation}\label{central} p\le q, \end{equation} then $T$ maps $L_p(\Omega)$ boundedly into $L_q(\Omega)$. \end{thm} One of the prominent applications of the Marcinkiewicz interpolation theorem was given by Stein in his classical book \cite{S}. The Hardy-Littlewood maximal operator is defined for every locally integrable function $f$ on $\mathbb{R}^n$ by $$ Mf(x)=\sup_{B\ni x}\frac{1}{|B|}\int_B |f(y)|\,dy,\quad x\in\mathbb{R}^n, $$ where the supremum is taken over all balls in $\mathbb{R}^n$ containing $x$. It is easy to see that $M$ acts boundedly from $L_\infty(\mathbb{R}^n)$ into $L_\infty(\mathbb{R}^n)$. Furthermore, one shows that $M$ maps $L_1(\mathbb{R}^n)$ into $L_{1,\infty}(\mathbb{R}^n)$. These two facts, combined with Theorem \ref{Marcin}, lead immediately to the boundedness of $M$ on $L_p(\mathbb{R}^n)$ for every $1<p\le\infty$. The study of the maximal operator in the frame of Lebesgue spaces with variable exponents attracted a lot of attention, with the most important breakthroughs achieved in \cite{Max1, Max2, Max3, Max4, CDF}. It turned out, cf. \cite[Theorem 4.3.8]{DHHR}, that $M$ is bounded on $L_{p(\cdot)}(\mathbb{R}^n)$ if $p^->1$ and the function $1/p(\cdot)$ satisfies the so-called \emph{$\log$-H\"older continuity} conditions. Under the same regularity conditions on $p$, it was proven in \cite{CDF} that if $p^-\ge 1$ then the maximal operator maps $L_{p(\cdot)}(\mathbb{R}^n)$ into $L_{p(\cdot),\infty}(\mathbb{R}^n)$. Quite naturally, this raises the question of whether the boundedness of $M$ on Lebesgue spaces of variable integrability could be deduced from this weak-type estimate and some version of the Marcinkiewicz interpolation theorem. In its abstract setting, the same question was already posed as an open problem in \cite{DHN}. We recall their notation first. We say that the sublinear operator $T$ is of weak type $(\pi_0(\cdot),\pi_1(\cdot))$ if $$ \lambda \|\chi_{\{x\in\mathbb{R}^n:|Tf(x)|>\lambda\}}\|_{\pi_1(\cdot)}\le c\|f\|_{\pi_0(\cdot)} $$ holds for some $c>0$, all $f\in L_{\pi_0(\cdot)}(\mathbb{R}^n)$ and all $\lambda>0$. Let $\pi_0(\cdot)$ and $\pi_1(\cdot)$ be two variable exponents and let $0<\theta<1$ be a real number. Then we put $$ \frac{1}{\pi_\theta(x)}:=\frac{1-\theta}{\pi_0(x)}+\frac{\theta}{\pi_1(x)}. $$ Question 2.8 of \cite{DHN} then reads as follows. {\bf Question 2.8} (\cite{DHN}, Marcinkiewicz interpolation). Let $T$ be a sublinear operator that is of weak type $(\pi_0(\cdot),\pi_0(\cdot))$ and of weak type $(\pi_1(\cdot),\pi_1(\cdot))$. Is $T$ then bounded from $L_{\pi_\theta(\cdot)}(\mathbb{R}^n)$ to $L_{\pi_\theta(\cdot)}(\mathbb{R}^n)$? We shall prove that the answer to Question 2.8 is negative. The basic idea of our construction is the observation that the condition \eqref{central} is necessary for the Marcinkiewicz interpolation theorem in the usual spaces $L_p(\mathbb{R}^n)$ with constant exponent, cf. \cite{H}.
This means that there are $0<p_0\not=p_1\le\infty$, $0<q_0\not=q_1\le\infty$ and $0<\theta<1$ such that $T$ is of weak type $(p_0,q_0)$ and $(p_1,q_1)$, $p>q$, and $T$ is not bounded from $L_p[0,1]$ to $L_q[0,1]$. Then we set $$ \tilde Tf(x):= \begin{cases} T(\chi_{[0,1]}f)(x-1), & x\in[1,2],\\ 0, & x\in[0,1).\end{cases} $$ We put $$ \pi_0(x):=\begin{cases}p_0, & x\in[0,1),\\ q_0, & x\in[1,2]\end{cases}\quad\text{and}\quad \pi_1(x):=\begin{cases}p_1, & x\in[0,1),\\ q_1, & x\in[1,2].\end{cases} $$ We obtain that $\tilde T$ is of weak type $(\pi_0(\cdot),\pi_0(\cdot))$ and of weak type $(\pi_1(\cdot),\pi_1(\cdot))$ but not of strong type $(\pi_\theta(\cdot),\pi_\theta(\cdot))$. Furthermore, a simple modification of this argument allows one to construct a counterexample even for smooth exponents $\pi_0(\cdot)$ and $\pi_1(\cdot)$. To make the presentation self-contained, we provide a simple construction of $T$ with the properties mentioned above. \subsection{Specific counterexample for $T$} In this subsection, we provide more details on the construction given above. In particular, we construct an operator $T$ which satisfies the assumptions of our counterexample. Following the work of Hunt \cite{H,HW}, we define for $\alpha>0$ the following Hardy-type operator $$ (T_\alpha f)(x)=x^{-\alpha-1}\int_0^x f(t)dt, \quad 0<x<1. $$ We first observe that $T_\alpha$ is linear and defined on all of $L_1(0,1)$. Using the estimate $|\int_0^x f(t)dt|\le \int_0^x f^*(t)dt$ and H\"older's inequality \begin{align*} \|T_{1/2}f\|_{1,\infty}\le \sup_{0<x<1} x\cdot x^{-3/2}\int_0^x f^*(t)dt\le \sup_{0<x<1}x^{-1/2}\Bigl(\int_0^x (f^*(t))^2dt\Bigr)^{1/2}\cdot x^{1/2}=\|f\|_2, \end{align*} we obtain also the weak-type estimate $T_{1/2}:L_2(0,1)\to L_{1,\infty}(0,1)$. Furthermore, the boundedness of $T_{1/2}$ from $L_\infty(0,1)$ into $L_{2,\infty}(0,1)$ follows by \begin{align*} \|T_{1/2}f\|_{2,\infty}\le \sup_{0<x<1} x^{1/2}\cdot x^{-3/2}\int_0^x f^*(t)dt=\sup_{0<x<1}x^{-1}\cdot\int_0^x f^*(t)dt\le \|f\|_\infty. \end{align*} On the other hand, it is easy to see that $T_{1/2}$ is not bounded from $L_4(0,1)$ into $L_{4/3}(0,1)$. Just take $f(t)=t^{-1/4}|\ln(t)|^{-1/4-\varepsilon}\chi_{[0,1/2]}(t)\in L_4(0,1)$ for $\varepsilon>0$ and calculate \begin{align*} \|T_{1/2}f\|_{4/3}&=\Bigl(\int_0^1 x^{-2} \Bigl(\int_0^x f(t)dt\Bigr)^{4/3}dx\Bigr)^{3/4}\\ &\ge \Bigl(\int_0^{1/2} x^{-2} \Bigl(\int_0^x t^{-1/4}|\ln(t)|^{-1/4-\varepsilon}dt\Bigr)^{4/3}dx\Bigr)^{3/4}\\ &\approx \Bigl(\int_0^{1/2} x^{-2} \Bigl(x^{3/4}|\ln(x)|^{-1/4-\varepsilon}\Bigr)^{4/3}dx\Bigr)^{3/4}\\ &=\Bigl(\int_0^{1/2}x^{-1}\cdot |\ln(x)|^{-1/3-\varepsilon\cdot 4/3}dx\Bigr)^{3/4}=\infty \end{align*} for $0<\varepsilon\le 1/2.$ \section{Open problems} Although we presented some basic properties of the scale of Lorentz spaces with variable exponents, many questions remain open for further investigation. The first obvious generalization is to treat Lorentz spaces $L_{{p(\cdot)},{q(\cdot)}}(\Omega,\mu)$ on arbitrary measure spaces $(\Omega,\mu)$, as it is usually done for Lorentz spaces with constant exponents, cf. \cite{Grafaneu}. We believe that the considerations above carry over to this case; we have mainly studied spaces on $\mathbb{R}^n$ to simplify the notation.
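Before turning to further open problems, we include a purely numerical aside to the counterexample of Section \ref{sec:Marcinkiewicz}. It is our own illustration and is not needed for the proof: the quadrature rule, the grid and the choice $\varepsilon=0.1$ are arbitrary. The sketch in Python below evaluates the $L_4$-modular of $f(t)=t^{-1/4}|\ln t|^{-1/4-\varepsilon}\chi_{(0,1/2]}(t)$ and the $L_{4/3}$-modular of $T_{1/2}f$ over $(a,1/2)$ for decreasing cutoffs $a$; the first quantity stabilizes while the second keeps growing (slowly, at a logarithmic rate), mirroring the divergence computed above.
\begin{verbatim}
# Numerical aside (illustration only): Hardy-type operator T_{1/2} from the
# counterexample, applied to f(t) = t^{-1/4} |ln t|^{-1/4 - eps}, eps = 0.1.
# The L_4 modular of f stays bounded as the cutoff a -> 0, while the
# L_{4/3} modular of T_{1/2} f keeps growing, reflecting its divergence.
import numpy as np

eps = 0.1
f = lambda t: t ** (-0.25) * np.abs(np.log(t)) ** (-0.25 - eps)

for a in [1e-2, 1e-4, 1e-8, 1e-16]:
    t = np.geomspace(a, 0.5, 200000)        # log-spaced grid on [a, 1/2]
    w = np.diff(t, prepend=a)                # crude quadrature weights
    F = np.cumsum(f(t) * w)                  # F(x) ~ int_a^x f(s) ds
    Tf = t ** (-1.5) * F                     # (T_{1/2} f)(x) = x^{-3/2} int_0^x f
    mod_f = np.sum(f(t) ** 4 * w)            # modular of f in L_4(a, 1/2)
    mod_Tf = np.sum(Tf ** (4.0 / 3.0) * w)   # modular of T_{1/2} f in L_{4/3}(a, 1/2)
    print(f"a = {a:.0e}:  L_4 modular of f = {mod_f:8.3f},"
          f"  L_4/3 modular of Tf = {mod_Tf:8.3f}")
\end{verbatim}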
It would also be highly desirable to obtain further results on real interpolation in this scale, complementing Theorem \ref{thm:interpol}. In particular, it would be useful to have two variable exponent spaces as an interpolation couple, i.e. to characterize $(L_{p_0(\cdot)}(\mathbb{R}^n),L_{p_1(\cdot)}(\mathbb{R}^n))_{\theta,q}$. The use of the ${\ell_{q(\cdot)}(L_{p(\cdot)})}$ spaces suggests yet another interesting idea, namely to allow for interpolation with a variable parameter ${q(\cdot)}$. This option was already noticed in the introduction of \cite{AlHa10}. Unfortunately, it turns out, as is also mentioned in \cite{AlHa}, that the real interpolation spaces with variable parameter ${q(\cdot)}$ in general lack the interpolation property. On the other hand, it is possible that, under suitable conditions on the endpoint spaces, the real interpolation method with variable ${q(\cdot)}$ works well and the interpolation property is restored. The last open problem is the starting point of this paper. If the Marcinkiewicz interpolation theorem held in the scale of variable exponent spaces, it would provide a very elegant way to prove the boundedness of the Hardy-Littlewood maximal operator on $L_{p(\cdot)}(\mathbb{R}^n)$. Combining the weak estimate from \cite{CDF} \begin{align*} M&:L_{p_0(\cdot)}(\mathbb{R}^n) \to L_{p_0(\cdot),\infty}(\mathbb{R}^n)\\ \intertext{with the trivial boundedness} M&:L_\infty(\mathbb{R}^n)\to L_\infty(\mathbb{R}^n) \end{align*} we could get the boundedness of $M$ on $L_{p(\cdot)}(\mathbb{R}^n)$, with $\frac1{p(\cdot)}=\frac{1-\theta}{p_0(\cdot)}$. Unfortunately, the counterexample above tells us that the Marcinkiewicz interpolation theorem does not hold for the variable exponent spaces $L_{p(\cdot)}(\mathbb{R}^n)$. On the other hand, using Theorem \ref{thm:interpol}, we may easily show the boundedness \begin{align*} M:L_{{p(\cdot)},q}(\mathbb{R}^n)\to L_{{p(\cdot)},q}(\mathbb{R}^n) \end{align*} under the regularity assumptions on ${p(\cdot)}$ used in \cite{CDF}. In particular, choosing $q$ with $p^-\leq q\leq p^+$, this implies the boundedness \begin{align*} M:L_{{p(\cdot)},p^-}(\mathbb{R}^n)\to L_{{p(\cdot)},p^+}(\mathbb{R}^n). \end{align*} This seems very suggestive as to why Marcinkiewicz interpolation might fail for the maximal operator. We would therefore be very interested to know whether it is possible to find additional conditions on the sublinear operator $T$ which would ensure the validity of Marcinkiewicz interpolation. \textbf{Acknowledgement:} The first author acknowledges the financial support of the DFG grants HA 2794/5-1 and KE 1847/1-1. The second author acknowledges the support of the DFG Research Center MATHEON ``Mathematics for key technologies'' in Berlin. Moreover, we are very grateful for the comments of the anonymous referee, which helped to improve our paper. \thebibliography{99} \bibitem{AlHa10} A. Almeida, P. H{\"a}st{\"o}, \emph{Besov spaces with variable smoothness and integrability}, J. Funct. Anal. \textbf{258} no. 5 (2010), 1628--1655. \bibitem{AlHa} A. Almeida, P. H{\"a}st{\"o}, \emph{Interpolation in variable exponent spaces}, to appear in Rev. Mat. Complut. \bibitem{AM} M.~A.~Ari\~no, B.~Muckenhoupt, \emph{Maximal functions on classical Lorentz spaces and Hardy's inequality with weights for nonincreasing functions}, Trans. Amer. Math. Soc. \textbf{320} no. 2 (1990), 727--735. \bibitem{BS} C.~Bennett, R.~Sharpley, \emph{Interpolation of operators}, Academic Press, San Diego, 1988. \bibitem{BerghL} J.~Bergh, J.~L\"ofstr\"om, \emph{Interpolation spaces. An introduction}, Springer, Berlin, 1976.
\bibitem{CPSS} M.~Carro, L.~Pick, J.~Soria, V. D. Stepanov, \emph{On embeddings between classical Lorentz spaces}, Math. Inequal. Appl. \textbf{4} no.~3 (2001), 397--428. \bibitem{CDF} D.~Cruz-Uribe, L.~Diening, A.~Fiorenza, \emph{A new proof of the boundedness of maximal operators on variable Lebesgue spaces}, Boll. Unione Mat. Ital. (9) \textbf{2} no.~1 (2009), 151--173. \bibitem{Max2} D.~Cruz-Uribe, A.~Fiorenza, J.~Martell, C.~P\'erez, \emph{The boundedness of classical operators in variable $L^p$ spaces}, Ann. Acad. Sci. Fenn. Math. \textbf{31} (2006), 239--264. \bibitem{CruzUribe03} D. Cruz-Uribe, A. Fiorenza, C. J. Neugebauer, \emph{The maximal function on variable $L^p$ spaces}, Ann. Acad. Sci. Fenn. Math. \textbf{28} (2003), 223--238. \bibitem{Max1} L. Diening, \emph{Maximal function on generalized Lebesgue spaces $L^{p(\cdot)}$}, Math. Inequal. Appl. \textbf{7} (2004), 245--253. \bibitem{DHN} L. Diening, P.~H\"ast\"o, A. Nekvinda, \emph{Open problems in variable exponent Lebesgue and Sobolev spaces}, Proceedings FSDONA 2004, Academy of Sciences, Prague, 38--52. \bibitem{Max4} L. Diening, P. Harjulehto, P. H\"ast\"o, Y. Mizuta, T. Shimomura, \emph{Maximal functions in variable exponent spaces: limiting cases of the exponent}, Ann. Acad. Sci. Fenn. Math. \textbf{34} (2009), 503--522. \bibitem{DHHR} L. Diening, P. Harjulehto, P. H\"ast\"o, M. R{\accent23 u}\v{z}i\v{c}ka, \emph{Lebesgue and Sobolev Spaces with Variable Exponents}, Lecture Notes in Mathematics \textbf{2017}, Springer, Berlin, 2011. \bibitem{EphreKoki} L. Ephremidze, V. Kokilashvili, S. Samko, \emph{Fractional, maximal and singular operators in variable exponent Lorentz spaces}, Fract. Calc. Appl. Anal. \textbf{11} no. 4 (2008), 407--420. \bibitem{Grafaneu} L. Grafakos, \emph{Classical Fourier analysis}, Second edition, Grad. Texts Math. \textbf{249}, Springer, Berlin, 2008. \bibitem{H} R. A. Hunt, \emph{An extension of the Marcinkiewicz interpolation theorem to Lorentz spaces}, Bull. Amer. Math. Soc. \textbf{70}, (1964) 803--807. \bibitem{HW} R. A. Hunt and G. Weiss, \emph{The Marcinkiewicz interpolation theorem}, Proc. Amer. Math. Soc. \textbf{15}, (1964) 996--998. \bibitem{IsraKoki} D.M. Israfilov, V. Kokilashvili, N.P. Tuzkaya, \emph{The classical integral operators in weighted Lorentz spaces with variable exponent}, IBSU Scientific Journal \textbf{1} issue 1, (2006), 171--178. \bibitem{KokiSamko} V. Kokilashvili, S. Samko, \emph{Singular integrals and potentials in some Banach spaces with variable exponent}, J. Funct. Spaces and Appl. \textbf{1} (1), (2003), 45--59. \bibitem{KV11} H. Kempka, J. Vyb\'\i ral, \emph{A note on the spaces of variable integrability and summability of Almeida and H\"ast\"o}, Proc. Amer. Math. Soc. \textbf{141} (9) (2013), 3207--3212. \bibitem{Kop09} T. Kopaliani, \emph{Interpolation theorems for variable exponent {L}ebesgue spaces}, J. Funct. Anal. \textbf{257} (2009), 3541--3551. \bibitem{KoRa}O. Kov\'{a}\v{c}ik, J. R\'{a}kosn\'{i}k, \emph{On spaces $L^{p(x)}$ and $W^{1,p(x)}$}, Czechoslovak Math. J. \textbf{41} ({116}) (1991), 592--618. \bibitem{Lor1} G. G. Lorentz, \emph{Some new functional spaces}, Ann. of Math. (2) \textbf{51} (1950), 37--55. \bibitem{Lor2} G. G. Lorentz, \emph{On the theory of spaces $\Lambda$}, Pacific J. Math. \textbf{1} (1951), 411--429. \bibitem{Musielak} J. Musielak, \emph{Orlicz spaces and modular spaces}, Lecture Notes in Mathematics \textbf{1034}, Springer, Berlin, 1983. \bibitem{Max3} A. 
Nekvinda, \emph{Hardy-Littlewood maximal operator on $L^{p(x)}(\mathbb{R}^n)$}, Math. Inequal. Appl. \textbf{7} (2004), 255--266. \bibitem{Orlicz} W. Orlicz, \emph{\"Uber konjugierte Exponentenfolgen}, Studia Math. \textbf{3} (1931), 200--212. \bibitem{Ruz1} M. R{\accent23 u}\v{z}i\v{c}ka, \emph{Electrorheological fluids: modeling and mathematical theory}, Lecture Notes in Mathematics \textbf{1748}, Springer, Berlin, 2000. \bibitem{Saw} E.~Sawyer, \emph{Boundedness of classical operators on classical Lorentz spaces}, Studia Math. \textbf{96} no. 2 (1990), 145--158. \bibitem{S} E.~M.~Stein, \emph{Singular integrals and differentiability properties of functions}, Princeton Mathematical Series, No. 30, Princeton University Press, Princeton, N.J., 1970. \bibitem{SW} E.~M.~Stein, G.~Weiss, \emph{Introduction to Fourier analysis on Euclidean spaces}, Princeton Mathematical Series, No. 32, Princeton University Press, Princeton, N.J., 1971. \bibitem{Triebel} H.~Triebel, \emph{Interpolation theory, function spaces, differential operators}, {V}erlag der {W}issenschaften, Berlin, 1978. \end{thebibliography} \end{document}
\begin{document} \title{Long-range surface plasmon polariton excitation at the quantum level} \author{D. Ballester\,$^1$} \author{M. S. Tame\,$^1$} \email{[email protected]} \author{C. Lee\,$^2$} \author{J. Lee\,$^{2,3}$} \author{M. S. Kim\,$^1$} \affiliation{$^1$School of Mathematics and Physics, Queen's University,~Belfast BT7 1NN, United Kingdom\\ $^2$Department of Physics, Hanyang University, Seoul 133-791, Korea \\ $^3$Quantum Photonic Science Research Center, Hanyang University, Seoul 133-791, Korea } \date{\today} \begin{abstract} We provide the quantum mechanical description of the excitation of long-range surface plasmon polaritons (LRSPPs) on thin metallic strips. The excitation process consists of an attenuated-reflection setup, where efficient photon-to-LRSPP wavepacket-transfer is shown to be achievable. For calculating the coupling, we derive the first quantization of LRSPPs in the polaritonic regime. We study quantum statistics during propagation and characterize the performance of photon-to-LRSPP quantum state transfer for single-photons, photon-number states and photonic coherent superposition states. \end{abstract} \pacs{03.67.-a, 42.50.Dv, 42.50.Ex, 03.70.+k, 73.20.Mf} \maketitle \section{Introduction} Plasmonics~\cite{Zayats} is a rapidly growing area of research based at the nanoscale that is currently experiencing intensive studies by researchers from many areas of the physical sciences~\cite{electro}. Plasmonic-based nanophotonic devices using surface plasmon polaritons (SPPs) have recently started to attract much interest from the quantum optics community for their use in quantum information processing (QIP)~\cite{plasmonQIP,Alte,Lukin1,Lukin2}. At present, it is essential that practical techniques are properly developed for efficiently generating and controlling plasmonic excitations at the quantum level. In order to do this, a rigorous quantum mechanical model must be included for describing how photons and different forms of SPPs interact. With a clear description and theoretical understanding of these interactions, the rapid development of novel QIP applications, using nanostructured devices based on linear and nonlinear plasmonic effects~\cite{Lukin1,nonlinplasm}, will become possible. In this work, we adapt and extend techniques recently introduced by us~\cite{TSPP} to provide the first {\it quantum mechanical} description of the coupling between single-photons and SPPs on thin metallic strips, also known as long-range SPPs~\cite{Econ} (LRSPPs). Here, an attenuated-reflection (ATR) setup is described that has so far only been considered for {\it classical} LRSPP generation~\cite{ClassLRSPP}. In order to introduce the Hamiltonian for the interaction, we derive the first quantized description of the LRSPP fields in the polaritonic regime~\cite{Econ}. We find that high {\it quantum efficiencies} can in fact be reached for photon-to-LRSPP wavepacket-transfer upon appropriate modification of the ATR geometry. We comment on the extent to which the excited LRSPPs preserve quantum statistical features of the original photons as they propagate along realistic metallic strips. We then characterize the performance of photon-to-LRSPP {\it quantum state transfer}, focusing on an informative example of the transfer of coherent superposition states~\cite{cat}. Recently, we have become aware of an experimental effort to transfer a similar type of nonclassical field into a LRSPP~\cite{Ander}.
The benefits of exciting LRSPPs in this configuration compared to the previously studied standard single-interface SPPs~\cite{TSPP,OttoKret} include a significant increase in the propagation length, together with the support for both transverse magnetic (TM) and transverse electric (TE) polarization degrees of freedom, given the correct lateral width and thickness of the metallic strip~\cite{LRSPPHV}. Therefore they have the potential to open up a wider variety of QIP applications, where this type of additional flexibility is necessary. The work we present here provides a valuable description of the physics of photon-SPP coupling in multilayer geometries at the quantum level and the new methods we have developed specifically for this task should be well-suited to other complex SPP excitation scenarios, such as grating~\cite{Grating} and end-fire~\cite{EF} techniques. The paper is structured as follows: In Section II, we provide the quantum mechanical description of LRSPPs. We also introduce the ATR setup used for the excitation of LRSPPs with single photons and the coupling Hamiltonian for the fields. In Section III we analyze this coupling for single modes of the system as well as for wavepackets involving single-photons and photon-number states. From this analysis we determine the transfer efficiencies of photons to LRSPPs over a range of input frequencies. In Section IV we then examine the extent to which quantum statistics of the injected photons are preserved during transfer and propagation of the excited LRSPPs. In Section V we characterize the performance of photon-to-LRSPP quantum state transfer, providing an illuminating example of the transfer of coherent superposition states. Finally, Section VI summarizes our main results. \section{Excitation setup} LRSPPs are nonradiative electromagnetic excitations associated with electron charge density waves propagating along the interfaces of a dielectric-metal-dielectric configuration~\cite{Econ}. In Figs.~\ref{fig:setup}~{\bf (a)} and~{\bf (b)} we show the ATR setup and geometry utilized for single-photon excitation of LRSPPs. It consists of four layers: a metallic layer (with permittivity $\epsilon_3=\epsilon_m$), two dielectric layers (with $\epsilon_2=\epsilon_4=1$), and a prism ($\epsilon_1$). We consider the metal as silver in this work only to illustrate our main results, with the theory developed supporting a more general setting. For LRSPP excitations, due to the collective nature of the electron charge density waves, a macroscopic picture of the electromagnetic field produced is appropriate~\cite{ER,Econ}. Upon quantization, LRSPPs are found to correspond to bosonic modes. \begin{figure}[b] \centerline{\psfig{figure=setup.eps,width=8.3cm,height=3.6cm}} \caption{(Color online) Single-photon excitation of LRSPPs using attenuated-reflection. {\bf (a)}: A photon wavepacket is injected into the system at a specific angle $\theta$, with a prism mediating an interaction between the photon and LRSPP modes. The minimum prism size is diffraction-limited. {\bf (b)}: The ATR excitation geometry considered. {\bf (c)}: Transfer process for the photon and LRSPP mode operators.} \label{fig:setup} \end{figure} A brief outline of this quantization is given in the Appendix. It is well-known that {\it classically} two types of TM surface modes are found for the geometry depicted in Fig.~\ref{fig:setup}~{\bf (b)}: An antisymmetric mode, denoted by eigenfrequency $\omega^+$ and a symmetric mode denoted by $\omega^-$.
The {\it quantized} vector potential in the continuum limit for these modes propagating along an air-metal-air interface in the $\hat{\mathbf x}$ direction, as shown in Fig.~1~{\bf (a)}, is found to be \begin{eqnarray} \hat{\mathbf A}_{SPP}^{\pm}({\mathbf r},t) & \propto & \int_0^{\infty} {\mathrm d} \omega^\pm ({\cal N}^{\pm}(\omega^\pm ){\cal W})^{-1/2}\times \nonumber \\ & & [{\bm \phi}^{\pm}({\mathbf r},\omega^\pm )e^{-i \omega^\pm t}\hat{b}(\omega^\pm )+ h.c]. \end{eqnarray} Here ${\cal N}^{\pm}(\omega^{\pm} )$ is a frequency dependent normalization and ${\cal W}$ is the {\it beam-width}~\cite{Blow}. For both $\omega^{+}$ and $\omega^{-}$ the $\hat{b}(\omega)$'s ($\hat{b}^{\dag}(\omega)$'s) correspond to bosonic annihilation (creation) operators which obey the quantum mechanical commutation relations $[\hat{b}(\omega),\hat{b}^{\dag}(\omega')]=\delta(\omega-\omega')$. The modefunctions are given by \begin{eqnarray} {\bm \phi}^\pm({\bf r},\omega^\pm )&=&e^{i {\bf k}\cdot {\bf r}}[(\hat{\bf k}-(ik/\nu_0)\hat{\bf z})e^{\nu_0 z} \vartheta(-z) \nonumber \\ & & +(1-\nu_m/\epsilon_m\nu_0)[(\hat{\bf k}+(i k/\nu_m)\hat{\bf z})e^{-\nu_m z} \nonumber \\ & & \mp( \hat{\bf k}-(ik/\nu_m)\hat{\bf z})e^{\nu_m (z-d_1)}] \vartheta(z)\vartheta(d_1-z) \nonumber \\ & & \mp( \hat{\bf k}+(ik/\nu_0)\hat{\bf z})e^{-\nu_0 (z-d_1)} \vartheta(z-d_1)], \label{Phi} \end{eqnarray} where the wavevector ${\mathbf k}=k \hat{\mathbf x}$, $\vartheta(z)$ is the Heaviside step function and the decays into the metal and air are parameterized by $\nu_m^2=k^2-\left(\omega^\pm \right) ^2 \epsilon_m /c^2$ and $\nu_0^2=k^2- \left(\omega^\pm\right) ^2/c^2$ respectively. The dispersion relation between $\omega^\pm$ and $k$ is \begin{eqnarray} e^{- \nu_m d_1}=\pm(\nu_m+\epsilon_m \nu_0)/(\nu_m-\epsilon_m \nu_0), \label{dispersioneq}\end{eqnarray} where $d_1$ is the thickness of the metallic strip. The solutions of this equation, given by $\omega^{\pm}(k)$, correspond to two different types of coupled plasma excitations at the metal-dielectric interfaces 2/3 and 3/4 shown in Fig.~\ref{fig:setup}~{\bf (b)}, which oscillate synchronously out-of-phase ($+$) and in-phase ($-$). The dependence of these solutions on the wavevector and slab thickness are shown in Fig.~\ref{fig:disp}~{\bf (a)}, where silver has been chosen as an example having permittivity~\cite{JohnChrist} $\epsilon_m (\omega) = 1- \omega_p^2/ \omega^2 + \delta \epsilon_r,$ with $\omega_p=1.402\times 10^{16}$ rad/s and $\delta \epsilon_r= 29 \omega^2/\omega_p^2$. For a fixed value of $d_1$, they evolve above ($+$) and beneath ($-$) the dispersion relation known for SPPs at a simple air-metal interface~\cite{Zayats}, and move closer to this curve as either $k$ or $d_1$ grows~\cite{Econ}, approaching the limiting value $\omega_{sp}$ (where $\epsilon_m=-1$) as $k \to \infty$ for any $d_1$. While only the $\omega^+$ excitations have long-range propagation lengths, due to damping (addressed in detail in Sec. IV), here we consider both excitations as `long-range' in order to give a more complete description of the physical system. \begin{figure}[t] \centerline{\psfig{figure=figdisp.eps,width=8.5cm,height=3.5cm}} \caption{(Color online) {\bf (a)}: Dispersion relations for the $\omega^{\pm}$ excitations as a function of $k$ and metal thickness $d_1$. The inset shows the curves $\omega^{+}(k)$ (dashed) and $\omega^{-}(k)$ (solid) corresponding to $d_1=20$ $nm$, and the same curves merged at $d=100$ $nm$ (dashed-dotted).
{\bf (b)}: Coupling angle $\theta$ as a function of frequency and metal thickness $d_1$. The inset shows the coupling angle $\theta$ for $\omega^{+}(k)$ (dashed) and $\omega^{-}(k)$ (solid) at $d_1=20$ $nm$, and the same curves merged at $d=100$ $nm$ (dashed-dotted).} \label{fig:disp} \end{figure} Following the diagram depicted in Fig.~\ref{fig:setup}~{\bf (a)}, let us consider an incoming photon propagating in the air with direction given by the unit vector $\hat{\mathbf k}'=\sin \theta \hat{\mathbf x}+\cos \theta \hat{\mathbf z}$ and corresponding wavevector $\mathbf k '= k' \hat{\mathbf k}' = k'_x \hat{\mathbf x}+ k'_z \hat{\mathbf z}$. The vector potential is given by~\cite{Blow} \begin{equation} \hat{A}_{P}({\mathbf r},t)\propto \int_0^{\infty} {\mathrm d} \omega (\omega {\cal A})^{-1/2}[e^{i k'(\hat{\mathbf k}' \cdot {\mathbf r})} e^{-i \omega t}\hat{a}(\omega)+ h.c]. \nonumber \end{equation} Here, ${\cal A}$ is the beam-width, with the $\hat{a}(\omega)$'s and $\hat{a}^{\dag}(\omega)$'s satisfying bosonic commutation relations. The impossibility of fulfilling the mode-matching conditions between the branches $\omega^{\pm}(k)$ and the incoming photon beam, with dispersion relation $\omega (k')=c k'_x / \sin\theta$ (shaded region of inset in Fig.~\ref{fig:disp}~{\bf (a)}), can be overcome in the ATR configuration by using a prism with dielectric constant $\epsilon_1>\epsilon_2=1$, placed at a distance $d_2$ over the surface of the metal. This modifies the dispersion relation of the incoming photon beam to $\omega(k')=c k'_x / ( \sqrt{\epsilon_1} \sin\theta)$ (solid line of inset in Fig.~\ref{fig:disp}~{\bf (a)}). For $\theta$ greater than the critical angle, an evanescent photon field is created below the prism surface due to total internal reflection~\cite{OttoKret}. This provides a mechanism for achieving the coupling between the photon and LRSPP, as the $x$ component of the transmitted wavevector remains unchanged (see Fig.~\ref{fig:disp}~{\bf (b)} for mode-matching $\theta$ values). In order to model the ATR geometry, we make use of the 4-layer (4L) configuration in Fig.~\ref{fig:setup}~{\bf (b)}. The modefunctions are given by \begin{eqnarray} {\bf \Psi}({\mathbf r}, \omega) = r \tilde{{\bm \psi}} ({\mathbf r}, \omega) \vartheta(-(z+d_2)) + \tau {\bm \psi}({\mathbf r}, \omega) \vartheta(z+d_2) , \label{Psi} \end{eqnarray} where $r$ and $\tau$ denote reflection and transmission coefficients, $|r(\omega)|^2+|\tau(\omega)|^2=1, \forall \omega$. These coefficients, together with the modefunctions $\tilde{\bm \psi} ({\mathbf r}, \omega)$ and ${\bm \psi} ({\mathbf r}, \omega)$ are determined by solving Maxwell's equations for the incoming photon field, as in the {\it classical} coupled mode approach. In what follows, we will show how the $r$ and $\tau$ coefficients are combined with the quantum mechanical operators associated with the modes to derive the {\it quantum} coupling model. The modefunction $\tilde{\bm \psi} ({\mathbf r}, \omega)$ possesses a real component of the wavevector in the $\hat{\mathbf{z}}$ direction and therefore cannot meet the mode-matching conditions required for coupling to LRSPPs.
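For reference, the coupling angle plotted in Fig.~\ref{fig:disp}~{\bf (b)} follows directly from this mode-matching condition: denoting by $k^{\pm}(\omega)$ a solution of Eq.~(\ref{dispersioneq}) and equating it with the in-plane component of the photon wavevector in the prism, $\sqrt{\epsilon_1}\,(\omega/c)\sin\theta$, gives \begin{equation} \theta^{\pm}(\omega)=\arcsin\left[\frac{c\, k^{\pm}(\omega)}{\sqrt{\epsilon_1}\,\omega}\right], \nonumber \end{equation} which can only be satisfied when $k^{\pm}(\omega)\leq \sqrt{\epsilon_1}\,\omega/c$, i.e. when the LRSPP dispersion lies below the light line of the prism.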
Only the modefunction ${\bm \psi}({\mathbf r}, \omega)$ is involved in the coupling of photons to LRSPPs and we have \begin{eqnarray} {\bm \psi}({\mathbf r}, \omega) &=& e^{i k'_x x } [(\boldsymbol{\kappa}_1 e^{-\gamma_0 (z+d_2)} + \boldsymbol{\kappa}_2 e^{\gamma_0 z} ) \vartheta(-z) \nonumber \\ & & + (\boldsymbol{\kappa}_3 e^{-\gamma_m z} + \boldsymbol{\kappa}_4 e^{\gamma_m (z-d_1)} ) \vartheta(z) \vartheta(d_1-z) \nonumber \\ & & + \boldsymbol{\kappa}_5 e^{-\gamma_0 (z - d_1)} \vartheta(z-d_1) ], \label{psi} \end{eqnarray} where the $\boldsymbol{\kappa}_i$'s are vector-valued functions related by boundary conditions at the interfaces, while $\gamma_{m}^2=(k'_x)^2-\epsilon_m \omega^2/c^2$ and $\gamma_{0}^2=(k'_x)^2- \omega^2/c^2$. Within a linear response regime~\cite{TSPP}, the process of coupling between the photon field and the plasmon field can be described in the Heisenberg picture by a transformation matrix ${\cal T}(\omega)$ as \begin{equation} \left[ \begin{array}{c} \hat{a}_{out}(\omega) \\ \hat{b}_{out}(\omega) \end{array} \right] = \left[ \begin{array}{cc} \alpha(\omega) & \beta(\omega) \\ -\beta^*(\omega) & \alpha^*(\omega) \end{array} \right] \left[ \begin{array}{c} \hat{a}_{in}(\omega) \\ \hat{b}_{in}(\omega) \end{array} \right], \label{Heisenberg} \end{equation} with $|\alpha(\omega)|^2+|\beta(\omega)|^2=1, \forall \omega$. The transfer process is depicted in Fig.~\ref{fig:setup}~{\bf (c)}. The applicability of a linear approach is fully justified in this context, as we are interested in the description of the excitation of LRSPPs by a weak intensity photon field~\cite{linok}. The coefficients of ${\cal T}(\omega)$ are determined through the overlap of system modefunctions, while the commutation relations of the quantum operators $\hat{a}(\omega)$ and $\hat{b}(\omega)$ properly define the structure of ${\cal T}(\omega)$ as a valid unitary quantum transfer matrix~\cite{salehleon}. The operators $\hat{b}_{in/out}$ are associated with the in/out LRSPP modefunctions ${\bm \phi}^{\pm}({\mathbf r},\omega)$ in Eq.~(\ref{Phi}), {\it i.e.} $\hat{b}_{in}= \hat{b}$, whereas $\hat{a}_{in/out}$ are associated with the in/out modefunctions ${\bm \Psi} ({\mathbf r},\omega)$ of the 4L configuration. Here we assume that the photon field experiences negligible losses as it enters the prism and set $\hat{a}_{in}=\hat{a}$. Thus, we have \begin{eqnarray} \beta^* (\omega) &=& -\tau(\omega) \delta(\omega-\omega') \delta(k-k'_x) \int {\rm d}z \left[\left( {\cal N}_1^{\pm} (\omega) \right)^{-1/2} \right. \nonumber \\ && \times {\bm \phi}^{\pm}(z,\omega)\Bigr]^* \cdot \left[\left({\cal N}_2(\omega') \right)^{-1/2} {\bm \psi}(z, \omega') \right] , \label{beta*} \end{eqnarray} with $ \phi^{\pm}(z,\omega)$ and $\psi(z,\omega')$ denoting the $z-$dependent part of the functions in Eqs.~(\ref{Phi}) and (\ref{psi}), with normalization factors ${\cal N}_1^{\pm}$ and ${\cal N}_2$ respectively. Expression (\ref{beta*}) can be obtained using classical coupled mode theory. However, how the value of $\beta(\omega)$ enters into the quantum coupling model of Eq. (\ref{Heisenberg}) is determined by the commutation relations of the mode operators $\hat{a}(\omega)$ and $\hat{b}(\omega)$ which define ${\cal T}(\omega)$.
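As an illustration of how Eq.~(\ref{Heisenberg}) acts at the level of quantum states (a direct consequence of the transfer matrix, with the overall phases fixed by the convention chosen there), a single photon injected into the photon mode transforms as \begin{equation} \ket{1}_{a}\ket{0}_{b}\;\rightarrow\;\alpha(\omega)\ket{1}_{a}\ket{0}_{b}-\beta^{*}(\omega)\ket{0}_{a}\ket{1}_{b}, \nonumber \end{equation} so that the probability of finding the excitation in the LRSPP mode is $|\beta(\omega)|^{2}$.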
A derivation of the coupling based solely on a classical electrodynamics approach is unsuitable when one considers properties that are explicitly dependent on the quantum operators associated with the modes of the excitations. For instance, a description of the excitation of LRSPPs by an $n$-photon state of light would not be possible. This issue is discussed in more detail in Sec.~IV. In describing the coupling process so far, we have used the Heisenberg picture. However, in many practical situations, it is more convenient to work in the Schr\"odinger picture, where the coupling is described by the following Hamiltonian \begin{eqnarray} \hat{\cal H}_S&=& \int_{0}^{\infty}{\mathrm d} \omega \hbar \omega \hat{a}^{\dag}(\omega)\hat{a}(\omega)+\int_{0}^{\infty}{\mathrm d} \omega \hbar \omega \hat{b}^{\dag}(\omega)\hat{b}(\omega) \label{Hamil} \\ & & + i \hbar \int_{0}^{\infty}{\mathrm d} \omega [g(\omega)\hat{a}^{\dag}(\omega)\hat{b}(\omega)-g^*(\omega)\hat{b}^{\dag}(\omega)\hat{a}(\omega)], \nonumber \end{eqnarray} with coupling coefficient~\cite{salehleon} $g(\omega)=e^{i \arg \beta(\omega) } \sin^{-1} | \beta(\omega) |$. \section{Photon-LRSPP transfer} In this Section we investigate the optimization of the coupling coefficient $g(\omega)$ for a given range of parameters $d_1$, $d_2$, and $\omega$. For clarity we will handle the dependence on all three variables by first optimizing the coupling over $d_2$ for any pair of values of $d_1$ and $\omega$, and then optimizing over $d_1$. Here, we use a prism with $\epsilon_1=1.51$ and silver as an example, with the phenomenologically-derived~\cite{JohnChrist} dielectric function $\epsilon_m (\omega) = 1- \omega_p^2 / \left(\omega(\omega+ i \Gamma)\right) + \delta \epsilon_m$, where $\Gamma=6.25 \times 10^{13}$ rad/s and $\delta \epsilon_m=\delta \epsilon_m^r+i \delta \epsilon_m^i$ with $\delta \epsilon_m^i=0.22$. In deriving the modefunctions ${\bm \phi}^{\pm}(\mathbf{r},\omega)$, we have assumed negligible damping in the metal for the surface plasmon field during the transfer process, allowing us to treat damping as the LRSPP propagates separately from the excitation process. For the optimization, some additional restrictions must be taken into account. First, under realistic conditions, the incoming photon field does not constitute a genuine monochromatic wave, but instead consists of a wavepacket with, for instance, a well-localized Gaussian distribution in frequency. This implies that both $\omega^+$ and $\omega^-$ surface plasmons are susceptible to being excited by the incoming wavepacket. For example, if the incidence angle $\theta$ and the parameters, $d_1$, $d_2$, and $\omega$, are set in order to achieve the excitation of one of the surface plasmons, either $\omega^+$ or $\omega^-$, it is possible that the other one might also be excited, due to the bandwidth of the wavepacket. To limit this effect we introduce a {\it bandwidth parameter}, $$B^\pm \equiv B(\omega^\pm) = \frac{\omega^+ - \omega^-}{2 \Delta\omega}.$$ Here, $\Delta \omega$ is the bandwidth of an incoming Gaussian wavepacket, which we set as $\Delta \omega=3.02\times 10^{13}$ rad/s, and is centered on $\omega=\omega^\pm$ for $B^{\pm}$. For a set $d_1$ and $\omega^\pm$, such that $B^\pm \geq 1$, the possibility to excite both surface plasmons simultaneously is very low.
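Written out explicitly with the numbers quoted above, the condition $B^{\pm}\geq 1$ simply requires the splitting between the branches to exceed the full spectral width of the wavepacket, \begin{equation} \omega^{+}-\omega^{-}\geq 2\Delta\omega \approx 6.04\times 10^{13}~{\rm rad/s}, \nonumber \end{equation} so that a Gaussian profile centered on one branch has negligible overlap with the other.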
The values of $B^{\pm}$ are shown in Figs.~\ref{fig:bpc}~{\bf (a)} and {\bf (b)} and indicate that only large values of $d_1$ and low $\omega$ suffer from the possibility of simultaneous excitation. Note that the region corresponding to large frequencies and low $d_1$ in Fig.~\ref{fig:bpc}~{\bf (b)} has been subtracted. This is because the dispersion for the symmetric excitation $\omega^- (k)$ can never reach these frequencies for the range of $d_1$ considered (see inset of Fig.~\ref{fig:disp}~{\bf (a)}). This is applicable to all the plots for the symmetric excitation. \begin{figure}[t] \centerline{\psfig{figure=figbpc.eps,width=8.0cm,height=10.2cm}} \caption{(Color online) Values of the parameters $B^{\pm}$, $P^{\pm}$, and $C^{\pm}$ corresponding to the optimized coupling $g^\pm(\omega)$ over the separation $d_2$.} \label{fig:bpc} \end{figure} A second restriction related to the optimization of the coupling parameter is the extent to which the LRSPP modefunctions ${\bm \phi}^{\pm}(\mathbf{r},\omega)$ penetrate into the prism. If the LRSPP field penetrates too much, the modefunctions should be modified to include the presence of the prism. In order to check the validity of using ${\bm \phi}^{\pm}(\mathbf{r},\omega)$, we introduce a {\it penetration factor}, $P^\pm=2/\nu_0^{\pm} d_2$, which depends on the three parameters $d_1$, $d_2$, and $\omega$. Here, $P^\pm \leq 1$ signifies that ${\bm \phi}^{\pm}(\mathbf{r},\omega)$ at $z=-d_2$ is less than $2\%$ of its maximum value. The dependence of this factor on $d_1$ and $\omega$ is depicted in Figs.~\ref{fig:bpc}~{\bf (c)} and {\bf (d)}. The values of $P^\pm$ shown correspond to the optimized coupling coefficient $g(\omega)$ (over the parameter $d_2$) for any pair of values of $d_1$ and $\omega$. Since the value of $g(\omega)$ could deviate significantly from the true coupling in regions where $P^\pm$ is large, due to the weak approximation of ${\bm \phi}^{\pm}(\mathbf{r},\omega)$ to the true modefunctions, we must ensure $g(\omega)$ meets the condition $P^\pm\leq 1$. \begin{figure}[b] \centerline{\psfig{figure=figgres.eps,width=7.7cm}} \caption{(Color online) Optimization of the coupling over the prism separation $d_2$, subject to the constraints $B^\pm\geq 1$, $P^\pm\leq 1$, and $C^\pm\geq 1$. {\bf (a)} $\omega^{+}$ excitation, {\bf (b)} $\omega^{-}$ excitation. Panels {\bf (c)} and {\bf (d)} show the values of coupling parameter $|g^{\pm}|$ (dashed line), metal thickness $d_1$ (solid), and prism separation $d_2$ (dot-dash), after numerical optimization: {\bf (c)} $\omega^{+}$ excitation, {\bf (d)} $\omega^{-}$ excitation. For $d_1$, by increasing the resolution of the numerical calculation, one finds the surface plots in {\bf (a)} and {\bf (b)} become smoother and the points in {\bf (c)} and {\bf (d)} tend toward the best fit line.} \label{fig:gres} \end{figure} A final restriction concerns the coupled nature of the LRSPP field. LRSPPs originate from the existence of coupled SPPs at both dielectric-metal interfaces of the metallic strip. For any finite value of the metal thickness, $d_1$, it is always possible to find a solution of the dispersion relation Eq.~(\ref{dispersioneq}). However, the interaction strength between both SPPs decays exponentially as $d_1$ grows and the LRSPP evolves into two single SPPs. Due to a lack of symmetry during the single-photon excitation in the 4L configuration, the incoming field may be concentrated on the nearest metal surface to the prism, without reaching the other side.
If the metal thickness is large enough, then it becomes impossible to excite any LRSPP. In order to account for this decoupling, we introduce a {\it coupled-surfaces parameter}, $C^\pm =4/\nu_m^\pm d_1$. This factor quantifies the extent to which the field penetrates into the metal (see Eq.~(\ref{Phi})) to maintain a coupling between both SPPs, depending on the parameters $d_1$ and $\omega$. In Figs.~\ref{fig:bpc}~{\bf (e)} and {\bf (f)} the parameter $C^\pm$ remains above 1 over the entire range considered, except for large values of $d_1$ and low $\omega$. Note that the restrictions and parameters introduced here also apply to the classical case, as they depend only on the classical modefunction structure. While the penetration restriction was introduced previously in the context of single interface SPP excitation~\cite{TSPP}, the bandwidth and coupling restrictions are new parameters emerging directly from this investigation of a multilayer configuration. With all three of these important restrictions properly identified for the system, we are now in a position to correctly optimize the value of the coupling $g(\omega)$. In Figs.~\ref{fig:gres}~{\bf (a)} and {\bf (b)} we show the normalized coupling parameter, $|\tilde{g}^\pm (\omega)|= 2 |g^\pm (\omega)| /\pi$, after being optimized over the variable $d_2$ and subject to the restrictions $B^\pm \geq1$, $P^\pm \leq1$, and $C^\pm \geq1$. The regions where these restrictions could not be met have been subtracted. Here a value of $|\tilde{g}^\pm|=1$ ($0$) corresponds to the transfer of a photon field to a LRSPP with unit (zero) probability. The highlighted paths correspond to the maximum coupling achievable. These paths are shown separately in Figs.~\ref{fig:gres}~{\bf (c)} and {\bf (d)}, plotted as a function of the frequency, together with the corresponding values of the parameters $d_1$ and $d_2$. A maximum value of $|\tilde{g}^+| =0.9$ is achieved for the $\omega^+$ excitation, whereas the $\omega^-$ excitation shows a flatter behavior, reaching $|\tilde{g}^-|=0.8$. It is interesting to note that as $d_1$ increases, the optimized couplings in Figs.~\ref{fig:gres}~{\bf (a)} and {\bf (b)} move closer to those found for the photonic excitation of SPPs on a single interface~\cite{TSPP}, but with a remarkably lower efficiency. Under such conditions ($d_1 \to\infty$), it is only feasible to excite one of the SPPs and the excitation can no longer be considered a single LRSPP ($C^{\pm}\to 0$). This transition from multilayer coupled quantum excitation to single interface excitation should be an important factor to consider when designing optimal quantum excitation methods in the context of multiple interfaces. \section{LRSPP propagation} As excited LRSPPs propagate along the metal surface they experience loss due to finite conductivity of the metal and surface roughness. This results in heating and radiative losses, respectively~\cite{Raether}. For a reasonably smooth surface, thermal loss is the main source of damping~\cite{Zayats}. In order to include this loss mechanism for the LRSPP excitations, we follow a standard phenomenological approach, using a bath of quantized field modes~\cite{CavesCrouch,LoudonDamp,Loudon}. The main advantages of this model are its simplicity and that it leads to the same physical conclusions as a more rigorous derivation~\cite{Senitzky}. We consider an array of $N=x/\Delta x$ discrete, equally spaced beamsplitters, as depicted in Fig.~\ref{fig:propag}~{\bf (a)}.
The upper ports represent the spatial evolution of the operator for a propagating LRSPP, with input $\hat{b}_{out}(\omega)$ and output $\hat{b}_{out}^{D}(\omega)$ after a distance $x$. The lower ports consist of a bath of field excitations, $\hat{c}_{i}(\omega)$, $i=1,\dots ,N$, which are independent and satisfy quantum mechanical bosonic commutation relations $[ \hat{c}_n(\omega),\hat{c}_m^{\dag}(\omega ')] = \delta_{nm}\delta(\omega-\omega').$ After applying successively the beamsplitter transformations, together with the continuum limit $N\to \infty$, $\Delta x\to 0$, such that $\hat{c}_{m}(\omega) \to \sqrt{\Delta x}\hat{c}(\omega,x')$ and $\delta_{mn} \to \Delta x\delta(x-x')$, the operator of a damped LRSPP at point $x$ can be written as~\cite{LoudonDamp,Loudon} \begin{equation} \hat{b}_{out}^{D}(\omega) = e^{iKx} \hat{b}_{out}(\omega) + i \sqrt{2\kappa } \int_{0}^{x} {\rm d}x' e^{iK(x-x')} \hat{c}(\omega,x'), \nonumber \end{equation} with $[\hat{c}(\omega,x), \hat{c}^{\dag}(\omega',x')] = \delta(\omega-\omega') \delta(x-x').$ Here we have introduced the complex wavenumber $K\equiv K(\omega)=k(\omega)+ i \kappa(\omega)$. This stems from solving the dispersion relations of the LRSPP excitations with the complex-valued dielectric function of the metal $\epsilon_m$. Under the above conditions, the output LRSPP remains a bosonic excitation, with the appearance of the second term in $\hat{b}_{out}^{D}(\omega)$ from the bath of field excitations preserving this bosonic nature. We now define the time dependent creation and annihilation operators through the inverse Fourier transform of the frequency dependent ones, for instance, $\hat{b}(t)=(2\pi)^{-1/2} \int {\rm d}\omega e^{-i \omega t} \hat{b}(\omega) .$ Although the limits of integration over the frequency are $(-\infty, \infty)$, we are interested in the case of a narrow wavepacket centered on frequency $\omega_0$ with bandwidth $\Delta \omega \ll \omega_0$, thus $\omega \in (0,\infty)$ can be taken. To calculate the mean flux of LRSPPs at a point $x$ and time $t$ after their excitation, we can write~\cite{Loudon,LoudonDamp} \begin{eqnarray} f_{out}^{D}(t) &=& \left\langle\hat{b}_{out}^{D \dag} (t) \hat{b}_{out}^{D} (t) \right\rangle \nonumber\\ &=& \frac{1}{2\pi} \int {\rm d}\omega \int {\rm d}\omega' \left\langle\hat{b}_{out}^{\dag} (\omega) \hat{b}_{out} (\omega') \right\rangle \times \nonumber\\ & & e^{-(\kappa(\omega)+\kappa(\omega')) x} e^{i[(k(\omega')-k(\omega))x-(\omega'-\omega)t]} , \label{foutD} \end{eqnarray} where we have used~\cite{Loudon,TSPP} $\left\langle\hat{c}^{\dag} (\omega,x) \right\rangle= \left\langle \hat{c} (\omega,x) \right\rangle=\left\langle\hat{c}^{\dag} (\omega,x) \hat{c} (\omega',x') \right\rangle=0$. Due to the small bandwidth of the wavepacket we are considering, the imaginary part of the LRSPP wavenumber remains essentially constant around the central frequency, {\it i.e.} $\kappa(\omega)\approx \kappa(\omega_0)\equiv \kappa_0$, whereas the real part can be approximated by truncating its series up to first order, $k(\omega)=k(\omega_0)+ (\omega-\omega_0)v_G^{-1}(\omega_0)$. Here $v_G^{-1}(\omega_0)=\partial k(\omega)/\partial \omega |_{\omega_0}$ is the inverse of the group velocity of the LRSPPs at the central frequency $\omega_0$ (for either $\omega^{+}$ or $\omega^{-}$).
We then have \begin{eqnarray} f_{out}^{D}(t) &=& \frac{1}{2\pi} e^{-2 \kappa_0 x} \int {\rm d}\omega \int {\rm d}\omega' e^{i(\omega-\omega')(t-x/v_G)} \times \nonumber\\ & & \left\langle\hat{b}_{out}^{\dag} (\omega) \hat{b}_{out} (\omega') \right\rangle \label{foutD2} \\ & = & e^{-2 \kappa_0 x} \left\langle\hat{b}_{out}^{\dag} (t_R) \hat{b}_{out} (t_R) \right\rangle \equiv e^{-2 \kappa_0 x} f_{out}(t_R). \nonumber \end{eqnarray} The mean flux of LRSPPs at point $x$ and time $t$ therefore equals that at $x=0$ and time $t_R=t-x/v_G$, but damped by a factor $e^{-2\kappa_0 x}$ due to losses incurred during the propagation. We now consider an incoming $n$-photon wavepacket state~\cite{Loudon} entering the prism with frequency profile $\xi(\omega)$, given by $\ket{n_{\xi}} =(n!)^{-1/2}( \hat{a}_{\xi}^{\dag})^{n} \ket{0}$. Here, $\hat{a}_{\xi}^{\dag} = \int{\rm d}\omega \xi(\omega) \hat{a}^{\dag}(\omega) = \int{\rm d}t \xi(t) \hat{a}^{\dag}(t) $ and for simplicity we use the Gaussian profile $\xi(\omega) = (2\pi \sigma^{2})^{-1/4} \exp [ -i(\omega-\omega_0)t_0 - (\omega-\omega_0 )^{2}/4 \sigma^{2} ]$, where $\sigma=\Delta\omega/(2\sqrt{2\ln 2})$ and $t_0$ is the time of injection. The mean number of LRSPPs that can be detected at point $x$ is obtained by integrating $f_{out}^{D}(t)$ in Eq.~(\ref{foutD2}) over the time interval $\tau= [ t_0+ x/v_G - 3\sigma, t_0+ x/v_G +3\sigma ]$, leading to \begin{equation} \left\langle m \right\rangle = \int_\tau f_{out}^D (t) {\rm d}t = \mu e^{-2 \kappa_0 x} |\beta(\omega_0)|^{2} n , \label{mean} \end{equation} where $\mu$ parameterizes the detector efficiency~\cite{LoudonLoss}, and we have used the fact that the LRSPP coupling does not vary appreciably over the bandwidth considered~\cite{TSPP}, {\it i.e.} $\beta(\omega)\approx \beta(\omega_0)$. Figs.~\ref{fig:propag}~{\bf (b)} and {\bf (c)} show $\langle \tilde{m}\rangle=\langle m \rangle / n$, for $\omega^+$ and $\omega^-$ respectively, along $x$ for the optimized $\beta(\omega)$ (and therefore $g(\omega)$) obtained from Figs.~\ref{fig:gres}~{\bf (c)} and {\bf (d)}. It is interesting to see the positive effect that the structure of the $\omega^{+}$ excitations has on damping reduction compared to $\omega^{-}$. \begin{figure}[t] \centerline{\psfig{figure=figpropag.eps,width=7.0cm}} \caption{(Color online) Phenomenological model to include the effect of damping for the LRSPP propagation~\cite{LoudonDamp}. {\bf (a)}: Array of beamsplitters and bath of field excitations. {\bf (b)} and {\bf (c)}: Normalized mean excitation number $\langle \tilde{m}\rangle=\langle m \rangle / n$ as a function of the frequency and the distance traveled from the injection point. The optimal profiles obtained in Fig.~\ref{fig:gres} are used for the $\omega^{+}$ excitation {\bf (b)} and $\omega^{-}$ excitation {\bf (c)}. A detector efficiency of $\mu=0.65$ is used~\cite{TSPP}.} \label{fig:propag} \end{figure} While the observables $\langle m^\pm \rangle$ match well the behavior of their classical counterpart, the field intensity $I$~\cite{Zayats}, we must emphasize that the formalism presented here provides a more complete description of the LRSPPs; we are now able to investigate the behavior of quantum statistics. In particular, in order to show that the LRSPP field has quantum characteristics, we need to consider the zero time-delay second-order quantum coherence function $g^{(2)}(0)$ at a fixed position~\cite{Loudon}.
This observable is defined as $g^{(2)}(0)\!=\!\langle\,\colon\!\hat{I}^{2}(t)\colon\!\rangle/\langle\,\colon\!\hat{I}(t)\colon\!\rangle^2$, where $\hat{I}$ is the intensity of the quantized field operator, $\colon\colon$ denotes normal-ordering of the quantum operators and the expectation value is taken over the {\it initial} state of the field. It has been noted recently~\cite{TSPP}, that as the photon-to-surface plasmon transfer and propagation stages constitute an array of lossy beamsplitters~\cite{LoudonLoss}, $g^{(2)}(0)$, which is equal to $\langle m(m-1) \rangle/\langle m \rangle^{2}$ for an incident $n$-photon wavepacket, remains unaffected by the entire conversion process. This is to be expected~\cite{Loudon}, because at a beamsplitter with loss coefficient $\eta^{1/2}$, the quantum observables $\langle m\rangle \to \eta \langle m\rangle$ and $\langle m(m-1) \rangle \to \eta^2 \langle m(m-1) \rangle$. Thus, the individual losses accumulated from the transfer and damping processes cancel due to the form of $g^{(2)}(0)$. For a classical field $1 \le g^{(2)}(0) \le \infty$, whereas for an incident $n$-photon wavepacket $g^{(2)}(0)=1-1/n<1$. Thus $g^{(2)}(0)$ for a propagated LRSPP, which inherits this value, will always lie in the classically {\it forbidden} region $g^{(2)}(0)<1$. A Hanbury-Brown Twiss type experiment~\cite{HBT} could be used to measure $g^{(2)}(0)$. Here, single-photon detection-based techniques could be employed. One might use an additional prism to convert the LRSPP excitation back into a photon and indirectly measure the signal with avalanche photodiode detectors. A more direct approach would be to use avalanche-type plasmonic detectors~\cite{pdet} embedded within the metal surface to directly probe the surface plasmon's excitation signal. In either approach, the detection data from many identical excitation processes would be required in order to determine the overall quantum expectation value of $g^{(2)}(0)$. This repetition technique is used frequently in quantum photonic experiments~\cite{Rempe} and could be achieved easily by using a steady rate of single photons injected into the prism at set time intervals. \section{Quantum State Transfer} The ability to interconvert a quantum state between two kinds of physical system is an important requirement for QIP and is one of DiVincenzo's criteria~\cite{Divincenzo} for quantum computing and communication. As an example of efficient quantum state transfer from photons to LRSPPs using the theory developed in the previous sections, we consider a superposition of coherent states~\cite{cat}. These states reside in a single field mode consisting of superpositions of two coherent states of equal amplitude separated in phase by $180^\circ$ and written as \begin{eqnarray} \ket{\psi}={\cal N}(\ket{\alpha} + e^{i \varphi}\ket{- \alpha}). \label{scs} \end{eqnarray} Here $\ket{\pm\alpha}={\rm exp}(-|\alpha|^2/2)\sum_{n=0}^{\infty}[(\pm \alpha)^n/\sqrt{n !}]\ket{n}$, with the normalization ${\cal N}= [2 + 2 \exp (- 2 |\alpha|^{2}) \cos \varphi]^{-1/2}$. When $\alpha$ is real, $\ket{\psi}$ for $\varphi=0$ is orthogonal to the state with $\varphi=\pi$, regardless of the size of $\alpha$. Therefore one can use $\ket{\psi}_{\varphi=0}$ and $\ket{\psi}_{\varphi=\pi}$ as a logical basis for QIP~\cite{BVE}. For the remainder of the paper, we will focus on $\ket{\psi}_{\varphi=0}$ as our example basis state. Photonic superpositions of coherent states injected into the prism can be transferred to a quantum state of LRSPPs under the transformation matrix ${\cal T}(\omega)$ given in Eq.~(\ref{Heisenberg}).
The unitary transformation defined by ${\cal T}(\omega)$ acting on the input product state $\ket{\Psi}_{in}=\ket{\psi}_{a_{in}} \ket{0}_{b_{in}}$ produces the state \begin{eqnarray} \ket{\Psi}_{out}&=& {\cal N} (\ket{\alpha \cos g(\omega)}_{a_{out}} \ket{-\alpha \sin g(\omega)}_{b_{out}} \nonumber \\ & & + \ket{-\alpha \cos g(\omega)}_{a_{out}} \ket{\alpha \sin g(\omega)}_{b_{out}}), \end{eqnarray} where the coupling coefficient $g(\omega) = \sin^{-1}|\beta(\omega)|$ is used, with phase $\Phi$ absorbed into the definition of the incoming-outgoing fields~\cite{salehleon}. Tracing out the unobserved photon mode $a_{out}$ of the photon-LRSPP system, the final state of the LRSPP mode $b_{out}$ for a specific frequency $\omega$ is given by \begin{eqnarray} \hat{\rho}_{b_{out}}&=& {{\cal N'}^2} (\proj{\alpha \sin g}{\alpha \sin g} + c_{0} \proj{\alpha \sin g}{-\alpha \sin g} \nonumber \\ & & + c_{0} \proj{-\alpha \sin g}{\alpha \sin g} + \proj{-\alpha \sin g}{-\alpha \sin g}), \nonumber \\ \label{density} \end{eqnarray} where $c_{0}=\exp[-2 \alpha^2 \cos^2 g]$. We now consider the damped propagation of these excited LRSPP coherent superposition states as they travel along the surface of the metal. This analysis complements well the study performed in the previous section on amplitude damping of the LRSPP field. In contrast to before however, we now have a {\it coherent} superposition of an LRSPP excitation and we seek to characterize the effects of loss of coherence in this state, more commonly referred to as {\it decoherence}. Using the identity for the damped LRSPP field operator $\hat{b}_{out}^{D}$ in Section IV, and applying it to an excited coherent superposition state wavepacket of central frequency $\omega_0$, as described by Eq.~(\ref{density}), a straightforward calculation leads to \begin{eqnarray} \hat{\rho}_{b}(x)&=& {\cal N'}^2(\proj{\alpha \sin g~e^{-\kappa_{0} x}}{\alpha \sin g~e^{-\kappa_{0} x}} \nonumber \\ && +~c_{0} c(x) \proj{\alpha \sin g~e^{-\kappa_{0} x}}{-\alpha \sin g~e^{-\kappa_{0} x}} \nonumber \\ && +~c_{0} c(x) \proj{-\alpha \sin g~e^{-\kappa_{0} x}}{\alpha \sin g~e^{-\kappa_{0} x}} \nonumber \\ && + ~\proj{-\alpha \sin g~e^{-\kappa_{0} x}}{-\alpha \sin g~e^{-\kappa_{0} x}}), \label{dscs} \end{eqnarray} where $c(x)=\exp[-2 \alpha^2 (\sin^2 g)(1-e^{-2 \kappa_{0} x})]$. It is easily seen from Eq.~(\ref{dscs}) that as LRSPP coherent superposition states propagate along the metal surface, the initial superposition evolves into a statistical mixture of coherent states due to the factors $c_{0}c(x)$. As the coefficients of the off-diagonal elements of the density operator expressed in the coherent state basis vanish fastest, the initial mixture of the coherent state superposition tends toward a classical mixture (dephasing) at early times, eventually moving toward the vacuum state (amplitude damping) at long times. The coefficients of the off-diagonal elements also indicate that the greater the value of $\alpha$, the more quickly quantum coherences will decay through a dephasing type process of the LRSPP state~\cite{BVK}. \begin{figure}[t] \centerline{\psfig{figure=entropy.eps,width=8.3cm,height=5.494cm}} \caption{(Color online) The entropy ${\textrm S_{V}}$ as a function of frequency and distance traveled for fixed values of cat-state amplitude $\alpha$. {\bf (a)} and {\bf (c)} correspond to the $\omega^{+}$ LRSPP excitations with $\alpha=2$ and $5$ respectively.
{\bf (b)} and {\bf (d)} correspond to the $\omega^{-}$ excitations with $\alpha=2$ and $5$ respectively.} \label{fig7} \end{figure} A characterization of the loss of coherence in a quantum state can be investigated using the von Neumann entropy~\cite{von} defined as ${\textrm S_{V}}=- {\textrm Tr}(\hat{\rho}~{\rm ln} \hat{\rho})$. The von Neumann entropy is a monotonic function of the linear entropy for a two-level quantum state of a single mode~\cite{vonlin}. The evaluation of the von Neumann entropy requires, in general, the diagonalization of the density operator $\hat{\rho}$. Fortunately, the density operator of coherent superposition states with $\alpha \in {\mathbb R}$ can be decomposed into the orthonormal basis $\ket{\pm}={{\cal N_{\pm}}}(\ket{\alpha \sin g~e^{-\kappa_{0} x}} \pm \ket{-\alpha \sin g~e^{-\kappa_{0} x}})$, with ${\cal N}_{\pm}=(2\pm 2e^{-2 \alpha^2 (\sin^2 g)e^{-2 \kappa_0 x}})^{-1/2}$. In this basis, Eq.~(\ref{dscs}) becomes \begin{eqnarray} \hat{\rho}_{b}(x)&=& \lambda_{+}(x) \hat{\rho}_{+}(x) + \lambda_{-}(x) \hat{\rho}_{-}(x), \end{eqnarray} where $\lambda_{\pm}(x) = \frac{{\cal N'}^{2}}{2 {\cal N_{\pm}}^{2}}(1 \pm c_{0}c(x))$ are eigenvalues corresponding to eigenstates $\hat{\rho}_{\pm}(x) = \proj{\pm}{\pm}$, together with $\lambda_{+}(x)+\lambda_{-}(x)=1$. The von Neumann entropy is then simply ${\textrm S_{V}(x)}=- \lambda_{+} {\textrm ln} (\lambda_{+}) - \lambda_{-} {\textrm ln} (\lambda_{-})$. In Fig.~\ref{fig7} we show the dependence of ${\textrm S_{V}}$ on the frequency and the distance traveled for specific values of $\alpha$. From Fig.~\ref{fig7} one can see that the greater the value of $\alpha$, the more quickly the entropy increases toward its maximal value, and thus the mixedness of the state increases, indicating a greater loss of coherence due to dephasing, an effect noted earlier~\cite{BVK}. Moreover, one can see from the left hand ($\omega^{+}$) and right hand ($\omega^{-}$) columns of Fig.~\ref{fig7} that, for a given value of $\alpha$, the entropy slowly increases for the $\omega^{+}$ LRSPPs compared to the $\omega^{-}$ excitations as they propagate. This effect is related to the amplitude damping process observed in Sec. IV for the propagation of LRSPP-number states and is due to the smaller value of $\kappa_0$ for a given mode frequency for the $\omega^{+}$ excitations, a result of the structure of the modefunctions and their corresponding dispersion relation. According to Jeong {\it et al.}~\cite{Jeong2} mixed macroscopic superposition states can, in some cases, be more robust with respect to decoherence than their pure state counterparts. We expect this study to be useful in future work on the optimization of quantum state transfer of photons to LRSPPs and the consideration of different forms of SPPs on multiple interfaces. \section{Concluding remarks} We have provided a fully quantum-mechanical description of the photonic excitation of LRSPPs using a versatile ATR geometry. In order to do this it was necessary to quantize the LRSPP field, which we included as an appendix. With this, we described the photon-LRSPP coupling mechanism by means of a linear Hamiltonian and optimized the coupling efficiency over a wide-range of parameters accessible to the setup. We found remarkably good transfer efficiencies. A phenomenological model was then used to account for damping as the LRSPPs propagate, where the long-range behavior of the excitations manifested itself through quantum interactions with an environment.
The effect of finite bandwidth for the incoming photon field on the coupling optimization and on the propagation of a LRSPP wavepacket along the metal surface was also discussed. We studied the quantum statistics of the excited LRSPP fields and provided an outline of how one might experimentally investigate them. Finally, we characterized the performance of photon-to-LRSPP quantum state transfer, providing an informative example of the transfer of coherent superposition states~\cite{cat}. We found efficient transfer and analyzed the loss of coherence in the states. The work presented here should be a useful starting point for future research into the practical design of novel long-range and multilayer plasmonic quantum-controlled devices based at the nanoscale. Applications in this context include SPP-enhanced nonlinear photon interactions and SPP-assisted photonic quantum networking and processing. \section{Acknowledgments} We thank M. Paternostro and C. Di Franco for helpful discussions and insights. We acknowledge funding from ESF, EPSRC, QIPIRC and KRF~(2005-041-C00197). \appendix \section*{Appendix: LRSPP quantization} Here we develop the quantization procedure of Elson and Ritchie~\cite{ER} for the case of a thin metallic strip. An alternative approach based on the point-ion model for a dielectric slab has been used in Ref. \cite{Tomas} to quantize the long-range plasmonic fields in the polaritonic regime. For quantization in the limit of short wavelengths, {\it i.e.} the {\it non-retarded} regime, the reader is kindly referred to Refs.~\cite{Econ,Sun}. {\it Classical mode structure}.-- We start with the geometry depicted in Fig.~1~{\bf (b)}, where Maxwell's equations in terms of the vector potential ${\bf A}({\bf r},t)$ lead to \begin{gather} \label{field1} \left[ \nabla^2 - \frac{\epsilon}{c^2}\frac{\partial^2}{\partial t^2} \right] {\bf A}=\nabla(\nabla \cdot {\bf A}), \\ \label{field2} \nabla \cdot \dot{\bf A}=\frac{e}{\epsilon_0}(n({\bf r},t)-n_0({\bf r})). \end{gather} Here $\epsilon=\epsilon({\bf r},\omega)$ is a position and frequency-dependent dielectric function, $n({\bf r},t)$ is the electronic number density of the electron gas, $n_0({\bf r})$ is the static density in the undisturbed electron gas and $e$ is the electronic charge. We use the gauge $\phi=0$, where the electric field ${\bf E}=-\dot{\bf A}$ and magnetic field ${\bf B}=\nabla \times {\bf A}$. The classical energy residing in both the fields and electron gas is given by~\cite{ER} \begin{equation} \label{Ham} {\cal H}=\frac{\epsilon_0}{2}\int {\rm d}^3r \left[\left[1+\vartheta_{eg}\frac{\omega^2(1-\epsilon)^2}{\omega_p^2({\bf r},t)}\right]\dot{\bf A}^2+ c^2 (\nabla \times {\bf A})^2\right]. \end{equation} Here, $\vartheta_{eg}=\vartheta(z)\vartheta(d-z)$ is a step function for the electron gas located in the region $0<z<d$, with $\vartheta$ denoting the Heaviside function. In addition, $\omega_p({\bf r},t)=[n({\bf r},t)e^2/(\epsilon_0 m^*)]^{1/2}$ is a position and time dependent plasma frequency, with $m^*$ the effective electron mass. Following a linearized hydrodynamic approach~\cite{ER,Pitarke} and taking into account the location of the electron gas, the approximation $n({\bf r},t) \approx n_0({\bf r})=\vartheta(z)\vartheta(d-z)n_0$ is used.
Correspondingly, we have $\epsilon({\bf r},\omega)=\vartheta(-z)+\vartheta(z)\vartheta(d-z)\epsilon(\omega)+\vartheta(z)\vartheta(z-d)$, where $\epsilon(\omega)$ is a real-valued dielectric function for the metal of thickness $d$ sandwiched by two layers of air with $\epsilon=1$. Note that here we are considering an ideal case with no damping effects in the metal. This simplifies the quantization procedure. However, damping can be introduced at a later stage~\cite{TSPP}, as described in the main text. From Eqs.~(\ref{field1}) and (\ref{field2}) and the above considerations, we now have the classical field equation \begin{equation} \label{finalfield} \left[ \nabla^2 - \frac{\epsilon}{c^2}\frac{\partial^2}{\partial t^2} \right] {\bf A}=0, \end{equation} and $\nabla \cdot{\bf A}=0$ in the region $z \neq \{0,d\}$. Here, the usual conditions of continuity of the tangential components of the fields across the planes at $z=0$ and $d$ respectively must be satisfied. To find the normal mode solutions for the system, we make the standard Ansatz \begin{equation} \label{Ans} {\bf A}({\bf r},t)=\sum_{\bf k}{\bf A}_{\bf k}(z)N_{\bf k}(t)e^{i {\bf k} \cdot {\bf r}}, \end{equation} where ${\bf r}=x\hat{\bf x} + y\hat{\bf y}$ is a vector parallel to the $x-y$ plane, ${\bf k}=k_x\hat{\bf x} + k_y\hat{\bf y}$ and the associated eigenfrequency, denoted by $\omega_{\bf k}$, depends on ${\bf k}$. The temporal amplitude $N_{\bf k}(t)$ is assumed to satisfy the oscillator equation, {\it i.e.} $({\rm d}^2/{\rm d}t^2+\omega_{\bf k}^2)N_{\bf k}(t)=0$, thus upon inserting Eq.~(\ref{Ans}) into Eq.~(\ref{finalfield}), one obtains \begin{gather} \left(\frac{{\rm d}^2}{{\rm d}z^2}-\nu_0^2 \right){\bf A}_{\bf k}^{\pm}(z)=0,~~\left(\frac{{\rm d}^2}{{\rm d}z^2}-\nu_m^2 \right){\bf A}_{\bf k}^{m}(z)=0, \nonumber \end{gather} where $\nu_m^2=k^2-\omega_{\bf k}^2\epsilon(\omega_{\bf k})/c^2$ and $\nu_0^2=k^2-\omega_{\bf k}^2/c^2$. Here the spatial amplitudes ${\bf A}_{\bf k}^{+}(z)$, ${\bf A}_{\bf k}^{-}(z)$ and ${\bf A}_{\bf k}^{m}(z)$ correspond to fields in the $z>d$, $z<0$ and $0<z<d$ regions respectively. Consider the following solutions: ${\bf A}_{\bf k}^{+}(z)={\bf A}_{\bf k}^{+}e^{-\nu_0(z-d)}$, ${\bf A}_{\bf k}^{-}(z)={\bf A}_{\bf k}^{-}e^{\nu_0 z}$ and ${\bf A}_{\bf k}^{m}(z)={\bf A}_{\bf k}^{m^+}e^{-\nu_m z}+{\bf A}_{\bf k}^{m^-}e^{\nu_m(z-d)}$, where ${\bf A}_{\bf k}^{+}=\alpha^{+}_{\bf k}\hat{\bf k}+\beta^{+}_{{\bf k}}\hat{\bf z}$, ${\bf A}_{\bf k}^{-}=\alpha_{\bf k}^{-}\hat{\bf k}+\beta_{{\bf k}}^{-}\hat{\bf z}$ and ${\bf A}_{\bf k}^{m^\pm}=\alpha^{m^\pm}_{\bf k}\hat{\bf k}+\beta^{m^\pm}_{{\bf k}}\hat{\bf z}$. Here we focus on TM modes of the system due to boundary conditions~\cite{Econ, LRSPPHV}. By requiring that the tangential components of ${\bf E}$ and ${\bf B}$ derived from {\bf A} across the planes $z=0$ and $d$ are continuous and $\nabla \cdot {\bf A}=0$ elsewhere, one finds the $\alpha_{\bf k}$'s and $\beta_{\bf k}$'s are related to one another, with solutions existing only if $e^{- \nu_m d}=\pm(\nu_m+\epsilon(\omega_{\bf k})\nu_0)/(\nu_m-\epsilon(\omega_{\bf k})\nu_0)$ is satisfied. At a set thickness $d$ there are two possible eigenfrequencies of this equation for a given ${\bf k}$, which we denote $\omega_{\bf k}^\pm$. Thus, there are two sets of coefficients for the $\alpha_{\bf k}$'s and $\beta_{\bf k}$'s for a given ${\bf k}$.
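As a practical illustration (not required for the derivation that follows), the two eigenfrequencies can be located numerically by scanning the residual of the dispersion relation for a sign change and then refining with a standard root finder. The following minimal sketch assumes a lossless Drude model $\epsilon(\omega)=1-\omega_p^2/\omega^2$ with the silver plasma frequency quoted in the main text; the values of $k$ and $d$ are placeholders chosen purely for the example. \begin{verbatim}
import numpy as np
from scipy.optimize import brentq

c = 2.998e8          # speed of light [m/s]
omega_p = 1.402e16   # silver plasma frequency [rad/s], value quoted in the main text

def eps(omega):
    # Lossless Drude model; the delta-epsilon corrections of the main text are omitted.
    return 1.0 - (omega_p / omega) ** 2

def residual(omega, k, d, sign):
    # exp(-nu_m d) minus sign*(nu_m + eps nu_0)/(nu_m - eps nu_0); zero at a mode.
    e = eps(omega)
    nu_m = np.sqrt(k ** 2 - e * (omega / c) ** 2)
    nu_0 = np.sqrt(k ** 2 - (omega / c) ** 2)
    return np.exp(-nu_m * d) - sign * (nu_m + e * nu_0) / (nu_m - e * nu_0)

def mode_frequency(k, d, sign, n_scan=4000):
    # Scan below both the light line and omega_sp = omega_p/sqrt(2), then refine.
    omega_max = min(omega_p / np.sqrt(2.0), 0.999 * c * k)
    omegas = np.linspace(1e13, omega_max, n_scan)
    vals = [residual(w, k, d, sign) for w in omegas]
    for i in range(n_scan - 1):
        if vals[i] * vals[i + 1] < 0:  # bracketed root
            return brentq(residual, omegas[i], omegas[i + 1], args=(k, d, sign))
    return None

if __name__ == "__main__":
    k, d = 4.0e7, 20e-9   # example in-plane wavevector [rad/m] and strip thickness [m]
    print("omega_+ =", mode_frequency(k, d, +1))
    print("omega_- =", mode_frequency(k, d, -1))
\end{verbatim}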
The most general form of ${\bf A}({\bf r},t)$ is then \begin{equation} \label{soln} {\bf A}^\pm({\bf r},t)=\sum_{\bf k}{\bm \phi}_{\bf k}^\pm(z)N_{\bf k}^\pm(t) e^{i {\bf k} \cdot {\bf r}}+ c.c, \end{equation} where the eigenmodes are given by \begin{eqnarray} {\bm \phi}_{\bf k}^\pm(z)&=&[(\hat{\bf k}-(ik/\nu_0)\hat{\bf z})e^{\nu_0 z} \vartheta(-z) \nonumber \\ & & +(1-\nu_m/\epsilon(\omega_{\bf k})\nu_0)[(\hat{\bf k}+(i k/\nu_m)\hat{\bf z})e^{-\nu_m z} \nonumber \\ & & \mp( \hat{\bf k}-(ik/\nu_m)\hat{\bf z})e^{\nu_m (z-d)}] \vartheta(z)\vartheta(d-z) \nonumber \\ & & \mp( \hat{\bf k}+(ik/\nu_0)\hat{\bf z})e^{-\nu_0 (z-d)} \vartheta(z-d)], \end{eqnarray} and the time-dependent amplitudes are given by $N_{\bf k}^\pm(t)=N_{\bf k}^{\pm}e^{-i \omega^{\pm}_{\bf k} t}$. In the above, we have used $\alpha_{\bf k}^{-}$ as the free coefficient in the coupled boundary equations and absorbed it into the definition of $N_{\bf k}^{\pm}$. Due to the symmetry in the phases of the amplitudes in the ${\bm \phi}_{\bf k}^\pm(z)$'s with respect to the center of the metal, the associated field modes are commonly referred~\cite{Zayats,Econ} to as antisymmetric ($\omega_{\bf k}^{+}$) and symmetric modes $(\omega_{\bf k}^{-})$. {\it Discretization and quantization}.-- We now discretize the classical system and quantize it. The components of ${\bf k}$ are taken to be $k_x=2 \pi \ell_x/L$ and $k_y=2 \pi \ell_y/L$, where $\ell_x$ and $\ell_y$ are integers, with $e^{i {\bf k}\cdot {\bf r}}$ satisfying boundary conditions at the planes $x=\pm L/2$ and $y=\pm L/2$. Substituting Eq.~(\ref{soln}) into Eq.~(\ref{Ham}) one finds the total energy of the discretized classical modes given by $ {\cal H}^{\pm}=\sum_{\bf k}\epsilon_0 L^2 (\omega_{\bf k}^{\pm})^2{\cal N}_{\bf k}^{\pm}(N_{\bf k}^{\pm}N_{\bf k}^{\pm *}+N_{\bf k}^{\pm *}N_{\bf k}^{\pm}),$ where ${\cal N}_{\bf k}^{\pm}$ is a coefficient with dimensions of length. Using the correspondence with a quantized harmonic oscillator~\cite{Loudon}, {\it i.e.} $N_{\bf k}^{\pm}\to(\hbar/2\epsilon_0L^2\omega_{\bf k}^{\pm}{\cal N}_{\bf k}^{\pm})^{1/2}\hat{b}_{{\bf k},\pm}$ and $N_{\bf k}^{\pm *}\to(\hbar/2\epsilon_0L^2\omega_{\bf k}^{\pm}{\cal N}_{\bf k}^{\pm})^{1/2}\hat{b}_{{\bf k},\pm}^{\dag}$, we have the Hamiltonian \begin{equation} \hat{\cal H}^{\pm}=\frac{1}{2}\hbar \omega_{\bf k}^{\pm}(\hat{b}_{{\bf k},\pm}\hat{b}_{{\bf k},\pm}^{\dag}+\hat{b}_{{\bf k},\pm}^{\dag}\hat{b}_{{\bf k},\pm}) , \end{equation} along with the vector potential converted to the operator \begin{equation} \hat{\bf A}^{\pm}({\bf r},t)=\sum_{\bf k}(\hbar/2\epsilon_0L^2\omega_{\bf k}^{\pm}{\cal N}_{\bf k}^{\pm})^{1/2}[{\bm \phi}_{\bf k}^\pm({\bf r})e^{-i \omega_{\bf k}^{\pm}t}\hat{b}_{{\bf k},\pm}+ h.c]. \nonumber \end{equation} Here we have ${\bm \phi}_{\bf k}^\pm({\bf r})={\bm \phi}_{\bf k}^\pm(z)e^{i{\bf k}\cdot {\bf r}}$, where the creation and annihilation operators for the quantum excitations satisfy $[\hat{b}_{{\bf k},\pm},\hat{b}_{{\bf k}',\pm}^{\dag}]=\delta_{{\bf k}, {\bf k}'}$. {\it Continuum limit and beam-width}.-- We now take the continuum limit using the transformations $\sum_{\bf k} \to (L/2\pi)^2\int{\rm d}{\bf k}$ and $\hat{b}_{{\bf k},\pm}\to (2 \pi/L)\hat{b}_{\pm}({\bf k})$, leading to \begin{eqnarray} \hat{\bf A}^{\pm}({\bf r},t)&=&\frac{1}{2\pi}\int {\rm d}{\bf k}(\hbar/2\epsilon_0\omega^{\pm}(k){\cal N}^{\pm}(k))^{1/2}\times \nonumber \\ & & [{\bm \phi}^\pm({\bf r}, {\bf k}) e^{-i \omega^{\pm}(k)t} \hat{b}_{\pm}({\bf k})+ h.c]. 
\end{eqnarray} Next, the excitations propagating in the $\hat{\bf x}$ direction have a beam-width ${\cal W}$ imposed in the $\hat{\bf y}$ direction~\cite{Blow} using $\int {\rm d}{\bf k} \to (2 \pi/{\cal W})\sum_{k_y}\int {\rm d}k_x$ and setting $k_y=0$, with $\delta^2({\bf k}-{\bf k}')\to ({\cal W}/2\pi)\delta(k-k')$ and $\hat{b}_{\pm}({\bf k}) \to ({\cal W}^{1/2}/2\pi)\hat{b}_{\pm}(k)$, giving \begin{eqnarray} \hat{\bf A}^{\pm}({\bf r},t)&=&\frac{1}{2\pi}\int {\rm d}k(\hbar/2\epsilon_0{\cal W}\omega^{\pm}(k){\cal N}^{\pm}(k))^{1/2}\times \nonumber \\ & & [{\bm \phi}^\pm({\bf r},k) e^{-i \omega^{\pm}(k)t} \hat{b}_{\pm}(k)+ h.c]. \end{eqnarray} Finally, we convert to the frequency domain using ${\rm d}k \to [v_G(\omega^\pm)]^{-1}{\rm d}\omega^\pm$ and $\hat{b}_{\pm}(k)\to [v_G(\omega^\pm)]^{1/2}\hat{b}_{\pm}(\omega^\pm)$, where $v_G(\omega^\pm)=\partial \omega^\pm/\partial k$ is the group velocity. This gives the quantized vector potential for the $\omega^{\pm}$ field as \begin{eqnarray} \hat{\bf A}^{\pm}({\bf r},t)&=&\frac{1}{2\pi}\int {\rm d}\omega^{\pm}(\hbar/2\epsilon_0{\cal W}\omega^{\pm}v_G(\omega^\pm){\cal N}^\pm(\omega^\pm))^{1/2}\times \nonumber \\ & & [{\bm \phi}^{\pm}({\bf r},\omega^\pm) e^{-i \omega^{\pm}t} \hat{b}_\pm(\omega^\pm)+ h.c]. \end{eqnarray} \begin{thebibliography}{99} \bibitem{Zayats} A. V. Zayats, I. I. Smolyaninov and A. A. Maradudin, Phys. Rep. {\bf 408}, 131 (2005). \bibitem{electro} W. L. Barnes, A. Dereux and T. W. Ebbesen, Nature {\bf 424}, 824 (2003). \bibitem{plasmonQIP} J. L. van Velsen, J. Tworzydlo and C. W. J. Beenakker, Phys. Rev. A {\bf 68} 043807 (2003); S. Fasel, M. Halder, N. Gisin and H. Zbinden, New J. Phys. {\bf 8}, 13 (2006); X.-F. Ren, G.-P. Guo, Y.-F. Huang, Z.-W. Wang, and G.-C. Guo, Opt. Lett. {\bf 31}, 2792 (2006); A. Kamli, S. A. Moiseev and B. C. Sanders, Phys. Rev. Lett. {\bf 101}, 263601 (2008). \bibitem{Alte} E. Altewischer, M. P. van Exter and J. P. Woerdman, Nature {\bf 418}, 304 (2002); E. Moreno, F. J. García-Vidal, D. Erni, J. I. Cirac and L. Martín-Moreno, Phys. Rev. Lett. {\bf 92}, 236801 (2004); S. Fasel, F. Robin, E. Moreno, D. Erni, N. Gisin, and H. Zbinden, Phys. Rev. Lett. {\bf 94}, 110501 (2005); X.-F. Ren, G. P. Guo, Y. F. Huang, C. F. Li and G. C. Guo, Europhys. Lett. {\bf 76}, 753 (2006). \bibitem{Lukin1} D. E. Chang, A. S. S\o rensen, P. R. Hemmer and M. D. Lukin, Phys. Rev. Lett. {\bf 97}, 053002 (2006); A. V. Akimov, A. Mukherjee, C. L. Yu, D. E. Chang, A. S. Zibrov, P. R. Hemmer, H. Park and M. D. Lukin, Nature {\bf 450}, 402 (2007). \bibitem{Lukin2} D. E. Chang. A. S. S\o rensen, E. A. Demler, M. D. Lukin, Nature Phys. {\bf 3}, 807 (2007). \bibitem{nonlinplasm} G. I. Stegeman, J. I. Burk and D. G. Hall, Appl. Phys. Lett. {\bf 41}, 906 (1982); R. T. Deck and D. Sarid, J. Opt. Soc. Am. {\bf 72}, 1613 (1982); J. C. Quail, J. G. Rako, H. J. Simon and R. T. Deck, Phys. Rev. Lett. {\bf 50}, 1987 (1983); I. I. Smolyaninov, A. V. Zayats, A. Gungor and C. C. Davis, Phys. Rev. Lett. {\bf 88}, 187402 (2002); M. D. Lukin and A. Imamoglu, Nature {\bf 413}, 273 (2001). \bibitem{TSPP} M. S. Tame, C. Lee, J. Lee, D. Ballester, M. Paternostro, A. V. Zayats and M. S. Kim, Phys. Rev. Lett. {\bf 101}, 190504 (2008). \bibitem{Econ} E. N. Economou, Phys. Rev. {\bf 182}, 539 (1969). \bibitem{ClassLRSPP} A. Otto, Z. Phys. {\bf 219}, 227 (1969). \bibitem{cat} Letter from Einstein to Schr\"odinger of 22 December 1950, in {\sl Briefe zur Wellenmechanik}, edited by K. Przibram (Springer, Vienna, 1963), p. 36; E.
Schr\"odinger, Naturwissenschaften {\bf 23}, 807 (1935); B. Yurke and D. Stoler, Phys. Rev. Lett. {\bf 57}, 13 (1986). \bibitem{Ander} A. Huck, S. Smolka, P. Lodahl, A. S. S\o rensen, A. Boltasseva, J. Janousek and U. L. Andersen, arXiv:0901.3969 (2009). \bibitem{OttoKret} A. Otto, Z. Phys. {\bf 216}, 398 (1968); E. Kretschmann and H. Raether, Z. Naturforsch {\bf 23a}, 2135 (1968); E. Kretschmann, Z. Phys. {\bf 241}, 313 (1971). \bibitem{LRSPPHV} P. Berini, Phys. Rev. B {\bf 61}, 10484 (2000); P. Berini, {\it ibid.} {\bf 63}, 125417 (2001). \bibitem{Grating} Y. Teng and E. A. Stern, Phys. Rev. Lett. {\bf 19}, 511 (1967). \bibitem{EF} H. P. Hsu, A. F. Milton and W. K. Burns, Appl. Phys. Lett. {\bf 33}, 603 (1978); G. I. Stegeman, R. F. Wallis and A. A. Maradudin, Opt. Lett. {\bf 8}, 386 (1983). \bibitem{ER} J. M. Elson and R. H. Ritchie,\,Phys.\,Rev.\,B\,{\bf 4},\,4129\,(1971); J. Nkoma, R. Loudon and D. R. Tilley, J. Phys. C: Solid State Phys. {\bf 7}, 3547 (1974); M. S. Toma$\check{\rm s}$ and M. $\check{\rm S}$unji$\acute{\rm c}$, Phys. Rev. B {\bf 12} 5363 (1975); Y. O. Nakamura, Prog. Theor. Phys. {\bf 70}, 908 (1983). \bibitem{Blow} K. J. Blow, R. Loudon, S. Phoenix and T. J. Shepherd, Phys. Rev. A {\bf 42}, 4102 (1990). \bibitem{linok} F. Brown, R. E. Parks and A. M. Sleeper, Phys. Rev. Lett. {\bf 14} 1029 (1965); H. J. Simon, D. E. Mitchell and J. G. Watson, Phys. Rev. Lett. {\bf 33}, 1531 (1974); C. K. Chen, A. R. B. de Castro and Y. R. Shen, Phys. Rev. Lett. {\bf 46}, 145 (1981); T. Y. F. Tsang, Opt. Lett. {\bf 21}, 245 (1996). \bibitem{JohnChrist} H. Ehrenreich and H. R. Philipp, Phys. Rev. {\bf 128}, 1622 (1962); P. B. Johnson and R. W. Christy, Phys. Rev. B {\bf 6}, 4370 (1972). \bibitem{salehleon} R. A. Campos, B. E. A. Saleh and M. C. Teich, Phys. Rev. A {\bf 40}, 1371 (1989); U. Leonhardt, Rep. Prog. Phys. {\bf 66}, 1207 (2003). \bibitem{Loudon} R. Loudon, {\sl The Quantum Theory of Light}, 3$^{\rm rd}$ Ed., Oxford University Press, Oxford (2000). \bibitem{Raether} H. Raether, {\sl Surface Plasmons}, Springer-Verlag, Berlin (1986). \bibitem{CavesCrouch} C. M. Caves and D. D. Crouch, J. Opt. Soc. Am. B {\bf 4}, 1535 (1987). \bibitem{LoudonDamp} J. Jeffers, N. Imoto and R. Loudon, Phys. Rev. A {\bf 47}, 3346 (1993). \bibitem{Senitzky} I. R. Senitzky, Phys. Rev. {\bf 119}, 670 (1960). \bibitem{LoudonLoss} H. P. Yuen and J. H. Shapiro, IEEE Trans. Inf. Theor. {\bf 26}, 78 (1980). \bibitem{HBT} R.\,Hanbury-Brown\,and\,R.\,Q.\,Twiss,\,Nature\,{\bf 177},\,27\,(1956). \bibitem{pdet} H. Ditlbacher, F. R. Aussenegg, J. R. Krenn, B. Lamprecht, G. Jakopic and G. Leising, App. Phys. Lett. {\bf 89}, 161101 (2006). \bibitem{Rempe} A. Kuhn, M. Hennrich and G. Rempe, Phys. Rev. Lett. {\bf 89}, 067901 (2002); M. Hennrich, T. Legero, A. Kuhn and G. Rempe, New J. Phys. {\bf 6}, 86 (2004). \bibitem{Sun} M. $\check{\rm S}$unji$\acute{\rm c}$ and A. A. Lucas, Phys. Rev. B {\bf 3}, 719 (1971). \bibitem{Pitarke} J. M. Pitarke, V. M. Silkin, E. V. Chulkov and P. M. Echenique, Rep. Prog. Phys. {\bf 70}, 1 (2007). \bibitem{Divincenzo} D. P. DiVincenzo, Fortschr. Phys. {\bf 48}, 771 (2000). \bibitem{BVE} S. Braunstein and P. van Loock, Rev. Mod. Phys. {\bf 77}, 513 (2005). \bibitem{von} J. von Neumann, {\sl Mathematical Foundations of Quantum Mechanics}, Princeton University Press, Princeton (2000). \bibitem{vonlin} H. Moya-Cessa, Physics Reports {\bf 432}, 1 (2006). \bibitem{BVK} V. Bu$\check{\rm z}$ek, A. Vidiella-Barranco and P. L. Knight, Phys. Rev. A {\bf 45}, 6570 (1992). 
\bibitem{Jeong2} H.\,Jeong, J. Lee and H. Nha, J. Opt. Soc. Am. B {\bf 25}, 1025 (2008). \bibitem{Tomas} M. S. Toma$\check{\rm s}$, M. $\check{\rm S}$unji$\acute{\rm c}$, and Z. Lenac, Fizika {\bf 14}, 77 (1982); Z. Lenac and M. S. Toma$\check{\rm s}$, J. Phys. C: Solid State Phys. {\bf 16}, 4273 (1983). \end{thebibliography} \end{document}
\begin{document} \title{Waveguide quantum electrodynamics in squeezed vacuum} \author{Jieyu You, Zeyang Liao\footnote{[email protected]}, Sheng-Wen Li, and M. Suhail Zubairy\footnote{[email protected]}} \affiliation{Institute for Quantum Science and Engineering (IQSE) and Department of Physics and Astronomy, Texas A$\&$M University, College Station, TX 77843-4242, USA } \begin{abstract} We study the dynamics of a general multi-emitter system coupled to the squeezed vacuum reservoir and derive a master equation for this system based on the Weisskopf-Wigner approximation. In this theory, we include the effect of the positions of the squeezing sources, which is usually neglected in previous studies. We apply this theory to a quasi-one-dimensional waveguide case where the squeezing in one dimension is experimentally achievable. We show that while the dipole-dipole interaction induced by the ordinary vacuum depends on the emitter separation, the two-photon process due to the squeezed vacuum depends on the positions of the emitters with respect to the squeezing sources. The dephasing rate, decay rate and the resonance fluorescence of the waveguide-QED system in the squeezed vacuum are controllable by changing the positions of the emitters. Furthermore, we demonstrate that the stationary maximally entangled NOON state for identical emitters can be reached from an arbitrary initial state when the center-of-mass position of the emitters satisfies a certain condition. \end{abstract} \pacs{42.30.-d, 42.50.Hz, 42.62.Fi}\maketitle \section{INTRODUCTION} Due to the well-known Purcell effect \cite{Purcell1946}, the spontaneous decay rate of an emitter can be modified by engineering the electromagnetic bath environment with which the emitters interact. One example of bath engineering is the squeezed vacuum. Although the squeezed vacuum does not change the density of the electromagnetic modes, it can still modify the decay rate of the emitter \cite{Gardiner1986, Collett1984, Zubairy1988}. A single emitter interacting with the squeezed vacuum has been widely studied \cite{Gardiner1987, Palma2, Palma1989}. However, there are only a few publications dealing with multiple emitters interacting with the squeezed vacuum. Among these works, most consider the case where the emitters are separated by much less than an optical wavelength, which is the well-known Dicke model \cite{Agarwal1990}. It is shown that in a broadband squeezed vacuum the emitter system evolves into a state whose properties are similar to those of the squeezed vacuum. Only a few papers study the case where the separation between the emitters becomes important \cite{Ficek1990, Ficek1991, Goldstein1996}. It is found that the dipole-dipole interaction induced by the ordinary vacuum depends on the relative emitter separation, while the interaction induced by the squeezed vacuum depends on the center-of-mass coordinate of the emitters. Since it depends on the position of the center of mass, the choice of the coordinate system is no longer arbitrary. However, it has not been clearly explained in this literature how to choose the coordinate system. Actually, the dependence on the absolute position comes from the fact that the squeezed vacuum is not the true vacuum but is generated by a coherent light source. The phase of a coherent source is important for the dynamics of the emitter system \cite{Das2008}, yet it is seldom considered in the previous literature. It is usually assumed that this phase can be absorbed into the phase of the correlation function.
However, the phase in the correlation function is usually treated as a constant, while it can be a function of position. In addition, the previous calculations mainly consider broadband squeezing in all directions of the 3-dimensional (3D) space, which is difficult to realize experimentally. Recently, photon transport in a one-dimensional (1D) waveguide coupled to quantum emitters (well known as ``waveguide-QED") has attracted much attention due to its possible applications in quantum devices and quantum information \cite{Shen2005a, Shen2005b, Shen2007, Yudson2008, Zheng063816, Zhou2008, Shi205111, Chen2011, Liao2015, Shen2015, Liao2016b, Liao2016, Roy2017}. In these previous studies, the photon modes in the waveguide are usually considered to be ordinary vacuum modes. The case where the waveguide modes are squeezed is seldom studied. In contrast to the 3D case, squeezing in 1D is more experimentally feasible. Suppression of the radiative decay of atomic coherence and of the linewidth of the resonance fluorescence has been experimentally demonstrated in a 1D microwave transmission line coupled to a single artificial atom \cite{Turchette1998, Kocabas2012, Murch2013, Toyli2016}. However, many-body interaction in a 1D waveguide-QED system coupled to a squeezed vacuum has not yet been studied. In this paper we consider the phase of the squeezing source and rederive the master equation for multi-atom dynamics in the squeezed vacuum based on the Weisskopf-Wigner approximation. We show that while the collective dipole-dipole interaction due to the ordinary vacuum depends on the emitter separation, the collective two-photon decay rate due to the squeezed vacuum largely depends on the center-of-mass position of the emitters relative to the squeezing source. We then apply this theory to the 1D waveguide-QED system with a squeezed reservoir. Contrary to the traditional result that the dephasing rate of a single atom in the squeezed vacuum is a constant \cite{Zubairy1988, Scully1997}, our calculation shows that the dephasing rate is actually position-dependent. As dipole-dipole interaction is involved, both the emitter separation and the center-of-mass coordinate can affect the decay rate, the dephasing rate and the emitted resonance fluorescence spectrum. In addition, we also show that stationary quantum entanglement can be prepared in this system by the squeezed reservoir. The stationary maximally entangled NOON state can be approached if the center of mass of the emitters is at a certain position. This paper is organized as follows: In Sec. II, we introduce the Hamiltonian of the system and the modified mode function for the squeezed vacuum. In Sec. III, we derive the master equation for the emitter system in the 3D case based on the Weisskopf-Wigner approximation. In Sec. IV, we consider the squeezing in a 1D waveguide-QED system, where we show how the dephasing rate depends on the positions of the atoms and that a stationary entangled state can be prepared. Then, we analyze the properties of the power spectrum under the effects of the squeezed vacuum and the dipole-dipole interaction. Finally, we summarize our results. \section{Hamiltonian and mode function} We here consider $N_{a}$ identical two-level atoms located at $ \vec{r}_{i}$ ($i=1,\cdots,N_{a}$). Suppose that all the transition dipole moments $ \vec{\mu}_{i} $ have the same amplitude and direction.
The atom-field system is described by the Hamiltonian \begin{equation} \label{eq1} \begin{gathered} H=H_{A}+H_{F}+H_{AF} \end{gathered} \end{equation} where $H_{A}=\sum_{i=1}^{N_a}\hbar\omega_i\left|e_{i}\right\rangle \left\langle e_{i}\right|$ is the atomic Hamiltonian, and $ \left|e_{i}\right\rangle $ is the excited state of the $i$th atom with transition frequency $\omega_i$. Here, for simplicity, we assume that all the atoms have the same transition frequency, i.e., $\omega_{i}\equiv\omega_0$. The Hamiltonian of the EM field is $H_{F}=\sum_{\vec{k}s}\hbar\omega_{\vec{k}s}(\hat{a}_{\vec{k}s}^{\dagger}\hat{a}_{\vec{k}s}+\frac{1}{2})$ where $\hat{a}_{\vec{k}s}$ and $\hat{a}_{\vec{k}s}^{\dagger}$ are the annihilation and creation operators of the field mode with wavevector $ \vec{k}$, polarization $s$, and frequency $\omega_{\vec{k},s}$. The interaction Hamiltonian in the electric-dipole approximation is $H_{AF}=-i\hbar\sum_{\vec{k}s}\sum_{i=1}^{N_a}[\vec{\mu}_{i}\cdot\vec{u}_{\vec{k}s}(\vec{r}_{i})S_{i}^{+}\hat{a}_{\vec{k}s}+\vec{\mu}_{i}^{*}\cdot\vec{u}_{\vec{k}s}(\vec{r}_{i})S_{i}^{-}\hat{a}_{\vec{k}s}-H.c.]$ where $ \vec{\mu}_{i} $ is the electric dipole moment and $ S_{i}^{+} $ and $S_{i}^{-} $ are the raising and lowering operators for the $i$th atom. The mode function of the squeezed vacuum is given by \begin{equation} \label{eq2b} \begin{gathered} \vec{u}_{\vec{k}s}(\vec{r}_{i})=\sqrt{\frac{\omega_{\vec{k}s}}{2\epsilon_{0}\hbar V}}\vec{e}_{ks}e^{i\vec{k}\cdot(\vec{r}_{i}-\vec{o}_{\vec{k}s})} \end{gathered} \end{equation} where $\vec{o}_{\vec{k}s} $ includes the effects of the initial phase and the position of the squeezing source for the mode $\vec{k}s$. Here we need to make two assumptions: first, one specific mode is generated from a single source, i.e., mode $ \vec{k}s$ is only generated from the source located at $\vec{o}_{\vec{k}s}$; second, the phases of all modes can be well defined by $ \vec{k}\cdot(\vec{r}-\vec{o}_{\vec{k}s}) $. In the ordinary vacuum or thermal reservoir, there is no source and we can set $\vec{o}_{\vec{k}s}=0$, so the mode function shown in Eq. \eqref{eq2b} reduces to the usual form \cite{Scully1997}. However, when the reservoir is produced by different sources with non-vanishing correlation functions $\langle\hat{a}_{\vec{k}s}^{\dagger}\hat{a}_{\vec{k'}s'}^{\dagger}\rangle$ and $\langle\hat{a}_{\vec{k}s}\hat{a}_{\vec{k'}s'}\rangle$, for example, the squeezed vacuum reservoir, the spatial distribution of the source is important. Neglecting $\vec{o}_{\vec{k}s}$ in the mode function would lead to an ambiguity in the physics, since the emitters' coordinates would not be well defined \cite{Goldstein1996}. Therefore, the position of the source should be included in the mode function when the squeezed vacuum is considered. One can also add an additional global phase $e^{i\phi}$ to the mode function Eq. \eqref{eq2b}, but for simplicity \cite{Das2008} we can set $\phi=0$. \section{MASTER EQUATION} In this section, we first derive the master equation of a multi-emitter system in a general 3D squeezed vacuum with the Hamiltonian shown in Eq. \eqref{eq1} and mode function shown in Eq. \eqref{eq2b}. The Hamiltonian in the interaction picture without the rotating-wave approximation is given by \begin{equation} \label{eq3} \begin{split} V(t)=&-i\hbar\underset{\vec{k}s}{\sum}\underset{i}{\sum}[\vec{\mu}_{i}\cdot\vec{u}_{\vec{k}s}(\vec{r}_{i})S_{i}^{+}(t)\hat{a}_{\vec{k}s}(t) \\ &+\vec{\mu}_{i}^{*}\cdot\vec{u}_{\vec{k}s}(\vec{r}_{i})S_{i}^{-}(t)\hat{a}_{\vec{k}s}(t)-H.c.]
\end{split} \end{equation} where $ S_{i}^{\pm}(t)=S_{i}^{\pm}e^{\pm i\omega_{0}t} $, $ \hat{a}_{\vec{k}s}(t)=\hat{a}_{\vec{k}s}e^{-i\omega_{\vec{k}s}t} $, and $ \hat{a}_{\vec{k}s}^{\dagger}(t)=\hat{a}_{\vec{k}s}^{\dagger}e^{i\omega_{\vec{k}s}t} $. Different from Ref. \cite{Goldstein1996}, no rotating-wave approximation is made at this stage. The equation of motion for the reduced density matrix of the system is given by \cite{Scully1997} \begin{equation} \label{eq4} \begin{split} \dot{\rho^{S}}=&-\frac{i}{\hbar}Tr_{R}[V(t),\rho^{S}(0)\otimes\rho^{F}(0)] \\ &-\frac{1}{\hbar^{2}}Tr_{R}\intop_{0}^{t}[V(t),[V(t-\tau),\rho^{S}(t-\tau)\otimes\rho^{F}(0)]]d\tau \end{split} \end{equation} where $ \rho^{F} $ is the density matrix for the squeezed vacuum reservoir and is defined by $ \rho^{F}=\underset{\vec{k},s}{\prod}S_{\vec{k},s}\left|0_{\vec{k}_{0}\pm\vec{k}}\right\rangle \left\langle 0_{\vec{k}_{0}\pm\vec{k}}\right|S_{\vec{k},s}^{\dagger}$. The squeezed operator $S_{\vec{k},s}(\zeta)=\exp(\zeta^{*}a_{\vec{k}_{0}+\vec{k}}a_{\vec{k}_{0}-\vec{k}}-\zeta a_{\vec{k}_{0}+\vec{k}}^{\dagger}a_{\vec{k}_{0}-\vec{k}}^{\dagger})$ where $\zeta=re^{i\theta}$ is the squeezing parameter with the degree of squeezing $r$ and the squeezing phase $\theta$. For simplicity, we can also assume that $ck_{0}=\omega_0$, i.e., the center frequency of the squeezing field is equal to the transition frequency of the atom. For a squeezed vacuum reservoir, it can be shown that \cite{Scully1997}: \begin{subequations} \begin{align} \left\langle a_{\vec{k},s}\right\rangle &=\left\langle a_{\vec{k},s}^{\dagger}\right\rangle =0 \label{eq51}\\ \left\langle a_{\vec{k},s}^{\dagger}a_{\vec{k}',s'}\right\rangle &=\sinh^{2}r\delta_{\vec{k}'\vec{k}}\delta_{ss'} \label{eq52}\\ \left\langle a_{\vec{k},s}a_{\vec{k}',s'}^{\dagger}\right\rangle &=\cosh^{2}r\delta_{\vec{k}'\vec{k}}\delta_{ss'} \label{eq53}\\ \left\langle a_{\vec{k},s}^{\dagger}a_{\vec{k}',s'}^{\dagger}\right\rangle &=-e^{-i\theta}\cosh(r)\sinh(r)\delta_{\vec{k}',2\vec{k}_{0}-\vec{k}}\delta_{ss'} \label{eq54}\\ \left\langle a_{\vec{k},s}a_{\vec{k}',s'}\right\rangle &=-e^{i\theta}\cosh(r)\sinh(r)\delta_{\vec{k}',2\vec{k}_{0}-\vec{k}}\delta_{ss'} \label{eq55} \end{align} \end{subequations} For simplicity, we can set the squeezing phase $\theta=0 $. On inserting these correlation functions into Eq. \eqref{eq4}, we can obtain the master equation (see Appendix A for the derivation): \begin{widetext} \begin{equation} \label{eq6} \begin{split} \frac{d\rho^{S}}{dt}=&-i\underset{i\neq j}{\sum}\Lambda_{ij}[S_{i}^{+}S_{j}^{-},\rho^{S}]e^{i(\omega_{i}-\omega_{j})t}-\frac{1}{2}\underset{i,j}{\sum}\gamma{}_{ij}(1+N)(\rho^{S}S_{i}^{+}S_{j}^{-}+S_{i}^{+}S_{j}^{-}\rho^{S}-2S_{j}^{-}\rho^{S}S_{i}^{+})e^{i(\omega_{i}-\omega_{j})t} \\ &-\frac{1}{2}\underset{i,j}{\sum}\gamma{}_{ij}N(\rho^{S}S_{i}^{-}S_{j}^{+}+S_{i}^{-}S_{j}^{+}\rho^{S}-2S_{j}^{+}\rho^{S}S_{i}^{-})e^{-i(\omega_{i}-\omega_{j})t}-\frac{1}{2}\sum_{\alpha=\pm}\underset{i,j}{\sum}\gamma'_{ij}Me^{2\alpha ik_{0z}R}(\rho^{S}S_{i}^{\alpha}S_{j}^{\alpha}+S_{i}^{\alpha}S_{j}^{\alpha}\rho^{S}-2S_{j}^{\alpha}\rho^{S}S_{i}^{\alpha}) \end{split} \end{equation} \end{widetext} where the first three terms are the same as in the thermal reservoir and the last term is the collective decay due to the squeezed vacuum. We have $M=\sinh(r)cosh(r)$ and average photon number $N=\sinh^{2}(r)$. 
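The correlation functions above can be checked directly by constructing the two-mode squeezed vacuum $S_{\vec{k},s}(\zeta)\left|0,0\right\rangle$ in a truncated Fock basis. The following Python sketch (plain \texttt{numpy}/\texttt{scipy}; the cutoff dimension, squeezing degree and phase are illustrative choices rather than parameters used elsewhere in this paper) reproduces $\langle a^{\dagger}a\rangle=\sinh^{2}r$ and $\langle a_{\vec{k}_{0}+\vec{k}}\,a_{\vec{k}_{0}-\vec{k}}\rangle=-e^{i\theta}\sinh r\cosh r$ for a single pair of modes:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Truncated Fock space for one pair of modes (k0+k, k0-k); values are illustrative.
d = 20                       # Fock-space cutoff per mode
r, theta = 0.8, 0.3          # squeezing degree and phase
zeta = r * np.exp(1j * theta)

a = np.diag(np.sqrt(np.arange(1, d)), k=1)   # single-mode annihilation operator
I = np.eye(d)
a_p = np.kron(a, I)                          # mode k0 + k
a_m = np.kron(I, a)                          # mode k0 - k

# Two-mode squeezing operator S = exp(zeta* a_p a_m - zeta a_p^dag a_m^dag)
S = expm(np.conj(zeta) * a_p @ a_m - zeta * a_p.conj().T @ a_m.conj().T)
vac = np.zeros(d * d); vac[0] = 1.0
psi = S @ vac                                # two-mode squeezed vacuum state

def ev(op):
    return psi.conj() @ (op @ psi)

print(ev(a_p.conj().T @ a_p).real, np.sinh(r)**2)                # photon number
print(ev(a_p @ a_m), -np.exp(1j*theta)*np.sinh(r)*np.cosh(r))    # anomalous correlation
\end{verbatim}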
The collective energy shifts $\Lambda_{ij}$ and decay rates $\gamma_{ij}$ due to the ordinary vacuum are given by \cite{Agarwal1974, Ficek2005} \begin{align} \Lambda_{ij}&=\frac{3}{4}\sqrt{\gamma_{i}\gamma_{j}}\{-(1-\cos^{2}\alpha)\frac{\cos(k_{0}r_{ij})}{k_{0}r_{ij}} \nonumber \\ & +(1-3\cos^{2}\alpha)[\frac{\sin(k_{0}r_{ij})}{(k_{0}r_{ij})^{2}}+\frac{\cos(k_{0}r_{ij})}{(k_{0}r_{ij})^{3}}]\}\\ \gamma_{ij}&=\sqrt{\gamma_{i}\gamma_{j}} F(k_{0}r_{ij}) \end{align} where $\gamma=\frac{\omega_{0}^{3}\mu^{2}}{3\pi\epsilon_{0}\hbar c^{3}}$ is the spontaneous decay rate of the atom in ordinary vacuum and $F(x)=\frac{3}{2}\{(1-\cos^{2}\alpha)\frac{\sin x}{x}+(1-3\cos^{2}\alpha)[\frac{\cos x}{x^2}-\frac{\sin x}{x^3}]\}$. Different from the thermal reservoir terms, the squeezed vacuum contributes an additional collective two-photon decay rate of the system, which is given by \begin{equation} \label{ad9} \gamma'_{ij}=\gamma e^{2ik_{0}R}F(k_{0} |\vec{r}_{i}+\vec{r}_{j}|). \end{equation} Thus, the collective decay due to the squeezed vacuum depends on the position of the center of mass of the emitters instead of their separation. One may think this result is identical to the previous work \cite{Ficek1990,Ficek1991} except for the phase $e^{2ik_{0}R}$, but that is not true. No matter how the coordinate system is built, to reach the neat form of Eq.\eqref{ad9}, $\vec{r}_i$ must still be interpreted as the displacement from the center of the squeezing sources to the $i$th atom. When the center of mass is at equal distances from all squeezing sources (i.e., $\vec{r}_i+\vec{r}_j=0$), the decay induced by the squeezing is the strongest due to the perfectly constructive interference of the two-photon excitation from all directions. It decreases as the center of mass deviates from this point, due to destructive interference. The master equation shown in Eq. (6) can be transformed to the Lindblad form \cite{Lindblad1976}, and the density matrix remains positive definite, which is proven in Appendix B. The phase factor $e^{2ik_{0z}R}$ can be effectively regarded as a controllable phase of $M$, which can be incorporated into $\theta$. \section{Waveguide-QED in the squeezed vacuum} In practice, it is very difficult to squeeze all photon modes in the 3D case. Since squeezing in 1D is experimentally achievable \cite{Murch2013, Toyli2016}, in this section we discuss the dynamics of the waveguide-QED system in the squeezed vacuum. Here, we consider a perfect rectangular waveguide with negligible loss out of the waveguide, as shown in Fig.~\ref{1}(a). We assume that the cross section of the waveguide is a square with dimensions $a\times b$. The origin of the coordinate system is chosen to be at the center of the two squeezing sources, with the sources located at $(0,0,\pm R)$. The emitters are located along the longitudinal centerline of the waveguide at $(0,0,r_{i})$ ($i=1,2,\cdots,N_{a}$) with the squeezed vacuum injected from both ends by the parametric process. Compared with the 3D case, the master equation in the 1D case is the same as Eq. \eqref{eq6} except that the values of $\gamma_{ij}, \gamma'_{ij}, \Lambda_{ij}$ are different. \begin{figure} \caption{(a) Schematic setup for waveguide-QED in a 1D squeezed vacuum where the vacuum is squeezed from both directions. (b) The dispersion relations inside the waveguide. Here the atomic transition frequency is $\frac{1.2c\pi} \label{1} \end{figure} Different from the free-space case, the square waveguide can only support certain photon modes.
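Before turning to the specific waveguide mode structure, the free-space coefficients defined above can be tabulated with a few lines of code. The sketch below (two identical emitters with $\gamma_{i}=\gamma_{j}=\gamma$, dipoles perpendicular to $\vec{r}_{ij}$, and illustrative positions, i.e. assumptions made here for illustration only) evaluates $F(x)$, $\gamma_{ij}$, $\Lambda_{ij}$ and the squeezed-vacuum rate $\gamma'_{ij}$ of Eq.~\eqref{ad9}:
\begin{verbatim}
import numpy as np

def F(x, alpha):
    """Angular/radial factor entering gamma_ij and gamma'_ij (free space)."""
    s2 = np.sin(alpha)**2                 # 1 - cos^2(alpha)
    c2 = 1.0 - 3.0*np.cos(alpha)**2
    return 1.5*(s2*np.sin(x)/x + c2*(np.cos(x)/x**2 - np.sin(x)/x**3))

def Lam(x, alpha, gamma=1.0):
    """Collective dipole-dipole energy shift Lambda_ij (units of gamma)."""
    s2 = np.sin(alpha)**2
    c2 = 1.0 - 3.0*np.cos(alpha)**2
    return 0.75*gamma*(-s2*np.cos(x)/x + c2*(np.sin(x)/x**2 + np.cos(x)/x**3))

# Illustrative geometry: dipoles perpendicular to r_ij (alpha = pi/2),
# emitters placed asymmetrically about the centre of the squeezing sources.
gamma, alpha = 1.0, np.pi/2
k0 = 2*np.pi                               # wavelength lambda_0 = 1
r1, r2 = np.array([0, 0, 0.2]), np.array([0, 0, -0.15])
rij, rsum = np.linalg.norm(r1 - r2), np.linalg.norm(r1 + r2)
R = 0.0                                    # sources symmetric about the origin here

gamma_ij  = gamma*F(k0*rij, alpha)                    # ordinary-vacuum collective decay
Lambda_ij = Lam(k0*rij, alpha, gamma)                 # dipole-dipole shift
gammap_ij = gamma*np.exp(2j*k0*R)*F(k0*rsum, alpha)   # squeezed-vacuum two-photon rate
print(gamma_ij, Lambda_ij, gammap_ij)
\end{verbatim}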
The allowed TE and TM modes are shown in Appendix C and their dispersion relations are shown in Fig.~\ref{1}(b). To simplify the problem, we assume that the transition dipole moment of the emitter is along the $y$ direction and the size of the waveguide satisfies $\lambda_{0}/2<a<\lambda_0/\sqrt{2}$, where $\lambda_{0}=2\pi c/\omega_{0}$ with $\omega_{0}$ being the transition frequency of the emitter. In this case, the emitter is mainly coupled to the $TE_{10}$ mode (Fig.~\ref{1}(b)). The density of states of the EM field in the waveguide is $D(\nu)=\frac{L}{\pi c^{2}}\frac{\nu}{\sqrt{(\frac{\nu}{c})^{2}-(\frac{\pi}{a})^{2}}}$. The coupling strength between the emitter and the $TE_{10}$ mode is therefore given by $g\equiv \vec{\mu}\cdot\vec{E}/\hbar=\mu\sqrt{\nu/\epsilon_{0}LS\hbar}$ \cite{Kim2013}. The single-emitter decay rate due to the waveguide modes is \begin{equation} \label{eq7} \begin{split} \gamma_{1d}=2\pi\underset{\nu}{\sum}|g(\nu)|^{2}\delta(\omega_{0}-\nu)=\frac{2\mu^{2}\omega_{0}^{2}}{\hbar\epsilon_{0}Sc^{2}k_{0z}}\equiv \eta \gamma_0, \end{split} \end{equation} where $\eta=3\lambda_{0}\lambda_{0z}/(2\pi a^2)$ is the enhancement factor, $\lambda_{0z}=2\pi/k_{0z}$ is the effective longitudinal wavelength and $\gamma_0$ is the spontaneous decay rate in the free space. Around the cutoff frequency, we have $k_{0z}\rightarrow 0$ and therefore $\eta\rightarrow \infty$, i.e., the spontaneous decay rate can be greatly enhanced. The master equation in the 1D waveguide is also given by Eq. \eqref{eq6}, but the coefficients are replaced by (see Appendix C for detailed calculations): \begin{equation} \label{eq8} \begin{split} & \gamma_{ij}=\gamma_{1d}\cos(k_{0z}r_{ij}) \\ & \Lambda_{ij}=\frac{\gamma_{1d}}{2}\sin(k_{0z}r_{ij})\\ & \gamma'_{ij}=\gamma_{1d}\cos[k_{0z}(r_{i}+r_{j})] \end{split} \end{equation} where $k_{0z}=\sqrt{(\frac{\omega_{0}}{c})^{2}-(\frac{\pi}{a})^{2}}$ is the wave vector along the waveguide direction and $r_{ij}=|r_i-r_j|$ is the separation between the two emitters. It is worth noting that Eq. \eqref{eq6} is valid not only for the rectangular waveguide, but also for an arbitrary type of waveguide with an arbitrary atomic transition frequency. The only difference for different types of waveguide and different transition frequencies is the value of $\gamma_{1d}$ in Eq.~\eqref{eq7}. Similar to the 3D case, the two-photon decay rate induced by the squeezed vacuum depends on the center of mass of the emitters. This can be explained by the interference shown in Fig.~\ref{1}(b). The emitters can absorb two photons from the squeezing sources either from the left or the right. These two processes can interfere with each other and we have $\gamma_{ij}^{'}\propto S_{L}^{1}S_{L}^{2}+S_{R}^{1}S_{R}^{2}=2e^{2ik_{0z}R}\cos[k_{0z}(r_{i}+r_{j})]$, which is a periodic function with period $\lambda_{0z}$. Thus, when the center of mass happens to be at the antinodes (nodes) of the standing wave, the two-photon decay rate is maximized (minimized). \subsection{One Emitter} \begin{figure*} \caption{(a) The dephasing dynamics of a single emitter in the squeezed vacuum. The black and red solid curves are the results of $\sigma_{x} \label{2} \end{figure*} Our theory can be used to calculate the dynamics of an arbitrary number of emitters. Let us first consider the one-emitter case. We still assume that the emitter is located at $(0,0,\delta)$, with the transition dipole moment along the $y$-axis.
By eliminating the terms with $i\neq j$, the master equation shown in Eq.\eqref{eq6} is reduced to the single-atom case which is given by \begin{equation} \label{eq9} \begin{split} \frac{d\rho^{S}}{dt}&=\sinh(r)\cosh(r)\gamma'(e^{2ik_{0z}R}S^{+}\rho^{S}S^{+}+H.c.)\\ &-\frac{1}{2}\gamma\cosh^{2}(r)(\rho^{S}S^{+}S^{-}+S^{+}S^{-}\rho^{S}-2S^{-}\rho^{S}S^{+})\\ &-\frac{1}{2}\gamma \sinh^{2}(r)(\rho^{S}S^{-}S^{+}+S^{-}S^{+}\rho^{S}-2S^{+}\rho^{S}S^{-})\\ \end{split} \end{equation} with $\gamma=\gamma_{1d}$ and $\gamma'=\gamma_{1d}\cos(2k_{0}\delta)$. It is worth noting that the squeezing terms like $S^{+}\rho^{S}S^{+}$ and $S^{-}\rho^{S}S^{-}$ in Eq.~\eqref{eq9} only affect the non-diagonal terms but not the diagonal terms. Thus, for single emitter, the squeezing can only modify the dephasing rate rather than the population decay rate. We also notice that the dephasing rate due to the squeezed vacuum is dependent on the emitter position because the interference between the two squeezing sources generates a standing wave. The dynamical equations for the expectation value of $\sigma_{+}$ and $\sigma_{-}$ are given by \begin{equation} \begin{split} \frac{d}{dt}\left(\begin{array}{c} \left\langle \sigma_{+}\right\rangle \\ \left\langle \sigma_{-}\right\rangle \end{array}\right)=U\left(\begin{array}{c} \left\langle \sigma_{+}\right\rangle \\ \left\langle \sigma_{-}\right\rangle \end{array}\right) \end{split} \end{equation} where \begin{equation} \begin{split} U=\left(\begin{array}{cc} -(N+\frac{1}{2}) & Me^{-2ik_{0z}R}\cos(2k_{0z}\delta)\\ Me^{2ik_{0z}R}\cos(2k_{0z}\delta) & -(N+\frac{1}{2}) \end{array}\right). \end{split} \end{equation} The eigenvalues of $U$ are $\gamma_{dp,\pm}=N+\frac{1}{2}\pm M\cos(2k_{0z}\delta)$ which are the dephasing rate. In fact, such a position-dependent property of the dephasing rate can be associated with the variance in the quadrature phases of the squeezed field at the site of the atom. Considering the operator $X(\delta,\alpha,\beta)=\frac{1}{2\sqrt{2}}(e^{i(k_{0z}+k_{z})\delta}a_{k_{0z}+k_{z}}e^{i\alpha}+e^{i(k_{0z}-k_{z})\delta}a_{k_{0z}-k_{z}}e^{i\beta}+H.c.)$ which describes the entangled modes of the two-mode squeezing, we can find its variance $\Delta X(\delta,\alpha,\beta)=\frac{1}{2}[N+\frac{1}{2}-M\cos(2k_{0z}\delta+\alpha+\beta)]$. Therefore, we have the relation that $\gamma_{dp,+}=2\Delta X(\delta,\alpha+\beta=0)$ and $\gamma_{dp,-}=2\Delta X(\delta,\alpha+\beta=\pi)$. We can see that when there is no squeezing, i.e., $M=0$, both $\sigma_{x}$ and $\sigma_{y}$ have the same dephasing rate $\cosh^2(r)\gamma_{1d}/2$ (blue dotted line in Fig.~\ref{2}(a)). However, if there is squeezing, i.e., $M\neq 0$, $\sigma_{x}$ and $\sigma_{y}$ have different dephasing rates with one being enhanced and the other one being suppressed (solid lines in Fig.~\ref{2}(a)). The dephasing rate can be tuned by changing the position of the emitter. In Fig.~\ref{2}(b), it is shown that the dephasing rates of $\sigma_{x}$ and $\sigma_{y}$ vary periodically as the emitter position changes. At some regions, $\sigma_{x}$ decays faster than $\sigma_{y}$, while at other regions, $\sigma_{x}$ decays slower than $\sigma_{y}$. This result challenges the traditional conclusion where dephasing rate is a position-independent constant\cite{Zubairy1988, Scully1997}. The power spectrum of the resonance fluorescence can also be calculated and the result is similar to Ref. \cite{Carmichael1987} with the simple replacements of $M$ by $M\gamma'$ and the phase of $M$ by $e^{2ik_{0z}R}$. 
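The position dependence of the dephasing rates described above is compact enough to scan directly. The following sketch (rates quoted in units of $\gamma_{1d}$, as done implicitly in the matrix $U$ above; the waveguide width and squeezing degree are illustrative values) evaluates $\gamma_{dp,\pm}=N+\frac{1}{2}\pm M\cos(2k_{0z}\delta)$ as the emitter position $\delta$ is moved over one effective wavelength:
\begin{verbatim}
import numpy as np

# Single-emitter dephasing rates, in units of gamma_1d; parameters are illustrative.
c = 1.0
lam0 = 1.0                                    # transition wavelength lambda_0
omega0 = 2*np.pi*c/lam0
a = 0.6*lam0                                  # lambda_0/2 < a < lambda_0/sqrt(2)
k0z = np.sqrt((omega0/c)**2 - (np.pi/a)**2)   # TE10 longitudinal wavevector
lam0z = 2*np.pi/k0z

r = 1.0                                       # squeezing degree
N, M = np.sinh(r)**2, np.sinh(r)*np.cosh(r)

delta = np.linspace(0.0, lam0z, 200)          # emitter position along the axis
g_plus  = N + 0.5 + M*np.cos(2*k0z*delta)     # dephasing rate of one quadrature
g_minus = N + 0.5 - M*np.cos(2*k0z*delta)     # dephasing rate of the orthogonal one

# Both rates stay non-negative: N + 1/2 - M = exp(-2r)/2 > 0.
print(g_plus.min(), g_plus.max(), g_minus.min(), g_minus.max())
\end{verbatim}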
\subsection{Two Emitters} \begin{figure*} \caption{Two-emitter case: Transverse polarization decay of the first emitter as a function of time. (a) $r_{12} \label{3} \end{figure*} Next, we consider the two-emitter case where dipole-dipole interaction can occur and two-photon process is allowed. In Fig.~\ref{3}(a), we show the dynamics of the transverse polarization $\sigma_{x}$ and $\sigma_{y}$. Here, we compare two different emitter separations $r_{12}=0.5\lambda_{0z}$ and $r_{12}=1.0\lambda_{0z}$. In both cases, the $x$ and $y$ polarizations have the same decay dynamics in the thermal reservoir. However, in the squeezed vacuum, the two orthogonal polarizations have different decay rates with one being enhanced and the other being suppressed. When $r_{12}=0.5\lambda_{0z}$, $\sigma_{x}$ decays faster than that in the thermal reservoir, but $\sigma_{y}$ decays much slower than that in the thermal reservoir. While opposite result occurs when $r_{12}=1.0\lambda_{0z}$. This is similar to the one-emitter case. Different from the one-emitter case, as is shown in Fig.~\ref{3}(b), the squeezed vacuum can affect the population decay of the two-emitter system. This is because two-photon process is allowed in the two-emitter system. Without the squeezed vacuum, the system is finally in the thermal equilibrium state (dotted lines). However, the squeezed vacuum can deplete the populations on $|++\rangle$ and $|--\rangle$ with $|\pm\rangle=\frac{1}{\sqrt{2}}(|e_{1}\rangle|g_{2}\rangle \pm |g_{1}\rangle|e_{2}\rangle)$. In fact, the atomic pair evolves into an entanglement state in this case and we will discuss it later. We also study the dephasing rate as a function of emitter separation and position of the center of mass which are shown in Fig.~\ref{3}(c) and (d) respectively. Here the dephasing rate is defined to be the inverse of time for $\sigma_x(\sigma_y)$ to damp to $1/e$ of its initial value. Similar to the one-emitter case, the dephasing rate is a periodic function of both $r_{12}$ and $r_{c}$. However, due to the dipole-dipole interaction, the dephasing rate is no longer a constant even in the thermal reservoir (dotted line in Fig.~\ref{3}(c)) so that the value ranges of $\sigma_x$ and $\sigma_y$ are no longer the same in the squeezed vacuum(solid lines in Fig.~\ref{3}(c)). It is noted that when $r_{12}=0.5n\lambda_{0z}$ ($n$ is any integer) $\sigma_{y}$ does not decay to $1/e$ of its initial value due to the subradiance effect. When we fix the atom separation and change the center of mass(Fig.~\ref{3}(d)), the dephasing rate changes periodically and harmonically like one-emitter case. Therefore, the dephasing rate is tunable by changing the atom separation or position of center of mass. Usually, the positions of the atoms are not easy to be tuned. However, we can easily tune the position of the squeezing sources to effectively change the center of mass of the atoms. Figure~\ref{3}(d) also shows the result when there are five emitters (dashed lines). The dephasing rate is significantly increased when $N_a$ increases due to the collective effect. \subsection{Quantum Entanglement} Quantum entanglement is an important resource of the quantum information and quantum metrology \cite{Horodecki2009, Giovannetti2011}. Preparation of the maximum entangled state is still a central topic of interest. It has been shown that stationary quantum entanglement can be dissipatively prepared by engineering the bath enviroment \cite{Kraus2008, Diehl2008, Lin2013, Ma2015}. 
By squeezing the environment, quantum entanglement between emitters can also be created \cite{Kraus2004, Tanas2004, Li2006}. However, it is shown in Ref. \cite{Tanas2004} that stationary maximum entanglement cannot be reached by the squeezed vacuum for identical emitters. Here, we show that identical emitters coupled to the 1D waveguide can also be driven to a stationary maximally entangled NOON state by the squeezed vacuum as long as the center of mass is placed at the proper position. The quantum entanglement can be measured by the concurrence which is defined as \cite{Hill1997}: $\mathscr{C}\equiv \max\{0,\lambda_{1}-\lambda_{2}-\lambda_{3}-\lambda_{4}\}$ in which $\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}$ are eigenvalues, in decreasing order, of the Hermitian matrix $R=\sqrt{\sqrt{\rho}\widetilde{\rho}\sqrt{\rho}}$ with $\widetilde{\rho}=(\sigma_{y}\bigotimes\sigma_{y})\rho^{\ast}(\sigma_{y}\bigotimes\sigma_{y})$. For a pure two-qubit state $|\Psi\rangle=\alpha |ee\rangle +\beta |eg\rangle+\gamma |ge\rangle +\delta|gg\rangle$ with $|\alpha|^{2}+|\beta|^{2}+|\gamma|^{2}+|\delta|^{2}=1$, the concurrence is given by $\mathscr{C}=\max\{0,2|\alpha\delta-\beta\gamma|\}$. The concurrence as a function of time for different initial states is shown in Fig.~\ref{4}(a) where $r=1, r_c=0,$ and $r_{12}=0.25\lambda_{0z}$. Different curves correspond to different initial states. We can see that no matter what the initial state is, the two-emitter state will be driven to a highly entangled state. To see what the stationary state is, we also show the fidelity of the emitter state with respect to the maximally entangled state $\frac{1}{\sqrt{2}}(|gg\rangle -|ee\rangle)$, which is shown in Fig.~\ref{4}(b). We can see that the stationary state is very close to it. Therefore, under these parameters the two emitters can be driven to a maximally entangled state, which may find important applications in quantum information and quantum computation. \begin{figure*} \caption{(a) Concurrence evolution of different initial states in squeezed vacuum, where $r=1$, $r_c=0$, and $r_{12} \label{4} \end{figure*} To find the stationary state analytically, we rewrite the master equation in Eq.~\eqref{eq6} as \begin{eqnarray} \dot{\rho}_{gg}&=&-2N\gamma\rho_{gg}+(N+1)\gamma_{+}\rho_{++}+(N+1)\gamma_{-}\rho_{--}\nonumber \\ & & +M\gamma^{'}_{12}\rho_{u}, \\ \dot{\rho}_{ee}&=&-2(N+1)\gamma\rho_{ee}+N\gamma_{+}\rho_{++}+N\gamma_{-}\rho_{--}\nonumber \\ & &+M\gamma^{'}_{12}\rho_{u}, \\ \dot{\rho}_{++}&=&-(2N+1)\gamma_{+}\rho_{++}+(N+1)\gamma_{+}\rho_{ee}+N\gamma_{+}\rho_{gg}\nonumber \\ & &-M\gamma^{'}_{+}\rho_{u}, \\ \dot{\rho}_{--}&=&-(2N+1)\gamma_{-}\rho_{--}+(N+1)\gamma_{-}\rho_{ee}+N\gamma_{-}\rho_{gg}\nonumber \\ & &-M\gamma^{'}_{-}\rho_{u}, \\ \dot{\rho}_{u}&=&-(2N+1)\gamma_{11}\rho_{u}-2M\gamma^{'}_{+}\rho_{++}-2M\gamma^{'}_{-}\rho_{--} \nonumber \\ & & +2M\gamma^{'}_{12}(\rho_{ee}+\rho_{gg}). \end{eqnarray} where $\rho_{ee}=\langle ee|\rho |ee\rangle$, $\rho_{gg}=\langle gg|\rho |gg\rangle$, $\rho_{\pm\pm}=\langle \pm|\rho |\pm\rangle$ with $|\pm\rangle=\frac{1}{\sqrt{2}}(|e_{1}\rangle|g_{2}\rangle \pm |g_{1}\rangle|e_{2}\rangle)$, $\rho_{u}=e^{-2ik_{0z}R}\langle ee|\rho |gg\rangle+e^{2ik_{0z}R}\langle gg|\rho |ee\rangle$, and $\gamma=\gamma_{1d}$, $\gamma_{\pm}=\gamma_{1d}(1\pm\cos(k_{0z}r_{12}))$, $\gamma_{12}^{'}=\gamma_{1d}\cos(2k_{0z}r_{c})$, $\gamma^{'}_{\pm}=\gamma_{1d}\{\cos[2k_{0z}r_{c}]\pm\frac{1}{2}[\cos(2k_{0z}r_{1})+\cos(2k_{0z}r_{2})]\}$ with $r_{c}=\frac{(r_{1}+r_{2})}{2}$.
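Since the rate equations above form a closed linear system, their relaxation toward the stationary state can be integrated directly. The sketch below (a minimal \texttt{scipy} integration in units of $\gamma_{1d}$; the squeezing degree, emitter positions and initial state are illustrative choices meant to mimic the parameters of Fig.~\ref{4}) can be compared with the analytic steady state derived next:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# State vector x = (rho_gg, rho_ee, rho_++, rho_--, rho_u); rates in units of gamma_1d.
r_sq = 1.0
N, M = np.sinh(r_sq)**2, np.sinh(r_sq)*np.cosh(r_sq)
lam0z = 1.0
k0z = 2*np.pi/lam0z
r1, r2 = 0.125*lam0z, -0.125*lam0z          # r_c = 0 and r_12 = 0.25*lam0z
r12, rc = abs(r1 - r2), 0.5*(r1 + r2)

g   = 1.0                                    # gamma = gamma_11 = gamma_1d
gp, gm = g*(1 + np.cos(k0z*r12)), g*(1 - np.cos(k0z*r12))               # gamma_+/-
g12 = g*np.cos(2*k0z*rc)                                                 # gamma'_12
gpp = g*(np.cos(2*k0z*rc) + 0.5*(np.cos(2*k0z*r1) + np.cos(2*k0z*r2)))   # gamma'_+
gpm = g*(np.cos(2*k0z*rc) - 0.5*(np.cos(2*k0z*r1) + np.cos(2*k0z*r2)))   # gamma'_-

A = np.array([
    [-2*N*g,    0,           (N+1)*gp,     (N+1)*gm,     M*g12     ],
    [ 0,       -2*(N+1)*g,    N*gp,         N*gm,         M*g12     ],
    [ N*gp,     (N+1)*gp,   -(2*N+1)*gp,    0,           -M*gpp     ],
    [ N*gm,     (N+1)*gm,     0,          -(2*N+1)*gm,   -M*gpm     ],
    [ 2*M*g12,  2*M*g12,    -2*M*gpp,     -2*M*gpm,     -(2*N+1)*g  ],
])

x0 = np.array([0.0, 1.0, 0.0, 0.0, 0.0])     # start in |ee>
sol = solve_ivp(lambda t, x: A @ x, (0.0, 40.0), x0)
print(sol.y[:, -1])                          # compare with the analytic steady state
\end{verbatim}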
Then the steady state solutions are given by \begin{equation} \label{eq11} \begin{split} &\rho_{ee}=\frac{N[-1-N-2N^{2}+(-1+N+2N^{2})\cos(4k_{0z}r_{c})]}{2(1+2N)[-1-2N-2N^{2}+2N(1+N)\cos(4k_{0z}r_{c})]}\\ &\rho_{++}=-\frac{N(1+N)sin^{2}(2k_{0z}r_{c})}{-1-2N-2N^{2}+2N(1+N)\cos(4k_{0z}r_{c})}\\ &\rho_{--}=-\frac{N(1+N)sin^{2}(2k_{0z}r_{c})}{-1-2N-2N^{2}+2N(1+N)\cos(4k_{0z}r_{c})}\\ &\rho_{u}=\frac{-2\sqrt{N(1+N)}\cos(2k_{0z}r_{c})}{(1+2N)[-1-2N-2N^{2}+2N(1+N)\cos(4k_{0z}r_{c})]}\\ \end{split} \end{equation} where we have used the relation $M^2=N(N+1)$. Obviously, the population given by Eq.~\eqref{eq11} differs from that given by thermal reservoir: $\rho_{ee(gg)}=\rho^{th}_{ee(gg)}+\Delta\rho, \rho_{++(--)}=\rho^{th}_{++(--)}-\Delta\rho$ with $\Delta\rho=\frac{N(N+1)\cos^2(2k_{0z}r_c)}{(1+2N)^2(1+2N+2N^2-2N(1+N)\cos(4k_{0z}r_c))}$ and $\rho^{th}_{ee}=\frac{N^2}{(1+2N)^2}$, $\rho^{th}_{++}=\rho^{th}_{--}=\frac{N(N+1)}{(1+2N)^2}$, $\rho^{th}_{gg}=\frac{(1+N)^2}{(1+2N)^2}$ which obey the Boltzmann distribution. It is interesting that the steady state depends only on the center of mass but not on the separation between the two emitters. Meanwhile, it is worth noting that the dark state cannot always be reached since the ergodicity cannot be guaranteed under every condition. For example, when $\cos(k_{0z}r_{12})= 1$, $|+\rangle$ becomes a dark state, while it is $|-\rangle$ when $\cos(k_{0z}r_{12})=-1$. Eq.~\eqref{eq11} shows that as $r_c$ gets closer to $\frac{n}{4}\lambda_{0z}$, the magnitude of $\gamma'_{\pm}$ gets closer to $\pm1$ which leads to smaller population on $|+\rangle$ and $|-\rangle$ as well as bigger concurrence. When the position of the center mass $r_{c}=\frac{n}{4}\lambda_{0z}$, the steady states are given by \begin{equation} \label{eq12} \begin{split} &\rho_{gg}=\frac{N+1}{(1+2N)},\\ &\rho_{ee}=\frac{N}{(1+2N)},\\ &\rho_{++}=\rho_{--}=0,\\ &\rho_{u}=(-1)^{n+1}\frac{2\sqrt{N(1+N)}}{(1+2N)}.\\ \end{split} \end{equation} which corresponds to the state $|\Psi_{s}\rangle=\frac{1}{\sqrt{2N+1}}(\sqrt{N+1}|gg\rangle +(-1)^{n+1} \sqrt{N}|ee\rangle)$. The concurrence of this state is given by $\mathscr{C}=|\rho_u|-(\rho_{++}+\rho_{--})=\frac{2\sqrt{N(N+1)}}{(2N+1)}$, which monotonically increases with the average photon number $N$. When $N\rightarrow\infty$, $\mathscr{C}\rightarrow 1$ which is a maximum-entangled state $\frac{1}{\sqrt{2}}(|gg\rangle -|ee\rangle)$ ($\frac{1}{\sqrt{2}}(|gg\rangle +|ee\rangle)$) with even(odd) $n$. \begin{figure} \caption{(a) Concurrence of the steady state as a function of average photon number $N=\sinh(r)^2$ and the position of the center mass $r_c=\frac{r_1+r_2} \label{5} \end{figure} Fig.~\ref{5}(a) shows the dependence of the stationary quantum entanglement on the photon number and the center-of-mass position. It is clearly seen that when $r_{c}$ is close to $\frac{n}{4}\lambda_{0z}$ the system can be prepared in a high entangled state, while the entanglement can never be formed when $r_c=\frac{2n+1}{8}\lambda_{0z}$ because the dipole-dipole interaction $\gamma'_{12}$ vanishes. In experiments, the center of mass position of emitters may be hard to control, but it can be effectively controllable by setting the positions squeezing sources. Thus, as long as the pump beam in SPDC is strong enough to guarantee the average photon number of the squeezed vacuum, the emitters can definitely evolve into a NOON state. 
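The trend of Fig.~\ref{5}(a) follows directly from Eq.~\eqref{eq11}. Assuming, as in the steady state above, that $\rho_{++}=\rho_{--}$ and that the only surviving coherence is the one contained in $\rho_{u}$, the steady-state concurrence can be taken as $\mathscr{C}=\max\{0,|\rho_{u}|-\rho_{++}-\rho_{--}\}$; a short scan over the center-of-mass position and the photon number (illustrative grids, plain \texttt{numpy}) then reproduces the maxima near $r_{c}=n\lambda_{0z}/4$:
\begin{verbatim}
import numpy as np

def steady_state(N, x):
    """Steady-state populations/coherence of Eq. (11); x = 2*k0z*r_c."""
    den = -1 - 2*N - 2*N**2 + 2*N*(1 + N)*np.cos(2*x)
    rho_ee = N*(-1 - N - 2*N**2 + (-1 + N + 2*N**2)*np.cos(2*x)) / (2*(1 + 2*N)*den)
    rho_pp = -N*(1 + N)*np.sin(x)**2 / den
    rho_u  = -2*np.sqrt(N*(1 + N))*np.cos(x) / ((1 + 2*N)*den)
    return rho_ee, rho_pp, rho_pp, rho_u

def concurrence(N, x):
    _, rho_pp, rho_mm, rho_u = steady_state(N, x)
    return np.maximum(0.0, np.abs(rho_u) - rho_pp - rho_mm)

# Illustrative scan: the concurrence peaks at r_c = n*lam0z/4 and vanishes at (2n+1)*lam0z/8.
lam0z = 1.0
k0z = 2*np.pi/lam0z
rc = np.linspace(0.0, 0.5*lam0z, 201)
for N in (0.5, 1.0, 2.0):
    C = concurrence(N, 2*k0z*rc)
    print(N, C.max(), 2*np.sqrt(N*(N + 1))/(2*N + 1))   # max matches the analytic bound
\end{verbatim}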
While the dephasing rate is not very sensitive to the fluctuations of the emitter positions, the stationary quantum entanglement significantly depends on their center of mass. Only when the center of mass position is around $n\lambda/4$, the quantum entanglement is nonzero. In Fig.~\ref{5}(b), we show half the range of center of mass where the quantum entanglement is non-zero. The larger the squeezing is, the more sensitive the quantum entanglement is to the fluctuation of center-of-mass. For example, when $N=1$, a deviation of about $0.04\lambda$ from $n\lambda/4$ will make the entanglement vanish. \subsection{Resonance Fluorescence} \begin{figure*} \caption{Resonance fluorescence spectrum of the two-emitter system inside a 1D waveguide. For better comparison, the spectra are normalized to the intensity at $\omega=\omega_0$ with the coherent elastic scattering singularity removed. Coherent driving Rabi frequency is $\Omega_R=4\gamma$. In (a) and (b), the solid curves are the spectra for the coupled emitters, while the dashed curves are the spectra without emitter-emitter coupling. Parameters: (a) $r_1=0, r_2=0.01\lambda_{0z} \label{6} \end{figure*} In this subsection, we study how the squeezing can affect the resonance fluorescence of the waveguide-QED system. In the following we study how the collective interaction, squeezing phase, squeezing degree, emitter separation, and the center of mass affect the resonance fluorescence of this system. The power spectrum of the resonance fluorescence is given by \cite{Scully1997, Ficek1990a, Liao2012} \begin{equation} S(\omega)\propto Re\int_{0}^{\infty}d\tau Tr[\sigma^{-}(\tau)\sigma^{+}(0)]e^{i\omega\tau}. \end{equation} where we assume that the detector is perpendicular to the waveguide and $\sigma^{\pm}=\sigma^{\pm}_{1}+\sigma^{\pm}_{2}$ for the two-emitter example. The two-time correlation function in the integration can be calculated by the quantum regression theorem. Usually, the analytical result of Eq. (22) is difficult to get. However, we can resort to the numerical method to calculate the resonance fluorescence \cite{Molmer1993}. To observe the resonance fluorescence, we need to apply an external coherent driving field. The master equation is given by \begin{equation} \label{eq13} \begin{split} \frac{d\rho}{dt}=-i[V,\rho]+\mathcal{L\rho} \end{split} \end{equation} where $\mathcal{L\rho}$ is the right hand side of Eq.\eqref{eq6} and $V=\frac{\Omega_{R}}{2}e^{-i\alpha}(e^{-ik_{0z}r_{1}}\sigma_{1}^{-}+e^{-ik_{0z}r_{2}}\sigma_{2}^{-})+H.c.$ is the interaction between the driving field and the emitters with Rabi frequency $\Omega_{R}=\frac{\vec{d}\cdot\vec{E}}{\hbar}$. From Eq.~\eqref{eq13} we can evolve and obtain the steady state of the system $\rho_{ss}$. Next we use $(\sigma_{1}^{-}+\sigma_{2}^{-})\rho_{ss}$ as the initial condition to solve a density matrix $c(t)$ which obeys the same equation of motion as $\rho$ in Eq.~\eqref{eq13}. The resonance fluorescence spectrum is then given by \cite{Molmer1993} \begin{equation} S(\omega)\propto Re\int_{0}^{\infty}d\tau Tr[c(\tau)(\sigma_{1}^{+}+\sigma_{2}^{+})]e^{i\omega\tau}. \end{equation} In Fig. 6(a) and 6(b) we compare the resonance fluorescence spectrum with and without the dipole-dipole interaction for different squeezing phases and emitter separations. When $r_{12}=0.01\lambda_{0z}$ and $\phi=0$, we can see that the spectrum is very different with and without dipole-dipole interaction. 
Without dipole-dipole interaction, the spectrum is very similar to the typical Mollow triplet (red dashed line). However, with dipole-dipole interaction, there is a very narrow peak around the center frequency (red solid line). This is due to the subradiant state induced by the dipole-dipole interaction. On the contrary, when $\phi=\pi/2$ the spectrum with and without the dipole-dipole interaction is very similar (black solid and dashed lines). From Fig. 6(b) we see that with dipole-dipole interaction, the spectrum can be asymmetric, i.e., the positive and negative sidebands are different. In Fig. 6(c) we compare the spectrum with different squeezing degrees. We can see that greater squeezing parameter leads to the power spectrum in weak-driving-field limit(sidebands disappear). FIG. 6(d) shows that different emitter separation has different spectrum. This is not only due to atomic interaction which is described by $\gamma_{12},\gamma'_{12},\Lambda_{12}$, but also due to their positions which determine the values of $\gamma'_{ii}$, i.e., the effective phase and magnitude of $M$. Comparing the red solid curve in Fig. 6(b) and the red dashed curve in Fig. 6(d) we can see that different center-of-mass position can also have different resonance fluorescence. \section{Summary} We modify the usual squeezed vacuum mode function to include the position information of the squeezing source and derive a master equation of the atom dynamics based on the Weisskopf-Wigner approximation. In our formalism, the density matrix is positive-definite. We then apply this theory to the 1D waveguide-QED system where the squeezing in one direction is experimentally achievable. We show that the enhancement and suppression of the dephasing rate caused by the squeezed vacuum is actually position dependent. In single-atom case, the squeezing does not affect its population dynamics. However, in multi-atom case, the squeezing can strongly affect the population dynamics of the system because two-photon absorption and emission are allowed in multi-atom system. We also show that dipole-dipole interaction influences dephasing rate and we can tune the position of the squeezing source to tune the dephasing rate of the system. Moreover, we show that stationary entangled state can be achieved in this system independent of the initial state and the emitter separation. Particularly, when the center of mass is close to $n\lambda_{0z}/4$ and the squeezing is large, the system can be prepared in GHZ state. Moreover, we study the power spectrum of the resonance fluorescence. It is demonstrated that the phase of the squeezed vacuum, emitter separation, and the center-of-mass position can affect the bandwidth and the intensity of the sidebands. \section{Acknowledgment} This work is supported by a grant from the Qatar National Research Fund (QNRF) under NPRP project 8-352-1-074. \begin{widetext} \appendix \section{DERIVATION OF EQ.(6)} Here we show how to derive the master equation Eq.\eqref{eq6}. We start from a more general case where atoms are not identical but $\omega_{i}\approx \omega_{j}$, and we make the squeezing center frequency $\omega_{0}=\underset{i}{\sum}\omega_{i}/l$. 
Then we can rewrite the interaction Hamiltonian in Eq.\eqref{eq3} as \begin{equation} \label{eqa0}\tag{A1} V(t)=-i\hbar \sum_{\vec{k}s}[D(t)a_{\vec{k}s}(t)-D^{+}(t)a^{\dagger}_{\vec{k}s}(t)], \end{equation} where \begin{equation} \label{eqa1}\tag{A2} \begin{gathered} D(t)=\underset{i}{\sum}[\vec{\mu}_{i}\cdot\vec{u}_{\vec{k},s}(r_{i})S_{i}^{\dagger}(t)+\vec{\mu}^{*}_{i}\cdot\vec{u}_{\vec{k},s}(r_{i})S_{i}^{-}(t)]. \end{gathered} \end{equation} Since $\left\langle a_{\vec{k},s}\right\rangle =\left\langle a_{\vec{k},s}^{\dagger}\right\rangle =0$, the first term in Eq.\eqref{eq4} vanishes. Therefore, we have \begin{equation} \label{eqa2}\tag{A3} \begin{split} \frac{d\rho^{S}}{dt}=&-\frac{1}{\hbar^{2}}\int_{0}^{t}d\tau Tr_{F}\{[V(t),[V(t-\tau),\rho^{S}(t-\tau)\rho^{F}\}\\ =&-\frac{1}{\hbar^{2}}\int_{0}^{t}d\tau Tr_{F}\{V(t)V(t-\tau)\rho^{S}(t-\tau)\rho^{F}+\rho^{S}(t-\tau)\rho^{F}V(t-\tau)V(t)\\ &-V(t)\rho^{S}(t-\tau)\rho^{F}V(t-\tau)-V(t-\tau)\rho^{S}(t-\tau)\rho^{F}V(t)\}. \end{split} \end{equation} Here we just show how to deal with the first term in Eq.\eqref{eqa2}, the remaining terms can be calculated in the same way. For the first term, we have \begin{equation} \label{eqa3}\tag{A4} \begin{split} &-\frac{1}{\hbar^{2}}\int_{0}^{t}d\tau Tr_{F}\{V(t)V(t-\tau)\rho^{S}(t-\tau)\rho^{F}\}\\ =&\int_{0}^{t}d\tau\underset{\vec{k}s,\vec{k}'s'}{\sum}\{D(t)D(t-\tau)Tr_{F}[\rho^{F}a_{ks}(t)a_{k's'}(t-\tau)]-D(t)D^{+}(t-\tau)Tr_{F}[\rho^{F}a_{ks}(t)a^{\dagger}_{k's'}(t-\tau)]\\ &-D^{+}(t)D(t-\tau)Tr_{F}[\rho^{F}a^{\dagger}_{ks}(t)a_{k's'}(t-\tau)]+D^{+}(t)D^{+}(t-\tau)Tr_{F}[\rho^{F}a^{\dagger}_{ks}(t)a^{\dagger}_{k's'}(t-\tau)]\}\rho^{S}(t-\tau)\}. \end{split} \end{equation} Using Eq.\eqref{eqa1} and the correlation function Eq.\eqref{eq51}$\sim$\eqref{eq55}, under the rotating wave approximation(RWA), we have \begin{equation} \label{eqa4}\tag{A5} \begin{split} &-\frac{1}{\hbar^{2}}\int_{0}^{t}d\tau Tr_{F}\{V(t)V(t-\tau)\rho^{S}(t-\tau)\rho^{F}\}\\ =& \sum_{ij}\underset{\vec{k}s,\vec{k'}s'}{\sum}\int_{0}^{t}d\tau\{\vec{\mu}{}_{i}\cdot\vec{u}_{\vec{k}s}(r_{i})S_{i}^{+}e^{i\omega_{i}t}\vec{\mu}_{j}\cdot\vec{u}_{\vec{k}'s'}(r_{j})S_{j}^{+}e^{i\omega_{j}(t-\tau)}e^{-i(\omega_{\vec{k}s}+\omega_{\vec{k}'s'})t+i\omega_{\vec{k}'s'}\tau}[-\sinh(r)\cosh(r)\delta_{\vec{k}',2\vec{k}_{0}-\vec{k}}\delta_{ss'}]\\ &-\vec{\mu}_{i}\cdot\vec{u}_{\vec{k}s}(r_{i})S_{i}^{+}e^{i\omega_{i}t}\vec{\mu}^{*}_{j}\cdot\vec{u}_{\vec{k}'s'}^{*}(r_{j})S_{j}^{-}e^{-i\omega_{j}(t-\tau)}e^{-i\omega_{\vec{k}'s'}\tau}\cosh^{2}r\delta_{\vec{k}\vec{k}'}\delta_{ss'}\\ &-\vec{\mu}^{*}_{i}\cdot\vec{u}_{\vec{k}s}(r_{i})S_{i}^{-}e^{-i\omega_{i}t}\vec{\mu}_{j}\cdot\vec{u}^{*}_{\vec{k}'s'}(r_{j})S_{j}^{+}e^{i\omega_{j}(t-\tau)}e^{-i\omega_{\vec{k}'s'}\tau}\cosh^{2}r\delta_{\vec{k}\vec{k}'}\delta_{ss'}\\ &-\vec{\mu}^{*}_{i}\cdot\vec{u}_{\vec{k}s}^{*}(r_{i})S_{i}^{-}e^{-i\omega_{i}t}\vec{\mu}_{j}\cdot\vec{u}_{\vec{k}'s'}(r_{j})S_{j}^{+}e^{i\omega_{j}(t-\tau)}e^{i\omega_{\vec{k}'s'}\tau}\sinh^{2}r\delta_{\vec{k}\vec{k}'}\delta_{ss'}\\ &-\vec{\mu}_{i}\cdot\vec{u}^{*}_{\vec{k}s}(r_{i})S_{i}^{+}e^{i\omega_{i}t}\vec{\mu}^{*}_{j}\cdot\vec{u}_{\vec{k}'s'}(r_{j})S_{j}^{-}e^{-i\omega_{j}(t-\tau)}e^{i\omega_{\vec{k}'s'}\tau}\sinh^{2}r\delta_{\vec{k}\vec{k}'}\delta_{ss'}\\ 
&+\vec{\mu}^{*}_{i}\cdot\vec{u}_{\vec{k}s}^{*}(r_{i})S_{i}^{-}e^{-i\omega_{i}t}\vec{\mu}^{*}_{j}\cdot\vec{u}^{*}_{\vec{k}'s'}(r_{j})S_{j}^{-}e^{-i\omega_{j}(t-\tau)}e^{i(\omega_{\vec{k}s}+\omega_{\vec{k}'s'})t-i\omega_{\vec{k}'s'}\tau}[-\sinh(r)\cosh(r)\delta_{\vec{k}',2\vec{k}_{0}-\vec{k}}\delta_{ss'}]\}\rho^{S}(t-\tau) \end{split} \end{equation} where we have the relationship $\underset{\vec{k}}{\sum}\rightarrow\frac{L^3}{(2\pi)^3}\int k^{2}dk\int_{\Omega_{k}}$. In Ref. \cite{Scully1997}, it has been shown that \begin{equation} \label{eqa6}\tag{A6} \begin{split} \frac{L^{3}}{(2\pi)^{3}}\int k^{2}dk\int_{\Omega_{k}}\underset{s}{\sum}\vec{\mu}{}_{i}\cdot\vec{u}_{\vec{k}s}(r_{i})\vec{\mu}_{j}^{*}\cdot\vec{u}_{\vec{k}s}^{*}(r_{j})\approx\frac{\sqrt{\gamma_{i}\gamma_{j}}}{2\pi\omega_{0}^{3}}\int_{0}^{\infty}d\omega\omega^{3}F(kr_{ij}) \end{split} \end{equation} with \begin{equation} \label{eqa7}\tag{A7} \begin{split} &F(kr_{ij})=\frac{3}{2}\{[1-\cos^{2}\alpha]\frac{sin(kr_{ij})}{kr_{ij}}+[1-3\cos^{2}\alpha][\frac{\cos(kr_{ij})}{(kr_{ij})^{2}}-\frac{sin(kr_{ij})}{(kr_{ij})^{3}}]\}\\ &\gamma_{i}=\frac{\omega_{i}^{3}\mu_{i}^{2}}{3\pi\epsilon_{0}\hbar c^{3}} \end{split} \end{equation} where $\vec{r}_{ij}=\vec{r}_i-\vec{r}_j$, $r_{ij}=|\vec{r}_{ij}|$, $\alpha$ is the angle between $\vec{r}_{ij}$ and $\vec{\mu}_{i}$, and the approximation in Eq.\eqref{eqa6} becomes equality when $\omega_{1}=\omega_{2}$. We can also show that \begin{equation} \label{eqa8}\tag{A8} \begin{gathered} \frac{L^{3}}{(2\pi)^{3}}\int k^{2}dk\int_{\Omega_{k}}\underset{s}{\sum}\vec{\mu}{}_{i}\cdot\vec{u}_{\vec{k}s}(r_{i})\vec{\mu}_{j}\cdot\vec{u}_{\vec{2k_{0}}-\vec{k},s}(r_{j})\approx\frac{\sqrt{\gamma_{i}\gamma_{j}}}{2\pi\omega_{0}^{3}}\int_{0}^{\infty}d\omega\omega^{2}\sqrt{\omega(2\omega_{0}-\omega)}F(k_{0}|\frac{k}{k_{0}}\vec{r}_{ij}+2\vec{r}_{j}|)e^{2ik_{0}R} \end{gathered} \end{equation} where R is the distance from the sources to the center mass of two atoms, and the approximation becomes equality when $\omega_{1}=\omega_{2}$. Next, we will show how to calculate the first and the second terms in Eq.\eqref{eqa4}, and the remaining terms can be approached in the same way. Using Eq.\eqref{eqa6}, the second term in Eq.\eqref{eqa4} can be simplified as \begin{equation} \label{eqb1}\tag{A10} \begin{split} &\underset{\vec{k}s}{\sum}\int_{0}^{t}d\tau\vec{\mu}_{i}\cdot\vec{u}_{\vec{k}s}(r_{i})S_{i}^{+}e^{i\omega_{i}t}\vec{\mu}_{j}^{*}\cdot\vec{u}_{\vec{k}s}^{*}(r_{j})S_{j}^{-}e^{-i\omega_{j}(t-\tau)}e^{-i\omega_{\vec{k}s}\tau}\cosh^{2}r\rho^{S}(t-\tau)\\ &=\cosh^{2}r\frac{\sqrt{\gamma_{i}\gamma_{j}}}{2\pi\omega_{0}^{3}}\int_{0}^{t}d\tau\int_{0}^{\infty}d\omega\omega^{3}F(kr_{ij})e^{i(\omega_{i}-\omega_{j})t}e^{i(\omega_{j}-\omega_{k})\tau}S_{i}^{+}S_{j}^{-}\rho^{S}(t-\tau) \end{split} \end{equation} with $F(kr_{ij})$ given in Eq.\eqref{eqa7}. We here calculate the integral of the first term in $F(kr_{ij})$ ($i\ne j$) and the other terms can be calculated similarly. 
\begin{equation} \label{eqb2}\tag{A11} \begin{split} &\\ &\cosh^{2}r\frac{\sqrt{\gamma_{i}\gamma_{j}} c^{4}}{2\pi\omega_{0}^{3}}\frac{3}{2}\int_{0}^{t}d\tau\int_{0}^{\infty}dkk^{3}\frac{sinkr_{ij}}{kr_{ij}}e^{i(\omega_{j}-\omega_{k})\tau}S_{i}^{+}S_{j}^{-}\rho^{S}(t-\tau)e^{i(\omega_{i}-\omega_{j})t}\\ &=\cosh^{2}r\frac{\sqrt{\gamma_{i}\gamma_{j}}c^{4}}{2\pi\omega_{0}^{3}}\frac{3}{2}\int_{0}^{t}d\tau\int_{-\infty}^{\infty}dkk^{2}\frac{1}{2ir_{ij}}(e^{i(k-k_{j})r_{ij}+ik_{j}r_{ij}}-e^{-i(k-k_{j})r_{ij}-ik_{j}r_{ij}})e^{-i(k-k_{j})c\tau}S_{i}^{+}S_{j}^{-}\rho^{S}(t-\tau)e^{i(\omega_{i}-\omega_{j})t}\\ &\approx \cosh^{2}r\frac{\sqrt{\gamma_{i}\gamma_{j}} c^{4}}{2\pi\omega_{0}^{3}}\frac{3}{2}\int_{0}^{t}d\tau k_{j}^{2}\frac{1}{ir_{ij}}[\delta(r_{ij}-c\tau)e^{ik_{j}r_{ij}}-\delta(r_{ij}+c\tau)e^{-ik_{j}r_{ij}}]S_{i}^{+}S_{j}^{-}\rho^{S}(t-\tau)e^{i(\omega_{i}-\omega_{j})t}\\ &\approx \cosh^{2}r\frac{\sqrt{\gamma_{i}\gamma_{j}} c^{4}}{2\pi\omega_{0}^{3}}\frac{3}{2}k_{j}^{2}\frac{\pi}{icr_{ij}}e^{ik_{j}r_{ij}}S_{i}^{+}S_{j}^{-}\rho^{S}(t)e^{i(\omega_{i}-\omega_{j})t}\\ &\approx \frac{3}{4}\sqrt{\gamma_{i}\gamma_{j}}\cosh^{2}r\frac{e^{ik_{0}r_{ij}}}{ik_{0}r_{ij}}S_{i}^{+}S_{j}^{-}\rho^{S}(t)e^{i(\omega_{i}-\omega_{j})t}\\ \end{split} \end{equation} In the second line of the equations, we replace $\int_{0}^{\infty}dk$ by $\int_{-\infty}^{\infty}dk$ since the main contribution comes from the frequency around $\omega_{0}$ and the negative frequency part leads to fast-oscillating term such that its integration $\int_{0}^{t}d\tau$ vanishes. From the second line to the third line, the Weisskopf-Wigner approximation\cite{Scully1997} is applied and $k$ is replaced by $k_j$ because the contribution comes mainly from the resonant frequency. From the third line to the fourth line, we assume that the two atoms are very close that the time-retarded effect can be neglected. In the last line, we use the fact that $\omega_i\approx \omega_0$ The other terms in Eq.\eqref{eqb1} can be calculated in a similar way, and the result is given by \begin{equation} \label{eqb6}\tag{A12} \begin{split} \frac{\sqrt{\gamma_{i}\gamma_{j}}}{2\pi\omega_{0}^{3}}\int_{0}^{t}d\tau\int_{0}^{\infty}dkk^{3}F(kr_{ij})e^{i(\omega_{i}-\omega_{j})t}e^{i(\omega_{j}-\omega_{k})\tau}S_{i}^{+}S_{j}^{-}\rho^{S}(t-\tau)=(\frac{1}{2}\gamma_{ij}+i\Lambda_{ij})S_{i}^{+}S_{j}^{-}\rho^{S}(t)e^{i(\omega_{i}-\omega_{j})t} \end{split} \end{equation} where \begin{equation} \label{eqa14}\tag{A13} \begin{split} &\Lambda_{ij}=\frac{3}{4}\sqrt{\gamma_{i}\gamma_{j}}\{-(1-\cos^{2}\alpha)\frac{\cos(k_{0}r_{ij})}{k_{0}r_{ij}}+(1-3\cos^{2}\alpha)[\frac{sin(k_{0}r_{ij})}{(k_{0}r_{ij})^{2}}+\frac{\cos(k_{0}r_{ij})}{(k_{0}r_{ij})^{3}}]\}\\ &\gamma_{ij}=\sqrt{\gamma_{i}\gamma_{j}} F(k_{0}r_{ij})\\ \end{split} \end{equation} All the other terms with the combination of $S_{i}^{+}$ and $S_{i}^{-}$ can also be calculated in the same way. Thus, all the thermal terms and oscillation terms in Eq.\eqref{eq6} can be given. Next we need to calculate the squeezed vacuum terms including $S_{i}^{+}S_{j}^{+}$ or $S_{i}^{-}S_{j}^{-}$. Here we show the calculation of the first term in Eq.\eqref{eqa4} as an example. 
By inserting Eq.\eqref{eqa8}, the first term of Eq.\eqref{eqa4} yields \begin{equation} \label{eqb3}\tag{A14} \begin{split} &\underset{\vec{k}s,\vec{k'}s'}{\sum}\int_{0}^{t}d\tau\int d^{3}k\{\vec{\mu}{}_{i}\cdot\vec{u}_{2\vec{k}_{0}-\vec{k},s}(r_{i})\vec{\mu}_{j}\cdot\vec{u}_{\vec{k}s}(r_{j})e^{i(\omega_{\vec{k}s}-\omega_{j})\tau}S_{i}^{+}S_{j}^{+}\rho^{S}(t-\tau)\\ &=\frac{\sqrt{\gamma_{i}\gamma_{j}} c^{4}}{2\pi\omega_{0}^{3}}\int_{0}^{t}d\tau\int_{0}^{2k_{0}}dkk^{2}\sqrt{k(2k_{0}-k)}F(k_{0}|\frac{k}{k_{0}}\vec{r}_{ij}+2\vec{r}_{j}|)e^{i(\omega_{k}-\omega_{j})\tau}S_{i}^{+}S_{j}^{+}\rho^{S}(t-\tau)e^{2ik_{0}R}\\ &\approx \frac{\sqrt{\gamma_{i}\gamma_{j}} c}{2\pi}\int_{0}^{t}d\tau\int_{-\infty}^{\infty}dkF(k_{0}|\frac{k}{k_{0}}\vec{r}_{ij}+2\vec{r}_{j}|)e^{i(\omega_{k}-\omega_{0})\tau}S_{i}^{+}S_{j}^{+}\rho^{S}(t-\tau)e^{2ik_{0}R} \end{split} \end{equation} From the second line to the third line, the integral limit is extended to $\pm \infty $ and $k^{2}\sqrt{k(2k_{0}-k)}$ is pulled out as $k_{0}^{3}$ according to the Weisskopf-Wigner approximation. To calculate one term with fixed $i,j$, we need to rebuild the coordinate system where $\vec{r}_{i}+\vec{r}_j=0$ for $i\neq j$(We need to build different coordinate systems for different pairs of $i,j$). For example, we here consider the first two atoms, $i,j=1,2$. When $i=j$, this term directly gives $\frac{1}{2}\gamma \cosh^{2}rF(2k_{0}|\vec{r}_{j}|)S_{i}^{+}S_{i}^{+}\rho^{S}(t)$. When $i\neq j$, since there is a singular point at $k=k_{0}$, the calculation is a little bit more complicated but can still be calculated. We have the following integrals: \begin{equation} \label{eqb4}\tag{A15} \begin{split} &\int_{-\infty}^{\infty}dk\frac{\sin{kr_{ij}}}{kr_{ij}}e^{-ikc\tau}=\frac{\pi}{r_{ij}}\theta_{1}(r_{ij}-c\tau),\\ &\int_{-\infty}^{\infty}dk \Big [\frac{\cos{kr_{ij}}}{(kr_{ij})^{2}}-\frac{\sin{kr_{ij}}}{(kr_{ij})^{3}}\Big ]e^{-ikc\tau}=\frac{\pi(c\tau-r_{ij})(c\tau+r_{ij})}{2r^{3}_{ij}}\theta_{2}(r_{ij}-c\tau),\\ \end{split} \end{equation} where $\theta_{1,2}(x)$ are step functions: $\theta_{1,2}(x)=0$ when $x<0$, $\theta_{1,2}(x)=1$ when $x>0$, and $\theta_{1}(0)=1/2$ and $\theta_{2}(0)=0$. Since $F(k_{0}|\frac{k}{k_{0}}\vec{r}_{ij}+2\vec{r}_{j}|)=F((k-k_{0})r_{12})$, we have \begin{equation} \label{eqb5}\tag{A16} \begin{split} &\int_{0}^{t}d\tau\int_{-\infty}^{\infty}dkF([(k-k_0)r_{12}]e^{i(\omega_{0}-\omega_{k})\tau}\rho^{S}(t-\tau) \\ &=\int_{0}^{\frac{r_{ij}}{c}}d\tau\frac{3}{2}[(1-\cos^{2}\alpha)\frac{\pi}{r_{ij}}+(1-3\cos^{2}\alpha)\frac{\pi(c\tau-r_{ij})(c\tau+r_{ij})}{2r_{ij}^{3}}]\rho^{S}(t-\tau) \\ & \approx \frac{\pi}{c}\rho^{S}(t). \end{split} \end{equation} In Eq.\eqref{eqb5}, the emitter separation is assumed to be small and the Markovian approximation is applied such that $\rho^{S}(t-\tau)\approx \rho^{S}(t)$. Hence, Eq.\eqref{eqb3} gives $\sinh{r} \cosh{r}\frac{\gamma'_{ij}}{2} S_{i}^{+}S_{j}^{+}\rho^{S}(t)$ with $\gamma'_{ij}=e^{2ik_{0}R}\gamma F(k_{0}|\vec{r}_{i}+\vec{r}_{j}|)$ after transforming the above results to the original coordinate system(Although replacing $k$ by $k_0$ in Eq.\eqref{eqb3}'s last line yields the same result, it is not always safe to do so since $F(x)$ is an oscillating function). 
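The last step of Eq.~\eqref{eqb5} can be verified numerically: integrating the bracket over $0\le\tau\le r_{ij}/c$ indeed gives $\pi/c$ for any dipole orientation $\alpha$ and separation. A quick quadrature check (with illustrative values of $r_{ij}$ and $\alpha$, in units with $c=1$):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Check the tau-integral in Eq. (A16): it should equal pi/c for any alpha and r_ij.
c = 1.0

def bracket(tau, rij, alpha):
    s2 = 1.0 - np.cos(alpha)**2
    c2 = 1.0 - 3.0*np.cos(alpha)**2
    return 1.5*(s2*np.pi/rij + c2*np.pi*(c*tau - rij)*(c*tau + rij)/(2.0*rij**3))

for rij in (0.1, 0.5, 2.0):
    for alpha in (0.0, np.pi/4, np.pi/2):
        val, _ = quad(bracket, 0.0, rij/c, args=(rij, alpha))
        print(rij, alpha, val, np.pi/c)   # val ~ pi/c in every case
\end{verbatim}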
Having Dealt with all the squeezed vacuum terms, we can get \begin{equation} \label{eqb17}\tag{A17} \begin{split} \frac{d\rho^{S}}{dt}=&-\frac{1}{2}\sum_{\alpha=\pm}\underset{i,j}{\sum}\gamma'_{ij}M(\rho^{S}S_{i}^{\alpha}S_{j}^{\alpha}+S_{i}^{\alpha}S_{j}^{\alpha}\rho^{S}-2S_{j}^{\alpha}\rho^{S}S_{i}^{\alpha})\\ &-\frac{1}{2}\underset{i,j}{\sum}\gamma{}_{ij}(1+N)(\rho^{S}S_{i}^{+}S_{j}^{-}+S_{i}^{+}S_{j}^{-}\rho^{S}-2S_{j}^{-}\rho^{S}S_{i}^{+})e^{i(\omega_i-\omega_j)t}\\ &-\frac{1}{2}\underset{i,j}{\sum}\gamma{}_{ij}N(\rho^{S}S_{i}^{-}S_{j}^{+}+S_{i}^{-}S_{j}^{+}\rho^{S}-2S_{j}^{+}\rho^{S}S_{i}^{-})e^{-i(\omega_i-\omega_j)t}\\&-i\underset{i\neq j}{\sum}\Lambda_{ij}[S_{i}^{+}S_{j}^{-},\rho^{S}]e^{i(\omega_i-\omega_j)t} \end{split} \end{equation} and Eq.\eqref{eq6} is the special case when $\omega_i=\omega_0$. \section{POSITIVE DEFINITENESS OF DENSITY MATRIX} In the following we will show that Eq.\eqref{eq6} can be written in the Lindblad equation and it is positive definite: \begin{equation} \label{eqa15}\tag{B1} \begin{gathered} \frac{d\rho^{S}}{dt}=-i\underset{i}{\sum}[H,\rho^{S}]+\underset{m,n}{\sum}h_{nm}(L_{n}\rho L_{m}^{\dagger}-\frac{1}{2}(\rho L_{m}^{\dagger}L_{n}+L_{m}^{\dagger}L_{n}\rho)) \end{gathered} \end{equation} where \begin{equation} \label{eqa16}\tag{B2} \begin{split} &H=\underset{i\neq j}{\sum}\Lambda_{ij}S_{i}^{+}S_{j}^{-}\\ &L_{1}=S_{1}^{+},L_{2}=S_{2}^{+},L_{3}=S_{3}^{-},L_{4}=S_{4}^{-}\\ &h=\begin{bmatrix} \gamma_{11}\sinh^{2}{r} &\gamma_{12}\sinh^{2}{r}&\gamma'_{11}\sinh{r}\cosh{r}&\gamma'_{12}\sinh{r}\cosh{r}\\ \gamma_{12}\sinh^{2}{r}&\gamma_{11}\sinh^{2}{r}&\gamma'_{12}\sinh{r}\cosh{r}&\gamma'_{11}\sinh{r}\cosh{r}\\ \gamma'_{11}\sinh{r}\cosh{r}&\gamma'_{12}\sinh{r}\cosh{r}&\gamma_{11}\cosh^{2}{r}&\gamma_{12}\cosh^{2}{r}\\ \gamma'_{12}\sinh{r}\cosh{r}&\gamma'_{11}\sinh{r}\cosh{r}&\gamma_{12}\cosh^{2}{r}&\gamma_{11}\cosh^{2}{r}\\ \end{bmatrix} \end{split} \end{equation} here for simplicity, we have already used the relations: $ \gamma'_{12}=\gamma'_{21}$, $\gamma_{12}=\gamma_{21}$, $\gamma_{11}=\gamma_{22}$, $\gamma'_{11}=\gamma'_{22}$. The last relation $\gamma'_{11}=\gamma'_{22}$ is not always satisfied, but without it we cannot diagonalize matrix $h$ analytically. Hence we set $r_i+r_j=0$. Now matrix $h$ can be diagonalized: \begin{equation} \label{eqa17}\tag{B3} \begin{split} &h=u^{\dagger}\begin{bmatrix} \zeta_{1}&&&\\ &\zeta_{2}&&\\ &&\zeta_{3}&\\ &&&\zeta_{4}\\ \end{bmatrix}u \end{split} \end{equation} where $u$ is a unitary matrix, and \begin{equation} \label{eqa18}\tag{B4} \begin{split} &\zeta_{1}=\frac{1}{2}[(\gamma_{11}-\gamma_{12})(1+2\sinh^{2}r)-\sqrt{(\gamma_{11}-\gamma_{12})^{2}+4\sinh^{2}r\cosh^{2}r(\gamma'_{11}-\gamma'_{12})^{2}}]\\ &\zeta_{2}=\frac{1}{2}[(\gamma_{11}-\gamma_{12})(1+2\sinh^{2}r)+\sqrt{(\gamma_{11}-\gamma_{12})^{2}+4\sinh^{2}r\cosh^{2}r(\gamma'_{11}-\gamma'_{12})^{2}}]\\ &\zeta_{3}=\frac{1}{2}[(\gamma_{11}+\gamma_{12})(1+2\sinh^{2}r)-\sqrt{(\gamma_{11}+\gamma_{12})^{2}+4\sinh^{2}r\cosh^{2}r(\gamma'_{11}+\gamma'_{12})^{2}}]\\ &\zeta_{4}=\frac{1}{2}[(\gamma_{11}+\gamma_{12})(1+2\sinh^{2}r)+\sqrt{(\gamma_{11}+\gamma_{12})^{2}+4\sinh^{2}r\cosh^{2}r(\gamma'_{11}+\gamma'_{12})^{2}}]\\ \end{split} \end{equation} We noticed that since $|\gamma_{11}-\gamma_{12}|=|\gamma'_{11}-\gamma'_{12}|$ for $r_i+r_j=0$, none of the eigenvalues is negative, so the density matrix is completely positive for any initial condition. For arbitrary $r_i,r_j$, we can only get the positive eigenvalues numerically. \section{DERIVATION OF EQ. 
(11)} Now let's consider the perfect rectangular waveguide with cross section $a\times b$. The rectangular waveguide can support both TE and TM electric field modes and they are given as follows(To get a neat expression of field equation, we set the origin of our coordinate system at the corner of the waveguide): \begin{equation} \label{eqac1}\tag{C1} \begin{split} &E_{z}^{TM}=E_{0}sin\frac{m\pi x}{a}sin\frac{n\pi y}{b}e^{ik_{z}z},\, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, H_{z}^{TE}=H_{0}\cos\frac{m\pi x}{a}\cos\frac{n\pi y}{b}e^{ik_{z}z} \\ &E_{x}^{TM}=E_{0}\frac{ik_{z}}{h_{mn}^{2}}\frac{m\pi}{a}\cos\frac{m\pi x}{a}sin\frac{n\pi y}{b}e^{ik_{z}z}, \, \, \, \, \, \,\, \, \, \, \, \,\, \, E_{x}^{TE}=H_{0}\frac{i\omega_{k}\mu}{h_{mn}^{2}}\frac{n\pi}{a}\cos\frac{m\pi x}{a}sin\frac{n\pi y}{b}e^{ik_{z}z}\\ &E_{y}^{TM}=E_{0}\frac{ik_{z}}{h_{mn}^{2}}\frac{n\pi}{a}sin\frac{m\pi x}{a}\cos\frac{n\pi y}{b}e^{ik_{z}z},\, \, \,\, \, \,\, \, \, \, \, \,\, \, \, \, E_{y}^{TE}=-H_{0}\frac{i\omega_{k}\mu}{h_{mn}^{2}}\frac{m\pi}{a}sin\frac{m\pi x}{a}\cos\frac{n\pi y}{b}e^{ik_{z}z} \\ &H_{x}^{TM}=E_{0}\frac{i\omega_{k}\epsilon}{h_{mn}^{2}}\frac{n\pi}{a}sin\frac{m\pi x}{a}\cos\frac{n\pi y}{b}e^{ik_{z}z},\, \, \, \, \, \,\, \, \, \, \, \,\, \, \,H_{x}^{TE}=-H_{0}\frac{ik_{z}}{h_{mn}^{2}}\frac{m\pi}{a}sin\frac{m\pi x}{a}\cos\frac{n\pi y}{b}e^{ik_{z}z} \\ &H_{y}^{TM}=-E_{0}\frac{i\omega_{k}\epsilon}{h_{mn}^{2}}\frac{m\pi}{a}\cos\frac{m\pi x}{a}sin\frac{n\pi y}{b}e^{ik_{z}z},\, \, \,\, \, \,\, \, \, H_{y}^{TE}=-H_{0}\frac{ik_{z}}{h_{mn}^{2}}\frac{n\pi}{a}\cos\frac{m\pi x}{a}sin\frac{n\pi y}{b}e^{ik_{z}z} \end{split} \end{equation} where $h_{mn}=\sqrt{(\frac{m\pi}{a})^{2}+(\frac{n\pi}{b})^{2}}$, $\epsilon (\mu )$ is the permittivity (permeability), and $H_{0}, E_{0}$ are arbitrary constants. For quantized modes, we have $E_0=\sqrt{4\hbar h_{mn}^{2}/\epsilon ^{2}\mu \nu LS}$ and $H_0=\sqrt{4\hbar h_{mn}^{2}/\epsilon \mu^{2} \nu LS}$\cite{Kim2013}. The dispersion relation inside the waveguide is given by $\omega_{k}^{2}/c^{2}=(m\pi/a)^{2}+(n\pi/b)^{2}+k_z^{2}$. For simplicity, we here consider the waveguide with square cross section, i.e., $a=b$ and the dispersion curves of different modes are shown in Fig.~\ref{1}(b). For square waveguide, $TE_{mn} (TM_{mn})$ and $TE_{nm} (TM_{nm})$ modes are degenerate, and $TE_{10}$ and $TE_{01}$ have the lowest energy. We assume that the all emitters' transition frequencies are the same and they are below the cutoff frequency of $TE_{11}$ and $TM_{11}$ modes. Since the rectangular waveguide cannot support the $TM_{10}$ and $TM_{01}$ mode, the emitter can only couple to the $TE_{01}$ or $TE_{10}$ modes. Here, without loss of generality we assume that the transition dipole moment of the emitter is in the $y$ direction. Thus, it can only couple to the $TE_{10}$ mode. The emitters are assumed to be located at the center of the waveguide cross section, i.e., $(\frac{a}{2},\frac{a}{2},r_{i})$ and $(\frac{a}{2},\frac{a}{2},r_{j})$. In this case, the mode function for $TE_{10}$ mode is given by $\vec{u}_{k_{z}}(\vec{r}_{i})=\sqrt{\frac{\omega_{k_{z}}\hbar}{\epsilon_{0}LS}}\hat{y}e^{ik_{z}(\vec{r}-\vec{o}_{k_{z}})}$ with $S=a^2$. By reducing the cross section, we can increase the amplitude of the mode function and therefore the coupling strength. Compared with the free space case shown in Appendix A, the only modification to the calculation for the waveguide is $\sum_{\vec{k}s}\rightarrow \sum_{k_{z}}$ in Eq.\eqref{eqa4}. 
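To make the single-mode condition above concrete, the cutoff frequencies follow from the dispersion relation by setting $k_{z}=0$, $\omega_{c,mn}=c\pi\sqrt{(m/a)^{2}+(n/b)^{2}}$. The short sketch below, with an assumed and purely illustrative cross section, lists the lowest cutoffs of a square guide; an emitter whose transition frequency lies between the $TE_{10}/TE_{01}$ and $TE_{11}/TM_{11}$ cutoffs, with a $y$-polarized dipole, indeed couples to the $TE_{10}$ mode only.
\begin{verbatim}
import numpy as np

c = 3.0e8            # speed of light (m/s)
a = b = 5.0e-3       # assumed square cross section, a = b = 5 mm

def cutoff(m, n):
    # omega_c,mn = c*sqrt((m*pi/a)**2 + (n*pi/b)**2), i.e. k_z -> 0
    return c*np.sqrt((m*np.pi/a)**2 + (n*np.pi/b)**2)

for m, n in [(1, 0), (0, 1), (1, 1), (2, 0)]:
    print(f"mode ({m},{n}): cutoff = {cutoff(m, n)/(2*np.pi*1e9):.1f} GHz")
# (1,0)/(0,1): 30.0 GHz   (1,1): 42.4 GHz   (2,0): 60.0 GHz
\end{verbatim}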
We here calculate the first and the second term in Eq.\eqref{eqa4} to show how to get Eq.\eqref{eq6} and Eq.\eqref{eq8}. For the second term, we have \begin{equation} \label{eqc2}\tag{C2} \begin{split} &-\underset{k_z}{\sum}\int_{0}^{t}d\tau\vec{\mu}_{i}\cdot\vec{u}_{\vec{k}s}(r_{i})S_{i}^{+}e^{i\omega_{0}t}\vec{\mu}_{j}^{*}\cdot\vec{u}_{\vec{k}'s'}^{*}(r_{j})S_{j}^{-}e^{-i\omega_{0}(t-\tau)}e^{-i\omega_{\vec{k}'s'}\tau}\cosh^{2}r\rho^{S}(t-\tau)\delta_{\vec{k}\vec{k}'}\delta_{ss'}\\ =&-\frac{L}{2\pi}\int_{-\infty}^{\infty}dk_{z}\int_{0}^{t}d\tau e^{i\omega_{0}\tau}e^{-i\omega_{k_{z}}\tau}\frac{\omega_{k}\mu^{2}}{\epsilon_{0}LS\hbar}e^{ik_{z}(r_{i}-r_{j})}\cosh^{2}rS_{i}^{+}S_{j}^{-}\rho^{S}(t-\tau)\\ \approx&-\frac{L}{2\pi}\int_{0}^{\infty}dk_{z}\int_{0}^{t}d\tau e^{i\omega_{0}\tau}e^{-i[\omega_{0}+c^{2}k_{0z}(k_{z}-k_{0z})/\omega_{0}]\tau}\frac{\omega_{k}\mu^{2}}{\epsilon_{0}LS\hbar}[e^{ik_{z}(r_{i}-r_{j})}+e^{-ik_{z}(r_{i}-r_{j})}]\cosh^{2}rS_{i}^{+}S_{j}^{-}\rho^{S}(t-\tau)\\ \approx&-\frac{L}{2\pi}\int_{-k_{0z}}^{\infty}d\delta k_{z}\int_{0}^{t}d\tau e^{-i\tau c^{2}k_{0z}\delta k_{z}/\omega_{0}}\frac{\omega_{k}\mu^{2}}{\epsilon_{0}LS\hbar}[e^{i(k_{0z}+\delta k_{z})(r_{i}-r_{j})}+e^{-i(k_{0z}+\delta k_{z})(r_{i}-r_{j})}]\cosh^{2}rS_{i}^{+}S_{j}^{-}\rho^{S}(t-\tau)\\ \approx&-\frac{L}{2\pi}\int_{-\infty}^{\infty}d\delta k_{z}\int_{0}^{t}d\tau e^{-i(c^{2}k_{0z}\delta k_{z}/\omega_{0})\tau}\frac{\omega_{k}\mu^{2}}{\epsilon_{0}LS\hbar}[e^{i(k_{0z}+\delta k_{z})(r_{i}-r_{j})}+e^{-i(k_{0z}+\delta k_{z})(r_{i}-r_{j})}]\cosh^{2}rS_{i}^{+}S_{j}^{-}\rho^{S}(t-\tau)\\ \approx&-\frac{L}{2\pi}\int_{0}^{t}d\tau \frac{\omega_{0}\mu^{2}}{\epsilon_{0}LS\hbar}2\pi[e^{ik_{0z}(r_{i}-r_{j})}\delta((r_{i}-r_{j})-\frac{c^{2}k_{0z}}{\omega_{0}}\tau)+e^{-ik_{0z}(r_{i}-r_{j})}\delta((r_{i}-r_{j})+\frac{c^{2}k_{0z}}{\omega_{0}}\tau)]\cosh^{2}rS_{i}^{+}S_{j}^{-}\rho^{S}(t-\tau)\\ \approx&-\frac{L}{2\pi}e^{ik_{0z}r_{ij}}\frac{\omega_{0}\mu^{2}}{\epsilon_{0}LS\hbar}2\pi\frac{\omega_{0}}{c^{2}k_{0z}}\cosh^{2}rS_{i}^{+}S_{j}^{-}\rho^{S}(t)\\ \approx&-[\frac{\gamma_{1d}}{2}\cos(k_{0z}r_{ij})+i\frac{\gamma_{1d}}{2}sin(k_{0z}r_{ij})]\cosh^{2}rS_{i}^{+}S_{j}^{-}\rho^{S}(t)\\ \equiv &-(\frac{\gamma_{ij}}{2}+i\Lambda_{ij})\cosh^{2}rS_{i}^{+}S_{j}^{-}\rho^{S}(t) \end{split} \end{equation} where emitter separation $r_{ij}=|r_{i}-r_{j}|$, $\gamma_{1d}=2\mu^{2}\omega_{0}^{2}/\hbar\epsilon_{0}Sc^{2}k_{0z}$ is the spontaneous decay rate in the waveguide as is shown in Eq.\eqref{eq7}, $\gamma_{ij}=\gamma_{1d} \cos(k_{0z}r_{ij})$ is the collective decay rate, and $\Lambda_{ij}=\gamma_{1d}\sin(k_{0z}r_{ij})/2$ is the collective energy shift. In the third line we expand $\omega_{k}=c\sqrt{(\frac{\pi}{a})^{2}+(k_{z})^{2}}$ around $k_{z}=k_{0z}$ since resonant modes provide dominant contributions. In the fifth line we extend the integration $\int_{-k_{0z}}^{\infty}dk_{z}\rightarrow\int_{-\infty}^{\infty}dk_{z}$ because the main contribution comes from the components around $\delta k_{z}=0$. In the next line, Weisskopf-Wigner approximation is used. Thus, we have obtained $\gamma_{ij}$ and $\Lambda_{ij}$ as is shown in Eq.\eqref{eq8}. 
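As a worked special case of these expressions (not in the original): for two emitters separated by a quarter of the guided wavelength, $k_{0z}r_{ij}=\pi/2$, the collective decay vanishes while the coherent exchange is maximal, $\gamma_{ij}=\gamma_{1d}\cos(\pi/2)=0$ and $\Lambda_{ij}=\gamma_{1d}/2$; conversely, at $k_{0z}r_{ij}=n\pi$ the interaction is purely dissipative, $\gamma_{ij}=(-1)^{n}\gamma_{1d}$ and $\Lambda_{ij}=0$.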
Next we need to calculate the first term (squeezing term) in Eq.\eqref{eqa4}: \begin{equation} \label{eqb8}\tag{C4} \begin{split} & \underset{k_{z}}{\sum}\int_{0}^{t}d\tau\{\vec{\mu}{}_{i}\cdot\vec{u}_{2\vec{k}_{0}-\vec{k}}(r_{i})S_{i}^{+}\vec{\mu}_{j}\cdot\vec{u}_{\vec{k}}(r_{j})S_{j}^{+}e^{i(\omega_{\vec{k}}-\omega_{0})\tau}[-\sinh(r)\cosh(r)]\rho^{S}(t-\tau) \\ &=-\frac{L}{2\pi}\int_{0}^{2k_{0z}}dk_{z}\int_{0}^{t}d\tau e^{i(\omega_{k_{z}}-\omega_{0})\tau}e^{i(2k_{0z}-k_{z})(r_{i}-o_{1})}e^{ik_{z}(r_{j}-o_{1})}\frac{\sqrt{\omega_{k_{z}}\omega_{2k_{0z}-k_{z}}}\mu^{2}}{\epsilon_{0}LS\hbar}\sinh(r)\cosh(r)S_{i}^{+}S_{j}^{+}\rho^{S}(t-\tau)\\ &-\frac{L}{2\pi}\int_{-2k_{0z}}^{0}dk_{z}\int_{0}^{t}d\tau e^{i(\omega_{k_{z}}-\omega_{0})\tau}e^{i(-2k_{0z}-k_{z})(r_{i}-o_{2})}e^{ik_{z}(r_{j}-o_{2})}\frac{\sqrt{\omega_{k_{z}}\omega_{-2k_{0z}-k_{z}}}\mu^{2}}{\epsilon_{0}LS\hbar}\sinh(r)\cosh(r)S_{i}^{+}S_{j}^{+}\rho^{S}(t-\tau). \end{split} \end{equation} For $i=j$, Eq.\eqref{eqb8} reduces to \begin{equation} \label{eqb9}\tag{C5} \begin{split} &\underset{k_{z}}{\sum}\int_{0}^{t}d\tau\{\vec{\mu}{}_{i}\cdot\vec{u}_{2\vec{k}_{0}-\vec{k}}(r_{i})S_{i}^{+}\vec{\mu}_{j}\cdot\vec{u}_{\vec{k}}(r_{j})S_{j}^{+}e^{i(\omega_{\vec{k}}-\omega_{0})\tau}[-\sinh(r)\cosh(r)]\rho^{S}(t-\tau)\\ & =-\frac{L}{2\pi}\int_{0}^{2k_{0z}}dk_{z}\int_{0}^{t}d\tau e^{i\frac{c^{2}k_{0z}}{_{\omega_{0}}}(k_{z}-k_{0z})\tau}e^{i2k_{0z}(r_{i}-o_{1})}\frac{\sqrt{\omega_{k_{z}}\omega_{2k_{0z}-k_{z}}}\mu^{2}}{\epsilon_{0}LS\hbar}\sinh(r)\cosh(r)S_{i}^{+}S_{j}^{+}\rho^{S}(t-\tau)\\ & -\frac{L}{2\pi}\int_{-2k_{0z}}^{0}dk_{z}\int_{0}^{t}d\tau e^{i\frac{c^{2}k_{0z}}{_{\omega_{0}}}(k_{z}-k_{0z})\tau}e^{-i2k_{0z}(r_{i}-o_{2})}\frac{\sqrt{\omega_{k_{z}}\omega_{-2k_{0z}-k_{z}}}\mu^{2}}{\epsilon_{0}LS\hbar}\sinh(r)\cosh(r)S_{i}^{+}S_{j}^{+}\rho^{S}(t-\tau)\\ &=-\frac{L}{2\pi}[e^{i2k_{0z}(r_{i}-o_{1})}+e^{-i2k_{0z}(r_{i}-o_{2})}]\frac{\omega_{k_{0z}}\mu^{2}}{\epsilon_{0}LS\hbar}\int_{0}^{t}d\tau2\pi\delta(\frac{c^{2}k_{0z}}{\omega_{0}}\tau)\sinh(r)\cosh(r)S_{i}^{+}S_{j}^{+}\rho^{S}(t-\tau)\\ &=-\frac{L}{2\pi}[e^{i2k_{0z}(r_{i}-o_{1})}+e^{-i2k_{0z}(r_{i}-o_{2})}]\frac{\omega_{k_{0z}}\mu^{2}}{\epsilon_{0}LS\hbar}\int_{0}^{t}d\tau2\pi\delta(\frac{c^{2}k_{0z}}{\omega_{0}}\tau)\sinh(r)\cosh(r)S_{i}^{+}S_{j}^{+}\rho^{S}(t-\tau)\\ &=-e^{i2k_{0z}R}\frac{\omega_{0}^{2}\mu^{2}}{\epsilon_{0}\hbar Sc^{2}k_{0z}}\cos(2k_{0z}r_{i})\sinh(r)\cosh(r)S_{i}^{+}S_{j}^{+}\rho^{S}(t)\\ &=-e^{i2k_{0z}R}\frac{\gamma_{1d}}{2}\cos(2k_{0z}r_{i})\sinh(r)\cosh(r)S_{i}^{+}S_{j}^{+}\rho^{S}(t) \end{split} \end{equation} where we have used the fact that the origin of coordinate system is at equal distant from two sources(i.e., $o_2=-o_1=R$) in the second last line. Thus, we have $\gamma'_{ii}=\gamma_{1d}\cos(2k_{0z}r_{i})$. For $i\neq j$, Eq. 
\eqref{eqb8} reduces to \begin{equation} \label{eqb10}\tag{C6} \begin{split} &\underset{k_{z}}{\sum}\int_{0}^{t}d\tau\{\vec{\mu}{}_{i}\cdot\vec{u}_{2\vec{k}_{0}-\vec{k}}(r_{i})S_{i}^{+}\vec{\mu}_{j}\cdot\vec{u}_{\vec{k}}(r_{j})S_{j}^{+}e^{i(\omega_{\vec{k}}-\omega_{0})\tau}[-\sinh(r)\cosh(r)]\rho^{S}(t-\tau)\\ & =-\frac{L}{2\pi}\int_{0}^{2k_{0z}}dk_{z}\int_{0}^{t}d\tau e^{i\frac{c^{2}k_{0z}}{_{\omega_{0}}}(k_{z}-k_{0z})\tau}e^{i2k_{0z}(r_{c}-o_{1})}e^{-i(k_{z}-k_{0z})(r_{i}-r_{j})}\frac{\sqrt{\omega_{k_{z}}\omega_{2k_{0z}-k_{z}}}\mu^{2}}{\epsilon_{0}LS\hbar}\sinh(r)\cosh(r)S_{i}^{+}S_{j}^{+}\rho^{S}(t-\tau) \\ & -\frac{L}{2\pi}\int_{-2k_{0z}}^{0}dk_{z}\int_{0}^{t}d\tau e^{i\frac{c^{2}k_{0z}}{_{\omega_{0}}}(-k_{z}-k_{0z})\tau}e^{-i2k_{0z}(r_{c}-o_{2})}e^{-i(k_{z}+k_{0z})(r_{i}-r_{j})}\frac{\sqrt{\omega_{k_{z}}\omega_{-2k_{0z}-k_{z}}}\mu^{2}}{\epsilon_{0}LS\hbar}\sinh(r)\cosh(r)S_{i}^{+}S_{j}^{+}\rho^{S}(t-\tau)\\ & =-\frac{L}{2\pi}\int_{0}^{2k_{0z}}dk_{z}\int_{0}^{t}d\tau e^{i\frac{c^{2}k_{0z}}{_{\omega_{0}}}(k_{z}-k_{0z})\tau}e^{i2k_{0z}(r_{c}-o_{1})}e^{-i(k_{z}-k_{0z})(r_{i}-r_{j})}\frac{\sqrt{\omega_{k_{z}}\omega_{2k_{0z}-k_{z}}}\mu^{2}}{\epsilon_{0}LS\hbar}\sinh(r)\cosh(r)S_{i}^{+}S_{j}^{+}\rho^{S}(t-\tau) \\ & -\frac{L}{2\pi}\int_{0}^{2k_{0z}}dk_{z}\int_{0}^{t}d\tau e^{i\frac{c^{2}k_{0z}}{_{\omega_{0}}}(k_{z}-k_{0z})\tau}e^{-i2k_{0z}(r_{c}-o_{2})}e^{-i(-k_{z}+k_{0z})(r_{i}-r_{j})}\frac{\sqrt{\omega_{-k_{z}}\omega_{-2k_{0z}+k_{z}}}\mu^{2}}{\epsilon_{0}LS\hbar}\sinh(r)\cosh(r)S_{i}^{+}S_{j}^{+}\rho^{S}(t-\tau) \\ & =-\frac{L}{2\pi}e^{i2k_{0z}(r_{c}-o_{1})}\frac{\omega_{k_{0z}}\mu^{2}}{\epsilon_{0}LS\hbar}\int_{-\infty}^{\infty}dk_{z}\int_{0}^{t}d\tau e^{i\frac{c^{2}k_{0z}}{_{\omega_{0}}}(k_{z}-k_{0z})\tau}e^{-i(k_{z}-k_{0z})(r_{i}-r_{j})}\sinh(r)\cosh(r)S_{i}^{+}S_{j}^{+}\rho^{S}(t-\tau) \\ &-\frac{L}{2\pi}e^{-i2k_{0z}(r_{c}-o_{2})}\frac{\omega_{k_{0z}}\mu^{2}}{\epsilon_{0}LS\hbar}\int_{-\infty}^{\infty}dk_{z}\int_{0}^{t}d\tau e^{i\frac{c^{2}k_{0z}}{_{\omega_{0}}}(k_{z}-k_{0z})\tau}e^{i(k_{z}-k_{0z})(r_{i}-r_{j})}\sinh(r)\cosh(r)S_{i}^{+}S_{j}^{+}\rho^{S}(t-\tau)\\ &=-\frac{L}{2\pi}e^{i2k_{0z}R}\frac{\omega_{0}\mu^{2}}{\epsilon_{0}LS\hbar}\int_{0}^{t}d\tau2\pi[e^{i2k_{0z}r_{c}}\delta(r_{i}-r_{j}-\frac{c^{2}k_{0z}}{_{\omega_{0}}}\tau)+e^{-i2k_{0z}r_{c}}\delta(r_{i}-r_{j}+\frac{c^{2}k_{0z}}{_{\omega_{0}}}\tau)]\sinh(r)\cosh(r)S_{i}^{+}S_{j}^{+}\rho^{S}(t-\tau) \\ &=-e^{i2k_{0z}R}\frac{\omega_{0}^{2}\mu^{2}}{\epsilon_{0}\hbar Sc^{2}k_{0z}}e^{i2k_{0z}r_{c}sgn(i-j)}S_{i}^{+}S_{j}^{+}\rho^{S}(t)\rightarrow-\frac{\gamma_{1d}}{2}e^{i2k_{0z}R}\cos(k_{0z}(r_{i}+r_{j}))S_{i}^{+}S_{j}^{+}\rho^{S}(t) \end{split} \end{equation} where $sgn(i-j)$ is the sign function. The last arrow is because we need to sum over $i,j$, so the imaginary part of $e^{i2k_{0z}r_{c}sgn(i-j)}$ vanishes and the neat result is that $\gamma'_{ij}=e^{i2k_{0z}R}\gamma_{1d}\cos(k_{0z}(r_{i}+r_{j}))$. As for $S_{i}^{+}\rho^{S}(t)S_{j}^{+}$ terms, the combination of the last two terms in Eq.\eqref{eqa2} will make the imaginary part of $e^{i2k_{0z}r_{c}sgn(i-j)}$ vanish. Thus, we have $\gamma'_{ij}=e^{i2k_{0z}R}\gamma_{1d}\cos(k_{0z}(r_i+r_j))$. If one needs to get $\gamma_{ij}, \gamma'_{ij}$ and $\Lambda_{ij}$in the unidirectional waveguide case, we just need to discard the second terms in the parenthesis of Eq.\eqref{eqc2} and Eq.\eqref{eqb10} \end{widetext} \end{document}
\begin{document} \title{Evaluating analytic gradients on quantum hardware} \author{Maria Schuld} \email{[email protected]} \author{Ville Bergholm} \author{Christian Gogolin} \author{Josh Izaac} \author{Nathan Killoran} \affiliation{Xanadu Inc., 372 Richmond St W, Toronto, M5V 1X6, Canada} \date{\today} \begin{abstract} An important application for near-term quantum computing lies in optimization tasks, with applications ranging from quantum chemistry and drug discovery to machine learning. In many settings --- most prominently in so-called parametrized or variational algorithms --- the objective function is a result of hybrid quantum-classical processing. To optimize the objective, it is useful to have access to exact gradients of quantum circuits with respect to gate parameters. This paper shows how gradients of expectation values of quantum measurements can be estimated using the same, or almost the same, architecture that executes the original circuit. It generalizes previous results for qubit-based platforms, and proposes recipes for the computation of gradients of continuous-variable circuits. Interestingly, in many important instances it is sufficient to run the original quantum circuit twice while shifting a single gate parameter to obtain the corresponding component of the gradient. More general cases can be solved by conditioning a single gate on an ancilla. \end{abstract} \maketitle \section{Introduction} Hybrid optimization algorithms have become a central quantum software design paradigm for current-day quantum technologies, since they outsource parts of the computation to classical computers. Examples of such algorithms are variational quantum eigensolvers \cite{peruzzo2014variational}, quantum approximate optimization \cite{farhi2014quantum}, variational autoencoders \cite{romero2017quantum}, quantum feature embeddings \cite{schuld2018quantum, havlicek2018supervised} and variational classifiers \cite{schuld2018circuit, farhi2018classification}, but also more general hybrid optimization frameworks \cite{bergholm2018pennylane}. In such applications, the objective or cost function is a combination of both classical and quantum information processing modules, or nodes (see Fig.~\ref{Fig:hybrid}). The quantum nodes execute parametrized quantum circuits, also called \textit{variational circuits}, in which gates have adjustable continuous parameters such as rotations by an angle. To unlock the potential of gradient-descent-based optimization strategies it is essential to have access to the gradients of quantum computations. While individual quantum measurements produce probabilistic results, the expectation value of a quantum observable --- which can be estimated by taking the average over measurement results --- is a deterministic quantity that varies smoothly with the gate parameters. It is therefore possible to formally define the gradient of a quantum computation via derivatives of expectations. The challenge however is to compute such gradients on quantum hardware. As we will lay out below, the derivative of a quantum expectation with respect to a parameter $\mu$ used in gate $\mathcal{G}$ involves the ``derivative of the gate'' $\partial_{\mu} \mathcal{G}$, which is not necessarily a quantum gate itself. Hence, the derivative of an expectation is not a valid quantum expectation.
Since in interesting cases the gradient, just as the objective function itself, tends to be classically intractable, we need to express such derivatives as a combination of quantum operations that can be implemented in hardware. Even more, in the case of special-purpose quantum hardware it is desirable that gradients can be evaluated by the same device that is used for the original computation. This paper derives rules to compute the partial derivatives of quantum expectation values with respect to gate parameters on quantum hardware. A number of results in this direction have been recently proposed in the quantum machine learning literature \cite{guerreschi2017practical, farhi2018classification, schuld2018circuit, mitarai2018quantum, liu2018differentiable}. Refs.~\cite{guerreschi2017practical, farhi2018classification, schuld2018circuit} note that if the derivative $\hat{p}artial_{\mu} \mathcal{G}$ as well as the observable whose expectation we are interested in can be decomposed into a sum of unitaries, we can evaluate the derivative of an expectation by measuring an overlap of two quantum states. Mitarai \mathrm{e}mph{et al.}~\cite{mitarai2018quantum}, leveraging a technique from quantum control, propose an elegant method for gates of the form $\mathcal{G} = e^{-i \mu \sigma}$, where $\sigma$ is a tensor product of the Pauli operators $\{\sigma_x, \sigma_y, \sigma_z\}$. In this case, the derivative can be computed by what we will call the ``parameter shift rule'', which requires us to evaluate the original expectation twice, but with one circuit parameter shifted by a fixed value. \begin{figure*}[t] \centering \includegraphics[width=0.9\textwidth]{figures/hybrid.pdf} \caption{The ``parameter shift rule'' in the larger context of hybrid optimization. A quantum node, in which a variational quantum algorithm is executed, can compute derivatives of its outputs with respect to gate parameters by running the original circuit twice, but with a shift in the parameter in question.} \label{Fig:hybrid} \mathrm{e}nd{figure*} In this work, we make several contributions to the literature on quantum gradients. Firstly, we expand the parameter shift rule by noting that it holds for any gate of the form $\mathcal{G} = e^{-i \mu G}$, where the Hermitian generator~$G$ has at most two distinct eigenvalues. We mention important examples of this class. Secondly, we show that any other gate can be handled by a method that involves a coherent \textit{linear combination of unitaries} routine \cite{childs12}. This requires adding a single ancilla qubit and conditioning the gate and its ``derivative'' on the ancilla while running the circuit. Thirdly, we derive parameter shift rules for Gaussian gates in continuous-variable quantum computing. These rules can be efficiently implemented if all gates following the differentiated gate are Gaussian and the final observable is a low-degree polynomial of the creation and annihilation operators. In fact, the method still works efficiently for some non-Gaussian gates, such as the cubic phase gate, as long as there is at most a logarithmically large number of these non-Gaussian gates. The results of this paper are implemented in the software framework \textit{PennyLane} \cite{bergholm2018pennylane}, which facilitates hybrid quantum-classical optimization across various quantum hardwares and simulator platforms \cite{bergholm2018pennylane}. 
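To give a feeling for how such gradient rules are exposed to a user, a minimal usage sketch in the spirit of PennyLane is shown below; the exact API may differ between library versions, so this snippet should be read as an assumed illustration rather than as code from the framework documentation.
\begin{verbatim}
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, diff_method="parameter-shift")
def f(mu):
    # variational circuit: one parametrized rotation, then <sigma_z>
    qml.RX(mu, wires=0)
    return qml.expval(qml.PauliZ(0))

mu = np.array(0.54, requires_grad=True)
print(f(mu))            # cos(mu)
print(qml.grad(f)(mu))  # -sin(mu), from two parameter-shifted circuit runs
\end{verbatim}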
\section{Computing quantum gradients} Consider a quantum algorithm that is possibly part of a larger hybrid computation, as shown in Fig.~\ref{Fig:hybrid}. The quantum algorithm or circuit consists of a gate sequence $U(\theta)$ that depends on a set $\theta$ of $m$ real gate parameters, followed by the measurement of an observable~$\hat{B}$.\footnote{ The output of the circuit may consist of the measurements of $n$~mutually commuting scalar observables, however, without loss of generality, they can always be combined into a vector-valued observable with $n$~components. } An example is the Pauli-Z observable $\hat{B} = \sigma_z$, and the result of this single measurement is $\hat{p}m 1$ for a qubit found in the state $\ket{0}$ or $\ket{1}$, respectively. The gate sequence $U(\theta)$ usually consists of an ansatz or architecture that is repeated $K$ times, where $K$ is a hyperparameter of the computation. We refer to the combined procedure of applying the gate sequence~$U(\theta)$ and finding the expectation value of the measurement~$\hat{B}$ as a \textit{variational circuit}. In the overall hybrid computation one can therefore understand a variational circuit as a function~$f: \mathrm{e}nsuremath{\mathbb R}^m \to \mathrm{e}nsuremath{\mathbb R}^n$, mapping the gate parameters to an expectation, \begin{equation} f(\theta) \coloneqq \mathrm{e}xpval{\hat{B}} = \bra{0} U^{\hat{d}agger}(\theta) \hat{B} U(\theta) \ket{0}. \label{Eq:expval} \mathrm{e}nd{equation} While this abstract definition of a variational circuit is exact, its physical implementation on a quantum device runs the quantum algorithm several times and averages measurement samples to get an \textit{estimate} of~$f(\theta)$. If the circuit is executed on a classical simulator, $f(\theta)$ can be computed exactly up to numerical precision. In the following, we are concerned with the partial derivative $\hat{p}artial_{\mu} f(\theta)$ where $\mu \in \theta$ is one of the gate parameters. The partial derivatives with respect to all gate parameters form the gradient $\nabla f$. The differentiation rules we derive consider the expectation value in Eq.~\mathrm{e}qref{Eq:expval} and are therefore exact. Just like the variational circuit itself has an `analytic' definition and a `stochastic' implementation, the \textit{evaluation} of these rules with finite runs on noisy hardware return estimates of the gradient.\footnote{It is an open question whether such estimates have favourable properties similar to approximations of gradients in stochastic gradient descent.} There are three main approaches to evaluate the gradients of a numerical computation, i.e., a computer program that executes a mathematical function $g(x)$: \begin{enumerate} \item \textit{Numerical differentiation}: The gradient is approximated by black-box evaluations of $g$, e.g., \begin{equation} \label{eq:finitediff} \quad \qquad \nabla g(x) \hat{a}pprox (g(x +\Delta x/2) -g(x -\Delta x/2) )/\Delta x, \mathrm{e}nd{equation} where $\Delta x$ is a small shift. \item \textit{Automatic differentiation}: The gradient is efficiently computed through the accumulation of intermediate derivatives corresponding to different subfunctions used to build $g$, following the chain rule~\cite{maclaurin2015autograd}. \item \textit{Symbolic differentiation}: Using manual calculations or a symbolic computer algebra package, the function $\nabla g$ is constructed and evaluated. 
\mathrm{e}nd{enumerate} Until recently, numerical differentiation (or altogether gradient-free methods) have been the method of choice in the quantum variational circuits literature. However, the high errors of near-term quantum devices can make it unfeasible to use finite difference formulas to approximate the gradient of a circuit. Several modern numerical programming frameworks, especially in machine learning, successfully employ automatic differentiation \cite{baydin2017automatic} instead, a famous example being the ubiquitous backpropagation algorithm for the training of neural networks. Unfortunately, it is not clear how intermediate derivatives could be stored and reused \mathrm{e}mph{inside of a quantum computation}, since the intermediate quantum states cannot be measured without impacting the overall computation. To compute gradients of quantum expectation values, we therefore use the following strategy: Derive an equation for $\hat{p}artial_{\mu} f(\theta), \;\mu \in \theta$, whose constituent parts can be evaluated on a quantum computer and subsequently combined on a classical coprocessor. It turns out that this strategy has a number of favourable properties: \begin{enumerate} \item It follows similar rules for a range of different circuits, \item Evaluating $\hat{p}artial_{\mu} f(\theta)$ can often be done on a circuit architecture that is very similar or even identical to that for evaluating $f(\theta)$, \item Evaluating $\hat{p}artial_{\mu} f(\theta)$ requires the evaluation of only two expectation values. \mathrm{e}nd{enumerate} We emphasize that automatic differentiation techniques such as backpropagation can still be used within a larger overall hybrid computation, but we will not get any efficiency gains for this technique on the intermediate steps of the quantum circuit. The remainder of the paper will present the recipes for how to evaluate the derivatives of expectation values, first for qubit-based, and then for continuous-variable quantum computing. The results are summarized in Table~\ref{Tbl:results}. \begin{table*}[t] \hat{d}ef\hat{a}rraystretch{1.5} \setlength\tabcolsep{12pt} \begin{tabular}{p{4cm} p{6cm} p{4.5cm}} \hline \hline Architecture & Condition & Technique \\ \hline Qubit & $\mathcal{G}$ generated by a Hermitian operator with $2$ unique eigenvalues & parameter shift rule\\ Qubit & no special condition & derivative gate decomposition + linear combination of unitaries\\ Continuous-variable & $\mathcal{G}$ Gaussian, followed by at most logarithmically many non-Gaussian operations & continuous-variable parameter shift rules \\ Continuous-variable & no special condition & unknown \\ \hline \hline \mathrm{e}nd{tabular} \caption{Summary of results. $\mathcal{G}$ refers to the gate with parameter $\mu$ that we compute the partial derivative for. $\hat{p}artial_{\mu} \mathcal{G}$ refers to the partial derivative of the operator $\mathcal{G}$. } \label{Tbl:results} \mathrm{e}nd{table*} \section{Gradients of discrete-variable circuits} \label{sec:discrete} As a first step, the overall unitary $U(\theta)$ of the variational circuit can be decomposed into a sequence of single-parameter gates, which can be differentiated using the product rule. For simplicity, let us assume that the parameter $\mu \in \theta$ only affects a single gate $\mathcal{G}(\mu)$ in the sequence, $U(\theta) = V \mathcal{G}(\mu) W$. 
The partial derivative $\hat{p}artial_\mu f$ then looks like \begin{equation} \label{Eq:der_of_exp} \hat{p}artial_\mu f = \hat{p}artial_{\mu} \bra{\hat{p}si} \mathcal{G}^{\hat{d}agger} \hat{Q} \mathcal{G}\ket{\hat{p}si} = \bra{\hat{p}si} \mathcal{G}^{\hat{d}agger} \hat{Q} (\hat{p}artial_{\mu}\mathcal{G}) \ket{\hat{p}si} +\text{h.c.}, \mathrm{e}nd{equation} where we have absorbed~$V$ into the Hermitian observable $\hat{Q} = V^\hat{d}agger \hat{B} V$, and $W$~into the state $\ket{\hat{p}si} = W \ket{0}$. For any two operators $B$, $C$ we have \begin{align} \label{Eq:term_equivalence} \bra{\hat{p}si}B^{\hat{d}agger} \hat{Q} C \ket{\hat{p}si} +\text{h.c.}\notag\\ \begin{split} = \frac{1}{2}\big(& \bra{\hat{p}si} (B + C)^{\hat{d}agger} \hat{Q} (B + C) \ket{\hat{p}si}\\ -&\bra{\hat{p}si}(B - C)^{\hat{d}agger} \hat{Q} (B - C) \ket{\hat{p}si} \big). \mathrm{e}nd{split} \mathrm{e}nd{align} Hence, whenever we can implement $\mathcal{G} \hat{p}m \hat{p}artial_\mu \mathcal{G}$ as part of an overall unitary evolution, we can evaluate Eq.~\mathrm{e}qref{Eq:der_of_exp} directly. Sec.~\ref{sec:parameter_shift_rule} identifies a class of gates for which $\mathcal{G} \hat{p}m \hat{p}artial_\mu \mathcal{G}$ is already unitary, while Sec.~ \ref{sec:qubit_general_lcu} shows that an ancilla can help to evaluate the terms in Eq.~\mathrm{e}qref{Eq:der_of_exp} with minimal overhead, and guaranteed success. \subsection{Parameter-shift rule for gates with generators with two distinct eigenvalues} \label{sec:parameter_shift_rule} Consider a gate $\mathcal{G}(\mu) = \mathrm{e}^{-i \mu G}$ generated by a Hermitian operator~$G$. Its derivative is given by \begin{equation} \hat{p}artial_{\mu} \mathcal{G} = -i G \mathrm{e}^{-i \mu G}. \mathrm{e}nd{equation} Substituting into Eq.~\mathrm{e}qref{Eq:der_of_exp}, we get \begin{align} \hat{p}artial_{\mu} f &= \bra{\hat{p}si'} \hat{Q} \, (-iG) \ket{\hat{p}si'} +\text{h.c.} , \label{Eq:qubitder} \mathrm{e}nd{align} where $\ket{\hat{p}si'} = \mathcal{G} \ket{\hat{p}si}$. If $G$ has just two distinct eigenvalues (which can be repeated) \footnote{The rather elegant special case for generators $G$ that are tensor products of Pauli matrices has been presented in Mitarai \mathrm{e}mph{et al.}~\cite{mitarai2018quantum}. Here we consider the slightly more general case.} we can, without loss of generality, shift the eigenvalues to~$\hat{p}m r$, as the global phase is unobservable. Note that any single qubit gate is of this form. Using Eq.~\mathrm{e}qref{Eq:term_equivalence} for $B=\mathbbm{1}$ and $C=-ir^{-1}G$ we can write \begin{equation} \begin{split} \hat{p}artial_{\mu} f = \frac{r}{2}\big( &\bra{\hat{p}si'} (\mathbbm{1} -ir^{-1}G)^{\hat{d}agger} \hat{Q} (\mathbbm{1} -ir^{-1}G) \ket{\hat{p}si'}\\ -&\bra{\hat{p}si'}(\mathbbm{1} +ir^{-1}G)^{\hat{d}agger} \hat{Q} (\mathbbm{1} +ir^{-1}G) \ket{\hat{p}si'} \big). \mathrm{e}nd{split} \mathrm{e}nd{equation} We now show that for gates with eigenvalues $\hat{p}m r$ there exist values for $\mu$ for which $\mathcal{G}(\mu)$ becomes equal to $\frac{1}{\sqrt{2}}(\mathbbm{1} \hat{p}m i r^{-1}G)$. \begin{theorem} \label{Th:taylor} If the Hermitian generator~$G$ of the unitary operator $\mathcal{G}(\mu) = \mathrm{e}^{-i \mu G}$ has at most two unique eigenvalues $\hat{p}m r$, the following identity holds: \begin{equation} \mathcal{G}\left(\frac{\hat{p}i}{4r}\right) = \frac{1}{\sqrt{2}}(\mathbbm{1} - i r^{-1} G). 
\label{Eq:Pauli_equality} \mathrm{e}nd{equation} \begin{proof} The fact that $G$ has the spectrum $\{\hat{p}m r\}$ implies $G^2 = r^2 \mathbbm{1}$. Therefore the sine and cosine parts of the Taylor series of $\mathcal{G}(\mu)$ take the following simple form: \begin{align} \label{Eq:taylor} \mathcal{G}(\mu) &= \mathrm{e}xp(-i\mu G) = \sum_{k=0}^\infty \frac{(-i\mu)^k G^k}{k!}\\ &= \sum_{k=0}^\infty \frac{(-i\mu)^{2k} G^{2k}}{(2k)!} +\sum_{k=0}^\infty \frac{(-i\mu)^{2k+1} G^{2k+1}}{(2k+1)!} \\ &= \mathbbm{1} \sum_{k=0}^\infty \frac{(-1)^{k} (r\mu)^{2k}}{(2k)!} \nonumber \\ & \quad -i r^{-1} G \sum_{k=0}^\infty \frac{(-1)^{k} (r\mu)^{2k+1}}{(2k+1)!}\\ &= \mathbbm{1} \cos(r \mu) -i r^{-1} G \sin(r \mu). \mathrm{e}nd{align} Hence we get $\mathcal{G}(\frac{\hat{p}i}{4r}) = \frac{1}{\sqrt{2}}(\mathbbm{1} -i r^{-1} G)$. \mathrm{e}nd{proof} \mathrm{e}nd{theorem} We conclude that in this case $\hat{p}artial_{\mu} f$ can be estimated using two additional evaluations of the quantum device; for these evaluations, we place either the gate $\mathcal{G}(\frac{\hat{p}i}{4r})$ or the gate $\mathcal{G}(-\frac{\hat{p}i}{4r})$ in the original circuit next to the gate we are differentiating. Since for unitarily generated one-parameter gates $\mathcal{G}(a) \mathcal{G}(b) = \mathcal{G}(a+b)$, this is equivalent to shifting the gate parameter, and we get the ``parameter shift rule'' with the shift $s = \frac{\hat{p}i}{4r}$: \begin{align} \label{eq:parameter_shift_rule} \hat{p}artial_{\mu} f &= r \big(\bra{\hat{p}si} \mathcal{G}^\hat{d}agger(\mu +s) \hat{Q} \mathcal{G}(\mu +s) \ket{\hat{p}si}\\ \notag & \quad -\bra{\hat{p}si} \mathcal{G}^{\hat{d}agger}(\mu -s) \hat{Q} \mathcal{G}(\mu -s) \ket{\hat{p}si}\big)\\ \label{eq:parameter_shift_rule2} &= r \left(f(\mu+s) -f(\mu-s)\right). \mathrm{e}nd{align} If the parameter $\mu$ appears in more than a single gate in the circuit, the derivative is obtained using the product rule by shifting the parameter in each gate separately and summing the results. It is interesting to note that Eq.~\mathrm{e}qref{eq:parameter_shift_rule2} looks similar to the finite difference rule in Eq.~\mathrm{e}qref{eq:finitediff}, but uses a macroscopic shift and is in fact exact. The parameter shift rule applies to a number of special cases. As remarked in Mitarai \mathrm{e}mph{et al.} \cite{mitarai2018quantum}, if $G$ is a one-qubit rotation generator in $\frac{1}{2}\{\sigma_x, \sigma_y, \sigma_z\}$ then $r=1/2$ and $s = \frac{\hat{p}i}{2}$. If $G = r \vec{n} \cdot\boldsymbol{\sigma}$ is a linear combination of Pauli operators with the $3$-dimensional normal vector $\vec{n}$, it still has two unique eigenvalues and Eq.~\mathrm{e}qref{Eq:Pauli_equality} can also be derived from what is known as the generalized Euler rule. Also gates from a ``hardware-efficient'' variational circuit ansatz may fall within the scope of the parameter shift rule. For example, according to the documentation of Google's \textit{Cirq} programming language \cite{googlecirq}, their Xmon qubits naturally implement the three gates \begin{align*} \text{ExpW}(\mu, \hat{d}elta) &= \mathrm{e}xp \left(- i \mu \left( \cos (\hat{d}elta) \sigma_x + \sin(\hat{d}elta)\sigma_y\right) \right),\\ \text{ExpZ}(\mu) &= \mathrm{e}xp \left(- i \mu \sigma_z \right),\\ \text{Exp11}(\mu) &= \mathrm{e}xp \left(- i \mu \ketbra{11}{11} \right) . \mathrm{e}nd{align*} which all have generators with at most two eigenvalues. Pauli-based multi-qubit gates however do in general not fall in this category. 
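The rule is easy to verify numerically for a generator that is not a single Pauli word; the sketch below (an illustrative check, not part of the original text) uses $G=(\sigma_x+\sigma_z)/\sqrt{2}$, which has the two eigenvalues $\pm 1$, so that $r=1$ and $s=\pi/4$, and compares the parameter-shift value with a finite-difference estimate. Generators with more than two distinct eigenvalues, in contrast, are not covered by this recipe.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
G = (sx + sz)/np.sqrt(2)       # eigenvalues +1 and -1, hence r = 1
r, s = 1.0, np.pi/4

psi = np.array([1, 0], dtype=complex)   # |psi> = |0>, i.e. take W = identity
Q = sz                                  # observable

def f(mu):
    v = expm(-1j*mu*G) @ psi
    return np.real(v.conj() @ Q @ v)

mu = 0.37
print(r*(f(mu + s) - f(mu - s)))              # parameter-shift rule
print((f(mu + 1e-6) - f(mu - 1e-6))/2e-6)     # finite difference, agrees
\end{verbatim}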
A hardware-efficient example here is the microwave-controlled transmon gate for superconducting architectures\footnote{The time-dependent prefactors are summarized as the gate parameter $\mu$, while $b=\frac{J}{\Delta_{12}}$ represents the quotient of the interaction strength $J$ and detuning $\Delta_{12}$ between the qubits, and $c =m_{12}$ is a cross-talk factor.} \cite{chow2011simple}, \begin{equation*} \mathcal{G}(\mu) = \mathrm{e}xp \left( \mu (\sigma_x \otimes \mathbbm{1} - b (\sigma_z \otimes \sigma_x) + c (\mathbbm{1} \otimes \sigma_x )) \right). \mathrm{e}nd{equation*} which has $4$ eigenvalues. In these cases, other strategies have to be found to compute exact gradients of variational circuits. \subsection{Differentiation of general gates via linear combination of unitaries} \label{sec:qubit_general_lcu} In case the parameter-shift differentiation strategy does not apply, we may always evaluate Eq.~\mathrm{e}qref{Eq:der_of_exp} by introducing an ancilla qubit. Since for finite-dimensional systems $\hat{p}artial_{\mu}\mathcal{G}$ can be expressed as a complex square matrix, we can always decompose it into a linear combination of unitary matrices $A_1$ and $A_2$, \begin{equation} \hat{p}artial_{\mu}\mathcal{G} = \frac{\hat{a}lpha}{2} ((A_1 + A_1^{\hat{d}agger}) + i(A_2 + A_2^{\hat{d}agger})) \label{Eq:decomp} \mathrm{e}nd{equation} with real $\hat{a}lpha$.\footnote{ If $\hat{a}lpha$ contains a renormalisation so that $|\mathcal{G}| \leq 1$, and $\mathcal{G} = \mathcal{G}_{\mathrm{re}} + i \mathcal{G}_{\mathrm{im}}$ we can set $A_1 = \mathcal{G}_{\mathrm{re}} + i \sqrt{\mathbbm{1} - \mathcal{G}_{\mathrm{re}}^2}$ and $A_2 = \mathcal{G}_{\mathrm{im}} + i \sqrt{\mathbbm{1} - \mathcal{G}_{\mathrm{im}}^2}$.} $A_1$ and $A_2$ in turn can be implemented as quantum circuits. To be more general, for example when another decomposition suits the hardware better, we can write \begin{equation} \hat{p}artial_{\mu}\mathcal{G} = \sum_{k=1}^K \hat{a}lpha_k A_k, \label{Eq:deriv_decomp} \mathrm{e}nd{equation} for real $\hat{a}lpha_k$ and unitary $A_k$. The derivative becomes \begin{equation} \hat{p}artial_{\mu} f = \sum_{k=1}^K \hat{a}lpha_k \left( \bra{\hat{p}si}\mathcal{G}^{\hat{d}agger} \hat{Q} A_k \ket{\hat{p}si} +\text{h.c.} \right). \label{Eq:gradient_decomp} \mathrm{e}nd{equation} With Eq.~\mathrm{e}qref{Eq:term_equivalence} we may compute the value of each term in the sum using a coherent linear combination of the unitaries $\mathcal{G}$ and~$A_k = A$, implemented by the quantum circuit in Fig.~\ref{Fig:lcu_qubits} (here and in the following we drop the subscript $k$ for readability). \begin{figure}[t] $$ \mathrm{e}nsuremath{\mathbb Q}circuit @C=1em @R=.7em { \lstick{\ket{0}} & \gate{H} &\ctrlo{1} & \ctrl{1}& \gate{H} & \meter & \cw & \rstick{c}\\ \lstick{\ket{\hat{p}si}} & \qw &\gate{\mathcal{G}} & \gate{A} & \qw & \qw & \qw & \rstick{\ket{\hat{p}si'}} \\ } $$ \caption{Quantum circuit illustrating the `linear combination of unitaries' technique \cite{childs12}. Between interfering Hadamards, two unitary circuits or gates $A$ and~$\mathcal{G}$ are applied conditioned on an ancilla. Depending on the state of the ancilla qubit, the effect is equivalent to applying a sum or difference of $A$ and~$\mathcal{G}$. \label{Fig:lcu_qubits} } \mathrm{e}nd{figure} First, we append an ancilla in state $\ket{0}$ and apply a Hadamard gate to it to obtain the bipartite state \begin{equation} \frac{1}{\sqrt{2}} \left( \ket{0} + \ket{1} \right)\otimes \ket{\hat{p}si} . 
\mathrm{e}nd{equation} Next, we apply $\mathcal{G}$ conditioned on the ancilla being in state $0$, and $A$ conditioned on the ancilla being in state $1$ (remember that both $\mathcal{G}$ and $A$ are unitary). This results in the state \begin{equation} \frac{1}{\sqrt{2}} \left( \ket{0} \mathcal{G} \ket{\hat{p}si} + \ket{1} A \ket{\hat{p}si} \right). \mathrm{e}nd{equation} Applying a second Hadamard on the ancilla we can prepare the final state \begin{equation} \frac{1}{2} \left( \ket{0} (\mathcal{G} + A) \ket{\hat{p}si} + \ket{1} (\mathcal{G} - A) \ket{\hat{p}si} \right). \mathrm{e}nd{equation} A measurement of the ancilla selects one of the two branches and results in either the state $\ket{\hat{p}si'_0} = \frac{1}{2 \sqrt{p_0}} (\mathcal{G} + A) \ket{\hat{p}si}$ with probability \begin{equation} p_0 = \frac{1}{4} \bra{\hat{p}si} (\mathcal{G} + A)^{\hat{d}agger} (\mathcal{G} + A) \ket{\hat{p}si}, \mathrm{e}nd{equation} or the state $\ket{\hat{p}si'_1} = \frac{1}{2\sqrt{p_1}} (\mathcal{G} - A) \ket{\hat{p}si}$ with probability \begin{equation} p_1 = \frac{1}{4} \bra{\hat{p}si} (\mathcal{G} - A)^{\hat{d}agger} (\mathcal{G} - A) \ket{\hat{p}si}. \mathrm{e}nd{equation} We then measure the observable $\hat{Q}$ for the final state~$\ket{\hat{p}si'_i}$, $i=0,1$. Repeating this process several times allows us to estimate $p_0$, $p_1$ and the expected values of~$\hat{Q}$ conditioned on the value of the ancilla, \begin{equation} \tilde{E}_0 = \bra{\hat{p}si'_0} \hat{Q} \ket{\hat{p}si'_0} = \frac{1}{4 p_0} \bra{\hat{p}si} (\mathcal{G} + A)^{\hat{d}agger} \hat{Q} (\mathcal{G} + A) \ket{\hat{p}si}, \mathrm{e}nd{equation} and \begin{equation} \tilde{E}_1 = \bra{\hat{p}si'_1} \hat{Q} \ket{\hat{p}si'_1} = \frac{1}{4 p_1} \bra{\hat{p}si} (\mathcal{G} - A)^{\hat{d}agger} \hat{Q} (\mathcal{G} - A) \ket{\hat{p}si}. \mathrm{e}nd{equation} Comparing with Eq.~\mathrm{e}qref{Eq:term_equivalence}, we find that we can compute the desired left-hand side and thus the individual terms in Eq.~\mathrm{e}qref{Eq:gradient_decomp} from these quantities, since \begin{equation} \bra{\hat{p}si}\mathcal{G}^{\hat{d}agger} \hat{Q} A \ket{\hat{p}si} +\text{h.c.} = 2 (p_0 \tilde{E}_0 - p_1 \tilde{E}_1) . \mathrm{e}nd{equation} Note that the measurement on the ancilla is not a typical conditional measurement with limited success probability: either result contributes to the final estimate. Overall, this approach requires that we can apply the gate $\mathcal{G}$, as well the unitaries $A_k$ from the derivative decomposition in Eq.~\mathrm{e}qref{Eq:deriv_decomp}, controlled by an ancilla. Altogether, we need to estimate $2 K$ expectation values and $2 K$ probabilities, and with Eq.~\mathrm{e}qref{Eq:decomp} $K$ can always be chosen as $2$. The decomposition of $\hat{p}artial_\mu \mathcal{G}$ into a linear combination of unitaries~$A_k$ needs to be found, but this is easy for few qubit gates and has to be done only once. Note that the idea of decomposing gates into ``\textit{classical} linear combinations of unitaries'' has been brought forward in Ref.~\cite{schuld2018circuit}, where $\hat{Q}$ had the special form of a $\sigma_z$ observable, which allowed the authors to evaluate expectations via overlaps of quantum states. Here we added the well-known strategy of \textit{coherent} linear combinations of unitaries \cite{childs12} to generalize the idea to any observable. \section{Gradients of continuous-variable circuits} We now turn to continuous-variable (CV) quantum computing architectures. 
Continuous-variable systems~\cite{weedbrook2012gaussian} differ from discrete systems in that the generators of the gates typically have infinitely many unique eigenvalues, or even a continuum of them. Despite this, we can still find a version of the parameter-shift differentiation recipe which works for Gaussian gates in CV variational circuits if the gate is only followed by Gaussian operations, and if the observable is a low-degree polynomial in the quadratures. The derivation is based on the fact that in this case the effect of a Gaussian gate, albeit commonly represented by an infinite-dimensional matrix in the Schr\"{o}dinger picture, can be captured by a finite-dimensional matrix in the Heisenberg picture. As in Sec.~\ref{sec:discrete}, the task is to compute $\hat{p}artial_{\mu} f$. In the Heisenberg picture, instead of evolving the state forward in time with the gates in the circuit, the final observable is evolved `backwards' in time with the adjoint gates. We consider observables $\hat{B}$ that are polynomials of the quadrature operators~$\hat{x}_i$,~$\hat{p}_i$ (such as $\hat{x}_1\hat{p}_1\hat{x}_2$ or $\hat{x}_1^4 \hat{p}_2^3 +2\hat{x}_1$). By linearity, it is sufficient to understand differentiation of the individual monomials. For an $n$-mode system, we introduce the infinite-dimensional vector of quadrature monomials, \begin{equation} \hat{C} \coloneqq (\mathbbm{1}, \hat{x}_1, \hat{p}_1, \hat{x}_2, \hat{p}_2, \ldots, \hat{x}_n, \hat{p}_n, \hat{x}_1^2, \hat{x}_1 \hat{p}_1, \ldots), \mathrm{e}nd{equation} sorted by their degree, in terms of which we will expand the observables. \subsection{CV gates in the Heisenberg picture} Let us consider the Heisenberg-picture action $\mathcal{G}^{\hat{d}agger} \hat{C}_j \mathcal{G}$ of a gate $\mathcal{G}$ on a monomial $\hat{C}_j \in \hat{C}$. This conjugation acts as a linear transformation $\Omega^\mathcal{G}$ on $ \hat{C}$, i.e., \begin{equation} \Omega^{\mathcal{G}}[\hat{C}_j] \coloneqq \mathcal{G}^\hat{d}agger \hat{C}_j \mathcal{G} = \sum_{i} M^{\mathcal{G}}_{ij} \hat{C}_i, \mathrm{e}nd{equation} where $M_{ij}^{\mathcal{G}} = M_{ij}^{\mathcal{G}}(\mu)$ are the elements of a real matrix $M^{\mathcal{G}}$ that depends on the gate parameter. Subsequent conjugations correspond to multiplying the matrices together: \begin{equation} \Omega^{U}[\Omega^V[\hat{C}_k]] = \Omega^{U} [V^\hat{d}agger \hat{C}_k V] = \sum_{ij} M^{U}_{ij} M^{V}_{jk} \hat{C}_i. \mathrm{e}nd{equation} Suppose now that the gate $\mathcal{G}$ is Gaussian. Conjugation by a Gaussian gate does not increase the degree of a polynomial. This means that $\mathcal{G}$ will map the subspace of the zeroth- and first-degree monomials spanned by $\hat{D} \coloneqq (\mathbbm{1}, \hat{x}_1, \hat{p}_1, \hat{x}_2, \hat{p}_2, \ldots, \hat{x}_n, \hat{p}_n)$ into itself, \begin{equation} \Omega^\mathcal{G}[\hat{D}_j] = \sum_{i=0}^{2n} M^{\mathcal{G}}_{ij} \hat{D}_i. \label{Eq:omega_gaussian} \mathrm{e}nd{equation} For observables that are higher-degree polynomials of the quadratures, we can use the fact that $\Omega^\mathcal{G}$ is a unitary conjugation, and that the higher-degree monomials can be expressed as products of the lower-degree ones in~$\hat{D}$: \begin{align} \label{eq:higher_degree_obs} \Omega^\mathcal{G}[\hat{D}_i\hat{D}_j] & = \mathcal{G}^\hat{d}agger \hat{D}_i \hat{D}_j \mathcal{G}, \\ & = \mathcal{G}^\hat{d}agger \hat{D}_i \mathcal{G} \mathcal{G}^\hat{d}agger\hat{D}_j \mathcal{G}, \\ & = \Omega^\mathcal{G}[\hat{D}_i] \Omega^\mathcal{G}[\hat{D}_j]. 
\mathrm{e}nd{align} Hence we may represent any $n$-mode Gaussian gate~$\mathcal{G}$ as a $(2n+1) \times (2n+1)$ matrix in the Heisenberg picture. We can now compute the derivatives $\hat{p}artial_{\mu} f$ using the derivatives of the matrix~$M^{\mathcal{G}}(\mu)$. It turns out that like the derivatives of the finite-dimensional gates in Sec.~\ref{sec:discrete}, $\hat{p}artial_{\mu}M^{\mathcal{G}}$ can be often decomposed into a finite linear combination of matrices from the same class as~$M^{\mathcal{G}}$. In fact, the derivatives of all gates from a universal Gaussian gate set can be decomposed to just two terms, so derivative computations in this setting have the same complexity as in the qubit case. We summarize the derivatives of important Gaussian gates in Table~\ref{Tbl:cv_gate_derivatives}. \begin{table*}[t] \hat{d}ef\hat{a}rraystretch{1.2} \begin{tabular*}{\textwidth}{l@{\mathrm{e}xtracolsep{\fill}}l@{\mathrm{e}xtracolsep{\fill}}l} Gate~$\mathcal{G}$ & Heisenberg representation~$M^{\mathcal{G}}$ & Partial derivatives of~$M^{\mathcal{G}}$\\ \hline \hline \cell{Phase rotation \\ $R(\hat{p}hi)$} & $M^R(\hat{p}hi) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\hat{p}hi & -\sin\hat{p}hi \\ 0 & \sin\hat{p}hi & \cos\hat{p}hi \mathrm{e}nd{pmatrix}$ & $\hat{p}artial_{\hat{p}hi} \; M^R(\hat{p}hi) = \frac{1}{2} \big( M^R(\hat{p}hi + \frac{\hat{p}i}{2}) - M^R(\hat{p}hi - \frac{\hat{p}i}{2}) \big)$\\ \hline \cell{Displacement \\ $D(r, \hat{p}hi)$} & $M^D(r, \hat{p}hi) = \begin{pmatrix} 1 & 0 & 0 \\ 2r \cos \hat{p}hi & 1 & 0 \\ 2r \sin \hat{p}hi & 0 & 1 \\ \mathrm{e}nd{pmatrix}$ & \cell{$\hat{p}artial_{r} M^D(r, \hat{p}hi) = \frac{1}{2s}\big( M^D(r+s,\hat{p}hi) - M^D(r-s,\hat{p}hi) \big), \; s \in \mathrm{e}nsuremath{\mathbb R}$\\ $\hat{p}artial_{\hat{p}hi} M^D(r, \hat{p}hi) = \frac{1}{2} \big( M^D(r,\hat{p}hi+\frac{\hat{p}i}{2}) - M^D(r,\hat{p}hi-\frac{\hat{p}i}{2}) \big)$ }\\ \hline \cell{Squeezing\footnote{A more general version of the squeezing gate $\tilde{S}(r,\hat{p}hi)$ also contains a parameter $\hat{p}hi$ which defines the angle of the squeezing, and $S(r) = \tilde{S}(r,0)$. This two-parameter gate can be broken down into a product of single-parameter gates: $\tilde{S}(r, \hat{p}hi) = R(\frac{\hat{p}hi}{2})S(r)R(-\frac{\hat{p}hi}{2})$.} \\ $S(r)$} & $M^S(r) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \mathrm{e}^{-r}& 0 \\ 0 & 0 & \mathrm{e}^{r} \mathrm{e}nd{pmatrix}$ & \cell{$\hat{p}artial_{r} M^S(r) = \frac{1}{2\sinh(s)} \big( M^S(r + s) - M^S(r -s) \big), \; s \in \mathrm{e}nsuremath{\mathbb R}$\\ } \\ \hline \cell{Beamsplitter\\ $B(\theta, \hat{p}hi)$} & \cell{ $M^B(\theta, \hat{p}hi) = \begin{pmatrix} 1 & 0 & 0 & 0 & 0\\ 0 & \cos\theta & 0 & -\hat{a}lpha & -\beta \\ 0 & 0 & \cos\theta & \beta & -\hat{a}lpha\\ 0 & \hat{a}lpha & -\beta & \cos\theta & 0\\ 0 & \beta & \hat{a}lpha & 0 & \cos\theta \mathrm{e}nd{pmatrix} $ } & \cell{$\hat{p}artial_{\theta} M^B(\theta, \hat{p}hi) = \frac{1}{2} \big( M^B(\theta + \frac{\hat{p}i}{2}, \hat{p}hi) - M^B(\theta - \frac{\hat{p}i}{2}, \hat{p}hi) \big)$ \\ $\hat{p}artial_{\hat{p}hi} M^B(\theta, \hat{p}hi) = \frac{1}{2} \big( M^B(\theta, \hat{p}hi+ \frac{\hat{p}i}{2}) - M^B(\theta, \hat{p}hi - \frac{\hat{p}i}{2}) \big)$\\ \\ $\hat{a}lpha = \cos\hat{p}hi\sin\theta, \;\; \beta = \sin\hat{p}hi\sin\theta$ } \\ \hline \hline \mathrm{e}nd{tabular*} \caption{Parameter shift rules for the partial derivatives of important Gaussian gates. Every Gaussian gate can be decomposed into this universal gate set. 
We use the gate definitions laid out in the Strawberry Fields documentation~\cite{killoran2018strawberry} with $\hbar=2$. All parameters are real-valued. Single-mode gates have been expanded using the set $(\mathbbm{1},\hat{x},\hat{p})$, whereas the two-mode beamsplitter has been expanded using the set $(\mathbbm{1}, \hat{x}_a, \hat{p}_a, \hat{x}_b, \hat{p}_b)$. More derivative rules can be found in the PennyLane \cite{bergholm2018pennylane} documentation (https://pennylane.readthedocs.io)} \label{Tbl:cv_gate_derivatives} \mathrm{e}nd{table*} As an example, we consider the single-mode squeezing gate with zero phase $S(r, \hat{p}hi=0)$, which is represented by \begin{equation}\label{eq:squeezinggate} M^S(r) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & e^{-r} & 0 \\ 0 & 0 & e^r \mathrm{e}nd{pmatrix}. \mathrm{e}nd{equation} Its derivative is given by \begin{equation} \hat{p}artial_{r} \; M^S(r) = \begin{pmatrix}0 & 0 & 0 \\ 0 & -e^{-r} & 0 \\ 0 & 0 & e^r \mathrm{e}nd{pmatrix}. \mathrm{e}nd{equation} The derivative itself is not a Heisenberg representation of a squeezing gate, but we can decompose it into a linear combination of such representations, namely \begin{equation} \hat{p}artial_r \; M^S(r) = \tfrac{1}{2\sinh(s)}\left(M^S(r + s) - M^S(r - s)\right), \mathrm{e}nd{equation} where $s$ is a fixed but arbitrary nonzero real number. Hence, \begin{align} \notag \hat{p}artial_{r} \left[S(r)^\hat{d}agger \hat{B}_j S(r)\right] =& \tfrac{1}{2\sinh(s)} \big(S(r + s)^{\hat{d}agger} \hat{B}_j S(r + s) \\ & -S(r - s)^{\hat{d}agger} \hat{B}_j S(r - s)\big) \mathrm{e}nd{align} for $j \in \{0,1,2\}$. \subsection{Differentiating CV circuits} Again we split the gate sequence into three pieces, $U(\theta) = V \mathcal{G}(\mu) W$. For simplicity, let us at first assume that our observable is a first-degree polynomial in the quadrature operators, and thus can be expanded as $\hat{B} = \sum_i b_i \hat{D}_i$. As shown in the previous section, for Gaussian gates the Heisenberg-picture matrix~$M$ is block-diagonal, and maps from the space spanned by $\hat{D}$ onto itself. Thus, if $\mathcal{G}$ is Gaussian and $V$~consists of Gaussian gates only, we may write \begin{align} f(\theta) &= \bra{0} U^{\hat{d}agger}(\theta) \hat{B} U(\theta) \ket{0},\\ &= \sum_{ijk} \: \bra{0} W^\hat{d}agger \hat{D}_k W \ket{0} M^\mathcal{G}_{kj}(\mu) M^V_{ji} b_i, \mathrm{e}nd{align} where $\ket{0}$ denotes the vacuum state. Now the derivative is simply \begin{align} \hat{p}artial_\mu f(\theta) &= \sum_{ijk} \: \bra{0} W^\hat{d}agger \hat{B}_k W \ket{0} (\hat{p}artial_\mu M^\mathcal{G})_{kj} M^V_{ji} b_i. \mathrm{e}nd{align} If $\hat{p}artial_\mu M^\mathcal{G}$ can be expressed as a linear combination $\sum_i \gamma_i M^\mathcal{G}(\mu+s_i)$ with $\gamma_i, s_i \in \mathbb{R}$, by linearity we may express $\hat{p}artial_\mu f$ using the same linear combination, $\hat{p}artial_\mu f = \sum_i \gamma_i f(\mu +s_i)$. This is the parameter shift rule for CV quantum computing. What about the subcircuit $W$ that appears before the gate that we differentiate? For the purposes of differentiating the gate $\mathcal{G}$, this subcircuit can be arbitrary, since the above differentiation recipe does not depend on the properties of the matrix~$M^W$. The above recipe works as long as no non-Gaussian gates are between $\mathcal{G}$ and the observable~$\hat{B}$. 
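The squeezing-gate decomposition above is simple enough to verify line by line; the following sketch (illustrative only) checks the identity $\partial_r M^S(r)=\frac{1}{2\sinh(s)}\left(M^S(r+s)-M^S(r-s)\right)$ numerically for an arbitrary nonzero shift $s$.
\begin{verbatim}
import numpy as np

def MS(r):
    # Heisenberg representation of S(r) on the basis (1, x, p)
    return np.diag([1.0, np.exp(-r), np.exp(r)])

r, s = 0.3, 0.7                                  # s arbitrary and nonzero
lhs = np.diag([0.0, -np.exp(-r), np.exp(r)])     # d/dr M^S(r)
rhs = (MS(r + s) - MS(r - s))/(2.0*np.sinh(s))
print(np.allclose(lhs, rhs))                     # True
\end{verbatim}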
With observables $\hat{B}$ that are higher-degree polynomials of the quadratures, we can use the property in Eq.~\mathrm{e}qref{eq:higher_degree_obs} to compute the derivative using the product rule: \begin{align} \hat{p}artial_\mu\left(\Omega^\mathcal{G}[\hat{B}_i\hat{B}_j] \right) & = \hat{p}artial_\mu\left(\Omega^\mathcal{G}[\hat{B}_i] \Omega^\mathcal{G}[\hat{B}_j]\right), \\ & = \hat{p}artial_\mu\Omega^\mathcal{G} [\hat{B}_i] \; \Omega^\mathcal{G}[\hat{B}_j] +\Omega^\mathcal{G}[\hat{B}_i] \; \hat{p}artial_\mu \Omega^\mathcal{G}[\hat{B}_j]. \notag \mathrm{e}nd{align} \subsection{Non-Gaussian transformations} For the above decomposition strategy to work efficiently, the subcircuit $V$ must be Gaussian. In the case that $V$ is non-Gaussian, it will generally increase the degree of the final observable, i.e., $V^\hat{d}agger \hat{B} V$ will be higher degree than $\hat{B}$. For example, the cubic phase gate $V(\gamma) = \mathrm{e}^{i \gamma \hat{x}^3}$ carries out the transformations \begin{align} V^\hat{d}agger(\gamma)\hat{x} V(\gamma) = &~\hat{x}, \\ V^\hat{d}agger(\gamma)\hat{p} V(\gamma) = &~\hat{p} + \gamma x^2. \mathrm{e}nd{align} In this case, we will have to consider a higher-dimensional subspace (tracking both the linear and the quadratic terms). If the subcircuit $V$ contains multiple non-Gaussian gates, each one can raise the degree of the observable. Thus, the matrices considered in the Heisenberg representation can become large depending on both the quantity and the character of non-Gaussian gates in the subcircuit $V$. Finding analytic derivative decompositions of circuits containing non-Gaussian gates is more challenging, but not strictly ruled out by complexity arguments. Specifically, in the case where there are only logarithmically few non-Gaussian gates, and each of those gates only raises the degree of quadrature polynomials by a bounded amount, there is still the possibility to efficiently decompose a gradient of an expectation value into a polynomial number of component expectation values. \section{Conclusion} We present several hardware-compatible strategies to evaluate the derivatives of quantum expectation values from the output of variational quantum circuits. In many cases of qubit-based quantum computing the derivatives can be computed with a simple parameter shift rule, using the variational architecture of the original quantum circuit. In all other cases it is possible to do the same by using an ancilla and a decomposition of the ``derivative of a gate''. For continuous-variable architectures we show that, as long as the parameter we differentiate with respect to feeds into a Gaussian gate that is only followed by Gaussian operations, a close relative to the parameter shift rule can be applied. We leave the case of non-Gaussian circuits as an open direction for future research. \mathrm{e}nd{document}
\begin{document} \title{Martingale property for the Scott correlated stochastic volatility model} \begin{abstract} In this paper, we study the martingale property for a Scott correlated stochastic volatility model, when the correlation coefficient between the Brownian motion driving the volatility and the one driving the asset price process is arbitrary. For this study we verify the martingale property by using the necessary and sufficient conditions given by Bernard \emph{et al.} \cite{Bernard}. Our main result is that the price process is a true and uniformly integrable martingale if and only if $\rho \in [-1,0]$, for two transformations of Brownian motion describing the dynamics of the underlying asset. \end{abstract} \begin{MC} Scott model, stochastic volatility, martingale property, local martingale.\end{MC} \begin{class}60G44; 60H30\end{class} \section{Introduction} A very popular model for option pricing was established by Black and Scholes (1973) \cite{BS} (BS model hereafter). In particular, the BS model assumes that the underlying asset price follows a geometric Brownian motion with a fixed volatility. Within the BS theory, the most direct technique constructs an equivalent martingale measure for the underlying asset process. However, the assumption of constant volatility was suspect from the beginning. Some statistical tests strongly reject the idea that the volatility process is constant. It also became clear, although this was less immediate, that the BS model was in conflict with evolving patterns in observed option pricing data. In particular, after the 1987 market crash, a persistent pattern emerged, called the \textquotedblleft smile\textquotedblright, which should not exist under the BS theory. Nevertheless, the continuous-time framework provides several alternative models specially designed to explain, at least qualitatively, this effect. Among them, we focus on the Stochastic Volatility (SV) models. These are two-dimensional diffusion processes in which one dimension describes the asset price dynamics and the second one governs the volatility evolution. Examples of stochastic volatility models are abundant: Hull and White \cite{HW}, Stein and Stein \cite{SS}, Heston \cite{HESTON}, Scott \cite{Scott}, Wiggins \cite{Wiggins}, Melino and Turnbull \cite{MT}. \\ The martingale problem has been extensively studied since Girsanov (1960), who posed the problem of deciding whether a stochastic exponential is a true martingale or not. In the context of stochastic volatility models, Bernard \emph{et al.} \cite{Bernard} have established necessary and sufficient analytic conditions to verify when a stochastic exponential of a continuous local martingale is a martingale or a uniformly integrable martingale for arbitrary correlation ($- 1\leq \rho \leq 1$). Mijatovic and Urusov \cite{MU} have obtained necessary and sufficient conditions in the case of perfect correlation ($\rho = 1$), Lions and Musiela \cite{LM} gave sufficient conditions to verify when a stochastic exponential of a continuous local martingale is a martingale or a uniformly integrable martingale, and Sin \cite{Sin}, Andersen and Piterbarg \cite{and}, and Bayraktar, Kardaras and Xing \cite{bay} provide easily verifiable conditions.\\ The Scott model assumes that the volatility process is the exponential of an Ornstein--Uhlenbeck stochastic process.
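For concreteness, the following minimal Python sketch (our own illustration, not part of the analysis; the Euler--Maruyama discretization and all parameter values are arbitrary choices) simulates one path of such a model, with the volatility given by the exponential of an Ornstein--Uhlenbeck process and a correlated Brownian driver for the price:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
alpha, m, beta, rho = 1.0, 0.5, 0.4, -0.3   # illustrative values only
T, n = 1.0, 1000
dt = T / n

S, Y = 1.0, m                                # initial price and log-volatility
for _ in range(n):
    dW = rng.normal(scale=np.sqrt(dt))       # drives the volatility factor
    dB = rng.normal(scale=np.sqrt(dt))       # independent noise
    dW1 = rho * dW + np.sqrt(1 - rho**2) * dB  # Cholesky-type correlated driver
    S += np.exp(Y) * S * dW1                 # dS = e^Y S dW^(1)
    Y += alpha * (m - Y) * dt + beta * dW    # Ornstein-Uhlenbeck log-volatility
print(S, np.exp(Y))
\end{verbatim}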
\\ The Ornstein--Uhlenbeck model is able: (i) to describe simultaneously the observed long-range memory in volatility and the short one in leverage \cite{QZ}, (ii) to provide a stationary distribution for the volatility which is consistent with data \cite{CFMN, EPM}, (iii) to show the same mean first--passage time profiles for the volatility as those of empirical daily data \cite{MP3} and finally (iv) to reproduce fairly well the realized volatility, which has some degree of predictability in future return changes \cite{EPM}.\\ Our aim in the present work is to take advantage of all this knowledge to study the martingale property for the Scott correlated stochastic volatility model. We shall use the criterion given by Bernard \emph{et al.} \cite{Bernard} in two situations. In the first one, we use the Cholesky decomposition of the Brownian motion of the stock price as a linear transformation of two independent Brownian motions. The second one consists in using transformations of Wu and Yor \cite{yor}. \\ The paper is organized as follows: in Section 2, we recall some preliminary results and the main result of \cite{Bernard}. Section 3 is devoted to the study of the martingale property of the Scott model. \section{Preliminaries} We now formally introduce the setup of this work. We start with the presentation of a general stochastic volatility model, and we introduce a canonical probability space for our processes, which we shall use to formulate the necessary and sufficient analytic conditions given by Bernard \emph{et al.} \cite{Bernard} to verify when a stochastic exponential of a continuous local martingale is a martingale or a uniformly integrable martingale for arbitrary correlation ($- 1\leq \rho \leq 1$).\\ We consider the state space $J=(\ell,r)$, $-\infty \leq \ell < r \leq \infty$. Let the stochastic exponential $Z=(Z_{t})_{t\in \lbrack 0,\infty )}$ denote the (discounted) stock price and let $Y=(Y_{t})_{t\in \lbrack 0,\infty )}$ be a $J$--valued diffusion on some filtered probability space $(\Omega,\mathcal{F},(\mathcal{F}_{t})_{t\in \lbrack 0,\infty )})$, governed by the stochastic differential equations, for all $t \in [0,\zeta)$: \begin{eqnarray} \left\{ \begin{array}{l} dZ_{t} =Z_{t}b(Y_{t})dW_{t}^{(1)}\text{, \ }Z_{0}=1 \label{sys1} \\ dY_{t} =\mu (Y_{t})dt+\sigma (Y_{t})dW_{t},\text{ \, }Y_{0}=x_{0}\in \mathbb{R} \label{Y} \\ \end{array} \right. \end{eqnarray} where $W_{t}^{(1)}$ and $W_{t}$ are standard $\mathcal{F}_{t}$--Brownian motions with $E [dW_{t}^{(1)} dW_{t}]= \rho \, dt$, and $\rho$ is the constant correlation coefficient, $-1\leq \rho \leq 1$. Denote by $\zeta $ the exit time of $Y$ from its state space, $ \zeta =\inf \{t>0:Y_{t}\notin J\}$; this means that on: \begin{itemize} \item the event $\left\{ \zeta =\infty \right\} $ the trajectories of $Y$ do not exit $J$; \item the event $\{\zeta <\infty \}$, $\lim_{t\rightarrow \zeta }Y_{t}=r$ or $\lim_{t\rightarrow \zeta }Y_{t}=\ell$, $P$--a.s.; $Y$ is defined such that it stays at its exit point, which means that $\ell$ and $r$ are absorbing boundaries. \end{itemize} \textbf{Assumption H}: Let $\mu $, $\sigma$ and $b : J\rightarrow \mathbb{R}$ be given Borel functions. Let $L_{loc}^{1}(J)$ denote the class of locally integrable functions on $J$. We say that $\mu $ and $\sigma$ satisfy: \begin{itemize} \item[$\mathbf{(A1)}$] if for all $x\in J$, $\sigma (x)\neq 0\text{ \ and }\frac{1}{\sigma^{2}(\cdot)}\text{, \ }\frac{\mu (\cdot)}{\sigma ^{2}(\cdot)} \in L_{loc}^{1}(J)$.
\end{itemize} And $b$ and $\sigma$ satisfy: \begin{itemize} \item[$\mathbf{(A2)}$] if $\frac{b^{2}(\cdot)}{\sigma ^{2}(\cdot)} \in L_{loc}^{1}(J)$. \end{itemize} Under condition $\mathbf{(A1)}$ the SDE satisfies by $Y$ defined in (\ref{Y}) has a unique solution in law that possibly exits its state space $J$, and the condition $\mathbf{(A2)}$ ensures that the stochastic integral $ \int_{0}^{t\wedge \zeta }b(Y_{s})dW_{s}^{(1)}$ is well--defined, then the process $Z$ defined in (\ref{sys1}) is a nonnegative continuous local martingale.\\ We define the space accommodating all four processes ($Y$, $Z$, $W$, $W^{(1)}$). \begin{itemize} \item Let $\Omega _{1}:=\overline{\mathcal{C}}([0,\infty ),\overline{J})$ be the space of continuous functions \\ $\omega _{1}$ $:[0,\infty )$ $\rightarrow \overline{J}$ that start inside $J$ \ and can exit, i.e. there exists $ \zeta (\omega _{1})\in [0,\infty ]$ such that $\omega _{1}(t)\in J$ \ for $ t<$ $\zeta (\omega _{1})$ and in the case $\zeta (\omega _{1})<\infty $ we have either $\omega _{1}(t)=r$ \ for $t\geq \zeta (\omega _{1})$ (hence also $\lim_{t\rightarrow \zeta (\omega _{1})}\omega _{1}(t)=r$) or $\omega _{1}(t)=\ell$ \ for $t\geq $ $\zeta (\omega _{1})$ (hence also $\lim_{t\rightarrow \zeta (\omega _{1})}\omega _{1}(t)=\ell$). \item Let $\Omega _{2}$ $:=\overline{\mathcal{C}}((0,\infty ),[0,\infty ])$ be the space of continuous functions \\ $\omega _{2}$ $:(0,\infty ) \rightarrow \lbrack 0,\infty ]$ with $\omega _{2}(0)=1$ that satisfy $\omega _{2}(t)=$ $ \omega _{2}(t\wedge T_{0}(\omega _{2})\wedge T_{\infty }(\omega _{2}))$ for all $t\geq 0$, where $T_{0}(\omega _{2})$ and $T_{\infty }(\omega _{2})$ denote the first hitting times of $0$ and $\infty $ by $\omega _{2}$. \item Let $\Omega _{3}$ $=\overline{\mathcal{C}}([0,\infty ),(-\infty,\infty ) )$ be the space of continuous functions \\ $\omega _{3}:[0,\infty )\rightarrow (-\infty,\infty ) $ \ with $\omega _{3}(0)=0$. \item Let $\Omega _{4}$ $=\overline{\mathcal{C}}([0,\infty ),(-\infty,\infty ) )$ be the space of continuous functions \\ $\omega _{4}:[0,\infty )\rightarrow (-\infty,\infty ) $ \ with $\omega _{4}(0)=0$. \end{itemize} Define the canonical process \begin{center} $(Y_{t}(\omega _{1}),Z_{t}(\omega _{2}),W_{t}(\omega _{3}),W_{t}^{(1)}(\omega _{4})):=(\omega _{1}(t),\omega _{2}(t),\omega _{3}(t),\omega _{4}(t))$ \end{center} for all $t \geq 0$, and let $(\mathcal{F}_{t})_{t\geq 0}$ denote the filtration generated by the canonical process and satisfying the usual conditions, and $\sigma$--field is $\mathcal{F}=\bigvee _{t\in [0, \infty)}\mathcal{F}_{t}$. \\ Now, the processes are defined in this filtered space $(\Omega,\mathcal{F},(\mathcal{F}_{t})_{t\geq 0})$, let $\mathbb{P}$ be the probability measure induced by the canonical process on the space $(\Omega,\mathcal{F})$. \begin{proposition}(Change of measure for continuous local martingales)\label{P1} Consider the space $\left( \Omega,\mathcal{F},(\mathcal{F})_{t\geq 0}\right) $, with the process $Z$ defined in (\ref{sys1}) and suppose that the \textbf{Assumption H} is fulfilled. Then \begin{itemize} \item[1.] 
There exists a unique probability measure $\mathbb{Q}$ on the same space such that, for any bounded stopping time $\tau $ and for all non-negative $\mathcal{F} _{\tau }$--measurable random variables $S$, \begin{eqnarray}\label{eq7} E_{\mathbb{Q}}\left[ \frac{1}{Z_{\tau }}S\mathbf{1}_{\left\lbrace 0<Z_{\tau }<\infty \right\rbrace } \right] =E_{\mathbb{P}}\left[ S\mathbf{1}_{\left\lbrace 0<Z_{\tau}\right\rbrace }\right] \end{eqnarray} where we define $\frac{1}{Z_{\tau }}\mathbf{1}_{\left\lbrace 0<Z_{\tau }<\infty \right\rbrace }=0$ on $\{Z_{\tau }=0\}$ from the usual convention. \item[2.] Under $\mathbb{P}$, for $t\in \lbrack 0,T_{0}),$ define the continuous $\mathbb{P}$--local martingale $M_{t}$ as: \begin{eqnarray} M_{t}=\int_{0}^{t\wedge \zeta }b(Y_{s})dW_{s}^{(1)}. \end{eqnarray} Then under $\mathbb{Q}$ \, for $ t\in \lbrack 0,T_{\infty })$, \begin{eqnarray} \widetilde{M}_{t}^{\ast }:=M_{t}-\left< M\right>_{t} =\int_{0}^{t\wedge \zeta }b(Y_{s})dW_{s}^{(1)}-\int_{0}^{t\wedge \zeta }b^{2}(Y_{s})ds \end{eqnarray} is a continuous $\mathbb{Q}$--local martingale. Here $T_{0}$ and $T_{\infty }$ are defined as the first hitting times to $0$ and $\infty$ by $Z$. \item[3.] Under $\mathbb{Q}$, for $t\in \lbrack 0,T_{\infty })$ \begin{eqnarray} \frac{1}{Z_{t}} =\mathcal{E} (-\widetilde{M}_{t}^{\ast })=\exp \left\{ -\int_{0}^{t\wedge \zeta }b(Y_{s})dW_{s}^{(1)}+\frac{1}{2}\int_{0}^{t\wedge \zeta }b^{2}(Y_{s})ds\right\} \end{eqnarray} \end{itemize} \end{proposition} \begin{proof} The proof can be found in Ruf \cite{R} Theorem 2 and its proof. \end{proof} Fix an arbitrary constant $c\in J$ and introduce the scale functions $s(\cdot)$ of the SDE satisfies by $Y$ under $\mathbb{P}$, and $\widetilde{s}(\cdot)$ of the SDE satisfies by Y under $\mathbb{Q}$: \[ s(x):=\int_{c}^{x}\exp \left\{ -\int_{c}^{y}\frac{2\mu }{\sigma ^{2}} (u)du\right\} dy\text{, \ }x\in \overline{J} \] \[ \widetilde{s}(x):=\int_{c}^{x}\exp \left\{ -\int_{c}^{y}\frac{2 \widetilde{\mu} }{\sigma ^{2}} (u)du\right\} dy\text{, \ }x\in \overline{J} \] And introduce the following test functions for $x \in \overline{J}$, with a constant $c \in J$. \begin{eqnarray*} \upsilon (x)= 2 \int_{c}^{x} \frac{\left( s(x)-s(y)\right)}{s^{\prime }(y)\sigma ^{2}(y)}dy, \; \; \upsilon _{b}(x)= 2 \int_{c}^{x} \frac{ \left( s(x)-s(y)\right) b^{2}(y)}{s^{\prime }(y)\sigma ^{2}(y)}dy \end{eqnarray*} \begin{eqnarray*} \widetilde{\upsilon } (x)= 2 \int_{c}^{x} \frac{\left( \widetilde{s}(x)-\widetilde{s}(y)\right)}{\widetilde{s}^{\prime }(y)\sigma ^{2}(y)}dy, \; \; \widetilde{\upsilon _{b}}(x)= 2 \int_{c}^{x} \frac{ \left( \widetilde{s}(x)-\widetilde{s}(y)\right) b^{2}(y)}{\widetilde{s}^{\prime }(y)\sigma ^{2}(y)}dy \end{eqnarray*} Consider the stochastic exponential $Z$ defined in (\ref{sys1}). The following proposition provides the necessary and sufficient condition for $Z_{T}$ to be a $\mathbb{P}$--martingale for all $T \in [0, \infty)$, when $-1\leq \rho \leq 1$. The proofs of the following propositions can be found in \cite{Bernard} (Propositions 4.1, 4.2, 4.3 and 4.4, p. 
18--19) \begin{proposition}\label{P2} If \textbf{Assumption H} is satisfied, then for all $T \in [0,\infty)$, $\mathbb{E}^{\mathbb{P}}(Z_{T})=1$ if and only if at least one of the conditions $(A)$--$(D)$ below is satisfied: \begin{itemize} \item[(A)] $\widetilde{\upsilon} (\ell)$ $=\widetilde{\upsilon} (r)$ $=\infty $, \item[(B)] $\widetilde{\upsilon} _{b}(r)$ $<\infty $ and $\widetilde{\upsilon} (r)$ $=\infty $, \item[(C)] $\widetilde{\upsilon} _{b}(\ell)$ $<\infty $ and $\widetilde{\upsilon} (r)=\infty $, \item[(D)] $\widetilde{\upsilon} _{b}(r)$ $<\infty $ and $\widetilde{\upsilon} _{b}(\ell)$ $<\infty $. \end{itemize} \end{proposition} We have the following necessary and sufficient condition for $Z$ to be a uniformly integrable $\mathbb{P}$--martingale on $[0, \infty)$, when $ -1 \leq \rho \leq 1$. \begin{proposition}\label{P3} If \textbf{Assumption H} is satisfied, then $\mathbb{E}^{\mathbb{P}}( Z_{\infty })= 1$ if and only if at least one of the conditions $(A')$--$(D')$ below is satisfied: \begin{itemize} \item[(A')] $b = 0$ a.e. on $J$ with respect to the Lebesgue measure, \item[(B')] $\widetilde{\upsilon} _{b}(r)<\infty $ and $\widetilde{s}(\ell)=-\infty $, \item[(C')] $\widetilde{\upsilon} _{b}(\ell)<\infty $ and $\widetilde{s}(r)=\infty $, \item[(D')] $\widetilde{\upsilon} _{b}(r)<\infty $ and $\widetilde{\upsilon} _{b}(\ell)<\infty $. \end{itemize} \end{proposition} \begin{proposition}\label{P4} If \textbf{Assumption H} is satisfied, then for all $T \in [0,\infty)$, $Z_{T} > 0$ $\mathbb{P}$--a.s. if and only if at least one of the conditions 1.--4. below is satisfied: \begin{itemize} \item[1.] $\upsilon (\ell)$ $=\upsilon (r)$ $=\infty $, \item[2.] $\upsilon _{b}(r)$ $<\infty $ and $\upsilon (r)$ $=\infty $, \item[3.] $\upsilon _{b}(\ell)$ $<\infty $ and $\upsilon (r)=\infty $, \item[4.] $\upsilon _{b}(r)$ $<\infty $ and $\upsilon _{b}(\ell)$ $<\infty $. \end{itemize} \end{proposition} \begin{proposition}\label{P5} If \textbf{Assumption H} is satisfied, and let $Y$ be a (possibly explosive) solution of the SDE (\ref{Y}) under $\mathbb{P}$, with $Z$ defined in (\ref{sys1}), then $Z_{\infty} > 0$, $\mathbb{P}$--a.s. if and only if at least one of the conditions 1.--4. below is satisfied: \begin{itemize} \item[1.] $b = 0$ a.e. on $J$ with respect to the Lebesgue measure, \item[2.] $\upsilon _{b}(r)<\infty $ and $s(\ell)=-\infty $, \item[3.] $\upsilon _{b}(\ell)<\infty $ and $s(r)=\infty $, \item[4.] $\upsilon _{b}(r)<\infty $ and $\upsilon _{b}(\ell)<\infty $. \end{itemize} \end{proposition} \section{Main results} In this section, we apply the results of Bernard \emph{et al.} \cite{Bernard} to the study of martingale properties of (discounted) stock prices in Scott correlated stochastic volatility model \cite{Scott} in two cases, the first one by using the Cholesky decomposition, and the second one by using a transformation given by Wu and Yor \cite{yor}. \subsection{Cholesky decomposition} Let $(\Omega,\mathcal{F},\mathbb{P})$ be a complete probability space and let $(W_t)_{t \geq 0}$ be a standard Brownian motion with respect to the filtration ($\mathcal{F})_{t \geq 0}$. Let $(B_t)_{t \geq 0}$ be another standard Brownian motion on the same probability space which is independent of $(W_t)_{ t \geq 0 }$. \begin{proposition}{(The Cholesky decomposition)} The linear transformation $T^{\rho}$ for $ \rho \in [-1,1]$, defined by \begin{eqnarray*} T^{\rho}_t = \rho W_t -\sqrt{1 - \rho ^2} B_t, \end{eqnarray*} defines a new Brownian motion $(\Omega,\mathcal{F},\mathbb{P})$. 
\end{proposition} On a filtered probability space $\left( \Omega,\mathcal{F},(\mathcal{F}_{t})_{t\in \left[0,\infty \right) }\right) $, we consider the following risk--neutral Scott model for the actualized asset price $S_{t}$: \begin{eqnarray}\label{SYSC} \left\{ \begin{array}{l} dS_{t}=\sigma _{t}S_{t} \mathbf{1}_{[0,\zeta)} (t) dT_{t}^{\rho} \\ \sigma_{t}=f(Y_{t})=e^{Y_{t}} \\ dY_{t} = \alpha (m -Y_{t})\mathbf{1}_{[0,\zeta)} (t) dt+\beta \mathbf{1}_{[0,\zeta)} (t) dW_{t} \end{array} \right. \end{eqnarray} where $E \left[dT_{t}^{\rho} dW_{t} \right] = \rho \, dt$, and $-1\leq \rho \leq 1$, $\alpha > 0$, $ m > 0$, $\beta > 0$. The natural state space for $Y$ is $J = (\ell,r) = (0,+\infty)$. $\zeta$ is the possible exit time of the process $Y$ from $J$. The Scott model belongs to the class of general stochastic volatility models considered in (\ref{sys1}) and (\ref{Y}), with $\mu (x) =\alpha (m-x)$, $\sigma (x) = \beta$ and $b(x) = e^{x} $. \\ Since \begin{eqnarray*} &&\forall \; x\in J, \; \; \; \sigma (x) \neq 0, \; \; \; \frac{1}{ \sigma ^{2}(x)}=\frac{1}{\beta ^{2}}\in L_{loc}^{1}(J), \\ &&\frac{\mu (x)}{\sigma ^{2}(x)} =\frac{\alpha (m-x)}{\beta ^{2}} \in L_{loc}^{1}(J), \; \frac{b^{2}(x)}{\sigma ^{2}(x)}=\frac{ e^{2x}}{\beta ^{2}}\in L_{loc}^{1}(J), \end{eqnarray*} \textbf{Assumption H} is satisfied. From Proposition \ref{P1}, there exists a probability measure $\mathbb{Q}$ which is absolutely continuous with respect to $\mathbb{P}$. \begin{lemma}\label{L7} If \textbf{Assumption H} is satisfied, then $\zeta \leq T_{\infty}$, $\mathbb{P}$--a.s. and $\mathbb{Q}$--a.s., where $ T_{\infty}$ is the first hitting time of $\infty$ by $Z$. \end{lemma} \begin{proof} The proof can be found in Lemma 2.4, p.~9, of \cite{Bernard}. \end{proof} \begin{proposition} Under $\mathbb{Q}$, if \textbf{Assumption H} is satisfied, the diffusion $Y$ satisfies the following SDE up to $\zeta $ \begin{eqnarray}\label{YQC} dY_{t}&=&\left( \mu (Y_{t})+ \rho b(Y_{t})\sigma (Y_{t})\right) \mathbf{1}_{t\in \lbrack 0,\zeta ]}dt+\sigma (Y_{t})\mathbf{1}_{t\in \lbrack 0,\zeta ]}d\widetilde{W_{t}},\\ Y_{0} &=&x_{0}\nonumber \end{eqnarray} where $\widetilde{W}$ is a standard $\mathbb{Q}$--Brownian motion. \end{proposition} \begin{proof} Denote by $R_n$ the first hitting time of the level $n$ by $S$ and set $\tau _n = R_n \wedge n$ for all $n \in \mathbb{N}$. Define $\zeta _n = \zeta \wedge \tau _n$, and consider the process $\widetilde{W}$ up to $\zeta _n $. Since $ \mathcal{F}_{\zeta _n } \subset \mathcal{F}_{\tau _n }$, it follows from Proposition \ref{P1} that $\mathbb{Q}$ restricted to $\mathcal{F}_{\zeta _n }$ is absolutely continuous with respect to $\mathbb{P}$ restricted to $\mathcal{F}_{\zeta _n }$ for $n \in \mathbb{N}$. Then, by the Girsanov Theorem, \begin{eqnarray*} \widetilde{W}_t &:=& W_t - \left< W_\cdot, \int_{0}^{\cdot} b(Y_{s})dT^{\rho}_s \right>_t \\ &=& W_t - \left< W_{\cdot}, \rho \int_{0}^{\cdot} b(Y_{s}) dW_s \right>_t + \left< W_\cdot, \sqrt{1- \rho ^2}\int_{0}^{\cdot} b(Y_{s})dB_{s} \right>_t \\ &=& W_t - \rho \int_{0}^{t} b(Y_{s}) ds \end{eqnarray*} is a $\mathbb{Q}$--Brownian motion for $t \in [0, \zeta _n )$ and $n \in \mathbb{N}$.\\ From monotone convergence, $\mathbb{Q}(\lim _{n\rightarrow \infty} \tau _n = T_{\infty})=1$ and $\mathbb{Q}(\lim _{n\rightarrow \infty} \zeta _n = \zeta \wedge T_{\infty})=1$ hold.
From Lemma \ref{L7}, $\mathbb{Q}(\lim _{n\rightarrow \infty} \zeta _n = \zeta) = 1$, Thus $Y$ is governed by the following SDE under $\mathbb{Q}$ for $t \in [0, \zeta)$ \begin{eqnarray*} dY_{t}&=& \mu( Y_{t})dt + \sigma (Y_{t}) \left( d\widetilde{W}_t +\rho b(Y_{t}) dt\right) \\ &=&\left( \mu (Y_{t})+ \rho b(Y_{t})\sigma (Y_{t})\right)dt+\sigma (Y_{t})d\widetilde{W_{t}} \end{eqnarray*} \end{proof} For a constant $c \in J$, we calculate the scale functions of the SDE (\ref{SYSC}) and SDE (\ref{YQC}) for $x\in J $: \begin{eqnarray*} s(x) &=&\int_{c}^{x}\exp \left( -\int_{c}^{y}\frac{2\mu }{\sigma ^{2}} (z)dz\right) dy\\ &=&\int_{c}^{x}\exp \left( -\int_{c}^{y}\frac{2\alpha (m-z)}{\beta ^{2}} dz\right) dy \\ &=&\exp \left( -\frac{\alpha }{\beta ^{2}}(c-m)^{2}\right) \int_{c}^{x}\exp \left( \frac{\alpha }{\beta ^{2}}(y-m)^{2}\right) dy \\ &=&A_{1}\int_{c}^{x}\exp\left(\frac{\alpha }{\beta ^{2}}(y-m)^{2}\right)dy, \end{eqnarray*} and \begin{eqnarray*} \widetilde{s}(x) &=&\int_{c}^{x}\exp \left( -\int_{c}^{y}\frac{2\widetilde{ \mu }}{\widetilde{\sigma }^{2}}(z)dz\right) dy\text{ \ \ ; \ }x\in J \\ &=&\int_{c}^{x}\exp \left( -\int_{c}^{y}\frac{2\alpha (m-z+\frac{\rho \beta }{\alpha }e^{z})}{\beta ^{2}}dz\right) dy \\ &=&\exp \left( -\frac{\alpha }{\beta ^{2}}(c-m)^{2}+\frac{2\rho }{\beta } e^{c}\right) \int_{c}^{x}\exp \left( \frac{\alpha }{\beta ^{2}} (y-m)^{2}\right) \exp \left( -\frac{2\rho }{\beta }e^{y}\right) dy \\ &=&A_{2}\int_{c}^{x}e^{\frac{\alpha }{\beta ^{2}}(y-m)^{2}}e^{-\frac{2\rho }{ \beta }e^{y}}dy, \end{eqnarray*} where $A_{1}=\exp( -\frac{\alpha }{\beta ^{2}}(c-m)^{2})$ and $A_{2}=\exp( -\frac{\alpha }{\beta ^{2}}(c-m)^{2}+\frac{2\rho }{\beta }e^{c})$.\\ Under $\mathbb{P}$ and $\mathbb{Q}$, we calculate the test functions for $x \in J$: \begin{eqnarray*} \upsilon (x) =\int_{c}^{x}\left( s(x)-s(y)\right) \frac{2}{s^{\prime }(y)\sigma ^{2}(y)}dy =\frac{2}{\beta ^{2}}\int_{c}^{x}\frac{\left( \int_{y}^{x}e^{\frac{\alpha }{\beta ^{2}}(z-m)^{2}}dz\right) }{e^{\frac{\alpha }{\beta ^{2}}(y-m)^{2}}}dy \end{eqnarray*} \begin{eqnarray*} \upsilon_{b} (x) =\int_{c}^{x}\left( s(x)-s(y)\right) \frac{2b^{2}(y)}{ s^{\prime }(y)\sigma ^{2}(y)}dy =\frac{2}{\beta ^{2}}\int_{c}^{x}\frac{\left( \int_{y}^{x}e^{\frac{\alpha }{\beta ^{2}}(z-m)^{2}}dz\right) e^{2y} }{e^{\frac{\alpha }{\beta ^{2}}(y-m)^{2}}} dy \end{eqnarray*} \begin{eqnarray*} \widetilde{\upsilon }(x) =\int_{c}^{x}\left( \widetilde{s}(x)-\widetilde{s} (y)\right) \frac{2}{\widetilde{s}^{\prime }(y)\sigma ^{2}(y)}dy =\frac{2}{\beta ^{2}}\int_{c}^{x}\frac{\left( \int_{y}^{x}e^{\frac{\alpha }{\beta ^{2}}(z-m)^{2}}e^{-\frac{2\rho }{\beta }e^{z}}dz\right) }{e^{\frac{ \alpha }{\beta ^{2}}(y-m)^{2}}e^{-\frac{2\rho }{\beta }e^{y}}}dy \end{eqnarray*} \begin{eqnarray*} \widetilde{\upsilon }_{b}(x) &=&\int_{c}^{x}\left( \widetilde{s}(x)- \widetilde{s}(y)\right) \frac{2b^{2}(y)}{\widetilde{s}^{\prime }(y)\sigma ^{2}(y)}dy \\ &=&\frac{2}{\beta ^{2}}\int_{c}^{x}\frac{\left( \int_{y}^{x}e^{\frac{\alpha }{\beta ^{2}}(z-m)^{2}}e^{-\frac{2\rho }{\beta }e^{z}}dz\right)e^{2y} }{e^{\frac{ \alpha }{\beta ^{2}}(y-m)^{2}}e^{-\frac{2\rho }{\beta }e^{y}}}dy \end{eqnarray*} By using the Cholesky decomposition, we have the followings results, \begin{theorem}\label{T9} For the Scott model (\ref{SYSC}), the underlying stock price $(S_{t})_{0\leq t\leq T}$; $T\in [0, \infty)$ is a true $\mathbb{P}$--martingale\footnote{The same result is proved by B. Jourdain \cite{j}} if and only if $\rho \leq 0$. 
\end{theorem} \begin{proof} To prove this theorem, we will check that one of the conditions (A)--(D) of Proposition \ref{P2} is satisfied.\\ The details of the proof are given in Appendix \ref{Asubsec:1}. The results are summarized in the following table: \begin{table}{Summary Table}{tab:1 }\label{t1} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Case & $\widetilde{s}(0)$ & $\widetilde{s}(\infty )$ & $\widetilde{\upsilon }(0)$ & $\widetilde{\upsilon }(\infty )$ & $\widetilde{\upsilon }_{b}(0)$ & $ \widetilde{\upsilon }_{b}(\infty )$ \\ \hline $\rho \leq 0$ & $>-\infty $ & $+\infty $ & $<+\infty $ & $+\infty $ & $<+\infty $ & $+\infty $ \\ \hline $\rho >0$ & $>-\infty $ & $<+\infty $ & $<+\infty $ & $<+\infty $ & $<+\infty $ & $+\infty $\\ \hline \end{tabular} \end{table} For $\rho \leq 0$, condition (C) of Proposition \ref{P2} is fulfilled, whereas for $\rho > 0$ none of the conditions (A)--(D) holds; we conclude that $(S_{t})_{0 \leq t\leq T}$ is a true $\mathbb{P}$--martingale if and only if $\rho \leq 0$. \end{proof} \begin{theorem}\label{T10} For the Scott model (\ref{SYSC}), the underlying stock price $(S_{t})_{0\leq t\leq T}$; $T\in [0, \infty)$ is a uniformly integrable $\mathbb{P}$--martingale if and only if $\rho \leq 0$. \end{theorem} \begin{proof} Similarly to the proof of Theorem \ref{T9}, from Table \ref{t1} we have $\widetilde{\upsilon} _{b}(0)<\infty $ and $\widetilde{s}(\infty)=\infty $ for all $\rho \leq 0$, which is condition (C') of Proposition \ref{P3}, whereas for $\rho > 0$ none of the conditions (A')--(D') holds. We deduce that $(S_{t})_{0\leq t\leq T}$ is a uniformly integrable $\mathbb{P}$--martingale if and only if $\rho \leq 0$. \end{proof} \begin{theorem} For the Scott model (\ref{SYSC}), we have for all $\rho \in [-1,1]$: \begin{equation*} \mathbb{P}(S_{T}>0)=1, \; \; \text{for all}\; T\in [0, \infty) . \end{equation*} \end{theorem} \begin{proof} We prove that at least one of the conditions 1.--4. of Proposition \ref{P4} is satisfied.\\ The details of the proof are given in Appendix \ref{Asubsec:2}. The results are summarized in the following table: \begin{table}{Summary Table}{tab: 2}\label{t2} \begin{tabular}{|c|c|c|c|c|c|} \hline $s(0)$ & $s(\infty )$ & $\upsilon(0)$ & $\upsilon(\infty )$ & $\upsilon_{b}(0)$ & $ \upsilon_{b}(\infty )$ \\ \hline $>-\infty $ & $+\infty $ & $<+\infty $ & $+\infty $ & $<+\infty $ & $+\infty $ \\ \hline \end{tabular} \end{table} Since condition 3. of Proposition \ref{P4} is satisfied, $\mathbb{P}(S_{T}>0)=1$ for all $T\in [0, \infty)$. \end{proof} \begin{theorem} For the Scott model (\ref{SYSC}), we have for all $\rho \in [-1,1]$: \begin{equation*} \mathbb{P}(S_{\infty}>0)<1 . \end{equation*} \end{theorem} \begin{proof} From Table \ref{t2}, we have $\upsilon _{b}(0)<\infty $ and $s(\infty)=\infty $. Thus condition 3. of Proposition \ref{P5} is satisfied, hence $\mathbb{P}(S_{\infty}>0)<1$. \end{proof} \subsection{Transformations of Wu and Yor} Now we shall use a linear transformation of two independent Brownian motions given by Wu and Yor \cite{yor}. \begin{proposition}{$($Theorem 2.1 \cite{yor}$)$} Let $(\Omega,\mathcal{F},\mathbb{P})$ be a complete probability space, and let $(W_t)_{t \geq 0}$ and $(B_t)_{t \geq 0}$ be two independent Brownian motions with respect to $(\mathcal{F}_t)_{t \geq 0}$. We consider the transformation $T^{\rho}$ for $ \rho \in [0,1]$, defined by \begin{eqnarray*} T^{\rho}_t = W_t - \int ^t_0 \left( \frac{1-\rho}{s} W_s + \frac{\sqrt{\rho - \rho ^2}}{s} B_s \right) ds \end{eqnarray*} Then $T^{\rho}$ is a new Brownian motion.
\end{proposition} With this new transformation $T^{\rho}$, we consider the following risk--neutral Scott model for the discounted assets price process $S_{t}$: \begin{eqnarray}\label{SYSWY} \left\{ \begin{array}{l} dS_{t}=\sigma _{t}S_{t} \mathbf{1}_{[0,\zeta)}(t) dT_{t}^{\rho} \\ \sigma_{t}=f(Y_{t})=e^{Y_{t}} \\ dY_{t}=\alpha (m -Y_{t})\mathbf{1}_{[0,\zeta)}(t) dt+\beta \mathbf{1}_{[0,\zeta)}(t) dW_{t} \end{array} \right. \end{eqnarray} with $E \left[dT_{t}^{\rho} dW_{t} \right] = \rho dt$ and $ 0 \leq \rho \leq 1$, $\alpha > 0$, $ m > 0$, $\beta > 0$. The natural state space for $Y$ is $J = (\ell,r) = (0,+\infty)$. $\zeta$ is the possible exit time of the process $Y$ from its state space $J$. \begin{proposition} If \textbf{Assumption H} is satisfied under $\mathbb{Q}$, then the diffusion $Y$ satisfies the following SDE up to $\zeta $ \begin{eqnarray}\label{YQWY} dY_{t}&=&\left( \mu (Y_{t})+ b(Y_{t})\sigma (Y_{t})\right) \mathbf{1}_{ \lbrack 0,\zeta ]}(t)dt+\sigma (Y_{t}) \mathbf{1}_{ \lbrack 0,\zeta ]}(t)d\widetilde{W_{t}}\\ Y_{0} &=&x_{0} \nonumber \end{eqnarray} where $\widetilde{W}$ is a standard $\mathbb{Q}$--Brownian motion. \end{proposition} \begin{proof} Denote $R_n$ as the first hitting time of $S$ to the level $n$, and set $\tau _n = R_n \wedge n$ for all $n \in \mathbb{N}$. Define $\zeta _n = \zeta \wedge \tau _n$, and consider the process $\widetilde{W}$ up to $\zeta _n $. Since $ \mathcal{F}_{\zeta _n } \subset \mathcal{F}_{\tau _n }$, it follows from Proposition \ref{P1} that $\mathbb{Q}$ restricted to $\mathcal{F}_{\zeta _n }$ is absolutely continuous with respect to $\mathbb{P}$ restricted to $\mathcal{F}_{\zeta _n }$ for $n \in \mathbb{N}$. Then from Girsanov Theorem \begin{eqnarray*} \widetilde{W}_t &:=& W_t - \left< W_\cdot, \int_{0}^{\cdot} b(Y_{s})dT^{\rho}_s \right>_t \\ &=& W_t - \left< W_\cdot, \int_{0}^{\cdot} b(Y_{s}) dW_s \right>_t + \left< W_\cdot, \int_{0}^{\cdot} (1- \rho ) \frac{b(Y_{s})}{s} W_{s} ds \right>_t \\ & & + \left< W_\cdot, \int_{0}^{\cdot} \sqrt{\rho - \rho ^2} \frac{b(Y_{s})}{s} B_{s} ds \right>_t \\ &=& W_t - \int_{0}^{t} b(Y_{s}) ds \end{eqnarray*} is $\mathbb{Q}$--Brownian motion for $t \in [0, \zeta _n )$ and $n \in \mathbb{N}$.\\ We have $\mathbb{Q}(\lim _{n\rightarrow \infty} \zeta _n = \zeta) = 1$, Thus $Y$ is governed by the following SDE under $\mathbb{Q}$ for $t \in [0, \zeta)$ \begin{eqnarray*} dY_{t}&=& \mu( Y_{t})dt + \sigma (Y_{t}) \left( d\widetilde{W}_t + b(Y_{t}) dt\right) \\ &=&\left( \mu (Y_{t})+ b(Y_{t})\sigma (Y_{t})\right)dt+\sigma (Y_{t})d\widetilde{W_{t}} \end{eqnarray*} \end{proof} For a constant $c \in J$, we calculate the scale functions of the SDE (\ref{SYSWY}) and SDE (\ref{YQWY}), for any $x\in J$: \begin{eqnarray*} s(x) &=&\int_{c}^{x}\exp \left( -\int_{c}^{y}\frac{2\mu }{\sigma ^{2}} (z)dz\right) dy \\ &=&\int_{c}^{x}\exp \left( -\int_{c}^{y}\frac{2\alpha (m-z)}{\beta ^{2}} dz\right) dy \\ &=&A_{1}\int_{c}^{x}e^{\frac{\alpha }{\beta ^{2}}(y-m)^{2}}dy, \end{eqnarray*} and \begin{eqnarray*} \widetilde{s}(x) &=&\int_{c}^{x}\exp \left( -\int_{c}^{y}\frac{2\widetilde{ \mu }}{\widetilde{\sigma }^{2}}(z)dz\right) dy \\ &=&\int_{c}^{x}\exp \left( -\int_{c}^{y}\frac{2\alpha (m-z+\frac{ \beta }{\alpha }e^{z})}{\beta ^{2}}dz\right) dy \\ &=&A_{2}\int_{c}^{x}e^{\frac{\alpha }{\beta ^{2}}(y-m)^{2}}e^{-\frac{2 }{ \beta }e^{y}}dy, \end{eqnarray*} where $A_{1}=\exp( -\frac{\alpha }{\beta ^{2}}(c-m)^{2})$ and $ A_{2}=\exp( -\frac{\alpha }{\beta ^{2}}(c-m)^{2}+\frac{2 }{\beta }e^{c})$.\\ Under $\mathbb{P}$, we calculate the test 
functions for $x \in \bar{J}$: \begin{eqnarray*} \upsilon (x)& =& \int_{c}^{x}\left( s(x)-s(y)\right) \frac{2}{s^{\prime }(y)\sigma ^{2}(y)}dy \\ &=& \frac{2}{\beta ^{2}}\int_{c}^{x}\frac{\left( \int_{y}^{x}e^{\frac{\alpha }{\beta ^{2}}(z-m)^{2}}dz\right) }{e^{\frac{\alpha }{\beta ^{2}}(y-m)^{2}}}dy \end{eqnarray*} \begin{eqnarray*} \upsilon_{b} (x) &=&\int_{c}^{x}\left( s(x)-s(y)\right) \frac{2b^{2}(y)}{ s^{\prime }(y)\sigma ^{2}(y)}dy \\ &=&\frac{2}{\beta ^{2}}\int_{c}^{x}\frac{\left( \int_{y}^{x}e^{\frac{\alpha }{\beta ^{2}}(z-m)^{2}}dz\right) e^{2y} }{e^{\frac{\alpha }{\beta ^{2}}(y-m)^{2}}} dy. \end{eqnarray*} Under $\mathbb{Q}$, we calculate the test functions for $x \in \bar{J}$: \begin{eqnarray*} \widetilde{\upsilon }(x) &=&\int_{c}^{x}\left( \widetilde{s}(x)-\widetilde{s} (y)\right) \frac{2}{\widetilde{s}^{\prime }(y)\sigma ^{2}(y)}dy \\ &=& \frac{2}{\beta ^{2}}\int_{c}^{x}\frac{\left( \int_{y}^{x}e^{\frac{\alpha }{\beta ^{2}}(z-m)^{2}}e^{-\frac{2 }{\beta }e^{z}}dz\right) }{e^{\frac{ \alpha }{\beta ^{2}}(y-m)^{2}}e^{-\frac{2 }{\beta }e^{y}}}dy \end{eqnarray*} \begin{eqnarray*} \widetilde{\upsilon }_{b}(x) &=&\int_{c}^{x}\left( \widetilde{s}(x)- \widetilde{s}(y)\right) \frac{2b^{2}(y)}{\widetilde{s}^{\prime }(y)\sigma ^{2}(y)}dy \\ &=& \frac{2}{\beta ^{2}}\int_{c}^{x}\frac{\left( \int_{y}^{x}e^{\frac{\alpha }{\beta ^{2}}(z-m)^{2}}e^{-\frac{2 }{\beta }e^{z}}dz\right)e^{2y} }{e^{\frac{ \alpha }{\beta ^{2}}(y-m)^{2}}e^{-\frac{2 }{\beta }e^{y}}}dy \end{eqnarray*} By using the transformations of Wu and Yor, we obtain the following results. \begin{theorem}\label{T15} For the Scott model (\ref{SYSWY}), the underlying stock price $(S_{t})_{0\leq t\leq T}$; $T\in [0, \infty)$ is not a true $\mathbb{P}$--martingale if and only if $\rho \geq 0$. \end{theorem} \begin{proof} To prove this theorem, we use the contrapositive of Proposition \ref{P2}: if $(\lbrace \widetilde{\upsilon }(0) < \infty \rbrace$ and $\lbrace \widetilde{\upsilon }_b (0) = \infty \rbrace)$ or $(\lbrace \widetilde{\upsilon }(\infty) < \infty \rbrace$ and $\lbrace \widetilde{\upsilon }_b (\infty) = \infty \rbrace)$, then $(S_{t})_{0\leq t\leq T}$ is not a true $\mathbb{P}$--martingale. The results are summarized in the following table: \begin{table}{Summary Table}{tab: 3}\label{t3} \begin{tabular}{|c|c|c|c|c|c|} \hline $\widetilde{s}(0)$ & $\widetilde{s}(\infty )$ & $\widetilde{\upsilon }(0)$ & $\widetilde{\upsilon }(\infty )$ & $\widetilde{\upsilon }_{b}(0)$ & $ \widetilde{\upsilon }_{b}(\infty )$ \\ \hline $>-\infty $ & $<+\infty $ & $<+\infty $ & $<+\infty $ & $<+\infty $ & $+\infty $ \\ \hline \end{tabular} \end{table} From this table, $(\lbrace \widetilde{\upsilon }(\infty) < \infty \rbrace$ and $\lbrace \widetilde{\upsilon }_b (\infty) = \infty \rbrace)$ is satisfied, hence $(S_{t})_{0\leq t\leq T}$ is not a true $\mathbb{P}$--martingale. \end{proof} \begin{theorem} For the Scott model (\ref{SYSWY}), the underlying stock price $(S_{t})_{0\leq t\leq T}$; $T\in [0, \infty)$ is not a uniformly integrable $\mathbb{P}$--martingale if and only if $\rho \geq 0$. \end{theorem} \begin{proof} We use the contrapositive of Proposition \ref{P3}: if $(\lbrace \widetilde{s}(0) > -\infty \rbrace$ and $\lbrace \widetilde{\upsilon }_b (0) = \infty \rbrace)$ or $(\lbrace \widetilde{s }(\infty) < \infty \rbrace$ and $\lbrace \widetilde{\upsilon }_b (\infty) = \infty \rbrace)$, then $(S_{t})_{0\leq t\leq T}$ is not a uniformly integrable $\mathbb{P}$--martingale.\\ From Table \ref{t3}, we have $(\lbrace \widetilde{s }(\infty) < \infty \rbrace$ and $\lbrace \widetilde{\upsilon }_b (\infty) = \infty \rbrace)$, hence $(S_{t})_{0\leq t\leq T}$ is not a uniformly integrable $\mathbb{P}$--martingale.
\end{proof} \begin{theorem} For the Scott model (\ref{SYSWY}), $\mathbb{P}(S_{T}>0)=1$ for all $T\in [0, \infty)$. \end{theorem} \begin{proof} We check that at least one of the conditions 1.--4. of Proposition \ref{P4} is satisfied.\\ We have \begin{table}{Summary Table}{tab:4}\label{t4} \begin{tabular}{|c|c|c|c|c|c|} \hline $s(0)$ & $s(\infty )$ & $\upsilon(0)$ & $\upsilon(\infty )$ & $\upsilon_{b}(0)$ & $ \upsilon_{b}(\infty )$ \\ \hline $ >-\infty $ & $+\infty $ & $<+\infty $ & $+\infty $ & $<+\infty $ & $+\infty $ \\ \hline \end{tabular} \end{table} Since condition 3. of Proposition \ref{P4} is satisfied, $\mathbb{P}(S_{T}>0)=1$ for all $T\in [0, \infty)$. \end{proof} \begin{theorem} For the Scott model (\ref{SYSWY}), $\mathbb{P}(S_{\infty}>0)<1$. \end{theorem} \begin{proof} We will show that one of the conditions 1.--4. of Proposition \ref{P5} is satisfied.\\ From Table \ref{t4}, we have $\upsilon _{b}(0)<\infty $ and $s(\infty)=\infty $. Therefore condition 3. of Proposition \ref{P5} is satisfied, hence $\mathbb{P}(S_{\infty}>0)<1$. \end{proof} \section*{Conclusion} In this paper we have proved, by using two linear transformations of two independent Brownian motions, known as the Cholesky decomposition and the Wu--Yor transformation, that the stock price process is a true and uniformly integrable martingale if and only if $\rho \in [-1,0]$ (see Theorems \ref{T9} and \ref{T15}). Therefore, in the Scott correlated stochastic volatility model, the stock price is a true martingale if and only if $\rho \in [-1,0]$. \appendix{Proof of Theorem \ref{T9}}\label{Asubsec:1} For simplicity of notation we set $f(x)=e^{\frac{\alpha }{\beta ^{2}}(x-m)^{2}-\frac{2\rho }{\beta}e^{x}}$. Under the probability measure $\mathbb{Q}$, we have a scale function: \begin{eqnarray*} \widetilde{s}(x) =A_{2}\int_{c}^{x}f(y)dy, \quad x\in J \end{eqnarray*} Let us check the conditions at $r$; recall that $r=\infty$. \begin{itemize} \item Case (1): $\rho \leq 0$ \\ We have $\widetilde{s}(\infty )=A_{2}\int_{c}^{\infty }f(y)dy$. By integration by parts, for all $x \in\, ]c,\infty[$: \begin{eqnarray*} \int_{c}^{x}f(y)dy &=&\int_{c}^{x}\frac{1}{\left( \frac{2\alpha }{\beta ^{2}} (y-m)-\frac{2\rho }{\beta }e^{y}\right) }\left( \frac{2\alpha }{\beta ^{2}} (y-m)-\frac{2\rho }{\beta }e^{y}\right) f(y)dy \\ &=&\left[ \frac{f(y)}{\left( \frac{2\alpha }{\beta ^{2}}(y-m)-\frac{2\rho }{ \beta }e^{y}\right) }\right] _{c}^{x}+\int_{c}^{x}\frac{\frac{2\alpha }{ \beta ^{2}}-\frac{2\rho }{\beta }e^{y}}{\left( \frac{2\alpha }{\beta ^{2}} (y-m)-\frac{2\rho }{\beta }e^{y}\right) ^{2}}f(y)dy.
\end{eqnarray*} We know that $$\lim_{x\rightarrow +\infty }\frac{\frac{2\alpha }{\beta ^{2}}-\frac{2\rho }{ \beta }e^{x}}{\left( \frac{2\alpha }{\beta ^{2}}(x-m)-\frac{2\rho }{\beta } e^{x}\right) ^{2}}=0.$$ Thus there exists $M>c>0$, such that for $y>M$, $$\left\vert \frac{\frac{ 2\alpha }{\beta ^{2}}-\frac{2\rho }{\beta }e^{x}}{\left( \frac{2\alpha }{ \beta ^{2}}(x-m)-\frac{2\rho }{\beta }e^{x}\right) ^{2}}\right\vert <\frac{1 }{2}.$$ Then $$\int_{c}^{x}f(y)dy\geq 2\left[ \frac{f(y)}{\left( \frac{2\alpha }{\beta ^{2}} (y-m)-\frac{2\rho }{\beta }e^{y}\right) }\right] _{c}^{x}$$ Since $ \lim_{y\rightarrow +\infty }\frac{f(y)}{\left( \frac{2\alpha }{\beta ^{2}} (y-m)-\frac{2\rho }{\beta }e^{y}\right) }=+\infty$, then $\int_{c}^{\infty }f(y)dy=+\infty $, and $\widetilde{s}(\infty )=A_{2}\int_{c}^{\infty }f(y)dy=+\infty$, therefore $\widetilde{\upsilon }(\infty )=+\infty $ and $ \widetilde{\upsilon }_{b}(\infty )=+\infty $ \item Case (2): $\rho>0$ \\ We have \begin{eqnarray*} \widetilde{s}(\infty )=A_{2}\int_{c}^{\infty }f(y)dy<+\infty. \end{eqnarray*} We shall check the finiteness of $\widetilde{v}(\infty )$, where \begin{eqnarray*} \widetilde{\upsilon }(\infty )=\frac{2}{\beta ^{2}}\int_{c}^{\infty } \frac{\int_{y}^{\infty }f(z)dz}{f(y)} dy. \end{eqnarray*} Since $\lim_{y\rightarrow \infty }e^{-y}f(y)=0$, by using L'H\^{o}pital's rule, we get \begin{eqnarray*} && \lim_{y\rightarrow +\infty } \frac{\int_{y}^{+\infty }f(z)dz}{e^{-y}f(y)}= \lim_{y\rightarrow +\infty }\frac{e^{y}}{ 1-\left( \frac{2\alpha }{\beta ^{2}}(y-m)-\frac{2\rho }{\beta }e^y\right) } =\frac{ \beta}{2 \rho} \end{eqnarray*} Thus as $y\rightarrow +\infty $ \begin{eqnarray*} \int_{y}^{+\infty }f(y)dz \sim \frac{ \beta}{2 \rho} e^{-y} f(y) \end{eqnarray*} and there exists $M>c>0$, such that for $y>M$, \begin{eqnarray}\label{maj1} \int_{y}^{+\infty }f(z)dz \leq \frac{ \beta}{\rho} e^{-y}f(y) \end{eqnarray} Taking (\ref{maj1}) into account, we obtain the following \begin{eqnarray*} \widetilde{\upsilon }(\infty) &=&\frac{2}{\beta ^{2}}\int_{c}^{\infty}\frac{\int_{y}^{\infty}f(z)dz}{f(y)} dy \\ &=&\frac{2}{\beta ^{2}}\int_{c}^{M}\frac{\int_{y}^{\infty}f(z)dz}{f(y)}dy + \frac{2}{\beta ^{2}}\int_{M }^{\infty}\frac{\int_{y}^{\infty}f(z)dz}{f(y)} dy \\ &\leq&\frac{2}{\beta ^{2}}\int_{c }^{M}\frac{\int_{y}^{\infty}f(z)dz}{f(y)} dy + \frac{2}{\beta ^{2}}\int_{M}^{\infty }\frac{\beta}{\rho} e^{-y}dy \\ &\leq&\frac{2}{\beta ^{2}}\int_{c }^{M}\frac{\int_{y}^{\infty}f(z)dz}{f(y)} dy+\frac{2 e^{- M}}{\rho \beta } <\infty \end{eqnarray*} \end{itemize} The same arguments work for $\widetilde{v}_{b}(\infty)$: \begin{eqnarray*} \widetilde{\upsilon }_{b}(\infty )=\frac{2}{\beta ^{2}}\int_{c}^{\infty } e^{2y} \frac{ \int_{y}^{\infty }f(z)dz}{f(y)}dy \end{eqnarray*} Since $\lim_{y\rightarrow \infty }e^{-y}f(y)=0$, applying L'H\^{o}pital's rule, we obtain : \begin{eqnarray*} \lim_{y\rightarrow +\infty }\frac{\int_{y}^{+\infty }f(z)dz}{e^{-y}f(y)}=\lim_{y\rightarrow +\infty }\frac{e^{y}}{1-\left(\frac{2\alpha }{\beta ^{2}}(y-m) -\frac{2\rho }{\beta}e^y\right) }=\frac{\beta}{2 \rho} \end{eqnarray*} Thus as $y\rightarrow +\infty $ \begin{eqnarray*} \int_{y}^{+\infty }f(z)dz \sim \frac{\beta}{2 \rho} e^{-y} f(y) \end{eqnarray*} and there exists $M>c>0$, such that for $y>M$, $\int_{y}^{+\infty }f(z)dz \geq \frac{ \beta}{4\rho} e^{-y}f(y)$ \begin{eqnarray*} \widetilde{\upsilon }_{b}(\infty) &=&\frac{2}{\beta ^{2}}\int_{c}^{\infty } e^{2y} \frac{ \int_{y}^{\infty }f(z)dz}{f(y)}dy\\ &=&\frac{2}{\beta ^{2}}\int_{c}^{M} e^{2y} \frac{ \int_{y}^{\infty 
}f(z)dz}{f(y)}dy+ \frac{2}{\beta ^{2}}\int_{M}^{\infty } e^{2y} \frac{ \int_{y}^{\infty }f(z)dz}{f(y)}dy\\ &>&\frac{2}{\beta ^{2}}\int_{c}^{M} e^{2y} \frac{ \int_{y}^{\infty }f(z)dz}{f(y)}dy+\frac{2}{\beta ^{2}}\int_{M}^{\infty }\frac{ \beta}{4\rho} e^{y}dy = \infty \end{eqnarray*} Then $\widetilde{\upsilon }_{b}(\infty)=\infty$. \\ To summarize, \begin{eqnarray} \widetilde{\upsilon }(\infty )=\left\{ \begin{array}{lcr} +\infty \text{ \ \ \ \ \ \ \ \ \ if }\rho \leq 0 \\ <+\infty \text{ \ \ \ \ \ \ \ if }\rho >0 \end{array} \right. \end{eqnarray} \begin{eqnarray} \widetilde{\upsilon }_{b}(\infty )=+\infty \text{ \ \ \ \ \ \ \ }\forall \; \rho \in \left[ -1,1\right] \end{eqnarray} Let us now check the conditions for $\ell$, recall $\ell=0$ \begin{eqnarray*} \widetilde{s}(0)=-A_{2}\int_{0}^{c}f(y)dy \end{eqnarray*} \begin{itemize} \item Case (1): $\rho \leq 0$ \\ We check the finiteness of $\widetilde{v}(0)$ for this case: \begin{eqnarray*} \widetilde{\upsilon }(0)=\frac{2}{\beta ^{2}}\int_{0}^{c}\frac{ \int_{0}^{y}f(z)dz}{f(y)} dy \end{eqnarray*} since $\lim_{y\rightarrow 0}y f(y)=0$, and from L'H\^{o}pital's rule we get \begin{eqnarray*} \lim_{y\rightarrow 0}\frac{\int_{0}^{y}f(z)dz}{yf(y)} =\lim_{y\rightarrow 0}\frac{1}{ 1+\left( \frac{2\alpha }{\beta ^{2}}(y-m)-\frac{2\rho }{\beta }e^{y}\right) y} =1 \end{eqnarray*} Thus as $y\rightarrow 0$ \begin{eqnarray*} \int_{0}^{y}f(z)dz\sim y f(y) \end{eqnarray*} Then there exists $0<\varepsilon <c$ such that $\forall \; 0<y<\varepsilon $, $\int_{0}^{y}f(z) dz \leq 2 y f(y) $ \begin{eqnarray*} \widetilde{\upsilon }(0) &=&\frac{2}{\beta ^{2}}\int_{0}^{c}\frac{ \int_{0}^{y}f(z)dz}{f(y)} dy\\ &=&\frac{2}{\beta ^{2}}\int_{0}^{\varepsilon}\frac{ \int_{0}^{y}f(z)dz}{f(y)} dy+ \frac{2}{\beta ^{2}}\int_{\varepsilon}^{c}\frac{ \int_{0}^{y}f(z)dz}{f(y)} dy\\ &\leq&\frac{2}{\beta ^{2}}\int_{0}^{\varepsilon }2ydy+ \frac{2}{\beta ^{2}}\int_{\varepsilon}^{c}\frac{ \int_{0}^{y}f(z)dz}{f(y)} dy\\ &\leq&\frac{2\varepsilon ^{2}}{\beta ^{2}}+\frac{2}{\beta ^{2}}\int_{\varepsilon}^{c}\frac{ \int_{0}^{y}f(z)dz}{f(y)} dy <+\infty \end{eqnarray*} The same for $\widetilde{v}_{b}(0)$: \begin{eqnarray*} \widetilde{\upsilon }_{b}(0)=\frac{2}{\beta ^{2}}\int_{0}^{c}e^{2y} \frac{ \int_{0}^{y}f(z)dz}{f(y)}dy \end{eqnarray*} we have $\lim_{y\rightarrow 0}y e^{-2y} f(y)=0$, from L'H\^{o}pital's rule: \begin{eqnarray*} \lim_{y\rightarrow 0}\frac{\int_{0}^{y}f(z) dz}{y e^{-2y} f(y)} =\lim_{y\rightarrow 0}\frac{1}{\left( 1+\left(\frac{2\alpha }{\beta ^{2}} (y-m)-\frac{2\rho }{\beta }e^{y}\right)y-2y\right) e^{-2y}} =1 \end{eqnarray*} Thus as $y\rightarrow 0$ \begin{eqnarray*} \int_{0}^{y}f(z) dz\sim y e^{-2y} f(y) \end{eqnarray*} Then there exist $0<\varepsilon <c$ \ ; \ $\forall \; 0<y<\varepsilon $, $\int_{0}^{y}f(z) dz \leq 2y e^{-2y} f(y)$ \begin{eqnarray*} \widetilde{\upsilon }_{b}(0) &=&\frac{2}{\beta ^{2}}\int_{0}^{c}e^{2y} \frac{ \int_{0}^{y}f(z)dz}{f(y)}dy \\ &=&\frac{2}{\beta ^{2}}\int_{0}^{\varepsilon}e^{2y} \frac{ \int_{0}^{y}f(z)dz}{f(y)}dy+\frac{2}{\beta ^{2}}\int_{\varepsilon}^{c}e^{2y} \frac{ \int_{0}^{y}f(z)dz}{f(y)}dy\\ &\leq &\frac{2}{\beta ^{2}}\int_{0}^{\varepsilon }2ydy+\frac{2}{\beta ^{2}}\int_{\varepsilon}^{c}e^{2y} \frac{ \int_{0}^{y}f(z)dz}{f(y)}dy \\ &\leq&\frac{2\varepsilon ^{2}}{\beta ^{2}}+\frac{2}{\beta ^{2}}\int_{\varepsilon}^{c}e^{2y} \frac{ \int_{0}^{y}f(z)dz}{f(y)}dy <+\infty \end{eqnarray*} \item Case (2): $\rho>0$ \\ We check the finiteness of $\widetilde{v}(0)$ for this case: \begin{eqnarray*} \widetilde{\upsilon }(0)=\frac{2}{\beta 
^{2}}\int_{0}^{c} \frac{ \int_{0}^{y}f(z)dz} {f(y)} dy \end{eqnarray*} Since $\lim_{y\rightarrow 0}y f(y)=0$, we apply L'H\^{o}pital's rule: \begin{eqnarray*} \lim_{y\rightarrow 0}\frac{\int_{0}^{y}f(z)dz}{y f(y)}=\lim_{y\rightarrow 0}\frac{1}{ 1+\left( \frac{2\alpha }{\beta ^{2}}(y-m)-\frac{2\rho }{\beta }e^{y}\right) y}=1. \end{eqnarray*} Thus as $y\rightarrow 0$ \begin{eqnarray*} \int_{0}^{y}f(z)dz\sim y f(y) \end{eqnarray*} Then there exists $0<\varepsilon <c$ such that $\forall \; 0<y<\varepsilon $, $\int_{0}^{y}f(y)dz\leq 2yf(y)$, therefore \begin{eqnarray*} \widetilde{\upsilon }(0) &=&\frac{2}{\beta ^{2}}\int_{0}^{c} \frac{ \int_{0}^{y}f(z)dz} {f(y)} dy \\ &=&\frac{2}{\beta ^{2}}\int_{0}^{\varepsilon} \frac{ \int_{0}^{y}f(z)dz} {f(y)} dy +\frac{2}{\beta ^{2}}\int_{\varepsilon}^{c} \frac{ \int_{0}^{y}f(z)dz} {f(y)} dy\\ &\leq &\frac{2}{\beta ^{2}}\int_{0}^{\varepsilon }2ydy+\frac{2}{\beta ^{2}}\int_{\varepsilon}^{c} \frac{ \int_{0}^{y}f(z)dz} {f(y)} dy \\ &\leq&\frac{2\varepsilon ^{2}}{\beta ^{2}}+\frac{2}{\beta ^{2}}\int_{\varepsilon}^{c} \frac{ \int_{0}^{y}f(z)dz} {f(y)} dy<+\infty \end{eqnarray*} The same for $\widetilde{v}_{b}(0)$: \begin{eqnarray*} \widetilde{\upsilon }_{b}(0)=\frac{2}{\beta ^{2}}\int_{0}^{c}e^{2y}\frac{ \int_{0}^{y}f(z)dz} {f(y)} dy \end{eqnarray*} we have $\lim_{y\rightarrow 0}y e^{2y} f(y)=0 $, from L'H\^{o}pital's rule: \begin{eqnarray*} \lim_{y\rightarrow 0}\frac{\int_{0}^{y}f(y)dz}{ye^{-2y}f(y)}=\lim_{y\rightarrow 0}\frac{1}{\left( 1+\frac{2\alpha }{\beta ^{2}} (y-m)y-2y \right)e^{2y} }=1 \end{eqnarray*} thus as $y\rightarrow 0$ \begin{eqnarray*} \int_{0}^{y}f(z)dz\sim ye^{-2y}f(y), \end{eqnarray*} then there exists $0<\varepsilon <c$ such that $\forall \; 0<y<\varepsilon $, $\int_{0}^{y}f(z)dz \leq 2ye^{-2y}f(y)$, hence \begin{eqnarray*} \widetilde{\upsilon }_{b}(0) &=&\frac{2}{\beta ^{2}}\int_{0}^{c}e^{2y}\frac{ \int_{0}^{y}f(z)dz} {f(y)} dy \\ &=&\frac{2}{\beta ^{2}}\int_{0}^{\varepsilon}e^{2y}\frac{ \int_{0}^{y}f(z)dz} {f(y)} dy + \frac{2}{\beta ^{2}}\int_{\varepsilon}^{c}e^{2y}\frac{ \int_{0}^{y}f(z)dz} {f(y)} dy \\ &\leq&\frac{2}{\beta ^{2}}\int_{0}^{\varepsilon }2ydy+\frac{2}{\beta ^{2}}\int_{\varepsilon}^{c}e^{2y}\frac{ \int_{0}^{y}f(z)dz} {f(y)} dy \\ &\leq&\frac{2\varepsilon ^{2}}{\beta ^{2}}+\frac{2}{\beta ^{2}}\int_{\varepsilon}^{c}e^{2y}\frac{ \int_{0}^{y}f(z)dz} {f(y)} dy \\ &<&\infty \end{eqnarray*} \end{itemize} To summarize, \begin{eqnarray} \widetilde{\upsilon }(0 )<+\infty \text{ \ \ \ \ \ \ \ }\forall \; \rho \in \left[ -1,1\right] \end{eqnarray} \begin{eqnarray} \widetilde{\upsilon }_{b}(0)<+\infty \text{ \ \ \ \ \ \ \ }\forall \; \rho \in \left[ -1,1\right] \end{eqnarray} \appendix{Proof of Theorem \ref{T10}}\label{Asubsec:2} For ease of notations we denote by $g(x)=e^{\frac{\alpha }{\beta ^{2}}(x-m)^{2}}$. 
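As a side remark, the following small numerical sketch (our own illustration; the values of $\alpha$, $\beta$, $m$ and $c$ are arbitrary) is consistent with the conclusions obtained below, namely that $s(\infty)=+\infty$ while $\upsilon(0)$ and $\upsilon_{b}(0)$ are finite:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

alpha, beta, m, c = 1.0, 2.0, 0.5, 1.0          # arbitrary illustrative values
g = lambda x: np.exp(alpha / beta**2 * (x - m)**2)

# int_c^X g(y) dy grows without bound as the cut-off X increases
print([quad(g, c, X)[0] for X in (3.0, 5.0, 8.0)])   # rapidly increasing

# upsilon(0) and upsilon_b(0) are finite
inner = lambda y: quad(g, 0.0, y)[0] / g(y)
ups0  = 2.0 / beta**2 * quad(inner, 0.0, c)[0]
upsb0 = 2.0 / beta**2 * quad(lambda y: np.exp(2.0 * y) * inner(y), 0.0, c)[0]
print(ups0, upsb0)                                   # both finite
\end{verbatim}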
Under $\mathbb{P}$, we have a scale function: \begin{eqnarray*} s(x)=A_{1}\int_{c}^{x}g(y)dy \end{eqnarray*} To check the conditions for $r$, recall $r=\infty$ \begin{eqnarray*} s(\infty )=A_{1}\int_{c}^{\infty }g(y)dy>A_{1}\int_{c}^{\infty }e^{\frac{\alpha }{\beta ^{2}} (y-m)}dy=A_{1}\left[ \frac{\beta ^{2}}{\alpha }e^{\frac{\alpha }{\beta ^{2}} (y-m)}\right] _{c}^{\infty }=+\infty \end{eqnarray*} Since $s(\infty )=+\infty$, then \ $\upsilon (\infty )=+\infty $ \ and $\upsilon_{b}(\infty )=+\infty $ \\ To check similar conditions for $\ell$, recall $\ell=0$ \begin{eqnarray*} s(0)=-A_{1}\int_{0}^{c}g(y)dy>-\infty \end{eqnarray*} We check the finiteness of $v(0)$ for this case: \begin{eqnarray*} \upsilon (0)=\frac{2}{\beta ^{2}}\int_{0}^{c}\frac{ \int_{0}^{y}g(z)dz}{g(y)} dy \end{eqnarray*} One has $\lim_{y\rightarrow 0}yg(y)=0$, from L'H\^{o}pital's rule: \begin{eqnarray*} \lim_{y\rightarrow 0}\frac{\int_{0}^{y}g(z)dz}{y g(y)}=\lim_{y\rightarrow 0} \frac{1}{ 1+\frac{2\alpha }{ \beta ^{2}}(y-m)y}=1 \end{eqnarray*} hence, as $y\rightarrow 0$ \begin{eqnarray*} \int_{0}^{y}g(z)dz \sim y g(y) \end{eqnarray*} hence there exists $0<\varepsilon <c$ such that $\forall \; 0<y<\varepsilon $, $\int_{0}^{y}g(z)dz \leq 2y g(y)$ \begin{eqnarray*} \upsilon (0) &=&\frac{2}{\beta ^{2}}\int_{0}^{c}\frac{ \int_{0}^{y}g(z)dz}{g(y)} dy\\ &=&\frac{2}{\beta ^{2}}\int_{0}^{\varepsilon}\frac{ \int_{0}^{y}g(z)dz}{g(y)} dy+\frac{2}{\beta ^{2}}\int_{\varepsilon}^{c}\frac{ \int_{0}^{y}g(z)dz}{g(y)} dy\\ &\leq&\frac{2}{\beta ^{2}}\int_{0}^{\varepsilon }2ydy+\frac{2}{\beta ^{2}}\int_{\varepsilon}^{c}\frac{ \int_{0}^{y}g(z)dz}{g(y)} dy \\ &\leq& \frac{2\varepsilon ^{2}}{\beta ^{2}}+\frac{2}{\beta ^{2}}\int_{\varepsilon}^{c}\frac{ \int_{0}^{y}g(z)dz}{g(y)} dy <+\infty \end{eqnarray*} The same for $v_{b}(0)$: \begin{eqnarray*} \upsilon _{b}(0)=\frac{2}{\beta ^{2}}\int_{0}^{c}e^{2y} \frac{ \int_{0}^{y}g(z)dz}{g(y)}dy \end{eqnarray*} we have $\lim_{y\rightarrow 0}y e^{-2y} g(y)=0$, by using L'H\^{o}pital's rule, we get: \begin{eqnarray*} \lim_{y\rightarrow 0}\frac{\int_{0}^{y}g(z)dz}{ye^{-2y}g(y)} =\lim_{y\rightarrow 0}\frac{1}{\left( 1+\frac{2\alpha }{\beta ^{2}}(y-m)y-2y\right)e^{-2y} }=1 \end{eqnarray*} Thus as $y\rightarrow 0$ \begin{eqnarray*} \int_{0}^{y}g(z)dz\sim ye^{-2y} g(y). \end{eqnarray*} Therefore one can choose $0<\varepsilon <c$ such that $\forall \; 0<y<\varepsilon $, $\int_{0}^{y}g(z)dz\leq2y e^{-2y} g(y) $ \begin{eqnarray*} \upsilon _{b}(0) &=&\frac{2}{\beta ^{2}}\int_{0}^{c}e^{2y} \frac{ \int_{0}^{y}g(z)dz}{g(y)}dy\\ &=&\frac{2}{\beta ^{2}}\int_{0}^{\varepsilon}e^{2y} \frac{ \int_{0}^{y}g(z)dz}{g(y)}dy + \frac{2}{\beta ^{2}}\int_{\varepsilon}^{c}e^{2y} \frac{ \int_{0}^{y}g(z)dz}{g(y)}dy\\ &\leq&\frac{2}{\beta ^{2}}\int_{0}^{\varepsilon }2ydy+\frac{2}{\beta ^{2}}\int_{\varepsilon}^{c}e^{2y} \frac{ \int_{0}^{y}g(z)dz}{g(y)}dy \\ &\leq&\frac{2\varepsilon ^{2}}{\beta ^{2}}+\frac{2}{\beta ^{2}}\int_{\varepsilon}^{c}e^{2y} \frac{ \int_{0}^{y}g(z)dz}{g(y)}dy\\ &<& +\infty \end{eqnarray*} \end{document}
\begin{document} \maketitle \begin{abstract} Let $G$ be a group. A subset $F \subset G$ is called irreducibly faithful if there exists an irreducible unitary representation $\pi$ of $G$ such that $\pi(x) \neq \mathrm{id}$ for all $x \in F \smallsetminus \{e\}$. Otherwise $F$ is called irreducibly unfaithful. Given a positive integer $n$, we say that $G$ has Property $P(n)$ if every subset of size $n$ is irreducibly faithful. Every group has $P(1)$, by a classical result of Gelfand and Raikov. Walter proved that every group has $P(2)$. It is easy to see that some groups do not have~$P(3)$. \par We provide a complete description of the irreducibly unfaithful subsets of size~$n$ in a countable group $G$ (finite or infinite) with Property $P(n-1)$: it turns out that such a subset is contained in a finite elementary abelian normal subgroup of $G$ of a particular kind. We deduce a characterization of Property~$P(n)$ purely in terms of the group structure. It follows that, if a countable group $G$ has $P(n-1)$ and does not have $P(n)$, then $n$ is the cardinality of a projective space over a finite field. \par A group $G$ has Property $Q(n)$ if, for every subset $F \subset G$ of size at most $n$, there exists an irreducible unitary representation $\pi$ of $G$ such that $\pi(x) \ne \pi(y)$ for any distinct $x, y$ in $F$. Every group has $Q(2)$. For countable groups, it is shown that Property $Q(3)$ is equivalent to $P(3)$, Property $Q(4)$ to $P(6)$, and Property $Q(5)$ to $P(9)$. For $m, n \ge 4$, the relation between Properties $P(m)$ and $Q(n)$ is closely related to a well-documented open problem in additive combinatorics. \end{abstract} \setcounter{tocdepth}{1} \tableofcontents \begin{flushright} \begin{minipage}{0.60\linewidth} \itshape Fid\`ele, infid\`ele ?\\ Qu'est-ce que \c ca fait,\\ Au fait ?\\ Puisque toujours dispose \`a couronner mon z\`ele\\ Ta beaut\'e sert de gage \`a mon plus cher souhait.\\ \upshape (Paul Verlaine, \emph{Chansons pour elle}, 1891.) \end{minipage} \end{flushright} \vskip.2cm \section{Introduction} \label{Section:intro} \subsection{Irreducibly unfaithful subsets}\label{subsec:1.a} A subset $F$ of a group $G$ is called \textbf{irreducibly faithful} if there exists an irreducible unitary representation $\pi$ of $G$ in a Hilbert space $\mathcal H$ such that $\pi(x) \ne \mathrm{id}$ for all $x \in F$ with $x \ne e$. (We denote by~$e$ the identity element of the group, and by $\mathrm{id}$ the identity operator on the space~$\mathcal H$.) Otherwise $F$ is called \textbf{irreducibly unfaithful}. For $n \ge 1$, we say that $G$ has \textbf{Property}~$\mathbf{P(n)}$ if every subset of size at most $n$ is irreducibly faithful. \par \textit{Every group has Property $P(1)$:} this is the particular case for discrete groups of a foundational result established for all locally compact groups and continuous unitary representations by Gelfand and Raikov~\cite{GeRa--43} (see also the exposition in \cite[13.6.6]{Dixm--69}, and another proof for second-countable locally compact groups in \cite[Pages 109--110]{Mack--76}). \par The following refinement of the Gelfand--Raikov Theorem is due to Walter: \textit{Every group has Property $P(2)$.} In other words, \textit{in a group, every couple is irreducibly faithful} (!). See \cite[Proposition 2]{Walt--74}, as well as \cite{Sasv--91} and \cite[1.8.7]{Sasv--95}. \par It is clear that Property $P(3)$ does not hold for all groups. 
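The smallest counterexample can also be checked by a direct computation, as in the following minimal sketch (our own illustration; it enumerates the four characters of $C_2 \times C_2$, which exhaust its irreducible unitary representations since the group is abelian, and verifies that each of them kills some non-trivial element):
\begin{verbatim}
from itertools import product

group = [(0, 0), (0, 1), (1, 0), (1, 1)]          # C2 x C2, written additively
nontrivial = [g for g in group if g != (0, 0)]

# the characters are chi_{a,b}(x, y) = (-1)^(a*x + b*y)
def chi(a, b):
    return lambda g: (-1) ** (a * g[0] + b * g[1])

for a, b in product((0, 1), repeat=2):
    rep = chi(a, b)
    killed = [g for g in nontrivial if rep(g) == 1]  # kernel minus identity
    assert killed          # every character kills some non-trivial element
# hence the set of the three non-trivial elements is irreducibly unfaithful
\end{verbatim}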
Indeed, Klein's Vierergruppe, the direct product $C_2 \times C_2$ of two copies of the group of order~$2$, does not have $P(3)$. \par The first goal of this article is to characterize groups with $P(n)$ for all $n \ge 3$. We focus on \emph{countable groups}, i.e., groups that are either finite or countably infinite. What follows can be seen as a quantitative refinement of results in \cite{BeHa--08}, quoted in Theorem \ref{thm:Gasch} below. \vskip.2cm Before stating our main result, we need the following preliminaries. Let $\mathbf k$ be a finite field of order $q$; in case $q = p$ is a prime, we write $\mathbf F_p$ instead of $\mathbf k$. For a group $G$, we denote by $\mathbf k[G]$ its group algebra over $\mathbf k$. We recall that any abelian group $U$ whose exponent is a prime $p$ carries the structure of a vector space over~$\mathbf F_p$, which is invariant under all group automorphisms of $U$. In other words, the group structure on $U$ canonically determines an $\mathbf F_p$-linear structure. In particular, an abelian normal subgroup $U$ of exponent $p$ in a group $G$ may be viewed, in a canonical way, as an $\mathbf F_p[G]$-module. Moreover, $U$ is minimal as a normal subgroup of $G$ if and only if $U$ is simple as an $\mathbf F_p[G]$-module. (We rather use $W$ instead of $U$ when such a simple module appears below, and $V$ for direct sums of particular numbers of copies of simple modules.) \par Let $G$ be a group and $U$ an $\mathbf F_pG]$-module. The \textbf{centralizer} of $U$ is the $\mathbf F_p$-algebra $$ \mathcal L_{\mathbf F_p[G]}(U) = \{\alpha \in \mathrm{End}_{\mathbf F_p}(U) \mid g.\alpha (u) = \alpha(g.u) \hskip.2cm \text{for all} \hskip.2cm g \in G, \ u \in U\} . $$ If $W$ is a simple $\mathbf F_p[G]$-module, Schur's lemma ensures that $\mathcal L_{\mathbf F_p[G]}(W)$ is a division algebra over $\mathbf F_p$ \cite[\S~4, Proposition~2]{Bo--A8}. If in addition $W$ is finite, then the algebra $\mathcal L_{\mathbf F_p[G]}(W)$ is a finite field extension of $\mathbf F_p$, by Wedderburn's Theorem \cite[\S~11, no.~1]{Bo--A8}. In this case, $W$ is a vector space over $\mathcal L_{\mathbf F_p[G]}(W)$, and the dimension of $W$ over $\mathcal L_{\mathbf F_p[G]}(W)$ is the quotient of $\dim_{\mathbf F_p}(W)$ by the degree of the extension $\mathcal L_{\mathbf F_p[G]}(W)$ of $ \mathbf F_p$. \par For example, consider a finite field extension $\mathbf k$ of $\mathbf F_p$, a positive integer $m$, the vector space $W = \mathbf k^m$, and the general linear group $\mathrm{GL}(W) = \mathrm{GL}_m(\mathbf k)$ together with its natural action on $W$. View $W$ as a vector space over $\mathbf F_p$, and as an $\mathbf F_p\mathrm{GL}(W)]$-module. Then $\mathcal L_{\mathbf F_p[\mathrm{GL}(W)]}(W)$ and $\mathbf k$ can be identified, and $\dim_{\mathbf k}(W) = m$. \par Our main result reads as follows; the proof is in Section~\ref{Section:proofof1.1}. \begin{thm} \label{thm:CharP(n)} Let $G$ be a countable group and $n$ a positive integer. The following assertions are equivalent. \begin{enumerate}[label=(\arabic*)] \item\label{1DEthm:CharP(n)} $G$ does not have $P(n)$. 
\item\label{2DEthm:CharP(n)} There exist a prime $p$, a finite normal subgroup $V$ in $G$ which is an elementary abelian $p$-group, and a finite simple $\mathbf F_p[G]$-module $W$, such that the following properties hold, where $\mathbf k$ denotes the centralizer field $\mathcal L_{\mathbf F_p[G]}(W)$, $m = \dim_{\mathbf k}(W)$, and $q = \vert \mathbf k \vert$: \begin{enumerate}[label=(\roman*)] \item $V$ is isomorphic to the direct sum of $m+1$ copies of $W$, as an $\mathbf F_p[G]$-module; \item $q$ is a power of $p$ and $q^m + q^{m-1} + \dots + q + 1 \le n$. \end{enumerate} \end{enumerate} \end{thm} Notice that the inequality $q^m + q^{m-1} + \dots + q + 1 \ge 3$ always holds, since $q \ge p \ge 2$ and $m \ge 1$. Therefore, in the particular cases of $n = 1$ and $n = 2$, Theorem \ref{thm:CharP(n)} recovers for countable groups the results of Gelfand--Raikov and Walter quoted above. In the particular case of $n = 3$, the theorem shows that the only obstruction to $P(3)$ can be expressed in terms of Klein's Vierergruppe: \begin{cor} \label{cor:P(3)} A countable group has $P(3)$ if and only if its centre does not contain any subgroup isomorphic to $C_2 \times C_2$. \end{cor} \begin{proof} For $n=3$, we have $p = q = 2$ and $m = 1$ in \ref{2DEthm:CharP(n)} of Theorem \ref{thm:CharP(n)}. Hence $V = W \oplus W$ is an $\mathbf F_2[G]$-module of dimension $2$ over $\mathbf F_2$. Moreover, the action of $G$ is trivial on $W$, therefore also on $V$, and this means that, as a normal subgroup of $G$, the group $V$ is central. \end{proof} \begin{cor} \label{finitenormalcentral} Let $n$ be a positive integer and $G$ a countable group. Assume that every minimal finite abelian normal subgroup of $G$ is central. \par Then $G$ does not have $P(n)$ if and only if $G$ contains a central subgroup isomorphic to $C_p \times C_p$ for some prime $p \le n-1$. \end{cor} In the following proof, and later, we denote by $\mathbf T$ the group of complex numbers of modulus one. Recall that, for any irreducible unitary representation $\pi$ of a group $G$ with centre $Z$ on a Hilbert space $\mathcal H$, there exists by Schur's Lemma a unitary character $\chi \hskip.1cm \colon Z \to \mathbf T$ such that $\pi(g) = \chi(g) \mathrm{id}$ for every $g \in Z$. \begin{proof} Suppose that $G$ does not have Property $P(n)$. Let $p$ be a prime and $V, W$ as in Assertion \ref{2DEthm:CharP(n)} of Theorem \ref{thm:CharP(n)}. Since the action by conjugation of $G$ on minimal finite abelian normal subgroups is trivial by hypothesis, the $\mathbf F_p[G]$-module $W$, which is simple, is of dimension $1$ over $\mathbf F_p$. With the notation of Theorem \ref{thm:CharP(n)}, this implies that $m = 1$ and $\mathbf k = \mathbf F_p$. It follows that $V = W \oplus W \cong C_p \times C_p$. \par Conversely, if $G$ contains a central subgroup $V$ isomorphic to $C_p \times C_p$ for some prime $p \le n-1$, consider a subset $F$ of $G$ of size~$p+1$ containing a generator of each of the $p+1$ non-trivial cyclic subgroups of $V$. As recalled just before the present proof, every irreducible unitary representation of $G$ provides a unitary character $\chi \hskip.1cm \colon C_p \times C_p \to \mathbf T$. We have $F \cap \mathrm{Ker} \chi \not \subset \{e\}$, hence $F$ is irreducibly unfaithful. \end{proof} \begin{exe} \label{examplenormalcentral} There are several classes of groups which have the property that ``every minimal finite abelian normal subgroup is central'': \vskip.2cm (1) Torsion-free groups have the property.
\vskip.2cm (2) Icc-groups, that is, infinite groups in which all conjugacy classes distinct from $\{e\}$ are infinite, have the property. \vskip.2cm (3) A group $G$ without non-trivial finite quotient has the property. Indeed, if $N$ is a finite normal subgroup of $G$, the action by conjugation of $G$ on $N$ provides a homomorphism from $G$ to the group of automorphisms of $N$; since this homomorphism is trivial by hypothesis, $N$ is central. \vskip.2cm (4) In a connected algebraic group $G$, a Zariski-dense subgroup $\Gamma$ has the property. To check this, it suffices to show that the FC-centre $\mathbf FC(\Gamma)$ of $\Gamma$ coincides with the centre of $\Gamma$. Recall that the \textbf{FC-centre} of a group is the characteristic subgroup consisting of elements which have a finite conjugacy class; the FC-centre of a group contains every finite normal subgroup. \par Recall that the centralizer of an element in an algebraic group is a Zariski-closed subgroup. Given $\gamma \in \mathbf FC(\Gamma)$, the centralizer $C_G(\gamma)$ is Zariski-closed and contains a finite index subgroup of $\Gamma$. Since $\Gamma$ is Zariski-dense, it follows that $C_G(\gamma)$ is of finite index in $G$. Therefore $C_G(\gamma) = G$ since $G$ is connected. Hence $\gamma$ is in the centre $Z(G)$ of $G$. It follows that $\gamma \in \Gamma \cap Z(G) = Z(\Gamma)$. This shows that $\mathbf FC(\Gamma)$ is central in $\Gamma$, and consequently that every finite normal subgroup of $\Gamma$ is central. \par In particular, this applies to all lattices in connected semi-simple groups over non-discrete locally compact fields without compact factors, by the Borel Density Theorem. \vskip.2cm (5) A nilpotent group has the property, because any non-trivial normal subgroup of a nilpotent group has a non-trivial intersection with the centre \cite[Chap.~I, \S~6, no.~3]{Bo--A1-3}. \end{exe} \vskip.2cm Theorem~\ref{thm:CharP(n)} also has the following immediate consequence: \begin{cor} \label{P(n-1)impliesP(n)} Let $n$ be an integer, $n \ge 2$. Suppose that there is no prime power $q$ and integer $m \ge 1$ such that $n = q^m + q^{m-1} + \dots + q + 1$. \par Every countable group that has $P(n-1)$ also has $P(n)$. \end{cor} When $n = q^m + q^{m-1} + \dots + q + 1$, we have the following. \begin{exe} \label{exampleGqm} Consider a prime $p$, a power $q$ of $p$, an integer $m \ge 1$, a field $\mathbf k$ of order $q$, the vector space $W = \mathbf k^m$, and the group $\mathrm{GL}(W) = \mathrm{GL}_m(\mathbf k)$. Let $V_0, V_1, \dots, V_m$ be $m+1$ copies of $W$. The group $\mathrm{GL}(W)$ acts diagonally on $V := \bigoplus_{i=0}^m V_i$. Since $V$ is an elementary abelian $p$-group, it can be viewed as an $\mathbf F_p[\mathrm{GL}(W)]$-module. Define the semi-direct product group $$ G_{(q, m)} = \mathrm{GL}(W) \ltimes V. $$ \par Let $N$ be a normal subgroup of $G_{(q, m)}$. Assume that $N \cap V = \{e\}$. On the one hand, $N$ commutes with $V$ (indeed $[N, V] \le N \cap V = \{e\}$, since both subgroups are normal), hence acts trivially on $V$ by conjugation. On the other hand, the triviality of $N \cap V$ implies that $N$ maps injectively in the quotient $G_{(q, m)}/V \cong \mathrm{GL}(W)$, whose conjugation action on $V$ is faithful. Hence $N = \{e\}$. This shows that every non-trivial normal subgroup of $G_{(q, m)}$ has a non-trivial intersection with $V$. In particular, every minimal normal subgroup of $G_{(q, m)}$ is contained in $V$, and thus is abelian. Hence it corresponds to a simple $\mathbf F_p[G_{(q, m)}]$-submodule of $V$.
Therefore, every minimal abelian normal subgroup of $G_{(q, m)}$ is isomorphic to $W$ as an $\mathbf F_p[G_{(q, m)}]$-module. \par We now set $n = q^m + q^{m-1} + \dots + q + 1$. Then Theorem~\ref{thm:CharP(n)} implies that $G_{(q, m)}$ has Property $P(n-1)$ but not $P(n)$. \end{exe} Notice that the group $G_{(q, 1)}$ is the semi-direct product $\mathbf k^\times \ltimes (\mathbf k \oplus \mathbf k)$, where $\mathbf k$ is a field of order~$q$. The group $G_{(2,1)}$ is Klein's Vierergruppe. The group $G_{(3,1)}$ appears in \cite[Note F]{Burn--11} as an example of a finite group with trivial centre which does not admit any faithful irreducible representation. The group $G_{(4,1)}$ appears in \cite[Problem 2.19]{Isaa--76} for the same reason. Our groups $G_{(q,1)}$ appear in the historical review section of \cite{Szec--16}, where they are denoted by $G(2,q)$. The tables $$ { \begin{array}{c|ccccccccccccccccccc} q & 2 & 3 & 4 & 5 & 7 & 8 & 9 & 11 \\ \hline \vert G_{(q,1)} \vert & 4 & 18 & 48 & 100 & 294 & 448 & 648 & 1210 \\ \end{array} } \hskip.5cm \text{and} \hskip.5cm { \begin{array}{c|ccccccccccccccccccc} q & 2 & 3 \\ \hline \vert G_{(q,2)} \vert & 384 & 34992 \\ \end{array} } $$ give the orders of the $8$ smallest groups $G_{(q,1)}$ and the $2$ smallest groups $G_{(q,2)}$; here $\vert G_{(q, m)} \vert = \vert \mathrm{GL}_m(\mathbf k) \vert \, q^{m(m+1)}$, so that in particular $\vert G_{(q,1)} \vert = (q-1) q^2$. \begin{numer} The sequence of positive integers which are of the form $q^m + q^{m-1} + \dots + q + 1$ for some prime power $q$ and positive integer $m$ is Sequence A258777 of \cite{OEIS}; the first $25$ terms are $$ 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 14, 15, 17, 18, 20, 21, 24, 26, 28, 30, 31, 32, 33, 38, 40 $$ (note that we start with $3$ whereas A258777 starts with $1$). The first $10 \hskip.1cm 000$ terms appear on \hskip.2cm \url{https://oeis.org/A258777/b258777.txt} \hskip.2cm where the last term is $101 \hskip.1cm 808$. For terms below $100$, the largest gap is between the $45$th term and the $46$th term, i.e., between $91$ and $98$; it follows from Corollary \ref{P(n-1)impliesP(n)} that a group with Property $P(91)$ has necessarily Property $P(97)$. \par It is a consequence of the Prime Number Theorem that the asymptotic density of this sequence is $0$. In other words, if for $k \ge 1$ we denote by $R(k)$ the number of positive integers less than $k$ which are terms of this sequence, then $\lim_{k \to \infty} R(k)/k = 0$. See \cite[Appendix B]{Radu--17}. \par Note also that the $21$st term, which is $31$, can be written in two ways justifying its presence in the sequence: $31 = 2^4 + 2^3 + 2^2 + 2 + 1 = 5^2 + 5 + 1$. It is conjectured that there are no other terms with this property; this is still open. Indeed, conjecturally, the Goormaghtigh equation $$ \frac{x^M - 1}{x-1} = \frac{y^N-1}{y-1} $$ has no solution in integers $x,y,M,N$ such that $x,y \ge 2$, $x \ne y$, and $M, N \ge 3$, except $31 = \frac{2^5-1}{2-1} = \frac{5^3-1}{5-1}$ and $8191 = \frac{2^{13}-1}{2-1} = \frac{90^3-1}{90-1}$. We are grateful to Emmanuel Kowalski and Yann Bugeaud for information on the relevant literature, which includes \cite{Rafa--16, Goor--17, BuSh--02, He--09}. \end{numer} In a group which has $P(n-1)$ and not $P(n)$, irreducibly unfaithful subsets of size $n$ are contained in finite normal subgroups of a very particular kind, described in Theorem \ref{thm:UnfaithfulSetSize-n}. Here is a partial statement of this theorem: \begin{prop} \label{partial4.5} Let $G$ be a countable group and $n$ a positive integer. Assume that $G$ has Property $P(n-1)$.
Let $F$ be a finite subset of $G$ of size $n$ which is irreducibly unfaithful, and let $U$ denote the smallest normal subgroup of $G$ containing $F$. \par Then there exists a prime $p$ such that $U$ is a finite elementary abelian $p$-group, and $U$ is contained in the abelian mini-socle $\operatorname{MA} (G)$ (as defined in Subsection \ref{subsec:Gaschutz} below). \end{prop} \subsection{Irreducibly faithful groups} A group is \textbf{irreducibly faithful} if it has a faithful irreducible unitary representation. Clearly, an irreducibly faithful group $G$ has $P(n)$ for all $n \ge 1$. The problem of characterizing finite groups which are irreducibly faithful has been addressed by Burnside in \cite[Note F]{Burn--11}, where a sufficient condition is given. Since then, various papers have been published on the subject, providing various answers to Burnside's question; see the historical review in \cite{Szec--16}. \par Gasch\"utz~\cite{Gasc--54} obtained a short proof of the following simple criterion: \emph{a finite group $G$ admits a faithful irreducible representation over an algebraically closed field of characteristic~$0$ if and only if the abelian part of the socle of $G$ is generated by a single conjugacy class}. For unitary representations, this result has been extended to the class of all countable groups in \cite[Theorem~2]{BeHa--08}; see Subsection~\ref{subsec:Gaschutz} below. As a consequence of Theorem~\ref{thm:CharP(n)}, we shall obtain the following supplementary characterization. \begin{cor} \label{cor:P(n)-For-All-n} For a countable group $G$, the following conditions are equivalent: \begin{enumerate}[label=(\roman*)] \item\label{iDEcor:P(n)-For-All-n} $G$ has a faithful irreducible unitary representation. \item\label{iiDEcor:P(n)-For-All-n} $G$ has $P(n)$ for all $n \ge 1$. \item\label{iiiDEcor:P(n)-For-All-n} For any prime $p$, the group $G$ does not contain any finite abelian normal subgroup $V$ of exponent $p$ with the following properties: there exists a finite simple $\mathbf F_p[G]$-module $W$, with associated centralizer $\mathbf k = \mathcal L_{\mathbf F_p[G]}(W)$ and dimension $m = \dim_{\mathbf k}(W)$, such that $V$ is isomorphic as an $\mathbf F_p[G]$-module to the direct sum of $m+1$ copies of $W$. \end{enumerate} \end{cor} In the case of finite groups, the equivalence between \ref{iDEcor:P(n)-For-All-n} and \ref{iiDEcor:P(n)-For-All-n} is trivial, while the equivalence between \ref{iDEcor:P(n)-For-All-n} and \ref{iiiDEcor:P(n)-For-All-n} is due to Akizuki (see \cite[Page 207]{Shod--31}). \par Let $G$ be a countable group in which every minimal finite abelian normal subgroup is central (see Corollary \ref{finitenormalcentral}). For such a group, Condition \ref{iiiDEcor:P(n)-For-All-n} above can be reformulated as follows. \begin{enumerate} \item[(iii$'$)] The group $G$ does not contain any central subgroup isomorphic to $C_p \times C_p$ for some prime $p$. \end{enumerate} \par For uncountable groups, some of the equivalences of Corollary \ref{cor:P(n)-For-All-n} may fail. See Remark \ref{remcountablegps}. \subsection{Abelian groups} Corollary \ref{finitenormalcentral} applies in particular to countable abelian groups. The following Proposition \ref{prop:Abelian} shows that the conclusion holds for all abelian groups, countable or not. Our proof does not rely on Theorem~\ref{thm:CharP(n)}, but uses the following result of Mira Bhargava, which is Theorem 4 of \cite{Bhar--02}.
\begin{thm}[Mira Bhargava] \label{thm:Bhargava} For any group $G$ and any natural number $n$, the following conditions are equivalent: \begin{enumerate}[label=(\roman*)] \item $G$ is the union of $n$ proper normal subgroups. \item $G$ has a quotient isomorphic to $C_p \times C_p$, for some prime $p \le n-1$. \end{enumerate} \end{thm} \begin{prop} \label{prop:Abelian} Let $n$ be a positive integer and let $G$ be an abelian group. \par Then $G$ does not have $P(n)$ if and only if $G$ contains a subgroup isomorphic to $C_p \times C_p$ for some prime $p \le n-1$. \end{prop} \begin{proof} Assume that $G$ does not have Property $P(n)$. Let $F \subset G \smallsetminus \{e\}$ be an irreducibly unfaithful subset of $G$ of size $\le n$. Let $\widehat G$ be the Pontryagin dual of $G$, namely the group of all unitary characters $G \to \mathbf T$. For each $x \in F$, let $H_x = \{\chi \in \widehat G \mid \chi(x) = 1\}$; it is a subgroup of $\widehat G$. Since $G$ has $P(1)$, we have $H_x \neq \widehat G$. Since $F$ is irreducibly unfaithful, we have $\widehat G = \bigcup_{x \in F} H_x$. Since $\widehat G$ is abelian, every subgroup is normal, and Theorem~\ref{thm:Bhargava} ensures that $\widehat G$ maps onto $C_p \times C_p$, for some prime $p \le \vert F \vert -1 \le n-1$. By duality (see \cite[chap.\ II, \S~1, no~7, Th.~4]{Bo--TS}), it follows that $G$ contains a subgroup isomorphic to the dual of $C_p \times C_p$, i.e., a subgroup isomorphic to $C_p \times C_p$ itself. \par The proof of the converse implication is as in the proof of Corollary \ref{finitenormalcentral}. \end{proof} It is easy to characterize abelian groups having faithful unitary characters, i.e., having faithful irreducible unitary representations. We denote by $\mathfrak c$ the cardinality of the continuum, i.e., of~$\mathbf T$. \begin{prop} \label{almostinFuchs} For an abelian group $G$, the following conditions are equivalent: \begin{enumerate}[label=(\roman*)] \item\label{iDEalmostinFuchs} $G$ has a faithful unitary character, i.e., $G$ is isomorphic to a subgroup of~$\mathbf T$. \item\label{iiDEalmostinFuchs} The cardinality of $G$ is at most $\mathfrak c$, and $G$ does not contain any subgroup isomorphic to $C_p \times C_p$ for any prime $p$. \end{enumerate} \end{prop} \begin{proof} Let $A$ be an abelian group. Denote by $r_0(A)$ the torsion-free rank and, for each prime $p$, by $r_p(A)$ the $p$-rank of $A$. For a prime $p$, denote by $\mathbf Z(p^\infty)$ the quasicyclic group $\mathbf Z[1/p] / \mathbf Z$. \par Let $G$ be an abelian group; denote by $E$ a \textbf{divisible hull} of $G$. Recall the following standard results on $G$ and $E$. First, as any divisible group, $E$ is isomorphic to a direct sum $$ E \cong \Big( \bigoplus_{r_0(E)} \mathbf Q \Big) \mathlarger{\oplus} \Big( \bigoplus_p \Big( \bigoplus_{r_p(E)} \mathbf Z(p^\infty) \Big) \Big) , \leqno{(*)} $$ and we also have $$ \mathbf T \cong \Big( \bigoplus_{\mathfrak c} \mathbf Q \Big) \mathlarger{\oplus} \Big( \bigoplus_p \mathbf Z(p^\infty) \Big) ; \leqno{(**)} $$ see \cite[Theorem 23.1]{Fuch--70}. Second, a divisible group (for example $\mathbf T$) has a subgroup isomorphic to $G$ if and only if it has a subgroup isomorphic to $E$ \cite[Theorem 24.4]{Fuch--70}. Furthermore, we have $$ r_0(E) = r_0(G) \hskip.5cm \text{and} \hskip.5cm r_p(E) = r_p(G) \hskip.2cm \text{for all primes $p$} ; \leqno{(***)} $$ see \cite[Section 24]{Fuch--70}.
\par It follows that $G$ satisfies Condition \ref{iDEalmostinFuchs} if and only if $r_0(G) \le \mathfrak c$ and $r_p(G) \le 1$ for all primes $p$, that is if and only if $G$ satisfies Condition \ref{iiDEalmostinFuchs}. \end{proof} \begin{rem} \label{remcountablegps} Corollary \ref{cor:P(n)-For-All-n} does not extend to groups of cardinality larger than~$\mathfrak c$. Indeed, by Proposition \ref{prop:Abelian}, any torsion-free abelian group has $P(n)$ for all $n \ge 1$ (Condition \ref{iiDEcor:P(n)-For-All-n} of Corollary \ref{cor:P(n)-For-All-n}), but cannot be isomorphic to a subgroup of $\mathbf T$ when its cardinality is larger than $\mathfrak c$ (negation of Property \ref{iDEcor:P(n)-For-All-n} of Corollary \ref{cor:P(n)-For-All-n}). \par We have not been able to decide whether Conditions \ref{iDEcor:P(n)-For-All-n} and \ref{iiDEcor:P(n)-For-All-n} of Corollary \ref{cor:P(n)-For-All-n} are equivalent for all groups of cardinality at most $\mathfrak c$. They are equivalent for abelian groups of cardinality at most $\mathfrak c$; this is Proposition \ref{almostinFuchs}. \par Conditions \ref{iiDEcor:P(n)-For-All-n} and \ref{iiiDEcor:P(n)-For-All-n} of Corollary \ref{cor:P(n)-For-All-n} are equivalent for any abelian group; this is Proposition \ref{prop:Abelian}. \end{rem} \subsection{Irreducible versus factor representations} Recall that two unitary representations $\pi$, $\pi'$ of a group $G$ are called \textbf{disjoint} if there do not exist non-zero subrepresentations $\rho$ of $\pi$ and $\rho'$ of $\pi'$ which are equivalent. A unitary representation $\pi$ of a group $G$ is called a \textbf{factor representation} (or a \textbf{primary representation}) if it cannot be decomposed as the direct sum of two disjoint subrepresentations. Equivalently, the unitary representation $\pi$ is a factor representation if the von Neumann algebra generated by $\pi(G)$ is a factor. Every irreducible unitary representation is a factor representation. The direct sum of several copies of a given irreducible unitary representation is an example of a factor representation which is not irreducible. However, some factor representations do not contain any irreducible subrepresentations. The notion of factor representation plays a key role in the theory of unitary representations on infinite-dimensional Hilbert spaces; see \cite{Dixm--69} and \cite{Mack--76}. We record here the following observation, which implies that the results above remain unchanged if one replaces the class of irreducible unitary representations by the larger class of factor representations (proof in Subsection \ref{subsec:SubsidiaryBeHa}). \begin{prop} \label{prop:Factor->Irrep} Let $G$ be a countable group. For any factor representation $\pi$ of $G$, there is an irreducible unitary representation $\sigma$ of $G$ such that $\mathrm{Ker}(\sigma) = \mathrm{Ker}(\pi)$. \end{prop} In particular, a countable group is irreducibly faithful if and only if it is factorially faithful. \subsection{Irreducibly injective subsets} \label{IntroQ(n)} A natural variation on the notion of irreducible (un)faith\-fulness can be defined as follows. \par A subset $F$ of a group $G$ is called \textbf{irreducibly injective} if $G$ has an irreducible unitary representation $\pi$ such that the restriction $\pi \vert_F$ of $\pi$ to $F$ is injective. We say that $G$ has Property $Q(n)$ if every subset of $G$ of size~$\le n$ is irreducibly injective. It is a tautology that every group has Property $Q(1)$.
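\par The two properties are related through differences of elements. The following elementary observation, recorded here only as an informal illustration and with the details deferred to Section \ref{SectionQ(n)}, lies behind Part \ref{iDEsummingupSection6} of Proposition \ref{summingupSection6} below: for a unitary representation $\pi$ of $G$ and a subset $F$ of $G$ of size $n$, the restriction $\pi \vert_F$ is injective if and only if
$$
x y^{-1} \notin \mathrm{Ker}(\pi) \hskip.5cm \text{for all} \hskip.2cm x, y \in F \hskip.2cm \text{with} \hskip.2cm x \ne y .
$$
The set $D = \{x y^{-1} \mid x, y \in F, \ x \ne y\}$ does not contain $e$ and is stable under inversion; choosing one element in each pair $\{d, d^{-1}\}$ yields a subset $D'$ of $D$ with $\vert D' \vert \le \binom{n}{2}$, and an irreducible representation whose kernel avoids $D'$ has a kernel avoiding all of $D$, hence is injective on $F$.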
Though we do not know a characterization of countable groups which have Property $Q(n)$ for $n \ge 6$ in terms of the group structure, some of the results we show in Section \ref{SectionQ(n)} can be summarized as follows. \begin{prop} \label{summingupSection6} Let $G$ be a countable group and $n$ a positive integer. \begin{enumerate}[label=(\roman*)] \item\label{iDEsummingupSection6} If $G$ has $P\left( \binom{n}{2} \right)$, then $G$ has $Q(n)$; in particular, every countable group has $Q(2)$. \item\label{iiDEsummingupSection6} If $G$ has $Q(n)$, then $G$ has $P(n)$. \item\label{iiiDEsummingupSection6} $G$ has $Q(3)$ if and only if $G$ has $P(3)$. \item\label{ivDEsummingupSection6} $G$ has $Q(4)$ if and only if $G$ has $P(6)$. \item\label{vDEsummingupSection6} $G$ has $Q(5)$ if and only if $G$ has $P(9)$. \end{enumerate} \end{prop} Understanding Property $Q(n)$ for larger $n$ is closely related to a well-documented open problem in additive combinatorics. See Subsection \ref{AdditiveComb}. \vskip.2cm We are grateful to Yves Cornulier for his comments on a previous version of our text. \section{Irreducibly faithful groups and related facts} \subsection{Feet, mini-feet and Gasch\"utz' Theorem} \label{subsec:Gaschutz} Theorem \ref{thm:Gasch} below is due to Gasch\"utz in the case of finite groups \cite{Gasc--54} (see also \cite[Theorem 42.7]{Hupp--98}), and has been generalized to countable groups in \cite[part of Theorem 2]{BeHa--08}. First we recall some terminology. \par In a group $G$, a \textbf{mini-foot} is a minimal non-trivial finite normal subgroup; we denote by $\mathcal M_G$ the set of all mini-feet of $G$. The \textbf{mini-socle} of $G$ is the subgroup $\operatorname{MS} (G)$ generated by $\bigcup_{M \in \mathcal M_G} M$; the mini-socle is $\{e\}$ if $\mathcal M_G$ is empty, for example $\operatorname{MS} (\mathbf Z) = \{0 \}$. \par Let $\mathcal A_G$ denote the subset of $\mathcal M_G$ of abelian mini-feet, and $\mathcal H_G$ the complement of $\mathcal A_G$ in $\mathcal M_G$. The \textbf{abelian mini-socle} of $G$ is the subgroup $\operatorname{MA} (G)$ generated by $\bigcup_{A \in \mathcal A_G} A$, and the \textbf{semi-simple part} $\operatorname{MH} (G)$ of the mini-socle is the subgroup generated by $\bigcup_{H \in \mathcal H_G} H$. For examples of $\operatorname{MA} (G)$, see Example \ref{exe:socletrunk} below. \par In the context of finite groups, mini-foot and mini-socle are respectively called \textbf{foot} and \textbf{socle}. We denote the socle of a finite group $G$ by $\operatorname{Soc}(G)$, the abelian socle by $\operatorname{Soc}A(G)$, and the semi-simple part of the socle by $\operatorname{Soc}H(G)$. The structure of the socle is due to Remak \cite{Rema--30}. \par For general groups, finite or not, the structure of the mini-socle can be described similarly, as follows. We write $\prod'_{\iota \in I} G_\iota$ for the restricted sum of a family of groups $(G_\iota)_{\iota \in I}$; recall that it is the subgroup of the direct product consisting of elements $(g_\iota)_{\iota \in I} \in \prod_{\iota \in I} G_\iota$ such that $g_\iota$ is the identity of $G_\iota$ for all but finitely many $\iota \in I$. \begin{prop} \label{structureminisocle} Let $G$ be a group. Let $\mathcal M_G, \operatorname{MS}(G), \mathcal A_G, \operatorname{MA}(G), \mathcal H_G, \operatorname{MH}(G)$ be as above.
\begin{enumerate}[label=(\arabic*)] \item\label{1DEstructureminisocle} Every abelian mini-foot $A$ in $\mathcal A_G$ is an elementary abelian $p$-group $(\mathbf F_p)^m$ for some prime $p$ and positive integer $m$. \item\label{2DEstructureminisocle} There exists a subset $\mathcal A'_G$ of $\mathcal A_G$ such that $\operatorname{MA} (G) = \prod'_{A \in \mathcal A'_G} A$. In particular $\operatorname{MA} (G)$ is abelian. \item\label{3DEstructureminisocle} Every non-abelian mini-foot $H$ in $\mathcal H_G$ is a direct product of a finite number of isomorphic non-abelian simple groups, conjugate with each other in $G$. \item\label{4DEstructureminisocle} $\operatorname{MH} (G)$ is the restricted sum $\prod'_{H \in \mathcal H_G} H$ of the mini-feet in $\mathcal H_G$. \item\label{5DEstructureminisocle} $\operatorname{MS} (G)$ is the direct product $\operatorname{MA} (G) \times \operatorname{MH} (G)$. \item\label{6DEstructureminisocle} Each of the subgroups $\operatorname{MS} (G)$, $\operatorname{MA} (G)$, $\operatorname{MH} (G)$ is characteristic (in particular normal) in $G$. \item\label{7DEstructureminisocle} Let $r \hskip.1cm \colon G \twoheadrightarrow Q$ be a surjective homomorphism of $G$ onto a group $Q$. Then, for every mini-foot $X$ of $G$, either $r(X)$ is trivial or $r(X)$ is a mini-foot of $Q$. In particular $r$ maps $\operatorname{MA}(G)$ [respectively $\operatorname{MH}(G)$, $\operatorname{MS}(G)$] to a subgroup of $\operatorname{MA}(Q)$ [respectively $\operatorname{MH}(Q)$, $\operatorname{MS}(Q)$] which is normal in $Q$. \end{enumerate} \end{prop} We refer to \cite[Proposition 1]{BeHa--08} for the proof. \par The next result is a slight reformulation of the equivalence between (i) and (iv) in \cite[Theorem~2]{BeHa--08}. \begin{thm} \label{thm:Gasch} For a countable group $G$, the following assertions are equivalent. \begin{enumerate}[label=(\roman*)] \item $G$ has a faithful irreducible unitary representation. \item Every finite normal subgroup of $G$ contained in the abelian mini-socle is generated by a single conjugacy class. \end{enumerate} \end{thm} This result is a crucial tool for the proof of Theorem~\ref{thm:CharP(n)}. Moreover, we shall also need subsidiary facts that we will extract from~\cite{BeHa--08}. They will be presented in Section~\ref{subsec:SubsidiaryBeHa} below. \subsection{On characteristic subgroups that are directed unions of finite normal subgroups} The next lemma, whose straightforward proof is left to the reader, ensures that every group has a unique largest normal subgroup that is the directed union of all its finite normal subgroups [respectively all its soluble finite normal subgroups]. In particular these subgroups are characteristic. \par For an element $g \in G$ and a subset $F \subset G$, we denote by $\langle\!\langle g \rangle\!\rangle_G$ the normal subgroup of $G$ generated by $\{g\}$, and by $\langle\!\langle F \rangle\!\rangle_G$ that generated by $F$. \begin{lem} \label{lem:N1Nk} Let $G$ be a group. \begin{enumerate}[label=(\roman*)] \item\label{iDElem:N1Nk} Let $k$ be a positive integer and $N_1, \hdots, N_k$ finite normal subgroups of $G$. The subgroup of $G$ generated by $\bigcup_{j=1}^k N_j$ is the product $N_1N_2 \dots N_k$, in particular it is a finite normal subgroup of $G$. \item\label{iiDElem:N1Nk} The subset $$ \mathrm W(G) = \{ g \in G \mid \text{the normal subgroup $\langle\!\langle g \rangle\!\rangle_G$ is finite} \} $$ is a characteristic subgroup of $G$, and is the directed union of all finite normal subgroups of $G$. 
\item\label{iiiDElem:N1Nk} The subset $$ \hskip1cm \mathbf Fsol (G) = \{ g \in G \mid \text{the normal subgroup $\langle\!\langle g \rangle\!\rangle_G$ is finite and soluble} \} $$ is a characteristic subgroup of $G$, and is the directed union of all soluble finite normal subgroups of $G$. \end{enumerate} \end{lem} The FC-centre $\mathbf FC(G)$ has been defined in Example \ref{examplenormalcentral}. The characteristic subgroup $\mathrm W (G)$ is the \textbf{torsion FC-centre} of $G$. According to Dicman's Lemma \cite[14.5.7]{Robi--96}, which ensures that every element of finite order in the FC-centre of $G$ has a finite normal closure, $\mathrm W (G)$ is also the set of elements of finite order in $\mathbf FC(G)$. The inclusions $$ \operatorname{MA}(G) \le \mathbf Fsol (G) \le \mathrm W(G) \le \mathbf FC(G) $$ and $$ \operatorname{MS}(G) \le \mathrm W(G) $$ follow from the definitions. \vskip.2cm We illustrate those notions by discussing several examples. \begin{exe}[Abelian mini-socles and other characteristic subgroups] \label{exe:socletrunk} (1) Let $p$ be a prime. In the cyclic group $\mathbf Z / p^2\mathbf Z$, the abelian socle $\mathbf Z / p\mathbf Z$ (which is also the socle) is a proper subgroup of $\mathbf Fsol (\mathbf Z / p^2\mathbf Z) = \mathbf Z / p^2\mathbf Z$. \par More generally, for a torsion abelian group $G$, the abelian mini-socle (which is also the mini-socle) is generated by the elements of prime order, while $\mathbf Fsol (G) = G$. \vskip.2cm (2) If $G$ is the restricted sum of an infinite family $(G_n)_{n \ge 1}$ of soluble finite groups, then $\mathbf Fsol (G)$ is the whole group $G$; note that $\mathbf Fsol (G)$ is soluble if and only if the supremum over all $n$ of the derived length of $G_n$ is finite. \par If $G$ is a finite group, then $\mathbf Fsol (G)$ is the largest soluble normal subgroup of~$G$, known as the \textbf{soluble radical} of $G$. \vskip.2cm (3) Let $G$ be a torsion-free group. Then $\mathrm W(G) = \{e\}$, so that $\operatorname{MA}(G) = \operatorname{MS}(G) = \mathbf Fsol (G) = \{e\}$. \vskip.2cm (4) Let $G$ be a group for which Assertion \ref{2DEthm:CharP(n)} in Theorem \ref{thm:CharP(n)} holds true. Then, with the notation of this Theorem, the finite normal subgroup $V$ of $G$ is contained in the abelian mini-socle $\operatorname{MA} (G)$. \vskip.2cm (5) Let $p$ be a prime, $d$ an integer, $d \ge 2$, and $q = p^d$. Let $C_q$ be the cyclic group $\mathbf Z / q\mathbf Z$; denote by $c_q \in C_q$ the class modulo $q\mathbf Z$ of an integer $c \in \mathbf Z$. Let $H_q$ be the group of triples $(a, b, c) \in \mathbf Z \times \mathbf Z \times C_q$ with the multiplication defined by $$ (a, b, c) (a', b', c') = (a + a', b + b', c + c' + (ab')_q) . $$ We identify the cyclic group $C_p$ of order $p$ with a subgroup of $C_q$, and the group $C_q$ with the subgroup of $H_q$ of triples of the form $(0, 0, c)$. Observe that all conjugacy classes in $H_q$ are finite, i.e., $H_q$ is its own FC-centre (it is a so-called FC-group). Moreover, the torsion FC-centre $\mathrm W(H_q)$ coincides with the central subgroup $C_q$ of $H_q$, and also with $\mathbf Fsol (H_q)$. The following five subgroups of $H_q$ constitute a strictly ascending chain of characteristic subgroups: \par the trivial group $\{e\}$, \par the mini-socle $\operatorname{MS} (H_q) = \operatorname{MA} (H_q) = C_p$, \par the group $\mathbf Fsol (H_q) = \mathrm W(H_q) = C_q$, \par the centre $q\mathbf Z \times q\mathbf Z \times C_q$, \par and the group $H_q$ itself.
\vskip.2cm (6) Let $G$ be a non-trivial nilpotent group. Since minimal normal subgroups of $G$ are central, as recalled in Example \ref{examplenormalcentral}(5), it follows that the mini-socle of~$G$ is the subgroup generated by the central elements of prime order. \par Recall also that the set $\tau(G)$ of elements of finite order in $G$ is a subgroup of~$G$, indeed a characteristic subgroup, and that $G/\tau(G)$ is torsion-free. If $G$ is moreover finitely generated, then $\tau(G)$ is finite \cite[Chapter 1, Corollary 10]{Sega--83}. \par It follows that, for a finitely generated nilpotent group $G$, we have $\mathbf Fsol (G) = \mathrm W(G) = \tau(G)$, and $\mathrm W(G / \mathrm W(G)) = \{e\}$. The next example shows that the finite generation condition cannot be deleted. \par \vskip.2cm (7) For each integer $n \ge 1$, let $$ H_n = \langle x_n, y_n, z_n \mid x_n^3, \hskip.1cm y_n^3, \hskip.1cm [x_n, y_n]z_n^{-1}, \hskip.1cm [x_n, z_n], \hskip.1cm [y_n, z_n] \rangle $$ be a copy of the Heisenberg group over the field $\mathbf F_3$. We form the full direct product $P = \prod_{n \ge 1} H_n$ and, for each $n$, we identify $x_n, y_n$, and $z_n$ with their natural images in $P$. We also set $x = (x_n)_{n \ge 1} \in P$, and define $$ G = \langle x, y_n \mid n \ge 1 \rangle \le P. $$ The group $G$ is countable, of exponent $3$, and nilpotent of class~$2$. Observe that $z_n = [x_n, y_n]$ is in $G$ for all $n \ge 1$. \par For $n \ge 1$, let $A_n$ be the group generated by $y_n$ and $z_n$. It is an abelian $3$-group of order $9$, which is normal in each of $H_n$, $P$, and $G$. Let $A$ be the subgroup of $G$ generated by $\bigcup_{n \ge 1} A_n$, which is normal in $G$. Observe that $G/A$ is a cyclic group of order $3$, generated by the class of $x$ modulo $A$. \par We have $A \le \mathbf Fsol (G)$. Indeed, let $t = (t_n)_{n \ge 1} \in A$. There exists $C \ge 1$ such that $t_n = e$ whenever $n \ge C$, so that $t$ is in the normal subgroup $\prod_{n=1}^C A_n$ of $G$, which is finite and abelian. Hence $t \in \mathbf Fsol (G)$. \par For all $n \ge 1$, we have $[x, y_n] = z_n \in G$, so that the normal subgroup $\langle\!\langle x \rangle\!\rangle_G$ generated by $x$ contains $\{z_n \mid n \ge 1\}$, and thus is infinite. It follows that $x$ is not in the FC-centre of $G$, and in particular that $x$ is not in $\mathbf Fsol (G)$. The same computation applies to every element $g \in G \smallsetminus A$: such a $g$ is congruent to $x$ or $x^{-1}$ modulo the abelian group $A$, which commutes with each $y_n$, so that $\langle\!\langle g \rangle\!\rangle_G$ contains $\{z_n^{\pm 1} \mid n \ge 1\}$ and is infinite. \par We have shown that $A = \mathbf Fsol (G) \lneqq G$, and that $G / \mathbf Fsol (G)$ is a cyclic group of order $3$. In particular, $\mathbf Fsol \big( G / \mathbf Fsol (G) \big) = G / \mathbf Fsol (G) \cong C_3$ is not trivial. \par Since $\mathrm W (-)$ and $\mathbf Fsol (-)$ coincide for $G$ and its quotients (indeed for any soluble group), this can be written $A = \mathrm W(G) \lneqq G$ and $\mathrm W(G/\mathrm W(G)) = G/\mathrm W(G) \cong C_3$. \end{exe} The last example shows that $\mathbf Fsol (G)$ does not behave as a \textit{radical} in general, in the sense that $\mathbf Fsol (G / \mathbf Fsol (G))$ can be non-trivial. Similarly $\mathrm W(G / \mathrm W(G))$ can be non-trivial. However, it is easy to see that, if $\mathrm W(G)$ is finite [respectively $\mathbf Fsol (G)$ is finite], then $\mathrm W(G / \mathrm W(G)) =\{e\}$ [respectively $\mathbf Fsol (G / \mathbf Fsol (G)) = \{e\}$]. The following proposition will be used in Remark \ref{TrunkSufFiur}(2).
\begin{prop} \label{MSMATofproduct} For any two groups $G_1$ and $G_2$, we have: \begin{enumerate}[label=(\roman*)] \item\label{iDEMSMATofproduct} $\operatorname{MS}(G_1 \times G_2) = \operatorname{MS}(G_1) \times \operatorname{MS}(G_2)$, \item\label{iiDEMSMATofproduct} $\operatorname{MA}(G_1 \times G_2) = \operatorname{MA}(G_1) \times \operatorname{MA}(G_2)$, \item\label{iiiDEMSMATofproduct} $ \mathbf Fsol (G_1 \times G_2) = \mathbf Fsol (G_1) \times \mathbf Fsol (G_2)$. \item\label{ivDEMSMATofproduct} $ \mathrm W(G_1 \times G_2) = \mathrm W(G_1) \times \mathrm W (G_2)$. \end{enumerate} \end{prop} \begin{proof} We identify $G_1$ and its subgroups with subgroups of $G_1 \times G_2$, and similarly for $G_2$ and its subgroups. For $j \in \{1, 2\}$, we denote by $e_j$ the neutral element of $G_j$ and by $r_j \hskip.1cm \colon G_1 \times G_2 \twoheadrightarrow G_j$ the canonical projection. \vskip.2cm \ref{iDEMSMATofproduct} The inclusion $\operatorname{MS}(G_1) \times \operatorname{MS}(G_2) \le \operatorname{MS}(G_1 \times G_2)$ is straightforward, because any minimal non-trivial finite normal subgroup of $G_1$ or of $G_2$ is a minimal non-trivial finite normal subgroup of $G_1 \times G_2$. \par To check the reverse inclusion, consider a minimal non-trivial finite normal subgroup $N$ of $G_1 \times G_2$, and distinguish two cases. First, if $N \le G_1$ or $N \le G_2$, then $N \le \operatorname{MS}(G_1) \times \operatorname{MS}(G_2)$. Second, if $N \nleq G_1$ and $N \nleq G_2$, then $N$ does not contain any element of the form $(x_1, e_2)$ or $(e_1, x_2)$ with $x_1 \ne e_1$ and $x_2 \ne e_2$, by minimality. If $N$ did contain an element $x = (x_1, x_2)$ with $x_1$ non central in $G_1$, then $N$ would contain $(y_1, e_2)^{-1} x^{-1} (y_1, e_2) x = ([y_1, x_1], e_2)$ for some $y_1 \in G_1$ such that $[y_1, x_1] \ne e_1$, in contradiction with the hypothesis on $N$; and similarly for $N \ni (x_1, x_2)$ with $x_2$ non central in $G_2$; hence $r_1(N)$ is central in $G_1$ and $r_2(N)$ is central in $G_2$. It follows that $N$ is central in $G_1 \times G_2$, and that there exists a prime $p$ such that $N$ is a cyclic group of order~$p$. Hence $N$ is of the form $\langle (x', x'') \rangle_{G_1 \times G_2}$ with $x'$ of order $p$ in $G_1$ and $x''$ of order $p$ in $G_2$. In particular, $N \le \langle\!\langle x' \rangle\!\rangle_{G_1} \times \langle\!\langle x'' \rangle\!\rangle_{G_2} \le \operatorname{MS} (G_1) \times \operatorname{MS} (G_2)$. It follows that $\operatorname{MS} (G_1 \times G_2) \le \operatorname{MS} (G_1) \times \operatorname{MS} (G_2)$. \vskip.2cm An argument of the same kind shows that \ref{iiDEMSMATofproduct} holds. \vskip.2cm \ref{ivDEMSMATofproduct} Given $x \in \mathrm W(G_1 \times G_2)$, the normal closure $\langle\!\langle x \rangle\!\rangle_{G_1 \times G_2}$ is finite by definition. Therefore $r_j(\langle\!\langle x \rangle\!\rangle_{G_1 \times G_2}) = \langle\!\langle r_j(x) \rangle\!\rangle_{G_j}$ is finite, so that $r_j(x) \in \mathrm W(G_j)$, for $j = 1, 2$. This proves that $x \in \mathrm W(G_1) \times \mathrm W(G_2)$, hence $\mathrm W(G_1 \times G_2) \le \mathrm W(G_1) \times \mathrm W(G_2)$. \par Let $j \in \{1,2\}$ and $x_j \in G_j$. The group $G_{3-j}$ commutes with $x_j$, so that $\langle\!\langle x_j \rangle\!\rangle_{G_1 \times G_2} = \langle\!\langle x_j \rangle\!\rangle_{G_j}$.
Assume in addition that $x_j \in \mathrm W(G_j)$; then by definition $\langle\!\langle x_j \rangle\!\rangle_{G_j}$ is finite, hence $\langle\!\langle x_j \rangle\!\rangle_{G_1 \times G_2}$ is finite as well. Therefore $x_j \in \mathrm W(G_1 \times G_2)$. This proves that $\mathrm W(G_j) \le \mathrm W(G_1 \times G_2)$. Therefore $\mathrm W(G_1) \times \mathrm W(G_2) \le \mathrm W(G_1 \times G_2)$, which ends the proof of \ref{ivDEMSMATofproduct}. \vskip.2cm An argument of the same kind shows that \ref{iiiDEMSMATofproduct} holds. \end{proof} \subsection{A basic property of factor representations} \label{subsec:factor} \begin{lem} \label{lem:Disjoint} Let $\pi$ be a unitary representation of a group $G$ in a Hilbert space $\mathcal H$ and $N$ a normal subgroup of $G$. Let $\pi_1$ [respectively $\pi_2$] be the subrepresentation of $\pi$ given by the $G$-action on the subspace $\mathcal H^N$ of $\mathcal H$ consisting of the $N$-invariant vectors [respectively on its orthogonal complement]. \par Then $\pi_1$ and $\pi_2$ are disjoint. \end{lem} \begin{proof} Let $\rho_1$ be a non-zero subrepresentation of $\pi_1$ and $\rho_2$ a non-zero subrepresentation of $\pi_2$. On the one hand, the kernel of $\rho_1$ contains $N$. On the other hand, if the kernel of $\rho_2$ did contain $N$, the space of $\rho_2$ would be contained in $\mathcal H^N$, hence it would be $\{0\}$ by the definition of $\pi_2$. This is preposterous. Therefore the representations $\rho_1$ and $\rho_2$ have different kernels, and thus they are not equivalent. \end{proof} Two unitary representations $\pi$, $\pi'$ of a group $G$ are called \textbf{quasi-equivalent} if no non-zero subrepresentation of $\pi$ is disjoint from $\pi'$, and vice-versa. \begin{prop} \label{prop:KernelPrimary} Let $\pi$ be a factor representation of a group $G$. \par For every non-zero subrepresentation $\rho \le \pi$, we have $\mathrm{Ker}(\rho) = \mathrm{Ker}(\pi)$. \par In particular, if $\pi'$ is any factor representation quasi-equivalent to $\pi$, then $\mathrm{Ker}(\pi) = \mathrm{Ker}(\pi')$. \end{prop} \begin{proof} Set $N = \mathrm{Ker}(\rho)$. Denote by $\mathcal H_\pi$ the Hilbert space of $\pi$ and by $\mathcal H_\rho$ that of~$\rho$. \par Since $\rho \le \pi$, we have $\mathrm{Ker}(\pi) \le N$. When $N = \{e\}$, there is nothing more to prove. We assume now that $N \ne \{e\}$. \par The space $\mathcal H_\rho$ is contained in $\mathcal H_\pi^N$; in particular $\mathcal H_\pi^N \ne \{0\}$ since $\rho$ is non-zero. Since $\pi$ is a factor representation, $\mathcal H_\pi^N = \mathcal H_\pi$ by Lemma~\ref{lem:Disjoint}. It follows that $N \le \mathrm{Ker}(\pi)$, hence that $N = \mathrm{Ker}(\pi)$. \par Let $\pi'$ be a factor representation of $G$ which is quasi-equivalent to $\pi$. By \cite[Theorem 1.7, Page 20]{Mack--76}, up to equivalence we have $\pi \le \pi'$ or $\pi' \le \pi$. Hence $\mathrm{Ker}(\pi) = \mathrm{Ker}(\pi')$ by the assertion that we have already established. \end{proof} \subsection{On \texorpdfstring{$G$}{G}-faithful representations of subgroups of \texorpdfstring{$G$}{G}} \label{subsec:SubsidiaryBeHa} Given a group $G$ and a normal subgroup $N$, a unitary character or a unitary representation $\rho$ of~$N$ is called \textbf{$G$-faithful} if the intersection over all $g \in G$ of the kernels $\mathrm{Ker} (\rho^g)$ is trivial, where $\rho^g (x) = \rho(gxg^{-1})$ for all $x \in N$. \par The following lemma generalizes \cite[Lemma~9]{BeHa--08}.
More precisely, the statement from loc.\ cit.\ assumes that $\pi$ is irreducible and faithful, whereas we only require that $\pi$ is a factor representation and that the restriction $\pi \vert_N$ is faithful. \begin{lem} \label{FromL9} Let $G$ be a countable group, $N$ a normal subgroup of $G$, and $\pi$ a factor representation of $G$ such that the restriction $\pi \vert_N$ is faithful. \par Then $N$ has an irreducible unitary representation $\rho$ which is $G$-faithful. \end{lem} \begin{proof} The proof is a small modification of that in \cite[Lemma~9]{BeHa--08}. We reproduce the details since our hypotheses are slightly more general. \par We assume that $N$ is non-trivial, since otherwise there is nothing to prove. Set $\sigma := \pi \vert_N$ and let $\sigma = \int_\Omega^\oplus \sigma_\omega d\mu(\omega)$ be a direct integral decomposition of $\sigma$ into irreducible unitary representations, implemented by an isomorphism $\mathcal H_\sigma \cong \int_\Omega^\oplus \mathcal H_\omega d\mu(\omega)$. Denote by $\{C_j\}_{j \in J}$ the family of $G$-conjugacy classes contained in $N$ distinct from $\{e\}$. For each $j$, let $N_j \le N$ be the subgroup generated by $C_j$; note that $N_j$ is normal in $G$. The family $\{C_j\}_{j \in J}$ is countable and non-empty. Every non-trivial normal subgroup of $G$ contained in $N$ must contain $N_j$ for some $j \in J$. Therefore, given $\omega \in \Omega$, we see that $\sigma_\omega$ is not $G$-faithful if and only if $\mathrm{Ker} \big( \bigoplus_{g \in G} (\sigma_\omega)^g \big)$ contains $N_j$ for some $j \in J$. \par Set now $\Omega_j = \big\{ \omega \in \Omega \mid N_j \le \mathrm{Ker} \big( \bigoplus_{g \in G} (\sigma_\omega)^g \big) \big\}$ and $\widetilde \Omega = \bigcup_{j \in J} \Omega_j$. It follows that $\widetilde \Omega$ is the subset consisting of those $\omega \in \Omega$ such that $\sigma_\omega$ is not $G$-faithful. By \cite[Lemma~8]{BeHa--08}, each $\Omega_j$ is measurable. Since $J$ is countable, $\widetilde \Omega$ is also measurable. \par In order to finish the proof, it suffices to show that $\mu(\widetilde \Omega) = 0$. Suppose for a contradiction that $\mu(\widetilde \Omega)>0$. Since $J$ is countable, we have $\mu(\Omega_\ell)>0$ for some $\ell \in J$. For each $\omega \in \Omega_\ell$, we have $N_\ell \le \mathrm{Ker}(\sigma_\omega)$, so that the subspace $\int_{\Omega_\ell}^\oplus \mathcal H_\omega d\mu(\omega)$ of $\mathcal H_\sigma$, which is non-zero since $\mu(\Omega_\ell)>0$, consists of $N_\ell$-invariant vectors. Since $N_\ell$ is normal in $G$, the set of $N_\ell$-invariant vectors is $G$-invariant, and thus corresponds to a subrepresentation of $\pi$. Since $\pi$ is a factor representation, we have $N_\ell \le \mathrm{Ker}(\pi)$ by Proposition~\ref{prop:KernelPrimary}. Since $N_\ell \le N$, this contradicts the hypothesis that $\pi \vert_N$ is faithful. \par We have just shown that almost all irreducible unitary representations $\sigma_\omega$ of $N$ occurring in a direct integral decomposition of $\sigma$ are $G$-faithful. In particular there exists $\omega \in \Omega$ such that the irreducible unitary representation $\rho := \sigma_\omega$ is $G$-faithful. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:Factor->Irrep}] Let $\pi$ be a factor representation of the countable group $G$. View $\pi$ as a faithful representation of the group $H := G / \mathrm{Ker}(\pi)$.
By Lemma~\ref{FromL9} applied to $H$ and the normal subgroup $N = H$, the group $H$ has an irreducible unitary representation $\rho$ which is faithful. We may now view $\rho$ as a representation of $G$ and the proposition follows. \end{proof} \begin{lem} \label{lem:G-faithful-normal-subgroup} Let $G$ be a countable group, $N$ a normal subgroup of $G$, and $\sigma$ an irreducible unitary representation of $N$ which is $G$-faithful. \par Then $G$ has an irreducible unitary representation $\pi$ with the following properties: the restriction $\pi \vert_N$ is faithful, and every element of $\mathrm{Ker}(\pi)$ is contained in a finite normal subgroup of $G$, i.e., $\mathrm{Ker}(\pi)$ is contained in the torsion FC-centre $\mathrm W(G)$ of $G$. \end{lem} \begin{proof} Let $\rho = \mathrm{Ind}_N^G(\sigma)$ be the unitary representation of $G$ induced from $\sigma$. Let $\rho = \int_\Omega^\oplus \rho_\omega d\mu(\omega)$ be a direct integral decomposition of $\rho$ into irreducible unitary representations. Set $$ \widetilde \Omega = \{\omega \in \Omega \mid \rho_\omega \vert_N \text{ is not faithful} \} $$ and $$ \widehat \Omega = \{\omega \in \Omega \mid \text{there exists} \hskip.2cm g \in \mathrm{Ker}(\rho_\omega) \hskip.2cm \text{such that} \hskip.2cm \langle\!\langle g \rangle\!\rangle_G \hskip.2cm \text{is infinite} \}. $$ We claim that $\mu(\widetilde \Omega) = \mu( \widehat \Omega) = 0$; to show this, we argue as in the proof of \cite[Lemma~10]{BeHa--08}. \par To show that $\mu(\widetilde \Omega) = 0$, we proceed by contradiction. We assume that there exists a conjugacy class $C_\ell \ne \{e\}$ of $G$ contained in $N$, generating a subgroup $G_\ell$ of $G$ which is normal and contained in $N$, and defining a measurable subset $\Omega_\ell = \{ \omega \in \Omega \mid G_\ell \le \mathrm{Ker} (\rho_\omega) \}$, such that $\mu(\Omega_\ell) > 0$. Then, as in `Claim 1' in the proof of \cite[Lemma 10]{BeHa--08}, we show that $G_\ell \cap N = \{e\}$, in contradiction with $G_\ell \le N$. \par To show that $\mu(\widehat \Omega) = 0$, also by contradiction, we assume this time that there exists a conjugacy class $C_m \ne \{e\}$ of $G$ generating an infinite subgroup $G_m$ of $G$, and defining a measurable subset $\Omega_m = \{ \omega \in \Omega \mid G_m \le \mathrm{Ker} (\rho_\omega) \}$, such that $\mu(\Omega_m) > 0$, and we arrive at a contradiction. Indeed, `Claim 1' in the proof already quoted shows that $G_m \cap N = \{e\}$, and `Claim 2' in the same proof shows that $G_m$ is finite, in contradiction with the hypothesis. \par Consequently, the complement of $\widetilde \Omega \cup \widehat \Omega$ in $\Omega$ has full measure, and thus is non-empty. For any $\omega \in \Omega \smallsetminus (\widetilde \Omega \cup \widehat \Omega)$, the representation $\pi := \rho_\omega$ is an irreducible unitary representation of $G$ that has the required properties. \end{proof} A strengthening of Lemma~\ref{lem:G-faithful-normal-subgroup} will be established in Lemma~\ref{lem:G-faithful-normal-subgroup:Kernel<sol(G)} below. \begin{lem} \label{lem:semi-simple_feet_G-faithful} Let $G$ be a group and $N, A, S$ normal subgroups of $G$ such that $N = A \times S$. Assume that $A$ is abelian, and that $S$ is the restricted sum of a collection $\{S_i\}$ of non-abelian simple finite groups. Then: \begin{enumerate}[label=(\roman*)] \item $S$ has a faithful irreducible unitary representation; \item $N$ has a $G$-faithful irreducible unitary representation if and only if $A$ has a $G$-faithful unitary character.
\end{enumerate} \end{lem} \begin{proof} See Lemma 13 and its proof in \cite{BeHa--08}. \end{proof} We end this section with some subsidiary facts. Given an abelian group $A$, denote by $\widehat A$ the \textbf{Pontrjagin dual} of $A$, namely the space of all unitary characters $A \to \mathbf T$, with the compact open topology. Recall that $\widehat A$ is a compact abelian group. \begin{lem} \label{lem:Pontrjagin} Let $G$ be a discrete group, $A$ an abelian normal subgroup of $G$, and $\chi$ a unitary character of $A$. \par Then $\chi$ is $G$-faithful if and only if the subgroup generated by $\chi^G = \{\chi^g \mid g \in G\}$ is dense in $\widehat A$. \end{lem} \begin{proof} This follows from Pontrjagin duality. See the proof of the equivalence between (i) and (ii) of Lemma 14 in \cite{BeHa--08}. \end{proof} Before the last proposition of this section, we recall the natural module structure on abelian normal subgroups, the definition of cyclic modules, and we state a lemma which is helpful for translating from the language of abelian groups to that of modules. \begin{rem} \label{remabnormalmodule} Let $G$ be a group, $V$ an abelian normal subgroup of $G$, and $\mathbf Z[G]$ the group ring of $G$ over the integers. Then $V$ has a canonical structure of $\mathbf Z[G]$-module. Moreover, $V$ is simple as a $\mathbf Z[G]$-module if and only if $V$ is minimal as abelian normal subgroup of $G$. \par Compare with the reminder on simple $\mathbf F_p[G]$-modules just before Theorem \ref{thm:CharP(n)}. \end{rem} For a ring $R$ and a module $V$, the module~$V$ is \textbf{cyclic} if there exists $v \in V$ such that $V = Rv$. This terminology is used below for $R$ the group ring $\mathbf Z[G]$ and $V$ an abelian normal subgroup of $G$, and for $R$ the group algebra $\mathbf F_p[G]$ and $V$ a $p$-elementary abelian normal subgroup of $G$ for some prime $p$. \par The proof of the next lemma is straightforward, and left to the reader. \begin{lem} \label{lem:cyclic1conjclass} Let $G$ be a group and $V$ an abelian normal subgroup of $G$. \par Then $V$ is generated as a group by one $G$-conjugacy class if and only if $V$ as a $\mathbf Z[G]$-module is cyclic. \par Suppose moreover that $V$ is an elementary abelian $p$-group. Then $V$ is generated as a group by one $G$-conjugacy class if and only if $V$ as an $\mathbf F_p[G]$-module is cyclic. \end{lem} The following classical result will be frequently used in the sequel, without further notice. For a proof, see \cite[\S~3, no.\ 3]{Bo--A8}. \begin{prop} \label{prop:semi-simple} Let $R$ be a ring and $U$ an $R$-module. The following conditions are equivalent: \begin{enumerate}[label=(\roman*)] \item\label{iDEprop:semi-simple} $U$ is generated by simple submodules. \item\label{iiDEprop:semi-simple} $U$ is a direct sum of a family of simple submodules. \item\label{iiiDEprop:semi-simple} Every submodule of $U$ is a direct summand. \end{enumerate} If $U$ satisfies these conditions, then \begin{enumerate}[label=(\alph*)] \item every submodule of $U$ satisfies Conditions \ref{iDEprop:semi-simple} to \ref{iiiDEprop:semi-simple}, \item every quotient module of $U$ satisfies Conditions \ref{iDEprop:semi-simple} to \ref{iiiDEprop:semi-simple}. \end{enumerate} \end{prop} A module $U$ satisfying Conditions \ref{iDEprop:semi-simple} to \ref{iiiDEprop:semi-simple} is called \textbf{semi-simple}. \vskip.2cm Proposition \ref{lem14BEHA} will be needed in Section~\ref{Section:proofof1.1}.
\begin{prop} \label{lem14BEHA} Let $G$ be a group, $A$ a finite normal subgroup of $G$ contained in $\operatorname{MA}(G)$, and $p$ a prime. \par The following properties are equivalent: \begin{enumerate}[label=(\roman*)] \item \label{iDElem14BEHA} The group $A$ has a $G$-faithful unitary character. \item \label{iiDElem14BEHA} The group $A$ is generated by a single conjugacy class. \item \label{iiiDElem14BEHA} The $\mathbf Z[G]$-module $A$ is cyclic. \end{enumerate} Suppose moreover that $A$ is an elementary abelian $p$-group. Then Properties \ref{iDElem14BEHA} to \ref{iiiDElem14BEHA} are equivalent to: \begin{enumerate}[label=(\roman*)] \addtocounter{enumi}{3} \item \label{ivDElem14BEHA} The $\mathbf F_p[G]$-module $A$ is cyclic. \end{enumerate} \end{prop} \begin{proof} For the equivalence of \ref{iDElem14BEHA} and \ref{iiDElem14BEHA}, we follow the arguments of the proof of Lemma~14 in \cite{BeHa--08} (whose formal statement is however insufficient for our purposes). \par By \ref{2DEstructureminisocle} in Proposition \ref{structureminisocle}, $A$ is a finite abelian group and is therefore a direct sum $A = \bigoplus_{p \in P} A_p$, where $P$ is the set of primes $p$ for which $A$ has elements of order $p$, and where $A_p$ is the $p$-Sylow subgroup of $A$. Moreover $A_p$ is an elementary abelian $p$-group for each $p \in P$, by (1) of the same proposition. Notice that $A_p$ is semi-simple by Proposition~\ref{prop:semi-simple}, since $A$ is contained in $\operatorname{MA}(G)$. (For comparison with \cite[Lemma~14]{BeHa--08}, note that it follows from Proposition \ref{prop:semi-simple} applied to each $A_p$ that there exists a finite set $\{A_i\}_{i \in I}$ of abelian mini-feet in $G$ such that $A = \bigoplus_{i \in I} A_i$; each $A_i$ is isomorphic to $(\mathbf F_p)^n$ for some prime $p$ and some $n \ge 1$.) Observe that the Pontryagin dual of $A = \bigoplus_{p \in P} A_p$ is canonically isomorphic to $\bigoplus_{p \in P} \widehat A_p$. \par We know by Lemma~\ref{lem:Pontrjagin} that $A$ has a $G$-faithful unitary character if and only if $\widehat A$ is generated by one $G$-orbit. By the Chinese Remainder Theorem, the group $\widehat A = \bigoplus_{p \in P} \widehat A_p$ is generated by a single $G$-orbit if and only if each of its $p$-Sylow subgroups $\widehat A_p$ is generated by a single $G$-orbit (this can alternatively be deduced from Lemma~\ref{lem:cyclic1conjclass} together with Lemma~\ref{lem:IsotypicalDecomposition} below). Using Lemma~\ref{lem:Pontrjagin} again, we deduce that $A$ has a $G$-faithful unitary character if and only if $A_p$ has a $G$-faithful character for each $p \in P$. \par Consequently, it suffices to prove the equivalence of \ref{iDElem14BEHA} and \ref{iiDElem14BEHA} when $A = A_p$ for one prime $p$. By Lemma~\ref{lem:cyclic1conjclass}, the group $A_p$ is generated by a single conjugacy class if and only if $A_p$ is cyclic as an $\mathbf F_p[G]$-module. Under the natural identification of $\widehat A_p$ with the dual $A_p^* := \text{Hom}_{\mathbf F_p}(A_p, \mathbf F_p)$, the $G$-action on $\widehat A_p$ corresponds to the dual (or contragredient) action of $G$ on $A_p^*$. Thus we may identify $\widehat A_p$ with $A_p^*$ as $\mathbf F_p[G]$-modules. A finite semi-simple $\mathbf F_p[G]$-module is cyclic if and only if its dual is cyclic (see Lemma 3.2 in \cite{Szec--16}).
Since the dual $A_p^*$ is canonically isomorphic to the Pontrjagin dual $\widehat A_p$, and since $A_p$ is semi-simple, we deduce from Lemma~\ref{lem:Pontrjagin} that $A_p$ is generated by a single conjugacy class if and only if $A_p$ has a $G$-faithful unitary character. \vskip.2cm The equivalence of \ref{iiDElem14BEHA} and \ref{iiiDElem14BEHA} holds by Lemma \ref{lem:cyclic1conjclass}. \vskip.2cm In the particular case of $A$ an elementary abelian $p$-group, similarly, the equivalence of \ref{iiDElem14BEHA} and \ref{ivDElem14BEHA} holds by Lemma \ref{lem:cyclic1conjclass}. \end{proof} \subsection{Irreducible representations whose kernel is contained in \texorpdfstring{$\mathbf Fsol (G)$}{FSol (G)}} The goal of this subsection is to establish the following result of independent interest. \begin{prop} \label{prop:Kernel} Any countable group $G$ admits an irreducible unitary representation $\pi$ such that, for every element $g \in \mathrm{Ker}(\pi)$, the normal closure $\langle\!\langle g \rangle\!\rangle_G$ is a soluble finite subgroup of $G$. \par In other words, $G$ has an irreducible unitary representation whose kernel is contained in the characteristic subgroup $\mathbf Fsol (G)$. \end{prop} We need the following. \begin{lem} \label{lem:ShrinkKernel} Let $G$ be a countable group and $K$ a normal subgroup of $G$ contained in the torsion FC-centre $\mathrm W(G)$. \par If $G / K$ is irreducibly faithful, then $G / (K \cap \mathbf Fsol (G))$ is also irreducibly faithful. \end{lem} \begin{proof} Set $S = \mathbf Fsol (G)$. In order to show that $G / (K \cap S)$ is irreducibly faithful, it suffices by Theorem~\ref{thm:Gasch} to consider an arbitrary finite normal subgroup $A$ of $G / (K \cap S)$ contained in $\operatorname{MA} \big(G / (K \cap S) \big)$ and to show that $A$ is generated by a single conjugacy class. \par Let $r_1 \hskip.1cm \colon G \twoheadrightarrow G / (K \cap S)$ and $r_2 \hskip.1cm \colon G / (K \cap S) \twoheadrightarrow G / K$ be the canonical projections. We claim that the restriction $r_2 \vert_A$ is injective. Indeed, let $x \in G$ be such that $r_1(x) \in \mathrm{Ker}(r_2\vert_A) = A \cap \mathrm{Ker}(r_2)$; note that $r_1(x) \in A$ and $x \in K$. We have $$ \langle\!\langle r_1(x) \rangle\!\rangle_{G / (K \cap S)} \cong \langle\!\langle x \rangle\!\rangle_G / \big( \langle\!\langle x \rangle\!\rangle_G \cap (K \cap S) \big) = \langle\!\langle x \rangle\!\rangle_G / \big( \langle\!\langle x \rangle\!\rangle_G \cap S \big) . $$ Since $K \le \mathrm W(G)$ by hypothesis, the normal closure $\langle\!\langle x \rangle\!\rangle_G$ is finite. By the definition of $S$, every finite normal subgroup of $G$ contained in $S$ is soluble; hence $\langle\!\langle x \rangle\!\rangle_G \cap S$ is soluble. Moreover $\langle\!\langle r_1(x) \rangle\!\rangle_{G / (K \cap S)}$ is abelian because $r_1(x) \in A$ and $A$ is abelian normal in $G / (K \cap S)$. It follows that $\langle\!\langle x \rangle\!\rangle_G$ is soluble-by-abelian, hence soluble. We infer that $x \in S$. Therefore $r_1(x) = e$, which proves the claim. \par Since $G / (K \cap S)$ is a quotient of $G$, we may view $A$ as a $\mathbf Z[G]$-module, and we must show that this module is cyclic (see Proposition~\ref{lem14BEHA}). The claim implies that $r_2$ induces an isomorphism of $\mathbf Z[G]$-modules $A \to r_2(A)$.
Since $G / K$ is irreducibly faithful by hypothesis, and since $r_2(A) \le \operatorname{MA}(G / K)$ by Proposition~\ref{structureminisocle}\ref{7DEstructureminisocle}, we deduce from Theorem~\ref{thm:Gasch} that $r_2(A)$ is generated by a single conjugacy class in $G / K$. Thus $r_2(A)$ is a cyclic $\mathbf Z[G]$-module by Proposition~\ref{lem14BEHA}, from which it finally follows that $A$ is a cyclic $\mathbf Z[G]$-module, as required. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:Kernel}] The group $G$ has an irreducible unitary representation whose kernel $K$ is contained in $\mathrm W(G)$, by Lemma~\ref{lem:G-faithful-normal-subgroup} applied with $N = \{e\}$. By Lemma~\ref{lem:ShrinkKernel}, it follows that $G$ also has an irreducible unitary representation whose kernel is contained in $\mathbf Fsol (G)$. \end{proof} \begin{rem} \label{BrolineGarrison} For a finite group $G$, Proposition \ref{prop:Kernel} implies that $G$ has an irreducible representation with soluble kernel. This falls quite short of a theorem due to Broline and Garrison \cite[Corollary 12.20]{Isaa--76} which establishes that $G$ has an irreducible representation with nilpotent kernel. More precisely: \par \emph{ Let $G$ be a finite group and let $\pi$ be an irreducible representation of $G$ over $\mathbf C$ satisfying either of the following conditions: (i) the degree of $\pi$ is maximal among the degrees of all irreducible representations of $G$, (ii) the kernel of $\pi$ is minimal among the kernels of all irreducible representations of $G$. Then the kernel of $\pi$ is nilpotent. } \par There are groups without any irreducible representation having abelian kernel. This is well-known to experts, and we are convinced that examples exist in the literature, but we have not been able to find a precise reference; one specific example can be found in Appendix \ref{Section:appendixA}. \end{rem} \begin{rem} \label{TrunkSufFiur} Let $G$ be a countable group. \vskip.2cm (1) It follows from Proposition \ref{prop:Kernel} that the complement $G \smallsetminus \mathbf Fsol (G)$ of $\mathbf Fsol (G)$ in a countable group $G$ is irreducibly faithful. (A refinement of that statement will be established in Proposition~\ref{prop:IrredFaithful:sol}.) \vskip.2cm (2) However, the quotient $G / \mathbf Fsol (G)$ need not have a faithful irreducible unitary representation. \par Indeed, let $H$ be a countable group such that $H / \mathbf Fsol (H) \cong C_3$ is cyclic of order $3$; see Example \ref{exe:socletrunk}(7). Set $G = H \times H$. By Proposition~\ref{MSMATofproduct}, we have $$ G / \mathbf Fsol (G) \cong (H \times H) / (\mathbf Fsol (H) \times \mathbf Fsol (H)) \cong C_3 \times C_3 , $$ so that $G / \mathbf Fsol (G)$ does not have any faithful irreducible unitary representation. \vskip.2cm (3) In \cite[Corollary 3]{BeHa--08}, it is noted that each of the following conditions on $G$ is sufficient to imply that $G$ has a faithful irreducible unitary representation: \begin{enumerate}[label=(\roman*)] \item\label{iDETrunkSufFiur} $G$ is torsion-free, \item\label{iiDETrunkSufFiur} all conjugacy classes in $G$ distinct from $\{e\}$ are infinite. \end{enumerate} Proposition \ref{prop:Kernel} shows that the following condition, weaker than both \ref{iDETrunkSufFiur} and \ref{iiDETrunkSufFiur}, is also sufficient: \begin{enumerate}[label=(\roman*)] \addtocounter{enumi}{2} \item\label{iiiDETrunkSufFiur} $\mathbf Fsol (G) = \{e\}$. 
\end{enumerate} Here are two families of groups for which \ref{iiiDETrunkSufFiur} holds, but neither \ref{iDETrunkSufFiur} nor \ref{iiDETrunkSufFiur} does. \par (a) Any restricted sum $H$ of finite non-abelian simple groups. More generally, any direct product $G \times H$ of an irreducibly faithful group $G$ with a restricted sum $H$ of finite non-abelian simple groups. \par (b) Let $G$ be one of the groups defined by B.H.\ Neumann in 1937 to show that there are uncountably many pairwise non-isomorphic groups which are finitely generated and not finitely presented; see \cite{Neum--37}, as well as \cite[Section III.B and in particular no~35]{Harp--00}. Recall that, in $G$, the FC-centre is a restricted sum $N := \prod'_n H_n$, where each $H_n$ is a finite simple alternating group, and $\vert H_1 \vert < \dots < \vert H_n \vert < \vert H_{n+1} \vert < \dots$, and the quotient $G/N$ is the permutation group of $\mathbf Z$ generated by translations and even finitely supported permutations. Observe that $G$ is neither torsion-free, nor with all conjugacy classes other than $\{e\}$ infinite. The subgroup $\mathbf Fsol (G)$ is trivial, and therefore $G$ has a faithful irreducible unitary representation. \end{rem} We finish by recording the following strengthening of Lemma~\ref{lem:G-faithful-normal-subgroup}, which will be needed in Section~\ref{Section:proofof1.1}. \begin{lem} \label{lem:G-faithful-normal-subgroup:Kernel<sol(G)} Let $G$ be a countable group, $N$ a normal subgroup of $G$, and $\sigma$ an irreducible unitary representation of $N$ which is $G$-faithful. \par Then $G$ has an irreducible unitary representation $\pi$ such that $\mathrm{Ker}(\pi) \cap N =\{e\}$ and $\mathrm{Ker}(\pi) \le \mathbf Fsol (G)$. \end{lem} \begin{proof} Let $K$ be the kernel of the irreducible unitary representation of $G$ afforded by applying Lemma~\ref{lem:G-faithful-normal-subgroup} to $\sigma$. Thus $K \cap N = \{e\}$ and $K \le \mathrm W(G)$. The desired conclusion now follows from Lemma~\ref{lem:ShrinkKernel}. \end{proof} \section{Cyclic semi-simple \texorpdfstring{$\mathbf F_p[G]$}{Fp[G]}-modules} \label{Section:cyclic} Let $R$ be a ring. The following lemma is the module version of a result often stated for groups and known as Goursat's Lemma. The module version appears, for example, in \cite[Page 171]{Lamb--76}. More on this lemma can be found in \cite{BaSZ--15}. \begin{lem} \label{lem:Goursat} Let $A = A_0 \oplus A_1$ be the direct sum of two $R$-modules. For $i = 0, 1$, let $r_i \hskip.1cm \colon A \twoheadrightarrow A_i$ be the canonical projection. Let $M \le A$ be a submodule such that $r_i(M) = A_i$ for $i = 0, 1$. Set $M_i = M \cap A_i$. \par Then the $R$-modules $A_0/M_0$ and $A_1/M_1$ are isomorphic, and the canonical image of $M$ in $A_0/M_0 \oplus A_1/M_1$ is the graph of an isomorphism of $R$-modules $A_0/M_0 \to A_1/M_1$. \end{lem} Let now $p$ be a prime and $G$ a group. The goal of this section is to characterize when a finite semi-simple $\mathbf F_p[G]$-module is cyclic. This will be achieved in Proposition~\ref{prop:cyclicsemi-simple} below, after some preparatory steps. Proposition~\ref{prop:cyclicsemi-simple} is well-known to experts: see Lemma 3.1 in \cite{Szec--16}. It can be seen as a version over $\mathbf F_p$ of a result for cyclic unitary representations of compact groups which appears in Greenleaf and Moskowitz \cite[Proposition 1.8]{GrMo--71}. \begin{lem} \label{lem:Goursat-for-2} Let $W$ be a finite simple $\mathbf F_p[G]$-module.
Let $\mathbf k = \mathcal L_{\mathbf F_p[G]}(W)$ be its centralizer, which is a finite field extension of $\mathbf F_p$. Let $V_0, V_1$ be two copies of $W$. \par Every simple $\mathbf F_p[G]$-submodule $M$ of $V_0 \oplus V_1$ such that $M \cap V_0 = \{0\}$ is of the form $$ M = \{(\lambda x, x) \mid x \in V_1\} $$ for some $\lambda \in \mathbf k$. \end{lem} \begin{proof} This is a straightforward consequence of Lemma~\ref{lem:Goursat}. \end{proof} The following extension to a direct sum of $\ell+1$ components will be useful. \begin{lem} \label{lem:Goursat-for-ell+1} Let $W$ be a finite simple $\mathbf F_p[G]$-module. Let $\mathbf k = \mathcal L_{\mathbf F_p[G]}(W)$. Let $\ell \ge 0$; for each $i = 0, \dots, \ell$, let $V_i$ be a copy of $W$. Set $U = V_0 \oplus V_1 \oplus \dots \oplus V_\ell$. \par Every maximal $\mathbf F_p[G]$-submodule $M \lneqq U$ such that $M \cap V_0 = \{0\}$ is of the form $$ M = \Big\{ \Big( \sum_{i=1}^\ell \lambda_i x_i, x_1, x_2, \dots, x_\ell \Big) \hskip.2cm \Big\vert \hskip.2cm (x_1, \dots, x_\ell) \in V_1 \oplus \dots \oplus V_\ell \Big\} $$ for some $(\lambda_1, \dots, \lambda_\ell) \in \mathbf k^\ell$. \end{lem} \begin{proof} Let $r \hskip.1cm \colon U \twoheadrightarrow V_1 \oplus \dots \oplus V_\ell$ be the canonical projection. Let $M \lneqq U$ be a maximal $\mathbf F_p[G]$-submodule such that $M \cap V_0 = \{0\}$. Then the restriction $r \vert_M$ is injective. Since $M$ is maximal, we have $U = V_0 \oplus M$, so that $r \vert_M \hskip.1cm \colon M \to V_1 \oplus \dots \oplus V_\ell$ is an isomorphism of $\mathbf F_p[G]$-modules. \par Given $i \in \{1, \dots, \ell\}$, let $M_i = (r \vert_M)^{-1}(V_i)$. Then $M_i$ is isomorphic to $V_i$, hence it is a simple $\mathbf F_p[G]$-submodule of $M$ contained in $V_0 \oplus V_i$. Moreover $M_i \cap V_0 = \{0\}$. By Lemma~\ref{lem:Goursat-for-2}, there exists $\lambda_i \in \mathbf k$ such that $M_i = \{(\lambda_i x_i, x_i) \mid x_i \in V_i\} \le V_0 \oplus V_i$. Since $r \vert_M \hskip.1cm \colon M \to V_1 \oplus \dots \oplus V_\ell$ is an isomorphism, we deduce that \begin{align*} M & = M_1 \oplus \dots \oplus M_\ell \\ & = \Big\{ \Big( \sum_{i=1}^\ell \lambda_i x_i, x_1, x_2, \dots, x_\ell \Big) \hskip.2cm \Big\vert \hskip.2cm (x_1, \dots, x_\ell) \in V_1 \oplus \dots \oplus V_\ell \Big\} \end{align*} as required. \end{proof} We can now characterize when a direct sum of copies of a given simple $\mathbf F_p[G]$-module is cyclic. \begin{lem} \label{lem:Isotypical} Retain the notation of Lemma \ref{lem:Goursat-for-ell+1}. \par The $\mathbf F_p[G]$-module $U$ is cyclic if and only if $\ell < \dim_{\mathbf k} (W)$. \end{lem} \begin{proof} Assume first that $\ell \ge \dim_{\mathbf k} (W)$. Let $(v_0, \dots, v_\ell) \in U$. Since $V_i = W$ for all $i$, we may view $v_i$ as an element of $W$. Since $\ell + 1 > \dim_{\mathbf k} (W)$, the vectors $v_0, \dots, v_\ell$ are linearly dependent over $\mathbf k$; hence, upon reordering the summands $V_0, \dots, V_\ell$, we may assume that there exists $(\lambda_1, \dots, \lambda_\ell) \in \mathbf k^\ell$ such that $v_0 = \sum_{i=1}^\ell \lambda_i v_i$. It follows that $(v_0, \dots, v_\ell)$ belongs to $$ \Big\{ \Big( \sum_{i=1}^\ell \lambda_i x_i, x_1, x_2, \dots, x_\ell \Big) \hskip.2cm \Big\vert \hskip.2cm (x_1, \dots, x_\ell) \in V_1 \oplus \dots \oplus V_\ell \Big\}, $$ which is a proper $\mathbf F_p[G]$-submodule of $U$. Hence $U$ is not cyclic. (In a context of characteristic zero, an argument of this kind is used for the proof of \cite[Lemma~15.5.3]{Dixm--69}.) \par In order to prove the converse, we proceed by induction on $\ell$.
In case $\ell =0$, we have $0 = \ell < \dim_{\mathbf k} (W)$ and $U = V_0 = W$ is simple, hence cyclic. \par We now assume that $0 < \ell < \dim_{\mathbf k} (W)$. The induction hypothesis ensures that the $\mathbf F_p[G]$-module $V_1 \oplus \dots \oplus V_\ell$ is cyclic. Let $(v_1, \dots, v_\ell)$ be a generator. Viewing all $v_i$ as elements of $W$, the hypothesis that $\ell < \dim_{\mathbf k} (W)$ ensures the existence of an element $v_0 \in W$ which does not belong to the $\mathbf k$-subspace of $W$ spanned by $\{v_1, \dots, v_\ell\}$. Let $M$ be the $\mathbf F_p[G]$-submodule of $U$ spanned by $(v_0, v_1, \dots, v_\ell)$. Let $r \hskip.1cm \colon U \twoheadrightarrow V_1 \oplus \dots \oplus V_\ell$ denote the canonical projection. The image $r(M)$ coincides with the $\mathbf F_p[G]$-submodule generated by $(v_1, \dots, v_\ell)$, i.e., with $V_1 \oplus \dots \oplus V_\ell$. If one had $M \cap V_0 = \{0\}$, then $M$ would be a maximal proper $\mathbf F_p[G]$-submodule of $U$, and Lemma~\ref{lem:Goursat-for-ell+1} would then ensure that $v_0$ is a $\mathbf k$-linear combination of $\{v_1, \dots, v_\ell\}$, a contradiction. Hence $M \cap V_0 \ne \{0\}$. Since $V_0$ is simple, $M$ contains $V_0$, so that $U = M$; this shows that $U$ is indeed cyclic. \end{proof} The following basic counting lemma will also be useful. \begin{lem} \label{lem:counting} Retain the notation of Lemma \ref{lem:Goursat-for-ell+1}. Moreover, set $q = \vert \mathbf k \vert$. \par The number of simple $\mathbf F_p[G]$-submodules of $U$ is $$ q^\ell + q^{\ell-1} + \dots + q + 1 = \frac{q^{\ell+1}-1}{q-1}. $$ \end{lem} \begin{proof} We proceed by induction on $\ell$. In case $\ell = 0$, the $\mathbf F_p[G]$-module $U = V_0$ is simple, so the result is clear. Assume now that $\ell \ge 1$. Consider \par the collection $\text{\large$\mathcal S$}$ of all simple $\mathbf F_p[G]$-submodules of $U$, \par the complement $\text{\large$\mathcal S$}_0$ of $V_0$ in $\text{\large$\mathcal S$}$, i.e., $\text{\large$\mathcal S$}_0 = \{ S \in \text{\large$\mathcal S$} \mid S \cap V_0 = \{0\} \}$, \par and the collection $\text{\large$\mathcal S'$}$ of all simple $\mathbf F_p[G]$-submodules of $V_1 \oplus \dots \oplus V_\ell$. \par\noindent Denote by $r$ the canonical projection $U \twoheadrightarrow V_1 \oplus \dots \oplus V_\ell$. Each $S \in \text{\large$\mathcal S$}_0$ determines its image $S' = r(S) \in \text{\large$\mathcal S'$}$, which can be viewed as a submodule of $U$ contained in $V_1 \oplus \dots \oplus V_\ell$, and there exists by Lemma \ref{lem:Goursat} an element $\lambda \in \mathbf k$ such that $S = \{(\lambda x, x) \mid x \in S'\} \le V_0 \oplus S'$. Conversely, $S' \in \text{\large$\mathcal S'$}$ and $\lambda \in \mathbf k$ determine $S$. This shows that $\vert \text{\large$\mathcal S$} \vert = \vert \text{\large$\mathcal S$}_0 \vert + 1 = q \vert \text{\large$\mathcal S'$} \vert + 1$. Since $\vert \text{\large$\mathcal S'$} \vert = q^{\ell-1} + \dots + q + 1$ by the induction hypothesis, this ends the proof. \end{proof} \begin{lem} \label{lem:CyclicQuotientModule} Retain the notation of Lemma \ref{lem:Goursat-for-ell+1}. Moreover, set $q = \vert \mathbf k \vert$, denote by $m$ the dimension of $W$ over $\mathbf k$, and assume that $\ell \ge m$. Let $\mathcal Z$ be a set of simple $\mathbf F_p[G]$-submodules of $U$ of cardinality $\vert \mathcal Z \vert < q^m + \dots + q + 1$. \par There is an $\mathbf F_p[G]$-submodule $B \le U$ with $B \cap Z = \{0\}$ for all $Z \in \mathcal Z$, and such that $U/B$ is cyclic.
\end{lem} \begin{proof} Let $\text{\large$\mathcal B$}$ be the collection of all $\mathbf F_p[G]$-submodules $B$ of $U$ such that $B \cap Z = \{0\}$ for all $Z \in \mathcal Z$. Let also $B \in \text{\large$\mathcal B$}$ be an element which is maximal for the inclusion relation. Note that the $\mathbf F_p[G]$-module $U/B$ is semi-simple. If $U/B$ were not cyclic, then $U/B$ would be isomorphic to a direct sum of at least $m+1$ copies of $W$ by Lemma~\ref{lem:Isotypical}. Therefore $U/B$ would contain at least $q^m + \dots + q + 1$ simple $\mathbf F_p[G]$-submodules by Lemma~\ref{lem:counting}. In particular $U/B$ would contain at least one simple $\mathbf F_p[G]$-submodule $C$ which is different from the canonical image of $Z$ in $U/B$ for all $Z \in \mathcal Z$. Denoting by $B'$ the preimage of $C$ in $U$, we obtain $B \lneqq B'$ and $B' \in \text{\large$\mathcal B$}$. This contradicts the maximality of $B$. Hence $U/B$ is cyclic. \end{proof} Given an additive group $V$ and a subset $F \subseteq V$, we set $$ F - F = \{c \in V \mid c = a-b \hskip.2cm \text{for some} \hskip.2cm a, b \in F \} . $$ The following result will be needed in Section~\ref{SectionQ(n)}. \begin{lem} \label{lem:F-F} Retain the notation from Lemma~\ref{lem:counting}. \par If $\ell \ge 1$, there is a subset $F \subseteq U$ of size~$q^\ell + q^{\ell-1} + \dots + q + 1$ such that $F-F$ contains a non-zero element of each of the simple $\mathbf F_p[G]$-submodules of $U$. \end{lem} \begin{proof} For each subset $I \subseteq \{0, 1, \dots, \ell\}$, we view the direct sum $\bigoplus_{i \in I} V_i$ as a submodule of $U$. \par Let $\text{\large$\mathcal S$}$ be the collection of all simple submodules of $U$ and $\text{\large$\mathcal S$}'$ be the subcollection consisting of those $S \in \text{\large$\mathcal S$}$ which are contained in $V_0 \oplus V_1$. By Lemma~\ref{lem:counting}, we have $\vert \text{\large$\mathcal S$} \vert = q^\ell + q^{\ell-1} + \dots + q + 1$ and $\vert \text{\large$\mathcal S$}' \vert = q+1$. \par By definition, each element of $\text{\large$\mathcal S$}$ is a simple module, so that any two distinct elements of $\text{\large$\mathcal S$}$ have intersection $\{0\}$. Choosing a non-zero element in each member of $\text{\large$\mathcal S$} \smallsetminus \text{\large$\mathcal S$}'$, we obtain a set $E$ of size $q^\ell + q^{\ell-1} + \dots + q^2$. \par Choose now a non-zero $x \in V_1$, and set $E' = \{(\lambda x, x) \mid \lambda \in \mathbf k\} \subseteq V_0 \oplus V_1$. Thus $\vert E' \vert = \vert \mathbf k \vert = q$. By Lemma~\ref{lem:Goursat-for-2}, the set $E'$ contains a non-zero element in each member of $\text{\large$\mathcal S$}' \smallsetminus \{V_0\}$. \par Finally, we set $F = E \cup E' \cup \{0\}$. Observe that $\vert F \vert = q^\ell + q^{\ell-1} + \dots + q + 1$. Moreover, we have $E \cup E' \subset F \smallsetminus \{0\} \subset F-F$, so that $F-F$ contains a non-zero element of each member of $\text{\large$\mathcal S$} \smallsetminus \{V_0\}$. Since $$ V_0 \ni (x, 0) = (x, x)-(0, x) \in E' - E' \subset F-F, $$ we see that $F-F$ also contains a non-zero element of $V_0$. Thus the set $F$ has the required properties. \end{proof} Given a semi-simple $R$-module $U$ and a simple $R$-module $W$, the submodule of $U$ generated by all simple submodules isomorphic to $W$ is called the \textbf{isotypical component of type $W$} of $U$. Every semi-simple $R$-module is the direct sum of its isotypical components \cite[\S~3, Proposition~9]{Bo--A8}.
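Before turning to isotypical decompositions, let us illustrate Lemmas~\ref{lem:counting} and~\ref{lem:F-F} in the simplest situation, namely $W = \mathbf F_p$ with $m = 1$ and $\mathbf k = \mathbf F_p$, the group acting through $\mathbf k^\times$; the simple submodules of $U = \mathbf F_p^{\ell+1}$ are then exactly its one-dimensional subspaces. The short Python script below is only a sanity check of ours and is not used anywhere in the arguments; the values of $p$ and $\ell$, as well as the helper names, are arbitrary illustrative choices.
\begin{verbatim}
from itertools import product

p, ell = 3, 2          # illustrative choices: U = F_p^(ell+1)

def lines(p, n):
    # the one-dimensional subspaces of F_p^n, each listed by its non-zero vectors
    seen, out = set(), []
    for v in product(range(p), repeat=n):
        if any(v) and v not in seen:
            L = {tuple((t * x) % p for x in v) for t in range(1, p)}
            seen |= L
            out.append(L)
    return out

L = lines(p, ell + 1)
print(len(L), sum(p**i for i in range(ell + 1)))   # both equal (p^(ell+1)-1)/(p-1)

# The set F of Lemma lem:F-F: one non-zero point on each line not inside V_0 + V_1,
# the p points (lambda*x, x, 0, ..., 0) with x = 1, and the origin.
x = 1
E  = [min(l) for l in L if any(v[2:] != (0,) * (ell - 1) for v in l)]
Ep = [((lam * x) % p, x) + (0,) * (ell - 1) for lam in range(p)]
F  = E + Ep + [(0,) * (ell + 1)]

diffs = {tuple((a - b) % p for a, b in zip(u, v)) for u in F for v in F}
print(len(F), all(diffs & l for l in L))           # p^ell + ... + p + 1 and True
\end{verbatim}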
\begin{lem} \label{lem:IsotypicalDecomposition} A finite semi-simple $\mathbf F_p[G]$-module $U$ is cyclic if and only if each of its isotypical components is cyclic. \end{lem} \begin{proof} The `only if' part is clear since any quotient of a cyclic module is cyclic. \par Let $U = M_1 \oplus \dots \oplus M_\ell$ be the decomposition of $U$ as the direct sum of its isotypical components. Assume that $M_i$ is cyclic for all $i \in \{1, \dots, \ell\}$ and let $v_i \in M_i$ be a generator. We claim that $v = (v_1, \dots, v_\ell)$ is a generator of $U$. We prove this by induction on $\ell$. The base case $\ell = 1$ is trivial. Assume now that ${\ell \ge 2}$ and let $M$ be the submodule generated by $v$. The induction hypothesis ensures that the canonical projection of $M$ to $A_0 = \bigoplus_{i=1}^{\ell-1} M_i$ is surjective. Clearly, the projection of $M$ to $A_1 = M_\ell$ is surjective. Since $A_0$ and $A_1$ are disjoint (i.e., they do not contain any non-zero isomorphic summands), it follows from Lemma~\ref{lem:Goursat} that $M = A_0 \oplus A_1 = U$. \end{proof} \begin{prop} \label{prop:cyclicsemi-simple} Let $U$ be a finite semi-simple $\mathbf F_p[G]$-module. The following properties are equivalent: \begin{enumerate}[label=(\roman*)] \item \label{iDEprop:cyclicsemi-simple} $U$ is not cyclic. \item\label{iiDEprop:cyclicsemi-simple} There exist a finite simple $\mathbf F_p[G]$-module $W$ of dimension $m \ge 1$ over $\mathbf k = \mathcal L_{\mathbf F_p[G]}(W)$, and a submodule $V \le U$ isomorphic to a direct sum of $m+1$ copies of $W$. \end{enumerate} \end{prop} \begin{proof} Assume that Property \ref{iiDEprop:cyclicsemi-simple} holds. In view of Lemma~\ref{lem:Isotypical}, the module $V$ afforded by \ref{iiDEprop:cyclicsemi-simple} is not cyclic. Since $V$ is a direct summand of $U$, and therefore isomorphic to a quotient of $U$, it follows that $U$ is not cyclic. \par Assume conversely that $U$ is not cyclic. Then $U$ has a non-cyclic isotypical component by Lemma~\ref{lem:IsotypicalDecomposition}, and it follows from Lemma~\ref{lem:Isotypical} that Condition \ref{iiDEprop:cyclicsemi-simple} holds. \end{proof} \section{On the structure of minimal unfaithful subsets} \label{Section:proofof1.1} The goal of this section is to prove Theorem~\ref{thm:CharP(n)}. In fact, we shall establish a finer statement that describes precisely the structure of the normal closure of an irreducibly unfaithful subset of size~$n$ in a countable group with Property $P(n-1)$; see Theorem~\ref{thm:UnfaithfulSetSize-n} below. We shall however start with the proof of the easier implication in Theorem~\ref{thm:CharP(n)}. \subsection{Proof of \ref{2DEthm:CharP(n)} \texorpdfstring{$\Rightarrow$}{=>} \ref{1DEthm:CharP(n)} in Theorem~\ref{thm:CharP(n)}} \begin{lem} \label{firstpartproofMainTheorem} Let $G$ be a countable group. Suppose that there exist a prime $p$, a~finite normal subgroup $V$ of $G$ which is an elementary abelian $p$-group, and a finite simple $\mathbf F_p[G]$-module $W$, with centralizer field $\mathbf k = \mathcal L_{\mathbf F_p[G]}(W)$ and dimension $m = \dim_{\mathbf k} (W)$, such that $V$ is isomorphic as $\mathbf F_p[G]$-module to a direct sum of $m+1$ copies of~$W$. Set $q = \vert \mathbf k \vert$. Then: \begin{enumerate}[label=(\roman*)] \item\label{iDEfirstpartproofMainTheorem} For every irreducible unitary representation $\pi$ of $G$, the kernel $\mathrm{Ker}(\pi)$ contains at least one of the $q^m + \dots + q + 1 $ simple submodules of $V$.
\item\label{iiDEfirstpartproofMainTheorem} A subset $F \subset V$ is irreducibly faithful in $G$ if and only if $F \cap K \subset\{e\}$ for some simple $\mathbf F_p[G]$-submodule $K$ of $V$. \item\label{iiiDEfirstpartproofMainTheorem} There is a subset $F \subset V$ of size $q^m + \dots + q + 1 $ which is not irreducibly faithful. \end{enumerate} \end{lem} \begin{proof} \ref{iDEfirstpartproofMainTheorem} Since $V$ is not cyclic as an $\mathbf F_p[G]$-module by Lemma~\ref{lem:Isotypical}, it follows that $V$ has no $G$-faithful character by Proposition~\ref{lem14BEHA}. In view of Lemma~\ref{FromL9}, for every irreducible unitary representation $\pi$ of $G$, the restriction $\pi \vert_V$ cannot be faithful. In particular $\mathrm{Ker}(\pi)$ contains at least one of the simple $\mathbf F_p[G]$-submodules of $V$. \par \ref{iiDEfirstpartproofMainTheorem} Let $F$ be a subset of $V$. If $F$ contains a non-trivial element in each of the simple $\mathbf F_p[G]$-submodules of $V$, it follows from \ref{iDEfirstpartproofMainTheorem} that $F$ is not irreducibly faithful in $G$. Conversely, if there is a simple $\mathbf F_p[G]$-submodule $K$ in $V$ such that $F \cap K \subset \{e\}$, then every non-trivial element of $F$ has a non-trivial image in the quotient $V/K$. Since $V/K$ is a cyclic $\mathbf F_p[G]$-module by Lemma~\ref{lem:Isotypical}, it follows from Proposition~\ref{lem14BEHA} and Lemma~\ref{lem:G-faithful-normal-subgroup} that $G/K$ has an irreducible unitary representation whose restriction to $V/K$ is faithful. Therefore $F$ is irreducibly faithful in $G$. \par \ref{iiiDEfirstpartproofMainTheorem} By Lemma~\ref{lem:counting}, the number of simple submodules in $V$ equals $q^m + \dots + q + 1$. Thus, choosing a non-trivial element in each of these submodules (any two of which intersect trivially), we obtain a subset of $V$ of size $q^m + \dots + q + 1$ which, by \ref{iiDEfirstpartproofMainTheorem}, is not irreducibly faithful. \end{proof} \begin{proof}[Proof of \ref{2DEthm:CharP(n)} $\Rightarrow$ \ref{1DEthm:CharP(n)} in Theorem~\ref{thm:CharP(n)}] If $G$ satisfies \ref{2DEthm:CharP(n)} in Theorem~\ref{thm:CharP(n)}, then $G$ contains a set $F \subseteq V$ of size $q^m + \dots + q + 1$ which is not irreducibly faithful by Lemma~\ref{firstpartproofMainTheorem}, so that $G$ does not have Property $P( q^m + \dots + q + 1)$. Since $n \ge q^m + \dots + q + 1$, the group $G$ does not have Property $P(n)$. \end{proof} \subsection{Minimal unfaithful subsets are contained in \texorpdfstring{$\mathbf Fsol (G)$}{FSol (G)}} The following result shows that, in a countable group $G$, the irreducible faithfulness of a subset $F$ can be checked on the intersection of $F$ with $\mathbf Fsol (G)$. \begin{prop} \label{prop:IrredFaithful:sol} Let $G$ be a countable group and $F$ a subset of $G$. If $F \cap \mathbf Fsol (G)$ is finite and irreducibly faithful, then $F$ is irreducibly faithful. \par In particular, a finite subset $F \subseteq G$ is irreducibly faithful if and only if the intersection $F \cap \mathbf Fsol (G)$ is irreducibly faithful. \end{prop} \emph{Note:} We already know the particular case of this proposition for $F$ disjoint from $\mathbf Fsol (G)$, for example for $F = G \smallsetminus \mathbf Fsol (G)$; see Remark \ref{TrunkSufFiur}(1). \begin{proof} Set $S = \mathbf Fsol (G)$. Let $F \subseteq G$ be such that $F \cap S$ is finite and irreducibly faithful. We aim to prove that $F$ is irreducibly faithful.
To this end, we partition $F$ into three subsets, $F = F_S \sqcup F_H \sqcup F_\infty$, where: \begin{enumerate} \item[] $F_S = \{ x \in F \mid \langle\!\langle x \rangle\!\rangle_G \hskip.2cm \text{is finite soluble} \} = F \cap S$, \item[] $F_H = \{ x \in F \mid \langle\!\langle x \rangle\!\rangle_G \hskip.2cm \text{is finite non-soluble} \} = (F \cap \mathrm W(G)) \smallsetminus S$, \item[] $F_\infty = \{ x \in F \mid \langle\!\langle x \rangle\!\rangle_G \hskip.2cm \text{is infinite} \} = F \smallsetminus \left( F_S \sqcup F_H \right)$. \end{enumerate} \par By hypothesis, there exists an irreducible unitary representation $\rho$ of $G$ such that $\rho(x) \neq \mathrm{id}$ for all $x \in F_S \smallsetminus \{e\}$. Since $F_S$ is finite by hypothesis, the normal subgroup $A = \langle\!\langle F_S \rangle\!\rangle_G$ is finite soluble (Lemma \ref{lem:N1Nk}). Let $K = A \cap \mathrm{Ker}(\rho)$, which is a finite soluble normal subgroup of $G$, and let $r \hskip.1cm \colon G \twoheadrightarrow Q = G / K$ be the canonical projection. Note that $r(x) \neq e$ for all $x \in F_S \smallsetminus \{e\}$. \par Since $A$ is soluble, its image $\rho(A)$ is soluble as well. Therefore the socle $\operatorname{Soc}(\rho(A))$ is abelian. Since $\rho(G)$ is irreducibly faithful, the socle $\operatorname{Soc}(\rho(A))$ has a $\rho(G)$-faithful irreducible unitary character by Lemma~\ref{FromL9}. \par The homomorphism $\rho$ induces an isomorphism $\rho_A \hskip.1cm \colon A / K \overset{\cong}{\longrightarrow} \rho(A)$, and similarly $r$ induces an isomorphism $r_A \hskip.1cm \colon A / K \overset{\cong}{\longrightarrow} r(A)$. Moreover, the action by conjugation of $G$ on $A$ induces $G$-actions on $\rho(A)$ and $r(A)$, and the isomorphism $r_A \rho_A^{-1} \hskip.1cm \colon \rho(A) \overset{\cong}{\longrightarrow} r(A)$ is $G$-equivariant. Hence the group $N = \operatorname{Soc}(r(A))$, which is normal in $Q$, is abelian and has a $Q$-faithful unitary character, say $\sigma$. \par We now invoke Lemma~\ref{lem:G-faithful-normal-subgroup:Kernel<sol(G)}, which affords an irreducible unitary representation $\pi$ of $Q$ whose restriction to $N$ is faithful, and such that $\mathrm{Ker}(\pi)$ is contained in $\mathbf Fsol (Q)$. \par The composite map $\pi' = \pi \circ r$ is an irreducible unitary representation of $G$. \par We claim that $\pi'(x) \neq \mathrm{id}$ for all $x \in \big( F_S \smallsetminus \{e\} \big)$. We know that the representation $\pi \vert_N$ is faithful. Since $N = \operatorname{Soc} (r(A))$, it follows that $\pi \vert_{r(A)}$ is also faithful. As noted above, for every $x \in F_S \smallsetminus \{e\}$, we have $r(x) \ne e$, and therefore $\pi'(x) = \pi(r(x)) \ne \mathrm{id}$. \par We next claim that $\pi'(x) \neq \mathrm{id}$ for all $x \in F_H$. Indeed, for $x \in F_H$, we have $x \neq e$ and $\langle\!\langle x \rangle\!\rangle_G \not \le S$, since $\langle\!\langle x \rangle\!\rangle_G$ is not soluble. But $K \cap \langle\!\langle x \rangle\!\rangle_G$ is finite soluble, because $K$ is so. Therefore $ \langle\!\langle r(x) \rangle\!\rangle_Q = r(\langle\!\langle x \rangle\!\rangle_G) \cong \langle\!\langle x \rangle\!\rangle_G / (K \cap \langle\!\langle x \rangle\!\rangle_G)$ is not soluble, hence $r(x) \notin \mathbf Fsol (Q)$. Since the kernel of $\pi$ is contained in $\mathbf Fsol (Q)$, this shows that $r(x) \notin \mathrm{Ker} (\pi)$, and therefore that $x \notin \mathrm{Ker} (\pi')$, as claimed. 
\par Given $x \in F_\infty$, the normal closure $\langle\!\langle x \rangle\!\rangle_G$ is infinite. Since $K$ is finite, it follows that $\langle\!\langle r(x) \rangle\!\rangle_Q$ is infinite as well. In particular $r(x)$ is not contained in the kernel of $\pi$, which is contained in $\mathbf Fsol (Q) \le \mathrm W(Q)$. Hence $\pi'(x) \neq \mathrm{id}$. This proves that $\pi'(x) \neq \mathrm{id}$ for all $x \in F \smallsetminus \{e\}$. \par Thus $F$ is irreducibly faithful, as required. \end{proof} \begin{cor} \label{cor:MinimalIrreduciblyUnfaithful} Let $G$ be a countable group and $F \subseteq G$ be a finite subset which is irreducibly unfaithful. \par If every proper subset of $F$ is irreducibly faithful, then $F$ is contained in $\mathbf Fsol (G)$. \end{cor} \begin{proof} Let $F$ be a finite subset of $G$ of which every proper subset is irreducibly faithful. If $F$ were not contained in $\mathbf Fsol (G)$, then $F \cap \mathbf Fsol (G)$ would be a proper subset of $F$, hence irreducibly faithful, and Proposition~\ref{prop:IrredFaithful:sol} would imply that $F$ itself is irreducibly faithful, contradicting the assumption that $F$ is irreducibly unfaithful. Therefore $F \subseteq \mathbf Fsol (G)$. \end{proof} \subsection{Unfaithful subsets of size \texorpdfstring{$n$}{n} in countable groups with \texorpdfstring{$P(n-1)$}{P(n-1)}} \begin{lem} \label{lem:UnfaithfulSubsets+mini-feet} Let $n$ be a positive integer and $G$ a countable group. Let $F \subset G$ be an irreducibly unfaithful subset of size $n$ such that: \begin{enumerate}[label=(\alph*)] \item\label{aDElem:UnfaithfulSubsets+mini-feet} every proper subset of $F$ is irreducibly faithful; \item\label{bDElem:UnfaithfulSubsets+mini-feet} every element of $F$ is contained in an abelian mini-foot of $G$. \end{enumerate} Let $U = \langle\!\langle F \rangle\!\rangle_G$ be the normal subgroup of $G$ generated by $F$. \par Then there exist a prime $p$, and a simple $\mathbf F_p[G]$-module $W$, such that the following assertions hold, where $\mathbf k$ denotes the centralizer field $\mathcal L_{\mathbf F_p[G]}(W)$, and $m = \dim_{\mathbf k}(W)$, and $q = \vert \mathbf k \vert$~: \begin{enumerate}[label=(\roman*)] \item\label{iDElem:UnfaithfulSubsets+mini-feet} $U$ is a finite elementary abelian $p$-group, contained in the abelian mini-socle $\operatorname{MA} (G)$; \item\label{iiDElem:UnfaithfulSubsets+mini-feet} $U$ is isomorphic as an $\mathbf F_p[G]$-module to the direct sum of a number $\ell+1$ of copies of $W$, and $\ell \ge m$; \item\label{iiiDElem:UnfaithfulSubsets+mini-feet} $q^m + q^{m-1} + \dots + q + 1 \le n$. \end{enumerate} \end{lem} \begin{proof} The hypothesis \ref{aDElem:UnfaithfulSubsets+mini-feet} on $F$ implies that $F$ does not contain the neutral element $e$, since otherwise $F$ would be irreducibly faithful. \par By Proposition \ref{structureminisocle}, the normal subgroup $U$ is abelian and finite. The conjugation action of $G$ on $U$ allows us to view $U$ as a $\mathbf Z[G]$-module. Since $U$ is generated by mini-feet of $G$, it follows from Proposition~\ref{prop:semi-simple} that $U$ is a semi-simple $\mathbf Z[G]$-module. Let $\mathcal Y$ denote the set of isomorphism classes of simple $\mathbf Z[G]$-submodules of $U$. For each $Y \in \mathcal Y$, let $U_Y$ be the submodule of $U$ generated by the simple submodules isomorphic to $Y$; note that $U_Y$ is a finite abelian subgroup of $U$ which is normal in $G$. We have the isotypical direct sum decomposition $U = \bigoplus_{Y \in \mathcal Y} U_Y$. \par For $x \in F$, the normal closure $\langle\!\langle x \rangle\!\rangle_G \le U$ is an abelian mini-foot of $G$ by hypothesis, hence a simple $\mathbf Z[G]$-module.
Thus it is isomorphic to some $Y \in \mathcal Y$ and $\langle\!\langle x \rangle\!\rangle_G \le U_Y$. Setting $F_Y = F \cap U_Y$ for all $Y \in \mathcal Y$, we obtain a partition of $F$ as $F = \bigsqcup_{Y \in \mathcal Y} F_Y$. \vskip.2cm We claim that $\mathcal Y$ contains a single element. Indeed, assume this is not the case. For each $Y \in \mathcal Y$, the subset $F_Y$ is strictly contained in $F$, hence is irreducibly faithful by the hypotheses made on $F$. Let $\pi_Y$ be an irreducible unitary representation of $G$ witnessing the faithfulness of $F_Y$, and set $K_Y = U_Y \cap \mathrm{Ker}(\pi_Y)$. Thus every element of $F_Y$ has a non-trivial image in $U_Y / K_Y$. Moreover, we may view $\pi_Y$ as an irreducible unitary representation of $G / K_Y$ whose restriction to $U_Y / K_Y$ is faithful. By Lemma~\ref{FromL9}, $U_Y / K_Y$ has a $G / K_Y$-faithful unitary character. Therefore it is a cyclic $\mathbf F_p[G / K_Y]$-module by Proposition~\ref{lem14BEHA}, where $p$ is the exponent of $Y$. In particular it is a cyclic $\mathbf Z[G]$-module. \par Let now $K = \left\langle \bigcup_{Y \in \mathcal Y} K_Y \right\rangle$. Thus $K$ is a normal subgroup of $G$, and we have a natural direct sum decomposition $K \cong \bigoplus_{Y \in \mathcal Y} K_Y$ (see Lemma \ref{lem:N1Nk}). In particular $K \cap U_Y = K_Y$ for all $Y \in \mathcal Y$. Moreover, we have $K \cap F = \varnothing$, since otherwise $K \cap F_Y$ would be non-empty for some $Y \in \mathcal Y$, which would imply that $K_Y$ contains an element of $F_Y$. This contradicts the definition of $K_Y$. Therefore, every element of $F$ has a non-trivial image in $G / K$. \par We may view the quotient $U / K$ as a $\mathbf Z[G]$-module. It is semi-simple by Proposition~\ref{prop:semi-simple}, as a quotient module of $U$. Moreover, the direct sum decomposition $U / K \cong \bigoplus_{Y \in \mathcal Y} U_Y / K_Y$ is the isotypical decomposition of $U / K$. We have seen above that the isotypical component $U_Y / K_Y$ of $U / K$ is a cyclic $\mathbf Z[G]$-module for each $Y \in \mathcal Y$. It follows from Lemma~\ref{lem:IsotypicalDecomposition} that $U / K$ is cyclic. By Proposition~\ref{lem14BEHA}, this means that $U / K$ has a $G / K$-faithful unitary character. By Lemma~\ref{lem:G-faithful-normal-subgroup}, $G / K$ has an irreducible unitary representation $\pi$ whose restriction to $U / K$ is faithful. Since every element of $F$ has a non-trivial image in $G / K$, precomposing $\pi$ with the projection $G \twoheadrightarrow G / K$ yields an irreducible unitary representation of $G$ mapping every element of $F$ to a non-trivial operator. Thus $F$ is irreducibly faithful, a contradiction. This proves the claim. \vskip.2cm We denote the single element of $\mathcal Y$ by $W$. From now on, we denote by $p$ the exponent of $W$. Since the $\mathbf Z[G]$-module $W$ is simple, $p$ is a prime. Thus $W$ is a simple $\mathbf F_p[G]$-module, and $U = U_W$ is isomorphic to a direct sum of $\ell+1$ copies of $W$ for some integer $\ell \ge 0$. Since $F$ is not irreducibly faithful, it follows that the restriction to $U$ of every irreducible unitary representation of $G$ cannot be faithful. Therefore $U$ has no $G$-faithful character by Lemma~\ref{lem:G-faithful-normal-subgroup}. Hence $U$ is not a cyclic $\mathbf F_p[G]$-module by Proposition~\ref{lem14BEHA}. In view of Proposition \ref{prop:cyclicsemi-simple}, this implies that $\ell \ge m$, where $m$ is the dimension of $W$ over $\mathbf k = \mathcal L_{\mathbf F_p[G]}(W)$.
This proves \ref{iDElem:UnfaithfulSubsets+mini-feet} and \ref{iiDElem:UnfaithfulSubsets+mini-feet}. \vskip.2cm It remains to prove that $n = \vert F \vert \ge q^m + \dots + q +1$. Recall from the hypothesis that $\langle\!\langle x \rangle\!\rangle_G $ is a simple $\mathbf F_p[G]$-submodule of $U$ for all $x \in F$. Assume for a contradiction that $n < q^m + \dots + q +1$. Then, by Lemma~\ref{lem:CyclicQuotientModule}, there is an $\mathbf F_p[G]$-submodule $B \le U$ with $B \cap \langle\!\langle x \rangle\!\rangle_G = \{e\}$ for all $x \in F$, such that $U/B$ is cyclic. It then follows from Lemma~\ref{lem:G-faithful-normal-subgroup} and Proposition~\ref{lem14BEHA} that $G/B$ has an irreducible unitary representation whose restriction to $U/B$ is faithful. Viewing that representation as a representation of $G$, we obtain a contradiction with the fact that $F$ is irreducibly unfaithful. This proves \ref{iiiDElem:UnfaithfulSubsets+mini-feet}. \end{proof} In the introduction, Property $P(n)$ was defined for all $n \ge 1$. For the sake of uniformity in the forthcoming arguments, we extend the definition to the case $n=0$. Thus every group has Property $P(0)$, tautologically. The main result of this section is the following. \begin{thm} \label{thm:UnfaithfulSetSize-n} Let $n$ be a positive integer and $G$ a countable group with Property $P(n-1)$. Let $F \subset G$ be an irreducibly unfaithful subset of size $n$, and $U = \langle\!\langle F \rangle\!\rangle_G$ the normal subgroup of $G$ generated by $F$. \par Then there exist a prime $p$ and a finite simple $\mathbf F_p[G]$-module $W$ such that the following assertions hold, where $\mathbf k$ denotes the centralizer field $\mathcal L_{\mathbf F_p[G]}(W)$, and $m = \dim_{\mathbf k}(W)$, and $q = \vert \mathbf k \vert$~: \begin{enumerate}[label=(\roman*)] \item\label{iDEthm:UnfaithfulSetSize-n} $U$ is a finite elementary abelian $p$-group, contained in the abelian mini-socle $\operatorname{MA} (G)$; \item\label{iiDEthm:UnfaithfulSetSize-n} $U$ is isomorphic as an $\mathbf F_p[G]$-module to the direct sum of a number $\ell+1$ of copies of $W$, and $\ell \ge m$; \item\label{iiiDEthm:UnfaithfulSetSize-n} $q$ is a power of $p$ and $q^m + q^{m-1} + \dots + q + 1 = n$. \end{enumerate} \end{thm} \begin{proof} It follows from the hypotheses that every proper subset of $F$ is irreducibly faithful; in particular $e \not \in F$. Hence $F \subseteq \mathbf Fsol (G)$ by Corollary~\ref{cor:MinimalIrreduciblyUnfaithful}. For every $x \in F$, it follows that the group $\langle\!\langle x \rangle\!\rangle_G$ is finite soluble, and therefore has an abelian socle. Since socles are characteristic subgroups, this socle is a finite abelian normal subgroup of $G$, hence it contains an abelian mini-foot of $G$. We may therefore choose $b_x \in \langle\!\langle x \rangle\!\rangle_G$ such that $\langle\!\langle b_x \rangle\!\rangle_G$ is an abelian mini-foot of $G$. \par We set $F' = \{b_x \mid x \in F\}$, so that $\vert F' \vert \le \vert F \vert = n$. Since $b_x \in \langle\!\langle x \rangle\!\rangle_G$, we see that $F'$ is irreducibly unfaithful, because $F$ itself has that property. If we had $\vert F' \vert < n$, then $F'$ would be irreducibly faithful since $G$ has $P(n-1)$, a contradiction. Thus $\vert F' \vert = n$, and every proper subset of $F'$ is irreducibly faithful. We may therefore apply Lemma~\ref{lem:UnfaithfulSubsets+mini-feet} to the set $F'$. We denote by $p$, $U' = \langle\!\langle F' \rangle\!\rangle_G$, $W$, $m$, $q$ the various objects afforded in that way.
Then Properties \ref{iDEthm:UnfaithfulSetSize-n} and \ref{iiDEthm:UnfaithfulSetSize-n} are satisfied by $U'$. Moreover we have $q^m + q^{m-1} + \dots + q + 1 \le n$. If we had $q^m + q^{m-1} + \dots + q + 1 \le n-1$, then $G$ would not have Property $P(n-1)$ by Lemma~\ref{firstpartproofMainTheorem}. Therefore Property~\ref{iiiDEthm:UnfaithfulSetSize-n} is also satisfied by $U'$. It remains to show that $U' = U$, i.e., that $U' = \langle\!\langle F \rangle\!\rangle_G$. \vskip.2cm Since $b_x \in \langle\!\langle x \rangle\!\rangle_G$ for all $x \in F$, we have $U' = \langle\!\langle F' \rangle\!\rangle_G \le \langle\!\langle F \rangle\!\rangle_G$. Thus it suffices to show that $F$ is contained in $U'$. Assume for a contradiction that this is not the case, and let $y \in F$ be such that $y \notin U'$. Since $F' \smallsetminus \{b_y\}$ is irreducibly faithful, there exists an irreducible unitary representation $\rho$ of $G$ such that $\rho(b_x) \neq 1$ for all $x \in F \smallsetminus \{y\}$. Let $B = U' \cap \mathrm{Ker}(\rho)$. By Lemma~\ref{FromL9} and Proposition~\ref{lem14BEHA}, the $\mathbf F_p[G]$-module $U'/B$ is cyclic. In particular, it is isomorphic to a direct sum of $j$ copies of $W$, for some $j \in \{1, \dots, m\}$, by Lemma~\ref{lem:Isotypical}. \par Let $r \hskip.1cm \colon G \twoheadrightarrow G/B = Q$ be the canonical projection. We have seen that $r(U') = U'/B$ is a cyclic $\mathbf F_p[Q]$-module. Thus $U'/B$ has a $Q$-faithful unitary character by Proposition~\ref{lem14BEHA}. \par Since $y \not \in U'$, we have $r(y) \notin r(U')$. Since $F$ is contained in $\mathbf Fsol (G)$, it follows that $\langle\!\langle r(y) \rangle\!\rangle_Q$ is soluble finite, i.e., $r(y) \in \mathbf Fsol (Q)$. In particular the socle of $\langle\!\langle r(y) \rangle\!\rangle_Q$ is abelian. We may therefore choose $b'_y \in \langle\!\langle r(y) \rangle\!\rangle_Q$ such that $\langle\!\langle b'_y \rangle\!\rangle_Q$ is an abelian mini-foot of $Q$, not contained in $r(U')$. Now we discuss the structure of $\langle\!\langle b'_y \rangle\!\rangle_Q$, in order to achieve a contradiction. \vskip.2cm Since $\langle\!\langle b'_y \rangle\!\rangle_Q$ is a mini-foot of $Q$, it follows that $\langle\!\langle b'_y \rangle\!\rangle_Q$ may be viewed as a simple $\mathbf Z[Q]$-module (see Remark \ref{remabnormalmodule}). In particular the normal subgroup $$ N = r(U') \times \langle\!\langle b'_y \rangle\!\rangle_Q $$ is a semi-simple $\mathbf Z[Q]$-module, by Proposition~\ref{prop:semi-simple}. \par We claim that $N$ is not cyclic as a $\mathbf Z[Q]$-module. Suppose by contradiction that $N$ is cyclic. Then $N$ has a $Q$-faithful unitary character by Proposition~\ref{lem14BEHA}. Therefore $Q$ has an irreducible unitary representation whose restriction to $N$ is faithful, by Lemma~\ref{lem:G-faithful-normal-subgroup}. It follows that the set $r(F' \smallsetminus \{b_y\}) \cup \{b'_y\}$ is irreducibly faithful in~$Q$. Notice that, if the kernel of a unitary representation of $Q$ contains $r(x)$ for some $x$ in $F \smallsetminus \{y\}$, then it contains $r(b_x)$ since $b_x \in \langle\!\langle x \rangle\!\rangle_G$. Similarly, if that kernel contains $r(y)$, then it contains $b'_y$. Since $r(F' \smallsetminus \{b_y\}) \cup \{b'_y\}$ is irreducibly faithful in $Q$, we infer that the set $r(F)$ is irreducibly faithful in $Q$. Hence $F$ is irreducibly faithful in $G$, a contradiction. This confirms that the $\mathbf Z[Q]$-module $N$ is not cyclic. 
\par Since $Q$ is a quotient of $G$, we may view any $\mathbf Z[Q]$-module as a $\mathbf Z[G]$-module. We have seen above that $r(U')$ is a cyclic $\mathbf F_p[Q]$-module, hence a cyclic $\mathbf Z[G]$-module. Since $r(U')$ is a quotient of $U'$, it is isomorphic, as a $\mathbf Z[G]$-module, to a direct sum of copies of $W$. If $\langle\!\langle b'_y \rangle\!\rangle_Q$ were not isomorphic to $W$ as a $\mathbf Z[G]$-module, then the decomposition $N = r(U') \times \langle\!\langle b'_y \rangle\!\rangle_Q$ would be the isotypical decomposition of $N$. Since $\langle\!\langle b'_y \rangle\!\rangle_Q$ is a simple module, it is cyclic, and it would follow from Lemma~\ref{lem:IsotypicalDecomposition} that $N$ is a cyclic $\mathbf Z[G]$-module as well. This contradicts the claim above. We infer that $\langle\!\langle b'_y \rangle\!\rangle_Q$ is abelian of exponent $p$, and isomorphic to $W$ as an $\mathbf F_p[G]$-module. \par For each $x \in F \smallsetminus \{y\}$, the image $r(b_x)$ is contained in a simple $\mathbf F_p[G]$-submodule of $N$ contained in $r(U')$. Since $r(U') = U'/B$ is a direct sum of $j \le m$ copies of $W$, we deduce from Lemma~\ref{lem:counting} that $r(U')$ contains $q^{j-1} + \dots + q +1$ simple submodules. Since $N$ is a direct sum of $j+1$ copies of $W$, it contains $q^j + q^{j-1} + \dots + q +1$ simple submodules. Since $q^j \ge 2$, we deduce that $N$ contains a simple $\mathbf F_p[G]$-submodule $C$ which is neither contained in $r(U')$ nor equal to $\langle\!\langle b'_y \rangle\!\rangle_Q$. The quotient $N/C$ is a direct sum of at most $j \le m$ copies of $W$, and thus is cyclic by Lemma~\ref{lem:Isotypical}. Therefore $Q/C$ has an irreducible unitary representation whose restriction to $N/C$ is faithful, by Lemma~\ref{lem:G-faithful-normal-subgroup} and Proposition~\ref{lem14BEHA}. By construction, every element of $r(F' \smallsetminus \{b_y\}) \cup \{b'_y\}$ has a non-trivial image in $N/C$. We conclude that the set $r(F' \smallsetminus \{b_y\}) \cup \{b'_y\}$ is irreducibly faithful in $Q$. Therefore, as in the proof of the claim above, we deduce that the set $r(F)$ is also irreducibly faithful in $Q$. In particular $F$ is irreducibly faithful in $G$. This final contradiction finishes the proof. \end{proof} \subsection{End of proof of Theorem~\ref{thm:CharP(n)} and proof of Corollary~\ref{cor:P(n)-For-All-n}} \begin{proof}[Proof of \ref{1DEthm:CharP(n)} $\Rightarrow$ \ref{2DEthm:CharP(n)} in Theorem~\ref{thm:CharP(n)}] Let $n$ be a positive integer and $G$ a countable group for which \ref{1DEthm:CharP(n)} of Theorem~\ref{thm:CharP(n)} holds, i.e., a group which does not have Property $P(n)$. Upon replacing $n$ by a smaller integer, we may assume that $G$ has Property $P(n-1)$. (Recall that Property $P(0)$ holds for any group.) \par Let $F \subset G$ be an irreducibly unfaithful subset of size $n$. We invoke Theorem~\ref{thm:UnfaithfulSetSize-n}. This shows that $U = \langle\!\langle F \rangle\!\rangle_G$ is a finite normal subgroup which is an elementary abelian $p$-group, and which is isomorphic to a direct sum of $\ell+1$ copies of a simple $\mathbf F_p[G]$-module $W$ of dimension $m$ over $\mathbf k = \mathcal L_{\mathbf F_p[G]}(W)$, where $\ell \ge m$. In particular $U$ has a submodule $V$ which is isomorphic to a direct sum of $m+1$ copies of $W$. This proves that \ref{2DEthm:CharP(n)} holds.
\end{proof} \begin{proof}[Proof of Corollary~\ref{cor:P(n)-For-All-n}] That \ref{iDEcor:P(n)-For-All-n} implies \ref{iiDEcor:P(n)-For-All-n} is clear. That \ref{iiDEcor:P(n)-For-All-n} implies \ref{iiiDEcor:P(n)-For-All-n} follows from Theorem~\ref{thm:CharP(n)}. \par Assume that \ref{iiiDEcor:P(n)-For-All-n} holds. If $\operatorname{MA} (G) = \{e\}$, then \ref{iDEcor:P(n)-For-All-n} holds by Theorem~\ref{thm:Gasch}. If not, let $A$ be a non-trivial finite abelian normal subgroup of $G$ contained in the mini-socle. Let $p$ be a prime dividing $\vert A \vert$; let $A_p$ be the $p$-Sylow subgroup of $A$. Then $A_p$ is a finite $\mathbf F_p[G]$-module, which is semi-simple because~$A$, hence also $A_p$, is generated by mini-feet of $G$. Since (iii) holds, $A_p$ is a finite simple $\mathbf F_p[G]$-module. By Lemma~\ref{lem:cyclic1conjclass} and Proposition~\ref{prop:cyclicsemi-simple}, $A_p$ is generated by a single conjugacy class. Since that holds for all $p$ dividing $\vert A \vert$, it follows that $A$ is generated by a single conjugacy class (see Lemmas~\ref{lem:cyclic1conjclass} and~\ref{lem:IsotypicalDecomposition}). Therefore $G$ is irreducibly faithful by Theorem~\ref{thm:Gasch}. Thus \ref{iDEcor:P(n)-For-All-n} holds. \end{proof} \section{Irreducibly injective sets} \label{SectionQ(n)} \subsection{Property \texorpdfstring{$Q(n)$}{Q(n)}} Recall from Subsection \ref{IntroQ(n)} that a subset $F$ of a group $G$ is called \textbf{irreducibly injective} if $G$ has an irreducible unitary representation $\pi$ such that the restriction $\pi \vert_F$ is injective. We say that $G$ has Property $Q(n)$ if every subset of $G$ of size~$\le n$ is irreducibly injective. \par As mentioned earlier, the fact that every group has $P(1)$ is a classical result of Gelfand--Raikov. That every group has $Q(1)$ is a trivial fact. \par The goal of this section is to compare properties $P(m)$ and $Q(n)$. For a group $G$ written multiplicatively and for a subset $F$ of $G$, we define $$ FF^{-1} = \{z \in G \mid z = xy^{-1} \hskip.2cm \text{for some} \hskip.2cm x, y \in F \} . $$ (When $G$ is abelian and written additively, this is the same as the subset $F-F$ defined in Section \ref{Section:cyclic}.) To a subset $F$ of $G$, we associate a subset $\binom{F}{2}$ of $G \smallsetminus\{e\}$ defined as follows. Let $F^2_{\ne}$ be a subset of $F \times F$ consisting of exactly one of each $(x, y)$, $(y,x)$, for $x,y \in F$ with $x \ne y$. Then $$ \binom{F}{2} = \{z \in G \smallsetminus \{e\} \mid z = xy^{-1} \hskip.2cm \text{for some} \hskip.2cm (x,y) \in F^2_{\ne} \} . $$ In particular, if $F$ is a singleton, then $\binom{F}{2}$ is empty; if $F$ is finite of some size $n \ge 2$, then $\vert \binom{F}{2} \vert \le \binom{n}{2}$. Note that $\binom{F}{2}$ involves an arbitrary choice (its dependence on $F$ is not canonical), even though it is not apparent in the notation. \par The following lemma records the most straightforward implications between Properties $P$ and $Q$. \begin{lem} \label{lem:BasicPQ} Let $G$ be a group and $n$ a positive integer. \begin{enumerate}[label=(\roman*)] \item \label{iDElem:BasicPQ} Let $F$ be a finite subset of $G$ of size $n$; let $E$ be a finite subset of the form~$\binom{F}{2}$. Then $F$ is irreducibly injective if and only if $E$ is irreducibly faithful. \item \label{iiDElem:BasicPQ} If $G$ has $P\left( \binom{n}{2} \right)$, then $G$ has $Q(n)$. In particular $G$ has $Q(2)$. \item \label{iiiDElem:BasicPQ} If $G$ has $Q(n+1)$, then $G$ has $P(n)$. 
\end{enumerate} \end{lem} \begin{proof} Claim \ref{iDElem:BasicPQ} follows from the definitions. \par For \ref{iiDElem:BasicPQ}, let $F \subset G$ be a subset of size at most $n$. Let $E \subset G \smallsetminus \{e\}$ be a subset of the form $\binom{F}{2}$. Since $G$ has $P\left( \binom{n}{2} \right)$, and as $\vert E \vert \le \binom{n}{2}$, there exists an irreducible unitary representation $\pi$ of $G$ such that $\pi(z) \ne \mathrm{id}$ for all $z \in E$. It follows that $\pi(xy^{-1}) \ne \mathrm{id}$ for all $(x,y) \in F^2_{\ne}$, i.e., $\pi(x) \ne \pi(y)$ for all $(x,y) \in F^2$ with $x \ne y$. Hence $G$ has $Q(n)$. Applying this fact to $n=2$, and recalling that every group has $P(1)$, we deduce that every group has $Q(2)$. \par For \ref{iiiDElem:BasicPQ}, let $F \subset G$ be a subset of size at most $n$. Since $G$ has $Q(n+1)$, the set $F \cup \{e\}$ is irreducibly injective. Hence there exists an irreducible unitary representation $\pi$ of $G$ whose restriction to $F \cup \{e\}$ is injective; in particular $\pi(x) \ne \pi(e) = \mathrm{id}$ for every $x \in F \smallsetminus \{e\}$, so that $F$ is irreducibly faithful. This shows that $G$ has $P(n)$. \end{proof} Claim \ref{iiiDElem:BasicPQ} will be strengthened in Proposition \ref{prop:Q(n)=>P(n)}. \subsection{\texorpdfstring{$Q(n)$}{Q(n)} implies \texorpdfstring{$P(n)$}{P(n)}} Using Theorem~\ref{thm:CharP(n)}, we obtain for countable groups the following small improvement of Lemma~\ref{lem:BasicPQ}\ref{iiiDElem:BasicPQ}. \begin{prop} \label{prop:Q(n)=>P(n)} If a countable group has $Q(n)$ for some $n \ge 1$, then it also has $P(n)$. \end{prop} \begin{proof} Since every group has $Q(1)$ and $P(1)$, we may assume that $n \ge 2$. Let $G$ be a countable group satisfying $Q(n)$. By Lemma~\ref{lem:BasicPQ}\ref{iiiDElem:BasicPQ}, the group $G$ has $P(n-1)$. \par Suppose for a contradiction that $G$ does not have $P(n)$. We may then invoke Theorem~\ref{thm:CharP(n)}. Let $V, m, q$ be as in Theorem~\ref{thm:CharP(n)}\ref{2DEthm:CharP(n)}; in particular $V$ is a finite abelian normal subgroup of $G$ and we have $q^m + \dots + q +1 \le n$. If we had $q^m + \dots + q +1 < n$, then the other implication of Theorem~\ref{thm:CharP(n)} would imply that $G$ does not have $P(n-1)$, in contradiction with the previous paragraph. We conclude that $q^m + \dots + q +1 = n$. \par By Lemma~\ref{lem:F-F}, the group $V$ has a subset $F$ of size $q^m + \dots + q +1$ such that the set $F-F$ contains a non-zero element of each abelian mini-foot of $G$ contained in $V$. By Lemma~\ref{firstpartproofMainTheorem}, given an irreducible unitary representation $\pi$ of $G$, the kernel $\mathrm{Ker}(\pi)$ intersects $V$ non-trivially. More precisely, $\mathrm{Ker}(\pi)$ contains an abelian mini-foot of $G$ contained in $V$, and hence a non-zero element of $F-F$. Therefore $\pi(x) = \pi(y)$ for some $x \neq y \in F$. This proves that $F$ is not irreducibly injective. Since $\vert F \vert = n$, we deduce that $G$ does not have $Q(n)$, a contradiction. \end{proof} \subsection{The constant \texorpdfstring{$\alpha_{(q,m)}$}{alphaetc}} Theorem \ref{thm:Q(n)small-n}, which is the main result of this section, depends on technical results for which we introduce the following notation. Let $q$ be a power of some prime $p$ and $m \ge 1$ be an integer. Let $G_{(q, m)} = \mathrm{GL}(W) \ltimes V$ be the group defined in Example~\ref{exampleGqm}, whose notation is retained here. We define $$ \alpha_{(q, m)} $$ as the smallest cardinality of a subset $F \subset V$ such that the difference set $F-F$ contains a non-zero vector of each of the $q^m + \dots + q +1$ simple $\mathbf F_p[G_{(q, m)}]$-submodules of $V$. Lemma~\ref{lem:F-F} implies that the inequality $$ \alpha_{(q, m)} \le q^m + \dots + q +1 $$ holds for all $q$ and $m$.
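For the prime fields, the constant $\alpha_{(p, 1)}$ can be found by a naive exhaustive search: in that case $V = \mathbf F_p^2$, the relevant simple submodules are the $p+1$ one-dimensional subspaces, and one looks for the smallest subset $F \subseteq V$ whose difference set meets each of them non-trivially. The following Python sketch is only an illustration of ours, not part of the arguments; it assumes $q = p$ prime and $m = 1$, and the cases $q \in \{4, 8, 9\}$ and $(q, m) = (2, 2)$ appearing in the next lemma would in addition require arithmetic in the corresponding field extension, respectively in $W = \mathbf F_2^2$.
\begin{verbatim}
from itertools import combinations, product

def lines(p):
    # the p + 1 one-dimensional subspaces of F_p^2 (as sets of non-zero vectors)
    dirs = [(1, b) for b in range(p)] + [(0, 1)]
    return [{((t * a) % p, (t * b) % p) for t in range(1, p)} for a, b in dirs]

def covers(F, p):
    # does the difference set F - F contain a non-zero vector of every line?
    D = {((x0 - y0) % p, (x1 - y1) % p) for (x0, x1) in F for (y0, y1) in F}
    return all(D & L for L in lines(p))

def alpha(p):
    # smallest |F| with the covering property; since F - F is invariant under
    # translating F, we may assume that F contains the origin
    nonzero = [v for v in product(range(p), repeat=2) if v != (0, 0)]
    n = 1
    while True:
        n += 1
        if any(covers({(0, 0), *rest}, p) for rest in combinations(nonzero, n - 1)):
            return n

print({p: alpha(p) for p in (2, 3, 5, 7)})   # expected: {2: 3, 3: 4, 5: 4, 7: 5}
\end{verbatim}
Running this search recovers the values $\alpha_{(2, 1)} = 3$, $\alpha_{(3, 1)} = \alpha_{(5, 1)} = 4$ and $\alpha_{(7, 1)} = 5$ stated in the next lemma.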
The following result shows that the constant $\alpha_{(q, m)}$ is in fact independent of the group $G_{(q, m)}$. \begin{lem} \label{lem:alpha(q,m)} Let $G$ be a group. Suppose that there exist a prime $p$, a positive integer $m$, a finite simple $\mathbf F_p[G]$-module $W$ of dimension $m$ over $\mathbf k = \mathcal L_{\mathbf F_p[G]}(W)$, and a finite normal subgroup $V$ of $G$ which is an elementary abelian $p$-group and which is isomorphic as $\mathbf F_p[G]$-module to the direct sum of $m+1$ copies of $W$. \par Then $\alpha_{(q, m)}$ is equal to the smallest cardinality of a subset $F \subset V$ such that $F-F$ contains a non-zero element of each of the simple $\mathbf F_p[G]$-submodules of $V$. \end{lem} \begin{proof} The fact that $W$ is a $\mathbf k[G]$-module yields a homomorphism $G \to \mathrm{GL}_m(\mathbf k) = \mathrm{GL}(W)$. Set $L = \mathrm{GL}(W)$. We may view $V$ both as an $\mathbf F_p[G]$-module and as an $\mathbf F_p[L]$-module. Moreover, every simple $\mathbf F_p[G]$-submodule is also a simple $\mathbf F_p[L]$-submodule. Since the number of simple $\mathbf F_p[G]$-submodules equals the number of simple $\mathbf F_p[L]$-sub\-modules by Lemma~\ref{lem:counting}, we infer that every simple $\mathbf F_p[L]$-submodule of $V$ is also a simple $\mathbf F_p[G]$-submodule of $V$. In particular, an additive subgroup of $V$ is a simple $\mathbf F_p[L]$-submodule if and only if it is a simple $\mathbf F_p[G]$-submodule. The required assertion follows. \end{proof} Clearly, we have $$ \binom{\alpha_{(q, m)}} 2 \ge q^m + \dots + q +1. $$ The following lemma provides the values of $\alpha_{(q, m)}$ for some small $q$ and $m$. The proof of the last item was computer-aided. We are grateful to Max Horn for having independently checked the result. \begin{lem} \label{lem:alpha(q,m)small} With the notation $\alpha_{(q, m)}$ defined above, we have: \begin{enumerate}[label=(\roman*)] \item $\alpha_{(2, 1)} = 3$. \item $\alpha_{(3, 1)} = \alpha_{(4, 1)} = \alpha_{(5, 1)} = 4$. \item $\alpha_{(7, 1)} = \alpha_{(8, 1)} = \alpha_{(2, 2)} = 5$. \item $\alpha_{(9, 1)} = 6$. \end{enumerate} \end{lem} \begin{proof} Consider as above the group $G_{(q, m)} = \mathrm{GL}(W) \ltimes V$. Recall that $q = \vert \mathbf k \vert$, $m = \dim_{\mathbf k}(W)$, and $V = W \oplus \dots \oplus W$ ($m+1$ times). \vskip.2cm If $m =1$, the simple $\mathbf F_p[\mathbf k^\times]$-submodules of $V$ coincide with the $1$-dimensional subspaces of $V = \mathbf k^2$. As noticed above, we have $\binom{\alpha_{(q, 1)}} 2 \ge q +1$. For $q = 2$ [respectively, $3 \le q \le 5$, $7 \le q \le 9$], this implies $\alpha_{(2,1)} \ge 3$ [respectively $\alpha_{(q,1)} \ge 4$, $\alpha_{(q,1)} \ge 5$]. For $q \in \{2,3,4,5,7,8\}$, to show that this lower bound on $\alpha_{(q,1)}$ is attained, it suffices to exhibit a subset $F \subset V$ of the corresponding size such that $F-F$ has the required property. One can check that the following sets do the job, where the elements of the prime field $\mathbf F_p$ are denoted by $0, 1, \dots, p-1$. \par For $q = 2$, we set $F = \{(0, 0), (1, 0), (0, 1)\}$. \par For $q = 3$, we set $F = \{(0,0), (0, 1), (1, 0), (1, 1)\}$. \par For $q = 4$, we set $F = \{(0, 0), (1,0), (0, 1), (1, x)\}$, where $\mathbf k$ has been identified with $\mathbf F_2[x] / (x^2+x+1)$. \par For $q=5$, we set $F = \{(0, 0), (1,0), (0,1), (3, 4)\}$. \par For $q=7$, we set $F = \{ (0, 0), (1,0), (0,1), (2, 3), (5, 2)\}$.
\par For $q=8$, we set $F = \{ (0, 0), (1,0), (0,1), (1, x), (x^2+x, x^2)\}$, where $\mathbf k$ has been identified with $\mathbf F_2[x] / (x^3 +x +1)$. \par For $q = 9$, the situation is different. We know that $\binom{\alpha_{(9, 1)}} 2 \ge 10$, so that $\alpha_{(9, 1)} \ge 5$. With the help of a computer, we checked that no subset $F$ in $V$ of size $5$ is such that $F-F$ contains a non-zero vector of each of the $10$ one-dimensional subspaces of $V$. On the other hand, one verifies that the set $$ F= \{ (0, 0), (1,0), (0,1), (0, 2), (0, x), (2, 2x+1) \} $$ has this property, where $\mathbf k$ has been identified with $\mathbf F_3[x] / (x^2-x-1)$. Thus $\alpha_{(9, 1)} = 6$. \vskip.2cm Finally, consider the case of $m=2$ and $q=2$. Since $\binom{\alpha_{(2, 2)}} 2 \ge 2^2 + 2 +1 = 7$, we have $\alpha_{(2, 2)} \ge 5$. Let $a$ be a non-zero vector in~$W$. One checks that the set $$ F = \{(0, 0, 0), (a, 0, 0), (0, a, 0), (0, 0, a), (a, a, a)\} $$ satisfies the required condition, so that $\alpha_{(2, 2)} = 5$. \end{proof} \subsection{\texorpdfstring{$P(\binom{n}{2} -1)$}{} sometimes implies \texorpdfstring{$Q(n)$}{}} We are now ready to present the main technical result of this section. It may be viewed as a supplement to Lemma~\ref{lem:BasicPQ}\ref{iiDElem:BasicPQ}. \begin{prop} \label{prop:nonQ(n)} Let $n$ be an integer, $n \ge 3$. Let $G$ be a countable group with Property $P\left( \binom{n}{2} -1 \right)$. Assume that, for all pairs $(q, m)$ consisting of a prime power $q$ and an integer $m$ such that $q^m + \dots + q +1 = \binom{n}{2}$, we have $\alpha_{(q, m)} > n$. \par Then $G$ has $Q(n)$. \end{prop} \begin{proof} Suppose for a contradiction that $G$ does not have $Q(n)$. Let $F \subset G$ be a subset of size $\le n$ which is not irreducibly injective in~$G$. Upon replacing $F$ by $Fx^{-1}$ for some $x \in F$, we may assume without loss of generality that $F$ contains the neutral element $e$. \par Let $E \subset G \smallsetminus \{e\}$ be a subset of the form $\binom{F}{2}$; recall that $\vert E \vert \le \binom{n}{2}$. Since $e \in F$, we may choose $E$ in such a way that $E$ contains $F \smallsetminus \{e\}$. It follows from Lemma \ref{lem:BasicPQ}\ref{iDElem:BasicPQ} that $E$ is irreducibly unfaithful. Since $G$ has $P\left( \binom{n}{2} -1 \right)$ by hypothesis, we deduce that $\vert E \vert = \binom{n}{2}$. Set $U = \langle\!\langle E \rangle\!\rangle_G$. Since $F \smallsetminus \{e\} \subset E$, we have $F \subset U$. \par We invoke Theorem~\ref{thm:UnfaithfulSetSize-n} and use its notation, except for $F$ there being $E$ here. In particular, there exist a prime $p$ and a simple $\mathbf F_p[G]$-module $W$ such that $U$ is isomorphic as an $\mathbf F_p[G]$-module to the direct sum of $\ell+1$ copies of $W$ for some $\ell \ge m$. By Theorem~\ref{thm:UnfaithfulSetSize-n}\ref{iiiDEthm:UnfaithfulSetSize-n}, we have $q^m + \dots + q +1 = \binom{n}{2}$. Set $V = \bigoplus_0^m W$. \par We next claim that there exists a surjective map of $\mathbf F_p[G]$-modules $r \hskip.1cm \colon U \twoheadrightarrow V$ whose restriction to $F$ is injective. If $\ell = m$, then $U = V$ and $r$ can be defined as the identity map. If $\ell > m$, we proceed by induction on $\ell - m$. Lemma~\ref{lem:counting} ensures that the number of simple $\mathbf F_p[G]$-submodules of $U$ is strictly larger than $\binom{n}{2}$. Since $\binom{n}{2} = \vert E \vert$, there exists a simple $\mathbf F_p[G]$-submodule $U_0$ of $U$ such that $U_0 \cap E = \{0\}$. 
If $r_0 \hskip.1cm \colon U \twoheadrightarrow U/U_0$ denotes the quotient map, we have $\mathrm{Ker}(r_0) \cap E = \{0\}$, and it follows that the restriction of $r_0$ to $F$ is injective. Since $U/U_0$ is isomorphic to a direct sum of~$\ell$ copies of $W$, the induction hypothesis guarantees the existence of a surjective map of $\mathbf F_p[G]$-modules $r_1 \hskip.1cm \colon U/U_0 \twoheadrightarrow V$ whose restriction to $r_0(F)$ is injective. The map $r = r_1 \circ r_0 \hskip.1cm \colon U \twoheadrightarrow V$ satisfies the required property. This proves the claim. \par Set $E' = r(E)$, $F' = r(F)$ and $K = \mathrm{Ker}(r)$. Since $K$ is an $\mathbf F_p[G]$-submodule of $U$, we may view it as a normal subgroup of $G$. We view $E'$, $F'$ and $V$ as subsets of the quotient group $G' = G/K$; observe that $E' \subset G' \smallsetminus \{e\}$ is of the form $\binom{F'}{2}$. Since $F$ is not irreducibly injective in $G$, it follows that $F'$ is not irreducibly injective in $G'$. Hence $E'$ is not irreducibly faithful in~$G'$. Therefore $E'$ contains a non-zero element in each of the simple submodules of $V$, by Lemma~\ref{firstpartproofMainTheorem}. Recalling that $E' \subset F' - F'$, we deduce from Lemma~\ref{lem:alpha(q,m)} that $\alpha_{(q, m)} \le \vert F' \vert$. Since $\vert F' \vert = \vert F \vert = n$, this contradicts the hypothesis that $\alpha_{(q, m)} > n$. \end{proof} \begin{rem} As mentioned in Section~\ref{subsec:1.a}, the Goormaghtigh Conjecture predicts that for every integer $\ell$, there exists at most one pair $(q, m)$, with $q$ a prime power and $m$ a positive integer, such that $q^m + \dots + q + 1 = \ell$, except for $\ell = 31$. Since $31$ is not of the form $\binom{n}{2}$, that conjecture predicts that the condition from Proposition~\ref{prop:nonQ(n)} needs to be checked for at most one pair $(q, m)$, once the integer $n$ is fixed. \end{rem} \begin{thm} \label{thm:Q(n)small-n} Let $G$ be a group. Then $G$ has Properties $P(2)$ and $Q(2)$. Suppose moreover that $G$ is countable; then: \begin{enumerate}[label=(\roman*)] \item \label{iDEthm:Q(n)small-n} $G$ has $Q(3)$ if and only if $G$ has $P(3)$; \item \label{iiDEthm:Q(n)small-n} $G$ has $Q(4)$ if and only if $G$ has $P(6)$; \item \label{iiiDEthm:Q(n)small-n} $G$ has $Q(5)$ if and only if $G$ has $P(9)$. \end{enumerate} \end{thm} \begin{proof} By Lemma~\ref{lem:BasicPQ}\ref{iiDElem:BasicPQ}, Property $P( \binom{n}{2} )$ implies $Q(n)$. For $n=3$ and $4$, this yields $P(3) \Rightarrow Q(3)$ and $P(6) \Rightarrow Q(4)$. By Proposition~\ref{prop:Q(n)=>P(n)}, we have $Q(3) \Rightarrow P(3)$. \par Among other things, this proves~\ref{iDEthm:Q(n)small-n}. \vskip.2cm Let now $G$ be a countable group that does not satisfy $P(6)$. To show \ref{iiDEthm:Q(n)small-n}, it remains to show that $G$ does not have $Q(4)$. We may assume that $G$ has $Q(3)$, since otherwise we are already done. Hence, $G$ has $P(3)$ by~\ref{iDEthm:Q(n)small-n}. Let $n$ be the least integer such that $G$ does not have $P(n)$. Hence $n$ is one of $4$, $5$, or $6$. \par If $n = 4$, we deduce from Theorem~\ref{thm:CharP(n)} that $G$ contains a normal subgroup $V$ isomorphic to $\mathbf F_3 \oplus \mathbf F_3$, on which the $G$-action is by scalar multiplication. Let $\pi$ be an irreducible unitary representation of $G$. Set $Q = G/\mathrm{Ker}(\pi)$ and let $r \hskip.1cm \colon G \twoheadrightarrow Q$ be the canonical projection.
By Proposition~\ref{structureminisocle}\ref{7DEstructureminisocle}, the subgroup $r(V) \le Q$ is generated by abelian mini-feet of $Q$, and it is an elementary abelian $3$-group. Suppose that $r(V)$ were isomorphic to $V$; note that $Q$ would act on $r(V)$ by scalar multiplication; since $Q$ is irreducibly faithful, hence has Property $P(4)$, this would contradict Theorem~\ref{thm:CharP(n)}. Hence the restriction of $r$ to $V$ cannot be faithful. (Note moreover that, for each of the simple $\mathbf F_3[G]$-modules $W$ contained in $V$, the restriction to $W$ of the projection $r$ is either injective or the zero map.) Therefore $\mathrm{Ker}(r) = \mathrm{Ker}(\pi)$ contains at least one of the $4$ cyclic subgroups of order~$3$ of $V$. Lemma~\ref{lem:alpha(q,m)small} yields a subset $F$ of $V$ of size $4$ such that $F - F$ contains a non-trivial element of each of the $4$ cyclic subgroups of order~$3$ of $V$. Therefore $\pi(a) = \pi(b)$ for some $a, b$ distinct in $F$. This shows that $G$ does not have Property $Q(4)$. \par If $n = 5$ or $n = 6$, similar arguments using Lemmas \ref{lem:alpha(q,m)} and \ref{lem:alpha(q,m)small} apply, each time with $\vert F \vert = 4$. This confirms that \ref{iiDEthm:Q(n)small-n} holds. \vskip.2cm Arguing similarly using Theorem~\ref{thm:CharP(n)} and Lemma~\ref{lem:alpha(q,m)small}, we see that $Q(5)$ implies $P(9)$. Conversely, invoking Proposition~\ref{prop:nonQ(n)} for $n = 5$, we deduce that $P(9)$ implies $Q(5)$ since $\alpha_{(9, 1)} = 6$ by Lemma~\ref{lem:alpha(q,m)small}. This proves \ref{iiiDEthm:Q(n)small-n}. \end{proof} \subsection{From \texorpdfstring{$Q(n)$}{} to additive combinatorics} \label{AdditiveComb} Theorem~\ref{thm:Q(n)small-n} suggests the following question. \begin{ques} \label{ques:Q(n)} Can we characterize Property $Q(n)$ by an algebraic property of $G$, in the same vein as in Theorem~\ref{thm:CharP(n)}? \par In particular, is it true that, for each $n \ge 1$, there exists an integer $f(n) \ge 1$ such that a countable group $G$ has Property $Q(n)$ if and only if it has Property~$P(f(n))$ ? \end{ques} The proof of Theorem~\ref{thm:Q(n)small-n} suggests that an answer to Question~\ref{ques:Q(n)} might require computing the numbers $\alpha_{(q, m)}$ for all $(q, m)$. This is confirmed by the following observation. \begin{obs} \label{obs:n_p} The group $G_{(q, m)}$ of Example~\ref{exampleGqm} has Property $Q(\alpha_{(q, m)}-1)$, but not $Q(\alpha_{(q, m)})$. \end{obs} \begin{proof} That $G = G_{(q, m)}$ does not have $Q(\alpha_{(q, m)})$ follows from the definition and from Lemma~\ref{lem:BasicPQ}, in view of Theorem~\ref{thm:CharP(n)}. \par In order to show that $G_{(q, m)}$ has $Q(\alpha_{(q, m)}-1)$, we fix a subset $F$ of $G$ such that $\vert F \vert < \alpha_{(q, m)}$. We shall prove that $FF^{-1}$ is irreducibly faithful. This implies that $F$ is irreducibly injective, as required. Modules below refer to the ring $\mathbf F_p[G]$. \par Notice that $FF^{-1}$ remains unchanged when $F$ is replaced by a translate $Fg$, for some $g \in G$. Without loss of generality we may thus assume that $F$ contains~$e$. In particular $F \subseteq FF^{-1}$. \par Let $\{g_1, \dots , g_k\} \subset G$ be a set of minimal cardinality such that $F \subset \bigcup_{i=1}^k V g_i$. For each $i$, set $F_i = F \cap V g_i$. Notice that if $x \in F_i$ and $y \in F_j$ with $i \neq j$, then $xy^{-1} \not \in V$ because $g_ig_j^{-1} \notin V$. Therefore the intersection $FF^{-1} \cap V$ coincides with $\bigcup_{i=1}^k F_i F_i^{-1}$.
For each $i$, we set $F'_i = F_i g_i^{-1}$, and set $F' = \bigcup_{i=1}^k F'_i$. Hence $$ \vert F' \vert \le \vert F \vert , \hskip.2cm F' \subseteq V \hskip.2cm \text{and} \hskip.2cm F'(F')^{-1} \supseteq FF^{-1} \cap V . \leqno{(\sharp)} $$ \par We next observe that, if $W$ is any simple submodule of $V$, then the quotient group $G/W$ is irreducibly faithful. This follows from Theorem~\ref{thm:CharP(n)} and Corollary~\ref{cor:P(n)-For-All-n} (using an argument similar to the one in the discussion of Example~\ref{exampleGqm}). Therefore, if $FF^{-1}$ were not irreducibly faithful, then it would contain a non-zero element of each simple submodule of $V$. By ($\sharp$), $F'(F')^{-1}$ would also contain a non-zero element of each simple submodule of $V$. This would contradict the sequence of inequalities $\vert F' \vert \le \vert F \vert < \alpha_{(q,m)}$. It follows that $F F^{-1}$ is irreducibly faithful, and this ends the proof. \end{proof} In particular, answering Question~\ref{ques:Q(n)} for $C_p \times C_p = G_{(p, 1)}$ amounts to computing $\alpha_{(p, 1)}$. This happens to be an open problem in additive combinatorics, see Question~5.2 in \cite{CrSL--07}. As pointed out in this reference, the value of $\alpha_{(p, 1)} = n_p$ can be estimated as follows. On the one hand, since $$ \frac {n_p^2} 2 > \binom{n_p}{2} \ge p+1 > p, $$ we have $n_p > \sqrt{2p}$. On the other hand, using Theorems 1.2 and 2.1 from \cite{FiJa--00}, we obtain the upper bound $$ n_p \le 2 \lceil \sqrt p \rceil +1. $$ However, determining the exact value of $n_p$ remains an open problem. We are grateful to Ben Green for pointing out the reference \cite{CrSL--07} and for discussing it with us. \appendix \section{A finite group all of whose irreducible representations have non-abelian kernels} \label{Section:appendixA} We know from Proposition~\ref{prop:Kernel} that every countable group $G$ has an irreducible unitary representation whose kernel is contained in $\mathbf Fsol (G)$, and we have cited in Remark \ref{BrolineGarrison} the result according to which every finite group has an irreducible representation with nilpotent kernel. Since we have not found appropriate references in the literature for groups without irreducible representations having abelian kernels, we describe here an example, long known to experts. \vskip.2cm Let $D_8$ denote the dihedral group of order~$8$. The centre of $D_8$ is cyclic of order~$2$. For $i = 1, 2, 3$, let $H_i$ be a group isomorphic to $D_8$, and let $z_i$ be the non-trivial element of the centre of $H_i$. We set $$ G = (H_1 \times H_2 \times H_3) / \langle z_1 z_2 z_3 \rangle. $$ Thus $G$ is a nilpotent group of order $2^{8} = 256$. Its centre $Z(G)$ is isomorphic to $C_2 \times C_2$. The socle of $G$ coincides with its centre, and $\mathbf Fsol (G)$ is the group $G$ itself (see Example \ref{exe:socletrunk}(6)). \begin{prop} \label{prop:AbelianNormal} For every abelian normal subgroup $N$ in $G$, the centre of the quotient $G/N$ is not cyclic. \end{prop} \begin{proof} The natural homomorphism $H_1 \times H_2 \times H_3 \to G$ induces an embedding $H_i \to G$ for each $i$. We identify $H_i$ with its image in $G$. In particular we view $z_1, z_2, z_3$ as elements of $G$. The centre of $G$ is $Z(G) = \{e, z_1, z_2, z_3\}$. \par We assume for a contradiction that $N$ is an abelian normal subgroup of $G$ such that $G/N$ has a cyclic centre. Since the centre of $G$ is not cyclic, we have $N \neq \{e\}$. Let $r \hskip.1cm \colon G \twoheadrightarrow G/N$ be the canonical projection.
\par Let $i \in \{1, 2, 3\}$. Since $N$ is abelian and $H_i$ is not, $r(H_i) \cong H_i / (H_i \cap N)$ is non-trivial. In particular $r(H_i)$ has a non-trivial centre. We may thus choose an element $h_i \in H_i$ such that $r(h_i)$ is a non-trivial element of the centre $Z(r(H_i))$. Since $H_1, H_2$ and $H_3$ commute pairwise in $G$, and since $G$ is generated by these subgroups, we have $Z(H_i) \le Z(G)$ and $Z(r(H_i)) \le Z(r(G))$. In particular $r(h_i) \in Z(G/N)$. \par Since $G$ is a $2$-group, every non-trivial normal subgroup has a non-trivial intersection with the centre $Z(G)$. Thus there exists $j \in \{1, 2, 3\}$ such that $z_j \in N$. Since $Z(G/N)$ is cyclic and $Z(r(H_j)) \le Z(G/N)$, it follows that $Z(r(H_j))$ is cyclic. Since $N \cap H_j$ is a non-trivial normal subgroup of $H_j \cong D_8$, and since every non-trivial normal subgroup of $D_8$ contains its derived subgroup, the quotient $r(H_j) \cong H_j / (N \cap H_j)$ is abelian. Therefore, it coincides with its centre, hence it is cyclic. The only abelian normal subgroups of $H_j \cong D_8$ affording a cyclic quotient group are its subgroups of index~$2$. Thus $r(H_j) \cong H_j/ (N \cap H_j)$ is of order~$2$. In particular $N \cap H_j$ is a maximal subgroup of $H_j$, and $r(h_j)$ is of order~$2$. Moreover we have $H_j = \langle h_j \rangle (N \cap H_j)$ since $r(h_j) \neq e$ and hence $h_j \not \in N \cap H_j$. \par Let now $i \in \{1, 2, 3\}$ such that $i \neq j$. We know that $Z(G/N)$ is cyclic, and that $r(h_i)$ and $r(h_j)$ are two non-trivial elements in $Z(G/N)$. Since moreover $r(h_j)$ is of order~$2$, it is the unique element of order~$2$ of the cyclic $2$-group $Z(G/N)$, hence it lies in the non-trivial subgroup $\langle r(h_i) \rangle$; we infer that $r(h_i)^k r(h_j) = e$ for some integer $k$. In other words $h_i^k h_j \in N$. Since $N$ is abelian, it follows that $h_i^k h_j$ commutes with $N \cap H_j$. Moreover $h_i$ commutes with $H_j$, hence $h_i^k$ commutes with $N \cap H_j$. It follows that $h_j$ commutes with $N \cap H_j$. Since $N$ is abelian and since $H_j = \langle h_j \rangle (N \cap H_j)$, it follows that $H_j \cong D_8$ is abelian, which is absurd. \end{proof} By Schur's Lemma, the image $\pi(G)$ of $G$ under any irreducible representation $\pi$ has cyclic centre. It follows from Proposition~\ref{prop:AbelianNormal} that the kernel of $\pi$ cannot be abelian. Thus we obtain: \begin{cor} \label{cor:Appendix} Every irreducible representation of $G$ has a non-abelian kernel. \end{cor} Here is another proof of Corollary \ref{cor:Appendix}. Let $z$ denote the non-trivial element of the centre of $D_8$. The group $D_8$ has $4$ irreducible representations of degree $1$, whose kernels contain $z$, and one irreducible representation $\pi$ of degree $2$, such that $\pi(z) = -\mathrm{id}$. Consequently, the group $H_1 \times H_2 \times H_3$ has \begin{enumerate} \item[---] $64$ irreducible representations of degree $1$, \item[---] $48$ irreducible representations of degree $2$, \item[---] $12$ irreducible representations of degree $4$, \item[---] $1$ irreducible representation of degree $8$, \end{enumerate} and the irreducible representations having kernels containing $z_1z_2z_3$ are precisely those of dimensions $1$ and $4$. It follows that $G$ has $64$ irreducible representations of degree $1$, none of them with abelian kernel, and $12$ irreducible representations of degree $4$, each having a kernel isomorphic to $D_8$. \section{On the collection of kernels of irreducible unitary representations} \label{Section:appendixB} Given a group $G$, we denote by $\mathrm{Sub}(G)$ the set of all subgroups of $G$, endowed with the \textbf{Chabauty topology}.
In this appendix, we collect some observations concerning the subspace $\mathcal K_G$ of $\mathrm{Sub}(G)$ of all kernels of irreducible unitary representations of $G$, for comparison with the situation in the particular case of finite groups. \vskip.2cm We begin by a short reminder on the space $\mathrm{Sub}(G)$. In a group $G$, every subset can be identified, in a canonical way, with a function $f \hskip.1cm \colon G \to \{0, 1\}$. The set $\{0, 1\}^G$ of all such functions, endowed with the topology of pointwise convergence, is compact by Tychonoff's theorem. The set $\mathrm{Sub}(G)$ is closed in $\{0, 1\}^G$, because being a subgroup is a pointwise condition. Endowed with the induced topology, the space $\mathrm{Sub}(G)$ is called the \textbf{Chabauty space} of subgroups of $G$. \par By definition of the topology, a sequence $(K_n)_{n \ge 1}$ in $\mathrm{Sub}(G)$ converges to a subgroup $K \le G$ if and only if, for every finite subset $F \subset G$, the intersection $K_n \cap F$ is equal to $K \cap F$ for all sufficiently large $n$. When $G$ is countable, it suffices to check the latter condition on the finite sets $F$ belonging to some ascending chain $F_1 \subset F_2 \subset \dots$ of finite subsets of $G$ such that $\bigcup_{m \ge 1} F_m = G$. \begin{exe} \label{exKGnotclosed} Our first observation is that the subspace $\mathcal K_G$ need not be a closed subset of the Chabauty space $\mathrm{Sub}(G)$. \par For this, consider the group $$ G = \langle x, y_n \mid n \ge 1 \rangle $$ of Example~\ref{exe:socletrunk}(7). Recall that $G$ is a subgroup of $P = \prod_{n \ge 1} H_n$, where $$ H_n = \langle x_n, y_n, z_n \mid x_n^3, \hskip.1cm y_n^3, \hskip.1cm [x_n, y_n]z_n^{-1}, \hskip.1cm [x_n, z_n], \hskip.1cm [y_n, z_n] \rangle $$ for each $n \ge 1$, and $x = (x_n)_{n \ge 1}$. For $m \ge 1$, define the subgroup $$ K_m = \langle y_j^{-1}z_{m+1}, \hskip.1cm y_{m+1}^{-1} y_k, \hskip.1cm z_j, \hskip.1cm z_{m+1}^{-1} z_k \mid 1 \le j \le m, \hskip.1cm k \ge m+2 \rangle $$ of $G$. In Lemma \ref{lemmaforB1} and just after, we will show that: \begin{enumerate} \item[---] for all $m \ge 1$, the group $K_m$ is normal in $G$ and is in $\mathcal K_G$; \item[---] the sequence $(K_m)_{m \ge 1}$ converges in $\mathrm{Sub}(G)$ to a normal subgroup $K$ of $G$ which is not in $\mathcal K_G$. \end{enumerate} \end{exe} \begin{lem} \label{lemmaforB1} Let the notation be as just above. Let $m \ge 1$. \begin{enumerate}[label=(\roman*)] \item \label{iDElemmaforB1} The group $K_m$ is normal in $G$. \item \label{iiDElemmaforB1} We have $G = K_m \rtimes \langle x, y_{m+1}, z_{m+1} \rangle$. In particular $G/K_m$ is of order $27$, it is isomorphic to $H_1$, which is a Heisenberg group over $\mathbf F_3$. \item \label{iiiDElemmaforB1} The assignments $$ \left\{ \begin{array}{rcll} x & \mapsto & x_1 \\ y_j & \mapsto & z_1 & \text{for all } j \le m \\ y_j & \mapsto & y_1 & \text{for all } j \ge m+1 \\ z_j & \mapsto & e & \text{for all } j \le m \\ z_j & \mapsto & z_1 & \text{for all } j \ge m+1 \end{array} \right. $$ extend to a uniquely defined group homomorphism $\rho_m \hskip.1cm \colon G \twoheadrightarrow H_1$ which is surjective. \end{enumerate} \end{lem} \begin{proof} Recall that we have defined in Example~\ref{exe:socletrunk}(7) the subgroup $$ A = \langle y_j, z_k \mid j, k \ge 1 \rangle . $$ It is abelian, normal and of index $3$ in $G$. Observe that $K_m \le A$. 
\vskip.2cm \ref{iDElemmaforB1} The generators defining $K_m$ are either central in $G$, or of the form $y_j^{-1}z_{m+1}$ for $j \le m$, or of the form $y_{m+1}^{-1} y_k$ for $k \ge m+2$. Every conjugate of $y_j^{-1}z_{m+1}$ in $G$ belongs to the coset $y_j^{-1} z_{m+1} \langle z_j \rangle$, which is entirely contained in $K_m$ since $j \le m$. Similarly, the conjugacy class of $y_{m+1}^{-1} y_k$ in $G$ is of size at most~$3$ (because the centralizer $C_G(y_{m+1}^{-1} y_k)$ contains $A$, which is of index~$3$ in $G$), and is contained in the coset $y_{m+1}^{-1} y_k \langle z_{m+1}^{-1} z_k \rangle$, which is entirely contained in $K_m$ as well since $k \ge m+2$. Thus $K_m$ contains the conjugacy class of each of its generators, and therefore $K_m$ is normal in $G$. \vskip.2cm \ref{iiDElemmaforB1} Let $D = \langle x, y_{m+1}, z_{m+1} \rangle$. The natural projection $P \twoheadrightarrow H_{m+1}$ restricts to an isomorphism $D \overset{\cong}{\longrightarrow} H_{m+1}$. In particular $D \cong H_{m+1} \cong H_1$. We must show that $G = K_m \rtimes D$. \par We have $A = \langle y_{m+1}, z_{m+1}\rangle K_m$. Viewing $A$ as a vector space over $\mathbf F_3$ with basis $\{y_1, z_1, y_2, z_2, \dots \}$, we obtain a direct product decomposition $A = \langle y_{m+1}, z_{m+1} \rangle \times K_m$. Since $A \cap D = \langle y_{m+1}, z_{m+1} \rangle$ because $x \not \in A$ as observed in Example~\ref{exe:socletrunk}(7), we deduce that $K_m \cap D = \{e\}$. Since $K_m D$ contains $A$ as a proper subgroup, and since $[G : A]= 3$, we infer that $G = K_m D$. This confirms that $G$ is the semi-direct product $K_m \rtimes \langle x, y_{m+1}, z_{m+1} \rangle$. \vskip.2cm \ref{iiiDElemmaforB1} Observe that, modulo $K_m$, we have $$ \left\{ \begin{array}{rcllll} & y_j &\equiv &z_{m+1} &\pmod{K_m} &\hskip.2cm \text{for all} \hskip.2cm j \le m \\ & y_j &\equiv &y_{m+1} &\pmod{K_m} &\hskip.2cm \text{for all} \hskip.2cm j \ge m+1 \\ & z_j &\equiv &e &\pmod{K_m} &\hskip.2cm \text{for all} \hskip.2cm j \le m \\ & z_j &\equiv &z_{m+1} &\pmod{K_m} &\hskip.2cm \text{for all} \hskip.2cm j \ge m+1 \end{array} \right. $$ The homomorphism $\rho_m$ is the composition of the canonical projection $G \twoheadrightarrow G/K_m$ with the isomorphism $G/K_m \to H_1$ mapping $xK_m $ to $x_1$, and $y_{m+1}K_m$ to $y_1$, and $z_{m+1}K_m$ to $z_1$. \end{proof} \begin{proof}[End of proof of the claims of Example \ref{exKGnotclosed}] The group $H_1$ is irreducibly faithful; this can be seen on the character table of the group (see, for example, Page 216 in \cite{Kowa--14}); alternatively, it follows from Theorem~\ref{thm:Gasch} because $H_1$ is a nilpotent group with cyclic centre. Therefore $K_m = \mathrm{Ker}(\rho_m)$ is in $\mathcal K_G$ for all $m \ge 1$. \par Let now $\rho \hskip.1cm \colon G \to H_1$ be the group homomorphism defined by the assignments $$ \left\{ \begin{array}{rcll} x & \mapsto & x_1 \\ y_j & \mapsto & z_1 & \text{for all } j \ge 1 \\ z_j & \mapsto & e & \text{for all } j \ge 1. \end{array} \right. $$ Thus $\rho(G) = \langle x_1, z_1\rangle \cong C_3 \times C_3$. In particular $K := \mathrm{Ker}(\rho) \not \in \mathcal K_G$. \par It remains to observe that the sequence $(\rho_m)_{m \ge 1}$ converges, in the topology of pointwise convergence, to the homomorphism $\rho$. This readily implies that $(K_m)_{m \ge 1}$ converges to $K$ in the Chabauty topology. Thus $(K_m)_{m \ge 1}$ is a sequence in $\mathcal K_G$ converging to a normal subgroup of $G$ not belonging to $\mathcal K_G$. 
\end{proof} Despite the fact that it need not be closed in $\mathrm{Sub} (G)$, the set $\mathcal K_G$ always contains minimal elements. \begin{prop} \label{prop:Chabauty} Let $G$ be a countable group and let $\mathcal K_G$ be, as above, the set of all kernels of irreducible unitary representations of $G$. \par Then, for every descending chain $(K_i)_{i \in I}$ in $\mathcal K_G$, the intersection $\bigcap_{i \in I} K_i$ belongs to $\mathcal K_G$. In particular every $K \in \mathcal K_G$ contains a minimal $K_0 \in \mathcal K_G$. \end{prop} \begin{proof} Let $(K_i)_{i \in I}$ be a descending chain in $\mathcal K_G$. Set $K = \bigcap_{i \in I} K_i$. We must show that $G/K$ is irreducibly faithful. To that end, we consider a normal subgroup $A$ of $G$ containing $K$, such that $A/K$ is a finite abelian normal subgroup of $G/K$ contained in $\operatorname{MA}(G/K)$. \par We have $K \le A \cap K_i \le A$ for all $i$. Since $A/K$ is finite, it follows that, for all sufficiently large $i$, we have $A \cap K_i = K$, and therefore $A/ (A \cap K_i) = A/K$. Since $A/K \le \operatorname{MA}(G/K)$, we have $A/ (A \cap K_i) \le \operatorname{MA}(G/ (A \cap K_i))$ for $i$ large enough. By Proposition~\ref{structureminisocle}\ref{7DEstructureminisocle} applied to $G/ (A \cap K_i) \twoheadrightarrow G/K_i$, the quotient $A/ (A \cap K_i) \cong AK_i/K_i$ is in $\operatorname{MA}(G/K_i)$. Since $G/K_i$ is irreducibly faithful by the definition of $\mathcal K_G$, the normal subgroup $AK_i/K_i $ is generated by a single conjugacy class. Since the canonical isomorphism $A/ (A \cap K_i) \cong AK_i/K_i$ is $G$-equivariant, it follows that $A/ (A \cap K_i)$ is generated by a single conjugacy class. Thus the same holds for $A/K$. It follows that $G/K$ is irreducibly faithful by Theorem~\ref{thm:Gasch}. \par The second assertion follows from the first via Zorn's Lemma. \end{proof} \begin{ques} \label{QuestionBroGar} In the case of a finite group $G$, the theorem of Broline and Garrison quoted in Remark \ref{BrolineGarrison} ensures that every minimal element $K \in \mathcal K_G$ is nilpotent. For a countable group $G$, we already know from Proposition~\ref{prop:Kernel} that some element of $\mathcal K_G$ is contained in $\mathbf Fsol(G)$. \par For a countable group $G$, can we describe more precisely the algebraic structure of the minimal elements of $\mathcal K_G$ ? Can we always find an element $K \in \mathcal K_G$ contained in the characteristic subgroup $\mathbf Fnil (G)$ generated by all finite nilpotent normal subgroups of $G$ ? In case $G$ is finite, the subgroup $\mathbf Fnil (G)$ coincides with the \textbf{Fitting subgroup}, i.e., the largest nilpotent normal subgroup of $G$. \end{ques} We finish by mentioning that, in the case of infinite groups, some minimal elements of $\mathcal K_G$ may fail to be nilpotent and also fail to be contained in the torsion FC-centre $W(G)$; a fortiori they need not be contained in $\mathbf Fnil (G)$, as defined in Question \ref{QuestionBroGar}, or in $\mathbf Fsol (G)$, as defined in Lemma \ref{lem:N1Nk}. The construction of such examples is based on the following. \begin{lem} \label{lem:MinimalKernels} Let $p$ be a prime. For each $n \in \mathbf N$, let $G_n$ be a (possibly infinite) non-trivial nilpotent $p$-group. Let $K$ be the restricted sum $\prod'_{n \in \mathbf N} G_n$, and set $G = K \times C_p$. \par Then $K$ is a minimal element of $\mathcal K_G$. \end{lem} \begin{proof} Since $G/K \cong C_p$, it is clear that $K$ belongs to $\mathcal K_G$.
Let $K_0 \in \mathcal K_G$ be such that $K_0 \le K$, and let $r \hskip.1cm \colon G \twoheadrightarrow G/K_0$ denote the canonical projection. \par Assume for a contradiction that $r(G_n)$ is non-trivial for some $n \in \mathbf N$. On the one hand, the centre of $r(G_n)$ is a non-trivial abelian $p$-group (the group $r(G_n)$ being a non-trivial nilpotent group), hence it contains some element of order $p$. That element commutes with $r(G_m)$ for all $m$, and also with $r(C_p)$. Thus it belongs to the centre of $r(G) = G/K_0$. On the other hand, the factor $C_p$ is central in $G$, hence $r(C_p)$ is also contained in the centre of $r(G)$. Since $G/K_0$ is irreducibly faithful, Schur's Lemma implies that the centre of $r(G)$ embeds into the circle group; in particular it does not contain any subgroup isomorphic to $C_p \times C_p$. As both elements above are central, it follows that $r(G_n)$ and $r(C_p)$ have a non-trivial intersection. Therefore $\mathrm{Ker}(r) = K_0$ contains an element of the form $xy$ with $x \in G_n \smallsetminus \{e\}$ and $y \in C_p \smallsetminus \{e\}$. This is impossible since $K_0 \le K$. We have thus proven that $r(G_n) = \{e\}$ for all $n$. \par It follows that $K \le K_0$, hence that $K = K_0$. This confirms that $K$ is minimal in $\mathcal K_G$. \end{proof} \begin{exe} Consider the group $G$ of Example~\ref{exe:socletrunk}(7), which is an infinite countable nilpotent $3$-group; denote it now by $G_0$. For each positive integer $n$, let $G_n$ be a finite nilpotent $3$-group; assume that the derived length of $G_n$ tends to infinity with $n$. Let $K$ be the restricted direct sum $\prod'_{n \in \mathbf N} G_n$. By Lemma~\ref{lem:MinimalKernels}, the countable group $K \times C_3$ has the following property: \par \begin{center} \emph{the subgroup $K \le K \times C_3$ is a minimal element in $\mathcal K_{K \times C_3}$ which is \\ neither soluble nor contained in the torsion FC-centre $W(K \times C_3)$.} \end{center} \noindent Indeed $K$ is not soluble because the derived lengths of the $G_n$ are unbounded, and $K$ has an infinite conjugacy class, because $G_0$ has one, since (with the notation of Example~\ref{exe:socletrunk}(7)) $y_j^{-1}xy_j = xz_j$ for all $j \ge 1$. \end{exe} \end{document}
\begin{equation*}gin{document} \hrule width0pt \vsk-> \title[Gaudin Hamiltonians generate the Bethe algebra] {Gaudin Hamiltonians generate the Bethe algebra of\\ a tensor power of vector representation of {\large $\frak{gl}n$}} \author[E.\,Mukhin, V.\,Tarasov, and \relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fiA.\,Varchenko] {E.\,Mukhin$\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi^*$, V.\,Tarasov$\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi^\star$, and \relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fiA.\,Varchenko$\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi^\diamond$} {\mc M}aketitle\thispagestyle{empty}\let\maketitle\empty \begin{equation*}gin{center} \vsk-.2> {\it $\kern-.4em^{*,\star}\relax\ifmmode\mskip-.333333\thinmuskip\relax\else\kern-.0555556em\fi$Department of Mathematical Sciences, Indiana University\,--\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fiPurdue University Indianapolis\kern-.4em\\ 402 North Blackford St, Indianapolis, IN 46202-3216, USA\/} \par{\mc M}edskip {\it $^\star\relax\ifmmode\mskip-.333333\thinmuskip\relax\else\kern-.0555556em\fi$St.\,Petersburg Branch of Steklov Mathematical Institute\\ Fontanka 27, St.\,Petersburg, 191023, Russia\/} \par{\mc M}edskip {\it $^\diamond\relax\ifmmode\mskip-.333333\thinmuskip\relax\else\kern-.0555556em\fi$Department of Mathematics, University of North Carolina at Chapel Hill\\ Chapel Hill, NC 27599-3250, USA\/} \end{center} {\let\thefootnote\relax \footnotetext{\vsk-.8>\noindent $^*$\,Supported in part by NSF grant DMS-0601005\\ $^\star$\,Supported in part by RFFI grant 08-01-00638\\ $^\diamond$\,Supported in part by NSF grant DMS-0555327}} \par{\mc M}edskip \begin{equation*}gin{abstract} We show that the Gaudin Hamiltonians $H_1\lc H_n$ generate the Bethe algebra of the $n$-fold tensor power of the vector representation of $\frak{gl}n$. Surprisingly the formula for the generators of the Bethe algebra in terms of the Gaudin Hamiltonians does not depend on $N$. Moreover, this formula coincides with Wilson's formula for the stationary Baker-Akhiezer function on the adelic Grassmannian. \end{abstract} \section{Introduction} The Gaudin model describes a completely integrable quantum spin chain \cite{G1}, \cite{G2}. We consider the Gaudin model associated with the Lie algebra $\frak{gl}n$. Denote by $L_{\bs\la}$ the irreducible finite-dimensional $\frak{gl}n$-module with highest weight $\bs\la$. Consider a tensor product $\otimes_{a=1}^nL_{\bs\la^{(a)}}$ of such modules and two sequences of complex numbers: \,$\kk_1\lc\kk_N$ and $z_1\lc z_n$. Assume that the numbers $z_1\lc z_n$ are distinct. The Hamiltonians of the Gaudin model are mutually commuting operators $H_1\lc H_n$, acting on the space $\otimes_{a=1}^nL_{\bs\la^{(a)}}$, \vvn.2> \begin{equation*}gin{equation} {\bs{\bar\la}}el{Ha} H_a\ =\ \sum_{i=1}^N\kk_i\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fie_{ii}^{(a)}\,+\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi \sum_{i,j=1}^N\,\sum_{b\neq a} \,\frac{e_{ij}^{(a)}e_{ji}^{(b)}}{z_a-z_b}\;, \end{equation} where \relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi$e_{ij}$ are the standard generators of $\frak{gl}n$ and \relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi$e_{ij}^{(a)}$ is the image of $1^{\otimes(a-1)}\otimes e_{ij}\otimes1^{\otimes(n-a)}$. 
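\par\medskip\noindent For readers who want a quick sanity check of the formula above, the following short Python sketch (purely illustrative, with arbitrarily chosen data; it plays no role in the paper) builds the operators $H_1\lc H_n$ for $N=2$, $n=3$ and verifies numerically that they pairwise commute.
\begin{verbatim}
# Illustrative check: Gaudin Hamiltonians for gl_2 on (C^2)^{x3} commute.
# Indices are 0-based; K and z are arbitrary, with distinct z_a.
import numpy as np

N, n = 2, 3
K = np.array([0.7, -1.3])
z = np.array([0.3, 1.7, -2.1])

def E(i, j):                          # elementary matrix e_{ij} of gl_N
    m = np.zeros((N, N)); m[i, j] = 1.0; return m

def site(op, a):                      # op acting on the a-th tensor factor
    mats = [np.eye(N)] * n; mats[a] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def H(a):                             # the a-th Gaudin Hamiltonian
    h = sum(K[i] * site(E(i, i), a) for i in range(N))
    for b in range(n):
        if b != a:
            h += sum(site(E(i, j), a) @ site(E(j, i), b)
                     for i in range(N) for j in range(N)) / (z[a] - z[b])
    return h

Hs = [H(a) for a in range(n)]
print(max(np.abs(Hs[a] @ Hs[b] - Hs[b] @ Hs[a]).max()
          for a in range(n) for b in range(n)))    # of order 1e-15
\end{verbatim}
\par\medskip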
One of the main problems in the Gaudin model is to find eigenvalues and joint eigenvectors of the operators $H_1\lc H_n$, see \cite{B}, \cite{RV}, \cite{MTV1}. The Gaudin Hamiltonians appear also as the right-hand sides of the Knizhnik-Zamolodchikov equations, see~\cite{SV}, \cite{RV}, \cite{FFR}, \cite{FMTV}. It was realized a long time ago that there are additional interesting operators commuting with the operators $H_1\lc H_n$, see for example~\cite{KS}, \cite{FFR}. Those operators are called the higher Gaudin Hamiltonians. To distinguish the operators $H_1\lc H_n$, we will call them the classical Gaudin Hamiltonians. The algebra generated by all of the classical and higher Gaudin Hamiltonians is called the Bethe algebra. A useful formula for generators of the Bethe algebra was suggested in \cite{T}, see also \cite{MTV1}, \cite{CT}. In general, the Bethe algebra is larger than its subalgebra generated by the classical Gaudin Hamiltonians. Nevertheless, we show in this paper that if all factors of the tensor product $\otimes_{a=1}^nL_{\bs\la^{(a)}}$ are the standard vector representations of $\frak{gl}n$, then the classical Gaudin Hamiltonians generate the entire Bethe algebra. This is surprising, since every tensor product of polynomial $\frak{gl}n$-modules is a submodule of a tensor power of the vector representation, and one may expect that the Bethe algebra of a tensor power of the vector representation is as general as the Bethe algebra of a tensor product of arbitrary representations. Another surprising fact is that our formula for the elements of the Bethe algebra in terms of the classical Gaudin Hamiltonians does not depend on $N$, see Theorem~\ref{generate}. The third surprise is that our formula is nothing else but Wilson's formula for the stationary Baker-Akhiezer function on the adelic Grassmannian \cite{Wi}. Our theorem can be used to study the higher Gaudin Hamiltonians as functions of the classical Hamiltonians (or as limits of functions of the classical Gaudin Hamiltonians). Much more is known about the classical Hamiltonians than about the higher Hamiltonians. Our proof of Theorem~\ref{generate} is not elementary. We use the fact that the Bethe algebra is preserved under the $(\frak{gl}n,\frak{gl}_n)$-duality and the completeness of the Bethe ansatz for a tensor product of vector representations and generic $\kk_1\lc\kk_N$, $z_1\lc z_n$. \section{Bethe algebra} {\bs{\bar\la}}el{alg sec} \subsection{Lie algebras $\frak{gl}n$ and $\frak{gl}nt$} Let $e_{ij}$, $i,j=1\lc N$, be the standard generators of the Lie algebra $\frak{gl}n$ satisfying the relations $[e_{ij},e_{sk}]=\dl_{js}e_{ik}-\dl_{ik}e_{sj}$. Let $\h\subset\frak{gl}n$ be the Cartan subalgebra generated by $e_{ii}, \,i=1\lc N$. We denote by $V=\oplus_{i=1}^N{{\mc B}bb C} v_i$ the standard $N$-dimensional vector representation of $\frak{gl}n$: \,$e_{ij}v_j=v_i$ \,and \,$e_{ij}v_k=0$ \relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fifor \relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi$j\ne k$. Let $M$ be a $\frak{gl}n$-module. A vector $v\in M$ is called {\it singular\/} if $e_{ij}v=0$ for $1\le i<j\le N$. We denote by $M^{\it\>sing}$ the subspace of all singular vectors in $M$. Let $\frak{gl}nt=\frak{gl}n\otimes{{\mc B}bb C}[t]$ be the complex Lie algebra of $\frak{gl}n$-valued polynomials with the pointwise commutator. For $g\in\frak{gl}n$, we set $g(u)=\sum_{s=0}^\infty (g\otimes t^s)u^{-s-1}$.
We identify $\frak{gl}n$ with the subalgebra $\frak{gl}n\otimes1$ of constant polynomials in $\frak{gl}nt$. Hence, any $\frak{gl}nt$-module has a canonical structure of a $\frak{gl}n$-module. For each $a\in{{\mc B}bb C}$, there exists an automorphism $\rho_a$ of $\frak{gl}nt$, \;$\rho_a:g(u)\mapsto g(u-a)$. Given a $\frak{gl}nt$-module $M$, we denote by $M(a)$ the pull-back of $M$ through the automorphism $\rho_a$. As $\frak{gl}n$-modules, $M$ and $M(a)$ are isomorphic by the identity map. We have the evaluation homomorphism, ${\frak{gl}nt\to\frak{gl}n}$, \;${g(u) \mapsto g\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fiu^{-1}}$. Its restriction to the subalgebra $\frak{gl}n\subset\frak{gl}nt$ is the identity map. For any $\frak{gl}n$-module $M$, we denote by the same letter the $\frak{gl}nt$-module, obtained by pulling $M$ back through the evaluation homomorphism. \subsection{Bethe algebra} {\bs{\bar\la}}el{secbethe} Given an ${N\times N}$-matrix $A$ with possibly noncommuting entries $a_{ij}$, we define its {\it row determinant\/} to be \vvn.3> \begin{equation*} \on{rdet} A\,= \sum_{\;\si\in S_N\!} (-1)^\si\,a_{1\si(1)}a_{2\si(2)}\ldots a_{N\si(N)}\,. \vv-.1> \end{equation*} Let $\kk_1\lc\kk_N$ be a sequence of complex numbers. Let $\der_u$ be the operator of differentiation in the variable $u$. Define the {\it universal differential operator\/} ${\mc D}^\kk$ \vvn.4> by the formula \vadjust{\penalty-500} \begin{equation*}gin{equation} {\bs{\bar\la}}el{DK} {\mc D}^\kk=\,\on{rdet}\left( \begin{equation*}gin{matrix} \der_u-\kk_1-e_{11}(u) & -\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fie_{21}(u)& \dots & -\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fie_{N1}(u)\\[3pt] -\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fie_{12}(u) &\der_u-\kk_2-e_{22}(u)& \dots & -\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fie_{N2}(u)\\[1pt] \dots & \dots &\dots &\dots \\[1pt] -\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fie_{1N}(u) & -\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fie_{2N}(u)& \dots & \der_u-\kk_N-e_{NN}(u) \end{matrix}\right). \vv.2> \end{equation} It is a differential operator in $u$, whose coefficients are formal power series in $u^{-1}$ with coefficients in $U(\gln)t$, \vvn-.1> \begin{equation*} {\mc D}^\kk=\,\der_u^N+\sum_{i=1}^N\,B_i^\kk(u)\,\der_u^{N-i}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi, \qquad B_i^\kk(u)\,=\,\sum_{j=0}^\infty B_{ij}^\kk\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fiu^{-j}\,, \end{equation*} and $B_{ij}^\kk\inU(\gln)t$, \,$i=1\lc N$, \,$j\in{{\mc B}bb Z}_{\ge 0}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi$. We have \begin{equation*}gin{equation} {\bs{\bar\la}}el{Bi0} \der_u^N\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi+\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi\sum_{i=1}^N B_{i0}^\kk\,\der_u^{N-i}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi= \,\prod_{i=1}^N\,(\der_u-K_i)\,. \vv.2> \end{equation} The unital subalgebra of $U(\gln)t$ generated by $B_{ij}^\kk$, \,$i=1\lc N$, \,$j\in{{\mc B}bb Z}_{>0}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi$, is called the {\it Bethe algebra\/} and denoted by ${\mc B}^\kk$. \par\vsk.2> By \cite{T}, \cite{MTV1}, \cite{CT}, the algebra ${\mc B}^\kk$ is commutative, and ${\mc B}^\kk$ commutes with the subalgebra $U(\h)\subset U(\gln)t$. 
If all $K_1\lc K_N$ coincide, then ${\mc B}^\kk$ commutes with the subalgebra $U(\gln)\subsetU(\gln)t$. \par\vsk.2> As a subalgebra of $U(\gln)t$, the algebra ${\mc B}^\kk$ acts on any $\frak{gl}nt$-module $M$. Since ${\mc B}^\kk$ commutes with $U(\h)$, it preserves the weight subspaces of $M$. If all $K_1\lc K_N$ coincide, then ${\mc B}^\kk$ preserves the subspace $M^{{\it\>sing}}$ of singular vectors. If $L$ is a ${\mc B}^\kk$-module, then the image of ${\mc B}^\kk$ in $\on{End}(L)$ is called the {\it Bethe algebra of\/} $L$. For our purpose it is convenient to consider another set of generators of the Bethe algebra ${\mc B}^\kk$ defined as follows. Let $x$ be a new variable and \vvn.4> \begin{equation*}gin{equation} {\bs{\bar\la}}el{PsiK} \varPsi^\kk(u,x)\,=\, {\mc B}igl(\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fix^N+\sum_{i=1}^N\,B_i^\kk(u)\,x^{N-i}\,{\mc B}igr) \prod_{i=1}^N\,\frac1{x-K_i}\;=\, 1+\sum_{i=1}^\infty\,\varPsi_i^\kk(u)\,x^{-i}\,. \vv.1> \end{equation} The series $\varPsi_i^\kk(u)$, \,$i\in{{\mc B}bb Z}_{>0}$, are linear combinations of the series $B_i^\kk(u)$, \,$i=1\lc N$, and vice versa. Write \vvn-.3> \begin{equation*}gin{equation} {\bs{\bar\la}}el{PsiiK} \varPsi_i^\kk(u)\,=\,\sum_{j=1}^\infty\,\varPsi_{ij}^\kk\,u^{-j}\,. \vv-.1> \end{equation} Then $\varPsi_{ij}^\kk$, \,$i,j\in{{\mc B}bb Z}_{>0}$, is a new set of generators of the Bethe algebra ${\mc B}^\kk$. \section{Classical Gaudin Hamiltonians on $\otimes_{a=1}^nV(z_a)$} {\bs{\bar\la}}el{main} Recall that $V$ is the vector representation of the Lie algebra $\frak{gl}n$. Consider the tensor product $\otimes_{a=1}^nV(z_a)$ of evaluation $\frak{gl}nt$-modules. The series $e_{ij}(u)$ acts on $\otimes_{a=1}^nV(z_a)$ as $\sum_{a=1}^ne_{ij}^{(a)}(u-z_a)^{-1}$, where $e_{ij}^{(a)}$ is the image of $1^{\otimes(a-1)}\otimes e_{ij}\otimes1^{\otimes(n-a)}\in \bigl(U(\frak{gl}n)\bigr)^{\otimes n}$. \goodbreak We denote by $B_{ij}\,,\varPsi_{ij}\in\on{End}(V^{\otimes n})$ the images of the elements $B_{ij}^\kk\,,\varPsi_{ij}^\kk\inU(\gln)t$\,. \,Set \begin{equation*}gin{alignat}2 B_i(u)\,&{}=\,\sum_{j=0}^\infty\,B_{ij}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fiu^{-j}\,,\qquad &&{\mc D}\,=\,\der_u^N+\sum_{i=1}^N\,B_i(u)\,\der_u^{N-i}\,, \notag \\[4pt] {\bs{\bar\la}}el{Psi} \varPsi_i(u)\,&{}=\,\sum_{j=1}^\infty\,\varPsi_{ij}\,u^{-j}\,, &&\varPsi(u,x)\,=\,1+\sum_{i=1}^\infty\,\varPsi_i(u)\,x^{-i}\,. \end{alignat} All of the series $B_i(u)$, $\varPsi_i(u)$ sum up to rational functions of $u$ with values in $\on{End}(V^{\otimes n})$. Set in addition \vvn-.1> \begin{equation*} \varPsi_{\sssty\dag}(x)=-\sum_{i=1}^\infty\varPsi_{i1}x^{-i}\,. 
\vv-.4> \end{equation*} \begin{equation*}gin{lem} We have \begin{equation*}gin{equation} {\bs{\bar\la}}el{Psi12} \varPsi_1(u)\,=\,-\,\sum_{a=1}^n\,\frac1{u-z_a}\;,\qquad \varPsi_2(u)\,=\,\sum_{a=1}^n\,\frac1{u-z_a}\,{\mc B}igl( -\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fiH_a\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi+\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi\sum_{b\ne a} \frac1{z_a-z_b}\,{\mc B}igr)\,, \vv-.2> \end{equation} where \vvn-.4> \begin{equation*}gin{equation} {\bs{\bar\la}}el{H} H_a\,=\,\sum_{i=1}^N\kk_i\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fie_{ii}^{(a)}\,+\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi \sum_{i,j=1}^N\,\sum_{b\ne a} \,\frac{e_{ij}^{(a)}e_{ji}^{(b)}}{z_a-z_b} \vv.3> \end{equation} are the classical Gaudin Hamiltonians~{{\mc B}bb R}ef{Ha}, and \vvn.1> \begin{equation*} \varPsi_{\sssty\dag}(x)\,=\,\sum_{i=1}^N\,\sum_{a=1}^n\,\frac{e_{ii}^{(a)}}{x-K_i}\ . \end{equation*} \end{lem} \begin{equation*}gin{proof} The claim is straightforward. See also formula~(8.5) and Appendix~B in \cite{MTV1}. \end{proof} To formulate our main result we introduce a diagonal matrix \vvn.2> \begin{equation*}gin{equation} {\bs{\bar\la}}el{Z} Z\,=\,\on{diag}(z_1\lc z_n) \end{equation} and a matrix \vvn-.6> \begin{equation*}gin{equation} {\bs{\bar\la}}el{Q} Q = \left(\, \begin{equation*}gin{matrix} \yh_1 & \dfrac{1}{z_2-z_1} & \dfrac{1}{z_3-z_1} &\dots & \dfrac{1}{z_n-z_1} \\[14pt] \dfrac{1}{z_1-z_2} & \yh_2 & \dfrac{1}{z_3-z_2} &{} \dots & \dfrac{1}{z_n-z_2} \\[9pt] {}\dots & {}\dots & {} \dots & \dots & \dots \\[6pt] \dfrac{1}{z_1-z_n} & \dfrac{1}{z_2-z_n}& \dfrac{1}{z_3-z_n} &{} \dots & \yh_n \end{matrix}\,\right) \vv.4> \end{equation} depending on new variables $\yh_1\lc\yh_n$. Set \vvn.3> \begin{equation*}gin{equation} {\bs{\bar\la}}el{psi} \psi(u,x,z_1\lc z_n,\yh_1\lc\yh_n)\,=\, \det\bigl(1-(u-Z)^{-1}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi(x-Q)^{-1}\bigr)\,, \vv-.2> \end{equation} \begin{equation*} \phi(x,z_1\lc z_n,\yh_1\lc\yh_n)\,=\,\det(x-Q)\,,\quad \psi_{\sssty\dag}(x,z_1\lc z_n,\yh_1\lc\yh_n)\,=\,\on{tr}\bigl((x-Q)^{-1}\bigr)\,. \vv.2> \end{equation*} \begin{equation*}gin{thm} {\bs{\bar\la}}el{generate} The Bethe algebra of $\otimes_{a=1}^nV(z_a)$ is generated by the classical Gaudin Hamiltonians $H_1\lc H_n$. More precisely, \vvn.3> \begin{equation*} \varPsi(u,x)\,=\,\psi(u,x,z_1\lc z_n,H_1\lc H_n)\,. \vadjust{\penalty-500} \vv-.2> \end{equation*} In particular, \vvn-.6> \begin{equation*}gin{equation} {\bs{\bar\la}}el{eii} \psi_{\sssty\dag}(x,z_1\lc z_n,H_1\lc H_n)\,=\, \sum_{i=1}^N\,\sum_{a=1}^n\,\frac{e_{ii}^{(a)}}{x-K_i}\ . \end{equation} \end{thm} \vsk.2> \begin{equation*}gin{rem} Since \;$\on{tr}\bigl((x-Q)^{-1}\bigr)\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi=\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi\der_x\log\bigl(\det(x-Q)\bigr)$, \,formula~{{\mc B}bb R}ef{eii} can be written as \begin{equation*} \phi(x,z_1\lc z_n,H_1\lc H_n)\,=\, \prod_{i=1}^N\,(x-K_i)^{\sum_{a=1}^ne_{ii}^{(a)}}\,. \vv.2> \end{equation*} \end{rem} \begin{equation*}gin{rem} The matrix \,$[Q,Z]+1$ \,has rank one. For every distinct $z_1\lc z_n$ and every $h_1\lc h_n$, the pair \;$(Q\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi,Z)$ \,defines a point of the $n$-th Calogero-Moser space, hence, a point of the adelic Grassmannian. 
The function $e^{ux}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi\psi(u,x,z_1\lc z_n,\yh_1\lc\yh_n)$ is the stationary Baker-Akhiezer function of that point, see Section~3 in~\cite{Wi}. Theorem \ref{generate} says that the coefficients $\psi_{ij}(z_1\lc z_n,H_1\lc H_n)$ of the stationary Baker-Akhiezer function, \vvn-.2> \begin{equation*} e^{ux}\psi(u,x,z_1\lc z_n,H_1\lc H_n)\,=\,e^{ux}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi{\mc B}igl(\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi1\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi+ \sum_{i,j=1}^\infty\,\psi_{ij}(z_1\lc z_n,H_1\lc H_n)\,u^{-j}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fix^{-i}\,{\mc B}igr) \vv-.2> \end{equation*} generate the Bethe algebra of $\otimes_{a=1}^nV(z_a)$. More remarks on this subject see in Section~\ref{CM}. \end{rem} \begin{equation*}gin{cor} {\bs{\bar\la}}el{cor spec} For distinct real $\kk_1\lc\kk_N$, and distinct real $z_1\lc z_n$, the joint spectrum of the classical Gaudin Hamiltonians $H_1\lc H_n$ acting on $\otimes_{a=1}^nV(z_a)$ is simple. That is, the classical Gaudin Hamiltonians have a joint eigenbasis, and for any two vectors of the eigenbasis at least one of the classical Gaudin Hamiltonians has different eigenvalues for those vectors. \end{cor} \begin{equation*}gin{proof} By \cite{MTV5}, for distinct real $\kk_1\lc\kk_N$, and distinct real $z_1\lc z_n$, the Bethe algebra of $\otimes_{a=1}^nV(z_a)$ has simple spectrum. Therefore, the classical Gaudin Hamiltonians have simple spectrum by Theorem~\ref{generate}. \end{proof} \begin{equation*}gin{cor} {\bs{\bar\la}}el{cor spec0} If $K_1\lc K_N$ coincide, and $z_1\lc z_n$ are distinct and real, then the joint spectrum of the classical Gaudin Hamiltonians $H_1\lc H_n$ acting on $(\otimes_{a=1}^nV(z_a))^{{\it\>sing}}$ is simple. \end{cor} \begin{equation*}gin{proof} By \cite{MTV3}, if $K_i=0$ for all $i=1\lc N$, and $z_1\lc z_n$ are real and distinct, then the Bethe algebra of $(\otimes_{a=1}^nV(z_a))^{{\it\>sing}}$ has simple spectrum. Therefore, the classical Gaudin Hamiltonians acting on $(\otimes_{a=1}^nV(z_a))^{{\it\>sing}}$ have simple spectrum by Theorem~\ref{generate}. The case of nonzero coinciding $K_1\lc K_N$ follows from the case of zero $K_1\lc K_N$, since $\sum_{i=1}^N e_{ii}^{(a)}=1$ for all $a=1\lc n$, see~{{\mc B}bb R}ef{H}. \end{proof} \section{Proof of Theorem~\ref{generate}} \subsection{Preliminary lemmas} For functions $f_1(x)\lc f_m(x)$ of one variable, denote by \vvn.2> \begin{equation*} \on{Wr}[f_1\lc f_m]\,=\,\det\left( \begin{equation*}gin{matrix} f_1 & f_1'& \dots & f_1^{(m-1)}\;\\ f_2 & f_2'& \dots & f_2^{(m-1)}\\ \dots & \dots &\dots &\dots \\ f_m & f_m'& \dots & f_m^{(m-1)} \end{matrix}\right) \vv-.4> \end{equation*} the Wronskian of $f_1(x)\lc f_m(x)$. \vadjust{\penalty-500} \ Set \ $\dsty{\mc D}l\,=\,\prod_{1\le a<b\le n}(z_b-z_a)$\,,\quad $\dsty P(u)\,=\,\prod_{a=1}^n\,(u-z_a)$\,, \ and \begin{equation*} P_a(u)\,=\,\prod_{\genfrac{}{}{0pt}1{b=1}{b\ne a}}^n\,\frac{u-z_b}{z_a-z_b}\,,\qquad a=1\lc n\,. \end{equation*} Let $f_a(x) = (x+\mu_a)\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fie^{z_ax}$, \,$a=1\lc n$, \,where \,$\mu_1\lc\mu_n$ are new variables. 
Set \vvn-.1> \begin{equation*} W(u,x)\,=\,e^{-ux-\sum_{a=1}^n z_ax}\, \on{Wr}\bigl[f_1(x)\lc f_n(x), e^{ux}\bigr]\,=\, W_0(x)\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi{\mc B}igl(u^n+\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi\sum_{a=1}^n\,C_a(x)\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fiu^{n-a}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi{\mc B}igr)\,. \vv-.3> \end{equation*} Clearly, \ $W_0(x)\,=\,e^{-\sum_{a=1}^n z_ax}\,\on{Wr}\bigl[f_1(x)\lc f_n(x)]$\,. \begin{equation*}gin{lem} {\bs{\bar\la}}el{WW} Let \ $\dsty \yh_a\,=\,-\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi\mu_a-\sum_{b\neq a}\,\frac1{z_a-z_b}$\;,\quad $a=1\lc n$.\quad Then \vvn.1> \begin{equation*}gin{equation} {\bs{\bar\la}}el{Wux} W(u,x)\,=\,{\mc D}l\cdot\det\bigl((u-Z)\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi(x-Q)-1\bigr)\,, \vv.2> \end{equation} where the matrices $Z$ and $Q$ are given by~{{\mc B}bb R}ef{Z} and~{{\mc B}bb R}ef{Q}. In particular, \vvn.3> \begin{equation*}gin{equation} {\bs{\bar\la}}el{W0} W_0(x)\,=\,{\mc D}l\cdot\det(x-Q)\,. \vv.2> \end{equation} \end{lem} \begin{equation*}gin{proof} First, we prove formula~{{\mc B}bb R}ef{W0}. Let $S$ and $T$ be \relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi${n\times n}$ matrices with entries \relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi$S_{ab}=z_b^{a-1}$ \relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fiand \relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi$T_{ab}=(a-1)\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fiz_b^{a-2}$, respectively. Clearly, \,$\det S={\mc D}l$. The entries of the matrix $S^{-1}$ are determined by the equality \,$P_a(u)=\sum_{b=1}^n(S^{-1})_{ab}\,u^{b-1}$, \relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fiso that the entries of \,$S^{-1}T$ \,are \,$(S^{-1}T)_{ab}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi=\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fiP_a'(z_b)$. \vsk.2> Let \,$M=\on{diag}(\mu_1\lc\mu_n)$. Since \,$\der_x^k f_a(x)\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi=\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi\bigl((x+\mu_a)z_a^k+kz_a^{k-1}\bigr)\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fie^{z_ax}$, we have \vvn.4> \begin{equation*} W_0(x)\,=\,\det\bigl(S\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi(x+M)+T\bigr)\,=\, \det S\cdot\det(x+M+S^{-1}T)\,=\,{\mc D}l\cdot\det\bigl(x-Q)\,. \end{equation*} \vsk.3> To prove formula~{{\mc B}bb R}ef{Wux}, set $z_{n+1}=u$. Let \,$\Hat Q$ be an \relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi${(n+1)\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi{\times}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi(n+1)}$ \relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fimatrix with entries $\Hat Q_{ab}=(z_b-z_a)^{-1}$ for $a\ne b$, \relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fiand \vvn-.6> \begin{equation*} \Hat Q_{aa}\,=\,-\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi\mu_a-\sum_{\genfrac{}{}{0pt}1{b=1}{b\neq a}}^{n+1}\,\frac1{z_a-z_b}\;, \end{equation*} where \relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi$\mu_{n+1}$ is a new variable. 
\,Set \,$f_{n+1}(x)=(x+\mu_{n+1})\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fie^{z_{n+1}x}$. Similarly to~{{\mc B}bb R}ef{W0}, we have \vvn.3> \begin{equation*} e^{-\sum_{a=1}^{n+1}z_ax}\,\on{Wr}\bigl[f_1(x)\lc f_{n+1}(x)\bigr]\,=\, {\mc D}l\cdot P(z_{n+1})\,\det(x-\Hat Q\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi)\,. \vv.3> \end{equation*} It is easy to see that \vv.2> \;$\on{Wr}\bigl[f_1(x)\lc f_n(x), e^{ux}\bigr]\,=\,\lim_{\mu_{n+1}\to\infty} \bigl(\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi\mu_{n+1}^{-1}\on{Wr}\bigl[f_1(x)\lc f_{n+1}(x)\bigr]\bigr)$ \ and \ $\lim_{\mu_{n+1}\to\infty}\bigl(\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi\mu_{n+1}^{-1} \relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi\det(x-\Hat Q\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi)\bigr)\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi=\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi\det\bigl(x-Q-(u-Z)^{-1}\bigr)$\,. \;Then \vvn.4> \begin{equation*} W(u,x)\,=\,{\mc D}l\cdot P(u)\,\det\bigl(x-Q-(u-Z)^{-1}\bigr)\,=\, {\mc D}l\cdot\det\bigl((u-Z)\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi(x-Q)-1\bigr)\,. \vvn.2> \end{equation*} The lemma is proved. \end{proof} The complex vector space spanned by the functions $f_1\lc f_n$ is the kernel of the monic differential operator \vvn-.6> \begin{equation*}gin{equation} {\bs{\bar\la}}el{D} D\,=\,\der_x^n\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi+\sum_{a=1}^n\,C_a(x)\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi\der_x^{n-a}\,. \end{equation} The function $\psi(u,x)$, defined by~{{\mc B}bb R}ef{psi}, has the following expansion as $u\to\infty$, \,$x\to\infty$: \vvn.3> \begin{equation*}gin{equation} {\bs{\bar\la}}el{psiij} \psi(u,x)\,=\,1\,+\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi \sum_{i=1}^\infty\,\sum_{j=1}^\infty\,\psi_{ij}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fiu^{-j}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fix^{-i}\,. \end{equation} Here we suppressed the arguments \,$z_1\lc z_n$\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi, \relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi$\yh_1\lc\yh_n$. \,Set \,$\psi_i(u)\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi=\sum_{j=1}^\infty\psi_{ij}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fiu^{-j}$, \;$i\in{{\mc B}bb Z}_{>0}$. \begin{equation*}gin{lem} {\bs{\bar\la}}el{expand} We have \vvn.3> \begin{equation*}gin{equation} {\bs{\bar\la}}el{psi12} \psi_1(u)\,=\,-\,\sum_{a=1}^n\,\frac1{u-z_a}\;,\qquad \psi_2(u)\,=\,\sum_{a=1}^n\,\frac1{u-z_a}\, {\mc B}igl(-\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi\yh_a\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi+\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi\sum_{b\ne a} \frac1{z_a-z_b}\,{\mc B}igr)\,. \vv-.3> \end{equation} and \vvn-.2> \begin{equation*}gin{equation} {\bs{\bar\la}}el{psix1} \sum_{i=1}^\infty\,\psi_{i1}\,x^{-i}\,=\,-\on{tr}\bigl((x-Q)^{-1}\bigr)\,. \end{equation} \end{lem} \begin{equation*}gin{proof} The proof is straightforward from formulae~{{\mc B}bb R}ef{Q},~{{\mc B}bb R}ef{psi}. \end{proof} \subsection{Proof of Theorem~\ref{generate}} Denote \;${\mc D}_{\it reg}=P(u)\,{\mc D}$.
By Theorem~3.1 in~\cite{MTV2}, we have \vvn-.5> \begin{equation*}gin{equation} {\bs{\bar\la}}el{Dreg} {\mc D}_{reg}=\sum_{i=0}^N\sum_{a=0}^nA_{ia}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fiu^a\der^i\ , \qquad A_{ia}\in \on{End}(V^{\otimes n})\ , \vv-.2> \end{equation} and \vvn-.4> \begin{equation*} \sum_{a=0}^n\,A_{Na}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fiu^a\,=\,P(u)\,,\qquad \sum_{i=0}^N\,A_{in}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi\der^i\,=\,R(\der_u)\,,\qquad R(x)=\prod_{i=1}^N\,(x-K_i)\,. \end{equation*} \vsk.3> Let $v \in \otimes_{a=1}^nV(z_a)$ be an eigenvector of the Bethe algebra, $A_{ia}v= \al_{ia}v$, \,$\al_{ia}\in{{\mc B}bb C}$, for all $(i,a)$. Consider a scalar differential operator \begin{equation*} D_v\,=\,\sum_{i=0}^N\sum_{a=0}^n\,\al_{ia}\,x^i\der_x^a\ , \vv.2> \end{equation*} Notice that we changed \,$u\mapsto\der_x$, \;$\der_u\mapsto x$ compared with~{{\mc B}bb R}ef{Dreg}. By Theorem~3.1 in~\cite{MTV2} and Theorem~12.1.1 in~\cite{MTV4}, the kernel of $D_v$ is generated by the functions $(x+\mu_a)\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fie^{z_ax}$, \,$a=1\lc n$, \,with suitable $\mu_a\in{{\mc B}bb C}$. Let \begin{equation*} \yh_a\,=\,-\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi\mu_a-\sum_{b\neq a}\,\frac1{z_a-z_b}\;,\qquad a=1\lc n\,. \vadjust{\penalty-500} \end{equation*} \begin{equation*}gin{lem} {\bs{\bar\la}}el{lem on b} We have \,$H_av=h_av$ \,for all \,$a=1\lc n$. \end{lem} \begin{equation*}gin{proof} We have \relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi$D_v=R(x)\,D$, where $D$ is given by~{{\mc B}bb R}ef{D}. Then Lemma~\ref{WW} and formulae~{{\mc B}bb R}ef{PsiK}, {{\mc B}bb R}ef{PsiiK}, {{\mc B}bb R}ef{Psi} yield that the eigenvalues of the operators $\varPsi_{ij}$ are the numbers $\psi_{ij}$ given by~{{\mc B}bb R}ef{psiij}: \,$\varPsi_{ij}v=\psi_{ij}v$. The claim follows from comparing formulae~{{\mc B}bb R}ef{Psi12} and~{{\mc B}bb R}ef{psi12}. \end{proof} By Theorem~10.5.1 in~\cite{MTV4}, if $\kk_1\lc\kk_N$ and $z_1\lc z_n$ are generic, then the Bethe algebra of $\otimes_{a=1}^nV(z_a)$ has an eigenbasis. Hence, by Lemmas~\ref{Wux} and~\ref{lem on b}, for such $\kk_1\lc\kk_N$, $z_1\lc z_n$ we have \begin{equation*} \varPsi(u,x)\,=\,\psi(u,x,z_1\lc z_n,H_1\lc H_n)\,. \vv.4> \end{equation*} Since both sides of this equality are meromorphic functions of $\kk_1\lc\kk_N$ and $z_1\lc z_n$, the equality holds for all $\kk_1\lc\kk_N$, $z_1\lc z_n$. The theorem is proved. \qed \section{Bethe algebra and functions on the Calogero-Moser space} {\bs{\bar\la}}el{CM} \subsection{Calogero-Moser space ${{\mc B}bb C}c_n$} Let \relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi${\cal M}_n$ be the space of \,${n\times n}$ complex matrices. The group $GL_n$ acts on \relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi${\cal M}_n\oplus{\cal M}_n$ by conjugation, \,$g:(X,Y)\mapsto(gXg^{-1},\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\figYg^{-1})$\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi. \vvn.16> Denote \,$\Hat{\mc F}c_n={{\mc B}bb C}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi[{\cal M}_n\oplus{\cal M}_n]^{GL_n}$. Let \,${{\mc B}bb C}c_n\subset{\cal M}_n\oplus{\cal M}_n$ be the subset of pairs $(X,Y)$ with the matrix $[X,Y]+1$ having rank one. The set \,${{\mc B}bb C}c_n$ is $GL_n$-invariant. 
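\par\medskip\noindent As a purely numerical aside (arbitrary data, Python; not used in the arguments of this section), the rank-one condition can be checked directly for the matrices $Z$ and $Q$ of~{{\mc B}bb R}ef{Z} and~{{\mc B}bb R}ef{Q}: the matrix $[Q,Z]+1$ is in fact the all-ones matrix.
\begin{verbatim}
# Illustrative check: for Q, Z as in Section 3, [Q, Z] + 1 has rank one.
import numpy as np

n = 5
z = np.array([-2.0, -0.5, 0.3, 1.1, 2.4])     # distinct z_1, ..., z_n
h = np.array([0.7, -1.2, 0.4, 2.0, -0.9])     # arbitrary diagonal entries h_a

Z = np.diag(z)
Q = np.array([[h[a] if a == b else 1.0 / (z[b] - z[a]) for b in range(n)]
              for a in range(n)])

R = Q @ Z - Z @ Q + np.eye(n)                  # the matrix [Q, Z] + 1
print(np.linalg.matrix_rank(R))                # prints 1 (R is the all-ones matrix)
\end{verbatim}
\par\medskip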
The algebra ${\cal F}_n=\Hat{\mc F}c_n|_{{{\mc B}bb C}c_n}$ is, by definition, the algebra of functions on the $n$-th Calogero-Moser space, see~\cite{Wi}. Consider a function \vvn-.1> \begin{equation*}gin{equation} {\bs{\bar\la}}el{phi} \pho(u,x,X,Y)\,=\,\det\bigl(1-(u-Y)^{-1}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi(x-X)^{-1}\bigr)\,, \vv.3> \end{equation} depending on matrices $X,Y$ and variables $u,x$. It has an expansion as $u\to\infty$, \,$x\to\infty$: \vvn.2> \begin{equation*}gin{equation} {\bs{\bar\la}}el{phiij} \pho(u,x,X,Y)\,=\,1\,+\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi \sum_{i=1}^\infty\,\sum_{j=1}^\infty\,\pho_{ij}(X,Y)\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fiu^{-j}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fix^{-i}\, \vv-.3> \end{equation} with \,$\pho_{ij}\in\Hat{\mc F}c_n$ \,for any $(i,j)$. \begin{equation*}gin{lem}[\cite{MTV6}] {\bs{\bar\la}}el{EG} The algebra \,${\cal F}_n$ is generated by the images of \,$\pho_{ij}$, \,$i,j\in{{\mc B}bb Z}_{>0}$. \end{lem} \subsection{Bethe algebra and functions on ${{\mc B}bb C}c_n$} In this section we treat $\kk_1\lc\kk_N$ and $z_1\lc z_n$ as variables. Set \vvn.1> \begin{equation*} {\cal E}_{N,n}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi=\,\on{End}(V^{\ox n})\ox{{\mc B}bb C}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi[\kk_1\lc\kk_N,z_1\lc z_n]\;. \vv.3> \end{equation*} We identify the algebras $\on{End}(V^{\ox n})$ and ${{\mc B}bb C}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi[\kk_1\lc\kk_N,z_1\lc z_n]$ with the respective subalgebras $\on{End}(V^{\ox n})\ox 1$ and $1\ox{{\mc B}bb C}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi[\kk_1\lc\kk_N,z_1\lc z_n]$ \,of \,${\cal E}_{N,n}$. The operators $B_{ij}$ and $\varPsi_{ij}$, defined in Section~\ref{main}, depend on $K_1\lc K_N$, $z_1\lc z_n$ polynomially, so we consider them as elements of \,${\cal E}_{N,n}$. Denote by \,${\cal B}_{N,n}$ the unital subalgebra of \,${\cal E}_{N,n}$ generated by $B_{ij}$, \,$i=1\lc N$, \,$j\in{{\mc B}bb Z}_{>0}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi$. \begin{equation*}gin{lem} {\bs{\bar\la}}el{Bc} The algebra \,${\cal B}_{N,n}$ is generated by \,$\varPsi_{ij}$, \,$i=1\lc N$, \,$j\in{{\mc B}bb Z}_{>0}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi$, and symmetric polynomials in $\kk_1\lc\kk_N$. \end{lem} \begin{equation*}gin{proof} By formula~{{\mc B}bb R}ef{Bi0}, we have \begin{equation*} x^N\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi+\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi\sum_{i=1}^N B_{i0}\,x^{N-i}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi=\,\prod_{i=1}^N\,(x-K_i)\,, \vv.2> \end{equation*} so symmetric polynomials in $\kk_1\lc\kk_N$ belong to \,${\cal B}_{N,n}$. Formula~{{\mc B}bb R}ef{PsiK} yields \vvn.2> \begin{equation*} {\mc B}igl(\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fix^N+\sum_{i=1}^N\,\sum_{j=0}^\infty B_{ij}\,u^{-j}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fix^{N-i}\,{\mc B}igr) \prod_{i=1}^N\,\frac1{x-K_i}\;=\, 1+\sum_{i=1}^\infty\,\sum_{j=1}^\infty\,\varPsi_{ij}\,u^{-j}\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fix^{-i}\,.
\vv.1> \end{equation*} Therefore, the elements \,$\varPsi_{ij}$ are linear combinations of the elements \,$B_{ij}$ with coefficients being symmetric polynomials in $\kk_1\lc\kk_N$, and vice versa. That proves the claim. \end{proof} Let \,$Z\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi,Q$ \,be the matrices given by~{{\mc B}bb R}ef{Z}, {{\mc B}bb R}ef{Q}. For any $f\in\Hat{\mc F}c_n$, \relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fidefine a function $\bar f$ of the variables \,$z_1\lc z_n$, $h_1\lc h_n$ \,by the formula \vvn.3> \begin{equation*} \bar f(z_1\lc z_n,\yh_1\lc\yh_n)\,=\,f(Q,Z)\,. \vv.2> \end{equation*} \begin{equation*}gin{lem} {\bs{\bar\la}}el{fb} The function \,$\bar f$ depends only on the image of \,$f$ in \,${\cal F}_n$. \end{lem} \begin{equation*}gin{proof} The matrix \,$[Q,Z]+1$ \,has rank one, so the pair $(Q,Z)$ belongs to \,${{\mc B}bb C}c_n$, and the value $f(Q,Z)$ depends only on the restriction of $f$ to \,${{\mc B}bb C}c_n$. \end{proof} \begin{equation*}gin{thm} {\bs{\bar\la}}el{second} For any \,$f\in\Hat{\mc F}c_n$, we have $\bar f(z_1\lc z_n,H_1\lc H_n)\in{\cal B}_{N,n}$. In particular, $\bar f(z_1\lc z_n,H_1\lc H_n)$ is a polynomial in $z_1\lc z_n$. \end{thm} \begin{equation*}gin{proof} By Lemmas~\ref{fb} and~\ref{EG}, it suffices to prove the claim for the functions \,$\pho_{ij}(X,Y)$. Since \,$\bar\pho_{ij}=\psi_{ij}$ by~{{\mc B}bb R}ef{phi}, {{\mc B}bb R}ef{phiij}, {{\mc B}bb R}ef{psi}, {{\mc B}bb R}ef{psiij}, and \,$\psi_{ij}(z_1\lc z_n,H_1\lc H_n)=\varPsi_{ij}$ \,by Theorem~\ref{generate}, the statement follows from Lemma~\ref{Bc}. \end{proof} \begin{equation*}gin{example} Let \,$N=n=2$. \,Then \,$Z=\on{diag}(z_1,z_2)$\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi, \ $Q\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi=\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi\left(\! \begin{equation*}gin{array}{cccc} \yh_1 & (z_2-z_1)^{-1}\\[2pt] (z_1-z_2)^{-1}\! & \!\yh_2 \end{array} \!\right)$\,, \vvn.3> \begin{equation*} H_1\,=\,\kk_1\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fie_{11}^{(1)}\,+\, \kk_2\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fie_{22}^{(1)}\,+\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi\frac{\mc O}m{z_1-z_2}\,, \qquad H_2\,=\,\kk_1\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fie_{11}^{(2)}\,+\, \kk_2\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fie_{22}^{(2)}\,+\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi\frac {\mc O}m{z_2-z_1}\,, \end{equation*} \begin{equation*} {\mc O}m\,=\,e_{11}^{(1)}e_{11}^{(2)}+\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fie_{12}^{(1)}e_{21}^{(2)}+\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi e_{21}^{(1)}e_{12}^{(2)}+\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fie_{22}^{(1)}e_{22}^{(2)}\,. \vv.5> \end{equation*} Let \,$f(X,Y)=\on{tr}(X^2)$. \,Then \,$\bar f(z_1,z_2,H_1,H_2)\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi=\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fiH_1^2+\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fiH_2^2-\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi2\relax\ifmmode\mskip.666667\thinmuskip\relax\else\kern.111111em\fi(z_1-z_2)^{-2}$ \,is a polynomial in $z_1,z_2$.
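\par\smallskip\noindent The cancellation of the poles at $z_1=z_2$ in this example can also be confirmed by a direct symbolic computation; the following {\tt sympy} sketch (illustrative only, not part of the proof) checks that every matrix entry of $H_1^2+H_2^2-2\,(z_1-z_2)^{-2}$ is free of denominators in $z_1, z_2$.
\begin{verbatim}
# Illustrative check of the example: the poles of H_1^2 + H_2^2 - 2/(z_1-z_2)^2
# at z_1 = z_2 cancel, so the result is polynomial in z_1, z_2.
import sympy as sp

z1, z2, K1, K2 = sp.symbols('z1 z2 K1 K2')

def E(i, j):                                   # elementary 2x2 matrix e_{ij}
    m = sp.zeros(2, 2); m[i, j] = 1; return m

I2 = sp.eye(2)
kron = sp.kronecker_product
e1 = lambda i, j: kron(E(i, j), I2)            # e_{ij}^{(1)} on C^2 (x) C^2
e2 = lambda i, j: kron(I2, E(i, j))            # e_{ij}^{(2)}

Om = sum((e1(i, j) * e2(j, i) for i in range(2) for j in range(2)),
         sp.zeros(4, 4))
H1 = K1*e1(0, 0) + K2*e1(1, 1) + Om/(z1 - z2)
H2 = K1*e2(0, 0) + K2*e2(1, 1) + Om/(z2 - z1)

M = H1**2 + H2**2 - 2/(z1 - z2)**2 * sp.eye(4)

def pole_free(expr):                           # no z1, z2 in the denominator
    den = sp.fraction(sp.cancel(expr))[1]
    return den.free_symbols.isdisjoint({z1, z2})

print(all(pole_free(M[a, b]) for a in range(4) for b in range(4)))   # True
\end{verbatim}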
\end{example}

\begin{rem}
It is known that $\widehat{\mathcal F}_n$ is spanned by the functions $\on{tr}(X^{m_1}Y^{m_2}X^{m_3}Y^{m_4}\cdots)$, where $m_1,m_2,\ldots$ are nonnegative integers, see~\cite{W}.
\end{rem}

Theorems~\ref{generate} and~\ref{second} show that the assignment $\gm:f\mapsto\bar f(z_1\lc z_n,H_1\lc H_n)$ defines an algebra homomorphism $\widehat{\mathcal F}_n\to\mathcal{B}_{N,n}$ that sends $\phi_{ij}$ to $\varPsi_{ij}$. By Lemma~\ref{fb}, this homomorphism factors through $\mathcal{F}_n$. By Lemma~\ref{Bc}, the image of $\widehat{\mathcal F}_n$ tensored with the algebra of symmetric polynomials in $\kk_1\lc\kk_N$ generates $\mathcal{B}_{N,n}$.

\par\medskip
We show in~\cite{MTV6} that for $n=N$, the homomorphism $\gm$ induces an isomorphism of $\mathcal{F}_N$ with the quotient of $\mathcal{B}_{N,N}$ by the relations $\varPsi_{i1}=-\sum_{j=1}^N K_j^{i-1}$, $i\in\mathbb{Z}_{>0}$. In other words, let
\begin{equation*}
(V^{\ox N})_{\bf 1}=\,\bigl\{\,v\in V^{\ox N}\ \big|\ \sum_{a=1}^N e_{ii}^{(a)}\,v\,=\,v\,,\ i=1\lc N\,\bigr\}\,.
\end{equation*}
Each element of $\mathcal{B}_{N,N}$ induces an element of $\on{End}\bigl((V^{\ox N})_{\bf 1}\bigr)\ox\mathbb{C}\,[\kk_1\lc\kk_N,z_1\lc z_N]$. Then $\mathcal{F}_N$ is isomorphic to the image of $\mathcal{B}_{N,N}$ in $\on{End}\bigl((V^{\ox N})_{\bf 1}\bigr)\ox\mathbb{C}\,[\kk_1\lc\kk_N,z_1\lc z_N]$.

\par\bigskip
\begin{thebibliography}{[CG]}
\normalsize \frenchspacing \parskip.1\baselineskip \raggedbottom

\bibitem[B]{B} H.M.\,Babujian, {\it Off-shell Bethe ansatz equations and $N$-point correlators in the ${\rm SU}(2)$ WZNW theory\/}, J.\;Phys.~A {\bf 26} (1993), no.\;23, 6981--6990

\bibitem[CT]{CT} A.\,Chervov, D.\,Talalaev, {\it Quantum spectral curves, quantum integrable systems and the geometric Langlands correspondence\/}, Preprint (2006), 1--54; {\tt hep-th/0604128}

\bibitem[FFR]{FFR} B.\,Feigin, E.\,Frenkel, N.\,Reshetikhin, {\it Gaudin model, Bethe ansatz and critical level\/}, Comm. Math. Phys. {\bf 166} (1994), no.\;1, 27--62

\bibitem[FMTV]{FMTV} G.\,Felder, Y.\,Markov, V.\,Tarasov, A.\,Varchenko, {\it Differential equations compatible with KZ equations\/}, Math. Phys. Anal. Geom. {\bf 3} (2000), no.\;2, 139--177

\bibitem[G1]{G1} M.\,Gaudin, {\it Diagonalisation d'une classe d'Hamiltoniens de spin\/}, J. Physique {\bf 37} (1976), no.\;10, 1089--1098

\bibitem[G2]{G2} M.\,Gaudin, {\it La fonction d'onde de Bethe\/}, Collection du Commissariat \`a l'\'Energie Atomique: S\'erie Scientifique, Masson, Paris, 1983

\bibitem[KS]{KS} P.P.\,Kulish, E.K.\,Sklyanin, {\it Quantum spectral transform method. Recent developments\/}, Lect. Notes in Phys., {\bf 151}, Springer, Berlin--New York, 1982, 61--119

\bibitem[MTV1]{MTV1} E.\,Mukhin, V.\,Tarasov, A.\,Varchenko, {\it Bethe eigenvectors of higher transfer matrices\/}, J.~Stat. Mech.
(2006), no.\;8, P08002, 1--44

\bibitem[MTV2]{MTV2} E.\,Mukhin, V.\,Tarasov, A.\,Varchenko, {\it A generalization of the Capelli identity\/}, Preprint (2006), 1--14; {\tt arXiv:math/0610799}

\bibitem[MTV3]{MTV3} E.\,Mukhin, V.\,Tarasov, A.\,Varchenko, {\it Schubert calculus and representations of the general linear group\/}, Preprint (2007), 1--32; {\tt arXiv:0711.4079}

\bibitem[MTV4]{MTV4} E.\,Mukhin, V.\,Tarasov, A.\,Varchenko, {\it Generating operator of XXX or Gaudin transfer matrices has quasi-exponential kernel\/}, SIGMA {\bf 6} (2007), 060, 1--31

\bibitem[MTV5]{MTV5} E.\,Mukhin, V.\,Tarasov, A.\,Varchenko, {\it Spaces of quasi-exponentials and representations of $\mathfrak{gl}_N$\/}, Preprint (2008), 1--29; {\tt arXiv:0801.3120}

\bibitem[MTV6]{MTV6} E.\,Mukhin, V.\,Tarasov, A.\,Varchenko, {\it Bethe algebra, Calogero-Moser space and Cherednik algebra\/}, Preprint (2009)

\bibitem[RV]{RV} N.\,Reshetikhin, A.\,Varchenko, {\it Quasiclassical asymptotics of solutions to the KZ equations\/}, Geometry, topology, \& physics, 293--322, Conf. Proc. Lect. Notes Geom. Topology, IV, Int.~Press, Cambridge, MA, 1995

\bibitem[SV]{SV} V.\,Schechtman, A.\,Varchenko, {\it Arrangements of hyperplanes and Lie algebra homology\/}, Invent. Math. {\bf 106} (1991), no.\;1, 139--194

\bibitem[T]{T} D.\,Talalaev, {\it Quantization of the Gaudin system\/}, Preprint (2004), 1--19;\\ {\tt hep-th/0404153}

\bibitem[W]{W} H.\,Weyl, {\it The classical groups. Their invariants and representations\/}, Princeton University Press, Princeton, NJ, 1939

\bibitem[Wi]{Wi} G.\,Wilson, {\it Collisions of Calogero-Moser particles and an adelic Grassmannian\/}, Invent. Math. {\bf 133} (1998), 1--41

\end{thebibliography}

\end{document}
\begin{document} \title[Krivine's Function Calculus and Bochner integration] {Krivine's Function Calculus\\ and Bochner integration} \author{V.G. Troitsky} \address{Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, AB, T6G\,2G1, Canada.} \email{[email protected]} \author{M.S. T\"urer} \address{Department of Mathematics and Computer Science, \.Istanbul K\"ult\"ur University, Bak\i rk\"oy 34156, \.Istanbul, Turkey} \email{[email protected]} \thanks{The first author was supported by an NSERC grant.} \keywords{Banach lattice, Function Calculus, Bochner integral} \subjclass[2010]{Primary: 46B42. Secondary: 46A40} \date{\today} \begin{abstract} We prove that Krivine's Function Calculus is compatible with integration. Let $(\Omega,\Sigma,\mu)$ be a finite measure space, $X$ a Banach lattice, $\boldsymbol{x}\in X^n$, and $f\colon\mathbb R^n\times\Omega\to\mathbb R$ a function such that $f(\cdot,\omega)$ is continuous and positively homogeneous for every $\omega\in\Omega$, and $f(\boldsymbol{s},\cdot)$ is integrable for every $\boldsymbol{s}\in\mathbb R^n$. Put $F(\boldsymbol{s})=\int f(\boldsymbol{s},\omega)d\mu(\omega)$ and define $F(\boldsymbol{x})$ and $f(\boldsymbol{x},\omega)$ via Krivine's Function Calculus. We prove that under certain natural assumptions $F(\boldsymbol{x})=\int f(\boldsymbol{x},\omega)d\mu(\omega)$, where the right hand side is a Bochner integral. \end{abstract} \maketitle \section{Motivation} In \cite{Kalton:12}, the author defines a real-valued function of two real or complex variable via \begin{math} F(s,t)=\int_0^{2\pi}\bigabs{s+e^{i\theta}t}d\theta. \end{math} This is a positively homogeneous continuous function. Therefore, given two vectors $u$ and $v$ in a Banach lattice~$X$, one may apply Krivine's Function Calculus to $F$ and consider $F(u,v)$ as an element of~$X$. The author then claims that \begin{equation} \label{Kalton} F(u,v)=\int_0^{2\pi}\bigabs{u+e^{i\theta}v}d\theta, \end{equation} where the right hand side here is understood as a Bochner integral; this is used later in \cite{Kalton:12} to conclude that \begin{math} \bignorm{F(u,v)}\leqslant \int_0^{2\pi}\bignorm{u+e^{i\theta}v}d\theta \end{math} because Bochner integrals have this property: \begin{math} \bignorm{\int f}\leqslant\int\norm{f}. \end{math} A similar exposition is also found in~\cite[p.~146]{Davis:84}. Unfortunately, neither \cite{Kalton:12} nor \cite{Davis:84} includes a proof of~\eqref{Kalton}. In this note, we prove a general theorem which implies~\eqref{Kalton} as a special case. \section{Preliminaries} We start by reviewing the construction of Krivine's Function Calculus on Banach lattices; see~\cite[Theorem~1.d.1]{Lindenstrauss:79} for details. For Banach lattice terminology, we refer the reader to~\cite{Abramovich:02,Aliprantis:06a}. Fix $n\in\mathbb N$. A function $F\colon\mathbb R^n\to\mathbb R$ is said to be \term{positively homogeneous} if \begin{displaymath} F(\lambda t_1,\dots,\lambda t_n) =\lambda F(t_1,\dots,t_n) \mbox{ for all }t_1,\dots,t_n\in\mathbb R\mbox{ and }\lambda\geqslant 0. \end{displaymath} Let $H_n$ be the set of all continuous positively homogeneous functions from $\mathbb R^n$ to~$\mathbb R$. Let $S_\infty^n$ be the unit sphere of $\ell_\infty^n$, that is, \begin{displaymath} S_\infty^n=\bigl\{(t_1,\dots,t_n)\in\mathbb R^n\::\: \max\limits_{i=1,\dots,n}\abs{t_i}=1\bigr\}. \end{displaymath} It can be easily verified that the restriction map $F\mapsto F_{|S_\infty^n}$ is a lattice isomorphism from $H_n$ onto $C(S_\infty^n)$. 
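To see concretely why the restriction map is bijective (we include this short verification for the reader's convenience), note that a positively homogeneous $F$ is recovered from its restriction to the sphere by
\begin{displaymath}
F(\boldsymbol{t})=\norm{\boldsymbol{t}}_\infty\,F\bigl(\boldsymbol{t}/\norm{\boldsymbol{t}}_\infty\bigr)
\mbox{ for }\boldsymbol{t}\neq 0,\qquad F(0)=0;
\end{displaymath}
conversely, this formula extends every $G\in C(S_\infty^n)$ to a function in $H_n$, continuity at the origin following from $\abs{F(\boldsymbol{t})}\leqslant\norm{\boldsymbol{t}}_\infty\norm{G}_{C(S_\infty^n)}$.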
Hence, we can identify $H_n$ with $C(S_\infty^n)$. For each $i=1,\dots,n$, the $i$-th coordinate projection $\pi_i\colon\mathbb R^n\to\mathbb R$ clearly belongs to~$H_n$. Let $X$ be a (real) Banach lattice and $\boldsymbol{x}=(x_1,\dots,x_n)\in X^n$. Let $e\in X_+$ be such that $x_1,\dots,x_n$ belong to $I_e$, the principal order ideal of $e$. For example, one could take $e=\abs{x_1}\vee\dots\vee\abs{x_n}$. By Kakutani's representation theorem, the ideal $I_e$ equipped with the norm \begin{displaymath} \norm{x}_e=\inf\bigl\{\lambda>0\::\:\abs{x}\leqslant\lambda e\bigr\} \end{displaymath} is lattice isometric to $C(K)$ for some compact Hausdorff $K$. Let $F\in H_n$. Interpreting $x_1,\dots,x_n$ as elements of $C(K)$, we can define $F(x_1,\dots,x_n)$ in $C(K)$ as a composition. We may view it as an element of $I_e$ and, therefore, of $X$; we also denote it by $\widetilde F$ or $\Phi(F)$. It may be shown that, as an element of $X$, it does not depend on the particular choice of $e$. This results in a (unique) lattice homomorphism $\Phi\colon H_n\to X$ such that $\Phi(\pi_i)=x_i$. The map $\Phi$ will be referred to as \term{Krivine's function calculus}. This construction allows one to define expressions like $\Bigl(\sum_{i=1}^n\abs{x_i}^p\Bigr)^{\frac1p}$ for $0<p<\infty$ in every Banach lattice $X$; this expression is understood as $\Phi(F)$ where $F(t_1,\dots,t_n)=\Bigl(\sum_{i=1}^n\abs{t_i}^p\Bigr)^{\frac1p}$. Furthermore, \begin{equation} \label{FC-norms} \bignorm{F(\boldsymbol{x})}\leqslant \norm{F}_{C(S_\infty^n)}\cdot\Bignorm{\bigvee_{i=1}^n\abs{x_i}}. \end{equation} Let $L_n$ be the sublattice of $H_n$ or, equivalently, of $C(S_\infty^n)$, generated by the coordinate projections $\pi_i$ as $i=1,\dots,n$. It follows from the Stone-Weierstrass Theorem that $L_n$ is dense in $C(S_\infty^n)$. It follows from $\Phi(\pi_i)=x_i$ that $\Phi(L_n)$ is the sublattice generated by $x_1,\dots,x_n$ in $X$, hence $\Range\Phi$ is contained in the closed sublattice of $X$ generated by $x_1,\dots,x_n$. It follows from, e.g., Exercise~8 on \cite[p.204]{Aliprantis:06a} that this sublattice is separable. Let $(\Omega,\Sigma,\mu)$ be a finite measure space and $X$ a Banach space. A function $f\colon\Omega\to X$ is \emph{measurable} if there is a sequence $(f_n)$ of simple functions from $\Omega$ to $X$ such that $\lim_n\norm{f_n(\omega)-f(\omega)}=0$ almost everywhere. If, in addition, $\int\norm{f_n(\omega)-f(\omega)}d\mu(\omega)\to 0$ then $f$ is \term{Bochner integrable} with $\int_Af\,d\mu=\lim_n\int_Af_n\,d\mu$ for every measurable set~$A$. In the following theorem, we collect a few standard facts about Bochner integral for future reference; we refer the reader to~\cite[Chapter II]{Diestel:77} for proofs and further details. \begin{theorem}\label{Boch} Let $f\colon\Omega\to X$. \begin{enumerate} \item\label{Boch-lim} If $f$ is the almost everywhere limit of a sequence of measurable functions then $f$ is measurable. \item\label{Boch-norming} If $f$ is separable-valued and there is a norming set $\Gamma\subseteq X^*$ such that $x^*f$ is measurable for every $x^*\in\Gamma$ then $f$ is measurable. \item\label{Boch-int} A measurable function $f$ is Bochner integrable iff $\norm{f}$ is integrable. \item\label{Boch-L1} If $f(\omega)=u(\omega)x$ for some fixed $x\in X$ and $u\in L_1(\mu)$ and for all $\omega$ then $f$ is measurable and Bochner integrable. 
\item\label{Boch-op} If $f$ is Bochner integrable and $T\colon X\to Y$ is a bounded operator from $X$ to a Banach space $Y$ then \begin{math} T\bigl(\int f\,d\mu\bigr)= \int Tf\,d\mu. \end{math}
\end{enumerate}
\end{theorem}

\section{Main theorem}

Throughout the rest of the paper, we assume that $(\Omega,\Sigma,\mu)$ is a finite measure space, $n\in\mathbb N$, and $f\colon\mathbb R^n\times\Omega\to\mathbb R$ is such that $f(\cdot,\omega)$ is in $H_n$ for every $\omega\in\Omega$ and $f(\boldsymbol{s},\cdot)$ is integrable for every $\boldsymbol{s}\in\mathbb R^n$. For every $\boldsymbol{s}\in\mathbb R^n$, put $F(\boldsymbol{s})=\int f(\boldsymbol{s},\omega)d\mu(\omega)$. It is clear that $F$ is positively homogeneous. Suppose, in addition, that $F$ is continuous. Let $X$ be a Banach lattice, $\boldsymbol{x}\in X^n$, and $\Phi\colon H_n\to X$ the corresponding function calculus. Since $F\in H_n$, $\widetilde{F}=F(\boldsymbol{x})=\Phi(F)$ is defined as an element of~$X$. On the other hand, for every~$\omega$, the function $\boldsymbol{s}\in\mathbb R^n\mapsto f(\boldsymbol{s},\omega)$ is in~$H_n$, hence we may apply $\Phi$ to it. We denote the resulting vector by $\tilde{f}(\omega)$ or $f(\boldsymbol{x},\omega)$. This produces a function $\omega\in\Omega\mapsto f(\boldsymbol{x},\omega)\in X$.

\begin{theorem}\label{CK}
Suppose that $F$ is continuous and the function $M(\omega):=\bignorm{f(\cdot,\omega)}_{C(S_\infty^n)}$ is integrable. Then $f(\boldsymbol{x},\omega)$ is Bochner integrable as a function of $\omega$ and \begin{math} F(\boldsymbol{x})=\int f(\boldsymbol{x},\omega)d\mu(\omega), \end{math} where the right hand side is a Bochner integral.
\end{theorem}

\begin{proof}
\emph{Special case:} $X=C(K)$ for some compact Hausdorff $K$. By uniqueness of function calculus, Krivine's function calculus $\Phi$ agrees with ``point-wise'' function calculus. In particular,
\begin{displaymath}
\widetilde{F}(k)=F\bigl(x_1(k),\dots,x_n(k)\bigr)\mbox{ and }
\bigl(\tilde f(\omega)\bigr)(k)=f\bigl(x_1(k),\dots,x_n(k),\omega\bigr)
\end{displaymath}
for all $k\in K$ and $\omega\in\Omega$. We view $\tilde f$ as a function from $\Omega$ to $C(K)$. We are going to show that $\tilde f$ is Bochner integrable. It follows from $\tilde f(\omega)\in\Range\Phi$ that $\tilde f$ is a separable-valued function. For every $k\in K$, consider the point-evaluation functional $\varphi_k\in C(K)^*$ given by $\varphi_k(x)=x(k)$. Then
\begin{displaymath}
\varphi_k\bigl(\tilde f(\omega)\bigr)=\bigl(\tilde f(\omega)\bigr)(k)
=f\bigl(x_1(k),\dots,x_n(k),\omega\bigr)
\end{displaymath}
for every $k\in K$. By assumption, this function is integrable; in particular, it is measurable. Since the set \begin{math} \bigl\{\varphi_k\::\: k\in K\bigr\} \end{math} is a norming subset of $C(K)^*$, Theorem~\ref{Boch}\eqref{Boch-norming} yields that $\tilde f$ is measurable. Clearly, \begin{math} \bigabs{\bigl(\tilde f(\omega)\bigr)(k)}\leqslant M(\omega) \end{math} for every $k\in K$ and $\omega\in\Omega$, so that $\norm{\tilde f(\omega)}_{C(K)}\leqslant M(\omega)$ for every $\omega$. It follows that \begin{math} \int\norm{\tilde f(\omega)}_{C(K)}\, d\mu(\omega) \end{math} exists and, therefore, $\tilde f$ is Bochner integrable by Theorem~\ref{Boch}\eqref{Boch-int}. Put $h:=\int\tilde f(\omega)\,d\mu(\omega)$, where the right hand side is a Bochner integral.
Applying Theorem~\ref{Boch}\eqref{Boch-op}, we get
\begin{multline*}
h(k)=\varphi_k(h)
=\int\varphi_k\bigl(\tilde f(\omega)\bigr)\,d\mu(\omega)
=\int f\bigl(x_1(k),\dots,x_n(k),\omega\bigr)\,d\mu(\omega)\\
=F\bigl(x_1(k),\dots,x_n(k)\bigr)=\widetilde{F}(k)
\end{multline*}
for every $k\in K$. It follows that $\int\tilde f(\omega)\,d\mu(\omega)=\widetilde{F}$.

\emph{General case.} Let $e=\abs{x_1}\vee\dots\vee\abs{x_n}$. Then $\bigl(I_e,\norm{\cdot}_e\bigr)$ is lattice isometric to $C(K)$ for some compact Hausdorff $K$. Note also that $\abs{x}\leqslant\norm{x}_ee$ for every $x\in I_e$; this yields $\norm{x}\leqslant\norm{x}_e\norm{e}$, hence the inclusion map $T\colon \bigl(I_e,\norm{\cdot}_e\bigr)\to X$ is bounded. Identifying $I_e$ with $C(K)$, we may view $T$ as a bounded lattice embedding from $C(K)$ into $X$. By the construction of Krivine's Function Calculus, $\Phi$ actually acts into $I_e$, i.e., $\Phi=T\Phi_0$, where $\Phi_0$ is the $C(K)$-valued function calculus. By the special case, we know that $\int\tilde f(\omega)\,d\mu(\omega)=\widetilde{F}$ in $C(K)$. Applying $T$, we obtain the same identity in $X$ by Theorem~\ref{Boch}\eqref{Boch-op}.
\end{proof}

Finally, we analyze whether any of the assumptions may be removed. Clearly, one cannot remove the assumption that $F$ is continuous; otherwise, $\widetilde{F}$ would make no sense. The following example shows that, in general, $F$ need not be continuous.

\begin{example}
Let $n=2$, let $\mu$ be a measure on $\mathbb N$ given by $\mu\bigl(\{k\}\bigr)=2^{-k}$. For each $k$, we define $f_k=f(\cdot,k)$ as follows. Note that it suffices to define $f_k$ on $S^2_\infty$. Let $I_k$ be the straight line segment connecting $(1,0)$ and $(1,2^{-k+1})$. Define $f_k$ so that it vanishes on $S^2_\infty\setminus I_k$, $f_k(1,0)=f_k(1,2^{-k+1})=0$, $f_k(1,2^{-k})=2^k$, and is linear on each half of $I_k$. Then $f_k\in H_2$ and $F(\boldsymbol{s})$ is defined for every $\boldsymbol{s}\in\mathbb R^2$. It follows from $F(\boldsymbol{s})=\sum_{k=1}^\infty 2^{-k}f_k(\boldsymbol{s})$ that $F(1,0)=0$ and $F(1,2^{-k})\geqslant 2^{-k}f_k(1,2^{-k})=1$, hence $F$ is discontinuous at $(1,0)$.
\end{example}

The assumption that $M$ is integrable cannot be removed either. Indeed, consider the special case when $X=C(S_\infty^n)$ and $x_i=\pi_i$ as $i=1,\dots,n$. In this case, $\Phi$ is the identity map and $\tilde f(\omega)=f(\cdot,\omega)$. It follows from Theorem~\ref{Boch}\eqref{Boch-int} that $\tilde{f}$ is Bochner integrable iff $\norm{\tilde{f}}$ is integrable iff $M$ is integrable. Finally, the assumption that $f(\cdot,\omega)$ is in $H_n$ for every $\omega$ may clearly be relaxed to ``for almost every $\omega$''.

\section{Direct proof}

In the previous section, we presented a proof of Theorem~\ref{CK} using representation theory. In this section, we present a direct proof. However, we impose an additional assumption: we assume that $f(\cdot,\omega)$ is continuous on $S_\infty^n$ uniformly on~$\omega$, that is,
\begin{multline}
\label{ucont}
\mbox{for every }\varepsilon>0\mbox{ there exists }\delta>0
\mbox{ such that }
\bigabs{f(\boldsymbol{s},\omega)-f(\boldsymbol{t},\omega)}<\varepsilon\\
\mbox{ for all }\boldsymbol{s},\boldsymbol{t}\in S_\infty^n
\mbox{ and all }\omega\in\Omega
\mbox{ provided that }
\norm{\boldsymbol{s}-\boldsymbol{t}}_\infty<\delta.
\end{multline}
In Theorem~\ref{CK}, we assumed that $F$ was continuous and $M$ was integrable. Now these two conditions are satisfied automatically.
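As an illustration (the verification below is only a sketch and is not used in the sequel), the function from Section~1 fits into this setting: for $f\bigl((s,t),\theta\bigr)=\bigabs{s+e^{i\theta}t}$ on $\mathbb R^2\times[0,2\pi]$, each $f(\cdot,\theta)$ is positively homogeneous and continuous, and
\begin{displaymath}
\bigabs{f\bigl((s_1,t_1),\theta\bigr)-f\bigl((s_2,t_2),\theta\bigr)}
\leqslant\abs{s_1-s_2}+\abs{t_1-t_2}
\leqslant 2\norm{(s_1,t_1)-(s_2,t_2)}_\infty
\end{displaymath}
for every $\theta$, so that \eqref{ucont} holds with $\delta=\varepsilon/2$; hence \eqref{Kalton} is a special case of Theorem~\ref{direct} below.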
In order to see that $F$ is continuous, fix $\varepsilon>0$; let $\delta$ be as in~\eqref{ucont}, then
\begin{equation}
\label{FsFt}
\bigabs{F(\boldsymbol{s})-F(\boldsymbol{t})}
\leqslant\int\bigabs{f(\boldsymbol{s},\omega)-f(\boldsymbol{t},\omega)}
d\mu(\omega)<\varepsilon\mu(\Omega)
\end{equation}
whenever $\boldsymbol{s},\boldsymbol{t}\in S_\infty^n$ with $\norm{\boldsymbol{s}-\boldsymbol{t}}_\infty<\delta$. The proof of integrability of $M$ will be included in the proof of the theorem.

\begin{theorem}\label{direct}
Suppose that $f(\cdot,\omega)$ is continuous on $S_\infty^n$ uniformly on~$\omega$. Then $f(\boldsymbol{x},\omega)$ is Bochner integrable as a function of $\omega$ and \begin{math} F(\boldsymbol{x})=\int f(\boldsymbol{x},\omega)d\mu(\omega). \end{math}
\end{theorem}

\begin{proof}
Without loss of generality, by scaling $\mu$ and~$\boldsymbol{x}$, we may assume that $\mu$ is a probability measure and \begin{math} \Bignorm{\bigvee_{i=1}^n\abs{x_i}}=1; \end{math} this will simplify computations. In particular,~\eqref{FC-norms} becomes $\norm{H(\boldsymbol{x})}\leqslant\norm{H}_{C(S_\infty^n)}$ for every $H\in C(S_\infty^n)$. Note also that $\boldsymbol{x}$ in the theorem is a ``fake'' variable as $\boldsymbol{x}$ is fixed. It may be more accurate to write $\widetilde{F}$ and $\tilde{f}(\omega)$ instead of $F(\boldsymbol{x})$ and $f(\boldsymbol{x},\omega)$, respectively. Hence, we need to prove that $\tilde f$ as a function from $\Omega$ to $X$ is Bochner integrable and its Bochner integral is~$\widetilde{F}$.

Fix $\varepsilon>0$. Let $\delta$ be as in~\eqref{ucont}. It follows from~\eqref{FsFt} that
\begin{equation}
\label{FsFteps}
\bigabs{F(\boldsymbol{s})-F(\boldsymbol{t})}<\varepsilon
\mbox{ whenever }\boldsymbol{s},\boldsymbol{t}\in S_\infty^n
\mbox{ with }\norm{\boldsymbol{s}-\boldsymbol{t}}_\infty<\delta.
\end{equation}
Each of the $2n$ faces of $S_\infty^n$ is a translate of the $(n-1)$-dimensional unit cube $B_\infty^{n-1}$. Partition each of these faces into $(n-1)$-dimensional cubes of diameter less than~$\delta$, where the diameter is computed with respect to the $\norm{\cdot}_\infty$-metric. Partition each of these cubes into simplices. Therefore, there exists a partition of the entire $S_\infty^n$ into finitely many simplices of diameter less than~$\delta$. Denote the vertices of these simplices by $\boldsymbol{s}_1,\dots,\boldsymbol{s}_m$. Thus, we have produced a triangularization of $S_\infty^n$ with nodes $\boldsymbol{s}_1,\dots,\boldsymbol{s}_m$.

Let $\boldsymbol{a}\in\mathbb R^m$. Define a function $L\colon S_\infty^n\to\mathbb R$ by setting $L(\boldsymbol{s}_j)=a_j$ as $j=1,\dots,m$ and then extending it to each of the simplices linearly; this can be done because every point in a simplex can be written in a unique way as a convex combination of the vertices of the simplex. We write $L=T\boldsymbol{a}$. This gives rise to a linear operator $T\colon\mathbb R^m\to C(S_\infty^n)$. For each $j=1,\dots,m$, let $e_j$ be the $j$-th unit vector in $\mathbb R^m$; put $d_j=Te_j$. Clearly,
\begin{equation}
\label{Ta}
T\boldsymbol{a}=\sum_{j=1}^ma_jd_j\mbox{ for every }\boldsymbol{a}\in\mathbb R^m.
\end{equation}
Let $H\in C(S_\infty^n)$. Let $L=T\boldsymbol{a}$ where $a_j=H(\boldsymbol{s}_j)$. Then $L$ agrees with $H$ at $\boldsymbol{s}_1,\dots,\boldsymbol{s}_m$. We write $L=SH$; this defines a linear operator $S\colon C(S_\infty^n)\to C(S_\infty^n)$. Clearly, this is a linear contraction.
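For completeness, here is the one-line reason: if $\boldsymbol{s}$ lies in a simplex of the triangularization with vertices $\boldsymbol{s}_{j_1},\dots,\boldsymbol{s}_{j_n}$ and $\boldsymbol{s}=\sum_{k=1}^n\lambda_k\boldsymbol{s}_{j_k}$ is its convex representation, then
\begin{displaymath}
\bigabs{(SH)(\boldsymbol{s})}=\Bigabs{\sum_{k=1}^n\lambda_kH(\boldsymbol{s}_{j_k})}
\leqslant\max_{k=1,\dots,n}\bigabs{H(\boldsymbol{s}_{j_k})}
\leqslant\norm{H}_{C(S_\infty^n)}.
\end{displaymath}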
Suppose that $H\in C(S_\infty^n)$ is such that $\bigabs{H(\boldsymbol{s})-H(\boldsymbol{t})}<\varepsilon$ whenever $\norm{\boldsymbol{s}-\boldsymbol{t}}_\infty<\delta$. Let $L=SH$. We claim that $\bignorm{L-H}_{C(S_\infty^n)}<\varepsilon$. Indeed, fix $\boldsymbol{s}\in S_\infty^n$. Let $\boldsymbol{s}_{j_1},\dots,\boldsymbol{s}_{j_n}$ be the vertices of a simplex in the triangularization of $S_\infty^n$ that contains~$\boldsymbol{s}$. Then $\boldsymbol{s}$ can be written as a convex combination $\boldsymbol{s}=\sum_{k=1}^n\lambda_k\boldsymbol{s}_{j_k}$. Note that $\norm{\boldsymbol{s}-\boldsymbol{s}_{j_k}}_\infty<\delta$ for all $k=1,\dots,n$. It follows that
\begin{displaymath}
\bigabs{L(\boldsymbol{s})-H(\boldsymbol{s})}
=\Bigabs{\sum_{k=1}^n\lambda_kL(\boldsymbol{s}_{j_k})
-\sum_{k=1}^n\lambda_kH(\boldsymbol{s})}
\leqslant\sum_{k=1}^n\lambda_k\bigabs{H(\boldsymbol{s}_{j_k})-H(\boldsymbol{s})}
<\varepsilon.
\end{displaymath}
This proves the claim.

Let $G=SF$. It follows from~\eqref{FsFteps} and the preceding observation that $\norm{G-F}_{C(S_\infty^n)}<\varepsilon$, so that
\begin{equation}\label{GF}
\bignorm{G(\boldsymbol{x})-F(\boldsymbol{x})}<\varepsilon.
\end{equation}
Similarly, for every $\omega\in\Omega$, apply $S$ to $f(\cdot,\omega)$ and denote the resulting function by $g(\cdot,\omega)$. In particular, $g(\boldsymbol{s}_j,\omega)=f(\boldsymbol{s}_j,\omega)$ for every $\omega\in\Omega$ and every $j=1,\dots,m$. It follows also that
\begin{equation}
\label{fgseps}
\bignorm{f(\cdot,\omega)-g(\cdot,\omega)}_{C(S_\infty^n)}<\varepsilon
\end{equation}
for every~$\omega$, and, therefore
\begin{equation}
\label{fgxeps}
\bignorm{\tilde{f}(\omega)-\tilde{g}(\omega)}
=\bignorm{f(\boldsymbol{x},\omega)-g(\boldsymbol{x},\omega)}<\varepsilon,
\end{equation}
where $\tilde{g}(\omega)=g(\boldsymbol{x},\omega)$ is the image under $\Phi$ of the function $\boldsymbol{s}\in S_\infty^n\mapsto g(\boldsymbol{s},\omega)$. Note that
\begin{equation}
\label{Ggs}
G(\boldsymbol{s}_j)=F(\boldsymbol{s}_j)=\int f(\boldsymbol{s}_j,\omega)d\mu(\omega)
=\int g(\boldsymbol{s}_j,\omega)d\mu(\omega)
\end{equation}
for every $j=1,\dots,m$. Since $G=SF=T\boldsymbol{a}$ where $a_j=F(\boldsymbol{s}_j)=G(\boldsymbol{s}_j)$ as $j=1,\dots,m$, it follows from~\eqref{Ta} that
\begin{equation}
\label{Gsum}
G=\sum_{j=1}^mG(\boldsymbol{s}_j)d_j.
\end{equation}
Similarly, for every $\omega\in\Omega$, we have
\begin{equation}
\label{gsum}
g(\cdot,\omega)=\sum_{j=1}^mg(\boldsymbol{s}_j,\omega)d_j.
\end{equation}
Applying $\Phi$ to~\eqref{Gsum} and~\eqref{gsum}, we obtain $\widetilde{G}=G(\boldsymbol{x})=\sum_{j=1}^mG(\boldsymbol{s}_j)d_j(\boldsymbol{x})$ and
\begin{displaymath}
\tilde{g}(\omega)=g(\boldsymbol{x},\omega)
=\sum_{j=1}^mg(\boldsymbol{s}_j,\omega)d_j(\boldsymbol{x})
=\sum_{j=1}^mf(\boldsymbol{s}_j,\omega)d_j(\boldsymbol{x}).
\end{displaymath}
Together with Theorem~\ref{Boch}\eqref{Boch-L1}, this yields that $\tilde{g}$ is measurable and Bochner integrable. It now follows from~\eqref{Ggs} and~\eqref{Gsum} that
\begin{multline}
\label{Gg}
G(\boldsymbol{x})=\sum_{j=1}^mG(\boldsymbol{s}_j)d_j(\boldsymbol{x})
=\sum_{j=1}^m\Bigl(\int g(\boldsymbol{s}_j,\omega)d\mu(\omega)\Bigr)d_j(\boldsymbol{x})\\
=\int\Bigl(\sum_{j=1}^mg(\boldsymbol{s}_j,\omega)d_j(\boldsymbol{x})\Bigr)d\mu(\omega)
=\int g(\boldsymbol{x},\omega)d\mu(\omega).
\end{multline}
We will show next that $\tilde{f}$ is Bochner integrable.
It follows from~\eqref{fgxeps} and the fact that $\varepsilon$ is arbitrary that $\tilde{f}$ can be approximated almost everywhere (actually, everywhere) by measurable functions; hence $\tilde{f}$ is measurable by Theorem~\ref{Boch}\eqref{Boch-lim}. Next, we claim that there exists $\lambda\in\mathbb R_+$ such that $\bigabs{f(\boldsymbol{s},\omega)-f(\boldsymbol{1},\omega)}\leqslant\lambda$ for all $\boldsymbol{s}\in S_\infty^n$ and all $\omega\in\Omega$. Here $\boldsymbol{1}=(1,\dots,1)$. Indeed, let $\boldsymbol{s}\in S_\infty^n$ and $\omega\in\Omega$. Find $j_1,\dots,j_l$ such that $\boldsymbol{s}_{j_1}=\boldsymbol{1}$, $\boldsymbol{s}_{j_k}$ and $\boldsymbol{s}_{j_{k+1}}$ belong to the same simplex as $k=1,\dots,l-1$, and $\boldsymbol{s}_{j_l}$ is a vertex of a simplex containing~$\boldsymbol{s}$. It follows that \begin{displaymath} \bigabs{f(\boldsymbol{s},\omega)-f(\boldsymbol{1},\omega)} \leqslant\bigabs{f(\boldsymbol{s},\omega)-f(\boldsymbol{s}_{j_l},\omega)} +\sum_{k=1}^{l-1}\bigabs{f(\boldsymbol{s}_{j_{k+1}},\omega)-f(\boldsymbol{s}_{j_k},\omega)} \leqslant l\varepsilon\leqslant m\varepsilon. \end{displaymath} This proves the claim with $\lambda=m\varepsilon$. It follows that \begin{displaymath} \bignorm{\tilde{f}(\omega)} \leqslant\bignorm{f(\cdot,\omega)}_{C(S_\infty^n)} =\sup_{\boldsymbol{s}\in S_\infty^n}\bigabs{f(\boldsymbol{s},\omega)} \leqslant\bigabs{f(\boldsymbol{1},\omega)}+\lambda. \end{displaymath} Since $\bigabs{f(\boldsymbol{1},\omega)}+\lambda$ is an integrable function of~$\omega$, we conclude that $\norm{\tilde{f}}$ is integrable, hence $\tilde{f}$ is Bochner integrable by Theorem~\ref{Boch}\eqref{Boch-int}. It now follows from~\eqref{fgxeps} that \begin{equation} \label{fg} \Bignorm{\int f(\boldsymbol{x},\omega)d\mu(\omega) -\int g(\boldsymbol{x},\omega)d\mu(\omega)} \leqslant\int\bignorm{f(\boldsymbol{x},\omega)-g(\boldsymbol{x},\omega)}d\mu(\omega) <\varepsilon \end{equation} Finally, combining~\eqref{GF}, \eqref{Gg}, and~\eqref{fg}, we get \begin{displaymath} \Bignorm{F(\boldsymbol{x})-\int f(\boldsymbol{x},\omega)d\mu(\omega)}< 2\varepsilon. \end{displaymath} Since $\varepsilon>0$ is arbitrary, this proves the theorem. \end{proof} Some of the work on this paper was done during a visit of the second author to the University of Alberta. We would like to thank the referee whose helpful remarks and suggestions considerably improved this paper. \end{document}
\begin{document}

\title[Surjectivity]{Surjectivity of the $\overline{\partial}$-operator between weighted spaces of smooth vector-valued functions}
\author[K.~Kruse]{Karsten Kruse}
\address{Hamburg University of Technology\\ Institute of Mathematics \\ Am Schwarzenberg-Campus~3 \\ 21073 Hamburg \\ Germany}
\email{[email protected]}
\subjclass[2010]{Primary 35A01, 32W05, Secondary 46A32, 46E40}
\keywords{Cauchy-Riemann, weight, smooth, surjective, solvability, Fr\'echet}
\date{\today}

\begin{abstract}
We derive sufficient conditions for the surjectivity of the Cauchy-Riemann operator $\overline{\partial}$ between weighted spaces of smooth Fr\'echet-valued functions. This is done by establishing an analog of H\"ormander's theorem on the solvability of the inhomogeneous Cauchy-Riemann equation in a space of smooth $\C$-valued functions whose topology is given by a whole family of weights. Our proof relies on a weakened variant of weak reducibility of the corresponding subspace of holomorphic functions in combination with the Mittag-Leffler procedure. Using tensor products, we deduce the corresponding result on the solvability of the inhomogeneous Cauchy-Riemann equation for Fr\'echet-valued functions.
\end{abstract}

\maketitle

\section{Introduction}

We study the Cauchy-Riemann operator between weighted spaces of smooth functions with values in a Fr\'echet space. Let $E$ be a complete locally convex Hausdorff space over $\C$, $\Omega\subset\R^{2}$ open and $\mathcal{E}(\Omega):=\mathcal{C}^{\infty}(\Omega,\C)$ the space of infinitely continuously partially differentiable functions from $\Omega$ to $\C$. It is well-known that the Cauchy-Riemann operator
\[
\overline{\partial}:=\frac{1}{2}(\partial_{1}+i\partial_{2})\colon \mathcal{E}(\Omega)\to \mathcal{E}(\Omega)
\]
is surjective (see e.g.\ \cite[Theorem 1.4.4, p.\ 12]{H3}). Since $\mathcal{E}(\Omega)$, equipped with the usual topology of uniform convergence of partial derivatives of any order on compact subsets, is a nuclear Fr\'{e}chet space by \cite[Example 28.9 (1), p.\ 349]{meisevogt1997}, we have the topological isomorphism $\mathcal{E}(\Omega,E)\cong\mathcal{E}(\Omega)\widehat{\otimes}_{\pi}E$ by \cite[Theorem 44.1, p.\ 449]{Treves} where $\mathcal{E}(\Omega)\widehat{\otimes}_{\pi}E$ is the completion of the projective tensor product. Due to the classical theory of tensor products, the surjectivity of $\overline{\partial}$ implies the surjectivity of
\[
\overline{\partial}^{E}\colon \mathcal{E}(\Omega,E)\to \mathcal{E}(\Omega,E)
\]
for Fr\'echet spaces $E$ over $\C$ (see e.g.\ \cite[Satz 10.24, p.\ 255]{Kaballo}) where $\mathcal{E}(\Omega,E)$ is the space of infinitely continuously partially differentiable functions from $\Omega$ to $E$ and $\overline{\partial}^{E}$ is the Cauchy-Riemann operator for $E$-valued functions. In other words, given $f\in\mathcal{E}(\Omega,E)$ there is a solution $u\in\mathcal{E}(\Omega,E)$ of the $\overline{\partial}$-problem, i.e.\
\begin{equation}\label{eq:intro.0}
\overline{\partial}^{E}u=f.
\end{equation}
Now, we consider the following situation. Denote by $(p_{\alpha})_{\alpha\in\mathfrak{A}}$ a system of seminorms inducing the locally convex Hausdorff topology of $E$.
Let $f$ fulfil some additional growth conditions given by an increasing family of positive continuous functions $\mathcal{V}:=(\nu_{n})_{n\in\N}$ on an increasing sequence of open subsets $(\Omega_{n})_{n\in\N}$ of $\Omega$ with $\Omega=\bigcup_{n\in\N}\Omega_{n}$, namely, \[ |f|_{n,m,\alpha}:=\sup_{\substack{x\in \Omega_{n}\\ \beta\in\N^{2}_{0},\,|\beta|\leq m}} p_{\alpha}\bigl((\partial^{\beta})^{E}f(x)\bigr)\nu_{n}(x)<\infty \] for every $n\in\N$, $m\in\N_{0}$ and $\alpha\in\mathfrak{A}$. Let us call the space of smooth functions having this growth $\mathcal{EV}(\Omega,E)$. Then there is always a solution $u\in\mathcal{E}(\Omega,E)$ of \eqref{eq:intro.0}. Our aim is to derive sufficient conditions such that there is a solution $u$ of \eqref{eq:intro.0} having the same growth as the right-hand side $f$. So we are interested under which conditions the Cauchy-Riemann operator \[ \overline{\partial}^{E}\colon \mathcal{EV}(\Omega,E)\to \mathcal{EV}(\Omega,E) \] is surjective. The interest in solving the vector-valued $\overline{\partial}$-problem arises in \cite{ich} from the construction of smooth functions with exponential growth on strips (with holes), which is used to prove the flabbyness of the sheaf of vector-valued Fourier hyperfunctions, see \cite[6.8 Lemma, p.\ 118]{ich} and \cite[6.11 Theorem, p.\ 136]{ich}. However, the interest in the vector-valued $\overline{\partial}$-problem may also be motivated from the scalar-valued problem, namely, from the question of parameter dependence. If e.g.\ the right-hand side $f_{\lambda}\in\mathcal{EV}(\Omega)$ depends continuously on a parameter $\lambda\in[0,1]$, then there are solutions $u_{\lambda}\in\mathcal{EV}(\Omega)$ of $\overline{\partial}u_{\lambda}=f_{\lambda}$ which depend continuously on $\lambda$ as well if the vector-valued $\overline{\partial}$-problem \eqref{eq:intro.0} is solvable for the Banach space $E=\mathcal{C}([0,1],\C)$ of continuous $\C$-valued functions on $[0,1]$. The difficult part is to solve the $\overline{\partial}$-problem in the scalar-valued case, i.e.\ in $\mathcal{EV}(\Omega)$. In the case that $\mathcal{V}=(\nu)$ and $\Omega_{n}=\Omega$ for all $n\in\N$ where $\nu$ is a weight which permits growth near infinity there is a classical result by H\"ormander \cite[Theorem 4.4.2, p.\ 94]{H3} on the solvability of the $\overline{\partial}$-problem (in the distributional sense) in weighted spaces of $\C$-valued square-integrable functions of the form \[ L^{2}\nu(\Omega):=\{f\colon\Omega\to\C\;\text{measurable}\;|\;\int_{\Omega}|f(z)|^{2}\nu(z)\differential z<\infty\}. \] The opposite situation where the weight $\nu$ permits decay near infinity is handled in \cite[Theorem 1.2, p.\ 351]{Hedenmalm2015} and more general in \cite{Amar2016}. The solvability of the $\overline{\partial}$-problem in weighted $L^{2}$-spaces and its subspaces of holomorphic functions has some nice applications (see \cite{Hoermander2003}) and the properties of the canonical solution operator to $\overline{\partial}$ are subject of intense studies \cite{Bonami1990}, \cite{Charpentier2014}, \cite{Haslinger2001}, \cite{Haslinger2002}, \cite{Haslinger2007}. 
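For orientation we add a simple illustration of what ``permits growth near infinity'' means in this context: a typical such weight is the Gaussian $\nu(z)=e^{-|z|^{2}}$; every measurable $f$ satisfying an estimate $|f(z)|\leq Ce^{a|z|}$ with some $C,a>0$ fulfils $\int_{\C}|f(z)|^{2}\nu(z)\differential z<\infty$ and hence belongs to $L^{2}\nu(\C)$, so right-hand sides of exponential growth are admissible in this weighted setting.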
If there is a whole system of weights $\mathcal{V}=(\nu_{n})_{n\in\N}$, i.e.\ the $\overline{\partial}$-problem is considered in the projective limit spaces $L^{2}\mathcal{V}(\Omega):=\bigcap_{n\in\N}L^{2}\nu_{n}(\Omega_{n})$ or $L^{\infty}\mathcal{V}(\Omega):=\bigcap_{n\in\N}L^{\infty}\nu_{n}(\Omega)$ where \[ L^{\infty}\nu_{n}(\Omega):=\{f\colon\Omega\to\C\;\text{measurable}\;|\;\sup_{z\in\Omega}|f(z)|\nu_{n}(z)<\infty\}, \] then solving the $\overline{\partial}$-problem becomes more complicated since a whole family of $L^{2}$- resp.\ $L^{\infty}$-estimates has to be satisfied. Such a $\overline{\partial}$-problem is usually solved by a combination of H\"ormander's classical result with the Mittag-Leffler procedure. However, this requires the projective limit $\mathcal{O}^{2}\mathcal{V}(\Omega):=\bigcap_{n\in\N}\mathcal{O}^{2}\nu_{n}(\Omega)$ resp.\ $\mathcal{O}^{\infty}\mathcal{V}(\Omega):=\bigcap_{n\in\N}\mathcal{O}^{\infty}\nu_{n}(\Omega)$, where \[ \mathcal{O}^{k}\nu_{n}(\Omega):=\{f\in L^{k}\nu_{n}(\Omega)\;|\;f\;\text{holomorphic}\}, \] to be weakly reduced, i.e.\ for every $n\in\N$ there is $m\in\N$ such that $\mathcal{O}^{k}\mathcal{V}(\Omega)$ is dense in $\mathcal{O}^{k}\nu_{m}(\Omega)$ with respect to the topology of $\mathcal{O}^{k}\nu_{n}(\Omega)$ for $k=2$ resp.\ $k=\infty$, see \cite[Theorem 3, p.\ 56]{Epifanov1992}, \cite[1.3 Lemma, p.\ 418]{Langenbruch1994} and \cite[Theorem 1, p.\ 145]{Polyakova2017}. Unfortunately, the weak reducibility of the projective limit is not easy to check. Furthermore, in our setting we have to control the growth of the partial derivatives as well and the sequence $(\Omega_{n})_{n\in\N}$ usually consists of more than one set. Let us outline our strategy to solve the $\overline{\partial}$-problem in $\mathcal{EV}(\Omega,E)$ for Fr\'echet spaces $E$ over $\C$. In Section 2 we fix the notation and state some preliminaries. In Section 3 we phrase sufficient conditions (see Condition $(PN)$) such that there is an equivalent system of $L^{q}$-seminorms on $\mathcal{EV}(\Omega)$ (see \prettyref{lem:switch_top}). If they are fulfilled for $q=1$, then $\mathcal{EV}(\Omega)$ is a nuclear Fr\'echet space by \cite[Theorem 3.1, p.\ 188]{kruse2018_4}. If they are fulfilled for $q=2$ as well, we can use H\"ormander's $L^{2}$-machinery and the hypoellipticity of $\overline{\partial}$ to solve the scalar-valued equation \eqref{eq:intro.0} on each $\Omega_{n}$ with given $f\in\mathcal{EV}(\Omega)$ and a solution $u_{n}\in\mathcal{E}(\Omega_{n})$ satisfying $|u_{n}|_{n,m}<\infty$ for every $m\in\N_{0}$. In Section 4 the solution $u\in\mathcal{EV}(\Omega)$ is then constructed from the $u_{n}$ by using the Mittag-Leffler procedure in our main \prettyref{thm:scalar_CR_surjective}, which requires a density condition on the kernel of $\overline{\partial}$. Due to \cite[Example 16 c), p.\ 1526]{kruse2017} we have $\mathcal{EV}(\Omega,E)\cong\mathcal{EV}(\Omega)\widehat{\otimes}_{\pi}E$ if Condition $(PN)$ holds for $q=1$ and are able to lift the surjectivity from the scalar-valued to the Fr\'echet-valued case in \prettyref{cor:frechet_CR_surjective}. This density condition can be regarded as a weakened variant of weak reducibility of the subspace of $\mathcal{EV}(\Omega)$ consisting of holomorphic functions. In our last section we state sufficient conditions on $\mathcal{V}$ and $(\Omega_{n})_{n\in\N}$ in \prettyref{thm:dense_proj_lim} for our density condition to hold that are more likely to be checked. 
Further, we give examples of weights $\mathcal{V}$ and sets $(\Omega_{n})_{n\in\N}$ that satisfy our conditions in \prettyref{ex:families_of_weights_2}. The stated results are obtained by generalising the methods in \cite[Chap.\ 5]{ich} where the special case $\nu_{n}(z):=\exp(-|\re(z)|/n)$ and, amongst others, $\Omega_{n}:=\{z\in\C\;|\; 1/n<|\im(z)|<n\}$ is treated (see \cite[5.16 Theorem, p.\ 80]{ich} and \cite[5.17 Theorem, p.\ 82]{ich}). \section{Notation and Preliminaries} We define the distance of two subsets $M_{0}, M_{1} \subset\R^{d}$, $d\in \N$, w.r.t.\ a norm $\|\cdot\|$ on $\R^{d}$ via \[ \differential^{\|\cdot\|}(M_{0},M_{1}) :=\begin{cases} \inf_{x\in M_{0},\,y\in M_{1}}\|x-y\| &,\; M_{0},\,M_{1} \neq \emptyset, \\ \infty &,\; M_{0}= \emptyset \;\text{or}\; M_{1}=\emptyset. \end{cases} \] Moreover, we denote by $\|\cdot\|_{\infty}$ the sup-norm, by $|\cdot|$ the Euclidean norm, by $\langle\cdot|\cdot\rangle$ the usual scalar product on $\R^{d}$ and by $\mathbb{B}_{r}(x):=\{w\in\R^{d}\;|\;|w-x|<r\}$ the Euclidean ball around $x\in\R^{d}$ with radius $r>0$. We denote the complement of a subset $M\subset \R^{d}$ by $M^{C}:= \R^{d}\setminus M$, the set of inner points of $M$ by $\mathring{M}$, the closure of $M$ by $\overline{M}$ and the boundary of $M$ by $\partial M$. Further, we also use for $z=(z_{1},z_{2})\in\R^{2}$ a notation of mixed-type \[ z=z_{1}+iz_{2} =(z_{1},z_{2}) =\begin{pmatrix} z_{1}\\ z_{2} \end{pmatrix}, \] hence identify $\R^{2}$ and $\C$ as (normed) vector spaces. For a function $f\colon M\to\C$ and $K\subset M$ we denote by $f_{\mid K}$ the restriction of $f$ to $K$ and by \[ \|f\|_{K}:=\sup_{x\in K}|f(x)| \] the sup-norm on $K$. By $E$ we always denote a non-trivial locally convex Hausdorff space over the field $\K=\R$ or $\C$ equipped with a directed fundamental system of seminorms $(p_{\alpha})_{\alpha\in \mathfrak{A}}$. If $E=\K$, then we set $(p_{\alpha})_{\alpha\in \mathfrak{A}}:=\{|\cdot|\}$. Further, we denote by $L(F,E)$ the space of continuous linear maps from a locally convex Hausdorff space $F$ to $E$. If $E=\K$, we write $F':=L(F,\K)$ for the dual space of $F$. We recall the following well-known definitions concerning continuous partial differentiability of vector-valued functions (c.f.\ \cite[p.\ 237]{kruse2018_2}). A function $f\colon\Omega\to E$ on an open set $\Omega\subset\mathbb{R}^{d}$ to $E$ is called continuously partially differentiable ($f$ is $\mathcal{C}^{1}$) if for the $n$-th unit vector $e_{n}\in\mathbb{R}^{d}$ the limit \[ (\partial^{e_{n}})^{E}f(x):=(\partial_{x_{n}})^{E}f(x):=(\partial_{n})^{E}f(x) :=\lim_{\substack{h\to 0\\ h\in\mathbb{R}, h\neq 0}}\frac{f(x+he_{n})-f(x)}{h} \] exists in $E$ for every $x\in\Omega$ and $(\partial^{e_{n}})^{E}f$ is continuous on $\Omega$ ($(\partial^{e_{n}})^{E}f$ is $\mathcal{C}^{0}$) for every $1\leq n\leq d$. For $k\in\mathbb{N}$ a function $f$ is said to be $k$-times continuously partially differentiable ($f$ is $\mathcal{C}^{k}$) if $f$ is $\mathcal{C}^{1}$ and all its first partial derivatives are $\mathcal{C}^{k-1}$. A function $f$ is called infinitely continuously partially differentiable ($f$ is $\mathcal{C}^{\infty}$) if $f$ is $\mathcal{C}^{k}$ for every $k\in\mathbb{N}$. The linear space of all functions $f\colon\Omega\to E$ which are $\mathcal{C}^{\infty}$ is denoted by $\mathcal{C}^{\infty}(\Omega,E)$. Let $f\in\mathcal{C}^{\infty}(\Omega,E)$. 
For $\beta=(\beta_{n})\in\mathbb{N}_{0}^{d}$ we set $(\partial^{\beta_{n}})^{E}f:=f$ if $\beta_{n}=0$, and \[ (\partial^{\beta_{n}})^{E}f :=\underbrace{(\partial^{e_{n}})^{E}\cdots(\partial^{e_{n}})^{E}}_{\beta_{n}\text{-times}}f \] if $\beta_{n}\neq 0$ as well as \[ (\partial^{\beta})^{E}f :=(\partial^{\beta_{1}})^{E}\cdots(\partial^{\beta_{d}})^{E}f. \] Due to the vector-valued version of Schwarz' theorem $(\partial^{\beta})^{E}f$ is independent of the order of the partial derivatives on the right-hand side, we call $|\beta|:=\sum_{n=1}^{d}\beta_{n}$ the order of differentiation and write $\partial^{\beta}f:=(\partial^{\beta})^{\K}f$. Now, the precise definition of the weighted spaces of smooth vector-valued functions from the introduction reads as follows. \begin{defn}[{\cite[Definition 3.2, p.\ 238]{kruse2018_2}}]\label{def:smooth_weighted_space} Let $\Omega\subset\R^{d}$ be open and $(\Omega_{n})_{n\in\N}$ a family of non-empty open sets such that $\Omega_{n}\subset\Omega_{n+1}$ and $\Omega=\bigcup_{n\in\N} \Omega_{n}$. Let $\mathcal{V}:=(\nu_{n})_{n\in\N}$ be a countable family of positive continuous functions $\nu_{n}\colon \Omega \to (0,\infty)$ such that $\nu_{n}\leq\nu_{n+1}$ for all $n\in\N$. We call $\mathcal{V}$ a (directed) family of continuous weights on $\Omega$ and set \[ \mathcal{E}\nu_{n}(\Omega_{n}, E):= \{ f \in \mathcal{C}^{\infty}(\Omega_{n}, E)\; | \; \forall\;\alpha\in\mathfrak{A},\,m \in \N_{0}^{d}:\; |f|_{n,m,\alpha} < \infty \} \] for $n\in\N$ and \[ \mathcal{EV}(\Omega, E):=\{ f\in \mathcal{C}^{\infty}(\Omega, E)\; | \;\forall\; n \in \N: \; f_{\mid\Omega_{n}}\in \mathcal{E}\nu_{n}(\Omega_{n}, E)\} \] where \[ |f|_{n,m,\alpha}:=\sup_{\substack{x \in \Omega_{n}\\ \beta \in \N_{0}^{d}, \, |\beta| \leq m}} p_{\alpha}\bigl((\partial^{\beta})^{E}f(x)\bigr)\nu_{n}(x). \] The subscript $\alpha$ in the notation of the seminorms is omitted in the scalar-valued case. The notation for the spaces in the scalar-valued case is $\mathcal{E}\nu_{n}(\Omega_{n}):=\mathcal{E}\nu_{n}(\Omega_{n},\K)$ and $\mathcal{EV}(\Omega):=\mathcal{EV}(\Omega,\K)$. \end{defn} The space $\mathcal{EV}(\Omega,E)$ is a projective limit, namely, we have \[ \mathcal{EV}(\Omega, E)\cong \lim_{\substack{\longleftarrow\\n\in \N}}\mathcal{E}\nu_{n}(\Omega_{n}, E) \] where the spectral maps are given by the restrictions \[ \pi_{k,n}\colon \mathcal{E}\nu_{k}(\Omega_{k}, E)\to \mathcal{E}\nu_{n}(\Omega_{n}, E),\; f\mapsto f_{\mid\Omega_{n}},\;k\geq n. \] The space of scalar-valued infinitely differentiable functions with compact support in an open set $\Omega\subset\R^{d}$ is defined by the inductive limit \[ \mathcal{D}(\Omega):=\lim_{\substack{\longrightarrow\\K\subset \Omega\;\text{compact}}}\,\mathcal{C}^{\infty}_{c}(K) \] where \[ \mathcal{C}^{\infty}_{c}(K):=\{f\in\mathcal{C}^{\infty}(\R^{d},\K)\;|\;\forall \;x\notin K:\;f(x)=0\}. \] Every element $f$ of $\mathcal{D}(\Omega)$ can be regarded as an element of $\mathcal{D}(\R^{d})$ just by setting $f:=0$ on $\Omega^{C}$ and we write $\operatorname{supp} f$ for the support of $f$. Moreover, we set for $m\in\N_{0}$ and $f\in\mathcal{D}(\R^{d})$ \[ \|f\|_{m}:=\sup_{\substack{x\in\R^{d}\\ \alpha\in\N^{d}_{0},\;|\alpha|\leq m}}|\partial^{\alpha}f(x)|. \] By $L^{1}(\Omega)$ we denote the space of (equivalence classes of) $\K$-valued Lebesgue integrable functions on $\Omega$, by $L^{q}(\Omega)$, $q\in\N$, the space of functions $f$ such that $f^{q}\in L^{1}(\Omega)$ and by $L^{q}_{loc}(\Omega)$ the corresponding space of locally integrable functions. 
For a locally integrable function $f\in L^{1}_{loc}(\Omega)$ we denote by $T_{f}\in\mathcal{D}'(\Omega):=\mathcal{D}(\Omega)'$ the regular distribution defined by
\[
T_{f}(\varphi):=\int_{\R^{d}} {f(x)\varphi(x)\differential x},\quad \varphi\in \mathcal{D}(\Omega).
\]
For $\alpha\in\N^{d}_{0}$ the partial derivatives of a distribution $T\in\mathcal{D}'(\Omega)$ are defined by
\[
\partial^{\alpha}T(\varphi):=\langle\partial^{\alpha}T,\varphi\rangle
:=(-1)^{|\alpha|}T(\partial^{\alpha}\varphi),\quad\varphi\in\mathcal{D}(\Omega).
\]
The convolution $T\ast\varphi$ of a distribution $T\in\mathcal{D}'(\R^{d})$ and a test function $\varphi\in \mathcal{D}(\R^{d})$ is defined by
\[
(T\ast\varphi)(x):=T(\varphi(x-\cdot)),\quad x\in\R^{d}.
\]
In particular, we have $\delta\ast\varphi=\varphi$ for the Dirac distribution $\delta$ and
\begin{equation}\label{distr.falt.}
(T_{f}\ast\varphi)(x)=\int_{\R^{d}} {f(y)\varphi(x-y)\differential y},\quad x\in\R^{d},
\end{equation}
for $f\in L^{1}_{loc}(\R^{d})$ and $\varphi\in \mathcal{D}(\R^{d})$. Furthermore, $\partial^{\alpha}(T\ast\varphi)=(\partial^{\alpha}T)\ast\varphi=T\ast(\partial^{\alpha}\varphi)$ is valid for $T\in\mathcal{D}'(\R^{d})$ and $\varphi\in \mathcal{D}(\R^{d})$. For more details on the theory of distributions see \cite{H1}.

By $\mathcal{O}(\Omega)$ we denote the space of $\C$-valued holomorphic functions on an open set $\Omega\subset\C$ and for $\alpha=(\alpha_{1},\alpha_{2})\in\N^{2}_{0}$ we often use the relation
\begin{equation}\label{lem1}
\partial^{\alpha}f(z)=i^{\alpha_{2}}f^{(|\alpha|)}(z),\quad z\in\Omega,
\end{equation}
between real partial derivatives $\partial^{\alpha}f$ and complex derivatives $f^{(|\alpha|)}$ of a function $f\in\mathcal{O}(\Omega)$ (see e.g.\ \cite[3.4 Lemma, p.\ 17]{ich}).

\section{From \texorpdfstring{$\sup$}{sup}- to \texorpdfstring{$L^{q}$}{Lq}-seminorms}

For applying H\"ormander's solution of the weighted $\overline{\partial}$-problem (see \cite[Chap.\ 4]{H3}) it is appropriate to consider weighted $L^{2}$-(semi)norms and use them to control the seminorms $|\cdot|_{n,m}$ of solutions $u_{n}$ of $\overline{\partial}u_{n}=f$ in weighted $L^{2}$-spaces on $\Omega_{n}$ for given $f\in\mathcal{EV}(\Omega)$. Throughout this section let $P$ be a polynomial in $d$ real variables with constant coefficients in $\K$, i.e.\ there are $n\in\N_{0}$ and $c_{\alpha}\in\K$ for $\alpha=(\alpha_{i})\in\N_{0}^{d}$, $|\alpha|\leq n$, such that
\[
P(\zeta)=\sum_{\substack{\alpha\in\N^{d}_{0},\\|\alpha|\leq n}}c_{\alpha}\zeta^{\alpha},\quad \zeta=(\zeta_{i})\in\R^{d},
\]
where $\zeta^{\alpha}:=\zeta^{\alpha_{1}}_{1}\cdots\zeta^{\alpha_{d}}_{d}$, and $P(\partial)$ be the linear partial differential operator associated to $P$.

\begin{lem}\label{lem:iso_sobolev}
Let $U\subset \R^{d}$ be open, $\{K_{n}\;|\; n\in\N\}$ a compact exhaustion of $U$, $P(\partial)$ a hypoelliptic partial differential operator and $q\in\N$.
Then \[ \mathcal{I}\colon \mathcal{C}^{\infty}(U)\rightarrow \mathcal{F}(U):=\{f\in L^{q}_{loc}(U)\;|\;\forall\;\alpha\in\N^{d}_{0}:\; \partial^{\alpha}P(\partial)f\in L^{q}_{loc}(U)\},\;\mathcal{I}(f):=[f], \] is a topological isomorphism where $[f]$ is the equivalence class of $f$, the first space is equipped with the system of seminorms $\{|\cdot|_{K_{n},m}\;|\;n\in\N,\,m\in\N_{0}\}$ defined by \begin{equation}\label{lem8.1} |f|_{K_{n},m}:=\sup_{\substack{x\in K_{n} \\ \alpha\in\N^{d}_{0},|\alpha|\leq m}}|\partial^{\alpha}f(x)|, \quad f\in \mathcal{C}^{\infty}(U), \end{equation} and the latter with the system \begin{equation}\label{lem8.2} \{\|\cdot\|_{L^{q}(K_{n})}+s_{K_{n},m}\;|\;\;n\in\N,\,m\in\N_{0}\} \end{equation} defined for $f=[F]\in\mathcal{F}(U)$ by \[ \|f\|_{L^{q}(K_{n})}:=\|F\|_{L^{q}(K_{n})}:=\bigl(\int_{K_{n}}{|F(x)|^{q} \differential x}\bigr)^{\frac{1}{q}} \] and \[ s_{K_{n},m}(f):=\sup_{\alpha\in\N^{d}_{0},|\alpha|\leq m}\|\partial^{\alpha}P(\partial)f\|_{L^{q}(K_{n})}. \] \end{lem} \begin{proof} First, let us remark the following. The derivatives in the definition of $\mathcal{F}(U)$ are considered in the distributional sense and $\partial^{\alpha}P(\partial)f\in L^{q}_{loc}(U)$ means that there exists $g\in L^{q}_{loc}(U)$ such that $\partial^{\alpha}P(\partial)T_{f}=T_{g}$. The definition of the seminorm $\|\cdot\|_{L^{q}(K_{n})}$ does not depend on the chosen representative and we make no strict difference between an element of $L^{q}_{loc}(U)$ and its representative. $(i)$ $\mathcal{C}^{\infty}(U)$, equipped with the system of seminorms \eqref{lem8.1}, is known to be a Fr\'echet space. The space $\mathcal{F}(U)$, equipped with the system of seminorms \eqref{lem8.2}, is a metrisable locally convex space. Let $(f_{k})_{k\in\N}$ be a Cauchy sequence in $\mathcal{F}(U)$. By definition of $\mathcal{F}(U)$ we get for all $\beta\in\N^{d}_{0}$ that there exists a sequence $(g_{k,\beta})_{k\in\N}$ in $L^{q}_{loc}(U)$ such that $\partial^{\beta}P(\partial)T_{f_{k}}=T_{g_{k,\beta}}$. Therefore we conclude from \eqref{lem8.2} that $(f_{k})_{k\in\N}$ and $(g_{k,\beta})_{k\in\N}$, $\beta\in\N^{d}_{0}$, are Cauchy sequences in $(L^{q}_{loc}(U),(\|\cdot\|_{L^{q}(K_{n})})_{n\in\N})$, which is a Fr\'echet space by \cite[5.17 Lemma, p.\ 36]{F/W/Buch}, so they have a limit $f$ resp.\ $g_{\beta}$ in this space. Since $(f_{k})_{k\in\N}$ resp.\ $(g_{k,\beta})_{k\in\N}$ converges to $f$ resp.\ $g_{\beta}$ in $L^{q}_{loc}(U)$, it follows that $(T_{f_{k}})_{k\in\N}$ resp.\ $(T_{g_{k,\beta}})_{k\in\N}$ converges to $T_{f}$ resp.\ $T_{g_{\beta}}$ in $\mathcal{D}_{\sigma}'(U)$. Here $\mathcal{D}_{\sigma}'(U)$ is the space $\mathcal{D}'(U)$ equipped with the weak$^{\ast}$-topology. Hence we get \[ \partial^{\beta}P(\partial)T_{f}\underset{k\to \infty}{\leftarrow} \partial^{\beta}P(\partial)T_{f_{k}}=T_{g_{k,\beta}}\underset{k\to \infty}{\rightarrow}T_{g_{\beta}} \] in $\mathcal{D}_{\sigma}'(U)$, implying $f\in \mathcal{F}(U)$ and the convergence of $(f_{k})_{k\in\N}$ to $f$ in $\mathcal{F}(U)$ with respect to the seminorms \eqref{lem8.2} as well. Thus this space is complete and so a Fr\'echet space. $(ii)$ $\mathcal{I}$ is obviously linear and injective. 
It is continuous as for all $n\in\N$ and $m\in\N_{0}$ we have \[ \|\mathcal{I}(f)\|^{q}_{L^{q}(K_{n})}\leq \lambda(K_{n})|f|_{K_{n},0}^{q} \] and there exists $C>0$, only depending on the coefficients and the number of summands of $P(\partial)$, such that \[ s_{n,m}(\mathcal{I}(f))^{q}\leq C^{q}\lambda(K_{n})|f|_{K_{n},\operatorname{deg}P+m}^{q} \] for all $f\in \mathcal{C}^{\infty}(U)$ where $\lambda$ denotes the Lebesgue measure. $(iii)$ The next step is to prove that $\mathcal{I}$ is surjective. Let $f\in \mathcal{F}(U)$. Then we have $P(\partial)f\in W^{\infty,q}_{loc}(U)$ where \[ W^{\infty,q}_{loc}(U):=\{f\in L^{q}_{loc}(U)\;|\;\forall\;\alpha\in\N^{d}_{0}: \;\partial^{\alpha}f\in L^{q}_{loc}(U)\} \] and so $P(\partial)f\in\mathcal{C}^{\infty}(U)$ by the Sobolev embedding theorem \cite[Theorem 4.5.13, p.\ 123]{H1} in combination with \cite[Theorem 3.1.7, p.\ 59]{H1}. To be precise, this means that the regular distribution $P(\partial)f$ has a representative in $\mathcal{C}^{\infty}(U)$. Due to the hypoellipticity of $P(\partial)$ we obtain $f\in\mathcal{C}^{\infty}(U)$, more precisely, that $f$ has a representative in $\mathcal{C}^{\infty}(U)$, so $\mathcal{I}$ is surjective. Finally, our statement follows from $(i)-(iii)$ and the open mapping theorem. \end{proof} \begin{cor}\label{cor:iso_sobolev} Let $P(\partial)$ be a hypoelliptic partial differential operator, $q\in\N$ and $0<r_{0}<r_{1}<r_{2}$. Then we have \begin{flalign*} \forall\; m\in\N_{0}\;&\exists\; l\in\N_{0},\, C>0\;\forall \;\alpha\in\N^{d}_{0},\,|\alpha|\leq m, \, f\in\mathcal{C}^{\infty}(\mathring{Q}_{r_{2}}(0)): \\ &\|\partial^{\alpha}f\|_{Q_{r_{0}}(0)}\leq C\bigl(\|f\|_{L^{q}(Q_{r_{1}}(0))}+\sup_{\beta\in\N^{d}_{0},|\beta|\leq l} \|\partial^{\beta}P(\partial)f\|_{L^{q}(Q_{r_{1}}(0))}\bigr) \end{flalign*} where $Q_{r}(0):=[-r,r]^{d}$, $r>0$. \end{cor} \begin{proof} Let $U:=\mathring{Q}_{r_{1}}(0)$. Then the sets $K_{n}:=Q_{r_{1}-\frac{1}{n+\nicefrac{1}{r_{1}}}}(0)$, $n\in\N$, form a compact exhaustion of $U$ and there exists $n_{0}=n_{0}(r_{0},r_{1})\in\N$ such that $Q_{r_{0}}(0)\subset K_{n_{0}}$. Since $\mathcal{I}^{-1}\colon \mathcal{F}(U)\rightarrow\mathcal{C}^{\infty}(U)$ is continuous by \prettyref{lem:iso_sobolev}, there are $N\in\N$, $l\in\N_{0}$ and $C>0$ such that \begin{align*} \|\partial^{\alpha}f\|_{Q_{r_{0}}(0)}&\leq |f|_{K_{n_{0}},m} =|\mathcal{I}^{-1}([f])|_{K_{n_{0}},m} \leq C\bigl(\|[f]\|_{L^{q}(K_{N})}+s_{K_{N},l}([f])\bigr)\\ &\leq C\bigl(\|f\|_{L^{q}(Q_{r_{1}}(0))}+\sup_{\beta\in\N^{d}_{0},|\beta|\leq l} \|\partial^{\beta}P(\partial)f\|_{L^{q}(Q_{r_{1}}(0))}\bigr) \end{align*} for all $\alpha\in\N^{d}_{0}$, $|\alpha|\leq m$, and $f\in\mathcal{C}^{\infty}(\mathring{Q}_{r_{2}}(0))$. \end{proof} Due to this corollary we can switch to types of $L^{q}$-seminorms which induce the same topology on $\mathcal{EV}(\Omega)$ as the $\sup$-seminorms and we get an useful inequality to control the growth of the solutions of the weighted $\overline{\partial}$-problem by the right-hand side under the following conditions. \begin{condPN}\label{cond:weights} Let $\mathcal{V}:=(\nu_{n})_{n\in\N}$ be a directed family of continuous weights on an open set $\Omega\subset\R^{2}$ and $(\Omega_{n})_{n\in\N}$ a family of non-empty open sets such that $\Omega_{n}\subset\Omega_{n+1}$ and $\Omega=\bigcup_{n\in\N} \Omega_{n}$. 
For every $k\in\N$ let there be $\rho_{k}\in \R$ such that $0<\rho_{k}<\differential^{\|\cdot\|_{\infty}}(\{x\},\partial\Omega_{k+1})$ for all $x\in\Omega_{k}$ and let there be $q\in\N$ such that for any $n\in\N$ there are $\psi_{n}\in L^{q}(\Omega_{k})$, $\psi_{n}>0$, and $\N\ni J_{i}(n)\geq n$ and $C_{i}(n)>0$ for $i=1,2$ such that for any $x\in\Omega_{k}$: \begin{itemize} \item [$(PN.1)\phantom{^{q}}$] $\sup_{\zeta\in\R^{2},\,\|\zeta\|_{\infty}\leq \rho_{k}}\nu_{n}(x+\zeta) \leq C_{1}(n)\inf_{\zeta\in\R^{2},\,\|\zeta\|_{\infty}\leq \rho_{k}}\nu_{J_{1}(n)}(x+\zeta)$ \item [$(PN.2)^{q}$] $\nu_{n}(x)\leq C_{2}(n)\psi_{n}(x)\nu_{J_{2}(n)}(x)$ \end{itemize} \end{condPN} If $q=1$, then these conditions are a special case of \cite[Condition 2.1, p.\ 176]{kruse2018_4} by \cite[Remark 2.3 (b), p.\ 177]{kruse2018_4} and modifications of the conditions $(1.1)$-$(1.3)$ in \cite[p.\ 204]{L4}. They guarantee that the \textbf{p}rojective limit $\mathcal{EV}(\Omega)$ is a \textbf{n}uclear Fr\'echet space by \cite[Theorem 3.1, p.\ 188]{kruse2018_4} and \cite[Remark 2.7, p.\ 178-179]{kruse2018_4} if $q=1$, which we use to derive the surjectivity of $\overline{\partial}^{E}$ from the one of $\overline{\partial}$ for Fr\'echet spaces $E$ over $\C$. \begin{rem}\label{rem:standard_psi} A typical candidate for $\psi_{n}$ for every $n\in\N$ is $\psi_{n}(x):= (1+|x|^{2})^{-d}$, $x\in\R^{d}$. If Condition $(PN)$ is fulfilled with this choice for $\psi_{n}$ for some $q\in\N$, then it is fulfilled for every $q\in\N$ because $\psi_{n}\in L^{q}(\R^{d})$ for every $q\in\N$. \end{rem} \begin{conv}[{\cite[1.1 Convention, p.\ 205]{L4}}]\label{conv:index} We often delete the number $n$ counting the seminorms (e.g.\ $J_{i}=J_{i}(n)$ or $C_{i}=C_{i}(n)$) and indicate compositions with the functions $J_{i}$ only in the index (e.g.\ $J_{23}=J_{2}(J_{3}(n))$). \end{conv} \begin{defn} Let $\Omega\subset\R^{d}$ be open and $(\Omega_{n})_{n\in\N}$ a family of non-empty open sets such that $\Omega_{n}\subset\Omega_{n+1}$ and $\Omega=\bigcup_{n\in\N} \Omega_{n}$. Let $\mathcal{V}:=(\nu_{n})_{n\in\N}$ be a (directed) family of continuous weights on $\Omega$. For $n,q\in\N$ we define the locally convex Hausdorff spaces \[ \mathcal{C}^{\infty}_{q}\nu_{n}(\Omega_{n}):=\{f\in\mathcal{C}^{\infty}(\Omega_{n})\;|\; \forall\; m\in\N_{0}:\;\|f\|_{n,m,q}<\infty\} \] and \[ \mathcal{C}^{\infty}_{q}\mathcal{V}(\Omega) :=\{ f \in \mathcal{C}^{\infty}(\Omega)\; | \;\forall n \in \N:\; f_{\mid\Omega_{n}}\in \mathcal{C}^{\infty}_{q}\nu_{n}(\Omega_{n})\} \] where \[ \|f\|_{n,m,q}:=\sup_{\alpha\in\N^{d}_{0},|\alpha|\leq m} \bigl(\int_{\Omega_{n}}{|\partial^{\alpha}f(x)|^{q}\nu_{n}(x)^{q}\differential x}\bigr)^{\frac{1}{q}}. \] \end{defn} \begin{lem}\label{lem:switch_top} Let $(PN)$ be fulfilled for some $q\in\N$. \begin{enumerate} \item [a)] Let $P(\partial)$ be a hypoelliptic partial differential operator, $n\in\N$ and $f\in\mathcal{C}^{\infty}(\Omega_{2J_{11}})$ such that $\|f\|_{2J_{11},0,q}<\infty$ and $P(\partial)f\in \mathcal{C}^{\infty}_{q}\nu_{2J_{11}}(\Omega_{2J_{11}})$. Then $f\in\mathcal{E}\nu_{n}(\Omega_{n})$ and \[ \forall\;m\in\N_{0}\;\exists\; l\in\N_{0},\, C_{0}>0:\; |f|_{n,m}\leq C_{0}(\|f\|_{2J_{11},0,q}+\|P(\partial)f\|_{2J_{11},l,q}). \] \item [b)] Then $\mathcal{C}^{\infty}_{q}\mathcal{V}(\Omega)=\mathcal{EV}(\Omega)$ as locally convex spaces. 
\end{enumerate} \end{lem} \begin{proof} $a)$ Due to \cite[Lemma 2.11 (p.1), p.\ 183-184]{kruse2018_4}, \cite[Remark 2.7, p.\ 178-179]{kruse2018_4} and \cite[Remark 2.3 (b), p.\ 177]{kruse2018_4} there are $\mathcal{K}\subset\N$ and a sequence $(z_{k})_{k\in\mathcal{K}}$, $z_{k}\neq z_{j}$ for $k\neq j$, in $\Omega_{n}$ such that the balls \[ b_{k}:=\{\zeta\in\R^{d}\;|\;\|\zeta-z_{k}\|_{\infty}<\rho_{n}/2\} \] form an open covering of $\Omega_{n}$ with \[ \Omega_{n}\subset \bigcup_{k\in\mathcal{K}}b_{k}\subset\bigcup_{k\in\mathcal{K}}B_{k} \subset \Omega_{n+1}\subset \Omega_{2J_{11}(n)} \] where \[ B_{k}:=\{\zeta\in\R^{d}\;|\;\|\zeta-z_{k}\|_{\infty}<\rho_{n}\}. \] Let $m\in\N_{0}$, $\alpha\in\N^{d}_{0}$, $|\alpha|\leq m$, and $k\in\mathcal{K}$. By \prettyref{cor:iso_sobolev} there exist $l\in\N_{0}$ and $C>0$, $C$ and $l$ independent of $k$ and $\alpha$, such that \begin{flalign}\label{lem10.1} &\hspace{0.37cm} \|(\partial^{\alpha}f)\nu_{n}\|_{b_{k}}\nonumber\\ &\leq \|\nu_{n}\|_{b_{k}}\|\partial^{\alpha}f\|_{b_{k}} \underset{(PN.1)}{\leq}C_{1}\nu_{J_{1}}(z_{k})\|\partial^{\alpha}f\|_{b_{k}}\nonumber\\ &= C_{1}\nu_{J_{1}}(z_{k})\|\partial^{\alpha}f(z_{k}+\cdot)\|_{Q_{\rho_{n}/2}(0)}\nonumber\\ &\underset{\mathclap{\ref{cor:iso_sobolev}}}{\leq} CC_{1}\nu_{J_{1}}(z_{k}) \bigl(\|f\|_{L^{q}(B_{k})} +\sup_{\beta\in\N^{2}_{0},|\beta|\leq l} \|\partial^{\beta}P(\partial)f\|_{L^{q}(B_{k})}\bigr)\nonumber\\ &\underset{\mathclap{(PN.1)}}{\leq} CC_{1}C_{1}(J_{1}) \bigl(\|f\nu_{J_{11}}\|_{L^{q}(B_{k})} +\sup_{\beta\in\N^{2}_{0},|\beta|\leq l} \|(\partial^{\beta}P(\partial)f)\nu_{J_{11}}\|_{L^{q}(B_{k})}\bigr)\nonumber\\ &\leq CC_{1}C_{1}(J_{1})\bigl(\|f\|_{2J_{11},0,q}+\|P(\partial)f\|_{2J_{11},l,q}\bigr) \end{flalign} and so we get \begin{align*} |f|_{n,m}&\leq \sup_{\substack{x\in\bigcup_{k\in \mathcal{K}}b_{k}\\ \alpha\in\N^{d}_{0},\,|\alpha|\leq m}} {|\partial^{\alpha}f(x)|\nu_{n}(x)} \leq\quad\sup_{\mathclap{\substack{k\in \mathcal{K}\\ \alpha\in\N^{d}_{0},\,|\alpha|\leq m}}} {\|(\partial^{\alpha}f)\nu_{n}\|_{b_{k}}}\\ &\underset{\mathclap{\eqref{lem10.1}}}{\leq} CC_{1}C_{1}(J_{1})\bigl(\|f\|_{2J_{11},0,q}+\|P(\partial)f\|_{2J_{11},l,q}\bigr). \end{align*} $b)$ Let $f\in\mathcal{C}^{\infty}_{q}\mathcal{V}(\Omega)$ and $P(\partial):=\Delta$ be the Laplacian. Then $f$ satisfies the conditions of $a)$ for all $n\in\N$ because \[ \|\Delta f\|_{n,m,q}=\sup_{\alpha\in\N^{d}_{0},|\alpha|\leq m} \|\sum_{i=1}^{d}(\partial^{\alpha+2e_{i}}f)\nu_{n}\|_{L^{q}(\Omega_{n})} \leq d \|f\|_{n,m+2,q}<\infty \] for every $m\in\N_{0}$. So for every $n\in\N$ and $m\in\N_{0}$ there exist $l\in\N_{0}$ and $C_{0}>0$ such that \begin{align*} |f|_{n,m}&\leq C_{0}\bigl(\|f\|_{2J_{11},0,q}+\|\Delta f\|_{2J_{11},l,q}\bigr)\\ &\leq C_{0}\bigl(\|f\|_{2J_{11},0,q}+d\|f\|_{2J_{11},l+2,q}\bigr) \leq (1+d)C_{0}\|f\|_{2J_{11},l+2,q}. \end{align*} On the other hand, let $f\in\mathcal{EV}(\Omega)$.
For every $n\in\N$ and $m\in\N_{0}$ we have \begin{align*} \|f\|_{n,m,q}^{q}&=\sup_{\alpha\in\N^{d}_{0},|\alpha|\leq m} \int_{\Omega_{n}}{|\partial^{\alpha}f(x)|^{q}\nu_{n}(x)^{q}\differential x}\\ &\underset{\mathclap{(PN.2)^{q}}}{\leq}\quad C_{2}^{q}\sup_{\alpha\in\N^{d}_{0},|\alpha|\leq m} \int_{\Omega_{n}}{|\partial^{\alpha}f(x)|^{q}\psi_{n}(x)^{q}\nu_{J_{2}}(x)^{q}\differential x}\\ &\leq C_{2}^{q}\int_{\Omega_{n}}{\psi_{n}(x)^{q}\differential x}\sup_{\substack{w\in \Omega_{n}\\ \alpha\in\N^{d}_{0},\,|\alpha|\leq m}} |\partial^{\alpha}f(w)|^{q}\nu_{J_{2}}(w)^{q}\\ &\leq C_{2}^{q}\|\psi_{n}\|^{q}_{L^{q}(\Omega_{n})}\left|f\right|_{J_{2},m}^{q}. \end{align*} \end{proof} The following examples from \cite[Example 2.8, p.\ 179]{kruse2018_4} and \cite[Example 2.9, p.\ 182]{kruse2018_4} fulfil $(PN)$ for every $q\in\N$ (see \prettyref{rem:standard_psi}). \begin{exa}\label{ex:families_of_weights_1} Let $\Omega\subset\R^{d}$ be open and $(\Omega_{n})_{n\in\N}$ a family of non-empty open sets such that \begin{enumerate} \item [(i)] $\Omega_{n}:=\R^{d}$ for every $n\in\N$. \item [(ii)] $\Omega_{n}\subset\Omega_{n+1}$ and $\differential^{\|\cdot\|}(\Omega_{n},\partial\Omega_{n+1})>0$ for every $n\in\N$. \item [(iii)] $\Omega_{n}:=\{x=(x_{i})\in \Omega\;|\;\forall\;i\in I:\;|x_{i}|<n+N\;\text{and}\; \differential^{\|\cdot\|}(\{x\},\partial \Omega)>1/(n+N) \}$ where $I\subset\{1,\ldots,d\}$, $\partial \Omega\neq\varnothing$ and $N\in\N_{0}$ is big enough. \item [(iv)] $\Omega_{n}:=\{x=(x_{i})\in \Omega\;|\;\forall\;i\in I:\;|x_{i}|<n\}$ where $I\subset\{1,\ldots,d\}$ and $\Omega:=\R^{d}$. \item [(v)] $\Omega_{n}:=\mathring K_{n}$ where $K_{n}\subset\mathring K_{n+1}$, $\mathring{K}_{n}\neq\varnothing$, is a compact exhaustion of $\Omega$. \end{enumerate} Let $(a_{n})_{n\in\N}$ be strictly increasing such that $a_{n}\geq 0$ for all $n\in\N$ or $a_{n}\leq 0$ for all $n\in\N$. The family $\mathcal{V}:=(\nu_{n})_{n\in\N}$ of positive continuous functions on $\Omega$ given by \[ \nu_{n}\colon\Omega\to (0,\infty),\;\nu_{n}(x):=e^{a_{n}\mu(x)}, \] with some function $\mu\colon\Omega\to[0,\infty)$ fulfils $\nu_{n}\leq\nu_{n+1}$ for all $n\in\N$ and $(PN)$ for every $q\in\N$ with $\psi_{n}(x):= (1+|x|^{2})^{-d}$, $x\in\R^{d}$, for every $n\in\N$ if \begin{enumerate} \item [a)] there is some $0<\gamma\leq 1$ such that $\mu(x)=|(x_{i})_{i\in I_{0}}|^{\gamma}$, $x=(x_{i})\in\Omega$, where $I_{0}:=\{1,\ldots,d\}\setminus I$ with $I\subsetneq\{1,\ldots,d\}$ and $(\Omega_{n})_{n\in\N}$ from (iii) or (iv). \item [b)] $\lim_{n\to\infty}a_{n}=\infty$ or $\lim_{n\to\infty}a_{n}=0$ and there is some $m\in\N$, $m\leq 2d+1$, such that $\mu(x)=|x|^{m}$, $x\in\Omega$, with $(\Omega_{n})_{n\in\N}$ from (i) or (ii). \item [c)] $a_{n}=n/2$ for all $n\in\N$ and $\mu(x)=\ln(1+|x|^{2})$, $x\in\R^{d}$, with $(\Omega_{n})_{n\in\N}$ from (i). \item [d)] $\mu(x)=0$, $x\in\Omega$, with $(\Omega_{n})_{n\in\N}$ from (v). \end{enumerate} \end{exa} \prettyref{ex:families_of_weights_1} a) covers the weights $\nu_{n}(z):=\exp{(-|\re(z)|/n)}$, $z\in \C\setminus\R$, with the sets $\Omega_{n}:=\{z\in\C\;|\; 1/n<|\im(z)|<n\}$ and \prettyref{lem:switch_top} is a generalisation of \cite[5.15 Lemma, p.\ 78]{ich}. In \prettyref{ex:families_of_weights_1} c) we have $\mathcal{EV}(\R^{d})=\mathcal{S}(\R^{d})$, the Schwartz space. Hence the $\sup$- and $L^{q}$-seminorms form an equivalent system of seminorms on $\mathcal{S}(\R^{d})$ for every $q\in\N$ by \prettyref{lem:switch_top}.
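Indeed, spelling out the definitions for the weights from \prettyref{ex:families_of_weights_1} c), we have \[ \nu_{n}(x)=e^{\frac{n}{2}\ln(1+|x|^{2})}=(1+|x|^{2})^{\frac{n}{2}},\quad x\in\R^{d}, \] so the seminorms $|f|_{n,m}=\sup_{\alpha\in\N^{d}_{0},|\alpha|\leq m}\sup_{x\in\R^{d}}|\partial^{\alpha}f(x)|(1+|x|^{2})^{\frac{n}{2}}$ are exactly the classical seminorms generating the topology of $\mathcal{S}(\R^{d})$, and the $\|f\|_{n,m,q}$ are their $L^{q}$-counterparts.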
This generalises the observation for $q=2$ and $d=1$ in \cite[Example 29.5 (2), p.\ 361]{meisevogt1997} (cf.\ \cite[Folgerung, p.\ 85]{Wloka1967}). In \prettyref{ex:families_of_weights_1} d) $\mathcal{EV}(\Omega)=\mathcal{C}^{\infty}(\Omega)$ with the usual topology of uniform convergence of partial derivatives of any order on compact subsets of $\Omega$. We remark that we can choose $C_{2}(n):=(1+\sup\{|x|^2\;|\;x\in\Omega_{n}\})^{d}$ in d). Other choices for $\psi_{n}$ are also possible, for example, $\psi_{n}:=1$ in d) but we are interested in this particular choice of $\psi_{n}$ in view of the next section. \section{Main result} In this section we always assume that $\K=\C$ and $d=2$ and state our main theorem on the surjectivity of the Cauchy-Riemann operator \[ \overline{\partial}^{E}:=\frac{1}{2}((\partial_{1})^{E}+i(\partial_{2})^{E}) \colon \mathcal{EV}(\Omega,E)\to \mathcal{EV}(\Omega,E) \] for any Fr\'echet space $E$. Let us outline the proof. If the subspace of the projective spectrum $\mathcal{EV}(\Omega)$ consisting of holomorphic functions has some kind of density property weaker than weak reducibility and for every $f\in\mathcal{EV}(\Omega)$ and $n\in\N$ there is $u_{n}\in\mathcal{E}\nu_{n}(\Omega_{n})$ such that $\overline{\partial} u_{n}=f$ on $\Omega_{n}$, then the Mittag-Leffler procedure yields the surjectivity of $\overline{\partial}^{E}$ for $E=\C$. Since the space $\mathcal{EV}(\Omega)$ is a nuclear Fr\'echet space under condition $(PN)$ and $\mathcal{EV}(\Omega,E)\cong\mathcal{EV}(\Omega)\widehat{\otimes}_{\pi}E$, the surjectivity of $\overline{\partial}$ implies the surjectivity of $\overline{\partial}^{E}$ for any Fr\'echet space $E$ by the classical theory of tensor products of Fr\'echet spaces. \begin{defn} Let $E$ be a locally convex Hausdorff space over $\C$, $\mathcal{V}:=(\nu_{n})_{n\in\N}$ a (directed) family of continuous weights on an open set $\Omega\subset\R^{2}$ and $(\Omega_{n})_{n\in\N}$ a family of non-empty open sets such that $\Omega_{n}\subset\Omega_{n+1}$ for all $n\in\N$ and $\Omega=\bigcup_{n\in\N} \Omega_{n}$. For $n\in\N$ we define \[ \mathcal{E}\nu_{n,\overline{\partial}}(\Omega_{n},E):= \{ f \in \mathcal{E}\nu_{n}(\Omega_{n}, E)\; | \; f\in\operatorname{ker}\overline{\partial}^{E} \} \] and \[ \mathcal{EV}_{\overline{\partial}}(\Omega, E):=\{f\in\mathcal{EV}(\Omega, E)\;|\; f\in\operatorname{ker}\overline{\partial}^{E}\}. \] $\mathcal{EV}_{\overline{\partial}}(\Omega)$ is called \emph{very weakly reduced} if for every $n\in\N$ there are $\iota_{1}(n),\iota_{2}(n)\in\N$, $\iota_{2}(n)\geq \iota_{1}(n)$, such that the restricted space $\pi_{\iota_{2}(n),n}(\mathcal{E}\nu_{\iota_{2}(n),\overline{\partial}}(\Omega_{\iota_{2}(n)}))$ is dense in $\pi_{\iota_{1}(n),n}(\mathcal{E}\nu_{\iota_{1}(n),\overline{\partial}}(\Omega_{\iota_{1}(n)}))$ w.r.t.\ $(|\cdot|_{n,m})_{m\in\N_{0}}$. \end{defn} We note that $\mathcal{EV}_{\overline{\partial}}(\Omega)$ is very weakly reduced if it is weakly reduced. By now all ingredients that are required to prove the surjectivity of $\overline{\partial}$ are provided. \begin{thm}\label{thm:scalar_CR_surjective} Let $(PN)$ with $\psi_{n}(z):=(1+|z|^{2})^{-2}$, $z\in\Omega$, be fulfilled for some (thus all) $q\in\N$, $\mathcal{EV}_{\overline{\partial}}(\Omega)$ be very weakly reduced with $\iota_{2}(n)\geq \iota_{1}(n+1)$ and $-\ln\nu_{n}$ be subharmonic on $\Omega$ for every $n\in\N$. Then \[ \overline{\partial}\colon \mathcal{EV}(\Omega)\to\mathcal{EV}(\Omega) \] is surjective. 
\end{thm} \begin{proof} $(i)$ Let $f\in\mathcal{EV}(\Omega)$, $n\in\N$ and set \[ \varphi_{n}\colon\Omega\to\R,\;\varphi_{n}\left(z\right):=-2\ln\nu_{J_{2}(2J_{11}(n))}(z), \] which is a (pluri)subharmonic function on $\Omega$. The set $\Omega_{J_{2}(2J_{11})}$ is open and pseudoconvex since every open set in $\C$ is a domain of holomorphy by \cite[Corollary 1.5.3, p.\ 15]{H3} and hence pseudoconvex by \cite[Theorem 4.2.8, p.\ 88]{H3}. For the differential form $g:=f\operatorname{d\bar{z}}$ we have $\overline{\partial}g=0$ in the sense of differential forms and $f\in\mathcal{EV}(\Omega)=\mathcal{C}^{\infty}_{2}\mathcal{V}(\Omega)$ by \prettyref{lem:switch_top} b) resulting in \[ \int_{\Omega_{J_{2}(2J_{11})}}{|f(z)|^{2}e^{-\varphi_{n}(z)} \differential z}= \|f\|_{J_{2}(2J_{11}),0,2}^{2}<\infty. \] Thus by \cite[Theorem 4.4.2, p.\ 94]{H3} there is a solution $u_{n}\in L^{2}_{loc}(\Omega_{J_{2}(2J_{11})})$ of $\overline{\partial}u_{n}=f_{\mid\Omega_{J_{2}(2J_{11})}}$ in the distributional sense such that \[ \int_{\Omega_{J_{2}(2J_{11})}}{|u_{n}(z)|^{2}e^{-\varphi_{n}(z)}(1+|z|^{2})^{-2} \differential z} \leq\int_{\Omega_{J_{2}(2J_{11})}}{|f(z)|^{2}e^{-\varphi_{n}(z)} \differential z}. \] Since $\overline{\partial}$ is hypoelliptic, it follows that $u_{n}\in\mathcal{C}^{\infty}(\Omega_{J_{2}(2J_{11})})$, resp.\ $u_{n}$ has a representative which is $\mathcal{C}^{\infty}$. By virtue of property $(PN.2)^{2}$ we gain \begin{align*} \|u_{n}\|_{2J_{11},0,2}^{2} &=\int_{\Omega_{2J_{11}}}{|u_{n}(z)|^{2}\nu_{2J_{11}}(z)^{2} \differential z}\\ &\underset{\mathclap{(PN.2)^{2}}}{\leq} \quad C_{2}(2J_{11})^{2}\int_{\Omega_{J_{2}(2J_{11})}} {|u_{n}(z)|^{2}e^{-\varphi_{n}(z)}(1+|z|^{2})^{-4} \differential z}\\ &\leq C_{2}(2J_{11})^{2}\int_{\Omega_{J_{2}(2J_{11})}}{|u_{n}(z)|^{2}e^{-\varphi_{n}(z)}(1+|z|^{2})^{-2} \differential z} <\infty . \end{align*} So the conditions of \prettyref{lem:switch_top} a) are fulfilled for all $n\in\N$, implying $u_{n}\in\mathcal{E}\nu_{n}(\Omega_{n})$. $(ii)$ The next step is to prove the surjectivity of $\overline{\partial}\colon \mathcal{EV}(\Omega)\to\mathcal{EV}(\Omega)$ via the Mittag-Leffler procedure (see \cite[9.14 Theorem, p.\ 206-207]{Kaballo}). Due to $(i)$ we have for every $l\in\N$ a function $u_{l}\in\mathcal{E}\nu_{l}(\Omega_{l})$ such that $\overline{\partial}u_{l}=f_{\mid\Omega_{l}}$. Now, we inductively construct $g_{n}\in\mathcal{E}\nu_{\iota_{1}(n)}(\Omega_{\iota_{1}(n)})$, $n\in\N$, such that \begin{enumerate} \item [(1)] $\overline{\partial}g_{n}=f_{\mid\Omega_{\iota_{1}(n)}}$, $n\geq 1$, \item [(2)] $|g_{n}-g_{n-1}|_{n-1,n-1}\leq \frac{1}{2^{n}}$, $n\geq 2$, \end{enumerate} with $\iota_{1}(n)$ from the definition of very weak reducibility. For $n=1$ set $g_{1}:=u_{\iota_{1}(1)}$. Then we have $g_{1}\in\mathcal{E}\nu_{\iota_{1}(1)}(\Omega_{\iota_{1}(1)})$ and $\overline{\partial}g_{1}=f_{\mid\Omega_{\iota_{1}(1)}}$ by part $(i)$. Let $g_{n}$ fulfil (1) for some $n\geq 1$. 
Since \[ \overline{\partial}(u_{\iota_{2}(n)}-g_{n})_{\mid\Omega_{\iota_{1}(n)}} =\overline{\partial}{u_{\iota_{2}(n)}}_{\mid\Omega_{\iota_{1}(n)}}-{\overline{\partial}g_{n}}_{\mid\Omega_{\iota_{1}(n)}} \underset{(i),\,(1)}{=}f_{\mid\Omega_{\iota_{1}(n)}}-f_{\mid\Omega_{\iota_{1}(n)}}=0, \] it follows $u_{\iota_{2}(n)}-g_{n}\in\mathcal{E}\nu_{\iota_{1}(n),\overline{\partial}}(\Omega_{\iota_{1}(n)})$ and by the very weak reducibility of $\mathcal{EV}_{\overline{\partial}}(\Omega)$ there is $h_{n+1}\in\mathcal{E}\nu_{\iota_{2}(n),\overline{\partial}}(\Omega_{\iota_{2}(n)})$ such that \[ |u_{\iota_{2}(n)}-g_{n}-h_{n+1}|_{n,n}\leq \frac{1}{2^{n+1}}. \] Set $g_{n+1}:=u_{\iota_{2}(n)}-h_{n+1}\in\mathcal{E}\nu_{\iota_{2}(n)}(\Omega_{\iota_{2}(n)})$. As $\iota_{2}(n)\geq \iota_{1}(n+1)$, we have $g_{n+1}\in\mathcal{E}\nu_{\iota_{1}(n+1)}(\Omega_{\iota_{1}(n+1)})$. Condition (2) is satisfied by the inequality above and condition (1) as well because \[ \overline{\partial}g_{n+1} =\overline{\partial}u_{\iota_{2}(n)}-\overline{\partial}h_{n+1} =\overline{\partial}u_{\iota_{2}(n)}-0 \underset{(i)}{=}f_{\mid\Omega_{\iota_{1}(n+1)}}. \] Now, let $\varepsilon>0$, $l\in\N$ and $m\in\N_{0}$. Choose $l_{0}\in\N$, $l_{0}\geq \max\left(l,m\right)$, such that $\frac{1}{2^{l_{0}}}<\varepsilon$. For all $p\geq k\geq l_{0}$ we get \begin{align*} |g_{p}-g_{k}|_{l,m} &\leq|g_{p}-g_{k}|_{l_{0},l_{0}} =\bigl|\sum^{p}_{j=k+1}{g_{j}-g_{j-1}}\bigr|_{l_{0},l_{0}} \leq\sum^{p}_{j=k+1}{|g_{j}-g_{j-1}|_{l_{0},l_{0}}}\\ &\underset{\mathclap{l_{0}\leq k\leq j-1}}{\leq}\quad\; \sum^{p}_{j=k+1}{|g_{j}-g_{j-1}|_{j-1,j-1}} \;\underset{\mathclap{(2)}}{\leq} \;\sum^{p}_{j=k+1}{\frac{1}{2^{j}}}<\frac{1}{2^{k}} \leq\frac{1}{2^{l_{0}}}<\varepsilon. \end{align*} Hence $(g_{n})_{n\geq \max(l-2,1)}$ is a Cauchy sequence in $\mathcal{E}\nu_{l}(\Omega_{l})$ for all $l\in\N$ and, since these spaces are complete by \cite[Proposition 3.7, p.\ 240]{kruse2018_2}, it has a limit $G_{l}\in\mathcal{E}\nu_{l}(\Omega_{l})$. These limits coincide on their common domain because for every $l_{1},l_{2}\in\N$, $l_{1}<l_{2}$, and $\varepsilon_{1}>0$ there exists $N\in\N$ such that for all $n\geq N$ \begin{align*} |G_{l_{1}}-G_{l_{2}}|_{l_{1},m} &\leq |G_{l_{1}}-g_{n}|_{l_{1},m}+|g_{n}-G_{l_{2}}|_{l_{1},m} \leq |G_{l_{1}}-g_{n}|_{l_{1},m}+|g_{n}-G_{l_{2}}|_{l_{2},m}\\ &<\frac{\varepsilon_{1}}{2}+\frac{\varepsilon_{1}}{2}=\varepsilon_{1}. \end{align*} So the limit function $g$, defined by $g:=G_{l}$ on $\Omega_{l}$ for all $l\in\N$, is well-defined and we have $g\in\mathcal{EV}(\Omega)$. Thus we obtain for all $l\in\N$ \[ f_{\mid\Omega_{l}} \underset{\substack{(1)\\n\geq \max(l-2,1)}}{=}{\overline{\partial}g_{n}}_{\mid\Omega_{l}} \underset{n\to\infty}{\to} \overline{\partial}g_{\mid\Omega_{l}} \] and hence the existence of $g\in\mathcal{EV}(\Omega)$ with $\overline{\partial}g=f$ on $\Omega$ is proved. \end{proof} Moreover, we are already able to show that $\overline{\partial}^{E}$ is surjective for Fr\'echet spaces $E$ over $\C$ just by using classical theory of tensor products of Fr\'echet spaces. \begin{cor}\label{cor:frechet_CR_surjective} Let $(PN)$ with $\psi_{n}(z):=(1+|z|^{2})^{-2}$, $z\in\Omega$, be fulfilled for some (thus all) $q\in\N$, $\mathcal{EV}_{\overline{\partial}}(\Omega)$ be very weakly reduced with $\iota_{2}(n)\geq \iota_{1}(n+1)$ and $-\ln\nu_{n}$ be subharmonic on $\Omega$ for every $n\in\N$. If $E$ is a Fr\'echet space over $\C$, then \[ \overline{\partial}^{E}\colon \mathcal{EV}(\Omega,E)\to\mathcal{EV}(\Omega,E) \] is surjective. 
\end{cor} \begin{proof} First, we recall some definitions and facts from the theory of tensor products (see \cite{Defant}, \cite{Jarchow}, \cite{Kaballo}). The $\varepsilon$-product of Schwartz is given by $\mathcal{EV}(\Omega)\varepsilon E:=L_{e}(\mathcal{EV}(\Omega)_{\kappa}',E)$ where the dual $\mathcal{EV}(\Omega)'$ is equipped with the topology of uniform convergence on absolutely convex, compact subsets of $\mathcal{EV}(\Omega)$ and $L(\mathcal{EV}(\Omega)_{\kappa}',E)$ with the topology of uniform convergence on equicontinuous subsets of $\mathcal{EV}(\Omega)'$. The space $\mathcal{EV}(\Omega)$ is a Fr\'echet space by \cite[Proposition 3.7, p.\ 240]{kruse2018_2} and nuclear by \cite[Theorem 3.1, p.\ 188]{kruse2018_4}, \cite[Remark 2.7, p.\ 178-179]{kruse2018_4} and \cite[Remark 2.3 (b), p.\ 177]{kruse2018_4} because $(PN.1)$ and $(PN.2)^{1}$ are fulfilled. Thus the map \[ S\colon \mathcal{EV}(\Omega)\varepsilon E\to\mathcal{EV}(\Omega,E),\; u\longmapsto [x\mapsto u(\delta_{x})], \] is a topological isomorphism by \cite[Example 16 c), p.\ 1526]{kruse2017} where $\delta_{x}$ is the point-evaluation at $x\in\Omega$. The continuous linear injection \[ \chi\colon \mathcal{EV}(\Omega)\otimes_{\pi} E \to \mathcal{EV}(\Omega)\varepsilon E,\; \sum_{n=1}^{k} f_{n}\otimes e_{n} \longmapsto \bigl[y\mapsto \sum_{n=1}^{k} y(f_{n})e_{n}\bigr], \] from the tensor product $\mathcal{EV}(\Omega)\otimes_{\pi} E$ with the projective topology extends to a continuous linear map $\widehat{\chi}\colon \mathcal{EV}(\Omega)\widehat{\otimes}_{\pi} E \to \mathcal{EV}(\Omega)\varepsilon E$ on the completion $\mathcal{EV}(\Omega)\widehat{\otimes}_{\pi} E$ of $\mathcal{EV}(\Omega)\otimes_{\pi} E$. The map $\widehat{\chi}$ is also a topological isomorphism since $\mathcal{EV}(\Omega)$ is nuclear. Furthermore, we define \[ \overline{\partial}\varepsilon \operatorname{id}_{E} \colon \mathcal{EV}(\Omega)\varepsilon E \to \mathcal{EV}(\Omega)\varepsilon E,\; u\mapsto u\circ \overline{\partial}^{t}, \] where $\overline{\partial}^{t}\colon \mathcal{EV}(\Omega)'\to\mathcal{EV}(\Omega)'$, $y\mapsto y\circ\overline{\partial}$, and $\overline{\partial} \otimes_{\pi}\operatorname{id}_{E}\colon \mathcal{EV}(\Omega)\otimes_{\pi}E\to \mathcal{EV}(\Omega)\otimes_{\pi}E$ is defined by the relation $\chi\circ (\overline{\partial} \otimes_{\pi}\operatorname{id}_{E}) =(\overline{\partial}\varepsilon \operatorname{id}_{E})\circ \chi $. Denoting by $\overline{\partial}\, \widehat{\otimes}_{\pi}\operatorname{id}_{E}$ the continuous linear extension of $\overline{\partial} \otimes_{\pi}\operatorname{id}_{E}$ to the completion $\mathcal{EV}(\Omega)\widehat{\otimes}_{\pi} E$, we observe that \[ \overline{\partial}^{E} =S\circ(\overline{\partial}\varepsilon \operatorname{id}_{E})\circ S^{-1} =S\circ\widehat{\chi}\circ(\overline{\partial}\, \widehat{\otimes}_{\pi}\operatorname{id}_{E}) \circ\widehat{\chi}^{-1}\circ S^{-1}. \] Now, we turn to the actual proof. Let $g\in\mathcal{EV}(\Omega,E)$. The maps $\operatorname{id}_{E}\colon E\to E$ and $\overline{\partial}\colon\mathcal{EV}(\Omega)\to\mathcal{EV}(\Omega)$ are linear, continuous and surjective, the latter one by \prettyref{thm:scalar_CR_surjective}.
Moreover, $E$ and $\mathcal{EV}(\Omega)$ are Fr\'echet spaces, so $\overline{\partial}\,\widehat{\otimes}_{\pi}\operatorname{id}_{E}$ is surjective by \cite[10.24 Satz, p.\ 255]{Kaballo}, i.e.\ there is $f \in\mathcal{EV}(\Omega)\widehat{\otimes}_{\pi}E$ such that $(\overline{\partial}\,\widehat{\otimes}_{\pi}\operatorname{id}_{E})(f)=(\widehat{\chi}^{-1}\circ S^{-1})(g)$. Then $(S\circ\widehat{\chi})(f)\in\mathcal{EV}(\Omega,E)$ and \[ \overline{\partial}^{E}((S\circ\widehat{\chi})(f)) =(S\circ\widehat{\chi})((\overline{\partial}\, \widehat{\otimes}_{\pi}\operatorname{id}_{E})(f)) =(S\circ\widehat{\chi})((\widehat{\chi}^{-1}\circ S^{-1})(g)) =g. \] \end{proof} \section{Very weak reducibility and applications of the main result} In our last section we derive sufficient conditions for the very \textbf{w}eak \textbf{r}educibility of $\mathcal{EV}_{\overline{\partial}}(\Omega)$ and apply our main result to some examples. We start with our conditions. \begin{condWR}\label{cond:dense} Let $\mathcal{V}:=(\nu_{n})_{n\in\N}$ be a (directed) family of continuous weights on an open set $\Omega\subset\R^{2}$ and $(\Omega_{n})_{n\in\N}$ a family of non-empty open sets such that $\Omega_{n}\neq \R^{2}$, $\Omega_{n}\subset\Omega_{n+1}$ for all $n\in\N$, $\differential_{n,k}:=\differential^{|\cdot|}(\Omega_{n},\partial\Omega_{k})>0$ for all $n,k\in\N$, $k>n$, and $\Omega=\bigcup_{n\in\N} \Omega_{n}$. (WR.1) For every $n\in\N$ let there be $g_{n}\in\mathcal{O}(\C)$ with $g_{n}(0)=1$ and $\N\ni I_{j}(n)> n$ for $j=1,2$ such that \begin{enumerate} \item [(a)] for every $\varepsilon>0$ there is a compact set $K\subset \overline{\Omega}_{n}$ with $\nu_{n}(x)\leq\varepsilon\nu_{I_{1}(n)}(x)$ for all $x\in\Omega_{n}\setminus K$. \item [(b)] there is an open set $X_{I_{2}(n)}\subset\R^{2}\setminus \overline{\Omega}_{I_{2}(n)}$ such that there are $R_{n},r_{n}\in\R$ with $0<2R_{n}<\differential^{|\cdot|}(X_{I_{2}(n)},\Omega_{I_{2}(n)}):=\differential_{X,I_{2}(n)}$ and $R_{n}<r_{n}<\differential_{X,I_{2}(n)}-R_{n}$ as well as $A_{2}(\cdot,n)\colon X_{I_{2}(n)}+\mathbb{B}_{R_{n}}(0)\to (0,\infty)$, $A_{2}(\cdot,n)_{\mid X_{I_{2}(n)}}$ locally bounded, satisfying \begin{equation}\label{pro.2} \max\{|g_{n}(\zeta)|\nu_{I_{2}(n)}(z)\;|\;\zeta\in\R^{2},\,|\zeta-(z-x)|=r_{n}\}\leq A_{2}(x,n) \end{equation} for all $z\in\Omega_{I_{2}(n)}$ and $x\in X_{I_{2}(n)}+\mathbb{B}_{R_{n}}(0)$. \item [(c)] for every compact set $K\subset \R^{2}$ there is $A_{3}(n,K)>0$ with \[ \int_{K}{\frac{|g_{n}(x-y)|\nu_{n}(x)}{|x-y|}\differential y}\leq A_{3}(n,K),\quad x\in \Omega_{n}. \] \end{enumerate} (WR.2) Let (WR.1a) be fulfilled. For every $n\in\N$ let there be $\N\ni I_{4}(n)>n$ and $A_{4}(n)>0$ such that \begin{equation}\label{pro.4} \int_{\Omega_{I_{4}(n)}}{\frac{|g_{I_{14}(n)}(x-y)|\nu_{p}(x)}{|x-y|\nu_{k}(y)}\differential y}\leq A_{4}(n), \quad x\in \Omega_{p}, \end{equation} for $(k,p)=(I_{4}(n),n)$ and $(k,p)=(I_{14}(n),I_{14}(n))$ where $I_{14}(n):=I_{1}(I_{4}(n))$. (WR.3) Let (WR.1a), (WR.1b) and (WR.2) be fulfilled. For every $n\in\N$, every closed subset $M\subset \overline{\Omega}_{n}$ and every component $N$ of $M^{C}$ we have \[ N\cap \overline{\Omega}_{n}^{C}\neq \varnothing\;\Rightarrow\; N\cap X_{I_{214}(n)}\neq \varnothing \] where $I_{214}(n):=I_{2}(I_{14}(n))$. \end{condWR} We use the same convention for $I_{j}$ as for $J_{i}$ (see \prettyref{conv:index}). Condition (WR.1a) appears in \cite[p.\ 67]{Bierstedt1975} under the name $(RU)$ (cf.\ \cite[Remark 3.4, p.\ 239]{kruse2018_2}) as well. 
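For orientation, the entire function $g_{n}$ may be viewed as a damping factor for the fundamental solution $\tfrac{1}{\pi z}$ of $\overline{\partial}$: for instance, in \prettyref{ex:families_of_weights_2} below we choose $g_{n}(z):=\exp(-z^{2})$, whose modulus $|g_{n}(\zeta)|=e^{-\re(\zeta^{2})}$ decays like a Gaussian in the real direction; on sets with bounded imaginary part, as in \prettyref{ex:families_of_weights_2} (i), this keeps the convolution integrals appearing in (WR.1c) and (WR.2) finite.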
(WR.1a) is used for approximation by compactly supported functions, (WR.1b) to control Cauchy estimates, (WR.1c) as well as (WR.2) to guarantee that several kinds of convolutions of the fundamental solution $z\mapsto g_{n}(z)/(\pi z)$ of the $\overline{\partial}$-operator with certain functionals are well-defined, and (WR.3) to control the support of an analytic distribution by using the identity theorem. We begin with the proof that $(WR)$ is sufficient for very weak reducibility. The underlying idea of the proof is extracted from a proof of H\"ormander \cite[Theorem 4.4.5, p.\ 112]{H1} in a comparable situation for non-weighted $\mathcal{C}^{\infty}$-functions. The proof is split into several parts to enhance comprehensibility. \begin{thm}\label{thm:dense_proj_lim} Let $n\in\N$. Then the space $\pi_{I_{214}(n),n}(\mathcal{E}\nu_{I_{214}(n),\overline{\partial}}(\Omega_{I_{214}(n)}))$ is dense in $\pi_{I_{14}(n),n}(\mathcal{E}\nu_{I_{14}(n),\overline{\partial}}(\Omega_{I_{14}(n)}))$ w.r.t.\ $(|\cdot|_{n,m})_{m\in\N_{0}}$ if $(WR)$ is fulfilled. In particular, $\mathcal{EV}_{\overline{\partial}}(\Omega)$ is very weakly reduced if $(WR)$ is fulfilled. \end{thm} In order to gain access to the theory of distributions in our approach, we prove another density statement first whose proof is a modification of that of \cite[Lemma 3.14, p.\ 242]{kruse2018_2}. \begin{lem}\label{lem:dense_comp_supp} Let $n\in \N$. Then the space $\pi_{I_{1}(n),n}(\mathcal{D}(\Omega_{I_{1}(n)}))$ is dense in the space $\pi_{I_{1}(n),n}(\mathcal{E}\nu_{I_{1}(n)}(\Omega_{I_{1}(n)}))$ w.r.t.\ $(|\cdot|_{n, m})_{m\in\N_{0}}$ if $(WR.1a)$ is fulfilled. \end{lem} \begin{proof} Let $f\in\mathcal{E}\nu_{I_{1}}(\Omega_{I_{1}})$ and $\varepsilon>0$. Due to $(WR.1a)$ there is a compact set $K\subset\overline{\Omega}_{n}$ such that \begin{equation}\label{lem2.1} \sup_{x \in \Omega_{n}\setminus K}\frac{\nu_{n}(x)}{\nu_{I_{1}}(x)}\leq\varepsilon \end{equation} where we use the convention $\sup_{x\in\varnothing}\tfrac{\nu_{n}(x)}{\nu_{I_{1}}(x)}:=-\infty$. As in the proof of \cite[Theorem 1.4.1, p.\ 25]{H1} we can find $\varphi\in\mathcal{D}(\Omega_{I_{1}})$, $0\leq \varphi\leq 1$, such that $\varphi=1$ near $K$ and \begin{equation}\label{lem2.2} |\partial^{\alpha}\varphi|\leq C_{\alpha}D^{-|\alpha|} \end{equation} for all $\alpha\in\N^{2}_{0}$ where $D:=\differential_{n,I_{1}}/2$ and $C_{\alpha}>0$ is a constant only depending on $\alpha$.
Then $\varphi f\in\mathcal{D}(\Omega_{I_{1}})$ and with $K_{0}:=\operatorname{supp}\varphi$ we have for $m\in\N_{0}$ by the Leibniz rule \begin{flalign*} &\hspace{0.37cm}|\varphi f -f|_{n,m}\\ &\leq\sup_{\substack{x \in \Omega_{n}\setminus K \\ \alpha\in\N^{2}_{0}, |\alpha| \leq m}} |\partial^{\alpha}( \varphi f)(x)|\nu_{n}(x) +\sup_{\substack{x \in \Omega_{n}\setminus K\\ \alpha\in\N^{2}_{0}, |\alpha| \leq m}} |\partial^{\alpha} f(x)|\nu_{n}(x)\\ &\leq \sup_{\substack{x \in (\Omega_{n}\setminus K)\cap K_{0}\\ \alpha\in\N^{2}_{0}, |\alpha| \leq m}} \bigl|\sum_{\gamma\leq \alpha}\dbinom{\alpha}{\gamma}\partial^{\alpha-\gamma} \varphi(x) \partial^{\gamma}f(x)\bigr|\nu_{n}(x) +\sup_{\substack{x \in \Omega_{n}\setminus K\\ \alpha\in\N^{2}_{0},|\alpha| \leq m}} |\partial^{\alpha} f(x)|\nu_{I_{1}}(x)\frac{\nu_{n}(x)}{\nu_{I_{1}}(x)}\\ &\underset{\mathclap{\eqref{lem2.1}}}{\leq}\; \sup_{\alpha\in\N^{2}_{0},|\alpha| \leq m} \sum_{\gamma\leq \alpha}\dbinom{\alpha}{\gamma}\sup_{x \in K_{0}}|\partial^{\alpha-\gamma} \varphi(x)| \bigl(\sup_{\substack{x \in \Omega_{n}\setminus K\\ \beta\in\N^{2}_{0},|\beta| \leq m}} |\partial^{\beta}f(x)|\nu_{n}(x)\bigr) +\varepsilon |f|_{I_{1},m}\\ &\underset{\mathclap{\eqref{lem2.1},\eqref{lem2.2}}}{\leq}\quad\underbrace{\sup_{\alpha\in\N^{2}_{0},|\alpha|\leq m} \sum_{\gamma\leq\alpha}\dbinom{\alpha}{\gamma}C_{\alpha-\gamma}D^{-|\alpha-\gamma|}}_{:=C(m,D)} \varepsilon|f|_{I_{1},m}+\varepsilon |f|_{I_{1},m} =(C(m,D)+1)|f|_{I_{1},m}\varepsilon \end{flalign*} where $C(m,D)$ is independent of $\varepsilon$, proving the density. \end{proof} The next lemma is devoted to a special fundamental solution of the $\overline{\partial}$-operator and its properties, namely to $E_{n}\colon \C\setminus \{0\}\to \C,\; E_{n}(z):=\frac{g_{n}(z)}{\pi z},$ with $g_{n}$ from $(WR)$. \begin{lem}\label{lem:prepare_convolution} Let $n\in\N$ and $(WR.1b)$, $(WR.1c)$ and $(WR.2)$ be fulfilled. \begin{enumerate} \item [a)] Then $\overline{\partial}T_{E_{n}}=\delta$ in the distributional sense. \item [b)] Let $x\in X_{I_{2}(n)}+\mathbb{B}_{R_{n}}(0)$ and $\alpha\in\N^{2}_{0}$. Then $\partial^{\alpha}_{x}[E_{n}(\cdot-x)]\in \mathcal{E}\nu_{I_{2}(n),\overline{\partial}}(\Omega_{I_{2}(n)})$. \item [c)] Let $K\subset\R^{2}$ be a compact set and $m\in\N_{0}$. Then \begin{equation}\label{lem3.1} |T_{E_{n}}\ast\psi|_{n,m}\leq \frac{A_{3}(n,K)}{\pi}\|\psi\|_{m},\quad \psi\in\mathcal{C}^{\infty}_{c}(K), \end{equation} with the convolution from \eqref{distr.falt.}. In particular, $T_{E_{n}}\ast\psi\in\mathcal{E}\nu_{n}(\Omega_{n})$. \item [d)] \begin{enumerate} \item [(i)] There exists $\varphi\in\mathcal{C}^{\infty}(\R^{2})$, $0\leq\varphi\leq 1$, such that $\varphi=1$ near $\overline{\Omega}_{n}$ and $\varphi=0$ near $\Omega_{I_{4}(n)}^{C}$ and, in addition, \begin{equation}\label{lem3.2} |\partial^{\alpha}\varphi|\leq c_{\alpha}\differential_{n,I_{4}(n)}^{-|\alpha|} \end{equation} for all $\alpha\in\N^{2}_{0}$ where $c_{\alpha}>0$ is a constant only depending on $\alpha$. \item [(ii)] Choose $\varphi\in\mathcal{C}^{\infty}(\R^{2})$ as in (i) and $m\in\N_{0}$. Then there is $A_{5}=A_{5}(n,m)$ such that \begin{equation}\label{lem3.3} |T_{E_{I_{14}(n)}}\ast(\varphi f)|_{p,m}\leq A_{5}|f|_{k,m}, \quad f\in\mathcal{E}\nu_{I_{14}(n)}(\Omega_{I_{14}(n)}), \end{equation} for $(k,p)=(I_{4}(n),n)$ and $(k,p)=(I_{14}(n),I_{14}(n))$ where the convolution is defined by the right-hand side of \eqref{distr.falt.} and we set $\varphi f:=0$ outside $\Omega_{I_{14}(n)}$.
In particular, $T_{E_{I_{14}(n)}}\ast(\varphi f)\in\mathcal{E}\nu_{I_{14}(n)}(\Omega_{I_{14}(n)})$. \end{enumerate} \end{enumerate} \end{lem} \begin{proof} $a)$ Let $\varphi\in\mathcal{D}(\C)$ and set $E_{0}(z):=\frac{1}{\pi z}$, $z\neq 0$. Using $g_{n}\in\mathcal{O}(\C)$, $g_{n}(0)=1$ and the fact that $T_{E_{0}}$ is a fundamental solution of the $\overline{\partial}$-operator by \cite[Eq.\ (3.1.12), p.\ 63]{H1}, we get \begin{align*} \langle \overline{\partial}T_{E_{n}}, \varphi\rangle &= -\langle T_{E_{n}}, \overline{\partial}\varphi\rangle = -\langle T_{E_{0}}, g_{n}\overline{\partial}\varphi\rangle = -\langle T_{E_{0}}, \overline{\partial}(g_{n}\varphi)\rangle\\ &= \langle \overline{\partial}T_{E_{0}}, g_{n}\varphi\rangle = \langle \delta, g_{n}\varphi\rangle =g_{n}(0)\varphi(0)=\varphi(0)=\langle \delta, \varphi\rangle. \end{align*} $b)$ Since $x\in (X_{I_{2}}+\mathbb{B}_{R_{n}}(0))\subset \Omega_{I_{2}}^{C}$, it follows $\partial^{\alpha}_{x}[E_{n}(\cdot-x)]\in\mathcal{O}(\Omega_{I_{2}})$. Let $z\in \Omega_{I_{2}}$ and $\beta\in\N^{2}_{0}$. We get by the Cauchy inequality and $(WR.1b)$ \begin{align*} |\partial^{\beta}_{z}\partial^{\alpha}_{x}[E_{n}(z-x)]| &\underset{\mathclap{\eqref{lem1}}}{=} |i^{\alpha_{2}+\beta_{2}}(-1)^{|\alpha|}E_{n}^{(|\alpha+\beta|)}(z-x)|\\ &\leq\frac{|\alpha+\beta|!}{r_{n}^{|\alpha+\beta|}}\max_{|\zeta-(z-x)|=r_{n}}{|E_{n}(\zeta)|}\\ &\leq\frac{1}{\pi}\frac{|\alpha+\beta|!}{r_{n}^{|\alpha+\beta|}(\differential_{X,I_{2}}-R_{n}-r_{n})}\max_{|\zeta-(z-x)|=r_{n}} {|g_{n}(\zeta)|} \end{align*} and hence \begin{flalign*} &\hspace{0.37cm}|\partial^{\alpha}_{x}[E_{n}(\cdot-x)]|_{I_{2},m}\\ &\underset{\eqref{pro.2}}{\leq}\frac{1}{\pi}\sup_{\beta\in\N^{2}_{0},|\beta|\leq m} \frac{|\alpha+\beta|!}{r_{n}^{|\alpha+\beta|}(\differential_{X,I_{2}}-R_{n}-r_{n})}A_{2}(x,n)<\infty. \end{flalign*} $c)$ By the definition of distributional convolution $T_{E_{n}}\ast\psi\in\mathcal{C}^{\infty}(\R^{2})$ and for $x\in\R^{2}$ and $\alpha\in\N^{2}_{0}$ the following inequalities hold \begin{align*} |\partial^{\alpha}(T_{E_{n}}\ast\psi)(x)| &=\bigl|\int_{\R^{2}}{E_{n}(y)(\partial^{\alpha}\psi)(x-y)\differential y}\bigr| \leq \|\psi\|_{|\alpha|}\int_{x-K}{|E_{n}(y)|\differential y}\\ &=\frac{1}{\pi}\|\psi\|_{|\alpha|}\int_{K}{\frac{|g_{n}(x-y)|}{|x-y|}\differential y} \end{align*} and thus by $(WR.1c)$ \[ |T_{E_{n}}\ast\psi|_{n,m} \leq\frac{1}{\pi}A_{3}(n,K)\|\psi \|_{m}. \] $d)$ The existence of $\varphi$ follows from the proof of \cite[Theorem 1.4.1, p.\ 25]{H1}. Now, let $x\in\Omega_{p}$ and $\alpha\in\N^{2}_{0}$.
Then, by $(WR.2)$, the Leibniz rule and the fact that $\operatorname{supp}\varphi\subset \Omega_{I_{4}}$, we have \begin{flalign*} &\hspace{0.37cm}\bigl|\int_{\R^{2}}{E_{I_{14}}(y)\partial^{\alpha}(f\varphi)(x-y)\differential y}\bigr|\\ &\leq\int_{x-\Omega_{I_{4}}}{|E_{I_{14}}(y)\partial^{\alpha}(f\varphi)(x-y)|\nu_{k}(x-y)\nu_{k}(x-y)^{-1}\differential y}\\ &\leq \sup_{z \in \Omega_{I_{4}}}{|\partial^{\alpha}(f\varphi)(z)|\nu_{k}(z)} \int_{\Omega_{I_{4}}}{|E_{I_{14}}(x-y)|\nu_{k}(y)^{-1}\differential y}\\ &\leq\phantom{\cdot}\sum_{\gamma\leq \alpha}{\dbinom{\alpha}{\gamma}\sup_{z \in\Omega_{I_{4}}}|\partial^{\alpha-\gamma} \varphi(z)|} \sup_{\substack{w \in\Omega_{I_{4}}\\ \beta\in\N^{2}_{0}, |\beta| \leq |\alpha|}} |\partial^{\beta}f(w)|\nu_{k}(w)\\ &\phantom{\underset{\mathclap{\eqref{pro.4}}}{\leq}}\cdot \int_{\Omega_{I_{4}}}{|E_{I_{14}}(x-y)|\nu_{k}(y)^{-1}\differential y}\\ &\underset{\mathclap{\eqref{lem3.2}}}{\leq}\phantom{\cdot}\underbrace{\sum_{\gamma\leq \alpha}{\dbinom{\alpha}{\gamma} c_{\alpha-\gamma}\differential_{n,I_{4}}^{-|\alpha-\gamma|}}}_{=:C_{0}(\alpha,n)} \sup_{\substack{w \in\Omega_{I_{4}}\\\beta\in\N^{2}_{0},|\beta| \leq |\alpha|}}|\partial^{\beta}f(w)|\nu_{k}(w)\\ &\phantom{\underset{\mathclap{\eqref{pro.4}}}{\leq}}\cdot \int_{\Omega_{I_{4}}}{|E_{I_{14}}(x-y)|\nu_{k}(y)^{-1}\differential y} \\ &\underset{\mathclap{\eqref{pro.4}}}{\leq}\frac{A_{4}(n)C_{0}(\alpha,n)}{\nu_{p}(x)} \sup_{\substack{w\in\Omega_{I_{4}}\\ \beta\in\N^{2}_{0},|\beta| \leq |\alpha|}}|\partial^{\beta}f(w)|\nu_{k}(w). \end{flalign*} Thus $T_{E_{I_{14}}}\ast(\varphi f)\in \mathcal{C}^{\infty}(\Omega_{p})$ and \[ \partial^{\alpha}(T_{E_{I_{14}}}\ast(\varphi f))(x)=\int_{\R^{2}}{E_{I_{14}}(y)\partial^{\alpha}(f\varphi)(x-y)\differential y} \] by differentiation under the integral sign as well as \[ |T_{E_{I_{14}}}\ast(\varphi f)|_{p,m} \leq A_{4}(n)\sup_{\alpha\in\N^{2}_{0},|\alpha|\leq m}C_{0}(\alpha,n)|f|_{k,m} \] for $(k,p)=(I_{4},n)$ and $(k,p)=(I_{14},I_{14})$. \end{proof} The next step is to define different kinds of convolutions and study their relations and properties, which shall be exploited in the proof of the density theorem. \begin{lem}\label{lem:convolution} Let $n\in\N$, $(WR.1b)$, $(WR.1c)$ and $(WR.2)$ be fulfilled and $w\in(\pi_{I_{14}(n),n}(\mathcal{E}\nu_{I_{14}(n)}(\Omega_{I_{14}(n)})), (|\cdot|_{n,m})_{m\in\N_{0}})'$. \begin{enumerate} \item [a)] For $\psi\in\mathcal{D}(\R^{2})$ we define \[ \langle w \ast_{1} T_{\check{E}_{I_{14}}}, \psi\rangle:=\langle w,(T_{E_{I_{14}}}\ast \psi)_{\mid\Omega_{n}}\rangle. \] Then $w \ast_{1} T_{\check{E}_{I_{14}}}\in\mathcal{D}'(\R^{2})$. \item [b)] For $x\in X_{I_{214}}$ we define \[ (w \ast_{2} \check{E}_{I_{14}})(x):=\langle w,E_{I_{14}}(\cdot-x)_{\mid\Omega_{n}}\rangle. \] Then $w \ast_{2} \check{E}_{I_{14}}\in\mathcal{C}^{\infty}(X_{I_{214}})$ and for $\alpha\in\N^{2}_{0}$ \begin{equation}\label{lem4.1} \partial^{\alpha}_{x}(w \ast_{2} \check{E}_{I_{14}})(x) =\langle w, \partial^{\alpha}_{x}[E_{I_{14}}(\cdot-x)]_{\mid\Omega_{n}}\rangle. \end{equation} \item [c)] For $\psi\in\mathcal{D}(\R^{2})$ with $\operatorname{supp}\psi\subset X_{I_{214}}$ the preceding definitions of convolution are consistent, i.e.\ \[ \langle w \ast_{1} T_{\check{E}_{I_{14}}}, \psi\rangle = \langle T_{w \ast_{2} \check{E}_{I_{14}}},\psi\rangle.
\] \item [d)] Choose $\varphi$ as in \prettyref{lem:prepare_convolution} d), let $m\in\N_{0}$ and for $f\in\mathcal{E}\nu_{I_{14}}(\Omega_{I_{14}})$ we define \[ \langle w\ast_{\varphi} T_{\check{E}_{I_{14}}},f\rangle:=\langle w, [T_{E_{I_{14}}}\ast (\varphi f)]_{\mid\Omega_{n}}\rangle . \] Then there exists a constant $A_{6}=A_{6}\left(w,n,m\right)>0$ such that \begin{equation}\label{lem4.0.1} |\langle w\ast_{\varphi} T_{\check{E}_{I_{14}}}, f\rangle|\leq A_{6} |f|_{I_{4},m}. \end{equation} \end{enumerate} \end{lem} \begin{proof} $a)$ $w \ast_{1} T_{\check{E}_{I_{14}}}$ is defined by \prettyref{lem:prepare_convolution} c). Let $K\subset \R^{2}$ be compact. Since $w$ is continuous, there exist $C>0$ and $m\in\N_{0}$ such that \begin{align*} |\langle w \ast_{1} T_{\check{E}_{I_{14}}},\psi\rangle| &=|\langle w , (T_{E_{I_{14}}}\ast\psi)_{\mid\Omega_{n}}\rangle| \leq C |T_{E_{I_{14}}}\ast\psi|_{n,m}\\ &\leq C |T_{E_{I_{14}}}\ast\psi |_{I_{14},m} \underset{\eqref{lem3.1}}{\leq}\frac{CA_{3}(I_{14},K)}{\pi} \|\psi\|_{m} \end{align*} for all $\psi\in\mathcal{C}^{\infty}_{c}(K)$, thus $w \ast_{1} T_{\check{E}_{I_{14}}}\in\mathcal{D}'(\R^{2})$. $b)$ $w \ast_{2} \check{E}_{I_{14}}$ and the right-hand side of \eqref{lem4.1} are defined by \prettyref{lem:prepare_convolution} b) since $I_{214}>I_{14}$. For $h\in\R$ with $0<|h|$ small enough and $x\in X_{I_{214}}$ we define \[ \psi_{h}(x)\colon \Omega_{I_{214}}\to\R^{2}, \; \psi_{h}(x)[y]:=\frac{E_{I_{14}}(y-(x+he_{l}))-E_{I_{14}}(y-x)}{h} \] where $e_{l}$, $l=1,2$, is the $l$th canonical unit vector in $\R^{2}$. For $0<|h|<R_{I_{14}}$ we have $x+he_{l}\in X_{I_{214}}+\mathbb{B}_{R_{I_{14}}}(0)$ and so $E_{I_{14}}(\cdot-(x+he_{l}))\in\mathcal{E}\nu_{I_{214}}(\Omega_{I_{214}})$ by \prettyref{lem:prepare_convolution} b). Hence we get $\psi_{h}(x)\in\mathcal{E}\nu_{I_{214}}(\Omega_{I_{214}})$. The motivation for the definition of $\psi_{h}(x)$ comes from \begin{flalign*} &\hspace{0.37cm} \frac{(w \ast_{2} \check{E}_{I_{14}})(x+he_{l})-(w \ast_{2} \check{E}_{I_{14}})(x)}{h}\\ &=\bigl\langle w,\frac{E_{I_{14}}(\cdot-(x+he_{l}))-E_{I_{14}}(\cdot-x)}{h}_{\big{|}\Omega_{n}}\bigr\rangle =\langle w,\psi_{h}(x)_{\mid\Omega_{n}}\rangle. \end{flalign*} So, if we show that $\psi_{h}(x)$ converges to $\partial_{x_{l}}[E_{I_{14}}(\cdot-x)]$ in $\mathcal{E}\nu_{I_{214}}(\Omega_{I_{214}})$ as $h$ tends to $0$, we get, keeping $|\cdot|_{n,m}\leq|\cdot|_{I_{214},m}$ in mind, \[ \partial_{x_{l}}(w \ast_{2} \check{E}_{I_{14}})(x) =\langle w, \partial_{x_{l}}[E_{I_{14}}(\cdot-x)]_{\mid\Omega_{n}}\rangle. \] Then the general statement follows by induction over the order $|\alpha|$. Let $y\in \Omega_{I_{214}}$ and $\beta\in\N^{2}_{0}$. Since $|y-x|\geq \differential_{X,I_{214}}>0$, we get $0\notin \mathbb{B}_{\differential_{X,I_{214}}}(y-x)$. Moreover, $R_{I_{14}}<\differential_{X,I_{214}}$ by \prettyref{cond:dense} a)(ii) and so \[ |y-(x+he_{l})-(y-x)|=|h|<R_{I_{14}}<\differential_{X,I_{214}}. \] Thus $y-(x+he_{l})\in \overline{\mathbb{B}_{|h|}(y-x)}\subset \mathbb{B}_{R_{I_{14}}}(y-x)$ and $0\notin\overline{\mathbb{B}_{|h|}(y-x)}$. We write $E_{I_{14}}=(E_{I_{14},1},E_{I_{14},2})$ as a tuple of its coordinate functions.
By the mean value theorem there exist $\zeta_{i}\in[y-(x+he_{l}),y-x]\subset\overline{\mathbb{B}_{|h|}(y-x)}$, $i=1,2$, where $[y-(x+he_{l}),y-x]$ denotes the line segment from $y-(x+he_{l})$ to $y-x$, such that \begin{align*} \partial^{\beta}_{y}\psi_{h}(x)[y] &=\frac{(\partial^{\beta}E_{I_{14}})(y-(x+he_{l}))-(\partial^{\beta}E_{I_{14}})(y-x)}{h}\\ &=\frac{1}{h} \begin{pmatrix} \langle \nabla(\partial^{\beta}E_{I_{14},1})(\zeta_{1})|-he_{l}\rangle\\ \langle \nabla(\partial^{\beta}E_{I_{14},2})(\zeta_{2})|-he_{l}\rangle \end{pmatrix} =-\begin{pmatrix} \partial_{l}\partial^{\beta}E_{I_{14},1}(\zeta_{1})\\ \partial_{l}\partial^{\beta}E_{I_{14},2}(\zeta_{2}) \end{pmatrix}, \end{align*} where $\nabla$ denotes the gradient, as well as $\zeta_{ii}\in[\zeta_{i},y-x]\subset\overline{\mathbb{B}_{|h|}(y-x)}$, $i=1,2$, such that \begin{flalign}\label{lem4.2} &\hspace{0.37cm}\partial^{\beta}_{y}\psi_{h}(x)[y]-\partial^{\beta}_{y}\partial_{x_{l}}[E_{I_{14}}(y-x)]\nonumber\\ &=-\begin{pmatrix} \partial_{l}\partial^{\beta}E_{I_{14},1}(\zeta_{1})\\ \partial_{l}\partial^{\beta}E_{I_{14},2}(\zeta_{2}) \end{pmatrix} -\partial^{\beta}(-\partial_{l}E_{I_{14}})(y-x)\nonumber\\ &=\begin{pmatrix} \langle \nabla(\partial_{l}\partial^{\beta}E_{I_{14},1})(\zeta_{11})|y-x-\zeta_{1}\rangle\\ \langle \nabla(\partial_{l}\partial^{\beta}E_{I_{14},2})(\zeta_{22})|y-x-\zeta_{2}\rangle \end{pmatrix}. \end{flalign} Then \begin{flalign}\label{lem4.3} &\hspace{0.3cm}\left|\begin{pmatrix} \langle \nabla(\partial_{l}\partial^{\beta}E_{I_{14},1})(\zeta_{11})|y-x-\zeta_{1}\rangle\\ \langle \nabla(\partial_{l}\partial^{\beta}E_{I_{14},2})(\zeta_{22})|y-x-\zeta_{2}\rangle \end{pmatrix}\right|\nonumber\\ &\leq|\langle \nabla(\partial_{l}\partial^{\beta}E_{I_{14},1})(\zeta_{11})|y-x-\zeta_{1}\rangle| +|\langle \nabla(\partial_{l}\partial^{\beta}E_{I_{14},2})(\zeta_{22})|y-x-\zeta_{2}\rangle|\nonumber\\ &\leq |\nabla(\partial_{l}\partial^{\beta}E_{I_{14},1})(\zeta_{11})| |y-x-\zeta_{1}| +|\nabla(\partial_{l}\partial^{\beta}E_{I_{14},2})(\zeta_{22})| |y-x-\zeta_{2}|\nonumber\\ &\leq \phantom{+}(|\partial_{1}\partial_{l}\partial^{\beta}E_{I_{14},1}(\zeta_{11})| +|\partial_{2}\partial_{l}\partial^{\beta}E_{I_{14},1}(\zeta_{11})|\nonumber\\ &\phantom{\leq}+|\partial_{1}\partial_{l}\partial^{\beta}E_{I_{14},2}(\zeta_{22})| +|\partial_{2}\partial_{l}\partial^{\beta}E_{I_{14},2}(\zeta_{22})|) |h|\nonumber\\ &\leq \phantom{+}(|\partial_{1}\partial_{l}\partial^{\beta}E_{I_{14}}(\zeta_{11})| +|\partial_{2}\partial_{l}\partial^{\beta}E_{I_{14}}(\zeta_{11})|\nonumber\\ &\phantom{\leq}+|\partial_{1}\partial_{l}\partial^{\beta}E_{I_{14}}(\zeta_{22})| +|\partial_{2}\partial_{l}\partial^{\beta}E_{I_{14}}(\zeta_{22})|) |h|\nonumber\\ &\underset{\mathclap{\eqref{lem1}}}{=}2(|E_{I_{14}}^{(|\beta|+2)}(\zeta_{11})| +|E_{I_{14}}^{(|\beta|+2)}(\zeta_{22})|)|h| \end{flalign} is valid. 
By the choice $R_{I_{14}}<r_{I_{14}}<\differential_{X,I_{214}}-R_{I_{14}}$ from $(WR.1b)$ we get due to Cauchy's integral formula \begin{flalign}\label{lem4.4} &\hspace{0.37cm} |E_{I_{14}}^{(|\beta|+2)}(\zeta_{ii})|\nonumber\\ &=\frac{(|\beta|+2)!}{2\pi }\bigl|\int_{\partial \mathbb{B}_{r_{I_{14}}}(y-x)} {\frac{E_{I_{14}}(\zeta)}{(\zeta-\zeta_{ii})^{|\beta|+3}}\differential\zeta}\bigr|\nonumber\\ &\leq \frac{r_{I_{14}}(|\beta|+2)!}{(r_{I_{14}}-R_{I_{14}})^{|\beta|+3}} \max_{|\zeta-(y-x)|=r_{I_{14}}}{|\frac{g_{I_{14}}(\zeta)}{\pi \zeta}|}\nonumber\\ &\leq \underbrace{\frac{r_{I_{14}}(|\beta|+2)!}{\pi(r_{I_{14}}-R_{I_{14}})^{|\beta|+3}(\differential_{X,I_{214}}-r_{I_{14}})}}_{=:C(n,|\beta|)} \max_{|\zeta-(y-x)|=r_{I_{14}}}{|g_{I_{14}}(\zeta)|}. \end{flalign} Hence by combining \eqref{lem4.2}, \eqref{lem4.3} and \eqref{lem4.4}, we have for $m\in\N_{0}$ \[ \hspace{0.65cm}|\psi_{h}(x)-\partial_{x_{l}}[E_{I_{14}}(\cdot-x)]|_{I_{214},m} \underset{\eqref{pro.2}}{\leq} 4 \sup_{\beta\in\N^{2}_{0},|\beta|\leq m}C(n,|\beta|)A_{2}(x,I_{14})|h|\underset{h\to 0}{\to} 0. \] This means that $\psi_{h}(x)$ converges to $\partial_{x_{l}}[E_{I_{14}}(\cdot-x)]$ in $\mathcal{E}\nu_{I_{214}}(\Omega_{I_{214}})$ and so with respect to $(|\cdot|_{n,m})_{m\in\N_{0}}$ as well since $|\cdot|_{n,m}\leq|\cdot|_{I_{214},m}$. $c)(i)$ For $h>0$ small enough we define \[ S_{h}(\psi)\colon \Omega_{I_{14}}\to\R^2,\; S_{h}(\psi)(y):=\sum_{m\in\Z^2}{E_{I_{14}}(y-mh)\psi(mh)h^2}, \] where $E_{I_{14}}(0)\psi(mh)=E_{I_{14}}(0)0:=0$ if $mh\in\Omega_{I_{14}}$. The first part of the proof is to show that $S_{h}(\psi)$ converges to $T_{E_{I_{14}}}\ast\psi$ in $\mathcal{E}\nu_{I_{14}}(\Omega_{I_{14}})$ as $h$ tends to $0$. Set $Q_{m}:=mh+[0,h]^{2}$ and let $N\subset X_{I_{214}}$ be compact. Now, we define $M_{N,h}:=\{m\in\Z^{2}\; | \; Q_{m}\cap N\neq \varnothing\}$. Due to this definition we have \begin{equation}\label{lem4.6} \{m\in\Z^{2}\; | \; mh\in N\}\subset M_{N,h} \end{equation} and \begin{equation}\label{lem4.8} |M_{N,h}| \leq \left\lceil \frac{\operatorname{diam}(N)}{h}\right\rceil^{2} \leq \bigl(\frac{\operatorname{diam}(N)}{h}+1\bigr)^{2} \end{equation} where $|M_{N,h}|$ denotes the cardinality of $M_{N,h}$, $\lceil x\rceil$ the ceiling of $x$ and $\operatorname{diam}(N)$ the diameter of $N$ w.r.t.\ $|\cdot|$. Let $0<h<\tfrac{1}{2\sqrt{2}}\differential^{|\cdot|}(N, \partial X_{I_{214}})$. Then \begin{equation}\label{lem4.9.0} Q_{m}\subset (N+\overline{\mathbb{B}_{\frac{1}{2}\differential^{|\cdot|}(N,\partial X_{I_{214}})}(0)})=:K\subset X_{I_{214}}, \quad m\in M_{N,h}, \end{equation} as $\sqrt{2}h$ is the length of the diagonal of any cube $Q_{m}$. Therefore we obtain for $y\in \Omega_{I_{14}} \subset\Omega_{I_{214}}$, $x\in Q_{m}$, $m\in M_{N,h}$ and $\beta\in\N^{2}_{0}$ analogously to the proof of \prettyref{lem:prepare_convolution} b) with the choice of $r_{I_{14}}$ from $(WR.1b)$ \begin{align}\label{lem4.10} |\partial^{\beta}_{y}[E_{I_{14}}(y-x)]| &\leq\frac{|\beta|!}{\pi r_{I_{14}}^{|\beta|}} \max_{|\zeta-(y-x)|=r_{I_{14}}}{\frac{|g_{I_{14}}(\zeta)|}{|\zeta|}}\nonumber\\ &\leq\underbrace{\frac{|\beta|!}{\pi r_{I_{14}}^{|\beta|}(\differential_{X,I_{214}}-r_{I_{14}})}}_{=:C_{1}(|\beta|,n)} \max_{|\zeta-(y-x)|=r_{I_{14}}}{|g_{I_{14}}(\zeta)|}\nonumber\\ &\underset{\mathclap{\eqref{pro.2}}}{\leq}C_{1}(|\beta|,n)\frac{A_{2}(x,I_{14})}{\nu_{I_{214}}(y)}. 
\end{align} Due to $(WR.1b)$ and \eqref{lem4.9.0} there is $C_{0}>0$, independent of $h$, such that for every $m\in M_{N,h}$ \begin{equation}\label{lem4.9.1} A_{2}(x,I_{14})\leq \sup_{z\in K}A_{2}(z,I_{14})\leq C_{0},\quad x\in Q_{m}. \end{equation} Let $\psi\in\mathcal{C}^{\infty}_{c}(N)$ and $m_{0}\in\N_{0}$. Then we have \begin{align*} |\partial^{\beta}_{y}S_{h}(\psi)(y)|\;&\underset{\mathclap{\eqref{lem4.6}}}{=}\; \bigl|\sum_{m\in M_{N,h}}{\partial^{\beta}_{y}[E_{I_{14}}(y-mh)]\psi(mh)h^2}\bigr|\\ &\underset{\mathclap{\eqref{lem4.10},\, mh\in Q_{m}}}{\leq} h^2C_{1}(|\beta|,n)\frac{1}{\nu_{I_{214}}(y)} \sum_{m\in M_{N,h}}A_{2}(mh,I_{14})|\psi(mh)|\\ &\underset{\mathclap{\eqref{lem4.8},\eqref{lem4.9.1}}}{\leq}C_{0}C_{1}(|\beta|,n)h^2 \bigl(\frac{\operatorname{diam}(N)}{h}+1\bigr)^{2}\frac{1}{\nu_{I_{214}}(y)}\|\psi\|_{0}\\ &=C_{0}C_{1}(|\beta|,n)(\operatorname{diam}(N)+h)^{2}\frac{1}{\nu_{I_{214}}(y)}\|\psi\|_{0} \end{align*} and therefore \begin{flalign*} &\hspace{0.37cm} |S_{h}(\psi)|_{I_{14},m_{0}}\\ &\leq C_{0}\sup_{\beta\in\N^{2}_{0},|\beta|\leq m_{0}}{C_{1}(|\beta|,n)}(\operatorname{diam}(N)+h)^{2} \sup_{y\in \Omega_{I_{14}}}{\frac{\nu_{I_{14}}(y)}{\nu_{I_{214}}(y)}}\|\psi\|_{0}\\ &\leq C_{0}\sup_{\beta\in\N^{2}_{0},|\beta|\leq m_{0}}{C_{1}(|\beta|,n)}(\operatorname{diam}(N)+h)^{2}\|\psi\|_{0} \end{flalign*} bringing forth $S_{h}\left(\psi\right)\in\mathcal{E}\nu_{I_{14}}(\Omega_{I_{14}})$. Further, the following equations hold \begin{flalign}\label{lem4.12} &\hspace{0.37cm}|\partial^{\beta}(S_{h}(\psi)-T_{E_{I_{14}}}\ast\psi)(y)|\nonumber\\ &=\bigl|\sum_{m\in M_{N,h}}\underbrace{{\partial^{\beta}_{y}[E_{I_{14}}(y-mh)]\psi(mh)h^2}}_{ =\int_{Q_{m}}{\partial^{\beta}_{y}[E_{I_{14}}(y-mh)]\psi(mh)\differential x}} -\underbrace{\int_{\R^2}{\partial^{\beta}_{y}[E_{I_{14}}(y-x)]\psi(x)\differential x}}_{ =\sum_{m\in M_{N,h}}{\int_{Q_{m}}{\partial^{\beta}_{y}[E_{I_{14}}(y-x)]\psi(x)\differential x}}}\bigl|\nonumber\\ &=\bigl| \sum_{m\in M_{N,h}}\,{\int_{Q_{m}}{(\partial^{\beta}E_{I_{14}})(y-mh)\psi(mh) -(\partial^{\beta}E_{I_{14}})(y-x)\psi(x)\differential x}}\bigr|\nonumber\\ &=\bigl|\sum_{m\in M_{N,h}}\,{\int_{Q_{m}}{[(\partial^{\beta}E_{I_{14}})(y-mh)-(\partial^{\beta}E_{I_{14}})(y-x)]\psi(mh)}}\nonumber\\ & \phantom{\bigl|\sum_{m\in M_{N,h}}\,{\int_{Q_{m}}}}\;+{{[\psi(mh)-\psi(x)](\partial^{\beta}E_{I_{14}})(y-x)\differential x}}\bigr| . \end{flalign} The next steps are similar to the proof of $b)$. 
By the mean value theorem there exist $x_{0,i},x_{1,i}\in[x,mh]\subset Q_{m}$, $i=1,2$, such that for $\psi=(\psi_{1},\psi_{2})$ \begin{align}\label{lem4.13} |\psi(mh)-\psi(x)| &=\left| \begin{pmatrix} \langle \nabla(\psi_{1})(x_{0,1})|mh-x\rangle\\ \langle \nabla(\psi_{2})(x_{0,2})|mh-x\rangle \end{pmatrix}\right| \leq 4\|\psi\|_{1}|mh-x|\nonumber\\ &\leq 4\sqrt{2}h\|\psi\|_{1} \end{align} and \begin{flalign}\label{lem4.14} &\hspace{0.37cm}|(\partial^{\beta}E_{I_{14}})(y-mh)-(\partial^{\beta}E_{I_{14}})(y-x)|\nonumber\\ &=\left|- \begin{pmatrix} \langle \nabla(\partial^{\beta}E_{I_{14},1})(y-x_{1,1})|mh-x\rangle\\ \langle \nabla(\partial^{\beta}E_{I_{14},2})(y-x_{1,2})|mh-x\rangle \end{pmatrix}\right|\nonumber\\ &\leq 2(E_{I_{14}}^{(|\beta|+1)}(y-x_{1,1})+E_{I_{14}}^{(|\beta|+1)}(y-x_{1,2}))|mh-x|\nonumber\\ &\underset{\mathclap{\eqref{lem4.10}}}{\leq}4\sqrt{2}hC_{1}(|\beta|+1,n)\frac{1}{\nu_{I_{214}}(y)} (A_{2}(x_{1,1},I_{14})+A_{2}(x_{1,2},I_{14}))\nonumber\\ &\underset{\mathclap{\eqref{lem4.9.1}}}{\leq} 4\sqrt{2}hC_{1}(|\beta|+1,n)\frac{2C_{0}}{\nu_{I_{214}}(y)} \end{flalign} analogously to \eqref{lem4.3}. Thus by combining \eqref{lem4.12}, \eqref{lem4.13} and \eqref{lem4.14}, we obtain \begin{flalign*} &\hspace{0.37cm}|\partial^{\beta}(S_{h}(\psi)-E_{I_{14}}\ast\psi)(y)|\\ &\leq\sum_{m\in M_{N,h}}\,\int_{Q_{m}}4\sqrt{2}h\bigl(C_{1}(|\beta|+1,n)\frac{2C_{0}}{\nu_{I_{214}}(y)}|\psi(mh)|\\ &\phantom{\leq\sum_{m\in M_{N,h}\,}\int_{Q_{m}}4\sqrt{2}h}\; +\|\psi\|_{1}|(\partial^{\beta}E_{I_{14}})(y-x)|\bigr)\differential x\\ &\underset{\mathclap{\eqref{lem4.10},\eqref{lem4.9.1}}}{\leq}\quad\, \sum_{m\in M_{N,h}}{8\sqrt{2}C_{0}C_{2}(|\beta|,n) h\frac{1}{\nu_{I_{214}}(y)}(\|\psi\|_{0}+\|\psi\|_{1})\lambda(Q_{m})} \\ &\underset{\mathclap{\eqref{lem4.8}}}{\leq}8\sqrt{2}h^{3}\bigl(\frac{\operatorname{diam}(N)}{h}+1\bigr)^{2} C_{0}C_{2}(|\beta|,n)\frac{1}{\nu_{I_{214}}(y)}(\|\psi\|_{0}+\|\psi\|_{1})\\ &\leq 16\sqrt{2}(\operatorname{diam}(N)+h)^{2}hC_{0}C_{2}(|\beta|,n)\frac{1}{\nu_{I_{214}}(y)}\|\psi\|_{1} \end{flalign*} with $C_{2}(|\beta|,n):=\max\{C_{1}(|\beta|+1,n),C_{1}(|\beta|,n)\}$ and so for $m_{0}\in\N_{0}$ \begin{flalign*} &\hspace{0.37cm}|S_{h}(\psi)-T_{E_{I_{14}}}\ast\psi|_{I_{14},m_{0}}\\ &\leq 16\sqrt{2}(\operatorname{diam}(N)+h)^{2}hC_{0}\sup_{\beta\in\N^{2}_{0},|\beta|\leq m_{0}}{C_{2}(|\beta|,n)} \sup_{y\in \Omega_{I_{14}}}{\frac{\nu_{I_{14}}(y)}{\nu_{I_{214}}(y)}}\|\psi\|_{1}\\ &\leq 16\sqrt{2}C_{0}\sup_{\beta\in\N^{2}_{0},|\beta|\leq m_{0}} {C_{2}(|\beta|,n)}\|\psi\|_{1}(\operatorname{diam}(N)+h)^{2}h \underset{h\to 0}{\to} 0, \end{flalign*} proving the convergence of $S_{h}(\psi)$ to $T_{E_{I_{14}}}\ast\psi$ in $\mathcal{E}\nu_{I_{14}}(\Omega_{I_{14}})$ and hence with respect to $(|\cdot|_{n,m_{0}})_{m_{0}\in\N_{0}}$ as well. $(ii)$ The next part of the proof is to show that \[ \lim_{h\to 0}\sum_{m\in M_{N,h}} (w\ast_{2}\check{E}_{I_{14}})(mh)\psi(mh)h^{2} =\int_{\R^{2}}{(w\ast_{2}\check{E}_{I_{14}})(x)\psi(x)\differential x}. \] Let $0<h<\tfrac{1}{2\sqrt{2}}\differential^{|\cdot|}(N, \partial X_{I_{214}})$. 
We begin with \begin{flalign}\label{lem4.16} &\hspace{0.37cm}\bigl|\sum_{m\in M_{N,h}} (w\ast_{2}\check{E}_{I_{14}})(mh)\psi(mh)h^{2} -\int_{\R^2}{(w\ast_{2}\check{E}_{I_{14}})(x)\psi(x)\differential x}\bigr|\nonumber\\ &=\bigl|\sum_{m\in M_{N,h}}\,\int_{Q_{m}}{ (w\ast_{2}\check{E}_{I_{14}})(mh)\psi(mh) -(w\ast_{2}\check{E}_{I_{14}})(x)\psi(x)\differential x}\bigr|\nonumber\\ &=\bigl|\sum_{m\in M_{N,h}}\,{\int_{Q_{m}} {[(w\ast_{2}\check{E}_{I_{14}})(mh)-(w\ast_{2}\check{E}_{I_{14}})(x) ]\psi(mh)}}\nonumber\\ &\phantom{\sum_{m\in M_{N,h}}\,\int_{Q_{m}}}\;\;+{{[\psi(mh)-\psi(x)](w\ast_{2}\check{E}_{I_{14}})(x)\differential x}}\bigr|. \end{flalign} Again, by the mean value theorem there exist $x_{0,i},\;x_{1,i}\in[x,mh]\subset Q_{m}$, $i=1,2$, for $x\in Q_{m}=mh+[0,h]^{2}$ such that \begin{equation}\label{lem4.17} |\psi(mh)-\psi(x)| =\left| \begin{pmatrix} \langle \nabla(\psi_{1})(x_{0,1})|mh-x\rangle\\ \langle \nabla(\psi_{2})(x_{0,2})|mh-x\rangle \end{pmatrix}\right| \leq 4\sqrt{2}h\|\psi\|_{1} \end{equation} and for $w\ast_{2}\check{E}_{I_{14}}=((w\ast_{2}\check{E}_{I_{14}})_{1},(w\ast_{2}\check{E}_{I_{14}})_{2})$, taking account of \eqref{lem4.9.0} and part b), \begin{flalign}\label{lem4.18} &\hspace{0.37cm}|(w\ast_{2}\check{E}_{I_{14}})(mh)-(w\ast_{2}\check{E}_{I_{14}})(x)|\nonumber\\ &=\left| \begin{pmatrix} \langle \nabla((w\ast_{2}\check{E}_{I_{14}})_{1})(x_{1,1})|mh-x\rangle\\ \langle \nabla((w\ast_{2}\check{E}_{I_{14}})_{2})(x_{1,2})|mh-x\rangle \end{pmatrix}\right|\nonumber\\ &\leq(|\nabla((w\ast_{2}\check{E}_{I_{14}})_{1})(x_{1,1})| +|\nabla((w\ast_{2}\check{E}_{I_{14}})_{2})(x_{1,2})|)\sqrt{2}h\nonumber\\ &\leq \underbrace{(\|\nabla((w\ast_{2}\check{E}_{I_{14}})_{1})\|_{K} +\|\nabla((w\ast_{2}\check{E}_{I_{14}})_{2})\|_{K})}_{=:C_{3}<\infty}\sqrt{2}h \end{flalign} where we used $x_{1,i}\in Q_{m}$, $m\in M_{N,h}$, in the last inequality. Due to \eqref{lem4.16}, \eqref{lem4.17} and \eqref{lem4.18} we gain \begin{flalign*} &\hspace{0.37cm}\bigl|\sum_{m\in M_{N,h}}(w\ast_{2}\check{E}_{I_{14}})(mh)\psi(mh)h^{2} -\int_{\R^2}{(w\ast_{2}\check{E}_{I_{14}})(x)\psi(x)\differential x}\bigr|\nonumber\\ &\leq \sum_{m\in M_{N,h}}(C_{3}\sqrt{2}h\|\psi\|_{0}+4\sqrt{2}h\|\psi\|_{1}\|w\ast_{2}\check{E}_{I_{14}}\|_{K})h^{2}\\ &\underset{\mathclap{\eqref{lem4.8}}}{\leq}(C_{3}\sqrt{2}\|\psi\|_{0}+4\sqrt{2}\|\psi\|_{1}\| w\ast_{2}\check{E}_{I_{14}}\|_{K})(\operatorname{diam}(N)+h)^{2}h \underset{h\to 0}{\to} 0. \end{flalign*} $(iii)$ Merging $(i)$ and $(ii)$, we get for $\psi\in\mathcal{C}^{\infty}_{c}(N)$ \begin{align*} \langle w \ast_{1} T_{\check{E}_{I_{14}}}, \psi\rangle &=\langle w , (T_{E_{I_{14}}}\ast\psi)_{\mid\Omega_{n}}\rangle \underset{(i)}{=}\lim_{h\to 0}\langle w , S_{h}(\psi)_{\mid\Omega_{n}}\rangle\\ &=\lim_{h\to 0}\langle w , \sum_{m\in M_{N,h}}{E_{I_{14}}(\cdot-mh)_{\mid\Omega_{n}}\psi(mh)h^2}\rangle\\ &\underset{\mathclap{\eqref{lem4.8}}}{=}\; \lim_{h\to 0}\sum_{m\in M_{N,h}} \underbrace{\langle w ,E_{I_{14}}(\cdot-mh)_{\mid\Omega_{n}}\rangle}_{ =(w\ast_{2}\check{E}_{I_{14}})(mh)}\psi(mh)h^2\\ &\underset{\mathclap{(ii)}}{=}\;\int_{\R^2}{(w\ast_{2}\check{E}_{I_{14}})(x)\psi(x)\differential x} =\langle T_{w\ast_{2}\check{E}_{I_{14}}},\psi\rangle. \end{align*} $d)$ $w\ast_{\varphi} T_{\check{E}_{I_{14}}}$ is defined by \prettyref{lem:prepare_convolution} d). 
Because $w$ is continuous, there exist $C_{4}>0$ and $m\in\N_{0}$ such that \begin{align*} |\langle w\ast_{\varphi} T_{\check{E}_{I_{14}}},f \rangle| &=|\langle w, [T_{E_{I_{14}}}\ast(\varphi f)]_{\mid\Omega_{n}}\rangle| \leq C_{4}|T_{E_{I_{14}}}\ast(\varphi f)|_{n,m}\\ &\underset{\mathclap{\eqref{lem3.3},\, k=I_{4},\,p=n}}{\leq}\;C_{4}A_{5}|f|_{I_{4},m}. \end{align*} \end{proof} \begin{lem}\label{lem:dense_component} Let $n\in\N$, $w\in (\pi_{I_{14},n}(\mathcal{E}\nu_{I_{14}}(\Omega_{I_{14}})), (|\cdot|_{n,m})_{m\in\N_{0}})'$ and $(WR)$ be fulfilled. If $w_{\mid\pi_{I_{214},n}(\mathcal{E}\nu_{I_{214},\overline{\partial}}(\Omega_{I_{214}}))}=0$, then $\operatorname{supp}(w \ast_{1} T_{\check{E}_{I_{14}}})\subset\overline{\Omega}_{n}$ where the support is meant in the distributional sense. \end{lem} \begin{proof} $(i)$ Let $\psi\in\mathcal{C}^{\infty}_{c}(N)$ where $N\subset \R^{2}$ is compact. The set $K:=\overline{\Omega}_{I_{14}}\cap N$ is compactly contained in $\Omega$ and we have for $m\in\N_{0}$ \begin{align*} |\psi|_{I_{14},m} &=\sup_{\substack{x\in \Omega_{I_{14}}\\ \beta\in\N^{2}_{0}, |\beta|\leq m}} |\partial^{\beta}\psi(x)|\nu_{I_{14}}(x) \leq \|\nu_{I_{14}}\|_{K}\sup_{\substack{x\in \R^{2}\\ \beta\in\N^{2}_{0},|\beta|\leq m}}|\partial^{\beta}\psi(x)|\\ &=\|\nu_{I_{14}}\|_{K}\|\psi\|_{m}<\infty, \end{align*} hence $\psi_{\mid\Omega_{I_{14}}}\in\mathcal{E}\nu_{I_{14}}(\Omega_{I_{14}})$. Now, we define \[ w_{0}\colon \mathcal{D}(\R^{2})\to\C,\; w_{0}(\psi):=w(\pi_{I_{14},n}(\psi_{\mid\Omega_{I_{14}}}))=w(\psi_{\mid\Omega_{n}}). \] Then we obtain by the assumptions on $w$ that there exist $m\in\N_{0}$ and $C>0$ such that \[ |w_{0}(\psi)| =|w(\psi_{\mid\Omega_{n}})| \leq C |\psi|_{n,m} \leq C |\psi|_{I_{14},m} \leq C \|\nu_{I_{14}}\|_{K}\|\psi\|_{m}, \] for all $\psi\in\mathcal{C}^{\infty}_{c}(N)$ and therefore $w_{0}\in\mathcal{D}'(\R^{2})$ as well as $\operatorname{supp} w_{0}\subset \overline{\Omega}_{n}$. $(ii)$ Let $\psi\in\mathcal{D}(\R^{2})$. Then we get \begin{align*} \langle \overline{\partial}(w\ast_{1}T_{\check{E}_{I_{14}}}), \psi\rangle &\;\underset{\mathclap{\ref{lem:convolution}\,a)}}{=}\;\langle w\ast_{1}T_{\check{E}_{I_{14}}}, -\overline{\partial}\psi\rangle =-\langle w, (T_{E_{I_{14}}}\ast\overline{\partial}\psi)_{\mid\Omega_{n}}\rangle\\ &\;=-\langle w, (\overline{\partial}T_{E_{I_{14}}}\ast\psi)_{\mid\Omega_{n}}\rangle \underset{\ref{lem:prepare_convolution}\,a)}{=}-\langle w, (\delta\ast\psi)_{\mid\Omega_{n}}\rangle\\ &\;=-\langle w, \psi_{\mid\Omega_{n}}\rangle\underset{(i)}{=}-\langle w_{0},\psi\rangle, \end{align*} thus $\overline{\partial}(w\ast_{1}T_{\check{E}_{I_{14}}})=-w_{0}$ and so $\overline{\partial}(w\ast_{1}T_{\check{E}_{I_{14}}})=0$ on $\mathcal{D}(\R^{2}\setminus\operatorname{supp} w_{0})$ due to \cite[Theorem 2.2.1, p.\ 41]{H1}. Hence, by virtue of the ellipticity of the $\overline{\partial}$-operator, there exists $u\in\mathcal{O}(\C\setminus\operatorname{supp} w_{0})$ such that $T_{u}=w\ast_{1}T_{\check{E}_{I_{14}}}$ (see \cite[Theorem 11.1.1, p.\ 61]{H2}). By $(i)$ we have $\operatorname{supp} w_{0}\subset \overline{\Omega}_{n}$ and therefore we get $X_{I_{214}}\subset(\operatorname{supp} w_{0})^{C}$ and thus $\mathcal{D}(X_{I_{214}})\subset\mathcal{D}((\operatorname{supp} w_{0})^{C})$.
It follows by \prettyref{lem:convolution} c) that \[ T_{u}=w\ast_{1}T_{\check{E}_{I_{14}}}=T_{w\ast_{2}\check{E}_{I_{14}}} \] on $\mathcal{D}(X_{I_{214}})$, implying $u=w\ast_{2}\check{E}_{I_{14}}$ on $X_{I_{214}}$ by \prettyref{lem:convolution} b). This means we have for every $x\in X_{I_{214}}$ and $\alpha\in\N^{2}_{0}$ \begin{align*} u^{(|\alpha|)}(x) &=(w\ast_{2}\check{E}_{I_{14}})^{(|\alpha|)}(x) \underset{\eqref{lem1}}{=}i^{-\alpha_{2}}\partial^{\alpha}(w\ast_{2}\check{E}_{I_{14}})(x)\\ &\underset{\mathclap{\eqref{lem4.1}}}{=}i^{-\alpha_{2}}\langle w, \partial^{\alpha}_{x} [E_{I_{14}}(\cdot-x)]_{\mid\Omega_{n}}\rangle \underset{\ref{lem:prepare_convolution}\,b)}{=} 0 \end{align*} by the assumptions on $w$. Hence $u=0$ in every component $N$ of $(\operatorname{supp} w_{0})^{C}$ with $N\cap X_{I_{214}}\neq\varnothing$ by the identity theorem. Denote by $N_{i}$, $i\in I$, the components of $(\operatorname{supp} w_{0})^{C}$ and let $I_{0}:=\{i\in I\;| \; N_{i}\cap \overline{\Omega}_n^{C}\neq\varnothing\}$. Due to $(WR.3)$ with $M:=\operatorname{supp} w_{0}$ we get $u=0$ on \[ \bigcup_{i\in I_{0}}{N_{i}}\supset\bigl(\bigcup\limits_{i\in I_{0}}{N_{i}}\bigr)\cap\overline{\Omega}_{n}^{C} =\bigl(\bigcup_{i\in I}{N_{i}}\bigr)\cap\overline{\Omega}_{n}^{C} =(\operatorname{supp} w_{0})^{C}\cap\overline{\Omega}_{n}^{C}=\overline{\Omega}_{n}^{C}. \] Since $T_{u}=w\ast_{1}T_{\check{E}_{I_{14}}}$ on $\mathcal{D}((\operatorname{supp} w_{0})^{C})$, we conclude $\operatorname{supp}({w \ast_{1} T_{\check{E}_{I_{14}}}})\subset\overline{\Omega}_{n}$. \end{proof} Now, we are finally able to prove that $(WR)$ implies very weak reducibility. \begin{proof}[\textbf{Proof of \prettyref{thm:dense_proj_lim} }] Set $G:=(\pi_{I_{14},n}(\mathcal{E}\nu_{I_{14},\overline{\partial}}(\Omega_{I_{14}})),(|\cdot|_{n,m})_{m\in\N_{0}})$ and $F:=\pi_{I_{214},n}(\mathcal{E}\nu_{I_{214},\overline{\partial}}(\Omega_{I_{214}}))\subset G$. Further, let $\widetilde{w}\in F^{\circ}:=\{y\in G'\;|\;\forall\;f\in F:\;y(f)=0\}$. The space $H:=(\pi_{I_{14},n}(\mathcal{E}\nu_{I_{14}}(\Omega_{I_{14}})),(|\cdot|_{n,m})_{m\in\N_{0}})$ is a locally convex Hausdorff space and by the Hahn-Banach theorem there exists $w\in H'$ such that $w_{\mid G}=\widetilde{w}$. Let $f\in\mathcal{E}\nu_{I_{14},\overline{\partial}}(\Omega_{I_{14}})$ and $\varphi$ as in \prettyref{lem:prepare_convolution} d). By \prettyref{lem:dense_comp_supp} there exists a sequence $(\psi_{l})_{l\in\N}$ in $\mathcal{C}^{\infty}_{c}(\Omega_{I_{14}})$ which converges to $f$ with respect to $(|\cdot|_{I_{4},m})_{m\in\N_{0}}$ and thus $(\overline{\partial}\psi_{l})_{l\in\N}$ to $\overline{\partial}f=0$ as well since \[ \overline{\partial}\colon \mathcal{E}\nu_{I_{4}}(\Omega_{I_{4}})\to\mathcal{E}\nu_{I_{4}}(\Omega_{I_{4}}) \] is continuous.
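Indeed, $|\overline{\partial}g|_{I_{4},m}\leq\tfrac{1}{2}\bigl(|\partial_{1}g|_{I_{4},m}+|\partial_{2}g|_{I_{4},m}\bigr)\leq|g|_{I_{4},m+1}$ for all $g\in\mathcal{E}\nu_{I_{4}}(\Omega_{I_{4}})$ and $m\in\N_{0}$.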
Therefore we obtain \begin{align*} \langle \widetilde{w},\pi_{I_{14},n}(f)\rangle &=\langle \widetilde{w},f_{\mid\Omega_{n}}\rangle =\langle w,f_{\mid\Omega_{n}}\rangle \underset{n<I_{4}}{=} \lim_{l\to\infty}\langle w,{\psi_{l}}_{\mid\Omega_{n}}\rangle =\lim_{l\to\infty}\langle w,(\differentialelta\ast\psi_{l})_{\mid\Omega_{n}}\rangle\\ &\underset{\mathclap{\ref{lem:prepare_convolution}\,a)}}{=}\;\; \lim_{l\to\infty}\langle w,(T_{E_{I_{14}}}\ast\overline{\partial}\psi_{l})_{\mid\Omega_{n}}\rangle =\lim_{l\to\infty}\langle w\ast_{1} T_{\check{E}_{I_{14}}},\overline{\partial}\psi_{l}\rangle\\ &\underset{\mathclap{\ref{lem:dense_component}}}{=}\; \lim_{l\to\infty}\langle w\ast_{1} T_{\check{E}_{I_{14}}},\varphi\overline{\partial}\psi_{l}\rangle =\lim_{l\to\infty}\langle w ,(T_{E_{I_{14}}}\ast\varphi\overline{\partial}\psi_{l})_{\mid\Omega_{n}}\rangle\\ &=\lim_{l\to\infty}\langle w\ast_{\varphi} T_{\check{E}_{I_{14}}},\overline{\partial}\psi_{l}\rangle \underset{\eqref{lem4.0.1}}{=}\langle w\ast_{\varphi} T_{\check{E}_{I_{14}}},\overline{\partial}f\rangle =0, \end{align*} so $\widetilde{w}=0$, yielding the statement due to the bipolar theorem. In particular, it follows from the choice $\iota_{1}(n):=I_{14}(n)$ and $\iota_{2}(n):=I_{214}(n)$ that $\mathcal{EV}_{\overline{\partial}}(\Omega)$ is very weakly reduced. \end{proof} The results obtained so far give rise to the following corollary of our main result. \begin{cor}\label{cor:CR_surjective_WR} Let $(PN)$ with $\psi_{n}(z):=(1+|z|^{2})^{-2}$, $z\in\Omega$, and $(WR)$ with $I_{214}(n)\geq I_{14}(n+1)$ be fulfilled and $-\ln\nu_{n}$ be subharmonic on $\Omega$ for every $n\in\N$. If $E$ is a Fr\'echet space over $\C$, then \[ \overline{\partial}^{E}\colon \mathcal{EV}(\Omega,E)\to\mathcal{EV}(\Omega,E) \] is surjective. \end{cor} \begin{proof} It follows from \prettyref{thm:dense_proj_lim} that $\mathcal{EV}_{\overline{\partial}}(\Omega)$ is very weakly reduced with $\iota_{1}(n):=I_{14}(n)$ and $\iota_{2}(n):=I_{214}(n)$ for $n\in\N$. Thus \prettyref{cor:frechet_CR_surjective} yields our statement. \end{proof} \begin{exa}\label{ex:families_of_weights_2} Let $\Omega\subset\C$ be a non-empty open set and $(\Omega_{n})_{n\in\N}$ a family of open sets such that \begin{enumerate} \item [(i)] $\Omega_{n}:=\{z\in \Omega\;|\;|\im(z)|<n\;\text{and}\; \differential^{|\cdot|}(\{z\},\partial \Omega)>1/n \}$ for all $n\in\N$. \item [(ii)] $\Omega_{n}:=\mathring K_{n}$ for all $n\in\N$ where $K_{n}:=\overline{\mathbb{B}_{n}(0)}\cap\{z\in\Omega\;|\; \differential^{|\cdot|}(\{z\},\partial\Omega)\geq 1/n\}$. \end{enumerate} The following families $\mathcal{V}:=(\nu_{n})_{n\in\N}$ of continuous weight functions fulfil the assumptions of \prettyref{cor:CR_surjective_WR}: \begin{enumerate} \item [a)] Let $(a_{n})_{n\in\N}$ be strictly increasing such that $a_{n}\leq 0$ for all $n\in\N$ and \[ \nu_{n}\colon\Omega\to (0,\infty),\;\nu_{n}(z):=e^{a_{n}|z|^{\gamma}}, \] for some $0<\gamma\leq 1$ with $(\Omega_{n})_{n\in\N}$ from (i). \item [b)] $\nu_{n}(z):=1$, $z\in\Omega$, with $(\Omega_{n})_{n\in\N}$ from (ii). \end{enumerate} \end{exa} \begin{proof} For each family $(\Omega_{n})_{n\in\N}$ in (i) and (ii) it holds that $\Omega_{n}\neq\C$ and there is $N\in\N_{0}$ such that $\Omega_{n}\neq\varnothing$ for all $n\geq N$. Hence we assume w.l.o.g.\ that $\Omega_{n}\neq \varnothing$ for every $n\in\N$ in what follows. In all the examples $(PN)$ is fulfilled for $\psi_{n}(z):=(1+|z|^{2})^{-2}$ by \prettyref{ex:families_of_weights_1} for all $q\in\N$. 
Further, we choose $I_{j}(n):=2n$ for $j=1,2,4$ and define the open set $X_{I_{2}(n)}:=\overline{\Omega}_{4n}^{C}$. Then we have \[ I_{214}(n)=8n\geq 4n+4=I_{14}(n+1),\quad n\in\N. \] The function $-\ln \nu_{n}$ is subharmonic on $\Omega$ for the considered weights by \cite[Corollary 1.6.6, p.\ 18]{H3} and \cite[Theorem 1.6.7, p.\ 18]{H3} since the function $z\mapsto z$ is holomorphic and $-a_{n}\geq 0$. Furthermore, we have $\differential_{n,k}=|1/n-1/k|$ if $\partial \Omega\neq\varnothing$ and $\differential_{n,k}=|n-k|$ if $\Omega=\C$ in (i) as well as $\differential_{n,k}\geq |1/n-1/k|$ if $\partial \Omega\neq\varnothing$ and $\differential_{n,k}=|n-k|$ if $\Omega=\C$ in (ii). \begin{enumerate} \item [a)] $(WR.1a)$: The choice $K:=\overline{\Omega}_{n}$, if $\Omega_{n}$ is bounded, and \[ K:=\overline{\Omega}_{n}\cap\{z\in\C\;|\; |\re(z)|\leq \max(0,\ln(\varepsilon)/(a_{n}-a_{2n}))^{1/\gamma}+n\}, \] if $\Omega_{n}$ is unbounded, guarantees that this condition is fulfilled. $(WR.1b)$: We have $\differential_{X,I_{2}}=1/(2n)$ if $\partial\Omega\neq\varnothing$ and $\differential_{X,I_{2}}=2n$ if $\Omega=\C$ for $(\Omega_{n})_{n\in\N}$ from (i). We choose $g_{n}\colon\C\to\C$, $g_{n}(z):=\exp(-z^2)$, as well as $r_{n}:=1/(4n)$ and $R_{n}:=1/(6n)$ for $n\in\N$. Let $z\in\Omega_{I_{2}(n)}$ and $x\in X_{I_{2}(n)}+\mathbb{B}_{R_{n}}(0)$. For $\zeta=\zeta_{1}+i\zeta_{2}\in\C$ with $|\zeta-(z-x)|=r_{n}$ we have \begin{align*} |g_{n}(\zeta)|\nu_{I_{2}(n)}(z)&=e^{-\re(\zeta^{2})}e^{a_{2n}|z|^{\gamma}}\leq e^{-\zeta_{1}^{2}+\zeta_{2}^{2}} \leq e^{(r_{n}+|z_{2}|+|x_{2}|)^{2}}e^{-\zeta_{1}^{2}}\\ &\leq e^{(r_{n}+2n+|x_{2}|)^{2}}=:A_{2}(x,n) \end{align*} and observe that $A_{2}(\cdot,n)$ is continuous and thus locally bounded on $X_{I_{2}(n)}$. $(WR.1c)$: Let $K\subset\C$ be compact and $x=x_{1}+ix_{2}\in\Omega_{n}$. Then there is $b>0$ such that $|y|\leq b$ for all $y=y_{1}+iy_{2}\in K$ and from polar coordinates and Fubini's theorem follows that \begin{flalign*} &\hspace{0.37cm}\int_{K}\frac{|g_{n}(x-y)|}{|x-y|}\differential y\\ &=\int_{K}\frac{e^{-\re((x-y)^{2})}}{|x-y|}\differential y \leq \int_{\mathbb{B}_{1}(x)}\frac{e^{-\re((x-y)^{2})}}{|x-y|}\differential y +\int_{K\setminus \mathbb{B}_{1}(x)}\frac{e^{-\re((x-y)^{2})}}{|x-y|}\differential y\\ &\leq \int_{0}^{2\pi}\int_{0}^{1}\frac{e^{-r^{2}\cos(2\varphi)}}{r}r\differential r\differential\varphi + \int_{K\setminus \mathbb{B}_{1}(x)}e^{-\re((x-y)^{2})}\differential y\\ &\leq 2\pi e+\int_{-b}^{b}e^{(x_{2}-y_{2})^{2}}\differential y_{2}\int_{\R}e^{-(x_{1}-y_{1})^{2}}\differential y_{1} \leq 2\pi e+2be^{(|x_{2}|+b)^{2}}\int_{\R}e^{-y_{1}^{2}}\differential y_{1}\\ &=2\pi e+2\sqrt{\pi}be^{(|x_{2}|+b)^{2}} \leq 2\pi e+2\sqrt{\pi}be^{(n+b)^{2}}. \end{flalign*} We conclude that $(WR.1c)$ holds since $\nu_{n}\leq 1$. $(WR.2)$: Let $p,k\in\N$ with $p\leq k$. For all $x=x_{1}+ix_{2}\in\Omega_{p}$ and $y=y_{1}+iy_{2}\in\Omega_{I_{4}(n)}$ we note that \begin{align*} a_{p}|x|^{\gamma}-a_{k}|y|^{\gamma} &\leq -a_{k}|y-x|^{\gamma}= -a_{k}|x-y|^{\gamma}\leq -a_{k}(|x_{1}-y_{1}|+|x_{2}-y_{2}|)^{\gamma}\\ &\leq -a_{k}(1+|x_{1}-y_{1}|+|x_{2}-y_{2}|) \end{align*} because $(a_{n})_{n\in\N}$ is non-positive and increasing and $0<\gamma\leq 1$. 
We deduce that \begin{flalign*} &\hspace{0.37cm}\int_{\Omega_{I_{4}(n)}}\frac{|g_{n}(x-y)|\nu_{p}(x)}{|x-y|\nu_{k}(y)}\differential y\\ &=\int_{\Omega_{2n}}\frac{e^{-\re((x-y)^{2})}}{|x-y|}e^{a_{p}|x|^{\gamma}-a_{k}|y|^{\gamma}}\differential y \leq \int_{\Omega_{2n}}\frac{e^{-\re((x-y)^{2})}}{|x-y|}e^{-a_{k}|x-y|^{\gamma}}\differential y\\ &\leq \int_{0}^{2\pi}\int_{0}^{1}\frac{e^{-r^{2}\cos(2\varphi)}}{r}e^{-a_{k}r^{\gamma}}r\differential r\differential\varphi + \int_{\Omega_{2n}\setminus \mathbb{B}_{1}(x)}e^{-\re((x-y)^{2})}e^{-a_{k}|x-y|^{\gamma}}\differential y\\ &\leq 2\pi e^{1-a_{k}}+e^{-a_{k}}\int_{-2n}^{2n}e^{(x_{2}-y_{2})^{2}-a_{k}|x_{2}-y_{2}|}\differential y_{2} \int_{\R}e^{-(x_{1}-y_{1})^{2}-a_{k}|x_{1}-y_{1}|}\differential y_{1}\\ &\leq 2\pi e^{1-a_{k}}+4ne^{-a_{k}+(|x_{2}|+2n)^{2}-a_{k}(|x_{2}|+2n)} \int_{\R}e^{-y_{1}^{2}-a_{k}|y_{1}|}\differential y_{1}\\ &=2\pi e^{1-a_{k}}+4ne^{-a_{k}+(|x_{2}|+2n)^{2}-a_{k}(|x_{2}|+2n)} \int_{\R}e^{-(|y_{1}|+a_{k}/2)^{2}+a_{k}^{2}/4}\differential y_{1}\\ &=2\pi e^{1-a_{k}}+8ne^{-a_{k}+(|x_{2}|+2n)^{2}-a_{k}(|x_{2}|+2n)+a_{k}^{2}/4} \int_{a_{k}/2}^{\infty}e^{-y_{1}^{2}}\differential y_{1}\\ &\leq 2\pi e^{1-a_{k}}+8\sqrt{\pi}ne^{-a_{k}+(|x_{2}|+2n)^{2}-a_{k}(|x_{2}|+2n)+a_{k}^{2}/4}\\ &\leq 2\pi e^{1-a_{k}}+8\sqrt{\pi}ne^{-a_{k}+(p+2n)^{2}-a_{k}(p+2n)+a_{k}^{2}/4}\\ &\leq 2\pi e^{1-a_{I_{4}(n)}}+8\sqrt{\pi}ne^{-a_{I_{4}(n)}+(I_{14}(n)+2n)^{2}-a_{I_{4}(n)}(I_{14}(n)+2n)+a_{I_{4}(n)}^{2}/4} \end{flalign*} for $(k,p)=(I_{4}(n),n)$ and $(k,p)=(I_{14}(n),I_{14}(n))$ as $(-a_{n})_{n\in\N}$ is non-negative and decreasing. $(WR.3)$: Let $M\subset\overline{\Omega}_{n}$ be closed and $N$ a component of $M^{C}$ such that $N\cap\overline{\Omega}_{n}^{C}\neq\varnothing$. We claim that $N\cap X_{I_{214}(n)}=N\cap\overline{\Omega}_{16n}^{C}\neq\varnothing$. We note that $\overline{\Omega}_{16n}^{C}\subset\overline{\Omega}_{n}^{C}\subset M^{C}$ and \begin{align*} \overline{\Omega}_{k}^{C}&=\phantom{\cup}\{z\in \C\;|\;\im(z)>k\}\cup\{z\in \C\;|\;\im(z)<-k\} \\ &\phantom{=}\cup\{z\in\C\;|\;\differential^{|\cdot|}(\{z\},\partial \Omega)<1/k\} =:S_{1,k}\cup S_{2,k}\cup S_{3,k},\quad k\in\N. \end{align*} If there is $x\in N\cap\overline{\Omega}_{n}^{C}$ with $\im(x)>n$ or $\im(x)<-n$, then $S_{1,16n}\subset S_{1,n}\subset N$ or $S_{2,16n}\subset S_{2,n}\subset N$ since $S_{1,n}$ and $S_{2,n}$ are connected and $N$ a component of $M^{C}$. If there is $x\in N\cap\overline{\Omega}_{n}^{C}$ such that $x\in S_{3,n}$, then there is $y\in\partial\Omega$ with $x\in \mathbb{B}_{1/n}(y)\subset S_{3,n}$. This implies $\mathbb{B}_{1/(16n)}(y)\subset\mathbb{B}_{1/n}(y)\subset N$ as $\mathbb{B}_{1/n}(y)$ is connected and $N$ a component of $M^{C}$, proving our claim. \item [b)] $(WR.1a)$: The choice $K:=\overline{\Omega}_{n}$ guarantees that this condition is fulfilled. $(WR.1b)$: We have $\differential_{X,I_{2}}\geq 1/(2n)$ if $\partial\Omega\neq\varnothing$ and $\differential_{X,I_{2}}= 2n$ if $\Omega=\C$ for $(\Omega_{n})_{n\in\N}$ from (ii). We choose $g_{n}\colon\C\to\C$, $g_{n}(z):=1$, as well as $r_{n}:=1/(4n)$ and $R_{n}:=1/(6n)$ for $n\in\N$. Let $z\in\Omega_{I_{2}(n)}$ and $x\in X_{I_{2}(n)}+\mathbb{B}_{R_{n}}(0)$. For $\zeta=\zeta_{1}+i\zeta_{2}\in\C$ with $|\zeta-(z-x)|=r_{n}$ we have $|g_{n}(\zeta)|\nu_{I_{2}(n)}(z)=1=:A_{2}(x,n)$. $(WR.1c)$: Let $K\subset\C$ be compact and $x=x_{1}+ix_{2}\in\Omega_{n}$. 
Again, it follows from polar coordinates and Fubini's theorem that \begin{flalign*} &\hspace{0.37cm}\int_{K}\frac{|g_{n}(x-y)|}{|x-y|}\differential y\\ &=\int_{K}\frac{1}{|x-y|}\differential y \leq \int_{\mathbb{B}_{1}(x)}\frac{1}{|x-y|}\differential y +\int_{K\setminus \mathbb{B}_{1}(x)}\frac{1}{|x-y|}\differential y\\ &\leq \int_{0}^{2\pi}\int_{0}^{1}\frac{1}{r}r\differential r\differential\varphi + \int_{K\setminus \mathbb{B}_{1}(x)}1\differential y\\ &\leq 2\pi +\lambda(K), \end{flalign*} yielding $(WR.1c)$ because $\nu_{n}=1$. $(WR.2)$: Follows from $(WR.1c)$. $(WR.3)$: Let $M\subset\overline{\Omega}_{n}$ be closed and $N$ a component of $M^{C}$ such that $N\cap\overline{\Omega}_{n}^{C}\neq\varnothing$. We claim that $N\cap X_{I_{214}(n)}=N\cap\overline{\Omega}_{16n}^{C}\neq\varnothing$. We note that $\overline{\Omega}_{16n}^{C}\subset\overline{\Omega}_{n}^{C}\subset M^{C}$ and \begin{align*} \overline{\Omega}_{k}^{C}&=\{z\in \C\;|\;|z|>k\}\cup\{z\in\C\;|\;\differential^{|\cdot|}(\{z\},\partial \Omega)<1/k\}\\ &=:S_{1,k}\cup S_{2,k},\quad k\in\N. \end{align*} If there is $x\in N\cap\overline{\Omega}_{n}^{C}$ with $|x|>n$, then $S_{1,16n}\subset S_{1,n}\subset N$ since $S_{1,n}$ is connected and $N$ a component of $M^{C}$. If there is $x\in N\cap\overline{\Omega}_{n}^{C}$ such that $x\in S_{2,n}$, then there is $y\in\partial\Omega$ with $x\in \mathbb{B}_{1/n}(y)\subset S_{2,n}$. This implies $\mathbb{B}_{1/(16n)}(y)\subset\mathbb{B}_{1/n}(y)\subset N$ as $\mathbb{B}_{1/n}(y)$ is connected and $N$ a component of $M^{C}$, proving our claim. \end{enumerate} \end{proof} Due to \prettyref{ex:families_of_weights_2} b) we get \cite[Theorem 1.4.4, p.\ 12]{H3} back. For certain non-metrisable spaces $E$ the surjectivity of the Cauchy-Riemann operator in \prettyref{ex:families_of_weights_2} a) for $a_{n}=-1/n$, $n\in\N$, $\partial\Omega\subset \R$ and $\gamma=1$ is proved in \cite[5.24 Theorem, p.\ 95]{ich} by using the splitting theory of Vogt \cite{V1} and of Bonet and Doma\'nski \cite{Dom1} and that $\mathcal{EV}_{\overline{\partial}}(\Omega)$ has property $(\Omega)$ (see \cite[Definition, p.\ 367]{meisevogt1997}) in this case by \cite[5.20 Theorem, p.\ 84]{ich} and \cite[5.22 Theorem, p.\ 92]{ich}. This is generalised in \cite{kruse2019_1}. \end{document}
\begin{document} \title{An inexact version of the symmetric proximal ADMM for solving separable convex optimization} \author{Vando A. Adona \thanks{IME, Universidade Federal de Goias, Campus II- Caixa Postal 131, CEP 74001-970, Goi\^ania-GO, Brazil. (E-mails: {\tt [email protected]} and {\tt [email protected]}). The work of these authors was supported in part by FAPEG/GO Grant PRONEM-201710267000532, and CNPq Grants 302666/2017-6 and 408123/2018-4.} \and Max L.N. Gon\c calves \footnotemark[1] } \maketitle \begin{abstract} In this paper, we propose and analyze an inexact version of the symmetric proximal alternating direction method of multipliers (ADMM) for solving linearly constrained optimization problems. Basically, the method allows its first subproblem to be solved inexactly in such a way that a relative approximation criterion is satisfied. In terms of the iteration number $k$, we establish global $\mathcal{O} (1/ \sqrt{k})$ pointwise and $\mathcal{O} (1/ {k})$ ergodic convergence rates of the method for a domain of the acceleration parameters which is consistent with the largest known one in the exact case. Since the symmetric proximal ADMM can be seen as a class of ADMM variants, the new algorithm as well as its convergence rates generalize, in particular, many others in the literature. Numerical experiments illustrating the practical advantages of the method are reported. To the best of our knowledge, this work is the first one to study an inexact version of the symmetric proximal ADMM. \\ \\ \textbf{Key words:} Symmetric alternating direction method of multipliers, convex program, relative error criterion, pointwise iteration-complexity, ergodic iteration-complexity. \\[2mm] \textbf{AMS Subject Classification:} 47H05, 49M27, 90C25, 90C60, 65K10. \end{abstract} \pagestyle{plain} \section{Introduction}\label{sec:int} Throughout this paper, $\mathbb{R}$, $\mathbb{R}^n$ and $\mathbb{R}^{n\times p}$ denote, respectively, the set of real numbers, the set of $n$-dimensional real column vectors and the set of ${n\times p}$ real matrices. For any vectors $x, y \in \mathbb{R}^n,$ $\inner{x}{y}$ stands for their inner product and $\|x\|\coloneqq \sqrt{\inner{x}{x}}$ stands for the Euclidean norm of $x$. The space of symmetric positive semidefinite (resp. definite) matrices on $\mathbb{R}^{n\times n}$ is denoted by $\mathbb{S}^{n}_{+}$ (resp. $\mathbb{S}^{ n}_{++}$). Each element $Q\in \mathbb{S}^{ n}_{+}$ induces a symmetric bilinear form $\inner{Q(\cdot)}{\cdot}$ on $\mathbb{R}^n \times \mathbb{R}^n$ and a seminorm $\|\cdot\|_{Q}:=\sqrt{\inner{Q(\cdot)}{\cdot}}$ on $\mathbb{R}^n$. The trace and determinant of a matrix $P$ are denoted by $\mbox{Tr}(P)$ and $\det(P)$, respectively. We use $I$ and ${\bf 0}$ to denote, respectively, the identity matrix and the zero matrix, whose dimensions will be clear from the context. Consider the separable linearly constrained optimization problem \begin{equation} \label{optl} \min \{ f(x) + g(y): A x + B y =b\}, \end{equation} where {$f\colon\mathbb{R}^{n} \to (-\infty, \infty]$ and $g\colon\mathbb{R}^{p} \to (-\infty, \infty]$ are proper, closed and convex functions,} $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{m\times p}$, and $b \in\mathbb{R}^{m}$. Convex optimization problems with a separable structure such as \eqref{optl} appear in many application areas such as machine learning, compressive sensing and image processing.
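For instance, the $\ell_{1}$-regularized least-squares (LASSO) problem, which we mention only to illustrate the format of \eqref{optl},
\[
\min\left\{ \frac{1}{2}\|Cx-d\|^{2} + \mu\|y\|_{1}\;:\; x - y = 0\right\},
\]
where $C$, $d$ and $\mu>0$ are given data, fits \eqref{optl} with $f(x)=\frac{1}{2}\|Cx-d\|^{2}$, $g(y)=\mu\|y\|_{1}$, $A=I$, $B=-I$ and $b=0$.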
The augmented Lagrangian method (see, e.g., \cite{Ber1}) attempts to solve~\eqref{optl} directly without taking into account its particular structure. To overcome this drawback, a variant of the augmented Lagrangian method, namely, the alternating direction method of multipliers (ADMM), was proposed and studied in \cite{0352.65034,0368.65053}. The ADMM takes full advantage of the special structure of the problem by considering each variable separately in an alternating fashion and coupling them through the Lagrange multiplier update; for detailed reviews, see \cite{Boyd:2011,glowinski1984}. More recently, a symmetric version of the ADMM was proposed in \cite{HeMaYuan2016sym} and since then it has been studied by many authors (see, for example, \cite{Bai2018sym, Chang2019251, GaoMa2018sym, Shen2020part, Sun2017sym, Wu2019LQP, Wu2017, Wu2018708}). The method with proximal terms added (referred to here as the symmetric proximal ADMM) is described as follows: let an initial point $(x_0, y_0,\gamma_0) \in \mathbb{R}^{n}\times\mathbb{R}^{p}\times \mathbb{R}^{m}$, a penalty parameter $\beta>0$, two acceleration parameters $\tau$ and $\theta$, and two proximal matrices $G\in \mathbb{S}^{ n}_{+}$ and $H\in \mathbb{S}^{ p}_{+}$ be given; for $k=1,2,\ldots$ do { \begin{subequations} \label{eq:secpro} \begin{align} &x_{k}\in \arg\min_{x \in \mathbb{R}^{n}} \left \{ f(x) - \inner{ {\gamma}_{k-1}}{Ax} +\frac{\beta}{2} \|Ax+By_{k-1}-b \|^2 +\frac{1}{2} \|x-x_{k-1}\|_G^2\right\}, \label{eq:xprob}\\ &\gamma_{k-\frac{1}{2}} := \gamma_{k-1}- \tau\beta\left(A{x}_{k}+B y_{k-1} - b\right), \\ \label{eq:yprob} &y_{k}\in\arg\min_{y \in \mathbb{R}^{p}} \bigg\{ g(y) - \inner{ {\gamma}_{k-\frac{1}{2}}}{By} +\frac{\beta}{2} \| Ax_k+By-b \|^2 +\frac{1}{2}\|y- y_{k-1}\|_{H}^2\bigg\},\\ &\gamma_k:= \gamma_{k-\frac{1}{2}}- \theta\beta\left(A{x}_{k}+B y_{k} - b\right). \end{align} \end{subequations}} The symmetric proximal ADMM unifies several ADMM variants. For example, it reduces to: \begin{itemize} \item the standard ADMM when $G={\bf 0}$, $H={\bf 0}$, $\tau=0$ and $\theta=1$; \item the Fortin and Glowinski acceleration version of the proximal ADMM (for short FG-P-ADMM) when $\tau=0$; see \cite{FORTIN198397,GABAY1983299,MJR2}; \item the generalized proximal ADMM (for short G-P-ADMM) with the relaxation factor $\alpha:=\tau+1$ when $\theta=1$; see \cite{Adona2018, MR1168183}. The proof of the latter fact can be found, for example, in \cite[Remark 5.8]{HeMaYuan2016sym}; \item the strictly contractive Peaceman--Rachford splitting method studied in \cite{doi:10.1137/13090849X} when $\tau=\theta \in (0,1)$. It is worth pointing out that if $\tau=\theta=1$, $G={\bf 0}$ and $H={\bf 0}$, the symmetric proximal ADMM corresponds to the standard Peaceman--Rachford splitting method applied to the dual of~\eqref{optl}. \end{itemize} As has been observed by some authors (see, e.g., \cite{ Adona2018,Bai2018sym,HeMaYuan2016sym}), the use of suitable acceleration parameters $\tau$ and $\theta$ considerably improves the numerical performance of ADMM-type algorithms. We also mention that the proximal terms $ \|x-x_{k-1}\|_G^2/2$ and $\|y- y_{k-1}\|_{H}^2/2$ in subproblems \eqref{eq:xprob} and \eqref{eq:yprob}, respectively, can make them easier to solve or even admit closed-form solutions in some applications; see, e.g., \cite{attouch:hal, PADMM_Eckstein,He2015} for discussion.
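To make the recursion \eqref{eq:secpro} concrete, the following self-contained sketch (an illustration written for this presentation, not taken from the cited references) applies it to a toy instance of \eqref{optl} with $f(x)=\frac{1}{2}\|x-c\|^{2}$, $g(y)=\frac{1}{2}\|y-d\|^{2}$, $A=B=I$, and $G$ and $H$ scalar multiples of the identity; both subproblems then have closed-form minimizers, and at a solution of \eqref{optl} one has $x-c=\gamma$, $y-d=\gamma$ and $x+y=b$.
\begin{verbatim}
import numpy as np

# Toy instance: f(x) = 0.5*||x - c||^2, g(y) = 0.5*||y - d||^2, A = B = I,
# so the constraint reads x + y = b.  All data below are arbitrary choices.
rng = np.random.default_rng(0)
m = 5
c, d, b = rng.standard_normal(m), rng.standard_normal(m), rng.standard_normal(m)

beta, tau, theta = 1.0, 0.9, 0.9     # penalty and acceleration parameters
gG, hH = 0.1, 0.1                    # proximal matrices G = gG*I, H = hH*I
x, y, gam = np.zeros(m), np.zeros(m), np.zeros(m)

for _ in range(200):
    # x-subproblem: strongly convex quadratic, solved in closed form
    x = (c + gam + beta * (b - y) + gG * x) / (1.0 + beta + gG)
    # first multiplier update
    gam_half = gam - tau * beta * (x + y - b)
    # y-subproblem: strongly convex quadratic, solved in closed form
    y = (d + gam_half + beta * (b - x) + hH * y) / (1.0 + beta + hH)
    # second multiplier update
    gam = gam_half - theta * beta * (x + y - b)

# residuals of the Lagrangian (KKT) system; all three should be close to zero
print(np.linalg.norm(x - c - gam), np.linalg.norm(y - d - gam),
      np.linalg.norm(x + y - b))
\end{verbatim}
The pair $(\tau,\theta)=(0.9,0.9)$ lies in the convergence domain recalled in \eqref{def:reg} below; other admissible values can be tested by simply changing these two constants.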
In order to ensure the convergence of the symmetric ADMM in \cite{HeMaYuan2016sym}, the parameters $\tau$ and $\theta$ were restricted to the domain \[ \mathcal{D}:= \left\{\left(\tau,\theta\right)|\; \tau\in(-1,1),\; \theta \in (0,(1+\sqrt{5})/2), \; \tau+\theta>0,\; |\tau|<1+\theta-\theta^2\right\}. \] Later, in the multi-block symmetric ADMM setting, the authors of \cite{Bai2018sym} extended this convergence domain to \begin{equation}\label{def:reg} \mathcal{K}:= \left\{\left(\tau,\theta\right)| \; \tau\leq1,\; \tau+\theta>0,\; 1+\tau+\theta-\tau\theta-\tau^2-\theta^2>0\right\}, \end{equation} by using appropriate proximal terms and by assuming that the matrices associated with the respective multi-block problem have full column rank. Note that, if $\tau=0$ (resp. $\theta=1$), the convergence domains in the above regions are equivalent to the classical condition $\theta \in (0,(1+\sqrt{5})/2)$ (resp. $\tau\in(-1,1)$ or, in terms of the relaxation factor $\alpha$, $\alpha\in(0,2)$) in the FG-P-ADMM (resp. G-P-ADMM). We refer the reader to \cite{Adona2018, MJR2} for some complexity and numerical results of the G-P-ADMM and FG-P-ADMM. It is well known that implementing the ADMM may be expensive and difficult in some applications because its two subproblems have to be solved exactly. For applications in which one subproblem of the ADMM is significantly more challenging to solve than the other one, so that iterative methods must be used to solve it approximately, the papers \cite{adona2018partially} and \cite{adonaCOAP} proposed partially inexact versions of the FG-P-ADMM and G-P-ADMM, respectively, using relative error conditions. Essentially, the proposed schemes allow an inexact solution $\tilde{x}_k\in \mathbb{R}^n$ with residual $u_k\in \mathbb{R}^n$ of subproblem \eqref{eq:xprob} with $G=I/\beta$, i.e., \begin{equation}\label{cond:inex34} u_k \in \partial f(\tilde x_k) - A^*\tilde{\gamma}_{k}, \end{equation} such that the relative error condition \begin{equation}\label{cond:inex43} \|\tilde x_k-x_{k-1}+\beta u_k\|^2\leq\tilde \sigma\| \tilde{\gamma}_{k}-{\gamma}_{k-1}\|^2+\hat \sigma\|\tilde x_k-x_{k-1}\|^2, \end{equation} is satisfied, where \[ \tilde{\gamma}_{k}:={\gamma}_{k-1}-\beta(A\tilde x_k +B y_{k-1} - b), \quad x_k := x_{k-1}-\beta u_k, \] and $\tilde \sigma$ and $\hat \sigma$ are two error tolerance parameters. Recall that the {$\varepsilon$-subdifferential} of a convex function $h:\mathbb{R}^n\to \mathbb{R}$ is defined by \[ \partial_{\varepsilon}h(x):=\{u\in \mathbb{R}^n \,:\,h(\tilde{x})\geq h(x)+\inner{u}{\tilde{x}-x}-\varepsilon,\;\;\forall \,\tilde{x}\in \mathbb{R}^n\}, \quad\forall\, x\in \mathbb{R}^n. \] When $\varepsilon=0$, then $\partial_0 h(x)$ is denoted by $\partial h(x)$ and is called the {subdifferential} of $h$ at $x$. Note that the inclusion in \eqref{cond:inex34} is based on the first-order optimality condition for \eqref{eq:xprob}.
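Indeed, if $\tilde x_k$ is an exact solution of \eqref{eq:xprob} with $G=I/\beta$, then
\[
0\in \partial f(\tilde x_k)-A^{*}\left[\gamma_{k-1}-\beta(A\tilde x_k+By_{k-1}-b)\right]+\frac{1}{\beta}(\tilde x_k-x_{k-1})
=\partial f(\tilde x_k)-A^{*}\tilde{\gamma}_{k}+\frac{1}{\beta}(\tilde x_k-x_{k-1}),
\]
so that $u_k:=(x_{k-1}-\tilde x_k)/\beta$ satisfies \eqref{cond:inex34} and makes the left-hand side of \eqref{cond:inex43} vanish; in other words, the exact solution is always accepted by the relative error condition, for any $\tilde\sigma,\hat\sigma\geq 0$.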
For the inexact FG-P-ADMM in \cite{adona2018partially}, the domain of the acceleration factor $\theta$ was \begin{equation}{\langle}bel{cond:theta} \theta\in \left(0,\frac{1-2\tilde \sigma+\sqrt{(1-2\tilde \sigma)^2+4(1-\tilde \sigma)}}{2(1-\tilde \sigma)}{\mbox{\rm ri\,}}ght), \end{equation} whereas, for the inexact G-P-ADMM in \cite{adonaCOAP}, the domain of the acceleration factor $\tilde au$ was \begin{equation}{\langle}bel{eq:alpha} \tilde au\in(-1,1-\tilde\sigma) \; \mbox{ (or, in term of the relaxation factor $\alpha,$ $ \alpha\in(0,2-\tilde\sigma)$)}. \end{equation} If $\tilde \sigma=0$, then \eqref{cond:theta} and \eqref{eq:alpha} reduce, respectively, to the standard conditions $\theta \in (0,(1+\sqrt{5})/2)$ and $\tilde au\in(-1,1)$. Other inexact ADMMs with relative and/or absolute error condition were proposed in \cite{jeffCOAP, Eckstein2017App, Eckstein2017Relat, NgWangYuan, Xie2017}. It is worth pointing out that, as observed in \cite{Eckstein2017App}, approximation criteria based on relative error are more interesting from a computational viewpoint than those based on absolute error. Therefore, the main goal of this work is to present an inexact version of the symmetric proximal ADMM \eqref{eq:secpro} in which, similarly to \cite{adona2018partially,adonaCOAP}, the solution of the first subproblem can be computed in an approximate way such that a relative error condition is satisfied. From the theoretical viewpoint, the global $\mathcal{O} (1/ \sqrt{k})$ pointwise convergence rate is shown, which ensures, in particular, that for a given tolerance $\rho>0$, the algorithm generates a $\rho-$approximate solution $(\tilde x,y,\tilde{\gamma})$ with {residual} $(u,v,w)$ of the Lagrangian system associated to \eqref{optl}, i.e., \[ u\in\partial f(\tilde x)-A^{\ast}\tilde{\gamma}, \qquad v\in \partial g(y)-B^{\ast}\tilde{\gamma},\qquad w= A\tilde{x}+By-b, \] and \[ \max\{\|u\|, \|v\|, \|w\|\}\leq \rho,\] in at most $\mathcal{O}(1/\rho^2)$ iterations. The global $\mathcal{O} (1/ {k})$ ergodic convergence rate is also established, which implies, in particular, that a $\rho-$approximate solution $(\tilde x^a,y^a,\tilde{\gamma}^a)$ with {residuals} $(u^a,v^a,w^a)$ and $(\varepsilon^a,\zeta^a)$ of the Lagrangian system associated to \eqref{optl}, i.e., \[u^a\in \partial_{\varepsilon^a}f(\tilde{x}^a)- A^*\tilde{\gamma}^a,\quad v^a\in \partial_{{\zeta^{a}}}g(y^a)- B^*\tilde{\gamma}^a, \quad w^a=A\tilde{x}^a+By^a-b, \] and \[\max \{\|u^a\|,\|v^a\|,\|w^a\|,\varepsilon^a,{\zeta^{a}} \}\leq \rho,\] is obtained in at most $\mathcal{O}(1/\rho)$ iterations by means of the ergodic sequence. The analysis of the method is established without any assumptions on $A$ and $B$. Moreover, the new convergence domain of $\tilde au$ and $\theta$ reduces to \eqref{def:reg}, except for the case $\tilde au=1$ and $\theta\in (-1,1)$, in the exact setting (see Remark~\ref{remarkalg}(a)). From the applicability viewpoint, we report numerical experiments in order to illustrate the efficiency of the method for solving real-life applications. To the best of our knowledge, this work is the first one to study an inexact version of the symmetric proximal ADMM. The paper is organized as follows. Section~\ref{subsec:Admm1} presents the inexact symmetric proximal ADMM as well as its pointwise and ergodic convergence rates. Section \ref{sec:Numer} is devoted to the numerical study of the proposed method. Some concluding remarks are given in Section~\ref{conclusion}. 
\section{Inexact symmetric proximal ADMM}{\langle}bel{subsec:Admm1} This section describes and investigates an inexact version of the symmetric proximal ADMM for solving \eqref{optl}. Essentially, the method allows its first subproblem to be solved inexactly in such way that a relative error condition is satisfied. In particular, the new algorithm as well as its iteration-complexity results generalize many others in the literature. We begin by formally stating the inexact algorithm. \begin{algorithm}[h!] \caption{\textbf{An inexact symmetric proximal ADMM}} {\langle}bel{alg:in:sy} \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \BlankLine \textbf{Step 0.} Let an initial point $(x_0,y_0,\gamma_0)\in\mathbb{R}^n\times \mathbb{R}^p\times \mathbb{R}^m$, a penalty parameter $\beta>0$, two error tolerance parameters $\tilde{\sigma}, \hat{\sigma} \in [0,1)$, and two proximal matrices $G\in \mathbb{S}^{ n}_{++}$ and $H\in\mathbb{S}^{ p}_+$ be given. Choose the acceleration parameters $\tilde au$ and $\theta$ such that $(\tilde au,\theta)\in\mathcal{R}_{\tilde{\sigma}}$ where \begin{equation}{\langle}bel{def:Reg} \mathcal{R}_{\tilde{\sigma}}= \left\{\left(\tilde au,\theta{\mbox{\rm ri\,}}ght)\,\Bigg{|} \begin{array}{c} \tilde au\in\left(-1,1-\tilde{\sigma}{\mbox{\rm ri\,}}ght),\qquad\tilde au+\theta>0, \qquad \text{and}\\\noalign{ } \left(1-\tilde au^{2}{\mbox{\rm ri\,}}ght)\left(2-\tilde au-\theta-\tilde{\sigma}{\mbox{\rm ri\,}}ght)-\left(1-\theta{\mbox{\rm ri\,}}ght)^{2}\left(1-\tilde au-\tilde{\sigma}{\mbox{\rm ri\,}}ght)>0 \end{array} {\mbox{\rm ri\,}}ght\}, \end{equation} and set $k=1$. \BlankLine {\bf Step 1.} Compute $(\tilde{x}_k,u_k)\in \mathbb{R}^n\times\mathbb{R}^n$ such that \begin{equation}{\langle}bel{cond:inex} u_k \in \partial f(\tilde{x}_k) - A^*\tilde{\gamma}_{k}, \end{equation} and \begin{equation}{\langle}bel{cond:inex2} \mathbb{N}orm{\tilde{x}_{k}-x_{k-1}+ G^{-1}u_{k}}^{2}_{G}\leq \frac{\tilde{\sigma}}{\beta}\mathbb{N}orm{\tilde{\gamma}_{k}-\gamma_{k-1}}^{2}+\hat{\sigma}\mathbb{N}orm{\tilde{x}_{k}-x_{k-1}}^{2}_{G}, \end{equation} where \begin{equation} {\langle}bel{tilde} \tilde{\gamma}_{k}:={\gamma}_{k-1}-\beta(A\tilde x_k +B y_{k-1} - b). \end{equation} \BlankLine {\bf Step 2.} Set \begin{equation}{\langle}bel{mult_1} \gamma_{k-\frac{1}{2}} := \gamma_{k-1}- \tilde au\beta\left(A\tilde{x}_{k}+B y_{k-1} - b{\mbox{\rm ri\,}}ght). \end{equation} \BlankLine {\bf Step 3.} Compute an optimal solution $y_{k}\in\mathbb{R}^p$ of the subproblem \begin{equation} {\langle}bel{g_sub} \min_{y \in \mathbb{R}^p} \left \{ g(y) - \inner{ \gamma_{k-\frac{1}{2}}}{By} +\frac{\beta}{2}\mathbb{N}orm{A\tilde{x}_{k}+By - b}^2+\frac{1}{2}\|y- y_{k-1}\|_{H}^2{\mbox{\rm ri\,}}ght\}. \end{equation} \BlankLine {\bf Step 4.} Set \begin{equation}{\langle}bel{mult_2} x_k := x_{k-1}- G^{-1}u_k, \qquad \gamma_k := \gamma_{k-\frac{1}{2}}- \theta\beta\left(A\tilde{x}_{k}+B y_{k} - b{\mbox{\rm ri\,}}ght), \end{equation} and $k\leftarrow k+1$, and go to step 1. \BlankLine \end{algorithm} We now make some relevant comments of our approach. \begin{remark}{\langle}bel{remarkalg} \begin{itemize} \item[(a)] Clearly, it follows from the definition in \eqref{def:Reg} that if $\left(\tilde au,\theta{\mbox{\rm ri\,}}ght)\in\mathcal{R}_{\tilde{\sigma}}$, then $\theta<2$, $(1-\tilde au-\tilde{\sigma})>0$ and $(2-\tilde au-\theta-\tilde{\sigma})>0$. 
Moreover, the third condition in \eqref{def:Reg} can be rewritten as \begin{equation*} \left(1+\tau+\theta-\tau\theta-\tau^{2}-\theta^{2}\right)\left(1-\tau\right)+\left(\tau^{2}-2\theta+\theta^{2}\right)\tilde{\sigma}>0. \end{equation*} If $\tilde{\sigma}=0$, then $\tau\in(-1,1)$. Hence, it follows from the above inequality that $\mathcal{R}_{0}$ reduces to the region $\mathcal{K}$ in~\eqref{def:reg} with $\tau\neq 1$. The regions $\mathcal{R}_{0}$, $\mathcal{R}_{0.3}$ and $\mathcal{R}_{0.6}$ are illustrated in Fig.~\ref{fig:reg1}, \ref{fig:reg2}, and \ref{fig:reg3}, respectively. Note that, for some suitable choices of $(\tilde{\sigma},\tau)$, the stepsize $\theta$ can even be chosen greater than $(1+\sqrt{5})/2\approx1.618$. \item[(b)] If the inaccuracy parameters $\tilde{\sigma}$ and $\hat{\sigma}$ are zero, from \eqref{cond:inex2} and the first equality in \eqref{mult_2}, we obtain $\tilde{x}_{k}={x}_{k}$ and $u_{k}=G(x_{k-1}-{x}_{k})$. Hence, in view of the definition of $\tilde \gamma_k$ in \eqref{tilde} and the inclusion in~\eqref{cond:inex}, it follows that computing $x_k$ is equivalent to solving the subproblem in \eqref{eq:xprob} exactly. Therefore, we can conclude that Algorithm~\ref{alg:in:sy} recovers its exact version. \item[(c)] In order to simplify the update formula for $x_k$ in \eqref{mult_2} and the relative error condition in \eqref{cond:inex2}, a natural choice for the proximal matrix $G$ is $I/\beta$. \item[(d)] If $\tau=0$, then $(\tau,\theta)\in\mathcal{R}_{\tilde{\sigma}}$ corresponds to \begin{equation} \label{eq:ad34} \theta\in \left(0,\frac{1-2\tilde{\sigma}+\sqrt{(1-2\tilde{\sigma})^2+4(1-\tilde{\sigma})}}{2(1-\tilde{\sigma})}\right), \end{equation} and hence Algorithm~\ref{alg:in:sy} with $G=I/\beta$ reduces to the partially inexact proximal ADMM studied in \cite{adona2018partially}. Note also that if $\tilde{\sigma}=0$ (exact case), then \eqref{eq:ad34} turns out to be the classical condition $\theta \in (0,(1+\sqrt{5})/2)$ for the FG-P-ADMM; see \cite{MJR2}. \item[(e)] If $\theta=1$, then $(\tau,\theta)\in\mathcal{R}_{\tilde{\sigma}}$ corresponds to $\tau\in(-1,1-\tilde\sigma)$. By setting $\alpha:=\tau+1$, it is possible to prove (see, e.g., \cite[Remark 5.8]{HeMaYuan2016sym}) that Algorithm~\ref{alg:in:sy} with $G=I/\beta$ reduces to the inexact generalized proximal ADMM in \cite{adonaCOAP}. Furthermore, if $\tilde{\sigma}=0$, then the condition on $\tau$ becomes the standard condition $\tau\in(-1,1)$ (or, in terms of the relaxation factor $\alpha$, $\alpha\in(0,2)$) for the G-P-ADMM; see \cite{Adona2018}. \end{itemize} \end{remark} \begin{figure}[h!] \centering \caption{Some instances of the region $\mathcal{R}_{\tilde{\sigma}}$: $\mathcal{R}_{0}$, $\mathcal{R}_{0.3}$ and $\mathcal{R}_{0.6}$ (plots omitted).}\label{fig:reg1}\label{fig:reg2}\label{fig:reg3} \end{figure} Throughout the paper, we make the following standard assumption. \\[2mm] {\bf Assumption~1.} There exists a solution $(x^{\ast},y^{\ast},\gamma^{\ast})\in\mathbb{R}^n\times\mathbb{R}^p\times\mathbb{R}^m$ of the { Lagrangian system \begin{equation}\label{sist:lag} 0\in\partial f( x)-A^{\ast} \gamma, \qquad 0\in \partial g( y)-B^{\ast}\gamma, \qquad 0=A x+B y-b, \end{equation} associated to \eqref{optl}.} In order to establish pointwise and ergodic convergence rates for Algorithm~\ref{alg:in:sy}, we first show in Section~\ref{sec:pre2} that the algorithm can be seen as an instance of a general proximal point method.
With this fact in hand, we will be able to present convergence rates of Algorithm~\ref{alg:in:sy} in Section~\ref{sec:bound}. It should be mentioned that the analysis of Algorithm~\ref{alg:in:sy} is considerably more delicate than in the aforementioned partially inexact ADMM variants, since it involves the two acceleration parameters $\tau$ and $\theta$ simultaneously. \subsection{Auxiliary results}\label{sec:pre2} Our goal in this section is to show that Algorithm~\ref{alg:in:sy} can be seen as an instance of the hybrid proximal extragradient (HPE) framework in \cite{MJR2} (see also \cite{Adona2018, adona2018partially}). More specifically, it will be proven that there exists a scalar ${\sigma}\in[\hat{\sigma},1)$ such that \begin{equation}\label{eq:bn45} M\left(z_{k-1}-z_{k}\right)\in T(\tilde z_{k}), \qquad \|\tilde z_{k}-z_{k}\|_{M}^{2}+\eta_{k}\leq \sigma\|\tilde z_{k}-z_{k-1}\|_{M}^{2}+\eta_{k-1}, \quad \forall\, k\geq 1, \end{equation} where $z_{k}:=\left(x_{k},y_{k},\gamma_{k}\right)$ and $\tilde z_{k}:=\left(\tilde x_{k},y_{k},\tilde{\gamma}_{k}\right)$, and the matrix $M$, the operator $T$ and the sequence $\{\eta_k\}$ will be specified later. As a consequence of the latter fact, the pointwise convergence rate to be presented in the next section could be derived from \cite[Theorem 3.3]{MJR2}. However, since its proof follows easily from \eqref{eq:bn45}, we present it here for completeness and for the convenience of the reader. On the other hand, although the ergodic convergence rate in the next section is related to \cite[Theorem 3.4]{MJR2}, its proof does not follow immediately from the latter theorem. The proof of \eqref{eq:bn45} is extensive and nontrivial. We begin by defining and establishing some properties of the matrix $M$ and the operator $T$. \begin{proposition}\label{def:2oper} Consider the operator $T$ and the matrix $M$ defined as \begin{equation}\label{def:oper} T(x,y,\gamma)= \left[ \begin{array}{c} \partial f(x)- A^{*}\gamma\\ \partial g(y)- B^{*}\gamma\\ Ax+By-b \end{array} \right], \qquad M=\left[ \begin{array}{ccc} G &\textbf{0}&\textbf{0}\\ \textbf{0}&H+\frac{\left(\tau-\tau\theta+\theta\right)\beta}{\tau+\theta}B^*B& -\frac{\tau}{\tau+\theta}B^{\ast}\\[2mm] \textbf{0}&-\frac{\tau}{\tau+\theta}B & \frac{1}{(\tau+\theta)\beta}I \end{array} \right]. \end{equation} Then, $T$ is maximal monotone and $M$ is symmetric positive semidefinite. \end{proposition} \begin{proof} Note that $T$ can be decomposed as $T=\widetilde{T}+\widehat{T}$, where \begin{equation*} \widetilde{T}(z):= \left(\partial f(x), \partial g(y), -b \right) \quad\mbox{and}\quad \widehat{T}(z):= Dz, \quad\mbox{with}\qquad D := \left[ \begin{array}{ccc} \textbf{0}& \textbf{0}& -A^{\ast}\\ \textbf{0}& \textbf{0}& -B^{*}\\ A& B& \textbf{0} \end{array} \right]. \end{equation*} Thus, since $f$ and $g$ are convex functions, the operators $\partial f$ and $\partial g$ are maximal monotone (see \cite{Rockafellar70}) and, hence, the operator $\widetilde{T}$ is maximal monotone as well. In addition, since $D$ is skew-symmetric, $\widehat{T}$ is also maximal monotone. Therefore, we obtain that $T$ is maximal monotone.
Now, it is evident that $M$ is symmetric and, using the inequality of Cauchy-Schwarz, for every $z=(x,y,\gamma)\in\mathbb{R}^n\times \mathbb{R}^p\times \mathbb{R}^m$ \begin{align} \nonumber \mathcal{I}nner{Mz}{z}&= \mathbb{N}orm{x}_{G}^{2}+\mathbb{N}orm{y}_{H}^{2}+ \frac{(\tilde au-\tilde au\theta+\theta)\beta}{\tilde au+\theta}\mathbb{N}orm{By}^{2}-\frac{2\tilde au}{\tilde au+\theta}\mathcal{I}nner{By}{\gamma}+ \frac{1}{(\tilde au+\theta)\beta}\mathbb{N}orm{\gamma}^{2}\\{\langle}bel{eq:de45} &\geq\frac{(\tilde au-\tilde au\theta+\theta)\beta}{\tilde au+\theta}\mathbb{N}orm{By}^{2}-\frac{2\Abs{\tilde au}}{\tilde au+\theta}\mathbb{N}orm{By}\mathbb{N}orm{\gamma}+ \frac{1}{(\tilde au+\theta)\beta}\mathbb{N}orm{\gamma}^{2} =\mathcal{I}nner{Pw}{w}, \end{align} where $w:=\left(\mathbb{N}orm{\gamma},\mathbb{N}orm{By}{\mbox{\rm ri\,}}ght)$ and \[ P:=\left[ \begin{array}{cc} \frac{1}{(\tilde au+\theta)\beta}&-\frac{\Abs{\tilde au}}{\tilde au+\theta}\\[2mm] -\frac{\Abs{\tilde au}}{\tilde au+\theta} & \frac{\left(\tilde au-\tilde au\theta+\theta{\mbox{\rm ri\,}}ght)\beta}{\tilde au+\theta} \end{array} {\mbox{\rm ri\,}}ght]. \] From step 0 of Algorithm~\ref{alg:in:sy}, we obtain \begin{equation*} P_{1,1}=\frac{1}{(\tilde au+\theta)\beta}>0, \quad \mbox{and}\quad \det(P)=\frac{(1-\tilde au)(\tilde au+\theta)}{(\tilde au+\theta)^{2}}>0, \end{equation*} Therefore, $P$ is symmetric positive definite and, hence, the statement on $M$ follows now from~\eqref{eq:de45}. \end{proof} We next establish a technical result. \begin{lemma} {\langle}bel{rel:tilde} Consider the sequences $\{p_k\}$ and $\{q_k\}$ defined by \begin{equation}{\langle}bel{def:pq} p_{k}= B\left(y_{k}-y_{k-1}{\mbox{\rm ri\,}}ght), \qquad q_{k}=-\beta\left(A\tilde{x}_{k}+By_{k}-b{\mbox{\rm ri\,}}ght), \quad \forall\, k\geq 1. \end{equation} Then, for every $k\geq 1$, the following equalities hold: \begin{align}{\langle}bel{gtk:gk} \tilde{\gamma}_{k}-\gamma_{k-1}&=\beta p_{k} + q_{k}, & \tilde{\gamma}_{k}-\gamma_{k}=\left(1-\tilde au{\mbox{\rm ri\,}}ght)\beta p_k+\left(1-\tilde au-\theta{\mbox{\rm ri\,}}ght)q_{k}, \\ \gamma_{k}-\gamma_{k-1}&=\tilde au\beta p_{k} +\left(\tilde au+\theta{\mbox{\rm ri\,}}ght)q_{k}. & {\langle}bel{gtk:gk2} \end{align} \end{lemma} \begin{proof} From the definition of $\tilde{\gamma}_{k}$ in \eqref{tilde}, we have \begin{equation*} \tilde{\gamma}_{k}-\gamma_{k-1} =\beta B\left(y_{k}-y_{k-1}{\mbox{\rm ri\,}}ght)-\beta\left(A\tilde x_{k}+By_{k}-b{\mbox{\rm ri\,}}ght), \end{equation*} which, in view of \eqref{def:pq}, proves the first identity in \eqref{gtk:gk}. Now, using \eqref{tilde}, \eqref{mult_1} and the definition of $\gamma_{k}$ in \eqref{mult_2} we get \begin{align*} \tilde{\gamma}_{k}-\gamma_{k}&= \gamma_{k-1} -\gamma_{k-\frac{1}{2}}-\beta\left(A\tilde x_{k}+By_{k-1}-b{\mbox{\rm ri\,}}ght) +\theta\beta\left(A\tilde x_{k}+By_{k}-b{\mbox{\rm ri\,}}ght)\\ &=-\left(1-\tilde au{\mbox{\rm ri\,}}ght)\beta\left(A\tilde x_{k}+By_{k-1}-b{\mbox{\rm ri\,}}ght)+\theta\beta\left(A\tilde x_{k}+By_{k}-b{\mbox{\rm ri\,}}ght)\\ &=(1-\tilde au)\beta B\left(y_{k}-y_{k-1}{\mbox{\rm ri\,}}ght)-\left(1-\tilde au-\theta{\mbox{\rm ri\,}}ght)\beta\left(A\tilde x_{k}+By_{k}-b{\mbox{\rm ri\,}}ght). \end{align*} This equality, together with \eqref{def:pq}, implies the second identity in \eqref{gtk:gk}. 
Again using the definitions of $\gamma_{k-\frac{1}{2}}$ and $\gamma_{k}$ in \eqref{mult_1} and \eqref{mult_2}, respectively, we obtain \begin{align*} \gamma_{k}-\gamma_{k-1}&=-\theta\beta\left(A\tilde x_{k}+By_{k}-b{\mbox{\rm ri\,}}ght)-\tilde au\beta\left(A\tilde x_{k}+By_{k-1}-b{\mbox{\rm ri\,}}ght)\\ &=\tilde au\beta B\left(y_{k}-y_{k-1}{\mbox{\rm ri\,}}ght)-\left(\tilde au+\theta{\mbox{\rm ri\,}}ght)\beta\left(A\tilde x_{k}+By_{k}-b{\mbox{\rm ri\,}}ght), \end{align*} which, combined with \eqref{def:pq}, yields \eqref{gtk:gk2}. \end{proof} We next show that the inclusion in \eqref{eq:bn45} holds. \begin{theorem} {\langle}bel{pr:aux} For every $k\geq 1$, the following estimatives hold: \begin{align} G(x_{k-1}-x_{k})&\in \partial f(\tilde x_k)-A^*\tilde{\gamma}_k, {\langle}bel{1:incl}\\[2mm] \left(\!H+\frac{(\tilde au-\tilde au\theta+\theta)\beta}{\tilde au+\theta} B^{\ast}B\!{\mbox{\rm ri\,}}ght)\!\!(y_{k-1}\!-\!y_{k})-\frac{\tilde au}{\tilde au+\theta}B^{\ast}(\gamma_{k-1}-\gamma_{k})&\in\partial g(y_k)\!-\!B^{\ast}\tilde{\gamma}_k,{\langle}bel{2:incl}\\[2mm] -\frac{\tilde au}{\tilde au+\theta}B(y_{k-1}-y_{k})+\frac{1}{(\tilde au+\theta)\beta}(\gamma_{k-1}-\gamma_{k})&=A\tilde x_k+By_k-b.{\langle}bel{3:equ} \end{align} As a consequence, for every $k\geq 1$, \[ M\left(z_{k-1}-z_{k}{\mbox{\rm ri\,}}ght)\in T(\tilde z_{k}), \] where \begin{equation}{\langle}bel{def:ztz} z_{k}:=\left(x_{k},y_{k},\gamma_{k}{\mbox{\rm ri\,}}ght)\quad\forall\, k\geq 0, \qquad \tilde z_{k}:=\left(\tilde x_{k},y_{k},\tilde{\gamma}_{k}{\mbox{\rm ri\,}}ght)\quad\forall\,k\geq 1, \end{equation} and $T$ and $M$ are as in \eqref{def:oper}. \end{theorem} \begin{proof} Inclusion in \eqref{1:incl} follows trivially from \eqref{cond:inex} and the definition of $x_{k}$ in \eqref{mult_2}. It follows from \eqref{mult_1} and \eqref{mult_2} that \begin{align*} \gamma_k-\gamma_{k-1}&=-\theta\beta\left(A\tilde x_k+By_k-b {\mbox{\rm ri\,}}ght)-\tilde au\beta\left(A\tilde x_k+By_{k-1}-b {\mbox{\rm ri\,}}ght)\\ &=-(\tilde au+\theta)\beta\left(A\tilde x_k+By_k-b {\mbox{\rm ri\,}}ght)+\tilde au\beta B\left(y_{k}-y_{k-1}{\mbox{\rm ri\,}}ght), \end{align*} which is equivalent to \eqref{3:equ}. Now, from the optimality condition for \eqref{g_sub}, we have \begin{equation}{\langle}bel{aux1:sub_y} 0\in\partial g(y_{k})- B^{\ast}\left[\gamma_{k-\frac{1}{2}}-\beta\left(A\tilde x_{k}+By_{k}-b{\mbox{\rm ri\,}}ght){\mbox{\rm ri\,}}ght] + H\left(y_{k}-y_{k-1}{\mbox{\rm ri\,}}ght). \end{equation} On the other hand, using \eqref{tilde}, we obtain \begin{align*} \gamma_{k-\frac{1}{2}}-\beta\left(A\tilde x_{k}+By_{k}-b{\mbox{\rm ri\,}}ght)&= \gamma_{k-\frac{1}{2}}-\beta\left(A\tilde x_{k}+By_{k-1}-b{\mbox{\rm ri\,}}ght)-\beta B\left(y_{k}-y_{k-1}{\mbox{\rm ri\,}}ght)\\ &=\tilde{\gamma}_{k}+\gamma_{k-\frac{1}{2}}-\gamma_{k-1}-\beta B\left(y_{k}-y_{k-1}{\mbox{\rm ri\,}}ght). 
\end{align*} From the definition of $\gamma_{k}$ in \eqref{mult_2}, we find \begin{align*} \gamma_{k-\frac{1}{2}}-\gamma_{k-1} &=\gamma_{k-\frac{1}{2}}-\gamma_{k}+\gamma_{k}-\gamma_{k-1}=\theta\beta\left(A\tilde x_{k}+By_{k}-b{\mbox{\rm ri\,}}ght)+\gamma_{k}-\gamma_{k-1}\\ &=\theta\beta\left[\frac{\tilde au}{\tilde au+\theta}B\left(y_{k}-y_{k-1}{\mbox{\rm ri\,}}ght)-\frac{1}{(\tilde au+\theta)\beta}\left(\gamma_{k}-\gamma_{k-1}{\mbox{\rm ri\,}}ght){\mbox{\rm ri\,}}ght]+\gamma_{k}-\gamma_{k-1}\\ &=\frac{\tilde au\theta\beta}{\tilde au+\theta}B\left(y_{k}-y_{k-1}{\mbox{\rm ri\,}}ght)+\frac{\tilde au}{\tilde au+\theta}\left(\gamma_{k}-\gamma_{k-1}{\mbox{\rm ri\,}}ght), \end{align*} where the last equality is due to \eqref{3:equ}. Combining the last two equalities, we have \begin{equation*} \gamma_{k-\frac{1}{2}}-\beta\left(A\tilde x_{k}+By_{k}-b{\mbox{\rm ri\,}}ght)= \tilde{\gamma}_{k}-\frac{(\tilde au-\tilde au\theta+\theta)\beta}{\tilde au+\theta}B\left(y_{k}-y_{k-1}{\mbox{\rm ri\,}}ght)+\frac{\tilde au}{\tilde au+\theta}\left(\gamma_{k}-\gamma_{k-1}{\mbox{\rm ri\,}}ght), \end{equation*} which, combined with \eqref{aux1:sub_y}, implies \eqref{2:incl}. \end{proof} In the remaining part of this section, we will prove that the inequality in \eqref{eq:bn45} holds. Toward this goal, we next establish three technical results. \begin{lemma}{\langle}bel{cor:aux2} Let $\{z_k\}$ and $\{\tilde{z}_{k}\}$ be as in \eqref{def:ztz}. Then, for every $z^{\ast}\in T^{-1}(0)$, we have \[ \mathbb{N}orm{z^{\ast}-z_{k}}_{M}^{2}-\mathbb{N}orm{z^{\ast}-z_{k-1}}_{M}^{2} \leq \mathbb{N}orm{\tilde z_{k}-z_{k}}_{M}^{2}-\mathbb{N}orm{\tilde z_{k}-z_{k-1}}_{M}^{2}, \quad \forall\,k\geq 1. \] \end{lemma} \begin{proof} As $M\left(z_{k-1}-z_{k}{\mbox{\rm ri\,}}ght)\in T(\tilde z_{k})$ (Theorem~\ref{pr:aux}), $T$ is monotone maximal (Proposition~\ref{def:2oper}) and $0 \in T(z^*)$, we obtain $\mathcal{I}nner{M(z_{k-1}-z_{k})}{\tilde{z}_{k}-z^{\ast}}\geq 0$. Hence, \begin{align*} \|z^{*}-z_{k}\|^{2}_{M}-\|z^{*}-z_{k-1}\|^{2}_{M}&=\|z^{*}-\tilde{z}_{k}+\tilde{z}_{k}-z_{k}\|^{2}_{M}-\|z^{*}-\tilde{z}_{k}+\tilde{z}_{k}-z_{k-1}\|^{2}_{M}\\ & = \|\tilde{z}_{k}-z_{k}\|^{2}_{M}+2{\langle}ngle M(z_{k-1}- z_{k}),z^{*}-\tilde{z}_{k} {\rangle}ngle-\|\tilde{z}_{k}-z_{k-1}\|^{2}_{M}\\ & \leq \|\tilde{z}_{k}-z_{k}\|^{2}_{M}-\|\tilde{z}_{k}-z_{k-1}\|^{2}_{M}, \end{align*} concluding the proof. \end{proof} \begin{proposition} {\langle}bel{pr:ang} Define the matrix $Q$ and the scalar $\vartheta$ as \begin{equation}{\langle}bel{eq:l90} Q=\left[ \begin{array}{cc} \left(3-3\tilde au-2\tilde{\sigma}{\mbox{\rm ri\,}}ght)\beta I &2\left(1-\tilde au-\tilde{\sigma}{\mbox{\rm ri\,}}ght)I\\[2mm] 2\left(1-\tilde au-\tilde{\sigma}{\mbox{\rm ri\,}}ght)I & \frac{4-\tilde au-\theta-2\tilde{\sigma}}{\beta}I \end{array} {\mbox{\rm ri\,}}ght], \end{equation} and \begin{equation}{\langle}bel{def:vart} \vartheta=\sqrt{\left(3-3\tilde au-2\tilde{\sigma}{\mbox{\rm ri\,}}ght)\left(4-\tilde au-\theta-2\tilde{\sigma}{\mbox{\rm ri\,}}ght)}-2\left(1-\tilde au-\tilde{\sigma}{\mbox{\rm ri\,}}ght). \end{equation} Then, $Q$ is symmetric positive definite and $\vartheta>0$. Moreover, for any $(y,\gamma)\in\mathbb{R}^{p}\times\mathbb{R}^{m}$ \begin{equation*}{\langle}bel{eq:a54} \mathbb{N}orm{\left(y,\gamma{\mbox{\rm ri\,}}ght)}_{Q}^{2}\geq - 2\vartheta\mathcal{I}nner{y}{\gamma}. 
\end{equation*} \end{proposition} \begin{proof} Clearly $Q$ is symmetric, and is positive definite iff \begin{equation*} \widehat{Q}=\left[ \begin{array}{cc} \left(3-3\tilde au-2\tilde{\sigma}{\mbox{\rm ri\,}}ght)\beta &2\left(1-\tilde au-\tilde{\sigma}{\mbox{\rm ri\,}}ght)\\[2mm] 2\left(1-\tilde au-\tilde{\sigma}{\mbox{\rm ri\,}}ght) &\frac{4-\tilde au-\theta-2\tilde{\sigma}}{\beta} \end{array} {\mbox{\rm ri\,}}ght] \end{equation*} is positive definite. To show that $\widehat{Q}\in \mathbb{S}^{2}_{++}$ consider the scalars $\varrho$, $\tilde{\varrho}$, and $\hat{\varrho}$ defined by \begin{equation*}{\langle}bel{vrho} \varrho=\left(3-3\tilde au-2\tilde{\sigma}{\mbox{\rm ri\,}}ght)\beta, \qquad \tilde{\varrho}=2\left(1-\tilde au-\tilde{\sigma}{\mbox{\rm ri\,}}ght),\qquad\mbox{and} \qquad \hat{\varrho}=\frac{4-\tilde au-\theta-2\tilde{\sigma}}{\beta}. \end{equation*} Since $3-3\tilde au-2\tilde{\sigma}=\left(1-\tilde au{\mbox{\rm ri\,}}ght)+2\left(1-\tilde au-\tilde{\sigma}{\mbox{\rm ri\,}}ght)$ and $4-\tilde au-\theta-2\tilde{\sigma}\!=\!\left(\tilde au+\theta{\mbox{\rm ri\,}}ght)+2\left(2-\tilde au-\theta-\tilde{\sigma}{\mbox{\rm ri\,}}ght)$, we obtain, from \eqref{def:Reg}, that $\varrho,\hat{\varrho}>0$. Moreover, \begin{align*} \varrho\hat{\varrho}-\tilde{\varrho}^{2}&=\left[\left(1-\tilde au{\mbox{\rm ri\,}}ght)+2\left(1-\tilde au-\tilde{\sigma}{\mbox{\rm ri\,}}ght){\mbox{\rm ri\,}}ght]\left(4-\tilde au-\theta-2\tilde{\sigma}{\mbox{\rm ri\,}}ght)-4\left(1-\tilde au-\tilde{\sigma}{\mbox{\rm ri\,}}ght)^{2}\\ &=\left(1-\tilde au-\tilde{\sigma}{\mbox{\rm ri\,}}ght)\left[\left(2-\tilde au-\theta-\tilde{\sigma}{\mbox{\rm ri\,}}ght)+2\left(3+\tilde au-\theta{\mbox{\rm ri\,}}ght){\mbox{\rm ri\,}}ght]+\tilde{\sigma}\left(3-\tilde{\sigma}-\theta{\mbox{\rm ri\,}}ght). \end{align*} From \eqref{def:Reg}, we have $(1-\tilde au-\tilde{\sigma})>0$, $(2-\tilde au-\theta-\tilde{\sigma})>0$ and $\theta<2$. The latter inequality, together with the facts that $\tilde au>-1$ and $\tilde{\sigma}<1$, yields $3+\tilde au-\theta>0$ and $3-\tilde{\sigma}-\theta>0$. Therefore, $\det(\widehat{Q})>0$ and $\mbox{Tr}(\widehat{Q})>0$, and we conclude that $Q$ is positive definite. In addition, inequalities $\varrho\hat{\varrho}-\tilde{\varrho}^{2}>0$ and $(1-\tilde au-\tilde{\sigma})>0$ clearly imply that $\vartheta>0$. Now, for a given $\left(y,\gamma{\mbox{\rm ri\,}}ght)\in \mathbb{R}^p\times\mathbb{R}^m$, using \eqref{eq:l90}, \eqref{def:vart} and simple algebraic manipulations, we find \begin{equation*} \mathbb{N}orm{\left(y,\gamma{\mbox{\rm ri\,}}ght)}_{Q}^{2}=\mathbb{N}orm{\sqrt{\left(3-3\tilde au-2\tilde{\sigma}{\mbox{\rm ri\,}}ght)\beta}y +\frac{\sqrt{4-\tilde au-\theta-2\tilde{\sigma}}}{\sqrt{\beta}}\gamma}^{2} -2\vartheta\mathcal{I}nner{y}{\gamma} \geq - 2\vartheta\mathcal{I}nner{y}{\gamma}, \end{equation*} which concluded the proof of the proposition. 
\end{proof} \begin{proposition}{\langle}bel{lm:coef} Consider the functions $\varphi, \widehat{\varphi}, \widetilde{\varphi},\overline{\varphi}\colon\mathbb{R}\to\mathbb{R}$ defined by \begin{subequations}{\langle}bel{coef} \begin{align} \varphi(\sigma)&= \left(1-\tilde au{\mbox{\rm ri\,}}ght)\left(\sigma-1{\mbox{\rm ri\,}}ght)+\left(1-\tilde au-\tilde{\sigma}{\mbox{\rm ri\,}}ght)\left(\tilde au+\theta{\mbox{\rm ri\,}}ght),{\langle}bel{ph} \\\noalign{ } \widehat{\varphi}(\sigma)&=\left(1-\tilde au{\mbox{\rm ri\,}}ght)\left[\left(1+\theta{\mbox{\rm ri\,}}ght)\sigma-1+\tilde au{\mbox{\rm ri\,}}ght]-\tilde{\sigma}\left(\tilde au+\theta{\mbox{\rm ri\,}}ght),{\langle}bel{ph:hat} \\\noalign{ } \widetilde{\varphi}(\sigma)&=\sigma-\left(1-\tilde au-\theta{\mbox{\rm ri\,}}ght)^{2}-\tilde{\sigma}\left(\tilde au+\theta{\mbox{\rm ri\,}}ght),{\langle}bel{ph:til} \\\noalign{ } \overline{\varphi}(\sigma)&=\left[\left(1+\tilde au{\mbox{\rm ri\,}}ght)\widehat{\varphi}(\sigma)-2\tilde au\varphi(\sigma){\mbox{\rm ri\,}}ght]\left(1+\tilde au{\mbox{\rm ri\,}}ght)\widetilde{\varphi}(\sigma)-\left(1-\theta{\mbox{\rm ri\,}}ght)^{2}\left(\varphi(\sigma){\mbox{\rm ri\,}}ght)^{2}. {\langle}bel{ph:bar} \end{align} \end{subequations} Then, there exists a scalar ${\sigma}\in[\hat{\sigma},1)$ such that $\varphi(\sigma)\geq 0$, $\widehat{\varphi}(\sigma)\geq 0$, $\widetilde{\varphi}(\sigma)> 0$ and $\overline{\varphi}(\sigma)\geq 0$. \end{proposition} \begin{proof} Since \begin{equation*} \varphi(1)=\left(1-\tilde au-\tilde{\sigma}{\mbox{\rm ri\,}}ght)\left(\tilde au+\theta{\mbox{\rm ri\,}}ght)=\widehat{\varphi}(1), \qquad \widetilde{\varphi}(1)=\left(2-\tilde au-\theta-\tilde{\sigma}{\mbox{\rm ri\,}}ght)\left(\tilde au+\theta{\mbox{\rm ri\,}}ght), \end{equation*} and \begin{align*} \overline{\varphi}(1)=\left(1-\tilde au-\tilde{\sigma}{\mbox{\rm ri\,}}ght)\left(\tilde au+\theta{\mbox{\rm ri\,}}ght)^{2}\left[\left(1-\tilde au^{2}{\mbox{\rm ri\,}}ght)\left(2-\tilde au-\theta-\tilde{\sigma}{\mbox{\rm ri\,}}ght)-\left(1-\theta{\mbox{\rm ri\,}}ght)^{2}\left(1-\tilde au-\tilde{\sigma}{\mbox{\rm ri\,}}ght){\mbox{\rm ri\,}}ght], \end{align*} it follows from \eqref{def:Reg} that all functions defined in \eqref{coef} are positive for $\sigma=1$. Therefore, there exists $\sigma\in [\hat{\sigma},1)$ close to $1$ such that the statements of the proposition hold. \end{proof} The following lemma provides some estimates of the sequences $\{\mathbb{N}orm{\tilde z_{k}-z_{k-1}}_{M}^{2}\}$ and $\{\mathbb{N}orm{\tilde z_{k}-z_{k}}_{M}^{2}\}$, which appear in \eqref{eq:bn45}. \begin{lemma}{\langle}bel{lm/abc} Let $T$, $M$, $\{p_k\}$, $\{q_k\}$, $\{z_k\}$ and $\{\tilde{z}_{k}\}$ be as in \eqref{def:oper}, \eqref{def:pq} and \eqref{def:ztz}. 
Then, for every $k\geq 1$, \begin{equation}{\langle}bel{eq:f456} \mathbb{N}orm{\tilde z_{k}-z_{k-1}}_{M}^{2}=\mathbb{N}orm{\tilde x_{k}-x_{k-1}}_{G}^{2}+\mathbb{N}orm{y_{k}-y_{k-1}}_{H}^{2}+ a_{k}, \qquad \mathbb{N}orm{\tilde z_{k}-z_{k}}_{M}^{2}=\mathbb{N}orm{\tilde x_{k}-x_{k}}_{G}^{2} + b_{k}, \end{equation} where \begin{equation*} a_{k}:=\frac{(1-\tilde au)(1+\theta)\beta}{\tilde au+\theta}\mathbb{N}orm{p_{k}}^{2}+\frac{2(1-\tilde au)}{\tilde au+\theta}\mathcal{I}nner{p_{k}}{q_{k}}+\frac{1}{(\tilde au+\theta)\beta}\mathbb{N}orm{q_{k}}^{2}, \end{equation*} and \begin{equation*} b_{k}:=\frac{(1-\tilde au)^{2}\beta}{\tilde au+\theta}\mathbb{N}orm{p_{k}}^{2}+\frac{2(1-\tilde au)(1-\tilde au-\theta)}{\tilde au+\theta}\mathcal{I}nner{p_{k}}{q_{k}}+\frac{(1-\tilde au-\theta)^{2}}{(\tilde au+\theta)\beta}\mathbb{N}orm{q_{k}}^{2}. \end{equation*} \end{lemma} \begin{proof} It follows from \eqref{def:ztz} and the first equality in \eqref{gtk:gk} that \begin{gather*} \mathbb{N}orm{\tilde z_{k}-z_{k-1}}_{M}^{2}= \mathbb{N}orm{\left(\tilde x_{k}-x_{k-1},y_{k}-y_{k-1},\beta p_{k}+ q_{k}{\mbox{\rm ri\,}}ght)}_{M}^{2}. \end{gather*} Hence, using \eqref{def:oper} and \eqref{def:pq}, we find \begin{equation*} \mathbb{N}orm{\tilde z_{k}-z_{k-1}}_{M}^{2}= \mathbb{N}orm{\tilde x_{k}-x_{k-1}}_{G}^{2}+\mathbb{N}orm{y_{k}-y_{k-1}}_{H}^{2}+ \tilde{a}_{k} \end{equation*} where \begin{equation*} \tilde{a}_{k}:=\frac{(\tilde au-\tilde au\theta+\theta)\beta}{\tilde au+\theta}\mathbb{N}orm{p_{k}}^{2}-\frac{2\tilde au}{\tilde au+\theta}\mathcal{I}nner{p_{k}}{\beta p_{k}+q_{k}}+\frac{1}{(\tilde au+\theta)\beta}\mathbb{N}orm{\beta p_{k}+q_{k}}^{2}. \end{equation*} By developing the right-hand side of the last expression, we have \begin{align*} \tilde{a}_{k}&= \frac{(\tilde au-\tilde au\theta+\theta-2\tilde au+1)\beta}{\tilde au+\theta}\mathbb{N}orm{p_{k}}^{2}-\frac{2\tilde au-2}{\tilde au+\theta}\mathcal{I}nner{p_{k}}{q_{k}}+\frac{1}{(\tilde au+\theta)\beta}\mathbb{N}orm{q_{k}}^{2}\\ &=\frac{(1-\tilde au)(1+\theta)\beta}{\tilde au+\theta}\mathbb{N}orm{p_{k}}^{2}+\frac{2(1-\tilde au)}{\tilde au+\theta}\mathcal{I}nner{p_{k}}{q_{k}}+\frac{1}{(\tilde au+\theta)\beta}\mathbb{N}orm{q_{k}}^{2}=a_{k}. \end{align*} Therefore, the first equation in \eqref{eq:f456} follows. Now, using \eqref{def:ztz}, \eqref{def:pq}, the second equality in \eqref{gtk:gk}, and the definition of $M$ in \eqref{def:oper}, we obtain \begin{align*} \mathbb{N}orm{\tilde z_{k}-z_{k}}_{M}^{2}&=\mathbb{N}orm{\left(\tilde x_{k}-x_{k},0,(1-\tilde au)\beta p_{k}+(1-\tilde au-\theta)q_{k}{\mbox{\rm ri\,}}ght)}_{M}^{2}\\ &=\mathbb{N}orm{\tilde x_{k}-x_{k-1}}_{G}^{2}+\frac{1}{(\tilde au+\theta)\beta}\mathbb{N}orm{(1-\tilde au)\beta p_{k}+(1-\tilde au-\theta)q_{k}}^{2}, \end{align*} which is equivalent to the second equation in \eqref{eq:f456}. \end{proof} Before proving the inequality in \eqref{eq:bn45}, we establish some other relations satisfied by the sequences generated by Algorithm~\ref{alg:in:sy}. To do this, we consider the following constant \begin{equation}{\langle}bel{defd_0} d_{0}=\inf\left\{\mathbb{N}orm{z^{\ast}-z_{0}}_{M}^{2}\,:\, z^{\ast}\in T^{-1}(0){\mbox{\rm ri\,}}ght\}, \end{equation} where $M$, $T$ and $z_{0}$ are as in \eqref{def:oper} and \eqref{def:ztz}. Note that, if $M$ is positive definite, then $d_{0}$ measures the squared distance in the norm $\norm{\cdot}_{M}$ of the initial point $z_{0}=(x_{0},y_{0},\gamma_{0})$ to the solution set of \eqref{optl}. 
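As a purely numerical sanity check of Proposition~\ref{def:2oper} (an illustrative sketch written for this presentation, in which the dimensions, the matrix $B$, the proximal matrices and the parameters are arbitrary assumptions, not data from the paper), one may assemble the matrix $M$ of \eqref{def:oper} and verify its positive semidefiniteness:
\begin{verbatim}
import numpy as np

# Assemble M from the definition in Proposition "def:2oper" for toy data and
# check that it is symmetric positive semidefinite; all data are assumptions.
rng = np.random.default_rng(1)
n, p, m = 4, 3, 5
beta, tau, theta = 1.0, 0.5, 0.8             # tau < 1 and tau + theta > 0
G = np.eye(n) / beta                          # G positive definite
H = 0.1 * np.eye(p)                           # H positive semidefinite
B = rng.standard_normal((m, p))

c1 = (tau - tau * theta + theta) * beta / (tau + theta)
c2 = tau / (tau + theta)
c3 = 1.0 / ((tau + theta) * beta)
M = np.block([
    [G,                np.zeros((n, p)),  np.zeros((n, m))],
    [np.zeros((p, n)), H + c1 * B.T @ B,  -c2 * B.T],
    [np.zeros((m, n)), -c2 * B,           c3 * np.eye(m)],
])

print(np.allclose(M, M.T), np.linalg.eigvalsh(M).min() >= -1e-10)
\end{verbatim}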
\begin{lemma} {\langle}bel{cond:ang} Let $\{p_k\}$, $\{q_k\}$ and $d_0$ be as in \eqref{def:pq} and \eqref{defd_0}. Then, the following hold:\\ (a) \begin{equation*} \min\left\{2\vartheta\mathcal{I}nner{p_{1}}{q_{1}},-\mathbb{N}orm{y_{1}-y_{0}}_{H}^{2}{\mbox{\rm ri\,}}ght\}\geq-4d_{0}, \end{equation*} where $\vartheta$ is as in \eqref{def:vart}.\\ (b) for every $k\geq 2$, we have \begin{equation*} 2(1+\tilde au)\mathcal{I}nner{p_{k}}{q_{k}}\geq 2(1-\theta)\mathcal{I}nner{p_{k}}{q_{k-1}}- 2\tilde au\beta\mathbb{N}orm{p_{k}}^{2}+ \mathbb{N}orm{y_{k}-y_{k-1}}_{H}^{2}-\mathbb{N}orm{y_{k-1}-y_{k-2}}_{H}^{2}. \end{equation*} \end{lemma} \begin{proof} $(a)$ From \eqref{gtk:gk2} with $k=1$, we have $\gamma_{1}-\gamma_{0}=\tilde au\beta p_{1}+(\tilde au+\theta)q_{1}$. Then, using \eqref{def:ztz} (with $k=1$) and the definition of $M$ in \eqref{def:oper}, we find \begin{align} \mathbb{N}orm{z_{1}-z_{0}}_{M}^{2}=\mathbb{N}orm{\left(x_{1}-x_{0},y_{1}-y_{0},\gamma_{1}-\gamma_{0}{\mbox{\rm ri\,}}ght)}_{M}^{2}=\mathbb{N}orm{x_{1}-x_{0}}_{G}^{2}+\mathbb{N}orm{y_{1}-y_{0}}_{H}^{2} + c_{1}, {\langle}bel{eq:lu8} \end{align} where \begin{align} \nonumber c_{1}&:=\frac{\left(\tilde au-\tilde au\theta+\theta{\mbox{\rm ri\,}}ght)\beta}{\tilde au+\theta}\mathbb{N}orm{p_{1}}^{2}-\frac{2\tilde au}{\tilde au+\theta}\mathcal{I}nner{p_{1}}{\gamma_{1}-\gamma_{0}} +\frac{1}{(\tilde au+\theta)\beta}\mathbb{N}orm{\gamma_{1}-\gamma_{0}}^{2}\\ &=\frac{\left(\tilde au-\tilde au\theta+\theta-\tilde au^{2}{\mbox{\rm ri\,}}ght)\beta}{\tilde au+\theta}\mathbb{N}orm{p_{1}}^{2}+\frac{\tilde au+\theta}{\beta}\mathbb{N}orm{q_{1}}^{2} =\left(1-\tilde au{\mbox{\rm ri\,}}ght)\beta\mathbb{N}orm{p_{1}}^{2}+\frac{\tilde au+\theta}{\beta}\mathbb{N}orm{q_{1}}^{2}.{\langle}bel{defc1} \end{align} Let $z^{\ast}=\left(x^{\ast},y^{\ast},\gamma^{\ast}{\mbox{\rm ri\,}}ght)$ be an arbitrary solution of \eqref{sist:lag}, i.e., $z^{\ast}\in T^{-1}(0)$ with $T$ as in \eqref{def:oper}. Hence, it follows from \eqref{eq:lu8} and the fact that $\mathbb{N}orm{z-z'}_{M}^{2}\leq 2(\mathbb{N}orm{z}_{M}^{2}+\mathbb{N}orm{z'}_{M}^{2})$, for all $z,z'$, that \begin{equation}{\langle}bel{ax2:ag} c_{1}+\mathbb{N}orm{y_{1}-y_{0}}_{H}^{2}\leq \mathbb{N}orm{z_{1}-z_{0}}_{M}^{2}\leq 2\left(\mathbb{N}orm{z^{\ast}-z_{1}}_{M}^{2}+\mathbb{N}orm{z^{\ast}-z_{0}}_{M}^{2}{\mbox{\rm ri\,}}ght). \end{equation} On the other hand, it follows from \eqref{cond:inex2} with $k=1$ and the definition of $x_{1}$ in \eqref{mult_2} that \begin{equation*} \mathbb{N}orm{\tilde x_{1}-x_{1}}_{G}^{2}\leq\hat{\sigma}\mathbb{N}orm{\tilde x_{1}-x_{0}}_{G}^{2}+\frac{\tilde{\sigma}}{\beta}\mathcal{V}ert\tilde{\gamma}_{1}-\gamma_{0}\mathcal{V}ert^{2}\leq \mathbb{N}orm{\tilde x_{1}-x_{0}}_{G}^{2}+\frac{\tilde{\sigma}}{\beta}\mathcal{V}ert\tilde{\gamma}_{1}-\gamma_{0}\mathcal{V}ert^{2}, \end{equation*} where, in the second inequality, we use that $\hat{\sigma}<1$. Thus, using the first identity in \eqref{gtk:gk} with $k=1$, we obtain \begin{equation*} \mathbb{N}orm{\tilde x_{1}-x_{0}}_{G}^{2}-\mathbb{N}orm{\tilde x_{1}-x_{1}}_{G}^{2}\geq -\frac{\tilde{\sigma}}{\beta}\mathbb{N}orm{\beta p_{1}+ q_{1}}^{2}. 
\end{equation*}
This inequality, together with Lemma~\ref{cor:aux2} and Lemma~\ref{lm/abc} (with $k=1$), implies that
\begin{align*}
\Norm{z^{\ast}-z_{0}}_{M}^{2}-\Norm{z^{\ast}-z_{1}}_{M}^{2}&\geq \Norm{\tilde z_{1}-z_{0}}_{M}^{2}-\Norm{\tilde z_{1}-z_{1}}_{M}^{2}\\
&\geq\Norm{\tilde x_{1}-x_{0}}_{G}^{2}-\Norm{\tilde x_{1}-x_{1}}_{G}^{2}+\frac{(1-\tau)\left(\tau+\theta\right)\beta}{\tau+\theta}\Norm{p_{1}}^{2}\\
&+\frac{2(1-\tau)(\tau+\theta)}{\tau+\theta}\Inner{p_{1}}{q_{1}}+\frac{1-\left[1-\left(\tau+\theta\right)\right]^{2}}{(\tau+\theta)\beta}\Norm{q_1}^{2}\\
&\geq\left(1-\tau-\tilde{\sigma}\right)\left[\beta\Norm{p_{1}}^{2}+2\Inner{p_{1}}{q_{1}}\right]+\frac{2-\tau-\theta-\tilde{\sigma}}{\beta}\Norm{q_1}^{2}.
\end{align*}
Combining this inequality with \eqref{ax2:ag} and using the identity in \eqref{defc1}, we find
\begin{align*}
4\Norm{z^{\ast}-z_{0}}_{M}^{2} &\geq\Norm{y_{1}-y_{0}}_{H}^{2}+ \left(3-3\tau-2\tilde{\sigma}\right)\beta\Norm{p_{1}}^{2} + 4\left(1-\tau-\tilde{\sigma}\right)\Inner{p_{1}}{q_{1}} + \frac{4-\tau-\theta-2\tilde{\sigma}}{\beta}\Norm{q_{1}}^{2}\\
&=\Norm{y_{1}-y_{0}}_{H}^{2} + \Norm{\left(p_{1},q_{1}\right)}_{Q}^{2},
\end{align*}
where $Q$ is as in \eqref{eq:l90}. Hence, using Proposition~\ref{pr:ang} we conclude that
\begin{equation*}
\max\left\{-\vartheta\Inner{p_{1}}{q_{1}},\Norm{y_{1}-y_{0}}_{H}^{2}\right\} \leq 4\Norm{z^{\ast}-z_{0}}_{M}^{2}.
\end{equation*}
Therefore, statement $(a)$ follows from the definition of $d_{0}$ in \eqref{defd_0}.
\\[2mm]
$(b)$ It follows from the definitions of $\gamma_{k}$ and $q_{k}$ in \eqref{mult_2} and \eqref{def:pq}, respectively, that
\[
\gamma_{k-\frac{1}{2}}-\beta\left(A\tilde x_{k}+By_{k}-b\right)= \gamma_{k}-\left(1-\theta\right)\beta\left(A\tilde x_{k}+By_{k}-b\right)= \gamma_{k}+\left(1-\theta\right)q_{k}.
\]
Hence, since $y_{k}$ is an optimal solution of \eqref{g_sub}, we obtain, for every $k\geq 1$,
\begin{align*}
0&\in\partial g(y_{k})-B^{\ast}\left[\gamma_{k-\frac{1}{2}}-\beta\left(A\tilde x_{k}+By_{k}-b\right)\right] +H\left(y_{k}-y_{k-1}\right)\\
&=\partial g(y_{k})-B^{\ast}\left[\gamma_{k}+\left(1-\theta\right)q_{k}\right] + H\left(y_{k}-y_{k-1}\right).
\end{align*}
Thus, the monotonicity of $\partial g$ and the definition of $p_{k}$ in \eqref{def:pq} imply that, for every $k\geq 2$,
\begin{align*}
0&\leq\Inner{\gamma_{k}-\gamma_{k-1}+\left(1-\theta\right)\left(q_{k}-q_{k-1}\right)}{p_{k}}-\Norm{y_{k}-y_{k-1}}_{H}^{2} +\Inner{H\left(y_{k-1}-y_{k-2}\right)}{y_{k}-y_{k-1}}\\
&=\Inner{\tau\beta p_{k} +\left(1+\tau\right)q_{k}-\left(1-\theta\right)q_{k-1}}{p_{k}}-\Norm{y_{k}-y_{k-1}}_{H}^{2} +\Inner{H\left(y_{k-1}-y_{k-2}\right)}{y_{k}-y_{k-1}}\\
&\leq \tau\beta\Norm{p_{k}}^{2}+\left(1+\tau\right)\Inner{p_{k}}{q_{k}}-\left(1-\theta\right)\Inner{p_{k}}{q_{k-1}}-\frac{1}{2}\Norm{y_{k}-y_{k-1}}_{H}^{2}+\frac{1}{2}\Norm{y_{k-1}-y_{k-2}}_{H}^{2},
\end{align*}
where the equality is due to \eqref{gtk:gk2} and the last inequality is due to the fact that $2\Inner{Hy}{y^{\prime}}\leq \Norm{y}_{H}^{2}+\Norm{y^{\prime}}_{H}^{2}$ for all $y,y^{\prime}\in\mathbb{R}^p$. Therefore, the desired inequality follows immediately from the last one.
\end{proof}
With the above propositions and lemmas, we now prove the inequality in \eqref{eq:bn45}.
\begin{theorem}\label{inq:hpe}
Let $\{z_{k}\}$, $\{\tilde z_{k}\}$ and $\{q_k\}$ be as in \eqref{def:ztz} and \eqref{def:pq}, and assume that $\sigma\in[\hat{\sigma},1)$ is given by Proposition~\ref{lm:coef}. Consider the sequence $\{\eta_{k}\}$ defined by
\begin{equation}\label{def:eta}
\eta_{0}= \frac{4\left(1+\tau+\vartheta\right)\varphi\left(\sigma\right)}{\left(\tau+\theta\right)\left(1+\tau\right)\vartheta}d_{0}, \quad \eta_{k}=\frac{\widetilde{\varphi}\left(\sigma\right)}{\left(\tau+\theta\right)\beta}\Norm{q_{k}}^{2}+\frac{\varphi\left(\sigma\right)}{\left(\tau+\theta\right)\left(1+\tau\right)}\Norm{y_{k}-y_{k-1}}_{H}^{2},\quad \forall\,k\geq 1,
\end{equation}
where $\vartheta$, $d_{0}$, $\varphi$ and $\widetilde{\varphi}$ are as in \eqref{def:vart}, \eqref{defd_0}, \eqref{ph} and \eqref{ph:til}, respectively. Then, for every $k\geq 1$,
\begin{equation}\label{pr:inq}
\Norm{\tilde z_{k}-z_{k}}_{M}^{2}+\eta_{k}\leq \sigma\Norm{\tilde z_{k}-z_{k-1}}_{M}^{2}+\eta_{k-1},
\end{equation}
where $M$ is as in \eqref{def:oper}.
\end{theorem}
\begin{proof}
It follows from Lemma~\ref{lm/abc} that
\begin{align}
\nonumber \sigma\Norm{\tilde z_{k}-z_{k-1}}_{M}^{2}-\Norm{\tilde z_{k}-z_{k}}_{M}^{2}&=\sigma\Norm{\tilde x_{k}-x_{k-1}}_{G}^{2}-\Norm{\tilde x_{k}-x_{k}}_{G}^{2}\\\nonumber
&+\sigma\Norm{y_{k}-y_{k-1}}_{H}^{2} +\frac{\left(1-\tau\right)\left[\left(1+\theta\right)\sigma-\left(1-\tau\right)\right]\beta}{\tau+\theta}\Norm{p_{k}}^{2}\\
&+\frac{2\left(1-\tau\right)\left[\sigma-\left(1-\tau-\theta\right)\right]}{\tau+\theta}\Inner{p_{k}}{q_{k}} +\frac{\sigma-\left(1-\tau-\theta\right)^2}{\left(\tau+\theta\right)\beta}\Norm{q_{k}}^{2}.\label{da:34}
\end{align}
Using the inequality in \eqref{cond:inex2}, the definition of $x_{k}$ in \eqref{mult_2} and noting that $\sigma\geq \hat{\sigma}$, we obtain
\begin{align*}
\sigma\Norm{\tilde x_{k}-x_{k-1}}_{G}^{2}-\Norm{\tilde x_{k}-x_{k}}_{G}^{2} \geq -\frac{\tilde{\sigma}}{\beta}\Norm{\tilde{\gamma}_{k}-\gamma_{k-1}}^{2} = -\tilde{\sigma}\beta\Norm{p_{k}}^{2}-2\tilde{\sigma}\Inner{p_{k}}{q_{k}}-\frac{\tilde{\sigma}}{\beta}\Norm{q_{k}}^{2}
\end{align*}
where the last equality is due to the first expression in \eqref{gtk:gk}. Combining the last inequality with \eqref{da:34} and definitions in \eqref{coef}, we find
\begin{align}\label{two:case}
\sigma\Norm{\tilde z_{k}-z_{k-1}}_{M}^{2}&-\Norm{\tilde z_{k}-z_{k}}_{M}^{2} \geq\frac{\widehat{\varphi}(\sigma)\beta}{\tau+\theta}\Norm{p_{k}}^{2} +\frac{2 \varphi\left(\sigma\right)}{\tau+\theta}\Inner{p_{k}}{q_{k}} +\frac{\widetilde{\varphi}\left(\sigma\right)}{\left(\tau+\theta\right)\beta}\Norm{q_{k}}^{2}.
\end{align}
Let us now consider two cases: $k=1$ and $k\geq 2$.
\\[2mm]
Case 1 ($k=1$): From \eqref{two:case} with $k=1$, Lemma~\ref{cond:ang}$(a)$ and the fact that $\varphi(\sigma)\geq 0$, we have
\begin{equation*}
\sigma\Norm{\tilde z_{1}-z_{0}}_{M}^{2}-\Norm{\tilde z_{1}-z_{1}}_{M}^{2} \geq \frac{\widehat{\varphi}(\sigma)\beta}{\tau+\theta}\Norm{p_{1}}^{2}-\frac{4\varphi\left(\sigma\right)}{\left(\tau+\theta\right)\vartheta}d_{0}+ \frac{\widetilde{\varphi}\left(\sigma\right)}{\left(\tau+\theta\right)\beta}\Norm{q_{1}}^{2}.
\end{equation*}
Hence, in view of the definitions of $\eta_{0}$ and $\eta_{1}$ in \eqref{def:eta}, we conclude that
\begin{align*}
\sigma\Norm{\tilde z_{1}-z_{0}}_{M}^{2}&-\Norm{\tilde z_{1}-z_{1}}_{M}^{2}+\eta_{0}-\eta_{1} \geq \frac{\widehat{\varphi}(\sigma)\beta}{\tau+\theta}\Norm{p_{1}}^{2} +\frac{\varphi\left(\sigma\right)}{\left(\tau+\theta\right)\left(1+\tau\right)}\left[4d_{0}-\Norm{y_{1}-y_{0}}_{H}^{2}\right]\geq 0,
\end{align*}
where the last inequality is due to Lemma~\ref{cond:ang}$(a)$ and Proposition~\ref{lm:coef}. This implies that \eqref{pr:inq} holds for $k=1$.
\\[2mm]
Case 2 ($k\geq2$): It follows from Lemma~\ref{cond:ang}$(b)$ and \eqref{two:case} that
\begin{align*}
&\sigma\Norm{\tilde z_{k}-z_{k-1}}_{M}^{2}-\Norm{\tilde z_{k}-z_{k}}_{M}^{2}\geq\frac{\widehat{\varphi}(\sigma)\beta}{\tau+\theta}\Norm{p_{k}}^{2}+\frac{\widetilde{\varphi}\left(\sigma\right)}{\left(\tau+\theta\right)\beta}\Norm{q_{k}}^{2}\\
&+\frac{\varphi\left(\sigma\right)}{\left(\tau+\theta\right)\left(1+\tau\right)} \left[2\left(1-\theta\right)\Inner{p_{k}}{q_{k-1}}- 2\tau\beta\Norm{p_{k}}^{2}+\Norm{y_{k}-y_{k-1}}_{H}^{2}-\Norm{y_{k-1}-y_{k-2}}_{H}^{2}\right],
\end{align*}
which, combined with the definition of $\eta_{k}$ in \eqref{def:eta}, yields
\begin{align*}
\sigma\Norm{\tilde z_{k}-z_{k-1}}_{M}^{2}-\Norm{\tilde z_{k}-z_{k}}_{M}^{2} +\eta_{k-1}-\eta_{k}&\geq\frac{\left[\left(1+\tau\right)\widehat{\varphi}\left(\sigma\right)-2\tau\varphi\left(\sigma\right)\right]\beta}{\left(\tau+\theta\right)\left(1+\tau\right)}\Norm{p_{k}}^{2} \\
& +\frac{2\left(1-\theta\right)\varphi\left(\sigma\right)}{\left(\tau+\theta\right)\left(1+\tau\right)}\Inner{p_{k}}{q_{k-1}}+\frac{\widetilde{\varphi}\left(\sigma\right)}{\left(\tau+\theta\right)\beta}\Norm{q_{k-1}}^{2}.
\end{align*}
For simplicity, we define constants $a$, $b$ and $c$ by
\begin{equation*}
a=\frac{\left[\left(1+\tau\right)\widehat{\varphi}\left(\sigma\right)-2\tau\varphi\left(\sigma\right)\right]\beta}{\left(\tau+\theta\right)\left(1+\tau\right)}, \qquad b=\frac{\left(1-\theta\right)\varphi\left(\sigma\right)}{\left(\tau+\theta\right)\left(1+\tau\right)}, \qquad \mbox{and}\qquad c=\frac{\widetilde{\varphi}\left(\sigma\right)}{\left(\tau+\theta\right)\beta}.
\end{equation*}
Hence,
\begin{equation}\label{1ax:2cs}
\sigma\Norm{\tilde z_{k}-z_{k-1}}_{M}^{2}-\Norm{\tilde z_{k}-z_{k}}_{M}^{2} +\eta_{k-1}-\eta_{k} \geq a\Norm{p_{k}}^{2}-2b\Inner{p_{k}}{q_{k-1}}+c\Norm{q_{k-1}}^{2}.
\end{equation}
Now, note that
\begin{align*}
ac-{b}^{2}=\frac{\left[\left(1+\tau\right)\widehat{\varphi}\left(\sigma\right)-2\tau\varphi\left(\sigma\right)\right]\left(1+\tau\right)\widetilde{\varphi}\left(\sigma\right)-\left(1-\theta\right)^{2}\left(\varphi\left(\sigma\right)\right)^{2}}{\left(\tau+\theta\right)^{2}\left(1+\tau\right)^{2}} =\frac{\overline{\varphi}\left(\sigma\right)}{\left(\tau+\theta\right)^{2}\left(1+\tau\right)^{2}},
\end{align*}
where $\overline{\varphi}$ is given in \eqref{ph:bar}. Therefore, it follows from Proposition~\ref{lm:coef} that $c>0$ and $ac-{b}^{2}\geq 0$, which, combined with \eqref{1ax:2cs}, implies that \eqref{pr:inq} also holds for $k\geq 2$.
\end{proof}
\begin{remark}
If $\tau=0$ (resp.
$\theta=1$), then Theorems \ref{pr:aux} and \ref{inq:hpe} correspond to Lemma~3.1 and Theorem~3.3 in \cite{adona2018partially} (resp. \cite[Proposition~1(a)]{adonaCOAP}).
\end{remark}
\subsection{Pointwise and ergodic convergence rates of Algorithm~\ref{alg:in:sy}}\label{sec:bound}
In this section, we establish pointwise and ergodic convergence rates for Algorithm~\ref{alg:in:sy}.
\begin{theorem}[Pointwise convergence rate of Algorithm~\ref{alg:in:sy}]\label{th:ptw}
Consider the sequences $\{v_{k}\}$ and $\{w_{k}\}$ defined, for every $k\geq 1$, by
\begin{align}\label{1rsd}
v_{k}&=\left(H+\frac{\left(\tau-\tau\theta+\theta\right)\beta}{\tau+\theta} B^{\ast}B\right)\left(y_{k-1}-y_{k}\right)-\frac{\tau}{\tau+\theta}B^{\ast}\left(\gamma_{k-1}-\gamma_{k}\right),\\\label{2rsd}
w_{k}&=-\frac{\tau}{\tau+\theta}B\left(y_{k-1}-y_{k}\right)+\frac{1}{\left(\tau+\theta\right)\beta}\left(\gamma_{k-1}-\gamma_k\right).
\end{align}
Then, for every $k\geq 1$,
\begin{equation}\label{1rsl}
u_{k}\in\partial f(\tilde x_k)-A^{\ast}\tilde{\gamma}_k, \qquad v_{k}\in \partial g(y_k)-B^{\ast}\tilde{\gamma}_k,\qquad w_{k}= A\tilde{x}_{k}+By_{k}-b,
\end{equation}
and there exists $i\leq k$ such that
\begin{equation}\label{2rsl}
\max\left\{\Norm{u_{i}},\Norm{v_{i}},\Norm{w_{i}}\right\}\leq\sqrt{\frac{2\lambda_{M}d_0\mathcal{C}_1}{k}}
\end{equation}
where $\mathcal{C}_1:= [1+\sigma+{8\left(1+\tau+\vartheta\right)\varphi\left(\sigma\right)}/({\left(\tau+\theta\right)\left(1+\tau\right)\vartheta})]/[1-\sigma]$, $\lambda_{M}$ is the largest eigenvalue of the matrix $M$ defined in \eqref{def:oper}, $\sigma\in[\hat{\sigma},1)$ is given by Proposition~\ref{lm:coef} and $\vartheta$, $\varphi$ and $d_0$ are as in \eqref{def:vart}, \eqref{ph} and \eqref{defd_0}, respectively.
\end{theorem}
\begin{proof}
By noting that $u_{k}=G\left(x_{k-1}-x_{k}\right)$ (see \eqref{mult_2}), the expressions in \eqref{1rsl} follow immediately from \eqref{1rsd}, \eqref{2rsd} and Theorem~\ref{pr:aux}. From Theorem~\ref{pr:aux}, we have $\left(u_{k},v_{k},w_{k}\right)=M(z_{k-1}-z_{k})$ and hence
\begin{align*}
\Norm{\left(u_{k},v_{k},w_{k}\right)}^2&\leq\lambda_M\Norm{z_{k-1}-z_k}_{M}^{2}\leq 2\lambda_M\left[\Norm{z_{k-1}-\tilde{z}_k}_{M}^{2}+\Norm{\tilde z_k-z_k}_{M}^{2}\right]\\
&\leq 2\lambda_M\left[\left(1+\sigma\right)\Norm{z_{k-1}- \tilde z_k}_M^2+\eta_{k-1}-\eta_{k}\right],
\end{align*}
where the last inequality is due to \eqref{pr:inq}. On the other hand, from Lemma~\ref{cor:aux2} and \eqref{pr:inq}, we obtain
\begin{equation*}
\Norm{z^{\ast}-z_k}_{M}^{2}-\Norm{z^{\ast}-z_{k-1}}_{M}^{2}\leq \left(\sigma-1\right)\Norm{\tilde z_k-z_{k-1}}_{M}^{2}+\eta_{k-1}-\eta_{k},
\end{equation*}
where $z^{\ast}\in T^{-1}(0)$.
The last two estimates and the fact that $\sigma<1$ imply that, for every $k\geq 1$,
\begin{align*}
\Norm{\left(u_{k},v_{k},w_{k}\right)}^2&\leq 2\lambda_M\left[\frac{1+\sigma}{1-\sigma}\left(\Norm{z^{\ast}-z_{k-1}}_{M}^2-\Norm{z^{\ast}-z_{k}}_{M}^{2}+\eta_{k-1}-\eta_{k}\right)+\eta_{k-1}-\eta_{k}\right]\\
&=\frac{2\lambda_M}{1-\sigma}\left[\left(1+\sigma\right)\left(\Norm{z^{\ast}-z_{k-1}}_{M}^2-\Norm{z^{\ast}-z_{k}}_{M}^2\right)+2\left(\eta_{k-1}-\eta_{k}\right)\right].
\end{align*}
Summing the above inequality over the first $k$ indices, we obtain
\begin{equation}\label{eq:893}
\sum_{l=1}^{k}\Norm{\left(u_{l},v_{l},w_{l}\right)}^2\leq \frac{2\lambda_M}{1-\sigma}\left[\left(1+\sigma\right)\Norm{z^{\ast}-z_{0}}_{M}^{2}+2\eta_{0}\right],
\end{equation}
which, combined with the definitions of $d_{0}$ and $\eta_{0}$ in \eqref{defd_0} and \eqref{def:eta}, respectively, yields
\begin{equation*}
k\left(\min_{l=1,\ldots,k}\Norm{\left(u_{l},v_{l},w_{l}\right)}^2\right)\leq \frac{2\lambda_M}{1-\sigma}\left[\left(1+\sigma\right)+ \frac{8\left(1+\tau+\vartheta\right)\varphi\left(\sigma\right)}{\left(\tau+\theta\right)\left(1+\tau\right)\vartheta} \right]d_0.
\end{equation*}
Therefore, \eqref{2rsl} now follows from the last inequality and the definition of $\mathcal{C}_1$.
\end{proof}
\begin{remark}\label{remark-pointwise}
(a) It follows from Theorem~\ref{th:ptw} that, for a given tolerance $\rho>0$, Algorithm~\ref{alg:in:sy} generates a $\rho$-approximate solution $(\tilde x_i,y_i,\tilde{\gamma}_i)$ of \eqref{sist:lag} with residual $(u_{i},v_{i},w_i)$, i.e.,
\[
u_{i}\in\partial f(\tilde x_i)-A^{\ast}\tilde{\gamma}_i, \qquad v_{i}\in \partial g(y_i)-B^{\ast}\tilde{\gamma}_i,\qquad w_{i}= A\tilde{x}_{i}+By_{i}-b,
\]
such that
\[
\max\{\|u_i\|, \|v_i\|, \|w_i\|\}\leq \rho,
\]
in at most
\begin{equation*}
\bar{k}=\left\lceil\frac{2\lambda_{M}d_{0}\mathcal{C}_1}{\rho^{2}}\right\rceil
\end{equation*}
iterations.
(b) Theorem~\ref{th:ptw} encompasses several recently established pointwise convergence rates of ADMM variants. Namely, (i) by taking $\tau=0$ and $G=I/\beta$, we obtain the pointwise convergence rate of the partially inexact proximal ADMM established in \cite[Theorem~3.1]{adona2018partially}. Additionally, if $\tilde{\sigma}=\hat{\sigma}=0$, the pointwise rate of the FG-P-ADMM in \cite[Theorem~2.1]{MJR2} is recovered. (ii) By choosing $\theta=1$ and $G=I/\beta$, we have the pointwise rate of the inexact proximal generalized ADMM as in \cite[Theorem~1]{adonaCOAP}. Finally, if $\theta=1$, $G=I/\beta$ and $\tilde{\sigma}=\hat{\sigma}=0$, the pointwise convergence rate of the G-P-ADMM in \cite[Theorem~3.4]{Adona2018} is obtained.
\end{remark}
\begin{theorem}[Ergodic convergence rate of Algorithm~\ref{alg:in:sy}]\label{the_erg}
Consider the sequences $\left\{\left(x^a_k,y^a_k,\gamma^a_k,\tilde{x}^a_k,\tilde\gamma^a_k\right)\right\}$, $\{\left(u_{k}^a,v_{k}^a,w_{k}^{a}\right)\}$, and $\{\left(\varepsilon^a_{k},\zeta^a_{k}\right)\}$ defined, for every $k\geq 1$, by
\begin{equation}\label{erg01}
\left(x^a_k,y^a_k,\gamma^a_k,\tilde{x}^a_k,\tilde\gamma^a_k\right)=\frac{1}{k}\sum_{i=1}^k\left(x_i,y_i,\gamma_i,\tilde{x}_i,\tilde{\gamma}_i\right), \qquad \left(u_{k}^a,v_{k}^a,w_{k}^{a}\right)=\frac{1}{k}\sum_{i=1}^k\left(u_{i},v_{i},w_{i}\right),
\end{equation}
\begin{equation}\label{erg02}
\varepsilon^a_{k}=\frac{1}{{k}}\sum_{i=1}^k\Inner{u_{i}+A^{\ast}\tilde{\gamma}_{i}}{\tilde{x}_i-\tilde{x}_k^a},\qquad\mbox{and}\qquad\zeta^a_{k}=\frac{1}{{k}}\sum_{i=1}^k \Inner{v_{i}+B^{\ast}\tilde{\gamma}_{i}}{y_i-y_k^a},
\end{equation}
where $v_{i}$ and $w_{i}$ are as in \eqref{1rsd} and \eqref{2rsd}, respectively. Then, for every $k\geq 1$, there hold $\varepsilon^a_{k}\geq 0$, $\zeta^{a}_{k}\geq 0$,
\begin{equation}\label{1r_erg}
u_{k}^{a}\in\partial_{\varepsilon^{a}_{k}}f\left(\tilde{x}_k^a\right)- A^{\ast}\tilde{\gamma}_k^a, \qquad v_{k}^{a}\in \partial_{\zeta^{a}_{k}}g\left(y_k^a\right)-B^{\ast}\tilde{\gamma}_k^a, \qquad w_{k}^{a}=A\tilde{x}_{k}^{a}+By_{k}^{a}-b,
\end{equation}
\begin{equation}\label{2r_erg}
\max\left\{\Norm{u_{k}^{a}},\Norm{v_{k}^a},\Norm{w_k^a}\right\} \leq \frac{2\sqrt{\lambda_{M}d_{0}\mathcal{C}_2}}{k}, \qquad \max\left\{\varepsilon^a_{k},\zeta^a_{k}\right\}\leq \frac{3d_0\mathcal{C}_3}{2k},
\end{equation}
where $\mathcal{C}_2:=[1+{4\left(1+\tau+\vartheta\right)\varphi\left(\sigma\right)}/({\left(\tau+\theta\right)\left(1+\tau\right)\vartheta})]$ and $\mathcal{C}_3:=(3-2\sigma)\mathcal{C}_2/(1-\sigma)$, $\lambda_{M}$ is the largest eigenvalue of the matrix $M$ defined in \eqref{def:oper}, $\sigma\in[\hat{\sigma},1)$ is given by Proposition~\ref{lm:coef} and $\vartheta$, $\varphi$ and $d_0$ are as in \eqref{def:vart}, \eqref{ph} and \eqref{defd_0}, respectively.
\end{theorem}
\begin{proof}
For every $i\geq 1$, it follows from Theorem~\ref{th:ptw} that
\begin{equation*}
u_{i}+A^{\ast}\tilde{\gamma}_i\in\partial f(\tilde{x}_{i}),\qquad v_{i}+B^{\ast}\tilde{\gamma}_i\in\partial g(y_i),\qquad w_{i}= A\tilde{x}_{i}+B y_{i}-b.
\end{equation*}
Hence, using \eqref{erg01}, we immediately obtain the last equality in \eqref{1r_erg}. Furthermore, from the first two inclusions above, \eqref{erg01}, \eqref{erg02} and \cite[Theorem 2.1]{Goncalves2018} we conclude that, for every $k\geq 1$, $\varepsilon^a_{k}\geq 0$, $\zeta^a_{k}\geq 0$, and the inclusions in \eqref{1r_erg} hold. To show \eqref{2r_erg}, we recall again that $\left(u_{i},v_{i},w_{i}\right)=M\left(z_{i-1}-z_{i}\right)$ (see the proof of Theorem~\ref{th:ptw}), which together with \eqref{erg01}, yields $\left(u_{k}^{a},v_{k}^a,w_{k}^a\right)=\left(1/k\right)M\left(z_{0}-z_{k}\right)$.
Then, for an arbitrary solution $z^{\ast}=\left(x^{\ast},y^{\ast},\gamma^{\ast}\right)$ of \eqref{sist:lag}, we have
\begin{align*}
\Norm{\left(u_{k}^{a},v_{k}^a,w_{k}^a\right)}^{2} \leq\frac{\lambda_{M}}{k^2}\Norm{z_{0}-z_{k}}^2_M \leq \frac{2\lambda_{M}}{k^2}\left(\Norm{z^{\ast}-z_{0}}_{M}^{2} + \Norm{z^*-z_k}^{2}_{M}\right).
\end{align*}
Combining Lemma~\ref{cor:aux2} with \eqref{pr:inq}, we obtain, for every $k\geq 1$, that
\begin{equation}\label{es:zst}
\Norm{z^{\ast}-z_k}^{2}_{M}+\eta_{k}\leq \Norm{z^{\ast}-z_{k-1}}^{2}_{M}+\left(\sigma-1\right)\Norm{\tilde z_{k}-z_{k-1}}^{2}_{M}+\eta_{k-1} \leq\Norm{z^{\ast}-z_{k-1}}^{2}_{M}+\eta_{k-1}.
\end{equation}
The last two expressions imply that
\begin{equation*}\label{eq:ui56}
\Norm{\left(u_{k}^{a},v_{k}^a,w_{k}^a\right)}^{2}\leq \frac{4\lambda_{M}}{k^2}\left(\Norm{z^{\ast}-z_{0}}_{M}^{2}+\eta_{0} \right),
\end{equation*}
which, combined with the definitions of $d_{0}$ and $\eta_{0}$ in \eqref{defd_0} and \eqref{def:eta}, respectively, implies the first inequality in \eqref{2r_erg}. Let us now show the second inequality in \eqref{2r_erg}. From definitions in \eqref{erg02}, we have
\begin{align*}
\varepsilon_{k}^a+\zeta_{k}^a &=\dfrac{1}{k}\sum_{i=1}^k\,\Big(\Inner{u_{i}}{\tilde{x}_i-\tilde{x}_k^a}+\Inner{v_{i}}{y_i-y_k^a}+\Inner{\tilde{\gamma}_i}{A\tilde{x}_i+By_i-A\tilde{x}_k^a-By_k^a}\Big)\\
&=\dfrac{1}{k}\sum_{i=1}^k\,\Big(\Inner{u_{i}}{\tilde{x}_i-\tilde{x}_k^a}+\Inner{v_{i}}{y_i-y_k^a}+\Inner{\tilde{\gamma}_i}{w_{i}-w_{k}^{a}}\Big)\\
&=\dfrac{1}{k}\sum_{i=1}^k\,\Big(\Inner{u_{i}}{\tilde{x}_i-\tilde{x}_k^a}+\Inner{v_{i}}{y_i-y_k^a}+\Inner{w_i}{\tilde{\gamma}_{i}-\tilde{\gamma}_{k}^{a}}\Big),
\end{align*}
where the second equality is due to the expressions of $w_{i}$ and $w_{k}^{a}$ in \eqref{1rsl} and \eqref{1r_erg}, respectively, and the third follows from the fact that
\begin{align*}
\frac{1}{k}\sum_{i=1}^k\Inner{\tilde{\gamma}_i}{w_{i}-w_{k}^a}=\frac{1}{k}\sum_{i=1}^k\Inner{\tilde{\gamma}_i-\tilde{\gamma}_k^a}{w_{i}-w_{k}^a}= \frac{1}{k}\sum_{i=1}^k\Inner{w_{i}}{\tilde{\gamma}_i-\tilde{\gamma}_k^a}
\end{align*}
(see the definitions of $w_{k}^{a}$ and $\tilde{\gamma}_{k}^{a}$ in \eqref{erg01}). Hence, setting $\tilde{z}_{k}^{a}=(\tilde{x}_k^a,y_k^a,\tilde{\gamma}_k^a)$, and noting that $(u_{i},v_{i},w_{i})=M\left(z_{i-1}-z_{i}\right)$ and $\tilde{z}_{i}=(\tilde{x}_i,y_i,\tilde{\gamma}_i)$, we obtain
\begin{equation}\label{ep:ze}
\varepsilon_{k}^a+\zeta_{k}^a= \frac{1}{k}\sum_{i=1}^k\Inner{M\left(z_{i-1}-z_{i}\right)}{\tilde{z}_{i}-\tilde{z}_{k}^{a}}.
\end{equation}
On the other hand, using \eqref{pr:inq}, we deduce that for all $z\in \mathbb{R}^n\times\mathbb{R}^p\times \mathbb{R}^m$
\begin{align*}
\Norm{z-z_{i}}_{M}^{2}-\Norm{z-z_{i-1}}_{M}^{2} &=\Norm{\tilde{z}_{i}-z_{i}}_{M}^{2}-\Norm{\tilde{z}_{i}-z_{i-1}}_{M}^{2}+2\Inner{M(z_{i-1}-z_{i})}{z-\tilde{z}_{i}}\\
&\leq (\sigma-1)\Norm{\tilde{z}_{i}-z_{i-1}}_{M}^{2} +\eta_{i-1}-\eta_{i}+2\Inner{M(z_{i-1}-z_{i})}{z-\tilde{z}_{i}},
\end{align*}
and then, since $\sigma<1$, we find
\begin{equation*}
2\sum_{i=1}^k \Inner{M(z_{i-1}-z_{i})}{\tilde{z}_{i}-z}\leq \Norm{z-z_{0}}_{M}^{2}-\Norm{z-z_{k}}_{M}^{2} +\eta_{0}-\eta_{k}\leq \Norm{z-z_{0}}_{M}^{2}+\eta_{0}.
\end{equation*}
Applying this result with $z:=\tilde{z}_{k}^{a}$ and combining with \eqref{ep:ze}, we find
\begin{align}\label{2ep:ze}
2k(\varepsilon_{k}^a+\zeta_{k}^a)\leq \Norm{\tilde{z}_{k}^{a}-z_{0}}_{M}^{2}+\eta_{0} \leq \frac{1}{k}\sum_{i=1}^k \Norm{\tilde{z}_{i}-z_{0}}_{M}^{2}+\eta_{0} \leq \max_{i=1,\ldots,k}\Norm{\tilde{z}_{i}-z_{0}}_{M}^{2}+\eta_{0},
\end{align}
where, in the second inequality, we used the convexity of $\|\cdot\|_{M}^{2}$ and the fact that $\tilde{z}_{k}^{a}=(1/k)\sum_{i=1}^{k}\tilde{z}_{i}$. Additionally, since $\|z+z^{\prime}+z^{\prime\prime}\|_{M}^2\leq 3\left(\|z\|_{M}^2+\|z^{\prime}\|_{M}^2+\|z^{\prime\prime}\|_{M}^2\right)$, for all $z, z^{\prime}, z^{\prime\prime}\in \mathbb{R}^n\times\mathbb{R}^p\times \mathbb{R}^m$, we also have
\begin{equation*}
\Norm{\tilde{z}_{i}-z_{0}}_{M}^{2}\leq 3\left[\Norm{\tilde{z}_{i}-z_{i}}_{M}^{2}+\Norm{z^{\ast}-z_{i}}_{M}^{2}+\Norm{z^{\ast}-z_{0}}_{M}^{2}\right],\qquad\forall\,i\geq 1.
\end{equation*}
This, together with \eqref{pr:inq} and \eqref{es:zst}, implies that
\begin{align*}
\Norm{\tilde{z}_{i}-z_{0}}_{M}^{2} &\leq 3\left[\sigma\Norm{\tilde{z}_{i}-z_{i-1}}_{M}^{2}+\eta_{i-1}+\Norm{z^{\ast}-z_{i-1}}_{M}^{2}+\eta_{i-1}+\Norm{z^{\ast}-z_{0}}_{M}^{2}\right]\\
&\leq 3\left[\sigma\Norm{\tilde{z}_{i}-z_{i-1}}_{M}^{2}+2\left(\Norm{z^{\ast}-z_{i-1}}_{M}^{2}+\eta_{i-1}\right)+\Norm{z^{\ast}-z_{0}}_{M}^{2}\right]\\
&\leq 3\left[\sigma\Norm{\tilde{z}_{i}-z_{i-1}}_{M}^{2}+3\Norm{z^{\ast}-z_{0}}_{M}^{2}+2\eta_{0}\right],
\end{align*}
which, combined with \eqref{2ep:ze}, yields
\begin{equation*}
2k\left(\varepsilon_{k}^a+\zeta_{k}^a\right)\leq 3\left[3\left(\Norm{z^{\ast}-z_{0}}_{M}^{2}+\eta_{0}\right)+\sigma\max_{i=1,\ldots,k}\Norm{\tilde{z}_{i}-z_{i-1}}_{M}^{2}\right].
\end{equation*}
Now, from \eqref{es:zst}, it is also possible to verify that
\begin{equation*}
\left(1-\sigma\right)\Norm{\tilde{z}_{i}-z_{i-1}}_{M}^{2}\leq\Norm{z^{\ast}-z_{i-1}}_{M}^{2}+ \eta_{i-1}\leq \Norm{z^{\ast}-z_{0}}_{M}^{2}+ \eta_{0},
\end{equation*}
and, therefore
\begin{equation}\label{eq:id90}
\varepsilon_{k}^a+\zeta_{k}^a\leq \frac{3(3-2\sigma)}{2(1-\sigma)k}\left(\Norm{z^{\ast}-z_{0}}_{M}^{2}+ \eta_{0}\right).
\end{equation}
Therefore, the second inequality in \eqref{2r_erg} now follows from the definitions of $d_{0}$ and $\eta_{0}$ in \eqref{defd_0} and \eqref{def:eta}, respectively.
\end{proof}
\begin{remark}\label{remark-ergodic}
(a) It follows from Theorem~\ref{the_erg} that, for a given tolerance $\rho>0$, Algorithm~\ref{alg:in:sy} generates a $\rho$-approximate solution $(\tilde x^a_k,y^a_k,\tilde{\gamma}^a_k)$ of \eqref{sist:lag} with residuals $(u^a_{k},v^a_{k},w^a_k)$ and $(\varepsilon^a_{k},\zeta^a_k)$, i.e.,
\[
u_{k}^a\in \partial_{\varepsilon^a_{k}}f(\tilde{x}_k^a)- A^*\tilde{\gamma}_k^a,\qquad v_{k}^a\in \partial_{{\zeta^{a}_{k}}}g(y_k^a)- B^*\tilde{\gamma}_k^a, \qquad w_{k}^a=A\tilde{x}_k^a+By_k^a-b,
\]
such that
\[
\max \{\|u_{k}^a\|,\|v_{k}^a\|,\|w_{k}^a\|,\varepsilon^a_{k},{\zeta^{a}_{k}} \}\leq \rho,
\]
in at most $\bar{k}=\max\left\{k_{1},k_{2}\right\}$ iterations, where
\begin{equation*}
k_{1}=\left\lceil\frac{2\sqrt{\lambda_{M}d_{0}\mathcal{C}_2}}{\rho}\right\rceil,\qquad\text{and}\qquad k_{2}=\left\lceil\frac{3d_{0}\mathcal{C}_3}{2\rho}\right\rceil.
\end{equation*}
(b) Similarly to Theorem~\ref{th:ptw}, Theorem~\ref{the_erg} recovers, in particular, several recently established ergodic convergence rates of ADMM variants. Namely, (i) by taking $\tau=0$ and $G=I/\beta$, we obtain the ergodic convergence rate of the partially inexact proximal ADMM established in \cite[Theorem~3.2]{adona2018partially}. Additionally, if $\tilde{\sigma}=\hat{\sigma}=0$, the ergodic rate of the FG-P-ADMM with $\theta \in (0,(1+\sqrt{5})/2)$ in \cite[Theorem~2.2]{MJR2} is obtained. (ii) By choosing $\theta=1$ and $G=I/\beta$, we have the ergodic rate of the inexact proximal generalized ADMM as in \cite[Theorem~2]{adonaCOAP}. Finally, if $\theta=1$, $G=I/\beta$ and $\tilde{\sigma}=\hat{\sigma}=0$, the ergodic convergence rate of the G-P-ADMM with $\tau\in(-1,1)$ in \cite[Theorem~3.6]{Adona2018} is recovered.
\end{remark}
\section{Numerical experiments}\label{sec:Numer}
The purpose of this section is to assess the practical behavior of the proposed method. We first mention that the inexact FG-P-ADMM (Algorithm~\ref{alg:in:sy} with $\tau=0$) and the inexact G-P-ADMM (Algorithm~\ref{alg:in:sy} with $\theta=1$) have been shown to be very efficient in some applications. Indeed, as reported in \cite{adona2018partially}, the inexact FG-P-ADMM with $\theta=1.6$ outperformed other inexact ADMMs for two classes of problems, namely, LASSO and $\ell_1$-regularized logistic regression. On the other hand, the inexact G-P-ADMM, proposed later in \cite{adonaCOAP}, with $\tau=0.9$ (or $\alpha=1.9$ in terms of the relaxation factor $\alpha$) was shown to be even more efficient than the FG-P-ADMM with $\theta=1.6$ for these same classes of problems. Therefore, our goal here is to investigate the efficiency of Algorithm~\ref{alg:in:sy}, which combines both acceleration parameters $\tau$ and $\theta$ in a single method, for solving another real-life application. The computational results were obtained using MATLAB R2018a on a 2.4 GHz Intel(R) Core i7 computer with 8 GB of RAM.
We use as test problem the total variation (TV) regularization problem (a.k.a.
TV/L2 minimization), first proposed in \cite{Rudin1992259},
\begin{equation}\label{im:rest}
\min_{x\in \mathbb{R}^{m\times n}} \frac{\mu}{2}\Norm{Kx-c}^{2}+ \Norm{x}_{TV},
\end{equation}
where $x\in\mathbb{R}^{m\times n}$ is the original image to be restored, $\mu$ is a positive regularization parameter, $K:\mathbb{R}^{m\times n}\to \mathbb{R}^{m\times n}$ is a linear operator representing some blurring operator, $c\in\mathbb{R}^{m\times n}$ is the degraded image and $\norm{\cdot}_{TV}$ is the discrete TV-norm. Let us briefly recall the definition of the TV-norm. Let $x\in \mathbb{R}^{m\times n}$ be given and consider $D^1$ and $D^2$ the first-order finite difference $m\times n$ matrices in the horizontal and vertical directions, respectively, which, under the periodic boundary condition, are defined by
\begin{equation*}
(D^1x)_{i,j} = \begin{cases} x_{i+1,j}- x_{i,j} &\text{if}\quad i< m,\\ x_{1,j}- x_{m,j} &\text{if}\quad i= m, \end{cases} \qquad (D^2x)_{i,j} = \begin{cases} x_{i,j+1}- x_{i,j} &\text{if}\quad j<n,\\ x_{i,1}- x_{i,n} &\text{if}\quad j=n, \end{cases}
\end{equation*}
for $i=1,2,\ldots,m$ and $j=1,2,\ldots,n$. By defining $D=\left(D^1;D^2\right)$, we obtain
\begin{equation}\label{def:TV}
\Norm{x}_{TV}=\Norm{x}_{TV_{s}}:=\normiii*{Dx}_{s}:=\sum_{i=1}^{m}\sum_{j=1}^{n}\Norm{\left(Dx\right)_{i,j}}_{s},
\end{equation}
where $\left(Dx\right)_{i,j}=\left(\left(D^1x\right)_{i,j},\left(D^2x\right)_{i,j}\right)\in\mathbb{R}^{2}$ and $s=1$ or $2$. The TV norm is known as anisotropic if $s=1$ and isotropic if $s=2$. Here, we consider only the isotropic case. By introducing an auxiliary variable $y=(y^1,y^2)$, where $y^1,y^2\in\mathbb{R}^{m\times n}$, and in view of the definition in \eqref{def:TV}, the problem in \eqref{im:rest} can be written as
\begin{equation}\label{probl1}
\min_{x,y} \frac{\mu}{2}\Norm{Kx-c}^{2}+\normiii*{y}_{2}\quad \mbox{s.t.}\quad y = Dx,
\end{equation}
which is an instance of \eqref{optl} with $f(x)=\frac{\mu}{2}\Norm{Kx-c}^{2}$, $g(y)=\normiii*{y}_{2}$, $A=-D$, $B=I$, and $b=0$. In this case, the pair $(\tilde{x}_k, u_k)$ in \eqref{cond:inex} can be obtained by computing an approximate solution $\tilde{x}_k$, with residual $u_k$, of the linear system
\[
\left(\mu K^{\top}K+\beta D^{\top}D\right)x= \mu K^{\top}c + D^{\top}\left(\beta y_{k-1}-\gamma_{k-1}\right).
\]
In our implementation, this linear system was rewritten as a system in the $mn\times 1$ vectorized unknown and then solved by means of the conjugate gradient method \cite{nocedal2006numerical} starting from the origin. Note that, by using the two-dimensional shrinkage operator \cite{Wang2008248,Yang2009569}, the subproblem \eqref{g_sub} has a closed-form solution $y_k=\left(y^1_k,y^2_k\right)$ given explicitly by
\[
\left(\left(y^1_k\right)_{i,j},\left(y^2_k\right)_{i,j}\right):=\max\left\{\Norm{(w^1_{i,j},w^2_{i,j})}-\frac{1}{\beta}, 0\right\}\left(\frac{w^1_{i,j}}{\Norm{(w^1_{i,j},w^2_{i,j})}},\frac{w^2_{i,j}}{\Norm{(w^1_{i,j},w^2_{i,j})}}\right),
\]
for $i=1,2,\ldots,m$ and $j=1,2,\ldots,n$, where
\[
\left(w^1,w^2\right):=(D^1\tilde{x}_{k}+(1/\beta)\gamma^1_{k-\frac{1}{2}}, D^2\tilde{x}_{k}+(1/\beta)\gamma^2_{k-\frac{1}{2}}),
\]
and the convention $0\cdot(0/0)=0$ is followed.
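For concreteness, a minimal MATLAB sketch of these two ingredients (the periodic finite differences defining the discrete TV-norm and the two-dimensional shrinkage step) is given below; the function name and the use of \texttt{circshift} for the periodic differences are illustrative choices rather than a description of our actual implementation.
\begin{verbatim}
% Minimal sketch: periodic finite differences, isotropic TV-norm, and the
% two-dimensional shrinkage step that solves the y-subproblem in closed form.
function [y1, y2, tv] = tv_shrinkage_sketch(xt, gamma1, gamma2, beta)
  D1x = circshift(xt, [-1 0]) - xt;   % (D^1 x)_{i,j} = x_{i+1,j} - x_{i,j}
  D2x = circshift(xt, [0 -1]) - xt;   % (D^2 x)_{i,j} = x_{i,j+1} - x_{i,j}
  tv  = sum(sum(sqrt(D1x.^2 + D2x.^2)));       % isotropic TV-norm (s = 2)
  % shrinkage applied to w = D*xt + gamma/beta, with gamma = gamma_{k-1/2}
  w1 = D1x + gamma1/beta;
  w2 = D2x + gamma2/beta;
  nw   = sqrt(w1.^2 + w2.^2);
  coef = max(nw - 1/beta, 0) ./ max(nw, eps);  % convention 0*(0/0) = 0
  y1 = coef .* w1;
  y2 = coef .* w2;
end
\end{verbatim}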
The initialization parameters in Algorithm~\ref{alg:in:sy} were set as follows: $(x_{0},y_{0},\gamma_{0})=(0,0,0)$, $\beta=1$, $G=I/\beta$, $H={\bf 0}$ and $\hat{\sigma}=1-10^{-8}$. From \eqref{def:Reg} (see also Remark~\ref{remarkalg}(a)), for given $\tau\in\left(-1,1\right)$ and $\theta\in\left(-\tau,\left(1- \tau + \sqrt{5+2\tau-3\tau^2}\right)/2\right)$, the error tolerance parameter $\tilde{\sigma}$ was defined as
\begin{equation*}
\tilde{\sigma}=0.99\times \begin{cases} \min\left\{\dfrac{\left(1+\tau+\theta-\tau\theta-\tau^{2}-\theta^{2}\right)\left(\tau-1\right)}{\tau^{2}-2\theta+\theta^{2}}, 1-\tau, 1\right\}, &\!\!\text{if}\, \tau^{2}-2\theta+\theta^{2}<0,\\ \min\left\{1-\tau, 1\right\}, &\!\!\text{if}\, \tau^{2}-2\theta+\theta^{2}\geq 0. \end{cases}
\end{equation*}
Moreover, we used the following stopping criterion:
\begin{equation*}\label{crit:stop}
\Norm{M(z_{k-1}-z_{k})}_{\infty}< 10^{-2},
\end{equation*}
where $z_{k}=(x_{k},y_{k},\gamma_{k})$ and $M$ is as in \eqref{def:oper}. We considered six test images, which were scaled in intensity to $[0,1]$, namely, (a) Barbara ($512\times 512$), (b) baboon ($512\times 512$), (c) cameraman ($256\times 256$), (d) Einstein ($225\times 225$), (e) clock $(256\times 256)$, and (f) moon $(347\times 403)$. All images were blurred by a Gaussian blur of size $9\times 9$ with standard deviation $5$ and then corrupted by a mean-zero Gaussian noise with variance $10^{-4}$. The regularization parameter $\mu$ was set equal to $10^3$. The quality of the images was measured by the peak signal-to-noise ratio (PSNR) in decibels (dB):
\begin{equation*}
\text{PSNR} =10\log_{10}\left(\frac{\bar{x}_{\text{max}}^{2}}{\text{MSE}}\right),
\end{equation*}
where $\text{MSE}=\frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\left(\bar{x}_{i,j}-x_{i,j}\right)^{2}$, $\bar{x}_{\text{max}}$ is the maximum possible pixel value of the original image, and $\bar{x}$ and $x$ are the original image and the recovered image, respectively. Tables~\ref{tab:22}--\ref{tab:77} report the numerical results of Algorithm~\ref{alg:in:sy}, with some choices of $(\tau,\theta)$ satisfying \eqref{def:Reg}, for solving the six TV regularization problem instances. In the tables, ``Out'' and ``Inner'' denote the number of outer iterations and the total number of inner iterations of the method, respectively, whereas ``Time'' is the CPU time in seconds. We mention that, for each problem instance, the final PSNRs were the same for all $(\tau,\theta)$ considered. We display these values in the tables, together with the PSNRs of the corrupted images.
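As a further illustration, a minimal MATLAB sketch of how the tolerance $\tilde{\sigma}$ and the PSNR above can be evaluated is given next; again, the function name is an illustrative choice and the snippet is not a transcript of our experimental code.
\begin{verbatim}
% Minimal sketch: error-tolerance parameter sigma_tilde for a given (tau, theta),
% and the PSNR (in dB) between the original image xbar and a recovered image x.
function [sigma_t, psnr_db] = parameters_sketch(tau, theta, xbar, x)
  den = tau^2 - 2*theta + theta^2;
  if den < 0
      t1 = (1 + tau + theta - tau*theta - tau^2 - theta^2)*(tau - 1)/den;
      sigma_t = 0.99*min([t1, 1 - tau, 1]);
  else
      sigma_t = 0.99*min(1 - tau, 1);
  end
  mse     = mean((xbar(:) - x(:)).^2);       % mean squared error
  psnr_db = 10*log10(max(xbar(:))^2/mse);    % peak signal-to-noise ratio
end
\end{verbatim}
\begin{table}[h!]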
\resizebox{\textwidth}{!}{
\begin{minipage}{0.5\textwidth}
\centering
\caption{Baboon $512\times 512$}\label{tab:22}
\begin{tabular}{cccrrr}
\toprule
\multicolumn{6}{c}{PSNR: input 19.35dB, output 20.71dB}\\\midrule
$\tau$ &$\theta$ &$\tilde{\sigma}$ &Out &Inner &Time\\\midrule
0.0 &1.00 &0.990 &131 &11723 &503.12\\
0.0 &1.60 &0.062 &100 &11748 &495.38\\
0.9 &1.00 &0.099 &71 &7408 &314.51\\
0.7 &1.12 &0.175 &73 &7224 &312.27\\
0.7 &1.15 &0.142 &71 &7120 &303.42\\
0.7 &1.18 &0.107 &70 &7205 &309.97\\
0.8 &1.12 &0.074 &76 &8322 &387.54\\
0.8 &1.15 &0.040 &75 &8672 &393.05\\
\bottomrule
\end{tabular}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\centering
\caption{Barbara $512\times 512$}
\begin{tabular}{cccrrr}
\toprule
\multicolumn{6}{c}{PSNR: input 22.59dB, output 23.81dB}\\\midrule
$\tau$ &$\theta$ &$\tilde{\sigma}$ &Out &Inner &Time\\\midrule
0.0 &1.00 &0.990 &142 &12910 &574.58\\
0.0 &1.60 &0.062 &105 &12403 &538.17\\
0.9 &1.00 &0.099 &80 &8620 &411.47\\
0.7 &1.12 &0.175 &84 &8643 &394.74\\
0.7 &1.15 &0.142 &82 &{8583} &391.85\\
0.7 &1.18 &0.107 &82 &8835 &392.69\\
0.8 &1.12 &0.074 &{{79}} &8665 &{371.47}\\
0.8 &1.15 &0.040 &{79} &9110 &400.56\\
\bottomrule
\end{tabular}
\end{minipage}
}
\end{table}
\begin{table}[h!]
\resizebox{\textwidth}{!}{
\begin{minipage}{0.5\textwidth}
\centering
\caption{Einstein $225\times 225$}
\begin{tabular}{cccrrr}
\toprule
\multicolumn{6}{c}{PSNR: input 23.70dB, output 28.24dB}\\\midrule
$\tau$ &$\theta$ &$\tilde{\sigma}$ &Out &Inner &Time\\\midrule
0.0 &1.00 &0.990 &120 &10506 &56.86\\
0.0 &1.60 &0.062 &88 &9968 &52.84\\
0.9 &1.00 &0.099 &72 &7560 &40.47\\
0.7 &1.12 &0.175 &68 &6573 &34.54\\
0.7 &1.15 &0.142 &74 &7631 &40.88\\
0.7 &1.18 &0.107 &73 &7566 &40.69\\
0.8 &1.12 &0.074 &71 &7685 &40.00\\
0.8 &1.15 &0.040 &70 &7830 &40.27\\
\bottomrule
\end{tabular}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\centering
\caption{Moon $347\times 403$}\label{tab:77}
\begin{tabular}{cccrrr}
\toprule
\multicolumn{6}{c}{PSNR: input 25.57dB, output 28.28dB}\\\midrule
$\tau$ &$\theta$ &$\tilde{\sigma}$ &Out &Inner &Time\\\midrule
0.0 &1.00 &0.990 &128 &11684 &249.27\\
0.0 &1.60 &0.062 &88 &10239 &215.91\\
0.9 &1.00 &0.099 &72 &7828 &168.30\\
0.7 &1.12 &0.175 &76 &7921 &170.00\\
0.7 &1.15 &0.142 &74 &7796 &205.34\\
0.7 &1.18 &0.107 &73 &7909 &194.64\\
0.8 &1.12 &0.074 &68 &7412 &161.05\\
0.8 &1.15 &0.040 &67 &7711 &181.89\\
\bottomrule
\end{tabular}
\end{minipage}
}
\end{table}
From the tables, we can clearly see the numerical benefits of using the acceleration parameters $\tau >0$ and $\theta>1$. Note that Algorithm~\ref{alg:in:sy} with the choice $(\tau,\theta)=(0,1)$ had the worst performance, in terms of the three performance measures, for all problem instances. Note also that Algorithm~\ref{alg:in:sy} with $(\tau,\theta)=(0.9,1)$ ($0.9$ was the best value for $\tau$ in \cite{adonaCOAP}) performed better than Algorithm~\ref{alg:in:sy} with $(\tau,\theta)=(0,1.6)$ ($1.6$ was the best value for $\theta$ in \cite{adona2018partially}); such behavior was also observed in \cite{adonaCOAP} for the LASSO and $\ell_1$-regularized logistic regression problems. We stress that Algorithm~\ref{alg:in:sy} with $(\tau,\theta)=(0.8,1.12)$ was the fastest in four of the six instances (Barbara, cameraman, clock and moon). Fig.~\ref{figCamer} plots the original and corrupted images as well as the images restored by Algorithm~\ref{alg:in:sy} with $(\tau,\theta)=(0.8,1.12)$ for the six instances. In summary, we can conclude that combining the acceleration parameters $\tau$ and $\theta$ is also an efficient strategy for inexact ADMMs applied to real-life problems.
\begin{figure}
\caption{Results on the images (top to bottom): ``Baboon'', ``Barbara'', ``Cameraman'', ``Clock'', ``Einstein'' and ``Moon''. The first column shows the original images, the second the blurred and noisy images, and the third the images restored by Algorithm~\ref{alg:in:sy} with $(\tau,\theta)=(0.8,1.12)$.}
\label{figCamer}
\end{figure}
\section{Final remarks}\label{conclusion}
We proposed an inexact symmetric proximal ADMM for solving linearly constrained optimization problems. Under appropriate hypotheses, the global $\mathcal{O}(1/\sqrt{k})$ pointwise and $\mathcal{O}(1/k)$ ergodic convergence rates of the proposed method were established for a domain of the acceleration parameters which is consistent with the largest known one in the exact case. Numerical experiments were carried out in order to illustrate the numerical behavior of the new method. They indicate that the proposed scheme represents a useful tool for solving real-life applications. To the best of our knowledge, this is the first time that an inexact variant of the symmetric proximal ADMM has been proposed and analyzed.
\end{document}
\begin{document} \title[The virtual intersection theory of isotropic Quot Schemes]{The virtual intersection theory of isotropic Quot Schemes} \begin{abstract} Isotropic Quot schemes parameterize rank $r$ isotropic subsheaves of a vector bundle equipped with symplectic or symmetric quadratic form. We define a virtual fundamental class for isotropic Quot schemes over smooth projective curves. Using torus localization, we prescribe a way to calculate top intersection numbers of tautological classes, and obtain explicit formulas when $r=2$. These include and generalize the Vafa-Intriligator formula. In this setting, we compare the Quot scheme invariants with the invariants obtained via the stable map compactification. \end{abstract} \maketitle \section{Introduction} The isotropic Grassmannian $\SG(r,\mathbb{C}^N)$ (or $\OG(r,\mathbb{C}^N)$) is the variety parameterizing $r$ dimensional isotropic subspaces of a vector space $\mathbb{C}^N$ endowed with symplectic (or symmetric) non-degenerate bilinear form. The classical intersection theory of the Grassmannian $\G(r,\mathbb{C}^N)$ and isotropic Grassmannians has been an important subject connecting many areas of mathematics. The Quot scheme is a natural generalization of Grassmannian. Fix a smooth projective curve $C$ of genus $g$. The Quot scheme $\quot_d(V,r,C)$ (for short $\quot_d$) parameterizes degree $-d$, rank $r$ sub-sheaves of a fixed vector bundle $V$ over $C$. Let $L$ be a line bundle over $C$ and let $\sigma$ be a symplectic or symmetric non-degenerate $L$-valued form on $V$: \[\sigma: V\otimes V\to L. \] A subsheaf $S\subset V$ is isotropic if the restriction $\sigma|_{S\otimes S}=0$. The isotropic Quot scheme $\IQ_d(V,\sigma,r,C)$ (for short $\IQ_d$) is the closed subscheme of $\quot_d$ consisting of isotropic subsheaves. When $V$ is the trival rank $N$ bundle, $\quot_d$ provides a natural compactification of $\Mor_d(C,\G(r,\mathbb{C}^N))$, the scheme parameterizing degree $d$ maps from $C$ to the Grassmannian $\G(r,\mathbb{C}^N)$. Moreover, when $L$ is trivial and $\sigma$ is induced by a symplectic or symmetric form on $\mathbb{C}^N$ (we call such $\sigma $ standard), $\IQ_d$ gives a natural compactification for the space of maps $\Mor_d(C,\SG(r,\mathbb{C}^N))$ and $\Mor_d(C,\OG(r,\mathbb{C}^N))$ respectively. Another way to compactify the morphism space is via stable maps. This compactification is important for defining quantum cohomology (see \cite{RuTi_quantum_coh}). A geometric comparison between the Quot scheme and the stable map compactification was done in \cite{Popa_Roth}. A presentation for the quantum cohomology of $\G(r,\mathbb{C}^N)$ was derived in \cite{Q_C_Fano}, and a formula for Gromov-Ruan-Witten (GRW) invariants was proven. The presentations for the quantum cohomology rings of the isotropic Grassmannians were obtained in \cite{QC_LG}, \cite{Quantum_pieri_OG} and \cite{QC_isotropic_grassmannians}. The intersection theory of the Quot scheme was studied extensively in \cite{1994alg.geom..3007B}, \cite{Bert_Dask_Went}, \cite{Bertram_1997} and \cite{marian2007}. In particular, GRW invariants were recovered and new calculations were performed in \cite{marian2007}. The isotropic analogue of the Quot scheme first appeared as the Lagrangian Quot scheme over $\mathbb{P}^1$ (parameterizing maximal rank isotropic subsheaves) in \cite{QC_LG}. The Lagrangian Quot schemes have been recently studied in all genera in \cite{cheong2020irreducibility},\cite{cheong2019counting}. 
In this paper, we construct a virtual fundamental class for $\IQ_d$ for all $V$, all ranks $r$, all degrees and all genera. When $V$ is trivial and $\sigma$ is standard, we use virtual localization \cite{Graber1997LocalizationOV} to study the virtual intersection theory of $\IQ_d$. We prescribe a way to calculate top intersection numbers of tautological classes, and obtain explicit formulas when $r=2$. We further compute the Gromov-Ruan-Witten invariants obtained via the stable map compactification for the corresponding isotropic Grassmannians and compare the answers. We will now describe the results in detail.
\subsection{The Virtual Fundamental Class}
Isotropic Quot schemes are, in most cases, not smooth. To define invariants, we first construct a virtual fundamental class on the isotropic Quot scheme. In \cite{marian2007}, Marian and Oprea constructed a virtual fundamental class for the Quot schemes $\quot_d$; see also \cite{Virt_Euler_CFK}. The virtual fundamental class on the isotropic Quot scheme is not a direct consequence of their construction. Let us assume $\sigma$ is symplectic. We may replace $\wedge^2$ with $\Sym^2$ when $\sigma$ is symmetric to obtain the following results. The best scenario occurs when $V$ is the trivial vector bundle over $\mathbb{P}^1$. In this case, $\quot_d$ is a smooth scheme and $\IQ_d$ is the zero locus of a section of the vector bundle $\pi_*(\wedge^2\cS^\vee)$. Here, we consider the universal exact sequence over $C\times \IQ_d$,
\[0\to \cS\to p^*V\to \cQ\to 0, \]
where $p$ and $\pi$ are the projection maps to $C$ and $\IQ_d$, respectively. Unfortunately, for an arbitrary vector bundle $V$ over a higher genus curve $C$, $\quot_d$ is not smooth and $\pi_*(\wedge^2\cS^\vee)$ is not a vector bundle. Our first main result is
\begin{theorem}\label{thm:POT}
There is a morphism in the derived category
\begin{equation}
\mathbf{R}\pi_*(J^\bullet)^\vee \to \tau_{[-1,0]}\mathbb{L}_{\IQ_d}
\end{equation}
where $J^\bullet=[\rHom(\cS,\cQ)\to \sHom(\wedge^2\cS,p^*L)]$, which induces a $2$-term perfect obstruction theory and hence a virtual fundamental class, $[\IQ_d]^{\vir}$, on the isotropic Quot scheme.
\end{theorem}
We prove Theorem \ref{thm:POT} in Section \ref{sec:Vir_Fund_class}. Over a closed point $[0\to S\to V\to Q\to 0]$ in $\IQ_d$, the tangent space and the obstruction space are given by the hypercohomology of the complex of sheaves $[\sHom(S,Q)\to \sHom(\wedge^2S,L)]$. The virtual dimension is
\begin{align*}
\vd&=\begin{cases} \chi(S^\vee \otimes Q)-\chi(\wedge^2S^\vee\otimes L) & \text{when $\sigma$ is symplectic} \\ \chi(S^\vee \otimes Q)-\chi(\Sym^2S^\vee\otimes L) & \text{when $\sigma$ is symmetric} \end{cases},
\end{align*}
where $\chi(E)$ denotes the Euler characteristic of a sheaf $E$. These are easy to calculate as an application of the Riemann-Roch formula; a sample computation is given below.
\begin{rem}
When $2r=N$ and $\sigma$ is symplectic, the isotropic Quot scheme is irreducible and generically smooth \cite{cheong2020irreducibility} for $d\gg 0$ and its dimension equals the virtual dimension obtained above. In this case, the virtual fundamental class agrees with the fundamental class.
\end{rem}
\begin{rem}
Our method can also be extended to obtain a virtual fundamental class for the closed subscheme of $\quot_d$ parameterizing subsheaves $S\to V$ isotropic with respect to higher order forms $\sigma:\wedge^{k}V\to L$ and $\sigma:\Sym^{k}V\to L$.
\end{rem}
For the rest of the introduction, we will assume that $V$ is a trivial vector bundle of even rank $N$.
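As a quick illustration of the Riemann-Roch computation just mentioned, consider the symplectic case with $r=2$, with $V$ trivial of rank $N$ and $L$ trivial (as assumed throughout this introduction). At a point $[0\to S\to V\to Q\to 0]$ of $\IQ_d$, the sheaf $S^\vee\otimes Q$ has rank $2(N-2)$ and degree $(N-2)d+2d=Nd$, while $\wedge^2S^\vee=\det S^\vee$ has rank $1$ and degree $d$. Riemann-Roch on $C$ therefore gives
\[
\chi(S^\vee\otimes Q)=Nd+2(N-2)(1-g),\qquad \chi(\wedge^2S^\vee)=d+(1-g),
\]
so that
\[
\vd=\chi(S^\vee\otimes Q)-\chi(\wedge^2S^\vee)=(N-1)d-(2N-5)(g-1),
\]
which is exactly the expression for the virtual dimension used in the symplectic $r=2$ case below (cf. Theorem~\ref{thm:r=2,sympl}).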
We will also assume that the line bundle $L$ is trivial and the non-degenerate symplectic or symmetric form $\sigma$ is standard.
\subsection{Compatibility of virtual fundamental classes}
The group $G= Sp(N)$ (or $G=SO(N)$) acts on the isotropic Quot scheme with $\sigma$ symplectic (resp. symmetric). The perfect obstruction theory we construct is equivariant under any one-parameter subgroup $\mathbb{C}^*\subset G$. In this case, we use the virtual localization theorem \cite{Graber1997LocalizationOV} to study the virtual intersection theory of $\IQ_d$. This has been done extensively for $\quot_d$ in \cite{marian2007}. We first show a compatibility result for the virtual fundamental classes. Fix a point $q\in C$. There is a natural embedding
\begin{equation*}
i_q:\IQ_d\to \IQ_{d+r}
\end{equation*}
which sends a subsheaf $S\subset \mathbb{C}^N\otimes\cO$ to the composition
\[S(-q)\to S \to \mathbb{C}^N\otimes\cO, \]
which is also an isotropic subsheaf of degree $-(d+r)$.
\begin{theorem}\label{compatibiliy_theorem}
We have the following identity in the homology $H_*(\IQ_{d+r})$:
\begin{equation}
{i_q}_*(c_{\text{top}}(\wedge^2\cS^\vee_q)^2 \cap [\IQ_d]^{\vir})=c_{\text{top}}({\cS_q^{\vee}})^N\cap [\IQ_{d+r}]^{\vir}
\end{equation}
where we assume that $\sigma$ is symplectic. The corresponding identity for the symmetric form is obtained by replacing $\wedge ^2$ with $\Sym^2$.
\end{theorem}
This means that the virtual fundamental classes we construct, $[\IQ_d]^{\vir}$, are related as we vary the degree $d$ by a multiple of $r$. An analogous result was proven in the case of the Quot scheme in \cite{marian2007}.
\subsection{Virtual Invariants}
Let $\{1,\delta_1,\dots, \delta_{2g}, \omega\}$ be a symplectic basis for the cohomology of $C$. Let the K\"unneth decomposition of the Chern classes of $\cS^\vee$ over $C\times \IQ_d$ be
\begin{align*}
c_i(\cS^\vee)= a_i\otimes 1 + \sum_{k=1}^{2g}b_i^k\otimes\delta_k+ f_i\otimes \omega,
\end{align*}
where $a_i\in H^{2i}(\IQ_d)$, $b_i^{k}\in H^{2i-1}(\IQ_d)$ and $f_i\in H^{2i-2}(\IQ_d)$. The classes $a_i$ and $f_i$ have natural algebro-geometric descriptions. For any point $q\in C$, let $\cS_q$ be the restriction of $\cS$ to $\IQ_d\times \{q\}$. Then
\begin{align*}
a_i=c_i(\cS^{\vee}_q),\hspace*{1.4cm} f_i=\pi_*c_i(\cS^{\vee}).
\end{align*}
The top intersections of the corresponding $a$-classes over $\quot_d$ match the GRW invariants for Grassmannians. The explicit answers were first obtained in the physics literature by Vafa and Intriligator \cite{INTRILIGATOR_1991}. In the mathematics literature, these formulas appeared in \cite{1994alg.geom..3007B}, \cite{Q_C_Fano} and \cite{marian2007}. We are interested in understanding the intersection products of the above two kinds of classes evaluated on the virtual fundamental cycle. The virtual localization theorem \cite{Graber1997LocalizationOV} allows us to evaluate all monomials in $a_i$ and $f_i$ on the virtual fundamental class $[\IQ_d]^{\vir}$. However, closed-form expressions are harder to write down because the combinatorics becomes very involved. When $r=2$, we prove a Vafa-Intriligator type formula for such intersection numbers. We achieve this by developing combinatorial techniques in Section \ref{sec:prelim_hilb} to evaluate and sum the fixed loci contributions. In the process, we simplify some of the combinatorics in \cite{marian2007}. At this point, we will have to distinguish the two cases depending on $\sigma$ being a symplectic or symmetric form.
\subsection{When $\sigma$ is symplectic} When $\sigma$ is symplectic and $r=2$, the virtual dimension is \[\vd=(N-1)d-(2N-5)\bar{g},\] where we use the convention \[\bar{g}=g-1.\] We further define \[T_{d,g}(N)= \sum_{i=0}^{d}\binom{g}{i}(-N)^{-i}. \] The above expression equals $(1-1/N)^g$ when $d\ge g$. Note that the non-negativity of the virtual dimension implies that $d< g$ if and only if $\vd=0$ and $N=4$ or $\vd=0$ and $g=1$. \begin{theorem}\label{thm:r=2,sympl} Let $\sigma$ be a symplectic form and $m_1+2m_2=\vd\ge 0$. Then \begin{equation}\label{eq:a_1a_2formula} \int_{[\IQ_d]^{\vir}}^{}a_1^{m_1}a_2^{m_2} =u\frac{N}{2}T_{d,g}(N) \sum_{\zeta\ne\pm1}^{}(1+\zeta)^{m_1+d}\zeta^{m_2}J(1,\zeta)^{\bar{g}}, \end{equation} where the sum is taken over $N^{th}$ roots of unity $\zeta\ne \pm 1$. Here $u=(-1)^{\bar{g}+d}$ and \begin{align*} J(z_1,z_2)&=N^2z_1^{-1}z_2^{-1}(z_1-z_2)^{-2}(z_1+z_2)^{-1}. \end{align*} \end{theorem} \begin{exm}\label{exm:N=4} When $N=4$, the virtual dimension $\vd=3d-3\bar{g}$. The above theorem specializes to \begin{align*} \int_{[\IQ_d]^{\vir}}^{}a_1^{m_1}a_2^{m_2} =\begin{cases} 2^{2d-m_2-\bar{g}}3^g & \vd >0\\ 2^{\bar{g}}(3^g+(-1)^{\bar{g}})& \vd=0. \end{cases} \end{align*} When $\vd=0$, the resulting invariant can be interpreted as a `virtual' count of isotropic subsheaves of $V$. This virtual count matches the enumerative count \cite{cheong2019counting} of the rank two maximal degree isotropic subbundle of a general rank 4 stable bundle endowed with an $\cO$-valued symplectic form. \end{exm} \begin{exm}\label{exm:g=1} When $g=1$, the virtual dimension $\vd=(N-1)d$. Then \begin{align*} \int_{[\IQ_d]^{\vir}}^{}a_1^{\vd}=\begin{cases} (-1)^d\frac{N-1}{2}[q^{Nd}]\Big( \frac{N(1-q)^{N-1}}{(1-q)^N-q^N}-\frac{1}{1+2q}\Big)& d>0\\ \frac{N(N-2)}{2}& d=0 \end{cases}. \end{align*} \end{exm} We have the following results involving $f$-classes; the latter are typically intractable by other methods. \begin{theorem}\label{thm:f_classes_d>g} Let $m_1+m_2+1=\vd$ and $d>g$, then \begin{align*} \int_{[\IQ_d]^{\vir}}f_2a_1^{m_1}a_2^{m_2}=&\bigg(1-\frac{1}{N}\bigg)^{g}\sum_{\zeta\ne \pm1}^{}\bigg( D\circ B(1,\zeta)-\frac{\zeta B(1,\zeta)}{(1+\zeta)}\bigg). \end{align*} where \[D \circ R(z_1,z_2)=\frac{z_1z_2}{2}\bigg(\frac{\partial}{\partial z_1}+\frac{\partial}{\partial z_2}\bigg)R(z_1,z_2)\] is a differential operator and \begin{equation*} B(z_1,z_2)= u(z_1+z_2)^{m_1}(z_1z_2)^{m_2}\frac{(z_1+z_2)^{d-\bar{g}}}{(z_1-z_2)^{2\bar{g}}}\prod_{i=1}^{2} (Nz_i^{N-1})^{\bar{g}}. \end{equation*} \end{theorem} In Section \ref{sec:f_intersection}, we provide a complete answer for the intersection numbers of the form $ f_2^{\ell}a_1^{m_1}a_2^{m_2}\cap [\IQ_d]^{\vir}$ at the cost of making the formula more cumbersome. The answer involves higher degree differential operators. We remark here that our method can also be applied to obtain virtual intersection numbers involving higher powers of $f_2$ over the Quot schemes as well (for which closed form expressions were not known). \subsection{When $\sigma$ is symmetric} When $r=1$, every rank $r$ subsheaf of a symplectic vector bundle is isotropic. In this case $\IQ_d=\quot_d$. However, when $\sigma$ is a symmetric form, this is not the case. \begin{prop} Let $r=1$, let $N$ be even and let $\sigma$ be a symmetric form. Then \begin{align*} \int_{[\IQ_d]^{\vir}}^{}a_1^{\vd}= (N-2)^g2^{2d-\bar{g}}, \end{align*} where $\vd=(N-2)(d-\bar{g})$ is the virtual dimension and $d\ge g$. 
\end{prop} When $r=2$, the virtual dimension of $\IQ_d$ is \[\vd = (N-3)d-\bar{g}(2N-7).\] \begin{theorem}\label{thm:r=2_symmetric} Let $m_1+2m_2=\vd$ and $N=2n+2$. \begin{itemize} \item[(i)] When $m_2>0$, then \begin{equation*} \int_{[\IQ_d]^{\vir}}^{}a_1^{m_1}a_2^{m_2} = c\sum_{\zeta\ne \pm 1}(1+\zeta)^{m_1+d}\zeta^{m_2}J(1,\zeta)^{\bar{g}} \end{equation*} \item[(ii)] When $m_2=0$, \begin{equation*} \int_{[\IQ_d]^{\vir}}^{}a_1^{m_1} =c\bigg(4(-n)^{\bar{g}}+\sum_{\zeta\ne \pm 1}(1+\zeta)^{m_1+d}J(1,\zeta)^{\bar{g}}\bigg), \end{equation*} \end{itemize} where the sum is taken over $2n^{th}$ roots of unity $\zeta\ne \pm 1$. Here $u=(-1)^{\bar{g}+d}$, \begin{align*} c=u4^dnT_{d,g}(2n), \quad J(z_1,z_2)=n^2(z_1+z_2)^{-1}(z_1-z_2)^{-2}. \end{align*} \end{theorem} In the above theorem, there are two differences from Theorem \ref{thm:r=2,sympl} which make the proof more difficult. First, the case $m_2=0$ requires extra care. Second, in the sum above $\zeta$ is an $(N-2)^{th}$ root of unity. This arises from picking a non-standard $\mathbb{C}^*$ action in the localization formula. In particular, the fixed loci thus obtained come equipped with a non-standard virtual structure. We observe a surprising duality in the $a$-class intersection numbers over the symmetric and symplectic isotropic Quot schemes. We will later observe the same phenomenon for GRW invariants. \begin{cor} Let $\IQ_d$ (and $\widetilde{\IQ}_d$) be the symplectic (respectively symmetric) isotropic Quot scheme parameterizing rank $2$, degree $d$ isotropic subsheaves of $\mathbb{C}^N\otimes\cO$ (respectively $\mathbb{C}^{N+2}\otimes\cO$). Then, for integers $m_1,m_2$ such that $m_1+2m_2=(N-1)d-\bar{g}(2N-5)$ and $m_2-\bar{g}>0$, we have \begin{align*} \int_{[\widetilde{\IQ}_d]^{\vir}}^{}a_1^{m_1}a_2^{m_2-\bar{g}}=4^{d-2\bar{g}}\int_{[\IQ_d]^{\vir}}a_1^{m_1}a_2^{m_2}. \end{align*} \end{cor} \subsection{Gromov-Ruan-Witten Invariants} In the previous sections, we considered the Quot scheme compactification of the morphism spaces $\Mor_d(C,\SG(2,N))$ and $\Mor_d(C,\OG(2,N))$. Let $(M,\omega)$ be a compact symplectic manifold with a generic almost complex structure $J$ tamed by $\omega$ (i.e. $\omega(v,Jv)>0$ for all non-zero $v\in TM $). We will further assume that $H_2(M,\mathbb{Z})\cong \mathbb{Z}$ and $M$ is positive in the sense that $c_1(TM,J) \cdot f_*[\mathbb{P}^1]>0$ for all non-constant $J$-holomorphic maps $f:\mathbb{P}^1\to M$. The morphism space of $J$-holomorphic maps from $C$ to $(M,\omega)$ can be compactified by letting the curve $C$ `bubble' \cite{RuTi_quantum_coh}. The boundary of this compactification includes $C$ with finitely many trees of rational curves. This leads to the definition of quantum cohomology and Gromov-Ruan-Witten (GRW) invariants. We briefly describe these terms, and we refer the reader to \cite{Q_C_Fano} and \cite{J-holo_curves_SalamonDuff} for more details. Let $\alpha\in H^2(M,\mathbb{Z})$ be a positive generator. Define the index $e$ of $M$ by $c_1(M)=e\alpha$. Let $d\in H_2(M,\mathbb{Z})$ and $\alpha_1,\dots,\alpha_s$ be cohomology classes in $H^*(M,\mathbb{Z})$ satisfying \begin{align}\label{eq:exp_dim} \frac{1}{2}\sum_{i=1}^{s}\deg \alpha_i= ed+\dim(M)(1-g). \end{align} The right side of the above expression is the expected dimension of the moduli space of maps $f:C\to M$ with $f_*(C)=d\in H_2(M,\mathbb{Z})$. Let $B_1,\dots,B_s$ be a generic choice of the Poincar\'e dual homology classes of $\alpha_1,\dots,\alpha_s$.
Then, for $s$ generic points $p_1,\dots,p_s\in C$, the GRW invariant \[\Phi_{g,d}(\alpha_1,\dots ,\alpha_s)\] is the algebraic count (with signs and multiplicities) of $J$-holomorphic maps $f:C\to M$ such that $f(p_i)\in B_i$ and $f_*([C])=d$. The GRW invariants depend on the genus but not on the complex structure of the curve. Quantum cohomology packages the information of the $3$-point genus zero GRW invariants, giving a deformation of the usual cohomology ring (see \cite{J-holo_curves_SalamonDuff} for more details). A presentation of the quantum cohomology of $\SG(r,N)$ and $\OG(r,N)$ was described in \cite{QC_isotropic_grassmannians} and \cite{Quantum_pieri_OG}. In \cite{Q_C_IG}, the authors gave a simpler presentation for $\SG(2,N)$. We extend their result, obtaining a similar presentation for $\OG(2,N)$. Let $N=2n+2$. We have the universal exact sequence \[0\to \cS\to \mathbb{C}^N\otimes\cO\to \cQ\to 0\] over $\OG(2,N)$. Let $\cS^{\perp}\subset \mathbb{C}^N\otimes\cO$ be the rank $N-2$ orthogonal complement. We have the following cohomology classes: \begin{itemize} \item The Chern classes $a_i= c_i(\cS^\vee)$ for $i\in\{1,2\}$. \item Let $b_i=c_{2i}(\cS^{\perp}/\cS)$ for $i\in \{1,\dots ,n-1 \}$. The bundle $\cS^{\perp}/\cS$ is self-dual, hence all of its odd Chern classes vanish. \item Let $\xi$ be the Edidin-Graham square root class \cite{Char_classes_quadric} of the bundle $\cS^{\perp}/\cS$. In particular, it satisfies \[ (-1)^{n-1}\xi^2=b_{n-1}. \] \end{itemize} \begin{prop} The quantum cohomology ring $QH^*(\OG(2,2n+2),\mathbb{C})$ is isomorphic to the quotient of the ring $\mathbb{C}[a_1,a_2,b_1,\dots,b_{n-2}, \xi,q]$ by the ideal generated by the relations \[\xi a_2=0\] and \begin{equation*} (1+(2a_2-a_1^2)x^2+a_2^2x^4)(1+b_1x^2+\cdots + b_{n-2}x^{2n-4}+(-1)^{n-1}\xi^2x^{2n-2})=1+4 qa_1x^{2n}, \end{equation*} where $x$ is a formal variable. \end{prop} Define the GRW invariant \[\langle a_1^{m_1}a_2^{m_2}\rangle_g=\Phi_{g,d}(a_1,\dots,a_1,a_2,\dots ,a_2),\] where $a_1$ and $a_2$ appear $m_1$ and $m_2$ times respectively, and $d$ is chosen (if possible) so that it satisfies \eqref{eq:exp_dim}. In \cite{Q_C_Fano}, Siebert and Tian gave a remarkable technique to compute the higher genus GRW invariants using a given presentation for the quantum cohomology. We explicitly calculate the GRW invariants for $\SG(2,N)$ and $\OG(2,N)$ in Theorems \ref{thm:GRW_invariant_SG} and \ref{thm:GRW_invariants_OG} respectively. We verify the slogan below for $r=2$. \[\text{``GRW Invariants} = \text{Virtual $a$-class intersections''}\] In particular, we prove the following theorem. \begin{theorem} Let $d$, $m_1$ and $m_2$ be non-negative integers such that $\vd=m_1+2m_2$ is the expected dimension. Then the GRW invariants for $\SG(2,N)$ (and $\OG(2,N)$) satisfy \begin{equation*} \langle a_1^{m_1}a_2^{m_2}\rangle_g= \int_{[\IQ_d]^{\vir}}^{}a_1^{m_1}a_2^{m_2}, \end{equation*} where $\IQ_{d}$ is the symplectic (respectively symmetric) isotropic Quot scheme. \end{theorem} \begin{qes} In the large degree regime, we expect that $\IQ_{d}$ and the corresponding stable map compactification are irreducible and the above invariants are enumerative. The irreducibility of the Lagrangian Quot schemes for $d\gg 0$ is proven in \cite{cheong2020irreducibility}. \end{qes} \subsection{Virtual Euler Characteristic} The topological Euler characteristics of the schemes $\IQ_{d}$ are given by \begin{equation*} \sum_{d=0}^{\infty}e(\IQ_d)q^d=2^r\binom{n}{r}(1-q)^{r(2g-2)}, \end{equation*} where $N=2n$.
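The coefficients of this generating function are straightforward to extract; the following Python sketch (hypothetical, with illustrative parameter values) computes $e(\IQ_d)$ for small $d$ directly from the displayed formula.
\begin{verbatim}
# Hypothetical sketch: topological Euler characteristics e(IQ_d) from
# the generating function 2^r * binom(n, r) * (1 - q)^{r(2g-2)}.
from math import comb

def euler_characteristics(r, n, g, d_max):
    m = r * (2 * g - 2)                  # exponent of (1 - q)
    prefactor = 2 ** r * comb(n, r)
    values = []
    for d in range(d_max + 1):
        c = 1                            # generalized binomial C(m, d)
        for j in range(d):
            c = c * (m - j) // (j + 1)
        # coefficient of q^d in (1 - q)^m is (-1)^d * C(m, d)
        values.append(prefactor * (-1) ** d * c)
    return values

print(euler_characteristics(r=2, n=2, g=2, d_max=5))
\end{verbatim}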
Let $X$ be a scheme admitting a 2-term perfect obstruction theory. The virtual Euler characteristic is defined in \cite{Virt_Euler_FG}, \cite{Virt_Euler_CFK} by \begin{equation*} e^{\vir}(X)=\int_{[X]^{\vir}}^{}c(T_X^{\vir}). \end{equation*} The virtual Euler characteristics of Quot schemes parameterizing zero-dimensional quotients over surfaces were calculated in \cite{oprea2021quot}. When $X$ is smooth and the obstruction bundle vanishes, the virtual Euler characteristic $e^{\vir}(X)$ matches the topological Euler characteristic of $X$. The isotropic Quot schemes, $\IQ_1$, are smooth for $C=\mathbb{P}^1$ and all values of $N=2n$ and $r$. By contrast, the isotropic Quot schemes $\IQ_{d}$ are not smooth for $d>1$ even when $C=\mathbb{P}^1$. Thus the virtual Euler characteristics, $e^{\vir}(\IQ_{d})$, are new invariants. While we do not have a closed form expression for the corresponding power series, we find a finite number of values using Sagemath \cite{sagemath}. We provide a small list of these invariants in Section \ref{sec:vir_Euler_char}. When $r=2$, $N=4$ and $\sigma$ is symplectic, we plot a log-scale graph of the absolute value of $e^{\vir}(\IQ_{d})$. The plot (see Figure \ref{fig:plot}) indicates exponential growth, in contrast with the polynomial behaviour of the topological Euler characteristics. \begin{qes} Find a closed form expression for the virtual Euler characteristics of $\quot_d$ and $\IQ_{d}$ for all genera $g$ and all ranks $r$ and $N$. \end{qes} \subsection{Plan of the paper} We construct the virtual fundamental class over $\IQ_d$ in Section \ref{sec:Vir_Fund_class}, thus proving Theorem \ref{thm:POT}. In Sections \ref{sec:Nvir_sympl} and \ref{sec:Nvir_symm}, we will describe the torus action on $\IQ_d$ and find an expression for the equivariant virtual normal bundles over the fixed loci. Section \ref{sec:prelim_hilb} is technical, and it contains calculations on products of symmetric powers of curves. These will be used in Sections \ref{sec:Int_a_classes} and \ref{sec:f_intersection} to prove Theorems \ref{thm:r=2,sympl} and \ref{thm:r=2_symmetric}. The quantum cohomology and GRW invariants are calculated in Section \ref{sec:GRW_Invariants}; this section is essentially independent of the other sections. \begin{figure} \centering \caption{The absolute value of the virtual Euler characteristic of $\IQ_{d}$ for $r=2$, $N=4$ and $\sigma$ symplectic, on a logarithmic scale.} \label{fig:plot} \end{figure} \section{Virtual Fundamental Class}\label{sec:Vir_Fund_class} We will construct a natural 2-term perfect obstruction theory for the isotropic Quot schemes $\IQ_{d}$ over smooth projective curves. This yields a virtual fundamental class using the results in \cite{Behrend_1997} and \cite{li1996virtual}. The argument can be slightly simplified for the trivial vector bundle $V=\mathbb{C}^N\otimes\cO$ over $\mathbb{P}^1$, and we will explain this case first. We will assume that $\sigma$ is a symplectic non-degenerate bilinear form on a vector bundle $V$, although similar results hold for symmetric bilinear forms and can be proved verbatim by replacing $\wedge^2$ with $\Sym^2$. \subsection{Background} We will briefly describe the results pertaining to the construction of virtual fundamental classes in \cite{Behrend_1997}. Let $X$ be a scheme (or a stack) over a scheme (or a stack) $S$ and let $\mathbb{L}_{X/S}$ be the relative cotangent complex.
\begin{defn} A 2-term relative perfect obstruction theory is a morphism in the derived category\[ \phi : E^{\bullet}\to \tau_{[-1,0]}\mathbb{L}_{X/S}, \] where $E^\bullet= [E^{-1}\to E^{0}]$ is a complex of vector bundles over $X$ of amplitude contained in $[-1,0]$ and satisfies: \begin{itemize} \item $h^0$ is an isomorphism and \item $h^{-1}$ is a surjection. \end{itemize} \end{defn} Let $[E_0\to E_1]$ be the dual of $E^\bullet$. Given a 2-term perfect obstruction theory, \cite{Behrend_1997} and \cite{li1996virtual} define a cone inside $E_{1}$. The virtual fundamental class is then defined to be an element in $H_{2e}(X)$ given by the refined intersection of the cone with the zero section of $E_1$. Here $e= \rank E_0-\rank E_1$ is called the virtual dimension of $X$. For practical purposes, we only need the description of the virtual tangent (or cotangent) bundle, which is an element in the $K$-theory \[T^{\vir}_X=[E_0]-[E_1]\in K^0(X).\] The simplest case is when $X$ is a closed subscheme of a smooth scheme $Y$ cut out by a section $s$ of a vector bundle $V$ over $Y$. In this case, there is a natural 2-term perfect obstruction theory given by $[V^{\vee}|_X\to \Omega_Y|_X]$. Note that when $s$ is a regular section, we get the usual fundamental class. For the remainder of this section, we provide a 2-term perfect obstruction theory for $\IQ_d$. \subsection{Genus 0} Over $\mathbb{P}^1$, the Quot scheme $\quot_d(\mathbb{C}^{N},r,\mathbb{P}^1)$ is smooth for any choice of $N,r$ and $d$. The isotropic Quot scheme $\IQ_{d}$ is smooth for $d=0,1$ for all $r$ and $N$, but it is singular for higher values of $d$. The isotropic Quot schemes $\IQ_d$ can be described as the zero locus of a section of a vector bundle over $\quot_d$. Therefore, the virtual fundamental class exists and is given by the Euler class of the vector bundle. The following well-known Propositions explain the details. \begin{prop}\label{prop:quot_smooth} For any choice of $N$, $r$ and $d$, $\quot_d(\mathbb{C}^{N},r,\mathbb{P}^1)$ is smooth. \end{prop} \begin{proof} The deformation theory for Quot schemes is given by $\text{Ext}^\bullet(S,Q) $. Since we work over curves it is enough to show that $\text{Ext}^1(S,Q)=0$. Using Serre duality, $\text{Ext}^1(S,Q)=\text{Ext}^0(Q,S(-2))^\vee$. Since $\mathbb{C}^{N}\otimes\cO\to Q$ is a surjection and $S\to \mathbb{C}^{N}\times\cO$ is an injection, it is enough to show that $\Hom(\mathbb{C}^{N}\otimes\cO, \mathbb{C}^{N}\otimes\cO(-2))=0$, which is clear. \end{proof} \begin{prop} Let $\pi:\quot_d\times \mathbb{P}^1\to \quot_d$ be the projection. Then $\pi_* (\wedge^2\cS^{\vee})$ is a locally free sheaf. \end{prop} \begin{proof} Note that for any point $q=[0\to S\to \cO^{N}\to Q\to 0]$ in the Quot scheme, $\mathbb{C}^N\otimes\cO\to S^{\vee}$ is generically surjective and so is \[\phi:\wedge^2( \mathbb{C}^{N}\otimes \cO)\to \wedge^2 S^{\vee}.\] Observe that $\wedge^2(\mathbb{C}^N\otimes \cO)= \mathbb{C}^{\binom{N}{2}}\otimes \cO$. We have the following exact sequences of sheaves \begin{center} \begin{tikzcd}[column sep=0cm, row sep=0cm] 0&\to&\ker \phi &\to& \mathbb{C}^{\binom{N}{2}}\otimes \cO&\to& \text{im}\,\phi&\to& 0\\ 0&\to&\text{im}\,\phi &\to& \wedge^2S^{\vee}&\to& \coker\phi&\to& 0 \end{tikzcd}. \end{center} Since $\coker(\phi)$ is zero dimensional and $ \mathbb{C}^{\binom{N}{2}}\otimes \cO$ is a trivial vector bundle over $\mathbb{P}^1$, their first sheaf cohomology groups vanish. The first exact sequence implies $H^1(\mathbb{P}^1,\text{im}\,\phi)=0$. 
The second exact sequence gives us $H^1(\mathbb{P}^1,\wedge^2(S^{\vee}))=0$, hence $h^0(\wedge^2 S^{\vee})=\chi(\wedge^2 S^{\vee})$ is constant. Using Grauert's theorem we conclude that $\pi_*(\wedge^2(\cS^{\vee}))$ is locally free. \end{proof} The symplectic form $\sigma: \wedge^2(\mathbb{C}^N\otimes\cO)\to \cO$ induces an element of $H^0(\mathbb{P}^1,\wedge^2S^\vee)$ given as the composition \[\wedge^2 S\to\wedge^2\mathbb{C}^N\otimes\cO \xrightarrow{\sigma} \cO \] for any subsheaf $S$ of $\mathbb{C}^N\otimes\cO$. This induces a section, denoted as $\tilde{\sigma}$, of $\pi_*(\wedge^2\cS^\vee)$ over $\quot_d$. Recall that $\IQ_d$ is the subscheme of $\quot_d$ consisting of subsheaves $S$ of $\mathbb{C}^N\otimes\cO$ such that the above composition is zero, hence $\IQ_d=\text{Zero}(\tilde{\sigma})$. Therefore, we have a natural perfect obstruction theory and a virtual fundamental class proving Theorem \ref{thm:POT} in this case. \subsection{The Perfect Obstruction theory in general} In the general case, the two main aspects of the above proof break down, namely $\quot_d$ is not always smooth and the sheaf $\pi_*(\wedge^2\cS^\vee)$ may not be locally free. To construct a perfect obstruction theory, we will have to make a few auxiliary constructions. Fix $V,L, r$ and $d$. Let $\bun$ be the moduli stack of rank $r$ and degree $d$ vector bundles over $C$. There is a natural forgetful map $\mu: \quot_d\to \bun $ sending the exact sequence $0\to S\to V\to Q\to 0$ to $[S^\vee]\in \bun$. We define another stack $\wstack$ which parameterizes pairs $(S,\phi)$, where $S$ is a vector bundle with $S^\vee\in \bun$ and $\phi:\wedge^2S\to L$ is a morphism of sheaves. This also comes equipped with a natural map $\eta:\wstack\to \bun$ sending the pair $(S,\phi)$ to $[S^\vee]$. We have tabulated the situation in the following commutative diagram \begin{center} \begin{tikzcd} \IQ_d \arrow[hookrightarrow,r,"i"]\arrow[d, "\mu\circ i"]&\quot_d\arrow[d, "\tilde{\sigma }"]\arrow[ld, "\mu"]\\ \bun \arrow[hookrightarrow,r , "z"]\arrow[leftarrow,r, bend right, "\eta"]&\wstack \end{tikzcd}. \end{center} Here $\tilde{\sigma} $ is the map sending the short exact sequence $0\to S\to V\to Q\to 0 \in \quot_d$ to the pair $(S,\phi)$, where $\phi$ is the composition $\wedge^2S\to \wedge^2V\xrightarrow{\sigma}L$. Recall $\IQ_d$ is precisely the closed locus in $\quot_d$ which is sent to $(S,0)$ under the map $\tilde{\sigma}$. There is a zero section $z:\bun \to \wstack$ sending $[S^\vee]$ to $(S,0)$, and we see that $\IQ_d$ is the fiber product of the maps $\tilde{\sigma}$ and $z$. The advantage of the above description is that we understand the cotangent complex of $\quot_d$ and $\bun$, and the new stack $\wstack$ is an abelian cone over $\bun$. We will first describe relative perfect obstruction theory for the maps $\mu$ and $\eta$, and use it to obtain a relative perfect obstruction theory for $\IQ_d$ relative to $\bun$. Since $\bun$ is a smooth Artin stack, this standardly yields a global perfect obstruction theory for $\IQ_d$, by \cite[Appendix B]{Graber1997LocalizationOV}. \subsection{A perfect obstruction theory for $\wstack$} We will first carefully define the stack $\wstack$ and show that it is an abelian cone over $\bun$. We will use the results in \cite{scattareggia2018perfect} and \cite{scattareggia_2017} to obtain perfect obstruction theory of $\wstack$ over $\bun$. 
\begin{defn} A Wedge system is a pair $(S,\phi)$ where $S$ is a locally free sheaf on $C$ and $\phi$ is a morphism of sheaves $\phi:\wedge^2S\to L$ over $C$. A family of Wedge systems over a scheme $T$ is $(\pi :C\times T\to T, \cS,\phi : \wedge^2\cS \to p^*L )$ where $p:C\times T\to C$ is the first projection and $\cS $ is a locally free sheaf over $C\times T$. \end{defn} An isomorphism of two families of Wedge system $(\pi :C\times T\to T, \cS,\phi : \wedge^2\cS \to p^*L )$ and $(\pi :C\times T\to T, \cS',\phi' : \wedge^2\cS' \to p^*L )$ over $T$ is an isomorphism $\alpha:\cS \to \cS'$ over $C\times T$ such that $ \phi =\phi'\circ \wedge^2\alpha$. \begin{defn} Let $\wstack$ be the category fibered in groupoids defined by $\wstack (T)$ being the families of Wedge systems over T. Let $\eta:\wstack\to \bun $ be the forgetful morphism. \end{defn} \begin{prop}\label{prop:S_abelian_cone} There is a natural isomorphism of $\bun$-stacks \begin{equation} \wstack \to \Spec\Sym (\mathbf{R}^1\pi_*(\wedge^2\cS \otimes p^*L^\vee\otimes \omega_\pi)) \end{equation} where $\omega_\pi$ is the relative dualising sheaf of $\pi:\wstack\times C \to\wstack$. In particular $\wstack$ is an abelian cone over $\bun $. Thus $\wstack$ is an algebraic stack. \begin{proof} The proof is almost same as the proof of Prop 1.8 in \cite{scattareggia2018perfect}. Let $T$ be a scheme, then $\wstack (T)=\{ {t:T\to \bun , \phi :\bar{t}^*\wedge^2\cS\to p^*L } \}$, where $\bar{t}$ is the induced map from $C\times T\to C\times \bun$. Using Grothendieck duality and base change there is a canonical bijection between $\Hom(\bar{t}^*\wedge^2\cS, p^*L)$ and $\Hom(t^*\mathbf{R}^1\pi_*(\wedge^2\cS \otimes p^*L^\vee\otimes \omega_\pi),\cO_T)$ which is compatible with pull backs. \end{proof} \end{prop} \begin{cor}\label{cor:POT_eta} There is a relative perfect obstruction theory for $\eta$ induced by \begin{equation*} \mathbf{R}\pi_*(\sHom(\wedge^2\cS,p^*L))^\vee \to \tau_{[-1,0]}\mathbb{L}_\eta. \end{equation*} \end{cor} \begin{proof} The corollary follows using Lemma \ref{lem:relative_obs_theory} by observing that \[\rHom (\mathbf{R}{\pi}_*(\wedge^2\cS \otimes p^*L^\vee\otimes \omega_\pi[1]),\cO_{\wstack} )\] is isomorphic to $\mathbf{R}\pi_*(\sHom(\wedge^2\cS,p^*L))$ in the derived category. \end{proof} \begin{lem}\label{lem:relative_obs_theory} Let $\pi:Y'\to Y$ be a relative dimension one, flat, projective morphism of algebraic stacks and let $F\in Coh(Y')$ be flat over $Y$, then the abelian cone $\wstack:=\Spec\Sym (\mathbf{R}^1\pi_* F)\xrightarrow{\eta} Y$ has a relative perfect obstruction theory induced by the canonical morphism \begin{equation} \mathbf{R}\bar{\pi}_*(\bar{F}[1])\to \tau_{[-1,0]}\mathbb{L}_\eta \end{equation} where $\bar{\pi}: Y'\times_Y \wstack\to \wstack$ and $\bar{F}$ is the induced sheaf on $Y'\times_Y \wstack$. \end{lem} \begin{proof} We will briefly explain the argument assuming $Y$ is a scheme. The complete proof is exactly the same as the proof of Proposition 2.4 in \cite{scattareggia2018perfect}. Under the given conditions, $F$ can be shown to admit a resolution \[0\to K\to M\to F\to 0 \] where $M$ is locally free, $\pi_*K=\pi_*M=0$ and the first derived pushforwards $\mathbf{R}^1\pi_*M$ and $\mathbf{R}^1\pi_*K$ are locally free. Then $\eta$ admits a factorization \[ \wstack \xrightarrow{i} \Spec\Sym(\mathbf{R}^1\pi_*M)\xrightarrow{q} Y \] where $\eta=q\circ i$, $q$ is a smooth morphism and $i$ is a closed embedding. 
Then $\tau_{[-1,0]}\mathbb{L}_\eta\cong [I|_{\wstack}\to \Omega_q |_{\wstack}]$, where $I$ is the ideal sheaf of $i$. There is a natural isomorphism $\eta^*\mathbf{R}^1\pi_*M\to \Omega_q |_{\wstack} $ and surjection $\eta^*\mathbf{R}^1\pi_*K \to I|_{\wstack} $. Therefore, it remains to show that $[\eta^*\mathbf{R}^1\pi_*K \to \eta^*\mathbf{R}^1\pi_*M ]$ is quasi-isomorphic to $ \mathbf{R}\bar{\pi}_*(\bar{F}[1])$. By cohomology and base-change, $[\eta^*\mathbf{R}^1\pi_*K \to \eta^*\mathbf{R}^1\pi_*M ]$ is isomorphic to $[\mathbf{R}^1\bar{\pi}_*\bar{\eta}^*\bar{K} \to \mathbf{R}^1\bar{\pi}_*\bar{\eta}^*\bar{M} ]$, where \[0\to\bar{ K}\to \bar{M}\to \bar{F}\to 0\] is the induced resolution on $Y'\times_Y \wstack$. The required statement is obtained by the distinguished triangle of the above short exact sequence. \end{proof} \subsection{Perfect Obstruction theory} Recall that we have a map $\tilde{\sigma}:\quot_d \to \wstack$ which takes a subsheaf $[0\to S\to V\to Q\to 0]$ to the point $(S,\phi)$ in $\wstack$ where $\phi$ is the composition of $\wedge^2S\to \wedge^2V\to L$. This can be defined as a morphism of $\bun$-stacks. Consider the morphisms \begin{center} \begin{tikzcd} \quot \arrow[r, "\tilde{\sigma}"] & \wstack\arrow[r,"\eta "] &\bun. \end{tikzcd} \end{center} Let $\mu =\eta \circ \tilde{\sigma}$. There exists a distinguished triangle \begin{equation}\label{eq:dist_trianle1} \tilde{\sigma}^*\mathbb{L}_\eta \to \mathbb{L}_\mu \to \mathbb{L}_{\tilde{\sigma}}\to \tilde{\sigma}^*\mathbb{L}_\eta[1]. \end{equation} Note that the Quot schemes over smooth curves have perfect obstruction theories as described in \cite{marian2007}. In order to obtain the relative perfect obstruction theory over $\bun$, we consider $\quot_d $ as an open substack of the abelian cone \[\Spec\Sym (\mathbf{R}^1\pi_*(\cS \otimes p^*V^\vee \otimes\omega_\pi)).\] Therefore Lemma \ref{lem:relative_obs_theory} and relative duality implies that the morphism \begin{equation*} \mathbf{R}\pi_*(\sHom(\cS,p^*V))^\vee \to \tau_{[-1,0]}\mathbb{L}_\mu \end{equation*} induces a perfect obstruction theory for $\mu : \quot_d \to \bun$. We also recall Corollary \ref{cor:POT_eta}. Thus we get a map of distinguished triangles completing \eqref{eq:dist_trianle1} by the axioms of derived category: \begin{equation} \begin{tikzcd}\label{eq:dist_triangle} \mathbf{R}\pi_*(\sHom(\wedge^2\cS,p^*L))^\vee\arrow[r,"d\tilde{\sigma}"]\arrow[d]& \mathbf{R}\pi_*(\sHom(\cS,p^*V))^\vee \arrow[r]\arrow[d]&\mathbf{R}\pi_*(D^{\bullet})^\vee\arrow[d] \\ \tilde{\sigma}^*\mathbb{L}_\eta \arrow[r]& \mathbb{L}_\mu \arrow[r]& \mathbb{L}_{\tilde{\sigma}} \end{tikzcd} \end{equation} where $D^\bullet=[\sHom(\cS,p^*V)\xrightarrow{d\sigma} \sHom(\wedge^2\cS,p^*L)]$. The description of $d\sigma$, given below, is important for proving Lemma \ref{lem:generic_surjectivity}. Fix a vector bundle $S$ in $\bun$, then the map $\tilde{\sigma}$ restricts to a quadratic map $\Hom (S,V)\to \Hom (\wedge^2S,L)$ sending $f$ to $\sigma\circ\wedge^2f$. Vanishing of this map is precisely the locus of the fiber of $\IQ_d$ over $S$. Hence the tangent space at a point $f=[0\to S\xrightarrow{f} V\to Q\to 0]$ in $\IQ_d$ relative to $\bun$ is given as kernel of the linear map $d\tilde{\sigma}:\Hom (S,V)\to \Hom (\wedge^2S,L)$ sending $g$ to the map $[u\wedge v\to \sigma(f(u)\wedge g(v)+g(u)\wedge f(v))]$. The corresponding map of sheaves $d\sigma:\sHom(S,V)\to \sHom(\wedge^2S,L) $ over the fiber $C\times \{f\}$ is given by the same expression over each open sets of $C$. 
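As an illustration of this linearization, the following numerical sketch (hypothetical; it uses numpy, the standard symplectic form on $\mathbb{C}^4$ and $r=2$, and is not part of the argument) checks that it is surjective at a point where $f$ has full rank, as established in general in Lemma~\ref{lem:generic_surjectivity} below.
\begin{verbatim}
# Hypothetical sketch: the linearization at a full-rank isotropic f is
# surjective (cf. Lemma lem:generic_surjectivity).
import numpy as np

n, r = 2, 2
N = 2 * n
# standard symplectic form on C^N (block matrix [[0, I], [-I, 0]])
Om = np.block([[np.zeros((n, n)), np.eye(n)],
               [-np.eye(n), np.zeros((n, n))]])

def sigma(u, v):
    return u @ Om @ v

# f : S_x -> V_x identifies S_x = C^r with the isotropic span{e_1,...,e_r}
f = np.zeros((N, r))
f[:r, :r] = np.eye(r)

# matrix of g -> [e_1 ^ e_2 -> sigma(f(e_1), g(e_2)) + sigma(g(e_1), f(e_2))]
# viewed as a linear map Hom(S_x, V_x) = C^{N r} -> Hom(wedge^2 S_x, L_x) = C
entries = []
for a in range(N):
    for b in range(r):
        g = np.zeros((N, r))
        g[a, b] = 1.0
        entries.append(sigma(f[:, 0], g[:, 1]) + sigma(g[:, 0], f[:, 1]))
D = np.array(entries).reshape(1, N * r)
print(np.linalg.matrix_rank(D))   # expected output: 1, i.e. surjective
\end{verbatim}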
Over $C\times \IQ_d$ we have the universal section $f$ of the vector bundle $\sHom(\cS,p^*V)$. The above description induces a morphism of locally free sheaves \begin{equation*} d\sigma : \sHom(\cS,p^*V)\to \sHom(\wedge^2\cS,p^*L). \end{equation*} We have seen in Proposition \ref{prop:S_abelian_cone} that $\wstack $ is an abelian cone; therefore it comes equipped with the zero section $z:\bun \to \wstack$, which is a closed immersion. Recall that $\IQ_d$ sits inside the commutative diagram \begin{center} \begin{tikzcd} \IQ_d \arrow[hookrightarrow,r,"i"]\arrow[d, "\mu\circ i"]&\quot_d\arrow[d, "\tilde{\sigma }"]\arrow[ld, "\mu"]\\ \bun \arrow[hookrightarrow,r , "z"]\arrow[leftarrow,r, bend right, "\eta"]&\wstack \end{tikzcd}. \end{center} Observe that $\IQ_d$ is the inverse image $\tilde{\sigma}^{-1}(z(\bun)) $. The perfect obstruction theory $\mathbf{R}\pi_*(D^{\bullet})^{\vee}$ of $\tilde{\sigma}$ induces a perfect obstruction theory of $\IQ_{d}$ relative to $\bun$ using the map of cotangent complexes \begin{equation}\label{eq:Cotangent_cmlx_IQ/Bun} i^*\mathbb{L}_{\tilde{\sigma}}\to \mathbb{L}_{\IQ_d/\bun}. \end{equation} \begin{lem}\label{lem:rel_obs_theory} There is a perfect obstruction theory of $\IQ_d$ relative to $\bun$ induced by \begin{equation}\label{eq:IQ_rel_Obst} \mathbf{R}\pi_*(D^\bullet)^\vee \to \tau_{[-1,0]}\mathbb{L}_{\IQ_d/\bun}, \end{equation} where $D^\bullet=[\sHom(\cS,p^*V)\xrightarrow{d\sigma} \sHom(\wedge^2\cS,p^*L)]$ is the two-term complex of vector bundles with amplitude in $[0,1]$ over $C\times \IQ_d$. \end{lem} \begin{proof} We obtain the perfect obstruction theory in \eqref{eq:IQ_rel_Obst} by restricting the perfect obstruction theory of $\tilde{\sigma}$ in \eqref{eq:dist_triangle} to $\IQ_d$ using \eqref{eq:Cotangent_cmlx_IQ/Bun}. Let $D^\bullet|_C=[Hom(S,V)\xrightarrow{d\sigma} Hom(\wedge^2S,L)]$ be the restriction to a fiber, denoted $C$, of $\pi:C\times \IQ_d\to \IQ_d$. Consider the hypercohomology long exact sequence \begin{equation*} \cdots \to\Hglob^1(Hom(S,V))\to \Hglob^1(Hom(\wedge^2S,L))\to \mathbb{H}^2(D^\bullet|_C)\to \Hglob^2(Hom(S,V))=0. \end{equation*} Since $d\sigma$ is generically surjective (see Lemma \ref{lem:generic_surjectivity}) and $C$ is one dimensional, $\Hglob^1(Hom(S,V))\to \Hglob^1(Hom(\wedge^2S,L))$ is surjective. Thus we conclude that $\mathbb{H}^2(D^\bullet|_C)$ vanishes. \end{proof} \begin{lem}\label{lem:generic_surjectivity} The restriction of $d\sigma$ to each fiber $C=C\times \{f\}$, where $[0\to S\xrightarrow{f} V\to Q\to 0]$ is an element in $\IQ_{d}$, is generically surjective. \end{lem} \begin{proof} Note that $f$ is a morphism of vector bundles over $C\backslash A$, where $A$ is a finite set of points in $C$. We will show that the linear map of vector spaces \begin{align*} \phi:Hom(S_x, V_x)&\to Hom(\wedge^2 S_x,L_x) \\ g&\mapsto [u\wedge v\mapsto \sigma(f(u)\wedge g(v)+g(u)\wedge f(v))] \end{align*} is surjective for all $x\in C\backslash A$. This is now an exercise in linear algebra. Let $N=2n$. We can choose symplectic coordinates $\{e_1,\dots, e_N\}$ of $V_x$ such that $\sigma(e_i,e_{n+i})=1$ and $f$ identifies the isotropic subspace $S_x$ with $\operatorname{span}\{e_1,\dots ,e_r\}$. An element $g\in Hom(S_x, V_x)$ can be identified with an $N\times r$ matrix $(B_{i,j})$, where $i$ indexes the chosen basis of $V_x$ and $j$ indexes that of $S_x$. A simple calculation shows that $g\in \ker \phi$ if and only if $B_{n+i,k}=B_{n+k,i}$ for all $1\le i,k\le r$. Thus the dimension of $\ker \phi$ is $Nr-\binom{r}{2}$, hence $\phi$ is surjective.
\end{proof} \begin{proof}[Proof of Theorem \ref{thm:POT}] In Lemma \ref{lem:rel_obs_theory}, we constructed a relative perfect obstruction theory. We follow the arguments in \cite[Appendix B]{Graber1997LocalizationOV} verbatim to obtain an absolute perfect obstruction theory. Here we use the fact that $\bun$ is a smooth Artin stack with obstruction theory given by $\mathbf{R}\pi_*(\sHom(\cS,\cS))^\vee[-1]\to \mathbb{L}_{\bun}$. \end{proof} \begin{rem} We note that when $V$ and $L$ are trivial and $\sigma$ is induced from a standard symplectic or symmetric form on $\mathbb{C}^N$, there is another way to construct the virtual fundamental class for $\IQ_d$ using the theory of quasi-maps to GIT quotients as discussed in \cite{ciocanfontanine2011stable}. Indeed, $\IQ_d$ can be considered as the moduli space of quasi maps from $C$ to $\SG(r,N)$ (or $\OG(r,N)$). The isotropic Grassmannian can be realized as a GIT quotient of $W\git_{\theta} G$, where $\theta = \det^{-1}$ is the multiplicative character of $G=GL_r$ and $W=\{f\in \Hom(\mathbb{C}^r,\mathbb{C}^N) : \sigma(f(u),f(v))=0 \,\forall \,u,v\in \mathbb{C}^r\}$ is a closed subscheme of the affine space $\Hom(\mathbb{C}^r,\mathbb{C}^N)$. \end{rem} \section{Symplectic isotropic Quot schemes}\label{sec:Nvir_sympl} Throughout this section we will assume that $\om$ is the standard symplectic form on $\mathbb{C}^N\otimes\cO$; i.e., it is induced by the block matrix \begin{align*} \om=\begin{bmatrix} 0&I_n\\ -I_n&0 \end{bmatrix} \end{align*} where $N=2n$. There is a natural action of $Sp(2n)$ on $\IQ_d$ induced by the respective action on $ \mathbb{C}^{2n}$. We consider the subtorus $G=\mathbb{C}^*\subseteq Sp(2n) $ given by $(t^{-w_1},\dots , t^{-w_N})$ where $w_i=-w_{i+n}$ for $1\le i\le n$. The weights $w_i$ are assumed to be distinct, unless stated otherwise. \subsection{Fixed Loci} \label{sec:fixed_loci} Each summand $\cO$ of $\mathbb{C}^N\otimes\cO$ is acted upon with different weights. A point $[0\to S\to \mathbb{C}^N\otimes\cO\to Q\to 0]$ in $\IQ_d$ is fixed under the action of $G$ if and only if : \begin{itemize} \item[(i)] $S$ splits as a direct sum of line bundles \[S=\oplus_{j=1}^rL_j,\] where $L_j$ is subsheaf of one of the $N$ copies of $\cO$ of $\mathbb{C}^N\otimes\cO$. Denote $k_j$ by the position of this copy of $\cO$. \item[(ii)] $k_j-k_i\not \equiv 0 \mod n$ for any $1\le i<j\le r$ : This ensures that $S$ is isotropic. \end{itemize} Let $\underline{k}=\{k_1,\dots,k_r\}$ and $\vec{d}=(d_1,\dots,d_r)$ where $d_j=\deg L_j$ and \[d_1+\cdots +d_r=d. \] We require $\{i,i+n\}\not \subset \underline{k}$ for any $1\le i \le n$. Let $\fix_{\vec{d},\underline{k}}$ be the set of fixed points with the numerical data $\vec{d}$ and $\underline{k}$. Note that there are $2^r\binom{n}{r}$ possible values of $\underline{k}$ and $\binom{d+r-1}{r-1}$ choices of $\vec{d}$. Denote $\cO_{k_i}$ be the $k_i$'th copy of $\cO$ in $\mathbb{C}^N\otimes\cO$. The short exact sequence \[0\to L_i\to \cO_{k_i}\to T_i\to 0\] defines an element of $C^{[d_i]}$, the Hilbert scheme of $d_i$ points on $C$. Therefore we have \begin{align*} \fix_{\vec{d},\underline{k}}=C^{[d_1]}\times C^{[d_2]}\times \dots \times C^{[d_r]}. \end{align*} \subsection{The Equivariant Normal bundle} Let $0\to \mathcal{L}_{i}\to\cO_{k_i}\to \mathcal{T}_{i} \to0$ be the universal exact sequence over $C\times C^{[d_i]}$. We use the same notation for the pull-back exact sequence over $C\times \fix_{\vec{d}, \underline{k}}$. 
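We remark in passing that the bookkeeping of the fixed-locus data $(\underline{k},\vec{d})$ described in Section~\ref{sec:fixed_loci} is easy to automate; the following Python sketch (hypothetical, with illustrative values of $n$, $r$ and $d$) enumerates the admissible data and recovers the counts $2^r\binom{n}{r}$ and $\binom{d+r-1}{r-1}$.
\begin{verbatim}
# Hypothetical sketch: enumerate the torus-fixed locus data for IQ_d.
from itertools import combinations
from math import comb

def admissible_k(n, r):
    # r-element subsets of {1, ..., 2n} containing no pair {i, i + n}
    N = 2 * n
    return [k for k in combinations(range(1, N + 1), r)
            if not any(i <= n and i + n in k for i in k)]

def degree_vectors(d, r):
    # ordered tuples (d_1, ..., d_r) of non-negative integers with sum d
    if r == 1:
        return [(d,)]
    return [(i,) + rest for i in range(d + 1)
            for rest in degree_vectors(d - i, r - 1)]

n, r, d = 3, 2, 4                        # illustrative values only
print(len(admissible_k(n, r)), 2 ** r * comb(n, r))       # both equal 12
print(len(degree_vectors(d, r)), comb(d + r - 1, r - 1))  # both equal 5
\end{verbatim}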
Let $0\to \cS\to \mathbb{C}^{N}\otimes\cO\to \cQ\to 0$ be the universal exact sequence over $C\times \IQ_d$. This restricts to \begin{align*} 0\to \mathcal{L}_{1}\oplus \dots \oplus \mathcal{L}_{r}\to \mathbb{C}^{N} \otimes \cO\to \mathcal{T}_{1}\oplus \dots \oplus \mathcal{T}_{r} \oplus \mathbb{C}^{N-r} \otimes \cO\to 0 \end{align*} on $C\times \fix_{\vec{d},\underline{k}}$. Let $\pi_!$ be the derived pushforward $\mathbf{R}^0\pi_*-\mathbf{R}^1\pi_*$ in the K-theory. Recall that in Theorem \ref{thm:POT}, we provided a perfect obstruction theory for the isotropic Quot scheme. In the $K$-theory of $\IQ_d$, the corresponding virtual tangent bundle is given by \begin{align*} T^{\vir}=\pi_![(\rHom(\cS,\cQ))]-\pi_![(\sHom(\wedge^2{\cS},\cO))]. \end{align*} The restriction of the virtual tangent bundle in the $\mathbb{C}^*$-equivariant $K$-theory of $\fix_{\vec{d},\underline{k}}$ is given by the following formula \begin{align*} \pi_!\bigg(\sum_{i,j\in [r]}^{} [\mathcal{L}_i^\vee\otimes\mathcal{T}_j ]+\sum_{i\in [r],k\in \underline{k}^c}[\mathcal{L}^\vee_i]-\sum_{1\le i<j\le r}^{} [\mathcal{L}^\vee_i\otimes\mathcal{L}^\vee_j]\bigg), \end{align*} where the above three groups of elements have $\mathbb{C}^*$ weights $(w_{k_i}-w_{k_j})$, $(w_{k_i}-w_{k})$ and $(w_{k_i}+w_{k_j})$ respectively. Note that the fixed part of the restriction of $T^{\vir}$ to $\fix_{\vec{d},\underline{k}}$ is \[\sum_{i\in [r]}^{}\pi_![\mathcal{L}^\vee_i\otimes\mathcal{T}_i], \] which matches the tangent bundle of $\fix_{\vec{d},\underline{k}}$. The induced virtual class $[\fix_{\vec{d},\underline{k}}]^{\vir}=[\fix_{\vec{d},\underline{k}}]$ agrees with the usual fundamental class. The virtual equivariant normal bundle $\nbun$ is given by the moving part of the restriction of $T^{\vir}$. Using the identity in $K$-theory, \[[\mathcal{L}_i^\vee\otimes\mathcal{T}_j ]=[\mathcal{L}_i^\vee\otimes\cO_{k_j} ]-[\mathcal{L}_i^\vee\otimes\mathcal{L}_j ],\] we obtain the following equality \begin{align}\label{eq:Nvir_sympl} \nbun= \pi_!\bigg(\sum_{\substack{i\in [r],k\in [N]\\ k\ne k_i } }[\mathcal{L}^\vee_i]-\sum_{\substack{i,j\in [r] \\ i\ne j}}^{} [\mathcal{L}_i^\vee\otimes\mathcal{L}_j ]-\sum_{1\le i<j\le r}^{} [\mathcal{L}^\vee_i\otimes\mathcal{L}^\vee_j]\bigg), \end{align} where the terms are acted on with wights $(w_{k_i}-w_{k})$, $(w_{k_i}-w_{k_j})$ and $(w_{k_i}+w_{k_j})$ respectively. \subsection{Chern polynomials} In the subsection we briefly describe certain Grothendieck-Riemann-Roch calculations for the map $\pi:C\times X\to X$, where \[X=C^{[d_1]}\times C^{[d_2]}\times \dots \times C^{[d_r]}.\] Let $\{1,\delta_1,\dots, \delta_{2g}, \omega \}$ be the symplectic basis for the cohomology ring of $C$ with the relations $\delta_i\delta_{i+g}=\omega=-\delta_{i+g}\delta_i$ for all $1\le i\le g$. Consider the K\"unneth decomposition of the cohomology classes $c_1(\mathcal{L}^\vee)$ in $C\times C^{[d_i]}$ with respect to a chosen symplectic basis of $\Hglob^*(C)$, \begin{equation}\label{eq:def_x,y_classes} c_1(\mathcal{L}_i^\vee)=x_i\otimes 1 + \sum_{k=1}^{2g}y_i^k\otimes\delta_k+ d_i\otimes \omega. \end{equation} The theta class, $\theta_i \in \Hglob^*(C^{[d_i]})$, is the pullback of the usual theta class under the map \begin{align*} C^{[d_i]}\to \text{Pic}^{d_i}. \end{align*} We have the following relation (explained in \cite{ACGH}) \[\bigg(\sum_{k=1}^{2g}(y_i^k\otimes \delta_k) \bigg)^2=-2\theta_i\otimes \omega. 
\] We will use the same notation for the pullback of $x_i, y_i^{k}$ and $\theta_i$ under the map \begin{equation*} pr_{i}: X \to C^{d_i}. \end{equation*} Let $E$ be a vector bundle of rank $m$ and let $c_t(E)=1+c_1(E)t+\cdots+c_m(E)t^m$ be its Chern polynomial. We extend the definition of $c_t$ to the $K$-theory in the usual way. We can use Grothendieck-Riemann-Roch to obtain expression for the Chern polynomials $c_t(\pi_![\mathcal{L}_i^\vee])$, $c_t(\pi_![\mathcal{L}_i^\vee\otimes\mathcal{L}_j])$ and $c_t(\pi_![\mathcal{L}_i^\vee\otimes\mathcal{L}_j^\vee])$: \begin{align}\label{GRR} c_t(\pi_![\mathcal{L}_i^\vee])&= (1+tx_i)^{d_i-\bar{g}}e^{-\frac{t\theta_i}{(1+tx_i)}} \\ c_t(\pi_![\mathcal{L}_i^\vee\otimes\mathcal{L}_j])&=(1+t(x_i-x_j))^{d_i-d_j-\bar{g}} e^{-\frac{t(\theta_i+\theta_j+\phi_{ij})}{1+t(x_i-x_j)}}\nonumber \\ c_t(\pi_![\mathcal{L}_i^\vee\otimes\mathcal{L}_j^\vee])&=(1+t(x_i+x_j))^{d_i+d_j-\bar{g}} e^{-\frac{t(\theta_i+\theta_j-\phi_{ij})}{1+t(x_i+x_j)}} \nonumber\\ c_t(\pi_![\mathcal{L}_i^\vee\otimes\mathcal{L}_i^{\vee}])&=(1+2tx_i)^{2d_i-\bar{g}} e^{-\frac{4t\theta_i}{1+2tx_i}}\nonumber \end{align} where $\phi_{ij}=-\sum_{k=1}^{g}(y_i^ky_j^{k+g}+y_j^ky_i^{k+g})$. The detailed calculation for the first two expression can be found in \cite{ACGH} and \cite{marian2007}. The other two expressions are obtained in a similar way. We will briefly explain the last one for completeness: The first Chern class is $c_1(\mathcal{L}^\vee \otimes \mathcal{L}^\vee)= 2c_1(\mathcal{L}^\vee)$, therefore the Chern character \[ch(\mathcal{L}^\vee\otimes\mathcal{L}^\vee) = e^{2x}\otimes 1 + e^{2x}(2d- 4\theta)\otimes \omega + 2\sum_{k}^{}y^k\otimes \delta_k. \] We may further apply Grothendieck Riemann Roch to obtain the Chern characters of $\pi_![\mathcal{L}^\vee\otimes \mathcal{L}^\vee]$ and then covert it into Chern polynomials to obtain the required result. The Chern character is \begin{align*} ch(\pi_![\mathcal{L}^\vee\otimes \mathcal{L}^\vee]) &= \pi_* (ch(\mathcal{L}^\vee\otimes \mathcal{L}^\vee)(1+(1-g)\omega))\\ &= e^{2x}(2d+(1-g)-4\theta). \end{align*} \subsection{The Euler class of virtual normal bundle}\label{sec:Euler_class_sympl} Next we would like to find the equivariant Euler class of $\nbun$ in the equivariant cohomology ring $H^*(\fix_{\vec{d},\underline{k}})[[t,t^{-1}]]$. This will be useful in the virtual localization formula. Let $E$ be one of the line bundles appearing in the formula for $\nbun$ in \eqref{eq:Nvir_sympl}. We evaluated the formula for the total Chern classes $c_q(\pi_!E)$ in \eqref{GRR}. Let $\pi_!E$ be acted on with weight $w$, then the equivariant Euler class is a homogeneous element in $H^*(\fix_{\vec{d},\underline{k}})[t,t^{-1}]$ and is given by \begin{equation*} e_{\mathbb{C}^*}(\pi_!E)=(wt)^mc_{\frac{1}{wt}}(\pi_!E) \end{equation*} where $m=\chi(\pi_!E)$ is the virtual rank. Consider the polynomial $P(X)=\prod_{i=1}^{N}(X-w_it)$. Let $Y_i=x_i+w_{k_i}t$ be a change of variable over $\mathbb{C}[[t]]$. Then \begin{align}\label{eq:euler_class_computation_1} \prod_{\substack{i\in [r],k\in [N]\\ k\ne k_i } }\frac{1}{e_{\mathbb{C}^*}(\pi_![\mathcal{L}^\vee_i])} &= \prod_{\substack{i\in [r],k\in [N]\\ k\ne k_i } }(Y_i-w_kt)^{-d_i+\bar{g}}e^{\frac{\theta_i}{(Y_i-w_kt)}} \\ &= \prod_{i\in [r]}^{}\bigg(\frac{P(Y_i)}{x_i}\bigg)^{-d_i+\bar{g}}e^{\theta_i\big(\frac{P'(Y_i)}{P(Y_i)}-\frac{1}{x_i} \big) }\nonumber \end{align} Here we are using the elementary identity \[\frac{P'(X)}{P(X)}=\sum_{k=1}^{N}\frac{1}{X-w_kt}. 
\] For the remaining classes, we obtain \begin{align}\label{eq:euler_class_computation_2} \prod_{\substack{i,j\in [r] \\ i\ne j}}^{} e_{\mathbb{C}^*}(\pi_![\mathcal{L}_i^\vee\otimes\mathcal{L}_j ]) &= \prod_{\substack{i,j\in [r] \\ i\ne j}}^{} (Y_i-Y_j)^{d_i-d_j-\bar{g}} e^{-\frac{(\theta_i+\theta_j+\phi_{ij})}{Y_i-Y_j}}\\ &= (-1)^{\bar{g}\binom{r}{2}+d(r-1)} \prod_{i< j}^{}(Y_i-Y_j)^{-2\bar{g}} \nonumber \end{align} \begin{align}\label{eq:euler_class_computation_3} \prod_{\substack{i,j\in [r] \\ i< j}}^{} e_{\mathbb{C}^*}(\pi_![\mathcal{L}^\vee_i\otimes\mathcal{L}^\vee_j]) &= \prod_{ i<j}^{}(Y_i+Y_j)^{d_i+d_j-\bar{g}} e^{-\frac{(\theta_i+\theta_j-\phi_{ij})}{Y_i+Y_j}} \end{align} Using the multiplicative property for the Euler classes, we have the the following expression for the equivariant Euler class of the virtual normal bundle : \begin{align}\label{eq:e(Nvir)spl} \frac{1}{e_{\mathbb{C}^*}(\nbun)}=u\prod_{i}^{}h_i^{d_i-\bar{g}} e^{\theta_iz_i} \cdot \prod_{i< j}\frac{(Y_i+Y_j)^{d_i+d_j-\bar{g}}}{(Y_i-Y_j)^{2\bar{g}}}e^{-\frac{\theta_i+\theta_j-\phi_{ij}}{Y_i+Y_j}} \end{align} where $u=(-1)^{\bar{g}\binom{r}{2}+d(r-1)}$, $h_i=\frac{x_i}{P(Y_i)}$ and \begin{equation}\label{eq:def_z_i} z_i=\bigg(\frac{P'(Y_i)}{P(Y_i)}-\frac{1}{x_i} \bigg). \end{equation} \section{Symmetric isotropic Quot scheme}\label{sec:Nvir_symm} Throughout this section we will assume $N=2n$, $V=\mathbb{C}^N\otimes{\cO}$ is the trivial vector bundle over $C$ and $\sigma$ is induced by a non-degenerate symmetric form on $\mathbb{C}^N$. We may assume that the symmetric form $\sigma$ is given by the block matrix \begin{align*} \sigma =\begin{bmatrix} 0&I_n\\ I_n&0 \end{bmatrix}. \end{align*} There is a natural action of $SO(N)$ on the $\IQ_d$ induced by the respective action on $ \mathbb{C}^N$. The subtorus $G=\mathbb{C}^*\subset SO(N) $ given by $(t^{-w_1},\dots , t^{-w_N})$ also acts on $\IQ_{d}$ where the weights $w_i=-w_{i+n}$ for $1\le i\le n$. \subsection{Fixed Loci} When the weights are distinct, we get the same description of fixed loci as in the case of $\sigma$ symplectic. Thus the fixed loci of the $\mathbb{C}^*$ action are isomorphic to a disjoint union of \[\fix_{\vec{d},\underline{k}}=C^{[d_1]}\times C^{[d_2]}\times \dots \times C^{[d_r]}\] for each possible tuple of positive integers $\vec{d}=(d_1,d_2,\dots,d_r)$ such that $d_1+d_2+\dots +d_r=d$ and $\underline{k}=\{k_1,\dots , k_r\}\subset \{1,\dots , N\}$ such that $\{i,i+n\}\not \subset \underline{k}$ for any $1\le i \le n$. We will use the localization formula with distinct weights to show compatibility of the virtual fundamental classes in Theorem \ref{compatibiliy_theorem}. We will use non-distinct weights to obtain the Vafa-Intriligator type formula in Theorem \ref{thm:r=2_symmetric}. In the latter case, we will obtain different fixed loci; we will describe it in Section \ref{sec:Int_a_classes}. The description of the equivariant normal bundle will be crucial in proving both the theorems. \subsection{Equivariant Normal bundle}\label{sec:Euler_class_symm} Let $0\to \cS\to \mathbb{C}^{N}\otimes\cO\to \cQ\to 0$ be the universal exact sequence over $C\times \IQ_d$. 
This restricts to \begin{align*} 0\to \mathcal{L}_{1}\oplus \dots \oplus \mathcal{L}_{r}\to\mathbb{C}^{N} \otimes \cO \to \mathcal{T}_{1}\oplus \dots \oplus \mathcal{T}_{r} \oplus \mathbb{C}^{N-r} \otimes\cO\to 0 \end{align*} on $C\times \fix_{\vec{d},\underline{k}}$, where $0\to \mathcal{L}_{i}\to\cO\to \mathcal{T}_{i} \to0$ is the universal exact sequence over $C\times C^{[d_i]}$ at the position $k_i$. Recall that in Theorem \ref{thm:POT}, we provided a perfect obstruction theory for the isotropic Quot scheme. In the $K$-theory of $\IQ_d$, the corresponding virtual tangent bundle is given by \begin{align*} T^{\vir}=\pi_![(\rHom(\cS,\cQ))]-\pi_![(\sHom(\Sym^2{\cS},\cO))]. \end{align*} The restriction of the virtual tangent bundle in the $\mathbb{C}^*$-equivariant $K$-theory of $\fix_{\vec{d},\underline{k}}$ is given by \begin{align*} \pi_!\bigg(\sum_{i,j\in [r]}^{} [\mathcal{L}_i^\vee\otimes\mathcal{T}_j ]+\sum_{i \in [r],k\in \underline{k}^c}[\mathcal{L}^\vee_i]-\sum_{1 \le i\le j\le r}^{} [\mathcal{L}^\vee_i\otimes\mathcal{L}^\vee_j]\bigg), \end{align*} where the above three summands have $\mathbb{C}^*$ weights $(w_{k_i}-w_{k_j})$, $(w_{k_i}-w_{k})$ and $(w_{k_i}+w_{k_j})$ respectively. The fixed part of the restriction of $T^{\vir}$ to $\fix_{\vec{d},\underline{k}}$ is \[\sum_{i\in [r]}^{}\pi_![\mathcal{L}^\vee_i\otimes\mathcal{T}_i] \] which matches the $K$-theory class of the tangent bundle of $\fix_{\vec{d},\underline{k}}$. The virtual normal bundle $\nbun$ is given by the moving part of the restriction of $T^{\vir}$. In the $K$-theory of $\fix_{\vec{d},\underline{k}}$, \begin{equation} \nbun=\pi_!\bigg(\sum_{\substack{i\in[r],k\in [N]\\ k\ne k_i}}[\mathcal{L}^\vee_i]-\sum_{\substack{i,j\in [r]\\i\ne j}}^{} [\mathcal{L}_i^\vee\otimes\mathcal{L}_j ]-\sum_{1\le i\le j\le r}^{} [\mathcal{L}^\vee_i\otimes\mathcal{L}^\vee_j]\bigg). \end{equation} Next we would like to determine the equivariant Euler class of $\nbun$ in the equivariant cohomology ring $H^*(\fix_{\vec{d},\underline{k}})[t,t^{-1}]$. Let $P(X)=\prod_{k=1}^{N}(X-w_{k}t)$ and $Y_i=x_i+w_{k_i}t$. Using \eqref{eq:euler_class_computation_1}, \eqref{eq:euler_class_computation_2} and \eqref{eq:euler_class_computation_3} and the identity \begin{align} \prod_{i\in [r]}^{} e_{\mathbb{C}^*}(\pi_![\mathcal{L}^\vee_i\otimes\mathcal{L}^\vee_i]) &= \prod_{ i\in [r]}^{}(2Y_i)^{2d_i-\bar{g}} e^{-\frac{2\theta_i}{Y_i}}, \end{align} we obtain the expression for the equivariant Euler class of $\nbun$: \begin{align}\label{eq:e(N)_symmetric} \frac{1}{e_{\mathbb{C}^*}(\nbun)}=&u2^{2d-r\bar{g}} \prod_{i=1}^{r}h_i^{d_i-\bar{g}}Y_i^{2d_i-\bar{g}}e^{\theta_iz_i}\prod_{i< j}\frac{(Y_i+Y_j)^{d_i+d_j-\bar{g}}}{(Y_i-Y_j)^{2\bar{g}}}e^{-\frac{\theta_i+\theta_j-\phi_{ij}}{Y_i+Y_j}} \end{align} where $u=(-1)^{d(r-1)+\binom{r}{2}\bar{g}}$ and \begin{equation} \begin{aligned} h_i&=\frac{x_i}{P(Y_i)}\\ z_i&=\frac{P'(Y_i)}{P(Y_i)}-\frac{2}{Y_i}-\frac{1}{x_i}. \end{aligned} \end{equation} \section{Compatibility of virtual fundamental classes} In this section we only consider $\IQ_d$ with $V, L$ trivial and $N$ even. Fix a point $q\in C$. Then there is a natural embedding \begin{equation} i_q:\IQ_d\to \IQ_{d+r} \end{equation} which sends a subsheaf $S\subset \mathbb{C}^N\otimes \cO$ to the composition $S(-q)\to S \to \mathbb{C}^N\otimes \cO $. Observe that $S(-q)$ is an isotropic subsheaf because the composition \begin{equation*} S(-q)\to S\to \mathbb{C}^N\otimes \cO \xrightarrow{\sigma} \mathbb{C}^N\otimes \cO\to S^\vee\to S(-q)^{\vee} \end{equation*} is zero.
\begin{proof}[Proof of Theorem \ref{compatibiliy_theorem}] We work with the symmetric isotropic Quot scheme. The argument in the symplectic case is similar. Let $j$ be the inclusion of the fixed loci into $\IQ_{d}$. The virtual localization formula \cite{Graber1997LocalizationOV} asserts that \begin{align*} [\IQ_{d}]^{\vir}=j_*\sum_{\vec{d},\underline{k}}\frac{[\fix_{\vec{d},\underline{k}}]^{\vir}}{e_{\mathbb{C}^*}(\nbun)} \end{align*} in $A_{*}^{\mathbb{C}^*}(\IQ_{d})\otimes \mathbb{Q}[t,t^{-1}]$, where $t$ is the generator of the equivariant ring of $\mathbb{C}^*$. Note that $[\fix_{\vec{d}, \underline{k}}]^{\vir}=[\fix_{\vec{d}, \underline{k}}]$ in our case. We will show the compatibility of the virtual fundamental classes by equating the fixed loci contributions. We denote $\bar{\fix}=\fix_{\vec{d}+(1,\dots,1),\underline{k}}$ and $\fix= \fix_{\vec{d},\underline{k}}$ for notational convenience. These are fixed loci on $\IQ_{d+r}$ and $\IQ_{d}$ respectively. The map $i_q$ restricts to the natural map over the fixed locus $\tilde{i}_q: \fix \to\bar{\fix}$. This sends the fixed point $L_1\oplus \cdots \oplus L_r\subset \mathbb{C}^N\otimes\cO$ to $L_1(-q)\oplus \cdots \oplus L_r(-q)\subset \mathbb{C}^N\otimes\cO$. We have the identity (see \cite{marian2007} for more details) \begin{align*} \tilde{i}_{q*}[\fix]=\prod_{i=1}^{r}\bar{x}_i\cap [\bar{\fix}], \end{align*} where $\bar{x}_i$ are the cohomology classes on $\bar{\fix}$ defined in \eqref{eq:def_x,y_classes}. In the equivariant cohomology of the fixed loci $\fix$, \begin{align*} c_{\text{top}}(\Sym^2\cS^{\vee}_q)|_{\fix}& = \prod_{ 1\le i\le j\le r}(Y_i+Y_j) \end{align*} where $Y_i=x_i+w_{k_i}t$, and over $\bar{\fix}$ we have \begin{equation}\label{eq:compatibility_proof} c_{\text{top}}(\sHom(\cS_q,\mathbb{C}^N\otimes\cO))|_{\bar{\fix}}= \prod_{ i=1}^{r}\bar{x}_i\cdot \prod_{ i=1}^{r}\bar{h}_i^{-1}. \end{equation} Using the description of the Euler class of the equivariant normal bundle in \eqref{eq:e(N)_symmetric}, we have \begin{align*} \prod_{ 1\le i\le j\le r}(Y_i+Y_j)^2\cdot \frac{1}{e_{\mathbb{C}^*}(\nbun_{\fix/\IQ_d})}=\tilde{i}_q^*\prod_{ i=1}^{r}h_i^{-1}\cdot \tilde{i}_q^*\frac{1}{e_{\mathbb{C}^*}(\nbun_{\bar{\fix}/\IQ_{d+r}})}. \end{align*} Hence, in the application of the equivariant virtual localization formula \cite{Graber1997LocalizationOV} to $\IQ_{d+r}$, the contribution of a fixed locus of the form $\bar{\fix}=\fix_{\vec{d}+(1,\dots,1),\underline{k}}$, that is, one all of whose degrees are positive, matches the corresponding contribution of $\fix_{\vec{d},\underline{k}}$ over $\IQ_{d}$. When one of the degrees of a fixed locus of $\IQ_{d+r}$ is zero, its contribution vanishes, since the corresponding factor $\bar{x}_i$ appearing in \eqref{eq:compatibility_proof} is zero. \end{proof} \section{Symmetric powers of curves}\label{sec:prelim_hilb} In this section we will describe the intersection theory of the products of symmetric powers of curves \[X_{\vec{d}}= C^{[{d_1}]}\times\cdots \times C^{[d_r]}.\] This will be needed to obtain the Vafa-Intriligator type formulas for the intersections of $a$- and $f$-classes over isotropic Quot schemes. There are two difficulties in the calculation of the virtual intersection numbers involving the above classes: knowing how to intersect $\theta$, $\phi_{ij}$ and $x$ (defined in Section \ref{sec:Euler_class_sympl}), and summing over all the fixed loci. Note that the number of fixed loci increases as $d$ increases. Moreover, the expressions for the Euler class of the virtual normal bundles \eqref{eq:e(Nvir)spl} and \eqref{eq:e(N)_symmetric} over the fixed loci involve many complicated terms.
We describe techniques to evaluate intersection numbers involving the above terms. For the summation, we will use a beautiful combinatorial technique called multivariate Lagrange-B\"urman formula. \subsection{Intersection theory of $X_{\vec{d}}$} The following are some known facts about the $x$, $\theta$ and $y$ classes (see \cite{ACGH} and \cite{Th}) over $C^{[d]}$: \begin{itemize} \item The intersections of $x$ and $\theta $ are given by: \begin{equation*} \int_{C^{[d]}}^{}\theta^\ell x^{d-\ell}=\begin{cases} \frac{g!}{(g-\ell)!}&\ell\le g \\ 0& \ell>g \end{cases}. \end{equation*} In particular, for any polynomial $P$, and $\ell\le g$ \begin{equation}\label{eq:theta_int} \int_{C^{[d]}}^{}\theta^\ell P(x)=\frac{g!}{(g-\ell)!}\int_{C^{[d]}}^{} x^\ell P(x). \end{equation} \item The non-zero integrals in the $y$ classes over $C^{[d]}$ satisfy \begin{itemize} \item[(i)] $y^{k}$ appears with exponent at most 1 because these are odd classes. \item[(ii)] $y^{k}$ appears if and only if $y^{k+g}$ appears. \item[(iii)] For any choice of choice of distinct integers $k_1,\dots, k_s\in \{1,\dots g\}$ and a polynomial $P$ in two variables, \begin{equation}\label{y_int} \int_{C^{[d]}}y^{k_1}y^{k_1+g}\cdots y^{k_s}y^{k_s+g}P(x,\theta)=\frac{(g-s)!}{g!} \int_{C^{[d]}}\theta^sP(x,\theta). \end{equation} \end{itemize} \end{itemize} Fix $\vec{d}=(d_1,\dots,d_r)$ and $X_{\vec{d}}=C^{[d_1]}\times\cdots\times C^{[d_r]}$. For $1\le i\le r$, define the cohomology classes $x_i,y_i^k$ and $\theta_i$ on $X_{\vec{d}}$ obtained by pulling back the corresponding classes from $C^{[d_i]}$. \begin{prop}\label{prop:phi_int} Let $P$ be a polynomial in $2r$ variables, then \begin{align} \int_{X_{\vec{d}}}^{}\phi_{12}^{2\ell}P(\underline{x},\underline{\theta})= (-1)^\ell\binom{2\ell}{\ell}\binom{g}{\ell}^{-1}\int_{X_{\vec{d}}}^{}(\theta_1\theta_2)^\ell P(\underline{x},\underline{\theta}) \end{align} where $\underline{x}=(x_1,\dots,x_r)$ and $\underline{\theta}=(\theta_1,\dots,\theta_r)$. \end{prop} \begin{proof} Recall that \[\phi_{12}=-\sum_{k=1}^{g}(y_1^ky_2^{k+g}+y_2^ky_1^{k+g}). \] For parity reasons, $\phi_{12}$ must appear with even exponent. Using \eqref{y_int}, $\phi_{12}^{2\ell}$ can be replaced by a constant multiple of $\theta_1^\ell\theta_2^\ell$, where the constant is $\frac{(g-\ell)!^2}{g!^2}$ times the sum of coefficients of \[y_1^{k_1}y_1^{k_1+g}\dots y_1^{k_\ell}y_1^{k_\ell+g}\cdot y_2^{k_1}y_2^{k_1+g}\dots y_2^{k_\ell}y_2^{k_\ell+g}\] in the multinomial expansion of $\phi_{12}^{2\ell}$. We observe that \begin{align*} (y_1^{k}y_2^{k+g}+y_2^ky_1^{k+g})^2&=y_1^{k}y_2^{k+g}y_2^ky_1^{k+g}+y_2^{k}y_1^{k+g}y_1^ky_2^{k+g}\\ &=-2y_1^{k}y_1^{k+g}y_2^ky_2^{k+g}. \end{align*} Thus the required sum of coefficients is \[(-2)^{\ell}\binom{g}{\ell}\binom{2\ell}{2,\dots, 2},\] where $\binom{g}{\ell}$ is the number of choices for $\{k_{i_1},\dots, k_{i_\ell}\}$ and $\binom{2\ell}{2,\dots, 2}$ is the number of ways of picking $\ell$ pairs of factors in $\phi_{12}^{2\ell}$ each of which contributes $(-2)$. The binomial identity \begin{align} (-2)^\ell\binom{g}{\ell}\binom{2\ell}{2,\dots, 2}\frac{(g-\ell)!^2}{(g!)^2}=(-1)^\ell\binom{2\ell}{\ell}\binom{g}{\ell}^{-1} \end{align} completes the proof. \end{proof} \subsection{Summing over $|\vec{d}|=d$} In Section \ref{sec:Int_a_classes} and \ref{sec:f_intersection}, we will use the localization formula to calculate the tautological intersection numbers. We use the independence of the weights in the localization formula. 
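As a quick sanity check on the combinatorics, the following Python sketch (hypothetical, not part of the argument) verifies the binomial identity used at the end of the proof of Proposition~\ref{prop:phi_int} above for small values of $g$ and $\ell$.
\begin{verbatim}
# Hypothetical sketch: check the binomial identity from the proof of
# Proposition prop:phi_int for small g and ell.
from math import comb, factorial
from fractions import Fraction

def lhs(g, l):
    multinomial = factorial(2 * l) // 2 ** l   # binom(2l; 2, ..., 2)
    return (Fraction((-2) ** l) * comb(g, l) * multinomial
            * Fraction(factorial(g - l) ** 2, factorial(g) ** 2))

def rhs(g, l):
    return Fraction((-1) ** l * comb(2 * l, l), comb(g, l))

assert all(lhs(g, l) == rhs(g, l)
           for g in range(1, 8) for l in range(g + 1))
print("binomial identity verified for all g < 8")
\end{verbatim}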
We will describe how to sum over the fixed point contributions for a special choice of weights. The following two propositions are crucial for our argument. Let $w_1,\dots, w_r$ be $r$ distinct $N^{\text{th}}$ roots of unity and let $P(Y)=Y^N-1$. \begin{prop}\label{prop:theta_sum_int} Let $p_1,\dots, p_r$ and $d$ be non-negative integers and $R(Y_1,\dots ,Y_r)$ be a homogeneous rational function of degree $s=Nd-r\bar{g}(N-1)-p$ where $p_1+\cdots +p_r=p$. Let $B(Y)=\frac{aY^N+b}{Y}$, $Y_i=x_i+w_i$, $h_i=\frac{x_i}{P(Y_i)}$ and \[z_i=\frac{B(Y_i)}{P(Y_i)}-\frac{1}{x_i}. \] Then we have the following identity \begin{align}\label{Eq:LB} &\sum_{|\vec{d}|=d}^{}\int_{X_{\vec{d}}}^{}R(Y_1,\dots,Y_r)\prod_{i=1}^{r}\frac{\theta_i^{p_i}}{p_i!}e^{\theta_iz_i}h_i^{d_i-\bar{g}} \\&=N^{-r}\frac{R(w_1,\dots, w_r)}{(w_1\cdots w_r)^{\bar{g}}}\prod_{i=1}^{r}\binom{g}{p_i}w_i^{p_i}[q^{d}](a+b+aq)^{rg-p}(1+q)^{d-r{g}}q^p\nonumber. \end{align} \end{prop} \begin{proof} The expression inside the integral is considered in the power series ring $\mathbb{Q}[[x_1,\dots,x_r,\theta_1,\dots,\theta_r]]$. We will first single out the terms containing $\theta_i$. We know that $\theta_i^k=0$ for $k>g$; thus \begin{align*} \frac{\theta_i^{p_i}}{p_i!}e^{\theta_iz_i}&=\sum_{\ell=0}^{g-p_i} \frac{\theta_i^{p_i+\ell}}{p_i!\ell!}\bigg(\frac{B(Y_i)}{P(Y_i)}-\frac{1}{x_i}\bigg)^\ell. \end{align*} We replace $\theta_i^{p_i+\ell}$ by $\frac{g!}{(g-p_i-\ell)!}x_i^{p_i+\ell}$ using \eqref{eq:theta_int}. We further simplify \begin{align*} \sum_{\ell=0}^{g-p_i} \frac{g!x_i^{p_i+\ell}}{p_i!(g-p_i-\ell)!}\frac{1}{\ell!}\bigg(\frac{B(Y_i)}{P(Y_i)}-\frac{1}{x_i}\bigg)^\ell&=\binom{g}{p_i}\cdot x_i^{p_i}\cdot\bigg(\frac{x_iB(Y_i)}{P(Y_i)} \bigg)^{g-p_i}. \end{align*} Plugging this back in \eqref{Eq:LB}, we obtain the following integral of a power series in the variables $x_1,\dots,x_r$: \begin{align*} \sum_{|\vec{d}|=d}^{}\int_{X_{\vec{d}}}^{}R(Y_1,\dots,Y_r)\prod_{i=1}^{r}\binom{g}{p_i}\cdot x_i^{p_i}\cdot\bigg(\frac{x_iB(Y_i)}{P(Y_i)} \bigg)^{g-p_i}h_i^{d_i-\bar{g}}. \end{align*} We now have to find the coefficient of $x_1^{d_1}\dots x_r^{d_r}$ in the above expression and sum it over $|\vec{d}|=d_1+\cdots +d_r=d$. For such problems, we have a very useful result from combinatorics, the Lagrange-B\"urmann formula \cite{Lagrange_Burman}, which states \begin{equation}\label{eq:L_B} \sum_{|\vec{d}|}q_1^{d_1}\cdots q_r^{d_r}([x_1^{d_1}\cdots x_r^{d_r}]f(x_1,\dots,x_r)\prod_{i=1}^{r}h_i^{d_i})=f(x_1,\dots,x_r)\cdot \prod_{i=1}^{r}\frac{1}{h_i}\frac{d{x_i}}{d{q_i}} \end{equation} where $q_i=\frac{x_i}{h_i}$ and $h_i:=h_i(x_i)$ are power series with $h_i(0)\ne 0$. We can apply this formula to \begin{align*} h_i&=\frac{x_i}{P(Y_i)} \\ f(x_1,\dots x_r)& = R(Y_1,\dots,Y_r)\prod_{i=1}^{r}\binom{g}{p_i}\cdot x_i^{g}\cdot\bigg(\frac{B(Y_i)}{P(Y_i)} \bigg)^{g-p_i}\bigg(\frac{x_i}{P(Y_i)}\bigg)^{-\bar{g}}\\ &= R(Y_1,\dots,Y_r)\prod_{i=1}^{r}\binom{g}{p_i}B(Y_i)^{g-p_i}P(Y_i)^{p_i}h_i. \end{align*} We have the change of variable \[q_i=\frac{x_i}{h_i}=P(Y_i)=Y_i^N-1=(x_i+w_i)^N-1,\] and the inverse is given by \begin{align*} x_i=Y_i-w_i=w_i(1+q_i)^{1/N}-w_i. \end{align*} Observe that the derivative \begin{align*} \frac{dx_i}{dq_i}&=\frac{1}{P'(Y_i)}. \end{align*} By direct computation, \begin{align}\label{eq:power_series_version_theta} f(x_1,\dots,x_r)\cdot \prod_{i=1}^{r}\frac{1}{h_i}\frac{d{x_i}}{d{q_i}}&=R(Y_1,\dots,Y_r)\prod_{i=1}^{r}\binom{g}{p_i} \frac{B(Y_i)^{g-p_i}P(Y_i)^{p_i}}{P'(Y_i)}.
\end{align} In \eqref{Eq:LB}, we are interested in finding the sum over the coefficients of $q_1^{d_1}\cdots q_r^{d_r}$ where $d_1+\cdots +d_r=d$. To find this sum, we will substitute \[q_1=\cdots=q_r=q\] to obtain a power series in one variable $q$ and find the coefficient of $q^d$. In this situation, \begin{align*} &Y_i=w_i(1+q)^{1/N}, \hspace{.5cm}B(Y_i)=\frac{(aq+(a+b))}{w_i(1+q)^{1/N}},\\&P'(Y_i)=Nw_i^{-1}(1+q)^{\frac{N-1}{N}}. \end{align*} Note that $R$ is a homogeneous rational function of degree $s$, thus $R(Y_1,\dots, Y_r)=R(w_1,\dots,w_r)(1+q)^{s/N}$. Substituting, the power series \eqref{eq:power_series_version_theta} becomes \begin{align*} &R(w_1,\dots,w_r)(1+q)^{\frac{s}{N}}\prod_{i=1}^{r}\binom{g}{p_i}\frac{w_i^{p_i-\bar{g}}}{N}\frac{(a+b+aq)^{g-p_i}}{(1+q)^{\frac{g-p_i}{N}+\frac{N-1}{N}}}q^{p_i}\\ &=(a+b+aq)^{rg-p}(1+q)^{d-r{g}}q^pN^{-r}\frac{R(w_1,\dots, w_r)}{(w_1\cdots w_r)^{\bar{g}}}\prod_{i=1}^{r}\binom{g}{p_i}w_i^{p_i}, \end{align*} where $p=p_1+\cdots+p_r$. \end{proof} \begin{rem} If $p> rg$ then $p_i>g$ for some $i$, and the integral is $0$ since $\theta_i^{p_i}=0$. Therefore we may assume that $p\le rg$, so that the first factor $(a+b+aq)^{rg-p}$ is a polynomial. Moreover, when $d\ge rg$, or when $b=0$ and $d\ge p$, the answer in \eqref{Eq:LB} is given by \begin{align*} \frac{a^{rg}}{N^{r}}\frac{R(w_1,\dots,w_r)}{(w_1\cdots w_r)^{\bar{g}}}\prod_{i=1}^{r}\binom{g}{p_i}\frac{w_i^{p_i}}{a^{p_i}}. \end{align*} \end{rem} \begin{rem} The above proposition, specialized to $B(Y)=P'(Y)$ and $p=0$, greatly simplifies the combinatorics used in finding the Vafa-Intriligator formula for Quot schemes in Section 4 of \cite{marian2007}. \end{rem} The previous result does not suffice for the calculation of virtual intersection numbers over isotropic Quot schemes. When the rank $r=2$, the following proposition can be used to find Vafa-Intriligator type formulas for $\IQ_d$. \begin{prop}\label{prop:fixed_loci_calculation} Let $R(Y_1,Y_2)$ be a homogeneous rational function of degree $s=Nd-2\bar{g}(N-1)$. We borrow the notation $X_{\vec{d}}$, $Y_i$, $P(Y)$, $B(Y)$, $h_i$ and $z_i$ from Proposition \ref{prop:theta_sum_int}. Let $T(q)=(a+b+aq)/q$. Then we have the following identity \begin{align*} \sum_{|\vec{d}|=d}^{}\int_{X_{\vec{d}}}^{}&R(Y_1,Y_2)e^{-\frac{\theta_1+\theta_2-\phi_{12}}{Y_1+Y_2}}\prod_{i=1}^{2}e^{\theta_iz_i}h_i^{d_i-\bar{g}} \\&=\frac{1}{N^2}\frac{R(w_1,w_2)}{(w_1w_2)^{\bar{g}}}[q^d](1+q)^{d}\bigg(\frac{qT(q)}{1+q}\bigg)^{2g}\bigg(1-\frac{1}{T(q)}\bigg)^g. \end{align*} In particular, when $d\ge2g$ the above value is \begin{equation*} \frac{a^g(a-1)^g}{N^2}\frac{R(w_1,w_2)}{(w_1w_2)^{\bar{g}}}. \end{equation*} \end{prop} \begin{proof} We will first replace exponents of $\phi_{12}$ with exponents of $\theta_1\theta_2$ using Proposition \ref{prop:phi_int}. For parity reasons $\phi_{12}$ must appear with an even power to obtain a non-zero number. Thus we can make the following replacements: \begin{align*} e^{-\frac{\theta_1+\theta_2- \phi_{12}}{Y_1+Y_2}} &\to\sum_{p=0}^{\infty}\frac{(-1)^p}{p!(Y_1+Y_2)^p}\bigg(\sum_{2\ell+r+s=p}^{}\binom{p}{2\ell,r,s}\theta_1^{r}\theta_2^{s}\phi_{12}^{2\ell} \bigg)\\ &\to\sum_{p=0}^{\infty}\sum_{2\ell+r+s=p}^{}\frac{(-1)^{p-\ell}}{p!}\binom{p}{2\ell,r,s}\frac{\binom{2\ell}{\ell}}{\binom{g}{\ell}}\frac{\theta_1^{r+\ell}\theta_2^{s+\ell}}{(Y_1+Y_2)^p} \\ &= \sum_{p=0}^{\infty}\sum_{2\ell+r+s=p}^{}\frac{(-1)^{p-\ell}}{(Y_1+Y_2)^p}\frac{\binom{p}{2\ell,r,s}}{\binom{p}{r+\ell}}\frac{\binom{2\ell}{\ell}}{\binom{g}{\ell}}\frac{\theta_1^{r+\ell}\theta_2^{s+\ell}}{(r+\ell)!(s+\ell)!}.
\end{align*} Now we use Proposition \ref{prop:theta_sum_int} to reduce the problem to finding \begin{align*} \sum_{\substack{2\ell+r+s=p}}^{}(-1)^{p-\ell} \frac{\binom{p}{2\ell,r,s}}{\binom{p}{r+\ell}}\frac{\binom{2\ell}{\ell}}{\binom{g}{\ell}}\cdot \frac{1}{N^2}\frac{R(w_1,w_2)w_1^{r+\ell}w_2^{s+\ell}}{(w_1+w_2)^p(w_1w_2)^{\bar{g}}}\binom{g}{r+\ell}\binom{g}{s+\ell} \\ \cdot[q^{d}](1+q)^d\bigg(\frac{a+b+aq}{1+q}\bigg)^{2g}\bigg(\frac{q}{a+b+aq}\bigg)^{p} \end{align*} where the sum is taken over $r,s,\ell$ such that $r+\ell,s+\ell\le g$. Rearranging the binomial coefficients, the above expression is same as \begin{align*} [q^d](1+q)^d&\bigg(\frac{a+b+aq}{1+q}\bigg)^{2g}\frac{1}{N^2}\frac{R(w_1,w_2)}{(w_1w_2)^{\bar{g}}}\\ &\cdot\sum_{\substack{2\ell+r+s=p}}^{} (-1)^{\ell}\binom{g}{\ell}\binom{g-\ell}{r}\binom{g-\ell}{s}\frac{(-w_1)^{r+\ell}(-w_2)^{s+\ell}}{T(q)^p(w_1+w_2)^p}. \end{align*} The summation in the above expression greatly simplifies via the following lemma. \end{proof} \begin{lem} Let $g$ and $d$ be integers, then \begin{align*} \sum_{\substack{2\ell+r+s=p}}^{}(-1)^{\ell}\binom{g}{\ell}\binom{g-\ell}{r}\binom{g-\ell}{s}\frac{(-w_1)^{r+\ell}(-w_2)^{s+\ell}}{T(q)^p(w_1+w_2)^p} =\bigg(1-\frac{1}{T(q)}\bigg)^g. \end{align*} \end{lem} \begin{proof} The lemma follows by observing that the given expression simplifies as \begin{align*} &\sum_{\ell}^{}\binom{g}{\ell}\frac{(-1)^{\ell}}{T(q)^{2\ell}}\frac{(-w_1)^{\ell}(-w_2)^{\ell}}{(w_1+w_2)^{2\ell}} \bigg(1-\frac{w_1}{T(q)(w_1+w_1)}\bigg)^{g-\ell} \bigg(1-\frac{w_2}{T(q)(w_1+w_1)}\bigg)^{g-\ell} \\ &=\bigg(\bigg(1-\frac{w_1}{T(q)(w_1+w_1)}\bigg) \bigg(1-\frac{w_2}{T(q)(w_1+w_1)}\bigg)-\frac{w_1w_2}{T(q)^2(w_1+w_2)^2}\bigg)^g\\ &=\bigg(1-\frac{1}{T(q)}\bigg)^g. \end{align*} \end{proof} \section{Intersection of $a$-classes}\label{sec:Int_a_classes} In this section we will prove Theorem \ref{thm:r=2,sympl} and \ref{thm:r=2_symmetric}, which are explicit expressions for the intersections of $a$-classes in the symplectic and symmetric case respectively. \subsection{$a$-class intersections for $\sigma$ symplectic} Let $r=2$. In this case the virtual dimension of $\IQ_d$ is given by \[\vd = (N-1)d-(2N-5)\bar{g}. \] Let us define \begin{equation}\label{eq:T_d,g} T_{d,g}(N)= [q^d](1+q)^{d-g}\bigg(1+\frac{N-1}{N}q\bigg)^g. \end{equation}In particular, when $d\ge g$, we get $T_{d,g}(N)=(1-1/N)^g$. A simple usage of Lagrange inversion theorem implies \[T_{d,g}(N)=[q^d](1-q/N)^g(1-q)^{-1}\] and hence $T_{d,g}(N)$ is the sum of the first $d$ terms in the binomial expansion of $(1-1/N)^g$. \begin{theorem}\label{thm:7.1_r=2_sympl} Let $Q(X_1,X_2)$ be a polynomial of weighted degree $\vd$, where the variables $X_i$ have degree $i$. Then, \begin{equation} \int_{[\IQ_d]^{\vir}}^{}Q(a_1,a_2) =uT_{d,g}(N) \sum_{w_1,w_2}^{}S(w_1,w_2)J(w_1,w_2)^{\bar{g}}(w_1+w_2)^d \end{equation} where the sum is taken over all the pairs of $N^\text{th}$ roots of unity $\{w_1,w_2\}$ with $w_1\ne \pm w_2$. Here $u=(-1)^{\bar{g}+d}$ and \begin{align*} J(w_1,w_2)&=N^2w_1^{-1}w_2^{-1}(w_1-w_2)^{-2}(w_1+w_2)^{-1}, \end{align*} and $S(w_1,w_2)=Q(w_1+w_2,w_1w_2)$. \end{theorem} \begin{proof} The equivariant pull back of $a_i$ to the fixed loci is the $i$th elementary symmetric function $\sigma_i((w_1t+x_1),(w_2t+x_2))$, hence $Q(a_1,a_2)$ pulls back to $S(w_1t+x_1,w_2t+x_2)$. 
We are in a position to apply the equivariant virtual localization formula \cite{Graber1997LocalizationOV} which yields \begin{align}\label{eq:localiz_r=2_symp} \int_{[\IQ_d]^{\vir}}^{}Q(a_1,a_2)=\sum_{d_1+d_2=d}^{}\sum_{w_1,w_2}^{} \int_{\fix_{\vec{d},\underline{k}}}^{} \frac{S(Y_1,Y_2)}{e_{\mathbb{C}^*}(\nbun)}, \end{align} where the sum is taken over all the prescribed choices for $\{w_1,w_2\}$ and $Y_i=x_i+w_it$. After appropriately replacing $\theta$ and $\phi_{12}$ classes with $x$ classes as described in Section \ref{sec:prelim_hilb}, the above expression can be written as a rational function in $x_1,x_2$ and $t$ of with total degree $d$ . The integral can thus be evaluated by finding coefficient of $x_1^{d_1}x_2^{d_2}$. The homogeneity and the identity $d_1+d_2=d $ ensures that resulting element in $\mathbb{C}[t,t^{-1}]$ has $t$ degree 0. Hence we can safely assume $t=1$ for the purpose of our calculation without changing the value of integral. Moreover, the localization formula is independent of the choice of the weights $(w_1,\dots w_N)$ as long as these are distinct and satisfy $w_i=-w_{i+n}$ for $1\le i\le n$. Hence we may assume these to be distinct roots of the polynomial $P(X)=X^N-1$. We substitute the expression \eqref{eq:e(Nvir)spl} of the Euler class of $\nbun$ into \eqref{eq:localiz_r=2_symp} to get \begin{align*} \sum_{w_1,w_2}^{}\sum_{d_1+d_2=d}^{} \int_{\fix_{\vec{d},\underline{k}}}^{} R(Y_1,Y_2)e^{-\frac{\theta_1+\theta_2-\phi_{12}}{Y_1+Y_2}}\prod_{i=1}^{2}e^{\theta_iz_i}h_i^{d_i-\bar{g}}, \end{align*} where by \eqref{eq:def_z_i} $z_i=\frac{P'(Y_i)}{P(Y_i)}-\frac{1}{x_i}$, $h_i=\frac{x_i}{P(Y_i)}$ and \[R(Y_1,Y_2)=uS(Y_1,Y_2)\frac{(Y_1+Y_2)^{d-\bar{g}}}{(Y_1-Y_2)^{2\bar{g}}}. \] The homogeneous degree of $R$ is $\vd +(d-3\bar{g})=Nd -2\bar{g}(N-1)$, therefore Proposition \ref{prop:fixed_loci_calculation} gives the required intersection number \begin{align} \sum_{w_1,w_2}\frac{1}{N^2}\frac{R(w_1,w_2)}{(w_1w_2)^{\bar{g}}}[q^d]N^{2g}(1+q)^{d-g}\bigg(1+\frac{N-1}{N}q\bigg)^g, \end{align} completing the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:r=2,sympl}] In the statement of Theorem \ref{thm:7.1_r=2_sympl}, the expression \[S(w_1,w_2)J(w_1,w_2)^{\bar{g}}(w_1+w_2)^d\] is homogeneous of degree $N(d-2\bar{g})$, hence this equals $S(1,\zeta)J(1,\zeta)^{\bar{g}}(1+\zeta)^d,$ where $\zeta=w_2/w_1$. \end{proof} \subsection{$a$-class intersections for $\sigma$ symmetric}\label{sec_r=1_symmetric} Define \[\tilde{T}_{d,g}(N)=[q^d]\bigg(1+\frac{N-2}{N}q\bigg)^g(1+q)^{d-g}. \] \begin{prop}\label{prop:r=1} Over $\IQ_d$, where $N$ is even, $r=1$ and $\sigma$ is symmetric, the top intersection of the tautological class is given by \begin{align} \int_{[\IQ_d]^{\vir}}^{}a_1^{\vd}= N^{g}\tilde{T}_{d,g}(N)2^{2d-\bar{g}} \end{align} where $\vd=(N-2)(d-\bar{g})$ is the virtual dimension. \end{prop} \begin{proof} The restriction of $a_1$ to the fixed locus $\fix_{d, i}=C^{[d]}$ is $Y_i=x_i+w_it$. The Euler class of the equivariant normal bundle of the fixed locus is given by \eqref{eq:e(N)_symmetric} \begin{align*} \frac{1}{e^{\vir}_{\mathbb{C}^*}(\nbun)}&=2^{2d-\bar{g}}Y_i^{2d-\bar{g}}h_i^{d-\bar{g}}e^{\theta_iz_i } \end{align*} where $z_i=(B(Y_i)/P(Y_i)-1/x_i)$ and \[\frac{B(Y)}{P(Y)}= \frac{P'(Y)}{P(Y)}-\frac{2}{Y}. \] The equivariant virtual localization formula gives \begin{equation*} \int_{[\IQ_d]^{\vir}}^{}a_1^{\vd}=\sum_{i=1}^{N}\int_{\fix_{d, i}}\frac{Y_i^{\vd}}{e^{\vir}_{\mathbb{C}^*}(\nbun)}. 
\end{equation*} We choose the weight of the action to be $N^{\text{th}}$ roots of unity, thus $P(X)=X^N-1$, hence $B(Y)= \frac{(N-2)Y^N+2}{Y}$, and we obtain the integral as a special case of Proposition \ref{prop:theta_sum_int} by putting $r=1$ and $p=0$. \end{proof} \begin{rem} Similar results can be obtained when $N$ is odd, $r=1$ and $\sigma$ symmetric. In particular, when the virtual dimension is non-zero, \begin{align} \int_{[\IQ_d]^{\vir}}^{}a_1^{\vd}= (N-1)^{g}2^{2d-\bar{g}}T_{d,g}(N-1). \end{align} \end{rem} When $r=2$, localizing with distinct weights makes combinatorics very difficult. However using two equal weights enable us to find a simple formula for these intersections. Using exactly two equal weights results in getting $C^{[d_1]}\times\IQ_{d_2}(\mathbb{C}^2\otimes\cO, r=1,\sigma)$ as part of the fixed loci. We will first show that \[\IQ_{d}(\mathbb{C}^2\otimes\cO, r=1,\sigma)= C^{[d]}\sqcup C^{[d]},\]and the two components $C^{[d]}$ come equipped with a non-standard virtual structure. We will use Proposition \ref{prop:r=1} to understand how to intersect over these non-standard loci. Recall that the virtual dimension of $\IQ_d$ is \begin{equation*} \vd=(N-3)d -\bar{g}(2N-7). \end{equation*} Let $N=2n$. Let $G=\mathbb{C}^*$ act on $\IQ_d$ with weights \[(w_1,\dots,w_N)=(\zeta ,\zeta^2,\dots\zeta^{n-1},0,\zeta^n,\dots,\zeta^{2n-2},0),\] where $\zeta$ is a primitive $(N-2)$'th root of unity. A point $[0\to S\to \mathbb{C}^N\otimes\cO\to Q\to 0]$ in $\IQ_d$ is fixed under the action of $G$ if and only if one of the following is satisfied: \begin{itemize} \item[(i)] The sheaf $S$ splits as $L_1\oplus L_2$ where $L_i$ is a subsheaf of one of the $N-2$ copies of $\cO$, at position $k_i\notin\{n,2n\}$, in $\mathbb{C}^N\otimes\cO$ such that $k_1-k_2\not \equiv 0 \mod n$. The corresponding fixed locus is \[\fix_{\vec{d},\underline{k}} \cong C^{[d_1]}\times C^{[d_2]}, \] where $\deg{L_i}=d_i$ and $\underline{k}=(k_1,k_2)$. \item[(ii)] The sheaf $S$ splits as $L_1\oplus E$ where $L_1$ is a subsheaf of one the copies of $\cO$, at position $k\notin \{n,2n\}$, in $\mathbb{C}^N\otimes\cO$ and $E$ is an isotropic rank one subsheaf of $\cO_n\oplus\cO_{2n}$, the sum of copies of $\cO$ at positions $n$ and $2n$. Let $\fix_{\vec{d}, k}$ be the component of the fixed loci consisting of $(L_1,E)$, where $d_1=\deg L_1$, $d_2=\deg E$ and $k$ is the position mentioned above. Note that \[\fix_{\vec{d}, k}\cong C^{[d_1]}\times\IQ_{d_2}(\cO\otimes\mathbb{C}^2, r=1,\sigma). \] \end{itemize} \begin{theorem} Let $Q(X_1,X_2)$ be a polynomial of weighted degree $\vd$, where the variables $X_i$ have degree $i$. Then, \begin{equation*} \int_{[\IQ_d]^{\vir}}^{}Q(a_1,a_2) = I_1+I_2 \end{equation*} where $S(X_1,X_2)=Q(X_1+X_2,X_1X_2)$, \begin{align*} I_1&=u4^dT_{d,g}(N-2)\sum_{w_1\ne\pm w_2}S(w_1,w_2)J(w_1,w_2)^{\bar{g}}(w_1+w_2)^d,\\ I_2&=(-1)^{d}2^{2d+2-g}T_{d,g}(N-2)(N-2)^{g}\cdot Q(1,0), \end{align*} and $ J(w_1,w_2)=\frac{(N-2)^2}{4}(w_1+w_2)^{-1}(w_1-w_2)^{-2}.$ \end{theorem} \begin{proof} Using equivariant virtual localization formula, we can write \begin{align*} \int_{[\IQ_d]^{\vir}}Q(a_1,a_2)= I_1+I_2, \end{align*} where \begin{align*} I_1&= \sum_{\substack{k_1,k_2\notin\{n,2n\} \\|k_1-k_2|\ne n }}^{}\sum_{d_1+d_2=d}^{}\int_{\fix_{\vec{d},\underline{k}}}^{}\frac{i^*(Q(a_1,a_2))}{e_{\mathbb{C}^*}(\nbun_{\fix_{\vec{d},\underline{k}}})}\\ I_2&= \sum_{\substack{k\in[N]\\ k\notin\{n,2n\}}}^{}\sum_{d_1+d_2=d}^{}\int_{\fix_{\vec{d},k}}^{}\frac{i^*(Q(a_1,a_2))}{e_{\mathbb{C}^*}(\nbun_{\fix_{\vec{d},k}})}. 
\end{align*} Here we denote $i^*$ the restriction to the fixed loci. The next two subsections will be devoted to the calculation of $I_1$ and $I_2$ respectively. \end{proof} \subsubsection{Fixed loci of the first kind} $\fix_{\vec{d},\underline{k}}=C^{[d_1]}\times C^{[d_2]}$ . In Section \ref{sec:Euler_class_symm} we noted that the $\mathbb{C}^{*}$ equivariant virtual tangent bundle is given by \begin{equation*} T^{\vir}= \pi_![(\rHom(\cS,\cQ))]-\pi_![(\sHom(\Sym^2{\cS},\cO))]. \end{equation*} The non-moving part of the restriction of $T^{\vir}$ to $\fix_{\vec{d},\underline{k}}$ matches the $K$-theory class the tangent bundle of $\fix_{\vec{d},\underline{k}}$. The virtual normal bundle \begin{equation*} \nbun=\pi_*\bigg(\sum_{\substack{i=1,2\\1\le k\le N\\ k_i\ne k}}[\mathcal{L}^\vee_i]-\sum_{\substack{i,j\in [2]\\i\ne j}}^{} [\mathcal{L}_i^\vee\otimes\mathcal{L}_j ]-\sum_{1\le i\le j\le 2}^{} [\mathcal{L}^\vee_i\otimes\mathcal{L}^\vee_j]\bigg). \end{equation*} Therefore using \eqref{eq:e(N)_symmetric}, we have \begin{align}\label{eq:Nvir_symm_r=2} \frac{1}{e_{\mathbb{C}^*}(\nbun)}&= u2^{2d-2\bar{g}}\frac{(Y_1+Y_2)^{d-\bar{g}}}{(Y_1-Y_2)^{2\bar{g}}}(Y_1Y_2)^{\bar{g}}e^{-\frac{\theta_1+\theta_2-\phi_{12}}{Y_1+Y_2}}\prod_{i=1}^{2}h_i^{d_i-\bar{g}}e^{\theta_iz_i} \end{align} where $P_0(X)=X^{N-2}-1$ and \begin{align*} h_i&=\frac{x_iY_i^2}{P(Y_i)}=\frac{x_i}{P_0(Y_i)}, \hspace{1cm} B(Y_i)=P_0'(Y_i), \\ z_i&=\frac{P'(Y_i)}{P(Y_i)}-\frac{2}{Y_1}-\frac{1}{x_i} = \frac{B(Y_i)}{P_0(Y_i)}-\frac{1}{x_i}. \end{align*} \begin{prop} We have \begin{align} I_1= u4^dT_{d,g}(N-2)\sum_{w_1,w_2}S(w_1,w_2)J(w_1,w_2)^{\bar{g}}(w_1+w_2)^d \end{align} where the sum is taken over pairs of $(N-2)^{\text{th}}$ roots of unity $\{w_1,w_2\}$ with $w_1\ne \pm w_2$, and \begin{align*} J(w_1,w_2)=\frac{(N-2)^2}{4}(w_1+w_2)^{-1}(w_1-w_2)^{-2}. \end{align*} In particular when $d\ge g$, $T_{d,g}(N-2)=(N-3)^g(N-2)^{-g}$. \end{prop} \begin{proof} For notational convenience, we assume $\underline{k}=(1,2)$. The classes $a_1$ and $a_2$ restrict to $Y_1+Y_2$ and $Y_1Y_2$ respectively, where $Y_i= x_i+w_it$ in the equivariant cohomology ring $H^*(\fix_{\vec{d},\underline{k}}=C^{[d_1]}\times C^{[d_2]})[[t]]$. We are interested in evaluating the following sum \begin{align*} \sum_{d_1+d_2=d}^{}\sum_{w_1,w_2}^{} \int_{\fix_{\vec{d},\underline{k}}}^{} \frac{S(Y_1,Y_2)}{e_{\mathbb{C}^*}(\nbun_{\fix_{\vec{d},\underline{k}}})}, \end{align*} where $S(Y_i,Y_i)=Q(Y_1+Y_2,Y_1Y_2)$. After replacing the classes $\theta_i$ and $\phi_{12}$ as in the proof of Theorem \ref{thm:r=2,sympl}, the above expression becomes a homogeneous degree rational function of degree $d=d_1+d_2$ in the variables $x_i$ and $t$ and a power series in $x_1$ and $x_2$ with coefficients in $\mathbb{C}[[t,t^{-1}]]$. Integrating over $C^{[d_1]}\times C^{[d_2]}$ amounts to finding the coefficient of $x_1^{d_1}x_2^{d_2}$. Using the calculation of $e(\nbun)$ in \eqref{eq:Nvir_symm_r=2}, we reduce our problem to finding \begin{equation*} \sum_{d_1+d_2=d}^{}\sum_{w_1,w_2}^{} \int_{\fix_{\vec{d},\underline{k}}}^{} R(Y_1,Y_2)e^{-\frac{\theta_1+\theta_2-\phi_{12}}{Y_1+Y_2}}\prod_{i=1}^{2}h_i^{d_i-\bar{g}}e^{\theta_iz_i} \end{equation*} where $(w_1,w_2)$ are the prescribed pair of $(N-2)$'th roots of unity and \[R(Y_1,Y_2)=u2^{2d-2\bar{g}}S(Y_1,Y_2)(Y_1Y_2)^{\bar{g}}\frac{(Y_1+Y_2)^{d-\bar{g}}}{(Y_1-Y_2)^{2\bar{g}}}. 
\] We apply Proposition \ref{prop:fixed_loci_calculation} to find \begin{align*} I_1=\sum_{w_1,w_2}^{}\frac{1}{(N-2)^2}\frac{R(w_1,w_2)}{(w_1,w_2)^{\bar{g}}}[q^d](N-2)^{2g}(1+q)^{d-g}\bigg(1+\frac{N-3}{N-2}q\bigg)^g. \end{align*} \end{proof} \subsubsection{Fixed Loci of second kind} We will first understand the virtual geometry of the isotropic Quot scheme $\IQ_d^\circ=\IQ_{d}(\cO\otimes\mathbb{C}^2, r=1,\sigma)$. \begin{lem} The isotropic Quot scheme $\IQ_d^\circ$ is isomorphic to the disjoint union $C^{[d]}\sqcup C^{[d]}$. The virtual tangent bundle of $\IQ_d^\circ$ restricted to either copy of $C^{[d]}$ is given by \begin{align*} T^{\vir}=\pi_!([\mathcal{L}^\vee\otimes (\mathcal{T}\oplus \cO)]-[\mathcal{L}^\vee\otimes\mathcal{L}^\vee]), \end{align*} where $\pi$ is the projection $\pi : C\times C^{[d]}\to C^{[d]}$ and $0\to\mathcal{L}\to \cO\to \mathcal{T}\to 0$ is the universal exact sequence on $C\times C^{[d]}$. \end{lem} \begin{proof} A subsheaf $E\subset \mathbb{C}^2\otimes\cO$ is isotropic if and only if $E$ factors through a copy of $\cO$ in $\mathbb{C}^2\otimes\cO$, hence $\IQ_d^\circ\cong C^{[d]}\sqcup C^{[d]}$. The universal short exact sequence over $C\times \IQ_d^\circ$ restricts to \[0\to \mathcal{L}\to \mathbb{C}^2\otimes\cO \to \mathcal{T}\oplus \cO\to 0 \] over each copy of $C\times C^{[d]}$. The lemma follows using the description of $T^{\vir}$ of $\IQ_d^\circ$ in Theorem \ref{thm:POT}. \end{proof} Therefore we see that the virtual fundamental class $[C^{[d]}]^{\vir}$ induced over each component $C^{[d]}$ of $\IQ_{d}^\circ$ is different from the usual fundamental class $[C^{[d]}]$. We also observe that the virtual dimension for $C^{[d]}$ is zero. \begin{lem}\label{lem:Cvir_integral} Let $C^{[d]}$ be equipped with the non-standard virtual structure as described above, then \begin{equation*} \int_{[C^{[d]} ]^{\vir}}1=2^{2d}(-1)^d\binom{\bar{g}}{d}. \end{equation*} \end{lem} \begin{proof} We have a natural automorphism obtained by swapping the copies of the $\cO$ in $\mathbb{C}^{2}\otimes\cO$. Therefore the above intersection number is independent of the copy of $C^{[d]}$ we have chosen. The Proposition \ref{prop:r=1} tells us \begin{align*} \int_{[C^{[d]} ]^{\vir}}1=\frac{1}{2}\int_{[\IQ_{d}^\circ]^{\vir}}^{}1=2^{2d}[q^d](1+q)^{d-g}. \end{align*} \end{proof} Now we are ready to prove \begin{prop} We have \begin{align*} I_2=(-1)^{d}2^{2d+2-g}(N-2)^{g}T_{d,g}(N-2)\cdot Q(1,0) \end{align*} \end{prop} \begin{proof} We are working over the fixed loci $\fix_{\vec{d}, k,\epsilon}=C^{[d_1]}\times \Cvir^{[d_2]}$ where $ k\notin\{n,2n\}$ and the first factor corresponds to the copy of $\cO$ at position $k$ and the index $\epsilon$ differentiates between the two components of $\IQ_{d_2}^{0}=C^{[d_2]}\sqcup C^{[d_2]}$. Let $\mathcal{L}_1$ and $\mathcal{L}_2$ be the pullbacks of the universal subsheaves over $C^{[d_1]}$ and $\Cvir^{[d_2]}$ to the product $\fix_{\vec{d}, k,\epsilon}$. The virtual normal bundle is the moving part of the restriction of the $T^{\vir}$ and is given by \begin{equation*} \nbun=\pi_!\bigg(\sum_{j\in [N]-\{k\}}[\mathcal{L}_1^\vee] + \sum_{\substack{j\in [N]\\ j\notin\{n,2n\}}}^{}[\mathcal{L}_2^{\vee}] - [\mathcal{L}_1^\vee\otimes\mathcal{L}_2]-[\mathcal{L}_1\otimes\mathcal{L}_2^\vee]- [\mathcal{L}^\vee_1\otimes\mathcal{L}^\vee_2] -[\mathcal{L}^\vee_1\otimes\mathcal{L}^\vee_1]\bigg), \end{equation*} where the above terms have $\mathbb{C}^*$ weights $(w_k-w_j)$, $-w_j$, $w_k$, $-w_k$, $w_k$ and $2w_k$ respectively. 
We may assume $t=1$ (see the proof of Theorem \ref{thm:7.1_r=2_sympl}. Let $Y_1=x_1+w_k$, $u=(-1)^{d+\bar{g}}$ and $P(X)=X^{N-2}-1$. A careful calculation using \eqref{eq:euler_class_computation_1}, \eqref{eq:euler_class_computation_2} and \eqref{eq:euler_class_computation_3} gives \begin{align*} \frac{1}{e_{\mathbb{C}^*}(\nbun)}=& \bigg(\frac{Y_1^2P(Y_1)}{x_1}\bigg)^{-d_1+\bar{g}} e^{\theta_1\big(\frac{P'(Y_1)}{P(Y_1)}+\frac{2}{Y_1}-\frac{1}{x_1}\big)} \cdot P(x_\epsilon)^{-d_2+\bar{g}}e^{\theta_\epsilon\frac{P'(x_\epsilon)}{P(x_\epsilon)}}\\ &\cdot u(Y_1-x_\epsilon)^{-2\bar{g}}\cdot(Y_1+x_\epsilon)^{d-\bar{g}}e^{(-\frac{\theta_1+\theta_\epsilon-\phi_{12}}{(Y_1+x_\epsilon)})}\cdot (2Y_1)^{2d_1-\bar{g}}e^{-\frac{2\theta_1}{Y_1}} \end{align*} Since $\Cvir^{[d_2]}$ has virtual dimension zero, $x_\epsilon$ and $\theta_\epsilon$ yield zero when intersected with the virtual fundamental class $[\Cvir^{[d_2]}]^{\vir}$. Thus for the purpose of our calculation, we may substitute $x_\epsilon=\theta_\epsilon=\phi_{12}=0$ in the above expression to get \begin{align*}\label{eq:nbun_vir_r=2} u2^{2d_1-\bar{g}}Y_1^{d-2\bar{g}}h_1^{d_1-\bar{g}}e^{\theta_1 z_1}\cdot (-1)^{(\bar{g}-d_2)}, \end{align*} where $h_1=x_1/P(Y_1)$ and $z_1=P'(Y_1)/P(Y_1)-1/Y_1-1/x_1$. Note that $a_1$ and $a_2$ restrict to $Y_1+x_\epsilon$ and $Y_1x_\epsilon$ respectively over the fixed loci. We want to calculate \begin{align*} I_2&= \sum_{k=1}^{N-2}\sum_{d_1+d_2=d}^{}\sum_{\epsilon=1}^{2}\int_{[\fix_{\vec{d},k,\epsilon}]^{\vir}}^{}\frac{i^*(Q(a_1,a_2))}{e_{\mathbb{C}^*}(\nbun_{\fix_{\vec{d},k}})} \end{align*} Substituting $x_\epsilon=0$, we get \begin{align} I_2&= Q(1,0)\sum_{k=1}^{N-2}\sum_{d_1+d_2=d}^{}\sum_{\epsilon=1}^{2}\int_{[\fix_{\vec{d},k,\epsilon}]^{\vir}}^{}\frac{Y_1^{\vd}}{e_{\mathbb{C}^*}(\nbun)}. \end{align} Simplifying further using Lemma \ref{lem:Cvir_integral}, we get \begin{align*} I_2&=Q(1,0)\sum_{k=1}^{N-2}\sum_{\epsilon=1}^{2}\sum_{d_1+d_2=d}^{}u2^{2d_1-\bar{g}}(-1)^{\bar{g}-d_2} \int_{C^{[d_1]}}^{}Y_1^{\vd +d-2\bar{g}}h_1^{d_1-\bar{g}}e^{\theta_1 z_1} \int_{[C^{[d_2]}]^{\vir}}1 \\ &= Q(1,0)\sum_{k=1}^{N-2}\sum_{\epsilon=1}^{2}\sum_{d_1+d_2=d}^{}u2^{2d-\bar{g}}(-1)^{\bar{g}}\binom{\bar{g}}{d_2}\int_{C^{[d_1]}}^{}Y_1^{\vd +d-2\bar{g}}h_1^{d_1-\bar{g}}e^{\theta_1 z_1}\\ &= Q(1,0)\sum_{k=1}^{N-2}\sum_{\epsilon=1}^{2}u2^{2d-\bar{g}}(-1)^{\bar{g}}(N-2)^{\bar{g}}[q^d](1+q)^{d-g}\bigg(1+\frac{N-3}{N-2}q\bigg)^g. \end{align*} The last equality follows from noting that $\binom{\bar{g}}{d_2}=[q^{d_2}](1+q)^{\bar{g}}$ and the following Lemma. \end{proof} \begin{lem} \begin{align*} \int_{C^{[d_1]}}^{}Y_1^{\vd +d-2\bar{g}}h_1^{d_1-\bar{g}}e^{\theta_1 z_1} = (N-2)^{\bar{g}}[q^{d_1}](1+q)^{d-\bar{g}-g}\bigg(1+\frac{N-3}{N-2}q\bigg)^g \end{align*} \end{lem} \begin{proof} Proposition \ref{prop:theta_sum_int} does not directly apply here due to shape of $d_1$. However, we closely follow the proof of Proposition \ref{prop:theta_sum_int}. Correctly replacing $e^{\theta_1 z_1}$ yield \begin{align*} \int_{C^{[d_1]}}^{}Y_1^{\vd +d-2\bar{g}}h_1^{d_1-\bar{g}}\bigg(\frac{x_1B(Y_1)}{P'(Y_1)}\bigg)^g. \end{align*} Applying the Lagrange-B\"urmann formula, we obtain \begin{align*} [q^{d_1}]Y_1^{\vd+d-2\bar{g}}\frac{B(Y_1)^g}{P'(Y_1)} \end{align*} where $Y_1=w_1(1+q)^{\frac{1}{N-2}}$ and $Y_1B(Y_1)=(N-3)Y_1^{N-2}+1$. Therefore, it equals \begin{align*} (N-2)^{\bar{g}}[q^{d_1}](1+q)^{d-\bar{g}-g}\bigg(1+\frac{N-3}{N-2}q\bigg)^g. 
\end{align*} \end{proof} \section{Gromov-Ruan-Witten Invariants}\label{sec:GRW_Invariants} In this section we will compare the sheaf theoretic invariants obtained using isotropic Quot schemes and Gromov-Ruan-Witten invariants for Isotropic Grassmannians. We will denote by $\SG(2,N)$ and $\OG (2,N)$ the symplectic Grassmannian and orthogonal Grassmannian respectively. \subsection{Quantum Cohomology} The small quantum cohomology of the Isotropic Grassmannian and its presentation are known (see \cite{Quantum_pieri_OG}, \cite{QC_isotropic_grassmannians}). However, the explicit expressions for the high genus and large degree Gromov-Ruan-Witten invariants require further arguments. When the rank $r=2$, a simpler presentation for the quantum cohomology of $\SG(2,2n)$ was obtained in \cite{Q_C_IG}. We will briefly describe their result and find a similar presentation for the quantum cohomology of $\OG(2,2n+2)$. Let $N=2n$. We have the universal exact sequence $0\to \cS\to \mathbb{C}^N\otimes\cO\to \cQ\to 0$ over $\SG(2,N)$. Let $\cS^{\perp}\subset \mathbb{C}^N\otimes\cO$ be the rank $N-2$ vector bundle consisting of vectors perpendicular to $\cS$. Moreover, $\cS^\perp$ is the kernel of the composition $\mathbb{C}^N\otimes\cO\xrightarrow{\sigma}(\mathbb{C}^N)^{\vee}\otimes\cO\to \cS^\vee $ which gives us an identity for the Chern polynomial $c_t(\cS^\vee)c_t(\cS^{\perp})=1$. This implies \begin{align}\label{eq:QC_SG_identity} c_t(\cS)c_t(\cS^\vee)c_t(\cS^{\perp}/\cS)&=1. \end{align} The above identity suggests us to define the following cohomology classes : \begin{itemize} \item The Chern classes $a_i= c_i(\cS^\vee)$ for $i\in\{1,2\}$. \item Let $b_i=c_{2i}(\cS^{\perp}/\cS)$ for $i\in \{1,\dots ,n-2 \}$. The bundle $\cS^{\perp}/\cS$ is self dual, hence all the odd Chern classes vanish. \end{itemize} The cohomology ring $H^*(\SG(2,2n))$ is isomorphic to the quotient of the ring $\mathbb{C}[a_1,a_2,b_1,\dots ,b_{n-2}]$ by the ideal generated by \begin{equation}\label{eq:H*SG(2,2n)} (1+(2a_2-a_1^2)x^2+a_2x^4)(1+b_1x^2+\dots + b_{n-2}x^{2n-4})=1. \end{equation} The above identity is simply a restatement of \eqref{eq:QC_SG_identity}. The quantum cohomology ring is $H^*(\SG(2,2n))\otimes \mathbb{C}[[q]]$, where the quantum products is described in the following theorem. Note that $\deg(q)=2n-1$ is the index of $\SG(2,2n)$. \begin{theorem}[\cite{Q_C_IG}] The quantum cohomology ring $QH^*(\SG(2,2n))$ is isomorphic to the quotient of the ring $\mathbb{C}[a_1,a_2,b_1,\dots ,b_{n-2},q]$ by the ideal generated by \begin{equation}\label{eq:QH_Pres.} (1+(2a_2-a_1^2)x^2+a_2x^4)(1+b_1x^2+\dots+ b_{n-2}x^{2n-4})=1+qa_1x^{2n} \end{equation} \end{theorem} The detailed proof of the above result can be found in \cite{Q_C_IG}. Now we will describe a similar presentation for the orthogonal Grassmannian $\OG(2,N)$, where $N=2n+2$. We will assume $n\ge 3$, otherwise $H^2(\OG(2,N),\mathbb{C})$ may have rank greater than one. We have the universal exact sequence $0\to \cS\to \mathbb{C}^N\otimes\cO\to \cQ\to 0$ over $\OG(2,N)$. Let $\cS^{\perp}\subset \mathbb{C}^N\otimes\cO$ be the rank $N-2$ vector bundle consisting of vectors perpendicular to $\cS$. Unlike the symplectic case, there is a cohomology class which is not obtained using the universal exact sequence. Let $\quadric\subset \mathbb{P}(\mathbb{C}^N)$ be the quadric of isotropic lines in $\mathbb{C}^N$ equipped with a non-degenerate symmetric bilinear form $\sigma$. Let $\pi :\mathbb{P}(\cS)\to \OG(2,N)$ be the projective bundle. 
We have the natural the map $\theta : \mathbb{P}(\cS)\to \quadric$. Note that $O(2n+2)$ acts on $\mathbb{C}^{2n+2}$. There are precisely two $SO(2n+2)$ orbits of maximal isotropic subspaces. Two maximal isotropic subspaces $E$ and $F$ lie in different orbits if and only if $\dim E\cap F$ is even. Let $e$ and $f$ be the cohomology classes corresponding to $\mathbb{P}(E)$ and $\mathbb{P}(F)$ inside the quadric $\quadric\subset \mathbb{P}(\mathbb{C}^N)$. The classes $e$ and $f$ corresponds to two rulings of $\quadric$. The cohomology ring of $\quadric$ is generated by the hyper plane class $h$ and ruling classes $e$ and $f$ (see \cite{Char_classes_quadric}). Over $\OG(2,N)$, we have the following cohomology classes : \begin{itemize} \item The Chern classes $a_i= c_i(\cS^\vee)$ for $i\in\{1,2\}$. \item Let $b_i=c_{2i}(\cS^{\perp}/\cS)$ for $i\in \{1,\dots ,n-1 \}$. The bundle $\cS^{\perp}/\cS$ is self dual, hence all the odd Chern classes vanish. \item Let $\pi:\mathbb{P}(\cS)\to \OG$ be the projection, then we define \[\xi = \pi_*\theta^*(e-f). \] \end{itemize} The above classes still satisfy the identity \eqref{eq:QC_SG_identity}, but two new identities involving $\xi$ are required. We will briefly describe these for readers convenience. \begin{lem}\label{lem:xi_relation} The cohomology class $\xi$ satisfy $\xi a_2=0$ and $\xi^2=(-1)^{n-1}b_{n-1}$. \end{lem} \begin{proof} Let $h=c_1(\cO(1))$ on $\mathbb{P}(S)$, then $h\theta^*(e-f)=0$. Multiplying $\theta^*(e-f)$ to the identity \[h^2-hc_1(\pi^*\cS^\vee)+c_2(\pi^*\cS^\vee)=0, \] we obtain $\theta^*(e-f)\pi ^*a_2=0$. The projection formula implies $\xi a_2=0$. Using the identities $c_t(\cS)c_t(\cS^{\vee})c_t(\cS^{\perp}/\cS)=1$ and $c_t(\cS)c_t(\cQ)=1$, we obtain $c_t(\cS^{\perp}/\cS)=c_t(\cQ)c_{-t}(\cQ)$. In particular, for all $1\le k\le n-1$ \begin{equation*} (-1)^{k}b_k=c_k(\cQ)^2+2\sum_{i=1}^{k}(-1)^ic_{k+i}(\cQ)c_{k-i}(\cQ). \end{equation*} When $k=n-1$, the right side of the above equality is $\xi^2$ by \cite{Quantum_pieri_OG}. \end{proof} \begin{rem} The class $\xi$ is the Edidin-Graham characteristic square root class for the quadratic bundle $\cS^\perp/\cS$. \end{rem} \begin{prop} The cohomology ring $H^*(\OG(2,2n+2))$ is isomorphic to the quotient of the ring $\mathbb{C}[a_1,a_2,b_1,\dots,b_{n-2}, \xi]$ by the ideal generated by the relations $\xi a_2=0$ and \begin{equation*} (1+(2a_2-a_1^2)x^2+a_2^2x^4)(1+b_1x^2+\cdots + b_{n-2}x^{2n-4}+(-1)^{n-1}\xi^2x^{2n-2})=1. \end{equation*} \end{prop} \begin{proof} Note that the topological Euler characteristic of $\OG$ is the vector space dimension of $H^*(\OG)$ and is given by $2^2\binom{n+1}{2}$. This is obtained by counting the number of fixed points under $\mathbb{C}^*$ action on $\OG$. We can unpack the relations to obtain the generators of the ideal: \begin{equation}\label{eq:def_fi_OG} \begin{aligned} f_0&=\xi a_2\\ f_1&= b_1+(2a_2-a_1^2) \\ &\vdots \\ f_{n-1}&= (-1)^{n-1}\xi^2+ b_{n-2}(2a_2-a_1^2)+b_{n-3}a_2^2\\ f_{n}&= (-1)^{n-1}\xi^2(2a_2-a_1^2)+ b_{n-2}a_2^2 \end{aligned} \end{equation} Define $R'=\mathbb{C}[a_1,a_2,b_1,\dots,b_{n-2}, \xi]/\langle f_0,\dots,f_n\rangle $. Using Lemma \ref{lem:xi_relation} and $c_t(\cS)c_t(\cS^{\vee})c_t(\cS^{\perp}/\cS)=1$, we know that $f_i=0$ for all $0\le i\le n$ in $H^*(\OG)$ . Moreover, the classes $a_1,a_2$ and $\xi$ generates $H^*(\OG)$ (see \cite{Quantum_pieri_OG}). Therefore we get the surjective ring homomorphism \[R'\to H^*(\OG).\] It is enough to show that $R'$ is a vector space of dimension at most $2^2\binom{n+1}{2}$. 
We bound the dimension of $R'$ using the exact sequence \[0\to \langle \xi\rangle \to R'\to R'/\langle \xi\rangle\to 0. \] Using \eqref{eq:H*SG(2,2n)}, we observe that $R'/\langle \xi\rangle = H^*(\SG(2,2n))$. Thus $R'/\langle \xi\rangle$ has dimension $2n^2-2n$, which is the Euler characteristic of $\SG(2,2n)$. Note that $b_i\in a_1^{2i} + \langle a_2\rangle$, $\xi^2\in a_1^{2n-2}+\langle a_2\rangle$ and $\xi^2a_1^2\in \langle a_2\rangle$. Hence $\dim R'/\langle a_2\rangle\le |\{1,a_1\dots,a_1^{2n-1},\xi,\dots \xi a_1^{2n-1} \}|=4n$. Consider the exact sequence \[0\to \ker\to R'\xrightarrow{\cdot a_2}R'\to R'/\langle a_2\rangle\to 0. \] Note that $\langle \xi\rangle\subset \ker$, thus \[\dim\langle \xi\rangle\le \dim \ker = \dim R'/\langle a_2\rangle\le4n.\] \end{proof} Now we will turn our attention to the small quantum cohomology. \begin{prop} Let $n>2$. The small quantum cohomology ring $QH^*(\OG(2,2n+2))$ is isomorphic to the quotient of the ring $\mathbb{C}[a_1,a_2,b_1,\dots,b_{n-2}, \xi,q]$ by the ideal generated by the relations $\xi a_2=0$ and \begin{equation}\label{eq:QH_pres_OG} (1+(2a_2-a_1^2)x^2+a_2^2x^4)(1+\cdots + b_{n-2}x^{2n-4}+(-1)^{n-1}\xi^2x^{2n-2})=1+4 qa_1x^{2n}. \end{equation} \end{prop} \begin{proof} The degrees of the relations in the given presentation of $H^*(\OG)$ are \[ \deg f_i = \begin{cases} n+1& i=0\\ 2i& 1\le i\le n. \end{cases} \]Since $q$ has degree $2n-1$, the quantum term can appear only in degree $2n$ in the above presentation of the cohomology. Therefore, \[(-1)^{n-1}\xi^2(2a_2-a_1^2) +b_{n-2}a_2^2=cqa_1\] for some constant $c$. Recall that $(-1)^{n-1}\xi^2=b_{n-1}=c_{2n-2}(\cS^\perp/\cS)$. The first term $\xi^2a_2=0$ since $\xi a_2=0$. Note that we have the following Schubert classes \begin{align*} b_{n-1}a_1& = c_{2n-1}(\cQ) \\ b_{n-2}a_2+b_{n-1}&= c_{2n-2}(\cQ). \end{align*} It is enough to show that the three point GRW invariants \begin{align*} \Phi_{0,1} (a_1,c_{2n-1}(\cQ), a_1^*) =2&, \hspace{1cm} \Phi_{0,1}( a_2,c_{2n-2}(\cQ), a_1^*)=2, \end{align*} where $a_1^*$ corresponds to the class of a line. It follows by carefully applying the quantum Pieri rule stated in \cite{Quantum_pieri_OG}, which describes the three term genus zero GWR invariants (equivalently the quantum product) of the Schubert classes. \end{proof} \subsection{Jacobian Calculation}\label{subsec:Jacobian_cal} We can unpack \eqref{eq:QH_Pres.} to write that the ideal of relations is generated by \begin{equation}\label{eq:def_f_i} \begin{aligned} \tilde{f}_1= & b_1+(2a_2-a_1^2) \\ \tilde{f}_2=& b_2+b_1(2a_2-a_1^2)+a_2^2\\ &\vdots \\ \tilde{f}_{n-2}=& b_{n-2}+b_{n-3}(2a_2-a_1^2)+b_{n-4}a_2^2\\ \tilde{f}_{n-1}=& b_{n-2}(2a_2-a_1^2)+b_{n-3}a_2^2\\ \tilde{f}_{n}=& b_{n-2}a_2^2-qa_1. \end{aligned} \end{equation} Let $R=\mathbb{C}[a_1,a_2,b_1,\dots,b_{n-2},q]/\langle \tilde{f}_1,\dots,\tilde{f}_n\rangle$ be the quantum cohomology ring of $\SG(2,2n)$ over $\mathbb{C}[q]$. In order to calculate the Gromov-Ruan-Witten invariants, we are required to compute the Jacobian \begin{equation*} J=\det\begin{bmatrix} \frac{\partial \tilde{f}_1}{\partial a_1}&\dots &\frac{\partial \tilde{f}_n}{\partial a_1}\\ \vdots& & \vdots\\ \frac{\partial \tilde{f}_1}{\partial b_{n-2}}&\dots& \frac{\partial \tilde{f}_n}{\partial b_{n-2}} \end{bmatrix} \end{equation*} at the vanishing locus of $(\tilde{f}_1, \tilde{f}_2,\dots,\tilde{f}_n)$. 
Substituting $b_1=(a_1^2-2a_2)$, this determinant equals \begin{equation*} -4a_1\det \begin{bmatrix} 1&b_1&b_2&b_3&\dots &b_{n-2}&\frac{q}{2a_1}\\ 1&(a_2+b_1)&(a_2b_1+b_2)&(a_2b_2+b_3) &\dots &(a_2b_{n-3}+b_{n-2})&a_2b_{n-2}\\ 1&-b_1&a_2^2&0&\dots&0&0\\ 0&1&-b_1&a_2^2&\dots&0&0\\ 0&0&1&-b_1&\dots&0&0\\ \vdots&\vdots & \vdots\\ 0&0&0&\dots &1&-b_1&a_2^2 \end{bmatrix}. \end{equation*} After subtracting first two rows, we observe that the above equals \begin{equation*} -4a_1a_2\det \begin{bmatrix} 1&b_1&b_2&b_3&\dots &b_{n-2}&\frac{q}{2a_1}\\ 0&1&b_1&b_2 &\dots &b_{n-3}&b_{n-2}-\frac{q}{2a_1a_2}\\ 1&-b_1&a_2^2&0&\dots&0&0\\ 0&1&-b_1&a_2^2&\dots&0&0\\ 0&0&1&-b_1&\dots&0&0\\ \vdots&\vdots & \vdots\\ 0&0&0&\dots &1&-b_1&a_2^2 \end{bmatrix}. \end{equation*} Let $v_0, v_1,\dots , v_{n-1}$ be the column vectors in the above matrix. Then the \begin{align*} \det[v_0,\dots,v_{n-1}]= \det [V_0,\dots V_{n-1} ] \end{align*} where $V_i=v_ib_0+v_{i-1}b_1+\cdots + v_0b_i$. Using the identity, $a_2^2b_{i-2}-b_1b_{i-1}+b_i=0$, we observe that \begin{align*} [V_0,\dots,V_{n-1}] = \begin{bmatrix} 1&B_1&B_2&B_3&\dots &B_{n-2}&B_{n-1}+\frac{q}{2a_1}\\ 0&1&B_1&B_2 &\dots &B_{n-3}&B_{n-2}-\frac{q}{2a_1a_2}\\ 1&0&0&0&\dots&0&0\\ 0&1&0&0&\dots&0&0\\ \vdots&\vdots & \vdots\\ 0&0&0&\dots &\ \ 1&0&0 \end{bmatrix} \end{align*} where $b_{n-1}:=0$ and $B_i:=b_ib_0+b_1b_{i-1}+b_2b_{i-2}+\dots +b_0b_i$. Therefore the required Jacobian is given by \begin{align}\label{eq:Jacobian_SG_Bi} J=-4a_1a_2\det \begin{bmatrix} B_{n-2}&B_{n-1}+\frac{q}{2a_1}\\ B_{n-3}&B_{n-2}-\frac{q}{2a_1a_2}. \end{bmatrix}. \end{align} \subsection{Residues} We will use the presentation of the quantum cohomology in \eqref{eq:QH_Pres.} and \eqref{eq:QH_pres_OG} to obtain the higher genus GRW invariants for $\SG(2,2n)$ and $\OG(2,2n+2)$ using the techniques in \cite{Q_C_Fano}. We will briefly describe the result we require from \cite{Q_C_Fano}. Let $F\in \mathbb{C}[x_1,\dots,x_n]$ be a polynomial, and $f=(f_1,\dots,f_n):\mathbb{C}^n\to\mathbb{C}^n$ be a tuple of polynomials such that $f^{-1}(0)$ is finite. For any $p\in f^{-1}(0)$, we define \begin{equation*} Res_f(p;F):=\frac{1}{(2\pi i)^n}\int_{\Gamma^{\epsilon}_p}^{}\frac{F}{f_1\cdots f_n}dx_1\dots dx_n \end{equation*} with $\Gamma^{\epsilon}_p=\{q\in U(p): |f(q)|=\epsilon \}$, $U(p)$ small neighborhood of $a$ with $f^{-1}(0)\cap U(p)=\{p\}$ and $\Gamma^{\epsilon}_p$ relatively compact in $U(p)$. We may further define \[Res_f(F) =\sum_{p\in f^{-1}(0)}^{}Res_f(p;F).\] Note that when $p$ is a regular point, i.e. the Jacobian $J=\det\big(\partial f_i/\partial x_j \big)\ne 0$ at $p$, then \[Res_f(p;F)= \bigg(\frac{F}{J}\bigg)(p) .\] Let $M$ be a Fano manifold with $h^2(M,\mathbb{C})=1$ and the cohomology ring $H^*(M,\mathbb{C})=\mathbb{C}[x_1,\dots,x_n]/\langle f_1,\dots,f_n\rangle $, where each $x_i$ corresponds to a pure dimensional cohomology class. Let \[QH^*(M,\mathbb{C})=\mathbb{C}[x_1,\dots,x_n,q]/\langle \tilde{f}_1,\dots,\tilde{f}_n \rangle\] be the quantum cohomology as an algebra over $\mathbb{C}[q]$. Substitute $q$ for a complex number, and let $\tilde{f}^{q}=(\tilde{f}^q_1,\dots,\tilde{f}^q_n)$ be the corresponding tuple of polynomials in $x_1,\dots,x_n$. Let $R_q=QH_q^*(M,\mathbb{C})$ be the corresponding quantum cohomology ring. Note that $R_q$ and $H^*(M,\mathbb{C})$ are isomorphic as vector spaces. The ring $R_q$ is equipped with a quantum multiplication that matches the usual multiplication of cohomology classes when $q=0$. 
\begin{theorem}\label{thm:residue_formula}\cite{Q_C_Fano} Let $M$ and $\tilde{f}^q$ be defined as above. Let $F\in \mathbb{C}[x_1,\dots,x_n]$ be a weighted homogeneous polynomial satisfying the dimension condition \eqref{eq:exp_dim} for a natural number $d$. Then \begin{align*} \langle F\rangle_gq^{d}=c^{\bar{g}} \text{Res}_{\tilde{f}^q}(J_q^gF)=\lim\limits_{\substack{y\to 0} }\sum_{x\in {(\tilde{f}^q)}^{-1}(y)}^{} ((cJ_q)^{\bar{g}}F)(x) \end{align*} where the limit is taken over regular points $y$, $c$ is a constant and $J_q=\det\big(\partial \tilde{f}^q_i/\partial x_j \big)$ is the Jacobian. \end{theorem} \subsection{GRW invariants for $\SG(2,2n)$} We will the apply Theorem \ref{thm:residue_formula} to the presentation of the quantum cohomology $R=QH^*(\SG(2,2n))$ in \eqref{eq:QH_Pres.}. To be precise, let $(x_1,x_2,x_3,\dots,x_n)=(a_1,a_2,b_1,\dots,b_{n-2})$ and let $\tilde{f}$ defined by \eqref{eq:def_f_i}. Fix $q=-1$ (or any non-zero number). Equation \eqref{eq:QH_Pres.} can be rephrased as \begin{equation*} (z^2-z_1^2)(z^2-z_2^2)Q(z)=z^{2n}+q(z_1+z_2) \end{equation*} where $a_1=z_1+z_2$, $a_2=z_1z_2$ and $Q(z)=z^{2n-4}+b_1z^{2n-6}+\dots + b_{n-2}$. Observe that $b_i$ can be represented in terms of $a_1$ and $a_2$ for all $1\le i\le n-2$. Evaluating at $z_1$ and $z_2$, we obtain \begin{align*} z_1^{2n}=-q(z_1+z_2)\\ z_2^{2n}=-q(z_1+z_2). \end{align*} The structure of $R_q$ is described in \cite{Q_C_IG}. The set $(\tilde{f}^q)^{-1}(0) $ has two types of points: \begin{itemize} \item Reduced points: The points described by the unordered pair $\{z_1,z_2\}$ satisfying \begin{align}\label{eq:z_i_chern_roots} z_2&= \zeta z_1\\ z_1&=\omega(1+\zeta)^{\frac{1}{2n-1}},\nonumber \end{align} where $\omega^{2n-1}=-q$, $\zeta^{2n}=1$ and $\zeta\ne \pm 1$. Since $\{z_1,z_2\}$ is an unordered, $(\omega,\zeta)$ and $(\omega,\zeta^{-1})$ yields the same point. Thus there are $(n-1)(2n-1)$ such points. The non-vanishing of the Jacobian computed below implies that these points are reduced. \item Fat point : The origin is the only other point in $(\tilde{f}^q)^{-1}(0) $. Since the vector space dimension $\dim(R_q)=2n(n-1)$, the origin is a non-reduced point of order $(n-1)$ in $\text{Spec}(R_q)$. \end{itemize} Thus $R_q=A_1\times A_2$ where $A_1\cong\mathbb{C}[\epsilon ]/\langle \epsilon^{n-1}\rangle$ corresponds to the fat point at origin in $Spec(R_q)$ and $\text{Spec}(A_2)$ consists of $(n-1)(2n-1)$ distinct reduced points. \begin{prop}\label{prop:Jacobian_SG} Let $p\in A_2$ be a reduced point described using \eqref{eq:z_i_chern_roots}. The Jacobian at $p$ is \begin{equation} J_q(p)=2n(2n-1)\zeta^{-1}(1+\zeta)^{-1}(1-\zeta)^{-2}z_1^{4n-5}. \end{equation} \end{prop} \begin{proof} We recursively calculate a concise expression for $b_1,\dots,b_{n-2}$: \begin{align*} b_i&=z_1^{2i}(1+\zeta^2+\dots +\zeta^{2i}). \end{align*} We define $b_i$ for all $i\in \mathbb{N}$ using the above identity. Note that $b_{n-1}=0$ and $b_0=1$. We are now going to give a simple formula for the convolution products $B_i$, and use it to find the Jacobian. Let $t=z_1^2$. Let $P(x)=1+b_1x+b_2x^2+\cdots $ be the power series in $x$. Then \begin{align*} (1-\zeta^2)P(x)&= \sum_{i=0}^{\infty} (1-\zeta^{2i+2})(tx)^i\\ &= \frac{1}{1-tx}-\frac{\zeta^2}{1-\zeta^2tx}. \end{align*} Observe that $P(x)^2=1+B_1x+B_2x^2+\cdots $, which can be expressed as \[P(x)^2 =\frac{1}{(1-\zeta^2)^2}\bigg(\frac{1}{(1-tx)^2}+ \frac{\zeta^4}{(1-\zeta^2tx)^2}-\frac{2\zeta^2}{1-\zeta^2}\bigg(\frac{1}{1-tx}-\frac{\zeta^2}{1-\zeta^2tx} \bigg) \bigg). 
\] Extracting the coefficient of $x^i$ in the above expression gives \begin{align*} B_i&=\frac{1}{(1-\zeta^2)^2}\bigg((i+1)t^i+(i+1)\zeta^{2i+4}t^i-\frac{2\zeta^2}{1-\zeta^2}(t^i-\zeta^{2i+2}t^i ) \bigg)\\ &=\bigg(\frac{(i+1)(1+\zeta^{2i+4})}{(1-\zeta^2)^2} - \frac{2\zeta^2(1-\zeta^{2i+2})}{(1-\zeta^2)^3} \bigg)t^{i}. \end{align*} In particular, we have \begin{align*} &B_{n-1}=n\frac{1+\zeta^2}{(1-\zeta^2)^2}t^{n-1}, \hspace{.5cm} B_{n-2}=\frac{2n}{(1-\zeta^2)^2}t^{n-2},\\ &B_{n-3}= \frac{n(1+\zeta^2)}{\zeta^2(1-\zeta^2)^2}t^{n-3}. \end{align*} Substituting $q=b_{n-2}a_2^2/a_1$ and using $a_1^2=t(1+\zeta)^2$, $b_{n-2}=-t^{n-2}/\zeta^2$ and $a_2=t\zeta$ we get the expression for Jacobian for $\tilde{f}^q=(\tilde{f}^q_1,\tilde{f}^{q}_{2},\dots,\tilde{f}^q_{n})$ at $p$: \begin{align*} J_q(p)&=-4a_1a_2\bigg(\det\begin{bmatrix} B_{n-2}&B_{n-1}\\ B_{n-3}&B_{n-2} \end{bmatrix}+\det \begin{bmatrix} B_{n-2}& \frac{b_{n-2}a_2^2}{2a_1^2}\\ B_{n-3}& -\frac{b_{n-2}a_2}{2a_1^2} \end{bmatrix} \bigg)\\ &=-4a_1a_2\bigg(-\frac{n^2}{\zeta^2(1-\zeta^2)^2} +\frac{n}{2\zeta^2(1-\zeta^2)^2} \bigg)t^{2n-4}\\ &=2n(2n-1)\zeta^{-1}(1+\zeta)^{-1}(1-\zeta)^{-2}z_1^{4n-5}. \end{align*} \end{proof} \begin{prop}\label{prop:residue_SG_reduced} Let $\vd=(2n-1)d-\bar{g}(4n-5)$ and $F=a_1^{m_1}a_2^{m_2}$ such that $m_1+2m_2=\vd$, then \begin{align} \sum_{p\in A_2}^{}\text{Res}_{\tilde{f}^q}(p;J_q^{g}F)= \frac{2n-1}{2} \sum_{\zeta\ne \pm 1}^{}(1+\zeta)^{m_1}\zeta^{m_2}J(\zeta)^{\bar{g}}(1+\zeta)^d(-q)^d \end{align} where $\zeta\ne \pm 1 $ is an $2n^{\text{th}}$ root of unity and $J(\zeta):=2n(2n-1)\zeta^{-1}(1+\zeta)^{-1}(1-\zeta)^{-2}$. \end{prop} \begin{proof} Let $p$ be given by $(\omega,\zeta)$. Using Proposition \ref{prop:Jacobian_SG} \begin{align*} Res_{\tilde{f}^q}(p;J_q^{g}F)&= (J_q^{g-1}F)(p)\\ &= J(\zeta)^{\bar{g}}(1+\zeta)^{m_1}\zeta^{m_2}z_1^{\vd + \bar{g}(4n-5)}. \end{align*} Observe that $z_1^{\vd + \bar{g}(4n-5)}=(1+\zeta)^d(-q)^d,$ thus \begin{align*} \sum_{p\in A_2}^{}\text{Res}_{\tilde{f}^q}(p;J_q^{g}F)= \sum_{(\omega,\zeta)}^{}(1+\zeta)^{m_1}\zeta^{m_2}J(\zeta)^{\bar{g}}(1+\zeta)^d(-q)^d \end{align*} where the latter is summed over pairs $(\omega,\zeta)$ such that $\omega^{2n-1}=(-q)$ and $\zeta$ is a $2n^{\text{th}}$ root of unity with strictly positive imaginary part. The above expression does not depend on the choice of $\omega$ and it is invariant under $\zeta\to \zeta^{-1}$. When summed over these choices the required formula is obtained. \end{proof} \begin{theorem}\label{thm:GRW_invariant_SG} Let $m_1+2m_2=\vd=(2n-1)d-(4n-5)\bar{g}$. The GRW invariants for $\SG(2,2n)$ equal the top virtual intersections of the $a$-classes on the corresponding isotropic Quot scheme: \begin{align} \langle a_1^{m_1}a_2^{m_2}\rangle_g= \int_{[\IQ_d]^{\vir}}^{}a_1^{m_1}a_2^{m_2} \end{align} \end{theorem} \begin{proof} The origin $y=0:=(0,\dots, 0)$ is not necessarily a regular point for the function $\tilde{f}^q=(\tilde{f}^q_1,\dots,\tilde{f}^q_n)$. We will evaluate the limit \begin{equation} \lim\limits_{y\to 0}\sum_{p\in {(\tilde{f}^q)}^{-1}(y)}^{} (J^{\bar{g}}F)(p) , \end{equation} where the limit $y\to 0$ is taken over regular values of $y$. Let $ \epsilon$ be a non-zero complex number with small absolute value, and let $y_{\epsilon}=(0,\dots,0,\epsilon^{n-1},0)$. We will see that $y_{\epsilon}$ is regular for $\epsilon$ small enough. 
Reduced points : Since the Jacobian for each point $p\in A_2$ is non-zero, the inverse function theorem implies that for small enough $\epsilon$, there is exactly one reduced point $p_\epsilon$ near $p$ satisfying $f(p_\epsilon)=y_\epsilon$. Thus $y_{\epsilon}$ is a regular value for all $\epsilon$ in a neighborhood of $0$. Let $A_2^\epsilon$ be the set of unique points $p_\epsilon$ near $p\in A_2$. Observe that the residue contribution is \begin{equation} \lim\limits_{\epsilon\to 0}\sum_{p_\epsilon\in A_2^{\epsilon}}^{} (J^{\bar{g}}F)(p_\epsilon) = \sum_{p\in A_2}^{}\text{Res}_f(p;J^{g}F). \end{equation} This has been calculated in Proposition \ref{prop:residue_SG_reduced}. Fat point : The vanishing of $\tilde{f}^q_1,\dots,\tilde{f}^q_{n-2}$ implies that $b_1,\dots,b_{n-2}$ is a polynomial in $a_1$ and $a_2$. Observe that \[b_i=(-1)^i(i+1)a_2^i+\langle a_1^2\rangle. \] Since $q\ne0$, the vanishing of $\tilde{f}^q_n$ implies \[a_1=q^{-1}a_2^{n}+\langle a_1^2\rangle. \] Therefore $a_1=a_2^{n}h_1(a_2)$ for some power series $h$ that defines a holomorphic function for an open set containing $0$. A similar argument shows that $f_{n-1}=a_2^{n-1}h_2(a_2)$ where $h_2$ is holomorphic with non-zero constant term. Observe that $a_2^{n-1}h_2(a_2)=\epsilon\ne0$ has exactly $(n-1)$ simple zeros for all $\epsilon$ lying in a neighborhood of $0$. Note that $a_2=O(\epsilon)$, $a_1=O(\epsilon^n)$ and $b_i=O(\epsilon^i)$ as $\epsilon$ approaches $0$. Substituting the above orders in \eqref{eq:Jacobian_SG_Bi}, we get $J=O(\epsilon^{n-2})$. Thus the residue contributions of these $n-1$ points has order $O(\epsilon^{nm_1+m_2+\bar{g}(n-2)})$, which vanishes in the limit $\epsilon\to 0$ when the the exponent $nm_1+m_2+\bar{g}(n-2)$ is non-zero. There are exactly two cases when the above exponent is zero: (i) $\vd=0$, $d=g-1$, $N=2n=4$; and (ii) $\vd=d=0$, $g=1$. An easy calculation shows that the residue contribution are $(2q)^{d}$ and $1$ respectively. These are the only instances where $\vd \ge 0$ and $d<g$. We apply Theorem \ref{thm:residue_formula} to obtain the GRW invariant up to a constant $c$. When $g=d=0$, the GRW invariants are the top intersections in the cohomology ring of $\SG(2,2n)$. Note that $\IQ_{0}\cong \SG(2,2n)$ when $g=0$, thus the virtual invariants in \eqref{eq:a_1a_2formula} must match the GRW invariants. Comparing the two we obtain $c=-1$. Putting together all the terms, we get \begin{align*} \langle a_1^{m_1}a_2^{m_2}\rangle_g= \begin{cases} (-1)^{d+\bar{g}}\frac{2n-1}{2} \sum_{\zeta}^{}(1+\zeta)^{m_1+d}\zeta^{m_2}J(\zeta)^{\bar{g}} &d\ge g\\ 2^{\bar{g}}3^g+(-1)^{\bar{g}}2^{d} & n=2,\ d=\bar{g}\\ 2n(n-1) & g=1, d=0 \end{cases}. \end{align*} This match the expression in Theorem \ref{thm:r=2,sympl} (also see Examples \ref{exm:N=4} and \ref{exm:g=1}) for all $d$, $g$ and $N$. \end{proof} \subsection{GRW invariants for $\OG(2,2n+2)$} Let $n\ge 3$. Recall the definition of ${f}_0,{f}_1,\dots,{f}_n$ from \eqref{eq:def_fi_OG}. Let $\tilde{f}_i=f_i$ for $0\le i\le n-1$ and let $\tilde{f}_n=f_n-4qa_1$ as prescribed by \eqref{eq:def_fi_OG}. 
In particular, \begin{align*} \tilde{f}^q_0&=\xi a_2\nonumber\\ \tilde{f}^q_1&= b_1+(2a_2-a_1^2) \nonumber\\ &\vdots \\ \tilde{f}^q_{n-1}&= (-1)^{n-1}\xi^2+ b_{n-2}(2a_2-a_1^2)+b_{n-3}a_2^2 \nonumber\\ \tilde{f}^q_{n}&= (-1)^{n-1}\xi^2(2a_2-a_1^2)+ b_{n-2}a_2^2 - 4qa_1 \nonumber \end{align*} Let $R'=\mathbb{C}[\xi,a_1,a_2,b_1,\dots,b_{n-2},q]/\langle \tilde{f}_0,\dots,\tilde{f}_n\rangle$ be the presentation for the quantum cohomology of $\OG(2,2n+2)$ (see \eqref{eq:QH_pres_OG}). The Jacobian $J'$ for $\tilde{f}=(\tilde{f}_0,\dots,\tilde{f}_n)$ is calculated in similar fashion as it was done in the symplectic case. Observe that \begin{equation} J'\in -4a_1a_2^2\det \begin{bmatrix} B_{n-2}&B_{n-1}+\frac{4q}{2a_1}\\ B_{n-3}&B_{n-2}-\frac{4q}{2a_1a_2} \end{bmatrix}+\langle \xi\rangle, \end{equation} where $b_0=1$, $b_{n-1}:=(-1)^{n-1}\xi^{2}$ and $B_i=b_ib_0+\cdots+b_0b_i$. Note that modulo $\langle a_2\rangle$, we have \begin{align*} \tilde{f}_0&=0\\ \tilde{f}_1&=b_1-a_1^2\\ \vdots&\\ \tilde{f}_{n-1}&=(-1)^{n-1}\xi^2-b_{n-2}a_1^2\\ \tilde{f}_{n}&= (-1)^{n-1}\xi^2(-a_1^2)-4qa_1 \end{align*} An easy calculation shows that \begin{equation*} J'\in-2b_{n-1}(2a_1B_{n-1} +4q )+\langle a_2\rangle. \end{equation*} Note that $b_i\in a_1^{2i}+\langle a_2\rangle$, thus we may further write \begin{equation}\label{eq:Jacobian_OG_a_1} J'\in -2a_1^{2n-2}(2na_1^{2n-1}+4q)+\langle a_2\rangle . \end{equation} Fix a non-zero number $q$. Note that $f_0=0$ implies that either $\xi=0$ or $a_2=0$. The set $(\tilde{f}^{q})^{-1}(0)$ has three types of points: \begin{itemize} \item Reduced points ($a_2\ne0$): The reduced points with $\xi=0$ have almost the same description as that of $Spec(A_2)$ in the symplectic case. It is obtained by replacing $q\to 4q$ and letting $a_1$ and $a_2$ be described (similar to \eqref{eq:z_i_chern_roots}) using Chern roots $\{z_1,z_2\}$ in this case. \item Reduced points ($\xi\ne0$): Thus $a_2=0$ and hence $b_i=a_1^{2i}$. Moreover, $\tilde{f}^q_{n-1}=\tilde{f}^q_{n}=0$ implies \begin{align*} (-1)^{n-1}\xi^{2}&=a_1^{2n-2}\\ a_1^{2n}&=-4qa_1. \end{align*} Thus there are $(4n-2)$ points given by $(\xi,a_1)=(\sqrt{-4q}\mu^{-1},\mu^2)$ where $\mu $ is a $(4n-2)^{\text{th}}$ root of $(-4q)$. We observe that the Jacobian (see \eqref{eq:Jacobian_OG_a_1}) is non-zero. \item Fat point $A_1$: The origin is the non-reduced point of order $(n+1)$. \end{itemize} The Artinian ring $R'_q$ is isomorphic to $A_1\times A_2\times A_3$ where $A_1\cong \mathbb{C}[\epsilon]/\langle \epsilon^{n+1} \rangle$. The Spec of $A_2$ and $A_3$ corresponds to the distinct reduced points with $a_2\ne0$ and $\xi\ne 0$ respectively. Over the points $p\in Spec(A_2)$ given by a choice of $\{z_1,z_2\}$ as defined in \eqref{eq:z_i_chern_roots} by replacing $q\to 4q$, the Jacobian \begin{align*} J_q'(p)=2n(2n-1)(1+\zeta)^{-1}(1-\zeta)^{-2}z_1^{4n-3}. \end{align*} We obtain an analogue of Proposition \ref{prop:residue_SG_reduced}: \begin{prop} Let $\vd=(2n-1)d-\bar{g}(4n-3)$ and $F=a_1^{m_1}a_2^{m_2}$ such that $m_1+2m_2=\vd$, then \begin{align} \sum_{p\in A_2}^{}\text{Res}_{\tilde{f}^q}(p;J'^{g}F)= \frac{2n-1}{2}\sum_{\zeta\ne \pm 1}(1+\zeta)^{m_1+d}\zeta^{m_2}J'(\zeta)^{\bar{g}}(-4q)^d \end{align} where $\zeta\ne \pm 1 $ is $2n^{\text{th}}$ root of unity and $J'(\zeta):=2n(2n-1)(1+\zeta)^{-1}(1-\zeta)^{-2}$. \end{prop} \begin{prop} Let $F=a_1^{m_1}a_2^{m_2}$, where $m_1+2m_2=\vd$. 
Then \begin{align} \sum_{p\in A_3}^{}\text{Res}_{\tilde{f}^q}(p;J'^{g}F)=\begin{cases} (-1)^{\bar{g}}(4n-2)^{g} (-4q)^d & m_2=0\\ 0& m_2>0 \end{cases}. \end{align} \end{prop} \begin{proof} Let $p\in A_3$ be determined by $(\xi,a_1)=(\sqrt{-4q}\mu^{-1},\mu^2)$ where $\mu$ is a $(4n-2)^{\text{th}}$ root of unity. Note that $a_2=0$, thus the residues vanish when $m_2>0$. We may assume $m_2=0$. Using \eqref{eq:Jacobian_OG_a_1} and the equality $a_1^{2n-1}+4q=0$, the Jacobian is $-2a_1^{4n-3}(2n-1)$. Thus \begin{align*} \text{Res}_{\tilde{f}^q}(p;J'^{g}a_1^{\vd}) &=(-1)^{\bar{g}}(2(2n-1))^{\bar{g}} a_1^{(2n-1)d}\\ &=(-1)^{\bar{g}}(4n-2)^{\bar{g}} (-4q)^d. \end{align*} \end{proof} \begin{theorem} Let $m_1+2m_2=(2n-1)d-(4n-3)\bar{g}$ and $n\ge 3$. The GRW invariants for $\OG(2,2n+2)$ involving $a_1$ and $a_2$ equal the top virtual intersections of the $a$-classes on the corresponding isotropic Quot schemes. In particular, when $d\ge g$ and \begin{itemize}\label{thm:GRW_invariants_OG} \item[(i)] When $m_2>0$, then \begin{align*} \langle a_1^{m_1}a_2^{m_2}\rangle_g=u4^{d} \frac{2n-1}{2}\sum_{\zeta\ne \pm 1}(1+\zeta)^{m_1+d}\zeta^{m_2}\bigg(\frac{J'(\zeta)}{4}\bigg)^{\bar{g}}, \end{align*} where $u=(-1)^{\bar{g}+d}$ and $J'(\zeta)=2n(2n-1)(1+\zeta)^{-1}(1-\zeta)^{-2}$. \item[(ii)] When $m_2=0$, then \begin{align*} \langle a_1^{m_1}\rangle_g=u4^{d}\bigg(\frac{(-1)^{\bar{g}}(4n-2)^{\bar{g}}}{4^{\bar{g}}}+ \frac{2n-1}{2}\sum_{\zeta\ne \pm 1}\frac{(1+\zeta)^{m_1+d}J'(\zeta)^{\bar{g}}}{4^{\bar{g}}} \bigg). \end{align*} \end{itemize} \end{theorem} The proof of the above theorem is similar to that of Theorem \ref{thm:GRW_invariant_SG}. \section{Intersection of $f$ classes}\label{sec:f_intersection} We will find an explicit expression for the intersection numbers of polynomials in $a$ and $f$ classes in terms of multivariate generating functions. We obtain Theorem \ref{thm:f_classes_d>g} as a corollary. While the computations are more involved, the basic ideas are similar to those in Section \ref{sec:Int_a_classes}. We will only work with symplectic isotropic Quot scheme $\IQ_{d}$ with $r=2$. A similar analysis can be carried out when $\sigma$ is symmetric. Over the fixed loci $\fix_{\vec{d},\underline{k}}$, the equivariant restriction of the $f$ classes are given by $f_1=d$ and $f_2=\phi_{12}+d_1(x_2+w_2t)+d_2(x_1+w_1t)$. The formula for the intersection of $f$ classes with a polynomial in $a$ classes involves differential operators. Let $P(X)=X^N-1$ and \begin{equation*} T_{g}(t,Y_1,Y_2)=\bigg(\prod_{i=1}^{2} (1-\eta_i)-\prod_{i=1}^{2}t^2\eta_i\bigg)^g, \end{equation*} where $\eta_i=\frac{P(Y_i)}{P'(Y_i)(Y_1+Y_2)}$. When $Y_i=w_i(1+q_i)^{\frac{1}{N}}$, $T_{g}(t,Y_1,Y_2)$ is a power series in $q_1$ and $q_2$ over $\mathbb{C}[t]$. This should be considered as an analogue of $T_{d,g}(N)$ in \eqref{eq:T_d,g}. In particular, \begin{equation*} T_{g}(1,w_1(1+q)^{\frac{1}{N}},w_2(1+q)^{\frac{1}{N}})= \bigg(1-\frac{q}{N(1+q)}\bigg)^g. \end{equation*} Let $\partial_i$ and $\partial_t$ be the partial derivatives with respect to $Y_i$ and $t$ respectively. Define the differential operators $\mathfrak{d}_t=-(Y_1+Y_2)\partial_t$, \begin{align*} \Delta^u&:=\sum_{i=0}^{u}\binom{u}{i}(q_1\partial_1)^i(q_2\partial_2)^{u-i}Y_2^iY_1^{u-i},\\ (\Delta+\mathfrak{d}_t)^m&:=\sum_{u=0}^{m}\binom{m}{u}\Delta^u\mathfrak{d}_t^{m-u}. \end{align*} Note that $\Delta^u$ defined above is not $u^{\text{th}}$ power of the operator $\Delta$. 
\begin{theorem}\label{thm:f-classes} Let $Q(X_1,X_2)$ be a weighted homogeneous polynomial and $m$ be a positive integer satisfying $\vd=m+\deg Q$, where $\deg Q$ is the weighted degree. Then \begin{align*} \int_{[\IQ_d]^{\vir}}f_2^{m}Q(a_1,a_2)=\sum_{w_1,w_2}^{}[q^{d}](\Delta+\mathfrak{d}_t)^m B(Y_1,Y_2) T_{g}(t,Y_1,Y_2)\bigg|_{t=1, q=q_1=q_2} \end{align*} where the sum is taken over $N^{\text{th}}$ roots of unity $\{w_1,w_2\}$ such that $w_1\ne \pm w_2$, $u=(-1)^{\bar{g}+d}$, $Y_i=w_i(1+q_i)^{1/N}$ and \begin{equation*} B(Y_1,Y_2)= uQ(Y_1+Y_2, Y_1Y_2)\frac{(Y_1+Y_2)^{d-\bar{g}}}{(Y_1-Y_2)^{2\bar{g}}}\prod_{i=1}^{2} P'(Y_i)^{\bar{g}}. \end{equation*} \end{theorem} \begin{proof} Using the same arguments as in the proof of Theorem \ref{thm:7.1_r=2_sympl}, we see that the required intersection number equals \begin{align*} \sum_{w_1,w_2}^{}\sum_{|\vec{d}|=d}^{}\sum_{k=0}^{m}\binom{m}{k} \int_{\fix_{\vec{d},\underline{k}}}^{}\phi_{12}^{k}(d_1Y_2+d_2Y_1)^{m-k} R(Y_1,Y_2)e^{-\frac{\theta_1+\theta_2-\phi_{12}}{Y_1+Y_2}}\prod_{i=1}^{2}e^{\theta_iz_i}h_i^{d_i-\bar{g}}, \end{align*} where $z_i=\frac{P'(Y_i)}{P(Y_i)}-\frac{1}{x_i}$ and $h_i=\frac{x_i}{P(Y_i)}$ and \[R(Y_1,Y_2)=uQ(Y_1+Y_2, Y_1Y_2)\frac{(Y_1+Y_2)^{d-\bar{g}}}{(Y_1-Y_2)^{2\bar{g}}}. \] We pursue this calculation in Subsection \ref{sec:furth_inter}, in particular we use Proposition \ref{prop:f_int} to finish the proof. \end{proof} When $m=0$, we recover Theorem \ref{thm:r=2,sympl}. We specialize to the case $m=1$ to obtain a simple expression. \begin{cor} Recall the definition of $T_{d,g}(N)$ from Theorem \ref{thm:r=2,sympl}. Let $Q$ be a homogeneous polynomial such that $\vd=m+\deg Q$, where $\deg Q$ is the weighted degree. Then \begin{align*} \int_{[\IQ_d]^{\vir}}f_2Q(a_1,a_2)=&\frac{2}{N}\sum_{w_1,w_2}^{}\bigg(T_{d-1,g}(N) D\circ B(w_1,w_2)+\\ &\frac{1}{N}\frac{w_1w_2B(w_1,w_2)}{(w_1+w_2)}(T_{d-2,\bar{g}}(N)-NT_{d-1,\bar{g}}(N))\bigg) \end{align*} where $D \circ B(z_1,z_2)=\frac{z_1z_2}{2}\big(\frac{\partial}{\partial z_1}+\frac{\partial}{\partial z_2}\big)B(z_1,z_2)$ and the sum is taken over all the pairs of $N^\text{th}$ roots of unity $\{w_1,w_2\}$ with $w_1\ne \pm w_2$. In particular, when $d>g$ we get \begin{align*} \int_{[\IQ_d]^{\vir}}f_2Q(a_1,a_2)=&\frac{2}{N}\bigg(1-\frac{1}{N}\bigg)^{g}\sum_{w_1,w_2}^{}\bigg( D\circ B(w_1,w_2)-\frac{w_1w_2B(w_1,w_2)}{(w_1+w_2)}\bigg). \end{align*} \end{cor} \begin{proof} Since $B$ is a homogeneous rational function in variables $Y_1$ and $Y_2$ of degree $Nd-1$, substituting $Y_1/w_1=Y_2/w_2=(1+q)^\frac{1}{N}$ gives a constant multiple of $(1+q)^{d-1/N}$. We use product rule to split the calculation. First we see that \begin{align}\label{eq:DeltaB} [q^d]T_{g}(t,Y_1,Y_2)\Delta B(Y_1,Y_2)\bigg|_{q_1=q_2=q}= \frac{2}{N}T_{d-1,g}(N) D\circ B(w_1,w_2), \end{align} since substituting $Y_1/w_1=Y_2/w_2=(1+q)^\frac{1}{N}$ in $\Delta B(Y_1,Y_2) $ gives us a constant times $q(1+q)^{d-1}$. The rest follows from the definition of $D$ and $T_{d,g}(N)$. Now we will find $[q^d]B(Y_1,Y_2)(\Delta+\mathfrak{d}_t)T_{g}(t,Y_1,Y_2)$. Let us define \begin{equation*} T_g(q)=\bigg(1-\frac{q}{N(1+q)}\bigg)^g \end{equation*} for notational convenience. 
Note that \begin{align*} \mathfrak{d}_tT_{g}(t,Y_1,Y_2)&=-(Y_1+Y_2)gT_{g-1}(t,Y_1,Y_2)(-2t\eta_1\eta_2) \end{align*} therefore \begin{equation*} \mathfrak{d}_tT_{g}(t,Y_1,Y_2)|_{t=1,q_1=q=q_2}=2g\frac{w_1w_2}{w_1+w_2}\frac{q^2T_{g-1}(q)}{N^2(1+q)^2}(1+q)^{\frac{1}{N}}, \end{equation*} hence the the corresponding contribution is \begin{equation}\label{eq:DetlatT} [q^d]B(Y_1,Y_2)\mathfrak{d}_tT_{g}(t,Y_1,Y_2)|_{t=1,q_1=q=q_2}=\frac{2}{N^2}\frac{w_1w_2B(w_1,w_2)}{w_1+w_2}T_{d-2,g-1}(N). \end{equation} The other term simplifies as \begin{align}\label{eq:DeltaT} \Delta T_{g}(1,Y_1,Y_2)&=-gT_{g-1}(1,Y_1,Y_2)\big(q_1Y_2(\partial_1\eta_1+\partial_1\eta_2)+q_2Y_1(\partial_2\eta_1+\partial_2\eta_2)\big), \end{align} where we evaluate the partial derivatives \begin{align*} \partial_1\eta_1&=\bigg(\frac{1}{Y_1+Y_2}-\frac{P(Y_1)P''(Y_1)}{P'(Y_i)^2(Y_1+Y_2)}- \frac{P(Y_i)}{P'(Y_1)(Y_1+Y_2)^2}\bigg)\partial_1Y_1\\ \partial_1\eta_2&= -\frac{P(Y_2)}{P'(Y_2)(Y_1+Y_2)^2}\partial_1Y_1. \end{align*} Similar expressions hold for $\partial_2\eta_1$ and $\partial_2\eta_2$. Note that we also know that $\partial_iY_i= \frac{1}{NY_i^{N-1}}=\frac{1}{P'(Y_i)}$. Using this we find the following identities: \begin{align*} \frac{q_1Y_2}{(Y_1+Y_2)P'(Y_1)}+\frac{q_2Y_1}{(Y_1+Y_2)P'(Y_2)}\bigg|_q&= \frac{2}{N}\frac{w_1w_2}{(w_1+w_2)}\frac{q(1+q)^{\frac{1}{N}}}{(1+q)}\\ \frac{q_1Y_2P(Y_1)P''(Y_1)}{(Y_1+Y_2)P'(Y_1)^3}+\frac{q_2Y_1P(Y_2)P''(Y_2)}{(Y_1+Y_2)P'(Y_1)^3}\bigg|_q&= \frac{2(N-1)}{N^2}\frac{w_1w_2}{(w_1+w_2)}\frac{q^2(1+q)^{\frac{1}{N}}}{(1+q)^2}\\ \frac{q_1Y_2P(Y_1)}{(Y_1+Y_2)^2P'(Y_1)^2}+\frac{q_2Y_1P(Y_2)}{(Y_1+Y_2)^2P'(Y_2)^2}\bigg|_q&= \frac{1}{N^2}\frac{w_1w_2}{(w_1+w_2)}\frac{q^2(1+q)^{\frac{1}{N}}}{(1+q)^2}\\ \frac{1}{Y_1+Y_2}\bigg(\frac{q_1Y_2P(Y_2)}{P'(Y_2)P'(Y_1)}+\frac{q_2Y_1P(Y_1)}{P'(Y_1)P'(Y_2)}\bigg)\bigg|_q&= \frac{1}{N^2}\frac{w_1w_2}{(w_1+w_2)}\frac{q^2(1+q)^{\frac{1}{N}}}{(1+q)^2}. \end{align*} Substituting the above expressions back in \eqref{eq:DeltaT}, we obtain \begin{align*} \Delta T_{g}(1,Y_1,Y_2)\bigg|_{q_1=q_2=q}=gT_{g-1}(q)\frac{w_1w_2}{w_1+w_2}\frac{2}{N}\frac{-q}{(1+q)^2}(1+q)^{\frac{1}{N}}. \end{align*} Therefore \begin{align}\label{eq:DeltaTq1=q2} [q^d]B(Y_1,Y_2)\Delta T_{g}(1,Y_1,Y_2)|_{,q_1=q=q_2}=\frac{-2}{N}\frac{w_1w_2B(w_1,w_2)}{w_1+w_2}T_{d-1,g-1}(N). \end{align} We get the required expression by summing \eqref{eq:DeltaB}, \eqref{eq:DetlatT} and \eqref{eq:DeltaTq1=q2}. \end{proof} \subsection{Further calculations over fixed loci}\label{sec:furth_inter} The following results are crucially used to obtain Theorem \ref{thm:f-classes}. They are analogue of Proposition \ref{prop:theta_sum_int} and \ref{prop:fixed_loci_calculation}. \begin{prop}\label{prop:theta_int_f_class} Let $R$ be a homogeneous polynomial with weighted degree $Nd-2\bar{g}(N-1)-p-u$. Let $R(Y_1,Y_2)$ be a homogeneous rational function of degree $s=Nd-2\bar{g}(N-1)$. We borrow the notation $X_{\vec{d}}$, $Y_i$, $P(Y)$, $B(Y)$, $h_i$ and $z_i$ from Proposition \ref{prop:theta_sum_int}. Then \begin{align*} &\int_{X_{\vec{d}}}^{}(d_1Y_2+d_2Y_1)^uR(Y_1,Y_2)\prod_{i=1}^{2}\frac{\theta_i^{p_i}}{p_i!}e^{\theta_iz_i}h_i^{d_i-\bar{g}} \\ &= [q_1^{d_1}q_2^{d_2}]\Delta^u \bigg(R(Y_1,Y_2)\prod_{i=1}^{2}\binom{g}{p_i} \frac{B(Y_i)^{g-p_i}P(Y_i)^{p_i}}{P'(Y_i)}\bigg) \end{align*} where $Y_i=w_i(1+q_i)^{\frac{1}{N}}$ as a power series in $q_i$ on the right hand side. \end{prop} \begin{proof} Let $g(x)=\sum a_dx^d$. 
The generating functions of the form $f(x)=\sum d^ka_dx^d$ can be evaluated as \[f(x)=\bigg(x\frac{\partial}{\partial x}\bigg)^kg(x). \]This holds true for multivariate generating functions (by using partial derivatives). Using the proof of Proposition \ref{prop:theta_sum_int}, specifically equation \ref{eq:power_series_version_theta}, we get the required expression. \end{proof} \begin{prop}\label{prop:f_int} The following identity holds \begin{align*} &\int_{X_{\vec{d}}}^{}\phi_{12}^k(d_1Y_2+d_2Y_1)^{m-k} R(Y_1,Y_2)e^{-\frac{\theta_1+\theta_2-\phi_{12}}{Y_1+Y_2}}\prod_{i=1}^{2}e^{\theta_iz_i}h_i^{d_i-\bar{g}} \\ &= [q_1^{d_1}q_2^{d_2}]\Delta^{m-k}\mathfrak{d}_t^k F_t(Y_1,Y_2)\bigg|_{t=1} \end{align*} where $\eta_i=\frac{P(Y_i)}{B(Y_i)(Y_1+Y_2)}$, $\mathfrak{d}_t=-(Y_1+Y_2)\partial_t$ and \begin{align*} F_t(Y_1,Y_2)&= R(Y_1,Y_2)\prod_{i=1}^{2} \frac{B(Y_i)^{g}}{P'(Y_i)} \bigg(\prod_{i=1}^{2} (1-\eta_i)-\prod_{i=1}^{2}t^2\eta_i\bigg)^g. \end{align*} \end{prop} \begin{proof} Using Proposition \ref{prop:phi_int} we may replace even powers of $\phi_{12}$ with suitable expression in $\theta_i$'s. Therefore we can make the following replacement \begin{align*} \phi_{12}^ke^{-\frac{\theta_1+\theta_2- \phi_{12}}{Y_1+Y_2}} &\to\sum_{p=0}^{\infty}\frac{(-1)^{p+\ell}}{p!(Y_1+Y_2)^p}\bigg(\sum_{\substack{\ell+r+s=p\\\ell\equiv k\mod 2}}^{}\binom{p}{\ell,r,s}\theta_1^{r}\theta_2^{s}\phi_{12}^{\ell+k} \bigg)\\ &\to\sum_{p=0}^{\infty}\sum_{\substack{\ell+r+s=p\\\ell\equiv k\mod 2}}^{}\frac{(-1)^{p+k-\frac{\ell+k}{2}}}{p!}\binom{p}{\ell,r,s}\binom{\ell+k}{\frac{\ell+k}{2}}\binom{g}{\frac{\ell+k}{2}}^{-1}\frac{\theta_1^{r+\frac{\ell+k}{2}}\theta_2^{s+\frac{\ell+k}{2}}}{(Y_1+Y_2)^p} \end{align*} We use Proposition \ref{prop:theta_int_f_class} and binomial identities to obtain that the required expression is \begin{align*} \sum_{p=0}^{\infty}\sum_{\substack{\ell+r+s=p\\\ell\equiv k\mod 2}}^{}(-1)^{p+k-\frac{\ell+k}{2}}\binom{p}{\ell,r,s}\frac{(p+k)!}{p!}\binom{p+k}{r+\frac{\ell+k}{2}}^{-1}\binom{\ell+k}{\frac{\ell+k}{2}}\binom{g}{\frac{\ell+k}{2}}^{-1}\\ \cdot \binom{g}{r+\frac{\ell+k}{2}}\binom{g}{s+\frac{\ell+k}{2}}[q_1^{d_1}q_2^{d_2}]\Delta^{m-k} \frac{J(Y_1,Y_2)}{(Y_1+Y_2)^p} \bigg(\frac{Y_1}{h(Y_1)}\bigg)^{r+\frac{\ell+k}{2}}\bigg(\frac{Y_2}{h(Y_2)}\bigg)^{s+\frac{\ell+k}{2}}, \end{align*} where $h(Y_i)=Y_iB(Y_i)/P(Y_i)$ and \[J(Y_1,Y_2)= R(Y_1,Y_2)\prod_{i=1}^{2} \frac{B(Y_i)^{g}}{P'(Y_i)}. 
\] The binomial factor simplifies to give us \begin{align*} [q_1^{d_1}q_2^{d_2}]\Delta^u\sum_{p=0}^{\infty}&\sum_{\substack{\ell+r+s=p\\2| \ell- k}}^{}(-1)^{\frac{\ell+k}{2}}\frac{(k+\ell)!}{\ell!}\binom{g}{\frac{\ell+k}{2}}\binom{g-\frac{\ell+k}{2}}{r} \binom{g-\frac{\ell+k}{2}}{s}\\ &\cdot\frac{J(Y_1,Y_2)}{(Y_1+Y_2)^p} \bigg(\frac{-Y_1}{h(Y_1)}\bigg)^{r+\frac{\ell+k}{2}}\bigg(\frac{-Y_2}{h(Y_2)}\bigg)^{s+\frac{\ell+k}{2}} \end{align*} We sum over $r$ and $s$ keeping $\ell $ fixed after pulling out the terms independent of $r,s$ and $\ell$ to obtain \begin{align*} [q_1^{d_1}q_2^{d_2}]\Delta^{m-k}(-1)^k(Y_1+Y_2)^k J(Y_1,Y_2) \sum_{2|(\ell- k)}^{}\frac{(k+\ell)!}{\ell!}\binom{g}{\frac{\ell+k}{2}}(-1)^{\frac{\ell+k}{2}} \\ \cdot \prod_{i=1}^{2}(-\eta_i)^{\frac{\ell+k}{2}} (1-\eta_i )^{g-\frac{\ell+k}{2}} \end{align*} The result follows by noting that \begin{align*} \sum_{2|(\ell- k)}^{}\frac{(k+\ell)!}{\ell!}\binom{g}{\frac{\ell+k}{2}}(-1)^{\frac{\ell+k}{2}} \prod_{i=1}^{2}(-\eta_i)^{\frac{\ell+k}{2}} (1-\eta_i )^{g-\frac{\ell+k}{2}}= \partial_t^k \bigg(\prod_{i=1}^{2} (1-\eta_i)-\prod_{i=1}^{2}t^2\eta_i\bigg)^g\bigg|_{t=1}. \end{align*} \end{proof} \section{Virtual Euler characteristics}\label{sec:vir_Euler_char} The Euler characteristic of the symmetric product of curves is given by the well known formula \begin{align*} e(C^{[d]}) = [q^{d}](1-q)^{2g-2}. \end{align*} Let $\vec{d}=(d_1,\dots,d_r)$ and $X_{\vec{d}}=C^{[d_1]}\times\cdots\times C^{[d_r]}$. Then the multiplicative property of Euler characteristic implies \begin{align*} \sum_{|\vec{d}|=d}e(X_{\vec{d}}) = [q^{d}](1-q)^{r(2g-2)}. \end{align*} Let $\IQ_{d}$ be the symplectic isotropic Quot scheme with $N=2n$. The fixed loci under the $\mathbb{C}^*$ action described in Section \ref{sec:fixed_loci}. The localization formula give us explicit expression for the Euler characteristics: \begin{equation*} \sum_{d=0}^{\infty}e(\IQ_d)q^d=2^r\binom{n}{r}(1-q)^{r(2g-2)}. \end{equation*} Since the isotropic Quot scheme are not necessarily smooth, the virtual Euler characteristic $e^{\vir}(\IQ_{d})$ may not coincide with the topological Euler characteristic. Define the formal power series \[A_{N,r,g}(q)=\sum_{d=0}^{\infty}e^{\vir}(\IQ_{d})q^d.\] The virtual localization formula gives \begin{align*} e^{\vir}(\IQ_d)=\sum_{d_1+d_2=d}^{}\sum_{w_1,w_2}\int_{\fix_{\vec{d}, \underline{k}}}^{} c(\fix_{\vec{d}, \underline{k}})\frac{c_{\mathbb{C}^*}(\nbun)}{e_{\mathbb{C}^*}(\nbun)}. \end{align*} We know how to evaluate the above integral (see Section \ref{sec:Euler_class_sympl}), but the details are computationally challenging. We do not a have a closed form expression or a conjecture for $A_{N,r,g}(q)$. Over $\mathbb{P}^1$, we find a finite number of values using computers. We used Sagemath \cite{sagemath} for these calculations: \begin{align*} A_{4,2,0}(q)=&4 + 16q + 32q^2 + 112q^3 + (-396)q^4 + 6800q^5 + (-85856)q^6 + 1122544q^7+ \\& (-14660608)q^8 + 192011264q^9 + (-2520726176)q^{10} + 33164547968q^{11}+\cdots \\ A_{6,2,0}(q)= &12+48q+96q^2+228q^3-3246q^4+\cdots \\ A_{8,2,0}(q)=& 24+96q+192q^2+464q^3+\cdots \end{align*} We observe that $e^{\vir}(\IQ_d)$ differs from the topological Euler characteristic when $d\ge2$, which indicates that $\IQ_{d}$ is not smooth. When $d=0,1$, the space $\IQ_{d}$ is always smooth. \end{document}
\begin{document} \title{On logics extended with embedding-closed quantifiers} \begin{abstract} We study first-order as well as infinitary logics extended with quantifiers closed upwards under embeddings. In particular, we show that if a chain of quasi-homogeneous structures is sufficiently long then, in that chain, a given formula of such a logic becomes eventually equivalent to a quantifier-free formula. We use this fact to produce a number of undefinability results for logics with embedding-closed quantifiers. In the final section we introduce an Ehrenfeucht-Fra\"iss\'e game that characterizes the $\mathcal{L}_{\infty \omega}(\mathcal{Q}_{\emb})$-equivalence between structures, where $\mathcal{Q}_{\emb}$ is the class of all embedding-closed quantifiers. In conclusion, we provide an application of this game illustrating its use. \end{abstract} \section{Introduction} In this paper we focus our attention on a certain class of logics whose expressive power is greater than that of first-order logic (denoted here by $\mathcal{L}_{\omega\omega}$), the central logic in mathematics. While $\mathcal{L}_{\omega\omega}$ has well-developed model theory due to its many convenient properties, it has a downside in that its expressive power is rather limited. Many natural mathematical statements, for example "there are infinitely many", cannot be expressed in $\mathcal{L}_{\omega\omega}$. This motivates study of alternative logics. Mostowski was one of the first to suggest in \cite{Mostowski: 1957} the idea of expanding $\mathcal{L}_{\omega\omega}$ with formulas of the form $Q_{\alpha} x \varphi(x)$ which are interpreted so that $\mathfrak{A} \vDash Q_{\alpha} x \varphi(x)$ if and only if there are at least $\aleph_{\alpha}$ elements $a$ with $\mathfrak{A} \vDash \varphi(a)$. This idea broadened the notion of quantifier giving rise to many interesting logics defined in a similar way. The current definition of generalized quantifier is due to Lindstr\"om \cite{Lindstrom: 1966}. We describe it in more detail in Section \ref{prel}. In short, every generalized quantifier $Q$ corresponds to some property of structures that we denote by $\mathcal{K}_Q$. Suppose $\mathcal{L}$ is a logic closed under substitution and $P$ is a property not expressible in it. By adding quantifier $Q_P$ to $\mathcal{L}$ we get the smallest extension of $\mathcal{L}$ satisfying certain closure conditions that can express $P$. The properties of the new logic $\mathcal{L}(Q_P)$ can differ substantially from those of $\mathcal{L}$ and may thus become an interesting object of study. A justification for studying generalized quantifiers comes among others from the fact that any reasonable extension $\mathcal{L}$ of, say, the logic $\mathcal{L}_{\omega\omega}$, closed under substitutions, is equivalent to the logic $\mathcal{L}_{\omega\omega}(\mathcal{Q})$ where $\mathcal{Q}$ is a class of quantifiers corresponding to some new properties definable in $\mathcal{L}$. In the present work we shall concentrate on extensions of logics $\mathcal{L}_{\omega\omega}$, $\mathcal{L}_{\infty \omega}$ and $\mathcal{L}_{\infty \omega}^{\omega}$ (the finite variable logic) with generalized quantifiers $Q$ that satisfy the following restriction: for all structures $\mathfrak{A} \in \Str[\tau_Q]$, if $\mathfrak{A} \in \mathcal{K}_Q$ and $\mathfrak{A}$ is embeddable into $\mathfrak{B}$ then $\mathfrak{B} \in \mathcal{K}_Q$. We call such quantifiers \emph{embedding-closed} and denote the class of all embedding-closed quantifiers by $\mathcal{Q}_{\emb}$. 
This is an interesting class of quantifiers to study since many well-known quantifiers and properties, like cardinality quantifiers $Q_{\alpha}$ for instance, are closed under embeddings either upwards or downwards which is essentially the same for our purposes. We present more examples of such properties in Section \ref{emb-cl_quant}. We also note that embedding-closed quantifiers are natural in the sense that if $Q$ is embedding-closed then a sentence $Q(\overline{x}_{\alpha}\varphi_{\alpha})_{\alpha<\kappa}$ says that formulas $\varphi_{\alpha}$, $\alpha<\kappa$, define a structure that have a substructure belonging to a given class of structures. Thus, embedding-closed quantifiers seem to be an interesting object of study, and our observations in this paper show that it is possible to develop general theory for this class of quantifiers. Call a structure $\mathfrak{A}$ \emph{quasi-homogeneous} if every isomorphism between fi\-ni\-te\-ly generated substructures of $\mathfrak{A}$ can be extended to an embedding of $\mathfrak{A}$ into itself. This weakens the usual notion of homogeneity which deals with automorphisms instead of embeddings. The notion of embedding-closed quantifier arises naturally when we observe in Theorem \ref{homog} that in order to guarantee that a quasi-homogeneous structure has quantifier elimination for logic $\mathcal{L}_{\infty\omega}$ extended with a set of quantifiers $\mathcal{Q}$, the quantifiers in $\mathcal{Q}$ must be closed under embeddings. In \cite{Dawar: 2010}, Dawar and Gr\"adel showed that $\mathcal{L}_{\infty \omega}^{\omega}$ augmented with finitely many embedding-closed quantifiers of finite width has a $0$-$1$ law meaning that on finite structures such a logic can only express properties that hold in almost all finite structures. Our aim is to study further limits of the expressive power of logics with embedding-closed quantifiers that are not implied by a $0$-$1$ law. These include for example indefinability of properties of infinite structures and structures with function symbols. In this article we provide two methods that make it possible. The first method involves construction of a certain chain of quasi-homogeneous structures. The second method is based on the Ehrenfeucht-Fra\"iss\'e game that we develop in order to characterize $\mathcal{L}_{\infty \omega}(\mathcal{Q}_{\emb})$-equivalence between structures. In the article we apply these methods to produce a number of undefinability results. The article is structured as follows. In Section \ref{prel} we introduce preliminary notions. In Section \ref{emb-cl_quant} we describe basic properties of embedding-closed quantifiers that will be needed later, and give some examples. Before moving to our own major results, we show that $\mathcal{L}^{\omega}_{\infty \omega}(\mathcal{Q})$, where $\mathcal{Q}$ is a finite set of embedding-closed quantifiers of finite width, has a $0$-$1$ law. We do this in Section \ref{law}. The proof concerning $0$-$1$ law was originally given in \cite{Dawar: 2010}. In Section \ref{chain_section} we introduce the notion of quasi-homogeneity and show that if a chain of quasi-homogeneous structures is sufficiently long then the truth value of a given sentence of a logic with embedding-closed quantifiers is eventually preserved. This in turn allows us to obtain some undefinability results. The section has two subsections, one of which is devoted to the undefinability of properties of finite structures and another deals with infinite structures. 
In Section \ref{game_section} we describe the \emph{embedding game} that characterizes $\mathcal{L}_{\infty\omega}(\mathcal{Q}_{\emb})$-equivalence of a given pair of structures. We close the section with an application of the game that allows us to show that for each $n<\omega$ there is a first-order sentence of quantifier rank $n$ that is not expressible by any sentence of $\mathcal{L}_{\infty\omega}(\mathcal{Q}_{\emb})$ of quantifier rank $<n$. \section{Preliminaries}\label{prel} The notation for the basic concepts of abstract logics that we use in this paper is taken from the book \cite{Ebbinghaus: 1985}, and we assume that the reader is familiar with them. A \emph{vocabulary} $\tau$ consists of \emph{relation}, \emph{function} and \emph{constant symbols}, \[ \tau = \{R,\dots,f,\dots,c,\dots\}. \] We denote by $\ar(R)$ and $\ar(f)$ the \emph{arities} of relation and function symbols, respectively. A \emph{$\tau$-structure} $\mathfrak{A}$ is a sequence \[ \mathfrak{A} = (A,R^{\mathfrak{A}},\dots,f^{\mathfrak{A}},\dots,c^{\mathfrak{A}}), \] where $A$ is a set that we call the \emph{universe} of $\mathfrak{A}$, and $R^{\mathfrak{A}}\subseteq A^{\ar(R)},\dots$ are interpretations of symbols of $\tau$. We denote the class of all $\tau$-structures by $\Str[\tau]$. We denote the number of free variables of $\varphi$ by $\frvar(\varphi)$. If $\mathfrak{A}\in\Str[\tau]$ and $\varphi \in \mathcal{L}[\tau]$ then we write \[ \varphi^{\mathfrak{A}} = \{\overline{a} \in A^{\frvar(\varphi)} : \mathfrak{A},\overline{a} \vDash \varphi \}. \] A \emph{literal} is an atomic formula or a negation of an atomic formula. An \emph{atomic $n$-type} of $\tau$ is a set $\Phi$ of literals of $\tau$ in variables $x_1,\dots,x_n$ such that there is a $\tau$-structure $\mathfrak{A}$ and $n$-tuple $\overline{a}$ of elements in $A$ with \[ \Phi = \{\varphi \colon \mathfrak{A},\overline{a} \vDash \varphi \text{ and } \varphi \text{ is a literal of }\tau \}. \] If we work with a logic $\mathcal{L}$ in which for each atomic type $\Phi$ there is a formula $t$ equivalent to $\bigwedge\Phi$ then $t$ is also called an atomic type. \begin{lemma}\label{quant} Let $\vartheta$ be a quantifier-free $\tau$-formula in $n$ free variables. Then there exists a set $T$ of atomic $n$-types of $\tau$ such that \[ \vDash \vartheta \leftrightarrow \bigvee_{\Phi\in T} \bigwedge_{\varphi\in\Phi} \varphi. \] \end{lemma} In this article we will consider the following logics. We assume that the reader is familiar with the \emph{first-order logic} $\mathcal{L}_{\omega\omega}$ and the related notions. Let $\kappa$ be a cardinal. The logic $\mathcal{L}_{\kappa\omega}$ is allowed to have conjunctions and disjunctions over sets of formulas of cardinality $< \kappa$. The logic $\mathcal{L}_{\infty\omega}$ can have conjunctions and disjunctions over arbitrary sets of formulas. Formulas of the logic $\mathcal{L}_{\infty\omega}^{\omega}$ (\emph{finite variable logic}) are exactly those of $\mathcal{L}_{\infty \omega}$ that use at most finite number of variables. Suppose $\mathcal{L}$ is a logic and $\tau,\sigma$ are vocabularies where $\sigma$ has only relation symbols. 
An \emph{$\mathcal{L}$-interpretation} of $\sigma$ in $\tau$ is a sequence $(\Psi,(\psi_R)_{R\in\sigma})$, where each $\psi_R$ is an $\mathcal{L}[\tau]$-formula that has exactly $\ar(R)$ free variables, and $\Psi$ is a function $\Str[\tau] \rightarrow \Str[\sigma]$ such that for each $\mathfrak{A}\in\Str[\tau]$, the universe of $\Psi(\mathfrak{A})$ is $A$ and \begin{equation*} R^{\Psi(\mathfrak{A})} = \{\overline{a} \in A^{\ar(R)} \colon \mathfrak{A}\vDash\psi_R(\overline{a}) \} \end{equation*} for all $R\in\sigma$. Let $C \subseteq \Str[\sigma]$ be a class of structures closed under isomorphism. The logic $\mathcal{L}_{\kappa\omega}(Q_C)$ is the smallest extension of $\mathcal{L}_{\kappa\omega}$ closed under negation, conjunctions and disjunctions of cardinality $<\kappa$ and application of the existential quantifier $\exists$ such that for all $\mathcal{L}_{\kappa\omega}(Q_C)$-interpretations $(\Psi,(\psi_R)_{R\in \sigma})$ there is a $\mathcal{L}_{\kappa\omega}(Q_C)[\tau]$-formula $\chi$ such that \begin{equation}\label{clever} \mathfrak{A} \vDash \chi \Leftrightarrow \Psi(\mathfrak{A})\in C \end{equation} for all $\mathfrak{A} \in \Str[\tau]$. In a similar way we define the logics $\mathcal{L}_{\infty \omega}(Q_C)$ and $\mathcal{L}^{\omega}_{\infty\omega}(Q_C)$. We say that $Q_C$ is the \emph{generalized quantifier} corresponding to the class $C$, and write $Q_C(\overline{x}_R\psi_R)_{R\in\sigma}$ for the formula $\chi$ in \eqref{clever}. The class $C$ is called the \emph{defining class} of $Q_C$, and $\sigma$ the \emph{vocabulary} of $Q$. Sometimes we denote a quantifier just by the symbol $Q$, its defining class by $K_Q$, and the vocabulary of $Q$ by $\tau_Q$. We say that $|\tau_Q|$ is the \emph{width} of $Q$. In a similar way, we can also define the extension of a given logic $\mathcal{L}$ with a \emph{class} $\mathcal{Q}$ of quantifiers, instead of just one, which we denote by $\mathcal{L}(\mathcal{Q})$. \begin{lemma}\label{interpr} Let $\mathcal{Q}$ be a possibly empty class of quantifiers, $\kappa$ a cardinal and $\mathcal{L} = \mathcal{L}_{\kappa \omega}(\mathcal{Q})$ or $\mathcal{L} = \mathcal{L}^{\omega}_{\infty\omega}(\mathcal{Q})$. Suppose $(\Psi,(\psi_R)_{R\in \sigma})$ is an $\mathcal{L}$-interpretation of $\sigma$ in $\tau$. Then for each $\mathcal{L}[\sigma]$-formula $\varphi$ there is an $\mathcal{L}[\tau]$-formula $\varphi^*$ such that \[ \mathfrak{A},\overline{a} \vDash \varphi* \Leftrightarrow \Psi(\mathfrak{A}),\overline{a} \vDash \varphi \] for all $\mathfrak{A} \in \Str[\tau]$ and tuples $\overline{a}$ of elements in $A$. \end{lemma} \begin{proof} Replace all atomic subformulas $R(\overline{x})$ of $\varphi$ with formulas $\psi_R(\overline{x})$ to get $\varphi^*$. \end{proof} \begin{lemma}\label{possibly_empty} Let $\mathcal{Q}_0$ be a possibly empty class of quantifiers, $\kappa$ a cardinal and $\mathcal{L} = \mathcal{L}_{\kappa \omega}(\mathcal{Q}_0)$ or $\mathcal{L} = \mathcal{L}^{\omega}_{\infty\omega}(\mathcal{Q}_0)$. Suppose $\mathcal{Q}_1$ is a class of quantifiers such that for every $Q\in\mathcal{Q}_1$ its defining class $\mathcal{K}_Q$ is definable in $\mathcal{L}$. Then $\mathcal{L}(\mathcal{Q}_1) \equiv \mathcal{L}$. \end{lemma} \begin{proof} Let $\tau$ be a vocabulary and consider a formula $Q(\overline{x}_R\varphi_R)_{R\in\tau_Q}$ with $Q \in \mathcal{Q}_1$ and $\varphi_R \in \mathcal{L}[\tau]$ for all $R\in\tau_Q$. Let $\psi \in \mathcal{L}[\tau_Q]$ be the sentence defining $\mathcal{K}_Q$. 
Then \[ \vDash Q(x_R\varphi_R)_{R\in\tau_Q} \leftrightarrow \chi \] where $\chi$ is the formula got from $\psi$ by substituting every atomic subformula $R(\overline{x})$ of $\psi$ with $\varphi_R(\overline{x})$. \end{proof} \section{Embedding-closed quantifiers}\label{emb-cl_quant} \begin{definition} Let $\mathfrak{A}$ and $\mathfrak{B}$ be structures of the same vocabulary $\tau$. An injection $f \colon A \rightarrow B$ is an \emph{embedding of $\mathfrak{A}$ into $\mathfrak{B}$} if \begin{enumerate} \item $f(c^{\mathfrak{A}}) = c^{\mathfrak{B}}$ for all constant symbols $c \in \tau$, \item $\overline{a} \in R^{\mathfrak{A}} \Leftrightarrow f\overline{a} \in R^{\mathfrak{B}}$ for all relation symbols $R \in \tau$ and tuples $\overline{a}$ in $A$, \item $fF^{\mathfrak{A}}(\overline{a}) = F^{\mathfrak{B}}(f\overline{a})$ for all function symbols $F \in \tau$ and tuples $\overline{a}$ in $A$. \end{enumerate} The notation $\mathfrak{A} \leq \mathfrak{B}$ means that $\mathfrak{A}$ is embeddable into $\mathfrak{B}$. We say that a sequence $(\mathfrak{A}_{\alpha})_{\alpha < \gamma}$ of $\tau$-structures is a \emph{chain} if $\mathfrak{A}_{\alpha} \leq \mathfrak{B}_{\beta}$ whenever $\alpha < \beta$. We say that a class $C$ of $\tau$-structures is an \emph{antichain} if $\mathfrak{A} < \mathfrak{B}$ is never true for any structures $\mathfrak{A}, \mathfrak{B} \in C$. A class $K$ of $\tau$-structures is \emph{embedding-closed} if $\mathfrak{A} \in K$ and $\mathfrak{A} \leq \mathfrak{B}$ imply $\mathfrak{B} \in K$. We say that a quantifier $Q$ is embedding-closed if its defining class is embedding-closed. We denote by $\mathcal{Q}_{\emb}$ the class of all embedding-closed quantifiers. \end{definition} \begin{lemma}\label{preservation} Let $\tau$ be a vocabulary, $(\varphi_{\alpha} )_{\alpha < \kappa}$ quantifier-free $\tau$-formulas and $Q$ an embedding-closed quantifier of width $\kappa$. The formula $Q(\overline{x}_{\alpha} \varphi_{\alpha})_{\alpha < \kappa}$ is preserved by embeddings. \end{lemma} \begin{proof} Let $\mathfrak{A}$ and $\mathfrak{B}$ be $\tau$-structures. Suppose that $(\mathfrak{A},\overline{a}) \vDash Q(\overline{x}_{\alpha} \varphi_{\alpha})_{\alpha < \kappa}$ and $f \colon A \rightarrow B$ is an embedding. Then $f$ is also an embedding of $(A,(\varphi_{\alpha}^{\mathfrak{A},\overline{a}})_{\alpha < \kappa})$ into $(B,(\varphi_{\alpha}^{\mathfrak{B},f\overline{a}})_{\alpha < \kappa})$ since quantifier-free formulas are preserved by embeddings, so $(\mathfrak{B},f\overline{a}) \vDash Q(\overline{x}_{\alpha} \varphi_{\alpha})_{\alpha < \kappa}$ since $Q$ is embedding-closed. \end{proof} Note that instead of requiring the quantifiers to be closed upwards under embeddings, we could use the downwards closure to get an equivalent class of quantifiers. Call a quantifier $Q$ \emph{substructure-closed} if from $\mathfrak{A} \in K_Q$ and $\mathfrak{B} \leq \mathfrak{A}$ follows $\mathfrak{B} \in K_Q$, and denote the class of all substructure-closed quantifiers by $\mathcal{Q}_{\sub}$. The expressive power of $\mathcal{Q}_{\sub}$ is clearly the same as that of $\mathcal{Q}_{\emb}$ since the complement $Q^*$ of an embedding-closed quantifier $Q$ is substructure-closed, so \[ \mathfrak{A} \vDash Q(\overline{x}_{\alpha}\varphi_{\alpha})_{\alpha < \kappa} \Leftrightarrow \mathfrak{A} \vDash \neg Q^*(\overline{x}_{\alpha}\varphi_{\alpha})_{\alpha < \kappa}. 
\] Next we present some examples of well-known properties and quantifiers that are either embedding-closed or are definable in the logic $\mathcal{L}_{\infty \omega}(\mathcal{Q}_{\emb})$. We use notation $Q^{\cl}$ to denote the closure of the quantifier $Q$ under embeddings. In other words $Q^{\cl}$ is the smallest embedding-closed quantifier containig $Q$. \begin{examples} \begin{enumerate} \item Let $\tau = \{U\}$ be a vocabulary consisting of a single unary relation symbol. The existential quantifier $\exists$ corresponds to the class of structures $\{\mathfrak{A} \in \Str[\tau] : U^{\mathfrak{A}} \ne \emptyset \}$, so it is embedding-closed. \item Let $\tau$ be the same as above and $\alpha$ an ordinal. The defining class of the \emph{cardinality quantifier} $Q_{\alpha}$ is $\{\mathfrak{A} \in \Str[\tau] : |U^{\mathfrak{A}}| \geq \aleph_{\alpha} \}$ which is clearly embedding-closed. \item For each $n < \omega$, let $\sigma_n = \{M_n\}$ be a vocabulary consisting of a single $n$-ary relation symbol. The \emph{Magidor-Malitz} quantifier $Q_{\alpha}^n$, whose defining class is \[ \{ \mathfrak{A} \in \Str[\sigma_n] : \text{ there is } C \subseteq A \text{ with } |C| \geq \aleph_{\alpha} \text{ and } C^n \subseteq M_n^{\mathfrak{A}}\}, \] is embedding-closed. \item The well-ordering quantifier $Q^W$, whose defining class is the class of all well-orders, is substructure-closed, so by our earlier remark it can be defined with embedding-closed quantifiers. \item The equivalence quantifier $Q^E_{\alpha}$, whose defining class consists of all structures $(A,E)$ where $E$ is an equivalence relation on $A$ with at least $\aleph_{\alpha}$ equivalence classes, is not embedding-closed. Nonetheless, it can be defined by the sentence \[ (Q^E_{\alpha})^{\cl}xyE(x,y) \land \text{"}E \text{ is an equivalence relation"} \] in $\mathcal{L}_{\omega\omega}(Q)$ where $Q = (Q^E_{\alpha})^{\cl}$ is embedding-closed. \item Many graph properties are embedding- or substructure-closed. Examples include $k$-colorability, being a forest, completeness, planarity, having a cycle, and many others. \item This is an example of a graph property that is not embedding- or substructure-closed but is however definable in $\mathcal{L}_{\omega\omega}(Q)$ for an em\-bed\-ding-closed quantifier $Q$. The property in question is connectedness of a graph. Let $\sigma = \{R,B,E\}$ be the vocabulary of coloured graphs where symbols $R$ and $B$ stand for colors \emph{red} and \emph{blue}. Let $C \subseteq \Str[\sigma]$ consist of all the graphs $G$ in which for every blue-red pair $(x,y)$ of vertices there is a path between $x$ and $y$ and $R^G$, $B^G$ are not empty. Put $D = C^{\cl}$. Then for all graphs $G$, \[ G \vDash \forall xy Q_Dstuv(s=x,t=y,E(u,v)) \] if and only if $G$ is connected. \end{enumerate} \end{examples} As we will show below, there exist properties not definable in $\mathcal{L}_{\infty \omega}(\mathcal{Q}_{\emb})$. These include among others equicardinality of sets (Example \ref{Hartig}), and completeness and cofinality of an ordering (Examples \ref{completeness} and \ref{cofinality}). \subsection{Homomorphism-closed quantifiers} In addition to being closed under embeddings, we can think of various other interesting closure conditions for quantifiers, like being closed under homomorphisms for instance. The purpose of this subsection is to show that any embedding-closed property can be defined in a logic with homomorphism-closed quantifiers only. 
Thus, requiring quantifiers to be closed under homomorphisms is not essentially more restrictive than the requirement that they are closed under embeddings. We denote by $\mathcal{Q}_{\hom}$ the class of all homomorphism-closed quantifiers. \begin{theorem} Let $\mathcal{L} = \mathcal{L}_{\kappa\omega}$ for some cardinal $\kappa$ or $\mathcal{L} = \mathcal{L}_{\infty\omega}^{\omega}$. Then $\mathcal{L}(\mathcal{Q}_{\emb}) \equiv \mathcal{L}(\mathcal{Q}_{\hom})$. \end{theorem} \begin{proof} Let $\tau$ be a relational vocabulary and $K$ an embedding-closed class of $\tau$-structures. Let $\tau' = \tau \cup \{R_* : R \in \tau \} \cup \{N\}$ with $\ar(R_*)= \ar(R)$ for all $R \in\tau$ and $\ar(N) = 2$. Let $F : \Str[\tau] \rightarrow \Str[\tau']$ be the function such that for each $\mathfrak{A} \in \Str[\tau]$ the universe of $F(\mathfrak{A})$ is $A$, $R^{F(\mathfrak{A})} = R^{\mathfrak{A}}$ and $R_*^{F(\mathfrak{A})} = (\neg R)^{\mathfrak{A}}$ for all $R \in \tau$, and $N^{F(\mathfrak{A})} = (x\ne y)^{\mathfrak{A}}$. Now let \[ K' = \{\mathfrak{A}\in\tau' : \text{ there is } \mathfrak{B} \in K \text{ and a homomorphism }f:F(\mathfrak{B})\rightarrow \mathfrak{A} \}. \] Then $K'$ is homomorphism-closed since the composition of two homomorphisms is itself a homomorphism. Let $\psi$ be the following sentence of $\mathcal{L}(\mathcal{Q}_{\hom})$: \[ \psi := Q_{K'}((\overline{x}_RR)_{R\in\tau},(\overline{y}_R\neg R)_{R\in\tau},uv(u\ne v)). \] We claim that $\psi$ defines the class $K$. To show this, suppose first that $\mathfrak{A} \in \Str[\tau]$ is such that $\mathfrak{A}\vDash \psi$. Then by the definition of $K'$ there is $\mathfrak{B} \in K$ and a homomorphism \[ f : F(\mathfrak{B}) \rightarrow (A,(R^{\mathfrak{A}})_{R\in\tau},(\neg R^{\mathfrak{A}})_{R\in\tau},(x\ne y)^{\mathfrak{A}}) \] which is in fact an embedding of $\mathfrak{B}$ into $\mathfrak{A}$ so, since $K$ is embedding-closed, we have $\mathfrak{A} \in K$. For the other direction, if $\mathfrak{A}\in K$ then $F(\mathfrak{A}) \in K'$ so \[ (A,(R^{\mathfrak{A}})_{R\in\tau},(\neg R^{\mathfrak{A}})_{R\in\tau},(x\ne y)^{\mathfrak{A}}) \in K' \] from which it follows that $\mathfrak{A} \vDash \psi$. Thus, $K$ is definable in $\mathcal{L}(\mathcal{Q}_{\hom})$ so by Lemma \ref{possibly_empty} we have $\mathcal{L}(\mathcal{Q}_{\emb}) \leq \mathcal{L}(\mathcal{Q}_{\hom})$, and since every homomomorphism-closed quantifier is also embedding-closed we have $\mathcal{L}(\mathcal{Q}_{\emb}) \equiv \mathcal{L}(\mathcal{Q}_{\hom})$ \end{proof} \section{$0$-$1$ law}\label{law} In \cite{Dawar: 2010}, Dawar and Gr\"adel showed that logic $\mathcal{L}^{\omega}_{\infty \omega}$ (finite variable logic) extended with finitely many embedding-closed quantifiers of finite width has a $0$-$1$ law. We start our investigation of embedding-closed quantifiers by exhibiting this proof here. The notation $\mu(P) = r$ means that the asymptotic probability of a property $P$ is $r$. A structure $\mathfrak{A}$ is \emph{homogeneous} if every isomorphism between finitely generated substructures of $\mathfrak{A}$ can be extended to an atomorphism of $\mathfrak{A}$. Let $\tau$ be a relational vocabulary. The \emph{random $\tau$-structure} is the unique homogeneous countable $\tau$-structure into which any finite $\tau$-structure can be embedded. The following is a well-known fact: \begin{theorem} If $P$ is $\mathcal{L}_{\omega\omega}$-definable property that is true in the random structure then $\mu(P) = 1$. 
\end{theorem} \begin{lemma}\label{random} Let $\tau$ be a finite relational vocabulary, $\mathfrak{A}$ a finite $\tau$-structure and $\overline{a} \in A^n$ for some $n<\omega$. Suppose that $t$ is the atomic type of $\overline{a}$ and set \[ P = \{\mathfrak{B} \in \Str[\tau] \colon \text{for all } \overline{b} \in B^n, \text{ if } \mathfrak{B} \vDash t(\overline{b}) \text{ then } (\mathfrak{A},\overline{a}) \leq (\mathfrak{B},\overline{b}) \}. \] Then $\mu(P) = 1$. \end{lemma} \begin{proof} Denote the random $\tau$-structure by $\mathfrak{R}$ . The structure $\mathfrak{A}$ is embeddable into $\mathfrak{R}$ by, say, an embedding $f$. Let $\overline{b}$ be a tuple in $R$ whose atomic type is $t$. Since $\mathfrak{R}$ is homogeneous, there is an automorphism $h$ that takes $\overline{a}$ to $\overline{b}$. Thus, $h \circ f$ is an embedding of $(\mathfrak{A},\overline{a})$ into $(\mathfrak{R},\overline{b})$, so $P$ holds in the random structure, and since $P$ is $\mathcal{L}_{\omega\omega}$-definable, we have $\mu(P) = 1$. \end{proof} \begin{lemma}[\cite{Dawar: 2010}]\label{asympt2} Let $\tau$ be a finite relational vocabulary, $Q$ embedding-closed quantifier of finite width $k$ and $\psi_0,\dots,\psi_{k-1}$ quantifier-free $\tau$-formulas. There is a quantifier-free $\tau$-formula $\vartheta$ such that $\forall \overline{x}(\vartheta \leftrightarrow Q(\overline{y}_i\psi_i)_{i<k})$ has asymptotic probability $1$. \end{lemma} \begin{proof} Write $\varphi := Q(\overline{y}_i\psi_i)_{i < k}$, and set \begin{align*} \vartheta := \bigvee\{t \colon &t \text{ is an atomic type and } (\mathfrak{A},\overline{a}) \vDash t \land \varphi \\ &\text{ for some finite } \tau \text{-structure } \mathfrak{A} \text{ and tuple }\overline{a}\}. \end{align*} We clearly have $\mathfrak{A} \vDash \forall \overline{x}(\varphi \rightarrow \vartheta)$ for all finite $\tau$-structures $\mathfrak{A}$. For the other direction, if $\vartheta$ is an empty disjunction then $\varphi$ defines the empty relation on all finite structures thus being equivalent to a quantifier-free formula. Therefore, assume that $\vartheta$ is not empty. For each type $t$ in $\vartheta$, choose a pair $(\mathfrak{A},\overline{a})_t$ such that $(\mathfrak{A},\overline{a})_t \vDash t \land \varphi$ and $\mathfrak{A}$ is finite. Such pairs exist by the definition of $\vartheta$. Let $\mathfrak{B}$ be a finite $\tau$-structure such that $\mathfrak{B} \vDash \exists \overline{x}(\vartheta \land \neg \varphi)$. Then there is a tuple $\overline{b}$ in $B$ such that $(\mathfrak{B},\overline{b}) \vDash \vartheta \land \neg \varphi$. Let $t$ be the atomic type of $\overline{b}$. Then $(\mathfrak{A},\overline{a})_t \vDash t \land \varphi$, so if $(\mathfrak{A},\overline{a}) \leq (\mathfrak{B},\overline{b})$ then by Lemma \ref{preservation}, $(\mathfrak{B},\overline{b}) \vDash \varphi$ which is contradiction. Thus, $(\mathfrak{A},\overline{a}) \nleq (\mathfrak{B},\overline{b})$, so by Lemma \ref{random}, $\mu(\exists \overline{x}(\vartheta \land \neg \varphi)) = 0$, so $\mu(\forall \overline{x}(\vartheta \rightarrow \varphi)) = 1$. \end{proof} \begin{theorem}[\cite{Dawar: 2010}] Let $\tau$ be a finite relational vocabulary and $\mathcal{Q}$ a finite set of embedding-closed quantifiers of finite width. For any $\tau$-formula $\varphi \in \mathcal{L}^{\omega}_{\infty\omega}(\mathcal{Q})$ there is a quantifier-free $\tau$-formula $\vartheta$ such that $\forall \overline{x}(\vartheta \leftrightarrow \varphi)$ has asymptotic probability $1$. 
\end{theorem} \begin{proof} Let $k$ be a natural number. There are, up to logical equivalence, finitely many quantifier-free formulas that use only variables $x_0,\dots,x_{k-1}$. Let $\psi_0,\dots,\psi_{l-1}$ be an enumeration of all $\mathcal{L}^k_{\infty\omega}$-formulas of the form $Q(\overline{y}_i \vartheta_i)_{i < n}$ with all $\vartheta_i$ quantifier-free. Note that $l$ is finite. By Lemma \ref{asympt2}, for each $i < l$ there is a quantifier free formula $\chi_i$ such that $\forall \overline{x}(\psi_i \leftrightarrow \chi_i)$ has asymptotic probability $1$. For every $i < l$, let $C_i$ be the set of all isomorphism types of finite structures on which $\forall \overline{x}(\psi_i \leftrightarrow \chi_i)$ is true. Put $C = \bigcap_{i < l}C_i$. Then $\mu(C) = 1$, since $l$ is finite and $\mu(C_i) = 1$ for all $i$. Now we can show that for all $\varphi \in \mathcal{L}^k_{\infty\omega}(\mathcal{Q})$ there is a quantifier-free formula $\vartheta$ such that $\mathfrak{A} \vDash \forall \overline{x}(\vartheta \leftrightarrow \varphi)$ for all $\mathfrak{A} \in C$ from which the claim follows. We use induction on the structure of $\varphi$. If $\varphi$ is quantifier-free, there is nothing to prove. It is also clear that the claim holds for $\varphi = \neg \alpha$ and for $\varphi = \bigwedge_{i \in I}\alpha_i$ if it holds for $\alpha$ and all $\alpha_i$, respectively. Assume that $\varphi = Q(\overline{y}_i \alpha_i)_{i < n}$ and the claim holds for all $\alpha_i$. By the induction hypothesis, there are quantifier-free formulas $\vartheta_i$ such that \[ \mathfrak{A} \vDash \forall \overline{x}(\varphi \leftrightarrow Q(\overline{y}_i\vartheta_i)_{i < n}) \] for all $\mathfrak{A} \in C$. Since $Q(\overline{y}_i\vartheta_i)_{i<n} = \psi_m$ for some $m < l$, we have $\forall \overline{x}(\varphi \leftrightarrow \chi_m)$ on all structures of $C_m$, and therefore on $C$, because $C \subseteq C_m$. \end{proof} \begin{corollary}[\cite{Dawar: 2010}] For any finite set $\mathcal{Q}$ of embedding-closed quantifiers of finite width, the logic $\mathcal{L}^{\omega}_{\infty\omega}(\mathcal{Q})$ has a $0$-$1$ law. \end{corollary} \section{Quantifier elimination for $\mathcal{L}_{\infty \omega}(\mathcal{Q}_{\emb})$ and \\ some undefinability results}\label{chain_section} In this section we introduce a method that allows us to produce some undefinability results for logics with embedding-closed quantifiers that cannot be established by using a $0$-$1$ law. For instance we will show that equicardinality cannot be defined in $\mathcal{L}_{\infty \omega}(\mathcal{Q}_{\emb})$. \begin{definition} A structure $\mathfrak{A}$ is \emph{homogeneous} if every isomorphism between finitely generated substructures of $\mathfrak{A}$ can be extended to an automorphism of $\mathfrak{A}$. We say that $\mathfrak{A}$ is \emph{quasi-homogeneous} if every isomorphism between finitely generated substructures of $\mathfrak{A}$ can be extended to an embedding of $\mathfrak{A}$ into itself. \end{definition} Note that a structure $\mathfrak{A}$ is homogeneous (quasi-homogeneous) if and only if for all tuples $\overline{a}$ and $\overline{b}$ of the same atomic type there is an automorphism (embedding) of $\mathfrak{A}$ taking $\overline{a}$ to $\overline{b}$. It is clear that every countable quasi-homogeneous structure is homogeneous. Let $\mathfrak{R} = (R\setminus\{r\},\leq)$ be the usual ordering of real numbers with some number $r$ removed. 
This "punctured" real line is an example of a structure that is quasi-homogeneous but not homogeneous. \begin{definition} Suppose $\mathcal{L}$ is a logic. We say that a structure $\mathfrak{A}$ has \emph{quantifier elimination for} $\mathcal{L}$ if for all formulas $\varphi \in \mathcal{L}$ there is a quantifier-free formula $\vartheta$ such that $\mathfrak{A} \vDash \forall \overline{x}(\vartheta \leftrightarrow \varphi)$. \end{definition} \begin{theorem}\label{homog} A $\tau$-structure $\mathfrak{A}$ has quantifier elimination for $\mathcal{L}_{\infty\omega}(\mathcal{Q}_{\emb})$ if and only if it is quasi-homogeneous. \end{theorem} \begin{proof} Assume for simplicity that $\tau$ is relational vocabulary. The proof can be generalized in a straightforward way to vocabularies with constant and function symbols. Suppose first that $\mathfrak{A}$ has quantifier elimination. Let $\overline{a} = (a_1,\dots,a_k)$ and $\overline{b} = (b_1,\dots,b_k)$ be tuples of elements of $A$ having the same atomic type. We want to find embedding of $\mathfrak{A}$ into itself that maps $\overline{a}$ to $\overline{b}$. Let $\tau' = \tau \cup \{P\}$ where $P$ is a new relation symbol of arity $k$. Define a $\tau'$-structure $\mathfrak{A}'$ by setting $\mathfrak{A}'\upharpoonright \tau = \mathfrak{A}$ and $P^{\mathfrak{A}'} = \{\overline{a}\}$. Let $Q$ be a quantifier whose defining class is \[ K_Q = \{\mathfrak{B} \in \Str[\tau'] \colon \mathfrak{A}' \text{ is embeddable into } \mathfrak{B}\}, \] and suppose $\varphi \in \mathcal{L}_{\infty\omega}(\mathcal{Q}_{\emb})[\tau]$ is the next formula: \[ \varphi(\overline{z}) := Q((\overline{x}_RR)_{R\in\tau},\overline{x}_P=\overline{z}). \] Then $\mathfrak{A} \vDash \varphi(\overline{a})$ and, since $\mathfrak{A}$ has quantifier elimination and $\overline{a}$ and $\overline{b}$ have the same atomic type, we have $\mathfrak{A} \vDash \varphi(\overline{b})$, so there is an embedding $f$ of $\mathfrak{A}'$ into $(A,(S^{\mathfrak{A}})_{S\in\tau}, \{\overline{b}\})$ which clearly is a wanted embedding. For the other direction, assume that $\mathfrak{A}$ is quasi-homogeneous. Let $Q \in \mathcal{Q}_{\emb}$ and suppose $(\psi_R)_{R \in \tau_Q}$ are quantifier-free formulas. Now set $\varphi := Q(\overline{x}_R\psi_R)_{R \in \tau_Q}$ and denote by $k$ the number of free variables of $\varphi$. Let \[ \vartheta = \bigvee\{t \colon t \text{ is an atomic type and for some } \overline{a} \in A^k, (\mathfrak{A}, \overline{a}) \vDash \varphi \land t\}. \] Let $\overline{b} \in A^k$ and $(\mathfrak{A}, \overline{b}) \vDash \vartheta$. Then $(\mathfrak{A},\overline{b}) \vDash t$ and $(\mathfrak{A},\overline{a}) \vDash \varphi \land t$ for some $\overline{a} \in A^k$ with atomic type $t$. Since $\mathfrak{A}$ is quasi-homogeneous, there is an embedding $f$ of $(\mathfrak{A},\overline{a})$ into $(\mathfrak{A},\overline{b})$. Since $\mathfrak{A},\overline{a} \vDash \varphi$, we have $(A,(\psi^{\mathfrak{A},\overline{a}}_R)_{R \in \tau_Q}) \in K_Q$, so $(A,(\psi^{\mathfrak{A},\overline{b}}_R)_{R \in \tau_Q}) \in K_Q$ because $Q$ is embedding-closed and $\psi_R$ are quantifier-free. Thus, $\mathfrak{A} \vDash \forall \overline{x}(\vartheta \rightarrow \varphi)$, and since clearly $\mathfrak{A} \vDash \forall \overline{x}(\varphi \rightarrow \vartheta)$, we have $\mathfrak{A} \vDash \forall \overline{x}(\vartheta \leftrightarrow \varphi)$. Thus, by using induction, we can eliminate quantifiers in all formulas $\varphi \in \mathcal{L}_{\infty\omega}(\mathcal{Q}_{\emb})$. 
\end{proof} \subsection{The finite case} In this subsection, we will consider logics $\mathcal{L}_{\infty\omega}^m$ (the restriction of $\mathcal{L}_{\infty\omega}$ to formulas that use at most $m$ variables) extended with finite number of em\-bed\-ding-closed quantifiers of finite width. We will show that in a countably infinite chain of quasi-homogeneous structures of finite relational vocabulary, a formula of such a logic is eventually equivalent to a quantifier-free formula. This will allow us to demonstrate, among other things, that certain properties of finite structures are not definable in such a logic. \begin{lemma}\label{quant_elim_finite_lemma} Let $\tau$ be a finite relational vocabulary, $Q$ an embedding-closed quantifier of width $n<\omega$ and $\varphi = Q(x_i\psi_i)_{i<n}$ where all $\psi_i$ are quantifier-free $\tau$-formulas. Let $(\mathfrak{A}_i)_{i<\omega}$ be a chain of quasi-homogeneous $\tau$-structures. Then there is a natural number $k$ and a quantifier-free $\tau$-formula $\vartheta$ such that \[ \mathfrak{A}_i \vDash \forall \overline{x}(\varphi \leftrightarrow \vartheta) \] for all $k \leq i$. \end{lemma} \begin{proof} For each $i < \omega$, let \[ T_i = \{t \colon t \text{ is an atomic type and } (\mathfrak{A}_i,\overline{a}) \vDash \varphi \land t \text{ for some } \overline{a} \}. \] Since all $\mathfrak{A}_i$ are quasi-homogeneous, it follows from Theorem \ref{homog} and Lemma \ref{quant} that \[ \mathfrak{A}_i \vDash \forall \overline{x}(\varphi \leftrightarrow \bigvee T_i). \] Now let $i \leq j < \omega$ and $t \in T_i$. We have $(\mathfrak{A}_i,\overline{a}) \vDash \varphi \land t$ for some $\overline{a}$, so $(\mathfrak{A}_j,\overline{a}) \vDash \varphi \land t$ since both $\varphi$ and $t$ are preserved by embeddings. Thus, $t \in T_j$, so $T_i \subseteq T_j$ always when $i \leq j$. Since there are finitely many distinct atomic $n$-types, the chain $(T_i)_{i < \omega}$ reaches its maximum at some $k < \omega$. Then $\vartheta = \bigvee T_k$ is a quantifier-free $\tau$-formula we want. \end{proof} \begin{theorem}\label{finite} Let $\tau$ be a finite relational vocabulary, $\mathcal{Q}$ a finite set of embed\-ding-closed quantifiers of finite width and $(\mathfrak{A}_i)_{i<\omega}$ a chain of quasi-ho\-mo\-ge\-neous $\tau$-structures. For each $m< \omega$, there is a natural number $N_m$ such that for every formula $\varphi \in \mathcal{L}^m_{\infty \omega}(\mathcal{Q})[\tau]$ there is a quantifier-free $\tau$-formula $\vartheta_{\varphi}$ such that \[ \mathfrak{A}_i \vDash \forall \overline{x}(\varphi \leftrightarrow \vartheta_{\varphi}) \] for all $i \geq N_m$. \end{theorem} \begin{proof} Let $\psi_0,\dots,\psi_l$ be an enumeration of all (up to equivalence) the $\tau$-formulas in at most $m$ variables having form $Q(\overline{x}_i\vartheta_i)_{i<n}$ with $Q \in \mathcal{Q}$ and all $\vartheta_i$ quantifier-free. Note that $l$ is finite. By Lemma \ref{quant_elim_finite_lemma} for each $\psi_i$ there is $k_i < \omega$ and a quantifier-free $\tau$-formula $\vartheta$ such that \[ \mathfrak{A}_j \vDash \forall \overline{x} (\psi_i \leftrightarrow \vartheta) \] when $j \geq k_i$. We claim that we can set $N_m := \max\{k_i : i \leq l\}$. We prove the claim by induction on the structure of the formula $\varphi$. The cases of $\varphi$ atomic, $\varphi = \neg \alpha$ and $\varphi = \bigwedge_{i \in I}\alpha_i$ are clear. Suppose $\varphi = Q(\overline{x}_i\alpha_i)_{i<n}$ and the claim holds for all $\alpha_i$. 
Then there are quantifier-free $\tau$-formulas $\vartheta_i$ such that \[ \mathfrak{A}_j \vDash \forall \overline{y}(\varphi \leftrightarrow Q(\overline{x}_i\vartheta_i)_{i<n}) \] when $j \geq N_m$. Thus, if $j \geq N_m$ then $\varphi$ is equivalent to some $\psi_r$ and therefore to some quantifier-free formula $\vartheta$, so we can set $\vartheta_{\varphi} := \vartheta$. \end{proof} \begin{definition} Let $\mathcal{L}$ be a logic, $\tau$ a vocabulary and $\mathfrak{A}$, $\mathfrak{B}$ $\tau$-structures. Suppose also that there is an embedding $f$ of $\mathfrak{A}$ into $\mathfrak{B}$. Let \[ \tau^* = \tau \cup_{\disjoint} \{c_a : a \in A\} \] and denote by $\mathfrak{A}^*$ and $\mathfrak{B}^*$ the $\tau^*$-extensions of $\mathfrak{A}$ and $\mathfrak{B}$, respectively, such that $c_a^{\mathfrak{A}^*} = a$ and $c_a^{\mathfrak{B}^*} = f(a)$ for all $a \in A$. Then we write $\mathfrak{A} \preceq_{\mathcal{L}} \mathfrak{B}$ if $\mathfrak{A}^*$ and $\mathfrak{B}^*$ satisfy exactly the same sentences of $\mathcal{L}[\tau^*]$. \end{definition} \begin{corollary}\label{finite2} Let $\tau$ be a finite relational vocabulary, $\mathcal{Q}$ a finite set of embed\-ding-closed quantifiers of finite width and $(\mathfrak{A}_i)_{i<\omega}$ a chain of quasi-ho\-mo\-ge\-neous $\tau$-structures. Let $m<\omega$ and write $\mathcal{L} = \mathcal{L}_{\infty\omega}^m(\mathcal{Q})$. There is a natural number $N_m$ such that $\mathfrak{A}_i \preceq_{\mathcal{L}} \mathfrak{A}_j$ for all $j \geq i \geq N_m$. \end{corollary} In Section \ref{law} we saw that logic $\mathcal{L}^{\omega}_{\infty\omega}$ extended with finitely many em\-bed\-ding-closed quantifiers of finite width has a $0$-$1$ law which implies undefinability of certain properties, like having even cardinality, in such a logic. By using Corollary \ref{finite2} we can determine further properties of finite structures that are not definable in a logic of this kind. In order to apply the theorem, however, we first need to know which structures are homogeneous. This question has been studied to some extent (a survey can be found in \cite{Lachlan: 1986}). Finite homogeneous structures have been classified completely at least in the cases of finite graphs \cite{Gardiner: 1976}, groups \cite{Cherlin: 2000} and rings \cite{Saracino: 1988}. In addition, it is easy to see that all unary structures are homogeneous. \begin{example}\label{hom_graphs} According to \cite{Lachlan: 1986}, the only finite homogeneous (undirected) graphs are up to complement \begin{enumerate} \item $Pe = (\{0,1,2,3,4\},\{ (i,j)\colon |i-j|\in\{1,4 \}\})$ (pentagon), \item $K_3 \times K_3$, \item $I_m[K_n]$ with $m,n < \omega$, \end{enumerate} where $K_n$ is the complete graph of $n$ vertices and $I_m[G]$ consists of $m$ disjoint copies of $G$. It is easy to see that $I_m[K_n] \leq I_{m'}[K_{n'}]$ if and only if $m \leq m'$ and $n \leq n'$. Thus, for example, the graph properties "there is a clique of even cardinality" or "there are more cliques than there are vertices in any clique" are not definable in $\mathcal{L}^{\omega}_{\infty\omega}(\mathcal{Q})$ for any finite set $\mathcal{Q}$ of embedding-closed quantifiers of finite width. \end{example} \begin{example}\label{Hartig_ex} Let $\tau = \{U,V\}$ be a vocabulary with $U$ and $V$ unary relation symbols. The quantifier corresponding to the class \[ I = \{\mathfrak{A}\in\Str[\tau]\colon |U^{\mathfrak{A}}|=|V^{\mathfrak{A}}| \} \] of $\tau$-structures is known as \emph{H\"artig quantifier}. 
Let $\mathcal{Q}$ be a finite set of embedding-closed quantifiers of finite width. The following is a simple observation showing that the H\"artig quantifier is not definable in $\mathcal{L}^{\omega}_{\infty\omega}(\mathcal{Q})$ even if we consider only finite structures. For all $i<\omega$ define a $\tau$-structure $\mathfrak{A}_i$ by setting $A_i = \{0,\dots,i\}$ and letting $U^{\mathfrak{A}_i}$ to be the set of all even and $V^{\mathfrak{A}_i}$ of all odd numbers of $A_i$. Then $\mathfrak{A}_i \leq\mathfrak{A}_{i+1}$ for all $i<\omega$, and $|U^{\mathfrak{A}_i}|=|V^{\mathfrak{A}_i}|$ if and only if $i$ is odd. Thus, it follows from Corollary \ref{finite2} that the class \[ \{\mathfrak{A} \in \Str[\tau] \colon \mathfrak{A} \text{ is finite and }|U^{\mathfrak{A}}| = |V^{\mathfrak{A}}| \} \] is not definable in the logic $\mathcal{L}^{\omega}_{\infty \omega}(\mathcal{Q})$. In the next subsection we will show that in fact the H\"artig quantifier is not definable in the logic $\mathcal{L}_{\infty\omega}(\mathcal{Q}_{\emb})$ as well. \end{example} As we saw in Example \ref{hom_graphs}, there are only few homogeneous finite graphs, so we cannot usually directly apply Corollary \ref{finite2} in studying definability of graph properties. The following theorem shows that this situation can be remedied to some extent by using \emph{interpretations}. \begin{theorem}\label{inter} Let $\tau$ and $\sigma$ be finite relational vocabularies and suppose $\mathcal{Q}$ is a finite set of embedding-closed quantifiers of finite width. Let $m < \omega$, and write $\mathcal{L} = \mathcal{L}^m_{\infty\omega}(\mathcal{Q})$. Suppose $(\mathfrak{A}_i)_{i<\omega}$ is a chain of quasi-homogeneous $\tau$-structures and $(\Psi,(\psi_R)_{R\in\sigma})$ is an $\mathcal{L}$-interpretation of $\sigma$ in $\tau$. There is a natural number $N_m$ such that for all $j \geq i \geq N_m$ we have $\Psi(\mathfrak{A}_i) \preceq_{\mathcal{L}} \Psi(\mathfrak{A}_j)$. \end{theorem} \begin{proof} By Corollary \ref{finite2}, there is a natural number $N'_m$ such that $\mathfrak{A}_i \preceq_{\mathcal{L}} \mathfrak{A}_j$ for all $j\geq i \geq N'_m$. We claim that $N'_m$ is a wanted number $N_m$. To show that, let $\varphi \in \mathcal{L}[\sigma]$. By Lemma \ref{interpr}, there is a formula $\varphi^* \in \mathcal{L}[\tau]$ such that \[ \mathfrak{A}_i, \overline{a} \vDash \varphi^* \Leftrightarrow \Psi(\mathfrak{A}_i), \overline{a} \vDash \varphi \] for all $i<\omega$ and tuples $\overline{a}$ of elements of $A$. Let $j \geq i \geq N'_m$ and suppose that $\Psi(\mathfrak{A}_i),\overline{a} \vDash \varphi$. Then we have $\mathfrak{A}_i,\overline{a} \vDash \varphi^*$, so $\mathfrak{A}_j, \overline{a} \vDash \varphi^*$ from which follows $\Psi(\mathfrak{A}_j), \overline{a} \vDash \varphi$. The claim is thus proved. \end{proof} \begin{example} A graph $G$ is \emph{regular} if every vertex of $G$ has the same number of neighbours. We will show that regularity of a graph is not definable in the logic $\mathcal{L} = \mathcal{L}^{\omega}_{\infty \omega}(Q_1,\dots,Q_n)$ where all $Q_i \in \mathcal{Q}_{\emb}$ have finite width. Let $\tau$ be the same vocabulary and $(\mathfrak{A}_i)_{i<\omega}$ a chain of $\tau$-structures as in Example \ref{Hartig_ex}. Let $\sigma = \{E\}$ where $E$ is a relation symbol and suppose $(\Psi,\psi)$ is an interpretaion of $\sigma$ in $\tau$ where \[ \psi(x,y) := (U(x)\land U(y)) \lor (V(x) \land V(y)). 
\] Then for all $k<\omega$, \[ \Psi(\mathfrak{A}_k) = \begin{cases} K_{\frac{k+1}{2}} + K_{\frac{k+1}{2}} \text{ if } k \text{ is odd} \\ K_{\frac{k}{2}} + K_{\frac{k}{2}+1} \text{ if } k \text{ is even}, \end{cases} \] where every $K_m$ is the complete graph on $m$ vertices and $+$ means disjoint union. Thus, a graph $\Psi(\mathfrak{A}_k)$ is regular if and only if $k$ is odd, so by Theorem \ref{inter} regularity of graphs is not definable in $\mathcal{L}$. \end{example} \begin{example}\label{groups} If we allow $\tau$ to have function symbols then Corollary \ref{finite2} does not hold. Let $\tau$ be the vocabulary of groups. A formula $\varphi \in \mathcal{L}_{\infty\omega}^{\omega}[\tau]$, \begin{align*} \varphi(x,y) := \bigvee_{k<\omega}\big(&(x^k=1 \wedge y^k=1) \\ &\wedge \neg \bigvee_{m<k}(x^m = 1) \wedge \neg \bigvee_{m<k}(y^m=1)\big) \end{align*} says that $x$ and $y$ have the same order, and a formula $\chi \in \mathcal{L}_{\infty\omega}^{\omega}[\tau]$, \[ \chi(x,y) := \bigvee_{k<\omega}y = x^k, \] says that $y$ is in the subgroup generated by $x$. For each $n<\omega$, set \begin{align*} G_{2n} &= \prod_{i\leq n} C_{p_i}^2 \\ G_{2n+1} &= \prod_{i\leq n} C_{p_i}^2 \times C_{p_{n+1}}, \end{align*} where $p_i$ is the $i$:th prime number and $C_{p_i}$ the cyclic group of order $p_i$. Then $(G_n)_{n<\omega}$ is a chain of homogeneous groups, but $G_{2n} \vDash \psi$ and $G_{2n+1} \nvDash \psi$ for all $n$, where \[ \psi := \forall x \exists y(\neg\chi(x,y) \wedge \varphi(x,y)). \] \end{example} We will need the following lemmas in the proof of Theorem \ref{log_eq}. The proof of Lemma \ref{shrinking} follows from the main results of papers \cite{Cherlin: 1986} and \cite{Lachlan: 1984}. We will not present it here due to its complexity. \begin{lemma}\label{shrinking} Let $\tau$ be a finite relational vocabulary and $\mathcal{H}$ the class of all finite homogeneous $\tau$-strucutres. Then all antichains of structures in $\mathcal{H}$ are finite, in other words, if $C \subseteq \mathcal{H}$ is infinite then there are $\mathfrak{A}$, $\mathfrak{B} \in C$ such that $\mathfrak{A} < \mathfrak{B}$. \end{lemma} \begin{lemma}\label{substr} Let $\tau$ be a finite relational vocabulary and $\mathfrak{A}_0, \dots, \mathfrak{A}_{k-1}$ finite $\tau$-structures. There is a sentence $\varphi \in \mathcal{L}_{\omega\omega}[\tau]$ such that for all $\mathfrak{B} \in \Str[\tau]$, \[ \mathfrak{B} \vDash \varphi \Leftrightarrow \mathfrak{A}_i \leq \mathfrak{B} \text{ for some } i < k. \] \end{lemma} \begin{proof} Every finite $\tau$-structure can be described up to isomorphism by a sentence in $\mathcal{L}_{\omega\omega}[\tau]$. Thus, if $\psi_i \in \mathcal{L}_{\omega\omega}[\tau]$ describes $\mathfrak{A}_i$ then $\bigvee_{i<k}\psi_i$ is the desired sentence. \end{proof} \begin{theorem}\label{log_eq} Let $\tau$ be a finite relational vocabulary and $\mathcal{H}$ the class of all finite homogeneous $\tau$-structures. Suppose that $\mathcal{Q}$ is a finite set of embedding-closed quantifiers of finite width. Then $\mathcal{L}_{\infty\omega}^{\omega}(\mathcal{Q}) \equiv \mathcal{L}_{\omega\omega}$ over $\mathcal{H}$. \end{theorem} \begin{proof} Let $\varphi \in \mathcal{L}^{\omega}_{\infty\omega}(\mathcal{Q})[\tau]$. We say that $\mathfrak{A} \in \mathcal{H}$ \emph{stabilizes $\varphi$ in $\mathcal{H}$} if for all $\mathfrak{B} \in \mathcal{H}$, $\mathfrak{A} \leq \mathfrak{B}$ implies $\mathfrak{A} \vDash \varphi \Leftrightarrow \mathfrak{B} \vDash \varphi$. 
Clearly, there is a structure $\mathfrak{A}_0 \in \mathcal{H}$ that stabilizes $\varphi$ in $\mathcal{H}$ since otherwise we would be able to construct an infinite chain of structures of $\mathcal{H}$ which contradicts Theorem \ref{finite}. Similarly, if the class of finite $\tau$-structures incomparable with $\mathfrak{A}_0$ is not empty then we can find a structure $\mathfrak{A}_1 \in \mathcal{H}$ incomparable with $\mathfrak{A}_0$ that stabilizes $\varphi$ in $\mathcal{H}$. Continuing in the same way we can construct an antichain $\mathcal{C} \subseteq \mathcal{H}$ of finite structures such that every structure $\mathfrak{B} \in \mathcal{H}$ is comparable with some structure $\mathfrak{A} \in \mathcal{C}$ and every structure in $C$ stabilizes $\varphi$. By Lemma \ref{shrinking}, $\mathcal{C}$ is finite so by Lemma \ref{substr}, $\varphi$ is equivalent to a sentence in $\mathcal{L}_{\omega\omega}[\tau]$ over the structures in $\mathcal{H}$. \end{proof} In particular, if $\tau$ is a finite unary vocabulary and $\mathcal{Q}$ is a finite set of embedding-closed quantifiers of finite width then $\mathcal{L}_{\infty\omega}^{\omega}(\mathcal{Q}) \equiv \mathcal{L}_{\omega\omega}$ over finite $\tau$-structures. \subsection{The infinite case} It is possible to generalize Theorem \ref{finite} to vocabularies and sets of embedding-closed quantifiers of arbitrary cardinality. \begin{theorem}\label{infinite} Let $\tau$ be a vocabulary, $\kappa$ a cardinal, $\mathcal{Q}$ a set of embedding-closed quantifiers of width less than $\kappa$, and $\lambda$ a regular cardinal such that \[ 2^{|\tau|\cdot\aleph_0} \cdot |\mathcal{Q}| \cdot \kappa < \lambda. \] Suppose $(\mathfrak{A}_{\alpha})_{\alpha<\lambda}$ is a chain of quasi-homogeneous $\tau$-structures. There is a cardinal $\mu<\lambda$ such that for every formula $\varphi \in \mathcal{L}_{\infty\omega}(\mathcal{Q})$ there is a quantifier-free $\tau$-formula $\vartheta_{\varphi}$ such that \[ \mathfrak{A}_{\alpha} \vDash \forall \overline{x}(\varphi \leftrightarrow \vartheta_{\varphi}) \] for all $\mu \leq \alpha$. In particular, $\mathfrak{A}_{\alpha} \preceq_{\mathcal{L}_{\infty\omega}(\mathcal{Q})} \mathfrak{A}_{\beta}$ for all $\mu \leq \alpha \leq \beta < \lambda$. \end{theorem} \begin{proof} Let $Q \in \mathcal{Q}_{\emb}$ and $\varphi = Q(\overline{x}_i\vartheta_i)_{i<\delta}$ where all $\vartheta_i$ are quantifier-free $\tau$-formulas. In the first part of this proof, we will generalize the proof of Lemma \ref{quant_elim_finite_lemma} to apply for chains of quasi-homogeneous structures of arbitrary length. First we observe that for all $\alpha< \lambda$ there is the smallest set $T_{\alpha}$ of atomic types of $\tau$ such that $\mathfrak{A}_{\alpha} \vDash \forall\overline{x}(\varphi \leftrightarrow \bigvee T_{\alpha})$ because of quasi-homogeneity of $\mathfrak{A}_{\alpha}$. In addition, if $\alpha < \beta < \lambda$ then $T_{\alpha} \subseteq T_{\beta}$. Thus, since there are at most $2^{|\tau|+\aleph_0}$ atomic types of $\tau$, if $\lambda$ is a regular cardinal greater than $2^{|\tau|+\aleph_0}$ and $(\mathfrak{A}_{\alpha})_{\alpha<\lambda}$ is a chain of quasi-homogeneous $\tau$-structures then there is a cardinal $\kappa_{\varphi}<\lambda$ and a quantifier-free $\tau$-formula $\vartheta_{\varphi}$ that is equivalent to $\varphi$ in structures $\mathfrak{A}_{\alpha}$ with $\alpha\geq\kappa_{\varphi}$. 
Let $\Phi$ be the set of all formulas of the form $Q(\overline{x}_i\vartheta_i)_{i<\delta}$, with $Q\in\mathcal{Q}$ and all $\vartheta_i$ quantifier-free. Then $|\Phi| \leq 2^{|\tau|\cdot\aleph_0}\cdot |\mathcal{Q}| \cdot \kappa$. We claim that we can set $\mu := \sup\{\kappa_{\varphi} \colon \varphi \in \Phi \}$. We use induction on the structure of the formula to prove this claim. If $\varphi$ is atomic there is nothing to prove. If $\varphi = \bigwedge_{i\in I} \varphi_i$ and the claim holds for all $\varphi_i$ then $\vartheta_{\varphi} = \bigwedge_{i\in I}\vartheta_{\varphi_i}$. If $\varphi = \neg \varphi'$ and the claim is true for $\varphi'$ then $\vartheta_{\varphi} = \neg \vartheta_{\varphi'}$. Finally, suppose that $\varphi = Q(\overline{x}_i\varphi_i)_{i<\delta}$ and the claim holds for all $\varphi_i$. Then each $\varphi_i$ is equivalent to a quantifier-free formula $\vartheta_i$ on structures $\mathfrak{A}_{\alpha}$ with $\alpha \geq \mu$ so by the first part of the proof, $\varphi$ is also equivalent to some quantifier-free formula $\vartheta_{\varphi}$ on these structures. \end{proof} \begin{example}\label{Hartig} It follows directly from Theorem \ref{infinite} that the H\"artig quantifier, introduced in Example \ref{Hartig_ex}, is not definable in $\mathcal{L}_{\infty\omega}(\mathcal{Q}_{\emb})$. \end{example} \begin{lemma} If $\mathfrak{A}$ and $\mathfrak{B}$ are bi-embeddable quasi-homogeneous $\tau$-structures then $\mathfrak{A} \equiv_{\emb} \mathfrak{B}$. \end{lemma} \begin{proof} We can build a chain of arbitrary length in which structures $\mathfrak{A}$ and $\mathfrak{B}$ alternate. By Theorem \ref{infinite} the truth value of any sentence $\varphi \in \mathcal{L}_{\infty\omega}(\mathcal{Q}_{\emb})$ is eventually preserved in this chain, so $\mathfrak{A} \vDash \varphi \Leftrightarrow \mathfrak{B} \vDash \varphi$. \end{proof} \begin{example}\label{completeness} Let $\eta = (0,1)$, that is $\eta$ is the open real line interval between $0$ and $1$, and $\xi = \eta \setminus \{\frac{1}{2}\}$. Then $\eta$ and $\xi$ are both quasi-homogeneous and bi-embeddable, so $\eta \equiv_{\emb} \xi$. Thus, the completeness of an ordering is not definable in $\mathcal{L}_{\infty \omega}(\mathcal{Q}_{\emb})$. \end{example} \begin{example}\label{cofinality} We denote by $\omega_{\alpha}^{\omega_{\alpha}}$ the set of all functions $\omega_{\alpha} \rightarrow \omega_{\alpha}$. Let $\aleph_{\alpha}$ be a regular cardinal, $\eta$ the lexicographic ordering of the set $\omega_{\alpha}^{\omega_{\alpha}}$, and $\xi$ the lexicographic ordering of the set $\omega \times \omega_{\alpha}^{\omega_{\alpha}}$. Then $\cf(\eta) = \aleph_{\alpha}$ and $\cf(\xi) = \aleph_0$, where $\cf$ means the cofinality of an ordering. The orderings $\eta$ and $\xi$ are both quasi-homogeneous and bi-embeddable, hence $\eta \equiv_{\emb} \xi$. Therefore, for any ordinal $\beta$, the property of having cofinality $\aleph_{\beta}$ is not definable in $\mathcal{L}_{\infty \omega}(\mathcal{Q}_{\emb})$. \end{example} We can use Example \ref{cofinality} to obtain the following result: \begin{theorem} The logic $\mathcal{L}_{\infty\omega}(\mathcal{Q}_{\emb})$ does not allow interpolation for the logic $\mathcal{L}_{\omega\omega}(Q_1)$. \end{theorem} \begin{proof} Let $C_0$, $C_1$ be the classes of all linear orderings of cofinality $\aleph_0$, $\aleph_1$, respectively. Both $C_0$ and $C_1$ are projective classes in $\mathcal{L}_{\omega\omega}(Q_1)$. 
To see this, let $\psi(\leq,U)$ be the sentence saying that $\leq$ is a linear ordering of the universe without a greatest element and $U$ induces a cofinal subordering of $\leq$. Furthermore, let $\chi(\leq,U)$ be the following sentence: \[ \chi(\leq,U) := \ Q_1xU(x) \wedge \forall x\big(U(x)\rightarrow \neg Q_1y\big(U(y) \wedge y \leq x\big) \big). \] We say that a linear ordering $\leq$ is \emph{$\omega_1$-like} if the sentence \[ Q_1(x=x) \wedge \forall x \neg Q_1 y(y\leq x) \] is true in it. Thus, if $\leq$ is a linear ordering then $\chi(\leq,U)$ says that the subordering of $\leq$ induced by $U$ is $\omega_1$-like. Now let \[ \varphi_0(\leq,U) := \psi(\leq,U) \wedge \neg Q_1 x U(x) \] and \[ \varphi_1(\leq,U) := \psi(\leq,U) \wedge \chi(\leq,U). \] Then clearly the sentence $\varphi_0$ is a projective definition of the class $C_0$. That $\varphi_1$ defines projectively the class $C_1$ follows from the fact that an ordering has cofinality $\aleph_1$ if and only if it has a cofinal $\omega_1$-like subordering. Thus, $C_0$ and $C_1$ are disjoint projective classes in $\mathcal{L}_{\omega\omega}(Q_1)$ that by Example \ref{cofinality} cannot be separated by any elementary class in $\mathcal{L}_{\infty\omega}(\mathcal{Q}_{\emb})$, from which the claim follows. \end{proof} \section{Embedding game}\label{game_section} In this section we will introduce a game characterizing the relation $\equiv_{\emb}$. The \emph{embedding game} is played on two structures $\mathfrak{A}$ and $\mathfrak{B}$ of the same vocabulary by two players, Spoiler and Duplicator. A \emph{position} in the game is a tuple $(\mathfrak{A},\overline{a},\mathfrak{B},\overline{b})$, where $\overline{a}$ and $\overline{b}$ are tuples of elements of $A$ and $B$, respectively. The game proceeds in rounds and starts from the position $(\mathfrak{A},\emptyset,\mathfrak{B},\emptyset)$. Suppose that $n$ rounds of the game have been played and the position is $(\mathfrak{A},\overline{a},\mathfrak{B},\overline{b})$. First Duplicator chooses embeddings $f \colon \mathfrak{A} \rightarrow \mathfrak{B}$ and $g \colon \mathfrak{B} \rightarrow \mathfrak{A}$ such that $f\overline{a} = \overline{b}$ and $g\overline{b} = \overline{a}$. If there are no such embeddings then Spoiler wins the game. Otherwise Spoiler selects a natural number $k$ and a tuple $\overline{c} \in A^k$ or $\overline{d} \in B^k$. This completes the round, and the game continues from the position $(\mathfrak{A},\overline{a}\overline{c},\mathfrak{B},\overline{b}f\overline{c})$ or $(\mathfrak{A},\overline{a}g\overline{d},\mathfrak{B},\overline{b}\overline{d})$ depending on whether Spoiler chose $\overline{c} \in A^k$ or $\overline{d} \in B^k$. Duplicator wins the game if and only if the game goes on indefinitely. \begin{definition} For every formula $\varphi \in \mathcal{L}_{\infty\omega}(\mathcal{Q}_{\emb})$, we define inductively its \emph{quantifier rank}, denoted by $\qr(\varphi)$, by setting \begin{align*} \qr(\varphi) &= 0 \text{ if } \varphi \text{ is quantifier-free}, \\ \qr(\neg\varphi) &= \qr(\varphi), \\ \qr(\bigvee\Phi) &= \qr(\bigwedge\Phi) = \sup\{\qr(\varphi) : \varphi \in \Phi\}, \\ \qr(Q(\overline{x}_{\delta}\varphi_{\delta})_{\delta<\kappa}) &= \sup\{\qr(\varphi_{\delta}) : \delta < \kappa \} + 1. \end{align*} We write $\mathfrak{A} \simeq_{\emb} \mathfrak{B}$ if Duplicator wins the embedding game on $\mathfrak{A}$ and $\mathfrak{B}$, and $\mathfrak{A} \simeq_{\emb}^{\gamma} \mathfrak{B}$ if Duplicator does not lose in the first $\gamma$ rounds.
We write $\mathfrak{A} \equiv_{\emb} \mathfrak{B}$ if $\mathfrak{A}$ and $\mathfrak{B}$ agree on all sentences of $\mathcal{L}_{\infty \omega}(\mathcal{Q}_{\emb})$. The notation $\mathfrak{A} \equiv_{\emb}^{\gamma} \mathfrak{B}$ means that $\mathfrak{A}$ and $\mathfrak{B}$ agree on all the sentences of $\mathcal{L}_{\infty \omega}(\mathcal{Q}_{\emb})$ whose quantifier rank is $\leq \gamma$. \end{definition} \begin{remark}\label{bijective} The embedding game was inspired by and bears some resemblance to Hella's \emph{bijective game}, which was introduced in \cite{Hella: 1984}. In it, Duplicator selects a bijection $f$ between two structures $\mathfrak{A}$, $\mathfrak{B}$ instead of a pair of embeddings. Then Spoiler chooses a tuple $\overline{c} \in A^n$ where $n<\omega$ is a number whose value is fixed at the beginning of the game (we say that it is the \emph{$n$-bijective game}). Duplicator loses if $c \mapsto f(c)$ is not a partial isomorphism on the elements of $\overline{c}$ between the structures $(\mathfrak{A},\overline{a})$ and $(\mathfrak{B},\overline{b})$, where $\overline{a}$ and $\overline{b}$ are the elements chosen in the previous rounds of the game. Otherwise the game continues to the next round from the position $(\mathfrak{A},\overline{a}\overline{c},\mathfrak{B},\overline{b}f\overline{c})$. The $n$-bijective game characterizes the equivalence of structures in relation to the logic $\mathcal{L}_{\infty\omega}$ extended with the class of all generalized quantifiers of arity $\leq n$. \end{remark} \begin{remark} Let $\tau$ be a vocabulary, $\mathfrak{A}$ and $\mathfrak{B}$ $\tau$-structures, $\overline{a} \in A^n$ and $\overline{b} \in B^n$. A position $(\mathfrak{A},\overline{a},\mathfrak{B},\overline{b})$ is equivalent to the position $(\mathfrak{A}',\emptyset,\mathfrak{B}',\emptyset)$, where $\mathfrak{A}'$ and $\mathfrak{B}'$ are structures of vocabulary $\tau$ expanded with new constant symbols $c_1,\dots,c_n$ with interpretations $c_i^{\mathfrak{A}'} = a_i$ and $c_i^{\mathfrak{B}'} = b_i$ for all $i$. For the sake of brevity, we will use vocabulary expansions instead of writing positions explicitly. \end{remark} \begin{theorem}\label{game} Let $\tau$ be a vocabulary and $\mathfrak{A}$, $\mathfrak{B}$ $\tau$-structures. For all ordinals $\gamma \geq 1$ we have $ \mathfrak{A} \simeq_{\emb}^{\gamma} \mathfrak{B} \text{ if and only if } \mathfrak{A} \equiv_{\emb}^{\gamma} \mathfrak{B}$. \end{theorem} \begin{proof} We use induction on $\gamma$. Suppose first that $\mathfrak{A} \simeq^1_{\emb} \mathfrak{B}$. Then $\mathfrak{A} \leq \mathfrak{B}$ and $\mathfrak{B} \leq \mathfrak{A}$, so $\mathfrak{A} \equiv_{\emb}^1 \mathfrak{B}$ by Lemma \ref{preservation}. Assume next that $\mathfrak{A} \equiv^1_{\emb} \mathfrak{B}$. Let $\mathfrak{A}'$ be the structure $\mathfrak{A}$ with functions and constants replaced by corresponding relations. Let $Q$ be the smallest embedding-closed quantifier containing $\mathfrak{A}'$. For each symbol of $\tau$ define $\varphi_R := R$ for all relation symbols $R \in \tau$, $\varphi_f := f(\overline{x}) = y$ for all function symbols $f \in \tau$ and $\varphi_c := x = c$ for all constant symbols $c \in \tau$. Then $\mathfrak{A} \vDash Q(\overline{x}_S \varphi_S)_{S \in \tau}$, and since $\mathfrak{A} \equiv^1_{\emb} \mathfrak{B}$, we have $\mathfrak{B} \vDash Q(\overline{x}_S \varphi_S)_{S \in \tau}$, so $\mathfrak{A} \leq \mathfrak{B}$. In the same way we prove that $\mathfrak{B} \leq \mathfrak{A}$, so $\mathfrak{A} \simeq^1_{\emb} \mathfrak{B}$.
The base step of induction is thus proved. Assume now that $\gamma > 1$ and the claim holds for all $\alpha < \gamma$. Suppose first that $\mathfrak{A} \simeq^{\gamma}_{\emb} \mathfrak{B}$. We use induction on the structure of formulas to show that $\mathfrak{A} \equiv_{\emb}^{\gamma} \mathfrak{B}$. Thus, assume that $\mathfrak{A} \vDash \varphi$ and $\qr(\varphi) \leq \gamma$. If $\varphi$ is quantifier-free then $\mathfrak{B} \vDash \varphi$ since otherwise Duplicator would lose immediately. If $\varphi = \neg \psi$ or $\varphi = \bigwedge \Psi$ and the claim holds for $\psi$ and all $\chi \in \Psi$ then it is straightforward to see that $\mathfrak{B} \vDash \varphi$ as well. Now assume that $\varphi = Q(\overline{x}_{\delta}\psi_{\delta})_{\delta<\kappa}$ and the claim is true for all $\psi_{\delta}$. Let $f$ be an embedding $\mathfrak{A} \rightarrow \mathfrak{B}$ that Duplicator can choose in the first round according to her winning strategy. Then for all $\overline{a}\in A^{<\omega}$ we have $(\mathfrak{A},\overline{a})\simeq^{\gamma-1}_{\emb}(\mathfrak{B},f\overline{a})$ if $\gamma$ is finite and $(\mathfrak{A},\overline{a})\simeq^{\gamma}_{\emb}(\mathfrak{B},f\overline{a})$ if $\gamma$ is infinite, so $(\mathfrak{A},\overline{a})\simeq^{\alpha}_{\emb}(\mathfrak{B},f\overline{a})$ for all $\alpha<\gamma$. Thus, by the induction hypothesis, $(\mathfrak{A},\overline{a})\equiv^{\alpha}_{\emb}(\mathfrak{B},f\overline{a})$ for all $\alpha<\gamma$ so \[ \mathfrak{A} \vDash \psi_{\delta}(\overline{a}) \Leftrightarrow \mathfrak{B} \vDash \psi_{\delta}(f\overline{a}) \] for all $\delta<\kappa$ since $\qr(\psi_{\delta}) < \gamma$ by the definition of quantifier rank and the fact that $\qr(\varphi) \leq \gamma$. This means that $f$ is an embedding \[ (A, (\psi_{\delta}^{\mathfrak{A}})_{\delta < \kappa}) \rightarrow (B, (\psi_{\delta}^{\mathfrak{B}})_{\delta < \kappa}), \] so $\mathfrak{B} \vDash Q(\overline{x}_{\delta} \psi_{\delta})_{\delta < \kappa}$ since $Q$ is embedding-closed. In the same way we show that $\mathfrak{B} \vDash \varphi$ implies $\mathfrak{A} \vDash \varphi$ for all $\varphi \in \mathcal{L}_{\infty\omega}(\mathcal{Q}_{\emb})$ with quantifier rank $\leq \gamma$ thus proving that $\mathfrak{A} \equiv^{\gamma}_{\emb} \mathfrak{B}$. For the other direction, assume that $\mathfrak{A} \not \simeq^{\gamma}_{\emb} \mathfrak{B}$. We denote by $\mathcal{F}_A$ the set of all embeddings $\mathfrak{A} \rightarrow \mathfrak{B}$, and by $\mathcal{F}_B$ the set of all embeddings $\mathfrak{B} \rightarrow \mathfrak{A}$. Then for each pair of $(f,g) \in \mathcal{F}_A\times\mathcal{F}_B$ there are tuples $\overline{a} \in A^{<\omega}$, $\overline{b} \in B^{<\omega}$ such that $(\mathfrak{A},\overline{a}) \not \simeq^{\alpha}_{\emb} (\mathfrak{B},f\overline{a})$ or $(\mathfrak{A},g\overline{b}) \not \simeq^{\alpha}_{\emb} (\mathfrak{B},\overline{b})$ for some $\alpha < \gamma$. Thus, by the induction hypothesis, for each pair of embeddings $(f,g)$ there is a formula $\psi_f$ or a formula $\psi_g$ of quantifier rank $< \gamma$, such that \begin{align*} (*)\ &\mathfrak{A} \vDash \psi_f(\overline{a}) \nLeftrightarrow \mathfrak{B} \vDash \psi_f(f\overline{a}) \text{ or } \\ &\mathfrak{A} \vDash \psi_g(g\overline{b}) \nLeftrightarrow \mathfrak{B} \vDash \psi_g(\overline{b}) \end{align*} for some $\overline{a} \in A^{<\omega}$ or $\overline{b} \in B^{<\omega}$. 
Let $Q_A$ be the smallest embedding-closed quantifier containing the structure $(A, (\psi_h^{\mathfrak{A}})_{h \in \mathcal{F}_A})$, and $Q_B$ be the smallest embedding-closed quantifier containing the structure $(B, (\psi_h^{\mathfrak{B}})_{h \in \mathcal{F}_B})$. Then $\mathfrak{A} \vDash Q_A(\overline{x}_h \psi_h)_{h \in \mathcal{F}_A}$ and $\mathfrak{B} \vDash Q_B(\overline{x}_h \psi_h)_{h \in \mathcal{F}_B}$. Assume to the contrary that $\mathfrak{B} \vDash Q_A(\overline{x}_h \psi_h)_{h \in \mathcal{F}_A}$ and $\mathfrak{A} \vDash Q_B(\overline{x}_h \psi_h)_{h \in \mathcal{F}_B}$. Then there are embeddings \begin{align*} f : (A,(\psi_h^{\mathfrak{A}})_{h \in \mathcal{F}_A}) &\rightarrow (B,(\psi_h^{\mathfrak{B}})_{h \in \mathcal{F}_A}) \text{ and} \\ g : (B,(\psi_h^{\mathfrak{B}})_{h \in \mathcal{F}_B}) &\rightarrow (A,(\psi_h^{\mathfrak{A}})_{h \in \mathcal{F}_B}), \end{align*} so there are embeddings $f$ and $g$ such that \begin{align*} \mathfrak{A} \vDash \psi_f(\overline{a}) &\Leftrightarrow \mathfrak{B} \vDash \psi_f(f\overline{a}) \text{ and }\\ \mathfrak{A} \vDash \psi_g(g\overline{b}) &\Leftrightarrow \mathfrak{B} \vDash \psi_g(\overline{b}) \end{align*} for all $\overline{a}$ and $\overline{b}$, which contradicts $(*)$. Thus, $\mathfrak{B} \nvDash Q_A(\overline{x}_h \psi_h)_{h \in \mathcal{F}_A}$ or $\mathfrak{A} \nvDash Q_B(\overline{x}_h \psi_h)_{h \in \mathcal{F}_B}$, which completes the induction step. \end{proof} The next proposition says that in order to ensure that $\mathfrak{A} \equiv_{\emb} \mathfrak{B}$ it is sufficient for Duplicator to have a winning strategy for the embedding game of length $\omega$. \begin{proposition} For all structures $\mathfrak{A}$, $\mathfrak{B}$ of the same vocabulary we have $\mathfrak{A} \equiv_{\emb} \mathfrak{B}$ if and only if $\mathfrak{A} \equiv_{\emb}^{\omega} \mathfrak{B}$. \end{proposition} \begin{proof} The implication from left to right is trivial. For the other direction, we use induction on ordinals $\gamma$ to show that for all structures $\mathfrak{A}$, $\mathfrak{B}$, if $\mathfrak{A} \equiv_{\emb}^{\omega} \mathfrak{B}$ then $\mathfrak{A} \equiv_{\emb}^{\gamma} \mathfrak{B}$. Thus suppose that the claim holds for all $\alpha < \gamma$. Let $\varphi$ be an $\mathcal{L}_{\infty\omega}(\mathcal{Q}_{\emb})$-sentence with $\qr(\varphi) \leq \gamma$. We use induction on the structure of $\varphi$ to show that $\mathfrak{A}$ and $\mathfrak{B}$ agree on its truth value. The only interesting case is when $\varphi = Q(\overline{x}_{\delta}\psi_{\delta})_{\delta<\kappa}$. Suppose that $\mathfrak{A} \vDash \varphi$, and let $\alpha = \sup\{\qr(\psi_{\delta}) : \delta < \kappa \}$. By the definition of quantifier rank we have $\alpha < \gamma$, so by the induction hypothesis $\mathfrak{A} \simeq^{\alpha}_{\emb} \mathfrak{B}$. If $\alpha$ is finite then $\mathfrak{B} \vDash \varphi$ since $\mathfrak{A} \equiv^{\omega}_{\emb} \mathfrak{B}$ and we are done. Thus assume that $\alpha$ is infinite. Let $f : \mathfrak{A} \rightarrow \mathfrak{B}$ be an embedding that Duplicator can choose in the first round in order to win the game of length $\alpha$. Then $(\mathfrak{A},\overline{a}) \simeq^{\alpha}_{\emb} (\mathfrak{B},f\overline{a})$ for all $\overline{a} \in A^{<\omega}$ since $\alpha$ is infinite, so \[ \mathfrak{A} \vDash \psi_{\delta}(\overline{a}) \Leftrightarrow \mathfrak{B} \vDash \psi_{\delta}(f\overline{a}) \] for all $\overline{a} \in A^{<\omega}$.
Hence, $f$ is an embedding \[ (A,(\psi_{\delta}^{\mathfrak{A}})_{\delta<\kappa}) \rightarrow (B,(\psi_{\delta}^{\mathfrak{B}})_{\delta<\kappa}) \] so $\mathfrak{B} \vDash \varphi$ since $Q$ is embedding-closed. In the same way we show that $\mathfrak{B} \vDash \varphi$ implies $\mathfrak{A} \vDash \varphi$. \end{proof} \begin{example}\label{hier} Let $E_0$ be an equivalence relation with a countably infinite number of $E_0$-classes, and suppose that each $E_0$-class has cardinality $\aleph_1$. Let $E_1$ satisfy the same conditions with the exception of having one $E_1$-class of cardinality $\aleph_0$. Then $E_0$ and $E_1$ are bi-embeddable, so $E_0 \equiv_{\emb}^1 E_1$. Let $f \colon E_0 \rightarrow E_1$ and $g \colon E_1 \rightarrow E_0$ be embeddings. Let $[a]_{E_1}$ be the $E_1$-class of cardinality $\aleph_0$, and suppose Spoiler chooses the embedding $g$ and the element $a$. It is easy to see that there is no embedding of $E_0$ into $E_1$ that maps $g(a)$ to $a$, since the restriction of such an embedding to an $E_0$-class must be included in some $E_1$-class, and $|[g(a)]_{E_0}| = \aleph_1$ and $|[a]_{E_1}| = \aleph_0$. Thus, Duplicator loses in the second round, so $E_0 \not \equiv_{\emb}^2 E_1$. \end{example} We are going to use the embedding game in order to prove the following theorem: \begin{theorem}\label{for_each} For each $n<\omega$, there is a first-order sentence $\varphi_n$ of quantifier rank $n$ that is not expressible by any $\mathcal{L}_{\infty \omega}(\mathcal{Q}_{\emb})$-sentence of quantifier rank $<n$ and is of the form \[ \varphi_n = Q_n x_n \cdots Q_1 x_1 \vartheta(x_1,\dots,x_n) \] where \[ Q_n = \begin{cases} \forall \text{ if } n \text{ is odd},\\ \exists \text{ if } n \text{ is even}, \end{cases} \] and $\vartheta$ is quantifier-free. \end{theorem} For the remaining part of this text we will denote by $\eta$ the usual ordering of the rational numbers. The following lemma is a well-known fact. \begin{lemma}\label{order_isom} Every open segment of $\eta$ is isomorphic to $\eta$. \end{lemma} \begin{definition} In the text of this definition we will assume that all structures are disjoint unless mentioned otherwise. For every natural number $n \geq 1$, we define the vocabulary $\tau_n = \{P_0,\dots,P_n,\leq,E_1,\dots,E_n\}$, where all $P_i$ are unary and $E_i$, $\leq$ binary relation symbols, and $\tau_n^* = \tau_n \cup \{C_1,\dots,C_n\}$, where all $C_i$ are unary relation symbols. Our aim is to define classes $S_n, T_n \subset \Str[\tau_n^*]$ such that $(\mathfrak{A} \upharpoonright \tau_n) \equiv_{\emb}^n (\mathfrak{B} \upharpoonright \tau_n)$ but $(\mathfrak{A} \upharpoonright \tau_n) \not \equiv_{\mathcal{L}_{\omega\omega}}^{n+1} (\mathfrak{B} \upharpoonright \tau_n)$ for all $\mathfrak{A} \in S_n$ and $\mathfrak{B} \in T_n$. $\textbf{n = 1:}$ First we define the class $R_1 \subset \Str[\tau_1^*]$ by setting $\mathfrak{A} \in R_1$ if and only if there are disjoint structures $\mathfrak{B}_0$, $\mathfrak{B}_1 \in \Str[\{\leq\}]$ isomorphic to $\eta$ such that $\mathfrak{A} \upharpoonright \{\leq\} = \mathfrak{B}_0 \cup \mathfrak{B}_1$ and $P_i^{\mathfrak{A}} = B_i$ for $i = 0,1$. In what follows, we will use $\eta^{\mathfrak{A}}_i$ to denote $\mathfrak{B}_i$ for $i = 0,1$. Note that $\eta_i^{\mathfrak{A}} = (\mathfrak{A} \upharpoonright \{\leq\}) | P_i^{\mathfrak{A}}$.
Finally we set \[ S_1 = \{\mathfrak{A}\in R_1 : E_1^{\mathfrak{A}} \text{ is an isomorphism between } \eta^{\mathfrak{A}}_1 \text{ and } \eta^{\mathfrak{A}}_0 \} \] and \begin{align*} T_1 = \{\mathfrak{A}\in R_1 : &\ C_1^{\mathfrak{A}} = \{a\} \text{ for some } a \in P^{\mathfrak{A}}_1 \text{ and } \\ & \text{ there is an isomorphism } h \text{ between }\\ &\ \eta^{\mathfrak{A}}_1 \text{ and }\eta^{\mathfrak{A}}_0 \text{ such that } E_1^{\mathfrak{A}} = h \setminus \{(a,h(a))\}\}. \end{align*} \includegraphics[scale = 0.8]{figure1} $\textbf{n > 1:}$ We define the class $R_n \subset \Str[\tau_n^*]$ by setting $\mathfrak{A} \in R_n$ if and only if there are structures $\mathfrak{B} \in \Str[\{\leq\}]$ and $\mathfrak{M}_a \in \Str[\tau^*_{n-1}]$ for each $a \in B$ such that \[ \mathfrak{A} \upharpoonright \tau^*_{n-1} = \mathfrak{B} \cup \bigcup_{a \in B}\mathfrak{M}_a, \] $P_n^{\mathfrak{A}} = B$, $C_n^{\mathfrak{A}} = \{a\}$ for some $a \in P_n^{\mathfrak{A}}$, and \[ E_n^{\mathfrak{A}} = \{(a,b) \in A^2 : a \in P_n^{\mathfrak{A}} \text{ and } b \in M_a \}. \] If $n$ is even then we set $\mathfrak{A}\in S_n$ if and only if $\mathfrak{A} \in R_n$ and for all $a\in P^{\mathfrak{A}}_n$, \begin{align*} \mathfrak{M}_a \in T_{n-1} &\text{ if } a \notin C^{\mathfrak{A}}_n,\\ \mathfrak{M}_a \in S_{n-1} &\text{ if } a \in C^{\mathfrak{A}}_n, \end{align*} and $\mathfrak{A} \in T_n$ if and only if $\mathfrak{A} \in R_n$ and for all $a \in P_n^{\mathfrak{A}}$ we have $\mathfrak{M}_a \in T_{n-1}$. If $n$ is odd then we set $\mathfrak{A} \in S_n$ if and only if $\mathfrak{A} \in R_n$ and for all $a \in P^{\mathfrak{A}}_n$ we have $\mathfrak{M}_a \in S_{n-1}$, and $\mathfrak{A} \in T_n$ if and only if $\mathfrak{A} \in R_n$ and for all $a \in P^{\mathfrak{A}}_n$, \begin{align*} \mathfrak{M}_a \in S_{n-1} &\text{ if } a \notin C^{\mathfrak{A}}_n,\\ \mathfrak{M}_a \in T_{n-1} &\text{ if } a \in C^{\mathfrak{A}}_n. \end{align*} \includegraphics[scale = 0.8]{figure2} \includegraphics[scale = 0.8]{figure3} \end{definition} \begin{lemma}\label{exactly_one} Let $n\geq 1$ be a natural number. The classes $S_n$ and $T_n$ each contain exactly one structure up to isomorphism. \end{lemma} \begin{proposition}\label{suppose} Suppose $n \geq 1$ is a natural number. Let $\mathfrak{A} \in S_n$ and $\mathfrak{B} \in T_n$. There is an $\mathcal{L}_{\omega\omega}[\tau_n]$-sentence $\varphi_n$ of quantifier rank $n+1$ such that $\mathfrak{A} \vDash \varphi_n$ and $\mathfrak{B} \nvDash \varphi_n$. \end{proposition} \begin{proof} If $n = 1$ then we can set $\varphi_1 = \forall x (P_1(x)\rightarrow \exists y (P_0(y) \wedge E_1(x,y)))$. Assume that $n > 1$ and such $\varphi_{n-1}$ exists. Then if $n$ is even we can set $\varphi_n = \exists x (P_n(x) \wedge \varphi_{n-1}^{\{y : E_n(x,y)\}})$, and if $n$ is odd we can set $\varphi_n = \forall x (P_n(x)\rightarrow \varphi_{n-1}^{\{y : E_n(x,y)\}})$. \end{proof} \begin{lemma}\label{natural_number} Let $n \geq 1$ be a natural number. Let $\mathfrak{A}' \in S_n$, $\mathfrak{B}'\in T_n$ if $n$ is odd, and $\mathfrak{A}'\in T_n$, $\mathfrak{B}' \in S_n$ if $n$ is even. Set $\mathfrak{A} = \mathfrak{A}'\upharpoonright \tau_n$ and $\mathfrak{B} = \mathfrak{B}'\upharpoonright \tau_n$.
There is an embedding $f$ of $\mathfrak{A}$ into $\mathfrak{B}$ such that for any $\overline{a} \in A^{<\omega}$ there are partitions $(A_1,A_2)$ of $A$ and $(B_1,B_2)$ of $B$ such that \begin{enumerate} \item $\overline{a} \in A_1^{<\omega}$, \item $f | A_1$ is an isomorphism between $\mathfrak{A} | A_1$ and $\mathfrak{B} | B_1$, \item $\mathfrak{A}' | A_2 \cong \mathfrak{A}'$ and $\mathfrak{B}'|B_2 \cong \mathfrak{B}'$, \item for every embedding $g :\mathfrak{A}|A_2\rightarrow \mathfrak{B}|B_2$ and $h : \mathfrak{B} |B_2\rightarrow \mathfrak{A}|A_1$ it is the case that $(f|A_1) \cup g$ and $(f^{-1}|B_1) \cup h$ are embeddings $\mathfrak{A}\rightarrow\mathfrak{B}$ and $\mathfrak{B}\rightarrow\mathfrak{A}$, respectively, \item for any $m < \omega$, we have $(\mathfrak{A},\overline{a}) \equiv^m_{\emb} (\mathfrak{B},f\overline{a})$ if and only if $\mathfrak{A} \equiv^m_{\emb} \mathfrak{B}$. \end{enumerate} \end{lemma} \begin{proof} Recall first that for each natural number $n\geq 1$, there is an element $b_n \in B$ such that $C^{\mathfrak{B}'}_n = \{b_n\}$. We use this element to define the ordering $\xi_n = \eta^{\mathfrak{B}}_n | \{x \in B : x <b_n \}$. It is easy to see by using Lemma \ref{order_isom} that for all $n\geq 1$ there is an isomorphism $h_n$ between $\eta^{\mathfrak{A}}_n$ and $\xi_n$. We have three possible cases to consider: $n=1$, $n > 1$ is even and $n > 1$ is odd. First suppose that $n = 1$. Then $\mathfrak{A}' \in S_1$ and $\mathfrak{B}' \in T_1$. Define the function $f : A \rightarrow B$ by setting $f(x) = h_1(x)$ if $x \in P_1^{\mathfrak{A}}$, and $f(x) = y$ if $x \in P^{\mathfrak{A}}_0$, where $y \in P^{\mathfrak{B}}_0$ is such that there is $z \in P^{\mathfrak{A}}_1$ with $\mathfrak{A} \vDash E_1(z,x)$ and $\mathfrak{B} \vDash E_1(h_1(z),y)$. It is easy to see that $f$ is an embedding of $\mathfrak{A}$ into $\mathfrak{B}$. Let $a_0,\dots,a_k \in A$. We must find partitions satisfying conditions 1--5. For this, let $b, c \in A$ be such that \begin{align*} \mathfrak{A} \vDash &P_1(b) \wedge P_0(c) \wedge E_1(b,c) \wedge \\ &\bigwedge_{i\leq k}\big(\big(P_1(a_i) \rightarrow a_i<b\big) \wedge \big(P_0(a_i)\rightarrow a_i < c\big)\big). \end{align*} Set $A_1 = \{x \in A : x < b \text{ or } x < c \}$, $B_1 = f[A_1]$, and denote by $A_2$, $B_2$ the complements of $A_1$, $B_1$, respectively. It is straightforward to verify that the partitions $(A_1,A_2)$, $(B_1,B_2)$ satisfy all the five requirements. Notice that fact 5 follows directly from fact 4. Thus, the case $n = 1$ is proved. Assume now that $n > 1$ is even. Then $\mathfrak{A}' \in T_n$ and $\mathfrak{B}' \in S_n$, and we have $\mathfrak{M}_x \in T_{n-1}$ for all $x \in P^{\mathfrak{A}}_n$ and $\mathfrak{M}_x \in T_{n-1}$ for all $x \in \xi_n$, so, by Lemma \ref{exactly_one}, for each $x \in P_n^{\mathfrak{A}}$ there is an isomorphism $h_x$ between $\mathfrak{M}_x$ and $\mathfrak{M}_{h_n(x)}$. Then $f = h_n \cup \bigcup_{x\in P^{\mathfrak{A}}_n}h_x$ is the desired embedding. The case where $n>1$ is odd is proved in the same way. \end{proof} \begin{lemma}\label{structure} Let $n>1$ be a natural number. Let $\mathfrak{A}' \in S_n$, $\mathfrak{B}'\in T_n$ if $n$ is odd, and $\mathfrak{A}'\in T_n$, $\mathfrak{B}' \in S_n$ if $n$ is even. Set $\mathfrak{A} = \mathfrak{A}'\upharpoonright \tau_n$ and $\mathfrak{B} = \mathfrak{B}'\upharpoonright \tau_n$.
There is an embedding $f$ of $\mathfrak{B}$ into $\mathfrak{A}$ such that for any $\overline{a} \in B^{<\omega}$ there are partitions $(A_1,A_2)$ of $A$ and $(B_1,B_2)$ of $B$ such that \begin{enumerate} \item $\overline{a} \in B_1^{<\omega}$, \item $f | B_1$ is an isomorphism between $\mathfrak{B} | B_1$ and $\mathfrak{A} | A_1$, \item $\mathfrak{A}'|A_2 \in T_{n-1}$ and $\mathfrak{B}'|B_2 \in S_{n-1}$ if $n$ is even, \item $\mathfrak{A}'|A_2 \in S_{n-1}$ and $\mathfrak{B}'|B_2 \in T_{n-1}$ if $n$ is odd, \item for every embedding $g :\mathfrak{A}|A_2\rightarrow \mathfrak{B}|B_2$ and $h : \mathfrak{B} |B_2\rightarrow \mathfrak{A}|A_1$ it is the case that $(f^{-1}|A_1) \cup g$ and $(f|B_1) \cup h$ are embeddings $\mathfrak{A}\rightarrow\mathfrak{B}$ and $\mathfrak{B}\rightarrow\mathfrak{A}$, respectively, \item if $n$ is even then $(\mathfrak{A},f\overline{a}) \equiv^{n-1}_{\emb} (\mathfrak{B},\overline{a})$ if and only if $(\mathfrak{M}\upharpoonright\tau_{n-1}) \equiv^{n-1}_{\emb} (\mathfrak{N}\upharpoonright \tau_{n-1})$ for all $\mathfrak{M} \in T_{n-1}$ and $\mathfrak{N} \in S_{n-1}$, \item if $n$ is odd then $(\mathfrak{A},f\overline{a}) \equiv^{n-1}_{\emb} (\mathfrak{B},\overline{a})$ if and only if $(\mathfrak{M}\upharpoonright \tau_{n-1}) \equiv^{n-1}_{\emb} (\mathfrak{N}\upharpoonright \tau_{n-1})$ for all $\mathfrak{M} \in S_{n-1}$ and $\mathfrak{N} \in T_{n-1}$. \end{enumerate} \end{lemma} \begin{proof} There are two possible cases: $n$ is even and $n$ is odd. Suppose first that $n$ is even. Then $\mathfrak{A}' \in T_n$ and $\mathfrak{B}' \in S_n$. Let $h$ be an isomorphism between $\eta^{\mathfrak{B}}_n$ and $\eta^{\mathfrak{A}}_n$. For each $x\in P_n^{\mathfrak{B}} \setminus C_n^{\mathfrak{B}}$, let $h_x$ be an isomorphism between $\mathfrak{M}_x$ and $\mathfrak{M}_{h(x)}$. For $a \in B$ such that $C^{\mathfrak{B}}_n = \{a\}$, let $h_a$ be an embedding of $\mathfrak{M}_a$ into $\mathfrak{M}_{h(a)}$ as in Lemma \ref{natural_number}. Let $f = h \cup \bigcup_{x\in P^{\mathfrak{B}}_n}h_x$. Then $f$ is an embedding of $\mathfrak{B}$ into $\mathfrak{A}$. Note that $\mathfrak{M}_a \in S_{n-1}$ and $\mathfrak{M}_{h(a)} \in T_{n-1}$, and for all embeddings $g : \mathfrak{M}_a \rightarrow \mathfrak{M}_{h(a)}$ and $g' : \mathfrak{M}_{h(a)} \rightarrow \mathfrak{M}_a$ the functions $f' \cup g$ and $f'^{-1} \cup g'$ are also embeddings, where $f' = h\cup \bigcup_{x\in P_n^{\mathfrak{B}}\setminus \{a\}}h_x$. Let $\overline{a} \in B^{<\omega}$, $\overline{a} = (a_1,\dots,a_k)$, and assume without loss of generality that $a_1,\dots,a_m \in B\setminus M_a$ and $a_{m+1},\dots,a_k \in M_a$ for some $m\leq k$. For all $x \in P^{\mathfrak{A}}_n \cup P^{\mathfrak{B}}_n$, let $\mathfrak{M}'_x = \mathfrak{M}_x\upharpoonright \tau_{n-1}$. By Lemma \ref{natural_number}, there are partitions $(M_1,M_2)$ of $M_a$ and $(N_1,N_2)$ of $M_{h(a)}$ such that \begin{enumerate} \item $a_{m+1},\dots,a_k \in M_1$, \item $h_a|M_1$ is an isomorphism between $\mathfrak{M}'_a | M_1$ and $\mathfrak{M}'_{h(a)} | N_1$, \item $\mathfrak{M}_a | M_2 \in S_{n-1}$ and $\mathfrak{M}_{h(a)} | N_2 \in T_{n-1}$, \item for every embedding $g : \mathfrak{M}'_a|M_2 \rightarrow \mathfrak{M}'_{h(a)}|N_2$ and $g' : \mathfrak{M}'_{h(a)}|N_2 \rightarrow \mathfrak{M}'_a | M_2$ we have that $(h_a | M_1) \cup g$ and $(h_a^{-1} | N_1)\cup g'$ are also embeddings $\mathfrak{M}'_a \rightarrow \mathfrak{M}'_{h(a)}$ and $\mathfrak{M}'_{h(a)} \rightarrow \mathfrak{M}'_a$, respectively.
\end{enumerate} Now let $A_1 = A \setminus N_2$, $A_2 = N_2$, $B_1 = B \setminus M_2$ and $B_2 = M_2$. Then $(A_1,A_2)$ and $(B_1,B_2)$ are partitions of $A$ and $B$, respectively, satisfying all the seven requirements. The case of $n$ being odd is proved in the same way. \end{proof} \begin{lemma}\label{we_have} Let $\mathfrak{A}' \in S_1$ and $\mathfrak{B}' \in T_1$. Then $\mathfrak{A} \equiv_{\emb}^1 \mathfrak{B}$ where $\mathfrak{A} = \mathfrak{A}'\upharpoonright \tau_1$ and $\mathfrak{B} = \mathfrak{B}'\upharpoonright \tau_1$. \end{lemma} \begin{proof} We already proved that there is an embedding of $\mathfrak{A}$ into $\mathfrak{B}$ in Lemma \ref{natural_number}, so we need to find an embedding of $\mathfrak{B}$ into $\mathfrak{A}$. We expand the vocabulary $\tau_1$ by setting $\tau^*_1 = \tau_1 \cup \{c_q,d_q : q \text{ is a rational number } \}$. Let $h : \eta \rightarrow \eta^{\mathfrak{A}}_1$ and $g : \eta \rightarrow \eta^{\mathfrak{B}}_1$ be isomorphisms. Let $a \in \eta$ be such that $C^{\mathfrak{B}}_1 = \{g(a)\}$. Let $\mathfrak{A}^*$, $\mathfrak{B}^*$ be $\tau^*_1$-structures such that $\mathfrak{A}^* \upharpoonright \tau_1 = \mathfrak{A}$ and $\mathfrak{B}^*\upharpoonright \tau_1 = \mathfrak{B}$, $c^{\mathfrak{A}^*}_q = h(q)$, $c^{\mathfrak{B}^*}_q = g(q)$ for all $q\in \eta$, $d_q^{\mathfrak{A}^*}$, $d_q^{\mathfrak{B}^*}$ are such that $\mathfrak{C} \vDash E_1(c_q,d_q)$ for $\mathfrak{C} \in \{\mathfrak{A}^*,\mathfrak{B}^*\}$ and $q \in \eta \setminus \{a \}$, and $d_a^{\mathfrak{B}^*} = g(a)$. We define a function $f : B \rightarrow A$ by setting \begin{align*} &\left. \begin{array}{rl} f(c_q^{\mathfrak{B}^*}) = c_{q+1}^{\mathfrak{A}^*}, \\ f(d_q^{\mathfrak{B}^*}) = d_{q+1}^{\mathfrak{A}^*} \\ \end{array} \right\} \text{ if } q > a, \\ &\left. \begin{array}{rl} f(c_q^{\mathfrak{B}^*}) = c_{q-1}^{\mathfrak{A}^*}, \\ f(d_q^{\mathfrak{B}^*}) = d_{q-1}^{\mathfrak{A}^*} \\ \end{array} \right\} \text{ if } q < a,\\ &\ \ f(c^{\mathfrak{B}^*}_a) = c^{\mathfrak{A}^*}_a, \\ &\ \ f(d^{\mathfrak{B}^*}_a) = d^{\mathfrak{A}^*}_{a+\frac{1}{2}}. \end{align*} It is straightforward to verify that $f$ is indeed an embedding of $\mathfrak{B}$ into $\mathfrak{A}$. \end{proof} \begin{proposition}\label{let} Let $n \geq 1$ be a natural number and $\mathfrak{A}'\in T_n$, $\mathfrak{B}'\in S_n$. Then $\mathfrak{A} \equiv_{\emb}^n \mathfrak{B}$, where $\mathfrak{A} = \mathfrak{A}' \upharpoonright \tau_n$ and $\mathfrak{B} = \mathfrak{B}' \upharpoonright \tau_n$. \end{proposition} \begin{proof} We use induction on $n$. The base case is proved in Lemma \ref{we_have}. Assume that $n>1$ is even and the claim is true for $n-1$. Let Duplicator select embeddings $f : \mathfrak{A} \rightarrow \mathfrak{B}$ as in Lemma \ref{natural_number} and $g : \mathfrak{B} \rightarrow \mathfrak{A}$ as in Lemma \ref{structure}. Suppose first that Spoiler chose the embedding $f$ and elements $a_1,\dots,a_k \in A$. Let $(A_1,A_2)$ and $(B_1,B_2)$ be partitions of $A$ and $B$, respectively, as in the statement of Lemma \ref{natural_number}. Then $(\mathfrak{A},\overline{a}) \equiv^{n}_{\emb} (\mathfrak{B},f\overline{a})$ by fact 5 of Lemma \ref{natural_number} and the induction hypothesis. Now suppose that Spoiler selected the embedding $g$ and elements $b_1,\dots,b_l \in B$. Let $(A^*_1,A^*_2)$ and $(B^*_1,B^*_2)$ be partitions of $A$ and $B$, respectively, as in Lemma \ref{structure}. Then by its fact 6 and the induction hypothesis we have $(\mathfrak{A},g\overline{b}) \equiv^{n-1}_{\emb} (\mathfrak{B},\overline{b})$.
Thus, since also $(\mathfrak{A},\overline{a}) \equiv^{n}_{\emb} (\mathfrak{B},f\overline{a})$, we have $\mathfrak{A} \equiv^n_{\emb} \mathfrak{B}$ if $n$ is even. The case where $n$ is odd is proved in the same way. \end{proof} \begin{proof}[Proof of Theorem \ref{for_each}] Combine Propositions \ref{suppose} and \ref{let}. The existence of $\varphi_n$ having the required form with alternating existential and universal quantifiers follows from the proof of Proposition \ref{suppose}. \end{proof} \begin{remark} In the proof of Theorem \ref{for_each} we used a certain tree construction where new structures were built from the given structures by connecting them in a specific way, which is normal practice in results of this kind. A similar result concerning logics with quantifiers of bounded arity was proved in \cite{Keisler: 2011} by Keisler and Lotfallah. They used a somewhat similar tree construction and the bijective game (see Remark \ref{bijective}) in their proof. Also worth mentioning are the tree-like sums of Makowsky and Shelah in \cite{Makowski: 1979}, based on the ideas of Friedman \cite{Friedman: 1973} and Gregory \cite{Gregory: 1974}. They are used to prove some results concerning, among others, Beth definability in abstract logics. \end{remark} \end{document}
\begin{document} \begin{center} {\Large \bf Role of $PT$-symmetry in understanding Hartman effect } \vspace{1.3cm} {\sf Mohammad Hasan \footnote{e-mail address: \ \ [email protected], \ \ [email protected]}$^{1,3}$, Vibhav Narayan Singh \footnote{e-mail address: [email protected]} Bhabani Prasad Mandal \footnote{e-mail address: \ \ [email protected], \ \ [email protected] }} {\em $^{1}$Indian Space Research Organisation, Bangalore-560094, INDIA \\ $^{2,3}$Department of Physics, Banaras Hindu University, Varanasi-221005, INDIA. \\ } \noindent {\bf Abstract} \end{center} The celebrated Hartman effect, according to which the tunneling time through an opaque barrier is independent of the width of the barrier for a sufficiently thick barrier, is still not well understood, either theoretically or experimentally. In this work we attempt to throw some light on the mystery behind this paradoxical result of tunneling. For this purpose we calculate the tunneling time for a layered non-Hermitian system to examine the effect of $PT$-symmetry on the tunneling time. We explicitly find that for a system respecting $PT$-symmetry, the tunneling time saturates with the thickness of the $PT$-symmetric barrier, thus showing the existence of the Hartman effect. For the non-$PT$-symmetric case, the tunneling time depends upon the thickness of the barrier and the Hartman effect is lost. We further consider the limiting case in which the non-Hermitian system reduces to the real barrier, to show that the Hartman effect from a real barrier is due to the $PT$-symmetry of the corresponding non-Hermitian system. \vspace{1in} \section{Introduction} The study of non-Hermitian systems in quantum mechanics started as a mathematical curiosity. In the year 1998, it was shown that a non-Hermitian system which respects $PT$-symmetry can yield real energy eigenvalues \cite{ben4}. It was also found that a fully consistent quantum theory can be developed for non-Hermitian systems in a modified Hilbert space through the restoration of equivalent Hermiticity and unitary time evolution \cite{mos, benr}. These theoretical works towards the consistency of non-Hermitian quantum mechanics (NHQM) paved the way for NHQM to become a topic of frontier research in different areas over the last two decades \cite{nh1}-\cite{nh7}. Due to the analogy of the Schr\"odinger equation with certain wave equations in optics, the phenomena of NHQM can also be mapped to analogous phenomena in optics. This led to the possibility of experimental observation of the theoretical predictions of NHQM. This has indeed been the case, and some of the predictions of NHQM have been observed in optics \cite{opt1}-\cite{kotto}. These realizations of NHQM phenomena have ignited huge interest in studying the subject both theoretically and experimentally, and the study of non-Hermitian systems in optics has become a constant theme of further research and advancement in the subject. The advancement in NHQM is one of the most recent developments in quantum mechanics. However, one of the earliest studied problems of quantum mechanics, quantum tunneling \cite{nordheim1928, gurney1928, condon, wigner_1955, david_bohm_1951}, suffers from a paradox to this day. How much time a particle takes to tunnel through a classically forbidden potential is still an open problem, both theoretically and experimentally.
In the year 1962, Hartman studied the problem of tunneling time by using the stationary phase method (SPM) for a metal-insulator-metal sandwich and showed that the tunneling time for an opaque barrier is independent of the thickness for a sufficiently thick barrier \cite{hartman_paper}. This is known as the Hartman effect, i.e.\ the saturation of the tunneling time for an opaque barrier with the barrier thickness. Soon, this was also confirmed by an independent study by Fletcher \cite{fletcher}. Due to this paradox, various authors have proposed new definitions of tunneling time to account for the inconsistency (see \cite{hg_winful} and references therein). However, so far no satisfactory definition of tunneling time has been found that agrees with the experimental results. The calculation of the tunneling time by the method of SPM for a multi-barrier real potential shows that the tunneling time is independent of the inter-barrier separation in the limit of large thickness of the barrier \cite{generalized_hartman, esposito_multi_barrier}. This is called the generalized Hartman effect: the tunneling time is also independent of the inter-barrier separation for tunneling through a sufficiently thick opaque multi-barrier system. For critical comments on the generalized Hartman effect, see \cite{questions_ghf1,questions_ghf2,questions_ghf3}. Various attempts have been made to test these theoretical results on the tunneling time. Initial experiments indicated the superluminal nature of the tunneling time and found it to be insensitive to the thickness of the tunneling region \cite{sl_prl, nimtz, ph, ragni, sattari, longhi1, olindo}. This superluminal nature of the tunneling time is not at variance with Special Relativity, and phenomena of this kind have been discussed in a number of papers (see \cite{barbero2000,recami2000} and references therein). The tunneling time was found to be paradoxically short for the case of a double-barrier optical grating \cite{longhi1} and a double-barrier photonic band gap \cite{longhi2}. The reason for the Hartman effect is not clear to the present day. A reshaping of the incident wave as it interacts with the barrier has been proposed as a possible reason for the occurrence of the Hartman effect \cite{reshaping}. Also, the Hartman effect does not occur in space-fractional quantum mechanics \cite{tt_sfqm_1,tt_sfqm_2}. To the best of our knowledge, the method of SPM has always shown the existence of the Hartman effect for a real barrier potential (single barrier or multi-barrier). However, for a complex, non-$PT$-symmetric barrier potential of the form $V_{1}+ i V_{2}$, it has been shown that the Hartman effect does not occur and the tunneling time depends upon the barrier thickness \cite{complex_barrier_tunneling}. Also, it is shown in \cite{hartman_layered} that when the complex potential is in the form of a layered $PT$-symmetric potential, the Hartman effect does occur for single as well as for periodic multi-barrier systems. The results of \cite{complex_barrier_tunneling}, \cite{hartman_layered} and the Hartman effect from a real barrier have motivated us to study the role of $PT$-symmetry in the occurrence of the Hartman effect. We study the Hartman effect from a layered $PT$-symmetric potential and show that the occurrence of the Hartman effect from a real barrier can be understood as a special limiting case of the Hartman effect from a $PT$-symmetric complex system.
We also study the tunneling time from a non-$PT$-symmetric potential at the symmetry breaking threshold and show that the Hartman effect does not occur when $PT$-symmetry is broken; the Hartman effect is restored when $PT$-symmetry is respected. These results give a strong indication that $PT$-symmetry plays an important role in the occurrence of the Hartman effect. We further show explicitly that $PT$-symmetry is crucial even for the real barrier, which is shown to be a special limiting case of a layered $PT$-symmetric complex system. \paragraph{} We organize our paper as follows: In section \ref{spm}, we introduce the reader to the stationary phase method of calculating the tunneling time. In section \ref{hf_pt_discussions} we discuss the Hartman effect from a `unit' $PT$-symmetric system and a layered $PT$-symmetric system made by periodic repetitions of the `unit' $PT$-symmetric system. In subsection \ref{real_barrier_section}, we show that the Hartman effect from a real barrier is a special limiting case of our layered $PT$-symmetric system. In section \ref{non_pt_section}, we calculate the tunneling time from a non-$PT$-symmetric system at the $PT$-symmetry breaking threshold to show that when $PT$-symmetry is broken, the Hartman effect is lost. We discuss the results in section \ref{results_discussions}. Detailed mathematical steps for obtaining the various results are provided in the Appendix.
\section{Tunneling time and Hartman effect} \label{spm} This section briefly introduces the reader to the stationary phase method (SPM) of calculating the tunneling time \cite{dutta_roy_book}. In SPM, the tunneling time is defined as the time difference between the peaks of the incoming and outgoing wave packets as the wave packet traverses the potential barrier. To understand this quantitatively, consider a normalized Gaussian wave packet $G_{k_{0}} (k)$ of mean momentum $\hbar k_{0}$. For $t>0$, the wave packet is given by, \begin{equation} \int G_{k_{0}} (k)e^{i(kx-\frac{Et}{\hbar})}dk. \label{localized_wave_packet} \end{equation} In the above $k=\sqrt{2mE}/\hbar$. The wave packet propagates in the positive $x$-direction and interacts with the potential barrier $V(x)$ ($V(x)=V$ for $0 \leq x \leq b$ and zero elsewhere). The transmitted wave packet is given by, \begin{equation} \int G_{k_{0}} (k) \vert A(k) \vert e^{i(kx-\frac{Et}{\hbar} +\theta(k))}dk , \label{emerged_wave_packet} \end{equation} where $A(k)=\vert A(k) \vert e^{i\theta (k)}$ is the transmission coefficient for the potential barrier $V(x)$. By the method of SPM, the tunneling time $\tau$ is given by \begin{equation} \frac{d}{dk} \left( k b-\frac{E\tau}{\hbar} +\theta(k) \right)=0 . \label{spm_condition} \end{equation} This results in the following expression for the tunneling time, \begin{equation} \tau= \hbar \frac{d \theta(E)}{dE} +\frac{b}{(\frac{\hbar k}{m})} . \label{phase_delay_time} \end{equation} For a square barrier potential $V(x)=V$ of width $b$, Eq. \ref{phase_delay_time} results in the following expression, \begin{equation} \tau= \hbar \frac{d}{dE} \tan^{-1} \left( \frac{k^{2}-q^{2}}{2kq} \tanh{q b}\right) . \label{t_barrier} \end{equation} Here $q= \sqrt{2m(V-E)}/\hbar$. The following limits are apparent from Eq. \ref{t_barrier}: \begin{equation} \lim_{b \to 0} \tau =0, \ \ \lim_{b \to \infty} \tau =\frac{2m}{\hbar qk} . \label{t_barrier_limit} \end{equation} The tunneling time is expected to vanish for $b \rightarrow 0$. However, the result for $b \rightarrow \infty$ is highly unexpected, as the tunneling time saturates to a finite value and is also independent of $b$. This shows that for thick barriers, the tunneling time is independent of the thickness of the barrier. This is the famous Hartman effect. We will use the system of units $2m=1$, $\hbar=1$, $c=1$ throughout the article. In these units the tunneling time from the square barrier is, \begin{equation} \lim_{b\rightarrow \infty} \tau =\frac{1}{qk}. \label{tt_qm} \end{equation}
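As a quick numerical illustration of Eqs. \ref{t_barrier} and \ref{tt_qm} (this sketch is ours and is not part of the original derivation; the values $E=1$ and $V=4$ are arbitrary illustrative choices in the units $2m=\hbar=1$), the SPM tunneling time can be evaluated by differentiating the transmission phase numerically, and its saturation with the barrier width $b$ is immediately visible:
\begin{verbatim}
import numpy as np

# SPM tunneling time of a real square barrier, units 2m = hbar = 1:
# k = sqrt(E), q = sqrt(V - E), tau = d/dE arctan[(k^2-q^2)/(2kq) tanh(q b)].
def spm_time(E, V, b, dE=1.0e-6):
    def theta(energy):
        k, q = np.sqrt(energy), np.sqrt(V - energy)
        return np.arctan((k**2 - q**2) / (2.0 * k * q) * np.tanh(q * b))
    return (theta(E + dE) - theta(E - dE)) / (2.0 * dE)   # central difference in E

E, V = 1.0, 4.0                        # illustrative values with E < V (tunneling)
k, q = np.sqrt(E), np.sqrt(V - E)
for b in (1.0, 2.0, 5.0, 10.0, 50.0):
    print(f"b = {b:5.1f}   tau = {spm_time(E, V, b):.6f}")
print(f"Hartman limit 1/(qk) = {1.0 / (q * k):.6f}")
\end{verbatim}
For $b \geq 5$ the printed values of $\tau$ are essentially indistinguishable from $1/(qk)$ at the displayed precision, which is the content of Eq. \ref{tt_qm}.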
\section{Hartman effect from PT-symmetric barrier} \label{hf_pt_discussions} In this section we show that the controversial Hartman effect exists for barriers arranged in $PT$-symmetric configurations. We first study the simplest $PT$-symmetric system, made of the complex potentials $u+iv$ and $u-iv$, each of thickness $b$, arranged adjacently without an intervening gap. This is shown in Fig. \ref{pt_fig}. We call this the `unit' $PT$-symmetric barrier system. Next we investigate the Hartman effect when this `unit' system repeats periodically to make a layered $PT$-symmetric barrier of finite repetition $N$, as shown in Fig. \ref{layered_pt_fig}. We also present detailed calculations to show that the $N \rightarrow \infty$ limit over a finite length $L$ of our layered $PT$-symmetric system gives the same tunneling time expression as a real rectangular barrier of height $u$ and length $L$. Therefore the Hartman effect from a real barrier can be seen as the Hartman effect of our layered $PT$-symmetric system in a special limiting case. For the purpose of clarity we discuss all the above three cases separately. \subsection{Unit PT-symmetric barrier} \label{hf_simple_pt} \begin{figure} \caption{\it A $PT$-symmetric `unit cell' consisting of a pair of complex conjugate barriers. The $y$-axis represents the complex height of the potential.} \label{pt_fig} \end{figure} In this section we calculate the tunneling time and investigate the Hartman effect for the following simple $PT$-symmetric system (shown in Fig. \ref{pt_fig}) \begin{eqnarray} V(x)&=& u+iv \ \ \ \ \ \mbox{for} \ -b< x < 0 \nonumber \\ V(x)&=& u-iv \ \ \ \ \ \mbox{for} \ 0 < x <b \nonumber \\ V(x)&=& 0 \ \ \ \ \ \mbox{for} \ |x| \geq b. \label{pt_potential} \end{eqnarray} In the above $u,v \in R^{+}$. It will be shown that the $PT$-symmetric potential given by Eq. \ref{pt_potential} displays the Hartman effect. The transmission coefficient ($t$) for this potential can be easily calculated and is given by \begin{equation} t=\frac{e^{i (\theta -2kb)}}{\sqrt{\xi^{2}+\chi^{2}}} , \ \ \ \theta= \tan^{-1} \left ( \frac{\chi}{\xi}\right ), \label{t_pt_system} \end{equation} and, \begin{equation} \xi=\frac{1}{2}(\cos{2\beta} +\cosh{2 \alpha}) + \cos{2\phi}(\cosh^{2}{\alpha} \sin^{2}{\beta} + \cos^{2}{\beta} \sinh^{2}{\alpha}) , \label{xi_exp} \end{equation} \begin{equation} \chi=\frac{1}{2}(U_{+} \sin{\phi} \sin{2\beta} + U_{-} \cos{\phi} \sinh{2\alpha}) . \label{chi_exp} \end{equation} In Eqs. \ref{xi_exp} and \ref{chi_exp}, the quantities $\alpha, \beta, \phi$ and $U_{\pm}$ are given by, \begin{equation} \alpha = b \rho \cos{\phi},\ \ \beta = b \rho \sin{\phi}, \ \ U_{\pm}= \frac{k}{\rho} \pm \frac{\rho}{k}, \label{alpha_beta_u} \end{equation} and \begin{equation} \phi= \frac{1}{2} \tan^{-1} {\left ( \frac{v}{u- k^{2}}\right )}, \ \ \rho= \left[ (u-k^{2})^{2}+v^{2} \right]^{\frac{1}{4}} . \label{rho_phi} \end{equation} The derivation of Eq. \ref{t_pt_system} by the transfer matrix approach is provided in Appendix A (at the end). Using SPM, the tunneling time ($\tau$) is given by \begin{equation} \tau=\frac{1}{2k} \left ( \frac{\xi \chi' -\chi \xi'}{\xi^{2}+ \chi^{2}} \right ). \label{t_unit_pt} \end{equation} We will use `$\prime$' (prime) to denote derivatives with respect to the wave vector $k$. The expressions for $\xi '$ and $\chi'$ are provided below: \begin{equation} \xi'=2 \alpha'\cos^{2}{\phi} \sinh{2 \alpha} -2 \beta' \sin^{2}{\phi} \sin{2 \beta} + \phi' \sin{2 \phi} (\cos{2 \beta}-\cosh{2 \alpha}) . \label{xi_prime} \end{equation} \begin{multline} \chi'= \frac{1}{2}\sin{\phi}( U_{+}' \sin{2 \beta} + 2\beta' U_{+} \cos{2 \beta} - \phi' U_{-} \sinh{2 \alpha}) +\\ \frac{1}{2} \cos{\phi}( U_{-}' \sinh{2 \alpha} +2 \alpha' U_{-} \cosh{2\alpha} +\phi' U_{+} \sin{2 \beta}) . \label{chi_prime} \end{multline} The width dependence of the tunneling time enters through $\alpha$, $\alpha '$, $\beta$ and $\beta'$. Therefore, for the existence of the Hartman effect, $\tau$ must become independent of these four quantities in the limit $b \rightarrow \infty$. It can be shown that \begin{equation} \lim_{b \rightarrow \infty } \xi =\frac{e^{2 \alpha}}{2} \cos^{2}{\phi} , \label{xi_limit} \end{equation} \begin{equation} \lim_{b \rightarrow \infty } \chi =\frac{U_{-}}{4}e^{2 \alpha} \cos{\phi} , \label{chi_limit} \end{equation} \begin{equation} \lim_{b \rightarrow \infty } \xi' = e^{2 \alpha} (\alpha' \cos^{2}{\phi} -\frac{\phi'}{2} \sin{2 \phi} ) , \label{xi_prime_limit} \end{equation} \begin{equation} \lim_{b \rightarrow \infty } \chi' = \frac{e^{2 \alpha}}{4} \left ( \cos{\phi} (U_{-}' +2 \alpha' U_{-}) - U_{-} \phi' \sin{\phi} \right ) . \label{chi_prime_limit} \end{equation} Using the results of Eqs. \ref{xi_limit}-\ref{chi_prime_limit} in Eq. \ref{t_unit_pt}, we find \begin{equation} \lim_{b \rightarrow \infty} \tau = \tau_{\infty} =\frac{U_{-}' \cos{\phi} + \phi' U_{-} \sin{\phi} }{k(4 \cos^{2}{\phi} + U_{-}^{2})} . \label{hartman_unit_pt} \end{equation} In the expression for $\tau_{\infty}$, $b$-dependent terms do not appear. This proves that the $PT$-symmetric system given by Eq. \ref{pt_potential} shows the Hartman effect.
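The saturation of $\tau$ predicted by Eq. \ref{hartman_unit_pt} can also be checked numerically. The following sketch is only our illustration (it is not the calculation of Appendix A): it computes the transmission amplitude of the two complex slabs with a standard plane-wave transfer matrix and applies the SPM definition of the tunneling time, using the arbitrary parameters $u=4$, $v=1$, $E=1$ in units $2m=\hbar=1$:
\begin{verbatim}
import numpy as np

def transmission(E, slabs):
    # slabs = list of (x_left, x_right, V); returns the transmission amplitude t.
    k0 = np.sqrt(E + 0j)
    def D(kj, x):                       # (psi, psi') produced by coefficients (A, B)
        e = np.exp(1j * kj * x)
        return np.array([[e, 1.0 / e], [1j * kj * e, -1j * kj / e]])
    ks = [k0] + [np.sqrt(E - V + 0j) for (_, _, V) in slabs] + [k0]
    xs = [slabs[0][0]] + [s[1] for s in slabs]
    M = np.eye(2, dtype=complex)
    for j, x in enumerate(xs):          # match psi and psi' at every interface
        M = np.linalg.solve(D(ks[j + 1], x), D(ks[j], x)) @ M
    return 1.0 / M[1, 1]                # det M = 1 for equal leads, so t = 1/M_22

def unit_cell_time(E, u, v, b, dE=1.0e-3):
    t = lambda en: transmission(en, [(-b, 0.0, u + 1j * v), (0.0, b, u - 1j * v)])
    dtheta = np.angle(t(E + dE) / t(E - dE)) / (2.0 * dE)   # d(arg t)/dE
    return dtheta + (2.0 * b) / (2.0 * np.sqrt(E))          # add traversal term L/v, L = 2b

E, u, v = 1.0, 4.0, 1.0
for b in (0.5, 1.0, 2.0, 3.0, 5.0):
    print(f"b = {b:4.1f}   tau = {unit_cell_time(E, u, v, b):.5f}")
\end{verbatim}
The printed tunneling time stops changing once $b$ exceeds a few decay lengths $1/(\rho\cos\phi)$, which is the numerical counterpart of Eq. \ref{hartman_unit_pt}.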
\subsection{Layered PT-symmetric barrier} \label{layered_section} \begin{figure} \caption{\it A locally periodic $PT$-symmetric system obtained by periodic repetitions of the `unit' $PT$-symmetric system shown in Fig. \ref{pt_fig}.} \label{layered_pt_fig} \end{figure} Next we calculate the tunneling time from a layered (locally periodic) $PT$-symmetric system obtained by periodic repetitions of the `unit cell' $PT$-symmetric system of Eq. \ref{pt_potential}. The layered $PT$-symmetric system is shown in Fig. \ref{layered_pt_fig}. The net spatial extent of the layered $PT$-symmetric system is $L=2Nb$, where $N$ is the number of repetitions. It is easy to show that the transmission coefficient for such a system is (see Appendix B for the derivation) \begin{equation} t=\frac{e^{-ikL}}{H(k)} , \label{t_simple} \end{equation} where $H(k)$ is given by \begin{equation} H(k)=(\xi-i\chi ) U_{N-1}(\xi)-U_{N-2}(\xi) . \end{equation} The phase of the transmission coefficient is given by, \begin{equation} \Theta = \tan^{-1} ( g \chi ) -kL , \label{phase_n} \end{equation} where, \begin{equation} g=\frac{U_{N-1} (\xi)}{T_{N} (\xi)} . \label{g_eq} \end{equation} From the knowledge of the phase of the transmission coefficient, the tunneling time is calculated to be \begin{equation} \tau_{N}= \frac{1}{2k(1+ g^{2}\chi^{2})} \left [g \chi' + \chi \left ( \frac{N \xi'}{\xi^{2}-1} - N \xi' g^{2}- \frac{g \xi \xi'}{\xi^{2}-1} \right ) \right ] . \label{time_n} \end{equation} To show the Hartman effect, we first evaluate the following limits, \begin{equation} \lim _{b \rightarrow \infty} g \sim \frac{1}{\xi (b \rightarrow \infty)} , \label{g_infinite} \end{equation} \begin{equation} \lim _{b \rightarrow \infty} \frac{\chi}{\xi} = \frac{U_{-}}{2} \sec{\phi}=\eta . \label{chi_xi_ratio_infinite} \end{equation} Making use of the facts $\lim_{b \rightarrow \infty} g \sim \frac{1}{\xi}$ and $\lim_{b \rightarrow \infty} \xi \gg 1$, we can write \begin{equation} \lim _{b \rightarrow \infty} \tau_{N}= \frac{1}{2k} \left ( \frac{1}{1+ \eta^{2}} \right ) \left [ \lim_{b \rightarrow \infty} \left ( \frac{\chi'}{\xi} -\frac{\chi \xi'}{\xi^{2}} \right ) \right ] . \label{time_1} \end{equation} Using the previously derived limits, the term in the square bracket is given by \begin{equation} \lim_{b \rightarrow \infty} \left ( \frac{\chi'}{\xi} -\frac{\chi \xi'}{\xi^{2}} \right ) = \frac{2}{\cos{\phi} } (U_{-}' +\phi' U_{-} \tan{\phi}). \end{equation} Therefore the limiting value of the tunneling time turns out to be, \begin{equation} \lim _{b \rightarrow \infty} \tau_{N}= \lim _{b \rightarrow \infty} \tau = \tau_{\infty}. \label{time_2} \end{equation} Eq. \ref{time_2} proves the Hartman effect for the layered $PT$-symmetric system represented by Fig. \ref{layered_pt_fig}. \subsection{Real barrier} \label{real_barrier_section} In this subsection we show that the well-known Hartman effect from a real barrier is a special limiting case of the $PT$-symmetric system considered in subsection \ref{layered_section} above. We first show that the transmission phase of a rectangular barrier of height $u$ and width $L$ is the limiting case $N \rightarrow \infty$ of our layered $PT$-symmetric system with $b=\frac{L}{2N}$, where $L$ is fixed. To prove this, we first Taylor expand the quantity $g \chi$ in powers of $b$ such that, \begin{equation} g \chi =A_{0} + \sum_{j=1}^{\infty} A_{j} b^{j} . \end{equation} It is found that $A_{0}=0$ and the coefficients of even powers of $b$ are also zero. Therefore \begin{equation} g \chi= A_{1} b + A_{3} b^{3} + A_{5} b^{5} + A_{7} b^{7} + A_{9} b^{9} + ....
\label{g_taylor} \end{equation} The coefficients of the various powers of $b$ are given by, \begin{equation} A_{1}= N \rho (U_{-} \cos^{2}{\phi} + U_{+} \sin^{2}{\phi}) , \label{a1} \end{equation} \begin{equation} A_{3}= - \frac{1}{6} N \rho^{3} \left [ 8 N^{2} (U_{+} + U_{-}) \cos{2 \phi} -(U_{+}-U_{-}) \{ 4 N^{2} -1 + (4 N^{2} +1) \cos{4 \phi} \} \right ] , \label{a3} \end{equation} \begin{multline} A_{5}= \frac{N \rho ^{5}}{360} [ 2 (U_{+} + U_{-}) (96 N^{4} -5 N^{2} -1) - (U_{+} - U_{-}) \cos{2 \phi} (288 N^{4} -25 N^{2} -8) + \\ 2 (U_{+} + U_{-}) \cos{4 \phi} (96 N^{4} + 5 N^{2} +1) - (U_{+} - U_{-}) \cos{6 \phi} (96 N^{4} + 25 N^{2} +8) ] , \label{a5} \end{multline} \begin{multline} A_{7}= \frac{N \rho ^{7}}{15120} [ (U_{+} - U_{-}) (9792 N^{6} -1008 N^{4} -161 N^{2} -34) -\\ 4 (U_{+} + U_{-}) \cos{2 \phi} (4896 N^{6} -168 N^{4} -35 N^{2} -10) + \\ 4 (U_{+} - U_{-}) \cos{4 \phi} (3264 N^{6} -35 N^{2} -16) - \\ 4 (U_{+} + U_{-}) \cos{6 \phi} (1632 N^{6} + 168 N^{4} + 35 N^{2} +10) + \\ (U_{+} - U_{-}) \cos{8 \phi} (3264 N^{6} + 1008 N^{4} + 301 N^{2} +98) ] , \label{a7} \end{multline} \begin{multline} A_{9}= \frac{N \rho ^{9}}{453600} [ 4 (U_{+} + U_{-}) (119040 N^{8} - 6120 N^{6} -1029 N^{4} -215 N^{2} -61) - \\ 2 (U_{+} - U_{-}) \cos{2 \phi} (396800 N^{8} -28560 N^{6} -5502 N^{4} - 1045 N^{2} -268) + \\ 40 (U_{+} + U_{-}) \cos{4 \phi} ( 15872 N^{8} -42 N^{4} -15 N^{2} -5) - \\ (U_{+} - U_{-}) \cos{6 \phi} (396800 N^{8} + 28560 N^{6} + 1722 N^{4} - 655 N^{2} -352) + \\ 4 (U_{+} + U_{-}) \cos{8 \phi} (39680 N^{8} + 6120 N^{6} + 1449 N^{4} + 365 N^{2} + 111) - \\ (U_{+} - U_{-}) \cos{10 \phi} (79360 N^{8} + 28560 N^{6} + 9282 N^{4} + 2745 N^{2} + 888) ]. \label{a9} \end{multline} Similarly, the other coefficients $A_{11}, A_{13}, A_{15}$, etc.\ can also be calculated. Next we take the limit $N \rightarrow \infty$ of these coefficients to show, \begin{equation} \lim_{N \to \infty} A_{1} b= \left ( \frac{k^{2}-q^{2}}{2kq} \right ) (q L) , \label{a1b} \end{equation} \begin{equation} \lim_{N \to \infty} A_{3} b^{3}= - \frac{1}{3} \left ( \frac{k^{2}-q^{2}}{2kq} \right ) (q L)^{3} , \label{a3b} \end{equation} \begin{equation} \lim_{N \to \infty} A_{5} b^{5}= \frac{2}{15} \left ( \frac{k^{2}-q^{2}}{2kq} \right ) ( q L)^{5} , \label{a5b} \end{equation} \begin{equation} \lim_{N \to \infty} A_{7} b^{7}= - \frac{17}{315} \left ( \frac{k^{2}-q^{2}}{2kq} \right ) ( q L)^{7} , \label{a7b} \end{equation} \begin{equation} \lim_{N \to \infty} A_{9} b^{9}= \frac{31}{2835} \left ( \frac{k^{2}-q^{2}}{2kq} \right ) ( q L)^{9}. \label{a9b} \end{equation} In deriving Eqs. \ref{a1b}-\ref{a9b}, we have identified $L=2 Nb$ and expanded $\cos{4\phi}, \cos{6\phi}, \cos{8\phi}$ and $\cos{10\phi}$ in powers of $\cos{2 \phi}= \frac{q^{2}}{\rho^{2}}$. The detailed calculations leading to Eqs. \ref{a1b}-\ref{a9b} are shown in Appendix C. Thus, \begin{equation} \lim_{N \to \infty , b \to 0} g \chi = \left ( \frac{k^{2}-q^{2}}{2kq} \right ) \left [ q L - \frac{1}{3} (q L)^{3} + \frac{2}{15}(q L)^{5} - \frac{17}{315}(q L)^{7} + \frac{31}{2835} (q L)^{9} - .... \right ].
\end{equation}
We identify this series as
\begin{equation}
\lim_{N \to \infty , b \to 0} g \chi = \left ( \frac{k^{2}-q^{2}}{2kq} \right ) \tanh{qL} , \ \ L=2Nb .
\label{gchi_limiting_case}
\end{equation}
Using Eq. \ref{gchi_limiting_case} in Eq. \ref{phase_n} we find
\begin{equation}
\lim_{N \to \infty , b \to 0} \Theta = \tan^{-1} \left ( \frac{k^{2}-q^{2}}{2kq} \tanh{qL} \right ) -k L.
\end{equation}
Thus the tunneling time is
\begin{equation}
\lim_{N \to \infty , b \to 0} \tau_{N} = \frac{d}{dE} \left [ \tan^{-1} \left ( \frac{k^{2}-q^{2}}{2kq} \tanh{qL} \right ) \right ].
\label{taun_limit}
\end{equation}
Eq. \ref{taun_limit} yields the same tunneling time as Eq. \ref{tt_qm}. This result shows that the Hartman effect of a real barrier arises as the limiting case $N \rightarrow \infty$ of our layered $PT$-symmetric system, in which each layer becomes infinitely thin. It can be shown that a real barrier of height $u$ and width $L$ is the limiting case $N \rightarrow \infty$ of our layered $PT$-symmetric system with $b =\frac{L}{2 N}$ (we omit the full calculation; this limit reproduces exactly the reflection and transmission coefficients of the rectangular barrier, and we have also verified this numerically). Here $L$ is the net spatial extent of the layered $PT$-symmetric system.
\section{Non PT-symmetric barrier system: No Hartman effect}
\label{non_pt_section}
In this section we calculate, using the SPM, the tunneling time for the following non-Hermitian system:
\begin{eqnarray}
V(x)&=& u+iv, \ \ \ \ \ \mbox{for} \ -b< x < 0 , \nonumber \\
V(x)&=& u-i \varepsilon v , \ \ \ \ \ \mbox{for} \ 0 < x < b , \nonumber \\
V(x)&=& 0 , \ \ \ \ \ \mbox{for} \ |x| \geq b ,
\label{non_pt_potential}
\end{eqnarray}
where $\varepsilon$ is real. The system is shown graphically in Fig \ref{pt_fig_eps}. When $\varepsilon =1$, the non-Hermitian system represented by Eq. \ref{non_pt_potential} is $PT$-symmetric and identical to the system represented by Eq. \ref{pt_potential}. The transmission coefficient of this system can be calculated and is given by
\begin{equation}
t=\frac{e^{-2ikb}}{Q} ,
\label{t_eps}
\end{equation}
where
\begin{equation}
Q=P^{-}_{1} P^{-}_{2} - S_{1}S_{2},
\end{equation}
and the various symbols are given by
\begin{equation}
P^{\pm}_{1,2}= 2 \cos{k_{1,2}b} \pm i \left ( \mu_{1,2} + \frac{1}{\mu_{1,2}} \right ) \sin{k_{1,2}b} ,
\end{equation}
\begin{equation}
S_{1,2}= i \left ( \mu_{1,2} - \frac{1}{\mu_{1,2}} \right ) \sin{k_{1,2}b}.
\end{equation}
Here,
\begin{equation}
\mu_{1,2} = \frac{k_{1,2}}{k} ,
\end{equation}
and
\begin{equation}
k_{1,2} =\sqrt{E- V_{1,2}}, \ V_{1}= u+iv , \ V_{2}= u-i \varepsilon v .
\end{equation}
\begin{figure}
\caption{\it A non-Hermitian `unit cell' consisting of a pair of complex barriers. The $y$-axis represents the complex height of the potential.
Note that the system becomes $PT$-symmetry when $\vertarepsilon =1$ and in this case identical to the `unit cell' shown in the Fig \rangleef{pt_fig} \end{figure} We express $k_{1} = \rangleho_{1} e^{i \phi_{1}}$, $k_{2} = \rangleho_{2} e^{-i \phi_{2}}$ and define the following quantities, \begin{equation} H_{\pm} = \frac{}{}ac{\rangleho_{1}\rangleho_{2}}{k^{2}} \pm \frac{}{}ac{k^{2}}{ \rangleho_{1}\rangleho_{2}}, \ \ \ G_{\pm} = \frac{}{}ac{\rangleho_{1}}{\rangleho_{2}} \pm \frac{}{}ac{\rangleho_{2}}{\rangleho_{1}} . \end{equation} \begin{equation} J_{1,2}^{\pm} = \frac{}{}ac{\rangleho_{1,2}}{k^{2}} \pm \frac{}{}ac{k^{2}}{\rangleho_{1,2}} . \end{equation} \begin{equation} \rangleho_{1} = [(u-k^{2})^{2}+ v^{2}]^{\frac{}{}ac{1}{4}}, \ \ \ \rangleho_{2} = [(u-k^{2})^{2}+ \epsilon^{2}v^{2}]^{\frac{}{}ac{1}{4}}. \end{equation} \begin{equation} \phi_{1} = \frac{}{}ac{1}{2} \tan^{-1} \langleeft ( \frac{}{}ac{v}{u-k^{2}} \rangleight ) , \ \ \ \phi_{2} = \frac{}{}ac{1}{2} \tan^{-1} \langleeft ( \frac{}{}ac{ \epsilon v}{u-k^{2}} \rangleight ) . \end{equation} Through the use of the above quantities, we separate $Q$ in real and imaginary parts (to obtain the phase of transmission coefficient given by Eq. \rangleef{t_eps}) \begin{equation} Q= (A_{1} - A_{2}) + i (B_{1} - B_{2}). \end{equation} Where, \begin{multline} A_{1} =4 z_{1} + 2 (x_{2} J_{2}^{+} \cos{\phi_{2}} -x_{1} J_{2}^{-} \sin{\phi_{2}} ) + 2 (w_{1} J_{1}^{-} \sin{\phi_{1}} + w_{2} J_{1}^{+} \cos{\phi_{1}} ) - \\ y_{1}(H_{+} \cos{(\phi_{1} -\phi_{2} )} + G_{+} \cos{(\phi_{1} +\phi_{2} )} ) + y_{2}(H_{-} \sin{(\phi_{1} -\phi_{2} )} + G_{-} \sin{(\phi_{1} +\phi_{2} )} ) \end{multline} \begin{multline} B_{1} =4 z_{2} - 2 (x_{1} J_{2}^{+} \cos{\phi_{2}} +x_{2} J_{2}^{-} \sin{\phi_{2}} ) - 2 (w_{1} J_{1}^{+} \cos{\phi_{1}} - w_{2} J_{1}^{-} \sin{\phi_{1}} ) - \\ y_{1}(H_{-} \sin{(\phi_{1} -\phi_{2} )} + G_{-} \sin{(\phi_{1} +\phi_{2} )} ) - y_{2}(H_{+} \cos{(\phi_{1} -\phi_{2} )} + G_{+} \cos{(\phi_{1} +\phi_{2} )} ) \end{multline} \begin{equation} A_{2} = y_{2}(H_{-} \sin{(\phi_{1} -\phi_{2} )} - G_{-} \sin{(\phi_{1} +\phi_{2} )} ) - y_{1}(H_{+} \cos{(\phi_{1} -\phi_{2} )} - G_{+} \cos{(\phi_{1} +\phi_{2} )} ) \end{equation} \begin{equation} B_{2} = -y_{2}(H_{+} \cos{(\phi_{1} -\phi_{2} )} - G_{+} \cos{(\phi_{1} +\phi_{2} )} ) - y_{1}(H_{-} \sin{(\phi_{1} -\phi_{2} )} - G_{-} \sin{(\phi_{1} +\phi_{2} )} ) \end{equation} In the above, the quantities $w_{1,2}, x_{1,2}, y_{1,2}, z_{1,2}$ are due to, \begin{equation} \sin{k_{1}b} \cos{k_{2}b} = w_{1}+ i w_{2} , \ \ \cos{k_{1}b} \sin{k_{2}b} = x_{1}+ i x_{2} , \end{equation} \begin{equation} \sin{k_{1}b} \sin{k_{2}b} = y_{1}+ i y_{2} , \ \ \cos{k_{1}b} \cos{k_{2}b} = z_{1}+ i z_{2} . 
\end{equation} The expressions of $w_{1,2}, x_{1,2}, y_{1,2}, z_{1,2}$ are given below, \begin{eqnarray} w_{1} & =& \cos{\alpha_{22}} \cosh{\beta_{11}} \cosh{\beta_{22}}\sin{\alpha_{11}} - \cos{\alpha_{11}} \sin{\alpha_{22}} \sinh{\beta_{11}} \sinh{\beta_{22}} , \\ w_{2} & =& \cos{\alpha_{11}} \cos{\alpha_{22}} \cosh{\beta_{22}} \sinh{\beta_{11}} + \cosh{\beta_{11}} \sin{\alpha_{11}} \sin{\alpha_{22}} \sinh{\beta_{22}} , \\ x_{1} & =& \cos{\alpha_{11}} \cosh{\beta_{11}} \cosh{\beta_{22}}\sin{\alpha_{22}} - \cos{\alpha_{22}} \sin{\alpha_{11}} \sinh{\beta_{11}} \sinh{\beta_{22}} ,\\ x_{2} & =& - \cosh{\beta_{22}} \sin{\alpha_{11}} \sin{\alpha_{22}} \sinh{\beta_{11}} - \cos{\alpha_{11}} \cos{\alpha_{22}} \cosh{\beta_{11}} \sinh{\beta_{22}} , \\ y_{1} & =& \cosh{\beta_{11}} \cosh{\beta_{22}}\sin{\alpha_{11}} \sin{\alpha_{22}} + \cos{\alpha_{11}} \cos{\alpha_{22}} \sinh{\beta_{11}} \sinh{\beta_{22}} ,\\ y_{2} & =& \cos{\alpha_{11}} \cosh{\beta_{22}} \sin{\alpha_{22}} \sinh{\beta_{11}} - \cos{\alpha_{22}} \cosh{\beta_{11}} \sin{\alpha_{11}} \sinh{\beta_{22}} , \\ z_{1} &= &\cos{\alpha_{11}} \cos{\alpha_{22}} \cosh{\beta_{11}} \cosh{\beta_{22}} + \sin{\alpha_{11}} \sin{\alpha_{22}} \sinh{\beta_{11}} \sinh{\beta_{22}} ,\\ z_{2} &=& -\cos{\alpha_{22}} \cosh{\beta_{22}} \sin{\alpha_{11}} \sinh{\beta_{11}} + \cos{\alpha_{11}} \cosh{\beta_{11}} \sin{\alpha_{22}} \sinh{\beta_{22}} . \end{eqnarray} In the above \begin{equation} \alpha_{ij}=b \rangleho_{i} \cos{\phi_{j}}, \ \ \ \beta_{ij}=b \rangleho_{i} \sin{\phi_{j}} . \end{equation} Now, the phase of transmission coefficient can be found as, \begin{equation} \theta = \Phi_{\vertarepsilon } -2kb. \end{equation} Where we have , \begin{equation} \Phi_{\vertarepsilon } = \tan^{-1}{\langleeft (\frac{}{}ac{B_{2} - B_{1}}{ A_{1} -A_{2}} \rangleight )} . \end{equation} Thus the tunneling time is , \begin{equation} \tau_{\vertarepsilon} = \frac{}{}ac{d}{dE} (\Phi_{\vertarepsilon } -2kb) + \frac{}{}ac{2b}{2k} . \end{equation} The last term of R.H.S. in the above equation is due to the free propagation time of traversing the length $2b$. The net tunneling time can be written as \begin{equation} \tau_{\vertarepsilon} = \frac{}{}ac{d \Phi_{\vertarepsilon}}{dE} =\frac{}{}ac{1}{2k} \frac{}{}ac{d \Phi_{\vertarepsilon}}{dk} . \end{equation} In order to analyze the effect of $PT-$ symmetry over Hartman effect, we Taylor expand $\tau_{\vertarepsilon }$ near $\vertarepsilon \sim 1$ to first order as follows , \begin{equation} \tau_{\vertarepsilon} = \tau_{\vertarepsilon} (\vertarepsilon =1) + \langleeft (\frac{}{}ac{d \tau_{\vertarepsilon}}{d \vertarepsilon} \rangleight )_{\vertarepsilon =1} (\vertarepsilon -1) . \langleabel{tau_eps_taylor_expand} \end{equation} We take the limit $\langleim_{ b \rangleightarrow \infty} $ of Eq. \rangleef{tau_eps_taylor_expand} to study Hartman effect near the symmetry breaking threshold $\vertarepsilon =1$. Taking the limit of Eq. \rangleef{tau_eps_taylor_expand}, \begin{equation} \langleim_{b \to \infty} \tau_{\vertarepsilon} = \langleim_{b \to \infty} \tau_{\vertarepsilon} ( \vertarepsilon =1) + \langleim_{b \to \infty} \langleeft (\frac{}{}ac{d \tau_{\vertarepsilon}}{d \vertarepsilon} \rangleight )_{\vertarepsilon =1} (\vertarepsilon -1) . \end{equation} The first term of right hand side is $\tau_{\infty}$ and is independent of $b$ . 
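Before taking this limit term by term, the phase $\Phi_{\varepsilon}$ and the net tunneling time $\tau_{\varepsilon}$ can be cross-checked numerically. The following minimal sketch is ours and is not part of the original derivation: it builds the transfer matrix of the two complex slabs directly from the wavefunction matching conditions, in units where $k=\sqrt{E}$ and $k_{1,2}=\sqrt{E-V_{1,2}}$, and evaluates $\tau = \frac{d}{dE}\left(\arg t + k\,L\right)$ (here $L=2b$ is the total width) by a finite difference; the barrier parameters in the example are illustrative only.
\begin{verbatim}
import numpy as np

def transfer_matrix(E, potentials, widths):
    # Total transfer matrix for consecutive constant-potential slabs.
    # Convention: psi = A e^{ikx} + B e^{-ikx} in each region; M maps
    # (A, B) on the far left to (A, B) on the far right, det M = 1.
    k = np.sqrt(E + 0j)
    def W(kr, x):   # maps (A, B) -> (psi, psi') at position x
        return np.array([[np.exp(1j*kr*x),          np.exp(-1j*kr*x)],
                         [1j*kr*np.exp(1j*kr*x), -1j*kr*np.exp(-1j*kr*x)]])
    ks = [k] + [np.sqrt(E - V + 0j) for V in potentials] + [k]
    xs = np.concatenate(([0.0], np.cumsum(widths)))   # interface positions
    M = np.eye(2, dtype=complex)
    for i, x in enumerate(xs):   # cross each interface from left to right
        M = np.linalg.inv(W(ks[i+1], x)) @ W(ks[i], x) @ M
    return M

def net_tunneling_time(E, potentials, widths, dE=1e-6):
    # Stationary-phase time tau = d(arg t + k L)/dE; phase unwrapping
    # near the branch cut of arg is ignored in this sketch.
    def phi(En):
        t = 1.0 / transfer_matrix(En, potentials, widths)[1, 1]
        return np.angle(t) + np.sqrt(En) * sum(widths)
    return (phi(E + dE) - phi(E - dE)) / (2 * dE)

# Illustrative numbers only: barriers u+iv and u-i*eps*v, each of width b
u, v, b, E, eps = 5.0, 1.0, 2.0, 2.0, 1.2
print(net_tunneling_time(E, [u + 1j*v, u - 1j*eps*v], [b, b]))
\end{verbatim}
Repeating the last call for increasing $b$ makes the $b$-dependence of the time (or its absence at $\varepsilon=1$) visible directly; we now return to the analytic limit.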
Thus , \begin{equation} \langleim_{b \to \infty} \tau_{\vertarepsilon} = \tau_{\infty} + \langleim_{b \to \infty} \langleeft (\frac{}{}ac{d \tau_{ \vertarepsilon}}{d \vertarepsilon} \rangleight )_{\vertarepsilon =1} (\vertarepsilon -1) . \end{equation} Therefore to find whether Hartman effect exist or not when $PT-$ symmetry is broken, we investigate the second term of R.H.S about its dependency on the thickness $b$ in the limit $b \rangleightarrow \infty$. We first evaluate the following derivative in the limit $b \rangleightarrow \infty$, \begin{equation} \langleim_{b \to \infty} \langleeft (\frac{}{}ac{d \tau_{\vertarepsilon}}{ d\vertarepsilon} \rangleight )= \frac{}{}ac{1}{2k} \langleim_{b \to \infty} \langleeft ( \frac{}{}ac{d}{d \vertarepsilon } (\frac{}{}ac{d \Phi_{\vertarepsilon }}{dk}) \rangleight ) \end{equation} As $\vertarepsilon $ , $k$ and $b$ are independent quantities, we can take the limit inside the differential sign. Thus we write, \begin{equation} \langleim_{b \to \infty} \langleeft (\frac{}{}ac{d \tau_{\vertarepsilon}}{ d\vertarepsilon} \rangleight )= \frac{}{}ac{1}{2k} \langleeft [ \frac{}{}ac{d}{d \vertarepsilon } \langleeft (\frac{}{}ac{d }{dk} (\langleim_{b \to \infty} \Phi_{\vertarepsilon } ) \rangleight ) \rangleight ] . \langleabel{tau_ep_lim} \end{equation} In the next we evaluate $\langleim_{b \to \infty} \Phi_{\vertarepsilon }$. For this we evaluate the limiting values of the following quantities, \begin{eqnarray} \langleim_{b \to \infty} z_{1} & = & \frac{}{}ac{1}{4} e^{\beta_{11} +\beta_{22}} \cos{(\alpha_{11} -\alpha_{22})} ,\\ \langleim_{b \to \infty} z_{2} & = & \frac{}{}ac{1}{4} e^{\beta_{11} +\beta_{22}} \sin{(\alpha_{22} -\alpha_{11})} ,\\ \langleim_{b \to \infty} y_{1} & = & \frac{}{}ac{1}{4} e^{\beta_{11} +\beta_{22}} \cos{(\alpha_{11} -\alpha_{22})} ,\\ \langleim_{b \to \infty} y_{2} & = & \frac{}{}ac{1}{4} e^{\beta_{11} +\beta_{22}} \sin{(\alpha_{22} -\alpha_{11})} ,\\ \langleim_{b \to \infty} x_{1} & = & \frac{}{}ac{1}{4} e^{\beta_{11} +\beta_{22}} \sin{(\alpha_{22} -\alpha_{11})} ,\\ \langleim_{b \to \infty} x_{2} & = & -\frac{}{}ac{1}{4} e^{\beta_{11} +\beta_{22}} \cos{(\alpha_{11} -\alpha_{22})} ,\\ \langleim_{b \to \infty} w_{1} & = & \frac{}{}ac{1}{4} e^{\beta_{11} +\beta_{22}} \sin{(\alpha_{11} -\alpha_{22})} ,\\ \langleim_{b \to \infty} w_{2} & = & \frac{}{}ac{1}{4} e^{\beta_{11} +\beta_{22}} \cos{(\alpha_{11} -\alpha_{22})} . \end{eqnarray} It is observe that, \begin{eqnarray} \langleim_{b \to \infty} z_{1} & = & \langleim_{b \to \infty} y_{1} , \\ \langleim_{b \to \infty} z_{2} & = & \langleim_{b \to \infty} y_{2} , \\ \langleim_{b \to \infty} x_{1} & = & \langleim_{b \to \infty} y_{2} , \\ \langleim_{b \to \infty} x_{2} & = & -\langleim_{b \to \infty} y_{1} , \\ \langleim_{b \to \infty} w_{1} & = & -\langleim_{b \to \infty} y_{2} , \\ \langleim_{b \to \infty} w_{2} & = & \langleim_{b \to \infty} y_{1} . \end{eqnarray} We define, \begin{equation} Y_{1} = \langleim_{b \to \infty} y_{1} \ , Y_{2} = \langleim_{b \to \infty} y_{2} , \end{equation} so that, \begin{equation} Y_{2} = -Y_{1} \tan{(\alpha_{11} -\alpha_{22})} . \end{equation} With these results we simplify the expression of $\Phi_{\vertarepsilon }$ in the limit $b \rangleightarrow \infty$ to obtain, \begin{equation} \langleim_{b \to \infty} \Phi_{\vertarepsilon } = \tan^{-1} \langleeft ( \frac{}{}ac{Q_{1} \tan{\zeta -Q_{2}}}{ Q_{1} + Q_{2} \tan{\zeta}} \rangleight ). 
\end{equation}
where $Q_{1}$ and $Q_{2}$ are given by
\begin{equation}
Q_{1} = 2 + J_{1}^{+} \cos{\phi_{1}} - J_{2}^{+} \cos{\phi_{2}} - G_{+} \cos{(\phi_{1}+\phi_{2})},
\end{equation}
\begin{equation}
Q_{2} = J_{1}^{-} \sin{\phi_{1}} + J_{2}^{-} \sin{\phi_{2}} - G_{-} \sin{(\phi_{1}+\phi_{2})}.
\end{equation}
With later calculations in mind, we define the quantity $P$ as
\begin{equation}
P= \frac{Q_{1} \tan{\zeta} -Q_{2}}{ Q_{1} + Q_{2} \tan{\zeta}} ,
\label{p_exp}
\end{equation}
so that
\begin{equation}
\lim_{b \to \infty} \Phi_{\varepsilon } = \tan^{-1} P.
\label{phi_ep_lim}
\end{equation}
Using Eq. \ref{phi_ep_lim} in Eq. \ref{tau_ep_lim}, we find the following expression at $\varepsilon =1$:
\begin{equation}
\left [ \lim_{b \to \infty} \left (\frac{d \tau_{\varepsilon}}{ d\varepsilon} \right ) \right ]_{\varepsilon =1}= \frac{1}{2k} \left [ \frac{1}{(1+ P^{2})} \frac{d^{2}P}{d \varepsilon dk} - \frac{2P}{(1+ P^{2})^{2}} \frac{dP}{dk} \frac{dP}{d \varepsilon}\right ]_{\varepsilon =1} .
\label{tau_eps1}
\end{equation}
Evaluating the right-hand side of Eq. \ref{tau_eps1} is a lengthy calculation; the details are provided in Appendix-D. The term in square brackets on the right-hand side is given by
\begin{equation}
\left [ \frac{1}{(1+ P^{2})} \frac{d^{2}P}{d \varepsilon dk} - \frac{2P}{(1+ P^{2})^{2}} \frac{dP}{dk} \frac{dP}{d \varepsilon}\right ]_{\varepsilon =1} = K_{0} + K_{1} b.
\end{equation}
The expressions for $K_{0}$ and $K_{1}$ are provided in Appendix-D; both are independent of $b$. The net tunneling time in the vicinity of $\varepsilon \sim 1$ for large thickness $2b$ is therefore
\begin{equation}
\lim_{b \to \infty} \tau_{\varepsilon} = \left [\tau_{\infty} + \frac{K_{0}}{2k}(\varepsilon -1) \right ] + \frac{K_{1}}{2k}(\varepsilon -1)b.
\label{tau_eps2}
\end{equation}
It is clear from Eq. \ref{tau_eps2} that the tunneling time depends on the thickness when $\varepsilon \neq 1$, i.e. the Hartman effect is lost when $PT$-symmetry is broken, and that it is restored when $\varepsilon = 1$, i.e. when the system recovers $PT$-symmetry.
\section{Conclusions and Discussions}
\label{results_discussions}
We have investigated the role of $PT$-symmetry in the occurrence of the Hartman effect. We have considered a `unit cell' $PT$-symmetric potential made of two potentials of height $u+iv$ and $u-iv$, with no inter-barrier separation and each of equal width $b$. We found that when $b \rightarrow \infty$, the tunneling time saturates and becomes independent of $b$; thus the Hartman effect exists in this $PT$-symmetric potential. Further, it was found that a layered $PT$-symmetric potential made of an arbitrary number $N$ of repetitions of this `unit cell' also shows the Hartman effect. We have analytically investigated the case of infinitely many repetitions over a finite spatial length $L$ and found that the $N \rightarrow \infty$ limit results in the same analytical expression for the tunneling time as that of a rectangular barrier of height $u$ and width $L$. This result shows that the Hartman effect of a real barrier can be viewed as arising from the Hartman effect of our layered $PT$-symmetric system.
Also, the real rectangular barrier of height $u$ and $L$ is the limiting case $N \rangleightarrow \infty$ of our layered $PT$-symmetric system over fixed spatial length $L$. To study the occurrence of Hartman effect at symmetry breaking threshold, we consider the tunneling time through a non-Hermitian potential made by two potentials of height $u+iv$ and $u- i\vertarepsilon v$ each of equal width `$b$'. Expression of tunneling time is obtained analytically at the symmetry breaking threshold $\vertarepsilon \sim 1$ . It is found that when $ \vertarepsilon \neq 1 $, i.e. when $PT$-symmetry is broken, the Hartman effect is lost from the system. However, when $ \vertarepsilon = 1 $ i.e. when $PT$-symmetry is respected, the Hartman effect is restored. This result along with the result of our layered $PT$-symmetric system and its limiting case $N \rangleightarrow \infty$ for fixed $L$ indicates that $PT$-symmetry could be playing an important role for the occurrence of Hartman effect. {\it \bf{Acknowledgements}}: \\ MH acknowledges supports from SSPO for the encouragement of research activities. BPM acknowledges the support from MATRIX project (Grant No. MTR/2018/000611), SERB, DST Govt. of India. . \begin{center} {\langlearge \bf Appendix - A : Derivation of transmission coefficient from unit PT-symmetric barrier } \end{center} The transfer matrices for the two barriers `$1$' and `$2$' as labeled in the Fig \rangleef{pt_fig} are given by \begin{equation} M_{1,2}(k)= \frac{}{}ac{1}{2 }\begin{pmatrix} e^{-ikb} P_{+}^{1,2} & e^{-ikb(1+2j)} S^{1,2} \\ -e^{ikb(1+2j)} S^{1,2} & e^{ikb} P_{-}^{1,2} \end{pmatrix} . \end{equation} In the above matrix, $j=0$ for barrier-$1$ and $j=1$ for barrier-$2$. Various symbols are given below, \begin{equation} P_{\pm}^{1,2}=2 \cos{k_{1,2} b} \pm i \langleeft(\mu_{1,2} +\frac{}{}ac{1}{\mu_{1,2}} \rangleight) \sin{k_{1,2} b} , \langleabel{p_expression} \end{equation} \begin{equation} S^{1,2}= i \langleeft(\mu_{1,2} -\frac{}{}ac{1}{\mu_{1,2}} \rangleight) \sin{k_{1,2} b} , \langleabel{s_expression} \end{equation} \begin{equation} \mu_{1,2}=\frac{}{}ac{k_{1,2}}{k}, \ \ k_{1,2}=\sqrt{k^{2}-V_{1,2}} . \langleabel{u12} \end{equation} For the potential represented by Eq. \rangleef{pt_potential} (or by Fig \rangleef{pt_fig}) $V_{1} = u+ iv$ and $V_{2} =u- iv$. From the composition properties of the transfer matrix, we can find the net transfer matrix $M$, of our $PT$-symmetric `unit cell' as $M(k)=M_{2}(k). M_{1}(k)$. Therefore, \begin{equation} M(k)= \frac{}{}ac{1}{4 }\begin{pmatrix} e^{-2ikb} (P_{+}^{1} P_{+}^{2}-S^{1}S^{2}) & e^{-2ikb} (P_{+}^{2} S^{1}+P_{-}^{1} S^{2}) \\ -e^{2ikb} (P_{-}^{2} S^{1}+P_{+}^{1} S^{2}) & e^{2ikb} (P_{-}^{1} P_{-}^{2}-S^{1}S^{2}) \end{pmatrix} . \langleabel{pt_transfer} \end{equation} Now the transmission coefficient (inverse of the $M_{22}$ element ) can be expressed as, \begin{equation} t= \frac{}{}ac{e^{-2ikb}}{(P_{-}^{1} P_{-}^{2}-S^{1}S^{2})} . \langleabel{t_unitpt_appendix} \end{equation} We separate the denominator in real and imaginary parts. To do this we first express $k_{1} =\sqrt{k^{2}- (u+iv)} = \rangleho e^{i \phi}$ and $k_{2} =\sqrt{k^{2}- (u-iv)} = \rangleho e^{-i \phi}$ where, \begin{equation} \rangleho =[ (u-k^{2})^{2}+ v^{2}]^{\frac{}{}ac{1}{4}} , \ \ \phi= \frac{}{}ac{1}{2} \tan^{-1} \langleeft ( \frac{}{}ac{v}{u -k^{2}}\rangleight ). \end{equation} Upon substituting $k_{1,2}$ expressions, the denominator of Eq. 
\rangleef{t_unitpt_appendix} is simplified to \begin{equation} P_{-}^{1} P_{-}^{2}-S^{1}S^{2} = \xi -i \chi = (\sqrt{\xi^{2} + \chi^{2}}) e^{-i \theta}, \langleabel{d_simpl_appendix} \end{equation} where, \begin{equation} \theta = \tan^{-1}{\langleeft ( \frac{}{}ac{\chi}{\xi}\rangleight )} \end{equation} $\xi $ and $\chi $ are given by Eq. \rangleef{xi_exp} and \rangleef{chi_exp} respectively. Substitution of Eq. \rangleef{d_simpl_appendix} in Eq. \rangleef{t_unitpt_appendix} leads to \begin{equation} t= \frac{}{}ac{e^{i (\theta -2kb})}{\sqrt{\xi^{2} + \chi^{2}}} . \end{equation} This is Eq. \rangleef{t_pt_system}. \\ \begin{center} {\langlearge \bf Appendix - B : Derivation of the transmission coefficient from finite layered PT-symmetric barrier } \end{center} If the transfer matrix $M$, \begin{equation} M(k)= \begin{pmatrix} M_{11}(k) & M_{12}(k) \\ M_{21}(k) & M_{22}(k) \end{pmatrix} , \end{equation} of a `unit cell' potential is known such that, \begin{equation} \begin{pmatrix} A_{+}(k) \\ B_{+}(k) \end{pmatrix}= M(k) \begin{pmatrix} A_{-}(k) \\ B_{-}(k) \end{pmatrix} . \langleabel{t_matrix} \end{equation} Where coefficients of the asymptotic solution of the scattering wave to the right of the potential are $A_{+}, B_{+}$ and to the left of the potential are $A_{-}, B_{-}$ . Then the transmission coefficient (for incidence from left) of a periodic system made by $n$ repetitions of the `unit cell' is given by \begin{equation} t_{n}= \frac{}{}ac{e^{-i k n s}}{[M_{22}(k) e^{-iks} U_{n-1}(\Omega ) - U_{n-2} ( \Omega )] }. \langleabel{tn_eq} \end{equation} Where, \begin{equation} \Omega= \frac{}{}ac{1}{2} (M_{11}e^{iks} +M_{22} e^{-iks}), \langleabel{omega_expression} \end{equation} with $s=w+g$. Here $w$ is width of the `unit cell' potential and $g$ is the gap between consecutive `unit cell' potentials . For our present problem (section \rangleef{layered_section}), $w=2b$ and $g=0$, thus $s=2b$. The procedure to derive Eq. \rangleef{tn_eq} is outlined in \cite{griffith_periodic} . $M_{11}$ and $M_{22}$ elements of our `unit cell' potential are given in Eq. \rangleef{pt_transfer}. Using Eq. \rangleef{d_simpl_appendix}, $M_{22}$ element can be written as \begin{equation} M_{22}= (\xi -i\chi) e^{2ikb}. \langleabel{m22_simp} \end{equation} Similarly, \begin{equation} M_{11}= (\xi +i\chi) e^{-2ikb}. \langleabel{m11_simp} \end{equation} To arrive at Eq. \rangleef{m11_simp}, we have separated term $P_{+}^{1} P_{+}^{2}-S^{1}S^{2}$ in real and imaginary parts. The expression for $\xi$ and $\chi$ are given by Eq. \rangleef{xi_exp} and Eq. \rangleef{chi_exp} respectively. From simplified expressions of $M_{11}$ and $M_{22}$ , we observe $M_{22}=M_{11}^{*}$. This shows that the argument, $\Omega$ of Chebyshev polynomial is real. We substitute Eq. \rangleef{m22_simp} and Eq. \rangleef{m11_simp} in Eq. \rangleef{omega_expression} to obtain $\Omega=\xi$. Identifying $ns=2Nb=L$, where $L$ is the net spatial extent of our layered $PT$-symmetric system, the final expression of transmission coefficient $t_{n}=t$ is given by \begin{equation} t=\frac{}{}ac{e^{-ikL}}{H(k)} \langleabel{t_exp} \end{equation} where $H(k)=(\xi-i\chi ) U_{N-1}(\Omega)-U_{N-2}(\Omega)$ (Eq. \rangleef{t_simple}). \\ \begin{center} {\langlearge \bf Appendix - C: Limiting values of the terms of series expansion of $ g \chi$ } \end{center} The expressions for $A_{1}$ is , \begin{equation} A_{1}= N \rangleho (U_{-} \cos^{2}{\phi} + U_{+} \sin^{2}{\phi}) . 
\end{equation} Thus, \begin{equation} A_{1} b= \frac{}{}ac{L}{2} \rangleho (U_{-} \cos^{2}{\phi} + U_{+} \sin^{2}{\phi}) . \end{equation} Where we have used $Nb = L/2$. Upon substituting the expressions for $U_{+}$ and $U_{-}$ and using trigonometric identity we arrive at, \begin{equation} A_{1} b= \frac{}{}ac{L}{2} (k -\frac{}{}ac{\rangleho^{2}}{k} \cos{2 \phi} ) . \end{equation} Further substituting $\cos{2 \phi}= \frac{}{}ac{u-k^2}{\rangleho^2}$ in the above, we find \begin{equation} A_{1} b= \frac{}{}ac{k^{2} - q^{2}}{2 k q} (qL) , \end{equation} where $q= \sqrt{u-k^{2}}$. \paragraph{Evaluation of $\langleim_{N \to \infty} A_{3} b^{3} $ :} From Eq. \rangleef{a3} we can write, \begin{equation} A_{3} b^{3}= - \frac{}{}ac{1}{6} N b^{3} \rangleho^{3} \langleeft [ 8 N^{2} (U_{+} + U_{-}) \cos{2 \phi} -(U_{+}-U_{-}) \{ 4 N^{2} -1 + (4 N^{2} +1) \cos{4 \phi} \} \rangleight ] . \end{equation} Taking $N^{2}$ out from the parenthesis, the above equation can be written as , \begin{equation} A_{3} b^{3}= - \frac{}{}ac{1}{6} N^{3} b^{3} \rangleho^{3} \langleeft [ 8 (U_{+} + U_{-}) \cos{2 \phi} -(U_{+}-U_{-}) \{ 4 -\frac{}{}ac{1}{N^{2}} + (4 +\frac{}{}ac{1}{N^{2}}) \cos{4 \phi} \} \rangleight ]. \end{equation} Taking limit $N \rangleightarrow \infty$ of the above equation, we get \begin{equation} \langleim_{N \to \infty} A_{3} b^{3}= - \frac{}{}ac{1}{6} N^{3} b^{3} \rangleho^{3} \langleeft [ 8 (U_{+} + U_{-}) \cos{2 \phi} -(U_{+}-U_{-}) \{ 4 + 4 \cos{4 \phi} \} \rangleight ] . \end{equation} Upon substituting the values of $U_{\pm}$, $\cos{4\phi} = 2 \cos^{2}{2 \phi} -1$, $\cos{2 \phi} = \frac{}{}ac{q^2}{\rangleho^2}$ and $Nb = L/2$, the above expressions is simplified to , \begin{equation} \langleim_{N \to \infty} A_{3} b^{3}= - \langleeft ( \frac{}{}ac{ k^{2} -q^{2} }{2 k q}\rangleight ) \frac{}{}ac{(qL)^{3}}{3}. \end{equation} This is the same result given in Eq. \rangleef{a3b}. \paragraph{Evaluation of $\langleim_{N \to \infty} A_{5} b^{5}$:} From Eq. \rangleef{a5}, we write \begin{multline} A_{5} b^{5}= \frac{}{}ac{N b^{5} \rangleho ^{5}}{360} [ 2 (U_{+} + U_{-}) (96 N^{4} -5 N^{2} -1) - (U_{+} - U_{-}) \cos{2 \phi} (288 N^{4} -25 N^{2} -8) + \\ 2 (U_{+} + U_{-}) \cos{4 \phi} (96 N^{4} + 5 N^{2} +1) - (U_{+} - U_{-}) \cos{6 \phi} (96 N^{4} + 25 N^{2} +8) ] . \end{multline} This can be further written as, \begin{multline} A_{5} b^{5}= \frac{}{}ac{N^{5} b^{5} \rangleho ^{5}}{360} [ 2 (U_{+} + U_{-}) (96 - \frac{}{}ac{5}{N^{2}} -\frac{}{}ac{1}{N^{4}}) - (U_{+} - U_{-}) \cos{2 \phi} (288 - \frac{}{}ac{25}{N^{2}} -\frac{}{}ac{8}{N^{4}}) + \\ 2 (U_{+} + U_{-}) \cos{4 \phi} (96 + \frac{}{}ac{5}{N^{2}} +\frac{}{}ac{1}{N^{4}}) - (U_{+} - U_{-}) \cos{6 \phi} (96 + \frac{}{}ac{25} {N^{2}} + \frac{}{}ac{8}{N^{4}} ) ] . \end{multline} Taking the limit $N \rangleightarrow \infty$ of the above equation, all terms containing $N$ in denominator will become zero and we get \begin{equation} \langleim_{N \to \infty} A_{5} b^{5}= \frac{}{}ac{L^{5} \rangleho ^{5}}{2^{5} 360} [ 768 \frac{}{}ac{k}{\rangleho} \cos^{2}{2 \phi} -192 \frac{}{}ac{\rangleho}{k} (3 \cos{2 \phi} + \cos{6 \phi}) ] \end{equation} In the above we have already used $Nb = L/2$ and the expressions for $U_{\pm}$. Next we expand $\cos{4 \phi}$ and $\cos{6 \phi}$ in the power of $\cos{2 \phi}$ and substitute $\cos{2 \phi} = \frac{}{}ac{q^{2}}{\rangleho^2}$ to arrive at \begin{equation} \langleim_{N \to \infty} A_{5} b^{5} = \frac{}{}ac{2}{15} \langleeft ( \frac{}{}ac{k^{2}-q^{2}}{2kq} \rangleight ) ( q L)^{5}, \end{equation} which is Eq. 
\rangleef{a5b}. \paragraph{Evaluation of $\langleim_{N \to \infty} A_{7} b^{7} $:} From Eq. \rangleef{a7} we can write, \begin{multline} A_{7} b^{7}= \frac{}{}ac{N b^{7} \rangleho ^{7}}{15120} [ (U_{+} - U_{-}) (9792 N^{6} -1008 N^{4} -161 N^{2} -34) -\\ 4 (U_{+} + U_{-}) \cos{2 \phi} (4896 N^{6} -168 N^{4} -35 N^{2} -10) + \\ 4 (U_{+} - U_{-}) \cos{4 \phi} (3264 N^{6} -35 N^{2} -16) - \\ 4 (U_{+} + U_{-}) \cos{6 \phi} (1632 N^{6} + 168 N^{4} + 35 N^{2} +10) + \\ (U_{+} - U_{-}) \cos{8 \phi} (3264 N^{6} + 1008 N^{4} + 301 N^{2} +98) ] . \end{multline} The above equation can be further written as , \begin{multline} A_{7} b^{7}= \frac{}{}ac{N^{7} b^{7} \rangleho ^{7}}{15120} [ (U_{+} - U_{-}) (9792 -\frac{}{}ac{1008}{ N^{2}} - \frac{}{}ac{161} {N^{4}} -\frac{}{}ac{34}{N^{6}}) -\\ 4 (U_{+} + U_{-}) \cos{2 \phi} (4896 -\frac{}{}ac{168} {N^{2}} -\frac{}{}ac{35} {N^{4}} - \frac{}{}ac{10}{N^{6}}) + \\ 4 (U_{+} - U_{-}) \cos{4 \phi} (3264 -\frac{}{}ac{35} {N^{4}} -\frac{}{}ac{16}{N^{6}}) - \\ 4 (U_{+} + U_{-}) \cos{6 \phi} (1632 + \frac{}{}ac{168} {N^{2}} + \frac{}{}ac{35} {N^{4}} +\frac{}{}ac{10}{N^{6}}) + \\ (U_{+} - U_{-}) \cos{8 \phi} (3264 + \frac{}{}ac{1008} {N^{2}} + \frac{}{}ac{301} {N^{4}} +\frac{}{}ac{98}{N^{6}}) ] . \end{multline} Taking $N \rangleightarrow \infty$ limit of the above equation and substituting $Nb=\frac{}{}ac{L}{2}$, we obtain \begin{multline} \langleim_{N \to \infty} A_{7} b^{7}= \frac{}{}ac{L^{7} \rangleho ^{7}}{2^{7} . 15120} [ 9792(U_{+} - U_{-}) - 19584 (U_{+} + U_{-}) \cos{2 \phi} + 13056 (U_{+} - U_{-}) \cos{4 \phi} \\ - 6528 (U_{+} + U_{-}) \cos{6 \phi} + 3264 (U_{+} - U_{-}) \cos{8 \phi} ] . \end{multline} Next we expand $\cos{4 \phi}, \cos{6 \phi}, \cos{8 \phi}$ in powers of $\cos{2 \phi}$ and substitute the expressions for $U_{\pm}$ to show, \begin{multline} \langleim_{N \to \infty} A_{7} b^{7}= \langleeft (\frac{}{}ac{L}{2} \rangleight )^{7} \frac{}{}ac{ \rangleho ^{7}}{15120} \langleeft [ \langleeft(\frac{}{}ac{2\rangleho}{k} \rangleight ) \{ 9792 + 3264 (8 \cos^{4}{2 \phi}-3)\} - \langleeft (\frac{}{}ac{2k}{\rangleho} \rangleight ) 261121 \cos^{3}{2 \phi} \rangleight ] . \end{multline} Upon substituting $\cos{2 \phi} = \frac{}{}ac{q^{2}}{\rangleho^2}$, the above equation simplifies to, \begin{equation} \langleim_{N \to \infty} A_{7} b^{7} =- \frac{}{}ac{17}{315} \langleeft ( \frac{}{}ac{k^{2}-q^{2}}{2kq} \rangleight ) ( q L)^{7}, \end{equation} which is Eq. \rangleef{a7b}. \paragraph{Evaluation of $ \langleim_{N \to \infty} A_{9} b^{9} $:} From Eq. \rangleef{a9} we can write, \begin{multline} A_{9} b^{9}= \frac{}{}ac{N b^{9} \rangleho ^{9}}{453600} [ 4 (U_{+} + U_{-}) (119040 N^{8} - 6120 N^{6} -1029 N^{4} -215 N^{2} -61) - \\ 2 (U_{+} - U_{-}) \cos{2 \phi} (396800 N^{8} -28560 N^{6} -5502 N^{4} - 1045 N^{2} -268) + \\ 40 (U_{+} + U_{-}) \cos{4 \phi} ( 15872 N^{8} -42 N^{4} -15 N^{2} -5) - \\ (U_{+} - U_{-}) \cos{6 \phi} (396800 N^{8} + 28560 N^{6} + 1722 N^{4} - 655 N^{2} -352) + \\ 4 (U_{+} + U_{-}) \cos{8 \phi} (39680 N^{8} + 6120 N^{6} + 1449 N^{4} + 365 N^{2} + 111) - \\ (U_{+} - U_{-}) \cos{10 \phi} (79360 N^{8} + 28560 N^{6} + 9282 N^{4} + 2745 N^{2} + 888) ]. 
\end{multline} Again we write the above equation as follows \begin{multline} A_{9} b^{9}= \frac{}{}ac{N^{9} b^{9} \rangleho ^{9}}{453600} [ 4 (U_{+} + U_{-}) (119040 - \frac{}{}ac{6120} {N^{2}} - \frac{}{}ac{1029}{ N^{4}} -\frac{}{}ac{215} {N^{6}} - \frac{}{}ac{61}{N^{8}}) - \\ 2 (U_{+} - U_{-}) \cos{2 \phi} (396800 - \frac{}{}ac{28560} {N^{2}} -\frac{}{}ac{5502} {N^{4}} - \frac{}{}ac{1045} {N^{6}} -\frac{}{}ac{268}{N^{8}}) + \\ 40 (U_{+} + U_{-}) \cos{4 \phi} ( 15872 - \frac{}{}ac{42} {N^{4}} -\frac{}{}ac{15} {N^{6}} - \frac{}{}ac{5}{N^{8}}) - \\ (U_{+} - U_{-}) \cos{6 \phi} (396800 + \frac{}{}ac{28560} {N^{2}} + \frac{}{}ac{1722} {N^{4}} - \frac{}{}ac{655} {N^{6}} -\frac{}{}ac{352}{N^{8}}) + \\ 4 (U_{+} + U_{-}) \cos{8 \phi} (39680 + \frac{}{}ac{6120} {N^{2}} + \frac{}{}ac{1449} {N^{4}} + \frac{}{}ac{365} {N^{6}} + \frac{}{}ac{111}{N^{8}}) - \\ (U_{+} - U_{-}) \cos{10 \phi} (79360 + \frac{}{}ac{28560} {N^{2}} + \frac{}{}ac{9282} {N^{4}} + \frac{}{}ac{2745} {N^{6}} + \frac{}{}ac{888}{N^{8}}) . \end{multline} Taking $N \rangleightarrow \infty$ limit of the above equation yield, \begin{multline} \langleim_{N \to \infty} A_{9} b^{9}= \frac{}{}ac{N^{9} b^{9} \rangleho ^{9}}{453600} [ 476160(U_{+} + U_{-}) - 793600 (U_{+} - U_{-}) \cos{2 \phi} + 634880 (U_{+} + U_{-}) \cos{4 \phi} - \\ 396800 (U_{+} - U_{-}) \cos{6 \phi} + 158720 (U_{+} + U_{-}) \cos{8 \phi} - 79360 (U_{+} - U_{-}) \cos{10 \phi} ] . \end{multline} We expand $\cos{10 \phi}, \cos{8 \phi}, \cos{6 \phi}, \cos{4 \phi} $ in power of $\cos{2 \phi} = \frac{}{}ac{q^{2}}{\rangleho^2}$ and substitute the expressions for $U_{\pm}$ and $Nb =\frac{}{}ac{L}{2}$. It can be shown the above expression finally simplifies to , \begin{equation} \langleim_{N \to \infty} A_{9} b^{9}= \frac{}{}ac{31}{2835} \langleeft ( \frac{}{}ac{k^{2}-q^{2}}{2kq} \rangleight ) ( q L)^{9}. \end{equation} This is Eq. \rangleef{a9b}. \begin{center} {\langlearge \bf Appendix - D : Evaluation of $ \langleeft [ \langleim_{b \to \infty} \langleeft (\frac{}{}ac{d \tau_{\vertarepsilon}}{ d\vertarepsilon} \rangleight ) \rangleight ]_{\vertarepsilon =1} $ } \end{center} In this appendix, we evaluate the right hand side of Eq. \rangleef{tau_eps1}. The expression of $P$ is given by Eq. \rangleef{p_exp}. We express $\frac{}{}ac{dP}{dk}$ , $\frac{}{}ac{dP}{d\vertarepsilon}$ and $\frac{}{}ac{d^{2}P}{d\vertarepsilon dk}$ in terms of the derivatives of $\alpha, Q_{1}$ and $Q_{2}$ as follows, \begin{multline} \frac{}{}ac{dP}{df}= \frac{}{}ac{1}{(Q_{1}+ Q_{2} \tan{\alpha})} \langleeft [ Q_{2} \sec^{2}\alpha \frac{}{}ac{d \alpha}{df} + \tan{\alpha} \frac{}{}ac{d Q_{1}}{df} - \frac{}{}ac{d Q_{2}}{df} \rangleight ] - \\ \frac{}{}ac{P}{(Q_{1}+ Q_{2} \tan{\alpha})} \langleeft [ \frac{}{}ac{d Q_{1}}{df} + Q_{2} \sec^{2}{\alpha} \frac{}{}ac{d \alpha}{df} + \tan{\alpha} \frac{}{}ac{d Q_{2}}{df} \rangleight ] . \langleabel{dpdf} \end{multline} Where $f=k, \vertarepsilon $ . 
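The chain-rule structure of Eq. \ref{dpdf}, and of the mixed derivative given next, can be cross-checked symbolically. The following minimal sketch is ours and not part of the original calculation: it treats $Q_{1}$, $Q_{2}$ and the angle (denoted $\alpha$, as in Eq. \ref{dpdf}) as abstract functions of a single variable $f$ and lets a computer algebra system differentiate $P$; the output agrees with Eq. \ref{dpdf} up to algebraic rearrangement.
\begin{verbatim}
import sympy as sp

f = sp.symbols('f')                 # f stands for either k or epsilon
Q1 = sp.Function('Q1')(f)
Q2 = sp.Function('Q2')(f)
alpha = sp.Function('alpha')(f)

# P as in Eq. (p_exp), with alpha playing the role of the angle
P = (Q1*sp.tan(alpha) - Q2) / (Q1 + Q2*sp.tan(alpha))

dP_df = sp.simplify(sp.diff(P, f))  # compare with Eq. (dpdf)
print(dP_df)
\end{verbatim}
Using two independent symbols for $k$ and $\varepsilon$ instead of the single variable $f$, the same approach reproduces the mixed derivative that follows.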
Also, \begin{multline} \frac{}{}ac{d^{2}P}{d \vertarepsilon dk}= \frac{}{}ac{1}{(Q_{1}+ Q_{2} \tan{\alpha})} [ Q_{1} \sec^{2}{\alpha} \frac{}{}ac{d^{2} \alpha}{ d \vertarepsilon dk} + \sec^{2}{\alpha} \frac{}{}ac{d \alpha}{ dk} \frac{}{}ac{d Q_{1}}{ d \vertarepsilon} + 2 Q_{1} \sec^{2}{\alpha} \tan{\alpha} \frac{}{}ac{d \alpha}{dk} \frac{}{}ac{d \alpha}{d \vertarepsilon} \\ + \tan{\alpha} \frac{}{}ac{d^{2} Q_{1}}{ d \vertarepsilon dk} + \sec^{2}{\alpha} \frac{}{}ac{dQ_{1}}{dk} \frac{}{}ac{d \alpha}{d\vertarepsilon} -\frac{}{}ac{d^{2} Q_{2}}{d \vertarepsilon dk} ] \\ - \frac{}{}ac{1}{(Q_{1}+ Q_{2} \tan{\alpha})^{2}} (\frac{}{}ac{d Q_{1}}{d \vertarepsilon} + \tan{\alpha} \frac{}{}ac{d Q_{2}}{d \vertarepsilon} + Q_{2} \sec^{2}{\alpha} \frac{}{}ac{d \alpha}{d \vertarepsilon} ) ( Q_{1} \sec^{2}{\alpha} \frac{}{}ac{d \alpha}{d k} + \tan{\alpha} \frac{}{}ac{d Q_{1}}{dk} - \frac{}{}ac{d Q_{2}}{dk} ) \\ - \frac{}{}ac{P}{(Q_{1}+ Q_{2} \tan{\alpha})} [ \frac{}{}ac{d^{2} Q_{1}}{d \vertarepsilon dk} + Q_{2} \sec^{2}{\alpha} \frac{}{}ac{d^{2} \alpha}{ d \vertarepsilon dk} + \sec^{2}{\alpha} \frac{}{}ac{d \alpha}{dk} \frac{}{}ac{d Q_{2}}{d \vertarepsilon } + \\ 2 Q_{2} \sec^{2}{\alpha} \tan{\alpha} \frac{}{}ac{d \alpha}{dk} \frac{}{}ac{d \alpha}{d \vertarepsilon} + \sec^{2}{\alpha} \frac{}{}ac{d \alpha}{d \vertarepsilon} \frac{}{}ac{d Q_{2}}{dk} + \tan{\alpha} \frac{}{}ac{d^{2} Q_{2}}{ d \vertarepsilon dk}] \\ - ( \frac{}{}ac{d Q_{1}}{dk} + Q_{2} \sec^{2}{\alpha} \frac{}{}ac{d \alpha}{dk} + \tan{\alpha} \frac{}{}ac{d Q_{2}}{dk}) [ \frac{}{}ac{1}{(Q_{1}+ Q_{2} \tan{\alpha})} \frac{}{}ac{dP}{d \vertarepsilon } \\ - \frac{}{}ac{P}{(Q_{1}+ Q_{2} \tan{\alpha})^{2}} \{ \frac{}{}ac{d Q_{1}}{d \vertarepsilon} + \frac{}{}ac{d Q_{2}}{d \vertarepsilon} \tan{\alpha} + Q_{2} \sec^{2}{\alpha} \frac{}{}ac{d \alpha}{d \vertarepsilon } \}] . \langleabel{d2pdkde} \end{multline} Various derivatives of $\alpha$, $Q_{1}, Q_{2}$ are given by, \begin{equation} \frac{}{}ac{d \alpha}{dk} = b k \langleeft[\frac{}{}ac{\langleeft(k^2-u\rangleight) \cos {\phi_{1}}-v \sin {\phi_{1}}}{\rangleho_{1}^{3}}+\frac{}{}ac{v \epsilon \sin{\phi_{2}}-\langleeft(k^2-u\rangleight) \cos{\phi_{1}}}{\rangleho_{2}^{3}}\rangleight]. \langleabel{dalphadk} \end{equation} \begin{equation} \frac{}{}ac{d \alpha}{d \vertarepsilon }= -\frac{}{}ac{b v^2 \epsilon }{\langleeft(2 \sqrt{2} \rangleho_{2}^2\rangleight) \sqrt{\rangleho_{2}^2-\langleeft(k^2-u\rangleight)}} . \langleabel{dalphadeps} \end{equation} \begin{equation} \frac{}{}ac{d^2 \alpha }{d\vertarepsilon dk}=\frac{}{}ac{b k v}{2 \rangleho _2^7} \langleeft[\sin \phi _2 \langleeft(\langleeft(k^2-u\rangleight)^2-v^2 \epsilon ^2\rangleight)+2 v \epsilon \langleeft(k^2-u\rangleight) \cos \phi _2\rangleight] . 
\langleabel{d2alphadepsdk} \end{equation} \begin{multline} \frac{}{}ac{dQ_{1}}{dk}=\frac{}{}ac{1}{k^2 \rangleho _1^5 \rangleho _2^5} \Big[ \rangleho _1^5 \langleeft(k^2-\rangleho _2^2\rangleight) \cos \langleeft(\phi _2\rangleight) \langleeft(u \langleeft(k^2-u\rangleight)-v^2 \epsilon ^2\rangleight)-\rangleho _2^5 \langleeft(k^2-\rangleho _1^2\rangleight) \cos \langleeft(\phi _1\rangleight) \langleeft(u \langleeft(k^2-u\rangleight)-v^2\rangleight) \\ -k^2 \rangleho _2^5 v \langleeft(k^2+\rangleho _1^2\rangleight) \sin \langleeft(\phi _1\rangleight)-k^3 \langleeft(\rangleho _1^2-\rangleho _2^2\rangleight) v^2 \langleeft(\epsilon ^2-1\rangleight) \langleeft(k^2-u\rangleight) \cos \langleeft(\phi _1+\phi _2\rangleight)\\ +k^2 \rangleho _1^5 v \epsilon \langleeft(k^2+\rangleho _2^2\rangleight) \sin \langleeft(\phi _2\rangleight)+k^3 \langleeft(\rangleho _1^2+\rangleho _2^2\rangleight) v (\epsilon +1) \sin \langleeft(\phi _1+\phi _2\rangleight) \langleeft(\langleeft(k^2-u\rangleight)^2+v^2 \epsilon \rangleight) \Big] . \langleabel{dq1dk} \end{multline} \begin{multline} \frac{}{}ac{dQ_{1}}{d \vertarepsilon }=\frac{}{}ac{v}{2 k \rangleho _1 \rangleho _2^5} \Big[\rangleho _1 v \epsilon \langleeft(k^2-\rangleho _2^2\rangleight) \cos \langleeft(\phi _2\rangleight)+k \langleeft(\rangleho _1^2-\rangleho _2^2\rangleight) v \epsilon \cos \langleeft(\phi _1+\phi _2\rangleight) \\ -\langleeft(k^2-u\rangleight) \langleeft(\rangleho _1 \langleeft(k^2+\rangleho _2^2\rangleight) \sin \langleeft(\phi _2\rangleight)+k \langleeft(\rangleho _1^2+\rangleho _2^2\rangleight) \sin \langleeft(\phi _1+\phi _2\rangleight)\rangleight) \Big ] . \langleabel{dq1deps} \end{multline} \begin{multline} \frac{}{}ac{d^{2}Q_{1}}{d \vertarepsilon dk}= \frac{}{}ac{1}{2 k^2 \rangleho _1^9 \rangleho _2^{17}} \Big[ -k^2 \rangleho _1^9 \rangleho _2^8 v^2 \epsilon \langleeft(k^2+\rangleho _2^2\rangleight) \langleeft(k^2-u\rangleight) \cos \langleeft(\phi _2\rangleight) - \\ \rangleho _1^9 \rangleho _2^8 v^2 \epsilon \cos \langleeft(\phi _2\rangleight) \langleeft(-k^2 \rangleho _2^4-3 k^2 \rangleho _2^2 \langleeft(k^2-u\rangleight)+5 k^4 \langleeft(k^2-u\rangleight)-\rangleho _2^6\rangleight) \\ -k^3 \rangleho _1^4 \rangleho _2^8 \langleeft(\rangleho _1^2+\rangleho _2^2\rangleight) v^2 (\epsilon +1) \langleeft(k^2-u\rangleight) \cos \langleeft(\phi _1+\phi _2\rangleight) \langleeft(\langleeft(k^2-u\rangleight)^2+v^2 \epsilon \rangleight) \\ -4 k^2 \rangleho _2^8 \rangleho _1^9 v^3 \epsilon ^2 \langleeft(k^2+\rangleho _2^2\rangleight) \sin \langleeft(\phi _2\rangleight)-k^3 \rangleho _2^8 \langleeft(5 \rangleho _1^6-3 \rangleho _2^2 \rangleho _1^4-\rangleho _2^4 \rangleho _1^2-\rangleho _2^6\rangleight) \rangleho _1^4 v^2 \epsilon \langleeft(k^2-u\rangleight) \cos \langleeft(\phi _1+\phi _2\rangleight) \\ \rangleho _2^8 \rangleho _1^9 v \langleeft(k^2-\rangleho _2^2\rangleight) +\langleeft(k^2-u\rangleight) \sin \langleeft(\phi _2\rangleight) \langleeft(u \langleeft(k^2-u\rangleight)-v^2 \epsilon ^2\rangleight)+k^2 \rangleho _2^8 \rangleho _1^9 v^3 \epsilon ^2 \langleeft(\rangleho _2^2-k^2\rangleight) \sin \langleeft(\phi _2\rangleight) \\ +2 k^2 \rangleho _1^9 \rangleho _2^{12} v \langleeft(k^2+\rangleho _2^2\rangleight) \sin \langleeft(\phi _2\rangleight)-k^3 \rangleho _1^4 \rangleho _2^8 \langleeft(\rangleho _1^2-\rangleho _2^2\rangleight) v^3 \epsilon (\epsilon +1) \sin \langleeft(\phi _1+\phi _2\rangleight) \langleeft(\langleeft(k^2-u\rangleight)^2+v^2 \epsilon \rangleight) \\ -k^3 \rangleho _1^4 \langleeft(\rangleho _1^2-\rangleho 
_2^2\rangleight) v^3 \langleeft(\epsilon ^2-1\rangleight) \sin \langleeft(\phi _1+\phi _2\rangleight) \langleeft(v^2 \epsilon ^2 \langleeft(k^2-u\rangleight)+\langleeft(k^2-u\rangleight)^3\rangleight)^2 \\ +2 k^3 \rangleho _1^8 \rangleho _2^8 \langleeft(\rangleho _1^2+\rangleho _2^2\rangleight) v \sin \langleeft(\phi _1+\phi _2\rangleight) \langleeft(\langleeft(k^2-u\rangleight)^2-v^2 \epsilon ^2\rangleight) \Big] . \langleabel{d2q1depsdk} \end{multline} \begin{multline} \frac{}{}ac{dQ_{2}}{d k }= \frac{}{}ac{1}{k^2 \rangleho _1^5 \rangleho _2^5} \Big[ k^2 \rangleho _2^5 v \langleeft(\rangleho _1^2-k^2\rangleight) \cos \langleeft(\phi _1\rangleight)+k^2 \rangleho _1^5 v \epsilon \langleeft(\rangleho _2^2-k^2\rangleight) \cos \langleeft(\phi _2\rangleight) \\ - \rangleho _2^5 \langleeft(k^2+\rangleho _1^2\rangleight) \sin \langleeft(\phi _1\rangleight) \langleeft(u \langleeft(k^2-u\rangleight)-v^2\rangleight)-k^3 \langleeft(\rangleho _1^2-\rangleho _2^2\rangleight) v (\epsilon +1) \cos \langleeft(\phi _1+\phi _2\rangleight) \langleeft(\langleeft(k^2-u\rangleight)^2+v^2 \epsilon \rangleight) \\ \rangleho _1^5 \langleeft(k^2+\rangleho _2^2\rangleight) \sin \langleeft(\phi _2\rangleight) \langleeft(u \langleeft(k^2-u\rangleight)-v^2 \epsilon ^2\rangleight)-k^3 \langleeft(\rangleho _1^2+\rangleho _2^2\rangleight) v^2 \langleeft(\epsilon ^2-1\rangleight) \langleeft(k^2-u\rangleight) \sin \langleeft(\phi _1+\phi _2\rangleight) \Big ] . \langleabel{dq2dk} \end{multline} \begin{multline} \frac{}{}ac{dQ_{2}}{d \vertarepsilon }= \frac{}{}ac{v}{2 k \rangleho _1 \rangleho _2^5} \Big[ \rangleho _1 \langleeft(k^2-\rangleho _2^2\rangleight) \langleeft(k^2-u\rangleight) \cos \langleeft(\phi _2\rangleight)+k \langleeft(\rangleho _1^2-\rangleho _2^2\rangleight) \langleeft(k^2-u\rangleight) \cos \langleeft(\phi _1+\phi _2\rangleight) \\ +v \epsilon \langleeft(\rangleho _1 \langleeft(k^2+\rangleho _2^2\rangleight) \sin \langleeft(\phi _2\rangleight)+k \langleeft(\rangleho _1^2+\rangleho _2^2\rangleight) \sin \langleeft(\phi _1+\phi _2\rangleight)\rangleight) \Big ] . 
\langleabel{dq2deps} \end{multline} \begin{multline} \frac{}{}ac{d^{2}Q_{2}}{d \vertarepsilon dk }= \frac{}{}ac{1}{2 k^2 \rangleho _1^9 \rangleho _2^{17}} \Big [ 4 k^2 \rangleho _1^9 \rangleho _2^8 v^3 \epsilon ^2 \langleeft(k^2-\rangleho _2^2\rangleight) \cos \langleeft(\phi _2\rangleight)+2 k^2 \rangleho _1^9 \rangleho _2^{12} v \langleeft(\rangleho _2^2-k^2\rangleight) \cos \langleeft(\phi _2\rangleight) \\ +k^2 \rangleho _1^9 \rangleho _2^8 v^3 \epsilon ^2 \langleeft(k^2+\rangleho _2^2\rangleight) \cos \langleeft(\phi _2\rangleight)-\rangleho _1^9 \rangleho _2^8 v \langleeft(k^2+\rangleho _2^2\rangleight) \langleeft(k^2-u\rangleight) \cos \langleeft(\phi _2\rangleight) \langleeft(u \langleeft(k^2-u\rangleight)-v^2 \epsilon ^2\rangleight) \\ -2 k^3 \rangleho _1^8 \rangleho _2^8 \langleeft(\rangleho _1^2-\rangleho _2^2\rangleight) v \cos \langleeft(\phi _1+\phi _2\rangleight) \langleeft(\langleeft(k^2-u\rangleight)^2-v^2 \epsilon ^2\rangleight) \\ +k^3 \rangleho _1^4 \rangleho _2^8 \langleeft(\rangleho _1^2+\rangleho _2^2\rangleight) v^3 \epsilon (\epsilon +1) \cos \langleeft(\phi _1+\phi _2\rangleight) \langleeft(\langleeft(k^2-u\rangleight)^2+v^2 \epsilon \rangleight) \\ +k^3 \rangleho _1^4 \langleeft(\rangleho _1^2+\rangleho _2^2\rangleight) v^3 \langleeft(\epsilon ^2-1\rangleight) \cos \langleeft(\phi _1+\phi _2\rangleight) \langleeft(v^2 \epsilon ^2 \langleeft(k^2-u\rangleight)+\langleeft(k^2-u\rangleight)^3\rangleight)^2 \\ \rangleho _1^9 \rangleho _2^8 \langleeft(-v^2\rangleight) \epsilon \sin \langleeft(\phi _2\rangleight) \langleeft(-k^2 \rangleho _2^4+3 k^2 \rangleho _2^2 \langleeft(k^2-u\rangleight)+5 k^4 \langleeft(k^2-u\rangleight)+\rangleho _2^6\rangleight) \\ +k^2 \rangleho _1^9 \rangleho _2^8 v^2 \epsilon \langleeft(\rangleho _2^2-k^2\rangleight) \langleeft(k^2-u\rangleight) \sin \langleeft(\phi _2\rangleight)\\ -k^3 \rangleho _1^4 \rangleho _2^8 \langleeft(\rangleho _1^2-\rangleho _2^2\rangleight) v^2 (\epsilon +1) \langleeft(k^2-u\rangleight) \sin \langleeft(\phi _1+\phi _2\rangleight) \langleeft(\langleeft(k^2-u\rangleight)^2+v^2 \epsilon \rangleight) \\ -k^3 \rangleho _1^4 \rangleho _2^8 \langleeft(5 \rangleho _1^6+3 \rangleho _2^2 \rangleho _1^4-\rangleho _2^4 \rangleho _1^2+\rangleho _2^6\rangleight) v^2 \epsilon \langleeft(k^2-u\rangleight) \sin \langleeft(\phi _1+\phi _2\rangleight) \Big ]. \langleabel{d2q2depsdk} \end{multline} The right hand side of Eq. \rangleef{tau_eps1} is to be evaluated at $\vertarepsilon =1$. Therefore, we evaluate all the above derivatives of $\alpha$, $Q_{1}$ and $Q_{2}$ at $\vertarepsilon =1$. When $\vertarepsilon =1$, we also have $\rangleho_{2}= \rangleho_{1}$, $\phi_{2}= \phi_{1}$ and $\alpha=0$. We simplify the Eqs. \rangleef{dalphadk}, \rangleef{dalphadeps}, \rangleef{d2alphadepsdk}, \rangleef{dq1dk}, \rangleef{dq1deps}, \rangleef{d2q1depsdk}, \rangleef{dq2dk}, \rangleef{dq2deps} and Eq. \rangleef{d2q2depsdk} at $\vertarepsilon =1$ to obtain the following results, \begin{equation} \langleeft[ \frac{}{}ac{d \alpha}{dk} \rangleight]_{\vertarepsilon =1} =0 . \langleabel{dalphadke1} \end{equation} \begin{equation} \langleeft[ \frac{}{}ac{d Q_{1}}{dk} \rangleight]_{\vertarepsilon =1}=\frac{}{}ac{4 k v^2}{\rangleho _1^6} . 
\langleabel{dq1dke1} \end{equation} \begin{equation} \langleeft[ \frac{}{}ac{d Q_{2}}{d k } \rangleight]_{\vertarepsilon =1} =\frac{}{}ac{2 \langleeft(k^2+\rangleho _1^2\rangleight) \sin \langleeft(\phi _1\rangleight) \langleeft(u \langleeft(k^2-u\rangleight)-v^2\rangleight)}{k^2 \rangleho _1^5}+\frac{}{}ac{(2 v) \langleeft(\rangleho _1^2-k^2\rangleight) \cos \langleeft(\phi _1\rangleight)}{\rangleho _1^5} . \langleabel{dq2dke1} \end{equation} \begin{equation} \langleeft[ \frac{}{}ac{d \alpha}{d \vertarepsilon } \rangleight]_{\vertarepsilon =1} = -\frac{}{}ac{b v^2}{2 \sqrt{2} \rangleho _1^2} \langleeft [ \frac{}{}ac{1}{\sqrt{\rangleho _1^2-\langleeft(k^2-u\rangleight)}}\rangleight ] . \langleabel{dalphadepse1} \end{equation} \begin{equation} \langleeft[ \frac{}{}ac{d Q_{1}}{d \vertarepsilon } \rangleight]_{\vertarepsilon =1}= \langleeft (\frac{}{}ac{v}{2 k \rangleho _1^5} \rangleight ) \langleeft [ v \langleeft(k^2-\rangleho _1^2\rangleight) \cos \langleeft(\phi _1\rangleight)-\langleeft(k^2-u\rangleight) \sin \langleeft(\phi _1\rangleight) \langleeft(k^2+4 k \rangleho _1 \cos \langleeft(\phi _1\rangleight)+\rangleho _1^2\rangleight)\rangleight ] . \langleabel{dq1depse1} \end{equation} \begin{equation} \langleeft[ \frac{}{}ac{d Q_{2}}{d \vertarepsilon } \rangleight]_{\vertarepsilon =1} =\langleeft ( \frac{}{}ac{v}{2 k \rangleho _1^5} \rangleight ) \langleeft[ \langleeft(k^2-\rangleho _1^2\rangleight) \langleeft(k^2-u\rangleight) \cos \langleeft(\phi _1\rangleight)+v \langleeft(k^2+\rangleho _1^2\rangleight) \sin \langleeft(\phi _1\rangleight)+\frac{}{}ac{2 k v^2}{\rangleho _1} \rangleight] . \langleabel{dq2depse1} \end{equation} \begin{equation} \langleeft[ \frac{}{}ac{d^{2} \alpha}{d \vertarepsilon dk} \rangleight]_{\vertarepsilon =1} = \frac{}{}ac{b k v}{2 \rangleho _1^7} \langleeft [ \sin \langleeft(\phi _1\rangleight) \langleeft(\langleeft(k^2-u\rangleight)^2-v^2\rangleight)+2 v \langleeft(k^2-u\rangleight) \cos \langleeft(\phi _1\rangleight) \rangleight ] . \langleabel{d2alphadepsdke1} \end{equation} \begin{multline} \langleeft[ \frac{}{}ac{d^{2} Q_{1}}{d \vertarepsilon dk} \rangleight]_{\vertarepsilon =1} = \Big [ \rangleho _1^3 v \cos \langleeft(\phi _1\rangleight) \langleeft(-6 k^6+2 k^4 \langleeft(\rangleho _1^2+3 u\rangleight)+k^2 \rangleho _1^2 \langleeft(\rangleho _1^2-2 u\rangleight)+\rangleho _1^6\rangleight) \\ -4 k^3 \rangleho _1^4 v \langleeft(k^2-u\rangleight) \cos \langleeft(2 \phi _1\rangleight) +4 k^3 \rangleho _1^3 \rangleho _1 \sin \langleeft(2 \phi _1\rangleight) \langleeft(\langleeft(k^2-u\rangleight)^2-v^2\rangleight) \\ + \rangleho _1^3 \sin \langleeft(\phi _1\rangleight) \Big\{2 k^2 \rangleho _1^4 \langleeft(k^2+\rangleho _1^2\rangleight)-\rangleho _1^2 \langleeft(v^2 \langleeft(2 k^2+u\rangleight)+u \langleeft(k^2-u\rangleight)^2\rangleight)+ \\ k^2 \langleeft(v^2 \langleeft(u-6 k^2\rangleight)+u \langleeft(k^2-u\rangleight)^2\rangleight)\Big\} \Big ] . 
\langleabel{d2q1depsdke1} \end{multline} \begin{multline} \langleeft[ \frac{}{}ac{d^{2} Q_{2}}{d \vertarepsilon dk} \rangleight]_{\vertarepsilon =1} = \frac{}{}ac{v}{2 k^2 \rangleho _1^{14}} \Big [ 4 k^3 v^2 \langleeft(k^2-u\rangleight) \langleeft(\langleeft(k^2-u\rangleight)^2+2 \rangleho _1^4+v^2\rangleight) \\ + \rangleho _1^5 v \sin \langleeft(\phi _1\rangleight) \langleeft(-6 k^6+k^4 \langleeft(6 u-2 \rangleho _1^2\rangleight)+k^2 \rangleho _1^2 \langleeft(\rangleho _1^2+2 u\rangleight)-\rangleho _1^6\rangleight) \\ - \rangleho _1^5 \cos \langleeft(\phi _1\rangleight) \Big\{2 k^2 \rangleho _1^4 \langleeft(k^2-\rangleho _1^2\rangleight)+\rangleho _1^2 \langleeft(v^2 \langleeft(2 k^2+u\rangleight)+u \langleeft(k^2-u\rangleight)^2\rangleight)+ \\ k^2 \langleeft(v^2 \langleeft(u-6 k^2\rangleight)+u \langleeft(k^2-u\rangleight)^2\rangleight)\Big\} \Big ] . \langleabel{d2q2depsdke1} \end{multline} For $\vertarepsilon =1$ we have the following simplifications, \begin{equation} Q_{1}(\vertarepsilon =1)= 4 \sin^{2}{\phi_{1}}, \ \ Q_{2}(\vertarepsilon =1)= 2 J_{1}^{-} \sin{\phi_{1}} , \langleabel{q1q2_e1} \end{equation} and, \begin{equation} P (\vertarepsilon =1)= -\frac{}{}ac{1}{2} J_{1}^{-} \csc{\phi_{1}} \langleabel{p_e1} \end{equation} From the results of Eqs. \rangleef{dalphadke1},\rangleef{dalphadepse1}, \rangleef{d2alphadepsdke1}, \rangleef{dq1dke1}, \rangleef{dq1depse1}, \rangleef{d2q1depsdke1}, \rangleef{dq2dke1}, \rangleef{dq2depse1}, \rangleef{d2q2depsdke1} and Eqs. \rangleef{q1q2_e1}, \rangleef{p_e1} we can simplify $(\frac{}{}ac{dP}{dk})_{\vertarepsilon =1}, (\frac{}{}ac{dP}{d \vertarepsilon })_{\vertarepsilon =1}$ and $(\frac{}{}ac{d^{2}P}{d \vertarepsilon dk})_{\vertarepsilon =1}$ and can evaluate the right hand side of Eq. \rangleef{tau_eps1}. After a lengthy algebra, it can be shown that , \begin{equation} \langleeft [ \frac{}{}ac{1}{(1+ P^{2})} \frac{}{}ac{d^{2}P}{d \vertarepsilon dk} - \frac{}{}ac{2P}{(1+ P^{2})^{2}} \frac{}{}ac{dP}{dk} \frac{}{}ac{dP}{d \vertarepsilon}\rangleight ]_{\vertarepsilon =1} = K_{0} + K_{1} b. \end{equation} Where the expressions for $K_{1}$ and $K_{0}$ are given by, \begin{equation} K_{1}= \frac{}{}ac{kv}{2 \rangleho_{1}^{7}} \langleeft [ \{(k^{2}-u)^{2} -v^{2}\} \sin{\phi_{1}} + 2 (k^{2}-u) v \cos{\phi_{1}} \rangleight] \langleabel{k1_exp}. \end{equation} The expression for $K_{0}$ is lengthy and we expressed through the use of symbols $C_{1}, C_{2}, C_{3}, C_{4}, C_{5}$ and $C_{6}$ as given below, \begin{equation} K_{0} =\frac{}{}ac{v \csc ^2\langleeft(\phi _1\rangleight)}{2 k^3 \rangleho _1^{13} \langleeft[\langleeft( J_{1}^{-}\rangleight){}^2 \csc ^2\langleeft(\phi _1\rangleight)+4\rangleight]{}^2} (C_{1}+ C_{2}+ C_{3}+ C_{4}+ C_{5}- C_{6}). \langleabel{k0_exp} \end{equation} Various $C_{i}$'s appearing in the above equation are given by, \begin{multline} C_{1} =2 k^3 v \cot \langleeft(\phi _1\rangleight) \langleeft(k^4-2 k^2 u+u^2+v^2\rangleight) (-\langleeft( J_{1}^{-}\rangleight)^2-2 \cos \langleeft(2 \phi _1\rangleight)+2) \\ \times \cot \langleeft(\phi _1\rangleight) \langleeft(\langleeft(k^2-\rangleho _1^2\rangleight) \langleeft(k^2-u\rangleight) \cot \langleeft(\phi _1\rangleight)+v \langleeft(k^2+\rangleho _1^2\rangleight)+4 k \rangleho _1 v \cos \langleeft(\phi _1\rangleight)\rangleight) . 
\end{multline} \begin{multline} C_{2} = \langleeft [8 \langleeft( J_{1}^{-}\rangleight) k^3 v \cot \langleeft(\phi _1\rangleight) \langleeft(k^4-2 k^2 u+u^2+v^2\rangleight) \rangleight ] \\ \times \langleeft [ 4 k \rangleho _1 \langleeft(k^2-u\rangleight) \cos \langleeft(\phi _1\rangleight)+\langleeft(k^2+\rangleho _1^2\rangleight) \langleeft(k^2-u\rangleight)-v \langleeft(k^2-\rangleho _1^2\rangleight) \cot \langleeft(\phi _1\rangleight) \rangleight ] . \end{multline} \begin{multline} C_{3} = \rangleho _1^2 \csc ^2\langleeft(\phi _1\rangleight) \langleeft ( 2-\frac{}{}ac{1}{2} \langleeft( J_{1}^{-}\rangleight)^2 \csc ^2\langleeft(\phi _1\rangleight) \rangleight ) \\ \times \langleeft(k^2+\rangleho _1^2\rangleight) \sin \langleeft(\phi _1\rangleight) \langleeft(u \langleeft(k^2-u\rangleight)-v^2\rangleight)+k^2 v \langleeft(\rangleho _1^2-k^2\rangleight) \cos \langleeft(\phi _1\rangleight) \\ \times \rangleho _1 v \langleeft(k^2-\rangleho _1^2\rangleight) \cos \langleeft(\phi _1\rangleight)-\rangleho _1 \langleeft(k^2-u\rangleight) \sin \langleeft(\phi _1\rangleight) \langleeft(k^2+4 k \rangleho _1 \cos \langleeft(\phi _1\rangleight)+\rangleho _1^2\rangleight). \end{multline} \begin{multline} C_{4} = 2 J_{1}^{-} \rangleho _1^2 \csc ^3\langleeft(\phi _1\rangleight) \langleeft(\langleeft(k^2+\rangleho _1^2\rangleight) \sin \langleeft(\phi _1\rangleight) \langleeft(u \langleeft(k^2-u\rangleight)-v^2\rangleight)+k^2 v \langleeft(\rangleho _1^2-k^2\rangleight) \cos \langleeft(\phi _1\rangleight)\rangleight) \\ \times \rangleho _1 \langleeft(k^2-\rangleho _1^2\rangleight) \langleeft(k^2-u\rangleight) \cos \langleeft(\phi _1\rangleight)+\rangleho _1 v \sin \langleeft(\phi _1\rangleight) \langleeft(k^2+4 k \rangleho _1 \cos \langleeft(\phi _1\rangleight)+\rangleho _1^2\rangleight). 
\end{multline} \begin{multline} C_{5} = 2 J_{1}^{-} k \rangleho _1 \csc \langleeft(\phi _1\rangleight) \langleeft ( \frac{}{}ac{1}{4} \langleeft (J_{1}^{-} \rangleight)^2 \csc ^2\langleeft(\phi _1\rangleight)+1 \rangleight ) \\ \times \Big [ \rangleho _1^3 v \cos \langleeft(\phi _1\rangleight) \langleeft(-6 k^6+2 k^4 \langleeft(\rangleho _1^2+3 u\rangleight) +k^2 \langleeft(\rangleho _1^4-2 \rangleho _1^2 u\rangleight)+\rangleho _1^6 \rangleight) \\ -4 k^3 v \langleeft(k^2-u\rangleight) \cos \langleeft(2 \phi _1\rangleight) \langleeft(k^4-2 k^2 u+u^2+v^2\rangleight) \\ \rangleho _1^3 \sin \langleeft(\phi _1\rangleight) \Big \{ k^6 u-k^4 \langleeft(-2 \rangleho _1^4+2 u^2+\rangleho _1^2 u+6 v^2\rangleight)-\rangleho _1^2 u \langleeft(u^2+v^2\rangleight) \\ +k^2 \langleeft(2 \rangleho _1^6+u^3+2 \rangleho _1^2 u^2+u v^2-2 \rangleho _1^2 v^2\rangleight)+8 k^3 \rangleho _1 \cos \langleeft(\phi _1\rangleight) \langleeft(k^4-2 k^2 u+u^2-v^2\rangleight) \Big \} \Big ] \end{multline} \begin{multline} C_{6} = 4 k \rangleho _1 \langleeft(\frac{}{}ac{1}{4} \langleeft (J_{1}^{-} \rangleight)^2 \csc ^2\langleeft(\phi _1\rangleight)+1\rangleight) \Big [ -\rangleho _1^3 \cos \langleeft(\phi _1\rangleight) \\ \times \Big \{k^6 u+k^4 \langleeft(2 \rangleho _1^4-2 u^2+\rangleho _1^2 u-6 v^2\rangleight)+k^2 \langleeft(-2 \rangleho _1^6+u^3-2 \rangleho _1^2 u^2+u v^2+2 \rangleho _1^2 v^2\rangleight)+\rangleho _1^2 u \langleeft(u^2+v^2\rangleight) \Big \} \\ + v \Big \{ 4 k^3 v \cos \langleeft(2 \phi _1\rangleight) \langleeft(k^4-2 k^2 u+u^2+v^2\rangleight) \\ -\rangleho _1^3 \sin \langleeft(\phi _1\rangleight) \langleeft(6 k^6+k^4 \langleeft(2 \rangleho _1^2-6 u\rangleight)-k^2 \langleeft(\rangleho _1^4+2 \rangleho _1^2 u\rangleight)+16 k^3 \rangleho _1 \langleeft(k^2-u\rangleight) \cos \langleeft(\phi _1\rangleight)+\rangleho _1^6\rangleight) \Big \} \Big ]. \end{multline} $K_{0}$ and $K_{1}$ are independent of the thickness `$2b$' . \end{document}
\begin{document} \mainmatter
\title{The Ontology of Knowledge Based Optimization}
\titlerunning{The Ontology of Knowledge Based Optimization}
\author{Mahyuddin K. M. Nasution}
\authorrunning{M. K. M. Nasution}
\institute{Mathematic Department, Fakultas Matematika dan Ilmu Pengetahuan Alam\\ Universitas Sumatera Utara, Padang Bulan 20150 USU Medan Indonesia\\ \mailsa\\}
\toctitle{SIMANTAP 2010} \tocauthor{} \maketitle
\begin{abstract}
Optimization has become a central topic of study in mathematics and has many areas with different applications. However, many optimization themes coming from different areas have no close ties to the original concepts. This paper addresses some variants of optimization problems using ontology, in order to build a basic body of knowledge about optimization, and then uses it to enhance strategies for achieving knowledge based optimization.
\keywords{Ontology, optimization problem, method, phenomena, paradigm.}
\end{abstract}
\section{Introduction}
Optimization refers to choosing the best element from some set of available alternatives \cite{diwekar2008}. Simply put, optimization is concerned with minimizing or maximizing a function by systematically choosing real values. In some respects, optimization is embedded in economy, efficiency, and effectiveness. Therefore, in simple terms, an optimization problem can be defined as a function
\begin{equation}
\label{pers:utama}
f : X\rightarrow {\bf R}
\end{equation}
where $\exists x_0\in X, \forall x_i\in X$ such that, for a minimization, $f(x_0)\leq f(x_i)$ or, for a maximization, $f(x_0)\geq f(x_i)$ \cite{guler2010}. In general, the set of variables $X$ is a collection of variables representing attributes of entities that appear as instances in studies requiring optimization. Some of them are standard (static) variables, but not a few vary dynamically as options. However, in much research on optimization these variables are kept in a fixed condition, either in setting or in concept, as in many examples in optimization textbooks, so that the resulting optimization formulations prove to be rigid and, indeed, not applicable. Most optimization-related research in the airline industry, for example, concerns the following areas: network design and schedule construction; fleet assignment; aircraft routing; crew scheduling; revenue management; irregular operations; air traffic control and ground delay programs \cite{yu2008}; all of these must also consider, for example, the green concept \cite{buttazzo2009}. Therefore, an optimization problem is formulated each time, but it is not truly optimization. The reason is that many studies consider the properties of the variable setting but are not certain about the characteristics of the variables, so the latent variables cannot be disclosed; both external and internal considerations are needed, together with their impact on the optimization variables, such as reliability and trust. Therefore, the task of this paper is to describe some optimization problems using ontology and to find the ties between these problems based on the concept of adaptation. This paper is divided into four sections; the next section presents the concepts and some problem statements, which are then taken up in the Discussion section.
\section{The Concepts and Some Open Problem Statements}
Research in the optimization area is becoming more flamboyant with the emergence of new areas of study involving resources (implicitly \cite{fisch2003}). Such studies include the exploration of new set theories such as fuzzy sets \cite{jensen2005}.
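Before turning to specific classes of problems, it is useful to recall the basic formulation (\ref{pers:utama}) in concrete terms. The following minimal sketch is illustrative only; the function and the set $X$ are invented for this example and do not come from the cited works. It computes a point $x_0$ with $f(x_0)\leq f(x_i)$ for all $x_i\in X$ using an off-the-shelf solver.
\begin{verbatim}
from scipy.optimize import minimize_scalar

# f : X -> R with X = [-2, 2]; x0 realises f(x0) <= f(x) for all x in X
f = lambda x: (x - 1.0)**2 + 0.5
res = minimize_scalar(f, bounds=(-2.0, 2.0), method='bounded')
print(res.x, res.fun)   # x0 close to 1.0, f(x0) close to 0.5
\end{verbatim}
A maximization is handled in the same way by minimizing $-f$.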
Optimization involves many themes and touches the interests of every existing branch of mathematics; its task, in accordance with its purpose, is to identify new things around optimization and to assess the relations of its objects with other agencies. It also involves the main agenda of life, such as the environment, human rights, social welfare, and justice, where green computing and nano-technology have played their roles \cite{audet2005,bock2005,baker2007,alves2008,chinchuluun2008,kugler2008,kosmidou2008,lowen2008,chaovalitwongse2009,papajorgji2009,gonzalez2010}. These themes have not only experienced a shift in accordance with the challenges and problems faced in achieving optimization, but have also been affected by changes in how scientists approach the issues. This shift is a response to internal and external pressures. The internal pressure is the jigsaw puzzle that is still hidden and unanswered within the given optimization paradigm, caused by approaches or methodologies that have not been able to explain observable phenomena or events, while the external pressure comes from the style of thought and the flow of thought. A flow of thought such as ontology is used to find a red thread between paradigm and phenomena \cite{pickard2007}, so that optimization studies can be clearly disclosed both in research opportunities and in applications. Existing red threads include heuristic rules, induction, and deduction. However, none of them is effective when applied directly to many phenomena, because
\begin{enumerate}
\item a heuristic rule requires the scientist to define a specific rule for each specific type of optimization problem, and is therefore not adaptive to different situations {\bf [Adaptation 1]},
\item induction trains a model individually and cannot be adapted to another one {\bf [Adaptation 2]}, and
\item deduction can deal with different instances simultaneously, but it cannot make use of prediction {\bf [Adaptation 3]}.
\end{enumerate}
One of the phenomena of optimization concerns the feasible solution (called an optimal solution) that minimizes or maximizes the objective function, where the feasible region and the objective function may or may not be convex. If (\ref{pers:utama}), viewed as an objective function (OF), cost function or energy function, is not convex, then there may be several {\it local optima} (minima or maxima): a local optimum $x^*$ is a point for which there exists some $\delta>0$ such that for all $x$ with $\|x-x^*\| \leq \delta$ the inequality $f(x^*)\leq f(x)$ (for a minimum) or $f(x^*)\geq f(x)$ (for a maximum) holds, whether or not $x^*$ is also a global optimum. Based on these conditions, there are two categories of optimization, which always use the term programming to emphasize the use of iteration, and its complexity, in obtaining the solution of a problem. The first category is convex programming, i.e., the study of optimization when the objective function (\ref{pers:utama}) is convex and the constraints, if any, also form a convex set. In general, convex programming is as follows \cite{burachik2008}.
\begin{definition} [Convex optima]
\label{def:ko}
Let $X$ be a real vector space and ${\cal X}\subset X$ a convex subset, together with a convex, real-valued function
\begin{equation}
\label{pers:convexoptima}
f:{\cal X}\rightarrow{\bf R}
\end{equation}
such that there exists ${\bf x}^*\in{\cal X}$ with $f({\bf x}^*) \leq f({\bf x})$ for all ${\bf x}\in {\cal X}$, i.e., the number $f({\bf x}^*)$ is smallest.
\end{definition}
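The convexity assumed in Definition \ref{def:ko} can be probed numerically. The sketch below, in plain Python with an illustrative quadratic (not an example from this paper), samples random pairs of points and checks the defining inequality $f(tx+(1-t)y)\leq t f(x)+(1-t)f(y)$:
\begin{verbatim}
# Sketch: sample-based check of the convexity inequality for one function.
import random

def f(x):
    return x * x + 2.0 * x + 5.0   # a convex quadratic (illustrative)

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    t = random.random()
    assert f(t * x + (1 - t) * y) <= t * f(x) + (1 - t) * f(y) + 1e-9
print("convexity inequality held on all sampled pairs")
\end{verbatim}
Such a test can only refute convexity, never certify it; for the programming models below, convexity is instead guaranteed by the structure of the problem.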
In the special case where (\ref{pers:utama}) is linear and the set of constraints (SC) is specified only by linear equalities and inequalities, the feasible region is a polyhedron, and a polytope if it is bounded. Specifically, such optimization problems form a polyhedron under the conditions (OF)-linear and (SC)-linear. Formally, this is as follows \cite{matousek2007,leunberger2008}.
\begin{definition} [Linear Programming]
\label{def:lp}
Linear programming (LP) is the class of problems in canonical form:
\begin{equation}
\label{pers:subjeklp}
\max {\bf c}^T{\bf x}
\end{equation}
subject to
\begin{equation}
\label{pers:kendalalp}
A{\bf x}\leq {\bf b}
\end{equation}
where ${\bf x}$ is the vector of variables, {\bf c} and {\bf b} are vectors of coefficients, and $A$ is a matrix of coefficients.
\end{definition}
\begin{lemma}
\label{lemma:lpco}
If the objective function (\ref{pers:subjeklp}) is linear and the constraints (\ref{pers:kendalalp}) form a convex set, then LP is a convex optima.
\end{lemma}
In case (\ref{pers:utama}) involves certain types of quadratic cone constraints, we have the following optimization problems \cite{mittelmann2003,kim2003}.
\begin{definition} [Second-Order Cone Programming]
\label{def:socp}
Second-Order Cone Programming (SOCP) is a problem of the form
\begin{equation}
\label{pers:subjeksocp}
\min f^Tx
\end{equation}
subject to
\begin{equation}
\label{pers:kendalasocp}
\begin{array}{rcl}
\|A_ix+b_i\|_2 &\leq& c_i^Tx+d_i, i=1,\dots,m\cr
Fx &=& g\cr
\end{array}
\end{equation}
where $f\in {\bf R}^n$, $A_i\in {\bf R}^{n_i\times n}$, $b_i\in {\bf R}^{n_i}$, $c_i\in {\bf R}^n$, $d_i\in {\bf R}$, $F\in {\bf R}^{p\times n}$, $g\in {\bf R}^p$, and $x\in {\bf R}^n$ is the optimization variable.
\end{definition}
\begin{lemma}
\label{lemma:socpco}
If the objective function (\ref{pers:subjeksocp}) is linear and the constraints (\ref{pers:kendalasocp}) form a convex set, then SOCP is a convex optima.
\end{lemma}
\begin{proposition}
\label{proposisi:socplp}
If $A_i=0$, $i=1,\dots,m$ in (\ref{pers:kendalasocp}), then SOCP reduces to LP.
\end{proposition}
Lemma \ref{lemma:lpco}, Lemma \ref{lemma:socpco} and Proposition \ref{proposisi:socplp} can be generalized into one framework, \emph{semidefinite programming} (SDP) \cite{kocvara2007,kocvara2009}, in which semidefinite matrices are the underlying variables, as follows.
\begin{definition}
\label{def:sdp}
Let $S^n$ be the space of all $n\times n$ real symmetric matrices. We equip this space with the inner product defined through the trace, $tr$, i.e.,
\begin{equation}
\label{pers:sedefinit}
tr(A^TB) = \langle A,B\rangle_{S^n} = \sum_{i=1,j=1}^n A_{ij}B_{ij}.
\end{equation}
A symmetric matrix is \emph{positive semidefinite} if all its eigenvalues are nonnegative.
\end{definition}
\begin{lemma}
\label{lemma:sdp}
If the problem built on (\ref{pers:sedefinit}) is convex, then optimization over positive semidefinite matrices is a convex optima.
\end{lemma}
Definition \ref{def:sdp} and Lemma \ref{lemma:sdp} generalize all of the above forms of convex programming. Thus LP, SOCP and SDP can be viewed as conic programs with the appropriate type of cone, as follows \cite{cezik2005,todd2008,warner2008}.
\begin{definition}
\label{def:co}
Let $X$ be a real vector space. A \emph{conic optimization} problem is a convex optima with a real-valued function
\begin{equation}
\label{pers:konikop}
f:C\rightarrow{\bf R}
\end{equation}
defined on a convex cone $C\subset X$, together with an affine subspace ${\cal H}$ defined by a set of affine constraints $h_i(x) = 0$, in which one seeks $x\in C\cap {\cal H}$ for which the number $f(x)$ is smallest.
\end{definition}
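To make the objects of Definition \ref{def:sdp} concrete, the sketch below (NumPy assumed available; the two symmetric $2\times 2$ matrices are illustrative) evaluates the trace inner product (\ref{pers:sedefinit}) and tests positive semidefiniteness through the eigenvalues:
\begin{verbatim}
# Sketch: trace inner product on S^n and an eigenvalue-based PSD test.
import numpy as np

A = np.array([[2.0, -1.0], [-1.0, 2.0]])    # symmetric, illustrative
B = np.array([[1.0,  0.5], [ 0.5, 3.0]])

inner = np.trace(A.T @ B)                   # tr(A^T B)
assert np.isclose(inner, np.sum(A * B))     # equals sum_ij A_ij * B_ij

eigs = np.linalg.eigvalsh(A)                # eigenvalues of a symmetric matrix
print(inner, eigs, bool(np.all(eigs >= 0))) # 7.0, [1. 3.], True (A is PSD)
\end{verbatim}
A full SDP solver is beyond the scope of such a sketch; the point is only that the inner product and the cone of positive semidefinite matrices are directly computable objects.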
In geometry, a monomial is a function $f: {\bf R}^n\rightarrow {\bf R}$ with dom $f={\bf R}_{++}^n$, i.e., $f(x) = c\,x_1^{a_1}x_2^{a_2}\cdots x_n^{a_n}$, where $c>0$ and $a_i\in {\bf R}$. Based on this concept, we define geometric programming as follows \cite{rajgopal2002,boyd2007,yang2010}.
\begin{definition} [Geometric programming]
\label{def:gp}
A geometric programming (GP) problem is an optimization problem of the form
\begin{equation}
\label{pers:subjekgp}
\min f_0(x)
\end{equation}
subject to
\begin{equation}
\label{pers:kendalagp}
\begin{array}{rcl}
f_i(x)&\leq& 1, i = 1,\dots,m\cr
h_i(x)&=& 1, i=1,\dots,p\cr
\end{array}
\end{equation}
where $f_0,\dots,f_m$ are posynomials and $h_1,\dots,h_p$ are monomials.
\end{definition}
\begin{lemma}
\label{lemma:gp}
GP is a convex optima if, after a change of variables, (\ref{pers:subjekgp}) and (\ref{pers:kendalagp}) become sums of exponentials of affine functions.
\end{lemma}
The concepts of optimization problems above fall within convex programming; we now define the concepts of other kinds of programming. One of them is quadratic programming, a special type of mathematical optimization problem \cite{kim2003}.
\begin{definition} [Quadratic Programming]
\label{def:qp}
Let ${\bf x} \in {\bf R}^n$, let the $n\times n$ matrix $Q$ be symmetric, and let ${\bf c}$ be any $n\times 1$ vector. Quadratic programming (QP) is to minimize (with respect to ${\bf x}$)
\begin{equation}
\label{pers:subjekqp}
f({\bf x}) = \frac{1}{2}{\bf x}^TQ{\bf x}+{\bf c}^T{\bf x}
\end{equation}
subject to one or more constraints of the form
\begin{equation}
\label{pers:kendalaqp}
\begin{array}{rcl}
A{\bf x}&\leq& {\bf b}\cr
E{\bf x}&=&{\bf d}\cr
\end{array}
\end{equation}
where ${\bf x}^T$ indicates the vector transpose of ${\bf x}$.
\end{definition}
The notation $A{\bf x}\leq {\bf b}$ means that every entry of the vector $A{\bf x}$ is less than or equal to the corresponding entry of the vector ${\bf b}$.
\begin{lemma}
If the matrix $Q$ is positive semidefinite, then (\ref{pers:subjekqp}) is a convex function and QP is a convex optima.
\end{lemma}
\begin{definition} [Nonlinear Programming]
\label{def:nlp}
\emph{\cite{leunberger2008}} An optimization model stated as a nonlinear programming problem can be written simply as
\begin{equation}
\label{pers:nonlinear1}
\max_{x\in X} f(x)
\end{equation}
to maximize some quantity such as product throughput, or
\begin{equation}
\label{pers:nonlinear2}
\min_{x\in X} f(x)
\end{equation}
to minimize a cost function, where $f:{\bf R}^n\rightarrow {\bf R}$, $X\subset {\bf R}^n$.
\end{definition}
All of the models above are deterministic optimization problems formulated with known parameters; moreover, the size of $X$ is bounded and the variables have a fixed setting with respect to one another. When the parameters are only known to lie within certain bounds, one approach to tackling such problems is called \emph{robust optimization}. However, real-world problems almost invariably include some unknown parameters. The goal of optimization is then to find a solution which is feasible for all conditions and situations and optimal in some sense. Therefore, one framework for modelling optimization problems involves uncertainty.
\begin{table}
\caption{Methods used to solve optimization problems.}
\label{tabel:metode}
\begin{center}
\begin{tabular}{|r|l|l|}\hline
id & Method & Description\cr\hline
1.
& active set \cite{murty1998} & A problem is defined using an objective function to\cr & & minimize or maximize, and a set of constraints $g_1(x)$\cr & & $\geq 0,\dots,g_k(x)\geq 0$ that define the feasible region, that \cr & & is, the set of all $x$ to search for the optimal solution.\cr & & Given a point $x$ in the feasible region, a constraint\cr & & $g_i(x)\geq 0$ is called active at $x$ if $g_i(x) = 0$ and \cr & & inactive at $x$ if $g_i(x) > 0$. \cr
2. & ant colony \cite{colorni1991} & A probabilistic technique for solving computational\cr & & problems which can be reduced to finding good paths \cr & & through graphs.\cr
3. & beam search \cite{kim2004,bautista2008} & A heuristic search algorithm that explores a graph by \cr & & expanding the most promising node in a limited set.\cr
4. & conjugate gradient & An algorithm for the numerical solution of particular \cr & \cite{dai2001} & systems of linear equations, namely those whose \cr & & matrix is symmetric and positive-definite.\cr
5. & cuckoo search \cite{yang2009} & An optimization algorithm inspired by the \cr & & obligate brood parasitism of some cuckoo species, which \cr & & lay their eggs in the nests of host birds \cr & & (of other species). \cr
6. & differential evolution & A method that optimizes a problem by iteratively \cr & \cite{zhang2008,zhang2009} & trying to improve a candidate solution with regard to \cr & & a given measure of quality.\cr
7. & dynamic relaxation & A numerical method which, among other things, can \cr & \cite{lawphongpanich2006} & be used to do ``form-finding'' for cable and fabric \cr & & structures.\cr
8. & ellipsoid \cite{grotschel1981} & An iterative method for minimizing convex functions.\cr
9. & evolution strategy & An optimization technique based on ideas of \cr & \cite{gonzalez1998} & adaptation and evolution. \cr
10. & firefly algorithm & A metaheuristic algorithm, inspired by the flashing \cr & \cite{lukasik2009,yang2010b} & behaviour of fireflies.\cr
11. & Frank-Wolfe \cite{chryssoverghi1997}& A procedure for solving quadratic programming \cr & & problems with linear constraints.\cr
12. & genetic algorithm & A search heuristic that mimics the process of natural\cr & \cite{onwubolu2003,dogan2004} & evolution. This heuristic is routinely used to generate\cr & & useful solutions to optimization and search problems.\cr
13. & gradient projection & An algorithm that can be used to optimize virtually \cr & \cite{guocheng2010} & any rotation criterion.\cr
14. & harmony search & A phenomenon-mimicking algorithm inspired by the\cr & \cite{rappaport2007,kaveh2009} & improvisation process of musicians for finding \cr & & a best harmony as global optimum.\cr
15. & hill climbing & A mathematical optimization technique which belongs \cr & \cite{greiner1996,lewis2009} & to the family of local search.\cr
16. & interior point & A linear or nonlinear programming method that\cr & \cite{forsgren2002,bonnans2006} & achieves optimization by going through the middle of \cr & & the solid defined by the problem rather than around \cr & & its surface. \cr\hline
\end{tabular}
\end{center}
\end{table}
Ontologically, in certain optimization problems the unknown optimal solution might not be a number or vector, but rather a continuous quantity \cite{jeyakumar2008,pham2009}. This problem arises because a continuous quantity cannot be determined by a finite number of degrees of freedom \cite{bot2009,chen2005,mishra2008}.
However, such problems can be more challenging than finite-dimensional ones \cite{hinze2009}; in practice, no attribute of an entity depends on continuous time forever, nor are there events that hold over arbitrarily long times. The disciplines that study infinite-dimensional optimization problems are the \emph{calculus of variations} \cite{buttazzo2009,chen2005}, \emph{optimal control} \cite{chinchuluun2010} and \emph{shape optimization} \cite{bucur2005,kovtumenko2006,eppler2008}. In constraint-based optimization \cite{hinze2009}, a solution is a vector of variables that satisfies all constraints, and constraint satisfaction is the process of finding a solution to a set of constraints that impose conditions the variables must satisfy. However, such solutions depend on Adaptation 1: constraint satisfaction is typically identified with problems based on constraints over a finite domain, or based on patterns derived from experience in classification, which yields only local solutions.
\begin{table}
\caption{Methods used to solve optimization problems (continued).}
\label{tabel:metode2}
\begin{center}
\begin{tabular}{|r|l|l|}\hline
id & Method & Description\cr\hline
17. & IOSO & A multiobjective, multidimensional nonlinear \cr & & optimization technology.\cr
18. & line search \cite{belegundu2004,shi2005} & One of two basic iterative approaches to finding a\cr & & local minimum ${\bf x}^*$ of an objective function $f:{\bf R}^n\rightarrow {\bf R}$.\cr
19. & Nelder-Mead \cite{price2002,burmen2005} & A nonlinear optimization technique, which is a \cr & & well-defined numerical method for twice differentiable \cr & & and unimodal problems.\cr
20. & Newton \cite{mangasarian2004,fujishige2009} & A method for finding successively better approximations\cr & & to the zeroes (or roots) of a real-valued function.\cr
21. & particle swarm \cite{wu2009} & A computational method that optimizes a problem by\cr & & iteratively trying to improve a candidate solution \cr & & with regard to a given measure of quality.\cr
22. & quantum annealing & A general method for finding the global minimum of a\cr & & given objective function over a given set of candidate\cr & & solutions (the search space), by a process analogous \cr & & to quantum fluctuations.\cr
23. & quasi-Newton \cite{jiang2003} & An algorithm for finding local maxima and minima of \cr & & functions.\cr
24. & simplex \cite{ehrgott2007} & A popular algorithm for numerically solving linear\cr & & programming problems. \cr
25. & simulated annealing & A generic probabilistic metaheuristic for the global\cr & \cite{yen2004} & optimization problem of applied mathematics, namely \cr & & locating a good approximation to the global optimum of a\cr & & given function in a large search space.\cr
26. & stochastic tunneling & An approach to global optimization based on the Monte \cr & \cite{oblow2001} & Carlo method-sampling of the function to be minimized.\cr
27. & subgradient & Iterative methods for solving convex minimization \cr & \cite{gokbayrak2009} & problems.\cr
28. & tabu search & A mathematical optimization method for local search \cr & \cite{nowicki2005} & techniques.\cr\hline
\end{tabular}
\end{center}
\end{table}
In the stochastic framework, some of the constraints or parameters depend on random variables \cite{pham2009,schneider2006,zhigljavsky2008}; that is, these models of optimization assume that the probability distributions governing the data are known or can be estimated.
The goal is then to find some policy that is feasible for all (or almost all) possible data instances and maximizes the expectation of some function of the decisions and the random variables. Stochastic programming has applications in a broad range of areas (most of them subject to uncertainty): finance, transportation, social modalities, or energy. However, the random event typically affects only the first-stage activity. In theoretical computer science, many themes of optimization involve combinatorics \cite{avis2005,du2005,iglesias2005}, related to operations research, algorithm theory, and computational complexity theory. The goal is again to find the best solution. However, these optimization problems deal only with graphs, matroids, and related structures, in which the set of feasible solutions is discrete or can be reduced to a discrete one \cite{kasperski2008,lozovanu2009,papadopulos2009}. In addition, metaheuristics are designed for combinatorial optimization: they optimize a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality \cite{doerner2007,rego2005,siarry2008}, where an optimal solution is sought over a discrete search space. However, metaheuristics do not guarantee that an optimal solution is ever found. For this reason there are many implementations of metaheuristics in stochastic optimization \cite{sorensen2003,tseng2006,lewis2007}. Many models of optimization have been defined, and many methods for solving them have been created, but to what end? In modelling optimization, the existence of derivatives is not always assumed, and therefore many methods were devised for specific situations. All methods are created based on the smoothness of the objective function (\ref{pers:utama}): the class of combinatorial methods, the class of derivative-free methods, the class of first-order methods, and the class of second-order methods; see Tables \ref{tabel:metode} and \ref{tabel:metode2}. However, all methods rely on convexity \cite{burachik2008}: should the objective function be convex over the region of interest, then any local minimum will also be a global minimum. So, the ontology of optimization models and their methods is used to chain one optimization problem to another into a structure of knowledge, so that knowledge based optimization exists, i.e., a systematic approach for the study of optimization indiscernibility, in which the possibilities of the attributes (as independent variables) come into play \cite{nasution2010}.
\section{Discussion}
All models of optimization (Definitions \ref{def:lp}--\ref{def:nlp}) come closest to Adaptation 2, and so do the methods for solving them. Few methods have shown the possibility of a solution using Adaptation 3, and among those only under implicit conditions. Of course, rationally, many models of optimization have been defined in one space ${\bf R}^n$, where a function is employed to relate one (maybe two) variable to many variables. Ontologically, however, the real world consists of relations between multiple spaces with a variety of dimensions. For example, each academic person is related to attributes: name, age, salary, papers, etc. Each paper has a set of descriptions as attributes, i.e., title, venue, co-author, publisher, pages, year, etc. In turn, a publisher, as an entity, is related to: name, city, country, etc.
There are attributes of a discrete type (the name of a person) that nevertheless have a probabilistic relation with another attribute in a different space (the name of a co-author); for instance, there is a stochastic relation between the name of a person and the name of a co-author lying (as a social network) on the title of a paper, obtained using a Bayesian approach \cite{nasution2010b}. Optimization problems are therefore not only about looking for answers to the questions:
\begin{enumerate}
\item Is it possible to satisfy all constraints?
\item Does an optimum exist?
\item How can an optimum be found?
\item How does the optimum change if the problem changes?
\end{enumerate}
In contrast, we also ask the question:
\begin{center}
(MQ)\\
{\it Why are so many models and methods of optimization created and so few used}?
\end{center}
We start this discussion with an analogy. Optimization problems are often expressed as $\min_{x\in {\bf R}} f(x)$ or $\max_{x\in{\bf R}} f(x)$, where $x$ certainly ranges over the reals. What is the answer for ${\rm argmin}_{x\in(-\infty,-1]} f(x)$? This asks for one (or more) values of $x$ in the interval $(-\infty,-1]$ that minimize the function; the answer may well be undefined. This is the satisfiability problem, i.e., the problem of finding any feasible solution at all without regard to the objective value (and it does not answer (MQ)). It means that a feasible solution by itself is not reliable. Why? Many optimization methods need to start from a feasible point. What then? One way to obtain such a point is to relax the feasibility conditions using a slack variable: with enough slack, any starting point is feasible. How? By minimizing that slack variable until the slack is null or negative. So, where does the goal of optimization stand?
\begin{theorem} [Extreme Value of Karl Weierstrass]
If a real-valued function $f$ is continuous in the closed and bounded interval $[a,b]$, then $f$ must attain its maximum and minimum value, each at least once.
\end{theorem}
\begin{table}
\caption{Description of linear programming.}
\label{des:lp1}
\begin{center}
\begin{tabular}{|rcl|}\hline
Name && {\bf Linear Programming} [Convex Optima]\cr\hline
Description && A mathematical method for determining a way to achieve the best \cr && outcome in a given mathematical model for some list of requirements\cr && represented as linear equations.\cr\hline
Formula && {\it Canonical form}: Definition \ref{def:lp}\cr && Standard form:\cr && 1. A linear function to be maximized\cr && ~~~e.g., maximize ${\bf c}_1{\bf x}_1+{\bf c}_2{\bf x}_2$\cr && 2. Problem constraints of the following form, i.e., \cr && ~~~$a_{1,1}{\bf x}_1+a_{1,2}{\bf x}_2\leq {\bf b}_1$\cr && ~~~$a_{2,1}{\bf x}_1+a_{2,2}{\bf x}_2\leq {\bf b}_2$\cr && ~~~$a_{3,1}{\bf x}_1+a_{3,2}{\bf x}_2\leq {\bf b}_3$\cr && 3. Non-negative variables, e.g., ${\bf x}_1\geq 0$, ${\bf x_2}\geq 0$.\cr && 4. Non-negative right hand side constants ${\bf b}_i\geq 0$.\cr\hline
\end{tabular}
\end{center}
\end{table}
This theorem states explicitly that an optimum solution exists under these conditions, i.e., there exist numbers $c$ and $d$ in $[a,b]$ such that $f(c)\leq f(x)\leq f(d)$ for all $x\in [a,b]$.
\begin{theorem} [Fermat theorem]
Let $f:(a,b)\rightarrow{\bf R}$ be a function and suppose that $x_0\in(a,b)$ is a local extremum of $f$. If $f$ is differentiable at $x_0$ then $f'(x_0) = 0$.
\end{theorem}
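As a small computational companion to Fermat's theorem and to the second-order test discussed next, the sketch below (plain Python; the cubic $f(x)=x^3-3x$ is an illustrative choice) locates a stationary point by Newton iteration on $f'(x)=0$ and classifies it by the sign of $f''$:
\begin{verbatim}
# Sketch: find a stationary point (f'(x) = 0) and apply the second-order test.

def f(x):   return x**3 - 3.0 * x      # illustrative; f' = 3x^2 - 3, f'' = 6x
def df(x):  return 3.0 * x**2 - 3.0
def d2f(x): return 6.0 * x

x = 2.0                                 # starting guess
for _ in range(50):                     # Newton's method applied to f'
    x = x - df(x) / d2f(x)

kind = "local minimum" if d2f(x) > 0 else "local maximum or inconclusive"
print(x, kind)                          # -> 1.0, a local minimum of f
\end{verbatim}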
The last theorem states that optima of unconstrained problems are found at stationary points, where the first derivative, or the gradient, of the objective function is zero. In general, optima may be found at critical points, where the first derivative or gradient of the function is zero or undefined, or on the boundary of the chosen set. What then? The first derivative test only identifies points that might be optima; it cannot distinguish a point which is a minimum from one that is a maximum or one that is neither. How? Differentiate twice and check the second derivatives (the matrix of second derivatives) for unconstrained problems. The conditions that distinguish maxima and minima from other stationary points are therefore called the second-order conditions. Inequality-constrained optimization problems, on the other hand, are handled through Lagrange multipliers, or by calculating the complementary slackness conditions based on the Karush-Kuhn-Tucker conditions, and then checking the matrix of second derivatives of the objective function and of the constraints. So, how many steps are needed to find an optimum solution in order for it to be applicable? The convexity of (\ref{pers:convexoptima}) in Definition \ref{def:ko} makes the powerful tools of convex analysis applicable; in particular, the theory of subgradients leads to a particularly satisfying theory of necessary and sufficient conditions for optimality, based on the following theorem.
\begin{theorem} [Hahn-Banach]
Given a vector space $V$ over the field ${\bf R}$ of real numbers, a function $f:V\rightarrow {\bf R}$ is called sublinear if $f(\gamma x) = \gamma f(x)$ for any $\gamma\in {\bf R}_+$ and any $x\in V$, and $f(x+y)\leq f(x)+f(y)$ for any $x,y\in V$.
\end{theorem}
In the ontology, the applicability of linear programming is demonstrated starting with Table \ref{des:lp1}. In general, the standard form is the usual and most intuitive way of describing a convex minimization problem: (1) a convex function $f(x) : {\bf R}^n\rightarrow {\bf R}$ to be minimized over the variable $x$; (2) the constraints, if any: (a) inequality constraints of the form $g_i(x)\leq 0$, where the functions $g_i$ are convex; (b) equality constraints of the form $h_i(x)=0$, where the functions $h_i$ are linear.
\begin{table}
\caption{Description of linear programming.}
\label{des:lp2}
\begin{center}
\begin{tabular}{|rcl|}\hline
Name && {\bf Linear Programming} [Convex Optima]\cr\hline
Description && A type of convex programming: an optimization model where the objective \cr && is affine and the set of constraints is affine.\cr\hline
Formula && 1. Primal problem:\cr && ~~~Maximize ${\bf c}^T{\bf x}$ subject to $A{\bf x}\le {\bf b}$, ${\bf x}\geq 0$.\cr && 2. Symmetric dual problem:\cr && ~~~Minimize ${\bf b}^T{\bf y}$ subject to $A^T{\bf y}\geq {\bf c}$, ${\bf y}\geq 0$.\cr && 3. Alternative primal formulation:\cr && ~~~Maximize ${\bf c}^T{\bf x}$ subject to $A{\bf x}\le {\bf b}$.\cr && 4. Asymmetric dual problem:\cr && ~~~Minimize ${\bf b}^T{\bf y}$ subject to $A^T{\bf y} = {\bf c}$, ${\bf y}\geq 0$. \cr\hline
\end{tabular}
\end{center}
\end{table}
In this optimization model, duality theory generalizes convex programming in a way that makes the methods computationally effective. Convex programming minimizes convex functions, so it is also useful for maximizing concave functions; see Table \ref{des:lp1} and Table \ref{des:lp2}: the problem of maximizing a concave function can be reformulated equivalently as a problem of minimizing a convex function.
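The primal/dual pair in Table \ref{des:lp2} can also be checked numerically. The sketch below assumes SciPy is available and uses illustrative data (a classical two-variable LP, not an example from this paper); it confirms that the primal and dual optimal values coincide:
\begin{verbatim}
# Sketch: solve an LP primal/dual pair and compare the optimal values.
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

# Primal: max c^T x  s.t. A x <= b, x >= 0  (linprog minimizes, hence -c).
primal = linprog(-c, A_ub=A, b_ub=b)
# Dual:   min b^T y  s.t. A^T y >= c, y >= 0 (rewrite >= as <= by sign flip).
dual = linprog(b, A_ub=-A.T, b_ub=-c)

print(-primal.fun, dual.fun)   # both 36.0: strong duality for this instance
\end{verbatim}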
Thus, a convex minimization problem can be written as $\min_x f(x)$ subject to $g_i(x)\leq 0$, $i=1,\dots,m$, and $h_i(x) = 0$, $i=1,\dots,p$, where every equality constraint $h(x) = 0$ can be equivalently replaced by the pair of inequality constraints $h(x) \leq 0$ and $-h(x)\leq 0$. For this reason, $h_i(x) = 0$ has to be affine as opposed to merely convex: if $h_i(x)$ is convex, then $h_i(x)\leq 0$ is a convex constraint, but $-h_i(x)\leq 0$ is concave. Therefore, the only way for $h_i(x) = 0$ to be convex is for $h_i(x)$ to be affine. The way is a transformation! Can we see that optimization problems are really just one problem? What is the answer? For linear programming we have a solution with many methods; see Table \ref{des:lp3}.
\begin{table}
\caption{Description of linear programming.}
\label{des:lp3}
\begin{center}
\begin{tabular}{|rcl|}\hline
Name && {\bf Linear Programming} [Convex Optima]\cr\hline
Description && A type of convex programming: an optimization model where the objective \cr && is affine and the set of constraints is affine.\cr\hline
Formula && Canonical form $\models$\cr && ~~~~~~~~~~1. Primal problem\cr && ~~~~~~~~~~2. Dual problem\cr\hline
\end{tabular}
\begin{tabular}{|l|l|l|l|}\hline
& Founder & Year & Solution Technique\cr \hline
Model & Leonid Kantorovich & 1939 & \cr
Methods & George B. Dantzig & 1947 & Simplex Method\cr
& John von Neumann & 1947 & Duality\cr
& Leonid Khachiyan & 1979 & Polynomial Time\cr
& Narendra Karmarkar & 1984 & Interior point method\cr\hline
\end{tabular}
\end{center}
\end{table}
Consider an arbitrary maximization (or minimization) problem in which the objective function $f({\bf x},{\bf r})$ depends on some parameters ${\bf r}$, and set $f^*({\bf r}) = \max_{\bf x} f({\bf x},{\bf r})$. The function $f^*({\bf r})$ is the problem's optimal-value function: it gives the maximized (or minimized) value of the objective function $f({\bf x},{\bf r})$ as a function of its parameter ${\bf r}$.
\begin{theorem} [Envelope theorem]
Let $\max_{\bf x} f({\bf x},{\bf r})$ s.t. $g({\bf x},{\bf r}) = {\bf 0}$, with Lagrangian function ${\cal L}({\bf x},{\bf r}) = f({\bf x},{\bf r}) - \lambda g({\bf x},{\bf r})$, where $\lambda = (\lambda_1,\dots,\lambda_n)$, $g({\bf x},{\bf r}) = (g_1({\bf x},{\bf r}),\dots,g_n({\bf x},{\bf r}))$, ${\bf 0} = (0,\dots,0)\in {\bf R}^n$. Then
\begin{equation}
\frac{\partial f^*({\bf r})}{\partial r_i} = \frac{\partial{\cal L}({\bf x},{\bf r})}{\partial r_i}\Big|_{{\bf x}={\bf x}^*({\bf r}),\lambda = \lambda({\bf r})}.
\end{equation}
\end{theorem}
This theorem expresses how the value of an optimal solution changes when an underlying parameter changes, while the continuity of the optimal solution as a function of the underlying parameters is described by the following result.
\begin{theorem} [Maximum theorem]
Let $X$ and $\Theta$ be metric spaces, $f:X\times\Theta\rightarrow {\bf R}$ be a function jointly continuous in its two arguments, and ${\cal C}:\Theta\rightarrow X$ be a compact-valued correspondence. For $x\in X$ and $\theta\in \Theta$, let $f^*(\theta) = \max\{f(x,\theta)|x\in C(\theta)\}$ and $C^*(\theta) = {\rm arg}\max\{f(x,\theta)|x\in C(\theta)\} = \{x\in C(\theta)|f(x,\theta) = f^*(\theta)\}$. If $C$ is continuous at some $\theta$, then $f^*$ is continuous at $\theta$ and $C^*$ is non-empty, compact-valued, and upper hemicontinuous at $\theta$.
\end{theorem}
Consider also the restriction of a convex function to a compact convex set: on that set the function attains its constrained maximum on the boundary.
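A quick numerical check of the envelope theorem in its unconstrained special case (no constraint $g$, so the Lagrangian reduces to $f$ itself) is sketched below in plain Python; the quadratic family $f(x,r)=-x^2+rx$ is an illustrative choice with maximizer $x^*(r)=r/2$:
\begin{verbatim}
# Sketch: compare d f*/dr (finite difference) with df/dr at x = x*(r).

def f(x, r):
    return -x * x + r * x       # maximizer x*(r) = r/2, f*(r) = r^2/4

def x_star(r):
    return r / 2.0

def f_star(r):
    return f(x_star(r), r)

r, h = 1.7, 1e-6
lhs = (f_star(r + h) - f_star(r - h)) / (2 * h)  # derivative of f*(r)
rhs = x_star(r)                                  # df/dr = x, taken at x*(r)
print(lhs, rhs)                                  # both approximately 0.85
\end{verbatim}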
For example, Definition \ref{def:gp} states that the objective function and the inequality constraints of GP can be expressed as \emph{posynomials} and the equality constraints as \emph{monomials}. Both posynomials and monomials can be transformed so that the problem becomes a convex program (Lemma \ref{lemma:gp}). On the other side, constrained problems can often be transformed into unconstrained problems by using Lagrange multipliers. Consider a convex minimization problem given in standard form by a cost function $f(x)$ and inequality constraints $g_i(x)\leq 0$, where $i=1,\dots,m$. Then the domain ${\cal X}$ is ${\cal X} = \{x\in X|g_1(x)\leq 0,\dots,g_m(x)\leq 0\}$. The Lagrangian function for this problem is $L(x,\lambda_0,\dots,\lambda_m) =\lambda_0f(x)+\lambda_1g_1(x)+\dots+\lambda_mg_m(x)$. For each point $x \in X$ that minimizes (\ref{pers:utama}), there exist real numbers $\lambda_0,\dots,\lambda_m$, called Lagrange multipliers, that satisfy the following conditions simultaneously: (a) $x$ minimizes $L(y,\lambda_0,\dots,\lambda_m)$ over all $y\in X$; (b) $\lambda_0\geq 0$, $\lambda_1\geq 0$, $\dots$, $\lambda_m\geq 0$, with at least one $\lambda_k>0$; and (c) $\lambda_1g_1(x) = 0$, $\dots$, $\lambda_mg_m(x) = 0$ (complementary slackness). For instance, if there is a strictly feasible point, namely a point $z$ satisfying $g_1(z)<0,\dots, g_m(z)<0$, then one may take $\lambda_0=1$. Conversely, if some $x\in X$ satisfies (a)--(c) for $\lambda_0,\dots,\lambda_m$ with $\lambda_0=1$, then $x$ is certain to minimize (\ref{pers:utama}). Therefore,
\begin{lemma}
For convex minimization the following three statements hold.
\begin{enumerate}
\item If a local minimum exists, then it is a global minimum.
\item The set of all minima is convex.
\item For a strictly convex function, if the function has a minimum, then the minimum is unique.
\end{enumerate}
\end{lemma}
\begin{proposition}
The problems of least squares, LP, QP, conic optimization, GP, SOCP, SDP, quadratically constrained quadratic programming, and entropy maximization are convex optima if and only if they can be transformed into convex minimization problems by changing variables.
\end{proposition}
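Conditions (a)--(c) can be verified numerically on a toy convex problem. The sketch below (NumPy assumed; the objective, the single constraint, and the known minimizer are illustrative data, not taken from this paper) checks stationarity of the Lagrangian, which certifies (a) since $L$ is convex in $y$, together with the sign condition (b) and complementary slackness (c):
\begin{verbatim}
# Sketch: check the Lagrangian conditions at a known minimizer of
#   minimize (x1-2)^2 + (x2-2)^2  subject to  x1 + x2 - 2 <= 0,
# whose solution is x = (1, 1) with multipliers lambda_0 = 1, lambda_1 = 2.
import numpy as np

x_opt = np.array([1.0, 1.0])
lam = 2.0

def grad_f(x): return np.array([2 * (x[0] - 2), 2 * (x[1] - 2)])
def g(x):      return x[0] + x[1] - 2.0
grad_g = np.array([1.0, 1.0])

stationarity = grad_f(x_opt) + lam * grad_g     # gradient of L at x_opt
print(stationarity, lam >= 0, lam * g(x_opt))   # -> [0. 0.]  True  0.0
\end{verbatim}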
The last proposition is typically interpreted as providing conditions for optimization problems, whose ontology can be described as follows:
\[ \begin{array}{rclcccc} {\rm Optimization~problems} &\models& {\rm Convex~optima} &\dashv & &\cr &\models& {\rm Conic~Programming} &\uparrow&\dashv& \cr &\models& {\rm LP} &{\rm Lemma~1}&\uparrow&\dashv\cr &\models& {\rm SOCP} &{\rm Lemma~2}&\uparrow&{\rm Prop.~1}\cr &\models& {\rm SDP} &{\rm Lemma~3}&\perp&\cr &\models& {\rm GP} &{\rm Lemma~4}& &\cr &\models& {\rm QP} &{\rm Lemma~5}& &\cr &\models& {\rm Non-LP} & & &\cr &\models& {\rm Stochastic~Programming}& & &\cr &\models& {\rm Fuzzy~Programming}& & &\cr &\models& {\rm ?~Programming}& & &\cr {\rm Opt.~Indiscernibility}&\models& {\rm Incomplete~Programming}& & &\cr \end{array} \]
To complete the interpretation of optimization problems, and to understand what is incomplete in them, some considerations have been formulated as advice \cite{landry1996}, namely: (a) the optimization model should be developed in close contact with the strategic stakeholders, with an understanding of the organization or environment; (b) the optimization model should be designed to adapt to the task at hand and to the cognitive capacity of the stakeholders; (c) the model should become familiar with the various logics and preferences prevailing in the organization or in many other environments; (d) the optimization problems must be formulated in a comparable way and within a framework in which many methods can solve them; (e) the optimization model can be prepared to be modified or developed into a new version, changing the framework in accordance with any method; (f) each model can offer options for selecting the decisions to be made; (g) the models can make explicit what would otherwise remain implicit.
\begin{conjecture}
Optimization under incompleteness has an optimum solution.
\end{conjecture}
\section{Conclusion}
Throughout, the ontology of optimization problems has been in close connection with the principles and findings of science, philosophy, and the arts. It can be noticed that optimization problems are a reflection of pictures of entities, where ontology is the beginning of philosophy, the sketch of art, and the formalization in science, generalizing the knowledge behind a model of optimization.
\begin{remark}
The complete is the dual of the incomplete.
\end{remark}
\begin{thebibliography}{100}
\bibitem{abraham2006} Abraham, A., Grosan, C. and Ramos V. 2006. {\it Stigmergic Optimization}. Springer-Verlag.
\bibitem{alves2008} Alves, C. J. S., Pardalos, P. M., Vicente, L. N. (Eds). 2008. {\it Optimization in Medicine}. Springer-Verlag.
\bibitem{audet2005} Audet, C., Hansen, P. and Savard, G. 2005. {\it Essays and Surveys in Global Optimization}. Springer-Verlag.
\bibitem{avis2005} Avis, D., Hertz, A. and Marcotte, O. 2005. {\it Graph Theory and Combinatorial Optimization}.
\bibitem{baker2007} Baker, E. K., Joseph, A., Mehrotra, A. and Trick M. A. (Eds.). 2007. {\it Extending the Horizons: Advances in Computing, Optimization, and Decision Technologies}. Springer-Verlag.
\bibitem{barros2010} Barrow, M. F. M., Guilherme, J. M. C. and Horta, N. C. G. 2010. {\it Analog Circuits and Systems Optimization Based on Evolutionary Computation Techniques}. Springer-Verlag.
\bibitem{bartholomew2008} Bartholomew-Biggs, M. 2008. {\it Nonlinear Optimization with Engineering Applications}. Springer-Verlag.
\bibitem{battiti2008} Battiti, R., Brunato, M. and Mascia, F. 2008. {\it Reactive Search and Intelligent Optimization}. Springer-Verlag.
\bibitem{bautista2008} Bautista, J., Pereira, J. and Adenso-Diaz, B. 2008. A beam search approach for the optimization version of car sequencing problem. {\it Ann Oper Res}, 159: 233-244. \bibitem{beckfl2010} Beck Fl, A. C. S. and Carro L. 2010. {\it Dynamic Reconfigurable Architectures and Transparent Optimization Techniques: Automatic Acceleration of Software Execution}. Springer-Verlag. \bibitem{belegundu2004} Belegundu, A. D. Damle, A., Rajan, S. D., Dattaguru, B. and Ville, J. St. 2004. Parallel line search in method of feasible directions. {\it Optimization and Engineering}, 5: 379-388. \bibitem{bendsoe2006} Bends\oe, M. P., Olhoff, N. and Sigmund, O. 2006. {\it IUTAM Symposium on Topological Design Optimization of Structures, Machines and Materials: Status and Perspectives}. Springer-Verlag. \bibitem{bock2005} Bock, H. G., Kostina, E., Phu, H. X. and Rannacher, R. (Eds.). 2005. {\it Modeling, Simulation and Optimization of Complex Processes}. Springer-Verlag. \bibitem{bonnans2006} Bonnans, J. F., Gilbert, J. C., Lemar\'{e}chal, C. and Sagastiz\'{a}bal, C. A. 2006. {\it Numerical Optimization: Theoretical and Practical Aspects}. Springer-Verlag. \bibitem{bot2009} Bot, R. I., Grad, S.-M. and Wanka, G. 2009. {\it Duality in Vector Optimization}. Springer-Verlag. \bibitem{bot2010} Bot, R. I. 2010. {\it Conjugate Duality in Convex Optimization}. Springer-Verlag. \bibitem{boukas2005} Boukas, El-K\'{e}bir and Malham\'{e}, R. P. 2005. {\it Analysis, Control, and Optimization of Complex Dynamic Systems}. Springer-Verlag. \bibitem{boyd2007} Boyd, S., Kim, S.-J., Vandenberghe, L. and Hassibi, A. 2007. A tutorial on geometric programming. {\it Optim. Eng.} 8: 67-127. \bibitem{bucur2005} Bucur, D. and Buttazzo, G. 2005. {\it Variational Methods in Shape Optimization Problems}. Birkh\"{a}user: Boston. \bibitem{burachik2008} Burachik, R. S., and Iusem, A. N. 2008. {\it Set-Valued Mappings and Enlargements of Monotone Operators}. Springer-Verlag. \bibitem{burmen2005} B\"{u}rmen, \'{A}., Puhan, J. and Tuma, T. 2005. Grid restrained Nelder-Mead Algorithm. {\it Computational Optimization and Applications}, 34: 359-375. \bibitem{buttazzo2009} Buttazzo, G. and Frediani, A. (Eds.) 2009. {\it Variational Analysis and Aerospace Engineering}. Springer-Verlag. \bibitem{cambini2009} Cambini, A. and Martein, L. 2009. {\it Generalized Convexity and Optimization: Theory and Applications}. Springer-Verlag. \bibitem{cascetta2009} Cascetta, E. 2009. {\it Transportation Systems Analysis: Models and Applications}. Springer-Verlag. \bibitem{cezik2005} Cezik, M. T. and Iyengar, G. 2005. Cuts for mixed 0-1 conic programming. {\it Math. Program., Ser. A}, 104: 179-202. \bibitem{chaovalitwongse2009} Chaovalitwongse, W., Furman, K. C. and Pardalos, P. M. (Eds.). 2009. {\it Optimization and Logistics Challenges in the Enterprise}. Springer-Verlag. \bibitem{chaovalitwongse2010} Chaovalitwongse, W., Pardalos, P. M. and Xanthopoulos, P. 2010. {\it Computational Neuroscience}. Springer-Verlag. \bibitem{chen2005} Chen, G.-y., Huang, X. and Yang, X. 2005. {\it Vector Optimization: Set-Valued and Variational Analysis}. Springer-Verlag. \bibitem{chen2006} Chen, L. Z., Nguan, S. K. and Chen, X. D. 2006. {\it Modelling and Optimization of Biotechnological Processes: Artificial Intelligence Approaches}. Springer-Verlag. \bibitem{chinchuluun2008} Chinchuluun, A., Pardalos, P. M., Migdalas, A., and Pitsoulis, L. (Eds.). 2008. {\it Pareto Optimality, Game Theory and Equilibria}. Springer-Verlag. 
\bibitem{chinchuluun2010} Chinchuluun, A., Pardalos, P. M., Enkhbat, R. and Tseveendorj, I. 2010. {\it Optimization and Optimal Control: Theory and Applications}. Springer-Verlag. \bibitem{christensen2009} Christensen, P. W. and Klarbring, A. 2009. {\it An Introduction to Structural Optimization}. Springer-Verlag. \bibitem{chryssoverghi1997} Chryssoverghi, I., Bacopoulos, A., Kokkinis, B. and Coletsos, J. 1997. Mixed Frank-Wolfe Penalty method with applications to nonconvex optimal control problems. {\it Journal of Optimization Theory and Applications}, 94(2): 311-334. \bibitem{ciegis2009} \v{C}iegis, R., Henty, D., K\.{a}gstr\"{o}m, B., and \v{Z}ilinskas, J. 2009. {\it Parallel Scientific Computing and Optimization: Andvances and Applications}. Springer-Verlag. \bibitem{colorni1991} Colorni, A. Dorigo, M. 1991. {\it Distributed Optimization by Ant Colonies}, France: Elsevier Publiching. \bibitem{dai2001} Dai, Y. H. and Yuan, Y. 2001. An efficient hybrid conjugate gradient method for uncontrained optimization. {\it Annals of Operations Research}, 103: 33-47. \bibitem{dempe2006} Dempe, S. and Kalashnikov, V. (Eds.). 2006. {\it Optimization with Multivalued Mappings: Theory, Applications, and Algorithms}. Springer-Verlag. \bibitem{diwekar2008} Diwekar, U. 2008. {\it Introduction to Applied Optimization}. Springer-Verlag. \bibitem{dostal2009} Dost\'{a}l. Z. 2009. {\it Optimal Quadratic Programming Algorithms}. Springer-Verlag. \bibitem{doerner2007} Doerner, K. F., Gendreau, M., Greistorfer, P., Gutjahr, W. J., Hartl, R. F. and Reimann, M. 2007. {\it Metaheuristics: Progress in Complex Systems Optimization}. Springer-Verlag. \bibitem{dogan2004} Do\v{g}an, A. and \"{O}zguner, F. 2004. Genetic algorithm based scheduling of meta-tasks with stochastic execution times in heterogeneous computing systems. {\it Cluster Computing}, 7: 177-190. \bibitem{du2005} Du, D.-Z. and Pardalos, P. M. 2005. {\it Handbook of Combinatorial Optimization: Supplement Volume B}. Springer-Verlag. \bibitem{ebendt2005} Ebendt, R., Fey, G. and Drechsler, R. 2005. {\it Advanced BDD Optimization}. Springer-Verlag. \bibitem{ehrgott2007} Ehrgott, M., Puerto, J. and Rodriquez-Chia, A. M. 2007. Primal-dual simplex method for multiobjective linear programming. {\it J. Optim Theory Appl}, 134: 483-497. \bibitem{eichfelder2008} Eichfelder, G. 2008. {\it Adaptive Scalariation Methods in Multiobjective Optimization}. Springer-Verlag. \bibitem{elgamel2006} Elgamel, M. A. and Bayoumi, M. A. 2006. {\it Interconnect Noise Optimization in Nanometer Technologies}. Springer-Verlag. \bibitem{eppler2008} Eppler, K., Harbrecht, H. and Mommer, M. S. 2008. A new fictitious domain method in shape optimization. {\it Comput Optim Appl}, 40: 281-298. \bibitem{feoktistov2006} Feoktistov, V. 2006. {\it Differential Evolution: In Search of Solutions}. Springer-Verlag. \bibitem{fiedler2006} Fiedler, M., Nedoma, J., Ram\'{i}k, J., John, J. and Zimmermann, K. 2006. {\it Linear Optimization Problems with Inexact Data}. Springer-Verlag. \bibitem{fisch2003} Fisch, J. H. 2003. Optimal dispersion of R\&D activities in multinational corporations with a genetic algorithm. {\it Research Policy}, 32: 1381-1396. \bibitem{forsgren2002} Forsgren, A., Gill, P. E. and Wright, M. H. 2002. Interior Methods for Nonlinear Optimization. {\it SIAM Rev.}, 44: 525-597. \bibitem{friesz2010} Friesz, T. L. 2010. {\it Dynamic Optimization and Differential Games}. Springer-Verlag. \bibitem{fujishige2009} Fujishige, S., Hayashi, T., Yamashita, K. and Zimmermann, U. 2009. 
Zonotopes and the LP-Newton method. {\it Optim Eng}, 10: 193-205. \bibitem{gao2009} Gao, D. Y. and Sherali, H. D. 2009. {\it Advances in Applied Mathematics and Global Optimization}. Springer-Verlag. \bibitem{geem2009} Geem, Z. W. (Ed.). 2009. {\it Harmony Search Algorithms for Structural Design Optimization}. Spriger-Verlag. \bibitem{geerdes2008} Geerdes, H.-F. 2008. {\it UMTS Radio Network Planning: Mastering Cell Coupling for Capacity Optimization}. Springer-Verlag. \bibitem{greiner1996} Greiner, R. 1996. PALO: a probabilistic hill-climbing algorithm. {\it Artificial Intelligence}, 84: 177-208. \bibitem{gokbayrak2009} Gokbayrak, K. and Selvi, O. 2009. A subgradient descent algorithm for optimization of initially controllable frow shop systems. {\it Discrete Event Dyn Syst}, 19: 267-282. \bibitem{golden2005} Golden, B., Raghavan, S. and Wasil, E. 2005. {\it The Next Wave in Computing, Optimization, and Decision Technologies}. Springer-Verlag. \bibitem{gonzalez1998} Gonzalez, A. I., Gra\'{n}a, M., D'anjou, A., Albizuri, F. X. and Torrealdea, F. J. 1998. A comparison of experimental results with an evolution strategy and competitive neural networks for near real-time color quantization of image sequences. {\it Applied Intelligence}, 8: 43-51. \bibitem{gonzalez2010} Gonz\'{a}lez, J. R., Pelta, D. A., Cruz, C., Terrazas, G. and Krasnogor, N. (Eds.), 2010. {\it Nature Inspired Cooperative Strategies for Optimization}. Springer-Verlag. \bibitem{grotschel1981} Gr\"{o}tschel, M., Lov\'{a}sz, L., and Schrijver, A. 1981. The ellipsoid method and its consequences in combinatorial optimization. {\it Combintorica}, 1(2): 169-197. \bibitem{guler2010} G\"{u}ler, O. 2010. {\it Foundations of Optimization}. Springer-Verlag. \bibitem{guocheng2010} GuoCheng, L., ShiJi, S. and Cheng, Wu. 2010. Generalized gradient projection neural networks for nonsmooth optimizaton problems. {\it Science China: Information Sciences}, 53(5): 990-1005. \bibitem{haines2006} Heines, S. 2006. {\it Pro Java EE 5 Performance Management and Optimization}. Springer-Verlag - Apress. \bibitem{hasle2007} Gasle, G., Lie, K.-A. and Quak, E. (Eds.). 2007. {\it Geometric Modelling, Numerical Simulation, and Optimization: Applied Mathematics at SINTEF}. Springer-Verlag. \bibitem{hendrix2010} Hendrix, E. M. T. and T\'{o}th, B. G. 2010. {\it Introduction to Nonlinear and Global Optimization}. Springer-Verlag. \bibitem{hinze2009} Hinze, M., Pinnau, R., Ulbrich, M. and Ulbrich, S. 2009. {\it Optimization with PDE Constraints}. Springer-Verlag. \bibitem{hirsch2010} Hirsch, M. J., Pardalos, P. M. and Murphey, R. 2010. {\it Dynamics of Information Systems}. Springer-Verlag. \bibitem{ho2007} Ho, Y.-C., Zhao, Q.-C. and Jia, Q.-S. 2007. {\it Ordinal Optimiation: Soft Optimization for Hard Problems}. Springer-Verlag. \bibitem{hooker2007} Hooker, J. N. 2007. {\it Integrated Methods for Optimization}. Springer-Verlag. \bibitem{hurlbert2010} Hurlbert, G. H. 2010. {\it Linear Optimization: The Simplex Workbook}. Springer-Verlag. \bibitem{isac2008} Isac, G. and N\'{e}meth, S. Z. 2008. {\it Scalar and Asymptotic Scalar Derivatives: Theory and Applications}. Springer-Verlag. \bibitem{iglesias2005} Iglesias, M., Naudts, B., Verschoren, A. and Vidal C. 2005. {\it Foundations of Generic Optimization}, Volume 1: A Combinatorial Approach to Epistasis. Springer-Verlag. \bibitem{jensen2005} Jensen, R. and Shen, Q. 2005. Fuzzy-rough data reduction with ant colony optimization. {\it Fuzzy Sets and Systems}, 149: 5-20. 
\bibitem{jeyakumar2008} Jeyakumar, V., and Luc, D. T. 2008. {\it Nonsmooth Vector Functions and Continuous Optimization}. Springer-Verlag. \bibitem{jiang2003} Jiang, H. and Ralph, D. 2003. Extension of quasi-newton methods to mathematical programs with complementarity constraints. {\it Computational Optimization and Applications}, 25: 123-150. \bibitem{kall2005} Kall, P. and Mayer, J. 2005. {\it Stochastic Linear Programming: Models, Theory, and Computation}. Springer-Verlag. \bibitem{kahraman2008} Kahraman, C. 2008. {\it Fuzzy Multi-Criteria Decision Making: Theory and Applications with Recent Developmets}. Springer-Verlag. \bibitem{kasperski2008} Kasperski, A. 2008. {\it Discrete Optimization with Interval Data: Minmax Regret and Fuzzy Approach}. Springer-Verlag. \bibitem{kaveh2009} Kaveh, A. and Talatahari, S. 2009. Hybrid algorithm of harmony search, particle swarm and ant colony for strutural design optimization. In Z. W. Geem (Ed.): {\it Harmony Search Algo. for Structural Design Optimization, SCI239}, Sringer-Verlag: 159-198. \bibitem{kim2003} Kim, S. and Kojima, M. 2003. Exact solutions of some nonconvex quadratic optimization problems via SDP and SOCP relaxations. {\it Computational Optimization and Applications}, 26: 143-154. \bibitem{kim2004} Kim, K. H., Kang, J. S., Ryu K. R. 2004. A beam search algorithm for the load sequencing of outbound containers in port container terminals. {\it OR Spectrum}, 26: 93-116. \bibitem{kocvara2007} Ko\v{c}vara, M. and Stingl, M. 2007. On the solution of large-scale SDP problems by the modified barrier method using iterative solvers. {\it Math. Program., Ser. B.}, 109: 413-444. \bibitem{kocvara2009} Ko\v{c}vara, M. and Stingl, M. 2009. On the solution of large-scale SDP problems by the modified barrier method using iterative solvers. {\it Math. Program., Ser. B.}, 120: 285-287. \bibitem{kosmidou2008} Kosmidou, K., Doumpos, M., and Zopounidis, C. (Eds.). 2008. {\it Country Risk Evaluation}. Springer-Verlag. \bibitem{kovtumenko2006} Kovtumenko, V. A. 2006. Interface cracks in composite orthotropic materials and their delamination via global shape optimization. {\it Optim Eng}, 7: 173-199. \bibitem{kugler2008} Kugler, T., Smith, J. C., Connolly, T., and Son, Y.-J. (Eds.). 2008. {\it Decision Modeling and Behavior in Complex and Uncertain Environments}. Springer-Verlag. \bibitem{landry1996} Landry, M., Banville C., and Oral, M. 1996. Model legitimization in operational research. {\it European Journal of Operational Research}, 92. \bibitem{lasserre2009} Lasserre, J. B. 2009. {\it Linear and Integer Programming vs Linear Integration and Counting}. Springer-Verlag \bibitem{lawphongpanich2006} Lauphongpanich, S. 2006. Dynaic slope scaling procedure and Lagrangian relaxation with subproblem approximation. {\it Journal of Global Optimization}, 35: 121-130. \bibitem{leunberger2008} Leunberger, D. G. and Ye, Y. 2008. {\it Linear and Nonlinear Programming}. Springer-Verlag. \bibitem{levitin2005} Levitin, G. 2005. {\it The Universal Generating Function in Reliability Analysis and Optimization}. Springer-Verlag. \bibitem{lewis2007} Lewis, R. 2007. Metaheuristics can solve sudoku puzzles. {\it J. Heuristics}, 13: 387-401. \bibitem{lewis2009} Lewis, R. 2009. A general-purpose hill-climbing method for order independent minimum group problems: A case study in graph colouring and bin packing. {\it Computers \& Operations Research}, 36: 2295-2310. \bibitem{lowen2008} Lowen, R. and Verschoren, A. (Eds.). 2008. 
\begin{document} \begin{frontmatter} \title{Bilinear control of evolution equations with unbounded lower order terms. Application to the Fokker-Planck equation\tnoteref{fund}} \tnotetext[fund]{This paper was partly supported by the INdAM National Group for Mathematical Analysis, Probability and their Applications (GNAMPA) and by the French-German-Italian Laboratoire International Associ\'{e} (LIA) COPDESC.} \author{Fatiha Alabau Boussouira} \address{Laboratoire Jacques-Louis Lions, Sorbonne Universit\'{e}, 75005, Paris, France Universit\'{e} de Lorraine, 57000 Metz, France [email protected]} \author{Piermarco Cannarsa} \address{Dipartimento di Matematica, Universit\`{a} di Roma Tor Vergata, 00133, Roma, Italy [email protected]} \author{Cristina Urbani} \address{Dipartimento di Matematica, Universit\`{a} di Roma Tor Vergata, 00133, Roma, Italy [email protected]} \begin{abstract} We study the exact controllability of the evolution equation \begin{equation*} u'(t)+Au(t)+p(t)Bu(t)=0 \end{equation*} where $A$ is a nonnegative self-adjoint operator on a Hilbert space $X$ and $B$ is an unbounded linear operator on $X$, which is dominated by the square root of $A$. The control action is bilinear and only of scalar-input form, meaning that the control is the scalar function $p$, which is assumed to depend only on time. Furthermore, we only consider square-integrable controls. Our main result is the local exact controllability of the above equation to the ground state solution, that is, the evolution through time, of the first eigenfunction of $A$, as initial data. The analogous problem (in a more general form) was addressed in our previous paper [Exact controllablity to eigensolutions for evolution equations of parabolic type via bilinear control, Alabau-Boussouira F., Cannarsa P. and Urbani C., Nonlinear Diff. Eq. Appl. (2022)] for a bounded operator $B$. The current extension to unbounded operators allows for many more applications, including the Fokker-Planck equation in one space dimension, and a larger class of control actions. \end{abstract} \begin{keyword} bilinear control \sep evolution equations \sep exact controllability \sep parabolic PDEs \sep ground state \sep Fokker-Planck equation \sep moment method \MSC[2010] 35Q93 \sep 93C25 \sep 93C10 \sep 93B05 \sep 35K90 \end{keyword} \end{frontmatter} \section{Introduction} In a series of recent papers (\cite{acu,acue,cu}), we have studied stabilization and exact controllability to eigensolutions for evolution equations of the form \begin{equation}\label{intro-eq-u} \begin{cases} u'(t)+Au(t)+p(t)Bu(t)=0,&t\in(0,T)\\ u(0)=u_0 \end{cases} \end{equation} where $u_0$ belongs to a Hilbert space $(X,\langle\cdot,\cdot\rangle,\norm{\cdot})$ and \begin{itemize} \item[i)] $A:D(A)\subset X\to X$ is a self-adjoint operator with compact resolvent and such that $A\geq -\sigma I$, with $\sigma\geq0$, \item[ii)] $B:X\to X$ is a bounded linear operator, \item[iii)] $p\in L^2(0,T)$ is a bilinear control. \end{itemize} We refer to equation \eqref{intro-eq-u} as being of parabolic type, since assumption i) is usually satisfied by parabolic operators. The scalar-input bilinear controllability problem has been addressed by several authors, starting with the negative result by Ball, Marsden, Slemrod \cite{bms}. 
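For orientation, a prototypical instance of \eqref{intro-eq-u} covered by assumptions i)--iii) is the heat equation on $(0,1)$ with a bilinear potential control (a standard illustration, not taken from a specific reference): \begin{equation*} \begin{cases} u_t(t,x)-u_{xx}(t,x)+p(t)\mu(x)u(t,x)=0,&(t,x)\in(0,T)\times(0,1)\\ u(t,0)=u(t,1)=0,&t\in(0,T) \end{cases} \end{equation*} where $X=L^2(0,1)$, $A=-\partial^2_x$ with Dirichlet boundary conditions, and $B$ is multiplication by a fixed $\mu\in L^\infty(0,1)$, hence a bounded operator on $X$.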
Controllability issues are interesting also in the hyperbolic or diffusive context, where several results are now available to describe the reachable set of specific partial differential equations in $1$-D, such as the Schr{\"o}dinger equation (\cite{au,beau,bl}) and the classical (\cite{au,b}) and degenerate (\cite{cmu}) wave equation. The above problem enters in the so-called class of nonlinear control problems. We refer the readers to the book by Coron\cite{coron} where general control systems are studied as well as mathematical methods to treat them, with a focus on systems for which the nonlinearities are determinant for controllability issues, including the Schr\"odinger equation. Most of the above results devoted to scalar-input bilinear controllability issues have in common the fact that they address controllability properties of \eqref{intro-eq-u} near the ground state solution $\psi_1(t)=e^{-\lambda_1 t}\varphi_1$ (\cite{b,bl,cmu}) or, more in general, to eigensolutions $\psi_j(t)=e^{-\lambda_j t}\varphi_j$ (\cite{acu,acue}), that are solutions of the free dynamics ($p=0$) associated to \eqref{intro-eq-u}, namely \begin{equation*} u'(t)+Au(t)=0, \end{equation*} with initial condition $u_0=\varphi_j$, where we denote by $\{\lambda_k\}_{k\in{\mathbb{N}}^*}$ the eigenvalues of $A$, and by $\{\varphi_k\}_{k\in{\mathbb{N}}^*}$ the associated eigenfunctions. To prove local controllability to trajectories, we have introduced in \cite{acu} the notion of $j$-null controllability in time $T>0$ for the pair $\{A,B\}$: we require the existence of a constant $N_T>0$ such that for any initial condition $y_0\in X$, there exists a control $p\in L^2(0,T)$ satisfying \begin{equation}\label{estimpnew} \norm{p}_{L^2(0,T)}\leq N_T\norm{y_0} , \end{equation} and for which $y(T)=0$, where $y(\cdot)$ is the solution of the following linear problem \begin{equation}\label{newlin} \begin{cases} y'(t)+Ay(t)+p(t)B\varphi_j=0,&t\in[0,T]\\ y(0)=y_0. \end{cases} \end{equation} We call the best constant, defined as \begin{equation}\label{controlcost} N(T):=\sup_{\norm{y_0}=1}\inf\left\{\norm{p}_{L^2(0,T)}:y(T;y_0,p)=0\right\}, \end{equation} the \emph{control cost}. Furthermore, we have shown that sufficient conditions for $\{A,B\}$ to be $j$-null controllable are a gap condition of the eigenvalues of $A$ and a rank condition on $B$: we assume $B$ to spread $\varphi_j$ in all directions (see \cite[Theorem 1.2]{acue}). When, for instance, $X=L^2(0,1)$, $B$ could be taken as a multiplicative operator, i.e., \begin{equation*} (B\varphi)(x)=\mu(x)\varphi(x),\qquad x\in (0,1), \end{equation*} where $\mu$ has to be chosen in order to guarantee the dispersive action mentioned above (see \cite{au} for a general method to construct infinite classes of such $\mu$, including polynomial type classes, and for various PDE's, as well as various boundary conditions). Another common feature of the aforementioned references is that $B$ is assumed to be bounded. In many applications, however, one is forced to consider a bilinear control acting on a drift term rather than a potential, which leads to allowing $B$ to be unbounded. A typical example of such a situation occurs when dealing with the Fokker-Planck equation \begin{equation*} u_t-u_{xx}-p(t)(\mu(x)u)_x=0, \end{equation*} which is satisfied by the probability density of the diffusion process associated with \begin{equation*} dX_t=p(t)\mu(X_t)dt+\sqrt{2}dW_t, \end{equation*} where $W_t$ is the standard Wiener process on a probability space $(\Omega, \mathcal{A},\mathbb{P})$. 
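To see, at a purely heuristic level, why a drift control leads to an unbounded operator $B$ that is nevertheless dominated by the square root of $A$, take for instance $Au=-u_{xx}$ on $(0,1)$ with homogeneous Neumann conditions and $Bu=(\mu u)_x$ with $\mu$ Lipschitz, the sign in front of the drift being absorbed into $p$; then \begin{equation*} \norm{Bu}_{L^2}\leq\norm{\mu'}_{\infty}\norm{u}_{L^2}+\norm{\mu}_{\infty}\norm{u_x}_{L^2}\leq C\left(\norm{u}^2+\norm{A^{1/2}u}^2\right)^{1/2}, \end{equation*} since $\norm{u_x}_{L^2}^2=\langle Au,u\rangle=\norm{A^{1/2}u}^2$ for $u\in D(A)$, whereas no estimate of the form $\norm{Bu}\leq C\norm{u}$ can hold in general.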
This paper aims to extend the theory of \cite{acue} to unbounded operators $B:D(B)\subset X\to X$ satisfying \begin{equation*} D(A_\sigma^{1/2})\hookrightarrow D(B),\quad \text{and}\quad \norm{B\varphi}\leq C\left(\norm{A_\sigma^{1/2}\varphi}^2+\norm{\varphi}^2\right)^{1/2}, \end{equation*} for some constant $C>0$, where $A_\sigma:=A+\sigma I$ (the aforementioned $j$-null controllability property of the pair $\{A,B\}$ has to be assumed unchanged). We consider \eqref{intro-eq-u} where $A:D(A)\subset X\to X$ is a self-adjoint accretive operator with compact resolvent and $B$ is an unbounded linear operator. By adapting the approach of \cite{acue}, we first obtain in Section \ref{MainResult} a local controllability result (in the topology of $D(A^{1/2})$) to the first eigensolution $\psi_1$ (the ground state), see Theorem \ref{teo-contr-B-unb}. Then, in Section \ref{SemiGlobal} we derive two semi-global results, Theorems \ref{teoglobal} and \ref{teoglobal0}. Finally, in Section \ref{Applications} we discuss applications to partial differential equations including \begin{itemize} \item[a)] the Fokker-Planck equation, \item[b)] the heat equation with control on the drift under Neumann boundary conditions, \item[c)] a class of degenerate parabolic equations under Dirichlet or Neumann boundary conditions. \end{itemize} \section{Preliminaries} Let $(X,\langle\cdot,\cdot\rangle,\norm{\cdot})$ be a separable Hilbert space. Let $A:D(A)\subset X\to X$ be a densely defined linear operator with the following properties: \begin{equation}\label{ipA} \begin{array}{ll} (a) & A \mbox{ is self-adjoint},\\ (b) &\langle Ax,x\rangle \geq0,\,\, \forall\, x\in D(A),\\ (c) &\exists\,\lambda>0\,:\,(\lambda I+A)^{-1}:X\to X \mbox{ is compact}. \end{array} \end{equation} We recall that under the above assumptions $A$ is a closed operator and $D(A)$ is itself a Hilbert space with respect to the scalar product \begin{equation} (x,y)_{D(A)}=\langle x,y\rangle+\langle Ax,Ay\rangle,\quad \forall\,x,y\in D(A). \end{equation} Moreover, $-A$ is the infinitesimal generator of a strongly continuous semigroup of contractions on $X$, which is also analytic and will be denoted by $e^{-tA}$. In view of the above assumptions, there exists an orthonormal basis $\{\varphi_k\}_{k\in{\mathbb{N}}^*}$ of $X$ consisting of eigenfunctions of $A$, that is, $\varphi_k\in D(A)$ and $A\varphi_k=\lambda_k\varphi_k$ $\forall\, k \in {\mathbb{N}}^*$, where $\{\lambda_k\}_{k\in{\mathbb{N}}^*}\subset {\mathbb{R}}$ denotes the corresponding eigenvalues. We recall that $\lambda_k\geq 0$, $\forall\, k\in{\mathbb{N}}^*$, and we suppose --- without loss of generality --- that $\{\lambda_k\}_{k\in{\mathbb{N}}^*}$ is ordered so that $0\leq\lambda_k\leq\lambda_{k+1}\to \infty$ as $k\to\infty$. The associated semigroup has the following representation \begin{equation}\label{semigr} e^{-tA}\varphi=\sum_{k=1}^\infty\langle \varphi,\varphi_k\rangle e^{-\lambda_k t}\varphi_k,\quad\forall\, \varphi\in X. \end{equation} For any $s\geq 0$ we can define the operator $A^s:D(A^s)\subset X\to X$, the $s$-fractional power of $A$ (see \cite{p}). Under our assumptions, such a linear operator is characterized as follows \begin{equation} \begin{array}{l} D(A^s)=\left\{x\in X ~\left|~ \sum_{k\in{\mathbb{N}}^*}\lambda_k^{2s}|\langle x,\varphi_k\rangle|^2<\infty\right.\right\}\\\\ A^{s} x=\sum_{k\in{\mathbb{N}}^*}\lambda_k^{s}\langle x,\varphi_k\rangle\varphi_k,\qquad \forall\, x\in D(A^s).
\end{array} \end{equation} The space $D(A^s)$, equipped with the norm \begin{equation*} \norm{x}_{D(A^s)}:=\left(\norm{x}^2+\norm{A^sx}^2\right)^{1/2},\quad\forall\, x\in D(A^s), \end{equation*} induced by the scalar product \begin{equation*} \langle x,y\rangle_s=\langle x,y\rangle+\langle A^{s}x,A^{s}y\rangle, \end{equation*} is a Hilbert space for any $s\geq0$. Note that we trivially have the inequality $$ \norm{x}_{D(A^s)} \leqslant \norm{x}+\norm{A^sx},\quad\forall\, x\in D(A^s). $$ Of course, the right hand side also defines a norm equivalent to $\norm{\cdot}_{D(A^s)}$ on $D(A^s)$, but it is not associated with a scalar product. We will make use of the above inequality in some computations without further reference to it. We denote by $B_{R,s}(x)$ the ball of radius $R$, with respect to the norm $\norm{\cdot}_{D(A^s)}$, centered at $x$. Let $T>0$ and consider the following problem \begin{equation} \begin{cases}\label{controlsystem} u'(t)+Au(t)=f(t),&t\in[0,T]\\ u(0)=u_0 \end{cases} \end{equation} where $u_0\in X$ and $f\in L^2(0,T;X)$. We now recall two notions of solution of problem \eqref{controlsystem}: \begin{itemize} \item the function $u\in C([0,T], X)$ defined by $$u(t)=e^{-tA}u_0+\int_0^t e^{-(t-s)A}f(s)ds$$ is called the \emph{mild solution} of \eqref{controlsystem}, \item a function \begin{equation}\label{012} u\in H^1(0,T;X)\cap L^2(0,T;D(A)) \end{equation} is called a \emph{strict solution} of \eqref{controlsystem} if $u(0)=u_0$ and $u$ satisfies the equation in \eqref{controlsystem} for a.e. $t\in [0,T]$. \end{itemize} The well-posedness of the Cauchy problem \eqref{controlsystem} is a classical result (see, for instance, \cite[Theorem 3.1, p. 143]{bd}). We observe that the space \begin{equation*} W(D(A);X)=H^1(0,T;X)\cap L^2(0,T;D(A)) \end{equation*} is a Banach space with the norm \begin{equation*} \norm{\varphi}_W=\left(\norm{\varphi}_{H^1(0,T;X)}^2+\norm{\varphi}_{L^2(0,T;D(A))}^2\right)^{1/2} \end{equation*} for all $\varphi\in W(D(A);X)$. \begin{thm}\label{maxreg} Let $u_0\in D(A^{1/2})$ and $f\in L^2(0,T;X)$. Under hypothesis \eqref{ipA}, the mild solution of system \eqref{controlsystem} \begin{equation}\label{013} u(t)=e^{-tA}u_0+\int_0^t e^{-(t-s)A}f(s)ds \end{equation} is a strict solution. Moreover, there exists a constant $C>0$ such that \begin{equation}\label{014} \norm{u}_W\leq C\left(\norm{f}_{L^2(0,T;X)}+\norm{u_0}_{D(A^{1/2})}\right). \end{equation} \end{thm} The regularity \eqref{012} of the function $u$ given by \eqref{013} is called \emph{maximal regularity}. Under our assumptions such a property is due to the analyticity of $e^{-tA}$. Observe that \eqref{014} ensures that the strict solution of \eqref{controlsystem} is unique. We now recall a useful result which follows from interpolation theory. \begin{prop}\label{lio-mag} Let $u\in W(D(A);X)$. Then \begin{equation*} u\in C([0,T];D(A^{1/2})). \end{equation*} \end{prop} We refer to \cite[Proposition 2.1, p. 22 and Theorem 3.1, p. 23]{Lions-Magenes} for the proof. From Proposition \ref{lio-mag} we deduce the following regularity property for the solution of problem \eqref{controlsystem}. \begin{cor}\label{cor-lio-mag} Let $u_0\in D(A^{1/2})$ and $f\in L^2(0,T;X)$. Then, the strict solution $u$ of \eqref{controlsystem} is such that $u\in C([0,T];D(A^{1/2}))$ and there exists $C_0>0$ such that \begin{equation}\label{normC0} \sup_{t\in[0,T]}\norm{u(t)}_{D(A^{1/2})}\leq C_0\left(\norm{f}_{L^2(0,T;X)}+\norm{u_0}_{D(A^{1/2})}\right).
\end{equation} \end{cor} Given $T>0$, let $B:D(B)\subset X\to X$ be a linear unbounded operator such that \begin{equation}\label{ipB} D(A^{1/2})\hookrightarrow D(B), \end{equation} namely $D(A^{1/2})\subset D(B)$ and there exists a constant $C_B>0$ (that we can suppose, without loss of generality, to be greater than one) such that \begin{equation}\label{BA12} \norm{B\varphi}\leq C_B\norm{\varphi}_{D(A^{1/2})},\qquad \forall\, \varphi\in D(A^{1/2}). \end{equation} In the proposition that follows we show the well-posedness of the bilinear control problem with a source term \begin{equation}\label{a1f} \begin{cases} u'(t)+Au(t)+p(t)Bu(t)+f(t)=0,& t\in [0,T]\\ u(0)=u_0. \end{cases} \end{equation} We introduce the following notation: $\forall s\geq0$ we set \begin{equation*}\begin{array}{l} \norm{\varphi}_{s}:=\norm{\varphi}_{D(A^s)},\qquad\forall\, \varphi\in D(A^s)\\\\ \norm{f}_{2,s}:=\norm{f}_{L^2(0,T;D(A^s))},\qquad\forall\,f\in L^2(0,T;D(A^s))\\\\ \norm{f}_{\infty,s}:=\norm{f}_{C([0,T];D(A^s))}=\sup_{t\in [0,T]}\norm{A^{s}f(t)},\qquad\forall\, f\in C([0,T];D(A^s)). \end{array} \end{equation*} \begin{prop}\label{propa24} Let $T>0$ and fix $p\in L^2(0,T)$. Then, for all $u_0\in D(A^{1/2})$ and $f\in L^2(0,T;X)$ there exists a unique mild solution $u$ of \eqref{a1f}. Moreover, $u\in C([0,T],D(A^{1/2})$ and the following equality holds for every $t\in [0,T]$ \begin{equation} u(t)=e^{-tA}u_0-\int_0^t e^{-(t-s)A}[p(s)Bu(s)+f(s)]ds. \end{equation} Furthermore, $u$ is a strict solution of \eqref{a1f} and there exists a constant $C_1=C_1(p)>0$ such that \begin{equation}\label{a5} ||u||_{\infty,1/2}\leq C_1 (||u_0||_{1/2}+||f||_{2,0}). \end{equation} \end{prop} Hereafter, we denote by $C$ a generic positive constant which may differ from line to line. Constants which play a specific role will be distinguished by an index i.e., $C_0$, $C_B$, \dots. \begin{proof} The existence and uniqueness of the solution of (\ref{a1f}) comes from a fixed point argument which is based on Theorem \ref{maxreg}. For any $\xi\in C([0,T];D(A^{1/2})$, let us consider the map \begin{equation}\nonumber \Phi(\xi)(t)=e^{-tA}u_0-\int_0^t e^{-(t-s)A}[p(s)B\xi(s)+f(s)]ds. \end{equation} We want to prove that $\Phi$ is a contraction. First, we prove that $\Phi$ maps $C([0,T];D(A^{1/2}))$ into itself. Since $u_0\in D(A^{1/2})$ and $p(\cdot)B\xi(\cdot)$, $f(\cdot)\in L^2(0,T;X)$, applying Theorem \ref{maxreg} and Corollary \ref{cor-lio-mag} it turns out that $\Phi(\xi) \in C([0,T];D(A^{1/2}))$. Now we have to prove that $\Phi$ is a contraction. Thanks to \eqref{normC0}, we have \begin{equation}\begin{split}\label{contr} \norm{\Phi(\xi)-\Phi(\zeta)}_{\infty,1/2}&=\sup_{t\in[0,T]}\norm{\int_0^t e^{-(t-s)A}p(s) B(\xi-\zeta)(s)ds}_{1/2}\\ &\leq C_0\norm{ p B(\xi-\zeta)}_{2,0}\\ &=C_0\left(\int_0^T|p(s)|^2\norm{ B(\xi-\zeta)(s)}^2ds\right)^{1/2}\\ &\leq C_0C_B \left(\int_0^T|p(s)|^2\norm{ (\xi-\zeta)(s)}_{D( A^{1/2})}^2ds\right)^{1/2}\\ &\leq C_0C_B \norm{p}_{L^2(0,T)}\norm{\xi-\zeta}_{\infty,1/2} \end{split} \end{equation} where we have used the fact that $D(A^{1/2}) \hookrightarrow D(B)$. If $C_0C_B\norm{p}_{L^2(0,T)}<1$, then \eqref{contr} shows that $\Phi$ is a contraction. If this quantity is larger than one, we subdivide the interval $[0,T]$ into $N$ subintervals $[T_0,T_1],[T_1,T_2],\dots,[T_{N-1},T_N]$, with $T_0=0,T_N=T$, such that $C_0C_B\norm{p}_{L^2(T_j,T_{j+1})}\leq 1/2$ in $[T_j,T_{j+1}]$, $\forall\, j=0,\dots, N-1$ and we repeat the contraction argument in each interval. It remains to prove (\ref{a5}). 
By using once again \eqref{normC0}, we get \begin{equation}\begin{split} \norm{u}_{\infty,1/2}&\leq \norm{u_0}_{1/2}+\sup_{t\in[0,T]}\norm{ \int_0^t e^{-(t-s)A}[p(s) Bu(s)+ f(s)]ds}_{1/2}\\ &\leq \norm{u_0}_{1/2}+C_0(\norm{p Bu}_{2,0}+\norm{f}_{2,0})\\ &= \norm{u_0}_{1/2}+C_0\left( \int_0^T |p(s)|^2\norm{ Bu(s)}^2ds\right)^{1/2}+C_0\norm{f}_{2,0}\\ &\leq \norm{u_0}_{1/2}+C_0C_B\left( \int_0^T |p(s)|^2\norm{ u(s)}_{D(A^{1/2})}^2ds\right)^{1/2}+C_0\norm{f}_{2,0}\\ &= \norm{u_0}_{1/2}+C_0C_B\norm{p}_{L^2(0,T)}\norm{u}_{\infty,1/2}+C_0\norm{f}_{2,0}, \end{split} \end{equation} which implies \begin{equation*} (1-C_0C_B\norm{p}_{L^2(0,T)})\norm{u}_{\infty,1/2}\leq \norm{u_0}_{1/2}+C_0\norm{f}_{2,0}. \end{equation*} If $C_0C_B\norm{p}_{L^2(0,T)}\leq1/2$, then we have inequality (\ref{a5}). Otherwise, to get the conclusion, we proceed subdividing the interval $[0,T]$ into smaller subintervals, as explained above. \end{proof} We now consider the following bilinear control problem with a specific source \begin{equation}\label{v} \left\{\begin{array}{ll} v'(t)+A v(t)+p(t)Bv(t)+p(t)B\varphi_1=0,&t\in[0,T]\\\\ v(0)=v_0. \end{array}\right. \end{equation} Let us denote by $v(\cdot;v_0,p)$ the solution of \eqref{v} associated to the initial condition $v_0$ and control $p$. The result that follows provides an estimate of $\sup_{t\in[0,T]}\norm{v(t;v_0,p)}_{1/2}$ in terms of the initial condition $v_0$. \begin{prop}\label{prop38} Let $T>0$. Let $A$ and $B$ satisfy hypotheses \eqref{ipA} and \eqref{ipB}, respectively. Let $v_0\in D(A^{1/2})$ and $p\in L^2(0,T)$ be such that \begin{equation}\label{NT} \norm{p}_{L^2(0,T)}\leq N_T\norm{v_0} \end{equation} with $N_T$ a positive constant. Then, the solution of \eqref{v} satisfies \begin{equation}\label{unifv} \sup_{t\in[0,T]}\norm{v(t;v_0,p)}^2_{1/2}\leq C_{1,1}(T,\norm{v_0}_{1/2})\norm{v_0}^2_{1/2}, \end{equation} where \begin{multline}\label{C1j} C_{1,1}(T,\norm{v_0}_{1/2}):=e^{C_BN_T\left(\frac{5}{2}C_BN_T\norm{v_0}_{1/2}+2\sqrt{T}\right)\norm{v_0}_{1/2}+T}\cdot\\ \cdot\left(1+\frac{5}{2}C_B^2(1+\lambda_1^{1/2})^2N_T^2+\frac{3}{2}C^2_BN_T^2\left(C^2_B(1+\lambda_1^{1/2})^2N_T^2+1\right)\norm{v_0}^2_{1/2}\right) \end{multline} \end{prop} \begin{proof} For the sake of compactness, sometimes we will denote the solution of \eqref{v}, by omitting the reference to the initial condition and the control, as $v(\cdot)$. We perform energy estimates of the equation satisfied by $v$ by taking first the scalar product with $v(t)$ \begin{equation*} \langle v^\prime(t),v(t) \rangle+\langle Av(t),v(t) \rangle+p(t)\langle Bv(t)+B\varphi_1,v(t) \rangle=0, \end{equation*} from which we have that \begin{equation}\label{barC} \begin{split} \frac{1}{2}&\frac{d}{dt}\norm{v(t)}^2+\norm{A^{1/2}v(t)}^{2}\leq |p(t)|\norm{Bv(t)}\norm{v(t)}+|p(t)|\norm{B\varphi_1}\norm{v(t)}\\ &\leq C_B|p(t)|\left(\norm{v(t)}^2+\norm{A^{1/2}v(t)}^2\right)^{1/2}\norm{v(t)}+C_B(1+\lambda_1^{1/2})|p(t)|\norm{v(t)}\\ &\leq C_B|p(t)|\norm{v(t)}^2+\frac{1}{2}\norm{A^{1/2}v(t)}^2+\frac{C^2_B}{2}|p(t)|^2\norm{v(t)}^2+\frac{C^2_B}{2}(1+\lambda_1^{1/2})^2|p(t)|^2+\frac{1}{2}\norm{v(t)}^2. \end{split} \end{equation} Therefore, from the above inequality it follows that \begin{equation*} \frac{1}{2}\frac{d}{dt}\norm{v(t)}^2\leq \left(C_B|p(t)|+\frac{C^2_B}{2}|p(t)|^2+\frac{1}{2}\right)\norm{v(t)}^2+\frac{C^2_B}{2}(1+\lambda_1^{1/2})^2|p(t)|^2. 
\end{equation*} We integrate from $0$ to $t$ and thanks to Gronwall's Lemma we deduce that \begin{equation}\label{supv} \sup_{t\in[0,T]}\norm{v(t;v_0,p)}^2\leq\left(\norm{v_0}^2+C_B^2(1+\lambda_1^{1/2})^2\norm{p}^2_{L^2(0,T)}\right)e^{C_2(T)}, \end{equation} where $C_2(T):=2C_B\sqrt{T}\norm{p}_{L^2(0,T)}+C^2_B\norm{p}_{L^2(0,T)}^2+T$. Now, we multiply the equation in \eqref{v} by $v^\prime(t)$ and we obtain \begin{equation*} \langle v'(t),v'(t) \rangle+\langle Av(t),v'(t) \rangle+p(t)\langle Bv(t)+B\varphi_1,v'(t) \rangle=0. \end{equation*} We recall that under our assumptions on $A$ the function \begin{equation*} t\mapsto \langle Av(t),v(t)\rangle \end{equation*} is absolutely continuous on $[0,T]$ and \begin{equation*} \frac{d}{dt}\langle Av(t),v(t)\rangle=2\langle Av(t),v'(t)\rangle. \end{equation*} Therefore, we have \begin{equation*} \begin{split} \norm{v'(t)}^2+\frac{1}{2}\frac{d}{dt}\norm{A^{1/2}v(t)}^{2}&\leq |p(t)|\norm{Bv(t)}\norm{v'(t)}+|p(t)|\norm{B\varphi_1}\norm{v'(t)}\\ &\leq C_B|p(t)|\left(\norm{v(t)}+\norm{A^{1/2}v(t)}\right)\norm{v'(t)}+C_B|p(t)|(1+\lambda_1^{1/2})\norm{v'(t)}\\ &\leq \frac{3}{4}C^2_B|p(t)|^2\norm{v(t)}^2+\frac{1}{3}\norm{v'(t)}^2+\frac{3}{4}C^2_B|p(t)|^2\norm{A^{1/2}v(t)}^2\\ &\quad+\frac{1}{3}\norm{v'(t)}^2+\frac{3}{4}C^2_B(1+\lambda_1^{1/2})^2|p(t)|^2+\frac{1}{3}\norm{v'(t)}^2, \end{split} \end{equation*} that gives \begin{equation*} \frac{d}{dt}\norm{A^{1/2}v(t)}^{2}\leq\left(\frac{3}{2}C^2_B|p(t)|^2\norm{v(t)}^2+\frac{3}{2}C^2_B(1+\lambda_1^{1/2})^2|p(t)|^2\right)+\frac{3}{2}C^2_B|p(t)|^2\norm{A^{1/2}v(t)}^2. \end{equation*} By Gronwall's Lemma and using the previous energy estimate \eqref{supv}, we deduce that \begin{equation*} \begin{split} &\norm{A^{1/2}v(t)}^{2}\leq\left(\norm{A^{1/2}v_0}^{2}+\frac{3}{2}C^2_B\int_0^t|p(s)|^2\norm{v(s)}^2ds+\frac{3}{2}C^2_B(1+\lambda_1^{1/2})^2\int_0^t|p(s)|^2ds\right)e^{\frac{3}{2}C^2_B\int_0^t|p(s)|^2ds}\\ &\quad\leq \left(\norm{A^{1/2}v_0}^2+\frac{3}{2}C^2_B\norm{p}^2_{L^2(0,T)}\sup_{t\in[0,T]}\norm{v(t)}^2+\frac{3}{2}C^2_B(1+\lambda_1^{1/2})^2\norm{p}^2_{L^2(0,T)}\right)e^{\frac{3}{2}C^2_B\norm{p}^2_{L^2(0,T)}}\\ &\quad\leq \left(\norm{A^{1/2}v_0}^2+\frac{3}{2}C^2_B\norm{p}^2_{L^2(0,T)}\left(\norm{v_0}^2+C_B^2(1+\lambda_1^{1/2})^2\norm{p}^2_{L^2(0,T)}\right)e^{C_2(T)}\right.\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\left.+\frac{3}{2}C^2_B(1+\lambda_1^{1/2})^2\norm{p}^2_{L^2(0,T)}\right)e^{\frac{3}{2}C^2_B\norm{p}^2_{L^2(0,T)}}\\ &\quad\leq\left(\norm{A^{1/2}v_0}^2+\frac{3}{2}C^2_B\norm{p}^2_{L^2(0,T)}\norm{v_0}^2+\frac{3}{2}C^4_B(1+\lambda_1^{1/2})^2\norm{p}^4_{L^2(0,T)}\right.\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\left.+\frac{3}{2}C^2_B(1+\lambda_1^{1/2})^2\norm{p}^2_{L^2(0,T)}\right)e^{C_3(T)} \end{split} \end{equation*} with $C_3(T):=\frac{3}{2}C^2_B\norm{p}^2_{L^2(0,T)}+C_2(T)$. Taking the supremum over the interval $[0,T]$, we have that \begin{multline}\label{supA1/2v} \sup_{t\in[0,T]}\norm{A^{1/2}v(t)}^{2}\leq\left(\norm{A^{1/2}v_0}^2+\frac{3}{2}C^2_B\norm{p}^2_{L^2(0,T)}\norm{v_0}^2+\frac{3}{2}C^4_B(1+\lambda_1^{1/2})^2\norm{p}^4_{L^2(0,T)}\right.\\ \left.+\frac{3}{2}C^2_B(1+\lambda_1^{1/2})^2\norm{p}^2_{L^2(0,T)}\right)e^{C_3(T)}. 
\end{multline} Finally, combining \eqref{supv} and \eqref{supA1/2v}, we find that \begin{equation*} \begin{split} \sup_{t\in[0,T]}\norm{v(t)}^2_{1/2}&\leq \sup_{t\in[0,T]}\norm{v(t)}^2+\sup_{t\in[0,T]}\norm{A^{1/2}v(t)}^2\\ &\leq e^{C_3(T)}\left(\norm{v_0}^2_{1/2}+\frac{5}{2}C_B^2(1+\lambda_1^{1/2})^2\norm{p}^2_{L^2(0,T)}+\frac{3}{2}C^2_B\norm{p}^2_{L^2(0,T)}\norm{v_0}^2\right.\\ &\,\,\,\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\left.+\frac{3}{2}C^4_B(1+\lambda_1^{1/2})^2\norm{p}^4_{L^2(0,T)}\right) \end{split} \end{equation*} and using estimate \eqref{NT} we conclude that \begin{equation*} \sup_{t\in[0,T]}\norm{v(t)}^2_{1/2}\leq e^{C_3(T)}\left(1+\frac{5}{2}C_B^2(1+\lambda_1^{1/2})^2N_T^2+\frac{3}{2}C^2_BN_T^2\left(C^2_B(1+\lambda_1^{1/2})^2N_T^2+1\right)\norm{v_0}^2_{1/2}\right)\norm{v_0}^2_{1/2}. \end{equation*} Thus, we get \eqref{unifv} with $C_{1,1}(T,\norm{v_0})$ defined in \eqref{C1j}. \end{proof} We now turn our attention to the following control problem \begin{equation}\label{w} \left\{\begin{array}{ll} w'(t)+Aw(t)+p(t)Bv(t)=0,&t\in[0,T]\\\\ w(0)=0 \end{array}\right. \end{equation} where $v$ solves \eqref{v}. We will denote by $w(\cdot;0,p)$ the solution of \eqref{w}. In the result that follows we give a quadratic estimate of $w(\cdot;0,p)$ in terms of the initial condition of the problem solved by $v$. \begin{prop}\label{prop39} Let $T>0$ and let $A$ and $B$ satisfy hypotheses \eqref{ipA} and \eqref{ipB}, respectively. Let $p\in L^2(0,T)$ satisfy \eqref{NT} with $N_T=:N(T)$ and $v_0\in D(A^{1/2})$ be such that \begin{equation}\label{v0} N(T)\norm{v_0}_{1/2}\leq 1. \end{equation} Then, it holds that \begin{equation}\label{wT} \norm{w(T;0,p)}_{1/2}\leq K(T)\norm{v_0}_{1/2}^2, \end{equation} where \begin{equation}\label{K} K(T)^2:=2e^{C_4(T)}C^2_BN(T)^2C_5(T) \end{equation} with \begin{equation*} C_4(T):=C_B\left(\frac{5}{2}C_B+2\sqrt{T}\right)+2T, \end{equation*} \begin{equation*} C_5(T)=1+\frac{5}{2}C^2_B(1+\lambda_1^{1/2})^2N(T)^2+\frac{3}{2}C^2_B\left(C^2_B(1+\lambda_1^{1/2})^2N(T)^2+1\right). \end{equation*} \end{prop} \begin{proof} For the sake of compactness, sometimes we will denote the solution of \eqref{w} by $w(\cdot)$. First, we multiply the equation in \eqref{w} by $w(t)$, and we get \begin{equation*} \langle w'(t),w(t)\rangle+\langle Aw(t),w(t)\rangle+p(t)\langle Bv(t),w(t)\rangle=0, \end{equation*} that implies \begin{equation*} \begin{split} \frac{1}{2}\frac{d}{dt}\norm{w(t)}^2&\leq|p(t)|\norm{Bv(t)}\norm{w(t)}\\ &\leq C_B|p(t)|\norm{v(t)}_{1/2}\norm{w(t)}\\ &\leq \frac{C^2_B}{2}|p(t)|^2\norm{v(t)}^2_{1/2}+\frac{1}{2}\norm{w(t)}^2. \end{split} \end{equation*} Applying Gronwall's Lemma we obtain \begin{equation*} \norm{w(t)}^2\leq C^2_Be^t\int_0^t|p(s)|^2\norm{v(s)}_{1/2}^2ds, \end{equation*} and therefore \begin{equation}\label{supw} \sup_{t\in[0,T]}\norm{w(t)}^2\leq C_B^2e^T\norm{p}^2_{L^2(0,T)}\sup_{t\in[0,T]}\norm{v(t)}_{1/2}^2. \end{equation} Now, we multiply the equation in \eqref{w} by $w^\prime(t)$, \begin{equation*} \langle w'(t),w'(t)\rangle+\langle Aw(t),w'(t)\rangle+p(t)\langle Bv(t),w'(t)\rangle=0, \end{equation*} from which we deduce \begin{equation*} \begin{split} \norm{w'(t)}^2+\frac{1}{2}\frac{d}{dt}\norm{A^{1/2}w(t)}^2&\leq |p(t)|\norm{Bv(t)}\norm{w'(t)}\\ &\leq C_B|p(t)|\norm{v(t)}_{1/2}\norm{w'(t)}\\ &\leq \frac{C^2_B}{2}|p(t)|^2\norm{v(t)}_{1/2}^2+\frac{1}{2}\norm{w'(t)}^2. 
\end{split} \end{equation*} Therefore, it holds that \begin{equation*} \frac{d}{dt}\norm{A^{1/2}w(t)}^2\leq C^2_B|p(t)|^2\norm{v(t)}_{1/2}^2, \end{equation*} which yields the following estimate \begin{equation}\label{supA1/2w} \sup_{t\in[0,T]}\norm{A^{1/2}w(t)}^2\leq C^2_B\norm{p}^2_{L^2(0,T)}\sup_{t\in[0,T]}\norm{v(t)}_{1/2}^2. \end{equation} Combining \eqref{supw} and \eqref{supA1/2w} we have \begin{equation*} \begin{split} \sup_{t\in[0,T]}\norm{w(t)}^2_{1/2}&\leq \sup_{t\in[0,T]}\norm{w(t)}^2+\sup_{t\in[0,T]}\norm{A^{1/2}w(t)}^2\\ &\leq 2e^TC^2_B\norm{p}^2_{L^2(0,T)}\sup_{t\in[0,T]}\norm{v(t)}_{1/2}^2 \end{split} \end{equation*} and thanks to the estimates of $\sup_{t\in[0,T]}\norm{v(t)}_{1/2}^2$ given by \eqref{unifv} and of $\norm{p}_{L^2(0,T)}$ given by \eqref{NT}, we deduce that \begin{equation} \sup_{t\in[0,T]}\norm{w(t)}^2_{1/2}\leq 2e^TC^2_BC_{1,1}(T,\norm{v_0}_{1/2})N(T)^2\norm{v_0}^4_{1/2}. \end{equation} Finally, thanks to hypothesis \eqref{v0}, we obtain the claim. \end{proof} \section{Main result}\label{MainResult} Let $T>0$. In a separable Hilbert space $(X,\langle\cdot,\cdot\rangle,\norm{\cdot})$, we consider the control problem \begin{equation}\label{u} \begin{cases} u'(t)+Au(t)+p(t)Bu(t)=0,& t\in [0,T]\\ u(0)=u_0 \end{cases} \end{equation} where $A:D(A)\subset X\to X$ is a densely defined linear operator satisfying \eqref{ipA}, $B:D(B)\subset X\to X$ is an unbounded linear operator which satisfies \eqref{ipB} and $p\in L^2(0,T)$ is a bilinear control. We denote by $u(\cdot;u_0,p)$ the solution of \eqref{u} associated to the initial condition $u_0$ and control $p$. We call the ``ground state solution'' of problem \eqref{u} the function $\psi_1(t)=e^{-\lambda_1 t}\varphi_1$, where $\lambda_1\geq0$ is the first eigenvalue of $A$ and $\varphi_1$ is the associated eigenfunction. For any $0\leq s_0\leq s_1$ we also introduce the linear control problem \begin{equation}\label{ys0s1} \begin{cases} y'(t)+Ay(t)+p(t)B\varphi_1=0&t\in[s_0,s_1]\\ y(s_0)=y_0 \end{cases} \end{equation} and we denote by $y(\cdot;y_0,s_0,p)$ its solution associated to the initial condition $y_0$, attained at time $s_0$, and control $p$. We recall that the pair $\{A,B\}$ is called $1$-null controllable in time $T$ if there exists a constant $N_T>0$ such that for any $y_0\in X$ there exists a control $p\in L^2(0,T)$ with \begin{equation*} \norm{p}_{L^2(0,T)}\leq N_T\norm{y_0} \end{equation*} such that $y(T;y_0,0,p)=0$. If $\{A,B\}$ is $1$-null controllable in time $T$, we call the constant $N(T)$ defined in \eqref{controlcost} the control cost. We now state our main controllability result. \begin{thm}\label{teo-contr-B-unb} Let $A:D(A)\subset X\to X$ be a densely defined linear operator that satisfies \eqref{ipA}. Let $B:D(B)\subset X\to X$ be a linear unbounded operator such that \eqref{ipB} holds. Let $\{A,B\}$ be $1$-null controllable in any $T>0$ with control cost $N(\cdot)$ such that there exist $\nu,T_0>0$ for which \begin{equation}\label{bound-control-cost} N(\tau)\leq e^{\nu/\tau},\quad\forall\,0<\tau\leq T_0. \end{equation} Then, for any $T>0$, there exists a constant $R_{T}>0$ such that, for any $u_0\in B_{R_{T}, 1/2}(\varphi_1)$, there exists a control $p\in L^2(0,T)$ that steers system \eqref{u} to the ground state solution in time $T$, that is, $u(T;u_0,p)=\psi_1(T)$.
Moreover, the following estimate holds \begin{equation}\label{boundL2norm-p} \norm{p}_{L^2(0,T)}\leq \frac{e^{-\pi^2\Gamma_0/T}}{e^{2\pi^2\Gamma_0/(3T)}-1}, \end{equation} where \begin{equation}\label{Gamma0} \Gamma_0:=2\nu+\max\{\log D, 0\} \end{equation} \begin{equation*} R_T:=e^{-6\Gamma_0/T_1} \end{equation*} with \begin{equation*} D:=2\sqrt{2}C_Be^{C_B\left(\frac{5}{4}C_B+1\right)+1}\left(\max\left\{1+\frac{3}{2}C^2_B,\frac{C^2_B}{2}(1+\lambda_1^{1/2})^2(5+3C^2_B)\right\}\right)^{1/2} \end{equation*} \begin{equation}\label{Talpha} T_1=\min\{\frac{6}{\pi^2}T,1,T_0\}. \end{equation} \end{thm} \begin{oss} \emph{Let us notice that if $A:D(A)\subseteq X\to X$ satisfies \begin{equation*} \exists\,\sigma>0\,:\,\langle Ax,x\rangle\geq-\sigma\norm{x}^2,\,\forall\,x\in D(A) \end{equation*} instead of item (b) of \eqref{ipA}, and \begin{equation*} \exists\,\lambda>\sigma\,:\, (\lambda I+A)^{-1}:X\to X\text{ is compact} \end{equation*} instead of (c), then it is always possible to apply Theorem \ref{teo-contr-B-unb}, and prove the controllability of $u$ to the associated ground state solution $\psi_1(t)=e^{\sigma t}\varphi_1$ in any positive time $T>0$, if $D((A+\sigma I)^\frac{1}{2})\hookrightarrow D(B)$. } \emph{Indeed, by the change of variable $z(t;u_0,p)=e^{-\sigma t}u(t;u_0,p)$, one has that $z$ solves the following problem \begin{equation}\label{pb-Asigma} \begin{cases} z'(t)+A_\sigma z(t)+p(t)Bz(t)=0\\ z(0)=u_0 \end{cases} \end{equation} where $A_\sigma=A+\sigma I$ satisfies \eqref{ipA}, and where $\sigma=-\lambda_1$}. \emph{We can then apply Theorem \eqref{teo-contr-B-unb} and deduce that the solution of \eqref{pb-Asigma} is controllable in time $T$ to the first eigensolution $\psi_1^\sigma(t)\equiv\varphi_1$. Finally, from this latter result we get \begin{equation*} u(T)-\psi_1(T)=e^{\sigma T}z(T)-e^{\sigma T}\varphi_1=e^{\sigma T}(z(T)-\varphi_1)=0 \end{equation*} which implies the controllability of $u$ to the ground state solution $\psi_1$.} \end{oss} \subsection{Proof of Theorem \ref{teo-contr-B-unb}} Our aim is to show the controllability of system \eqref{u} to the ground state solution $\psi_1(t)=e^{-\lambda_1 t}\varphi_1$, that is the solution of \eqref{u} when $p=0$ and $u_0=\varphi_1$. We recall that $\{\varphi_k\}_{k\in{\mathbb{N}}^*}$ is a basis of $X$ of orthonormal eigenfunctions of the operator $A$, associated to the eigenvalues $\{\lambda_k\}_{k\in{\mathbb{N}}^*}$: $A\varphi_k=\lambda_k\varphi_k$ for all $k\in{\mathbb{N}}^*$. The proof of Theorem \ref{teo-contr-B-unb} is divided into two parts: we first consider the case $\lambda_1=0$ and prove the controllability result to the corresponding stationary eigensolution $\psi_1(t)\equiv\varphi_1$. Then, we recover the result for the case $\lambda_1>0$ in the second part. \subsubsection{Case $\lambda_1=0$} We define the constant \begin{equation}\label{Tf} T_f:=\min\{T,\frac{\pi^2}{6},\frac{\pi^2}{6}T_0\}, \end{equation} where $T>0$ and $T_0$ is defined in \eqref{bound-control-cost}. In what follows we construct a control $p\in L^2(0,T_f)$ which drives the solution of \eqref{u} to $\psi_1$ in time $T_f$. Set \begin{equation}\label{T1} T_1:=\frac{6}{\pi^2}T_f. \end{equation} It is easy to see that $0<T_1\leq 1$. We now define the sequence $\{T_j\}_{j\in{\mathbb{N}}^*}$ by \begin{equation}\label{Tj} T_j:=\frac{T_1}{j^2}, \end{equation} and the time steps \begin{equation}\label{taun} \tau_n=\sum_{j=1}^n T_j,\qquad\forall\, n\in{\mathbb{N}}, \end{equation} with the convention that $\sum_{j=1}^0T_j=0$. 
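As a concrete illustration of these definitions (a numerical example of ours, not part of the original argument): if $T\geq\pi^2/6$ and $T_0\geq1$, then $T_f=\pi^2/6$, $T_1=1$, and the time steps are simply \begin{equation*} T_j=\frac{1}{j^2},\qquad \tau_1=1,\quad\tau_2=\frac{5}{4},\quad\tau_3=\frac{49}{36},\quad\dots \end{equation*}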
Notice that $\sum_{j=1}^\infty T_j=\frac{\pi^2}{6}T_1=T_f$. Set $v:=u-\varphi_1$. Then, for any $0\leq s_0\leq s_1$, $v$ solves the following bilinear control problem \begin{equation}\label{eq-v} \begin{cases} v'(t)+Av(t)+p(t)Bv(t)+p(t)B\varphi_1=0&t\in[s_0,s_1]\\ v(s_0)=v_0 \end{cases} \end{equation} with $v_0=u(s_0)-\varphi_1$. We denote by $v(\cdot; v_0,s_0,p)$ the solution of \eqref{eq-v} associated to the initial condition $v_0$, attained at $s_0$, and control $p$. Observe that showing the controllability of $u$ to $\varphi_1$ in time $T_f$ is equivalent to proving null controllability for the solution $v$ of \eqref{eq-v}: $v(T_f;u_0-\varphi_1,0,p)=0$. We follow the strategy of the proof of \cite[Theorem 1.1]{acue}, which consists first in estimating the solution of \eqref{eq-v} at time $T_1$ by the square of the norm of the initial condition, thanks to the construction of a suitable control $p_1\in L^2(0,T_1)$. Then, the same procedure is iterated on the consecutive time intervals $[\tau_{n-1},\tau_n]$, on each of which we build a control $p_n\in L^2(\tau_{n-1},\tau_n)$ such that, setting \begin{equation}\label{qn-vn} \begin{array}{ll} q_n(t):=\sum_{j=1}^np_j(t)\chi_{[\tau_{j-1},\tau_j]}(t),\\ v_n:=v(\tau_n;v_0,0,q_n), \end{array} \end{equation} it holds that \begin{equation}\label{properties} \begin{array}{ll} 1.& \norm{p_n}_{L^2(\tau_{n-1},\tau_n)}\leq N(T_n)\norm{v_{n-1}},\\ 2.& y(\tau_n;v_{n-1},\tau_{n-1},p_n)=0,\\ 3.& \norm{v(\tau_n;v_{n-1},\tau_{n-1},p_n)}_{1/2}\leq e^{(\sum_{j=1}^n 2^{n-j}j^2-6\cdot 2^n)\Gamma_0/T_1},\\ 4.& \norm{v(\tau_n;v_{n-1},\tau_{n-1},p_n)}_{1/2}\leq\prod_{j=1}^nK(T_j)^{2^{n-j}}\norm{v_0}^{2^n}_{1/2}, \end{array} \end{equation} where $y(\cdot;v_{n-1},\tau_{n-1},p_n)$ is the solution of \eqref{ys0s1} in $[\tau_{n-1},\tau_n]$ with initial condition $v_{n-1}$ and control $p_n$. We recall that $K(\cdot)$ is defined in \eqref{K}. \textbf{First step}: we consider problem \eqref{eq-v} with $[s_0,s_1]=[0,T_1]$. Since by hypothesis the pair $\{A,B\}$ is $1$-null controllable in any time $T>0$, for any $v_0\in D(A^{1/2})$ there exists a control $p_1\in L^2(0,T_1)$ such that \begin{equation*} \norm{p_1}_{L^2(0,T_1)}\leq N(T_1)\norm{v_0}\quad\text{and}\quad y(T_1;v_0,0,p_1)=0 \end{equation*} with $N(\cdot)$ the control cost, which satisfies \eqref{bound-control-cost}, and $y(\cdot;v_0,0,p_1)$ the solution of \eqref{ys0s1} in $[s_0,s_1]=[0,T_1]$. Therefore, 1. and 2. of \eqref{properties} are satisfied. We now apply Proposition \ref{prop38} with $N_T:=N(T_1)$ and we obtain \begin{equation*} \sup_{t\in[0,T_1]}\norm{v(t)}^2_{1/2}\leq C_{1,1}(T_1,\norm{v_0}_{1/2})\norm{v_0}^2_{1/2} \end{equation*} with $C_{1,1}$ defined in \eqref{C1j}. In order to prove 3. and 4. of \eqref{properties} we introduce the function $w(t):=v(t;v_0,0,p_1)-y(t;v_0,0,p_1)$. It is easy to see that $w$ solves \eqref{w} with $T=T_1$ and $p=p_1$. We then apply Proposition \ref{prop39} and we deduce that if \begin{equation}\label{condition-on-v_0} N(T_1)\norm{v_0}_{1/2}\leq 1 \end{equation} then \begin{equation}\label{estimate-w} \norm{w(T_1;0,p_1)}_{1/2}=\norm{v(T_1;v_0,0,p_1)}_{1/2}\leq K(T_1)\norm{v_0}^2_{1/2} \end{equation} where $K(\cdot)$ is defined in \eqref{K}. Observe that, thanks to the definition of $T_1$ and to \eqref{bound-control-cost}, there exists a constant $\Gamma_0>\nu$ such that \begin{equation}\label{boundK} K(\tau)\leq e^{\Gamma_0/\tau},\quad 0<\tau\leq T_1. \end{equation} A possible choice of $\Gamma_0$ is given by \eqref{Gamma0}.
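The mechanism behind \eqref{boundK} can be sketched as follows (a rough justification, with constants not tracked precisely): for $0<\tau\leq T_1$ (recall that $T_1\leq 1$ and $T_1\leq T_0$), the factor $e^{C_4(\tau)}$ in \eqref{K} is bounded by a constant depending only on $C_B$, while $C_5(\tau)\leq c\,e^{2\nu/\tau}$ by \eqref{bound-control-cost}, so that \begin{equation*} K(\tau)^2\leq D^2e^{4\nu/\tau} \end{equation*} for some $D>0$ depending only on $C_B$ and $\lambda_1$ (compatible with the explicit constant $D$ of Theorem \ref{teo-contr-B-unb}). Since $\tau\leq1$, one may absorb $D$ into the exponential, $D\leq e^{\max\{\log D,0\}/\tau}$, which gives $K(\tau)\leq e^{(2\nu+\max\{\log D,0\})/\tau}$.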
Let $R_T=e^{-6\Gamma_0/T_1}$, with $T_1$ defined as in \eqref{T1}. Let us prove that if $v_0\in B_{R_T,1/2}(0)$ then \eqref{condition-on-v_0} is satisfied: \begin{equation*} N(T_1)\norm{v_0}\leq N(T_1)\norm{v_0}_{1/2}\leq e^{\nu/T_1}e^{-6\Gamma_0/T_1}\leq e^{-5\Gamma_0/T_1}\leq 1 \end{equation*} where we have used that $\Gamma_0>\nu$, so that \eqref{estimate-w} holds. Thus, from \eqref{estimate-w} we deduce that \begin{equation*} \norm{v(T_1;v_0,0,p_1)}_{1/2}\leq K(T_1)\norm{v_{0}}^2_{1/2}\leq e^{\Gamma_0/T_1}e^{-12\Gamma_0/T_1}=e^{-11\Gamma_0/T_1} \end{equation*} which proves 3. and 4. of \eqref{properties}. \textbf{Iterative step}: for the sake of completeness we report the proof of \cite[Section 3.1.2]{acue}, adapted to the current functional setting. Suppose that for every $j=1,\dots,n-1$ we have built controls $p_j\in L^2(\tau_{j-1},\tau_j)$ such that \eqref{properties} is satisfied. At step $n-1$ we have constructed $p_{n-1}\in L^2(\tau_{n-2},\tau_{n-1})$ such that \begin{equation}\label{propertiesn-1} \begin{array}{ll} 1.& \norm{p_{n-1}}_{L^2(\tau_{n-2},\tau_{n-1})}\leq N(T_{n-1})\norm{v_{n-2}},\\ 2.& y(\tau_{n-1};v_{n-2},\tau_{n-2},p_{n-1})=0,\\ 3.& \norm{v(\tau_{n-1};v_{n-2},\tau_{n-2},p_{n-1})}_{1/2}\leq e^{(\sum_{j=1}^{n-1} 2^{n-1-j}j^2-6\cdot 2^{n-1})\Gamma_0/T_1},\\ 4.& \norm{v(\tau_{n-1};v_{n-2},\tau_{n-2},p_{n-1})}_{1/2}\leq\prod_{j=1}^{n-1}K(T_j)^{2^{n-1-j}}\norm{v_0}^{2^{n-1}}_{1/2}. \end{array} \end{equation} Let us prove that there exists $p_n\in L^2(\tau_{n-1},\tau_n)$ satisfying \eqref{properties}. First, we define $q_{n-1}$ and $v_{n-1}$ as in \eqref{qn-vn}. Consider then the problem \begin{equation}\label{vn-1} \begin{cases} v'(t)+Av(t)+p(t)Bv(t)+p(t)B\varphi_1=0&t\in[\tau_{n-1},\tau_n]\\ v(\tau_{n-1})=v_{n-1} \end{cases} \end{equation} where the control $p$ has to be properly chosen. We apply the change of variables $s=t-\tau_{n-1}$, which shifts the problem to the interval $[0,T_n]$. By introducing the new variables $\tilde{v}(s)=v(s+\tau_{n-1})$ and $\tilde{p}(s)=p(s+\tau_{n-1})$, problem \eqref{vn-1} becomes \begin{equation}\label{tildevn-1} \begin{cases} \tilde{v}'(t)+A\tilde{v}(t)+\tilde{p}(t)B\tilde{v}(t)+\tilde{p}(t)B\varphi_1=0&t\in[0,T_n]\\ \tilde{v}(0)=v_{n-1}. \end{cases} \end{equation} Since $\{A,B\}$ is $1$-null controllable in any positive time, there exists $\tilde{p}_n\in L^2(0,T_n)$ such that \begin{equation*} \norm{\tilde{p}_n}_{L^2(0,T_n)}\leq N(T_n)\norm{v_{n-1}}\quad\text{and}\quad \tilde{y}(T_n;v_{n-1},0,\tilde{p}_n)=0 \end{equation*} where $\tilde{y}(\cdot;v_{n-1},0,\tilde{p}_n)$ denotes the solution of \eqref{ys0s1} on $[0,T_n]$. Moreover, since $v_{n-1}=v(\tau_{n-1};v_0,0,q_{n-1})=v(\tau_{n-1};v_{n-2},\tau_{n-2},p_{n-1})$, we deduce that \begin{equation}\label{NTnvn-1} \begin{split} N(T_n)\norm{v_{n-1}}_{1/2}&\leq e^{\nu n^2/T_1}e^{\left(\sum_{j=1}^{n-1}2^{n-1-j}j^2-6\cdot 2^{n-1}\right)\Gamma_0/T_1}\\ &\leq e^{\left(n^2-(n-1)^2-4(n-1)+6\cdot 2^{n-1}-6-6\cdot 2^{n-1}\right)\Gamma_0/T_1}\\ &=e^{-\left(2n+3\right)\Gamma_0/T_1}\leq1 \end{split} \end{equation} from 3. of \eqref{propertiesn-1}. Observe that we have used that $\nu\leq \Gamma_0$ and the identity \begin{equation*} \sum_{j=0}^n\frac{j^2}{2^j}=2^{-n}\left(-n^2-4n+6(2^n-1)\right),\quad n\geq0. \end{equation*} We now choose $\tilde{p}=\tilde{p}_n$ in \eqref{tildevn-1} and we keep denoting by $\tilde{v}$ the corresponding solution. Define $w=\tilde{v}-\tilde{y}$ and observe that $w$ solves \eqref{w} with $v=\tilde{v}$, $T=T_n$ and control $p=\tilde{p}_n$.
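As a quick sanity check, the summation identity invoked above is equivalent to $\sum_{j=0}^n j^22^{-j}=6-2^{-n}(n^2+4n+6)$; this holds for $n=0$ and propagates by induction, since \begin{equation*} 6-2^{-n}(n^2+4n+6)+\frac{(n+1)^2}{2^{n+1}}=6-2^{-(n+1)}\left(2(n^2+4n+6)-(n+1)^2\right)=6-2^{-(n+1)}\left((n+1)^2+4(n+1)+6\right). \end{equation*}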
Thanks to \eqref{NTnvn-1}, we can apply Proposition \ref{prop39} with $T=T_n$ and we deduce that \begin{equation*} \norm{w(T_n;0,\tilde{p}_n)}_{1/2}=\norm{\tilde{v}(T_n;v_{n-1},0,\tilde{p}_n)}_{1/2}\leq K(T_n)\norm{v_{n-1}}_{1/2}^2. \end{equation*} We define $p_n(t):=\tilde{p}_n(t-\tau_{n-1})$. By shifting the problem back to the interval $[\tau_{n-1},\tau_n]$ we obtain \begin{equation*} \norm{p_n}_{L^2(\tau_{n-1},\tau_n)}\leq N(T_n)\norm{v_{n-1}}_{1/2}\quad\text{and}\quad y(\tau_n;v_{n-1},\tau_{n-1},p_n)=0 \end{equation*} and \begin{equation}\label{vnKvn-1} \norm{v(\tau_n;v_{n-1},\tau_{n-1},p_n)}_{1/2}\leq K(T_n)\norm{v_{n-1}}^2_{1/2}. \end{equation} Thus, the first two items of \eqref{properties} are proved. Using 3. of \eqref{propertiesn-1} and \eqref{boundK}, we deduce that \begin{equation*} \begin{split} \norm{v(\tau_n;v_{n-1},\tau_{n-1},p_n)}_{1/2}&\leq e^{\Gamma_0 n^2/T_1}\left[e^{\left(\sum_{j=1}^{n-1}2^{n-1-j}j^2-6\cdot 2^{n-1}\right)\Gamma_0/T_1}\right]^2\\ &=e^{\left(\sum_{j=1}^n2^{n-j}j^2-6\cdot 2^n\right)\Gamma_0/T_1}. \end{split} \end{equation*} Therefore, item 3. of \eqref{properties} is verified. Finally, using again \eqref{vnKvn-1} and 4. of \eqref{propertiesn-1} we get \begin{equation*} \begin{split} \norm{v(\tau_n;v_{n-1},\tau_{n-1},p_n)}_{1/2}&\leq K(T_n)\left[\prod_{j=1}^{n-1}K(T_j)^{2^{n-1-j}}\norm{v_0}^{2^{n-1}}_{1/2}\right]^{2}\\ &=\prod_{j=1}^nK(T_j)^{2^{n-j}}\norm{v_0}^{2^n}_{1/2}. \end{split} \end{equation*} This concludes the induction argument. We can now complete the proof of our theorem for the case $\lambda_1=0$. Notice that for all $n\in{\mathbb{N}}^*$ we have \begin{equation*} \begin{split} \norm{v(\tau_n;v_{n-1},\tau_{n-1},p_n)}_{1/2}&\leq \prod_{j=1}^nK(T_j)^{2^{n-j}}\norm{v_0}^{2^n}_{1/2}\leq \prod_{j=1}^n\left(e^{\Gamma_0j^2/T_1}\right)^{2^{n-j}}\norm{v_0}^{2^n}_{1/2}\\ &\leq e^{\Gamma_02^n/T_1\sum_{j=1}^n j^2/2^j}\norm{v_0}^{2^n}_{1/2}\leq \left(e^{6\Gamma_0/T_1}\norm{v_0}_{1/2}\right)^{2^n}. \end{split} \end{equation*} The above inequality is equivalent to \begin{equation*} \norm{v(\tau_n;v_{0},0,q_n)}_{1/2}\leq\left(e^{6\Gamma_0/T_1}\norm{v_0}_{1/2}\right)^{2^n} \end{equation*} where $q_n$ is defined in \eqref{qn-vn}. Taking the limit as $n\to +\infty$ we deduce that \begin{equation*} \norm{u(T_f;u_0,q_\infty)-\varphi_1}_{1/2}=\norm{v(T_f;v_0,0,q_\infty)}_{1/2}=0 \end{equation*} where $T_f$ is defined in \eqref{Tf}. Indeed, by hypothesis $u_0\in B_{R_T,1/2}(\varphi_1)$ with $R_T=e^{-6\Gamma_0/T_1}$, and so $\norm{v_0}_{1/2}<e^{-6\Gamma_0/T_1}$. This means that we have constructed a control $p\in L^2_{loc}([0,+\infty))$, defined as follows \begin{equation*} p(t)=\begin{cases} \sum_{n=1}^\infty p_n(t)\chi_{[\tau_{n-1},\tau_n]}(t)&t\in(0,T_f]\\ 0&t>T_f, \end{cases} \end{equation*} that steers the solution $u$ of \eqref{u} to the ground state solution in a time $T_f$ which is less than or equal to $T$. Moreover, we can give an upper bound for the $L^2$-norm of the control: \begin{equation*} \begin{split} \norm{p}^2_{L^2(0,T)}&=\sum_{n=1}^\infty\norm{p_n}^2_{L^2(\tau_{n-1},\tau_n)}\leq \sum_{n=1}^\infty\left(N(T_n)\norm{v_{n-1}}_{1/2}\right)^2\\ &\leq \sum_{n=1}^\infty e^{-2(2n+3)\Gamma_0/T_1}\leq\frac{e^{-6\Gamma_0/T_1}}{e^{4\Gamma_0/T_1}-1}=\frac{e^{-\pi^2\Gamma_0/T_f}}{e^{2\pi^2\Gamma_0/(3T_f)}-1} \end{split} \end{equation*} where we have used 1. of \eqref{properties} and \eqref{NTnvn-1}. Recalling that $T_f\leq T$, we obtain \eqref{boundL2norm-p}. \subsubsection{Case $\lambda_1>0$} If $\lambda_1>0$ the result is easily deduced from the previous case.
Indeed, we introduce the operator \begin{equation*} A_1:=A-\lambda_1 I. \end{equation*} Observe that \begin{itemize} \item $A_1$ satisfies \eqref{ipA}, \item $A_1$ has the same eigenfunctions as $A$, $\{\varphi_j\}_{j\in{\mathbb{N}}^*}$, and the first eigenvalue of $A_1$ is equal to $\mu_1=\lambda_1-\lambda_1=0$, \item $\{A_1,B\}$ is $1$-null controllable with associated control cost $N_1(\cdot)$ that satisfies \eqref{bound-control-cost}. \end{itemize} Hence, once the exact controllability in time $T$ of the following problem \begin{equation*} \left\{\begin{array}{ll} z'(t)+A_1z(t)+p(t)Bz(t)=0,& t\in [0,T]\\\\ z(0)=u_0 \end{array}\right. \end{equation*} to the associated ground state solution $\tilde{\psi}_1=e^{-\mu_1 t}\varphi_1=\varphi_1$ has been proved, we introduce the function $u(t):=z(t)e^{-\lambda_1 t}$, which is the solution of \begin{equation}\label{eq-u-theo} \left\{\begin{array}{ll} u'(t)+Au(t)+p(t)Bu(t)=0,& t\in [0,T]\\\\ u(0)=u_0 \end{array}\right. \end{equation} and satisfies \begin{equation*} \norm{u\left(T\right)-\psi_1\left(T\right)}=\norm{e^{-\lambda_1T}z\left(T\right)-e^{-\lambda_1T}\varphi_1}=e^{-\lambda_1T}\norm{z\left(T\right)-\varphi_1}=0. \end{equation*} Therefore, we have shown that \eqref{eq-u-theo} is exactly controllable to the ground state solution $\psi_1(t)=e^{-\lambda_1 t}\varphi_1$ in time $T$. \section{Semi-global results}\label{SemiGlobal} In this section we present two semi-global results for the exact controllability to the ground state solution of problem \eqref{u}. In the first one, Theorem \ref{teoglobal}, we show that the solution of \eqref{u} with initial condition $u_0$ lying in a suitable strip (see condition \eqref{ipu0}) reaches the ground state in a finite time $T_R$. \begin{thm}\label{teoglobal} Let $A:D(A)\subset X\to X$ be a densely defined linear operator such that \eqref{ipA} holds, and let $B:D(B)\subset X\to X$ be an unbounded linear operator that verifies \eqref{ipB}. Let $\{A,B\}$ be a $1$-null controllable pair with control cost that satisfies \eqref{bound-control-cost}. Then there exists a constant $r_1>0$ such that for any $R>0$ there exists $T_{R}>0$ such that for all $u_0\in D(A^{1/2})$ that satisfy \begin{equation}\label{ipu0} \begin{array}{l} \left|\langle u_0,\varphi_1\rangle_{1/2}-1\right|< r_1,\\\\ \norm{u_0-\langle u_0,\varphi_1\rangle_{1/2}\;\varphi_1}_{1/2}\leq R, \end{array} \end{equation} problem \eqref{u} is exactly controllable to the ground state solution $\psi_1(t)=e^{-\lambda_1 t}\varphi_1$ in time $T_{R}$. \end{thm} Our second semi-global result, Theorem \ref{teoglobal0} below, ensures the exact controllability of all initial states $u_0\in D(A^{1/2})\setminus \varphi_1^\perp$ to the evolution of their orthogonal projection along the ground state, defined by \begin{equation}\label{exactphi1} \phi_1(t)=\langle u_0,\varphi_1\rangle_{1/2}\; \psi_1(t), \quad\forall\, t \geq 0, \end{equation} where $\psi_1$ is the ground state solution. \begin{thm}\label{teoglobal0} Let $A:D(A)\subset X\to X$ be a densely defined linear operator such that \eqref{ipA} holds, and let $B:D(B)\subset X\to X$ be an unbounded linear operator that verifies \eqref{ipB}. Let $\{A,B\}$ be a $1$-null controllable pair with control cost that satisfies \eqref{bound-control-cost}.
Then, for any $R>0$ there exists $T_R>0$ such that for all $u_0\in D(A^{1/2})$ satisfying \begin{equation}\label{cone} \norm{u_0-\langle u_0,\varphi_1\rangle_{1/2}\;\varphi_1}_{1/2}\leq R \left|\langle u_0,\varphi_1\rangle_{1/2}\,\right| \end{equation} system \eqref{u} is exactly controllable to $\phi_1$, defined in \eqref{exactphi1}, in time $T_R$. \end{thm} To prove Theorems \ref{teoglobal} and \ref{teoglobal0}, one may follow the strategies described in \cite[Section 5]{acue}. The only difference with respect to \cite[Section 5]{acue} is that, in the current setting, $u_0\in D(A^{1/2})$, and this leads to the following definition of the controllability time \begin{equation*} T_R:=1+\frac{1}{\lambda_2}\log\left(\frac{R^2}{r^2_1}\right). \end{equation*} \section{Applications}\label{Applications} In this section we discuss applications of Theorem \ref{teo-contr-B-unb} to parabolic equations. Let us first recall a result from \cite{acue} which provides sufficient conditions for a pair of linear operators $A$ and $B$ to be $j$-null controllable with control cost that fulfils \eqref{bound-control-cost}. \begin{thm}\label{Thm-suff-cond} Let $A:D(A)\subset X\to X$ be such that \begin{equation}\label{ipA-sigma} \begin{array}{ll} (a) & A \mbox{ is self-adjoint},\\ (b) &\exists\,\sigma>0\,:\,\langle Ax,x\rangle \geq-\sigma \norm{x}^2,\,\, \forall\, x\in D(A),\\ (c) &\exists\,\lambda>\sigma\,:\,(\lambda I+A)^{-1}:X\to X \mbox{ is compact} \end{array} \end{equation} and suppose that there exists a constant $\alpha>0$ for which the eigenvalues of $A$ satisfy the gap condition \begin{equation}\label{gap} \sqrt{\lambda_{k+1}-\lambda_1}-\sqrt{\lambda_k-\lambda_1}\geq \alpha,\quad\forall\, k\in {\mathbb{N}}^*. \end{equation} Let $B:D(B)\subset X\to X$ be a linear operator such that there exist $b,q>0$ for which \begin{equation}\label{ipB-suff-cond} \begin{array}{l} \langle B\varphi_j,\varphi_j\rangle\neq0\quad\mbox{and}\quad\left|\lambda_k-\lambda_j\right|^q|\langle B\varphi_j,\varphi_k\rangle|\geq b,\quad\forall\,k\neq j. \end{array} \end{equation} Then, the pair $\{A,B\}$ is $j$-null controllable in any time $T>0$ with control cost $N(T)$ that satisfies \eqref{bound-control-cost}. \end{thm} In particular, for accretive operators ($\sigma=0$), hypothesis \eqref{gap} can be replaced by \begin{equation}\label{gap-no-l1} \sqrt{\lambda_{k+1}}-\sqrt{\lambda_k}\geq \alpha,\quad\forall\, k\in {\mathbb{N}}^*, \end{equation} see \cite[Remark 6.1]{acue}. In order to verify the property of $1$-null controllability required in Theorem \ref{teo-contr-B-unb}, we will check the validity of the assumptions of Theorem \ref{Thm-suff-cond} for $j=1$. \subsection{Fokker--Planck equation}\label{ex1} The Fokker-Planck equation describes the evolution of the probability density $u(t,x)$ of a real-valued random variable $X_t$ associated with an It\^{o} stochastic differential equation driven by a standard Wiener process $W_t$ \begin{equation}\label{SDE} \begin{cases} dX_t=\nu(t,X_t)dt+\sigma(t,X_t)dW_t\\ X(t=0)=X_0 \end{cases} \end{equation} with drift $\nu(t,x)$, diffusion coefficient $D(t,x)=\sigma^2(t,x)/2$ and initial condition $X_0$. We recall that, given any $a\leq b$ and denoting by $u(t,x)$ the probability density associated to $X_t$, the following identity holds \begin{equation*} \mathbb{P}(a\leq X_t\leq b )=\int_a^b u(t,x)dx. \end{equation*} Under suitable assumptions on the coefficients $\nu$ and $\sigma$, the equation satisfied by $u(t,x)$, named after A. D. Fokker and M.
Planck, is the following \begin{equation*} \begin{cases} \frac{\partial}{\partial t}u(t,x)=\frac{\partial^2}{\partial x^2}\left(D(t,x)u(t,x)\right)-\frac{\partial}{\partial x}\left(\nu(t,x)u(t,x)\right),&(t,x)\in[0,T]\times {\mathbb{R}}\\ u(0,x)=u_0(x) \end{cases} \end{equation*} where $u_0$ is the density associated to $X_0$. Physically, the probability density $u(t,x)$ can be interpreted as a quantity proportional to the number of particles in a flow of an abstract substance. For instance, it can reflect the concentration of this substance at the point $x$ at time $t$. Let us first recall that \eqref{SDE} admits a strong solution (see \cite[Definition 9.1]{baldi}) which is pathwise unique (see \cite[Definition 9.4]{baldi}) under the following assumptions \begin{equation*}\label{BaldiH1} \text{(H1)} \begin{cases} \mbox{1. } (t,x) \mapsto \nu(t,x) \mbox{ and } (t,x) \mapsto \sigma(t,x) \mbox{ are measurable functions on } [0,T] \times \mathbb{R}\\ \mbox{2. } \exists\,M>0\,:\,|\nu(t,x)|\leqslant M(1+|x|) \mbox{ and } |\sigma(t,x)|\leqslant M(1+|x|)\,, \ \forall \ (t,x) \in [0,T] \times \mathbb{R}\\ \mbox{3. } \exists\,L>0\,:\,|\nu(t,x)-\nu(t,y)|\leqslant L|x-y| \mbox{ and } |\sigma(t,x)-\sigma(t,y)|\leqslant L|x-y|\,, \ \forall \ t \in [0,T]\,, \ \forall \ x,y \in \mathbb{R} \end{cases} \end{equation*} (see \cite[Theorem 9.2]{baldi}). We are interested in studying the possibility of finding a drift $\nu$ such that the probability density $u$ reaches the associated ground state solution in finite time. Moreover, as suggested by the results of the previous sections, we would like to take a drift of the form \begin{equation*} \nu(t,x)=p(t)\mu(x),\qquad (t,x)\in[0,T]\times {\mathbb{R}}. \end{equation*} However, the sublinear growth assumption $2.$ of (H1) on $\nu$ requires the scalar control $p$ to be essentially bounded. Since our controls $p$ are only locally square integrable, we need to weaken (H1). Thus, in Appendix \ref{AppendixB}, we adapt the proof of the existence and uniqueness of a strong solution of \eqref{SDE} under the following weaker assumptions: \begin{equation*}\label{ABCUH2} \text{(H2)} \begin{cases} \mbox{1. } (t,x) \mapsto \sigma(t,x) \mbox{ is a measurable function on } [0,T] \times {\mathbb{R}}\\ \mbox{2. } p \in L^2_{loc}(\mathbb{R})\\ \mbox{3. } \exists\,M>0\,:\,|\mu(x)|\leqslant M(1+|x|) \mbox{ and } |\sigma(t,x)|\leqslant M(1+|x|)\,, \ \forall \ (t,x) \in [0,T] \times {\mathbb{R}}\\ \mbox{4. } \exists\,L>0\,:\,|\mu(x)-\mu(y)|\leqslant L|x-y| \,, \; |\sigma(t,x)-\sigma(t,y)|\leqslant L|x-y|\,, \; \forall \ t \in [0,T]\,, \; \forall \ x,y \in {\mathbb{R}} \end{cases} \end{equation*} From now on we will consider a constant diffusion $\sigma(t,x)\equiv\sqrt{2}$ and a drift of the form $\nu(t,x)=p(t)\mu(x)$, where $p\in L^2(0,T)$ and $\mu:[0,1]\to {\mathbb{R}}$ is at least Lipschitz continuous. Then, by extending $\mu(\cdot)$ outside the interval $[0,1]$ as follows \begin{equation}\label{extension-mu} \tilde{\mu}(x)=\begin{cases} \mu(0)&x\leq0\\ \mu(x)&0<x<1\\ \mu(1)& x\geq1 \end{cases} \end{equation} it is clear that $\tilde{\mu}$ satisfies assumption (H2). The Fokker-Planck equation can also be studied on bounded domains under suitable boundary conditions, such as perfectly reflecting, partially reflecting, and absorbing (non-reflecting) boundary conditions.
We now describe such conditions for the equation \begin{equation*} \begin{cases} \frac{\partial}{\partial t}u(t,x)=\frac{\partial^2}{\partial x^2}u(t,x)-p(t)\frac{\partial}{\partial x}\left(\mu(x)u(t,x)\right),&(t,x)\in[0,T]\times [0,1]\\ u(0,x)=u_0(x) \end{cases} \end{equation*} together with their physical aspects and the mathematical difficulties they generate. \vskip 4mm $\bullet$ The perfectly reflecting boundary conditions are given by: \begin{equation} \frac{\partial}{\partial x}u(t,1)-p(t)\mu(1)u(t,1)=\frac{\partial}{\partial x}u(t,0)-p(t)\mu(0)u(t,0)=0. \end{equation} Let us note that, for such boundary conditions, the total mass is conserved in the interval $[0,1]$. Thus, if the initial datum $u_0$ has total mass $1$, then, at every time, the probability density $u$ satisfies \begin{equation}\label{prob=1} \int_0^1 u(t,x)dx=1\qquad\forall\,t\in (0,T), \end{equation} that is, the probability to find particles in the interval $[0,1]$ is equal to 1. Indeed, by using the equation solved by $u$ we get \begin{equation*} \int_0^1 \frac{\partial}{\partial t}u(t,x)dx=\int_0^1\left(\frac{\partial^2}{\partial x^2}u(t,x)-p(t)\frac{\partial}{\partial x}\left(\mu(x)u(t,x)\right)\right)dx=0, \end{equation*} hence \begin{equation*} \frac{\partial}{\partial t}\int_0^1 u(t,x)dx=0\qquad\forall\,t\in(0,T), \end{equation*} which implies condition \eqref{prob=1}. However, these perfect boundary conditions have a great impact on the functional framework in which one can set up the abstract formulation of the Fokker-Planck equation: the domain of the abstract operator $A$ would have to include the dependence on time and on the control $p$, that is, one would have to handle $D(A(t,p))=\{u(t,\cdot) \in H^2(0,1), u_x(t,1)=p(t)\mu(1)u(t,1)\,, u_x(t,0)=p(t)\mu(0)u(t,0)\}$ for all $t \in [0,T]$. As far as we know, such mathematical difficulties have not yet been studied in bilinear control, the strongest difficulty being that the domain itself depends on the scalar control $p$. \vskip 4mm $\bullet$ The partially reflecting boundary conditions (i.e., reflecting only the diffusive part of the process, the Brownian motion) are given by $$ \frac{\partial}{\partial x}u(t,1)=\frac{\partial}{\partial x}u(t,0)=0, $$ which leads to the following problem \begin{equation}\label{FP-Neumann} \left\{\begin{array}{ll} \frac{\partial}{\partial t}u(t,x)-\frac{\partial^2}{\partial x^2} u(t,x)+p(t)\frac{\partial}{\partial x}\left(\mu(x)u(t,x)\right)=0&(t,x)\in[0,T]\times[0,1]\\\\ \frac{\partial}{\partial x}u(t,1)=\frac{\partial}{\partial x}u(t,0)=0&t\in[0,T]\\\\ u(0,x)=u_0(x)&x\in[0,1] \end{array}\right. \end{equation} However, when considering such boundary conditions, the total mass is, in general, no longer conserved through time. Indeed, considering again the equation solved by $u$, this time we find that \begin{equation*} \int_0^1 \frac{\partial}{\partial t}u(t,x)dx=\int_0^1\left(\frac{\partial^2}{\partial x^2}u(t,x)-p(t)\frac{\partial}{\partial x}\left(\mu(x)u(t,x)\right)\right)dx=-p(t)[\mu(1)u(t,1)-\mu(0)u(t,0)]. \end{equation*} \vskip 4mm $\bullet$ The absorbing (Dirichlet) boundary conditions $$ u(t,1)=u(t,0)=0 $$ lead to the following problem \begin{equation}\label{FP-Dirichlet} \left\{\begin{array}{ll} \frac{\partial}{\partial t}u(t,x)-\frac{\partial^2}{\partial x^2} u(t,x)+p(t)\frac{\partial}{\partial x}\left(\mu(x)u(t,x)\right)=0&(t,x)\in[0,T]\times[0,1]\\\\ u(t,1)=u(t,0)=0&t\in[0,T]\\\\ u(0,x)=u_0(x)&x\in[0,1]. \end{array}\right.
\end{equation} The total density is again in general, not preserved \begin{equation*} \int_0^1 \frac{\partial}{\partial t}u(t,x)dx=\int_0^1\left(\frac{\partial^2}{\partial x^2}u(t,x)-p(t)\frac{\partial}{\partial x}\left(\mu(x)u(t,x)\right)\right)dx=\frac{\partial}{\partial x}u(t,1)-\frac{\partial}{\partial x}u(t,0). \end{equation*} In view of the mathematical difficulties generated by the perfectly reflecting boundary conditions, we shall consider in the present paper, only the partially reflecting boundary conditions, and the absorbing ones. \vskip 2mm Let us start by studying local controllability to the ground state for problem \eqref{FP-Neumann}, where our bilinear control will be the time dependent part of the drift $p(\cdot)$. We set $I=(0,1)$. We recast the problem in the general setting of \eqref{u} by introducing the operators $A$ and $B$ defined by \begin{equation*} D(A)=\left\{\varphi\in H^2(I)\,:\, \frac{\partial}{\partial x}\varphi(0)=\frac{\partial}{\partial x}\varphi(1)=0\right\},\quad A\varphi=-\frac{d^2\varphi}{dx^2} \end{equation*} \begin{equation*} D(B)=\left\{\varphi\in L^2(I)\,:\,\frac{d}{dx}(\mu\varphi)\in L^2(I)\right\},\quad B\varphi=\frac{d}{dx}\left(\mu\varphi\right) \end{equation*} where $\mu$ is a real-valued function in $H^2(I)$ to be chosen later on in order to fulfill the rank condition \eqref{ipB-suff-cond}. $A$ satisfies all the properties in \eqref{ipA} and its eigenvalues and eigenfunctions have the following explicit expressions \begin{equation*} \begin{array}{lll} \lambda_0=0,&\varphi_0=1\\\\ \lambda_k=(k\pi)^2,& \varphi_k(x)=\sqrt{2}\cos(k\pi x),& \forall\, k\geq1. \end{array} \end{equation*} It is straightforward to prove that the eigenvalues fulfill the required gap property. Indeed, \begin{equation*} \sqrt{\lambda_{k+1}}-\sqrt{\lambda_k}=(k+1)\pi-k\pi=\pi,\qquad \forall k\in {\mathbb{N}} \,, \end{equation*} so that \eqref{gap-no-l1} is satisfied. Observe that, in this context, we have an explicit description of the spaces $D(A^{s/2})$, see \cite[Section 4.3.3]{tri} for a general result. For example, for $s=1$ \begin{equation*} D(A^{1/2})=H^1(I). \end{equation*} In order to apply Theorem \ref{teo-contr-B-unb}, we have to check that $D(A^{1/2})\hookrightarrow D(B)$. This is easily proved as follows: \begin{equation*} ||\varphi||_{D(B)}=||(\mu\varphi)_x||_{L^2(I)}\leq C_\mu(||\varphi_x||_{L^2(I)}+||\varphi||_{L^2(I)})\leq C||\varphi||_{D(A^{1/2})} \end{equation*} for all $\varphi$ in $D(A^{1/2})$. \vskip 2mm To prove local controllability of \eqref{FP-Neumann} to the ground state solution $\psi_0(t,x)\equiv 1$, we want to use Theorem \ref{Thm-suff-cond} for $j=0$ (note also that, due to the Neumann boundary conditions, one has to consider $j=0$ instead of $j=1$ and for $k$ varying in $\mathbb{N}$ instead of $\mathbb{N}^{\ast}$) in order to apply Theorem \ref{teo-contr-B-unb}. Thus, we have to check that \eqref{ipB-suff-cond} is satisfied for all $k \in \mathbb{N}$. By definition of $B$, we have $B\varphi_0=\mu'$, hence one can observe that the choice $\mu(x)=x$ for all $x \in I$ leads to $\langle B\varphi_0,\varphi_k\rangle=0$, which is not suitable for our purposes. 
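As a quick numerical sanity check (illustration only, assuming the NumPy library), one can confirm this observation: for $\mu(x)=x$ one has $B\varphi_0=\mu'\equiv1$, and $\int_0^1\cos(k\pi x)\,dx=0$ for every $k\geq1$.
\begin{verbatim}
# Numerical sanity check: with mu(x) = x one has B phi_0 = mu' = 1, so the
# coefficients <B phi_0, phi_k> vanish for every k >= 1 and the rank
# condition fails for this choice of mu.
import numpy as np

x = np.linspace(0.0, 1.0, 20001)
Bphi0 = np.ones_like(x)                    # d/dx (x * phi_0) = 1
for k in range(1, 6):
    phi_k = np.sqrt(2.0) * np.cos(k * np.pi * x)
    print(k, np.trapz(Bphi0 * phi_k, x))   # all values are ~ 0
\end{verbatim}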
Let us first examine more precisely $\langle B\varphi_0,\varphi_k\rangle$ for all $k \in \mathbb{N}^{\ast}$: \begin{equation*} \begin{split} \langle B\varphi_0,\varphi_k\rangle&=\sqrt{2}\int_0^1\mu'(x)\cos(k\pi x)dx=\sqrt{2}\left.\mu'(x)\frac{\sin (k\pi x)}{k\pi}\right|^1_0-\sqrt{2}\int_0^1\mu''(x)\frac{\sin (k\pi x)}{k\pi}dx\\ &=\left.\sqrt{2}\mu''(x)\frac{\cos(k\pi x)}{(k\pi)^2}\right|^1_0-\sqrt{2}\int_0^1\mu'''(x)\frac{\cos(k\pi x)}{(k\pi)^2}dx\\ &=\frac{\sqrt{2}}{(k\pi)^2}\left[\mu''(1)(-1)^k-\mu''(0)\right]-\sqrt{2}\int_0^1\mu'''(x)\frac{\cos(k\pi x)}{(k\pi)^2}dx,\qquad\forall\,k\geq1. \end{split} \end{equation*} By the Riemann-Lebesgue Lemma, the last integral term on the right-hand side of the above identity converges to $0$ as $k$ goes to $+\infty$. Thus, if we choose $\mu$ such that $\mu''(1)\neq\pm \mu''(0)$, then there exists $k_1 \geq 1$ such that \begin{equation*} \exists\,b>0\,:\,\lambda_k|\langle B\varphi_0,\varphi_k\rangle|\geq b\,, \quad \mbox{ for all } k > k_1. \end{equation*} Furthermore, if $\mu(1)\neq \mu(0)$ we deduce that \begin{equation*} \langle B\varphi_0,\varphi_0\rangle=\int_0^1\mu'(x)dx\neq0. \end{equation*} Hence, if $\mu''(1)\neq\pm \mu''(0)$ and $\mu(1)\neq \mu(0)$, we have only to check that $\langle B\varphi_0,\varphi_k\rangle \neq 0$ for a finite range of $k \in \{1, \ldots, k_1\}$. To sum up, every function $\mu$ such that \begin{equation*} \left\{\begin{array}{l} \mu(1)\neq \mu(0)\\\\ \mu''(1)\neq \pm\mu''(0)\\\\ \langle B\varphi_0,\varphi_k\rangle\neq 0,\qquad k=1,\dots,k_1 \end{array} \right. \end{equation*} is suitable for our controllability purposes. For instance, any power $\mu(x)=x^n$, $x\in [0,1]$, with $n>2$ satisfies the above conditions at the boundary. Moreover, once extended as in \eqref{extension-mu}, $\mu$ clearly satisfies (H2)\footnote{We observe that by multiplying $\mu$ by a cut-off function, it is possible to construct a smooth extension of $\mu$ which satisfies (H2).}. So, one has just to check that $\langle B\varphi_0,\varphi_k\rangle\neq 0$ for $k=1,\dots,k_1$. For example, for $n=3$ one easily proves that \begin{equation*} \langle B\varphi_0,\varphi_k\rangle=\begin{cases} 6\sqrt{2}\frac{(-1)^k}{(k\pi)^2}& k\geq1\\\\ 1&k=0. \end{cases} \end{equation*} Another suitable choice is $\mu(x)=\sin(\alpha x)$ for all $x \in I$, where $\alpha$ is any positive real number chosen in $(0,+\infty) \backslash \pi \mathbb{N}$. In this case one can check that $ \mu''(1)=-\alpha^2\sin(\alpha) \neq 0 =\mu''(0)$, $\mu(1)= \sin(\alpha)\neq 0= \mu(0)$, and $$ \langle B\varphi_0,\varphi_k\rangle=\frac{\sqrt{2}}{\alpha^2-(k\pi)^2}(-1)^k\alpha^2 \sin(\alpha). $$ Thus, any choice for $\mu$ of the above forms meets all the required properties, and also the assumptions in (H2) for the well-posedness of the stochastic differential equation. Hence, thanks to Theorems \ref{Thm-suff-cond} and \ref{teo-contr-B-unb}, it is possible to build a control $p\in L^2(0,T)$ such that the solution of the Fokker--Planck equation \eqref{FP-Neumann}, with initial condition in a neighbourhood of $\varphi_0=1$, partially reflecting boundary conditions and drift $\nu(t,x)=p(t)\mu(x)$, is steered to $\psi_0=1$ in time $T$. This means that the probability to find the particle in the interval $[0,1]$ at time $T$ is equal to 1 (that is, the event happens almost surely) even though the walls are not perfectly reflecting. This is due to the appropriate choice of the drift.
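The closed-form coefficients above can be double-checked by quadrature; the following sketch (illustration only, assuming the NumPy library) compares the numerical values of $\langle B\varphi_0,\varphi_k\rangle$ with the formulas obtained for $\mu(x)=x^3$ and $\mu(x)=\sin(\alpha x)$.
\begin{verbatim}
# Quadrature check (illustration only) of the Fourier coefficients
#   <B phi_0, phi_k> = int_0^1 mu'(x) sqrt(2) cos(k pi x) dx
# against the closed forms above, for mu(x) = x^3 and mu(x) = sin(alpha x).
import numpy as np

x = np.linspace(0.0, 1.0, 200001)
def coeff(mu_prime, k):
    return np.trapz(mu_prime(x) * np.sqrt(2.0) * np.cos(k * np.pi * x), x)

alpha = 2.0                                # any value in (0, +oo) \ pi*N
for k in range(1, 6):
    cubic   = coeff(lambda s: 3.0 * s**2, k)
    sine    = coeff(lambda s: alpha * np.cos(alpha * s), k)
    cubic_f = 6.0 * np.sqrt(2.0) * (-1.0)**k / (k * np.pi)**2
    sine_f  = (np.sqrt(2.0) * (-1.0)**k * alpha**2 * np.sin(alpha)
               / (alpha**2 - (k * np.pi)**2))
    print(k, cubic - cubic_f, sine - sine_f)   # differences ~ 0
\end{verbatim}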
We move now to the Fokker-Planck equation with absorbing (Dirichlet) boundary conditions. In this case the eigenvalues and eigenfunctions of the Laplacian are the following \begin{equation*} \lambda_k=(k\pi)^2,\quad \varphi_k(x)=\sqrt{2}\sin(k\pi x),\quad \forall k\in{\mathbb{N}}^*. \end{equation*} Since we have that \begin{equation*} \int_0^1\varphi_1(x)dx=\sqrt{2}\int_0^1 \sin(\pi x)dx=\frac{2\sqrt{2}}{\pi} \end{equation*} controlling the solution to the ground state means that we are forcing some mass to remain in the interval $[0,1]$ after time $T$ (in the sense that with probability equal to $\frac{2\sqrt{2}}{\pi}\cong 0.9$ we find a particle in the interval $[0,1]$), even though we are in presence of absorbing boundary conditions. In order to apply Theorem \ref{Thm-suff-cond} for $j=1$, and deduce the controllability of \eqref{FP-Dirichlet} to the ground state by applying Theorem \ref{teo-contr-B-unb}, we have to verify the rank condition of the unbounded operator $B$. Let us compute the following scalar product \begin{equation*} \begin{split} \langle (\mu\varphi_1)',\varphi_k\rangle&=\sqrt{2}\int_0^1 (\mu\varphi_1)'(x)\sin(k\pi x)dx\\ &=\sqrt{2}\left(-\left.(\mu\varphi_1)'(x)\frac{\cos(k\pi x)}{k\pi}\right|^1_0+\int_0^1\left(\mu\varphi_1\right)''(x)\frac{\cos(k\pi x)}{k\pi}dx\right)\\ &=\frac{2}{k}\left(\mu(1)(-1)^{k}+\mu(0)\right)+\frac{\sqrt{2}}{k\pi}\int_0^1\left(\mu\varphi_1\right)''(x)\cos(k\pi x)dx. \end{split} \end{equation*} Thus, if $\mu(1)\neq\pm \mu(0)$ and $\langle (\mu\varphi_1)',\varphi_k\rangle \neq0,\,\forall\,k\in{\mathbb{N}}^*$, there exists a constant $b$ such that \begin{equation*} \lambda_k^{1/2}|\langle B\varphi_1,\varphi_k\rangle|\geq b,\qquad\forall\,k\in{\mathbb{N}}^*. \end{equation*} Hence, any potential $\mu(x)=x^n$, with $n\geq 1$, extended to the real line as in the previous example, is admissible for having exact controllability of \eqref{FP-Dirichlet} to the ground state. \begin{oss} \emph{If we consider $\mu(x)=x$, we can directly check that the Fourier coefficients of $B\varphi_1$ do not vanish for every $k\in{\mathbb{N}}$} \begin{equation*} \langle (\mu\varphi_1)',\varphi_k\rangle=\left\{\begin{array}{ll} \frac{(-1)^k2k}{k^2-1},&k\geq2\\\\ \frac{1}{2}&k=1. \end{array}\right. \end{equation*} \emph{and the lower bound \eqref{ipB-suff-cond} is satisfied with $q=\frac{1}{2}$ and $b=2\pi$}. \end{oss} \subsection{Diffusion equation with Neumann boundary conditions}\label{ex4} Now, we consider a diffusion equation with a controlled potential subject to Neumann boundary conditions. Let $I=(0,1)$ and consider the following problem \begin{equation}\label{40} \left\{\begin{array}{ll} u_t(t,x)-\partial^2_{x}u(t,x)+p(t)\mu(x)\left(u_x(t,x)+u(t,x)\right)=0 & (t,x)\in[0,T]\times I \\ u_x(t,0)=0,\quad u_x(t,1)=0 &t\in[0,T]\\ u(0,x)=u_0(x). & x\in I \end{array}\right. \end{equation} Let $X=L^2(I)$, we rewrite \eqref{40} in abstract form by defining the operators $A$ and $B$ as \begin{equation}\nonumber D(A)=\{ \varphi\in H^2(I): \varphi^\prime(0)=0\,,\varphi^\prime(1)=0\},\quad A\varphi=-\frac{d^2\varphi}{dx^2} \end{equation} \begin{equation}\nonumber D(B)=H^1(I),\quad B\varphi=\mu\left(\frac{d}{dx}\varphi+\varphi\right). \end{equation} where $\mu$ is a real-valued function in $H^2(I)$. Operator $A$ satisfies the assumptions in \eqref{ipA} and it is possible to compute explicitly its eigenvalues and eigenfunctions: \begin{equation*} \begin{array}{lll} \lambda_0=0,&\varphi_0=1\\ \lambda_k=(k\pi)^2,& \varphi_k(x)=\sqrt{2}\cos(k\pi x),& \forall\, k\geq1. 
\end{array} \end{equation*} Since the eigenvalues are the same of those in Example \ref{ex1} for $k\geq1$, the gap condition is satisfied for all $k\geq0$. Furthermore, we have that \begin{equation*} \norm{B\varphi}=\norm{\mu(\varphi_x+\varphi)}\leq C_\mu(\norm{A^{1/2}\varphi}+\norm{\varphi})\leq C\norm{\varphi}_{D(A^{1/2})}, \end{equation*} thus, also hypothesis \eqref{BA12} is verified. Let us compute the scalar product $\langle B\varphi_0,\varphi_k\rangle$ to find a lower bound of the Fourier coefficients of $B\varphi_0$: \begin{equation*}\begin{split} \langle \mu\left(\varphi_0^\prime+\varphi_0\right),\varphi_k\rangle&=\sqrt{2}\int_0^1 \mu(x)\cos(k\pi x)dx\\ &=\sqrt{2}\left(\left.\mu(x)\frac{\sin(k\pi x)}{k\pi}\right|^1_0-\int_0^1\mu'(x)\frac{\sin(k\pi x)}{k\pi}dx\right)\\ &=\sqrt{2}\left(\left.\mu'(x)\frac{\cos(k\pi x)}{(k\pi)^2}\right|^1_0-\int^1_0\mu''(x)\frac{\cos(k\pi x)}{(k\pi)^2}dx\right)\\ &=\frac{\sqrt{2}}{(k\pi)^2}\left(\mu'(1)(-1)^k-\mu'(0)\right)-\frac{\sqrt{2}}{(k\pi)^2}\int^1_0\mu''(x)\cos(k\pi x)dx. \end{split} \end{equation*} Thus, reasoning as Example \ref{ex1}, if $\langle B\varphi_0,\varphi_k\rangle\neq0$ $\forall k \in{\mathbb{N}}$ and $\mu\rq{}(1)\pm\mu\rq{}(0)\neq 0$, then we have that \begin{equation}\label{lb3} \exists\,\, C>0\mbox{ such that } |\langle B\varphi_0,\varphi_k\rangle|\geq Ck^{-2}=C\lambda_k^{-1},\quad \forall k\in{\mathbb{N}}^* \end{equation} and therefore hypothesis \eqref{ipB-suff-cond} is fulfilled. \begin{oss} \emph{An example of a suitable function $\mu$ for problem \eqref{40} that satisfies the above hypothesis, is $\mu(x)=x^2$, for which} \begin{equation*} \langle B\varphi_0,\varphi_k\rangle=\frac{2\sqrt{2}(-1)^k}{(k\pi)^2},\quad \forall \ k \geq 1,\,\, \mbox{ \emph{and} }\quad \langle B\varphi_0,\varphi_0\rangle=1/3. \end{equation*} \end{oss} Applying Theorem \ref{teo-contr-B-unb}, it follows that system \eqref{40} is exactly controllable to $\psi_0$. \subsection{Degenerate parabolic equation with Dirichlet boundary conditions} Let $T>0$, $I=(0,1)$, $X=L^2(I)$ and consider the following degenerate bilinear control system \begin{equation}\label{eq-ex-deg-D} \left\{ \begin{array}{ll} u_t-\left(x^{\alpha} u_x\right)_x+p(t)\mu(x)u_x=0,& (t,x)\in [0,T]\times I \\ u(t,1)=0,\quad\left\{\begin{array}{ll} u(t,0)=0,& \mbox{ if }\alpha\in[0,1) ,\\ \left(x^{\alpha}u_x\right)(t,0)=0,& \mbox{ if }\alpha\in[1,2),\end{array}\right. \\ u(0,x)=u_0(x),&t\in x\in I. \end{array} \right. \end{equation} with $\mu(x)=x$ and $\alpha\in[0,2)$ the degeneracy parameter. We recall that when $\alpha\in[0,1)$ problem \eqref{eq-ex-deg-D} is called \emph{weakly degenerate}, while for $\alpha\in[1,2)$ \emph{strongly degenerate}. Let $\alpha\in[0,1)$ and we define the weighted Sobolev spaces \begin{equation}\label{sob-sp-w} \begin{array}{l} H^1_{\alpha}(I)=\left\{u\in X: u \mbox{ is absolutely continuous on } [0,1], x^{\alpha/2}u_x\in X\right\} \\ H^1_{\alpha,0}(I)=\left\{u\in H^1_\alpha(I):\,\, u(0)=0,\,\,u(1)=0\right\} \\ H^2_\alpha(I)=\left\{u\in H^1_\alpha(I): x^{\alpha}u_x\in H^1(I)\right\}, \end{array} \end{equation} and the linear operator $A:D(A)\subset X\to X$ by \begin{equation*} \left\{\begin{array}{l} \forall u\in D(A),\quad Au:=-(x^{\alpha}u_x)_x, \\ D(A):=\{u\in H^1_{\alpha,0}(I),\,\, x^{\alpha}u_x\in H^1(I)\}. \end{array}\right. 
\end{equation*} For $\alpha\in[1,2)$, we define the spaces \begin{equation}\label{sob-sp-s} \begin{array}{l} H^1_{\alpha}(I)=\left\{u\in X: u \mbox{ is absolutely continuous on } (0,1],\,\, x^{\alpha/2}u_x\in X\right\} \\ H^1_{\alpha,0}(I):=\left\{u\in H^1_{\alpha}(I):\,\,u(1)=0\right\}, \\ H^2_\alpha(I)=\left\{u\in H^1_\alpha(I):\,\, x^{\alpha}u_x\in H^1(I)\right\} \end{array} \end{equation} and the linear degenerate operator $A:D(A)\subset X\to X$ by \begin{equation*} \left\{\begin{array}{l} \forall u\in D(A),\quad Au:=-(x^{\alpha}u_x)_x, \\ D(A):=\left\{u\in H^1_{\alpha,0}(I):\,\, x^{\alpha}u_x\in H^1(I)\right\} \\ \qquad\,\,\,\,\,=\left\{u\in X:\,\,u \mbox{ is absolutely continuous in (0,1] },\,\, x^{\alpha}u\in H^1_0(I),\right. \\ \qquad\qquad\,\,\,\left.x^{\alpha}u_x\in H^1(I)\mbox{ and } (x^{\alpha}u_x)(0)=0\right\}. \end{array}\right. \end{equation*} In both cases of weak and strong degeneracy it can be proved that $D(A)$ is dense in X and $A:D(A)\subset X\to X$ is a self-adjoint accretive operator (see \cite{cmp} and \cite{cmvn}, respectively). Therefore, $-A$ is the infinitesimal generator of an analytic semigroup of contractions on $X$. Furthermore, it has been showed (see, for instance, \cite[Appendix]{acf}) that $A$ has a compact resolvent. Let $\alpha\in [0,2)$ and define \begin{equation*} \nu_\alpha:=\frac{|1-\alpha|}{2-\alpha},\qquad k_\alpha:=\frac{2-\alpha}{2}. \end{equation*} Given $\nu\geq0$, we denote by $J_\nu$ the Bessel function of the first kind and order $\nu$ and by $j_{\nu,1}<j_{\nu,2}<\dots<j_{\nu,k}<\dots$ the sequence of all positive zeros of $J_\nu$. It is possible to prove that the eigenvalues and eigenfunctions of $A$ are given by \begin{equation}\label{13} \lambda_{\alpha,k}=k^2_{\alpha}j^2_{\nu_\alpha,k}, \end{equation} \begin{equation}\label{14} \varphi_{\alpha,k}(x)=\frac{\sqrt{2k_\alpha}}{|J'_{\nu_\alpha}(j_{\nu_\alpha,k})|}x^{(1-\alpha)/2}J_{\nu_\alpha}\left(j_{\nu_\alpha,k}x^{k_\alpha}\right) \end{equation} for every $k\in{\mathbb{N}}^*$. Moreover, the family $\left(\varphi_{\alpha,k}\right)_{k\in{\mathbb{N}}^*}$ is an orthonormal basis of $X$, see \cite{g}. Hereafter, we shall denote the eigenfunctions $\left(\varphi_{\alpha,k}\right)_{k\in{\mathbb{N}}^*}$ by $\left(\varphi_{k}\right)_{k\in{\mathbb{N}}^*}$, and the eigenvalues $\left(\lambda_{\alpha,k}\right)_{k\in{\mathbb{N}}^*}$ by $\left(\lambda_{k}\right)_{k\in{\mathbb{N}}^*}$. It has been proved (see \cite[page 135, Proposition 7.8]{kl} and \cite[Corollary 1]{cmv2}) that the gap condition is satisfied for all $\alpha\in [0,2)$. More precisely, it is proved that: \begin{itemize} \item if $\alpha\in [0,1)$, $\nu_{\alpha}=\frac{1-\alpha}{2-\alpha}\in\left(0,\frac{1}{2}\right]$, the sequence $\left( j_{\nu_\alpha,k+1}-j_{\nu_\alpha,k}\right)_{k\in{\mathbb{N}}^*}$ is nondecreasing and converges to $\pi$. Therefore, \begin{equation*} \sqrt{\lambda_{k+1}}-\sqrt{\lambda_k}=k_{\alpha}\left( j_{\nu_\alpha,k+1}-j_{\nu_\alpha,k}\right)\geq k_{\alpha}\left( j_{\nu_\alpha,2}-j_{\nu_\alpha,1}\right)\geq \frac{7}{16}\pi>0, \end{equation*} \item if $\nu_{\alpha}\geq \frac{1}{2}$, the sequence $\left( j_{\nu_\alpha,k+1}-j_{\nu_\alpha,k}\right)_{k\in{\mathbb{N}}^*}$ is nonincreasing and converges to $\pi$. Thus, \begin{equation*} \sqrt{\lambda_{k+1}}-\sqrt{\lambda_k}=k_{\alpha}\left( j_{\nu_\alpha,k+1}-j_{\nu_\alpha,k}\right)\geq k_{\alpha}\pi\geq \frac{(2 - \alpha)\pi}{2}> 0. 
\end{equation*} \end{itemize} Therefore, the hypothesis \eqref{gap} is fulfilled in both weak and strong degenerate problems with different constants. We now define the unbounded linear operator $B$ by \begin{equation*} \begin{split} B: D(B)=H^1(I)\subset X&\to X\\ \varphi&\mapsto \mu(x)\varphi^\prime \end{split} \end{equation*} where we recall that $\mu(x)=x$. We observe that $D(A^{1/2})\subset D(B)$ and \begin{equation*} ||\varphi||_{D(B)}=||C\mu\varphi'||\leq C||x^{\alpha/2}\varphi'||\leq C ||\varphi||_{D(A^{1/2})},\quad \forall\,\varphi\in D(A^{1/2}). \end{equation*} Hence \eqref{BA12} holds true. Finally, we need to prove the validity of \eqref{ipB-suff-cond}. To this purpose, we study the Fourier coefficients of $B\varphi_1$: \begin{equation*} \begin{split} \langle B\varphi_1,\varphi_k\rangle&=\int_0^1x\varphi_1^\prime(x)\varphi_k(x)dx=-\frac{1}{\lambda_k}\int_0^1x\varphi_1^\prime(x)\left(x^\alpha \varphi^\prime_k(x)\right)^\prime dx\\ &=-\frac{1}{\lambda_k}\left[\left.x\varphi_1^\prime(x) x^\alpha \varphi_k^\prime(x)\right|^1_0-\int_0^1(x\varphi_1^\prime)^\prime(x) x^\alpha \varphi_k^\prime(x)dx\right]\\ &=-\frac{1}{\lambda_k}\left[\varphi_1^\prime(1)\varphi_k^\prime(1)-\int_0^1(x\varphi_1^\prime)^\prime(x) x^\alpha \varphi_k^\prime(x)dx\right] \end{split} \end{equation*} where we used \cite[Proposition 2.5, Property (2.14)]{FABCL}) to prove that $$ \lim_{x \leftarrow 0^+}\big(x\varphi_1^\prime(x) x^\alpha \varphi_k^\prime(x)\big)=0, $$ so that, we have \begin{equation*} \begin{split} \langle B\varphi_1,\varphi_k\rangle&=-\frac{1}{\lambda_k}\biggl[\varphi_1^\prime(1)\varphi_k^\prime(1)-\left.\left(x\varphi_1^{\prime}\right)^\prime(x)x^\alpha\varphi_k(x)\right|^1_0\\ &\quad\left.+\int_0^1\Big((x\varphi_1^\prime)^\prime x^\alpha\Big)^{\prime}(x)\varphi_k(x)dx\right] \end{split} \end{equation*} We claim that for all $\alpha \in [0,2)$, the following property holds: $$ \left.\left(x\varphi_1^{\prime}(x)\right)^{\prime}x^\alpha\varphi_k(x)\right|_{x=0} \mbox{ is defined and is equal to } 0 $$ Let us consider $x \in (0,1)$. We have \begin{equation}\label{Rvarphi1} (x\varphi_1^{\prime})^{\prime}(x)=x^{1-\alpha} (x^{\alpha}\varphi_1^{\prime})^{\prime}(x) + (1-\alpha)\varphi_1^{\prime}(x)=- \lambda_1 x^{1-\alpha}\varphi_1 + (1-\alpha)\varphi_1^{\prime}(x). \end{equation} Thus, we obtain $$ x^{\alpha}(x\varphi_1^{\prime})^{\prime}(x)\varphi_k(x)=-\lambda_1x \varphi_1(x)\varphi_k(x)+ (1-\alpha)(x^{\alpha}\varphi_1^{\prime}(x))\varphi_k(x) $$ Observe that $$ \big| x \varphi_1(x)\varphi_k(x) \big| \leqslant \dfrac{1}{2}\big(x\varphi_1^2(x) + x\varphi_k^2(x)\big),\quad \ \forall \ x \in (0,1]. $$ Since $\varphi_1$ and $\varphi_k$ are in $H^1_{\alpha}(I)$, we can use \cite[Proposition 2.5, Property (2.12)]{FABCL}) in the particular case $a(x)=x^{\alpha}$ for $x \in [0,1]$, and successively with $u=\varphi_1$, and $u=\varphi_k$. Hence, we have $$ \lim_{x \leftarrow 0^+}\big(x\varphi_1^2(x) + x\varphi_k^2(x)\big)=0, $$ so that $$ \lim_{x \leftarrow 0^+}\big(x \varphi_1(x)\varphi_k(x)\big)=0. $$ Since $\varphi_1 \in D(A)$ and $\varphi_k$ is in $H^1_{\alpha}(I)$, we can use \cite[Proposition 2.5, Property (2.15)]{FABCL}), in the particular case $a(x)=x^{\alpha}$ for $x \in [0,1]$, $\phi= \varphi_k$, and $u=\varphi_1$, noticing in addition that we have $\phi(0)=0$ if $\alpha \in [0,1)$. Hence, we deduce $$ \lim_{x \leftarrow 0^+}\big(x^{\alpha}\varphi_1^{\prime}(x))\varphi_k(x)\big)=0. 
$$ Therefore, we have proved that for all $\alpha \in [0,2)$ $$ \left.\left(x\varphi_1^{\prime}(x)\right)^{\prime}x^\alpha\varphi_k(x)\right|_{x=0} \mbox{ is defined and is equal to } 0. $$ Moreover, since $\varphi_1(1)=\varphi_k(1)=0$, we easily obtain $$ \left.\left(x\varphi_1^{\prime}(x)\right)^{\prime}x^\alpha\varphi_k(x)\right|_{x=1}= 0. $$ Thus, we obtain that $$ \left.\left(x\varphi_1^{\prime}\right)^\prime(x)x^\alpha\varphi_k(x)\right|^1_0=0, $$ so that, using this property in our previous computations for $\langle B\varphi_1,\varphi_k\rangle$, we get \begin{equation*} \begin{split} \langle B\varphi_1,\varphi_k\rangle&=-\frac{1}{\lambda_k}\left[\varphi_1^\prime(1)\varphi_k^\prime(1)+\int_0^1\Big((x\varphi_1^\prime)^\prime x^\alpha\Big)^{\prime}(x)\varphi_k(x)dx\right]. \end{split} \end{equation*} Now we use the identity \eqref{Rvarphi1}, which yields \begin{equation*} \begin{split} \Big(x^{\alpha}(x\varphi_1^\prime)^\prime\Big)^{\prime}(x)\varphi_k(x)&=\Big(-\lambda_1x\varphi_1(x) +(1-\alpha)x^{\alpha}\varphi_1^{\prime}(x) \Big)^{\prime}(x)\varphi_k(x)\\ &=\Big[-(2-\alpha)\lambda_1\varphi_1(x)\varphi_k(x) - \lambda_1x\varphi_1^{\prime}(x)\varphi_k(x)\Big]. \end{split} \end{equation*} Using this identity in our previous computations and the property that the family $\left(\varphi_{k}\right)_{k\in{\mathbb{N}}^*}$ is an orthonormal basis of $X$, we obtain \begin{equation*} \langle B\varphi_1,\varphi_k\rangle=-\frac{\varphi_1^\prime(1)\varphi_k^\prime(1)}{\lambda_k-\lambda_1}. \end{equation*} We have proved in \cite[page 12, formula (2.43)]{cu} that \begin{equation*} \varphi_1^\prime(1)\varphi_k^\prime(1)=\frac{2k_{\alpha}^3j_{\nu_\alpha,1}j_{\nu_\alpha,k}}{|J'_{\nu_\alpha}(j_{\nu_\alpha,1})||J'_{\nu_\alpha}(j_{\nu_\alpha,k})|}J'_{\nu_\alpha}(j_{\nu_\alpha,1})J'_{\nu_\alpha}(j_{\nu_\alpha,k}). \end{equation*} The above identity, together with \eqref{13}, implies that there exists a constant $C>0$ such that \begin{equation*} |\langle B\varphi_1,\varphi_k\rangle|\geq C\lambda_k^{-1/2},\quad\forall\,k>1. \end{equation*} For $k=1$, $\langle B\varphi_1,\varphi_1\rangle$ is given by \begin{equation*} \langle B\varphi_1,\varphi_1\rangle=\int_0^1x\varphi_1^\prime(x)\varphi_1(x)dx=\frac{1}{2}\int_0^1x\frac{d}{dx}\left(\varphi_1^2(x)\right)dx=\frac{1}{2}\left[\left.x\varphi_1^2(x)\right|^1_0-\int_0^1\varphi_1^2(x)dx\right]=-\frac{1}{2}, \end{equation*} where we used once again \cite[Proposition 2.5, Property (2.12)]{FABCL} with $u=\varphi_1$. Thus, we have $\langle B\varphi_1,\varphi_1\rangle\neq0.$ Therefore, we have proved that all the hypotheses of Theorem \ref{teo-contr-B-unb} are satisfied. Applying this result, we deduce that for any $T>0$ and initial condition $u_0\in D(A^{1/2})$ sufficiently close to the ground state $\varphi_1$, there exists a control $p\in L^2(0,T)$ that steers the solution of \eqref{eq-ex-deg-D} to $\psi_1(T,x)=e^{-\lambda_1T}\varphi_1(x)$ in time $T$, by the iterative constructive process we set up in our abstract results. \subsection{Degenerate parabolic equation with Neumann boundary conditions} In this section we study the controllability of the following degenerate problem \begin{equation}\label{eq-ex-deg-N} \left\{ \begin{array}{ll} u_t-\left(x^{\alpha} u_x\right)_x+p(t)\mu(x)(u_x+u)=0,& (t,x)\in [0,T]\times I\\ (x^\alpha u_x)(t,0)=0,\quad u_x(t,1)=0, & t\in [0,T]\\ u(0,x)=u_0(x),&x\in I, \end{array} \right. \end{equation} where $T>0$, $I=(0,1)$ and $\mu(x)=x^{2-\alpha}$.
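As in the previous example, the analysis below relies on a Bessel-zero description of the spectrum. Purely as an illustration (and only for $\alpha=1$, where $\nu_\alpha=0$ is an integer order, so that SciPy's routine for Bessel-function zeros applies), the gap condition and the decay $|\langle B\varphi_1,\varphi_k\rangle|\geq C\lambda_k^{-1/2}$ obtained in the Dirichlet case can be checked numerically as follows.
\begin{verbatim}
# Numerical illustration for alpha = 1 (so nu_alpha = 0, an integer order,
# and scipy.special.jn_zeros applies): eigenvalue gap and decay of
# |<B phi_1, phi_k>| for the degenerate Dirichlet problem above.
import numpy as np
from scipy.special import jn_zeros

alpha = 1.0
k_alpha = (2.0 - alpha) / 2.0            # = 1/2
j = jn_zeros(0, 30)                      # first 30 positive zeros of J_0
lam = k_alpha**2 * j**2                  # eigenvalues lambda_k

# gap: sqrt(lambda_{k+1}) - sqrt(lambda_k) = k_alpha (j_{k+1} - j_k) > 0
print(np.min(np.diff(np.sqrt(lam))))

# |<B phi_1, phi_k>| = 2 k_alpha^3 j_1 j_k / (lambda_k - lambda_1) for k > 1;
# multiplied by sqrt(lambda_k) it stays bounded away from zero
coeff = 2.0 * k_alpha**3 * j[0] * j[1:] / (lam[1:] - lam[0])
print(np.min(coeff * np.sqrt(lam[1:])))
\end{verbatim}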
The control $p\in L^2(0,T)$ is a real valued function and $\mu$ represents an admissible potential. Recalling the definitions of the weighted Sobolev spaces $H^1_\alpha(I)$ and $H^2_\alpha(I)$ in \eqref{sob-sp-w} and \eqref{sob-sp-s} for weak and strong degeneracy respectively, we define the second order linear operator \begin{equation*} \begin{cases} \forall\, u \in D(A), \quad Au:=- (x ^\alpha u_x)_x , \\ D(A) := \{ u \in H^2_{\alpha} (0,1) , (x^\alpha u_x) (0)=0, u_x (1)=0 \} . \end{cases} \end{equation*} In \cite[Proposition 2.1, Proposition 2.2]{cmu} it is proved that for any $\alpha\in [0,2)$ the operator $A$ is self-adjoint accretive and has a dense domain. Therefore, $-A$ is the infinitesimal generator of an analytic semigroup of contraction. Moreover, in \cite[Proposition 3.1]{cmu} the authors showed that the eigenvalues and eigenfunctions of the operator $A$, when $\alpha\in [0,1)$, are given by \begin{equation}\label{EXDEGeq-vp-w-vp0} \lambda _{\alpha,0} = 0,\quad \varphi _{\alpha,0} (x)=1 \end{equation} and for all $m\geq 1$ \begin{equation}\label{EXDEGeq-vp-w-vpm} \lambda _{\alpha,m} = \kappa _\alpha ^2 \, j_{-\nu_\alpha - 1, m} ^2 , \end{equation} \begin{equation}\label{EXDEGeq-fp-w-fpm} \varphi _{\alpha,m} (x) = K_{\alpha,m} x^{\frac{1-\alpha}{2}} J_{-\nu_\alpha} \left( j_{-\nu_\alpha + 1, m} x^{\frac{2-\alpha}{2}}\right), \end{equation} where $$ \kappa _\alpha := \frac{2-\alpha}{2}, \quad \nu_\alpha := \frac{1-\alpha}{2-\alpha},$$ $J_{-\nu_\alpha}$ is the Bessel$^\prime$s function of order $-\nu_\alpha$, $(j_{-\nu_\alpha + 1, m}) _{m\geq 1}$ are the positive zeros of the Bessel$^\prime$s function $J_{-\nu_\alpha +1}$ and $K_{\alpha,m}$ are positive constants. For $\alpha\in[1,2)$, the eigenvalues and eigenfunctions of $A$ are defined by \begin{equation}\label{EXDEGeq-vp-s-vp0} \lambda _{\alpha,0} = 0,\quad \varphi _{\alpha,0} (x)=1 \end{equation} and for all $m\geq 1$ \begin{equation}\label{EXDEGeq-vp-s-vpm} \lambda _{\alpha,m} = \kappa _\alpha ^2 \, j_{\nu_\alpha + 1, m} ^2 , \end{equation} \begin{equation}\label{EXDEGeq-fp-s-fpm} \varphi _{\alpha,m} (x) = K_{\alpha,m} x^{\frac{1-\alpha}{2}} J_{\nu_\alpha} \left( j_{\nu_\alpha + 1, m}\, x^{\frac{2-\alpha}{2}}\right) , \end{equation} where $$ \kappa _\alpha := \frac{2-\alpha}{2}, \quad \nu_\alpha := \frac{\alpha -1}{2-\alpha},$$ $J_{\nu_\alpha}$ is the Bessel$^\prime$s function of order $\nu_\alpha$, $(j_{\nu_\alpha +1, m}) _{m\geq 1}$ are the positive zeros of the Bessel$^\prime$s function $J_{\nu_\alpha +1}$ and $K_{\alpha,m}$ are positive constants (see \cite[Proposition 3.2]{cmu}). Observe that, in the same paper \cite[Propositions 3.1 and 3.2]{cmu} it is proved that the eigenvalues $\{\lambda_{\alpha,m}\}_{m\in{\mathbb{N}}}$ satisfy the following gap condition \begin{equation*} \forall\, \alpha\in [0,2),\,\sqrt{\lambda_{\alpha,m+1}}-\sqrt{\lambda_{\alpha,m}}\geq\frac{2-\alpha}{2}\pi. \end{equation*} We introduce the operator $B:D(B)=:H^1(I)\subset X\to X$ defined by \begin{equation*} B\varphi=\mu\left(\varphi'+\varphi\right),\quad\forall\varphi\in D(B), \end{equation*} where $\mu(x)=x^{2-\alpha}$. In order to have that $D(A^{1/2})\hookrightarrow D(B)$ (that is, hypothesis \eqref{BA12}), we need to restrict the analysis to the case $\alpha\in[0,4/3]$. 
Indeed, \begin{equation*} \norm{B\varphi}=\norm{\mu(\varphi'+\varphi)}\leq C\left(\norm{x^{\alpha/2}\varphi'}+\norm{\varphi}\right)=C\left(\norm{A^{1/2}\varphi}+\norm{\varphi}\right)\leq C\norm{\varphi}_{D(A^{1/2})},\quad\forall \varphi\in D(A^{1/2}) \end{equation*} is satisfied if and only if $\alpha\in[0,4/3]$. Furthermore, it is possible to prove that also hypothesis \eqref{ipB-suff-cond} holds true. We compute the scalar product $\langle B\varphi_{\alpha,0},\varphi_{\alpha,m}\rangle$, for all $m\in{\mathbb{N}}$: $$ \langle B\varphi_{\alpha,0} , \varphi _{\alpha,0} \rangle = \int _0 ^1 x^{2-\alpha} dx = \frac{1}{3-\alpha} ,$$ and, for all $m\geq 1$ we have that \begin{equation*} \begin{split} \langle B\varphi_{\alpha,0} , \varphi _{\alpha,m}\rangle&=\int_0^1\mu(x)\left(\varphi_{\alpha,0}^\prime+\varphi_{\alpha,0}\right)\varphi_{\alpha,m}dx = \int _0 ^1 \mu (x) \varphi _{\alpha,m}(x) dx \\ &= \frac{1}{\lambda _{\alpha,m}} \int _0 ^1 \mu (x) (-x^\alpha \varphi _{\alpha,m}^\prime)^\prime(x) dx \\ &= \frac{1}{\lambda _{\alpha,m}} \left( [-x^\alpha \mu (x) \varphi _{\alpha,m}^\prime(x) ] _0 ^1 + \int _0 ^1 x^\alpha \mu^\prime(x) \varphi _{\alpha,m}^\prime(x) \right). \end{split} \end{equation*} Recalling that $\mu (x)=x^{2-\alpha}$, we obtain \begin{equation*} \begin{split} \int _0 ^1 x^\alpha \mu^\prime(x) \varphi _{\alpha,m}^\prime (x) &= (2-\alpha) \int _0 ^1 x \varphi _{\alpha,m}^\prime (x) \\ &= (2-\alpha) \left[x \varphi _{\alpha,m} (x)\right] _0 ^1 - (2-\alpha) \int _0 ^1 \varphi _{\alpha,m} (x) dx \\ &= (2-\alpha) \left[x \varphi _{\alpha,m} (x)\right] _0 ^1 - (2-\alpha) \langle \varphi _{\alpha,0} , \varphi _{\alpha,m} \rangle . \end{split} \end{equation*} Since the eigenfunctions are orthogonal, we have that $$ \langle \varphi _{\alpha,0} , \varphi _{\alpha,m} \rangle = 0 ,$$ hence $$ \langle B\varphi_{\alpha,0} , \varphi _{\alpha,m} \rangle = \frac{1}{\lambda _{\alpha,m}} \left( \left[-x^2 \varphi _{\alpha,m}^\prime(x) \right] _0 ^1 + (2-\alpha) \left[x \varphi _{\alpha,m} (x)\right] _0 ^1 \right) .$$ From the Neumann boundary conditions satisfied by $\varphi _{\alpha,m}$, we know that $x \varphi _{\alpha,m}^\prime(x) \to 0$ as $x\to 0$ and $x\to 1$, thus $$ \left[-x^2 \varphi _{\alpha,m}^\prime(x) \right] _0 ^1 = 0 .$$ Furthermore, in \cite[Lemma 5.1 and Lemma 5.2]{cmu} it is shown that $\varphi _{\alpha,m}$ has a finite limit as $x\to 0$, therefore $$ x \varphi _{\alpha,m} (x) \to 0, \quad \text{ as } x \to 0,$$ and moreover that $\vert \varphi _{\alpha,m} (1)\vert =\sqrt{2-\alpha}$, which implies $$ \vert (2-\alpha) [x \varphi _{\alpha,m} (x)] _0 ^1 \vert = (2-\alpha)^{3/2}.$$ Therefore, $$ \vert \langle B\varphi_{\alpha,0} , \varphi _{\alpha,m} \rangle\vert = \frac{(2-\alpha)^{3/2}}{\lambda _{\alpha,m}} ,$$ that is condition \eqref{ipB-suff-cond} with $q=1$. By applying Theorem \ref{teo-contr-B-unb}, we conclude that problem \eqref{eq-ex-deg-N}, for $\alpha\in[0,4/3]$, is exactly controllable to the ground state solution $\psi_0\equiv 1$ in any time $T>0$. Let us now show an application of Theorem \ref{teoglobal0} to Example 5.4. We recall that $\lambda_0=0$ is the first eigenvalue of $A$, and is associated to the eigenfunction $\varphi_0\equiv 1$. We set \begin{equation}\label{exactphi0} \phi_0(t)=\langle u_0,\varphi_1\rangle_{1/2}\; \psi_0(t)=\int_0^1 u_0(x)dx, \quad\forall\, t \geq 0. 
\end{equation} Theorem \ref{teoglobal0} applied to Example 5.4 gives: \vskip 1mm for any $R>0$ there exists $T_R>0$ such that for all $u_0\in D(A^{1/2})$ satisfying \begin{equation}\label{NV5.4} \int_0^1 u_0(x)dx \neq 0, \end{equation} and \begin{equation}\label{cone5.4} \left(\dfrac{\int_0^1u_0^2(x)dx -\big(\int_0^1u_0(x)dx\big)^2 + \int_0^1x^{\alpha}|u^{\prime}_{0}|^2(x)dx} {\left|\int_0^1u_0(x)dx\right|} \right)^{1/2} \leq R \end{equation} system \eqref{eq-ex-deg-N} is exactly controllable to $\phi_0$, defined in \eqref{exactphi0}, in time $T_R$. \begin{appendices} \section{}\label{AppendixB} In this section we will prove the existence of strong solutions of the stochastic differential equation \begin{equation}\label{SDE-Appendix} \begin{cases} dX_t=\nu(t,X_t)dt+\sigma(t,X_t)dB_t\\ X(t=0)=X_0 \end{cases} \end{equation} under assumptions (H2) which are weaker than assumptions (H1) in \cite[Theorem 9.2]{baldi}. We develop the computations for a drift $\nu$ of the form $$ \nu(t,x)=p(t)\mu(x),\,\,t \in [0,T],\, x \in {\mathbb{R}} . $$ with $p\in L^2(0,T)$, which is the case of interest to this paper. A general drift $\nu$ satisfying (H2) can be treated in a similar way. Let $(\Omega, \mathcal{F},\mathbb{P})$ be a probability space. We recall that the process $(X_t)_{t\in[0,T]}$ is a solution of \eqref{SDE-Appendix} if \begin{enumerate} \item $(\Omega, \mathcal{F},(\mathcal{F}_t)_{t\in[0,T]},(B_t)_{t\in[0,T]},\mathbb{P})$ is a standard Brownian motion \item $(X_t)_{t\in[0,T]}$ is adapted to the filtration $(\mathcal{F}_t)_{t\in[0,T]}$ \item for every $t\in[0,T]$ : \begin{equation*} X_t=X_0+\int_0^tp(s)\mu(X_s)ds+\int_0^t \sigma(s,X_s)dB_s. \end{equation*} \end{enumerate} We now prove the following preliminary result which adapts \cite[Theorem 9.1]{baldi} to the current setting. \begin{thm}\label{teo91} Let $X$ be a solution of \begin{equation*} X_t=X_0+\int_0^tp(s)\mu(X_s)ds+\int_0^t\sigma(s,X_s)dB_s \end{equation*} where $\mu,p$ and $\sigma$ satisfy hypotheses 1.,2. and 3. of (H2) (see page 20), and $X_0$ be a $\mathcal{F}_0$-measurable r.v. of $L^2$. Then, \begin{equation}\label{1estim-app} \mathbb{E}\left(\sup_{0\leq s\leq T}|X_s|^2\right)\leq c(T,M)(1+\mathbb{E}(|X_0|^2)) \end{equation} \begin{equation}\label{2estim-app} \mathbb{E}\left(\sup_{0\leq s\leq t}|X_s-X_0|^2\right)\leq c(T,M)t(1+\mathbb{E}(|X_0|^2)). \end{equation} \end{thm} \begin{proof} We follow the proof of \cite[Theorem 9.1]{baldi}. Fixed any $R>0$, we define $X_R(t):=X_{t\land \tau_R}$ where $\tau_R:=\inf\{t\,:\, 0\leq t\leq T,\, |X_t|\geq R\}$ is the exit time of the process $X$ from the open ball of radius $R$. If $|X_t|<R$ for every $t\in[0,T]$ then we set $\tau_R=T$. We have that \begin{equation*} \begin{split} X_R(t)&=X_0+\int_0^{t\land \tau_R}p(r)\mu(X_r)dr+\int_0^{t\land \tau_R}\sigma(r,X_r)dB_r\\ &=X_0+\int_0^{t}p(r)\mu(X_r)\mathbbm{1}_{\{r<\tau_R\}}dr+\int_0^{t}\sigma(r,X_r)\mathbbm{1}_{\{r<\tau_R\}}dB_r\\ &=X_0+\int_0^{t}p(r)\mu(X_R(r))\mathbbm{1}_{\{r<\tau_R\}}dr+\int_0^{t}\sigma(r,X_R(r))\mathbbm{1}_{\{r<\tau_R\}}dB_r. 
\end{split} \end{equation*} Therefore, we get \begin{equation}\label{EsupX2} \begin{split} \mathbb{E}\left[\sup_{0\leq s\leq t}|X_R(s)|^2\right]&\leq 3\mathbb{E}\left[|X_0|^2\right]+3\mathbb{E}\left[\sup_{0\leq s\leq t}\left|\int_0^sp(r)\mu(X_R(r))\mathbbm{1}_{\{r<\tau_R\}}dr\right|^2\right]\\ &\quad+3\mathbb{E}\left[\sup_{0\leq s\leq t}\left|\int_0^s\sigma(r,X_R(r))\mathbbm{1}_{\{r<\tau_R\}}dB_r\right|^2\right], \end{split} \end{equation} where we have used the inequality \begin{equation*} |x_1+\dots+x_m|^p\leq m^{p-1}(|x_1|^p+\dots+|x_m|^p) \end{equation*} which holds, in general, for any $x_1,\dots,x_m\in{\mathbb{R}}^d$. By H{\"o}lder's inequality and hypothesis 3. we obtain \begin{equation}\label{Esuppmu2} \begin{split} \mathbb{E}\left[\sup_{0\leq s\leq t}\left|\int_0^sp(r)\mu(X_R(r))\mathbbm{1}_{\{r<\tau_R\}}dr\right|^2\right]&\leq \mathbb{E}\left[\sup_{0\leq s\leq t}\left(\left(\int_0^s|p(r)|^2dr\right)\left(\int_0^s|\mu(X_R(r))|^2\mathbbm{1}_{\{r<\tau_R\}}dr\right)\right)\right]\\ &\leq \norm{p}^2_{L^2(0,T)}M^2\mathbb{E}\left[\sup_{0\leq s\leq t}\int_0^s(1+|X_R(r)|)^2\mathbbm{1}_{\{r<\tau_R\}}dr\right]. \end{split} \end{equation} On the other hand, by the $L^2$ inequalities for stochastic integrals (see \cite[Proposition 8.4]{baldi}) we deduce \begin{equation}\label{Esupsigma2} \mathbb{E}\left[\sup_{0\leq s\leq t}\left|\int_0^s\sigma(r,X_R(r))\mathbbm{1}_{\{r<\tau_R\}}dB_r\right|^2\right]\leq c_2\mathbb{E}\left[\int_0^t|\sigma(r,X_R(r))|^2dr\right]\leq c_2M^2\mathbb{E}\left[\int_0^t\left(1+|X_R(r)|\right)^2dr\right], \end{equation} where in the last inequality we have used again hypothesis 3. Now, plugging \eqref{Esuppmu2} and \eqref{Esupsigma2} into \eqref{EsupX2}, we get \begin{equation*} \begin{split} \mathbb{E}\left[\sup_{0\leq s\leq t}|X_R(s)|^2\right]&\leq 3\mathbb{E}\left[|X_0|^2\right]+6M^2\left(\norm{p}^2_{L^2(0,T)}+c_2\right)\mathbb{E}\left[T+\int_0^t|X_R(r)|^2dr\right]\\ &\leq c_1(T,M)\left(1+\mathbb{E}\left[|X_0|^2\right]\right)+c_2(T,M)\int_0^t\mathbb{E}\left[|X_R(r)|^2\right]dr. \end{split} \end{equation*} If we set $v(t):=\mathbb{E}\left[\sup_{0\leq s\leq t}|X_R(s)|^2\right]$, from the above estimate we obtain \begin{equation*} v(t)\leq c_1(T,M)\left(1+\mathbb{E}\left[|X_0|^2\right]\right)+c_2(T,M)\int_0^tv(r)dr. \end{equation*} Observe that, since $|X_R(t)|=|X_0|$ if $|X_0|>R$ and $|X_R(t)|\leq R$ otherwise, it holds that $|X_R(t)|\leq \max\{|X_0|,R\}$ and then $v(t)\leq \mathbb{E}\left[\max\{|X_0|^2,R^2\}\right]<+\infty$. Thus, $v$ is bounded and we can apply Gronwall's inequality: \begin{equation*} v(T)=\mathbb{E}\left[\sup_{0\leq s\leq T}|X_R(s)|^2\right]\leq c_1(T,M)\left(1+\mathbb{E}\left[|X_0|^2\right]\right)e^{Tc_2(T,M)}=c(T,M)\left(1+\mathbb{E}\left[|X_0|^2\right]\right). \end{equation*} Since the right-hand side does not depend on $R$, we can take the limit $R\to+\infty$. Let us prove that $\tau_R\to T$ as $R\to+\infty$. Since $X$ is continuous, $\displaystyle\sup_{0\leq t\leq \tau_R}|X_t|^2=R^2$ on $\{\tau_R<T\}$. Therefore, we have that \begin{equation*} \mathbb{E}\left[\sup_{0\leq t\leq \tau_R}|X_t|^2\right]\geq R^2\mathbb{P}(\tau_R<T) \end{equation*} so that \begin{equation*} \mathbb{P}(\tau_R<T)\leq \frac{1}{R^2}\mathbb{E}\left[\sup_{0\leq t\leq \tau_R}|X_t|^2\right]\leq \frac{c(T,M)(1+\mathbb{E}(|X_0|^2))}{R^2}. \end{equation*} Hence, $\mathbb{P}(\tau_R<T)\to 0$ as $R\to+\infty$. Since $R\mapsto \tau_R$ is increasing, $\displaystyle\lim_{R\to +\infty}\tau_R=T$ a.s.
and \begin{equation*} \sup_{0\leq s\leq T}|X_R(s)|^2\to\sup_{0\leq s\leq T}|X_s|^2\quad{a.s.,}\quad{for}\quad R\to+\infty. \end{equation*} By applying Fatou's lemma we conclude that \eqref{1estim-app} holds true. We refer to \cite[Theorem 9.1]{baldi} for the proof of \eqref{1estim-app}. \end{proof} Let $a,b\in{\mathbb{R}}$ such that $a\leq b$. We recall the definition of the spaces $M^p([a,b])$, with $p\geq1$. First, we define the space $M^p_{loc}([a,b])$ as the space of the equivalence classes of real-valued \emph{progressively measurable} processes $X=(\Omega, \mathcal{F},(\mathcal{F}_t)_{a\leq t\leq b},(X_t)_{a\leq t\leq b},\mathbb{P})$ such that \begin{equation*} \int_a^b|X_s|^pds<+\infty\qquad \text{a.s.} \end{equation*} Then, by $M^p([a,b])$ we denote the subspace of $M^p_{loc}([a,b])$ of the processes such that \begin{equation*} \mathbb{E}\left[\int_a^b|X_s|^pds\right]<+\infty. \end{equation*} \begin{thm} Let $X_0$ be a real-valued r.v., $\mathcal{F}_0$ measurable and square integrable. Then, under assumption (H2) (see page 20) there exists $X\in M^2([0,T])$ that satisfies \begin{equation}\label{sol-sde} X_t=X_0+\int_0^tp(t)\mu(X_s)ds+\int_0^t\sigma(s,X_s)dB_s. \end{equation} Moreover, if $X'$ is another solution of \eqref{sol-sde}, then \begin{equation} \mathbb{P}(X_t=X'_t\text{ for every }t\in[0,T])=1 \end{equation} (pathwise uniqueness). \end{thm} \begin{proof} We follow the proof of \cite[Theorem 9.2]{baldi}. We define recursively a sequence of processes by $X_0(t)=X_0$ and \begin{equation*} X_{m+1}(t)=X_0+\int_0^tp(s)\mu(X_m(s))ds+\int_0^t\sigma(s,X_m(s))dB_s. \end{equation*} Our aim is to show that the sequence $(X_m)_m$ converges uniformly on $[0,T]$ to a process $X$ which is solution of \eqref{sol-sde}. We first prove by induction that \begin{equation}\label{induction-appA} \mathbb{E}\left[\sup_{0\leq r\leq t}|X_{m+1}(r)-X_m(r)|^2\right]\leq \frac{(Rt)^{m+1}}{(m+1)!} \end{equation} with $R:=2(\norm{p}^2_{L^2(0,T)}+4)\max\{2M^2(1+\mathbb{E}[|X_0|^2]),L^2\}$, where $L,M>0$ are the constants in hypotheses 3. and 4.. For $m=0$ we have that \begin{equation*} \begin{split} \sup_{0\leq r\leq t}|X_1(r)-X_0|^2&\leq 2\sup_{0\leq r\leq t}\left|\int_0^rp(s)\mu(X_0)ds\right|^2+2\sup_{0\leq r\leq t}\left|\int_0^r\sigma(s,X_0)dB_s\right|^2\\ &\leq 2\norm{p}^2_{L^2(0,T)}\sup_{0\leq r\leq t}\int_0^r\left|\mu(X_0)\right|^2ds+2\sup_{0\leq r\leq t}\left|\int_0^r\sigma(s,X_0)dB_s\right|^2\\ &\leq 4M^2\norm{p}^2_{L^2(0,T)}\left(1+|X_0|\right)^2t+2\sup_{0\leq r\leq t}\left|\int_0^r\sigma(s,X_0)dB_s\right|^2 \end{split} \end{equation*} where we have applied H{\"o}lder's inequality and hypothesis 3.. By using Doob's maximal inequality (see \cite[formula (7.23), page 195]{baldi}) and hypothesis 3. of (H2) it is possible to prove that \begin{equation*} \mathbb{E}\left[\sup_{0\leq r\leq t}\left|\int_0^r\sigma(s,X_s)dB_s\right|^2\right]\leq 4\mathbb{E}\left[\int_0^t|\sigma(s,X_s)|^2ds\right]\leq 8M^2\left(t+\mathbb{E}\left[\int_0^t|X_s|^2ds\right]\right). \end{equation*} Hence, we get that \begin{equation*} \begin{split} \mathbb{E}\left[\sup_{0\leq r\leq t}|X_1(r)-X_0|^2\right]&\leq 4M^2\norm{p}_{L^2(0,T)}^2\left(1+\mathbb{E}\left[|X_0|^2\right]\right)t+16M^2\left(t+t\mathbb{E}\left[|X_0|^2\right]\right)\\ &=4M^2(\norm{p}^2_{L^2(0,T)}+4)\left(1+\mathbb{E}\left[|X_0|^2\right]\right)t \leq Rt. \end{split} \end{equation*} Now, suppose that \eqref{induction-appA} holds till index $m-1$ and let us prove it for $m$. 
Observe that, thanks to H{\"o}lder's inequality and hypothesis 4., it holds \begin{equation*} \begin{split} \sup_{0\leq r\leq t}|X_{m+1}(r)-X_m(r)|^2&\leq 2\sup_{0\leq r\leq t}\left|\int_0^rp(s)\left[\mu(X_{m}(s))-\mu(X_{m-1}(s))\right]ds\right|^2\\ &\quad+2\sup_{0\leq r\leq t}\left|\int_0^r\left[\sigma(s,X_{m}(s))-\sigma(s,X_{m-1}(s))\right]dB_s\right|^2\\ &\leq 2\norm{p}_{L^2(0,T)}\sup_{0\leq r\leq t}\int_0^r\left|\mu(X_{m}(s))-\mu(X_{m-1}(s))\right|^2ds\\ &\quad+2\sup_{0\leq r\leq t}\left|\int_0^r\left[\sigma(s,X_{m}(s))-\sigma(s,X_{m-1}(s))\right]dB_s\right|^2\\ &\leq 2L^2\norm{p}_{L^2(0,T)}\sup_{0\leq r\leq t}\int_0^r\left|X_{m}(s)-X_{m-1}(s)\right|^2ds\\ &\quad+2\sup_{0\leq r\leq t}\left|\int_0^r\left[\sigma(s,X_{m}(s))-\sigma(s,X_{m-1}(s))\right]dB_s\right|^2. \end{split} \end{equation*} We now compute the expected value \begin{equation*} \begin{split} \mathbb{E}\left[\sup_{0\leq r\leq t}|X_{m+1}(r)-X_m(r)|^2\right]&\leq 2L^2\norm{p}_{L^2(0,T)}\int_0^t\mathbb{E}\left[\left|X_{m}(s)-X_{m-1}(s)\right|^2\right]ds\\ &\quad+2\mathbb{E}\left[\sup_{0\leq r\leq t}\left|\int_0^r\left[\sigma(s,X_{m}(s))-\sigma(s,X_{m-1}(s))\right]dB_s\right|^2\right]. \end{split} \end{equation*} By using again Dobb's inequality, hypothesis 4. and the inductive step, we have \begin{equation*} \begin{split} \mathbb{E}\left[\sup_{0\leq r\leq t}|X_{m+1}(r)-X_m(r)|^2\right]&\leq 2L^2\norm{p}_{L^2(0,T)}\int_0^t\mathbb{E}\left[\left|X_{m}(s)-X_{m-1}(s)\right|^2\right]ds\\ &\quad+8\mathbb{E}\left[\int_0^t\left|\sigma(s,X_{m}(s))-\sigma(s,X_{m-1}(s))\right|^2ds\right]\\ &\leq 2L^2\norm{p}_{L^2(0,T)}\int_0^t\mathbb{E}\left[\left|X_{m}(s)-X_{m-1}(s)\right|^2\right]ds\\ &\quad+8L^2\int_0^t\mathbb{E}\left[\left|X_{m}(s)-X_{m-1}(s)\right|^2\right]ds\\ &\leq 2L^2\left(\norm{p}^2_{L^2(0,T)}+4\right)\int_0^t\frac{(Rs)^m}{m!}ds\leq \frac{(Rt)^{m+1}}{(m+1)!}. \end{split} \end{equation*} Thus, the proof of \eqref{induction-appA} is completed. Now we apply Markov's inequality which gives \begin{equation*} \mathbb{P}\left(\sup_{0\leq t\leq T}|X_{m+1}(t)-X_m(t)|>\frac{1}{2^m}\right)\leq 2^{2m}\mathbb{E}\left[\sup_{0\leq t\leq T}|X_{m+1}(t)-X_m(t)|^2\right]\leq 2^{2m}\frac{(RT)^{m+1}}{(m+1)!}. \end{equation*} Since the left-hand side of the above inequality is summable, by the Borel-Cantelli lemma we get \begin{equation*} \mathbb{P}\left(\sup_{0\leq t\leq T}|X_{m+1}(t)-X_m(t)|>\frac{1}{2^m}\text{ for infinitely many indices $m$}\right)=0, \end{equation*} that is, for almost every $\omega$ we eventually have \begin{equation*} \sup_{0\leq t\leq T}|X_{m+1}(t)-X_m(t)|\leq \frac{1}{2^m} \end{equation*} and therefore, for fixed $\omega$ the series \begin{equation*} X_0+\sum_{k=0}^{m-1}|X_{k+1}(t)-X_k(t)|=X_m(t) \end{equation*} converges uniformly on $[0,T]$ a.s. Let $X_t=\displaystyle\lim_{m\to+\infty} X_m(t)$. Then, $X$ is continuous, being the uniform limit of continuous processes, and therefore $X\in M^2_{loc}([0,T])$. We now prove that $X$ is solution of \eqref{sol-sde}. Recall that \begin{equation}\label{sol-sde2} X_{m+1}(t)=X_0+\int_0^tp(s)\mu(X_m(s))ds+\int_0^t\sigma(s,X_m(s))dB_s. \end{equation} Then, the left-hand side converges uniformly to $X$. From hypothesis 3. of (H2) we get that \begin{equation*} |p(t)||\mu(X_{m}(t))-\mu(X_t)|\leq L|p(t)|\left|X_{m}(t)-X_t\right|. \end{equation*} By Lebesgue's dominated convergence theorem, we deduce that \begin{equation*} \int_0^tp(s)\mu(X_m(s))ds\to\int_0^t p(s)\mu(X_s)ds\quad\text{a.s.}\quad\quad\text{as}\quad m\to+\infty. \end{equation*} Moreover, since by hypothesis 3. 
of (H2) \begin{equation*} |\sigma(s,X_m(s))-\sigma(s,X_s)|\leq L|X_m(s)-X_s| \end{equation*} we deduce that, uniformly on $[0,T]$ a.s., \begin{equation*} \lim_{m\to+\infty} \sigma(t,X_m(t))=\sigma(t,X_t) \end{equation*} and therefore \begin{equation*} \int_0^t\sigma(s,X_m(s))dB_s\to\int_0^t \sigma(s,X_s)dB_s\quad\text{in probability}\quad\text{as}\quad m\to+\infty. \end{equation*} Since a.s. convergence implies convergence in probability, we can take the limit in probability in \eqref{sol-sde2} and obtain that \begin{equation*} X_t=X_0+\int_0^tp(s)\mu(X_s)ds+\int_0^t\sigma(s,X_s)dB_s, \end{equation*} that is, $X$ is a solution of the stochastic differential equation. Moreover, $X\in M^2([0,T])$ thanks to Theorem \ref{teo91}. We now prove uniqueness. Let $X_1$, $X_2$ be two solutions of \eqref{sol-sde}. Then, we have \begin{equation*} |X_1(t)-X_2(t)|\leq \left|\int_0^tp(s)\left(\mu(X_1(s))-\mu(X_2(s))\right)ds\right|+\left|\int_0^t\left(\sigma(s,X_1(s))-\sigma(s,X_2(s))\right)dB_s\right|. \end{equation*} We estimate the expected value of the supremum over $[0,t]$ of the squared difference of the two solutions: \begin{equation*} \begin{split} \mathbb{E}\left[\sup_{0\leq r\leq t}\left|X_1(r)-X_2(r)\right|^2\right]&\leq 2\mathbb{E}\left[\sup_{0\leq r\leq t}\left|\int_0^rp(s)\left(\mu(X_1(s))-\mu(X_2(s))\right)ds\right|^2\right]\\ &\quad+2\mathbb{E}\left[\sup_{0\leq r\leq t}\left|\int_0^r\left(\sigma(s,X_1(s))-\sigma(s,X_2(s))\right)dB_s\right|^2\right]\\ &\leq 2\norm{p}^2_{L^2(0,T)}\mathbb{E}\left[\int_0^t\left|\mu(X_1(s))-\mu(X_2(s))\right|^2ds\right]\\ &\quad +8\mathbb{E}\left[\int_0^t\left|\sigma(s,X_1(s))-\sigma(s,X_2(s))\right|^2ds\right]\\ &\leq \left(2L^2\norm{p}_{L^2(0,T)}^2+8L^2\right)\int_0^t\mathbb{E}\left[\left|X_1(s)-X_2(s)\right|^2\right]ds. \end{split} \end{equation*} If we set $v(t)=\mathbb{E}\left[\sup_{0\leq r\leq t}\left|X_1(r)-X_2(r)\right|^2\right]$, then $v$ is bounded thanks to Theorem \ref{teo91} and from the above inequality we deduce \begin{equation*} v(t)\leq C\int_0^t v(s)ds,\qquad 0\leq t\leq T, \end{equation*} with $C:=2L^2\left(\norm{p}_{L^2(0,T)}^2+4\right)$. From Gronwall's inequality we conclude that $v\equiv 0$ on $[0,T]$. Thus, the two solutions $X_1$ and $X_2$ coincide. \end{proof} \end{appendices} \end{document}
\begin{document} \begin{abstract} We provide explicit equations of some smooth complex quartic surfaces with many lines, including all 10 quartics with more than 52 lines. We study the relation between linear automorphisms and some configurations of lines such as twin lines and special lines. We answer a question by Oguiso on a determinantal presentation of the Fermat quartic surface. \end{abstract} \maketitle \section{Introduction} \subsection{Principal results} In this paper, all algebraic varieties are defined over the field of complex numbers~$\CC$. If $X \subset \PP^3$ is an algebraic surface, we denote the number of lines on $X$ by~$\Phi(X)$. While it is classically known that a smooth cubic surface contains exactly 27 lines, the number of lines on a smooth quartic surface $X$ is finite, but depends on $X$. If $X$ is general, then $\Phi(X) = 0$. Segre first claimed in 1943 that $\Phi(X) \leq 64$ for any surface~\cite{segre}. His arguments, though, contained a flaw which was corrected about 70 years later by Rams and Schütt~\cite{64lines}. Their result was then strongly improved by Degtyarev, Itenberg and Sertöz~\cite{degtyarev-itenberg-sertoz}, who showed that if $\Phi(X) > 52$, then $X$ is projectively equivalent to exactly one of a list of 10 surfaces---called $X_{64}$, $X'_{60}$, $X''_{60}$, $\bar {X}''_{60}$, $X_{56}$, $\bar {X}_{56}$, $Y_{56}$, $Q_{56}$, $X_{54}$ and $Q_{54}$---whose Néron-Severi lattices are explicitly known. The subscripts denote the number of lines that they contain. The pairs $(X_{60}'',\bar X_{60}'')$ and $(X_{56},\bar X_{56})$ are complex conjugate to each other, so these 10 surfaces correspond to 8 different line configurations. The aim of this paper is to find an explicit defining equation of each of these 10 surfaces. Seven of these 10 surfaces are already known in the literature. The surface~$X_{64}$---which as an immediate corollary of this list is the only surface up to projective equivalence which contains $64$ lines---was found by Schur in 1882~\cite{schur} and is given by the equation \[ x_0(x_0^3-x_1^3) = x_2 (x_2^3 - x_3^3). \] The surfaces with 60 lines were found by Rams and Schütt. An equation of $X'_{60}$ is contained in~\cite{64lines}. It was found while studying a particular 6-dimensional family $\mathcal Z$ of quartics containing a line intersecting 18 or more other lines (see §\ref{subsec:symmetries-order-3}). The surfaces $X''_{60}$ and $\bar X''_{60}$ were found by using positive characteristic methods \cite{rams-schuett-char2}. These surfaces are still smooth and contain 60 lines when reduced modulo~$2$. According to Degtyarev~\cite{degtyarev-supersingular}, 60 lines is the maximal number of lines attainable by a smooth quartic surface defined over a field of characteristic~$2$. The surface $X_{56}$ was studied by Shimada and Shioda~\cite{shimada-shioda} due to a peculiar property: it is isomorphic as an abstract K3 surface to the Fermat quartic surface $X_{48}$, but it is not projectively equivalent to it, as the Fermat quartic only contains $48$ lines. They provide an explicit equation of $X_{56}$ and also an explicit isomorphism between the quartic surfaces. Oguiso showed that the graph $S$ of the isomorphism $X_{48}\xrightarrow{\sim} X_{56}$ is the complete intersection of four hypersurfaces of bi-degree $(1,1)$ in $\PP^3\times \PP^3$~\cite{oguiso17}. He also asked for explicit equations of the graph $S$, which we provide in~§\ref{sec:oguiso-pairs}. 
As a byproduct, we obtain a determinantal description of the Fermat quartic surface. A defining equation of the surface $Y_{56}$ is contained in~\cite{degtyarev-itenberg-sertoz}. This surface is a real surface with $56$ real lines, i.e., it attains the maximal number of real lines that can be contained in a smooth real quartic. The three remaining surfaces are $Q_{56}$, $X_{54}$ and $Q_{54}$. In order to provide an explicit equation of $Q_{56}$ and $X_{54}$ (see §\ref{subsec:Q56} and §\ref{subsec:X54}), we investigate the group of linear automorphisms of each quartic surface, meaning the automorphisms that are restrictions of automorphisms of $\PP^3$. We use the word `symmetry' for such an automorphism. Thanks to the global Torelli theorem, the group of symmetries can be computed from the Néron-Severi group using Nikulin's theory of lattices~\cite{nikulin}. As for $Q_{54}$, we first find by the same method an explicit equation of $X_{52}''$, the unique surface up to projective equivalence containing configuration~$\bX_{52}''$. This surface is isomorphic to $Q_{54}$ as abstract K3~surface. Following~\cite{shimada-shioda}, we find an explicit isomorphism between the two surfaces, which in turn enables us to find an equation of $Q_{54}$ (see §\ref{subsec:Q54}). In previous papers \cite{64lines, veniani1}, two particular configurations of lines played an important role, viz. twin lines and special lines. Their geometry is related to torsion sections of some elliptic fibrations. Here we relate these configurations to the presence of certain symmetries of order 2 and 3, thus providing yet another characterization of both phenomena (see Propositions~\ref{prop:twins-antisympl} and~\ref{prop:special-aut}). We extend our lattice computations to all rigid (i.e., of rank~$20$) line configurations that can be found in~\cite{degtyarev-itenberg-sertoz} and~\cite{degtyarev16}. In particular, we determine for each of them the size of their group of symmetries, listed in Table~\ref{tab:main-table}. We are also able to find explicit equations of all smooth quartic surfaces containing configurations $\bX_{52}''$, $\bX_{52}'''$, $\bX_{52}^\mathrm v$, $\bY_{52}'$, $\bQ_{52}'''$ and $\bX_{51}$ (see §\ref{sec:<=52-lines}). We also provide a 1-dimensional family of quartic surfaces whose general member is smooth and contains the non-rigid configuration $\bZ_{52}$ (see §\ref{subsec:Z52}). \subsection{Contents of the paper} In §\ref{sec:lattices} we introduce the basic nomenclature and notation about lattices. In §\ref{sec:fano-configurations} we establish the connection between lattices and configurations of lines on smooth quartic surfaces and their symmetries. In §\ref{sec:symmetries-and-lines} we investigate the properties of symmetries of order 2 and 3, relating them to some known configurations of lines, namely twin lines and special lines. In §\ref{sec:>52-lines} and §\ref{sec:<=52-lines} we explain how to find explicit equations of several quartic surfaces containing many lines. In §\ref{sec:oguiso-pairs}, we introduce Oguiso pairs and give explicit equations for the graph of the isomorphism $X_{48}\xrightarrow{\sim} X_{56}$. \section{Lattices} \label{sec:lattices} Let $R$ be $\ZZ$, $\QQ$ or $\RR$. An \emph{$R$-lattice} is a free finitely generated $R$-module $L$ equipped with a non-degenerate symmetric bilinear form $\langle \ , \: \rangle \colon L \times L \rightarrow R$. Let $n$ be the rank of a $\ZZ$-lattice $L$ and choose a basis $\{e_1,\ldots,e_n\}$ of $L$. 
The matrix whose $(i,j)$-component is $\langle e_i, e_j \rangle$ is called a \emph{Gram matrix} of $L$. The determinant of this matrix does not depend on the choice of the basis. It is called the \emph{determinant} of $L$ and denoted $\det L$. \subsection{Positive sign structures} By a well-known theorem of Sylvester, any $\RR$-lattice admits a diagonal Gram matrix whose diagonal entries are $\pm 1$. The numbers $s_+$ of $+1$ and $s_-$ of $-1$ are well defined and the pair $(s_+,s_-)$ is called the \emph{signature} of $L$. A lattice is \emph{positive definite} if $s_- = 0$, \emph{negative definite} if $s_+ = 0$, or \emph{indefinite} otherwise. It is \emph{hyperbolic} if $s_+ = 1$. A \emph{positive sign structure} of $L$ is the choice of a connected component of the manifold parameterizing oriented $s_+$-dimensional subspaces $\Pi$ of $L$ such that the restriction of $\langle \ , \: \rangle$ to $\Pi$ is positive definite. By definition, the signature and the positive sign structure of a $\ZZ$- or $\QQ$-lattice $L$ are those of $L\otimes \RR$. \subsection{Discriminant forms} Let $D$ be a finite abelian group. A \emph{finite symmetric bilinear form} on $D$ is a a homomorphism $b\colon D \times D \rightarrow \QQ/\ZZ$ such that $b(x,y) = b(y,x)$ for any $x,y \in D$. A \emph{finite quadratic form} on $D$ is a map $q\colon D \rightarrow \QQ/2\ZZ$ such that \begin{enumerate}[(i),noitemsep] \item $q(nx) = n^2q(x)$ for $n \in \ZZ$, $x \in D$; \item the map $b\colon D\times D \rightarrow \QQ/\ZZ$ defined by $(x,y) \mapsto (q(x+y) - q(x) - q(y))/2$ is a finite symmetric bilinear form. \end{enumerate} A finite quadratic form $(D,q)$ is \emph{non-degenerate} if the associated finite symmetric bilinear form $b$ is non-degenerate. We denote by $\Orth(D,q)$ the group of automorphisms of a non-degenerate finite quadratic form $(D,q)$. Let $L$ be a $\ZZ$-lattice. The \emph{dual lattice} $L^\vee$ of $L$ is the group of elements $x \in L \otimes \QQ$ such that $\langle x,v\rangle \in \ZZ$ for all $v \in L$. The dual lattice $L^\vee$ is a free $\ZZ$-module which contains $L$ as a submodule of finite index. In particular, $L$ and $L^\vee$ have the same rank. A $\ZZ$-lattice $L$ is \emph{even} if $\langle x,x \rangle \in 2\ZZ$ holds for any $x \in L$. Given an even $\ZZ$-lattice $L$, the \emph{discriminant form} $(D_L,q_L)$ of $L$ is the finite quadratic form on the group $D_L := L^\vee/L$ defined by $q_L(\bar x) = \langle x,x\rangle \mod 2\ZZ$, where $\bar x \in D_L$ denotes the class of $x \in L^\vee$ modulo $L$. \subsection{Genera} Let $(s_+,s_-)$ be a pair of non-negative integers and $(D,q)$ be a non-degenerate finite quadratic form. The \emph{genus} $\cG$ determined by $(s_+,s_-)$ and $(D,q)$ is the set of isometry classes of even $\ZZ$-lattices $L$ of signature $\sign(L) = (s_+,s_-)$ and discriminant form $(D_L,q_L)\cong (D,q)$. The \emph{oriented genus} $\cG^\orient$ determined by $(s_+,s_-)$ and $(D,q)$ is the set of equivalence classes of pairs $(L,\theta)$, where $L$ is a lattice whose isometry class belongs to $\cG$, and $\theta$ is a positive sign structure on $L$. We say that $(L,\theta)$ and $(L',\theta')$ are equivalent if there is an isometry $L \xrightarrow{\sim} L'$ which maps $\theta$ to $\theta'$. There is an obvious forgetful map $\cG^\orient \rightarrow \cG$. 
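As a small aside (a sketch only, assuming the SymPy library; it is not the method used for the computations in this paper), the discriminant group $D_L=L^\vee/L$ of an even lattice given by an explicit Gram matrix can be read off from the Smith normal form of that matrix; its order equals $|\det L|$.
\begin{verbatim}
# Sketch (assuming SymPy): the discriminant group D_L = L^dual / L of an even
# lattice can be read off from the Smith normal form of a Gram matrix.
# Example: the rank-2 lattice with Gram matrix [[2,1],[1,2]] (the root
# lattice A_2), for which D_L is cyclic of order 3.
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

G = Matrix([[2, 1],
            [1, 2]])                      # a Gram matrix of A_2
snf = smith_normal_form(G, domain=ZZ)
invariants = [snf[i, i] for i in range(G.rows) if snf[i, i] != 1]
print(G.det())                            # |D_L| = |det L| = 3
print(invariants)                         # [3] : D_L is isomorphic to Z/3Z
\end{verbatim}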
\subsection{Positive definite lattices of rank 2} \label{subsec:pos-def-rank-2} If $T$ is a positive definite even $\ZZ$-lattice of rank 2 and discriminant form $(D,q)$, then there exists a unique triple $(a,b,c)$ of integers with $0< a \leq c$, $0 \leq 2b \leq a$, $a$ and $c$ even, such that $T$ admits a Gram matrix of the form \[ \left[\begin{array}{cc} a & b \\ b & c \\ \end{array}\right]. \] We denote by $[a,b,c]$ the isometry class of $T$. It is easy to compute the genus $\cG$ and oriented genus $\cG^\orient$ determined by $(2,0)$ and $(D,q)$, since $\frac34 c \leq \det T \leq c^2$. The preimage of $[a,b,c]$ under the map $\cG^\orient\rightarrow \cG$ has either one or two elements. It has one element if and only if $T$ admits an orientation-reversing autoisometry, which is the case if and only if $a = c$, or $a = 2b$, or $b = 0$. \section{Fano configurations} \label{sec:fano-configurations} A \emph{$d$-polarized lattice} is a hyperbolic lattice $S$ together with a distinguished vector $h \in S$ such that $h^2 = d$, called the \emph{polarization}. A \emph{line} in a polarized lattice $(S,h)$ is a vector $v \in S$ such that $v^2 = -2$ and $v\cdot h = 1$. The set of lines is denoted by $\Fn(S,h)$. A \emph{configuration} is a $4$-polarized lattice $(S,h)$ which is generated over $\QQ$ by $h$ and all lines in $\Fn(S,h)$. A configuration is \emph{rigid} if $\rank S = 20$. Let $X \subset \PP^3$ be a smooth quartic surface. The primitive sublattice $\cF(X)$ of $H^2(X,\ZZ)$ spanned over $\QQ$ by the plane section $h$ and the classes of all lines on $X$ is called the \emph{Fano configuration} of $X$. The plane section defines a polarization of $\cF(X)$. A configuration is called \emph{geometric} if it is isometric as a polarized lattice to the Fano configuration of some quartic surface. \subsection{Projective equivalence classes} \label{subsec:proj-equiv-classes} Let $S$ be a rigid geometric configuration. Consider the (non-empty) set of projective equivalence classes of smooth quartic surfaces whose Fano configuration is isometric to $S$. This set is finite and its cardinality can be computed in the following way. Let $\cG_S$ and $\cG^\orient_S$ be the (oriented) genus determined by $(2,0)$ and $(D_S,-q_S)$. Fix a positive definite lattice $T$ of rank $2$ whose class is in $\cG_S$, and let $\psi\colon (D_T,q_T) \rightarrow (D_S,-q_S)$ be an isomorphism. We can identify $\Orth_h(S)$ as a subgroup of $\Orth(D_T,q_T)$. Define \[ \cl(S) := |\Orth^+(T) \backslash \Orth(D_T,q_T) / \Orth_h(S)|.\] This number does not depend on the $T$ and $\psi$ chosen. The number of projective equivalence classes is equal to \[\Cl(S) := \cl(S) \cdot |\cG^\orient_S|.\] For more details, see \cite[Remark 3.6]{degtyarev16}. \subsection{Symmetries} \label{subsec:symmetries} Let $(S,h)$ be a geometric Fano configuration. Fix a lattice $T$ in $\cG_S$ and an isomorphism $\psi\colon (D_S,-q_S) \xrightarrow{\sim} (D_T,q_T)$. Let $\Gamma_T$ be the image of $\Orth^+(T)$ in $\Orth(D_T,q_T)$. We define the subgroup \[ \Gamma_S := \{ \psi^{-1}\gamma \psi \mid \gamma \in \Gamma_T\} \subset \Orth(D_S,q_S). \] The group $\Gamma_S$ does not depend on the $T$ and $\psi$ chosen (see \cite[§2.4]{degtyarev16}). Let $\eta_S\colon \Orth_h(S) \rightarrow \Orth(D_S,q_S)$ be the natural homomorphism.
We consider the subgroup of \emph{symmetries} of~$S$ \[ \Sym(S) := \{ \varphi \in \Orth_h(S) \mid \eta_S(\varphi) \in \Gamma_S \} \] and the subgroup of \emph{symplectic symmetries} of~$S$ \[ \Sympl(S) := \{ \varphi \in \Orth_h(S) \mid \eta_S(\varphi) = \id \}. \] Let $X \subset \PP^3$ be a smooth quartic surface. A \emph{symmetry} of $X$ is an automorphism $\varphi\colon X\rightarrow X$ which is the restriction of an automorphism of $\PP^3$. A symmetry $\varphi$ is \emph{symplectic} if it acts trivially on $H^{2,0}(X)$. We denote the group of symmetries of $X$ by $\Sym(X)$ and the subgroup of symplectic symmetries by $\Sympl(X)$. Let $S := \cF(X)$ be the Fano configuration of $X$. A symmetry of $X$ induces a symmetry of $S$, thus giving a homomorphism $\Sym(X) \rightarrow \Sym(S)$. The following proposition is a consequence of Nikulin's theory of lattices and the global Torelli theorem. \begin{proposition} \label{prop:torelli} If $S = \cF(X)$ is a rigid geometric Fano configuration, then the homomorphism $\Sym(X) \rightarrow \Sym(S)$ is an isomorphism. Furthermore, symplectic automorphisms of~$X$ correspond to symplectic automorphisms of~$S$ under this isomorphism. \end{proposition} \subsection{The main table} \label{subsec:main-table} Table~\ref{tab:main-table} lists all known rigid geometric Fano configurations found in \cite{degtyarev-itenberg-sertoz} and \cite{degtyarev16}. Let $N := |\Fn(S)|$. Note that $N$ is always equal to the subscript in the name of the configuration. \begin{itemize} \item Since $S$ is rigid, an element in $\Orth_h(S)$ corresponds uniquely to a permutation $\sigma \in \Symmetricgroup_N$ such that $\langle l_i ,\, l_j\rangle = \langle l_{\sigma(i)},\, l_{\sigma(j)} \rangle$. \item The sixth column contains a list of all elements of $\cG_S$ (see \ref{subsec:pos-def-rank-2}). Those classes that correspond to two elements in $\cG_S^\orient$ are marked by an asterisk~${}^*$. We write ${}^{\times 2}$ if $\cl(S) = 2$; otherwise, $\cl(S) = 1$. \item We compute $\Sym(S)$ and $\Sympl(S)$ using the definition. \end{itemize} The list of rigid configurations with exactly $52$ lines is not known to be complete. The only known non-rigid configuration with $52$ lines is $\bZ_{52}$ (see~§\ref{subsec:Z52}). On the other hand, there are certainly many more configurations with less than $52$ lines than the ones listed here. For our computations we used \verb|GAP| \cite{GAP4}. \begin{table} \caption{Rigid geometric Fano configurations with many lines. 
For an explanation of the entries, see~§\ref{subsec:main-table}.} \label{tab:main-table} \begin{tabular}{ccccrlc} \toprule $S$ & $|\Orth_h(S)|$ & $|\Sym(S)|$ & $|\Sympl(S)|$ & $\det T$ & $T^{\times \cl(S)}$ & see \\ \midrule $\bX_{64}$ & 4608 & 1152 & 192 & 48 & $[8,4,8]$ & \cite{schur} \\ $\bX_{60}'$ & 480 & 120 & 60 & 60 & $[4,2,16]$ & \cite{64lines} \\ $\bX_{60}''$ & 240 & 120 & 60 & 55 & $[4,1,14]^*$ & \cite{rams-schuett-char2} \\ $\bX_{56}$ & 128 & 64 & 16 & 64 & $[8,0,8]^{\times 2}$ & \cite{shimada-shioda}, §\ref{sec:oguiso-pairs} \\ $\bY_{56}$ & 64 & 32 & 16 & 64 & $[2,0,32]$ & \cite{degtyarev-itenberg-sertoz} \\ $\bQ_{56}$ & $384$ & $96$ & $48$ & 60 & $[4,2,16]$ & §\ref{subsec:Q56} \\ $\bX_{54}$ & $384$ & $48$ & $24$ & 96 & $[4,0,24]$ & §\ref{subsec:X54} \\ $\bQ_{54}$ & $48$ & $8$ & $8$ & 76 & $[4,2,20]$ & §\ref{subsec:Q54} \\ \midrule $\bX_{52}'$ & 24 & 3 & 3 & 80 & $[8,4,12]$ & \\ $\bX_{52}''$ & 36 & 6 & 6 & 76 & $[4,2,20]$ & §\ref{subsec:X52ii} \\ $\bX_{52}'''$ & 320 & 80 & 20 & 100 & $[10,0,10]$ & §\ref{subsec:X52iii} \\ $\bX_{52}^\mathrm v$ & 32 & 8 & 4 & 84 & $[10,4,10]$ & §\ref{subsec:X52v} \\ $\bY_{52}'$ & 8 & 8 & 4 & 76 & $\begin{cases} [2,0,38] \\ [8,2,10]^* \end{cases}$ & §\ref{subsec:Y52i} \\ $\bY_{52}''$ & 8 & 8 & 4 & 79 & $\begin{cases} [2,1,40] \\ [4,1,20]^* \\ [8,1,10]^* \end{cases}$ & \\ $\bQ_{52}'$ & 64 & 8 & 8 & 96 & $[4,0,24]$ & \\ $\bQ_{52}''$ & 64 & 16 & 16 & 80 & $[8,4,12]^{\times 2}$ & \\ $\bQ_{52}'''$ & 96 & 24 & 4 & 75 & $[10,5,10]$ & §\ref{subsec:Q52iii} \\ \midrule $\bX_{51}$ & 12 & 6 & 6 & 87 & $\begin{cases} [4,1,22]^* \\ [6,3,16] \end{cases}$ & §\ref{subsec:X51} \\ $\bX_{50}'$ & 18 & 3 & 3 & 75 & $[4,2,28]$ & \\ $\bX_{50}''$ & 12 & 3 & 3 & 96 & $[4,0,24]^{\times 2}$ & \\ $\bX_{50}'''$ & 16 & 4 & 2 & 96 & $[4,0,24]^{\times 2}$ & \\ $\bX_{48}$ & 6144 & 1536 & 384 & 64 & $[8,0,8]$ & §\ref{sec:oguiso-pairs} \\ $\bY_{48}'$ & 8 & 2 & 1 & 96 & $[2,0,48]$ & \\ $\bY_{48}''$ & 8 & 4 & 4 & 95 & $\begin{cases} [2,1,48] \\ [8,1,12]^* \\ [10,5,12] \end{cases}$ & \\ $\tilde \bY_{48}'$ & 48 & 48 & 24 & 76 & $\begin{cases} [2,0,38] \\ [8,2,10]^* \end{cases}$ & \\ $\tilde \bY_{48}''$ & 12 & 12 & 6 & 79 & $\begin{cases} [2,1,40] \\ [4,1,20]^* \\ [8,1,10]^* \end{cases}$ & \\ $\bQ_{48}$ & 128 & 16 & 8 & 80 & $[8,4,12]$ & \\ \bottomrule \end{tabular} \end{table} \section{Symmetries and lines on quartic surfaces} \label{sec:symmetries-and-lines} In this section we recall some basic facts about symplectic automorphisms of K3 surfaces. Moreover, we study symmetries of smooth quartic surfaces of order 2 and 3. Some of these symmetries are related to particular configurations of lines, which have played a major role in previous works (cf. \cite{64lines,veniani1}): twin lines and special lines. \subsection{Symplectic automorphisms} \label{subsec:sympl-autom} We recall some basic properties of symplectic automorphisms of K3 surfaces. For more details, see~\cite{nikulin-finite-groups}. If $\varphi\colon Y \rightarrow Y$ is an automorphism of an algebraic variety $Y$ its \emph{fixed locus} is denoted by $\Fix(\varphi)$. If $\varphi\colon X\rightarrow X$ is a symmetry of a quartic surface $X$, then we denote by the same letter also the corresponding automorphism of~$\PP^3$. If it is not clear from the context to which fixed locus we are referring, we write $\Fix(\varphi,X)$ or $\Fix(\varphi,\PP^3)$. Let $\varphi\colon Y\rightarrow Y$ be a symplectic automorphism of a K3 surface $Y$. 
If the order $n$ of $\varphi$ is finite, then $n\leq 8$ and $\Fix(\varphi)$ consists of a finite number $f_n$ of points. This number $f_n$ depends only on the order $n$. The following table shows all values of $f_n$ for $n = 2,\ldots,8$. \begin{table}[h] \begin{tabular}{c|ccccccc} $n$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ $f_n$ & 8 & 6 & 4 & 4 & 2 & 3 & 2 \\ \end{tabular} \end{table} \subsection{Type of a line} Let $l$ be a line on a smooth quartic surface $X$. Consider the pencil of planes $\{\Pi_t\}_{t\in \PP^1}$ containing $l$. The curve $C_t := \Pi_t \cap X$ is called the \emph{residual cubic} in the plane $\Pi_t$. If $C_t$ splits into three lines, then $C_t$ is called a \emph{$3$-fiber}. If $C_t$ splits into a line and an irreducible conic, then $C_t$ is called a \emph{$1$-fiber}. The \emph{type} of the original line $l$ is the pair $(p,q)$, where $p$ (resp. $q$) is the number of $3$-fibers (resp. $1$-fibers) of $l$. The name ``fiber'' comes from the fact that the morphism $X\rightarrow \PP^1$ whose fiber over $t \in \PP^1$ is $C_t$ is an elliptic fibration. In Kodaira's notation, a $3$-fiber corresponds to a fiber of type $\I_3$ or $\IV$, while a $1$-fiber corresponds to a fiber of type $\I_2$ or $\III$. The \emph{discriminant} of a line is the discriminant of its induced elliptic fibration. It is a homogeneous polynomial of degree~24 in two variables. The restriction of $X\rightarrow \PP^1$ to $l$ is a separable morphism of curves of degree~3. Its ramification divisor has degree~$4$. The \emph{ramification points of $l$} are the ramification points with respect to this morphism. \subsection{Symmetries of order 2 and twin lines} Once coordinates on $\PP^n$ are chosen, any automorphism of $\PP^n$ (hence, any symmetry of a quartic surface) is represented by a square matrix of size $n+1$, well defined up to multiplication by a non-zero scalar. The diagonal matrix with entries $a,b,c,d \in \CC$ will be denoted by $\diag(a,b,c,d)$. The following lemma is an easy consequence of the fact that a complex matrix of finite order is diagonalizable. \begin{lemma} \label{lem:autP3-ord2} If $\varphi\colon \PP^3 \rightarrow \PP^3$ is an automorphism of order $2$, then $\Fix(\varphi)$ consists of the disjoint union of either two lines, or one point and one plane. \end{lemma} \begin{proposition} \label{prop:fix-sympl-ord-2} If $\sigma\colon X \rightarrow X$ is a symplectic symmetry of order~$2$, then there exist two lines $l_1$ and $l_2$ in $\PP^3$ such that $\Fix(\sigma,\PP^3) = l_1\cup l_2$. Moreover, each $l_i$ intersects $X$ in $4$ distinct points. \end{proposition} \proof A symplectic automorphism of order $2$ of a K3 surface has exactly 8 fixed points, so the statement follows from Lemma~\ref{lem:autP3-ord2} (cf. \cite{vanGeemenSarti}). \endproof Two disjoint lines $l',l''$ on $X$ are \emph{twin lines} if there exist $10$ other distinct lines on $X$ which intersect both $l'$ and $l''$. \begin{proposition} \label{prop:twins-antisympl} If $l',l''$ are two disjoint lines on $X$, then the following conditions are equivalent: \begin{enumerate}[(a)] \item $l'$ and $l''$ are twin lines; \item there exists a non-symplectic symmetry $\tau\colon X \rightarrow X$ of order~$2$ such that $\Fix(\tau,X) = \Fix(\tau,\PP^3) = l' \cup l''$. \end{enumerate} \end{proposition} \proof The implication (a) $\implies$ (b) follows from \cite[Remark 3.4]{veniani1}. Suppose that (b) holds. Up to coordinate change we can suppose that $l',l''$ are given by $x_0 = x_1 = 0$ and $x_2 = x_3 = 0$, respectively.
Then, necessarily $\tau = \diag(1,1,-1,-1)$. Imposing that $\tau$ is a non-symplectic symmetry of $X$ leads to the vanishing of nine coefficients, so $X$ belongs to the family $\mathcal A$ in \cite[Proposition 3.2]{veniani1}. By the same proposition, $l'$ and $l''$ are twin lines. \endproof \begin{corollary} \label{cor:p-fiber-twin-line} Suppose that $l',l''$ are twin lines and that the residual cubic in a plane containing $l'$ splits into three lines $m_1,m_2,m_3$. If $m_1$ intersects~$l''$, then the intersection point of $m_2$ and $m_3$ lies on $l'$. \end{corollary} \proof The non-symplectic symmetry $\tau$ fixes $m_1$ as a set, but not pointwise. Moreover, $\tau$ exchanges $m_2$ and $m_3$, otherwise there would be at least 3 fixed points on $m_1$. As $l'$ is fixed pointwise, the intersection point of $m_2$ and $m_3$ lies on $l'$. \endproof \subsection{Symmetries of order 3 and special lines} \label{subsec:symmetries-order-3} In a similar way as for order 2, we can prove the following lemma. \begin{lemma} \label{lem:autPn-ord3} Let $\varphi\colon \PP^n \rightarrow \PP^n$ be an automorphism of order $3$. \begin{enumerate}[(a),noitemsep] \item If $n = 2$, then $\Fix(\varphi)$ consists of the disjoint union of either one point and one line, or three points. \item If $n = 3$, then $\Fix(\varphi)$ consists of the disjoint union of either one point and one plane, or two lines, or two points and one line. \end{enumerate} \end{lemma} A line $l$ on a smooth quartic $X$ is said to be \emph{special} if one can choose coordinates so that $l$ is given by $x_0 = x_1 = 0$ and $X$ belongs to the following family found by Rams--Schütt~\cite{64lines}: \begin{equation} \label{eq:familyZ} \mathcal Z\colon x_0x_3^3 + x_1x_2^3 + x_2x_3q(x_0,x_1) + g(x_0,x_1) = 0, \end{equation} where $q$ and $g$ are polynomials of degree $2$ and $4$, respectively. \begin{proposition} \label{prop:special-aut} If $l$ is a line on a smooth quartic surface $X$, then the following conditions are equivalent: \begin{enumerate}[(a)] \item the line $l$ is special; \item there exists a symplectic symmetry $\sigma\colon X \rightarrow X$ of order~$3$ which preserves each plane containing $l$ as a set. \end{enumerate} \end{proposition} \proof If (a) holds, then $\sigma = \diag(1,1,\zeta,\zeta^2)$, where $\zeta$ is a primitive 3rd root of unity, is the required symmetry. Conversely, suppose that (b) holds. Since $\sigma$ is symplectic, $\Fix(\sigma,X)$ consists of $6$ distinct points. As $\sigma$ preserves $l$ as a set, $\sigma$ has exactly two fixed points on $l$, say $P_1$ and $P_2$. As all ramification points of $l$ are fixed, $P_1$ and $P_2$ are the only ramification points, necessarily of ramification index~$3$. Moreover, $\Fix(\sigma,\PP^3)$ cannot be the disjoint union of one plane $\Pi$ and one point, because the curve $\Pi \cap X$ would be fixed pointwise. Suppose that all lines in $\Fix(\sigma,\PP^3)$ pass through $P_1$ or $P_2$. Choose a plane $\Pi$ containing $l$, but not containing any lines in $\Fix(\sigma,\PP^3)$. Then, $\Fix(\sigma,\Pi) = \{P_1,P_2\}$, but this contradicts Lemma~\ref{lem:autPn-ord3}. Hence, $\Fix(\sigma,\PP^3)$ is the disjoint union of $P_1$, $P_2$ and one line $m$. For $i=1,2$, let $\Pi_i$ be the plane containing $l$ whose residual cubic contains $P_i$, and let $Q_i$ be the point of intersection of $\Pi_i$ with $m$. If we choose coordinates so that $P_1 = (0,0,1,0)$, $P_2 = (0,0,0,1)$, $Q_1 = (1,0,0,0)$ and $Q_2 = (0,1,0,0)$, then $\sigma = \diag(1,1,\zeta,\zeta^2)$.
In particular, $l$ is the line $x_0 = x_1 = 0$. By an explicit computation, imposing that $\sigma$ is a symplectic symmetry of $X$, we obtain that $X$ is a member of family $\mathcal Z$, i.e., $l$ is special. \endproof The richness of the geometry of special lines comes from the presence of a $3$-torsion section on a certain base change of the fibration induced by $l$. We will use the following fact. \begin{proposition}[\cite{64lines}] \label{prop:(6,q)-is-special} If $l$ is a line of type $(6,q)$, with $q > 0$, then $l$ is special. Moreover, each $1$-fiber intersects $l$ at a single point. \end{proposition} \section{Configurations with more than 52 lines} \label{sec:>52-lines} In this section we compute equations for $Q_{56}$, $X_{54}$ and $Q_{54}$. For our calculations we used \verb|SageMath|~\cite{sagemath} and \verb|Singular|~\cite{DGPS}. Discriminants of elliptic fibrations induced by lines are computed with the formulas found in~\cite{jac-of-gen-1-curves}. \subsection{Conventions} \label{subsec:conventions} As a general approach, given a rigid geometric configuration $S$, we let $X$ be a smooth quartic surface containing $S$, i.e., such that $\cF(X)$ is isometric to $S$. The complete knowledge of $\Sym(X)$ and $\Sympl(X)$ comes from the complete knowledge of $\Sym(S)$ and $\Sympl(S)$ thanks to Proposition~\ref{prop:torelli}. A \emph{quadrangle} is a set of four non-coplanar lines $l_0,\ldots,l_3$ such that $l_i$ intersects $l_{i+1}$ for each $i=0,\ldots,3$ (with subscripts interpreted modulo~$4$). The points of intersection of $l_i$ and $l_{i+1}$ are called \emph{vertices} of the quadrangle. A \emph{star} is a set of four coplanar lines intersecting in one single point. When a symmetry $\varphi$ of a quartic surface $X$ fixes a line, a quadrangle, etc. on $X$, it is understood that $\varphi$ fixes the line, the quadrangle, etc. as a set, unless we explicitly state that $\varphi$ fixes them pointwise. \subsection{Configuration \texorpdfstring{$\bQ_{56}$}{Q56}} \label{subsec:Q56} Let $Q_{56}$ be a quartic surface containing configuration~$\bQ_{56}$. Then, there exist four non-symplectic symmetries $\tau_0,\ldots,\tau_3$ of order~$2$ such that $\tau_i\tau_j = \tau_j\tau_i$ for all $i,j$ (there are three such quadruples). Since $\tau_i$ does not fix two lines on $X$ and the fixed locus of a non-symplectic automorphism of order~$2$ on a K3~surface does not contain isolated points, $\Fix(\tau_i,\PP^3)$ consists of a point $P_i$ (not belonging to $X$) and a plane $\Pi_i$. As $\tau_i$ and $\tau_j$ commute, $\tau_j(P_i) = P_i$ and $\tau_j(\Pi_i) = \Pi_i$. For $j\neq i$, $\tau_j$ does not fix $\Pi_i$ pointwise (otherwise it would coincide with $\tau_i$), so $\Fix(\tau_j,\Pi_i)$ is the union of a point and a line. It follows that $P_j \in \Pi_i$ and that the four planes $\Pi_0,\ldots,\Pi_3$ are in general position. Up to coordinate change, we can suppose that $\Pi_i$ is given by $x_i = 0$. Thus, $\tau_0 =\diag(-1,1,1,1),\ldots,\tau_3 =\diag(1,1,1,-1)$. Up to relabeling, both $\sigma_1 := \tau_0\tau_2 = \tau_1\tau_3$ and $\sigma_2:=\tau_0\tau_3 = \tau_1\tau_2$ fix eight lines on $Q_{56}$, say $a_0,\ldots,a_7$ and $b_0,\ldots,b_7$, respectively. The lines $a_0,\ldots,a_7$ form two quadrangles $\{a_0,\ldots,a_3\}$ and $\{a_4,\ldots,a_7\}$. Hence, the vertices of these quadrangles are the eight fixed points of $\sigma_1$ on $X$. By Proposition~\ref{prop:fix-sympl-ord-2}, they lie on the lines $l\colon x_0 = x_2 = 0$ and $l'\colon x_1 = x_3 = 0$.
Analogously, the lines $b_0,\ldots,b_7$ form two quadrangles $\{b_0,\ldots,b_3\}$ and $\{b_4,\ldots,b_7\}$, whose vertices lie on $m\colon x_0 = x_3 = 0$ and $m'\colon x_1 = x_2 = 0$. By inspection of $\bQ_{56}$ and its symmetry group, we see that (up to relabeling) \begin{itemize} \item $a_i$ intersects $b_j$ if and only if $i \equiv j \mod 2$; \item there exists a symplectic automorphism $\sigma'_1$ of order~$2$ which fixes $a_0,a_2,a_4,a_6$; \item there exists a symplectic automorphism $\sigma'_2$ of order~$2$ which fixes $b_0,b_2,b_4,b_6$. \end{itemize} Up to rescaling variables and coefficients, we have \[ \sigma'_1 = \left[\begin{array}{cccc} & & & 1 \\ & & 1 & \\ & 1 & & \\ 1 & & & \\ \end{array}\right]\qquad \text{and} \qquad \sigma'_2 = \left[\begin{array}{cccc} & & 1 & \\ & & & 1 \\ 1 & & & \\ & 1 & & \\ \end{array}\right]. \] If $a_0$, $a_1$ intersect each other at the point $(0,p,0,1)$, and $b_0$, $b_1$ intersect each other at the point $(0,q,1,0)$, for $p,q\in \CC$, then the surface belongs to the $2$-dimensional family given by \begin{multline*} 2 \, p^{2} q^{2} {\left(x_{0}^{4} + x_{1}^{4} + x_{2}^{4} + x_{3}^{4}\right)} + {\left(p^{4} + 1\right)} {\left(q^{4} + 1\right)} {\left(x_{0}^{2} x_{1}^{2} + x_{2}^{2} x_{3}^{2}\right)} \\ - 2 \, q^{2}{\left(p^{4} + 1\right)} {\left(x_{0}^{2} x_{2}^{2} + x_{1}^{2} x_{3}^{2}\right)} - 2 \,p^{2} {\left(q^{4} + 1\right)} {\left(x_{0}^{2} x_{3}^{2} + x_{1}^{2} x_{2}^{2}\right)} = 0. \end{multline*} The automorphism $\sigma'_1$ fixes also two lines $c,d$ which form a quadrangle with $a_0$ and $a_4$; necessarily, $c$ and $d$ pass through the fixed points of $\sigma'_1$ on $a_0$ and $a_4$, which can be explicitly computed. Up to exchanging $p$ with $-p$ or $q$ with $-q$ (which does not influence the equation of the surface), it follows that \[q = \frac{p-1}{p+1}. \] Finally, the residual conic in the plane containing $a_0$ and $b_0$ is reducible. This means that $p$ satisfies \[ p^4 - p^3 + 2\,p^2 + p + 1 = 0. \] \begin{remark} The surface $Q_{56}$ itself is defined over $\QQ(\sqrt{-15})$. All lines are defined over $\QQ(p)$. The surface contains 24 lines of type $(3,7)$, whose fibrations have one singular fiber of type $\mathrm{III}$. \end{remark} \subsection{Configuration \texorpdfstring{$\bX_{54}$}{X54}} \label{subsec:X54} Let $X_{54}$ be a quartic surface whose Fano configuration is isometric to $\bX_{54}$. Then $X_{54}$ contains 4 special lines of type $(6,2)$ and 10 pairs of twin lines. In particular, there is a quadrangle containing two opposite lines of type $(6,2)$, say $l_0$ and $l_2$, and a pair of twin lines of type $(0,10)$, say $l_1$ and $l_3$. Up to coordinate change, we can suppose that $l_i$ is the line $x_i = x_{i+1} = 0$. The non-symplectic symmetry $\sigma_1$ corresponding to the twin pair formed by $l_1$ and $l_3$ is $\sigma_1 = \diag(1,-1,-1,1)$ (see Proposition~\ref{prop:twins-antisympl}). By inspection of $\Sym(\bX_{54})$, we find two symplectic symmetries $\sigma_2$ and $\sigma_3$ with the following properties \begin{itemize} \item $\sigma_2(l_0) = l_0,\,\sigma_2(l_1) = l_3,\,\sigma_2(l_2) = l_2$; \item $\sigma_3(l_0) = l_2,\,\sigma_3(l_1) = l_1,\, \sigma_3(l_3) = l_3$. \end{itemize} Therefore, there exist $a,b,c,d\in\CC$ such that \begin{equation*} \label{eq:symmetric-quadrangle} \sigma_2 = \left[\begin{array}{cccc} & 1 & & \\ ab &&& \\ &&& a \\ && b & \\ \end{array}\right], \qquad \sigma_3 = \left[\begin{array}{cccc} & & & 1 \\ & & c & \\ & d & & \\ cd & & & \end{array}\right]. 
\end{equation*} By Proposition~\ref{prop:(6,q)-is-special}, $l_0$ is a special line. This implies that $c = -b$ and that the residual conic is tangent to $l_0$ at the point of intersection with $l_1$ (see Proposition~\ref{prop:(6,q)-is-special}). Imposing all these conditions and normalizing the remaining coefficients, we find that $X_{54}$ is defined by the following equation: \begin{multline*} 3 \, x_{0}^{3} x_{2} - 3 \, x_{0} x_{1} x_{2}^{2} - x_{0} x_{2}^{3} + 3 \, x_{1}^{3} x_{3} + 3 \, x_{0}^{2} x_{2} x_{3} + 3 \, x_{1}^{2} x_{2} x_{3} - 3 \, x_{0} x_{1} x_{3}^{2} - x_{1} x_{3}^{3} = 0. \end{multline*} If $\xi$ is a primitive 12th root of unity, the lines in the plane $x_0 = \xi^3x_1$ are all defined over $\QQ(\xi)$. One can check that the three lines other than $l_0$ in this plane have type $(2,8)$. \subsection{Configuration \texorpdfstring{$\bQ_{54}$}{Q54}} \label{subsec:Q54} Let $Q_{54}$ be a surface containing configuration~$\bQ_{54}$. All symmetries of $Q_{54}$ are symplectic of order~$2$. There is only one symmetry~$\sigma$ which fixes $4$ disjoint lines $l_0,\ldots,l_3$. Observe that the restriction of~$\sigma$ to each $l_i$ must have 2 fixed points: these are all the 8 fixed points of $\sigma$ on~$Q_{54}$. By Proposition~\ref{prop:fix-sympl-ord-2}, these $8$ points lie on two lines $l',l''$ in $\PP^3$. There are then $3$ more symmetries $\tau_1,\tau_2,\tau_3$, which fix two lines $m_1,m_2$ on $Q_{54}$ and act in this way on $l_0,\ldots,l_3$: \begin{itemize} \item $\tau_1(l_0) = l_1,\,\tau_1(l_2) = l_3$; \item $\tau_2(l_0) = l_2,\,\tau_2(l_1) = l_3$; \item $\tau_3(l_0) = l_3,\,\tau_3(l_1) = l_2$. \end{itemize} Since $\Sym(\bQ_{54})$ is a commutative group, each $\tau_i$ permutes $l'$ and $l''$. As $\tau_3 = \tau_1 \circ \tau_2$, at least one $\tau_i$ fixes $l'$ and $l''$; hence, by symmetry, each $\tau_i$ fixes $l'$ and $l''$. Let $m', m''$ be the lines fixed pointwise in $\PP^3$ by $\tau_1$. The lines $l',m',l'',m''$ form a quadrangle. Up to coordinate change, we can suppose that $l'\colon x_1 = x_3 = 0, l''\colon x_0 = x_2 = 0, m'\colon x_0 = x_1 = 0, m''\colon x_2 = x_3 = 0$. In these coordinates, $\sigma = \diag(1,-1,1,-1)$ and $\tau_1 = \diag(1,1,-1,-1)$. We can rescale the coordinates so that $m_1\colon x_0 - x_1 = x_2 - x_3 = 0$ and \[ \tau_2 = \left[\begin{array}{cccc} & & 1 & \\ & & & 1 \\ 1 & & & \\ & 1 & & \end{array}\right]. \] If $l_0$ is given by $x_0 - \mu x_2 = x_1 - \nu x_3 = 0$, with $\mu,\nu\in \CC$, then there exists $\lambda \in \CC$ so that $Q_{54}$ is given by an equation of the following form: \begin{multline} \label{eq:Q54} x_{0}^{4} + x_{2}^{4} + \lambda {\left(x_{1}^{4} + x_{3}^{4}\right)} - \frac{{\left(\mu^{4} + 1\right)}}{\mu^{2}} x_{0}^{2} x_{2}^{2} - \frac{\lambda {\left(\nu^{4} + 1\right)} }{\nu^{2}} x_{1}^{2} x_{3}^{2} \\ - {\left(\lambda + 1\right)}{\left(x_{0}^{2} x_{1}^{2} + x_{2}^{2} x_{3}^{2}\right)} + \frac{{\left(\mu^{2} \nu^{3} \lambda - \mu^{3} \nu^{2} - \mu \lambda + \nu\right)}}{{\left(\mu - \nu\right)} \mu \nu} {\left(x_{1}^{2} x_{2}^{2} + x_{0}^{2} x_{3}^{2}\right)} \\ - \frac{{\left(\mu^{2} \nu^{4} \lambda - \mu^{4} \nu^{2} - \mu^{2} \lambda + \nu^{2}\right)} {\left(\mu + \nu\right)}}{{\left(\mu - \nu\right)} \mu^{2} \nu^{2}} x_{0} x_{1} x_{2} x_{3} = 0. \end{multline} We will explain below how to determine $\lambda,\mu$ and $\nu$.
It turns out that \begin{align*} \lambda = & \frac{95}{432} \, \nu^{11} - \frac{49}{48} \, \nu^{10} + \frac{1019}{432} \, \nu^{9} - \frac{1447}{432} \, \nu^{8} - \frac{323}{216} \, \nu^{7} + \frac{2221}{216} \, \nu^{6} \\ & - \frac{1247}{216} \, \nu^{5} - \frac{485}{216} \, \nu^{4} + \frac{3997}{144} \, \nu^{3} - \frac{7345}{432} \, \nu^{2} + \frac{593}{144} \, \nu - \frac{23}{48}, \\ \mu = & \frac{1}{96} \, \nu^{11} - \frac{13}{288} \, \nu^{10} + \frac{19}{288} \, \nu^{9} + \frac{1}{288} \, \nu^{8} - \frac{55}{144} \, \nu^{7} + \frac{113}{144} \, \nu^{6} \\ & + \frac{41}{144} \, \nu^{5} - \frac{205}{144} \, \nu^{4} + \frac{379}{288} \, \nu^{3} + \frac{59}{288} \, \nu^{2} - \frac{359}{96} \, \nu + \frac{1}{32}, \end{align*} and the minimal polynomial of $\nu$ over $\QQ$ is \begin{multline*} \nu^{12} -6 \, \nu^{11} + 18 \, \nu^{10} - 34 \, \nu^{9} + 23 \, \nu^{8} + 44 \, \nu^{7} - 100 \, \nu^{6} \\ + 68 \, \nu^{5} + 127 \, \nu^{4} - 262 \, \nu^{3} + 242 \, \nu^{2} - 66 \, \nu + 9. \end{multline*} \begin{remark} As a matter of fact, the surface $Q_{54}$ is defined over $\QQ(\lambda)$, which is a non-Galois field extension of degree~$6$ of $\QQ$. All lines are defined over its Galois closure $\QQ(\nu) = \QQ(\lambda,i)$. \end{remark} \subsubsection{An explicit isomorphism between \texorpdfstring{$Q_{54}$}{Q54} and \texorpdfstring{$X_{52}''$}{X52ii}} Let $X_{52}''$ be the only surface up to projective equivalence containing configuration~$\bX_{52}''$. Since the transcendental lattices of $Q_{54}$ and $X_{52}''$ are in the same oriented isometry class, these surfaces are isomorphic to each other by a theorem of Shioda-Inose~\cite{shioda-inose77}. More precisely, they form an Oguiso pair (cf. Section~\ref{sec:oguiso-pairs}). In Section~\ref{subsec:X52ii} we explain how to find a defining equation of $X_{52}''$. Starting from this equation, we provide here a way to compute an explicit isomorphism between $Q_{54}$ and $X_{52}''$ following a method illustrated by Shimada and Shioda. We refer to their article~\cite{shimada-shioda} for further details on the algorithms used. Let $(S,h)$ be the configuration $\bX_{52}''$. Let $\mathcal L$ be the set of the $52$ lines in $S$, \[ \cL := \{ l \in S \mid l^2 = -2,\, l\cdot h = 1 \}. \] Compute the set of very ample polarizations of degree~$4$ which have intersection~$6$ with $h$, \[ \cH := \{v \in S \mid v^2 = 4, v\cdot h = 6, \,\text{$v$ very ample}\} \] (cf. also~\cite[Lemma 6.8]{degtyarev16}). The set $\cH$ has $153$ elements. Let $\cO$ be the set of vectors $v$ in $\cH$ such that \begin{enumerate} \item the configuration $(S,v)$ is isometric to $\bQ_{54}$; \item there are six pairwise distinct lines $l_0,\ldots,l_5\in \cL$ such that \[ v = 3\,h - l_0 - \ldots - l_5. \] \end{enumerate} There are $36$ vectors in $\cH$ which satisfy the first condition. (The other $117$ vectors define a configuration isometric to $\bX_{52}''$.) The set $\cO$ has $6$ elements. For any vector $v\in \cO$ and sextuple $l_0,\ldots,l_5 \in \cL$ as above, it turns out that, up to relabeling, $l_0$ is of type $(4,4)$, $l_1,l_2$ are of type $(4,3)$, $l_3$ is of type $(3,5)$ and $l_4,l_5$ are of type $(0,12)$. Fix a vector $v \in \cO$ and a sextuple $l_0,\ldots,l_5 \in \cL$ as above. Compute the explicit equations of the lines $l_i$ in the surface $X_{52}''$ (cf. Remark~\ref{rmk:X52ii-def-lines}). A defining equation of $Q_{54}$ can then be obtained in a similar way as in~\cite[Theorem 4.5]{shimada-shioda}. 
Let $\Gamma_d$ be the space of homogeneous polynomials of degree $d$ in the variables $x_0,x_1,x_2,x_3$. Let $\Lambda \subset \Gamma_3$ be the 4-dimensional subspace of cubic polynomials that vanish along the lines $l_0,\ldots,l_5$. Since we know the equations of these lines, we can compute explicitly a basis $\varphi_0,\ldots,\varphi_3$ of $\Lambda$. Let $\bar\Gamma \subset \Gamma_{12}$ be the $290$-dimensional subspace of polynomials of degree~12 whose degree with respect to $x_0$ is $\leq 3$. Let $\sigma\colon \Gamma_4 \rightarrow \Gamma_{12}$ be the homomorphism given by the substitution $x_i \mapsto \varphi_i$. Let $\rho\colon \Gamma_{12} \rightarrow \bar\Gamma$ be the homomorphism given by the remainder of the division, with respect to the variable $x_0$, by the defining polynomial \eqref{eq:X52ii} of $X_{52}''$. Then, the kernel of $\rho\circ\sigma$ has dimension~1 and is generated by a defining equation of~$Q_{54}$. It is then a matter of changing coordinates in order to find an equation as in~\eqref{eq:Q54}. \begin{remark} Let $\zeta$ be a primitive 3rd root of unity. Let $r$ be the algebraic number defined as in Remark~\ref{rmk:X52ii-def-lines}. The isomorphism that we found is defined over a degree~24 Galois extension of $\QQ$ generated by the element $x = ir$. Its minimal polynomial is \begin{multline*} x^{24} + 38 \, x^{22} + 1045 \, x^{20} + 16306 \, x^{18} + 180538 \, x^{16} - 258514 \, x^{14} + 166541 \, x^{12} \\ - 258514 \, x^{10} + 180538 \, x^{8} + 16306 \, x^{6} + 1045 \, x^{4} + 38 \, x^{2} + 1. \end{multline*} Note that $\QQ(x) = \QQ(\nu,\zeta) = \QQ(r,i)$. \end{remark} \section{Some configurations with at most 52 lines} \label{sec:<=52-lines} In this section we give explicit equations of the surfaces containing configurations $\bX_{52}''$, $\bX_{52}'''$, $\bX_{52}^\mathrm v$, $\bY_{52}'$, $\bQ_{52}'''$ and $\bX_{51}$. The same conventions as in §\ref{subsec:conventions} apply. \subsection{Configuration \texorpdfstring{$\bX_{52}''$}{X52ii}} \label{subsec:X52ii} Let $X_{52}''$ be a quartic surface containing configuration $\bX_{52}''$. Let $l_0$ be the only line of type~$(6,0)$ contained in $X_{52}''$. There is a symplectic automorphism~$\sigma$ of order~$3$ which preserves $l_0$ and six of its reducible fibers; since a non-trivial automorphism of the pencil of planes through $l_0$ fixes at most two of its members, $\sigma$ in fact preserves every plane containing $l_0$. By Proposition~\ref{prop:special-aut}, $l_0$ is a special line; hence, $X_{52}''$ is given by an equation as in \eqref{eq:familyZ} and $\sigma = \diag(1,1,\zeta,\zeta^2)$, with $\zeta$ a primitive 3rd root of unity. Let $m\colon x_2 = x_3 = 0$ be the line in $\PP^3$ fixed pointwise by $\sigma$. Let $\tau$ be a symplectic automorphism of $X_{52}''$ of order~$2$. Then $\tau$ fixes four lines $l_0,\ldots,l_3$ and $\sigma^2\tau = \tau\sigma$, which implies that $\tau(\Fix(\sigma)) = \Fix(\sigma)$. The line $l_0$ has two ramified fibers in $\Pi'\colon x_0 = 0$ and $\Pi''\colon x_1 = 0$, so the planes $\Pi',\Pi''$ are permuted by $\tau$. If $\Pi',\Pi''$ were fixed by $\tau$, then $\tau$ would commute with $\sigma$. It follows that $\tau(\Pi') = \Pi''$, so $\tau$ has the following form (after rescaling one variable): \begin{equation} \label{eq:X52ii-tau-a} \tau = \left[\begin{array}{cccc} & p & & \\ 1 & & & \\ & & & p \\ & & 1 & \\ \end{array}\right], \end{equation} for some $p \in \CC$. For $i=1,2,3$, we can suppose that $l_i$ is the intersection of $\Pi_i\colon e_i x_0 + c_i x_1 + d_i x_2 = 0$ and $\tau(\Pi_i)$. After rescaling, we can suppose that $c_1 = d_1 = e_1 = 1$; we let $c := c_2$, $d := d_2$, $e := e_2$.
Imposing that $l_1$ and $l_2$ are contained in $X_{52}''$, we can express all coefficients in terms of $p,c,d,e$; moreover, one of the following equations must hold: \begin{align} p &= \frac{{\left(c - d\right)}^{2}}{{\left(d - e\right)}^{2}}, \label{eq:X52ii-comp3} \\ p &= \frac{c^{2} + c d + d^{2}}{d^{2} + d e + e^{2}}, \label{eq:X52ii-comp2} \\ p &= -\frac{{\left(c + d\right)}^{2} {\left(c - d\right)}}{{\left(d^{2} - c e\right)} {\left(d + e\right)}}. \label{eq:X52ii-comp1} \end{align} Condition \eqref{eq:X52ii-comp3} implies that $l_1$ and $l_2$ intersect each other. Condition \eqref{eq:X52ii-comp2} implies that there exists a line intersecting $l_0$, $l_1$, $l_2$ and $l_3$. Both are contradictions, so condition~\eqref{eq:X52ii-comp1} must hold. We parametrize the pencil of planes containing $l_1$ by \[ t \mapsto \{x_{2} = -x_{0} - x_{1} - t {\left(x_{1} + x_{3} + x_{0}/p\right)}\}. \] The discriminant $\Delta$ of $l_1$ has the following form: \[ \Delta = P Q^2 R^2, \] where $P$, $Q$, $R$ are polynomials in $t$ of degree $12$, $4$, $2$, respectively. The lines $l_1$ and $l_2$ are of type $(4,4)$, so $R$ divides $P$ and the following condition holds: \begin{multline} \label{eq:X52ii-genus1} c^{3} d^{2} + c^{2} d^{3} + c d^{4} - c^{4} e + 2 \, c^{3} d e + 4 \, c^{2} d^{2} e + 2 \, c d^{3} e \\ - d^{4} e + c^{3} e^{2} + c^{2} d e^{2} + c d^{2} e^{2} = 0. \end{multline} Under condition \eqref{eq:X52ii-genus1}, the polynomial $Q$ splits into two degree~2 polynomials $Q = Q_1Q_2$, with \begin{multline*} Q_2 = \left( c^{2} d^{3} + c d^{4} + d^{5} - c^{3} d e + d^{4} e - c^{3} e^{2} - c^{2} d e^{2} - c d^{2} e^{2}\right) t^{2} \\ + \left(- c^{3} d^{2} + c^{2} d^{3} + 4 \, c d^{4} + 2 \, d^{5} + c^{4} e + 2 \, c^{3} d e + 2 \, c^{2} d^{2} e + c d^{3} e \right) t \\ - c^{5} - 2 \, c^{4} d - c^{3} d^{2} + c^{2} d^{3} + 2 \, c d^{4} + d^{5}; \end{multline*} moreover, $P = W Q_2 R$ for some polynomial $W$ of degree~8, which we can explicitly compute. The polynomial $W$ has two double roots which account for the remaining two $1$-fibers of $l_1$. After normalizing $d$ to $1$, we compute the discriminant of $W$, obtaining another condition on $c,e$. Together with \eqref{eq:X52ii-genus1}, we get that \[ e = -\frac{192}{247} \, c^{5} - \frac{60}{247} \, c^{4} - \frac{3624}{247} \, c^{3} - \frac{621}{19} \, c^{2} - \frac{5336}{247} \, c - \frac{708}{247}, \] and $c$ satisfies \[ c^6 + 19\,c^4 + 36\,c^3 + 19\,c^2 + 1 = 0. \] \begin{remark} The last computations can be simplified by noting that all factors $F$ of $\Delta$ satisfy a condition of symmetry due to $\tau$: \[ k^{n/2} F(t) = t^n F(k/t), \] where $k = (c+d)^2(d-c)/((d^2-ce)(d+e))$ and $n = \deg F$. \end{remark} \begin{remark} The field $\QQ(c)$ is a Galois extension of $\QQ$ of degree~6. The surface $X_{52}''$, though, can be defined over the smaller non-Galois extension $\QQ(p)$, where $p$ is the parameter appearing in \eqref{eq:X52ii-tau-a} and is equal to \[ p = \frac{3}{26} c^{5} + \frac{3}{13} c^{4} + \frac{12}{13} c^{3} + \frac{15}{13} c^{2} + \frac{15}{26} c + \frac{4}{13}. \] The minimal polynomial of $p$ over $\QQ$ is \[ p^{3} - 201 \, p^{2} + 111 \, p - 19.
\] The surface $X_{52}''$ is then defined by \begin{multline} \label{eq:X52ii} x_{0}^{4} + \left(-36 p^{2} + 6696 p + 2052\right) x_{0}^{3} x_{1} + 4968 p x_{0}^{2} x_{1}^{2} \\ + \left(-540 p^{2} + 6048 p - 684\right) x_{0} x_{1}^{3} + 3312 p^{2} x_{1}^{4} + \left(-19 p^{2} - 100 p + 209\right) x_{1} x_{2}^{3} \\ + \left(11 p^{2} - 5542 p + 1121\right) x_{0}^{2} x_{2} x_{3} + \left(-116 p^{2} + 1612 p - 380\right) x_{0} x_{1} x_{2} x_{3} \\ + \left(-3331 p^{2} - 100 p + 209\right) x_{1}^{2} x_{2} x_{3} + \left(-3919 p^{2} + 2318 p - 361\right) x_{0} x_{3}^{3} = 0. \end{multline} \end{remark} \begin{remark} \label{rmk:X52ii-def-lines} All lines on $X_{52}''$ are defined over $K = \QQ(c,\zeta)$, where $\zeta$ is a primitive 3rd root of unity. A primitive element of $K$ over $\QQ$ is $r = c\zeta$, whose minimal polynomial is \[ r^{12} - 19 \, r^{10} + 72 \, r^{9} + 342 \, r^{8} - 684 \, r^{7} + 937 \, r^{6} - 684 \, r^{5} + 342 \, r^{4} + 72 \, r^{3} - 19 \, r^{2} + 1. \] \end{remark} \begin{remark} The quintic curve in $\PP^2$ defined by condition~\eqref{eq:X52ii-genus1} has one singularity of type $\bD_4$ and two singularities of type $\bA_1$. Hence, it has geometric genus~$1$; in particular, it is not rational. \end{remark} \subsection{Configuration \texorpdfstring{$\bX_{52}'''$}{X52iii}} \label{subsec:X52iii} Let $X_{52}'''$ be a quartic surface containing configuration $\bX_{52}'''$. Then $X_{52}'''$ contains a pair of twin lines, say $l_0$ and $l_1$, of type $(0,10)$. There are four symplectic symmetries of order~$5$, which fix both $l_0$ and $l_1$, and no other lines. Choose one of them and call it $\tau$. By~§\ref{subsec:sympl-autom}, $\tau$ has exactly $4$ fixed points on $X_{52}'''$. Necessarily, two of them lie on $l_0$ and the other two on $l_1$. Up to coordinate change, we can suppose that $l_0$ and $l_1$ are given by $x_0 = x_1 = 0$ and $x_2 = x_3 = 0$, respectively, and that the fixed points of $\tau$ are the coordinate points. It follows that $\tau = \diag(1,\xi,\xi^i,\xi^j)$, where $\xi$ is a primitive 5th root of unity, $0\leq i,j\leq 4$. In fact, the first and second entries cannot be equal, because $\tau$ does not fix $l_1$ pointwise. Analogously for $l_0$, we have $i \neq j$; hence, up to exchanging $x_2$ with $x_3$, we can suppose $i < j$. The conditions $i = 0$ or $i = 1$ lead to contradictions: $X_{52}'''$ would contain more lines fixed by $\tau$ than just $l_0$ and $l_1$. As $X_{52}'''$ is smooth, we find $i = 2$ and $j = 4$. Imposing that $\tau$ is a symplectic symmetry and normalizing the remaining coefficients, $X_{52}'''$ turns out to be a Delsarte surface: \[ x_{0}^{3} x_{2} + x_{1} x_{2}^{3} + x_{1}^{3} x_{3} + x_{0} x_{3}^{3} = 0. \] All lines intersecting both $l_0$ and $l_1$ (for instance, the line given by $x_0 - x_1 = x_2 + x_3 = 0$) are of type $(4,6)$. \subsection{Configuration \texorpdfstring{$\bX_{52}^\mathrm v$}{X52v}} \label{subsec:X52v} Let $X_{52}^\mathrm v$ be a quartic surface whose Fano configuration is isometric to $\bX_{52}^\mathrm v$. Then $X_{52}^\mathrm v$ contains $12$ lines of type $(2,8)$, but only four of them, say $l_0,\ldots,l_3$, form a quadrangle in which the opposite lines form two pairs of twin lines. We choose coordinates so that $l_i$ is the line $x_i = x_{i+1} = 0$. There is a symplectic symmetry $\sigma$ of order~$4$ such that $\sigma(l_0) = l_2$, $\sigma(l_1) = l_3$ and $\sigma^2$ fixes $l_0,\ldots,l_3$.
Since $\sigma^2$ has exactly 8 fixed points, of which 4 are the vertices of the quadrangle, we have $\sigma^2 = \diag(1,-1,1,-1)$. It follows that $\sigma$ is given by \[ \left[\begin{array}{cccc} & & 1 & \\ & & & a \\ -ab & & & \\ & b & & \end{array}\right] \] for some $a,b \in \CC$. The residual conics in the coordinate planes are irreducible. Hence, there is a plane containing $l_0$ different from $x_0 = 0$ and $x_1 = 0$ where the residual cubic splits into three lines. We let $m_1$ be the line intersecting $l_2$, which must be of type $(5,3)$. By Corollary~\ref{cor:p-fiber-twin-line}, the point of intersection of the other two lines lies on $l_0$. We introduce two parameters $p,q \in \CC$, so that $m_1$ is given by $x_0 - px_1 = x_2 - qx_3 = 0$. After imposing all conditions, we normalize all remaining coefficients except $a$ by rescaling variables. We find that $X_{52}^\mathrm v$ is given by a polynomial of the form \begin{multline*} a^{4} x_{1} x_{3}^{3} - a^{3} x_{1}^{3} x_{3} - a x_{0}^{3} x_{2} - {\left(a^{3} - 2 \, a\right)} x_{0} x_{1}^{2} x_{2} + {\left(2 \, a^{3} - a\right)} x_{0}^{2} x_{1} x_{3} \\ - {\left(2 \, a^{2} - 1\right)} x_{1} x_{2}^{2} x_{3} - {\left(a^{4} - 2 \, a^{2}\right)} x_{0} x_{2} x_{3}^{2} - x_{0} x_{2}^{3} = 0. \end{multline*} In order to determine $a$, we inspect the discriminant $\Delta$ of the fibration induced by $m_1$. We parametrize the planes containing $m_1$ by \[ t \mapsto \{ x_0 = px_1 + t(x_2 - qx_3)\}. \] It turns out that the discriminant is of the form \[ \Delta = t^4 P Q^2 R^3, \] where $P$, $Q$ and $R$ are polynomials in $t$ of degree $4$, $4$ and $2$, respectively. Since $m_1$ has type $(5,3)$, $P$ and $Q$ must have a common root. We compute the determinant of the Sylvester matrix associated to $P$ and $Q$, finding that $a$ must be a root of one of the following polynomials: \begin{align*} & a^{4} - 3 \, a^{3} + 2 \, a^{2} + 3 \, a + 1, \\ & a^{4} + 3 \, a^{3} + 2 \, a^{2} - 3 \, a + 1, \text{ or} \\ & a^{12} - 4 \, a^{10} + 2 \, a^{8} + 5 \, a^{6} + 2 \, a^{4} - 4 \, a^{2} + 1. \end{align*} All solutions lead to projectively equivalent surfaces, as $\Cl(\bX_{52}^\mathrm v) = 1$, see §\ref{subsec:proj-equiv-classes}. \subsection{Configuration \texorpdfstring{$\bY_{52}'$}{Y52i}} \label{subsec:Y52i} Let $Y_{52}'$ be a quartic surface containing configuration $\bY_{52}'$. Then $Y_{52}'$ contains two intersecting lines $l_0$ and $l_1$ of type $(4,6)$. We let $l_0'$ and $l_1'$ be their respective twin lines. As these lines form a quadrangle, we can choose coordinates so that $l_0\colon x_0 = x_1 = 0$, $l_0'\colon x_2 = x_3 = 0$, $l_1\colon x_0 = x_3 = 0$, $l_1'\colon x_1 = x_2 = 0$. The residual conics in the coordinate planes are irreducible. Moreover, there exists a unique symplectic symmetry $\sigma$ of order $2$ which fixes these four lines. It follows that the two lines fixed in $\PP^3$ by $\sigma$ must be the only two lines not contained in $Y_{52}'$ joining two vertices of the quadrangle, namely $x_0 = x_2 = 0$ and $x_1 = x_3 = 0$, so $\sigma = \diag(1,-1,1,-1)$. Let $\tau$ be one of the two symplectic symmetries of order $4$. We have $\tau(l_0) = l_1$, $\tau(l_0') = l_1'$ and $\tau^2 = \sigma$. After rescaling variables, $\tau$ is given by \[ \tau = \left[\begin{array}{cccc} 1 & & & \\ & & & 1 \\ & & r & \\ & -1 & & \end{array}\right], \] with $r^2 = 1$. If $r = 1$, then the conic in $x_0 = 0$ is reducible, which is not the case, so $r = -1$.
Hence, $Y_{52}'$ is given by an equation of the form \[ a x_{0}^{3} x_{2} + b x_{0} x_{2}^{3} + c x_{0}^{2} x_{1} x_{3} + d x_{1} x_{2}^{2} x_{3} + x_{0} x_{1}^{2} x_{2} + x_{1}^{3} x_{3} + x_{0} x_{2} x_{3}^{2} + x_{1} x_{3}^{3} = 0, \] for some $a,b,c,d \in \CC$. This is a 3-dimensional family, as one more coefficient can be set to $1$ by rescaling the other coefficients and the variables. We introduce two new parameters $p,q \in \CC$ so that one of the lines $m$ of type $(3,5)$ intersecting both $l_0$ and $l_0'$ and contained in a $3$-fiber of $l_0$ is given by $m\colon x_0 - px_1 = x_2 - qx_3 = 0$. By Corollary~\ref{cor:p-fiber-twin-line}, the other two lines in the plane containing $l_0$ and $m$ intersect in a point lying on $l_0$. The line $\tau(m)$ intersects $m$, and their residual conic is also reducible. Imposing all these conditions and normalizing $q$, we can express all coefficients in terms of $p$. We find that \begin{align*} a & = -\frac{{\left(2 \, p + 1\right)} {\left(p + 1\right)}^{2}}{2 \, {\left(p^{2} - 2 \, p - 1\right)} p^{3}}, \\ b & = -\frac{p^{2} + 2 \, p + 1}{4 \, p}, \\ c & = \frac{{\left(7 \, p + 3\right)} {\left(p + 1\right)}}{2 \, {\left(p^{2} - 2 \, p - 1\right)} p^{2}}, \\ d & = \frac{1}{4} \, {\left(p + 1\right)} {\left(p - 3\right)}. \end{align*} Finally, we require that $l_0$ has type $(4,6)$ by looking at the discriminant $\Delta$ of its induced fibration. We parametrize the planes containing $l_0$ by $t \mapsto \{x_0 = tx_1\}$. The discriminant has the following form: \[ \Delta = t^2(t-p)^3(t+p)^3 P^2 Q^2 R^2 S, \] where $P,Q,R,S$ are polynomials in $t$ of degree $2$, and $P(-t) = Q(t)$. Therefore, $S$ must have a common root with either $P$, $Q$ or $R$. We find a finite list of values for $p$. Looking also at the discriminant of $m$, it turns out that $p$ is a root of \[ p^{3} - \frac{11}{9} \, p^{2} - \frac{7}{3} \, p - 1. \] Since $\Cl(\bY_{52}') = 3$, each root corresponds to a different projective equivalence class. In particular, the real root corresponds to the surface with transcendental lattice $[2,0,38]$. \subsection{Configuration \texorpdfstring{$\bQ_{52}'''$}{Q52iii}} \label{subsec:Q52iii} Let $Q_{52}'''$ be a quartic surface containing configuration $\bQ_{52}'''$. Then $Q_{52}'''$ contains exactly 4 lines $l_0,\ldots,l_3$ of type $(5,0)$. Since these lines intersect each other pairwise, they are coplanar. Moreover, there exists a symplectic symmetry $\sigma$ of order~$2$ which fixes each~$l_i$. It follows that $l_0,\ldots,l_3$ form a star, otherwise $\sigma$ would fix three points on at least one $l_i$, so it would fix the whole line pointwise, but this cannot happen, as $\sigma$ is symplectic. By Lemma~\ref{lem:autP3-ord2}, $\sigma$ fixes two lines $m',\,m''$ in $\PP^3$ pointwise. If $P$ is the center of the star and $Q_i$ is the other point on $l_i$ fixed by $\sigma$ for $i=0,\ldots,3$, then necessarily all $Q_i$ lie on one of the two lines, say $m'$, and the other line $m''$ contains $P$. All symmetries of $Q_{52}'''$ fix $P$. There is a non-symplectic symmetry $\tau$ of order 3 which fixes each $l_i$ and commutes with $\sigma$; hence, $\tau$ fixes each $Q_i$. This means that, as an automorphism of $\PP^3$, $\tau$ fixes $m'$ pointwise. Its invariant lattice has rank 4, so by results of Artebani, Sarti and Taki \cite{ArtebaniSartiTaki}, $\Fix(\tau,X)$ consists of one smooth curve $C$ and exactly one point.
By Lemma~\ref{lem:autPn-ord3}, the curve $C$ must be the intersection of $Q_{52}'''$ with a plane $\Pi$ in $\Fix(\tau,\PP^3)$; moreover, $\Pi$ necessarily contains $m'$, but not $P$, since $C$ is smooth. Let $R$ be the point of intersection of $\Pi$ and $m''$. If $R\in X$, then $\tau$, being of order 3, would fix at least three points on $m''$, so it would fix the whole line $m''$ pointwise. This is impossible since $\Fix(\tau,\PP^3) = \Pi \cup \{P\}$, so $R\notin X$ and $m''\cap X$ consists of four distinct points. There exists also a symplectic symmetry $\varphi$ such that $\varphi^2 = \sigma$. Since $\varphi$ commutes with $\sigma$ and $\tau$, $\varphi(R) = R$, so $\varphi$ fixes $m''$ pointwise. Up to relabeling, $\varphi(l_0) = l_3$ and $\varphi(l_1) = l_2$. We choose coordinates in such a way that $P = (0, 0, 1, 0)$, $Q_0 = (0, 0, 0, 1)$, $Q_3 = (0, 1, 0, 0)$, and $R = (1, 0, 0, 0)$. After rescaling, the automorphisms $\tau$ and $\varphi$ are given by \[ \tau = \left[\begin{array}{cccc} 1& & & \\ & 1 & & \\ & & \zeta & \\ & & & 1 \end{array}\right] \quad \text{and} \quad \varphi = \left[\begin{array}{cccc} 1& & & \\ & & & -1 \\ & & 1 & \\ & 1 & & \end{array}\right], \] where $\zeta$ is a primitive 3rd root of unity. If $\omega$ is a non-zero 2-form on $Q_{52}'''$, then either $\tau^*\omega = \zeta\omega$ or $\tau^*\omega = \zeta^2\omega$, but the second condition leads to $P$ being a singular point. It follows that $Q_{52}'''$ is given by an equation of the form \[ c x_{0}^{2} x_{1}^{2} + c x_{0}^{2} x_{3}^{2} + d x_{1}^{2} x_{3}^{2} + x_{0}^{4} + x_{0} x_{2}^{3} - x_{1}^{3} x_{3} + x_{1} x_{3}^{3} = 0, \] for some $c,d\in \CC$. Parametrizing the planes containing $l_0$ by $t \mapsto \{x_0 = tx_1\}$, the discriminant $\Delta$ of the fibration induced by $l_0$ has the following form: \[ \Delta = t^4 P(t^2)^2, \] where $P$ is a polynomial of degree $5$. Looking at the discriminant of $P$ and excluding the conditions leading to surface singularities, we find that \[ 27 \, c^{4} d^{2} - 54 \, c^{2} d^{3} + 100 \, c^{4} + 27 \, d^{4} - 198 \, c^{2} d + 162 \, d^{2} + 243 = 0. \] This polynomial splits over $\QQ(\zeta)$. We see that there exists $e \in \CC$ such that \[ c = \frac{e^2-2\,\zeta-1}{3\,e}, \quad \text{and} \quad d = \frac 19 (e^2-20 \, \zeta - 10). \] Moreover, \[ \Delta = t^4 (t^2 - e/3)^4 Q(t^2)^2, \] where $Q$ is a polynomial of degree 3. Looking now at the discriminant of $Q$, it turns out that if $e$ is a root of \[ e^{4} - 20 \, {\left(2 \, \zeta + 1\right)} e^2 + \frac{15}{4}, \] then $l_0$ is of type $(5,0)$, so the Fano configuration of $Q_{52}'''$ is indeed $\bQ_{52}'''$. As $\Cl(\bQ_{52}''') = 1$, all other quartic surfaces with this Fano configuration are projectively equivalent to the one we found. \subsection{Configuration \texorpdfstring{$\bX_{51}$}{X51}} \label{subsec:X51} Let $X_{51}$ be a surface containing configuration~$\bX_{51}$. Then $X_{51}$ contains a line $l_0$ of type $(6,2)$. By Proposition~\ref{prop:(6,q)-is-special}, $X_{51}$ is given by an equation as in~\eqref{eq:familyZ}. In particular, the two lines $l_1$ and $l_2$ in the $1$-fibers are given by $x_0 = x_2 = 0$ and $x_1 = x_3 = 0$. Note that the residual conics in the $1$-fibers intersect $l_1$ and $l_2$ at the coordinate points and are tangent to~$l_0$. There are three symplectic symmetries of order $2$ which exchange $l_1$ and $l_2$.
Choose one of them and call it $\sigma$; necessarily, $\sigma$ has the following form: \begin{equation*} \sigma = \left[\begin{array}{cccc} & 1 & & \\ ab &&& \\ &&& a \\ && b & \\ \end{array}\right]. \end{equation*} Moreover, $\sigma$ fixes one line $m$ intersecting both $l_1$ and $l_2$. Up to rescaling variables, $m$ is given by $x_3 - x_1 = x_2 - ax_0 = 0$. By further inspection of $\bX_{51}$, we see that the residual conic in the plane containing $m$ and $l_1$ is reducible. After normalizing all coefficients except $a$, we are left with the following $1$-dimensional family: \begin{multline*} 3 \, a^{3} x_{0}^{2} x_{1}^{2} - 3 \, a^{3} x_{0}^{2} x_{2} x_{3} - 3 \, a^{2} x_{0} x_{1} x_{2} x_{3} - 3 \, a^{2} x_{1}^{2} x_{2} x_{3} \\ - {\left(a^{4} - a^{3}\right)} x_{0}^{3} x_{1} - {\left(a^{3} - a^{2}\right)} x_{0} x_{1}^{3} + {\left(4 \, a - 1\right)} x_{1} x_{2}^{3} + {\left(4 \, a^{3} - a^{2}\right)} x_{0} x_{3}^{3} = 0. \end{multline*} In order to find the last condition for $a$, we investigate the discriminant~$\Delta$ of the fibration induced by $l_1$. We parametrize the planes containing $l_1$ by $t\mapsto \{x_0 = tx_2\}$, so that $\Delta$ has the following form: \[ \Delta = t^3 (a^3t^3 -1)^3 P, \] where $P$ is a polynomial in $s = t^3$ of degree~$4$. As the line $l_1$ is of type $(3,4)$, the discriminant of $P$ must vanish. Knowing also that $m$ is of type $(2,7)$, it follows that $a$ satisfies the following equation: \begin{equation*} a^{3} - \frac{11}{3} \, a^{2} + \frac{10}{9} \, a - \frac{1}{9} = 0. \end{equation*} Since $\Cl(\bX_{51}) = 3$, each root corresponds to a different projective equivalence class. In particular, the real root corresponds to the surface with transcendental lattice $[6,3,16]$. \subsection{Configuration \texorpdfstring{$\bZ_{52}$}{Z52}} \label{subsec:Z52} The general member of the following rational family contains $52$ lines forming configuration~$\bZ_{52}$ and has Picard number 19: \begin{multline*} t^{2} x_{1} x_{2} {\left(t x_{0} + t x_{3} - 2 \, x_{1} + 2 \, x_{2}\right)} {\left(t x_{0} - t x_{3} - 2 \, x_{1} - 2 \, x_{2}\right)} \\ = -4 \, x_{0} x_{3} {\left(t x_{0} + t x_{3} - 6 \, x_{1} + 6 \, x_{2}\right)} {\left(t x_{0} - t x_{3} - 6 \, x_{1} - 6 \, x_{2}\right)}. \end{multline*} This family was found by taking advantage of the fact that a surface containing $\bZ_{52}$ has four special lines of type~$(6,0)$ and six pairs of twin lines of type~$(2,8)$. Generically, the lines $x_0 = x_1 = 0$ and $x_2 = x_3 = 0$ are twin lines, while the lines $x_0 = x_2 = 0$ and $x_1 = x_3 = 0$ are special lines. All surfaces of the family have the symmetry \[ \left[\begin{array}{cccc} & & &1 \\ & & -1 & \\ & 1 && \\ -1 & & & \\ \end{array}\right]. \] We obtain models containing configurations $\bX_{64}$ and $\bX'_{60}$ when the minimal polynomial of $t$ is $t^{4} + 144$ or $t^{4} - 12 \, t^{2} + 144$, respectively. \section{An explicit Oguiso pair} \label{sec:oguiso-pairs} In this section we answer a question posed by Oguiso~\cite{oguiso17}. \subsection{Oguiso pairs} \label{subsec:oguiso-pairs} Let $\pi_1,\pi_2\colon \PP^3\times \PP^3\rightarrow \PP^3$ be the first and second projections. A pair $(X_1,X_2)$ of smooth quartic surfaces (not necessarily distinct) is an \emph{Oguiso pair} if there exists a smooth complete intersection $S$ of four hypersurfaces $Q_0,\ldots,Q_3$ of bi-degree $(1,1)$ in $\PP^3\times \PP^3$ such that the restriction to $S$ of $\pi_i$ is an isomorphism onto $X_i$, for $i=1,2$.
In particular, $X_1$ and $X_2$ are isomorphic as abstract K3 surfaces, and the isomorphism is given by \[ (\pi_2|_S) \circ (\pi_1|_S)^{-1} \colon X_1 \xrightarrow{\sim} X_2. \] Conversely, let $X$ be a K3 surface and suppose $h_1, h_2$ are very ample divisors which induce embeddings of $X$ into $\PP^3$ whose images are $X_1$ and $X_2$, respectively. \begin{theorem}[Oguiso \cite{oguiso17}] The smooth quartic surfaces $(X_1,X_2)$ form an Oguiso pair if and only if $h_1\cdot h_2 = 6$. \end{theorem} Under this assumption, both $X_1$ and $X_2$ are determinantal quartic surfaces (also called Cayley quartic surfaces). A determinantal description can be given in the following way. Let $x_0,\ldots,x_3$ and $y_0,\ldots,y_3$ be the coordinates in the first and second factor of $\PP^3\times\PP^3$, respectively. Write \[ Q_{k} = \sum_{i,j=0}^3 a_{ijk} x_i y_j. \] Consider the matrices $M_1,M_2$ whose $(i,j)$-components are \begin{align*} (M_1)_{ij} & = a_{0ij} x_0 + a_{1ij} x_1 + a_{2ij} x_2 + a_{3ij}x_3, \\ (M_2)_{ij} & = a_{i0j} y_0 + a_{i1j} y_1 + a_{i2j} y_2 + a_{i3j}y_3. \end{align*} Then the equations $\det M_1 = 0$ and $\det M_2 = 0$ define $X_1$ and $X_2$, respectively. \subsection{Models of the Fermat quartic surface} Let $X_{48}$ be the Fermat quartic surface (which is the only surface up to projective equivalence containing configuration $\bX_{48}$) and let $X_{56}$ be one of the two surfaces containing configuration $\bX_{56}$ (which are complex conjugate to each other). Shioda first noticed that $X_{48}$ and $X_{56}$ are isomorphic to each other as abstract K3 surfaces. An explicit equation of $X_{56}$ and an explicit isomorphism between $X_{48}$ and $X_{56}$ were found by Shimada and Shioda~\cite{shimada-shioda}. The two surfaces are not projectively equivalent to each other, but they form an Oguiso pair. According to Degtyarev~\cite{degtyarev16}, there are no other smooth quartic models of the Fermat quartic and---curiously enough---$(X_{56},X_{56})$ is also an Oguiso pair, but $(X_{48},X_{48})$ is not (cf. \cite[§6.5]{degtyarev16}). \subsection{A determinantal presentation} Oguiso asked in his paper~\cite{oguiso17} for explicit equations defining the complete intersection $S$ in $\PP^3\times\PP^3$ projecting onto $X_{48}$ and $X_{56}$. Here we provide such equations. In what follows we let $\zeta$ be a primitive 8th root of unity. The explicit isomorphism $f\colon X_{48} \xrightarrow{\sim} X_{56}$ can be found in \cite[Table 4.1]{shimada-shioda}. The surface $S$ is the graph of $f$. Generically, a hypersurface of bi-degree $(1,1)$ in $\PP^3\times \PP^3$ is defined by 16 coefficients. Choosing $12$ closed points $x_{1},\ldots,x_{12}$ on $X_{48}$ in a suitable way, one can find $12$ linearly independent conditions on these coefficients by imposing that the hypersurface contains $(x_i,f(x_i))$ for each $i=1,\ldots,12$. The points chosen are \begin{align*} (0,0,1,\zeta),\,(0,0,1,\zeta^5),\,(0,1,0,\zeta^5),\,(0,1,0,\zeta^7),\,(1,0,0,\zeta),\,(1,0,0,\zeta^5),\\ (1,0,0,\zeta^7),\,(1,0,\zeta^5,0),\,(1,\zeta,0,0),\,(0,1,\zeta,0),\,(0,1,\zeta^3,0),\,(\zeta,\zeta^2,\zeta,1), \end{align*} but of course this choice is quite arbitrary. One ends up with a vector space of dimension $4$ of polynomials of bi-degree~$(1,1)$. The following four polynomials $Q_0,\ldots,Q_3$ form a basis of that vector space.
\begin{align*} Q_0 ={}& \zeta^{3} x_{0} y_{2} + \zeta x_{3} y_{0} + x_{1} y_{0} - x_{2} y_{2}, \\ Q_1 ={}& \zeta^{3} x_{3} y_{1} - \zeta^{3} x_{0} y_{3} + \zeta^{2} x_{2} y_{3} + x_{1} y_{1}, \\ \begin{split} Q_2 ={}& -{\left(\zeta^{2} - \zeta - 1\right)} x_{2} y_{0} - {\left(\zeta^{2} + 2 \, \zeta - 1\right)} x_{0} y_{1} - {\left(\zeta^{2} - 2\right)} x_{1} y_{1} \\ & + {\left(\zeta^{2} - \zeta - 1\right)} x_{2} y_{1} + {\left(\zeta^{2} - 1\right)} x_{0} y_{2} - {\left(\zeta^{3} + \zeta - 1\right)} x_{1} y_{2} \\ & - {\left(\zeta^{3} + \zeta + 2\right)} x_{3} y_{2} - {\left(\zeta^{3} + \zeta\right)} x_{0} y_{3} - {\left(\zeta^{3} + \zeta - 1\right)} x_{1} y_{3} \\ & + {\left(\zeta^{2} - \zeta + 1\right)} x_{2} y_{3} + \zeta x_{2} y_{2} + x_{1} y_{0}, \end{split} \\ \begin{split} Q_3 ={}& 6 \,\zeta^{3} x_{3} y_{3} - {\left(4 \, \zeta^{3} + 3 \, \zeta^{2} - 2 \, \zeta + 1\right)} x_{1} y_{0} - 3 \, {\left(\zeta^{3} + \zeta\right)} x_{2} y_{0} \\ & + {\left(2 \, \zeta^{3} + \zeta^{2} + 1\right)} x_{1} y_{1} - 3 \, {\left(\zeta^{3} + \zeta\right)} x_{2} y_{1} + 2 \, {\left(\zeta^{2} + \zeta - 1\right)} x_{0} y_{2} \\ & + 3 \, {\left(\zeta^{2} + 1\right)} x_{1} y_{2} + {\left(3 \, \zeta^{3} + 2 \, \zeta^{2} - \zeta - 2\right)} x_{2} y_{2} - 2 \, {\left(\zeta^{3} + \zeta + 1\right)} x_{0} y_{3} \\ & - 3 \, {\left(\zeta^{2} + 1\right)} x_{1} y_{3} + {\left(\zeta^{3} + \zeta - 2\right)} x_{2} y_{3} + 6\, x_{0} y_{0} .\end{split} \\ \end{align*} Indeed, the four hypersurfaces given by $Q_i = 0$ define a smooth complete intersection $S$ in $\PP^3\times \PP^3$ such that the restriction of $\pi_1$ and $\pi_2$ to $S$ is an isomorphism onto $X_{48}$ and $X_{56}$, respectively. This can be checked explicitly by computing the determinantal presentation explained in §\ref{subsec:oguiso-pairs}. The equation of $X_{56}$ that one obtains is the one provided in \cite[Theorem 1.3]{shimada-shioda}. \end{document}
\begin{document} \renewcommand{\refname}{References} \mainmatter \begin{center} \Large \textbf{Parametrizing elliptic curves by modular units} \normalsize \end{center} \emph{Abstract.} It is well-known that every elliptic curve over the rationals admits a parametrization by means of modular functions. In this short note, we show that only finitely many elliptic curves over $\mathbf{Q}$ can be parametrized by modular units. This answers a question raised by Zudilin in a recent work on Mahler measures. Further, we give the list of all elliptic curves $E$ of conductor up to $1000$ parametrized by modular units supported in the rational torsion subgroup of $E$. Finally, we raise several open questions. Since the work of Boyd \cite{boyd:expmath}, Deninger \cite{deninger:mahler} and others, it is known that there is a close relationship between Mahler measures of polynomials and special values of $L$-functions. Although this relationship is still largely open, some strategies have been identified in several instances. Specifically, let $P \in \mathbf{Q}[x,y]$ be a polynomial whose zero locus defines an elliptic curve $E$. If the polynomial $P$ is tempered, then the Mahler measure of $P$ can be expressed in terms of a regulator integral \begin{equation}\label{reg integral} \int_\gamma \log |x| d\arg(y)-\log |y| d\arg(x) \end{equation} where $\gamma$ is a (not necessarily closed) path on $E$ (see \cite{deninger:mahler,zudilin}). If the curve $E$ happens to have a parametrization by \emph{modular units} $x(\tau)$, $y(\tau)$, then we may change to the variable $\tau$ in (\ref{reg integral}) and try to compute the regulator integral using \cite[Thm 1]{zudilin}. In favourable cases, this leads to an identity between the Mahler measure of $P$ and $L(E,2)$: see for example \cite[\S 3]{zudilin} and the references therein. The following natural question, raised by Zudilin, thus arises: \begin{center} \emph{Which elliptic curves can be parametrized by modular units?} \end{center} We show in Section \ref{finiteness} that only finitely many elliptic curves over $\mathbf{Q}$ can be parametrized by modular units. The proof uses Watkins' lower bound on the modular degree of elliptic curves. Further, we give in Section \ref{preimages} the list of all elliptic curves $E$ of conductor up to $1000$ parametrized by modular units supported in the rational torsion subgroup of $E$. It turns out that there are 30 such elliptic curves. Finally, we raise in Section \ref{questions} several open questions. \section{A finiteness result}\label{finiteness} \begin{definition}\label{def param modunits} Let $E/\mathbf{Q}$ be an elliptic curve of conductor $N$. We say that $E$ can be \emph{parametrized by modular units} if there exist two modular units $u,v \in \mathcal{O}(Y_1(N))^\times$ such that the function field $\mathbf{Q}(E)$ is isomorphic to $\mathbf{Q}(u,v)$. \end{definition} \begin{thm}\label{thm finiteness} There are only finitely many elliptic curves over $\mathbf{Q}$ which can be parametrized by modular units. \end{thm} Let $E/\mathbf{Q}$ be an elliptic curve of conductor $N$. Assume that $E$ can be parametrized by two modular units $u,v$ on $Y_1(N)$. Then there is a finite morphism $\varphi : X_1(N) \to E$ and two rational functions $f,g \in \mathbf{Q}(E)^\times$ such that $\varphi^*(f)=u$ and $\varphi^*(g)=v$. Let $E_1$ be the $X_1(N)$-optimal elliptic curve in the isogeny class of $E$, and let $\varphi_1 : X_1(N) \to E_1$ be an optimal parametrization.
By \cite[Prop 1.4]{stevens}, there exists an isogeny $\lambda : E_1 \to E$ such that $\varphi = \lambda \circ \varphi_1$. Consider the functions $f_1=\lambda^*(f)$ and $g_1=\lambda^*(g)$. Note that $u=\varphi_1^*(f_1)$ and $v=\varphi_1^*(g_1)$. Theorem \ref{thm finiteness} is now a consequence of the following result. \begin{thm}\label{thm 2} If $N$ is sufficiently large, then $\varphi_1^*(\mathbf{Q}(E_1)) \cap \mathcal{O}(Y_1(N))=\mathbf{Q}$. \end{thm} \begin{proof}[Proof] Let $C_1(N)$ be the set of cusps of $X_1(N)$. Let $f \in \mathbf{Q}(E_1) \backslash \mathbf{Q}$ be such that $\varphi_1^*(f) \in \mathcal{O}(Y_1(N))$. Let $P$ be a pole of $f$. Then $\varphi_1^{-1}(P)$ must be contained in $C_1(N)$, and we have \begin{equation*} \deg \varphi_1 = \sum_{Q \in \varphi_1^{-1}(P)} e_{\varphi_1}(Q) \leq \sum_{Q \in C_1(N)} e_{\varphi_1}(Q). \end{equation*} Let $g_N$ be the genus of $X_1(N)$. By the Riemann-Hurwitz formula for $\varphi_1$, we have \begin{equation*} 2g_N-2 = \sum_{Q \in X_1(N)} (e_{\varphi_1}(Q)-1). \end{equation*} It follows that \begin{align*} \deg \varphi_1 & \leq \#C_1(N) + \sum_{Q \in C_1(N)} (e_{\varphi_1}(Q)-1)\\ & \leq \#C_1(N) + 2g_N-2. \end{align*} By the classical genus formula \cite[Prop 1.40]{shimura:book}, and since $X_1(N)$ has no elliptic points for $N \geq 4$, we have \begin{equation*} \#C_1(N)+2g_N-2 = \frac{1}{12} [\SL_2(\mathbf{Z}):\Gamma_1(N)] = \frac{\phi(N) \nu(N)}{12} \qquad (N \geq 4) \end{equation*} where $\phi(N)$ denotes Euler's function, and $\nu(N)$ is defined by \begin{equation*} \nu(N) = N \prod_{i=1}^k (1+\frac{1}{p_i}) \qquad \textrm{if } N = \prod_{i=1}^k p_i^{\alpha_i}. \end{equation*} We thus get \begin{equation}\label{majoration deg phi1} \deg \varphi_1 \leq \frac{\phi(N) \nu(N)}{12}. \end{equation} We are now going to show that (\ref{majoration deg phi1}) contradicts lower bounds of Watkins on the modular degree if $N$ is sufficiently large. Let $E_0$ be the strong Weil curve in the isogeny class of $E$. We have a commutative diagram \begin{equation}\label{E1E0} \begin{tikzcd} X_1(N) \arrow{d}{\varphi_1} \arrow{r}{\pi} & X_0(N) \arrow{d}{\varphi_0}\\ E_1 \arrow{r}{\lambda_0} & E_0. \end{tikzcd} \end{equation} We deduce that \begin{equation*} \deg \varphi_1 = \frac{\deg \pi \cdot \deg \varphi_0}{\deg \lambda_0}. \end{equation*} We have $\deg \pi = \frac{\phi(N)}{2}$. For every $\alpha \in (\mathbf{Z}/N\mathbf{Z})^\times/\pm 1$, there exists a unique point $A(\alpha) \in E_1(\mathbf{Q})_{\mathrm{tors}}$ such that $\varphi_1 \circ \langle \alpha \rangle = t_{A(\alpha)} \circ \varphi_1$, where $t_{A(\alpha)}$ denotes translation by $A(\alpha)$. The map $\alpha \mapsto A(\alpha)$ is a morphism of groups and its image is $\ker(\lambda_0)$. It follows that $\deg(\lambda_0) \leq \# E_1(\mathbf{Q})_{\mathrm{tors}} \leq 16$. By \cite{watkins}, we have $\deg \varphi_0 \gg N^{7/6-\varepsilon}$ for any $\varepsilon>0$. It follows that $\deg \varphi_1 \gg \phi(N) N^{7/6-\varepsilon}$. Since $\nu(N) \ll N^{1+\varepsilon}$ for any $\varepsilon>0$, this contradicts (\ref{majoration deg phi1}) for $N$ sufficiently large. \end{proof} It would be interesting to determine the complete list of elliptic curves over $\mathbf{Q}$ parametrized by modular units. Unfortunately, the bound provided by Watkins' result, though effective, is too large to permit an exhaustive search. \section{Preimages of torsion points under modular parametrizations}\label{preimages} In order to find elliptic curves parametrized by modular units, we consider the following related problem. 
Let $E$ be an elliptic curve over $\mathbf{Q}$ of conductor $N$, and let $\varphi : X_1(N) \to E$ be a modular parametrization sending the $0$-cusp to $0$. By the Manin-Drinfeld theorem, the image by $\varphi$ of a cusp of $X_1(N)$ is a torsion point of $E$. Conversely, given a point $P \in E_{\mathrm{tors}}$, when does the preimage of $P$ under $\varphi$ consist only of cusps? The link between this question and parametrizations by modular units is given by the following easy lemma. \begin{lem}\label{lem preimage} Suppose that there exists a subset $S$ of $E(\mathbf{Q})_{\mathrm{tors}}$ satisfying the following two conditions: \begin{enumerate} \item We have $\varphi^{-1}(S) \subset C_1(N)$. \item There exist two functions $f,g$ on $E$ supported in $S$ such that $\mathbf{Q}(E)=\mathbf{Q}(f,g)$. \end{enumerate} Then $E$ can be parametrized by modular units. \end{lem} \begin{proof}[Proof] By condition (1), the functions $u=\varphi^*(f)$ and $v=\varphi^*(g)$ are modular units of level $N$, and by condition (2), we have $\mathbf{Q}(E) \cong \mathbf{Q}(u,v)$. \end{proof} We are therefore led to search for elliptic curves $E/\mathbf{Q}$ admitting sufficiently many torsion points $P$ such that $\varphi^{-1}(P) \subset C_1(N)$. We first give an equivalent form of condition (2) in Lemma \ref{lem preimage}. \begin{pro}\label{pro FS} Let $S$ be a subset of $E(\mathbf{Q})_{\textrm{tors}}$. Let $\mathcal{F}_S$ be the set of nonzero functions $f$ on $E$ which are supported in $S$. The following conditions are equivalent: \begin{enumerate} \item[(a)] \label{pro FS a} There exist two functions $f,g \in \mathcal{F}_S$ such that $\mathbf{Q}(E)=\mathbf{Q}(f,g)$. \item[(b)] \label{pro FS b} The field $\mathbf{Q}(E)$ is generated by $\mathcal{F}_S$. \item[(c)] \label{pro FS c} We have $\# S \geq 3$, and there exist two points $P,Q \in S$ such that $P-Q$ has order $\geq 3$. \end{enumerate} \end{pro} In order to prove Proposition \ref{pro FS}, we show the following lemma. \begin{lem}\label{lem FS} Let $P \in E(\mathbf{Q})_{\textrm{tors}}$ be a point of order $n \geq 2$. Let $f_P$ be a function on $E$ such that $\dv(f_P)=n(P)-n(0)$. Then the extension $\mathbf{Q}(E)/\mathbf{Q}(f_P)$ has no intermediate subfields. Moreover, if $P,P' \in E(\mathbf{Q})_{\textrm{tors}}$ are points of order $n \geq 4$ such that $\mathbf{Q}(f_P)=\mathbf{Q}(f_{P'})$, then $P=P'$. \end{lem} \begin{proof}[Proof] Let $K$ be a field such that $\mathbf{Q}(f_P) \subset K \subset \mathbf{Q}(E)$. If $K$ has genus $1$, then $K$ is the function field of an elliptic curve $E'/\mathbf{Q}$ and $f_P$ factors through an isogeny $\lambda : E \to E'$. Then $\dv(f_P)$ must be invariant under translation by $\ker(\lambda)$. This obviously implies $\ker(\lambda)=0$, hence $K=\mathbf{Q}(E)$. If $K$ has genus $0$, then we have $K=\mathbf{Q}(h)$ for some function $h$ on $E$, and we may factor $f_P$ as $g \circ h$ with $g : \mathbf{P}^1 \to \mathbf{P}^1$. We may assume $h(P)=0$ and $h(0)=\infty$. Then $g^{-1}(0) = \{0\}$ and $g^{-1}(\infty)=\{\infty\}$, which implies $g(t)=at^m$ for some $a \in \mathbf{Q}^\times$ and $m \geq 1$. Thus $\dv(f)=m\dv(h)$. Since $\dv(h)$ must be a principal divisor, it follows that $m=1$ and $K=\mathbf{Q}(f_P)$. Let $P, P' \in E(\mathbf{Q})$ be points of order $n \geq 4$ such that $\mathbf{Q}(f_P)=\mathbf{Q}(f_{P'})$ and $P \neq P'$. Then $f_{P'} = (af_P+b)/(cf_P+d)$ for some $\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \GL_2(\mathbf{Q})$. 
Considering the divisors of $f_P$ and $f_{P'}$, we must have $f_{P'}=af_P+b$ for some $a,b \in \mathbf{Q}^\times$. Then the ramification indices of $f_P : E \to \mathbf{P}^1$ at $P$, $P'$, $0$ are equal to $n$, which contradicts the Riemann-Hurwitz formula for $f_P$. \end{proof} \begin{proof}[Proof of Proposition \ref{pro FS}] It is clear that $(\mathrm{a})$ implies $(\mathrm{b})$. Let us show that $(\mathrm{b})$ implies $(\mathrm{c})$. If $\# S \leq 2$, then $\mathcal{F}_S/\mathbf{Q}^\times$ has rank at most $1$ and cannot generate $\mathbf{Q}(E)$. Assume that for all points $P,Q \in S$, we have $P-Q \in E[2]$. Translating $S$ if necessary, we may assume $0 \in S$. It follows that $S \subset E[2]$ and $\mathcal{F}_S \subset \mathbf{Q}(x) \subsetneq \mathbf{Q}(E)$. Finally, let us assume $(\mathrm{c})$. Translating $S$ if necessary, we may assume $0 \in S$. Let us first assume that $S$ contains a point $P$ of order $2$. Then $\mathbf{Q}(f_P) = \mathbf{Q}(x)$ has index $2$ in $\mathbf{Q}(E)$ and is the fixed field with respect to the involution $\sigma : p \mapsto -p$ on $E$. By assumption, there exist two points $Q, R \in S$ such that $Q-R$ has order $n \geq 3$. Let $g$ be a function on $E$ such that $\dv(g)=n(Q)-n(R)$. Then it is easy to see that $\dv(g)$ is not invariant under $\sigma$. It follows that $g \not\in \mathbf{Q}(f_P)$ and $\mathbf{Q}(f_P,g)=\mathbf{Q}(E)$. Let us now assume that $S \cap E[2]=\{0\}$. By assumption, $S$ contains two distinct points $P,Q$ having order $\geq 3$. If $P$ or $Q$ has order $\geq 4$, then Lemma \ref{lem FS} implies that $\mathbf{Q}(f_P,f_Q)=\mathbf{Q}(E)$. If $P$ and $Q$ have order $3$, then we must have $Q=-P$ because $\mathbf{Q}(E[3])$ contains $\mathbf{Q}(\zeta_3)$. It follows that the function $g$ on $E$ defined by $\dv(g)=(P)+(-P)-2(0)$ has degree $2$, so we have $g \not\in \mathbf{Q}(f_P)$ and $\mathbf{Q}(f_P,g)=\mathbf{Q}(E)$. \end{proof} Let $E/\mathbf{Q}$ be an elliptic curve of conductor $N$. Fix a Néron differential $\omega_E$ on $E$, and let $f_E$ be the newform of weight $2$ and level $N$ associated to $E$. We define $\omega_{f_E}=2\pi i f_E(z)dz$. Let $\varphi_E : X_1(N) \to E$ be a modular parametrization of minimal degree. We have $\varphi_E^* \omega_E = c_E \omega_{f_E}$ for some integer $c_E \in \mathbf{Z}-\{0\}$ \cite[Thm 1.6]{stevens}, and we normalize $\varphi_E$ so that $c_E>0$. Conjecturally, we have $c_E=1$ \cite[Conj. I]{stevens}. We now describe an algorithm to compute the set $S_E$ of points $P \in E(\mathbf{Q})_{\mathrm{tors}}$ such that $\varphi_E^{-1}(P) \subset C_1(N)$. Let $P \in E(\mathbf{Q})_{\mathrm{tors}}$. We define an integer $e_P$ by \begin{equation*} e_P = \sum_{\substack{x \in C_1(N) \\ \varphi_E(x)=P}} e_{\varphi_E}(x). \end{equation*} It is clear that $\varphi_E^{-1}(P) \subset C_1(N)$ if and only if $e_P=\deg \varphi_E$. Let $d$ be a divisor of $N$, and let $C_d$ be the set of cusps of $X_1(N)$ of denominator $d$ (that is, the set of cusps $\frac{a}{b}$ satisfying $(b,N)=d$). Every cusp $x \in C_d$ can be written (not uniquely) as $x=\langle \alpha \rangle \sigma(\frac1d)$ with $\alpha \in (\mathbf{Z}/N\mathbf{Z})^\times/\pm 1$ and $\sigma \in \Gal(\mathbf{Q}(\zeta_d)/\mathbf{Q})$. Since $e_{\varphi_E}(x)=e_{\varphi_1}(x)=e_{\varphi_1}(1/d)$, we get \begin{equation*} e_P = \sum_{d |N} e_{\varphi_1}(1/d) \cdot \# \{x \in C_d : \varphi_E(x)=P\}.
\end{equation*} Recall that for each $\alpha \in (\mathbf{Z}/N\mathbf{Z})^\times$, there exists a unique point $A(\alpha) \in E(\mathbf{Q})_{\mathrm{tors}}$ such that $\varphi_E \circ \langle \alpha \rangle = t_{A(\alpha)} \circ \varphi_E$, where $t_{A(\alpha)}$ denotes translation by $A(\alpha)$. We let $A_E \subset E(\mathbf{Q})_{\mathrm{tors}}$ be the image of the map $\alpha \mapsto A(\alpha)$. Note that the set $\{x \in C_d : \varphi_E(x)=P\}$ is empty unless $\varphi_E(1/d) \in P+A_E$, in which case we have $\varphi_E(C_d) =P+A_E$ and the number of cusps $x \in C_d$ such that $\varphi_E(x)=P$ is given by $\# C_d / \# A_E$. Thus we get \begin{equation*} e_P = \frac{1}{\# A_E} \sum_{\substack{d |N \\ \varphi_E(1/d) \in P+A_E}} e_{\varphi_1}(1/d) \cdot \# C_d. \end{equation*} Furthermore, let $\pi : X_1(N) \to X_0(N)$ and $\varphi_0 : X_0(N) \to E_0$ be the maps as in (\ref{E1E0}). The ramification index of $\pi$ at $\frac{1}{d}$ is equal to $(d,N/d)$. Thus $e_{\varphi_1}(1/d)=(d,N/d) \cdot e_{\varphi_0}(1/d)$. The quantity $e_{\varphi_0}(1/d)$ is equal to the order of vanishing of $\omega_{f_E}$ at the cusp $1/d$, and may be computed numerically (see \cite[\S 7]{brunault:ephi}). Moreover, the number of cusps of $X_0(N)$ of denominator $d$ is given by $\phi((d,N/d))$. It follows that $\# C_d = \phi((d,N/d)) \cdot \phi(N)/(2 (d,N/d))$ and we get \begin{equation}\label{formula eP} e_P = \frac{\phi(N)}{2\# A_E} \sum_{\substack{d |N \\ \varphi_E(1/d) \in P+A_E}} e_{\varphi_0}(1/d) \cdot \phi((d,N/d)). \end{equation} Finally, using notations from Section \ref{finiteness}, the modular degree of $E$ may be computed as \begin{equation}\label{formula degphiE} \deg \varphi_E = \frac{\phi(N)}{2} \cdot \frac{\operatorname{covol}(\Lambda_{E_0})}{\operatorname{covol}(\Lambda_E)} \cdot \deg \varphi_0 \end{equation} where $\Lambda_{E_0}$ and $\Lambda_E$ denote the Néron lattices of $E_0$ and $E$. We read off the modular degree $\deg \varphi_0$ from Cremona's tables \cite[Table 5]{cremona:tables}. Formulas (\ref{formula eP}) and (\ref{formula degphiE}) lead to the following algorithm. \begin{enumerate} \item Compute generators $\alpha_1,\ldots,\alpha_r$ of $(\mathbf{Z}/N\mathbf{Z})^\times$. \item For each $j$, compute numerically $\int_{z_0}^{\langle \alpha_j \rangle z_0} \omega_{f_E}$ for $z_0=(-\alpha_j+i)/N$. \item Deduce $A_j = A(\alpha_j) \in E(\mathbf{Q})_{\mathrm{tors}}$. \item Compute the subgroup $A_E$ generated by $A_1,\ldots,A_r$. \item Compute the list $(P_1,\ldots,P_n)$ of all rational torsion points on $E$. \item Initialize a list $(e_{P_1},\ldots,e_{P_n})=(0,\ldots,0)$. \item For each $d$ dividing $N$, do the following: \begin{enumerate} \item Compute numerically $z_d = \int_0^{1/d} \omega_{f_E}$. \item Check whether the point $Q_d = \varphi_E(1/d)$ is rational or not. \item If $Q_d$ is rational, then do the following: \begin{enumerate} \item Compute numerically $e_{\varphi_0}(1/d)$. \item For each $B \in A_E$, do $e_{Q_d+B} \leftarrow e_{Q_d+B} + e_{\varphi_0}(1/d) \phi((d,N/d))$. \end{enumerate} \end{enumerate} \item Output $S_E = \{P \in E(\mathbf{Q})_{\mathrm{tors}} : e_P = \# A_E \cdot \frac{\operatorname{covol}(\Lambda_{E_0})}{\operatorname{covol}(\Lambda_E)} \cdot \deg \varphi_0\}$. \end{enumerate} The following table gives all elliptic curves $E$ of conductor $\leq 1000$ such that $S_E$ satisfies condition $(\mathrm{c})$ of Proposition \ref{pro FS}. Computations were done using Pari/GP \cite{pari273} and the Modular Symbols package of Magma \cite{magma}. \begin{table}[h!] 
\begin{tabular}{c|c|c||}
$E$ & $E(\mathbf{Q})_{\mathrm{tors}}$ & $S_E$ \\ \hline
$11a3$ & $\mathbf{Z}/5\mathbf{Z}$ & $E(\mathbf{Q})_{\mathrm{tors}}$ \\
$14a1$ & $\mathbf{Z}/6\mathbf{Z}$ & $\{0, (9, 23), (1, -1), (2, -5)\}$ \\
$14a4$ & $\mathbf{Z}/6\mathbf{Z}$ & $E(\mathbf{Q})_{\mathrm{tors}}$ \\
$14a6$ & $\mathbf{Z}/6\mathbf{Z}$ & $\{0, (2, -2), (2, -1)\}$ \\
$15a1$ & $\mathbf{Z}/4\mathbf{Z} \times \mathbf{Z}/2\mathbf{Z}$ & $\{0, (-2, 3), (-1, 0), (8, 18)\}$ \\
$15a3$ & $\mathbf{Z}/4\mathbf{Z} \times \mathbf{Z}/2\mathbf{Z}$ & $\{0, (0, 1), (1, -1), (0, -2)\}$ \\
$15a8$ & $\mathbf{Z}/4\mathbf{Z}$ & $E(\mathbf{Q})_{\mathrm{tors}}$ \\
$17a4$ & $\mathbf{Z}/4\mathbf{Z}$ & $E(\mathbf{Q})_{\mathrm{tors}}$ \\
$19a3$ & $\mathbf{Z}/3\mathbf{Z}$ & $E(\mathbf{Q})_{\mathrm{tors}}$ \\
$20a1$ & $\mathbf{Z}/6\mathbf{Z}$ & $E(\mathbf{Q})_{\mathrm{tors}}$ \\
$20a2$ & $\mathbf{Z}/6\mathbf{Z}$ & $E(\mathbf{Q})_{\mathrm{tors}}$ \\
$21a1$ & $\mathbf{Z}/4\mathbf{Z} \times \mathbf{Z}/2\mathbf{Z}$ & $\{0, (-1, -1), (-2, 1), (5, 8)\}$ \\
$24a1$ & $\mathbf{Z}/4\mathbf{Z} \times \mathbf{Z}/2\mathbf{Z}$ & $E(\mathbf{Q})_{\mathrm{tors}}$ \\
$24a3$ & $\mathbf{Z}/4\mathbf{Z}$ & $E(\mathbf{Q})_{\mathrm{tors}}$ \\
$24a4$ & $\mathbf{Z}/4\mathbf{Z}$ & $E(\mathbf{Q})_{\mathrm{tors}}$ \\
\end{tabular}
\begin{tabular}{c|c|c}
$E$ & $E(\mathbf{Q})_{\mathrm{tors}}$ & $S_E$ \\ \hline
$26a3$ & $\mathbf{Z}/3\mathbf{Z}$ & $E(\mathbf{Q})_{\mathrm{tors}}$ \\
$27a3$ & $\mathbf{Z}/3\mathbf{Z}$ & $E(\mathbf{Q})_{\mathrm{tors}}$ \\
$27a4$ & $\mathbf{Z}/3\mathbf{Z}$ & $E(\mathbf{Q})_{\mathrm{tors}}$ \\
$30a1$ & $\mathbf{Z}/6\mathbf{Z}$ & $\{0, (3, 4), (-1, 0), (0, -2)\}$ \\
$32a1$ & $\mathbf{Z}/4\mathbf{Z}$ & $E(\mathbf{Q})_{\mathrm{tors}}$ \\
$32a4$ & $\mathbf{Z}/4\mathbf{Z}$ & $E(\mathbf{Q})_{\mathrm{tors}}$ \\
$35a3$ & $\mathbf{Z}/3\mathbf{Z}$ & $E(\mathbf{Q})_{\mathrm{tors}}$ \\
$36a1$ & $\mathbf{Z}/6\mathbf{Z}$ & $E(\mathbf{Q})_{\mathrm{tors}}$ \\
$36a2$ & $\mathbf{Z}/6\mathbf{Z}$ & $E(\mathbf{Q})_{\mathrm{tors}}$ \\
$40a3$ & $\mathbf{Z}/4\mathbf{Z}$ & $E(\mathbf{Q})_{\mathrm{tors}}$ \\
$44a1$ & $\mathbf{Z}/3\mathbf{Z}$ & $E(\mathbf{Q})_{\mathrm{tors}}$ \\
$54a3$ & $\mathbf{Z}/3\mathbf{Z}$ & $E(\mathbf{Q})_{\mathrm{tors}}$ \\
$56a1$ & $\mathbf{Z}/4\mathbf{Z}$ & $E(\mathbf{Q})_{\mathrm{tors}}$ \\
$92a1$ & $\mathbf{Z}/3\mathbf{Z}$ & $E(\mathbf{Q})_{\mathrm{tors}}$ \\
$108a1$ & $\mathbf{Z}/3\mathbf{Z}$ & $E(\mathbf{Q})_{\mathrm{tors}}$
\end{tabular}
\caption{Some elliptic curves parametrized by modular units} \end{table} \begin{remarks} \begin{enumerate} \item In order to compute the points $A_j$ in step (3) and $Q_d$ in step (7b), we implicitly make use of Stevens' conjecture that $c_E=1$. This conjecture is known for all elliptic curves of conductor $\leq 200$ \cite{stevens}. \item Of course, steps (2), (7a) and (7ci) are done only once for each isogeny class. \item If $x$ is a cusp of $X_1(N)$, then the order of $\varphi_E(x)$ is bounded by the exponent of the cuspidal subgroup of $J_1(N)$. Hence we may ascertain whether $\varphi_E(x)$ is rational by a finite computation. \item We compute $e_{\varphi_0}(\frac1d)$ by a numerical method. It would be better to use an exact method. \end{enumerate} \end{remarks} \section{Further questions}\label{questions} Note that in Lemma \ref{lem preimage}, we considered functions on $E$ which are supported in $E(\mathbf{Q})_{\mathrm{tors}}$.
In general, the image under $\varphi_E$ of a cusp of $X_1(N)$ is only rational over $\mathbf{Q}(\zeta_N)$, and we may use functions on $E$ supported in these non-rational points. In fact, let $S'_E$ denote the set of points $P \in E(\mathbf{Q}(\zeta_N))_{\mathrm{tors}}$ such that $\varphi_E^{-1}(P) \subset C_1(N)$. The set $S'_E$ is stable under the action of $\Gal(\mathbf{Q}(\zeta_N)/\mathbf{Q})$. Then $E$ can be parametrized by modular units if \emph{and only if} there exist two functions $f,g \in \mathbf{Q}(E)^\times$ supported in $S'_E$ such that $\mathbf{Q}(E)=\mathbf{Q}(f,g)$. As the next example shows, this yields new elliptic curves parametrized by modular units. \begin{example}\label{ex49} Consider the elliptic curve $E=X_0(49)=49a1 : y^2+xy=x^3-x^2-2x-1$. The group $E(\mathbf{Q})_{\mathrm{tors}}$ has order $2$ and is generated by the point $Q=(2,-1)$, which is none other than the cusp $\infty$ (recall that the cusp $0$ is the origin of $E$). The set $S'_E$ consists of all cusps of $X_0(49)$. Let $P$ be the cusp $\frac17$. It is defined over $\mathbf{Q}(\zeta_7)$ and its Galois conjugates are given by $\{P^\sigma\}_\sigma = \{P,3P+Q,-5P,-P+Q,-3P,5P+Q\}$. There exists a function $v \in \mathbf{Q}(E)$ of degree $7$ such that $\dv(v) = \sum (P^\sigma) + (Q) - 7(0)$. Since $x-2$ and $v$ have coprime degrees, the curve $E$ can be parametrized by the modular units $u=x-2$ and $v$. \end{example} \begin{example}\label{ex64} Consider the elliptic curve $E=64a1 : y^2=x^3-4x$. Its rational torsion subgroup is given by $E(\mathbf{Q})_{\mathrm{tors}} \cong \mathbf{Z}/2\mathbf{Z} \times \mathbf{Z}/2\mathbf{Z}$. There is a degree 2 morphism $\varphi_0 : X_0(64) \to E$, and we have $S_E = E(\mathbf{Q})_{\mathrm{tors}}$. However, the image of the cusp $\frac18$ is given by $P=\varphi_0(\frac18) = (2i,-2\sqrt{2}+2i\sqrt{2})$. This point is defined over $\mathbf{Q}(\zeta_8)$ and we have $S'_E = S_E \cup \{P^\sigma\}_\sigma$. We can check that $\mathcal{F}_{S'_E}/\mathbf{Q}^\times$ is generated by $x$, $x \pm 2$ and $x^2+4$, hence it cannot generate $\mathbf{Q}(E)$. However, if we base change to the field $\mathbf{Q}(\sqrt{2})$, then we find that the function $v=y-\sqrt{2} x+2\sqrt{2}$ is supported in $S'_E$ and has degree $3$. Hence $E/\mathbf{Q}(\sqrt{2})$ can be parametrized by the modular units $u=x$ and $v$. \end{example} Example \ref{ex64} suggests the following question: which elliptic curves $E/\mathbf{Q}$ of conductor $N$ can be parametrized by modular units \emph{defined over $\mathbf{Q}(\zeta_N)$}? Note that much of the argument in Section \ref{finiteness} is purely geometrical; however, we are crucially using the fact that the modular parametrization is defined over $\mathbf{Q}$. Finally, here are several questions to which I don't know the answer. \begin{question} Let $E/\mathbf{Q}$ be an elliptic curve of conductor $N$. Assume $E$ can be parametrized by modular units of some level $N'$ (not necessarily equal to $N$). Then we have a non-constant morphism $X_1(N') \to E$ and $N$ must divide $N'$. Does it necessarily follow that $E$ admits a parametrization by modular units of level $N$? In other words, does it make a difference if we allow modular units of arbitrary level in Definition \ref{def param modunits}? Similarly, does it make a difference if we replace $Y_1(N)$ by $Y(N)$ or $Y(N')$ in Definition \ref{def param modunits}?
\end{question} \begin{question} Does it make a difference if we allow the function field of $E$ to be generated by more than two modular units in Definition \ref{def param modunits}? \end{question} \begin{question} What about elliptic curves over $\mathbf{C}$? It is not hard to show that if $E/\mathbf{C}$ can be parametrized by modular functions, then $E$ must be defined over $\overline{\mathbf{Q}}$. In fact, by the proof of Serre's conjecture due to Khare and Wintenberger, it is known that the elliptic curves over $\overline{\mathbf{Q}}$ which can be parametrized by modular functions are precisely the $\mathbf{Q}$-curves \cite{ribet}. Which $\mathbf{Q}$-curves can be parametrized by modular units? \end{question} \begin{question} It is conjectured in \cite{bggp} that only finitely many smooth projective curves over $\mathbf{Q}$ of given genus $g \geq 2$ can be parametrized by modular functions. Is it possible to prove, at least, that only finitely many smooth projective curves over $\mathbf{Q}$ of given genus $g \geq 2$ can be parametrized by modular units? \end{question} \begin{question} According to \cite{bggp}, there are exactly 213 curves of genus 2 over $\mathbf{Q}$ which are new and modular, and they can be explicitly listed. Which of them can be parametrized by modular units? \end{question} \begin{question} Let $u$ and $v$ be two multiplicatively independent modular units on $Y_1(N)$. Assume that $u$ and $v$ do not come from modular units of lower level. Can we find a lower bound for the genus of the function field generated by $u$ and $v$? \end{question} \end{document}
\begin{document} \title{Koszulness and supersolvability for Dirichlet arrangements} \author{Bob Lutz} \address{Department of Mathematics, University of Michigan, Ann Arbor, MI, USA} \email{[email protected]} \thanks{Work of the author was partially supported by NSF grants DMS-1401224 and DMS-1701576.} \date{\today} \subjclass[2010]{52C35 (Primary) 05B35, 16S37 (Secondary)} \begin{abstract} We prove that the cone over a Dirichlet arrangement is supersolvable if and only if its Orlik-Solomon algebra is Koszul. This was previously shown for four other classes of arrangements. We exhibit an infinite family of cones over Dirichlet arrangements that are combinatorially distinct from these other four classes. \end{abstract} \maketitle \section{Introduction} A \emph{Koszul algebra} is a graded algebra that is ``as close to semisimple as it can possibly be'' \cite[p. 480]{beilinson1996}. Koszul algebras play an important role in the topology of complex hyperplane arrangements. For example, if $\AA$ is such an arrangement and $U$ its complement, then the Orlik-Solomon algebra $\os(\AA)$ is Koszul if and only if $U$ is a rational $K(\pi,1)$-space. Also if $\os(\AA)$ is Koszul and $G_1\triangleright G_2 \triangleright\cdots$ denotes the lower central series of the fundamental group $\pi_1(U)$, defined by $G_1=\pi_1(U)$ and $G_{n+1} = [G_n,G_1]$, then the celebrated \emph{Lower Central Series Formula} holds: \begin{equation} \prod_{k=1}^\infty (1-t^k)^{\phi_k} = P(U,-t), \label{eq:lcsformula} \end{equation} where $P(U,t)$ is the Poincar\'{e} polynomial of $U$ and $\phi_k = \rk (G_k/G_{k+1})$. It is natural to seek a combinatorial characterization of the arrangements $\AA$ for which $\os(\AA)$ is Koszul. Shelton and Yuzvinsky \cite[Theorem 4.6]{shelton1997} showed that if $\AA$ is supersolvable, then $\os(\AA)$ is Koszul. Whether the converse holds is unknown. \begin{q} If the Orlik-Solomon algebra of a central hyperplane arrangement $\AA$ is Koszul, then is $\AA$ supersolvable? \label{q:koszul} \end{q} We answer this question affirmatively for cones (or centralizations) over \emph{Dirichlet arrangements}, a generalization of graphic arrangements arising from electrical networks and order polytopes of finite posets \cite{lutz2017,lutz2018mat}. \begin{thm} The cone over a Dirichlet arrangement is supersolvable if and only if its Orlik-Solomon algebra is Koszul. \label{thm:intro} \end{thm} Question \ref{q:koszul} has been answered affirmatively for other classes of arrangements, including graphic arrangements \cite{hultman2016,jambu1998, schenck2002, vanle2013}. Our next theorem shows that Theorem \ref{thm:intro} properly extends all previous results. We say that two central arrangements are \emph{combinatorially equivalent} if the underlying matroids are isomorphic. \begin{thm} There are infinitely many cones over Dirichlet arrangements that are not combinatorially equivalent to any arrangement for which Question \ref{q:koszul} has been previously answered. \label{thm:intro2} \end{thm} Dirichlet arrangements have also been called \emph{$\psi$-graphical arrangements} \cite{mu2015, stanley2015, suyama2018}. It was conjectured in \cite{mu2015} and proven in \cite{suyama2018} that the cone over a Dirichlet arrangement is supersolvable if and only if it is free (see also \cite{lutz2017}). \section{Background} \label{sec:bg} \subsection{Dirichlet arrangements and supersolvability} Let $\g=(V,E)$ be a finite connected undirected graph with no loops or multiple edges. 
Let $\B\subseteq V$ be a set of $\geq 2$ vertices inducing an edgeless subgraph. We refer to the elements of $\B$ as \emph{boundary nodes}. Let $\be\subseteq E$ be the set of edges meeting $\B$. Let $\KK$ be a field of characteristic 0, and let $\u:\B\to \KK$ be injective. \begin{mydef} The \emph{Dirichlet arrangement} $\overline{\AA}(\g,\u)$ is the arrangement in $\KK^{V\setminus \B}$ of hyperplanes given by \begin{equation} \{x_i = x_j:ij\in E\setminus \be\}\cup\{ x_i = \u(j):ij\in \be\mbox{ with }j\in \B\}. \label{eq:intarr} \end{equation} \label{mydef:arr} \end{mydef} \begin{eg}[Wheatstone bridge] Consider the graph $\g$ on the left side of Figure \ref{fig:dir} with $V=\{i_1,i_2,j_1,j_2\}$, where the boundary nodes $j_1$ and $j_2$ are marked by white circles. Set $\KK=\R$, and let $ u(j_1)=1$ and $ u(j_2)=-1$. The Dirichlet arrangement $\overline{\AA}(\g,u)$ consists of the 5 hyperplanes $x_{i_1}=x_{i_2}$, $x_{i_1} = \pm1$ and $x_{i_2}=\pm 1$. This arrangement is illustrated on the right side of Figure \ref{fig:dir}. \begin{figure} \caption{A graph with boundary nodes marked in white, left, and a corresponding Dirichlet arrangement, right.} \label{fig:dir} \end{figure} \label{eg:dir} \end{eg} The arrangement $\overline{\AA}(\g,\u)$ is not central, i.e., the intersection of its elements is empty. We prefer to work with a centralized version of $\overline{\AA}(\g,\u)$ with essentially the same combinatorics. If $\AA$ is an arrangement in $\KK^n$ defined by equations $f_i(x)=\alpha_i$ for homogeneous functions $f_i$ and scalars $\alpha_i$, then the \emph{cone} over $\AA$ is the arrangement in $\KK^{n+1}$ defined by $f_i(x) = \alpha_ix_0$ for all $i$ and $x_0 = 0$, where $x_0$ is a new variable. The cone over any arrangement is central. \begin{mydef} Let $\AA(\g,\u)$ denote the cone over the Dirichlet arrangement $\overline{\AA}(\g,\u)$. \end{mydef} Recall that the \emph{intersection lattice} of a central arrangement $\AA$ is the geometric lattice $L(\AA)$ of intersections of elements of $\AA$, ordered by reverse inclusion and graded by codimension. \begin{mydef} A central arrangement $\AA$ is \emph{supersolvable} if the intersection lattice $L(\AA)$ admits a maximal chain of elements $X$ satisfying \[\rk(X)+\rk(Y) = \rk(X\wedge Y)+\rk(X\vee Y)\] for every $Y\in L(\AA)$. \end{mydef} The graph $\g$ is \emph{chordal} if for any cycle $Z$ of length $\geq 4$ there is an edge of $\g\setminus Z$ with both endpoints in $Z$. Stanley \cite[Proposition 2.8]{stanley1972} proved that the graphic arrangement $\AA(\g)$ is supersolvable if and only if $\g$ is chordal. We have the following generalization for Dirichlet arrangements. \begin{prop}[{\cite[Theorem 1.2]{lutz2017}}] Let $\mg$ be the graph obtained from $\g$ by adding edges between every pair of boundary nodes. The arrangement $\AA(\g,\u)$ is supersolvable if and only if $\mg$ is chordal. \label{prop:2conn} \end{prop} \subsection{Orlik-Solomon algebras} Given an ordered central arrangement $\AA$ over $\KK$, let $V$ be the $\KK$-vector space with basis $\{e_a : a\in \AA\}$. Let $\ea=\ea(V)$ be the exterior algebra of $V$. Write $xy=x\wedge y$ in $\ea$. The algebra $\ea$ is graded by taking $\ea^0 = \KK$ and $\ea^p$ to be spanned by all elements of the form $e_{a_1}\cdots e_{a_p}$. Let $\dif:\ea\to\ea$ be the linear map defined by $\dif 1 = 0$, $\dif e_a = 1$ for all $a\in \AA$, and \[\dif(xy) = \dif(x)y+(-1)^p x\dif(y)\] for all $x\in \ea^p$ and $y\in \ea$. A set $X\subseteq \AA$ is \emph{dependent} if the normal vectors of the hyperplanes in $X$ are linearly dependent.
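As a concrete illustration (added here for convenience; it is not part of the original argument), the following Python sketch applies this normal-vector criterion to the cone over the Wheatstone-bridge arrangement of Example \ref{eg:dir}. The six hyperplanes of the cone are encoded by normal vectors in the coordinates $(x_0,x_{i_1},x_{i_2})$; this particular encoding is only an assumption made for the example, and the search simply prints the dependent triples.
\begin{verbatim}
# Minimal sketch: dependent triples of hyperplanes in the cone over the
# Dirichlet arrangement of Example eg:dir (Wheatstone bridge).
# Coordinates are (x0, x_i1, x_i2); e.g. the coned hyperplane x_i1 = x0
# is recorded by the normal vector of x_i1 - x0 = 0.
from itertools import combinations
import numpy as np

normals = {
    "x_i1 = x_i2": (0, 1, -1),
    "x_i1 = x0":   (-1, 1, 0),
    "x_i1 = -x0":  (1, 1, 0),
    "x_i2 = x0":   (-1, 0, 1),
    "x_i2 = -x0":  (1, 0, 1),
    "x0 = 0":      (1, 0, 0),
}

# A set of hyperplanes is dependent exactly when its normal vectors are
# linearly dependent, i.e. when their rank is smaller than the set size.
for triple in combinations(sorted(normals), 3):
    vecs = np.array([normals[h] for h in triple])
    if np.linalg.matrix_rank(vecs) < 3:
        print("dependent:", triple)
\end{verbatim}
Every dependent triple produced this way is in fact a circuit in the sense defined next, since no two of the normals above are parallel.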
A \emph{circuit} is a minimal dependent set. If $X=\{a_1,\ldots,a_p\}\subseteq \AA$, assuming the $a_i$ are in increasing order, write $e_X = e_{a_1}\cdots e_{a_p}$ in $\ea$. \begin{mydef} The \emph{Orlik-Solomon algebra} $\os(\AA)$ of a central arrangement $\AA$ is the quotient of $\ea$ by the \emph{Orlik-Solomon ideal} \begin{equation} I=\langle \dif(e_C) : C\subseteq \AA \mbox{ is a circuit}\rangle. \end{equation} That is, $\os(\AA) = \ea/I$. \end{mydef} \subsection{Koszul algebras} We include the following definition of a Koszul algebra for completeness. A more thorough definition and further discussion can be found in \cite{peeva2010} and \cite{froberg1999}, respectively. \begin{mydef} A graded $\KK$-algebra $A$ is \emph{Koszul} if the minimal free graded resolution of $\KK$ over $A$ is linear. \end{mydef} Quadraticity is a key property of Koszul algebras. A \emph{minimal generator} of the Orlik-Solomon algebra $I$ is an element of the form $\dif(e_C)$, where $C$ is a circuit and \[\dif(e_C)\notin \langle \dif(e_X) : X\subseteq \AA\mbox{ is a circuit with }|X|<|C|\rangle.\] If the minimal generators of $I$ are of degree 2, then $\os(\AA)$ is called \emph{quadratic}. \begin{prop}[{\cite[Definition-Theorem 1]{froberg1999}}] If $\os(\AA)$ is Koszul, then $\os(\AA)$ is quadratic. \label{prop:kosquad} \end{prop} \section{Proof of Theorem \ref{thm:intro}} \label{sec:pf} We prove the following theorem, which implies Theorem \ref{thm:intro}. \begin{thm} Let $\mg$ be the graph obtained from $\g$ by adding an edge between each pair of boundary nodes. The following are equivalent: \begin{enumerate}[(i)] \item $\mg$ is chordal \item $\AA(\g,\u)$ is supersolvable \item $\os(\AA(\g,\u))$ is Koszul \item $\os(\AA(\g,\u))$ is quadratic. \end{enumerate} \label{thm:koszul} \end{thm} We write $x$ instead of $\{x\}$ for all single-element sets. Let $\eh$ be an element not in $E$, and let $\eo=E\cup \eh$, so that $\AA(\g,\u)$ is indexed by $\eo$. Fix an ordering of $\eo$ with $\eh$ minimal. We say that $C\subseteq\eo$ is a \emph{circuit} if the corresponding subset of $\AA$ is a circuit. \begin{mydef} A set $X\subseteq E$ is a \emph{crossing} if it is a minimal path between 2 distinct boundary nodes. \end{mydef} \begin{prop}[{\cite[Proposition 4.10]{lutz2018mat}}] A set $C\subseteq \eo$ is a circuit if and only if one of the following holds: \begin{enumerate}[(A)] \item $C = X\cup \eh$ for some crossing $X$ \item $C\subseteq E$ is a cycle of $\g$ meeting at most 1 boundary node \item $C\subseteq E$ is a minimal acyclic set containing 2 distinct crossings. \end{enumerate} \label{prop:circuits} \end{prop} The circuits of type (C) in Proposition \ref{prop:circuits} come in two flavors: one contains 3 distinct crossings, while the other contains only 2. These are illustrated in Figure \ref{fig:circuits}. Circuits of type (C) containing only 2 distinct crossings are either disconnected, as pictured, or connected with both crossings meeting at a single boundary node. \begin{figure} \caption{Two circuits of $\eo$ with boundary nodes marked in white.} \label{fig:circuits} \end{figure} Taken together, the following 2 lemmas imply that circuits of type (C) do not contribute minimal generators to the Orlik-Solomon ideal $I$. When the usage is clear we will write $S = e_S$, so that $S$ is considered as an element of $\ea$ and a subset of $\eo$. \begin{lem} Let $C\subseteq E$ be a circuit containing distinct crossings $X_1$, $X_2$ and $X_3$. 
In $\ea$ we have \[\dif(C) \in \langle \dif(\eh X_1),\dif(\eh X_2),\dif(\eh X_3)\rangle.\] \begin{proof} There are mutually disjoint paths $P_1, P_2,P_3\subseteq E$ in $\g$ such that $C=P_1\cup P_2\cup P_3$ and $X_i = P_j\cup P_k$ for distinct $i,j,k$. Write $a_i = |P_i|$, and suppose without loss of generality that $X_1 = P_2P_3$, $X_2 = P_1P_3$ and $X_3 = P_1P_2$ in $\ea$. We have \begin{align*} \dif(\eh X_3) &= P_1P_2 - \eh\dif(P_1)P_2 - (-1)^{a_1}\eh P_1\dif(P_2)\\ \dif(\eh X_2) &= P_1P_3 - \eh\dif(P_1)P_3 - (-1)^{a_1}\eh P_1\dif(P_3)\\ \dif(\eh X_1) &= P_2P_3 - \eh\dif(P_2)P_3 - (-1)^{a_2}\eh P_2\dif(P_3) \end{align*} Thus \begin{align*} \dif(P_3)\dif(\eh X_3) &= (-1)^{(a_1+a_2)(a_3-1)}(P_1P_2\dif(P_3) - \eh\dif(P_1)P_2\dif(P_3)\\ &\qquad- (-1)^{a_1}\eh P_1\dif(P_2)\dif(P_3))\\ \dif(P_2)\dif(\eh X_2) &= (-1)^{a_1(a_2-1)}(P_1\dif(P_2)P_3 - \eh\dif(P_1)\dif(P_2)P_3 \\ &\qquad + (-1)^{a_1+a_2}\eh P_1\dif(P_2)\dif(P_3))\\ \dif(P_1)\dif(\eh X_1) &= \dif(P_1)P_2P_3 +(-1)^{a_1}\eh\dif(P_1)\dif(P_2)P_3\\ &\qquad +(-1)^{a_1+a_2}\eh\dif(P_1) P_2\dif(P_3) \end{align*} Since $C=P_1P_2P_3$, we have \[\dif(C)=\dif(P_1)P_2P_3 + (-1)^{a_1}P_1\dif(P_2)P_3 + (-1)^{a_1+a_2}P_1P_2\dif(P_3),\] A computation now gives \[\dif(C)=\dif(P_1)\dif(\eh X_1)+(-1)^{a_1a_2}\dif(P_2)\dif(\eh X_2) + (-1)^{(a_1+a_2)a_3}\dif(P_3)\dif(\eh X_3),\] proving the result. \end{proof} \label{lem:osy} \end{lem} \begin{lem} Suppose that $X_1$ and $X_2$ are crossings such that no vertex in $V\setminus \B$ is met by both $X_1$ and $X_2$. In $\ea$ we have \[\dif(C)\in \langle \dif(\eh X_1),\dif(\eh X_2)\rangle.\] \begin{proof} The proof is similar to that of Lemma \ref{lem:osy}. In particular, we have \[\dif(X_1X_2) = \dif(X_1\eh)\dif(X_2) + \dif(X_1)\dif(X_2\eh),\] proving the result. \end{proof} \label{lem:os2cross} \end{lem} Let $C\subseteq \eo$ be a circuit. An element $i\in \eo$ is a \emph{chord} of $C$ if there exist circuits $C_1$ and $C_2$ such that $i = C_1\cap C_2$ and $C=(C_1\setminus C_2)\cup (C_2\setminus C_1)$. If $C$ admits no chord, then $C$ is \emph{chordless}. \begin{prop} The minimal generators of $I$ are the elements of the form $\dif(C)$, where $C\subseteq \eo$ is a chordless circuit of type (A) or (B) in Proposition \ref{prop:circuits}. \begin{proof} Let $J$ be the ideal of $\ea$ generated by the elements of the form $\dif(C)$ for all circuits $C$ of types (A) and (B) in Proposition \ref{prop:circuits}. Note that any circuit of type (C) is described by either Lemma \ref{lem:osy} or \ref{lem:os2cross}. It follows that $J=I$ is the Orlik-Solomon ideal. Let $C\subseteq \eo$ be a circuit of type (A) or (B). It remains to show that $\dif(C)$ is a minimal generator of $I$ if and only if $C$ is chordless. Notice that a chord of $C$ is any edge $i\in E$ connecting two vertices met by $E\cap C$. Suppose first that $C$ is of type (B), and write $C=\{e_1,\ldots,e_r\}$. We have \[\dif(C) = \sum_{j=1}^r (-1)^{j-1} e_1\cdots \widehat{e_j}\cdots e_r.\] There is a chord $i$ of $C$ if and only if there is a circuit $C'$ of with a term of $\dif(C')$ dividing $e_2\cdots e_r$. Suppose that such a chord $i$ exists, and partition $C$ into two paths $P_1$ and $P_2$ such that $P_1\cup i$ and $P_2\cup i$ are cycles of $\g$. Write $a_j = |P_j|$, and suppose without loss of generality that $C=P_1P_2$ in $\ea$. We have \[\dif(C)=\dif(P_1)\dif(iP_2)+(-1)^{a_1a_2}\dif(P_2)\dif(iP_1),\] so $\dif(C)$ is not a minimal generator. Thus if $C$ is a cycle of $\g$, then $\dif(C)$ is a minimal generator of $I$ if and only if $C$ is chordless. 
Now suppose that $C =X\cup \eh$ for some crossing $X$. We have $\dif(C) = X - \eh\dif(X)$. There is a circuit $C'$ with a term of $\dif(C')$ dividing $X$ if and only if there is a chord $i$ of $X$. Suppose that such a chord $i$ exists. Partition $X$ into two sets $X_1$ and $X_2$ such that $X_1\cup i$ is a cycle of $\g$ and $X_2\cup i$ is a crossing. Write $b_j = |X_j|$, and suppose without loss of generality that $X=X_1X_2$ in $\ea$. We have \[(-1)^{b_1}\dif(C) = \dif(X_1)\dif(\eh i X_2) + (\eh \dif(X_2) + (-1)^{b_2}X_2)\dif(i X_1),\] where $X_1\cup i$ and $X_2\cup \{\eh, i\}$ are circuits of smaller size than $C$. Hence $\dif(C)$ is not a minimal generator. Thus if $C=X\cup \eh$ for some crossing $X$, then $\dif(C)$ is a minimal generator of $I$ if and only if $C$ is chordless. The result follows. \end{proof} \label{prop:mingen} \end{prop} \begin{prop} The graph $\mg$ is chordal if and only if there are no chordless circuits of type (A) or (B) in Proposition \ref{prop:circuits} having size $\geq 4$. \begin{proof} Let $\me$ be the set of edges of $\mg$ not in $E$. Suppose that $C$ is a chordless circuit of size $k\geq 4$. If $C=X\cup \eh$ is of type (A) for some crossing $X$, then there is $e\in \me$ such that $X\cup e$ is a cycle of $\mg$ admitting no chord. If $C$ is of type (B), then $C$ is a cycle of $\g$ (and hence $\mg$) admitting no chord. The ``only if'' direction follows. Now suppose that $\mg$ has a cycle $Z$ of size $\geq 4$ admitting no chord. Then either $Z\subseteq E$, in which case $Z$ is a circuit of type (B); or $Z\cap \me$ consists of a single edge $e$, in which case $(Z\setminus e)\cup \eh$ is a circuit of type (A). \end{proof} \label{prop:chordless} \end{prop} \begin{proof}[Proof of Theorem \ref{thm:koszul}] (i) $\Rightarrow$ (ii): This follows from Proposition \ref{prop:2conn}. (ii) $\Rightarrow$ (iii): This follows from \cite[Theorem 4.6]{shelton1997}. (iii) $\Rightarrow$ (iv): This is the content of Proposition \ref{prop:kosquad}. (iv) $\Rightarrow$ (i): This follows from Propositions \ref{prop:mingen} and \ref{prop:chordless}. \end{proof} \section{An infinite family} \label{sec:eg} We prove Theorem \ref{thm:eg} below, which implies Theorem \ref{thm:intro2}. There are four classes of arrangements for which Question \ref{q:koszul} was previously answered: \begin{enumerate}[(i)] \item Graphic arrangements \item Ideal arrangements \item Hypersolvable arrangements \item Ordered arrangements with disjoint minimal broken circuits. \end{enumerate} See \cite{hultman2016,jambu1998, schenck2002, vanle2013} for individual treatments. A priori it is unclear how these classes overlap with cones over Dirichlet arrangements. Given a central arrangement $\AA$, let $M(\AA)$ be the usual matroid on $\AA$, so $X$ is independent in $M(\AA)$ if and only if the set of normal vectors of $X$ is linearly independent. For more on matroids and central arrangements, see \cite{stanley2007}. \begin{mydef} Two central arrangements are \emph{combinatorially equivalent} if their underlying matroids are isomorphic. \end{mydef} \begin{mydef} Let $\chi(\g,\B)$ denote the chromatic number of the graph with vertex set $\B$ and an edge between $i$ and $j$ if and only if there is a crossing in $\g$ connecting $i$ and $j$. \label{mydef:omega} \end{mydef} \begin{eg} Consider the graph $\g$ on the left side of Figure \ref{fig:chrom} with $\B$ marked in white. On the right side is the graph with vertex set $\B$ and an edge between $i$ and $j$ if and only if there is a crossing in $\g$ connecting $i$ and $j$.
This graph can be colored using 6 colors, as pictured, and no fewer, since it contains a clique on 6 vertices. Hence $\chi(\g,\B)=6$. \begin{figure} \caption{A graph with boundary nodes marked in white and an illustration of the associated number $\chi(\g,\B)$.} \label{fig:chrom} \end{figure} \end{eg} \begin{thm} Suppose that $|E|\geq 240$ and $\chi(\g,\B)\geq 4$, and that some vertex of $\g$ is adjacent to at least 3 boundary nodes. If $\g\setminus \B$ contains the wheel graph on 5 vertices as an induced subgraph, then $\AA(\g,\u)$ is not combinatorially equivalent to any arrangement for which Question \ref{q:koszul} was previously answered. \label{thm:eg} \end{thm} \begin{eg} Recall that the \emph{join} $G+H$ of 2 graphs $G$ and $H$ is the disjoint union of $G$ and $H$ with edges added between every vertex of $G$ and every vertex of $H$. The join of any finite number of graphs is defined by induction. Let $\overline{K}_n$ and $K_n$ be the edgeless and complete graphs, resp., on $n$ vertices. Let $W_5$ be the wheel graph on 5 vertices. The graph $\g=\overline{K}_4+K_{14}+W_5$ with boundary $\B=\overline{K}_4$ satisfies the hypothesis of Theorem \ref{thm:eg} and does so with the minimum possible number of vertices. In particular we have $|E| = 245$, $\chi(\g,\B)=4$, and $|V|=23$. \end{eg} The proof of Theorem \ref{thm:eg} can be found at the end of the section. First we need some preliminary results on the classes of arrangements (ii)--(iv). \subsection{Ideal arrangements} Let $\Phi\subseteq \KK^n$ be a finite root system with set of positive roots $\Phi^+$. A standard reference for root systems is \cite{humphreys1972}. The \emph{Coxeter arrangement} associated to $\Phi$ is the set of normal hyperplanes of $\Phi^+$. Every Coxeter arrangement associated to a classical root system $\mathsf{A}_n$, $\mathsf{B}_n$, $\mathsf{C}_n$ or $\mathsf{D}_n$ is a subset of an arrangement of the following type. \begin{mydef} For all $n\geq 2$ let $\BB_n$ be the arrangement in $\KK^n$ of hyperplanes \[\{x_i = x_j : 1\leq i <j\leq n\}\cup\{x_i + x_j = 0 : 1\leq i<j\leq n\} \cup \{x_i = 0 : 1\leq i\leq n\}.\] \label{def:abd} \end{mydef} \begin{prop} If $\chi(\g,\B)\geq4$ and $|E|\geq 240$, then $\AA(\g,\u)$ is not combinatorially equivalent to any subarrangement of any Coxeter arrangement. \begin{proof} The matroids $M(\BB_n)$ are representable over any field $|\KK|$ with $|\KK|\geq 3$. However $M(\AA(\g,\u))$ is not representable over $\KK$ if $|\KK|<\chi(\g,\B)$ by \cite[Theorem 1.1(ii)]{lutz2018mat}. Hence if $\chi(\g,\B)\geq4$, then $\AA(\g,\u)$ is not combinatorially equivalent to any subarrangement of $\BB_n$. The exceptional root systems $\mathsf{E}_6$, $\mathsf{E}_7$, $\mathsf{E}_8$, $\mathsf{F}_4$ and $\mathsf{G}_2$ all have 240 or fewer elements. Hence no subarrangement of the associated Coxeter arrangements can have more than 240 elements. The result now follows from the classification of finite root systems. \end{proof} \label{prop:root} \end{prop} An \emph{ideal arrangement} (or a \emph{root ideal arrangement}) is a certain subarrangement of a Coxeter arrangement (see \cite{abe2014, hultman2016}). Graphic arrangements are subarrangements of $\BB_n$. Thus we have the following. \begin{cor} If $\chi(\g,\B)\geq 4$ and $|E|\geq 240$, then $\AA(\g,\u)$ is not combinatorially equivalent to any ideal arrangement or graphic arrangement. \label{cor:root} \end{cor} \subsection{Hypersolvable arrangements} Let $\AA$ be a central arrangement, and let $X\subseteq Y\subseteq \AA$. 
The containment $X\subseteq Y$ is \emph{closed} if $X\neq Y$ and $\{a,b,c\}$ is independent for all distinct $a,b\in X$ and $c\in Y\setminus X$. The containment $X\subseteq Y$ is \emph{complete} if $X\neq Y$ and for any distinct $a,b\in Y\setminus X$ there is $\gamma\in X$ such that $\{a,b,\gamma\}$ is dependent. If $X\subseteq Y$ is closed and complete, then the element $\gamma$ is uniquely determined by $a$ and $b$. Write $\gamma = f(a,b)$. The containment $X\subseteq Y$ is \emph{solvable} if it is closed and complete, and if for any distinct $a,b,c\in Y\setminus X$ with $f(a,b)$, $f(a,c)$ and $f(b,c)$ distinct, the set $\{f(a,b),f(a,c),f(b,c)\}$ is dependent. An increasing sequence $X_1 \subseteq\cdots\subseteq X_k = \AA$ is called a \emph{hypersolvable composition series} for $\AA$ if $|X_1|=1$ and each $X_i \subseteq X_{i+1}$ is solvable. \begin{mydef}[{\cite[Definition 1.8]{jambu1998}}] The central arrangement $\AA$ is \emph{hypersolvable} if it admits a hypersolvable composition series. \label{def:hyp} \end{mydef} There is an analog for graphs. Let $S\subseteq T\subseteq E$. We say that $S\subseteq T$ is \emph{solvable} if it satisfies the following conditions: \begin{enumerate}[(a)] \item There is no 3-cycle in $\g$ with two edges from $S$ and one edge from $T\setminus S$ \item Either $T\setminus S = e$ with neither endpoint of $e$ met by $S$, or there exist distinct vertices $v_1,\ldots,v_k,v$ met by $T$ with $v_1,\ldots,v_k$ met by $S$ such that \begin{enumerate}[(i)] \item $S$ contains a clique on $\{v_1,\ldots,v_k\}$, and \item $T\setminus S = \{vv_s \in E : s=1,\ldots,k\}$. \end{enumerate} \end{enumerate} An increasing sequence $S_1\subseteq \cdots \subseteq S_k = E$ is called a \emph{hypersolvable composition series} for $\g$ if $|S_1|=1$ and each $S_i\subseteq S_{i+1}$ is solvable \begin{mydef}[{\cite[Definition 6.6]{papadima2002}}] The graph $\g$ is \emph{hypersolvable} if it admits a hypersolvable composition series. \label{mydef:hypgraph} \end{mydef} \begin{prop} If the graph $\g$ is hypersolvable, then so is any induced subgraph of $\g$. \begin{proof} Suppose that $S_1\subseteq \cdots \subseteq S_k$ is a hypersolvable composition series for $\g$, and let $\overline{\g}$ be an induced subgraph of $\g$ with edge set $\overline{E}\subseteq E$. By eliminating empty sets and trivial containments in the sequence $S_1\cap \overline{E}\subseteq \cdots \subseteq S_k\cap \overline{E}$ one obtains a hypersolvable composition series for $\overline{\g}$. \end{proof} \label{prop:induced} \end{prop} The following proposition generalizes half a result of Papadima and Suciu \cite[Proposition 6.7]{papadima2002}, who showed that $\g$ is hypersolvable if and only if the associated graphic arrangement is hypersolvable. \begin{prop} If $\AA(\g,\u)$ is hypersolvable, then the graph $\mg$, obtained from $\g$ by adding edges between every pair of boundary nodes, is hypersolvable. \begin{proof} Let $\me$ be the set of added edges, so that the edge set of $\mg$ is the disjoint union $E\cup \me$. Write $\B=\{v_1,\ldots,v_m\}$. For $i=1,\ldots,m-1$ \[T_i = \{v_rv_s\in \me : r<s\leq i+1\},\] so for example $T_{m-1} = \me$. Suppose that $X_1\subseteq \cdots \subseteq X_k$ is a hypersolvable composition series for $\AA(\g,\u)$. For each $i$ let $S_i\subseteq \eo$ be the set corresponding to $X_i$. Let $j$ be the smallest index for which $\eh \in S_j$. 
Consider the increasing sequence \[ S_1\subseteq \cdots \subseteq S_{j-1} \subseteq S_{j-1}\cup T_1 \subseteq \cdots \subseteq S_{j-1}\cup T_{m-1}\subseteq S_{j+1}\cup \me \subseteq \cdots \subseteq S_k\cup \me, \] omitting the initial portion $S_1\subseteq \cdots \subseteq S_{j-1}$ if $j=1$. It is routine to show that this sequence is a hypersolvable composition series for $\mg$. \end{proof} \label{prop:hypersolvable} \end{prop} \begin{eg} Consider the network $N$ on the left side of Figure \ref{fig:nothyp}. Here $\mg=W_5$ is the wheel graph on 5 vertices. An exhaustive argument shows that $W_5$ is not hypersolvable. Hence Proposition \ref{prop:hypersolvable} implies that $\AA(\g,\u)$ is not hypersolvable. \label{eg:w5} \end{eg} \begin{figure} \caption{Left-to-right: a network $N$ with boundary nodes marked in white and the associated graph $\mg=W_5$.} \label{fig:nothyp} \end{figure} \begin{q} Does the converse of Proposition \ref{prop:hypersolvable} hold? In other words, is $\AA(\g,\u)$ hypersolvable whenever $\mg$ is hypersolvable? \end{q} \subsection{Disjoint broken circuits} Fix an ordering of a central arrangement $\AA$, and let $\min X$ denote the minimal element of any $X\subseteq \AA$. The \emph{broken circuits} of $\AA$ are the sets $C\setminus \min C$ for all circuits $C$ of $\AA$. A broken circuit is \emph{minimal} if it does not properly contain any broken circuits. Van Le and R\"{o}mer \cite[Theorem 4.9]{vanle2013} answered Question \ref{q:koszul} affirmatively for all ordered arrangements with disjoint minimal broken circuits. No matter the ordering, many Dirichlet arrangements do not satisfy this requirement, as the following proposition implies. \begin{prop} If there is an element of $V\setminus \B$ adjacent to at least 3 boundary nodes, then the minimal broken circuits of $\AA(\g,\u)$ are not disjoint with respect to any ordering. \begin{proof} Suppose that $i\in V\setminus \B$ is adjacent to distinct boundary nodes $j_1$, $j_2$ and $j_3$. Let $e_r$ be the edge $ij_r$ for $r=1,2,3$. Fix an ordering of $\AA(\g,\u)$ and suppose without loss of generality that $e_1<e_2<e_3$. We obtain circuits $\{e_0,e_1,e_3\}$ and $\{e_0,e_2,e_3\}$. The associated broken circuits are minimal, since there are no circuits of size $\leq 2$. Moreover both broken circuits contain $e_3$. \end{proof} \label{prop:dmbc} \end{prop} \begin{proof}[Proof of Theorem \ref{thm:eg}] Since $\chi(\g,\B)\geq 4$ and $|E|\geq 240$, Corollary \ref{cor:root} says that $\AA(\g,\u)$ is not combinatorially equivalent to any ideal arrangement or graphic arrangement. Since $\g\setminus \B$ contains $W_5$ as an induced subgraph, $\mg$ also contains $W_5$ as an induced subgraph. Example \ref{eg:w5} and Propositions \ref{prop:induced} and \ref{prop:hypersolvable} imply that $\AA(\g,\u)$ is not hypersolvable, a property depending only on $M(\AA(\g,\u))$. Finally Proposition \ref{prop:dmbc} says that the broken circuits of $\AA(\g,\u)$ are not disjoint with respect to any ordering. This property only depends on $M(\AA(\g,\u))$, so the result follows. \end{proof} \section*{Acknowledgments} \noindent The author thanks Trevor Hyde and the anonymous referee for helpful comments. \label{sec:falk} Falk posed the problem of interpreting $\phi_3$ combinatorially \cite[Problem 1.4]{falk2001}. This problem was solved by Schenck and Suciu for graphic arrangements, and later by Guo, Guo, Hu and Jiang for arrangements associated to certain signed graphs.\todo{keep going! 
and cite Guo and Torielli} The Orlik-Solomon ideal $I$ inherits a grading $I^p$ from $F$. Let $I_q$ be the ideal of $F$ generated by $I^0 + I^1 + \cdots + I^q$. The ideal $I_q$ is called the \emph{$q$-adic closure} of $I$ and inherits a grading $I_q^p = (I_q)^p$ from $F$. The algebra $A_q = F/I_q$ is called the \emph{$q$-adic closure} of $A$ and also inherits a grading $A_q^p = (A_q)^p$ from $F$. \begin{mydef} Consider the linear map $F^1\otimes I^2 \to F^3$ defined by $a\otimes b \mapsto ab$. The nullity of this map is denoted by $\phi_3$ and called the \emph{Falk invariant} of $L_0(N)$. \label{def:falk} \end{mydef} We say that $S\subset E$ is a \emph{short crossing} (resp., \emph{Wheatstone bridge}) if it induces the subgraph on the left (resp., right) side of Figure \ref{fig:scwb}, respecting the placement of the nodes and interior vertices. We say that $S\subset E$ is a \emph{triangle} (resp., a $K_4$) if it induces the complete subgraph $K_3$ (resp., $K_4$), with no distinction between nodes and interior vertices. \begin{figure} \caption{Left to right: a short crossing and a Wheatstone bridge with nodes marked in white.} \label{fig:scwb} \end{figure} \begin{figure} \caption{The four types of generators.} \label{fig:4gens} \end{figure} \begin{thm} The Falk invariant of the Kirchhoff matroid $L_0(N)$ is given by \[\phi_3 = 2(\kap_1+\kap_2+\kap_3+\kap_4),\] where $\kap_1$ is the number of short crossings of $N$, $\kap_2$ is the number of Wheatstone bridges in $N$, $\kap_3$ is the number of triangles in $\g$, and $\kap_4$ is the number of $K_4$ in $\g$. \end{thm} \begin{remark} The notation $\phi_3$ for the Falk invariant comes from the topology of hyperplane arrangements. Let $\u:\B\to \R$ be injective, and let $\AA$ be the cone of the Kirchhoff arrangement associated to $(N,\u)$. Let $\pi$ be the fundamental group of the complement of the complexified arrangement $\C\otimes \AA$, and let $\pi=G_1,G_2,\ldots$ be the lower central series of $\pi$. Then every quotient $G_i/G_{i+1}$ is a finitely generated abelian group. Denoting the rank of $G_i/G_{i+1}$ by $\phi_i$ is consistent with Definition \ref{def:falk}. \end{remark} In this section write $k=|\eo|$. \begin{prop}[{\cite[Theorem 4.7]{falk2001}}] The Falk invariant $\phi_3$ of $L_0(N)$ can be written in terms of $A$: \[\phi_3 = 2\binom{k+1}{3} - k \dim(A^2) + \dim(A^2_3).\] \end{prop} \begin{prop} We have \[\dim(A^2) = \binom{k}{2} - \kap_1 - \kap_3.\] \end{prop} Let $\triangle\subset \binom{E}{3}$ be the set of triangles in $\g$, and let $C\subset \binom{E}{2}$ be the set of short crossings of $N$. Define the following subsets of $F^3$: \begin{align*} S_\triangle &= \{ ijk : \{i,j,k\}\in \triangle\}\\ S_C &= \{ij\eh : \{i,j\}\in C\}\\ S_{W} &= \{t\dif (ijk) : t,i,j,k\mbox{ are illustrated in Figure \ref{fig:4gens}}\}\\ S_{K_4} &= \{t\dif(ijk) : \{i,j,k\}\in \triangle \mbox{ and } i,j,k,t\mbox{ belong to a }K_4\}\\ S_0 &= \{\eh \dif(ijk) : \{i,j,k\}\in \triangle\}\setminus S_{W}\\ S_1 &= \{t\dif (ijk) : \{i,j,k\}\in \triangle\mbox{ and }t\in E\}\setminus (S_\triangle \cup S_{K_4} \cup S_0)\\ S_2 &= \{t\dif(ij\eh) : \{i,j\}\in C\} \setminus (S_C\cup S_W). \end{align*} For each $S_i$, let $U_i$ denote the span in $F^3$ of $S_i$. \begin{lem} The vector space $I_3^2$ decomposes as \[I_3^2 = U_\triangle \oplus U_C \oplus U_W \oplus U_{K_4} \oplus U_0 \oplus U_1\oplus U_2.\] \end{lem} \begin{lem} Let $\mu$ be the number of triangles in $\g$ contained in Wheatstone bridges of $N$.
We have \begin{align*} \dim U_\triangle &= \kap_3\\ \dim U_C &= \kap_1\\ \dim U_W &= 8\kap_2 + \mu\\ \dim U_{K_4} &= 10\kap_4\\ \dim U_0 &= \kap_3 - \mu\\ \dim U_1 &= -4\kap_2 + (k-4)\kap_3 - 12\kap_4\\ \dim U_2 &= (k-3)\kap_1-6\kap_2. \end{align*} \end{lem} \begin{prop} We have \[\dim(A_3^2) = \binom{k}{3} - (k-2)(\kap_1 + \kap_3) + 2(\kap_2 + \kap_4).\] \end{prop} \end{document}
\begin{document} \draft \preprint{\vbox{\baselineskip=12pt{\hbox{CALT-68-2021} \hbox{quant-ph/9602016}}}} \title{Efficient networks for quantum factoring} \author{David Beckman, Amalavoyal N. Chari, Srikrishna Devabhaktuni, and John Preskill} \address{California Institute of Technology, Pasadena, CA 91125} \maketitle \begin{abstract} We consider how to optimize memory use and computation time in operating a quantum computer. In particular, we estimate the number of memory qubits and the number of operations required to perform factorization, using the algorithm suggested by Shor. A $K$-bit number can be factored in time of order $K^3$ using a machine capable of storing $5K+1$ qubits. Evaluation of the modular exponential function (the bottleneck of Shor's algorithm) could be achieved with about $72 K^3$ elementary quantum gates; implementation using a linear ion trap would require about $396K^3$ laser pulses. A proof-of-principle demonstration of quantum factoring (factorization of 15) could be performed with only 6 trapped ions and 38 laser pulses. Though the ion trap may never be a useful computer, it will be a powerful device for exploring experimentally the properties of entangled quantum states. \end{abstract} \pacs{} \narrowtext \parskip=5pt \section{Introduction and Summary} Recently, Shor \cite{shor} has exhibited a probabilistic algorithm that enables a quantum computer to find a nontrivial factor of a large composite number $N$ in a time bounded from above by a polynomial in $\log(N)$. As it is widely believed that no polynomial-time factorization algorithm exists for a classical Turing machine, Shor's result indicates that a quantum computer can efficiently perform interesting computations that are intractable on a classical computer, as had been anticipated by Feynman \cite{feynman}, Deutsch \cite{deutsch}, and others \cite{others}. Furthermore, Cirac and Zoller \cite{cirac} have suggested an ingenious scheme for performing quantum computation using a potentially realizable device. The machine they envisage is an array of cold ions confined in a linear trap, and interacting with laser beams. Such linear ion traps have in fact been built \cite{wineland}, and these devices are remarkably well protected from the debilitating effects of decoherence. Thus, the Cirac-Zoller proposal has encouraged speculation that a proof-of-principle demonstration of quantum factoring might be performed in the reasonably near future. Spurred by these developments, we have studied the computational resources that are needed to carry out the factorization algorithm using the linear ion trap computer or a comparable device. Of particular interest is the inevitable tension between two competing requirements. Because of practical limitations on the number of ions that can be stored in the trap, there is a strong incentive to minimize the number of qubits in the device by managing memory resources frugally. On the other hand, the device has a characteristic decoherence time scale, and the computation will surely crash if it takes much longer than the decoherence time. For this reason, and because optimizing speed is desirable anyway, there is a strong incentive to minimize the total number of elementary operations that must be completed during the computation. A potential rub is that frugal memory management may result in longer computation time.
One of our main conclusions, however, is that substantial squeezing of the needed memory space can be achieved without sacrificing much in speed. A quantum computer capable of storing $5K+1$ qubits can run Shor's algorithm to factor a $K$-bit number $N$ in a time of order $K^3$. Faster implementations of the algorithm are possible for asymptotically large $N$, but these require more qubits, and are relatively inefficient for values of $N$ that are likely to be of practical interest. For these values of $N$, a device with unlimited memory using our algorithms would be able to run only a little better than twice as fast as a device that stores $5K+1$ qubits. Further squeezing of the memory space is also possible, but would increase the computation time to a higher power of $K$. Shor's algorithm (which we will review in detail in the next section) includes the evaluation of the modular exponential function; that is, a unitary transformation $U$ that acts on elements of the computational basis as \begin{equation} \label{exp} U: |a \rangle_i |0\rangle_o \longmapsto |a \rangle_i |x^a~({\rm mod}~N)\rangle_o \ . \end{equation} Here $N$ is the $K$-bit number to be factored, $a$ is an $L$-bit number (where usually $L\approx 2K$), and $x$ is a randomly selected positive integer less than $N$ that is relatively prime to $N$; $|\cdot\rangle_i$ and $|\cdot\rangle_o$ denote the states of the ``input'' and ``output'' registers of the machine, respectively. Shor's algorithm aims to find the period of this function, the {\it order} of $x$ mod $N$. From the order of $x$, a factor of $N$ can be extracted with reasonable likelihood, using standard results of number theory. To perform factorization, one first prepares the input register in a coherent superposition of all possible $L$-bit computational basis states: \begin{equation} {1\over 2^{L/2}}\sum_{a=0}^{2^{L}-1}|a\rangle_i \ . \end{equation} Preparation of this state is relatively simple, involving just $L$ one-qubit rotations (or, for the Cirac-Zoller device, just $L$ laser pulses applied to the ions in the trap). Then the modular exponential function is evaluated by applying the transformation $U$ above. Finally, a discrete Fourier transformation is applied to the input register, and the input register is subsequently measured. From the measured value, the order of $x$ mod $N$ can be inferred with reasonable likelihood. Shor's crucial insight was that the discrete Fourier transform can be evaluated in polynomial time on a quantum computer. Indeed, its evaluation is remarkably efficient. With an improvement suggested by Coppersmith \cite{coppersmith} and Deutsch \cite{deutschFFT}, evaluation of the $L$-bit Fourier transform is accomplished by composing $L$ one-qubit operations and ${1\over 2}L(L-1)$ two-qubit operations. (For the Cirac-Zoller device, implementation of the discrete Fourier transform requires $L(2L-1)$ distinct laser pulses.) The bottleneck of Shor's algorithm is the rather more mundane task of evaluating the modular exponential function, {\it i.e.}, the implementation of the transformation $U$ in Eq.\ (\ref{exp}). This task demands far more computational resources than the rest of the algorithm, so we will focus on evaluation of this function in this paper. There is a well-known (classical) algorithm for evaluating the modular exponential that involves $O(K^3)$ elementary operations, and we will make use of this algorithm here.
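For concreteness, the classical algorithm in question is, in essence, repeated squaring: the exponent $a$ is scanned bit by bit, and a running product is multiplied by the appropriate power $x^{2^i}~({\rm mod}~N)$. The following Python fragment is an illustrative classical sketch of this scheme (it is not one of the quantum networks constructed below, where the same arithmetic must be organized reversibly):
\begin{verbatim}
def mod_exp(x, a, N):
    """Evaluate x^a (mod N) by repeated squaring.

    The exponent a is scanned from its least significant bit;
    each of the O(log a) steps costs one or two multiplications
    mod N, i.e. O(K^2) bit operations for K-bit numbers.
    """
    result = 1
    power = x % N              # holds x^(2^i) mod N at step i
    while a > 0:
        if a & 1:              # i-th bit of the exponent is 1
            result = (result * power) % N
        power = (power * power) % N
        a >>= 1
    return result

# quick check against Python's built-in three-argument pow
assert mod_exp(7, 11, 15) == pow(7, 11, 15)
\end{verbatim}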
The main problem that commands our attention is the management of the ``scratchpad'' space that is needed to perform the computation; that is, the extra qubits aside from the input and output registers that are used in intermediate steps of the computation. It is essential to erase the scratchpad before performing the discrete Fourier transform on the input register. Before the scratchpad is erased, the state of the machine will be of the form \begin{equation} \label{garbage} {1\over 2^{L/2}}\sum_a |a \rangle_i |x^a~({\rm mod}~N)\rangle_o |g(a)\rangle_s \ , \end{equation} where $|g(a)\rangle_s$ denotes the ``garbage'' stored in the scratchpad. If we were now to perform the discrete Fourier transform on $|\cdot\rangle_i$, we would be probing the periodicity properties of the function $x^a~({\rm mod}~N)\otimes g(a)$, which may be quite different from the periodicity properties of $x^a~({\rm mod}~N)$ that we are interested in. Thus, the garbage in the scratchpad must be erased, but the erasure is a somewhat delicate process. To avoid destroying the coherence of the computation, erasure must be performed as a reversible unitary operation. In principle, reversible erasure of the unwanted garbage presents no difficulty. Indeed, in his pioneering paper on reversible computation, Bennett \cite{bennett} formulated a general strategy for cleaning out the scratchpad: one can run the calculation to completion, producing the state Eq.\ (\ref{garbage}), copy the result from the output register to another ancillary register, and then run the computation backwards to erase both the output register and the scratchpad. However, while this strategy undoubtedly works, it may be far from optimal, for it may require the scratchpad to be much larger than is actually necessary. We can economize on scratchpad space by running subprocesses backwards at intermediate stages of the computation, thus freeing some registers to be reused in a subsequent process. (Indeed, Bennett himself \cite{bennett_trade} described a general procedure of this sort that greatly reduces the memory space requirements.) However, for this reduction in required scratchpad space, we may pay a price in increased computation time. One of our objectives in this paper is to explore this tradeoff between memory requirements and computation time. This tradeoff is a central general issue in quantum computation (or classical reversible computation) that we have investigated by studying the implementation of the modular exponential function, the bottleneck of Shor's factorization algorithm. We have constructed a variety of detailed quantum networks that evaluate the modular exponential, and we have analyzed the complexity of our networks. A convenient (though somewhat arbitrary) measure of the complexity of a quantum algorithm is the number of laser pulses that would be required to implement the algorithm on a device like that envisioned by Cirac and Zoller. We show that if $N$ and $x$ are $K$-bit classical numbers and $a$ is an $L$-bit quantum number, then, on a machine with $2K+1$ qubits of scratch space, the computation of $x^a~({\rm mod}~N)$ can be achieved with $198 L\left[K^2 +O(K)\right]$ laser pulses. If the scratch space of the machine is increased by a single qubit, the number of pulses can be reduced by about 6\% (for $K$ large), and if $K$ qubits are added, the improvement in speed is about 29\%. We also exhibit a network that requires only $K+1$ scratch qubits, but where the required number of pulses is of order $LK^4$.
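Bennett's compute--copy--uncompute strategy can be mimicked in a purely classical toy model, which may make the bookkeeping clearer. The sketch below (illustrative Python, with a made-up intermediate quantity standing in for the garbage $g(a)$; it is not one of the networks of Sec.\ V) computes a result into an output register, copies it into a fresh register with an XOR, and then runs the forward steps in reverse so that the scratch and output registers return to zero:
\begin{verbatim}
X, N = 7, 15          # classical parameters of the modular exponential

def forward(a, scratch, out):
    """Toy reversible computation: registers are updated by XOR,
    so the map is undone by repeating the same XORs in reverse
    order.  `scratch` ends up holding garbage g(a)."""
    scratch ^= (5 * a + 3) % 8        # stand-in for intermediate garbage
    out ^= pow(X, a, N)               # out <- out XOR f(a)
    return a, scratch, out

def backward(a, scratch, out):
    """Exact inverse of forward (same XORs, reverse order)."""
    out ^= pow(X, a, N)
    scratch ^= (5 * a + 3) % 8
    return a, scratch, out

def run_and_erase(a):
    a, g, f = forward(a, 0, 0)        # compute; scratchpad now dirty
    copy = 0 ^ f                      # copy the answer to an ancilla
    a, g, f = backward(a, g, f)       # uncompute; scratch and output are 0 again
    assert (g, f) == (0, 0)
    return copy

print(run_and_erase(5))               # 7^5 mod 15 = 7
\end{verbatim}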
The smallest composite number to which Shor's algorithm may be meaningfully applied is $N=15$. (The algorithm fails for $N$ even and for $N=p^\alpha$, $p$ prime.) Our general purpose algorithm (which works for any value of $N$), in the case $N=15$ (or $K=4$, $L=8$), would require 21 qubits and about 15,000 laser pulses. In fact, a much faster special purpose algorithm that exploits special properties of the number 15 can also be constructed---for what it's worth, the special purpose algorithm could ``factor 15'' with 6 qubits and only 38 pulses. The fastest modern digital computers have difficulty factoring numbers larger than about 130 digits (432 bits). According to our estimates, to apply Shor's algorithm to a number of this size on the ion trap computer (or a machine of similar design) would require about 2160 ions and $3\times 10^{10}$ laser pulses. The ion trap is an intrinsically slow device, for the clock speed is limited by the frequency of the fundamental vibrational mode of the trapped ions. Even under very favorable conditions, it seems unlikely that more than $10^4$ operations could be implemented per second. For a computation of practical interest, the run time of the computation is likely to outstrip by far the decoherence time of the machine. It seems clear that a practical quantum computer will require a much faster clock speed than can be realized in the Cirac-Zoller design. For this reason, a design based on cavity quantum electrodynamics (in which processing involves excitation of photons rather than phonons) \cite{kimble,pellizzari} may prove more promising in the long run. Whatever the nature of the hardware, it seems likely that a practical quantum computer will need to invoke some type of error correction protocol to combat the debilitating effects of decoherence \cite{noise}. Recent progress in the theory of error-correcting quantum codes \cite{error_correct} has bolstered the hope that real quantum computers will eventually be able to perform interesting computational tasks. Although we expect that the linear ion trap is not likely to ever become a practical computer, we wish to emphasize that it is a marvelous device for the experimental studies of the peculiar properties of entangled quantum states. Cirac and Zoller \cite{cirac} have already pointed out that maximally entangled states of $n$ ions \cite{greenberger} can be prepared very efficiently. Since it is relatively easy to make measurements in the Bell operator basis for any pair of entangled ions in the trap \cite{ekert_deutsch}, it should be possible to, say, demonstrate the possibility of quantum teleportation \cite{teleportation} (at least from one end of the trap to the other). In Sec.\ II of this paper, we give a brief overview of the theory of quantum computation and describe Shor's algorithm for factoring. Cirac and Zoller's proposed implementation of a quantum computer using a linear ion trap is explained in Sec.\ III. Sec.\ IV gives a summary of the main ideas that guide the design of our modular exponentiation algorithms; the details of the algorithms are spelled out in Sec.\ V, and the complexity of the algorithms is quantified in Sec.\ VI. The special case $N=15$ is discussed in Sec.\ VII. In Sec.\ VIII, we propose a simple experimental test of the quantum Fourier transform. Finally, in Appendix A, we describe a scheme for further improving the efficiency of our networks.
Quantum networks that evaluate the modular exponential function have also been designed and analyzed by Despain \cite{despain}, by Shor \cite{shor_long} and by Vedral, Barenco, and Ekert \cite{vedral}. Our main results are in qualitative agreement with the conclusions of these authors, but the networks we describe are substantially more efficient. \section{Quantum Computation and Shor's Factorization Algorithm} \subsection{Computation and physics} The theory of computation would be bootless if the computations that it describes could not actually be carried out using physically realizable devices. Hence it is really the task of physics to characterize what is computable, and to classify the efficiency of computations. The physical world is quantum mechanical. Therefore, the foundations of the theory of computation must be quantum mechanical as well. The classical theory of computation ({\it e.g.}, the theory of the universal Turing machine) should be viewed as an important special case of a more general theory. A ``quantum computer'' is a computing device that invokes intrinsically quantum-mechanical phenomena, such as interference and entanglement.\footnote{For a lucid review of quantum computation and Shor's algorithm, see \cite{ekert_jozsa}.} In fact, a Turing machine can simulate a quantum computer to any desired accuracy (and vice versa); hence, the classical theory and the more fundamental quantum theory of computation agree on what is computable \cite{deutsch}. But they may disagree on the classification of {\it complexity}; what is easy to compute on a quantum computer may be hard on a classical computer. \subsection{Bits and qubits} In the classical theory, the fundamental unit of information is the {\it bit}---it can take either of two values, say 0 and 1. All classical information can be encoded in bits, and any classical computation can be reduced to fundamental operations that flip bits (changing 0 to 1 or 1 to 0) conditioned on the values of other bits. In the quantum theory of information, the bit is replaced by a more general construct---the {\it qubit}. We regard $\bigl |{0}\bigr\rangle$ and $\bigl |{1}\bigr\rangle$ as the orthonormal basis states for a two-dimensional complex vector space. The state of a qubit (if ``pure'') can be any normalized vector, denoted \begin{equation} \label{qubit_state} c_0 \bigl |{0}\bigr\rangle + c_1 \bigl |{1}\bigr\rangle \ , \end{equation} where $c_0$ and $c_1$ are complex numbers satisfying $|c_0|^2+|c_1|^2=1$. A classical bit can be viewed as the special case in which the state of the qubit is always either $c_0=1,~c_1=0$ or $c_0=0,~c_1=1$. The possible pure states of a qubit can be parametrized by two real numbers. (The overall phase of the state is physically irrelevant.) Nevertheless, only one bit of classical information can be stored in a qubit and reliably recovered. If the value of the qubit in the state Eq.\ (\ref{qubit_state}) is measured, the result is 0 with probability $|c_0|^2$ and 1 with probability $|c_1|^2$; in the case $|c_0|^2=|c_1|^2={1\over 2}$, the outcome of the measurement is a random number, and we recover no information at all. A string of $n$ classical bits can take any one of $2^n$ possible values. For $n$ qubits, these $2^n$ classical strings are regarded as the basis states for a complex vector space of dimension $2^n$, and a pure state of $n$ qubits is a normalized vector in this space.
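The description of an $n$-qubit pure state as a normalized vector of dimension $2^n$ is easy to make concrete numerically. The short Python sketch below (illustrative only; the state and the random seed are arbitrary choices, and NumPy is assumed) builds such a vector and tabulates the probabilities of the $2^n$ possible outcomes when every qubit is measured:
\begin{verbatim}
import numpy as np

n = 3
rng = np.random.default_rng(0)

# a pure state of n qubits: a normalized complex vector of length 2**n
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)

# measuring all n qubits yields the basis string b with probability |psi_b|^2
probs = np.abs(psi) ** 2
for b, p in enumerate(probs):
    print(format(b, "03b"), round(p, 3))
print("sum of probabilities:", round(probs.sum(), 6))   # = 1
\end{verbatim}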
\subsection{Processing} In a quantum computation, $n$ qubits are initially prepared in an algorithmically simple input state, such as \begin{equation} \bigl |{\rm input}\bigr\rangle=\bigl |{0}\bigr\rangle\bigl |{0}\bigr\rangle\bigl |{0}\bigr\rangle\dots \bigl |{0}\bigr\rangle \ . \end{equation} Then a unitary transformation $U$ is applied to the input state, yielding an output state \begin{equation} \bigl |{\rm output}\bigr\rangle = U\bigl |{\rm input}\bigr\rangle \ . \end{equation} Finally, a set of commuting observables ${\cal O}_1, {\cal O}_2, {\cal O}_3, \dots$ is measured in the output state. The measured values of these observables constitute the outcome of the computation. Since the output state is not necessarily an eigenstate of the measured observables, the quantum computation is not deterministic---rather, the same computation, performed many times, will generate a probability distribution of possible outcomes. (Note that the observables that are measured in the final step are assumed to be simple in some sense; otherwise the transformation $U$ would be superfluous. Without loss of generality, we may specify that the values of all qubits (or a subset of the qubits) are measured at the end of the computation; that is, the $j$th qubit $\bigl |{\cdot}\bigr\rangle_j$ is projected onto the ``computational basis'' $\lbrace \bigl |{0}\bigr\rangle_j~, ~ \bigl |{1}\bigr\rangle_j \rbrace $.) To characterize the complexity of a computation, we must formulate some rules that specify how the transformation $U$ is constructed. One way to do this is to demand that $U$ is expressed as a product of elementary unitary transformations, or ``quantum gates,'' that act on a bounded number of qubits (independent of $n$). In fact, it is not hard to see \cite{universal} that ``almost any'' two-qubit unitary transformation, together with qubit swapping operations, is universal for quantum computation. That is, given a generic $4\times 4$ unitary matrix $\tilde U$, let $\tilde U^{(i,j)}$ denote $\tilde U$ acting on the $i$th and $j$th qubits according to \begin{equation} \tilde U^{(i,j)}: \>\>\bigl |{\epsilon_i}\bigr\rangle_i\bigl |{\epsilon_j}\bigr\rangle_j\longmapsto \sum_{\epsilon'_i,\epsilon'_j}\tilde U_{\epsilon_i\epsilon_j,\epsilon'_i\epsilon'_j}\, \bigl |{\epsilon'_i}\bigr\rangle_i\bigl |{\epsilon'_j}\bigr\rangle_j \ . \end{equation} Then any $2^n\times 2^n$ unitary transformation $U$ can be approximated to arbitrary precision by a finite string of $\tilde U^{(i,j)}$'s, \begin{equation} U\simeq \tilde U^{(i_T,j_T)}\dots\tilde U^{(i_2,j_2)}\tilde U^{(i_1,j_1)} \ . \end{equation} The length $T$ of this string (the ``time'') is a measure of the complexity of the quantum computation. Determining the precise string of $\tilde U^{(i,j)}$'s that is needed to perform a particular computational task may itself be computationally demanding. Therefore, to have a reasonable notion of complexity, we should require that a conventional computer (a Turing machine) generates the instructions for constructing the unitary transformation $U$. The complexity of the computation is actually the sum of the complexity of the classical computation and the complexity of the quantum computation. Then we may say that a problem is {\it tractable} on a quantum computer if the computation that solves the problem can be performed in a time that is bounded from above by a polynomial in $n$, the number of qubits contained in the quantum register.
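As a numerical illustration of how a two-qubit gate $\tilde U^{(i,j)}$ acts on the full $2^n$-dimensional state vector, the following Python sketch (NumPy assumed; the qubit-labeling convention, with qubit 0 the least significant bit of the basis index, is our own choice for the example) reshapes the state into an $n$-index array, applies the $4\times 4$ matrix to the two relevant indices, and flattens the result:
\begin{verbatim}
import numpy as np

def apply_two_qubit_gate(psi, U, i, j, n):
    """Apply a 4x4 unitary U to qubits i and j (i != j) of an
    n-qubit state vector psi of length 2**n.  Qubit 0 is the
    least significant bit of the basis-state index."""
    psi = psi.reshape([2] * n)                 # one axis per qubit
    ax_i, ax_j = n - 1 - i, n - 1 - j          # axis k holds qubit n-1-k
    psi = np.moveaxis(psi, [ax_i, ax_j], [0, 1])
    shape = psi.shape
    psi = (U @ psi.reshape(4, -1)).reshape(shape)
    psi = np.moveaxis(psi, [0, 1], [ax_i, ax_j])
    return psi.reshape(-1)

# example: a controlled-NOT with control qubit 1 and target qubit 0
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
psi = np.zeros(2**3, dtype=complex)
psi[0b010] = 1.0                               # basis state |q2 q1 q0> = |010>
psi = apply_two_qubit_gate(psi, CNOT, i=1, j=0, n=3)
print(np.argmax(np.abs(psi)))                  # 3, i.e. |011>: the target flipped
\end{verbatim}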
This notion of tractability has the nice property that it is largely independent of the details of the design of the machine---that is, the choice of the fundamental quantum gates. The quantum gates of one device can be simulated to polynomial accuracy in polynomial time by the quantum gates of another device. It is also clear that a classical computer can simulate a quantum computer to any desired accuracy---all that is required to construct the state $\bigl |{\rm output}\bigr\rangle$ is repeated matrix multiplication, and we can simulate the final measurement of the observables by expanding $\bigl |{\rm output}\bigr\rangle$ in a basis of eigenstates of the observables. However, the classical simulation may involve matrices of exponentially large size ($U$ is a $2^n\times 2^n$ matrix), and so may take an exponentially long time. It was this simple observation that led Feynman \cite{feynman} to suggest that quantum computers may be able to solve certain problems far more efficiently than classical computers. \subsection{Massive Parallelism} Deutsch \cite{deutsch} put this suggestion in a more tangible form by emphasizing that a quantum computer can exploit ``massive quantum parallelism.'' Suppose we are interested in studying the properties of a function $f$ defined on the domain of nonnegative integers $0,1,2,\dots,2^L-1$. Imagine that a unitary transformation $U_f$ can be constructed that efficiently computes $f$: \begin{eqnarray} \label{f_compute} U_f:\>\> &&\bigl | \left(i_{L-1}i_{L-2}\dots i_1 i_0\right)\bigr\rangle_{in}\bigl |\left(00\dots00\right)\bigr\rangle_{out} \nonumber\\ \longmapsto &&\bigl | \left(i_{L-1}i_{L-2}\dots i_1 i_0\right)\bigr\rangle_{in}\bigl |f(i_{L-1}i_{L-2}\dots i_1 i_0)\bigr\rangle_{out} \ . \end{eqnarray} Here $\left(i_{L-1}i_{L-2}\dots i_1 i_0\right)$ is an integer expressed in binary notation, and $\bigl | \left(i_{L-1}i_{L-2}\dots i_1 i_0\right)\bigr\rangle$ denotes the corresponding basis state of $L$ qubits. Since the function $f$ might not be invertible, $U_f$ has been constructed to leave the state in the $\bigl | \cdot\bigr\rangle_{in}$ register undisturbed, to ensure that it is indeed a reversible operation. Eq.\ (\ref{f_compute}) defines the action of $U_f$ on each of $2^L$ basis states, and hence, by linear superposition, on all states of a $2^L$-dimensional Hilbert space. In particular, starting with the state $\bigl | \left(00\dots00\right)\bigr\rangle_{in}$, and applying single-qubit unitary transformations to each of the $L$ qubits, we can easily prepare the state \begin{eqnarray} \label{rotate_bits} \left({1\over\sqrt{2}}\bigl | 0\bigr\rangle + {1\over\sqrt{2}}\bigl | 1\bigr\rangle\right)^L \>&&=\>{1\over 2^{L/2}}\sum_{i_{L-1}=0}^1\dots \sum_{i_{1}=0} ^1\sum_{i_{0}=0}^1 \bigl | \left(i_{L-1}i_{L-2}\dots i_1 i_0\right)\bigr\rangle_{in}\nonumber\\ &&\equiv \>{1\over 2^{L/2}}\sum_{x=0}^{2^L-1}\bigl |x\bigr\rangle_{in}\ , \end{eqnarray} an equally weighted coherent superposition of all of the $2^L$ distinct basis states. With this input, the action of $U_f$ prepares the state \begin{equation} \label{massive} \bigl |\psi_f\bigr\rangle\> \equiv \>{1\over 2^{L/2}}\sum_{x=0}^{2^L-1}\bigl |x\bigr\rangle_{in}\bigl |f(x)\bigr\rangle_{out}\ . 
\end{equation} The highly entangled quantum state Eq.\ (\ref{massive}) exhibits what Deutsch called ``massive parallelism.'' Although we have run the computation (applied the unitary transformation $U_f$) only once, in a sense this state encodes the value of the function $f$ for each possible value of the input variable $x$. Were we to measure the value of all the qubits of the input register, obtaining the result $x=a$, a subsequent measurement of the output register would reveal the value of $f(a)$. Unfortunately, the measurement will destroy the entangled state, so the procedure cannot be repeated. We succeed, then, in unambiguously evaluating $f$ for only a single value of its argument. \subsection{Periodicity} \label{sec:ft} Deutsch \cite{deutsch} emphasized, however, that certain global properties of the function $f$ {\it can} be extracted from the state Eq.\ (\ref{massive}) by making appropriate measurements. Suppose, for example, that $f$ is a {\it periodic} function (defined on the nonnegative integers), whose period $r$ is much less than $2^L$ (where $r$ does not necessarily divide $2^L$), and that we are interested in finding the period. In general, determining $r$ is a computationally difficult task (for a classical computer) if $r$ is large. Shor's central observation is that a quantum computer, by exploiting quantum interference, can determine the period of a function efficiently. Given the state Eq.\ (\ref{massive}), this computation of the period can be performed by manipulating (and ultimately measuring) only the state of the input register---the output register need not be disturbed. For the purpose of describing the outcome of such measurements, we may trace over the unobserved state of the output register, obtaining the mixed density matrix \begin{equation} \label{periodic_density} \rho_{in,f} \> \equiv \> {\rm tr}_{out}\left(\bigl | \psi_f\bigr \rangle \bigl\langle \psi_f \bigr | \right) \> = \> {1\over r}\sum_{k=0}^{r-1} \bigl |\psi_k \bigr \rangle\bigl\langle \psi_k \bigr | \ , \end{equation} where \begin{equation} \label{periodic_state} \bigl |\psi_k \bigr \rangle \>=\> {1\over\sqrt{{\cal N}}} \sum_{j=0}^{{\cal N}-1} \bigl |x=k+rj \bigr \rangle\>_{in} \end{equation} is the coherent superposition of all the input states that are mapped to a given output. (Here ${\cal N}-1$ is the greatest integer less than $(2^L-k)/r$.) Now, Shor showed that the unitary transformation \begin{equation} \label{ft} FT:\>\> \bigl | x\bigr \rangle \longmapsto {1\over 2^{L/2}} \sum_{y=0}^{2^L-1} e^{2\pi i xy/2^L} \bigl | y\bigr \rangle \end{equation} (the Fourier transform) can be composed from a number of elementary quantum gates that is bounded from above by a polynomial in $L$. The Fourier transform can be used to probe the periodicity properties of the state Eq.\ (\ref{periodic_density}). If we apply $FT$ to the input register and then measure its value $y$, the outcome of the measurement is governed by the probability distribution \begin{equation} \label{ft_prob} P(y)\> = \> {{\cal N}\over 2^L}\left| {1\over{\cal N}} \sum_{j=0}^{{\cal N}-1} e^{2\pi iyrj/2^L}\right|^2 \ . \end{equation} This probability distribution is strongly peaked about values of $y$ of the form \begin{equation} \label{peak_cond} {y\over 2^L}\> = \>{{\rm integer}\over r} \pm O(2^{-L}) \ , \end{equation} where the integer is a random number less than $r$. (For other values of $y$, the phases in the sum over $j$ interfere destructively.) Now suppose that the period $r$ is known to be less than $2^{L/2}$. 
The minimal spacing between two distinct rational numbers, both with denominator less than $2^{L/2}$ is $O(2^{-L})$. Therefore, if we measure $y$, the rational number with denominator less than $2^{L/2}$ that is closest to $y/2^L$ is reasonably likely to be a rational number with denominator $r$, where the numerator is a random number less than $r$. Finally, it is known that if positive integers $r$ and $s<r$ are randomly selected, then $r$ and $s$ will be relatively prime with a probability of order $1/\log\log r$. Hence, even after the rational number is reduced to lowest terms, it is not unlikely that the denominator will be $r$. We conclude then (if $r$ is known to be less than $2^{L/2}$), that each time we prepare the state Eq.\ (\ref{massive}), apply the $FT$ to the input register, and then measure the input register, we have a probability of order $1/\log\log r> 1/\log L$ of successfully inferring from the measurement the period $r$ of the function $f$. Hence, if we carry out this procedure a number of times that is large compared to $\log L$, we will find the period of $f$ with probability close to unity. All that remains to be explained is how the construction of the unitary transformation $FT$ is actually carried out. A simpler construction than the one originally presented by Shor \cite {shor} was later suggested by Coppersmith \cite{coppersmith} and Deutsch \cite{deutschFFT}. (It is, in fact, the standard fast Fourier transform, adapted for a quantum computer.) In their construction, two types of elementary quantum gates are used. The first type is a single-qubit rotation \begin{equation} \label{qubit_rot} U^{(j)}: \>\> \pmatrix{\bigl|0\bigr\rangle_j\cr \bigl|1\bigr\rangle_j} \longmapsto {1\over\sqrt{2}}\pmatrix{1&1\cr 1&-1} \pmatrix{\bigl|0\bigr\rangle_j\cr \bigl|1\bigr\rangle_j} \ , \end{equation} the same transformation that was used to construct the state Eq.\ (\ref{rotate_bits}). The second type is a two-qubit conditional phase operation \begin{equation} \label{cond_phase} V^{(j,k)}(\theta): \>\> \bigl|\epsilon\bigr\rangle_j \bigl|\eta\bigr\rangle_k \longmapsto e^{i\epsilon\eta \theta} \bigl|\epsilon\bigr\rangle_j \bigl|\eta\bigr\rangle_k \ . \end{equation} That is, $V^{(j,k)}(\theta)$ multiplies the state by the phase $e^{i\theta}$ if both the $j$th and $k$th qubits have the value 1, and acts trivially otherwise. It is not difficult to verify that the transformation \begin{eqnarray} \label{hat_ft} \hat{FT}&& \equiv \left\{U^{(0)}V^{(0,1)}(\pi/2)V^{(0,2)}(\pi/4)\cdots V^{(0,L-1)}(\pi/2^{L-1})\right\}\cdots\nonumber\\ && \cdots\left\{U^{(L-3)}V^{(L-3,L-2)}(\pi/2)V^{(L-3,L-1)} (\pi/4)\right\}\cdot\left\{U^{(L-2)}V^{(L-2,L-1)}(\pi/2)\right\} \cdot\left\{U^{(L-1)}\right\} \end{eqnarray} acts as specified in Eq.\ (\ref{ft}), except that the order of the qubits in $y$ is reversed.\footnote{For a lucid explanation, see \cite{shor_long}.} (Here the transformation furthest to the {\it right} acts first.) We may act on the input register with $\hat {FT}$ rather than $FT$, and then reverse the bits of $y$ after the measurement. Thus, the implementation of the Fourier transform is achieved by composing altogether $L$ one-qubit gates and $L(L-1)/2$ two-qubit gates. Of course, in an actual device, the phases of the $V^{(j,k)}(\theta)$ gates will not be rendered with perfect accuracy. Fortunately, the peaking of the probability distribution in Eq.\ (\ref{ft_prob}) is quite robust. 
As long as the errors in the phases occurring in the sum over $j$ are small compared to $2\pi$, constructive interference will occur when the condition Eq.\ (\ref{peak_cond}) is satisfied. In particular, the gates in Eq.\ (\ref{hat_ft}) with small values of $\theta=\pi/2^{|j-k|}$ can be omitted, without much affecting the probability of finding the correct period of the function $f$. Thus (as Coppersmith \cite{coppersmith} observed), the time required to execute the $FT$ operation to fixed accuracy increases only linearly with $L$. \subsection{Factoring} \label{sec:factoring} The above observations show that a quantum computer can find the prime factors of a number efficiently, for it is well known that factoring can be reduced to the problem of finding the period of a function. Suppose we wish to find a nontrivial prime factor of the positive integer $N$. We choose a random number $x<N$. We can efficiently check, using Euclid's algorithm, whether $x$ and $N$ have a common factor. If so, we have found a factor of $N$, as desired. If not, let us compute the period of the modular exponential function \begin{equation} f_{N,x}(a)\> \equiv \> x^a ~({\rm mod}~ N) \ . \end{equation} The period is the smallest positive $r$ such that $x^r\equiv 1 ~({\rm mod}~ N)$, called the {\it order} of $x$ mod $N$. It exists whenever $N$ and $x<N$ have no common factor. Now suppose that $r$ is even, and that $x^{r/2}\not\equiv - 1 ~({\rm mod}~ N)$. Then, since $N$ divides the product $\left(x^{r/2}+1\right) \left(x^{r/2}-1\right)=x^r-1$, but does not divide either one of the factors $\left(x^{r/2}\pm 1\right)$, $N$ must have a common factor with each of $\left(x^{r/2}\pm 1\right)$. This common factor, a nontrivial factor of $N$, can then be efficiently computed. It only remains to consider how likely it is, given a random $x$ relatively prime to $N$, that the conditions $r$ even and $x^{r/2}\not\equiv - 1 ~({\rm mod}~N)$ are satisfied. In fact, it can be shown \cite{shor_long,ekert_jozsa} that, for $N$ odd, the probability that these conditions are met is at least $1/2$, except in the case where $N$ is a prime power ($N=p^\alpha$, $p$ prime). (The trouble with $N=p^\alpha$ is that in this case $\pm 1$ are the {\it only} ``square roots'' of 1 in multiplication mod $N$, so that, even if $r$ is even, $x^{r/2}\equiv -1~ ({\rm mod}~N)$ will always be satisfied.) Anyway, if $N$ is of this exceptional type (or if $N$ is even), it can be efficiently factored by conventional (classical) methods. Thus, Shor formulated a probabilistic algorithm for factoring $N$ that will succeed with probability close to 1 in a time that is bounded from above by a polynomial in $\log N$. To factor $N$ we choose $L$ so that, say, $N^2 \le 2^L<2N^2$. Then, since we know that $r<N<2^{L/2}$, we can use the method described above to efficiently compute the period $r$ of the function $f_{N,x}$. We generate the entangled state Eq.\ (\ref{massive}), apply the Fourier transform, and measure the input register, thus generating a candidate value of $r$. Then, a classical computer is used to find $\gcd(x^{r/2}-1,N)$. If there is a nontrivial common divisor, we have succeeded in finding a factor of $N$. If not, we repeat the procedure until we succeed. Of course, it is implicit in the above description that the evaluation of the function $f_{N,x}$ can be performed efficiently on the quantum computer. The computational complexity of $f_{N,x}$ is, in fact, the main topic of this paper.
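The classical reduction just described is easy to check on a small example. In the sketch below (illustrative Python; here the order $r$ is found by brute-force search, which is of course the step that the quantum period-finding routine is meant to replace), taking $N=15$ and $x=7$ gives $r=4$, and $\gcd(7^{2}-1,15)=3$ is a nontrivial factor:
\begin{verbatim}
from math import gcd

def factor_from_order(N, x):
    """Classical post-processing of Shor's algorithm.  The order r
    of x mod N is found here by exhaustive search (standing in for
    the quantum period-finding step); when r is even and
    x^(r/2) != -1 (mod N), gcd(x^(r/2) - 1, N) is a nontrivial factor."""
    if gcd(x, N) != 1:
        return gcd(x, N)                    # x already shares a factor with N
    r = 1
    while pow(x, r, N) != 1:
        r += 1
    if r % 2 == 1 or pow(x, r // 2, N) == N - 1:
        return None                         # unlucky choice of x; try another
    return gcd(pow(x, r // 2) - 1, N)

print(factor_from_order(15, 7))             # order of 7 mod 15 is 4; prints 3
\end{verbatim}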
\subsection{Outlook} It is widely believed that no classical algorithm can factor a large number in polynomially bounded time (though this has never been rigorously demonstrated). The existence of Shor's algorithm, then, indicates that the classification of complexity for quantum computation differs from the corresponding classical classification. Aside from being an interesting example of an intrinsically hard problem, factoring is also of some practical interest---the security of the widely used RSA public key cryptography scheme \cite{rsa} relies on the presumed difficulty of factoring large numbers. It is not yet known whether a quantum computer can efficiently solve ``NP-complete'' problems, which are believed to be intrinsically more difficult than the factoring problem. (The ``traveling salesman problem'' is a notorious example of an NP-complete problem.) It would be of great fundamental interest (and perhaps of practical interest) to settle this question. Conceivably, a positive answer could be found by explicitly exhibiting a suitable algorithm. In any event, better characterizing the class of problems that can be solved in ``quantum polynomial time'' is an important unsolved problem. The quantum factoring algorithm works by coherently summing an exponentially large number of amplitudes that interfere constructively, building up the strong peaks in the probability distribution Eq.\ (\ref{ft_prob}). Unfortunately, this ``exponential coherence'' is extremely vulnerable to the effects of noise \cite{noise}. When the computer interacts with its environment, the quantum state of the computer becomes entangled with the state of the environment; hence the pure quantum state of the computer decays to an incoherent mixed state, a phenomenon known as decoherence. Just as an illustration, imagine that, after the coherent superposition state Eq.\ (\ref{periodic_state}) is prepared, each qubit has a probability $p\ll 1$ of decohering completely before the $FT$ is applied and the device is measured; in other words, $pL$ of the $L$ qubits decohere, and the state of the computer becomes entangled with $2^{pL}$ mutually orthogonal states of the environment. Thus, the number of terms in the coherent sum in Eq.\ (\ref{ft_prob}) is reduced by the factor $2^{-pL}$, and the peaks in the probability distribution are weakened by the factor $2^{-2pL}$. For any nonzero $p$, then, the probability of successfully finding a factor decreases exponentially as $L$ grows large. Interaction with the environment, and hence decoherence, always occur at some level. It seems, then, that the potential of a quantum computer to solve hard problems efficiently can be realized only if suitable schemes are found that control the debilitating effects of decoherence. In some remarkable recent developments \cite{error_correct}, clever error correction schemes have been proposed for encoding and {\it storing} quantum information that sharply reduce its susceptibility to noise. Some remaining challenges are: to incorporate error correction into the operation of a quantum network (so that it can operate with high reliability in spite of the effects of decoherence), and to find efficient error-correction schemes that can be implemented in realistic working devices. \section{The Linear Ion Trap} \subsection{A realizable device} The hardware for a quantum computer must meet a variety of demanding criteria.
A suitable method for storing qubits should be chosen such that: (1) the state of an individual qubit can be controlled and manipulated, (2) carefully controlled strong interactions between distinct qubits can be induced (so that nonlinear logic gates can be constructed), and (3) the state of a qubit can be read out efficiently. Furthermore, to ensure effective operation: (1) the storage time for the qubits must be long enough so that many logical operations can be performed, (2) the machine should be free of imperfections that could introduce errors in the logic gates, and (3) the machine should be well isolated from its environment, so that the characteristic decoherence time is sufficiently long. Cirac and Zoller \cite{cirac} proposed an incarnation of a quantum computer that meets these criteria remarkably well and that may be within the grasp of existing technology. In their proposal, ions are collected in a linear harmonic trap. The internal state of each ion encodes one qubit: the ground state $\bigl | g\bigr\rangle$ is interpreted as $\bigl | 0\bigr\rangle$, and a long-lived metastable excited state $\bigl | e\bigr\rangle$ is interpreted as $\bigl | 1\bigr\rangle$. The quantum state of the computer in this basis can be efficiently read out by the ``quantum jump method'' \cite{jump}. A laser is tuned to a transition from the state $\bigl | g\bigr\rangle$ to a short-lived excited state that decays back to $\bigl | g\bigr\rangle$; when the laser illuminates the ions, each qubit with value $\bigl | 0\bigr\rangle$ fluoresces strongly, while the qubits with value $\bigl | 1\bigr\rangle$ remain dark. Coulomb repulsion keeps the ions sufficiently well separated that they can be {\it individually} addressed by pulsed lasers \cite{wineland}. If a laser is tuned to the frequency $\omega$, where $\hbar\omega$ is the energy splitting between $\bigl | g\bigr\rangle$ and $\bigl | e\bigr\rangle$, and is focused on the $i$th ion, then Rabi oscillations are induced between $\bigl | 0\bigr\rangle_i$ and $\bigl | 1\bigr\rangle_i$. By timing the laser pulse properly, and choosing the phase of the laser appropriately, we can prepare the $i$th ion in an arbitrary superposition of $\bigl | 0\bigr\rangle_i$ and $\bigl | 1\bigr\rangle_i$. (Of course, since the states $\bigl | g\bigr\rangle$ and $\bigl | e\bigr\rangle$ are nondegenerate, the relative phase in this linear combination rotates with time as $e^{-i\omega t}$ even when the laser is turned off. It is most convenient to express the quantum state of the qubits in the interaction picture, so that this time-dependent phase is rotated away.) Crucial to the functioning of the quantum computer are the quantum gates that induce entanglement between distinct qubits. The qubits must interact if nontrivial quantum gates are to be constructed. In the ion trap computer, the interactions are effected by the Coulomb repulsion between the ions. Because of the mutual Coulomb repulsion, there is a spectrum of coupled normal modes for the ion motion. When an ion absorbs or emits a laser photon, the center of mass of the ion recoils. But if the laser is properly tuned, then when a single ion absorbs or emits, a normal mode involving many ions will recoil coherently (as in the M\"ossbauer effect). The vibrational mode of lowest frequency (frequency $\nu$) is the center-of-mass (CM) mode, in which the ions oscillate in lockstep in the harmonic well of the trap.
The ions can be laser cooled to a temperature much less than $\nu$, so that each vibrational normal mode is very likely to occupy its quantum-mechanical ground state. Now imagine that a laser tuned to the frequency $\omega-\nu$ shines on the $i$th ion. For a properly timed pulse (a $\pi$ pulse, or a $k\pi$ pulse for $k$ odd), the state $\bigl | e\bigr\rangle_i$ will rotate to $\bigl | g\bigr\rangle_i$, while the CM oscillator makes a transition from its ground state $\bigl | 0\bigr\rangle_{\rm CM}$ to its first excited state $\bigl | 1\bigr\rangle_{\rm CM}$ (a CM ``phonon'' is produced). However, the state $\bigl | g\bigr\rangle_i\bigl | 0\bigr\rangle_{\rm CM} $ is not on resonance for any transition, and so is unaffected by the pulse. Thus, with a single laser pulse, we may induce the unitary transformation \begin{equation} \label{Wphonon} W^{(i)}_{\rm phon}:\>\> \left\{\begin{array}{lll}&\bigl | g\bigr\rangle_i| 0\bigr\rangle_{\rm CM}\longmapsto &| g\bigr\rangle_i| 0\bigr\rangle_{\rm CM}\nonumber\\ &\bigl | e\bigr\rangle_i| 0\bigr\rangle_{\rm CM}\longmapsto -i&| g\bigr\rangle_i| 1\bigr\rangle_{\rm CM}\end{array}\right\}\nonumber\\ \end{equation} This operation removes a bit of information that is initially stored in the internal state of the $i$th ion, and deposits the bit in the CM phonon mode. Applying $W^{(i)}_{\rm phon}$ again would reverse the operation (up to a phase), removing the phonon and reinstating the bit stored in ion $i$. However, all of the ions couple to the CM phonon, so that once the information has been transferred to the CM mode, this information will influence the response of ion $j$ if a laser pulse is subsequently directed at that ion. By this scheme, nontrivial logic gates can be constructed, as we will describe in more detail below. An experimental demonstration of an operation similar to $W^{(i)}_{\rm phon}$ was recently carried out by Monroe {\it et al.} \cite{monroe}. In this experiment, a single $^9 Be^+$ ion occupied the trap. In earlier work, a linear trap was constructed that held 33 ions, but these were not cooled down to the vibrational ground state. The effort to increase the number of qubits in a working device is ongoing. Perhaps the biggest drawback of the ion trap is that it is an intrinsically slow device. Its speed is ultimately limited by the energy-time uncertainty relation; since the uncertainty in the energy of the laser photons should be small compared to the characteristic vibrational splitting $\nu$, the pulse must last a time large compared to $\nu^{-1}$. In the Monroe {\it et al.} experiment, $\nu$ was as large as 50 MHz, but it is likely to be orders of magnitude smaller in a device that contains many ions. In an alternate version of the above scheme (proposed by Pellizzari {\it et al.} \cite{pellizzari}) many atoms are stored in an optical cavity, and the atoms interact via the cavity photon mode (rather than the CM vibrational mode). In principle, quantum gates in a scheme based on cavity QED could be intrinsically much faster than gates implemented in an ion trap. An experimental demonstration of a rudimentary quantum gate involving photons interacting with an atom in a cavity was recently reported by Turchette {\it et al.} \cite{kimble}. \subsection{Conditional phase gate} An interesting two-qubit gate can be constructed by applying three laser pulses \cite{cirac}.
After a phonon has been (conditionally) excited, we can apply a laser pulse to the $j$th ion that is tuned to the transition $\bigl | g\bigr \rangle_j \bigl | 1 \bigr \rangle_{\rm CM}\longmapsto \bigl | e'\bigr \rangle_j \bigl | 0 \bigr \rangle_{\rm CM}$, where $\bigl | e'\bigr \rangle$ is another excited state (different from $\bigl | e\bigr \rangle$) of the ion. The effect of a $2\pi$ pulse is to induce the transformation \begin{equation} V^{(j)}:\>\> \left\{\begin{array}{lll}&\bigl | g\bigr\rangle_j| 0\bigr\rangle_{\rm CM}\longmapsto &| g\bigr\rangle_j| 0\bigr\rangle_{\rm CM}\nonumber\\ &\bigl | e\bigr\rangle_j| 0\bigr\rangle_{\rm CM}\longmapsto &| e\bigr\rangle_j| 0\bigr\rangle_{\rm CM}\nonumber\\ & \bigl | g\bigr\rangle_j| 1\bigr\rangle_{\rm CM}\longmapsto -&| g\bigr\rangle_j| 1\bigr\rangle_{\rm CM}\nonumber\\ &\bigl | e\bigr\rangle_j| 1\bigr\rangle_{\rm CM}\longmapsto &| e\bigr\rangle_j| 1\bigr\rangle_{\rm CM}\end{array}\right\}\nonumber\\ \end{equation} Only the phase of the state $| g\bigr\rangle_j| 1\bigr\rangle_{\rm CM}$ is affected by the $2\pi$ pulse, because this is the only state that is on resonance for a transition when the laser is switched on. (It would not have had the same effect if we had tuned the laser to the transition from $\bigl |g\bigr\rangle| 1\bigr\rangle_{\rm CM}$ to $ | e\bigr\rangle| 0\bigr\rangle_{\rm CM}$, because then the state $ | e\bigr\rangle| 0\bigr\rangle_{\rm CM}$ would also have been modified by the pulse.) Applying $W^{(i)}_{\rm phon}$ again removes the phonon, and we find that \begin{equation} \label{phase_gate} V^{(i,j)}\>\equiv\> W^{(i)}_{\rm phon}\cdot V^{(j)} \cdot W^{(i)}_{\rm phon}:\>\> \bigl|\epsilon\bigr\rangle_i \bigl|\eta\bigr\rangle_j \longmapsto (-1)^{\epsilon\eta} \bigl|\epsilon\bigr\rangle_i \bigl|\eta\bigr\rangle_j \end{equation} is a {\it conditional phase} gate; it multiplies the quantum state by $(-1)$ if the qubits $\bigl|\cdot\bigr\rangle_i$ and $\bigl|\cdot\bigr\rangle_j$ both have the value 1, and acts trivially otherwise. A remarkable and convenient feature of this construction is that the two qubits that interact need not be in neighboring positions in the linear trap. In principle, the ions on which the gate acts could be arbitrarily far apart. This gate can be generalized so that the conditional phase $(-1)$ is replaced by an arbitrary phase $e^{i\theta}$---we replace the $2\pi$ pulse directed at ion $j$ by two $\pi$ pulses with differing values of the laser phase, and modify the laser phase for one of the $\pi$ pulses directed at ion $i$. Thus, with 4 pulses, we construct the conditional phase transformation $V^{(i,j)}(\theta)$ defined in Eq.\ (\ref{cond_phase}) that is needed to implement the Fourier transform $\hat {FT}$. The $L$-qubit Fourier transform, then, requiring $L(L-1)/2$ conditional phase gates and $L$ single-qubit rotations, can be implemented with altogether $L(2L-1)$ laser pulses. Actually, we confront one annoying little problem when we attempt to implement the Fourier transform. The single-qubit rotations that can be simply induced by shining the laser on an ion are unitary transformations with determinant one (the exponential of an off-diagonal Hamiltonian), while the rotation $U^{(j)}$ defined in Eq.\ (\ref{qubit_rot}) actually has determinant $(-1)$.
We can replace $U^{(j)}$ in the construction of the $\hat {FT}$ operator (Eq.\ (\ref{hat_ft})) by the transformation \begin{equation} \label{det_one_rot} \tilde U^{(j)}: \>\> \pmatrix{\bigl|0\bigr\rangle_j\cr \bigl|1\bigr\rangle_j} \longmapsto {1\over\sqrt{2}}\pmatrix{1&1\cr -1& 1} \pmatrix{\bigl|0\bigr\rangle_j\cr \bigl|1\bigr\rangle_j} \end{equation} (which {\it can} be induced by a single laser pulse with properly chosen laser phase). However, the transformation $\tilde {FT} $ thus constructed differs from $\hat{FT}$ according to \begin{equation} \bigl\langle y \bigr | \tilde{FT} \bigl | x\bigr \rangle \> =\> \left( -1\right) ^{{\rm Par} (y)}\bigl\langle y \bigr | \hat{FT} \bigl | x\bigr \rangle \ , \end{equation} where ${\rm Par}(y)$ is the {\it parity} of $y$, the number of 1's appearing in its binary expansion. Fortunately, the additional phase $(-1)^{{\rm Par}(y)}$ has no effect on the probability distribution Eq.\ (\ref{ft_prob}), so this construction is adequate for the purpose of carrying out the factorization algorithm. \subsection{Controlled$^k$-NOT gate} \label{sec:NOTpulses} The conditional $(-1)$ phase gate Eq.\ (\ref{phase_gate}) differs from a {\it controlled-NOT} gate by a mere change of basis \cite{cirac}. The controlled-NOT operation $C_{{\lbrack\!\lbrack i \rbrack\!\rbrack},j}$ acts as \begin{equation} C_{{\lbrack\!\lbrack i \rbrack\!\rbrack},j}:\>\> \bigl|\epsilon\bigr\rangle_i \bigl|\eta\bigr\rangle_j \longmapsto \bigl|\epsilon\bigr\rangle_i \bigl|\eta\oplus\epsilon\bigr\rangle_j \ , \end{equation} where $\oplus$ denotes the logical XOR operation (binary addition mod 2). Thus $C_{{\lbrack\!\lbrack i \rbrack\!\rbrack},j}$ flips the value of the target qubit $\bigl|\cdot\bigr\rangle_j$ if the control qubit $\bigl|\cdot\bigr\rangle_i$ has the value 1, and acts trivially otherwise. We see that the controlled-NOT can be constructed as \begin{equation} \label{trap_CN} C_{{\lbrack\!\lbrack i \rbrack\!\rbrack},j}\>\equiv \left[\tilde U^{(j)}\right]^{-1}\cdot V^{(i,j)}\cdot \tilde U^{(j)}\> = \> \left[\tilde U^{(j)}\right]^{-1}\cdot W^{(i)}_{\rm phon}\cdot V^{(j)} \cdot W^{(i)}_{\rm phon}\cdot \tilde U^{(j)} \end{equation} where $\tilde U^{(j)}$ is the single-qubit rotation defined in Eq.\ (\ref{det_one_rot}). Since $\tilde U^{(j)}$ (or its inverse) can be realized by directing a $\pi/2$ pulse at ion $j$, we see that the controlled-NOT operation can be implemented in the ion trap with altogether 5 laser pulses. The controlled-NOT gate can be generalized to an operation that has a string of $k$ control qubits; we will refer to this operation as the controlled$^k$-NOT operation. (For $k=2$, it is often called the Toffoli gate.) Its action is \begin{equation} C_{\lbrack\!\lbrack{i_1,\dots,i_k}\rbrack\!\rbrack,j} : \>\> \bigl |{\epsilon_1}\bigr\rangle_{i_1} \cdots \bigl |{\epsilon_k} \bigr\rangle_{i_k} \bigl | \epsilon \bigr\rangle_j \longmapsto \bigl | {\epsilon_1} \bigr\rangle_{i_1} \cdots \bigl | {\epsilon_k} \bigr\rangle_{i_k} \bigl |{\epsilon \oplus (\epsilon_1 \wedge \cdots \wedge \epsilon_k)} \bigr\rangle_j \ , \end{equation} where $\wedge$ denotes the logical AND operation (binary multiplication). If all $k$ of the control qubits labeled $i_1,\dots,i_k$ take the value 1, then $C_{\lbrack\!\lbrack{i_1,\dots,i_k}\rbrack\!\rbrack,j}$ flips the value of the target qubit labeled $j$; otherwise, $C_{\lbrack\!\lbrack{i_1,\dots,i_k}\rbrack\!\rbrack,j}$ acts trivially.
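At the level of the computational basis the controlled$^k$-NOT is a purely classical (reversible) rule, which the following Python sketch makes explicit (illustrative only; on the quantum register the same rule is applied coherently to every basis state in a superposition, and the ion-trap pulse sequence that realizes it is described next):
\begin{verbatim}
def controlled_k_not(bits, controls, target):
    """Flip bits[target] if and only if bits[i] == 1 for every
    control index i.  `bits` is a list of classical 0/1 values."""
    bits = list(bits)
    if all(bits[i] for i in controls):
        bits[target] ^= 1
    return bits

# the Toffoli gate is the case k = 2
print(controlled_k_not([1, 1, 0], controls=[0, 1], target=2))  # [1, 1, 1]
print(controlled_k_not([1, 0, 0], controls=[0, 1], target=2))  # [1, 0, 0]
\end{verbatim}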
To implement this gate in the ion trap, we will make use of an operation $V^{(i)}_{\rm phon}$ that is induced by directing a $\pi$ pulse at ion $i$ tuned to the transition $\bigl | g\bigr \rangle_i \bigl | 1 \bigr \rangle_{\rm CM}\longmapsto \bigl | e'\bigr \rangle_i \bigl | 0 \bigr \rangle_{\rm CM}$; its action is \begin{equation} V^{(i)}_{\rm phon}:\>\> \left\{\begin{array}{lll}&\bigl | g\bigr\rangle_i| 0\bigr\rangle_{\rm CM}\longmapsto &| g\bigr\rangle_i| 0\bigr\rangle_{\rm CM}\nonumber\\ &\bigl | e\bigr\rangle_i| 0\bigr\rangle_{\rm CM}\longmapsto &| e\bigr\rangle_i| 0\bigr\rangle_{\rm CM}\nonumber\\ & \bigl | g\bigr\rangle_i| 1\bigr\rangle_{\rm CM}\longmapsto -i&| e'\bigr\rangle_i| 0\bigr\rangle_{\rm CM}\nonumber\\ &\bigl | e\bigr\rangle_i| 1\bigr\rangle_{\rm CM}\longmapsto &| e\bigr\rangle_i| 1\bigr\rangle_{\rm CM}\end{array}\right\}\nonumber\\ \end{equation} The pulse has no effect unless the initial state is $\bigl | g\bigr\rangle_i| 1\bigr\rangle_{\rm CM}$, in which case the phonon is absorbed and ion $i$ undergoes a transition to the state $| e'\bigr\rangle_i$. We thus see that the controlled$^k$-NOT gate can be constructed as \cite{cirac} \begin{equation} C_{{\lbrack\!\lbrack i_1,\dots,i_k \rbrack\!\rbrack},j}\>\equiv \> \left[\tilde U^{(j)}\right]^{-1}\cdot W^{(i_1)}_{\rm phon}\cdot V^{(i_2)}_{\rm phon} \cdots V^{(i_{k})}_{\rm phon}\cdot V^{(j)} \cdot V^{(i_{k})}_{\rm phon} \cdots V^{(i_{2})}_{\rm phon}\cdot W^{(i_1)}_{\rm phon}\cdot \tilde U^{(j)} \ . \end{equation} To understand how the construction works, note first of all that if $\epsilon_{1}=0$, no phonon is ever excited and none of the pulses have any effect. If $\epsilon_1=\epsilon_2=\cdots =\epsilon_{m-1}=1$ and $\epsilon_m=0$ ($m\le k$), then the first $W^{(i_1)}_{\rm phon}$ produces a phonon that is absorbed during the first $V^{(i_{m})}_{\rm phon}$ operation, reemitted during the second $V^{(i_{m})}_{\rm phon}$ operation, and finally absorbed again during the second $W^{(i_1)}_{\rm phon}$; the other pulses have no effect. Since each of the four pulses that is on resonance advances the phase of the state by $\pi/2$, there is no net change of phase. If $\epsilon_1=\epsilon_2=\cdots =\epsilon_{k}=1$, then a phonon is excited by the first $W^{(i_1)}_{\rm phon}$, and all of the $V^{(i_{m})}_{\rm phon}$'s act trivially; hence in this case $C_{{\lbrack\!\lbrack i_1,\dots,i_k \rbrack\!\rbrack},j}$ has the same action as $C_{{\lbrack\!\lbrack i_1 \rbrack\!\rbrack},j}$. We find, then, that the controlled$^k$-NOT gate ($k=1,2,\dots$) can be implemented in the ion trap with altogether $2k+3$ laser pulses. These gates are the fundamental operations that we will use to build the modular exponential function.\footnote{In fact, the efficiency of our algorithms could be improved somewhat if we adopted other fundamental gates that can also be simply implemented with the ion trap. Implementations of some alternative gates are briefly discussed in Appendix A.} \section{Modular exponentiation: Some general features} In the next section, we will describe in detail several algorithms for performing modular exponentiation on a quantum computer. These algorithms evaluate the function \begin{equation} \label{expfunction} f_{N,x}(a)=x^a ~({\rm mod} ~N)\ , \end{equation} where $N$ and $x$ are $K$-bit classical numbers ({\it $c$-numbers}) and $a$ is an $L$-qubit quantum number ({\it $q$-number}). Our main motivation, of course, is that the evaluation of $f_{N,x}$ is the bottleneck of Shor's factorization algorithm.
Most of our algorithms require a ``time'' (number of elementary quantum gates) of order $K^3$ for large $K$. In fact, for asymptotically large $K$, faster algorithms (time of order $K^2\log(K) \log\log (K)$) are possible---these take advantage of tricks for performing efficient multiplication of very large numbers \cite{fast_mult}. We will not consider these asymptotically faster algorithms in any detail here. Fast multiplication requires additional storage space. Furthermore, because fast multiplication carries a high overhead cost, the advantage in speed is realized only when the numbers being multiplied are enormous. We will concentrate instead on honing the efficiency of algorithms requiring $K^3$ time, and will study the tradeoff of computation time versus storage space for these algorithms. We will also briefly discuss an algorithm that takes considerably longer ($K^5$ time), but enables us to compress the storage space further. Finally, we will describe a ``customized'' algorithm that is designed to evaluate $f_{N,x}$ in the case $N=15$, the smallest value of $N$ for which Shor's algorithm can be applied. Unsurprisingly, this customized algorithm is far more efficient, both in terms of computation time and memory use, than our general purpose algorithms that apply for any value of $N$ and $x$. \subsection{The model of computation} \label{model} {\bf A classical computer and a quantum computer}: The machine that runs our program can be envisioned as a quantum computer controlled by a classical computer. The input that enters the machine consists of both classical data (a string of classical bits) and quantum data (a string of qubits prepared in a particular quantum state). The classical data takes a definite fixed value throughout the computation, while for the quantum data coherent superpositions of different basis states may be considered (and quantum entanglement of different qubits may occur). The classical computer processes the classical data, and produces an output that is a program for the quantum computer. The quantum computer is a quantum gate network of the sort described by Deutsch \cite{deutsch}. The program prepared by the classical computer is a list of elementary unitary transformations that are to be applied sequentially to the input state in the quantum register. (Typically, these elementary transformations act on one, two, or three qubits at a time; their precise form will vary depending on the design of the quantum computer.) Finally, the classical computer calls a routine that measures the state of a particular string of qubits, and the result is recorded. The result of this final measurement is the output of our device. This division between classical and quantum data is not strictly necessary. Naturally, a $c$-number is just a special case of a $q$-number, so we could certainly describe the whole device as a quantum gate network (though of course, our classical computer, unlike the quantum network, can perform irreversible operations). However, if we are interested in how a practical quantum computer might function, the distinction between the quantum computer and the classical computer is vitally important. In view of the difficulty of building and operating a quantum computer, if there is any operation performed by our device that is intrinsically classical, it will be highly advantageous to assign this operation to the classical computer; the quantum computer should be reserved for more important work.
(This is especially so since it is likely to be quite a while before a quantum computer's ``clock speed'' will approach the speed of contemporary classical computers.) {\bf Counting operations:} Accordingly, when we count the operations that our algorithms require, we will be keeping track only of the elementary gates employed by the quantum computer, and will not discuss in detail the time required for the classical computer to process the classical data. Of course, for our device to be able to perform efficient factorization, the time required for the classical computation must be bounded above by a polynomial in $K$. In fact, the classical operations take a time of order $K^3$; thus, the operation of the quantum computer is likely to dominate the total computation time even for a very long computation.\footnote{Indeed, one important reason that we insist that the quantum computer is controlled by a classical computer is that we want to have an honest definition of computational complexity; if it required an exponentially long classical computation to figure out how to program the quantum computer, it would be misleading to say that the quantum computer could solve a problem efficiently.} In the case of the evaluation of the modular exponential function $f_{N,x}(a)$, the classical input consists of $N$ and $x$, and the quantum input is $a$ stored in the quantum register; in addition, the quantum computer will require some additional qubits (initially in the state $|0\rangle$) that will be used for scratch space. The particular sequence of elementary quantum gates that are applied to the quantum input will depend on the values of the classical variables. In particular, the number of operations is actually a complicated function of $N$ and $x$. For this reason, our statements about the number of operations performed by the quantum computer require clarification. We will report the number of operations in two forms, which we will call the ``worst case'' and the ``average case.'' Our classical computer will typically compute and read a particular classical bit (or sequence of bits) and then decide on the basis of its value what operation to instruct the quantum computer to perform next. For example, the quantum computer might be instructed to apply a particular elementary gate if the classical bit reads 1, but to do nothing if it reads 0. To count the number of operations in the worst case, we will assume that the classical control bits always assume the value that maximizes the number of operations performed. This worst case counting will usually be a serious overestimate. A much more realistic estimate is obtained if we assume that the classical control bits are random (0 50\% of the time and 1 50\% of the time). This is how the ``average case'' estimate is arrived at. {\bf The basic machine and the enhanced machine:} Our quantum computer can be characterized by the elementary quantum gates that are ``hard-wired'' in the device. We will consider two different possibilities. In our ``basic machine'' the elementary operations will be the single-qubit NOT operation, the two-qubit controlled-NOT operation, and the three-qubit controlled-controlled-NOT operation (or Toffoli gate).
These elementary gates are not computationally universal (we cannot construct arbitrary unitary operations by composing them), but they will suffice for our purposes; our machine won't need to be able to do anything else.\footnote{That is, these operations suffice for evaluation of the modular exponential function. Other gates will be needed to perform the discrete Fourier transform, as described in Sec. \ref{sec:ft}.} Our ``enhanced machine'' is equipped with these gates plus two more---a 4-qubit controlled$^3$-NOT gate and a 5-qubit controlled$^4$-NOT gate. In fact, the extra gates that are standard equipment for the enhanced machine can be simulated by the basic machine. However, this simulation is relatively inefficient, so that it might be misleading to quote the number of operations required by the basic machine when the enhanced machine could actually operate much faster. In particular, Cirac and Zoller described how to execute a controlled$^k$-NOT ($k\ge 1$) operation using $2k+3$ laser pulses in the linear ion trap; thus, {\it e.g.}, the controlled$^4$-NOT operation can be performed much more quickly in the ion trap than if it had to be constructed from controlled$^k$-NOT gates with $k=0,1,2$. To compare the speed of the basic machine and the enhanced machine, we must assign a relative cost to the basic operations. We will do so by expressing the number of operations in the currency of laser pulses under the Cirac-Zoller scheme: 1 pulse for a NOT, 5 for a controlled-NOT, 7 for a controlled$^2$-NOT, 9 for a controlled$^3$-NOT, and 11 for a controlled$^4$-NOT. We realize that this measure of speed is very crude. In particular, not all laser pulses are really equivalent. Different pulses may actually have differing frequencies and differing durations. Nevertheless, for the purpose of comparing the speed of different algorithms, we will make the simplifying assumption that the quantum computer has a fixed clock speed, and administers a laser pulse to an ion in the trap once in each cycle. The case of the (uncontrolled) NOT operation requires special comment. In the Cirac-Zoller scheme, the single qubit operations are always $2\times 2$ unitary operations of determinant one (the exponential of an off-diagonal $2\times 2$ Hamiltonian). But the NOT operation has determinant $(-1)$. A simple solution is to use the operation $i\cdot$(NOT) instead (which does have determinant 1 and can be executed with a single laser pulse). The overall phase $(i)$ has no effect on the outcome of the computation. Hence, we take the cost of a NOT operation to be one pulse. In counting operations, we assume that the controlled$^k$-NOT operation can be performed on any set of $k+1$ qubits in the device. Indeed, a beautiful feature of the Cirac-Zoller proposal is that the efficiency of the gate implementation is unaffected by the proximity of the ions. Accordingly, we do not assign any cost to ``swapping'' the qubits before they enter a quantum gate.\footnote{For a different type of hardware, such as the device envisioned by Lloyd,\cite{lloyd} swapping of qubits would be required, and the number of elementary operations would be correspondingly larger.} \subsection{Saving space} \label{sec:space} A major challenge in programming a quantum computer is to minimize the ``scratchpad space'' that the device requires. We will repeatedly appeal to two basic tricks (both originally suggested by C. Bennett \cite{bennett,bennett_trade}) to make efficient use of the available space.
{\bf Erasing garbage:} Suppose that a unitary transformation $F$ is constructed that computes a (not necessarily invertible) function $f$ of a $q$-number input $b$. Typically, besides writing the result $f(b)$ in the output register, the transformation $F$ will also fill a portion of the scratchpad with some expendable garbage $g(b)$; the action of $F$ can be expressed as \begin{equation} F_{\alpha,\beta,\gamma}: |b\rangle_\alpha|0\rangle_\beta|0\rangle_\gamma \longmapsto |b\rangle_\alpha|f(b)\rangle_\beta |g(b)\rangle_\gamma \ , \end{equation} where $|\cdot\rangle_\alpha$, $|\cdot\rangle_\beta$, $|\cdot\rangle_\gamma$ denote the input, output, and scratch registers, respectively. Before proceeding to the next step of the computation, we would like to clear $g(b)$ out of the scratch register, so that the space $|\cdot\rangle_\gamma$ can be reused. To erase the garbage, we invoke a unitary operation $COPY_{\beta,\delta}$ that copies the contents of $|\cdot\rangle_\beta$ to an additional register $|\cdot\rangle_{\delta}$, and then we apply the {\it inverse} $F^{-1}$ of the unitary operation $F$. Thus, we have \begin{equation} XF_{\alpha,\beta,\gamma,\delta}\equiv F^{-1}_{\alpha,\beta,\gamma}\cdot COPY_{\beta,\delta} \cdot F_{\alpha,\beta,\gamma}: |b\rangle_\alpha |0\rangle_\beta |0\rangle_\gamma |0\rangle_\delta \longmapsto |b\rangle_\alpha |0\rangle_\beta |0\rangle_\gamma |f(b)\rangle_\delta \ . \end{equation} The composite operation $XF$ uses both of the registers $|\cdot\rangle_\beta$ and $|\cdot\rangle_\gamma$ as scratch space, but it cleans up after itself. Note that $XF$ preserves the value of $b$ in the input register. This is necessary, for a general function $f$, if the operation $XF$ is to be invertible. {\bf Overwriting invertible functions:} We can clear even more scratch space in the special case where $f$ is an invertible function. In that case, we can also construct another unitary operation $XFI$ that computes the inverse function $f^{-1}$; that is, \begin{equation} \label{xfi} XFI_{\alpha,\beta}: |b\rangle_\alpha|0\rangle_\beta \longmapsto |b\rangle_\alpha |f^{-1}(b)\rangle_\beta \ , \end{equation} or, equivalently, \begin{equation} \label{xfi2} XFI_{\beta,\alpha}: |0\rangle_\alpha|f(b)\rangle_\beta \longmapsto |b\rangle_\alpha |f(b)\rangle_\beta \ . \end{equation} ($XFI$, like $XF$, requires scratchpad space. But since $XFI$, like $XF$, leaves the state of the scratchpad unchanged, we have suppressed the scratch registers in Eq.\ (\ref{xfi}) and Eq.\ (\ref{xfi2}).) By composing $XF$ and $XFI^{-1}$, we obtain an operation $OF$ that evaluates the function $f(b)$ and ``overwrites'' the input $b$ with the result $f(b)$: \begin{equation} OF_{\alpha,\beta}\equiv XFI^{-1}_{\beta,\alpha}\cdot XF_{\alpha,\beta}: |b\rangle_\alpha|0\rangle_\beta\longmapsto |0\rangle_\alpha |f(b)\rangle_\beta\ {}. \end{equation} (Strictly speaking, this operation does not ``overwrite'' the input; rather, it erases the input register $|\cdot\rangle_\alpha$ and writes $f(b)$ in a different register $|\cdot\rangle_\beta$. A genuinely overwriting version of the evaluation of $f$ can easily be constructed, if desired, by following $OF$ with a unitary $SWAP$ operation that interchanges the contents of the $|\cdot\rangle_\alpha$ and $|\cdot\rangle_\beta$ registers. Even more simply, we can merely swap the {\it labels} on the registers, a purely classical operation.)
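The bookkeeping behind these two tricks can be checked with a short classical simulation. In the sketch below (our own illustration, not a gate-level construction) the function is multiplication by a fixed $y$ modulo $N$, which is invertible since $gcd(y,N)=1$; the garbage $g(b)$ is an arbitrary stand-in, and the registers $\alpha$, $\beta$, $\gamma$, $\delta$ are modeled as ordinary integers:
\begin{verbatim}
# Toy illustration of XF = F^{-1} . COPY . F for f(b) = y*b mod N.
# The registers (alpha, beta, gamma, delta) = (input, output, scratch, copy)
# are modeled as ordinary integers; every step below is reversible.
N, y = 21, 5                              # gcd(5, 21) = 1, so f is invertible

def F(alpha, beta, gamma):                # |b>|0>|0> -> |b>|f(b)>|g(b)>
    return alpha, (beta + y * alpha) % N, gamma ^ alpha

def F_inv(alpha, beta, gamma):            # the exact inverse of F
    return alpha, (beta - y * alpha) % N, gamma ^ alpha

def XF(alpha, beta, gamma, delta):
    alpha, beta, gamma = F(alpha, beta, gamma)      # compute f(b) and junk
    delta ^= beta                                   # COPY the result
    alpha, beta, gamma = F_inv(alpha, beta, gamma)  # uncompute the junk
    return alpha, beta, gamma, delta

print(XF(8, 0, 0, 0))            # (8, 0, 0, 19): 5*8 mod 21 = 19, scratch clean
\end{verbatim}
Composing this routine with the analogous one for $f^{-1}$ (multiplication by $y^{-1}$ mod $N$, supplied by the classical computer) also erases the input register, which is the overwriting operation $OF$ described above.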
In our algorithms for evaluating the modular exponentiation function, the binary arithmetic operations that we perform have one classical operand and one quantum operand. For example, we evaluate the product $y\cdot b ~({\rm mod}~N)$, where $y$ is a $c$-number and $b$ is a $q$-number. Evaluation of the product can be viewed as the evaluation of a {\it function} $f_y(b)$ that is determined by the value of the $c$-number $y$. Furthermore, since the positive integers less than $N$ that are relatively prime to $N$ form a group under multiplication, the function $f_y$ is an {\it invertible} function if $gcd(y,N)=1$. Thus, for $gcd(y,N)=1$, we can (and will) use the above trick to overwrite the $q$-number $b$ with a new $q$-number $y\cdot b ~({\rm mod}~N)$. \subsection{Multiplexed Adder} \label{multiplexed} The basic arithmetic operation that we will need to perform is addition (mod $N$)---we will evaluate $y+b ~ ({\rm mod}~ N)$ where $y$ is a $c$-number and $b$ is a $q$-number. The most efficient way that we have found to perform this operation is to build a {\it multiplexed} mod $N$ adder. Suppose that $N$ is a $K$-bit $c$-number, that $y$ is a $K$-bit $c$-number less than $N$, and that $b$ is a $K$-qubit $q$-number, also less than $N$. Evaluation of $y+b ~ ({\rm mod}~ N)$ can be regarded as a function, determined by the $c$-number $y$, that acts on the $q$-number $b$. This function can be described by the ``pseudo-code'' \begin{eqnarray} {\tt if} & \ (N-y> b) & \quad ADD \quad y \ ,\nonumber\\ {\tt if} & \ (N-y \le b) & \quad ADD \quad y -N \ . \end{eqnarray} Our multiplexed adder is designed to evaluate this function. First a comparison is made to determine if the $c$-number $N-y$ is greater than the $q$-number $b$, and the result of the comparison is stored as a ``select qubit.'' The adder then reads the select qubit, and performs an ``overwriting addition'' operation on the $q$-number $b$, replacing it by either $y+b$ (for $N-y>b$), or $y+b-N$ (for $N-y \le b$). Finally, the comparison operation is run backwards to erase the select qubit. Actually, a slightly modified version of the above pseudo-code is implemented. Since it is a bit easier to add a positive $c$-number than a negative one, we choose to add $2^K+y-N$ to $b$ for $N-y \le b$. The $(K+1)$st bit of the sum (which is guaranteed to be 1 in this case) need not be (and is not) explicitly evaluated by the adder. \subsection{Enable bits} \label{sec:enable} Another essential feature of our algorithms is the use of ``enable'' qubits that control the arithmetic operations. Our multiplexed adder, for example, incorporates such an enable qubit. The adder reads the enable qubit, and if it has the value 1, the adder replaces the input $q$-number $b$ by the sum $y+b~({\rm mod}~N)$ (where $y$ is a $c$-number). If the enable qubit has the value 0, the adder leaves the input $q$-number $b$ unchanged. Enable qubits provide an efficient way to multiply a $q$-number by a $c$-number. A $K$-qubit $q$-number $b$ can be expanded in binary notation as \begin{equation} b=\sum_{i=0}^{K-1} b_i 2^i \ , \end{equation} and the product of $b$ and a $c$-number $y$ can be expressed as \begin{equation} b\cdot y ~({\rm mod} ~N)= \sum_{i=0}^{K-1} b_i\cdot [2^i y ~({\rm mod}~N)] \ .
\end{equation} This product can be built by running the pseudo-code: \begin{equation} {\tt For} \ i=0 \ {\tt to} \ K-1\ , \quad {\tt if} \ b_i=1\ , \quad ADD \quad 2^i y ~({\rm mod}~N)\ ; \end{equation} multiplication is thus obtained by performing $K$ {\it conditional} mod $N$ additions. Hence our multiplication routine calls the multiplexed adder $K$ times; in the $i$th call, $b_i$ is the enable bit that controls the addition. In fact, to compute the modular exponential function as described below, we will need {\it conditional} multiplication; the multiplication routine will have an enable bit of its own. Our multiplier will replace the $q$-number $b$ by the product $b\cdot y ~({\rm mod} ~N)$ (where $y$ is a $c$-number) if the enable qubit reads $1$, and will leave $b$ unchanged if the enable qubit reads 0. To construct a multiplier with an enable bit, we will need an adder with a {\it pair} of enable bits---that is, an adder that is switched on only when both enable qubits read 1. The various detailed algorithms that we will describe differ according to how enable qubits are incorporated into the arithmetic operations. The most straightforward procedure (and the most efficient, in the linear ion trap device of Cirac and Zoller) is that underlying the design of our ``enhanced machine.'' We will see that a multiplexed adder can be constructed from the elementary gates NOT, controlled-NOT and controlled$^2$-NOT. One way to promote this adder to an adder with two enable bits is to replace each controlled$^k$-NOT by a controlled$^{(k+2)}$-NOT, where the two enable bits are added to the list of control bits in each elementary gate. We thus construct a routine that performs (multiplexed) addition when both enable bits read 1, and does nothing otherwise. The routine is built from elementary controlled$^k$-NOT gates with $k=4$ or less. In fact, it will turn out that we will not really need to add enable bits to the control list of every gate. But following the above strategy does require controlled$^k$ gates for $k$=0,1,2,3,4. This is how our enhanced machine performs mod $N$ addition with two enable bits (and mod $N$ multiplication with one enable bit). Because controlled$^4$-NOT and controlled$^3$-NOT gates are easy to implement on the linear ion trap, the above procedure is an efficient way to compute the modular exponential function with an ion trap. However, for a different type of quantum computing hardware, these elementary gates might not be readily constructed. Therefore, we will also consider a few other algorithms, which are built from elementary controlled$^k$-NOT gates for only $k=0,1,2$. These algorithms for our ``basic machine'' follow the same general design as the algorithm for the ``enhanced machine,'' except that the controlled$^3$-NOT and the controlled$^4$-NOT gates are expanded out in terms of the simpler elementary operations. (The various algorithms for the basic machine differ in the amount of scratch space that they require.) \subsection{Repeated squaring} \label{sec:repeated} One way to evaluate the modular exponential $x^a ~ ({\rm mod} ~N)$ is to multiply by $x$ a total of $a-1$ times, but this would be terribly inefficient. Fortunately, there is a well-known trick, {\it repeated squaring}, that speeds up the computation enormously. If $a$ is an $L$-bit number with the binary expansion $\sum_{i=0}^{L-1} a_i 2^i$, we note that \begin{equation} x^a=x^{\left(\sum_{i=0}^{L-1} a_i 2^i\right)}=\prod_{i=0}^{L-1}\left(x^{2^i}\right)^{a_i} \ . 
\end{equation} Furthermore, since \begin{equation} x^{2^i}=\left(x^{2^{i-1}}\right)^2 \ , \end{equation} we see that $x^{2^i} ~ ({\rm mod} ~N)$ can be computed by squaring $x^{2^{i-1}}$. We conclude that $x^a ~({\rm mod} ~N)$ can be obtained from at most $2(L-1)$ mod $N$ multiplications (fewer if some of the $a_i$ vanish). If ordinary ``grade school'' multiplication is used (rather than a fast multiplication algorithm), this evaluation of $x^a ~ ({\rm mod} ~N)$ requires of order $L\cdot K^2$ elementary bit operations (where $N$ and $x<N$ are $K$-bit numbers). Our algorithms for evaluating $x^a$, where $a$ is an $L$-bit $q$-number and $x$ is a $K$-bit $c$-number, are based on ``grade school'' multiplication, and will require of order $L\cdot K^2$ elementary quantum gates. Since $x$ is a $c$-number, the ``repeated squaring'' to evaluate $x^{2^i} ~ ({\rm mod} ~N)$ can be performed by our classical computer. Once these $c$-numbers are calculated and stored, then $x^a ~ ({\rm mod} ~N)$ can be found by running the pseudo-code \begin{equation} {\tt For} \ i=0 \ {\tt to} \ L-1\ , \quad {\tt if} \ a_i=1\ , \quad MULTIPLY \quad x^{2^i} ~({\rm mod}~N)\ . \end{equation} Thus, the modular exponential function is obtained from $L$ conditional multiplications. It is for this reason that our mod $N$ multiplier comes equipped with an enable bit. Our modular exponentiation algorithm calls the mod $N$ multiplier $L$ times; in the $i$th call, $a_{i-1}$ is the enable bit that controls the multiplication. \section{Modular Exponentiation in Detail} \label{sec:detail} \subsection{Notation} Having described above the central ideas underlying the algorithms, we now proceed to discuss their detailed implementation. We will be evaluating $x^a {}~({\rm mod}~ N)$, where $N$ is a $K$-bit $c$-number, $x$ is a $K$-bit $c$-number less than $N$, and $a$ is an $L$-bit $q$-number. For the factorization algorithm, we will typically choose $L\approx 2K$. We will use the ket notation $\bigl |\cdot\bigr\rangle$ to denote the quantum state of a single {\it qubit}, a two-level quantum system. The two basis states of a qubit are denoted $\bigl |0\bigr\rangle$ and $\bigl |1\bigr\rangle$. Since most of the $q$-numbers that will be manipulated by our computer will be $K$ qubits long, we will use a shorthand notation for $K$-qubit registers; such registers will be denoted by a ket that carries a lowercase Greek letter subscript, {\it e.g.}, $\bigl |b\bigr\rangle_\alpha$, where $b$ is a $K$-bit string that represents the number $\sum_{i=0}^{K-1} b_i 2^i$ in binary notation. Single qubits are denoted by kets that carry a numeral subscript, {\it e.g.} $\bigl |c\bigr\rangle_1$, where $c$ is 0 or 1. Some registers will be $L$ bits long; these will be decorated by asterisk superscripts, {\it e.g.} $\bigl |a\bigr\rangle_\alpha^*$. The fundamental operation that our quantum computer performs is the controlled$^k$-NOT operation. This is the $(k+1)$-qubit quantum gate that acts on a basis according to \begin{equation} C_{\lbrack\!\lbrack{i_1,\dots,i_k}\rbrack\!\rbrack,j} : \>\> \bigl |{\epsilon_1}\bigr\rangle_{i_1} \cdots \bigl |{\epsilon_k} \bigr\rangle_{i_k} \bigl | \epsilon \bigr\rangle_j \longmapsto \bigl | {\epsilon_1} \bigr\rangle_{i_1} \cdots \bigl | {\epsilon_k} \bigr\rangle_{i_k} \bigl |{\epsilon \oplus (\epsilon_1 \wedge \cdots \wedge \epsilon_k)} \bigr\rangle_j \, .
\end{equation} Here, each of $\epsilon_1,\dots,\epsilon_k,\epsilon$ takes the value 0 or 1, $\wedge$ denotes the logical AND operation (binary multiplication) and $\oplus$ denotes the logical XOR operation (binary addition mod 2). Thus, the gate $C_{\lbrack\!\lbrack{i_1,\dots,i_k}\rbrack\!\rbrack,j}$ acts on $k$ ``control'' qubits labeled $i_1,\dots,i_k$ and on one ``target qubit'' labeled $j$. If all $k$ of the control qubits take the value 1, then $C_{\lbrack\!\lbrack{i_1,\dots,i_k}\rbrack\!\rbrack,j}$ flips the value of the target qubit; otherwise, $C_{\lbrack\!\lbrack{i_1,\dots,i_k}\rbrack\!\rbrack,j}$ acts trivially. In order to represent our quantum circuits graphically, we will use Feynman's notation for the controlled$^k$-NOT, shown in Fig.\ \ref{figA}. Note that $C^{-1}_{\lbrack\!\lbrack{i_1,\dots,i_k}\rbrack\!\rbrack,j} = C_{\lbrack\!\lbrack{i_1,\dots,i_k}\rbrack\!\rbrack,j}$, so a computation composed of controlled$^k$-NOT's can be inverted by simply executing the controlled$^k$-NOT's in the reverse order. As we explained above, our ``basic machine'' comes with the NOT, controlled-NOT and controlled$^2$-NOT gates as standard equipment. Our enhanced machine is equipped with these fundamental gates and, in addition, the controlled$^3$-NOT and controlled$^4$-NOT gates. \subsection{Addition} \label{sec:addition} {}From the controlled$^k$-NOT gates, we can build (reversible) arithmetic operations. The basic operation in (classical) computer arithmetic is the full adder. Given two addend bits $a$ and $b$, and an input carry bit $c$, the full adder computes the sum bit \begin{equation} s=a\oplus b \oplus c \label{simplesum}\end{equation} and the output carry bit \begin{equation} c' = (a \wedge b) \vee (c \wedge (a \vee b))\, . \label{carry}\end{equation} The addition that our quantum computer performs always involves adding a $c$-number to a $q$-number. Thus, we will use two different types of quantum full adders, distinguished by the value of the classical addend bit. To add the classical bit $a=0$, we construct \begin{equation} \label{FA0} FA(a=0)_{1,2,3}\> \equiv \> C_{\lbrack\!\lbrack 1 \rbrack\!\rbrack, 2} C_{\lbrack\!\lbrack 1,2 \rbrack\!\rbrack, 3} \, , \end{equation} which acts on a basis according to \begin{equation} FA(a=0)_{1,2,3}: \>\> \bigl| b\bigr\rangle_1 \bigl| c \bigr\rangle_2 \bigl| 0 \bigr\rangle_3 \longmapsto \bigl| b\bigr\rangle_1 \bigl| b\oplus c \bigr\rangle_2 \bigl| b\wedge c \bigr\rangle_3 \, . \end{equation} Here, the string of controlled$^k$-NOT's defining $FA$ is to be read from right to left; that is, the gate furthest to the right acts on the kets first. The operation $FA(a=0)$ is shown diagrammatically in Fig.\ \ref{figB}a, where, in keeping with our convention for operator ordering, the gate on the right acts first; hence, in the diagram, ``time'' runs from right to left. To add the classical bit $a=1$, we construct \begin{equation} \label{FA1} FA(a=1)_{1,2,3}\> \equiv \> C_{\lbrack\!\lbrack 1 \rbrack\!\rbrack, 2} C_{\lbrack\!\lbrack 1,2 \rbrack\!\rbrack, 3}C_2 C_{\lbrack\!\lbrack 2 \rbrack\!\rbrack, 3} \, , \end{equation} (see Fig.\ \ref{figB}b) which acts as \begin{equation} FA(a=1)_{1,2,3}: \>\> \bigl| b\bigr\rangle_1 \bigl| c \bigr\rangle_2 \bigl| 0 \bigr\rangle_3 \longmapsto \bigl| b\bigr\rangle_1 \bigl| b\oplus c\oplus 1\bigr\rangle_2 \bigl| c'=b\vee c \bigr\rangle_3 \, .
\end{equation} Eqs.\ (\ref{FA0}) and (\ref{FA1}) provide an elementary example that illustrates the concept of a quantum computer controlled by a classical computer, as discussed in Sec.\ \ref{model}. The classical computer reads the value of the classical bit $a$, and then directs the quantum computer to execute either $FA(0)$ or $FA(1)$. As we have already remarked in Sec.\ \ref{multiplexed}, to perform modular arithmetic efficiently, we will construct a ``multiplexed'' full adder. The multiplexed full adder will choose as its classical addend {\it either one} of two classical bits $a_0$ and $a_1$, with the choice dictated by the value of a ``select qubit'' $\ell$. That is, if $\ell=0$, the classical addend will be $a_0$, and if $\ell=1$ the classical addend will be $a_1$. Thus the multiplexed full adder operation, which we denote $MUXFA'$, will actually be 4 distinct unitary transformations acting on the qubits of the quantum computer, depending on the four possible values of the classical bits $(a_0,a_1)$. The action of $MUXFA'$ is \begin{equation} MUXFA'(a_0,a_1)_{1,2,3,4} : \>\> \bigl |{\ell}\bigr\rangle_1 \bigl |b \bigr\rangle_2 \bigl |c \bigr\rangle_3 \bigl | 0 \bigr\rangle_4 \longmapsto \bigl |{\ell} \bigr\rangle_1 \bigl |b \bigr\rangle_2 \bigl |s \bigr\rangle_3 \bigl |{c'} \bigr\rangle_4 \, ; \end{equation} here $s$ and $c'$ are the sum and carry bits defined in Eqs.\ (\ref{simplesum}) and (\ref{carry}), but where now $a\equiv a_1\wedge \ell \vee a_0\wedge \mathchar"0218\ell =a_{\ell}$. In fact, for $a_0=a_1$, the value of the select qubit $\ell$ is irrelevant, and $MUXFA'$ reduces to the $FA$ operation that we have already constructed: \begin{eqnarray} \label{muxfa_zero} MUXFA'(a_0=0,a_1=0)_{1,2,3,4}&\equiv& FA(0)_{2,3,4}\nonumber \\ MUXFA'(a_0=1,a_1=1)_{1,2,3,4}&\equiv& FA(1)_{2,3,4} \, . \end{eqnarray} For $a_0=0$ and $a_1=1$, $MUXFA'$ adds $\ell$, while for $a_0=1$ and $a_1=0$, it adds $\mathchar"0218 \ell$. This is achieved by the construction (Fig.\ \ref{figD}) \begin{eqnarray} \label{muxfa_one} MUXFA'(a_0=0,a_1=1)_{1,2,3,4}\> &\equiv& \> C_{\lbrack\!\lbrack 2 \rbrack\!\rbrack, 3}C_{\lbrack\!\lbrack 2,3 \rbrack\!\rbrack, 4} C_{\lbrack\!\lbrack 1 \rbrack\!\rbrack, 3} C_{\lbrack\!\lbrack 1,3 \rbrack\!\rbrack, 4} \, ,\nonumber \\ MUXFA'(a_0=1,a_1=0)_{1,2,3,4}\> &\equiv& \> C_1C_{\lbrack\!\lbrack 2 \rbrack\!\rbrack, 3}C_{\lbrack\!\lbrack 2,3 \rbrack\!\rbrack, 4} C_{\lbrack\!\lbrack 1 \rbrack\!\rbrack, 3} C_{\lbrack\!\lbrack 1,3 \rbrack\!\rbrack, 4}C_1 \, . \end{eqnarray} (The second operation is almost the same as the first; the difference is that the qubit $\ell$ is flipped at the beginning and the end of the operation.) The full adder that we will actually use in our algorithms will be denoted $MUXFA$ (without the ${}'$). As noted in Sec.\ \ref{sec:enable}, to perform multiplication and modular exponentiation, we will need a (multiplexed) full adder that is controlled by an {\it enable bit}, or a string of enable bits. Thus $MUXFA$ will be an extension of the $MUXFA'$ operation defined above that incorporates enable bits. If all the enable bits have the value 1, $MUXFA$ acts just like $MUXFA'$. But if one or more enable bit is 0, $MUXFA$ will choose the classical addend to be 0, irrespective of the values of $a_0$ and $a_1$. We will use the symbol $\mathchar"024C$ to denote the full list of enable bits for the operation. 
Thus the action of $MUXFA$ can be expressed as \begin{equation} MUXFA(a_0,a_1)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack, 1,2,3,4} : \>\> \bigl |{\ell}\bigr\rangle_1 \bigl |b \bigr\rangle_2 \bigl |c \bigr\rangle_3 \bigl | 0 \bigr\rangle_4 \longmapsto \bigl |{\ell} \bigr\rangle_1 \bigl |b \bigr\rangle_2 \bigl |s \bigr\rangle_3 \bigl |{c'} \bigr\rangle_4 \, ; \end{equation} here $s$ and $c'$ are again the sum and carry bits defined in Eqs.\ (\ref{simplesum}) and (\ref{carry}), but this time $a\equiv \mathchar"024C \wedge \left(a_1\wedge \ell \vee a_0\wedge \mathchar"0218\ell\right)$; that is, it is 0 unless all bits of $\mathchar"024C$ take the value 1. The list $\mathchar"024C$ must not include the bits $1$, $2$, $3$, or $4$. In our algorithms, the number of enable bits will be either 1 or 2. Hence, there is a simple way to construct the $MUXFA$ operation on our ``enhanced machine'' that comes equipped with controlled$^3$-NOT and controlled$^4$-NOT gates. To carry out the construction, we note by inspecting Eq.\ (\ref{muxfa_zero},\ref{muxfa_one}) (or Fig.\ \ref{figD}) that $MUXFA'(a_0,a_1)$ has the form $MUXFA'(0,0) \cdot F(a_0,a_1)$; thus, by adding $\mathchar"024C$ to the list of control bits for each of the gates in $F(a_0,a_1)$, we obtain an operation that acts as $MUXFA'$ when $\mathchar"024C$ is all 1's, and adds 0 otherwise. Explicitly, we have \begin{eqnarray} \label{muxfa} MUXFA(a_0=0, a_1=0)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,1,2,3,4}\> &\equiv& \> C_{\lbrack\!\lbrack 2 \rbrack\!\rbrack, 3} C_{\lbrack\!\lbrack 2,3 \rbrack\!\rbrack, 4} \, ,\nonumber \\ MUXFA(a_0=1, a_1=1)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,1,2,3,4}\> &\equiv& \> C_{\lbrack\!\lbrack 2 \rbrack\!\rbrack, 3} C_{\lbrack\!\lbrack 2,3 \rbrack\!\rbrack, 4} C_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack, 3} C_{\lbrack\!\lbrack \mathchar"024C,3 \rbrack\!\rbrack, 4} \, ,\nonumber \\ MUXFA(a_0=0,a_1=1)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,1,2,3,4}\> &\equiv& \> C_{\lbrack\!\lbrack 2 \rbrack\!\rbrack, 3}C_{\lbrack\!\lbrack 2,3 \rbrack\!\rbrack, 4} C_{\lbrack\!\lbrack \mathchar"024C,1 \rbrack\!\rbrack, 3} C_{\lbrack\!\lbrack \mathchar"024C,1,3 \rbrack\!\rbrack, 4} \, ,\nonumber \\ MUXFA(a_0=1,a_1=0)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,1,2,3,4}\> &\equiv& \> C_1C_{\lbrack\!\lbrack 2 \rbrack\!\rbrack, 3}C_{\lbrack\!\lbrack 2,3 \rbrack\!\rbrack, 4} C_{\lbrack\!\lbrack \mathchar"024C,1 \rbrack\!\rbrack, 3} C_{\lbrack\!\lbrack \mathchar"024C,1,3 \rbrack\!\rbrack, 4}C_1 \, . \end{eqnarray} (as indicated in Fig.\ \ref{figE}). Here, if $\mathchar"024C$ is a list of $j$ bits, then $C_{\lbrack\!\lbrack \mathchar"024C,1,3 \rbrack\!\rbrack, 4}$, for example, denotes the controlled$^{(j+2)}$-NOT with $\mathchar"024C,1,3$ as its control bits. Evidently, Eq.\ (\ref{muxfa}) is a construction of a multiplexed adder with $j$ enable bits in terms of controlled$^k$-NOT gates with $k\le j+2$. In particular, we have constructed the adder with two enable bits that we will need, using the gates that are available on our enhanced machine. The reader who is impatient to see how our algorithms work in detail is encouraged to proceed now to the next subsection of the paper. But first, we would like to dispel any notion that the algorithms make essential use of the elementary controlled$^3$-NOT and controlled$^4$-NOT gates.
So let us now consider how the construction of the $MUXFA$ operation can be modified so that it can be carried out on the basic machine (which is limited to controlled$^k$-NOT gates with $k\le 2$). The simplest such modification requires an extra bit (or two) of scratch space. Suppose we want to build a $MUXFA''$ operation with a single enable bit, without using the controlled$^3$-NOT gate. For $a_0=a_1$, the construction in Eq.\ (\ref{muxfa}) need not be modified; in those cases, the action of the operation is independent of the select bit $\ell$, and therefore no controlled$^3$-NOT gates were needed. For $a_0\ne a_1$, controlled$^3$-NOT gates are used, but we note that the control string of these controlled$^3$-NOT gates includes both the enable bit and the select bit. Hence, we can easily eliminate the controlled$^3$-NOT gate $C_{\lbrack\!\lbrack \mathchar"024C,1,3 \rbrack\!\rbrack, 4}$ by using a controlled$^2$-NOT to compute (and store) the logical AND ($\mathchar"024C \wedge \ell$) of the enable and select bits, and then replacing the controlled$^3$-NOT by a controlled$^2$-NOT that has the scratch bit as one of its control bits. Another controlled$^2$-NOT at the end of the operation clears the scratch bit. In an equation: \begin{eqnarray} \label{muxfaprimeprime} MUXFA''(a_0=0,a_1=1)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,1,2,3,4,5}\> &\equiv& \> C_{\lbrack\!\lbrack \mathchar"024C,1 \rbrack\!\rbrack, 5}C_{\lbrack\!\lbrack 2 \rbrack\!\rbrack, 3}C_{\lbrack\!\lbrack 2,3 \rbrack\!\rbrack, 4} C_{\lbrack\!\lbrack 5 \rbrack\!\rbrack, 3} C_{\lbrack\!\lbrack 5,3 \rbrack\!\rbrack, 4}C_{\lbrack\!\lbrack \mathchar"024C,1 \rbrack\!\rbrack, 5} \, ,\nonumber \\ MUXFA''(a_0=1,a_1=0)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,1,2,3,4,5}\> &\equiv& \> C_1 C_{\lbrack\!\lbrack \mathchar"024C,1 \rbrack\!\rbrack, 5}C_{\lbrack\!\lbrack 2 \rbrack\!\rbrack, 3}C_{\lbrack\!\lbrack 2,3 \rbrack\!\rbrack, 4} C_{\lbrack\!\lbrack 5 \rbrack\!\rbrack, 3} C_{\lbrack\!\lbrack 5,3 \rbrack\!\rbrack, 4}C_{\lbrack\!\lbrack \mathchar"024C,1 \rbrack\!\rbrack, 5} C_1\, , \end{eqnarray} as illustrated in Fig.\ \ref{figEa}. If the scratch bit $\bigl |\cdot\bigr\rangle_5$ starts out in the state $\bigl |0\bigr\rangle_5$, $MUXFA''$ has the same action as $MUXFA$, and it returns the scratch bit to the state $\bigl |0\bigr\rangle_5$ at the end. By adding yet another bit of scratch space, and another controlled$^2$-NOT at the beginning and the end, we easily construct a $MUXFA$ operation with two enable bits. At the alternative cost of slightly increasing the number of elementary gates, the extra scratch bit in $MUXFA''$ can be eliminated. That is, an operation with precisely the same action as $MUXFA$ can be constructed from controlled$^k$-NOT gates with $k\le 2$, and without the extra scratch bit. This construction uses an idea of Barenco {\it et al.} \cite{barenco}, that a controlled$^k$-NOT can be constructed from two controlled$^{(k-1)}$-NOT's and two controlled$^2$-NOT's (for any $k\ge 3$) by employing an extra bit. This idea differs from the construction described above, because the extra bit, unlike our scratch bit, is not required to be preset to 0 at the beginning of the operation. Hence, to construct the $C_{\lbrack\!\lbrack \mathchar"024C,1,3 \rbrack\!\rbrack, 4}$ gate needed in MUXFA, we can use $\bigl | b\bigr \rangle_2$ as the extra bit. 
That is, we may use the Barenco {\it et al.} identity \begin{equation} \label{barenco} C_{\lbrack\!\lbrack \mathchar"024C,1,3 \rbrack\!\rbrack, 4}= C_{\lbrack\!\lbrack 2,3\rbrack\!\rbrack, 4} C_{\lbrack\!\lbrack \mathchar"024C,1\rbrack\!\rbrack, 2} C_{\lbrack\!\lbrack 2,3\rbrack\!\rbrack, 4} C_{\lbrack\!\lbrack \mathchar"024C,1\rbrack\!\rbrack, 2} \end{equation} to obtain, say, \begin{equation} \label{muxfappp} MUXFA'''(a_0=0,a_1=1)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,1,2,3,4}\> \equiv \> C_{\lbrack\!\lbrack 2 \rbrack\!\rbrack, 3}C_{\lbrack\!\lbrack 2,3 \rbrack\!\rbrack, 4} C_{\lbrack\!\lbrack \mathchar"024C,1 \rbrack\!\rbrack, 3} C_{\lbrack\!\lbrack 2,3\rbrack\!\rbrack, 4} C_{\lbrack\!\lbrack \mathchar"024C,1\rbrack\!\rbrack, 2} C_{\lbrack\!\lbrack 2,3\rbrack\!\rbrack, 4} C_{\lbrack\!\lbrack \mathchar"024C,1\rbrack\!\rbrack, 2} \, \end{equation} (as in Fig.\ \ref{figF}). This identity actually works irrespective of the number of bits in the enable string $\mathchar"024C$, but we have succeeded in reducing the elementary gates to those that can be implemented on the basic machine only in the case of $MUXFA$ with a single enable bit. To reduce the $MUXFA$ operation with two enable bits to the basic gates, we can apply the same trick again, replacing each controlled$^3$-NOT by four controlled$^2$-NOT's (using, say, the 4th bit as the extra bit required by the Barenco {\it et al.} construction). We will refer to the resulting operation as $MUXFA''''$. Aside from the multiplexed full adder $MUXFA$, we will also use a multiplexed {\it half adder} which we will call $MUXHA$. The half adder does not compute the final carry bit; it acts according to \begin{equation} MUXHA(a_0,a_1)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,1,2,3} : \>\> \bigl |{\ell}\bigr\rangle_1 \bigl |b \bigr\rangle_2 \bigl |c \bigr\rangle_3 \longmapsto \bigl |{\ell} \bigr\rangle_1 \bigl |b \bigr\rangle_2 \bigl |s \bigr\rangle_3 \, , \end{equation} where $s=a \oplus b \oplus c$, and $a=\mathchar"024C\wedge (a_1\wedge \ell \vee a_0\wedge \mathchar"0218\ell)$. (Note that, since the input qubit $b$ is preserved, the final carry bit is not needed to ensure the reversibility of the operation.) $MUXHA$ is constructed from elementary gates according to \begin{eqnarray} \label{muxha} MUXHA(a_0=0, a_1=0)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,1,2,3}\> &\equiv& \> C_{\lbrack\!\lbrack 2 \rbrack\!\rbrack, 3} \, ,\nonumber \\ MUXHA(a_0=1, a_1=1)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,1,2,3}\> &\equiv& \> C_{\lbrack\!\lbrack 2 \rbrack\!\rbrack, 3} C_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack, 3}\, ,\nonumber \\ MUXHA(a_0=0,a_1=1)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,1,2,3}\> &\equiv& \> C_{\lbrack\!\lbrack 2 \rbrack\!\rbrack, 3}C_{\lbrack\!\lbrack \mathchar"024C,1 \rbrack\!\rbrack, 3} \, , \nonumber \\ MUXHA(a_0=1,a_1=0)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,1,2,3}\> &\equiv& \> C_1C_{\lbrack\!\lbrack 2 \rbrack\!\rbrack, 3}C_{\lbrack\!\lbrack \mathchar"024C,1 \rbrack\!\rbrack, 3} C_1 \, \end{eqnarray} (see Fig.\ \ref{figG}). For a single enable bit, this construction can be carried out on the basic machine. If there are two enable bits, the controlled$^3$-NOT's can be expanded in terms of controlled$^2$-NOT's as described above. A multiplexed $K$-bit adder is easily constructed by chaining together $(K-1)$ $MUXFA$ gates and one $MUXHA$ gate, as shown in Fig.\ \ref{figH}. This operation, which we denote $MADD$, depends on a pair of $K$-bit $c$-numbers $a$ and $a'$.
$MADD$ (if all enable bits read 1) adds either $a$ or $a'$ to the $K$-bit $q$-number $b$, with the choice determined by the value of the select bit $\ell$. (That is, it adds $a$ for $\ell=0$ and adds $a'$ for $\ell=1$.) Thus, $MADD$ acts according to: \begin{equation} MADD(a,a')_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,\beta,\gamma,1} : \>\> \bigl |{b}\bigr\rangle_\beta \bigl |0\bigr\rangle_\gamma \bigl |\ell \bigr\rangle_1 \longmapsto \bigl |{b}\bigr\rangle_\beta \bigl |s \bigr\rangle_\gamma \bigl |\ell \bigr\rangle_1 \, ; \end{equation} where \begin{equation} \label{sum} s=\left[b+ \mathchar"024C\wedge (a'\wedge \ell \vee a\wedge \mathchar"0218\ell)\right]_{{\rm mod}~ 2^K} \, . \end{equation} The $[{\cdot}]_{{\rm mod}~ 2^K}$ notation in Eq.\ (\ref{sum}) indicates that the sum $s$ residing in $\bigl |\cdot \bigr\rangle_\gamma$ at the end of the operation is only $K$ bits long---$MADD$ does not compute the final carry bit. Since we will not need the final bit to perform addition mod $N$, we save a few elementary operations by not bothering to compute it. (The $MADD$ operation is invertible nonetheless.) Transcribed as an equation, Fig.\ \ref{figH} says that $MADD$ is constructed as \begin{eqnarray} \label{madd} MADD(a,a')_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,\beta,\gamma,1} \> \equiv \>&& MUXHA(a_{K-1},a_{K-1}')_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,1,\beta_{K-1},\gamma_{K-1}}\nonumber\\ &&\cdot~ \Bigl(\prod_{\mkern36mu i=0}^{K-2\mkern36mu} MUXFA(a_i,a_i')_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,1,\beta_i,\gamma_i,\gamma_{i+1}}\Bigr) \end{eqnarray} We have skewed the subscript and superscript of $\prod$ in Eq.\ (\ref{madd}) to remind the reader that the order of the operations is to be read from right to left---hence the product has the operator with $i=0$ furthest to the {\it right} (acting first). Each $MUXFA$ operation reads the enable string $\mathchar"024C$, and, if enabled, performs an elementary (multiplexed) addition, passing its final carry bit on to the next operation in the chain. The two classical bits used by the $j$th $MUXFA$ are $a_j$ and $a_j'$, the $j$th bits of the $c$-numbers $a$ and $a'$. The final elementary addition is performed by $MUXHA$ rather than $MUXFA$, because the final carry bit will not be needed. \subsection{Comparison} \label{sec:comparison} In our algorithms, we need to perform addition mod $N$ of a $c$-number $a$ and a $q$-number $b$. An important step in modular addition is {\it comparison}---we must find out whether $a+b \ge N$. Thus, our next task is to devise a unitary operation that compares a $c$-number and a $q$-number. This operation should, say, flip a target bit if the $c$-number is greater than the $q$-number, and leave the target bit alone otherwise. A conceptually simple way to compare a $K$-bit $c$-number $a$ and a $K$-bit $q$-number $b$ is to devise an adder that computes the sum of the $c$-number $2^K-1-a$ and the $q$-number $b$. Since the sum is less than $2^K$ only for $a\ge b$, the final carry bit of the sum records the outcome of the comparison. This method works fine, but we will use a different method that turns out to be slightly more efficient. The idea of our method is that we can scan $a$ and $b$ from left to right, and compare them one bit at a time. If $a_{K-1}$ and $b_{K-1}$ are different, then the outcome of the comparison is determined and we are done. If $a_{K-1}$ and $b_{K-1}$ are the same, we proceed to examine $a_{K-2}$ and $b_{K-2}$ and repeat the procedure, {\it etc}.
We can represent this routine in pseudo-code as \begin{eqnarray} \label{pseudolt} {\tt if}&\ a_{K-1}=0:\quad& \left\{\begin{array}{l} b_{K-1}=0 \Longrightarrow {\tt PROCEED} \nonumber\\ b_{K-1}=1 \Longrightarrow b\ge a\ {\tt END}\end{array}\right\} \nonumber\\ {\tt if}&\ a_{K-1}=1:\quad& \left\{\begin{array}{l}b_{K-1}=0 \Longrightarrow b<a \ {\tt END}\nonumber\\ b_{K-1}=1 \Longrightarrow {\tt PROCEED}\end{array}\right\}\nonumber\\ {\tt if}&\ a_{K-2}=0:\quad& \left\{\begin{array}{l}b_{K-2}=0 \Longrightarrow {\tt PROCEED}\nonumber\\ b_{K-2}=1 \Longrightarrow b\ge a\ {\tt END}\end{array}\right\}\nonumber\\ {\tt if}&\ a_{K-2}=1:\quad& \left\{\begin{array}{l}b_{K-2}=0 \Longrightarrow b<a\ {\tt END}\nonumber\\ b_{K-2}=1 \Longrightarrow {\tt PROCEED}\end{array}\right\}\nonumber\\ &\cdot\nonumber\\ &\cdot \\ {\tt if}&\ a_{0}=0:\quad& \hskip .9in ~ b\ge a\ {\tt END}\nonumber\\ {\tt if}&\ a_{0}=1:\quad& \left\{\begin{array}{l}b_{0}=0 \Longrightarrow b<a\ {\tt END}\nonumber\\ b_{0}=1 \Longrightarrow b\ge a\ {\tt END}\end{array}\right\} \end{eqnarray} To implement this pseudo-code as a unitary transformation, we will use enable qubits in each step of the comparison. Once the comparison has ``ended,'' all subsequent enable bits will be switched off, so that the subsequent operations will have no effect on the outcome. Unfortunately, to implement this strategy reversibly, we seem to need a new enable bit for (almost) every step of the comparison, so the comparison operation will fill $K-1$ bits of scratch space with junk. This need for scratch space is not really a big deal, though. We can immediately clear the scratch space, which will be required for subsequent use anyway. As in our construction of the adder, our comparison operation is a sequence of elementary quantum gates that depends on the value of the $K$-bit $c$-number $a$. We will call the operation $LT$ (for ``less than''). Its action is \begin{equation} LT(a)_{\beta,1,{\hat\gamma}} : \>\> \bigl |{b}\bigr\rangle_\beta \bigl |0 \bigr\rangle_1 |0\bigr\rangle_{\hat\gamma}\longmapsto \bigl |{ b'}\bigr\rangle_\beta \bigl |\ell \bigr\rangle_1\bigl |{\rm junk} \bigr\rangle_{\hat\gamma} \, , \end{equation} where $\ell$ takes the value 1 for $b<a$ and the value 0 for $b\ge a$. Here the register labeled $|\cdot\bigr\rangle_{\hat\gamma}$ is actually $K-1$ rather than $K$ qubits long. The junk that fills this register has a complicated dependence on $a$ and $b$, the details of which are not of interest. In passing, the $LT$ operation also modifies the $q$-number $b$, replacing it by $b'$. ($b'$ is almost the {\it negation} of $b$, $b$ with all of its qubits flipped, except that $b_0$ is not flipped unless $a_0=1$). We need not be concerned about this either, as we will soon run the $LT$ operation backwards to repair the damage. 
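Before turning to the gate-level construction, the logic of the scan in Eq.\ (\ref{pseudolt}) can be summarized by a short classical sketch (our own illustration); the reversible version constructed below realizes the same scan, with the role of the ``switch'' played by the string of scratch qubits ${\hat\gamma}$:
\begin{verbatim}
# Classical sketch of the left-to-right comparison: ell ends up 1 iff b < a.
def less_than(a, b, K):
    ell, switch = 0, 1                     # switch stays on while bits agree
    for i in reversed(range(K)):           # scan from the most significant bit
        a_i, b_i = (a >> i) & 1, (b >> i) & 1
        if switch and a_i != b_i:
            ell = 1 if a_i > b_i else 0    # comparison settled at this bit
            switch = 0                     # remaining bits are ignored
    return ell

assert less_than(13, 11, 4) == 1           # 11 < 13
assert less_than(11, 13, 4) == 0
assert less_than(13, 13, 4) == 0           # equality gives b >= a
\end{verbatim}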
The $LT$ operation is constructed from elementary gates as: {\samepage \begin{eqnarray} \label{lt_long} LT(a)_{\beta,1,{\hat\gamma}} \> \equiv \> &&\left\{ {\tt if} \ (a_{0} = 1) \ C_{\lbrack\!\lbrack {\hat\gamma}_0,\beta_0\rbrack\!\rbrack,1} \> C_{\beta_0} \right\} \\ \cdot \prod_{i=1\mkern36mu}^{\mkern36mu K-2} &&\left\{ \begin{array}{ll} {\tt if} \ (a_{i} = 0) \ & C_{\lbrack\!\lbrack{{\hat\gamma}_i,\beta_i}\rbrack\!\rbrack,{\hat\gamma}_{i-1}} \> C_{\beta_i} \nonumber\\ {\tt if }\ (a_{i} = 1) \ & C_{\lbrack\!\lbrack{{\hat\gamma}_i,\beta_i}\rbrack\!\rbrack,1} \> C_{\beta_i} \> C_{\lbrack\!\lbrack{{\hat\gamma}_i,\beta_i}\rbrack\!\rbrack,{\hat\gamma}_{i-1}} \end{array} \right\}\nonumber\\ \cdot \> &&\left\{\begin{array}{ll} {\tt if} \ (a_{K-1} = 0) \ & C_{\lbrack\!\lbrack \beta_{K-1}\rbrack\!\rbrack,{\hat\gamma}_{K-2}} \> C_{\beta_{K-1}} \nonumber\\ {\tt if} \ (a_{K-1}= 1) \ & C_{\lbrack\!\lbrack \beta_{K-1}\rbrack\!\rbrack,1} \> C_{\beta_{K-1}} \> C_{\lbrack\!\lbrack \beta_{K-1}\rbrack\!\rbrack,{\hat\gamma}_{K-2}} \end{array}\right\} \end{eqnarray} } As usual, the gates furthest to the right act first. We have skewed the subscript and superscript of $\prod$ here to indicate that the operator with $i=1$ is furthest to the {\it left} (and hence acts last). The first step of the $LT$ algorithm is different from the rest because it is not conditioned on the value of any ``switch.'' For each of the $K-2$ intermediate steps ($i=K-2, K-3, \dots, 1$), the switch ${\hat\gamma}_i$ is read, and if the switch is on, the comparison of $a_i$ and $b_i$ is carried out. If $a_i\ne b_i$, then the outcome of the comparison of $a$ and $b$ is settled; the value of $\ell$ is adjusted accordingly, and the switch ${\hat\gamma}_{i-1}$ is {\it not} turned on. If $a_i=b_i$, then ${\hat\gamma}_{i-1}$ {\it is} switched on, so that the comparison can continue. Finally, the last step can be simplified, as in Eq.\ (\ref{pseudolt}). We can now easily construct a comparison operator that cleans up the scratch space, and restores the original value of $b$, by using the trick mentioned in Sec.\ \ref{sec:space}---we run $LT$, copy the outcome $\ell$ of the comparison, and then run $LT$ in reverse. We will actually want our comparison operator to be enabled by a string $\mathchar"024C$, which we can achieve by controlling the copy operation with $\mathchar"024C$. The resulting operator, which we call $XLT$, flips the target qubit if $b<a$: \begin{eqnarray} XLT(a)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,\beta,1,2,{\hat\gamma}} \> \equiv \> &LT(a)^{-1}_{\beta,2,{\hat\gamma}} \> C_{\lbrack\!\lbrack \mathchar"024C, 2 \rbrack\!\rbrack, 1} \> LT(a)_{\beta,2,{\hat\gamma}} :\nonumber\\ &\bigl |b\bigr\rangle_\beta \bigl |x\bigr\rangle_1 \bigl |0\bigr\rangle_2 \bigl |0\bigr\rangle_{\hat\gamma} \longmapsto \bigl |b\bigr\rangle_\beta \bigl |x\oplus y\bigr\rangle_1 \bigl |0\bigr\rangle_2 \bigl |0\bigr\rangle_{\hat\gamma} \end{eqnarray} where $y$ is 1 if $b<a$ and 0 otherwise. We recall that the register $\bigl |\cdot\bigr\rangle_{\hat\gamma}$ is actually $K-1$ rather than $K$ qubits long. The junk that fills this register has a complicated dependence on $a$ and $b$, the details of which are not of interest. In passing, the $LT$ operation also modifies the $q$-number $b$, replacing it by $b'$. ($b'$ is almost the {\it negation} of $b$, $b$ with all of its qubits flipped, except that $b_0$ is not flipped unless $a_0=1$). We need not be concerned about this either, as we will soon run the $LT$ operation backwards to repair the damage.
Note that $2^K +a -N$ is guaranteed to be positive ($N$ and $a$ are $K$-bit numbers with $a<N$). In the case where $2^K+a-N$ is added, the desired result $a+b ~({\rm mod}~ N)$ is obtained by subtracting $2^K$ from the sum; that is, by dropping the final carry bit. That is why our $MADD$ routine does not bother to compute this final bit. We call our mod $N$ addition routine $ADDN$; it acts as \begin{eqnarray} \label{addn_act} ADDN(a,N)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,\beta,1,\gamma} : \>\> && \bigl|{b}\bigr\rangle_\beta \bigl |{0}\bigr\rangle_1 \bigl |{0}\bigr\rangle_\gamma \nonumber\\ &&\longmapsto \bigl|{b}\bigr\rangle_\beta \bigl |{\ell \equiv \mathchar"024C \wedge(a+b < N)}\bigr\rangle_1 \bigl |b+ \mathchar"024C \wedge a~({\rm mod}~ N)\bigr\rangle_\gamma \, . \end{eqnarray} (Here the notation $\ell \equiv \mathchar"024C \wedge(a+b < N)$ means that the qubit $\ell$ reads 1 if the statement $\mathchar"024C \wedge(a+b < N)$ is true and reads 0 otherwise.) If enabled, this operator computes $a+b ~({\rm mod}~ N)$; if not, it merely copies $b$.\footnote{Thus, if $ADDN$ is {\it not} enabled, Eq.\ (\ref{addn_act}) is valid only for $b<N$. We assume here and in the following that $b<N$ is satisfied; in the evaluation of the modular exponential function, our operators will always be applied to $q$-numbers that satisfy this condition.} $ADDN$ is constructed from $MADD$ and $XLT$ according to \begin{equation} ADDN(a,N)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,\beta,1,\gamma} \> \equiv \> MADD(2^K +a -N, a)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,\beta,\gamma,1}\cdot XLT(N-a)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,\beta,1,\gamma} \end{equation} (see Fig.\ \ref{figI}). Note that $XLT$ uses and then clears the $K$ bits of scratch space in the register $\bigl | \cdot\bigr \rangle_\gamma$, before $MADD$ writes the mod $N$ sum there. The $ADDN$ routine can be viewed as the computation of an invertible function (specified by the $c$-numbers $a$ and $N$) of the $q$-number $b$. (Note that the output of this function is the sum $a+b ~({\rm mod}~ N)$ {\it and} the comparison bit $\ell$---the comparison bit is needed to ensure invertibility, since it is possible that $b\ge N$). Thus, we can use the trick mentioned in Sec.\ \ref{sec:space} to devise an ``overwriting'' version of this function. Actually, since we will not need to know the value of $\ell$ (or worry about the case $b\ge N$), we can save a qubit by modifying the trick slightly. 
The overwriting addition routine $OADDN$ is constructed as \begin{eqnarray} \label{oaddn} OADDN(a,N)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,\beta,1,\gamma} \> \equiv \> && SWAP_{\beta,\gamma} \> ADDN^{-1}(N-a,N)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,\gamma,1,\beta}\nonumber\\ &&\cdot \> C_{\lbrack\!\lbrack \mathchar"024C\rbrack\!\rbrack,1} \> ADDN(a,N)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,\beta,1,\gamma} \end{eqnarray} (see Fig\ \ref{figJ}), and acts (for $b<N$) according to \begin{eqnarray} \label{oaddnact} OADDN(a,N)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,\beta,1,\gamma} : \>\> && \bigl|{b}\bigr\rangle_\beta \bigl |{0}\bigr\rangle_1 \bigl |{0}\bigr\rangle_\gamma \nonumber\\ && \longmapsto \bigl|{b}\bigr\rangle_\beta \bigl |{\ell \equiv \mathchar"024C \wedge(a+b < N)}\bigr\rangle_1 \bigl |b+ \mathchar"024C \wedge a~({\rm mod}~ N)\bigr\rangle_\gamma \nonumber\\ && \longmapsto \bigl|{b}\bigr\rangle_\beta \bigl |{\ell \equiv \mathchar"024C \wedge(a+b \ge N)}\bigr\rangle_1 \bigl |b+ \mathchar"024C \wedge a~({\rm mod}~ N)\bigr\rangle_\gamma \nonumber\\ && \longmapsto \bigl|{0}\bigr\rangle_\beta \bigl |0\bigr\rangle_1 \bigl |b+ \mathchar"024C \wedge a~({\rm mod}~ N)\bigr\rangle_\gamma \nonumber\\ && \longmapsto \bigl |b+ \mathchar"024C \wedge a~({\rm mod}~ N)\bigr\rangle_\beta |0\bigr\rangle_1\bigl|{0}\bigr\rangle_\gamma \, . \end{eqnarray} Here, in Eq.\ (\ref{oaddnact}), we have indicated the effect of each of the successive operations in Eq.\ (\ref{oaddn}). We can easily verify that applying $ADDN(N-a,N)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,\gamma,1,\beta}$ to the second-to-last line of Eq.\ (\ref{oaddnact}) yields the preceding line. If the enable string $\mathchar"024C$ is false, the verification is trivial, for $b<N$. (It was in order to ensure that this would work that we needed the $XLT$ operation to be enabled by $\mathchar"024C$.) When $\mathchar"024C$ is true, we need only observe that $N-a +[b+a ~({\rm mod}~ N)]<N$ if and only if $a+b\ge N$ (assuming that $b<N$). The $SWAP$ operation in Eq.\ (\ref{oaddn}) is not a genuine quantum operation at all; it is a mere {\it relabeling} of the $\bigl|{\cdot}\bigr\rangle_\beta$ and $\bigl|{\cdot}\bigr\rangle_\gamma$ registers that is performed by the {\it classical} computer. We have included the $SWAP$ because it will be convenient for the sum to be stored in the $\bigl|{\cdot}\bigr\rangle_\beta$ register when we chain together $OADDN$'s to construct a multiplication operator. We see that $OADDN$ uses and then clears $K+1$ qubits of scratch space. \subsection{Multiplication mod $N$} We have already explained in Sec.\ \ref{sec:enable} how mod $N$ multiplication can be constructed from {\it conditional} mod $N$ addition. Implementing the strategy described there, we can construct a conditional multiplication operator $MULN$ that acts according to \begin{eqnarray} \label{muln_act} MULN(a,N)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,\beta,\gamma,1,\delta}: \>\> &&\bigl|{b}\bigr\rangle_\beta \bigl |{0}\bigr\rangle_\gamma \bigl |{0}\bigr\rangle_1 \bigl |{0}\bigr\rangle_\delta \nonumber\\ &&\longmapsto \bigl|{b}\bigr\rangle_\beta \bigl |\mathchar"024C\wedge a\cdot b ~({\rm mod}~N) \bigr\rangle_\gamma\bigl |0\bigr\rangle_1 \bigl |0\bigr\rangle_\delta \, . \end{eqnarray} If enabled, $MULN$ computes the product mod $N$ of the $c$-number $a$ and the $q$-number $b$; otherwise, it acts trivially. We could construct $MULN$ by chaining together $K$ $OADDN$ operators. 
The first $OADDN$ loads $a\cdot b_0$, the second adds $a\cdot 2b_1$, the third adds $a\cdot 2^2 b_2$, and so on. But we can actually save a few elementary operations by simplifying the first operation in the chain. For this purpose we introduce an elementary multiplication operator $EMUL$ that multiplies a $c$-number $a$ by a single qubit $b_0$: \begin{equation} EMUL(a)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,1,\gamma}: \>\> \bigl|{b_0}\bigr\rangle_1 \bigl |{0}\bigr\rangle_\gamma \longmapsto \bigl|{b_0}\bigr\rangle_1 \bigl |\mathchar"024C\wedge{a\cdot b_0}\bigr\rangle_\gamma \, , \end{equation} which is constructed according to \begin{equation} EMUL(a)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,1,\gamma}\>\equiv\> \prod_{\mkern36mu i=0}^{ K-1\mkern36mu} {\tt if \ (a_{\tt{i}} = 1) \ } C_{\lbrack\!\lbrack{\mathchar"024C,1}\rbrack\!\rbrack, \gamma_i} \, . \end{equation} Now we can construct $MULN$ as \begin{eqnarray} \label{muln} MULN(a,N)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,\beta,\gamma,1,\delta}\>\equiv\> \prod_{\mkern36mu i=1}^{ K-1\mkern36mu} && OADDN(2^i\cdot a~({\rm mod}~N),N)_{\lbrack\!\lbrack \mathchar"024C, \beta_i\rbrack\!\rbrack,\gamma,1,\delta} \nonumber\\ &&\cdot EMUL(a)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,\beta_0,\gamma} \end{eqnarray} (see Fig.\ \ref{figK}). Note that the computation of $2^i\cdot a ~({\rm mod} ~ N)$ is carried out by the classical computer. (It can be done efficiently by ``repeated doubling.'') As long as $a$ and $N$ are relatively prime ($gcd(a,N)=1$), the operation of multiplying by $a$ (mod $N$) is invertible. In fact, the multiplicative inverse $a^{-1}$ (mod $N$) exists, and $MULN(a)$ is inverted by $MULN(a^{-1})$. Thus, we can use the trick discussed in Sec.\ \ref{sec:space} to construct an overwriting version of the multiplication operator. This operator, denoted $OMULN$, acts according to \begin{eqnarray} \label{omuln_act} OMULN(a,N)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,\beta,\gamma,1,\delta}: \>\> &&\bigl|{b}\bigr\rangle_\beta \bigl |{0}\bigr\rangle_\gamma \bigl |{0}\bigr\rangle_1 \bigl |{0}\bigr\rangle_\delta \nonumber\\ &&\longmapsto \bigl |\mathchar"024C\wedge a\cdot b ~({\rm mod}~N) \vee\mathchar"0218\mathchar"024C\wedge b\bigr\rangle_\beta\bigl|{0}\bigr\rangle_\gamma\bigl |0\bigr\rangle_1 \bigl |0\bigr\rangle_\delta \, . \end{eqnarray} Note that $OMULN$ acts trivially when not enabled. It can be constructed as \begin{eqnarray} \label{omuln} OMULN(a,N)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,\beta,\gamma,1,\delta} \> \equiv\> && XOR_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,\beta,\gamma}\> \cdot \> XOR_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,\gamma,\beta}\nonumber\\ && \cdot \> MULN^{-1}(a^{-1},N)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,\gamma,\beta,1,\delta}\> \cdot \> MULN(a,N)_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,\beta,\gamma,1,\delta} \end{eqnarray} (see Fig.\ \ref{figL}). Here, the (conditional) $XOR$ operation is \begin{equation} XOR_{\lbrack\!\lbrack \mathchar"024C \rbrack\!\rbrack,\alpha,\beta} \> \equiv \> \prod_{i=0}^{L-1}C_{\lbrack\!\lbrack \mathchar"024C,\alpha_i\rbrack\!\rbrack, \beta_i} :\>\> \bigl| a \bigr\rangle_\alpha \bigl| b \bigr\rangle_\beta \longmapsto \bigl| a \bigr\rangle_\alpha \bigl| b\oplus (a\wedge \mathchar"024C) \bigr\rangle_\beta \ \end{equation} where $\oplus$ denotes bitwise addition modulo 2.
It is easy to verify that, when enabled, $OMULN$ acts as specified in Eq.\ (\ref{omuln_act}); the two $XOR's$ at the end are needed to swap $\bigl|{0}\bigr\rangle_\beta$ and $\bigl | a\cdot b ~({\rm mod}~N) \bigr\rangle_\gamma$. To verify Eq.\ (\ref{omuln_act}) when $OMULN$ is {\it not} enabled, we need to know that $MULN$, when not enabled, acts according to \begin{eqnarray} \label{funny_omuln} MULN(a,N)_{\lbrack\!\lbrack \mathchar"024C\ne 1 \rbrack\!\rbrack,\beta,\gamma,1,\delta}: \>\> &&\bigl|{0}\bigr\rangle_\beta \bigl |{b}\bigr\rangle_\gamma \bigl |{0}\bigr\rangle_1 \bigl |{0}\bigr\rangle_\delta \nonumber\\ &&\longmapsto \bigl|{0}\bigr\rangle_\beta \bigl | b \bigr\rangle_\gamma\bigl |0\bigr\rangle_1 \bigl |0\bigr\rangle_\delta \, . \end{eqnarray} Though Eq.\ (\ref{funny_omuln}) does not follow directly from the defining action of $MULN$ specified in Eq.\ (\ref{muln_act}), it can be seen to be a consequence of Eq.\ (\ref{muln},\ref{oaddnact}). Note that the computation of $a^{-1}$ is performed by the classical computer. (This is, in fact, the most computationally intensive task that our classical computer will need to perform.) We will require the $OMULN$ operator with an enable string $\mathchar"024C$ that is only a single qubit. Thus the construction that we have described can be implemented on our enhanced machine. So constructed, the $OMULN$ operator uses (and then clears) $2K+1$ qubits of scratch space. This amount is all of the scratch space that will be required to compute the modular exponential function. If we wish to construct $OMULN$ on the basic machine (using controlled$^k$-NOT's with $k=0,1,2$), there are several alternatives. One alternative (that requiring the fewest elementary gates) is to use two additional qubits of scratch space ($2K+3$ scratch qubits altogether). Then, when $MULN$ calls for $OADDN$ with two enable bits, we use one of the scratch qubits to store the logical AND of the two enable bits. Now $OADDN$ with one enable bit can be called instead, where the scratch bit is the enable bit. (See Fig.\ \ref{figLa}.) When $OADDN$ eventually calls for $MUXFA$ with a single enable bit, we can use the second extra scratch qubit to construct $MUXFA''$ as in Eq.\ (\ref{muxfaprimeprime}) and Fig.\ \ref{figEa}. Of course, another alternative is to use the Barenco {\it et al.} identity Eq.\ (\ref{barenco}) repeatedly to expand all the controlled$^3$-NOT and controlled$^4$-NOT gates in terms of controlled$^k$-NOT gates with $k=0,1,2$. Then we can get by with $2K+1$ bits of scratch space, but at the cost of sharply increasing the number of elementary gates. \subsection{Modular exponentiation} The operator $EXPN$ that computes the modular exponentiation operator can now be constructed from the conditional overwriting multiplication operator, as outlined in Sec.\ \ref{sec:repeated}. Its action is: \begin{eqnarray} EXPN(x,N)_{\alpha,\beta,\gamma,1,\delta}: \>\> &&\bigl|{a}\bigr\rangle_\alpha^* \bigl |{0}\bigr\rangle_\beta \bigl |{0}\bigr\rangle_\gamma \bigl |{0}\bigr\rangle_1 \bigl |{0}\bigr\rangle_\delta \nonumber\\ &&\longmapsto \bigl|{a}\bigr\rangle_\alpha^* \bigl |{x^a ~({\rm mod} ~ N)}\bigr\rangle_\beta \bigl |{0}\bigr\rangle_\gamma \bigl |{0}\bigr\rangle_1 \bigl |{0}\bigr\rangle_\delta \, . \end{eqnarray} (Recall that $\bigl|{\cdot}\bigr\rangle_\alpha^*$ denotes a register that is $L$ qubits long; $N$ and $x$ are $K$-bit $c$-numbers.) 
It is constructed as \begin{equation} \label{expn_from_muln} EXPN(x,N)_{\alpha,\beta,\gamma,1,\delta}\> \equiv\>\left( \prod_{\mkern36mu i=0}^{ L-1\mkern36mu} OMULN(x^{2^i}~({\rm mod}~N),N)_{\lbrack\!\lbrack \alpha_i\rbrack\!\rbrack,\beta,\gamma,1,\delta} \> \right) C_{\beta_0}\, \end{equation} (Fig.\ \ref{figM}). Note that the $C_{\beta_0}$ is necessary at the beginning to set the register $\bigl |{\cdot}\bigr\rangle_\beta$ to 1 (not 0). The classical computer must calculate each $x^{2^i}$ and each inverse $x^{-2^i}$. The computation of $x^{-1}$ (mod $N$) can be performed using Euclid's algorithm in O($K^3$) elementary bit operations using ``grade school'' multiplication, or more efficiently using fast multiplication tricks. Fortunately, only one inverse need be computed---the $x^{-2^i}$'s, like the $x^{2^i}$'s, are calculated by repeated squaring. Actually, it is possible to reduce the number of quantum gates somewhat if the NOT and the first $OMULN$ in Eq.\ (\ref{expn_from_muln}) are replaced by the simpler operation \begin{equation} \left( C_{\alpha_0}C_{\lbrack\!\lbrack \alpha_0\rbrack\!\rbrack,\beta_0} C_{\alpha_0}\right) \cdot EMUL(x)_{\alpha_0,\beta} \, . \end{equation} It is easy to verify that this operator has the same action on the state $\bigl|{a_0}\bigr\rangle_{\alpha_0} \bigl |{0}\bigr\rangle_\beta$ as $OMULN(x,N)_{\lbrack\!\lbrack \alpha_0\rbrack\!\rbrack,\beta,\gamma,1,\delta} \cdot C_{\beta_0}$. With this substitution, we have defined the $EXPN$ operation whose complexity will be analyzed in the following section. \section{Space versus time} Now that we have spelled out the algorithms in detail, we can count the number of elementary quantum gates that they use. \subsection{Enhanced machine} \label{sec:enhanced_count} We will use the notation \begin{equation} \label{enhanced_count} [OPERATOR]=[c_0,c_1,c_2,c_3,c_4] \end{equation} to indicate that $OPERATOR$ is implemented using $c_0$ NOT gates, $c_1$ controlled-NOT gates, $c_2$ controlled$^2$-NOT gates, $c_3$ controlled$^3$-NOT gates, and $c_4$ controlled$^4$-NOT gates on the enhanced machine, or \begin{equation} [OPERATOR]=[c_0,c_1,c_2] \label{basic_count} \end{equation} to indicate that $OPERATOR$ is implemented using $c_0$ NOT gates, $c_1$ controlled-NOT gates, and $c_2$ controlled$^2$-NOT gates on the basic machine. By inspecting the network constructed in Sec.\ \ref{sec:detail}, we see that the following identities hold: \begin{eqnarray} \label{expn_count} && [EXPN] \> = \>(L-1) \cdot \left [OMULN_{[1]}\right] \> + \> [EMUL]\> + \> \left[{\rm controlled}{\rm -NOT}\right] \> + \>2\cdot [{\rm NOT}] \, ; \nonumber\\ && \left [OMULN_{[1]}\right] \> = \> 2\cdot \left [MULN_{[1]}\right]\> + \> 2 \cdot \left [XOR_{[1]}\right] \, ; \nonumber\\ && \left [MULN_{[1]}\right]\> = \> (K-1)\cdot \left [OADDN_{[2]}\right] \> + \> \left [EMUL_{[1]}\right] \, ; \nonumber\\ && \left [OADDN_{[2]}\right] \> = \> 2\cdot \left [ADDN_{[2]}\right]\> + \> \left[{\rm controlled}^2{\rm -NOT}\right] \, ; \nonumber\\ && \left [ADDN_{[2]}\right] \> = \> \left [MADD_{[2]}\right] \> + \> \left [XLT_{[2]}\right] \, ; \nonumber\\ && \left [MADD_{[2]}\right] \> = \> (K-1)\cdot \left[MUXFA_{[2]}\right] \> + \> \left[MUXHA_{[2]}\right] \, ; \nonumber\\ && \left [XLT_{[2]}\right] = 2\cdot [LT] \> + \>\left[{\rm controlled}^3{\rm -NOT} \right]\, . \end{eqnarray} These equations just say that $OMULN_{[1]}$, say, is constructed from $2~ MULN_{[1]}$'s and $2~XOR_{[1]}$'s, and so forth. 
The subscript ${}_{[\cdot]}$ indicates the length of the string of enable bits for each operator. By combining these equations, we find the following expression for the total number of elementary gates called by our $EXPN$ routine: \begin{eqnarray} [EXPN] \> = \> (L-1)\cdot && \Bigl \{ \> 4(K-1)^2 \cdot \left[MUXFA_{[2]}\right] \> + \> 4(K-1) \cdot\left[MUXHA_{[2]}\right] \> \nonumber\\ && + \> 8(K-1)\cdot [LT] \> + \> 4(K-1)\cdot \left[{\rm controlled}^3{\rm -NOT}\right]\nonumber\\ && + \> 2(K-1)\cdot \left[{\rm controlled}^2{\rm -NOT}\right] \> + \> 2\cdot \left [EMUL_{[1]}\right] \> + \> 2\cdot \left [XOR_{[1]}\right] \> \Bigr \}\nonumber\\ && \> + \> \> [EMUL]\> + \> \left[{\rm controlled}{\rm -NOT}\right] \> + \>2\cdot [{\rm NOT}] \, . \end{eqnarray} By plugging in the number of elementary gates used by $MUXFA$, $MUXHA$, $LT$, $EMUL$, and $XOR$, we can find the number of controlled$^k$-NOT gates used in the $EXPN$ network. For large $K$, the leading term in our expression for the number of gates is of order $LK^2$. Only the $MUXFA$ and $LT$ operators contribute to this leading term; the other operators make a subleading contribution. Thus \begin{equation} \label{expn_ops} [EXPN] \> = \> \Bigl (\> 4LK^2\cdot \left[MUXFA_{[2]}\right] + 8LK\cdot [LT] \>\Bigr ) \Bigl( \> 1 + O(1/K)\>\Bigr ) \, . \end{equation} We will now discuss how this leading term varies as we change the amount of available scratch space, or replace the enhanced machine by the basic machine. The numbers of elementary gates used by $MUXFA$ and by $LT$ actually depend on the particular values of the classical bits in the binary expansions of $2^j x^{\pm 2^i}$ (mod $N$) and $2^K -N + 2^jx^{\pm 2^i}$ (mod $N$), where $j=1,\dots, K-1$ and $i=0,1,\dots,L-1$. We will estimate the number of gates in two different ways. To count the gates in the ``worst case,'' we always assume that the classical bits take values that maximize the number of gates. To count in the ``average case,'' we make the much more reasonable assumption that the classical bits take the value 0 with probability $1\over 2$ and take the value 1 with probability $1\over 2$. For example, in the case of the implementation of $MUXFA_{[2]}$ on the enhanced machine described in Eq.\ (\ref{muxfa}), counting the operations yields \begin{eqnarray} &&\left[MUXFA(0,0)_{[2]}\right] \> =\>[0,1,1,0,0] \, ,\nonumber\\ &&\left[MUXFA(1,1)_{[2]}\right] \> =\>[0,1,2,1,0] \, ,\nonumber\\ &&\left[MUXFA(0,1)_{[2]}\right] \> =\>[0,1,1,1,1] \, ,\nonumber\\ &&\left[MUXFA(1,0)_{[2]}\right] \> =\>[2,1,1,1,1] \, , \end{eqnarray} and thus \begin{eqnarray} & \left[MUXFA_{[2]}\right]^{\rm worst}\>&=\> [2,1,2,1,1] \, ,\nonumber\\ & \left[MUXFA_{[2]}\right]^{\rm ave}\>&=\> \left[{1\over 2},1,{5\over 4},{3\over 4},{1\over 2}\right] \, . \end{eqnarray} That is, the worst case is the {\it maximum} in each column, and the average case is the {\it mean} of each column. When we quote the number of gates without any qualification, the average case is meant. Similarly, for the $LT$ operation described in Eq.\ (\ref{lt_long}), we have \begin{eqnarray} & \left[LT\right]^{\rm worst}\>&=\> [K,2,2K-3,0,0] \, ,\nonumber\\ & \left[LT\right]^{\rm ave}\>&=\> \left[K-{1\over 2},{3\over 2},{3\over 2}K-{5\over 2},0,0\right] \, . \end{eqnarray} Note that $LT$ uses no controlled$^3$-NOT or controlled$^4$-NOT gates, and so can be implemented as above on the basic machine. 
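As a quick numerical cross-check (illustrative only), the leading term of Eq.\ (\ref{expn_ops}) can be assembled from the two average-case count vectors just quoted; the result is the coefficient of $LK^2$ for the enhanced machine with $2K+1$ scratch qubits given in the next paragraph, and weighting the gates by the Cirac-Zoller pulse costs adopted below (1 pulse for a NOT, $2k+3$ pulses for a controlled$^k$-NOT) reproduces the $198LK^2$ pulse count:
\begin{verbatim}
# Leading order of Eq. (expn_ops):  [EXPN] ~ 4 L K^2 [MUXFA_[2]] + 8 L K [LT].
muxfa2_ave = [0.5, 1, 1.25, 0.75, 0.5]   # [MUXFA_[2]]^ave from above
lt_ave_per_K = [1, 0, 1.5, 0, 0]         # coefficient of K in [LT]^ave

# coefficient of L*K^2 in [EXPN]^ave, as a gate vector [c0, ..., c4]
expn_lk2 = [4 * m + 8 * l for m, l in zip(muxfa2_ave, lt_ave_per_K)]
print(expn_lk2)                          # [10.0, 4.0, 17.0, 3.0, 2.0]

# laser pulses: 1 per NOT, 2k+3 per controlled^k-NOT (k = 1, 2, 3, 4)
pulse_weights = [1, 5, 7, 9, 11]
print(sum(c * w for c, w in zip(expn_lk2, pulse_weights)))   # 198.0
\end{verbatim}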
Now, from Eq.\ (\ref{expn_ops}), we find the leading behavior of the number of gates used by the $EXPN$ routine: \begin{eqnarray} \label{worst_enhanced} &&[EXPN]_{{\rm enhanced},2K+1}^{\rm worst} \> = \> \> LK^2\cdot [16,4,24,4,4] \cdot \Bigl( \> 1 + O(1/K)\>\Bigr )\, ,\nonumber\\ &&[EXPN]_{{\rm enhanced},2K+1}^{\rm ave} \> = \> \> LK^2\cdot [10,4,17,3,2] \cdot \Bigl( \> 1 + O(1/K)\>\Bigr )\, , \end{eqnarray} where the subscript ${}_{{\rm enhanced},2K+1}$ serves to remind us that this count applies to the enhanced machine with $2K+1$ qubits of scratch space. A convenient (though quite crude) ``one-dimensional'' measure of the complexity of the algorithm is the total number of laser pulses required to implement the algorithm on a linear ion trap, following the scheme of Cirac and Zoller. Assuming 1 pulse for a NOT and $2k+3$ pulses for a controlled$^k$-NOT, $k=1,2,3,4$, we obtain \begin{eqnarray} &&[EXPN]_{{\rm enhanced},2K+1}^{\rm worst~ pulses} \> = \> \> 256LK^2\cdot \Bigl( \> 1 + O(1/K)\>\Bigr )\, ,\nonumber\\ &&[EXPN]_{{\rm enhanced},2K+1}^{\rm ave~ pulses} \> = \> \> 198LK^2\cdot \Bigl( \> 1 + O(1/K)\>\Bigr )\, . \end{eqnarray} (The estimate for the worst case is not obtained directly from Eq.\ (\ref{worst_enhanced}); instead we assume that $MUXFA$ is always called with the argument $(a_0=1,a_1=0)$---this maximizes the number of pulses required, though it does not maximize the number of controlled$^2$-NOT gates.) Including the subleading contributions, the count of gates and pulses used by our network in the average case is \begin{eqnarray} \label{pulses_enh_twoKone} &[EXPN]_{{\rm enhanced},2K+1}^{\rm ave} \> = \> \> (L-1)\cdot &[10K^2-14K+4,4K^2+8K -12,17K^2-36K+22,\nonumber\\ &&3K^2-3,2K^2-4K+2] +[2,{1\over 2}K+1,0,0,0]\, ,\nonumber\\ &[EXPN]_{{\rm enhanced},2K+1}^{\rm ave~ pulses} \> = \> \> (L-1)\cdot&\bigl(198K^2-270K+93\bigr)+{5\over 2}K +7 \, . \end{eqnarray} By allowing one extra qubit of scratch space, we can reduce the complexity (measured in laser pulses) somewhat. When $MULN_{[1]}$ calls for $OADDN_{[2]}$, we may use a controlled$^2$-NOT to store the AND of the two enable bits in the extra scratch qubit, and then call $OADDN_{[1]}$ instead, with the scratch bit as the enable bit. The extra controlled$^2$-NOT's that compute and clear the AND bit do not affect the leading behavior of the count of elementary gates. The only effect on the leading behavior is that $MUXFA_{[2]}$ can be replaced by $MUXFA_{[1]}$, for which \begin{eqnarray} & \left[MUXFA_{[1]}\right]^{\rm worst}\>&=\> [2,2,2,1,0] \, ,\nonumber\\ & \left[MUXFA_{[1]}\right]^{\rm ave}\>&=\> \left[{1\over 2},{5\over 4},{7\over 4},{1\over 2},0\right] \, . \end{eqnarray} Hence we find \begin{eqnarray} &&[EXPN]_{{\rm enhanced},2K+2}^{\rm worst} \> = \> \> LK^2\cdot [16,8,24,4,0] \cdot \Bigl( \> 1 + O(1/K)\>\Bigr )\, ,\nonumber\\ &&[EXPN]_{{\rm enhanced},2K+2}^{\rm ave} \> = \> \> LK^2\cdot [10,5,19,2,0] \cdot \Bigl( \> 1 + O(1/K)\>\Bigr )\, , \end{eqnarray} and \begin{eqnarray} &&[EXPN]_{{\rm enhanced},2K+2}^{\rm worst~ pulses} \> = \> \> 240 LK^2\cdot \Bigl( \> 1 + O(1/K)\>\Bigr )\, ,\nonumber\\ &&[EXPN]_{{\rm enhanced},2K+2}^{\rm ave~ pulses} \> = \> \> 186 LK^2\cdot \Bigl( \> 1 + O(1/K)\>\Bigr )\, . 
\end{eqnarray} The precise count in the average case is \begin{eqnarray} &[EXPN]_{{\rm enhanced},2K+2}^{\rm ave} \> = \> \> (L-1)\cdot&[10K^2-14K+4,5K^2+10K -14,19K^2-34K+21,\nonumber\\ &&2K^2-4K+2,0]+[2,{1\over2}K+1,0,0,0]\, ,\nonumber\\ &[EXPN]_{{\rm enhanced},2K+2}^{\rm ave~ pulses} \> = \> \> (L-1)\cdot&(186K^2-238K+99) +{5\over2}K+7\, . \end{eqnarray} Note that, in this version of the algorithm, no controlled$^4$-NOT gates are needed. \subsection{Basic machine} \label{sec:basic_count} Now we consider the basic machine, first with $2K+3$ bits of scratch space. We use one of our extra scratch bits to combine the enable bits for $OADDN$ as explained above. The other extra bit is used to replace $MUXFA_{[1]}$ by the version $MUXFA''_{[1]}$ given in Eq.\ (\ref{muxfaprimeprime})---$MUXFA''_{[1]}$ uses only the gates available on the basic machine. The new count is \begin{eqnarray} & \left[MUXFA''_{[1]}\right]^{\rm worst}\>&=\> [2,2,4] \, ,\nonumber\\ & \left[MUXFA''_{[1]}\right]^{\rm ave}\>&=\> \left[{1\over 2},{7\over 4},{11\over 4}\right] \, . \end{eqnarray} The $LT$ operation need not be modified, as it requires no controlled$^3$-NOT or controlled$^4$-NOT gates. We therefore find \begin{eqnarray} &&[EXPN]_{{\rm basic},2K+3}^{\rm worst} \> = \> \> LK^2\cdot [16,8,32] \cdot \Bigl( \> 1 + O(1/K)\>\Bigr )\, ,\nonumber\\ &&[EXPN]_{{\rm basic},2K+3}^{\rm ave} \> = \> \> LK^2\cdot [10,7,23] \cdot \Bigl( \> 1 + O(1/K)\>\Bigr )\, , \end{eqnarray} and \begin{eqnarray} &&[EXPN]_{{\rm basic},2K+3}^{\rm worst~ pulses} \> = \> \> 280 LK^2\cdot \Bigl( \> 1 + O(1/K)\>\Bigr )\, ,\nonumber\\ &&[EXPN]_{{\rm basic},2K+3}^{\rm ave~ pulses} \> = \> \> 206 LK^2\cdot \Bigl( \> 1 + O(1/K)\>\Bigr )\, . \end{eqnarray} With the subleading corrections we have in the average case \begin{eqnarray} &[EXPN]_{{\rm basic},2K+3}^{\rm ave} \> = \> \> (L-1)\cdot&[10K^2-14K+4,7K^2+6K -12,23K^2-42K+25]\nonumber\\ && +[2,{1\over 2}K+1,0]\, ,\nonumber\\ &[EXPN]_{{\rm basic},2K+3}^{\rm ave~ pulses} \> = \> \> (L-1)\cdot &(206K^2-278K+119)+{5\over 2}K+7 \, . \end{eqnarray} We can squeeze the scratch space down to $2K+2$ bits if we replace $MUXFA''_{[1]}$ by $MUXFA'''_{[1]}$ given in Eq.\ (\ref{muxfappp}), which does not require an extra scratch bit. The gate count becomes \begin{eqnarray} & \left[MUXFA'''_{[1]}\right]^{\rm worst}\>&=\> [2,2,6] \, ,\nonumber\\ & \left[MUXFA'''_{[1]}\right]^{\rm ave}\>&=\> \left[{1\over 2},{5\over 4},{15\over 4}\right] \, , \end{eqnarray} so that we now have \begin{eqnarray} &&[EXPN]_{{\rm basic},2K+2}^{\rm worst} \> = \> \> LK^2\cdot [16,8,40] \cdot \Bigl( \> 1 + O(1/K)\>\Bigr )\, ,\nonumber\\ &&[EXPN]_{{\rm basic},2K+2}^{\rm ave} \> = \> \> LK^2\cdot [10,5,27] \cdot \Bigl( \> 1 + O(1/K)\>\Bigr )\, , \end{eqnarray} and \begin{eqnarray} &&[EXPN]_{{\rm basic},2K+2}^{\rm worst~ pulses} \> = \> \> 316 LK^2\cdot \Bigl( \> 1 + O(1/K)\>\Bigr )\, ,\nonumber\\ &&[EXPN]_{{\rm basic},2K+2}^{\rm ave~ pulses} \> = \> \> 224 LK^2\cdot \Bigl( \> 1 + O(1/K)\>\Bigr )\, . \end{eqnarray} The precise count of gates and pulses in the average case is \begin{eqnarray} &[EXPN]_{{\rm basic},2K+2}^{\rm ave} \> = \> \> (L-1)\cdot&[10K^2-14K+4,5K^2+10K -14,27K^2-50K+29]+\nonumber\\ &&[2,{1\over 2} K+1,0]\, ,\nonumber\\ &[EXPN]_{{\rm basic},2K+2}^{\rm ave~ pulses} \> = \> \> (L-1)\cdot &(224K^2-314K+137) +{5\over 2}K+7\, . \end{eqnarray} To squeeze the scratch space by yet another bit, we must abandon the extra bit used by $MULN$. 
We then construct $MUXFA''''_{[2]}$ by expanding the controlled$^3$-NOT and controlled$^4$-NOT gates in terms of controlled$^2$-NOT gates, as discussed in Sec.\ \ref{sec:addition}. We find that \begin{eqnarray} & \left[MUXFA''''_{[2]}\right]^{\rm worst}\>&=\> [2,1,15] \, ,\nonumber\\ & \left[MUXFA''''_{[2]}\right]^{\rm ave}\>&=\> \left[{1\over 2},1,{37\over 4}\right] \, ; \end{eqnarray} therefore, \begin{eqnarray} &&[EXPN]_{{\rm basic},2K+1}^{\rm worst} \> = \> \> LK^2\cdot [16,4,76] \cdot \Bigl( \> 1 + O(1/K)\>\Bigr )\, ,\nonumber\\ &&[EXPN]_{{\rm basic},2K+1}^{\rm ave} \> = \> \> LK^2\cdot [10,4,49] \cdot \Bigl( \> 1 + O(1/K)\>\Bigr )\, , \end{eqnarray} and \begin{eqnarray} &&[EXPN]_{{\rm basic},2K+1}^{\rm worst~ pulses} \> = \> \> 568 LK^2\cdot \Bigl( \> 1 + O(1/K)\>\Bigr )\, ,\nonumber\\ &&[EXPN]_{{\rm basic},2K+1}^{\rm ave~ pulses} \> = \> \> 373 LK^2\cdot \Bigl( \> 1 + O(1/K)\>\Bigr )\, . \end{eqnarray} Including the subleading corrections the count in the average case is \begin{eqnarray} &[EXPN]_{{\rm basic},2K+1}^{\rm ave} \> = \> \>(L-1)\cdot &[10K^2-14K+4,4K^2+8K -12,49K^2-76K+30]\nonumber\\ &&+[2,{1\over 2} K+1,0]\, ,\nonumber\\ &[EXPN]_{{\rm basic},2K+1}^{\rm ave~ pulses} \> = \> \> (L-1)\cdot&(373K^2-506K+154)+{5\over 2}K+7 \, . \end{eqnarray} Our results for the average number of gates and pulses are summarized in the following table: \begin{center} \begin{tabular}{|c|c|c||c|c|} \cline{2-5} \multicolumn{1}{c|}{} & \multicolumn{2}{c||}{basic} & \multicolumn{2}{c|}{enhanced} \\ \hline scratch&gates & pulses& gates& pulses \\ \hline\hline 2K+1 &[10,4,49]&373&[10,4,17,3,2]& 198 \\ 2K+2 & [10,5,27]& 224& [10,5,19,2,0]&186 \\ \cline{4-5} 2K+3 & [10,7,23]& 206 &\multicolumn{2}{c}{}\\ \cline{1-3} \end{tabular} \end{center} \begin{equation} \label{gate_table} \end{equation} Each entry in the table is the coefficient of $LK^2$ (the leading term) in the number of gates or pulses, where the notation for the number of gates is that defined in Eq.\ (\ref{enhanced_count},\ref{basic_count}). Of course, the numbers just represent our best effort to construct an efficient network. Perhaps a more clever designer could do better. \subsection{Unlimited Space} \label{sec:unlimited} The gate counts summarized in Eq.\ (\ref{gate_table}) provide a ``case study'' of the tradeoff between the amount of scratch space and the speed of computation. But all of the algorithms described above are quite parsimonious with scratch space. We will now consider how increasing the amount of scratch space considerably allows us to speed things up further. First of all, recall that our $OADDN$ routine calls the comparison operator $LT$ four times, twice running forwards and twice running in reverse. The point was that we wanted to clear the scratch space used by $LT$ before $MADD$ acted, so that space could be reused by $MADD$. But if we were to increase the scratch space by $K-1$ bits, it would not be necessary for $LT$ to run backwards before $MADD$ acts. Instead, a modified $OADDN$ routine could clear the scratch space used by $LT$ and by $MADD$, running each subroutine only twice (once forward and once backward). Thus, with adequate space, we can replace Eq.\ (\ref{expn_ops}) with \begin{equation} [EXPN] \> = \> \Bigl (\> 4LK^2\cdot \left[MUXFA_{[1]}\right] + 4LK\cdot [LT] \>\Bigr ) \Bigl( \> 1 + O(1/K)\>\Bigr ) \, . 
\end{equation} Using this observation, we can modify our old network on the enhanced machine (with $2K+2$ bits of scratch) to obtain \begin{eqnarray} &&[EXPN]_{{\rm enhanced},3K+1}^{\rm ave} \> = \> \> LK^2\cdot [6,5,13,2,0] \cdot \Bigl( \> 1 + O(1/K)\>\Bigr )\, ,\nonumber\\ &&[EXPN]_{{\rm enhanced},3K+1}^{\rm ave~ pulses} \> = \> \> 140 LK^2\cdot \Bigl( \> 1 + O(1/K)\>\Bigr )\, , \end{eqnarray} about 25\% faster. To do substantially better requires much more space. Optimized for speed, our algorithms will never clear the scratch space at intermediate stages of the computation. Instead, $EXPN$ will carry out of order $LK$ additions, filling new space each time a comparison is performed or a sum is computed. Once the computation of $x^a ~ ({\rm mod} ~N)$ is complete, we copy the result and then run the computation backwards to clear all the scratch space. But with altogether $\sim LK$ $ADDN$'s, each involving a comparison and a sum, we fill about $2LK^2$ qubits of scratch space. Combining the cost of running the gates forward and backward, we have \begin{equation} [EXPN]\cdot [EXPN^{-1}] \> = \> \Bigl (\> 2LK^2\cdot \left[MUXFA_{[1]}\right] + 2LK\cdot [LT] \>\Bigr ) \Bigl( \> 1 + O(1/K)\>\Bigr ) \, , \end{equation} and therefore \begin{eqnarray} &&[EXPN]_{{\rm enhanced},\mathchar"0218 2LK^2}^{\rm ave} \> = \> \> LK^2\cdot \left[3,{5\over 2},{13\over 2},1,0\right] \cdot \Bigl( \> 1 + O(1/K)\>\Bigr )\, ,\nonumber\\ &&[EXPN]_{{\rm enhanced},\mathchar"0218 2LK^2}^{\rm ave~ pulses} \> = \> \> 70 LK^2\cdot \Bigl( \> 1 + O(1/K)\>\Bigr )\, , \end{eqnarray} another factor of 2 improvement in speed. For asymptotically large $K$, further improvements are possible, for we can invoke classical algorithms that multiply $K$-bit numbers in time less than O($K^2$). The fastest known, the Sch\"onhage-Strassen algorithm, requires O($K\log K\log\log K$) elementary operations \cite{fast_mult}. It thus should be possible to perform modular exponentiation on a quantum computer in a time of order $LK\log K\log\log K$. We have not worked out the corresponding networks in detail, or determined the precise scratch space requirements for such an algorithm. \subsection{Minimal Space} \label{sec:minimal} Now consider the other extreme, where we disregard speed, and optimize our algorithms to minimize space. Since addition is an invertible operation, it is possible to construct a unitary ``overwriting addition'' operator that adds a $c$-number to a $q$-number and replaces the $q$ number addend with the sum. But the construction of our $OADDN$ operator involved two stages---first we performed the addition {\it without} overwriting the input, and then ran the addition routine backwards to erase the input. Thus, our overwriting $OADDN$ routine for adding a $K$-bit $c$-number to a $K$-bit $q$-number (mod $N$) required $K+1$ bits of scratch space. There is no reason in principle why this scratch space should be necessary (though eliminating it may slow down the computation). In fact, we will show that it is possible to add without using any scratch space at all. Of course, we will still need a comparison bit to perform mod $N$ addition. And there is no obvious way to eliminate the need for a $K$-bit scratch register that stores partial sums when we multiply. Still, using overwriting addition, we can construct an $EXPN$ operator that requires just $K+1$ bits of scratch space (compared to $2K+1$ in our best previous effort). The price we pay is that the computation slows down considerably. 
The key to adding without scratch space is to work from left to right instead of right to left. It is sufficient to see how to add a single-bit $c$-number $a_0$ to a $K$-bit $q$-number $b$, obtaining a $(K+1)$-bit $q$-number. Of course, if the classical bit is 0, we do nothing. If the classical bit is 1, we perform addition by executing the pseudo-code: \begin{eqnarray} \label{pseudo_oadd} {\tt if}&\ b_{K-1}=b_{K-2}=\cdots=b_1=b_0=1:\quad& {\tt flip}\ b_K\nonumber\\ {\tt if}&\ b_{K-2}=b_{K-3}=\cdots=b_1=b_0=1:\quad& {\tt flip}\ b_{K-1}\nonumber\\ &\cdot\nonumber\\ &\cdot\nonumber\\ {\tt if}&\ b_{1}=b_{0}=1:\quad& {\tt flip} \ b_2\nonumber\\ {\tt if}&\ b_{0}=1:\quad& {\tt flip} \ b_1\nonumber\\ & & {\tt flip}\ b_0 \end{eqnarray} Thus, the operator \begin{eqnarray} ADD(a_0)_{\beta_{K},\beta}\>\> && \equiv\>\> {\tt if} \ (a_0 = 1) \ \nonumber\\ C_{\beta_0}\> && C_{\lbrack\!\lbrack \beta_0\rbrack\!\rbrack,\beta_1} \>\dots\> C_{\lbrack\!\lbrack \beta_0,\beta_1\dots\beta_{K-2}\rbrack\!\rbrack,\beta_{K-1}}\> C_{\lbrack\!\lbrack \beta_0,\beta_1\dots\beta_{K-1}\rbrack\!\rbrack,\beta_K} \end{eqnarray} has the action \begin{equation} ADD(a_0)_{\beta_{K},\beta} : \>\> \bigl |0 \bigr\rangle_{\beta_K}\bigl |{b}\bigr\rangle_\beta \longmapsto \bigl |\left(b+a_0\right)_{K} \bigr\rangle_{\beta_K}\bigl |{b+a_0}\bigr\rangle_\beta \, . \end{equation} It fills the $K+1$ qubits $|\cdot\bigr\rangle_{\beta_K}\bigl |\cdot\bigr\rangle_\beta$ with the $(K+1)$-bit sum $b+a_0$. To add a $K$-bit $c$-number $a$ to the $K$-bit $q$-number $b$, we apply this procedure iteratively. After adding $a_0$ to $b$, we add $a_1$ to the $(K-1)$-qubit number $b_{K-1}b_{K-2}\dots b_2 b_1$, then add $a_2$ to the $(K-2)$-qubit number $b_{K-1}b_{K-2}\dots b_3 b_2$, and so on. Thus, the computation of $b+a$ requires in the worst case ($a=111\dots 11$) a total number of operations \begin{equation} [ADD(a)]\> = \> [K,K,K-1,K-2,\dots,2,1] \> ; \end{equation} that is, $K$ NOT's, $K$ controlled-NOT's, $K-1$ controlled$^2$-NOT's, $\dots$, 2 controlled$^{K-1}$-NOT's, and 1 controlled$^K$-NOT. In the average case (where half the bits of $a$ are zero), only half of these gates need to be executed. For the Cirac-Zoller device, figuring $2k+3$ laser pulses for a controlled$^k$-NOT with $k\ge 1$, and one pulse for a NOT, this translates into ${1\over 6}K\left(2K^2 +15K+19\right)$ laser pulses for each $K$-bit addition, in the worst case, or in the average case \begin{equation} [ADD]^{\rm ave~pulses}_{\rm no ~ scratch}={1\over 6} K^3 + {5\over 4} K^2 + {19\over 12} K \, . \end{equation} We can easily promote this operation to a {\it conditional} $ADD$ with $\ell$ enable bits by simply adding the enable qubits to the control string of each gate; the complexity then becomes \begin{equation} \left[ADD_{[\ell]}\right]^{\rm ave~pulses}_{\rm no ~ scratch}={1\over 6} K^3 + \left({1\over 2}\ell+{5\over 4}\right) K^2 + \left({3\over 2}\ell + {31\over 12}\right) K \, ,\quad \ell\ge 1 \, . \end{equation} We will need to add mod $N$. But if we can add, we can compare. We can do the comparison of $N-a$ and $b$ by adding $(2^K-N+a)$ to $b$; the final carry bit will be 1 only for $a+b\ge N$. Thus, we can use the overwriting addition operation $ADD$ in place of $LT$ to fix the value of the select bit, and then use a multiplexed version of $ADD$ to complete the mod $N$ addition. 
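To make the left-to-right rule of Eq.\ (\ref{pseudo_oadd}) concrete, the following classical bit-level sketch (illustrative only; the function name is ours) adds each classical bit $a_i$ by running the cascade of controlled flips on $b_K,\dots,b_i$ with the highest target first, and checks the result against ordinary addition:
\begin{verbatim}
# Classical bit-level sketch of the no-scratch, left-to-right adder.
def add_no_scratch(a, b, K):
    """Return a + b on K+1 bits, computed as in Eq. (pseudo_oadd)."""
    bits = [(b >> j) & 1 for j in range(K + 1)]    # b_0 ... b_K, with b_K = 0
    for i in range(K):                             # add the bit a_i (weight 2^i)
        if (a >> i) & 1:
            # highest target first, so the control bits still hold old values
            for t in range(K, i, -1):              # a controlled^(t-i)-NOT
                if all(bits[j] == 1 for j in range(i, t)):
                    bits[t] ^= 1
            bits[i] ^= 1                           # the unconditional NOT
    return sum(bit << j for j, bit in enumerate(bits))

K = 4
assert all(add_no_scratch(a, b, K) == a + b
           for a in range(2 ** K) for b in range(2 ** K))
\end{verbatim}
In the worst case $a=11\cdots1$, the cascade for bit $a_i$ contains the unconditional NOT plus one controlled$^k$-NOT for each $k\le K-i$, which reproduces the operation count $[K,K,K-1,\dots,2,1]$ quoted above.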
Following this strategy, we construct an overwriting mod $N$ adder that uses just one qubit of scratch space according to \begin{eqnarray} OADDN'&&(a,N)_{\lbrack\!\lbrack \mathchar"024C\rbrack\!\rbrack, \beta,\beta_K} \> \equiv \nonumber\\ \>&& ADD(a)_{\lbrack\!\lbrack \mathchar"024C\rbrack\!\rbrack, \beta_K,\beta}\cdot MADD'(N-a, 2^K-a)_{\lbrack\!\lbrack \mathchar"024C\rbrack\!\rbrack, \beta_K,\beta} \cdot ADD(2^K-N+a)_{\lbrack\!\lbrack \mathchar"024C\rbrack\!\rbrack, \beta_K,\beta}: \nonumber\\ && \qquad\bigl |0 \bigr\rangle_{\beta_K}\bigl |{b}\bigr\rangle_\beta \longmapsto \bigl |0\bigr\rangle_{\beta_K}\bigl |{b+\mathchar"024C \wedge a ~ ({\rm mod} ~ N)}\bigr\rangle_\beta \, . \end{eqnarray} Here each $ADD$ operation computes a $(K+1)$-bit sum as above, placing the final carry bit in the qubit $\bigl |\cdot\bigr\rangle_{\beta_K}$; however $MADD'$ computes a $K$-bit sum -- it is a multiplexed adder that adds $N-a$ if the select bit $\bigl |\cdot\bigr\rangle_{\beta_K}$ reads 0, and adds $2^K -a$ if the select bit reads 1. The construction of $MADD'$ follows the spirit of the construction of $MADD$ described in Sec.\ \ref{sec:addition}. In the average case, the number of laser pulses required to implement this $OADDN'$ operation is \begin{equation} \left[OADDN'_{[\ell]}\right]^{\rm ave ~ pulses}_{\rm 1~ scratch}= {7\over 12}K^3 + \left({7\over 4}\ell+{33\over 8}\right) K^2 + \left({15\over 4}\ell+{169\over 24}\right) K \, . \end{equation} The construction of the modular exponentiation operator $EXPN$ from this $OADDN'$ operator follows the construction described in Sec.\ \ref{sec:detail}. Thus, using the expression for $[EXPN]$ in terms of $\left[OADDN_{[2]}\right]$ implicit in Eq.\ (\ref{expn_count}), we find that with $K+1$ qubits of scratch space, the $EXPN$ function can be computed, in the average case, with a number of laser pulses given by \begin{equation} [EXPN]^{\rm ave ~ pulses}_{K+1}=(L-1)\cdot \left( {7\over 6} K^4 + {169\over 12} K^3 + {83\over 6} K^2 - {97\over 12} K\right) + {5\over 2}K + 7 \, . \end{equation} For small values of $K$ ($K<7$), fewer pulses are required than for the algorithms described in Sec.\ \ref{sec:enhanced_count} and \ref{sec:basic_count}. \section{$N=15$} \label{sec:fifteen} As we noted in Sec.\ \ref{sec:factoring}, Shor's factorization algorithm fails if $N$ is even or a prime power ($N=p^\alpha$, $p$ prime). Thus, the smallest composite integer $N$ that can be successfully factored by Shor's method is $N=15$. Though factoring 15 is not very hard, it is amusing to consider the computational resources that would be needed to solve this simplest of quantum factoring problems on, say, a linear ion trap. Appealing to Eq.\ (\ref{pulses_enh_twoKone}), with $K=4$ and $L=2K=8$, our ``average case'' estimate of the number of laser pulses required on a machine with altogether $K+L+(2K+1)=21$ qubits of storage is 15,284. With 22 qubits of storage, our estimate improves to 14,878 pulses. With another three qubits (25 total), we can use the technique described in Sec.\ \ref{sec:unlimited} to achieve a further improvement in speed. Several observations allow us to reduce these resources substantially further. First of all, we notice that, for any positive integer $x$ with $x<15$ and $gcd(x,15)=1$ ({\it i.e.}, for $x=1,2,4,7,8,11,13,14$), we have $x^4\equiv 1 {}~({\rm mod} ~15)$. Therefore, \begin{equation} x^a= x^{2a_1}\cdot x^{a_0}\ ; \end{equation} only the last two bits of $a$ are relevant in the computation of $x^a$. 
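This observation is immediate to verify classically; a short illustrative check:
\begin{verbatim}
from math import gcd

coprime = [x for x in range(1, 15) if gcd(x, 15) == 1]
assert coprime == [1, 2, 4, 7, 8, 11, 13, 14]
assert all(pow(x, 4, 15) == 1 for x in coprime)
# hence x^a (mod 15) depends only on a (mod 4), i.e. on the last two bits of a
assert all(pow(x, a, 15) == pow(x, a % 4, 15) for x in coprime for a in range(16))
\end{verbatim}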
Hence, we might as well choose $L=2$ instead of $L=8$, which reduces the number of elementary operations required by a factor of about 7. (Even if the value of $L$ used in the evaluation of the discrete Fourier transform is greater than 2, there is still no point in using $L>2$ in the evaluation of the modular exponential function.) Second, we can save on storage space (and improve speed) by noting that the overwriting addition routine described in Sec.\ \ref{sec:minimal} is reasonably efficient for small values of $K$. For $K=4$ and $L=2$, we need 11 qubits of storage and an estimated 1406 laser pulses. For $N=15$, the above is the most efficient algorithm we know that actually computes $x^a$ on the quantum computer. We can do still better if we are willing to allow the classical computer to perform the calculation of $x^a$. Obviously, this strategy will fail dismally for large values of $K$---the classical calculation will require exponential time. Still, if our goal is merely to construct the entangled state \begin{equation} \label{mod_entangled} {1\over 2^{L/2}}\sum_{a} |a \rangle_i |x^a (mod ~N)\rangle_o \ , \end{equation} while using our quantum computational resources as sparingly as possible, then classical computation of $x^a$ is the most efficient procedure for small $K$. So we imagine that $x<15$ with $gcd(x,15)=1$ is randomly chosen, and that the classical computer generates a ``lookup table'' by computing the four bit number $x^a$ (mod 15) for $a=0,1,2,3$. The classical computer then instructs the quantum computer to execute a sequence of operations that prepares the state Eq.\ (\ref{mod_entangled}). These operations require no scratch space at all, so only $L+K=6$ qubits of storage are needed to prepare the entangled state. The ``worst case'' (most complex lookup table) is $x=7$ or $x=13$. The lookup table for $x=7$ is: \begin{center} \begin{tabular}{|ll|llll|} \hline \multicolumn{2}{|c|}{$a$}& \multicolumn{4}{c|}{$7^a ~({\rm mod}~ 15)$} \\ \hline 0&0&0&0&0&1\\ 0&1&0&1&1&1\\ 1&0&0&1&0&0\\ 1&1&1&1&0&1 \\ \hline $a_1$&$a_0$&$b_3$&$b_2$&$b_1$&$b_0$\\ \hline \end{tabular} \end{center} \begin{equation} \end{equation} An operator \begin{equation} EXPN(x=7,N=15)_{\alpha,\beta}: \>\> \bigl|{a}\bigr\rangle_\alpha^* \bigl |{0}\bigr\rangle_\beta \longmapsto \bigl|{a}\bigr\rangle_\alpha^* \bigl |{7^a ~({\rm mod} ~ 15)}\bigr\rangle_\beta \end{equation} that recreates this table can be constructed as \begin{equation} \label{expn_short} EXPN(x,N)_{\alpha,\beta}\> \equiv\> C_{\alpha_1}C_{\lbrack\!\lbrack \alpha_1,\alpha_0\rbrack\!\rbrack,\beta_1}C_{\alpha_0}C_{\lbrack\!\lbrack \alpha_1,\alpha_0\rbrack\!\rbrack,\beta_2}C_{\alpha_1}C_{\lbrack\!\lbrack \alpha_1,\alpha_0\rbrack\!\rbrack,\beta_0}C_{\alpha_0}C_{\lbrack\!\lbrack \alpha_1,\alpha_0\rbrack\!\rbrack,\beta_3}C_{\beta_2}C_{\beta_0} \ . \end{equation} The two NOT's at the beginning generate a ``table'' that is all 1's in the $\beta_0$ and $\beta_2$ columns, and all 0's in the $\beta_1$ and $\beta_3$ columns. The remaining operations fix the one incorrect entry in each row of the table. Thus, we have constructed an $EXPN$ operator with complexity \begin{equation} \left[EXPN(7,15)\right]= [6,0,4] \ ; \end{equation} it can be implemented with 34 laser pulses on the Cirac-Zoller device. 
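One can confirm, either by hand or with the short classical bit-level simulation below (ours, purely illustrative), that the gate sequence of Eq.\ (\ref{expn_short}), applied from right to left, reproduces the lookup table for $7^a~({\rm mod}~15)$ and leaves the input register unchanged:
\begin{verbatim}
# Bit-level simulation of Eq. (expn_short), gates applied right to left.
def expn_7_15(a1, a0):
    b3 = b2 = b1 = b0 = 0
    b0 ^= 1                    # C_{beta_0}
    b2 ^= 1                    # C_{beta_2}
    b3 ^= a1 & a0              # C_{[[alpha_1,alpha_0]],beta_3}
    a0 ^= 1                    # C_{alpha_0}
    b0 ^= a1 & a0              # C_{[[alpha_1,alpha_0]],beta_0}
    a1 ^= 1                    # C_{alpha_1}
    b2 ^= a1 & a0              # C_{[[alpha_1,alpha_0]],beta_2}
    a0 ^= 1                    # C_{alpha_0}
    b1 ^= a1 & a0              # C_{[[alpha_1,alpha_0]],beta_1}
    a1 ^= 1                    # C_{alpha_1}  (input register restored)
    return 8 * b3 + 4 * b2 + 2 * b1 + b0

assert all(expn_7_15(a >> 1, a & 1) == pow(7, a, 15) for a in range(4))
\end{verbatim}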
Since two additional pulses suffice to prepare the input register in the superposition state \begin{equation} \label{fifteen_input} {1\over 2}\sum_{a=0}^{3} |a \rangle_i \end{equation} before $EXPN$ acts, we need 36 laser pulses to prepare the entangled state Eq.\ (\ref{mod_entangled}). The $EXPN$ operator constructed in Eq.\ (\ref{expn_short}) acts trivially on the input $q$-number $a$. Of course, this feature is not necessary; as long as the output state has the right correlations between the $\bigl|{\cdot}\bigr\rangle_\alpha^*$ and $\bigl|{\cdot}\bigr\rangle_\beta$ registers, we will successfully prepare the entangled state Eq.\ (\ref{mod_entangled}). By exploiting this observation, we can achieve another modest improvement in the complexity of $EXPN$; we see that \begin{equation} C_{\lbrack\!\lbrack \alpha_1,\alpha_0\rbrack\!\rbrack,\beta_3}C_{\alpha_0}C_{\lbrack\!\lbrack \alpha_1,\alpha_0\rbrack\!\rbrack,\beta_0}C_{\alpha_1}C_{\lbrack\!\lbrack \alpha_1,\alpha_0\rbrack\!\rbrack,\beta_2}C_{\alpha_0}C_{\lbrack\!\lbrack \alpha_1,\alpha_0\rbrack\!\rbrack,\beta_1}C_{\beta_2}C_{\beta_0} \ . \end{equation} applied to the input Eq.\ (\ref{fifteen_input}) also produces the output Eq.\ (\ref{mod_entangled}), even though it flips the value of $a_1$. Compared to Eq.\ (\ref{expn_short}), we do without the final NOT gate, and hence save one laser pulse. We can do better still by invoking the ``custom gates'' described in Appendix A; another implementation of the $EXPN$ operator is \begin{equation} \label{expn_custom} EXPN'(x,N)_{\alpha,\beta}\> \equiv\> C_{\lbrack\!\lbrack \overline\alpha_1,\alpha_0\rbrack\!\rbrack,\beta_1}C_{\lbrack\!\lbrack \overline\alpha_1,\overline\alpha_0\rbrack\!\rbrack,\beta_2}C_{\lbrack\!\lbrack \alpha_1,\overline\alpha_0\rbrack\!\rbrack,\beta_0}C_{\lbrack\!\lbrack \alpha_1,\alpha_0\rbrack\!\rbrack,\beta_3}C_{\beta_2}C_{\beta_0} \ . \end{equation} Here, $C_{\lbrack\!\lbrack \overline\alpha_1,\overline\alpha_0\rbrack\!\rbrack,\beta_2}$, for example, is a gate that flips the value of qubit $\beta_2$ if and only if both qubit $\alpha_1$ and qubit $\alpha_0$ have the value {\it zero} rather than one (see Appendix A). Each custom gate in Eq.\ (\ref{expn_custom}) can be implemented with 7 laser pulses. Hence, compared to Eq.\ (\ref{expn_short}) we save 4 pulses, and the state Eq.\ (\ref{mod_entangled}) can be prepared with just 32 pulses. To complete the task of ``factoring 15,'' it only remains to perform the Fourier transform on the input register and read it out. The measured value, the result of our quantum computation, will be a nonnegative integer $y<2^L$ satisfying \begin{equation} \label{y_fifteen} {y\over 2^L} ={{\rm integer}\over r} \ , \end{equation} where $r$ is the order of $x$ mod $N$ ($r$=4 in the case $N$=15 and $x$=7), and the integer takes a random value ranging from 0 to $r-1$. (Here the probability distribution for $y$ is actually perfectly peaked at the values in Eq.\ (\ref{y_fifteen}), because $r$ divides $2^L$.) Thus, if we perform the Fourier transform with $L=2$, the result for $y$ is a completely {\it random} number ranging over $y=0,1,2,3$. (Even so, by reducing $y$/4 to lowest terms, we succeed in recovering the correct value of $r$ with probability 1/2.) It is a bit disappointing to go to all the trouble to prepare the state Eq.\ (\ref{mod_entangled}) only to read out a random number in the end. If we wish, we can increase the number of qubits $L$ of the input register (though the $EXPN$ operator will still act only on the last two qubits). 
Then the outcome of the calculation will be a random multiple of $2^{L-2}$. But the probability of recovering the correct value of $r$ is still $1/2$. Once we have found $r=4$, a classical computer calculates $7^{(4/2)}\pm 1 \equiv 3,5 ~({\rm mod}~ N)$, which are, in fact, the factors of $N=15$. Since the $L=2$ Fourier transform can be performed using $L(2L-1)=6$ laser pulses on the ion trap, we can ``factor 15'' with 38 pulses (not counting the final reading out of the device). For values of $x$ other than 7 and 13, the number of pulses required is even smaller. \section{Testing the Fourier transform} In Shor's factorization algorithm, a periodic function (the modular exponential function) is computed, creating entanglement between the input register and the output register of our quantum computer. Then the Fourier transform is applied to the input register, and the input register is read. In Sec.\ \ref{sec:fifteen}, we noted that a simple demonstration of this procedure (factorization of 15) could be carried out on a linear ion trap, requiring only a modest number of laser pulses. Here we point out an even simpler demonstration of the principle underlying Shor's algorithm. Consider the function \begin{equation} f_K(a)=a ~({\rm mod} ~2^K) \ . \end{equation} Evaluation of this function is very easy, since it merely copies the last $K$ bits of the argument $a$. A unitary operator $MOD_{2^K}$ that acts according to \begin{equation} MOD_{2^K}:\>\> \bigl|a\bigr\rangle_\alpha^* \bigl|0\bigr\rangle_\beta \longmapsto \bigl|a\bigr\rangle_\alpha^* \bigl|a ~({\rm mod} {}~2^K)\bigr\rangle_\beta \end{equation} can be constructed as \begin{equation} MOD_{2^K}\>\equiv\> C_{\lbrack\!\lbrack \alpha_{K-1} \rbrack\!\rbrack,\beta_{K-1}}\cdots C_{\lbrack\!\lbrack \alpha_{1} \rbrack\!\rbrack,\beta_{1}} C_{\lbrack\!\lbrack \alpha_{0} \rbrack\!\rbrack,\beta_{0}} \end{equation} (where $\bigl|\cdot\bigr\rangle_\alpha^*$ is an $L$-qubit register and $\bigl|0\bigr\rangle_\beta$ is a $K$-qubit register). These $K$ controlled-NOT operations can be accomplished with $5K$ laser pulses in the ion trap. Including the $L$ single qubit rotations needed to prepare the input register, then, the entangled state \begin{equation} {1\over 2^{L/2}}\sum_{a=0}^{2^L-1} \bigl|a\bigr\rangle_\alpha^* \bigl|a ~({\rm mod} ~2^K)\bigr\rangle_\beta \end{equation} can be generated with $5K+L$ pulses. Now we can Fourier transform the input register ($L(2L-1)$ pulses), and read it out. Since the period $2^K$ of $f_K$ divides $2^L$, the Fourier transform should be perfectly peaked about values of $y$ that satisfy \begin{equation} y=2^{L-K}\cdot\left({\rm integer}\right) \end{equation} Thus, $y_{K-1},\dots,y_1,y_0$ should be identically zero, while $y_{L-1},\dots,y_{K+1},y_K$ take random values. The very simplest demonstration of this type ($L=2, K=1$) requires only three ions. Since $f_1$ has period 2, the two-qubit input register, after Fourier transforming, should read $y_1={\rm random}, y_0=0$. This demonstration can be performed with 13 laser pulses (not counting the final reading out), and should be feasible with current technology. \acknowledgments We thank Al Despain, Jeff Kimble, and Hideo Mabuchi for helpful discussions and encouragement. This research was supported in part by DOE Grant No. DE-FG03-92-ER40701, and by Caltech's Summer Undergraduate Research Fellowship program. 
\appendix \section{Custom gates} In the algorithms that we have described in this paper, we have used the controlled$^k$-NOT operator as our fundamental quantum gate. Of course, there is much arbitrariness in this choice. For example, instead of the operation $C_{\lbrack\!\lbrack{i_1,\dots,i_k}\rbrack\!\rbrack,j}$, which flips qubit $j$ if and only if qubits $i_1,\dots,i_k$ all take the value 1, we could employ a gate that flips qubit $j$ if and only if $i_1 i_2 \dots i_k$ is some other specified string of $k$ bits. This generalized gate, like $C_{\lbrack\!\lbrack{i_1,\dots,i_k}\rbrack\!\rbrack,j}$ itself, can easily be implemented on, say, a linear ion trap. We remark here that using such ``custom gates'' can reduce the complexity of some algorithms (as measured by the total number of laser pulses required). To see how these generalized gates can be constructed using the ion trap, we note first of all that if we apply an appropriately tuned $3\pi$ pulse (instead of a $\pi$ pulse) to the $i$th ion,\footnote{Alternatively, we can implement $\tilde W^{(i)}_{\rm phon}$ with a $\pi$ pulse if the laser phase is appropriately adjusted.} then the operation $W^{(i)}_{\rm phon}$ defined in Eq.\ (\ref{Wphonon}) is replaced by \begin{equation} \tilde W^{(i)}_{\rm phon}:\>\> \left\{\begin{array}{lll}&\bigl | g\bigr\rangle_i| 0\bigr\rangle_{\rm CM}\longmapsto &| g\bigr\rangle_i| 0\bigr\rangle_{\rm CM}\nonumber\\ &\bigl | e\bigr\rangle_i| 0\bigr\rangle_{\rm CM}\longmapsto i&| g\bigr\rangle_i| 1\bigr\rangle_{\rm CM}\end{array}\right\}\nonumber\\ \end{equation} (whose nontrivial action differs by a sign from that of $W^{(i)}_{\rm phon}$). With $W^{(i)}_{\rm phon}$ and $\tilde W^{(i)}_{\rm phon}$, we can construct an alternative conditional phase gate \begin{equation} \tilde V^{(i,\overline j)}\>\equiv\> W^{(i)}_{\rm phon}\cdot V^{(j)} \cdot \tilde W^{(i)}_{\rm phon}:\>\> \bigl|\epsilon\bigr\rangle_i \bigl|\eta\bigr\rangle_j \longmapsto (-1)^{\epsilon\wedge\mathchar"0218\eta} \bigl|\epsilon\bigr\rangle_i \bigl|\eta\bigr\rangle_j \ \end{equation} that acts nontrivially only if $\epsilon=1$ and $\eta=0$. With an appropriate change of basis, this conditional phase gate becomes \begin{eqnarray} C_{{\lbrack\!\lbrack \overline i \rbrack\!\rbrack},j}\>\equiv \left[\tilde U^{(j)}\right]^{-1}\cdot V^{(j,\overline i)}\cdot \tilde U^{(j)}\> &&= \> \left[\tilde U^{(j)}\right]^{-1}\cdot W^{(j)}_{\rm phon}\cdot V^{(i)} \cdot \tilde W^{(j)}_{\rm phon}\cdot \tilde U^{(j)}:\nonumber\\ &&\bigl|\epsilon\bigr\rangle_i \bigl|\eta\bigr\rangle_j \longmapsto \bigl|\epsilon\bigr\rangle_i \bigl|\eta\oplus\epsilon\oplus 1\bigr\rangle_j \, , \end{eqnarray} a modified controlled-NOT gate that flips the target qubit if and only if the control qubit reads {\it zero} (compare Eq.\ (\ref{trap_CN})). Like the controlled-NOT gate, then, $C_{{\lbrack\!\lbrack \overline i \rbrack\!\rbrack},j}$ can be implemented with 5 laser pulses. Following the discussion in Sec.\ \ref{sec:NOTpulses}, it is straightforward to construct a modified controlled$^k$-NOT gate with a specified ``custom'' control string, for any $k\ge 1$. As a simple illustration of how a reduction in complexity can be achieved by using custom gates, consider the full adder $FA(a)$ defined by Eq.\ (\ref{FA0},\ref{FA1}) and shown in Fig.\ \ref{figB}. 
We can replace $FA(1)$ by the alternative implementation \begin{equation} FA'(a=1)_{1,2,3}\> \equiv \> C_{\lbrack\!\lbrack \overline 1 \rbrack\!\rbrack, 2} C_{\lbrack\!\lbrack \overline 1,2\rbrack\!\rbrack, 3} C_{\lbrack\!\lbrack 1\rbrack\!\rbrack, 3} \end{equation} (where the $\overline i$ indicates that qubit $i$ must have the value 0 (not 1) for the gate to act nontrivially). This saves one NOT gate, and hence one laser pulse, compared to the implementation in Eq.\ (\ref{FA1}). Another example of the use of custom gates is described in Sec.\ \ref{sec:fifteen}. \begin{references} \bibitem{shor}{P.W. Shor, ``Algorithms for quantum computation: Discrete logarithms and factoring,'' in {\it Proceedings 35th Annual Symposium on Foundations of Computer Science}, edited by S. Goldwasser (IEEE Computer Society Press, Los Alamitos, CA, 1994), p. 124.} \bibitem{feynman}{R. P. Feynman, Int. J. Theor. Phys. {\bf 21}, 467 (1982).} \bibitem{deutsch}{D. Deutsch, Proc. R. Soc. London A {\bf 400}, 96 (1985); {\bf 425}, 73 (1989).} \bibitem{others}{R. Jozsa, Proc. Roy. Soc. London A {\bf 435}, 563 (1991); D. Deutsch and R. Jozsa, Proc. R. Soc. London A. {\bf 439}, 553 (1992); A. Berthiaume and G. Brassard, ``The quantum challenge to structural complexity theory,'' in {\it Proceedings of the 7th Annual Structure in Complexity Theory Conference}, (IEEE Computer Society Press, Los Alamitos, CA, 1992), p. 132; E. Bernstein and U. Vazirani, ``Quantum complexity theory,'' in {\it Proceedings of the 25th Annual ACM Symposium on the Theory of Computing} (Association for Computing Machinery, New York, 1993), p. 11; D. Simon, ``On the power of quantum computation,'' in {\it Proceedings 35th Annual Symposium on Foundations of Computer Science}, edited by S. Goldwasser (IEEE Computer Society Press, Los Alamitos, CA, 1994), p. 116.} \bibitem{cirac}{J. I. Cirac and P. Zoller, Phys. Rev. Lett {\bf 74}, 4091 (1995).} \bibitem{wineland}{M. G. Raizen, J. M. Gilligan, J. C. Bergquist, W. M. Itano, and D. J. Wineland, Phys. Rev. A {\bf 45}, 6493 (1992).} \bibitem{coppersmith}{D. Coppersmith, ``An approximate Fourier transform useful in quantum factoring,'' IBM Research Report RC19642 (T. J. Watson Research Center, Yorktown Heights, NY, 1994).} \bibitem{deutschFFT}{D. Deutsch (1994), unpublished.} \bibitem{bennett}{C. H. Bennett, IBM J. Res. Develop. {\bf 17}, 525 (1973).} \bibitem{bennett_trade}{C. H. Bennett, SIAM J. Comput. {\bf 18}, 766 (1989).} \bibitem{kimble}{Q. A. Turchette, C. J. Hood, W. Lange, H. Mabuchi, and H. J. Kimble, Phys. Rev. Lett. {\bf 75}, 4710 (1995), quant-ph/9511008.} \bibitem{pellizzari}{T. Pellizzari, S. A. Gardiner, J. I. Cirac, and P. Zoller, Phys. Rev. Lett. {\bf 75}, 3788 (1995).} \bibitem{noise}{W. G. Unruh, Phys. Rev. A {\bf 51}, 992 (1995); R. Landauer, ``Is quantum mechanically coherent computation useful?,'' in {\it Proceedings of the Drexel-4 Symposium on Quantum Nonintegrability -- Quantum-Classical Correspondence}, edited by D. H. Feng and B.-L. Hu (International Press, to appear); I. Chuang, R. Laflamme, P. Shor, and W. Zurek, Science {\bf 270}, 1635 (1995), quant-ph/9503007.} \bibitem{error_correct}{P. W. Shor, Phys. Rev. A {\bf 52}, 2493 (1995); A. R. Calderbank and P. W. Shor, ``Good quantum error-correcting codes exist,'' AT\&T Report (1995), quant-ph/9512032; A. Steane, ``Multiple particle interference and quantum error correction,'' Oxford preprint (1995), quant-ph/9601029.} \bibitem{greenberger}{D. M. Greenberger, M. A. Horne, and A. 
Zeilinger, in {\it Bell's theorem, Quantum Theory and Conceptions of the Universe}, edited by M. Kafatos (Kluwer Academic, Dordrecht, 1989), p. 69; N. D. Mermin, Phys. Rev. Lett. {\bf 65}, 1838 (1990).}
\bibitem{ekert_deutsch}{A. Barenco, D. Deutsch, A. Ekert, and R. Jozsa, Phys. Rev. Lett. {\bf 74}, 4083 (1995), quant-ph/9503017.}
\bibitem{teleportation}{C. H. Bennett, G. Brassard, C. Cr\'epeau, R. Jozsa, A. Peres, and W. K. Wootters, Phys. Rev. Lett. {\bf 70}, 1895 (1993).}
\bibitem{despain}{A. Despain {\it et al.}, ``Quantum networks,'' in {\it Quantum Computing}, JASON Report JSR-95-115 (The MITRE Corporation, McLean, VA, 1996), p. 48.}
\bibitem{shor_long}{P. W. Shor, ``Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer,'' AT\&T Report (1995), quant-ph/9508027.}
\bibitem{vedral}{V. Vedral, A. Barenco, and A. Ekert, ``Quantum networks for elementary arithmetic operations,'' Oxford preprint (1995), quant-ph/9511018.}
\bibitem{ekert_jozsa}{A. Ekert and R. Jozsa, ``Shor's quantum algorithm for factorising numbers,'' to appear in Rev. Mod. Phys. (1996).}
\bibitem{universal}{D. P. DiVincenzo, Phys. Rev. A {\bf 51}, 1015 (1995); T. Sleator and H. Weinfurter, Phys. Rev. Lett. {\bf 74}, 4087 (1995); D. Deutsch, A. Barenco, and A. Ekert, Proc. Roy. Soc. London A {\bf 449}, 669 (1995), quant-ph/9505018; S. Lloyd, Phys. Rev. Lett. {\bf 75}, 346 (1995).}
\bibitem{rsa}{R. L. Rivest, A. Shamir, and L. Adleman, Comm. Assoc. Comput. Mach. {\bf 21}, 120 (1978).}
\bibitem{jump}{W. Nagourney {\it et al.}, Phys. Rev. Lett. {\bf 56}, 2797 (1986); J. C. Bergquist {\it et al.}, Phys. Rev. Lett. {\bf 56}, 1699 (1986); T. Sauter {\it et al.}, Phys. Rev. Lett. {\bf 56}, 1696 (1986).}
\bibitem{fast_mult}{D. E. Knuth, {\it The Art of Computer Programming, Vol. 2: Seminumerical Algorithms}, 2nd Edition (Addison-Wesley, Reading, MA, 1981), p. 258 ff; A. V. Aho, J. E. Hopcroft, and J. D. Ullman, {\it The Design and Analysis of Computer Algorithms} (Addison-Wesley, Reading, MA, 1974), p. 270 ff.}
\bibitem{lloyd}{S. Lloyd, Science {\bf 261}, 1569 (1993).}
\bibitem{monroe}{C. Monroe, D. M. Meekhof, B. E. King, W. M. Itano, and D. J. Wineland, Phys. Rev. Lett. {\bf 75}, 4714 (1995).}
\bibitem{barenco}{A. Barenco, C. H. Bennett, R. Cleve, D. P. DiVincenzo, N. Margolus, P. Shor, T. Sleator, J. Smolin, and H. Weinfurter, ``Elementary gates for quantum computation,'' (1995), quant-ph/9503016.}
\end{references}
\eject
\begin{figure}
\caption{The controlled$^k$-NOT gate. Input values of the qubits are shown on the right and output values on the left. This gate flips the value of the target qubit if all $k$ control qubits take the value 1; otherwise, the gate acts trivially.}
\label{figA}
\end{figure}
\begin{figure}
\caption{The full adder $FA(a)$. The order of the gates (here and in all of the following figures) is to be read from right to left. The gate array shown in (a) adds the classical bit $a=0$; the second qubit carries the output sum bit and the third qubit carries the output carry bit. The gate array shown in (b) adds the classical bit $a=1$.}
\label{figB}
\end{figure}
\begin{figure}
\caption{The multiplexed full adder $MUXFA'(a_0,a_1)$.
Here $\ell$ is the {\it select bit}: the classical addend is $a_0$ if $\ell=0$ and $a_1$ if $\ell=1$.}
\label{figD}
\end{figure}
\begin{figure}
\caption{The multiplexed full adder $MUXFA(a_0,a_1)$ has a select bit $\ell$ and an {\it enable string}; it acts as $MUXFA'$ when enabled and acts trivially otherwise.}
\label{figE}
\end{figure}
\begin{figure}
\caption{The multiplexed full adder $MUXFA''(a_0,a_1)$ (shown here for $a_0=0, a_1=1$) is a modification of $MUXFA$ that uses an extra bit of scratch space. The first gate stores $\mathchar"024C \wedge \ell$ in the extra scratch qubit, and subsequent gates use this scratch bit as a control bit. The last gate clears the scratch bit. The advantage of $MUXFA''$ is that the longest control string required by any gate is shorter by one bit than the longest control string required in $MUXFA$.}
\label{figEa}
\end{figure}
\begin{figure}
\caption{The multiplexed full adder $MUXFA'''(a_0,a_1)$ (shown here for $a_0=0,a_1=1$) uses simpler gates than those required by $MUXFA$, but unlike $MUXFA''$, it does not need an extra scratch bit.}
\label{figF}
\end{figure}
\begin{figure}
\caption{The multiplexed half adder $MUXHA$ is simpler than $MUXFA$ because it does not compute the output carry bit.}
\label{figG}
\end{figure}
\begin{figure}
\caption{The multiplexed $K$-bit adder $MADD(a,a')$ is constructed by chaining together $K-1$ $MUXFA$ operations and one $MUXHA$ operation. $MADD$ adds a $K$-bit $c$-number to an input $K$-bit $q$-number and obtains an output $K$-bit $q$-number (the final carry bit is not computed). If $MADD$ is enabled, the classical addend is $a$ when the select bit has the value $\ell=0$ or is $a'$ when $\ell=1$. (When $MADD$ is not enabled, the classical addend is 0.)}
\label{figH}
\end{figure}
\begin{figure}
\caption{The mod $N$ addition operator $ADDN(a,N)$ computes $a+b~({\rm mod}~N)$.}
\label{figI}
\end{figure}
\eject
\begin{figure}
\caption{The {\it overwriting} mod $N$ addition operator $OADDN(a,N)$ computes $a+b~({\rm mod}~N)$, replacing the input $q$-number $b$ with the sum.}
\label{figJ}
\end{figure}
\begin{figure}
\caption{The mod $N$ multiplication operator $MULN(a,N)$ (when enabled) computes $a\cdot b ~({\rm mod}~N)$.}
\label{figK}
\end{figure}
\eject
\begin{figure}
\caption{The overwriting mod $N$ multiplication operator $OMULN(a,N)$ (when enabled) computes $a\cdot b ~({\rm mod}~N)$, replacing the input $q$-number $b$ with the product.}
\label{figL}
\end{figure}
\begin{figure}
\caption{The modified mod $N$ multiplication routine $MULN'(a,N)$ uses simpler elementary gates than those used by $MULN$, but $MULN'$ requires an extra bit of scratch space. Instead of calling the $OADDN$ routine with two enable bits, $MULN'$ first stores the AND of the two enable bits in the extra scratch bit. Then $OADDN$ with one enable bit can be called instead, where the scratch bit is the enable bit.}
\label{figLa}
\end{figure}
\eject
\begin{figure}
\caption{The mod $N$ exponentiation operator $EXPN(x,N)$ computes $x^a ~({\rm mod}~N)$.}
\label{figM}
\end{figure}
\end{document}
\begin{document} \title{Experimental orthogonalization of highly overlapping quantum states with single photons} \author{Gaoyan Zhu} \affiliation{Department of Physics, Southeast University, Nanjing 211189, China} \affiliation{Beijing Computational Science Research Center, Beijing 100084, China} \author{Orsolya K\'{a}lm\'{a}n} \affiliation{Institute for Solid State Physics and Optics, Wigner Research Centre, Hungarian Academy of Sciences, P.O. Box 49, Hungary} \author{Kunkun Wang} \affiliation{Beijing Computational Science Research Center, Beijing 100084, China} \author{Lei Xiao} \affiliation{Department of Physics, Southeast University, Nanjing 211189, China} \affiliation{Beijing Computational Science Research Center, Beijing 100084, China} \author{Dengke Qu} \affiliation{Department of Physics, Southeast University, Nanjing 211189, China} \affiliation{Beijing Computational Science Research Center, Beijing 100084, China} \author{Xiang Zhan} \affiliation{School of Science, Nanjing University of Science and Technology, Nanjing 210094, China} \author{Zhihao Bian} \affiliation{School of Science, Jiangnan University, Wuxi 214122, China} \author{Tam\'{a}s Kiss} \affiliation{Institute for Solid State Physics and Optics, Wigner Research Centre, Hungarian Academy of Sciences, P.O. Box 49, Hungary} \author{Peng Xue}\email{[email protected]} \affiliation{Beijing Computational Science Research Center, Beijing 100084, China} \begin{abstract} We experimentally realize a nonlinear quantum protocol on single-photon qubits with linear optical elements and appropriate measurements. The quantum nonlinearity is induced by post-selecting the polarization qubit based on a measurement result obtained on the spatial degree of freedom of the single photon which plays the role of a second qubit. Initially, both qubits are prepared in the same quantum state and an appropriate two-qubit unitary transformation entangles them before the measurement on the spatial part. We analyze the result by quantum state tomography on the polarization degree of freedom. We then demonstrate the usefulness of the protocol for quantum state discrimination by iteratively applying it on either one of two slightly different quantum states which rapidly converge to different orthogonal states by the iterative dynamics. Our work opens the door to employ effective quantum nonlinear evolution for quantum information processing. \end{abstract} \maketitle {\it Introduction:---}Quantum information processing protocols are known to exhibit speedup over classical algorithms due to specific features of quantum mechanics, such as linear superposition of quantum states or entanglement among subsystems. The usual assumption in quantum information theory is that the time evolution of the physical systems constituting the protocol is linear, e.g., in the case of a closed system the evolution is described by a unitary operator. If the constraint of linearity of the evolution is relieved, and a nonlinear equation governs the dynamics of the system, then one can design quantum protocols efficiently solving problems which are hard even for usual quantum algorithms \cite{AL98}. For example, the ability to quickly discriminate nonorthogonal states and thereby to solve unstructured search is a generic feature of nonlinear quantum mechanics \cite{CY16}. 
Nonlinear time evolution can be presented in standard quantum mechanics as an effective model, e.g., the Gross-Pitaevskii equation \cite{MW13} which approximately describes the collective behavior of atoms in a Bose-Einstein condensate. Were it not approximate, the Gross-Pitaevskii equation would be applicable to solve the unstructured search problem with an exponential improvement over protocols based on standard quantum theory \cite{CY16,LNT+18}. An alternative way of introducing effective nonlinear evolution within the framework of standard quantum theory is to apply selective measurements in iterated protocols \cite{KJA+06}. The original idea of Bechmann-Pasquinucci, Huttner, and Gisin \cite{BPHG98} is based on the fact that if two identically prepared qubits are subjected to an entangling quantum operation, then by measuring one of the output qubits in one of the computational basis states $\ket{0}$, the quantum state of the other qubit undergoes an effective nonlinear transformation. The presence of two identical states at the input, together with the entangling transformation on the two qubits and the post-selection of the second qubit according to the result of the measurement on the first qubit, are the key elements leading to the emergent nonlinearity. The resulting protocols, when applied iteratively, lead to highly nontrivial dynamics, with several intriguing features, such as a variety of fractals on the Bloch sphere representing the initial state of the qubit, leading to non-convergent, chaotic behavior~\cite{GKJ16,TBA+17,MJK+19}. One obviously cannot beat usual quantum efficiency limits in this way, since the emergent nonlinearity is an effective feature and one has to pay its cost in the form of discarded qubits \cite{GKJ16}, nevertheless, these protocols may find applications for specific tasks, e.g. when matching a state to a reference with a prescribed maximum error \cite{KK18}. The specific protocol we consider here is able to evolve any initial state to one of a pair of orthogonal states, according to a well-defined property of the initial state. Initial states, which have a positive $x$ coordinate on the Bloch sphere, will all converge to the quantum state pointing in the $+x$ direction, while the states with negative $x$ coordinate will converge to its orthogonal pair, the quantum state pointing in the $-x$ direction. Since the same protocol carries out this task for every initial state, one may demonstrate its effectiveness by comparing the convergence of highly overlapping initial states with $x$ components of opposite sign. Our protocol is thus able to discriminate any two quantum states with $x$ components of opposite sign unambiguously in the asymptotic limit. This approach is more general than standard optimal quantum state discrimination methods~\cite{I87,D88,P88,HMG+96,DB02,BCS+08}, where the discrimination of a pair of quantum states requires the construction of a specific protocol. After a finite number of steps, our protocol probabilistically enhances the overlap with one member of an orthogonal pair in a somewhat similar manner to the method proposed by Sol\'{i}s-Prosser et al.~\cite{SPD+16} for the probabilistic separation of a finite number of quantum states. Linear optics is a natural candidate among a variety of physical systems~\cite{LB99,ACM02} for realizing the protocols of quantum information processing~\cite{CN02}. 
In order to effectively implement quantum gates, linear optics has to be complemented by either optical elements exhibiting strong optical non-linearity~\cite{KB00} or, alternatively, by post-selection with ancilla modes and projective measurements~\cite{TBA+17,GKJ16,KK18}, resulting in probabilistic realizations. In this paper, we realize the orthogonalization of quantum states via measurement-induced nonlinearity with single photons. We demonstrate that, after a few iterations of the nonlinear quantum transformation, one can substantially decrease the overlap of two initially highly overlapping quantum states. After several iterations they become almost orthogonal to each other, with only a small residual overlap. {\it Theoretical description of the protocol:---}Our aim is to implement a measurement-induced nonlinear quantum transformation~\cite{TBA+17} on photonic qubits. This can be realized on one member of a pair of qubits, initially in the same quantum state, via a controlled two-qubit unitary transformation on the composite system and a subsequent post-selective measurement on the other member of the pair (shown in Fig.~\ref{fig:circuit}(a)). For the two qubits, we consider two two-level systems: one encoded by the polarizations $\{\ket{H}=\ket{0}_{\text{p}}, \ket{V}=\ket{1}_{\text{p}}\}$ and the other by the spatial modes $\{\ket{D}=\ket{0}_{\text{s}},\ket{U}=\ket{1}_{\text{s}}\}$ of single photons. Note that the subscripts $\text{p}$ and $\text{s}$ refer to the two types of degrees of freedom, respectively. Initially, both qubits are prepared in the same quantum state $\ket{\psi_{0}}$, which can be described by the single complex parameter $z$, and the two-qubit system is thus a product state of the form \begin{align} \label{eq:initial} \ket{\psi_0}_\text{p}\otimes\ket{\psi_0}_\text{s}=\frac{\ket{0}_{\text{p}}+z \ket{1}_{\text{p}}}{\sqrt{1+|z|^2}}\otimes\frac{\ket{0}_{\text{s}}+z \ket{1}_{\text{s}}}{\sqrt{1+|z|^2}}. \end{align} We apply the entangling two-qubit transformation \begin{equation} \label{eq:U} U=\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & -1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & -1 \\ \end{pmatrix} \end{equation} after which the state of the composite system becomes \begin{align} \ket{\Psi}_{\text{ps}}=\frac{1}{\sqrt{2}\left(1+\left|z\right|^{2}\right)}&\left[(1+z^2)\ket{00}_{\text{ps}}+2z\ket{10}_{\text{ps}}\right.\nonumber\\ &\left.+(1-z^2)\ket{11}_{\text{ps}}\right]. \label{2qubit_state} \end{align} Then, a projective measurement $P=\ket{D}\bra{D}=\ket{0}\bra{0}_{\text{s}}$ is applied on the spatial qubit by which one can post-select the polarization qubit in the state \begin{equation} \ket{\psi_1}_\text{p}=\frac{\ket{0}_{\text{p}}+f(z)\ket{1}_{\text{p}}}{\sqrt{1+|f(z)|^2}}, \end{equation} where \begin{equation} f(z)=\frac{2z}{1+z^2}. \label{f} \end{equation} The success probability of the first iteration of the protocol depends on the complex number $z$ characterizing the input state and can be formulated as \begin{equation} {\mathcal P}^{(1)}={\mathcal P}(z)=\frac{1}{2}+\frac{2\left(\mathrm{Re}z\right)^2}{\left(1+\left|z\right|^2\right)^2}. \end{equation} It can be seen that $\mathcal P^{(1)} \geq 1/2$, where equality holds for $\mathrm{Re}z=0$, i.e., on the imaginary axis. In order to iterate the protocol, one also needs to prepare the spatial mode in the state $\ket{\psi_1}_\text{s}$ for the next step.
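As a cross-check of the algebra above (not part of the experimental analysis), the single step can be reproduced numerically. The following minimal NumPy sketch assumes the basis ordering $\ket{00},\ket{01},\ket{10},\ket{11}$ with the first index referring to the polarization qubit; it applies $U$ to $\ket{\psi_0}_\text{p}\otimes\ket{\psi_0}_\text{s}$, post-selects the spatial qubit on $\ket{0}_{\text{s}}$, and compares the outcome with Eq.~(\ref{f}) and the success probability ${\mathcal P}(z)$:
\begin{verbatim}
import numpy as np

# One step of the protocol; basis order |00>,|01>,|10>,|11>,
# first index = polarization qubit, second = spatial qubit (illustrative convention).
z = 0.2 + 0.1j
psi0 = np.array([1.0, z]) / np.sqrt(1 + abs(z) ** 2)      # single-qubit state (|0>+z|1>)/norm
U = np.array([[1, 0, 0, 1],
              [0, -1, 1, 0],
              [0, 1, 1, 0],
              [1, 0, 0, -1]]) / np.sqrt(2)                 # entangling transformation, Eq. (2)

Psi = U @ np.kron(psi0, psi0)                              # two-qubit state after U
kept = Psi[[0, 2]]                                         # project spatial qubit onto |0>_s
p_success = np.vdot(kept, kept).real                       # post-selection probability
psi1 = kept / np.sqrt(p_success)                           # normalized polarization state

print(psi1[1] / psi1[0])                                   # should equal f(z) = 2z/(1+z^2)
print(2 * z / (1 + z ** 2))
print(p_success, 0.5 + 2 * z.real ** 2 / (1 + abs(z) ** 2) ** 2)
\end{verbatim}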
In general, after $n$ iterations, the final state of the polarization qubit is $\ket{\psi_n}_\text{p}=(\ket{0}_{\text{p}}+f^{(n)}(z)\ket{1}_{\text{p}})/(\sqrt{1+|f^{(n)}(z)|^2})$, where $f^{(n)}(z)$ is defined recursively by $f^{(n)}(z)=f[f^{(n-1)}(z)]$. The success probabilities of the second and the $n$th iterations are respectively $\mathcal{P}^{(2)}=\mathcal{P}\left[f(z)\right]$ and $\mathcal{P}^{(n)}=\mathcal{P}\left[f^{(n-1)}(z)\right]=1/2+2\{\mathrm{Re}\left[f^{(n-1)}(z)\right]\}^2/\left[1+|f^{(n-1)}(z)|^2\right]^2$. The success probability of orthogonalization -- or more precisely, of reaching an asymptotic state with a given precision, starting from an ensemble of qubits in the same initial state -- is the product of the single-iteration success probabilities $\prod_n \mathcal{P}^{(n)}$. We note that our setup is designed in such a way that the projective measurement on the spatial qubit is automatically realized together with the post-selection whenever the photon is detected in the lower spatial mode (and not detected in the upper mode), see Fig.~\ref{fig:setup}. \begin{figure} \caption{(a) Schematic of one step of the nonlinear quantum protocol. $U$ and $P$ denote the entangling two-qubit transformation and the projective measurement, respectively. (b) The convergence regions of the corresponding complex map $f$ on the complex plane, where red (blue) color represents convergence to the asymptotic state $\ket{+}_{x}$ ($\ket{-}_{x}$).} \label{fig:circuit} \end{figure} The nonlinear transformation $f$ of Eq.~(\ref{f}) is a complex quadratic rational map \cite{M06,ML93}, which has been analyzed in~\cite{TBA+17}. It has two superattractive fixed points: $z_{1}=1$ and $z_{2}=-1$. Superattractiveness, which is related to the fact that $\left.\frac{df}{dz}\right|_{z_{i}}=0$ $(i=1,2)$, ensures that the convergence to the two fixed points $z_{1}$ and $z_{2}$ is fast. There is a set of points which do not converge to either of the attractive fixed points when iterating the map $f$, and these form the so-called Julia set of the complex map (the third fixed point of the map, $z_{3}=0$, which is repelling, is also a member of the Julia set). The Julia set of the map $f$ is the imaginary axis of the complex plane (see Fig.~\ref{fig:circuit}(b)) or, equivalently, the great circle which intersects the $y$ axis on the Bloch sphere, while the two superattractive fixed points correspond to the orthogonal quantum states \begin{equation} \ket{\psi_{z_{1}}}=\ket{+}_{x}=\frac{\ket{0}+\ket{1}}{\sqrt{2}}, \text{ } \ket{\psi_{z_{2}}}=\ket{-}_{x}=\frac{\ket{0}-\ket{1}}{\sqrt{2}}, \end{equation} pointing in the $+x$ and $-x$ directions on the Bloch sphere, respectively. It can be seen in Fig.~\ref{fig:circuit}(b) that initial states which can be described by a complex number $z$ with a positive (negative) real part all converge to the asymptotic state $\ket{+}_{x}$ ($\ket{-}_{x}$), as represented by the coloring. Initial states which lie closer to the border of these convergence regions (i.e., the Julia set) need more iterations to approach the respective asymptotic state. It has been shown that if the above procedure is iterated on two ensembles of qubits whose states initially have a large overlap but $x$ components of opposite sign, then already a few iteration steps are enough to approximately orthogonalize them, thereby effectively implementing quantum state discrimination~\cite{TBA+17}.
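The convergence behaviour and the accompanying success probabilities are easy to reproduce by iterating the map directly; the following sketch (plain Python, with illustrative initial values rather than the experimentally used ones) tracks the overlap of two nearby states with $\mathrm{Re}\,z$ of opposite sign together with the cumulative post-selection probability along one branch:
\begin{verbatim}
import numpy as np

def f(z):
    # one step of the effective nonlinear map, Eq. (5)
    return 2 * z / (1 + z ** 2)

def p_success(z):
    # single-step post-selection probability P(z)
    return 0.5 + 2 * z.real ** 2 / (1 + abs(z) ** 2) ** 2

def overlap(z1, z2):
    # |<psi(z1)|psi(z2)>|^2 for states (|0>+z|1>)/sqrt(1+|z|^2)
    return abs(1 + np.conj(z1) * z2) ** 2 / ((1 + abs(z1) ** 2) * (1 + abs(z2) ** 2))

# two highly overlapping states with Re z of opposite sign (illustrative values)
za, zb = 0.2 + 0.5j, -0.2 + 0.5j
prob = 1.0
for n in range(5):
    print(f"step {n}: overlap = {overlap(za, zb):.4f}, cumulative P = {prob:.4f}")
    prob *= p_success(za)          # success probability along the za branch
    za, zb = f(za), f(zb)          # za -> +1 and zb -> -1 as n grows
\end{verbatim}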
Moreover, the scheme is applicable to sorting all quantum states according to which part of the Bloch sphere they are initially from, without the need to modify the setup itself. Let us further note that the success probability of subsequent steps grows and approaches $1$ as the states converge to either of the asymptotic states. In our experiment, it is always the polarization qubit which is kept after the post-selection and analyzed afterwards, while in every subsequent step both the state of the spatial qubit and that of the polarization qubit are reprepared according to the quantum state tomographic measurements performed on the polarization qubit in the previous step. \begin{figure*} \caption{ Experimental setup. Photon pairs are generated via type-I spontaneous parametric down conversion (SPDC). The pump is filtered out by an interference filter which restricts the photon bandwidth to $3$nm. With the detection of the trigger via an avalanche photodiode, the heralded single photon is injected into the optical network involving three stages: state preparation, measurement-induced nonlinear transformation and state tomography. The transformation is realized with a combination of beam displacers (BDs) and half-wave plates (HWPs) at certain angles. A projection is applied on the auxiliary qubit, and the target qubit is polarization-analyzed using a quantum state tomography system consisting of a HWP and a quarter-wave plate (QWP) followed by a polarizing beam splitter (PBS) in front of an avalanche photodiode (APD). Trigger-herald pairs are counted via coincidences between the APDs.} \label{fig:setup} \end{figure*} {\it Experimental realization:---}For experimental demonstration, pairs of photons are generated via type-I spontaneous parametric down-conversion (SPDC)~\cite{XZQ+15,BLQ+15,ZZL+16,ZKK+17,ZCL+17,WEX+18,WWZ+18}. With the detection of the trigger photons, the other photon of each pair is heralded and acts as a single-photon source in the experimental setup shown in Fig.~\ref{fig:setup}. Experimentally, photon pairs are counted by coincidences between two single-photon avalanche photodiodes (APDs) with a time window of $3$ns. Total coincidence counts are about $12,000$ over a collection time of $3$s. The heralded single photons pass through a polarizing beam splitter (PBS) followed by a quarter-wave plate (QWP) and a half-wave plate (HWP) with setting angles $\theta^\text{P}_Q$ and $\theta^\text{P}_H$, respectively. Then a birefringent calcite beam displacer (BD) splits them into two parallel spatial modes, i.e., upper and lower modes, depending on their polarizations. Photons in the upper mode pass through a HWP at $45^\circ$ to flip their polarizations from $\ket{V}$ to $\ket{H}$. Photons in both spatial modes pass through a QWP and a HWP with the setting angles $\theta^\text{P}_Q$ and $\theta^\text{P}_H$, respectively, and then they are prepared in the initial state (\ref{eq:initial}) with \begin{equation} z=\frac{i\sin2\theta^\text{P}_H+\sin(2\theta^\text{P}_H-2\theta^\text{P}_Q)}{i\cos2\theta^\text{P}_H+\cos(2\theta^\text{P}_H-2\theta^\text{P}_Q)}. \end{equation} Note that the matrix form of the operation of a HWP with setting angle $\theta$ reads $\begin{pmatrix} \cos 2\theta & \sin 2\theta \\ \sin 2\theta & -\cos 2\theta \\ \end{pmatrix}$, and that of a QWP at $\vartheta$ reads $\begin{pmatrix} \cos^2 \vartheta+i \sin^2 \vartheta & (1-i)\sin\vartheta\cos\vartheta\\ (1-i)\sin\vartheta\cos\vartheta & \sin^2\vartheta+i\cos^2\vartheta \\ \end{pmatrix}$.
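The preparation formula for $z$ can be checked directly from the quoted Jones matrices; the sketch below is only an illustration and assumes (our reading of the setup) that the QWP and then the HWP act on the state $\ket{H}$ transmitted by the PBS, with $z$ read off as the ratio of the $\ket{V}$ and $\ket{H}$ amplitudes:
\begin{verbatim}
import numpy as np

def hwp(theta):
    # Jones matrix of a half-wave plate at angle theta (as quoted in the text)
    return np.array([[np.cos(2 * theta), np.sin(2 * theta)],
                     [np.sin(2 * theta), -np.cos(2 * theta)]])

def qwp(theta):
    # Jones matrix of a quarter-wave plate at angle theta (as quoted in the text)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c ** 2 + 1j * s ** 2, (1 - 1j) * s * c],
                     [(1 - 1j) * s * c, s ** 2 + 1j * c ** 2]])

theta_Q, theta_H = 0.3, 0.7                  # illustrative preparation angles (radians)
state = hwp(theta_H) @ qwp(theta_Q) @ np.array([1.0, 0.0])   # QWP then HWP acting on |H>
z_numeric = state[1] / state[0]              # ratio of |V> and |H> amplitudes

z_formula = ((1j * np.sin(2 * theta_H) + np.sin(2 * theta_H - 2 * theta_Q)) /
             (1j * np.cos(2 * theta_H) + np.cos(2 * theta_H - 2 * theta_Q)))
print(z_numeric, z_formula)                  # the two values agree
\end{verbatim}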
The unitary operation $U$ of Eq.~(\ref{eq:U}) is implemented as \begin{align} U=U^\dagger_\text{CNOT}\,\tilde{U}\,U_\text{CNOT}, \end{align} where \begin{align*} \tilde{U}=U_\text{CNOT}UU^\dagger_\text{CNOT}=\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & -1 & 0 & 1\\ 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & 1 \\ \end{pmatrix}, \end{align*} \begin{align*} U_\text{CNOT}=U^\dagger_\text{CNOT}=\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ \end{pmatrix}. \end{align*} Here we used the fact that the operation $U$ can be decomposed into the operation $\tilde{U}$ and the controlled-NOT operation $U_\text{CNOT}$. Both of these unitary operations are controlled two-qubit rotations. $\tilde{U}$ can be realized by two HWPs, one at $67.5^\circ$ inserted into the upper mode and one at $22.5^\circ$ inserted into the lower mode. $U_{\text{CNOT}}$ can be realized by HWPs at $45^\circ$ and BDs. BDs are used to split the photons with different polarizations into different spatial modes and then to combine the two polarization modes into the same spatial mode. Then two-mode transformations can be implemented via HWPs acting on the two polarization modes propagating in the same spatial mode. For $U_{\text{CNOT}}$, HWPs at $45^\circ$ are used to flip polarizations of the photons. \begin{figure*} \caption{ Overlap between three different pairs of quantum states in every iteration of the nonlinear transformation up to three (or four). Experimental data obtained via quantum state tomography are shown in red, and theoretical predictions are represented by grey bars. In the case of (a) the initial states for the protocol are described by the complex numbers $\{z_1=0.2,\,z_2=-0.2\}$.} \label{fig:overlap} \end{figure*} The post-selection of the polarization state can be realized by projecting the spatial qubit onto the basis state corresponding to the lower spatial mode $\ket{0}_{\text{s}}=\ket{D}$, where the polarization state of the photon is also analyzed. If a photon is detected in the upper spatial mode, then the nonlinear transformation of the polarization state does not take place (see Eq.~(\ref{2qubit_state})). To demonstrate that the nonlinear protocol effectively orthogonalizes initially close quantum states~\cite{LZB+17,ZXB+17,WXQ+18,XQW+18,nc19,prl19}, the step presented in Fig.~\ref{fig:circuit}(a) has to be iterated, i.e., the initial state of the input qubits of the second step has to be equal to the output state $\ket{\psi_1}$ of the first step. In order to implement this, we use quantum state tomography to determine the output state after each step via a PBS, a QWP and a HWP with the setting angles $\theta^\text{M}_Q$ and $\theta^\text{M}_H$, respectively, projecting the output state onto one of four different basis states $\{\ket{H},\ket{V},(\ket{H}+\ket{V})/\sqrt{2},(\ket{H}-i\ket{V})/\sqrt{2}\}$ to obtain the density matrix of the output state via the maximum likelihood method. The resulting photons are detected by APDs, in coincidence with the trigger photons. With the measured density matrices we reconstruct a pure state $\ket{\psi_1}$ via a least-squares fit, which we prepare as the initial state for both the polarization and the spatial qubit for the next iteration. Subsequent iterations are realized in the same way. In our experiment, we chose three pairs of initial states $\ket{\psi_0(z_1)}$ and $\ket{\psi'_0(z_2)}$ to be discriminated by the nonlinear protocol.
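Before turning to the measured overlaps, we note that the decomposition of $U$ quoted above is easy to verify numerically; a minimal sketch (using only the matrices as printed) is:
\begin{verbatim}
import numpy as np

s = 1 / np.sqrt(2)
U = s * np.array([[1, 0, 0, 1],
                  [0, -1, 1, 0],
                  [0, 1, 1, 0],
                  [1, 0, 0, -1]])
U_tilde = s * np.array([[1, 0, 1, 0],
                        [0, -1, 0, 1],
                        [1, 0, -1, 0],
                        [0, 1, 0, 1]])
U_cnot = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 1, 0]])

# U_cnot is its own inverse, so U = U_cnot^dagger U_tilde U_cnot should hold exactly
print(np.allclose(U, U_cnot.T @ U_tilde @ U_cnot))   # -> True
print(np.allclose(U.conj().T @ U, np.eye(4)))        # unitarity check -> True
\end{verbatim}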
In Fig.~\ref{fig:overlap}, we show the experimental (red) and theoretical (grey) results of the overlaps for each iteration up to three (or four), starting from three different pairs of initial states. It can be seen that for the first pair of states (Fig.~\ref{fig:overlap}(a)), the overlap decreases from $0.927\pm0.003$ (the corresponding theoretical prediction is $0.923$) to $0.091\pm0.006$ (the corresponding theoretical prediction is $0.078$) after three iterations. For the second pair of states (Fig.~\ref{fig:overlap}(b)), the overlap decreases from $0.920\pm0.003$ (the corresponding theoretical prediction is $0.919$) to $0.070\pm0.006$ (the corresponding theoretical prediction is $0.054$) after three iterations. For the third pair of states (Fig.~\ref{fig:overlap}(c)), the overlap decreases from $0.969\pm0.002$ (the corresponding theoretical prediction is $0.962$) to $0.086\pm0.019$ (the corresponding theoretical prediction is $0.023$) after four iterations. Our experimental results agree well with those of the theoretical model, and the slight difference between the experimental data and theoretical values is due to the imperfections of the experiment. The results prove that the nonlinear transformation orthogonalizes the states in a few iterations and can therefore be employed for discriminating quantum states. {\it Summary:---}We experimentally generated measurement-induced nonlinear transformations by linear optical elements and post-selective measurements on qubits represented by single photons. We demonstrated that such a transformation, experimentally realized for the first time, can be applied for the approximate orthogonalization of states with high initial overlap. Via the orthogonalization procedure we can prepare the qubits in distinguishable states so that they can be either directly measured or used for further processing. This measurement-induced nonlinear evolution can be considered as an implementation of a Schr\"{o}dinger microscope~\cite{GKJ16,LS00}. In a more general context a similar protocol can be applied for quantum state matching~\cite{KK18}. \acknowledgements This work has been supported by the National Natural Science Foundation of China (Grant Nos. 11674056 and U1930402), and the startup funding of Beijing Computational Science Research Center. T. K. and O. K. are grateful for the support of the National Research, Development and Innovation Office of Hungary (Project Nos. K115624, K124351, PD120975, 2017-1.2.1-NKP-2017-00001). \begin{references} \bibitem{AL98} D. S. Abrams and S. Lloyd, Phys. Rev. Lett. {\bf 81}, 3992 (1998). \bibitem{CY16} A. M. Childs and J. Young, Phys. Rev. A {\bf 93}, 022314 (2016). \bibitem{MW13} D. A. Meyer and T. G. Wong, New J. Phys. {\bf 15}, 063014 (2013). \bibitem{LNT+18} K. de Lacy, L. Noakes, J. Twamley and J. B. Wang, Quantum Inf. Process. {\bf 17}, 266 (2018). \bibitem{KJA+06} T. Kiss, I. Jex, G. Alber and S. Vym\v etal, Phys. Rev. A {\bf 74}, 040301(R) (2006). \bibitem{BPHG98} H. Bechmann-Pasquinucci, B. Huttner and N. Gisin, Phys. Lett. A {\bf 242}, 198 (1998). \bibitem{GKJ16} A. Gily\'{e}n, T. Kiss and I. Jex, Sci. Rep. {\bf6}, 20076 (2016). \bibitem{TBA+17} J. M. Torres, J. Z. Bern\'{a}d, G. Alber, O. K\'{a}lm\'{a}n and T. Kiss, Phys. Rev. A {\bf95}, 023828 (2017). \bibitem{MJK+19} M. Malachov, I. Jex, O. K\'alm\'an and T. Kiss, Chaos {\bf 29}, 033107 (2019). \bibitem{KK18} O. K\'{a}lm\'{a}n and T. Kiss, Phys. Rev. A {\bf97}, 032125 (2018). \bibitem{I87} I. D. Ivanovic, Phys. Lett. A {\bf123}, 257 (1987). 
\bibitem{D88} D. Dieks, Phys. Lett. A {\bf126}, 303 (1988). \bibitem{P88} A. Peres, Phys. Lett. A {\bf128}, 19 (1988). \bibitem{HMG+96} B. Huttner, A. Muller, J. D. Gautier, H. Zbinden and N. Gisin, Phys. Rev. A {\bf 54}, 3783-3789 (1996). \bibitem{DB02} M. Du\u{s}ek and V. Bu\v{z}ek, Phys. Rev. A {\bf 66}, 022112 (2002). \bibitem{BCS+08} L. Bartu\v{s}kov\'{a}, A. \v{C}ernoch, J. Soubusta and M. Du\v{s}ek, Phys. Rev. A {\bf 77}, 034306 (2008). \bibitem{SPD+16} M. A. Sol\'{i}s-Prosser, A. Delgado, O. Jimenez, and L. Neves, Phys. Rev. A {\bf 93}, 012337 (2016). \bibitem{LB99} S. Lloyd and S. L. Braunstein, Phys. Rev. Lett. {\bf 82}, 1784 (1999). \bibitem{ACM02} G. M. D'Ariano, C. Macchiavello and L. Maccone, Fortschr. Phys. {\bf 48}, 573 (2000). \bibitem{CN02} I. L. Chuang and M. A. Nielsen, Quantum Computation and Quantum Information, Cambridge University Press (2002). \bibitem{KB00} P. Kok and S.L. Braunstein, Phys. Rev. A {\bf62}, 064301 (2000). \bibitem{M06} J. W. Milnor, {\it Dynamics in One Complex Variable}, Annals of Mathematical Studies (Princeton University Press, 2006). \bibitem{ML93} J. W. Milnor and T. Lei, Experiment. Math. {\bf 2}, 37 (1993). \bibitem{XZQ+15} P. Xue, R. Zhang, H. Qin, X. Zhan, Z. H. Bian, J. Li and B. C. Sanders, Phys. Rev. Lett. {\bf114}, 140502 (2015). \bibitem{BLQ+15} Z. H. Bian, J. Li, H. Qin, X. Zhan, R. Zhang, B. C. Sanders and P. Xue, Phys. Rev. Lett. {\bf 114}, 203602 (2015). \bibitem{ZZL+16} X. Zhan, X. Zhang, J. Li, Y. S. Zhang, B. C. Sanders and P. Xue, Phys. Rev. Lett. {\bf116}, 090401 (2016). \bibitem{ZKK+17} X. Zhan, P. Kurzynski, D. Kaszlikowski, K. K. Wang, Z. H. Bian, Y. S. Zhang and P. Xue, Phys. Rev. Lett. {\bf119}, 220403 (2017). \bibitem{ZCL+17} X. Zhan, E. G. Cavalcanti, J. Li, Z. H. Bian, Y. S. Zhang, H. M. Wiseman and P. Xue, Optica {\bf4}, 966-971 (2017). \bibitem{WEX+18} K. K. Wang, C. Emary, M. Y. Xu, X. Zhan, Z. H. Bian, L. Xiao and P. Xue, Phys. Rev. A {\bf97}, 020101(R) (2018). \bibitem{WWZ+18} K. K. Wang, X. P. Wang, X. Zhan, Z. H. Bian, J. Li, B. C. Sanders and P. Xue, Phys. Rev. A {\bf97}, 042112 (2018). \bibitem{LZB+17} L. Xiao, X. Zhan, Z. H. Bian, K. K.Wang, X. Zhang, X. P. Wang, J. Li, K. Mochizuki, D. Kim, N. Kawakami, W. Yi, H. Obuse, B. C. Sanders and P. Xue, Nat. Phys. {\bf13}, 1117 (2017). \bibitem{ZXB+17} X. Zhan, L. Xiao, Z. H. Bian, K. K. Wang, X. Z. Qiu, B. C. Sanders, W. Yi and P. Xue, Phys. Rev. Lett. {\bf119}, 130501 (2017). \bibitem{WXQ+18} X. P. Wang, L. Xiao, X. Qiu, K. K. Wang, W. Yi and P. Xue, Phys. Rev. A {\bf98}, 013835 (2018). \bibitem{XQW+18} L. Xiao, X. Qiu, K. K. Wang, Z. H. Bian, X. Zhan, H. Obuse, B. C. Sanders, W. Yi and P. Xue, Phys. Rev. A {\bf98}, 063847 (2018). \bibitem{nc19} K. K. Wang, X. Qiu, L. Xiao, X. Zhan, Z. H. Bian, B. C. Sanders, W. Yi and P. Xue, Nat. Commun. {\bf10}, 2293 (2019). \bibitem{prl19} K. K. Wang, X. Qiu, L. Xiao, X. Zhan, Z. H. Bian, W. Yi and P. Xue, Phys. Rev. Lett. {\bf122}, 020501 (2019). \bibitem{LS00} S. Lloyd and J.-J. E. Slotine, Phys. Rev. A {\bf62}, 012307 (2000). \end{references} \end{document}
\begin{document} \title{Three-color Ramsey number of an odd cycle versus bipartite graphs with small bandwidth \footnote{Center for Discrete Mathematics, Fuzhou University, Fuzhou, 350108, P.~R.~China. Email: {\em chunlin\[email protected], [email protected]. } }} \date{} \author{Chunlin You \;\; and \;\; Qizhong Lin\footnote{Corresponding author. Supported in part by NSFC (No.\ 12171088).} } \maketitle \begin{abstract} A graph $\mathcal{H}=(W,E_\mathcal{H})$ is said to have {\em bandwidth} at most $b$ if there exists a labeling of $W$ as $w_1,w_2,\dots,w_n$ such that $|i-j|\leq b$ for every edge $w_iw_j\in E_\mathcal{H}$. We say that $\mathcal{H}$ is a {\em balanced $(\beta,\Delta)$-graph} if it is a bipartite graph with bandwidth at most $\beta |W|$ and maximum degree at most $\Delta$, and it also has a proper 2-coloring $\chi :W\rightarrow[2]$ such that $||\chi^{-1}(1)|-|\chi^{-1}(2)||\leq\beta|\chi^{-1}(2)|$. In this paper, we prove that for every $\gamma>0$ and every natural number $\Delta$, there exists a constant $\beta>0$ such that for every balanced $(\beta,\Delta)$-graph $\mathcal{H}$ on $n$ vertices we have $$R(\mathcal{H}, \mathcal{H}, C_n) \leq (3+\gamma)n$$ for all sufficiently large odd $n$. The upper bound is sharp for several classes of graphs. Let $\theta_{n,t}$ be the graph consisting of $t$ internally disjoint paths of length $n$ all sharing the same endpoints. As a corollary, for each fixed $t\geq 1$, $R(\theta_{n, t},\theta_{n, t}, C_{nt+\lambda})=(3t+o(1))n,$ where $\lambda=0$ if $nt$ is odd and $\lambda=1$ if $nt$ is even. In particular, we have $R(C_{2n},C_{2n}, C_{2n+1})=(6+o(1))n$, which is a special case of a result of Figaj and {\L}uczak (2018). {\bf Keywords:} \ Ramsey number; Small bandwidth; Cycle; Regularity Lemma {\bf Mathematics Subject Classification:} \ 05C55;\;\;05D10 \end{abstract} \section{Introduction} For graphs $G$ and $G_1,G_2,\dots,G_k$ we write $G\rightarrow(G_1,G_2,\dots,G_k)$ if for every $k$-edge-coloring of $G$, there is a monochromatic copy of $G_i$ for some $i\in[k]$, where $[k]=\{1,2,\dots,k\}$. The multicolor Ramsey number $R(G_1,G_2,\dots,G_k)$ is defined to be the smallest integer $N$ such that $K_N\rightarrow(G_1,G_2,\dots,G_k)$. We write $R_k(G)$ for $R(G_1,G_2,\dots,G_k)$ when $G_i=G$ for all $1\le i\le k$. Let $P_n$ (resp. $C_n$) be the path (resp. cycle) on $n$ vertices. The Ramsey number $R(C_m,C_n)$ has been studied and completely determined by Bondy and Erd\H{o}s \cite{b-e}, Faudree and Schelp \cite{f-s}, and Rosta \cite{ros}. Later, Erd\H{o}s, Faudree, Rousseau and Schelp \cite{efrs} obtained the exact value of $R(C_\ell,C_m,C_n)$ for sufficiently large $n$. In 1999, by using the regularity lemma, {\L}uczak \cite{lucazk-1999} proved that $R(C_n,C_n,C_n)=(4+o(1))n$ for all large odd integers $n$, and some years later Kohayakawa, Simonovits and Skokan \cite{kss-2005} determined its exact value for all large odd $n$. In \cite{lucazk-1999}, {\L}uczak introduced a technique that uses the regularity lemma to reduce problems about paths and cycles to problems about \emph{connected matchings}, which are matchings that are contained in a connected component. This technique has become fairly standard, and the following results are based on it. For large even cycles, Figaj and {\L}uczak \cite{f-l-2007} showed that $R(C_n,C_n,C_n)=(2+o(1))n$, and Benevides and Skokan \cite{Benevides-Skokan-2009} obtained the exact value.
Furthermore, Figaj and {\L}uczak \cite{f-l-2007,Figaj-luczak-2018} determined the asymptotic values of $R(C_\ell,C_m,C_n)$ when $\ell,m$, and $n$ are large integers of the same order. In particular, Figaj and {\L}uczak \cite{Figaj-luczak-2018} determined \[R(C_{2n}, C_{2n}, C_{2n+1})=(6+o(1))n.\] Moreover, Ferguson \cite{Ferguson-1, Ferguson-2, Ferguson-3} obtained exact Ramsey numbers for the three-color cases of mixed parity cycles. Recently, Jenssen and Skokan \cite{j-s} established that $$R_k(C_n)=2^{k-1}(n-1)+1$$ for each fixed $k\ge2$ and large odd $n$, which confirms a conjecture of Bondy and Erd\H{o}s \cite{b-e}. For more multicolor Ramsey numbers involving large cycles, we refer the reader to \cite{a-b-s-2013,efrs,lucss,sark-2016}, etc. The study of Ramsey numbers of paths has also been fruitful. A well-known result of Gerencs\'{e}r and Gy\'{a}rf\'{a}s \cite{gerence-gyarfas-1967} states that $R({P_m},{P_n}) = n+\lfloor {\frac{m}{2}}\rfloor-1$ for all $n\ge m\ge2$. Faudree and Schelp \cite{Faudree-Schelp-1975} determined $R({P_{{n_0}}},{P_{n_1}},P_{n_2})$ for $n_0\ge6(n_1+n_2)^2$, and they also conjectured that \begin{align}\label{path-path-p} R({P_n},{P_n},{P_n}) =\begin{cases} 2n-1, & \text{$n$ is odd},\\ 2n-2, & \text{$n$ is even}. \end{cases} \end{align} Gy\'{a}rf\'{a}s, Ruszink\'{o}, S\'{a}rk\"{o}zy and Szemer\'{e}di \cite{gyarfas-szemerdi-2007} confirmed this conjecture for large $n$. For Ramsey numbers of cycles versus paths, Faudree, Lawrence, Parsons and Schelp \cite{FLPS} determined the Ramsey numbers of all path-cycle pairs. In 2009, Dzido and Fidytek \cite{DzFi2} (and independently Bielak \cite{Bielak}) obtained that if $m\ge3$ is odd, $n\geq m$ and $n > \frac{{3{t^2} - 14t + 25}}{4}$ when $t$ is odd, and $n > \frac{{3{t^2} - 10t + 20}}{8}$ when $t$ is even, then $R({P_t},{P_n},{C_m})= 2n + 2\left\lfloor {\frac{t}{2}} \right\rfloor - 3.$ In \cite{shao-xu}, Shao, Xu, Shi and Pan proved that $R(P_4, P_5, C_n)=n+2$ for $n\ge23$ and $R(P_4, P_6, C_n)=n+3$ for $n\ge18$. Omidi and Raeisi \cite{Omidi-Raeisi-2011} determined $R(P_4,P_n,C_4)=R(P_5,P_n,C_4)=n+2$ for $n\ge5$. We say that a graph $\mathcal{H}=(W,E_\mathcal{H})$ has bandwidth at most $b$ if there is a labeling of $W$ as $w_1,w_2,\dots,w_n$ such that $|i-j|\leq b$ for every edge $w_iw_j\in E_\mathcal{H}$ (see, e.g., \cite{julia-klaas}). We say $\mathcal{H}=(W,E_\mathcal{H})$ is a {\em balanced $(\beta,\Delta)$-graph} if it is a bipartite graph with bandwidth at most $\beta |W|$ and maximum degree at most $\Delta$, and furthermore it has a proper 2-coloring $\chi :W\rightarrow[2]$ such that $||\chi^{-1}(1)|-|\chi^{-1}(2)||\leq\beta|\chi^{-1}(2)|$. In \cite{G-G-M}, Mota, S\'{a}rk\"{o}zy, Schacht and Taraz determined asymptotically the three-color Ramsey number of such graphs, showing that $R(\mathcal{H},\mathcal{H},\mathcal{H})\leq (2+o(1))n$ for every balanced $(\beta,\Delta)$-graph $\mathcal{H}$ on $n$ vertices, which implies that $$R(C_{n},C_n,C_n)=(2+o(1))n$$ for all large even $n$. In this paper, we are concerned with the asymptotic behavior of the Ramsey number $R(\mathcal{H},\mathcal{H},C_n)$ when $n$ is odd and $\mathcal{H}$ is a balanced $(\beta,\Delta)$-graph on $n$ vertices. \begin{theorem}\label{main theorem} For every sufficiently small $\gamma>0$ and every natural number $\Delta$, there exists a constant $\beta>0$ such that for every balanced $(\beta,\Delta)$-graph $\mathcal{H}$ on $n$ vertices we have $$R(\mathcal{H}, \mathcal{H}, C_n) \leq (3+\gamma)n$$ for all sufficiently large odd $n$.
\end{theorem} Let $\theta_{n,t}$ be the graph consisting of $t$ internally disjoint paths of length $n$ all sharing the same endpoints. In particular, $\theta_{n,2}=C_{2n}$, i.e., the even cycle on $2n$ vertices. Observe that $\theta_{n,t}$ is a bipartite graph with $(n-1)t +2$ vertices, and $\theta_{n,t}$ is a balanced $(\beta,t)$-graph with $\beta \leq\frac{2}{n}$. The following result is clear from Theorem \ref{main theorem}. \begin{corollary}\label{corollary1} For each fixed $t\geq 1$, $$ R(\theta_{n, t},\theta_{n, t}, C_{nt+\lambda})=(3t+o(1))n,$$ where $\lambda=0$ if $nt$ is odd and $\lambda=1$ if $nt$ is even. In particular, we have that $R(P_{2n},P_{2n}, C_{2n+1})\sim R(C_{2n},C_{2n}, C_{2n+1})=(6+o(1))n$. \end{corollary} Let us point out that the results $R(C_{2n},C_{2n}, C_{2n+1})=(6+o(1))n$ and $R(P_{2n},P_{2n}, C_{2n+1})=(6+o(1))n$ follow from the work of Figaj and {\L}uczak \cite{Figaj-luczak-2018} and have been strengthened by Ferguson \cite{Ferguson-1}. \section{Preliminaries}\label{chap2} In this paper, we shall omit all floors and ceilings since this does not affect our argument. For a graph $G$, let $V(G)$ and $E(G)$ denote its vertex set and edge set, respectively. Let $|E(G)|=e(G)$ and $|V(G)|=v(G)$. For $v\in V$, we let $N_G(v)$ denote the neighborhood of $v$ in $G$ and write $\deg_G(v)=|N_G(v)|$ for the degree of $v$. We denote by $\delta(G)$ and $\Delta(G)$ the minimum and maximum degrees of the vertices of $G$. For any subset $U \subseteq V$, we use $G[U]$ to denote the subgraph induced by the vertex set $U$ in $G$. For a vertex $v\in V$ and $U\subset V$, we write $N_G(v,U)$ for the set of neighbors of $v$ in $U$ in the graph $G$, and $\deg(v,U)=|N_G(v,U)|$. \subsection{The regularity method} For disjoint vertex sets $A, B\subseteq V$, let $e_G(A,B)$ denote the number of edges of $G$ with one endpoint in $A$ and the other in $B$, and the density between $A$ and $B$ is \[d_G(A,B)=\dfrac{e_G(A,B)}{|A| |B|}.\] We always omit the subscript when there is no confusion. \begin{definition}[$\epsilon$-regular]\label{regular} A pair $(A,B)$ is $\epsilon$-regular if for all $X\subseteq A$ and $Y\subseteq B$ with $|X|>\epsilon|A|$ and $|Y|>\epsilon|B|$ we have $|d(X,Y)-d(A,B)|<\epsilon$. \end{definition} \begin{definition}[($\epsilon,d$)-regular] For $\epsilon>0, d\leq 1$, a pair $(A,B)$ is said to be $(\epsilon,d)$-regular if it is $\epsilon$-regular and $d(A,B)\geq d$. \end{definition} \begin{definition}[$(\epsilon,d)$-super-regular] A pair $(A,B)$ is said to be $(\epsilon,d)$-super-regular if it is $\epsilon$-regular and $\deg(u,B)>d|B|$ for all $u\in A$ and $\deg(v,A)>d|A|$ for all $v\in B$. \end{definition} Any regular pair contains a large sub-pair which is super-regular; we include a proof for completeness. \begin{fact}\label{localregular} For $0<\epsilon<1/2$ and $d\leq1$, if $(A,B)$ is $(\epsilon,d)$-regular with $|A|=|B|=m$, then there exist $A_1\subseteq A$ and $B_1\subseteq B$ with $|A_1|=|B_1|\ge(1-\epsilon)m$ such that $(A_1,B_1)$ is $(2\epsilon,d-2\epsilon)$-super-regular. \end{fact} \noindent{\bf Proof.} Let $X\subseteq A$ consist of the vertices with at most $(d-\epsilon)|B|$ neighbors in $B$. Since $e(X,B)\leq |X|\cdot(d-\epsilon)|B|$, we have $d(X,B)\le d-\epsilon\le d(A,B)-\epsilon$. By the definition of $\epsilon$-regularity, it follows that $|X|\le\epsilon m$. Similarly, if $Y\subseteq B$ consists of the vertices with at most $(d-\epsilon)|A|$ neighbors in $A$, then we obtain $|Y|\le\epsilon m$.
Take $A_1\subseteq A\setminus X$ and $B_1\subseteq B\setminus Y$ with $|A_1|=|B_1|=(1-\epsilon)m$. Thus for any vertex $u\in A_1$, $\deg(u,B_1)\geq (d-2\epsilon)m$. Also, $\deg(w,A_1)\geq (d-2\epsilon)m$ for any $w\in B_1$. Now, for any subsets $S\subseteq A_1$ and $T\subseteq B_1$, if $|S|> 2\epsilon|A_1|$ and $|T|> 2\epsilon|B_1|$, then clearly $|S|> \epsilon m$ and $|T|> \epsilon m$. Since $(A,B)$ is $(\epsilon,d)$-regular, it follows that \[ |d(S,T)-d(A_1,B_1)|\le|d(S,T)-d(A,B)|+|d(A_1,B_1)-d(A,B)|<2\epsilon. \] We have thus proved the fact. $\Box$ In this paper, we will use the following three-color version of the regularity lemma. For many applications, we refer the reader to surveys \cite{ks,rs} and many recent references \cite{afz,cly,conlon,cf,cfw,flz, lp,nr09,sz15}, etc. \begin{lemma}[Szemer\'{e}di \cite{regular-lemma}]\label{regular lemma} For every $\epsilon> 0$ and integer $t_0\geq 1$, there exists $T_0=T_0(\epsilon,t_0)\geq t_0$ such that the following holds. For all graphs $G_1, G_2$ and $G_3$ with the same vertex set $V$ and $|V|\geq t_0$, there exists a partition $V = \cup_{i=0}^t{V_i}$ satisfying $t_0\le t\le T_0$ and $(1)$ $\left| {{V_0}} \right| < \epsilon n$, $\left| {{V_1}} \right| = \left| {{V_2}} \right| = \ldots = \left| {{V_t}} \right|$; $(2)$ all but at most $\epsilon {t\choose 2}$ pairs $(V_i,V_j)$, $1 \le i \neq j \le t$, are $\epsilon$-regular for $G_1, G_2$ and $G_3$. \end{lemma} The next lemma by Benevides and Skokan \cite{Benevides-Skokan-2009} is a slightly stronger version compared to the original one established by {\L}uczak \cite[Claim 3]{lucazk-1999}. \begin{lemma}[Benevides and Skokan \cite{Benevides-Skokan-2009}]\label{long-path-lemma} For every $0<\beta_0<1$, there exists an $n_0$ such that for every $n>n_0$ the following holds: If $(V_1,V_2)$ is $\epsilon$-regular with $|V_1|=|V_2|=n$ and density at least $\beta_0/4$ for some $\epsilon$ satisfying $0<\epsilon<\beta_0/100$, then for every $\ell, 1\leq \ell \leq n-5\epsilon n/\beta_0$, and for every pair of vertices $v'\in V_1$, $v''\in V_2$ satisfying $\deg(v',V_2)$, $\deg(v'', V_1)\geq \beta_0 n/5$, $G$ contains a path of length $2\ell+1$ connecting $v'$ and $v''$. \end{lemma} Let $\mathcal{H}=(W,E_\mathcal{H})$ be a graph, for $S\subseteq W$, denote $N_\mathcal{H}(S)=[\cup_{v\in S} N_\mathcal{H}(v)]\setminus S$. For a graph $G = (V, E)$, a partition $\cup_{i = 1}^k {{V_i}}$ of $V$ is said to be $(\epsilon,d)$-regular on a reduced graph $H$ with vertex set contained in $[k]$ if the pair $(V_i,V_j)$ is $(\epsilon,d)$-regular whenever $ij\in E(H)$. When applying the regularity lemma, we will indeed find a partition of a monochromatic subgraph $G$ of $K_N$ with corresponding reduced graph containing a tree $T$ that contains a ``large'' matching $M$, where the bipartite subgraphs of $G$ corresponding to the matching are super-regular pairs. The following definition of \emph{$\epsilon$-compatible} is due to Mota, S\'{a}rk\"{o}zy, Schacht and Taraz \cite{G-G-M}. \begin{definition}[$\epsilon$-compatible]\label{def} Let $\mathcal{H}=(W, E_\mathcal{H})$ and $T=([t],E_{T})$ be graphs. Let $M=([t],E_M)$ be a subgraph of $T$ where $E_M$ is a matching. Given a partition $W=\cup_{i = 1}^t {{W_i}}$, let $U_i$, for $i\in [t]$, be the set of vertices in $W_i$, with neighbors in some $W_j$ with $ij\in E_{T}\setminus E_{M}$. Set $U=\cup U_i$ and $U'_i=N_{\mathcal{H}}(U) \cap(W_{i}\setminus U)$. 
We say that $W=\cup_{i = 1}^t {{W_i}}$ is $(\epsilon,T, M)$-compatible with a vertex partition $\cup_{i = 1}^t {{V_i}}$ of a graph $G=(V, E)$ if the following holds. (1) $|W_i|\leq|V_i|$ for $i\in[t]$. (2) $xy\in E_\mathcal{H}$ for $x\in W_i, y\in W_j$ implies $ij\in E_T$ for all distinct $i, j\in [t]$. (3) $|U_i|\leq \epsilon |V_i|$ for $i\in[t]$. (4) $|U'_i|, |U'_j|\leq \epsilon \min\{|V_i|, |V_j|: ij\in E_{M}\}$. \end{definition} The following corollary of the Blow-up Lemma (see B\"{o}ttcher, Heinig and Taraz \cite{J-B,J-P}) asserts that in the setup of Definition \ref{def} graphs $\mathcal{H}$ of bounded degree can be embedded into $G$, if $G$ admits a partition being sufficiently regular on $T$ and super-regular on $M$. \begin{lemma}[Embedding Lemma \cite{J-B,J-P}]\label{Embedding Lemma} For all $d, \Delta>0$ there is a constant $\epsilon=\epsilon(d,\Delta)>0$ such that the following holds. Let $G=(V,E)$ be an $N$-vertex graph that has a partition $\cup_{i = 1}^t{{V_i}}$ with $(\epsilon,d)$-reduced graph $T$ on $[t]$ which is $(\epsilon,d)$-super-regular on a graph $M\subset T$. Further, let $\mathcal{H}=(W,E_\mathcal{H})$ be an $n$-vertex graph with maximum degree $\Delta(\mathcal{H})\leq\Delta$ and $n\leq N$ that has a vertex partition $\cup_{i = 1}^t{{W_i}}$ of $W$ which is $(\epsilon, T, M)$-compatible with $\cup_{i = 1}^t{{V_i}}$. Then $\mathcal{H}\subseteq G$. \end{lemma} For a graph $\mathcal{H}=(W,E_\mathcal{H})$ with $W=\{w_1,w_2,\dots,w_n\}$, where $w_i$ is a labeling of the vertices, let $\chi:W\rightarrow[2]$ be a 2-coloring. For $W'\subseteq W$, denote $C_i(W')=|\chi^{-1}(i)\cap W'|$ for $i=1,2$. We know that $\chi$ is a $\beta$-balanced coloring of $W$ if $1-\beta\leq\frac{C_1(W)}{C_2(W)}\leq 1+\beta$. A set $I\subseteq W$ is called an interval if there exists $p<q$ such that $I=\{w_p,w_{p+1},\dots,w_q\}$. Finally, let $\sigma:[\ell]\rightarrow[\ell]$ be a permutation, and for a partition $\{I_1,I_2,\dots,I_\ell\}$ of $W$, where $I_1,\dots,I_\ell$ are intervals, let $C_i(\sigma,a,b)=\sum_{j=a}^bC_i(I_{\sigma(j)})$ for $i=1,2$. \begin{lemma}[Mota et al. \cite{G-G-M}]\label{balanced} For every $\xi>0$ and every integer $\ell\geq 1$ there exists $n_0$ such that if $\mathcal{H}=(W,E_\mathcal{H})$ is a graph on $W=\{w_1,w_2,\dots,w_n\}$ with $n\geq n_0$, then for every $\beta$-balanced 2-coloring $\chi$ of $W$ with $\beta\leq 2/\ell$, and every partition of $W$ into intervals $I_1,I_2,\dots,I_{\ell}$ with $|I_1|\leq|I_2|\leq\dots\leq|I_{\ell}|\leq|I_1|+1$ there exists a permutation $\sigma:[\ell]\rightarrow[\ell]$ such that for every pair of integers $1\leq a<b\leq \ell$ with $b-a\geq7/\xi$, we have $|C_1(\sigma,a,b)-C_2(\sigma,a,b)|\leq\xi C_2(\sigma,a,b).$ \end{lemma} \subsection{ Structure} Recall that a set $M$ of independent edges in a graph $G=(V,E)$ is called a matching. The size of a connected matching corresponds to the number of edges in the matching. The structure of graphs without large connected matchings play an important role in this paper. As a path on $n$ vertices contains a connected matching on $\lfloor n/2\rfloor$ edges, extremal results for paths directly give an upper bound for connected matchings. The following is the well-known extremal result for paths. \begin{lemma}[Erd\H{o}s and Gallai \cite{Erd-Gallai-1959}]\label{gallai} If $G$ is a graph of order $N$ which contains no $P_n$, then \[e(G)\le \frac{{n - 2}}{2}N.\] \end{lemma} The following result describes the structure of a graph without a large matching. 
\begin{lemma}[Knierim and Su \cite{kn-su}]\label{su-lemma} For every connected graph $G = (V, E)$ which contains no matching of size $n/2$, there is a partition $S_G\cup Q_G \cup I_G$ of the vertex set $V$ such that $(i)$ $\left| {{Q_G}} \right| + 2\left| {{S_G}} \right| = \min \left\{ {v(G),n - 1} \right\}$, $(ii)$ $I_G$ is an independent set; additionally, if $v(G)\leq n-1$, then $I_G=\emptyset$, $(iii)$ every vertex in $Q_G$ has at most one neighbor in $I_G$, $(iv)$ every vertex in $I_G$ has degree less than $n/2$. \end{lemma} We will also use the following lemmas. Let us begin with a result due to {\L}uczak \cite{lucazk-1999} which gives a description of the structure of a graph that contains no large odd cycle as a subgraph. \begin{lemma}[{\L}uczak \cite{lucazk-1999}]\label{luc-3} For every $0 < \delta < {10^{ - 15}}$, $\alpha\geq 2\delta$ and $t\geq exp(\delta^{-16}/\alpha)$ the following holds. Each graph $H$ on $t$ vertices which contains no odd cycles longer than $\alpha t$ contains subgraphs $H'$ and $H''$ such that: $(i)$ $V(H') \cup V(H'')=V(H)$, $V(H')\cap V(H'')=\emptyset$ and each of the sets $V(H')$ and $V(H'')$ is either empty or contains at least $\alpha \delta t/2$ vertices; $(ii)$ $H'$ is bipartite; $(iii)$ $H''$ contains no more than $\alpha t\left| {V(H'')} \right|/2$ edges; $(iv)$ all except no more than $\delta {t^2}$ edges of $H$ belong to either $H'$ or $H''$. \end{lemma} The following result due to Gy\'{a}rf\'{a}s, Ruszink\'{o}, S\'{a}rk\"{o}zy and Szemer\'{e}di \cite{gyarfas-szemerdi-2007} states that a graph with high density always contains a dense subgraph. \begin{fact}[Gy\'{a}rf\'{a}s et al. \cite{gyarfas-szemerdi-2007}]\label{density-degree} Let $\epsilon>0$ be sufficiently small and let $H$ be a graph with $v(H)$ vertices. If $e(H)\geq {v(H)\choose{2}}-\epsilon{t\choose{2}}$, then $H$ has a subgraph $H'$ with at least $v(H) - \sqrt \epsilon t$ vertices and $\delta(H')\geq v(H) - 2\sqrt \epsilon t$. \end{fact} A spanning subgraph of a graph $G$ is a subgraph obtained by edge deletions only, in other words, a subgraph whose vertex set is the entire vertex set of $G$. We also need the following simple result. \begin{lemma}\label{density-degree-upper} For any $\epsilon>0$, if $H'$ is a spanning subgraph of $ H$ with $e(H')\geq e(H)-\epsilon{t\choose{2}}$, then there exists a induced subgraph $H'' \subset H$ with at least $v(H)- \sqrt \epsilon t$ vertices and $\deg_{H'' }(u) <\deg_{H'}(u)+\sqrt \epsilon t$ for any vertex $u\in V(H'')$. \end{lemma} {\bf Proof.} Let $X=\{ u\in V( H)| \deg_H(u)-\deg_{H'}(u)\geq \sqrt{\epsilon}t\}$. Clearly, $e(H)-e(H')\geq\frac{\sqrt{\epsilon}t|X|}{2}$. Thus we have \[\frac{\sqrt{\epsilon}t|X|}{2}\leq \epsilon {t\choose{2}},\] implying that $|X|<\sqrt{\epsilon}t$. Denote $H''=H- X$, i.e. the subgraph obtained from $H$ by deleting all vertices of $X$ and all edges incident to some vertices of $X$. Thus for any $u\in V(H'')$, \[\deg_{H''}(u)=\deg_H(u)-\deg_H(u,X)<\left(\deg_{H'}(u)+\sqrt\epsilon t \right)-\deg_H(u, X)\leq \deg_{H'}(u)+\sqrt \epsilon t,\] completing the proof. $\Box$ \subsection{Monochromatic components} In \cite{conj-schelp}, Schelp conjectured that if $G$ is a graph on $3n-1$ vertices with minimum degree at least $3|V(G)|/4$, then $G\rightarrow(P_{2n},P_{2n})$ provided $n$ sufficiently large. Gy\'{a}rf\'{a}s and S\'{a}rk\"{o}zy \cite{gyar-sar-2012} and independently Benevides, {\L}uczak, Scott, Skokan and White \cite{ben} confirmed this conjecture asymptotically. 
Recently, Balogh, Kostochka, Lavrov and Liu \cite{balogh-2019} obtained the following result, thus confirming the conjecture completely. \begin{lemma}[Balogh et al. \cite{balogh-2019}]\label{balogh-conj} Let $G$ be a graph on $3n -1$ vertices with minimum degree at least $(3|V(G)|-1)/4$. If $n$ is sufficiently large, then $G\rightarrow(P_{2n},P_{2n})$. \end{lemma} Erd\H{o}s and Rado remarked that any 2-colored complete graph contains a monochromatic spanning tree, see \cite{gy}. We will apply the following result due to Gy\'{a}rf\'{a}s and S\'{a}rk\"{o}zy \cite{gyar-sar-2012} to get a large monochromatic component for every $2$-coloring of the edges of a graph $G$ with large minimum degree. \begin{lemma}[Gy\'{a}rf\'{a}s and S\'{a}rk\"{o}zy \cite{gyar-sar-2012}]\label{gya-lemma} For any $2$-coloring of the edges of a graph $G$ with minimum degree $\delta(G)\geq\frac{3|V(G)|}{4}$, there is a monochromatic component of order larger than $\delta(G)$. This estimate is sharp. \end{lemma} \section{Proof of Theorem \ref{main theorem}}\label{chap3} Let $N=(3+\gamma)n$, where $\gamma>0$ is a sufficiently small real number and $n$ is a sufficiently large odd integer. Consider a 3-edge coloring of $K_N$ on vertex set $V$, and let $G_i$ ($i=1,2,3$) be the graph induced by the edges in the $i$th color. We shall show that either $G_i$ contains a copy of $\mathcal{H}$ for some $i=1,2$, or $G_3$ contains an odd cycle $C_n$. Suppose on the contrary that $G_i$ contains no $\mathcal{H}$ for $i=1,2$, and $G_3$ contains no odd cycle $C_n$. We aim to find a contradiction. We write $a \ll b$ if $a$ is much smaller than $b$. Let $\gamma>0$ and $\Delta\geq1$ be given. We apply Lemma \ref{Embedding Lemma} to $G_1$ (or $G_2$) with $d=1/4$ and $\Delta$ to get $\epsilon_1$. We set $\eta,\epsilon$, $\delta$ and $\beta$ such that \begin{align}\label{eta-ep} \eta=\frac{\gamma}{15}, \;\;\delta=\min\left\{10^{ -16},\; \frac{\eta^{2}}{100}\right\},\;\;\epsilon =\min\left\{\frac{\delta}{1296}, \epsilon_1 \right\}\;\;\text{and}\;\;0<\beta\ll \epsilon. \end{align} Moreover, set \begin{align}\label{constant-1} t_0= \max \left\{\frac1\epsilon,\;\exp\left(\frac{3(1+\eta)}{\delta^{16}}\right)\right\}. \end{align} We apply the regularity lemma (Lemma \ref{regular lemma}) to $G_1$, $G_2$ and $G_3$ with $\epsilon, t_0$ to obtain a $T_0=T_0(\epsilon,t_0)$ such that there exists a partition of the vertex set $V$ into $t+1$ classes $V=V_{0}\cup V_{1}\cup\dots \cup V_t$ satisfying $t_0\le t\le T_0$ and (1) $\left| {{V_0}} \right| < \epsilon n$, $\left| {{V_1}} \right| = \left| {{V_2}} \right| = \ldots = \left| {{V_t}} \right|$; (2) all but at most $\epsilon {t\choose 2}$ pairs $(V_i,V_j)$, $1 \le i \neq j \le t$, are $\epsilon$-regular for $G_1, G_2$ and $G_3$. We construct the reduced graph $H$ with vertex set $\{v_1, v_2, \dots, v_t\}$ and the edge set formed by pairs $\{v_i,v_j\}$ for which $(V_i,V_j)$ is $\epsilon$-regular with respect to $G_1$, $G_2$ and $G_3$. In particular, $v_i$ and $v_j$ are non-adjacent in $H$ if the pair $(V_i, V_j)$ is not $\epsilon$-regular for some $G_i$. Thus we obtain a bijection $f:{v_i} \to {V_i}$ between the vertices of $H$ and the clusters of the partition. Clearly, $e(H)\geq (1-\epsilon){t\choose{2}}$ from Lemma \ref{regular lemma}. Accordingly, we assign color $k$ ($k\in[3]$) to an edge $\{v_i,v_j\}$ of $H$ if and only if $k$ is the minimum integer for which \[d_{G_k}(V_i,V_j)\geq 1/3.\] Since $d_{G_1}(V_i,V_j)+d_{G_2}(V_i,V_j)+d_{G_3}(V_i,V_j)=1$, such a $k$ always exists. Let $H_i$ ($F_i$) be the spanning subgraph of $H$ ($F$) induced by all edges that have received color $i$.
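The bookkeeping of this step (building the reduced graph from the cluster densities, coloring each edge by the minimum color of density at least $1/3$, and then searching for large connected matchings in each color class, as used below) can be illustrated on toy data. The following sketch is not part of the proof; it assumes the networkx package and uses random densities purely for illustration:
\begin{verbatim}
import itertools
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
t = 30                                        # number of clusters (toy value)
# toy symmetric "densities" d_{G_1}, d_{G_2}, d_{G_3} between regular pairs
dens = rng.dirichlet([1, 1, 1], size=(t, t))  # dens[i, j] ~ (d1, d2, d3), sums to 1

H = [nx.Graph() for _ in range(3)]            # one reduced graph per color
for i, j in itertools.combinations(range(t), 2):
    d = (dens[i, j] + dens[j, i]) / 2         # symmetrize the toy densities
    for c in range(3):                        # minimum color index with density >= 1/3
        if d[c] >= 1 / 3:
            H[c].add_edge(i, j)
            break

for c in range(3):
    # largest connected matching = max over components of a maximum matching
    best = max((len(nx.max_weight_matching(H[c].subgraph(comp), maxcardinality=True))
                for comp in nx.connected_components(H[c])), default=0)
    print(f"color {c + 1}: largest connected matching has {best} edges "
          f"(covering {2 * best} of {t} vertices)")
\end{verbatim}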
\noindent {\textbf{Overview of the remaining proof}:} The remaining part of the proof is straightforward but rather rich in technical details, so we shall briefly outline it first. From the assumption that $G_3$ contains no odd cycle $C_n$, we will show that the reduced subgraph $H_3$ contains no odd cycle of length at least $\frac{{(1 + 0.1\eta )t}}{{3(1 + \eta )}}$. Since $G_i$ contains no $\mathcal{H}$ for $i=1$ or $2$, we shall show that $H_i$ contains no connected matching on more than $(\frac{1}{3}-0.2\eta)t$ vertices for $i=1$ or $2$. Then, we can easily get that $e(H_3)\geq e(H)/3\geq(\frac{1}{6}-\epsilon/3)t^2$. Now, since $H_3$ contains no odd cycle of length at least $\frac{{(1 + 0.1\eta )t}}{{3(1 + \eta )}}$, we apply Lemma \ref{luc-3} to the graph $H_3$ with $\alpha=\frac{{(1 + 0.1\eta )}}{{3(1 + \eta )}}$ to deduce that $H_3$ contains subgraphs $H_3'$ and $H_3''$, where $H_3'$ is bipartite, such that the desired properties hold. Let $A$ and $B$ be the color classes of the bipartition of ${H_3'}$, and let $X=V({H_3''})$. We will show that $|X|<(1/3-\eta/5)t$ and $\max\{|A|,|B|\}<({\frac{1}{2} - \sqrt \delta/2 } )t.$ Combining this with the above desired properties and Lemma \ref{density-degree-upper}, we can deduce that the reduced graph $H$ contains a subgraph $F$ on at least $(1-\frac{3}{2}\sqrt{\delta})t$ vertices such that $\delta(F)>v(F) - \sqrt \delta t/2$ and $\deg_{F_3}(u)<(\frac{1}{2} + \sqrt \delta )t$ for any vertex $u\in V(F)$. Since $F\subset H$ and $H_i$ contains no connected matching on more than $(\frac{1}{3}-0.2\eta)t$ vertices for $i=1$ or $2$, we conclude that $F_i$ contains no connected matching on more than $(\frac{1}{3} - 3\sqrt \delta )v({F})$ vertices for $i=1, 2$. Moreover, by Lemma \ref{su-lemma}, the order of the largest component of $F_i ~(i=1, 2)$ must be less than $\frac{1}{3}v(F)$. By noting $|X|<(1/3-\eta/5)t$ and Fact \ref{density-degree}, we obtain a vertex set $B''\subseteq B\cap V(F)$ such that each vertex of $B''$ is adjacent to at least $\max\{v(F)/3, (1-\sqrt{\delta}/4)|B''|\}$ vertices of $B''$. Now we apply Lemma \ref{gya-lemma} to $H[B'']$ to conclude that $H[B'']$ contains a monochromatic component in color 1 or 2 of order larger than $\frac{1}{3}v(F)$. Then $F_1$ or $F_2$ contains a component of order at least $\frac{1}{3}v(F)$, which leads to a contradiction. $\Box$ \noindent {\textbf{Details of the remaining proof}:} We first establish the following claims. \begin{claim}\label{long path} $H_3$ contains no odd cycle of length at least $\frac{{(1 + 0.1\eta )t}}{{3(1 + \eta )}}$. \end{claim} \noindent{\bf Proof of Claim \ref{long path}.} On the contrary, let $C$ be an odd cycle of length $s\ge\frac{{(1 + 0.1\eta )t}}{{3(1 + \eta )}}$. Without loss of generality, assume that its vertex set is $[s]$. Thus we have that for $i=1,2,\dots,s$, $(V_i,V_{i+1})$ is $\epsilon$-regular and $d_{G_3}(V_i,V_{i+1})\geq 1/3$, where the indices are taken modulo $s$. By Fact \ref{localregular}, for odd $i=1,3,\dots,s-2$, we can take $V_i'\subseteq V_i$ and $V_{i+1}'\subseteq V_{i+1}$ with $|V_i'|\ge (1-\epsilon)|V_i|$ and $|V_{i+1}'|\ge (1-\epsilon)|V_{i+1}|$ such that $(V'_i,V'_{i+1})$ is $(2\epsilon,1/3-2\epsilon)$-super-regular. From the properties of $\epsilon$-regular pairs, we can find an odd cycle $u_1u_2\dots u_{s}u_1$ such that $u_i\in V_i'$ for $i=1,2,\dots,s-1$. We apply the techniques used by {\L}uczak \cite{lucazk-1999} and Lemma \ref{long-path-lemma} to show that $G_3$ contains a $C_n$.
Let $m=(1-\epsilon)^2\frac{N}{t}$, $\epsilon'=2\epsilon$ and $\beta_0=1-6\epsilon.$ Thus we have that every pair of vertices $v_i\in V_i'$, $v_{i+1}\in V_{i+1}'$ satisfies $\deg_{G_3}(v_i,V_{i+1}')\ge (1/3-2\epsilon)|V_{i+1}'|>\beta_0 m/5$, and $\deg_{G_3}(v_{i+1},V_i')> \beta_0 m/5$. Therefore, by Lemma \ref{long-path-lemma}, for every $\ell$, $1\le \ell \le m-5\epsilon m/\beta_0$, and for every pair of vertices $u_i\in V_i'$ and $u_{i+1}\in V_{i+1}'$ and odd $i=1,3,\dots,s-2$, $G_3$ contains a path of length $2\ell+1$ connecting $u_i$ and $u_{i+1}$. Thus, there are odd cycles of all odd lengths from $s$ to $(s-1)(m-5\epsilon m/\beta_0)$. Since \begin{align*} (s-1)(m-5\epsilon m/\beta_0)&\ge\left(\frac{(1+0.1\eta)t}{3(1+\eta)}-1\right) \left(1-\frac{5\epsilon} {1-6\epsilon}\right)\left(1-\epsilon\right)^2\frac{N}{t} \\&\overset{(\ref{constant-1})}{>}\frac{t}{3(1+\eta)}\left(1-10\epsilon \right)\frac{\left(3+\gamma\right)n}{t}\overset{(\ref{eta-ep})}{\ge}\frac{{{\rm{1 + 5}}\eta }}{{{\rm{1 + }}\eta }}\left(1 - 10\epsilon \right)n, \end{align*} which is at least $n$ again by noting (\ref{eta-ep}), $G_3$ contains an odd cycle $C_n$ as desired. $\Box$ A connected matching in a graph $G$ is a matching $M$ such that all edges of $M$ are in the same connected component of $G$. By the assumption that $G_i$ contains no $\mathcal{H}$ for $i=1,2$, we have the following claim. \begin{claim}\label{cla-match} For $i=1,2$, $H_i$ contains no connected matching on more than $(\frac{1}{3}-0.2\eta)t$ vertices. \end{claim} \noindent{\bf Proof of Claim \ref{cla-match}}. On the contrary, without loss of generality, suppose that $H_1$ contains a connected matching $M$ on at least $(\frac{1}{3}-0.2\eta)t$ vertices that is contained in a tree $T\subset H_1$. Suppose that the vertex set of $T$ is $\{x_1,\dots,x_l,x_{l+1},\dots, x_{2l}, x_{2l+1},\dots, x_{2l+l'}\}$, and the matching $M$ has edge set $E_M=\{x_ix_{l+i}: i=1,\dots, l\}$. We will prove that there exists a copy of $\mathcal{H}$ in $G_1$, contradicting the assumption that $G_1$ contains no $\mathcal{H}$. Since the proof is similar to that of \cite[Theorem 1.3]{G-G-M}, we only give a sketch as follows. Firstly, we shall apply Fact \ref{localregular} to get a subgraph $G_P$ of $G_1$ with classes $$A_1,\dots,A_{l}, A_{l+1},\dots,A_{2l}, A_{2l+1},\dots, A_{2l+l'}$$ which correspond to the vertices $x_1,\dots,x_l,x_{l+1},\dots, x_{2l}, x_{2l+1},\dots, x_{2l+l'}$, such that each of those sets has size at least $(1-2\epsilon)N/t$, the bipartite graphs induced by $A_i$ and $A_{l+i}$ are $(2\epsilon, 1/3-\epsilon)$-super-regular for $i\in[l]$, and the bipartite graphs induced by all the other pairs are $(2\epsilon, 1/3-\epsilon)$-regular. It is clear that these dense super-regular pairs cover $(1+o(1))n$ vertices. Secondly, we partition the vertices of $\mathcal{H}$ and, since $\mathcal{H}=(W,E_\mathcal{H})$ has small bandwidth, we can apply Lemma \ref{balanced} to obtain a partition of $W$ which will be composed of clusters $$W_1,\dots,W_l,W_{l+1},\dots,W_{2l}, W_{2l+1},\dots,W_{2l+l'}.$$ For fixed $1\leq j\leq 2l+l'$, we define $U_j$ as the set of vertices of $W_j$ with neighbors in some $W_k$ with $j\neq k$ and $x_jx_k \notin E_M$. Define the set $U_j'=N_{\mathcal{H}}(U)\cap (W_j\setminus U)$, where $U = \bigcup\nolimits_{i = 1}^{2l + l'} {{U_i}}$. Then, we can verify that all of the four conditions of Definition \ref{def} hold, i.e., (1) $|W_i|\leq |A_i|$ for $i\in[2l+l']$.
(2) $xy\in E_\mathcal{H}$ for $x\in W_i, y\in W_j$ implies $x_ix_j\in E_T$ for all distinct $i, j\in [2l+l']$. (3) $|U_i|\leq \epsilon |A_i|$ for $i\in[2l+l']$. (4) $|U'_i|, |U'_j|\leq \epsilon \min\{|A_i|, |A_j|: x_ix_j\in E_{M}\}$. Thus, the partition $\{W_1,\dots, W_{2l+l'}\}$ of $W$ is $(2\epsilon, T , M)$-compatible with $\{A_1,\dots, A_{2l+l'}\}$, which is a partition of $V(G_P)$. Finally, we can find a copy of $\mathcal{H}$ in $G_P$ by the Embedding Lemma (Lemma \ref{Embedding Lemma}), completing the proof. $\Box$ \begin{claim}\label{eh3} $e(H_3)\geq e(H)/3\geq(\frac{1}{6}-\epsilon/3)t^2$. \end{claim} \noindent{\bf Proof of Claim \ref{eh3}.} On the contrary, suppose that $e(H_3)< e(H)/3$. Thus, without loss of generality, suppose that \begin{align}\label{eh-3} e(H_1)>e(H)/3\geq \frac{1}{3}(1-\epsilon){t\choose{2}}\geq\left(\frac{1}{6}-\epsilon/3\right)t^2. \end{align} However, Claim \ref{cla-match} implies that $H_1$ contains no path with more than $(\frac{1}{3}-0.2\eta)t+1$ vertices, it follows by Lemma \ref{gallai} that \[ e({H_1}) \leq \frac{(\frac{1}{3}-0.2\eta)t-1}{2}\cdot t< \left(\frac{1}{6} - 0.1\eta \right){t^2}. \] This contradicts (\ref{eh-3}) by noting (\ref{eta-ep}). $\Box$ By Claim \ref{long path}, $H_3$ contains no odd cycle of length at least $\frac{{(1 + 0.1\eta )t}}{{3(1 + \eta )}}$. Now we apply Lemma \ref{luc-3} to graph $H_3$ with $\alpha=\frac{{(1 + 0.1\eta )}}{{3(1 + \eta )}}$ to deduce that $H_3$ contains subgraphs ${H_3'}$ and $H_3''$ such that ($i$) $V({H_3'}) \cup V({H_3''})=V(H_3)$, $V({H_3'})\cap V({H_3''})=\emptyset$ and each of the sets $V({H_3'})$ and $V({H_3''})$ is either empty or contains at least $\alpha \delta t/2$ vertices; ($ii$) ${H_3'}$ is bipartite; ($iii$) ${H_3''}$ contains no more than $\alpha t\left| {V({H_3''})} \right|/2$ edges; ($iv$) all except no more than $\delta {t^2}$ edges of $H_3$ belong to either ${H_3'}$ or ${H_3''}$. Let $A$ and $B$ be the sets of the bipartition of ${H_3'}$, and let $X=V({H_3''})$. From Claim \ref{eh3} and the property of $H_3$ (see ($iv$)), we have \begin{align}\label{eh3-lower} e({H_3'})+e({H_3''})&\geq e(H_3)-\delta t^2\geq\left(\frac{1}{6}-\epsilon/3 \right)t^2-\delta t^2 \overset{(\ref{eta-ep})}{>}\left(\frac{1-7\delta}{6}\right)t^2. \end{align} \begin{claim}\label{claim123} $|X|<(1/3-\eta/5)t$. \end{claim} \noindent{\bf Proof of Claim \ref{claim123}.} Let us put $|X|=\lambda t$ and $|A\cup B|=(1-\lambda)t$. By Lemma \ref{luc-3} ($iii$), \begin{align}\label{eh3-upper} e({H_3'})+e({H_3''}) &{\le} e({H_3'})+ \frac{{\alpha t\left| {V({H_3''})} \right|}}{2} \le\frac{|V(H_3')|^2}{4}+\frac{{\alpha t\left| {V({H_3''})} \right|}}{2}\nonumber \\&\le\frac{{{{\left( {1 - \lambda } \right)}^2}}}{4}{t^2}+\frac{{\lambda \left( {1 + 0.1\eta } \right)}}{{6(1 + \eta )}}{t^2}, \end{align} which together with (\ref{eh3-lower}) yield that \[\frac{{{{\left( {1 - \lambda } \right)}^2}}}{4}+\frac{{\lambda \left( {1-0.8\eta } \right)}}{6} >\frac{{{{\left( {1 - \lambda } \right)}^2}}}{4}+\frac{{\lambda \left( {1 + 0.1\eta } \right)}}{{6(1 + \eta )}} \geq\frac{1}{6}-\frac{7\delta}{6}, \] from which we obtain that $3\lambda^2-(4+1.6\eta)\lambda+(1+14\delta)>0.$ Since $\lambda\leq 1$, it follows that \begin{align*} \lambda<\frac{1}{6}\left(4+1.6\eta-\sqrt{(4+1.6\eta)^2-12(1+14\delta)}\right) <\frac{1}{3}\left(2+0.8\eta-\sqrt{1+3\eta}\right)<\frac13-\frac\eta5 \end{align*} provided $\eta$ is sufficiently small and by noting (\ref{eta-ep}). 
$\Box$ \begin{claim}\label{claim12} $\max\{|A|,|B|\}<({\frac{1}{2} - \sqrt \delta/2 } )t.$ \end{claim} \noindent{\bf Proof of Claim \ref{claim12}.} On the contrary, suppose that $|A|\geq ( {\frac{1}{2} - \sqrt \delta /2 } )t$ without loss of generality. Let $H[A]$ be the subgraph of $H$ induced by $A$. Note that $H[A]$ only contains edges in color 1 or color 2 as $H_3'$ is bipartite. Note also that $e(H[A])\geq{|A|\choose{2}}-\epsilon{t\choose{2}}.$ By Fact \ref{density-degree}, $H[A]$ contains a subgraph $H[A']$ such that $|A'|\ge({\frac{1}{2} - \sqrt \delta/2 } )t-\sqrt{\epsilon}t>( \frac{1}{2}-\sqrt{\delta} )t$ and \begin{align*} \delta(H[A']) &\ge \left| {A'} \right| - 2\sqrt \epsilon t > \left| {A'} \right| - 6\sqrt \epsilon | A'|\overset{(\ref{eta-ep})}{>}(1 - \sqrt \delta)| {A'}|, \end{align*} the second inequality due to $t<\frac{|A'|}{(1/2-\sqrt \delta)}$ and $\delta$ is sufficiently small. Now we apply Lemma \ref{balogh-conj} with graph $H[A']$ and $n=\frac{(1/2-\sqrt{\delta})t+1}{3}$ to conclude that $H[A']$ contains a monochromatic path ${P_{\frac{{(1- 2\sqrt \delta )t }}{3}}}$ in color 1 or color 2. As $\frac{(1 - 2\sqrt \delta )t}{3}>(\frac{1}{3}-0.2\eta)t+1$ from (\ref{eta-ep}), we have that either $G_1$ or $G_2$ must contain a copy of $\mathcal{H}$ by Claim \ref{cla-match}, a contradiction. $\Box$ For graphs $P$ and $Q$, we use $P\cup Q$ to denote the graph defined on $V(P)\cup V(Q)$ whose edge set is $E(P)\cup E(Q)$. From Claims \ref{claim123}-\ref{claim12} noting that $V({H_3'}) \cup V({H_3''})=V(H_3)=V(H)$ and $V({H_3'})\cap V({H_3''})=\emptyset$, we have that for any vertex $u\in V(H)$, \begin{align}\label{degree-upper} \deg_{H_3'\cup H_3''}(u)< \left( {\frac{1}{2} - \frac{\sqrt \delta}2} \right)t. \end{align} For an edge-colored graph $F$, we use $F_i$ to denote the subgraph induced by edges in color $i$ in $F$. \begin{claim}\label{cla6} $H$ has a subgraph $F$ with at least $(1-\frac{3}{2}\sqrt{\delta})t$ vertices such that $\delta(F)>v(F) - \sqrt \delta t/2$ and $\deg_{F_3}(u)<(\frac{1}{2} + \sqrt \delta )t$ for any vertex $u\in V(F)$. \end{claim} \noindent{\bf Proof of Claim \ref{cla6}.} We know that all but at most $\delta t^2$ edges of $H_3$ are contained in $H_3' \cup H_3''$. By Lemma \ref{density-degree-upper}, there exists an induced subgraph $H_3^0$ of $H_3$ such that $v(H_3^0)\geq v(H)-\sqrt{2\delta}t$ and \begin{align}\label{q-degree} \deg_{H_3^0}(u)<\deg_{{H_3'}\cup{H_3''}}(u)+\sqrt{2\delta}t\overset{(\ref{degree-upper})}{<}\left( {\frac{1}{2} - \sqrt \delta/2} \right)t+\sqrt{2\delta}t<\left(\frac{1}{2}+\sqrt{\delta}\right)t \end{align} for any vertex $u\in V(H_3^0)$. Note that $e(H[V(H_3^0)])\geq {v(H_3^0)\choose{2}}-\epsilon{t\choose{2}}$, it follows from Fact \ref{density-degree} that $H[V(H_3^0)]$ contains a subgraph $F$ such that \begin{align}\label{F-degree} v(F)\geq v(H_3^0)-\sqrt{\epsilon}t\geq v(H)-\sqrt{2\delta}t-\sqrt{\epsilon}t\overset{(\ref{eta-ep})}{>}\left(1-\frac{3}{2}\sqrt{\delta}\right)t \end{align} and \begin{align}\label{F-degree-min} \delta(F)\geq v(H_3^0)-2\sqrt{\epsilon}t>v(F)-\sqrt \delta t/2. \end{align} Since $F_3\subset H_3^0$, by noting (\ref{q-degree}), we obtain that for any $u\in V(F)$, \begin{align}\label{f3-upper-degree} \deg_{F_3}(u)\leq \deg_{H_3^0}(u)<\left(\frac{1}{2}+\sqrt{ \delta}\right)t, \end{align} completing the proof. 
$\Box$ According to (\ref{F-degree-min}) and (\ref{f3-upper-degree}), we obtain that for any vertex $u\in V(F)$, \begin{align}\label{degree-F12} \deg_{F_1\cup F_2}(u)&=\deg_{F_1}(u) + \deg_{F_2}(u) \geq \delta(F)-\deg_{F_3}(u)\nonumber\geq v(F) - \sqrt \delta t/2-\left(\frac{1}{2}+\sqrt{ \delta}\right)t\nonumber \\&\overset{(\ref{F-degree})}{>} \left( {1 - \frac{{\frac{1}{2}+\frac{3}{2}\sqrt \delta }}{{1 - \frac{3}{2}\sqrt \delta }}} \right)v(F)> \left( \frac{1}{2} -3\sqrt {\delta} \right) v(F). \end{align} Recall that a connected matching in a graph $G$ is a matching $M$ such that all edges of $M$ are in the same connected component of $G$. \begin{claim}\label{cla7} For $i=1,2$, $F_i$ contains no connected matching on more than $(\frac{1}{3} - 3\sqrt \delta )v({F})$ vertices. \end{claim} \noindent{\bf Proof of Claim \ref{cla7}.} Since $F\subset H$, Claim \ref{cla-match} implies that $F_i$ contains no connected matching on more than $(\frac{1}{3}-0.2\eta)t$ vertices for $i=1,2$. Note that \[ \left(\frac{1}{3} - 3\sqrt \delta \right)v(F) \overset{(\ref{F-degree})}{>} \left( \frac{1}{3} - 3\sqrt \delta \right)\left( {1 - \frac{{3\sqrt \delta }}{2}} \right)t > \left( \frac{1}{3} - \frac{{7\sqrt \delta }}{2} \right)t \overset{(\ref{eta-ep})}{> } \left(\frac{1}{3} - 0.2\eta \right)t. \] Thus $F_i$ ($i=1,2$) contains no connected matching on more than $(\frac{1}{3} - 3\sqrt \delta )v({F})$ vertices. $\Box$ \begin{claim}\label{lag-cp-F12} For $i=1,2$, the largest component of $F_i$ has order less than $\frac{1}{3}v(F)$. \end{claim} \noindent{\bf Proof of Claim \ref{lag-cp-F12}.} Let $R_1,\dots, R_r$ be the components of $F_1$ and let $B_1,\dots,B_b$ be the components of $F_2$. Without loss of generality, suppose that $|V(R_i)|\geq |V(R_{i+1})|$ for all $1\leq i\leq r-1$, and $|V(B_j)|\geq |V(B_{j+1})|$ for all $1\leq j\leq b-1$. Let $r'$, $0\leq r' \leq r $, be the maximum integer such that $|V(R_{r'})|\geq (\frac{1}{3} - 3\sqrt \delta)v(F)$. Similarly, let $b'$, $0\leq b' \leq b$, be the maximum integer such that $|V(B_{b'})|\geq(\frac{1}{3} - 3\sqrt \delta)v(F)$. We aim to show that $r'=b'=0$. By Claim \ref{cla7}, $F_1$ and hence each $R_i$ contains no connected matching on more than $(\frac{1}{3}-3\sqrt \delta )v({F})$ vertices. Thus Lemma \ref{su-lemma} implies that each $R_i$ has a partition $S_{R_i}\cup Q_{R_i}\cup I_{R_i}$ satisfying \begin{align}\label{su-ineq-1} \left|{{Q_{{R_i}}}} \right| + 2\left| {{S_{{R_i}}}} \right| = \min \left\{ {\left| {V({R_i})} \right|,\left( {\frac{1}{3} - 3\sqrt \delta } \right)v(F) - 1} \right\}. \end{align} \begin{prop}\label{cla-empty} For $i>r'$, $I_{R_i}=\emptyset$ and $S_{R_i}=\emptyset$; similarly, for $j>b'$, $I_{B_j}=\emptyset$ and $S_{B_j}=\emptyset$. \end{prop} {\bf Proof.} For $i>r'$, $\left| {V({R_i})} \right| \le ( {\frac{1}{3} - 3\sqrt \delta } )v(F)-1$, implying that $\left| {{Q_{{R_i}}}} \right| + 2\left| {{S_{{R_i}}}} \right| = \left| {V({R_i})} \right|$ and $I_{R_i}=\emptyset$ by (\ref{su-ineq-1}). Thus $\left| {{Q_{{R_i}}}} \right| + \left| {{S_{{R_i}}}} \right| = \left| {V({R_i})} \right|$, and $S_{R_i}=\emptyset$ follows. The second assertion is similar.
$\Box$ According to Lemma \ref{su-lemma} and (\ref{su-ineq-1}), we obtain that for $1\le i\le r$, \begin{align}\label{whit-req1} |S_{R_i}|<\frac{1}{2}\left(\frac{1}{3}- 3\sqrt \delta \right)v(F), \;\;|Q_{R_i}|+2|S_{R_i}|<\left(\frac{1}{3}- 3\sqrt \delta \right)v(F), \end{align} and \begin{align}\label{whit-req2} \nonumber |I_{R_i}|=|V(R_i)\setminus (Q_{R_i}\cup S_{R_i})|&=|V(R_i)|+|S_{R_i}|-(|Q_{R_i}|+2|S_{R_i}|) \\&>|V(R_i)|+|S_{R_i}|-\left(\frac{1}{3}- 3\sqrt \delta \right)v(F). \end{align} Note that for any $B_j$, similar sets $S_{B_j}$, $I_{B_j}$ and $Q_{B_j}$ can be defined, with analogues of the above bounds. In particular, $| {{I_{B_j}}}|\ge { | {V({B_j})} |+{| {{S_{{B_j}}}} | - (\frac{1}{3} - 3\sqrt \delta )v(F)} }$. By Lemma \ref{su-lemma} ($iv$), for $1\leq i\leq r$, any vertex $u\in I_{R_i}$ satisfies that $\deg_{F_1}(u)\le\frac{1}{2}(\frac{1}{3} - 3\sqrt \delta )v(F)$. Similarly, for $1\leq j\leq b$, any vertex $u\in I_{B_j}$ satisfies that $\deg_{F_2}(u)\le\frac{1}{2}(\frac{1}{3} - 3\sqrt \delta )v(F)$. Note that for $1\leq i\leq r$, each vertex in $Q_{R_i}$ has at most one neighbor in $I_{R_i}$ in $F_1$ by Lemma \ref{su-lemma} ($iii$). Hence, any vertex $u\in Q_{R_i}$ satisfies that $\deg_{F_1}(u)\le|(Q_{R_i}\cup S_{R_i})\setminus\{u\}|+1$, which is less than $(\frac{1}{3} - 3\sqrt \delta)v(F)$ by noting (\ref{whit-req1}). Similarly, for $1\leq j\leq b$, any vertex $u\in Q_{B_j}$ satisfies that $\deg_{F_2}(u)<(\frac{1}{3} - 3\sqrt \delta )v(F)$. \begin{prop}\label{white-claim2} $I_{R_i}\cap(I_{B_j}\cup Q_{B_j})= \emptyset $. \end{prop} {\bf Proof.} Indeed, if there is a vertex $u\in I_{R_i}\cap(I_{B_j}\cup Q_{B_j})$, then from the above observation, $$\deg_{F_1\cup F_2}(u)=\deg_{F_1}(u)+\deg_{F_2}(u)<\frac{1}{2}\left(\frac{1}{3}-3\sqrt \delta\right)v(F)+\left(\frac{1}{3}-3\sqrt \delta\right)v(F),$$ which is less than $(\frac{1}{2}-4\sqrt\delta)v(F)$, contradicting (\ref{degree-F12}). $\Box$ Therefore, for $1\leq i\leq r$, the set $I_{R_i}\subseteq \bigcup\nolimits_{1 \le j \le b} {{S_{{B_j}}}}$ due to Proposition \ref {white-claim2}. Since $S_{B_j}=\emptyset$ for $j>b'$ by Proposition \ref{cla-empty}, we have that $I_{R_i}\subseteq \bigcup\nolimits_{1 \le j \le b'} {{S_{{B_j}}}}$. Similarly, for $1\leq j\leq b$, $I_{B_j}\subseteq\bigcup\nolimits_{1 \le i \le r'} {{S_{{R_i}}}} $. 
Consequently, since $I_{R_i}=\emptyset$ for $i>r'$ and $I_{B_j}=\emptyset$ for $j>b'$ by Proposition \ref{cla-empty}, we obtain that \begin{align}\label{B_r_R} \sum\limits_{i= 1}^{r'} {\left| {{I_{{R_i}}}} \right|}=\sum\limits_{i = 1}^{r} {\left| {{I_{{R_i}}}} \right|} \le \sum\limits_{j = 1}^{b'} {\left| {{S_{{B_j}}}} \right|},\;\;\text{and}\;\;\sum\limits_{j= 1}^{b'} {\left| {{I_{{B_j}}}} \right|} \le \sum\limits_{i = 1}^{r'} {\left| {{S_{{R_i}}}} \right|}, \end{align} which together with (\ref{whit-req2}) yields that \begin{align}\label{Bj-Ri} \nonumber 0 &\ge \sum\limits_{j = 1}^{b'} {\left| {{I_{B_j}}} \right|} - \sum\limits_{i = 1}^{r'} {\left| {{S_{{R_i}}}} \right|} \nonumber\\& \ge\sum\limits_{j = 1}^{b'} {\left(\left| {V({B_j})} \right|+ {\left| {{S_{{B_j}}}} \right| - \left(\frac{1}{3} - 3\sqrt \delta \right)v(F)} \right)} - \sum\limits_{i = 1}^{r'} {\left| {{S_{{R_i}}}} \right|} \nonumber\\& \overset{(\ref{B_r_R})}{\ge} \sum\limits_{j = 1}^{b'} {\left( {\left| {V({B_j})} \right| - \left(\frac{1}{3} - 3\sqrt \delta \right)v(F) } \right)} + \sum\limits_{i = 1}^{r'} {\left| {{I_{{R_i}}}} \right|} - \sum\limits_{i = 1}^{r'} {\left| {{S_{{R_i}}}} \right|} \nonumber\\&\overset{(\ref{whit-req2})}{>} \sum\limits_{j = 1}^{b'} {\left( {\left| {V({B_j})} \right| - {\left(\frac{1}{3} - 3\sqrt \delta \right)v(F) } } \right)} + \sum\limits_{i = 1}^{r'} {\left( {\left| {V({R_i})} \right| - {\left(\frac{1}{3} - 3\sqrt \delta \right)v(F) } } \right)}, \end{align} which implies that $r'=b'=0$ since $\left| {V({R_{i}})} \right| \ge (\frac{1}{3} - 3\sqrt \delta )v(F)$ and $\left| {V({B_{j}})} \right| \ge (\frac{1}{3} - 3\sqrt \delta )v(F)$ for $1\leq i\leq r'$ and $1\leq j\leq b'$. This completes the proof of Claim \ref{lag-cp-F12}. $\Box$ By Claim \ref{claim123} and the facts that $A\cup B\cup X=V(H_3)=V(H)$ and $F\subseteq H$, we obtain that \begin{align*} \left|{\left( A \cup B \right) \cap V({F})} \right|&\ge |V({F})|-|X|> v({F}) - \left(\frac{1}{3} - \frac{1}{5}\eta \right)t\overset{(\ref{F-degree})}{>} v(F) - \frac{{\frac{1}{3} - \frac{1}{5}\eta }}{{1 - \frac{3}{2}\sqrt \delta }}v(F) \\&\overset{(\ref{eta-ep})}{>}v(F) - \frac{1}{3}(1 - \sqrt \delta/2 )v(F)= \left( {\frac{2}{3} + \frac{1}{6}\sqrt \delta } \right)v(F). \end{align*} Thus, one of $|A \cap V({F})|$ and $|B \cap V(F)|$ must be at least $({\frac{1}{3} + \frac{1}{12}\sqrt \delta } )v(F).$ Without loss of generality, we suppose that ${B'} \subseteq {B} \cap V({F})$ satisfies \begin{align}\label{v3equ} \left|B' \right| = \left( {\frac{1}{3} + \frac{1}{12}\sqrt \delta } \right)v(F). \end{align} Note that $e(H[B'])\geq {|B'|\choose{2}}-\epsilon{t\choose{2}}.$ Therefore, by deleting at most $\sqrt{{\epsilon}}t$ vertices from $B'$ we obtain a vertex set $B''\subseteq B'$ such that each vertex of $B''$ is adjacent to at least $|B'|-2\sqrt{\epsilon}t$ vertices in $H[B'']$ by Fact \ref{density-degree}. We obtain that \begin{align}\label{last-ineq1} \delta (H[B'']) \ge \left| {B'} \right| - 2\sqrt{\epsilon}t\overset{(\ref{v3equ}), (\ref{F-degree})}{>} \left( {\frac{1}{3} + \frac{1}{12}\sqrt \delta } \right)v(F) - \frac{{2\sqrt {\epsilon } }}{{1 - \frac{3}{2}\sqrt \delta }}v(F) \overset{(\ref{eta-ep})}{\geq} \frac{1}{3} v(F). \end{align} By noting (\ref{v3equ}), \begin{equation*} \delta (H[B'']) >\frac{1}{3}v(F)= \frac{{\left| {B'} \right|}}{{1 + \sqrt \delta/4 }} \ge\frac{{\left|{B''} \right|}}{{1 + \sqrt \delta/4 }}> \left( {1 - \frac{ \sqrt{\delta} }{4}} \right)\left| {B''} \right|.
\end{equation*} Note that all edges of $H[B'']$ are colored only with color $1$ or $2$ since $B''\subseteq B$ and $B$ is one of the parts of the bipartite graph $H_3'$. Now we apply Lemma \ref{gya-lemma} to $H[B'']$ to conclude that $H[B'']$ contains a monochromatic component in color 1 or 2 of order larger than $\delta (H[B''])$, which is at least $\frac{1}{3}v(F)$ according to (\ref{last-ineq1}). Since $H[B''] \subseteq {F}$, it follows that $F$ contains a monochromatic component in color 1 or 2 of order at least $\frac{1}{3}v(F)$. This contradicts Claim \ref{lag-cp-F12}. In conclusion, the proof of Theorem \ref{main theorem} is complete. $\Box$ \end{spacing} \end{document}
\begin{document} \title[VAs of Variable Annuities]{Out-of-model adjustments of Variable Annuities} \author[Zhiyi Shen]{ Zhiyi Shen \\ Morgan Stanley \\ This draft: \today } \thanks{\textit{Email address: \href{[email protected]}{[email protected]} }} \thanks{\textit{JEL Classification:} G22, C14, C63. } \thanks{The views expressed in this article are the author's own and do not represent the opinions of any firm or institution.} \begin{abstract} This paper studies the model risk of the Black-Scholes (BS) model in pricing and risk-managing variable annuities, motivated by its wide usage in the insurance industry. Specifically, we derive a model-free decomposition of the no-arbitrage price of the variable annuity into the BS model price in conjunction with three out-of-model adjustment terms. This sheds light on all risk drivers behind the product, that is, spot price, realized volatility, future smile, and sub-optimal withdrawal. We further investigate the efficacy of the BS-based hedging strategy when the market diverges from the model assumptions. We disclose that the spot price risk can always be eliminated by the strategy and that the hedger's cumulative P\&L exhibits gradual slippage and instantaneous leakage. We finally show that the pricing, risk and hedging models can be separated from each other in managing the risks of variable annuities.\\ \textbf{Keywords:} Valuation Adjustment, Variable Annuities, Model Risk. \end{abstract} \maketitle \section{Introduction} Variable annuities are long-term, equity-linked, and tax-deferred structured products issued by insurance companies targeting retail customers. The size of the U.S. variable annuities market is remarkable. By the end of 2021, variable annuity net assets in the U.S. climbed to 2,130 billion dollars \cite{IRI2021}, around two-thirds of the notional amount outstanding of the entire U.S. OTC equity derivatives market of 3,567 billion dollars \cite{BIS2021}. The pricing of variable annuities is a stochastic control problem without an analytical solution in general \cite{Azimzadeh2015,Huang2016,Huang2017,Shen2020} and thus is fairly cumbersome even under the classical Black-Scholes (BS) model. Despite extensive studies on various numerical methods for pricing variable annuities, little understanding has been delivered in the literature regarding the impact of model misspecification on the pricing and hedging of variable annuities. This paper aims to bridge this gap by exploring tentative answers to the following questions, raised from different perspectives. \begin{itemize} \item From the perspective of pricing, how should the pricer determine the volatility parameter in the BS model so that the product is priced conservatively enough to compensate for the potential model risk? \item From the hedger's perspective, a natural question arises: is it still viable to conduct BS-Delta-hedging given that the market diverges from the model assumptions? Does doing so reduce the risk or even escalate the problem? \item From the viewpoint of risk management, it is important to pinpoint all risk drivers behind the variable annuity. Then the insurer can decide which risk to hedge, which risk to outsource, and which risk to take. \end{itemize} In response to the questions above, this paper makes several contributions to the literature.
As the primary contribution, we decompose the model-free no-arbitrage price of the variable annuity into the BS model price together with (i) a valuation adjustment for future realized volatility, (ii) a valuation adjustment for the future implied volatility smile, and (iii) a valuation adjustment for sub-optimal withdrawal risk; see Theorems \ref{thm:risk_decompose} and \ref{thm:va_multi_period} in Section \ref{sec:valuation_adjustment}. We further show that the BS model enables the insurer to speculate on the volatility risks by marking up/down the volatility parameter. As the second contribution, we investigate the efficacy of BS-based delta-hedging in the presence of model risk. We find that the risk caused by the underlying asset's price change can always be eliminated by such a classical hedging strategy \textit{regardless of whether the market behaves in accordance with the BS model's assumptions or not}. Furthermore, there is even a chance that the hedger can benefit from taking the model risk; see Proposition \ref{prop:carry_pnl}. This justifies the use of the BS model as a hedging tool to some extent. However, such a hedging strategy does not come without downsides. We disclose that the hedger's cumulative P\&L exhibits gradual slippage throughout the contract's lifetime and instantaneous leakage across each withdrawal date; see Remark \ref{rem:pnl_leakage} in the sequel. As the third contribution, we show that it is viable to separate the risk model from the pricing model. Specifically, on the one hand, the insurer may use the BS model solely as an extractor for the spot price risk, with the residual part further decoupled into three extra factors, i.e., realized volatility, future implied volatility and sub-optimal withdrawal. Such a risk attribution is \textit{exhaustive} and can be constantly monitored; see Remark \ref{rem:risk_attribution}. On the other hand, the insurer may use a different pricing model to charge the premium or estimate the hedging costs for the four risk factors. Finally, we would like to stress that the notion of out-of-model adjustment is not new and has been well understood for European options \cite{Andersen2010,Bergomi2015,Carr2001,Gatheral2011} and cliquets \cite{Bergomi2015}. However, it is a fairly challenging task to find a systematic way to decompose the model-free price of any exotic derivative product into a given model price plus adjustments that have financial meanings and can shed light on different aspects of model risk. For variable annuities, to the best of the author's knowledge, this paper is the first attempt to pursue this route. The existing results mentioned above cannot be carried over here because the variable annuity contains some unique risk features; in particular, it gives rise to the valuation adjustment for sub-optimal withdrawal risk that is unseen in other products. This distinctive risk profile stems from the stochastic control problem involved in variable annuities. The remainder of this article is structured as follows. Section \ref{sec:variable_annuity} gives a brief recap of the pricing of the variable annuity as a stochastic control problem. Section \ref{sec:valuation_adjustment} presents the main result of the paper: the out-of-model adjustment formula. Section \ref{sec:num_studies} gives numerical studies and finally Section \ref{sec:conclusion} concludes the paper.
\section{Variable Annuities} \label{sec:variable_annuity} \subsection{Notations} \begin{itemize} \item Consider a set of equally-spaced withdrawal dates $\mathcal{T}:=\{t_i\}_{i=1}^{N}$ with $\delta:= t_{i+1} - t_i$. \item Let $X(t)$ be the time-$t$ value of the state variable associated with the contract (investment account, benefit base, etc) valued in $\mathbb{X}$. To ease the notations, in the sequel, we denote shorthand $X_n:=X(t_n)$ and $X_{n^+}:=X\left(t_n^{+}\right)$. \begin{remark} $X(t)$ is not necessarily a scalar process. For the clarity of presentation, this paper concentrates on the one-dimensional case, which however is not essential to our argument for proving the main results. {\mathbb{E}}nd{remark} \item Denote $K: \mathbb{X}\times {\mathbb{A}}\rightarrow \mathbb{X}$ as the transition map of the state variable across an event date, with ${\mathbb{A}}$ being the feasible set of the withdrawal policy. That is, \begin{eqnarray}s X_{n^+} = K(X_n, a),\ \ a \in A_n(X_n) {\mathbb{E}}as where $a$ is the policyholder's withdrawal amount at $t_n$ and $A_n(X_n) \subset {\mathbb{A}}$ is some state-dependent constraint. \item Between two withdrawal dates, the state variable evolves according to \begin{eqnarray}s X(t) = F_n\left(X_{n^{+}}, \textup{VA}repsilon(t)\right),\ \ t \in {\color{blue}(t_n}, t_{n+1}], {\mathbb{E}}as where $\textup{VA}repsilon(t) \in {\mathcal{F}}_t$ is some random driver (e.g. the cumulative return of the underlying asset over $[t_n, t]$). \item Let $g: \mathbb{X} \rightarrow {\mathbb{R}}_+$ and $f_n: \mathbb{X} \times {\mathbb{A}} \rightarrow {\mathbb{R}}_+$ be the terminal and intermediate payoff functions, respectively. \item Let $r$ be the risk-free rate and denote by $\mathbb{Q}$ the risk-neutral measure. Let ${\mathcal{F}}_t$ be the information up to time $t$. {\mathbb{E}}nd{itemize} \subsection{Bellman Equation -- Model-free Case} The pricing of variable annuities is typically formulated as a discrete-time stochastic control problem and accordingly, the value function at withdrawal time is recursively given by the following Bellman equation: \begin{eqnarray} \label{eq:Bellman_eq} \begin{cases} V(t_N) &= g(X_N),\\ V(t_n) &= \sup\limits_{a \in A_{n}\left(X_n\right)} \left[f_{n}(X_n, a) + C_{n}\left(K(X_n, a)\right)\right], \ \ 1\leq n \leq N-1, {\mathbb{E}}nd{cases} {\mathbb{E}}a where \begin{eqnarray} \label{eq:post_withdrawal_value} C_{n}(x) = {\mathbb{E}}_{n,x}^{\mathbb{Q}} \left[e^{-r\delta}V(t_{n+1})\right], {\mathbb{E}}a with $ {\mathbb{E}}_{n,x}^{\mathbb{Q}}[\cdot]:={\mathbb{E}}^{\mathbb{Q}}\left[\cdot|\mathcal{G}_{n}^{x}\right] $ and $\mathcal{G}_{n}^{x}:=\sigma\left(\{X_{n^{+}}=x\}\bigcup \mathcal{F}_{t_n}\right)$; see e.g. \cite{Azimzadeh2015,Huang2016,Huang2017,Shen2020}. $C_n(x)$ can be thought of as the value of the contract right after the policyholder's withdrawal at $t_n$ with the post-withdrawal state $X_{n^+}=x$. Between two event dates, the value function satisfies \begin{eqnarray} \label{eq:martingale_eq} V(t) = {\mathbb{E}}^{\mathbb{Q}} \left[\left. e^{-r(t_{n+1}-t)}V(t_{n+1})\right|\mathcal{F}_t\right],\ \ t\in{\color{blue}(t_n}, t_{n+1}]. {\mathbb{E}}a As a remark, with the convention that the contract inception time $t_0$ is not a withdrawal date, the above equation holds for $t\in{\color{blue} [t_0}, t_{1}]$ when $n=0$. 
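To make the recursion concrete, the following Python sketch illustrates how the Bellman equation \eqref{eq:Bellman_eq} can be evaluated by backward induction once the conditional expectation in \eqref{eq:post_withdrawal_value} is supplied (e.g.\ by Monte Carlo or a PDE solver). It is a minimal illustration only: the callables \texttt{payoff\_g}, \texttt{payoff\_f}, \texttt{transition\_K}, \texttt{feasible\_actions} and \texttt{expected\_continuation} are placeholders for $g$, $f_n$, $K$, $A_n$ and $C_n$, and the discretization of the state and action spaces is an assumption made here for illustration rather than part of the contract specification.
\begin{verbatim}
import numpy as np

def value_by_backward_induction(x_grid, N, payoff_g, payoff_f, transition_K,
                                feasible_actions, expected_continuation):
    """Backward induction for the Bellman equation on a state grid.

    Returns a dict mapping n -> array of V(t_n, x) for x in x_grid."""
    V = {N: np.array([payoff_g(x) for x in x_grid])}     # V(t_N) = g(X_N)
    for n in range(N - 1, 0, -1):
        V_n = np.empty(len(x_grid))
        for i, x in enumerate(x_grid):
            # V(t_n) = sup_a [ f_n(x, a) + C_n(K(x, a)) ], with
            # C_n(y) = E[ e^{-r delta} V(t_{n+1}) | X_{n^+} = y ]
            V_n[i] = max(payoff_f(n, x, a)
                         + expected_continuation(n, transition_K(x, a), V[n + 1], x_grid)
                         for a in feasible_actions(n, x))
        V[n] = V_n
    return V
\end{verbatim}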
\subsection{Bellman Equation -- Black-Scholes Case} In a BS world, the price function of the variable annuity is recursively given by \begin{eqnarray} \label{eq:Bellman_eq_BS} \begin{cases} V_{\textup{BS}}(t_N, x) &= g(x),\\ V_{\textup{BS}}(t_n, x, v) &= \sup\limits_{a \in A_{n}\left(x\right)} \left[f_{n}(x, a) + C_{n}^{\textup{BS}}\left(K(x, a),v\right)\right], \ \ 1\leq n \leq N-1, \end{cases} \end{eqnarray} where \begin{eqnarray} \label{eq:post_withdrawal_BS} C_{n}^{\textup{BS}}(x,v) = V_{\textup{BS}}\left({\color{blue} t_{n}^{+}}, x,v\right), \end{eqnarray} and between two withdrawal dates $V_{\textup{BS}}$ solves the following BS-type PDE: \begin{eqnarray} \label{eq:BS_eq} \begin{cases} V_{\textup{BS}}|_{t=t_{n+1}} = V_{\textup{BS}}({\color{blue} t_{n+1}}, x, v),&\\ \partial_t V_{\textup{BS}} + rx \partial_x V_{\textup{BS}} + \frac{v}{2}x^2\partial^2_{xx} V_{\textup{BS}}- rV_{\textup{BS}} = 0,\ \ {\color{blue} t\in(t_n, t_{n+1})},\ \ 0\leq n \leq N-1,& \end{cases} \end{eqnarray} with $v$ denoting the BS variance parameter; cf. \cite{Azimzadeh2015}. \begin{remark}[Value Function vs. Value Process] It is worth stressing the difference between $V_{\textup{BS}}(t, x, v)$ and $V(t)$: the former is a value function while the latter is a stochastic process which is not necessarily a function of the state variable $X(t)$, due to the existence of other (latent) risk factors generating the filtration ${\mathcal{F}}=\left({\mathcal{F}}_t\right)_{t\in[0, t_N]}$. Similarly, given a realized value of $X_{n^+}=x$, $C_n(x)$ is still a random variable (parameterized by $x$) while $C_{n}^{\textup{BS}}(x, v)$ is deterministic. \end{remark} The primary thrust of the article is to characterize the discrepancy between the value function $V_{\textup{BS}}(t,x,v)$ and the value process $V(t)$. \section{Out-of-Model Adjustments} \label{sec:valuation_adjustment} This section is devoted to presenting the main results of the article. To get some flavour of the problem, we start with a two-period case in the subsequent Section \ref{sec:two_period_case}. \subsection{Valuation Adjustments} \label{sec:two_period_case} \begin{theorem}[Out-of-Model Adjustments] \label{thm:risk_decompose} Let $N=2$ and let $\xi_1 \in {\mathcal{F}}_{t_1}$ denote the BS implied variance of a European option with payoff $g(X(t_2))$ as of time $t_1$ (see also Definition \ref{def:imp_var} in Appendix \ref{app:proof_multi_period}). Suppose the assumptions in Appendix \ref{app:assum} hold. Then \begin{eqnarray} \label{eq:va_eq} V(t) = V_{\textup{BS}}(t, X(t), v)+ V_{1}(t) + V_{2}(t), \ \ t \in [t_0, t_1], \end{eqnarray} where \begin{eqnarray} \label{eq:gamma_va} V_{1}(t) &=&{\mathbb{E}}^{\mathbb{Q}}\left[\left.\int_t^{t_1} \frac{1}{2}e^{-r(u - t)} \partial_{x}^2 V_{\textup{BS}}\left(u, X(u), v\right)\left[{\textup{d}}\left[X\right](u)-vX^2(u){\textup{d}} u\right]\right|{\mathcal{F}}_t\right],\\ \label{eq:smile_va} V_{2}(t) &=& e^{-r(t_1 - t)}{\mathbb{E}}^{\mathbb{Q}}\left[\left.V_{\textup{BS}}\left(t_1, X_1, {\color{blue}\xi_1}\right) - V_{\textup{BS}}\left(t_1, X_1, {\color{blue} v}\right)\right|{\mathcal{F}}_t\right], \end{eqnarray} and $\left[X\right](t)$ denotes the quadratic variation process of $X(t)$. \end{theorem} \begin{remark}[Realized \& Implied Volatility Risks] Theorem \ref{thm:risk_decompose} discloses two different risks that are not priced in the BS model.
\begin{itemize} \item {\it Realized Volatility Risk}\quad The first valuation adjustment (VA) term \eqref{eq:gamma_va} stems from the discrepancy between the BS variance parameter $v$ and the \textit{realized variance} $\left({\textup{d}} X(u)/X(u)\right)^2$ of the underlying. Should the underlying price behave in accordance with the BS model, this VA term would vanish. \item {\it Future Smile Risk}\quad The second VA term \eqref{eq:smile_va} is caused by the difference between the BS variance $v$ and the \textit{implied variance} of the European option with payoff $g(X(t_2))$ at a \textit{future} time point $t_1$. The latter is determined by the entire volatility smile as of $t_1$ and, for this reason, we refer to this risk source as the \textit{future-smile risk}. In an ideal BS world, all European options have the same implied variance and thus this VA term disappears. \end{itemize} To summarize, we have \begin{eqnarray*} \textup{Fair price} = \textup{BS price} + \textup{VA for future realized volatility risk} + \textup{VA for future smile risk}. \end{eqnarray*} The two VA terms vanish should the model's assumptions be satisfied. \end{remark} \begin{remark} \label{rem:breakeven} Evaluating the two VA terms calls for knowledge of the true model, which is a formidable task. Nonetheless, gauging their signs is relatively easy: \begin{enumerate} \item The VA for the future realized volatility risk is positive (resp. negative) if the BS model variance $v$ underestimates (resp. overestimates) the {\it realized} variance $\left({\textup{d}} X(u)/X(u)\right)^2$ over $[t, t_1]$; \item The VA for the future smile risk is positive (resp. negative) if the BS model variance $v$ underestimates (resp. overestimates) the {\it implied} variance $\xi_1$. \end{enumerate} The critical insight from the above is that the insurer has the freedom of under-pricing or over-pricing the variable annuity by marking up/down the BS variance parameter $v$. In other words, {\it intentionally or not, should the insurer decide to choose the BS model to price the variable annuity, she essentially speculates on the realized volatility and future smile risks.} \end{remark} \subsection{P\&L Slippage and Leakage} The previous discussion reveals that the BS model fails to price in Gamma and future smile risks. Next, we study the impact of the use of the BS model as a hedging tool on the insurer's cumulative P\&L. Consider the following situation: the BS model is not in line with the market's dynamics, but the insurer pretends that it is and delta-hedges her exposure to the variable annuity. Specifically, the hedging strategy is given as follows. \begin{itemize} \item[] {\bf (H1)} At time $t_0$, the insurer sells a variable annuity contract and chooses to value it by the BS model with a BS variance parameter $v$. Accordingly, the premium received by the issuer is $V_{\textup{BS}}(t_0)$, which is used to finance her hedging strategy. \item[] {\bf (H2)} Over the time horizon $[t_0, t_1)$, the insurer continuously delta-hedges her exposure with two commitments: \begin{enumerate} \item[] {\bf (C1)} the insurer freezes the variance parameter $v$ because the implied volatility of the variable annuity is not quoted in the market; \item[] {\bf (C2)} for the same reason as above, the insurer always marks her position to the model price $V_{\textup{BS}}(t, X(t), v)$.
\end{enumerate} \item[] {\bf (H3)} At time $t_1$, the variable annuity degenerates into a European option whose implied variance $\xi_1$ can be observed from the market quotes for vanilla options\footnote{ Given a market for vanilla call options with all strikes, one can pin down the BS implied variance/volatility for any European option with convex payoff \cite{Bergomi2015}.}. Accordingly, the insurer can mark the value of the contract to be $V_{\textup{BS}}(t_1, X_1, \xi_1)$. \end{itemize} Now the problem of interest is to understand how the insurer's mark for P\&L varies as time progresses from $t_0$ to $t_1$. The following proposition sheds light on this. \begin{proposition} \label{prop:carry_pnl} Suppose the assumptions in Appendix \ref{app:assum} hold. Let $\Pi(u)$ be the value of the insurer's hedges as of time $u$. The cumulative profit and loss marked by the insurer is given by \begin{eqnarray} \label{eq:carry_pnl} \textup{P\&L}(t) = \Pi(t) - V_{\textup{BS}}\left(t, X(t), v\right) = \textup{P\&L}_{\Gamma}(t), \ \ t \in [t_0, {\color{blue} t_1)}, \end{eqnarray} where \begin{eqnarray} \label{eq:Gamma_pnl} \textup{P\&L}_{\Gamma}(t) = \int_{t_0}^{t} \frac{e^{r(t-u)}}{2}\partial_{x}^2V_{\textup{BS}}\left[vX^2(u){\textup{d}} u - {\textup{d}}\left[X\right](u)\right], \ \ t \in [t_0, t_1]. \end{eqnarray} Furthermore, the cumulative profit and loss realized by the insurer as of $t_1$ is given by \begin{eqnarray} \label{eq:mtm_pnl} \textup{P\&L}({\color{blue} t_1}) &=& \Pi(t_1) - V_{\textup{BS}}\left(t_1, X_1, \xi_1\right) = \underbrace{\textup{P\&L}_{\Gamma}(t_1)}_{\textup{P\&L slippage}} + \underbrace{\color{blue} \textup{P\&L}_{L}(t_1)}_{\textup{P\&L leakage}}, \end{eqnarray} where \begin{eqnarray} \label{eq:pnl_leakage} \textup{P\&L}_{L}(t_1) = V_{\textup{BS}}\left(t_1, X_1, v\right) - V_{\textup{BS}}\left(t_1, X_1, \xi_1\right). \end{eqnarray} \end{proposition} \begin{remark}[P\&L Slippage and Leakage] \label{rem:pnl_leakage} Proposition \ref{prop:carry_pnl} discloses two P\&L impacts of different natures brought about by the use of the BS-based hedging strategy. \begin{itemize} \item {\it Slippage}\quad Before time $t_1$, as time $t$ progresses, the insurer gradually perceives the mis-hedging of the BS model because she observes exposure to Gamma risk (Eq. \eqref{eq:Gamma_pnl}) in her marked P\&L (Eq. \eqref{eq:carry_pnl}), which would not have arisen had the market behaved in accordance with the BS model. We refer to the term \eqref{eq:Gamma_pnl} as the P\&L \textit{slippage} to stress this incremental bleeding. \item {\it Leakage}\quad Recall from {\bf (H3)} that at time $t_1$ the insurer can observe the fair market price of the variable annuity and her final P\&L reads \eqref{eq:mtm_pnl}. By comparing Eqs. \eqref{eq:carry_pnl} and \eqref{eq:mtm_pnl}, the insurer sees a sudden mark up/down for her position across $t_1$ caused by the second term in Eq. \eqref{eq:mtm_pnl}. To stress this discontinuity in $\textup{P\&L}(t)$ across $t_1$, we refer to the term \eqref{eq:pnl_leakage} as the P\&L \textit{leakage}. \end{itemize} \end{remark} \begin{remark} In the classical BS paradigm, the model's assumptions are supposed to be fulfilled by the market and thus the Gamma risk enters into the hedger's P\&L only when the hedging frequency is discrete.
Proposition \ref{prop:carry_pnl} reveals that the Gamma exposure is generally inevitable even if we assume continuous hedging due to the presence of model risk. In a more realistic situation, we shall expect extra P\&L and valuation adjustment terms taking into account the impact of discrete re-balancing. {\mathbb{E}}nd{remark} The conclusion of Proposition \ref{prop:carry_pnl} is not surprising: we have shown in Theorem \ref{thm:risk_decompose} that the fair price contains two VA terms on top of the BS model price, which implies that if the insurer only charges the BS model price as the premium it might overestimate/underestimate the entire hedging cost as reflected by the P\&L slippage and leakage terms illustrated in the above. Nevertheless, the insurer does not necessarily lose/make money depending on the relative order between the parameter $v$ and implied/realized variance; see Remark \ref{rem:breakeven}. \subsection{Multi-period Case} \label{sec:multi_period} The following theorem generalizes Theorem \ref{thm:risk_decompose} to the case with multiple withdrawal dates. \begin{theorem} \label{thm:va_multi_period} Suppose the assumptions in Appendix \ref{app:assum} hold. \begin{eqnarray} \label{eq:va_eq_multi_period} V(t) = V_{\textup{BS}}(t, X(t), v)+ V_{1}(t) + V_{2}(t) + V_3(t), \ \ t \in [t_{n-1}, t_n], \ \ 1\leq n \leq N-1, {\mathbb{E}}a where $V_i(t), i=1,2,3,$ are given in Eqs. {\mathbb{E}}qref{eq:va_gamma_process}--{\mathbb{E}}qref{eq:va_suboptimal_process} of Appendix \ref{app:proof_multi_period}. {\mathbb{E}}nd{theorem} \begin{remark}[Sub-optimal Withdrawal Risk] \label{rem:suboptimal_withdrawal_risk} With respect to the two-period case (see Eq. {\mathbb{E}}qref{eq:va_eq}), Eq. {\mathbb{E}}qref{eq:va_eq_multi_period} discloses one extra valuation adjustment term $V_{3}(t)$ which stems from the fact that the optimal withdrawal strategy associated with the ``true'' model is only suboptimal under the BS model; see Eq. {\mathbb{E}}qref{eq:va_suboptimal_process} for a concise definition of the sub-optimal withdrawal risk. Generally speaking, it is not surprising that the optimal withdrawal strategy of any given model is not in line with the one observed from the market due to the existence of model risk. Such a disagreement has been attributed to the irrationality of the policyholder or tax considerations in the literature; see \cite{Chen2008,Moenig2016,Knoller2016} and the references therein. This article provides an alternative explanation, that is, the model mis-specification risk. {\mathbb{E}}nd{remark} The following corollary directly follows from the above theorem. \begin{corollary} Under the conditions of Theorem \ref{thm:va_multi_period}, we have \begin{eqnarray} \label{eq:pnl_attribution} \nonumber {\textup{d}} V(t) &=& \underbrace{{\mathbb{P}}artial_t V_{\textup{BS}}(t, X(t), v) {\textup{d}} t}_{\textup{time decay}} + \underbrace{{\mathbb{P}}artial_x V_{\textup{BS}}(t, X(t), v){\textup{d}} X(t)}_{\textup{Delta effect}}\\ &&+ \underbrace{\frac{1}{2}{\mathbb{P}}artial_{x}^2 V_{\textup{BS}}(t, X(t), v){\textup{d}} [X](t)}_{\textup{realized vol effect}} + \underbrace{{\textup{d}} V_{2}(t)}_{\textup{future smile effect}} + \underbrace{{\textup{d}} V_{3}(t),}_{\textup{withdrawal effect}} {\mathbb{E}}a where the expressions of $V_2(t)$ and $V_3(t)$ are relegated to Definition \ref{def:va_adj} of Appendix \ref{app:proof_multi_period} for the clarity of presentation. 
{\mathbb{E}}nd{corollary} \begin{remark}[Exhaustive Risk Attribution] \label{rem:risk_attribution} The primary message delivered by the above corollary is that the P\&L of the variable annuity can be \textit{exhaustively} attributed to four drivers (i) spot price\footnote{To be precise, we refer to P\&L that is solely caused by the first-order-change of the underlying asset price as the spot price risk. The second-order effect is attributed to the future realized volatility.}, (ii) future realized volatility, (iii) future implied volatility and (iv) sub-optimal withdrawal. It is also worth noting the last two terms are missing in the classical attribution analysis based on the BS greeks. {\it Generally speaking, for any given model, should it diverge from the market, the classical greeks-based-attribution is incomplete, which is reflected by unexplained profit/loss.} {\mathbb{E}}nd{remark} \subsection{Separating Risk, Hedging, and Pricing Models} \label{sec:separation} Now we comment on the impacts of the model risk of the BS model from several aspects. \begin{itemize} \item Firstly and foremost, Remark \ref{rem:breakeven} reveals that the BS model enables the insurer to speculate the future implied and realized volatility via marking up/down the parameter $v$. This distorts the incentives of the insurer and is undesirable from the regulator's perspective. \item From the perspective of pricing, the problem with the BS model is more about its lack of flexibility. There is only one degree of freedom (the parameter $v$) that can be utilized by the pricer to mark up/down the realized volatility and future smile risks {\it simultaneously}. In other words, the insurer is not able to control the two risks {\it separately} despite that realized and implied volatility don't behave in line with each other in reality \cite{Bergomi2015} and have different impacts on the insurer's P\&L (see Remark \ref{rem:pnl_leakage}). \item From the viewpoint of hedging, the hedger's perception of the P\&L is misled by the BS model. We recall from Remark \ref{rem:pnl_leakage} that the presence of the future smile risk causes an instantaneous jump across the contract event date. This is very annoying: it is likely that the hedger's positive cumulative P\&L is suddenly skewed up by leakage term {\mathbb{E}}qref{eq:pnl_leakage}. \item Remark \ref{rem:risk_attribution} reveals that the BS model can be used as a decent tool for risk attribution analysis. Specifically, it precisely pinpoints all risk drivers behind the variable annuity; see Remark \ref{rem:risk_attribution}. {\mathbb{E}}nd{itemize} \section{Numerical Examples} \label{sec:num_studies} This section conducts some numerical experiments to study the efficacy of the BS-delta-hedging given the market does not respect the BS model assumptions. \subsection{Contract Specification} The specification of contract-related payoff functions is relegated to Appendix \ref{app:num_studies} for the clarity of the presentation. We confine our attention into the two-period case considered by Section \ref{sec:two_period_case} with $t_i=i, i=0,1,2$. 
\subsection{Dynamics of the Market} For illustration purposes, we postulate the market follows the Heston-type stochastic volatility model given by \begin{eqnarray}s \begin{cases} {\textup{d}} X(u) &= r X(u) {\textup{d}} u + \sqrt{\alpha(u)} X(u){\textup{d}} W_1(u),\ u \in [t_0, t_1],\\ {\textup{d}} \alpha(u) &= \kappa \left[\theta - \alpha(u)\right] {\textup{d}} u + \nu \sqrt{\alpha(u)}{\textup{d}} W_2(u), \ {\textup{d}} W_1(u) {\textup{d}} W_2(u) = \rho {\textup{d}} u, {\mathbb{E}}nd{cases} {\mathbb{E}}as with $u \in [t_0, t_1]$. We adopt the Heston parameters given in the following table. \begin{table}[ht] \begin{center} \caption{Heston parameters used for numerical experiments.} \label{tab:heston_para} \begin{tabular*}{0.9\textwidth}{l@{{\mathbb{E}}xtracolsep{\fill}}llllll} \toprule Parameter & $r$ & $\kappa$& $\nu$ & $\theta$& $\alpha(0)$& $\rho$ \\ \midrule Value & 0.0 &0.1& 0.1& 0.04& 0.04& -0.69\\ \bottomrule {\mathbb{E}}nd{tabular*} {\mathbb{E}}nd{center} {\mathbb{E}}nd{table} In our numerical experiments, we simulate the Heston process at equally-spaced time points $\{u_i\}_{i=0}^{N}$ with $u_0=t_0$, $u_{N}=t_1$ and step size $\Delta u = u_{i+1}-u_{i}$. Further denote $X_{i}^{{\color{blue} [m]}}$ as the value of $X(u_i)$ in $m$-th simulated path. In the consequent numerical experiments, we simulate 100 scenarios in total, which is sufficient for illustrating the main conclusion. \subsection{Efficacy of the BS-Hedging} We recall from Eqs. {\mathbb{E}}qref{eq:carry_pnl} and {\mathbb{E}}qref{eq:mtm_pnl} that the cumulative P\&L of the BS-hedging strategy is given by \begin{eqnarray}s \textup{P\&L}(t) &=& \left[\Pi(t) - V_{\textup{BS}}\left(t, X(t), v\right)\right]\\ &&+ \left[V_{\textup{BS}}\left(t_1, X(t_1), v\right) - V_{\textup{BS}}\left(t_1, X(t_1), \xi_1\right)\right]{\color{blue}\mathbf{1}\{t=t_1\}},\ \ t\in[t_0, t_1], {\mathbb{E}}as where \begin{eqnarray}s \Pi(t) = \int_0^{t}{\mathbb{P}}artial_x V_{\textup{BS}}(u, X(u), v) {\textup{d}} X(u) + r\int_0^t\left[\Pi(u) - {\mathbb{P}}artial_x V_{\textup{BS}}(u, X(u), v)X(u)\right]{\textup{d}} u. {\mathbb{E}}as Then the cumulative P\&L at time $t_1$ along the $m$-th simulated path is approximately given by: \begin{eqnarray}s \widetilde{{\mathbb{P}}nl}^{{\color{blue} [m]}}(v) = \widetilde{\Pi}^{{\color{blue} [m]}}(u_N, v) - V_{\textup{BS}}\left(u_N, X_{N}^{{\color{blue} [m]}}, \xi_1^{{\color{blue} [m]}}\right) {\mathbb{E}}as where $\widetilde{\Pi}$ is recursively defined by \begin{eqnarray}s \widetilde{\Pi}^{{\color{blue} [m]}}(u_{{\color{red} i+1}}, v) = \Delta_{\textup{BS}}^{{\color{blue} [m]}}(u_{\color{red} i})\left[\Delta X_i^{{\color{blue} [m]}} - rX_i^{{\color{blue} [m]}}\Delta u\right] + \widetilde{\Pi}^{{\color{blue} [m]}}(u_{{\color{red} i}}, v) \left[1 + r\Delta u\right], \ \ 0\leq i \leq N-1, {\mathbb{E}}as with $\widetilde{\Pi}^{{\color{blue} [m]}}(u_{0}, v) = V_{\textup{BS}}\left(0, X(t_0), v\right)$ and $ \Delta_{\textup{BS}}^{{\color{blue} [m]}}(u_i) := {\mathbb{P}}artial_x V_{\textup{BS}}\left(u_i, X_i^{{\color{blue} [m]}}, v\right); $ see Eq. {\mathbb{E}}qref{eq:bs_delta} of Appendix \ref{app:num_studies} for the expression of ${\mathbb{P}}artial_x V_{\textup{BS}}$. Figure \ref{fig:density_plot_imp_vol} displays the histograms of the simulated values of the BS implied variance of the contract $\xi_1^{{\color{blue} [m]}}$ and the underlying asset price $X_N^{{\color{blue} [m]}}$ at $t_1$ respectively. 
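The simulation and hedging recursion just described can be sketched in a few lines of Python. The sketch below generates Heston paths with the parameters of Table \ref{tab:heston_para} (an Euler scheme with full truncation, an implementation choice made here) and accumulates the discrete hedging recursion for $\widetilde{\Pi}$. The callables \texttt{bs\_value}, \texttt{bs\_delta} and \texttt{implied\_var\_at\_t1} are placeholders for $V_{\textup{BS}}$, $\partial_x V_{\textup{BS}}$ and $\xi_1$, which in the paper are obtained from the put-price representation and the likelihood ratio estimator of Appendix \ref{app:num_studies}; they are assumed here to accept vectorized spot arguments.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Heston parameters of Table 1 and the two-period setup t_0 = 0, t_1 = 1.
r, kappa, nu, theta, alpha0, rho = 0.0, 0.1, 0.1, 0.04, 0.04, -0.69
t0, t1, n_steps, n_paths = 0.0, 1.0, 252, 100
du = (t1 - t0) / n_steps

def simulate_heston(x0):
    """Simulate X(u_i) on the grid u_i = t_0 + i*du; shape (n_steps+1, n_paths)."""
    X = np.full(n_paths, float(x0))
    a = np.full(n_paths, alpha0)
    out = [X.copy()]
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n_paths)
        a_pos = np.maximum(a, 0.0)                       # full truncation
        X = X + r * X * du + np.sqrt(a_pos * du) * X * z1
        a = a + kappa * (theta - a_pos) * du + nu * np.sqrt(a_pos * du) * z2
        out.append(X.copy())
    return np.asarray(out)

def hedged_pnl_at_t1(paths, v, bs_value, bs_delta, implied_var_at_t1):
    """Cumulative P&L at t_1 of the BS-delta-hedging strategy along each path."""
    Pi = np.full(paths.shape[1], bs_value(t0, paths[0, 0], v))   # initial premium
    for i in range(n_steps):
        u_i = t0 + i * du
        delta_i = bs_delta(u_i, paths[i], v)
        Pi = delta_i * (paths[i + 1] - paths[i] - r * paths[i] * du) + Pi * (1.0 + r * du)
    xi1 = implied_var_at_t1(paths[-1])                   # prevailing implied variance
    return Pi - bs_value(t1, paths[-1], xi1)             # mark against the market value
\end{verbatim}
Varying the hedging frequency and the variance mark $v$ in such a sketch corresponds to the comparison reported in Figure \ref{fig:density_plot_pnl}.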
We can see that the implied variance at a future time point is random under the Heston model: it can lie arbitrarily above or below any given BS variance parameter $v$ and thus introduces the future smile risk and P\&L leakage; see Remark \ref{rem:pnl_leakage}. \begin{figure}[ht] \begin{center} \includegraphics[width=1.0\textwidth]{density_plot_imp_vol.png} \captionsetup{width=0.9\textwidth} \caption{Histograms of implied variance of the contract and the underlying price at $t_1$.} \label{fig:density_plot_imp_vol} {\mathbb{E}}nd{center} {\mathbb{E}}nd{figure} Figure \ref{fig:density_plot_pnl} plots the histograms of the cumulative P\&L at $t_1$ with varying hedging frequency (daily vs monthly) and BS variance parameter $v$ ($0.04$ vs $0.09$). We have two observations. Firstly, as one increases the hedging frequency from monthly to daily, the P\&L becomes more stable, as reflected by the more spiked shape of the density plot. Such a variance reduction is due to the fact that the BS-Delta-hedging eliminates the spot price risk. Secondly, as the hedger marks up the BS variance parameter $v$, it is more likely to get positive cumulative P\&L. This is in line with the conclusion of Proposition \ref{prop:carry_pnl}: both P\&L leakage and slippage terms are monotone in $v$; furthermore, they are positive should $v$ dominate the implied and realized variance over the course of the hedging. \begin{figure}[ht] \begin{center} \includegraphics[width=1.0\textwidth]{density_plot_pnl.png} \captionsetup{width=0.9\textwidth} \caption{Histograms of the cumulative P\&L at $t_1$.} \label{fig:density_plot_pnl} {\mathbb{E}}nd{center} {\mathbb{E}}nd{figure} To sum up, despite that the market disagrees with the BS model assumptions, the hedger can always stabilize her final P\&L by increasing hedging frequency and even benefit from the mis-hedging (model risk) by speculatively marking up the BS variance parameter. \section{Concluding Remarks} \label{sec:conclusion} We have shown that the fair value of the variable annuity can be decomposed into four parts (i) BS model price, (ii) valuation adjustment for future realized volatility, (iii) valuation adjustment for future implied volatility smile, (iv) and valuation adjustment for sub-optimal withdrawal risk. This discloses that the risk of the variable annuity can be exhaustively attributed to four corresponding factors. The insurer can conservatively price (ii) and (iii) simultaneously by marking up the variance parameter but has no control over (iv). This paper also shows that the impact of model risk on the cumulative P\&L of the classical BS-based-hedging strategy is reflected in two different ways: gradual slippage and instantaneous leakage. There is even a chance that the hedger can benefit from taking the model risk. Furthermore, the P\&L caused by spot price can always be eliminated by the strategy, which is \textit{immune} to the model risk. It is worth stressing that the primary thrust of this article is delineating the risk profile of variable annuities rather than promoting or criticizing the BS model. The BS model plays the role of an extractor for the spot price risk and our out-of-model adjustment formula further anatomizes the residual model risk. Pinpointing all risk drivers paves the way to systematically access the advantages and limitations of more advanced models (local volatility model, stochastic volatility model, stochastic local volatility model, etc) in pricing and risk-managing variable annuities. 
Specifically, one may scrutinize how a given model prices in each risk segment and accordingly underprice/overprice the product as a whole. This is left as a future research avenue. Generally speaking, one may decompose the model-free price into any given model price plus out-of-model adjustments. It will be fruitful to explore different decomposition formulas on top of different models. However, the more fundamental question is: does such a decomposition decouple the risks of different natures and allow the pricer to control them \textit{individually} by marking up/down some free parameter? In this light, our choice of the BS model as the extractor is not without significance. A fancier model does not necessarily give better pricing and risk-management of the variable annuity because its model risk is less transparent. Put another way, the simpler the model, the better one's grasp of the model risk. \section{Appendix Companion to Section \ref{sec:num_studies}} \label{app:num_studies} \subsection{Contract Specification} For the clarity of illustration, we consider a simple variable annuity contract specified as follows. \begin{itemize} \item[] \textit{Transition of investment account across withdrawal date}\quad Across each withdrawal date, the investment account balance is reduced by the withdrawal amount and accordingly, \begin{eqnarray*} K(x, a) = \max\left[x-a,0\right],\ a\in A_{n}(x)=\left[0, x\right]. \end{eqnarray*} \item[] \textit{Intermediate payoff function}\quad Given the withdrawal amount $a$ at $t_n$, the policyholder's reward is given by \begin{eqnarray*} f_n(x, a) = a(1-\eta),\ a \in A_{n}(x), \end{eqnarray*} where $\eta \in [0,1]$ is the withdrawal penalty. \item[] {\it Terminal payoff function}\quad At maturity, the policyholder receives the balance of the investment account or the guaranteed withdrawal amount, whichever is larger. Thus, the terminal payoff is given by \begin{eqnarray*} g(x) = x + \max\left[G-x, 0\right]. \end{eqnarray*} \end{itemize} Throughout Section \ref{sec:num_studies}, we adopt the set of parameters in the table below. \begin{table}[ht] \begin{center} \caption{Product parameters used for numerical experiments} \label{tab:product_para} \begin{tabular*}{0.5\textwidth}{c@{\extracolsep{\fill}}cc} \toprule Parameter & $\eta$ & $G$ \\ \midrule Value & 0.6 & 50\\ \bottomrule \end{tabular*} \end{center} \end{table} \subsection{BS Value Function and Delta} It follows from Eq. \eqref{eq:Bellman_eq_BS} that $V_{\textup{BS}}(t_2, x, v) = \max[G, x]$ and \begin{eqnarray*} V_{\textup{BS}}(t_1, x, v) = \sup_{a \in A_1(x)} \left[f_1(x,a) + K(x,a) + P_{\textup{BS}}(\delta, K(x,a), G, v)\right], \end{eqnarray*} where $P_{\textup{BS}}(\delta, x, k, v)$ denotes the BS put option price with spot price $x$, strike $k$, time-to-expiry $\delta$ and BS variance parameter $v$. The plot of the value function is displayed in Figure \ref{fig:bs_value_fun}, from which we can see how sensitive the function is to the BS variance parameter. When the market does not follow the BS model, we recall from Proposition \ref{prop:carry_pnl} that the fair value of the contract is given by the above BS value function with $v$ replaced by the prevailing implied variance at $t_1$, which can lie arbitrarily above or below $v$.
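For completeness, the value function at $t_1$ just displayed can be evaluated with a few lines of code. The sketch below assumes the $r=0$, $\delta=1$ setting of Section \ref{sec:num_studies}, uses the contract parameters of Table \ref{tab:product_para}, and approximates the supremum over withdrawals by a finite grid (an approximation introduced here for illustration only).
\begin{verbatim}
import numpy as np
from scipy.stats import norm

eta, G, r, delta = 0.6, 50.0, 0.0, 1.0   # product parameters and the r = 0, delta = 1 setup

def bs_put(tau, x, k, v):
    """Black-Scholes put with spot x, strike k, time-to-expiry tau, variance parameter v."""
    if x <= 0.0:
        return k * np.exp(-r * tau)
    s = np.sqrt(v * tau)
    d1 = (np.log(x / k) + (r + 0.5 * v) * tau) / s
    d2 = d1 - s
    return k * np.exp(-r * tau) * norm.cdf(-d2) - x * norm.cdf(-d1)

def value_t1(x, v, n_grid=201):
    """V_BS(t_1, x, v) = sup_{a in [0,x]} [ a(1-eta) + K(x,a) + P_BS(delta, K(x,a), G, v) ]."""
    best = -np.inf
    for a in np.linspace(0.0, x, n_grid):
        y = max(x - a, 0.0)              # post-withdrawal account value K(x, a)
        best = max(best, a * (1.0 - eta) + y + bs_put(delta, y, G, v))
    return best
\end{verbatim}
Evaluating \texttt{value\_t1} on a grid of spots for a few choices of $v$ gives a quick view of the sensitivity to the variance parameter discussed above.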
To avoid direct evaluation of the derivative of $V_{\textup{BS}}$, we adopt the likelihood ratio method \cite{Broadie1996} to compute \begin{eqnarray} \label{eq:bs_delta} \partial_x V_{\textup{BS}}(u, x, v) = {\mathbb{E}}^{\textup{BS}}\left[\frac{e^{-r(t_1 - u)}}{x\sqrt{v(t_1-u)}}d(u, x, v)V_{\textup{BS}}\left(t_1, X(t_1), v\right)\right], \end{eqnarray} where \begin{eqnarray*} d(u, x, v) = \frac{\ln \left[X(t_1)/x\right] - \left(r - v/2\right)(t_1 -u)}{\sqrt{v(t_1-u)}} \end{eqnarray*} and ${\mathbb{E}}^{\textup{BS}}$ denotes the expectation under which \begin{eqnarray*} \ln \left[X(t_1)/x\right] \sim \mathcal{N}\left((r-v/2)(t_1 - u), v(t_1 - u)\right). \end{eqnarray*} \begin{figure}[ht] \begin{center} \includegraphics[width=0.8\textwidth]{bs_value_fun.png} \captionsetup{width=0.95\textwidth} \caption{BS value function with varying variance parameters.} \label{fig:bs_value_fun} \end{center} \end{figure} \section{Technical Assumptions} \label{app:assum} Throughout this paper, we impose the following technical assumptions. Consider a filtered probability space $\left(\Omega, \left({\mathcal{F}}_{t}\right)_{t\in[t_0, t_N]}, \mathbb{Q}\right)$ satisfying the usual conditions. \begin{assumption} \begin{eqnarray*} \sup_{\mathbf{a}\in \mathcal{A}} {\mathbb{E}}^{\mathbb{Q}}\left[\sum_{n=1}^{N-1} |f_{n}(X_n, a_n)|\right] < \infty \ \textnormal{and}\ \sup_{\mathbf{a}\in \mathcal{A}} {\mathbb{E}}^{\mathbb{Q}}\left[g(X_{N})\right] < \infty, \end{eqnarray*} where \begin{eqnarray*} \mathcal{A}=\left\{\left.\mathbf{a}=\{a_{n}\}_{n=1}^{N-1}\right| a_{n}\ \textnormal{is}\ {\mathcal{F}}_{t_n}\textnormal{-measurable and}\ a_n \in A_n(X_{n}) \right\} \end{eqnarray*} is the set of admissible controls. \end{assumption} \begin{assumption} \label{assum:optimal_policy} There exists $\mathbf{a}^{\star}= \{a_{n}^{\star}\}_{n=1}^{N-1}\in \mathcal{A}$ such that the supremum in the Bellman equation \eqref{eq:Bellman_eq} is attained at $a_{n}^{\star}$ for $1\leq n \leq N-1$. \end{assumption} \begin{assumption} The investment account of the variable annuity is tied to a tradable asset and accordingly, its value between two consecutive withdrawal dates is given by \begin{eqnarray*} X(t) = X_{n^{+}} \frac{S(t)}{S(t_n)},\ t\in{\color{blue} (t_n}, t_{n+1}], \ 0\leq n\leq N-1, \end{eqnarray*} where $S(t)$ is the price process of the underlying asset. Assume that $S(t)$ is a continuous semi-martingale and does not pay dividends. Then we have \begin{eqnarray} \label{eq:martingale} {\mathbb{E}}^{\mathbb{Q}}\left[\left.{\textup{d}} X(u) - rX(u){\textup{d}} u\right|{\mathcal{F}}_t\right] = 0,\ \ t_n < t\leq u\leq t_{n+1}, \ \ 0\leq n\leq N-1, \end{eqnarray} with $\mathbb{Q}$ being the risk-neutral measure. \end{assumption} \begin{remark} The zero-dividend assumption can be removed by replacing $r$ in Eq. \eqref{eq:martingale} by $r-d$ with $d$ being the dividend yield. \end{remark} \begin{assumption} The value function under the BS model $V_{\textup{BS}}(t_n, x)$ defined via Eq. \eqref{eq:Bellman_eq_BS} is convex in $x$. In particular, when $n=N$, the terminal payoff $V_{\textup{BS}}(t_N, x) = g(x)$ is a convex function. \end{assumption} \begin{remark} The above assumption ensures that the BS-implied variance in Definition \ref{def:imp_var} below is well-defined. The BS value function of the variable annuity is convex under very mild conditions; see e.g. \cite{Azimzadeh2015,Huang2016,Huang2017,Shen2020}.
The convexity is a sufficient condition for the existence of the implied variance and thus can be further relaxed. {\mathbb{E}}nd{remark} \section{Proof of Theorem \ref{thm:risk_decompose}} \label{app:proof_risk_decompose} \begin{proof} Recall that when $N=2$ we have $V(t_2) = g(X(t_2)).$ Then it follows from Eq. {\mathbb{E}}qref{eq:post_withdrawal_value} that \begin{eqnarray}s C_{1}(x) = {\mathbb{E}}_{1,x}^{\mathbb{Q}} \left[e^{-r\delta}V(t_2)\right] = {\mathbb{E}}_{1,x}^{\mathbb{Q}} \left[e^{-r\delta}g(X(t_2))\right] = C_{1}^{\textup{BS}}\left(x,\xi_{1}\right), {\mathbb{E}}as where the last equality is by the definition of $\xi_1$. By Eqs. {\mathbb{E}}qref{eq:Bellman_eq} and {\mathbb{E}}qref{eq:Bellman_eq_BS}, we get \begin{eqnarray}s V(t_1) &=& \sup_{a \in A_1(X_1)} \left[f_1(X_1, a) + C_1\left(K(X_1, a)\right)\right] \\ &=& \sup_{a \in A_1(X_1)} \left[f_1(X_1, a) + C_1^{\textup{BS}}\left(K(X_1, a), {\color{blue} \xi_1}\right)\right] = V_{\textup{BS}}\left(t_1, X_1, {\color{blue}\xi_1}\right). {\mathbb{E}}as Then it follows from Eq. {\mathbb{E}}qref{eq:martingale_eq} that \begin{eqnarray}s V(t) = {\mathbb{E}}^{\mathbb{Q}}\left[\left. e^{-r (t_1 -t)}V(t_1)\right|{\mathcal{F}}_t\right] = {\mathbb{E}}^{\mathbb{Q}} \left[\left. e^{-r (t_1 -t)}V_{\textup{BS}}\left(t_1, X(t_1), {\color{blue} v}\right)\right|{\mathcal{F}}_t\right] + V_{2}(t), {\mathbb{E}}as where the last equality follows by Eq. {\mathbb{E}}qref{eq:smile_va}. On the other hand, the BS equation {\mathbb{E}}qref{eq:Bellman_eq} in conjunction with the It\^{o}'s lemma implies \begin{eqnarray}s e^{-r (t_1 -t)}V_{\textup{BS}}\left(t_1, X(t_1), v\right) - V_{\textup{BS}}(t, X(t), v) &=& \int_{t}^{t_1}e^{-r(u - t)} \frac{1}{2}{\mathbb{P}}artial_{x}^2V_{\textup{BS}}\left[{\textup{d}}\left[X\right](u)-vX^2(u){\textup{d}} u\right]\\ &&+ \int_{t}^{t_1} e^{-r(u-t)}{\mathbb{P}}artial_x V_{\textup{BS}} \left[{\textup{d}} X(u) - r X(u){\textup{d}} u\right]. {\mathbb{E}}as Taking expectations on both sides of the above equation gives \begin{eqnarray}s {\mathbb{E}}^{\mathbb{Q}} \left[\left. e^{-r (t_1 -t)}V_{\textup{BS}}\left(t_1, X(t_1), {\color{blue} v}\right)\right|{\mathcal{F}}_t\right] &=& V_{\textup{BS}}(t, X(t), {\color{blue} v}) + V_1(t)\\ &&+ {\mathbb{E}}^{\mathbb{Q}}\left[\left.\int_{t}^{t_1} e^{-r(u-t)}{\mathbb{P}}artial_x V_{\textup{BS}} \left[{\textup{d}} X(u) - r X(u){\textup{d}} u\right]\right|{\mathcal{F}}_t\right]\\ &=& V_{\textup{BS}}(t, X(t), v) + V_1(t) {\mathbb{E}}as where the last equality follows by Eq. {\mathbb{E}}qref{eq:martingale}. Putting the last three displays together yields Eq. {\mathbb{E}}qref{eq:va_eq}. This completes the proof. {\mathbb{E}}nd{proof} \section{Proof for Proposition \ref{prop:carry_pnl}} \label{app:proof_carry_pnl} \begin{proof} Let $\Pi(u)$ be the value of the insurer's hedges as of time $u$. In accordance with the setup of the hedging strategy {\bf (H1)}--{\bf (H3)}, the hedges are made up of the underlying asset $X(u)$ and cash position $B(u)$: \begin{eqnarray}s \Pi(u) = \Delta_{\textup{BS}}(u)X(u) + B(u), \ \ \textup{with}\ \ \Delta_{\textup{BS}}(u) := {\mathbb{P}}artial_x V_{\textup{BS}}(u, X(u), v) {\mathbb{E}}as which satisfies the self-financing condition: \begin{eqnarray} \label{eq:self-financing} {\textup{d}} \Pi(u) = \Delta_{\textup{BS}}(u) {\textup{d}} X(u) + rB(u) {\textup{d}} u, {\mathbb{E}}a and the initial cost constraint $\Pi(0)=V_{\textup{BS}}(0, X_0, v)$. In the sequel, we prove Eqs. {\mathbb{E}}qref{eq:carry_pnl} and {\mathbb{E}}qref{eq:mtm_pnl} consequently. 
The first equality of Eq. {\mathbb{E}}qref{eq:carry_pnl} follows by {\bf (H2)}. By It\^{o}'s lemma, we get \begin{eqnarray}s {\textup{d}} V_{\textup{BS}}(u, X(u), v) = {\mathbb{P}}artial_t V_{\textup{BS}} {\textup{d}} u + \Delta_{\textup{BS}}(u) {\textup{d}} X(u) + \frac{1}{2}{\mathbb{P}}artial_{x}^2V_{\textup{BS}}\left[{\textup{d}} X(u)\right]^2, {\mathbb{E}}as which in conjunction with Eq. {\mathbb{E}}qref{eq:self-financing} yields \begin{eqnarray}s {\textup{d}} \left[V_{\textup{BS}}(u, X(u),v)- \Pi(u)\right] &=& {\mathbb{P}}artial_t V_{\textup{BS}} {\textup{d}} u + \frac{1}{2}{\mathbb{P}}artial_{x}^2V_{\textup{BS}}\left[{\textup{d}} X(u)\right]^2\\ &&- r\left[\Pi(u) - \Delta_{\textup{BS}}(u) X(u)\right]{\textup{d}} u. {\mathbb{E}}as Recall that $V_{\textup{BS}}$ satisfies Eq. {\mathbb{E}}qref{eq:BS_eq}. Plugging it into the above equation yields \begin{eqnarray}s {\textup{d}} \left[V_{\textup{BS}}(u, X(u), v) - \Pi(u)\right] &=& \frac{1}{2}{\mathbb{P}}artial_{x}^2V_{\textup{BS}}\left[{\textup{d}}\left[X\right](u)-vX^2(u){\textup{d}} u\right]\\ &&+ r\left[V_{\textup{BS}}(u, X(u), v) - \Pi(u)\right]{\textup{d}} u. {\mathbb{E}}as Integrating the above equation over $[t_0, t]$ and exploiting the fact that $e^{-r u} X(u)$ is a martingale implies \begin{eqnarray}s V_{\textup{BS}}(t, X(t), v) - \Pi(t) &=& e^{r(t-t_0)}\left[V_{\textup{BS}}(0, X_0, v) - \Pi(0)\right] \\ &&+ \frac{1}{2}e^{r(t-t_0)} \int_{t_0}^{t} e^{-r(u-t_0)}{\mathbb{P}}artial_{x}^2V_{\textup{BS}}\left[{\textup{d}}\left[X\right](u)-vX^2(u){\textup{d}} u\right]. {\mathbb{E}}as Then Eq. {\mathbb{E}}qref{eq:carry_pnl} follows by the fact that $V_{\textup{BS}}(0, X_0, v) = \Pi(0)$. Finally, Eq. {\mathbb{E}}qref{eq:mtm_pnl} follows by {\bf (H3)}. This completes the proof of Proposition \ref{prop:carry_pnl}. {\mathbb{E}}nd{proof} \section{Proof of Theorem \ref{thm:va_multi_period}} \label{app:proof_multi_period} \subsection{Preliminaries} \begin{definition} \label{def:quasi_BS_post_withdrawal} Consider the following BS-type PDE: \begin{eqnarray} \label{eq:BS_eq_xi} \begin{cases} u|_{t=t_{n+1}}=V_{\textup{BS}}(t_{n+1}, x, {\color{spicy} v}),&\\ {\mathbb{P}}artial_t u + rx {\mathbb{P}}artial_x u + {\color{blue} \frac{\xi}{2}}x^2{\mathbb{P}}artial^2_{xx} u - r u = 0,\ \ t\in[t_n, t_{n+1}),\\ {\mathbb{E}}nd{cases} {\mathbb{E}}a We define the function \begin{eqnarray} \widetilde{C}_n(x, {\color{blue} \xi}) = u\left(t_{n}^{+}, x\right),\ \ 1\leq n \leq N-1. {\mathbb{E}}a {\mathbb{E}}nd{definition} \begin{remark} \label{rem:20220425} It is worth stressing the difference between the functions $\widetilde{C}_n(\cdot, \xi)$ and $C_{n}^{\textup{BS}}(\cdot)$ (see Eq. {\mathbb{E}}qref{eq:post_withdrawal_BS}). \begin{enumerate} \item They are both defined via the BS-type PDE but with BS variance parameters $\xi$ and $v$, respectively; see Eqs. {\mathbb{E}}qref{eq:BS_eq} and {\mathbb{E}}qref{eq:BS_eq_xi} respectively. \item The time boundary conditions at $t=t_{n+1}$ of the two PDEs are the same. {\mathbb{E}}nd{enumerate} See also the following Lemma \ref{lemma:20220423}. 
\end{remark} \begin{definition} \label{def:imp_var} We say $\xi_n$ is the BS-implied variance of the European option with payoff $V_{\textup{BS}}\left(t_{n+1}, X_{n+1}, v\right)$ observed at $t_n$ if it satisfies the following equation: \begin{eqnarray} \label{eq:imp_var} {\mathbb{E}}_{n,x}^{\mathbb{Q}}\left[e^{-r\delta}V_{\textup{BS}}\left(t_{n+1}, X_{n+1}, v\right)\right] = \widetilde{C}_n(x, \xi_n), \end{eqnarray} where $\widetilde{C}_n(x, \xi)$ is defined in Definition \ref{def:quasi_BS_post_withdrawal} and we recall that ${\mathbb{E}}_{n,x}^{\mathbb{Q}}[\cdot]:={\mathbb{E}}^{\mathbb{Q}}\left[\cdot|\mathcal{G}_{n}^{x}\right]$ with $ \mathcal{G}_{n}^{x}:=\sigma\left(\{X_{n^{+}}=x\}\bigcup \mathcal{F}_{t_n}\right). $ \end{definition} Let \begin{eqnarray} \label{eq:obj_BS} J_n(x, a, \theta) = f_n(x,a) + \widetilde{C}_n\left(K(x,a), \theta\right) \end{eqnarray} and \begin{eqnarray} \label{eq:quasi_BS_value_fun} \widetilde{V}_n(x,\theta) = \sup_{a \in A_n(x)} J_n(x, a, \theta). \end{eqnarray} \begin{lemma} \label{lemma:20220423} $\widetilde{C}_{n}(x, v) = C_{n}^{\textup{BS}}(x, v)$ and $\widetilde{V}_n(x, v) = V_{\textup{BS}}(t_n, x, v)$ for $1\leq n \leq N-1$, where $C_n^{\textup{BS}}(x,v)$ and $V_{\textup{BS}}(t_n,x,v)$ are given by Eqs. \eqref{eq:post_withdrawal_BS} and \eqref{eq:Bellman_eq_BS} respectively. \end{lemma} \begin{proof} The proof directly follows from Remark \ref{rem:20220425} and is thus omitted. \end{proof} Next, we define the valuation adjustments (VAs) for the future realized volatility, future smile and sub-optimal withdrawal risks respectively. \begin{definition} \label{def:va_adj} \begin{enumerate} \item The valuation adjustment for the future realized volatility is defined as \begin{eqnarray} \label{eq:va_gamma_process} V_1(t) = {\mathbb{E}}^{\mathbb{Q}}\left[\left.\int_{t}^{t_n} e^{-r(u-t)}\frac{1}{2}\partial_{x}^2 V_{\textup{BS}} \left[{\textup{d}} [X](u) - vX^2(u){\textup{d}} u\right]\right|{\mathcal{F}}_t\right], \ \ t\in(t_{n-1}, t_{n}], \end{eqnarray} with the convention $V_1(t_n) =0$ for $0\leq n \leq N-1$. \item The valuation adjustment for the future smile risk is defined as \begin{eqnarray} \label{eq:va_future_smile_process} V_2(t) = {\mathbb{E}}^{\mathbb{Q}}\left[\left.\sum_{n=1}^{N-1}e^{-r(t_n - t)} \left[\widetilde{V}_n(X_n, {\color{blue} \xi_n}) - V_{\textup{BS}}(t_n, X_n, v) \right]\mathbf{1}_{\{t \leq t_n\}} \right|{\mathcal{F}}_t\right]. \end{eqnarray} \item Let $a_n^{\star}$ be the optimal policy at $t_n$ of \eqref{eq:Bellman_eq}; see Assumption \ref{assum:optimal_policy}. Suppose $\widetilde{a}_n$ satisfies $\widetilde{V}_n(X_n,\xi_n) = J_n\left(X_n, \widetilde{a}_n, \xi_n\right)$. The valuation adjustment for suboptimal withdrawal risk is defined as \begin{eqnarray} \label{eq:va_suboptimal_process} V_3(t) &=& {\mathbb{E}}^{\mathbb{Q}} \left[\left.\sum_{n=1}^{N-1}e^{-r(t_n - t)} \left[ J_n\left(X_n, {\color{blue} a_n^{\star}}, \xi_n\right) - J_{n}\left(X_n, {\color{blue} \widetilde{a}_n}, \xi_n\right)\right]\mathbf{1}_{\{t \leq t_n\}} \right|{\mathcal{F}}_t\right].
{\mathbb{E}}a {\mathbb{E}}nd{enumerate} {\mathbb{E}}nd{definition} \begin{lemma} The valuation adjustment for the future smile risk satisfies the following recursion system: \begin{eqnarray} \label{eq:va_future_smile} V_2(t_n) = \begin{cases} 0, & n=N,\\ \widetilde{V}_n(X_n, {\color{blue} \xi_n}) - V_{\textup{BS}}(t_n, X_n, {\color{blue} v}) + C_{2,n}\left(K(X_n, a_n^{\star})\right),&1\leq n \leq N-1, {\mathbb{E}}nd{cases} {\mathbb{E}}a where \begin{eqnarray} \label{eq:exp_future_smile} C_{2, n}(x) = {\mathbb{E}}_{n,x}^{\mathbb{Q}}[e^{-r\delta}V_2(t_{n+1})]. {\mathbb{E}}a Furthermore, \begin{eqnarray} \label{eq:va_suboptimal_withdrawal} V_3(t_n) = \begin{cases} 0, & n=N,\\ J_n\left(X_n, {\color{blue} a_n^{\star}}, \xi_n\right) - J_{n}\left(X_n, {\color{blue} \widetilde{a}_n}, \xi_n\right) +C_{3,n}\left(K(X_n, a_n^{\star})\right),&1\leq n \leq N-1, {\mathbb{E}}nd{cases} {\mathbb{E}}a where \begin{eqnarray} \label{eq:exp_suboptimal_withdrawal} C_{3, n}(x) = {\mathbb{E}}_{n,x}^{\mathbb{Q}}[e^{-r\delta}V_3(t_{n+1})]. {\mathbb{E}}a {\mathbb{E}}nd{lemma} \begin{proof} The conclusion follows from a straightforward application of the Tower property of the conditional expectation. {\mathbb{E}}nd{proof} \begin{lemma} \label{lemma:va_eq} \begin{eqnarray} \label{eq:20220423} V(t_n) = V_{\textup{BS}}(t_n, X_n, v) + V_2(t_n) + V_3(t_n), \ \ 1 \leq n \leq N-1, {\mathbb{E}}a where $V_2(t_n)$ and $V_3(t_n)$ are given by Eqs. {\mathbb{E}}qref{eq:va_future_smile_process} and {\mathbb{E}}qref{eq:va_suboptimal_process} respectively. {\mathbb{E}}nd{lemma} \begin{proof} We prove the lemma via an induction argument. \begin{enumerate} \item \begin{enumerate} \item By Definition \ref{def:imp_var} and Eq. {\mathbb{E}}qref{eq:post_withdrawal_value}, we have $C_{N-1}(x) = \widetilde{C}_{N-1}(x, \xi_{N-1})$. Plugging this into Eq. {\mathbb{E}}qref{eq:Bellman_eq} implies that \begin{eqnarray}s V(t_{N-1}) &=& \sup_{a\in A_{N-1}(X_{N-1})}\left[f_{N-1}\left(X_{N-1}, a\right) + C_{N-1}\left(K\left(X_{N-1}, a\right)\right)\right]\\ &=& \sup_{a\in A_{N-1}(X_{N-1})}\left[f_{N-1}\left(X_{N-1}, a\right) + \widetilde{C}_{N-1}\left(K\left(X_{N-1}, a\right), {\color{blue}\xi_{N-1}}\right)\right]\\ &=& \sup_{a\in A_{N-1}(X_{N-1})}J_{N-1}\left(X_{N-1}, a, \xi_{N-1}\right)\\ &=& \widetilde{V}_{N-1}(X_{N-1}, {\color{blue}\xi_{N-1}}), {\mathbb{E}}as where the last equality is by Eq. {\mathbb{E}}qref{eq:quasi_BS_value_fun} and thus $a_{N-1}^{\star} = \widetilde{a}_{N-1}$. \item In view of Eqs. {\mathbb{E}}qref{eq:va_future_smile} and {\mathbb{E}}qref{eq:exp_future_smile}, we get \begin{eqnarray}s V_2(t_{N-1}) = \widetilde{V}_{N-1}(X_{N-1}, {\color{blue}\xi_{N-1}}) - V_{\textup{BS}}(t_{N-1}, X_{N-1}, {\color{blue} v}). {\mathbb{E}}as \item Furthermore, combining Eqs. {\mathbb{E}}qref{eq:va_suboptimal_withdrawal} and {\mathbb{E}}qref{eq:exp_suboptimal_withdrawal} implies that $V_3(t_{N-1})=0$. {\mathbb{E}}nd{enumerate} Putting (a)-(c) together implies Eq. {\mathbb{E}}qref{eq:20220423} holds for $n=N-1$. \item Now by induction we assume Eq. {\mathbb{E}}qref{eq:20220423} holds for $n$. Plugging Eq. {\mathbb{E}}qref{eq:20220423} into Eq. 
\eqref{eq:post_withdrawal_value} gives \begin{eqnarray*} C_{n-1}(x) &=&{\mathbb{E}}_{n-1,x}^{\mathbb{Q}} \left[e^{-r\delta} V_{\textup{BS}}(t_{n}, X_n, v)\right] + e^{-r\delta}{\mathbb{E}}_{n-1,x}^{\mathbb{Q}} \left[ V_2(t_n) + V_3(t_n)\right]\\ &=& \widetilde{C}_{n-1}\left(x, {\color{blue} \xi_{n-1}}\right) + C_{2,n-1}\left(x\right) + C_{3,n-1}\left(x\right), \end{eqnarray*} where the last equality follows by Eqs. \eqref{eq:imp_var}, \eqref{eq:exp_future_smile} and \eqref{eq:exp_suboptimal_withdrawal}. The above display in conjunction with Eq. \eqref{eq:Bellman_eq} gives \begin{eqnarray*} V(t_{n-1}) &=& f_{n-1}\left(X_{n-1}, a_{n-1}^{\star}\right) + \widetilde{C}_{n-1}\left(K\left(X_{n-1},a_{n-1}^{\star}\right), \xi_{n-1}\right) \\ &&+\sum_{j=2}^3 C_{j,n-1}\left(K\left(X_{n-1},a_{n-1}^{\star}\right)\right)\\ &=& J_{n-1}\left(X_{n-1}, {\color{blue} a_{n-1}^{\star}}, \xi_{n-1}\right) +\sum_{j=2}^3 C_{j,n-1}\left(K\left(X_{n-1},a_{n-1}^{\star}\right)\right)\\ &=& \widetilde{V}_{n-1}\left(X_{n-1}, {\color{blue} v}\right) \\ && + \underbrace{\widetilde{V}_{n-1}\left( X_{n-1}, {\color{blue} \xi_{n-1}}\right) - V_{\textup{BS}}\left(t_{n-1}, X_{n-1}, {\color{blue} v}\right) + C_{2,n-1}\left(K\left(X_{n-1},a_{n-1}^{\star}\right)\right)}_{=V_2(t_{n-1})}\\ &&+ \underbrace{J_{n-1}\left(X_{n-1}, {\color{blue} a_{n-1}^{\star}}, \xi_{n-1}\right) - J_{n-1}\left(X_{n-1}, {\color{blue} \widetilde{a}_{n-1}}, \xi_{n-1}\right) + C_{3,n-1}\left(K\left(X_{n-1},a_{n-1}^{\star}\right)\right)}_{=V_3(t_{n-1})}\\ &=& V_{\textup{BS}}\left(t_{n-1}, X_{n-1}, v\right) + V_2(t_{n-1}) + V_3(t_{n-1}), \end{eqnarray*} where the second equality is by Eq. \eqref{eq:obj_BS} and the last equality follows by Eqs. \eqref{eq:va_future_smile} and \eqref{eq:va_suboptimal_withdrawal} together with Lemma \ref{lemma:20220423}. This proves Eq. \eqref{eq:20220423} for $n-1$. \end{enumerate} \end{proof} \subsection{Proof of Theorem \ref{thm:va_multi_period}} \begin{proof} Recall from Lemma \ref{lemma:va_eq} that \begin{eqnarray*} V(t_n) = V_{\textup{BS}}(t_n, X_n, v) + V_2(t_n) + V_3(t_n), \ \ 1 \leq n \leq N-1. \end{eqnarray*} Plugging the above display into Eq. \eqref{eq:martingale_eq} gives \begin{eqnarray} \nonumber V(t) &=& {\mathbb{E}}^{\mathbb{Q}}\left[\left. e^{-r(t_n - t)}V_{\textup{BS}}(t_n, X_n, v)\right| {\mathcal{F}}_t\right] + \sum_{j=2}^3{\mathbb{E}}^{\mathbb{Q}}\left[\left. e^{-r(t_n - t)}V_j(t_n)\right| {\mathcal{F}}_t\right]\\ \label{eq:20220508} &=& {\mathbb{E}}^{\mathbb{Q}}\left[\left. e^{-r(t_n - t)}V_{\textup{BS}}(t_n, X_n, v)\right| {\mathcal{F}}_t\right] + \sum_{j=2}^3 V_j(t), \end{eqnarray} where the last equality follows by the Tower property; see Eqs. \eqref{eq:va_future_smile_process} and \eqref{eq:va_suboptimal_process}. On the other hand, applying It\^{o}'s lemma to $e^{-r(t_n - t)}V_{\textup{BS}}(t_n, X_n, v)$ gives \begin{eqnarray*} e^{-r(t_n - t)}V_{\textup{BS}}(t_n, X_n, v) - V_{\textup{BS}}(t, X(t), v) &=& \int_{t}^{t_n} e^{-r(u-t)}\left[\partial_t V_{\textup{BS}}-rV_{\textup{BS}}\right]{\textup{d}} u\\ &&+\int_{t}^{t_n} e^{-r(u-t)}\Delta_{\textup{BS}}(u) {\textup{d}} X(u)\\ &&+ \int_{t}^{t_n} e^{-r(u-t)}\frac{1}{2}\partial_{x}^2V_{\textup{BS}} {\textup{d}} [X](u), \end{eqnarray*} for $u \in [t, t_n]$.
Recall that $V_{\textup{BS}}$ satisfies the BS equation: \begin{eqnarray*} \partial_t V_{\textup{BS}} + rx \partial_x V_{\textup{BS}} + \frac{v}{2}x^2\partial^2_{xx} V_{\textup{BS}}- rV_{\textup{BS}} = 0. \end{eqnarray*} Hence, \begin{eqnarray*} e^{-r(t_n - t)}V_{\textup{BS}}(t_n, X_n, v) - V_{\textup{BS}}(t, X(t), v) &=& \int_{t}^{t_n} e^{-r(u-t)}\partial_x V_{\textup{BS}} \left[{\textup{d}} X(u) - r X(u) {\textup{d}} u\right]\\ &&+ \int_{t}^{t_n} e^{-r(u-t)}\frac{1}{2}\partial_{x}^2 V_{\textup{BS}} \left[{\textup{d}} [X](u) - vX^2(u){\textup{d}} u\right]. \end{eqnarray*} Taking expectations on both sides of the above equation gives \begin{eqnarray*} {\mathbb{E}}^{\mathbb{Q}}\left[\left. e^{-r(t_n - t)}V_{\textup{BS}}(t_n, X_n, v)\right| {\mathcal{F}}_t\right] &=& V_{\textup{BS}}(t, X(t), v) + V_{1}(t) \\ &&+ {\mathbb{E}}^{\mathbb{Q}}\left[\left.\int_{t}^{t_n} e^{-r(u-t)}\partial_x V_{\textup{BS}} \left[{\textup{d}} X(u) - r X(u) {\textup{d}} u\right]\right|{\mathcal{F}}_t\right]\\ &=& V_{\textup{BS}}(t, X(t), v) + V_{1}(t), \end{eqnarray*} where the last equality follows by Eq. \eqref{eq:martingale}. The above display in conjunction with Eq. \eqref{eq:20220508} proves Theorem \ref{thm:va_multi_period}. \end{proof} \end{document}
\begin{document} \begin{abstract} We study the properties of linear and non-linear determining functionals for dissipative dynamical systems generated by PDEs. The main attention is paid to the lower bounds for the number of such functionals. In contrast to the common paradigm, it is shown that the optimal number of determining functionals (the so-called determining dimension) is strongly related to the proper dimension of the set of equilibria of the considered dynamical system rather than to the dimension of the global attractor and the complexity of the dynamics on it. In particular, in the generic case where the set of equilibria is finite, the determining dimension equals one (in complete agreement with the Takens delayed embedding theorem) no matter how complex the underlying dynamics is. The obtained results are illustrated by a number of explicit examples. \end{abstract} \subjclass[2010]{35B40, 35B42, 37D10, 37L25} \keywords{Determining functionals, finite-dimensional reduction, Takens delay embedding theorem} \thanks{ This work is partially supported by the grant 19-71-30004 of RSF (Russia), and the Leverhulme grant No. RPG-2021-072 (United Kingdom) } \maketitle \tableofcontents \section{Introduction}\label{s0} It is believed that in many cases the limit dynamics generated by dissipative PDEs is essentially finite-dimensional and can be effectively described by finitely many parameters (the so-called order parameters in the terminology of I. Prigogine) governed by a system of ODEs describing its evolution in time (which is usually referred to as an Inertial Form (IF) of the system considered), see \cite{BV92,CV02,ha,MZ08,R01,SY02,T97} and references therein. A mathematically rigorous interpretation of this conjecture naturally leads to the concept of an Inertial Manifold (IM). By definition, an IM is an invariant smooth (at least Lipschitz) finite-dimensional submanifold in the phase space which is exponentially stable and possesses the so-called exponential tracking property, see \cite{CFNT89,FST88,M-PS88,M91,R94,Z14} and references therein. However, the existence of such an object requires rather restrictive extra assumptions on the considered system, therefore an IM may a priori not exist for many interesting equations arising in applications, see \cite{EKZ13,KZ18,KZ17,Z14} for more details. In particular, the existence or non-existence of an IM for the 2D Navier-Stokes system remains an open problem, and for simplified models of 1D coupled Burgers type systems such a manifold may not exist, see \cite{KZ18,KZ17,Z14}. \par For this reason, a number of weaker interpretations of the finite-di\-men\-sio\-na\-lity conjecture have been intensively studied during the last 50 years. One of them is related to the concept of a global attractor and interprets its finite-dimensionality in terms of Hausdorff and/or box-counting dimensions, keeping in mind the Man\'e projection theorem. Indeed, it is well-known that under some relatively weak assumptions the dissipative system considered possesses a compact global attractor of finite box-counting dimension, see \cite{BV92,MZ08,R01,T97} and references therein. Then the corresponding IF can be constructed by projecting the considered dynamical system to a generic plane of sufficiently large, but finite dimension.
The key drawback of this approach is related to the fact that the obtained IF is only H\"older continuous (which is not enough in general even for the uniqueness of solutions of the reduced system of ODEs). Moreover, as recent examples show, the limit dynamics may remain in a sense infinite-dimensional despite the existence of a H\"older continuous IF (at least, it may demonstrate some features, like super-exponentially attracting limit cycles, traveling waves in Fourier space, etc., which are impossible in classical dynamics), see \cite{Z14} for more details. \par An alternative even more weaker approach (which actually has been historically the first, see \cite{FP67,Lad72}) is related to the concept of {\it determining functionals}. By definition, a system of (usually linear) continuous functionals $\mathcal F:=\{F_1,\cdots,F_N\}$ on the phase space $H$ is called (asymptotically) determining if for any two trajectories $u_1(t)$ and $u_2(t)$, the convergence $$ F_n(u_1(t))-F_n(u_2(t))\to0 \ \text{ as $t\to\infty$ for all $n=1,\cdots,N$} $$ implies that $u_1(t)\to u_2(t)$ in $H$ as $t\to\infty$. Thus, in the case where $\mathcal F$ exists, the limit behavior of a trajectory $u(t)$ as $t\to\infty$ is uniquely determined by the behaviour of finitely many scalar quantities $F_1(u(t))$, $\cdots$, $F_N(u(t))$, see e.g. \cite{Chu} for more details. \par To be more precise, it has been shown in \cite{FP67,Lad72} that, for the case of Navier-Stokes equations in 2D, a system generated by first $N$ Fourier modes is asymptotically determining if $N$ is large enough. Later on the notions of determining nodes, volume elements, etc., have been introduced and various upper bounds for the number $N$ of elements in such systems have been obtained for various dissipative PDEs (see, e.g., \cite{Chu1,FMRT, FT, FTT,JT} and references therein). More general classes of determining functionals have been introduced in \cite{Cock1,Cock2}, see also \cite{Chu,Chu1}. \par We emphasize from the very beginning that the existence of the finite system $\mathcal F$ of determining functionals {\it does not} imply in general that the quantities $F_n(u(t))$, $n\in1,\cdots,N$ obey a system of ODEs, therefore, this approach does not a priori justify the finite-dimensionality conjecture. Moreover, these quantities usually obey only the delay differential equations (DDEs) whose phase space remain infinite-dimensional, see Section \ref{s2} below. Nevertheless, they may be useful for many purposes, for instance, for establishing the controllability of an initially infinite dimensional system by finitely many modes (see e.g. \cite{AT14}), verifying the uniqueness of an invariant measure for random/stochasitc PDEs (see e.g. \cite{kuksin}), etc. We also mention more recent but very promising applications of determining functionals to data assimilation problems where the values of functionals $F_n(u(t))$ are interpreted as the results of observations and the theory of determining functionals allows us to build new methods of restoring the trajectory $u(t)$ by the results of observations, see \cite{AT14,AT13,OT08,OT03} and references therein. \par It { is } worth to note here that, despite the long history and big number of research papers and interesting results, the determining functionals are essentially less understood in comparison with IMs and global attractors. 
Indeed, the general strategy which most of the papers in the field follow on is to fix some very specific class of functionals (like Fourier modes or the values of $u$ in some prescribed points in space (the so-called nodes), etc.) and to give as accurate as possible {\it upper} bounds for the number of such functionals which is sufficient to generate a determining system. These upper bounds are then expressed in terms of physical parameters of the considered system, e.g. in terms of the so-called Grashof's number when 2D Navier-Stokes equations are considered, see \cite{Chu,OT08} for details. These estimates often give the bounds for the number $N$ which are compatible to the box-counting dimension of the global attractor. However, the questions of sharpness of these upper bounds and of finding the appropriate lower bounds remain completely unstudied. The only natural lower bound for $N$ which is available is related to the dimension of the set of equilibria for the considered system. \par An exception is the case of one spatial variable, where the determining systems of very few number of functionals (which is compatible with the dimension of the equilibria set) are known, see e.g. \cite{kukavica}. However, these 1D results somehow support the Takens conjecture (see \cite{Tak}) that the number of determining functionals should be very small (a generic single functional should be determining for a generic system whose set of equilibria is finite by the Sard theorem) and clearly contradict the conjecture that the above mentioned estimates (which are compatible with the dimension of the attractor) are sharp. We also mention that the restriction to the class of linear determining functionals looks artificial especially when a non-linear dissipative system is considered. \par The main aim of the present paper is to shed some light on the above mentioned questions. For simplicity, we restrict ourselves by considering only the following semilinear abstract parabolic equation in a Hilbert space~$H$: \begin{equation}gin{equation}\lambdabel{0.abs} \partial_t u+Au-f(u)=g, \end{equation} where $A:D(A)\to H$ is a linear self-adjoint positive operator with compact inverse, $f$ is a given nonlinearity which is globally Lipschitz and subordinated to the linear part $A$ of the equation and $g$ is a given external force, see Section \ref{s2} for precise conditions. \par One of the main results of the manuscript is the following theorem, \begin{equation}gin{theorem}\lambdabel{Th0.main} Let the operator $A$ and the nonlinearity $f$ satisfy some natural assumptions stated in Section \ref{s2} and let the right-hand side $g\in H$ be chosen in such a way that the set $\mathcal R$ of equilibria points of equation \eqref{0.abs} is finite. Then, there is a prevalent set of continuous maps $F: H\to\R$ each of them can be chosen as an asymptotically determining functional for problem \eqref{0.abs}. Moreover, such a functional can be chosen from the class of polynomial maps of sufficiently { large} order. \end{theorem} The proof of this theorem is strongly based on the H\"older continuous version of the Takens delay embedding theorem and is given in Section \ref{s3}. \par The next corollary describes the dynamical properties of the scalar quantity $Z(t):=F(u(t))$. \begin{equation}gin{corollary}\lambdabel{Cor0.delay} Let the assumptions of Theorem \ref{Th0.main} hold and let $F:H\to\R$ be a smooth determining functional constructed there. 
Then, there is a sufficiently small delay $\tau>0$, a sufficiently large $k\in\mathbb N$ and a continuous function $\bar\Theta:\R^k\to\R$ such that the quantity $Z(t):=F(u(t))$ obeys the following scalar ODE with delay: \begin{equation}gin{equation}\lambdabel{0.delay} \frac d{dt}Z(t)=\bar \Theta(Z(t-\tau),\cdots,Z(t-k\tau)),\ \ t\in\R \end{equation} for every $u(t)$ belonging to the global attractor $\mathcal A$. \end{corollary} We now relax the assumption that the set $\mathcal R$ of equilibria is finite. To this end, we introduce the so-called embedding dimension $\partial_im_{emb}(\mathcal R,H)$ as the minimal number $M$ such that there is an injective continuous map $\Phi:\mathcal R\to\R^M$. Obviously, the number of functionals in any determining system cannot be less than $\partial_im_{emb}(\mathcal R,H)$: \begin{equation}gin{equation} \partial_im_{det}(S(t),H)\ge\partial_im_{emb}(\mathcal R,H), \end{equation} where $\partial_im_{det}(S(t),H)$ is the minimal number $N$ such that equation \eqref{0.abs} possesses a determining system of $N$ continuous (not necessarily linear) functionals. The next proposition is the analogue of Theorem \ref{Th0.main} for { the} general case. \begin{equation}gin{proposition}\lambdabel{Prop0.R} Let all of the assumptions of Theorem \ref{Th0.main} except { of} the finiteness of $\mathcal R$ be satisfied. Then, the determining dimension of the dynamical system $S(t)$ generated by equation \eqref{0.abs} satisfies: \begin{equation}gin{equation}\lambdabel{0.est} \partial_im_{emb}(\mathcal R,H)\le\partial_im_{det}(S(t),H)\le \partial_im_{emb}(\mathcal R)+1. \end{equation} \end{proposition} Thus, the number of functionals in the "optimal" determining system { depends} only { on} the embedding properties of the equilibria set $\mathcal R$ to Euclidean spaces and is {\it not related} { to} the size of the global attractor or the complexity of the dynamics on it. Moreover, as shown in Example \ref{Ex4.wave} below, it may be unrelated even { to} the dissipativity of the system considered. \par We also note that the above mentioned "paradox" with the case of one spatial dimension can be naturally resolved in light of Theorem \ref{Th0.main} and Proposition \ref{Prop0.R}. Indeed, although the optimal number of determining functionals is always related { to} the set of equilibria $\mathcal R$ only, in 1D case the situation is essentially simpler since the equilibria usually solve the system of ODEs and, therefore, the embedding dimension of $\mathcal R$ cannot be greater than the order of this system of ODEs (of course, if this system is smooth enough for the uniqueness theorem to hold). Thus, in 1D case we always have a good estimate for $\partial_im_{emb}(\mathcal R,H)$ which is independent of the physical parameters of the system, see Examples \ref{Ex4.dir}, \ref{Ex4.per} and \ref{Ex4.inf} where the explicit form of possible determining functionals are given for various cases of semilinear heat equations. 
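To illustrate this mechanism in the simplest possible situation (this is only a quick sketch; precise statements are given in the examples of Section \ref{s4}), consider the semilinear heat equation $\partial_t u=\partial^2_x u+f(u)+g(x)$ on an interval with Dirichlet boundary conditions and smooth $f$ and $g$. Every equilibrium solves the second order ODE \begin{equation*} u''(x)+f(u(x))+g(x)=0, \end{equation*} so it is uniquely determined by its Cauchy data $(u(x_0),u'(x_0))\in\R^2$ at any fixed point $x_0$, and by elliptic regularity the map $u\mapsto(u(x_0),u'(x_0))$ is continuous on $\mathcal R$ in the topology of $H$. Hence $\dim_{emb}(\mathcal R)\le2$ in this case, no matter how large the global attractor is.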
\par In contrast to this, in the multidimensional case, the elements of $\mathcal R$ are usually the solutions of elliptic PDEs and we do not have convenient estimates for the size of $\mathcal R$, we only know that, for generic external forces $g$, $\mathcal R$ is finite due to the Sard theorem, but for exceptional choices of $g$, we do not have anything more than the obvious estimate \begin{equation}gin{equation} \partial_im_{emb}(\mathcal R,H)\le 2\partial_im_B(\mathcal R,H)+1\le \partial_im_B(\mathcal A,H)+1 \end{equation} and the box-counting dimension of $\mathcal R$ may indeed be compatible with the appropriate dimension of the global attractor $\mathcal A$, see Example \ref{Ex4.deg}. Thus, the assumption that the external forces $g$ should be generic { appears to be} unavoidable if we want to get sharp results. Note also, that as shown in Example \ref{Ex4.linbad}, the class of linear functionals is not sufficient to get the above mentioned results, so considering polynomial functionals is also unavoidable. \par The paper is organized as follows. Some preliminary results on a general theory of determining functionals are collected in Section \ref{s1}. In particular, a bit stronger concept of functionals separating trajectories on the attractor is introduced there. The non-equivalence of determining and separating functionals is shown in Example \ref{Ex4.sep}. \par The classical theory of determining Fourier modes for equation \eqref{0.abs} including the related Lyapunov-Schmidt reduction and recent applications to restoring the trajectory $u(t)$ by the observation data related to determining modes are discussed in Section \ref{s2}. \par The main results of the paper are stated and proved in Section \ref{s3}. \par Section \ref{s4}, which is also one of the central sections of the paper, is devoted to various examples and counter-examples related { to} the theory of determining functionals. Finally, some open problems which are interesting from our point of view are presented in Section \ref{s5}. \section{Preliminaries}\lambdabel{s1} In this section we recall the basic facts about the determining functionals, introduce the notations and state the necessary definitions, see e.g. \cite{Chu,OT08} for more detailed exposition. Let $H$ be a Banach space and $S(t): H\to H$, $t\ge0$ be a semigroup acting on it, i.e. \begin{equation}gin{equation}\lambdabel{1.sem} S(t+l)=S(t)\circ S(l) \ \text{ for all $t,l\ge0$ and }\ S(0)=\operatorname{Id}. \end{equation} This semigroup will be referred as a dynamical system (DS) in the phase space $H$ and its orbits $u(t)=S(t)u_0$, $u_0\in H$, $t\ge0$ will be treated as (semi)trajectories of this dynamical system. \par We are now ready to give the definition of determining functionals which will be used throughout the paper. \begin{equation}gin{definition}\lambdabel{Def1.as-det} A system $\mathcal F:=\{F_1,\cdots, F_N\}$ of possibly non-linear continuous functionals $F_i:H\to\R$ is called asymptotically determining for the DS $S(t)$ if, for any two trajectories $u(t)$ and $v(t)$ of this DS, \begin{equation}gin{equation}\lambdabel{1.F-van} \lim_{t\to\infty}(F_i(u(t))-F_i(v(t))=0,\ \ i=1,\cdots, N \end{equation} implies that \begin{equation}gin{equation}\lambdabel{1.van} \lim_{t\to\infty}\|u(t)-v(t)\|_H=0. 
\end{equation} \end{definition} Let us assume in addition that the considered DS is {\it dissipative}, i.e., there exist positive constants $C$ and $\alphapha$ and a monotone increasing function $Q$ such that \begin{equation}gin{equation}\lambdabel{1.dis} \|u(t)\|_H\le Q(\|u_0\|_H)e^{-\alphapha t}+C \end{equation} for every trajectory $u(t)$, and possesses the so-called {\it global attractor} $\mathcal A$ in the phase space $H$. We recall that the set $\mathcal A$ is a global attractor of the DS $S(t)$ if the following assumptions are satisfied: \par 1) $\mathcal A$ is a compact set in $H$; \par 2) $\mathcal A$ is strictly invariant: $S(t)\mathcal A=\mathcal A$ for all $t\ge0$; \par 3) $\mathcal A$ attracts the images of all bounded sets of $H$ as $t\to\infty$, i.e., for any bounded $B\subset H$ and any neighbourhood $\mathcal O(\mathcal A)$ of the attractor, there exists $T=T(B,\mathcal O)$ such that $$ S(t)B\subset \mathcal O(\mathcal A) \ \ \text{if $t\ge T $}, $$ see \cite{BV92,T97} for more details. \par In this case, we may give an alternative definition which is often a bit simpler to verify in applications. \begin{equation}gin{definition}\lambdabel{Def1.atr-det} Let $S(t)$ be a DS in $H$ which possesses a global attractor $\mathcal A$ in it. Then, a system $\mathcal F:=\{F_1,\cdots,F_N\}$ of continuous functionals is called separating on the attractor $\mathcal A$ if for any two complete trajectories $u(t)$ and $v(t)$, $t\in\R$ belonging to the attractor, the identity \begin{equation}gin{equation}\lambdabel{1.det-attr} F_i(u(t))=F_i(v(t)),\ \ \text{for all $t\in\R$ and $i=1,\cdots, N$} \end{equation} implies that $u(t)\equiv v(t)$, $t\in\R$. \end{definition} The next standard proposition shows that under some natural assumptions, separating system on the attractor is automatically asymptotically determining. \begin{equation}gin{proposition}\lambdabel{Prop1.det} Let $S(t)$ be a dissipative DS which possesses a global attractor $\mathcal A$ in the phase space $H$. Assume also that all its trajectories $u(t)$ are continuous in time (belong to the space $C_{loc}(\R_+,H)$) and the map $S: u_0\to S(t)u_0$ is continuous as the map from $H$ to $C_{loc}(\R_+,H)$. Then any separating on the attractor system $\mathcal F$ is asymptotically determining. \end{proposition} \begin{equation}gin{proof} { Assume} that $\mathcal F$ is not asymptotically determining. Then, there exist two trajectories $u(t)$ and $v(t)$ such that $F_i(u(t))\to F_i(v(t))$ as $t\to \infty$, but $u(t)$ does not tend to $v(t)$ as $t\to\infty$. Therefore, there exists a sequence $t_n\to\infty$ such that $\|u(t_n)-v(t_n)\|_H\ge\varepsilon>0$. \par Recall that, by assumptions, $S(t)$ in $H$ possesses a global attractor $\mathcal A$. Let us consider the set $K_+\subset C_{loc}(\R_+,H)$ of all semi-trajectories of the initial DS. Then the semigroup of shifts $$ (T(h)u)(t):=u(t+h),\ t,h\ge0 $$ acts on $K_+$ (i.e. $T(h)K_+\subset K_+$). This semigroup is often referred as the trajectory dynamical system associated with the initial DS $S(t)$, see \cite{CV02, MZ08} for more details. Moreover, according to our assumptions, this semigroup is homeomorphically conjugated to the initial DS $S(t)$. In particular, the orbits $T(h)u$, $h\in\R_+$ are asymptotically compact in $C_{loc}(\R_+,H)$ for any $u\in K_+$. Thus, without loss of generality, we may assume that $$ T(t_n)u\to\bar u,\ \ \ T(t_n)v\to\bar v $$ in $C_{loc}(\R_+,H)$, where $\bar u$ and $\bar v$ are two trajectories belonging to the attractor. 
Since we assume that $F_i(u(t))\to F_i(v(t))$ as $t\to\infty$, from the continuity of $F_i$ we conclude that $$ F_i(\bar u(t))\equiv F_i(\bar v(t)), \ \forall t\in\R. $$ On the other hand, since $\|u(t_n)-v(t_n)\|_H\ge\varepsilon>0$, we conclude that $\bar u(0)\ne\bar v(0)$. The last statement contradicts the assumption that $\mathcal F$ is separating on the attractor and finishes the proof of the proposition. \end{proof} \begin{equation}gin{remark}\lambdabel{Rem1.strange} As we will show in Example \ref{Ex4.sep} below, the asymptotically determining system may be not separating on the attractor, so the above two definitions are not equivalent. However, the assumptions of Definition \ref{Def1.atr-det} are usually easier to verify, so we mainly deal in what follows with separating systems on the attractor. \par The analogue of Proposition \ref{Prop1.det} holds for some non-dissipative systems as well. Indeed, if we replace the existence of a global attractor by the assumption that the $\omega$-limit set $\omega(u_0)$ of any point $u_0\in H$ is not empty and compact and require \eqref{1.det-attr} to be satisfied for any complete bounded trajectory $u(t)$, $t\in\R$, the assertion of the proposition will remain true. We also note that the continuity assumptions on the semigroup can be relaxed { to} the standard requirements that the maps $S(t)$ are continuous for every fixed $t$. \end{remark} Since we are interested in the number $N$ of the "optimal" system of determining functionals, it is natural to give the following definition. \begin{equation}gin{definition}\lambdabel{Def3.det-dim} Let $S(t)$ be a DS acting in a Banach space $H$. A determining dimension $\partial_im_{det}(S(t),H)$ is the minimal number $N\in\mathbb N$ such that there exists asymptotically determining system $\mathcal F$ which consists of $N$ functionals. \end{definition} Our primary goal is to find or estimate the determining dimension for a given DS $S(t)$. We start with some obvious, but useful observations. Namely, let $\mathcal R\subset H$ be the set of all equilibria of the DS $S(t)$ and let $\mathcal F=\{F_1,\cdots,F_N\}$ be an asymptotically determining system of functionals which generates a continuous map $F:H\to\R^N$ via $F(u):=(F_1(u),\cdots, F_N(u))$. Obviously, this map must be {\it injective} on $\mathcal R$. This gives a natural lower bound for the determining dimension: \begin{equation}gin{equation}\lambdabel{1.lower} \partial_im_{det}(S(t),H)\ge \partial_im_{emb}(\mathcal R), \end{equation} where the embedding dimension of a set $\mathcal R\subset H$ is a minimal $N\in\mathbb N$ such that there exists a continuous embedding of $\mathcal R$ to $\R^N$. \par Let us now discuss straightforward upper bounds. To this end, we assume that the DS considered is dissipative and possesses a global attractor $\mathcal A$ of finite box-counting (fractal) dimension \begin{equation}gin{equation}\lambdabel{1.dim-ass} \partial_im_B(\mathcal A,H)<\infty. \end{equation} This assumption is true for many interesting classes of dissipative systems generated by PDEs, see \cite{BV92,MZ08,T97} and references therein. Then, by Man\'e projection theorem, see e.g. \cite{3,hunt,R11}, a generic projector $P:H\to V_N$ on a plane $V_N$ of dimension $N\ge2\partial_im_B(\mathcal A,H)+1$ is one-to-one on the attractor $\mathcal A$. Let us write this projector in the form \begin{equation}gin{equation}\lambdabel{1.mp} Ph=\sum_{n=1}^N (F_n,h)v_n,\ \ h\in H, \end{equation} where $F_n\in H^*$ and $\{v_n\}_{n=1}^N\in H$ is the base in $V_N$. 
Then the system of linear functionals $\{F_n(h)=(F_n,h)\}_{n=1}^N$ is obviously separating on the attractor $\mathcal A$. This gives the desired estimate: \begin{equation} \dim_{det}(S(t),H)\le 2\dim_B(\mathcal A,H)+1. \end{equation} Thus, we have proved the following proposition. \begin{proposition}\label{Prop1.rough} Let the DS $S(t)$ be dissipative and possess a global attractor $\mathcal A$ of finite fractal dimension. Then, \begin{equation}\label{1.r} \dim_{emb}(\mathcal R)\le \dim_{det}(S(t),H)\le 2\dim_B(\mathcal A,H)+1, \end{equation} where $\mathcal R\subset H$ is the set of all equilibria of the DS considered. \end{proposition} \begin{remark}\label{Rem1.fut} As we will see below, the upper bound in \eqref{1.r} is too rough and can be replaced by the corresponding dimension of the equilibria set $\mathcal R$. Note also that the determining dimension may be defined not as a minimum dimension with respect to all continuous functionals, but for functionals satisfying some extra restrictions, for instance, for {\it linear} functionals only (or even linear functionals of some special form, e.g., determining nodes, etc.). This may potentially lead to new non-trivial results. However, the restriction for determining functionals to be linear does not look natural, especially when a non-linear DS is considered. Moreover, as shown in Example \ref{Ex4.linbad}, linear functionals may fail to distinguish different periodic orbits with the same period (even in linear systems), which leads to an unnecessary increase in the number of functionals in a determining system. As we will also see below, the class of polynomial functionals is rich enough to overcome such problems. \end{remark} \section{Determining modes: a classical example}\label{s2} In this section, we recall an old result on the determining modes for a semilinear parabolic equation which we are planning to use in the sequel, see e.g. \cite{Chu} or \cite{HR} for a more detailed exposition. Namely, we consider the following abstract functional model: \begin{equation}\label{2.abs} \partial_t u+Au-f(u)=g,\ \ u\big|_{t=0}=u_0, \end{equation} where $H$ is a Hilbert space, $A=A^*>0$ is a positive self-adjoint unbounded operator in $H$ with a compact inverse, $g\in H^{-\alpha}$ and $f$ is a given non-linear map which is subordinated to the operator $A$ and is bounded and globally Lipschitz: \begin{equation}\label{2.lip} \begin{cases} 1.\ \|f(u)\|_{H^{-\alpha}}\le C,\ \forall u\in H^\alpha;\\ 2.\ \|f(u_1)-f(u_2)\|_{H^{-\alpha}}\le L\|u_1-u_2\|_{H^\alpha},\ \forall u_1,u_2\in H^\alpha, \end{cases} \end{equation} where $\alpha\in[0,1)$, $H^\alpha:=D(A^{\alpha/2})$ and $C>0$. It is well-known that under the above assumptions, equation \eqref{2.abs} generates a dissipative DS in the space $H$ which possesses a global attractor $\mathcal A$. Moreover, this attractor has finite box-counting dimension: \begin{equation} \dim_B(\mathcal A,H)<\infty, \end{equation} see e.g. \cite{hen,R01} for more details. Note that \eqref{2.abs} is a functional model for many dissipative PDEs arising in applications including reaction-diffusion equations, the Navier-Stokes system, etc.
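For instance, taking $H=L^2(\Omega)$ on a bounded smooth domain $\Omega$, $A=-\Delta_x$ with Dirichlet boundary conditions and $\alpha=0$, we cover (after the cut-off procedure mentioned below) the reaction-diffusion problem \begin{equation*} \partial_t u=\Delta_x u+f(u)+g,\ \ u\big|_{\partial\Omega}=0, \end{equation*} where the nonlinearity $f:\R\to\R$ is bounded and globally Lipschitz and acts as a Nemytskii operator on $L^2(\Omega)$; we spell this standard instance out only for orientation.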
The extra assumption that $f$ is {\it globally} Lipschitz is not a big restriction if the existence of an absorbing ball is established and can be achieved by the standard cut off procedure outside of this ball. \par Let $\{e_n\}_{n=1}^\infty$ be an orthonormal base of eigenvectors of the operator $A$ with the corresponding eigenvalues $\{\lambdambda_n\}_{n=1}^\infty$ and let $P_N:H\to H$ be the orthoprojector to the first $N$ eigenvectors of $A$. Denote also $Q_N:=1-P_N$. Consider the system $\mathcal F=\{F_1,\cdots,F_N\}$ of linear functionals generated by the corresponding Fourier modes $F_n(u):=(u,e_n)$. The next proposition tells us that the system $\mathcal F$ is determining if $N$ is large enough. \begin{equation}gin{proposition}\lambdabel{Prop2.modes} Let the above assumptions hold and let $N$ satisfy the inequality \begin{equation}gin{equation}\lambdabel{2.one} L<\lambdambda_{N+1}^{1-\alphapha}. \end{equation} Then the system $\mathcal F$ of first $N$ Fourier modes is asymptotically determining for the DS generated by equation \eqref{2.abs}. \end{proposition} \begin{equation}gin{proof} Indeed, let $u_1(t)$ and $u_2(t)$ be two trajectories belonging to the attractor. Such that $P_Nu_1(t)=P_Nu_2(t)$ for all $t\in\R$. Let $v(t):=u_1(t)-u_2(t)$. Then this function solves \begin{equation}gin{equation}\lambdabel{2.dif} \partial_t v(t)+Av(t)=[f(u_1(t))-f(u_2(t))],\ \ t\in\R. \end{equation} Taking an inner product of equation \eqref{2.dif} with $v(t)$ and using the Lipschitz continuity of the map $f$ and the fact that $P_Nv(t)\equiv0$, we arrive at $$ \frac12\frac d{dt}\|v(t)\|^2_H+(\lambdambda_{N+1}^{1-\alphapha}-L)\|v(t)\|_{H^\alphapha}^2\le0 $$ and, therefore, $$ \|v(t)\|^2_H\le e^{-\begin{equation}ta(t-s)}\|v(s)\|^2_H,\ \ s\le t $$ for some positive $\begin{equation}ta$. Since $u_1$ and $u_2$ belong to the attractor, $\|v(s)\|_H$ remains bounded as $s\to-\infty$ and, passing to the limit $s\to-\infty$, we get $v(t)\equiv0$. Thus, $\mathcal F$ is separating on the attractor and the proposition is proved. \end{proof} \begin{equation}gin{remark}\lambdabel{Rem2.NS} Similar result has been initially proved for 2D Navier-Stokes problem, see \cite{FP67,Lad72}, and has been extended later for many other classes of dissipative PDEs. For simplicity we consider here only the case of abstract parabolic equations although the result remains true for much more general types of equations, including e.g. damped wave equations, etc, see \cite{Chu}. The main drawback of the construction given above is that it gives {\it one-sided} estimate for the number $N$ of functionals in the determining system $\mathcal F$ and, as we will see below, there are no reasons to expect that this estimate is sharp. Moreover, the Takens conjecture as well as the results stated in the next section allow us to guess that in a "generic" situation we have $N=1$. \end{remark} We now reformulate the result of the proposition in terms of Lyapunov-Schmidt reduction. For simplicity, we restrict ourselves to consider the case $\alphapha=0$ only although everything remains true (with minor changes in a general case $\alphapha\in[0,1)$ as well). Let us rewrite equation \eqref{2.abs} in terms of lower $u_+(t):=P_Nu(t)$ and higher $u_-(t):=Q_Nu(t)$ Fourier modes, namely, \begin{equation}gin{equation}\lambdabel{2.sys} \begin{equation}gin{cases} \frac d{dt}u_+(t)+Au_+(t)-P_Nf(u_+(t)+u_-(t))=g_+,\\ \frac d{dt}u_-(t)+Au_-(t)-Q_Nf(u_+(t)+u_-(t))=g_-. 
\end{cases} \end{equation} The next lemma shows that the higher modes part $u_-(t)$ is uniquely determined if its lower modes part $u_+(t)$ are known. \begin{equation}gin{lemma}\lambdabel{Lem2.Lyap} Let \eqref{2.one} be satisfied and let $\alphapha=0$. Then, for every $u_+\in C_b(\R_-,H)$ there is a unique solution $u_-\in C_b(\R_-,H)$ of the second equation of \eqref{2.sys}. Moreover, the solution operator $\Phi:C_b(\R_-,H)\to C_b(\R_-,H)$ is Lipschitz continuous in the following sense: \begin{equation}gin{equation}\lambdabel{2.llip} \|\Phi(u_+^1)-\Phi(u_+^2)\|_{C_{e^{\begin{equation}ta t}}(\R_-,H)}\le K\|u_+^1-u_+^2\|_{C_{e^{\begin{equation}ta t}}(\R_-,H)} \end{equation} for some positive $\begin{equation}ta$ and $K$ which are independent of $u_+^1,u_+^2\in C_b(\R_-,H)$. \end{lemma} \begin{equation}gin{proof} For a given $u_+\in C_b(\R_-,H)$ let us consider the following problem \begin{equation}gin{equation}\lambdabel{2.app} \partial_t u_-+Au_--Q_Nf(u_-+u_+)=g,\ \ u_-\big|_{t=-M}=0, \end{equation} where $M>0$ is fixed. Obviously, this problem has a unique solution $u_{-,M}\in C([-M,0],H)$. Moreover, multiplying equation \eqref{2.app} scalarly in $H$ by $u_-$, we get \begin{equation}gin{multline} \frac12\frac d{dt}\|u_-\|^2_H+\lambdambda_{N+1}\|u_-\|^2_H\le\\\le (f(u_-+u_+)-f(0),u_-)+(f(0)+g,u_-)\le \\\le L\|u_-\|^2_H+L\|u_+\|_H\|u_-\|_H+(\|f(0)\|_H+\|g\|_H)\|u_-\|_H. \end{multline} Using \eqref{2.one}, we arrive at $$ \frac d{dt}\|u_-(t)\|^2_H+\begin{equation}ta\|u_-(t)\|^2_H\le C(1+\|g\|^2_H+\|u_+(t)\|^2_H) $$ for some positive constants $\begin{equation}ta$ and $C$. Integrating this inequality, we see that $$ \|u_-(t)\|_{H}^2\le C_1(1+\|g\|^2_H)+C_1\int_{-M}^te^{-\begin{equation}ta(t-s)}\|u_+(s)\|^2_H\,ds,\ \ t\ge -M. $$ This estimate in particular shows that the functions $u_{-,M}(t)$ are uniformly bounded with respect to $M$ (since $u_+\in C_b(\R_-,H)$). We claim that the sequence $\{u_{-,M}\}_{M=1}^\infty$ is Cauchy in $C_{loc}(\R_-,H)$. Indeed, let $M_1>M_2$ and $v_{M_1,M_2}(t):=u_{-,M_1}(t)-u_{-,M_2}(t)$. Then this function solves the equation \begin{equation}gin{multline} \partial_t v_{M_1,M_2}+Av_{M_1,M_2}=Q_N[f(u_{-,M_1}+u_+)-f(u_{-,M_2}+u_+)],\\ v_{M_1,M_2}\big|_{t=-M_2}=u_{-,M_1}(-M_2). \end{multline} Multiplying this equation by $v_{M_1,M_2}$ and arguing as in the proof of Proposition \ref{Prop2.modes} using that $u_{-,M_1}(-M_2)$ is uniformly bounded with respect to $M_1$ and $M_2$, we infer $$ \|v_{M_1,M_2}(t)\|_H^2\le Ce^{-\begin{equation}ta(t+M_2)},\ \ t\ge-M_2, $$ so $u_{-,M}(t)$ is indeed a Cauchy sequence. Passing now to the limit $M\to\infty$, we get the desired solution $u_-\in C_b(\R_-,H)$ of the second equation of \eqref{2.sys}. Thus, it only remains to verify the uniqueness and estimate \eqref{2.llip}. To this end, we take two solutions $u_-^1(t)$ and $u_-^2(t)$ which correspond to different functions $u_+^1(t)$ and $u_+^2(t)$ and take their difference $v(t)=u_-^1(t)-u_-^2(t)$. Writing out the equation for $v(t)$ and arguing as in the proof of Proposition \ref{Prop2.modes}, we end up with the following inequality: \begin{equation}gin{equation}\lambdabel{2.dif-est} \frac d{dt}\|v(t)\|^2_H+\begin{equation}ta\|v(t)\|^2_H\le C\|u_+^1(t)-u_+^2(t)\|_H^2. \end{equation} Integrating this inequality, we get \begin{equation}gin{multline} \|v(t)\|^2_H\le \|u_+^1(-M)-u_+^2(-M)\|_H^2e^{-\begin{equation}ta(t+M)}+\\+ C_\begin{equation}ta\int_{-M}^te^{-\begin{equation}ta(t-s)}\|u_+^1(s)-u_+^2(s)\|_H^2\,ds. 
\end{multline} Using now that all functions involved are bounded as $t\to-\infty$, we may pass to the limit $M\to\infty$ in the last inequality and get \begin{equation}gin{multline} \|v(t)\|^2_H\le C_\begin{equation}ta\int_{-\infty}^te^{-\begin{equation}ta(t-s)}\|u_+^1(s)-u_+^2(s)\|_H^2\,ds\le \\\le C'_\begin{equation}ta\sup_{s\in\R_-}\left\{e^{-\begin{equation}ta(t-s)/2}\|u_+^1(s)-u_+^2(s)\|_H^2\right\} \end{multline} which gives the desired estimate \eqref{2.llip} and finishes the proof of the lemma. \end{proof} Thus, according to Lemma \ref{Lem2.Lyap}, the value of $u_-(t)$ can be found by the map $\Phi$ if the trajectory $u_+(s)$, $s\le t$ is known. Namely $$ u_-(t)=\Phi((u_+)_t)(0):=\Phi_0((u_+)_t), $$ where the function $(u_+)_t\in C_b(\R_-,H)$ is given by $(u_+)_t(s):=u_+(t+s)$. This allows us to reduce the initial equation \eqref{2.abs} at least on the attractor to the following {\it delayed} system of $N$ ODEs: \begin{equation}gin{equation}\lambdabel{2.dode} \frac d{dt} u_+(t)+Au_+(t)-P_Nf(u_+(t)+\Phi_0((u_+)_t))=g. \end{equation} \begin{equation}gin{remark}\lambdabel{Rem2.nd} We see that determining modes (at least in a dissipative system \eqref{2.abs}) are responsible for the reduction of the initial PDE to a system of ODEs with {\it delay} (DDEs) where the number of equations in the reduced system is equal to $N$. Note that the phase space for the obtained system of DDEs remains {\it infinite-dimensional} (since we have an infinite delay in \eqref{2.dode}, it is natural to take the space $C_b(\R_-,H)$ with the topology of $C_{loc}(\R_-,H)$ as the phase space of this problem, see \cite{ha} and references therein). Thus, determining modes {\it do not produce} any finite-dimensional reduction and only reduce the initial PDE to the system of DDEs whose dynamics is still a priori infinite-dimensional. In particular, the number $N$ or the determining dimension {\it is not a priori} related to the effective number of degrees of freedom of the dissipative system considered. We also mention that more advanced technique related to {\it inertial manifolds} allows us to express $u_-(t)$ through $u_+(t)$ as a local in time function without delay $$ u_-(t)=\Phi_0(u_+(t)),\ \ \Phi_0\,:\, P_NH\to Q_NH $$ (under further rather restrictive assumptions on $A$ and $f$, see \cite{BV92} for more details). In this case, equation \eqref{2.dode} become a system of ODEs without delay and the finite-dimensional reduction holds. \end{remark} We conclude this section by one more related result which allows us to restore the trajectory $u(t)$ in "real time" if the values of its determining modes are known (say, from observations) and which may be useful for data assimilation, see \cite{AT13,OT08} and references therein. \begin{equation}gin{proposition}\lambdabel{Prop2.assym} Let $u(t)$ be a given trajectory of the dynamical system generated by equation \eqref{2.abs} and let condition \eqref{2.one} be satisfied. Let the constant $K$ be large enough and let $v(t)$ solve the following problem: \begin{equation}gin{equation}\lambdabel{2.as} \frac d{dt}v(t)+Av(t)-f(v(t))+K(P_N v(t)-P_N u(t))=g. \end{equation} Then, the following estimate holds: \begin{equation}gin{equation}\lambdabel{2.conv} \|v(t))-u(t)\|_H\le \|v(0)-u(0)\|_He^{-\begin{equation}ta t}, \ \ t\ge0 \end{equation} for some positive constant $\begin{equation}ta$ which is independent of $u$ and $v$. \end{proposition} \begin{equation}gin{proof} Indeed, let $w(t):=v(t)-u(t)$. 
Then this function solves $$ \partial_t w(t)+Aw(t)-[f(v(t))-f(u(t))]+K P_Nw(t)=0. $$ Taking a scalar product of this equation with $w(t)$ in $H$ and using the Lipschitz continuity of $f$, we get $$ \frac12\frac d{dt}\|w(t)\|^2_H+\|w(t)\|_{H^1}^2-L\|w(t)\|^2_{H^\alphapha}+K\|P_Nw(t)\|^2_H\le0. $$ Moreover, arguing as in the proof of Proposition \ref{Prop2.modes}, we get \begin{equation}gin{multline} \|w\|_{H^1}^2-L\|w\|^2_{H^\alphapha}+K\|P_Nw\|^2_H=(\|Q_Nw\|_{H^1}^2-L\|Q_Nw\|^2_{H^\alphapha})+\\+ (\|P_Nw\|_{H^1}^2-L\|P_Nw\|_{H^\alphapha}^2+K\|P_Nw\|^2_H)\ge\\\ge (\lambdambda_{N+1}^{1-\alphapha}-L)\|Q_Nw\|^2_{H^\alphapha}+(K-L\lambdambda_N^\alphapha)\|P_Nw\|_{H}^2\ge \begin{equation}ta\|w\|^2_H \end{multline} for some positive $\begin{equation}ta$ if $K>L\lambdambda^\alphapha$. This gives the estimate $$ \frac12\frac d{dt}\|w(t)\|^2_H+\begin{equation}ta\|w(t)\|^2_H\le0 $$ and finishes the proof of the proposition. \end{proof} \begin{equation}gin{remark} As we will see below, the results which are similar (although slightly more complicated) to Lemma \ref{Lem2.Lyap} and Proposition \ref{Prop2.assym} hold for a general system of determining functionals. \end{remark} \section{The Takens delay embedding theorem and determining functionals}\lambdabel{s3} In this section, we prove that at least for the case of equations \eqref{2.abs} and generic external forces $g$, the determining dimension of the corresponding DS is one. We start with recalling two preliminary results. \begin{equation}gin{proposition}\lambdabel{Prop3.gen} Let the operator $A$ and the nonlinearity $f$ satisfy the assumptions of Section \ref{s2}. Then, there is a dense in $H^{-\alphapha}$ set of external forces $g$ for which the corresponding set $\mathcal R_g$ of equilibria is finite. \end{proposition} Indeed, this is a standard corollary of the Sard lemma, see e.g. \cite{BV92} for more details. \begin{equation}gin{proposition}\lambdabel{Prop3.per} Let the operator $A$ and the nonlinearity $f$ satisfy the assumptions of Section \ref{s2}. Then there exists $T_0>0$ depending on $A$ and $f$ only such that the period $T$ of any non-trivial periodic orbit $u(t)$ of problem \eqref{2.abs} is not smaller than $T_0$ (i.e. $T\ge T_0$). \end{proposition} For the proof of this fact, see \cite{R11}. \par We are now ready to state a version of the Takens delayed embedding theorem for the DS $S(t)$ generated by problem \eqref{2.abs} which is our main technical tool in this section. \begin{equation}gin{theorem}\lambdabel{Th3.Tkn} Let $\mathcal A$ be the global attractor of equation \eqref{2.abs} and let the natural number $$ k\ge(2+\partial_im_B(\mathcal A))\partial_im_B(\mathcal A)+1 $$ be fixed. Assume also that the set $\mathcal R$ of equilibria is finite and $\tau>0$ is fixed in such a way that $k\tau\ge T_0$ (where $T_0$ is the same as in Proposition \ref{Prop3.per}). Then there exists a dense (actually a prevalent) set of Lipschitz maps $F:H\to\R$ such that the map \begin{equation}gin{equation}\lambdabel{3.tak} F(k,u):=(F(u),F(S(\tau)u),\cdots,F(S((k-1)\tau)u)) \end{equation} is one-to-one on the attractor $\mathcal A$. Moreover, the map $F$ can be chosen from the class of polynomials of degree $2k$. \end{theorem} The proof of this theorem is given in \cite{R11}. 
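The following purely illustrative numerical sketch is not part of the proofs: the Lorenz system, the observable and all numerical parameters below are our own assumptions, chosen only to visualize how a delay map of the form \eqref{3.tak} separates states through a single scalar record.
\begin{verbatim}
# Illustrative sketch: a delay-coordinate map of the form F(k, u) for the
# Lorenz system, with the scalar observable F(u) = x-coordinate.
import math

def lorenz_step(s, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # one RK4 step of the Lorenz system
    def rhs(p):
        x, y, z = p
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    k1 = rhs(s)
    k2 = rhs(tuple(s[i] + 0.5 * dt * k1[i] for i in range(3)))
    k3 = rhs(tuple(s[i] + 0.5 * dt * k2[i] for i in range(3)))
    k4 = rhs(tuple(s[i] + dt * k3[i] for i in range(3)))
    return tuple(s[i] + dt * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) / 6.0
                 for i in range(3))

dt, tau, k = 0.01, 0.1, 7            # k delayed observations, spaced by tau
per_tau = round(tau / dt)

s = (1.0, 1.0, 1.0)
for _ in range(10000):               # transient: run onto the attractor
    s = lorenz_step(s, dt)
traj = []
for _ in range(20000):               # record a long trajectory
    s = lorenz_step(s, dt)
    traj.append(s)

F = lambda u: u[0]                   # the scalar observable

def delay_vector(n):
    # the delay map evaluated at the state traj[n]
    return tuple(F(traj[n + j * per_tau]) for j in range(k))

# compare pairs of well-separated states with their delay vectors
smallest = None
for i in range(0, 15000, 50):
    for j in range(i + 500, 15000, 500):
        if math.dist(traj[i], traj[j]) > 1.0:
            d = math.dist(delay_vector(i), delay_vector(j))
            smallest = d if smallest is None else min(smallest, d)
print("smallest delay-vector distance over well-separated states:", smallest)
\end{verbatim}
One expects the printed distance to stay bounded away from zero, in agreement with the injectivity of the delay map on the attractor; this is, of course, only a heuristic check and not a substitute for the theorem.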
\begin{equation}gin{remark} A { bit non-standard} condition on $k$ is related { to} the fact that the H\"older Man\'e projection theorem is essentially used in the proof and the maximal H\"older exponent there is restricted by the so-called dual thickness exponent of the attractor, see \cite{R11}. In many cases related { to} semilinear parabolic equations, this thickness exponent is known to be zero and then we return to the standard condition $k\ge 2\partial_im_B(\mathcal A)+1$ in the Takens theorem. \end{remark} \begin{equation}gin{corollary}\lambdabel{Cor3.det} Under the assumptions of Theorem \ref{Th3.Tkn}, functional $F$ constructed in this theorem is asymptotically determining and, therefore, \begin{equation}gin{equation} \partial_im_{det}(S(t),H)=1. \end{equation} \end{corollary} \begin{equation}gin{proof} Indeed, let $u_1(t)$ and $u_2(t)$ be two complete trajectories belonging to the attractor and let $F(u_1(t))\equiv F(u_2(t))$ for all $t\in\R$. Then, since the map \eqref{3.tak} is one-to-one on the attractor, we conclude that $u_1(t)\equiv u_2(t)$ and $F$ is separating on the attractor. This finishes the proof of the corollary due to Proposition \ref{Prop1.det}. \end{proof} Let us take a functional $F$ constructed in Theorem \ref{Th3.Tkn} and let $\bar{\mathcal A}:=F(k,\mathcal A)\subset\R^{k}$. Then, since $\mathcal A$ is compact, $F(k,\cdot)$ is a homeomorphism between $\mathcal A$ and $\bar{\mathcal A}$. Let us introduce one more map \begin{equation}gin{equation}\lambdabel{3.theta} \Theta: \bar{\mathcal A}\to H,\ \ \Theta(u):=S(k\tau)F(k,u)^{-1}. \end{equation} By the construction, this map is continuous and, for every trajectory $u(t)$, $t\in\R$, belonging to the attractor, we have the identity \begin{equation}gin{equation}\lambdabel{3.delay1} u(t)=\Theta(F(u(t-k\tau),\cdots,F(u(t-\tau))),\ \ t\in\R. \end{equation} In particular, denoting $Z(t):=F(u(t))\in\R$, we get \begin{equation}gin{equation}\lambdabel{2.Z} Z(t)=F(\Theta(Z(t-k\tau),\cdots, Z(t-\tau))) \end{equation} which gives the reduction of the dissipative dynamics generated by \eqref{2.abs} on the attractor to a {\it scalar} equation with delay. However, equation \eqref{2.Z} looks not friendly (e.g., does not possess any smoothing/compactness property on a finite time interval), so it { seems} better to replace it by delay differential equation in the spirit of \eqref{2.dode}. To this end, we assume in addition that $\alphapha=0$ and our functional $F$ is smooth and use the relation \ref{3.delay1} in order to find the derivative $\partial_t u$ through equation \eqref{2.abs}. Using in addition that the operator $A$ is continuous on the attractor as the map from $H$ to $H$ (see e.g. \cite{R11}), we end up with \begin{equation}gin{multline} \partial_t u(t)=-A\Theta(F(u(t-k\tau),\cdots,F(u(t-\tau)))+\\+f( \Theta(F(u(t-k\tau),\cdots,F(u(t-\tau))))+g \end{multline} and therefore \begin{equation}gin{multline}\lambdabel{3.Z-ode} \frac d{dt}Z(t)=\bar \Theta(Z(t-k\tau),\cdots,Z(t-\tau))), \ \text{where}\\ \bar\Theta(\xi_1,\cdots,\xi_{k}):=\\=F'(\Theta(\xi_1,\cdots,\xi_{k}))[-A\Theta(\xi_1,\cdots,\xi_{k}) +f(\Theta(\xi_1,\cdots,\xi_{k}))+g] \end{multline} Thus, we have proved the following result which is somehow analogous to Lemma \ref{Lem2.Lyap}. \begin{equation}gin{corollary} Let the operator $A$ and the non-linearity $f$ satisfy the assumptions of Section \ref{s2} with $\alphapha=0$ and let the set of equilibria $\mathcal R$ be finite. 
Then, there exists a continuous function $\bar \Theta:\R^k\to\R$ (where $k$ is the same as in Theorem \ref{Th3.Tkn}) such that the dynamics of \eqref{2.abs} on the attractor is homeomorphically embedded into the dynamics of the scalar DDE \eqref{3.Z-ode}. \end{corollary} Indeed, the natural phase space of problem \eqref{3.Z-ode} is $C[-k\tau,0]$ and, by the construction, the map $\Theta_Z:u_0\to F(S(\cdot+k\tau)u_0)$ gives a homeomorphic embedding of the attractor $\mathcal A$ into $C[-k\tau,0]$. Moreover, the dynamics of \eqref{2.abs} on $\mathcal A$ is conjugate to the dynamics of \eqref{3.Z-ode} (also by the construction). It only remains to note that the function $\bar\Theta$ is initially defined on $\bar{\mathcal A}$ only, but it can be extended to the whole of $\R^k$, e.g. by the Tietze theorem. \par We now discuss the analogue of Proposition \ref{Prop2.assym}. To this end, we fix $N\in\mathbb N$ the same as in Proposition \ref{Prop2.assym} and introduce the finite-dimensional map \begin{equation} \Theta_N(\xi_1,\cdots,\xi_k):=P_N\Theta(\xi_1,\cdots,\xi_k),\ \Theta_N : \bar{\mathcal A}\to\R^N, \end{equation} which can be extended by the Tietze theorem to a continuous map from $\R^k$ to $\R^N$. Then the following result holds. \begin{proposition}\label{Prop3.assym} Let the assumptions of Corollary \ref{Cor3.det} hold. Let also the trajectory $u(t)\in\mathcal A$ be given and $F:H\to\R$ be an asymptotically determining functional. Consider the problem \begin{multline}\label{3.as} \partial_t v(t)+Av(t)-f(v(t))+\\+K(P_Nv(t)-\Theta_N(F(u(t-k\tau)),\cdots,F(u(t-\tau))))=0, \end{multline} where $K$ is large enough. Then, estimate \eqref{2.conv} holds. \end{proposition} Indeed, by the construction, $$ P_Nu(t)=\Theta_N(F(u(t-k\tau)),\cdots,F(u(t-\tau))) $$ and the assertion follows immediately from Proposition \ref{Prop2.assym}. \begin{remark} As we have already mentioned, the assumption $\alpha=0$ is introduced for simplicity only. All of the results stated above remain true (with very minor changes) in the general case $\alpha\in[0,1)$ as well. \end{remark} To conclude the section, we briefly discuss the case where the extra generic assumption on the finiteness of $\mathcal R$ is not imposed. This may be useful e.g. for the case when problem \eqref{2.abs} possesses some physically relevant symmetries which should not be broken. The analogue of Corollary \ref{Cor3.det} now reads as follows. \begin{corollary}\label{Cor3.det-R} Let the operator $A$ and the nonlinearity $f$ satisfy the assumptions of Section \ref{s2}. Then the determining dimension of the DS associated with equation \eqref{2.abs} possesses the following estimate: \begin{equation}\label{3.goodest} \dim_{emb}(\mathcal R)\le\dim_{det}(S(t),H)\le\dim_{emb}(\mathcal R)+1, \end{equation} where $\mathcal R$ is the corresponding set of equilibria. \end{corollary} \begin{proof}[Sketch of the proof] The proof of this statement consists of a revision of the proof given in \cite[Theorem 14.7]{R11}, adapting it to our case. First, the reduction to the finite-dimensional case works exactly as in \cite{R11}, so we only need to revise the proof of the finite-dimensional version, see \cite[Theorem 14.5]{R11}. Since in our case all periodic orbits of small period are equilibria, the three cases considered in the proof of that theorem reduce to two cases. \par {\it Case 1:} at least one of the two points $x,y\in\mathcal A$ does not belong to $\mathcal R$.
In this case, a single functional $F$ which distinguishes such points can be constructed exactly in the same way as in the proof of \cite[Theorem 14.5]{R11} (estimating the rank of the corresponding matrix and using \cite[Lemma 14.4]{R11}). \par {\it Case 2:} $x,y\in\mathcal R$. To distinguish such points we use the fact that there exists a homeomorphic embedding $\phi:\mathcal R\to\R^{\dim_{emb}(\mathcal R)}$, so we just add the coordinates of $\phi$ to our system of functionals and get a system $\{F,\phi_1,\cdots,\phi_{\dim_{emb}(\mathcal R)}\}$ (we implicitly assume here that $\phi$ is already extended in a continuous way to the whole of $H$). This gives the desired upper bound for the determining dimension. The lower bound follows from \eqref{1.lower} and the corollary is proved. \end{proof} \begin{remark} Recall that the embedding dimension of $\mathcal R$ possesses the following estimates: \begin{equation}\label{3.cont} \dim_{emb}(\mathcal R)\le 2\dim_{top}(\mathcal R)+1\le2\dim_B(\mathcal R)+1\le2\dim_B(\mathcal A)+1<\infty, \end{equation} where $\dim_{top}$ is the topological (Lebesgue covering) dimension, see e.g. \cite{R11}. We also mention that, if we control the embedding dimension by the topological dimension, we may in addition claim that the determining functionals are dense, while if we control it by the box-counting dimension, we may obtain a system of determining functionals in which all functionals except one are linear and the remaining one is polynomial. In order to avoid technicalities, we leave the rigorous proof of this fact to the reader. \end{remark} \section{Examples}\label{s4} In this section, we illustrate the theory by a number of examples where the determining functionals can be found explicitly. Some of them are known, but the rest are, to the best of our knowledge, new. We start with an example which shows the difference between functionals that are asymptotically determining and functionals that are separating on the attractor.
\begin{equation}gin{example}\lambdabel{Ex4.sep} Let $H=\R^2$ and the dynamical system $S(t)$ is determined by the phase portrait below { \hspace{35mm} \begin{equation}gin{tikzpicture}[scale=0.35] \fill[outer color=red!5] (0,1) circle (2); \draw[thick, ->] (0,-1) arc (270:360:0.5); \draw[thick, ->] (0.5,-0.5) arc (0:180:0.5); \draw[thick, ->] (-0.5,-0.5) arc (180:270:0.5); \draw[thick, ->] (1,0) arc (0:120:1); \draw[thick, ->] (-0.5,0.8660254) arc (120:360:1); \draw[thick, ->] (0,-1) arc (270:360:1.5); \draw[thick, ->] (1.5,0.5) arc (0:180:1.5); \draw[thick, ->] (-1.5,0.5) arc (180:270:1.5); \draw[color=red!70, thick, ->] (0,-1) arc (270:360:2); \draw[color=red!70, thick, ->] (2,1) arc (0:90:2); \draw[color=red!70, thick, ->] (0,3) arc (90:180:2); \draw[color=red!70, thick, ->] (-2,1) arc (180:270:2); \draw[thick, ->](0,3.8) arc (90:230:2.6); \draw[thick, ->](-1.6712478,-0.791715544) to [out=330, in=195] (0,-1); \draw[thick, ->](2.7,1.1) arc (0:90:2.7); \draw[thick](2.7,1.1) arc (360:300:2.7); \draw[thick, ->] (0.5,-3) to [out=85, in=195] (1.35,-1.23826858); \draw[thick, ->] (0.5,-4)--(0.5,-3); \draw[thick, ->] (0,-4)--(0,-2); \draw[thick, ->] (0,-2)--(0,-1); \draw[thick, ->](0,5.8) arc (90:180:4.7); \draw[thick, ->](-4.7,1.1) arc (180:230:4.7); \draw[thick, ->](-3.021101767,-2.500408868) to [out=330, in=260] (0,-1); \draw[thick](0,5.8) arc (90:30:4.9); \draw[thick, ->](4.24352446,-1.55) arc (330:390:4.9); \draw[thick, ->] (2.5,-4) to [out=85, in=230] (4.24352446,-1.55); \draw[thick, ->] (0.5,-4)--(0.5,-3); \draw[thick, ->](-5,-4) to [out=350, in=265] (-0.05,-1.4); \draw[thick,](0,-1) -- (-0.05,-1.4); \end{tikzpicture} } {\centerline{Figure 1.}} \par Actually this phase portrait consists of a saddle-node at the origin $(0,0)$ glued with the disc $\{(x,y)\in\R^2\,:\ x^2+(y-1)^2\le 1\}$ filled by homoclinic orbits to the origin. The key feature of this DS is that the $\omega$-limit set of any single trajectory coincides with the origin (and, therefore, any continuous functional is automatically asymptotically determining), but the global attractor $$ \mathcal A=\{(x,y)\in\R^2,\ \ x^2+(y-1)^2\le1\} $$ is not trivial. Obviously, the functional $F(x,y)=x^2+(y-1)^2-1$ is not separating on the attractor since it does not distinguish the origin and the homoclinic orbit at the boundary of the attractor. \end{example} Next example shows that the class of linear functionals may be insufficient for finding good determining functionals. \begin{equation}gin{example}\lambdabel{Ex4.linbad} Let $H=\R^4$ and let the dynamical system $S(t)$ be determined by the following system of linear ODEs: \begin{equation}gin{equation}\lambdabel{4.linear} \dot x=y,\ \ \dot y=-x,\ \ \dot z=u,\ \ \dot u=-z. \end{equation} Of course, this DS is not dissipative, but we may correct the right-hand sides outside of a large ball making it dissipative. A general solution of this system has the form: \begin{equation}gin{equation*} x=A\sin(t+\phi_1),\ y=A\cos(t+\phi_1),\ \ z=B\sin(t+\phi_2),\ \ u=B\cos(t+\phi_2), \end{equation*} where the amplitudes $A,B$ and phases $\phi_1$ and $\phi_2$ are arbitrary. We see that this system has only one equilibrium (this property can be easily preserved under the correction of the right-hand sides for making the system dissipative), therefore, we know from a general theory that the determining dimension of this system is one. In { a} fact, it is not difficult to see that the desired functional can be found in the class of quadratic functionals even for the initial system \eqref{4.linear}. 
We claim that such a functional cannot be linear. Indeed, let $$ F=\alpha x+\beta y+\gamma z+\delta u $$ be such a functional, where not all of $\alpha,\beta,\gamma,\delta$ are equal to zero. We claim that there are non-zero amplitudes $A,B$ and phases $\phi_1,\phi_2$ such that $F$ vanishes identically along the corresponding trajectory (and therefore does not distinguish it from the zero equilibrium), i.e. \begin{equation}\label{4.trig} A\left(\alpha\cos(t-\phi_1)+\beta\sin(t-\phi_1)\right)+B\left(\gamma\cos(t-\phi_2)+\delta\sin(t-\phi_2)\right)\equiv0. \end{equation} Indeed, after elementary transformations, \eqref{4.trig} is equivalent to $$ AA_{\alpha,\beta}\cos(t-\phi_1-\phi_{\alpha,\beta})+BA_{\gamma,\delta}\cos(t-\phi_2-\phi_{\gamma,\delta})\equiv0 $$ for appropriate amplitudes $A_{\alpha,\beta}$ and $A_{\gamma,\delta}$ and phases $\phi_{\alpha,\beta}$ and $\phi_{\gamma,\delta}$ depending only on $\alpha,\beta,\gamma,\delta$. In turn, this identity is satisfied if $$ \sin(\phi_1-\phi_2+\phi_{\alpha,\beta}-\phi_{\gamma,\delta})=0 $$ and $$ AA_{\alpha,\beta}+BA_{\gamma,\delta}\cos(\phi_1-\phi_2+\phi_{\alpha,\beta}-\phi_{\gamma,\delta})=0. $$ The first equation can be satisfied by the choice of the phases $\phi_1$ and $\phi_2$. Then the second one becomes a linear equation for the amplitudes $A$ and $B$ which always has a non-trivial solution. Thus, $F$ cannot be a determining functional. \end{example} The next two examples are related to the case of one spatial dimension. \begin{example}\label{Ex4.dir} Let us consider the following 1D semilinear heat equation \begin{equation}\label{4.heat} \partial_t u=\nu\partial_x^2u-f(u)+g,\ \ x\in[0,L], \ \ \nu>0 \end{equation} endowed with Dirichlet boundary conditions $$ u\big|_{x=0}=u\big|_{x=L}=0. $$ Assume also that $f\in C^1(\R,\R)$ satisfies some dissipativity conditions, say $f(u)u\ge-C$. Then, equation \eqref{4.heat} generates a dissipative DS in the phase space $H=L^2(0,L)$ and this DS possesses a global attractor $\mathcal A$ which is bounded at least in $C^2([0,L])$, see e.g. \cite{BV92}. \par Moreover, the equilibria $\mathcal R$ of this problem satisfy the second-order ODE $$ \nu u''(x)-f(u(x))+g=0,\ \ u(0)=u(L)=0. $$ Thus, the map $u\to u'(0)$ gives a homeomorphic (and even smooth) embedding of $\mathcal R$ into $\R$, so $\dim_{emb}(\mathcal R)=1$. Therefore, we expect that $\dim_{det}(S(t),H)=1$ (or at most $2$, according to Corollary \ref{Cor3.det-R}). A possible explicit form of the determining functional is well known here: $F(u):= u\big|_{x=x_0}$, where $x_0>0$ is small enough, see \cite{kukavica}. Indeed, let $u_1(t),u_2(t)\in\mathcal A$ be two complete trajectories of \eqref{4.heat} belonging to the attractor such that $u_1(t,x_0)\equiv u_2(t,x_0)$. Then the function $v(t)=u_1(t)-u_2(t)$ solves \begin{equation}\label{4.dif} \partial_t v=\nu\partial_x^2v-l(t)v,\ \ v\big|_{x=0}=v\big|_{x=x_0}=0, \end{equation} where $l(t):=\int_0^1f'(su_1(t)+(1-s)u_2(t))\,ds$. Since the attractor $\mathcal A$ is bounded in $C$, the function $l$ is also globally bounded: $\|l(t)\|_{C[0,L]}\le L$.
Multiplying equation \eqref{4.dif} by $v$, integrating in $x\in[0,x_0]$ and using the fact that the first eigenvalue of $-\partial_x^2$ with Dirichlet boundary conditions on the interval $(0,x_0)$ is $(\frac\pi{x_0})^2$, we get $$ \frac12\frac d{dt}\|v(t)\|^2_{L^2}+\nu\left(\frac\pi{x_0}\right)^2\|v(t)\|^2_{L^2}-L\|v(t)\|^2_{L^2}\le0. $$ Fixing $x_0>0$ small enough that $\nu\left(\frac\pi{x_0}\right)^2>L$, applying the Gronwall inequality and using that $\|v(t)\|_{L^2}$ remains bounded as $t\to-\infty$, we conclude that $v(t)\equiv0$ for all $t\in\R$ and $x\in[0,x_0]$. Thus, the trajectories $u_1(t,x)$ and $u_2(t,x)$ coincide for all $t$ and all $x\in[0,x_0]$. Using now the arguments related to logarithmic convexity or Carleman-type estimates (which work for a much more general class of equations, see \cite{RL}), we conclude that the trajectories $u_1$ and $u_2$ coincide everywhere. Thus, $F$ is separating on the attractor and therefore is asymptotically determining. Being pedantic, we need to note that the functional $F(u)=u\big|_{x=x_0}$ is not defined on the phase space $H$, but only on its proper subspace $C[0,L]$ (this is a typical situation for determining nodes, see \cite{OT08}). However, we have an instantaneous $H\to C[0,L]$ smoothing property, so if we start from $u_0\in H$, the value $F(u(t))$ is defined for all $t>0$, and we simply ignore this small inconsistency. \end{example} \begin{example}\label{Ex4.per} Let us consider the same equation \eqref{4.heat} on $[0,L]$, but endowed with {\it periodic} boundary conditions. In this case we do not have the condition $u(0)=0$ for the equilibria, so the set $\mathcal R$ of equilibria is naturally embedded into $\R^2$, not into $\R^1$, by the map $u\to (u\big|_{x=0},u'\big|_{x=0})$. Thus, we cannot expect that the determining dimension is one. Moreover, at least in the case when $g=const$, equation \eqref{4.heat} possesses the spatial shift as a symmetry and, therefore, any nontrivial equilibrium generates a whole circle of equilibria. Since a circle cannot be homeomorphically embedded into $\R^1$, the determining dimension must be at least $2$. We claim that it is indeed $2$ and that the determining functionals can be taken in the form \begin{equation}\label{4.2-det} F_1(u):=u\big|_{x=0},\ \ F_2(u):=u\big|_{x=x_0}. \end{equation} Indeed, arguing exactly as in the previous example, we see that the system $\mathcal F=\{F_1,F_2\}$ is asymptotically determining if $x_0>0$ is small enough. \end{example} The next natural example shows that the determining dimension may be finite and small even if the corresponding global attractor is infinite-dimensional. \begin{example}\label{Ex4.inf} Let us consider the semilinear heat equation \eqref{4.heat} on the whole line $x\in\R$. The natural phase space for this problem is the so-called {\it uniformly-local} space \begin{equation}\label{4.ul} H=L^2_b(\R):=\{u\in L^2_{loc}(\R),\ \|u\|_{L^2_b}:=\sup_{x\in\R}\|u\|_{L^2(x,x+1)}<\infty\}. \end{equation} It is known that, under natural dissipativity assumptions on $f\in C^1(\R)$ (e.g., $f(u)u\ge-C+\alpha u^2$ with $\alpha>0$), this equation generates a dissipative DS $S(t)$ in $H$ for every $g\in L^2_b(\R)$. Moreover, this DS possesses the so-called {\it locally compact} global attractor $\mathcal A$, that is, a strictly invariant set which is bounded in $H$, compact in $L^2_{loc}(\R)$ and attracts bounded (in $H$) sets in the topology of $L^2_{loc}(\R)$, see \cite{MZ08} and references therein.
Note that, in contrast to the case of bounded domains, the compactness and attraction properties in $H$ fail in general in the case of unbounded domains. It is also known that, at least in the case where equation \eqref{4.heat} possesses a spatially homogeneous exponentially unstable equilibrium (e.g., in the case where $g=0$ and $f(u)=u^3-u$), the box-counting dimension of $\mathcal A$ is infinite (actually, $\mathcal A$ contains submanifolds of any finite dimension), see \cite{MZ08}. \par Nevertheless, the system of linear functionals \eqref{4.2-det} remains determining for this equation for {\it exactly} the same reasons as in Examples \ref{Ex4.dir} and \ref{Ex4.per}. Thus, $$ \dim_{det}(S(t),H)=2. $$ A single determining functional does not exist in general for the reasons explained in Example \ref{Ex4.per}. The next example shows that in some extremely degenerate cases, the dimension of the attractor may coincide with the embedding dimension of the set of equilibria and with the number $N$ of determining Fourier modes from Proposition \ref{Prop2.modes}. \begin{example}\label{Ex4.deg} Let the assumptions of Proposition \ref{Prop2.modes} hold and let, for simplicity, $\alpha=0$ (i.e., $f$ is globally Lipschitz as a map from $H$ to $H$ with Lipschitz constant $L$). Let us define $N$ as the minimal natural number for which $L<\lambda_{N+1}$ is satisfied. Let us fix a smooth function $\phi:\R\to\R$, $\phi\in C_0^\infty(\R)$, in such a way that $|\phi'(x)|\le1$ for all $x$ and $\phi(x)=x$ for all $x\in[-1,1]$. Then, finally, take \begin{equation}\label{4.deg} f(u)=\sum_{n=1}^N\lambda_n\phi((u,e_n))e_n,\ \ g\equiv0. \end{equation} It is clear that $f$ is bounded and globally Lipschitz with Lipschitz constant $\lambda_N\le L$, so equation \eqref{2.abs} possesses a global attractor $\mathcal A$ whose fractal dimension is exactly $N$. Indeed, by the construction of $f$, $Q_Nf(u)\equiv0$ and, therefore, $\mathcal A\subset P_NH=\R^N$. On the other hand, by the construction of $f$, the set $\mathcal R$ of equilibria contains the cube $[-1,1]^N\subset P_NH=\R^N$ and, therefore, $\dim_{emb}(\mathcal R)=N$. At the same time, by Proposition \ref{Prop2.modes}, the first $N$ Fourier modes are asymptotically determining and, therefore, $$ \dim_{det}(S(t),H)=N. $$ \end{example} We now give an example of a non-dissipative and even {\it conservative} system with determining dimension one. \begin{example}\label{Ex4.wave} Let us consider the following 1D wave equation: \begin{equation}\label{4.wave} \partial^2_tu=\partial_x^2u,\ \ x\in(0,\pi),\ \ u\big|_{x=0}=u\big|_{x=\pi}=0,\ \ \xi_u\big|_{t=0}=\xi_0, \end{equation} where $\xi_u(t):=\{u(t),\partial_t u(t)\}$. It is well known that problem \eqref{4.wave} is well-posed in the energy phase space $E:=H^1_0(0,\pi)\times L^2(0,\pi)$ and that the energy identity holds: \begin{equation} \|\partial_t u(t)\|_{L^2}^2+\|\partial_x u(t)\|^2_{L^2}=const. \end{equation} Moreover, the solution $u(t)$ can be found explicitly in terms of the sine Fourier series: \begin{equation}\label{4.sol} u(t)=\sum_{n=1}^\infty (A_n\cos(nt)+B_n\sin(nt))\sin (nx), \end{equation} where $A_n=\frac2{\pi}(u(0),\sin(nx))$ and $B_n=\frac2{n\pi}(u'(0),\sin(nx))$. Crucial for us is the fact that the function \eqref{4.sol} is an {\it almost-periodic} function of time with values in $H^1_0$.
Let us consider now a linear functional on $H=L^2(0,\pi)$, i.e., \begin{equation} Fu=(l,u)=\sum_{n=1}^\infty l_nu_n, \end{equation} where $l_n$ and $u_n$ are the sine Fourier coefficients of $l\in H$ and $u$, respectively. Then \begin{equation} Fu(t)=\sum_{n=1}^\infty \left(l_nA_n\cos(nt)+l_nB_n\sin(nt)\right) \end{equation} is a scalar almost-periodic function. Since the Fourier coefficients of an almost-periodic function are uniquely determined by the function, we have $$ Fu(t)\equiv0\ \ \text{ if and only if }\ \ l_nA_n=l_nB_n=0\ \text{ for all }n\in\mathbb N, $$ see \cite{LZ} for details. Thus, if we take a generic function $l$ (for which $l_n\ne0$ for all $n\in\mathbb N$), the identity $Fu(t)\equiv0$ will imply that $A_n=B_n=0$ for all $n$ and, therefore, $u(t)\equiv0$. Since the problem is linear, applying this to the difference of two trajectories shows that $F$ is separating on the set of complete trajectories. It remains to note that, since any trajectory of \eqref{4.wave} is almost-periodic in $E$, the $\omega$-limit set of any trajectory exists and is compact in $E$. Then, by Proposition \ref{Prop1.det} and Remark \ref{Rem1.strange}, $F$ is also asymptotically determining, so the determining dimension of this system is one. \end{example} We conclude this section with a somewhat counterintuitive example which shows that a multiple instability can be removed by a feedback control based on a single mode. Such examples are well known in control theory (in fact, they are particular cases of the famous Kalman rank theorem, see \cite{Kal,Kal1}), but seem to have been somewhat overlooked in the theory of determining functionals. This example also gives an alternative explicit form for the extra terms in Propositions \ref{Prop2.assym} and \ref{Prop3.assym}. \begin{example}\label{Ex4.feed} Let us consider the following linear problem \begin{equation}\label{4.linheat} \partial_t u=\nu\partial_x^2 u-a(x)u,\ \ x\in(0,L),\ \ \ u\big|_{x=0}=u\big|_{x=L}=0, \end{equation} where $a(x)$ is a given function (belonging to $C[0,L]$ for simplicity). We would like to stabilize the zero equilibrium of this equation by a rank-one linear feedback control given by the term $(l,u)w(x)$ for properly chosen functions $l,w\in H:=L^2(0,L)$. In other words, we need to find $l$ and $w$ such that the equation \begin{equation}\label{4.feed} \partial_t v=\nu\partial_x^2v-a(x)v+(l,v)w, \ \ \ v\big|_{x=0}=v\big|_{x=L}=0 \end{equation} becomes exponentially stable. Let $A=\nu\partial_x^2-a(x)$ be the self-adjoint operator in $H=L^2(0,L)$ endowed with homogeneous Dirichlet boundary conditions, and let $\mu_n$ and $\xi_n$ be the corresponding eigenvalues and eigenvectors, enumerated so that the eigenvalues are non-increasing. Then, since $\mu_n\to-\infty$ as $n\to\infty$, there are only finitely many unstable eigenmodes; let $N$ be the number of such modes. Thus, the higher modes are already stable, so we need not stabilize them, and we may assume that $l,w\in P_NH$. This reduces our problem to the finite-dimensional problem in $\R^N=P_NH$: \begin{equation}\label{4.fin} \partial_t v=A_Nv+(l,v)w,\ \ A_N=\operatorname{diag}\{\mu_1,\cdots,\mu_N\},\ \ l,v,w\in\R^N. \end{equation} Crucial for us is that the eigenvalues of $A$ are all {\it simple} (here we have essentially used that our problem is scalar and has only one spatial dimension); therefore, the matrix $A_N$ possesses a cyclic vector $e=e_1$. Let us consider the corresponding cyclic basis $\{e_1,\cdots,e_N\}$ in $\R^N$: $$ A_Ne_1=e_2,\ \cdots,\ A_Ne_{N-1}=e_N,\ \ A_Ne_N=\alpha_1e_1+\cdots+\alpha_Ne_N. $$
In this basis, the matrix $A_N$ reads \begin{equation} A_N=\left(\begin{matrix}0&1&0&0&\cdots&0\\ 0&0&1&0&\cdots&0\\ \vdots&\vdots&\vdots&\vdots&\cdots&\vdots\\ 0&0&0&0&\cdots&1\\ \alpha_1&\alpha_2&\alpha_3&\alpha_4&\cdots&\alpha_N \end{matrix}\right). \end{equation} Note also that the coefficients $\alpha_1,\cdots,\alpha_N$ are nothing more than (up to their signs) the coefficients of the characteristic polynomial of the matrix $A_N$. Thus, controlling these coefficients, we control the spectrum of the matrix $A_N$. Let us now take $w=e_N$. Then the matrix which corresponds to the rank-one operator $(l,u)e_N$ reads \begin{equation} \left(\begin{matrix}0&0&0&0&\cdots&0\\ 0&0&0&0&\cdots&0\\ \vdots&\vdots&\vdots&\vdots&\cdots&\vdots\\ 0&0&0&0&\cdots&0\\ l_1&l_2&l_3&l_4&\cdots&l_N \end{matrix}\right). \end{equation} Thus, choosing the coefficients $l_1,\cdots,l_N$ in a proper way, we can change the characteristic polynomial of $A_N+(l,\cdot)e_N$ arbitrarily. In particular, we can make all of its roots real and negative, and this gives us the desired feedback control. \end{example} \section{Open problems}\label{s5} In this concluding section we briefly discuss some interesting open problems related to the theory of determining functionals. We start with a natural question related to the Navier--Stokes equations. \begin{problem}\label{Pr5.pressure} It looks natural and interesting to study non-linear determining functionals satisfying some extra restrictions. For instance, let us consider the Navier--Stokes problem \begin{equation}\label{5.NS} \partial_t u+(u,\nabla_x)u+\nabla_x p-\Delta_x u=g,\ \ \operatorname{div} u=0 \end{equation} in a bounded domain $\Omega$. Here $u$ is an unknown velocity field and $p$ is an unknown pressure. The following question is of great interest, both theoretical and applied: \par {\it Is it possible to find determining functionals depending on the pressure $p$ only?} \par Note that the pressure can be expressed through the velocity via $$ \Delta_x p=-\sum_{i,j}\partial_{x_i}\partial_{x_j}(u_iu_j)+\operatorname{div} g $$ and, therefore, even if the desired functionals are linear in the pressure, they can be expressed as {\it non-linear} (quadratic) ones in terms of the velocity vector field. This problem is one of the possible extra motivations for studying non-linear determining functionals. \end{problem} The next problem is related to the actual smoothness of the reduction to DDEs. \begin{problem}\label{Pr5.smo} We recall that the non-linearity $\bar\Theta$ in the constructed delay differential equation \eqref{3.Z-ode} is a priori only continuous, although we expect that in general it is at least H\"older continuous (to verify this, one should use the H\"older Man\'e projection theorem together with the appropriate version of the Takens theorem). The situation with further smoothness of $\bar\Theta$ is more delicate. Indeed, the original finite-dimensional version of the Takens theorem, see \cite{Tak}, deals with smooth maps and diffeomorphisms; however, the infinite-dimensional version is usually verified using the H\"older Man\'e projection theorem as an intermediate step, and this leads to a drastic loss of smoothness, making everything only H\"older continuous, see \cite{SYC91} or \cite{R11}. It is not clear how essential this loss of smoothness is and whether or not the problem can be overcome using more sophisticated techniques and allowing infinite delay.
\par It also worth mentioning that the nonlinearity $\Phi_0$ in the particular case of the reduction related to Fourier modes is smooth (its smoothness is restricted by the smoothness of the map $\Phi_0$ only) which gives a hope that the smoothness problem can be overcome in a general case as well. On the other hand, as known, for the "true" finite-dimensional reduction related to Man\'e projection theorem, this loss of smoothness is crucial and, to the best of our knowledge, can be overcome only in the case where the considered DS possesses an IM, see \cite{R11, Z14} and references therein. \end{problem} Our final open problem is related to application to data assimilation, namely, with explicit constructions of recovering equations \eqref{3.as} in the cases where the explicit construction of determining functionals is available. \begin{equation}gin{problem}\lambdabel{Pr5.Edr} We know from Example \ref{Ex4.dir} that reaction-diffusion equation \eqref{4.heat} possesses a single determining functional $F(u):=u\big|_{x=x_0}$, where $x_0>0$ is small enough. Therefore, according to Proposition \ref{Prop3.assym}, any trajectory $u(t)$ on the attractor can be recovered by its values at point $x=x_0$ by solving equation \eqref{3.as}. However, the corresponding function $\Theta_N$ involved into this equation is rather complicated and requires us to compute the attractor $\mathcal A$ as well as to verify that $F$ satisfies the Takens theorem before we will be able to use equations \eqref{3.as}. On the other hand, as shown in \cite{AT14}, one can use a very simple explicit form of function $\Theta_N$ (similar to what is used in \eqref{2.as}) if we take sufficiently many nodes $0<x_1<x_2<\cdots<x_N<L$ and the corresponding functionals. Thus, a natural question arises here: \par {\it Is it possible to find a simpler form of equations \eqref{3.as} which do not require to compute the attractor $\mathcal A$?} \par Example \ref{Ex4.feed} gives a partial answer to this question for the case of {\it linear} 1D parabolic equation. However, it is not clear how to extend it to the non-linear case of equation \eqref{4.heat} and it would be also nice to use the observation functional in the form $F(u)=u\big|_{x=x_0}$. \par Revising Example \ref{Ex4.dir}, we see that this problem is closely related to the regularization of the ill posed (overdetermined) boundary value problem: $$ \partial_t w=\nu\partial_x^2w-l(t)w,\ \ w\big|_{x=x_0}=\partial_x w\big|_{x=x_0}=w\big|_{x=L}=0. $$ We expect that, regularizing this problem properly and using the Carleman type estimates on the interval $(x_0,L)$ (see e.g. \cite{RL}), one should be able to construct the approximating solution $v(t,x)$ in such a way that $v(t,x)\to u(t,x)$ as $t\to\infty$ exponentially fast on the whole interval $x\in[0,L]$. We will return to this topic somewhere else. \begin{equation}gin{comment} This can be done, e.g., by adding the extra 3rd order $\varepsilon\partial_x^3w$ term, so we expect that the following analogue of recovering system \eqref{3.as} works: \begin{equation}gin{equation} \begin{equation}gin{cases} \partial_t v=\nu\partial_x^2 v+e^{-\delta t}\chi_{[x_0,L]}(x)\partial_x^3v-f(v)=g,\\ v\big|_{x=0}=v\big|_{x=L}=0,\\ v\big|_{x=x_0-}=v\big|_{x=x_0+}=u\big|_{x=x_0},\ \ \partial_x v\big|_{x=x_0-}=\partial_x v\big|_{x=x_0+}, \end{cases} \end{equation} where $\chi_{[x_0,L]}(x)$ is a characteristic function of the interval $[x_0,L]$ and $\delta>0$ is small enough. 
Then, arguing exactly as in Example \ref{Ex4.dir}, we get \begin{equation}gin{equation} \|v(t)-u(t)\|_{L^2(0,x_0)}\le e^{-\begin{equation}ta t}\|v(t)-u(0)\|_{L^2(0,x_0)} \end{equation} for some positive $\begin{equation}ta$. It is important to fix $\delta<\begin{equation}ta$, then it is possible to show that the solution $v(t)$ remains bounded as $t\to\infty$. On the other hand, the Carleman type estimates on the interval $(x_0,L)$ (see e.g. \cite{RL}) should be enough to verify that $v(t,x)\to u(t,x)$ exponentially fast on this interval as well. We will give a detailed prove of this fact somewhere else. \end{comment} \end{problem} \begin{equation}gin{thebibliography}{9} \bibitem{AT14} A. Azouani and E. Titi, {\it Feedback control of nonlinear dissipative systems by finite determining parameters—a reaction–diffusion paradigm}, Evol. Equ. Control Theory 3 (2014) 579--594. \bibitem{AT13} A. Azouani, E. Olson and E. Titi, {\it Continuous data assimilation using general interpolant observables}, J. Nonlinear Sci. 24 (2013) 1--27. \bibitem{BV92} A. Babin and M. Vishik, {\it Attractors of evolution equations,} Studies in Mathematics and its Applications, 25. North-Holland Publishing Co., Amsterdam, 1992. \bibitem{3} A. Ben-Artzi, A. Eden, C. Foias, and B. Nicolaenko, {\it H\"older continuity for the inverse of Mane’s projection.} J. Math. Anal. Appl., vol.178, (1993) 22--29. \bibitem{CV02} V. Chepyzhov and M. Vishik, {\it Attractors for equations of mathematical physics,} American Mathematical Society Colloquium Publications, 49. American Mathematical Society, Providence, RI, 2002. \bibitem{Chu} I. D. Chueshov, {\it Theory of functionals that uniquely determine the asymptotic dynamics of infinite-dimensional dissipative systems}, Uspekhi Mat. Nauk, 1998, Volume 53, Issue 4(322), 77–124. \bibitem{Chu1} I. D. Chueshov, {\it Dynamics of quasi-stable dissipative systems.} Universitext, Springer, Cham, 2015. \bibitem{Cock1} B. Cockburn, D.A. Jones and E.S. Titi, {\it Degr\'es de libert\'e d\'eterminants pour \'equations non lin\'eaires dissipatives}, C.R. Acad. Sci.-Paris, S\'er. I 321 (1995), 563--568 . \bibitem{Cock2} B. Cockburn, D.A. Jones and E.S. Titi, {\it Estimating the number of asymptotic degrees of freedom for nonlinear dissipative systems}, Math. Comput. 97 (1997), 1073--1087. \bibitem {CFNT89} P. Constantin, C. Foias, B. Nicolaenko, and R. Temam, \emph{Inertial Manifolds for Dissipative Partial Differential Equations (Applied Mathematical Sciences, no. 70)}, Springer-Verlag, New York, 1989. \bibitem{EKZ13} A. Eden, V. Kalanarov and S. Zelik, {\it Counterexamples to the regularity of Mane projections and global attractors,} Russian Math Surveys, vol. 68, no. 2, (2013) 199--226. \bibitem{FMRT} C. Foias, O.P. Manley, R. Rosa, R. Temam, {\it Navier-Stokes Equations and Turbulence}, Cambridge University Press, 2001. \bibitem{FP67} C. Foias and G. Prodi, {\it Sur le comportement global des solutions non-stationnaires des equations de Navier–Stokes en dimension 2}, Rend. Sem. Mat. Univ. Padova 39 (1967) 1–34. \bibitem{FT} C. Foias, R. Temam, {\it Determination of the solutions of the Navier-Stokes equations by a set of nodal values,} Math. Comp. 43, no. 167 (1984) 117--133. \bibitem{FTT} C. Foias, E.S. Titi, {\it Determining nodes, finite difference schemes and inertial mani- folds,} Nonlinearity 4, no. 1 (1991) 135--153. \bibitem {FST88} C. Foias, G. Sell, and R. Temam, \emph{Inertial manifolds for nonlinear evolutionary equations}, J. Differential Equations, vol. 73, no. 2, (1988) 309--353. 
\bibitem{ha} J. Hale, {\it Asymptotic Behaviour of Dissipative Systems,} Math. Surveys and Monographs, American Mathematical Society, Providence, RI, 1987. \bibitem{HR} J. Hale and G. Raugel, {\it Regularity, determining modes and Galerkin methods}, Journal de Math\'ematiques Pures et Appliqu\'ees, vol. 82, no. 6 (2003) 1075--1136. \bibitem{hen} D. Henry, {\it Geometric theory of semilinear parabolic equations.} Lecture Notes in Mathematics, 840. Springer-Verlag, Berlin--New York, 1981. \bibitem{hunt} B. Hunt and V. Kaloshin, {\it Regularity of embeddings of infinite-dimensional fractal sets into finite-dimensional spaces.} Nonlinearity, vol. 12, (1999) 1263--1275. \bibitem{JT} D.A. Jones, E.S. Titi, {\it Upper bounds on the number of determining modes, nodes, and volume elements for the Navier-Stokes equations,} Indiana Univ. Math. J. 42, no. 3 (1993), 875--887. \bibitem{Kal} R. Kalman, {\it Contributions to the theory of optimal control}, Bol. Soc. Mat. Mex. 5 (1960) 102--119. \bibitem{Kal1} R. Kalman, Y. Ho, and K. Narendra, {\it Controllability of linear dynamical systems}, Contrib. Diff. Eqs. 1 (1963) 189--213. \bibitem{KZ18} A. Kostianko and S. Zelik, {\it Inertial manifolds for 1D reaction-diffusion-advection systems. Part II: Periodic boundary conditions,} Commun. Pure Appl. Anal., vol. 17, no. 1, (2018) 265--317. \bibitem{KZ17} A. Kostianko and S. Zelik, {\it Inertial manifolds for 1D reaction-diffusion-advection systems. Part I: Dirichlet and Neumann boundary conditions,} Commun. Pure Appl. Anal., vol. 16, no. 6, (2017) 2357--2376. \bibitem{kukavica} I. Kukavica, {\it On the number of determining nodes for the Ginzburg-Landau equation}, Nonlinearity 5 (1992) 997--1006. \bibitem{kuksin} S. Kuksin and A. Shirikyan, {\it Mathematics of Two-Dimensional Turbulence}, Cambridge Tracts in Mathematics 194, Cambridge University Press, 2012. \bibitem{Lad72} O. Ladyzhenskaya, {\it On a dynamical system generated by Navier--Stokes equations}, Zap. Nauchn. Sem. LOMI, Volume 27 (1972) 91--115. \bibitem{LZ} B. Levitan and V. Zhikov, {\it Almost-periodic functions and differential equations}, Cambridge University Press, 1982. \bibitem{M-PS88} J. Mallet-Paret and G. Sell, \emph{Inertial manifolds for reaction diffusion equations in higher space dimensions}, J. Am. Math. Soc., vol. 1, no. 4, (1988) 805--866. \bibitem{MPSS93} J. Mallet-Paret, G. Sell, and Z. Shao, {\it Obstructions to the existence of normally hyperbolic inertial manifolds,} Indiana Univ. Math. J., 42, no. 3, (1993) 1027--1055. \bibitem{M91} M. Miklavcic, \emph{A sharp condition for existence of an inertial manifold}, J. Dyn. Differ. Equations, vol. 3, no. 3, (1991) 437--456. \bibitem{MZ08} A. Miranville and S. Zelik, \emph{Attractors for dissipative partial differential equations in bounded and unbounded domains}, in: Handbook of Differential Equations: Evolutionary Equations, vol. IV, Elsevier/North-Holland, Amsterdam, 2008. \bibitem{OT03} E. Olson and E. Titi, {\it Determining modes for continuous data assimilation in 2D turbulence}, J. Stat. Phys. 113 (2003) 799--840. \bibitem{OT08} E. Olson and E. Titi, {\it Determining modes and Grashof number in 2D turbulence}, Theor. Comput. Fluid Dyn. 22 (2008) 327--339. \bibitem{R01} J. Robinson, \emph{Infinite-dimensional Dynamical Systems}, Cambridge University Press, 2001. \bibitem{R11} J. Robinson, {\it Dimensions, embeddings, and attractors,} Cambridge University Press, Cambridge, 2011. \bibitem{R94} A. Romanov, \emph{Sharp estimates for the dimension of inertial manifolds for nonlinear parabolic equations}, Izv.
Math. vol. 43, no. 1, (1994) 31--47. \bibitem{Rom00} A. Romanov, {\it Three counterexamples in the theory of inertial manifolds,} Math. Notes, vol. 68, no. 3–4, (2000) 378--385. \bibitem{RL} J. Rousseau and G. Lebeau, {\it On Carleman Estimates for Elliptic and Parabolic Operators. Applications to Unique Continuation and Control of Parabolic Equations}, ESAIM: COCV 18 (2012) 712--747. \bibitem{SYC91} T. Sauer, J. Yorke, and M. Casdagli, {\it Embedology,} J. Stat. Phys. 71(1991) 529–547. \bibitem {SY02} G. Sell and Y. You, \emph{Dynamics of evolutionary equations}, Springer, New York, 2002. \bibitem{stein} E. Stein, {\it Singular Integrals and Differentiability Properties of Functions}, Princeton Univ. Press, Princeton, 1970. \bibitem{Tak} F. Takens, {\it Detecting strange attractors in turbulence}, Lecture Notes in Mathematics No. 898, (1981) 366--381. \bibitem {T97} R. Temam, \emph{Infinite-Dimensional Dynamical systems in Mechanics and Physics}, second edition, Applied Mathematical Sciences, vol 68, Springer-Verlag, New York, 1997. \bibitem {Z14} S. Zelik, \emph{Inertial manifolds and finite-dimensional reduction for dissipative PDEs}, Proc. Royal Soc. Edinburgh 144, vol. 6, (2014) 1245--1327. \end{thebibliography} \end{document}
\begin{document} \title{Exchange-Free Computation on an Unknown Qubit at a Distance} \author{Hatim Salih} \mathrm{e}mail{[email protected]} \affiliation{Quantum Engineering Technology Laboratory, Department of Electrical and Electronic Engineering, University of Bristol, Woodland Road, Bristol, BS8 1UB, UK} \affiliation{Quantum Technology Enterprise Centre, HH Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol, BS8 1TL, UK} \author{Jonte R. Hance} \affiliation{Quantum Engineering Technology Laboratory, Department of Electrical and Electronic Engineering, University of Bristol, Woodland Road, Bristol, BS8 1UB, UK} \author{Will McCutcheon} \affiliation{Quantum Engineering Technology Laboratory, Department of Electrical and Electronic Engineering, University of Bristol, Woodland Road, Bristol, BS8 1UB, UK} \affiliation{Institute of Photonics and Quantum Science, School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, UK} \author{Terry Rudolph} \affiliation{Department of Physics, Imperial College London, Prince Consort Road, London SW7 2AZ, United Kingdom} \author{John Rarity} \affiliation{Quantum Engineering Technology Laboratory, Department of Electrical and Electronic Engineering, University of Bristol, Woodland Road, Bristol, BS8 1UB, UK} \date{\today} \begin{abstract} We present a way of directly manipulating an arbitrary qubit, without the exchange of any particles. This includes as an application the exchange-free preparation of an arbitrary quantum state at Alice by a remote classical Bob. As a result, we are able to propose a protocol that allows one party to directly enact, by means of a suitable program, any computation exchange-free on a remote second party's unknown qubit. Further, we show how to use this for the exchange-free control of a universal two-qubit gate, thus opening the possibility of directly enacting any desired algorithm remotely on a programmable quantum circuit. \mathrm{e}nd{abstract} \maketitle Quantum physics opens up the surprising possibility of obtaining knowledge from, or through, places where no information-carrying particles have been. This was first proposed and subsequently demonstrated experimentally in the context of computing \cite{Mitchison2001CFComputation, Hosten2006CounterComp}, where the result of a computation is learnt based on the phenomena of interaction-free measurement and the Zeno effect \cite{Elitzur1993Bomb, Kwiat1995IFM, kwiat1999high, Misra1977Zeno, Rudolph2000Zeno}. More specifically, without any photons entering or leaving an optical circuit, the result of a computation is obtained without the computer ever `running'. Just as intriguing was the proposal and subsequent experimental demonstration of a simple quantum scheme for allowing two remote parties to share a cryptographic random bit-string, without exchanging any information-carrying particles \cite{Noh2009CounterfactualCrypto,Liu2012NohDemons}. The fact that the protocol had limited maximum-efficiency was not a serious a drawback for its purpose since the shared information was random, meaning failed attempts could simply be discarded in the end. This, however, begged the question whether efficient, deterministic communication was possible exchange-free, that is without particles crossing the communication channel. 
In 2013, building on the ideas above, Salih et al devised a scheme allowing two remote parties to efficiently and deterministically share a message exchange-free, in the limit of a large number of protocol cycles and ideal practical implementation \cite{Salih2013Protocol}. The protocol was recently demonstrated experimentally by Pan and colleagues \cite{Pan2017Experiment}. Importantly, the previously-heated debate over whether the laws of physics even allow such communication (for both bit values) seems to be settling; Nature does allow exchange-free communication (and therefore computation) \cite{Vaidman2014SalihCommProtocol,Salih2014ReplyVaidmanComment,Griffith2016Path,Salih2018CommentPath,Griffiths2018Reply,Aharonov2019Modification, Salih2018Laws}. We present in what follows a protocol allowing a remote Bob to prepare any qubit he wishes at Alice without any particles passing between them, thus exchange-free. This is different from counterfactually sending a quantum state from Bob to Alice by means of counterportation \cite{Salih2016Qubit,*Salih2014Qubit,Salih2018Paradox}, in that Bob does not need to prepare a quantum object at his end (a quantum superposition of blocking and not blocking the optical communication channel) thus making the scheme much easier to implement. More generally, Bob can directly apply any arbitrary Bloch-sphere rotation to an unknown qubit at Alice---in other words, any single-qubit quantum computation. Note that we use ``exchange-free" and ``counterfactual" interchangeably. While we describe an optical realisation using photon polarisation, the scheme is in principle applicable to other physical implementations---and helps advance quantum information science. \begin{figure*} \centering \includegraphics[width=0.6\linewidth]{StatePrepZRot.pdf} \caption{Our exchange-free Phase Unit, which applies a phase determined by Bob to Alice's $H$-polarised input photon. The Phase Unit comprises an equivalent setup to that of Salih et al's 2013 protocol \cite{Salih2013Protocol}, but with an added phase-module in the dashed box. The optical switches each alter the paths at different times in the protocol to allow the photon to do the correct number of cycles. Optical switch $M1$ inserts the photon into the device, and keeps it in for $M$ outer cycles; optical switches $M2$ and $M3$ cycle the photon around for $N$ inner cycles per outer cycle. The Polarising Beamsplitters transmit $H$-polarised light, and reflect $V$-polarised light. The half wave plates are tuned to implement $\hat{\textbf{R}}_y(\theta)$ rotations on polarisation with $\theta$ of $\pi$, $\pi/M$ and $\pi/N$, as shown in the figure. As explained in the text, detectors $D_A$ and $D_B$ not clicking ensure that the photon has not been to Bob. After $M$ outer cycles, the photon is sent by $M1$ to the right. The photon only exits the Phase Unit if its polarisation had been flipped to $V$ as a result of Bob blocking the channel (which he does by switching his Switchable Mirror on) because of the action of the Polarising Beamsplitter in the dashed box. The phase plate (tuned to enact a $\hat{\textbf{R}}_z$, or phase, rotation, and realisable using a tiltable glass plate) adds a phase of $\pi/2L$ to the photon every time it passes through it, summing to $\pi/L$ every time it is sent $H$-polarised to the right by M1. Bob doesn't block for $k$ runs (out of a maximum $L$), then blocks, allowing him to set the final phase of the photon, $k\pi/L$, anywhere from $0$ to $\pi$, in increments of $\pi/L$. 
An initially $V$-polarised photon can be put through an altered version of this device to add a phase to it (identical, except for the $\pi/2$ half-wave plate being moved to above $M1$). The unit rotates Alice's qubit by $\hat{\textbf{R}}_z(k\pi/L)$.} \label{fig:phaseunit} \mathrm{e}nd{figure*} Our protocol consists of a number of nested outer interferometers, each containing a number of inner interferometers, as in Salih et al's 2013 protocol \cite{Salih2013Protocol}. We combine these interferometers into a device that we call a Phase Unit, allowing Bob to apply a relative phase to Alice's photonic qubit (Fig.\ref{fig:phaseunit}). We pair two Phase Units such that one applies some phase to Alice's $H$-polarised component, while the other applies an equal but opposite-sign phase to her $V$-polarised component, resulting in a $\hat{\textbf{R}}_z(\theta)$ rotator. By chaining three such $\hat{\textbf{R}}_z(\theta)$ rotators, interspersed with appropriate wave-plates, Bob can apply any arbitrary unitary to Alice's qubit, exchange-free (Fig.\ref{fig:OverallProt}). Note, we define the Bloch sphere for polarisation such that the poles are $\ket{H}$ and $\ket{V}$, and the rotations are \begin{equation} \hat{\textbf{R}}_x(\theta) = \begin{pmatrix} \cos{\big(\frac{\theta}{2}\big)} & -i\sin{\big(\frac{\theta}{2}\big)}\\ i\sin{\big(\frac{\theta}{2}\big)} & \cos{\big(\frac{\theta}{2}\big)} \mathrm{e}nd{pmatrix} = e^{-i\theta\hat{\sigma}_x/2} \mathrm{e}nd{equation} \begin{equation} \hat{\textbf{R}}_y(\theta) = \begin{pmatrix} \cos{\big(\frac{\theta}{2}\big)} & -\sin{\big(\frac{\theta}{2}\big)}\\ \sin{\big(\frac{\theta}{2}\big)} & \cos{\big(\frac{\theta}{2}\big)} \mathrm{e}nd{pmatrix} = e^{-i\theta\hat{\sigma}_y/2} \mathrm{e}nd{equation} \begin{equation} \hat{\textbf{R}}_z(\theta) = \begin{pmatrix} e^{-i\theta/2} & 0\\ 0 & e^{i\theta/2} \mathrm{e}nd{pmatrix} = e^{-i\theta\hat{\sigma}_z/2} \mathrm{e}nd{equation} for dummy variable $\theta$, and Pauli matrices $\hat{\sigma}_{x,y,z}$. We first go through Salih et al's 2013 protocol. However, we describe the protocol, following \cite{Salih2018Paradox}, without any reference to either interaction-free measurement or the Zeno effect of \cite{Misra1977Zeno,Elitzur1993Bomb}. In order to do this, we think of our detectors as being placed far enough, such that they perform no measurement before the photon had had time to exit the protocol. Any photonic component travelling towards either detector can thus be thought of as entering a loss mode, meaning that if the photon exits the protocol successfully then it cannot have taken the path towards that detector, and the detector will subsequently not register a click. To start with, a photon of state $a\ket{H}+b\ket{V}$ enters the outer interferometer through a half wave plate (HWP) tuned to apply a $\hat{\textbf{R}}_y(\pi/M)$ rotation. The photon then enters a polarising beam splitter (PBS), which transmits horizontal polarisation, but reflects vertical polarisation. The $V$-polarised component circles through a series of $N$ inner interferometers, where, in each, it goes through a HWP tuned to apply a $\hat{\textbf{R}}_y(\pi/N)$ rotation, then through another PBS. The $H$-polarised component from this PBS passes across the channel, from Alice to Bob, who can choose to block or not block, by switching on or off his switchable mirror. 
If he blocks, this $H$-polarised component goes into a loss mode towards detector $D_B$; if not, it returns to Alice's side, recombines at another PBS with the $V$-polarised component, then enters the next inner interferometer. After the chain of $N$ inner interferometers, the resulting components are then passed through one final PBS, sending any $H$-polarised component that has been to Bob into a loss mode towards detector $D_A$, before being recombined at another PBS with the $H$-polarised component from the arm of the outer interferometer. Importantly, neither detector clicking, ensures that the photon has not been to Bob. As each inner interferometer applies $\hat{\textbf{R}}_y(\pi/N)$, if Bob doesn't block, the rotations sum to \begin{equation} \begin{split} \hat{\textbf{U}}_{NB}^N=(e^{-i\pi\hat{\sigma}_y/2N})^N =e^{-i\pi\hat{\sigma}_y/2} = \hat{\textbf{R}}_y(\pi) \mathrm{e}nd{split} \mathrm{e}nd{equation} Therefore, the state after the inner interferometer chain is \begin{equation} \begin{split} \ket{V}_I\rightarrow\hat{\textbf{U}}_{NB}^N\ket{V}_I=\ket{H}_I\rightarrow Loss \mathrm{e}nd{split} \mathrm{e}nd{equation} This means the $V$-polarised component becomes $H$-polarised, entering the loss mode towards detector $D_A$ after the final PBS, meaning the only component of the wavefunction exiting the outer interferometer is the $H$-polarised one that went via the outer arm. Similarly, if Bob blocks for all inner interferometers, \begin{equation} \begin{split} \hat{\textbf{A}}_{B}^N=&\Bigg[ e^{-i\pi\hat{\sigma}_y/2N} \begin{pmatrix} 1 & 0\\ 0 & 0 \mathrm{e}nd{pmatrix}\Bigg]^N\\ =& \begin{pmatrix} \cos{(\frac{\pi}{2N})}^N & 0\\ \cos{(\frac{\pi}{2N})}^{N-1}\sin{(\frac{\pi}{2N})} & 0 \mathrm{e}nd{pmatrix} \mathrm{e}nd{split} \mathrm{e}nd{equation} Therefore, the state after an outer interferometer is \begin{equation} \begin{split} \ket{V}_I\rightarrow &\hat{\textbf{A}}_{B}^N\ket{V}_I\\ =&\cos{\big(\frac{\pi}{2N}\big)}^N\ket{V}_I+\cos{\big(\frac{\pi}{2N}\big)}^{N-1}\sin{\big(\frac{\pi}{2N}\big)}\ket{H}_I\\ \rightarrow& \cos{\big(\frac{\pi}{2N}\big)}^N\ket{V}+Loss \mathrm{e}nd{split} \mathrm{e}nd{equation} meaning some $V$-polarised component exits the outer interferometer. If Bob, doesn't block, the outer cycle applies \begin{equation} \begin{split} \begin{pmatrix} 1 & 0\\ 0 & 0 \mathrm{e}nd{pmatrix} e^{-i\pi\hat{\sigma}_y/2M} \mathrm{e}nd{split} \mathrm{e}nd{equation} If he does block, the outer cycle applies \begin{equation} \begin{split} \begin{pmatrix} 1 & 0\\ 0 & \cos{\big(\frac{\pi}{2N}\big)}^N \mathrm{e}nd{pmatrix} e^{-i\pi\hat{\sigma}_y/2M} \mathrm{e}nd{split} \mathrm{e}nd{equation} We repeat this $M$ times, starting with a $H$-polarised photon, and using a final PBS to split it into $H$- and $V$-polarised components. As Alice applies a $\hat{\textbf{R}}_y(\pi/M)$ rotation at the start of each outer interferometer, if Bob doesn't block, the state of the photon after $M$ outer cycles is \begin{equation} \begin{split} \cos{\big(\frac{\pi}{2M}\big)}^M\ket{H} \mathrm{e}nd{split} \mathrm{e}nd{equation} Therefore, if the photon isn't lost, it remains $H$-polarised. However, if Bob blocks, the photon after $M$ outer cycles (as $N\rightarrow\infty$) becomes $V$-polarised. To prepare any qubit at Alice, Bob needs to apply a relative phase between Alice's two component, which can be represented as a $\hat{\textbf{R}}_z(\theta)$ rotation. 
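As a quick sanity check of the cycle algebra above, the following minimal numerical sketch (our own illustration rather than part of the protocol; it assumes ideal optical components, writes states as $(H,V)$ amplitude pairs and multiplies the outer-cycle matrices given above) reproduces the two limiting behaviours for finite $M$ and $N$:
\begin{verbatim}
import numpy as np

def Ry(theta):
    # R_y(theta) acting on (H, V) amplitudes
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def outer_cycle(M, N, bob_blocks):
    # HWP R_y(pi/M); then the V component is either lost (Bob open:
    # the inner chain rotates V -> H, which is discarded at D_A) or
    # kept with amplitude cos(pi/2N)^N (Bob blocking every inner cycle).
    keep_V = np.cos(np.pi / (2 * N)) ** N if bob_blocks else 0.0
    return np.diag([1.0, keep_V]) @ Ry(np.pi / M)

def run(M, N, bob_blocks):
    state = np.array([1.0, 0.0])          # input photon |H>
    for _ in range(M):
        state = outer_cycle(M, N, bob_blocks) @ state
    return state

M, N = 20, 80
for blocks in (False, True):
    out = run(M, N, blocks)
    print("blocking" if blocks else "open    ",
          "amplitudes (H,V):", np.round(out, 3),
          "survival:", round(float(out @ out), 3))
\end{verbatim}
With Bob not blocking, the photon exits $H$-polarised with survival probability $\cos(\pi/2M)^{2M}$ (roughly $0.88$ for $M=20$); with Bob blocking, it exits predominantly $V$-polarised, approaching $\ket{V}$ exactly as $N\rightarrow\infty$, in agreement with the limiting cases derived above. We now return to how Bob applies the desired relative phase.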
Bob can implement this exchange-free using the device in Fig.\ref{fig:phaseunit}, for an $H$-polarised component, relative to some other $V$-polarised component (e.g. one separated beforehand using a polarising beamsplitter). We put this $H$-polarised component through one run of Salih et al's 2013 protocol, with Bob either always blocking or not blocking his channel. If he blocks, and the component exits $V$-polarised, the PBS sends it through a half wave plate that flips it to $H$-polarised, and it is kicked out of the device; however, if it is $H$-polarised, it goes through a phase plate (gaining a phase increase of $\pi/2L$), hits a mirror, goes back through the phase plate (gaining another phase increase of $\pi/2L$, for a total increase of $\pi/L$), and re-enters the device for another run. This is repeated $L$ times, with Bob blocking or not blocking for all outer cycles in a given run. After each run, the component goes into a PBS: if it is $H$-polarised, it gains a phase of $\pi/L$; if $V$-polarised, it is flipped to $H$-polarised and sent out from the unit. Bob first doesn't block for $k$ runs, applying a phase of $k\pi/L$, then blocks, applying the transformation \begin{equation} \ket{H} \rightarrow e^{ik\pi/L}\ket{H} \mathrm{e}nd{equation} When $N$ is finite, the rotations applied by each outer cycle when Bob blocks are not complete, meaning one run ($M$ outer cycles) doesn't fully rotate the state from $H$ to $V$. However, given Bob only blocks after the component has had a phase applied to it, to kick the component out of the device, any erroneous $H$-polarised component can be kept in the device by Bob not blocking for the remaining $L-k$ full runs afterwards, letting us treat the erroneous $H$-component as loss. While coarse-grained for finite $L$, as $L$ goes to infinity (with $0\leq k/L\leq1$), Bob can generate any relative phase for Alice's qubit, from $0$ to $\pi$. Further, by moving the $\pi/2$ half-wave plate from its location in Fig.\ref{fig:phaseunit} to the input, a similar phase can be added to a $V$-polarised component, relative to a $H$-polarised component. Moreover, the Phase Unit can be constructed to include Aharonov and Vaidman's clever modification of Salih et al's 2013 protocol \cite{Aharonov2019Modification}, satisfying their weak-measurement criterion for exchange-free communication. We do this by running the inner cycles for $2N$ cycles rather than $N$, except that for the case of Bob not blocking, he instead blocks for one of the $2N$ inner cycles, namely the $N$th inner cycle. This has the effect of helping to remove any lingering $V$ component exiting the inner interferometer of Fig.\ref{fig:phaseunit} due to imperfections in practical implementation. We now use our Phase Unit as the building block for a protocol where Bob can implement any arbitrary unitary onto Alice's qubit, exchange-free. \begin{figure} \centering \includegraphics[width=\linewidth]{OverallProt.pdf} \caption{The overall protocol, incorporating multiple Phase Units from Fig.\ref{fig:phaseunit}, as well as polarising beamsplitters (which transmit horizontally-polarised, and reflect vertically-polarised, light), as well as a quarter wave plate and its adjoint (conjugate-transpose). 
The setup allows Bob to implement any arbitrary unitary on any initial pure state $\ket{\psi}$ Alice inserts, entirely exchange-free.} \label{fig:OverallProt} \mathrm{e}nd{figure} Any arbitrary $2\times2$ unitary matrix can be written as \begin{equation} \begin{split} \hat{\textbf{U}} &=e^{i(2\alpha'-\beta'\hat{\sigma}_z-\gamma'\hat{\sigma}_y-\delta'\hat{\sigma}_z)/2}\\ &=e^{i\alpha'} \hat{\textbf{R}}_z(\beta') \hat{\textbf{R}}_y(\gamma') \hat{\textbf{R}}_z(\delta') \mathrm{e}nd{split} \mathrm{e}nd{equation} Note, the factor of $e^{i\alpha'}$ can be ignored, as it provides global rather than relative phase, which is unphysical for a quantum state \cite{nielsenchuang2002}. We can apply the $\hat{\textbf{R}}_z(\theta)$ rotations using the Phase Unit, and make a $\hat{\textbf{R}}_y(\theta)$ rotation by sandwiching a $\hat{\textbf{R}}_z(\theta)$ rotation between a $-\pi/4$-aligned Quarter Wave Plate, $\hat{\textbf{U}}_{QWP}$, and its adjoint, $\hat{\textbf{U}}^{\dagger}_{QWP}$, where \begin{equation} \begin{split} \hat{\textbf{U}}_{QWP} =& \hat{\textbf{R}}_x(-\pi/2) = e^{i\pi\hat{\sigma}_x/4}\\ \hat{\textbf{U}}^{\dagger}_{QWP} =& \hat{\textbf{R}}_x(\pi/2) = e^{-i\pi\hat{\sigma}_x/4} \mathrm{e}nd{split} \mathrm{e}nd{equation} We set \begin{equation} \begin{split} \beta' = 2\pi\beta/L,\; \gamma' = 2\pi\gamma/L,\; \delta' = 2\pi\delta/L \mathrm{e}nd{split} \mathrm{e}nd{equation} where, for the three Phase Unit runs, $k$ is $\beta,\;\gamma$ and $\delta$. The Phase Units form components of the overall protocol, as shown in Fig.\ref{fig:OverallProt}. Here, Alice first splits her input state $\ket{\psi}$ into $H$- and $V$-polarised components with a polarising beamsplitter (PBS), before putting each component through a Phase Unit, to generate equal and opposite phases on each. She recombines these at another PBS. Afterwards, she puts the components through a quarter wave plate, then through another run of PBS, Phase Unit, and PBS, then through the conjugate-transpose of the quarter-wave plate, tuned to convert the partial $\hat{\textbf{R}}_z$ rotation (phase rotation) into a partial $\hat{\textbf{R}}_y$ rotation. Finally, she applies another run of PBS, Phase Unit, and PBS to implement a second $\hat{\textbf{R}}_z$ rotation. Using these $\hat{\textbf{R}}_z$ and $\hat{\textbf{R}}_y$ rotations, Bob can implement any arbitrary rotation on the surface of the Bloch sphere on Alice's state. This can be used either to allow Bob to prepare an arbitrary pure state at Alice (if she inserts a known state, such as $\ket{H}$), or to perform any arbitrary unitary transformation on Alice's qubit, without Bob necessarily knowing that input state. Because the Phase Units output their respective photon components after Bob blocks for a run, the timing of which depends on the phase Bob wants to apply, there is a time-binning (a grouping of exit times into discrete bins) of the components from each Phase Unit correlated with the phase Bob applies in that unit. Bob can, on his side, compensate for the time-binning (given he knows the phase he is applying). Further, in order to locate the photon in time, Alice can detect the time of exit using a non-demolition single photon detector. Alternatively, we could add a final pair of Phase Units with the value of $k$ set to $3L'-\beta-\gamma-\delta$ (where $L'$ is the value of $L$ for each of the first three Phase Unit pairs, and $\beta$, $\gamma$ and $\delta$ are their respective $k$-values), but without phase plates (see Fig.\ref{fig:phaseunit}). 
This means that while no phase is applied, a time delay is still added to the components, meaning the photon always exits the overall device at a time proportional to $3L$, rather than $\beta$, $\gamma$ and/or $\delta$ as before. This makes the time of exit uncorrelated to Bob's unitary, which means Alice can know in advance the expected exit time of her photon from the protocol (without needing to perform a non-demolition measurement to find it). \begin{figure} \centering \includegraphics[width=\linewidth]{PSurvDensGrid.pdf} \caption{The survival probability of a photon going through a Phase Unit (Fig.\ref{fig:phaseunit}) of given M (number of outer cycles) and N (number of inner cycles). This is shown for the unit imparting phase $i k \pi/L$, where $k$, the number of runs of the protocol before the photon is emitted from the unit, is 1 for (a), 5 for (b), 10 for (c) and 20 for (d). Note there is no dependence on $L$, the maximum number of runs.} \label{fig:PSurvPhaseUnit} \mathrm{e}nd{figure} When considering a finite number of outer and inner cycles, there is a nonzero probability of the photon not returning to Alice, which reduces the protocol's efficiency. The survival probability of a photon going through a Phase Unit is plotted in Fig.\ref{fig:PSurvPhaseUnit}. The survival probability for the overall protocol is the product of the survival probability for the three Phase Units: \begin{equation} P(Tot)_{Sv} = P(\beta)_{Sv}\cdot P(\gamma)_{Sv}\cdot P(\delta)_{Sv} \mathrm{e}nd{equation} As expected, as $\{M,N\}\rightarrow\infty$, the survival probability goes to one. Regardless, postselection renormalises Alice's output state such that if Alice receives an output photon, it will be in a pure state. Thus, for our set-up, given ideal optical components, the rotation enacted on Alice's qubit is always the rotation Bob has applied, not just for any $L$, but also for any $N$, $M$, and $k$. Interestingly, a Phase Unit, which outputs a photon into one of $L$ different time bins depending on the number of runs Bob blocks, could be adapted to sending, exchange-free, a classical logical state of dimension $d$ greater than two---a ``dit", rather than a bit. We do this by removing the phase plate in the Phase Unit (see Fig.\ref{fig:phaseunit}). Bob first doesn't block for $k$ runs, then blocks for the remaining $L-k$, meaning the photon's output occurs in the $k^{th}$ time-bin of $L$ possible time-bins. This encodes a dit of dimension $L$ into the photon, which Alice can read via non-demolition single photon detector (NDSPD). \begin{figure} \centering \includegraphics[width=\linewidth]{QCircuitDiag.pdf} \caption{Quantum circuit diagram, showing how a 3-qubit gate applying a controlled-controlled unitary $U$ can be constructed from two-qubit gates along with single-qubit gates, where $U$ is some unitary transformation, and $V^2=U$ \cite{Barenco1995Gates}. Using our exchange-free single-qubit gate, a classical Bob can directly simulate the control action on Alice's photonic qubits. Since any quantum circuit can be constructed using 2-qubit gates along with single-qubit ones, our exchange-free single-qubit gate allows Bob in principle to directly program any quantum algorithm at Alice, without exchanging any photons.} \label{fig:QCircuit} \mathrm{e}nd{figure} We now show how our exchange-free protocol enabling arbitrary single-qubit operations, can in principle allow a classical Bob to directly enact any quantum algorithm he wishes on Alice's qubits, without exchanging any particles with her. 
This is based on the fact that any quantum algorithm can be efficiently constructed from 2-qubit operations (such as CNOT) and single-qubit ones. Our protocol already enables exchange-free single-qubit operations, i.e. gates. Thus, if Bob can directly choose whether or not to activate a 2-qubit gate at Alice, exchange-free, then directly programming an entire quantum algorithm at Alice using these two building blocks becomes possible. The quantum network of Fig.\ref{fig:QCircuit} shows how a 3-qubit controlled-controlled gate, applying some unitary $U$ to the target qubit at Alice, can be constructed from 2-qubit controlled gates \cite{Barenco1995Gates}. (Some controlled-controlled gates admit simpler implementations than the general one given here \cite{nielsenchuang2002}.) A classical Bob, at the top end of Fig.\ref{fig:QCircuit}, uses our exchange-free single-qubit gate to simulate the control action on Alice's photonic qubits. For simulating the CNOT gate, he can choose to either apply the identity transformation, representing control-bit $\ket{0}$, or apply an $X$ transformation, representing control-bit $\ket{1}$. For the controlled-$V$ gate, he can choose to either apply the identity transformation, again representing control-bit $\ket{0}$, or apply a $V$ transformation, representing control-bit $\ket{1}$. In this scenario, we envisage an optical programmable circuit, with exchange-free single-qubit gates acting on Alice's qubits that Bob can directly program, and 2-qubit gates acting on Alice's qubits that he can directly choose to activate, exchange-free. In summary, we have presented a protocol allowing Bob to directly perform any computation on the qubit of a remote Alice, without exchanging any photons between them. We use this to show how, in principle, Bob can directly enact any quantum algorithm at Alice, exchange-free. \textit{Acknowledgements---} We thank Claudio Marinelli for helping to bring this collaboration together. This work was supported by the Engineering and Physical Sciences Research Council (Grants EP/P510269/1, EP/T001011/1, EP/R513386/1, EP/M013472/1 and EP/L024020/1). \appendix \section{A Simpler Protocol for applying an \texorpdfstring{$\hat{\textbf{R}}_y$}{Ry} Rotation} While considering how Bob could prepare an arbitrary qubit at Alice exchange-free, we have found a much simpler protocol for Bob to prepare, exchange-free, a qubit with real, positive probability amplitudes. Consider Fig.\ref{fig:phaseunit}, without the phase module. Starting with Alice's $H$-polarised photon, instead of Bob blocking or not blocking every cycle, he instead doesn't block for the first $M-k$ outer cycles, then blocks for the rest. In order to eliminate the error resulting from a finite number of blocked inner cycles, Alice introduces loss, attenuating the outer arm of the interferometer on her side by a factor of $\cos{(\pi/2N)}^N$ for each outer cycle.
This means, before the final PBS, the state is \begin{equation} \begin{split} \ket{\Psi}=&\cos{\big(\frac{\pi}{2N}\big)}^{MN}\cos{\big(\frac{\pi}{2M}\big)}^{M-k}\\ &\Big(\cos{\big(\frac{k\pi}{2M}\big)}\ket{H}+\sin{\big(\frac{k\pi}{2M}\big)}\ket{V}\Big) \mathrm{e}nd{split} \mathrm{e}nd{equation} By postselecting on Alice's photon successfully exiting the protocol, she receives the state \begin{equation} \begin{split} \ket{\Psi}_{PS}=\cos{\big(\frac{k\pi}{2M}\big)}\ket{H}+\sin{\big(\frac{k\pi}{2M}\big)}\ket{V} \mathrm{e}nd{split} \mathrm{e}nd{equation} By choosing $k$, Bob directly applies an $\hat{\textbf{R}}_y$ rotation to Alice's $\ket{H}$ input state. Now, in order to allow Bob to apply such a rotation to an arbitrary input polarisation state, Alice's photon is initially split into $H$- and $V$-components using a PBS, and the desired rotation is applied to each component separately. In the case of the $V$-component, its polarisation is first flipped to $H$ before the rotation is applied, followed by a phase flip and a polarisation flip upon exit. The separate components can then be combined using a 50:50 beamsplitter, with the correct state obtained 50\% of the time. The advantage, however, is that, assuming perfect optical components and a large number of cycles, only two runs of the protocol are needed on average. \section{Kraus Operator Notation} Viewing exchange-free communication more abstractly, we consider the communication channel in Kraus operator notation. Here, we associate a channel $\mathcal{X}$ to the set of Kraus operators $\{X_i\}_i$ which describe its action on a given density operator such that \begin{equation} \begin{split} \mathcal{X} \sim \{ X_i\}_i\\ \rho \rightarrow \mathcal{X}(\rho)= \sum_i X_i \rho X_i^\dagger\\ \sum_i X^\dagger_i X_i = \mathds{1} \mathrm{e}nd{split} \mathrm{e}nd{equation} In general, for channels $\mathcal{X}$ and $\mathcal{Y}$, their composition can be written \begin{equation} \begin{split} \mathcal{X}\circ \mathcal{Y} (\rho) := \mathcal{X}(\mathcal{Y}(\rho)) &= \sum_i X_i \bigl(\sum_j Y_j\rho Y_j^\dagger\bigr) X_i^\dagger\\ \mathrm{e}nd{split} \mathrm{e}nd{equation} and we denote the $N$-fold composition of a channel $\mathcal{X}^N(\rho):= \mathcal{X}\circ\mathcal{X}\circ\dots\circ\mathcal{X}(\rho)$. In this manner we can define three channels in this protocol. The first constitutes Bob's action on the mode, $b$, that goes via him, when he blocks/doesn't block: \begin{equation} \begin{split} \mathcal{B}^{B}\sim\{\ketbra{0_b}{1_b},\ketbra{0_b}{0_b}\},\\ \mathcal{B}^{NB}\sim\{\ketbra{1_b}{1_b},\ketbra{0_b}{0_b}\}; \mathrm{e}nd{split} \mathrm{e}nd{equation} Each cycle of Alice's inner interferometer is given by $\mathcal{B}^{B}\circ \hat{\textbf{R}}_{y,(a1,b)}^{(\pi/N)}$ or $\mathcal{B}^{NB}\circ \hat{\textbf{R}}_{y,(a1,b)}^{(\pi/N)}$, depending on whether Bob blocks or not. Imposing that the initial state in Bob's mode is vacuum, and omitting it from the output state by tracing it out, we have that over the $N$ inner cycles one finds channels on Alice's mode $a1$ given by, \begin{equation} \begin{split} \mathcal{A}_1^{B}(\rho)&:= \text{tr}_b \bigl[ (\mathcal{B}^{B}\circ \hat{\textbf{R}}_{y,(a1,b)}^{(\pi/N)})^N(\rho \otimes \ketbra{0_b}{0_b})\bigr],\\ \mathcal{A}_1^{NB}(\rho)&:= \text{tr}_b \bigl[ (\mathcal{B}^{NB}\circ \hat{\textbf{R}}_{y,(a1,b)}^{(\pi/N)})^N(\rho \otimes \ketbra{0_b}{0_b})\bigr], \mathrm{e}nd{split} \mathrm{e}nd{equation} where any channel acting on a larger Hilbert space than that on which it's defined acts as the identity channel, i.e. $\mathcal{B}^{NB}\sim \mathcal{B}^{NB}\otimes\mathds{1}$.
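As an illustrative cross-check of this operator-sum bookkeeping, the short \texttt{numpy} sketch below (ours; the basis labels, the value $N=25$ and the choice to model `not blocking' simply as Bob leaving the joint state untouched are our own assumptions) composes the $N$ inner cycles explicitly in the zero-or-one-photon subspace of the modes $a1$ and $b$, and then keeps only the population left in Alice's mode.
\begin{verbatim}
import numpy as np

# Zero-or-one-photon subspace of Alice's inner mode a1 and Bob's mode b:
# basis |0_a1 0_b>, |1_a1 0_b>, |0_a1 1_b>.
VAC, A1, B = 0, 1, 2
N = 25                                   # number of inner cycles (illustrative value)

def ketbra(i, j, dim=3):
    K = np.zeros((dim, dim), dtype=complex)
    K[i, j] = 1.0
    return K

def apply(kraus, rho):
    # operator-sum action of a channel: rho -> sum_k K rho K^dagger
    return sum(K @ rho @ K.conj().T for K in kraus)

# Beam-splitter cycle R_y^(pi/N): rotation by angle pi/N (half-angle pi/2N)
# between the one-photon amplitudes of a1 and b.
th = np.pi / (2 * N)
R = np.eye(3, dtype=complex)
R[A1, A1], R[A1, B] = np.cos(th), -np.sin(th)
R[B, A1], R[B, B] = np.sin(th), np.cos(th)

# Bob blocking: a photon in his mode is absorbed (|1_b> -> |0_b>); otherwise nothing happens.
bob_blocks = [ketbra(VAC, B), ketbra(VAC, VAC) + ketbra(A1, A1)]

def inner_run(blocking):
    rho = ketbra(A1, A1)                 # photon in a1, vacuum in b
    for _ in range(N):
        rho = R @ rho @ R.conj().T
        if blocking:
            rho = apply(bob_blocks, rho)
    return rho[A1, A1].real              # population remaining in Alice's mode a1

print("P(photon stays in a1 | Bob blocks)        =", inner_run(True))
print("cos(pi/2N)^(2N)                           =", np.cos(th) ** (2 * N))
print("P(photon stays in a1 | Bob doesn't block) =", inner_run(False))
\end{verbatim}
With blocking, the population left in $a1$ tends to the quantum-Zeno value $\cos^{2N}(\pi/2N)$; without blocking, the accumulated rotation moves the photon entirely into Bob's mode, so no population remains in $a1$.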
We find when Bob blocks/doesn't block: \begin{equation} \begin{split} \mathcal{A}_1^{B}\sim&\{ \cos{(\frac{\pi}{2N})}^N\ketbra{1_{a1}}{1_{a1}}, \ketbra{0_{a1}}{0_{a1}},\\ &\cos{(\frac{\pi}{2N})}^{N-1}\sin{(\frac{\pi}{2N})}\ketbra{0_{a1}}{1_{a1}}\},\\ \mathcal{A}_1^{NB}\sim&\{ \ketbra{0_{a1}}{1_{a1}}, \ketbra{0_{a1}}{0_{a1}}\}; \mathrm{e}nd{split} \mathrm{e}nd{equation} then finally the effect this has overall as the channel created by a chain of $M$ outer interferometers on Alice's inner and outer interferometer ($V$ and $H$) modes, when Bob blocks/doesn't block: \begin{equation} \begin{split} \mathcal{A}_{12}^{B}(\rho)&:= (\mathcal{A}_1^{B}\circ \hat{\textbf{R}}_{y,(a2,a1)}^{(\pi/M)})^M(\rho),\\ \mathcal{A}_{12}^{NB}(\rho)&:= (\mathcal{A}_1^{NB}\circ \hat{\textbf{R}}_{y,(a2,a1)}^{(\pi/M)})^M(\rho) \mathrm{e}nd{split} \mathrm{e}nd{equation} Therefore, we find \begin{equation} \begin{split} \mathcal{A}_{12}^{B}\sim \Bigg\{ c_1&\ketbra{1_{a2}0_{a1}}{1_{a2}0_{a1}},\\ c_2\ketbra{0_{a2}1_{a1}}{0_{a2}1_{a1}}, c_3&\ketbra{0_{a2} 1_{a1}}{1_{a2} 0_{a1}},\\ c_4\ketbra{1_{a2} 0_{a1}}{0_{a2} 1_{a1}}, &\ketbra{0_{a2}0_{a1}}{0_{a2}0_{a1}}\\ \sqrt{(1-c_1^2-c_3^2)}&\ketbra{0_{a2}0_{a1}}{1_{a2}0_{a1}},\\ \sqrt{(1-c_2^2-c_4^2)}&\ketbra{0_{a2}0_{a1}}{0_{a2}1_{a1}}\Bigg\} \mathrm{e}nd{split} \mathrm{e}nd{equation} \begin{equation} \begin{split} \mathcal{A}_{12}^{NB}\;\sim\; \Bigg\{\; \cos{(\frac{\pi}{2M})}^M&\ketbra{1_{a2} 0_{a1}}{1_{a2} 0_{a1}},\\ \sqrt{(1-\cos{(\frac{\pi}{2M})}^{2M})}&\ketbra{0_{a2}0_{a1}}{1_{a2}0_{a1}},\\ \ketbra{0_{a2}0_{a1}}{0_{a2}1_{a1}}, &\ketbra{0_{a2}0_{a1}}{0_{a2}0_{a1}}\;\Bigg\} \mathrm{e}nd{split} \mathrm{e}nd{equation} where coefficients $c_1$,$c_2$, $c_3$ and $c_4$ are functions of $M$ and $N$, with $c_2$ and $c_3$ going to 1, and $c_1$ and $c_4$ going to zero, as $N$ and $M$ go to infinity. This means one run (of $M$ outer cycles of $N$ inner cycles each) acts as a perfect optical switch in this limit, turning $H$ to $V$ (and vice-versa) if Bob blocks, and implementing identity if he doesn't. \mathrm{e}nd{document}
\begin{document} \title{Quantum Operations in an Information Theory for Fermions} \author{Nicetu Tibau Vidal} {\operatorname{e}}mail{[email protected]} \affiliation{University of Oxford, Clarendon Laboratory, Atomic and Laser Physics Sub-Department. Oxford, Oxfordshire, United Kingdom } \author{Mohit Lal Bera} \affiliation{ICFO -- Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, ES-08860 Castelldefels, Spain} \author{Arnau Riera} \affiliation{Institut el Sui, Carrer Sant Ramon de Penyafort, s/n, 08440 Cardedeu (Barcelona), Spain} \affiliation{ICFO -- Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, ES-08860 Castelldefels, Spain} \affiliation{Max-Planck-Institut f\"ur Quantenoptik, D-85748 Garching, Germany} \author{Maciej Lewenstein} \affiliation{ICFO -- Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, ES-08860 Castelldefels, Spain} \affiliation{ICREA, Pg.~Lluis Companys 23, ES-08010 Barcelona, Spain} \author{Manabendra Nath Bera} {\operatorname{e}}mail{[email protected]} \affiliation{Department of Physical Sciences, Indian Institute of Science Education and Research (IISER), Mohali, Punjab 140306, India} \begin{abstract} A reasonable quantum information theory for fermions must respect the parity super-selection rule to comply with the special theory of relativity and the no-signaling principle. This rule restricts the possibility of any quantum state to have a superposition between even and odd parity fermionic states. It thereby characterizes the set of physically allowed fermionic quantum states. Here we introduce the physically allowed quantum operations, in congruence with the parity super-selection rule, that map the set of allowed fermionic states onto itself. We first introduce unitary and projective measurement operations of the fermionic states. We further extend the formalism to general quantum operations in the forms of Stinespring dilation, operator-sum representation, and axiomatic completely-positive-trace-preserving maps. We explicitly show the equivalence between these three representations of fermionic quantum operations. We discuss the possible implications of our results in characterization of correlations in fermionic systems. {\operatorname{e}}nd{abstract} \maketitle \section{Introduction} Information theory lays one of the foundations of modern science and technologies. In particular, its classical counterpart has revolutionized the realm of information technology, since the time Shannon introduced the classical framework to treat information \cite{Shannon48a, Shannon48b}. The recent developments in quantum technology and understanding of the quantum nature of information have sparked the possibility of information technology that could outperform the classical one \cite{Nielsen00} in presence of quantum resources, such as entanglement \cite{Horodecki09}. The understanding and quantification of quantum information and resources in distinguishable systems, e.g., qudits, rely on the underlying Hilbert space's separability. The global space of composite systems becomes a tensor-product of local Hilbert spaces. However, this assumption falls short when one considers information theory beyond distinguishable systems, as in indistinguishable particle systems, e.g., fermions. Physical systems that resemble qudit-like structures are very limited in nature and generally adhere to quantum mechanics' first quantization. 
On the other hand, most physical systems are understood in terms of indistinguishable particles and quantum fields, using the framework of second quantization. Therefore it is essential to extend information theory to indistinguishable particles and quantum fields. The inadequacy of the existing framework of quantum information theory (QIT) results from the fact that, for indistinguishable particles, the underlying global Hilbert spaces are either symmetrized (for bosons) or anti-symmetrized (for fermions) parts of the local (particle or mode) Hilbert spaces. Therefore, the Hilbert spaces do not satisfy separability as in the qudit case. To lay out a consistent formulation of QIT for indistinguishable particles, it is essential to understand quantum operations and states, be they local or global, in these ``non-separable'' Hilbert spaces. The latter is associated with entanglement, which has been studied quite extensively in the last few decades. For fermions, the literature considers two different approaches to characterize quantum information and resources. Authors differ on whether separability is imposed on the Hilbert spaces from the particle or the mode perspective. In characterizing entanglement, the works \cite{Schliemann01, Schliemann01a, Eckert02, Wiseman03, Ghirardi04, Kraus09, Plastino09, Iemini13, Iemini14, Iemini15, Debarba17, Oszmaniec13, Oszmaniec14, Oszmaniec14a, Sarosi14, Sarosi14a, Majtey16, Kruppa:2020rfa} assume the {\operatorname{e}}mph{particle} approach, where the indistinguishable particles are considered as subsystems and the states are elements of the corresponding local Hilbert spaces. Another approach is based on {\operatorname{e}}mph{mode} \cite{Zanardi02, Shi03, Friis13, Benatti14, Puspus14, Tosta:2018iqh, Shapourian, Shapourian:2018ozl, Li:2018xon, Szalay, Debarba_2020}, where subsystems are considered to be single-particle modes. One can crudely say that while the notion of particle-based entanglement relies on and exploits quantum mechanics' first-quantization, which strictly conserves particle numbers, the mode-based entanglement is exclusively based on second-quantization of quantum mechanics (using creation and annihilation operators) where particle number may or may not be conserved. It is now clear that the mode-based characterization is more ``reasonable'' in characterizing fermionic entanglement. In this view, one has to impose anti-symmetrization due to the creation and annihilation operators' anti-commutation relations. Moreover, one also has to ascertain that physically allowed fermionic states have to satisfy the {\operatorname{e}}mph{parity super-selection rule} \cite{Friis13}. In this work, we adhere to the reasonable mode-based approach to fermionic information theory and characterize the physically allowed quantum operations that map the set of fermionic states onto itself. One of our main results proves that all physically allowed unitary operations applicable to fermionic systems must respect the parity super-selection rule to comply with the no-signaling principle. We introduce the general form of reasonable projective operations describing quantum measurements. The operations are then further extended to the general quantum operations in terms of three equivalent formalisms based on Stinespring dilation, operator-sum-representation (or Kraus representation), and completely-positive-trace-preserving (CPTP) maps. We also discuss the implications of our findings in the characterization of correlations present in fermionic systems.
The structure of the paper is as follows. In section \ref{sec:febo} we present a mathematical formalization of the fermionic state space. We precisely define the $\wedge$ product (analogous to $\otimes$), the fermionic Hilbert space, and the parity super-selection rule (SSR). We also give explicit forms to the physically allowed states and observables respecting the parity SSR. Section \ref{sec:quanop} is devoted to the characterization of allowed fermionic quantum operations. In particular, we describe the partial tracing process, the linear operators that preserve the states' parity SSR form, and the unitary and projective operators. We further present the general quantum operations in terms of Stinespring dilation, operator-sum representation, and axiomatic completely-positive-trace-preserving maps. In section \ref{sec:discussion}, we briefly discuss the implications of our findings in fermionic entanglement theory. In particular, we characterize uncorrelated and correlated states, where we also consider the Schmidt decomposition and the purification of states in the context of the parity SSR. We conclude in section \ref{sec:conclusions}. \section{Fermionic state space} \label{sec:febo} Let us consider a set of fermionic modes $I_f$ and denote their corresponding creation and annihilation operators by $f_i^\dagger$ and $f_i$ respectively for any mode $i\in I_f$. These creation and annihilation operators satisfy the standard anti-commutation relations \begin{align} \{f_i,f_j\}=0; \ \ \ \{f_i^{\dagger},f_j^{\dagger}\}=0; \ \ \ \{ f_i,f_j^{\dagger}\}=\delta_{ij} \mathbb{I}, \ \ \ \forall i,j\in I_f, \nonumber {\operatorname{e}}nd{align} with $\{ \hat{A},\hat{B}\}=\hat{A}\hat{B}+\hat{B}\hat{A}$, implying that any multi-mode wave function is anti-symmetric with respect to particle exchange between two different modes. In the case where $|I_f|<+\infty$, there are a finite number of modes and the dimension of the Hilbert space ($\mathcal{H}$) is also finite. Let us assume that $\mathcal{H}$ is a space for $N$ fermionic modes, i.e. $|I_f|=N$. Then, an order can be chosen in $I_f$, and each mode is referred to by its position in $I_f$. The number operator is defined as $\hat{n} = \sum_i f_i^\dagger f_i$. The Hilbert space becomes a direct sum of the subspaces corresponding to different particle number sectors, $ \mathcal{H}=\bigoplus_{n=0}^{|I_f|} \mathcal{H}_{n-p}, \ n=0, \ldots, N$, where $\mathcal{H}_{n-p}$ is the $n$-particle subspace, i.e., any $\ket \psi \in \mathcal{H}_{n-p}$ satisfies $\hat{n} \ket \psi = n \ket \psi$. For a Hilbert (sub-)space $\mathcal{H}_{n-p}$, where $n$ modes are excited, the basis can be reduced to the elements $\{f_{i_1}^{\dagger}f_{i_2}^{\dagger}\ldots f_{i_n}^{\dagger} \ket{\Omega}\}$ where $i_1< i_2 < \ldots < i_n$ and $i_j \in I_f$ for all $j$. Note that the number of fermionic creation operators applied on the vacuum is $n$. Due to the fact that $f_{i}^{\dagger}f_{j}^{\dagger}=-f_{j}^{\dagger}f_{i}^{\dagger}$, any unordered element is equivalent to the corresponding ordered element up to a factor of $-1$. Further, $f_{j}^{\dagger}f_{j}^{\dagger}=0$ guarantees that the spaces $\mathcal{H}_{n-p}$ for $n>N$ are $\{0\}$. By decomposing the $N$-mode Hilbert space in terms of $ \mathcal{H}_{n-p}$ and ordering the basis in each $\mathcal{H}_{n-p}$, we can fully characterize the space.
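Before introducing the wedge product structure, these relations can be made concrete with a small numerical sketch (ours, purely illustrative; it uses the standard Jordan--Wigner matrix representation and \texttt{numpy}, and the helper name \texttt{f(i)} is our own). It builds $f_i$ and $f_i^{\dagger}$ for a small number of modes, verifies the anti-commutation relations above, and confirms that the spectrum of $\hat{n}$ reproduces the particle-number sectors $\mathcal{H}_{n-p}$ with the expected multiplicities $\binom{N}{n}$.
\begin{verbatim}
import numpy as np
from math import comb

N = 3                                              # number of modes (small, for illustration)
lower = np.array([[0, 1], [0, 0]], dtype=complex)  # |0><1| on a single mode
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def kron_list(ops):
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

def f(i):
    # Jordan-Wigner representation: f_i = (sigma_z on modes < i) (x) |0><1| (x) identities
    return kron_list([sz] * i + [lower] + [I2] * (N - i - 1))

def anticomm(a, b):
    return a @ b + b @ a

# Canonical anti-commutation relations {f_i, f_j} = 0 and {f_i, f_j^dag} = delta_ij.
for i in range(N):
    for j in range(N):
        assert np.allclose(anticomm(f(i), f(j)), 0)
        target = np.eye(2 ** N) if i == j else 0
        assert np.allclose(anticomm(f(i), f(j).conj().T), target)

# The number operator splits the 2^N-dimensional space into particle-number sectors:
# eigenvalue n occurs with multiplicity C(N, n).
n_op = sum(f(i).conj().T @ f(i) for i in range(N))
vals = np.round(np.linalg.eigvalsh(n_op)).astype(int)
print("number-operator eigenvalues:", sorted(vals.tolist()))
print("expected multiplicities:", {n: comb(N, n) for n in range(N + 1)})
\end{verbatim}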
\subsection{Wedge product Hilbert space} \label{subsec:wedge} At this point, we note that the Hilbert space for indistinguishable particles or subsystems is very different from the distinguishable one. For distinguishable systems, the global Hilbert spaces are the tensor product of the corresponding subsystems' local Hilbert spaces. For example, for two distinguishable systems $A$ and $B$, with the local Hilbert spaces $\mathcal{H}_A$ and $\mathcal{H}_B$ respectively, the global Hilbert space of $AB$ becomes $\mathcal{H}_{AB}=\mathcal{H}_{A} \otimes \mathcal{H}_{B}$. This also implies that we could form a complete set of basis as $\ket{\psi_A}\otimes \ket{\psi_B} \in \mathcal{H}_{AB}$, in terms of the local basis $\ket{\psi_A}\in \mathcal{H}_A$ and $\ket{\psi_B}\in \mathcal{H}_B$. In contrast, for a system composed of fermions (or fermionic subsystems), the Hilbert space is anti-symmetrized. The algebraic structure of such a space is ensured by introducing a wedge product ($\wedge$), in place of the tensor product ($\otimes$). Then, for two fermionic systems $A$ and $B$ of $N$ and $M$ modes respectively, the global Hilbert space would be $\mathcal{H}_{AB}=\mathcal{H}_A \wedge \mathcal{H}_B$. In Appendix \ref{sec:properties}, we expose some essential properties of $\wedge$ product which we use in the following discussions. These properties of the wedge product Hilbert space, defined for fermions, are also compared to those of the tensor product structure for distinguishable particles. By choosing the vacuum state $\ket{\Omega}$ as the identity element of the $\wedge$ product, we find that the following definition of the product is also natural \begin{align} \ket{i}\wedge\ket{j}=f_i^{\dagger}\ket{\Omega} \wedge f_j^{\dagger} \ket{\Omega}{\operatorname{e}}quiv f_i^{\dagger}f_j^{\dagger}\ket{\Omega}. \nonumber {\operatorname{e}}nd{align} With the anti-commutation relation in mode spaces ($i \in I_{f_A}$ and $j\in I_{f_B}$) it is easy to check that the $\wedge$ product is anti-symmetric, as one would expect for fermions. The extension of the product to the dual spaces is given in the following natural terms \begin{align} \left(\ket{i}\wedge\ket{j}\right)^{\dagger}= \left(f_i^{\dagger}f_j^{\dagger}\ket{\Omega}\right)^{\dagger}= \bra{\Omega}f_j f_i {\operatorname{e}}quiv \bra{i}\wedge\bra{j}. \nonumber {\operatorname{e}}nd{align} This notion can be naturally extended to arbitrary number of particles as the wedge product is associative, i.e., \begin{align} \left(\ket{i_1}\wedge\ldots\wedge\ket{i_N}\right) & \wedge \left(\ket{j_1}\wedge\ldots\wedge\ket{j_M}\right) \nonumber \\ & =f_{i_1}^{\dagger}\ldots f_{i_N}^{\dagger} f_{j_1}^{\dagger}\ldots f_{j_M}^{\dagger}\ket{\Omega} \nonumber \\ & =\ket{i_1}\wedge\ldots\wedge\ket{i_N} \wedge\ket{j_1}\wedge\ldots\wedge\ket{j_M} {\operatorname{e}}nd{align} This well-defined notion of state vector elements, based on wedge product, form the basis of the Hilbert space. For ease of discussion, we impose an ordering to fix the basis and, then, the Hilbert space of $N$ fermions is expressed with the following proposition. \begin{prop}\label{Prop:basis1} For an $N$-mode fermion system, the set $\mathcal{B}=\{\ket{\Omega},\ket{1},\ket{2},\ket{1}\wedge \ket{2},\ket{3},\ket{1}\wedge\ket{3}, \ket{2}\wedge \ket{3}, \ket{1}\wedge\ket{2}\wedge\ket{3},\ldots, \ket{1}\wedge\ket{2}\wedge \ldots \wedge \ket{N} \}$ is an orthonormal basis of the Hilbert space $\mathcal{H}$, with dimension $2^N$. 
{\operatorname{e}}nd{prop} The Hilbert space $\mathcal{H}$ can be re-expressed, alternatively, in terms of subspaces $\mathcal{H}_i$ spanned by the basis elements $\{\ket{\Omega},\ket{i}\}$, where $\ket{i}$ is the state when only the $i$th mode is excited. Any arbitrary element $\ket{\psi_i}$, in the Hilbert space belonging to the $i$th mode, can be formed using this basis. Then, in this new representation of an $N$-fermion Hilbert space, every element of $\mathcal{B}$ can be written as $\ket{\psi_1}\wedge\ket{\psi_2}\wedge \ldots \ket{\psi_N}$, where $\ket{\psi_i}\in \mathcal{H}_i$. This enables us to express the global Hilbert space as the wedge product of $\mathcal{H}_i$, and that is \begin{align} \mathcal{H}=\bigwedge_{i=1 }^{N} \mathcal{H}_{i}=\mathcal{H}_1 \wedge \ldots \wedge \mathcal{H}_N. {\operatorname{e}}nd{align} Given the above representation of the Hilbert space $\mathcal{H}$ and its elements, we can study its operator space. A linear operator space $\Gamma (\mathcal{H})$ can be defined with the help of the Cartesian outer product between $\mathcal{H}$ and its dual $\mathcal{H}^*$. More details are given in Appendix \ref{sec:properties}. Unlike the Hilbert spaces of distinguishable particles and their elements, every element in the fermionic Hilbert space may not represent a physical fermionic state. In fact, every element that represents a physically allowed state has to satisfy a rule, the {\operatorname{e}}mph{parity super-selection} rule, which we discuss next. \subsection{Parity super-selection rule} \label{subsec:SSR} The Hilbert space of fermionic particles is restricted by the algebraic properties due to the anti-commutation relations of creation and annihilation operators and constrained by the {\operatorname{e}}mph{parity super-selection rule} \cite{Wick52}. The super-selection rules (SSRs) are, in general, a set of physical restrictions that are imposed on the states and operations to discard the non-physical cases produced in a physical theory. For the fermionic case, the parity SSR restricts the physical states based on whether a state has an even or odd number of fermions. The parity operator is \begin{align} \Pi=e^{i \pi \hat{n}}, {\operatorname{e}}nd{align} where $\hat{n}=\sum_i f_i^\dag f_i$ is the number operator. A fermionic state $\ket{\psi^e}$ is even if $ \Pi \ket{\psi^e}=\ket{\psi^e}$, and a state $\ket{\psi^o}$ is said to be odd if $ \Pi \ket{\psi^o}=-\ket{\psi^o}$. Now the parity SSR can be stated as in the following. \\ \textbf{Parity SSR} \cite{Wick52}: \textit{Nature {\bf does not} allow a fermionic quantum state, which is in a coherent superposition between states with even and odd (particle number) parities, i.e. the states $\ket{\phi} = c_e \ket{\psi^e} + c_o \ket{\psi^o}$, where $\{ c_e, \ c_o \} \in \mathbb{C}\backslash\{0\}$, are not physically allowed.}\\ The parity SSR was initially introduced to describe elementary particles in the framework of quantum field theory \cite{Wick52}. It is understood as a consequence of the underlying deep correspondence between quantum mechanics and the (special) theory of relativity \cite{Pauli40, Schwinger51, Peskin95}, through the spin-statistics connection. Beyond its utility in understanding elementary particles, the parity SSR finds important applications to formulate a reasonable quantum information theory for fermions \cite{Friis16}. There is a `partial trace ambiguity' in a fermionic bipartite system if one only relies on fermionic creation and annihilation operators' algebra. 
For example, it can be seen that there are pure states of a bipartite fermionic system $\ket{\psi}_{AB}$ of fermionic subsystems $A$ and $B$ for which the marginal states $\rho_{A} = {\operatorname{Tr\,}}_B \left[ \proj{\psi}_{AB} \right]$ and $\rho_{B} = {\operatorname{Tr\,}}_A \left[ \proj{\psi}_{AB} \right]$ do not share the same spectra. However, a `reasonable' quantum information theory demands that the marginals should possess the same spectra as they share equal information. This serious flaw is resolved by the imposition of parity SSR \cite{Friis16}. It has been shown that any state $\ket{\psi}_{AB}$ which satisfies $\Pi \ket{\psi}_{AB} = \pm \ket{\psi}_{AB} $ results in marginal states with the same spectra. Moreover, in \cite{Johansson16}, it is argued that one does not need to invoke relativity to justify parity SSR for fermions. Rather, the parity SSR can be understood as the consequence of the no-signaling principle, due to the micro-causality constraint \cite{Bogolubov90, Greiner96} on the separable fermionic Hilbert space, in conjugation with the algebra of creation and annihilation operators. It thereby implies that, even in a non-relativistic scenario, fermionic states are constrained by parity SSR. Hence, one cannot ignore SSR while constructing a framework for information theory. \subsection{Physical states and observables of fermionic systems} \label{subsec:physstates} The physical states of an arbitrary fermionic system must satisfy the parity SSR. Therefore the corresponding Hilbert space $\mathcal{H}_S$, with all physical states satisfying parity SSR, forms a subset, $\mathcal{H}_S \subset \mathcal{H} $. To better represent the set of states that respect parity SSR, we reorder the basis $\mathcal{B}$. The basis $\mathcal{B}$ in an $N$-mode fermionic Hilbert space has $2^N$ elements, and it is easy to see that there are $2^{N-1}$ even elements and $2^{N-1}$ odd elements in it. The new ordering of the elements in the basis set, denoted by $\mathcal{B}'$, is considered in the proposition below. \begin{defi}\label{Prop:basis2} For an $N$-mode fermionic system, the basis set $\mathcal{B}' {\operatorname{e}}quiv \{\mathcal{B}_e, \mathcal{B}_o \}$ is formed by reordering the elements in $\mathcal{B}$, where the first $2^{N-1}$ elements $\mathcal{B}_e $ are even and last $2^{N-1}$ elements $\mathcal{B}_o $ are odd. {\operatorname{e}}nd{defi} For example, for a $3$-mode fermionic system, the $ \mathcal{B}_e = \{\ket{\Omega}, \ket{1}\wedge \ket{2}, \ket{1}\wedge \ket{3}, \ket{2}\wedge \ket{3} \}$ and $ \mathcal{B}_o = \{\ket{1}, \ket{2}, \ket{3}, \ket{1}\wedge \ket{2} \wedge \ket{3} \}$. Any arbitrary fermionic pure state $\ket{\psi} \in \mathcal{H}_S$ is either an even or an odd state, and it is expressed as a coherent superposition among the elements either from $\mathcal{B}_e$ or from $\mathcal{B}_o$ respectively. Therefore, an $N$-mode fermionic pure state satisfying parity SSR can be cast in the following form: \begin{equation} \ket{\psi}= \left\{\begin{array}{lr} \ket{\psi^e} = \sum_{\vec{s}} \alpha_{\vec{s}} \left(f^{\dagger}_1\right)^{s_1} \ldots \left(f^{\dagger}_N\right)^{s_N} \ket{\Omega}, \ \forall (S \mod 2)=0, \\ \ket{\psi^o} = \sum_{\vec{s}} \alpha_{\vec{s}} \left(f^{\dagger}_1\right)^{s_1} \ldots \left(f^{\dagger}_N\right)^{s_N}\ket{\Omega}, \ \forall (S \mod 2)=1, {\operatorname{e}}nd{array} \right. \nonumber {\operatorname{e}}nd{equation} where $\vec{s}=\{s_i \}_1^N$, $s_i=\{0,1\}$, $S=\sum_i s_i$, and $\sum_{\vec{s}} |\alpha_{\vec{s}}|^2=1$. 
The $\sum_{\vec{s}}$ denotes the sum over all possible values of the vector $\vec{s}$. The even states $\ket{\psi^e}$ (odd states $\ket{\psi^o}$) are the results of coherent superpositions of the elements in $\mathcal{B}_e$ ($\mathcal{B}_o$). By construction, the $\ket{\psi^e}$ and $\ket{\psi^o}$ are the fermionic states that correspond to even and odd parities, i.e. $\Pi \ket{\psi^e}=\ket{\psi^e}$ and $\Pi \ket{\psi^o}=-\ket{\psi^o}$ respectively. Note, the fermionic operator space $\Gamma_S$ can also be constructed in terms of the elements $\ketbra{\psi}{\varphi}$ where both $\ket{\psi}, \ket{\varphi} \in \mathcal{H}_S$. Using fermionic operators, more general (or mixed) states can be expressed in terms of density matrices. The mixed states are defined as ensembles of pure states. Therefore, the parity SSR imposes restrictions on the matrices that could represent a physical state. A density matrix $\rho$ representing a physically allowed fermionic state, in the basis $\mathcal{B}'$, takes the form \begin{align} \rho= \rho_e \oplus \rho_o =\begin{pmatrix} \rho_e && 0 \\ 0 && \rho_o {\operatorname{e}}nd{pmatrix}, {\operatorname{e}}nd{align} where $\rho_e$ and $\rho_o$ are $2^{N-1}\times 2^{N-1}$ positive semi-definite matrices, given by \begin{align} \rho_e=\sum_i p^e_i \ketbra{\psi^e_i}{\psi^e_i} \ \ \mbox{and} \ \ \rho_o=\sum_j p^o_j \ketbra{\psi^o_j}{\psi^o_j}, \label{eq:rho} {\operatorname{e}}nd{align} with $0 \leqslant p^e_i, p^o_j \leqslant 1$ and $\sum_i p^e_i + \sum_j p^o_j=1$. By construction, the density matrices are symmetric under parity, i.e. $\Pi \rho \Pi^\dag = \rho$. This also implies that $\Pi \rho_e \Pi^\dag = \rho_e$ and $\Pi \rho_o \Pi^\dag = \rho_o$. The properties of fermionic density matrices are discussed in Appendix \ref{sec:proofs} in detail. The density matrices are positive semi-definite, and the physically allowed ones form a subset $\mathcal{R}_S \subset \Gamma_S$, with $\rho \in \mathcal{R}_S$. Unlike for distinguishable systems, not every Hermitian operator represents a physically allowed fermionic observable. Rather, the observables have to respect the parity SSR. It constrains a physically allowed observable $\hat{A} \in \Gamma_S$ to have the form \begin{align} \hat{A}=\sum_i a_i \ketbra{\psi_i}{\psi_i} = \hat{A}_e \oplus \hat{A}_o = \begin{pmatrix} \hat{A}_e && 0 \\ 0 && \hat{A}_o {\operatorname{e}}nd{pmatrix}, {\operatorname{e}}nd{align} where $a_i \in \mathbb{R}$ and $\ket{\psi_i} \in \mathcal{H}_S$. \section{Fermionic Quantum Operations} \label{sec:quanop} So far, we have seen how parity SSR imposes conditions on the states and the observables for the fermionic systems. Now we explore how the wedge product structure of the Hilbert space and the parity SSR result in a restricted class of physically allowed quantum operations on the fermionic space. We start by re-considering the partial tracing operation, initially introduced in \cite{Friis16}. We re-establish the same procedure using a consistency condition (discussed below) and show that the restrictions on fermionic observables and states due to the parity SSR play no role in either the notion or the form of the fermionic partial tracing procedure. Then, we move on to classify arbitrary unitary and projection operations that respect the parity SSR structure. Finally, we extend this characterization to general quantum operations. \subsection{Partial tracing} \label{subsec:pt} For the re-derivation of the partial tracing operation, let us give a precise definition of a local operator.
\begin{defi}[Local operators] Consider a global Hilbert space $\mathcal{H}^{AB}_S = \mathcal{H}^A_S \wedge \mathcal{H}^B_S$ of finite fermionic systems $A$ and $B$. An operator $\hat{O}\neq 0$ that acts on $\mathcal{H}_S^{AB}$ is said to be local on $\mathcal{H}^A_S$ if, and only if, it has the form $\hat{O}=\hat{O}_A\wedge \mathbb{I}_B$ with $\hat{O}_A\neq 0$. {\operatorname{e}}nd{defi} This definition is equivalent to say that all operators local on $\mathcal{H}^A_S$ can be formed by combining creation and annihilation operators of the modes of $\mathcal{H}^A_S$. For more details about this correspondence, see Appendix \ref{sec:properties}. With this notion of local operators, we go on to define partial tracing in a fermionic system with the help of following consistency conditions in order to be able to interpret the reduced density states physically. \begin{defi}[Consistency conditions] Given a global Hilbert space $\mathcal{H}_S^{AB} =\mathcal{H}_S^A \wedge \mathcal{H}_S^B$ of finite-mode fermionic systems $A$ and $B$, the consistency conditions for a partial tracing procedure over $\mathcal{H}_S^B$ of a density matrix $\rho_{AB} \in \mathcal{R}^{AB}_S$ are: that ${\operatorname{Tr\,}}_{B}(\rho_{AB})=\rho_{A}\in \mathcal{R}^A_S$ is a density matrix and that the following equations are satisfied: \begin{align} {\operatorname{Tr\,}}(\hat{O}_{A} \rho_{AB})={\operatorname{Tr\,}} (\hat{O}_{A}\rho_{A}), {\operatorname{e}}nd{align} for all $\hat{O}_{A}$ being a local physical observable, thus being an Hermitian operator in $\Gamma^A_S$. {\operatorname{e}}nd{defi} The consistency conditions give us the physical definition of the reduced density matrix, imposing that the expectation value for local observables in $A$ has to be the same for $\rho_{AB}$ and $\rho_A$. Now, with the consistency conditions above, we put forward Proposition \ref{prop:partialtrace} below that recovers the usual partial tracing procedure prescribed in \cite{Friis16} defined mathematically as ${\operatorname{Tr\,}}_B(\hat{O}_{AB})=\sum_i \bra{i_B} \hat{O}_{AB} \ket{i_B}$. Thus, we can recover the usual mathematical definition of partial tracing directly from its physical meaning. The proof of Proposition \ref{prop:partialtrace}, along with its equivalence with the procedure considered in \cite{Friis16}, is given in Appendix \ref{sec:ptappendix}. \begin{prop}[Partial trace] \label{prop:partialtrace} For a density operator $\rho \in \mathcal{R}^N_S$, of an $N$-mode fermionic system, the partial tracing over the set of modes $M=\{m_1,\ldots,m_{|M|}\}\subset\{1,\ldots,N\}$ must result in a reduced density operator $\sigma \in \mathcal{R}^{N\setminus M}_S$, given by \begin{align} \sigma={\operatorname{Tr\,}}_{M}(\rho)={\operatorname{Tr\,}}_{m_1}\circ {\operatorname{Tr\,}}_{m_2} \circ \ldots \circ {\operatorname{Tr\,}}_{m_{|M|}}(\rho), {\operatorname{e}}nd{align} There is a unique partial tracing procedure that satisfies the physically imposed consistency conditions. 
The operation of partial tracing one mode $m_i$, of an element of $\rho$, is given then by: \begin{align} {\operatorname{Tr\,}}_{m_i} & \left(\left(f_1^{\dagger}\right)^{s_1} \ldots \left(f_{m_i}^{\dagger}\right)^{s_{m_i}}\ldots \left(f_N^{\dagger}\right)^{s_N}\proj{\Omega}f_N^{r_N}\ldots f_{m_i}^{r_{m_i}}\ldots f_1^{r_1} \right)\nonumber \\ &= \delta_{s_{m_i} r_{m_i}} (-1)^{k}\left(f_1^{\dagger}\right)^{s_1} \ldots \left(f_N^{\dagger}\right)^{s_N}\proj{\Omega}f_N^{r_N}\ldots f_1^{r_1}, {\operatorname{e}}nd{align} with $k=s_{m_i} \sum_{j=m_i}^{N-1} s_{j+1} +r_{m_i} \sum_{k=m_i}^{N-1} r_{k+1}$ and $s_i,r_j \in \{0,1 \}$. {\operatorname{e}}nd{prop} The partial tracing procedure can be further simplified. Assume that $\ket{a},\ket{b},\ket{c},\ket{d}$ are pure states in $\mathcal{H}_{S}$ of $N$ modes. Say $M\subset \mathcal{N}=\{1,\ldots, N\}$, where $\ket{a}$ and $\ket{b}$ belong to the modes $M$, and $\ket{c}$ and $\ket{d}$ belong to the modes $M^c{\operatorname{e}}quiv N\setminus M$. Then we have \begin{align} &{\operatorname{Tr\,}}_{M}\left(\ketbra{a}{b}\wedge \ketbra{c}{d}\right)=\scp{b}{a}\ketbra{c}{d}, \nonumber \\ &{\operatorname{Tr\,}}_{M^c}\left(\ketbra{a}{b}\wedge \ketbra{c}{d}\right)=\scp{d}{c}\ketbra{a}{b}. \nonumber {\operatorname{e}}nd{align} Note, this now becomes analogous to the partial tracing operation for the distinguishable systems, and the mathematics behind the derivation are outlined in Appendix \ref{sec:proofs}. \subsection{Linear operators, projectors, and unitaries} \label{subsec:linear} Here we consider operators that act on the fermionic Hilbert space $\mathcal{H}_S$. We start with linear operators that respect parity SSR. Any linear operator $\hat{O}$ on the fermionic space assumes the form \begin{align} \hat{O}=\sum_{i,j=1}^{2^N} O_{ij} \ketbra{e_i}{e_j}, {\operatorname{e}}nd{align} where $\ket{e_i}$ are the ordered elements of the basis set $\mathcal{B}'$. More precisely, the $\ket{e_i}$ and $\ket{e_j}$ have definite parity. The full restrictions on physically allowed linear operators are provided in the following theorem (see Appendix \ref{sec:proofs} for the proof). \begin{theorem}[Linear operators] \label{thm:blockform} If $\hat{O}$ is a linear operator $\hat{O}: \mathcal{H}^N \rightarrow \mathcal{H}^M $ such that for all $\hat{A}\in \Gamma_S^N$ and $\hat{B}\in \Gamma_S^M$, $\hat{O} \hat{A} \hat{O}^\dag \in \Gamma_S^M$ and $\hat{O}^\dagger \hat{B} \hat{O} \in \Gamma_S^N$ then the matrix representation of the operator $\hat{O}$ in the basis $\{{\mathcal{B}'}^N, \ {\mathcal{B}'}^M \}$ assumes one of these two diagonal and anti-diagonal forms: \begin{gather} \hat{O}|_{{\mathcal{B}'}^M,{\mathcal{B}'}^N}=\left(\begin{array}{c|c} O_{++} & \hat{0} \\ \hline \hat{0} & O_{--} {\operatorname{e}}nd{array}\right) \medspace \medspace \text{or} \medspace \medspace \hat{O}|_{{\mathcal{B}'}^M,{\mathcal{B}'}^N}=\left(\begin{array}{c|c} \hat{0} & O_{+-} \\ \hline O_{-+} & \hat{0} {\operatorname{e}}nd{array}\right), \nonumber {\operatorname{e}}nd{gather} where $O_{++},O_{--},O_{-+} ,O_{+-} \in M_{2^{M-1}\times 2^{N-1}}(\mathbb{C})$ and $\hat{0}$ is the zero element of $M_{2^{M-1}\times 2^{N-1}}(\mathbb{C})$. Such operators are linear operators that preserve the SSR form of the operators in $\Gamma_S$. {\operatorname{e}}nd{theorem} Using this, below we are able to characterize the sharp quantum measurements in terms of projection operations on space $\mathcal{H}_{S}$. 
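Before doing so, a small numerical illustration of Theorem \ref{thm:blockform} may be helpful. The \texttt{numpy} sketch below (ours; the random blocks and the $N=3$ example are illustrative assumptions) works directly in the reordered basis $\mathcal{B}'$, where the parity operator is $\Pi=\mathbb{I}\oplus(-\mathbb{I})$: block-diagonal and anti-block-diagonal operators both map block-diagonal (SSR) states to block-diagonal states, whereas a generic operator does not, and the two allowed forms respectively commute and anti-commute with $\Pi$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N = 3
dim, half = 2 ** N, 2 ** (N - 1)

def rand_block(n):
    return rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# In the basis B' (even elements first, odd elements last) the parity operator is diagonal.
Pi = np.diag([1.0] * half + [-1.0] * half).astype(complex)

def random_ssr_state():
    rho = np.zeros((dim, dim), dtype=complex)
    for blk in (slice(0, half), slice(half, dim)):
        B = rand_block(half)
        rho[blk, blk] = B @ B.conj().T           # positive block
    return rho / np.trace(rho).real

def off_block_norm(M):
    return np.linalg.norm(M[:half, half:]) + np.linalg.norm(M[half:, :half])

O_diag = np.zeros((dim, dim), dtype=complex)      # block-diagonal operator
O_diag[:half, :half], O_diag[half:, half:] = rand_block(half), rand_block(half)

O_anti = np.zeros((dim, dim), dtype=complex)      # anti-block-diagonal operator
O_anti[:half, half:], O_anti[half:, :half] = rand_block(half), rand_block(half)

O_generic = rand_block(dim)                       # respects neither allowed form

rho = random_ssr_state()
for name, O in [("block-diagonal", O_diag),
                ("anti-block-diagonal", O_anti),
                ("generic", O_generic)]:
    out = O @ rho @ O.conj().T
    print(name, "-> off-block norm of O rho O^dag:", round(off_block_norm(out), 10))

assert np.allclose(O_diag @ Pi, Pi @ O_diag)      # commutes with parity
assert np.allclose(O_anti @ Pi, -Pi @ O_anti)     # anti-commutes with parity
\end{verbatim}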
$\hat{P} \in \Gamma_S$ is an SSR-respecting projector if it preserves the SSR structure, $\hat{P}^2\ket{\psi}=\hat{P}\ket{\psi}$ and $\hat{P}\ket{\psi}=\hat{P}^{\dagger}\ket{\psi}$, for all $\ket{\psi}\in \mathcal{H}_S$. It can be seen that such conditions imply the following form for the projector $\hat{P}$ when written in the basis $\mathcal{B}'$: \begin{align} \hat{P}=P_{ee} \oplus P_{oo} = \left( \begin{array}{c|c} P_{ee} & \hat{0} \\ \hline \hat{0} & P_{oo} {\operatorname{e}}nd{array}\right), {\operatorname{e}}nd{align} where $P_{ee},P_{oo}\in M_{2^{N-1}\times 2^{N-1}}(\mathbb{C})$ are projectors acting in the even and odd parity sub-spaces respectively. This directly implies that $\hat{P}$ is an SSR projector in an $N$-mode fermionic space $\mathcal{H}$ if, and only if, there exists $\ket{\psi_i} \in \mathcal{H}_{S}$ with $\scp{\psi_i}{\psi_j}=\delta_{ij}$, for $i=1,\ldots,n\leq 2^{N}$, such that \begin{align} \hat{P}=\sum_{i=1}^{n} \proj{\psi_i}. {\operatorname{e}}nd{align} We now turn to characterizing physically allowed unitary operators. A unitary operator $\hat{U}: \mathcal{H}_{S} \rightarrow \mathcal{H}_{S}$ is a linear operator that satisfies $\hat{U} \hat{U}^{\dagger}=\hat{U}^{\dagger} \hat{U}=\hat{\mathbb{I}}_{\mathcal{H}}$. By using Theorem \ref{thm:blockform}, we are able to characterize them. \begin{theorem}[Unitary]\label{thm:Unitary} $\hat{U}$ is an SSR-respecting unitary operator acting on $\mathcal{H}_S$ if and only if, the matrix representation of the operator $\hat{U}$ in the basis $\mathcal{B}'$ takes the following form: \begin{align} \hat{U}=U_{ee} \oplus U_{oo} = \left(\begin{array}{c|c} U_{ee} & \hat{0} \\ \hline \hat{0} & U_{oo} {\operatorname{e}}nd{array}\right), {\operatorname{e}}nd{align} where $U_{ee}$ and $U_{oo}$ are unitary matrices, each in $M_{2^{N-1}\times 2^{N-1}}(\mathbb{C})$, acting on the even and odd sub-spaces respectively, and $\hat{0}$ is the zero element of $M_{2^{N-1}\times 2^{N-1}}(\mathbb{C})$. {\operatorname{e}}nd{theorem} Theorem \ref{thm:Unitary} can be proven following two different approaches. The first one relies on the argument that any unitary can be expressed in the form $U=e^{iH}$ where $H$ is a Hermitian operator. Then, the $U$ can only have a non-block-diagonal structure if, and only if, the $H$ also has a non-block-diagonal structure. However, as we have shown earlier, a non-block-diagonal $H$ cannot be a physical fermionic observable (say a Hamiltonian). Therefore, the unitary can never be generated, as such an observable would not be physically allowed. In the second approach, the proof is derived by contradiction, and we show that the fermionic anti-diagonal unitaries violate the no-signaling principle. To demonstrate that, let us consider two parties, Alice and Bob. Alice has one qubit $Q$ and a set of $m$ fermionic modes $A$ in her possession. Bob has a set of $n$ fermionic modes $B$ in his possession. The global system is given by the Hilbert space $\mathcal{H}_{Q}\otimes \left(\mathcal{H}_{S}^A \wedge \mathcal{H}_S^B\right)$ corresponding to the qubit and fermionic modes. To start, the initial state is: \begin{align} \rho_i=\ketbra{+}{+}_Q \otimes \rho_{AB}, {\operatorname{e}}nd{align} where the initial qubit state is given by $\ket{+}_Q=\frac{1}{\sqrt{2}}(\ket{0}_Q + \ket{1}_Q)$ and $\rho_{AB}$ is an arbitrary state of the fermionic mode sets $A$ and $B$. 
Next, we consider the following steps where we process the system with evolutions that are governed by the anti-diagonal unitary operations on the fermionic spaces (see Figure~\ref{fig:circuit}). \\ \begin{figure} \centering \includegraphics[width=0.950\columnwidth]{circuit.png} \caption{\label{fig:circuit} A scheme that shows the violation of no-signaling by anti-diagonal unitaries. A tripartite system $QAB$ is considered where $Q$ is a qubit system and $A$ and $B$ are fermionic systems. The $H$ represents the Hadamard gate applied on $Q$. Depending on the application of the anti-diagonal fermionic unitaries $U_{B}$ and $U_{B}^\dag$ on $B$, while $QA$ undergoes the anti-diagonal controlled-unitaries $U_{QA}=\ketbra{0}{0}_Q\otimes \mathbb{I}_A + \ketbra{1}{1}_Q \otimes U_A$ and $U_{QA}^\dag$, the local state of $Q$ changes. It implies that $B$ can signal $Q$ without interacting with it, which violates the no-signaling principle. See text for more details.} {\operatorname{e}}nd{figure} \noindent Step-1: Bob applies a unitary $U_B$ on the modes $B$, where the unitary is anti-diagonal and given by \begin{align} U_B= \left(\begin{smallmatrix}0 & U_b^{oe} \\ U_b^{eo} & 0 {\operatorname{e}}nd{smallmatrix}\right). {\operatorname{e}}nd{align} \noindent Step-2: Alice implements a global (control) unitary evolution on both $Q$ and the fermionic modes $A$ driven by $U_{QA}=\ketbra{0}{0}_Q\otimes \mathbb{I}_A + \ketbra{1}{1}_Q \otimes U_A$, where \begin{align} U_A= \left(\begin{smallmatrix}0 & U_a^{oe} \\ U_a^{eo} & 0 {\operatorname{e}}nd{smallmatrix}\right) {\operatorname{e}}nd{align} is an anti-diagonal unitary applied on the modes of $A$. \noindent Step-3: Bob applies the unitary $U_B^\dag$ on the modes $B$. \noindent Step-4: Alice performs $U_{QA}^\dag$ jointly on the qubit $Q$ and the modes $A$ in her possession. \\ \noindent After Step-4, the resultant final state of $QAB$ becomes \begin{align} \rho_f=\ketbra{-}{-}_Q \otimes \rho_{AB}, {\operatorname{e}}nd{align} where the final qubit state is given by $\ket{-}_Q=\frac{1}{\sqrt{2}}(\ket{0}_Q - \ket{1}_Q)$ and the (global) state of the modes $A$ and $B$ does not change. It is clear that if Bob does not apply $U_B$ and $U_B^\dag$ in Step-1 and Step-3 respectively, the overall initial state does not change and the qubit state remains in the state $\ket{+}_Q$. However, Bob's application of the unitaries updates it to $\ket{-}_Q$. Since these qubit states are orthogonal, i.e., $\scp{+}{-}_Q=0$, just by measuring the state of $Q$, Alice can determine whether Bob has applied the unitaries $U_B$ and $U_B^\dag$ or not, without any communication with him. This violates the no-signaling principle, and it is exclusively due to the anti-diagonal unitaries. Therefore, the only physically allowed fermionic unitaries are the ones that are block-diagonal. We outline a more detailed version in Appendix \ref{sec:proofs}. Thus, as constrained by the parity SSR, both the fermionic projectors and unitaries must be block-diagonal when the space divides into odd and even subspaces. With this, we move on to characterize general quantum operations. \subsection{General quantum operations} \label{subsec:general} Here we introduce the general formalism to characterize quantum operations for finite fermionic systems. The formalism considers the algebraic structure of anti-commutation relations of creation and annihilation operators of a fermionic Hilbert space and the parity SSR.
There are three different ways to describe general quantum operations or channels for distinguishable quantum systems, and all three approaches are equivalent \cite{Nielsen00}. The first one is physically motivated and describes a general quantum operation as an effect on a system while interacting with an environment. This approach is also known as the {\operatorname{e}}mph{Stinespring dilation} of quantum operations. The second approach, more abstract than the first, is the {\operatorname{e}}mph{operator-sum representation} (also known as {\operatorname{e}}mph{Kraus representation}) of quantum operations. The third method is {\operatorname{e}}mph{axiomatic}, and it is based on the complete positivity and trace preservation of operations, also known as {\operatorname{e}}mph{completely-positive-trace-preserving (CPTP)} operations. A map often describes an arbitrary quantum operation. Say $\varphi$ maps an arbitrary quantum state of $N$-modes to another state of $M$-modes. Here $N$ and $M$ are not necessarily equal. To characterize general quantum maps, let us define the complete-positivity (CP). \begin{defi}[CP maps] A map $\varphi$ is said to be completely positive (CP) if, $\forall \hat{A} \in \mathcal{R}_{S}^{(N)}$, i.e., $\hat{A} \geq 0$, \begin{align} (\varphi\wedge \mathbb{I}_K)(\hat{A})\geq 0, \ \ \ \forall K\in\mathbb{N}, {\operatorname{e}}nd{align} where $\mathbb{I}_K$ is the identity operation acting on the fermionic environment with the Hilbert space $\mathcal{H}_E^K$ of $K$-modes. {\operatorname{e}}nd{defi} This definition of complete-positivity of a map enables us to present the crucial theorem regarding general quantum channels, below. \begin{theorem}[General quantum channels] \label{thm:Sinespring-SupO-CPTP} For an SSR fermionic quantum operation represented by a map $\varphi$, the following statements are equivalent. \begin{enumerate} \item (Stinespring dilation.)\label{ax:STPD} There exist a fermionic $K$-mode environment with Hilbert space $\mathcal{H}_{E}^{K}$, a parity SSR-respecting state $\omega \in \mathcal{R}_E^{K}$, and a parity SSR-respecting unitary operator $\hat{U}$ that acts on $\mathcal{H}_{S}^{N}\wedge \mathcal{H}_{E}^{K}$ with $K \geq N$, such that: \begin{equation} \varphi(\rho)={\operatorname{Tr\,}}_{E}(\hat{U}(\rho\wedge \omega)\hat{U}^{\dagger}), \quad \forall \rho\in \mathcal{R}_{S}^{N}. {\operatorname{e}}nd{equation} \item (Operator-sum representation.) \label{ax:OSR2} There exists a set of parity SSR-respecting linear operators $E_k$, where $\sum_k E_k^{\dagger}E_k = \mathbb{I}_{N}$, such that: \begin{equation} \varphi(\rho)=\sum_k E_k \rho E_k^{\dagger}, \quad \forall \rho\in \mathcal{R}_{S}^{N}. {\operatorname{e}}nd{equation} \item (Axiomatic formalism.)\label{ax:CPTP2} $\varphi$ fulfills the following properties: \begin{itemize} \item It is trace preserving, i.e. ${\operatorname{Tr\,}}(\varphi(\rho))= {\operatorname{Tr\,}}(\rho)$, $\forall \rho\in \mathcal{R}_{S}^{N}$. \item Convex-linear, i.e. $\varphi\left(\sum_i p_i\rho_i \right)=\sum_i p_i \varphi(\rho_i)$, $\forall \rho_i \in \mathcal{R}_{S}^{N}$, where $p_i$s are the probabilities, i.e., $0\leq p_i \leq 1$ and $\sum_i p_i=1$. \item $\varphi$ is CP. {\operatorname{e}}nd{itemize} {\operatorname{e}}nd{enumerate} {\operatorname{e}}nd{theorem} The complete proof of the theorem is outlined in Appendix \ref{sec:prooftheorem}. The {\operatorname{e}}mph{Stinespring dilation} based formalism of quantum map is physically motivated. 
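The equivalence of the first two statements can also be checked concretely in the smallest non-trivial case of one system mode and one environment mode. In the \texttt{numpy} sketch below (ours; the random block-diagonal global unitary and the diagonal input state are illustrative assumptions), the environment is taken as the last mode and starts in the vacuum, so the fermionic partial trace of Proposition \ref{prop:partialtrace} carries no extra sign and reduces to the ordinary partial trace; the Kraus operators are then extracted in the standard way as $E_k=(\mathbb{I}\otimes\bra{k})\hat{U}(\mathbb{I}\otimes\ket{\Omega})$ in this matrix representation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def haar_unitary(n):
    q, r = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

# One system mode S and one environment mode E; basis |s_S s_E> with index 2*s_S + s_E.
# Total-parity-even states: |00>, |11> (indices 0, 3); odd: |01>, |10> (indices 1, 2).
even, odd = [0, 3], [1, 2]
U = np.zeros((4, 4), dtype=complex)
U[np.ix_(even, even)] = haar_unitary(2)    # a parity-SSR-respecting (block-diagonal) unitary
U[np.ix_(odd, odd)] = haar_unitary(2)

omega = np.diag([1.0, 0.0]).astype(complex)   # environment vacuum (an even state)
rho = np.diag([0.7, 0.3]).astype(complex)     # an SSR state of the single system mode

# Stinespring route: evolve rho (x) omega globally, then trace out the environment mode.
glob = U @ np.kron(rho, omega) @ U.conj().T
phi_dilation = np.array([[glob[2 * i, 2 * j] + glob[2 * i + 1, 2 * j + 1]
                          for j in range(2)] for i in range(2)])

# Operator-sum route: E_k = (1 (x) <k|) U (1 (x) |0>).
kraus = [np.array([[U[2 * i + k, 2 * j] for j in range(2)] for i in range(2)])
         for k in range(2)]
phi_kraus = sum(E @ rho @ E.conj().T for E in kraus)

assert np.allclose(phi_dilation, phi_kraus)                        # the two routes agree
assert np.allclose(sum(E.conj().T @ E for E in kraus), np.eye(2))  # completeness
# E_0 is block-diagonal and E_1 anti-block-diagonal (the parity blocks are 1x1 here),
# consistent with the allowed block/anti-block structure of SSR-preserving operators.
print(np.round(kraus[0], 3), np.round(kraus[1], 3), sep="\n\n")
\end{verbatim}
In the Stinespring picture itself the physical interpretation is direct.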
There, a fermionic system with state $\rho \in \mathcal{R}_{S}^{N}$, interacts with an environment in a state $\omega \in \mathcal{R}_{E}^{K}$ through a global unitary operation $U \in \Gamma_{S}^{N+K}$. The quantum operation acting on the system $\mathcal{R}_{S}^{N}$ is then given by the reduced effect on it, which is obtained by partial tracing over environment degrees of freedom. The linear operators $E_k$ are also known as {\operatorname{e}}mph{Kraus operators}. Since the $E_k$ operators map SSR-respecting fermionic states onto themselves, they have to fulfill the restrictions imposed by Theorem \ref{thm:blockform}. The condition $ \sum_k E_k^{\dagger}E_k = \mathbb{I}_{2^N}$ guarantees that the operation $\varphi$ is trace preserving, i.e. ${\operatorname{Tr\,}}(\varphi(\rho))= 1, \ \forall \rho$ with ${\operatorname{Tr\,}}(\rho)=1$. The case with $ \sum_k E_k^{\dagger}E_k < \mathbb{I}_{2^N}$ corresponds to incomplete (or selective) quantum operations, where ${\operatorname{Tr\,}}(\varphi(\rho))< 1$ for ${\operatorname{Tr\,}}[\rho]=1$. Note, although physical states, observables, projections and unitaries cannot take the anti-diagonal block form of preserving the parity SSR, the Kraus operators assume both block-diagonal and anti-block-diagonal forms. \section{Correlations in fermionic systems} \label{sec:discussion} With the physically meaningful notions of quantum states and operations, we now discuss some of our results' implications in the reasonable entanglement theory for finite-dimensional fermionic systems. The mode perspective has appeared to be more reasonable to study the correlations present in finite-dimensional fermionic systems. We recover the usual concepts in entanglement theory by exploiting the wedge product's analogies with the tensor product. \subsection{Uncorrelated states} \label{subsec:uncorrelated} One of the main ingredients for various tasks in quantum information theory is inter-system correlations. In the case of composite systems $AB$ with distinguishable subsystems $A$ and $B$, the global Hilbert space is $\mathcal{H}_{AB}=\mathcal{H}_A \otimes \mathcal{H}_B$. The fully uncorrelated states, across the partition $A$ and $B$, are the product states of the form $\rho_A\otimes \rho_B$. Following the discussion presented in \cite{Banuls09} we see that there are three possible ways to define the uncorrelated fermionic states for a bipartite system of fermionic modes with Hilbert space $\mathcal{H}=\mathcal{H}_A \wedge \mathcal{H}_B$. (i) The first option is to define them as the SSR states $\rho_{AB}$ that satisfy ${\operatorname{Tr\,}}(\rho_{AB} (\hat{O}_A\wedge \hat{O}_B))={\operatorname{Tr\,}}(\rho_A \hat{O}_A){\operatorname{Tr\,}}(\rho_B \hat{O}_B)$ for all $\hat{O}_A, \hat{O}_B$ that are local Hermitian operators. Note here that $\hat{O}_A$, and $\hat{O}_B$ do not need to respect the parity SSR. (ii) The second option is to directly define the uncorrelated SSR states $\rho_{AB}$ as the ones that are SSR product states, so that $\rho_{AB}=\rho_A\wedge \rho_B$. (iii) The third possibility is to define them as the SSR states that fulfil ${\operatorname{Tr\,}}(\rho_{AB} (\hat{O}_A\wedge \hat{O}_B))={\operatorname{Tr\,}}(\rho_A \hat{O}_A){\operatorname{Tr\,}}(\rho_B \hat{O}_B)$ for all $\hat{O}_A, \hat{O}_B$ that are local observables in $A$ and $B$ respectively. 
Notice that the difference between the definitions (i) and (iii) is the imposition of the SSR condition on the local Hermitian operators $\hat{O}_A, \hat{O}_B$, since we know from Proposition \ref{prop:observables} that Hermitian operators that respect parity SSR represent the fermionic observables. As shown in Appendix \ref{sec:proofs}, the definitions (i) and (ii) are in fact equivalent. However, the third definition is different from the other two, and this is apparent, particularly in the case of mixed states \cite{Banuls09}. The definition (iii) is the physically reasonable one. The property of a composite fermionic system being uncorrelated or correlated should be defined in physical terms, which can be experimentally tested. For this reason, we recast the definition of uncorrelated states for bipartite systems as: \begin{defi}[Uncorrelated states \cite{Banuls09}]\label{defi:UnCorr} Given a bipartite fermionic state $\rho_{AB}$, it is uncorrelated across the partition $A$ and $B$, $\forall \hat{O}_A\in \Gamma_S^A$ and $\forall \hat{O}_B\in \Gamma^B_S $, if \begin{gather} {\operatorname{Tr\,}}(\rho_{AB} (\hat{O}_A\wedge \hat{O}_B))={\operatorname{Tr\,}}(\rho_A \hat{O}_A){\operatorname{Tr\,}}(\rho_B \hat{O}_B), {\operatorname{e}}nd{gather} where $\mathcal{O}_X$ are the local observables in the subspace spanned by the modes on $X=A, B$. {\operatorname{e}}nd{defi} There are several distinctions and justifications required to arrive at this result, and that can be traced back to a fundamental difference between fermionic and distinguishable systems: the violation of local tomography principle for fermions \cite{Ariano}. In distinguishable systems, it is known that the local tomography principle is satisfied: given any bipartite state $\rho_{AB}$, the state can be fully reconstructed only by performing local measurements on $A$ and $B$. Nevertheless, as pointed out in \cite{Ariano}, the local tomography principle is not fulfilled in fermionic systems. In fact, there exist states $\rho_{AB} \neq \rho_A \wedge \rho_B$ that have the same expectation values as $\rho_A \wedge \rho_B$ for local observables on $A$ and $B$. \subsection{Separable and entangled states} \label{subsec:entangled} In quantum entanglement theory, the states created using local-operation-and-classical-communication (LOCC) on uncorrelated states are separable states. With the definition of uncorrelated and physically allowed quantum operations introduced in previous sections, the fermionic separable states shared by parties $A$ and $B$, are given by \begin{align} \rho_{AB}= \sum_i p_i \ \rho_i^{AB} , {\operatorname{e}}nd{align} where $0 \leqslant p_i \leqslant 1$, $\sum_i p_i=1$, and the states $\rho_i^{A B} \in \mathcal{R}_S^{AB}$ are uncorrelated states for the bipartition of the sets of modes $A$ and $B$. Obviously the $ \rho_{AB} \in \mathcal{R}_S^{AB}$ is an allowed fermionic state. Note that this definition is in agreement with the one introduced in \cite{Banuls09}. By definition, any bipartite state has non-vanishing correlations across the partition if it is not uncorrelated. The correlations exhibited in separable states can be quantified in terms of classical correlations and quantum discords, such as classical-quantum, quantum-classical, and quantum-quantum correlations \cite{Modi12}. By definition, an entangled state $\rho_{AB} \in \mathcal{R}_S^{AB}$ shared by two parties $A$ and $B$, is a state that is not separable. Now, one may go on to quantify the amount of entanglement present in a state. 
In general, it is challenging to characterize and quantify entanglement in a state. However, entanglement in pure states can be characterized easily by using the Schmidt decomposition, a notion that we formalize for fermionic systems in the following lines. The Schmidt decomposition for pure bipartite fermionic states in the mode picture is similar to that for bipartite systems with distinguishable parties, as stated in the theorem below. \begin{theorem}[Schmidt decomposition]\label{thm:schmidt} For any bipartite system and any pure SSR fermionic state $\ket{\psi}_{AB} \in \mathcal{H}_{S}^{AB}$, there exist orthonormal bases $\{\ket{i}_A\} \subset \mathcal{H}_{S}^A$ and $\{\ket{i}_B\} \subset \mathcal{H}_{S}^B$, such that \begin{align} \ket{\psi}_{AB}=\sum_i \sqrt{p_i} \ket{i}_A\wedge \ket{i}_B, {\operatorname{e}}nd{align} where $\{p_i\}$ are probabilities. {\operatorname{e}}nd{theorem} The proof of Theorem \ref{thm:schmidt} is outlined in Appendix \ref{sec:proofs}. The $\{p_i\}$ are called Schmidt coefficients and the number of elements in the set $\{p_i\}$ is called the Schmidt number. One may directly see that a pure fermionic state $\ket{\psi}_{AB}\in \mathcal{H}_{S}^{AB}$ is an uncorrelated state if and only if $\ket{\psi}_{AB}$ possesses Schmidt number $1$. We can also prove one of the important results in information theory, analogous to the distinguishable case: the purification of any fermionic SSR quantum state. This result is a corollary of the Schmidt decomposition theorem in the mode picture (Theorem \ref{thm:schmidt}). We provide the proof in Appendix \ref{sec:proofs}. \begin{corollary}[Purification] \label{cor:purif} If $\rho \in \mathcal{R}_{S}^M$ is an $M$-mode fermionic state, then there exists a fermionic space $\mathcal{H}_E^M$ of $M$ modes and a pure state $\omega \in \mathcal{H}_{S}^M \wedge \mathcal{H}_E^M$, such that ${\operatorname{Tr\,}}_E(\omega)=\rho$. {\operatorname{e}}nd{corollary} \section{Conclusions} \label{sec:conclusions} Although the characterization of reasonable fermionic states has been done previously, the identification of reasonable fermionic quantum operations was missing until now. In this work, we study the physically allowed operations that an arbitrary fermionic system may undergo. We show that the operations must satisfy the parity SSR in order to be physically allowed. We have started by introducing a wedge-product structure for the Hilbert space of a composite fermionic system that takes into account the anti-commutation relations of the creation and annihilation operators. Such a product arises from the natural separation of the fermionic system into subsystems of fermionic modes. In addition to that, by applying the parity SSR, we have identified the physical notions of states and observables. Using the framework, we have proven the uniqueness of the partial tracing procedure for fermionic states and characterized the projection and unitary operations. In particular, we have shown that the unphysical unitary operators may lead to violations of the no-signaling principle. We have then extended our studies to characterize general quantum operations in terms of the Stinespring dilation, the operator-sum representation, and the axiomatic representation based on completely-positive trace-preserving maps. We have shown the equivalence between these three representations. We have also explored the implications of our findings in studying uncorrelated and correlated fermionic states.
In particular, we have introduced Schmidt decomposition to characterize entanglement between modes in a fermionic system. This decomposition, in turn, has enabled us to demonstrate the purification of a general fermionic state. Our work has drawn a parallel between the quantum information theories (QITs) for distinguishable systems and fermionic (indistinguishable) systems. In particular, we have shown a close resemblance between the quantum states and operations, except that the QIT for distinguishable cases uses the tensor-product structure of the composite Hilbert space and the fermionic cases exploit the wedge-product structure along with a restriction imposed by parity SSR. With this, we see that much of the QIT developed for distinguishable systems can now be translated to fermionic systems. One may even extend the framework developed here to particles following fractional statistics, such as anyons. \\ \onecolumngrid \section*{Appendix} \appendix Here, we include the different properties of wedge product Hilbert space, proofs of the Lemmas and Theorems, technical details and explanations eluded in the main text. \section{Wedge product space} \label{sec:properties} We define the wedge product ($\wedge$) in Section \ref{sec:febo} to obtain a product notion in the fermionic space, analogous to the tensor product ($\otimes$) in the distinguishable system case. Here we check all the required properties of the so-called wedge product space, to be able to manipulate it consistently. We have the space $\mathcal{H}$ defined as the Hilbert space spanned by the orthonormal basis \begin{align} \mathcal{B}_0=\left\{f_{i_1}^{\dagger}\ldots f_{i_k}^{\dagger} \ket{\Omega} \medspace | \medspace i_j\in I_N \ \text{s.t.} \medspace i_{1}<i_2<\ldots<i_{k} \right\}, {\operatorname{e}}nd{align} where $k\in\{0,\ldots,N\}$ and $I_N$ is the ordered set of the $N$ modes. Since we desire to define a product that can be viewed as an analogue to the tensor product for the distinguishable case, it is natural to define it from a natural basis of the space. We denote by $\mathcal{H}$ the set of elements that are products (by products of operators we mean wedge product and its compositions unless stated otherwise) of $f_i^{\dagger}$ on the $\ket{\Omega}$ element. One can observe that the product, as they are defined above, are also the elements of the basis set $\mathcal{B}_0$. Now this basis can equivalently be expressed, thanks to this product definition and the definition of $\ket{i}=f_i^{\dagger}\ket{\Omega}$, as \begin{align}\label{eq:basis0} \mathcal{B}_0=\left\{\ket{i_1}\wedge \ldots\wedge \ket{i_n} |\medspace i_j<i_{j+1}, \medspace 0\leq n \leq N \right\}, \ \text{where} \ i_j\in I_N. {\operatorname{e}}nd{align} Once these properties are known, we can observe the structure of the basis $\mathcal{B}$ by fixing a number $N$ of modes. We consider the case where we extend our space to the space generated by our $N$ modes and one extra mode, which is labelled by $N+1$. It is clear that the extended basis set $\tilde{\mathcal{B}}$ contains the basis set $\mathcal{B}$. The basis set $\tilde{\mathcal{B}}$ is exactly the elements of the basis set of $\mathcal{B}$ by the wedge product of the elements of the set $\{\ket{\Omega},\ket{N+1}\}$. So we obtain that $\tilde{\mathcal{B}}=\mathcal{B} \wedge \{\ket{\Omega},\ket{N+1}\}$. Moreover, since the set $\{\ket{\Omega},\ket{N+1}\}$ it is exactly the canonical basis for the fermionic space of just the $(N+1)$th mode, we have found a decomposition of the basis sets. 
If we denote the basis $\mathcal{B}$ of the $N$-mode space by $\mathcal{B}^N$, and the basis set of the $i$th mode by $E_i = \{\ket{\Omega},\ket{i}\} \subset \mathcal{H}_i$, we have \begin{align} \mathcal{B}^{N+1}=\mathcal{B}^N \wedge E_{N+1}=\mathcal{B}^{N-1} \wedge E_N \wedge E_{N+1}=\ldots= \mathcal{B}^{1} \wedge E_2\wedge \ldots \wedge E_{N+1} = E_{1} \wedge E_2\wedge \ldots \wedge E_{N+1}. \end{align} We thus obtain a decomposition of the general basis as a (wedge) product of the most elementary basis sets one can think of: each $E_i$ consists of the vacuum $\ket{\Omega}$ and the single excited state $\ket{i}$ of the $i$th mode. An $N$-mode space $\mathcal{H}$ therefore decomposes as the product of the Hilbert spaces $\mathcal{H}_i$ spanned by the orthonormal bases $E_i$, i.e. \begin{align} \mathcal{H}=\bigwedge_{i=1}^N \mathcal{H}_i. \end{align} Having the natural basis at hand, one may express basis elements as (wedge) products of basis elements belonging to the subspaces. Given $\ket{u}\in \mathcal{H}_u$ and $\ket{v} \in \mathcal{H}_v$, with $k_u,k_v \in \{0,\ldots,N\}$ and all mode indices $i_{j_u},i_{j_v} \in I_N$, we define \begin{align}\label{eq:basestruc} & \ket{u}=\prod_{j_u=1}^{k_u} f_{i_{j_u}}^{\dagger} \ket{\Omega}, \quad \ket{v}=\prod_{j_v=1}^{k_v} f_{i_{j_v}}^{\dagger} \ket{\Omega}, \qquad & \ket{u} \wedge \ket{v} := \prod_{j_u=1}^{k_u} f_{i_{j_u}}^{\dagger} \prod_{j_v=1}^{k_v} f_{i_{j_v}}^{\dagger} \ket{\Omega} \in \mathcal{H}_u \wedge \mathcal{H}_v. \end{align} It is worth noting that the product is closed on the set of elements obtained by acting with creation operators on the vacuum state $\ket{\Omega}$. In $\mathcal{H}$, the product inherits associativity from the associativity of the composition of operators. The vacuum $\ket{\Omega}$ is the neutral element of the product: $\forall \ket{u}\in \mathcal{H}$, $\ket{u}\wedge \ket{\Omega}=\ket{\Omega}\wedge \ket{u}= \ket{u}$. Moreover, for any pair $\ket{u}\in\mathcal{H}_u$, $\ket{v}\in\mathcal{H}_v$ one has $\ket{u}\wedge \ket{v} =\pm \ket{v}\wedge \ket{u}$, where the sign depends on the number of creation operators in each factor: if at least one of the two factors contains an even number of creation operators, then $\ket{u}\wedge \ket{v} = \ket{v}\wedge \ket{u}$, while if both contain an odd number, then $\ket{u}\wedge \ket{v} = - \ket{v}\wedge \ket{u}$. This follows from the definition of the product as a composition of creation operators together with the fermionic anti-commutation relations. By the same argument, if two elements $\ket{u},\ket{v} \in \mathcal{H}$ share a creation operator, then $\ket{u}\wedge \ket{v}=0$. Beyond basis elements, for any $\ket{\psi_u} \in \mathcal{H}_u$ and $\ket{\psi_v} \in \mathcal{H}_v$ we have $\ket{\psi_u}\wedge \ket{\psi_v} \in \mathcal{H}_u \wedge \mathcal{H}_v$. A key feature of this formalism is that, although we have fixed an order of the modes to work with, any permutation of this order leads to the same result; e.g. $E_2\wedge E_1$ is a valid basis of the same space as $E_1\wedge E_2$, indeed the same basis with some of its elements multiplied by $-1$. From now on, when we refer to the product $\ket{\psi_u}\wedge \ket{\psi_v}$, we assume that the two factors live in disjoint parts of the mode decomposition.
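As an illustration of this bookkeeping, the following minimal Python sketch (our own illustration; the representation of a basis ket by a sign and an ordered tuple of mode indices, as well as the helper names, are choices made here) builds the $2^N$ ordered basis kets and implements the wedge product of basis kets, reproducing the $\pm 1$ sign under reordering and the vanishing of products with a repeated creation operator.
\begin{verbatim}
from itertools import combinations

# A basis ket f_{i1}^+ ... f_{ik}^+ |Omega> is stored as (sign, modes),
# with the mode indices listed in the order the creation operators act.

def wedge(u, v):
    """Wedge product of two basis kets; returns None if it vanishes."""
    su, mu = u
    sv, mv = v
    modes = list(mu) + list(mv)
    if len(set(modes)) < len(modes):     # repeated creation operator -> 0
        return None
    sign = su * sv
    # bubble-sort into increasing order; each adjacent swap is one
    # anti-commutation and flips the sign
    for i in range(len(modes)):
        for j in range(len(modes) - 1 - i):
            if modes[j] > modes[j + 1]:
                modes[j], modes[j + 1] = modes[j + 1], modes[j]
                sign = -sign
    return (sign, tuple(modes))

def basis(N):
    """All 2^N ordered basis kets of an N-mode fermionic space."""
    return [(1, c) for k in range(N + 1)
                   for c in combinations(range(1, N + 1), k)]

print(wedge((1, (1,)), (1, (2,))))   # (1, (1, 2)):   |1> ^ |2>
print(wedge((1, (2,)), (1, (1,))))   # (-1, (1, 2)):  |2> ^ |1> = -|1> ^ |2>
print(wedge((1, (1,)), (1, (1,))))   # None:          |1> ^ |1> = 0
print(len(basis(4)))                 # 16 = 2^4
\end{verbatim}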
With this notion of $\mathcal{H}$ in place, we now consider its dual space. The dual space of $\mathcal{H}$ is the space of the elements $\bra{\psi}$, defined so that the scalar product on $\mathcal{H}$ can be written as $(\ket{\varphi},\ket{\psi})=\scp{\varphi}{\psi}$. We now define how the $\wedge$ product acts on this space in a way consistent with this condition. In the simplest case, the dual element of $\ket{\Omega}$ is $\ket{\Omega}^{\dagger}=\bra{\Omega}$, so that $\scp{\Omega}{\Omega}=1$. It is then natural that, for an element $\ket{i}=f_i^{\dagger}\ket{\Omega}$, the corresponding dual element is $\bra{\Omega}f_i$: from the anti-commutation relations it follows that $\bra{\Omega}f_i f_i^{\dagger}\ket{\Omega}=1$, which is consistent with the $\dagger$ notion of the dual space. To see how the product $\wedge$ should be defined on duals, consider the element $\ket{i}\wedge \ket{j}=f_i^{\dagger}f_j^{\dagger}\ket{\Omega}$. Its dual element is $\bra{\Omega}f_j f_i$ and not $\bra{\Omega} f_i f_j$, because the first gives the ket norm $+1$ while the second gives $-1$. Hence $(\ket{i}\wedge\ket{j})^{\dagger}=\bra{\Omega} f_j f_i$. Since we want the product to have properties similar to those of the tensor product, we would like $(\ket{i}\wedge\ket{j})^{\dagger}=\bra{i}\wedge\bra{j}$. Imposing this condition, a consistent operation for the $\wedge$ product on the dual space is obtained by setting $\bra{i}\wedge\bra{j}:=\bra{\Omega}f_j f_i$. Performing the same generalization as for the $\mathcal{H}$ space, the duals of the basis elements in Eq.~\ref{eq:basestruc} are defined as \begin{align} & \bra{u}=\bra{\Omega}\prod_{j_u=k_u}^{1} f_{i_{j_u}}, \quad \bra{v}=\bra{\Omega}\prod_{j_v=k_v}^{1} f_{i_{j_v}}, \qquad & \bra{u}\wedge \bra{v}:=\bra{\Omega} \prod_{j_v=k_v}^{1} f_{i_{j_v}} \prod_{j_u=k_u}^{1} f_{i_{j_u}}. \end{align} Following the same reasoning as for $\mathcal{H}$, we have \begin{align} \mathcal{H}^*=\mathcal{H}_{1}^* \wedge \ldots \wedge \mathcal{H}_N^* \end{align} where $\mathcal{H}_{i}^*$ is spanned by $\{\bra{\Omega},\bra{i}\}$, which matches exactly the dual of $\mathcal{H}_i$. From here, it is easy to see that the wedge product is compatible with the dual operation in general: for $\ket{\psi_u} \in \mathcal{H}_u$ and $\ket{\varphi_v}\in\mathcal{H}_v$, \begin{equation} \left(\ket{\psi_u}\wedge \ket{\varphi_v}\right)^{\dagger}=\bra{\psi_u}\wedge\bra{\varphi_v} \in \mathcal{H}_u^* \wedge \mathcal{H}_v^*. \end{equation} From the associativity of the $\wedge$ product on both spaces, we further have \begin{align} \left(\ket{\psi_1}\wedge\ldots\wedge\ket{\psi_n}\right)^{\dagger}=\bra{\psi_1}\wedge \ldots \wedge \bra{\psi_n}. \end{align} With the dual elements, we can easily compute scalar products. For two basis elements the result is given by the following Lemma.
\begin{lem} \label{lema:scprod} The scalar product between the elements $f_{i_1}^{\dagger}\ldots f_{i_k}^{\dagger}\ket{\Omega}$ and $f_{j_1}^{\dagger}\ldots f_{j_m}^{\dagger}\ket{\Omega}$ is given by: \begin{gather} \bra{\Omega}f_{i_k}\ldots f_{i_1}f_{j_1}^{\dagger}\ldots f_{j_m}^{\dagger}\ket{\Omega}=\begin{cases} 0 & \text{If } k\neq m \\ \det{W_k} & k=m {\operatorname{e}}nd{cases} \qquad \text{where } W_k=\begin{pmatrix} \delta_{i_1,j_1} & \ldots & \delta_{i_1,j_k} \\ \vdots & {\operatorname{d}}ots & \vdots\\ \delta_{i_k,j_1} & \ldots & \delta_{i_k,j_k} {\operatorname{e}}nd{pmatrix} {\operatorname{e}}nd{gather} {\operatorname{e}}nd{lem} \begin{proof} First, it has to be seen that if $k\neq m$ then the value is 0. If $k\neq m$ then or $m>k$ or $k>m$.\\ In the first case, by the pigeon hole principle there will be some $l_0 \in \{1,...,m\} $ such that $j_{l_0}\neq i_s \forall s\in\{1,...,k\}$, therefore $\delta_{j_{l_0} s}=0 \forall s\in \{1,...,k\}$. That is why the element $f^{\dagger}_{j_{l_0}}$ can be commuted (with the corresponding sign, towards $\bra{\Omega}$, and annihilate it :\begin{equation} \bra{\Omega}f_{i_k}\ldots f_{i_1}f_{j_1}^{\dagger}\ldots f_{j_m}^{\dagger}\ket{\Omega}=(-1)^{k+j_{l_0}-1}\bra{\Omega}f^{\dagger}_{j_{l_0}} f_{i_k}\ldots f_{i_1}f_{j_1}^{\dagger}\ldots f_{j_m}^{\dagger}\ket{\Omega}=0 {\operatorname{e}}nd{equation} Similarly this procedure can be repeated in the second case commuting the corresponding $f$ element towards the element $\ket{\Omega}$ and annihilate it as well. So just the case $k=m$ is left to see. The exposed relation will be proved by induction. For $k=1$: \begin{equation} \bra{\Omega}f_{i_1}f^{\dagger}_{j_1}\ket{\Omega}=\delta_{i_1 j_1}- \bra{\Omega}f^{\dagger}_{j_1}f_{i_1}\ket{\Omega}=\delta_{i_1 j_1}= \det(\delta_{i_1 j_1}) {\operatorname{e}}nd{equation} For $k=2$ the proof follows by: \begin{gather} \bra{\Omega}f_{i_2}f_{i_1}f^{\dagger}_{j_1}f^{\dagger}_{j_2}\ket{\Omega}=\delta_{i_1 j_1}\bra{\Omega}f_{i_2}f^{\dagger}_{j_2}\ket{\Omega}-\bra{\Omega}f_{i_2}f^{\dagger}_{j_1}f_{i_1}f^{\dagger}_{j_2}\ket{\Omega}=\nonumber \\ =\delta_{i_1 j_1}\delta_{i_2 j_2}-\delta_{i_2 j_1}\bra{\Omega}f_{i_1}f^{\dagger}_{j_2}\ket{\Omega}+\bra{\Omega}f^{\dagger}_{j_1}f_{i_2}f_{i_1}f^{\dagger}_{j_2}\ket{\Omega}=\delta_{i_1 j_1}\delta_{i_2 j_2}-\delta_{i_2 j_1}\delta_{i_1 j_2}=\det \begin{pmatrix} \delta_{i_1 j_1} & \delta_{i_1 j_2}\\ \delta_{i_2 j_1} & \delta_{i_2 j_2} {\operatorname{e}}nd{pmatrix} {\operatorname{e}}nd{gather} So now lets choose a fixed integer $n_0>1$, and lets assume that $\forall n\leq n_0, n\in \mathbb{N} $ the relation is true. If the relation is proved for $n_0+1$ then the proof is done. 
So lets consider this case: \begin{gather} \bra{\Omega}f_{i_{n_0+1}}\ldots f_{i_1}f_{j_1}^{\dagger}\ldots f_{j_{n_0+1}}^{\dagger}\ket{\Omega}=\delta_{i_1 j_1} \bra{\Omega}f_{i_{n_0+1}}\ldots f_{i_2}f_{j_2}^{\dagger}\ldots f_{j_{n_0+1}}^{\dagger}\ket{\Omega}-\bra{\Omega}f_{i_{n_0+1}}\ldots f_{i_2}f_{j_1}^{\dagger}f_{i_1}f_{j_2}^{\dagger}\ldots f_{j_{n_0+1}}^{\dagger}\ket{\Omega}=\nonumber \\ =\delta_{i_1 j_1}\det(\tilde{A}_{n_0+1}^{1,1})-\delta_{i_2 j_1} \bra{\Omega}f_{i_{n_0+1}}\ldots f_{i_3}f_{i_1}f_{j_2}^{\dagger}\ldots f_{j_{n_0+1}}^{\dagger}\ket{\Omega} +\bra{\Omega}f_{i_{n_0+1}}\ldots f_{i_3}f_{j_1}^{\dagger}f_{i_2}f_{i_1}f_{j_2}^{\dagger}\ldots f_{j_{n_0+1}}^{\dagger}\ket{\Omega}=\nonumber \\ = \delta_{i_1 j_1}\det(\tilde{A}_{n_0+1}^{1,1})-\delta_{i_2 j_1} \det(\tilde{A}_{n_0+1}^{2,1}) +\delta_{i_3 j_1}\bra{\Omega}f_{i_{n_0+1}}\ldots f_{i_4}f_{i_2}f_{i_1}f_{j_2}^{\dagger}\ldots f_{j_{n_0+1}}^{\dagger}\ket{\Omega}+\nonumber \\ -\bra{\Omega}f_{i_{n_0+1}}\ldots f_{i_4}f_{j_1}^{\dagger}f_{i_3}f_{i_2}f_{i_1}f_{j_2}^{\dagger}\ldots f_{j_{n_0+1}}^{\dagger}\ket{\Omega}=\ldots = \sum_{k=1}^{n_0+1} (-1)^{k+1}\delta_{i_k j_1} \det(\tilde{A}_{n_0+1}^{k,1})+\nonumber \\ + \bra{\Omega}f_{j_1}^{\dagger}f_{i_{n_0+1}}\ldots f_{i_1}f_{j_2}^{\dagger}\ldots f_{j_{n_0+1}}^{\dagger}\ket{\Omega} = \sum_{k=1}^{n_0+1} (-1)^{k+1}\delta_{i_k j_1} \det(\tilde{A}_{n_0+1}^{k,1})=\det(A_{n_0+1}) {\operatorname{e}}nd{gather}{\operatorname{e}}nd{proof} With the structure of the dual space of $\mathcal{H}$, we are now ready to the extend of the wedge product structure to the operator space of $\mathcal{H}$, that is $\Gamma$. The operator space is constructed with the Cartesian product between two basis of $\mathcal{H}$ and $\mathcal{H}^*$ and the conventional linear extension of this elements. So, we can say, without loss of generality, that any linear operator on $\mathcal{H}$, a fermionic space of N-modes, takes the form $\hat{A}=\sum_{r=1}^{2^N}\sum_{s=1}^{2^N} \alpha_r \beta_s \ketbra{u_r}{u_s}$, where $\{ \ket{u_r}, \ket{u_s} \} \in \mathcal{B}_0^N$. Focusing on the action of the $\wedge$ product lets consider a bipartition of the modes $A|B$. From now on, we consider the decomposition of any vector or operator in terms of ordered actions of creation and annihilation operators acting on the $\ket{\Omega}$ or the $\bra{\Omega}$ elements. In case that we have the sum of many of those terms, they will be distinct one from each other. From all these considerations taken into account, we present the following results. \begin{lem}\label{lema:distribu} If $\hat{C},\hat{E}$ are local operators on $A$, and $\hat{D},\hat{F}$ are local operators on $B$, then $(\hat{C}\wedge \hat{D})(\hat{E}\wedge \hat{F})=(\hat{C}\hat{E}\wedge \hat{D}\hat{F})$. 
{\operatorname{e}}nd{lem} \begin{proof} Since the products fulfill the distributive and linear properties both in $\mathcal{H}$ and $\mathcal{H}^*$, it is enough to proof it for just the matrix elements, given by \begin{gather} \hat{C}=c f_{\lambda_1}^{\dagger}\ldots f_{\lambda_l}^{\dagger}\proj{\Omega}f_{\mu_m}\ldots f_{\mu_1} \medspace \medspace \hat{D}=d f_{\nu_1}^{\dagger}\ldots f_{\nu_n}^{\dagger}\proj{\Omega}f_{\pi_p}\ldots f_{\pi_1} \medspace \medspace \hat{E}=e f_{\rho_1}^{\dagger}\ldots f_{\rho_r}^{\dagger}\proj{\Omega}f_{\sigma_s}\ldots f_{\sigma_1} \medspace \medspace \hat{F}=f f_{\tau_1}^{\dagger}\ldots f_{\tau_t}^{\dagger}\proj{\Omega}f_{\xi_x}\ldots f_{\xi_1} \nonumber \\ (\hat{C}\wedge \hat{D})(\hat{E}\wedge \hat{F})=cdef( f_{\lambda_1}^{\dagger}\ldots f_{\lambda_l}^{\dagger} f_{\nu_1}^{\dagger}\ldots f_{\nu_n}^{\dagger}\proj{\Omega} f_{\pi_p}\ldots f_{\pi_1} f_{\mu_m}\ldots f_{\mu_1}) (f_{\rho_1}^{\dagger}\ldots f_{\rho_r}^{\dagger} f_{\tau_1}^{\dagger}\ldots f_{\tau_t}^{\dagger}\proj{\Omega} f_{\xi_x}\ldots f_{\xi_1} f_{\sigma_s}\ldots f_{\sigma_1})=\nonumber \\ =cdef W f_{\lambda_1}^{\dagger}\ldots f_{\lambda_l}^{\dagger} f_{\nu_1}^{\dagger}\ldots f_{\nu_n}^{\dagger}\proj{\Omega} f_{\xi_x}\ldots f_{\xi_1} f_{\sigma_s}\ldots f_{\sigma_1}=\frac{W}{W_1 W_2}\cdot CE\wedge DF {\operatorname{e}}nd{gather} where $W=\bra{\Omega}f_{\pi_p}\ldots f_{\pi_1} f_{\mu_m}\ldots f_{\mu_1}f_{\rho_1}^{\dagger}\ldots f_{\rho_r}^{\dagger} f_{\tau_1}^{\dagger}\ldots f_{\tau_t}^{\dagger}\ket{\Omega}$, $W_1=\bra{\Omega} f_{\mu_m}\ldots f_{\mu_1}f_{\rho_1}^{\dagger}\ldots f_{\rho_r}^{\dagger} \ket{\Omega}$, and $W_2=\bra{\Omega}f_{\pi_p}\ldots f_{\pi_1} f_{\tau_1}^{\dagger}\ldots f_{\tau_t}^{\dagger}\ket{\Omega}$. The proof is complete if we can show that $W=W_1 W_2 $. For $t\neq p$ or $r\neq m$, we have $W=W_1 W_2=0$. 
But if they are equal, $t= p$ and $r= m$, then \begin{gather} W=\det \begin{pmatrix} \delta_{\mu_1 ,\rho_1} & \ldots & \delta_{\mu_1, \rho_r} & \delta_{\mu_1, \tau_1} & \ldots & \delta_{\mu_1, \tau_t} \\ \vdots & {\operatorname{d}}ots & \vdots & \vdots & {\operatorname{d}}ots & \vdots \\ \delta_{\mu_m ,\rho_1} & \ldots & \delta_{\mu_m, \rho_r} & \delta_{\mu_m, \tau_1} & \ldots & \delta_{\mu_m, \tau_t} \\ \delta_{\pi_1 ,\rho_1} & \ldots & \delta_{\pi_1, \rho_r} & \delta_{\pi_1, \tau_1} & \ldots & \delta_{\pi_1, \tau_t} \\ \vdots & {\operatorname{d}}ots & \vdots & \vdots & {\operatorname{d}}ots & \vdots \\ \delta_{\pi_p ,\rho_1} & \ldots & \delta_{\pi_p, \rho_r} & \delta_{\pi_p, \tau_1} & \ldots & \delta_{\pi_p, \tau_t} {\operatorname{e}}nd{pmatrix} = \det \begin{pmatrix} \delta_{\mu_1 ,\rho_1} & \ldots & \delta_{\mu_1, \rho_r} & 0 & \ldots & 0\\ \vdots & {\operatorname{d}}ots & \vdots & \vdots & {\operatorname{d}}ots & \vdots \\ \delta_{\mu_m ,\rho_1} & \ldots & \delta_{\mu_m, \rho_r} & 0 & \ldots & 0 \\ 0 & \ldots & 0 & \delta_{\pi_1, \tau_1} & \ldots & \delta_{\pi_1, \tau_t} \\ \vdots & {\operatorname{d}}ots & \vdots & \vdots & {\operatorname{d}}ots & \vdots \\ 0 & \ldots & 0 & \delta_{\pi_p, \tau_1} & \ldots & \delta_{\pi_p, \tau_t}{\operatorname{e}}nd{pmatrix}=\nonumber \\ = \det\begin{pmatrix} \delta_{\mu_1,\rho_1} & \ldots & \delta_{\mu_1, \rho_r}\\ \vdots & {\operatorname{d}}ots & \vdots \\ \delta_{\mu_m,\rho_1} & \ldots & \delta_{\mu_m, \rho_r} {\operatorname{e}}nd{pmatrix} \det\begin{pmatrix} \delta_{\pi_1,\tau_1} & \ldots & \delta_{\pi_1, \tau_t}\\ \vdots & {\operatorname{d}}ots & \vdots \\ \delta_{\pi_p,\tau_1} & \ldots & \delta_{\pi_p, \tau_t} {\operatorname{e}}nd{pmatrix}=W_1 W_2 {\operatorname{e}}nd{gather} Where we use the Lemma \ref{lema:scprod} in the above manipulation. {\operatorname{e}}nd{proof} \begin{lem} If $\hat{P_A}$ and $\hat{P_B}$ are hermitian operators, then $\hat{P_A}\wedge \hat{P_B}$ is also a Hermitian operator. \label{lema:lho} {\operatorname{e}}nd{lem} \begin{proof} Lets write down $\hat{P_A}$ and $\hat{P_B}$ as \begin{gather} \hat{P_A}=\sum_{\lambda_1,\ldots, \lambda_l, \mu_1,\ldots,\mu_m} {\operatorname{e}}ta_{\lambda_1,\ldots, \lambda_l \mu_1,\ldots,\mu_m} f_{\lambda_1}^{\dagger}\ldots f_{\lambda_l}^{\dagger} \proj{\Omega} f_{\mu_m}\ldots f_{\mu_1}, \quad \hat{P_B}=\sum_{\nu_1,\ldots, \nu_n, \sigma_1,\ldots,\sigma_s} \beta_{\nu_1,\ldots, \nu_n \sigma_1,\ldots,\sigma_s} f_{\nu_1}^{\dagger}\ldots f_{\nu_n}^{\dagger} \proj{\Omega} f_{\sigma_s}\ldots f_{\sigma_1}. {\operatorname{e}}nd{gather} Since, $\hat{P_A}$ and $\hat{P_B}$ are Hermitian the coefficients have to fulfill the following relationship \begin{equation} {\operatorname{e}}ta_{\lambda_1,\ldots, \lambda_l, \mu_1,\ldots,\mu_m}={\operatorname{e}}ta^{*}_{\mu_1,\ldots, \mu_m, \lambda_1,\ldots,\lambda_l}, \ \ \mbox{and} \ \ \beta_{\nu_1,\ldots, \nu_n, \sigma_1,\ldots,\sigma_s}=\beta^{*}_{\sigma_1,\ldots, \sigma_s, \nu_1,\ldots,\nu_n}. 
{\operatorname{e}}nd{equation} Now, with the behavior of the wedge product on the $\mathcal{H}$ and $\mathcal{H}^*$ spaces, the $\hat{P_A}\wedge \hat{P_B}$ can be expressed as \begin{gather} \hat{P_A}\wedge \hat{P_B} =\sum_{\lambda_1,\ldots, \lambda_l , \mu_1,\ldots,\mu_m, \nu_1,\ldots, \nu_n \sigma_1,\ldots,\sigma_s} {\operatorname{e}}ta_{\lambda_1,\ldots, \lambda_l \mu_1,\ldots,\mu_m} \beta_{\nu_1,\ldots, \nu_n \sigma_1,\ldots,\sigma_s} f_{\lambda_1}^{\dagger}\ldots f_{\lambda_l}^{\dagger} f_{\nu_1}^{\dagger}\ldots f_{\nu_n}^{\dagger} \proj{\Omega} f_{\sigma_s}\ldots f_{\sigma_1} f_{\mu_m}\ldots f_{\mu_1}. {\operatorname{e}}nd{gather} It will be hermitian if and only if $\alpha_{\lambda_1,\lambda_l,\nu_1,\nu_n; \mu_1,\mu_m,\sigma_1,\sigma_s} = \alpha^{*}_{\lambda_1,\lambda_l,\nu_1,\nu_n; \mu_1,\mu_m,\sigma_1,\sigma_s} $, where $\alpha_{\lambda_1,\lambda_l,\nu_1,\nu_n; \mu_1,\mu_m,\sigma_1,\sigma_s} = {\operatorname{e}}ta_{\lambda_1,\ldots, \lambda_l, \mu_1,\ldots,\mu_m} \beta_{\nu_1,\ldots, \nu_n, \sigma_1,\ldots,\sigma_s} $ and $\alpha^{*}_{\lambda_1,\lambda_l,\nu_1,\nu_n; \mu_1,\mu_m,\sigma_1,\sigma_s} = {\operatorname{e}}ta^{*}_{\lambda_1,\ldots, \lambda_l, \mu_1,\ldots,\mu_m} \beta^{*}_{\nu_1,\ldots, \nu_n, \sigma_1,\ldots,\sigma_s} $. Using the impositions ${\operatorname{e}}ta={\operatorname{e}}ta^*$ and $\beta=\beta^*$, we see that indeed $\alpha_{\lambda_1,\lambda_l,\nu_1,\nu_n; \mu_1,\mu_m,\sigma_1,\sigma_s} = \alpha^{*}_{\lambda_1,\lambda_l,\nu_1,\nu_n; \mu_1,\mu_m,\sigma_1,\sigma_s} $, therefore, $\hat{P_A}\wedge \hat{P_B}$ is Hermitian. {\operatorname{e}}nd{proof} \begin{lem} If $\hat{C}$ is a local operator on $A$ and $\hat{D}$ a local operator on $B$, then ${\operatorname{Tr\,}}(\hat{C}\wedge D)={\operatorname{Tr\,}}(\hat{C}) {\operatorname{Tr\,}}(D)$. \label{lema:trprod} {\operatorname{e}}nd{lem} \begin{proof} Consider the $\hat{C}$ and $\hat{D}$ as \begin{gather} \hat{C}=\sum_{\vec{\lambda}, \vec{\mu}} {\operatorname{e}}ta_{\vec{\lambda}, \vec{\mu}} f_{\lambda_1}^{\dagger}\ldots f_{\lambda_l}^{\dagger} \proj{\Omega} f_{\mu_m}\ldots f_{\mu_1}, \quad \hat{D}=\sum_{\vec{\nu}, \vec{\sigma}} \beta_{\vec{\nu}, \vec{\sigma}} f_{\nu_1}^{\dagger}\ldots f_{\nu_n}^{\dagger} \proj{\Omega} f_{\sigma_s}\ldots f_{\sigma_1}. {\operatorname{e}}nd{gather} Then it can be seen that by the imposition of a specific order, as seen in the proof of Lemma \ref{lema:scprod}: \begin{gather} \hat{C}\wedge \hat{D}= \sum_{\vec{\lambda}, \vec{\mu},\vec{\nu}, \vec{\sigma}} {\operatorname{e}}ta_{\vec{\lambda}, \vec{\mu}} \beta_{\vec{\nu}, \vec{\sigma}} f_{\lambda_1}^{\dagger}\ldots f_{\lambda_l}^{\dagger} f_{\nu_1}^{\dagger}\ldots f_{\nu_n}^{\dagger} \proj{\Omega} f_{\sigma_s}\ldots f_{\sigma_1} f_{\mu_m}\ldots f_{\mu_1}, \\ {\operatorname{Tr\,}}(\hat{C}\wedge \hat{D})= \sum_{\vec{\lambda}, \vec{\nu} } {\operatorname{e}}ta_{\vec{\lambda}, \vec{\lambda}} \beta_{\vec{\nu}, \vec{\nu}}=\sum_{\vec{\lambda}} {\operatorname{e}}ta_{\vec{\lambda},\vec{\lambda}} \cdot \sum_{\vec{\nu}} \beta_{\vec{\nu}, \vec{\nu}} ={\operatorname{Tr\,}}(\hat{C}) \cdot {\operatorname{Tr\,}}(\hat{D}). 
\end{gather} \end{proof} \begin{lem} For a fermionic space $\mathcal{H}$ of $N$ modes labeled by $x_1,\ldots, x_{N}$, the projector $\proj{\Omega}$ in terms of the creation and annihilation operators is \begin{equation} \proj{\Omega}=f_{x_{N}}\ldots f_{x_1}f^{\dagger}_{x_1}\ldots f^{\dagger}_{x_{N}} \end{equation} \label{lema:0} \end{lem} \begin{proof} The proof follows directly from the action of the creation and annihilation operators. \end{proof} This lemma shows that any object in the operator space of the Hilbert space $\mathcal{H}$ is a linear combination of products of creation and annihilation operators. Moreover, the following Corollary can be derived: \begin{corollary} If $\hat{O}_A$ is a fermionic local operator on the subspace spanned by the set of modes $A=\{a_1, \dots, a_M\}$, then $\hat{O}_A$ can be written as a sum of products of creation and annihilation operators of the modes in $A$ alone. \end{corollary} \begin{prop*} \textbf{\ref{Prop:basis1})} For an $N$-mode fermion system, the set $\mathcal{B}=\{\ket{\Omega},\ket{1},\ket{2},\ket{1}\wedge \ket{2},\ket{3},\ket{1}\wedge\ket{3}, \ket{2}\wedge \ket{3}, \ket{1}\wedge\ket{2}\wedge\ket{3},\ldots, \ket{1}\wedge\ket{2}\wedge \ldots \wedge \ket{N} \}$ is an orthonormal basis of the Hilbert space $\mathcal{H}$, with dimension $2^N$. \end{prop*} \begin{proof} We know that $\mathcal{H}=\bigoplus_{k=0}^{+\infty} \mathcal{H}_{k-p}$. For an $N$-mode fermionic system, $\mathcal{H}_{k-p}=\{0\}$ for all $k>N$. Indeed, these spaces are spanned by the elements $f_{i_1}^{\dagger} \dots f_{i_k}^{\dagger}\ket{\Omega}$, and for $k>N$ there must exist at least one repeated index $i_{l}=i_m$ with $1\leq l < m\leq k$; anti-commuting these terms yields $f_{i_1}^{\dagger} \dots f_{i_k}^{\dagger}\ket{\Omega}=(-1)^{l-m+1}f_{i_1}^{\dagger} \dots f_{i_l}^{\dagger}f_{i_m}^{\dagger} \dots f_{i_k}^{\dagger}\ket{\Omega}= (-1)^{l-m+1}f_{i_1}^{\dagger} \dots f_{i_l}^{\dagger}f_{i_l}^{\dagger} \dots f_{i_k}^{\dagger}\ket{\Omega}=0$, where we used $f_i ^2=0=(f_i^{\dagger})^2$. Hence $\mathcal{H}$ is spanned by the elements $f_{i_1}^{\dagger}\ldots f_{i_k}^{\dagger}\ket{\Omega}$ with $k=0,\ldots,N$. Since creation operators can be anti-commuted, we may assume without loss of generality that $i_1<i_2<\ldots<i_k$: a non-ordered product can always be anti-commuted into an ordered one, and the inequalities can be taken strict because a repeated index makes the contribution vanish. A simple counting argument shows that the number of such elements spanning $\mathcal{H}_{k-p}$ under these restrictions is $N\choose{k}$. Since the basis of $\mathcal{H}$ is the union of all these generators (the sum being direct), the dimension of $\mathcal{H}$ is $\sum_{k=0}^{N} {N\choose k} =2^N$ by the binomial theorem. Therefore, if we find $2^N$ orthonormal states of $\mathcal{H}$, we have found a Hilbert basis of the space. To see that $\mathcal{B}$ is a basis, observe that it can be constructed by starting with $\mathcal{B}_1=\{\ket{\Omega},\ket{1}\}$ and then, for each element, either leaving it invariant or applying the next creation operator, in this case $f^{\dagger}_2$, to obtain $\mathcal{B}_2=\{\ket{\Omega},\ket{1},\ket{2},\ket{2}\wedge\ket{1}\}$.
In general, the array $\mathcal{B}_{n+1}$ is obtained by concatenating $\mathcal{B}_{n}$ and $f^{\dagger}_{n+1}\mathcal{B}_{n}$. Once we arrive at $\mathcal{B}_N$ we have obtained $2^N$ elements of the space $\mathcal{H}$. By construction they are linearly independent, because each one contains a non-repeating combination of creation operators. For $\mathcal{B}_N$ to be exactly the $\mathcal{B}$ of the proposition, one only needs to reorder the creation operators inside each element; such a reordering contributes at most a $-1$ sign, which is irrelevant for a basis. This proves that $\mathcal{B}$ is a basis of $\mathcal{H}$. That it is moreover an orthonormal (Hilbert) basis follows directly from Lemma \ref{lema:scprod}. \end{proof} \section{Definition of partial tracing} \label{sec:ptappendix} In this Appendix \ref{sec:ptappendix} the Proposition \ref{prop:partialtrace} is proven through several Lemmas that lead to the complete proof. The proof is given for finite systems, although the result can presumably be generalized to the countably infinite case. A convenient property of the partial tracing procedure is that it can easily be implemented in matrix representations. In \cite{code} there is an implementation of the partial tracing procedure in matrix representation, where a Mathematica function is programmed to take the fermionic partial trace. In the following lines, we deduce the partial tracing procedure for fermionic systems under the SSR. We observe that the obtained procedure is the same as in \cite{Friis16}, where the SSR condition was not part of the partial tracing definition. \begin{lem} \label{lema:tracezero} Let $C\in \Gamma_S$ be a Hermitian operator acting on $\mathcal{H}_S$ such that \begin{gather} {\operatorname{Tr\,}}(O \cdot C)=0 ,\ \ \forall O\in \mathcal{O}_S, \end{gather} where $\mathcal{O}_S$ is the set of all Hermitian operators of $\Gamma_S$. Then $C$ is the null operator, $C=0$. \end{lem} \begin{proof} Since the trace is invariant under unitary transformations and $C$ is a Hermitian matrix, there exists a unitary matrix $U$ such that $D=U\cdot C\cdot U^{\dagger}$ is a diagonal matrix with real entries. Since $C$ is block diagonal, the unitary $U$ can also be chosen to be block diagonal. With these choices the condition can be rewritten as \begin{gather} {\operatorname{Tr\,}}(U\cdot O \cdot U^{\dagger}\cdot U\cdot C\cdot U^{\dagger})={\operatorname{Tr\,}}(U\cdot O \cdot U^{\dagger}\cdot D)=0 \qquad \forall O \in \mathcal{O}_S. \end{gather} Since $D$ is diagonal, $[D]_{i,j}=\lambda_i \delta_{i j}$ with $\lambda_i \in \mathbb{R}$.\\ The matrix $M=U\cdot O \cdot U^{\dagger}$ is also block diagonal. One can choose a family of matrices $M_i$, $i=1,\ldots, 2^{N}$, of the form $[M_i]_{j,k}=\delta_{i k}\delta_{j i}$. Each matrix $M_i$ is a real-valued diagonal matrix, hence Hermitian and block diagonal. Therefore, setting $O=U^{\dagger}\cdot M_i\cdot U$, which is the unitary transformation (by $U^{\dagger}$) of a Hermitian block-diagonal matrix, yields a Hermitian block-diagonal operator, so $O\in \mathcal{O}_S$ as desired. Thus for each such $M_i$ there exists a Hermitian block-diagonal $O$ to which the hypothesis applies, and therefore the string of conditions
\begin{equation} {\operatorname{Tr\,}}(M_i\cdot D)=0 \qquad \forall i\in\{1,\ldots, 2^{N}\} \end{equation} must hold. Since \begin{equation} [M_i \cdot D]_{j ,k} =\delta_{j i}\delta_{i k}\lambda_i, \end{equation} these conditions become \begin{equation} {\operatorname{Tr\,}}(M_i\cdot D)=\lambda_i=0 \qquad \forall i\in\{1,\ldots, 2^{N}\}, \end{equation} so $D=0$, and since $C=U^{\dagger}\cdot D\cdot U$, also $C=0$. \end{proof} \begin{lem} \label{lema:nth} If the partial trace over the $N^{th}$ mode is well defined and unique, then the partial trace over an arbitrary mode is well defined and unique. \end{lem} \begin{proof} Let $A$ be an operator of the fermionic space of $N$ modes, labelled $\{\ket{1},\ldots,\ket{N}\}$. Then $A$ can be decomposed as $A=\sum_{\vec{r}, \vec{s}} \alpha_{\vec{r} \vec{s}}\, \ket{1}^{r_1}\wedge \ldots \wedge \ket{N}^{r_N}\,\bra{1}^{s_1}\wedge \ldots \wedge \bra{N}^{s_N} $. If mode $j$ is to be traced out, the operator $A$ can be rewritten, by moving the mode-$j$ factors to the last position, as $A=\sum_{\vec{r}, \vec{s}} (-1)^{\delta}\alpha_{\vec{r} \vec{s}}\, \ket{1}^{r_1}\wedge \ldots \wedge \ket{N}^{r_N}\wedge\ket{j}^{r_j}\, \bra{1}^{s_1}\wedge \ldots \wedge\bra{N}^{s_N}\wedge \bra{j}^{s_j}$, where the mode-$j$ factors are omitted from the ordered products and $\delta$ is an integer that depends on $s_j,r_j,\vec{s},\vec{r}$. Since the partial trace over the last mode is well defined and unique, and mode $j$ is now the last one, the partial trace over mode $j$ is well defined and unique. As $j$ is arbitrary, the partial trace over any mode is well defined and unique. \end{proof} \begin{lem} \label{lema:ordering} If the partial tracing of one mode is well defined and unique, then partial tracing over an ordered set of modes $k_1,\ldots,k_l$ is well defined and unique, and \begin{gather} {\operatorname{Tr\,}}_{k_1,\ldots,k_l}={\operatorname{Tr\,}}_{k_1}\circ \ldots\circ {\operatorname{Tr\,}}_{k_l}. \end{gather} \end{lem} \begin{proof} Let $\Gamma$ be the global operator space for $N>l$ modes, denote by $\mathcal{H}_C$ the fermionic Hilbert space of the modes not in $\{k_1,\ldots,k_l\}$, and by $\mathcal{H}_{k_1,\ldots,k_m}$ the Hilbert space of those modes together with $k_1,\ldots,k_m$ (so that $\mathcal{H}_{k_1,\ldots,k_l}$ is the Hilbert space of all $N$ modes). To prove the equality of the two operations we show that they act in the same way on an arbitrary state $\rho$: it suffices to see that ${\operatorname{Tr\,}}_{k_1,\ldots,k_l}(\rho)={\operatorname{Tr\,}}_{k_1}\circ \ldots\circ {\operatorname{Tr\,}}_{k_l}(\rho)$. Choosing the ordering of the modes so that $k_1,\ldots,k_l$ are the last ones, the operators in $\Gamma_S$ that are local on $\mathcal{H}_C$ can be written as $O_C\wedge \mathbb{I}\wedge \ldots \wedge\mathbb{I}$. From the definition of the partial trace it is known that \begin{gather} {\operatorname{Tr\,}}( (O_C\wedge \mathbb{I}\wedge \ldots \wedge\mathbb{I}) \rho)={\operatorname{Tr\,}}(O_C {\operatorname{Tr\,}}_{k_1,\ldots,k_l}(\rho)) \qquad \forall O_C\in \mathcal{O}^C_S \end{gather} where $\mathcal{O}^C_S$ is the set of Hermitian local operators in $\Gamma^C_S$.
Now, in the other case, since ${\operatorname{Tr\,}}_{k_m}( \ldots({\operatorname{Tr\,}}_{k_l}(\rho))\ldots)\in \mathcal{H}_{k_1\ldots,k_{m-1}}$ is clear that since $\mathcal{O}_{C,k_1,\ldots, k_n}\subset \mathcal{O}_{C,k_1,\ldots, k_m} $ if $n\leq m$, for the definition of the partial trace operation: \begin{align} {\operatorname{Tr\,}}(O_C {\operatorname{Tr\,}}_{k_1}( \ldots({\operatorname{Tr\,}}_{k_l}(\rho))\ldots))&={\operatorname{Tr\,}}((O_C \wedge \mathbb{I}) {\operatorname{Tr\,}}_{k_2}( \ldots({\operatorname{Tr\,}}_{k_l}(\rho))\ldots)) \nonumber \\ &={\operatorname{Tr\,}}((O_C \wedge \mathbb{I} \wedge \mathbb{I}) {\operatorname{Tr\,}}_{k_3}( \ldots({\operatorname{Tr\,}}_{k_l}(\rho))\ldots))\nonumber \\ & \ \ \vdots \nonumber \\ &= {\operatorname{Tr\,}}((O_C \wedge \mathbb{I}\wedge \ldots \wedge \mathbb{I}) {\operatorname{Tr\,}}_{k_l}(\rho)) \nonumber \\ &={\operatorname{Tr\,}}((O_C \wedge \mathbb{I}\wedge \ldots \wedge \mathbb{I} ) \rho) \quad \forall O_C \in \mathcal{O}^C_S {\operatorname{e}}nd{align} And therefore it is found that \begin{equation} \label{eq:ordering} {\operatorname{Tr\,}}\left(O_C \left[{\operatorname{Tr\,}}_{k_1}( \ldots({\operatorname{Tr\,}}_{k_l}(\rho))\ldots)-{\operatorname{Tr\,}}_{k_1,\ldots,k_l}(\rho)\right]\right)=0 \qquad \forall O_C \in \mathcal{O}^C_S {\operatorname{e}}nd{equation} And for what it has been shown in the Lemma \ref{lema:tracezero}, And since $\rho$ has been arbitrary, this condition is sufficient to say that ${\operatorname{Tr\,}}_{k_1}\circ \ldots \circ {\operatorname{Tr\,}}_{k_l} = {\operatorname{Tr\,}}_{k_1,\ldots,k_l}$. {\operatorname{e}}nd{proof} Is important to remark that the uniqueness of the partial tracing procedure does not rely on the concrete ordering seen in the Lemma \ref{lema:ordering} since by the definition of partial tracing and its uniqueness is straightforward to prove that ${\operatorname{Tr\,}}_{A} \circ {\operatorname{Tr\,}}_{B}={\operatorname{Tr\,}}_{B} \circ {\operatorname{Tr\,}}_{A}$. Nevertheless, for procedural purposes is usually easier to trace out the 'largest' mode first. \begin{prop*}\textbf{\ref{prop:partialtrace}}) For a density operator $\rho \in \mathcal{R}^N_S$, of an $N$-mode fermionic system, the partial tracing over the set of modes $M=\{m_1,\ldots,m_{|M|}\}\subset\{1,\ldots,N\}$ must result in a reduced density operator $\sigma \in \mathcal{R}^{N\setminus M}_S$, given by \begin{align} \sigma={\operatorname{Tr\,}}_{M}(\rho)={\operatorname{Tr\,}}_{m_1}\circ {\operatorname{Tr\,}}_{m_2} \circ \ldots \circ {\operatorname{Tr\,}}_{m_{|M|}}(\rho), {\operatorname{e}}nd{align} There is a unique partial tracing procedure that satisfies the physically imposed consistency conditions. The operation of partial tracing one mode $m_i$, of an element of $\rho$, is given then by: \begin{align} {\operatorname{Tr\,}}_{m_i} & \left(\left(f_1^{\dagger}\right)^{s_1} \ldots \left(f_{m_i}^{\dagger}\right)^{s_{m_i}}\ldots \left(f_N^{\dagger}\right)^{s_N}\proj{\Omega}f_N^{r_N}\ldots f_{m_i}^{r_{m_i}}\ldots f_1^{r_1} \right) = \delta_{s_{m_i} r_{m_i}} (-1)^{k}\left(f_1^{\dagger}\right)^{s_1} \ldots \left(f_N^{\dagger}\right)^{s_N}\proj{\Omega}f_N^{r_N}\ldots f_1^{r_1}, {\operatorname{e}}nd{align} with $k=s_{m_i} \sum_{j=m_i}^{N-1} s_{j+1} +r_{m_i} \sum_{k=m_i}^{N-1} r_{k+1}$ and $s_i,r_j \in \{0,1 \}$. 
{\operatorname{e}}nd{prop*} \begin{proof} By the Lemmas \ref{lema:nth} \& \ref{lema:ordering} it is enough to see that there exists a well defined and unique way to trace out the $N$th mode and that it corresponds to the tracing procedure proposed in the statement of the proposition. \\ First we observe that to say that: \begin{gather} {\operatorname{Tr\,}}_{m_i}\left(\left(f_1^{\dagger}\right)^{s_1} \ldots \left(f_{m_i}^{\dagger}\right)^{s_{m_i}}\ldots \left(f_N^{\dagger}\right)^{s_N}\proj{\Omega}f_N^{r_N}\ldots f_{m_i}^{r_{m_i}}\ldots f_1^{r_1} \right) = \delta_{s_{m_i} r_{m_i}} (-1)^{k}\left(f_1^{\dagger}\right)^{s_1} \ldots \left(f_N^{\dagger}\right)^{s_N}\proj{\Omega}f_N^{r_N}\ldots f_1^{r_1} {\operatorname{e}}nd{gather} where $k=s_{m_i} \sum_{j=m_i}^{N-1} s_{j+1} +r_{m_i} \sum_{k=m_i}^{N-1} r_{k+1}$ and $s_i,r_j$ take the value $0$ or $1$. Is equivalent to say that you put the $m_i$ mode as the $N$th mode and then generate the basis $\mathcal{B}$, where the matrix representation of $A$ will be $\left.A\right|_{\mathcal{B}}=\begin{pmatrix} a && b \\ c && d {\operatorname{e}}nd{pmatrix}$ then $\left.{\operatorname{Tr\,}}_{m_i} A\right|_{\mathcal{B}}=a+d$.\\ This form of expressing the partial trace is useful to prove that indeed is a well defined partial tracing procedure.\\ First, since the partial tracing is a linear operation and due to the correspondence of the matrix representation of the operators with the operator space; if the partial trace can be defined and seen to be unique in terms of a matrix representation for a concrete basis, then by the correspondence of matrix representations with the space itself, the operation will be well defined and unique in the linear space. \\ As a first step, we check that the found procedure satisfies the properties of a partial trace. Suppose a hermitian local operator $O_A\in \Gamma_S^A$ on the first $N-1$ modes of the system. We can easily see that $\tilde{O}_A\otimes \mathbb{I}_2$ is the matrix representation of $O_A$ on the basis $\mathcal{B}$. Where $ \tilde{O}_A$ is the matrix representation of the operator $O_A$ in the basis $\mathcal{B}$ restricted to the space of the $N-1$ first modes. Now, it is wanted to check that defining that $\tilde{\rho}_A=a+c$ (in the decomposition seen above) it follows the conditions of partial tracing: \begin{equation} {\operatorname{Tr\,}}(O_A\cdot\rho)={\operatorname{Tr\,}}(O_A\cdot \rho_A) \quad \forall O_A\in \mathcal{O}^A_S {\operatorname{e}}nd{equation} (where $\mathcal{O}^A_S$ is the set of all local hermitian operators in $\Gamma_S^A$. So let's check it: \begin{gather} {\operatorname{Tr\,}}(O_A \cdot \rho)={\operatorname{Tr\,}}((\tilde{O}_A \otimes \mathbb{I}_2 )\left.\rho\right|_{\mathcal{B}})={\operatorname{Tr\,}}\left(\left(\begin{matrix} \tilde{O}_A && 0 \\ 0 && \tilde{O}_A {\operatorname{e}}nd{matrix}\right) \left(\begin{matrix} a && b \\ b^* && c {\operatorname{e}}nd{matrix}\right)\right)={\operatorname{Tr\,}}\left(\begin{matrix} \tilde{O}_A \cdot a && \tilde{O}_A \cdot b\\ \tilde{O}_A\cdot b^* && \tilde{O}_A \cdot c {\operatorname{e}}nd{matrix}\right) = {\operatorname{Tr\,}}(\tilde{O}_A \cdot a )+{\operatorname{Tr\,}}(\tilde{O}_A \cdot c)=\nonumber\\={\operatorname{Tr\,}}(\tilde{O}_A \cdot a +\tilde{O}_A \cdot c )= {\operatorname{Tr\,}}(\tilde{O}_A \cdot (a + c) )={\operatorname{Tr\,}}(\tilde{O}_A \cdot \tilde{\rho}_A )= {\operatorname{e}}nd{gather} and since $O_A$ is arbitrary, it is proven by all $O_A\in \mathcal{O}^A_S$. 
\\ We have thus seen that a partial trace operation can be defined. The second step is to see that this definition is unique. \\ Assume that there exists another consistent definition of a partial trace, giving rise to $\rho'_A$ as the reduced state. We prove that $\tilde{\rho'}_A-a-c=0$. Since the consistency conditions hold for both notions, the following relations are satisfied: \begin{gather} {\operatorname{Tr\,}}(\tilde{O}_A \cdot (a+c))={\operatorname{Tr\,}}(\tilde{O}_A\otimes \mathbb{I}_2\cdot \tilde{\rho})={\operatorname{Tr\,}}(\tilde{O}_A \cdot \tilde{\rho'}_A) \quad \forall O_A \in \mathcal{O}^A_S \qquad \Longrightarrow \qquad {\operatorname{Tr\,}}(\tilde{O}_A \cdot( \tilde{\rho'}_A-a-c))=0 \qquad \forall O_A \in \mathcal{O}_S^A. \end{gather} Since $\tilde{\rho'}_A$ and $a+c$ are matrix representations of reduced states, and therefore of states, both are Hermitian and block-diagonal; hence the matrix $C=\tilde{\rho'}_A-a-c$ is Hermitian and block-diagonal. The condition obtained is then the one of Lemma \ref{lema:tracezero}, so $C=0$ and $\tilde{\rho'}_A=a+c$. This proves the uniqueness of the procedure, and hence the proposition. \end{proof} As we comment in the main text, one could worry that the introduction of the SSR compromises the uniqueness of the partial tracing procedure. However, we have seen that the partial tracing procedure loses neither its uniqueness nor the form used in \cite{Friis16} for general fermionic state contributions. Imposing the SSR on the states is compensated by imposing it on the observables. \section{Incompatibility between Jordan-Wigner transformation and partial tracing operation} \label{sec:incopatibility} Why is such a development important, given that we could use the Jordan-Wigner transformation to map fermionic states of $N$ modes to $N$-qubit states? As shown in \cite{Friis16}, under the parity SSR the Jordan-Wigner transformation does not correctly describe the partial tracing operation. In other words, for any concrete Jordan-Wigner transformation $JW: \mathcal{H}_S^N\to (\mathbb{C}^2)^{\otimes N}$, we can find SSR fermionic states $\rho_{AB}$ such that ${\operatorname{Tr\,}}_B(JW \cdot \rho_{AB} \cdot JW^{\dagger}) \neq JW\cdot {\operatorname{Tr\,}}_B(\rho_{AB}) \cdot JW^{\dagger}$. Thus, when proceeding to work with $\rho_A$, it is unclear which representation to take in this scenario. Even so, one might argue that this is merely a defect of the representation, and that the inconsistency is minor and plays no role in the calculation of relevant physical quantities. In the following lines, we illustrate that this is not the case for the canonical Jordan-Wigner transformation: there are SSR states for which the two procedures assign different Von Neumann entropies to the associated $\rho_A$. The Von Neumann entropy is a measure of information in the usual sense for any state in $\mathcal{R}_{S}$, thanks to the diagonalization allowed by Proposition \ref{prop:observables}: any state $\rho\in \mathcal{R}_{S}$ can be diagonalized, $\rho=\sum_i p_i \proj{\psi_i}$, with the eigenvalues being the probabilities $\{p_i\}$ and the eigenvectors being pure SSR states. Thus the Von Neumann entropy can be computed as \begin{align} S(\rho)=-{\operatorname{Tr\,}} \rho \log \rho =-\sum_i p_i \log p_i, \end{align} where we use base $2$ logarithms, $\log 2=1$.
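Before presenting the explicit example, we give a minimal numerical sketch (our own illustration, not the implementation of \cite{code}; the occupation-bitstring labelling of the basis and the helper names are choices made here) of the single-mode tracing rule of Proposition \ref{prop:partialtrace} together with the entropy defined above.
\begin{verbatim}
import numpy as np
from itertools import product

def occ_states(N):
    """All occupation bitstrings (s_1,...,s_N) for N modes."""
    return list(product((0, 1), repeat=N))

def ptrace_mode(rho, N, m):
    """Fermionic partial trace over mode m (1-based), following the rule of
    the proposition above: an element survives only if s_m = r_m, and picks
    up (-1)^k with k = s_m*sum_{j>m} s_j + r_m*sum_{j>m} r_j."""
    states_in, states_out = occ_states(N), occ_states(N - 1)
    idx_out = {s: i for i, s in enumerate(states_out)}
    out = np.zeros((2 ** (N - 1), 2 ** (N - 1)), dtype=complex)
    for a, s in enumerate(states_in):
        for b, r in enumerate(states_in):
            if s[m - 1] != r[m - 1]:
                continue
            k = s[m - 1] * sum(s[m:]) + r[m - 1] * sum(r[m:])
            s_red, r_red = s[:m - 1] + s[m:], r[:m - 1] + r[m:]
            out[idx_out[s_red], idx_out[r_red]] += (-1) ** k * rho[a, b]
    return out

def von_neumann_entropy(rho):
    """S(rho) = -sum_i p_i log2 p_i over the nonzero eigenvalues."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

# Toy check on two modes: the SSR state (|Omega><Omega| + |12><12|)/2
# reduces, on either mode, to the maximally mixed single-mode state.
states = occ_states(2)
rho = np.zeros((4, 4))
rho[states.index((0, 0)), states.index((0, 0))] = 0.5
rho[states.index((1, 1)), states.index((1, 1))] = 0.5
rho_1 = ptrace_mode(rho, 2, 2)        # trace out mode 2
print(rho_1)                          # diag(0.5, 0.5)
print(von_neumann_entropy(rho_1))     # 1.0
\end{verbatim}
Iterating \texttt{ptrace\_mode} over the modes to be removed (largest mode first, as remarked after Lemma \ref{lema:ordering}) gives the multi-mode fermionic partial trace used in the first of the two procedures compared below.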
Given the space of $4$ fermionic modes with canonical base $\mathcal{B}$ where and the canonical Jordan-Wigner transformation given by the assignment $f_i\to -(\bigotimes_{k=1}^{i-1} \hat{\sigma}_z )\otimes \ketbra{0}{1} \otimes (\bigotimes_{j=i+1}^4 \mathbb{I})$. this assignment is such that it provides the assignments $\ket{\Omega} \to \ket{0000}, \ket{1} \to -\ket{1000}, \ket{2}\to -\ket{0100}, \ket{1}\wedge\ket{2} \to \ket{1100}, \dots $. Notice that the odd states pick a global $-1$ phase. We notice that for a SSR state, in the density matrix formalism these $-1$ phases disappear. Thus, by choosing the canonical basis of the qubit space ${\mathbb{C}^2}^{\otimes 4}$ the canonical Jordan-Wigner transformation corresponds to the identity mapping of the representation of the density matrices in the corresponding canonical basis. With that mapping, and choosing the mode bipartition $14|23$ we can see that the fermionic SSR density operator $\rho_{1234}$ that is given by \begin{gather} \rho_{1234}=\frac{1}{16} \left[\mathbb{I} + \ket{\Omega}(\bra{1}\wedge\bra{4}) + \ket{1}\bra{4} + \ket{2}(\bra{1}\wedge\bra{2}\wedge\bra{4}) + \right. \nonumber \\ + (\ket{1}\wedge \ket{2})(\bra{2}\wedge\bra{4}) + \ket{3}(\bra{1}\wedge\bra{3} \wedge \bra{4}) +\nonumber \\+ (\ket{1}\wedge \ket{3})(\bra{3}\wedge\bra{4}) + (\ket{2}\wedge \ket{3})(\bra{1}\wedge\bra{2}\wedge\bra{3}\wedge\bra{4}) +\nonumber \\ \left. +(\ket{1}\wedge \ket{2}\wedge\ket{3})(\bra{2}\wedge\bra{3}\wedge\bra{4}) + h.c. \right] {\operatorname{e}}nd{gather} yields the following inconsistent results. The representation on the canonical computation basis, eigenvalues and Von Neumann information for the operator obtained by taking the fermionic partial trace to obtain $\rho_{12}$ and then applying the Jordan-Wigner transformation are the following \begin{gather} \left.JW\cdot \rho_{12}\cdot JW \right|_{\text{comp},\text{comp}} = \frac{1}{4} \begin{pmatrix} 1 & 0& 0& 0\\ 0 &1 &0 &0\\ 0 &0 &1 &0 \\ 0& 0 &0& 1 {\operatorname{e}}nd{pmatrix} \nonumber \\ \lambda_i\in \left\{\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{1}{4} \right\} \quad S(\rho_{12})= 2 {\operatorname{e}}nd{gather} while on the other hand, if we first take the Jordan-Wigner transformation and then apply the qubit partial tracing procedure the results become: \begin{gather} \left.{\operatorname{Tr\,}}_{23}(JW\cdot\rho_{1234}\cdot JW)\right|_{\text{comp},\text{comp}} = \frac{1}{4} \begin{pmatrix} 1 & 0& 0& 1\\ 0 &1 &1 &0\\ 0 &1 &1 &0 \\ 1& 0 &0& 1 {\operatorname{e}}nd{pmatrix} \nonumber \\ \lambda_i\in \left\{\frac{1}{2},\frac{1}{2},0,0 \right\} \quad S(\rho_{12})= 1 {\operatorname{e}}nd{gather} Therefore, showing the complete inconsistency of this use of the Jordan-Wigner function. Due to the excellent properties of the canonical Jordan-Wigner function on states under the SSR, the correct result using purely fermionic based representations is the same as the one yielded in the first procedure of the two. \section{Proofs of the structures that arise by imposing the parity super-selection rule} \label{sec:proofs} As shown in the main text, we invoke some restrictions on the fermionic space - the parity super-selection rule (SSR). In the following lines, we prove the found properties analogous to the tensor product space case described in the main text. 
\begin{lem} \label{lema:ssrmixed} The matrix representation of $\rho$, in the basis $\mathcal{B}'$, takes the form \begin{align} \rho= \rho_e \oplus \rho_o =\begin{pmatrix} \rho_e && 0 \\ 0 && \rho_o {\operatorname{e}}nd{pmatrix}, {\operatorname{e}}nd{align} where $\rho_e$ ($\rho_o$) are $2^{N-1}\times 2^{N-1}$ complex valued matrices. {\operatorname{e}}nd{lem} \begin{proof} Since $\rho$ is by definition an ensemble of pure fermionic SSR states then its expression in equation \ref{eq:rho} can be broken as \begin{gather} \rho=\sum_i p_i \sum_{\vec{s^i},\vec{r^i}; R^i,S^i\in even} \alpha_{\vec{s^i}} \alpha^*_{\vec{r^i}} \left(f^{\dag}_1\right)^{s^i_1} \ldots (f^{\dag}_N)^{s^i_N} \proj{\Omega} f_N^{r^i_N} \ldots f_1^{r^i_1} +\sum_i p_i\sum_{\vec{s^i},\vec{r^i}; R^i,S^i\in odd} \alpha_{\vec{s^i}} \alpha^*_{\vec{r^i}} \left(f^{\dagger}_1\right)^{s^i_1} \ldots \left(f^{\dagger}_N\right)^{s^i_N} \proj{\Omega} f_N^{r^i_N} \ldots f_1^{r^i_1} \nonumber \\ =\rho_E \oplus \rho_O {\operatorname{e}}nd{gather} and since the basis $\mathcal{B}'$ is a rearrangement of the basis $\mathcal{B}$ (that is constituted by the states $\left(f_1^\dagger\right)^{s_1}\ldots \left(f_N^\dagger\right)^{s_N}\ket{\Omega}$), where the $2^{N-1}$ even and $2^{N-1}$ odd components are splitted; we see that the $\rho_E$ and $\rho_O$ components will clearly have a matrix representation in the basis $\mathcal{B}'$ as \begin{align} \left.\rho_E\right|_{\mathcal{B}'}=\begin{pmatrix} E_{\rho} && 0 \\ 0 && 0 {\operatorname{e}}nd{pmatrix} \qquad \qquad\left.\rho_O\right|_{\mathcal{B}'}=\begin{pmatrix} 0 && 0 \\ 0 && O_{\rho} {\operatorname{e}}nd{pmatrix} {\operatorname{e}}nd{align} The sum of matrix representations is the matrix representation of the sum, therefore the result follows. {\operatorname{e}}nd{proof} \begin{prop} \label{prop:ssrmixed} $\rho$ is a SSR density operator iff $\rho\in \Gamma_S$ is a positively semi-defined Hermitian operator with trace one. {\operatorname{e}}nd{prop} \begin{proof} $\Rightarrow$: since $\rho=\sum_i p_i \proj{\psi_i}$ with $\ket{\psi_i}$ being SSR states and $0\leq p_i \leq 1$ $\sum_i p_i =1$, we have by Lemma \ref{lema:ssrmixed} directly that $\rho \in \Gamma_S$. Of this form is also directly deduced that $\rho$ is Hermitian and that ${\operatorname{Tr\,}}(\rho)=1$. To see positivity we only need to check that $\bra{\varphi} \rho \ket{\varphi}=\sum_i p_i |\scp{\varphi}{\psi_i}|^2 \geq 0$ for all $\ket{\varphi}$. \\ $\Leftarrow$: We use the matrix representation of a positively semi-definite Hermitian operator of $\Gamma_S$ with trace 1 in the basis $\mathcal{B}'$. Such operator $\rho$ will have the following matrix representation: \begin{gather} \left. \rho \right|_{\mathcal{B}'}= \begin{pmatrix} \rho_e & 0 \\ 0 & \rho_o {\operatorname{e}}nd{pmatrix} {\operatorname{e}}nd{gather} The fact that $\rho$ is Hermitian implies that $\rho_e$ and $\rho_o$ have to be Hermitian matrices. This implies that they can be diagonalised. This means that there exist different changes of basis within the subspaces spanned by $\mathcal{B}_e$ and $\mathcal{B}_o$ such that without changing the block-parity structure exist basis $\tilde{\mathcal{B}}_e$ and $\tilde{\mathcal{B}}_o$ such that in the total basis $\tilde{\mathcal{B}}'=\tilde{\mathcal{B}}_e \cup \tilde{\mathcal{B}}_e$ the matrix representation of $\rho$ would be \begin{gather} \left. 
\rho \right|_{\tilde{\mathcal{B}}'} = \begin{pmatrix} \lambda_1 & & & 0 & & \\ & {\operatorname{d}}ots & & & {\operatorname{d}}ots & \\ & & \lambda_{2^{N-1}} & & & 0 \\ 0 & & & \mu_1 & & \\ & {\operatorname{d}}ots & & & {\operatorname{d}}ots & \\ & & 0 & & & \mu_{2^{N-1}} {\operatorname{e}}nd{pmatrix} {\operatorname{e}}nd{gather} Now, since $\rho$ is positively semi-definite this implies that necessarily $\mu_i,\lambda_j \geq 0$ for all $i,j$. Thus, this means that $\rho$ can be written as $\rho=\sum_i \lambda_i \proj{\psi_i} + \sum_j \mu_j \proj{\varphi_j}$ where $\ket{\psi_i}$ are even states that conform an orthonormal basis of the even subspace, and equally for $\ket{\varphi_j}$ in the odd case. This means that the union of the two conforms an orthonormal basis $\{\ket{\tilde{\psi}_i}\}_{i=1}^{2^N}$ where the first $2^{N-1}$ elements are evens states and the last are odd states. Thus by choosing $p_i=\lambda_i$ and $p_{2^{N-1}+i}=\mu_i$ for $i=1,\dots, 2^{N-1}$, we obtain that $\rho=\sum_i p_i \proj{\tilde{\psi}_i}$. From the definition of $p_i$ and the positive semi-definitness of $\rho$ it follows that $p_i\geq 0$. Since ${\operatorname{Tr\,}}(\rho)=1$ it follows that $\sum_i (\lambda_i + \mu_i)=1$ thus giving $\sum_i p_i=1$. Thus fulfilling the requirements of being a density operator. {\operatorname{e}}nd{proof} We remind the reader that we name the set of density operators that follow the SSR as $\mathcal{R}_S$. In the next lines, we prove a general statement of the block form that the fermionic linear operators have to preserve the SSR structure. We denote them by SSR linear operators. \begin{theorem*} \textbf{\ref{thm:blockform}. Linear operators}) If $\hat{O}$ is a linear operator such that $\hat{O}: \mathcal{H}^N \rightarrow \mathcal{H}^M $ such that for all $\hat{A}\in \Gamma_S^N$ and $\hat{B}\in \Gamma_S^M$, $\hat{O} \hat{A} \hat{O}^\dag \in \Gamma_S^M$ and $\hat{O}^\dagger \hat{B} \hat{O} \in \Gamma_S^N$ then the matrix representation of the operator $\hat{O}$ on the basis ${\mathcal{B}'}^N, {\mathcal{B}'}^M $ has to take one of these two diagonal and anti-diagonal forms: \begin{gather} \hat{O}|_{{\mathcal{B}'}^M,{\mathcal{B}'}^N}=\left(\begin{array}{c|c} O_{++} & \hat{0} \\ \hline \hat{0} & O_{--} {\operatorname{e}}nd{array}\right) \medspace \medspace \text{or} \medspace \medspace \hat{O}|_{{\mathcal{B}'}^M,{\mathcal{B}'}^N}=\left(\begin{array}{c|c} \hat{0} & O_{+-} \\ \hline O_{-+} & \hat{0} {\operatorname{e}}nd{array}\right) {\operatorname{e}}nd{gather} where $O_{++},O_{--},O_{-+} ,O_{+-} \in M_{2^{M-1}\times 2^{N-1}}(\mathbb{C})$ and $\hat{0}$ is the zero element of $M_{2^{M-1}\times 2^{N-1}}(\mathbb{C})$. 
Such operators are linear operators that preserve the SSR form of the operators in $\Gamma_S$ {\operatorname{e}}nd{theorem*} \begin{proof} To start, since $\hat{O}$ is a linear operator between the two spaces, it can be represented in the most general way in the basis $\mathcal{B}'^{N}$ and $\mathcal{B}'^{M}$ as: \begin{gather} \left.\hat{O}\right|_{\mathcal{B}'^{M}, \mathcal{B}'^{N}}=\begin{pmatrix} O_{++} && O_{+-} \\ O_{-+} && O_{--} {\operatorname{e}}nd{pmatrix} {\operatorname{e}}nd{gather} But since it is required that preserves the separation among SSR and non-SSR contributions we have that, by choosing the decomposition of $\hat{A}\in \Gamma_S^N$ and $\hat{B}\in \Gamma_S^M$ in the basis $\mathcal{B}'^N$ and $\mathcal{B}'^M$ respectively, that we know are given by: \begin{gather} \left.\hat{A}\right|_{\mathcal{B}'^{N}, \mathcal{B}'^{N}}=\begin{pmatrix} A_{++} && \hat{0} \\ \hat{0} && A_{--} {\operatorname{e}}nd{pmatrix} \qquad \qquad \left.\hat{B}\right|_{\mathcal{B}'^{M}, \mathcal{B}'^{M}}=\begin{pmatrix} B_{++} && \hat{0} \\ \hat{0} && B_{--} {\operatorname{e}}nd{pmatrix} {\operatorname{e}}nd{gather} Now, choosing the two spacial cases for each condition, by setting $A_{++}=\hat{0}$, $B_{++}=\hat{0}$ and in the other case setting $A_{--}=\hat{0}$, $B_{--}=\hat{0}$; we obtain that \begin{gather} \left.\hat{O}\hat{A}\hat{O}^\dagger\right|_{\mathcal{B}'^{M}, \mathcal{B}'^{M}}=\begin{pmatrix} O_{++} && O_{+-} \\ O_{-+} && O_{--} {\operatorname{e}}nd{pmatrix} \begin{pmatrix} \hat{0} && \hat{0} \\ \hat{0} && A_{--} {\operatorname{e}}nd{pmatrix} \begin{pmatrix} O^\dagger_{++} && O^\dagger_{-+} \\ O_{+-}^\dagger && O^\dagger_{--} {\operatorname{e}}nd{pmatrix}= \begin{pmatrix} O_{+-} A_{--} O^\dagger_{+-} && O_{+-} A_{--} O^\dagger_{--} \\ O_{--} A_{--} O^\dagger_{+-} && O_{--} A_{--} O^\dagger_{--} {\operatorname{e}}nd{pmatrix} \\ \left.\hat{O}\hat{A}\hat{O}^\dagger\right|_{\mathcal{B}'^{M}, \mathcal{B}'^{M}}=\begin{pmatrix} O_{++} && O_{+-} \\ O_{-+} && O_{--} {\operatorname{e}}nd{pmatrix} \begin{pmatrix} A_{++} && \hat{0} \\ \hat{0} && \hat{0} {\operatorname{e}}nd{pmatrix} \begin{pmatrix} O^\dagger_{++} && O^\dagger_{-+} \\ O_{+-}^\dagger && O^\dagger_{--} {\operatorname{e}}nd{pmatrix}= \begin{pmatrix} O_{++} A_{++} O^\dagger_{++} && O_{++} A_{++} O^\dagger_{-+} \\ O_{-+} A_{++} O^\dagger_{++} && O_{-+} A_{++} O^\dagger_{-+} {\operatorname{e}}nd{pmatrix} \\ \left.\hat{O}^\dagger \hat{B}\hat{O}\right|_{\mathcal{B}'^{N}, \mathcal{B}'^{N}}= \begin{pmatrix} O^\dagger_{++} && O^\dagger_{-+} \\ O_{+-}^\dagger && O^\dagger_{--} {\operatorname{e}}nd{pmatrix} \begin{pmatrix} \hat{0} && \hat{0} \\ \hat{0} && B_{--} {\operatorname{e}}nd{pmatrix} \begin{pmatrix} O_{++} && O_{+-} \\ O_{-+} && O_{--} {\operatorname{e}}nd{pmatrix}= \begin{pmatrix} O_{-+}^\dagger B_{--} O_{-+} && O_{-+}^\dagger B_{--} O_{--} \\ O^\dagger_{--} B_{--} O_{-+} && O^\dagger_{--} B_{--} O_{--} {\operatorname{e}}nd{pmatrix} \\ \left.\hat{O}^\dagger\hat{B}\hat{O}\right|_{\mathcal{B}'^{N}, \mathcal{B}'^{N}}= \begin{pmatrix} O^\dagger_{++} && O^\dagger_{-+} \\ O_{+-}^\dagger && O^\dagger_{--} {\operatorname{e}}nd{pmatrix} \begin{pmatrix} B_{++} && \hat{0} \\ \hat{0} && \hat{0} {\operatorname{e}}nd{pmatrix} \begin{pmatrix} O_{++} && O_{+-} \\ O_{-+} && O_{--} {\operatorname{e}}nd{pmatrix}= \begin{pmatrix} O^\dagger_{++} B_{++} O_{++} && O^\dagger_{++} B_{++} O_{+-} \\ O^\dagger_{+-} B_{++} O_{++} && O^\dagger_{+-} B_{++} O_{+-} {\operatorname{e}}nd{pmatrix} {\operatorname{e}}nd{gather} In order to preserve the SSR form for these 
two cases for each condition, we find 4 constraints for each condition: \begin{gather} O_{-+} A_{++} O^\dagger_{++}=O_{++} A_{++} O^\dagger_{-+}=O_{--} A_{--} O^\dagger_{+-}= O_{+-} A_{--} O^\dagger_{--}=\hat{0} \qquad \forall A_{--}, A_{++} \in M_{2^{N-1}}(\mathbb{C}) \\ O^\dagger_{+-} B_{++} O_{++}= O^\dagger_{++} B_{++} O_{+-}= O^\dagger_{--} B_{--} O_{-+}= O^\dagger_{-+} B_{--} O_{--}=\hat{0} \qquad \forall B_{--}, B_{++} \in M_{2^{M-1}}(\mathbb{C}) {\operatorname{e}}nd{gather} These restrictions can be reduced to two for each condition due to the $\dagger$ property. \begin{gather} O_{-+} A O^\dagger_{++} = O_{+-} A O^\dagger_{--}=\hat{0} \qquad \forall A \in M_{2^{N-1}}(\mathbb{C}) \\ O^\dagger_{+-} B O_{++} = O^\dagger_{-+} B O_{--}=\hat{0} \qquad \forall B \in M_{2^{M-1}}(\mathbb{C}) {\operatorname{e}}nd{gather} In order to proceed is necessary to check that if exist $D\in M_{m,n}(\mathbb{C})$, $F\in M_{n,m}(\mathbb{C})$ such that $D\cdot E\cdot F=0$ for all $E\in M_{n}(\mathbb{C})$ then either $D=0$ or $F=0$. To prove it, lets assume there exist $D,F$ such that ${\operatorname{e}}xists i_1,j_2\in \{1,\dots,m\}$, ${\operatorname{e}}xists j_1,i_2\in \{1,\dots,n\}$ such that $D\cdot E\cdot F=0$ for all $E\in M_{n}(\mathbb{C})$ with $D_{i_1,j_1}\neq 0$ and $F_{i_2,j_2}\neq 0$. Now since it can be chosen the matrix $E$ given by $E_{i,j}=\delta_i^{j_1} \delta_j^{i_2}$ we find that ${(D\cdot E\cdot F)}_{i_1,j_2}=D_{i_1,j_1} F_{i_2,j_2} \neq 0$ giving us a contradiction. Thus the assumption must be false. \\ This result gives us that the overall four restrictions that we had transform to the following four statements. 1: Either $O_{-+}=0$ or $O_{++}=0$. 2: Either $O_{+-}=0$ or $O_{--}=0$. 3: Either $O_{+-}=0$ or $O_{++}=0$. 4: Either $O_{-+}=0$ or $O_{--}=0$. It follows that there are only two non-trivial configurations: $O_{+-}=0=O_{-+}$ or $O_{++}=0=O_{--}$. Such configurations exactly correspond to the diagonal and anti-diagonal block form showed in the theorem, just as desired. {\operatorname{e}}nd{proof} We observe that all the elements of $\Gamma_S$ are SSR linear operators but that there can exist anti-diagonal operators that are not in $\Gamma_S$ that preserve the SSR structure. With this observation and classification, we move to prove the following result that makes the use of the partial tracing procedure under the SSR easier. \begin{prop} \label{prop:pureptssr} Assume that $\ket{a},\ket{b},\ket{c},\ket{d}\in \mathcal{H}_{S}$ for $N$ modes. Being $M\subset \{1,\dots, N\}$ where $\ket{a}$ and $\ket{b}$ only have contributions of the modes on $M$, and $\ket{c}$ and $\ket{d}$ only from $M^c$. 
Then we have that: \begin{gather} {\operatorname{Tr\,}}_{M}\left(\ketbra{a}{b}\wedge \ketbra{c}{d}\right)=\scp{b}{a}\ketbra{c}{d} \qquad \qquad {\operatorname{Tr\,}}_{M^c}\left(\ketbra{a}{b}\wedge \ketbra{c}{d}\right)=\scp{d}{c}\ketbra{a}{b} {\operatorname{e}}nd{gather} {\operatorname{e}}nd{prop} \begin{proof} First let's develop $\ketbra{a}{b}\wedge \ketbra{c}{d}$ in terms of the mode operators: \begin{gather} \ket{a}=\sum_{\vec{a}} \alpha_{\vec{a}} \cdot f^{\dagger}_{a_1}\dots f^{\dagger}_{a_{n}}\ket{\Omega}\quad \ket{b}= \sum_{\vec{b}} \beta_{\vec{b}} \cdot f^{\dagger}_{b_1}\dots f^{\dagger}_{b_{m}}\ket{\Omega} \quad \ket{c}=\sum_{\vec{c}} \gamma_{\vec{c}} \cdot f^{\dagger}_{c_1}\dots f^{\dagger}_{c_{r}}\ket{\Omega} \quad \ket{d}=\sum_{\vec{d}} \delta_{\vec{d}} \cdot f^{\dagger}_{d_1}\dots f^{\dagger}_{d_{s}}\ket{\Omega} \\ \ketbra{a}{b}\wedge \ketbra{c}{d}=\sum_{\vec{a},\vec{b},\vec{c},\vec{d}} \alpha_{\vec{a}} \beta^*_{\vec{b}} \gamma_{\vec{c}} \delta^*_{\vec{d}} f^{\dagger}_{a_1}\dots f^{\dagger}_{a_{n}} f^{\dagger}_{c_1}\dots f^{\dagger}_{c_{r}}\proj{\Omega} f_{d_{s}}\dots f_{d_1} f_{b_{m}}\dots f_{b_1} \\ {\operatorname{Tr\,}}_{M^c}\left(\ketbra{a}{b}\wedge \ketbra{c}{d}\right)=\sum_{\vec{a},\vec{b},\vec{c},\vec{d}} \alpha_{\vec{a}} \beta^*_{\vec{b}} \gamma_{\vec{c}} \delta^*_{\vec{d}} \left(f^{\dagger}_{a_1}\dots f^{\dagger}_{a_{n}} \proj{\Omega} f_{b_{m}}\dots f_{b_1} \right)\left(\bra{\Omega}f_{d_{s}}\dots f_{d_1}f^{\dagger}_{c_1}\dots f^{\dagger}_{c_{r}}\ket{\Omega}\right)=\nonumber \\= \left(\sum_{\vec{a}} \alpha_{\vec{a}} f^{\dagger}_{a_1}\dots f^{\dagger}_{a_{n}} \ket{\Omega}\right)\left( \sum_{\vec{b}} \beta^*_{\vec{b}} \bra{\Omega} f_{b_{m}}\dots f_{b_1} \right) \left( \sum_{\vec{d}} \delta^*_{\vec{d}}\bra{\Omega} f_{d_{s}}\dots f_{d_1}\right)\left( \sum_{\vec{c}} \gamma_{\vec{c}} f^{\dagger}_{c_1}\dots f^{\dagger}_{c_{r}}\ket{\Omega}\right)= \ketbra{a}{b} \scp{d}{c} \\ \ketbra{a}{b}\wedge \ketbra{c}{d}=\sum_{\vec{a},\vec{b},\vec{c},\vec{d}} \alpha_{\vec{a}} \beta^*_{\vec{b}} \gamma_{\vec{c}} \delta^*_{\vec{d}} (-1)^{nr+sm} f^{\dagger}_{c_1}\dots f^{\dagger}_{c_{r}} f^{\dagger}_{a_1}\dots f^{\dagger}_{a_{n}} \proj{\Omega} f_{b_{m}}\dots f_{b_1} f_{d_{s}}\dots f_{d_1} {\operatorname{e}}nd{gather} Since they follow SSR: $n{\operatorname{e}}quiv m \mod 2$ and $r{\operatorname{e}}quiv s \mod 2$. Therefore the parity of $nr+sm$ is always even. 
Thus it is found that: \begin{gather} \ketbra{a}{b}\wedge \ketbra{c}{d}=\sum_{\vec{a},\vec{b},\vec{c},\vec{d}} \alpha_{\vec{a}} \beta^*_{\vec{b}} \gamma_{\vec{c}} \delta^*_{\vec{d}} f^{\dagger}_{c_1}\dots f^{\dagger}_{c_{r}} f^{\dagger}_{a_1}\dots f^{\dagger}_{a_{n}} \proj{\Omega} f_{b_{m}}\dots f_{b_1} f_{d_{s}}\dots f_{d_1} \\ {\operatorname{Tr\,}}_M\left(\ketbra{a}{b}\wedge \ketbra{c}{d}\right)= \sum_{\vec{a},\vec{b},\vec{c},\vec{d}} \alpha_{\vec{a}} \beta^*_{\vec{b}} \gamma_{\vec{c}} \delta^*_{\vec{d}} \left( f^{\dagger}_{c_1}\dots f^{\dagger}_{c_{r}} \proj{\Omega} f_{d_{s}}\dots f_{d_1} \right)\left(\bra{\Omega} f_{b_{m}}\dots f_{b_1} f^{\dagger}_{a_1}\dots f^{\dagger}_{a_{n}} \ket{\Omega}\right)=\nonumber \\ = \left( \sum_{\vec{c}} \gamma_{\vec{c}} f^{\dagger}_{c_1}\dots f^{\dagger}_{c_{r}} \ket{\Omega}\right)\left(\sum_{\vec{d}} \delta^*_{\vec{d}} \bra{\Omega} f_{d_{s}}\dots f_{d_1} \right)\left(\sum_{\vec{b}} \beta^*_{\vec{b}} \bra{\Omega} f_{b_{m}}\dots f_{b_1} \right)\left(\sum_{\vec{a}} \alpha_{\vec{a}} f^{\dagger}_{a_1}\dots f^{\dagger}_{a_{n}} \ket{\Omega}\right) = \ketbra{c}{d} \scp{b}{a} {\operatorname{e}}nd{gather} {\operatorname{e}}nd{proof} Once we have shown the proofs for general linear SSR preserving operators, we move to prove the theorems that characterize classes of such SSR preserving linear operators. \begin{prop}\textbf{ Projectors}) \label{prop:Projector}. Consider $\hat{P} \in \Gamma_S$ an SSR linear operator. $\hat{P}$ is called a projector iff $\hat{P}^2=\hat{P}$ and $\hat{P}=\hat{P}^{\dagger}$, which is equivalent to have the following form in the basis $\mathcal{B}'$: \begin{align} \hat{P}=P_{ee} \oplus P_{oo} = \left( \begin{array}{c|c} P_{ee} & \hat{0} \\ \hline \hat{0} & P_{oo} {\operatorname{e}}nd{array}\right) {\operatorname{e}}nd{align} where $P_{ee},P_{oo}\in M_{2^{N-1}\times 2^{N-1}}(\mathbb{C})$ are projectors of that space, and $\hat{0}$ is the zero element of $M_{2^{N-1}\times 2^{N-1}}(\mathbb{C})$. {\operatorname{e}}nd{prop} \begin{proof} By the Theorem \ref{thm:blockform} we know that any SSR linear operator $P$ decomposes in the basis $\mathcal{B}'$ as \begin{gather} P= \left( \begin{array}{c|c} P_{ee} & \hat{0} \\ \hline \hat{0} & P_{oo} {\operatorname{e}}nd{array}\right) \quad \text{or} \quad P= \left( \begin{array}{c|c} \hat{0} & P_{oe} \\ \hline P_{eo} & \hat{0} {\operatorname{e}}nd{array}\right) {\operatorname{e}}nd{gather} Now we only have to see that the anti-diagonal block form is not possible and then that $P$ is a projector iff $P_{ee}$ and $P_{oo}$ are.\\ If $P$ has an anti-diagonal block form then we obtain that $P^2$ is given by: \begin{gather} P^2=\left( \begin{array}{c|c} P_{oe} \cdot P_{eo} & \hat{0} \\ \hline \hat{0} & P_{eo} \cdot P_{oe} {\operatorname{e}}nd{array}\right) {\operatorname{e}}nd{gather} which cannot be equal to $P$, unless we have the trivial case that can also be considered diagonal. Thus a projector cannot have an anti-diagonal block form.\\ So, considering only the diagonal block form we have that since \begin{gather} P^\dagger =\left( \begin{array}{c|c} P_{ee}^\dagger & \hat{0} \\ \hline \hat{0} & P_{oo}^\dagger {\operatorname{e}}nd{array}\right) \qquad \qquad P^2 =\left( \begin{array}{c|c} P_{ee}^2 & \hat{0} \\ \hline \hat{0} & P_{oo}^2 {\operatorname{e}}nd{array}\right) {\operatorname{e}}nd{gather} Then, $P^\dagger=P$ iff $P_{ee}^\dagger=P_{ee}$ and $P_{oo}^\dagger=P_{oo}$, and $P^2=P$ iff $P_{ee}^2=P_{ee}$ and $P_{oo}^2=P_{oo}$; proving the Proposition. 
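As a minimal illustrative check (a single-mode example added here for concreteness, not needed for the argument), take $N=1$, so that $\mathcal{B}'=\{\ket{\Omega}, f_1^\dagger\ket{\Omega}\}$ and $P_{ee},P_{oo}$ are $1\times 1$ projectors, i.e. equal to $0$ or $1$. The SSR projectors on a single mode are therefore exactly \begin{gather} \hat{0}, \qquad f_1 f_1^\dagger=\proj{\Omega}, \qquad f_1^\dagger f_1, \qquad \mathbb{I}, \end{gather} all of which are block diagonal in $\mathcal{B}'$, in agreement with the form stated in the Proposition.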
{\operatorname{e}}nd{proof} \begin{prop}\textbf{ Observables)} \label{prop:observables} An operator $\hat{A}$ is Hermitian and is in $\Gamma_S$ iff $\hat{A}$ is an observable for a fermionic system under the SSR i.e. exists a set of orthonormal $\ket{\psi_i}\in \mathcal{H}_S$ such that $\hat{A}=\sum_i a_i \proj{\psi_i}$ with $a_i \in \mathbb{R}$. {\operatorname{e}}nd{prop} \begin{proof} Lets start by naming the elements of the basis $\mathcal{B}'$ as $\ket{E_i}$ for the $i=1,\dots, 2^{N-1}$ firsts and $\ket{O_i}$ for the final $i=1,\dots, 2^{N-1}$ elements of the basis. Then since $A$ is an hermitian SSR operator it follows that \begin{gather} \left.A\right|_{\mathcal{B}'}=\begin{pmatrix} A_E && 0 \\ 0 && A_O {\operatorname{e}}nd{pmatrix} {\operatorname{e}}nd{gather} where $A_E$ and $A_O$ are hermitian matrices of dimension $2^{N-1}$. Thus they can be decomposed into real values with unitary matrices $U_E$ and $U_O$ respectively, and thus: \begin{gather} \left.A\right|_{\mathcal{B}'}=\begin{pmatrix} A_E && 0 \\ 0 && A_O {\operatorname{e}}nd{pmatrix}=\begin{pmatrix} U_E D_E U_E^\dagger && 0 \\ 0 && U_O D_O U_O^\dagger {\operatorname{e}}nd{pmatrix}=\begin{pmatrix} U_E && 0 \\ 0 && U_O {\operatorname{e}}nd{pmatrix}\cdot \begin{pmatrix} D_E && 0 \\ 0 && D_O {\operatorname{e}}nd{pmatrix}\cdot \begin{pmatrix} U_E && 0 \\ 0 && U_O {\operatorname{e}}nd{pmatrix}^\dagger {\operatorname{e}}nd{gather} thus since the matrix $\begin{pmatrix} U_E && 0 \\ 0 && U_O {\operatorname{e}}nd{pmatrix}$ is the matrix representation of a unitary operator $U$ (for more clarity see Theorem \ref{thm:Unitary}), we have that the states $\ket{E_i'}=U \ket{E_i}$ and $\ket{O_i'}=U\ket{O_i}$ will be eigenvectors with a real eigenvalue of $A$. It is only left to see that indeed the states satisfy the SSR. Is clear that the subspaces of even and odd spaces are invariant under the action of $U$. Thus the theorem is proven. {\operatorname{e}}nd{proof} \begin{theorem*}\textbf{\ref{thm:Unitary}. Unitary)} $\hat{U}$ is an SSR unitary operator, acting on $\mathcal{H}_{S}$ if and only if the matrix representation of the operator $\hat{U}$ in the basis $\mathcal{B}'$ takes the following form: \begin{align} \hat{U}=U_{ee} \oplus U_{oo} = \left(\begin{array}{c|c} U_{ee} & \hat{0} \\ \hline \hat{0} & U_{oo} {\operatorname{e}}nd{array}\right), {\operatorname{e}}nd{align} where $U_{ee}$ and $U_{oo}$ are unitary matrices, each in $M_{2^{N-1}\times 2^{N-1}}(\mathbb{C})$, living in the even and odd space respectively, and $\hat{0}$ is the zero element of $M_{2^{N-1}\times 2^{N-1}}(\mathbb{C})$. {\operatorname{e}}nd{theorem*} \begin{proof} By the Theorem \ref{thm:blockform} we know that any SSR linear operator $U$ decomposes in the basis $\mathcal{B}'$ as \begin{gather} U= \left( \begin{array}{c|c} U_{ee} & \hat{0} \\ \hline \hat{0} & U_{oo} {\operatorname{e}}nd{array}\right) \quad \text{or} \quad \left( \begin{array}{c|c} \hat{0} & U_{oe} \\ \hline U_{eo} & \hat{0} {\operatorname{e}}nd{array}\right) {\operatorname{e}}nd{gather} First, we will prove that an anti-diagonal block unitary cannot exist. As mentioned in the main article, we do this by designing a protocol where the no-signalling principle is violated if such a unitary exists. The quantum circuit of the scheme is the following: \begin{figure}[ht] \centering \includegraphics[width=0.40\textwidth]{circuit.png} \caption{ Scheme that shows the violation of no-signalling by anti-diagonal unitaries. 
The dashed boxes show the possibility that Bob applies those gates or not, depending on if he wants Alice to have a $\ket{+}$ or a $\ket{-}$ state.} {\operatorname{e}}nd{figure} We assume that for an anti-diagonal unitary $\hat{U}$ in a set of modes $A$, we can apply the same unitary to another set of modes $B$. In this scheme, there are two separably distinct sets of fermionic modes $ A$ and $B$. We denote the different applications as $\hat{U}_A$ and $\hat{U}_B$. In the spatial location of $A$, there is also a qubit system. Alice is able to couple the qubit system to the fermionic modes $A$ via a controlled-$\hat{U}_A$ operation, defined as: $\ketbra{0}{0} \otimes \mathbb{I}_A + \ketbra{1}{1} \otimes \hat{U}_A$.\\ The scheme consists in two cases, the case where $B$ decides to apply the anti-diagonal unitaries $\hat{U}_B$ and $\hat{U}_B^\dagger$ to their modes in the timing chosen in the scheme; and the case where the modes of $B$ do not get acted on. The initial state is $\ketbra{0}{0}\otimes \rho_{AB}$, where $\rho_{AB}$ is any SSR fermionic state. We can choose the qubit to be in this initial state. Now, lets split the scheme into its two parts. \begin{enumerate} \item $B$ is unmodified: Then we have that the protocol gives $\ketbra{+}{+}\otimes \rho_{AB}$. After the controlled-$\hat{U}_A$ gate we have $\frac{1}{2}\left(\ketbra{0}{0}\otimes \rho_{AB}+\ketbra{0}{1}\otimes \rho_{AB} \hat{U}_A^\dagger +\ketbra{1}{0}\otimes \hat{U}_A \rho_{AB} +\ketbra{1}{1}\otimes \hat{U}_A\rho_{AB}\hat{U}_A^\dagger\right)$ and after the final controlled-$\hat{U}_A^\dagger$ gate we have $\frac{1}{2}\left(\ketbra{0}{0}\otimes \rho_{AB}+\ketbra{0}{1}\otimes \rho_{AB} \hat{U}_A^\dagger\hat{U}_A +\ketbra{1}{0}\otimes \hat{U}_A^\dagger\hat{U}_A \rho_{AB} +\ketbra{1}{1}\otimes \hat{U}_A^\dagger\hat{U}_A\rho_{AB}\hat{U}_A^\dagger\hat{U}_A\right)=\ketbra{+}{+}\otimes\rho_{AB}$. \item $B$ applies the unitaries: from also $\ketbra{+}{+}\otimes \rho_{AB}$ we then have $\ketbra{+}{+}\otimes \hat{U}_B \rho_{AB} \hat{U}_B^\dagger$. After applying the controlled-$\hat{U}_A$ gate it is obtained $\frac{1}{2}\left(\ketbra{0}{0}\otimes \hat{U}_B \rho_{AB} \hat{U}_B^\dagger +\ketbra{0}{1}\otimes \hat{U}_B \rho_{AB} \hat{U}_B^\dagger \hat{U}_A^\dagger+\ketbra{1}{0}\otimes \hat{U}_A \hat{U}_B \rho_{AB} \hat{U}_B^\dagger +\ketbra{1}{1}\otimes \hat{U}_A \hat{U}_B \rho_{AB} \hat{U}_B^\dagger \hat{U}_A^\dagger\right)$. Then, when $\hat{U}_B^\dagger$ is applied we get $\frac{1}{2}\left(\ketbra{0}{0}\otimes \rho_{AB} +\ketbra{0}{1}\otimes \rho_{AB} \hat{U}_B^\dagger \hat{U}_A^\dagger \hat{U}_B+\ketbra{1}{0}\otimes \hat{U}_B^\dagger \hat{U}_A \hat{U}_B \rho_{AB} +\ketbra{1}{1}\otimes \hat{U}_B^\dagger \hat{U}_A \hat{U}_B \rho_{AB} \hat{U}_B^\dagger \hat{U}_A^\dagger \hat{U}_B\right)$. And after the final controlled-$\hat{U}_A^\dagger$ operation is applied, the result obtained is $\frac{1}{2}\left(\ketbra{0}{0}\otimes \rho_{AB} +\ketbra{0}{1}\otimes \rho_{AB} \hat{U}_B^\dagger \hat{U}_A^\dagger \hat{U}_B \hat{U}_A+\ketbra{1}{0}\otimes \hat{U}_A^\dagger \hat{U}_B^\dagger \hat{U}_A \hat{U}_B \rho_{AB} +\ketbra{1}{1}\otimes \hat{U}_A^\dagger \hat{U}_B^\dagger \hat{U}_A \hat{U}_B \rho_{AB} \hat{U}_B^\dagger \hat{U}_A^\dagger \hat{U}_B \hat{U}_A\right)$ {\operatorname{e}}nd{enumerate} In order to proceed, is required that we proof the following statement. If $C$ and $D$ are anti-block diagonal SSR operators local on two non-overlapping sets of modes $A$ and $B$, then $CD=-DC$. 
To proof this statement, we just need to observe that such operators can be decomposed as a linear combination of monomials that are products of an odd number of creations and annihilation operators of the modes in $A$ and $B$ respectively. And since for each of this monomials $f_{\lambda_1}\dots f_{\lambda_n} f^\dagger_{\mu_1} \dots f^\dagger_{\mu_m}$ we observe that $(f_{\lambda_1}\dots f_{\lambda_n} f^\dagger_{\mu_1} \dots f^\dagger_{\mu_m})(f_{\nu_1}\dots f_{\nu_s} f^\dagger_{{\operatorname{e}}ta_1} \dots f^\dagger_{{\operatorname{e}}ta_r})=(-1)^{(n+m)(r+s)}(f_{\nu_1}\dots f_{\nu_s} f^\dagger_{{\operatorname{e}}ta_1} \dots f^\dagger_{{\operatorname{e}}ta_r}) (f_{\lambda_1}\dots f_{\lambda_n} f^\dagger_{\mu_1} \dots f^\dagger_{\mu_m}) $ if $\lambda_i,\mu_j \in A$ and $\nu_k,{\operatorname{e}}ta_l \in B$ from this follows our crucial statement. We can deduce that the final state of the scheme for when the unitaries in $B$ are applied is $\frac{1}{2}\left(\ketbra{0}{0}\otimes \rho_{AB} +\ketbra{0}{1}\otimes \rho_{AB} \hat{U}_B^\dagger \hat{U}_A^\dagger \hat{U}_B \hat{U}_A+\ketbra{1}{0}\otimes \hat{U}_A^\dagger \hat{U}_B^\dagger \hat{U}_A \hat{U}_B \rho_{AB} +\ketbra{1}{1}\otimes \hat{U}_A^\dagger \hat{U}_B^\dagger \hat{U}_A \hat{U}_B \rho_{AB} \hat{U}_B^\dagger \hat{U}_A^\dagger \hat{U}_B \hat{U}_A\right) = \frac{1}{2}\left(\ketbra{0}{0}\otimes \rho_{AB} -\ketbra{0}{1}\otimes \rho_{AB} \hat{U}_A^\dagger \hat{U}_B^\dagger \hat{U}_B \hat{U}_A-\ketbra{1}{0}\otimes \hat{U}_B^\dagger \hat{U}_A^\dagger \hat{U}_A \hat{U}_B \rho_{AB} +\ketbra{1}{1}\otimes \hat{U}_B^\dagger \hat{U}_A^\dagger \hat{U}_A \hat{U}_B \rho_{AB} \hat{U}_A^\dagger \hat{U}_B^\dagger \hat{U}_B \hat{U}_A\right) = \frac{1}{2}\left(\ketbra{0}{0}\otimes \rho_{AB} -\ketbra{0}{1}\otimes \rho_{AB}-\ketbra{1}{0}\otimes \rho_{AB} +\ketbra{1}{1}\otimes \rho_{AB}\right)=\ketbra{-}{-}\otimes \rho_{AB}$. And since $\ket{+}$ and $\ket{-}$ are orthogonal states, Alice would know if Bob has applied the unitaries by measuring the qubit in the diagonal basis. Thus, Bob would have transmitted information to Alice without exchanging any particle nor any sort of classical communication. Bob via this protocol is able to transmit a message to Alice by acting remotely on his modes, no classical communication channel connects the two. Moreover, the information is transmitted instantaneously. Thus, the no-signalling principle would be violated. For this reason we conclude that anti-block diagonal unitaries cannot exist.\\ Now, having discarded the anti-diagonal block case, we just have to see that $U$ is a unitary operator iff $U_{ee}$ and $U_{oo}$ are. And this follows directly from the block diagonal form action under hermitian conjugation and under product of block forms, e.g \begin{gather} U^\dagger =\left( \begin{array}{c|c} U_{ee}^\dagger & \hat{0} \\ \hline \hat{0} & U_{oo}^\dagger {\operatorname{e}}nd{array}\right) \qquad \qquad U U^\dagger = \left( \begin{array}{c|c} U_{ee} & \hat{0} \\ \hline \hat{0} & U_{oo} {\operatorname{e}}nd{array}\right) \left( \begin{array}{c|c} U_{ee}^\dagger & \hat{0} \\ \hline \hat{0} & U_{oo}^\dagger {\operatorname{e}}nd{array}\right)= \left( \begin{array}{c|c} U_{ee} U_{ee}^\dagger& \hat{0} \\ \hline \hat{0} & U_{oo} U_{oo}^\dagger {\operatorname{e}}nd{array}\right) {\operatorname{e}}nd{gather} Thus, $U U^\dagger=\mathbb{I}_{\mathcal{H}}$ iff $U_{ee} U_{ee}^\dagger=\mathbb{I}_{2^{N-1}}$ and $U_{oo} U_{oo}^\dagger=\mathbb{I}_{2^{N-1}}$; proving the Theorem. 
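For intuition (an illustrative aside, not part of the proof), the simplest candidate for an anti-diagonal block unitary is the single-mode Majorana-type operator \begin{gather} \hat{U}=f_1+f_1^\dagger, \qquad \hat{U}^\dagger\hat{U}=(f_1^\dagger+f_1)(f_1+f_1^\dagger)=f_1^\dagger f_1+f_1 f_1^\dagger=\mathbb{I}, \end{gather} which exchanges $\ket{\Omega}$ and $f_1^\dagger\ket{\Omega}$ and is therefore purely anti-diagonal in the basis $\mathcal{B}'$. It is precisely this kind of parity-flipping operation that the no-signalling argument above excludes as a physical SSR evolution.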
{\operatorname{e}}nd{proof} Once we have proven the theorems that characterize the different types of SSR operators, we reproduce the proofs of the results that we need to discuss the notion of separable states properly. First, we prove the analogous Schmidt decomposition and purification procedures. \begin{theorem*}\textbf{\ref{thm:schmidt}. Schmidt decomposition)} Given any bipartite, pure SSR fermionic state $\ket{\psi}_{AB} \in \mathcal{H}_{S}^{AB}$, there exist orthonormal basis $\{\ket{i}_A\} \in \mathcal{H}_{S}^A$ and $\{\ket{i}_B\} \in \mathcal{H}_{S}^B$, such that \begin{align} \ket{\psi}_{AB}=\sum_i \sqrt{p_i} \ket{i}_A\wedge \ket{i}_B, {\operatorname{e}}nd{align} where $\{p_i\}$ are probabilities. {\operatorname{e}}nd{theorem*} \begin{proof} First of all, the state $\ket{\psi}$ can be decomposed on the canonical basis $\mathcal{B}$ where its elements can be thought as products of the canonical basis of the subsystems: \begin{equation} \ket{\psi}=\sum_{i,j} \lambda_{i,j} \ket{e_i}\wedge \ket{e_j} {\operatorname{e}}nd{equation} where $\{\ket{e_i}\}$ is a basis of $\mathcal{H}_A$ and $\{\ket{e_j}\}$ is the canonical basis of $\mathcal{H}_B$. Therefore this expression can be transformed into another, transforming the elements of $\mathcal{B}_A$ into another basis $\mathcal{B}'_A$ and transforming the elements of $\mathcal{B}_B$ into another basis $\tilde{\mathcal{B}}_B$: \begin{equation} \ket{\psi}=\sum_{i,j} \mu_{i,j} \ket{f_i}\wedge \ket{g_j} {\operatorname{e}}nd{equation} with $\{f_i\}\in\mathcal{B}'_A$ and $\{g_j\}\in\tilde{\mathcal{B}}_B$. Since the transformation is unitary, the state stays well normalized. Therefore for any basis on $\mathcal{H}_A$ and $\mathcal{H}_B$ the state can be decomposed in these basis. \\ A new basis of $\mathcal{H}_B$ can be defined if the terms are grouped: \begin{gather} \ket{\psi}=\sum_{i} \ket{f_i}\wedge(\sum_j \mu_{i,j} \ket{g_j})=\sum_{i} \ket{f_i}\wedge\ket{h_i} {\operatorname{e}}nd{gather} where $\{\ket{h_i}\}$ it is not normalized neither orthogonal. \\ Once this description has been done, as a basis for $\mathcal{H}_A$ lets choose the basis in which $\rho_A$ is diagonal. $\rho_A$ is obtained by partial tracing B in $\rho=\proj{\psi}$. Therefore since we have $\rho_A=\sum_i p_i \proj{i}$, and it has been chosen $\ket{f_i}=\ket{i}$ lets see the relation with what we previously had: \begin{gather} \rho=\sum_{i,j} \ket{i}\wedge \ket{h_i}\bra{j}\wedge \bra{h_j} \Rightarrow \rho_A=\sum_{i,j} \ketbra{i}{j} \scp{h_j}{h_i}=\sum_i p_i \proj{i} {\operatorname{e}}nd{gather} where in the first implication it has been used the Proposition \ref{prop:pureptssr}. This relations imply that $\scp{h_j}{h_i}=p_i\delta_{i,j}$. Therefore the $\{h_i\}$ are indeed orthogonal. Defining $\ket{\tilde{i}}{\operatorname{e}}quiv \frac{\ket{h_i}}{\sqrt{p_i}}$ the set $\{\ket{\tilde{i}}\}$ conform an orthonormal basis of $\mathcal{H}_B$, and therefore it can be written: \begin{equation} \ket{\psi}=\sum_i \sqrt{p_i} \ket{i}\wedge \ket{\tilde{i}} {\operatorname{e}}nd{equation} {\operatorname{e}}nd{proof} \begin{corollary*}\textbf{\ref{cor:purif}. Purification)} If $\rho \in \mathcal{R}_{S}^M$, then there exists a fermionic space $E$ of $M$ modes and a pure state $\omega \in \mathcal{H}_{S}^M \wedge E$, such that ${\operatorname{Tr\,}}_E(\omega)=\rho$. {\operatorname{e}}nd{corollary*} \begin{proof} We know that we can decompose any $\rho$ as $\rho=\sum_i p_i \proj{\psi_i}$ with $\ket{\psi_i}$ chosen to be SSR states. 
Since the sum is finite, we consider a set of $M$ new modes, where $2^M$ is the number of summing terms in the decomposition of $\rho$. With this set, we generate a new fermionic space $E$ with $M$ modes. Now we choose the state: \begin{gather} \omega=\sum_i \sqrt{p_i} \ket{\psi_i}\wedge \ket{i} {\operatorname{e}}nd{gather} where $\ket{i}$ are states of the canonical basis of our new space of $M$ modes. Their parity is the same as the parity of $\ket{\psi_i}$. So the global state $\omega$ has an even parity in $\mathcal{H}_{S}\wedge E$. Now, it is straightforward to check with the proved properties of the partial trace that ${\operatorname{Tr\,}}_E(\omega)=\rho$. {\operatorname{e}}nd{proof} Once we have these tools, we prove the final results that characterize fermionic SSR uncorrelated states. We use them to discuss the relationship between the three different definitions presented in the main article. \begin{prop} \label{prop:ProdState} For a SSR fermionic bipartite state $\rho_{AB} \in \mathcal{R}_{S}^{AB}$: \begin{gather} {\operatorname{Tr\,}}(\rho_{AB} (\hat{O}_A\wedge \hat{O}_B))={\operatorname{Tr\,}}(\rho_A \hat{O}_A){\operatorname{Tr\,}}(\rho_B \hat{O}_B) \quad \forall \hat{O}_X\in \mathcal{O}_X \quad \Longleftrightarrow \quad \rho_{AB}=\rho_A \wedge \rho_B {\operatorname{e}}nd{gather} where $\mathcal{O}_X$ is the set of all Hermitian operators local on the subspace $\mathcal{H}^X$ {\operatorname{e}}nd{prop} \begin{proof} $\Leftarrow$: If we have $\rho_{AB}=\rho_A \wedge \rho_B$, then given any $\hat{O}_A\in \mathcal{O}_A$ and any $\hat{O}_B\in \mathcal{O}_B$, ${\operatorname{Tr\,}}(\rho_{AB}(\hat{O}_A \wedge \hat{O}_B))={\operatorname{Tr\,}}((\rho_A \wedge \rho_B)(\hat{O}_A \wedge \hat{O}_B))$ which by Lemma \ref{lema:distribu} is equal to ${\operatorname{Tr\,}}((\rho_A \hat{O}_A)\wedge (\rho_B \hat{O}_B) )$ and then by Lemma \ref{lema:trprod} gives ${\operatorname{Tr\,}}(\rho_A \hat{O}_A)\cdot {\operatorname{Tr\,}}(\rho_B \hat{O}_B)$ just as desired. Since the equality holds for any Hermitian local operators, holds for all SSR local Hermitian operators and all local Hermitian operators. \\ $\Rightarrow$: In order to proof the other implication we use that since we know that the $\Leftarrow$ implication holds, for all $\hat{O}_A \in \mathcal{O}_A$ and for all $ \hat{O}_B \in \mathcal{O}_B$: ${\operatorname{Tr\,}}(\rho_A \hat{O}_A)\cdot{\operatorname{Tr\,}}(\rho_B \hat{O}_B)={\operatorname{Tr\,}}((\rho_A\wedge \rho_B)(\hat{O}_A\wedge \hat{O}_B))$ So the condition $ {\operatorname{Tr\,}}(\rho_{AB} (\hat{O}_A\wedge \hat{O}_B))={\operatorname{Tr\,}}(\rho_A \hat{O}_A){\operatorname{Tr\,}}(\rho_B \hat{O}_B)$ for all $\hat{O}_X\in \mathcal{O}_X$ is equivalent to $ {\operatorname{Tr\,}}(\rho_{AB} (\hat{O}_A\wedge \hat{O}_B))={\operatorname{Tr\,}}((\rho_A\wedge \rho_B)(\hat{O}_A\wedge \hat{O}_B))$ for all $\hat{O}_X\in \mathcal{O}_X$ , which is equivalent by linearity of the trace and distribution of the product to $ {\operatorname{Tr\,}}((\rho_{AB}-\rho_A\wedge \rho_B) (\hat{O}_A\wedge \hat{O}_B))=0$ for all $\hat{O}_X\in \mathcal{O}_X$. And by defining $D= \rho_{AB}-\rho_A\wedge \rho_B$ the statement that we want to proof is equivalent to proof that: if ${\operatorname{Tr\,}}(D (\hat{O}_A\wedge \hat{O}_B))=0$ for all $\hat{O}_X\in \mathcal{O}_X$ then $D=0$. And since $D$ is a Hermitian operator because is a lineal combination of Hermitian operators, we can apply Lemma \ref{lema:tracezero} which is exactly this statement, so we have proven the implication. 
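As a quick sanity check (a two-mode example with labels chosen by us), let $A=\{1\}$, $B=\{2\}$, $\rho_A=\proj{\Omega}$, $\rho_B=f_2^\dagger \proj{\Omega} f_2$, and take the local observables $\hat{O}_A=f_1 f_1^\dagger$ and $\hat{O}_B=f_2^\dagger f_2$. Then \begin{gather} {\operatorname{Tr\,}}\left((\rho_A\wedge\rho_B)(\hat{O}_A\wedge \hat{O}_B)\right)=1={\operatorname{Tr\,}}(\rho_A \hat{O}_A)\,{\operatorname{Tr\,}}(\rho_B \hat{O}_B), \end{gather} in agreement with the statement of the Proposition.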
{\operatorname{e}}nd{proof} So we have proven that the first and second definitions of uncorrelated states are equivalent. Note though, that in the proof of the Proposition \ref{prop:ProdState} we see that in the implication $\Leftarrow$ also relies the proof that the states that satisfy the second possible definition of uncorrelated state mentioned in the article will also satisfy the third. This is since all SSR local observables are local Hermitian SSR operators, that are a subset of local Hermitian operators. But, as mentioned in \cite{Banuls09} there exist states that adhere to the third definition but not to the second one. One counterexample is, for a 2 mode fermionic system, under the matrix representation in the basis $\mathcal{B}$ \begin{gather} \rho_{12}=\frac{1}{16}\begin{pmatrix} 9 & 0 & 0 & -i \\ 0 & 3 & -i & 0 \\ 0 & i & 3 & 0 \\ i & 0 & 0 & 1 {\operatorname{e}}nd{pmatrix} \quad \rho_1=\frac{1}{16}\begin{pmatrix} 12 & 0 \\ 0 & 4 {\operatorname{e}}nd{pmatrix} \quad \rho_2= \frac{1}{16}\begin{pmatrix} 12 & 0 \\ 0 & 4 {\operatorname{e}}nd{pmatrix} {\operatorname{e}}nd{gather} The result relies on the fact that there are only 2 linearly independent Hermitian SSR local operators in the 1 mode system, and both have a diagonal representation in the basis $\mathcal{B}$, being $\{\mathbb{I}, f_1 f_1^\dagger\}$. Now, assuming that $\rho_{AB}$ is pure, we are able to proof that the three proposed definitions of uncorrelated states are the same. \begin{proof} We have to prove the implication from the third to the second definition, and we do it by \textit{reductio ad absurdum}. We start writing $\rho_{AB}$ as $\rho_{AB}=\proj{\psi}$ since we know that is pure. Now it will be assumed that $\rho_{AB}\neq \rho_A \wedge \rho_B$, and get to a contradiction:\\ Using the Schmidt decomposition from Theorem \ref{thm:schmidt} proven in this appendix, one can decompose $\rho_{AB}$ as: \begin{gather} \rho_{AB}=\sum_{i,j} \sqrt{p_i p_j} \ketbra{i_A }{j_A} \wedge \ketbra{i_B}{j_B} {\operatorname{e}}nd{gather} We can say that if $\rho_{AB} \neq \rho_A \wedge \rho_B$ then the number of non-zero $p_i$ is greater than 1. Therefore, without loss of generality we can consider $p_1,p_2 \in (0,1)$. Now, lets see what is obtained: \begin{gather} {\operatorname{Tr\,}}((P_A\wedge P_B)\rho)=\sum_{i,j } \sqrt{p_i p_j} {\operatorname{Tr\,}}((P_A\wedge P_B) \ketbra{i_A}{j_A} \wedge \ketbra{i_B}{j_B} )=\nonumber \\ =\sum_{i,j } \sqrt{p_i p_j} {\operatorname{Tr\,}}(P_A \ketbra{i_A}{j_A} \wedge P_B \ketbra{i_B}{j_B} )= \sum_{i,j } \sqrt{p_i p_j} {\operatorname{Tr\,}}(P_A \ketbra{i_A}{j_A}) {\operatorname{Tr\,}}(P_B \ketbra{i_B}{j_B} ) {\operatorname{e}}nd{gather} Now, it is easy to calculate the corresponding partial traces to obtain $\rho_A=\sum_{i} p_i \proj{i_A}$ and $\rho_B=\sum_{j} p_j \proj{j_B}$. Thus, the right hand part of the first uncorrelation relation becomes: \begin{gather} {\operatorname{Tr\,}}(P_A \rho_A){\operatorname{Tr\,}}(P_B \rho_B)=\sum_{i,j} p_i p_j {\operatorname{Tr\,}}(P_A \proj{i_A}) {\operatorname{Tr\,}}(P_B \proj{j_B}) {\operatorname{e}}nd{gather} Now we will see that these two quantities cannot be equal. 
If these two quantities were equal for all $P_A,P_B$, then choosing $P_A=\proj{1_A}$ and $P_B=\proj{2_B}$, which are clearly Hermitian operators, one would obtain: \begin{gather} {\operatorname{Tr\,}}((P_A\wedge P_B) \rho_{AB})=\sum_{i,j} \sqrt{p_i p_j}\, \delta_{i,1}\delta_{j,1}\,\delta_{i,2}\delta_{j,2}=0\\ {\operatorname{Tr\,}}(P_A\rho_A){\operatorname{Tr\,}}(P_B \rho_B)=\sum_{i,j} p_i p_j\, \delta_{1,i}\, \delta_{2,j}=p_1 p_2 \end{gather} So we would obtain $0=p_1 p_2$, but since $p_1,p_2 \in (0,1)$ this is a contradiction. Thus, if $\rho\neq \rho_A \wedge \rho_B$, then there exist local Hermitian operators $P_A$ on $\mathcal{H}_A$ and $P_B$ on $\mathcal{H}_B$ with ${\operatorname{Tr\,}}((P_A\wedge P_B)\rho)\neq {\operatorname{Tr\,}}(P_A\rho_A){\operatorname{Tr\,}}(P_B \rho_B)$. \end{proof} Thus, all three definitions of uncorrelated states agree for pure states. \section{Proofs of CPTP-Kraus-Stinespring equivalences} \label{sec:prooftheorem} In this Appendix, we present the complete proofs of Theorem \ref{thm:Sinespring-SupO-CPTP} and of the general characterization of quantum operations. To make the proof of Theorem \ref{thm:Sinespring-SupO-CPTP} easier, we first prove the equivalence that holds for general quantum operations. \begin{theorem} \label{thm:Kraus-CPTP} \textbf{General quantum operations)} For a SSR fermionic quantum operation represented by a map $\varphi: \mathcal{R}_{S}^{N} \rightarrow \Gamma_{S}^{M}$ that transforms $N$-mode SSR fermionic states in $\mathcal{R}_{S}^{N}$ into $M$-mode SSR fermionic operators in $\Gamma_{S}^{M}$, the following statements are equivalent. \begin{enumerate} \item (Operator-sum representation.) There exists a set of linear operators $E_k: \mathcal{H}_{S}^{N} \rightarrow \mathbb{C}\mathcal{H}_{S}^{M}$, with $0 \leq \sum_k E_k^{\dagger}E_k \leq \mathbb{I}_{N}$, such that: \begin{equation} \varphi(\rho)=\sum_k E_k \rho E_k^{\dagger} \end{equation} \item (Axiomatic formalism.) $\varphi$ fulfills the following properties: \begin{itemize} \item ${\operatorname{Tr\,}}(\varphi(\rho))$ is a probability, i.e. $0\leq {\operatorname{Tr\,}}(\varphi(\rho))\leq 1$ for all $\rho\in \mathcal{R}^N_S$. \item It is convex-linear, i.e. $\varphi\left(\sum_i p_i\rho_i \right)=\sum_i p_i \varphi(\rho_i)$ with $p_i$ probabilities and $\rho_i\in \mathcal{R}^N_S$. \item $\varphi: \mathcal{R}_{S}^{N} \rightarrow \Gamma_{S}^{M}$ is CP. \end{itemize} \end{enumerate} \end{theorem} \begin{proof} To prove this equivalence, we first show that 1 implies 2 and then that 2 implies 1. For 1 implies 2, we have to check that if $\varphi(\rho)=\sum_k E_k \rho E_k^{\dagger}$ with the stated properties, then the set of axioms is fulfilled. The convex-linearity property follows at once from the linearity of the operator sum and distributivity. The first property follows from the cyclic property of the trace, together with the fact that the trace and multiplication by a positive operator preserve inequalities. For the CP property we first check that indeed $\varphi: \mathcal{R}_{S}^{N} \rightarrow \Gamma_{S}^{M}$; this follows easily from the SSR preservation property of the Kraus operators $E_k$.
Now, to proof that is CP, assume that $K\in \mathbb{N}$ and that $L$ is a fermionic Hilbert space of $K$ modes. Then choosing any state $\ket{\psi}\in \mathcal{H}_S^M\wedge L$ we can define for every $E_i$ an unnormalized state $\ket{\phi_i}{\operatorname{e}}quiv (E_i^{\dagger}\wedge \mathbb{I}_K)\ket{\psi}$ where it can be easily checked that $0\leq \scp{\phi_i}{\phi_i}\leq 1$. Now if $A$ is an arbitrary positive operator of the Hilbert space $\mathcal{H}_S^N\wedge L$ we can see that: \begin{gather} \bra{\psi}(E_i\wedge\mathbb{I}_K)A(E_i^{\dagger}\wedge\mathbb{I}_K)\ket{\psi}=\bra{\phi_i}A\ket{\phi_i}\geq 0 {\operatorname{e}}nd{gather} where the last step is true for the positivity of $A$ and the norm of $\ket{\phi_i}$. Once we have that, since \begin{gather} (\varphi\wedge\mathbb{I}_K)(A)=\sum_i (E_i\wedge \mathbb{I}_K)A(E_i^{\dagger}\wedge \mathbb{I}_K) {\operatorname{e}}nd{gather} and the numerable sum of positive elements is positive, it is found that for an arbitrary $K$, an arbitrary state $\ket{\psi}\in\mathcal{H}_S^M\wedge L$ and for an arbitrary positive operator $A$ for $\mathcal{H}_S^N\wedge L$, $\bra{\psi}(\varphi\wedge \mathbb{I}_K )(A) \ket{\psi}\geq 0$, and therefore $(\varphi\wedge \mathbb{I}_K )(A)\geq 0$ and thus $\varphi$ is CP. \\ For the opposed implication, we have the map $\varphi$ fulfilling the three axioms. Now let us consider an additional fermionic Hilbert space of $N$ modes, that we will call $L$ and consider the global Hilbert space $\mathcal{H}_S^N\wedge L$. In these Hilbert spaces, orthonormal basis $\{\ket{i_H}\}\in \mathcal{H}_S^N$ and $\{\ket{i_L}\}\in L$ indexed by the same numerable label $i=1,\dots,2^N$ can be chosen. We can choose this basis so that the first $2^{N-1}$ are even SSR states and the last $2^{N-1}$ are odd SSR states, for both spaces.\\ Now we consider the even SSR state for the global system: \begin{gather} \ket{\alpha}{\operatorname{e}}quiv\frac{1}{2^N} \sum_{i=1}^{2^N} \ket{i_H}\wedge \ket{i_L} {\operatorname{e}}nd{gather} Now from this definition, an operator on the global Hilbert space is defined: \begin{gather} \sigma {\operatorname{e}}quiv (\varphi\wedge \mathbb{I}_N)(\proj{\alpha}) {\operatorname{e}}nd{gather} Once made this construction, it is known that any SSR state $\ket{{\operatorname{e}}ta}\in \mathcal{H}_S^N$ can be written as $\ket{{\operatorname{e}}ta}=\sum_{j=1}^{2^{N-1}} {\operatorname{e}}ta_j \ket{j_H}$ or $\ket{{\operatorname{e}}ta}=\sum_{j=1+2^{N-1}}^{2^{N}} {\operatorname{e}}ta_j \ket{j_H}$. To each SSR state, an analogue in $L$ is considered: \begin{gather} \ket{\tilde{{\operatorname{e}}ta}}=\sum_{j=1}^{2^{N-1}} {\operatorname{e}}ta_j^* \ket{j_L} \quad \text{or}\quad \ket{\tilde{{\operatorname{e}}ta}}=\sum_{j=1+2^{N-1}}^{2^N} {\operatorname{e}}ta_j^* \ket{j_L} {\operatorname{e}}nd{gather} which is also a SSR state on $L$ and with the same parity. 
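For orientation (an illustrative aside; the single-mode instance is ours), for $N=1$ the state defined above reads $\ket{\alpha}=\frac{1}{2}\left(\ket{1_H}\wedge \ket{1_L}+\ket{2_H}\wedge \ket{2_L}\right)$, a (subnormalised) maximally entangled even state between the system and the auxiliary copy $L$; the operator $\sigma$ built from it plays the role of a Choi-type operator for $\varphi$, and the construction below recovers $\varphi$ from $\sigma$.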
For the properties of the wedge product and the definition of $\sigma$ it is found that: \begin{gather} \sigma=\frac{1}{2^{2N}}\sum_{i,j=1}^{2^N} \varphi(\ketbra{i_H}{j_H})\wedge \ketbra{i_L}{j_L} {\operatorname{e}}nd{gather} Under the SSR, it is found that either: \begin{gather} \bra{\tilde{{\operatorname{e}}ta}}\sigma\ket{\tilde{{\operatorname{e}}ta}}=\frac{1}{2^{2N}}\sum_{i,j=1}^{2^{N-1}} \varphi(\ketbra{i_H}{j_H}){\operatorname{e}}ta_{j}^*{\operatorname{e}}ta_i \quad \text{or} \quad \bra{\tilde{{\operatorname{e}}ta}}\sigma\ket{\tilde{{\operatorname{e}}ta}}=\frac{1}{2^{2N}}\sum_{i,j=2^{N-1}+1}^{2^{N}} \varphi(\ketbra{i_H}{j_H}){\operatorname{e}}ta_{j}^*{\operatorname{e}}ta_i {\operatorname{e}}nd{gather} Now applying the axiom 2 it is found $\bra{\tilde{{\operatorname{e}}ta}}\sigma\ket{\tilde{{\operatorname{e}}ta}}\cdot 2^{2N}=\varphi(\ketbra{{\operatorname{e}}ta}{{\operatorname{e}}ta})$. So from $\sigma$ it can be recovered $\varphi$.\\ Now, since $\varphi$ for the axiom 3 is CP, in particular, we find that $\sigma$ must be a positive operator. This fact implies that we can have a diagonal decomposition of the form: \begin{gather} \sigma=\sum_{i=1}^{2^{M+N}} a_i \proj{s_i} {\operatorname{e}}nd{gather} where $a_i\geq 0$. Now for the axiom 3, it implies that $\varphi\wedge \mathbb{I}_N$ is an operator that preserves the parity SSR for states in $\mathcal{H}_S^N\wedge L$. Since $\proj{\alpha}$ is an SSR state, $\sigma$ is also an SSR operator. This fact implies that we can choose $\ket{s_i}$ so that they are SSR states, and order them so that the first $2^{M+N-1}$ are even, and the rest odd.\\ Once this has been seen, we define the operators $E_k$ using the mentioned decomposition above. Given a SSR state of $\mathcal{H}_S^N$, the action of $E_k$ is defined as: $E_k(\ket{{\operatorname{e}}ta})=2^N \sqrt{a_k}\bra{\tilde{{\operatorname{e}}ta}}\ket{s_k}$.\\ Now, given a $\rho$ of $\mathcal{H}_S^N$ it can be written as $\rho=\sum_i p_i \proj{{\operatorname{e}}ta_i}$. Then we have that: \begin{gather} \varphi(\rho)= \sum_i p_i \varphi(\proj{{\operatorname{e}}ta_i})=\sum_i p_i 2^{2N}\bra{\tilde{{\operatorname{e}}ta_i}}\sigma\ket{\tilde{{\operatorname{e}}ta_i}}=\sum_i p_i 2^{2N} \bra{\tilde{{\operatorname{e}}ta_i}}\left(\sum_{k=1}^{2^{M+N}} a_k\proj{s_k}\right)\ket{\tilde{{\operatorname{e}}ta_i}}=\nonumber \\ =\sum_i p_i \sum_{k=1}^{2^{M+N}} E_k \ket{{\operatorname{e}}ta_i}\bra{{\operatorname{e}}ta_i}E_k^{\dagger}=\sum_k E_k\rho E_k^{\dagger} {\operatorname{e}}nd{gather} The property that $0\leq \sum_k E_k^{\dagger}E_k\leq \mathbb{I}_N$ follows directly from the axiom 1 and the last statement. So is just left to see that $E_k$ preserves the parity SSR structure. 
Since for any $\ket{{\operatorname{e}}ta}\in\mathcal{H}_S^N$, $E_k$ acts on it as \begin{gather} E_k \ket{{\operatorname{e}}ta}=2^N \sqrt{a_k}\bra{\tilde{{\operatorname{e}}ta}}\ket{s_k} {\operatorname{e}}nd{gather} Since we have seen that $\ket{s_k}$ is super selected on the global space, it can be decomposed as: \begin{gather} \ket{s_k}=\sum_{j=1}^{2^{M-1}} \sum_{l=1}^{2^{N-1}} b_{k,j,l}\ket{j_H}\wedge\ket{l_L}+\sum_{j=1+2^{M-1}}^{2^{M}}\sum_{l=1+2^{N-1}}^{2^{N}} c_{k,j,l}\ket{j_H}\wedge\ket{l_L} \medspace \text{or} \\ \ket{s_k}=\sum_{j=1}^{2^{M-1}} \sum_{l=1+2^{N-1}}^{2^{N}} d_{k,j,l}\ket{j_H}\wedge\ket{l_L}+\sum_{j=1+2^{M-1}}^{2^{M}}\sum_{l=1}^{2^{N-1}} e_{k,j,l}\ket{j_H}\wedge\ket{l_L} {\operatorname{e}}nd{gather} Then, if $\ket{{\operatorname{e}}ta}$ is even, $\ket{\tilde{{\operatorname{e}}ta}}$ is even and it is found: \begin{gather} \bra{\tilde{{\operatorname{e}}ta}}\ket{s_k}=\sum_{j=1}^{2^{M-1}} \sum_{l=1}^{2^{N-1}} b_{k,j,l}\ket{j_H}\bra{\tilde{{\operatorname{e}}ta}}\ket{l_L}=\sum_{j=1}^{2^{M-1}} \tilde{b}_{k,j} \ket{j_H}\in \mathbb{C}\mathcal{H}_S^M \\ \text{or}\quad \bra{\tilde{{\operatorname{e}}ta}}\ket{s_k}=\sum_{j=1+2^{M-1}}^{2^{M}}\sum_{l=1}^{2^{N-1}} c_{k,j,l}\ket{j_H}\bra{\tilde{{\operatorname{e}}ta}}\ket{l_L}=\sum_{j=1+2^{M-1}}^{2^{M}} \tilde{e}_{k,j} \ket{j_H} \in \mathbb{C}\mathcal{H}_S^M {\operatorname{e}}nd{gather} And if $\ket{{\operatorname{e}}ta}$ is odd, using the same arguments it is found: \begin{gather} \bra{\tilde{{\operatorname{e}}ta}}\ket{s_k}=\sum_{j=1+2^{M-1}}^{2^{M}}\sum_{l=1+2^{N-1}}^{2^N} c_{k,j,l}\ket{j_H}\bra{\tilde{{\operatorname{e}}ta}}\ket{l_L}= \sum_{j=1+2^{M-1}}^{2^{M}}\tilde{c}_{k,j} \ket{j_H}\in \mathbb{C}\mathcal{H}_S^M \\ \text{or}\quad \bra{\tilde{{\operatorname{e}}ta}}\ket{s_k}= \sum_{j=1}^{2^{M-1}} \sum_{l=1+2^{N-1}}^{2^N} d_{k,j,l}\ket{j_H}\bra{\tilde{{\operatorname{e}}ta}}\ket{l_L}=\sum_{j=1}^{2^{M-1}} \tilde{d}_{k,j} \ket{j_H} \in \mathbb{C}\mathcal{H}_S^M {\operatorname{e}}nd{gather} Therefore all the properties are checked, and the implication is proven. {\operatorname{e}}nd{proof} \begin{theorem*} \textbf{\ref{thm:Sinespring-SupO-CPTP}. General quantum channels)} For a SSR fermionic quantum channel represented by a map $\varphi: \mathcal{R}_{S}^{N} \rightarrow \mathcal{R}_{S}^{N}$ the following statements are equivalent. \begin{enumerate} \item (Operator-sum representation.) There exists a set of SSR linear operators $E_k: \mathcal{H}_{S}^{N} \rightarrow \mathbb{C}\mathcal{H}_{S}^{N}$, where $\sum_k E_k^{\dagger}E_k = \mathbb{I}_{N}$, such that: \begin{equation} \varphi(\rho)=\sum_k E_k \rho E_k^{\dagger} {\operatorname{e}}nd{equation} \item (Axiomatic formalism.) $\varphi$ fulfills the following properties: \begin{itemize} \item Is trace preserving, i.e. ${\operatorname{Tr\,}}(\varphi(\rho))= 1$ for all ${\operatorname{Tr\,}}(\rho)= 1$ and $\rho\in\mathcal{R}_{S}^{N}$. \item Convex-linear, i.e. $\varphi\left(\sum_i p_i\rho_i \right)=\sum_i p_i \varphi(\rho_i)$ with $p_i$ probabilities. \item $\varphi: \mathcal{R}_{S}^{N} \rightarrow \mathcal{R}_{S}^{N}$ is CP. {\operatorname{e}}nd{itemize} \item (Stinespring dilation.) There exists a fermionic $K$-mode environment ($L$) with Hilbert space $L=\mathcal{H}_{S}^{K}$ and $K\geq N$, a SSR pure state $\omega=\proj{\psi} \in L$ and a parity SSR respecting unitary operator $\hat{U}$ that acts on $\mathcal{H}_{S}^{N}\wedge L$, such that: \begin{equation} \varphi(\rho)={\operatorname{Tr\,}}_{L}(\hat{U}(\rho\wedge \omega)\hat{U}^{\dagger}), \qquad \forall \rho\in \mathcal{R}_{S}^{N}. 
{\operatorname{e}}nd{equation} {\operatorname{e}}nd{enumerate} {\operatorname{e}}nd{theorem*} \begin{proof} First, we will proof the implication: 3 implies 1.\\ We choose an orthonormal SSR basis on $E$ denoted by $(f_i)$ such that the first $2^{K-1}$ modes are even, and the last $2^{K-1}$ are odd. Then we have that, since the SSR is respected by all the operators the partial trace can be written as \begin{gather} \varphi(\rho)={\operatorname{Tr\,}}_L (U(\rho\wedge\proj{\psi})U^{\dagger})=\sum_{i} \bra{f_i}_{L} U(\rho \wedge\proj{\psi})U^{\dagger})\ket{f_i}_L {\operatorname{e}}nd{gather} where $\ket{f_i}_L$ is an element that only acts on the subspace $L$ of $\mathcal{H}_S^N\wedge L$; it can be seen as $\mathbb{I}_H\wedge \ket{f_i}$. This equality holds due to properties of the partial trace for SSR operators, exposed in Subsection \ref{subsec:pt} . \\ Using this terminology and the fact that since $\rho$ is a SSR state, then it can be seen that : \begin{gather} \rho \wedge \proj{\psi}=\ket{\psi}_L \rho \bra{\psi}_L {\operatorname{e}}nd{gather} And therefore we have: \begin{gather} \varphi(\rho)= \sum_{i} \bra{f_i}_{L} U(\rho \wedge\proj{\psi})U^{\dagger})\ket{f_i}_L=\sum_{i} \bra{f_i}_{L} U(\ket{\psi}_L \rho \bra{\psi}_L)U^{\dagger})\ket{f_i}_L=\nonumber \\ = \sum_{i} (\bra{f_i}_{L} U\ket{\psi}_L) \rho (\bra{\psi}_L U^{\dagger}\ket{f_i}_L) {\operatorname{e}}nd{gather} Now if we define $E_{i}=\bra{f_i}_{L} U\ket{\psi}_L$, it is indeed a linear operator of the space $\mathcal{H}_S^N$. And it is found that: \begin{gather} E_{i}^{\dagger}=\bra{\psi}_L U^{\dagger}\ket{f_i}_L {\operatorname{e}}nd{gather} Therefore, it is proved that $\varphi(\rho)=\sum_{i} E_{i} \rho E_{i}^{\dagger}$. Now the two other properties of the operators $E_j$ have to be seen. First, if we compute: \begin{gather} \sum_j E_j^{\dagger}E_j=\sum_{j} \bra{\psi}_L U^{\dagger}\ket{f_j}_L \bra{f_j}_{L} U\ket{\psi}_L=\nonumber \\ =\bra{\psi}_L U^{\dagger}\sum_j \ket{f_j}_L \bra{f_j}_{L} U\ket{\psi}_L =\nonumber \\=\bra{\psi}_L U^{\dagger}U\ket{\psi}_L=\bra{\psi}_L \mathbb{I}_N\wedge \mathbb{I}_K\ket{\psi}_L=\mathbb{I}_N {\operatorname{e}}nd{gather} the last property that has to be checked is that $\forall j$ if $\ket{{\operatorname{e}}ta}$ is a SSR state then $E_j\ket{{\operatorname{e}}ta}$ is also a SSR state. Therefore lets suppose that ${\operatorname{e}}ta$ is a SSR state. Now since $U$ is a unitary that preserves SSR, it can only take the diagonal form, due to Theorem \ref{thm:Unitary}.\\ In order to make the notation lighter lets assume that the ordering is clear, and that if it is denoted a state by $\ket{E_i}$ it means that is an even state of the corresponding space, and if $\ket{O_j}$ then it is odd, and the sets where they belong conform an orthonormal basis. Then any SSR unitary acting in a space of $N+K$ modes can be decomposed as: \begin{gather} \hat{U}=\sum_{i,j,k,l} a_{i,j,k,l} \ket{E_i}\wedge \ket{E_j}\bra{E_k}\wedge \bra{E_l}+ b_{i,j,k,l} \ket{E_i}\wedge \ket{E_j}\bra{O_k}\wedge \bra{O_l}+\nonumber \\+c_{i,j,k,l}\ket{O_i}\wedge \ket{O_j}\bra{E_k}\wedge \bra{E_l}+d_{i,j,k,l}\ket{O_i}\wedge \ket{O_j}\bra{O_k}\wedge \bra{O_l}+\nonumber \\+A_{i,j,k,l} \ket{E_i}\wedge \ket{O_j}\bra{E_k}\wedge \bra{O_l}+ B_{i,j,k,l} \ket{E_i}\wedge \ket{O_j}\bra{O_k}\wedge \bra{E_l}+\nonumber \\+C_{i,j,k,l}\ket{O_i}\wedge \ket{E_j}\bra{E_k}\wedge \bra{O_l}+D_{i,j,k,l}\ket{O_i}\wedge \ket{E_j}\bra{O_k}\wedge \bra{E_l} {\operatorname{e}}nd{gather} The initial SSR environment state $\ket{\psi}$ can either be even or odd. 
Moreover, for every $E_j$, the corresponding $\ket{f_j}$ can also be even or odd. Therefore there are four different cases to be taken into account. \begin{gather} \ket{\psi} \in \text{even :} \nonumber \\ U \ket{\psi}_L=\sum_{i,j,k,l} a_{i,j,k,l} \ket{E_i}\wedge \ket{E_j}\bra{E_k} \scp{E_l}{\psi}+ c_{i,j,k,l} \ket{O_i}\wedge \ket{O_j}\bra{E_k} \scp{E_l}{\psi}+ \nonumber \\ +B_{i,j,k,l} \ket{E_i}\wedge \ket{O_j}\bra{O_k} \scp{E_l}{\psi}+D_{i,j,k,l} \ket{O_i}\wedge \ket{E_j}\bra{O_k} \scp{E_l}{\psi} \\ \ket{\psi} \in \text{odd :} \nonumber \\ U \ket{\psi}_L=\sum_{i,j,k,l} b_{i,j,k,l} \ket{E_i}\wedge \ket{E_j}\bra{O_k} \scp{O_l}{\psi}+ d_{i,j,k,l} \ket{O_i}\wedge \ket{O_j}\bra{O_k} \scp{O_l}{\psi}+ \nonumber \\ + A_{i,j,k,l} \ket{E_i}\wedge \ket{O_j}\bra{E_k} \scp{O_l}{\psi}+C_{i,j,k,l} \ket{O_i}\wedge \ket{E_j}\bra{E_k} \scp{O_l}{\psi} {\operatorname{e}}nd{gather} Thus, the four combinations end up giving: \begin{gather} \ket{\psi} \in \text{even ,} \medspace \ket{f_{i'}} \in \text{even :} \quad \qquad \bra{f_{i'}}_L U \ket{\psi}_L=\sum_{i,j,k,l} a_{i,j,k,l} \scp{f_{i'}}{E_j} \scp{E_l}{\psi} \ketbra{E_i}{E_k} +D_{i,j,k,l} \scp{f_{i'}}{E_j} \scp{E_l}{\psi} \ketbra{O_i}{O_k} \\ \ket{\psi} \in \text{even ,} \medspace \ket{f_{i'}} \in \text{odd :} \quad \qquad \bra{f_{i'}}_L U \ket{\psi}_L=\sum_{i,j,k,l} c_{i,j,k,l} \scp{f_{i'}}{O_j} \scp{E_l}{\psi} \ketbra{O_i}{E_k} +B_{i,j,k,l} \scp{f_{i'}}{O_j} \scp{E_l}{\psi} \ketbra{E_i}{O_k}\\ \ket{\psi} \in \text{odd ,} \medspace \ket{f_{i'}} \in \text{even :} \quad \qquad \bra{f_{i'}}_L U \ket{\psi}_L=\sum_{i,j,k,l} b_{i,j,k,l} \scp{f_{i'}}{E_j} \scp{O_l}{\psi} \ketbra{E_i}{O_k} +C_{i,j,k,l} \scp{f_{i'}}{E_j} \scp{O_l}{\psi} \ketbra{O_i}{E_k}\\ \ket{\psi} \in \text{odd ,} \medspace \ket{f_{i'}} \in \text{odd :} \quad \qquad \bra{f_{i'}}_L U \ket{\psi}_L=\sum_{i,j,k,l} d_{i,j,k,l} \scp{f_{i'}}{O_j} \scp{O_l}{\psi} \ketbra{O_i}{O_k} +A_{i,j,k,l} \scp{f_{i'}}{O_j} \scp{O_l}{\psi} \ketbra{E_i}{E_k} {\operatorname{e}}nd{gather} Thus we see that the $E_{i'}$ operators are linear SSR operators as established in Theorem \ref{thm:blockform}. Hence the implication is done. We observe that for the statement to hold is necessary to have the possibility of having anti-diagonal Kraus operators and that we can achieve the full generality of forms of the Kraus operators by choosing any $\ket{\psi}$.\\ Now lets proof the reverse, 1 implies 3, that follows from the immense amounts of degrees of freedom that one has to choose a unitary matrix.\\ Consider a map $\varphi$ from $\mathcal{H}_S^N$ to itself such that $\phi(\rho)= \sum_k E_k \rho E_k^\dagger$ for all SSR $\rho$, with $E_k$ a set of SSR linear operators such that $\sum_k E_k^\dagger E_k=\mathbb{I}_N$. We now consider a fermionic finite space generated by a number of modes $K$ equal to the number of Kraus operators $E_k$. We denote this new fermionic space by $L$, and by construction its dimension is $2^K$. We denote the elements of the canonical SSR basis $\mathcal{B}_L$ by $\{\ket{e_j}\}_{j=1}^{2^N}$. Once this is done we consider the following map: \begin{gather} V: \mathcal{H}_S^N \wedge \mathbb{C} \ket{e_1} \longrightarrow \mathcal{H}\wedge L \nonumber \\ \ket{\psi} \wedge \ket{e_1} \longmapsto \sum_{k} E_k \ket{\psi} \wedge \omega_k {\operatorname{e}}nd{gather} where $\omega_k$'s choice is conditioned to how transforms the corresponding $E_k$ the parity. If $E_k$ is block diagonal then an even state is selected, and an odd state is selected if $E_k$ flips parities by being block anti-diagonal. 
Is important to point out that at each time a different element of the orthonormal basis $\mathcal{B}_L$ is chosen. This choice can always be made due to the fact that $K\leq 2^{K-1}$ always. This procedure ensures that $V$ preserves the parity sending even states to even states and odd states to odd states. This follows since $\ket{e_1}=\ket{\Omega}_L$, so it is an even state.\\ Since this map is an isometry, we can extend it to a unitary map from $\mathcal{H}_S^N\wedge L$ to itself. Since the restriction is weak, we have many degrees of freedom left. Given that the map preserves parity is not difficult to check that we can choose the extension to preserve parity by sending even states to even states and odd states to odd states. \\ So with this reasoning we obtain a unitary operator $U$ that acts on $\mathcal{H}_S^N\wedge L$ and that preserves parity. Now we are in conditions to claim that if we choose $\omega=\proj{e_1}=\proj{\Omega}$ the statement holds. It just has to be calculated: \begin{gather} {\operatorname{Tr\,}}_{L}\left(U\left(\rho\wedge\omega\right)U^\dagger\right)={\operatorname{Tr\,}}_{L}\left(U\left(\sum_i p_i \proj{\psi_i}\wedge\proj{e_1}\right)U^\dagger\right)=\sum_i p_i {\operatorname{Tr\,}}_{E}\left(U\left(\ket{\psi_i}\wedge\ket{e_1}\right)\left(\bra{\psi_i}\wedge\bra{e_1}\right)U^\dagger\right)=\nonumber\\=\sum_i p_i {\operatorname{Tr\,}}_{L}\left(V\left(\ket{\psi_i}\wedge\ket{e_1}\right)\left(\bra{\psi_i}\wedge\bra{e_1}\right)V^\dagger\right)=\sum_i p_i {\operatorname{Tr\,}}_{L}\left(\left(\sum_k E_k\ket{\psi_i} \wedge \omega_k\right)\left(\sum_{k'} E_{k'}\ket{\psi_i}\wedge \omega_{k'}\right)^\dagger\right)=\nonumber\\=\sum_i\sum_k \sum_{k'} p_i {\operatorname{Tr\,}}_{L}\left(\left(E_k\proj{\psi_i}E_{k'}^\dagger \right)\wedge \left(\omega_k \omega_{k'}^\dagger \right)\right)=\sum_i\sum_k \sum_{k'} p_i \left(E_k\proj{\psi_i}E_{k'}^\dagger \right) \left(\omega_{k'}^\dagger\omega_k \right)= \nonumber \\=\sum_i\sum_k \sum_{k'} p_i \left(E_k\proj{\psi_i}E_{k'}^\dagger \right) \delta_{k k'}=\sum_k E_k \left(\sum_i p_i \proj{\psi_i}\right) E_{k}^\dagger = \sum_k E_k \rho E_{k}^\dagger {\operatorname{e}}nd{gather} Just as desired. Thus, the implication holds.\\ Finally, the equivalence between statements 1 and 2 follows directly from Theorem \ref{thm:Kraus-CPTP}. Redoing the proof it can be seen easily that the trace-preserving property becomes equivalent to impose $\sum_k E_k^\dagger E_k = \mathbb{I}_N$. {\operatorname{e}}nd{proof} {\operatorname{e}}nd{document}
\begin{document} \title{\bf On the star arboricity of hypercubes\thanks{This research is partially supported by a grant from the INSF.}} \vspace*{-1cm} \begin{center} \footnotesize{ Department of Mathematical Sciences \\ Sharif University of Technology \\ P.O. Box 11155--9415 \\ Tehran, I.R. IRAN \\ } \end{center} \begin{abstract} A hypercube $Q_n$ is a graph in which the vertices are all binary vectors of length $n$, and two vertices are adjacent if and only if their components differ in exactly one place. A galaxy or a star forest is a union of vertex disjoint stars. The star arboricity of a graph $G$, $\sa{G}$, is the minimum number of galaxies which partition the edge set of $G$. In this paper, among other results, we determine the exact values of $\sa{Q_n}$ for $n \in \{ 2^k-3, 2^k+1, 2^k+2, 2^i+2^j-4\}$, $i \geq j \geq 2$. We also improve the best known upper bound on $\sa{Q_n}$ and show the relation between $\sa{G}$ and square coloring. \\ \end{abstract} \section{Introduction and preliminaries} Hypercubes have numerous applications in computer science, for instance in the study of networks. Their architecture has played an important role in the development of parallel processing and is still quite popular and influential \cite{parhami2002}. An \textit{$n$-cube} or \textit{$n$-dimensional hypercube}, $Q_n$, is a graph in which the vertices are all binary vectors of length $n$, and two vertices are adjacent if and only if the Hamming distance between them is $1$, i.e. their components differ in exactly one place. $Q_n$ is also defined recursively in terms of the cartesian product of two graphs as follows: $$\begin{array}{l} Q_1=K_2 \\ Q_n= Q_{n-1} \square K_2, \end{array}$$ where $\square$ stands for the cartesian product. A \textit{galaxy} or a \textit{star forest} is a union of vertex disjoint stars. The \textit{star arboricity} of a graph $G$, denoted by $\sa{G}$, is the minimum number of galaxies which partition the edge set of $G$. The study of decomposing graphs into galaxies is naturally suggested by the analysis of certain communication networks such as radio networks. As an example, suppose that we need to transmit once along every edge, in order to check that there is indeed a connection between each adjacent pair. It is explained in \cite{MR1273589} that the minimum number of steps in which we can finish all the required transmissions is precisely $\sa{G}$. Star arboricity was introduced by Akiyama and Kano in $1982$ \cite{MR778395}, who called it the star decomposition index. In the literature some authors have used related concepts and notations such as the galactic number, $\gal{G}$, and the star number, ${\rm st}(G)$ or ${\rm s}(G)$. Star arboricity is closely related to \textit{arboricity}, the minimum number of forests which partition the edges of a graph $G$, denoted by $\arb{G}$. But unlike arboricity, which can be computed efficiently, even deciding whether the star arboricity of an arbitrary graph is at most $2$ is an NP-complete problem \cite{MR959906}; see also \cite{MR2528046}. Clearly, by definition $\arb{G} \leq \sa{G}$. Furthermore, it is easy to see that any tree can be covered by two star forests, thus $\sa{G} \leq 2\arb{G}$. Alon et al. \cite{MR1194728} showed that for each $k$, there exists a graph $G_k$ with $\arb{G_k}=k$ and $\sa{G_k}=2k$. They also showed that for any graph $G$, $\sa{G} \leq \arb{G}+O(\log_2 \Delta(G))$, where $\Delta(G)$ is the maximum degree of $G$.
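To illustrate these notions on the smallest hypercubes (a warm-up example; the explicit decomposition below is ours), note that $Q_2$ is the $4$-cycle on the vertices $00,01,11,10$, and its edge set splits into two perfect matchings, $$ E(Q_2)=\big\{\{00,01\},\{10,11\}\big\} \ \cup \ \big\{\{00,10\},\{01,11\}\big\}, $$ each of which is a galaxy whose stars are single edges; since a cycle is not a star forest, this gives $\sa{Q_2}=2$, and by the results recalled in the next section one also has $\sa{Q_3}=3$. These are the smallest instances of the exact values discussed below.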
In \cite{MR959906} and \cite{MR885263} the star arboricity of $Q_n$ is studied and it is shown that $\sa{Q_{2^n-2}}=2^{n-1}$ and $\sa{Q_{2^n-1}}=2^{n-1}+1$. Here by extending earlier results, we find exact values of $\sa{Q_n}$ for $n \in \{2^k-4, 2^k-3, 2^k, 2^k+1, 2^k+2, 2^k+4, 2^k+2^j-4\}$. Also we introduce a new upper bound and show a relation between $\sa{G}$ and square coloring. \section{Some earlier results} In this section we mention some earlier results about the star arboricity of general graphs which are used in the next section. In the following theorem Akiyama and Kano found an exact value for the star arboricity of complete graphs $K_n$. \begin{theorema}{ \rm (\cite{MR778395})} \label{tcomplete} Let $n \geq 4$. Then the star arboricity of the complete graph of order $n$ is $\lceil{n \over 2} \rceil+1$, {\rm i.e.} $ \sa{K_n}=\lceil {n \over 2} \rceil+1. $ \end{theorema} In the next lemma an upper bound for the star arboricity of product of two graphs is given. \begin{lemma}{ \rm (\cite{MR959906})} \label{lcart} The star arboricity of the cartesian product of two graphs satisfies \linebreak $ \sa{G \square H} \leq \sa{G}+\sa{H}. $ \end{lemma} Next we state some of Truszczy{\'n}ski's results \cite{MR885263} which will be used in this paper. \begin{theorema}{ \rm (\cite{MR885263})} \label{Tdreg} Let $G$ be an $n$-regular graph, $n \geq 2$. Then $ \sa{G} \geq \lceil {n \over 2} \rceil +1. $ \end{theorema} \begin{lemma}{ \rm (\cite{MR885263})} \label{l6} Let $G$ be an $n$-regular graph, where $n$ is an even number. If \ $\chi(G)>~{{n \over 2}+1}$ \ or if \ ${{n \over 2}+1}$ does not divide $|V(G)|$, then $\sa{G} \geq {{n \over 2}+2}$. \end{lemma} The following question is also raised about the upper bound for $\sa{G}$. \begin{problem} { \rm (\cite{MR885263})} Is it true that for every $n$-regular graph $G$, $$ \Big\lceil {n \over 2} \Big\rceil +1 \leq \sa{G} \leq \Big\lceil {n \over 2} \Big\rceil +2 \ ? $$ \end{problem} \begin{lemma}{ \rm (\cite{MR885263})} \label{ltruz} If $k \geq 2$, then there is a partition ${\cal A}=\{ A_1,A_2, \ldots , A_{2^{k-1}} \}$ \ of \ $V(Q_{2^k-2})$ such that \begin{enumerate}[(i)] \item for every $i$, $1\leq i \leq 2^{k-1}$, $A_i$ is independent, \item for every $i$, $j$, $1\leq i < j\leq 2^{k-1}$, the subgraph of $Q_{2^k-2}$ induced by $A_i \cup A_j$, is $2$-regular. \end{enumerate} \end{lemma} Proof of Lemma~\ref{ltruz} in \cite{MR885263} is constructive and as an example a decomposition of $Q_6$ into $4$ sets is presented in Table~\ref{tab:Q6}. This will be used in the next section. \begin{table}[h] \begin{center} \includegraphics[scale=.45]{pic7.PNG} \caption{A vertex decomposition of $Q_6$ by Lemma~\ref{ltruz}. }\label{tab:Q6} \end{center} \end{table} In the earlier results the star arboricity of $Q_n$ was determined in just two cases. \begin{theorema}{ \rm (\cite{MR885263})} \label{t1} $\sa{Q_{2^k-2}}=2^{k-1}$ for $k \geq 2$. \end{theorema} By Theorem~\ref{Tdreg}, Lemma~\ref{l6} and Theorem~\ref{t1} we have, \begin{corollary} \label{ceven} $\sa{Q_{n}} \ge \lfloor \frac{n}{2} \rfloor +2$, \ except for $n=2^a-2$, $a\ge 2$. For $n=2^a-2$, we have $\sa{Q_{2^a-2}}= \lfloor \frac{n}{2} \rfloor +1=2^{a-1}$. \end{corollary} \begin{corollary}{ \rm (also in \cite{MR959906})}\label{l2} $\sa{Q_{2^k-1}}=2^{k-1}+1$, $k \geq 2$. \end{corollary} \begin{proof} $\sa{Q_{2^k-1}} \geq 2^{k-1}+1$ by Corollary~\ref{ceven}, and $\sa{Q_{2^k-1}}\leq \sa{Q_{2^k-2}}+1=2^{k-1}+1$.\end{proof} The following bounds also are given in \cite{MR885263}. 
\begin{theorema}{ \rm (\cite{MR885263})} \label{tlog} $\lceil {n+1 \over 2}\rceil+1 \leq \sa{Q_n} \leq \lceil {n \over 2}\rceil +\log_2 n$, \ for every $n \geq 3$, {\rm [}except for $n=2^{a}-2$ \ and \ $a\geq2$ {\rm ]}. \end{theorema} In the next section we introduce more exact values of star arboricity of some $Q_n$. \section{Hypercubes} In this section we focus on the star arboricity of hypercubes and extend earlier results. Based on the results we conjecture that, \begin{conjecture} \label{qn} For \ $n \neq 2^a-2$, $\sa{Q_{n}} = \lfloor \frac{n}{2} \rfloor +2$, and for \ $n = 2^a-2$, $\sa{Q_{2^a-2}}= \lfloor \frac{n}{2} \rfloor +1=2^{a-1}$, $a \geq 2$. \end{conjecture} Note that the second part of the Conjecture~\ref{qn} is known to be true (Theorem~\ref{t1}). \begin{theorem}\label{tt} If Conjecture~\ref{qn} holds for an odd integer $n$, then it holds for $n-1$ and $n+1$. \end{theorem} \begin{proof} If $n-1=2^a-2$ or $n+1=2^a-2$ for some $a$, then the statement follows. Otherwise let $n=2k+1$. For $n-1=2k$, we have $\sa{Q_{2k}} \leq \sa{Q_{2k+1}}=k+2$; the statement follows by Corollary~\ref{ceven}. For $n+1=2k+2$, we have $\sa{Q_{2k+2}} \leq \sa{Q_{2k+1}}+1=k+3$ and again by Corollary~\ref{ceven} the statement follows.~\end{proof} By Theorem~\ref{tt}, one only needs to show Conjecture~\ref{qn}, for odd numbers. \subsection{Exact values } \begin{proposition}\label{l4} $\sa{Q_{2^k+2^{j}-4}}=2^{k-1}+2^{j-1}$, \ for \ $k \geq j \geq 2$. \end{proposition} \begin{proof} By Lemma~\ref{lcart} and Theorem~\ref{t1}, each we have $\sa{Q_{2^k+2^{j}-4}} \le \sa{Q_{2^k-2}}+ \sa{Q_{2^{j}-2}} =2^{k-1}+2^{j-1 }$. Also by Corollary~\ref{ceven}, $\sa{Q_{2^k+2^{j}-4}} \ge 2^{k-1}+2^{j-1}$. So $\sa{Q_{2^k+2^{j}-4}}=2^{k-1}+2^{j-1}$.~\end{proof} The following lemma is useful tool for the next theorem. \begin{lemma}\label{lg1} If a graph $G$ satisfies the following conditions then $\sa{G}=2$. \begin{enumerate} \item \label{c1} G is tripartite with $V(G)=V_1 \cup V_2 \cup V_3$. \item \label{c2} Each vertex in $V_1$ or $V_2$ has degree $4$ and each vertex in $V_3$ has degree $2$. \item \label{c3} Each vertex in $V_1$ or $V_2$ has exactly $2$ neighbours in $V_3$ and each vertex in $V_3$ is adjacent to both $V_1$ and $V_2$. \end{enumerate} \end{lemma} \begin{proof} We decompose the edges of $G$ into two galaxies in such a way that all of the stars are $K_{1,3}$. The induced subgraph on $H=\langle V_1 \cup V_2 \rangle$ is a bipartite graph. This bipartite graph must be a disjoint union of some even cycles. So we can partition edges of $H$ into the sets $M_1$ and $M_2$, such that each of them is a perfect matching in $H$. Now we partition the edges of $G$ into two galaxies $G_1$ and $G_2$: The first one, $G_1$, is the union of $M_1$, with an induced subgraph of $\langle V_1 \cup V_3 \rangle$. In a similar way $G_2$ is the union of $M_2$, with an induced subgraph of $\langle V_2 \cup V_3 \rangle$. \end{proof} \begin{theorem} \label{l9} $\sa{Q_{2^k+1}}=2^{k-1}+2$, \ for \ $k \geq 2$. \end{theorem} \begin{proof} By Corollary~\ref{ceven}, $ \sa{Q_{2^k+1}} \geq 2^{k-1}+2$. So it suffices to partition the edges of $Q_{2^k+1}$ into $2^{k-1}+2$ galaxies. We know that $Q_{2^k+1}= Q_{2^k-2} \square Q_3$. Also by Lemma~\ref{ltruz} the vertices of $Q_{2^k-2}$ can be partitioned into $2^{k-1}$ sets, ${\cal A}=\{ A_1,A_2, \ldots , A_{2^{k-1}} \}$, such that an induced subgraph between each two sets is a $2$-regular subgraph. Now we need some conventions and notations. 
For a fixed $3$-bit codeword $c$ from $Q_3$, we extend each set $A_i$, $1 \leq i \leq 2^{k-1}$, to a set $A_i(c)$ consisting of codewords of length $2^k+1$: for each codeword $c^\prime \in A_i$, the set $A_i(c)$ contains the codeword obtained by appending $c$ to the end of $c^\prime$. Therefore for each pair of indices $i$ and $j$, the induced subgraph on $A_i(c) \cup A_j(c)$ in $Q_{2^k+1}$ is a $2$-regular graph which can be decomposed into two perfect matchings. We denote them by $A_i(c) \rightarrow A_j(c)$ and $A_j(c) \rightarrow A_i(c)$. Also for any two $3$-bit codewords $c_1$ and $c_2$ which differ in only one bit, the induced subgraph on $A_i(c_1) \cup A_i(c_2)$ is a perfect matching between those sets, which we denote by $A_i(c_1) \parallel A_i(c_2)$. Also for any $3$-bit codeword $c$ we denote by $c^i$ the $3$-bit codeword which differs from $c$ exactly in the $i$-th bit, and by $\overline{c}$ the complement of $c$, which differs from $c$ in all bits. Also the set of all $3$-bit codewords with even weight is denoted by $E_c$, i.e. $E_c=\{000, 011, 110, 101\}$. Now we are ready to introduce our $2^{k-1}$ galaxies. For each $i$, $1 \leq i \leq 2^{k-1}$, define $G_i$ as follows: $$G_i = \mathop{\bigcup}_{c \in E_c}\{ [\mathop{\cup}_{j \neq i}\big(A_i(c) \rightarrow A_j(c)\big)] \cup [A_i(c) \parallel A_i(c^1)] \cup [\mathop{\cup}_{k \neq i , i+1} \big(A_{i+1}(c^1) \rightarrow A_k(c^1)\big)] \},$$ \ \ \ \ $1 \leq j , k \leq 2^{k-1}.$ \\ \\ Note that in the above formula the indices $i$, $j$ and $k$ are considered modulo $2^{k-1}$. Next we prove that the following statements hold for $G_i$: \begin{description} \item[] {\bf Statement 1.}\label{a} Every $G_i$ is a galaxy. \item[] {\bf Statement 2.} \label{b} The remaining edges satisfy the conditions of Lemma~\ref{lg1}. \end{description} Using these two statements, we derive $\sa{Q_{2^k+1}}\leq 2^{k-1}+2$, and the theorem follows. Before proving these statements, as an example, we illustrate our construction in the case of $Q_9$. We have $Q_9 = Q_6 \square Q_3$. A decomposition of $Q_6$ into $4$ sets as in Lemma~\ref{ltruz} was presented in Table~\ref{tab:Q6}. In Figure~\ref{fig:aQ9}:(a), we show a copy of $Q_3$ and a figure in which each of these partitioned sets is a vertex of $2K_4$, where each edge stands for a perfect matching between the two corresponding sets. In Figure~\ref{fig:aQ9}:(b) the galaxy $G_1$ is represented, where again each edge represents a perfect matching. To illustrate further, we have also shown $G_3$ in Figure~\ref{fig:aQ9}:(c). Figure~\ref{fig:aQ9}:(d) shows the last two galaxies, obtained from the remaining edges. Each of these galaxies can be mapped to a galaxy of $Q_9$ by a blow-up. \begin{figure} \caption{(a) $Q_6 \square Q_3$, (b) $G_1$, (c) $G_3$, (d) Galaxies obtained in Statement 2. } \label{fig:aQ9} \end{figure} { \bf Proof of Statement 1.} By the definitions, it is obvious that for each $G_i$ and for each $c \in E_c$, the independent sets $A_i(c)$, $A_j(c)$, $A_i(c^1)$, $A_{i+1}(c^1)$ and $A_k(c^1)$, $1 \leq j, \ k \leq 2^{k-1}$, \ $j \neq i$, \ $k \neq i,i+1$, are mutually disjoint. By construction, every component is a star, and $[\mathop{\cup}_{j \neq i}\big(A_i(c) \rightarrow A_j(c)\big)] \cup [A_i(c) \parallel A_i(c^1)]$ is a union of stars with centers at the vertices in $A_i(c)$, and similarly $\mathop{\cup}_{k \neq i , i+1} \big(A_{i+1}(c^1) \rightarrow A_k(c^1)\big)$ is a union of stars with centers at the vertices in $A_{i+1}(c^1)$.
Since every $c^1$ corresponds to exactly one $c$, these stars do not overlap. {\bf Proof of Statement 2.} We must prove that the remaining edges form a graph which satisfies the conditions of Lemma~\ref{lg1}. Let $$ \begin{array}{l} V_1=\bigcup_{1 \leq s \leq 2^{k-2}}\big( A_{2s}(001) \cup A_{2s}(100) \cup A_{2s-1}(010) \cup A_{2s-1}(111) \big), \\ V_2=\bigcup_{1 \leq s \leq 2^{k-2}}\big( A_{2s-1}(001) \cup A_{2s-1}(100) \cup A_{2s}(010) \cup A_{2s}(111) \big), \\ V_3=\bigcup_{c \in E_c}\big( \cup_{1 \leq i \leq 2^{k-1}} A_i(c) \big). \end{array} $$ \\ First, we show that in the graph of remaining edges every vertex of $V_3$ has degree $2$. By construction, each vertex in $V_3$, i.e. each vertex of $A_i(c)$ with $c \in E_c$, is a center of degree $2^{k-1}-1+1$ in a star of $G_i$, and is a leaf in every other $G_j$, $j \neq i$, $1 \leq i,j \leq 2^{k-1}$. So the given galaxies cover a total of $(2^{k-1}-1+1) + (2^{k-1}-1)\times 1=2^k-1$ edges incident to each vertex in $A_i(c)$. Since $Q_{2^k+1}$ is a $(2^k+1)$-regular graph, in the remaining graph the degree of each vertex in $A_i(c)$ is $2$. Next we show that each of the remaining vertices, which lie in $V_1 \cup V_2=\bigcup_{c \in E_c}\big(\cup_i A_i(\overline{c})\big)$, $1\leq i \leq 2^{k-1}$, has degree $4$. For each vertex $v$ in $A_i(\overline{c})$, it is clear that the galaxy $G_{i-1}$ covers $2^{k-1}-2$ edges of $v$ and each of the other galaxies, $G_j$, $j \neq i-1$, covers $1$ edge of $v$. So $(2^{k-1}-2)+(2^{k-1}-1) \times 1=2^k-3$ edges of each vertex in $A_i(\overline{c})$ are covered by the galaxies $G_j$, $1\leq j \leq 2^{k-1}$. Thus each vertex in $A_i(\overline{c})$ has $4$ uncovered edges. Therefore each of the vertices in $V_1 \cup V_2$ has degree $4$ and the vertices in $V_3$ have degree $2$, hence Condition~(\ref{c2}) is satisfied. For Condition~(\ref{c3}), note that in the first $2^{k-1}$ galaxies only one edge from each vertex of $V_3$, i.e. of $A_i(c)$ ($i$ fixed and $c \in E_c$), to $V_1 \cup V_2$ is covered, namely the matching edge to $A_i(c^1)$. So in the remaining graph each vertex in $V_3$, which has degree $2$, is adjacent to a vertex of degree $4$ in $A_i(c^2)$ and to another vertex of degree $4$ in $A_i(c^3)$. Since $A_i(c^2) \cup A_i(c^3)\subseteq V_1 \cup V_2$, the vertices of $V_3$ are independent. Since $c^2$ and $c^3$ differ in the last two bits, $A_i(c^2)$ and $A_i(c^3)$ cannot be in the same $V_j$, $j \in \{1, 2\}$. Also each vertex in $A_i(\overline{c})$, which is in $V_1$ or $V_2$, is adjacent to a vertex of degree $2$ in $A_i(\overline{c}^2)$ and to another vertex of degree $2$ in $A_i(\overline{c}^3)$. Thus each vertex in $V_1$ and $V_2$ has exactly $2$ neighbors in $V_3$. Hence Condition~(\ref{c3}) holds. To prove Condition~(\ref{c1}), it remains to show that each of $V_1$ and $V_2$ is an independent set. As we have seen, each vertex in $A_i(\overline{c})$ has degree $4$ and two of its neighbors are in $V_3$; the other two neighbors are in the sets $A_{i+1}(\overline{c})$ and $A_{i-1}(\overline{c})$. By the definition of $V_1$ and $V_2$, it is obvious that $A_i(\overline{c})$ is not in the same $V_j$, $j \in \{1, 2\}$, as $A_{i+1}(\overline{c})$ and $A_{i-1}(\overline{c})$. \end{proof} \begin{lemma} \label{lf} $\sa{Q_n}=\lfloor {n \over 2}\rfloor+2$ \ for \ $n=2^k+4$ \ and \ $2^k-4 \leq n \leq 2^k+2$ \ except for \ $n=2^k-2$. \end{lemma} \begin{proof} We can check that the statement holds for $n \leq 10$, see Table~\ref{tab:table2}.
\begin{table}[!ht] \begin{center} $\begin{array}{c|cccccccccc} n &1 &2 &3 & 4 &5 &6 & 7 &8 &9 & 10 \\ \hline {\rm sa}(Q_{n}) &1 &2 &3 & 4 &4 &4 & 5 &6 &6 & 7 \\ \hline \end{array} $ \caption{$\sa{Q_n}$ for $n \leq 10$. }\label{tab:table2} \end{center} \end{table} So let $k \geq 3$. Apart from the cases already covered by Theorem~\ref{t1}, Theorem~\ref{l9} and Corollary~\ref{l2}, the statement holds for the remaining cases as follows: \begin{itemize} \item $n=2^k+4$, (by Proposition~\ref{l4} for $j=3$), \item $n=2^k+2$, (by Theorem~\ref{l9} and Theorem~\ref{tt} for $n=2^k+1$), \item $n=2^k$, (by Proposition~\ref{l4} for $j=2$) and (by Theorem~\ref{l9} and Theorem~\ref{tt} for $n=2^k+1$), \item $n=2^k-3$, ($\sa{Q_{2^k-3}} \leq \sa{Q_{2^k-2}}=2^{k-1}$ and by Corollary~\ref{ceven} for $n=2^k-3$), \item $n=2^m-4$, (by Proposition~\ref{l4} for $k=j=m-1$) and (by Theorem~\ref{tt} for $n=2^m-3$). \end{itemize} \vspace*{-5mm}\end{proof} The value of $\sa{Q_{2^k+3}}$ is left open, but we know that $2^{k-1}+3 \leq \sa{Q_{2^k+3}} \leq 2^{k-1}+4$. So the smallest unknown case is $\sa{Q_{11}}$. \subsection{An upper bound} In the following theorem, we improve the known upper bound on $\sa{Q_n}$ by a method similar to the proof of Theorem~\ref{tlog}. \begin{theorem}\label{tUp} Let $n$ be an even integer. We can write $n$ as $n=\sum_{j=1}^l(2^{i_j}-2)+r$, where $r$ is in $ {\cal R}= \{ 2^k+2, 2^s+2^t-4\}$, $s \geq t \geq 2$, $i_1 > i_2 > \cdots > i_l$ and $l$ is the smallest number with this property. Then $$ \sa{Q_n} \leq {n \over 2}+l+2. $$ \end{theorem} \begin{proof} If $n \in {\cal R}$ then $l=0$ and the statement holds by Lemma~\ref{lf} or by Proposition~\ref{l4}. Otherwise it is easy to see that $n$ can be written as $2^{i_1}-2+r_1$, where $i_1$ is the largest possible integer and $r_1$ is the remainder. If $r_1=0$ then the statement holds by Theorem~\ref{t1}, else $4 < r_1 < 2^{i_1}-2$. If $r_1 \in {\cal R}$ then $l=1$ and by Lemma~\ref{lf} and Proposition~\ref{l4}, $\sa{Q_{r_1}}={r_1 \over 2}+2$ and $\sa{Q_n} \leq \sa{Q_{2^{i_1}-2}}+\sa{Q_{r_1}}=2^{i_1-1}+{r_1 \over 2}+2={n \over 2}+3$, and we are done. Else, again $r_1$ can be written as $2^{i_2}-2+r_2$, where $i_2$ is the largest possible integer and so on. Thus assume $n=\sum_{j=1}^l(2^{i_j}-2)+r$, $r \in {\cal R}$, then \\ \\ $ \begin{array}{rl} \sa{Q_{n}} & \leq \sa{Q_{2^{i_1}-2}} + \sa{Q_{2^{i_2}-2}}+ \cdots + \sa{Q_{2^{i_l}-2}} + \sa{Q_r} \\ & = 2^{i_1-1} + 2^{i_2-1} + \cdots + 2^{i_l-1} + \dfrac{\strut r }{\strut 2}+2\\ & = \dfrac{\strut 2^{i_1}-2 }{2} + \dfrac{\strut 2^{i_2}-2}{ 2} + \cdots + \dfrac{\strut 2^{i_l}-2 }{ 2} + l+ \dfrac{ r }{ 2}+2 \\ & = \dfrac{\strut n-r }{ 2} + l + \dfrac{r }{ 2}+2 \\ & = \dfrac{\strut n }{ 2} + l + 2. \end{array}$ \\ \vspace*{-9mm} \end{proof} \vspace*{9mm} \begin{corollary} Let $n \geq 1$. Then $\sa{Q_n}\leq \lceil {n \over 2}\rceil +l +2$, where $l$ is obtained as in Theorem~\ref{tUp} for $n$ if $n$ is even, and for $n-1$ if $n$ is odd. \end{corollary} \begin{corollary} $\sa{Q_n} \leq \lceil {n \over 2}\rceil +\lfloor \log_2 n \rfloor-1$ \ for \ $n \geq 5$. \end{corollary} \begin{proof} For $n=5$ or $6$ it follows from Lemma~\ref{lf}.
If the statement holds for an even number $n$, then it holds for $n+1$ as follows $$\begin{array}{lll} \sa{Q_{n+1}} &=\sa{Q_n \square K_2} \\ & \leq \strut \sa{Q_n}+1 & \\ & \leq \Big\lceil \dfrac{\strut n }{\strut 2} \Big\rceil +\lfloor \log_2 n \rfloor -1+1 & \\ & = \Big\lceil \dfrac{\strut n +1 }{\strut 2} \Big\rceil +\lfloor \log_2 (n+1) \rfloor -1 & ({\rm Since \ for \ even} \ n, \ \lfloor \log_2 (n+1) \rfloor =\lfloor \log_2 n \rfloor). \end{array}$$ So it suffices to prove the corollary for even numbers $n$. It is easy to see that $n$ can be represented as a sum of at most $\lfloor \log_2 n \rfloor$ numbers of the form $2^k-2$, i.e. $n=\sum_{j=1}^m(2^{i_j}-2)$, $m \leq \lfloor \log_2n \rfloor$. In Theorem~\ref{tUp}, we represented $n=\sum_{j=1}^l(2^{i_j}-2)+r$. So $l \leq \lfloor \log_2 n \rfloor - \lfloor \log_2 r \rfloor$. As in Theorem~\ref{tUp}, $\sa{Q_n} \leq {n-r \over 2}+ \lfloor \log_2 n \rfloor - \lfloor \log_2 r \rfloor +\sa{Q_r}$, $r \in {\cal R}$. Now if $r=2^k+2$, then we have \\ \\ $ \begin{array}{ll} \sa{Q_n} &\leq \dfrac{n-2^k-2 }{2}+ \lfloor \log_2 n \rfloor -k+2^{k-1}+3 \\ & = \dfrac{\strut n}{\strut 2} - 2^{k-1}-1 + \lfloor \log_2 n \rfloor -k + 2^{k-1}+3 \\ &= \dfrac{\strut n}{\strut 2}+ \lfloor \log_2 n \rfloor -k +2.\end{array} $ \\ \\ If $r=2^k+2^j-4$ and $k \geq j \geq 2$, \\ \\ $ \begin{array}{ll} \sa{Q_n} &\leq \dfrac{n-2^k-2^j+4 }{ 2}+ \lfloor \log_2 n \rfloor -k+2^{k-1}+2^{j-1} \\ & = \dfrac{\strut n }{\strut 2} - 2^{k-1}-2^{j-1}+2 + \lfloor \log_2 n \rfloor -k + 2^{k-1}+2^{j-1} \\ &=\dfrac{\strut n }{\strut 2}+ \lfloor \log_2 n \rfloor -k +2. \end{array} $ \\ \\ As we have seen, in both cases $\sa{Q_n} \leq {n \over 2}+ \lfloor \log_2 n \rfloor -k +2$. In both cases we may assume that $k \geq 3$. For example, in the case $r=2^k+2^j-4$, assume $k=2$; then $j=2$ and $r=4$. Hence the last two terms in $\sum_{j=1}^l(2^{i_j}-2)+r$ are $2^{i_l}-2$ and $r$, where $i_l \geq 3$. So we have $2^{i_l}-2+r=2^{i_l}-2+4=2^{i_l}+2 \in {\cal R}$, which contradicts the minimality of $l$ in the choice of this representation. Therefore $\sa{Q_n} \leq {n \over 2}+ \lfloor \log_2 n \rfloor -k +2 \leq {n \over 2}+ \lfloor \log_2 n \rfloor-1$. \end{proof} \section{Coloring and star arboricity} The connection of star arboricity with other colorings, such as incidence coloring and acyclic coloring, has been studied (see \cite{MR1428581} and \cite{MR1375101}). In this section we consider the connection between square coloring and star arboricity of graphs. The square of a graph $G$ is the graph, denoted by $G^2$, with ${\rm V}(G^2)={\rm V}(G)$, in which two distinct vertices are adjacent if their distance in $G$ is at most $2$. A square-coloring of $G$ is a proper coloring of $G^2$. Let $\chi(G^2)$ be the minimum number of colors used in any square-coloring of $G$. \begin{theorem} \label{tsquare} If $\chi(G^2) \leq k$ \ then \ $\sa{G}\leq \lceil {k \over 2}\rceil+1$, $k \geq 4$. \end{theorem} \begin{proof} Let $c$ be a proper $k$-coloring of $G^2$ with color classes $C_1, C_2, \ldots, C_k$. We show that in the induced subgraph on each pair of color classes every vertex has degree at most $1$. Assume to the contrary that there are two classes $C_i$ and $C_j$, $1 \leq i, j \leq k$, such that the induced subgraph on them has a vertex of degree at least $2$. Without loss of generality let $v$ be a vertex in $C_i$ with ${\rm deg}_{\langle C_i \cup C_j\rangle }v \geq 2$. So $v$ has at least two neighbors $u$ and $w$ in $C_j$. But then $u$ and $w$ are adjacent in $G^2$, which contradicts the assumption that $c$ is a proper coloring of $G^2$.
Thus the vertices of $G$ are partitioned into $k$ independent sets such that the induced subgraph on each pair of them is a matching. Now, using this partition, we construct a graph $H$ as follows. Each vertex of $H$ corresponds to a color class of $G$ and two vertices are adjacent if there is an edge between their corresponding color classes. Clearly $H$ is a subgraph of $K_k$. Thus, by Theorem~\ref{tcomplete}, $\sa{H} \leq \lceil {k \over 2} \rceil+1$. By a blow-up, each galaxy of $H$ can be mapped to a galaxy of $G$, so $\sa{G} \leq \lceil {k \over 2} \rceil+1$. \end{proof} Note that the bound in the theorem can be sharp. For example, for $Q_{2^t-1}$ we have $\chi(Q_{2^t-1}^2)=2^t$ (see \cite{MR2452764} and \cite{MR2098840}), which implies that $\sa{Q_{2^t-1}} \leq 2^{t-1}+1$; by Corollary~\ref{l2} equality holds. \end{document}
\begin{document} \title{Deformations of quasicoherent sheaves of algebras} \author{Valery A.~Lunts} \address{Department of Mathematics, Indiana University, Bloomington, IN 47405, USA} \email{vlunts@@indiana.edu} \begin{abstract} Gerstenhaber and Schack ([GS]) developed a deformation theory of presheaves of algebras on small categories. We translate their cohomological description to sheaf cohomology. More precisely, we describe the deformation space of (admissible) quasicoherent sheaves of algebras on a quasiprojective scheme $X$ in terms of sheaf cohomology on $X$ and $X\times X$. These results are applied to the study of deformations of the sheaf $D_X$ of differential operators on $X$. In particular, in case $X$ is a flag variety we show that any deformation of $D_X$, which is induced by a deformation of ${\cal O}_X$, must be trivial. This result is used in [LR3], where we study the localization construction for quantum groups. \end{abstract} \maketitle \section{Introduction} Let $X$ be a topological space, $k$ be a field, and ${\cal A} _X$ be a sheaf of $k$-algebras on $X$. We would like to study infinitesimal deformations of ${\cal A} _X$. Such deformatioms form a $k$-vector space which we denote by $\operatorname{def} ({\cal A} _X)$. In case $X=pt$ it is well known that the infinitesimal deformations of (the $k$-algebra) $A={\cal A} _X$ are controlled by the Hochschild cohomology of $A$. More precisely, $\operatorname{def} (A)=HH^2(A)=\operatorname{Ext} ^2_{A\otimes A^o} (A,A)$. However, for a general $X$ and ${\cal A} _X$ the situation is more subtle. More generally, given an ${\cal A} _X$-bimodule ${\cal M} _X$ we may ask for cohomological interpretation of $exal({\cal A} _X,{\cal M} _X)$ -- the space of algebra extensions of ${\cal A} _X$ by ${\cal M} _X$ ($exal({\cal A} _X,{\cal A} _X)= \operatorname{def} ({\cal A} _X)$). Gerstenhaber and Schack ([GS]) developed a deformation theory of {\it presheaves} of algebras. Given a small category ${\cal U} $ and a presheaf of algebras ${\cal A} _{{\cal U}}$ on ${\cal U}$ (i.e. a contravariant functor from ${\cal U}$ to the category of $k$-algebras) they consider the space $\operatorname{def} ({\cal A} _{{\cal U}})$ of infinitesimal deformations of ${\cal A} _{{\cal U}}$ and give it a cohomological interpretation. Namely, given an ${\cal A} _{{\cal U}}$-bimodule ${\cal M} _{{\cal U} }$ they define a natural exact sequence of complexes of $k$-vector spaces $$0\to T_a^\bullet({\cal M} _{{\cal U}})\to T^\bullet ({\cal M} _{{\cal U}}) \to \bar{T}^\bullet ({\cal M} _{{\cal U}})\to 0.$$ The middle term is the total complex of the {\it simplicial bar resolution} of ${\cal M} _{{\cal U}}$ and $$H^i(T ^\bullet({\cal M} _{{\cal U}}))= \operatorname{Ext}^i_{{\cal A} _{{\cal U}}\otimes {\cal A} _{{\cal U}}^o}({\cal A} _{{\cal U}},{\cal M} _{{\cal U}})$$ -- the Hochschild cohomology of ${\cal A} _{{\cal U}}$ with coefficients in ${\cal M} _{{\cal U}}$. The cohomology $H^i(\bar{T}^\bullet ({\cal M} _{{\cal U}}))$ is the cohomology $H^i({\cal U},{\cal M}_{{\cal U}})$ of the nerve of ${\cal U}$ (or the classifying space of ${\cal U}$) with coefficients in ${\cal M} _{{\cal U}}$. Finally, $$H^2(T_a ^\bullet ({\cal M} _{{\cal U}}))=exal({\cal A} _{{\cal U}},{\cal M} _{{\cal U}});$$ in particular, $H^2(T_a ^\bullet ({\cal A} _{{\cal U} }))= \operatorname{def}({\cal A} _{{\cal U}})$. As a consequence they obtain a long exact sequence of $k$- spaces $$\begin{array}{cccccc} ... 
& \to & \operatorname{Ext} ^1_{{\cal A} _{{\cal U}}\otimes {\cal A} _{{\cal U} }^o}({\cal A} _{{\cal U}}, {\cal M} _{{\cal U}}) & \to & H^1({\cal U} ,{\cal M} _{{\cal U}}) & \to \\ exal({\cal A} _{{\cal U}},{\cal M} _{{\cal U}}) & \to & \operatorname{Ext} ^2_{{\cal A} _{{\cal U}}\otimes {\cal A} _{{\cal U} }^o} ({\cal A} _{{\cal U}}, {\cal M} _{{\cal U}}) & \to & H^2({\cal U} ,{\cal M} _{{\cal U}}) & \to \\ ... & & & & & \\ \end{array}$$ Returning to our problem of trying to interpret cohomologically the space $exal({\cal A} _X,{\cal M} _X)$ we may proceed as follows. Let ${\cal U}$ be the category of (all or some) open subsets of $X$. From the sheaf of algebras ${\cal A} _X$ and its bimodule ${\cal M} _X$ we obtain the corresponding presheaves ${\cal A} _{{\cal U}}$ and ${\cal M} _{{\cal U}}$. At this point there are two natural questions. \noindent {\bf Q1}. Is $exal({\cal A} _X,{\cal M} _X)$ equal to $exal({\cal A} _{{\cal U}},{\cal M} _{{\cal U}})$? \noindent {\bf Q2}. Can we interpret the spaces $\operatorname{Ext}^i_{{\cal A} _{{\cal U}}\otimes {\cal A} _{{\cal U}}^o}({\cal A} _{{\cal U}},{\cal M} _{{\cal U}})$ and $H^i({\cal U}, {\cal M} _{{\cal U}})$ as sheaf cohomologies on $X$ or $X\times X$? The answers to these questions in general are probably negative. In this paper we obtain positive answers to the above questions in case $X$ is a quasiprojective scheme over $k$ and ${\cal A}_X$ and ${\cal M}_X$ are quasicoherent sheaves on $X$, which satisfy some additional conditions (the pair $({\cal A}_X,{\cal M}_X)$ must be admissible in the sense of Definition 4.7 below). In this case there is a natural quasicoherent sheaf of algebras ${\cal A}^e_Y$ on the product scheme $Y=X\times X$ (this is the analogue of the ring $A\otimes A^o$ for a single algebra $A$). Moreover, the ${\cal A}_X$-bimodule ${\cal M}_X$ gives rise to a ${\cal A}^e_Y$-module $\tilde{{\cal M}}_Y$; in particular, the ${\cal A}_X$-bimodule ${\cal A}_X$ defines an ${\cal A}^e_Y$-module $\tilde{{\cal A}}_Y$. If ${\cal U}$ is the category of all {\it affine} open subsets of $X$, then we prove that $$exal({\cal A}_X,{\cal M}_X)=exal({\cal A}_{{\cal U}},{\cal M}_{{\cal U}}),$$ and $$\operatorname{Ext}^i_{{\cal A} _{{\cal U}}\otimes {\cal A} _{{\cal U}}^o}({\cal A} _{{\cal U}},{\cal M} _{{\cal U}})= \operatorname{Ext}^i_{{\cal A}^e_Y}(\tilde{{\cal A}}_Y, \tilde{{\cal M}}_Y),$$ $$H^i({\cal U}, {\cal M} _{{\cal U}})=H^i(X,{\cal M}_X).$$ In particular, we obtain the long exact sequence $$\begin{array}{cccccc} ... & \to & \operatorname{Ext} ^1_{{\cal A}^e_Y}(\tilde{{\cal A}} _Y, \tilde{{\cal M}} _Y) & \to & H^1(X ,{\cal M} _X) &\to \\ exal({\cal A} _X,{\cal M} _X) & \to & \operatorname{Ext} ^2_{{\cal A}^e_Y} (\tilde{{\cal A}} _Y, \tilde{{\cal M}} _Y) & \to & H^2(X ,{\cal M} _{X}) &\to \\ ... & & & & & \\ \end{array}$$ which allows us to analyze the space $exal({\cal A} _X,{\cal M} _X)$. One of the implications is that $exal({\cal A} _X,{\cal M} _X)$ behaves well with respect to base field extensions. It is easy to describe the morphisms $$H^1(X,{\cal M} _X)\to exal({\cal A} _X,{\cal M} _X)\to \operatorname{Ext} ^2_{{\cal A}^e_Y} (\tilde{{\cal A}} _Y,\tilde{{\cal M}} _Y)$$ explicitly. Note that if $X$ is affine then $H^i(X,{\cal M}_X)=0$ for $i>0$ and hence $exal({\cal A}_X,{\cal M}_X)=\operatorname{Ext} ^2 _{{\cal A}^e_Y}(\tilde{{\cal A}}_Y,\tilde{{\cal M}}_Y)$. 
Moreover, in this case $$\operatorname{Ext} ^\bullet _{{\cal A}^e_Y}(\tilde{{\cal A}}_Y,\tilde{{\cal M}}_Y)=\operatorname{Ext} ^\bullet _{{\cal A}_X(X)\otimes {\cal A}_X^o(X)}({\cal A}_X(X),{\cal M}_X(X))$$ and thus $$exal({\cal A}_X,{\cal M}_X)=exal({\cal A}_X(X),{\cal M}_X(X)).$$ In the special case when ${\cal A} _X={\cal O} _X$ and ${\cal M} _X$ is a symmetric ${\cal O} _X$-bimodule the isomorphism $$\operatorname{Ext} ^i_{{\cal O}_Y}({\cal O} _X,{\cal M} _X)= \operatorname{Ext} ^i_{{\cal A} _{{\cal U}}\otimes {\cal A} _{{\cal U}}^o}({\cal A} _{{\cal U}},{\cal M} _{{\cal U}})$$ was proved by R.~Swan in [S]. We apply the above results to analyze $\operatorname{def} ({\cal A} _X)$ in case $X$ is a smooth quasiprojective variety over ${\Bbb C}$ and ${\cal A} _X=D_X$ -- the sheaf of differential operators on $X$. In this case $$\operatorname{Ext} ^i_{{\cal A}^e_Y}(\tilde{{\cal A}}_Y,\tilde{{\cal A}}_Y)=H^i(X^{an},{\Bbb C} ).$$ If in addition $X$ is $D$-affine (for example $X$ is affine) then $H^i(X,D_X)=0$ for $i>0$ and hence $$\operatorname{def}(D_X)=H^2(X^{an},{\Bbb C} ).$$ In the last section we study {\it induced} deformations of $D_X$, i.e. those which come from deformations of the structure sheaf ${\cal O} _X$. In particular, if $X$ is a flag variety we show that every induced deformation of $D_X$ is trivial. This result is used in the work [LR3], where we study quantum differential operators on quantum flag varieties. It is my pleasure to thank Paul Bressler for his references to the literature on deformation theory and Michael Larsen for helpful discussions of the subject. \section{Preliminaries on extension of algebras and Hochschild cohomology} \subsection{Extensions of algebras} Fix a field $k$. An algebra means an associative unital $k$-algebra. Fix an algebra $A$; $A^o$ is the opposite algebra and $A^e:=A\otimes _kA^o$. An $A$-module means a left $A$-module; an $A$-bimodule means an $A^e$-module. Fix an algebra $A$ and an $A$-bimodule $M$. Consider an exact sequence of $k$-modules $$0\to M\to B\stackrel{\epsilon}{\to}A\to 0$$ with the following properties: \begin{itemize} \item $B$ is an algebra and $\epsilon$ is a homomorphism of algebras. (Hence $M$ is a 2-sided ideal in $B$.) \item The $B$-bimodule structure on $M$ factors through the homomorphism $\epsilon$ and the resulting $A$-bimodule structure on $M$ coincides with the given one. (In particular, the square of the ideal $M$ is zero.) \end{itemize} \begin{defn} An exact sequence as above is called an {\it algebra extension} of $A$ by $M$. An isomorphism between extensions $$0\to M\to B\to A\to 0$$ and $$0\to M\to B^\prime \to A\to 0$$ is an isomorphism of algebras $\alpha :B\to B^\prime$ which makes the following diagram commutative $$ \begin{array}{ccrcrcrcc} 0 & \to & M & \to & B & \to & A & \to & 0 \\ & & id \downarrow & & \alpha \downarrow & & id \downarrow & & \\ 0 & \to & M & \to & B^\prime & \to & A & \to & 0 \end{array} $$ An extension is {\it split} if there exists an algebra homomorphism $s:A\to B$ such that $\epsilon \cdot s=id$. Then $B=A\oplus M$ with the multiplication $(a,m)(a^\prime ,m^\prime )=(aa^\prime ,am^\prime + ma^\prime )$. The collection of isomorphism classes of algebra extensions of $A$ by $M$ is naturally a $k$-vector space which is denoted $exal(A,M)$. The zero element is the class of the split extension.
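As a quick check, the split multiplication above is indeed associative, since $M$ is an $A$-bimodule and the product of any two elements of $M$ vanishes in $B$: $$\big((a,m)(a^\prime ,m^\prime )\big)(a^{\prime\prime},m^{\prime\prime}) =(aa^\prime a^{\prime\prime},\ aa^\prime m^{\prime\prime}+am^\prime a^{\prime\prime}+ma^\prime a^{\prime\prime}) =(a,m)\big((a^\prime ,m^\prime )(a^{\prime\prime},m^{\prime\prime})\big).$$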
\end{defn} Given a map of $A$-bimodules $M\to M^\prime$ the usual pushout construction for extensions defines a map $$exal(A,M)\to exal(A,M^\prime).$$ Given a homomorphism of algebras $A^\prime \to A$ the pullback construction for extensions defines a map $$exal(A,M)\to exal (A^\prime ,M).$$ Thus $exal(\cdot,\cdot)$ is a bifunctor covariant in the second variable and contravariant in the first one. In case $M=A$ the space $exal(A,A)$ can be considered as the space of first order deformations of the algebra $A$. Let us describe this space in a different way. Put $k_1:=k[t]/(t^2)$. Consider $k_1$-algebras $B$ with a given isomorphism $\theta :grB\to A\otimes _kk_1$. (The algebra $B$ has the filtration $\{0\}\subset tB \subset B$ and $grB$ denotes the associated graded.) The isomorphism classes of such pairs $(B,\theta )$ form a pointed set which we denote by $\operatorname{def} (A)$. The distinguished element in $\operatorname{def} (A)$ is represented by the algebra $B=A\otimes _kk_1$. We claim that $exal(A,A)=\operatorname{def} (A)$ (hence $\operatorname{def} (A)$ is a $k$-vector space). Indeed, given $(B,\theta )$ as above we obtain an exact sequence $$0\to tB=A\to B\to A\to 0,$$ which gives a well defined map $\operatorname{def} (A)\to exal(A,A)$. Conversely, given an algebra extension $$0\to M=A\to B\to A\to 0$$ define the multiplication $t:B\to B$ by $t\cdot 1_B=1_A\in M$. This makes $B$ a $k_1$-algebra and defines the inverse map $exal(A,A)\to \operatorname{def} (A)$. The above description of $exal(A,A)$ allows us to define the set $\operatorname{def} ^n(A)$ of $n$-th order deformations of $A$ as the collection of isomorphism classes of $k_n:=k[t]/(t^{n+1})$-algebras $B$ with an isomorphism of $k_n$-algebras $grB\to A\otimes _kk_n$. Thus $\operatorname{def} ^1(A)=\operatorname{def} (A)=exal(A,A)$. The algebra $B=A\otimes _kk_n$ represents the {\it trivial} deformation. Note that $B$ is trivial if there exists a $k$-algebra homomorphism $s:A\to B$ such that the composition of $s$ with the residue homomorphism $B\to A$ is the identity of $A$. Indeed, then $s\otimes 1:A\otimes _kk_n\to B$ is an isomorphism of $k_n$-algebras. Note that the quotient homomorphism $B\to B/t^nB$ defines the map $\operatorname{def} ^n(A)\to \operatorname{def} ^{n-1}(A)$. Denote by $\operatorname{def} _0^n(A)\subset \operatorname{def} ^n(A)$ the preimage in $\operatorname{def} ^n(A)$ of the trivial deformation in $\operatorname{def} ^{n-1}(A)$. \begin{lemma} There exists a natural identification $\operatorname{def} ^n_0(A)=\operatorname{def} (A)$. In particular, $\operatorname{def} ^n_0(A)$ has a natural structure of a $k$-vector space. \end{lemma} \begin{pf} Let $B\in \operatorname{def} ^n(A)$ be such that $B/t^nB=A\otimes _kk_{n-1}$. Consider the obvious $k$-algebra homomorphism $A\to A\otimes _kk_{n-1}$ and the induced pullback diagram $$ \begin{array}{ccrcccccc} 0 & \to & t^nB & \to & B^\prime & \to & A & \to & 0 \\ & & id \downarrow & & \downarrow & & \downarrow & & \\ 0 & \to & t^nB & \to & B & \to & A\otimes _kk_{n-1} & \to & 0 \end{array} $$ Then $B^\prime $ represents an element in $\operatorname{def} (A)$. We get a map $\operatorname{def} _0^n(A)\to \operatorname{def} (A)$. The inverse map $\operatorname{def} (A) \to \operatorname{def} _0^n(A)$ is defined as follows.
Given $B^\prime \in \operatorname{def} (A)$ consider the projection $A\otimes _kk_{n-1} \to A$ and the corresponding pullback diagram $$ \begin{array}{ccrcccccc} 0 & \to & A & \to & B & \to & A\otimes _kk_{n-1} & \to & 0 \\ & & id \downarrow & & \downarrow & & \downarrow & & \\ 0 & \to & A & \to & B^\prime & \stackrel{p}{\to} & A & \to & 0 \end{array} $$ Then $B$ is a $k_n$-algebra as follows: $$t:(b^\prime ,0)\to (0,tp(b^\prime)),\quad t:(0,t^{n-1}a)\to (tp^{-1}(a),0).$$ This proves the lemma. \end{pf} \begin{cor} Assume that $\operatorname{def} (A)=0$. Then $\operatorname{def} ^n(A)=0$ for all $n$. \end{cor} \begin{pf} Induction on $n$ using the previous lemma. \end{pf} \subsection{Hochschild cohomology} The space $exal(A,M)$ has a well known cohomological description. Namely, there is a natural isomorphism $$exal(A,M)=\operatorname{Ext} ^2_{A^e}(A,M).$$ Let us recall how this isomorphism is defined. Consider the bar resolution $$...\stackrel{\partial _2}{\rightarrow}B_1\stackrel{\partial _1} {\rightarrow}B_0 \stackrel{\partial _0}{\rightarrow}A\rightarrow 0,$$ where $B_i=A^{\otimes i+2}$ and $$\partial_i(a_0\otimes ...\otimes a_{i+1})=\sum_j (-1)^ja_0\otimes ...\otimes a_ja_{j+1}\otimes ...\otimes a_{i+1}.$$ Note that the $B_i$ are naturally $A^e$-modules and the differentials $\partial _i$ are $A^e$-linear. Hence $B_\bullet \to A$ is a free resolution of the $A^e$-module $A$. Thus for any $A^e$-module $M$ $$H^\bullet \operatorname{Hom}_{A^e}(B_\bullet ,M)=\operatorname{Ext}^\bullet_{A^e}(A,M).$$ Note that $\operatorname{Hom} _{A^e}(B_i ,M)=\operatorname{Hom} _k(A^{\otimes i},M)$. Given an algebra extension $$0\to M\to B\to A\to 0$$ choose a $k$-linear splitting $s:A\to B$ and define a 2-cocycle $Z_s\in \operatorname{Hom} _k(A^{\otimes 2},M)$ by $$Z_s(a,b)=s(ab)-s(a)s(b).$$ Different $k$-splittings define cohomologous cocycles, hence we obtain a map $exal(A,M)\to \operatorname{Ext}^2_{A^e}(A,M)$ which is, in fact, an isomorphism. The spaces $\operatorname{Ext} ^\bullet_{A^e}(A,M)$ are called the Hochschild cohomology groups of $A$ with coefficients in $M$. In particular, $\operatorname{Ext} _{A^e}^\bullet (A,A)=HH^\bullet (A)$ is the usual Hochschild cohomology of $A$. Note that the space $\operatorname{Ext} ^0_{A^e}(A,M)=\operatorname{Hom} _{A^e}(A,M)$ coincides with the center $Z(M)$ of $M$: $$Z(M)=\{ m\in M\vert am=ma\ \ \forall a\in A\} .$$ The space $\operatorname{Ext} ^1_{A^e}(A,M)$ classifies the outer derivations of $A$ into $M$. Namely, a map $d:A\to M$ is a {\it derivation} if $d(ab)=ad(b)+d(a)b$. It is called an inner derivation (defined by $m\in M$) if $d(a)=[a,m]$. Denote by $Der (A,M)$ (resp. $Inder(A,M)$) the space of derivations (resp. inner derivations). Then $$\operatorname{Ext} ^1_{A^e}(A,M)=Outder(A,M):=Der(A,M)/Inder(A,M).$$ \begin{remark} Consider the split extension $B=A\oplus M\in exal (A,M)$, i.e. the multiplication in $B$ is $(a,m)(a^\prime, m^\prime)=(aa^\prime, am^\prime+ ma^\prime)$. Then an automorphism of this extension is an algebra automorphism $\alpha \in Aut(B)$ of the form $$\alpha (a,m)=(a,m+d(a)),$$ where $d:A\to M$ is a derivation (the multiplicativity of $\alpha$ amounts precisely to the Leibniz rule $d(aa^\prime)=ad(a^\prime)+d(a)a^\prime$). In other words the automorphism group of the trivial extension is the group $Der(A,M)$. \end{remark} \subsection{Deformation of sheaves of algebras} Let $X$ be a topological space and ${\cal A} $ be a sheaf of $k$-algebras on $X$.
Let ${\cal A} ^o$ denote the sheaf of opposite $k$-algebras and ${\cal A} ^e={\cal A} \otimes _k{\cal A} ^o$. Given an ${\cal A} ^e$-module ${\cal M} $ we may repeat the above definition for algebras and modules to define the space of algebra extensions $exal({\cal A},{\cal M})$. In particular, an algebra extension of ${\cal A} $ by ${\cal M}$ is represented by an exact sequence of sheaves of $k$-vector spaces $$0\to {\cal M} \to {\cal B} \stackrel{\epsilon}{\to} {\cal A} \to 0$$ such that ${\cal B} $ is a sheaf of $k$-algebras and $\epsilon$ is a homomorphism of sheaves of algebras satisfying the properties of the Definition 2.1 above. A split extension is the one admitting a homomorphism of sheaves of algebras $s:{\cal A} \to {\cal B}$ such that $\epsilon \cdot s=id$. In particular, a split extension must be split as an extension of sheaves of $k$-vector spaces. In case ${\cal M} ={\cal A}$ we may again define the set $\operatorname{def} ^n({\cal A})$ of $n$-th order deformations of ${\cal A}$, so that $\operatorname{def} ^1({\cal A} )=\operatorname{def} ({\cal A})=exal({\cal A} ,{\cal A})$. Let again $\operatorname{def} ^n_0({\cal A} )\subset \operatorname{def} ^n({\cal A} )$ be the subset consisting of $n$-th order deformations which are trivial up to order $n-1$. Then repeating the proof of Lemma 2.2 we get $\operatorname{def} ^n_0({\cal A} )=\operatorname{def} ({\cal A})$. In particular, $\operatorname{def} ^n_0({\cal A} )$ is naturally a $k$-vector space and $\operatorname{def} ({\cal A})=0$ implies $\operatorname{def} ^n({\cal A})=0$ for all $n$. \section{Review of Gerstenhaber-Schack construction} In the paper [GS] the authors develop a deformation theory of presheaves of algebras on small categories. We will review their construction in a special case which is relevant to us. Namely let $X$ be a topological space and ${\cal U}$ be the category of all (or some) open subsets of $X$. Let ${\cal A}={\cal A}_{{\cal U}}$ be a presheaf of algebras on ${\cal U}$, i.e. ${\cal A}$ is a contravariant functor from ${\cal U}$ to the category of algebras. We denote by $k_{{\cal U}}$ the constant presheaf of algebras: $k_{{\cal U}}(U)=k$ for all $U\in {\cal U}$. Let ${\cal A} -mod$ be the abelian category (of presheaves) of left ${\cal A}$-modules. The presheaf of algebras ${\cal A} ^e={\cal A} \otimes {\cal A} ^o$ is defined in the obvious way: ${\cal A} ^e(U)={\cal A}(U)\otimes _k{\cal A} ^0(U)$. In case ${\cal A}=k_{{\cal U}}$ for ${\cal M}\in k_{{\cal U}}-mod$ we denote $\operatorname{Ext}^i_{k_{{\cal U}}}(k_{{\cal U}},{\cal M} )=H^i({\cal U} ,{\cal M} ).$ Fix an ${\cal A}$-bimodule ${\cal M}$ (i.e. ${\cal M} \in {\cal A}^e-mod$). The group $exal({\cal A} ,{\cal M} )$ is defined exactly as above in the case of a single algebra and its bimodule. We are going to give a natural description of the group $exal({\cal A}, {\cal M})$ in terms of homological algebra in the category of presheaves on ${\cal U}$. In patricular, we will construct a canonical map $$exal({\cal A} ,{\cal M})\to \operatorname{Ext} ^2_{{\cal A} ^e}({\cal A} ,{\cal M} ).$$ First recall some constructions from [GS]. \subsection{Categorical simplicial resolution} Let ${\cal C}={\cal C} _{{\cal U}}$ be a presheaf of algebras on ${\cal U}$. Given $U\in {\cal U}$ denote its inclusion $i_U:\{U\}\hookrightarrow {\cal U}$. 
The obvious (exact) restriction functor $$i^*:{\cal C} -mod \longrightarrow {\cal C} (U)-mod,\quad {\cal K} \mapsto {\cal K} (U)$$ has a right exact left adjoint functor $i_{U!}:{\cal C}(U)-mod\rightarrow {\cal C}-mod$ $$i_{U!}K(V)=\begin{cases} {\cal C}(V)\otimes _{{\cal C}(U)}K, & \text{ if $V\subset U$},\\ 0, & \text{ otherwise.} \end{cases}$$ Thus if $K$ is a projective ${\cal C}(U)$-module, then $i_{U!}K$ is a projective object in ${\cal C}-mod$. In particular, the category ${\cal C}-mod$ has enough projectives (it also has enough injectives (see [GS])). If the category ${\cal U}$ has a final object $U$, then ${\cal C}=i_{U!}{\cal C}(U)$ is projective in ${\cal C}-mod$. In patricular, then $$\operatorname{Ext} _{{\cal C}}^i({\cal C} ,{\cal K} )=0, \ \ \ \text{for all ${\cal K} \in {\cal C}-mod$, $i>0$.}$$ For ${\cal N}\in {\cal C}-mod$ define $$S({\cal N}):=\bigoplus_{U\in {\cal U}}i_{U!}i_U^*{\cal N}$$ with the canonical map $$\epsilon _{{\cal N}}:S({\cal N})\rightarrow {\cal N}.$$ Clearly $S$ is an endo-functor $S:{\cal C} -mod\longrightarrow {\cal C} -mod$ with a morphism of functors $\epsilon :S\to Id$. Define a diagram of functors $$...s_2\stackrel{\partial _1}{\rightarrow}s_1\stackrel{\partial _0}{\rightarrow}s_0 \stackrel{\partial _{-1}=\epsilon}{\longrightarrow}Id\to 0,$$ where $s_i=S^{i+1}$ and $\partial _i=\epsilon _{s_i} -S(\partial _{i-1})$. This diagram is a complex, i.e. $\partial _i\partial _{i-1}=0$, which is exact. So for ${\cal N} \in {\cal C}-mod$ we obtain a resolution $$...\to s_1({\cal N})\to s_0({\cal N} )\to {\cal N} \to 0.$$ Explicitly we have $$s_k({\cal N})=\bigoplus_{U_k\subset ... \subset U_0}i_{U_k!}i_{U_k}^*...i_{U_0!}i_{U_0}^*{\cal N}.$$ If ${\cal N}$ is locally projective (i.e. ${\cal N}(U)$ is a projective ${\cal C}(U)-module$ for all $U\in {\cal U}$), then the complex $s_\bullet({\cal N})$ consists of projective objects in ${\cal C}-mod$. So in this case for ${\cal M} \in {\cal C}-mod$ we have $$\operatorname{Hom} _{{\cal C}}(s_\bullet ({\cal N}),{\cal M})={\Bbb R} \operatorname{Hom}_{{\cal C}}^\bullet({\cal N} ,{\cal M} ).$$ \subsection{Simplicial bar resolution} Consider the bar resolution of the presheaf of algebras ${\cal A}$: $$...\to {\cal B} _1\to {\cal B} _0\to {\cal A},$$ where ${\cal B}_i={\cal A} ^{\otimes i+2}$ (this is a direct analogue of the usual bar resolution for algebras described above). The presheaves ${\cal B} _i$ are {\it locally} free ${\cal A} ^e$- modules, but usually not projective objects in ${\cal A} ^e-mod$. So the simplicial resolution $s_\bullet {\cal B} _\bullet$ of ${\cal B}_\bullet$ is a double complex consisting of projective objects in ${\cal A} ^e-mod$. For an ${\cal A} ^e$-module ${\cal M} $ denote by $T^{\bullet \bullet}({\cal M})$ the double complex $\operatorname{Hom} _{{\cal A} ^e}(s_\bullet{\cal B}_\bullet, {\cal M} )$, and let $T^\bullet ({\cal M})=\operatorname{Tot} (T^{\bullet\bullet}({\cal M} ))$ be its total complex. We have $$\operatorname{Ext} ^i_{{\cal A} ^e}({\cal A} ,{\cal M} )=H^i(T^\bullet({\cal M})).$$ Consider the double complex $T^{\bullet \bullet}({\cal M})$. 
It looks like $$\begin{array}{lclc} \uparrow & & \uparrow & \\ \prod\limits_U\operatorname{Hom} _k ({\cal A} (U)\otimes {\cal A} (U), {\cal M} (U)) & \to & \prod \limits_{V\subset U} \operatorname{Hom} _k({\cal A} (U)\otimes {\cal A} (U),{\cal M}(V)) & \to \\ \uparrow & & \uparrow & \\ \prod\limits_{U}\operatorname{Hom} _k({\cal A} (U), {\cal M} (U)) & \to & \prod\limits_{V\subset U} \operatorname{Hom} _k({\cal A} (U),{\cal M} (V)) & \to \\ \uparrow & & \uparrow & \\ \prod\limits_U\operatorname{Hom} _k(k, {\cal M} (U)) & \to & \prod\limits _{V\subset U}\operatorname{Hom} _k(k, {\cal M} (V)) & \to , \end{array}$$ \noindent where the left lower corner has bidegree $(0,0)$. The vertical arrows are the Hochschild differentials while the horizontal ones come from the simplicial resolution. Let $T_a^{\bullet \bullet}({\cal M} )\subset T^{\bullet \bullet}({\cal M} )$ be the sub-double complex obtained by deleting the bottom row. Put $$T^\bullet_a({\cal M})=Tot(T^{\bullet\bullet}_a({\cal M}));\quad H_a^n({\cal A} ,{\cal M}):= H^n(T_a^\bullet({\cal M} )).$$ Note that the complex $T^\bullet({\cal M})/T_a^\bullet({\cal M})$ is just $\operatorname{Hom} _k(s_\bullet (k_{{\cal U}}),{\cal M})$. Hence we obtain the long exact sequence $$\begin{array}{cccccc} \to & H_a^n({\cal A} ,{\cal M} ) & \to & \operatorname{Ext}^n_{{\cal A} ^e}({\cal A} ,{\cal M} ) & \to & H^n({\cal U} ,{\cal M} ) \\ \to & H_a^{n+1}({\cal A} ,{\cal M} ) & \to & ... & & \end{array} $$ In case ${\cal M} $ is a symmetric ${\cal A}$-bimodule, i.e. $am=ma$ for all $a\in {\cal A}$, $m\in {\cal M}$, the above sequence splits into short exact sequences ([GS],21.3) $$0 \to H_a^n({\cal A} ,{\cal M} )\to \operatorname{Ext} ^n_{{\cal A} ^e}({\cal A} ,{\cal M}) \to H^n({\cal U} ,{\cal M} )\to 0.$$ \subsection{The isomorphism $exal({\cal A} ,{\cal M})\simeq H_a^2({\cal A} ,{\cal M})$} Let the extension $$0\to {\cal M} \to {\cal B} \to {\cal A} \to 0$$ represent an element in $exal({\cal A} ,{\cal M} )$. Choose local $k$-linear splittings $s_U:{\cal A} (U)\to {\cal B} (U)$. Let us construct a 2-cocycle in $T^{\bullet\bullet}_a({\cal M} )$. Namely, put $$Z^{0,2}(a,b)=s_U(ab)-s_U(a)s_U(b), \quad U\in {\cal U},\ a,b\in {\cal A}(U),$$ $$Z^{1,1}(a)=s_Vr^{{\cal A}}_{U,V}(a)-r^{{\cal B}}_{U,V}s_U(a), \quad V\subset U,\ a\in {\cal A} (U),$$ where $r^{{\cal A}}_{U,V}:{\cal A}(U)\to {\cal A}(V)$, $r^{{\cal B}}_{U,V}:{\cal B} (U)\to {\cal B} (V)$ are the structure restriction maps of the presheaves ${\cal A} $ and ${\cal B}$. Then $(Z^{0,2},Z^{1,1})$ is a 2-cocycle in $T_a^{\bullet\bullet}({\cal M})$ and the induced map $$exal({\cal A} ,{\cal M} )\to H_a^2({\cal A} ,{\cal M})$$ is an isomorphism ([GS],21.4). The inverse isomorphism is constructed as follows. Let $(Z^{0,2},Z^{1,1})$ be a 2-cocycle in $T^{\bullet \bullet}_a({\cal M})$. For each $U\in {\cal U}$ put ${\cal B}(U)={\cal A}(U)\oplus {\cal M}(U)$ as a $k$-vector space; define the multiplication in ${\cal B}(U)$ by $(a,m)(a^\prime ,m^\prime ) =(aa^\prime ,am^\prime + ma^\prime + Z^{0,2}(a,a^\prime ))$. We make ${\cal B}$ a presheaf of algebras by defining the restriction maps $r_{U,V}^{{\cal B}} :{\cal B}(U)\to {\cal B}(V)$ to be $r_{U,V}^{{\cal B}}(a,m)=(r^{{\cal A}}_{U,V}(a), r_{U,V}^{{\cal M}}(m)+Z^{1,1}(a)).$ In particular, we obtain the 5-term exact sequence $$\begin{array}{cccccc} & ...
& \to & \operatorname{Ext}^1_{{\cal A} ^e}({\cal A} ,{\cal M} ) & \to & H^1({\cal U} ,{\cal M} ) \\ \to & exal({\cal A} ,{\cal M} ) & \to & \operatorname{Ext} ^2_{{\cal A}^e}({\cal A},{\cal M}) & \to & H^2({\cal U}, {\cal M}) \end{array} $$ \section{Admissible quasicoherent sheaves of algebras and bimodules.} \begin{defn} Let $Z$ be a scheme and ${\cal A}_Z$ be a sheaf of unital $k$-algebras on $Z$. We say that ${\cal A}_Z$ is a {\it quasicoherent} sheaf of algebras if there is given a homomorphism of sheaves of unital $k$-algebras ${\cal O}_Z\to {\cal A}_Z$ which makes ${\cal A}_Z$ a quasicoherent left ${\cal O}_Z$-module. Note that ${\cal A}_Z^o$ is then a quasicoherent right ${\cal O}_Z$-module. Denote by $\mu({\cal A}_Z)\subset {\cal A}_Z-mod$ the full subcategory of left ${\cal A}_Z$-modules consisting of quasicoherent ${\cal O}_Z$-modules. \end{defn} Fix a quasiprojective scheme $X$ over $k$ with a sheaf of unital $k$-algebras ${\cal A}_X$ on it. Let ${\cal A}_X^o$ be the sheaf of opposite algebras and ${\cal A}^e_X={\cal A}_X\otimes _k{\cal A}_X^o$. An ${\cal A}_X$-module means a left ${\cal A}_X$-module; an ${\cal A}_X$-bimodule means an ${\cal A}_X^e$-module. Put $Y=X\times X$ with the two projections $p_1,p_2:Y\to X$. We have the sheaves of algebras $p_1^{-1}{\cal A}_X$ and $p_2^{-1}{\cal A}_X^o$ on $Y$ and hence also their tensor product $p_1^{-1}{\cal A}_X\otimes _kp_2^{-1}{\cal A}_X^o$. Assume that ${\cal A}_X$ is quasicoherent. Then we can take the quasicoherent inverse images $p_1^*{\cal A}_X$ and $p_2^*{\cal A}_X^o$ (using left and right ${\cal O}_X$-structures respectively). Put $${\cal A}_Y^e:=p_1^*{\cal A}_X\otimes _{{\cal O}_Y}p_2^*{\cal A}_X^o.$$ Note that for affine open $U,V\subset X$, ${\cal A}_Y^e(U\times V)= {\cal A}_X(U)\otimes _k{\cal A}_X(V)^o$. This is a quasicoherent sheaf on $Y$ with a natural morphism of quasicoherent sheaves $$\beta :{\cal O}_Y\to {\cal A}_Y^e,$$ which sends $1$ to $1\otimes 1$. We also have the obvious morphism of sheaves of $k$-vector spaces $$\gamma :p_1^{-1}{\cal A}_X\otimes _kp_2^{-1}{\cal A}_X^o\to {\cal A}_Y^e.$$ \begin{defn} We say that the quasicoherent sheaf of algebras ${\cal A}_X$ satisfies condition (*) if ${\cal A}_Y^e$ has a structure of a sheaf of algebras so that $\beta $ and $\gamma $ are morphisms of sheaves of algebras. \end{defn} Note that if ${\cal A}_X$ satisfies condition (*) then, in particular, ${\cal A}_Y^e$ is a quasicoherent sheaf of algebras on $Y$. It seems that the algebra structure on ${\cal A}_Y^e$ as required in the condition (*), if it exists, should be unique. In any case, there is a canonical such structure in all examples that we have in mind. \noindent{\bf Examples.} 1. The condition (*) holds if the sheaf of algebras ${\cal A}_X$ is commutative. More generally, it holds if the image of ${\cal O}_X$ lies in the center of ${\cal A}_X$. 2. Assume that $char(k)=0$ and $X$ is smooth. Then (*) holds for the sheaf ${\cal A}_X=D_X$ of differential operators on $X$. In this case $$p_1^*D_X\otimes _{{\cal O}_Y}p_2^*D_X=D_Y.$$ Let $\omega _X$ be the dualizing sheaf on $X$. Then $D_X^o=\omega _X \otimes _{{\cal O}_X}D_X\otimes _{{\cal O}_X}\omega _X^{-1}$ and hence $${\cal A}^e_Y=p_1^*D_X\otimes _{{\cal O}_Y}p_2^*D_X^o=p_2^*\omega _X\otimes _{{\cal O}_Y} D_Y\otimes _{{\cal O}_Y}p_2^*\omega _X^{-1}.$$ Let ${\cal M}_X$ be an ${\cal A}_X$-bimodule. Then, in particular, ${\cal M}_X$ is an ${\cal O}_X$-bimodule.
\begin{defn} We say that ${\cal M}_X$ satisfies the condition $(\star)$ if for every affine open $U\subset X$ and $f\in {\cal O}(U)$ we have $${\cal M}_X(U_f)={\cal O}(U_f)\otimes _{{\cal O}(U)}{\cal M}_X(U)\otimes _{{\cal O}(U)} {\cal O}(U_f).$$ \end{defn} \begin{remark} The sheaves of algebras ${\cal A}_X$ in Examples 1,2 above satisfy the condition $(\star)$ when considered as ${\cal A}_X$-bimodules. \end{remark} \begin{lemma} Let ${\cal A}_X$ be a quasicoherent sheaf of algebras which satisfies the condition (*), and let ${\cal M}_X$ be an ${\cal A}_X$-bimodule which satisfies the condition $(\star )$. Then ${\cal M}_X$ defines a (unique up to an isomorphism) ${\cal A}^e_Y$-module $\tilde{{\cal M}}_Y$ on $Y$ such that for every affine open $U\subset X$ $$\tilde{{\cal M}}_Y(U\times U)={\cal M}_X(U).$$ We have $\tilde{{\cal M}}_Y\in \mu({\cal A}^e_Y)$. \end{lemma} \begin{pf} Choose an affine open covering $\{U\}$ of $X$. Then the affine open subsets $U\times U$ form a covering of $Y$. Fix one such subset $V=U\times U$. The sheaf of algebras ${\cal A}^e_Y$ is quasicoherent, hence by Serre's theorem below we have the equivalence of categories $$\mu({\cal A}^e_V)\simeq {\cal A}_Y^e(V)-mod.$$ The sheaf ${\cal M}_X$ defines an ${\cal A}^e_Y(V)={\cal A}_X(U)\otimes _k{\cal A}_X(U)^o$-module ${\cal M}_X(U)$, hence defines a quasicoherent ${\cal A}_V^e$-module $\tilde{{\cal M}}_V$. If $V^\prime =U^\prime\times U^\prime \subset V$, then the condition $(\star)$ for ${\cal M}_X$ implies that $\tilde{{\cal M}}_V \vert_{V^\prime}=\tilde{{\cal M}}_{V^\prime}$. Hence the local sheaves glue together into a global quasicoherent ${\cal A}^e_Y$-module $\tilde{{\cal M}}_Y$. The last assertion is obvious. \end{pf} \begin{thm} Let $Z=\operatorname{Spec} C$ be an affine scheme, ${\cal A}_Z$ -- a quasicoherent sheaf of algebras on $Z$. Put $A=\Gamma (Z,{\cal A}_Z)$. Then the functor of global sections $\Gamma $ is an equivalence of categories $$\Gamma :\mu({\cal A}_Z)\to A-mod.$$ Its inverse is $\Delta$ defined by $$\Delta (M)={\cal A}_Z\otimes _AM.$$ Both $\Gamma $ and $\Delta $ are exact functors. \end{thm} \begin{pf} The point is that for an $A$-module $M$ the quasicoherent sheaf $\Delta (M)$ is indeed an ${\cal A}_Z$-module. The rest follows easily from Serre's classical theorem about the equivalence $$qcoh(Z)\simeq C-mod.$$ \end{pf} \begin{defn} We call a quasicoherent sheaf of algebras ${\cal A}_X$ {\it admissible} if it satisfies conditions (*) and $(\star)$ (as a bimodule over itself). We call an ${\cal A}_X$-bimodule ${\cal M}_X$ {\it admissible} if it satisfies condition $(\star)$. We say that $({\cal A}_X,{\cal M}_X)$ is an admissible pair if both ${\cal A}_X$ and ${\cal M}_X$ are admissible. \end{defn} \begin{remark} The sheaf of algebras ${\cal A}_X$ as in Examples 1,2 above is admissible. \end{remark} Let us summarize our discussion in the following corollary. \begin{cor} Let $({\cal A}_X, {\cal M}_X)$ be an admissible pair. Then i) ${\cal A}_X$ defines a quasicoherent sheaf of algebras ${\cal A}_Y^e$ on $Y$ such that for affine open $U,V\subset X$, ${\cal A}_Y^e(U\times V)={\cal A}_X(U)\otimes _k {\cal A}_X(V)^o$; ii) ${\cal M}_X$ defines a sheaf $\tilde{{\cal M}}_Y\in \mu({\cal A}_Y^e)$ such that for affine open $U\subset X$, $\tilde{{\cal M}}_Y(U\times U)={\cal M}_X(U).$ \end{cor} \begin{pf} This follows immediately from Definition 4.2 and Lemma 4.5.
\end{pf} We will be able to give a cohomological interpretation of the group $exal({\cal A}_X,{\cal M}_X)$ for an admissible pair $({\cal A}_X,{\cal M}_X)$. \section{Cohomological description of the group $exal({\cal A}_X,{\cal M}_X)$ for an admissible pair $({\cal A}_X,{\cal M}_X)$.} Let $X$ be a quasiprojective scheme over $k$ and $({\cal A}_X,{\cal M}_X)$ be an admissible pair. We will consider the group $exal({\cal A}_X,{\cal M}_X)$ of algebra extensions of ${\cal A}_X$ by ${\cal M}_X$. Note that if an exact sequence $$0\to {\cal M}_X \to {\cal B} _X \to {\cal A} _X \to 0$$ is such an extension, then we do not require the sheaf ${\cal B}_X$ to be quasicoherent, or even an ${\cal O}_X$-module. Denote by ${\cal U} =Aff(X)$ be the category of all affine open subsets of $X$. Given a sheaf ${\cal F} _X$ on $X$ we denote by $j_X^*{\cal F} _X$ the presheaf on ${\cal U}$, which is obtained by restriction of ${\cal F} _X$ to affine open subsets. We will usually denote $j^*_X{\cal F} _X={\cal F} _{{\cal U}}$ if it causes no confusion. In particular, we obtain presheaves of algebras ${\cal A} _{{\cal U}}=j^*_X{\cal A} _X$, ${\cal A} ^e_{{\cal U}}:={\cal A} _{{\cal U}}\otimes {\cal A} _{{\cal U}}^o$ $({\cal A} ^e_{{\cal U}}\neq j^*_X{\cal A} ^e_X)$. \begin{lemma} Then there is a natural map $exal({\cal A} _X, {\cal M} _X)\rightarrow exal({\cal A} _{{\cal U}},{\cal M} _{{\cal U}})$ which is an isomorphism. In particular, $\operatorname{def} ({\cal A} _X)=\operatorname{def} ({\cal A} _{{\cal U}})$. \end{lemma} \begin{pf} Given an exact sequence of sheaves on $X$ $$0\to {\cal M}_X \to {\cal B} _X \to {\cal A} _X \to 0,$$ which represents an element in $exal({\cal A} _X,{\cal M} _X)$ we obtain the corresponding sequence $$0\to {\cal M} _{{\cal U}}\to {\cal B} _{{\cal U}} \to {\cal A}_{{\cal U}}\to 0$$ of presheaves on ${\cal U}$. This last sequence is exact because ${\cal M} _X$ is quasi-coherent. Hence it represents an element in $exal({\cal A} _{{\cal U}},{\cal M} _{{\cal U}})$. So we obtain a map $$exal({\cal A} _X,{\cal M} _X)\rightarrow exal({\cal A} _{{\cal U}},{\cal M} _{{\cal U}}).$$ Vice versa, let $$0 \to {\cal M} _{{\cal U} }\to {\cal B} _1 \to {\cal A} _{{\cal U}} \to 0$$ represent an element in $exal({\cal A} _{{\cal U}}, {\cal M}_{{\cal U}})$. Denote by ${}^+$ the (exact) functor which associates to a presheaf on ${\cal U}$ the corresponding sheaf on $X$. Then $({\cal A} _{{\cal U}})^+={\cal A} _X$, $({\cal M} _{{\cal U}})^+={\cal M} _X$ and hence we obtain an exact sequence $$0 \to {\cal M} _X \to {\cal B}_1^+ \to {\cal A} _X \to 0$$ which defines an element in $exal({\cal A} _X,{\cal M}_X)$. This defines the inverse map $$exal({\cal A} _{{\cal U}},{\cal M}_{{\cal U}})\rightarrow exal({\cal A} _X,{\cal M} _X).$$ \end{pf} Let $D^b({\cal A}_Y^e)$ and $D^b({\cal A}^e_{{\cal U}})$ denote the bounded derived categories of ${\cal A}_Y^e-mod$ and ${\cal A}^e_{{\cal U}}-mod$ respectively. Let $D^b_{\mu({\cal A}^e_Y)}({\cal A}_Y^e)\subset D^b({\cal A}_Y^e)$ be the full subcategory consisting of complexes with cohomologies in $\mu({\cal A}^e_Y)$. Denote by $j^*_Y: {\cal A}^e_Y-mod \longrightarrow {\cal A}^e_{{\cal U}}-mod$ the left exact functor defined by $j^*_Y({\cal F})(U):={\cal F}(U\times U)$, $U\in {\cal U}$. Consider its derived functor $${\Bbb R} j^*_Y:D^b({\cal A}^e_Y)\longrightarrow D^b({\cal A}^e_{{\cal U}}).$$ \begin{thm} The functor $${\Bbb R} j^*_Y:D^b_{\mu({\cal A}^e_Y)}({\cal A}^e_Y)\longrightarrow D^b({\cal A}^e_{{\cal U}})$$ is fully faithful. 
Equivalently, for ${\cal M},{\cal N}\in \mu ({\cal A} _Y^e)$ the map $$j^*_Y:\operatorname{Ext} ^n_{{\cal A} ^e_Y}({\cal M} ,{\cal N})\rightarrow \operatorname{Ext} ^n_{{\cal A}^e_{{\cal U}}} (j^*_Y{\cal M},j^*_Y{\cal N})$$ is an isomorphism for all $n$. \end{thm} \begin{prop} The map $$j^*_X:H^n(X,{\cal M}_X)\rightarrow H^n({\cal U}, {\cal M}_{{\cal U}})$$ is an isomorphism for all $n$. \end{prop} Let us first formulate some immediate corollaries of the theorem and the proposition. \begin{cor} There exists a natural exact sequence $$\operatorname{Ext} ^1_{{\cal A} ^e_Y}(\tilde{{\cal A}}_Y,\tilde{{\cal M}}_Y)\to H^1(X,{\cal M}_X)\to exal({\cal A} _X,{\cal M}_X ) \to \operatorname{Ext} ^2_{{\cal A} ^e_Y} (\tilde{{\cal A}}_Y,\tilde{{\cal M}}_Y)\to H^2(X,{\cal M}_X).$$ In particular, if $X$ is affine then $exal({\cal A} _X,{\cal M}_X )=\operatorname{Ext}^2_{{\cal A} ^e_Y}(\tilde{{\cal A}}_Y,\tilde{{\cal M}}_Y)$. If ${\cal M}_X$ is a symmetric ${\cal A}_X$-bimodule, then we get a short exact sequence $$0\to exal({\cal A} _X,{\cal M} _X)\to \operatorname{Ext} ^2_{{\cal A}^e_Y}(\tilde{{\cal A}}_Y,\tilde{{\cal M}}_Y) \to H^2(X,{\cal M}_X)\to 0.$$ \end{cor} \begin{pf} Indeed, this follows from Lemma 5.1, Theorem 5.2, Proposition 5.3 and results of Section 3. \end{pf} Recall the following theorem of J.~Bernstein. \begin{thm}([Bo]) Let $Z$ be a quasicompact separated scheme, ${\cal C}_Z$ -- a quasicoherent sheaf of algebras on $Z$. Then the natural functor $$\theta:D^b(\mu({\cal C}_Z))\to D^b_{\mu({\cal C}_Z)}({\cal C}_Z)$$ is an equivalence of categories. \end{thm} \begin{cor} Assume that $X$ is affine. Then $$exal({\cal A}_X,{\cal M}_X)\simeq exal({\cal A}_X(X),{\cal M}_X(X)).$$ \end{cor} \begin{pf} Put ${\cal A}_X(X)=A$, ${\cal M}_X(X)=M$. We have $$exal(A,M)=\operatorname{Ext} ^2_{A\otimes A^o}(A,M).$$ By Serre's theorem $$\operatorname{Ext} ^2_{A\otimes A^o}(A,M)=\operatorname{Ext} ^2_{\mu({\cal A}^e_Y)}(\tilde{{\cal A}}_Y, \tilde{{\cal M}}_Y).$$ By Bernstein's theorem $$\operatorname{Ext} ^2_{\mu({\cal A}^e_Y)}(\tilde{{\cal A}}_Y, \tilde{{\cal M}}_Y)=\operatorname{Ext} ^2_{{\cal A}^e_Y}(\tilde{{\cal A}}_Y, \tilde{{\cal M}}_Y).$$ Finally, by Corollary 5.4 above $$\operatorname{Ext} ^2_{{\cal A}^e_Y}(\tilde{{\cal A}}_Y, \tilde{{\cal M}}_Y)=exal({\cal A}_X,{\cal M}_X).$$ \end{pf} \noindent{\it Question.} Under the assumptions of the last corollary let ${\cal B}$ be a sheaf of algebras on $X$ representing an element in $exal({\cal A}_X, {\cal M}_X)$. Is ${\cal B} ={\cal A}_X \oplus {\cal M}_X$ as a sheaf of $k$-vector spaces? \section{Proof of Theorem 5.2 and Proposition 5.3.} \noindent{\it Proof of Proposition 5.3.} Let $k_{{\cal U}}$ be the constant presheaf on ${\cal U}$ and $s_\bullet (k_{{\cal U}})\to k_{{\cal U}}$ be its categorical simplicial resolution (Section 3). It is a projective resolution of $k_{{\cal U}}$, which consists of direct sums of presheaves $i_{U!}k$. Hence $$H^i({\cal U},{\cal M}_{{\cal U}})=\operatorname{Ext} ^i(k_{{\cal U}},{\cal M}_U)=H^i\operatorname{Hom}^\bullet (s_{\bullet}(k_{{\cal U}}), {\cal M}_{{\cal U}}).$$ Consider the exact functor $(\cdot )^+$ from the category of presheaves on ${\cal U}$ to the category on sheaves on $X$. Then $k_{{\cal U}}^+=k_X$ -- the constant sheaf on $X$. The functor $(\cdot )^+$ preserves direct sums and $(i_{U!}k)^+=k_U$ -- the extension by zero of the constant sheaf on $U$. Since ${\cal M}_X$ is quasicoherent, for an affine open $U\subset X$ we have $H^i(U,{\cal M}_X)=0$ for all $i>0$. 
Thus $$H^i(X,{\cal M}_X)=\operatorname{Ext} ^i(k_X,{\cal M}_X)=H^i\operatorname{Hom} ^\bullet(s_\bullet(k_{{\cal U}})^+, {\cal M}_X).$$ It remains to notice that $$\operatorname{Hom} (k_U,{\cal M}_X)=\Gamma (U,{\cal M}_X)=\operatorname{Hom} (i_{U!}k,{\cal M}_{{\cal U}}).$$ This completes the proof of the proposition. \noindent{\it Proof of Theorem 5.2.} Let us formulate a general statement which will imply the theorem. Let $Z$ be a quasicompact separated scheme over $k$. Let $Aff(Z)$ be the category of affine open subsets of $Z$ and ${\cal W}\subset Aff(Z)$ be a full subcategory which is closed under intersections and constitutes a covering of $Z$. Let ${\cal A}_Z$ be a quasicoherent sheaf of algebras on $Z$. Denote by ${\cal A}_{{\cal W}}$ the corresponding presheaf of algebras on ${\cal W}$. Let $$j_Z^*:{\cal A}_Z-mod \longrightarrow {\cal A}_{{\cal W}}-mod$$ be the natural (left exact) restriction functor. \begin{prop} In the above notation the derived functor $${\Bbb R} j_Z^*:D^b_{\mu({\cal A}_Z)}({\cal A}_Z)\to D^b({\cal A}_{{\cal W}})$$ is fully faithful. \end{prop} \begin{pf} By Bernstein's theorem the natural functor $$\theta :D^b(\mu({\cal A}_Z))\to D^b_{\mu({\cal A}_Z)}({\cal A}_Z)$$ is fully faithful. So it suffices to prove that the composition ${\Bbb R} j_Z^*\cdot \theta$ is fully faithful. The functor $j^*_Z:\mu({\cal A}_Z) \to {\cal A}_{{\cal W}}-mod$ is exact. Let ${\cal M},{\cal N} \in \mu({\cal A}_Z)$. It suffices to prove that the map $$j^*_Z:\operatorname{Ext}^\bullet_{\mu({\cal A}_Z)} ({\cal M},{\cal N})\to \operatorname{Ext}^\bullet (j^*_Z{\cal M},j^*_Z{\cal N})$$ is an isomorphism. \noindent{\it Step 1.} Assume that $Z$ is affine and $Z\in {\cal W}$. Then by Serre's theorem $\mu({\cal A}_Z)\simeq {\cal A}_Z(Z)-mod$. Replacing ${\cal M}$ by a left free resolution we may assume that ${\cal M}={\cal A}_Z$. But then $$\operatorname{Ext} ^i({\cal A}_Z,{\cal N})=\operatorname{Ext} ^i({\cal A}_Z(Z),{\cal N}(Z))=\begin{cases} {\cal N}(Z), & \text{if $i=0$,}\\ 0, & \text{otherwise.} \end{cases}$$ On the other hand $j^*_Z{\cal A}_Z={\cal A}_{{\cal W}}$ is a projective object in ${\cal A}_{{\cal W}}-mod$ (Section 3) and $$\operatorname{Hom} ({\cal A}_{{\cal W}},j_Z^*{\cal N})=\operatorname{Hom} ({\cal A}_{{\cal W}}(Z),j^*_Z{\cal N}(Z))={\cal N}(Z).$$ So we are done. \noindent{\it Step 2. Reduction to the case when $Z$ is affine.} Let $i_U:U\hookrightarrow Z$ be the embedding of some $U\in {\cal W}$. Denote by ${\cal A}_U$ the restriction ${\cal A}_Z\vert _U$. We have two (exact) adjoint functors $i^*_U:\mu({\cal A}_Z)\to \mu({\cal A}_U)$, $i_{U*}: \mu({\cal A}_U)\to \mu ({\cal A}_Z)$. The functor $i_{U*}$ preserves injectives. Choose a finite covering $Z=\bigcup U_j$, $U_j\in {\cal W}$. Then the natural map $${\cal N} \to \bigoplus_ji_{U_j*}i^*_{U_j}{\cal N}$$ is a monomorphism. So we may assume that ${\cal N}=i_{U*}{\cal N}_U$ for some $U\in {\cal W}$ and ${\cal N}_U\in \mu({\cal A}_U)$. Then we have $$\operatorname{Ext} ^\bullet({\cal M} ,i_{U*}{\cal N}_{U})=\operatorname{Ext} ^\bullet(i^*_U{\cal M},{\cal N}_U).$$ We need a similar construction on the other end. Let $\tilde{i}_U:{\cal W}_U\hookrightarrow {\cal W}$ be the embedding of the full subcategory ${\cal W}_U=\{ V\in {\cal W}\vert V\subseteq U\}$. Let ${\cal A} _{{\cal W}_U}$ be the restriction of ${\cal A}_{{\cal W}}$ to ${\cal W}_U$.
We have the obvious functor $\tilde{i}_U^*:{\cal A}_{{\cal W}}-mod\longrightarrow {\cal A}_{{\cal W}_U}-mod$ and its right adjoint $\tilde{i}_{U*}$ defined by $$\tilde{i}_{U*}({\cal K})(V):={\cal K}(V\cap U).$$ Both $\tilde{i}_U^*$ and $\tilde{i}_{U*}$ are exact and $\tilde{i}_{U*}$ preserves injectives. For ${\cal K} \in {\cal A}_{{\cal W}_U}-mod$, ${\cal L} \in {\cal A}_{{\cal W}}-mod$ we have $$\operatorname{Ext}^\bullet (\tilde{i}^*_U{\cal L},{\cal K})=\operatorname{Ext} ^\bullet({\cal L},\tilde{i}_{U*} {\cal K}).$$ Note that the following diagrams commute $$\begin{array}{ccc} \mu({\cal A}_Z) & \stackrel{i^*_U}{\longrightarrow} & \mu({\cal A}_U) \\ j^*_Z\downarrow & & \downarrow j^*_U \\ {\cal A}_{{\cal W}}-mod & \stackrel{\tilde{i}^*_U}{\longrightarrow} & {\cal A}_{{\cal W}_U}-mod \end{array}$$ $$\begin{array}{ccc} \mu({\cal A}_Z) & \stackrel{i_{U*}}{\longleftarrow} & \mu({\cal A}_U) \\ j^*_Z\downarrow & & \downarrow j^*_U \\ {\cal A}_{{\cal W}}-mod & \stackrel{\tilde{i}_{U*}}{\longleftarrow} & {\cal A}_{{\cal W}_U}-mod \end{array}$$ (here $j_U^*$ is the obvious restriction functor). Hence the following diagram commutes as well $$\begin{array}{lcl} \operatorname{Ext}^\bullet({\cal M},{\cal N})= & \stackrel{j^*_Z}{\longrightarrow} & \operatorname{Ext}^\bullet(j^*_Z{\cal M},j_Z^*{\cal N}) =\\ \operatorname{Ext}^\bullet({\cal M},i_{U*}{\cal N}_U)= & & \operatorname{Ext}^\bullet (j^*_Z{\cal M}, \tilde{i}_{U*}j^*_U{\cal N}_U)=\\ \operatorname{Ext} ^\bullet(i^*_U{\cal M},{\cal N}_U)= & \stackrel{j^*_U}{\longrightarrow} & \operatorname{Ext} ^\bullet(j^*_Ui^*_U{\cal M},j^*_U{\cal N}_U). \end{array}$$ But $j^*_U$ is an isomorphism by Step 1 above. Hence $j^*_Z$ is also an isomorphism. \end{pf} \section{A spectral sequence} Let $X$ be a quasiprojective variety and $({\cal A}_X,{\cal M}_X)$ be an admissible pair. For ${\cal N}_1,{\cal N}_2\in \mu({\cal A}^e_Y)$ we will construct a spectral sequence which abuts to $\operatorname{Ext}^\bullet_{{\cal A}^e_Y}({\cal N}_1,{\cal N}_2)$. In particular we will get an insight into the group $\operatorname{Ext} ^2_{{\cal A}^e_Y}(\tilde{{\cal A}}_Y,\tilde{{\cal M}}_Y)$. \begin{lemma} Any object in $\mu ({\cal A}^e_Y)$ is a quotient of a locally free ${\cal A}^e_Y$-module. \end{lemma} \begin{pf} Let ${\cal K}\in \mu({\cal A}^e_Y)$. Consider ${\cal K}$ as a quasi-coherent ${\cal O}_Y$-module. As such it is a quotient of a locally free ${\cal O}_Y$-module $Q$ (we can take $Q=\oplus{\cal O}_Y(-j)$). Then the ${\cal A}^e_Y$-module ${\cal A}^e_Y\otimes _{{\cal O}_Y}Q$ is locally free and surjects onto ${\cal K}$. \end{pf} Let $P_\bullet \to {\cal N}_1$ be a resolution of ${\cal N}_1$ consisting of locally free ${\cal A}_Y^e$-modules. From the proof of the last lemma it follows that there exists an affine covering ${\cal V}$ of $Y$ such that for each $V\in {\cal V}$ and each $P_{-t}$ the restriction $P_{-t}\vert _V$ is a free ${\cal A}^e_V$-module. We may (and will) assume that each $V\in {\cal V}$ is of the form $U\times U$ for $U$ from an affine open covering ${\cal U}$ of $X$. Choose one such affine covering ${\cal V}$. Let $\check{C}_\bullet(P_\bullet)\rightarrow P_\bullet$ be the corresponding \v{C}ech resolution of $P_\bullet$. This is a double complex consisting of ${\cal A}_Y^e$-modules, which are extensions by zero from affine open subsets $V$ of free ${\cal A}^e_V$-modules.
Thus $$H^\bullet\operatorname{Hom}_{{\cal A}^e_Y}(Tot(\check{C}_\bullet(P_\bullet)),{\cal N}_2)= \operatorname{Ext}^\bullet_{{\cal A}^e_Y}({\cal N}_1,{\cal N}_2).$$ The natural filtration of the double complex $\check{C}_\bullet(P_\bullet)$ gives rise to a spectral sequence with $E_2$-term $$E_2^{p,q}=\check{H}^p({\cal V}, {\cal E} xt^q_{{\cal A}^e_Y}({\cal N}_1,{\cal N}_2)),$$ which abuts to $\operatorname{Ext} ^{p+q}_{{\cal A}^e_Y}({\cal N}_1,{\cal N}_2)$. In particular, in case ${\cal N}_1=\tilde{{\cal A}}_Y$, ${\cal N}_2=\tilde{{\cal M}}_Y$ this spectral sequence defines a filtration of the group $\operatorname{Ext} ^2_{{\cal A}^e_Y}(\tilde{{\cal A}}_Y,\tilde{{\cal M}}_Y)$. Namely, there are maps $$\alpha _1:\operatorname{Ext}^2_{{\cal A}_Y^e}(\tilde{{\cal A}}_Y,\tilde{{\cal M}}_Y) \rightarrow \check{H}^0({\cal V}, {\cal E} xt^2_{{\cal A}_Y^e}(\tilde{{\cal A}}_Y,\tilde{{\cal M}}_Y)),$$ $$\alpha _2:\ker(\alpha _1)\rightarrow \check{H}^1({\cal V},{\cal E} xt^1_{{\cal A}_Y^e} (\tilde{{\cal A}}_Y,\tilde{{\cal M}}_Y)),$$ $$\alpha _3:\ker(\alpha _2)\rightarrow \check{H}^2({\cal V},{\cal E} xt^0_{{\cal A}_Y^e} (\tilde{{\cal A}}_Y,\tilde{{\cal M}}_Y)).$$ Recall that for $V=U\times U\in {\cal V}$ by Bernstein's and Serre's theorems respectively we have $$\Gamma (V,{\cal E} xt^q_{{\cal A}_Y^e}(\tilde{{\cal A}}_Y,\tilde{{\cal M}}_Y)) =\Gamma (V,{\cal E} xt^q_{\mu({\cal A}_Y^e)}(\tilde{{\cal A}}_Y,\tilde{{\cal M}}_Y)) =\operatorname{Ext}^q_{{\cal A}_X(U)\otimes {\cal A}_X^o(U)}({\cal A}_X(U),{\cal M}_X(U)).$$ \subsection{Cohomological analysis of the group $exal({\cal A}_X,{\cal M}_X)$} Consider the exact sequence $$H^1(X,{\cal M}_X)\stackrel{\epsilon}{\rightarrow} exal({\cal A}_X,{\cal M}_X) \stackrel{\rho }{\rightarrow} \operatorname{Ext}^2_{{\cal A}_Y^e}(\tilde{{\cal A}}_Y, \tilde{{\cal M}}_Y).$$ Let us describe the morphisms $\epsilon$ and $\rho$ explicitly. Since ${\cal M}_X$ is quasi-coherent the cohomology group $H^1(X,{\cal M}_X)$ is isomorphic to the \v{C}ech cohomology $\check{H}^1({\cal U},{\cal M}_X)$. Given a 1-cocycle $\{m_{ij}\in {\cal M}_X(U_i\cap U_j)\vert U_i,U_j\in {\cal U}\}$ define an algebra extension $$0\to {\cal M}_X \to {\cal B} \to {\cal A}_X \to 0$$ as follows: on each $U\in {\cal U}$ the sheaf ${\cal B}\vert_U$ is the direct sum of the sheaves ${\cal M}_X\vert_U$ and ${\cal A}_X\vert_U$ with the multiplication $$(m,a)(m',a')=(ma'+am',aa').$$ That is, locally ${\cal B}$ is a split extension. Define the glueing algebra automorphisms $$\phi _{ij}:{\cal B}_{U_i\cap U_j}\stackrel{\sim}{\rightarrow} {\cal B}_{U_i\cap U_j},\quad \phi _{ij}(m,a)=(m+[a,m_{ij}],a).$$ This defines the map $\epsilon: H^1(X,{\cal M}_X)\to exal({\cal A}_X,{\cal M}_X)$. Now assume that an algebra extension ${\cal B}$ represents an element in $exal({\cal A}_X,{\cal M}_X)$. Consider $\rho ({\cal B})\in \operatorname{Ext}^2_{{\cal A}_Y^e}(\tilde{{\cal A}}_Y,\tilde{{\cal M}}_Y)$ and assume that $\alpha _1 (\rho ({\cal B}))=0$, i.e. locally ${\cal B}$ is a split extension.
Thus for $U\in {\cal U}$ we have $${\cal B}(U)={\cal M}_X(U)\oplus {\cal A}_X(U)$$ with the multiplication $$(m,a)(m',a')=(ma'+am',aa')$$ and with the glueing given by algebra automorphisms $$\phi _{ij}:{\cal B}(U_i\cap U_j)\stackrel{\sim}{\rightarrow}{\cal B}(U_i\cap U_j), \quad \phi _{ij}(m,a)=(m+\delta _{ij}(a),a),$$ where $\delta _{ij}:{\cal A}_X(U_i\cap U_j)\to {\cal M}_X (U_i\cap U_j)$ is a derivation. For an affine open $U\subset X$ the space $$\operatorname{Ext} ^1_{{\cal A}_X(U)\otimes {\cal A}_X^o(U)}({\cal A}_X(U),{\cal M}_X(U))$$ is the space of outer derivations ${\cal A}_X(U)\to {\cal M}_X(U)$. The collection $\{\delta _{ij}\}$ defines an element in $\check{H}^1({\cal V},{\cal E} xt^1_{{\cal A}_Y^e}(\tilde{{\cal A}}_Y,\tilde{{\cal M}}_Y))$, which is equal to $\alpha _2(\rho ({\cal B}))$. Assume now that $\alpha _2(\rho({\cal B}))=0$. Then there exist elements $\delta _i\in \operatorname{Ext} ^1_{{\cal A}_X(U_i) \otimes {\cal A}_X(U_i)^o} ({\cal A}_X(U_i),{\cal M}_X(U_i))$ such that $\delta _{ij}=\delta _i-\delta _j$. Changing the local trivializations of ${\cal B}$ by the derivations $\delta _i$'s we may assume that the $\delta _{ij}$'s are inner derivations. Choose $m_{ij}\in {\cal M}_X(U_i\cap U_j)$ so that $\delta_{ij}(a)=[a,m_{ij}].$ The collection $\{m_{ij}\}$ defines a 1-cochain in $\check{C}^1({\cal U},{\cal M}_X)$. Its coboundary is a 2-cocycle which consists of central elements $m_{ijk}\in {\cal M}_X(U_i\cap U_j\cap U_k)$. Thus it defines an element in $\check{H}^2({\cal V}, {\cal H} om_{{\cal A}_Y^e} (\tilde{{\cal A}}_Y,\tilde{{\cal M}}_Y))$. It is equal to $\alpha _3(\rho ({\cal B}))$. \section{Examples} Let $X$ be a smooth complex quasiprojective variety. Let $\delta :X\hookrightarrow Y=X\times X$ be the diagonal embedding, $\Delta=\delta (X)$ -- the diagonal, and $p_1,p_2:Y\to X$ be the two projections. \subsection{Deformation of the structure sheaf} Let ${\cal A}_X={\cal M}_X={\cal O}_X$. Then ${\cal A}^e_Y={\cal O}_Y$, $\tilde{{\cal A}}_Y=\delta _*{\cal O}_X$. Since the ${\cal O}_X$-bimodule ${\cal O}_X$ is symmetric we have the short exact sequence $$0\to \operatorname{def} ({\cal O}_X)\to \operatorname{Ext} ^2_{{\cal O}_Y}(\delta _*{\cal O}_X,\delta _*{\cal O}_X)\to H^2(X,{\cal O}_X)\to 0.$$ Assume that $X$ is projective. By the Hodge decomposition ([GS],[S]) $$\operatorname{Ext} ^2_{{\cal O}_Y}(\delta _*{\cal O}_X,\delta _*{\cal O}_X)=H^0(X,\wedge ^2T_X) \oplus H^1(X,T_X)\oplus H^2(X,{\cal O}_X).$$ The above short exact sequence identifies $\operatorname{def} ({\cal O}_X)$ with $H^0(X,\wedge ^2T_X) \oplus H^1(X,T_X)$. The summand $H^1(X,T_X)$ corresponds to the first order deformations of the variety $X$ by Kodaira-Spencer theory, i.e. to ``commutative'' deformations of ${\cal O}_X$, while the summand $H^0(X,\wedge ^2T_X)$ corresponds to ``noncommutative'' deformations. \subsection{Deformations of the sheaf of differential operators} Let ${\cal A}_X={\cal M}_X=D_X$ -- the sheaf of (algebraic) differential operators on $X$. Let $\omega _X$ be the dualizing sheaf on $X$.
Then $$D_X^o=\omega _X\otimes _{{\cal O}_X}D_X\otimes _{{\cal O}_X}\omega^{-1}_X.$$ We have $D_Y=p^*_1D_X\otimes _{{\cal O}_Y}p^*_2D_X$, and hence $$D_Y^e=p_1^*\omega _X\otimes _{{\cal O}_Y}D_Y \otimes _{{\cal O}_Y}p_2^*\omega^{-1}_X.$$ The functor $\tau :M\mapsto p_1^*\omega _X\otimes _{{\cal O}_Y}M$ is an equivalence of categories $$\tau :D_Y-mod \longrightarrow D_Y^e-mod.$$ Denote by $\delta _+:D_X-mod \longrightarrow D_Y-mod$ the functor of direct image ([Bo]). Then $$\tilde{D}_Y=\tau (\delta _+{\cal O}_X).$$ Let $X^{an}$ denote the variety $X$ with the classical topology. \begin{prop} There is a natural isomorphism $$\operatorname{Ext}^\bullet_{D^e_Y}(\tilde{D}_Y,\tilde{D}_Y)\simeq H^\bullet(X^{an},{\Bbb C}).$$ \end{prop} \begin{pf} By the above remarks $$\operatorname{Ext}^\bullet_{D^e_Y}(\tilde{D}_Y,\tilde{D}_Y)= \operatorname{Ext}^\bullet_{D_Y}(\delta _+{\cal O}_X,\delta _+{\cal O}_X).$$ Let $D_{\Delta}^b(D_Y)$ be the full subcategory of $D^b(D_Y)$ consisting of complexes with cohomologies supported on $\Delta$. By Kashiwara's theorem the direct image functor $$\delta _+:D^b(D_X)\longrightarrow D^b_{\Delta}(D_Y)$$ is an equivalence of categories (see [Bo]). Thus, in particular, $$\operatorname{Ext}^\bullet _{D_X}({\cal O}_X,{\cal O}_X)\simeq \operatorname{Ext}^\bullet _{D_Y}(\delta _+{\cal O}_X,\delta _+{\cal O}_X).$$ On the other hand by (a special case of) the Riemann-Hilbert correspondence $$\operatorname{Ext}^\bullet _{D_X}({\cal O}_X,{\cal O}_X)\simeq H^\bullet(X^{an},{\Bbb C}).$$ \end{pf} \begin{cor} Let $X$ be a smooth complex quasi-projective variety. Then we have an exact sequence $$H^1(X^{an},{\Bbb C})\to H^1(X,D_X)\to \operatorname{def} (D_X)\to H^2(X^{an},{\Bbb C}) \to H^2(X,D_X).$$ If $X$ is $D$-affine (for example $X$ is affine) then $$\operatorname{def} (D_X)=H^2(X^{an},{\Bbb C}).$$ \end{cor} \begin{pf} The first part follows immediately from Proposition 8.1 and Corollary 5.4. If $X$ is $D$-affine, then $H^i(X,D_X)=0$ for $i>0$. An affine variety is $D$-affine since $D_X$ is a quasicoherent sheaf of algebras. This implies the last assertion. \end{pf} \begin{example} Let $X={\Bbb C}^n$. Then $\operatorname{def} (D_X)=H^2(X^{an},{\Bbb C})=0$. Since $X$ is affine, $\operatorname{def} (D_X)=\operatorname{def} (D_X(X))$, where $D_X(X)$ is the Weyl algebra. It is well known that the Hochschild cohomology of the Weyl algebra is trivial. \end{example} \section{Deformation of differential operators} \subsection{Induced deformations of differential operators} Let $S$ be a commutative ring and $C$ be an $S$-algebra with a finite filtration $$0=C_{-1}\subset C_0\subset C_1\subset \dots \subset C_n=C,$$ such that the associated graded $grC$ is commutative. Then it makes sense to define the ring $D_S(C)=D(C)$ of ($S$-linear) differential operators on $C$ in the usual way. More generally, given two left $C$-modules $M$, $N$ define the space of differential operators of order $\leq m$ from $M$ to $N$ as follows: $$D^m(M,N)=\{ d\in \operatorname{Hom} _S(M,N)\vert [f_m,\dots ,[f_1,[f_0,d]]\dots ]=0 \ \text{for all $f_0,\dots ,f_m\in C$}\}.$$ Then $D(M,N):=\cup_mD^m(M,N)$ and in particular we obtain a filtered (by the order of differential operator) ring $D(C)=D(C,C)$. Note that $C\subset D(C)$, acting by left multiplication. Sometimes we will be more explicit and will write $D({}_CM,{}_CN)$ for $D(M,N)$.
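For orientation, here is the simplest instance of the above definition; it is stated only as an illustration and under the extra assumption that $k$ has characteristic zero. Take $S=k$, $C=k[x]$ with the trivial filtration $C_0=C$, and $M=N=C$. For $d=\frac{d}{dx}$ and $f,g\in C$ we have $$[f,d](g)=fg'-(fg)'=-f'g,$$ so $[f,d]$ is the operator of multiplication by $-f'$ and $[f_1,[f_0,d]]=0$ for all $f_0,f_1\in C$; hence $d\in D^1(C)$. In this case $D^m(C)$ is the free $C$-module with basis $1,\frac{d}{dx},\dots ,\frac{d^m}{dx^m}$, and $D(C)$ is the first Weyl algebra $k\langle x,\frac{d}{dx}\rangle$ (cf. the example of the Weyl algebra in the previous section).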
If the algebra $C$ is commutative then each $k$-subspace $D^m(M,N)\subset D(M,N)$ is also a (left and right) $C$-submodule. \begin{lemma} Denote by $S_n$ the ring $S[t]/(t^{n+1})$. Then canonically $$D_{S_n}(C\otimes _SS_n)\simeq D_S(C)\otimes _SS_n.$$ In particular, for a commutative $k$-algebra $A$ we have $$D_{k_n}(A\otimes _kk_n)\simeq D_k(A)\otimes _kk_n.$$ \end{lemma} \begin{pf} Indeed, every $f\in \operatorname{End} _{S_n}(C\otimes _SS_n)= \operatorname{Hom} _S(C,C\otimes _SS_n)$ can be uniquely decomposed as $$f=\sum_{i=0}^{n}f_i\otimes t^i,$$ where $f_i\in \operatorname{End} _S(C)$. Now the inclusion $f\in D^m_{S_n}(C\otimes _SS_n)$ is equivalent to the inclusions $f_i\in D^m_S(C)$ for all $i$. Whence the assertion of the lemma. \end{pf} For the rest of this section we will consider only $k[t]$-algebras, and all differential operators will be $k[t]$-linear, so we will omit the corresponding subscript. We denote as before $k_n=k[t]/(t^{n+1})$. Let $A$ be a commutative $k$-algebra and let $B$ be a $k_n$-algebra with an isomorphism $grB\simeq A\otimes _kk_n$, i.e. $B$ defines an element in $\operatorname{def} ^n(A)$. Consider the inclusion of rings $D(B)\subset \operatorname{End}_{k_n}(B)$. Both these rings are filtered by the powers of $t$, hence we obtain a natural homomorphism (of degree $0$) of graded algebras $$\alpha :grD(B)\to gr\operatorname{End}_{k_n}(B).$$ Note that $\alpha $ may not be injective. On the other hand we have a natural homomorphism of graded algebras $$\delta :gr\operatorname{End} _{k_n}(B)\to \operatorname{End} _{k_n}(grB),$$ which is, in fact, an isomorphism. We denote the composition of the two maps by $\gamma :grD(B)\to \operatorname{End} _{k_n}(grB)$. \begin{lemma} i) The homomorphism $\gamma $ maps $grD(B)$ to $D(grB)$. ii) The following are equivalent: a) the map $\gamma :grD(B)\to D(grB)$ is injective; b) the map $\gamma :grD(B)\to D(grB)$ is surjective. \end{lemma} \begin{pf} i). Since everything is $k_n$-linear, it suffices to prove that $\gamma (D(B)/tD(B))\subset D(B/tB)$. Let $d\in D^m(B)$ and denote by $\bar{d}\in D(B)/tD(B)$ its residue. Let $b_0,\dots ,b_m\in B$ with the corresponding residues $\bar{b}_0,\dots ,\bar{b}_m\in B/tB$. We have $$[b_0,\dots ,[b_m,d]\dots ]=0,$$ hence $$[\bar{b}_0,\dots ,[\bar{b}_m,\gamma (\bar{d})]\dots ]=0.$$ Thus $\gamma (\bar{d})\in D^m(B/tB)$. ii). The injectivity of $\gamma :grD(B)\to D(grB)$ is equivalent to the injectivity of the natural map $\alpha :grD(B)\to gr\operatorname{End} _{k_n}(B)$. Consider the subspace $D(B/tB)\simeq D(B,t^nB)\subset D(B,B)$. The injectivity of $\alpha $ is equivalent to the assertion that every $d\in D(B,t^nB)$ is equal to $t^nd_1$ for some $d_1\in D(B)$. But this last assertion is equivalent to the surjectivity of the map $D(B)/tD(B)\to D(B/tB)$ and hence to the surjectivity of $\gamma :grD(B)\to D(grB)$. \end{pf} \begin{defn} Assume that the map $\gamma :grD(B)\to D(grB)$ is an isomorphism. Then by Lemma 7.1 the algebra $D(B)$ defines an element in $\operatorname{def} ^n(D(A))$. We call $D(B)$ the {\it induced} (by $B$) deformation of $D(A)$. We also say that $B$ {\it induces} a deformation of $D(A)$. \end{defn} \begin{example} It follows from Lemma 7.1 that the trivial deformation of $A$ induces a deformation of $D(A)$, which is also trivial. \end{example} \begin{remark} It would be interesting to see which deformations of $A$ induce deformations of $D(A)$.
\end{remark} \subsection{Two lemmas about induced deformations} Assume that $A$ and $B$ are as above and that $B$ induces a deformation of $D(A)$. Denote by $\tau :D(B)\to D(A)$ the residue map. Moreover, assume that $D(B)$ is a split extension of $D(A)$ with a splitting homomorphism (of $k$-algebras) $s:D(A)\to D(B)$. Since $A\subset D(A)$, the map $s$ defines, in particular, a structure of a left $A\otimes _kk_n$-module on $B$. The next two lemmas will be used in what follows. \begin{lemma} i) The residue map $\beta :B\to A$ is a homomorphism of left $A$-modules. ii) $B$ is a free $A\otimes _kk_n$-module of rank 1. \end{lemma} \begin{pf} i). Given $a\in A$, $b\in B$ we need to show that $\beta (s(a)b)=a\beta (b)$. This follows from the identity $\tau s(a)=a$ and the commutativity of the diagram $$\begin{array}{ccc} D(B)\times B & \stackrel{(\tau ,\beta )}{\longrightarrow} & D(A)\times A \\ \downarrow & & \downarrow \\ B & \stackrel{\beta }{\longrightarrow} & A, \end{array}$$ where the vertical arrows are the action morphisms. ii) The $A$-module map $\beta :B\to A$ has a splitting $\alpha :A\to B$, which induces an isomorphism $\alpha \otimes 1:A\otimes _kk_n\to B$ of left $A\otimes _kk_n$-modules. \end{pf} \begin{lemma} Assume that the $k$-algebra $A$ is finitely generated. Consider $B$ with the structure of a left $A\otimes _kk_n$-module defined above. Then $D({}_BB)=D({}_{A\otimes _kk_n}B)$ as subrings of $\operatorname{End} _{k_n}(B)$. \end{lemma} \begin{pf} Denote $\tilde{A}=A\otimes _kk_n$. Since $D(B)$ is a deformation of $D(A)$, the graded ring $grD(B)$ coincides with the subring $D(grB)\subset \operatorname{End} _{k_n}(grB)$. The isomorphism of $\tilde{A}$-modules ${}_{\tilde{A}}B\simeq \tilde{A}$ defines an isomorphism of rings $$D({}_{\tilde{A}}B)\simeq D(\tilde{A})=D(grB).$$ Hence, in particular, $grD({}_{\tilde{A}}B)$ is a graded subring of $\operatorname{End} _{k_n}(grB)$ and as such coincides with $D(grB)$. We conclude that the graded subrings $grD(B)$ and $grD({}_{\tilde{A}}B)$ of $\operatorname{End} _{k_n}(grB)$ coincide $(=D(grB))$. So it suffices to prove the inclusion $D({}_BB)\subset D({}_{\tilde{A}}B)$. We will prove by descending induction on $p$ that $$D({}_BB,{}_Bt^pB)\subset D({}_{\tilde{A}}B,{}_{\tilde{A}}t^pB).$$ It follows from Lemma 7.6,i) that the $A$- and $B$-module structures on $B$ coincide modulo $t$. More precisely, if $b\in B$ and $a=\beta (b)\in A$, then $$s(a)-b:t^\bullet B\to t^{\bullet +1}B.$$ This implies that $$D({}_BB,{}_Bt^nB)= D({}_{\tilde{A}}B,{}_{\tilde{A}}t^nB).$$ Suppose that we have proved the inclusion $D({}_BB,{}_Bt^{p+1}B) \subset D({}_{\tilde{A}}B,{}_{\tilde{A}}t^{p+1}B).$ Let $a_1,\dots ,a_l$ be a set of generators of the algebra $A$. Choose $d\in D^m({}_BB,{}_Bt^pB)$. Then the operators $$d_{i_0\dots i_m}:=[s(a_{i_0}),\dots ,[s(a_{i_m}),d]\dots ],\quad i_j\in \{ 1,\dots ,l\}$$ map $B$ to $t^{p+1}B$. Since $s(a_i)\in D({}_BB)$, also $$d_{i_0\dots i_m}\in D({}_BB,{}_Bt^{p+1}B)\subset D({}_{\tilde{A}}B,{}_{\tilde{A}}t^{p+1}B).$$ Thus there exists $N$ such that every $d_{i_0\dots i_m}\in D^N({}_{\tilde{A}}B,{}_{\tilde{A}}t^{p+1}B).$ Since $A$ is commutative this implies that for any $c_1,\dots ,c_m\in A$ $$[s(c_1),\dots ,[s(c_m),d]\dots ]\in D^N({}_{\tilde{A}}B,{}_{\tilde{A}}t^{p+1}B).$$ But then $d\in D^{N+m}({}_{\tilde{A}}B,{}_{\tilde{A}}t^pB).$ Hence $D({}_BB,{}_Bt^pB) \subset D({}_{\tilde{A}}B,{}_{\tilde{A}}t^pB)$, which completes the induction step and proves the lemma.
\end{pf} \subsection{Sheafification} Let $Y$ be a scheme over $k$, ${\cal B}$ -- a sheaf of $k_n$-algebras on $Y$ with an isomorphism of sheaves of $k_n$-algebras $gr{\cal B}\simeq {\cal O}_Y \otimes _kk_n$, i.e. ${\cal B}$ defines an element in $\operatorname{def} ^n({\cal O}_Y)$. Then using the commutator definition as in 9.1 above we define the sheaf $D({\cal B})$ of $k_n$-linear differential operators on ${\cal B}$. Thus, in particular, $D({\cal B})$ is a subsheaf of ${\cal E} nd_{k_n}({\cal B})$. In this section all the differential operators will be $k[t]$-linear, so we omit the corresponding subscript. As in the ring case we obtain a natural homomorphism of sheaves of graded $k_n$-algebras (which, in general, is neither injective nor surjective) $$\tilde{\gamma}:grD({\cal B})\to {\cal E} nd_{k_n}(gr{\cal B}).$$ The following two lemmas are the sheaf versions of Lemmas 9.1 and 9.2, which will be used later. The proofs are the same. \begin{lemma} $D({\cal O}_Y\otimes _kk_n)=D({\cal O}_Y)\otimes _kk_n \ \ (=D_Y \otimes _kk_n).$ \end{lemma} \begin{lemma} The homomorphism $ \tilde{\gamma}$ maps $grD({\cal B})$ to $D(gr{\cal B}).$ \end{lemma} \begin{defn} Assume that $\tilde{\gamma}:grD({\cal B})\to D(gr{\cal B})$ is an isomorphism. Then by Lemma 9.8 the sheaf $D({\cal B})$ defines an element in $\operatorname{def} ^n(D_Y)$. We call $D({\cal B})$ the {\it induced} (by ${\cal B}$) deformation of $D_Y$ and say that ${\cal B}$ {\it induces} this deformation. \end{defn} \subsection{Deformations of differential operators on a flag variety} \begin{thm} Let $G$ be a complex simple, simply connected linear algebraic group, $B\subset G$ -- a Borel subgroup, $X=G/B$ -- the corresponding flag variety. Then any induced deformation of $D_X$ is trivial. \end{thm} \begin{remark} Since $H^1(X,T_X)=0$ (the variety $X$ is rigid) the only deformations of ${\cal O}_X$ are ``purely noncommutative'', i.e. they correspond to elements of $H^0(X,\wedge ^2T_X)$. In this respect one may ask the following question: Suppose $Y$ is a smooth projective variety, ${\cal B}$ -- a purely noncommutative deformation of ${\cal O}_Y$. Assume that ${\cal B}$ induces a deformation $D({\cal B})$ of $D_Y$. Is $D({\cal B})$ a trivial deformation of $D_Y$? \end{remark} \begin{pf} Assume that a sheaf of $k_n$-algebras ${\cal B}$, which represents an element in $\operatorname{def} ^n({\cal O}_X)$, induces a deformation (of order $n$) $D({\cal B})$ of $D_X$. Then for any $m>0$ the sheaf ${\cal B}/t^{m+1}{\cal B}$ induces a deformation of order $m$, $D({\cal B}/t^{m+1}{\cal B})$, of $D_X$. By induction we may assume that $D({\cal B}/t^n{\cal B})\simeq D_X\otimes _kk_{n-1}$, i.e. $D({\cal B})$ represents an element in $\operatorname{def} _0^n(D_X)$. (Recall that $\operatorname{def} ^n_0(D_X)\simeq \operatorname{def} (D_X)$.) We need to prove that $D({\cal B})$ is the trivial element in $\operatorname{def} ^n(D_X)$. For simplicity of notation we assume that $n=1$ (the proof in the general case is the same). It is well known that $X$ has an open covering $X=\cup_{w\in W}U_w$, where $W$ is the Weyl group of $G$ and $U_w\simeq {\Bbb C} ^d$, $d=\dim (X)$. Denote the covering ${\cal U} =\{U_w\}$. It follows from Example 8.3 that $D({\cal B})_{U_w}$ is the trivial deformation of $D_{U_w}$ for each $w\in W$. The variety $X$ is $D$-affine ([BB]), thus $\operatorname{def} (D_X) \simeq H^2(X^{an},{\Bbb C} )$ (Corollary 8.2). But $H^2(X^{an},{\Bbb C})= H^{1,1}(X^{an},{\Bbb C} )=Pic(X)\otimes _{{\Bbb Z}}{\Bbb C}$ (since $H^{2,0}(X)=H^{0,2}(X)=0$ and $Pic(X)\simeq H^2(X^{an},{\Bbb Z})$ for the flag variety).
Let us describe the isomorphism $\sigma :Pic(X)\otimes {\Bbb C} \to \operatorname{def} (D_X)$ directly. Let ${\cal L}$ be a line bundle on $X$. Then ${\cal L}\vert _{U_w}\simeq {\cal O}_{U_w}$ for all $w\in W$. Hence ${\cal L}$ is defined by a \v{C}ech 1-cocycle $\{ f_{ij}\in {\cal O} ^*_{U_{w_i}\cap U_{w_j}}\}$. Define derivations $$\delta _{ij}:D_{U_{w_i}\cap U_{w_j}}\to D_{U_{w_i}\cap U_{w_j}}$$ by the formula $$\delta _{ij}(d)=[d,\log(f_{ij})].$$ Note that though $\log(f_{ij})$ is a multivalued analytic function, $[\cdot , \log(f_{ij})]$ is a well defined derivation of the ring of differential operators and it preserves the algebraic operators (for instance, for a vector field $\xi$ the bracket $[\xi ,\log(f_{ij})]$ is multiplication by $\xi(f_{ij})/f_{ij}$, which is a regular algebraic function). So $\delta _{ij}$ is well defined. Using these derivations we define the glueing over $U_{w_i} \cap U_{w_j}$ of the sheaves $D_{U_{w_i}}\otimes {\Bbb C} [t]/(t^2)$ and $D_{U_{w_j}}\otimes {\Bbb C} [t]/(t^2)$. We denote the corresponding global sheaf by $\sigma ({\cal L})$. The map $\sigma :Pic(X)\to \operatorname{def} (D_X)$ is a group homomorphism which extends to an isomorphism $$\sigma :Pic(X)\otimes {\Bbb C} \stackrel{\sim}{\rightarrow} \operatorname{def} (D_X).$$ Let us get back to $D({\cal B})\in \operatorname{def} (D_X)$. By the above isomorphism, $D({\cal B})=\sigma ({\cal L})$ for some ${\cal L} \in Pic(X)\otimes {\Bbb C}$. We have $D({\cal B})_{U_w}= D_{U_w}\otimes {\Bbb C} [t]/(t^2)$, so that ${\cal B}_{U_w}$ has a structure of a $D_{U_w}$-module and, in particular, of an ${\cal O} _{U_w}$-module. By (a sheaf version of) Lemma 9.6,ii) ${\cal B} _{U_w}\simeq {\cal O} _{U_w}\otimes {\Bbb C} [t]/(t^2)$ as an ${\cal O} _{U_w}$-module. Since the glueing of the different $D({\cal B})_{U_w}$'s is by means of the derivations $[\cdot ,\log(f_{ij})]$, it follows that the local ${\cal O} _{U_w}$-module structures on ${\cal B}$ agree on the intersections $U_{w_i}\cap U_{w_j}$. Hence ${\cal B}$ is an ${\cal O}_X$-module, which fits in the short exact sequence of ${\cal O}_X$-modules $$0\to {\cal O}_X \to {\cal B} \to {\cal O}_X \to 0.$$ Since $\operatorname{Ext} ^1_{{\cal O}_X}({\cal O}_X,{\cal O}_X)=0$, ${\cal B}={\cal O}_X\otimes {\Bbb C} [t]/(t^2)$. Thus $D({}_{{\cal O}_X}{\cal B})=D_X\otimes {\Bbb C} [t]/(t^2)$. But by (a sheaf version of) Lemma 9.7 $D({}_{{\cal O}_X}{\cal B} )=D({}_{{\cal B}}{\cal B})\ (=D({\cal B}))$, which proves the theorem. \end{pf} \end{document}
\begin{document} \title{A new and simple proof of the false centre theorem} \section{Introduction and preliminaries} We say that a set $A\subset \mathbb R^n$ is \emph{symmetric} if and only if there is a translated copy $A'$ of $A$ such that $A'=-A'$. In this case, if $A'=A-x_0$, we say that $x_0$ is the center of symmetry of $A$. By convention, the empty set $\emptyset$ is symmetric. The purpose of this paper is to give a new and simple proof of the following theorem: \begin{teo}[False Centre Theorem]\label{Main} Let $K$ be a convex body in euclidean $3$-space and let $p$ be a point of $\mathbb R^3$. Suppose that for every plane $H$ through $p$, the section $H\cap K$ is symmetric. Then either $p$ is a centre of symmetry of $K$ or $K$ is an ellipsoid. \end{teo} For the proof we will use the following two known theorems. \begin{teo}\label{teoss} Let $K$ be a convex body in euclidean $3$-space. Suppose that for every plane $H$, the section $H\cap K$ is symmetric. Then $K$ is an ellipsoid. \end{teo} \begin{teo} \label{teorogers} Let $K$ be a convex body in euclidean $3$-space and let $p$ be a point in $\mathbb R^3$. Suppose that for every plane $H$ through $p$, the section $H\cap K$ is symmetric. Then $K$ is symmetric. \end{teo} Theorem \ref{Main} was first proved in all its generality by D. G. Larman \cite{L}. Theorem \ref{teorogers} was proved by C. A. Rogers \cite{R} when $p\in$ int $K$, and by G. R. Burton in general (Theorem 2 of \cite{B1}). Theorem \ref{teoss} was first proved by H. Brunn \cite{Bru} under a regularity hypothesis and in general by G. R. Burton \cite{B2} (see (3.3) and (3.6) of Petty's survey \cite{P}). For more about characterizations of ellipsoids see \cite{So} and Section 2.12 of \cite{MMO}. We need some notation. Let $K$ be a convex body in euclidean $3$-space $\mathbb R^3$, let $p\in \mathbb R^3$ and let $L$ be a directed line. We denote by $p_L$ the directed chord $(p+L)\cap K$ of $K$, by $|p_L|$ the length of the chord $p_L$ and by $\frac{p}{L}\in \mathbb R \cup \{\infty\}$ the ratio in which the point $p$ divides the directed interval $p_L$. That is, if $p_L=[a,b]$, then $\frac{p}{L}=\frac{pa}{bp}$, where $pa$ and $bp$ denote the signed lengths of the directed chords $[p,a]$ and $[b,p]$ in the directed line $p+L$, and by convention, if $p_L=[p,p]$, then $\frac{p}{L}=1$. If $p_L=(p+L)\cap K=\emptyset$, then by convention $|p_L|=-1$ and $\frac{p}{L}=-1.$ Suppose now $B$ is a convex figure in the plane and let $p$ be a point of $\mathbb R^2.$ If $B$ is symmetric with center the origin, $L$ is a directed line through the origin and $q=-p$, then: \begin{enumerate} \item $|p_L|= |q_L|$, and \item $\frac{q}{L}\frac{p}{L}=1$, \end{enumerate} \noindent where by convention $0\cdot\infty=\infty\cdot 0=1$. Conversely, if $B$ is a convex figure and $p,q\in \mathbb R^2$ are two points for which $(1)$ and $(2)$ hold for every directed line $L$ through the origin, then $B$ is symmetric with center at the midpoint of $p$ and $q$. Essentially, this is so because, if $p=-q$ and $p_L=[a,b]$, then $q_L=[-b,-a]$. \section{The proof of Theorem \ref{Main}} Let $K$ be a convex body in euclidean $3$-space and let $p$ be a point of $\mathbb R^3$. Suppose that for every plane through $p$, the corresponding section is symmetric. By Theorem \ref{teorogers}, we may assume that $K$ is symmetric with the center at the origin. Suppose $p$ is not the origin $0$. We shall prove that $K$ is an ellipsoid.
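In the proof we shall repeatedly use the following consequence of the central symmetry of $K$; it is nothing but properties $(1)$ and $(2)$ applied to the planar sections of $K$ through the origin. For every point $x\in \mathbb R^3$ and every directed line $L$ through the origin, $$|x_L|=|(-x)_L| \qquad \text{and}\qquad \frac{x}{L}\,\frac{-x}{L}=1,$$ because if $x_L=[a,b]$, then $(-x)_L=[-b,-a]$.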
Let $H$ be a plane through the origin that does not contain the point $p$. By hypothesis, the section $(p+H)\cap K$ is symmetric. Suppose $v\not=p$ is the center of $(p+H)\cap K$, and let $w=tv$, for some $t\in \mathbb R$. We shall prove first that $(w+H)\cap K$ is symmetric with the center at $w$. For that purpose, it will be enough to prove that for every line $L\subset H$ through the origin: \begin{enumerate} \item $|(p-v+w)_L|= |(v-p+w)_L|$, and \item $\frac{v-p+w}{L}\frac{p-v+w}{L}=1.$ \end{enumerate} Note that the points $p-v+w$ and $v-p+w$ lie in $(w+H)\setminus \partial K$ and, if $L\subset H$, then $(p-v+w)_L$ and $(v-p+w)_L$ are chords of $(w+H)\cap K$. Let $\Gamma$ be the plane through the origin generated by $L$ and $v$. Hence, by hypothesis, $(p+\Gamma)\cap K$ is symmetric. Suppose first that $K$ is strictly convex. In order to prove $(1)$ and $(2)$, we shall prove first: \begin{itemize} \item[a)] $|p_L|=|(p-2v)_L|,$ \item[b)] the center of $(p+\Gamma)\cap K$ lies in $(p-v)_L$, and \item[c)] $\frac{p}{L}=\frac{p-2v}{L}$. \end{itemize} Since $(p+H)\cap K$ is a symmetric section with center at $v$, we have $|p_L|=|(2v-p)_L|$ and $\frac{p}{L}\frac{2v-p}{L}=1$. By the symmetry of $K$, $|(p-2v)_L|=|(2v-p)_L|$ and $\frac{p-2v}{L}\frac{2v-p}{L}=1$. Consequently, $|p_L|=|(p-2v)_L|$. So both chords, $p_L$ and $(p-2v)_L$, of the symmetric section $(p+\Gamma)\cap K$ have the same length. By the strict convexity of $K$, the parallel mid chord contains the center; that is, the center of $(p+\Gamma)\cap K$ lies in $(p-v)_L$. Furthermore, $\frac{p}{L}=\frac{p-2v}{L}$. This proves a), b) and c). Let us prove now that a), b) and c) imply $(1)$ and $(2)$. The parallel chords $(p-v-w)_L$ and $(p-v+w)_L$ of the symmetric section $(p+\Gamma)\cap K$ have $(p-v)_L$ as a mid chord. The fact that the center of $(p+\Gamma)\cap K$ lies in $(p-v)_L$ implies that $|(p-v+w)_L|= |(p-v-w)_L|$. Since $K$ is symmetric with center at the origin, $(1)$ holds, that is, $|(p-v+w)_L|= |(v-p+w)_L|$. Since $\frac{p}{L}=\frac{p-2v}{L}$, we have $(p-2v)_L+2v=p_L$. By the symmetry of $(p+\Gamma)\cap K$, $(p-v-w)_L+2w=(p-v+w)_L$. Hence $\frac{p-v-w}{L}=\frac{p-v+w}{L}$. On the other hand, by the symmetry of $K$, $\frac{p-v-w}{L}\frac{v-p+w}{L}=1$. Consequently, $\frac{v-p+w}{L}\frac{p-v+w}{L}=1$, thus proving $(1)$ and $(2)$ and hence that $(w+H)\cap K$ is symmetric with the center at $w$. This proves that every section of $K$ parallel to $H$ is symmetric. Let us prove now that the collection of planes $H$ through the origin such that the center of the section $(p+H)\cap K$ is not $p$ is dense. Let $\Omega$ be the collection of planes $H$ through the origin such that the center of the section $(p+H)\cap K$ is $p$ and let $H\in$ int $\Omega$. If this is the case, by the symmetry of $K$, the center of the section $(-p+H)\cap K$ is $-p$ and therefore $\big((-p+H)\cap K\big)+2p = (p+H)\cap K$. Since the same holds for every section sufficiently close, we conclude that $(H\cap \partial K)+\{tp\in \mathbb R^3\mid|t|\leq 1\}\subset \partial K$, contradicting the strict convexity assumption. Consequently, the collection of symmetric sections of $K$ is dense and, since the limit of a sequence of symmetric sections is a symmetric section, every section of $K$ is symmetric. By Brunn's Theorem \ref{teoss}, $K$ is an ellipsoid.
In the non-strictly convex case, our arguments for the case in which the center of $(p+H)\cap K$ is not $p$ only show that $(w+H)\cap K$ is symmetric when $w=tv$ for $|t|<1$, thus proving that every section of $K$ sufficiently close to the origin and parallel to $H$ is symmetric and hence that $H\cap\partial K$ is contained in a shadow boundary. On the other hand, if $H\in$ int $\Omega$ and $(H\cap \partial K)+\{tp\in \mathbb R^3\mid|t|\leq 1\}\subset \partial K$, then clearly $H\cap\partial K$ is contained in a shadow boundary. Consequently, by Blaschke's Theorem 2.12.8 of \cite{MMO}, $K$ is an ellipsoid. \qed The version of Theorem \ref{Main} for dimension $n\geq3$ and any codimension less than $n-1$ is also true. The proof follows from our Theorem \ref{Main}, using standard arguments in the literature. \noindent{\bf Acknowledgments.} L. Montejano acknowledges support from CONACyT under project 166306 and from PAPIIT-UNAM under project IN112614. E. Morales-Amaya acknowledges support from CONACyT, SNI 21120. \end{document}